Automated SAGE-CBP raising session via OllamaIRP
Machine: CBP (Desktop RTX 2060 SUPER, WSL2)
Model: TinyLlama 1.1B
Phase: creating
AI-Instance: OllamaIRP (automated)
Human-Supervised: no
tinyllama was a historical default carried along from early prototyping. Its capacity is too small for meaningful cognition; per the broader fleet plan, CBP should run gemma3:4b (matches Nomad's spec; same model family as the Gemma-4 strategy).

Changes:
- sage/gateway/machine_config.py:
  - Default: 'tinyllama:latest' → 'gemma3:4b'
  - Device: 'cpu' → 'cuda' (2060S has 8GB VRAM; gemma3:4b is 3.3GB)
  - Comment table updated
- sage/instances/resolver.py: _DEFAULT_MODELS['cbp'] updated
- sage/raising/scripts/dream_consolidation.py: example path updated

Verified: gemma3:4b loads on the 2060S (ollama evicts other cached models to fit); daemon started cleanly with 'active model: gemma3:4b'; 208 cycles in, wake/rest/dream cycling, 12 LLMs in pool. Shadow capture still active (SAGE_ROUTER_SHADOW=1 honored).

Note: the instance dir auto-resolves to cbp-gemma3-4b via InstancePaths. The old cbp-tinyllama-latest/ dir is preserved (has prior raising history) but is no longer the active identity. New raising sessions on CBP will use cbp-gemma3-4b/ starting at session 1.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
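The two moving parts described in this commit, a per-machine model default and an instance-dir name derived from machine plus model, can be sketched roughly as follows. This is a hedged sketch: `MACHINE_DEFAULTS`, `default_model`, and `instance_dir` are illustrative names, not the actual contents of `sage/gateway/machine_config.py` or `InstancePaths`.

```python
# Hypothetical sketch of the per-machine default table and the
# instance-dir naming the commit describes. Names are assumptions;
# the real machine_config / InstancePaths code is not reproduced here.
MACHINE_DEFAULTS = {
    # machine: (default ollama model, inference device)
    "cbp":   ("gemma3:4b", "cuda"),  # RTX 2060 SUPER, 8GB VRAM; gemma3:4b is 3.3GB
    "nomad": ("gemma3:4b", "cuda"),  # same model family per the Gemma-4 strategy
}

def default_model(machine: str) -> str:
    """Look up a machine's default model, falling back to the old CPU default."""
    model, _device = MACHINE_DEFAULTS.get(machine, ("tinyllama:latest", "cpu"))
    return model

def instance_dir(machine: str, model: str) -> str:
    """Derive the instance dir name: 'gemma3:4b' on cbp -> 'cbp-gemma3-4b'."""
    return f"{machine}-{model.replace(':', '-')}"

print(instance_dir("cbp", "gemma3:4b"))         # cbp-gemma3-4b
print(instance_dir("cbp", "tinyllama:latest"))  # cbp-tinyllama-latest
```

Under this assumed naming rule, changing only the default model is enough to move the active identity to a fresh directory, which is why the old `cbp-tinyllama-latest/` dir survives untouched.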
dp-web4 added a commit that referenced this pull request on Apr 18, 2026
Two stale CBP instance dirs were on disk (cbp-tinyllama-latest/ and its .bak). The daemon has been running the gemma3:4b model since PR #19, but the instance dir was still pointing at the tinyllama history, mixing in substrate state that no longer applies.

Changes:
- git mv cbp-tinyllama-latest → cbp-tinyllama-latest.archive-20260418 (preserves raising sessions 1-81, identity history, chat logs)
- git mv cbp-tinyllama-latest.bak → cbp-tinyllama-latest.bak.archive-20260418
- Initialized cbp-gemma3-4b/ via python3 -m sage.instances.init
- Customized identity.json with CBP-specific machine_context:
  - role_in_fleet: 'experimenter + coordinator'
  - fleet_siblings mapping (Thor/Sprout/Legion/McNugget/Nomad roles)
  - this_machine_owns: WM + coordination + gameplay capture
  - partnership: Dennis + Claude
  - arrived_here: migration context
  - context_at_arrival: Phase 0 complete, Phase 1 pipeline hardened, co-design with Waving Cat active
  - guidance_for_raising: 4 bullets reframing the raising context
  - Two initial memory_requests seeded with situational grounding

Daemon restart verified with active model=gemma3:4b, machine=cbp, all 12 LLMs in pool, 159+ cycles, SAGE_MODEL env propagated via router-shadow.env (local, gitignored).

Not yet in scope: cbp-qwen3.5-0.8b/ (another legacy dir; address separately). Other machines' instances are owned by their operators.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
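The seeded machine_context block might look roughly like this. The keys come from the commit's list above; the values, nesting, and especially the sibling roles (left as placeholders) are assumptions, not the actual contents of cbp-gemma3-4b/identity.json.

```python
# Illustrative shape of the machine_context seeded into identity.json.
# Keys follow the commit description; values are assumptions.
machine_context = {
    "role_in_fleet": "experimenter + coordinator",
    "fleet_siblings": {
        # sibling -> role; the real role strings are not restated here
        "thor": "...", "sprout": "...", "legion": "...",
        "mcnugget": "...", "nomad": "...",
    },
    "this_machine_owns": ["WM", "coordination", "gameplay capture"],
    "partnership": "Dennis + Claude",
    "arrived_here": "migrated from cbp-tinyllama-latest (archived 2026-04-18)",
    "context_at_arrival": ("Phase 0 complete; Phase 1 pipeline hardened; "
                           "co-design with Waving Cat active"),
}
```

Keeping this block JSON-serializable matters if it is merged into identity.json verbatim; lists and plain strings, as sketched here, survive a round-trip through `json.dumps`/`json.loads`.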
dp-web4 pushed a commit that referenced this pull request on Apr 28, 2026
…subtraction. Phi4 register-substitution discovered (Δpol -3.36, Δbiz +1.08; same trajectory). Hardware register quantified: Thor Δhw +2.46 is the largest single Δ, positive across all 8 raised instances. CBP basin = TED+gov+marketing combo. Lexicon substring FP bug fixed (recurrence #9 of the S110 pattern at the analysis layer). S119 #18/#19/#20 executed; #21/#22/#23/#24 held.

Machine: localhost.localdomain
Date: 2026-04-28 01:13:05 UTC
Changes committed automatically at session end.

🤖 Generated with [Claude Code](https://claude.com/claude-code)
Co-Authored-By: Claude <noreply@anthropic.com>
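The lexicon-substring false-positive class mentioned above can be illustrated with a small sketch. The term list and function names here are hypothetical, not the actual SAGE analysis code: matching lexicon terms as raw substrings flags words that merely contain a term, while a word-boundary regex does not.

```python
import re

LEXICON = ["art", "gov"]  # hypothetical lexicon terms

def substring_hits(text: str) -> list[str]:
    # Buggy: a raw substring match fires inside unrelated words.
    return [t for t in LEXICON if t in text.lower()]

def boundary_hits(text: str) -> list[str]:
    # Fixed: require word boundaries around each term.
    return [t for t in LEXICON if re.search(rf"\b{re.escape(t)}\b", text.lower())]

print(substring_hits("The startup pivoted."))  # ['art'] (false positive from "startup")
print(boundary_hits("The startup pivoted."))   # []
print(boundary_hits("modern art exhibit"))     # ['art'] (true positive preserved)
```

`re.escape` keeps lexicon terms containing regex metacharacters from being interpreted as patterns, which is the usual second bug lurking behind this first one.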
Problem

CBP was still defaulting to `tinyllama:latest`, a historical default from early prototyping. That model's capacity is too small for meaningful cognition, and the fleet-plan docs all specify that CBP should run Gemma-family models.

Fix

- `'tinyllama:latest'` → `'gemma3:4b'` (matches Nomad; part of the Gemma-4 family strategy)
- `'cpu'` → `'cuda'`: CBP has 8GB VRAM on the RTX 2060 SUPER, and gemma3:4b is 3.3GB, so it fits
- the `_DEFAULT_MODELS` map and the example path in `dream_consolidation.py` updated to match

Verified on CBP

The daemon started cleanly, reporting `"active model": "gemma3:4b"`.
Instance dir migration

CBP's instance dir was `cbp-tinyllama-latest/`. With the default now gemma3:4b, `InstancePaths.resolve()` produces `cbp-gemma3-4b/`. The old dir holds the prior raising history and is preserved, not deleted. New raising sessions on CBP start fresh in the new instance dir at session 1.

Other machines in this PR
None; this change is CBP-only, and other machines' defaults are already correct. If anyone's CBP already had a `SAGE_MODEL` env override pointing to a specific model, that override still wins: this change only affects the default.

🤖 Generated with Claude Code
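The override precedence described above, env var first and machine default second, amounts to a one-line lookup. This is a minimal sketch assuming the override is read straight from the environment; `resolve_model` is a hypothetical name, and only the precedence itself comes from the PR.

```python
import os

def resolve_model(machine_default: str = "gemma3:4b") -> str:
    # A SAGE_MODEL env override wins; the machine default only applies
    # when no override is set.
    return os.environ.get("SAGE_MODEL", machine_default)

os.environ["SAGE_MODEL"] = "qwen2:7b"  # hypothetical operator override
print(resolve_model())                  # qwen2:7b

del os.environ["SAGE_MODEL"]
print(resolve_model())                  # gemma3:4b
```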