## Feature Request
SONA's learned weights (MicroLoRA, BaseLoRA, EWC++ Fisher matrix, ReasoningBank patterns) are in-memory only. On process restart, all learning is lost and must be re-acquired by replaying trajectories.
## Motivation
For long-running applications (coaching systems, production chatbots, persistent agents), SONA needs to survive process restarts without losing learned adaptations. Currently, the only workaround is to re-feed all trajectories on every startup, which:
- Requires >= 100 trajectories before `forceLearn()` activates
- Takes ~30s for 200 trajectories via MCP stdio
- Loses any patterns that were learned during the previous session
## Proposed API
Rust:

```rust
// Save learned state (LoRA weights, EWC Fisher, patterns)
engine.save_state("path/to/sona-state.bin")?;

// Load from previous state
let engine = SonaEngine::load_state("path/to/sona-state.bin")?;
```
Node.js (NAPI):

```js
engine.saveState("path/to/sona-state.json");

const engine = SonaEngine.loadState("path/to/sona-state.json");
```
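A minimal sketch of what the Node-side semantics could look like, assuming a plain-object snapshot serialized to JSON. The `SonaSnapshot` shape, its field names, and the schema-version check are all hypothetical illustrations, not the actual NAPI surface:

```typescript
import { writeFileSync, readFileSync } from "node:fs";

// Hypothetical snapshot shape -- field names are illustrative only.
interface SonaSnapshot {
  version: number;     // schema version, for forward compatibility
  microLora: number[]; // rank-2 adapter weights (flattened)
  baseLora: number[];  // rank-8 adapter weights (flattened)
  ewcFisher: number[]; // EWC++ Fisher information (flattened)
  patterns: unknown[]; // ReasoningBank pattern metadata
}

// saveState: serialize the learned state to disk as JSON.
function saveState(path: string, snap: SonaSnapshot): void {
  writeFileSync(path, JSON.stringify(snap));
}

// loadState: restore a snapshot, refusing unknown schema versions
// so a newer on-disk format fails loudly instead of loading garbage.
function loadState(path: string): SonaSnapshot {
  const snap = JSON.parse(readFileSync(path, "utf8")) as SonaSnapshot;
  if (snap.version !== 1) {
    throw new Error(`unsupported state version ${snap.version}`);
  }
  return snap;
}
```

Versioning the snapshot up front matters here: EWC Fisher layout or LoRA rank could change between releases, and a version field lets `loadState` reject stale files instead of silently misinterpreting them.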
## What should be persisted
| Component | Estimated Size (256-dim) |
|---|---|
| MicroLoRA weights (rank-2) | ~2 KB |
| BaseLoRA weights (rank-8) | ~16 KB |
| EWC++ Fisher information matrix | ~256 KB |
| ReasoningBank cluster centroids | ~40 KB |
| Pattern metadata | ~5 KB |
| **Total** | **~320 KB** |
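Summing the per-component estimates in the table gives the ~320 KB total. As a sketch only, the components could be packed contiguously into a single binary blob; the section names and offsets below are illustrative, derived purely from the table above:

```typescript
// Section sizes in bytes, taken from the table above (1 KB = 1024 bytes).
const sections: [string, number][] = [
  ["microLora", 2 * 1024],
  ["baseLora", 16 * 1024],
  ["ewcFisher", 256 * 1024],
  ["centroids", 40 * 1024],
  ["metadata", 5 * 1024],
];

// Compute byte offsets for a flat, concatenated layout.
let offset = 0;
const layout = sections.map(([name, size]) => {
  const entry = { name, offset, size };
  offset += size;
  return entry;
});

const totalKB = offset / 1024; // 319 KB, consistent with the ~320 KB estimate
```

At this size a single `fs.writeFileSync` of the whole blob is cheap, which is part of the argument for a binary format over replaying trajectories.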
## Current workaround
We store trajectories in a JSON file (`intelligence.json`) and replay them on startup. The `@ruvector/cli hooks session-start` command initializes the intelligence layer, but SONA's neural weights start fresh each time.
## Additional context
The `IntelligenceEngine` in the `ruvector` npm package already persists Q-tables, memories, and trajectories to `intelligence.json`. SONA state could be added to this same file, or stored separately as a binary blob for efficiency.
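If the SONA blob were folded into `intelligence.json`, it could ride along as a base64-encoded field next to the existing Q-tables and trajectories. A hedged sketch, where the `sonaState` key and both helper functions are hypothetical names for illustration:

```typescript
import { Buffer } from "node:buffer";

// Hypothetical: attach a binary SONA snapshot to the existing
// intelligence.json document as a base64-encoded field.
function embedSonaState(
  intelligence: Record<string, unknown>,
  blob: Uint8Array,
): Record<string, unknown> {
  return { ...intelligence, sonaState: Buffer.from(blob).toString("base64") };
}

// Recover the binary snapshot, or null if the field is absent
// (e.g. a pre-upgrade intelligence.json).
function extractSonaState(doc: Record<string, unknown>): Uint8Array | null {
  const b64 = doc["sonaState"];
  return typeof b64 === "string"
    ? new Uint8Array(Buffer.from(b64, "base64"))
    : null;
}
```

The trade-off versus a separate binary file: base64 inflates the ~320 KB blob by about a third and forces a full rewrite of `intelligence.json` on every save, but it keeps all persisted intelligence state in one place.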
Related: `hooks_force_learn` requires a minimum of 100 trajectories even with `force=true` (#273 covers the stats counter bug). State persistence would eliminate the need for trajectory replay entirely.