Context
When potential adopters ask "what kind of resource overhead does mnemonic have?" we don't have a good answer beyond eyeballing `ps` output. The health report and status API should include runtime metrics so we can answer this with real numbers.
Current snapshot from a Linux machine (378 memories, 5 min uptime):
- RSS: ~53 MB (Go daemon only)
- CPU: ~0.9% while idle
- DB: 113 MB
Proposed
Add to `HealthReport` and `/api/v1/status`:
- `HeapAlloc` / `HeapSys` — current and system heap (from `runtime.MemStats`)
- Goroutines — `runtime.NumGoroutine()`
- GC pause total — cumulative GC pause time (`MemStats.PauseTotalNs`)
- DB size — file size of the SQLite database
- Uptime — already tracked, just verify it's in the status API response
- Historical trending — store periodic snapshots so resource usage can be graphed over time (dashboard panel candidate)
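A possible shape for the new payload fields, as a sketch only — the field and JSON key names below are illustrative, not the daemon's actual `HealthReport` definition:

```go
package main

import (
	"encoding/json"
	"fmt"
	"strings"
	"time"
)

// ResourceMetrics sketches the proposed additions; names are
// illustrative, not the real HealthReport struct.
type ResourceMetrics struct {
	HeapAllocBytes uint64 `json:"heap_alloc_bytes"`  // runtime.MemStats.HeapAlloc
	HeapSysBytes   uint64 `json:"heap_sys_bytes"`    // runtime.MemStats.HeapSys
	Goroutines     int    `json:"goroutines"`        // runtime.NumGoroutine()
	GCPauseTotalNs uint64 `json:"gc_pause_total_ns"` // runtime.MemStats.PauseTotalNs
	DBSizeBytes    int64  `json:"db_size_bytes"`     // os.Stat on the SQLite file
	UptimeSeconds  int64  `json:"uptime_seconds"`    // already tracked by the daemon
}

// marshalMetrics renders one snapshot the way the status API would.
func marshalMetrics(m ResourceMetrics) string {
	b, _ := json.Marshal(m)
	return string(b)
}

// hasField reports whether a serialized snapshot contains a given key.
func hasField(js, name string) bool {
	return strings.Contains(js, `"`+name+`"`)
}

func main() {
	m := ResourceMetrics{
		HeapAllocBytes: 53 << 20, // ~53 MB, matching the snapshot above
		Goroutines:     12,
		UptimeSeconds:  int64((5 * time.Minute).Seconds()),
	}
	fmt.Println(marshalMetrics(m))
}
```

Keeping the fields flat and byte-denominated leaves unit conversion (MB, human-readable durations) to the dashboard or CLI formatter.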
Sample in the existing `monitorLoop` every health report interval (default 5m). No new goroutines, no new dependencies.
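Collection inside `monitorLoop` could look roughly like this; `collectResources`, the `Snapshot` type, and the `dbPath` argument are assumptions for illustration, not the daemon's actual code:

```go
package main

import (
	"fmt"
	"os"
	"runtime"
	"time"
)

// Snapshot holds one periodic resource sample. Names are illustrative.
type Snapshot struct {
	HeapAlloc    uint64
	HeapSys      uint64
	Goroutines   int
	GCPauseTotal time.Duration
	DBSizeBytes  int64
}

// collectResources gathers everything from the stdlib plus one stat(2)
// call on the SQLite file — no new goroutines, no new dependencies.
func collectResources(dbPath string) Snapshot {
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	s := Snapshot{
		HeapAlloc:    m.HeapAlloc,
		HeapSys:      m.HeapSys,
		Goroutines:   runtime.NumGoroutine(),
		GCPauseTotal: time.Duration(m.PauseTotalNs),
	}
	// A missing or unreadable DB file reports size 0 rather than failing
	// the whole health report.
	if fi, err := os.Stat(dbPath); err == nil {
		s.DBSizeBytes = fi.Size()
	}
	return s
}

func main() {
	snap := collectResources("mnemonic.db") // path is hypothetical
	fmt.Printf("heap=%dB goroutines=%d gc_pause=%s\n",
		snap.HeapAlloc, snap.Goroutines, snap.GCPauseTotal)
}
```

`runtime.ReadMemStats` does stop the world briefly, but at one call per 5-minute interval the cost is negligible.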
What we already have
LLM call metering is already covered by `InstrumentedProvider` in `internal/llm/instrumented.go` — tracks prompt/completion/total tokens, latency, caller agent, and cost estimation via `pricing.go`. This issue is about the daemon process itself, not LLM network overhead.
Why
- FAQ / README can reference actual metrics
- Dashboard could show a resource panel
- Windows users (where `systemctl status` isn't available) need another way to check overhead