Environment
- codebase-memory-mcp version: 0.4.6
- Binary: /home/jm/.local/bin/codebase-memory-mcp (144MB ELF)
- OS: Linux 6.6.87 (WSL2)
- Client: Claude Code (stdio transport)
- Project size: Large Next.js monorepo (~42MB project DB, 147MB global DB)
Summary
The MCP server has been crashing consistently since 2026-03-11 11:07 AM. Every session follows the same pattern: the connection establishes successfully, the server becomes unresponsive, and it must be killed by escalating through SIGINT → SIGTERM → SIGKILL. Seven kill events are documented over 25+ hours, and the server has not completed a healthy session since the first crash.
Reproduction
- Index a large project (~42MB resulting DB)
- Server runs normally for some time
- A list_projects call triggers a hang (observed: 58 seconds before timeout)
- Server stops responding to all signals except SIGKILL
- Every subsequent session startup repeats the hang
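The signal escalation described above can be sketched as a hypothetical helper (this is not the client's actual code, just an illustration of the observed SIGINT → SIGTERM → SIGKILL sequence):

```python
import os
import signal
import time

def escalate_kill(pid: int, grace: float = 5.0) -> signal.Signals:
    """Hypothetical helper mirroring the kill escalation seen in the logs.

    Sends SIGINT, then SIGTERM, then SIGKILL, waiting up to `grace`
    seconds after each for the process to exit. Returns the signal that
    finally took effect. (An unreaped zombie child of the caller still
    answers the probe, so reap children promptly.)
    """
    for sig in (signal.SIGINT, signal.SIGTERM, signal.SIGKILL):
        try:
            os.kill(pid, sig)
        except ProcessLookupError:
            return sig  # process already gone
        deadline = time.monotonic() + grace
        while time.monotonic() < deadline:
            try:
                os.kill(pid, 0)  # signal 0 = existence probe, sends nothing
            except ProcessLookupError:
                return sig
            time.sleep(0.1)
        # still alive after the grace period: escalate to the next signal
    return signal.SIGKILL  # SIGKILL cannot be caught or ignored
```

A healthy process exits on the first SIGINT; the hung server in this report only disappears at the SIGKILL stage.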
Crash timeline (from MCP logs)
| Timestamp | Event |
| --- | --- |
| 2026-03-11 11:07:53 | First crash — server killed after becoming unresponsive |
| 2026-03-11 12:04:23 | Two simultaneous kill signals (cascade) |
| 2026-03-11 12:11+ | Every new session hangs immediately |
| 2026-03-11 12:33 | list_projects captured taking 58s with "still running (30s elapsed)" warning |
| 2026-03-12 08:01–08:42 | Still crashing 25 hours later, same pattern |
Observed indicators
- list_projects took 58 seconds before the process hung — suggests lock contention or a full table scan on a large DB
- db-shm file: 32KB with 0 frames — consistent with stuck WAL state or a transaction that never committed
- db-wal file: Empty (0 bytes) — WAL was checkpointed or truncated, but shm retained stale lock state
- Connection startup degraded from ~900ms to 1.7s+ over the crash period
- Signal escalation pattern: SIGINT ignored → SIGTERM ignored → SIGKILL required — process is hung on I/O or a lock, not in a catchable error state
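One way to test the stuck-lock theory without destroying the crash state is to attempt a bounded checkpoint from a separate process. A hypothetical Python probe (db_path is a placeholder; the report does not show the DB's directory):

```python
import sqlite3

def probe_wal(db_path: str, timeout_ms: int = 2000):
    """Attempt PRAGMA wal_checkpoint(TRUNCATE) with a bounded lock wait.

    Returns the (busy, log_frames, checkpointed_frames) row:
    busy == 1 means another connection or stale lock state blocked the
    checkpoint; log/checkpointed are -1 if the DB is not in WAL mode.
    """
    conn = sqlite3.connect(db_path, timeout=timeout_ms / 1000)
    try:
        # Bound any lock wait so the probe returns instead of hanging
        conn.execute(f"PRAGMA busy_timeout = {timeout_ms}")
        return conn.execute("PRAGMA wal_checkpoint(TRUNCATE)").fetchone()
    finally:
        conn.close()
```

On the crashed DB, a `busy == 1` result (or a hang despite the timeout) would support the stale-shm hypothesis; a clean `busy == 0` would point elsewhere.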
Database files at time of crash
| File | Size |
| --- | --- |
| home-jm-projects-cinephile-moments.db | 42MB |
| home-jm-projects-cinephile-moments.db-shm | 32KB |
| home-jm-projects-cinephile-moments.db-wal | 0B |
| home-jm.db (global) | 147MB |
Log files
All in ~/.cache/claude-cli-nodejs/-home-jm-projects-cinephile-moments/mcp-logs-codebase-memory-mcp/:
- 2026-03-11T09-59-28-246Z.jsonl — First crash
- 2026-03-11T12-04-41-998Z.jsonl — Cascade kills
- 2026-03-11T12-11-15-378Z.jsonl — Continued failures
- 2026-03-11T12-27-35-544Z.jsonl — 58s list_projects hang captured
- 2026-03-12T08-01-49-249Z.jsonl — Still broken 25h later
- 2026-03-12T08-26-44-612Z.jsonl — Same pattern
- 2026-03-12T08-42-14-152Z.jsonl — Last observed (handshake only, then unresponsive)
Hypothesis
SQLite lock contention or stuck WAL state on a large database. The db-shm file retaining 32KB with 0 frames after a SIGKILL suggests the process was killed mid-transaction, leaving shared memory in an inconsistent state. Subsequent startups inherit the stale lock and immediately deadlock.
Possible contributing factors:
- No PRAGMA busy_timeout, or it is set too low for a 42MB+ DB
- list_projects may do a full scan without pagination
- SIGKILL during an active WAL write leaves shm in a state that new connections can't recover from
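If the missing busy_timeout is the culprit, the fix itself is small. A minimal sketch in Python (the server is a compiled binary, so this illustrates the setting, not its actual implementation):

```python
import sqlite3

def open_project_db(path: str) -> sqlite3.Connection:
    # Sketch only — not the server's real code. Shows how a busy timeout
    # bounds lock waits on a large, contended database.
    # timeout= makes the Python driver retry on SQLITE_BUSY for up to 5 s
    conn = sqlite3.connect(path, timeout=5.0)
    # busy_timeout (in ms) makes SQLite itself retry before surfacing
    # SQLITE_BUSY, instead of failing on the first contended lock
    conn.execute("PRAGMA busy_timeout = 5000")
    return conn
```

With a bounded timeout, a stale lock would produce a fast, loggable SQLITE_BUSY error rather than the indefinite hang observed here.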
Workaround
Deleting the -shm and -wal files (forcing SQLite to rebuild lock state) or deleting the project DB entirely (forcing a re-index) are likely workarounds; neither has been attempted yet, in order to preserve the crash state for this report.
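Once the crash state no longer needs preserving, the first workaround could be applied roughly as follows (run only while the server is fully stopped; db_path is a placeholder for the project DB path):

```python
import os

def clear_stale_wal(db_path: str) -> list[str]:
    """Remove the -shm and -wal sidecar files so SQLite rebuilds lock state.

    Only safe while no process has the database open. Data committed to
    the main DB file is preserved; un-checkpointed WAL frames would be
    lost, but the report shows the -wal file is already 0 bytes.
    """
    removed = []
    for suffix in ("-shm", "-wal"):
        sidecar = db_path + suffix
        if os.path.exists(sidecar):
            os.remove(sidecar)
            removed.append(sidecar)
    return removed
```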