feat: agentmemory v0.4.0 — the memory layer for all AI coding agents #9
7 new features: Claude Code memory bridge, standalone cross-agent MCP server, knowledge graph with entity extraction, 4-tier memory consolidation pipeline, team/shared memory, memory governance with audit trail, git-versioned snapshots. 33 functions, 18 MCP tools, 6 MCP resources, 3 MCP prompts, 49 REST endpoints, 21 KV scopes, 216 tests. All features opt-in via env vars.
📝 Walkthrough

This PR introduces a comprehensive feature expansion to the agent memory system, adding seven major capabilities: Claude Code bridge for memory file synchronization, knowledge graph extraction and querying, multi-tier memory consolidation, team-based memory sharing, governance and audit tracking, Git-backed snapshots, and a standalone MCP server. The system expands from 10 to 18 MCP tools and from 34 to 49 REST endpoints, with corresponding data model extensions and configuration loader functions.
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant Client
participant SDK
participant KV Store
participant MemoryProvider
participant FileSystem
rect rgba(100, 150, 255, 0.5)
Note over Client,FileSystem: Claude Bridge Sync Flow
Client->>SDK: mem::claude-bridge-sync
SDK->>KV Store: list(memories)
KV Store-->>SDK: [Memory entries]
SDK->>KV Store: get(projectPath)
KV Store-->>SDK: project summary
SDK->>SDK: serializeToMemoryMd(memories, summary, lineBudget)
SDK->>FileSystem: mkdirSync + writeFileSync
FileSystem-->>SDK: File written
SDK->>KV Store: set(audit entry)
KV Store-->>SDK: Audit recorded
SDK-->>Client: {path, lineCount, success}
end
rect rgba(100, 255, 150, 0.5)
Note over Client,FileSystem: Graph Extraction & Query Flow
Client->>SDK: mem::graph-extract
SDK->>MemoryProvider: compress(observations, GRAPH_EXTRACTION_SYSTEM)
MemoryProvider-->>SDK: XML response
SDK->>SDK: parseGraphXml(xml)
SDK->>KV Store: upsert(graphNodes)
SDK->>KV Store: upsert(graphEdges)
KV Store-->>SDK: Confirmed
SDK->>KV Store: set(audit entry)
SDK-->>Client: {addedNodes, addedEdges, success}
end
rect rgba(255, 200, 100, 0.5)
Note over Client,FileSystem: Consolidation Pipeline Flow
Client->>SDK: mem::consolidate-pipeline
SDK->>KV Store: list(summaries/patterns)
KV Store-->>SDK: [Episodes/Patterns]
SDK->>MemoryProvider: compress(episodes, SEMANTIC_MERGE_SYSTEM)
MemoryProvider-->>SDK: Facts XML
SDK->>SDK: parseXml + updateOrCreate(semanticMemories)
SDK->>KV Store: set(semanticMemories)
SDK->>KV Store: set(audit entry)
SDK-->>Client: {newFacts, newProcedures, success}
end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Actionable comments posted: 1
Note: Due to the large number of review comments, Critical severity comments were prioritized as inline comments.
🟠 Major comments (21)
src/prompts/graph-extraction.ts-19-33 (1)
19-33: ⚠️ Potential issue | 🟠 Major

Bound prompt size to prevent extraction failures on large observation batches.

`buildGraphExtractionPrompt` currently appends full observation text without any limits. Large inputs can exceed the model context and cause failed or degraded graph extraction.

🛠️ Suggested guardrails:

```diff
 export function buildGraphExtractionPrompt(observations: Array<{
   title: string;
   narrative: string;
   concepts: string[];
   files: string[];
   type: string;
 }>): string {
-  const items = observations
+  const MAX_OBSERVATIONS = 50;
+  const MAX_NARRATIVE_CHARS = 1200;
+  const items = observations
+    .slice(0, MAX_OBSERVATIONS)
     .map(
       (o, i) =>
-        `[${i + 1}] Type: ${o.type}\nTitle: ${o.title}\nNarrative: ${o.narrative}\nConcepts: ${o.concepts.join(", ")}\nFiles: ${o.files.join(", ")}`,
+        `[${i + 1}] Type: ${o.type}\nTitle: ${o.title}\nNarrative: ${o.narrative.slice(0, MAX_NARRATIVE_CHARS)}\nConcepts: ${o.concepts.join(", ")}\nFiles: ${o.files.join(", ")}`,
     )
     .join("\n\n");
   return `Extract entities and relationships from these observations:\n\n${items}`;
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/prompts/graph-extraction.ts` around lines 19 - 33: buildGraphExtractionPrompt currently concatenates full observation fields, causing prompts to exceed the model context for large observation arrays; update the function to enforce size limits by (1) capping the number of observations included (e.g., maxObservations) and (2) truncating long string fields (title, narrative, concepts, files) to a safe length or token count before joining; ensure the truncation preserves key context (e.g., keep start and/or end with an ellipsis) and include a short note in the returned prompt when data was truncated so downstream extraction knows not all content was included; modify buildGraphExtractionPrompt and its parameter handling to accept/configure the limits or use sensible defaults.

src/functions/graph.ts-33-34 (1)
33-34: ⚠️ Potential issue | 🟠 Major

Validate extracted node/edge enum types before persisting.

LLM output is untrusted. Direct casts can store invalid `type` values and break downstream assumptions.

Proposed fix:

```diff
+const VALID_NODE_TYPES = new Set<GraphNode["type"]>([
+  "file",
+  "function",
+  "concept",
+  "error",
+  "decision",
+  "pattern",
+  "library",
+  "person",
+]);
+const VALID_EDGE_TYPES = new Set<GraphEdge["type"]>([
+  "uses",
+  "imports",
+  "modifies",
+  "causes",
+  "fixes",
+  "depends_on",
+  "related_to",
+]);
@@
-      const type = match[1] as GraphNode["type"];
+      const type = match[1] as GraphNode["type"];
+      if (!VALID_NODE_TYPES.has(type)) continue;
@@
-      const type = match[1] as GraphEdge["type"];
+      const type = match[1] as GraphEdge["type"];
+      if (!VALID_EDGE_TYPES.has(type)) continue;
```

Also applies to: 57-58
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/functions/graph.ts` around lines 33 - 34: when extracting types from regex matches (the variables named type from match[1] used as GraphNode["type"] and similarly for edge type around lines 57-58), validate that the extracted string is one of the allowed enum values before casting/persisting: implement a type guard or lookup against the GraphNode type set (and GraphEdge type set for edges) and reject, throw, or skip the record if it is not valid; replace the direct cast (as GraphNode["type"]) with the guarded/validated value so only known enum members are stored.

src/mcp/standalone.ts-20-109 (1)
20-109: ⚠️ Potential issue | 🟠 Major

`tools/list` advertises capabilities that `tools/call` does not implement.

The server lists all tools from the registry, but only five tool handlers exist here. Clients will receive false-positive capabilities.

Proposed fix (minimal contract-safe option):

```diff
+const STANDALONE_TOOL_NAMES = new Set([
+  "memory_save",
+  "memory_recall",
+  "memory_sessions",
+  "memory_export",
+  "memory_audit",
+]);
@@
-    case "tools/list":
-      return { tools: getAllTools() };
+    case "tools/list":
+      return {
+        tools: getAllTools().filter((t) => STANDALONE_TOOL_NAMES.has(t.name)),
+      };
```

Also applies to: 128-129
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mcp/standalone.ts` around lines 20 - 109: the registry-advertised tools include entries that are not handled by the tools/call switch in standalone.ts (the case handlers for memory_save, memory_recall, memory_sessions, memory_export, memory_audit), causing clients to see unsupported capabilities; fix this by making the tools/list output only the tools implemented by the tools/call switch (or by adding corresponding case handlers) — update the listing logic (where tools are collected for advertising) to derive the advertised tool names from the same source as the switch (or explicitly whitelist the implemented names: "memory_save", "memory_recall", "memory_sessions", "memory_export", "memory_audit") so the registry and the tool dispatcher remain in sync.

src/functions/snapshot.ts-68-75 (1)
68-75: ⚠️ Potential issue | 🟠 Major

Restore is not a full rollback of captured snapshot state.

Create includes `graphNodes` and `observations`, but restore only upserts sessions/memories and never clears stale records created after the snapshot.

Proposed fix:

```diff
-  const state = JSON.parse(content) as {
-    sessions?: Array<{ id: string } & Record<string, unknown>>;
-    memories?: Array<{ id: string } & Record<string, unknown>>;
-  };
+  const state = JSON.parse(content) as {
+    sessions?: Array<{ id: string } & Record<string, unknown>>;
+    memories?: Array<{ id: string } & Record<string, unknown>>;
+    graphNodes?: Array<{ id: string } & Record<string, unknown>>;
+    observations?: Record<
+      string,
+      Array<{ id: string } & Record<string, unknown>>
+    >;
+  };
+
+  // Replace sessions instead of upserting over stale records
+  for (const s of await kv.list<{ id: string } & Record<string, unknown>>(KV.sessions)) {
+    await kv.delete(KV.sessions, s.id);
+  }
   if (state.sessions) {
     for (const session of state.sessions) {
       await kv.set(KV.sessions, session.id, session);
     }
   }
+
+  // Replace memories
+  for (const m of await kv.list<{ id: string } & Record<string, unknown>>(KV.memories)) {
+    await kv.delete(KV.memories, m.id);
+  }
   if (state.memories) {
     for (const memory of state.memories) {
       await kv.set(KV.memories, memory.id, memory);
     }
   }
+
+  // Restore graph nodes
+  if (state.graphNodes) {
+    for (const node of state.graphNodes) {
+      await kv.set(KV.graphNodes, node.id, node);
+    }
+  }
+
+  // Restore per-session observations
+  if (state.observations) {
+    for (const [sessionId, obs] of Object.entries(state.observations)) {
+      for (const entry of obs) {
+        await kv.set(KV.observations(sessionId), entry.id, entry);
+      }
+    }
+  }
```

Also applies to: 169-183
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/functions/snapshot.ts` around lines 68 - 75: the restore logic currently only upserts sessions and memories and never removes records created after the snapshot (graphNodes and observations are captured in the state object but not reconciled), so update the restore routine to perform a full rollback: load the snapshot's state (version/timestamp/sessions/memories/graphNodes/observations), begin a transaction, delete any sessions/memories/graphNodes/observations in the DB that are NOT present in the snapshot, then upsert the snapshot entries for sessions, memories, graphNodes and observations; ensure operations reference the same unique keys used elsewhere (e.g., session id, memory id, graphNode id, observation id) and commit/rollback the transaction on error so the DB is left consistent.

src/functions/team.ts-39-47 (1)
39-47: ⚠️ Potential issue | 🟠 Major

Respect `TeamConfig.mode` when setting visibility.

Hardcoding `visibility: "shared"` ignores private mode and can violate expected team privacy behavior.

Proposed fix:

```diff
-      visibility: "shared",
+      visibility: config.mode,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/functions/team.ts` around lines 39 - 47: the visibility field of the TeamSharedItem is hardcoded to "shared", which ignores TeamConfig.mode; in the block that constructs the shared object (variable shared in the function handling team shares), set visibility based on config.mode (e.g., "private" when config.mode === "private", otherwise "shared") so the created item respects TeamConfig.mode and team privacy settings.

src/functions/snapshot.ts-164-185 (1)
164-185: ⚠️ Potential issue | 🟠 Major

Protect `state.json` cleanup with `finally` during restore.

If rehydration fails, `state.json` can remain checked out from the target commit, leaving the repo in a dirty/inconsistent state.

Proposed fix:

```diff
-  await gitExec(snapshotDir, ["checkout", data.commitHash, "--", "state.json"]);
-  const content = readFileSync(
-    join(snapshotDir, "state.json"),
-    "utf-8",
-  );
-  const state = JSON.parse(content) as {
-    sessions?: Array<{ id: string } & Record<string, unknown>>;
-    memories?: Array<{ id: string } & Record<string, unknown>>;
-  };
-
-  if (state.sessions) {
-    for (const session of state.sessions) {
-      await kv.set(KV.sessions, session.id, session);
-    }
-  }
-
-  if (state.memories) {
-    for (const memory of state.memories) {
-      await kv.set(KV.memories, memory.id, memory);
-    }
-  }
-
-  await gitExec(snapshotDir, ["checkout", "HEAD", "--", "state.json"]);
+  await gitExec(snapshotDir, ["checkout", data.commitHash, "--", "state.json"]);
+  try {
+    const content = readFileSync(
+      join(snapshotDir, "state.json"),
+      "utf-8",
+    );
+    const state = JSON.parse(content) as {
+      sessions?: Array<{ id: string } & Record<string, unknown>>;
+      memories?: Array<{ id: string } & Record<string, unknown>>;
+    };
+
+    if (state.sessions) {
+      for (const session of state.sessions) {
+        await kv.set(KV.sessions, session.id, session);
+      }
+    }
+
+    if (state.memories) {
+      for (const memory of state.memories) {
+        await kv.set(KV.memories, memory.id, memory);
+      }
+    }
+  } finally {
+    await gitExec(snapshotDir, ["checkout", "HEAD", "--", "state.json"]).catch(() => {});
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/functions/snapshot.ts` around lines 164 - 185: the current restore logic that checks out state.json from data.commitHash and later restores HEAD uses a final gitExec(["checkout","HEAD","--","state.json"]) but it runs only on the success path; wrap the rehydration steps (gitExec(... checkout data.commitHash ...), reading/parsing state.json, and the kv.set loops) in a try block and move the gitExec(snapshotDir, ["checkout", "HEAD", "--", "state.json"]) into a finally block so the file is always checked back out regardless of errors; keep error propagation (re-throw or let it bubble) so failures are not swallowed, and reference the existing symbols: gitExec, snapshotDir, data.commitHash, state.json, kv.set, KV.sessions, and KV.memories.

src/functions/graph.ts-161-170 (1)
161-170: ⚠️ Potential issue | 🟠 Major

Guard `mem::graph-query` against empty params.

If called without params, dereferencing `data.maxDepth` will throw.

Proposed fix:

```diff
-    async (data: {
+    async (data: {
       startNodeId?: string;
       nodeType?: string;
       maxDepth?: number;
       query?: string;
-    }): Promise<GraphQueryResult> => {
+    } = {}): Promise<GraphQueryResult> => {
@@
-      const maxDepth = Math.min(data.maxDepth || 3, 5);
+      const maxDepth = Math.min(data.maxDepth ?? 3, 5);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/functions/graph.ts` around lines 161 - 170: the graph query handler can throw when called with no params because it dereferences data.maxDepth; fix by guarding the input: give the async handler a safe default parameter (e.g., data defaulting to an empty object) or use optional chaining when reading data (data?.maxDepth), then compute maxDepth the same way (Math.min((data?.maxDepth || 3), 5)); ensure this change is applied in the anonymous async function that returns Promise<GraphQueryResult> (the mem::graph-query handler) so accessing KV.graphNodes and KV.graphEdges remains safe when data is omitted.

src/functions/team.ts-20-37 (1)
20-37: ⚠️ Potential issue | 🟠 Major

`itemType: "observation"` cannot succeed with the current lookup logic.

Observation shares always fail as "Item not found" because only `KV.memories` is queried.

Proposed fix:

```diff
-    async (data: {
+    async (data: {
       itemId: string;
       itemType: "observation" | "memory" | "pattern";
+      sessionId?: string;
       project?: string;
     }) => {
@@
-      if (data.itemType === "memory" || data.itemType === "pattern") {
+      if (data.itemType === "observation") {
+        if (!data.sessionId) {
+          return { success: false, error: "sessionId is required for observation" };
+        }
+        content = await kv.get(KV.observations(data.sessionId), data.itemId);
+      } else if (data.itemType === "memory" || data.itemType === "pattern") {
         content = await kv.get<Memory>(KV.memories, data.itemId);
       }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/functions/team.ts` around lines 20 - 37: the lookup only queries KV.memories, so itemType === "observation" always returns "Item not found"; update the retrieval logic in the anonymous async handler (the block that checks data.itemType and uses kv.get and KV.memories) to branch on data.itemType and call the correct KV namespace for observations (e.g., use KV.observations or the appropriate observations store) when data.itemType === "observation", keep using KV.memories for "memory" and "pattern", and ensure the variable content is assigned from the correct kv.get call so observation shares can succeed.

src/mcp/standalone.ts-24-25 (1)
24-25: ⚠️ Potential issue | 🟠 Major

Use collision-resistant IDs in `memory_save`.

`Date.now().toString(36)` can collide under fast concurrent calls and overwrite entries.

Proposed fix:

```diff
+import { randomUUID } from "node:crypto";
@@
-      const id = `mem_${Date.now().toString(36)}`;
+      const id = `mem_${randomUUID().replace(/-/g, "").slice(0, 16)}`;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mcp/standalone.ts` around lines 24 - 25: the current memory_save ID generation uses Date.now().toString(36), which can collide under concurrent calls; replace this with a collision-resistant ID (e.g., crypto.randomUUID() or a UUID/nanoid) when creating const id (referenced in memory_save and used in kv.set("mem:memories", id, ...)) so each stored memory gets a unique, non-colliding key; keep the same id variable name and usage but change only the generator to a UUID/random-id provider and import any required library.

src/functions/snapshot.ts-87-91 (1)
87-91: ⚠️ Potential issue | 🟠 Major

Handle non-"no changes" commit failures explicitly.

This path currently returns success for any `git commit` failure, which can hide real snapshot failures.

Proposed fix:

```diff
-  try {
-    await gitExec(snapshotDir, ["commit", "-m", message]);
-  } catch {
-    return { success: true, message: "No changes to snapshot" };
-  }
+  const staged = await gitExec(snapshotDir, [
+    "status",
+    "--porcelain",
+    "--",
+    "state.json",
+  ]);
+  if (!staged) {
+    return { success: true, message: "No changes to snapshot" };
+  }
+  await gitExec(snapshotDir, ["commit", "-m", message]);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/functions/snapshot.ts` around lines 87 - 91: the try/catch around the git commit call (gitExec(snapshotDir, ["commit", "-m", message])) currently treats any failure as a benign "No changes to snapshot"; change it to inspect the git error and only return success when the failure indicates there are truly no changes (e.g., error message/exit code referencing "nothing to commit" or similar), otherwise propagate or return a failure with the actual error details; use the error object or captured stderr from gitExec to distinguish these cases and include that error text in the returned failure message so real commit errors aren't masked.

src/functions/graph.ts-113-118 (1)
113-118: ⚠️ Potential issue | 🟠 Major

Avoid scanning `KV.graphNodes` on every extracted node.

Fetching all graph nodes inside the loop creates repeated I/O and poor scaling for larger graphs.

Proposed fix:

```diff
-  for (const node of nodes) {
-    const existing = await kv
-      .list<GraphNode>(KV.graphNodes)
-      .then((all) =>
-        all.find((n) => n.name === node.name && n.type === node.type),
-      );
+  const existingNodes = await kv.list<GraphNode>(KV.graphNodes);
+  const nodeIndex = new Map(
+    existingNodes.map((n) => [`${n.type}:${n.name}`, n] as const),
+  );
+
+  for (const node of nodes) {
+    const existing = nodeIndex.get(`${node.type}:${node.name}`);
     if (existing) {
       const merged = {
         ...existing,
@@
       await kv.set(KV.graphNodes, existing.id, merged);
+      nodeIndex.set(`${existing.type}:${existing.name}`, merged);
     } else {
       await kv.set(KV.graphNodes, node.id, node);
+      nodeIndex.set(`${node.type}:${node.name}`, node);
     }
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/functions/graph.ts` around lines 113 - 118: the loop over nodes repeatedly calls kv.list<GraphNode>(KV.graphNodes), causing excessive I/O; instead, load all existing nodes once before the loop (call kv.list<GraphNode>(KV.graphNodes) outside the for (const node of nodes) loop), build a lookup (e.g., Map keyed by `${name}|${type}`) and replace the per-iteration .find(...) with a constant-time lookup of existing; ensure that when you create or update a node inside the loop you also update the in-memory map so subsequent iterations see the change.

src/mcp/server.ts-742-758 (1)
742-758: ⚠️ Potential issue | 🟠 Major

Resource read fallbacks are masking real errors with synthetic success payloads.

Both handlers return `200` with default data on any exception. This can hide invalid URI decoding and KV/storage failures.

🔧 Suggested direction:

```diff
-  if (teamProfileMatch) {
-    try {
-      const teamId = decodeURIComponent(teamProfileMatch[1]);
-      const items = await kv.list(KV.teamShared(teamId));
+  if (teamProfileMatch) {
+    let teamId: string;
+    try {
+      teamId = decodeURIComponent(teamProfileMatch[1]);
+    } catch {
+      return {
+        status_code: 400,
+        body: { error: "Invalid percent-encoding in URI" },
+      };
+    }
+    try {
+      const items = await kv.list(KV.teamShared(teamId));
       return {
         status_code: 200,
         body: {
           contents: [
             {
               uri,
               mimeType: "application/json",
               text: JSON.stringify({
                 teamId,
                 sharedItems: items.length,
               }),
             },
           ],
         },
       };
     } catch {
-      return {
-        status_code: 200,
-        body: {
-          contents: [
-            {
-              uri,
-              mimeType: "application/json",
-              text: JSON.stringify({ teamId: "unknown", sharedItems: 0 }),
-            },
-          ],
-        },
-      };
+      return { status_code: 500, body: { error: "Internal error" } };
     }
   }
```

Also applies to: 783-796
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mcp/server.ts` around lines 742 - 758: the catch blocks currently swallow all exceptions and return a synthetic 200 response with default data; update the error handling in these handlers (the try/catch that returns status_code: 200 and the similar block later) to log the actual error and return an appropriate non-200 status (e.g., 400 for URI/decode errors, 404 if the resource is not found, 500 for KV/storage failures) instead of the fake success payload, and include the error message in the returned body (or rethrow to the upstream error middleware) while preserving the original response shape (uri, mimeType) for client diagnostics.

src/functions/governance.ts-11-14 (1)
11-14: ⚠️ Potential issue | 🟠 Major

Guard optional payloads before field access.

Both handlers assume `data` exists. A direct trigger without a body will throw before your validation logic runs.

🔧 Proposed fix:

```diff
-  async (data: { memoryIds: string[]; reason?: string }) => {
+  async (data?: { memoryIds: string[]; reason?: string }) => {
     const ctx = getContext();
-    if (!data.memoryIds || !Array.isArray(data.memoryIds) || data.memoryIds.length === 0) {
+    if (!data?.memoryIds || !Array.isArray(data.memoryIds) || data.memoryIds.length === 0) {
       return { success: false, error: "memoryIds array is required" };
     }
...
-  async (data: GovernanceFilter & { dryRun?: boolean }) => {
+  async (data: GovernanceFilter & { dryRun?: boolean } = {}) => {
     const ctx = getContext();
```

Also applies to: 41-47
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/functions/governance.ts` around lines 11 - 14: the handlers access data.memoryIds without first ensuring data exists; add a top-level guard in each async handler (the function parameter named data in the governance handlers) to check if data is present and is an object (e.g., if (!data || typeof data !== 'object') return { success: false, error: "request body is required" }), then proceed to validate data.memoryIds as you already do; apply the same pattern to the other handler around lines 41-47 to avoid throwing when the request has no body.

src/mcp/server.ts-290-312 (1)
290-312: ⚠️ Potential issue | 🟠 Major

Don’t map all tool-call failures to “feature not enabled”.

These `catch` blocks return `200` + “not enabled” for any exception, including real runtime errors. That hides operational failures and makes debugging difficult.

🔧 Proposed fix pattern (example for one case):

```diff
-      } catch {
-        return {
-          status_code: 200,
-          body: {
-            content: [
-              {
-                type: "text",
-                text: "Knowledge graph not enabled. Set GRAPH_EXTRACTION_ENABLED=true",
-              },
-            ],
-          },
-        };
-      }
+      } catch (err) {
+        const msg = err instanceof Error ? err.message : String(err);
+        if (/unknown function|not found/i.test(msg)) {
+          return {
+            status_code: 200,
+            body: {
+              content: [
+                {
+                  type: "text",
+                  text: "Knowledge graph not enabled. Set GRAPH_EXTRACTION_ENABLED=true",
+                },
+              ],
+            },
+          };
+        }
+        throw err;
+      }
```

Also applies to: 331-343, 359-371, 397-409, 425-437, 493-505
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mcp/server.ts` around lines 290 - 312: the catch blocks around sdk.trigger(funcId, {}) currently map all exceptions to a 200/"not enabled" response; update each catch (locations handling sdk.trigger in server.ts) to distinguish between a "feature not enabled" error and other runtime errors: inspect the caught error (e.g., error.message or a specific error type), and if it clearly indicates the feature is disabled return the existing 200 response with the explanatory text, otherwise log the full error and return a 500 response with a safe error message/body indicating an internal failure; apply this change to all occurrences that call sdk.trigger(funcId, {}) (including the blocks at the ranges noted).

src/triggers/api.ts-624-633 (1)
624-633: ⚠️ Potential issue | 🟠 Major

Do not collapse all trigger failures into “not enabled” 404s.

These handlers convert every exception into a feature-disabled response. Internal runtime errors get misreported and become hard to diagnose.

🔧 Proposed fix pattern (example):

```diff
-    } catch {
-      return {
-        status_code: 404,
-        body: { error: "Knowledge graph not enabled" },
-      };
-    }
+    } catch (err) {
+      const msg = err instanceof Error ? err.message : String(err);
+      if (/unknown function|not found/i.test(msg)) {
+        return {
+          status_code: 404,
+          body: { error: "Knowledge graph not enabled" },
+        };
+      }
+      return { status_code: 500, body: { error: "Internal error" } };
+    }
```

Also applies to: 646-654, 678-686, 700-708, 723-730, 745-755, 780-785, 799-805, 820-824, 908-913, 927-935, 952-957
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/triggers/api.ts` around lines 624 - 633: the current handlers swallow all exceptions from sdk.trigger (e.g., the "mem::claude-bridge-read" call) and always return a 404 "not enabled" response; update each handler to detect a genuine "trigger not found / feature disabled" error (check error.code or error.message for the SDK's not-found/disabled signal) and only return the 404 in that case, otherwise log/return a 500/internal-error (or rethrow) with the actual error details so runtime faults are visible; apply this change for the sdk.trigger calls and their surrounding handler functions listed (e.g., the mem::claude-bridge-read handler and the other handlers noted in the comment).

src/config.ts-170-171 (1)
170-171: ⚠️ Potential issue | 🟠 Major

Clamp new numeric env settings to valid positive ranges.

`SNAPSHOT_INTERVAL`, `GRAPH_EXTRACTION_BATCH_SIZE`, and `CONSOLIDATION_DECAY_DAYS` accept 0/negative values today, which can break scheduling and decay calculations.

🔧 Proposed fix:

```diff
 export function loadSnapshotConfig(): {
   enabled: boolean;
   interval: number;
   dir: string;
 } {
   const env = getMergedEnv();
   return {
     enabled: env["SNAPSHOT_ENABLED"] === "true",
-    interval: safeParseInt(env["SNAPSHOT_INTERVAL"], 3600),
+    interval: Math.max(1, safeParseInt(env["SNAPSHOT_INTERVAL"], 3600)),
     dir: env["SNAPSHOT_DIR"] || join(homedir(), ".agentmemory", "snapshots"),
   };
 }
...
 export function getGraphBatchSize(): number {
-  return safeParseInt(getMergedEnv()["GRAPH_EXTRACTION_BATCH_SIZE"], 10);
+  return Math.max(1, safeParseInt(getMergedEnv()["GRAPH_EXTRACTION_BATCH_SIZE"], 10));
 }
...
 export function getConsolidationDecayDays(): number {
-  return safeParseInt(getMergedEnv()["CONSOLIDATION_DECAY_DAYS"], 30);
+  return Math.max(1, safeParseInt(getMergedEnv()["CONSOLIDATION_DECAY_DAYS"], 30));
 }
```

Also applies to: 179-189
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config.ts` around lines 170 - 171: clamp the numeric env settings by validating the parsed ints from safeParseInt and ensuring they fall into a valid positive range before assigning to the config fields: for SNAPSHOT_INTERVAL (config property interval), GRAPH_EXTRACTION_BATCH_SIZE and CONSOLIDATION_DECAY_DAYS, replace direct safeParseInt usage with a min-clamp (e.g., Math.max(parsed, 1)) so any 0/negative values become 1 (or another agreed minimum) and scheduling/decay logic doesn't break; apply the same clamp pattern to the other occurrences around lines 179-189.

src/functions/consolidation-pipeline.ts-89-117 (1)
89-117: ⚠️ Potential issue | 🟠 Major

Prevent duplicate semantic facts within a single run.

When a new semantic fact is inserted, it is not added to `existingSemantic`. Repeated `<fact>` entries in the same model response will create duplicate records.

🔧 Proposed fix:

```diff
       } else {
         const sem: SemanticMemory = {
           id: generateId("sem"),
           fact,
           confidence,
           sourceSessionIds: recentSummaries.map((s) => s.sessionId),
           sourceMemoryIds: [],
           accessCount: 1,
           lastAccessedAt: now,
           strength: confidence,
           createdAt: now,
           updatedAt: now,
         };
         await kv.set(KV.semantic, sem.id, sem);
+        existingSemantic.push(sem);
         newFacts++;
       }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/functions/consolidation-pipeline.ts` around lines 89 - 117: when inserting a new SemanticMemory inside the consolidation loop (look for factRegex, existingSemantic, SemanticMemory, generateId("sem"), kv.set and newFacts), append the newly created sem object into the existingSemantic collection immediately after kv.set so subsequent identical facts in the same response are detected as duplicates; ensure you use the same case-normalization used in the existing lookup (fact.toLowerCase()) when adding to existingSemantic to avoid duplicate inserts within a single run.

src/config.ts-140-150 (1)
140-150: ⚠️ Potential issue | 🟠 Major

Claude project slugging is not cross-platform safe.

Only replacing `/` misses Windows separators and reserved characters (`\`, `:`), which can produce invalid or incorrect bridge paths.

🔧 Proposed fix:

```diff
-  const safePath = projectPath.replace(/\//g, "-").replace(/^-/, "");
+  const safePath = projectPath
+    .replace(/^[A-Za-z]:/, "")
+    .replace(/[\\/]/g, "-")
+    .replace(/[^A-Za-z0-9._-]/g, "-")
+    .replace(/^-+/, "");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config.ts` around lines 140 - 150: the current safePath generation only replaces forward slashes and so can produce invalid paths on Windows; update the safePath creation (used before the memoryFilePath assignment) to sanitize projectPath across platforms by normalizing and replacing all path separators and reserved characters (e.g., both '/' and '\\', ':' and other Windows-reserved chars like '*?"<>|') with a safe separator, then trim any leading hyphen; implement this change where safePath is computed (referencing projectPath and safePath) so memoryFilePath is built from a cross-platform-safe slug.

src/functions/consolidation-pipeline.ts-161-195 (1)
161-195: ⚠️ Potential issue | 🟠 Major

Prevent duplicate procedures within a single extraction response.

`existingProcs` is loaded once and not updated after inserts. Duplicate `<procedure ...>` blocks in one response can create duplicate procedural memories.

🔧 Proposed fix:

```diff
       } else {
         const proc: ProceduralMemory = {
           id: generateId("proc"),
           name,
           steps,
           triggerCondition: trigger,
           frequency: 1,
           sourceSessionIds: [],
           strength: 0.5,
           createdAt: now,
           updatedAt: now,
         };
         await kv.set(KV.procedural, proc.id, proc);
+        existingProcs.push(proc);
         newProcs++;
       }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/functions/consolidation-pipeline.ts` around lines 161 - 195: the code currently finds existing procedures by checking the initially loaded existingProcs array, so duplicate <procedure> blocks in a single response can create duplicate entries; after creating or updating a proc inside the loop (the block that uses existingProcs, procRegex, kv.set, generateId, ProceduralMemory, newProcs), update the in-memory lookup so subsequent iterations see the change — for example, add the newly created proc to existingProcs (or maintain a Map keyed by lowercased name and insert/update it after kv.set) so duplicates in the same response are detected and de-duplicated.

src/functions/consolidation-pipeline.ts-221-227 (1)
221-227: ⚠️ Potential issue | 🟠 Major

Procedural decay is currently not persisted.

`applyDecay` is called on a cloned array, but writes are done from the original `procedural` array, so procedural strengths never change.

🔧 Proposed fix:

```diff
-  applyDecay(
-    procedural.map((p) => ({ ...p, lastAccessedAt: undefined })),
-    decayDays,
-  );
+  applyDecay(procedural, decayDays);
   for (const p of procedural) {
     await kv.set(KV.procedural, p.id, p);
   }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/functions/consolidation-pipeline.ts` around lines 221 - 227: applyDecay is being called on a cloned array (procedural.map(...)) but the code persists the original procedural items, so the decayed strengths are never saved; change the flow to persist the decayed results (e.g., capture the return value of applyDecay into a variable like updatedProcedural or mutate the original array) and then iterate over that updated collection when calling kv.set(KV.procedural, p.id, p), ensuring ProceduralMemory entries reflect decayDays and lastAccessedAt changes.

src/functions/claude-bridge.ts-3-4 (1)
3-4: ⚠️ Potential issue | 🟠 Major

Replace synchronous filesystem APIs with async variants in request handlers.

Both `mem::claude-bridge-read` (lines 86, 89) and `mem::claude-bridge-sync` (lines 138-141) use synchronous filesystem operations inside async handler functions. These calls block the Node.js event loop and degrade API responsiveness under concurrent requests.

🔧 Proposed fix direction:

```diff
-import { readFileSync, writeFileSync, mkdirSync, existsSync } from "node:fs";
+import { existsSync } from "node:fs";
+import { readFile, writeFile, mkdir } from "node:fs/promises";
...
-    const content = readFileSync(config.memoryFilePath, "utf-8");
+    const content = await readFile(config.memoryFilePath, "utf-8");
...
-    if (!existsSync(dir)) {
-      mkdirSync(dir, { recursive: true });
-    }
-    writeFileSync(config.memoryFilePath, md, "utf-8");
+    if (!existsSync(dir)) {
+      await mkdir(dir, { recursive: true });
+    }
+    await writeFile(config.memoryFilePath, md, "utf-8");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/functions/claude-bridge.ts` around lines 3 - 4, The handlers mem::claude-bridge-read and mem::claude-bridge-sync currently call readFileSync, writeFileSync, mkdirSync and existsSync inside async request handlers which blocks the event loop; change them to use the async fs API (e.g. fs.promises or node:fs/promises) and await fs.readFile, fs.writeFile, fs.mkdir({recursive:true}) and check existence with fs.access or try/catch instead of existsSync; update the imports (remove sync functions from "node:fs" and import the promises API) and ensure each handler awaits the I/O calls and properly catches and logs errors so the handlers remain non-blocking.
🟡 Minor comments (6)
src/mcp/transport.ts-32-43 (1)
32-43: ⚠️ Potential issue | 🟡 Minor

Consider validating the JSON-RPC request structure.

The parsed JSON is cast to `JsonRpcRequest` without validation. Malformed requests (missing `id`, missing `method`, or wrong `jsonrpc` version) will propagate to the handler or cause runtime errors when accessing `request.id` or `request.method`.

🛡️ Proposed fix to add basic validation:

```diff
   let request: JsonRpcRequest;
   try {
     request = JSON.parse(trimmed);
   } catch {
     const error: JsonRpcResponse = {
       jsonrpc: "2.0",
       id: 0,
       error: { code: -32700, message: "Parse error" },
     };
     process.stdout.write(JSON.stringify(error) + "\n");
     return;
   }

+  if (
+    request.jsonrpc !== "2.0" ||
+    request.id === undefined ||
+    typeof request.method !== "string"
+  ) {
+    const error: JsonRpcResponse = {
+      jsonrpc: "2.0",
+      id: request.id ?? 0,
+      error: { code: -32600, message: "Invalid Request" },
+    };
+    process.stdout.write(JSON.stringify(error) + "\n");
+    return;
+  }
+
   try {
     const result = await handler(request.method, request.params || {});
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mcp/transport.ts` around lines 32 - 43: the parsed JSON is being assumed to be a valid JsonRpcRequest; add explicit validation right after parsing (inside the try block, before using `request`): verify `request.jsonrpc === "2.0"`, `typeof request.method === "string"` (non-empty), and that `request.id` is present (allow number, string or null per your RPC rules); if validation fails, write a JSON-RPC error response with code -32600 and message "Invalid Request" (include the parsed `id` if present or null otherwise) and return. Update the error creation/emit logic that currently builds the Parse error to instead produce the Invalid Request response when validation fails, and keep the existing -32700 handler for actual JSON.parse failures. Use the existing `request`, `JsonRpcRequest`, and `JsonRpcResponse` symbols to locate and implement this check.

src/functions/export-import.ts-268-287 (1)
268-287: ⚠️ Potential issue | 🟡 Minor

Import of new data types doesn't respect the `strategy` parameter.

The import logic for `graphNodes`, `graphEdges`, `semanticMemories`, and `proceduralMemories` always overwrites existing entries regardless of the `strategy` ("merge", "replace", "skip"). This is inconsistent with how sessions, memories, and observations handle the skip strategy (lines 210-266). Additionally, these new fields lack the size validation applied to other imports (e.g., `MAX_MEMORIES`, `MAX_SESSIONS`).

♻️ Proposed fix to add strategy handling (example for graphNodes):

```diff
   if (importData.graphNodes) {
     for (const node of importData.graphNodes) {
+      if (strategy === "skip") {
+        const existing = await kv
+          .get<GraphNode>(KV.graphNodes, node.id)
+          .catch(() => null);
+        if (existing) {
+          stats.skipped++;
+          continue;
+        }
+      }
       await kv.set(KV.graphNodes, node.id, node);
     }
   }
```

Similar changes would apply to `graphEdges`, `semanticMemories`, and `proceduralMemories`.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/functions/export-import.ts` around lines 268 - 287: the import loop for importData.graphNodes, graphEdges, semanticMemories, and proceduralMemories always calls kv.set and thus overwrites existing entries and skips size checks; update these loops to respect the incoming strategy parameter ("merge", "replace", "skip") the same way sessions/memories/observations do (see existing logic around the lines handling sessions and observations), and add the same max-size validation (e.g., MAX_MEMORIES, MAX_SESSIONS) before inserting items: for "skip" check kv.get for existence and continue if present, for "merge" read the existing item, merge fields then kv.set, and for "replace" always kv.set; ensure you use the same KV keys (KV.graphNodes, KV.graphEdges, KV.semantic, KV.procedural) and the same async kv.get/kv.set pattern as other imports so behavior is consistent.

test/audit.test.ts-60-85 (1)
60-85: ⚠️ Potential issue | 🟡 Minor

Timestamp-order assertions depend on wall-clock sleeps and can become flaky.

Lines 62 and 84 rely on `setTimeout` to force ordering. Under noisy CI timing, this can intermittently fail. Prefer deterministic timestamps in test fixtures for stable ordering/filter checks, as in the sketch below.
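A minimal sketch of the deterministic-clock approach with vitest fake timers; the `recordAudit`/`queryAudit` import path, payload shape, and newest-first ordering are assumptions to adapt to the real module:

```ts
import { it, expect, vi, afterEach } from "vitest";
// Hypothetical import path and signatures; match the real audit module.
import { recordAudit, queryAudit } from "../src/functions/audit";

afterEach(() => {
  vi.useRealTimers(); // never leak a fake clock into other tests
});

it("orders audit entries deterministically", async () => {
  vi.useFakeTimers();

  // Pin the clock, write, then jump forward instead of sleeping.
  vi.setSystemTime(new Date("2024-01-01T00:00:00Z"));
  await recordAudit({ action: "create", target: "mem_1" }); // assumed shape

  vi.setSystemTime(new Date("2024-01-01T00:00:05Z"));
  await recordAudit({ action: "delete", target: "mem_1" });

  // Ordering now depends only on the injected times, not on CI scheduling.
  const entries = await queryAudit({});
  expect(entries[0].action).toBe("delete"); // assumes newest-first ordering
});
```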
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@test/audit.test.ts` around lines 60 - 85: the tests are flaky because they use real-time sleeps to order entries; instead make timestamps deterministic by either passing explicit timestamps into recordAudit (use unique monotonic timestamps for early/late calls) or by mocking the clock (e.g., jest.useFakeTimers / jest.setSystemTime or spy on Date.now) before calling recordAudit, then restore the clock; update the queryAudit tests to assert ordering based on the injected/mock timestamps rather than relying on setTimeout.

README.md-683-693 (1)
683-693: ⚠️ Potential issue | 🟡 Minor

Data model table includes an undefined KV scope.

`mem:confidence` is documented here, but it is not present in the `src/state/schema.ts` KV definitions. Please align the docs to the actual schema (or add the missing scope in code).

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 683 - 693: the README lists a KV scope `mem:confidence` that isn't defined in the live schema; update the source to match the docs by either adding the missing KV scope to the schema exports in src/state/schema.ts (define the `mem:confidence` key/constant and include it in whatever KV scope/enum or mapping you use, e.g., alongside `mem:semantic`, `mem:procedural`) or remove/replace the `mem:confidence` entry from the README so it matches the current schema; ensure the change references the same naming (`mem:confidence`) and the schema symbols or mapping where other `mem:` scopes are declared.

src/index.ts-158-163 (1)
158-163: ⚠️ Potential issue | 🟡 Minor

`SNAPSHOT_INTERVAL` is currently logged but not enforced.

The startup log says snapshots run every N seconds, but no periodic scheduler is wired in this path; one possible wiring is sketched below.
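A minimal sketch of enforcing the interval in the startup path, assuming the `sdk`, `kv`, and `snapshotConfig` symbols that the prompt below names, and a hypothetical `mem::snapshot-create` function id (use whatever id `registerSnapshotFunction` actually registers):

```ts
if (snapshotConfig.enabled) {
  registerSnapshotFunction(sdk, kv, snapshotConfig.dir);

  // Enforce the advertised interval instead of only logging it.
  const intervalMs = Math.max(1, snapshotConfig.interval) * 1000;
  const timer = setInterval(() => {
    sdk.trigger("mem::snapshot-create", {}).catch((err: unknown) => {
      console.error("periodic snapshot failed:", err);
    });
  }, intervalMs);

  // Don't keep the process alive solely for the snapshot loop.
  timer.unref?.();
}
```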
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/index.ts` around lines 158 - 163: the startup log mentions snapshots run every snapshotConfig.interval seconds but no scheduler is set up; when snapshotConfig.enabled is true, after calling registerSnapshotFunction(sdk, kv, snapshotConfig.dir), create a periodic timer (e.g., setInterval) that re-invokes the snapshot process on the given dir at snapshotConfig.interval seconds (validate interval is a positive number and multiply by 1000 for milliseconds); reference loadSnapshotConfig(), snapshotConfig.interval and registerSnapshotFunction(sdk, kv, snapshotConfig.dir) to locate where to add the timer and ensure you clear or store the timer if needed for shutdown.

src/triggers/api.ts-800-801 (1)
800-801: ⚠️ Potential issue | 🟡 Minor

Validate and clamp `limit` query params before forwarding.

Negative or invalid limits can pass through to downstream handlers, producing surprising pagination/slice behavior.

🔧 Proposed fix:

```diff
-  const limit = parseInt(req.query_params?.["limit"] as string) || 20;
+  const rawLimit = Number.parseInt(req.query_params?.["limit"] as string, 10);
+  const limit = Number.isFinite(rawLimit) ? Math.max(1, Math.min(500, rawLimit)) : 20;
...
-    limit: parseInt(req.query_params?.["limit"] as string) || 50,
+    limit: (() => {
+      const rawLimit = Number.parseInt(req.query_params?.["limit"] as string, 10);
+      return Number.isFinite(rawLimit) ? Math.max(1, Math.min(500, rawLimit)) : 50;
+    })(),
```

Also applies to: 839-841
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/triggers/api.ts` around lines 800 - 801, The parsed `limit` from req.query_params is not validated before being sent to sdk.trigger, which allows negative/invalid values; update the parsing logic where limit is derived (the parseInt(...) || 20 expression used before calling sdk.trigger("mem::team-feed", { limit })) to use parseInt with radix 10, treat NaN as the default (20), and clamp the value to a safe range (e.g. min 1, max 100) using Math.max/Math.min before passing it to sdk.trigger; apply the same validated/clamped logic for the other occurrence around lines 839-841 where limit is used.
🧹 Nitpick comments (8)
src/mcp/tools-registry.ts (2)
1-8: `McpToolDef` typing is too narrow for robust schema contracts.

Current typing only supports `{ type, description }`, which limits expressing constraints (enum/ranges/arrays) and weakens tool-call guidance/validation.

Proposed type refinement:

```diff
 export type McpToolDef = {
   name: string;
   description: string;
   inputSchema: {
     type: "object";
-    properties: Record<string, { type: string; description: string }>;
+    properties: Record<
+      string,
+      {
+        type: "string" | "number" | "boolean" | "array" | "object";
+        description?: string;
+        enum?: string[];
+        minimum?: number;
+        maximum?: number;
+      }
+    >;
     required?: string[];
+    additionalProperties?: boolean;
   };
 };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mcp/tools-registry.ts` around lines 1 - 8, McpToolDef's inputSchema is too restrictive; update the typing so inputSchema and its properties accept full JSON Schema-like definitions (e.g., allow enums, arrays, numeric constraints, nested objects) instead of only { type, description }. Replace the current inline shape for inputSchema/properties with a reusable broader type (e.g., JsonSchema or Record<string, any>/unknown with proper typings) and use that type for McpToolDef.inputSchema and properties entries to enable enums/ranges/arrays and nested schemas while preserving the existing required?: string[] field.
128-159: Make selector arguments required for deterministic tool behavior.

Lines 133 and 156 define selector-like fields (`direction`, `tier`) but neither schema marks them required, which can lead to ambiguous tool invocations.

Suggested schema tightening:

```diff
   {
     name: "memory_claude_bridge_sync",
     description: "Sync memory state to/from Claude Code's native MEMORY.md file.",
     inputSchema: {
       type: "object",
       properties: {
         direction: { type: "string", description: "'read' to import from MEMORY.md, 'write' to export to MEMORY.md" },
       },
+      required: ["direction"],
     },
   },
@@
   {
     name: "memory_consolidate",
     description: "Run the 4-tier memory consolidation pipeline (working -> episodic -> semantic -> procedural).",
     inputSchema: {
       type: "object",
       properties: {
         tier: { type: "string", description: "Target tier: episodic, semantic, or procedural" },
       },
+      required: ["tier"],
     },
   },
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/mcp/tools-registry.ts` around lines 128 - 159: the schemas for the tools memory_claude_bridge_sync and memory_consolidate must mark their selector fields as required to avoid ambiguous invocations: update the memory_claude_bridge_sync inputSchema to include a required: ["direction"] entry (and consider adding an enum for "direction" values like "read"|"write"), and update the memory_consolidate inputSchema to include required: ["tier"] (and optionally an enum for allowed tiers like "episodic"|"semantic"|"procedural"); modify the objects named memory_claude_bridge_sync and memory_consolidate in tools-registry.ts to add these required properties to their inputSchema.

test/mcp-standalone.test.ts (1)
21-32: Avoid hard-coded tool totals in registry tests.

These assertions will require manual updates on every tool addition/removal. Prefer deriving expected totals from `CORE_TOOLS.length + V040_TOOLS.length`.

♻️ Proposed update:

```diff
-  it("getAllTools returns 18 tools", () => {
+  it("getAllTools returns all registered tools", () => {
     const tools = getAllTools();
-    expect(tools.length).toBe(18);
+    expect(tools.length).toBe(CORE_TOOLS.length + V040_TOOLS.length);
   });

-  it("CORE_TOOLS has 10 items", () => {
-    expect(CORE_TOOLS.length).toBe(10);
+  it("CORE_TOOLS is non-empty", () => {
+    expect(CORE_TOOLS.length).toBeGreaterThan(0);
   });

-  it("V040_TOOLS has 8 items", () => {
-    expect(V040_TOOLS.length).toBe(8);
+  it("V040_TOOLS is non-empty", () => {
+    expect(V040_TOOLS.length).toBeGreaterThan(0);
   });
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@test/mcp-standalone.test.ts` around lines 21 - 32: replace the hard-coded expected counts with derived lengths: change the assertion in the "getAllTools returns 18 tools" test to compare getAllTools().length to CORE_TOOLS.length + V040_TOOLS.length, and remove or update the separate fixed-number tests so that "CORE_TOOLS has 10 items" and "V040_TOOLS has 8 items" either assert against their own .length (no fixed numbers) or are removed; use the symbols getAllTools, CORE_TOOLS, and V040_TOOLS to locate and implement the change.

test/claude-bridge.test.ts (1)
25-59: Extract shared KV/SDK test doubles to a common helper.

`mockKV` and `mockSdk` are duplicated across multiple test files in this PR. Centralizing them will reduce drift and make future test behavior changes safer; one possible shape is sketched below.
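A rough sketch of the shared helper; the method shapes are assumptions inferred from how the tests use `kv.get/set/list` and `sdk.registerFunction/trigger`:

```ts
// test/helpers/doubles.ts (hypothetical location)
type Handler = (data: unknown) => Promise<unknown> | unknown;

export function mockKV() {
  const stores = new Map<string, Map<string, unknown>>();
  const scope = (name: string) => {
    if (!stores.has(name)) stores.set(name, new Map());
    return stores.get(name)!;
  };
  return {
    async get<T>(s: string, key: string): Promise<T | null> {
      return (scope(s).get(key) as T | undefined) ?? null;
    },
    async set(s: string, key: string, value: unknown): Promise<void> {
      scope(s).set(key, value);
    },
    async list<T>(s: string): Promise<T[]> {
      return [...scope(s).values()] as T[];
    },
    async delete(s: string, key: string): Promise<void> {
      scope(s).delete(key);
    },
  };
}

export function mockSdk() {
  const fns = new Map<string, Handler>();
  return {
    // Signature is an assumption; mirror the real SDK's registerFunction.
    registerFunction(id: string, handler: Handler) {
      fns.set(id, handler);
    },
    async trigger(id: string, data: unknown) {
      const fn = fns.get(id);
      if (!fn) throw new Error(`Unknown function: ${id}`);
      return fn(data);
    },
  };
}
```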
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@test/claude-bridge.test.ts` around lines 25 - 59: the mockKV and mockSdk test doubles are duplicated across tests; extract them into a shared test helper (e.g., test/helpers/mockKVSdk.ts), export the functions mockKV and mockSdk with the same signatures and return shapes so existing tests keep types, then replace the inline implementations in this file (and other test files in the PR) with imports from that helper and remove the duplicated definitions; ensure registerFunction/trigger semantics and generic signatures for get/set/list are preserved so tests continue to compile and run.

test/team.test.ts (1)
113-129: Replace real-time delay with deterministic clock control.

Using `setTimeout(10)` for ordering can be flaky in CI. Prefer fake timers or explicit mocked timestamps for stable sort-order verification, as in the sketch below.
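For example, with vitest fake timers the 10 ms sleep becomes an exact, reproducible clock step; the `sdk` double and the feed result shape are assumptions to adapt:

```ts
import { it, expect, vi } from "vitest";

it("orders the team feed without real sleeps", async () => {
  vi.useFakeTimers();
  vi.setSystemTime(new Date("2024-06-01T10:00:00Z"));

  await sdk.trigger("mem::team-share", { itemId: "m1", itemType: "memory" });

  // Advance the mocked clock instead of awaiting setTimeout(10).
  vi.advanceTimersByTime(1000);
  await sdk.trigger("mem::team-share", { itemId: "m2", itemType: "memory" });

  const feed = await sdk.trigger("mem::team-feed", { limit: 10 });
  // sharedAt values now differ by exactly 1s, so sort order is stable.
  expect(feed.items[0].itemId).toBe("m2"); // assumes newest-first feed

  vi.useRealTimers();
});
```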
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@test/team.test.ts` around lines 113 - 129: the test uses a real-time delay (await new Promise(r => setTimeout(r, 10))) to create ordering for mem::team-share events, which is flaky; instead make the ordering deterministic by either using fake timers (e.g., jest.useFakeTimers() and advancing time) or by explicitly setting timestamps when invoking sdk.trigger for mem::team-share so that the mem::team-feed result (TeamSharedItem.sharedAt) can be asserted deterministically; locate the calls to sdk.trigger("mem::team-share", {...}) and sdk.trigger("mem::team-feed", {...}) and replace the time-based delay with one of these deterministic approaches.

test/graph.test.ts (1)
119-133: Strengthen BFS assertions to reduce false positives.

The traversal test currently checks only minimum counts. Consider asserting that the expected neighbor node (`main`) is present and connected by the expected edge type, as in the sketch below.
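A sketch of the tightened assertions; the `source`/`target` edge field names and the `"imports"` edge type are assumptions to match against the actual fixture:

```ts
const result = (await sdk.trigger("mem::graph-query", {
  startNodeId: fileNode.id,
  maxDepth: 2,
})) as GraphQueryResult;

// The expected neighbor must actually be reached by the traversal...
const mainNode = result.nodes.find((n) => n.name === "main");
expect(mainNode).toBeDefined();

// ...and connected by the expected edge, not just counted.
expect(
  result.edges.some(
    (e) =>
      e.source === fileNode.id &&
      e.target === mainNode!.id &&
      e.type === "imports",
  ),
).toBe(true);
```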
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@test/graph.test.ts` around lines 119 - 133: update the "graph-query with startNodeId does BFS traversal" test to assert specific expected neighbors and edge types instead of only minimum counts: after obtaining fileNode (from kv.list<GraphNode>) and the BFS result (as GraphQueryResult from sdk.trigger("mem::graph-query")), find a node with name "main" in result.nodes and assert it exists, then assert result.edges contains an edge whose source is fileNode.id and target is the "main" node id with the expected edge type (e.g., "imports" or whatever edge type your graph uses); keep existing depth assertions but add these presence checks to reduce false positives.

test/snapshot.test.ts (1)
134-141: Add field-level assertions for `snapshot-list` items.

This test currently checks only that `snapshots` is an array. Asserting `commitHash`, `createdAt`, and `message` shape/content would make regressions easier to catch; see the sketch below.
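A sketch of the field-level checks, assuming each item carries the `commitHash`, `createdAt`, and `message` fields named above:

```ts
const result = await sdk.trigger("mem::snapshot-list", {});
expect(Array.isArray(result.snapshots)).toBe(true);
expect(result.snapshots.length).toBeGreaterThan(0); // assumes a prior snapshot-create

for (const snap of result.snapshots) {
  expect(typeof snap.commitHash).toBe("string");
  expect(snap.commitHash.length).toBeGreaterThan(0);
  // createdAt should be a parsable timestamp.
  expect(Number.isNaN(Date.parse(snap.createdAt))).toBe(false);
  expect(typeof snap.message).toBe("string");
}
```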
🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@test/snapshot.test.ts` around lines 134 - 141: the test "snapshot-list returns snapshots from git log" currently only asserts that result.snapshots is an array; update it to assert each item from sdk.trigger("mem::snapshot-list", {}) has the expected fields and shapes: iterate result.snapshots and check for a non-empty string commitHash (optionally validate hex-ish format), a createdAt that is a parsable ISO timestamp (e.g., Date.parse(createdAt) is not NaN), and a message that is a string (non-empty); use Array.prototype.every or a for-loop to assert types/contents so regressions in commitHash/createdAt/message shape are caught.

src/mcp/in-memory-kv.ts (1)
45-54: Persist writes should be atomic to avoid truncated state files.

A crash or termination during a direct write can leave corrupted JSON.

Proposed fix:

```diff
-import { existsSync, readFileSync, writeFileSync, mkdirSync } from "node:fs";
+import { existsSync, readFileSync, writeFileSync, mkdirSync, renameSync } from "node:fs";
@@
-    writeFileSync(this.persistPath, JSON.stringify(data), "utf-8");
+    const tempPath = `${this.persistPath}.tmp`;
+    writeFileSync(tempPath, JSON.stringify(data), "utf-8");
+    renameSync(tempPath, this.persistPath);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mcp/in-memory-kv.ts` around lines 45 - 54, The persist() method currently writes directly to this.persistPath which can produce truncated/corrupted JSON on crash; change it to perform an atomic replace by writing the serialized JSON to a temporary file in the same directory (e.g., "${persistPath}.tmp" or a random temp filename), flush/close that temp file, then rename (replace) the temp file to this.persistPath (using fs.renameSync or equivalent) so the replace is atomic; ensure you still create the directory if missing and use the same serialization (JSON.stringify) before the temp write and only rename on successful write/close.
```ts
{
  entry: ["src/mcp/standalone.ts"],
  outDir: "dist",
  format: ["esm"],
  clean: false,
  sourcemap: false,
  target: "node18",
  banner: { js: "#!/usr/bin/env node" },
},
```
🧩 Analysis chain
🏁 Scripts executed:

```shell
# First, find and read tsdown.config.ts to see full context
fd -e "tsdown.config.ts" -o "tsconfig.ts"

# Also check package.json for the bin field
fd -e "package.json" | head -5

# Find tsdown.config.ts
fd "tsdown.config.ts"

# Find package.json
fd "package.json"

cat -n tsdown.config.ts

cat package.json

# Check if src/mcp/standalone.ts exists
test -f src/mcp/standalone.ts && echo "File exists" || echo "File does not exist"
```
🌐 Web query:
tsdown v0.20.3 entry file naming convention esm output default filename
💡 Result:
- Entry naming (`[name]`): if you pass entries as paths (string/array/glob), the output name is the entry file's basename (no extension); if you use the object/alias form, the object key becomes the output name (e.g. `{ main: 'src/index.ts' }` → `dist/main…`). [1]
- ESM output is the default format: tsdown defaults to ESM output (`--format esm`). [2]
- Default output filename pattern (ESM): entry chunks use the pattern `[name]` plus the resolved JS extension (Rolldown's default entry pattern is `[name].js`, and tsdown applies its own extension resolution on top). [3][4]
- Default extension (ESM): `fixedExtension` controls whether ESM is forced to `.mjs`; otherwise the extension depends on package/module settings. The option is documented as: "always `.cjs` or `.mjs` … otherwise … depends on the package type." [4][5]

So, for an entry src/index.ts with default ESM output, you'll get `dist/index.<js-or-mjs>` (or `dist/<alias>.<js-or-mjs>` if using the alias/object entry form). [1][2][4][5]
References:
[1] https://tsdown.dev/options/entry
[2] https://tsdown.dev/options/output-format
[3] https://rolldown.rs/reference/outputoptions.entryfilenames
[4] https://tsdown.dev/reference/api/Interface.Options
[5] https://deepwiki.com/rolldown/tsdown/3.2-output-generation
Fix tsdown entry configuration to match bin target filename.
The standalone entry at line 30 uses array form entry: ["src/mcp/standalone.ts"], which causes tsdown to output dist/standalone.mjs. However, package.json (line 8) expects dist/mcp-standalone.mjs. This mismatch will break the agentmemory-mcp command at runtime.
Change line 30 to use the object/alias form to specify the correct output name:
Suggested fix
```ts
{
entry: { "mcp-standalone": "src/mcp/standalone.ts" },
outDir: "dist",
format: ["esm"],
clean: false,
sourcemap: false,
target: "node18",
banner: { js: "#!/usr/bin/env node" },
},
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tsdown.config.ts` around lines 29 - 37: the tsdown config's entry currently uses an array (`entry: ["src/mcp/standalone.ts"]`) which produces dist/standalone.mjs, but package.json expects dist/mcp-standalone.mjs; update the entry to the object/alias form so the output filename is mcp-standalone (e.g., `entry: { "mcp-standalone": "src/mcp/standalone.ts" }`), ensuring the build output matches the bin name referenced by package.json and preserving the other options (outDir, format, target, banner) unchanged.
Critical:
- Fix bin path mismatch (dist/mcp-standalone.mjs -> dist/standalone.mjs)

Bugs:
- Fix procedural decay
- parseFloat || 0.5 for zero values
- Snapshot restore missing graphNodes/observations
- Overly broad git catch
- N+1 graph query
- BFS duplicate edges
- Array mutation in audit sort
- Date validation

Improvements:
- Lazy readline in transport
- JSON-RPC validation
- Persist error handling
- ID collision prevention
- Defensive null coalescing for concepts/files
- Windows backslash support in config
- `direction` required for bridge sync tool
- try-catch for audit/governance MCP
- Team profile fallback teamId
- POST body for bridge sync hook
- Observations validation for graph-extract endpoint
- `await` in test
Summary
agentmemory v0.4.0 adds 7 major features to become the definitive memory layer for all AI coding agents — not just Claude Code.
New Features (all opt-in via env vars)
- Claude Code memory bridge: syncs to `~/.claude/projects/*/memory/MEMORY.md`
- Standalone cross-agent MCP server: `npx agentmemory-mcp` — works with Cursor, Codex, Gemini CLI, Windsurf
- Knowledge graph with entity extraction
- 4-tier memory consolidation pipeline
- Team/shared memory
- Memory governance with audit trail
- Git-versioned snapshots

Stats

33 functions, 18 MCP tools, 6 MCP resources, 3 MCP prompts, 49 REST endpoints, 21 KV scopes, 216 tests.
New Files (21 source + 8 tests)
- `src/functions/{audit,claude-bridge,graph,consolidation-pipeline,team,governance,snapshot}.ts`
- `src/mcp/{tools-registry,in-memory-kv,transport,standalone}.ts`
- `src/prompts/{graph-extraction,consolidation}.ts`
- `test/{audit,claude-bridge,graph,consolidation-pipeline,team,governance,snapshot,mcp-standalone}.test.ts`

Modified Files (17)
Test plan
- `npx tsc --noEmit` — compiles cleanly
- `npx vitest run` — 216 tests pass (29 test files)
- `CLAUDE_MEMORY_BRIDGE=true` logs sync to MEMORY.md
- `GRAPH_EXTRACTION_ENABLED=true` logs extraction enabled
- `npx agentmemory-mcp` responds to MCP initialize

Summary by CodeRabbit
Release Notes
New Features
Documentation
Tests