feat: v0.5.0 orchestration layer — 14 modules, 518 tests (#74)
Add actions, frontier, leases, routines, signals, checkpoints, flow-compress, mesh, and branch-aware modules with full MCP tools, REST endpoints, and 170 new tests. Includes SSRF protection, race-condition-safe keyed mutex locking, SHA-256 fingerprinting, and fixes for 29 CodeRabbit review findings.

- 9 new source files (src/functions/*)
- 8 new test files (170 tests, 386 total)
- 10 new MCP tools, 23 new REST endpoints
- 8 new KV scopes, new types for orchestration
- Version bump to 0.5.0; README and viewer updated
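The "race-condition-safe keyed mutex locking" called out above can be sketched roughly as follows. This is a minimal illustration of the per-key promise-chaining pattern, not the PR's actual implementation; only the name `withKeyedLock` is taken from the review discussion, the body is assumed.

```typescript
// Minimal keyed-mutex sketch: async work is serialized per key by chaining
// promises, so two writers to the same action id never interleave.
const lockChains = new Map<string, Promise<void>>();

async function withKeyedLock<T>(key: string, fn: () => Promise<T>): Promise<T> {
  const prev = lockChains.get(key) ?? Promise.resolve();
  let release!: () => void;
  const current = new Promise<void>((resolve) => (release = resolve));
  lockChains.set(key, current);
  await prev; // wait for the previous holder of this key
  try {
    return await fn();
  } finally {
    release(); // let the next waiter for this key proceed
    // Clean up only if we are still the tail of the chain.
    if (lockChains.get(key) === current) lockChains.delete(key);
  }
}
```

Because the lock is keyed, calls on different action ids still run concurrently; only same-key callers queue.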
Note: Reviews paused. This branch appears to be under active development; to avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings, or use the review commands to manage reviews.
📝 Walkthrough

Adds v0.5.0: new orchestration for actions, leases, routines, signals, checkpoints, mesh, sketches, crystals, sentinels, facets; expands KV scopes and public types; extends export/import to v0.5.0; wires many new MCP tools and REST endpoints; registers new SDK functions and adds comprehensive unit tests.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant SDK
    participant Actions as mem::action-*
    participant Leases as mem::lease-*
    participant Checkpoints as mem::checkpoint-*
    participant Frontier as mem::frontier
    participant KV

    Client->>SDK: trigger mem::action-create(title, edges)
    SDK->>Actions: call action-create
    Actions->>KV: validate parent/targets
    KV-->>Actions: return entities
    Actions->>KV: persist action + edges
    KV-->>Actions: ack
    Actions-->>SDK: return action

    Client->>SDK: trigger mem::lease-acquire(actionId, agentId)
    SDK->>Leases: call lease-acquire
    Leases->>KV: withKeyedLock(actionId) -> check action
    KV-->>Leases: action data
    Leases->>KV: persist lease, update action status
    KV-->>Leases: ack
    Leases-->>SDK: return lease

    Client->>SDK: trigger mem::checkpoint-resolve(checkpointId, status=passed)
    SDK->>Checkpoints: call checkpoint-resolve
    Checkpoints->>KV: withKeyedLock(checkpointId) -> update checkpoint
    Checkpoints->>KV: find gated actions (actionEdges)
    KV-->>Checkpoints: gate states
    Checkpoints->>KV: update actions (blocked->pending) as needed
    KV-->>Checkpoints: ack
    Checkpoints-->>SDK: return result

    Client->>SDK: trigger mem::frontier(project, limit)
    SDK->>Frontier: call frontier
    Frontier->>KV: load actions, edges, leases, checkpoints
    KV-->>Frontier: datasets
    Frontier->>Frontier: compute scores, filter blockers/leases
    Frontier-->>SDK: return ranked frontier
    SDK-->>Client: frontier result
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 18
🧹 Nitpick comments (8)
src/viewer/server.ts (1)
501-515: Use the shared version constant here.

These health payloads are another manual `"0.5.0"` copy. The repo already has `src/version.ts`, so the health API can drift from the exported schema and viewer surfaces on the next release.

♻️ Suggested change

```diff
 import { getLatestHealth } from "../health/monitor.js";
+import { VERSION } from "../version.js";
 ...
-      version: "0.5.0",
+      version: VERSION,
 ...
-      version: "0.5.0",
+      version: VERSION,
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/viewer/server.ts` around lines 501-515: the health responses hardcode "0.5.0", which can drift; import the shared version export from src/version.ts (e.g., the exported symbol named VERSION) at the top of the file and replace the literal "0.5.0" in the JSON health payloads (both the try and catch blocks) with that imported constant so the health API always reflects the repository-wide version.

test/actions.test.ts (1)
350-369: Avoid real sleeps in the ordering setup.

These `setTimeout(5)` gaps make the suite slower and still timer-resolution-dependent. `vi.useFakeTimers()` / `vi.setSystemTime()` will make the ordering deterministic without wall-clock waits.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@test/actions.test.ts` around lines 350-369: the beforeEach setup uses real 5ms setTimeout gaps to order sdk.trigger calls; switch to deterministic fake timers by calling vi.useFakeTimers() (and vi.setSystemTime for a base timestamp) before issuing the sdk.trigger calls, then advance the fake clock (vi.advanceTimersByTime or vi.setSystemTime increments) between the sdk.trigger invocations so the created items have distinct timestamps without real sleeps, and finally restore timers with vi.useRealTimers() in afterEach.

src/index.ts (1)
248-249: Prefer deriving these inventory counts.

"72 REST + 28 MCP tools + 6 MCP resources + 3 MCP prompts" is now a second source of truth that will drift again on the next surface-area change. Export the counts from the registry modules or compute them at startup instead of hardcoding them here.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/index.ts` around lines 248-249: replace the hardcoded inventory string with computed counts by importing the registries that hold those entries (the REST endpoints registry and the MCP tools/resources/prompts registries), computing their sizes (length/size) at startup, and building the log message dynamically so the message always reflects the actual registries.

test/mesh.test.ts (1)
48-486: Please add sync + SSRF regression cases.

This suite covers register/list/receive/remove, but it never exercises `mem::mesh-sync` or the private/local URL rejection rules called out in the mesh work. That leaves the most security-sensitive path unpinned.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@test/mesh.test.ts` around lines 48-486: add a new describe block (e.g., "mesh-sync and SSRF") that uses sdk.trigger("mem::mesh-sync", {...}) to exercise successful sync of peers/memories/actions and assert KV changes (keys like "mem:mesh", "mem:memories", "mem:actions"), plus regression cases that call mem::mesh-register or mem::mesh-sync with private/local URLs (e.g., 127.0.0.1, ::1, 10.x.x.x, 192.168.x.x, localhost) and assert the handler returns a failure whose error mentions SSRF/local-URL rejection; reuse the existing mockSdk/mockKV helpers and verify no KV writes occurred for rejected entries.

src/functions/mesh.ts (2)
170-211: Consider stricter validation for incoming mesh data.

The `mem::mesh-receive` function validates `id` and `updatedAt` but doesn't validate other required fields. Malformed incoming data could corrupt the KV store. Consider adding schema validation or, at minimum, checking critical fields like `type`, `title`, `status`, etc.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/functions/mesh.ts` around lines 170-211: the mem::mesh-receive handler only checks id and updatedAt for Memory and Action objects, risking malformed entries in KV.memories / KV.actions; perform stricter checks (or call a shared validator) before kv.set — verify required fields (Memory.type, Memory.title, Memory.status and Action.type, Action.status, and any required payload shapes), ensure enums/strings have allowed values and timestamps parse, and reject or skip entries that fail validation so only well-formed objects are written; add clear error logging for skipped items to aid debugging.
6-27: SSRF protection could be bypassed with additional private address formats.The blocklist is good but incomplete. Consider also blocking:
.localand.localhostdomain suffixes- IPv6 private ranges (e.g.,
fe80::,fc00::,fd00::)[::1]in bracketed notation- DNS rebinding via CNAME to private IPs (harder to mitigate client-side)
♻️ Suggested enhancement for more complete SSRF protection
function isAllowedUrl(urlStr: string): boolean { try { const parsed = new URL(urlStr); if (parsed.protocol !== "http:" && parsed.protocol !== "https:") return false; const host = parsed.hostname.toLowerCase(); if ( host === "localhost" || host === "127.0.0.1" || host === "::1" || host === "0.0.0.0" || + host.endsWith(".localhost") || + host.endsWith(".local") || host.startsWith("10.") || host.startsWith("192.168.") || host === "169.254.169.254" || - /^172\.(1[6-9]|2\d|3[01])\./.test(host) + /^172\.(1[6-9]|2\d|3[01])\./.test(host) || + host.startsWith("fc") || + host.startsWith("fd") || + host.startsWith("fe80:") ) { return false; } return true; } catch { return false; } }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/functions/mesh.ts` around lines 6 - 27, The isAllowedUrl function currently misses several private/reserved address formats; update it to reject hostnames that end with ".local" or ".localhost", explicitly handle IPv6 addresses (strip surrounding brackets if present) and reject loopback/bracketed "::1", and block IPv6 private/link-local prefixes (e.g., addresses starting with "fe80:", "fc00:", "fd00:"); implement these checks inside isAllowedUrl (use the existing parsed = new URL(urlStr) and parsed.hostname logic) and return false when any of those conditions match.src/functions/flow-compress.ts (1)
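Pulling the blocklist and the suggested additions together, a self-contained sketch of the hardened check might look like the following. This is illustrative, not the merged code; it also tightens two details relative to the raw suggestion (it blocks the whole `127.0.0.0/8` loopback range rather than only `127.0.0.1`, and it requires IPv6 syntax before applying the `fc`/`fd` prefixes so domains like `fcexample.com` are not caught).

```typescript
// Hardened SSRF hostname check, combining the original blocklist with the
// reviewer's suggested additions. Sketch only; does not address DNS rebinding.
function isAllowedUrl(urlStr: string): boolean {
  let parsed: URL;
  try {
    parsed = new URL(urlStr);
  } catch {
    return false;
  }
  if (parsed.protocol !== "http:" && parsed.protocol !== "https:") return false;
  // Node's URL keeps brackets around IPv6 literals; strip them first.
  const host = parsed.hostname.toLowerCase().replace(/^\[|\]$/g, "");
  if (host.includes(":")) {
    // IPv6: loopback, link-local (fe80::/10), unique-local (fc00::/7).
    return !(host === "::1" || host.startsWith("fe80:") || /^f[cd]/.test(host));
  }
  if (
    host === "localhost" ||
    host.endsWith(".localhost") ||
    host.endsWith(".local") ||
    host === "0.0.0.0" ||
    host.startsWith("127.") || // whole loopback /8, not just 127.0.0.1
    host.startsWith("10.") ||
    host.startsWith("192.168.") ||
    host === "169.254.169.254" ||
    /^172\.(1[6-9]|2\d|3[01])\./.test(host)
  ) {
    return false;
  }
  return true;
}
```

As the review notes, a client-side hostname check cannot stop DNS rebinding; that needs resolution-time validation or a network-layer egress policy.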
153-173: Static analysis flagged potential ReDoS, but risk is low in this context.

The `extract` function uses a dynamic RegExp with the `tag` parameter. While the static analyzer flagged this, the risk is mitigated because:

- The `tag` values are hardcoded strings (`"goal"`, `"outcome"`, etc.) from internal code
- The LLM response could theoretically contain adversarial patterns, but the pattern `<${tag}>([\\s\\S]*?)</${tag}>` is non-backtracking for well-formed input

If you want to be defensive against malformed LLM responses, consider a simple string-based extraction or add a timeout.

♻️ Alternative: String-based extraction without regex

```ts
const extract = (tag: string): string => {
  const startTag = `<${tag}>`;
  const endTag = `</${tag}>`;
  const startIdx = response.indexOf(startTag);
  if (startIdx === -1) return "";
  const contentStart = startIdx + startTag.length;
  const endIdx = response.indexOf(endTag, contentStart);
  if (endIdx === -1) return "";
  return response.slice(contentStart, endIdx).trim();
};
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/functions/flow-compress.ts` around lines 153-173: replace the RegExp-based extraction in parseFlowSummary's inner extract(tag) with a deterministic string-based approach — build startTag/endTag strings, find start and end indices using indexOf (searching for endTag from contentStart), and return the sliced substring trimmed, or empty if the tags are not found; keep the same returned object keys (goal, outcome, steps, discoveries, lesson) so callers are unchanged.
15-28: Return cloned values from the KV double.

`get` and `list` hand back live objects from the backing `Map`, and `set` stores the caller's object by reference. Mutating a fetched record can therefore change the store without a `set`, which makes these tests less representative and can hide missing-persistence bugs.

💡 Suggested fix

```diff
 function mockKV() {
   const store = new Map<string, Map<string, unknown>>();
+  const clone = <T>(value: T): T =>
+    JSON.parse(JSON.stringify(value)) as T;
   return {
     get: async <T>(scope: string, key: string): Promise<T | null> => {
-      return (store.get(scope)?.get(key) as T) ?? null;
+      const value = store.get(scope)?.get(key);
+      return value === undefined ? null : clone(value as T);
     },
     set: async <T>(scope: string, key: string, data: T): Promise<T> => {
       if (!store.has(scope)) store.set(scope, new Map());
-      store.get(scope)!.set(key, data);
-      return data;
+      const copy = clone(data);
+      store.get(scope)!.set(key, copy);
+      return clone(copy);
     },
     delete: async (scope: string, key: string): Promise<void> => {
       store.get(scope)?.delete(key);
     },
     list: async <T>(scope: string): Promise<T[]> => {
       const entries = store.get(scope);
-      return entries ? (Array.from(entries.values()) as T[]) : [];
+      return entries
+        ? Array.from(entries.values(), (value) => clone(value as T))
+        : [];
     },
   };
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@test/checkpoints.test.ts` around lines 15-28: the in-memory KV implementation (get, set, list) returns and stores references to caller objects, allowing external mutation to alter the store; change set to store a cloned copy of data (structuredClone, or JSON.parse(JSON.stringify(...)) as a fallback) so the backing Map never holds caller references, change get to return a clone of the stored value, and change list to return an array of cloned values so callers receive deep copies and cannot mutate the store indirectly; keep delete unchanged.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@README.md`:
- Around line 99-100: The README has inconsistent numeric counts—update all
stale literal occurrences to match the new totals already introduced earlier
(e.g., replace any "Tools (18)" with "Tools (28)", replace the "49 REST
endpoints" string with the current REST endpoint total used in the summary, and
replace "216 tests" with the current test count); search the file for the exact
strings "49 REST endpoints", "Tools (18)", "216 tests" and the other ranges
called out and make each numeric literal consistent with the new totals (ensure
any parenthetical/tool/resource/prompt counts match the header summary: 28
tools, 6 resources, 3 prompts).
In `@src/functions/actions.ts`:
- Around line 28-33: The Action is always initialized with status "pending" even
when it has unmet requirements; change the creation logic in the Action
initializer (the Action object with id from generateId("act"), title,
description, status, priority) to set status to "blocked" whenever data.requires
exists and contains any requirement that is not already completed (i.e., if
data.requires.length > 0 or any required action state !== "completed"); apply
the same check to the other creation branch referenced around lines 61-79 so new
actions with unresolved requires start as "blocked" instead of "pending".
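The status-derivation fix described in this prompt can be sketched as follows. The shape of the Action status union and the name of the "completed" terminal state (`"done"` here) are assumptions drawn from the surrounding review, not from the repo:

```typescript
// Sketch of the suggested fix: derive an action's initial status from its
// unmet requirements instead of hardcoding "pending". Types are illustrative;
// the actual Action union and terminal status name live in src/types.ts.
type ActionStatus = "pending" | "blocked" | "active" | "done" | "cancelled";

interface RequiredAction {
  id: string;
  status: ActionStatus;
}

function initialStatus(requiredActions: RequiredAction[]): ActionStatus {
  // Any requirement not yet in its terminal state blocks the new action.
  const hasUnmet = requiredActions.some((a) => a.status !== "done");
  return hasUnmet ? "blocked" : "pending";
}
```

With no requirements (or all already completed) the action starts `"pending"`; otherwise it starts `"blocked"` and is released later by the checkpoint/requires unblock path.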
- Around line 51-79: The action is written to storage before validating edges,
causing partial persistence if an edge is invalid; move validation of data.edges
(check every e.type against validEdgeTypes and verify target exists via
kv.get(KV.actions, e.targetActionId)) to run before the initial await
kv.set(KV.actions, action.id, action), and only after all edges are validated
create the ActionEdge objects (using generateId("ae")), persist them with
kv.set(KV.actionEdges, ...), and then return success; keep the same error
responses but ensure no action or edge is stored unless all edges pass
validation.
In `@src/functions/branch-aware.ts`:
- Around line 144-166: The current filter on sessions (variable matching) only
scopes by projectRoot and ignores branch, so sessions from other worktrees bleed
through; update the filter used on sessions (from KV.sessions) to also require
the session's branch to match the resolved branch value: inside the
matching.filter callback, after resolving whether s.project/s.cwd maps to
projectRoot (the existing checks), add a branch check (e.g., if (s.branch !==
branch) return false) so only sessions with the same branch/worktree are
returned (use Session.branch, worktreeInfo.branch/branch variables and keep
existing cwd/project checks).
In `@src/functions/checkpoints.ts`:
- Around line 35-47: When attaching gated_by edges for each id in
data.linkedActionIds, first fetch the action via kv.get(KV.actions, actionId)
and if it doesn't exist reject/throw or return an error for that action id; if
it exists and its status is "pending" update its status to "blocked" and persist
the change via kv.set(KV.actions, actionId, updatedAction) before
creating/saving the ActionEdge (generateId("ae") / KV.actionEdges) that
references checkpoint.id; keep the existing edge-creation code but add these
existence checks and state transition so linked pending actions become blocked
immediately.
- Around line 103-114: The current unblock logic only checks checkpoint gates
(gates/allPassed) and can move an action to "pending" even if it still has
unfinished "requires" dependencies; update the readiness check to also scan
allEdges for requires dependencies for this action (e.g., const requires =
allEdges.filter(e => e.type === "requires" && e.targetActionId === actionId))
and ensure each requires' source is satisfied via cpMap (const reqSatisfied =
requires.every(r => { const dep = cpMap.get(r.sourceActionId); return dep &&
dep.status === "passed"; })); only set action.status = "pending", update
updatedAt, kv.set(KV.actions, action.id, action), and increment unblockedCount
when both allPassed and reqSatisfied are true.
In `@src/functions/export-import.ts`:
- Line 105: The supportedVersions set currently includes "0.5.0" even though
mem::export/mem::import and the replace logic don't round-trip the new
orchestration scopes; either remove "0.5.0" from the supportedVersions Set
declaration or implement full import/export and replace behavior for the new
scopes (actions, action edges, leases, routines, routine runs, signals,
checkpoints, mesh peers) so they are exported, imported, and cleared by replace;
locate the supportedVersions constant and either drop "0.5.0" or add the missing
serialization/deserialization and scope-clearing code paths in the mem::export,
mem::import, and replace handlers to ensure lossless backups for version 0.5.0.
In `@src/functions/leases.ts`:
- Line 20: The lock key used in withKeyedLock (currently
`mem:lease:${data.actionId}`) must be changed to the same action-level lock
`mem:action:${data.actionId}` so all handlers that write KV.actions use a single
consistent lock; update every occurrence (the withKeyedLock invocations at/near
the shown lines and the other instances around lines 83, 126, 162-163) to use
`mem:action:${actionId}` (or `data.actionId` as in the surrounding function) to
prevent concurrent lease/action/checkpoint updates from interleaving.
- Around line 25-27: The check that prevents leasing completed actions only
rejects statuses "done" and "cancelled", allowing actions with status "blocked"
to be leased and reactivated; update the conditional(s) that inspect
action.status (the existing checks returning { success: false, error: "action
already completed" }) to also treat "blocked" as not leasable—i.e., include
"blocked" in the disallowed statuses (apply the change to both occurrences that
check action.status in src/functions/leases.ts so blocked actions cannot be
acquired or flipped to active).
- Around line 84-90: The current lookup for activeLease uses only status ===
"active" and thus returns expired leases; update the filter where leases are
searched (the variable leases and activeLease referencing KV.leases, matching on
actionId and agentId and status) to also check the expiry timestamp (expiresAt)
and only treat a lease as active if expiresAt is in the future (e.g., ensure
Date.now() < new Date(l.expiresAt).getTime() or compare numeric expiry). Apply
the same expiry-aware check to the other identical lookup blocks (the similar
logic around lines 99-109) so expired holders cannot release or complete
actions.
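The expiry-aware lookup this prompt asks for amounts to one extra predicate. A minimal sketch, with an assumed Lease shape (the real fields live in src/types.ts):

```typescript
// Sketch of the expiry-aware "active lease" check: a lease counts as active
// only when its status says so AND its expiry is still in the future.
interface Lease {
  actionId: string;
  agentId: string;
  status: "active" | "released" | "completed";
  expiresAt: string; // ISO timestamp
}

function findActiveLease(
  leases: Lease[],
  actionId: string,
  agentId: string,
  now: number = Date.now(),
): Lease | undefined {
  return leases.find(
    (l) =>
      l.actionId === actionId &&
      l.agentId === agentId &&
      l.status === "active" &&
      now < new Date(l.expiresAt).getTime(), // reject expired holders
  );
}
```

Passing `now` explicitly keeps the check deterministic in tests.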
In `@src/functions/mesh.ts`:
- Around line 29-58: The mem::mesh-register handler (registered in
registerMeshFunction) currently saves arbitrary peer URLs without SSRF
validation; add the same URL allowlist check used in mem::mesh-sync
(isAllowedUrl) inside the mem::mesh-register path before creating the MeshPeer
and calling kv.set for KV.mesh, returning a failure response when
isAllowedUrl(data.url) is false; keep existing duplicate checks (existing.find)
and preserve fields generated by generateId("peer") and sharedScopes handling.
In `@src/functions/routines.ts`:
- Around line 169-223: In the sdk.registerFunction handler for id
"mem::routine-status" (the async function taking data: { runId: string }),
remove the unreachable check for action.status === "failed" inside the loop over
run.actionIds and instead check the actual failure-like state(s) from the Action
type; specifically replace if (action.status === "cancelled" || action.status
=== "failed") anyFailed = true; with a check against "blocked" (e.g. if
(action.status === "cancelled" || action.status === "blocked") anyFailed =
true;) so anyFailed reflects real Action statuses before updating run.status and
persisting via kv.set.
In `@src/functions/signals.ts`:
- Around line 70-100: The mem::signal-read handler returns broadcast signals
(where s.to is falsy) to every agent but only marks readAt when sig.to ===
data.agentId, causing broadcasts to reappear; either (A) exclude broadcasts from
the API (filter out signals with !s.to before applying unreadOnly/limit and
before setting sig.readAt), or (B) implement recipient-scoped read tracking by
adding a per-recipient field (e.g., sig.readBy: Record<agentId, ISODate>) and
update/read that map instead of sig.readAt (update logic around sig.readAt,
sig.readBy, KV.signals and the mem::signal-read response and filters
accordingly). Ensure checks reference the existing symbols (s.to, sig.readAt,
sig.readBy, data.agentId, KV.signals, sig.id) so reads and unreadOnly behave
correctly for broadcasts.
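Option (B) from this prompt, per-recipient read tracking, can be sketched like this. The Signal shape is assumed for illustration; only the `to`/`readAt`/`readBy` field names come from the review text:

```typescript
// Sketch of recipient-scoped read tracking for broadcast signals: direct
// messages keep the single readAt marker, broadcasts record reads per agent.
interface Signal {
  id: string;
  to?: string; // undefined => broadcast to all agents
  readAt?: string; // read marker for direct messages
  readBy?: Record<string, string>; // agentId -> ISO read time, for broadcasts
}

function markRead(sig: Signal, agentId: string, now: string): Signal {
  if (sig.to === agentId) return { ...sig, readAt: now };
  if (!sig.to) {
    return { ...sig, readBy: { ...(sig.readBy ?? {}), [agentId]: now } };
  }
  return sig; // addressed to a different agent: leave untouched
}

function isUnreadFor(sig: Signal, agentId: string): boolean {
  if (sig.to === agentId) return !sig.readAt;
  if (!sig.to) return !(sig.readBy && sig.readBy[agentId]);
  return false; // not visible to this agent at all
}
```

With this shape, a broadcast read by agent A still shows as unread for agent B, which is exactly the reappearing-broadcast bug the prompt describes.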
In `@src/mcp/server.ts`:
- Around line 542-544: The code currently calls (args.tags as string).split(",")
without validating type, which will throw if callers send an array/object;
update the handlers that parse CSV-style inputs (references: args.tags and
args.linkedActionIds) to first check typeof value === "string" (or treat
undefined/null as empty), return a 400 Bad Request when the value is present but
not a string, and then split on "," and map/trim while filtering out empty
strings (e.g., .split(",").map(t => t.trim()).filter(Boolean)); apply the same
guard/filter pattern to both tags and linkedActionIds locations so malformed
input yields a 400 and empty entries are removed.
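The guard-and-trim pattern this prompt describes can be captured in one small helper. A sketch under the assumption that the caller maps `ok: false` to an HTTP 400 (the helper name `parseCsvArg` is invented here):

```typescript
// Sketch of the suggested guard for CSV-style string args (tags,
// linkedActionIds): absent values become [], non-strings are rejected,
// and empty entries are trimmed away.
function parseCsvArg(
  value: unknown,
): { ok: true; items: string[] } | { ok: false } {
  if (value === undefined || value === null) return { ok: true, items: [] };
  if (typeof value !== "string") return { ok: false }; // caller returns 400
  return {
    ok: true,
    items: value.split(",").map((t) => t.trim()).filter(Boolean),
  };
}
```

Both the `tags` and `linkedActionIds` handlers could then share this instead of calling `(args.tags as string).split(",")` directly.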
In `@src/state/schema.ts`:
- Around line 45-48: Replace the runtime require usage by adding static
node:crypto imports at the top and update both functions to use them: import the
needed symbols (e.g. import { createHash, randomUUID } from "node:crypto") and
then in fingerprintId use createHash("sha256")... instead of require, and in
generateId use randomUUID() (or appropriate exported function) instead of the
undefined crypto variable; update any references from crypto.createHash to
createHash and from crypto.randomUUID to randomUUID so the module works under
ESM.
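The ESM-safe rewrite this prompt describes might look like the following sketch. The function bodies (hash truncation length, id format) are assumptions for illustration; only the static `node:crypto` import and the two function names come from the prompt:

```typescript
// Sketch of the suggested fix: static node:crypto imports replace the
// runtime require, so the module works under ESM.
import { createHash, randomUUID } from "node:crypto";

export function fingerprintId(input: string): string {
  // SHA-256 fingerprint, truncated to a compact stable id (length assumed).
  return createHash("sha256").update(input).digest("hex").slice(0, 16);
}

export function generateId(prefix: string): string {
  return `${prefix}_${randomUUID()}`;
}
```

Static imports also let bundlers and type checkers see the dependency, which the `require` call hid.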
In `@src/triggers/api.ts`:
- Around line 1013-1014: The current handlers unconditionally return 2xx after
await sdk.trigger(...) (e.g., the call to sdk.trigger("mem::action-create") in
the api::action-create wrapper and the similar sdk.trigger in
api::action-update), which hides failures from the mem service; update these
handlers to inspect the resolved result for error/failure indicators (for
example result.error, result.status, or a boolean result.ok depending on the mem
response shape) and map mem-side failures to appropriate HTTP responses (return
4xx/409 for "parent action not found" or "action not found", otherwise propagate
4xx/5xx based on the mem error), only returning 2xx when the result indicates
success; apply the same check-and-map logic to the other SDK trigger calls in
the REST block near the api::action-create and api::action-update handlers (the
nearby sdk.trigger invocations around the lines referenced).
In `@test/actions.test.ts`:
- Around line 92-100: The test "clamps priority 0 to default 5 (falsy fallback)"
locks in incorrect behavior by mapping priority 0 to 5; update the test for the
current MCP schema (priority 1-10) by either asserting the create call is
rejected for invalid priority 0 or asserting priority 0 is clamped to 1 instead
of 5. Locate the test using the it(...) block and the
sdk.trigger("mem::action-create", {...}) call and change the expectation from
result.action.priority === 5 to either expect(result.success).toBe(false) (or
assert an error) or expect(result.action.priority).toBe(1) depending on the
intended contract. Ensure any test title string is updated to reflect the new
expectation.
In `@test/routines.test.ts`:
- Around line 424-439: The Action status union in the Action interface (defined
in src/types.ts) is missing "failed", which forces test/workaround casts; update
the Action.type/status union to include "failed" (so it becomes ... | "failed")
and then remove any unsafe casts used to fake the value (e.g., the cast in
test/routines.test.ts around setting action2.status). Ensure the change aligns
with the check in src/functions/routines.ts that uses action.status ===
"failed".
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d0a010e3-d41d-41e8-9c1a-eeb9f3528419
📒 Files selected for processing (31)
- README.md
- package.json
- src/functions/actions.ts
- src/functions/branch-aware.ts
- src/functions/checkpoints.ts
- src/functions/export-import.ts
- src/functions/flow-compress.ts
- src/functions/frontier.ts
- src/functions/leases.ts
- src/functions/mesh.ts
- src/functions/routines.ts
- src/functions/signals.ts
- src/index.ts
- src/mcp/server.ts
- src/mcp/tools-registry.ts
- src/state/schema.ts
- src/triggers/api.ts
- src/types.ts
- src/version.ts
- src/viewer/index.html
- src/viewer/server.ts
- test/actions.test.ts
- test/checkpoints.test.ts
- test/export-import.test.ts
- test/frontier.test.ts
- test/leases.test.ts
- test/mcp-standalone.test.ts
- test/mesh.test.ts
- test/routines.test.ts
- test/schema-fingerprint.test.ts
- test/signals.test.ts
```ts
const projectRoot = worktreeInfo.mainRepoRoot || data.cwd;
const branch = data.branch || worktreeInfo.branch;

const sessions = await kv.list<Session>(KV.sessions);

const matching = sessions.filter((s) => {
  if (s.project === projectRoot || s.cwd === projectRoot) return true;
  if (s.cwd.startsWith(projectRoot + "/")) return true;
  return false;
});

matching.sort(
  (a, b) =>
    new Date(b.startedAt).getTime() - new Date(a.startedAt).getTime(),
);

return {
  success: true,
  sessions: matching,
  projectRoot,
  branch,
  isWorktree: worktreeInfo.isWorktree,
};
```
This is not branch-scoped yet.
The filter is anchored on mainRepoRoot and never uses branch to narrow results. Sessions from other worktrees on the same repo can bleed in, while sibling worktrees outside the main repo tree can be missed altogether.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/functions/branch-aware.ts` around lines 144 - 166, The current filter on
sessions (variable matching) only scopes by projectRoot and ignores branch, so
sessions from other worktrees bleed through; update the filter used on sessions
(from KV.sessions) to also require the session's branch to match the resolved
branch value: inside the matching.filter callback, after resolving whether
s.project/s.cwd maps to projectRoot (the existing checks), add a branch check
(e.g., if (s.branch !== branch) return false) so only sessions with the same
branch/worktree are returned (use Session.branch, worktreeInfo.branch/branch
variables and keep existing cwd/project checks).
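The fix described in the prompt can be sketched as follows; the `Session` shape and the `filterSessions` helper are assumptions based on the excerpt, not the actual branch-aware.ts code:

```typescript
// Hypothetical branch-scoped session filter: anchor on projectRoot as before,
// but additionally require the session's branch to match the resolved branch.
interface Session {
  cwd: string;
  project?: string;
  branch?: string;
  startedAt: string;
}

function filterSessions(
  sessions: Session[],
  projectRoot: string,
  branch?: string,
): Session[] {
  return sessions
    .filter((s) => {
      const inProject =
        s.project === projectRoot ||
        s.cwd === projectRoot ||
        s.cwd.startsWith(projectRoot + "/");
      if (!inProject) return false;
      if (branch && s.branch !== branch) return false; // branch scoping
      return true;
    })
    .sort(
      (a, b) => new Date(b.startedAt).getTime() - new Date(a.startedAt).getTime(),
    );
}

const scoped = filterSessions(
  [
    { cwd: "/repo/wt-a", project: "/repo", branch: "main", startedAt: "2024-01-02T00:00:00Z" },
    { cwd: "/repo/wt-b", project: "/repo", branch: "feature", startedAt: "2024-01-03T00:00:00Z" },
  ],
  "/repo",
  "main",
);
```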
```ts
await kv.set(KV.checkpoints, checkpoint.id, checkpoint);

if (data.linkedActionIds && data.linkedActionIds.length > 0) {
  for (const actionId of data.linkedActionIds) {
    const edge: ActionEdge = {
      id: generateId("ae"),
      type: "gated_by",
      sourceActionId: actionId,
      targetActionId: checkpoint.id,
      createdAt: now.toISOString(),
    };
    await kv.set(KV.actionEdges, edge.id, edge);
  }
```
Creating the checkpoint never actually gates its linked actions.
This block only persists the checkpoint and gated_by edges. A linked action that is currently pending stays runnable until some other code manually flips it to blocked, and nonexistent action IDs are not rejected here either.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/functions/checkpoints.ts` around lines 35 - 47, When attaching gated_by
edges for each id in data.linkedActionIds, first fetch the action via
kv.get(KV.actions, actionId) and if it doesn't exist reject/throw or return an
error for that action id; if it exists and its status is "pending" update its
status to "blocked" and persist the change via kv.set(KV.actions, actionId,
updatedAction) before creating/saving the ActionEdge (generateId("ae") /
KV.actionEdges) that references checkpoint.id; keep the existing edge-creation
code but add these existence checks and state transition so linked pending
actions become blocked immediately.
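The validate-then-gate flow in the prompt can be sketched with plain Maps standing in for the KV store; the shapes and names here are assumptions, not the repo's actual checkpoints.ts:

```typescript
// Hypothetical gating: reject unknown action IDs and flip pending actions to
// blocked before persisting the gated_by edge.
interface Action {
  id: string;
  status: "pending" | "active" | "done" | "blocked";
}
interface ActionEdge {
  id: string;
  type: "gated_by";
  sourceActionId: string;
  targetActionId: string;
}

const actions = new Map<string, Action>();
const edges = new Map<string, ActionEdge>();
let seq = 0;

function gateActions(checkpointId: string, linkedActionIds: string[]): { error?: string } {
  for (const actionId of linkedActionIds) {
    const action = actions.get(actionId);
    if (!action) return { error: `action not found: ${actionId}` }; // reject unknown IDs
    if (action.status === "pending") {
      actions.set(actionId, { ...action, status: "blocked" }); // gate immediately
    }
    const edge: ActionEdge = {
      id: `ae_${++seq}`,
      type: "gated_by",
      sourceActionId: actionId,
      targetActionId: checkpointId,
    };
    edges.set(edge.id, edge);
  }
  return {};
}

actions.set("a1", { id: "a1", status: "pending" });
const ok = gateActions("cp_1", ["a1"]);
const missing = gateActions("cp_1", ["nope"]);
```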
```ts
export function fingerprintId(prefix: string, content: string): string {
  const crypto = require("node:crypto") as typeof import("node:crypto");
  const hash = crypto.createHash("sha256").update(content).digest("hex");
  return `${prefix}_${hash.slice(0, 16)}`;
```
Replace require() with a static node:crypto import.
The project is configured for ESM ("type": "module" in package.json, format: ["esm"] in tsdown.config.ts). The require("node:crypto") call at line 46 will fail at runtime with ReferenceError: require is not defined. Move the import to the top of the file and use the named import:
Suggested change

```diff
+import { createHash } from "node:crypto";
+
 export const KV = {
   // ...
 };

 export function fingerprintId(prefix: string, content: string): string {
-  const crypto = require("node:crypto") as typeof import("node:crypto");
-  const hash = crypto.createHash("sha256").update(content).digest("hex");
+  const hash = createHash("sha256").update(content).digest("hex");
   return `${prefix}_${hash.slice(0, 16)}`;
 }
```

Note: Line 41 in generateId() also references an undefined crypto variable—ensure that function is fixed as well (likely needs import { randomUUID } from "node:crypto" or similar).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/state/schema.ts` around lines 45 - 48, Replace the runtime require usage
by adding static node:crypto imports at the top and update both functions to use
them: import the needed symbols (e.g. import { createHash, randomUUID } from
"node:crypto") and then in fingerprintId use createHash("sha256")... instead of
require, and in generateId use randomUUID() (or appropriate exported function)
instead of the undefined crypto variable; update any references from
crypto.createHash to createHash and from crypto.randomUUID to randomUUID so the
module works under ESM.
```ts
const result = await sdk.trigger("mem::action-create", req.body);
return { status_code: 201, body: result };
```
Return 4xx/409 when the mem call reports failure.
These wrappers always emit 2xx once sdk.trigger(...) resolves. api::action-create can therefore return 201 for parent action not found, and api::action-update can return 200 for action not found. The same pattern repeats across the new REST block.
Also applies to: 1040-1041
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/triggers/api.ts` around lines 1013 - 1014, The current handlers
unconditionally return 2xx after await sdk.trigger(...) (e.g., the call to
sdk.trigger("mem::action-create") in the api::action-create wrapper and the
similar sdk.trigger in api::action-update), which hides failures from the mem
service; update these handlers to inspect the resolved result for error/failure
indicators (for example result.error, result.status, or a boolean result.ok
depending on the mem response shape) and map mem-side failures to appropriate
HTTP responses (return 4xx/409 for "parent action not found" or "action not
found", otherwise propagate 4xx/5xx based on the mem error), only returning 2xx
when the result indicates success; apply the same check-and-map logic to the
other SDK trigger calls in the REST block near the api::action-create and
api::action-update handlers (the nearby sdk.trigger invocations around the lines
referenced).
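The check-and-map logic can be sketched as a small helper; the `{ success, error }` result shape and the error strings are assumptions based on this review, not the actual mem response contract:

```typescript
// Hypothetical mapping from a mem-layer result onto an HTTP response:
// only return 2xx when the mem call actually succeeded.
interface MemResult {
  success: boolean;
  error?: string;
  [key: string]: unknown;
}

function toHttpResponse(
  result: MemResult,
  successCode = 201,
): { status_code: number; body: MemResult } {
  if (result.success) return { status_code: successCode, body: result };
  const msg = (result.error ?? "").toLowerCase();
  if (msg.includes("not found")) return { status_code: 404, body: result };
  if (msg.includes("already") || msg.includes("conflict")) {
    return { status_code: 409, body: result };
  }
  return { status_code: 400, body: result }; // generic client error
}

const created = toHttpResponse({ success: true });
const missing = toHttpResponse({ success: false, error: "parent action not found" });
const held = toHttpResponse({ success: false, error: "lease already held" });
```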
```ts
it("clamps priority 0 to default 5 (falsy fallback)", async () => {
  const result = (await sdk.trigger("mem::action-create", {
    title: "Zero priority task",
    priority: 0,
  })) as { success: boolean; action: Action };

  expect(result.success).toBe(true);
  expect(result.action.priority).toBe(5);
});
```
Don't lock in 0 -> 5 priority semantics.
The public MCP schema now documents priority as 1-10, so mapping 0 to the midpoint is a contract bug. This test should either expect rejection or clamp 0 to 1; otherwise it preserves the wrong behavior.
💡 Suggested expectation

```diff
- it("clamps priority 0 to default 5 (falsy fallback)", async () => {
+ it("clamps priority 0 to 1", async () => {
@@
-   expect(result.action.priority).toBe(5);
+   expect(result.action.priority).toBe(1);
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
it("clamps priority 0 to 1", async () => {
  const result = (await sdk.trigger("mem::action-create", {
    title: "Zero priority task",
    priority: 0,
  })) as { success: boolean; action: Action };
  expect(result.success).toBe(true);
  expect(result.action.priority).toBe(1);
});
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/actions.test.ts` around lines 92 - 100, The test "clamps priority 0 to
default 5 (falsy fallback)" locks in incorrect behavior by mapping priority 0 to
5; update the test for the current MCP schema (priority 1-10) by either
asserting the create call is rejected for invalid priority 0 or asserting
priority 0 is clamped to 1 instead of 5. Locate the test using the it(...) block
and the sdk.trigger("mem::action-create", {...}) call and change the expectation
from result.action.priority === 5 to either expect(result.success).toBe(false)
(or assert an error) or expect(result.action.priority).toBe(1) depending on the
intended contract. Ensure any test title string is updated to reflect the new
expectation.
```ts
it("marks run failed when any action has status failed", async () => {
  const action = await kv.get<Action>("mem:actions", actionIds[1]);
  (action as Action).status = "done";
  await kv.set("mem:actions", actionIds[1], action);

  const action2 = await kv.get<Action>("mem:actions", actionIds[2]);
  (action2 as unknown as { status: string }).status = "failed";
  await kv.set("mem:actions", actionIds[2], action2);

  const result = (await sdk.trigger("mem::routine-status", {
    runId,
  })) as { success: boolean; run: RoutineRun };

  expect(result.success).toBe(true);
  expect(result.run.status).toBe("failed");
});
```
Add "failed" to the Action status union type.
The Action interface in src/types.ts line 439 defines status as "pending" | "active" | "done" | "blocked" | "cancelled", but src/functions/routines.ts line 198 checks for action.status === "failed". The type cast at line 430 is a workaround for this mismatch. Update the type definition to include "failed":
```ts
status: "pending" | "active" | "done" | "blocked" | "cancelled" | "failed";
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@test/routines.test.ts` around lines 424 - 439, The Action status union in the
Action interface (defined in src/types.ts) is missing "failed", which forces
test/workaround casts; update the Action.type/status union to include "failed"
(so it becomes ... | "failed") and then remove any unsafe casts used to fake the
value (e.g., the cast in test/routines.test.ts around setting action2.status).
Ensure the change aligns with the check in src/functions/routines.ts that uses
action.status === "failed".
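The widened union and the aggregation it enables can be sketched as follows; the Action shape here is an assumption from the review excerpt, not the repo's full type:

```typescript
// Hypothetical widened status union: "failed" becomes a first-class state,
// so no unsafe cast is needed when a routine marks an action failed.
type ActionStatus = "pending" | "active" | "done" | "blocked" | "cancelled" | "failed";

interface Action {
  id: string;
  status: ActionStatus;
}

// A run aggregator can then branch on "failed" directly.
function runStatus(actions: Action[]): "failed" | "done" | "running" {
  if (actions.some((a) => a.status === "failed")) return "failed";
  if (actions.every((a) => a.status === "done")) return "done";
  return "running";
}

const failing: Action = { id: "a2", status: "pending" };
failing.status = "failed"; // valid without a cast once the union includes it
const status = runStatus([failing, { id: "a3", status: "done" }]);
```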
5 new modules inspired by beads patterns but with original naming and iii-engine real-time streaming (no polling):

- sentinels: event-driven condition watchers (webhook, timer, threshold, pattern, approval) that auto-unblock gated actions via SSE
- sketches: ephemeral action graphs with auto-expiry, promote or discard
- crystallize: LLM-powered compaction of completed action chains into compact crystal digests with key outcomes and lessons
- diagnostics: self-diagnosis across 8 categories (actions, leases, sentinels, sketches, signals, sessions, memories, mesh) with auto-heal
- facets: multi-dimensional tagging (dimension:value) with AND/OR queries

- 5 new source files, 5 new test files (132 tests, total 518)
- 9 new MCP tools (total 37), 21 new REST endpoints (total 93)
- 4 new KV scopes (total 33), new types for all modules
- README stats and function table updated
Actionable comments posted: 5
🧹 Nitpick comments (9)
src/functions/diagnostics.ts (1)
5-10: Unused import: `Checkpoint`.

The `Checkpoint` type is imported but not used anywhere in this file. It should be removed to keep imports clean.

Suggested fix

```diff
 import type {
   Action,
   ActionEdge,
   DiagnosticCheck,
   Lease,
-  Checkpoint,
   Signal,
   Sentinel,
   Sketch,
   MeshPeer,
   Session,
   Memory,
 } from "../types.js";
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/functions/diagnostics.ts` around lines 5 - 10, Remove the unused type import "Checkpoint" from the import list (the import that currently includes Action, ActionEdge, DiagnosticCheck, Lease, Checkpoint) in this module; update the import statement to only include the types actually used so there are no unused imports and linting/compilation warnings are resolved.

src/functions/crystallize.ts (1)
202-204: Silent error swallowing loses debugging information.

The `catch { continue }` block silently discards any errors from individual crystallization attempts. While continuing on failure is reasonable for batch operations, consider logging the error or including failed group keys in the response for observability.

Suggested improvement

```diff
+  const failures: Array<{ groupKey: string; error: string }> = [];
   for (const [, groupActions] of groups) {
+    const groupKey = groupActions[0].parentId ?? groupActions[0].project ?? "_ungrouped";
     const actionIds = groupActions.map((a) => a.id);
     const project = groupActions[0].project;
     try {
       const result = (await sdk.trigger("mem::crystallize", {
         actionIds,
         project,
       })) as { success: boolean; crystal?: Crystal };
       if (result.success && result.crystal) {
         crystalIds.push(result.crystal.id);
       }
-    } catch {
-      continue;
+    } catch (err) {
+      failures.push({ groupKey, error: String(err) });
     }
   }
   return {
     success: true,
     groupCount: groups.size,
     crystalIds,
+    failures: failures.length > 0 ? failures : undefined,
   };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/functions/crystallize.ts` around lines 202 - 204, The empty catch is swallowing errors; update the catch to capture the thrown error (catch (err)) and log it with the existing logger (e.g., processLogger.error or console.error) including context such as the group identifier, then push that group's key/id into a local failedGroupKeys array so failures are recorded; after the loop, include failedGroupKeys in the function's response or result object so callers can observe which groups failed (adjust any call sites if needed).

src/functions/facets.ts (1)
140-164: Clarify behavior when both `matchAll` and `matchAny` are provided.

The current logic requires ALL of `matchAll` to be present AND at least one of `matchAny` to be present. This is effectively `(ALL matchAll) AND (ANY matchAny)`. However, the tool description in `tools-registry.ts` uses the example "Find all actions tagged priority:urgent AND team:backend", which suggests pure AND logic.

Consider whether this combined behavior is intentional. If a user provides only `matchAny`, a target matching any of those facets passes. But if they provide both, they must satisfy both conditions, which may be unexpected.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/functions/facets.ts` around lines 140 - 164, The loop over targetFacetMap currently enforces (ALL matchAll) AND (ANY matchAny), which is confusing; change the logic in the function that iterates targetFacetMap so that when both data.matchAll and data.matchAny are provided you treat them as a single AND requirement by merging them into a requiredKeys set (e.g., requiredKeys = new Set([...data.matchAll, ...data.matchAny])) and check requiredKeys.every(k => entry.facetKeys.has(k)), then populate matchedFacets from the matched keys; if only data.matchAny is provided preserve the current OR behavior (anyPresent), and if only data.matchAll is provided preserve the ALL behavior — update the checks around entry.facetKeys, matched, and the object pushed to results (targetId, targetType, matchedFacets) accordingly so the combined case behaves as pure AND.

src/functions/sentinels.ts (2)
386-391: Potential performance concern with full collection scans.
`unblockLinkedActions` loads all edges, all sentinels, and all checkpoints into memory on every invocation. For large datasets, this could become a bottleneck.

Consider filtering by the specific action IDs being unblocked, or caching the gate map if called frequently in batch operations.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/functions/sentinels.ts` around lines 386 - 391, The current unblockLinkedActions implementation builds gateMap by calling kv.list for all ActionEdge, Sentinel, and Checkpoint entries (allEdges, allSentinels, allCheckpoints), which causes full collection scans; change this to only load entries related to the action IDs being unblocked (e.g., query/filter edges by source/target action IDs, and fetch only the sentinel/checkpoint IDs referenced by those edges via targeted kv.get or a batched multi-get), or add an optional precomputed gateMap/cache parameter so callers that process batches can reuse it; update unblockLinkedActions to use the filtered results (instead of allEdges/allSentinels/allCheckpoints) when building gateMap.
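The targeted-load alternative in the prompt above can be sketched as follows; the `ActionEdge` fields and the helper name are assumptions from this review, not the repo's actual code:

```typescript
// Hypothetical edge filtering: instead of materializing every gate into a map,
// keep only the gated_by edges that touch the actions being unblocked.
interface ActionEdge {
  id: string;
  type: string;
  sourceActionId: string;
  targetActionId: string;
}

function edgesForActions(allEdges: ActionEdge[], actionIds: string[]): ActionEdge[] {
  const wanted = new Set(actionIds);
  return allEdges.filter(
    (e) =>
      e.type === "gated_by" &&
      (wanted.has(e.sourceActionId) || wanted.has(e.targetActionId)),
  );
}

const relevant = edgesForActions(
  [
    { id: "ae_1", type: "gated_by", sourceActionId: "a1", targetActionId: "cp_1" },
    { id: "ae_2", type: "gated_by", sourceActionId: "a9", targetActionId: "cp_2" },
    { id: "ae_3", type: "blocks", sourceActionId: "a1", targetActionId: "a2" },
  ],
  ["a1"],
);
```

Only the sentinel/checkpoint IDs referenced by the surviving edges would then need targeted `kv.get` calls, rather than full-collection `kv.list` scans.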
125-138: Timer-based sentinel cannot be cancelled once scheduled.

The `setTimeout` at line 127 schedules a trigger but the timer handle is not stored. If the sentinel is cancelled via `mem::sentinel-cancel` before the timer fires, the callback will still execute and attempt to trigger the sentinel (though the status check at line 130 will prevent state mutation).

This is a minor inefficiency rather than a bug since the guard at line 130 prevents incorrect state changes. However, storing the timer handle in a `Map<string, NodeJS.Timeout>` would allow proper cleanup on cancellation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/functions/sentinels.ts` around lines 125 - 138, Create a module-level Map<string, NodeJS.Timeout> (e.g., sentinelTimers) to track timer handles so scheduled timer-based sentinels can be cancelled; when scheduling in the data.type === "timer" branch (where setTimeout is called), store the returned timeout handle in sentinelTimers.set(sentinel.id, handle) and ensure the timeout handler removes sentinelTimers.delete(sentinel.id) when it runs; update the cancel logic for the mem::sentinel-cancel flow to lookup the handle from sentinelTimers, call clearTimeout(handle) and delete the entry so timers are cleaned up on cancellation; keep existing withKeyedLock, KV.sentinels and unblockLinkedActions usage intact.

src/functions/sketches.ts (1)
184-186: Action deletion without per-action locks is intentional but worth documenting.

Both `sketch-discard` and `sketch-gc` delete actions without acquiring per-action locks. This works because:
- The sketch lock serializes all operations on sketch-owned actions
- Actions with `sketchId` set are effectively owned by the sketch

However, if external code modifies sketch-owned actions via `mem::action-update` while holding an action lock, there could be a race. Consider adding a brief comment explaining the ownership model.

🤖 Prompt for AI Agents
mem::action-updatewhile holding an action lock, there could be a race. Consider adding a brief comment explaining the ownership model.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/functions/sketches.ts` around lines 184 - 186, Add a brief inline comment above the loop that deletes actions (the block iterating over sketch.actionIds and calling kv.delete(KV.actions, actionId)) explaining the ownership model: sketch locks (used by sketch-discard and sketch-gc) serialize operations on sketch-owned actions and actions with sketchId set are considered owned by the sketch, so per-action locks are intentionally not acquired here; also mention the potential race if external code uses mem::action-update while holding an action lock so future maintainers understand the trade-off.

test/diagnostics.test.ts (1)
60-75: Factory helper ID generation could collide in tight loops.

The `makeAction` and similar helpers use `Date.now().toString(36)` combined with a short random suffix. In fast test loops, `Date.now()` may return the same value, and the random portion (`Math.random().toString(36).slice(2, 8)`) provides ~31 bits of entropy — usually sufficient but not guaranteed unique.

For test stability, consider using a counter or UUID.
💡 Alternative using counter
```ts
let idCounter = 0;

function makeAction(overrides: Partial<Action> = {}): Action {
  return {
    id: `act_test_${++idCounter}`,
    // ...rest
  };
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/diagnostics.test.ts` around lines 60 - 75, The makeAction factory currently builds IDs using Date.now().toString(36) + Math.random() which can collide in tight test loops; update makeAction (and any similar helpers) to produce deterministic, non-colliding IDs by using a module-scoped incrementing counter (e.g., idCounter++) or a UUID generator instead of Date.now/Math.random so repeated fast calls are unique, and ensure the new ID format (e.g., prefix like "act_test_") is applied in the makeAction function signature to keep tests stable.

test/sentinels.test.ts (1)
566-625: Consider adding test coverage for pattern-type sentinel check.

The `mem::sentinel-check` tests only cover threshold-type sentinels. The pattern-type sentinel check logic (lines 237-283 in sentinels.ts) involves session/observation iteration and regex matching, which would benefit from explicit test coverage.

💡 Example test case for pattern sentinel
```ts
it("triggers pattern sentinel when observation matches", async () => {
  // Create a session with observations
  await kv.set("mem:sessions", "ses_1", {
    id: "ses_1",
    project: "test",
    cwd: "/tmp",
    startedAt: new Date().toISOString(),
    status: "active",
    observationCount: 1,
  });

  const sentinel = (await sdk.trigger("mem::sentinel-create", {
    name: "error-watcher",
    type: "pattern",
    config: { pattern: "error|fail" },
  })) as { success: boolean; sentinel: Sentinel };

  // Add observation matching pattern after sentinel creation
  await kv.set(`mem:obs:ses_1`, "obs_1", {
    id: "obs_1",
    title: "Build failed with error",
    timestamp: new Date().toISOString(),
  });

  const result = (await sdk.trigger("mem::sentinel-check", {})) as {
    success: boolean;
    triggered: string[];
  };

  expect(result.triggered).toContain(sentinel.sentinel.id);
});
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/sentinels.test.ts` around lines 566 - 625, Add a test that covers the pattern-type sentinel flow: create a session entry via kv.set("mem:sessions", "<sessionId>", ... ) with observationCount >=1, create a pattern sentinel via sdk.trigger("mem::sentinel-create", { type: "pattern", config: { pattern: "..." } }), then insert an observation that matches the regex using kv.set(`mem:obs:<sessionId>`, "<obsId>", { title: "...matching text..." , timestamp: ... }), call sdk.trigger("mem::sentinel-check", {}) and assert the returned triggered array includes the sentinel id; this will exercise the pattern matching logic in sentinels.ts (the session/observation iteration and regex matching used by mem::sentinel-check).

test/sketches.test.ts (1)
399-404: Timing-dependent test ordering may be flaky.

The 5ms delays between sketch creations are used to ensure distinct `createdAt` timestamps for ordering tests. While this works, it could be flaky on slow CI runners or under load.
Dateor setting explicitcreatedAtvalues in the test data.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/sketches.test.ts` around lines 399 - 404, The test relies on tiny setTimeout delays around sdk.trigger("mem::sketch-create") calls to produce distinct createdAt timestamps, which is flaky; instead, make timestamps deterministic by either mocking global Date.now/Date to return controlled incremental values for the duration of this test or pass explicit createdAt values when calling sdk.trigger("mem::sketch-create") (or the factory used to create sketches) so ordering assertions do not depend on timing; update the test setup/teardown to restore Date behavior if you mock it.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/functions/crystallize.ts`:
- Line 143: The code is using an undefined type CrystallizableAction when
calling kv.list; replace that type with the correct imported type Action (i.e.,
change the generic on kv.list from CrystallizableAction to Action) so the call
let allActions = await kv.list<...>(KV.actions) compiles; ensure Action is
imported where needed or already available in the module.
In `@src/functions/sentinels.ts`:
- Around line 237-239: The code constructs a RegExp from user input
(cfg.pattern) in the pattern sentinel branch (when sentinel.type === "pattern")
and runs it against observation titles (new RegExp(...) and the loop that tests
titles), which opens a ReDoS risk; fix by validating or replacing the regex
before compilation: either run cfg.pattern through a safe-regex/redos-detector
check and reject or sanitize overly-complex patterns, or switch to a safe engine
like re2 for compilation and testing, or enforce a per-test timeout/worker to
abort long-running matches; ensure the change touches the pattern branch where
cfg.pattern is read and where new RegExp(...) is used so malicious patterns are
never compiled/executed.
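A cheap pre-compilation guard along the lines suggested above can be sketched as follows; a production fix might instead use safe-regex or RE2 as the prompt says, and this heuristic is an assumption, not the repo's actual mitigation:

```typescript
// Hypothetical guard: bound pattern length and reject nested quantifiers
// like (a+)+ or (a*)*, a common ReDoS shape, before calling new RegExp.
function compileUserPattern(pattern: string, maxLength = 200): RegExp | null {
  if (pattern.length > maxLength) return null;
  // A quantified group immediately followed by another quantifier.
  if (/\([^)]*[+*][^)]*\)[+*{]/.test(pattern)) return null;
  try {
    return new RegExp(pattern, "i");
  } catch {
    return null; // invalid regex syntax
  }
}

const safe = compileUserPattern("error|fail");
const nested = compileUserPattern("(a+)+$");
const tooLong = compileUserPattern("x".repeat(500));
```

This heuristic is conservative and can reject some harmless patterns; a per-match timeout in a worker is the stronger alternative the prompt mentions.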
In `@src/triggers/api.ts`:
- Around line 1655-1663: The create endpoints are returning HTTP 200 instead of
201; update the registered handler responses for both "api::sketch-create" and
"api::sketch-add" so that successful resource creation returns status_code: 201
(and keep the returned body/result the same). Locate the sdk.registerFunction
handlers with ids "api::sketch-create" and "api::sketch-add" and change their
success return objects from { status_code: 200, body: ... } to { status_code:
201, body: ... }.
- Around line 1608-1616: The create handler for "api::sentinel-create" returns
status_code: 200 but should return 201 to match other create endpoints; update
the return in the sdk.registerFunction async handler for function_id
"api::sentinel-create" to use status_code: 201 on success (where it currently
returns 200), and apply the same change to the handlers for "api::sketch-create"
and "api::sketch-add" so all create endpoints consistently return 201 after
successful creation.
In `@src/types.ts`:
- Around line 251-259: The ExportData interface was extended with orchestration
fields (actions, actionEdges, routines, signals, checkpoints, sentinels,
sketches, crystals, facets) but the export and import handlers still only
serialize/restore sessions, observations, memories, summaries, profiles,
graphNodes, graphEdges, semanticMemories, and proceduralMemories; update both
handlers to include the new fields so exports contain and imports restore
orchestration data: modify the export routine that builds the exported object to
add properties actions, actionEdges, routines, signals, checkpoints, sentinels,
sketches, crystals, and facets (pulling them from the same KV/schema sources as
the others), and modify the import/restore routine to check for and restore
these fields back into storage (using the same restore logic or functions used
for semanticMemories/proceduralMemories) when VERSION === "0.5.0" or present in
the payload, ensuring no silent discard of the new types.
---
Nitpick comments:
In `@src/functions/crystallize.ts`:
- Around line 202-204: The empty catch is swallowing errors; update the catch to
capture the thrown error (catch (err)) and log it with the existing logger
(e.g., processLogger.error or console.error) including context such as the group
identifier, then push that group's key/id into a local failedGroupKeys array so
failures are recorded; after the loop, include failedGroupKeys in the function's
response or result object so callers can observe which groups failed (adjust any
call sites if needed).
In `@src/functions/diagnostics.ts`:
- Around line 5-10: Remove the unused type import "Checkpoint" from the import
list (the import that currently includes Action, ActionEdge, DiagnosticCheck,
Lease, Checkpoint) in this module; update the import statement to only include
the types actually used (Action, ActionEdge, DiagnosticCheck, Lease) so there
are no unused imports and linting/compilation warnings are resolved.
In `@src/functions/facets.ts`:
- Around line 140-164: The loop over targetFacetMap currently enforces (ALL
matchAll) AND (ANY matchAny), which is confusing; change the logic in the
function that iterates targetFacetMap so that when both data.matchAll and
data.matchAny are provided you treat them as a single AND requirement by merging
them into a requiredKeys set (e.g., requiredKeys = new Set([...data.matchAll,
...data.matchAny])) and check requiredKeys.every(k => entry.facetKeys.has(k)),
then populate matchedFacets from the matched keys; if only data.matchAny is
provided preserve the current OR behavior (anyPresent), and if only
data.matchAll is provided preserve the ALL behavior — update the checks around
entry.facetKeys, matched, and the object pushed to results (targetId,
targetType, matchedFacets) accordingly so the combined case behaves as pure AND.
In `@src/functions/sentinels.ts`:
- Around line 386-391: The current unblockLinkedActions implementation builds
gateMap by calling kv.list for all ActionEdge, Sentinel, and Checkpoint entries
(allEdges, allSentinels, allCheckpoints), which causes full collection scans;
change this to only load entries related to the action IDs being unblocked
(e.g., query/filter edges by source/target action IDs, and fetch only the
sentinel/checkpoint IDs referenced by those edges via targeted kv.get or a
batched multi-get), or add an optional precomputed gateMap/cache parameter so
callers that process batches can reuse it; update unblockLinkedActions to use
the filtered results (instead of allEdges/allSentinels/allCheckpoints) when
building gateMap.
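A minimal sketch of the filtered approach: instead of listing every edge, collect gate IDs only for the actions being unblocked, then fetch just those sentinels/checkpoints with targeted gets. The `Edge` shape and the assumption that a `gated_by` edge points from action (`sourceId`) to gate (`targetId`) are illustrative, not the repo's actual types.

```typescript
// Hypothetical edge shape for illustration only.
type Edge = { id: string; sourceId: string; targetId: string; kind: string };

// Given a pre-filtered (or full) edge list and the action IDs being unblocked,
// return only the gate IDs that actually need to be fetched via kv.get.
function collectGateIds(edges: Edge[], actionIds: Set<string>): Set<string> {
  const gateIds = new Set<string>();
  for (const e of edges) {
    if (e.kind !== "gated_by") continue;          // only gating edges matter
    if (actionIds.has(e.sourceId)) gateIds.add(e.targetId);
  }
  return gateIds;
}
```

Callers processing batches could compute this set once and issue a batched multi-get for just those IDs, avoiding full scans of the sentinel and checkpoint collections.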
- Around line 125-138: Create a module-level Map<string, NodeJS.Timeout> (e.g.,
sentinelTimers) to track timer handles so scheduled timer-based sentinels can be
cancelled; when scheduling in the data.type === "timer" branch (where setTimeout
is called), store the returned timeout handle in sentinelTimers.set(sentinel.id,
handle) and ensure the timeout handler removes
sentinelTimers.delete(sentinel.id) when it runs; update the cancel logic for the
mem::sentinel-cancel flow to lookup the handle from sentinelTimers, call
clearTimeout(handle) and delete the entry so timers are cleaned up on
cancellation; keep existing withKeyedLock, KV.sentinels and unblockLinkedActions
usage intact.
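The timer registry described above can be sketched as follows. The names `sentinelTimers`, `scheduleSentinel`, and `cancelSentinel` are assumptions for illustration; the real code would wire these into the `data.type === "timer"` branch and the `mem::sentinel-cancel` flow.

```typescript
// Module-level registry so timer-based sentinels can be cancelled later.
const sentinelTimers = new Map<string, ReturnType<typeof setTimeout>>();

function scheduleSentinel(id: string, delayMs: number, fire: () => void): void {
  const handle = setTimeout(() => {
    sentinelTimers.delete(id); // self-clean when the timer fires normally
    fire();
  }, delayMs);
  sentinelTimers.set(id, handle);
}

function cancelSentinel(id: string): boolean {
  const handle = sentinelTimers.get(id);
  if (!handle) return false;    // nothing pending for this sentinel
  clearTimeout(handle);         // stop the pending timer
  sentinelTimers.delete(id);    // drop the handle so it cannot leak
  return true;
}
```

Note that in long-running Node processes a forgotten `setTimeout` handle both leaks memory and keeps the event loop referenced, which is why both the fire path and the cancel path delete the map entry.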
In `@src/functions/sketches.ts`:
- Around line 184-186: Add a brief inline comment above the loop that deletes
actions (the block iterating over sketch.actionIds and calling
kv.delete(KV.actions, actionId)) explaining the ownership model: sketch locks
(used by sketch-discard and sketch-gc) serialize operations on sketch-owned
actions and actions with sketchId set are considered owned by the sketch, so
per-action locks are intentionally not acquired here; also mention the potential
race if external code uses mem::action-update while holding an action lock so
future maintainers understand the trade-off.
In `@test/diagnostics.test.ts`:
- Around line 60-75: The makeAction factory currently builds IDs using
Date.now().toString(36) + Math.random() which can collide in tight test loops;
update makeAction (and any similar helpers) to produce deterministic,
non-colliding IDs by using a module-scoped incrementing counter (e.g.,
idCounter++) or a UUID generator instead of Date.now/Math.random so repeated
fast calls are unique, and ensure the new ID format (e.g., prefix like
"act_test_") is applied in the makeAction function signature to keep tests
stable.
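A counter-based factory along the suggested lines might look like this (the `act_test_` prefix comes from the review prompt; the padding scheme is an assumption):

```typescript
// Module-scoped counter guarantees uniqueness even in tight test loops,
// unlike Date.now().toString(36) + Math.random().
let idCounter = 0;

function nextTestId(prefix = "act_test_"): string {
  // Zero-padded base36 keeps IDs fixed-width and lexically sortable.
  return `${prefix}${(idCounter++).toString(36).padStart(6, "0")}`;
}
```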
In `@test/sentinels.test.ts`:
- Around line 566-625: Add a test that covers the pattern-type sentinel flow:
create a session entry via kv.set("mem:sessions", "<sessionId>", ... ) with
observationCount >=1, create a pattern sentinel via
sdk.trigger("mem::sentinel-create", { type: "pattern", config: { pattern: "..."
} }), then insert an observation that matches the regex using
kv.set(`mem:obs:<sessionId>`, "<obsId>", { title: "...matching text..." ,
timestamp: ... }), call sdk.trigger("mem::sentinel-check", {}) and assert the
returned triggered array includes the sentinel id; this will exercise the
pattern matching logic in sentinels.ts (the session/observation iteration and
regex matching used by mem::sentinel-check).
In `@test/sketches.test.ts`:
- Around line 399-404: The test relies on tiny setTimeout delays around
sdk.trigger("mem::sketch-create") calls to produce distinct createdAt
timestamps, which is flaky; instead, make timestamps deterministic by either
mocking global Date.now/Date to return controlled incremental values for the
duration of this test or pass explicit createdAt values when calling
sdk.trigger("mem::sketch-create") (or the factory used to create sketches) so
ordering assertions do not depend on timing; update the test setup/teardown to
restore Date behavior if you mock it.
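One way to make the timestamps deterministic is a small wrapper that stubs `Date.now` for the duration of a callback and always restores it. This is a sketch of the mocking approach the prompt suggests; `withFakeNow` is a hypothetical helper, not an existing test utility in the repo.

```typescript
// Run fn with a deterministic Date.now: each call advances by stepMs,
// so consecutive createdAt values are distinct without real delays.
function withFakeNow<T>(start: number, stepMs: number, fn: () => T): T {
  const realNow = Date.now;
  let tick = 0;
  Date.now = () => start + stepMs * tick++;
  try {
    return fn();
  } finally {
    Date.now = realNow; // always restore, even if fn throws
  }
}
```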
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 27d674c0-8ea7-44b3-ab6f-b092efd6fd03
📒 Files selected for processing (18)
- README.md
- src/functions/crystallize.ts
- src/functions/diagnostics.ts
- src/functions/facets.ts
- src/functions/sentinels.ts
- src/functions/sketches.ts
- src/index.ts
- src/mcp/server.ts
- src/mcp/tools-registry.ts
- src/state/schema.ts
- src/triggers/api.ts
- src/types.ts
- test/crystallize.test.ts
- test/diagnostics.test.ts
- test/facets.test.ts
- test/mcp-standalone.test.ts
- test/sentinels.test.ts
- test/sketches.test.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- src/mcp/server.ts
- test/mcp-standalone.test.ts
const dryRun = data.dryRun ?? false;
const cutoff = Date.now() - olderThanDays * 24 * 60 * 60 * 1000;

let allActions = await kv.list<CrystallizableAction>(KV.actions);
Undefined type CrystallizableAction - should be Action.
The type CrystallizableAction is used but not defined or imported anywhere in this file. This will cause a TypeScript compilation error. Based on the usage pattern, this should be Action.
Suggested fix
- let allActions = await kv.list<CrystallizableAction>(KV.actions);
+ let allActions = await kv.list<Action>(KV.actions);

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/functions/crystallize.ts` at line 143, The code is using an undefined
type CrystallizableAction when calling kv.list; replace that type with the
correct imported type Action (i.e., change the generic on kv.list from
CrystallizableAction to Action) so the call let allActions = await
kv.list<...>(KV.actions) compiles; ensure Action is imported where needed or
already available in the module.
if (sentinel.type === "pattern") {
  const cfg = sentinel.config as { pattern: string };
  const regex = new RegExp(cfg.pattern, "i");
🧩 Analysis chain
🌐 Web query:
What npm packages can validate regex patterns for ReDoS safety in Node.js?
💡 Result:
- safe-regex — simple boolean check: "is this pattern possibly catastrophic?" (static heuristic). [1]
- safer-regex — similar to safe-regex, with CLI-oriented usage. [2]
- redos — a RegEx DoS (ReDoS) scanner for regex patterns. [3]
- redos-detector — CLI/library to check patterns and score/report catastrophic backtracking risk (JavaScript/Node-focused). [4]
- vuln-regex-detector — detects vulnerable regexes that can trigger catastrophic backtracking in JavaScript (Node.js). [5]
- regexploit — finds ReDoS-vulnerable regexes (includes tooling to extract/scan regexes from JavaScript codebases). [6]
Sources:
[1] (npmjs.com)
[2] (skypack.dev)
[3] (npmjs.com)
[4] (npmjs.com)
[5] (npmjs.com)
[6] (github.com)
Citations:
- 1: https://www.npmjs.com/package/safe-regex?utm_source=openai
- 2: https://www.skypack.dev/view/safer-regex?utm_source=openai
- 3: https://www.npmjs.com/package/redos?utm_source=openai
- 4: https://www.npmjs.com/package/redos-detector?utm_source=openai
- 5: https://www.npmjs.com/package/vuln-regex-detector?utm_source=openai
- 6: https://github.com/doyensec/regexploit?utm_source=openai
🏁 Scripts executed (repository: rohitg00/agentmemory):

cat -n src/functions/sentinels.ts | head -250 | tail -50
wc -l src/functions/sentinels.ts

# Get the full context around lines 237-239 and examine how cfg is used
sed -n '200,280p' src/functions/sentinels.ts

# Look at the sentinel-create function to see how cfg.pattern is validated
sed -n '16,142p' src/functions/sentinels.ts

# Check the Sentinel type schema to understand what validation exists
grep -r "type Sentinel" src/ --include="*.ts" -A 15

# Look at imports and check if there's any schema validation for patterns
head -20 src/functions/sentinels.ts

# Check the Sentinel type definition
cat src/types.ts | grep -A 20 "type Sentinel"

# Check if sentinel-create has any authentication/authorization checks
grep -B 5 -A 5 "mem::sentinel-create" src/functions/sentinels.ts | head -20

# Check how observation titles are created - are they user-controlled?
git ls-files src/ | xargs grep -l "observations" | head -5

# Look at where regex.test is called in context
sed -n '237,270p' src/functions/sentinels.ts

# Find and read the types file to see Sentinel definition
find src -name "types.ts" -o -name "types.d.ts" | xargs cat | grep -A 30 "Sentinel"

# Check if there's any regex validation library already imported
grep -r "safe-regex\|redos\|regexploit" src/

# Verify the flow: user input -> pattern storage -> regex execution
grep -n "regex.test" src/functions/sentinels.ts

# Check the full function signature and any auth checks
grep -B 10 "mem::sentinel-create" src/functions/sentinels.ts | head -20

# Check the SDK function registration to see if there are global auth guards
grep -r "registerFunction" src/ | grep -i "auth\|permission" | head -5

# Look at the function parameters more carefully - any user/auth context?
sed -n '16,25p' src/functions/sentinels.ts
User-supplied regex pattern creates ReDoS vulnerability.
The cfg.pattern value comes from user input (via the sentinel-create function at line 39) and is passed directly to new RegExp() at line 239 without validation. While the pattern is validated to be a non-empty string, its complexity is never checked. Malicious patterns like (a+)+$ can cause catastrophic backtracking, freezing the event loop. The regex is then executed in a loop against observation titles (line 253), amplifying the impact.
Consider validating the pattern's complexity using an npm package like safe-regex or redos-detector before compilation.
🛡️ Proposed fix using pattern validation
+import safeRegex from "safe-regex";
+
 if (sentinel.type === "pattern") {
   const cfg = sentinel.config as { pattern: string };
+  if (!safeRegex(cfg.pattern)) {
+    continue; // Skip unsafe patterns
+  }
   const regex = new RegExp(cfg.pattern, "i");

Alternatively, use the re2 package, which doesn't support backtracking, or wrap regex execution in a timeout.
🧰 Tools
🪛 ast-grep (0.41.0)
[warning] 238-238: Regular expression constructed from variable input detected. This can lead to Regular Expression Denial of Service (ReDoS) attacks if the variable contains malicious patterns. Use libraries like 'recheck' to validate regex safety or use static patterns.
Context: new RegExp(cfg.pattern, "i")
Note: [CWE-1333] Inefficient Regular Expression Complexity [REFERENCES]
- https://owasp.org/www-community/attacks/Regular_expression_Denial_of_Service_-_ReDoS
- https://cwe.mitre.org/data/definitions/1333.html
(regexp-from-variable)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/functions/sentinels.ts` around lines 237 - 239, The code constructs a
RegExp from user input (cfg.pattern) in the pattern sentinel branch (when
sentinel.type === "pattern") and runs it against observation titles (new
RegExp(...) and the loop that tests titles), which opens a ReDoS risk; fix by
validating or replacing the regex before compilation: either run cfg.pattern
through a safe-regex/redos-detector check and reject or sanitize overly-complex
patterns, or switch to a safe engine like re2 for compilation and testing, or
enforce a per-test timeout/worker to abort long-running matches; ensure the
change touches the pattern branch where cfg.pattern is read and where new
RegExp(...) is used so malicious patterns are never compiled/executed.
sdk.registerFunction({ id: "api::sentinel-create" }, async (req: ApiRequest) => {
  const denied = checkAuth(req, secret);
  if (denied) return denied;
  const body = req.body as Record<string, unknown>;
  if (!body?.name) return { status_code: 400, body: { error: "name is required" } };
  const result = await sdk.trigger("mem::sentinel-create", body);
  return { status_code: 200, body: result };
});
sdk.registerTrigger({ type: "http", function_id: "api::sentinel-create", config: { api_path: "/agentmemory/sentinels", http_method: "POST" } });
Inconsistent status code for sentinel-create.
api::sentinel-create returns status_code: 200 on success, but similar create operations (api::action-create, api::checkpoint-create, api::routine-create) return status_code: 201. Consider returning 201 for consistency with REST conventions for resource creation.
🔧 Proposed fix
  const result = await sdk.trigger("mem::sentinel-create", body);
- return { status_code: 200, body: result };
+ return { status_code: 201, body: result };

Apply the same fix to api::sketch-create (line 1661) and api::sketch-add (line 1671).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/triggers/api.ts` around lines 1608 - 1616, The create handler for
"api::sentinel-create" returns status_code: 200 but should return 201 to match
other create endpoints; update the return in the sdk.registerFunction async
handler for function_id "api::sentinel-create" to use status_code: 201 on
success (where it currently returns 200), and apply the same change to the
handlers for "api::sketch-create" and "api::sketch-add" so all create endpoints
consistently return 201 after successful creation.
sdk.registerFunction({ id: "api::sketch-create" }, async (req: ApiRequest) => {
  const denied = checkAuth(req, secret);
  if (denied) return denied;
  const body = req.body as Record<string, unknown>;
  if (!body?.title) return { status_code: 400, body: { error: "title is required" } };
  const result = await sdk.trigger("mem::sketch-create", body);
  return { status_code: 200, body: result };
});
sdk.registerTrigger({ type: "http", function_id: "api::sketch-create", config: { api_path: "/agentmemory/sketches", http_method: "POST" } });
Sketch endpoints also use 200 instead of 201 for creates.
Same issue as sentinel-create — api::sketch-create and api::sketch-add return 200 but should return 201 for new resource creation.
Also applies to: 1665-1673
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/triggers/api.ts` around lines 1655 - 1663, The create endpoints are
returning HTTP 200 instead of 201; update the registered handler responses for
both "api::sketch-create" and "api::sketch-add" so that successful resource
creation returns status_code: 201 (and keep the returned body/result the same).
Locate the sdk.registerFunction handlers with ids "api::sketch-create" and
"api::sketch-add" and change their success return objects from { status_code:
200, body: ... } to { status_code: 201, body: ... }.
- actions: validate edges before persisting, set blocked status for requires deps
- leases: use mem:action lock key, reject blocked actions, check expiry on release
- checkpoints: validate linkedActionIds exist, check requires edges in unblock
- mesh: add SSRF validation on peer registration
- routines: remove invalid "failed" action status check
- export-import: add v0.5.0 scope export/import (actions, sentinels, sketches, etc)
- mcp/server: validate CSV inputs are strings before splitting
- schema: replace runtime require with static import
- README: fix stale tool/endpoint counts (28→37, 72→93)
Actionable comments posted: 9
♻️ Duplicate comments (5)
README.md (2)
99-99: ⚠️ Potential issue | 🟡 Minor — Finish the MCP doc sweep.
These lines now advertise 37 MCP tools, but the dedicated MCP section later still says
`### Tools (18)` and only documents 18 entries. The README is internally inconsistent until that section is updated too. Also applies to: 128-129.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` at line 99, The README currently claims "Any agent... can use agentmemory's 37 tools" while the later MCP tools section header reads "### Tools (18)" and only lists 18 entries; reconcile these inconsistencies by either updating the introductory sentence to the correct tools count or expanding the "### Tools (18)" section to document all 37 tools so counts and entries match; locate the intro sentence ("Any agent that connects to MCP servers...") and the "### Tools (18)" heading and ensure the numeric value and listed tool entries are consistent across both places (also apply the same fix to the repeated mention at lines ~128-129).
646-646: ⚠️ Potential issue | 🟡 Minor — Source-file metric is still stale.
The PR summary says this release moves from 87 to 101 source files, but this line now says 92. Please update this metric in the same pass so the architecture stats stay aligned.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` at line 646, Update the stale source-file metric line in README.md that currently reads the "92 source files..." entry to match the PR summary (change it to reflect 101 source files); locate the exact string "**92 source files. ~14,500 LOC. 518 tests. 354KB bundled.**" and replace the source-file count so the architecture stats align with the PR claim.

src/functions/export-import.ts (1)
71-79:⚠️ Potential issue | 🟠 Major
`0.5.0` backup support is still incomplete.
`KV.leases`, `KV.routineRuns`, and `KV.mesh` are still omitted from the payload, and `replace` still does not clear the new namespaces. A `0.5.0` export/import round-trip can still lose or retain stale orchestration state. Also applies to: 95-103.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/functions/export-import.ts` around lines 71 - 79, Export/import currently omits KV.leases, KV.routineRuns, and KV.mesh and the "replace" path doesn't clear target namespaces, causing stale orchestration state after a 0.5.0 round-trip; update the export logic in src/functions/export-import.ts to include kv.list<Lease>(KV.leases), kv.list<RoutineRun>(KV.routineRuns), and kv.list<Mesh>(KV.mesh) alongside the existing lists (match how actions/actionEdges/etc. are collected) and modify the replace/import routine to explicitly clear the destination namespaces (remove or truncate existing keys in the target KV namespaces) before writing the imported entries so stale leases/runs/mesh entries are not retained.

src/functions/checkpoints.ts (1)
35-56: ⚠️ Potential issue | 🟠 Major — Creating a checkpoint still doesn't gate pending actions.
This path validates linked action IDs and stores
`gated_by` edges, but it never flips linked `pending` actions to `blocked`. They remain leasable/runnable until some other flow blocks them manually, which defeats the checkpoint.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/functions/checkpoints.ts` around lines 35 - 56, When creating the "gated_by" ActionEdge for each linkedActionId in the checkpoint creation flow, also load the corresponding Action (kv.get<Action>(KV.actions, actionId)) and if its status is "pending" update it to "blocked" and persist it (kv.set(KV.actions, actionId, updatedAction)) so pending actions become non-leasable; perform this inside the same loop after generating the edge (use the existing variables actionId and edge), preserving other action fields and timestamps when writing the updated action back.

src/mcp/server.ts (1)
540-542: ⚠️ Potential issue | 🟠 Major — Several new MCP handlers still turn bad input into 500s.
These branches cast unknown values and call
`.split()` or `JSON.parse()` directly. Arrays/objects/invalid JSON fall into the outer `catch` as `Internal error`, and `memory_crystallize` also crashes when `actionIds` is missing. Validate `typeof value === "string"` first, return 400 on parse failures, and `filter(Boolean)` after splitting. Also applies to: 745-749, 804-807, 821-821, 845-845, 855-856, 863-864, 884-889.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mcp/server.ts` around lines 540 - 542, The code is treating unvalidated inputs as strings and calling .split() / JSON.parse() directly which converts bad input into 500s or crashes (e.g., the tags parsing block and the memory_crystallize handler that expects actionIds). Update each handler that parses client input (e.g., the tags parsing code, any JSON.parse() usages, and the memory_crystallize logic that reads actionIds) to first check typeof value === "string" before parsing/splitting, perform JSON.parse inside a try/catch and return a 400 with a clear error on parse failure, use .split(",").map(t=>t.trim()).filter(Boolean) for comma lists, and explicitly validate presence/shape of required fields like actionIds (return 400 if missing/invalid) instead of allowing the outer catch to turn these into 500s.
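The validation pattern the comment asks for can be sketched with two small helpers. These names (`parseCsv`, `safeJsonParse`) are illustrative assumptions; the real handlers in src/mcp/server.ts would map a `null`/`ok: false` result to a 400 response instead of letting the outer catch produce a 500.

```typescript
// Guarded CSV parsing: non-strings yield null (caller maps null -> 400),
// and empty segments from trailing/double commas are dropped.
function parseCsv(value: unknown): string[] | null {
  if (typeof value !== "string") return null;
  return value.split(",").map(t => t.trim()).filter(Boolean);
}

// Guarded JSON parsing: parse errors become a structured failure
// instead of an uncaught exception bubbling to the generic handler.
function safeJsonParse(
  value: unknown,
): { ok: true; data: unknown } | { ok: false; error: string } {
  if (typeof value !== "string") return { ok: false, error: "expected a JSON string" };
  try {
    return { ok: true, data: JSON.parse(value) };
  } catch {
    return { ok: false, error: "invalid JSON" };
  }
}
```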
🧹 Nitpick comments (1)
src/functions/export-import.ts (1)
134-134: Update `ExportImportInputSchema` to include `0.5.0` and model the new version's structure.
The schema at `src/eval/schemas.ts:113-123` is out of sync with the supported versions at `src/functions/export-import.ts:134`. While no import entrypoint currently enforces schema validation, the schema should be updated to reflect all supported versions and their data structures for consistency and to prevent issues if validation is added later.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/functions/export-import.ts` at line 134, ExportImportInputSchema is missing the "0.5.0" branch while supportedVersions (const supportedVersions) includes "0.5.0"; update the discriminated union in ExportImportInputSchema (in src/eval/schemas.ts) to add a new case for version "0.5.0" that models the new payload shape introduced in that release (mirror the same pattern used for "0.3.0"/"0.4.0": include the version literal, required top-level fields, and any new/renamed properties), and export the updated schema so consumers (and any future validation in the import/export entrypoints) will recognize and validate "0.5.0" inputs. Ensure the new schema variant uses the same symbol names (ExportImportInputSchema) and preserves backwards-compatible validators for older versions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@README.md`:
- Line 559: The README's API summary line stating "93 endpoints on port `3111`
(87 core + 6 MCP protocol)" is inconsistent with the downstream table that
documents only 49 routes; update the README so they match by either (A)
expanding the API table to include all 93 routes, or (B) changing the summary
line to reflect the documented 49 routes and/or add a clarifying note that the
table is a representative subset; locate the summary string "93 endpoints on
port `3111` (87 core + 6 MCP protocol)" and the API routes table in README.md
and apply the chosen fix, ensuring the summary text and any header labels (e.g.,
"API section" / table caption) are updated accordingly so documentation is
consistent.
- Around line 688-701: The README header "Functions (33)" is now inaccurate
after adding many slash-separated entries (e.g., mem::action-create,
mem::frontier, mem::lease-acquire, mem::routine-create, mem::signal-send,
mem::checkpoint-create, mem::flow-compress, mem::mesh-register,
mem::detect-worktree, mem::sentinel-create, mem::sketch-create,
mem::crystallize, mem::diagnose, mem::facet-tag); fix it by either recalculating
the true number of individual functions counting each slash-separated token and
updating the count, or change the header label to something like "Function
groups" or "Functions (groups)" to reflect grouped entries—update the README
heading text accordingly.
In `@src/functions/checkpoints.ts`:
- Around line 27-33: Reject invalid checkpoint.type and checkpoint.status values
at runtime by validating incoming data.type and data.status against the
application's allowed enums before constructing the checkpoint object (the block
that sets type: data.type || "external", createdAt: now.toISOString(),
linkedActionIds: data.linkedActionIds || [], expiresAt: ...). If data.type or
data.status is present and not one of the permitted enum values, throw/return a
4xx error (or otherwise reject the request) with a clear message; apply the same
guard to the update path around the code at the other spot (the block referenced
by lines 71-76). Use the same enum/constants your domain uses for checkpoint
types/statuses (validate against those instead of allowing arbitrary strings)
and ensure the check happens before persisting or returning the constructed
checkpoint.
In `@src/functions/export-import.ts`:
- Around line 320-364: The import loops in export-import.ts always call kv.set
(e.g., kv.set(KV.actions, action.id, action), kv.set(KV.actionEdges, edge.id,
edge), etc.), so strategy: "skip" is ignored; update each block that writes to
KV (actions, actionEdges, routines, signals, checkpoints, sentinels, sketches,
crystals, facets) to first check existence using the same pattern as the core
session/memory paths (e.g., await kv.get(KV.<namespace>, id) or an exists/has
helper) and if strategy === "skip" and the key exists, skip calling kv.set;
otherwise proceed with kv.set. Ensure you reference the existing strategy
variable and reuse the same existence-check helper or logic used elsewhere to
keep behavior consistent.
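The skip-aware write loop can be sketched against an in-memory `Map` standing in for one KV namespace. The real code would `await kv.get`/`kv.set` instead; `importEntries` is a hypothetical helper, not the repo's function.

```typescript
type Strategy = "skip" | "replace";

// Import entries into one namespace, honoring strategy "skip": an existing
// record wins and the import leaves it untouched. Returns the write count.
function importEntries<T extends { id: string }>(
  ns: Map<string, T>,
  entries: T[],
  strategy: Strategy,
): number {
  let written = 0;
  for (const entry of entries) {
    if (strategy === "skip" && ns.has(entry.id)) continue; // existence check first
    ns.set(entry.id, entry);
    written++;
  }
  return written;
}
```

The same existence-check-then-write shape would be repeated for each namespace block (actions, actionEdges, routines, signals, checkpoints, sentinels, sketches, crystals, facets).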
In `@src/functions/leases.ts`:
- Line 18: The ttl calculation in src/functions/leases.ts allows negative or
non-numeric ttlMs through; change the logic in the places computing ttl (the
const ttl = ... expression used in acquire and renew) to first validate
data.ttlMs is a finite positive number, otherwise fall back to
DEFAULT_LEASE_TTL_MS, then clamp to MAX_LEASE_TTL_MS; ensure the final ttl is a
finite positive integer (or milliseconds) so downstream calls (e.g.,
toISOString()) cannot receive NaN or an already-expired timestamp.
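The validate-then-clamp logic might look like this sketch; the constant values here are assumptions, since the real `DEFAULT_LEASE_TTL_MS` and `MAX_LEASE_TTL_MS` live in src/functions/leases.ts.

```typescript
const DEFAULT_LEASE_TTL_MS = 5 * 60_000;  // assumed default: 5 minutes
const MAX_LEASE_TTL_MS = 60 * 60_000;     // assumed cap: 1 hour

// Reject non-finite or non-positive ttlMs values, then clamp to the cap,
// so expiresAt = new Date(Date.now() + ttl).toISOString() is always valid.
function resolveTtl(ttlMs: unknown): number {
  const n =
    typeof ttlMs === "number" && Number.isFinite(ttlMs) && ttlMs > 0
      ? Math.floor(ttlMs)
      : DEFAULT_LEASE_TTL_MS;
  return Math.min(n, MAX_LEASE_TTL_MS);
}
```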
In `@src/functions/mesh.ts`:
- Around line 196-209: The current receive path does an unlocked get/set on
KV.actions (see usage of KV.actions and action.id) which can overwrite newer
local changes; wrap the read/compare/write in
withKeyedLock(`mem:action:${action.id}`) to serialize updates: inside the lock
re-fetch the existing Action from kv.get(Action, KV.actions, action.id), compare
timestamps (new Date(action.updatedAt) > new Date(existing.updatedAt)) and only
then call kv.set(KV.actions, action.id, action); apply the same locking pattern
to the other receive block referenced (the code around the second occurrence at
lines ~272-280).
- Around line 6-23: The isAllowedUrl function’s textual hostname checks are
bypassable; update isAllowedUrl (used before registering/syncing peers in
mem::mesh-sync) to canonicalize the hostname (strip trailing dots, handle
punycode), then perform DNS resolution to get canonical IP addresses (use a
blocking or async dns.lookup/resolve call where this check runs) and validate
each resolved address against IPv4/IPv6 private, loopback, link-local, ULA and
IPv4-mapped ranges (reject 127/8, 10/8, 172.16/12, 192.168/16, 169.254/16, ::1,
fc00::/7, fe80::/10 and ::ffff:0:0/96 mapped IPv4), also reject short numeric
IPv4 forms and hostnames that fail resolution; only allow if the resolved
canonical IPs are public and the protocol is http/https—return false on DNS
resolution failure or any match to a disallowed range.
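The pure range-check portion of that flow (the part applied to each address after DNS resolution) can be sketched as below. This is a simplified illustration, not an exhaustive SSRF filter: it covers the ranges the comment names, treats unparseable addresses as disallowed, and leaves the DNS-resolution step (dns.lookup feeding addresses into this check) to the caller.

```typescript
// Returns true when an address must be rejected: loopback, RFC 1918,
// link-local, ULA, IPv4-mapped IPv6, or anything that fails to parse.
function isDisallowedAddress(addr: string): boolean {
  // IPv4-mapped IPv6 (::ffff:a.b.c.d) — validate the embedded IPv4 instead.
  const mapped = addr.toLowerCase().match(/^::ffff:(\d+\.\d+\.\d+\.\d+)$/);
  if (mapped) return isDisallowedAddress(mapped[1]);
  if (addr.includes(":")) {
    const a = addr.toLowerCase();
    // ::1 loopback, fc00::/7 ULA (fc/fd), fe80::/10 link-local.
    return a === "::1" || a.startsWith("fc") || a.startsWith("fd") || a.startsWith("fe80");
  }
  const parts = addr.split(".").map(Number);
  // Short numeric forms ("127.1") and out-of-range octets are rejected outright.
  if (parts.length !== 4 || parts.some(p => !Number.isInteger(p) || p < 0 || p > 255)) return true;
  const [a, b] = parts;
  return (
    a === 0 || a === 127 || a === 10 ||
    (a === 172 && b >= 16 && b <= 31) ||
    (a === 192 && b === 168) ||
    (a === 169 && b === 254)
  );
}
```

A hardened `isAllowedUrl` would canonicalize the hostname, resolve it via `dns.lookup` with `all: true`, and allow the URL only if every resolved address passes this check and the protocol is http/https.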
In `@src/functions/routines.ts`:
- Around line 148-158: The created RoutineRun's stepStatus (run.stepStatus) is
only set once and never refreshed, so callers keep seeing "pending"; update and
persist run.stepStatus whenever action states change by deriving the per-step
map from mem::routine-status's actionStates and progress and writing the updated
run back to the KV store (kv.set) — e.g., after computing actionStates/progress
in the routines logic (both at creation and in the later block referenced around
lines 189-208), recompute run.stepStatus from actionStates, assign it on the
RoutineRun object, and call kv.set(KV.routineRuns, run.id, run) to persist the
refreshed stepStatus.
- Around line 33-39: The routine creation currently maps steps without
validating the DAG or enforcing dependencies, which allows missing/duplicate
orders and leaves dependent steps as runnable; update the logic around the steps
mapping (the block creating step objects and the later block that uses
stepOrderToActionId.get(depOrder)) to: (1) validate there are no duplicate
step.order values and that every dependsOn refers to a known step order,
throwing/rejecting if any unknown or duplicate order is found; (2) initialize
any step that has a non-empty dependsOn array with status "blocked" (instead of
"pending"); and (3) when resolving dependsOn to action IDs via
stepOrderToActionId.get(depOrder), fail fast if get() returns undefined rather
than silently skipping, so malformed DAGs are rejected. Also apply the same
checks and initialization where steps are processed later in the file (the
section referenced around lines 91-145).
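The three checks above can be sketched as one validation pass over the step list. `validateSteps` and the minimal `Step` shape are assumptions for illustration; the real routine steps in src/types.ts carry more fields.

```typescript
// Minimal step shape for illustration.
type Step = { order: number; dependsOn?: number[] };

// Rejects duplicate orders and unknown dependsOn references, and returns
// the initial status per order: "blocked" when the step has dependencies.
function validateSteps(steps: Step[]): Map<number, "pending" | "blocked"> {
  const orders = new Set<number>();
  for (const s of steps) {
    if (orders.has(s.order)) throw new Error(`duplicate step order ${s.order}`);
    orders.add(s.order);
  }
  const status = new Map<number, "pending" | "blocked">();
  for (const s of steps) {
    for (const dep of s.dependsOn ?? []) {
      // Fail fast on malformed DAGs instead of silently skipping.
      if (!orders.has(dep)) throw new Error(`step ${s.order} depends on unknown order ${dep}`);
    }
    status.set(s.order, (s.dependsOn?.length ?? 0) > 0 ? "blocked" : "pending");
  }
  return status;
}
```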
---
Duplicate comments:
In `@README.md`:
- Line 99: The README currently claims "Any agent... can use agentmemory's 37
tools" while the later MCP tools section header reads "### Tools (18)" and only
lists 18 entries; reconcile these inconsistencies by either updating the
introductory sentence to the correct tools count or expanding the "### Tools
(18)" section to document all 37 tools so counts and entries match; locate the
intro sentence ("Any agent that connects to MCP servers...") and the "### Tools
(18)" heading and ensure the numeric value and listed tool entries are
consistent across both places (also apply the same fix to the repeated mention
at lines ~128-129).
- Line 646: Update the stale source-file metric line in README.md that currently
reads the "92 source files..." entry to match the PR summary (change it to
reflect 101 source files); locate the exact string "**92 source files. ~14,500
LOC. 518 tests. 354KB bundled.**" and replace the source-file count so the
architecture stats align with the PR claim.
In `@src/functions/checkpoints.ts`:
- Around line 35-56: When creating the "gated_by" ActionEdge for each
linkedActionId in the checkpoint creation flow, also load the corresponding
Action (kv.get<Action>(KV.actions, actionId)) and if its status is "pending"
update it to "blocked" and persist it (kv.set(KV.actions, actionId,
updatedAction)) so pending actions become non-leasable; perform this inside the
same loop after generating the edge (use the existing variables actionId and
edge), preserving other action fields and timestamps when writing the updated
action back.
In `@src/functions/export-import.ts`:
- Around line 71-79: Export/import currently omits KV.leases, KV.routineRuns,
and KV.mesh and the "replace" path doesn't clear target namespaces, causing
stale orchestration state after a 0.5.0 round-trip; update the export logic in
src/functions/export-import.ts to include kv.list<Lease>(KV.leases),
kv.list<RoutineRun>(KV.routineRuns), and kv.list<Mesh>(KV.mesh) alongside the
existing lists (match how actions/actionEdges/etc. are collected) and modify the
replace/import routine to explicitly clear the destination namespaces (remove or
truncate existing keys in the target KV namespaces) before writing the imported
entries so stale leases/runs/mesh entries are not retained.
In `@src/mcp/server.ts`:
- Around line 540-542: The code is treating unvalidated inputs as strings and
calling .split() / JSON.parse() directly which converts bad input into 500s or
crashes (e.g., the tags parsing block and the memory_crystallize handler that
expects actionIds). Update each handler that parses client input (e.g., the tags
parsing code, any JSON.parse() usages, and the memory_crystallize logic that
reads actionIds) to first check typeof value === "string" before
parsing/splitting, perform JSON.parse inside a try/catch and return a 400 with a
clear error on parse failure, use .split(",").map(t=>t.trim()).filter(Boolean)
for comma lists, and explicitly validate presence/shape of required fields like
actionIds (return 400 if missing/invalid) instead of allowing the outer catch to
turn these into 500s.
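The guard pattern this comment asks for can be factored into two small helpers. The names `parseCsvList` and `safeJsonParse` are hypothetical; the real handlers may inline these checks instead.

```typescript
// Parse a comma-separated list; null signals "respond 400" because the
// value was present but not a string.
function parseCsvList(value: unknown): string[] | null {
  if (value === undefined || value === null) return [];
  if (typeof value !== "string") return null;
  return value.split(",").map((t) => t.trim()).filter(Boolean);
}

// JSON.parse wrapped so malformed input becomes a 400-style result
// instead of an uncaught exception that the outer catch turns into a 500.
type ParseResult = { ok: true; data: unknown } | { ok: false; error: string };

function safeJsonParse(value: unknown): ParseResult {
  if (typeof value !== "string") {
    return { ok: false, error: "expected a JSON string" };
  }
  try {
    return { ok: true, data: JSON.parse(value) };
  } catch {
    return { ok: false, error: "invalid JSON" };
  }
}
```

A handler would map `null` / `{ ok: false }` results straight to a 400 response with the error message, which is exactly the behavior the review requests.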
---
Nitpick comments:
In `@src/functions/export-import.ts`:
- Line 134: ExportImportInputSchema is missing the "0.5.0" branch while
supportedVersions (const supportedVersions) includes "0.5.0"; update the
discriminated union in ExportImportInputSchema (in src/eval/schemas.ts) to add a
new case for version "0.5.0" that models the new payload shape introduced in
that release (mirror the same pattern used for "0.3.0"/"0.4.0": include the
version literal, required top-level fields, and any new/renamed properties), and
export the updated schema so consumers (and any future validation in the
import/export entrypoints) will recognize and validate "0.5.0" inputs. Ensure
the new schema variant uses the same symbol names (ExportImportInputSchema) and
preserves backwards-compatible validators for older versions.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 55cf6fa0-6cd6-43f9-bf81-5d357a9d29a0
📒 Files selected for processing (10)
- README.md
- src/functions/actions.ts
- src/functions/checkpoints.ts
- src/functions/export-import.ts
- src/functions/leases.ts
- src/functions/mesh.ts
- src/functions/routines.ts
- src/mcp/server.ts
- src/state/schema.ts
- test/routines.test.ts
🚧 Files skipped from review as they are similar to previous changes (3)
- test/routines.test.ts
- src/functions/actions.ts
- src/state/schema.ts
```ts
  type: data.type || "external",
  createdAt: now.toISOString(),
  linkedActionIds: data.linkedActionIds || [],
  expiresAt: data.expiresInMs
    ? new Date(now.getTime() + data.expiresInMs).toISOString()
    : undefined,
};
```
Reject unsupported checkpoint type and status values at runtime.
These handlers only check presence, so raw SDK/MCP callers can persist arbitrary strings into checkpoint.type and checkpoint.status. Once that happens, list filters and unblock logic are operating on impossible enum values.
🛡️ Suggested guard

```diff
+  const validTypes: Checkpoint["type"][] = [
+    "ci",
+    "approval",
+    "deploy",
+    "external",
+    "timer",
+  ];
+  if (data.type && !validTypes.includes(data.type)) {
+    return { success: false, error: "invalid checkpoint type" };
+  }
+
   const checkpoint: Checkpoint = {
     id: generateId("ckpt"),
     name: data.name.trim(),
     description: (data.description || "").trim(),
     status: "pending",
@@
-  if (!data.checkpointId || !data.status) {
+  if (!data.checkpointId) {
     return {
       success: false,
-      error: "checkpointId and status are required",
+      error: "checkpointId is required",
     };
   }
+  if (data.status !== "passed" && data.status !== "failed") {
+    return {
+      success: false,
+      error: "status must be passed or failed",
+    };
+  }
```

Also applies to: 71-76
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/functions/checkpoints.ts` around lines 27 - 33, Reject invalid
checkpoint.type and checkpoint.status values at runtime by validating incoming
data.type and data.status against the application's allowed enums before
constructing the checkpoint object (the block that sets type: data.type ||
"external", createdAt: now.toISOString(), linkedActionIds: data.linkedActionIds
|| [], expiresAt: ...). If data.type or data.status is present and not one of
the permitted enum values, throw/return a 4xx error (or otherwise reject the
request) with a clear message; apply the same guard to the update path around
the code at the other spot (the block referenced by lines 71-76). Use the same
enum/constants your domain uses for checkpoint types/statuses (validate against
those instead of allowing arbitrary strings) and ensure the check happens before
persisting or returning the constructed checkpoint.
```ts
function isAllowedUrl(urlStr: string): boolean {
  try {
    const parsed = new URL(urlStr);
    if (parsed.protocol !== "http:" && parsed.protocol !== "https:") return false;
    const host = parsed.hostname.toLowerCase();
    if (
      host === "localhost" ||
      host === "127.0.0.1" ||
      host === "::1" ||
      host === "0.0.0.0" ||
      host.startsWith("10.") ||
      host.startsWith("192.168.") ||
      host === "169.254.169.254" ||
      /^172\.(1[6-9]|2\d|3[01])\./.test(host)
    ) {
      return false;
    }
    return true;
  } catch {
    return false;
  }
}
```
The SSRF filter is still bypassable.
This check only matches a few literal host forms. `127.1`, `localhost.`, IPv4-mapped or ULA IPv6, other link-local addresses, or hostnames that resolve to private IPs will pass and then be fetched in `mem::mesh-sync`. Resolve hostnames to canonical IPs, or use an explicit allowlist, before registering/syncing peers.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/functions/mesh.ts` around lines 6 - 23, The isAllowedUrl function’s
textual hostname checks are bypassable; update isAllowedUrl (used before
registering/syncing peers in mem::mesh-sync) to canonicalize the hostname (strip
trailing dots, handle punycode), then perform DNS resolution to get canonical IP
addresses (use a blocking or async dns.lookup/resolve call where this check
runs) and validate each resolved address against IPv4/IPv6 private, loopback,
link-local, ULA and IPv4-mapped ranges (reject 127/8, 10/8, 172.16/12,
192.168/16, 169.254/16, ::1, fc00::/7, fe80::/10 and ::ffff:0:0/96 mapped IPv4),
also reject short numeric IPv4 forms and hostnames that fail resolution; only
allow if the resolved canonical IPs are public and the protocol is
http/https—return false on DNS resolution failure or any match to a disallowed
range.
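One way to implement the resolve-then-check step: resolve the hostname first (e.g., with Node's `dns.promises.lookup(host, { all: true })`) and run every resolved address through a range check like the sketch below. This covers an illustrative subset of the ranges the prompt lists and is not a complete SSRF defense; the function name is an assumption.

```typescript
// Returns true for loopback, RFC1918, link-local, ULA, and IPv4-mapped
// private addresses. Assumes the input is an already-resolved IP, so
// short forms like "127.1" never reach this check.
function isPrivateIp(ip: string): boolean {
  const v6 = ip.toLowerCase();
  if (v6 === "::1") return true;                               // IPv6 loopback
  if (v6.startsWith("fc") || v6.startsWith("fd")) return true; // ULA fc00::/7
  if (v6.startsWith("fe80:")) return true;                     // IPv6 link-local
  if (v6.startsWith("::ffff:")) return isPrivateIp(v6.slice(7)); // mapped IPv4

  const parts = ip.split(".").map(Number);
  if (parts.length !== 4 || parts.some((n) => !Number.isInteger(n))) return false;
  const [a, b] = parts;
  if (a === 0 || a === 10 || a === 127) return true; // 0/8, 10/8, 127/8
  if (a === 172 && b >= 16 && b <= 31) return true;  // 172.16/12
  if (a === 192 && b === 168) return true;           // 192.168/16
  if (a === 169 && b === 254) return true;           // 169.254/16 incl. metadata IP
  return false;
}
```

Resolving before checking is what closes the `127.1`-style holes: the check only ever sees canonical dotted-quad or IPv6 forms, and DNS failures can simply be treated as disallowed.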
…alidation, README counts
- mesh.ts: add withKeyedLock on action writes in receive path, add IPv6 private ranges to SSRF check
- routines.ts: validate DAG (duplicate orders, unknown deps), set dep actions to blocked, refresh stepStatus in routine-status
- checkpoints.ts: runtime type enum validation, set linked pending actions to blocked
- leases.ts: validate ttlMs is finite positive number
- export-import.ts: add skip strategy checks for all v0.5.0 import blocks
- mcp/server.ts: typeof guards on tags, config JSON.parse, actionIds, linkedActionIds, categories
- README.md: Tools 18→37, Functions 33→50, stats line updated
- test: update checkpoint test for new blocked-on-create behavior
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/functions/export-import.ts (1)
220-239: ⚠️ Potential issue | 🟠 Major

`replace` strategy does not clear new orchestration namespaces.
The replace logic clears sessions, memories, and summaries but leaves actions, actionEdges, routines, signals, checkpoints, sentinels, sketches, crystals, and facets intact. This causes stale data to persist after a "replace" import, breaking the expected semantics.

🔧 Proposed fix to clear new namespaces on replace

```diff
   await kv.delete(KV.summaries, s.sessionId);
 }
+const existingActions = await kv.list<Action>(KV.actions);
+for (const a of existingActions) await kv.delete(KV.actions, a.id);
+const existingActionEdges = await kv.list<ActionEdge>(KV.actionEdges);
+for (const e of existingActionEdges) await kv.delete(KV.actionEdges, e.id);
+const existingRoutines = await kv.list<Routine>(KV.routines);
+for (const r of existingRoutines) await kv.delete(KV.routines, r.id);
+const existingSignals = await kv.list<Signal>(KV.signals);
+for (const s of existingSignals) await kv.delete(KV.signals, s.id);
+const existingCheckpoints = await kv.list<Checkpoint>(KV.checkpoints);
+for (const c of existingCheckpoints) await kv.delete(KV.checkpoints, c.id);
+const existingSentinels = await kv.list<Sentinel>(KV.sentinels);
+for (const s of existingSentinels) await kv.delete(KV.sentinels, s.id);
+const existingSketches = await kv.list<Sketch>(KV.sketches);
+for (const s of existingSketches) await kv.delete(KV.sketches, s.id);
+const existingCrystals = await kv.list<Crystal>(KV.crystals);
+for (const c of existingCrystals) await kv.delete(KV.crystals, c.id);
+const existingFacets = await kv.list<Facet>(KV.facets);
+for (const f of existingFacets) await kv.delete(KV.facets, f.id);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/functions/export-import.ts` around lines 220 - 239, the replace branch currently clears sessions, observations, memories, and summaries but omits other orchestration namespaces; update the strategy === "replace" block to also enumerate and delete entries from the remaining KV namespaces (e.g., KV.actions, KV.actionEdges, KV.routines, KV.signals, KV.checkpoints, KV.sentinels, KV.sketches, KV.crystals, KV.facets) using the same pattern as for sessions/memories (await kv.list<...>(KV.xxx) then await kv.delete(...) per item) so all stale orchestration data is removed on replace.
♻️ Duplicate comments (2)
README.md (1)
578-578: ⚠️ Potential issue | 🟡 Minor

The API endpoint count still doesn't match the documented table.
The header claims "93 endpoints" but the API table below (lines 580-631) documents only 49 routes. This is the same unresolved issue from the previous review. Please either:
- Expand the table to include all 93 routes, or
- Add a note that the table is a representative subset, or
- Update the count to match the 49 documented endpoints
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` at line 578, the README header "93 endpoints on port `3111` (87 core + 6 MCP protocol)" is inconsistent with the API table that only documents 49 routes; update the documentation to either expand the API table to list all 93 routes, add a clear note that the table is a representative subset, or change the header count to match the 49 documented endpoints. Locate the header string "93 endpoints on port `3111` (87 core + 6 MCP protocol)" and the API routes table below it, then implement one of the three fixes (expand the table, add a representative-subset note adjacent to that header, or adjust the numeric counts in the header) so the header and table are consistent.

src/functions/leases.ts (1)

106-116: ⚠️ Potential issue | 🟠 Major

Don't overwrite newer action states on release.
If another handler already moved the action to `blocked` or `cancelled`, this branch still forces it back to `pending`/`done`. Only mutate the action if it is still `active` and still assigned to `data.agentId`.

Suggested guard

```diff
-if (action) {
+if (action && action.status === "active" && action.assignedTo === data.agentId) {
   if (data.result) {
     action.status = "done";
     action.result = data.result;
   } else {
     action.status = "pending";
   }
   action.assignedTo = undefined;
   action.updatedAt = new Date().toISOString();
   await kv.set(KV.actions, action.id, action);
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/functions/leases.ts` around lines 106 - 116, when releasing the action, avoid overwriting newer states by first verifying the fetched action is still the one this agent owns and still active: in the block that handles kv.get<Action>(KV.actions, data.actionId), check action.status === "active" and action.assignedTo === data.agentId before mutating; only then set action.status = data.result ? "done" : "pending", action.result = data.result (when present), action.assignedTo = undefined, action.updatedAt = new Date().toISOString(), and persist with kv.set(KV.actions, action.id, action).
🧹 Nitpick comments (9)
src/functions/export-import.ts (2)
17-17: Unused import `RoutineRun`.

`RoutineRun` is imported but never used in this file. Neither the export nor import logic handles routine runs, and `ExportData` doesn't include a `routineRuns` field.

🧹 Proposed fix

```diff
   ActionEdge,
   Routine,
-  RoutineRun,
   Signal,
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/functions/export-import.ts` at line 17, remove the unused import RoutineRun from the top of the file and any related unused references; update the import list that currently includes RoutineRun so only used symbols remain, and verify ExportData and the export/import logic still compile (no runtime changes needed); ensure no dead imports remain in the module that includes ExportData, Export/Import functions, and any helper functions referenced in this file.
299-318: Consider adding skip strategy to pre-existing imports for consistency.

The `graphNodes`, `graphEdges`, `semanticMemories`, and `proceduralMemories` imports (pre-existing code) don't respect the skip strategy, while the newly added namespaces do. For consistent behavior, consider applying the same pattern here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/functions/export-import.ts` around lines 299 - 318, the graphNodes/graphEdges/semanticMemories/proceduralMemories import loops currently always call kv.set and ignore the existing-item skip strategy; update each loop (the blocks handling importData.graphNodes, importData.graphEdges, importData.semanticMemories, importData.proceduralMemories) to respect the same skip strategy used by the newly added namespaces: check the configured skip flag (e.g., skipExisting or importOptions.skipExisting) and, when true, call kv.get(KV.<namespace>, id) to detect pre-existing items and continue/skip instead of calling kv.set; otherwise proceed to kv.set as now, using the same KV namespace constants (KV.graphNodes, KV.graphEdges, KV.semantic, KV.procedural) and the same id properties (node.id, edge.id, sem.id, proc.id) to locate entries.

src/functions/routines.ts (3)
253-258: Progress metrics omit `blocked` and `cancelled` actions.

Actions with `"blocked"` or `"cancelled"` status are not counted in the progress object, making the sum of `done + active + pending` less than `total`. Consider adding these counts for complete visibility.

♻️ Proposed enhancement

```diff
 progress: {
   total: actionStates.length,
   done: actionStates.filter((a) => a.status === "done").length,
   active: actionStates.filter((a) => a.status === "active").length,
   pending: actionStates.filter((a) => a.status === "pending").length,
+  blocked: actionStates.filter((a) => a.status === "blocked").length,
+  cancelled: actionStates.filter((a) => a.status === "cancelled").length,
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/functions/routines.ts` around lines 253 - 258, the progress object built from actionStates currently only counts done, active, and pending, omitting blocked and cancelled so their sum can be less than total; update the progress construction (referencing actionStates and the progress object in routines.ts) to also compute blocked: actionStates.filter(a => a.status === "blocked").length and cancelled: actionStates.filter(a => a.status === "cancelled").length (or equivalent), ensuring the reported counts (done, active, pending, blocked, cancelled) together account for total.
263-278: Consider keyed lock for consistency with `routine-run`.

Unlike `mem::routine-run`, this function doesn't use `withKeyedLock`. While the freeze operation is idempotent, the read-modify-write pattern could lose concurrent updates to other fields (e.g., if `routine-unfreeze` or tag updates are added later). Adding a lock now ensures future safety.

♻️ Optional: Add keyed lock

```diff
 async (data: { routineId: string }) => {
   if (!data.routineId) {
     return { success: false, error: "routineId is required" };
   }
+  return withKeyedLock(`mem:routine:${data.routineId}`, async () => {
     const routine = await kv.get<Routine>(KV.routines, data.routineId);
     if (!routine) {
       return { success: false, error: "routine not found" };
     }
     routine.frozen = true;
     routine.updatedAt = new Date().toISOString();
     await kv.set(KV.routines, routine.id, routine);
     return { success: true, routine };
+  });
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/functions/routines.ts` around lines 263 - 278, the freeze handler mem::routine-freeze reads and then writes the routine without using the existing locking helper, so concurrent updates can be lost; wrap the read-modify-write in withKeyedLock using the same keying strategy as routine-run (lock on the routine id / KV.routines key), fetch the routine inside the lock, set routine.frozen and updatedAt, then persist with kv.set before releasing the lock; reference withKeyedLock, mem::routine-freeze, routine-run, kv.get, kv.set, and KV.routines when making the change.
214-234: Potential TOCTOU race with `lease-cleanup`.

Per src/functions/leases.ts:160-201, `lease-cleanup` can revert an `"active"` action back to `"pending"` when its lease expires. Since `routine-status` reads action statuses without coordination, it may observe a mid-cleanup state where an action it expected to be `"active"` has been reset. This could cause transient inconsistencies in reported progress or step status.

Consider whether this is acceptable (caller retries) or if a read-lock/snapshot mechanism is warranted for stricter consistency guarantees.
src/functions/mesh.ts (1)
238-262: Consider pagination for large datasets.

`kv.list` loads all items into memory before filtering. For large memory/action stores, this could cause memory pressure. If the KV layer supports filtered queries or cursors, consider using them.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/functions/mesh.ts` around lines 238 - 262, collectSyncData currently calls kv.list<Memory>(KV.memories) and kv.list<Action>(KV.actions), which loads all items into memory; change these calls to use the KV layer's pagination/cursor or streaming API (e.g., list with cursor/limit or an async iterator) and process items in chunks: iterate pages, filter each page by new Date(item.updatedAt).getTime() > sinceTime, push matches into result.memories/result.actions, and continue until no more pages; if the KV supports server-side filtering by updatedAt or a query param, use that instead to avoid fetching irrelevant records. Ensure you update the code paths referencing collectSyncData, KV.memories, KV.actions and preserve the existing sinceTime logic.

src/mcp/server.ts (3)
535-538: Redundant cast: `args.requires` is already confirmed to be a string.

Line 535 checks `typeof args.requires === "string"`, so the cast at line 536 is unnecessary.

♻️ Suggested change

```diff
 if (typeof args.requires === "string" && args.requires.trim()) {
-  for (const id of (args.requires as string).split(",").map((s: string) => s.trim())) {
+  for (const id of args.requires.split(",").map((s: string) => s.trim()).filter(Boolean)) {
     edges.push({ type: "requires", targetActionId: id });
   }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/mcp/server.ts` around lines 535 - 538, the cast (args.requires as string) is redundant because you already checked typeof args.requires === "string"; update the block that builds edges (the for loop that pushes into edges with { type: "requires", targetActionId: id }) to use args.requires directly (e.g., args.requires.split(...).map(...)) and remove the unnecessary cast while preserving the trimming/splitting logic and behavior.
97-104: Consider filtering empty entries for consistency.

Other CSV parsers in this file (e.g., `linkedActionIds` at line 807, `actionIds` at line 849) use `.filter(Boolean)` to remove empty entries. The `concepts` and `files` parsers here do not, which could pass empty strings to the trigger if input contains consecutive commas (e.g., `"foo,,bar"`).

♻️ Suggested change

```diff
 const concepts = typeof args.concepts === "string"
-  ? args.concepts.split(",").map((c: string) => c.trim())
+  ? args.concepts.split(",").map((c: string) => c.trim()).filter(Boolean)
   : [];
 const files = typeof args.files === "string"
-  ? args.files.split(",").map((f: string) => f.trim())
+  ? args.files.split(",").map((f: string) => f.trim()).filter(Boolean)
   : [];
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/mcp/server.ts` around lines 97 - 104, the CSV parsing for concepts and files currently splits args.concepts and args.files into arrays but doesn't remove empty strings; update the parsing for the symbols concepts and files (where args.concepts and args.files are handled) to mirror other parsers by adding a .filter(Boolean) after the .map(...) so consecutive commas like "foo,,bar" won't produce empty entries being passed to the trigger.
540-542: Same consistency note: consider adding `.filter(Boolean)` to tags parsing.

♻️ Suggested change

```diff
 const tags = typeof args.tags === "string" && args.tags.trim()
-  ? args.tags.split(",").map((t: string) => t.trim())
+  ? args.tags.split(",").map((t: string) => t.trim()).filter(Boolean)
   : [];
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/mcp/server.ts` around lines 540 - 542, the tags parsing currently splits and trims args.tags but can leave empty strings; update the expression that builds the tags array (the const tags assignment that checks typeof args.tags === "string") to remove falsy/empty entries by applying .filter(Boolean) (or .filter(t => t !== "")) after the .map to ensure tags contains only non-empty strings; keep the initial ternary guard and trimming logic intact.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@README.md`:
- Line 667: The heading "### Functions (50)" is incorrect; update the README
heading to reflect the actual count—either change it to "### Functions (69)" if
you want to count each slash-separated entry as an individual function, or
change it to "### Function groups (36)" if you prefer to count grouped entries;
edit the heading text "### Functions (50)" accordingly so it matches the table
entries below.
In `@src/functions/leases.ts`:
- Around line 23-78: The code uses an in-process lock (withKeyedLock) which
won't prevent races across processes: two replicas can both call
kv.list(KV.leases), see no active lease, then both create leases (kv.set) for
the same action; replace the in-memory locking with a distributed/atomic
approach — either use your KV store's compare-and-set or transactional API
(e.g., an atomic create-if-not-exists or conditional put) instead of relying on
withKeyedLock, so the flow in this function that reads existingLeases (kv.list),
builds a new Lease (generateId("lse")), and writes it with kv.set becomes a
single atomic check-and-set operation (or guarded by a distributed lock) to
ensure only one agent can create an active lease for actionId across all
processes.
- Around line 150-153: The renew implementation resets the lease expiry to
now+ttl which can shorten a still-active lease; change the renewal base to the
later of now and the existing expiry (use new Date(activeLease.expiresAt) and
compare to now), then set activeLease.expiresAt = new Date(base.getTime() +
ttl).toISOString(), update activeLease.renewedAt = now.toISOString(), and
persist via kv.set(KV.leases, activeLease.id, activeLease); this ensures renew
extends an active lease rather than replacing it with a potentially earlier
deadline.
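The atomic check-and-set flow the first comment describes depends on the KV store exposing a create-if-absent primitive. A minimal in-memory sketch of the pattern follows; all names here (`AtomicKv`, `createIfAbsent`, `acquireLease`, the key scheme) are illustrative, not the project's actual KV API.

```typescript
// Toy KV with an atomic create-if-absent. A real store would use a
// conditional put ("if-not-exists") so the check and the write cannot
// interleave across processes or replicas.
class AtomicKv {
  private store = new Map<string, unknown>();

  createIfAbsent(key: string, value: unknown): boolean {
    if (this.store.has(key)) return false;
    this.store.set(key, value);
    return true;
  }

  delete(key: string): void {
    this.store.delete(key);
  }
}

interface Lease {
  id: string;
  actionId: string;
  agentId: string;
  expiresAt: string;
}

// Acquire by writing a per-action key: only one writer can win,
// no matter how many processes race on the same action.
function acquireLease(kv: AtomicKv, actionId: string, agentId: string, ttlMs: number): Lease | null {
  const lease: Lease = {
    id: `lse_${actionId}_${agentId}`, // illustrative id scheme
    actionId,
    agentId,
    expiresAt: new Date(Date.now() + ttlMs).toISOString(),
  };
  return kv.createIfAbsent(`lease:active:${actionId}`, lease) ? lease : null;
}
```

Releasing or expiring a lease then deletes the key, letting the next `acquireLease` succeed; the list-then-set flow the comment flags never becomes visible to other processes.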
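The renewal fix in the second comment reduces to choosing the later of "now" and the current expiry as the base before adding the TTL. A sketch, with an assumed function name:

```typescript
// Extend a lease rather than reset it: renewing an active lease adds
// ttlMs on top of its current deadline, while renewing an expired or
// unparseable one starts from now.
function renewedExpiry(currentExpiresAt: string, ttlMs: number, now: Date = new Date()): string {
  const current = new Date(currentExpiresAt).getTime();
  const base = Math.max(now.getTime(), Number.isFinite(current) ? current : 0);
  return new Date(base + ttlMs).toISOString();
}
```

This guarantees renewal can never move the deadline earlier than it already was, which is the invariant the comment is after.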
In `@src/functions/mesh.ts`:
- Around line 271-279: The memory upsert loop in applySyncData has the same race
as actions—wrap each memory operation in the keyed lock used elsewhere: call
withKeyedLock(mem.id, async () => { const existing = await
kv.get<Memory>(KV.memories, mem.id); if (!existing || new Date(mem.updatedAt) >
new Date(existing.updatedAt)) { await kv.set(KV.memories, mem.id, mem);
applied++; } }); ensuring the get/compare/set happens inside the lock and
preserves the existing semantics for scopes.includes("memories") and
data.memories.
- Around line 264-295: The applySyncData function currently writes pulled
memories/actions without validation; update applySyncData to validate each item
before applying: for both Memory entries (data.memories) and Action entries
(data.actions) check required fields (id and updatedAt), ensure updatedAt parses
to a valid Date, and skip (and optionally log) any entries that fail validation;
only compare dates and call kv.set for items that pass validation and are newer
than existing (use the same Date parsing logic as in mesh-receive to avoid
silent corrupt writes).
- Around line 184-199: The memory upsert loop in src/functions/mesh.ts currently
does an unlocked get/compare/set on KV.memories (for each mem in data.memories)
which can cause race conditions; refactor the block that reads existing = await
kv.get<Memory>(KV.memories, mem.id) and the subsequent compare/new set into a
withKeyedLock keyed by mem.id (the same helper used for actions) so that you
validate mem.id and mem.updatedAt before acquiring the lock, then inside the
lock re-fetch existing, compare updatedAt, perform kv.set(KV.memories, mem.id,
mem) and increment accepted only while holding the lock, and finally release the
lock; this preserves the current validation logic but adds the keyed locking
around the get/compare/set sequence to prevent concurrent clobbering.
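Several of these comments ask for `withKeyedLock` around a get/compare/set sequence. For reference, a common in-process implementation chains promises per key; this is a sketch of the general technique, and the project's actual helper may differ.

```typescript
// Per-key promise chain: callers on the same key run strictly one after
// another; callers on different keys do not block each other.
const locks = new Map<string, Promise<unknown>>();

async function withKeyedLock<T>(key: string, fn: () => Promise<T>): Promise<T> {
  const prev = locks.get(key) ?? Promise.resolve();
  let release!: () => void;
  const current = new Promise<void>((resolve) => (release = resolve));
  locks.set(key, current);
  await prev; // wait for any earlier holder of this key
  try {
    return await fn();
  } finally {
    release();
    // Clean up only if no later caller has replaced our entry.
    if (locks.get(key) === current) locks.delete(key);
  }
}
```

Note this only serializes within one process; the leases.ts comments earlier in this review are precisely about the cases where that is not enough and an atomic KV operation is needed instead.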
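The validation step requested for the sync path can be a small type guard run before any date comparison or `kv.set`. The field names come from the review; the guard name is an assumption.

```typescript
// A synced item must carry a non-empty string id and a parseable
// updatedAt timestamp; anything else is skipped rather than written.
function isValidSyncItem(item: unknown): item is { id: string; updatedAt: string } {
  if (typeof item !== "object" || item === null) return false;
  const { id, updatedAt } = item as { id?: unknown; updatedAt?: unknown };
  return (
    typeof id === "string" &&
    id.length > 0 &&
    typeof updatedAt === "string" &&
    !Number.isNaN(new Date(updatedAt).getTime())
  );
}
```

Running every pulled memory and action through this guard before the newer-than comparison prevents the silent corrupt writes the comment warns about.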
In `@src/functions/routines.ts`:
- Around line 225-232: The code that maps action.status into run.stepStatus
(look for stepOrder, action.status, the mapped variable and run.stepStatus in
routines.ts) doesn't handle the "blocked" Action status and currently casts
arbitrary statuses into the narrower stepStatus union; update the mapping logic
so that "blocked" is explicitly mapped (e.g., to "failed" or another allowed
value) and ensure mapped is typed as "pending" | "active" | "done" | "failed"
before assigning to run.stepStatus[stepOrder], avoiding unsafe casts and
preserving statusChanged logic.
In `@src/mcp/server.ts`:
- Around line 887-900: The case "memory_facet_query" currently calls .split() on
args.matchAll and args.matchAny without type-checking which will throw if the
client sends a non-string; update the logic in this block (referencing
args.matchAll, args.matchAny, fqAll, fqAny and the memory_facet_query case) to
first check typeof value === "string" before calling .split(), set fqAll/fqAny
to undefined when missing, and return a 400 response if either matchAll or
matchAny is present but not a string (instead of letting an exception bubble to
a 500); then pass fqAll/fqAny into sdk.trigger("mem::facet-query") as before.
- Around line 819-825: The case "memory_sentinel_trigger" currently calls
JSON.parse(args.result as string) which will throw if args.result is not a
string; update the handler in the memory_sentinel_trigger branch to first check
typeof args.result === "string" before parsing, accept non-string values
(objects/numbers) by using them directly or convert them to a string-safe
representation, and wrap JSON.parse in a try/catch to return a 400 error with a
clear message on invalid JSON; adjust the call to
sdk.trigger("mem::sentinel-trigger", { sentinelId: args.sentinelId, result:
parsedOrRawResult }) accordingly so invalid input yields 400 instead of a 500.
---
Outside diff comments:
In `@src/functions/export-import.ts`:
- Around line 220-239: The replace branch in export-import.ts currently clears
sessions, observations, memories, and summaries but omits other orchestration
namespaces; update the strategy === "replace" block to also enumerate and delete
entries from the remaining KV namespaces (e.g., KV.actions, KV.actionEdges,
KV.routines, KV.signals, KV.checkpoints, KV.sentinels, KV.sketches, KV.crystals,
KV.facets) using the same pattern as for sessions/memories (await
kv.list<...>(KV.xxx) then await kv.delete(...) per item) so all stale
orchestration data is removed on replace.
---
Duplicate comments:
In `@README.md`:
- Line 578: The README header "93 endpoints on port `3111` (87 core + 6 MCP
protocol)" is inconsistent with the API table that only documents 49 routes;
update the documentation to either expand the API table to list all 93 routes,
add a clear note that the table is a representative subset, or change the header
count to match the 49 documented endpoints. Locate the header string "93
endpoints on port `3111` (87 core + 6 MCP protocol)" and the API routes table
below it, then implement one of the three fixes (expand the table, add a
representative-subset note adjacent to that header, or adjust the numeric counts
in the header) so the header and table are consistent.
In `@src/functions/leases.ts`:
- Around line 106-116: When releasing the action, avoid overwriting newer states
by first verifying the fetched action is still the one this agent owns and still
active: in the block that handles kv.get<Action>(KV.actions, data.actionId)
check action.status === "active" and action.assignedTo === data.agentId before
mutating; only then set action.status = data.result ? "done" : "pending",
action.result = data.result (when present), action.assignedTo = undefined,
action.updatedAt = new Date().toISOString(), and persist with kv.set(KV.actions,
action.id, action).
---
Nitpick comments:
In `@src/functions/export-import.ts`:
- Line 17: Remove the unused import RoutineRun from the top of the file and any
related unused references; update the import list that currently includes
RoutineRun so only used symbols remain, and verify ExportData and the
export/import logic still compile (no runtime changes needed) — ensure no dead
imports remain in the module that includes ExportData, Export/Import functions,
and any helper functions referenced in this file.
- Around line 299-318: The
graphNodes/graphEdges/semanticMemories/proceduralMemories import loops currently
always call kv.set and ignore the existing-item skip strategy; update each loop
(the blocks handling importData.graphNodes, importData.graphEdges,
importData.semanticMemories, importData.proceduralMemories) to respect the same
skip strategy used by the newly added namespaces: check the configured skip flag
(e.g., skipExisting or importOptions.skipExisting) and, when true, call
kv.get(KV.<namespace>, id) to detect pre-existing items and continue/skip
instead of calling kv.set; otherwise proceed to kv.set as now—use the same KV
namespace constants (KV.graphNodes, KV.graphEdges, KV.semantic, KV.procedural)
and the same id properties (node.id, edge.id, sem.id, proc.id) to locate
entries.
In `@src/functions/mesh.ts`:
- Around line 238-262: collectSyncData currently calls
kv.list<Memory>(KV.memories) and kv.list<Action>(KV.actions) which loads all
items into memory; change these calls to use the KV layer's pagination/cursor or
streaming API (e.g., list with cursor/limit or an async iterator) and process
items in chunks: iterate pages, filter each page by new
Date(item.updatedAt).getTime() > sinceTime, push matches into
result.memories/result.actions and continue until no more pages; if the KV
supports server-side filtering by updatedAt or a query param, use that instead
to avoid fetching irrelevant records. Ensure you update the code paths
referencing collectSyncData, KV.memories, KV.actions and preserve the existing
sinceTime logic.
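The page-at-a-time filtering suggested above could look like this. The `listPage` cursor API is an assumption here (the current KV layer only exposes `list`); `Item` and `PagedKV` are stand-in shapes.

```typescript
type Item = { id: string; updatedAt: string };

// assumed cursor-based listing API, not part of the current KV layer
interface PagedKV {
  listPage(
    scope: string,
    cursor?: string,
    limit?: number,
  ): Promise<{ items: Item[]; nextCursor?: string }>;
}

async function collectSince(kv: PagedKV, scope: string, since: string): Promise<Item[]> {
  const sinceTime = new Date(since).getTime();
  const out: Item[] = [];
  let cursor: string | undefined;
  do {
    // bounded batch: only one page is held in memory at a time
    const page = await kv.listPage(scope, cursor, 100);
    for (const item of page.items) {
      if (new Date(item.updatedAt).getTime() > sinceTime) out.push(item);
    }
    cursor = page.nextCursor;
  } while (cursor);
  return out;
}
```

If the KV backend ever supports server-side filtering by `updatedAt`, the in-loop comparison can move into the query instead.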
In `@src/functions/routines.ts`:
- Around line 253-258: The progress object built from actionStates currently
only counts done, active, and pending, omitting blocked and cancelled so their
sum can be less than total; update the progress construction (referencing
actionStates and the progress object in routines.ts) to also compute blocked:
actionStates.filter(a => a.status === "blocked").length and cancelled:
actionStates.filter(a => a.status === "cancelled").length (or equivalent),
ensuring the reported counts (done, active, pending, blocked, cancelled)
together account for total.
- Around line 263-278: The freeze handler mem::routine-freeze reads and then
writes the routine without using the existing locking helper, so concurrent
updates can be lost; wrap the read-modify-write in withKeyedLock using the same
keying strategy as routine-run (lock on the routine id / KV.routines key), fetch
the routine inside the lock, set routine.frozen and updatedAt, then persist with
kv.set before releasing the lock; reference withKeyedLock, mem::routine-freeze,
routine-run, kv.get, kv.set, and KV.routines when making the change.
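The locked freeze path described above can be sketched as follows. `withKeyedLock` here is a minimal re-implementation in the spirit of the in-memory keyed mutex the review references (src/state/keyed-mutex.ts); the `kv` and `Routine` shapes are stand-ins for the real ones.

```typescript
// minimal keyed mutex: serializes async fns per key
const locks = new Map<string, Promise<unknown>>();

async function withKeyedLock<T>(key: string, fn: () => Promise<T>): Promise<T> {
  const prev = locks.get(key) ?? Promise.resolve();
  const run = prev.then(fn, fn); // run after the previous holder settles
  locks.set(key, run.catch(() => undefined));
  return run;
}

type Routine = { id: string; frozen: boolean; updatedAt: string };

async function freezeRoutine(
  kv: { get(id: string): Promise<Routine | null>; set(id: string, r: Routine): Promise<void> },
  routineId: string,
) {
  return withKeyedLock(`mem:routine:${routineId}`, async () => {
    const routine = await kv.get(routineId); // read inside the lock
    if (!routine) return { success: false as const, error: "routine not found" };
    routine.frozen = true;
    routine.updatedAt = new Date().toISOString();
    await kv.set(routineId, routine); // write before the lock releases
    return { success: true as const, routine };
  });
}
```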
In `@src/mcp/server.ts`:
- Around line 535-538: The cast (args.requires as string) is redundant because
you already checked typeof args.requires === "string"; update the block that
builds edges (the for loop that pushes into edges with { type: "requires",
targetActionId: id }) to use args.requires directly (e.g.,
args.requires.split(...).map(...)) and remove the unnecessary cast while
preserving the trimming/splitting logic and behavior.
- Around line 97-104: The CSV parsing for concepts and files currently splits
args.concepts and args.files into arrays but doesn't remove empty strings;
update the parsing for the symbols concepts and files (where args.concepts and
args.files are handled) to mirror other parsers by adding a .filter(Boolean)
after the .map(...) so consecutive commas like "foo,,bar" won't produce empty
entries being passed to the trigger.
- Around line 540-542: The tags parsing currently splits and trims args.tags but
can leave empty strings; update the expression that builds the tags array (the
const tags assignment that checks typeof args.tags === "string") to remove
falsy/empty entries by applying .filter(Boolean) (or .filter(t => t !== "") )
after the .map to ensure tags contains only non-empty strings; keep the initial
ternary guard and trimming logic intact.
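The CSV parsing fixes above all reduce to the same split/trim/filter shape; a hypothetical shared helper (not in the current code, where each call site inlines the logic) could look like this:

```typescript
// parse a comma-separated string into non-empty trimmed entries;
// non-string or blank input yields an empty array
function parseCsv(raw: unknown): string[] {
  if (typeof raw !== "string" || !raw.trim()) return [];
  return raw
    .split(",")
    .map((s) => s.trim())
    .filter(Boolean); // drops the empty entries produced by "foo,,bar"
}
```

With this in place, `args.concepts`, `args.files`, `args.tags`, and `args.requires` would all parse consistently.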
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f2285a3c-200a-46b4-b048-7bb7ead05383
📒 Files selected for processing (8)
- README.md
- src/functions/checkpoints.ts
- src/functions/export-import.ts
- src/functions/leases.ts
- src/functions/mesh.ts
- src/functions/routines.ts
- src/mcp/server.ts
- test/checkpoints.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- src/functions/checkpoints.ts
```diff
 **101 source files. ~15,000 LOC. 518 tests. 361KB bundled.**

-### Functions (33)
+### Functions (50)
```
**`Functions (50)` is inaccurate.**

Counting all slash-separated entries in the table below (lines 669-721) as individual functions yields 69 functions, not 50. Please either:

- Update the count to `Functions (69)`, or
- Relabel as `Function groups (36)` if counting grouped entries rather than individual functions
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` at line 667, The heading "### Functions (50)" is incorrect; update
the README heading to reflect the actual count—either change it to "###
Functions (69)" if you want to count each slash-separated entry as an individual
function, or change it to "### Function groups (36)" if you prefer to count
grouped entries; edit the heading text "### Functions (50)" accordingly so it
matches the table entries below.
```typescript
return withKeyedLock(`mem:action:${data.actionId}`, async () => {
  const action = await kv.get<Action>(KV.actions, data.actionId);
  if (!action) {
    return { success: false, error: "action not found" };
  }
  if (action.status === "done" || action.status === "cancelled") {
    return { success: false, error: "action already completed" };
  }
  if (action.status === "blocked") {
    return { success: false, error: "action is blocked" };
  }

  const existingLeases = await kv.list<Lease>(KV.leases);
  const activeLease = existingLeases.find(
    (l) =>
      l.actionId === data.actionId &&
      l.status === "active" &&
      new Date(l.expiresAt).getTime() > Date.now(),
  );

  if (activeLease) {
    if (activeLease.agentId === data.agentId) {
      return {
        success: true,
        lease: activeLease,
        renewed: false,
        message: "Already holding this lease",
      };
    }
    return {
      success: false,
      error: "action already leased",
      heldBy: activeLease.agentId,
      expiresAt: activeLease.expiresAt,
    };
  }

  const now = new Date();
  const lease: Lease = {
    id: generateId("lse"),
    actionId: data.actionId,
    agentId: data.agentId,
    acquiredAt: now.toISOString(),
    expiresAt: new Date(now.getTime() + ttl).toISOString(),
    status: "active",
  };

  await kv.set(KV.leases, lease.id, lease);

  action.status = "active";
  action.assignedTo = data.agentId;
  action.updatedAt = now.toISOString();
  await kv.set(KV.actions, action.id, action);

  return { success: true, lease, renewed: false };
});
```
Process-local locking is not enough for lease exclusivity.
This path relies on withKeyedLock, but src/state/keyed-mutex.ts:1-18 keeps locks in an in-memory Map. Two mem::lease-acquire requests handled by different processes/replicas can both observe “no active lease” and create separate active leases for the same action. The current lock protects only a single Node process, not the ownership invariant.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/functions/leases.ts` around lines 23 - 78, The code uses an in-process
lock (withKeyedLock) which won't prevent races across processes: two replicas
can both call kv.list(KV.leases), see no active lease, then both create leases
(kv.set) for the same action; replace the in-memory locking with a
distributed/atomic approach — either use your KV store's compare-and-set or
transactional API (e.g., an atomic create-if-not-exists or conditional put)
instead of relying on withKeyedLock, so the flow in this function that reads
existingLeases (kv.list), builds a new Lease (generateId("lse")), and writes it
with kv.set becomes a single atomic check-and-set operation (or guarded by a
distributed lock) to ensure only one agent can create an active lease for
actionId across all processes.
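The atomic check-and-set the prompt asks for can be sketched with a create-if-absent primitive keyed by the action id. Everything here is an assumption: `createIfAbsent` is a hypothetical conditional-put API the KV layer would need to expose (the current in-memory mutex provides nothing like it), and `acquireLease` is a stand-in for the real handler.

```typescript
// assumed atomic primitive: returns true only for the single caller
// that actually created the key (a conditional put / compare-and-set)
interface AtomicKV {
  createIfAbsent(scope: string, id: string, value: unknown): Promise<boolean>;
}

async function acquireLease(kv: AtomicKV, actionId: string, agentId: string) {
  // keying the lease record by actionId makes "at most one active lease
  // per action" an invariant of the store itself, not of any process lock
  const won = await kv.createIfAbsent("leases-by-action", actionId, {
    actionId,
    agentId,
    acquiredAt: new Date().toISOString(),
  });
  return won
    ? { success: true as const }
    : { success: false as const, error: "action already leased" };
}
```

Because the store arbitrates, two replicas racing on the same action cannot both win, which is exactly what the in-memory `withKeyedLock` cannot guarantee.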
…xport replace cleanup, MCP input guards

- leases.ts: renew extends from max(now, existing expiry) instead of now; release verifies action ownership before mutation
- mesh.ts: withKeyedLock on memory writes in mesh-receive; applySyncData validates id/updatedAt and locks both memory and action writes
- routines.ts: stepStatus maps "blocked" to "pending" explicitly; progress includes blocked/cancelled counts; routine-freeze wrapped in withKeyedLock
- export-import.ts: remove unused RoutineRun import; replace strategy clears all orchestration namespaces; skip strategy for graphNodes/graphEdges/semantic/procedural
- mcp/server.ts: sentinel_trigger JSON.parse with typeof+try/catch; facet_query typeof guards on matchAll/matchAny; remove redundant requires cast; .filter(Boolean) on concepts/files/tags/requires CSV splits
- README.md: clarify API table is a representative subset
Actionable comments posted: 9
♻️ Duplicate comments (3)
src/functions/routines.ts (1)
114-156: ⚠️ Potential issue | 🔴 Critical

Persist dependent steps as `blocked` on the first write.

Line 121 writes every action as `pending`, then Lines 148-154 flip dependency-bearing steps to `blocked` in a second pass. That leaves a race where other readers can observe and pick up a dependent step before its prerequisites are enforced.

Suggested fix
```diff
       const action: Action = {
         id: generateId("act"),
         title: override.title || template.title || step.title,
         description: override.description || template.description || step.description,
-        status: "pending",
+        status: step.dependsOn.length > 0 ? "blocked" : "pending",
         priority: override.priority ?? template.priority ?? 5,
         createdAt: now,
         updatedAt: now,
         createdBy: data.initiatedBy || "routine",
@@
-      if (step.dependsOn.length > 0) {
-        const action = await kv.get<Action>(KV.actions, actionId);
-        if (action) {
-          action.status = "blocked";
-          action.updatedAt = now;
-          await kv.set(KV.actions, action.id, action);
-        }
-        stepStatus[step.order] = "pending";
-      }
+      if (step.dependsOn.length > 0) {
+        stepStatus[step.order] = "pending";
+      }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/functions/routines.ts` around lines 114-156: the code currently creates Action objects with status "pending" then flips dependency-bearing steps to "blocked" in a second pass, risking a race; modify the Action creation logic (the Action object built in the block that calls generateId("act") and kv.set(KV.actions, action.id, action)) to set status = "blocked" when step.dependsOn.length > 0 (and set stepStatus[step.order] = "blocked" at that time) instead of always "pending", remove or no-op the later adjustment that fetches actions by id and resets status, and keep using stepOrderToActionId and actionIds as-is so dependent steps are persisted as blocked on first write.

src/functions/mesh.ts (1)
7-28: ⚠️ Potential issue | 🔴 Critical

Harden `isAllowedUrl` beyond literal hostname checks.

This still only blocks a handful of text forms. `127.1`, `localhost.`, IPv4-mapped/ULA IPv6, and hostnames that resolve to private or loopback IPs will still pass and then be fetched later. Canonicalize + resolve the hostname and reject any private, loopback, link-local, or mapped address before allowing the peer.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/functions/mesh.ts` around lines 7-28: the current isAllowedUrl only checks literal hostnames and misses short IPv4 forms, trailing dots, IPv4-mapped IPv6, ULA, link-local and names that resolve to private/loopback addresses; change isAllowedUrl to be async (isAllowedUrl(urlStr: string): Promise<boolean>), canonicalize the hostname (trim trailing dot, toLowerCase), detect if host is already an IP with net.isIP and reject if it falls in private/loopback/link-local ranges (127.0.0.0/8, 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 169.254.0.0/16, ::1, fc00::/7, fe80::/10, and IPv4-mapped ::ffff:), otherwise perform a DNS resolution (use dns.promises.lookup(host, { all: true }) or resolve) and validate every resolved address with the same checks (using net.isIP or ipaddr.js to parse/match ranges); return false if any resolved address is private/loopback/mapped/link-local, otherwise true. Ensure references: function isAllowedUrl, use dns.promises.lookup / dns.resolve and net.isIP or ipaddr.js for range checks, and update callers to await the new async function.

src/mcp/server.ts (1)
535-542: ⚠️ Potential issue | 🟠 Major

Don't silently discard non-string multivalue inputs in `memory_action_create`.

If a caller sends `requires` as a JSON array, this branch creates no edges at all. In src/functions/actions.ts:51-82, that means the action never flips to `blocked`, so work can be scheduled before its declared dependencies are satisfied. `tags` has the same silent-drop behavior. Return a 400 for malformed input, or accept `string[]` explicitly.

Suggested fix

```diff
+  if (
+    args.requires !== undefined &&
+    typeof args.requires !== "string" &&
+    !Array.isArray(args.requires)
+  ) {
+    return { status_code: 400, body: { error: "requires must be a comma-separated string or string array" } };
+  }
+  if (
+    args.tags !== undefined &&
+    typeof args.tags !== "string" &&
+    !Array.isArray(args.tags)
+  ) {
+    return { status_code: 400, body: { error: "tags must be a comma-separated string or string array" } };
+  }
   const edges: Array<{ type: string; targetActionId: string }> = [];
-  if (typeof args.requires === "string" && args.requires.trim()) {
-    for (const id of args.requires.split(",").map((s: string) => s.trim()).filter(Boolean)) {
+  const requires = Array.isArray(args.requires)
+    ? args.requires
+    : typeof args.requires === "string"
+      ? args.requires.split(",")
+      : [];
+  for (const id of requires.map((s: string) => s.trim()).filter(Boolean)) {
     edges.push({ type: "requires", targetActionId: id });
-    }
   }
-  const tags = typeof args.tags === "string" && args.tags.trim()
-    ? args.tags.split(",").map((t: string) => t.trim()).filter(Boolean)
-    : [];
+  const tags = (Array.isArray(args.tags)
+    ? args.tags
+    : typeof args.tags === "string"
+      ? args.tags.split(",")
+      : []
+  ).map((t: string) => t.trim()).filter(Boolean);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mcp/server.ts` around lines 535-542: the current memory_action_create handling silently ignores non-string multivalue inputs for args.requires and args.tags; update the validation in memory_action_create to accept either a string or an array of strings: if args.requires or args.tags is a string, split and trim as before; else if it's an array, validate every item is a non-empty string and iterate them to add edges/tags; otherwise return a 400 error for malformed input. Locate the block referencing args.requires and args.tags in memory_action_create and implement the dual-case logic (string vs string[]) with explicit type checks and a clear 400 response for invalid types.
🧹 Nitpick comments (2)
src/functions/export-import.ts (1)
371-378: ActionEdges import does not validate referential integrity.

When importing `actionEdges`, there's no validation that the referenced `sourceActionId` and `targetActionId` exist in the imported actions. Per the relevant code snippet from src/functions/actions.ts (lines 276-277), the system expects edges to reference valid actions — `propagateCompletion` does `actionMap.get(d.targetActionId)` and gracefully handles missing targets, but orphaned edges could cause subtle issues. Consider adding a warning or validation pass after import to detect orphaned edges.

🔧 Optional: Add post-import validation for edge integrity

```typescript
// After importing all actions and edges, validate references
if (importData.actions && importData.actionEdges) {
  const actionIds = new Set(importData.actions.map(a => a.id));
  const orphanedEdges = importData.actionEdges.filter(
    e => !actionIds.has(e.sourceActionId) || !actionIds.has(e.targetActionId)
  );
  if (orphanedEdges.length > 0) {
    ctx.logger.warn("Import contains orphaned action edges", { count: orphanedEdges.length });
  }
}
```

Verify each finding against the current code and only fix it if needed. In `@src/functions/export-import.ts` around lines 371-378: the import loop for importData.actionEdges (using KV.actionEdges, kv.set and stats.skipped) does not validate that each edge's sourceActionId and targetActionId exist among the imported actions; add a post-import validation step after all importData.actions and importData.actionEdges are processed that builds a Set of imported action IDs (from importData.actions) and scans importData.actionEdges for edges whose sourceActionId or targetActionId are missing, then log a warning via the same logger (include count and optionally the offending edge ids) so orphaned edges are detected and visible; keep the existing skip/replace behavior but ensure the validator runs even if strategy === "skip".

src/functions/leases.ts (1)

164-165: Consider pagination for large-scale cleanup.

`kv.list<Lease>(KV.leases)` loads all leases into memory before iterating. For systems with many concurrent leases, this could cause memory pressure during cleanup. This is fine for moderate scale, but if lease volume grows significantly, consider adding pagination or streaming iteration.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/functions/leases.ts` around lines 164 - 165, The current cleanup uses kv.list<Lease>(KV.leases) which loads all leases into memory (variable leases); change this to a paginated/batched iteration so you process a limited batch at a time rather than storing everything. Replace the single kv.list call with a loop that calls the KV listing API with pagination/cursor parameters, handle each page of items (checking expiry using now = Date.now()) and apply deletion/cleanup per batch, and continue until the cursor indicates completion; ensure you update any code that assumes a full-array response (e.g., uses leases.forEach) to operate on each page's items instead.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@README.md`:
- Line 665: The README contains inconsistent bundle-size text between the two
summaries: update the line containing "**101 source files. ~15,000 LOC. 518
tests. 361KB bundled.**" so it matches the other occurrence (the one reporting
354KB) or vice versa; locate the README occurrences of the bundle-size summary
(the exact string "**101 source files. ~15,000 LOC. 518 tests. 361KB bundled.**"
and the other summary around line 764) and change one value so both instances
report the same bundle size (either 354KB or 361KB) to ensure consistency.
In `@src/functions/export-import.ts`:
- Around line 70-78: The export function is missing orchestration scopes leases,
routineRuns and mesh; add calls to await kv.list<Lease>(KV.leases).catch(() =>
[]), await kv.list<RoutineRun>(KV.routineRuns).catch(() => []), and await
kv.list<Mesh>(KV.mesh).catch(() => []) alongside the existing
actions/actionEdges/... variables (referencing the symbols leases, routineRuns,
mesh and KV.leases/KV.routineRuns/KV.mesh) and ensure the import phase also
processes these three collections using the same .catch(() => []) pattern so
they are included in both export collection and subsequent import handling.
In `@src/functions/mesh.ts`:
- Around line 123-145: The code incorrectly reuses peer.lastSyncAt for both push
and pull flows (used by collectSyncData and the export?since= fetch) so advance
the wrong cursor; add separate cursors (e.g., peer.lastPushSyncAt and
peer.lastPullSyncAt) or track local variables pushCursor/pullCursor, use
pushCursor when calling collectSyncData and the POST to
/agentmemory/mesh/receive and use pullCursor for the GET to
/agentmemory/mesh/export?since=, and only update the corresponding cursor (push
vs pull) on successful response handling (the places that currently set
peer.lastSyncAt after success should instead set the direction-specific cursor).
- Around line 125-146: The mesh sync fetches never send authentication, so
update both outbound calls in src/functions/mesh.ts (the POST to
`${peer.url}/agentmemory/mesh/receive` that sends pushData and the GET to
`${peer.url}/agentmemory/mesh/export?since=${peer.lastSyncAt || ""}`) to include
an Authorization header: construct a token from the peer object (e.g.,
peer.authHeader || peer.authToken || peer.apiKey) and add headers: {
"Content-Type": "application/json", Authorization: `Bearer ${token}` } for the
POST (and the same Authorization header for the GET), preserving the existing
AbortSignal.timeout; also handle the case where no token exists (log/skip or
surface a clear error into result.errors).
- Around line 125-146: The outbound fetch calls to
`${peer.url}/agentmemory/mesh/receive` and `${peer.url}/agentmemory/mesh/export`
follow redirects by default, which allows a malicious peer to bypass SSRF
protections; update both fetch invocations in mesh.ts (the POST to
/agentmemory/mesh/receive and the GET to /agentmemory/mesh/export in the
pull/both branch) to include a redirect policy (e.g., add redirect: "manual" or
"error" to the fetch options) alongside the existing headers and signal so
redirects are not automatically followed.
In `@src/functions/routines.ts`:
- Around line 214-267: When iterating run.actionIds and calling kv.get(Action),
treat a missing action (kv.get returning undefined/null) as an incomplete/failed
action: set allDone = false and anyFailed = true, and push a placeholder into
actionStates (e.g., with actionId from run.actionIds, title like "missing", and
status "cancelled" or "missing") so it is counted in progress; ensure
progress.total is computed from run.actionIds.length rather than
actionStates.length; keep the existing stepOrder mapping logic in routines.ts
(the loop over run.actionIds, the variables allDone/anyFailed/actionStates) but
add the missing-action branch to update run.stepStatus/statusChanged
appropriately and persist the run if changed.
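The missing-action accounting described in the routines comment can be sketched as a pure function. The `ActionState` shape and `buildProgress` name are stand-ins; the real handler also updates `run.stepStatus` and persists the run, which this sketch omits.

```typescript
type ActionStatus = "done" | "active" | "pending" | "blocked" | "cancelled";
type ActionState = { actionId: string; title: string; status: ActionStatus };

function buildProgress(actionIds: string[], fetched: Map<string, ActionState>) {
  // a missing action gets a placeholder so it is still counted
  const states: ActionState[] = actionIds.map(
    (id) => fetched.get(id) ?? { actionId: id, title: "missing", status: "cancelled" },
  );
  const count = (s: ActionStatus) => states.filter((a) => a.status === s).length;
  return {
    total: actionIds.length, // sized from run.actionIds, not from what kv.get returned
    done: count("done"),
    active: count("active"),
    pending: count("pending"),
    blocked: count("blocked"),
    cancelled: count("cancelled"),
  };
}
```

With all five statuses counted, the reported fields always sum to `total`, even when some actions have been deleted out from under the run.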
In `@src/mcp/server.ts`:
- Around line 802-805: The code currently overwrites object-valued sentinel
configs by only parsing string configs and leaving snlConfig = {} when
args.config is already an object; update the logic in the handler around
snlConfig so that if args.config is a plain object (typeof args.config ===
"object" && args.config !== null) you assign it directly to snlConfig, otherwise
if it is a non-empty string parse JSON with try/catch and return 400 on parse
error (this applies to both the snlConfig block and the similar block at lines
~809-813); reference the snlConfig variable and args.config in your change so
structured configs flow through to the sentinel creation functions that accept
config?: Record<string, unknown>.
- Around line 719-721: The handler currently treats flags like args.unreadOnly
and args.dryRun as strings, so JSON booleans (true/false) are misinterpreted;
update the calls to sdk.trigger("mem::signal-read", ...) and the memory_heal
path to normalize these inputs to actual booleans (e.g., treat true if
args.unreadOnly === true || args.unreadOnly === "true", and similarly for
dryRun) before passing them into the payload so both JSON booleans and string
values behave correctly.
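The flag normalization asked for above amounts to a tiny coercion; `toBool` is a hypothetical helper name, not something in the current server code.

```typescript
// accept JSON booleans and the string forms MCP clients send;
// anything else (including "yes", 1, undefined) is treated as false
function toBool(value: unknown): boolean {
  return value === true || value === "true";
}
```

Applying it to `args.unreadOnly` and `args.dryRun` before building the trigger payload makes JSON `true` and the string `"true"` behave identically.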
---
Duplicate comments:
In `@src/functions/mesh.ts`:
- Around line 7-28: The current isAllowedUrl only checks literal hostnames and
misses short IPv4 forms, trailing dots, IPv4-mapped IPv6, ULA, link-local and
names that resolve to private/loopback addresses; change isAllowedUrl to be
async (isAllowedUrl(urlStr: string): Promise<boolean>), canonicalize the
hostname (trim trailing dot, toLowerCase), detect if host is already an IP with
net.isIP and reject if it falls in private/loopback/link-local ranges
(127.0.0.0/8, 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16, 169.254.0.0/16, ::1,
fc00::/7, fe80::/10, and IPv4-mapped ::ffff:), otherwise perform a DNS
resolution (use dns.promises.lookup(host, { all: true }) or resolve) and
validate every resolved address with the same checks (using net.isIP or
ipaddr.js to parse/match ranges); return false if any resolved address is
private/loopback/mapped/link-local, otherwise true. Ensure references: function
isAllowedUrl, use dns.promises.lookup / dns.resolve and net.isIP or ipaddr.js
for range checks, and update callers to await the new async function.
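The hardened check the mesh prompt describes can be sketched in two parts: a pure range check and an async resolver. This is a simplified sketch under the ranges listed above (the IPv6 prefix tests approximate fc00::/7 and fe80::/10 rather than parsing masks exactly); a production version would lean on a library like ipaddr.js.

```typescript
import { isIP } from "node:net";
import { lookup } from "node:dns/promises";

// pure range check: true when the address is loopback, private,
// link-local, ULA, or an IPv4-mapped form of any of those
function isPrivateAddress(addr: string): boolean {
  const ip = addr.startsWith("::ffff:") ? addr.slice(7) : addr; // unwrap IPv4-mapped
  if (isIP(ip) === 4) {
    const [a, b] = ip.split(".").map(Number);
    return (
      a === 127 || a === 10 ||
      (a === 172 && b >= 16 && b <= 31) ||
      (a === 192 && b === 168) ||
      (a === 169 && b === 254)
    );
  }
  const low = ip.toLowerCase();
  // approximations: fc/fd covers fc00::/7, fe80 covers most of fe80::/10
  return low === "::1" || low.startsWith("fc") || low.startsWith("fd") || low.startsWith("fe80");
}

async function isAllowedUrl(urlStr: string): Promise<boolean> {
  let host: string;
  try {
    // canonicalize: URL parsing lowercases and expands forms like 127.1
    host = new URL(urlStr).hostname.replace(/\.$/, "").toLowerCase();
  } catch {
    return false;
  }
  if (isIP(host)) return !isPrivateAddress(host);
  try {
    const addrs = await lookup(host, { all: true }); // check every A/AAAA record
    return addrs.length > 0 && addrs.every((a) => !isPrivateAddress(a.address));
  } catch {
    return false; // unresolvable hosts are rejected
  }
}
```

Note the DNS check is still subject to rebinding between validation and fetch; pinning the resolved address for the actual request closes that gap.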
In `@src/functions/routines.ts`:
- Around line 114-156: The code currently creates Action objects with status
"pending" then flips dependency-bearing steps to "blocked" in a second pass,
risking a race; modify the Action creation logic (the Action object built in the
block that calls generateId("act") and kv.set(KV.actions, action.id, action)) to
set status = "blocked" when step.dependsOn.length > 0 (and set
stepStatus[step.order] = "blocked" at that time) instead of always "pending",
remove or no-op the later adjustment that fetches actions by id and resets
status, and keep using stepOrderToActionId and actionIds as-is so dependent
steps are persisted as blocked on first write.
In `@src/mcp/server.ts`:
- Around line 535-542: The current memory_action_create handling silently
ignores non-string multivalue inputs for args.requires and args.tags; update the
validation in memory_action_create to accept either a string or an array of
strings: if args.requires or args.tags is a string, split and trim as before;
else if it's an array, validate every item is a non-empty string and iterate
them to add edges/tags; otherwise return a 400 error for malformed input. Locate
the block referencing args.requires and args.tags in memory_action_create and
implement the dual-case logic (string vs string[]) with explicit type checks and
a clear 400 response for invalid types.
---
Nitpick comments:
In `@src/functions/export-import.ts`:
- Around line 371-378: The import loop for importData.actionEdges (using
KV.actionEdges, kv.set and stats.skipped) does not validate that each edge's
sourceActionId and targetActionId exist among the imported actions; add a
post-import validation step after all importData.actions and
importData.actionEdges are processed that builds a Set of imported action IDs
(from importData.actions) and scans importData.actionEdges for edges whose
sourceActionId or targetActionId are missing, then log a warning via the same
logger (include count and optionally the offending edge ids) so orphaned edges
are detected and visible; keep the existing skip/replace behavior but ensure the
validator runs even if strategy === "skip".
In `@src/functions/leases.ts`:
- Around line 164-165: The current cleanup uses kv.list<Lease>(KV.leases) which
loads all leases into memory (variable leases); change this to a
paginated/batched iteration so you process a limited batch at a time rather than
storing everything. Replace the single kv.list call with a loop that calls the
KV listing API with pagination/cursor parameters, handle each page of items
(checking expiry using now = Date.now()) and apply deletion/cleanup per batch,
and continue until the cursor indicates completion; ensure you update any code
that assumes a full-array response (e.g., uses leases.forEach) to operate on
each page's items instead.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: db690326-bc1f-4244-91f7-bc6b6e1e2d6f
📒 Files selected for processing (6)
- README.md
- src/functions/export-import.ts
- src/functions/leases.ts
- src/functions/mesh.ts
- src/functions/routines.ts
- src/mcp/server.ts
```typescript
const actions = await kv.list<Action>(KV.actions).catch(() => []);
const actionEdges = await kv.list<ActionEdge>(KV.actionEdges).catch(() => []);
const sentinels = await kv.list<Sentinel>(KV.sentinels).catch(() => []);
const sketches = await kv.list<Sketch>(KV.sketches).catch(() => []);
const crystals = await kv.list<Crystal>(KV.crystals).catch(() => []);
const facets = await kv.list<Facet>(KV.facets).catch(() => []);
const routines = await kv.list<Routine>(KV.routines).catch(() => []);
const signals = await kv.list<Signal>(KV.signals).catch(() => []);
const checkpoints = await kv.list<Checkpoint>(KV.checkpoints).catch(() => []);
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Verify which KV scopes are defined vs. exported
echo "=== KV scopes in schema ==="
rg -n 'mem:' src/state/schema.ts | head -40
echo ""
echo "=== Scopes exported in export-import.ts ==="
rg -n 'kv\.list<' src/functions/export-import.ts
```

Repository: rohitg00/agentmemory
Length of output: 3169
Add missing orchestration scopes to export function.
The export collects orchestration data but omits three defined KV scopes, breaking lossless backup/restore:
- `leases` (mem:leases)
- `routineRuns` (mem:routine-runs)
- `mesh` (mem:mesh)
These should be added to the collection phase (lines 70-78) and import phase using the same .catch(() => []) pattern as the other orchestration data types.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/functions/export-import.ts` around lines 70 - 78, The export function is
missing orchestration scopes leases, routineRuns and mesh; add calls to await
kv.list<Lease>(KV.leases).catch(() => []), await
kv.list<RoutineRun>(KV.routineRuns).catch(() => []), and await
kv.list<Mesh>(KV.mesh).catch(() => []) alongside the existing
actions/actionEdges/... variables (referencing the symbols leases, routineRuns,
mesh and KV.leases/KV.routineRuns/KV.mesh) and ensure the import phase also
processes these three collections using the same .catch(() => []) pattern so
they are included in both export collection and subsequent import handling.
```typescript
  const pushData = await collectSyncData(kv, scopes, peer.lastSyncAt);
  try {
    const response = await fetch(`${peer.url}/agentmemory/mesh/receive`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(pushData),
      signal: AbortSignal.timeout(30000),
    });
    if (response.ok) {
      const body = (await response.json()) as { accepted: number };
      result.pushed = body.accepted || 0;
    } else {
      result.errors.push(`push failed: HTTP ${response.status}`);
    }
  } catch (err) {
    result.errors.push(`push failed: ${String(err)}`);
  }
}

if (direction === "pull" || direction === "both") {
  try {
    const response = await fetch(
      `${peer.url}/agentmemory/mesh/export?since=${peer.lastSyncAt || ""}`,
```
Don’t reuse one lastSyncAt cursor for both push and pull.
peer.lastSyncAt drives the push diff at Line 123 and the pull diff at Line 145, then gets advanced once on success. A direction: "push" run will make the next pull skip remote updates that were never imported, and a direction: "pull" run can do the same to local updates that were never exported. Track separate push/pull cursors, or only advance the direction-specific cursor that actually ran.
Also applies to: 163-165
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/functions/mesh.ts` around lines 123 - 145, The code incorrectly reuses
peer.lastSyncAt for both push and pull flows (used by collectSyncData and the
export?since= fetch) so advance the wrong cursor; add separate cursors (e.g.,
peer.lastPushSyncAt and peer.lastPullSyncAt) or track local variables
pushCursor/pullCursor, use pushCursor when calling collectSyncData and the POST
to /agentmemory/mesh/receive and use pullCursor for the GET to
/agentmemory/mesh/export?since=, and only update the corresponding cursor (push
vs pull) on successful response handling (the places that currently set
peer.lastSyncAt after success should instead set the direction-specific cursor).
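The direction-specific cursor update could look like the sketch below. The field names `lastPushSyncAt` and `lastPullSyncAt` are the hypothetical replacements for the shared `lastSyncAt` suggested in the comment; `Peer` is a stand-in shape.

```typescript
type Peer = {
  url: string;
  lastPushSyncAt?: string; // assumed new field: advanced only after a successful push
  lastPullSyncAt?: string; // assumed new field: advanced only after a successful pull
};

function advanceCursor(peer: Peer, direction: "push" | "pull", at: string): Peer {
  // only the cursor for the direction that actually ran moves forward,
  // so a push-only run can no longer make the next pull skip remote updates
  return direction === "push"
    ? { ...peer, lastPushSyncAt: at }
    : { ...peer, lastPullSyncAt: at };
}
```

The sync handler would then pass `lastPushSyncAt` to `collectSyncData` and `lastPullSyncAt` to the `export?since=` fetch.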
```typescript
    const response = await fetch(`${peer.url}/agentmemory/mesh/receive`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(pushData),
      signal: AbortSignal.timeout(30000),
    });
    if (response.ok) {
      const body = (await response.json()) as { accepted: number };
      result.pushed = body.accepted || 0;
    } else {
      result.errors.push(`push failed: HTTP ${response.status}`);
    }
  } catch (err) {
    result.errors.push(`push failed: ${String(err)}`);
  }
}

if (direction === "pull" || direction === "both") {
  try {
    const response = await fetch(
      `${peer.url}/agentmemory/mesh/export?since=${peer.lastSyncAt || ""}`,
      { signal: AbortSignal.timeout(30000) },
```
Mesh sync never authenticates to the peer APIs.
Both outbound calls hit routes that are guarded by checkAuth in src/triggers/api.ts:1454-1502, but no auth header/token is sent here. In any secured deployment, push and pull will just return 401 and the mesh cannot sync.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/functions/mesh.ts` around lines 125 - 146, The mesh sync fetches never
send authentication, so update both outbound calls in src/functions/mesh.ts (the
POST to `${peer.url}/agentmemory/mesh/receive` that sends pushData and the GET
to `${peer.url}/agentmemory/mesh/export?since=${peer.lastSyncAt || ""}`) to
include an Authorization header: construct a token from the peer object (e.g.,
peer.authHeader || peer.authToken || peer.apiKey) and add headers: {
"Content-Type": "application/json", Authorization: `Bearer ${token}` } for the
POST (and the same Authorization header for the GET), preserving the existing
AbortSignal.timeout; also handle the case where no token exists (log/skip or
surface a clear error into result.errors).
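The suggested fix can be sketched as a small header helper. The `authToken` field is an assumption from the prompt's fallback list (`peer.authHeader || peer.authToken || peer.apiKey`), not a confirmed field on the real peer record:

```typescript
// Sketch: build headers for outbound mesh sync calls, attaching a bearer
// token when the peer record carries one. authToken is a hypothetical field
// name; the actual peer schema may differ.
type MeshPeer = { url: string; authToken?: string; lastSyncAt?: string };

function meshHeaders(peer: MeshPeer): Record<string, string> {
  const headers: Record<string, string> = {
    "Content-Type": "application/json",
  };
  if (peer.authToken) {
    headers.Authorization = `Bearer ${peer.authToken}`;
  }
  return headers;
}

// Usage in the push branch (illustrative):
// const response = await fetch(`${peer.url}/agentmemory/mesh/receive`, {
//   method: "POST",
//   headers: meshHeaders(peer),
//   body: JSON.stringify(pushData),
//   signal: AbortSignal.timeout(30000),
// });
```

When no token exists the helper simply omits the header; per the prompt, the caller should then either skip the peer or push a clear message into `result.errors`.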
🧩 Analysis chain
🌐 Web query:
What is the default redirect behavior of the Fetch API / Node.js fetch, and how do you disable redirects with the redirect option?
💡 Result:
- Default behavior (Fetch API + Node.js fetch): redirects are followed automatically because the default `redirect` mode is `"follow"`. [1][2]
- Disable redirects by setting `redirect` in the `fetch()` options:
  - Hard-disable (fail on any 3xx redirect): `await fetch(url, { redirect: "error" });` rejects with a network error if a redirect response is returned. [1]
  - Don't follow automatically (return the redirect response instead): `const res = await fetch(url, { redirect: "manual" });` yields, in browsers, a redirect response you can't fully inspect (`opaqueredirect`), while Node.js (Undici) returns the actual 3xx response instead of an `opaqueredirect`-filtered one. [1][3]

Sources:
- [1] MDN – RequestInit.redirect (defaults to `follow`; `error` rejects; `manual` doesn't auto-follow) (developer.mozilla.org)
- [2] MDN – Request.redirect (default is `follow`) (developer.mozilla.org)
- [3] Undici README – manual redirect behavior on the server (returns actual response) (github.com)
Citations:
- 1: https://developer.mozilla.org/en-US/docs/Web/API/RequestInit?utm_source=openai
- 2: https://developer.mozilla.org/en-US/docs/Web/API/Request/redirect?utm_source=openai
- 3: https://github.com/nodejs/undici?utm_source=openai
Disable redirects on outbound mesh fetches.
These requests trust only the original peer.url, but fetch follows 3xx redirects by default. A malicious peer can redirect /agentmemory/mesh/receive or /agentmemory/mesh/export to an internal target and bypass the SSRF filter entirely. Set redirect: "manual" (or "error" if no redirects are expected) to prevent automatic redirect following.
🛡️ Suggested hardening
const response = await fetch(`${peer.url}/agentmemory/mesh/receive`, {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify(pushData),
+ redirect: "manual",
signal: AbortSignal.timeout(30000),
});
...
const response = await fetch(
`${peer.url}/agentmemory/mesh/export?since=${peer.lastSyncAt || ""}`,
- { signal: AbortSignal.timeout(30000) },
+ { redirect: "manual", signal: AbortSignal.timeout(30000) },
);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/functions/mesh.ts` around lines 125 - 146, The outbound fetch calls to
`${peer.url}/agentmemory/mesh/receive` and `${peer.url}/agentmemory/mesh/export`
follow redirects by default, which allows a malicious peer to bypass SSRF
protections; update both fetch invocations in mesh.ts (the POST to
/agentmemory/mesh/receive and the GET to /agentmemory/mesh/export in the
pull/both branch) to include a redirect policy (e.g., add redirect: "manual" or
"error" to the fetch options) alongside the existing headers and signal so
redirects are not automatically followed.
```ts
const orders = data.steps.map((s, i) => s.order ?? i);
const uniqueOrders = new Set(orders);
if (uniqueOrders.size !== orders.length) {
  return { success: false, error: "duplicate step orders" };
}
for (const step of data.steps) {
  if (step.dependsOn) {
    for (const dep of step.dependsOn) {
      if (!uniqueOrders.has(dep)) {
        return { success: false, error: `step ${step.order ?? data.steps.indexOf(step)} depends on unknown order ${dep}` };
      }
    }
  }
}
```
Reject cyclic routine dependencies.
This only verifies that each referenced order exists. A self-edge or cycle still passes here, and mem::routine-run will then create a run that can never unblock. Please validate acyclicity before persisting the routine.
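The acyclicity check the reviewer asks for can be sketched with Kahn's algorithm over the step orders. The `Step` shape is inferred from the snippet above (`order`/`dependsOn`); the function itself is illustrative, not the project's code:

```typescript
// Sketch: detect self-edges and cycles among routine steps using Kahn's
// algorithm. Step shape is inferred from the validation snippet above.
type Step = { order?: number; dependsOn?: number[] };

function hasCycle(steps: Step[]): boolean {
  // Resolve each step's effective order (explicit order, else its index),
  // matching the `s.order ?? i` convention used by the existing validation.
  const orders = steps.map((s, i) => s.order ?? i);
  const deps = new Map<number, number[]>();
  steps.forEach((s, i) => deps.set(orders[i], s.dependsOn ?? []));

  // in-degree = number of unmet dependencies per step
  const indegree = new Map<number, number>();
  for (const o of orders) indegree.set(o, (deps.get(o) ?? []).length);

  // Repeatedly "complete" steps with no remaining dependencies.
  const queue = orders.filter((o) => indegree.get(o) === 0);
  let completed = 0;
  while (queue.length > 0) {
    const done = queue.pop() as number;
    completed++;
    for (const o of orders) {
      if ((deps.get(o) ?? []).includes(done)) {
        const d = (indegree.get(o) ?? 0) - 1;
        indegree.set(o, d);
        if (d === 0) queue.push(o);
      }
    }
  }
  // Any step that never completed sits on a cycle (or a self-edge).
  return completed !== orders.length;
}
```

Running this before persisting would reject e.g. `[{ order: 0, dependsOn: [1] }, { order: 1, dependsOn: [0] }]`, which the existing order-existence check accepts.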
…action handling, boolean normalization
- mesh.ts: add redirect: "error" to both outbound fetch calls to prevent SSRF via redirect
- routines.ts: create actions with status "blocked" directly when hasDeps (eliminates two-pass race); handle missing actions in routine-status as cancelled; progress.total uses run.actionIds.length
- mcp/server.ts: sentinel config accepts object values directly; normalize unreadOnly/dryRun for both JSON booleans and string values
- README.md: consistent bundle size (365KB) across both occurrences
Summary
Commit 1: 9 Core Orchestration Modules
Commit 2: 5 Advanced Modules (inspired by beads, original design)
Stats
Security
Test plan
- `npx tsdown` (354KB bundle)
- `npx vitest run`

Summary by CodeRabbit
New Features
Data & API
Tests
Chore