
feat: migrate agentmemory to iii v0.11 and add upgrade command #116

Merged
rohitg00 merged 19 commits into main from feat/iii-v011-migration-upgrade-cmd
Apr 15, 2026
Conversation


@rohitg00 rohitg00 commented Apr 12, 2026

Summary

  • migrate agentmemory from legacy iii-sdk v0.3 APIs to iii v0.11 patterns (registerWorker, object-style trigger, positional registerFunction)
  • update runtime configuration and trigger wiring for v0.11 semantics, including durable:subscriber queue triggers, state update helpers, middleware-based auth hooks, and stream event handling improvements
  • add agentmemory upgrade CLI command and document it in README for one-command dependency/runtime refresh
  • align tests and mocks with the new SDK invocation/register shapes

Key changes

  • SDK/API migration
    • init(...) -> registerWorker(...)
    • trigger(id, payload) / triggerVoid(...) -> trigger({ function_id, payload, action? })
    • registerFunction({ id }, handler) -> registerFunction(id, handler, options?)
  • State/stream updates
    • state::set payload now uses value
    • added/used state::update helper where appropriate
    • prefer stream::send for ephemeral viewer events and keep stream::set for persisted records
  • Trigger/runtime updates
    • queue subscribers migrated to durable:subscriber
    • added state reaction trigger for session observation count changes
    • auth middleware function wiring in API trigger registrations
  • Config migration
    • migrated iii-config.yaml and iii-config.docker.yaml to workers/adapters style
  • CLI enhancement
    • added agentmemory upgrade command that performs best-effort upgrades for JS deps, iii-sdk, optional cargo iii-engine, and docker image refresh
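
The call-shape changes above can be sketched with a minimal in-memory stand-in. This is illustrative only: `FakeSdk` and its signatures are a stub modeled on the shapes described in this PR, not the real iii-sdk surface.

```typescript
// Minimal stand-in for the iii v0.11 SDK call shapes described in this PR.
// The real SDK's signatures and behavior may differ.

type Handler = (payload: unknown) => unknown | Promise<unknown>;

class FakeSdk {
  private functions = new Map<string, Handler>();

  // v0.11: positional registerFunction(id, handler, options?)
  registerFunction(id: string, handler: Handler, _options?: Record<string, unknown>) {
    this.functions.set(id, handler);
  }

  // v0.11: one object argument instead of trigger(id, payload)
  async trigger(input: { function_id: string; payload: unknown; action?: unknown }) {
    const fn = this.functions.get(input.function_id);
    if (!fn) throw new Error(`No function: ${input.function_id}`);
    return fn(input.payload);
  }
}

const sdk = new FakeSdk();

// Before (v0.3): sdk.registerFunction({ id: "mem::echo" }, handler)
// After  (v0.11): positional id first
sdk.registerFunction("mem::echo", (payload) => payload);

// Before (v0.3): sdk.trigger("mem::echo", { msg: "hi" })
// After  (v0.11): object form with function_id / payload
const result = await sdk.trigger({ function_id: "mem::echo", payload: { msg: "hi" } });
```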

Test plan

  • CI=true pnpm --dir "/Users/rohitghumare/iii/agentmemory" run test
    • Result: 57/57 files passed, 654/654 tests passed

Summary by CodeRabbit

  • New Features

    • New CLI "upgrade" command to refresh local deps and runtime
    • API auth middleware for protected HTTP endpoints
    • Viewer WebSocket tries a direct endpoint first with robust fallback
  • Configuration Changes

    • Runtime config moved to named worker-based format
    • Observability defaults strengthened (sampling, metrics, console logs)
  • Chores

    • Bumped iii-sdk to ^0.11.0-next.8
    • README Quick Start adds Upgrade / Maintenance notes
  • Audit & Reliability

    • Widespread audit recording added for many state-changing operations

Migrate the codebase from legacy iii-sdk v0.3 APIs to v0.11 trigger/register patterns, update runtime configs, and align stream/state behavior with newer engine semantics. Add a new `agentmemory upgrade` CLI command so users can refresh dependencies and iii runtime components with one entrypoint.

coderabbitai Bot commented Apr 12, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough


Refactors SDK call shapes (register/trigger), adds widespread audit/safeAudit calls, updates StateKV API (adds update), migrates iii-config modules→workers, introduces CLI upgrade, improves viewer WS handling, and adapts tests/mocks to new calling conventions.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Configuration**<br>`iii-config.yaml`, `iii-config.docker.yaml` | Replaced top-level `modules` with `workers`; moved adapter selection to `adapter.name`/`adapter.config`; added/renamed workers (`iii-http`, `iii-state`, `iii-exec`, etc.); expanded observability settings. |
| **CLI & Package**<br>`src/cli.ts`, `package.json`, `README.md` | Added `upgrade` CLI command (runs package manager install, optional cargo install, optional docker pull) with spinner and optional-failure handling; bumped `iii-sdk` version; documented `upgrade` in README. |
| **SDK registration & triggers (bulk)**<br>`src/functions/...`, `src/mcp/server.ts`, `src/triggers/*`, `src/index.ts`, `src/health/monitor.ts` | Migrated `sdk.registerFunction({ id })` → `sdk.registerFunction("id")`; unified `sdk.trigger(...)` to object form `{ function_id, payload }`; removed inline registration descriptions across many files. |
| **Audit additions**<br>`src/functions/*`, `src/functions/audit.ts` | Inserted awaited `recordAudit(...)` calls at numerous state-mutating points; added exported `safeAudit` wrapper to catch/log audit failures. |
| **State KV API**<br>`src/state/kv.ts` | Changed trigger call shapes to object form; renamed `set(..., data)` → `set(..., value)`; added `StateKV.update(scope, key, ops)` to apply path-based updates via `state::update`. |
| **Trigger/action behavior adjustments**<br>`src/functions/observe.ts`, `src/functions/compress.ts`, `src/functions/remember.ts`, `src/functions/sliding-window.ts`, etc. | Replaced fire-and-forget `triggerVoid` with awaited `sdk.trigger({ function_id, payload, action: TriggerAction.Void() })`; added `TriggerAction` imports; adjusted stream/send payload shapes and concurrent trigger handling. |
| **API & events**<br>`src/triggers/api.ts`, `src/triggers/events.ts` | Added `middleware::api-auth` and attached it to HTTP triggers; strengthened request validation; moved some session updates to `kv.update`; converted some triggers to `durable:subscriber`; added `event::session::observation-count-changed`. |
| **MCP tooling & input parsing**<br>`src/mcp/server.ts` | Added input normalizers (`asNonEmptyString`, `asNumber`, `parseCsvList`); tightened argument validation/clamping; migrated trigger/register call shapes. |
| **Viewer UI**<br>`src/viewer/index.html` | Added direct WS endpoint with fallback and failure-tracking; suppressed join on direct sockets; enhanced stream event parsing and observation filtering (`looksLikeObservation`). |
| **Tests & mocks**<br>`test/helpers/mocks.ts`, `test/*` | Normalized test mocks to accept both string and `{ id }` forms of `registerFunction`, and both positional and object-form `trigger` calls. |
| **Types**<br>`src/types.ts` | Added required `sourceBucket: string` to `RetentionScore`. |
| **New helper**<br>`src/functions/audit.ts` | Added exported `safeAudit` helper that wraps `recordAudit` in try/catch and best-effort logging. |
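
The path-based `StateKV.update(scope, key, ops)` helper mentioned above can be sketched as follows. This is a hedged approximation of what "apply path-based updates" might look like; the real `state::update` op shape and semantics in `src/state/kv.ts` may differ.

```typescript
// Illustrative path-based update application, loosely modeled on the
// StateKV.update(scope, key, ops) helper described in the walkthrough.
type UpdateOp = { path: string[]; value: unknown };

function applyOps(obj: Record<string, unknown>, ops: UpdateOp[]): Record<string, unknown> {
  const out = structuredClone(obj); // never mutate the caller's object
  for (const op of ops) {
    let cursor: any = out;
    for (const key of op.path.slice(0, -1)) {
      cursor[key] ??= {}; // create intermediate objects as needed
      cursor = cursor[key];
    }
    cursor[op.path[op.path.length - 1]] = op.value;
  }
  return out;
}

const session = { id: "s1", counts: { observations: 3 } };
const updated = applyOps(session, [{ path: ["counts", "observations"], value: 4 }]);
```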

Sequence Diagram(s)

sequenceDiagram
  participant Dev as Developer (cli)
  participant FS as FileSystem
  participant PM as PackageManager
  participant RUST as Cargo
  participant DOCKER as DockerHub

  Dev->>FS: detect package.json / lockfile
  Dev->>PM: run install/refresh (npm or pnpm)
  PM-->>Dev: install result
  Dev->>PM: optional: upgrade package (`iii-sdk@latest`)
  PM-->>Dev: upgrade result
  Dev->>RUST: if cargo present -> cargo install iii-engine --force
  RUST-->>Dev: cargo result
  Dev->>DOCKER: if docker present -> docker pull iiidev/iii:latest
  DOCKER-->>Dev: pull result
  Dev-->>Dev: aggregate results -> print completion
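The aggregation step at the end of the diagram can be sketched as a pure function: mandatory steps abort the flow on failure, while steps marked optional only record their result. `runUpgrade` and the step names here are a simplified stand-in, not the actual `src/cli.ts` implementation.

```typescript
// Best-effort upgrade aggregation: mandatory failures abort, optional failures are recorded.
type Step = { name: string; optional?: boolean; run: () => boolean };

function runUpgrade(steps: Step[]): { ok: boolean; results: Record<string, boolean> } {
  const results: Record<string, boolean> = {};
  for (const step of steps) {
    const ok = step.run();
    results[step.name] = ok;
    if (!ok && !step.optional) return { ok: false, results }; // abort on mandatory failure
  }
  return { ok: true, results };
}

const outcome = runUpgrade([
  { name: "pnpm install", run: () => true },
  { name: "cargo install iii-engine --force", optional: true, run: () => false },
  { name: "docker pull iiidev/iii:latest", optional: true, run: () => true },
]);
```

Because the cargo step is optional, its failure is recorded but the flow still completes.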

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Poem

🐰 I hopped through callbacks, triggers, and queues,

Workers rearranged and configs got new hues.
An upgrade command shined with a spinner’s bright cheer,
Audits now whisper when state changes appear.
This rabbit applauds — the code feels less askew.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title check | ✅ Passed | The PR title 'feat: migrate agentmemory to iii v0.11 and add upgrade command' clearly and specifically summarizes the two main changes: SDK migration and new CLI feature. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


  • Drop pnpm-lock.yaml from this branch to keep the migration PR focused on source and config changes only.
  • Update package-lock.json so npm ci in CI matches package.json after the iii-sdk migration.

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 14

🧹 Nitpick comments (6)
src/functions/flow-compress.ts (1)

84-88: Reuse one timestamp for createdAt and updatedAt.

Capture one ISO timestamp and assign it to both fields to avoid tiny clock skew and align with repo convention.

As per coding guidelines: Capture timestamps once with new Date().toISOString() and reuse the value rather than calling multiple times.
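
The guideline in miniature, using a hypothetical memory record:

```typescript
// Capture the timestamp once so createdAt and updatedAt can never drift
// within a single operation. The record shape here is illustrative.
const now = new Date().toISOString();
const memory = { id: "mem-example", createdAt: now, updatedAt: now };
```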

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/functions/flow-compress.ts` around lines 84 - 88, The memory object is
setting createdAt and updatedAt with separate new Date().toISOString() calls;
capture a single timestamp into a local const (e.g., now or ts) before
constructing the memory and assign that same value to both createdAt and
updatedAt so they match; update the code around the generateId("mem")/memory
creation to use that single timestamp variable for both fields.
src/functions/governance.ts (1)

48-116: Consider parallelizing independent deletes in bulk path.

kv.delete calls in the candidate loop are independent and can be batched with Promise.all to reduce latency on large sets.

As per coding guidelines: Perform parallel operations where possible using Promise.all for independent kv writes/reads in TypeScript.
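
A minimal sketch of the suggested batching, with `kv` as an in-memory stub (the real `kv.delete` signature may differ):

```typescript
// Independent deletes can run concurrently instead of one await per iteration.
const kv = {
  deleted: [] as string[],
  async delete(id: string) {
    this.deleted.push(id);
  },
};

const candidates = [{ id: "m1" }, { id: "m2" }, { id: "m3" }];

// Before: for (const m of candidates) await kv.delete(m.id);
// After: the whole batch settles together
await Promise.all(candidates.map((m) => kv.delete(m.id)));
```

For very large candidate sets, the same pattern can be applied per chunk to bound concurrency.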

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/functions/governance.ts` around lines 48 - 116, The current sequential
deletes in the mem::governance-bulk handler iterate over candidates and await
each kv.delete, which is slow for large sets; change that loop to run deletes in
parallel by creating an array of delete promises from candidates.map(m =>
kv.delete(KV.memories, m.id)) and await Promise.all on that array (optionally
chunking into safe-sized batches if you expect very large lists). Keep the audit
call (recordAudit) and ctx.logger.info after awaiting the deletions so they run
only once deletes complete, and ensure you preserve the dryRun branch and
returned shape.
src/functions/checkpoints.ts (1)

118-137: Avoid listing all actions inside each linked-action iteration.

await kv.list<Action>(KV.actions) is inside the per-action loop. Pulling it once before the loop reduces repeated full scans and keeps resolve latency stable.
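
The hoisting pattern can be sketched like this: list once, index by id, then resolve each linked action against the map. The `Action` shape and `listActions` stub are illustrative, not the real `kv.list` API.

```typescript
// One full scan before the loop, then O(1) lookups per linked action.
type Action = { id: string; requires?: string[]; status: string };

async function listActions(): Promise<Action[]> {
  // Stand-in for await kv.list<Action>(KV.actions)
  return [
    { id: "a1", status: "done" },
    { id: "a2", requires: ["a1"], status: "blocked" },
  ];
}

const actions = await listActions(); // hoisted: runs once, not per iteration
const actionMap = new Map(actions.map((a) => [a.id, a]));

const linkedActionIds = ["a2"];
const allRequiresMet = linkedActionIds.every((id) =>
  (actionMap.get(id)?.requires ?? []).every((req) => actionMap.get(req)?.status === "done"),
);
```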

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/functions/checkpoints.ts` around lines 118 - 137, The code currently
calls await kv.list<Action>(KV.actions) inside the per-action loop over
checkpoint.linkedActionIds (inside the withKeyedLock for actionId), causing
repeated full scans; move the call out of the loop and build actionMap once
before iterating linkedActionIds so the per-action logic (inside withKeyedLock
for each actionId) uses the precomputed actionMap to check requires
dependencies; update references to actionMap and remove the in-loop kv.list call
in the block that computes allRequiresMet.
src/functions/snapshot.ts (1)

59-94: Capture one timestamp per snapshot-create execution.

This path creates multiple ISO timestamps for one logical operation, which can produce slight metadata skew.

Proposed refactor
-        const state = {
+        const nowIso = new Date().toISOString();
+        const state = {
           version: VERSION,
-          timestamp: new Date().toISOString(),
+          timestamp: nowIso,
           sessions,
           memories,
           graphNodes,
           observations,
         };
@@
-        const message = data?.message || `Snapshot ${new Date().toISOString()}`;
+        const message = data?.message || `Snapshot ${nowIso}`;
@@
-          createdAt: new Date().toISOString(),
+          createdAt: nowIso,

As per coding guidelines src/**/*.ts: "Capture timestamps once with new Date().toISOString() and reuse the value rather than calling multiple times".

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/functions/snapshot.ts` around lines 59 - 94, This code creates multiple
ISO timestamps for one snapshot; capture a single timestamp once (e.g., const ts
= new Date().toISOString()) before building the state and reuse ts for
state.timestamp and meta.createdAt (and use it in the default message if needed)
so all snapshot metadata shares the exact same timestamp; update the snapshot
function where state, message and meta are created (symbols: state, message,
meta, generateId) to reference the single ts variable instead of calling new
Date().toISOString() multiple times.
test/actions.test.ts (1)

33-49: Consider reusing the shared test SDK mock to avoid drift.
This local mockSdk() duplicates logic already maintained in test/helpers/mocks.ts and can diverge over time.

♻️ Suggested consolidation
 import { describe, it, expect, beforeEach, vi } from "vitest";
+import { mockSdk } from "./helpers/mocks.js";
 ...
-function mockSdk() {
-  const functions = new Map<string, Function>();
-  return {
-    registerFunction: (idOrOpts: string | { id: string }, handler: Function) => {
-      const id = typeof idOrOpts === "string" ? idOrOpts : idOrOpts.id;
-      functions.set(id, handler);
-    },
-    registerTrigger: () => {},
-    trigger: async (idOrInput: string | { function_id: string; payload: unknown }, data?: unknown) => {
-      const id = typeof idOrInput === "string" ? idOrInput : idOrInput.function_id;
-      const payload = typeof idOrInput === "string" ? data : idOrInput.payload;
-      const fn = functions.get(id);
-      if (!fn) throw new Error(`No function: ${id}`);
-      return fn(payload);
-    },
-  };
-}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/actions.test.ts` around lines 33 - 49, The local mockSdk() in this test
duplicates an existing shared test SDK mock and risks drift; replace its usage
by importing and reusing the shared mock from test/helpers/mocks.ts (or
delegating to the exported factory there) instead of redefining mockSdk()
locally, ensuring registerFunction, registerTrigger and trigger behavior comes
from the single source of truth so the test uses the shared implementation.
src/index.ts (1)

131-142: Consider simplifying the type narrowing for getMeter.

The current implementation uses verbose type assertions. A cleaner approach would extract this into a helper or use a single type guard.

♻️ Optional simplification
-  const meterAccessor =
-    "getMeter" in sdk &&
-    typeof (sdk as unknown as { getMeter?: (name: string) => unknown }).getMeter ===
-      "function"
-      ? (
-          sdk as unknown as {
-            getMeter: (name: string) => unknown;
-          }
-        ).getMeter.bind(sdk) as (name: string) => unknown
-      : undefined;
-
-  initMetrics(meterAccessor as ((name: string) => import("@opentelemetry/api").Meter) | undefined);
+  const meterAccessor = typeof (sdk as any).getMeter === "function"
+    ? (sdk as any).getMeter.bind(sdk)
+    : undefined;
+
+  initMetrics(meterAccessor as ((name: string) => import("@opentelemetry/api").Meter) | undefined);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/index.ts` around lines 131 - 142, The type-narrowing around sdk.getMeter
is verbose—refactor by extracting a small type guard/helper (e.g.,
isGetMeterAvailable(sdk): sdk is { getMeter: (name: string) => unknown }) and
use it to set meterAccessor cleanly; replace the long assertion block that
declares meterAccessor with a clear conditional using the guard (referencing
sdk, getMeter and meterAccessor) and then pass meterAccessor to initMetrics as
before.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@README.md`:
- Around line 299-300: Remove the mutating "npx `@agentmemory/agentmemory`
upgrade" step from the 30-second quick-start block and instead add a new
"Upgrade / Maintenance" section later in the README that contains the upgrade
command, an explicit warning that it mutates the current workspace, and a short
summary of what it does (upgrades JS deps, may run cargo install iii-engine
--force, and pulls Docker images); reference the actual command string "npx
`@agentmemory/agentmemory` upgrade" and the implementation at src/cli.ts:544-595
in the new section so readers know where to find details.

In `@src/cli.ts`:
- Around line 558-615: The upgrade flow prints success unconditionally despite
runCommand returning false on failure; update runUpgrade (or the surrounding
logic in src/cli.ts) to capture the boolean result of each mandatory runCommand
call (e.g., the package manager "install" call via runCommand(pnpmBin|npmBin,
["install"], ...), the cargo install call via runCommand(cargoBin, ["install",
"iii-engine", "--force"], ...), etc.), treat any false as a failure by aborting
the flow (returning/throwing or calling process.exit(1)) and avoid calling
p.note("Upgrade flow completed.") on failure; preserve the existing optional
behavior for calls marked optional: true so their false results do not abort.

In `@src/functions/branch-aware.ts`:
- Around line 146-149: Remove the stray closing brace in the sessions.filter
callback: in the matching assignment (the filter over sessions) the condition
s.cwd.startsWith(projectRoot + "/") incorrectly contains an extra '}' — update
the callback used to compute matching (the anonymous function passed to
sessions.filter) to remove that stray '}' so the startsWith expression is
syntactically correct and the file parses.

In `@src/functions/privacy.ts`:
- Around line 32-35: The mem::privacy handler registered via
sdk.registerFunction currently assumes data.input is a string; add an explicit
guard in that async callback to validate data and typeof data.input === "string"
before calling stripPrivateData (use the sdk.registerFunction callback, the data
parameter, and stripPrivateData identifiers to locate the code). If validation
fails, return a safe error response or normalized fallback (e.g., { output: "" }
or an error object) rather than calling stripPrivateData on a malformed value,
so the function never throws on bad payloads.

In `@src/functions/team.ts`:
- Line 79: The defaulting for the variable limit uses || which treats 0 as
falsy; change the assignment that reads from data?.limit so it uses nullish
coalescing instead of || (i.e. use the ?? operator when assigning to limit from
data?.limit) to preserve explicit zero values for limit.
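
The fix in one line: `||` replaces an explicit 0 with the default, while `??` only falls back on `null`/`undefined`.

```typescript
// A caller explicitly asking for zero results:
const data: { limit?: number } = { limit: 0 };

const limitOr = data.limit || 50;      // falsy 0 is dropped, becomes 50
const limitNullish = data.limit ?? 50; // explicit 0 is preserved
```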

In `@src/health/monitor.ts`:
- Around line 57-58: There is a syntax error in the timeout branch where
setTimeout rejects with new Error("timeout" }) — remove the stray closing brace
so the rejection constructs a valid Error; locate the Promise created in
monitor.ts (the new Promise<never>((_, reject) => setTimeout(...,
KV_PROBE_TIMEOUT)) block) and change the reject call to use new Error("timeout")
without the extra '}' so the file parses cleanly.

In `@src/mcp/server.ts`:
- Around line 125-131: The handler currently forwards raw args to sdk.trigger
(see fileList computation, sdk.trigger call and args.sessionId) without runtime
validation; update this MCP boundary to validate and normalize each incoming
arg: check typeof for strings/numbers/booleans, parse CSV fields by splitting on
"," then trim() and filter(Boolean) (for files/expandIds), coerce numeric fields
(maxDepth, limit, minConfidence, expiresInMs) with Number() and fallbacks,
coerce booleans explicitly, and build a whitelisted payload object containing
only validated fields (sessionId as a string, files: string[], and validated
numeric/boolean fields) before calling sdk.trigger; apply the same pattern to
the other similar handler locations referenced (lines noted in the review).
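
A hedged sketch of the normalizers this comment names (`asNonEmptyString`, `asNumber`, `parseCsvList`); the actual implementations in `src/mcp/server.ts` may differ in detail.

```typescript
// Validate and normalize untrusted MCP arguments before forwarding them.
function asNonEmptyString(v: unknown): string | undefined {
  return typeof v === "string" && v.trim().length > 0 ? v.trim() : undefined;
}

function asNumber(v: unknown, fallback: number): number {
  const n = Number(v);
  return Number.isFinite(n) ? n : fallback;
}

function parseCsvList(v: unknown): string[] {
  return typeof v === "string"
    ? v.split(",").map((s) => s.trim()).filter(Boolean)
    : [];
}

// Build a whitelisted payload instead of forwarding raw args verbatim.
const args = { sessionId: " s1 ", files: "a.ts, b.ts,", limit: "0" };
const payload = {
  sessionId: asNonEmptyString(args.sessionId),
  files: parseCsvList(args.files),
  limit: asNumber(args.limit, 20),
};
```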

In `@src/triggers/api.ts`:
- Around line 903-910: The condition checking request body memoryIds has a
syntax error due to an extra '}' inside Array.isArray; in the auth route where
checkAuth(req, secret) is called (see the authErr handling), fix the memoryIds
validation by removing the stray brace so the condition reads that
req.body?.memoryIds exists AND Array.isArray(req.body.memoryIds); ensure the
function/method containing this check compiles after correcting the parenthesis.
- Around line 1490-1495: There's a syntax error in the date parsing block: the
call to Number.isNaN contains an extra `}` which breaks parsing. Edit the block
that computes `parsed = new Date(since).getTime()` and change the conditional to
call Number.isNaN(parsed) (remove the stray brace) so the validation reads: if
(Number.isNaN(parsed)) { return { status_code: 400, body: { error: "Invalid
'since' date format" } }; } ensuring the `since`/`parsed` check is syntactically
correct.
- Around line 349-358: The condition validating req.body.terms contains a stray
closing brace inside the .every() callback causing a parse error; edit the
validation in the handler where req.body.terms is checked so the .every((t:
unknown) => typeof t === "string") callback has no extra "}" and the overall if
parentheses remain balanced, leaving the returned 400 response unchanged.
- Around line 1867-1871: The anonymous HTTP handler that returns "{ status_code:
statusCode, body: result }" has an extra closing brace before the call to
sdk.registerTrigger; remove the stray "}" so the handler's closing looks like
"});" instead of " } });" to fix the syntax error and keep the subsequent
sdk.registerTrigger({ type: "http", function_id: "api::lesson-save", ... }) call
intact.

In `@src/viewer/index.html`:
- Around line 2824-2826: The incoming direct-format stream messages are wrapped
as { event: { type: 'event', data, event_type } } but handleStreamEvent only
handles evt.type === 'create'|'update'|'sync', so these messages are dropped;
fix by either updating handleStreamEvent to recognize and handle the 'event'
type (e.g., branch on evt.type === 'event' and use evt.event_type/evt.data to
trigger the same UI updates as other events) or change the mapping at the caller
(where msg.event_type is detected) to translate the direct format into an
existing type such as 'create' (e.g., set event.type = 'create') before calling
handleStreamEvent; locate handleStreamEvent and the code that calls
handleStreamEvent({ event: ... }) to implement one of these two fixes.
- Around line 2797-2802: The current try/catch around new
WebSocket(WS_DIRECT_URL) won't fallback on connection failures because those are
asynchronous; update the logic so the fallback to WS_URL is driven by runtime
connection events: add a boolean flag (e.g., directFailed) and counters for
consecutive direct endpoint failures, set directFailed = false on
state.ws.onopen, increment a failure counter in state.ws.onclose/onerror when
state.ws.__direct is true, and once the counter exceeds a threshold mark
directFailed = true and create a new WebSocket(WS_URL) (with state.ws.__direct =
false); also ensure reconnect logic checks directFailed before attempting to
reconnect to WS_DIRECT_URL.
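
The fallback policy described above can be isolated as a pure state machine (no real sockets), which also makes it testable. The class name, method names, and threshold are illustrative, not the viewer's actual code.

```typescript
// Count consecutive direct-endpoint failures; past a threshold, switch to the fallback URL.
class WsEndpointPolicy {
  private failures = 0;
  directFailed = false;

  constructor(private readonly threshold = 3) {}

  onOpen() {
    // A successful connection resets the failure tracking.
    this.failures = 0;
    this.directFailed = false;
  }

  onCloseOrError(wasDirect: boolean) {
    if (!wasDirect) return; // only direct-socket failures count against the direct endpoint
    this.failures += 1;
    if (this.failures >= this.threshold) this.directFailed = true;
  }

  nextUrl(directUrl: string, fallbackUrl: string): string {
    return this.directFailed ? fallbackUrl : directUrl;
  }
}

const policy = new WsEndpointPolicy(2);
policy.onCloseOrError(true);
policy.onCloseOrError(true);
const url = policy.nextUrl("ws://direct", "ws://fallback");
```

The real reconnect loop would call `onOpen`/`onCloseOrError` from the socket's event handlers and consult `nextUrl` before each attempt.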

In `@test/retention.test.ts`:
- Around line 52-60: The SDK mock in the test accepts legacy string forms and
masks missing migration; update the mock so registerFunction and trigger enforce
the v0.11 object contract: change registerFunction to only accept an options
object with { id: string } and throw if a string is passed, and change trigger
to only accept { function_id: string; payload: unknown } (and throw if a
string/data pair is used); update the mock's internal lookups (functions Map) to
use the object ids and ensure errors are thrown when legacy signatures are used
so tests fail until production call sites are migrated (refer to
registerFunction, trigger, and the functions Map in the test).

---

Outside diff comments:
In `@src/functions/actions.ts`:
- Around line 8-92: The handlers mem::action-create, mem::action-update and
mem::action-edge-create persist state but never call recordAudit(), so add
recordAudit() calls after successful mutations: in mem::action-create call
recordAudit({ type: "action.create", actionId: action.id, actor: data.createdBy
|| "unknown", payload: { action, edges: pendingEdges } }) just after
kv.set(KV.actions, ...) and after creating each edge (or once for all edges) to
record the created action and edges; similarly in mem::action-update call
recordAudit({ type: "action.update", actionId: action.id, actor: <updater>,
payload: { before: prevAction, after: action } }) immediately after persisting
updates; and in mem::action-edge-create call recordAudit({ type:
"actionEdge.create", edgeId: edge.id, actor: <actor>, payload: edge }) after
kv.set(KV.actionEdges, ...). Ensure you import/available recordAudit and include
relevant identifiers (action.id, edge.id, pendingEdges, KV.actions,
KV.actionEdges) and only record on successful persistence.

In `@src/functions/auto-forget.ts`:
- Around line 43-161: The auto-forget flow performs deletions/updates (calls to
kv.delete and kv.set for KV.memories and KV.observations, and the mutation
older.isLatest = false) without audit entries; add recordAudit(...) calls for
every non-dry-run state change inside the loops where you delete a memory
(kv.delete(KV.memories, mem.id)), mark an older memory non-latest and persist it
(kv.set(KV.memories, older.id, older)), and when deleting observations
(kv.delete(KV.observations(sessions[i].id), obs.id)). For each call include the
operation type ("delete" or "update"), the resource kind ("memory" or
"observation"), the resource id, a brief reason (e.g., "auto-forget TTL" /
"auto-forget contradiction" / "auto-forget low-value observation"), and any
relevant metadata (similarity/olderId/sessionId/timestamp) so audits can tie
back to the result entries; only invoke recordAudit when dryRun is false.

In `@src/functions/cascade.ts`:
- Around line 7-79: The cascade-update handler ("mem::cascade-update") mutates
state via kv.set for GraphNode and GraphEdge (and potentially other memory
updates) but does not call recordAudit(); update the function to call
recordAudit() immediately after each state-changing kv.set (e.g., after updating
node in the GraphNode loop and after updating edge in the GraphEdge loop) with
relevant context (operation name "mem::cascade-update", resource id
node.id/edge.id, resource type "GraphNode"/"GraphEdge", prior/updated fields or
at least a concise change summary, and the actor/context), and also add
recordAudit() for any future kv.sets that change Memory state (references:
superseded, nodes, edges, and the kv.set calls in the function) so all mutations
are auditable.

In `@src/functions/checkpoints.ts`:
- Around line 8-204: The flows in mem::checkpoint-create,
mem::checkpoint-resolve and mem::checkpoint-expire mutate checkpoint and action
state but never call recordAudit(), so add audit records for each state-changing
operation: in mem::checkpoint-create call recordAudit() after
kv.set(KV.checkpoints, checkpoint.id, checkpoint) to record the checkpoint
creation (entityType "checkpoint", action "create", include checkpoint.id and
metadata like name/type); in the same function after you set an action status to
"blocked" (inside the loop where you update action.status to "blocked") call
recordAudit() for that action (entityType "action", action "status-change",
include previousStatus and newStatus and checkpoint id); in
mem::checkpoint-resolve call recordAudit() after updating the checkpoint (action
"resolve" with resolvedBy/result and old/new status) and for each action you
unblocked (where you change action.status from "blocked" to "pending") call
recordAudit() for that action (include which checkpoint caused the unblock); and
in mem::checkpoint-expire call recordAudit() when you mark a checkpoint
"expired" (action "expire" with timestamps). Ensure you import/use the existing
recordAudit function and include relevant identifiers (checkpoint.id,
action.id), previous and new statuses, and resolvedBy/result where applicable.

In `@src/functions/claude-bridge.ts`:
- Around line 77-96: The mem::claude-bridge-read handler writes state to
KV.claudeBridge via kv.set but does not call recordAudit(), so add a
recordAudit() invocation immediately after the kv.set in the read path (same way
the sync path does) to record the "last-read" update; reference the
mem::claude-bridge-read function, the kv.set(KV.claudeBridge, "last-read", ...)
call and ensure you log the same audit metadata (timestamp, actor/context from
getContext(), and the sections/lineCount) using recordAudit().

In `@src/functions/compress.ts`:
- Around line 130-162: The post-persist stream trigger calls (the two
sdk.trigger(...) calls using STREAM.name/STREAM.group/STREAM.viewerGroup and
TriggerAction.Void()) must not make the whole operation fail because KV.set and
getSearchIndex().add already persisted the observation; change the logic so the
persistence (await kv.set(...) and getSearchIndex().add(...)) remains awaited
and committed, then invoke the two sdk.trigger calls in a non-fatal way — either
run them via Promise.allSettled or wrap each sdk.trigger(...) in its own
try/catch, log any errors (including context: data.sessionId,
data.observationId) and do not rethrow; ensure the overall function still
returns success: true when persistence succeeded even if triggers fail.
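
The non-fatal post-persist pattern can be sketched with stubs for `kv` and `sdk` (both stand-ins, not the real APIs): persistence is awaited and committed first, then the stream triggers run via `Promise.allSettled` so a failing trigger cannot fail the operation.

```typescript
// Persist first; then fan out best-effort notifications that must not rethrow.
const kv = {
  saved: false,
  async set() {
    this.saved = true;
  },
};
const sdk = {
  async trigger(input: { function_id: string }) {
    if (input.function_id === "stream::viewer") throw new Error("viewer stream down");
  },
};

await kv.set(); // committed regardless of trigger outcomes

const settled = await Promise.allSettled([
  sdk.trigger({ function_id: "stream::group" }),
  sdk.trigger({ function_id: "stream::viewer" }),
]);
for (const r of settled) {
  if (r.status === "rejected") console.warn("stream trigger failed:", r.reason);
}
const success = kv.saved; // still true even though one trigger failed
```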

In `@src/functions/consolidate.ts`:
- Around line 69-210: The consolidate function (sdk.registerFunction
"mem::consolidate") performs state-changing kv.set calls when evolving an
existing Memory (existingMatch -> evolved) and when creating a new Memory
(memory) but does not call recordAudit(); update both branches to call
recordAudit() for each created or updated Memory after kv.set (or before
depending on your audit pattern) so every kv.set(KV.memories, ...) invocation
for existingMatch, evolved, and memory is accompanied by a recordAudit() call
that records the operation, the Memory id, actor/context (ctx), and payload;
locate kv.set usages inside the evolve branch (where existingMatch.isLatest is
set and evolved is constructed) and the create branch (memory) and add
recordAudit() around those writes.

In `@src/functions/diagnostics.ts`:
- Around line 554-556: The heal path uses withKeyedLock keyed as
`mem:lease:${lease.actionId}` which doesn't match the lease lifecycle handlers
that use the `mem:action:${actionId}` namespace, allowing concurrent conflicting
mutations; change the lock key in the heal code (the withKeyedLock call) to use
`mem:action:${lease.actionId}` (or otherwise derive the same actionId key used
by acquire/release/renew/cleanup) so all lease-related operations lock on the
same `mem:action:${actionId}` namespace and prevent races.
- Around line 409-783: The mem::heal function performs many kv.set/kv.delete
mutations (see the withKeyedLock blocks inside the actions handling, leases
handling, sentinels handling, sketches handling, signals deletion, orphaned
lease deletion, and memories handling) without creating audit entries; update
every state-changing operation to call recordAudit(...) (or wrap the mutation in
the existing audit helper) so each kv.set and kv.delete is accompanied by a
corresponding audit record describing the change (entity id, type, previous/new
statuses, and reason like "mem::heal"), ensuring you add the recordAudit calls
inside the same critical sections (e.g., inside the withKeyedLock callbacks) and
for direct deletions so that all mutations performed by mem::heal are auditable.

In `@src/functions/enrich.ts`:
- Around line 19-53: The sdk.trigger calls in the mem::enrich function use
positional args which v0.11 doesn't support; update both calls to use the
object-form { function_id, payload } instead of positional parameters — replace
the sdk.trigger invocation for "mem::file-context" (currently passing {
sessionId, files } as second arg) with sdk.trigger({ function_id:
"mem::file-context", payload: { sessionId: data.sessionId, files: data.files }
}) and likewise replace the sdk.trigger call for "mem::search" (currently
passing { query, limit }) with sdk.trigger({ function_id: "mem::search",
payload: { query: searchQueries.join(" "), limit: 5 } }) so the rest of the
logic in mem::enrich (fileContextPromise and searchPromise) continues to work
unchanged.
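
The positional-to-object migration described above can be sketched against a mock dispatcher; only the `{ function_id, payload }` argument shape comes from the migration notes, while the handler table and return value are hypothetical:

```typescript
// Minimal mock of the v0.11 object-style trigger.
type TriggerArgs = { function_id: string; payload: Record<string, unknown> };

const handlers: Record<string, (payload: Record<string, unknown>) => unknown> = {
  "mem::search": (p) => ({ query: p.query, results: [] }),
};

const sdk = {
  trigger({ function_id, payload }: TriggerArgs): unknown {
    const fn = handlers[function_id];
    if (!fn) throw new Error(`unknown function: ${function_id}`);
    return fn(payload);
  },
};

// v0.3 positional style: sdk.trigger("mem::search", { query, limit })
// v0.11 object style:
const result = sdk.trigger({
  function_id: "mem::search",
  payload: { query: "lock ordering", limit: 5 },
});
```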

In `@src/functions/evict.ts`:
- Around line 38-158: The evict handler registered in
sdk.registerFunction("mem::evict") performs multiple kv.delete operations but
doesn't call recordAudit; update each deletion branch (session deletions in the
stale session block, observation deletions in lowImportanceObs and capEvictions
blocks, and memory deletions in expiredMemories and nonLatestMemories blocks) to
invoke recordAudit(ctx, { resource: "<type>", id: <id>, action: "delete",
reason: "<brief reason>", dryRun }) (or similar audit payload) alongside the
existing delete logic so every state-changing kv.delete call records an audit
entry referencing the relevant ids (session.id, o.id, mem.id) and include
dryRun/status metadata; ensure recordAudit is awaited (or errors handled)
consistent with existing error handling patterns.

In `@src/functions/export-import.ts`:
- Around line 254-318: The import path in export-import.ts fails to clear or
restore project profiles: when strategy === "replace" add deletion of existing
profiles (e.g., iterate kv.list<Profile>(KV.profiles).catch(() => []) and
kv.delete(KV.profiles, profile.id)) alongside the other KV clears, and update
the import/restore logic later in the file (the restore block around the 378-516
region) to write back serialized profiles into KV.profiles (use the same Profile
type and IDs) so exported profiles are re-created during import; reference
KV.profiles, the Profile type, and the replace branch in export-import.ts when
making changes.
- Around line 159-169: The mem::import handler currently dereferences
importData.version without first validating data.exportData; add an explicit input guard at
the start of the registered function to verify data.exportData exists and is an
object with a string version before checking supportedVersions. In the
sdk.registerFunction("mem::import", ...) body, validate data.exportData (e.g.,
if (!data.exportData || typeof data.exportData !== "object" || typeof
data.exportData.version !== "string") ) and return/throw a clear validation
error (following the project input-validation/audit pattern) instead of
proceeding to the supportedVersions.has(importData.version) check; keep the
supportedVersions logic and subsequent import flow unchanged.
- Around line 254-516: The import handler currently mutates many KV scopes but
never creates an audit entry; add a single await recordAudit(...) call after the
successful import completes (immediately after the existing
ctx.logger.info("Import complete", { strategy, ...stats }) and before the final
return) that records the operation name, the strategy and the final counts (the
stats object) so the audit log captures the state-changing import; reference the
existing symbols recordAudit, ctx.logger.info, strategy and stats in your call
and ensure it runs only on success.
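
The input guard suggested for mem::import can be sketched as follows; the version set and the returned error shape are illustrative, and the real handler should follow the project's validation/audit pattern:

```typescript
type ImportResult = { success: boolean; error?: string };

// Illustrative: the actual supported versions live in export-import.ts.
const supportedVersions = new Set(["1", "2"]);

function validateImport(data: { exportData?: unknown }): ImportResult {
  const ed = data?.exportData;
  // Guard before dereferencing: exportData must be an object with a string version.
  if (!ed || typeof ed !== "object" || typeof (ed as { version?: unknown }).version !== "string") {
    return { success: false, error: "exportData with a string version is required" };
  }
  const version = (ed as { version: string }).version;
  if (!supportedVersions.has(version)) {
    return { success: false, error: `unsupported version: ${version}` };
  }
  return { success: true };
}
```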

In `@src/functions/file-index.ts`:
- Around line 21-46: Validate the incoming payload at the top of the
"mem::file-context" handler: check that data?.sessionId is a non-empty string
and data?.files is an array (and coerce/ignore non-string entries), and if
validation fails return an empty FileHistory[] (or safe default) instead of
proceeding; also call recordAudit(...) to record the invalid request (use the
same sessionId if present or a placeholder) before returning. Place the checks
before any kv.list or obsCache work so you avoid exceptions when data.files is
missing or malformed.

In `@src/functions/flow-compress.ts`:
- Around line 26-120: The mem::flow-compress handler currently persists the
compressed memory with kv.set(memory.id, memory) but never records an audit;
after successfully saving the memory (the memory object and id from
generateId("mem")), call recordAudit(...) to log the state change (include
actor/context, action like "create_memory" or "compress_flow", the memory id,
and the memory metadata such as metadata.flowCompressed and actionCount). Ensure
recordAudit is awaited and placed before returning success so the audit is
persisted on success; keep the existing try/catch behavior and only call
recordAudit in the successful path after kv.set and before returning the success
response.

In `@src/functions/leases.ts`:
- Around line 158-212: The mem::lease-cleanup registered function mutates Lease
and Action records without audit entries; update the async callback passed to
withKeyedLock so that after setting currentLease.status = "expired" and saving
it via kv.set(KV.leases, ...), you call recordAudit(...) recording the lease id,
previous->new state, actor/context, and reason; likewise, if you modify
action.status/assignedTo and kv.set(KV.actions, ...), call recordAudit(...) for
that Action change as well; place these recordAudit calls immediately after each
kv.set and include identifiers (currentLease.id, action.id) and the mutation
details so audits are captured for both lease expiry and action reassignment.
- Around line 122-157: The mem::lease-renew handler mutates lease state
(activeLease.expiresAt and renewedAt) but does not call recordAudit; update the
function inside the withKeyedLock block so that after updating activeLease and
before/after kv.set(KV.leases, activeLease.id, activeLease) you call
recordAudit(...) to record the mutation (include the actor info such as
data.agentId and data.actionId, the resource id activeLease.id, the operation
type like "renew", and the payload or diff showing expiresAt/renewedAt); ensure
recordAudit is awaited and that the audited entry contains enough context to
trace who performed the renewal and the new timestamps.
- Around line 11-79: The mem::lease-acquire path creates a Lease and mutates an
Action without calling recordAudit(); add audit records for both the new lease
and the updated action. Before writing the lease, call recordAudit for the
KV.leases entity (use lease.id and include the created lease payload and an
operation type like "create"); before updating the action, capture the prior
action state, then call recordAudit for KV.actions with action.id including
before/after state and an operation type like "update", then persist both kv.set
calls; ensure you use the existing recordAudit function signature used elsewhere
in the codebase so audits are consistently recorded.
- Around line 81-120: The mem::lease-release handler updates Lease and Action
state without creating audit records; inside the withKeyedLock callback (the
function registered as "mem::lease-release") add recordAudit() calls after
persisting the mutated lease (after kv.set(KV.leases, activeLease.id,
activeLease)) and after persisting the action (after kv.set(KV.actions,
action.id, action)) to record the change; for each recordAudit include the
entity type/ID (lease.id or action.id), the actor (data.agentId or system if
appropriate), the operation ("lease.release" / "action.update"), and previous vs
new state (status, expiresAt/assignedTo/result) so audits capture who changed
what for Lease and Action mutations.

In `@src/functions/lessons.ts`:
- Around line 206-245: The decay sweep currently mutates lessons and
soft-deletes some but never calls recordAudit(); update the
mem::lesson-decay-sweep handler to create an audit entry for every lesson
change: when you detect a changed lesson (where you currently set
lesson.confidence/lastDecayedAt/updatedAt and lesson.deleted), call
recordAudit(...) with the lesson id, resource type (lesson), an action like
"decay" or "soft-delete" (choose "soft-delete" when lesson.deleted is set,
otherwise "decay"), include the prior values (previous confidence, previous
deleted flag) and the new values (new confidence, deleted flag), and include
metadata such as actor: "system" and reason: "decay-sweep" before or immediately
after pushing the lesson into dirty; then continue to persist with
kv.set(KV.lessons, l.id, l) as you do now so the audit reflects the exact
mutation persisted.

In `@src/functions/mesh.ts`:
- Around line 143-339: The code mutates mesh state without audit records; add
recordAudit calls at each state-changing spot: after creating a peer in
mem::mesh-register (after await kv.set(KV.mesh, peer.id, peer)) record an audit
with action "mesh.register" and payload { peerId: peer.id, name: peer.name, url:
peer.url, sharedScopes: peer.sharedScopes }; in mem::mesh-sync record audits
when you change peer.status to "syncing", when you mark peer.status "error" for
blocked URL or failed sync, and when you set peer.lastSyncAt to the new
timestamp (use actions like "mesh.sync.start", "mesh.sync.error",
"mesh.sync.complete" with payload { peerId, direction, scopes, pushed, pulled,
errors }); in mem::mesh-receive record an audit for accepted relation inserts
and for overall accepted counts after merges (action "mesh.receive" with {
accepted }); and in mem::mesh-remove add a recordAudit after deleting the peer
(action "mesh.remove" with { peerId }); include actor/context (e.g.
meshAuthToken or system identity) and timestamps in each recordAudit call.

In `@src/functions/migrate.ts`:
- Around line 50-145: The SQLite Database handle opened via Database(...) may
not be closed if an exception occurs later; move db.close() into a finally block
so the handle is always released: keep the current try that creates const db =
Database(...) and performs reads/writes, add a finally clause that checks the db
variable (Database, db) and calls db.close(), and leave the outer catch to log
errors (err, ctx.logger.error) and return the failure result; ensure db is in
scope for finally (declare it before the try if needed).
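
The try/finally shape this comment asks for can be sketched like so; `Database` here is a stand-in class, not the real SQLite binding used by migrate.ts:

```typescript
// Stand-in for the SQLite handle; only the close() lifecycle matters here.
class Database {
  closed = false;
  query(): number[] { return [1, 2, 3]; }
  close(): void { this.closed = true; }
}

function runMigration(shouldFail: boolean): { ok: boolean; db: Database } {
  let db: Database | undefined; // declared outside try so finally can see it
  try {
    db = new Database();
    if (shouldFail) throw new Error("migration step failed");
    db.query();
    return { ok: true, db };
  } catch {
    return { ok: false, db: db as Database };
  } finally {
    // Runs on both the return and throw paths, so the handle always closes.
    db?.close();
  }
}
```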

In `@src/functions/obsidian-export.ts`:
- Around line 201-211: The handler currently assumes `data` is defined and
directly accesses `data.vaultDir`/`data.types`, which can throw if the trigger
sends no payload; update the async handler (the function that calls
resolveVaultDir and builds exportTypes) to first validate `data` (e.g., if
(!data) return structured error), then safely read vaultDir and types (use
defaults like [] or the existing ["memories","lessons","crystals","sessions"])
before calling resolveVaultDir and creating the Set for exportTypes; keep the
existing error return shape when vaultDir is invalid and ensure this follows the
sdk.registerFunction() input-validation pattern.

In `@src/functions/profile.ts`:
- Around line 12-116: The mem::profile function accepts data.project without
validation and writes state with kv.set without recording an audit; add input
validation at the start of the registered function (validate data and that
data.project is a non-empty string, returning a validation error) and before the
state change call to kv.set(KV.profiles, data.project, profile) invoke
recordAudit(...) with context info (ctx and relevant metadata) so every
state-changing operation is audited; update any early returns to ensure invalid
input returns a proper error object and keep the existing ctx/getContext(),
kv.list, and profile generation logic intact.

In `@src/functions/query-expansion.ts`:
- Around line 73-77: Validate the incoming payload in the
sdk.registerFunction("mem::expand-query", async (data: { query: string;
maxReformulations?: number }) => { ... }) handler: ensure data.query exists and
is a non-empty string (reject or throw a descriptive error via getContext()/ctx
if not), and sanitize data.maxReformulations by converting to an integer,
defaulting to 5 if missing/invalid, and clamping it to an allowed positive range
(e.g. min 1, max 10) before assigning to maxR; update uses of maxR accordingly
so downstream logic always receives a validated positive integer.
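
The sanitization step can be sketched as a small pure function; the default of 5 and the 1..10 range come from the comment above, while the choice to clamp (rather than reject) out-of-range numbers is a judgment call:

```typescript
function sanitizeMaxReformulations(raw: unknown): number {
  const n = Number(raw);
  if (!Number.isFinite(n)) return 5;               // missing or non-numeric -> default
  return Math.min(10, Math.max(1, Math.trunc(n))); // clamp into 1..10
}
```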

In `@src/functions/reflect.ts`:
- Around line 425-468: The mem::insight-decay-sweep handler mutates insights but
doesn't call recordAudit(); after persisting dirty entries (after the await
Promise.all(...) that writes KV.insights), invoke recordAudit() with an audit
event (e.g., type "insight.decay") including metadata: decayed count (decayed),
softDeleted count (softDeleted), total processed (items.length), timestamp, and
an array of affected insight IDs (dirty.map(i => i.id)); ensure you call the
existing recordAudit function and await it before returning so the decay
operation is recorded.

In `@src/functions/relations.ts`:
- Around line 38-147: Both mem::relate and mem::evolve perform state changes but
never call recordAudit(); add audit writes after each persistent mutation. In
the mem::relate handler (sdk.registerFunction "mem::relate") call
recordAudit(...) after persisting the relation (after await kv.set(KV.relations,
relationId, relation)) and also after updating source/target memories (after
each await kv.set(KV.memories, ...)) recording actor from ctx (getContext()),
relationId, type, sourceId, targetId and confidence. In the mem::evolve handler
(sdk.registerFunction "mem::evolve") call recordAudit(...) after marking
existing.isLatest = false and saving it, after creating/saving evolved (await
kv.set(KV.memories, evolved.id, evolved)), and after creating the supersedes
relation (await kv.set(KV.relations,...)), including actor, oldId (existing.id),
newId (evolved.id), version and operation type ("evolve" or "supersedes") so all
state-changing operations are audited.
- Around line 46-50: The nested withKeyedLock calls on `mem:${firstId}` and
`mem:${secondId}` can deadlock when sourceId === targetId; replace the nested
locks with a single lock keyed by the sorted pair (e.g., build one key like
`mem:${firstId}:${secondId}` using the existing firstId/secondId) and wrap the
whole critical section in that single withKeyedLock call (keep using the same
async callback pattern). Also add audit logging for every state mutation: call
recordAudit(...) alongside each kv.set(...) call in this function (the three
kv.set sites currently without audit) so every memory update is recorded—ensure
the recordAudit invocation includes the same context/ids used in the mutation.
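
The single sorted-pair lock key can be sketched as a pure helper; the `mem:` prefix mirrors the comment, and the real critical section would pass this key to the repo's withKeyedLock:

```typescript
// One deterministic key per unordered id pair: both (a,b) and (b,a) map to
// the same key, and a self-relation (a,a) needs only one lock acquisition,
// eliminating the nested-lock deadlock.
function pairLockKey(sourceId: string, targetId: string): string {
  const [firstId, secondId] = [sourceId, targetId].sort();
  return `mem:${firstId}:${secondId}`;
}
```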

In `@src/functions/retention.ts`:
- Around line 96-146: The retention score entries currently lack the source
bucket, so semantic scores get orphaned when deletion only targets KV.memories;
update the RetentionScore shape to include a sourceBucket (or source) field and,
in both places where entries are built (the memory branch and the semanticMems
branch that create `entry: RetentionScore` and call `kv.set(KV.retentionScores,
...)`), set sourceBucket to the corresponding bucket identifier (e.g.,
KV.memories or KV.semantic / a stable string). Also update the eviction/deletion
logic that removes retention scores (the code that currently deletes only from
KV.memories) to read each RetentionScore.sourceBucket and delete the retention
key from the proper bucket so scores stay consistent with their underlying
record.

In `@src/functions/routines.ts`:
- Around line 8-63: The create handler (registered as "mem::routine-create")
mutates state but doesn't call recordAudit(); update this function to call
recordAudit(...) after the routine is persisted (after kv.set and before
returning) with the created routine id, action type (e.g., "routine.create"),
actor/context info and the new state; apply the same pattern to the other
mutating handlers referenced in the review—"mem::routine-run",
"mem::routine-status" (when status changes) and "mem::routine-freeze"—so each
state-changing operation records an audit entry using recordAudit with
appropriate action names and payloads.

In `@src/functions/sentinels.ts`:
- Around line 124-137: The timer callback passed to setTimeout (the async arrow
function that calls withKeyedLock for mem:sentinel:${sentinel.id}) can reject,
and those rejections currently go unhandled; wrap the entire async callback
body in a try/catch so any errors from withKeyedLock, kv.get, kv.set, or
unblockLinkedActions are caught and handled (e.g., log the error and/or
retry/mark sentinel accordingly) to prevent unhandled rejections. Ensure the
catch references the sentinel.id/context so logs are useful and does not
suppress necessary failure handling.
- Around line 17-371: The sentinel handlers (registered as
"mem::sentinel-create", "mem::sentinel-trigger", "mem::sentinel-check",
"mem::sentinel-cancel", and "mem::sentinel-expire") mutate sentinel and action
state but never call recordAudit(); update each place that changes
status/result/creates edges to invoke recordAudit(...) with a clear event (e.g.,
"sentinel.create", "sentinel.trigger", "sentinel.threshold_trigger",
"sentinel.pattern_trigger", "sentinel.cancel", "sentinel.expire",
"action.unblocked") including relevant metadata (sentinel.id, prior status, new
status, reason, linkedActionIds, triggeredAt, result) right after the KV write
(kv.set) so audits reflect the persisted state; ensure the timer callback and
the withKeyedLock blocks in threshold/pattern/trigger/cancel/expire paths also
call recordAudit in the same manner.
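
The try/catch wrapping suggested in the first comment above can be sketched as a small factory; the logger and the message format are stand-ins, not the repo's ctx.logger:

```typescript
const errors: string[] = [];
const logError = (msg: string) => errors.push(msg); // stand-in logger

// Wrap the async timer body so a rejection is caught and logged with the
// sentinel id, instead of surfacing as an unhandled promise rejection.
function safeTimerCallback(sentinelId: string, body: () => Promise<void>): () => Promise<void> {
  return async () => {
    try {
      await body();
    } catch (err) {
      logError(`sentinel ${sentinelId} timer failed: ${(err as Error).message}`);
    }
  };
}
```

The wrapped function is what gets handed to setTimeout, so the timer fires a callback that can never reject.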

In `@src/functions/signals.ts`:
- Around line 7-51: The mem::signal-send handler creates and persists a Signal
via kv.set but misses an audit entry; after successfully setting the signal (in
the async function registered by sdk.registerFunction "mem::signal-send"), call
recordAudit to record the creation, including the collection/key (KV.signals),
the new signal.id, the action "create", the new signal object, and actor context
(use data.from and any request metadata available), and ensure the recordAudit
call runs after kv.set and before returning { success: true, signal } so all
state-changing operations are audited.
- Around line 166-181: The cleanup loop in the registered function
"mem::signal-cleanup" removes expired Signal entries with kv.delete(KV.signals,
sig.id) but doesn't call recordAudit() for those state mutations; update the
loop so that for each deletion you call recordAudit (e.g., recordAudit('delete',
KV.signals, sig.id, { before: sig })) immediately before or after kv.delete to
record the mutation and relevant metadata (id, key space, and the deleted Signal
object) and keep the removed counter logic intact.
- Around line 53-103: The mem::signal-read handler updates Signal.readAt and
writes back to KV without recording an audit entry; update the loop in the
mem::signal-read function so that before/when calling kv.set(KV.signals, sig.id,
sig) you call recordAudit(...) to record the mutation (e.g., action
"signal.mark_read", resource id sig.id, resource type "Signal", actor
data.agentId, timestamp, and any relevant before/after or changed fields like
readAt), then proceed with kv.set; reference the mem::signal-read function, the
Signal objects, kv.set and KV.signals, and ensure recordAudit is awaited or its
promise handled consistently with other async ops.

In `@src/functions/sketches.ts`:
- Around line 8-267: The review notes missing audit calls for state changes; add
await recordAudit(...) calls in mem::sketch-create, mem::sketch-add,
mem::sketch-promote, mem::sketch-discard, and mem::sketch-gc immediately after
each persisted mutation (after kv.set/kv.delete that creates/updates/deletes
sketches, actions, and actionEdges) to record the operation, resource id(s),
action type (create/add/promote/discard/gc), and relevant metadata (e.g.,
sketch.id, action.id(s), edge ids, project, status changes); ensure you await
the promise and include contextual details (caller/source and timestamp)
consistent with existing recordAudit signature so every state-changing branch
logs an audit entry.

In `@src/functions/sliding-window.ts`:
- Around line 120-130: Add input guards that validate required IDs before any KV
access or fan-out: in the sdk.registerFunction handler for "mem::enrich-window"
(the async function that reads data.observationId, data.sessionId and computes
hprev/hnext) check that sessionId and observationId are non-empty strings and
throw or return a structured error (or call ctx.logger/error handling)
immediately if missing, so no KV namespace reads occur; apply the same pattern
in the "mem::enrich-session" handler (the other registerFunction around the
206-218 area) to validate the required sessionId/observationId before any
downstream KV operations or trigger logic. Ensure the validation happens at the
top of each function and uses the existing ctx or sdk error/reporting
conventions.
- Around line 171-188: The persisted enriched chunk (EnrichedChunk created with
generateId("ec") and stored via kv.set(KV.enrichedChunks(data.sessionId),
data.observationId, enriched)) is missing an audit entry; after the kv.set
completes, call recordAudit(...) to record this state change, including the
action/type (e.g., "persist_enriched_chunk"), the enriched id and/or the
enriched object, and context metadata (sessionId and observationId) so the audit
log captures who/what changed; use the existing recordAudit function and ensure
it is invoked only after a successful kv.set.

In `@src/functions/snapshot.ts`:
- Around line 146-154: The handler registered as "mem::snapshot-restore"
dereferences data.commitHash before ensuring data exists; add an early guard
that verifies the payload object and that commitHash is a string (e.g. if (!data
|| typeof data.commitHash !== "string") return { success: false, error:
"commitHash is required" }) before calling COMMIT_HASH_RE.test, so validation on
commitHash only runs after confirming data is defined; update the input checks
inside the async function registered in
sdk.registerFunction("mem::snapshot-restore") accordingly.
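
The early guard can be sketched as below; the hex pattern for COMMIT_HASH_RE is an assumption, since the real regex lives in snapshot.ts:

```typescript
// Assumed shape of the commit-hash pattern (7-40 lowercase hex chars).
const COMMIT_HASH_RE = /^[0-9a-f]{7,40}$/;

function validateRestore(data: unknown): { success: boolean; error?: string } {
  // Check the payload object first, so the regex test never dereferences undefined.
  if (!data || typeof (data as { commitHash?: unknown }).commitHash !== "string") {
    return { success: false, error: "commitHash is required" };
  }
  const hash = (data as { commitHash: string }).commitHash;
  if (!COMMIT_HASH_RE.test(hash)) {
    return { success: false, error: "invalid commitHash" };
  }
  return { success: true };
}
```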

In `@src/functions/summarize.ts`:
- Line 119: The kv.set call that writes the summary (kv.set with KV.summaries,
data.sessionId, summary) is missing the required audit entry; update the path to
call recordAudit() immediately when you perform this state change using the same
pattern as other mutating functions in the repo—invoke recordAudit with the
KV.summaries key and the same session id (data.sessionId) and include the
operation context and relevant payload/metadata (e.g., summary or its id and
actor/context info) so an audit record is emitted for this write.
- Around line 47-51: Add a runtime guard that validates data.sessionId is
present and a non-empty string before calling kv.get in the async handler (the
code using getContext() and kv.get<Session>(KV.sessions,...)); if validation
fails return or throw a controlled error (e.g., bad request) so you never call
kv.get with an invalid key. Implement this inside the function registered via
sdk.registerFunction() (where the handler is defined), use the same sessionId
symbol when checking, and ensure any early-exit is recorded via recordAudit() or
the existing audit flow before returning.

In `@src/functions/team.ts`:
- Around line 132-143: The write to KV (kv.set(KV.teamProfile(config.teamId),
"profile", profile)) must be accompanied by an audit record; update the handler
to call recordAudit(...) when persisting TeamProfile, passing the actor/context,
the target key (use KV.teamProfile(config.teamId)), the action (e.g.,
"update_team_profile"), and a payload summarizing the change (e.g., new profile
metadata or diff), then perform the kv.set; ensure you invoke recordAudit before
or atomically with the kv.set so every state mutation via kv.set is recorded.
- Around line 21-33: The function in team-share reads data.itemId/itemType
without guarding that data exists; update the handler (the async function
registered for team-share in src/functions/team.ts) to first validate that data
is present (e.g., if (!data) return { success: false, error: "payload required"
}) before accessing data.itemId or data.itemType, then keep the existing checks
(VALID_ITEM_TYPES and required fields). Also ensure the handler follows the
sdk.registerFunction() pattern with explicit input validation so the runtime
won't call the function with an undefined payload; reference the async handler,
VALID_ITEM_TYPES, and getContext when making these changes.

---

Nitpick comments:
In `@src/functions/checkpoints.ts`:
- Around line 118-137: The code currently calls await
kv.list<Action>(KV.actions) inside the per-action loop over
checkpoint.linkedActionIds (inside the withKeyedLock for actionId), causing
repeated full scans; move the call out of the loop and build actionMap once
before iterating linkedActionIds so the per-action logic (inside withKeyedLock
for each actionId) uses the precomputed actionMap to check requires
dependencies; update references to actionMap and remove the in-loop kv.list call
in the block that computes allRequiresMet.
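
The hoisting can be sketched as follows; the Action shape and statuses are simplified relative to the real checkpoints code:

```typescript
type Action = { id: string; requires?: string[]; status: string };

// Stand-in for the result of a single kv.list<Action>(KV.actions) call.
const allActions: Action[] = [
  { id: "a1", status: "done" },
  { id: "a2", requires: ["a1"], status: "blocked" },
];

// Built once, before iterating linkedActionIds, instead of a full scan per action.
const actionMap = new Map(allActions.map((a) => [a.id, a]));

function allRequiresMet(actionId: string): boolean {
  const action = actionMap.get(actionId);
  if (!action) return false;
  return (action.requires ?? []).every((dep) => actionMap.get(dep)?.status === "done");
}
```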

In `@src/functions/flow-compress.ts`:
- Around line 84-88: The memory object is setting createdAt and updatedAt with
separate new Date().toISOString() calls; capture a single timestamp into a local
const (e.g., now or ts) before constructing the memory and assign that same
value to both createdAt and updatedAt so they match; update the code around the
generateId("mem")/memory creation to use that single timestamp variable for both
fields.
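
The single-timestamp capture looks like this in sketch form; the id prefix and memory shape are simplified stand-ins for the real flow-compress objects:

```typescript
function buildMemory(content: string): { id: string; content: string; createdAt: string; updatedAt: string } {
  const now = new Date().toISOString(); // captured once, reused for both fields
  return {
    id: `mem_${Math.random().toString(36).slice(2, 8)}`, // stand-in for generateId("mem")
    content,
    createdAt: now,
    updatedAt: now,
  };
}
```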

In `@src/functions/governance.ts`:
- Around line 48-116: The current sequential deletes in the mem::governance-bulk
handler iterate over candidates and await each kv.delete, which is slow for
large sets; change that loop to run deletes in parallel by creating an array of
delete promises from candidates.map(m => kv.delete(KV.memories, m.id)) and await
Promise.all on that array (optionally chunking into safe-sized batches if you
expect very large lists). Keep the audit call (recordAudit) and ctx.logger.info
after awaiting the deletions so they run only once deletes complete, and ensure
you preserve the dryRun branch and returned shape.
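
The chunked parallel deletion can be sketched like this; the chunk size of 50 is arbitrary and the kv mock is an in-memory map, not the project's kv module:

```typescript
const memories = new Map<string, string>();
for (let i = 0; i < 120; i++) memories.set(`mem_${i}`, "x");

async function kvDelete(id: string): Promise<void> {
  memories.delete(id);
}

async function bulkDelete(ids: string[], chunkSize = 50): Promise<number> {
  for (let i = 0; i < ids.length; i += chunkSize) {
    // Deletes within a chunk run in parallel; chunks run sequentially
    // to bound fan-out for very large candidate lists.
    await Promise.all(ids.slice(i, i + chunkSize).map((id) => kvDelete(id)));
  }
  return ids.length;
}
```

The audit call and the log line would go after `bulkDelete` resolves, preserving the run-once-after-completion behavior the comment asks for.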

In `@src/functions/snapshot.ts`:
- Around line 59-94: This code creates multiple ISO timestamps for one snapshot;
capture a single timestamp once (e.g., const ts = new Date().toISOString())
before building the state and reuse ts for state.timestamp and meta.createdAt
(and use it in the default message if needed) so all snapshot metadata shares
the exact same timestamp; update the snapshot function where state, message and
meta are created (symbols: state, message, meta, generateId) to reference the
single ts variable instead of calling new Date().toISOString() multiple times.

In `@src/index.ts`:
- Around line 131-142: The type-narrowing around sdk.getMeter is
verbose—refactor by extracting a small type guard/helper (e.g.,
isGetMeterAvailable(sdk): sdk is { getMeter: (name: string) => unknown }) and
use it to set meterAccessor cleanly; replace the long assertion block that
declares meterAccessor with a clear conditional using the guard (referencing
sdk, getMeter and meterAccessor) and then pass meterAccessor to initMetrics as
before.
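
The suggested type guard can be sketched as follows; the sdk shape is assumed from the comment, not taken from the real iii-sdk typings:

```typescript
type MeterCapable = { getMeter: (name: string) => unknown };

// User-defined type guard: narrows unknown to the getMeter-bearing shape.
function isGetMeterAvailable(sdk: unknown): sdk is MeterCapable {
  return typeof (sdk as { getMeter?: unknown })?.getMeter === "function";
}

// Hypothetical sdk value standing in for the real instance.
const fakeSdk: unknown = { getMeter: (name: string) => ({ name }) };

// Replaces the verbose assertion block: one clear conditional.
const meterAccessor = isGetMeterAvailable(fakeSdk) ? fakeSdk.getMeter : undefined;
```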

In `@test/actions.test.ts`:
- Around line 33-49: The local mockSdk() in this test duplicates an existing
shared test SDK mock and risks drift; replace its usage by importing and reusing
the shared mock from test/helpers/mocks.ts (or delegating to the exported
factory there) instead of redefining mockSdk() locally, ensuring
registerFunction, registerTrigger and trigger behavior comes from the single
source of truth so the test uses the shared implementation.
ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 99e29bcb-6d7f-4a10-b253-587bb9774609

📥 Commits

Reviewing files that changed from the base of the PR and between 649e6e4 and dc1041d.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (99)
  • README.md
  • iii-config.docker.yaml
  • iii-config.yaml
  • package.json
  • src/cli.ts
  • src/functions/actions.ts
  • src/functions/auto-forget.ts
  • src/functions/branch-aware.ts
  • src/functions/cascade.ts
  • src/functions/checkpoints.ts
  • src/functions/claude-bridge.ts
  • src/functions/compress.ts
  • src/functions/consolidate.ts
  • src/functions/consolidation-pipeline.ts
  • src/functions/context.ts
  • src/functions/crystallize.ts
  • src/functions/diagnostics.ts
  • src/functions/enrich.ts
  • src/functions/evict.ts
  • src/functions/export-import.ts
  • src/functions/facets.ts
  • src/functions/file-index.ts
  • src/functions/flow-compress.ts
  • src/functions/frontier.ts
  • src/functions/governance.ts
  • src/functions/graph.ts
  • src/functions/leases.ts
  • src/functions/lessons.ts
  • src/functions/mesh.ts
  • src/functions/migrate.ts
  • src/functions/observe.ts
  • src/functions/obsidian-export.ts
  • src/functions/patterns.ts
  • src/functions/privacy.ts
  • src/functions/profile.ts
  • src/functions/query-expansion.ts
  • src/functions/reflect.ts
  • src/functions/relations.ts
  • src/functions/remember.ts
  • src/functions/retention.ts
  • src/functions/routines.ts
  • src/functions/sentinels.ts
  • src/functions/signals.ts
  • src/functions/sketches.ts
  • src/functions/skill-extract.ts
  • src/functions/sliding-window.ts
  • src/functions/smart-search.ts
  • src/functions/snapshot.ts
  • src/functions/summarize.ts
  • src/functions/team.ts
  • src/functions/temporal-graph.ts
  • src/functions/timeline.ts
  • src/functions/verify.ts
  • src/functions/working-memory.ts
  • src/health/monitor.ts
  • src/index.ts
  • src/mcp/server.ts
  • src/state/kv.ts
  • src/triggers/api.ts
  • src/triggers/events.ts
  • src/viewer/index.html
  • test/actions.test.ts
  • test/auto-forget.test.ts
  • test/checkpoints.test.ts
  • test/claude-bridge.test.ts
  • test/confidence.test.ts
  • test/consolidation-pipeline.test.ts
  • test/crystallize.test.ts
  • test/diagnostics.test.ts
  • test/enrich.test.ts
  • test/export-import.test.ts
  • test/facets.test.ts
  • test/frontier.test.ts
  • test/governance.test.ts
  • test/graph.test.ts
  • test/helpers/mocks.ts
  • test/leases.test.ts
  • test/lessons.test.ts
  • test/mcp-prompts.test.ts
  • test/mcp-resources.test.ts
  • test/mesh.test.ts
  • test/obsidian-export.test.ts
  • test/profile.test.ts
  • test/query-expansion.test.ts
  • test/reflect.test.ts
  • test/relations.test.ts
  • test/retention.test.ts
  • test/routines.test.ts
  • test/sentinels.test.ts
  • test/signals.test.ts
  • test/sketches.test.ts
  • test/skill-extract.test.ts
  • test/sliding-window.test.ts
  • test/smart-search.test.ts
  • test/snapshot.test.ts
  • test/team.test.ts
  • test/temporal-graph.test.ts
  • test/timeline.test.ts
  • test/working-memory.test.ts

Comment thread README.md Outdated
Comment thread src/cli.ts
Comment thread src/functions/branch-aware.ts
Comment thread src/functions/privacy.ts
Comment thread src/functions/team.ts Outdated
Comment thread src/triggers/api.ts
Comment thread src/triggers/api.ts Outdated
Comment thread src/viewer/index.html Outdated
Comment thread src/viewer/index.html
Comment thread test/retention.test.ts Outdated
Fix malformed braces introduced during API migration in triggers/health/branch-aware files so tsdown build passes in CI.

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/triggers/api.ts (1)

200-243: ⚠️ Potential issue | 🔴 Critical

Validate sessionId, project, and cwd before using them.

Line 206 destructures req.body with no guard, and Lines 240-243 call kv.update() with req.body.sessionId unchecked. A malformed request here throws or writes under an invalid key instead of returning 400.

🐛 Suggested guard
   sdk.registerFunction("api::session::start",
     async (
       req: ApiRequest<{ sessionId: string; project: string; cwd: string }>,
     ): Promise<Response> => {
       const authErr = checkAuth(req, secret);
       if (authErr) return authErr;
-      const { sessionId, project, cwd } = req.body;
+      const body = (req.body ?? {}) as Record<string, unknown>;
+      if (
+        typeof body.sessionId !== "string" ||
+        typeof body.project !== "string" ||
+        typeof body.cwd !== "string" ||
+        !body.sessionId.trim() ||
+        !body.project.trim() ||
+        !body.cwd.trim()
+      ) {
+        return {
+          status_code: 400,
+          body: { error: "sessionId, project, and cwd are required" },
+        };
+      }
+      const { sessionId, project, cwd } = body as {
+        sessionId: string;
+        project: string;
+        cwd: string;
+      };
...
   sdk.registerFunction("api::session::end",
     async (req: ApiRequest<{ sessionId: string }>): Promise<Response> => {
       const authErr = checkAuth(req, secret);
       if (authErr) return authErr;
-      await kv.update(KV.sessions, req.body.sessionId, [
+      if (!req.body?.sessionId || typeof req.body.sessionId !== "string") {
+        return { status_code: 400, body: { error: "sessionId is required" } };
+      }
+      await kv.update(KV.sessions, req.body.sessionId, [
As per coding guidelines: `{src/mcp/server.ts,src/triggers/api.ts}`: Input validation must occur at system boundaries (MCP handlers, REST endpoints)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/triggers/api.ts` around lines 200 - 243, The request body fields used in
api::session::start (destructuring of req.body into sessionId, project, cwd) and
api::session::end (req.body.sessionId passed to kv.update) are not validated and
can crash or write invalid keys; add input validation after checkAuth in both
registered functions: verify sessionId is a non-empty string (and project/cwd
are valid strings for api::session::start), return a 400 Response when
validation fails, and only call kv.set/kv.update or trigger mem::context when
validated; reference the functions registered as "api::session::start" and
"api::session::end", the checkAuth call, and the kv.set/kv.update and
sdk.trigger usages to locate where to add the guards.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/triggers/api.ts`:
- Around line 1748-1753: The api::crystal-list handler (registered via
sdk.registerFunction) currently forwards limit from req.query_params to
sdk.trigger (mem::crystal-list) even when the query value is non-numeric;
similarly other handlers forward minConfidence/limit (see handlers around the
same file) — update each API boundary (the sdk.registerFunction callbacks) to
validate numeric query parameters (e.g., limit and minConfidence) before calling
sdk.trigger: check that the param exists and is a valid number (use
Number.isFinite after coercion or a strict regex/parse check), and if invalid
return { status_code: 400, body: "invalid numeric parameter" } (or similar)
instead of forwarding the bad value; ensure you change places where
req.query_params.limit or minConfidence are parsed (the parseInt/Number usage)
so they never send NaN/strings downstream.
- Around line 120-145: The handlers api::observe and api::context (and other
listed endpoints) are forwarding req.body verbatim into sdk.trigger (e.g.,
sdk.trigger({ function_id: "mem::observe", payload: req.body }) and
sdk.trigger({ function_id: "mem::context", payload: req.body })); fix by
validating and whitelisting fields before calling sdk.trigger: explicitly build
a new payload object containing only allowed properties (for api::context accept
only sessionId, project, optional budget and validate types/ranges; for
api::observe allow only the expected HookPayload fields), perform basic
validation and normalization, and pass that sanitized object to sdk.trigger
instead of req.body; update other similar handlers that call sdk.trigger to
follow the same pattern.
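The numeric-parameter guard described in the `api::crystal-list` comment can be sketched as a small standalone helper. `validateLimit` is a hypothetical name for illustration; the real handlers would read the raw value from `req.query_params` before calling `sdk.trigger`.

```typescript
// Illustrative guard for numeric query parameters, per the review above.
type LimitResult =
  | { ok: true; limit: number }
  | { ok: false; status_code: number; body: string };

function validateLimit(raw: string | undefined, fallback: number): LimitResult {
  // Absent parameter: use the handler's default rather than rejecting.
  if (raw === undefined) return { ok: true, limit: fallback };
  // Empty strings coerce to 0 via Number(), so reject them explicitly.
  if (raw.trim() === "") {
    return { ok: false, status_code: 400, body: "invalid numeric parameter" };
  }
  const n = Number(raw);
  if (!Number.isFinite(n)) {
    // "abc" and "NaN" end up here instead of being forwarded downstream.
    return { ok: false, status_code: 400, body: "invalid numeric parameter" };
  }
  return { ok: true, limit: n };
}
```

The same shape works for `minConfidence`; the point is that the coerced value is checked with `Number.isFinite` before anything reaches `sdk.trigger`.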

---

Outside diff comments:
In `@src/triggers/api.ts`:
- Around line 200-243: The request body fields used in api::session::start
(destructuring of req.body into sessionId, project, cwd) and api::session::end
(req.body.sessionId passed to kv.update) are not validated and can crash or
write invalid keys; add input validation after checkAuth in both registered
functions: verify sessionId is a non-empty string (and project/cwd are valid
strings for api::session::start), return a 400 Response when validation fails,
and only call kv.set/kv.update or trigger mem::context when validated; reference
the functions registered as "api::session::start" and "api::session::end", the
checkAuth call, and the kv.set/kv.update and sdk.trigger usages to locate where
to add the guards.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: eaaa7024-660f-4de2-99d9-0222edfd17bd

📥 Commits

Reviewing files that changed from the base of the PR and between dc1041d and a356d4b.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (3)
  • src/functions/branch-aware.ts
  • src/health/monitor.ts
  • src/triggers/api.ts
✅ Files skipped from review due to trivial changes (1)
  • src/health/monitor.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/functions/branch-aware.ts

Apply the reviewed v0.11 follow-up fixes across runtime, docs, and tests by tightening validation, correcting upgrade/error handling, and adding missing audit coverage on state mutations. This also addresses stream fallback behavior, lock consistency, retention source-bucket cleanup, and test/mock alignment so build and test remain green.

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (6)
src/viewer/index.html (2)

2797-2853: ⚠️ Potential issue | 🟠 Major

Bind reconnect handlers to the created socket, not state.ws.

These callbacks all dereference the mutable global state.ws. After a reconnect assigns a newer socket, a late onerror/onclose from the old one can flip the retry/direct-fallback state or close the healthy replacement.

Suggested fix
 function connectWs() {
   if (wsRetries >= WS_MAX_RETRIES) return;
   var useDirect = !directFailed;
+  var ws;
   try {
-    state.ws = new WebSocket(useDirect ? WS_DIRECT_URL : WS_URL);
-    state.ws.__direct = useDirect;
+    ws = new WebSocket(useDirect ? WS_DIRECT_URL : WS_URL);
+    ws.__direct = useDirect;
   } catch (_) {
-    state.ws = new WebSocket(WS_URL);
-    state.ws.__direct = false;
+    ws = new WebSocket(WS_URL);
+    ws.__direct = false;
   }
+  state.ws = ws;
   try {
-    state.ws.onopen = function() {
+    ws.onopen = function() {
+      if (state.ws !== ws) return;
       wsRetries = 0;
-      if (state.ws.__direct) {
+      if (ws.__direct) {
         directFailures = 0;
         directFailed = false;
       }
-      if (!state.ws.__direct) {
-        state.ws.send(JSON.stringify({
+      if (!ws.__direct) {
+        ws.send(JSON.stringify({
           type: 'join',
           data: {
             subscriptionId: 'viewer-' + Date.now(),
             streamName: 'mem-live',
             groupId: 'viewer'
           }
         }));
       }
       document.getElementById('ws-status').textContent = 'live';
       document.getElementById('ws-status').className = 'ws-status connected';
     };
-    state.ws.onmessage = function(e) {
+    ws.onmessage = function(e) {
+      if (state.ws !== ws) return;
       try {
         var msg = JSON.parse(e.data);
         ...
       } catch {}
     };
-    state.ws.onclose = function() {
-      if (state.ws && state.ws.__direct) {
+    ws.onclose = function() {
+      if (state.ws !== ws) return;
+      if (ws.__direct) {
         directFailures += 1;
         if (directFailures >= DIRECT_FAILURE_THRESHOLD) {
           directFailed = true;
         }
       }
       ...
     };
-    state.ws.onerror = function() { state.ws.close(); };
+    ws.onerror = function() {
+      if (state.ws === ws) ws.close();
+    };
   } catch {
     ...
   }
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/viewer/index.html` around lines 2797 - 2853, The reconnect handlers
currently reference the mutable global state.ws, which allows old sockets to
affect the new one; fix connectWs by creating a local socket variable (e.g., var
ws = new WebSocket(...)), set ws.__direct on that local variable, attach all
handlers (onopen, onmessage, onclose, onerror) to ws, and only after handlers
are bound assign state.ws = ws so later reconnects don't let callbacks from
previous sockets mutate global retry/direct state; update any handler references
that used state.ws (like state.ws.close()) to use the local ws or guard against
staleness by comparing state.ws === ws before mutating global state or
performing closes.

2832-2867: ⚠️ Potential issue | 🟠 Major

Don't route all direct stream events as observations.

stream::send also publishes session.activity messages, but this branch currently feeds every { event_type, data } payload into routeWsMessage({ observation: ... }). That will inject non-observation payloads into the timeline/activity lists, where Line 2889 onward expects real observation objects with fields like id and timestamp.

Suggested fix
-      } else if (msg.event_type && msg.data) {
-        handleStreamEvent({ event: { type: 'create', data: msg.data, event_type: msg.event_type } });
+      } else if (msg.event_type && msg.data) {
+        handleStreamEvent({ event: { type: 'event', data: msg.data, event_type: msg.event_type } });
       }
 ...
 function handleStreamEvent(msg) {
   var evt = msg.event;
   if (evt.type === 'event' && evt.data) {
-    routeWsMessage({ observation: evt.data.observation || evt.data });
+    if (evt.event_type === 'raw_observation' || evt.event_type === 'compressed_observation') {
+      routeWsMessage({ observation: evt.data.observation || evt.data });
+    } else if (evt.event_type === 'session.activity') {
+      state.dashboard.loaded = false;
+      state.sessions.loaded = false;
+      if (state.activeTab === 'dashboard') loadDashboard();
+      if (state.activeTab === 'sessions') loadSessions();
+    }
     return;
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/viewer/index.html` around lines 2832 - 2867, The handler is routing every
direct stream event into routeWsMessage which causes non-observation payloads
(like session.activity) to be treated as timeline observations; update
handleStreamEvent to only call routeWsMessage for true observation objects —
e.g., check evt.event_type (or evt.data) and ensure the payload looks like an
observation (has required fields such as id and timestamp or contains
data.observation) before routing; otherwise ignore or handle session.activity
separately. Reference: handleStreamEvent and routeWsMessage; gate the call using
evt.event_type !== 'session.activity' or by validating evt.data.id &&
evt.data.timestamp (or evt.data.observation) prior to invoking routeWsMessage.
src/functions/obsidian-export.ts (1)

200-215: ⚠️ Potential issue | 🟠 Major

Validate vaultDir and types before using them.

The new guard only checks that a payload exists. A non-string vaultDir or non-array types will still throw inside resolve() / new Set(...) instead of returning a clean error response.

Suggested fix
   sdk.registerFunction("mem::obsidian-export", 
     async (data: { vaultDir?: string; types?: string[] } | undefined) => {
       if (!data) {
         return { success: false, error: "payload is required" };
       }
+      if (data.vaultDir !== undefined && typeof data.vaultDir !== "string") {
+        return { success: false, error: "vaultDir must be a string" };
+      }
+      if (
+        data.types !== undefined &&
+        (!Array.isArray(data.types) ||
+          data.types.some((type) => typeof type !== "string"))
+      ) {
+        return { success: false, error: "types must be an array of strings" };
+      }

       const vaultDir = resolveVaultDir(data.vaultDir);
       if (!vaultDir) {
         return {
           success: false,
           error: `vaultDir must be inside ${getExportRoot()}`,
         };
       }
       const exportTypes = new Set(
         data.types ?? ["memories", "lessons", "crystals", "sessions"],
       );
+      const allowedTypes = new Set(["memories", "lessons", "crystals", "sessions"]);
+      if ([...exportTypes].some((type) => !allowedTypes.has(type))) {
+        return { success: false, error: "types contains unsupported export targets" };
+      }
As per coding guidelines: "TypeScript functions must use `sdk.registerFunction()` pattern with input validation, state access via kv.get/kv.set/kv.list, and audit recording via recordAudit()"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/functions/obsidian-export.ts` around lines 200 - 215, The handler
registered as "mem::obsidian-export" currently only checks for a missing payload
but calls resolveVaultDir(data.vaultDir) and new Set(data.types) without
validating types; add explicit validation: ensure data.vaultDir is a string
before calling resolveVaultDir (return { success: false, error: "vaultDir must
be a string" } if not), and ensure data.types is either undefined or an array of
strings before constructing the Set (return { success: false, error: "types must
be an array of strings" } on failure); keep using resolveVaultDir and
getExportRoot for path validation and return the same style error object when
resolveVaultDir indicates an invalid path.
src/functions/governance.ts (1)

101-109: ⚠️ Potential issue | 🟠 Major

Avoid fail-fast Promise.all for destructive bulk deletes.

If one kv.delete() rejects here, other deletes may already have succeeded, but the handler exits before auditing or reporting the partial result. That leaves a partially applied bulk delete with no audit trail.

Suggested direction
-      await Promise.all(candidates.map((mem) => kv.delete(KV.memories, mem.id)));
+      const deleteResults = await Promise.allSettled(
+        candidates.map(async (mem) => {
+          await kv.delete(KV.memories, mem.id);
+          return mem.id;
+        }),
+      );
+
+      const deletedIds = deleteResults.flatMap((result) =>
+        result.status === "fulfilled" ? [result.value] : [],
+      );
+      const failedDeletes = deleteResults.length - deletedIds.length;
+      if (failedDeletes > 0) {
+        return {
+          success: false,
+          error: `Failed to delete ${failedDeletes} memories`,
+          deleted: deletedIds.length,
+        };
+      }

       await recordAudit(
         kv,
         "delete",
         "mem::governance-bulk",
-        candidates.map((m) => m.id),
-        { filter: data, deleted: candidates.length },
+        deletedIds,
+        { filter: data, deleted: deletedIds.length },
       );

-      ctx.logger.info("Governance bulk delete", { deleted: candidates.length });
-      return { success: true, deleted: candidates.length };
+      ctx.logger.info("Governance bulk delete", { deleted: deletedIds.length });
+      return { success: true, deleted: deletedIds.length };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/functions/governance.ts` around lines 101 - 109, The current bulk delete
uses await Promise.all(candidates.map((mem) => kv.delete(KV.memories, mem.id)))
which fails fast and can leave partially-applied deletes without an audit;
replace this with Promise.allSettled over candidates.map(...) to run all delete
attempts, collect which mem.id deletions succeeded vs failed, then call
recordAudit(kv, "delete", "mem::governance-bulk", successfulIds, { filter: data,
deleted: successfulCount, failed: failedCount, errors: failureDetails }) and
log/report failures so the audit reflects the partial outcome (keep references
to kv.delete, recordAudit, candidates, data, and KV.memories when making the
change).
src/functions/mesh.ts (1)

144-153: ⚠️ Potential issue | 🟠 Major

Guard missing mesh payloads before field access.

mem::mesh-register, mem::mesh-sync, mem::mesh-receive, and mem::mesh-remove all dereference data immediately. An omitted payload becomes a TypeError instead of a structured failure response.

Suggested pattern
-  sdk.registerFunction("mem::mesh-register", 
-    async (data: {
+  sdk.registerFunction("mem::mesh-register", 
+    async (data: {
       url: string;
       name: string;
       sharedScopes?: string[];
       syncFilter?: { project?: string };
-    }) => {
+    } | undefined) => {
+      if (!data || typeof data !== "object") {
+        return { success: false, error: "payload required" };
+      }
       if (!data.url || !data.name) {
         return { success: false, error: "url and name are required" };
       }

Apply the same guard to the other mesh handlers that read data.*.

As per coding guidelines: TypeScript functions must use sdk.registerFunction() pattern with input validation, state access via kv.get/kv.set/kv.list, and audit recording via recordAudit().

Also applies to: 193-207, 329-359, 365-368

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/functions/mesh.ts` around lines 144 - 153, Add a null/undefined payload
guard at the start of each sdk.registerFunction handler that dereferences data
(mem::mesh-register, mem::mesh-sync, mem::mesh-receive, mem::mesh-remove): check
that data is present and return a structured failure { success: false, error:
"payload required" } (or a message naming missing fields) before accessing
data.*; then validate required fields per handler (e.g., url and name for
mem::mesh-register), continue to use kv.get/kv.set/kv.list for state and call
recordAudit() as before on success/failure. Ensure the guard is the first
statement inside each registered async function so no TypeError can occur when
payload is omitted.
src/functions/retention.ts (1)

208-217: ⚠️ Potential issue | 🟠 Major

Missing audit trail for memory eviction.

The eviction operation permanently deletes memories but does not call recordAudit(). Per coding guidelines, state-changing operations (especially destructive ones like eviction) should be audited for traceability.

Additionally, the silent catch block swallows errors without any logging, making it difficult to diagnose eviction failures.

🛡️ Proposed fix to add audit and error logging
       let evicted = 0;
       for (const candidate of candidates) {
         try {
           await kv.delete(candidate.sourceBucket || KV.memories, candidate.memoryId);
           await kv.delete(KV.retentionScores, candidate.memoryId);
+          await recordAudit(kv, "evict", "mem::retention-evict", [candidate.memoryId], {
+            sourceBucket: candidate.sourceBucket || KV.memories,
+            score: candidate.score,
+          });
           evicted++;
-        } catch {
+        } catch (err) {
+          ctx.logger.warn("Failed to evict memory", {
+            memoryId: candidate.memoryId,
+            error: String(err),
+          });
           continue;
         }
       }

Note: You'll also need to import recordAudit from ./audit.js at the top of the file. Based on learnings: "Use recordAudit() for all state-changing operations in TypeScript functions."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/functions/retention.ts` around lines 208 - 217, The eviction loop over
candidates deletes memories (kv.delete calls for candidate.sourceBucket ||
KV.memories and KV.retentionScores) but does not call recordAudit() and silently
swallows errors; update the loop to import recordAudit from ./audit.js and,
after each successful deletion, call recordAudit(...) with context identifying
the memoryId, action "evict" and sourceBucket; in the catch block log the error
(e.g., processLogger.error or logger.error) including candidate.memoryId and
continue, and ensure evicted++ only increments after successful delete + audit
to preserve accurate counts.
♻️ Duplicate comments (1)
src/mcp/server.ts (1)

95-98: ⚠️ Potential issue | 🟡 Minor

These limit defaults still swallow an explicit 0.

The || fallback here undoes the same 0-preserving fix you made elsewhere, so MCP callers still can’t intentionally request zero results.

Also applies to: 456-458, 484-487

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/mcp/server.ts` around lines 95 - 98, The current use of the || fallback
when calling sdk.trigger for function_id "mem::search" causes an explicit 0 for
args.limit to be treated as falsy and replaced by 10; change the fallback to
preserve 0 by using a nullish/undefined check (e.g., use args.limit ?? 10 or a
typeof check like typeof args.limit === "number" ? args.limit : 10) when
constructing the payload, and make the same change for the other identical sites
that build payloads from args.limit in this file (the other sdk.trigger usages
noted in the review).
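The difference this comment keeps flagging is `||` versus `??` in the payload construction; a minimal sketch (helper names hypothetical):

```typescript
// `||` replaces any falsy value, so an explicit limit of 0 becomes 10 —
// this is the bug the review points at.
const limitWithOr = (limit?: number): number => limit || 10;

// `??` replaces only null/undefined, preserving an intentional 0.
const limitWithNullish = (limit?: number): number => limit ?? 10;
```

Swapping `args.limit || 10` for `args.limit ?? 10` at each of the flagged call sites lets MCP callers request zero results without touching any other behavior.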
🧹 Nitpick comments (3)
src/cli.ts (1)

557-564: The failed flag is set, but its true state is never observable at the final check.

The requireSuccess helper sets failed = true, but every call site immediately follows with return process.exit(1). The final if (failed) check on lines 621-623 is therefore dead code—failed can only be false when that line is reached.

Consider simplifying by removing the failed flag entirely, or keeping it only if you anticipate future required steps that shouldn't immediately abort.

♻️ Option A: Remove dead code
-  let failed = false;
-  const requireSuccess = (ok: boolean, label: string): boolean => {
-    if (!ok) {
-      failed = true;
-      p.log.error(`Upgrade aborted: ${label} failed.`);
-    }
-    return ok;
-  };
+  const requireSuccess = (ok: boolean, label: string): boolean => {
+    if (!ok) {
+      p.log.error(`Upgrade aborted: ${label} failed.`);
+    }
+    return ok;
+  };

And remove lines 621-623:

-  if (failed) {
-    return process.exit(1);
-  }
-
   p.note(

Also applies to: 621-623

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/cli.ts` around lines 557 - 564, The flag `failed` in the CLI upgrade flow
is dead because `requireSuccess(ok, label)` sets `failed = true` but every
caller immediately calls `process.exit(1)`, so `failed` can never be true when
the final `if (failed)` branch runs; remove the unused state by deleting the
`failed` variable and any final `if (failed) { ... }` check (and simplify
`requireSuccess` to just log and return the boolean or call `process.exit`
directly), or alternatively stop exiting immediately at each call site so
`failed` is meaningful — pick one approach and update the `requireSuccess`
function and its callers accordingly (referencing the `failed` variable, the
`requireSuccess` helper, and the immediate `process.exit(1)` calls).
src/functions/file-index.ts (1)

64-100: Consider using Promise.all for parallel observation lookups.

The observation cache is built sequentially in lines 57-62. Per coding guidelines, independent kv reads should use Promise.all for parallel operations.

♻️ Proposed parallel optimization
       const obsCache = new Map<string, CompressedObservation[]>();
-      for (const session of otherSessions) {
-        obsCache.set(
-          session.id,
-          await kv.list<CompressedObservation>(KV.observations(session.id)),
-        );
-      }
+      const obsResults = await Promise.all(
+        otherSessions.map(async (session) => ({
+          sessionId: session.id,
+          observations: await kv.list<CompressedObservation>(KV.observations(session.id)),
+        })),
+      );
+      for (const { sessionId, observations } of obsResults) {
+        obsCache.set(sessionId, observations);
+      }

As per coding guidelines: "Perform parallel operations where possible using Promise.all for independent kv writes/reads in TypeScript."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/functions/file-index.ts` around lines 64 - 100, The per-session
observation lookup inside the files loop is performed sequentially; change it to
run in parallel using Promise.all by mapping otherSessions to async tasks that
read obsCache.get(session.id) and return matching observations for the current
file (using the same matching logic), then await Promise.all, flatten the
results and assign to history.observations before sorting/slicing; reference the
variables/functions: files loop, otherSessions, obsCache.get(session.id), and
history.observations to locate where to replace the inner for (const session of
otherSessions) loop with a Promise.all-based approach that collects per-session
matches concurrently and then proceeds with history.observations.sort(...) and
slice.
test/retention.test.ts (1)

227-233: Add test coverage for semantic memory eviction using sourceBucket.

The production code uses candidate.sourceBucket when deleting memories, which falls back to KV.memories if undefined. Currently, there is no test verifying that semantic memories are evicted from KV.semantic rather than the fallback bucket. Consider adding a test that creates semantic memories, triggers eviction (with dryRun: false), and verifies the correct bucket is used for deletion.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/retention.test.ts` around lines 227 - 233, Add a test that creates
semantic memories in KV.semantic, ensures the candidate objects include
sourceBucket set to the semantic bucket (so production code uses
candidate.sourceBucket instead of falling back to KV.memories), then call
sdk.trigger with function_id "mem::retention-evict" and payload { threshold:
<value>, dryRun: false } to perform eviction and assert that deletions were
issued against KV.semantic (and not KV.memories); reference the
candidate.sourceBucket field, the KV.semantic and KV.memories buckets, and the
"mem::retention-evict" trigger to locate where to insert the test and
assertions.
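The bucket-selection fallback the missing test should exercise reduces to the sketch below. The type and function names are illustrative, not the real retention code; the production logic is `candidate.sourceBucket || KV.memories`.

```typescript
type EvictionCandidate = { memoryId: string; sourceBucket?: string };

// Mirrors the production fallback: delete from candidate.sourceBucket when
// set, else from the default memories bucket. A test seeding semantic
// memories should see "semantic" here, never the fallback.
function evictionBucket(
  candidate: EvictionCandidate,
  memoriesBucket = "memories",
): string {
  return candidate.sourceBucket || memoriesBucket;
}
```

A `dryRun: false` eviction test would assert that `kv.delete` was invoked with the semantic bucket for semantic candidates and with the fallback bucket only when `sourceBucket` is absent.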
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/cli.ts`:
- Around line 594-607: The prompt result from p.confirm() can be a cancel symbol
(not a boolean) so check the type of upgradeEngine before acting: after calling
p.confirm(...) verify if typeof upgradeEngine !== "boolean" (or explicitly
detect the cancel symbol) and handle cancellation (log and exit or return)
instead of treating it as truthy; only call runCommand(cargoBin, ["install",
"iii-engine", "--force"], ...) and requireSuccess(...) when upgradeEngine ===
true, otherwise log the skipped/cancelled action with p.log.info and return/exit
as appropriate.

In `@src/functions/leases.ts`:
- Around line 175-180: The audit call in mem::lease-renew currently records
renewals as "lease_acquire", which conflates first claims and extensions; update
the recordAudit invocation in leases.ts (the call that passes "lease_acquire"
inside mem::lease-renew) to use a distinct event name like "lease_renew" (and
adjust any downstream audit filters/consumers expecting the old string) while
preserving the same payload (actionId, agentId, before, after, and
activeLease.id) so counts and filters can distinguish renewals from
acquisitions.

In `@src/functions/relations.ts`:
- Around line 47-50: The pair-only lock (`pairKey = mem:${firstId}:${secondId}`)
doesn't prevent concurrent mutations to the same entity (e.g., A in relate(A,B)
and relate(A,C)), so change the locking to acquire per-entity locks for both
involved IDs in a canonical order before mutating shared rows: use withKeyedLock
on keys like `mem:${firstId}` and `mem:${secondId}` (or call withKeyedLock with
an array of those two keys sorted) to ensure both entity rows are serialized;
update the two locations using the pair lock (the block using pairKey around the
relate logic and the similar block at lines ~83-102) to acquire both individual
locks in deterministic order to avoid deadlock and preserve concurrent safety.

In `@src/functions/sketches.ts`:
- Around line 33-38: The code awaits recordAudit() after performing KV mutations
(e.g., kv.set, kv.delete) which allows an audit write failure to surface as an
error after state has already changed; change these handlers to avoid rolling
back applied mutations by executing recordAudit in a non-fatal way: call
recordAudit but do not let its rejection propagate—either fire-and-forget (no
await) or await inside a try/catch and log the error (using processLogger or
equivalent) instead of throwing; update all usages around the sketch handlers
that call recordAudit (references: recordAudit, kv.set, kv.delete) so audit
failures are logged but do not cause the function to return an error.

In `@src/functions/summarize.ts`:
- Around line 124-128: The summary is already persisted via kv.set(KV.summaries,
sessionId, summary) but a failing recordAudit("compress", "mem::summarize", ...)
can still bubble up and cause callers to retry; wrap the recordAudit call in a
non-fatal handler (e.g., await it inside a try/catch and log the error, or
dispatch it fire-and-forget) so failures in recordAudit do not cause
mem::summarize to fail; keep references to KV.summaries, sessionId, summary and
the recordAudit invocation when applying this change.

In `@src/mcp/server.ts`:
- Around line 259-262: The payload currently sets refresh using args.refresh ===
"true", which treats only the string "true" as truthy and ignores boolean true;
update the assignment in the sdk.trigger call (the object with function_id
"mem::profile") to accept both boolean true and the string "true" (e.g., use
args.refresh === true || args.refresh === "true") so boolean values sent by MCP
clients are handled correctly.
- Around line 890-899: The handler currently builds sketchPayload using
asNonEmptyString(args.title) which may be undefined and then forwards the
request to sdk.trigger("mem::sketch-create"); instead validate inputs at the MCP
boundary: explicitly check that title (and other required fields if necessary)
is a non-empty string before calling sdk.trigger, and if validation fails return
a 400 response from this handler rather than proxying to mem::sketch-create;
update the code paths around sketchPayload construction, the asNonEmptyString
usage, and the sdk.trigger call to perform this early validation and
short-circuit with a 400.
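The `p.confirm()` cancel-handling fix in the first `src/cli.ts` comment can be sketched as a guard. In `@clack/prompts` a cancelled prompt resolves to a symbol rather than a boolean (detectable via `isCancel`); it is modeled here with a plain type check so the pattern is testable without the prompt library.

```typescript
// Treat anything that is not an explicit boolean true as "do not upgrade":
// this covers false, the clack cancel symbol, and any unexpected value.
function shouldUpgradeEngine(answer: unknown): boolean {
  return typeof answer === "boolean" && answer === true;
}
```

In the real CLI flow, the cancel case would additionally log a skip message via `p.log.info` before returning, instead of falling through to `runCommand(cargoBin, ["install", "iii-engine", "--force"], ...)`.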

---

Outside diff comments:
In `@src/functions/governance.ts`:
- Around line 101-109: The current bulk delete uses await
Promise.all(candidates.map((mem) => kv.delete(KV.memories, mem.id))) which fails
fast and can leave partially-applied deletes without an audit; replace this with
Promise.allSettled over candidates.map(...) to run all delete attempts, collect
which mem.id deletions succeeded vs failed, then call recordAudit(kv, "delete",
"mem::governance-bulk", successfulIds, { filter: data, deleted: successfulCount,
failed: failedCount, errors: failureDetails }) and log/report failures so the
audit reflects the partial outcome (keep references to kv.delete, recordAudit,
candidates, data, and KV.memories when making the change).
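The `Promise.allSettled` split suggested above can be sketched as follows; `kvDelete` is a stand-in for `kv.delete(KV.memories, mem.id)` so the pattern is self-contained:

```typescript
// Run every delete, then partition outcomes so the audit record can report
// the partial result instead of failing fast on the first error.
async function bulkDelete(
  ids: string[],
  kvDelete: (id: string) => Promise<void>,
): Promise<{ successfulIds: string[]; failures: { id: string; error: string }[] }> {
  const results = await Promise.allSettled(ids.map((id) => kvDelete(id)));
  const successfulIds: string[] = [];
  const failures: { id: string; error: string }[] = [];
  results.forEach((result, i) => {
    if (result.status === "fulfilled") successfulIds.push(ids[i]);
    else failures.push({ id: ids[i], error: String(result.reason) });
  });
  return { successfulIds, failures };
}
```

The caller then records `deleted: successfulIds.length` and `failed: failures.length` so the audit reflects what actually happened.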

In `@src/functions/mesh.ts`:
- Around line 144-153: Add a null/undefined payload guard at the start of each
sdk.registerFunction handler that dereferences data (mem::mesh-register,
mem::mesh-sync, mem::mesh-receive, mem::mesh-remove): check that data is present
and return a structured failure { success: false, error: "payload required" }
(or a message naming missing fields) before accessing data.*; then validate
required fields per handler (e.g., url and name for mem::mesh-register),
continue to use kv.get/kv.set/kv.list for state and call recordAudit() as before
on success/failure. Ensure the guard is the first statement inside each
registered async function so no TypeError can occur when payload is omitted.
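A sketch of the guard-first pattern for `mem::mesh-register`, assuming `url` and `name` are its only required fields (the other handlers follow the same shape with their own field lists):

```typescript
type MeshRegisterPayload = { url: string; name: string };

// Guard runs before any data.* dereference, so an omitted payload returns
// a structured failure instead of throwing a TypeError.
function validateMeshRegister(
  data: unknown,
): { ok: true; data: MeshRegisterPayload } | { ok: false; error: string } {
  if (data === null || typeof data !== "object") {
    return { ok: false, error: "payload required" };
  }
  const { url, name } = data as Partial<MeshRegisterPayload>;
  if (typeof url !== "string" || typeof name !== "string") {
    return { ok: false, error: "url and name are required strings" };
  }
  return { ok: true, data: { url, name } };
}
```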

In `@src/functions/obsidian-export.ts`:
- Around line 200-215: The handler registered as "mem::obsidian-export"
currently only checks for a missing payload but calls
resolveVaultDir(data.vaultDir) and new Set(data.types) without validating types;
add explicit validation: ensure data.vaultDir is a string before calling
resolveVaultDir (return { success: false, error: "vaultDir must be a string" }
if not), and ensure data.types is either undefined or an array of strings before
constructing the Set (return { success: false, error: "types must be an array of
strings" } on failure); keep using resolveVaultDir and getExportRoot for path
validation and return the same style error object when resolveVaultDir indicates
an invalid path.

In `@src/functions/retention.ts`:
- Around line 208-217: The eviction loop over candidates deletes memories
(kv.delete calls for candidate.sourceBucket || KV.memories and
KV.retentionScores) but does not call recordAudit() and silently swallows
errors; update the loop to import recordAudit from ./audit.js and, after each
successful deletion, call recordAudit(...) with context identifying the
memoryId, action "evict" and sourceBucket; in the catch block log the error
(e.g., processLogger.error or logger.error) including candidate.memoryId and
continue, and ensure evicted++ only increments after successful delete + audit
to preserve accurate counts.

In `@src/viewer/index.html`:
- Around line 2797-2853: The reconnect handlers currently reference the mutable
global state.ws, which allows old sockets to affect the new one; fix connectWs
by creating a local socket variable (e.g., var ws = new WebSocket(...)), set
ws.__direct on that local variable, attach all handlers (onopen, onmessage,
onclose, onerror) to ws, and only after handlers are bound assign state.ws = ws
so later reconnects don't let callbacks from previous sockets mutate global
retry/direct state; update any handler references that used state.ws (like
state.ws.close()) to use the local ws or guard against staleness by comparing
state.ws === ws before mutating global state or performing closes.
- Around line 2832-2867: The handler is routing every direct stream event into
routeWsMessage which causes non-observation payloads (like session.activity) to
be treated as timeline observations; update handleStreamEvent to only call
routeWsMessage for true observation objects — e.g., check evt.event_type (or
evt.data) and ensure the payload looks like an observation (has required fields
such as id and timestamp or contains data.observation) before routing; otherwise
ignore or handle session.activity separately. Reference: handleStreamEvent and
routeWsMessage; gate the call using evt.event_type !== 'session.activity' or by
validating evt.data.id && evt.data.timestamp (or evt.data.observation) prior to
invoking routeWsMessage.

---

Duplicate comments:
In `@src/mcp/server.ts`:
- Around line 95-98: The current use of the || fallback when calling sdk.trigger
for function_id "mem::search" causes an explicit 0 for args.limit to be treated
as falsy and replaced by 10; change the fallback to preserve 0 by using a
nullish/undefined check (e.g., use args.limit ?? 10 or a typeof check like
typeof args.limit === "number" ? args.limit : 10) when constructing the payload,
and make the same change for the other identical sites that build payloads from
args.limit in this file (the other sdk.trigger usages noted in the review).
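A two-line illustration of why `||` breaks an explicit `limit: 0` while `??` preserves it:

```typescript
const args: { limit?: number } = { limit: 0 };

const withOr = (args.limit as number) || 10; // 0 is falsy, so it is replaced
const withNullish = args.limit ?? 10;        // only null/undefined fall back
```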

---

Nitpick comments:
In `@src/cli.ts`:
- Around line 557-564: The flag `failed` in the CLI upgrade flow is dead because
`requireSuccess(ok, label)` sets `failed = true` but every caller immediately
calls `process.exit(1)`, so `failed` can never be true when the final `if
(failed)` branch runs; remove the unused state by deleting the `failed` variable
and any final `if (failed) { ... }` check (and simplify `requireSuccess` to just
log and return the boolean or call `process.exit` directly), or alternatively
stop exiting immediately at each call site so `failed` is meaningful — pick one
approach and update the `requireSuccess` function and its callers accordingly
(referencing the `failed` variable, the `requireSuccess` helper, and the
immediate `process.exit(1)` calls).

In `@src/functions/file-index.ts`:
- Around line 64-100: The per-session observation lookup inside the files loop
is performed sequentially; change it to run in parallel using Promise.all by
mapping otherSessions to async tasks that read obsCache.get(session.id) and
return matching observations for the current file (using the same matching
logic), then await Promise.all, flatten the results and assign to
history.observations before sorting/slicing; reference the variables/functions:
files loop, otherSessions, obsCache.get(session.id), and history.observations to
locate where to replace the inner for (const session of otherSessions) loop with
a Promise.all-based approach that collects per-session matches concurrently and
then proceeds with history.observations.sort(...) and slice.

In `@test/retention.test.ts`:
- Around line 227-233: Add a test that creates semantic memories in KV.semantic,
ensures the candidate objects include sourceBucket set to the semantic bucket
(so production code uses candidate.sourceBucket instead of falling back to
KV.memories), then call sdk.trigger with function_id "mem::retention-evict" and
payload { threshold: <value>, dryRun: false } to perform eviction and assert
that deletions were issued against KV.semantic (and not KV.memories); reference
the candidate.sourceBucket field, the KV.semantic and KV.memories buckets, and
the "mem::retention-evict" trigger to locate where to insert the test and
assertions.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b267d619-5820-40ab-9ed8-513a463ee9a9

📥 Commits

Reviewing files that changed from the base of the PR and between a356d4b and 056d9be.

📒 Files selected for processing (42)
  • README.md
  • src/cli.ts
  • src/functions/actions.ts
  • src/functions/auto-forget.ts
  • src/functions/cascade.ts
  • src/functions/checkpoints.ts
  • src/functions/claude-bridge.ts
  • src/functions/compress.ts
  • src/functions/consolidate.ts
  • src/functions/diagnostics.ts
  • src/functions/enrich.ts
  • src/functions/evict.ts
  • src/functions/export-import.ts
  • src/functions/file-index.ts
  • src/functions/flow-compress.ts
  • src/functions/governance.ts
  • src/functions/leases.ts
  • src/functions/lessons.ts
  • src/functions/mesh.ts
  • src/functions/migrate.ts
  • src/functions/obsidian-export.ts
  • src/functions/privacy.ts
  • src/functions/profile.ts
  • src/functions/query-expansion.ts
  • src/functions/reflect.ts
  • src/functions/relations.ts
  • src/functions/retention.ts
  • src/functions/routines.ts
  • src/functions/sentinels.ts
  • src/functions/signals.ts
  • src/functions/sketches.ts
  • src/functions/sliding-window.ts
  • src/functions/snapshot.ts
  • src/functions/summarize.ts
  • src/functions/team.ts
  • src/index.ts
  • src/mcp/server.ts
  • src/types.ts
  • src/viewer/index.html
  • test/actions.test.ts
  • test/enrich.test.ts
  • test/retention.test.ts
✅ Files skipped from review due to trivial changes (3)
  • src/functions/evict.ts
  • test/enrich.test.ts
  • README.md
🚧 Files skipped from review as they are similar to previous changes (17)
  • src/functions/lessons.ts
  • src/functions/flow-compress.ts
  • src/functions/consolidate.ts
  • src/functions/migrate.ts
  • src/functions/cascade.ts
  • src/functions/claude-bridge.ts
  • src/functions/snapshot.ts
  • src/functions/sentinels.ts
  • src/functions/checkpoints.ts
  • src/functions/auto-forget.ts
  • src/functions/query-expansion.ts
  • src/functions/signals.ts
  • src/functions/actions.ts
  • test/actions.test.ts
  • src/functions/compress.ts
  • src/functions/export-import.ts
  • src/functions/routines.ts

…on-upgrade-cmd

# Conflicts:
#	src/cli.ts
#	src/functions/auto-forget.ts
#	src/functions/evict.ts
#	src/functions/export-import.ts
#	src/functions/file-index.ts
#	src/functions/governance.ts
#	src/functions/relations.ts
CodeRabbit flagged 9 remaining issues on #116, all of them real. Each was
verified against the current branch code before a fix was applied. Two
findings were deliberately skipped as policy/scope issues and are
documented at the bottom.

### Applied (9 real findings)

- src/triggers/api.ts — api.ts numeric query param validation:
  multiple sites forwarded `parseInt(params.limit)` result to the
  downstream function without a Number.isFinite check. A non-numeric
  query value produced NaN at the iii-sdk trigger boundary. Added
  parseOptionalInt and parseOptionalFloat helpers at the top of the
  file, wired into api::crystal-list, api::lesson-list, and
  api::insight-list (5 sites total).
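A plausible shape for those helpers (the exact implementations in src/triggers/api.ts are assumed, not copied): return `undefined` for anything that does not parse to a finite number, so callers can apply their own default with `??`.

```typescript
function parseOptionalInt(raw: unknown): number | undefined {
  if (typeof raw !== "string" || raw.trim() === "") return undefined;
  const n = Number.parseInt(raw, 10);
  return Number.isFinite(n) ? n : undefined; // NaN never reaches the trigger
}

function parseOptionalFloat(raw: unknown): number | undefined {
  if (typeof raw !== "string" || raw.trim() === "") return undefined;
  const n = Number.parseFloat(raw);
  return Number.isFinite(n) ? n : undefined;
}
```

At a call site this reads as `const limit = parseOptionalInt(req.query_params?.["limit"]) ?? 20;`, with malformed input falling back to the default instead of forwarding NaN.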

- src/triggers/api.ts — api::observe, api::context, api::session::start,
  api::session::end were forwarding req.body verbatim to sdk.trigger
  with no validation. Added explicit type checks mirroring the existing
  api::search pattern, construct a sanitized payload object before
  triggering.

- src/cli.ts — p.confirm() can return a cancel Symbol on Ctrl+C, which
  is truthy, so the upgrade path ran even when the user cancelled.
  Added p.isCancel() check and explicit boolean comparison.
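The cancellation check can be sketched with local stand-ins for the `@clack/prompts` exports; the cancel symbol and `isCancel` are modeled here rather than imported, but the shape mirrors the library's behavior of resolving `confirm()` to `boolean | symbol`.

```typescript
// Stand-in for the sentinel @clack/prompts resolves with on Ctrl+C.
const CANCEL = Symbol("clack:cancel");

function isCancel(value: unknown): value is symbol {
  return value === CANCEL;
}

function shouldProceed(answer: boolean | symbol): boolean {
  // Truthiness alone is wrong: a Symbol is truthy, so `if (answer)` would
  // run the upgrade even after the user cancelled.
  if (isCancel(answer)) return false;
  return answer === true; // explicit boolean comparison
}
```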

- src/cli.ts — removed dead `failed` flag in runUpgrade: every caller
  already calls process.exit(1) immediately, so the final `if (failed)`
  branch was unreachable. Simplified requireSuccess accordingly.

- src/functions/leases.ts — mem::lease-renew recorded audit events as
  "lease_acquire", which conflated renewals with first claims. Renamed
  the audit event to "lease_renew" so downstream audit filters can tell
  the two operations apart.

- src/functions/relations.ts — the pair lock
  `mem:${firstId}:${secondId}` serialized concurrent relate(A,B) calls
  with identical pairs, but did NOT protect concurrent relate(A,B) +
  relate(A,C). Both modify memory A's `relatedIds` array, which is a
  classic last-writer-wins race producing lost relation edges. Fixed
  by replacing the pair lock with nested per-entity locks in canonical
  sort order: withKeyedLock(mem:firstId) wrapping withKeyedLock(mem:secondId).
  Since the ids are sorted deterministically, no deadlock is possible.
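A self-contained sketch of the canonical-order locking, with `withKeyedLock` reduced to a simple promise-chain mutex (the real helper's internals are assumed). Note the equality branch: nesting the same key inside itself would deadlock, so self-relations take the lock once.

```typescript
const chains = new Map<string, Promise<unknown>>();

// Queue fn behind the previous holder of this key.
function withKeyedLock<T>(key: string, fn: () => Promise<T>): Promise<T> {
  const prev = chains.get(key) ?? Promise.resolve();
  const next = prev.then(fn, fn); // run regardless of the prior outcome
  chains.set(key, next.catch(() => undefined));
  return next;
}

async function relate(a: string, b: string, work: () => Promise<void>): Promise<void> {
  const [firstId, secondId] = [a, b].sort(); // canonical order: no lock cycles
  if (firstId === secondId) {
    // Self-relation: re-acquiring the same keyed lock would deadlock.
    return withKeyedLock(`mem:${firstId}`, work);
  }
  return withKeyedLock(`mem:${firstId}`, () =>
    withKeyedLock(`mem:${secondId}`, work),
  );
}
```

Because relate(A,B) and relate(B,A) both sort to the same (first, second) pair, every caller acquires the two entity locks in the same order.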

- src/functions/sketches.ts + src/functions/summarize.ts + new
  src/functions/audit.ts safeAudit helper — recordAudit was awaited
  after kv.set calls. An audit write failure would reject and the
  caller would see an error even though the target state was already
  persisted. New safeAudit wrapper swallows audit errors and logs them
  via ctx.logger.warn, preserving the mutation's success. Applied to
  all 11 audit sites in sketches.ts and the single site in summarize.ts.

- src/mcp/server.ts — three issues in the MCP handler:
  1. memory_profile's refresh flag used `args.refresh === "true"`,
     ignoring boolean `true` from MCP clients that send proper types.
     Now accepts both.
  2. memory_sketch_create built sketchPayload with
     `asNonEmptyString(args.title)` which could return undefined,
     then forwarded that to the downstream function. Added explicit
     validation + 400 response at the MCP boundary.
  3. memory_recall, memory_team_feed, memory_audit_query used
     `(args.limit as number) || 10` which replaced an explicit 0 with
     10. Changed to typeof check so explicit 0 (rare but legal) is
     preserved.

- src/functions/governance.ts — mem::governance-bulk used Promise.all
  for the delete batch, which fails fast on the first error. The audit
  record still said `deleted: candidates.length` even if only half
  actually succeeded. Switched to Promise.allSettled, split results
  into successfulIds and failures arrays, record audit with both counts
  plus the per-failure details for traceability.

- src/functions/mesh.ts — mem::mesh-register, mem::mesh-sync,
  mem::mesh-receive, mem::mesh-remove all dereferenced `data.*` without
  checking that data was passed. A TypeError would bubble up on null
  payload. Added early null/type guards returning structured errors.

- src/functions/obsidian-export.ts — resolveVaultDir(data.vaultDir)
  was called without validating that vaultDir was a string, and
  `new Set(data.types)` without validating types was an array of
  strings. Added explicit validation returning 400-style error
  responses.

- src/functions/retention.ts — the eviction loop silently swallowed
  kv.delete errors via a bare `continue`. Added ctx.logger.warn on
  catch, included memoryId and sourceBucket in the log, and now
  returns `failed` count alongside `evicted` in the response.

- src/viewer/index.html — two WebSocket bugs:
  1. connectWs assigned to the mutable global state.ws before binding
     handlers, so old sockets could have their callbacks fire and
     mutate retry/direct state after state.ws was overwritten by a
     new socket. Fixed by creating a local `ws` variable, binding
     all handlers to it, only then assigning state.ws = ws. Each
     handler guards `if (state.ws !== ws) return` so stale callbacks
     are dropped.
  2. handleStreamEvent routed EVERY incoming event to routeWsMessage,
     so non-observation events (session.activity, etc.) were being
     treated as timeline observations and causing UI confusion. Added
     a looksLikeObservation helper + event_type gate that only routes
     real observation payloads.

- test/retention.test.ts — added an explicit sourceBucket eviction
  test that seeds a semantic memory at high age, runs retention-score,
  then runs retention-evict at high threshold and asserts BOTH
  KV.semantic and KV.memories are empty. Proves the candidate.sourceBucket
  branch added in this PR actually routes to the right bucket.

- src/types.ts — side fix: the ExportData.version union on line 254
  used comma separators instead of pipes in 3 positions (0.7.9,
  0.8.0, 0.8.1 → needed | between them). tsdown strips types so the
  build passed, but tsc --noEmit would have thrown and any IDE showed
  squiggles. Pre-existing latent bug, fixed while in the file.

### Deliberately skipped

- retention.ts eviction audit (CodeRabbit asked to add recordAudit
  to the eviction loop): policy change, not a bug fix. Verified:
  auto-forget.ts, evict.ts, retention-evict, remember-forget all
  skip audit by convention — only governance audits. Adding audit
  everywhere is a policy decision tracked in issue #125.

- file-index.ts sequential → parallel session lookup: a micro-optimization.
  The loop is already cache-backed, so the savings are negligible and not
  worth the readability cost.

Tests: 655/655. Build clean.

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/functions/retention.ts (1)

208-231: ⚠️ Potential issue | 🟠 Major

Don't hide partial eviction failures behind success: true.

If one delete succeeds and the other fails, this already mutated state, but the handler still returns { success: true, evicted }. Return failed as part of the result at minimum, and handle the two deletes independently so callers can detect and retry partial failures.

Possible direction
       let evicted = 0;
       let failed = 0;
       for (const candidate of candidates) {
-        try {
-          await kv.delete(candidate.sourceBucket || KV.memories, candidate.memoryId);
-          await kv.delete(KV.retentionScores, candidate.memoryId);
+        const bucket = candidate.sourceBucket || KV.memories;
+        const results = await Promise.allSettled([
+          kv.delete(bucket, candidate.memoryId),
+          kv.delete(KV.retentionScores, candidate.memoryId),
+        ]);
+
+        if (results.every((result) => result.status === "fulfilled")) {
           evicted++;
-        } catch (err) {
+          continue;
+        }
+
+        const err = results.find((result) => result.status === "rejected");
+        failed++;
           ctx.logger.warn("Retention eviction failed for candidate", {
             memoryId: candidate.memoryId,
             sourceBucket: candidate.sourceBucket,
-            error: err instanceof Error ? err.message : String(err),
+            error:
+              err?.status === "rejected"
+                ? err.reason instanceof Error
+                  ? err.reason.message
+                  : String(err.reason)
+                : "unknown",
           });
-        }
       }
 
-      return { success: true, evicted };
+      return { success: failed === 0, evicted, failed };
As per coding guidelines, `src/functions/**/*.ts`: Perform parallel operations where possible using Promise.all for independent kv writes/reads in TypeScript.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/functions/retention.ts` around lines 208 - 231, The current retention
loop treats partial failures as overall success and performs serial deletes;
change it so each candidate's two deletes (kv.delete(candidate.sourceBucket ||
KV.memories, candidate.memoryId) and kv.delete(KV.retentionScores,
candidate.memoryId)) are attempted independently and in parallel (use
Promise.all for each candidate) so one failure doesn't mask the other, count
both evicted and failed accordingly, include failed in the returned result
(e.g., return { success: failed === 0, evicted, failed }), and keep existing
ctx.logger.warn/ctx.logger.info calls (refer to the candidates iteration,
kv.delete calls, KV.memories, KV.retentionScores, ctx.logger, and the final
return) to surface partial failures for callers to detect and retry.
src/functions/relations.ts (1)

146-173: ⚠️ Potential issue | 🔴 Critical

Audit failures can strand mem::evolve mid-migration.

After Line 145, the old memory is already persisted with isLatest = false. If either of these new recordAudit() calls throws before the new memory and supersedes relation are fully written, the caller gets a failure and the version chain is left half-applied. These audit writes need to be non-fatal.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/functions/relations.ts` around lines 146 - 173, The two recordAudit()
calls in mem::evolve (the ones after kv.set(KV.memories, evolved.id, evolved)
and after creating the relationId) must not be allowed to abort the migration;
wrap each recordAudit(...) in a try/catch so any thrown error is caught, logged
(or otherwise noted) and suppressed so the function continues to persist the new
memory and the MemoryRelation (constructed as relation with type "supersedes",
sourceId evolved.id, targetId existing.id, createdAt now, confidence 1.0, and
saved via kv.set(KV.relations, relationId, relation) where relationId =
generateId("rel")), rather than rethrowing — keep the main kv.set operations
fatal but make audit writes non-fatal.
♻️ Duplicate comments (2)
src/mcp/server.ts (1)

882-885: ⚠️ Potential issue | 🟠 Major

Validate required IDs before forwarding these MCP calls.

memory_sentinel_trigger and memory_sketch_promote still pass args.sentinelId / args.sketchId straight through, so bad client input becomes a downstream function error instead of a 400 here. Reuse asNonEmptyString() and reject empty or non-string IDs at the MCP boundary. As per coding guidelines "Input validation must occur at system boundaries (MCP handlers, REST endpoints) in TypeScript"

Also applies to: 911-914

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/mcp/server.ts` around lines 882 - 885, Validate sentinelId/sketchId at
the MCP handler boundary using the existing asNonEmptyString() helper rather
than forwarding args directly: in the memory_sentinel_trigger handler (where
sdk.trigger is called with sentinelId) and in memory_sketch_promote (similarly
for sketchId) call asNonEmptyString(args.sentinelId) /
asNonEmptyString(args.sketchId), and if validation fails return a 400-style MCP
error (reject/throw an error indicating invalid/missing ID) instead of invoking
sdk.trigger; this ensures invalid client input is rejected before calling
downstream functions.
src/functions/relations.ts (1)

49-50: ⚠️ Potential issue | 🔴 Critical

Handle sourceId === targetId before nesting the same lock.

When both IDs are equal, this acquires withKeyedLock("mem:<id>") inside itself. The inner call waits for the outer lock to release, while the outer call is awaiting the inner one, so self-relations deadlock permanently.

Suggested fix
-      return withKeyedLock(`mem:${firstId}`, async () =>
-        withKeyedLock(`mem:${secondId}`, async () => {
+      const run = async () => {
           const source = await kv.get<Memory>(KV.memories, data.sourceId);
           const target = await kv.get<Memory>(KV.memories, data.targetId);
           if (!source || !target) {
             return {
               success: false,
@@
           ctx.logger.info("Memory relation created", {
             relationId,
             type: data.type,
             source: data.sourceId,
             target: data.targetId,
           });
           return { success: true, relationId, relation };
-        }),
-      );
+      };
+
+      if (firstId === secondId) {
+        return withKeyedLock(`mem:${firstId}`, run);
+      }
+
+      return withKeyedLock(`mem:${firstId}`, () =>
+        withKeyedLock(`mem:${secondId}`, run),
+      );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/functions/relations.ts` around lines 49 - 50, The nested withKeyedLock
calls can deadlock when sourceId === targetId because it re-acquires the same
lock inside itself; modify the logic around firstId/secondId so you check if
firstId === secondId and in that case acquire a single
withKeyedLock(`mem:${firstId}`) (or handle the self-relation without nesting)
instead of calling withKeyedLock twice; update the block that currently nests
withKeyedLock(`mem:${firstId}`, async () => withKeyedLock(`mem:${secondId}`,
...)) to branch on equality and only acquire one lock for the identical IDs.
🧹 Nitpick comments (3)
src/viewer/index.html (1)

2880-2893: Minor: observation variable is declared twice in the function.

The variable observation is declared at both line 2881 and line 2889. While this works correctly in JavaScript due to var hoisting (they're in separate branches), it can be slightly confusing when reading the code.

This is a style nit and doesn't affect functionality.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/viewer/index.html` around lines 2880 - 2893, The code declares the
variable observation twice with var in different branches; to clean this up,
declare a single mutable variable once (e.g., let observation;) before the
evt.type checks and then assign observation = evt.data.observation || evt.data
in the 'event' branch and observation = payload.observation || payload in the
'create'/'update' branch, keeping the calls to looksLikeObservation and
routeWsMessage unchanged (functions to locate: looksLikeObservation and
routeWsMessage).
src/triggers/api.ts (2)

913-925: Inconsistent numeric query parameter parsing.

api::team-feed uses raw parseInt(...) || 20 while newer endpoints like api::crystal-list, api::lesson-list, and api::insight-list use the parseOptionalInt helper. Similar inconsistency exists in api::audit (line 956), api::frontier (line 1211), and api::signal-read (line 1403).

Consider using parseOptionalInt/parseOptionalFloat consistently across all endpoints for uniform handling of malformed numeric inputs.

♻️ Example fix for api::team-feed
  sdk.registerFunction("api::team-feed", 
    async (req: ApiRequest): Promise<Response> => {
      const authErr = checkAuth(req, secret);
      if (authErr) return authErr;
      try {
-       const limit = parseInt(req.query_params?.["limit"] as string) || 20;
+       const limit = parseOptionalInt(req.query_params?.["limit"]) ?? 20;
        const result = await sdk.trigger({ function_id: "mem::team-feed", payload: { limit } });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/triggers/api.ts` around lines 913 - 925, The api::team-feed endpoint
parses the limit query param with raw parseInt(...) || 20 which is inconsistent
and can mishandle malformed numbers; replace that logic in the
sdk.registerFunction handler for "api::team-feed" to use the existing
parseOptionalInt helper (like other endpoints) to parse
req.query_params["limit"] and default to 20 when parseOptionalInt returns
undefined/NaN, and apply the same pattern to the other mentioned handlers
("api::audit", "api::frontier", "api::signal-read") so all numeric query params
use parseOptionalInt/parseOptionalFloat for uniform behavior.

96-130: Redundant auth check when middleware is configured.

The trigger config includes middleware_function_ids: ["middleware::api-auth"], which handles authentication before this handler runs. The inline checkAuth(req, secret) call at line 98 is therefore redundant.

This pattern repeats in api::observe, api::context, api::search, api::session::start, api::session::end, and api::summarize.

♻️ Consider removing redundant inline auth for endpoints with middleware
  sdk.registerFunction("api::health", 
    async (req: ApiRequest): Promise<Response> => {
-     const authErr = checkAuth(req, secret);
-     if (authErr) return authErr;
-
      const health = await getLatestHealth(kv);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/triggers/api.ts` around lines 96 - 130, The inline auth check using
checkAuth(req, secret) inside handlers like the
sdk.registerFunction("api::health") should be removed when those handlers are
already protected by middleware_function_ids ("middleware::api-auth"); delete
the authErr assignment and the early return (if (authErr) return authErr;) from
api::health and the other handlers named api::observe, api::context,
api::search, api::session::start, api::session::end, and api::summarize; if you
need to support both middleware and non-middleware invocation, replace the
unconditional checkAuth call with a conditional that only runs when the trigger
config lacks middleware_function_ids, otherwise rely on the middleware for
authentication.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/functions/governance.ts`:
- Around line 101-103: The current Promise.allSettled over candidates.map(...)
will fire kv.delete for every candidate concurrently and can overwhelm the state
worker; change it to a bounded-concurrency or batched delete loop: instead of
Promise.allSettled(candidates.map(...)), iterate candidates in chunks or use a
small concurrency limiter (e.g., semaphore/p-limit) to call kv.delete for at
most N simultaneous deletes, awaiting each batch/slot before continuing, and
then collect/aggregate the per-delete results into the results variable so
downstream logic using results behaves the same.
- Around line 117-128: The bulk-delete audit call to recordAudit(...) must be
best-effort so failures don't make the overall operation appear to have failed;
wrap the recordAudit call (the call with arguments kv, "delete",
"mem::governance-bulk", successfulIds, {...}) in a try/catch (or replace with
safeAudit(...) if that helper exists) and swallow/log any audit errors (e.g.,
processLogger.error or console.error) without rethrowing so the committed
deletes are not masked by audit failures.
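The bounded-concurrency suggestion above can be met without pulling in p-limit. This sketch runs at most `limit` tasks at a time and reports results allSettled-style; it is a generic stand-in, not the project's code.

```typescript
async function settleWithLimit<T>(
  tasks: (() => Promise<T>)[],
  limit: number,
): Promise<PromiseSettledResult<T>[]> {
  const results: PromiseSettledResult<T>[] = new Array(tasks.length);
  let nextIndex = 0;

  // Each worker pulls the next task; `limit` workers run concurrently.
  async function worker(): Promise<void> {
    while (nextIndex < tasks.length) {
      const i = nextIndex++; // safe: no await between read and increment
      try {
        results[i] = { status: "fulfilled", value: await tasks[i]() };
      } catch (reason) {
        results[i] = { status: "rejected", reason };
      }
    }
  }

  await Promise.all(Array.from({ length: Math.min(limit, tasks.length) }, worker));
  return results;
}
```

The governance handler would map candidates to `() => kv.delete(KV.memories, mem.id)` thunks and aggregate the returned results exactly as it does with `Promise.allSettled` today.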

In `@src/functions/relations.ts`:
- Around line 75-102: The inline recordAudit calls (recordAudit) between durable
writes to KV.relations and memory relatedIds can leave state partially updated
and are misclassified as "evolve"; move durable writes (updates to KV.relations
via relationId and KV.memories for data.sourceId/data.targetId) before emitting
any audit, then emit audits as best-effort (wrap in safeAudit or a local
try/catch) so failures don't rollback or throw; also change the audit operation
from "evolve" to a relation-specific value (e.g., "relation:create" or
"relation:update") and add that new literal to the AuditEntry.operation union in
src/types.ts.

In `@src/functions/retention.ts`:
- Line 212: The deletion uses the wrong fallback bucket: change the kv.delete
call that currently uses candidate.sourceBucket || KV.memories to instead use
candidate.sourceBucket || KV.retentionScores so legacy retention entries without
sourceBucket (in KV.retentionScores) are removed correctly; update the delete
invocation that references candidate.memoryId and candidate.sourceBucket in
retention.ts accordingly.
- Around line 76-79: The merge of config using "{ ...DEFAULT_DECAY,
...data.config }" in the sdk.registerFunction("mem::retention-score", ...) is
shallow and can drop nested keys like tierThresholds; update the function to
perform a deep merge for nested fields (at least for tierThresholds) between
DEFAULT_DECAY and data.config (e.g., merge DEFAULT_DECAY.tierThresholds with
data.config.tierThresholds) and add input validation for the resulting
DecayConfig (ensure hot/warm/cold keys exist and are numbers within expected
ranges) before using it in the retention calculations; reference DEFAULT_DECAY,
DecayConfig, and tierThresholds in your changes and keep the
sdk.registerFunction(...) pattern with validation early in the function.
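A sketch of the one-level-deep merge for `tierThresholds`; the `DecayConfig` shape here (including `halfLifeDays`) is assumed for illustration, not taken from retention.ts.

```typescript
interface DecayConfig {
  halfLifeDays: number; // assumed field, for illustration only
  tierThresholds: { hot: number; warm: number; cold: number };
}

type DeepPartial<T> = {
  [K in keyof T]?: T[K] extends object ? DeepPartial<T[K]> : T[K];
};

const DEFAULT_DECAY: DecayConfig = {
  halfLifeDays: 30,
  tierThresholds: { hot: 0.8, warm: 0.5, cold: 0.2 },
};

function mergeDecayConfig(override?: DeepPartial<DecayConfig>): DecayConfig {
  // A plain `{ ...DEFAULT_DECAY, ...override }` would replace the whole
  // tierThresholds object; spreading the nested object keeps unspecified keys.
  return {
    ...DEFAULT_DECAY,
    ...override,
    tierThresholds: {
      ...DEFAULT_DECAY.tierThresholds,
      ...(override?.tierThresholds ?? {}),
    },
  } as DecayConfig;
}
```

Validation (numeric, in-range hot/warm/cold) would then run on the merged object before it feeds the retention calculations.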

In `@src/triggers/api.ts`:
- Around line 1978-1986: In the sdk.registerFunction handler
"api::obsidian-export" (ApiRequest), validate that body.vaultDir is a non-empty
string before calling sdk.trigger; if vaultDir is missing or not a string,
return a 400-style response (e.g., status_code 400 with an error message)
instead of passing the invalid value to the downstream function_id
"mem::obsidian-export"; keep existing types parsing and checkAuth usage and only
call sdk.trigger when vaultDir passes validation.

---

Outside diff comments:
In `@src/functions/relations.ts`:
- Around line 146-173: The two recordAudit() calls in mem::evolve (the ones
after kv.set(KV.memories, evolved.id, evolved) and after creating the
relationId) must not be allowed to abort the migration; wrap each
recordAudit(...) in a try/catch so any thrown error is caught, logged (or
otherwise noted) and suppressed so the function continues to persist the new
memory and the MemoryRelation (constructed as relation with type "supersedes",
sourceId evolved.id, targetId existing.id, createdAt now, confidence 1.0, and
saved via kv.set(KV.relations, relationId, relation) where relationId =
generateId("rel")), rather than rethrowing — keep the main kv.set operations
fatal but make audit writes non-fatal.
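The non-fatal audit pattern above can be factored into a small wrapper. This is a hedged sketch — safeAudit here is a generic helper, not the project's actual implementation:

```typescript
// Best-effort wrapper: run an audit write, log and suppress any error so the
// caller's durable kv.set writes are never rolled back by an audit failure.
async function safeAudit(write: () => Promise<void>): Promise<boolean> {
  try {
    await write();
    return true;
  } catch (err) {
    // Logging keeps the failure observable without making it fatal.
    console.warn("audit write failed (non-fatal):", err);
    return false;
  }
}
```

Call sites then replace `await recordAudit(...)` with `await safeAudit(() => recordAudit(...))`, keeping the main kv.set operations fatal while audit writes become best-effort.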

In `@src/functions/retention.ts`:
- Around line 208-231: The current retention loop treats partial failures as
overall success and performs serial deletes; change it so each candidate's two
deletes (kv.delete(candidate.sourceBucket || KV.memories, candidate.memoryId)
and kv.delete(KV.retentionScores, candidate.memoryId)) are attempted
independently and in parallel (Promise.allSettled per candidate, since
Promise.all would short-circuit on the first rejection) so one
failure doesn't mask the other, count both evicted and failed accordingly,
include failed in the returned result (e.g., return { success: failed === 0,
evicted, failed }), and keep existing ctx.logger.warn/ctx.logger.info calls
(refer to the candidates iteration, kv.delete calls, KV.memories,
KV.retentionScores, ctx.logger, and the final return) to surface partial
failures for callers to detect and retry.
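That eviction loop can be sketched with an in-memory stand-in for the kv client (the real SDK signatures differ; Promise.allSettled is used so both delete outcomes are observable even when one rejects):

```typescript
// In-memory stand-in for the kv client used only for illustration.
type Bucket = Map<string, unknown>;

async function kvDelete(bucket: Bucket, id: string): Promise<void> {
  if (!bucket.delete(id)) throw new Error(`missing ${id}`);
}

// Attempt both deletes for each candidate independently so one failure
// doesn't mask the other, and surface partial failures in the result.
async function evictCandidates(
  candidates: { memoryId: string }[],
  memories: Bucket,
  retentionScores: Bucket,
): Promise<{ success: boolean; evicted: number; failed: number }> {
  let evicted = 0;
  let failed = 0;
  for (const candidate of candidates) {
    const results = await Promise.allSettled([
      kvDelete(memories, candidate.memoryId),
      kvDelete(retentionScores, candidate.memoryId),
    ]);
    if (results.some((r) => r.status === "rejected")) failed++;
    else evicted++;
  }
  return { success: failed === 0, evicted, failed };
}
```

Returning `{ success: failed === 0, evicted, failed }` lets callers detect partial failure and retry instead of treating it as overall success.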

---

Duplicate comments:
In `@src/functions/relations.ts`:
- Around line 49-50: The nested withKeyedLock calls can deadlock when sourceId
=== targetId because it re-acquires the same lock inside itself; modify the
logic around firstId/secondId so you check if firstId === secondId and in that
case acquire a single withKeyedLock(`mem:${firstId}`) (or handle the
self-relation without nesting) instead of calling withKeyedLock twice; update
the block that currently nests withKeyedLock(`mem:${firstId}`, async () =>
withKeyedLock(`mem:${secondId}`, ...)) to branch on equality and only acquire
one lock for the identical IDs.
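A minimal reproduction of the fix, with a toy promise-chaining lock standing in for the real withKeyedLock helper:

```typescript
// Toy keyed lock: serialize work per key by chaining on the previous promise.
// Nesting the same key would deadlock, which is what the equality branch avoids.
const locks = new Map<string, Promise<unknown>>();

async function withKeyedLock<T>(key: string, fn: () => Promise<T>): Promise<T> {
  const prev = locks.get(key) ?? Promise.resolve();
  const next = prev.then(fn, fn);
  locks.set(key, next.catch(() => {}));
  return next;
}

// Branch on equality: a self-relation acquires one lock instead of
// re-acquiring the same lock inside itself.
async function withRelationLocks<T>(
  sourceId: string,
  targetId: string,
  fn: () => Promise<T>,
): Promise<T> {
  const [firstId, secondId] = [sourceId, targetId].sort();
  if (firstId === secondId) {
    return withKeyedLock(`mem:${firstId}`, fn);
  }
  return withKeyedLock(`mem:${firstId}`, () =>
    withKeyedLock(`mem:${secondId}`, fn),
  );
}
```

Sorting the IDs before acquisition also keeps lock ordering consistent across callers, which prevents the classic two-lock deadlock for distinct IDs.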

In `@src/mcp/server.ts`:
- Around line 882-885: Validate sentinelId/sketchId at the MCP handler boundary
using the existing asNonEmptyString() helper rather than forwarding args
directly: in the memory_sentinel_trigger handler (where sdk.trigger is called
with sentinelId) and in memory_sketch_promote (similarly for sketchId) call
asNonEmptyString(args.sentinelId) / asNonEmptyString(args.sketchId), and if
validation fails return a 400-style MCP error (reject/throw an error indicating
invalid/missing ID) instead of invoking sdk.trigger; this ensures invalid client
input is rejected before calling downstream functions.
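The boundary check can be sketched like this. asNonEmptyString is re-implemented here under assumed trim-and-reject semantics; the real helper lives in src/mcp/server.ts and may differ:

```typescript
// Assumed behavior of the existing helper: strings only, whitespace rejected.
function asNonEmptyString(value: unknown): string | undefined {
  return typeof value === "string" && value.trim() !== "" ? value : undefined;
}

// Validate at the MCP handler boundary and reject before triggering
// downstream functions like mem::sentinel-trigger.
function validateSentinelArgs(args: Record<string, unknown>): { sentinelId: string } {
  const sentinelId = asNonEmptyString(args.sentinelId);
  if (!sentinelId) {
    throw new Error("sentinelId must be a non-empty string");
  }
  return { sentinelId };
}
```

The memory_sketch_promote handler would apply the same pattern to `args.sketchId` before its sdk.trigger call.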

---

Nitpick comments:
In `@src/triggers/api.ts`:
- Around line 913-925: The api::team-feed endpoint parses the limit query param
with raw parseInt(...) || 20, which is inconsistent with the other endpoints and
silently coerces both NaN and an explicit 0 to 20; replace that logic in the
sdk.registerFunction handler for
"api::team-feed" to use the existing parseOptionalInt helper (like other
endpoints) to parse req.query_params["limit"] and default to 20 when
parseOptionalInt returns undefined/NaN, and apply the same pattern to the other
mentioned handlers ("api::audit", "api::frontier", "api::signal-read") so all
numeric query params use parseOptionalInt/parseOptionalFloat for uniform
behavior.
- Around line 96-130: The inline auth check using checkAuth(req, secret) inside
handlers like the sdk.registerFunction("api::health") should be removed when
those handlers are already protected by middleware_function_ids
("middleware::api-auth"); delete the authErr assignment and the early return (if
(authErr) return authErr;) from api::health and the other handlers named
api::observe, api::context, api::search, api::session::start, api::session::end,
and api::summarize; if you need to support both middleware and non-middleware
invocation, replace the unconditional checkAuth call with a conditional that
only runs when the trigger config lacks middleware_function_ids, otherwise rely
on the middleware for authentication.
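The conditional fallback described in the last sentence might look like this sketch — the checkAuth implementation and TriggerConfig shape are assumptions for illustration, not the project's actual code:

```typescript
// Assumed shapes; the real trigger config and request types live in the SDK.
interface TriggerConfig { middleware_function_ids?: string[] }
interface ApiRequest { headers: Record<string, string> }

function checkAuth(req: ApiRequest, secret: string): { status_code: number } | undefined {
  const token = req.headers["authorization"];
  return token === `Bearer ${secret}` ? undefined : { status_code: 401 };
}

// Run the inline check only when middleware-based auth is not attached,
// so middleware-protected routes are not authenticated twice.
function maybeCheckAuth(
  req: ApiRequest,
  secret: string,
  trigger: TriggerConfig,
): { status_code: number } | undefined {
  if (trigger.middleware_function_ids?.includes("middleware::api-auth")) {
    return undefined; // middleware already authenticated the request
  }
  return checkAuth(req, secret);
}
```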

In `@src/viewer/index.html`:
- Around line 2880-2893: The code declares the variable observation twice with
var in different branches; to clean this up, declare a single mutable variable
once (e.g., let observation;) before the evt.type checks and then assign
observation = evt.data.observation || evt.data in the 'event' branch and
observation = payload.observation || payload in the 'create'/'update' branch,
keeping the calls to looksLikeObservation and routeWsMessage unchanged
(functions to locate: looksLikeObservation and routeWsMessage).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 430ba503-8b70-4b20-b428-22b5fa0a47ae

📥 Commits

Reviewing files that changed from the base of the PR and between 056d9be and 3e3ac7e.

📒 Files selected for processing (15)
  • src/cli.ts
  • src/functions/audit.ts
  • src/functions/governance.ts
  • src/functions/leases.ts
  • src/functions/mesh.ts
  • src/functions/obsidian-export.ts
  • src/functions/relations.ts
  • src/functions/retention.ts
  • src/functions/sketches.ts
  • src/functions/summarize.ts
  • src/mcp/server.ts
  • src/triggers/api.ts
  • src/types.ts
  • src/viewer/index.html
  • test/retention.test.ts
🚧 Files skipped from review as they are similar to previous changes (7)
  • src/functions/obsidian-export.ts
  • src/types.ts
  • test/retention.test.ts
  • src/functions/leases.ts
  • src/functions/sketches.ts
  • src/functions/mesh.ts
  • src/cli.ts

Validate and sanitize session, observe, and context inputs plus numeric query params before forwarding to memory functions, returning 400 for invalid values instead of propagating malformed payloads. Also remove duplicate CLI command registration introduced during merge resolution.
…ound 2)

CodeRabbit's second review pass on #116 flagged 6 real issues introduced
by the previous round of fixes (commit 3e3ac7e). Each was verified
against the current code before applying.

### Fixed

- src/functions/governance.ts — mem::governance-bulk switched to
  Promise.allSettled in the previous round but fanned out every
  kv.delete concurrently. The governance endpoint can target tens of
  thousands of memories in a single call; firing them all at once
  overwhelms the state worker. Batched with BATCH_SIZE=50 chunks,
  awaited serially while preserving the per-batch allSettled so a
  single failure doesn't abort the rest.
- src/functions/governance.ts — the recordAudit call AFTER the bulk
  delete is already committed was still using `await recordAudit`, so
  an audit write failure would bubble up as a "failed" response even
  though the rows were actually deleted. Switched to safeAudit so the
  deletes are reflected faithfully in the response regardless of
  audit health.
- src/functions/relations.ts — the mem::relate handler was emitting
  three recordAudit calls INTERLEAVED with three kv.set writes:
  relation → audit → source update → audit → target update → audit.
  A thrown audit in the middle would leave KV.relations and
  source/target.relatedIds in a partial state. Restructured to
  complete all three durable writes first, then emit a single
  safeAudit at the end. Also changed the operation from the generic
  "evolve" (which made audit queries indistinguishable from actual
  version evolutions) to a new "relation_create" event, added to the
  AuditEntry.operation union in src/types.ts.
- src/functions/retention.ts — mem::retention-score used
  `{ ...DEFAULT_DECAY, ...data.config }` which is only a shallow
  merge. A caller passing `{ config: { tierThresholds: { cold: 0.2 } } }`
  would drop the hot and warm thresholds, causing every downstream
  `s.score >= config.tierThresholds.hot` comparison to evaluate
  against undefined and come out false, misclassifying every tier
  bucket. Deep-merged tierThresholds explicitly.
- src/functions/retention.ts — mem::retention-evict previously
  used `candidate.sourceBucket || KV.memories` as the delete bucket.
  That's correct for new retention scores (which now store
  sourceBucket) but wrong for pre-migration scores whose sourceBucket
  is undefined: the fallback would try to delete semantic ids from
  KV.memories and silently no-op, leaving the real semantic memory
  alive. Fixed by splitting the delete path: if sourceBucket is set,
  use it; otherwise attempt both KV.memories and KV.semantic with
  individual .catch(() => {}) wrappers so whichever bucket actually
  contains the id succeeds and the other is a no-op. Legacy rows
  retire naturally on the next scoring run, which writes sourceBucket.
- src/triggers/api.ts — api::obsidian-export was validated at the
  downstream mem::obsidian-export layer in the previous round, but
  the API boundary handler was still forwarding body.vaultDir without
  a type check. Added the same 400 response pattern used by
  api::search / api::observe / api::context for consistency.
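The bounded fan-out from the first fixed item (per-batch allSettled, batches awaited serially) can be sketched as follows — deleteOne stands in for the real kv.delete:

```typescript
// BATCH_SIZE matches the commit message; chunk the ids, await each chunk
// serially, and use allSettled per chunk so one failed delete doesn't
// abort the rest or overwhelm the state worker.
const BATCH_SIZE = 50;

async function bulkDelete(
  ids: string[],
  deleteOne: (id: string) => Promise<void>,
): Promise<{ deleted: string[]; failed: string[] }> {
  const deleted: string[] = [];
  const failed: string[] = [];
  for (let i = 0; i < ids.length; i += BATCH_SIZE) {
    const batch = ids.slice(i, i + BATCH_SIZE);
    const results = await Promise.allSettled(batch.map((id) => deleteOne(id)));
    results.forEach((r, j) =>
      (r.status === "fulfilled" ? deleted : failed).push(batch[j]),
    );
  }
  return { deleted, failed };
}
```

At most BATCH_SIZE deletes are in flight at any moment, while a single rejection is recorded in `failed` rather than aborting the remaining batches.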

### Skipped from CodeRabbit's suggestions

- CodeRabbit's proposed fix for retention.ts:249 was
  `candidate.sourceBucket || KV.retentionScores` which would try to
  delete the memory from the retention scores namespace — that's
  where the scoring entry lives, not where the memory lives. That
  fix is wrong. The two-bucket fallback approach above is the
  correct one.

Tests: 655/655. Build clean.

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/functions/retention.ts (1)

217-225: ⚠️ Potential issue | 🟠 Major

Validate threshold and maxEvict before selecting candidates.

These values come from the runtime payload. maxEvict = -1 turns slice(0, -1) into “everything except the last candidate”, and the current reads still assume the payload object exists.

Proposed fix
-      const threshold = data.threshold ?? DEFAULT_DECAY.tierThresholds.cold;
-      const maxEvict = data.maxEvict ?? 50;
+      const payload = data ?? {};
+      const threshold = payload.threshold ?? DEFAULT_DECAY.tierThresholds.cold;
+      const maxEvict = payload.maxEvict ?? 50;
+      if (!Number.isFinite(threshold)) {
+        throw new Error("threshold must be finite");
+      }
+      if (!Number.isInteger(maxEvict) || maxEvict < 0) {
+        throw new Error("maxEvict must be a non-negative integer");
+      }

As per coding guidelines, src/functions/**/*.ts: TypeScript functions must use sdk.registerFunction() pattern with input validation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/functions/retention.ts` around lines 217 - 225, In the
mem::retention-evict handler registered via sdk.registerFunction, validate the
incoming payload fields before using them: ensure data exists and that threshold
is a finite number (fallback to DEFAULT_DECAY.tierThresholds.cold when missing
or invalid) and ensure maxEvict is a non-negative integer (fallback to 50 when
missing, NaN, negative, or not an integer); then clamp maxEvict to a safe upper
bound before using it in slice so slice(0, maxEvict) is never called with a
negative value; update any downstream assumptions that data exists to use the
validated local variables instead of raw data.threshold/data.maxEvict.
♻️ Duplicate comments (3)
src/triggers/api.ts (1)

2058-2066: ⚠️ Potential issue | 🟡 Minor

Missing type validation for vaultDir parameter.

The endpoint parses types but passes body.vaultDir without validating it's a string. If the caller provides a non-string or omits it, the downstream handler receives an invalid value.

🛡️ Proposed fix: Validate vaultDir
 sdk.registerFunction("api::obsidian-export",  async (req: ApiRequest) => {
   const denied = checkAuth(req, secret);
   if (denied) return denied;
   const body = (req.body as Record<string, unknown>) || {};
+  if (body.vaultDir !== undefined && typeof body.vaultDir !== "string") {
+    return { status_code: 400, body: { error: "vaultDir must be a string" } };
+  }
   const types = typeof body.types === "string" ? body.types.split(",").map((t: string) => t.trim()).filter(Boolean) : undefined;
-  const result = await sdk.trigger({ function_id: "mem::obsidian-export", payload: { vaultDir: body.vaultDir, types } });
+  const result = await sdk.trigger({ function_id: "mem::obsidian-export", payload: { vaultDir: body.vaultDir as string | undefined, types } });
   return { status_code: 200, body: result };
 });

As per coding guidelines: {src/mcp/server.ts,src/triggers/api.ts}: Input validation must occur at system boundaries (MCP handlers, REST endpoints)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/triggers/api.ts` around lines 2058 - 2066, The api::obsidian-export
handler registers via sdk.registerFunction and currently passes body.vaultDir
unchecked into sdk.trigger; add input validation in that function: ensure
body.vaultDir exists, is a string (e.g., typeof body.vaultDir === "string"), and
optionally trim/check non-empty, and if invalid return a 400 response (e.g., {
status_code: 400, body: { error: "vaultDir must be a non-empty string" } });
only pass the validated vaultDir value into the payload of sdk.trigger (the
mem::obsidian-export call) so downstream handlers receive a guaranteed string.
src/functions/retention.ts (2)

249-250: ⚠️ Potential issue | 🟠 Major

Legacy retention rows still evict semantic entries from the wrong bucket.

sourceBucket is only guaranteed for newly recomputed scores. For pre-migration rows, candidate.sourceBucket || KV.memories deletes the retention row but leaves the semantic record behind in KV.semantic.

Proposed fix
-          await kv.delete(candidate.sourceBucket || KV.memories, candidate.memoryId);
+          const sourceBucket =
+            candidate.sourceBucket ??
+            ((await kv.get<SemanticMemory>(KV.semantic, candidate.memoryId).catch(() => null))
+              ? KV.semantic
+              : KV.memories);
+          await kv.delete(sourceBucket, candidate.memoryId);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/functions/retention.ts` around lines 249 - 250, The retention cleanup
currently deletes the semantic entry from candidate.sourceBucket || KV.memories
which is wrong for legacy rows that live in KV.semantic; update the kv.delete
call that removes the semantic record to use candidate.sourceBucket ||
KV.semantic (i.e., default to KV.semantic when candidate.sourceBucket is
missing) while keeping the kv.delete(KV.retentionScores, candidate.memoryId)
line intact; locate the deletion logic around the candidate variable in
retention.ts and change the bucket fallback used when calling kv.delete for the
semantic entry.

90-93: ⚠️ Potential issue | 🟠 Major

Deep-merge and validate config before using it.

Line 93 still does a shallow merge and assumes the payload is present. A runtime payload that only overrides tierThresholds.cold drops hot/warm, and a missing payload throws before defaults apply.

Proposed fix
-      const config = { ...DEFAULT_DECAY, ...data.config };
+      const payload = data ?? {};
+      const overrides = payload.config ?? {};
+      const config: DecayConfig = {
+        ...DEFAULT_DECAY,
+        ...overrides,
+        tierThresholds: {
+          ...DEFAULT_DECAY.tierThresholds,
+          ...overrides.tierThresholds,
+        },
+      };
+      if (
+        !Number.isFinite(config.lambda) ||
+        !Number.isFinite(config.sigma) ||
+        !Number.isFinite(config.tierThresholds.hot) ||
+        !Number.isFinite(config.tierThresholds.warm) ||
+        !Number.isFinite(config.tierThresholds.cold)
+      ) {
+        throw new Error("Invalid decay config");
+      }

As per coding guidelines, src/functions/**/*.ts: TypeScript functions must use sdk.registerFunction() pattern with input validation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/functions/retention.ts` around lines 90 - 93, The shallow merge at
mem::retention-score (using { ...DEFAULT_DECAY, ...data.config }) can drop
nested fields and crash when data.config is missing; change to perform a deep
merge and validate the resulting DecayConfig before use: create or reuse a
deepMerge helper (or use lodash.merge) to merge DEFAULT_DECAY with data.config,
then run a small validator/type-guard that checks required nested fields (e.g.,
tierThresholds.hot/warm/cold and numeric types) and either populate missing
nested defaults from DEFAULT_DECAY or throw/return a clear validation error;
update the sdk.registerFunction handler to use the merged+validated config
variable everywhere instead of the shallow-merged one.
🧹 Nitpick comments (6)
src/cli.ts (1)

715-720: Consider simplifying the requireSuccess helper.

The helper logs an error but delegates exit to the caller. Since every call site immediately does return process.exit(1) on failure, consider having requireSuccess call process.exit(1) directly or convert it to a simple inline check, reducing boilerplate.

Current pattern at each call site:

if (!requireSuccess(installOk, "pnpm install")) return process.exit(1);

This is functional but slightly verbose.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/cli.ts` around lines 715 - 720, The helper requireSuccess currently logs
an error but leaves exiting to each caller, which creates repetitive call sites;
update requireSuccess to call process.exit(1) directly when ok is false (so
callers simply call requireSuccess(installOk, "pnpm install") without an extra
return/process.exit), or alternatively remove requireSuccess and replace the
callers with a concise inline check that both logs and exits; locate the helper
named requireSuccess and all its call sites (e.g., where installOk is passed)
and implement the chosen approach to eliminate the duplicated "if
(!requireSuccess(...)) return process.exit(1)" pattern.
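A sketch of the simplified helper (hypothetical shape; the real requireSuccess in src/cli.ts may log differently):

```typescript
// Exit directly on failure so every call site collapses to a single line:
//   requireSuccess(installOk, "pnpm install");
// instead of: if (!requireSuccess(installOk, "pnpm install")) return process.exit(1);
function requireSuccess(ok: boolean, step: string): void {
  if (!ok) {
    console.error(`error: ${step} failed`);
    process.exit(1);
  }
}
```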
src/functions/governance.ts (1)

115-118: Hide backend exception text behind a stable failure reason.

This error value is later stored in details.failures and returned to callers. Serializing raw exception messages here leaks lower-level storage/runtime details and makes them part of the public API contract. Prefer a stable public reason like delete_failed, and keep the original exception out of the response payload.

💡 Minimal change
           failures.push({
             id: candidates[i].id,
-            error: result.reason instanceof Error ? result.reason.message : String(result.reason),
+            error: "delete_failed",
           });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/functions/governance.ts` around lines 115 - 118, The code is exposing raw
exception text in the public response by pushing result.reason into failures;
change the payload to use a stable public reason (e.g. "delete_failed") instead
of the raw message when constructing failures (the block that pushes into
failures for candidates[i].id and result.reason), and ensure the original
exception (result.reason) is logged internally (not returned) for diagnostics;
update the entry to set error: "delete_failed" (or another stable token) and
move any detailed error logging to a logger call nearby.
src/triggers/api.ts (4)

18-28: Unused helper functions.

parseOptionalInt and parseOptionalFloat are defined but never used in this file. The codebase uses parseOptionalPositiveInt and parseOptionalFiniteNumber instead.

♻️ Proposed fix: Remove dead code
-function parseOptionalInt(raw: unknown): number | undefined {
-  if (raw === undefined || raw === null || raw === "") return undefined;
-  const n = typeof raw === "number" ? raw : parseInt(String(raw), 10);
-  return Number.isFinite(n) ? n : undefined;
-}
-
-function parseOptionalFloat(raw: unknown): number | undefined {
-  if (raw === undefined || raw === null || raw === "") return undefined;
-  const n = typeof raw === "number" ? raw : parseFloat(String(raw));
-  return Number.isFinite(n) ? n : undefined;
-}
-
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/triggers/api.ts` around lines 18 - 28, Remove the dead helper functions
parseOptionalInt and parseOptionalFloat from this file; they are unused and
duplicate functionality provided by parseOptionalPositiveInt and
parseOptionalFiniteNumber, so simply delete the two function declarations
(parseOptionalInt and parseOptionalFloat) to eliminate dead code and keep the
file focused on the used parsers.

388-399: Query parameter lacks type validation.

Line 392 casts req.query_params["sessionId"] as string without verifying it's actually a string. Query params can be arrays or undefined.

♻️ Proposed fix: Use asNonEmptyString for consistency
 sdk.registerFunction("api::observations", 
   async (req: ApiRequest): Promise<Response> => {
     const authErr = checkAuth(req, secret);
     if (authErr) return authErr;
-    const sessionId = req.query_params["sessionId"] as string;
-    if (!sessionId)
+    const sessionId = asNonEmptyString(req.query_params?.["sessionId"]);
+    if (!sessionId) {
       return { status_code: 400, body: { error: "sessionId required" } };
+    }
     const observations = await kv.list<CompressedObservation>(
       KV.observations(sessionId),
     );
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/triggers/api.ts` around lines 388 - 399, In the api::observations handler
registered via sdk.registerFunction, validate req.query_params["sessionId"]
instead of blind-casting: use the shared utility asNonEmptyString (or
equivalent) to ensure sessionId is a non-empty string, return a 400 error if
validation fails, and only then proceed to call KV.observations(sessionId) and
kv.list; update the sessionId assignment to use the validated value and preserve
the existing auth check via checkAuth.

356-362: Missing payload whitelisting for api::summarize.

The handler passes req.body directly to sdk.trigger without validating or whitelisting fields. The sessionId should be validated similar to other session endpoints.

♻️ Proposed fix: Validate and whitelist sessionId
 sdk.registerFunction("api::summarize", 
   async (req: ApiRequest<{ sessionId: string }>): Promise<Response> => {
     const authErr = checkAuth(req, secret);
     if (authErr) return authErr;
-    const result = await sdk.trigger({ function_id: "mem::summarize", payload: req.body });
+    const sessionId = asNonEmptyString((req.body as Record<string, unknown>)?.sessionId);
+    if (!sessionId) {
+      return {
+        status_code: 400,
+        body: { error: "sessionId is required and must be a non-empty string" },
+      };
+    }
+    const result = await sdk.trigger({ function_id: "mem::summarize", payload: { sessionId } });
     return { status_code: 200, body: result };
   },
 );

As per coding guidelines: src/triggers/api.ts: REST endpoint handlers must validate inputs, whitelist request body fields, and never pass raw request body directly to sdk.trigger()

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/triggers/api.ts` around lines 356 - 362, The api::summarize handler
currently forwards req.body to sdk.trigger without validation; update the
sdk.registerFunction("api::summarize") handler to extract and whitelist only
sessionId from req.body (e.g., const { sessionId } = req.body), validate it the
same way other session endpoints do (presence, type, expected format/length),
return a 400-style error Response when invalid, and call sdk.trigger with a
payload object containing just sessionId (not the raw req.body); keep
checkAuth(req, secret) as-is and ensure error paths return the same Response
shape as other endpoints.

88-107: Redundant authentication: middleware and handler both check auth.

The middleware::api-auth function performs auth validation, but endpoints like api::health (and others with middleware_function_ids) also call checkAuth(req, secret) in their handlers. This results in double authentication checks.

Either:

  1. Remove checkAuth calls from handlers that use the middleware, or
  2. Remove middleware_function_ids if you prefer handler-level auth

Also applies to: 147-155

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/triggers/api.ts` around lines 88 - 107, The middleware
"middleware::api-auth" already enforces auth, so remove redundant handler-level
auth checks: locate handlers that list middleware_function_ids including
"middleware::api-auth" (e.g., the api::health handler referenced in the comment)
and delete their calls to checkAuth(req, secret) and any immediate early
unauthorized responses; also remove any now-unused imports/variables tied solely
to checkAuth in those handlers. If you prefer handler-level auth instead, do the
inverse: remove "middleware::api-auth" from the middleware_function_ids for
those endpoints (and keep checkAuth there), but do not keep both; ensure no
behavioral changes beyond removing the duplicate check.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/cli.ts`:
- Around line 49-52: Remove the duplicated example entry for "npx
`@agentmemory/agentmemory` upgrade" in the CLI help output: locate the help/usage
string or array used to render examples (e.g., the variable named helpText,
usageExamples, or the showHelp/printUsage function in src/cli.ts) and delete the
repeated line so the upgrade example appears only once.

In `@src/functions/evict.ts`:
- Around line 72-78: The code currently swallows kv.delete(...) errors with
.catch(() => {}) and then always calls recordAudit(...), causing audits/logs to
record failed deletes; change each occurrence (e.g. the session branch with
kv.delete(KV.sessions, session.id) followed by recordAudit(kv, "delete",
"mem::evict", [session.id], ...)) to await the delete and only call recordAudit
when the delete actually succeeds (handle failures by logging or propagating the
error instead of swallowing it). Apply the same pattern to the observation and
memory branches where deletes and subsequent cleanup/audit happen so access-log
cleanup, stats, and audits only run on successful deletes.

In `@src/functions/file-index.ts`:
- Around line 35-42: The current guard in file-index.ts wrongly treats missing
sessionId as invalid and returns an empty result; change the logic in the
function (around the if-block that calls recordAudit and returns { context: "",
files: [] }) so that it only rejects when files.length === 0, not when sessionId
is falsy, and only records an "invalid_payload" audit when files is empty;
additionally adjust any later logic that filters out the current session (the
code that excludes results based on sessionId) to perform that exclusion only if
sessionId is present. Apply the same change to the other similar guard
referenced in the comment (the check at the later location around line 46) so
missing sessionId is allowed but file list is still required.
- Around line 47-48: Normalize and validate the incoming project input before
using it to filter sessions: in the function that contains otherSessions =
otherSessions.filter((s) => s.project === data.project) (in
src/functions/file-index.ts), perform a type check (ensure project is a string)
and trim whitespace (e.g., const normalizedProject = typeof data.project ===
'string' ? data.project.trim() : undefined) and then use normalizedProject in
the filter; also update the sdk.registerFunction() input validation to
validate/normalize the project field so callers cannot pass whitespace-only or
non-string values that would silently exclude sessions.

In `@src/functions/governance.ts`:
- Around line 139-144: The return value currently always sets success: true even
when failures exist; change success to reflect actual outcome (e.g., success:
failures.length === 0) or add an explicit partialSuccess/partialFailure field so
callers can distinguish full success from partial/total failure; update the
return object that uses successfulIds and failures (the block returning {
success, deleted, failed, failures }) to compute success from failures.length
and keep deleted: successfulIds.length and failed: failures.length unchanged.

In `@src/functions/retention.ts`:
- Around line 247-252: The retention eviction loop currently deletes entries but
doesn't emit audit records; add an import for recordAudit at file scope and,
inside the successful deletion branch in the for-loop (where
deleteAccessLog(...) and evicted++ happen), call recordAudit(kv,
'mem::retention-evict', { memoryId: candidate.memoryId, bucket:
candidate.sourceBucket || KV.memories }) (or the project's standard audit
payload) so each eviction is recorded; ensure the call executes only on
successful delete and uses the same KV instance and candidate identifiers used
by deleteAccessLog and the other deletes.

---

Outside diff comments:
In `@src/functions/retention.ts`:
- Around line 217-225: In the mem::retention-evict handler registered via
sdk.registerFunction, validate the incoming payload fields before using them:
ensure data exists and that threshold is a finite number (fallback to
DEFAULT_DECAY.tierThresholds.cold when missing or invalid) and ensure maxEvict
is a non-negative integer (fallback to 50 when missing, NaN, negative, or not an
integer); then clamp maxEvict to a safe upper bound before using it in slice so
slice(0, maxEvict) is never called with a negative value; update any downstream
assumptions that data exists to use the validated local variables instead of raw
data.threshold/data.maxEvict.

---

Duplicate comments:
In `@src/functions/retention.ts`:
- Around line 249-250: The retention cleanup currently deletes the semantic
entry from candidate.sourceBucket || KV.memories which is wrong for legacy rows
that live in KV.semantic; update the kv.delete call that removes the semantic
record to use candidate.sourceBucket || KV.semantic (i.e., default to
KV.semantic when candidate.sourceBucket is missing) while keeping the
kv.delete(KV.retentionScores, candidate.memoryId) line intact; locate the
deletion logic around the candidate variable in retention.ts and change the
bucket fallback used when calling kv.delete for the semantic entry.
- Around lines 90-93: The shallow merge in mem::retention-score ({
...DEFAULT_DECAY, ...data.config }) can drop nested fields and crashes when
data.config is missing. Perform a deep merge instead and validate the
resulting DecayConfig before use: create or reuse a deepMerge helper (or use
lodash.merge) to merge DEFAULT_DECAY with data.config, then run a small
validator/type-guard that checks the required nested fields (e.g.,
tierThresholds.hot/warm/cold and numeric types) and either populates missing
nested defaults from DEFAULT_DECAY or throws/returns a clear validation error.
Use the merged and validated config throughout the sdk.registerFunction
handler instead of the shallow-merged one.
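A sketch of the merge-plus-guard, assuming a DecayConfig shape with a nested tierThresholds object (halfLifeDays and the concrete default values are placeholders, not the project's real config):

```typescript
interface DecayConfig {
  halfLifeDays: number;
  tierThresholds: { hot: number; warm: number; cold: number };
}

// Placeholder defaults; the real DEFAULT_DECAY lives in the project.
const DEFAULT_DECAY: DecayConfig = {
  halfLifeDays: 30,
  tierThresholds: { hot: 0.8, warm: 0.5, cold: 0.2 },
};

function mergeDecayConfig(override?: {
  halfLifeDays?: number;
  tierThresholds?: Partial<DecayConfig["tierThresholds"]>;
}): DecayConfig {
  const merged: DecayConfig = {
    ...DEFAULT_DECAY,
    ...override,
    // Deep-merge the nested object so a partial tierThresholds cannot drop keys.
    tierThresholds: { ...DEFAULT_DECAY.tierThresholds, ...override?.tierThresholds },
  };
  // Type-guard the nested numeric fields before the handler uses them.
  for (const key of ["hot", "warm", "cold"] as const) {
    const v = merged.tierThresholds[key];
    if (typeof v !== "number" || !Number.isFinite(v)) {
      throw new Error(`invalid DecayConfig: tierThresholds.${key}`);
    }
  }
  return merged;
}
```

An undefined override simply yields the defaults, which is exactly the case that crashes the shallow version.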

In `@src/triggers/api.ts`:
- Around lines 2058-2066: The api::obsidian-export handler registered via
sdk.registerFunction passes body.vaultDir unchecked into sdk.trigger. Add
input validation: ensure body.vaultDir exists and is a non-empty string
(typeof body.vaultDir === "string", optionally trimmed), and return a 400
response when it is not (e.g., { status_code: 400, body: { error: "vaultDir
must be a non-empty string" } }). Pass only the validated vaultDir into the
payload of the mem::obsidian-export sdk.trigger call so downstream handlers
receive a guaranteed string.

---

Nitpick comments:
In `@src/cli.ts`:
- Around lines 715-720: The helper requireSuccess logs an error but leaves
exiting to each caller, which creates repetitive call sites. Either have
requireSuccess call process.exit(1) directly when ok is false (so callers
simply write requireSuccess(installOk, "pnpm install") with no extra
return/process.exit), or remove requireSuccess and replace the callers with a
concise inline check that both logs and exits. Apply the chosen approach to
requireSuccess and all its call sites (e.g., where installOk is passed) to
eliminate the duplicated "if (!requireSuccess(...)) return process.exit(1)"
pattern.
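The exit-on-failure variant might look like this; the decision is split into a pure helper so the non-exiting path stays testable (the function names beyond requireSuccess are hypothetical):

```typescript
// Pure decision + logging; returns true when the process should abort.
function shouldAbort(ok: boolean, step: string, log: (msg: string) => void): boolean {
  if (!ok) {
    log(`error: ${step} failed`);
    return true;
  }
  return false;
}

// Callers now just write: requireSuccess(installOk, "pnpm install");
function requireSuccess(ok: boolean, step: string): void {
  if (shouldAbort(ok, step, (m) => console.error(m))) {
    process.exit(1);
  }
}
```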

In `@src/functions/governance.ts`:
- Around lines 115-118: The code exposes raw exception text in the public
response by pushing result.reason into failures. When constructing the
failures entry for candidates[i].id, use a stable public reason (e.g.,
"delete_failed") instead of the raw message, and log the original
result.reason internally via a nearby logger call for diagnostics rather than
returning it.
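One way to split the two concerns (the failures/candidates names come from the comment above; the injected log callback is an assumption standing in for the project's logger):

```typescript
interface Failure {
  id: string;
  error: string;
}

function recordDeleteFailure(
  failures: Failure[],
  candidateId: string,
  rawReason: unknown,
  log: (msg: string) => void,
): void {
  // Raw exception text stays in internal diagnostics only.
  log(`delete failed for ${candidateId}: ${String(rawReason)}`);
  // The public response carries a stable token, never the raw message.
  failures.push({ id: candidateId, error: "delete_failed" });
}
```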

In `@src/triggers/api.ts`:
- Around line 18-28: Remove the dead helper functions parseOptionalInt and
parseOptionalFloat from this file; they are unused and duplicate functionality
provided by parseOptionalPositiveInt and parseOptionalFiniteNumber, so simply
delete the two function declarations (parseOptionalInt and parseOptionalFloat)
to eliminate dead code and keep the file focused on the used parsers.
- Around lines 388-399: In the api::observations handler registered via
sdk.registerFunction, validate req.query_params["sessionId"] instead of
blind-casting: use the shared asNonEmptyString utility (or equivalent) to
ensure sessionId is a non-empty string, return a 400 error if validation
fails, and only then call KV.observations(sessionId) and kv.list. Use the
validated value for the sessionId assignment and preserve the existing
checkAuth check.
- Around lines 356-362: The api::summarize handler forwards req.body to
sdk.trigger without validation. In the sdk.registerFunction("api::summarize")
handler, extract and whitelist only sessionId from req.body (e.g., const {
sessionId } = req.body), validate it the same way other session endpoints do
(presence, type, expected format/length), and return a 400-style error
Response when it is invalid. Call sdk.trigger with a payload containing just
sessionId, not the raw req.body. Keep checkAuth(req, secret) as-is and make
error paths return the same Response shape as other endpoints.
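Both comments above lean on the same guard; a sketch of the shared pieces (asNonEmptyString is named in the review, but its exact signature here is an assumption, as is extractSessionId):

```typescript
// Shared guard: returns the string only when it is non-empty after trimming.
function asNonEmptyString(value: unknown): string | undefined {
  return typeof value === "string" && value.trim().length > 0 ? value : undefined;
}

// Whitelist only sessionId from an untrusted body, as suggested for api::summarize.
function extractSessionId(
  body: Record<string, unknown>,
): { sessionId: string } | { error: string } {
  const sessionId = asNonEmptyString(body["sessionId"]);
  return sessionId ? { sessionId } : { error: "sessionId must be a non-empty string" };
}
```

The handler would then trigger with the returned `{ sessionId }` object, so nothing else from req.body can leak downstream.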
- Around lines 88-107: The middleware "middleware::api-auth" already enforces
auth, so the handler-level checks are redundant. For handlers whose
middleware_function_ids include "middleware::api-auth" (e.g., the api::health
handler referenced in the comment), delete the checkAuth(req, secret) calls
and their immediate early unauthorized responses, along with any imports or
variables used solely by checkAuth in those handlers. If you prefer
handler-level auth instead, do the inverse: remove "middleware::api-auth" from
those endpoints' middleware_function_ids and keep checkAuth there. Either way,
do not keep both, and ensure no behavioral changes beyond removing the
duplicate check.
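The single-source-of-truth wiring can be illustrated with a toy registry; this assumes the v0.11 registerFunction(id, handler, options) shape described in this PR, and the registry here is a stand-in, not the real iii-sdk:

```typescript
type Handler = (req: unknown) => unknown;
interface RegisterOptions {
  middleware_function_ids?: string[];
}

// Stand-in for the SDK's registration machinery.
const registry = new Map<string, { handler: Handler; options?: RegisterOptions }>();

function registerFunction(id: string, handler: Handler, options?: RegisterOptions): void {
  registry.set(id, { handler, options });
}

// Auth enforced once, in middleware -- the handler body contains no checkAuth call.
registerFunction(
  "api::health",
  () => ({ status_code: 200, body: { ok: true } }),
  { middleware_function_ids: ["middleware::api-auth"] },
);
```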

📥 Commits

Reviewing files that changed from the base of the PR and between 3e3ac7e and 3f99449.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (20)
  • README.md
  • package.json
  • src/cli.ts
  • src/functions/auto-forget.ts
  • src/functions/context.ts
  • src/functions/evict.ts
  • src/functions/export-import.ts
  • src/functions/file-index.ts
  • src/functions/governance.ts
  • src/functions/relations.ts
  • src/functions/remember.ts
  • src/functions/retention.ts
  • src/functions/smart-search.ts
  • src/functions/snapshot.ts
  • src/functions/timeline.ts
  • src/functions/working-memory.ts
  • src/triggers/api.ts
  • src/types.ts
  • test/export-import.test.ts
  • test/smart-search.test.ts
✅ Files skipped from review due to trivial changes (7)
  • package.json
  • test/smart-search.test.ts
  • README.md
  • src/functions/export-import.ts
  • src/functions/snapshot.ts
  • src/functions/working-memory.ts
  • src/functions/context.ts
🚧 Files skipped from review as they are similar to previous changes (7)
  • src/types.ts
  • src/functions/timeline.ts
  • src/functions/remember.ts
  • test/export-import.test.ts
  • src/functions/auto-forget.ts
  • src/functions/smart-search.ts
  • src/functions/relations.ts

…com/rohitg00/agentmemory into feat/iii-v011-migration-upgrade-cmd

# Conflicts:
#	src/functions/governance.ts
#	src/functions/relations.ts
Bound retention/governance side effects and make audit logging best-effort so successful state writes are not masked by telemetry failures. Also tighten boundary validation for MCP/API inputs, resolve relation lock edge cases, and align retention test mocks with v0.11 trigger/registerFunction call shapes.
Resolve pull conflicts by preserving bounded governance deletes, best-effort auditing, relation lock/audit hardening, retention validation and eviction error accounting, strict obsidian export input checks, and v0.11-compatible retention test mocks.
Remove duplicate CLI help examples, harden eviction/delete bookkeeping, allow file-context without sessionId, sanitize governance failure output, validate retention-evict inputs with bounded limits, and tighten summarize/observations/obsidian-export input validation.
Use the engine-compatible stream payload key `type` for stream::send events and update docs/plugin examples to reflect v0.11 function registration and trigger request shapes.
….11.0

Complete the origin/main merge, resolve observe auto-compress path conflicts to v0.11 trigger semantics, bump iii-sdk from next to stable 0.11.0, align docs/changelog guidance, and update tests/mocks for stable SDK behavior.
Resolve merge conflicts in retention/changelog/types/tests by keeping the latest mainline retention-source routing and audit coverage while preserving the stable iii-sdk v0.11 migration updates and trigger-shape compatibility.
@rohitg00 rohitg00 merged commit 5d70ecf into main Apr 15, 2026
2 checks passed
rohitg00 added a commit that referenced this pull request Apr 15, 2026
…ility

iii-sdk v0.11 dropped getContext() entirely. 32 src/functions/*.ts files still
imported and called it, crashing node dist/index.mjs on first import with:
  SyntaxError: module 'iii-sdk' does not provide an export named 'getContext'

- Add src/logger.ts: thin stderr shim (info/warn/error) with identical call signature
- Remove getContext import and const ctx = getContext() from all 32 function files
- Replace ctx.logger.* with logger.* throughout
- Fix search.ts registerFunction({ id: ... }) -> registerFunction('id') for v0.11 API
- Update 45 test mock blocks from vi.mock("iii-sdk") to vi.mock("../src/logger.js")
- Bump to v0.8.11

Tests: 731/731 pass. Binary starts cleanly. Closes #116.
rohitg00 added a commit that referenced this pull request Apr 15, 2026
* fix: replace getContext() with logger shim for iii-sdk v0.11 compatibility

iii-sdk v0.11 dropped getContext() entirely. 32 src/functions/*.ts files still
imported and called it, crashing node dist/index.mjs on first import with:
  SyntaxError: module 'iii-sdk' does not provide an export named 'getContext'

- Add src/logger.ts: thin stderr shim (info/warn/error) with identical call signature
- Remove getContext import and const ctx = getContext() from all 32 function files
- Replace ctx.logger.* with logger.* throughout
- Fix search.ts registerFunction({ id: ... }) -> registerFunction('id') for v0.11 API
- Update 45 test mock blocks from vi.mock("iii-sdk") to vi.mock("../src/logger.js")
- Bump to v0.8.11

Tests: 731/731 pass. Binary starts cleanly. Closes #116.

* fix: bump version to 0.8.11 in plugin.json, export-import supported versions, and test
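The follow-up commits describe src/logger.ts as a thin stderr shim with info/warn/error and an identical call signature to the dropped ctx.logger; a sketch under those constraints (the level-prefix formatting is an assumption, and the real module would export `logger`):

```typescript
type LogFn = (...args: unknown[]) => void;

// Each level writes one line to stderr so stdout stays untouched.
function toStderr(level: string): LogFn {
  return (...args: unknown[]) => {
    process.stderr.write(`[${level}] ${args.map(String).join(" ")}\n`);
  };
}

// Drop-in for the removed ctx.logger.* call sites: logger.info(...), logger.warn(...), logger.error(...).
const logger: { info: LogFn; warn: LogFn; error: LogFn } = {
  info: toStderr("info"),
  warn: toStderr("warn"),
  error: toStderr("error"),
};
```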
