
feat: v0.7.0 — lessons, tool visibility, auto-consolidation, obsidian export, npx bootstrap#82

Merged
rohitg00 merged 7 commits into main from feat/v0.7.0-dx-improvements
Apr 4, 2026
Conversation


@rohitg00 rohitg00 commented Apr 4, 2026

Summary

Five DX improvements based on competitive research against Mem0 (48K stars), Engram (80% LOCOMO), CodeMem, and 30+ other agent memory tools:

  • npx agentmemory zero-config startup — CLI bootstrap auto-detects and starts iii-engine (tries iii binary, falls back to Docker)
  • Simplified MCP surface — 7 core tools by default, AGENTMEMORY_TOOLS=all unlocks all 49. Call handler remains unfiltered
  • Auto-consolidation on session end — CONSOLIDATION_ENABLED defaults to true. Session-end hook: end → crystallize → consolidate → bridge sync
  • First-class lesson memory type — Confidence-scored lessons (0.0-1.0) with dedup, reinforcement, and daily decay sweep. Crystal lessons auto-flow at confidence 0.6
  • Obsidian Markdown export — YAML frontmatter + wikilinks to ~/.agentmemory/vault/, with MOC.md index. Auto-export via OBSIDIAN_AUTO_EXPORT=true
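The default-visibility behavior above can be pictured as a filter over the tool registry. A minimal sketch, not the shipped code — `getVisibleTools`, `ESSENTIAL_TOOLS`, and `AGENTMEMORY_TOOLS` come from this PR, while `ALL_TOOLS` and the sample tool entries are illustrative placeholders:

```typescript
// Sketch of getVisibleTools(): tools/list is filtered, but the call handler
// stays unfiltered, so hidden tools remain invocable by name.
// ALL_TOOLS and the sample names below are illustrative placeholders.
interface ToolDef {
  name: string;
  description: string;
}

const ALL_TOOLS: ToolDef[] = [
  { name: "memory_save", description: "Save a memory" },
  { name: "memory_recall", description: "Recall memories" },
  { name: "memory_graph_query", description: "Query the knowledge graph" },
];

const ESSENTIAL_TOOLS = new Set(["memory_save", "memory_recall"]);

function getVisibleTools(): ToolDef[] {
  // AGENTMEMORY_TOOLS=all exposes every tool; otherwise only essentials.
  if (process.env.AGENTMEMORY_TOOLS === "all") return ALL_TOOLS;
  return ALL_TOOLS.filter((t) => ESSENTIAL_TOOLS.has(t.name));
}
```

Keeping the call handler unfiltered means hiding a tool only shrinks the list surface an agent sees; it never breaks an existing caller that already knows a tool's name.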

Changes

New files (3):

  • src/cli.ts — CLI bootstrap with engine auto-detection
  • src/functions/lessons.ts — 5 iii functions (save, recall, list, strengthen, decay-sweep)
  • src/functions/obsidian-export.ts — Markdown export with frontmatter + wikilinks

Modified files (18):

  • package.json — v0.7.0, bin → cli.mjs, postbuild copies configs
  • tsdown.config.ts — Added cli.ts build entry
  • src/version.ts — 0.7.0
  • src/types.ts — Lesson interface, ExportData version, AuditEntry ops
  • src/state/schema.ts — KV.lessons scope
  • src/config.ts — Flipped consolidation default to enabled
  • src/index.ts — Register lessons + obsidian, always register consolidation, decay timer
  • src/hooks/session-end.ts — Auto-crystallize + consolidation calls
  • src/mcp/tools-registry.ts — V070_TOOLS (3), ESSENTIAL_TOOLS (7), getVisibleTools()
  • src/mcp/server.ts — Lesson + obsidian handlers, uses getVisibleTools
  • src/mcp/standalone.ts — Uses getVisibleTools
  • src/triggers/api.ts — 6 new REST endpoints (lessons CRUD + obsidian export)
  • src/functions/crystallize.ts — Crystal → lesson bridge (parallel saves)
  • src/functions/consolidation-pipeline.ts — Obsidian auto-export hook
  • src/functions/export-import.ts — v0.7.0 in supported versions

Stats

  • 49 MCP tools (7 visible by default)
  • 99 REST endpoints
  • 573 tests passing (0 regressions)
  • 21 files changed, 990 insertions(+), 22 deletions(-)

Test plan

  • npx agentmemory --help shows usage
  • npx agentmemory auto-starts iii-engine (or shows install instructions)
  • MCP tools/list returns 7 tools by default
  • AGENTMEMORY_TOOLS=all returns all 49 tools
  • memory_lesson_save creates lesson with dedup
  • memory_lesson_recall returns sorted by confidence × recency
  • Session-end hook triggers crystallize + consolidation
  • memory_obsidian_export creates valid .md files with frontmatter
  • All 573 existing tests pass
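The "sorted by confidence × recency" expectation above can be sketched as a scoring function. This is illustrative only: the PR does not specify the recency curve, so the exponential half-life here is an assumption.

```typescript
// Illustrative ranking for lesson recall: score = confidence × recency,
// where recency decays exponentially with age. The 7-day half-life is
// an assumed constant, not taken from the repository.
interface Lesson {
  id: string;
  confidence: number; // 0.0 - 1.0
  updatedAt: number;  // epoch millis
}

const HALF_LIFE_MS = 7 * 24 * 60 * 60 * 1000;

function rankLessons(lessons: Lesson[], now: number): Lesson[] {
  const score = (l: Lesson): number => {
    const age = Math.max(0, now - l.updatedAt);
    const recency = Math.pow(0.5, age / HALF_LIFE_MS);
    return l.confidence * recency;
  };
  // Sort a copy descending by score so the caller's array is untouched.
  return [...lessons].sort((a, b) => score(b) - score(a));
}
```

Under this shape, a slightly lower-confidence lesson that was just reinforced can outrank a high-confidence lesson that has gone stale, which matches the intent of combining the two signals.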

Summary by CodeRabbit

  • New Features

    • Command-line interface with engine auto-start and readiness checks.
    • Lesson system: save, recall/search, strengthen, and background decay.
    • Obsidian vault export for memories, lessons, crystals, and sessions.
    • New HTTP endpoints and MCP tools to access lesson and export features.
    • Optional post-session consolidation/crystallization when enabled.
  • Documentation

    • Added detailed architecture and agent workflow docs; updated README counts.
  • Chores

    • Bumped release to 0.7.0.
    • Extended build to include config files; added CI and publish workflows.
    • Added repository consistency tests and updated export/import compatibility.

… export, npx bootstrap

Five DX improvements based on competitive research against Mem0, Engram, CodeMem:

1. `npx agentmemory` zero-config startup
   - New CLI bootstrap (src/cli.ts) auto-detects and starts iii-engine
   - Tries `iii` binary first, falls back to `docker compose up -d`
   - Bundles iii-config.yaml and docker-compose.yml in dist/

2. Simplified MCP tool surface (7 core tools by default)
   - AGENTMEMORY_TOOLS=all unlocks all 49 tools
   - Default: save, recall, consolidate, forget, sessions, diagnose, lesson_save
   - Call handler remains unfiltered — any tool callable by name

3. Auto-consolidation on session end
   - CONSOLIDATION_ENABLED defaults to true (was false)
   - Session-end hook: session/end → crystallize → consolidate → bridge sync
   - Consolidation pipeline always registered (timer gated by config)

4. First-class lesson memory type with confidence decay
   - Lesson interface: confidence (0-1), reinforcements, decayRate, source
   - 5 functions: lesson-save, lesson-recall, lesson-list, lesson-strengthen, lesson-decay-sweep
   - Dedup via SHA-256 fingerprint — duplicate saves strengthen existing
   - Crystal lessons auto-flow into lesson system at confidence 0.6
   - Daily decay sweep with parallel KV writes

5. Obsidian-compatible Markdown export
   - Export to ~/.agentmemory/vault/ with YAML frontmatter + wikilinks
   - MOC.md (Map of Content) index file
   - Parallel KV reads, auto-export via OBSIDIAN_AUTO_EXPORT=true
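The dedup behavior in item 4 ("SHA-256 fingerprint — duplicate saves strengthen existing") can be sketched as follows. The fingerprint normalization and the +0.1 confidence boost are assumptions; only the SHA-256 dedup and strengthen-on-duplicate behavior come from the PR description:

```typescript
import { createHash } from "node:crypto";

// Sketch of fingerprint-based dedup: a duplicate save reinforces the
// existing lesson instead of inserting a new one. The normalization
// (trim + lowercase) and the 0.1 boost are illustrative choices.
interface Lesson {
  id: string;
  content: string;
  confidence: number;     // clamped to [0, 1]
  reinforcements: number;
}

function fingerprint(content: string): string {
  return createHash("sha256").update(content.trim().toLowerCase()).digest("hex");
}

function saveLesson(store: Map<string, Lesson>, content: string): Lesson {
  const id = fingerprint(content);
  const existing = store.get(id);
  if (existing) {
    // Duplicate: strengthen rather than insert a second copy.
    existing.reinforcements += 1;
    existing.confidence = Math.min(1, existing.confidence + 0.1);
    return existing;
  }
  const lesson: Lesson = { id, content, confidence: 0.5, reinforcements: 0 };
  store.set(id, lesson);
  return lesson;
}
```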

Stats: 49 MCP tools, 99 REST endpoints, 573 tests passing
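The daily decay sweep in item 4 might look something like the sketch below. The `confidence`, `decayRate`, and `reinforcements` fields come from the Lesson interface described above; the damping formula (reinforced lessons decay more slowly) is an assumption for illustration:

```typescript
// Sketch of one decay step for a lesson. The damping of decayRate by
// reinforcements is illustrative, not the repository's actual formula.
interface Lesson {
  confidence: number;      // 0.0 - 1.0
  decayRate: number;       // per-sweep decay amount
  reinforcements: number;  // times this lesson was re-confirmed
}

function decayOnce(l: Lesson): Lesson {
  const effectiveRate = l.decayRate / (1 + l.reinforcements);
  return { ...l, confidence: Math.max(0, l.confidence - effectiveRate) };
}
```

A sweep would map `decayOnce` over all stored lessons once per day and persist the results (the PR notes the writes happen in parallel).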

coderabbitai Bot commented Apr 4, 2026

Warning

Rate limit exceeded

@rohitg00 has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 16 minutes and 43 seconds before requesting another review.

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 16 minutes and 43 seconds.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 973712f6-8940-4fff-aba1-f5427bc55db8

📥 Commits

Reviewing files that changed from the base of the PR and between 70dea4b and 70b2d50.

📒 Files selected for processing (3)
  • .github/workflows/ci.yml
  • package.json
  • tsdown.config.ts
📝 Walkthrough

Walkthrough

Adds a lesson system (save/recall/decay), Obsidian vault export, new REST and MCP interfaces for lessons/export, a standalone CLI that can auto-start the engine, consolidation control enhancements, and bumps package/version to 0.7.0. Includes tests, CI workflows, and documentation updates.

Changes

Cohort / File(s) — Summary

  • Version & Packaging (package.json, src/version.ts, plugin/.claude-plugin/plugin.json, test/export-import.test.ts) — Bumped release to 0.7.0. Updated package bin.agentmemory to dist/cli.mjs. Adjusted tests and plugin manifest to match the new version.
  • CLI & Build (src/cli.ts, tsdown.config.ts) — Added a new Node CLI entrypoint with args (--help, --tools, --no-engine, --port), engine health checks, auto-start via iii or docker compose, and readiness polling. Added a tsdown build target for CLI ESM output to dist.
  • Lesson System (new) (src/functions/lessons.ts, src/types.ts, src/state/schema.ts) — Introduced a KV-backed Lesson entity and APIs: mem::lesson-save, mem::lesson-recall, mem::lesson-list, mem::lesson-strengthen, mem::lesson-decay-sweep. Added the Lesson type and KV.lessons key.
  • Obsidian Export (new) (src/functions/obsidian-export.ts, src/types.ts) — New mem::obsidian-export handler producing .md files (frontmatter + content) per entity type, a top-level MOC, an audit entry, and per-file error collection.
  • Consolidation & Session Hooks (src/functions/consolidation-pipeline.ts, src/hooks/session-end.ts, plugin/scripts/session-end.mjs) — mem::consolidate-pipeline gains a force flag and gating by the consolidation setting; optional Obsidian auto-export trigger; session-end hooks/scripts now conditionally trigger auto-crystallize and consolidation (best-effort, errors suppressed).
  • Crystallize Integration (src/functions/crystallize.ts) — After persisting crystals, triggers mem::lesson-save for each lesson in the digest concurrently; individual trigger failures are suppressed.
  • Export/Import & Types (src/functions/export-import.ts, src/types.ts) — Added the 0.7.0 export version, included lessons in the export schema, and extended import handling for Lesson entities (replace/skip strategies). Added audit operations for lesson/export actions.
  • MCP Tools & API Endpoints (src/mcp/tools-registry.ts, src/mcp/server.ts, src/mcp/standalone.ts, src/triggers/api.ts) — Added three MCP tools (memory_lesson_save, memory_lesson_recall, memory_obsidian_export) and corresponding REST endpoints (/agentmemory/lessons*, /agentmemory/obsidian/export). Introduced getVisibleTools() controlled by AGENTMEMORY_TOOLS.
  • Startup & Scheduler (src/index.ts) — Registered lessons and obsidian-export functions at startup. Consolidation registration is always wired; the consolidation timer runs only when enabled. Added a 24-hour lesson-decay sweep timer controlled by LESSON_DECAY_ENABLED. Updated startup endpoint/tool counts.
  • Tests, CI & Docs (test/*, .github/workflows/*, AGENTS.md, README.md) — New consistency tests and updated test expectations for tool counts and export version. Added CI and publish workflows. Created AGENTS.md and updated README counts (100 REST endpoints, 41 MCP tools).

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Poem

🐰 I nibble notes into lesson crumbs and tuck them neat,
Crystals spawn, the vault builds pages for my fleet,
A CLI hops awake and nudges engines to begin,
Decay drifts out, reinforcement hops back in,
Version 0.7 — tiny paws, big memory beat!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 0.00%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Description Check — ✅ Passed: check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check — ✅ Passed: the title accurately summarizes the main features introduced in this release (v0.7.0): lessons, tool visibility, auto-consolidation, obsidian export, and npx bootstrap.



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/functions/export-import.ts (1)

165-171: ⚠️ Potential issue | 🟠 Major

0.7.0 is accepted, but lessons are not imported/exported yet (silent data loss risk).

Line 165 allows 0.7.0, but this handler still does not persist importData.lessons (and mem::export also does not include lessons). A 0.7.0 round-trip can drop lesson records without error.

Suggested fix direction
+ // export path
+ const lessons = await kv.list<Lesson>(KV.lessons).catch(() => []);
  const exportData: ExportData = {
    version: VERSION,
    ...
+   lessons: lessons.length > 0 ? lessons : undefined,
  };

+ // import replace path
+ for (const l of await kv.list<Lesson>(KV.lessons).catch(() => [])) {
+   await kv.delete(KV.lessons, l.id);
+ }

+ // import merge/skip path
+ if (importData.lessons) {
+   for (const lesson of importData.lessons) {
+     if (strategy === "skip") {
+       const existing = await kv.get(KV.lessons, lesson.id).catch(() => null);
+       if (existing) { stats.skipped++; continue; }
+     }
+     await kv.set(KV.lessons, lesson.id, lesson);
+   }
+ }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/functions/export-import.ts` around lines 165 - 171, The code currently
accepts "0.7.0" in supportedVersions but the handler does not import or export
importData.lessons, causing silent data loss; update the check to either remove
"0.7.0" from the supportedVersions Set or add an explicit validation that
importData.lessons is empty and return an error when lessons are present.
Specifically, modify the Set in supportedVersions (and/or add a guard after
checking importData.version) so that importData.version "0.7.0" is treated as
unsupported or that an error is returned if importData.lessons exists,
referencing the supportedVersions Set and importData.lessons symbols in
export-import.ts.
src/index.ts (1)

311-319: ⚠️ Potential issue | 🟠 Major

Document and align consolidation default behavior with other feature flags.

The code at src/config.ts:183-185 sets isConsolidationEnabled() to default to true (opt-out), but this conflicts with:

  1. README documentation: Shows CONSOLIDATION_ENABLED=false as the example configuration (opt-in pattern)
  2. Inconsistency with similar flags: isGraphExtractionEnabled() and isStandaloneMcp() both default to false (opt-in: === "true" checks)

This means existing deployments will auto-consolidate every 2 hours by default. Either update the code to match the opt-in pattern used elsewhere (=== "true"), or update documentation and provide migration guidance for the opt-out behavior change.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/index.ts` around lines 311 - 319, isConsolidationEnabled() currently
defaults to true (opt-out) which conflicts with README and other flags; change
the implementation in config.ts to follow the opt-in pattern used by
isGraphExtractionEnabled() and isStandaloneMcp() (i.e. evaluate
CONSOLIDATION_ENABLED === "true" so default is false), then keep the
consolidation scheduling in src/index.ts (consolidationTimer and
sdk.trigger("mem::consolidate-pipeline")) as-is; finally, update README examples
and add a brief migration note explaining the behavior change from opt-out to
opt-in so operators know to set CONSOLIDATION_ENABLED=true to re-enable
auto-consolidation.
🧹 Nitpick comments (11)
test/export-import.test.ts (1)

118-128: Add v0.7.0 lessons assertions in export/import tests.

This segment validates the version bump, but it should also assert lessons behavior for v0.7.0 (export presence + import round-trip) to guard schema compatibility.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/export-import.test.ts` around lines 118 - 128, The export test "export
produces valid ExportData structure" currently only checks version and counts;
add assertions that for version "0.7.0" the ExportData includes the new lessons
field (e.g., expect(result.lessons).toBeDefined() and appropriate length/content
expectations) and update the import round-trip test to call
sdk.trigger("mem::import", { data: result }) after export and assert that the
imported state includes the same lessons (use the same identifiers used in the
test: result, ExportData, sdk.trigger with "mem::export" and "mem::import") to
ensure lessons survive export+import.
test/mcp-standalone.test.ts (1)

25-31: Exact total-count assertion is brittle for a growing tool registry.

Consider avoiding expect(tools.length).toBe(41) and relying on required tool presence (plus uniqueness) so future tool additions don’t force unrelated test churn.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/mcp-standalone.test.ts` around lines 25 - 31, The test in getAllTools
should not assert an exact total count (tools.length === 41) because the
registry grows; modify the test in test/mcp-standalone.test.ts around the
getAllTools() usage to remove or replace expect(tools.length).toBe(41) with
assertions that ensure required tools exist and the list has unique names—for
example, assert presence of "memory_verify", "memory_lesson_save",
"memory_lesson_recall", "memory_obsidian_export" via tools.some(...) and check
uniqueness by building a Set of tool.name and comparing Set.size to
tools.length; keep the checks referring to getAllTools and the tool.name
property so the intent is clear.
src/mcp/tools-registry.ts (1)

751-759: memory_governance_delete in essential tools may be overly permissive for default visibility.

The ESSENTIAL_TOOLS set includes memory_governance_delete, which permanently deletes memories. For a "core" default toolset aimed at simplicity, exposing a destructive operation by default could be risky for new users.

Consider whether memory_governance_delete should require AGENTMEMORY_TOOLS=all instead, replacing it with a safer default like memory_export or memory_patterns.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/mcp/tools-registry.ts` around lines 751 - 759, ESSENTIAL_TOOLS currently
contains "memory_governance_delete", which exposes a destructive
permanent-delete operation by default; remove "memory_governance_delete" from
the ESSENTIAL_TOOLS Set and replace it with a safer default such as
"memory_export" or "memory_patterns", and ensure the destructive tool
"memory_governance_delete" is gated behind the stricter permission flag (e.g.,
only enabled when AGENTMEMORY_TOOLS=all) so that the default core toolset
(ESSENTIAL_TOOLS) remains non-destructive.
src/cli.ts (1)

84-114: Spawned engine process is orphaned with no cleanup.

Using detached: true and child.unref() means the spawned iii-engine or Docker process continues running after the CLI exits. While this is likely intentional for persistent operation, there's no mechanism to stop the engine when the CLI shuts down gracefully.

Consider documenting this behavior in the help text, or providing a --stop flag for cleanup.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/cli.ts` around lines 84 - 114, startEngine currently detaches child
processes (spawn with detached: true and child.unref()) which leaves iii-engine
or Docker running with no cleanup; implement a complementary stop/cleanup path
and/or document behavior: add a stopEngine function that detects which launch
method was used (use the same logic as startEngine: findIiiConfig,
whichBinary("iii"), dockerCompose existence), record the PID or launch mode to a
small state file when starting, and implement a CLI --stop flag that reads that
state and either kills the recorded PID (for direct iii binary) or runs the
corresponding docker compose down (for Docker launches); also update the CLI
help text to mention persistent background behavior and the new --stop option.
src/index.ts (1)

303-309: Consider making the lesson decay sweep configurable.

Unlike auto-forget (gated by AUTO_FORGET_ENABLED) and consolidation (gated by isConsolidationEnabled()), the lesson decay sweep timer runs unconditionally. While this may be intentional for maintenance hygiene, consider adding an opt-out env var (e.g., LESSON_DECAY_ENABLED) for consistency with other background tasks.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/index.ts` around lines 303 - 309, Add a config guard so the lesson decay
sweep runs only when enabled by an environment flag (e.g., LESSON_DECAY_ENABLED)
similar to AUTO_FORGET_ENABLED/isConsolidationEnabled(); wrap the setInterval
creation for lessonDecayTimer and the subsequent lessonDecayTimer.unref() and
console.log in an if-check that reads and parses LESSON_DECAY_ENABLED
(defaulting to true if absent or explicitly "true"/"1"), and skip creating the
timer (and the log) when the flag is false; keep the
sdk.trigger("mem::lesson-decay-sweep", {}) error handling as-is inside the
interval.
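The suggested LESSON_DECAY_ENABLED gate could be parsed with a small helper like the sketch below; the helper name and accepted values ("true"/"1") are assumptions, chosen to mirror the default-on behavior the review asks for:

```typescript
// Hypothetical helper for opt-out env flags: absent or empty means the
// default; "true"/"1" enable, anything else disables.
function envFlag(name: string, defaultValue: boolean): boolean {
  const raw = process.env[name];
  if (raw === undefined || raw === "") return defaultValue;
  return raw === "true" || raw === "1";
}

// Usage sketch: gate the decay timer, defaulting to enabled as today:
//   if (envFlag("LESSON_DECAY_ENABLED", true)) { /* start sweep timer */ }
```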
src/functions/obsidian-export.ts (3)

214-247: Wrap individual file writes in try-catch to handle partial failures gracefully.

If one file write fails (e.g., permission denied), the entire export aborts. Consider catching errors per-file and reporting which files failed while continuing with others.

♻️ Proposed error handling pattern
+      const errors: Array<{ id: string; error: string }> = [];
+
       for (const m of memories.filter((m) => m.isLatest)) {
         const filename = `${sanitize(m.id)}.md`;
-        writeFileSync(join(dirs.memories, filename), memoryToMd(m));
-        stats.memories++;
-        memoryMoc.push(`- [[memories/${sanitize(m.id)}|${m.title}]] (${m.type}, strength: ${m.strength})`);
+        try {
+          writeFileSync(join(dirs.memories, filename), memoryToMd(m));
+          stats.memories++;
+          memoryMoc.push(`- [[memories/${sanitize(m.id)}|${m.title}]] (${m.type}, strength: ${m.strength})`);
+        } catch (err) {
+          errors.push({ id: m.id, error: String(err) });
+        }
       }

Then include errors in the return value.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/functions/obsidian-export.ts` around lines 214 - 247, Wrap each per-item
writeFileSync call inside the memories/lessons/crystals/sessions loops in a
try-catch so one failing file doesn't abort the whole export: for each loop (the
blocks that write memoryToMd, lessonToMd, crystalToMd, sessionToMd using
sanitize and join(dirs.*,...)) catch errors, push a descriptive record (id,
path, error.message) into a new errors array, and continue; increment stats and
push to the respective *Moc arrays only on success; finally include the errors
array in the function's return value so callers can see which files failed.

17-19: Edge case: slice(0, 100) may split multi-byte characters.

For UTF-8 strings, slicing by character index is generally safe in JavaScript, but if the intent is to limit byte length for filesystem compatibility, consider using a byte-aware truncation. The current approach is acceptable for most use cases.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/functions/obsidian-export.ts` around lines 17 - 19, The sanitize function
uses slice(0, 100) which can split multi-byte UTF-8 characters if your intent is
to limit byte length; change sanitize to perform a byte-aware truncation:
compute the UTF-8 byte length (e.g., Buffer.byteLength) of the name and, if it
exceeds the target (100 bytes), iteratively trim characters from the end until
the byte length is <= 100 (or build a new string by appending characters while
checking cumulative byte length) before replacing invalid filesystem chars; keep
the existing regex replacement for illegal characters and ensure the byte-limit
occurs after replacement so the final string is safe for filesystems.
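The byte-aware truncation the comment describes could be sketched as below; the 100-byte cap matches the comment, and iterating by code point avoids splitting surrogate pairs:

```typescript
// Sketch of byte-aware truncation: accumulate code points until adding the
// next one would push the UTF-8 encoding past maxBytes. This runs after any
// illegal-character replacement, per the review's ordering note.
function truncateUtf8(name: string, maxBytes = 100): string {
  let out = "";
  for (const ch of name) { // for..of iterates by code point, not UTF-16 unit
    if (Buffer.byteLength(out + ch, "utf8") > maxBytes) break;
    out += ch;
  }
  return out;
}
```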

235-241: Consider making the session limit configurable.

The session export is hardcoded to the 50 most recent sessions. Consider accepting this as an optional parameter for flexibility.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/functions/obsidian-export.ts` around lines 235 - 241, The code currently
hardcodes slicing to 50 in the sessions pipeline (the recent variable created
from sessions.sort(...).slice(0, 50)); change this to use a configurable
parameter (e.g., limit or maxSessions) on the obsidian export function (or the
enclosing function that builds recent), defaulting to 50, validate it is a
positive integer, and replace .slice(0, 50) with .slice(0, limit); update any
callers to pass the new optional parameter where appropriate and add tests or
input validation to ensure backward compatibility when the parameter is omitted.
src/triggers/api.ts (1)

1842-1858: Consider returning 201 for newly created lessons.

The api::lesson-save endpoint always returns status_code: 200, but when a lesson is created (vs. strengthened), returning 201 Created would be more RESTful and consistent with other creation endpoints like api::remember (Line 325) and api::action-create (Line 1014).

♻️ Proposed fix
     const result = await sdk.trigger("mem::lesson-save", {
       content: body.content,
       context: body.context || "",
       confidence: typeof body.confidence === "number" ? body.confidence : undefined,
       project: typeof body.project === "string" ? body.project : undefined,
       tags,
       source: "manual",
     });
-    return { status_code: 200, body: result };
+    const statusCode = result.action === "created" ? 201 : 200;
+    return { status_code: statusCode, body: result };
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/triggers/api.ts` around lines 1842 - 1858, The endpoint always returns
status_code 200; change the response to return 201 when the lesson was newly
created and 200 when it was only strengthened: after calling
sdk.trigger("mem::lesson-save") (the result variable), inspect a creation
indicator on result (e.g., result.created === true or result.isNew === true or
result.status === "created" — adapt to the actual shape returned by
mem::lesson-save) and set status_code = 201 for newly created lessons, otherwise
keep 200; update the return in the api::lesson-save handler accordingly.
src/functions/lessons.ts (2)

148-175: Consider adding audit logging to mem::lesson-list.

Unlike mem::lesson-recall which audits the query, mem::lesson-list does not record audit entries. This creates an inconsistency in audit coverage for lesson access patterns.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/functions/lessons.ts` around lines 148 - 175, The mem::lesson-list
handler (the async function registered with sdk.registerFunction id
"mem::lesson-list") currently returns filtered lessons but lacks audit logging;
add an audit entry before returning (using the same audit mechanism used by
mem::lesson-recall) that records the query parameters (project, source,
minConfidence, limit), the number of lessons returned, and any request/context
identifiers available, so access to lesson listings is logged consistently with
mem::lesson-recall.

111-129: Simple term matching may miss relevant lessons.

The scoring uses substring matching (text.includes(t)), which works for exact term matches but won't handle word boundaries, stemming, or fuzzy matching. For example, searching "test" would match "testing" but also "contest". This is acceptable for v0.7.0 but consider improving search quality in future iterations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/functions/lessons.ts` around lines 111 - 129, The current scoring in the
scored mapping uses substring matching via text.includes(t), which causes false
positives (e.g., "test" matching "contest"); update the matching logic in the
scored computation to use word-boundary or token-based matching instead of
simple substring checks: build terms from query (already in terms) and for each
term test against text with a word-boundary-aware check (or tokenize text into
words and compare exact tokens, optionally lowercased) before computing
matchCount/relevance so relevance reflects whole-word matches; replace the
text.includes usage inside the .map callback that computes matchCount/relevance
(the block that returns { lesson: l, score }) with this more precise matching
approach.
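The word-boundary matching the comment recommends might look like this sketch (the helper names are illustrative, not from the repository):

```typescript
// Sketch of whole-word term matching to replace substring checks, so
// "test" no longer matches "contest". Escaping prevents regex injection
// from user-supplied query terms.
function escapeRegExp(s: string): string {
  return s.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
}

function matchCount(text: string, terms: string[]): number {
  const lower = text.toLowerCase();
  return terms.filter((t) =>
    new RegExp(`\\b${escapeRegExp(t.toLowerCase())}\\b`).test(lower)
  ).length;
}
```

Relevance would then be `matchCount / terms.length`, as in the existing scoring, but counting only whole-word hits.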
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@package.json`:
- Line 13: The build script's boolean chaining lets a failing tsdown be
swallowed by the following "|| true", so update the "build" script entry (the
package.json "build" script) to ensure tsdown failure stops the script by
grouping each fallback copy with its own "|| true" (e.g., run tsdown, then for
each cp wrap the cp...|| true in a grouped/parenthesized expression or run them
as separate commands so only the cp steps are tolerant), ensuring tsdown is not
covered by a subsequent "|| true" and that only the copy commands are allowed to
fail silently.

In `@src/cli.ts`:
- Around line 76-82: whichBinary currently runs the Unix-only "which" command
and returns null on Windows; update whichBinary to detect the platform (e.g.,
process.platform === "win32") and call the Windows "where" command on Windows or
the existing "which" on *nix, falling back to a cross-platform strategy (try
both commands or use a Node API like child_process.spawnSync with shell) so that
callers like iiiBin and dockerBin can locate binaries on Windows too; ensure the
function still returns a trimmed string on success and null on failure, and keep
error handling consistent with the existing try/catch behavior.
- Around line 143-146: The error message hardcodes ports (3111, 3112, 49134)
which can be wrong when the user passes a --port override; update the failure
logging in src/cli.ts (the block that checks the ready variable and calls
process.exit) to reference the actual port configuration variable used by the
CLI (e.g., the parsed port option or the variable named port/basePort) instead
of hardcoded numbers, and if other dependent ports are derived from that base
(like port+1 or a fixed control port) compute them dynamically so the
console.error lines accurately reflect the ports being checked.

In `@src/config.ts`:
- Around line 183-185: The change inverted consolidation default; update
isConsolidationEnabled() to preserve previous opt-in behavior (only enabled when
CONSOLIDATION_ENABLED is explicitly "true") or else gate the session-end caller
to /agentmemory/consolidate-pipeline so it only invokes consolidation when
isConsolidationEnabled() returns true; specifically, modify the
isConsolidationEnabled function to check for equality to "true" (or use a
boolean parse) and/or add a conditional around the session-end hook that
triggers the consolidate-pipeline call so the periodic timer and on-session-end
call are consistent; also add a release note/migration entry documenting that
consolidation is now opt-in unless the env var is set.

In `@src/functions/consolidation-pipeline.ts`:
- Around line 235-239: The empty catch around the Obsidian export call swallows
errors; update the try/catch that calls sdk.trigger("mem::obsidian-export", {})
so the catch logs a warning (using the existing logger or processLogger)
including the error message and stack, and also mark the export failure in the
results object/array (e.g., add a results.push or set results.obsidianExport = {
success: false, error: <error> } or similar) so consumers can observe the
failure; keep the original success path to record success in results when no
error is thrown.

In `@src/functions/lessons.ts`:
- Around line 78-79: The audit call after saving a lesson can throw and will
surface to the caller even though the lesson was persisted; wrap the call to
recordAudit(kv, "lesson_save", "mem::lesson-save", [lesson.id]) in a try-catch
(or use a centralized audit helper) so failures are logged/handled and do not
bubble up, ensuring the primary operation (await kv.set(KV.lessons, lesson.id,
lesson)) remains successful; include context such as lesson.id in the log/error
handling so failures are traceable.
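
One way to centralize this is a small guard helper; `auditSafe` below is illustrative (not existing code), and `recordAudit` itself is not reproduced:

```typescript
// Illustrative guard: run an audit call, but never let its failure break
// the primary operation that already succeeded.
function auditSafe(audit: () => void, context: string, warn: (msg: string) => void): boolean {
  try {
    audit(); // e.g. recordAudit(kv, "lesson_save", "mem::lesson-save", [lesson.id])
    return true;
  } catch (err) {
    warn(`audit failed for ${context}: ${(err as Error).message}`);
    return false; // the caller's kv.set result is unaffected
  }
}
```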

In `@src/functions/obsidian-export.ts`:
- Around line 1-3: Replace blocking fs calls in obsidian-export.ts: swap
synchronous mkdirSync and writeFileSync usage for the Promise-based APIs (import
from "fs/promises" and use mkdir and writeFile). Update any call sites that
currently call mkdirSync(..., { recursive: true }) and writeFileSync(...) to
await mkdir(..., { recursive: true }) and await writeFile(...), handle/propagate
errors appropriately (try/catch or let caller handle) and ensure the containing
function(s) (e.g., the export/handler function(s) referencing mkdirSync and
writeFileSync) are declared async so you can await these operations.
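
The non-blocking write path might look like this; `writeNote` and its parameters are illustrative, not the shipped obsidian-export API:

```typescript
import { mkdir, writeFile, readFile } from "node:fs/promises";
import { join } from "node:path";
import { tmpdir } from "node:os";

// Promise-based write path replacing mkdirSync/writeFileSync.
async function writeNote(vaultDir: string, fileName: string, markdown: string): Promise<string> {
  await mkdir(vaultDir, { recursive: true });  // was: mkdirSync(vaultDir, { recursive: true })
  const filePath = join(vaultDir, fileName);
  await writeFile(filePath, markdown, "utf8"); // was: writeFileSync(filePath, markdown)
  return filePath;
}

// Read-back helper used to confirm a write, e.g. against join(tmpdir(), "vault").
async function readNote(filePath: string): Promise<string> {
  return readFile(filePath, "utf8");
}
```

Every caller up the chain must become `async` and `await` these, which is the bulk of the change.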

---

Outside diff comments:
In `@src/functions/export-import.ts`:
- Around line 165-171: The code currently accepts "0.7.0" in supportedVersions
but the handler does not import or export importData.lessons, causing silent
data loss; update the check to either remove "0.7.0" from the supportedVersions
Set or add an explicit validation that importData.lessons is empty and return an
error when lessons are present. Specifically, modify the Set in
supportedVersions (and/or add a guard after checking importData.version) so that
importData.version "0.7.0" is treated as unsupported or that an error is
returned if importData.lessons exists, referencing the supportedVersions Set and
importData.lessons symbols in export-import.ts.


In `@src/index.ts`:
- Around line 311-319: isConsolidationEnabled() currently defaults to true
(opt-out) which conflicts with README and other flags; change the implementation
in config.ts to follow the opt-in pattern used by isGraphExtractionEnabled() and
isStandaloneMcp() (i.e. evaluate CONSOLIDATION_ENABLED === "true" so default is
false), then keep the consolidation scheduling in src/index.ts
(consolidationTimer and sdk.trigger("mem::consolidate-pipeline")) as-is;
finally, update README examples and add a brief migration note explaining the
behavior change from opt-out to opt-in so operators know to set
CONSOLIDATION_ENABLED=true to re-enable auto-consolidation.

---

Nitpick comments:
In `@src/cli.ts`:
- Around line 84-114: startEngine currently detaches child processes (spawn with
detached: true and child.unref()) which leaves iii-engine or Docker running with
no cleanup; implement a complementary stop/cleanup path and/or document
behavior: add a stopEngine function that detects which launch method was used
(use the same logic as startEngine: findIiiConfig, whichBinary("iii"),
dockerCompose existence), record the PID or launch mode to a small state file
when starting, and implement a CLI --stop flag that reads that state and either
kills the recorded PID (for direct iii binary) or runs the corresponding docker
compose down (for Docker launches); also update the CLI help text to mention
persistent background behavior and the new --stop option.
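
The record/stop pair could be sketched like this; the state-file path, the `EngineState` shape, and the injected `kill`/`composeDown` callbacks are all assumptions, not the real CLI's layout:

```typescript
import { writeFileSync, readFileSync, existsSync } from "node:fs";
import { join } from "node:path";
import { tmpdir } from "node:os";

type EngineState = { mode: "binary" | "docker"; pid?: number; startedAt: string };

// Record how the engine was launched so a later --stop can clean up.
function recordEngineState(statePath: string, state: EngineState): void {
  writeFileSync(statePath, JSON.stringify(state), "utf8");
}

function stopEngine(
  statePath: string,
  kill: (pid: number) => void, // e.g. (pid) => process.kill(pid, "SIGTERM")
  composeDown: () => void,     // e.g. spawn "docker compose down"
): string {
  if (!existsSync(statePath)) return "no engine state recorded";
  const state: EngineState = JSON.parse(readFileSync(statePath, "utf8"));
  if (state.mode === "binary" && state.pid) {
    kill(state.pid);
    return `stopped iii binary (pid ${state.pid})`;
  }
  composeDown();
  return "stopped docker engine";
}
```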

In `@src/functions/lessons.ts`:
- Around line 148-175: The mem::lesson-list handler (the async function
registered with sdk.registerFunction id "mem::lesson-list") currently returns
filtered lessons but lacks audit logging; add an audit entry before returning
(using the same audit mechanism used by mem::lesson-recall) that records the
query parameters (project, source, minConfidence, limit), the number of lessons
returned, and any request/context identifiers available, so access to lesson
listings is logged consistently with mem::lesson-recall.
- Around line 111-129: The current scoring in the scored mapping uses substring
matching via text.includes(t), which causes false positives (e.g., "test"
matching "contest"); update the matching logic in the scored computation to use
word-boundary or token-based matching instead of simple substring checks: build
terms from query (already in terms) and for each term test against text with a
word-boundary-aware check (or tokenize text into words and compare exact tokens,
optionally lowercased) before computing matchCount/relevance so relevance
reflects whole-word matches; replace the text.includes usage inside the .map
callback that computes matchCount/relevance (the block that returns { lesson: l,
score }) with this more precise matching approach.
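
A token-based scorer along these lines would eliminate the "contest"-matches-"test" false positive; the 0-to-1 score below is illustrative, not the shipped relevance formula:

```typescript
// Whole-word relevance: tokenize both sides instead of substring matching.
function relevance(query: string, text: string): number {
  const tokens = new Set(text.toLowerCase().split(/\W+/).filter(Boolean));
  const terms = query.toLowerCase().split(/\W+/).filter((t) => t.length > 1);
  if (terms.length === 0) return 0;
  const matchCount = terms.filter((t) => tokens.has(t)).length;
  return matchCount / terms.length; // exact-token matches only
}
```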

In `@src/functions/obsidian-export.ts`:
- Around line 214-247: Wrap each per-item writeFileSync call inside the
memories/lessons/crystals/sessions loops in a try-catch so one failing file
doesn't abort the whole export: for each loop (the blocks that write memoryToMd,
lessonToMd, crystalToMd, sessionToMd using sanitize and join(dirs.*,...)) catch
errors, push a descriptive record (id, path, error.message) into a new errors
array, and continue; increment stats and push to the respective *Moc arrays only
on success; finally include the errors array in the function's return value so
callers can see which files failed.
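
The per-item guard could be shaped like this; the `items` and `write` parameters are stand-ins for the real loops and fs calls:

```typescript
type ExportError = { id: string; path: string; error: string };

// One failing file no longer aborts the export; failures are returned
// so callers can inspect them.
function exportAll(
  items: { id: string; path: string; body: string }[],
  write: (path: string, body: string) => void, // stand-in for the fs write
): { written: number; errors: ExportError[] } {
  const errors: ExportError[] = [];
  let written = 0;
  for (const item of items) {
    try {
      write(item.path, item.body);
      written++; // stats/MOC entries are updated only on success
    } catch (err) {
      errors.push({ id: item.id, path: item.path, error: (err as Error).message });
    }
  }
  return { written, errors };
}
```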
- Around line 17-19: The sanitize function uses slice(0, 100) which can split
multi-byte UTF-8 characters if your intent is to limit byte length; change
sanitize to perform a byte-aware truncation: compute the UTF-8 byte length
(e.g., Buffer.byteLength) of the name and, if it exceeds the target (100 bytes),
iteratively trim characters from the end until the byte length is <= 100 (or
build a new string by appending characters while checking cumulative byte
length) before replacing invalid filesystem chars; keep the existing regex
replacement for illegal characters and ensure the byte-limit occurs after
replacement so the final string is safe for filesystems.
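
A byte-aware version could build the name code point by code point; the 100-byte cap and the illegal-character set below are illustrative:

```typescript
// Replace illegal characters first, then append whole code points while
// the UTF-8 byte length still fits, so multi-byte characters never split.
function sanitizeName(name: string, maxBytes = 100): string {
  const cleaned = name.replace(/[\\\/:*?"<>|]/g, "-");
  let out = "";
  for (const ch of cleaned) { // for..of iterates by code point, not UTF-16 unit
    if (Buffer.byteLength(out + ch, "utf8") > maxBytes) break;
    out += ch;
  }
  return out;
}
```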
- Around line 235-241: The code currently hardcodes slicing to 50 in the
sessions pipeline (the recent variable created from sessions.sort(...).slice(0,
50)); change this to use a configurable parameter (e.g., limit or maxSessions)
on the obsidian export function (or the enclosing function that builds recent),
defaulting to 50, validate it is a positive integer, and replace .slice(0, 50)
with .slice(0, limit); update any callers to pass the new optional parameter
where appropriate and add tests or input validation to ensure backward
compatibility when the parameter is omitted.

In `@src/index.ts`:
- Around line 303-309: Add a config guard so the lesson decay sweep runs only
when enabled by an environment flag (e.g., LESSON_DECAY_ENABLED) similar to
AUTO_FORGET_ENABLED/isConsolidationEnabled(); wrap the setInterval creation for
lessonDecayTimer and the subsequent lessonDecayTimer.unref() and console.log in
an if-check that reads and parses LESSON_DECAY_ENABLED (defaulting to true if
absent or explicitly "true"/"1"), and skip creating the timer (and the log) when
the flag is false; keep the sdk.trigger("mem::lesson-decay-sweep", {}) error
handling as-is inside the interval.
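
The default-true parse could look like this; `LESSON_DECAY_ENABLED` and its accepted values follow the suggestion above rather than existing code:

```typescript
// Opt-out flag: decay runs unless explicitly disabled.
function isLessonDecayEnabled(env: Record<string, string | undefined>): boolean {
  const raw = env.LESSON_DECAY_ENABLED;
  if (raw === undefined) return true; // absent → decay stays on
  return raw === "true" || raw === "1";
}
```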

In `@src/mcp/tools-registry.ts`:
- Around line 751-759: ESSENTIAL_TOOLS currently contains
"memory_governance_delete", which exposes a destructive permanent-delete
operation by default; remove "memory_governance_delete" from the ESSENTIAL_TOOLS
Set and replace it with a safer default such as "memory_export" or
"memory_patterns", and ensure the destructive tool "memory_governance_delete" is
gated behind the stricter permission flag (e.g., only enabled when
AGENTMEMORY_TOOLS=all) so that the default core toolset (ESSENTIAL_TOOLS)
remains non-destructive.

In `@src/triggers/api.ts`:
- Around line 1842-1858: The endpoint always returns status_code 200; change the
response to return 201 when the lesson was newly created and 200 when it was
only strengthened: after calling sdk.trigger("mem::lesson-save") (the result
variable), inspect a creation indicator on result (e.g., result.created === true
or result.isNew === true or result.status === "created" — adapt to the actual
shape returned by mem::lesson-save) and set status_code = 201 for newly created
lessons, otherwise keep 200; update the return in the api::lesson-save handler
accordingly.
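
The status selection reduces to a one-liner; `created` is an assumed field on the `mem::lesson-save` result and must be adapted to the actual shape:

```typescript
// 201 for a newly created lesson, 200 when an existing one was strengthened.
function lessonSaveStatus(result: { created?: boolean }): number {
  return result.created === true ? 201 : 200;
}
```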

In `@test/export-import.test.ts`:
- Around line 118-128: The export test "export produces valid ExportData
structure" currently only checks version and counts; add assertions that for
version "0.7.0" the ExportData includes the new lessons field (e.g.,
expect(result.lessons).toBeDefined() and appropriate length/content
expectations) and update the import round-trip test to call
sdk.trigger("mem::import", { data: result }) after export and assert that the
imported state includes the same lessons (use the same identifiers used in the
test: result, ExportData, sdk.trigger with "mem::export" and "mem::import") to
ensure lessons survive export+import.

In `@test/mcp-standalone.test.ts`:
- Around line 25-31: The test in getAllTools should not assert an exact total
count (tools.length === 41) because the registry grows; modify the test in
test/mcp-standalone.test.ts around the getAllTools() usage to remove or replace
expect(tools.length).toBe(41) with assertions that ensure required tools exist
and the list has unique names—for example, assert presence of "memory_verify",
"memory_lesson_save", "memory_lesson_recall", "memory_obsidian_export" via
tools.some(...) and check uniqueness by building a Set of tool.name and
comparing Set.size to tools.length; keep the checks referring to getAllTools and
the tool.name property so the intent is clear.
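
The suggested assertions can be sketched as a small checker: required names present, names unique, and no exact-total check that breaks as the registry grows:

```typescript
type Tool = { name: string };

// Throws on a missing required tool or a duplicate name; tolerant of growth.
function checkRegistry(tools: Tool[], required: string[]): void {
  for (const name of required) {
    if (!tools.some((t) => t.name === name)) throw new Error(`missing tool: ${name}`);
  }
  const names = new Set(tools.map((t) => t.name));
  if (names.size !== tools.length) throw new Error("duplicate tool names");
}
```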

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 7fb56e21-608e-448e-bba4-8270d6772242

📥 Commits

Reviewing files that changed from the base of the PR and between fa608ed and 95b6890.

📒 Files selected for processing (21)
  • package.json
  • plugin/scripts/session-end.mjs
  • src/cli.ts
  • src/config.ts
  • src/functions/consolidation-pipeline.ts
  • src/functions/crystallize.ts
  • src/functions/export-import.ts
  • src/functions/lessons.ts
  • src/functions/obsidian-export.ts
  • src/hooks/session-end.ts
  • src/index.ts
  • src/mcp/server.ts
  • src/mcp/standalone.ts
  • src/mcp/tools-registry.ts
  • src/state/schema.ts
  • src/triggers/api.ts
  • src/types.ts
  • src/version.ts
  • test/export-import.test.ts
  • test/mcp-standalone.test.ts
  • tsdown.config.ts

- Create AGENTS.md with strict consistency rules for MCP tools, REST
  endpoints, versions, KV scopes, and audit operations
- Fix MCP tool count: 38 → 41 across README.md (4 occurrences)
- Fix REST endpoint count: 93/95 → 99 across README.md and index.ts
- Fix plugin.json: version 0.6.1 → 0.7.0, tool count 5 → 41
- Fix cli.ts help text: 48+ → 41 MCP tools
- Fix README api.ts path reference → triggers/api.ts

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
README.md (1)

136-148: ⚠️ Potential issue | 🟡 Minor

README still has stale pre-v0.7 references.

These sections now say 41 tools / 99 endpoints, but Line 165-166 still says 38 tools + 93 endpoints, Line 203 still shows version: "0.6.1", Line 424 still says Tools (38), and Line 607 still shows CONSOLIDATION_ENABLED=false even though the runtime now defaults that flag to true. Please sweep the remaining references in the same pass so the docs do not contradict themselves.

Also applies to: 623-623, 686-686

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 136 - 148, Update all stale pre-v0.7 references in
README so the doc is consistent: replace old counts and version strings (e.g.,
change any occurrences of "38 tools", "93 endpoints", "version: \"0.6.1\"" and
headings like "Tools (38)") to the current values shown elsewhere ("41 tools",
"99 endpoints", update version to the current release used in the repo), and set
any example env/flag lines such as "CONSOLIDATION_ENABLED=false" to reflect the
runtime default (true); sweep the entire README for any remaining mentions
(including the strings "41 tools", "99 REST endpoints", "CONSOLIDATION_ENABLED"
and the version literal) and update them so all counts, the version string, and
the default flag are consistent.
🧹 Nitpick comments (1)
AGENTS.md (1)

14-35: Automate the sync rules instead of documenting more manual steps.

This adds another hard-coded stats block on top of an already long manual update checklist. Given how many places now duplicate tool/endpoint/version counts, a small CI check that derives them from the registries would be more reliable than relying on hand-edited numbers alone.

Also applies to: 106-113

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@AGENTS.md` around lines 14 - 35, The documented manual sync checklist is
error-prone; implement CI-driven validation that derives counts and versions
from canonical sources (e.g., getAllTools() in src/mcp/tools-registry.ts, the
mcp::tools::call handler in src/mcp/server.ts, endpoint registrations in
src/triggers/api.ts, the VERSION constant in src/version.ts and
supportedVersions set in src/functions/export-import.ts) and compares them to
the duplicated values in README.md, plugin/.claude-plugin/plugin.json,
src/index.ts log counts and test assertions (test/mcp-standalone.test.ts,
test/export-import.test.ts); add a small validation script or test that fails
the build when mismatches are found and wire it into CI so maintainers no longer
have to manually update all referenced locations.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/index.ts`:
- Around line 167-168: The consolidation handler is being registered
unconditionally via registerConsolidationPipelineFunction(sdk, kv, provider) so
CONSOLIDATION_ENABLED=false only stops the timer but does not prevent
mem::consolidate-pipeline from being invoked; either wrap the call to
registerConsolidationPipelineFunction(...) with an isConsolidationEnabled()
check where it’s invoked, or add an early-exit guard inside the consolidation
handler implementation (in consolidation-pipeline.ts) that checks
isConsolidationEnabled() and returns without performing work if false; update
both registration sites (including the other block around the 311–319 area) so
no callers can execute mem::consolidate-pipeline when the flag is disabled, or
alternatively rename the flag to reflect that it only disables the interval.
- Around line 303-309: The scheduled daily sweep (lessonDecayTimer invoking
sdk.trigger("mem::lesson-decay-sweep")) causes multi-week lessons to be
over-decayed because lessons.ts computes weeksSince from lastReinforcedAt ||
createdAt; update the decay job instead of just the schedule: modify the sweep
handler in src/functions/lessons.ts (the mem::lesson-decay-sweep implementation)
to track and use a lesson-level timestamp (e.g., lastDecayedAt) and compute the
decay delta as the weeks since lastDecayedAt (falling back to
lastReinforcedAt/createdAt only for initial runs), subtract only that delta from
confidence, and set lastDecayedAt to now when done; alternatively, if you prefer
not to change the data model now, change the interval in index.ts
(lessonDecayTimer) to run weekly (or only once per week) to match the current
weeksSince logic—choose one approach and update the corresponding code paths
(lessonDecayTimer/sdk.trigger and mem::lesson-decay-sweep / lastDecayedAt
handling) so decay is applied incrementally instead of reapplying full age every
run.
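
The incremental-decay variant described above might look like this; the `Lesson` shape mirrors the fields named in the prompt, while the weekly rate is an illustrative constant, not the shipped one:

```typescript
type Lesson = {
  confidence: number;
  createdAt: number;
  lastReinforcedAt?: number;
  lastDecayedAt?: number;
};

const WEEK_MS = 7 * 24 * 60 * 60 * 1000;
const DECAY_PER_WEEK = 0.05; // illustrative rate, not the shipped constant

// Subtract only the whole weeks elapsed since the last sweep, so a daily
// timer no longer reapplies the lesson's full age on every run.
function decayLesson(lesson: Lesson, now: number): Lesson {
  const since = lesson.lastDecayedAt ?? lesson.lastReinforcedAt ?? lesson.createdAt;
  const weeks = Math.floor((now - since) / WEEK_MS);
  if (weeks <= 0) return lesson; // nothing to apply yet
  const confidence = Math.max(0, lesson.confidence - weeks * DECAY_PER_WEEK);
  return { ...lesson, confidence, lastDecayedAt: now };
}
```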

---

Outside diff comments:
In `@README.md`:
- Around line 136-148: Update all stale pre-v0.7 references in README so the doc
is consistent: replace old counts and version strings (e.g., change any
occurrences of "38 tools", "93 endpoints", "version: \"0.6.1\"" and headings
like "Tools (38)") to the current values shown elsewhere ("41 tools", "99
endpoints", update version to the current release used in the repo), and set any
example env/flag lines such as "CONSOLIDATION_ENABLED=false" to reflect the
runtime default (true); sweep the entire README for any remaining mentions
(including the strings "41 tools", "99 REST endpoints", "CONSOLIDATION_ENABLED"
and the version literal) and update them so all counts, the version string, and
the default flag are consistent.

---

Nitpick comments:
In `@AGENTS.md`:
- Around line 14-35: The documented manual sync checklist is error-prone;
implement CI-driven validation that derives counts and versions from canonical
sources (e.g., getAllTools() in src/mcp/tools-registry.ts, the mcp::tools::call
handler in src/mcp/server.ts, endpoint registrations in src/triggers/api.ts, the
VERSION constant in src/version.ts and supportedVersions set in
src/functions/export-import.ts) and compares them to the duplicated values in
README.md, plugin/.claude-plugin/plugin.json, src/index.ts log counts and test
assertions (test/mcp-standalone.test.ts, test/export-import.test.ts); add a
small validation script or test that fails the build when mismatches are found
and wire it into CI so maintainers no longer have to manually update all
referenced locations.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: e7ecdae4-5241-470d-a8e5-2f5aaaab12cc

📥 Commits

Reviewing files that changed from the base of the PR and between 95b6890 and d3fa25f.

📒 Files selected for processing (5)
  • AGENTS.md
  • README.md
  • plugin/.claude-plugin/plugin.json
  • src/cli.ts
  • src/index.ts
✅ Files skipped from review due to trivial changes (1)
  • plugin/.claude-plugin/plugin.json
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/cli.ts

…sync fs, export-import lessons

Inline fixes:
- Revert CONSOLIDATION_ENABLED to opt-in (=== "true"), matching original behavior
- Add early-exit guard in consolidation-pipeline handler when disabled
- Gate session-end crystallize+consolidation calls on CONSOLIDATION_ENABLED
- Fix lesson decay over-decay bug: add lastDecayedAt to Lesson, compute
  incremental delta instead of reapplying full age every sweep run
- Add LESSON_DECAY_ENABLED flag (default true) to gate the sweep timer

Outside diff fixes:
- Add lessons to export-import (export + import + replace cleanup)
- Log warning instead of swallowing obsidian auto-export errors

Nitpick fixes:
- Switch obsidian-export to async fs/promises (mkdir, writeFile)
- Add per-item try/catch in obsidian-export, return errors array
- Replace governance_delete with smart_search in ESSENTIAL_TOOLS (non-destructive default)
- Fix CLI whichBinary for Windows (uses "where" on win32)
- Use dynamic port in CLI error message instead of hardcoded 3111
- Return 201 for newly created lessons, 200 for strengthened
- Wrap lesson audit calls in try/catch so audit failure doesn't surface
- Fix build script to not swallow tsdown failure
- Remove exact tool count from test, use >=41 + uniqueness + required names
- Add test/consistency.test.ts: validates version, tool count, README consistency
- Add isConsolidationEnabled mock to consolidation-pipeline test

579 tests passing (573 original + 6 new consistency checks)

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (3)
test/consolidation-pipeline.test.ts (1)

9-12: Mock correctly enables consolidation for existing tests.

The addition of isConsolidationEnabled: () => true correctly ensures these tests exercise the enabled path. Consider adding test coverage for the gating behavior:

  1. A test where isConsolidationEnabled returns false — expect early exit with appropriate result.
  2. A test where consolidation is disabled but force: true is passed — expect pipeline to proceed anyway.

This would validate the early-exit guard and the force bypass mentioned in the PR objectives.


🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/consolidation-pipeline.test.ts` around lines 9 - 12, Add two unit tests
to validate the consolidation gate: mock isConsolidationEnabled to return false
and assert the pipeline returns the early-exit result (no work done), and mock
isConsolidationEnabled to return false but invoke the pipeline with force: true
and assert it proceeds as normal; specifically update the existing test suite
(consolidation-pipeline.test.ts) to add one test that stubs
isConsolidationEnabled() => false and expects the early-exit behavior, and a
second test that stubs isConsolidationEnabled() => false while calling the
pipeline function with force: true and expects the pipeline to execute its
normal consolidation flow (use the same mock helpers around
getConsolidationDecayDays and the pipeline entrypoint you already test).
src/functions/lessons.ts (1)

116-134: Consider edge case: single-character search terms are dropped.

Line 119 filters terms where t.length > 1, meaning single-character queries like "C" or "R" won't match anything. This may be intentional to reduce noise, but worth documenting or adjusting if single-char project codes or tags are expected.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/functions/lessons.ts` around lines 116 - 134, The terms filter currently
drops single-character tokens (terms.filter((t) => t.length > 1)) which prevents
searches like "C" or "R" from matching; update the logic in the scored
computation (the map where text, terms, matchCount, relevance are computed) to
either allow single-character terms (use t.length > 0 or >=1) or make the
minimum length configurable (e.g., MIN_TERM_LENGTH) and apply that instead, then
adjust relevance calculation or add a comment documenting the chosen behavior so
single-char project codes/tags are handled consistently.
test/consistency.test.ts (1)

32-36: README tool count assertions may be brittle.

These tests expect specific text patterns in README.md. If the README phrasing changes (e.g., "49 MCP tools available" vs "49 MCP tools"), these tests will fail even though the tool count is correct.

Consider using regex patterns for more flexibility, or document that README must contain these exact phrases.

♻️ Optional: Use regex for more flexible matching
   it("README mentions correct MCP tool count", () => {
     const readme = readText("README.md");
-    expect(readme).toContain(`${toolCount} MCP tools`);
-    expect(readme).toContain(`${toolCount} tools, 6 resources`);
+    expect(readme).toMatch(new RegExp(`\\b${toolCount}\\s+(?:MCP\\s+)?tools\\b`));
+    expect(readme).toMatch(new RegExp(`\\b${toolCount}\\s+tools,\\s*6\\s+resources\\b`));
   });
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/consistency.test.ts` around lines 32 - 36, The README assertions in the
test ("it" block using readme and toolCount) are brittle because they rely on
exact substring matches; change the two expects that call
expect(readme).toContain(...) to use flexible regex matching instead (e.g.,
replace with expect(readme).toMatch(...) using a RegExp that allows optional
words/punctuation around `${toolCount}` and "MCP tools" and similarly for the
`${toolCount} tools, 6 resources` phrase) so the test accepts small phrasing
variations while still enforcing the correct counts.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/functions/lessons.ts`:
- Around line 138-141: The call to recordAudit for "lesson_recall"
(mem::lesson-recall) is unguarded and can cause the whole lesson recall to fail
if auditing errors occur; wrap the await recordAudit(kv, "lesson_recall",
"mem::lesson-recall", ..., { query: data.query, resultCount: scored.length }) in
a try-catch inside the same function (e.g., the lesson recall handler) and on
error only log or process the audit error (do not rethrow) so the search result
is returned even if auditing fails.
- Around line 197-199: The unguarded call to recordAudit("lesson_strengthen",
"mem::lesson-strengthen", [lesson.id]) can bubble up and cause the whole
lesson-strengthen operation to appear to fail even after kv.set succeeded; wrap
the recordAudit(...) invocation in a try-catch inside the same function handling
lesson strengthening (surround the recordAudit call, leaving kv.set untouched),
and on catch log the error (including the lesson.id and context) but do not
rethrow so the reinforcement succeeds even if auditing fails.

In `@src/index.ts`:
- Around line 277-279: Update the hard-coded startup log in src/index.ts (the
console.log that prints "[agentmemory] Endpoints: ...") to show the correct
counts: change "99 REST" to "100 REST" and either compute and insert the actual
MCP prompt count or remove the "3 MCP prompts" segment if no prompts exist;
ensure the string reads "100 REST + 41 MCP tools + 6 MCP resources" plus the
verified MCP prompts portion. Locate the log emission (console.log near the
top-level bootstrap in src/index.ts) and replace the literal message with the
corrected counts, and add (or create) a small runtime consistency check that
computes registered REST endpoints, MCP tools, MCP resources, and MCP prompts at
startup and asserts/logs they match the printed numbers to prevent future drift.

---

Nitpick comments:
In `@src/functions/lessons.ts`:
- Around line 116-134: The terms filter currently drops single-character tokens
(terms.filter((t) => t.length > 1)) which prevents searches like "C" or "R" from
matching; update the logic in the scored computation (the map where text, terms,
matchCount, relevance are computed) to either allow single-character terms (use
t.length > 0 or >=1) or make the minimum length configurable (e.g.,
MIN_TERM_LENGTH) and apply that instead, then adjust relevance calculation or
add a comment documenting the chosen behavior so single-char project codes/tags
are handled consistently.

In `@test/consistency.test.ts`:
- Around line 32-36: The README assertions in the test ("it" block using readme
and toolCount) are brittle because they rely on exact substring matches; change
the two expects that call expect(readme).toContain(...) to use flexible regex
matching instead (e.g., replace with expect(readme).toMatch(...) using a RegExp
that allows optional words/punctuation around `${toolCount}` and "MCP tools" and
similarly for the `${toolCount} tools, 6 resources` phrase) so the test accepts
small phrasing variations while still enforcing the correct counts.

In `@test/consolidation-pipeline.test.ts`:
- Around line 9-12: Add two unit tests to validate the consolidation gate: mock
isConsolidationEnabled to return false and assert the pipeline returns the
early-exit result (no work done), and mock isConsolidationEnabled to return
false but invoke the pipeline with force: true and assert it proceeds as normal;
specifically update the existing test suite (consolidation-pipeline.test.ts) to
add one test that stubs isConsolidationEnabled() => false and expects the
early-exit behavior, and a second test that stubs isConsolidationEnabled() =>
false while calling the pipeline function with force: true and expects the
pipeline to execute its normal consolidation flow (use the same mock helpers
around getConsolidationDecayDays and the pipeline entrypoint you already test).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: bc5ae2c2-1838-4d1a-94a9-e352fb6cde9c

📥 Commits

Reviewing files that changed from the base of the PR and between d3fa25f and 9785572.

📒 Files selected for processing (15)
  • package.json
  • plugin/scripts/session-end.mjs
  • src/cli.ts
  • src/functions/consolidation-pipeline.ts
  • src/functions/export-import.ts
  • src/functions/lessons.ts
  • src/functions/obsidian-export.ts
  • src/hooks/session-end.ts
  • src/index.ts
  • src/mcp/tools-registry.ts
  • src/triggers/api.ts
  • src/types.ts
  • test/consistency.test.ts
  • test/consolidation-pipeline.test.ts
  • test/mcp-standalone.test.ts
✅ Files skipped from review due to trivial changes (2)
  • package.json
  • src/cli.ts
🚧 Files skipped from review as they are similar to previous changes (8)
  • test/mcp-standalone.test.ts
  • plugin/scripts/session-end.mjs
  • src/functions/consolidation-pipeline.ts
  • src/mcp/tools-registry.ts
  • src/functions/obsidian-export.ts
  • src/types.ts
  • src/hooks/session-end.ts
  • src/triggers/api.ts

Comment thread src/index.ts
Comment on lines 277 to 279
console.log(
-  `[agentmemory] Endpoints: 93 REST + 46 MCP tools + 6 MCP resources + 3 MCP prompts`,
+  `[agentmemory] Endpoints: 99 REST + 41 MCP tools + 6 MCP resources + 3 MCP prompts`,
);


⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Count REST endpoints registered in api.ts
echo "=== Checking REST endpoint registrations ==="
rg -c 'sdk\.registerTrigger' src/triggers/api.ts || echo "0"

# Count MCP tools
echo "=== Checking MCP tool count ==="
rg -oP 'name:\s*"memory_\w+"' src/mcp/tools-registry.ts | wc -l

# Look for the getVisibleTools and getAllTools functions
echo "=== Tool visibility functions ==="
ast-grep --pattern 'export function getAllTools() { $$$ }'

Repository: rohitg00/agentmemory

Length of output: 181


🏁 Script executed:

#!/bin/bash

echo "=== Verifying REST endpoint count ==="
# More detailed check for REST endpoints
rg 'sdk\.registerTrigger' src/triggers/api.ts | wc -l

echo ""
echo "=== Checking MCP resources ==="
# Look for resource registration patterns
rg -i 'resource' src/mcp/ --type ts | grep -i 'register\|add' | head -20

echo ""
echo "=== Checking MCP prompts ==="
# Look for prompt registration patterns
rg -i 'prompt' src/mcp/ --type ts | grep -i 'register\|add' | head -20

echo ""
echo "=== File structure in src/mcp ==="
find src/mcp -type f -name "*.ts" | sort

echo ""
echo "=== Looking for resource definitions ==="
rg 'resource' src/mcp/ --type ts -A 2 -B 2 | head -50

Repository: rohitg00/agentmemory

Length of output: 1852


🏁 Script executed:

#!/bin/bash

echo "=== Finding MCP_RESOURCES definition ==="
rg 'MCP_RESOURCES' src/ --type ts -B 5 -A 50 | head -100

echo ""
echo "=== Finding MCP_PROMPTS or similar ==="
rg 'MCP_PROMPTS\|prompt\|Prompt' src/mcp/ --type ts -B 2 -A 5 | head -80

echo ""
echo "=== Checking what's exported from mcp/server.ts ==="
rg 'export' src/mcp/server.ts | head -20

Repository: rohitg00/agentmemory

Length of output: 5557


🏁 Script executed:

```bash
#!/bin/bash

echo "=== Searching for prompts in entire codebase ==="
rg -i 'prompt' src/ --type ts | grep -v node_modules | head -30

echo ""
echo "=== Looking at the exact log message in index.ts ==="
rg -B 5 -A 5 'Endpoints:' src/index.ts

echo ""
echo "=== Counting MCP_RESOURCES items exactly ==="
sed -n '/const MCP_RESOURCES = \[/,/\];/p' src/mcp/server.ts | grep -c 'uri:'
```

Repository: rohitg00/agentmemory

Length of output: 3127


Update log message with correct endpoint counts.

The REST endpoint count is 100 (not 99 as claimed). MCP tools (41) and MCP resources (6) counts are accurate, but no evidence of "3 MCP prompts" was found in the codebase—this count should be verified or removed. Update the log message at lines 277-279 to reflect the correct counts:

```
99 REST + 41 MCP tools + 6 MCP resources + 3 MCP prompts
```

should be:

```
100 REST + 41 MCP tools + 6 MCP resources + [X MCP prompts]
```

Consider adding a runtime consistency test to validate these counts match actual registered endpoints to prevent drift.
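A check of that sort could be sketched as below; the `Registry` shape and `endpointBanner` name are hypothetical stand-ins for whatever the project actually exports, not its real API:

```typescript
// Hypothetical sketch: derive the startup banner from the registries
// themselves so the printed counts can never drift from what is registered.
// The registry descriptors here are placeholders, not agentmemory's exports.
interface Registry {
  label: string;
  size: number;
}

export function endpointBanner(registries: Registry[]): string {
  return registries.map((r) => `${r.size} ${r.label}`).join(" + ");
}

// At startup, log the computed string instead of a hard-coded literal:
//   console.log(`[agentmemory] Endpoints: ${endpointBanner([...])}`);
```

Printing a computed string like this makes a separate consistency test largely unnecessary, since the log can only ever show the true counts.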

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/index.ts` around lines 277 - 279, Update the hard-coded startup log in
src/index.ts (the console.log that prints "[agentmemory] Endpoints: ...") to
show the correct counts: change "99 REST" to "100 REST" and either compute and
insert the actual MCP prompt count or remove the "3 MCP prompts" segment if no
prompts exist; ensure the string reads "100 REST + 41 MCP tools + 6 MCP
resources" plus the verified MCP prompts portion. Locate the log emission
(console.log near the top-level bootstrap in src/index.ts) and replace the
literal message with the corrected counts, and add (or create) a small runtime
consistency check that computes registered REST endpoints, MCP tools, MCP
resources, and MCP prompts at startup and asserts/logs they match the printed
numbers to prevent future drift.

rohitg00 added 3 commits April 4, 2026 17:09
…m publish

Review fixes:
- Wrap lesson_recall and lesson_strengthen audit calls in try/catch
- Fix REST endpoint count: 99 → 100 (verified via grep) across index.ts,
  README.md, and AGENTS.md
- Use regex in consistency test for README assertions (flexible phrasing)
- Add consolidation gate tests: disabled returns early, force=true bypasses

CI/CD:
- Add .github/workflows/ci.yml — test on Node 18/20/22
- Add .github/workflows/publish.yml — auto-publish to npm on GitHub release
  (uses NPM_TOKEN secret + provenance)

Nitpick:
- Single-char term filter (t.length > 1) kept intentionally — prevents noise
  from single-letter matches; documented in AGENTS.md if needed

581 tests passing
@anthropic-ai/claude-agent-sdk@0.2.56 requires zod@^4.0.0 as a peer
dependency but the project uses zod@^3.23.0. The lock file resolves
this locally but npm ci is strict about peer deps in CI.
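One way to make CI match that local behavior without repeating the flag on every invocation (a sketch, and whether committing this is desirable is a judgment call) is a repo-level `.npmrc`:

```shell
# Persist the flag so every `npm ci` / `npm install` in this repo tolerates
# the zod peer-range mismatch, matching local installs with the lock file.
printf 'legacy-peer-deps=true\n' >> .npmrc
grep 'legacy-peer-deps' .npmrc
```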
tsdown/rolldown uses node:util.styleText which is only available in
Node 20.12+. Updated engines field to >=20.0.0 to match.

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
README.md (2)

424-424: ⚠️ Potential issue | 🟡 Minor

MCP Server section header has outdated tool count.

The "Tools (38)" header should be updated to "Tools (41)" to match the rest of the document.

📝 Proposed fix

```diff
-### Tools (38)
+### Tools (41)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` at line 424, Update the MCP Server section header text "### Tools
(38)" to "### Tools (41)" so the header tool count matches the rest of the
document and the rest of the README; locate the header string "### Tools (38)"
and replace the number 38 with 41.

165-166: ⚠️ Potential issue | 🟡 Minor

Inconsistent tool and endpoint counts in "Choosing an integration method" table.

Lines 165-166 still reference old counts (38 tools, 93 endpoints) while the rest of the README has been updated to 41 tools and 100 endpoints.

📝 Proposed fix

```diff
-| Using Cursor, Windsurf, or any MCP client | MCP server (38 tools + 6 resources + 3 prompts) |
-| Building your own agent framework | REST API (93 endpoints) |
+| Using Cursor, Windsurf, or any MCP client | MCP server (41 tools + 6 resources + 3 prompts) |
+| Building your own agent framework | REST API (100 endpoints) |
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@README.md` around lines 165 - 166, Update the counts in the "Choosing an
integration method" table row that currently reads "Using Cursor, Windsurf, or
any MCP client | MCP server (38 tools + 6 resources + 3 prompts) | Building your
own agent framework | REST API (93 endpoints)" to reflect the current totals:
change "38 tools" to "41 tools" and change "93 endpoints" to "100 endpoints" so
the table matches the rest of the README.
🧹 Nitpick comments (1)
.github/workflows/ci.yml (1)

1-22: LGTM — Standard CI workflow with good Node version coverage.

The workflow correctly tests against all supported Node versions (18, 20, 22) matching the engines field in package.json.

Consider adding npm caching for faster CI runs:

♻️ Optional: Add npm caching

```diff
       - uses: actions/setup-node@v4
         with:
           node-version: ${{ matrix.node-version }}
+          cache: 'npm'
       - run: npm ci --legacy-peer-deps
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/ci.yml around lines 1 - 22, Add npm caching to the CI
workflow to speed up runs by configuring either actions/cache@v4 or the
setup-node cache option for the test job: update the "test" job (refer to the
"test" job name and the steps that run "actions/checkout@v4",
"actions/setup-node@v4", and "npm ci --legacy-peer-deps") to cache the
node_modules and npm cache (use a key based on matrix.node-version and
package-lock.json) and restore it before running "npm ci", ensuring the cache is
saved after installation for subsequent jobs.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@test/consistency.test.ts`:
- Line 14: Replace use of import.meta.dirname (which requires Node 20.11+) with
a Node-compatible approach: compute ROOT by converting import.meta.url to a file
path via fileURLToPath and then using path.join/dirname as needed. Update the
code that defines the constant ROOT (currently using import.meta.dirname) to
instead call fileURLToPath(import.meta.url) and derive the parent directory so
the test works on Node >=20.0.0 while preserving the same resolved path
semantics.

---

Outside diff comments:
In `@README.md`:
- Line 424: Update the MCP Server section header text "### Tools (38)" to "###
Tools (41)" so the header tool count matches the rest of the document and the
rest of the README; locate the header string "### Tools (38)" and replace the
number 38 with 41.
- Around line 165-166: Update the counts in the "Choosing an integration method"
table row that currently reads "Using Cursor, Windsurf, or any MCP client | MCP
server (38 tools + 6 resources + 3 prompts) | Building your own agent framework
| REST API (93 endpoints)" to reflect the current totals: change "38 tools" to
"41 tools" and change "93 endpoints" to "100 endpoints" so the table matches the
rest of the README.

---

Nitpick comments:
In @.github/workflows/ci.yml:
- Around line 1-22: Add npm caching to the CI workflow to speed up runs by
configuring either actions/cache@v4 or the setup-node cache option for the test
job: update the "test" job (refer to the "test" job name and the steps that run
"actions/checkout@v4", "actions/setup-node@v4", and "npm ci --legacy-peer-deps")
to cache the node_modules and npm cache (use a key based on matrix.node-version
and package-lock.json) and restore it before running "npm ci", ensuring the
cache is saved after installation for subsequent jobs.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 270129a2-57b4-4ce4-86d3-85c84c2bbbd3

📥 Commits

Reviewing files that changed from the base of the PR and between 9785572 and 70dea4b.

📒 Files selected for processing (8)
  • .github/workflows/ci.yml
  • .github/workflows/publish.yml
  • AGENTS.md
  • README.md
  • src/functions/lessons.ts
  • src/index.ts
  • test/consistency.test.ts
  • test/consolidation-pipeline.test.ts
✅ Files skipped from review due to trivial changes (1)
  • .github/workflows/publish.yml
🚧 Files skipped from review as they are similar to previous changes (1)
  • test/consolidation-pipeline.test.ts

Comment thread: test/consistency.test.ts

```typescript
import { getAllTools } from "../src/mcp/tools-registry.js";
import { VERSION } from "../src/version.js";

const ROOT = join(import.meta.dirname, "..");
```

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Verify Node version compatibility for import.meta.dirname
echo "Checking Node.js documentation for import.meta.dirname availability..."
# Check if any other files use import.meta.dirname
rg -n 'import\.meta\.dirname' --type ts
```

Repository: rohitg00/agentmemory

Length of output: 208


🏁 Script executed:

```bash
find . -type f -name "*.yml" -o -name "*.yaml" | grep -E "(workflow|ci|github)" | head -20
```

Repository: rohitg00/agentmemory

Length of output: 122


🏁 Script executed:

```bash
# Check for CI configuration
find . -type f \( -name "*.yml" -o -name "*.yaml" \) | xargs grep -l "node" 2>/dev/null | head -10
```

Repository: rohitg00/agentmemory

Length of output: 140


🏁 Script executed:

```bash
# Check if there's a package.json with Node version specification
cat package.json | grep -A 5 -B 5 "engines"
```

Repository: rohitg00/agentmemory

Length of output: 210


🏁 Script executed:

```bash
cat .github/workflows/ci.yml
```

Repository: rohitg00/agentmemory

Length of output: 478


import.meta.dirname requires Node 20.11.0+ but package.json declares compatibility with Node 20.0.0+.

import.meta.dirname was introduced in Node 20.11.0 and Node 21.2.0. However, the package.json specifies "engines": { "node": ">=20.0.0" }, which includes Node 20.0.0 through 20.10.x where this API is not available. This creates a version compatibility mismatch.

🛠️ Proposed fix using fileURLToPath

```diff
 import { describe, it, expect, vi } from "vitest";
 import { readFileSync } from "node:fs";
-import { join } from "node:path";
+import { join, dirname } from "node:path";
+import { fileURLToPath } from "node:url";

 vi.mock("iii-sdk", () => ({
   getContext: () => ({
     logger: { info: vi.fn(), error: vi.fn(), warn: vi.fn() },
   }),
 }));

 import { getAllTools } from "../src/mcp/tools-registry.js";
 import { VERSION } from "../src/version.js";

-const ROOT = join(import.meta.dirname, "..");
+const ROOT = join(dirname(fileURLToPath(import.meta.url)), "..");
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change (replaces `const ROOT = join(import.meta.dirname, "..");`):

```typescript
import { describe, it, expect, vi } from "vitest";
import { readFileSync } from "node:fs";
import { join, dirname } from "node:path";
import { fileURLToPath } from "node:url";

vi.mock("iii-sdk", () => ({
  getContext: () => ({
    logger: { info: vi.fn(), error: vi.fn(), warn: vi.fn() },
  }),
}));

import { getAllTools } from "../src/mcp/tools-registry.js";
import { VERSION } from "../src/version.js";

const ROOT = join(dirname(fileURLToPath(import.meta.url)), "..");
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/consistency.test.ts` at line 14, Replace use of import.meta.dirname
(which requires Node 20.11+) with a Node-compatible approach: compute ROOT by
converting import.meta.url to a file path via fileURLToPath and then using
path.join/dirname as needed. Update the code that defines the constant ROOT
(currently using import.meta.dirname) to instead call
fileURLToPath(import.meta.url) and derive the parent directory so the test works
on Node >=20.0.0 while preserving the same resolved path semantics.

tsdown errors on CI with "Consider adding inlineOnly option" when
dependencies are bundled. Setting inlineOnly: false suppresses this.
Also updated target from node18 to node20 to match engines field.
