refactor: streamline tool image management and enhance catalog structure #32
Conversation
- Removed the `ghcr-verify.yml` workflow and associated Dockerfile as part of the cleanup.
- Updated `detect-matrix.sh` to compare tool blobs instead of versions for more accurate change detection.
- Enhanced `read-tool-manifest.sh` to support both npm and PyPI package definitions, ensuring tools can specify dependencies correctly.
- Introduced new Dockerfiles for various tools, allowing for better integration with upstream packages from npm and PyPI.
- Updated `mcp-tools.json` and `tools.json` to reflect new tools and their configurations, including version bumps and additional metadata.
- Added a script to build local images for tools, improving the development workflow.
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
📝 Walkthrough
Deletes GHCR verification artifacts; expands the tools catalog with five new tools and Dockerfiles; adds npm+PyPI upstream support; implements local catalog loading and private-folder host mounts (frontend → API → service → MCP sync); and introduces a memory session subsystem with provider-backed persistence.
Sequence Diagram(s)
```mermaid
sequenceDiagram
participant FE as Frontend
participant API as HTTP Server
participant TE as ToolEngine Service
participant REG as MCP Registry
participant FS as Host Filesystem
FE->>API: PUT /v1/toolengine/private-folder {tool_id, path}
API->>FS: validate path & create dir (tokio::fs::create_dir_all)
API->>TE: set ServerEntry::Stdio.private_host_path in MCP config
TE->>REG: sync_workspace_mounted_tools_for_catalog(mcp_config_path, bot_id)
REG-->>TE: sync result
TE->>API: persist config, spawn rebuild task
API-->>FE: 200 { ok: true }
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 3 passed
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
🧹 Nitpick comments (3)
tools/update-upstream.sh (1)
40-85: Consider passing `slug` as a parameter to `bump_tool()` for clarity.
The function uses `$slug` (line 49) from the outer loop scope implicitly. While this works in bash, passing it as an explicit parameter would improve readability and make the function more self-contained.
♻️ Suggested refactor
```diff
 bump_tool() {
   local idx="$1"
   local kind="$2"
   local new_upstream_ver="$3"
+  local slug="$4"
   local current_tool
   ...
 }
 ...
-bump_tool "$i" "npm" "$latest_npm"
+bump_tool "$i" "npm" "$latest_npm" "$slug"
 ...
-bump_tool "$i" "pypi" "$latest_pypi"
+bump_tool "$i" "pypi" "$latest_pypi" "$slug"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/update-upstream.sh` around lines 40 - 85, The bump_tool() function currently relies on the outer-scope variable $slug (used in the echo) which makes it implicit and harder to reason about; change bump_tool() to accept slug as an explicit parameter (e.g., add a slug parameter to the signature and set local slug="$4" or similar), update the internal echo and any other uses to reference the local slug variable, and update every call site to pass the slug argument when invoking bump_tool so the function becomes self-contained and clear; locate function bump_tool and all its callers to apply this change.

tools/fetch/Dockerfile (1)
1-11: Dockerfile structure looks good, but local builds may use stale defaults.
The Dockerfile correctly:
- Uses a slim base image
- Creates a non-root user with fixed UID
- Disables pip caching
- Sets `PYTHONUNBUFFERED=1` for proper log streaming

However, the comment on line 1 notes CI passes `--build-arg` from `mcp-tools.json`, but `tools/build-local-images.sh` (context snippet 1, lines 40-52) does not extract or pass `upstream_mcp_pypi.package`/`upstream_mcp_pypi.version`. Local builds will use the hardcoded defaults here, which may drift from the catalog.

This is not a blocker since CI builds correctly, but local dev builds may produce images with different package versions than intended.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/fetch/Dockerfile` around lines 1 - 11, Local builds are using the hardcoded ARG defaults in the Dockerfile (UPSTREAM_MCP_PYPI_PACKAGE / UPSTREAM_MCP_PYPI_VERSION) and so can drift from the catalog; update the local build script that invokes docker build to parse upstream_mcp_pypi.package and upstream_mcp_pypi.version from mcp-tools.json and pass them via --build-arg UPSTREAM_MCP_PYPI_PACKAGE=... and --build-arg UPSTREAM_MCP_PYPI_VERSION=... (falling back to the existing defaults if keys are absent) so local images match CI's resolved package/version.

tools/build-local-images.sh (1)
40-52: Local builds don't pass upstream package versions as build args.
The script extracts `slug`, `image`, and `current` but ignores `upstream_mcp_npm` and `upstream_mcp_pypi` fields. The Dockerfiles have hardcoded `ARG` defaults, so builds succeed but may use different package versions than specified in `mcp-tools.json`.
For example, if `tools/mcp-tools.json` specifies `"version": "2025.4.8"` but the Dockerfile has `ARG UPSTREAM_MCP_PYPI_VERSION=2025.4.7`, the local build uses `2025.4.7`.
This creates a divergence between local dev builds and CI builds (which use `read-tool-manifest.sh` to pass correct `--build-arg` values).
🔧 Proposed fix to pass build args
```diff
 # Space-separated (avoid `@tsv` + split quirks in some jq versions).
-while read -r slug image current; do
+while read -r slug image current npm_pkg npm_ver pypi_pkg pypi_ver; do
   [[ -z "$slug" ]] && continue
   ctx="${ROOT}/tools/${slug}"
   df="${ctx}/Dockerfile"
   if [[ ! -f "$df" ]]; then
     echo "skip $slug: no $df" >&2
     continue
   fi
   tag="${image}:${current}"
+  BUILD_ARGS=()
+  if [[ -n "$npm_pkg" && -n "$npm_ver" ]]; then
+    BUILD_ARGS+=(--build-arg "UPSTREAM_MCP_NPM_PACKAGE=$npm_pkg")
+    BUILD_ARGS+=(--build-arg "UPSTREAM_MCP_NPM_VERSION=$npm_ver")
+  fi
+  if [[ -n "$pypi_pkg" && -n "$pypi_ver" ]]; then
+    BUILD_ARGS+=(--build-arg "UPSTREAM_MCP_PYPI_PACKAGE=$pypi_pkg")
+    BUILD_ARGS+=(--build-arg "UPSTREAM_MCP_PYPI_VERSION=$pypi_ver")
+  fi
   echo "=== build $slug -> $tag ==="
-  "${RUNTIME}" build "${PLATFORM_ARGS[@]}" -f "$df" -t "$tag" "$ctx"
-done < <(jq -r '.tools[] | "\(.id | split("/")[1]) \(.image) \(.current)"' "$REG")
+  "${RUNTIME}" build "${PLATFORM_ARGS[@]}" "${BUILD_ARGS[@]}" -f "$df" -t "$tag" "$ctx"
+done < <(jq -r '.tools[] | "\(.id | split("/")[1]) \(.image) \(.current) \(.upstream_mcp_npm.package // "") \(.upstream_mcp_npm.version // "") \(.upstream_mcp_pypi.package // "") \(.upstream_mcp_pypi.version // "")"' "$REG")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tools/build-local-images.sh` around lines 40 - 52, The loop currently reads only slug, image, and current and invokes "${RUNTIME}" build without passing upstream package versions; update the jq extraction to also output upstream_mcp_npm and upstream_mcp_pypi (e.g. add "\(.upstream_mcp_npm) \(.upstream_mcp_pypi)") and read them into variables (e.g. npm_ver, pypi_ver), then append conditional --build-arg flags to the build invocation like --build-arg UPSTREAM_MCP_NPM_VERSION="$npm_ver" and --build-arg UPSTREAM_MCP_PYPI_VERSION="$pypi_ver" (only include each flag when the corresponding variable is non-empty) so Docker builds use the versions from the manifest instead of Dockerfile defaults.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@tools/build-local-images.sh`:
- Around line 40-52: The loop currently reads only slug, image, and current and
invokes "${RUNTIME}" build without passing upstream package versions; update the
jq extraction to also output upstream_mcp_npm and upstream_mcp_pypi (e.g. add
"\(.upstream_mcp_npm) \(.upstream_mcp_pypi)") and read them into variables (e.g.
npm_ver, pypi_ver), then append conditional --build-arg flags to the build
invocation like --build-arg UPSTREAM_MCP_NPM_VERSION="$npm_ver" and --build-arg
UPSTREAM_MCP_PYPI_VERSION="$pypi_ver" (only include each flag when the
corresponding variable is non-empty) so Docker builds use the versions from the
manifest instead of Dockerfile defaults.
In `@tools/fetch/Dockerfile`:
- Around line 1-11: Local builds are using the hardcoded ARG defaults in the
Dockerfile (UPSTREAM_MCP_PYPI_PACKAGE / UPSTREAM_MCP_PYPI_VERSION) and so can
drift from the catalog; update the local build script that invokes docker build
to parse upstream_mcp_pypi.package and upstream_mcp_pypi.version from
mcp-tools.json and pass them via --build-arg UPSTREAM_MCP_PYPI_PACKAGE=... and
--build-arg UPSTREAM_MCP_PYPI_VERSION=... (falling back to the existing defaults
if keys are absent) so local images match CI's resolved package/version.
In `@tools/update-upstream.sh`:
- Around line 40-85: The bump_tool() function currently relies on the
outer-scope variable $slug (used in the echo) which makes it implicit and harder
to reason about; change bump_tool() to accept slug as an explicit parameter
(e.g., add a slug parameter to the signature and set local slug="$4" or
similar), update the internal echo and any other uses to reference the local
slug variable, and update every call site to pass the slug argument when
invoking bump_tool so the function becomes self-contained and clear; locate
function bump_tool and all its callers to apply this change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 6a9981a3-89fe-4162-a501-94f591ba8404
📒 Files selected for processing (18)
- .github/ghcr-verify-context/Dockerfile
- .github/scripts/tools-publish/detect-matrix.sh
- .github/scripts/tools-publish/read-tool-manifest.sh
- .github/scripts/tools-publish/smoke-test-mcp.sh
- .github/workflows/ghcr-verify.yml
- .github/workflows/tools-publish.yml
- src-tauri/src/modules/tool_engine/service.rs
- src-tauri/src/modules/tool_engine/tools.json
- src-tauri/src/modules/tool_engine/types.rs
- src/shared/ui/SpecMockup.tsx
- tools/build-local-images.sh
- tools/fetch/Dockerfile
- tools/git/Dockerfile
- tools/mcp-tools.json
- tools/memory/Dockerfile
- tools/sequential-thinking/Dockerfile
- tools/time/Dockerfile
- tools/update-upstream.sh
💤 Files with no reviewable changes (2)
- .github/ghcr-verify-context/Dockerfile
- .github/workflows/ghcr-verify.yml
- Added `private_folder` configuration to tools, allowing them to persist state in a designated host directory.
- Introduced new API endpoint to set the host path for tools that declare a private folder.
- Enhanced the MCP server entry to include `private_host_path` for tools utilizing private folders.
- Updated the UI components to manage private folder settings, including folder selection and error handling.
- Revised tool catalog to reflect changes in memory server description and private folder configuration.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
doc/tool-engine/manual-publish.md (1)
86-95: ⚠️ Potential issue | 🟡 Minor: Doc still describes the update flow as npm-only.
The "Files" section and PR summary indicate `tools/update-upstream.sh` now bumps both npm and PyPI versions, but this section header (Line 86) and the body text (Line 95) still describe only npm. Worth updating so users don't think PyPI tools are unsupported by the script.
✏️ Suggested wording
```diff
-## Updating upstream npm versions
+## Updating upstream npm / PyPI versions
@@
-This checks the npm registry for newer versions, bumps `mcp-tools.json`, and prints a summary. Commit, push, and CI builds only the affected tools.
+This checks the npm and PyPI registries for newer versions, bumps `mcp-tools.json`, and prints a summary. Commit, push, and CI builds only the affected tools.
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@doc/tool-engine/manual-publish.md` around lines 86 - 95, The doc header and body still say the update flow is npm-only; update the "## Updating upstream npm versions" section and its descriptive text to reflect that ./tools/update-upstream.sh now checks and bumps both npm and PyPI package versions (e.g., change the header to "Updating upstream npm and PyPI versions" and update the sentence that currently says "This checks the npm registry..." to mention both registries and that mcp-tools.json is bumped for either ecosystem), and ensure example usage remains correct (./tools/update-upstream.sh and ./tools/update-upstream.sh file-manager) and the PR summary/Files guidance mentions that the script may modify packages for npm or PyPI.
🧹 Nitpick comments (2)
src/modules/toolengine/index.ts (1)
133-162: Duplicated error-parsing block across three API helpers.
Lines 147–156 are a verbatim copy of the same block in `installTool` (Lines 116–125) and `uninstallTool` (Lines 177–186). Consider extracting a small helper (e.g. `parseApiError(resp)`) to keep these three in sync as the error contract evolves. Not blocking.
♻️ Sketch
```ts
async function parseApiError(resp: Response): Promise<string> {
  const raw = await resp.text();
  try {
    const body = JSON.parse(raw) as { error?: string };
    // Parenthesized: TS rejects mixing `??` and `||` without parentheses.
    return body.error ?? (raw.trim() || `Request failed (HTTP ${resp.status})`);
  } catch {
    return raw.trim() || `Request failed (HTTP ${resp.status})`;
  }
}
```
Then each helper becomes
`return { ok: false, error: await parseApiError(resp) };`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/modules/toolengine/index.ts` around lines 133 - 162, Extract the duplicated response-parsing logic used in putToolPrivateFolder, installTool and uninstallTool into a small async helper (e.g., parseApiError) that accepts the Response and returns a string error; update each helper to call `await parseApiError(resp)` and return `{ ok: false, error }`. Ensure parseApiError implements the same behavior: read resp.text(), try JSON.parse to get body.error, fall back to raw.trim(), and finally include a default `Request failed (HTTP ${resp.status})` when empty. Keep the helper name and usage consistent so you can replace the three duplicated blocks with `return { ok: false, error: await parseApiError(resp) };`.

src-tauri/src/modules/tool_engine/service.rs (1)
49-99: Optional: cache the local catalog to avoid re-reading on every request.
`try_load_local_tools_catalog` walks up to 8 parent directories and parses JSON on every `load_catalog()` call, which fires from `/v1/toolengine/catalog`, `/v1/toolengine/install`, `/v1/toolengine/private-folder`, etc. For a dev workflow this is fine; for repeated polling it's wasted I/O/CPU. A tiny mtime-keyed cache inside `try_load_local_tools_catalog` (or a `OnceCell` refreshed on SIGHUP/interval) would be a nice-to-have.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/modules/tool_engine/service.rs` around lines 49 - 99, try_load_local_tools_catalog currently re-reads and reparses files on every call; add a small mtime-keyed cache so repeated calls skip I/O when nothing changed. Implement a static cache (e.g., OnceCell or a Mutex<HashMap<PathBuf, (SystemTime, ToolCatalog)>>) inside try_load_local_tools_catalog that for each candidate path checks std::fs::metadata(&p).modified() against the cached timestamp and only calls std::fs::read_to_string + parse_catalog when the mtime differs (update the cached entry on change); keep the existing return behavior so load_catalog need not change. Optionally expose a simple refresh trigger (SIGHUP/interval) if you prefer invalidation instead of mtimes.
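A minimal sketch of that mtime-keyed cache, assuming a `ToolCatalog` type and `parse_catalog` helper like the ones the prompt references; the real names and signatures in `service.rs` may differ:

```rust
use std::collections::HashMap;
use std::path::{Path, PathBuf};
use std::sync::{Mutex, OnceLock};
use std::time::SystemTime;

// Stand-in for the real parsed-catalog type in service.rs.
#[derive(Clone)]
struct ToolCatalog;

// Stand-in for the existing JSON-parsing helper.
fn parse_catalog(raw: &str) -> Option<ToolCatalog> {
    let _ = raw;
    Some(ToolCatalog)
}

// Process-wide cache keyed by path; each entry remembers the mtime it was parsed at.
fn catalog_cache() -> &'static Mutex<HashMap<PathBuf, (SystemTime, ToolCatalog)>> {
    static CACHE: OnceLock<Mutex<HashMap<PathBuf, (SystemTime, ToolCatalog)>>> = OnceLock::new();
    CACHE.get_or_init(|| Mutex::new(HashMap::new()))
}

fn load_catalog_cached(path: &Path) -> Option<ToolCatalog> {
    let mtime = std::fs::metadata(path).ok()?.modified().ok()?;
    let mut cache = catalog_cache().lock().ok()?;
    if let Some((cached_mtime, catalog)) = cache.get(path) {
        if *cached_mtime == mtime {
            return Some(catalog.clone()); // unchanged on disk: skip the read + parse
        }
    }
    let raw = std::fs::read_to_string(path).ok()?;
    let catalog = parse_catalog(&raw)?;
    cache.insert(path.to_path_buf(), (mtime, catalog.clone()));
    Some(catalog)
}
```

The existing return behavior is preserved, so `load_catalog()` callers would not need to change.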
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/src/infrastructure/http_server.rs`:
- Around line 821-870: The code creates the on-disk directory before validating
the tool/server entry and the path, which can leave orphaned directories and
allows relative paths; move the directory creation to after acquiring the config
(use state.mcp_config_mutex), locating the server via
cfg.servers.get_mut(&te_service::server_key(&tool_id)), and confirming it
matches crate::modules::mcp::types::ServerEntry::Stdio; validate that
Path::new(&path).is_absolute() beforehand and return BAD_REQUEST if not; also
switch from std::fs::create_dir_all to tokio::fs::create_dir_all (or run
blocking) so directory creation does not block the async runtime.
In `@src/modules/mcp/components/McpServerCard.tsx`:
- Around line 486-532: The Save flow silently drops an unapplied
tePrivatePathInput because handleSubmit uses entry.private_host_path while users
can change tePrivatePathInput without calling applyTePrivateFolder; fix by
preventing Save when there's an unsaved private-folder edit: in the component
that renders the Save button/handleSubmit, compute a boolean like
hasUnsavedPrivate = tePrivatePathInput !== (entry.private_host_path ?? "") and
disable the Save button (or block handleSubmit) when hasUnsavedPrivate or
tePrivateApplyBusy is true, and show a small inline hint near the Save button
(e.g., "Apply data folder first") so users know to call applyTePrivateFolder
before saving.
- Around line 248-265: The effect watching name starts an async fetchToolCatalog
and unguardedly sets state (setTePrivateToolId, setTePrivatePathInput) which can
be clobbered by out-of-order responses; fix by adding a cancellation flag (e.g.,
let cancelled = false) inside the useEffect, check it before calling
setTePrivateToolId/setTePrivatePathInput after the await, and set cancelled =
true in the effect cleanup so only the latest fetch (and the result filtered by
teServerKeyForToolId(name)) updates state; keep the early-return branch for
non-"te_" names unchanged.
---
Outside diff comments:
In `@doc/tool-engine/manual-publish.md`:
- Around line 86-95: The doc header and body still say the update flow is
npm-only; update the "## Updating upstream npm versions" section and its
descriptive text to reflect that ./tools/update-upstream.sh now checks and bumps
both npm and PyPI package versions (e.g., change the header to "Updating
upstream npm and PyPI versions" and update the sentence that currently says
"This checks the npm registry..." to mention both registries and that
mcp-tools.json is bumped for either ecosystem), and ensure example usage remains
correct (./tools/update-upstream.sh and ./tools/update-upstream.sh file-manager)
and the PR summary/Files guidance mentions that the script may modify packages
for npm or PyPI.
---
Nitpick comments:
In `@src-tauri/src/modules/tool_engine/service.rs`:
- Around line 49-99: try_load_local_tools_catalog currently re-reads and
reparses files on every call; add a small mtime-keyed cache so repeated calls
skip I/O when nothing changed. Implement a static cache (e.g., OnceCell or a
Mutex<HashMap<PathBuf, (SystemTime, ToolCatalog)>>) inside
try_load_local_tools_catalog that for each candidate path checks
std::fs::metadata(&p).modified() against the cached timestamp and only calls
std::fs::read_to_string + parse_catalog when the mtime differs (update the
cached entry on change); keep the existing return behavior so load_catalog need
not change. Optionally expose a simple refresh trigger (SIGHUP/interval) if you
prefer invalidation instead of mtimes.
In `@src/modules/toolengine/index.ts`:
- Around line 133-162: Extract the duplicated response-parsing logic used in
putToolPrivateFolder, installTool and uninstallTool into a small async helper
(e.g., parseApiError) that accepts the Response and returns a string error;
update each helper to call `await parseApiError(resp)` and return `{ ok: false,
error }`. Ensure parseApiError implements the same behavior: read resp.text(),
try JSON.parse to get body.error, fall back to raw.trim(), and finally include a
default `Request failed (HTTP ${resp.status})` when empty. Keep the helper name
and usage consistent so you can replace the three duplicated blocks with `return
{ ok: false, error: await parseApiError(resp) };`.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e45ede4b-e55a-450a-b5b7-057ef4779838
📒 Files selected for processing (20)
- doc/tool-engine/manual-publish.md
- src-tauri/src/infrastructure/http_server.rs
- src-tauri/src/modules/bot/agent.rs
- src-tauri/src/modules/bot/service.rs
- src-tauri/src/modules/mcp/registry.rs
- src-tauri/src/modules/mcp/service.rs
- src-tauri/src/modules/mcp/types.rs
- src-tauri/src/modules/memory/mod.rs
- src-tauri/src/modules/mod.rs
- src-tauri/src/modules/tool_engine/service.rs
- src-tauri/src/modules/tool_engine/tools.json
- src-tauri/src/modules/tool_engine/types.rs
- src-tauri/src/shared/state.rs
- src/modules/mcp/components/McpServerCard.tsx
- src/modules/mcp/index.ts
- src/modules/toolengine/index.ts
- tools/build-local-images.sh
- tools/mcp-tools.json
- tools/memory/Dockerfile
- tools/update-upstream.sh
💤 Files with no reviewable changes (1)
- src-tauri/src/modules/tool_engine/tools.json
✅ Files skipped from review due to trivial changes (2)
- src-tauri/src/modules/mod.rs
- tools/memory/Dockerfile
🚧 Files skipped from review as they are similar to previous changes (1)
- tools/update-upstream.sh
- Updated the manual-publish documentation to clarify the update process for both npm and PyPI versions.
- Refactored error handling in the tool engine to use a dedicated function for parsing API errors, improving code readability and maintainability.
- Added cancellation logic in the InlineEditForm component to prevent state updates after unmounting.
- Enhanced the UI to provide feedback when unsaved changes exist, ensuring better user experience.
- Improved the local tools catalog loading mechanism with caching based on modification time, optimizing performance.
Actionable comments posted: 2
🧹 Nitpick comments (3)
src-tauri/src/modules/tool_engine/service.rs (3)
931-968: Test uses a fixed shared temp path.
`std::env::temp_dir().join("pengine-mem-test-data")` is a process-wide constant, so two concurrent test runs (e.g., `cargo test` in multiple checkouts, or nextest isolation) can race on `remove_dir_all` + `create_dir_all`. `podman_run_argv_for_tool` only formats strings from `host_path`, so the directory doesn't actually need to exist here — you can drop the filesystem setup entirely or use `tempfile::TempDir`.
♻️ Proposed fix
```diff
-    let tmp = std::env::temp_dir().join("pengine-mem-test-data");
-    let _ = std::fs::remove_dir_all(&tmp);
-    std::fs::create_dir_all(&tmp).unwrap();
+    let tmp = std::path::PathBuf::from("/tmp/pengine-mem-test-fixture");
     let pb = PrivateBind {
         host_path: tmp.as_path(),
         config: pf,
         bot_id: "12345",
     };
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/modules/tool_engine/service.rs` around lines 931 - 968, The test memory_catalog_has_private_folder_and_argv_includes_bind_and_env should not use a fixed shared temp path; remove the process-wide std::env::temp_dir().join("pengine-mem-test-data") + remove_dir_all/create_dir_all sequence and either use a unique TempDir (e.g., tempfile::TempDir) for host_path or skip creating the directory entirely since podman_run_argv_for_tool only formats strings from PrivateBind.host_path; update the PrivateBind construction and assertions to use the TempDir path (or a simple PathBuf) so tests no longer race across processes.
158-266: Private-folder helpers and stdio rebuild look correct.
`resolve_private_host_path` cleanly handles the trimmed-override / default split, and threading `PrivateBind` through `rebuild_installed_catalog_tool_stdio` correctly preserves `private_host_path` across resyncs while regenerating argv. The `catalog_tool_stdio_eq` comparator plus the early stdio-only guard also prevent accidental clobbering of non-stdio entries with the same server key.
One nit: `rebuild_installed_catalog_tool_stdio` returns `Err(...)` if `prev` isn't `Stdio`, and `sync_workspace_mounted_tools_for_catalog` propagates that with `?`, so a single malformed entry aborts the entire sync. Since catalog tools are always stdio today this is fine, but a `continue` (with a warn log) would be more resilient to schema drift.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/modules/tool_engine/service.rs` around lines 158 - 266, rebuild_installed_catalog_tool_stdio currently errors if prev isn't ServerEntry::Stdio which causes sync_workspace_mounted_tools_for_catalog to abort on a single malformed entry; change rebuild_installed_catalog_tool_stdio to return Result<Option<ServerEntry>, String> (or otherwise return Ok(None) for non-Stdio) instead of Err, and update sync_workspace_mounted_tools_for_catalog to treat None by logging a warning (include the server key/id) and continue the loop; keep existing behavior for real errors (propagate with ?) and preserve use of catalog_tool_stdio_eq and PrivateBind handling.
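A sketch of the more tolerant shape the prompt describes, with hypothetical stand-ins for the `ServerEntry` enum and the sync loop (the real fields and signatures will differ):

```rust
// Hypothetical stand-ins for the types the review references.
#[derive(Clone)]
enum ServerEntry {
    Stdio { command: String },
    Http { url: String },
}

// Ok(None) means "not applicable, skip" instead of Err, so one malformed
// entry can no longer abort the whole sync; real failures still return Err.
fn rebuild_installed_catalog_tool_stdio(prev: &ServerEntry) -> Result<Option<ServerEntry>, String> {
    let ServerEntry::Stdio { command } = prev else {
        return Ok(None); // non-stdio entry: schema drift, not a hard error
    };
    Ok(Some(ServerEntry::Stdio { command: command.clone() }))
}

fn sync_workspace_mounted_tools(entries: &mut Vec<(String, ServerEntry)>) -> Result<(), String> {
    for (key, entry) in entries.iter_mut() {
        match rebuild_installed_catalog_tool_stdio(entry)? { // real errors still propagate
            Some(rebuilt) => *entry = rebuilt,
            None => eprintln!("warn: skipping non-stdio catalog entry '{key}'"),
        }
    }
    Ok(())
}
```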
596-632: Install-time `bot_id` is intentionally the fallback — worth a brief comment.
`install_tool` always passes `BOT_ID_FALLBACK` into the first `PrivateBind`, so the initial stdio argv contains `--env=<var>=/…/default.<ext>`. This is rewritten by the post-install `rebuild_registry_into_state` → `sync_workspace_mounted_tools_for_catalog` path (which does pass the real `bot_id`), so behavior is correct when a bot is already connected at install time. Given the subtlety, a one-line comment here pointing to that rebuild step would help future readers.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/modules/tool_engine/service.rs` around lines 596 - 632, Add a one-line comment next to the creation of the PrivateBind (where BOT_ID_FALLBACK is passed) in install_tool explaining that the install-time bot_id intentionally uses BOT_ID_FALLBACK and that the real bot_id is applied later during post-install rebuild_registry_into_state -> sync_workspace_mounted_tools_for_catalog; reference PrivateBind, BOT_ID_FALLBACK, install_tool, and the rebuild_registry_into_state/sync_workspace_mounted_tools_for_catalog flow so future readers know this is corrected after install.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/src/modules/tool_engine/service.rs`:
- Around line 63-107: The ancestor-walk in try_load_local_tools_catalog
currently searches up from std::env::current_dir() and can accidentally override
production catalogs; restrict or gate that behavior so production builds cannot
pick up random repo files: modify try_load_local_tools_catalog to only perform
the upward directory loop when compiled in debug mode (#[cfg(debug_assertions)])
or when an explicit environment variable (e.g. LOCAL_TOOLS_CATALOG=true) is set,
otherwise only look for the catalog next to the binary/manifest path (use
env!("CARGO_MANIFEST_DIR") or std::env::current_exe() parent) and do not
traverse parents; update the search logic and any uses of
local_tools_catalog_mtime_cache() accordingly so production runs ignore
ancestor-walk unless explicitly enabled.
In `@src/modules/toolengine/index.ts`:
- Around line 49-51: The stray doc comment describing retry behavior is now
attached to parseApiError; move the block comment that starts "Tauri starts the
loopback API..." so it sits immediately above fetchOkWithRetry instead of above
parseApiError, ensuring parseApiError retains only its intended doc/comment;
update spacing so TSDoc/LSP hovers associate the retry comment with
fetchOkWithRetry and not parseApiError.
---
Nitpick comments:
In `@src-tauri/src/modules/tool_engine/service.rs`:
- Around line 931-968: The test
memory_catalog_has_private_folder_and_argv_includes_bind_and_env should not use
a fixed shared temp path; remove the process-wide
std::env::temp_dir().join("pengine-mem-test-data") +
remove_dir_all/create_dir_all sequence and either use a unique TempDir (e.g.,
tempfile::TempDir) for host_path or skip creating the directory entirely since
podman_run_argv_for_tool only formats strings from PrivateBind.host_path; update
the PrivateBind construction and assertions to use the TempDir path (or a simple
PathBuf) so tests no longer race across processes.
- Around line 158-266: rebuild_installed_catalog_tool_stdio currently errors if
prev isn't ServerEntry::Stdio which causes
sync_workspace_mounted_tools_for_catalog to abort on a single malformed entry;
change rebuild_installed_catalog_tool_stdio to return
Result<Option<ServerEntry>, String> (or otherwise return Ok(None) for non-Stdio)
instead of Err, and update sync_workspace_mounted_tools_for_catalog to treat
None by logging a warning (include the server key/id) and continue the loop;
keep existing behavior for real errors (propagate with ?) and preserve use of
catalog_tool_stdio_eq and PrivateBind handling.
- Around line 596-632: Add a one-line comment next to the creation of the
PrivateBind (where BOT_ID_FALLBACK is passed) in install_tool explaining that
the install-time bot_id intentionally uses BOT_ID_FALLBACK and that the real
bot_id is applied later during post-install rebuild_registry_into_state ->
sync_workspace_mounted_tools_for_catalog; reference PrivateBind,
BOT_ID_FALLBACK, install_tool, and the
rebuild_registry_into_state/sync_workspace_mounted_tools_for_catalog flow so
future readers know this is corrected after install.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 6d37a2af-150e-48f6-b193-66aec5999236
📒 Files selected for processing (5)
- doc/tool-engine/manual-publish.md
- src-tauri/src/infrastructure/http_server.rs
- src-tauri/src/modules/tool_engine/service.rs
- src/modules/mcp/components/McpServerCard.tsx
- src/modules/toolengine/index.ts
✅ Files skipped from review due to trivial changes (2)
- doc/tool-engine/manual-publish.md
- src/modules/mcp/components/McpServerCard.tsx
```ts
/** Tauri starts the loopback API in a spawned task; the webview may load first. Brief retries avoid a false "offline" flash. */
/** Same error extraction as the former inline blocks on failed toolengine HTTP calls. */
async function parseApiError(resp: Response): Promise<string> {
```
Stray doc comment now attached to parseApiError.
The /** Tauri starts the loopback API... Brief retries avoid a false "offline" flash. */ comment describes fetchOkWithRetry's retry behavior, but the new parseApiError was inserted between the comment and its original function. It now doc-attaches to parseApiError in tools like TSDoc/LSP hovers. Move it down to fetchOkWithRetry.
✏️ Proposed fix
```diff
-/** Tauri starts the loopback API in a spawned task; the webview may load first. Brief retries avoid a false "offline" flash. */
 /** Same error extraction as the former inline blocks on failed toolengine HTTP calls. */
 async function parseApiError(resp: Response): Promise<string> {
   ...
 }
+/** Tauri starts the loopback API in a spawned task; the webview may load first. Brief retries avoid a false "offline" flash. */
 async function fetchOkWithRetry(
```
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
/** Same error extraction as the former inline blocks on failed toolengine HTTP calls. */
async function parseApiError(resp: Response): Promise<string> {
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/modules/toolengine/index.ts` around lines 49 - 51, The stray doc comment
describing retry behavior is now attached to parseApiError; move the block
comment that starts "Tauri starts the loopback API..." so it sits immediately
above fetchOkWithRetry instead of above parseApiError, ensuring parseApiError
retains only its intended doc/comment; update spacing so TSDoc/LSP hovers
associate the retry comment with fetchOkWithRetry and not parseApiError.
… tools
- Implemented `install_all_catalog_tools` and `uninstall_all_catalog_tools` functions to handle bulk operations for catalog tools.
- Updated the tool manager to include new actions for installing and uninstalling all tools at once.
- Enhanced documentation to clarify usage of new actions and their implications for tool management.
…g loading
- Removed outdated comments and documentation from the tool engine codebase to enhance clarity and maintainability.
- Improved the local tools catalog loading mechanism to include checks for the executable's directory, optimizing the path resolution process.
- Updated function signatures and return types for better consistency and error handling in the tool engine.
Actionable comments posted: 6
🧹 Nitpick comments (1)
src-tauri/src/modules/memory/mod.rs (1)
115-143: Detection by raw tool-name set is fragile across MCP server variants.
Any MCP server that happens to expose tools literally named `create_entities` and `add_observations` — regardless of schema — will be bound as `Backend::KnowledgeGraph` and then receive `entityType: "ChatSession"`/`entityName`/`observations` payloads shaped to the official `@modelcontextprotocol/server-memory`. If a fork renames a field (e.g. `entity_name` vs `entityName`) the server will accept the call and silently drop data rather than error.
Given the per-file context notes that `parse_tools` returns whatever the server advertises with no schema validation, consider either:
- Also asserting on `input_schema` shape (presence of `entities[].entityType`, `observations[].entityName`) during `detect`, or
- Tagging the expected server by catalog id as a secondary signal.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/modules/memory/mod.rs` around lines 115 - 143, The current MemoryProvider::detect binds any provider advertising tools named "create_entities" and "add_observations" to Backend::KnowledgeGraph which is brittle; update MemoryProvider::detect to additionally validate the tool input schema before accepting a match (or use the provider catalog id as a secondary signal). Concretely, when iterating reg.providers() and examining p.tools() in MemoryProvider::detect, locate the Tool entries for "create_entities" and "add_observations" and assert their input schema contains the expected fields (e.g., entities[].entityType and observations[].entityName) or alternatively require p.catalog_id to be in an allowlist; only return Some(Self { backend: Backend::KnowledgeGraph, provider: p.clone() }) if those schema checks (or catalog id check) pass.
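A rough sketch of that schema-shape guard, assuming tool input schemas are available as `serde_json::Value` and that the official server advertises `entities[].entityType` / `observations[].entityName` (both assumptions; the real registry types differ):

```rust
use serde_json::Value;

// Hypothetical tool shape; the real registry type will differ.
struct Tool {
    name: String,
    input_schema: Value,
}

// Accept a provider as a knowledge-graph backend only when both tools exist
// AND their advertised schemas carry the fields the session payloads rely on.
fn looks_like_knowledge_graph(tools: &[Tool]) -> bool {
    let has_field = |name: &str, pointer: &str| {
        tools
            .iter()
            .find(|t| t.name == name)
            .and_then(|t| t.input_schema.pointer(pointer))
            .is_some()
    };
    has_field(
        "create_entities",
        "/properties/entities/items/properties/entityType",
    ) && has_field(
        "add_observations",
        "/properties/observations/items/properties/entityName",
    )
}
```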
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/src/infrastructure/http_server.rs`:
- Around line 930-937: The handler currently returns StatusCode::BAD_REQUEST
when tokio::fs::create_dir_all(&path).await fails; change this to return a more
accurate server-side status (e.g., StatusCode::INTERNAL_SERVER_ERROR) or map
specific io::ErrorKind cases (PermissionDenied -> StatusCode::FORBIDDEN,
AlreadyExists/Other -> 409 or 500) instead of BAD_REQUEST; update the Err branch
that constructs Json(ErrorResponse { error: format!("cannot create directory:
{e}") }) to use the chosen StatusCode and preserve the error message so clients
receive correct semantics for create_dir_all failures.
In `@src-tauri/src/modules/bot/agent.rs`:
- Around line 367-374: The current handle_diary_line function silently returns
an empty TurnResult with suppress_telegram_reply: true when state.memory_session
is None, which can lead to no-reply races if the diary session was cleared by
handle_diary_end or handle_session_end; change handle_diary_line to detect that
memory_session.read().await is None and instead return a TurnResult with
suppress_telegram_reply: false and a short user-facing notice (e.g., "Diary
session ended; please start a new session") or alternatively fall through to the
normal model turn handling path so the user receives a response; update the code
paths that construct TurnResult (reference: handle_diary_line, memory_session,
TurnResult, suppress_telegram_reply, handle_diary_end, handle_session_end) to
ensure the reply is not suppressed when the session is missing.
In `@src-tauri/src/modules/memory/mod.rs`:
- Around line 151-165: The start_session function is not idempotent: when
calling provider.call_tool("create_entities") with the session name produced by
session_entity_name a duplicate-entity error will propagate; update
start_session to handle this by either catching the "already exists" error shape
returned from provider.call_tool("create_entities") and treating it as success,
or by making the entity name unique (e.g., append a short random suffix or
nanoseconds to session_entity_name before calling create_entities); as an
alternative you can probe with provider.call_tool("search_nodes") for an
existing session_entity_name and skip create_entities if found—modify
start_session to implement one of these approaches and ensure the function
returns Ok(()) on duplicate/exists cases rather than propagating an Err.
- Around line 57-65: The is_starfleet_signoff matcher (is_starfleet_signoff)
currently only checks the first and last tokens and allows arbitrary middle
tokens; tighten it by enforcing that the token count is between 3 and 4 and that
every middle token (the slice between first and last from toks) contains only
alphabetic characters (no digits/punctuation) before returning true; update the
logic that computes toks, first, last and the final matches! check to include a
length upper bound and an alphabetic-only check on middle tokens.
- Around line 83-105: detect_session_command currently lowercases and trims only
ASCII punctuation, so smart quotes and Unicode punctuation (e.g. U+2019 RIGHT
SINGLE QUOTATION MARK, U+201C/U+201D, ellipsis, fullwidth punctuation) bypass
matching; update detect_session_command to first apply Unicode normalization
(NFKC or NFC) and explicitly normalize curly quotes to straight ASCII quotes
(map U+2018/U+2019 -> '\'' and U+201C/U+201D -> '"'), then perform lowercase and
use a broader trailing-punctuation trim (trim_end_matches with a predicate that
removes chars where char::is_ascii_punctuation() ||
unicode_general_category_is_punctuation) before comparing against
SESSION_START_PHRASES, DIARY_START_PHRASES, DIARY_END_PHRASES,
SESSION_START_PHRASES, SESSION_END_PHRASES and calling is_starfleet_signoff so
that “captain’s log” and messages ending with Unicode punctuation match
correctly (a normalization sketch follows this list).
- Around line 132-143: Update the detect function to log which provider was
selected and clarify the selection contract: in detect(reg: &ToolRegistry) when
you return Some(Self { backend: Backend::KnowledgeGraph, provider: p.clone() }),
emit a tracing::info! (including p.server_name() or p.clone().server_name())
that names the chosen provider so operators can see which server was picked;
also update the detect docstring to state that the function picks the first
matching provider in iteration order (users cannot currently control preference)
so the first-match tie-breaker is explicit.
---
Nitpick comments:
In `@src-tauri/src/modules/memory/mod.rs`:
- Around line 115-143: The current MemoryProvider::detect binds any provider
advertising tools named "create_entities" and "add_observations" to
Backend::KnowledgeGraph which is brittle; update MemoryProvider::detect to
additionally validate the tool input schema before accepting a match (or use the
provider catalog id as a secondary signal). Concretely, when iterating
reg.providers() and examining p.tools() in MemoryProvider::detect, locate the
Tool entries for "create_entities" and "add_observations" and assert their input
schema contains the expected fields (e.g., entities[].entityType and
observations[].entityName) or alternatively require p.catalog_id to be in an
allowlist; only return Some(Self { backend: Backend::KnowledgeGraph, provider:
p.clone() }) if those schema checks (or catalog id check) pass.
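The normalization sketch referenced in the `detect_session_command` item above, assuming the `unicode-normalization` crate (a dependency this PR adds in a later commit and ultimately drops); the quote map and trim predicate are illustrative, not the project's final choice:

```rust
use unicode_normalization::UnicodeNormalization;

// NFKC-fold the text, map curly quotes to ASCII, then lowercase,
// so a "Captain's log" typed with U+2019 matches the same phrase table.
fn normalize_session_text(input: &str) -> String {
    input
        .nfkc()
        .map(|c| match c {
            '\u{2018}' | '\u{2019}' => '\'', // curly single quotes
            '\u{201C}' | '\u{201D}' => '"',  // curly double quotes
            other => other,
        })
        .collect::<String>()
        .to_lowercase()
}

// Broader trailing trim: whitespace, ASCII punctuation, and the ellipsis.
fn trim_trailing_punct(s: &str) -> &str {
    s.trim_end_matches(|c: char| c.is_whitespace() || c.is_ascii_punctuation() || c == '…')
}
```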
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 7049f259-b6ca-4602-80ea-b9d0de84a125
⛔ Files ignored due to path filters (1)
`src-tauri/Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (11)
- src-tauri/Cargo.toml
- src-tauri/src/infrastructure/http_server.rs
- src-tauri/src/modules/bot/agent.rs
- src-tauri/src/modules/mcp/client.rs
- src-tauri/src/modules/mcp/native.rs
- src-tauri/src/modules/mcp/registry.rs
- src-tauri/src/modules/mcp/service.rs
- src-tauri/src/modules/mcp/types.rs
- src-tauri/src/modules/memory/mod.rs
- src-tauri/src/modules/tool_engine/service.rs
- src/modules/toolengine/index.ts
✅ Files skipped from review due to trivial changes (1)
- src-tauri/Cargo.toml
🚧 Files skipped from review as they are similar to previous changes (1)
- src-tauri/src/modules/mcp/types.rs
- Introduced `unicode-normalization` and `unicode-general-category` dependencies to enhance text processing capabilities.
- Updated the `http_server.rs` to improve error handling by categorizing IO errors.
- Refactored diary session handling in `agent.rs` for better clarity and user feedback.
- Enhanced memory module with new functions for Unicode normalization and improved session command detection.
🧹 Nitpick comments (5)
src-tauri/src/modules/memory/mod.rs (3)
174-196: Redundant `trim()` before `trim_session_message_end`.
`trim_session_message_end` already includes `c.is_whitespace()` in its predicate (line 90), so the intermediate `normalized.trim()` on line 176 is a no-op for trailing whitespace. It still strips leading whitespace, which `trim_session_message_end` does not — so if the intent is to allow leading whitespace (and only leading whitespace, no leading punctuation) this is actually correct. Worth a brief inline comment so future readers know the asymmetry is deliberate.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/modules/memory/mod.rs` around lines 174 - 196, The extra normalized.trim() in detect_session_command looks redundant for trailing whitespace because trim_session_message_end already treats whitespace, so either remove the intermediate trim or (preferable per review) keep it but add a short inline comment next to the normalized.trim() explaining the deliberate asymmetry: that trim() is kept to remove leading whitespace while trim_session_message_end handles trailing whitespace/punctuation, and reference the helper function trim_session_message_end and the detect_session_command function so future readers understand the intent.
275-289: Idempotency via error-string sniffing is fragile — consider a probe-based fallback.
Catching "already exists / duplicate / already known" substrings handles the official `@modelcontextprotocol/server-memory` today, but a future KG backend or the `te_pengine-memory` implementation could phrase the conflict differently (e.g. "entity name collision", "409", "conflict", localized text), causing `start_session` to regress to a hard error on retries.
A more robust approach (for a future iteration, not a blocker): probe via `search_nodes`/`open_nodes` before `create_entities`, or gate the create behind an idempotent upsert helper that both backends can share. The current implementation is acceptable as a first cut but is worth flagging in a follow-up.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/modules/memory/mod.rs` around lines 275 - 289, The current error-handling in the match on self.provider.call_tool("create_entities", args) uses brittle substring checks on the error string; change this to a probe-based or upsert approach: before calling create_entities from start_session (or wrap the call in a new idempotent helper like upsert_entities), call the provider's search_nodes or open_nodes to detect existing entities and skip creation if present, or implement an upsert-style provider helper that handles create-or-ignore semantics; update references to self.provider.call_tool("create_entities", ...) to use that probe/upsert helper so backends that return different conflict messages won't cause hard errors.
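A hedged sketch of the probe-then-create helper; `Provider`, its `call_tool` signature, and the `search_nodes` response shape are all assumptions standing in for the real MCP plumbing:

```rust
use serde_json::{json, Value};

// Hypothetical provider handle mirroring the call_tool(...) shape in the review.
struct Provider;
impl Provider {
    async fn call_tool(&self, _name: &str, _args: Value) -> Result<Value, String> {
        Ok(json!({ "entities": [] })) // placeholder transport
    }
}

/// Create the session entity only when a probe says it doesn't exist yet,
/// so duplicate-create conflicts never surface as hard errors.
async fn upsert_session_entity(provider: &Provider, entity_name: &str) -> Result<(), String> {
    let found = provider
        .call_tool("search_nodes", json!({ "query": entity_name }))
        .await?;
    // Assumption: matches are reported under an "entities" array.
    let exists = found
        .get("entities")
        .and_then(Value::as_array)
        .map(|e| !e.is_empty())
        .unwrap_or(false);
    if exists {
        return Ok(()); // already created by an earlier start_session
    }
    provider
        .call_tool(
            "create_entities",
            json!({ "entities": [{
                "name": entity_name,
                "entityType": "ChatSession",
                "observations": [],
            }] }),
        )
        .await?;
    Ok(())
}
```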
296-309: Error handling is already in place; consider adding retry/session recovery logic for improved reliability.
The `append` method itself has no retry or recovery logic, but callers in `agent.rs` do catch and log errors from failed append operations (e.g., "turn not saved: {e}"). However, logging doesn't prevent the observation from being lost if `add_observations` fails because the entity no longer exists (e.g., memory cleared mid-session). For better reliability, consider implementing retry logic or automatically re-creating the session entity on "not found" errors before abandoning the write.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/modules/memory/mod.rs` around lines 296 - 309, The append method (pub async fn append) currently calls provider.call_tool("add_observations") without retry or recovery; update append to catch errors from self.provider.call_tool for Backend::KnowledgeGraph, implement a limited retry loop (e.g., 2-3 attempts with backoff) and if the error indicates the target entity is missing (a "not found" or similar error string/code), attempt to re-create the entity/session (call the provider tool that creates entities or sessions) before retrying the add_observations call; ensure failures still propagate as Err after retries so callers can log as before.
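Continuing the sketch above (same hypothetical `Provider` and `upsert_session_entity`), a bounded retry for `append` that re-creates the entity on a "not found" error:

```rust
async fn append_with_retry(
    provider: &Provider,
    entity_name: &str,
    line: &str,
) -> Result<(), String> {
    let args = serde_json::json!({
        "observations": [{ "entityName": entity_name, "contents": [line] }]
    });
    let mut last_err = String::new();
    for attempt in 0u64..3 {
        match provider.call_tool("add_observations", args.clone()).await {
            Ok(_) => return Ok(()),
            Err(e) => {
                // Entity vanished mid-session (e.g. memory cleared): re-create, then retry.
                if e.to_lowercase().contains("not found") {
                    upsert_session_entity(provider, entity_name).await?;
                }
                last_err = e;
            }
        }
        // Small linear backoff between attempts.
        tokio::time::sleep(std::time::Duration::from_millis(100 * (attempt + 1))).await;
    }
    Err(last_err) // still propagate after exhausting retries so callers can log
}
```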
src-tauri/src/infrastructure/http_server.rs (2)
501-527: Brittle `..` pattern can silently miss new `Stdio` fields.
`mcp_stdio_identity_ignores_direct_return` uses `..` in both arms, which currently only hides `direct_return`. If a future field is added to `ServerEntry::Stdio` (e.g. a new mount spec, workdir, uid, etc.), this function will continue to report "identity-equal" even when that new field differs, and the fast path at Lines 611–621 (`patch_stdio_direct_return_in_registry` → `return`) will skip the registry rebuild and leave the running MCP child out of sync with `mcp.json`.
♻️ Proposed fail-closed variant
```diff
-fn mcp_stdio_identity_ignores_direct_return(
-    old: &crate::modules::mcp::types::ServerEntry,
-    new: &crate::modules::mcp::types::ServerEntry,
-) -> bool {
-    use crate::modules::mcp::types::ServerEntry;
-    match (old, new) {
-        (
-            ServerEntry::Stdio {
-                command: c0,
-                args: a0,
-                env: e0,
-                private_host_path: p0,
-                ..
-            },
-            ServerEntry::Stdio {
-                command: c1,
-                args: a1,
-                env: e1,
-                private_host_path: p1,
-                ..
-            },
-        ) => c0 == c1 && a0 == a1 && e0 == e1 && p0 == p1,
-        _ => false,
-    }
-}
+fn mcp_stdio_identity_ignores_direct_return(
+    old: &crate::modules::mcp::types::ServerEntry,
+    new: &crate::modules::mcp::types::ServerEntry,
+) -> bool {
+    use crate::modules::mcp::types::ServerEntry;
+    match (old, new) {
+        (
+            ServerEntry::Stdio {
+                command: c0,
+                args: a0,
+                env: e0,
+                private_host_path: p0,
+                direct_return: _,
+            },
+            ServerEntry::Stdio {
+                command: c1,
+                args: a1,
+                env: e1,
+                private_host_path: p1,
+                direct_return: _,
+            },
+        ) => c0 == c1 && a0 == a1 && e0 == e1 && p0 == p1,
+        _ => false,
+    }
+}
```
Removing `..` makes adding a new `Stdio` field a compile error here, forcing an explicit decision on whether it belongs in the identity comparison.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/infrastructure/http_server.rs` around lines 501 - 527, The comparison function mcp_stdio_identity_ignores_direct_return is using `..` in both ServerEntry::Stdio pattern arms which can silently ignore future Stdio fields; change the match arms for ServerEntry::Stdio in mcp_stdio_identity_ignores_direct_return to deconstruct and compare every concrete field that contributes to identity (explicitly list command, args, env, private_host_path and any other current Stdio fields) instead of using `..` so the compiler will force you to handle new fields when the enum grows and avoid erroneously treating non-identical entries as equal.
909-978: Minor: filesystem I/O held under `mcp_config_mutex`.
`tokio::fs::create_dir_all(&path).await` at Line 931 runs while the `mcp_config_mutex` guard is held, so a slow/NFS/permission-hanging target path will block every other MCP config operation (upsert/delete/filesystem-paths/install/uninstall) until the syscall returns. Since `path` has already been validated as absolute and the tool's `private_folder` presence has been checked before the mutex is taken, you could create the directory first and only acquire the mutex for the config mutation + save.
Not a functional bug — just narrows the critical section if you care about responsiveness under a slow host FS.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/infrastructure/http_server.rs` around lines 909 - 978, The create_dir_all call is executed while holding the mcp_config_mutex, which enlarges the critical section; move the filesystem operation out of the mutex: validate path earlier (ensuring it's absolute and private_folder checked), call tokio::fs::create_dir_all(&path).await before locking state.mcp_config_mutex, map its IO errors to the same StatusCode logic you have now, then lock mcp_config_mutex and perform the ServerEntry::Stdio mutation (*private_host_path = Some(path.clone())), call te_service::sync_workspace_mounted_tools_for_catalog and mcp_service::save_config as before (keeping existing error-to-StatusCode mapping).
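A sketch of the narrowed critical section; the `HashMap` stands in for the real MCP config type, and error mapping is collapsed to a string:

```rust
use std::collections::HashMap;
use std::path::Path;
use tokio::sync::Mutex;

// Do the (possibly slow) filesystem work before taking the config mutex,
// so a hanging create_dir_all can't stall unrelated MCP config operations.
async fn set_private_folder(
    config_mutex: &Mutex<HashMap<String, String>>, // stand-in for the MCP config
    server_key: &str,
    path: &str,
) -> Result<(), String> {
    // 1. Validate and create outside the critical section.
    if !Path::new(path).is_absolute() {
        return Err("path must be absolute".into());
    }
    tokio::fs::create_dir_all(path)
        .await
        .map_err(|e| format!("cannot create directory: {e}"))?;

    // 2. Only the in-memory mutation (and save) happens under the lock.
    let mut cfg = config_mutex.lock().await;
    cfg.insert(server_key.to_string(), path.to_string());
    Ok(())
}
```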
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@src-tauri/src/infrastructure/http_server.rs`:
- Around line 501-527: The comparison function
mcp_stdio_identity_ignores_direct_return is using `..` in both
ServerEntry::Stdio pattern arms which can silently ignore future Stdio fields;
change the match arms for ServerEntry::Stdio in
mcp_stdio_identity_ignores_direct_return to deconstruct and compare every
concrete field that contributes to identity (explicitly list command, args, env,
private_host_path and any other current Stdio fields) instead of using `..` so
the compiler will force you to handle new fields when the enum grows and avoid
erroneously treating non-identical entries as equal.
- Around line 909-978: The create_dir_all call is executed while holding the
mcp_config_mutex, which enlarges the critical section; move the filesystem
operation out of the mutex: validate path earlier (ensuring it's absolute and
private_folder checked), call tokio::fs::create_dir_all(&path).await before
locking state.mcp_config_mutex, map its IO errors to the same StatusCode logic
you have now, then lock mcp_config_mutex and perform the ServerEntry::Stdio
mutation (*private_host_path = Some(path.clone())), call
te_service::sync_workspace_mounted_tools_for_catalog and
mcp_service::save_config as before (keeping existing error-to-StatusCode
mapping).
In `@src-tauri/src/modules/memory/mod.rs`:
- Around line 174-196: The extra normalized.trim() in detect_session_command
looks redundant for trailing whitespace because trim_session_message_end already
treats whitespace, so either remove the intermediate trim or (preferable per
review) keep it but add a short inline comment next to the normalized.trim()
explaining the deliberate asymmetry: that trim() is kept to remove leading
whitespace while trim_session_message_end handles trailing
whitespace/punctuation, and reference the helper function
trim_session_message_end and the detect_session_command function so future
readers understand the intent.
- Around line 275-289: The current error-handling in the match on
self.provider.call_tool("create_entities", args) uses brittle substring checks
on the error string; change this to a probe-based or upsert approach: before
calling create_entities from start_session (or wrap the call in a new idempotent
helper like upsert_entities), call the provider's search_nodes or open_nodes to
detect existing entities and skip creation if present, or implement an
upsert-style provider helper that handles create-or-ignore semantics; update
references to self.provider.call_tool("create_entities", ...) to use that
probe/upsert helper so backends that return different conflict messages won't
cause hard errors.
- Around line 296-309: The append method (pub async fn append) currently calls
provider.call_tool("add_observations") without retry or recovery; update append
to catch errors from self.provider.call_tool for Backend::KnowledgeGraph,
implement a limited retry loop (e.g., 2-3 attempts with backoff) and if the
error indicates the target entity is missing (a "not found" or similar error
string/code), attempt to re-create the entity/session (call the provider tool
that creates entities or sessions) before retrying the add_observations call;
ensure failures still propagate as Err after retries so callers can log as
before.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a2a3db17-1170-48fb-9bbe-f6aacfecb8ba
⛔ Files ignored due to path filters (1)
`src-tauri/Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (4)
- src-tauri/Cargo.toml
- src-tauri/src/infrastructure/http_server.rs
- src-tauri/src/modules/bot/agent.rs
- src-tauri/src/modules/memory/mod.rs
🚧 Files skipped from review as they are similar to previous changes (2)
- src-tauri/Cargo.toml
- src-tauri/src/modules/bot/agent.rs
…ssion handling
- Removed `unicode-normalization` and `unicode-general-category` dependencies from `Cargo.toml` and `Cargo.lock` as they are no longer needed.
- Simplified the memory session handling in `agent.rs` by cleaning up unused constants and improving the clarity of session command detection.
- Introduced a new function to compact JSON schemas by stripping unnecessary description fields, optimizing memory usage in the MCP module.
- Removed the `ghcr-verify.yml` workflow and associated Dockerfile as part of the cleanup.
- Updated `detect-matrix.sh` to compare tool blobs instead of versions for more accurate change detection.
- Enhanced `read-tool-manifest.sh` to support both npm and PyPI package definitions, ensuring tools can specify dependencies correctly.
- Updated `mcp-tools.json` and `tools.json` to reflect new tools and their configurations, including version bumps and additional metadata.

Summary by CodeRabbit
New Features
Documentation
Chores