feat: integrate Tauri event bridge and refactor MCP tools management#19
Conversation
- Added `initTauriRegistryBridge` to connect Tauri events with browser events for MCP tools.
- Introduced `useRegistryChanged` hook to simplify registry change handling in `McpToolsPanel` and `ToolEnginePanel`.
- Updated `tool_manager` integration in the MCP service to ensure consistent tool management.
- Enhanced error handling and async support in tool management functions.
- Refactored tests to support async operations for improved reliability.
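As a rough illustration of the browser-side channel these bullets describe, the sketch below shows a dispatch/subscribe pair over the `PENGINE_MCP_REGISTRY_CHANGED` window event named later in this PR. The helper signatures are assumptions for illustration, not the actual `mcpEvents.ts` API:

```typescript
export const PENGINE_MCP_REGISTRY_CHANGED = "PENGINE_MCP_REGISTRY_CHANGED";

// Use the browser window when present; fall back to a plain EventTarget so
// the sketch also runs under Node.
const target: EventTarget = (globalThis as any).window ?? new EventTarget();

// Fired by the Tauri bridge (or local actions) when the MCP registry changes.
export function notifyMcpRegistryChanged(): void {
  target.dispatchEvent(new Event(PENGINE_MCP_REGISTRY_CHANGED));
}

// What a useRegistryChanged hook would wrap: subscribe a callback and
// return an unsubscribe function for the effect cleanup.
export function onMcpRegistryChanged(callback: () => void): () => void {
  const handler = () => callback();
  target.addEventListener(PENGINE_MCP_REGISTRY_CHANGED, handler);
  return () => target.removeEventListener(PENGINE_MCP_REGISTRY_CHANGED, handler);
}
```

In the real panels, the hook presumably calls the subscribe helper inside a `useEffect` and re-runs the panel's data loader on each notification.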
📝 Walkthrough

Adds a native

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant FE as React Frontend
    participant Bridge as Tauri Bridge
    participant Service as MCP Service
    participant Native as NativeProvider (tool_manager)
    participant ToolEngine as Tool Engine Service
    FE->>Bridge: user triggers install/uninstall via UI
    Bridge->>Service: request action / rebuild_registry_into_state
    Service->>Native: connect_one_server(..., tool_manager) / call(action)
    Native->>ToolEngine: install_tool() or uninstall_tool()
    ToolEngine-->>Native: operation result
    Native-->>Service: return Ok/Err
    Service->>Service: rebuild_registry_into_state()
    Service->>Bridge: emit `pengine-registry-changed`
    Bridge-->>FE: forwards as window `PENGINE_MCP_REGISTRY_CHANGED`
    FE->>FE: useRegistryChanged callback reloads UI
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 1
🧹 Nitpick comments (3)
src/modules/toolengine/components/ToolEnginePanel.tsx (1)
56-57: Avoid duplicate refreshes after local install/uninstall actions.

With the new listener at Line 56, successful actions can trigger `loadData()` twice (once via dispatched registry event, once via explicit handler call). Consider using a single refresh path to reduce redundant fetches.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/modules/toolengine/components/ToolEnginePanel.tsx` around lines 56 - 57, The new useRegistryChanged listener causes loadData to run twice for local install/uninstall flows (once from the dispatched registry event and once from the explicit success handler); fix by consolidating into a single refresh path—either remove the explicit handler call and rely on useRegistryChanged to invoke loadData, or add a guard/flag (e.g., a local boolean or a requestId) that the explicit handler sets (or the registry event carries) so loadData short-circuits when the refresh was already triggered; update references in ToolEnginePanel around useRegistryChanged and the install/uninstall success handlers so only one codepath calls loadData.

src-tauri/src/modules/mcp/native.rs (1)
144-167: Consider holding `mcp_config_mutex` when reading installed tool IDs.

`installed_tool_ids` at line 146 reads the config file without acquiring `mcp_config_mutex`. If a concurrent install/uninstall is mid-write, this could read a partial or inconsistent file. While the worst case is an incorrect "installed" status display (not data corruption), acquiring the mutex would ensure consistency.

That said, this is a read-only listing for display purposes, so the risk is low. Mark as optional improvement.
🔧 Optional: Acquire config mutex for consistent reads
```diff
 async fn handle_list_tools(state: &AppState) -> Result<String, String> {
     let catalog = tool_engine_service::load_catalog()?;
+    let _cfg_guard = state.mcp_config_mutex.lock().await;
     let installed = tool_engine_service::installed_tool_ids(&state.mcp_config_path);
     let installed_set: HashSet<&str> = installed.iter().map(|s| s.as_str()).collect();
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/modules/mcp/native.rs` around lines 144 - 167, The list-tools handler (handle_list_tools) calls tool_engine_service::installed_tool_ids without holding the application's mcp_config_mutex, risking reading a concurrently-written config; acquire the AppState's mcp_config_mutex (or otherwise lock the same mutex used by install/uninstall) around the installed_tool_ids call to ensure a consistent read, then release the mutex before proceeding to build the catalog output. Use the state.mcp_config_mutex lock guard to protect the read so it matches the install/uninstall protection.

src-tauri/src/modules/mcp/service.rs (1)
266-275: Consider logging a warning if config save fails during auto-insert.

Line 274 silently discards the `save_config` result. If the save fails, the in-memory config will include `tool_manager` but the disk won't reflect this, causing repeated auto-insert attempts on each rebuild. This is harmless but slightly wasteful.

🔧 Optional: Log on save failure
```diff
     );
-    let _ = save_config(&state.mcp_config_path, &cfg);
+    if let Err(e) = save_config(&state.mcp_config_path, &cfg) {
+        // Non-fatal: tool_manager will be re-added on next rebuild
+        tracing::warn!("failed to persist tool_manager auto-add: {e}");
+    }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/modules/mcp/service.rs` around lines 266 - 275, The code silently ignores save_config result when auto-inserting the tool manager; update the block that inserts native::TOOL_MANAGER_ID (and the subsequent call to save_config(&state.mcp_config_path, &cfg)) to check the Result and log a warning on Err (including the error and context such as native::TOOL_MANAGER_ID and state.mcp_config_path) so failures are visible; keep the insertion in-memory but emit a process- or module-level warning via the existing logger or eprintln! to aid debugging (refer to save_config, cfg, native::TOOL_MANAGER_ID, ServerEntry::Native, and state.mcp_config_path).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/shared/mcpEvents.ts`:
- Around line 16-26: initTauriRegistryBridge currently creates a new Tauri event
listener on every call, causing duplicate notifications; make it idempotent by
introducing a module-level boolean (e.g., tauriBridgeInitialized) and return
early if true, then only call import(...).then(listen...) when not initialized
and set tauriBridgeInitialized = true after successful listen; reference the
existing symbols TAURI_REGISTRY_CHANGED and notifyMcpRegistryChanged and ensure
the flag is only set on successful listener registration (do not set it inside
the catch).
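A minimal sketch of the idempotency fix this comment asks for. The real module dynamically imports `@tauri-apps/api/event`; here the `listen` function is injected so the guard logic can be shown (and exercised) in isolation, and a cached promise plays the role of the suggested module-level flag — it is reset on failure so a later call may retry:

```typescript
type Listen = (event: string, handler: () => void) => Promise<() => void>;

// Module-level guard: holds the in-flight (or completed) registration.
let bridgeInitPromise: Promise<void> | null = null;

export function initTauriRegistryBridge(
  listen: Listen,
  notifyMcpRegistryChanged: () => void,
): Promise<void> {
  // Idempotent: repeated calls reuse the first registration instead of
  // attaching a second Tauri listener (which would duplicate notifications).
  if (bridgeInitPromise !== null) return bridgeInitPromise;

  bridgeInitPromise = listen("pengine-registry-changed", notifyMcpRegistryChanged)
    .then(() => undefined)
    .catch(() => {
      // Registration failed (e.g. not running inside a Tauri shell):
      // clear the guard so a subsequent call can try again.
      bridgeInitPromise = null;
    });
  return bridgeInitPromise;
}
```

The `Listen` type and the injected-parameter shape are assumptions for the sketch; the event name `pengine-registry-changed` comes from the PR's sequence diagram.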
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9dd4da85-3c95-412f-b74a-e01c113a4988
📒 Files selected for processing (10)
- src-tauri/mcp.example.json
- src-tauri/src/modules/mcp/native.rs
- src-tauri/src/modules/mcp/registry.rs
- src-tauri/src/modules/mcp/service.rs
- src-tauri/tests/mcp_tools.rs
- src/main.tsx
- src/modules/mcp/components/McpToolsPanel.tsx
- src/modules/toolengine/components/ToolEnginePanel.tsx
- src/shared/mcpEvents.ts
- src/shared/useRegistryChanged.ts
- Updated Tool Engine panel to ensure data is reloaded after installation and uninstallation failures.
- Enhanced error handling to provide clearer feedback on action failures.
- Refactored MCP service to include proper locking mechanisms when accessing installed tool IDs, improving thread safety.
- Added logging for failed configuration saves to aid in debugging.
Actionable comments posted: 1
🧹 Nitpick comments (2)
src/shared/mcpEvents.ts (1)
33-36: Consider adding dev-time logging for bridge setup failures.

Right now failures are fully silent. A small debug log would help distinguish “expected non-Tauri runtime” from real bridge setup regressions during development.
🔧 Minimal refactor
```diff
-  .catch(() => {
+  .catch((error) => {
     tauriBridgeListenPromise = null;
-    // Not running inside Tauri shell — no bridge needed.
+    // Not running inside Tauri shell — no bridge needed.
+    if (import.meta.env.DEV) {
+      console.debug("[mcpEvents] Tauri registry bridge init skipped/failed", error);
+    }
   });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/shared/mcpEvents.ts` around lines 33 - 36, The catch on the tauri bridge setup currently swallows all errors; update the catch handler for the promise that assigns tauriBridgeListenPromise to also log a debug-level message with the caught error and a short context string (e.g., "tauri bridge listen setup failed — likely non-Tauri runtime") so developers can distinguish expected non-Tauri runs from real regressions; ensure the log is dev-only (use a NODE_ENV / import.meta.env.MODE check or console.debug) and keep the existing behavior of setting tauriBridgeListenPromise = null in that catch block.

src-tauri/src/modules/mcp/native.rs (1)
93-107: Tighten `manage_tools` schema to enforce `tool_id` per action.

Right now only `action` is required structurally, so invalid install/uninstall calls are accepted by schema and fail later at runtime. Encode action-specific requirements in JSON schema (`oneOf` + per-action `required`).

Suggested schema shape
```diff
 input_schema: json!({
     "type": "object",
-    "required": ["action"],
     "properties": {
         "action": {
             "type": "string",
             "enum": ["list", "install", "uninstall"],
             "description": "The operation to perform"
         },
         "tool_id": {
             "type": "string",
             "description": "Tool identifier (required for install/uninstall, e.g. 'pengine/file-manager')"
         }
-    }
+    },
+    "oneOf": [
+        {
+            "required": ["action"],
+            "properties": { "action": { "const": "list" } }
+        },
+        {
+            "required": ["action", "tool_id"],
+            "properties": { "action": { "const": "install" } }
+        },
+        {
+            "required": ["action", "tool_id"],
+            "properties": { "action": { "const": "uninstall" } }
+        }
+    ]
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/modules/mcp/native.rs` around lines 93 - 107, The input_schema for the manage_tools command currently only requires "action" so install/uninstall requests missing tool_id pass validation; update the JSON schema assigned to input_schema (in the manage_tools handler) to use oneOf with three subschemas: one for {"action":"list"} with no tool_id required, one for {"action":"install"} that adds "tool_id" in required, and one for {"action":"uninstall"} that also requires "tool_id"; keep the existing "properties" and enum for action but split into per-action subschemas so install/uninstall calls fail schema validation if tool_id is omitted.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src-tauri/src/modules/mcp/native.rs`:
- Around line 230-235: The current code swallows errors from
mcp_service::rebuild_registry_into_state(state).await by logging and returning
Ok(()), so change the flow to propagate the failure: after the emit_log await,
return Err(e) (or map e into the function's Result error type) instead of
continuing to Ok(()); update the match/if-let to preserve the original error
variable (e) and return it (or use .map_err(|e| { state.emit_log(...).await; e
}) on the future) so callers of this function see the rebuild failure rather
than a spurious success.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 68f3a162-66d5-49b0-b453-1908fd112e42
📒 Files selected for processing (4)
- src-tauri/src/modules/mcp/native.rs
- src-tauri/src/modules/mcp/service.rs
- src/modules/toolengine/components/ToolEnginePanel.tsx
- src/shared/mcpEvents.ts
🚧 Files skipped from review as they are similar to previous changes (2)
- src/modules/toolengine/components/ToolEnginePanel.tsx
- src-tauri/src/modules/mcp/service.rs
- Revised descriptions in the tool manager to clarify user actions and tool management processes.
- Enhanced error handling in the `run_tool_mutation` function to ensure proper error propagation.
- Removed unnecessary comments in the Tauri registry bridge initialization for cleaner code.
🧹 Nitpick comments (2)
src/shared/mcpEvents.ts (1)
33-35: Don’t silently swallow bridge init failures.

The catch path resets state, but it hides why initialization failed. Adding minimal logging/telemetry here will make startup regressions much easier to diagnose.
Suggested adjustment
```diff
-  .catch(() => {
+  .catch((error) => {
     tauriBridgeListenPromise = null;
+    console.warn("Failed to initialize Tauri registry bridge", error);
   });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/shared/mcpEvents.ts` around lines 33 - 35, The catch handler for the tauriBridgeListenPromise currently swallows errors; update the catch on tauriBridgeListenPromise to accept the error (e.g., catch((err) => {...})), log the failure (use the existing logger/telemetry utility in this module—e.g., processLogger.error or sendTelemetry) with a clear message including the error/details, then keep the existing state reset (tauriBridgeListenPromise = null); reference tauriBridgeListenPromise and the existing catch block when making the change.

src-tauri/src/modules/mcp/native.rs (1)
146-171: Prefer structured `list` output over formatted prose.

`install`/`uninstall` depend on an exact `tool_id`, and this bullet-list format makes downstream parsing brittle. Returning JSON here would make the tool chain more reliable and keep presentation concerns in the caller.

♻️ Possible refactor
```diff
 async fn handle_list_tools(state: &AppState) -> Result<String, String> {
     let catalog = tool_engine_service::load_catalog()?;
     let installed = {
         let _cfg_guard = state.mcp_config_mutex.lock().await;
         tool_engine_service::installed_tool_ids(&state.mcp_config_path)
     };
     let installed_set: HashSet<&str> = installed.iter().map(|s| s.as_str()).collect();
-    let mut lines = Vec::new();
-    for tool in &catalog.tools {
-        let status = if installed_set.contains(tool.id.as_str()) {
-            "installed"
-        } else {
-            "not installed"
-        };
-        lines.push(format!(
-            "- {} (id: {}, v{}): {} [{}]",
-            tool.name, tool.id, tool.version, tool.description, status
-        ));
-    }
-
-    if lines.is_empty() {
-        Ok("No tools available in the catalog.".to_string())
-    } else {
-        Ok(format!("Available tools:\n{}", lines.join("\n")))
-    }
+    let tools: Vec<Value> = catalog
+        .tools
+        .iter()
+        .map(|tool| {
+            json!({
+                "id": tool.id,
+                "name": tool.name,
+                "version": tool.version,
+                "description": tool.description,
+                "installed": installed_set.contains(tool.id.as_str()),
+            })
+        })
+        .collect();
+
+    Ok(json!({ "tools": tools }).to_string())
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src-tauri/src/modules/mcp/native.rs` around lines 146 - 171, handle_list_tools currently builds a human-readable bullet list string which is brittle for downstream parsing; change it to return a JSON array string where each item is an object with fields like id, name, version, description, and status. Use tool_engine_service::load_catalog() and tool_engine_service::installed_tool_ids(&state.mcp_config_path) as before (keeping the mcp_config_mutex lock around the config access), build a Vec of structs or maps representing each tool and its computed status ("installed" / "not_installed"), then serialize that Vec to a JSON string via serde_json::to_string(...) and return it as Ok(json_string). Ensure errors from load_catalog and installed_tool_ids propagate as before.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8b114fdf-a511-43bf-9885-766990f3ffaf
📒 Files selected for processing (2)
- src-tauri/src/modules/mcp/native.rs
- src/shared/mcpEvents.ts
- `initTauriRegistryBridge` to connect Tauri events with browser events for MCP tools.
- `useRegistryChanged` hook to simplify registry change handling in `McpToolsPanel` and `ToolEnginePanel`.
- `tool_manager` integration in the MCP service to ensure consistent tool management.

Summary by CodeRabbit
- New Features
- UX / Sync
- Tests