refactor(app): lazy network contexts, unified network switch, MCP network tools (#814)
Conversation
…work tools

Rebased PR #803 onto current v1.0-dev by diffing against the squash-merged PR #767 base. Single commit replacing 57 granular commits that had interleaved merges from squash-merged branches.

Key changes:
- Defer non-active network context creation until switch
- Simplify network switch to single BackendTask::SwitchNetwork
- Add MCP tools: network_switch, network_refresh_endpoints
- Unify context storage for MCP network operations
- Force SPV backend in headless mode
- Add user-friendly token validation error messages
- Various SPV and shielded wallet fixes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Caution: Review failed. Pull request was closed or merged during review.

📝 Walkthrough

This PR introduces network switching infrastructure with multi-network AppContext management, adds SPV synchronization gating for wallet-facing tools, implements new MCP tools for network operations, refactors theme management with throttled polling, and expands error handling for token validation failures across multiple components.
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant User as User / UI
participant App as App (app.rs)
participant Backend as Backend Task
participant Context as AppContext
participant ZMQ as ZMQ Listener
User->>App: Request network switch
App->>App: Check if context exists
alt Context exists for target network
App->>Context: Fast switch (volatile)
else Context missing
App->>Backend: Dispatch SwitchNetwork task
Backend->>Context: Create new AppContext<br/>(block_in_place)
Context->>Backend: Return NetworkContextCreated
Backend->>App: Return result with new context
App->>App: swap_context() in MCP service
App->>ZMQ: spawn_zmq_listener if needed
end
App->>App: finalize_network_switch
App->>App: Apply context to screens
App->>User: Confirm switch complete
```

```mermaid
sequenceDiagram
participant Tool as MCP Tool
participant App as App / AppContext
participant SPV as SPV Manager
participant Backend as Backend Task
Tool->>App: Call tool (wallet-facing)
App->>App: Check metadata-only exception<br/>(core_wallets_list, network_info, etc.)
alt Not exception
Tool->>App: ensure_spv_synced()
App->>SPV: Check sync state
alt SPV not synced
SPV->>SPV: Sync from genesis or temp dir
SPV->>App: Sync complete
end
App->>Tool: Return ready
Tool->>Backend: Dispatch wallet/DAPI task
else Metadata-only exception
Tool->>Backend: Dispatch task directly<br/>(skip SPV gate)
end
Backend->>Tool: Return result
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
…d() check

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@coderabbitai review all
Review Gate
Commit:
✅ Actions performed: Full review triggered.
Pull request overview
Refactors network/context handling to support lazy per-network context creation, unify network switching through a single backend task, and expose network-management capabilities via MCP/CLI tools.
Changes:
- Introduces lazy network context initialization and a unified `BackendTask::SwitchNetwork` flow used by both GUI and MCP/CLI.
- Adds MCP tools for `network_switch` and `network_reinit_sdk`, and forces SPV backend in headless mode.
- Improves UX around network settings (manual DAPI discovery UI, async reinit banners) and refines token-import labeling/error handling.
Reviewed changes
Copilot reviewed 26 out of 27 changed files in this pull request and generated 13 comments.
Show a summary per file
| File | Description |
|---|---|
| src/app.rs | Lazy network context bootstrapping, unified network switching, theme state refactor |
| src/backend_task/mod.rs | Adds SwitchNetwork/ReinitCoreClientAndSdk tasks + success result variants |
| src/backend_task/error.rs | Adds token validation errors and NetworkContextCreationFailed |
| src/backend_task/platform_info.rs | Adjusts receiver type to self: &Arc<Self> |
| src/backend_task/wallet/fetch_platform_address_balances.rs | Switches platform balance map to PlatformAddress and conversion logic |
| src/config.rs | Updates regtest default RPC port and DAPI address list docs |
| src/context/mod.rs | Adds volatile backend mode, shielded init hook, RPC reinit logic changes |
| src/context/wallet_lifecycle.rs | FeatureGate import cleanup + shielded init-on-protocol-crossing helper |
| src/mcp/error.rs | Enriches MCP error responses with backend debug details |
| src/mcp/resolve.rs | Updates SPV gating documentation for MCP tools |
| src/mcp/server.rs | Reworks MCP context storage/swapping + headless SPV forcing |
| src/mcp/tools/network.rs | Adds network_reinit_sdk and network_switch tools |
| src/mcp/tools/wallet.rs | Updates platform address formatting and SPV gating notes |
| src/ui/mod.rs | Refactors Screen::change_context using macro + special-case screens |
| src/ui/network_chooser_screen.rs | Migrates to BTreeMap contexts, async reinit banner, DAPI discovery dialog rewrite |
| src/ui/tokens/add_token_by_id_screen.rs | Renames “Add Token” UI to “Import Token” |
| src/ui/tokens/tokens_screen/mod.rs | Renames right-side action to “Import Token” |
| src/ui/tokens/tokens_screen/my_tokens.rs | Renames “Add Token” button to “Import Token” |
| src/ui/wallets/wallets_screen/mod.rs | Developer-mode gating tweaks and platform address conversion for wallet storage |
| docs/MCP.md | Documents new network tools and SPV requirements |
| docs/MCP_TOOL_DEVELOPMENT.md | Updates SPV gate guidance for tool authors |
| docs/ai-design/2026-03-30-dapi-discovery-button/wireframe.html | Adds DAPI discovery UX wireframe |
| docs/ai-design/2026-03-30-dapi-discovery-button/ux-spec.md | Adds DAPI discovery UX specification |
| CONTRIBUTING.md | Updates MCP tool dev doc link |
| CLAUDE.md | Updates MCP tool dev doc reference |
| Cargo.toml | Bumps dash-sdk and trusted context provider git revs |
| Cargo.lock | Updates dependency lockfile for new git revisions |
Actionable comments posted: 17
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
docs/MCP_TOOL_DEVELOPMENT.md (1)
96-100: ⚠️ Potential issue | 🟡 Minor

These checklist bullets now disagree. Line 100 treats `core_wallets_list` as part of the metadata/no-SPV exception, but lines 97-98 still say `verify_network` may be skipped only for `network_info` and `tool_describe`. Future tool authors will get contradictory guidance here.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/MCP_TOOL_DEVELOPMENT.md` around lines 96 - 100, The documentation contains contradictory checklist bullets about which tools can skip network verification and SPV sync; reconcile them by making the exception list consistent: update the lines referencing verify_network/resolve::require_network(), ensure_spv_synced, and the metadata exception so that only network_info and tool_describe (and other purely metadata tools that make no network calls) are exempt from verify_network and ensure_spv_synced, while core_wallets_list is treated consistently (either included as a wallet-facing tool requiring ensure_spv_synced and resolve::require_network or explicitly documented as metadata); reference and adjust the entries mentioning verify_network, resolve::require_network(), ensure_spv_synced, core_wallets_list, network_info, and tool_describe so the checklist is unambiguous.

src/app.rs (2)
779-781: ⚠️ Potential issue | 🟠 Major

Rebind stacked screens during `finalize_network_switch()`. Only `main_screens` get `change_context()`. Any screen sitting in `screen_stack` keeps the old `AppContext`, so actions from an open detail screen will keep running on the previous network after the switch.

Suggested fix

```diff
 for screen in self.main_screens.values_mut() {
     screen.change_context(app_context.clone())
 }
+for screen in &mut self.screen_stack {
+    screen.change_context(app_context.clone());
+}
```

As per coding guidelines, all screens implement `ScreenLike` with a `change_context()` method.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app.rs` around lines 779 - 781, The loop in finalize_network_switch() only rebinds self.main_screens but leaves screens in self.screen_stack using the old AppContext; iterate over self.screen_stack (e.g., for screen in self.screen_stack.values_mut() or however the stack is stored) and call screen.change_context(app_context.clone()) for each entry so every ScreenLike instance gets the new AppContext; ensure you use the same app_context.clone() used for main_screens to avoid shared-mutation issues.
1154-1193: ⚠️ Potential issue | 🟠 Major

Do not surface inactive-network ZMQ events to the visible screen. The listeners stay alive per network in `zmq_listeners`, but every `CoreItem` here is pushed into `visible_screen_mut()` regardless of `network`. After a switch, blocks or IS-locks from the old network can still mutate the current screen.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app.rs` around lines 1154 - 1193, The code currently displays CoreItem notifications on the visible screen for any incoming ZMQ event regardless of which network produced them; add a check that the incoming network matches the UI's currently selected/active network before calling self.visible_screen_mut().display_task_result — i.e., keep processing/storing via app_context.received_transaction_finality for all networks but wrap the calls that construct and push CoreItem (all branches that call self.visible_screen_mut() for ZMQMessage::ISLockedTransaction and ZMQMessage::ChainLockedBlock and any similar displays) with an early guard like `if network != self.<selected_or_active_network_field> { continue; }` (or only skip the display path and not the storage) so only events from the active network are shown; reference symbols: core_message_receiver.try_recv, network (the loop variable), network_contexts.get, visible_screen_mut, CoreItem, and ZMQMessage variants.
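For illustration, a minimal sketch of the guard this prompt describes, using hypothetical stand-in types (`Network`, `ZmqEvent`, and the `App` fields here are not the app's real API): events from every network are still recorded, but only the active network's events reach the visible screen.

```rust
// Hypothetical stand-ins for the app's real types; only the guard pattern
// is the point of this sketch.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum Network {
    Mainnet,
    Testnet,
}

struct ZmqEvent {
    network: Network,
    payload: String,
}

struct App {
    active_network: Network,
    displayed: Vec<String>,
    stored: Vec<(Network, String)>,
}

impl App {
    fn drain_zmq(&mut self, events: impl Iterator<Item = ZmqEvent>) {
        for event in events {
            // Record finality state for every network so inactive contexts
            // stay current in the background.
            self.stored.push((event.network, event.payload.clone()));

            // Only the active network's events may reach the visible screen.
            if event.network != self.active_network {
                continue;
            }
            self.displayed.push(event.payload);
        }
    }
}
```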
🧹 Nitpick comments (1)
src/mcp/tools/network.rs (1)
84-88: Make `network` optional here for API consistency.

This tool only operates on the active context, so requiring callers to always echo the current network creates a one-off public surface. Accepting `Option<String>` and keeping the existing mismatch check when a value is provided would match the rest of the MCP contract. As per coding guidelines, MCP tools accept optional `network` parameters and fail only when a provided value disagrees with the active network.

♻️ Suggested change

```diff
 pub struct ReinitSdkParams {
-    /// Target network. Required — must match the server's active network.
-    pub network: String,
+    /// Optional network guard. When provided, it must match the server's active network.
+    pub network: Option<String>,
 }
@@
-    resolve::require_network(&ctx, Some(&param.network))?;
+    resolve::require_network(&ctx, param.network.as_deref())?;
```

Also applies to: 128-132
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mcp/tools/network.rs` around lines 84 - 88, Change the ReinitSdkParams struct's network field from String to Option<String> (i.e., pub network: Option<String>) and keep the existing mismatch validation logic wherever the active network is checked so that: if network is Some(v) you still compare v to the active network and error on mismatch, and if None you accept the active context. Also apply the same change to the other similar params struct referenced (the one around lines 128-132) so all MCP tools accept an optional network parameter while preserving current mismatch checks.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/ai-design/2026-03-30-dapi-discovery-button/ux-spec.md`:
- Around line 331-346: The spec incorrectly references the obsolete
SystemTask::DiscoverDapiNodes; update the section and any checklist entries to
reference the actual backend task abstraction (replace
SystemTask::DiscoverDapiNodes with BackendTask::DiscoverDapiNodes), update any
surrounding prose/snippets to route through BackendTask, and ensure the success
variant remains BackendTaskSuccessResult::DapiNodesDiscovered and still
documents calling the async try_discover_nodes() from src/dapi_discovery.rs;
also update the duplicate references around lines noted in the checklist.
In `@docs/ai-design/2026-03-30-dapi-discovery-button/wireframe.html`:
- Around line 327-333: The onclick handlers on .scenario-tab elements and the
dialog buttons should pass the event explicitly and the showScenario function
must accept it; update all onclick attributes like
onclick="showScenario('default')" to onclick="showScenario(event, 'default')"
(apply similarly for all occurrences mentioned) and change the
showScenario(name) function signature to showScenario(event, name) so it uses
event.target to determine the clicked .scenario-tab instead of relying on the
implicit global event object; ensure any internal references to name still work
and update any callers of showScenario to match the new two-parameter signature.
In `@src/app.rs`:
- Around line 737-758: change_network currently only suppresses duplicate
switches to the same target, allowing overlapping switches to different networks
which can finish out of order and call finalize_network_switch incorrectly;
update change_network to early-return if self.network_switch_pending.is_some()
(i.e., block any new network switch while one is pending) and only proceed to
set network_switch_pending and call
handle_backend_task(BackendTask::SwitchNetwork { ... }) when there is no pending
switch; mention the involved symbols to edit: change_network,
network_switch_pending, handle_backend_task, BackendTask::SwitchNetwork, and
finalize_network_switch (ensuring pending is set before dispatch and no new
dispatch occurs until pending cleared).
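A sketch of the single-pending-switch guard described in the item above, with hypothetical stand-in types (`Network`, `BackendTask`, and the `App` struct are simplified; only the guard logic mirrors the suggestion):

```rust
// Illustrative stand-ins; the real app types differ.
#[derive(Clone, Copy, PartialEq, Eq)]
enum Network {
    Mainnet,
    Testnet,
}

enum BackendTask {
    SwitchNetwork { network: Network, start_spv: bool },
}

struct App {
    network_switch_pending: Option<Network>,
}

impl App {
    fn change_network(&mut self, target: Network) {
        // Refuse any new switch while one is pending, regardless of target,
        // so two switches cannot complete out of order.
        if self.network_switch_pending.is_some() {
            return;
        }
        // Set pending *before* dispatch so a re-entrant call sees the guard.
        self.network_switch_pending = Some(target);
        self.dispatch(BackendTask::SwitchNetwork {
            network: target,
            start_spv: true,
        });
    }

    fn dispatch(&mut self, _task: BackendTask) {
        // Stub: the real code hands this to the backend task runner.
    }

    /// Called from the result handler once the switch finishes or fails.
    fn clear_pending(&mut self) {
        self.network_switch_pending = None;
    }
}
```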
- Around line 399-400: The NetworkChooserScreen is being constructed with
saved_network which can be stale after fallback; determine the actual chosen
network (the variable used after the fallback logic, e.g., chosen_network) and
pass that into NetworkChooserScreen::new instead of saved_network so the chooser
boots with the correct context; update the call that constructs
NetworkChooserScreen (currently using
NetworkChooserScreen::new(&network_contexts, saved_network,
overwrite_dash_conf)) to use chosen_network and ensure chosen_network is
computed from the fallback logic before this call.
In `@src/backend_task/error.rs`:
- Around line 391-401: The InvalidTokenNameCharacter error variant currently
interpolates raw token_name into the user-facing message; to fix, add a separate
display-safe field (e.g., token_name_escaped or token_name_display) that holds
an escaped/display-safe representation (escaping tabs, newlines, control chars)
and use that field in the #[error(...)] message while keeping the raw token_name
only in the variant for source/details or logging (and keep the message as a
single complete sentence). Update any construction sites that create
InvalidTokenNameCharacter (where token_name is supplied) to populate the escaped
field using a deterministic escape utility, and ensure the variant still
includes source_error: Box<SdkError> unchanged.
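A sketch of the display-safe variant shape this prompt suggests; the field and helper names are hypothetical, and the real variant's `source_error: Box<SdkError>` field is omitted to keep the example self-contained.

```rust
use thiserror::Error;

// Hypothetical shape: the raw token name is kept for logs only, while an
// escaped copy appears in the user-facing Display text.
#[derive(Debug, Error)]
pub enum TaskError {
    #[error("Token name {token_name_display} contains an invalid character.")]
    InvalidTokenNameCharacter {
        /// Raw user input, available via Debug / logging only.
        token_name: String,
        /// Escaped, single-line copy used in the user-facing message.
        token_name_display: String,
    },
}

/// Deterministic escape helper: control characters become visible escapes.
fn escape_for_display(raw: &str) -> String {
    raw.chars().flat_map(char::escape_default).collect()
}

/// Construction site populates both the raw and display-safe fields.
fn invalid_token_name(raw: &str) -> TaskError {
    TaskError::InvalidTokenNameCharacter {
        token_name: raw.to_owned(),
        token_name_display: escape_for_display(raw),
    }
}
```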
In `@src/backend_task/mod.rs`:
- Around line 106-112: The doc comment for the SwitchNetwork variant is
misleading: it says "Intercepted by AppState — never dispatched to
AppContext::run_backend_task" but there is an actual handler implemented for
SwitchNetwork in AppContext::run_backend_task and AppState processes the result;
update the comment on the SwitchNetwork enum variant to state that the task is
dispatched to AppContext::run_backend_task (and that AppState handles the
result), e.g., "Dispatched to AppContext::run_backend_task; result processed by
AppState. When start_spv is true, SPV sync is started on the new context
automatically." Ensure you reference the SwitchNetwork variant and
AppContext::run_backend_task / AppState in the updated comment.
- Around line 469-473: The call to
new_ctx.update_settings(crate::ui::RootScreenType::RootScreenNetworkChooser).ok()
swallows errors; change it to handle the Result and log a warning on Err (e.g.,
using log::warn! or tracing::warn!), including the returned error value and
context that persisting the network choice failed so debugging inconsistent
startup behavior is easier; ensure you still proceed on success and avoid
panicking.
- Around line 456-497: The TaskError::NetworkContextCreationFailed error message
is using the debug formatter `{network:?}` instead of the standard display
formatter; update the error's Display/thiserror format string to use `{network}`
(or implement Display for the Network type if it doesn't already) so the variant
TaskError::NetworkContextCreationFailed { network } uses the standard
`{network}` specifier; ensure any other occurrences (e.g., where Network is
embedded in TaskError messages) are similarly switched from `:?` to the plain
`{}` format.
In `@src/backend_task/wallet/fetch_platform_address_balances.rs`:
- Around line 173-177: The current use of filter_map with
PlatformAddress::try_from in the balances construction silently drops keys that
cannot be re-encoded, making the returned platform_address_info incomplete;
replace the filter_map step in the balances pipeline so that invalid keys are
not silently ignored: either (A) normalize/validate and rewrite the keys at the
storage site that inserts into platform_address_info (so
PlatformAddress::try_from always succeeds later), or (B) change the mapping here
to attempt conversion and propagate a typed error (return Result and fail the
task) when PlatformAddress::try_from(addr.clone()) fails instead of calling
.ok().map(...); update any caller of this function to handle the Result if you
choose option B. Ensure you touch the code paths interacting with
platform_address_info and balances (the function using filter_map and
PlatformAddress::try_from) so no invalid keys are silently dropped.
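A sketch of option (B) from this prompt, propagating a typed error instead of `.ok()`-filtering, with simplified stand-in types for `PlatformAddress` and `TaskError`:

```rust
use std::collections::BTreeMap;

// Simplified stand-in for the real address type.
struct PlatformAddress(String);

impl TryFrom<String> for PlatformAddress {
    type Error = String;
    fn try_from(raw: String) -> Result<Self, Self::Error> {
        if raw.is_empty() {
            Err("empty address".into())
        } else {
            Ok(Self(raw))
        }
    }
}

enum TaskError {
    InvalidPlatformAddress { raw: String, detail: String },
}

fn rebuild_balances(
    stored: BTreeMap<String, u64>,
) -> Result<Vec<(PlatformAddress, u64)>, TaskError> {
    stored
        .into_iter()
        .map(|(raw, balance)| {
            // Propagate a typed error instead of filtering the bad key away.
            let addr = PlatformAddress::try_from(raw.clone())
                .map_err(|e| TaskError::InvalidPlatformAddress { raw, detail: e })?;
            Ok((addr, balance))
        })
        .collect()
}
```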
In `@src/context/mod.rs`:
- Around line 452-462: The code currently stores and may persist the new mode
inside set_core_backend_mode_inner (core_backend_mode.store and
db.update_core_backend_mode) before the new provider is actually bound; move the
core_backend_mode.store and the conditional persist (invalidate_settings_cache +
db.update_core_backend_mode) so they execute only after the provider-binding
code (the provider bind logic used later in this flow) completes successfully;
in short, perform provider binding first, and only on success call
core_backend_mode.store(...) and then persist via invalidate_settings_cache()
and db.update_core_backend_mode(...).
- Around line 579-590: The reinit branch currently constructs a new RPC Client
with hard-coded UserPass auth, which bypasses cookie/devnet lookup; replace this
logic in network_reinit_sdk so it reuses the existing helper
create_core_rpc_client(...) instead of calling Client::new with Auth::UserPass,
ensuring the same auth resolution used by AppContext::new() and
core_client_for_wallet() (preserve the returned Client or propagate
TaskError::RpcProviderCreationFailed on create_core_rpc_client failure).
In `@src/context/wallet_lifecycle.rs`:
- Around line 203-219: The candidate-collection closure swallows lock-poison
errors by using unwrap_or_default on the closure result, turning the one-shot
shielded backfill into a silent no-op; change the logic in the block that builds
candidates (the closure using self.wallets.read() and
self.shielded_states.lock()) to surface and log errors instead of defaulting to
an empty Vec—detect when wallets.read() or shielded_states.lock() returns Err,
log an error message with context, and only fall back to an empty list when you
explicitly want that behavior; ensure any early-return or error path does not
silently prevent subsequent calls to initialize_shielded_wallet from running for
eligible wallet hashes.
In `@src/mcp/error.rs`:
- Around line 39-47: The current McpToolError::TaskFailed branch leaks internal
debug info by using format!("{task_err:?}") and returning it to clients; replace
this with a sanitized diagnostics payload derived from the TaskError variants
(do not use Debug output). Update the TaskFailed handling to map task_err into a
safe serde_json::Value (e.g., an object with explicit fields like { "kind":
"<variant>", "message": task_err.to_string(), "aux": <sanitized_meta> }) and
return that instead of the raw debug string while keeping CODE_TASK_FAILED and
the user-facing e.to_string() unchanged; ensure any filesystem paths or internal
fields are omitted or redacted in the constructed payload.
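A sketch of the sanitized payload this prompt asks for, with a trimmed, hypothetical `TaskError` standing in for the real enum:

```rust
use serde_json::{json, Value};

// A trimmed, hypothetical TaskError; the real enum has many more variants.
#[derive(Debug)]
enum TaskError {
    NetworkContextCreationFailed { network: String },
    RpcProviderCreationFailed { detail: String },
}

impl std::fmt::Display for TaskError {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        match self {
            TaskError::NetworkContextCreationFailed { network } => {
                write!(f, "Failed to create a context for network {network}.")
            }
            TaskError::RpcProviderCreationFailed { detail } => {
                write!(f, "Failed to create the RPC provider: {detail}.")
            }
        }
    }
}

/// Build a client-safe diagnostics object with explicit fields only; the
/// Debug chain (which may contain host paths) stays in the logs.
fn sanitized_details(err: &TaskError) -> Value {
    let kind = match err {
        TaskError::NetworkContextCreationFailed { .. } => "network_context_creation_failed",
        TaskError::RpcProviderCreationFailed { .. } => "rpc_provider_creation_failed",
    };
    json!({
        "kind": kind,
        "message": err.to_string(),
    })
}
```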
In `@src/mcp/server.rs`:
- Around line 95-101: The init closure passed to init_guard should catch and
sanitize any error from init_app_context() before it propagates to MCP callers:
inside the async closure used by self.init_guard.get_or_try_init(...), call
init_app_context().await and if it Errs, log the detailed error (including
filesystem paths) at debug/error via your logger, then return a new McpError
with a generic message like "failed to initialize application context" (no
paths) so ctx() and callers do not receive path-containing Display text; keep
calling ctx_holder.store(app_context) and return Ok(()) on success. Ensure you
reference the same symbols (init_guard, ctx(), init_app_context(),
ctx_holder.store, McpError) when implementing the mapping.
In `@src/mcp/tools/network.rs`:
- Around line 226-231: The no-op return in the network switch path currently
hardcodes spv_started: true; instead read the actual SPV/start state from the
current ConnectionStatus and return that value. In the early-return where you
check ctx.network() == target (and build NetworkSwitchOutput with
network_display_name(target)), replace the hard-coded spv_started with a boolean
derived from the current connection status (query the ConnectionStatus via the
existing context/connection accessor used elsewhere in this module), e.g. obtain
the connection status from ctx (or ctx.connection()) and map it to the
SPV-running boolean, then set spv_started to that derived value so the output
reflects the real connection health.
In `@src/ui/network_chooser_screen.rs`:
- Around line 2124-2146: display_task_result is unconditionally clearing
reinit_banner and resetting config_save_failed, which lets unrelated messages
dismiss an in-flight reconnect banner; change it so
reinit_banner.take_and_clear() and std::mem::take(&mut self.config_save_failed)
are only invoked when handling the reconnect task's completion (i.e., when
matching BackendTaskSuccessResult::CoreClientReinitialized) or when the stored
reconnect task id/state explicitly matches the incoming message, do not clear or
mutate those fields on other message paths, and remove any direct dismissal of
global banners here so AppState remains responsible for banner lifecycle (keep
MessageBanner::set_global calls only in the CoreClientReinitialized branch).
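A sketch of the narrower clearing logic described in the item above; the enum and fields are simplified stand-ins, and only the reconnect completion branch touches the reinit state:

```rust
// Simplified stand-ins; only the reconnect completion branch may touch the
// reinit banner state.
enum BackendTaskSuccessResult {
    CoreClientReinitialized,
    Other,
}

struct Screen {
    reinit_banner: Option<String>,
    config_save_failed: bool,
}

impl Screen {
    fn display_task_result(&mut self, result: BackendTaskSuccessResult) {
        match result {
            BackendTaskSuccessResult::CoreClientReinitialized => {
                // The reconnect task's own completion dismisses its banner.
                self.reinit_banner = None;
                self.config_save_failed = false;
            }
            // Unrelated results leave the reconnect state untouched.
            BackendTaskSuccessResult::Other => {}
        }
    }
}
```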
- Around line 217-223: current_app_context() must not fall back to an arbitrary
AppContext because current_network can legitimately have no context; remove the
fallback and return None when there is no context so network-scoped UI can't
accidentally mutate the wrong network. Change the signature of
current_app_context (and call sites) to return Option<&Arc<AppContext>> by using
self.context_for_network(self.current_network) directly (no
.or_else(...).expect(...)); update callers that relied on a guaranteed context
to handle None explicitly (use shared state via self.data_dir / self.db for
non-network-specific operations or disable network-scoped controls until the
context exists). Ensure symbols: current_app_context, current_network,
network_contexts, data_dir, db are updated accordingly.
---
Outside diff comments:
In `@docs/MCP_TOOL_DEVELOPMENT.md`:
- Around line 96-100: The documentation contains contradictory checklist bullets
about which tools can skip network verification and SPV sync; reconcile them by
making the exception list consistent: update the lines referencing
verify_network/resolve::require_network(), ensure_spv_synced, and the metadata
exception so that only network_info and tool_describe (and other purely metadata
tools that make no network calls such as tool_describe and network_info) are
exempt from verify_network and ensure_spv_synced, while core_wallets_list is
treated consistently (either included as a wallet-facing tool requiring
ensure_spv_synced and resolve::require_network or explicitly documented as
metadata); reference and adjust the entries mentioning verify_network,
resolve::require_network(), ensure_spv_synced, core_wallets_list, network_info,
and tool_describe so the checklist is unambiguous.
In `@src/app.rs`:
- Around line 779-781: The loop in finalize_network_switch() only rebinds
self.main_screens but leaves screens in self.screen_stack using the old
AppContext; iterate over self.screen_stack (e.g., for screen in
self.screen_stack.values_mut() or however the stack is stored) and call
screen.change_context(app_context.clone()) for each entry so every ScreenLike
instance gets the new AppContext; ensure you use the same app_context.clone()
used for main_screens to avoid shared-mutation issues.
- Around line 1154-1193: The code currently displays CoreItem notifications on
the visible screen for any incoming ZMQ event regardless of which network
produced them; add a check that the incoming network matches the UI's currently
selected/active network before calling
self.visible_screen_mut().display_task_result — i.e., keep processing/storing
via app_context.received_transaction_finality for all networks but wrap the
calls that construct and push CoreItem (all branches that call
self.visible_screen_mut() for ZMQMessage::ISLockedTransaction and
ZMQMessage::ChainLockedBlock and any similar displays) with an early guard like
`if network != self.<selected_or_active_network_field> { continue; }` (or only
skip the display path and not the storage) so only events from the active
network are shown; reference symbols: core_message_receiver.try_recv, network
(the loop variable), network_contexts.get, visible_screen_mut, CoreItem, and
ZMQMessage variants.
---
Nitpick comments:
In `@src/mcp/tools/network.rs`:
- Around line 84-88: Change the ReinitSdkParams struct's network field from
String to Option<String> (i.e., pub network: Option<String>) and keep the
existing mismatch validation logic wherever the active network is checked so
that: if network is Some(v) you still compare v to the active network and error
on mismatch, and if None you accept the active context. Also apply the same
change to the other similar params struct referenced (the one around lines
128-132) so all MCP tools accept an optional network parameter while preserving
current mismatch checks.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 7eabff94-fbd3-47a4-8938-101ed9222d5d
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (26)
- CLAUDE.md
- CONTRIBUTING.md
- Cargo.toml
- docs/MCP.md
- docs/MCP_TOOL_DEVELOPMENT.md
- docs/ai-design/2026-03-30-dapi-discovery-button/ux-spec.md
- docs/ai-design/2026-03-30-dapi-discovery-button/wireframe.html
- src/app.rs
- src/backend_task/error.rs
- src/backend_task/mod.rs
- src/backend_task/platform_info.rs
- src/backend_task/wallet/fetch_platform_address_balances.rs
- src/config.rs
- src/context/mod.rs
- src/context/wallet_lifecycle.rs
- src/mcp/error.rs
- src/mcp/resolve.rs
- src/mcp/server.rs
- src/mcp/tools/network.rs
- src/mcp/tools/wallet.rs
- src/ui/mod.rs
- src/ui/network_chooser_screen.rs
- src/ui/tokens/add_token_by_id_screen.rs
- src/ui/tokens/tokens_screen/mod.rs
- src/ui/tokens/tokens_screen/my_tokens.rs
- src/ui/wallets/wallets_screen/mod.rs
✅ Actions performed: Full review triggered.
thepastaclaw left a comment
Code Review
I validated the current SHA from the completed Codex review outputs plus direct code inspection. The lazy-context refactor introduces three concrete regressions: all networks now share one mutable ConnectionStatus, SDK/Core-client reinitialization no longer preserves cookie authentication, and the new manual DAPI discovery flow silently drops discovered endpoints when a network has no existing config block. Those are all current-SHA issues with real user-visible impact in the network/context paths this PR refactors.
Reviewed commit: f89a075
🔴 3 blocking
1 additional finding
🔴 blocking: Lazy network contexts still share one mutable `ConnectionStatus` instance
src/app.rs (lines 294-308)
AppState::new_inner constructs a single Arc<ConnectionStatus> and passes clones of it into every eagerly created AppContext. The lazy BackendTask::SwitchNetwork path also reuses self.connection_status.clone() when it creates new contexts later. Each context's SPV manager stores that handle and emits status updates through it, so background work from an inactive network can still overwrite the active network's UI connection indicators after a switch. This defeats the per-network isolation the lazy-context refactor is trying to introduce.
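One possible direction, sketched with illustrative stand-in types (not the app's real API): keep one `ConnectionStatus` handle per network, so an inactive context's SPV updates cannot repaint the active indicator.

```rust
use std::collections::BTreeMap;
use std::sync::Arc;

// Illustrative stand-ins, not the app's real types.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum Network {
    Mainnet,
    Testnet,
}

struct ConnectionStatus;

struct AppState {
    connection_statuses: BTreeMap<Network, Arc<ConnectionStatus>>,
    active_network: Network,
}

impl AppState {
    /// Fetch (or lazily create) the handle for one network; each AppContext
    /// would receive its own Arc instead of a clone of a single shared one.
    fn status_for(&mut self, network: Network) -> Arc<ConnectionStatus> {
        self.connection_statuses
            .entry(network)
            .or_insert_with(|| Arc::new(ConnectionStatus))
            .clone()
    }

    /// The UI reads only the active network's handle.
    fn active_status(&mut self) -> Arc<ConnectionStatus> {
        let network = self.active_network;
        self.status_for(network)
    }
}
```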
🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/app.rs`:
- [BLOCKING] lines 294-308: Lazy network contexts still share one mutable `ConnectionStatus` instance
`AppState::new_inner` constructs a single `Arc<ConnectionStatus>` and passes clones of it into every eagerly created `AppContext`. The lazy `BackendTask::SwitchNetwork` path also reuses `self.connection_status.clone()` when it creates new contexts later. Each context's SPV manager stores that handle and emits status updates through it, so background work from an inactive network can still overwrite the active network's UI connection indicators after a switch. This defeats the per-network isolation the lazy-context refactor is trying to introduce.
In `src/context/mod.rs`:
- [BLOCKING] lines 579-590: `reinit_core_client_and_sdk()` no longer honors cookie auth
The reinit path now builds the replacement Core RPC client with `Auth::UserPass(...)` directly instead of going through `create_core_rpc_client()`, which tries `Auth::CookieFile(...)` first. On setups that rely on `.cookie` authentication, the new reinit path can replace a working client with one that uses empty/default user/pass credentials, breaking reconnects after password save, DAPI discovery reinit, or MCP-triggered reinitialization.
In `src/ui/network_chooser_screen.rs`:
- [BLOCKING] lines 2158-2163: Manual DAPI discovery cannot bootstrap a network with no existing config block
The `DapiNodesDiscovered` handler now requires `config.config_for_network(network)` to already return `Some(network_cfg)` before it saves discovered node addresses. If the user is trying to bootstrap a network that is absent from `.env`, this branch is skipped and the discovered addresses are discarded instead of creating/applying a new config block. That breaks the first-time discovery flow for exactly the missing-config case the new UI is meant to support.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/ui/network_chooser_screen.rs (1)
478-528: ⚠️ Potential issue | 🟠 Major

Don't tell users a dropped RPC password will apply later. On this lazy-context path there are only two places to keep the new password: the config file or an existing `AppContext`. If `Config::load_from`/`config_for_network` short-circuits, Save becomes a silent no-op; and if `config.save()` fails while `context_for_network()` is `None`, lines 523-528 say the change will apply when the network is activated even though `updated_config` is discarded at the end of the block. Please surface those failures and only show a deferred-apply message when the value was actually persisted or staged somewhere.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/ui/network_chooser_screen.rs` around lines 478 - 528, The code updates a password only in updated_config then may discard it when Config::load_from / config_for_network short-circuits or when config.save() fails, but still shows a "will apply when activated" message; change the control flow so you first verify Config::load_from and config.config_for_network(self.current_network) actually returned Some before creating updated_config, then attempt config.save() and branch: if save succeeded and context_for_network(...) is None, show the deferred-apply message (since persisted to disk); if save failed and there is no app context, show an explicit error/warning that the password was not saved and will be lost (do not claim it will apply later); if save succeeded and app context exists, update the in-memory AppContext config as you already do; use the existing symbols Config::load_from, config_for_network, update_core_rpc_password, config.save, context_for_network, updated_config, and config_save_failed to implement these checks and messages.
♻️ Duplicate comments (15)
src/backend_task/error.rs (1)
391-425: ⚠️ Potential issue | 🟡 Minor

Escape user-supplied token fields before interpolating them into `Display`. `token_name` and `language_code` are copied straight into user-facing messages here. Invalid values can contain tabs, newlines, or other control characters, which makes the banner/MCP message itself hard to read and brittle for translation extraction. Keep the raw values for `Debug`/source only, and store a display-safe copy for the `#[error(...)]` text.

As per coding guidelines, all user-facing strings must be simple, complete sentences extractable as single translation units with named placeholders for dynamic values and no logic in the text itself.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend_task/error.rs` around lines 391 - 425, The error variants (InvalidTokenNameCharacter and InvalidTokenLanguageCode — and any variant that currently interpolates raw user input like token_name, language_code, or similar) must stop embedding raw user-supplied values in the #[error(...)] text; add separate display-safe fields (e.g., token_name_display: String, language_code_display: String) to those enum variants and use those display-safe fields in the #[error(...)] format strings, while keeping the original raw values (token_name, language_code) only for Debug/source_error storage; update places that construct these variants to populate both the raw and escaped/display-safe fields (escaping or sanitizing control characters and newlines for the display field) so user-facing messages remain single-line, translation-friendly placeholders.

src/mcp/error.rs (1)
39-47: ⚠️ Potential issue | 🟠 Major

Do not return `TaskError`'s raw debug chain to MCP callers. `format!("{task_err:?}")` can expose host paths and other internals that are intentionally kept out of `Display`. Return a small sanitized payload derived from typed `TaskError` variants instead of serializing the full debug chain.

Based on learnings: When MCP clients can receive error messages produced by Rust `Display` implementations, avoid leaking filesystem paths (e.g., XDG/runtime config dirs like `~/.local/share/...`). Specifically, sanitize output to omit paths and keep the detailed path information only in `Debug` (for logs), and ensure the MCP error mapping uses the sanitized `Display` so localhost/stdio errors do not expose sensitive host path details to the MCP caller.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mcp/error.rs` around lines 39 - 47, McpToolError::TaskFailed is currently returning the raw debug chain (format!("{task_err:?}")) which can leak host paths and internals; instead map the typed TaskError variants to a small sanitized payload and return the sanitized Display text (e.g., task_err.to_string() or a short enum->message mapping) as the MCP payload for CODE_TASK_FAILED, ensuring any detailed Debug output stays in logs only (use e.g., log::debug!("TaskFailed debug: {:?}", task_err)). Replace the serde_json::Value::String(details) usage with a string derived from the sanitized mapping and remove direct Debug serialization to avoid exposing filesystem paths or other internals.

src/context/wallet_lifecycle.rs (1)
202-219: ⚠️ Potential issue | 🟡 Minor

Don't turn the shielded backfill into a silent no-op. `ok()?` plus `unwrap_or_default()` hides lock/read failures here and just returns an empty candidate list. Because this path only runs when the shielded gate first becomes available, skipped wallets may not get another automatic initialization attempt in the same session.

Suggested guard

```diff
-        let candidates: Vec<WalletSeedHash> = (|| {
-            let wallets = self.wallets.read().ok()?;
-            let existing = self.shielded_states.lock().ok()?;
-            Some(
-                wallets
-                    .iter()
-                    .filter(|(hash, wallet_arc)| {
-                        !existing.contains_key(*hash)
-                            && wallet_arc.read().ok().map(|w| w.is_open()).unwrap_or(false)
-                    })
-                    .map(|(hash, _)| *hash)
-                    .collect(),
-            )
-        })()
-        .unwrap_or_default();
+        let candidates: Vec<WalletSeedHash> = match (self.wallets.read(), self.shielded_states.lock()) {
+            (Ok(wallets), Ok(existing)) => wallets
+                .iter()
+                .filter_map(|(hash, wallet_arc)| match wallet_arc.read() {
+                    Ok(wallet) if !existing.contains_key(*hash) && wallet.is_open() => Some(*hash),
+                    Ok(_) => None,
+                    Err(e) => {
+                        tracing::error!(seed = %hex::encode(hash), error = %e, "Failed to inspect wallet during shielded backfill");
+                        None
+                    }
+                })
+                .collect(),
+            (Err(e), _) => {
+                tracing::error!(error = %e, "Failed to read wallets during shielded backfill");
+                return;
+            }
+            (_, Err(e)) => {
+                tracing::error!(error = %e, "Failed to read shielded state during shielded backfill");
+                return;
+            }
+        };
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/context/wallet_lifecycle.rs` around lines 202 - 219, The closure in init_missing_shielded_wallets currently swallows lock/read errors via ok()? and unwrap_or_default(), turning failures into an empty candidate list; instead, explicitly handle Err from wallets.read() and shielded_states.lock(): log the specific error (using the context logger/tracing) and return early so failures are visible (or propagate a Result), e.g. replace the ok()? chain with matches or if let Err(e) for wallets.read() and shielded_states.lock(), log a descriptive message including e and skip/return rather than returning an empty Vec; keep the subsequent initialize_shielded_wallet call and ensure the code references wallets.read(), shielded_states.lock(), and initialize_shielded_wallet when implementing the change.

src/backend_task/wallet/fetch_platform_address_balances.rs (1)
173-177: ⚠️ Potential issue | 🟠 Major

Still silently dropping invalid stored platform addresses here. `filter_map(...ok())` means one bad stored key still turns into a partial success result, even though this branch is supposed to return the wallet's complete `platform_address_info`. Please fail the task with a typed `TaskError` here, or normalize these keys before they are stored. As per coding guidelines, `src/backend_task/**/*.rs`: "When adding new backend error types, add a dedicated `TaskError` variant rather than converting to `String`."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend_task/wallet/fetch_platform_address_balances.rs` around lines 173 - 177, The current pipeline silently drops invalid stored keys by using filter_map(...ok()) when converting addr into PlatformAddress; replace that with explicit error handling so any try_from(addr) failure returns a typed TaskError instead of being filtered out. In the closure where PlatformAddress::try_from(addr.clone()) is used (the filter_map in fetch_platform_address_balances), change to map/and_then or an explicit match that converts Err(_) into Err(TaskError::InvalidPlatformAddress { raw: addr.clone(), source: /* preserve original error */ }) and propagate the error so the task fails rather than producing a partial result. Ensure you add/use a TaskError variant named InvalidPlatformAddress (or similar) as required by src/backend_task/**/*.rs guidelines and preserve the original conversion error as the source.

src/app.rs (1)
399-400: ⚠️ Potential issue | 🟠 Major

Pass `chosen_network` instead of `saved_network` to `NetworkChooserScreen::new()`. After the fallback logic (lines 320-351), `chosen_network` may differ from `saved_network` if the saved network's context creation failed. Passing `saved_network` to the chooser screen could initialize it with a network that has no context.

🔧 Suggested fix

```diff
 let network_chooser_screen =
-    NetworkChooserScreen::new(&network_contexts, saved_network, overwrite_dash_conf);
+    NetworkChooserScreen::new(&network_contexts, chosen_network, overwrite_dash_conf);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app.rs` around lines 399 - 400, The code currently passes saved_network into NetworkChooserScreen::new, but after the fallback logic chosen_network may differ; update the call to pass chosen_network instead of saved_network so the chooser is initialized with the resolved network that has a valid context (i.e., replace the argument in the NetworkChooserScreen::new(&network_contexts, saved_network, overwrite_dash_conf) call with chosen_network).

src/backend_task/mod.rs (2)
469-473: ⚠️ Potential issue | 🟡 Minor

Log a warning when `update_settings` fails instead of silently discarding the error. Using `.ok()` silently discards errors from persisting the network choice. While the switch can still succeed, silent failures make debugging difficult when users report inconsistent startup behavior.

🔧 Suggested fix

```diff
 // Persist the network choice so subsequent startups (GUI, CLI,
 // MCP) all begin on the same network.
-new_ctx
-    .update_settings(crate::ui::RootScreenType::RootScreenNetworkChooser)
-    .ok();
+if let Err(e) = new_ctx
+    .update_settings(crate::ui::RootScreenType::RootScreenNetworkChooser)
+{
+    tracing::warn!(?network, "Failed to persist network choice: {e}");
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend_task/mod.rs` around lines 469 - 473, The call to persist the network choice currently swallows errors via .ok(); change it to handle the Result from new_ctx.update_settings(crate::ui::RootScreenType::RootScreenNetworkChooser) and log a warning on Err instead of discarding it (e.g. use if let Err(e) = new_ctx.update_settings(...) { log::warn!("Failed to persist network choice (RootScreenNetworkChooser): {:?}", e); } or an equivalent match/tracing::warn!), so failures are visible for debugging while keeping the successful path unchanged.
106-112: ⚠️ Potential issue | 🟡 Minor

Documentation is misleading: `SwitchNetwork` is handled by `run_backend_task`. The doc comment states "Intercepted by `AppState` — never dispatched to `AppContext::run_backend_task`", but lines 456-497 implement a handler for this task in `run_backend_task`. This is confusing for maintainers.

📝 Suggested fix

```diff
 /// Create a new network context and switch to it.
-/// Intercepted by `AppState` — never dispatched to `AppContext::run_backend_task`.
+/// Dispatched to `AppContext::run_backend_task`; the returned
+/// `NetworkContextCreated` result is processed by `AppState` to update
+/// the active context.
 /// When `start_spv` is true, SPV sync is started on the new context automatically.
 SwitchNetwork {
     network: Network,
     start_spv: bool,
 },
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend_task/mod.rs` around lines 106 - 112, The doc comment on the SwitchNetwork enum variant is incorrect: it claims the task is intercepted by AppState and never dispatched to AppContext::run_backend_task, but the code actually handles SwitchNetwork in AppContext::run_backend_task; update the documentation for the SwitchNetwork variant to reflect the true flow (mention that SwitchNetwork is handled by AppContext::run_backend_task and that AppState does not intercept it), or remove the incorrect sentence; reference the SwitchNetwork variant and AppContext::run_backend_task when making the change so maintainers can find the related handler to keep docs and code in sync.

src/mcp/server.rs (1)
91-106: ⚠️ Potential issue | 🟠 Major

Sanitize initialization errors before returning them to MCP callers. The `ctx()` method forwards errors from `init_app_context()` directly to callers. The `init_app_context()` function (lines 194-253) builds error messages that may include filesystem paths (e.g., `"data dir: {e}"`, `"db path: {e}"`). These paths could leak sensitive host information to MCP clients.

🛡️ Suggested approach

```diff
 self.init_guard
     .get_or_try_init(|| async {
-        let app_context = init_app_context().await?;
-        ctx_holder.store(app_context);
-        Ok::<(), McpError>(())
+        match init_app_context().await {
+            Ok(app_context) => {
+                ctx_holder.store(app_context);
+                Ok(())
+            }
+            Err(e) => {
+                tracing::error!("MCP context init failed: {e:?}");
+                Err(McpError::internal_error(
+                    "Failed to initialize application context",
+                    None,
+                ))
+            }
+        }
     })
     .await?;
```

Based on learnings: "When MCP clients can receive error messages produced by Rust `Display` implementations, avoid leaking filesystem paths... keep the detailed path information only in `Debug` (for logs)."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mcp/server.rs` around lines 91 - 106, The ctx() method currently propagates errors from init_app_context() which may contain sensitive filesystem paths; modify ctx() so that when init_guard.get_or_try_init(...) fails, you capture the error from init_app_context(), log the full error details (using Debug) via the appropriate logger, and then return a sanitized McpError::internal_error with a generic message (no paths or Display details) to callers. Locate the async init block around init_guard.get_or_try_init in ctx(), wrap the awaited result to map/log the original error, and ensure the final error returned by ctx() does not include sensitive path info (keep detailed error only in logs).

src/mcp/tools/network.rs (1)
226-232: ⚠️ Potential issue | 🟠 Major

Return actual SPV state on the no-op path instead of hard-coding `true`. When the active context is already on the target network, the code returns `spv_started: true` unconditionally. However, SPV may not have started, may have failed, or the backend may be in RPC mode. This makes the tool contract unreliable.

🔧 Suggested fix

```diff
 // Already on the target network — no-op.
 if ctx.network() == target {
+    let spv_running = ctx.connection_status().is_spv_connected();
     return Ok(NetworkSwitchOutput {
         active: network_display_name(target).to_owned(),
-        spv_started: true,
+        spv_started: spv_running,
     });
 }
```

You may need to add a method to `ConnectionStatus` if one doesn't exist, or use an existing indicator like checking if `spv_connected_peers() > 0`.

Based on learnings: "For connection health (status, peer counts, errors, overall state), always read from `ConnectionStatus`, not directly from `SpvManager` or other subsystems."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mcp/tools/network.rs` around lines 226 - 232, The no-op branch in the network switch currently returns spv_started: true unconditionally; instead query the actual SPV state from the ConnectionStatus (e.g., add/use a method like ConnectionStatus::is_spv_started() or check spv_connected_peers() on the context/connection status) and return that value in the NetworkSwitchOutput returned by the ctx.network() == target path (keep network_display_name(target) for active). If ConnectionStatus lacks a suitable accessor, add one that encapsulates the health check and use it here rather than reading SpvManager directly.

src/context/mod.rs (2)
579-590: ⚠️ Potential issue | 🟠 Major

Reuse `create_core_rpc_client()` to preserve cookie authentication support. This branch hard-codes `Auth::UserPass` authentication, bypassing the `create_core_rpc_client()` helper (lines 654-677) which tries cookie authentication first. This breaks installs that rely on cookie auth.

♻️ Suggested fix

```diff
 // 2. Rebuild the RPC client with the new password
 let addr = format!("http://{}:{}", cfg.rpc_host(), cfg.rpc_port(self.network));
-let new_client = Client::new(
+let new_client = Self::create_core_rpc_client(
     &addr,
-    Auth::UserPass(
-        cfg.core_rpc_user.clone().unwrap_or_default(),
-        cfg.core_rpc_password.clone().unwrap_or_default(),
-    ),
+    self.network,
+    &cfg.devnet_name,
+    &cfg,
 )
 .map_err(|e| TaskError::RpcProviderCreationFailed {
     detail: e.to_string(),
 })?;
```

Based on learnings: RPC paths are deprecated soon, so this should stay a small compatibility fix instead of forking auth logic.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/context/mod.rs` around lines 579 - 590, The current code rebuilds the RPC client by calling Client::new with Auth::UserPass (Client::new, Auth::UserPass), which bypasses cookie auth and breaks installs relying on cookies; replace this block to call the existing helper create_core_rpc_client(...) (the same helper used earlier) and propagate errors as TaskError::RpcProviderCreationFailed so cookie-first auth is preserved instead of hard-coding UserPass authentication. Ensure you pass the same cfg and self.network/context values to create_core_rpc_client and keep the existing error mapping to TaskError::RpcProviderCreationFailed.
452-492: ⚠️ Potential issue | 🟠 Major

Backend mode is committed before provider binding succeeds. The `set_core_backend_mode_inner` method stores the mode (lines 453-454) and optionally persists it (lines 456-462) before attempting to bind the new provider (lines 468-490). If the provider bind fails, the app reports/stores the new mode while the SDK is still wired to the old provider.

🔧 Suggested fix structure

Move the `core_backend_mode.store()` and persistence logic to execute only after the provider binding succeeds:

```diff
 fn set_core_backend_mode_inner(self: &Arc<Self>, mode: CoreBackendMode, persist: bool) {
-    self.core_backend_mode
-        .store(mode.as_u8(), Ordering::Relaxed);
-
-    if persist {
-        let _guard = self.invalidate_settings_cache();
-        if let Err(e) = self.db.update_core_backend_mode(mode.as_u8()) {
-            tracing::error!("Failed to persist core backend mode: {}", e);
-        }
-    }
-
     // Switch SDK context provider to match the selected backend.
     match mode {
         CoreBackendMode::Spv => {
             if let Err(e) = self.spv_context_provider.read()...bind_app_context(...) {
                 tracing::error!("Failed to bind SPV provider: {}", e);
                 return;
             }
         }
         CoreBackendMode::Rpc => {
             if let Err(e) = self.rpc_context_provider.read()...bind_app_context(...) {
                 tracing::error!("Failed to bind RPC provider: {}", e);
                 return;
             }
         }
     }
+
+    // Only commit the mode after successful provider bind
+    self.core_backend_mode.store(mode.as_u8(), Ordering::Relaxed);
+    if persist {
+        let _guard = self.invalidate_settings_cache();
+        if let Err(e) = self.db.update_core_backend_mode(mode.as_u8()) {
+            tracing::error!("Failed to persist core backend mode: {}", e);
+        }
+    }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/context/mod.rs` around lines 452 - 492, In set_core_backend_mode_inner, perform the provider bind first and only update and persist core_backend_mode after the bind succeeds: attempt to bind using spv_context_provider (for CoreBackendMode::Spv) or rpc_context_provider (for CoreBackendMode::Rpc) as currently done, and if the bind returns Ok then call core_backend_mode.store(...) and, if persist is true, take the invalidate_settings_cache() guard and call db.update_core_backend_mode(...); if the bind fails return early without changing core_backend_mode or writing to db. Ensure you reference set_core_backend_mode_inner, CoreBackendMode::Spv/Rpc, spv_context_provider, rpc_context_provider, core_backend_mode, invalidate_settings_cache, and db.update_core_backend_mode when making the change.

docs/ai-design/2026-03-30-dapi-discovery-button/ux-spec.md (1)
331-346: ⚠️ Potential issue | 🟡 Minor

The task name `SystemTask::DiscoverDapiNodes` is stale. The spec references `SystemTask::DiscoverDapiNodes` but the actual implementation in `src/backend_task/mod.rs` uses `BackendTask::DiscoverDapiNodes`. This inconsistency could confuse implementers.

📝 Suggested fix

Update section 6 to reference the correct type:

```diff
-A new `SystemTask` variant handles the async discovery:
+A new `BackendTask` variant handles the async discovery:
-SystemTask::DiscoverDapiNodes { network: Network }
+BackendTask::DiscoverDapiNodes { network: Network }
-This calls the existing `try_discover_nodes()` from `src/dapi_discovery.rs`
+This calls `dapi_discovery::discover_and_format()` from `src/backend_task/dapi_discovery.rs`
```

Also update the checklist item at line 388:

```diff
-- [ ] Add `SystemTask::DiscoverDapiNodes` backend task variant
+- [ ] Add `BackendTask::DiscoverDapiNodes` backend task variant
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/ai-design/2026-03-30-dapi-discovery-button/ux-spec.md` around lines 331 - 346, The spec currently references the stale SystemTask::DiscoverDapiNodes; update the text to use the actual enum variant BackendTask::DiscoverDapiNodes { network: Network } and ensure the described call uses dapi_discovery::discover_and_format() (the async helper in src/backend_task/dapi_discovery.rs) instead of try_discover_nodes(); keep the result described as BackendTaskSuccessResult::DapiNodesDiscovered { network: Network, addresses: Vec<String> } and also update the checklist entry that still mentions the old SystemTask name to reference BackendTask::DiscoverDapiNodes and the new discover_and_format() call.

docs/ai-design/2026-03-30-dapi-discovery-button/wireframe.html (1)
446-463: ⚠️ Potential issue | 🟡 Minor

The implicit `event` global issue remains unaddressed. The `showScenario(name)` function at line 463 uses `event.target.classList.add('active')`, which relies on the deprecated implicit global `event` object. This was flagged in a previous review. Additionally, when called from the dialog buttons (lines 435-436), `event.target` will be the dialog button, not a `.scenario-tab`, causing incorrect active-tab styling.

🔧 Suggested fix

```diff
-function showScenario(name) {
+function showScenario(event, name) {
   // Reset all
   document.querySelectorAll('.scenario-tab').forEach(t => t.classList.remove('active'));
   // ... rest of reset logic ...
   // Activate the clicked tab
-  event.target.classList.add('active');
+  if (event && event.target && event.target.classList.contains('scenario-tab')) {
+    event.target.classList.add('active');
+  } else {
+    // Find and activate the matching tab by name
+    document.querySelector(`.scenario-tab[onclick*="'${name}'"]`)?.classList.add('active');
+  }
```

Then update all callers:

```diff
-<div class="scenario-tab active" onclick="showScenario('default')">
+<div class="scenario-tab active" onclick="showScenario(event, 'default')">
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/ai-design/2026-03-30-dapi-discovery-button/wireframe.html` around lines 446 - 463, The function showScenario(name) currently relies on the deprecated implicit global event and uses event.target, causing wrong element activation; change the signature to accept an explicit second argument (either an Event or the HTMLElement) — e.g. showScenario(name, evtOrEl) — then inside use (evtOrEl instanceof Event ? evtOrEl.currentTarget : evtOrEl) to determine the tab element and call classList.add('active') on that element; update all callers (the dialog buttons and any tab click handlers) to pass either the event object (from the click handler) or the tab element (this or the clicked element) so no code depends on the implicit global event and the correct .scenario-tab is activated.

src/ui/network_chooser_screen.rs (2)
2192-2198: ⚠️ Potential issue | 🟠 Major

Keep reconnect state tied to the reconnect task.

`display_message()` runs for every banner on this screen. Lines 2193-2194 can therefore clear `reinit_banner` and `config_save_failed` before the matching `CoreClientReinitialized` arrives, so an unrelated message can erase the final "saved" vs "session only" outcome. `take_and_clear()` is fine here, but it needs to run only from the reconnect task's success/failure path.
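To make the direction concrete, a minimal sketch of moving the cleanup into the reconnect task's own result handling (the exact match arm and surrounding handler are assumptions about this screen's code, not a verified diff):

```rust
// Sketch only: clear reconnect state from the task's completion path,
// not from display_message().
match result {
    BackendTaskSuccessResult::CoreClientReinitialized => {
        // The reconnect task finished; the outcome is final now.
        self.reinit_banner.take_and_clear();
        self.config_save_failed = false;
    }
    // Unrelated results leave reinit_banner and config_save_failed alone.
    _ => {}
}
```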
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/ui/network_chooser_screen.rs` around lines 2192 - 2198, The display_message method currently clears reinit_banner and resets config_save_failed unconditionally, which can erase the reconnect task's final outcome when unrelated banners appear; change the logic so that take_and_clear() on reinit_banner and setting config_save_failed = false only happen from the reconnect task's success/failure handling (the CoreClientReinitialized/reconnect completion path) instead of inside display_message(); locate display_message (and any reconnect completion handler that emits CoreClientReinitialized) and move the reinit_banner.take_and_clear() and config_save_failed reset into that reconnect task's success/failure branches so other message types won't clear reconnect state.
217-223: ⚠️ Potential issue | 🟠 Major

Do not fall back to another network's context.

With lazy context creation, `current_network` can legitimately have no `AppContext` right after the selector changes but before that network is initialized. Line 221's fallback then makes the rest of this screen read and mutate whichever other network happened to exist, so backend-mode changes and destructive actions like clearing SPV/database state can hit the wrong network.

Possible direction

```diff
-    pub fn current_app_context(&self) -> &Arc<AppContext> {
-        self.context_for_network(self.current_network)
-            .or_else(|| self.network_contexts.values().next())
-            .expect("BUG: no AppContext available for any network")
+    pub fn current_app_context(&self) -> Option<&Arc<AppContext>> {
+        self.context_for_network(self.current_network)
     }
```

Then use `self.data_dir`/`self.db` for shared operations and gate network-scoped controls on `None`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/ui/network_chooser_screen.rs` around lines 217 - 223, The current_app_context method must not fall back to another network's AppContext; remove the .or_else(|| self.network_contexts.values().next()) fallback in current_app_context (and stop unconditionally .expect(...)) — instead change current_app_context (and/or add current_app_context_opt) to return Option<&Arc<AppContext>> by forwarding the result of self.context_for_network(self.current_network), update callers to handle None by gating network-scoped controls/actions, and ensure shared operations use self.data_dir / self.db rather than any network AppContext when current_app_context is None.
🧹 Nitpick comments (1)
docs/ai-design/2026-03-30-dapi-discovery-button/ux-spec.md (1)
43-57: Consider adding language specifiers to fenced code blocks.

Static analysis flagged multiple fenced code blocks without language specifiers. While these are ASCII art diagrams rather than executable code, adding a language hint like `text` or `plaintext` improves tooling compatibility.

Example fix

````diff
-```
+```text
 Connection Settings Card
 +-------------------------------------------------------+
 | Connection Type: [SPV Client v]         (dev mode only) |
````

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/ai-design/2026-03-30-dapi-discovery-button/ux-spec.md` around lines 43 - 57, The fenced ASCII-art block labeled "Connection Settings Card" lacks a language specifier; update the fenced code blocks in docs/ai-design/2026-03-30-dapi-discovery-button/ux-spec.md (e.g., the block starting with "Connection Settings Card") to include a language hint such as ```text or ```plaintext so static analysis and tooling correctly treat the diagram as plain text; apply the same change to any other fenced ASCII-art blocks in this file.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/app.rs`:
- Around line 737-759: The change_network method currently only ignores requests
that target the same network; instead, block any new network switch while one is
pending by replacing the duplicate-target check with a guard that returns if
self.network_switch_pending.is_some(); keep the fast-path check
(context_available_for_network) and existing logic that sets
self.network_switch_pending and dispatches
handle_backend_task(BackendTask::SwitchNetwork { ... }), so that only one
outstanding switch can be in flight and finalize_network_switch is not invoked
with stale contexts.
In `@src/backend_task/error.rs`:
- Around line 978-980: The NetworkContextCreationFailed TaskError variant
currently drops the underlying setup error; modify the
TaskError::NetworkContextCreationFailed variant to include a #[source] field
(e.g. #[source] source: anyhow::Error or Box<dyn std::error::Error + Send +
Sync>) so the error chain is preserved, keeping the existing #[error("Could not
connect to {network:?}...")] user message only; then update every site that
constructs NetworkContextCreationFailed to pass the original error into the new
source field (replace creations like NetworkContextCreationFailed { network }
with NetworkContextCreationFailed { network, source: err }) so diagnostics are
retained.
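As an illustration of the requested shape, a sketch under the assumption that `anyhow::Error` is acceptable as the source type (any boxed `std::error::Error + Send + Sync` works the same way, and the message text here is only illustrative):

```rust
use thiserror::Error;

#[derive(Debug, Error)]
pub enum TaskError {
    // Display stays user-friendly; the chain lives in the source field.
    #[error("Could not connect to {network}. Check your connection settings and try again.")]
    NetworkContextCreationFailed {
        network: Network,
        #[source]
        source: anyhow::Error,
    },
    // ... other variants unchanged ...
}

// Construction sites then pass the original error along:
// TaskError::NetworkContextCreationFailed { network, source: err.into() }
```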
In `@src/ui/mod.rs`:
- Around line 817-820: The WalletSendScreen, SingleKeyWalletSendScreen, and
CreateAssetLockScreen branches currently just assign the new app_context (e.g.,
Screen::WalletSendScreen(screen) => { screen.app_context = app_context; ... })
which leaves screens holding stale wallet Arcs from the previous network; update
these code paths so that on context switch you either re-resolve the wallet
object from the new app_context by identifier (lookup wallet by id from
app_context before using it) or dismiss/replace the detail screens instead of
reusing them — specifically modify the Screen::WalletSendScreen,
SingleKeyWalletSendScreen, and CreateAssetLockScreen handling to fetch fresh
wallet references from app_context (or call a dismiss/replace routine) and then
call invalidate_address_input()/equivalent initialization only after the fresh
resolution.
- Around line 805-810: In the Screen::WalletsBalancesScreen(screen) branch add
clearing of all transient wallet-screen state on network switch: reset
pending_platform_balance_refresh, clear the unlock/search-after-unlock flags
(e.g. unlock_after_search/unlock_after_action or equivalent fields on the
screen/AppState), and dismiss/clear any pending Core-wallet dialog state so none
of those callbacks fire against the new app_context; implement these additional
clears alongside the existing calls to reset_pending_list_state(),
update_selected_wallet_for_network(), and invalidate_address_inputs().
In `@src/ui/network_chooser_screen.rs`:
- Around line 2158-2187: The code always shows a success MessageBanner after
attempting to save the updated network config and reinitialize the SDK; change
this so success is only shown when both Config::save(...) and
Arc::clone(app_context).reinit_core_client_and_sdk() succeed, and show an
appropriate user-facing error/warning banner with actionable guidance and
technical details (using BannerHandle::with_details()) when either operation
fails. Specifically: after Config::load_from(...) and
config.update_config_for_network(...), capture the result of
config.save(data_dir) and if Err -> call MessageBanner::set_global or
BannerHandle::with_details() with a clear message like "Failed to save DAPI
addresses — please retry or restart the app" and attach the error details; only
proceed to call reinit_core_client_and_sdk() if save succeeded; if reinit
returns Err -> show a different banner "Failed to apply new DAPI addresses for
this session — restart or switch networks to apply" with details; only show the
success banner (MessageBanner::set_global("Updated to {count} node addresses."))
when both save and reinit (when context exists) succeeded, otherwise avoid the
success message.
In `@src/ui/wallets/wallets_screen/mod.rs`:
- Around line 2884-2889: The code currently calls self.app_context.network()
when converting PlatformAddress to Core Address
(platform_addr.to_address_with_network(network)) which can be the wrong network
after a switch; update the success result type that produces balances to include
the originating network (or otherwise thread the originating Network through the
task result) and use that originating Network value when calling
platform_addr.to_address_with_network(...); ensure the conversion in the loop
that calls wallet.set_platform_address_info(core_addr, balance, nonce) uses the
carried originating network rather than self.app_context.network(), and update
the producer of balances and any call sites to populate that network field.
---
Outside diff comments:
In `@src/ui/network_chooser_screen.rs`:
- Around line 478-528: The code updates a password only in updated_config then
may discard it when Config::load_from / config_for_network short-circuits or
when config.save() fails, but still shows a "will apply when activated" message;
change the control flow so you first verify Config::load_from and
config.config_for_network(self.current_network) actually returned Some before
creating updated_config, then attempt config.save() and branch: if save
succeeded and context_for_network(...) is None, show the deferred-apply message
(since persisted to disk); if save failed and there is no app context, show an
explicit error/warning that the password was not saved and will be lost (do not
claim it will apply later); if save succeeded and app context exists, update the
in-memory AppContext config as you already do; use the existing symbols
Config::load_from, config_for_network, update_core_rpc_password, config.save,
context_for_network, updated_config, and config_save_failed to implement these
checks and messages.
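A rough sketch of that branching, using the symbols named above (the `set_config` setter and the `show_info`/`show_error` banner helpers are placeholders, not verified APIs):

```rust
// Sketch: decide the user-facing message from what actually happened.
let Ok(mut config) = Config::load_from(&self.data_dir) else { return };
let Some(mut net_cfg) = config.config_for_network(self.current_network).cloned() else {
    return; // no config block for this network yet
};
net_cfg.update_core_rpc_password(new_password);
config.update_config_for_network(self.current_network, net_cfg.clone());
let saved = config.save(&self.data_dir).is_ok();
self.config_save_failed = !saved;
match (saved, self.context_for_network(self.current_network)) {
    (true, Some(ctx)) => {
        ctx.set_config(net_cfg); // apply in memory too (assumed setter)
    }
    (true, None) => {
        self.show_info("Saved. It will apply when this network is activated.");
    }
    (false, _) => {
        self.show_error("The password could not be saved and will be lost.");
    }
}
```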
---
Duplicate comments:
In `@docs/ai-design/2026-03-30-dapi-discovery-button/ux-spec.md`:
- Around line 331-346: The spec currently references the stale
SystemTask::DiscoverDapiNodes; update the text to use the actual enum variant
BackendTask::DiscoverDapiNodes { network: Network } and ensure the described
call uses dapi_discovery::discover_and_format() (the async helper in
src/backend_task/dapi_discovery.rs) instead of try_discover_nodes(); keep the
result described as BackendTaskSuccessResult::DapiNodesDiscovered { network:
Network, addresses: Vec<String> } and also update the checklist entry that still
mentions the old SystemTask name to reference BackendTask::DiscoverDapiNodes and
the new discover_and_format() call.
In `@docs/ai-design/2026-03-30-dapi-discovery-button/wireframe.html`:
- Around line 446-463: The function showScenario(name) currently relies on the
deprecated implicit global event and uses event.target, causing wrong element
activation; change the signature to accept an explicit second argument (either
an Event or the HTMLElement) — e.g. showScenario(name, evtOrEl) — then inside
use (evtOrEl instanceof Event ? evtOrEl.currentTarget : evtOrEl) to determine
the tab element and call classList.add('active') on that element; update all
callers (the dialog buttons and any tab click handlers) to pass either the event
object (from the click handler) or the tab element (this or the clicked element)
so no code depends on the implicit global event and the correct .scenario-tab is
activated.
In `@src/app.rs`:
- Around line 399-400: The code currently passes saved_network into
NetworkChooserScreen::new, but after the fallback logic chosen_network may
differ; update the call to pass chosen_network instead of saved_network so the
chooser is initialized with the resolved network that has a valid context (i.e.,
replace the argument in the NetworkChooserScreen::new(&network_contexts,
saved_network, overwrite_dash_conf) call with chosen_network).
In `@src/backend_task/error.rs`:
- Around line 391-425: The error variants (InvalidTokenNameCharacter and
InvalidTokenLanguageCode — and any variant that currently interpolates raw user
input like token_name, language_code, or similar) must stop embedding raw
user-supplied values in the #[error(...)] text; add separate display-safe fields
(e.g., token_name_display: String, language_code_display: String) to those enum
variants and use those display-safe fields in the #[error(...)] format strings,
while keeping the original raw values (token_name, language_code) only for
Debug/source_error storage; update places that construct these variants to
populate both the raw and escaped/display-safe fields (escaping or sanitizing
control characters and newlines for the display field) so user-facing messages
remain single-line, translation-friendly placeholders.
In `@src/backend_task/mod.rs`:
- Around line 469-473: The call to persist the network choice currently swallows
errors via .ok(); change it to handle the Result from
new_ctx.update_settings(crate::ui::RootScreenType::RootScreenNetworkChooser) and
log a warning on Err instead of discarding it (e.g. use if let Err(e) =
new_ctx.update_settings(...) { log::warn!("Failed to persist network choice
(RootScreenNetworkChooser): {:?}", e); } or an equivalent match/tracing::warn!),
so failures are visible for debugging while keeping the successful path
unchanged.
- Around line 106-112: The doc comment on the SwitchNetwork enum variant is
incorrect: it claims the task is intercepted by AppState and never dispatched to
AppContext::run_backend_task, but the code actually handles SwitchNetwork in
AppContext::run_backend_task; update the documentation for the SwitchNetwork
variant to reflect the true flow (mention that SwitchNetwork is handled by
AppContext::run_backend_task and that AppState does not intercept it), or remove
the incorrect sentence; reference the SwitchNetwork variant and
AppContext::run_backend_task when making the change so maintainers can find the
related handler to keep docs and code in sync.
In `@src/backend_task/wallet/fetch_platform_address_balances.rs`:
- Around line 173-177: The current pipeline silently drops invalid stored keys
by using filter_map(...ok()) when converting addr into PlatformAddress; replace
that with explicit error handling so any try_from(addr) failure returns a typed
TaskError instead of being filtered out. In the closure where
PlatformAddress::try_from(addr.clone()) is used (the filter_map in
fetch_platform_address_balances), change to map/and_then or an explicit match
that converts Err(_) into Err(TaskError::InvalidPlatformAddress { raw:
addr.clone(), source: /* preserve original error */ }) and propagate the error
so the task fails rather than producing a partial result. Ensure you add/use a
TaskError variant named InvalidPlatformAddress (or similar) as required by
src/backend_task/**/*.rs guidelines and preserve the original conversion error
as the source.
In `@src/context/mod.rs`:
- Around line 579-590: The current code rebuilds the RPC client by calling
Client::new with Auth::UserPass (Client::new, Auth::UserPass), which bypasses
cookie auth and breaks installs relying on cookies; replace this block to call
the existing helper create_core_rpc_client(...) (the same helper used earlier)
and propagate errors as TaskError::RpcProviderCreationFailed so cookie-first
auth is preserved instead of hard-coding UserPass authentication. Ensure you
pass the same cfg and self.network/context values to create_core_rpc_client and
keep the existing error mapping to TaskError::RpcProviderCreationFailed.
- Around line 452-492: In set_core_backend_mode_inner, perform the provider bind
first and only update and persist core_backend_mode after the bind succeeds:
attempt to bind using spv_context_provider (for CoreBackendMode::Spv) or
rpc_context_provider (for CoreBackendMode::Rpc) as currently done, and if the
bind returns Ok then call core_backend_mode.store(...) and, if persist is true,
take the invalidate_settings_cache() guard and call
db.update_core_backend_mode(...); if the bind fails return early without
changing core_backend_mode or writing to db. Ensure you reference
set_core_backend_mode_inner, CoreBackendMode::Spv/Rpc, spv_context_provider,
rpc_context_provider, core_backend_mode, invalidate_settings_cache, and
db.update_core_backend_mode when making the change.
In `@src/context/wallet_lifecycle.rs`:
- Around line 202-219: The closure in init_missing_shielded_wallets currently
swallows lock/read errors via ok()? and unwrap_or_default(), turning failures
into an empty candidate list; instead, explicitly handle Err from wallets.read()
and shielded_states.lock(): log the specific error (using the context
logger/tracing) and return early so failures are visible (or propagate a
Result), e.g. replace the ok()? chain with matches or if let Err(e) for
wallets.read() and shielded_states.lock(), log a descriptive message including e
and skip/return rather than returning an empty Vec; keep the subsequent
initialize_shielded_wallet call and ensure the code references wallets.read(),
shielded_states.lock(), and initialize_shielded_wallet when implementing the
change.
In `@src/mcp/error.rs`:
- Around line 39-47: McpToolError::TaskFailed is currently returning the raw
debug chain (format!("{task_err:?}")) which can leak host paths and internals;
instead map the typed TaskError variants to a small sanitized payload and return
the sanitized Display text (e.g., task_err.to_string() or a short enum->message
mapping) as the MCP payload for CODE_TASK_FAILED, ensuring any detailed Debug
output stays in logs only (use e.g., log::debug!("TaskFailed debug: {:?}",
task_err)). Replace the serde_json::Value::String(details) usage with a string
derived from the sanitized mapping and remove direct Debug serialization to
avoid exposing filesystem paths or other internals.
In `@src/mcp/server.rs`:
- Around line 91-106: The ctx() method currently propagates errors from
init_app_context() which may contain sensitive filesystem paths; modify ctx() so
that when init_guard.get_or_try_init(...) fails, you capture the error from
init_app_context(), log the full error details (using Debug) via the appropriate
logger, and then return a sanitized McpError::internal_error with a generic
message (no paths or Display details) to callers. Locate the async init block
around init_guard.get_or_try_init in ctx(), wrap the awaited result to map/log
the original error, and ensure the final error returned by ctx() does not
include sensitive path info (keep detailed error only in logs).
In `@src/mcp/tools/network.rs`:
- Around line 226-232: The no-op branch in the network switch currently returns
spv_started: true unconditionally; instead query the actual SPV state from the
ConnectionStatus (e.g., add/use a method like ConnectionStatus::is_spv_started()
or check spv_connected_peers() on the context/connection status) and return that
value in the NetworkSwitchOutput returned by the ctx.network() == target path
(keep network_display_name(target) for active). If ConnectionStatus lacks a
suitable accessor, add one that encapsulates the health check and use it here
rather than reading SpvManager directly.
In `@src/ui/network_chooser_screen.rs`:
- Around line 2192-2198: The display_message method currently clears
reinit_banner and resets config_save_failed unconditionally, which can erase the
reconnect task's final outcome when unrelated banners appear; change the logic
so that take_and_clear() on reinit_banner and setting config_save_failed = false
only happen from the reconnect task's success/failure handling (the
CoreClientReinitialized/reconnect completion path) instead of inside
display_message(); locate display_message (and any reconnect completion handler
that emits CoreClientReinitialized) and move the reinit_banner.take_and_clear()
and config_save_failed reset into that reconnect task's success/failure branches
so other message types won't clear reconnect state.
- Around line 217-223: The current_app_context method must not fall back to
another network's AppContext; remove the .or_else(||
self.network_contexts.values().next()) fallback in current_app_context (and stop
unconditionally .expect(...)) — instead change current_app_context (and/or add
current_app_context_opt) to return Option<&Arc<AppContext>> by forwarding the
result of self.context_for_network(self.current_network), update callers to
handle None by gating network-scoped controls/actions, and ensure shared
operations use self.data_dir / self.db rather than any network AppContext when
current_app_context is None.
---
Nitpick comments:
In `@docs/ai-design/2026-03-30-dapi-discovery-button/ux-spec.md`:
- Around line 43-57: The fenced ASCII-art block labeled "Connection Settings
Card" lacks a language specifier; update the fenced code blocks in
docs/ai-design/2026-03-30-dapi-discovery-button/ux-spec.md (e.g., the block
starting with "Connection Settings Card") to include a language hint such as
```text or ```plaintext so static analysis and tooling correctly treat the
diagram as plain text; apply the same change to any other fenced ASCII-art
blocks in this file.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 77f10ab7-b26e-423f-8604-5d6bd013e74e
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (26)
- CLAUDE.md
- CONTRIBUTING.md
- Cargo.toml
- docs/MCP.md
- docs/MCP_TOOL_DEVELOPMENT.md
- docs/ai-design/2026-03-30-dapi-discovery-button/ux-spec.md
- docs/ai-design/2026-03-30-dapi-discovery-button/wireframe.html
- src/app.rs
- src/backend_task/error.rs
- src/backend_task/mod.rs
- src/backend_task/platform_info.rs
- src/backend_task/wallet/fetch_platform_address_balances.rs
- src/config.rs
- src/context/mod.rs
- src/context/wallet_lifecycle.rs
- src/mcp/error.rs
- src/mcp/resolve.rs
- src/mcp/server.rs
- src/mcp/tools/network.rs
- src/mcp/tools/wallet.rs
- src/ui/mod.rs
- src/ui/network_chooser_screen.rs
- src/ui/tokens/add_token_by_id_screen.rs
- src/ui/tokens/tokens_screen/mod.rs
- src/ui/tokens/tokens_screen/my_tokens.rs
- src/ui/wallets/wallets_screen/mod.rs
- PROJ-001: use unwrap_or_default() in DapiNodesDiscovered handler so
addresses are saved even when the network has no prior config entry
- PROJ-002: fix SwitchNetwork doc comment — it IS dispatched to
run_backend_task, not intercepted by AppState
- PROJ-003: update CLAUDE.md MCP context provider names to match current
code (ContextHolder::Shared / ContextHolder::Standalone)
- PROJ-005: correct LOCAL_core_rpc_port in .env.example from 20302 to 19898
- CODE-006: use Display format ({network}) instead of Debug ({network:?})
in NetworkContextCreationFailed error message
- CODE-008: remove duplicate update_settings() call from SwitchNetwork
backend task handler; finalize_network_switch() already persists it
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…eness, dialog consistency

- Move network-switch progress banner from per-frame allocation to one-shot creation at switch initiation; clear via take_and_clear() on completion or error (CODE-001)
- Replace synchronous reinit_core_client_and_sdk call in display_task_result with a deferred flag dispatched as BackendTask from the next ui() frame (PROJ-004)
- Make set_ctx! macro exhaustive by adding a skip list for explicitly-handled variants; compiler now catches new Screen additions (CODE-003)
- Wrap blocking AppContext::new() in tokio::task::block_in_place() inside the async SwitchNetwork handler (CODE-002)
- Replace raw egui::Window fetch confirmation with ConfirmationDialog, matching SPV-clear and DB-clear dialogs on the same screen (CODE-009)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 4
♻️ Duplicate comments (18)
src/mcp/error.rs (1)
39-47: ⚠️ Potential issue | 🟠 Major

Do not return `TaskError`'s debug chain to MCP clients.

`format!("{task_err:?}")` can expose config paths, RPC URLs, and nested internal errors that `Display` intentionally hides. Keep `data` empty for now, or replace it with a small sanitized payload derived from typed `TaskError` variants.

Safer fallback

```diff
-            McpToolError::TaskFailed(task_err) => {
-                // Include the full Debug error chain so MCP clients can see
-                // the underlying cause (e.g. SDK/DAPI errors) instead of just
-                // the user-friendly Display message.
-                let details = format!("{task_err:?}");
-                (
-                    CODE_TASK_FAILED,
-                    e.to_string(),
-                    Some(serde_json::Value::String(details)),
-                )
-            }
+            McpToolError::TaskFailed(_task_err) => {
+                (CODE_TASK_FAILED, e.to_string(), None)
+            }
```

Based on learnings: When MCP clients can receive error messages produced by Rust `Display` implementations, avoid leaking filesystem paths; keep the detailed path information only in `Debug` (for logs).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mcp/error.rs` around lines 39 - 47, The current McpToolError::TaskFailed branch is leaking Debug output via format!("{task_err:?}"); change it to avoid returning the debug chain to clients by removing the format!("{task_err:?}") usage and returning an empty data payload instead (i.e., replace the Some(serde_json::Value::String(details)) data value with no data/None) while keeping the existing CODE_TASK_FAILED and e.to_string() values; if you later need structured info, derive a small sanitized payload from typed TaskError variants rather than using task_err's Debug.
src/backend_task/wallet/fetch_platform_address_balances.rs (1)
173-177: ⚠️ Potential issue | 🟠 Major

Do not silently drop stored platform addresses that fail re-encoding.

`filter_map(...ok())` turns this backend success into a partial result: any bad key disappears from the response, so callers lose that address's balance and nonce with no typed failure. Please either normalize these keys where `platform_address_info` is written or bubble a `TaskError` here instead of filtering them out.
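For the bubble-an-error option, a sketch (the `InvalidPlatformAddress` variant and its fields do not exist yet and are named here only for illustration):

```rust
// Sketch: the first bad stored key fails the task with a typed error
// instead of silently vanishing from the response.
let platform_addresses: Vec<PlatformAddress> = stored
    .iter()
    .map(|addr| {
        PlatformAddress::try_from(addr.clone()).map_err(|e| {
            TaskError::InvalidPlatformAddress {
                raw: addr.clone(),
                detail: e.to_string(), // preserve the conversion error
            }
        })
    })
    .collect::<Result<_, _>>()?;
```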
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend_task/wallet/fetch_platform_address_balances.rs` around lines 173 - 177, The current use of filter_map with PlatformAddress::try_from in the iterator silently drops addresses that fail re-encoding, losing balance/nonce data; change this to return an explicit error instead of filtering: replace the filter_map(...) that calls PlatformAddress::try_from(addr.clone()).ok() with logic that attempts PlatformAddress::try_from and on failure returns a TaskError (or aggregates failures) so callers get a typed failure; alternatively, if you prefer fixing upstream, normalize keys when writing platform_address_info so PlatformAddress::try_from cannot fail here — pick one approach and implement it in the code path that builds the response (the iterator using PlatformAddress::try_from and the producer that writes platform_address_info).
src/ui/mod.rs (2)
806-810: ⚠️ Potential issue | 🟠 Major

Clear the rest of the wallet screen's pending state on context switches.

This only resets the `ListCoreWallets` flags. `pending_platform_balance_refresh`, the unlock/search-after-unlock flags, and the pending Core-wallet dialog state can all survive the switch and then fire against the new `app_context` with data captured from the previous network.
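A sketch of the fuller reset; the flag and dialog field names below are guesses at the screen's actual fields (see the prompt that follows), not verified code:

```rust
// Sketch: drop everything that could fire against the new context.
screen.reset_pending_list_state();
screen.update_selected_wallet_for_network();
screen.invalidate_address_inputs();
screen.pending_platform_balance_refresh = false; // stale refresh request
screen.search_after_unlock = false;              // assumed flag name
screen.pending_core_wallet_dialog = None;        // assumed dialog state
screen.app_context = app_context;
```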
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/ui/mod.rs` around lines 806 - 810, The wallet screen context-switch branch currently calls screen.reset_pending_list_state(), screen.update_selected_wallet_for_network(), and screen.invalidate_address_inputs() but doesn't clear other pending state; modify the context-switch handling so it also clears screen.pending_platform_balance_refresh, resets any unlock/search-after-unlock flags (e.g. screen.unlock_after_unlock, screen.search_after_unlock or similar), and dismisses/clears any pending core-wallet dialog state (e.g. screen.pending_core_wallet_dialog or screen.core_wallet_dialog_state) so no stale callbacks run against the old app_context; place these clears alongside the existing reset_pending_list_state() call to ensure all pending state is reset on app_context switch.
817-820: ⚠️ Potential issue | 🟠 Major

Don't keep wallet-bound detail screens alive across a context switch.

These paths only swap `app_context`. `WalletSendScreen`, `SingleKeyWalletSendScreen`, and `CreateAssetLockScreen` still hold wallet Arcs from the previous context, so after a network switch they can keep operating on stale wallet objects. Re-resolve them by identifier in the new `AppContext`, or dismiss them during the switch.

Also applies to: 882-882, 905-905
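One possible shape for the re-resolution path (the `wallet_by_seed_hash` lookup and the `seed_hash` id accessor are hypothetical helpers, not existing APIs):

```rust
// Sketch: rebind to the wallet instance owned by the new context.
Screen::WalletSendScreen(screen) => {
    let id = screen.wallet.read().unwrap().seed_hash(); // assumed id
    if let Some(fresh) = app_context.wallet_by_seed_hash(&id) {
        screen.wallet = Arc::clone(&fresh);
        screen.app_context = app_context;
        screen.invalidate_address_input();
    } else {
        // Wallet absent on the new network: dismiss this detail screen
        // instead of reusing the stale Arc (mechanism left to the caller).
    }
}
```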
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/ui/mod.rs` around lines 817 - 820, The WalletSendScreen, SingleKeyWalletSendScreen, and CreateAssetLockScreen arms only swap app_context but keep old wallet Arcs, so update those branches to re-resolve the wallet/asset lock by identifier from the new AppContext (rather than reusing the existing screen's wallet Arc) or clear/dismiss the screen during an app_context switch; locate the match arms for Screen::WalletSendScreen(screen), Screen::SingleKeyWalletSendScreen(screen) and Screen::CreateAssetLockScreen(screen) and replace the assignment screen.app_context = app_context plus stale-wallet usage with logic that looks up the wallet/asset by its id from the new app_context (or sets the screen to None/exit) and then call invalidate_address_input()/equivalent after successful re-resolution.
src/ui/wallets/wallets_screen/mod.rs (1)
2884-2888: ⚠️ Potential issue | 🟠 Major

Use the task's originating network when re-encoding these addresses.

`self.app_context.network()` is whichever network is active when this result is rendered, not necessarily the one that produced `PlatformAddressBalances`. After a switch, an in-flight result for the same seed hash can repopulate `platform_address_info` with the wrong address prefix. Carry the originating network in the success result and use that here instead.
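Sketched direction, assuming the success result grows a `network` field (the struct-like variant shape is an assumption about the eventual change):

```rust
// Sketch: use the network the task actually queried, not the active one.
BackendTaskSuccessResult::PlatformAddressBalances { network, balances } => {
    for (platform_addr, balance, nonce) in balances {
        let core_addr = platform_addr.to_address_with_network(network);
        wallet.set_platform_address_info(core_addr, balance, nonce);
    }
}
```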
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/ui/wallets/wallets_screen/mod.rs` around lines 2884 - 2888, The code is using self.app_context.network() when converting PlatformAddress back to core addresses, which can be wrong after a network switch; change the conversion to use the originating network carried on the successful result instead. Modify the success type that produces balances (e.g., PlatformAddressBalances / the result struct that contains balances) to include an originating_network field, populate that field where the result is constructed, and then here replace let network = self.app_context.network() with the originating network from the result (the variable already holding balances), before calling platform_addr.to_address_with_network(...) and wallet.set_platform_address_info(...).
src/mcp/server.rs (1)
95-100: ⚠️ Potential issue | 🟠 Major

Sanitize standalone init failures before returning them to MCP callers.

This closure still forwards `init_app_context()` errors verbatim, and that helper formats config/database failures with local path data. A failed first tool call will leak host filesystem paths over MCP unless you log the detailed error server-side and replace the client-visible message here with a generic initialization failure.

🛡️ Proposed fix

```diff
 if let ContextHolder::Standalone(_) = &self.ctx {
     let ctx_holder = self.ctx.clone();
     self.init_guard
         .get_or_try_init(|| async {
-            let app_context = init_app_context().await?;
+            let app_context = init_app_context().await.map_err(|err| {
+                tracing::error!("Failed to initialize application context: {err:?}");
+                McpError::internal_error(
+                    "Failed to initialize the application context.",
+                    None,
+                )
+            })?;
             ctx_holder.store(app_context);
             Ok::<(), McpError>(())
         })
```

Based on learnings, when MCP clients can receive error messages produced by Rust `Display` implementations, avoid leaking filesystem paths and keep that detail only in logs/debug.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mcp/server.rs` around lines 95 - 100, The closure passed to self.init_guard.get_or_try_init currently propagates errors from init_app_context() verbatim (which may contain filesystem paths); change it to catch/map the error from init_app_context(), log the full error (including the original Display/Debug) server-side, store the initialized context as before into ctx_holder, and return a sanitized McpError value/message (e.g., a generic "initialization failed") to callers; locate the closure around init_guard.get_or_try_init, the init_app_context() call, ctx_holder.store(...), and McpError to implement the mapping and logging.
docs/ai-design/2026-03-30-dapi-discovery-button/wireframe.html (1)
327-333: ⚠️ Potential issue | 🟡 Minor

Drive the active scenario tab from the scenario name, not `event.target`.

`showScenario()` is called from both the tab strip and the confirmation dialog. Relying on the implicit global `event` is not portable, and when the caller is a dialog button it clears the active tab instead of selecting the matching scenario.

💡 Suggested update

```diff
-  <div class="scenario-tab active" onclick="showScenario('default')">Default (with addresses)</div>
+  <div class="scenario-tab active" data-scenario="default" onclick="showScenario('default')">Default (with addresses)</div>
```

```diff
 function showScenario(name) {
   // Reset all
   document.querySelectorAll('.scenario-tab').forEach(t => t.classList.remove('active'));
+  document
+    .querySelector(`.scenario-tab[data-scenario="${name}"]`)
+    ?.classList.add('active');
   document.getElementById('addrField').classList.add('hidden');
   document.getElementById('addrFieldEmpty').classList.add('hidden');
   document.getElementById('emptyHint').classList.add('hidden');
   document.getElementById('devnetHint').classList.add('hidden');
   document.getElementById('fetchBtn').classList.add('hidden');
   document.getElementById('fetchBtnLoading').classList.add('hidden');
   document.getElementById('buttonRow').classList.remove('hidden');
   document.getElementById('saveBtn').classList.remove('hidden');
   document.getElementById('successBanner').classList.remove('visible');
   document.getElementById('errorBanner').classList.remove('visible');
   document.getElementById('confirmOverlay').classList.remove('visible');
   document.getElementById('networkLabel').textContent = 'Mainnet';
-
-  // Activate the clicked tab
-  event.target.classList.add('active');
 }
```

Apply the same `data-scenario` attribute pattern to the other tab elements.

Also applies to: 435-436, 446-463
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/ai-design/2026-03-30-dapi-discovery-button/wireframe.html` around lines 327 - 333, The active tab selection relies on the implicit global event inside showScenario, which breaks when called from the confirmation dialog; update the tab markup and callers to pass an explicit scenario identifier. Add a data-scenario attribute to each .scenario-tab (e.g., data-scenario="default", "empty", etc.) and change the onclick handlers to call showScenario(this.dataset.scenario) (or call showScenario with the string literal) instead of relying on event.target; also update the confirmation dialog button(s) to call showScenario('confirm') (or use dataset) so showScenario receives the scenario name directly and sets the active .scenario-tab accordingly. Ensure the same change is applied to the other tab elements referenced (the duplicate locations).
src/backend_task/mod.rs (2)
469-473: ⚠️ Potential issue | 🟡 Minor

Log a warning when `update_settings` fails.

Using `.ok()` silently discards errors from persisting the network choice. While the switch succeeds, silent failures make debugging inconsistent startup behavior difficult.

Suggested fix

```diff
-        new_ctx
-            .update_settings(crate::ui::RootScreenType::RootScreenNetworkChooser)
-            .ok();
+        if let Err(e) = new_ctx
+            .update_settings(crate::ui::RootScreenType::RootScreenNetworkChooser)
+        {
+            tracing::warn!(?network, "Failed to persist network choice: {e}");
+        }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend_task/mod.rs` around lines 469 - 473, new_ctx.update_settings(...) currently calls .ok() and drops any error; change it to check the Result from new_ctx.update_settings(crate::ui::RootScreenType::RootScreenNetworkChooser) and log a warning on Err (include the error value and context that persisting the network choice failed) instead of silently ignoring it. Use the project's logging facility (e.g., warn! or the existing logger instance) so the message includes the error and the fact it occurred while updating settings for RootScreenNetworkChooser.
106-112: ⚠️ Potential issue | 🟡 Minor

Stale doc comment: `SwitchNetwork` is dispatched, not intercepted.

The comment says "Intercepted by `AppState` — never dispatched to `AppContext::run_backend_task`", but lines 456-497 implement the handler. Update to reflect the actual flow.

Suggested fix

```diff
-    /// Create a new network context and switch to it.
-    /// Intercepted by `AppState` — never dispatched to `AppContext::run_backend_task`.
-    /// When `start_spv` is true, SPV sync is started on the new context automatically.
+    /// Create a new network context and switch to it.
+    /// The returned `NetworkContextCreated` result is processed by `AppState` to
+    /// update the active context. When `start_spv` is true, SPV sync is started
+    /// on the new context automatically.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend_task/mod.rs` around lines 106 - 112, The doc comment for the SwitchNetwork enum variant is stale — it currently claims "Intercepted by `AppState` — never dispatched to `AppContext::run_backend_task`" but the variant is actually dispatched and handled by `AppContext::run_backend_task` (the handler implementation exists and processes SwitchNetwork); update the comment on SwitchNetwork to reflect that it is dispatched to `AppContext::run_backend_task` and handled there (keep the note that when `start_spv` is true SPV sync is started automatically) and remove the incorrect "Intercepted by AppState" phrasing.
src/context/mod.rs (2)
452-462: ⚠️ Potential issue | 🟠 Major

Backend mode committed before provider binding succeeds.

`set_core_backend_mode_inner` stores the mode (lines 453-454) and optionally persists it (lines 456-461) before the provider binding at lines 468-491. If binding fails, the app reports the new mode while still using the old provider.

Move the `core_backend_mode.store()` and persistence logic to after the provider binding succeeds.
Verify each finding against the current code and only fix it if needed. In `@src/context/mod.rs` around lines 452 - 462, The mode is stored and persisted too early in set_core_backend_mode_inner (core_backend_mode.store and db.update_core_backend_mode) before the provider binding occurs; move the store() call and the persistence logic (including the invalidate_settings_cache guard and db.update_core_backend_mode) so they execute only after the provider binding succeeds (i.e., after the provider binding block completes without error), leaving the binding code unchanged and only committing the new mode on successful bind.
579-590: ⚠️ Potential issue | 🟠 Major

Reuse `create_core_rpc_client()` to preserve cookie auth.

The reinit path constructs the RPC client with `Auth::UserPass` directly, bypassing the cookie-auth-first logic in `create_core_rpc_client()`. This breaks setups relying on `.cookie` authentication.

Suggested fix

```diff
-        let new_client = Client::new(
-            &addr,
-            Auth::UserPass(
-                cfg.core_rpc_user.clone().unwrap_or_default(),
-                cfg.core_rpc_password.clone().unwrap_or_default(),
-            ),
-        )
-        .map_err(|e| TaskError::RpcProviderCreationFailed {
-            detail: e.to_string(),
-        })?;
+        let new_client = Self::create_core_rpc_client(
+            &addr,
+            self.network,
+            &cfg.devnet_name,
+            &cfg,
+        )?;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/context/mod.rs` around lines 579 - 590, The reinit path is creating a new RPC client directly with Client::new using Auth::UserPass, which bypasses cookie-first logic; replace that direct construction by calling the existing create_core_rpc_client(cfg, self.network) (or the function signature used in scope) and map its error into TaskError::RpcProviderCreationFailed so cookie-based auth is preserved; update the code that currently builds new_client from Client::new/Auth::UserPass to instead call create_core_rpc_client and propagate/map errors accordingly.
src/app.rs (2)
737-741: ⚠️ Potential issue | 🟠 Major

Block all network switches while one is pending.

The current guard only blocks duplicate requests to the same target. If a switch to network A is pending and the user requests network B, both tasks dispatch and can complete out of order.

Suggested fix

```diff
 pub fn change_network(&mut self, network: Network) {
-    // Ignore if we're already switching to this network.
-    if self.network_switch_pending == Some(network) {
+    // Block any new switch while one is already in progress.
+    if self.network_switch_pending.is_some() {
+        tracing::debug!(
+            "Ignoring network switch to {:?} — switch to {:?} already pending",
+            network,
+            self.network_switch_pending
+        );
         return;
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app.rs` around lines 737 - 741, The change_network method currently only ignores requests targeting the same Network but must block any new network switch while one is pending; update the guard in change_network to return early whenever self.network_switch_pending.is_some() (regardless of the Network value) so no second switch (to a different Network) can be dispatched while a pending switch exists, ensuring only one active network transition at a time.
399-400: ⚠️ Potential issue | 🟠 Major

Pass `chosen_network` to `NetworkChooserScreen::new()`.

After the fallback loop (lines 320-342), `chosen_network` may differ from `saved_network`. Passing `saved_network` could initialize the chooser with a network that has no context.

Suggested fix

```diff
 let network_chooser_screen =
-    NetworkChooserScreen::new(&network_contexts, saved_network, overwrite_dash_conf);
+    NetworkChooserScreen::new(&network_contexts, chosen_network, overwrite_dash_conf);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/app.rs` around lines 399 - 400, The NetworkChooserScreen is being initialized with saved_network but the fallback logic may have updated the chosen_network; update the call to NetworkChooserScreen::new(...) to pass chosen_network (or &chosen_network if others are references) instead of saved_network so the chooser is initialized with the actual selected network; adjust the argument binding in the network_chooser_screen assignment accordingly and ensure chosen_network is in scope.
src/mcp/tools/network.rs (1)
226-232: ⚠️ Potential issue | 🟠 Major

Return actual SPV state on the no-op path.

When already on the target network, `spv_started: true` is hardcoded. If SPV never started, failed earlier, or the system is in RPC mode, this misleads callers. Query the actual SPV/connection state instead.
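A sketch of the no-op branch reporting real state (the `connection_status().is_spv_started()` accessor is proposed here, not an existing API):

```rust
// Sketch: answer with the actual SPV state instead of a hardcoded true.
if ctx.network() == target {
    return Ok(NetworkSwitchOutput {
        active: network_display_name(target).to_owned(),
        spv_started: ctx.connection_status().is_spv_started(),
    });
}
```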
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/mcp/tools/network.rs` around lines 226 - 232, The no-op return currently hardcodes spv_started: true which can misreport state; instead query the real SPV/connection state from the context and return that. Replace the literal spv_started: true in the NetworkSwitchOutput returned when ctx.network() == target with a call that checks the actual state (for example using an existing method like ctx.is_spv_started(), ctx.spv().is_started(), or ctx.connection().is_spv_active() — whichever matches your codebase) so NetworkSwitchOutput.active still uses network_display_name(target).to_owned() but spv_started reflects the actual connection/SPV status.
src/ui/network_chooser_screen.rs (4)
2192-2198: ⚠️ Potential issue | 🟠 Major

Keep reconnect banner state tied to the reconnect task.

Lines 2193-2194 still clear `reinit_banner` and reset `config_save_failed` for every message. Any unrelated banner can therefore dismiss the in-flight reconnect progress and erase whether the save failed before `CoreClientReinitialized` arrives. Only clear these fields when handling the reconnect task's own completion or failure.

Based on learnings, progress banners should be dismissed via `take_and_clear()` only when the associated task finishes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/ui/network_chooser_screen.rs` around lines 2192 - 2198, The display_message function currently unconditionally calls self.reinit_banner.take_and_clear() and resets self.config_save_failed, which clears reconnect progress and save-failure state for unrelated messages; remove those two lines from display_message and instead clear reinit_banner via take_and_clear() and reset config_save_failed only in the reconnect-task completion/failure handlers (e.g., wherever CoreClientReinitialized or the reconnect task result is processed), so the reconnect banner and save-failed flag remain tied to the lifecycle of the reconnect task rather than every message.
2158-2163: ⚠️ Potential issue | 🔴 Critical

DAPI discovery still cannot bootstrap a missing network config.

Lines 2159-2160 only continue when `config_for_network(network)` already returns `Some(_)`. If the user runs discovery for a network that is absent from `.env`, the discovered addresses are discarded and the network stays unbootstrapped. Create/update the config block even when it does not exist yet.
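A sketch of the bootstrap path; the `NetworkConfig::default_for(network)` constructor is assumed, and any way of building a fresh entry would do:

```rust
// Sketch: create the config block when it is missing instead of bailing.
let mut network_cfg = config
    .config_for_network(network)
    .cloned()
    .unwrap_or_else(|| NetworkConfig::default_for(network));
network_cfg.dapi_addresses = Some(addresses_csv.clone());
config.update_config_for_network(network, network_cfg);
```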
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/ui/network_chooser_screen.rs` around lines 2158 - 2163, The code currently only updates networks when Config::load_from(...).config_for_network(network) returns Some(...), so discovered DAPI addresses for absent networks are discarded; modify the block around current_app_context(), Config::load_from and config_for_network to handle the None case by creating a new network config (e.g., new NetworkConfig or default), setting its dapi_addresses to Some(addresses_csv), and then calling config.update_config_for_network(network, network_cfg) so the network entry is created/updated even when config_for_network(network) was None; keep usage of network_cfg, Config::load_from, config_for_network, and update_config_for_network to locate the change.
2165-2187: ⚠️ Potential issue | 🟠 Major

Do not report DAPI refresh as successful after save or reinit failures.

Lines 2165-2180 log and continue when persistence or `reinit_core_client_and_sdk()` fails, but line 2183 still always shows "Updated to {count} node addresses." That tells the user the refresh worked even when the new list is only in memory or was never applied. Gate the success banner on both steps succeeding, and surface an actionable warning/error otherwise.

Based on learnings, use `MessageBanner` for the user-facing message and attach technical details via `BannerHandle::with_details()`.
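A sketch of the gating, with the banner API shape taken from the guideline above (the `with_details()` chaining off `set_global` is an assumption):

```rust
// Sketch: success is only claimed when both steps actually succeeded.
let save_result = config.save(&data_dir);
let reinit_result = self
    .context_for_network(network)
    .map(|ctx| Arc::clone(ctx).reinit_core_client_and_sdk())
    .unwrap_or(Ok(())); // nothing to re-apply for an uninitialized network
match (&save_result, &reinit_result) {
    (Ok(()), Ok(())) => {
        MessageBanner::set_global(format!("Updated to {count} node addresses."));
    }
    _ => {
        MessageBanner::set_global("Failed to apply the new DAPI addresses.")
            .with_details(format!("save: {save_result:?}, reinit: {reinit_result:?}"));
    }
}
```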
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/ui/network_chooser_screen.rs` around lines 2165 - 2187, The success banner is shown unconditionally even if config.save or Arc::clone(app_context).reinit_core_client_and_sdk() failed; change the flow so the MessageBanner::set_global success message is only shown when both persistence (config.save) and reinit (reinit_core_client_and_sdk) succeed, and otherwise show an error/warning banner using MessageBanner (with technical details attached via BannerHandle::with_details()); specifically, after calling config.save(...) and after calling reinit_core_client_and_sdk() (found via context_for_network and current_app_context), track success booleans or early-return on error, log and present a failure banner with actionable text and BannerHandle::with_details() carrying the error string(s), and only call MessageBanner::set_global(... "Updated to {count} node addresses.") when both operations succeeded.
217-223: ⚠️ Potential issue | 🟠 Major

Do not fall back to another network's `AppContext`.

Line 219 falls back to `values().next()`, but `src/app.rs:310-352` can legitimately leave this screen with only a different network initialized while `src/app.rs:399-401` still passes `saved_network`. In that state, status reads, backend-mode toggles, and even clear-database / clear-SPV actions can hit the wrong network. Return `None` for missing contexts, use `self.data_dir`/`self.db` for shared state, and make network-scoped controls handle the missing-context case explicitly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/ui/network_chooser_screen.rs` around lines 217 - 223, The method current_app_context should not fall back to another network's AppContext; change current_app_context (and any callers) to return Option<&Arc<AppContext>> by using self.context_for_network(self.current_network) directly (remove the .or_else(|| self.network_contexts.values().next()) and the expect), and update network-scoped UI handlers to explicitly handle the None case using shared state (self.data_dir / self.db) for actions that are global; ensure all references to current_app_context in network_chooser_screen.rs and related handlers are updated to handle Option and avoid assuming a context exists.
🧹 Nitpick comments (4)
src/ui/tokens/tokens_screen/my_tokens.rs (1)
265-265: Make the empty-state button text sentence-style.

Use complete-sentence copy (for example, "Import a token.") for consistency with i18n requirements.

Suggested copy update

```diff
-                RichText::new("Import Token")
+                RichText::new("Import a token.")
```

As per coding guidelines, “All user-facing strings (labels, messages, tooltips, errors) must be simple, complete sentences that are i18n-ready.”
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/ui/tokens/tokens_screen/my_tokens.rs` at line 265, Update the empty-state button text to a complete sentence for i18n compliance: replace the literal "Import Token" used in RichText::new(...) in my_tokens.rs with a sentence-style string such as "Import a token." so all user-facing text follows the sentence-case guideline and is ready for localization.
src/ui/tokens/add_token_by_id_screen.rs (1)
137-137: Align all “Import Token” UI text to complete-sentence copy.

Please use sentence-style text consistently across button, breadcrumb, and heading (for example, "Import a token.") to satisfy i18n string constraints.

Suggested copy updates

```diff
-        && ComponentStyles::add_primary_button(ui, "Import Token").clicked()
+        && ComponentStyles::add_primary_button(ui, "Import a token.").clicked()

-        ("Import Token", AppAction::None),
+        ("Import a token.", AppAction::None),

-        ui.heading("Import Token");
+        ui.heading("Import a token.");
```

As per coding guidelines, “All user-facing strings (labels, messages, tooltips, errors) must be simple, complete sentences that are i18n-ready.”
Also applies to: 316-316, 337-337
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/ui/tokens/add_token_by_id_screen.rs` at line 137, Update the user-facing "Import Token" strings to sentence-style copy everywhere they appear: replace the button label in ComponentStyles::add_primary_button(...) from "Import Token" to "Import a token." and likewise update the breadcrumb and heading code paths that render "Import Token" (the other occurrences referenced in this file) to "Import a token." so all labels, breadcrumbs, and headings use a complete-sentence i18n-ready string with punctuation.
src/ui/tokens/tokens_screen/mod.rs (1)
2827-2827: Use a complete, i18n-ready sentence for the CTA label.

Please change "Import Token" to sentence-style copy (for example, "Import a token.") to align with the UI string guideline.

Suggested copy update

```diff
-            "Import Token",
+            "Import a token.",
```

As per coding guidelines, “All user-facing strings (labels, messages, tooltips, errors) must be simple, complete sentences that are i18n-ready.”
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/ui/tokens/tokens_screen/mod.rs` at line 2827, Replace the UI label literal "Import Token" with a sentence-style, i18n-ready string "Import a token." and ensure it is passed through the project's localization helper (use the same i18n/localization macro or function used elsewhere in this module, e.g., the tokens_screen localization pattern) so the new sentence is translated as part of the app.
src/config.rs (1)
71-77: Please add a direct regression test for this new regtest fallback.

This changes runtime behavior only when `core_rpc_port` is unset, but the current tests still cover local/regtest with explicit `Some(20302)` values, so the fallback can drift without any failure.

Possible test to pin the new default

```diff
 #[cfg(test)]
 mod tests {
     use super::*;

+    #[test]
+    fn test_default_rpc_port_regtest() {
+        assert_eq!(NetworkConfig::default_rpc_port(Network::Regtest), 19898);
+        assert_eq!(NetworkConfig::default().rpc_port(Network::Regtest), 19898);
+    }
+
     /// Helper to create a minimal valid NetworkConfig for testing
     fn make_network_config(dapi_addresses: &str, port: u16) -> NetworkConfig {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/config.rs` around lines 71 - 77, Add a regression unit test that asserts default_rpc_port(Network::Regtest) returns 19898 when core_rpc_port is unset: write a test that calls config::default_rpc_port(Network::Regtest) (or constructs the config with core_rpc_port = None and verifies the resolved RPC port) and expects 19898; also include quick assertions that default_rpc_port for Network::Mainnet/Testnet/Devnet still return 9998/19998/29998 to prevent future drift. Name the test to indicate it pins the regtest fallback (e.g., regtest_default_rpc_port_fallback) and place it alongside other config tests.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/mcp/resolve.rs`:
- Around line 107-116: Doc claims all DAPI/Platform operations need SPV sync but
four shielded tools skip it; update the implementations to match the docs by
removing the INTENTIONAL skip and invoking ensure_spv_synced before performing
Platform/DAPI actions in ShieldedShieldFromPlatform, ShieldedShieldUnshield,
ShieldedUnshieldWithdrawal, and ShieldedPlatformWithdrawal (and update their
comments to reflect SPV requirement), or alternatively add those four tool names
to the exception list with an explicit justification in the docs if you intend
them to remain exempt—pick one consistent approach.
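If the gate is the chosen route, the change per tool is small. A sketch, where `ensure_spv_synced` is the gate named in the prompt above and the surrounding signature is purely illustrative:

```rust
// Sketch: the four shielded tools get the same gate as other wallet tools.
async fn shielded_platform_withdrawal(&self, input: Input) -> Result<Output, McpToolError> {
    let ctx = self.ctx().await?;
    ctx.ensure_spv_synced().await?; // was skipped as "INTENTIONAL" before
    // ... existing Platform/DAPI withdrawal logic ...
    unimplemented!()
}
```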
In `@src/ui/mod.rs`:
- Around line 808-809: update_selected_wallet_for_network currently only checks
hashes but does not replace the screen's selected Arcs, causing the UI to keep
mutating wallet objects from the old context; modify
update_selected_wallet_for_network (or call a new helper) to look up the
matching wallet and single-key wallet inside app_context and rebind
screen.selected_wallet and screen.selected_single_key_wallet to Arc clones of
those instances from app_context (use Arc::clone or equivalent), ensuring you
update both selections when a match is found so the screen holds the Arc from
the new context rather than the old one.
In `@src/ui/network_chooser_screen.rs`:
- Around line 523-528: The warning shown when save_failed (the block using
MessageBanner::set_global and ui.ctx()) incorrectly promises that changes will
apply later even though config.save() failed and there is no live AppContext for
self.current_network; change this to a hard failure message that clearly states
the save failed and that the new password/config was not persisted and will not
be applied later (update the MessageBanner text and type accordingly), and
ensure any references to self.current_network or deferred-apply semantics are
removed so the user is not misled.
- Around line 811-815: Update the ui.label call that currently formats "This
will fetch a fresh list of DAPI nodes, replacing your current {} configured
addresses in the config file." to use a single, translation-friendly string with
a named placeholder (e.g., "{dapi_total}") and avoid hardcoded plural grammar;
change the message to a single extractable unit (for example: "This will fetch a
fresh list of DAPI nodes, replacing the currently configured address(es):
{dapi_total}") and pass dapi_total as a named argument when calling ui.label so
translators can handle pluralization correctly; locate the ui.label invocation
that wraps dapi_total to apply this change.
---
Duplicate comments:
In `@docs/ai-design/2026-03-30-dapi-discovery-button/wireframe.html`:
- Around line 327-333: The active tab selection relies on the implicit global
event inside showScenario, which breaks when called from the confirmation
dialog; update the tab markup and callers to pass an explicit scenario
identifier. Add a data-scenario attribute to each .scenario-tab (e.g.,
data-scenario="default", "empty", etc.) and change the onclick handlers to call
showScenario(this.dataset.scenario) (or call showScenario with the string
literal) instead of relying on event.target; also update the confirmation dialog
button(s) to call showScenario('confirm') (or use dataset) so showScenario
receives the scenario name directly and sets the active .scenario-tab
accordingly. Ensure the same change is applied to the other tab elements
referenced (the duplicate locations).
In `@src/app.rs`:
- Around line 737-741: The change_network method currently only ignores requests
targeting the same Network but must block any new network switch while one is
pending; update the guard in change_network to return early whenever
self.network_switch_pending.is_some() (regardless of the Network value) so no
second switch (to a different Network) can be dispatched while a pending switch
exists, ensuring only one active network transition at a time.
- Around line 399-400: The NetworkChooserScreen is being initialized with
saved_network but the fallback logic may have updated the chosen_network; update
the call to NetworkChooserScreen::new(...) to pass chosen_network (or
&chosen_network if others are references) instead of saved_network so the
chooser is initialized with the actual selected network; adjust the argument
binding in the network_chooser_screen assignment accordingly and ensure
chosen_network is in scope.
In `@src/backend_task/mod.rs`:
- Around line 469-473: new_ctx.update_settings(...) currently calls .ok() and
drops any error; change it to check the Result from
new_ctx.update_settings(crate::ui::RootScreenType::RootScreenNetworkChooser) and
log a warning on Err (include the error value and context that persisting the
network choice failed) instead of silently ignoring it. Use the project's
logging facility (e.g., warn! or the existing logger instance) so the message
includes the error and the fact it occurred while updating settings for
RootScreenNetworkChooser.
- Around line 106-112: The doc comment for the SwitchNetwork enum variant is
stale — it currently claims "Intercepted by `AppState` — never dispatched to
`AppContext::run_backend_task`" but the variant is actually dispatched and
handled by `AppContext::run_backend_task` (the handler implementation exists and
processes SwitchNetwork); update the comment on SwitchNetwork to reflect that it
is dispatched to `AppContext::run_backend_task` and handled there (keep the note
that when `start_spv` is true SPV sync is started automatically) and remove the
incorrect "Intercepted by AppState" phrasing.
In `@src/backend_task/wallet/fetch_platform_address_balances.rs`:
- Around line 173-177: The current use of filter_map with
PlatformAddress::try_from in the iterator silently drops addresses that fail
re-encoding, losing balance/nonce data; change this to return an explicit error
instead of filtering: replace the filter_map(...) that calls
PlatformAddress::try_from(addr.clone()).ok() with logic that attempts
PlatformAddress::try_from and on failure returns a TaskError (or aggregates
failures) so callers get a typed failure; alternatively, if you prefer fixing
upstream, normalize keys when writing platform_address_info so
PlatformAddress::try_from cannot fail here — pick one approach and implement it
in the code path that builds the response (the iterator using
PlatformAddress::try_from and the producer that writes platform_address_info).
In `@src/context/mod.rs`:
- Around line 452-462: The mode is stored and persisted too early in
set_core_backend_mode_inner (core_backend_mode.store and
db.update_core_backend_mode) before the provider binding occurs; move the
store() call and the persistence logic (including the invalidate_settings_cache
guard and db.update_core_backend_mode) so they execute only after the provider
binding succeeds (i.e., after the provider binding block completes without
error), leaving the binding code unchanged and only committing the new mode on
successful bind.
- Around line 579-590: The reinit path is creating a new RPC client directly
with Client::new using Auth::UserPass, which bypasses cookie-first logic;
replace that direct construction by calling the existing
create_core_rpc_client(cfg, self.network) (or the function signature used in
scope) and map its error into TaskError::RpcProviderCreationFailed so
cookie-based auth is preserved; update the code that currently builds new_client
from Client::new/Auth::UserPass to instead call create_core_rpc_client and
propagate/map errors accordingly.
In `@src/mcp/error.rs`:
- Around line 39-47: The current McpToolError::TaskFailed branch is leaking
Debug output via format!("{task_err:?}"); change it to avoid returning the debug
chain to clients by removing the format!("{task_err:?}") usage and returning an
empty data payload instead (i.e., replace the
Some(serde_json::Value::String(details)) data value with no data/None) while
keeping the existing CODE_TASK_FAILED and e.to_string() values; if you later
need structured info, derive a small sanitized payload from typed TaskError
variants rather than using task_err's Debug.
In `@src/mcp/server.rs`:
- Around line 95-100: The closure passed to self.init_guard.get_or_try_init
currently propagates errors from init_app_context() verbatim (which may contain
filesystem paths); change it to catch/map the error from init_app_context(), log
the full error (including the original Display/Debug) server-side, store the
initialized context as before into ctx_holder, and return a sanitized McpError
value/message (e.g., a generic "initialization failed") to callers; locate the
closure around init_guard.get_or_try_init, the init_app_context() call,
ctx_holder.store(...), and McpError to implement the mapping and logging.
In `@src/mcp/tools/network.rs`:
- Around line 226-232: The no-op return currently hardcodes spv_started: true
which can misreport state; instead query the real SPV/connection state from the
context and return that. Replace the literal spv_started: true in the
NetworkSwitchOutput returned when ctx.network() == target with a call that
checks the actual state (for example using an existing method like
ctx.is_spv_started(), ctx.spv().is_started(), or
ctx.connection().is_spv_active() — whichever matches your codebase) so
NetworkSwitchOutput.active still uses network_display_name(target).to_owned()
but spv_started reflects the actual connection/SPV status.
In `@src/ui/mod.rs`:
- Around line 806-810: The wallet screen context-switch branch currently calls
screen.reset_pending_list_state(), screen.update_selected_wallet_for_network(),
and screen.invalidate_address_inputs() but doesn't clear other pending state;
modify the context-switch handling so it also clears
screen.pending_platform_balance_refresh, resets any unlock/search-after-unlock
flags (e.g. screen.unlock_after_unlock, screen.search_after_unlock or similar),
and dismisses/clears any pending core-wallet dialog state (e.g.
screen.pending_core_wallet_dialog or screen.core_wallet_dialog_state) so no
stale callbacks run against the old app_context; place these clears alongside
the existing reset_pending_list_state() call to ensure all pending state is
reset on app_context switch.
- Around line 817-820: The WalletSendScreen, SingleKeyWalletSendScreen, and
CreateAssetLockScreen arms only swap app_context but keep old wallet Arcs, so
update those branches to re-resolve the wallet/asset lock by identifier from the
new AppContext (rather than reusing the existing screen's wallet Arc) or
clear/dismiss the screen during an app_context switch; locate the match arms for
Screen::WalletSendScreen(screen), Screen::SingleKeyWalletSendScreen(screen) and
Screen::CreateAssetLockScreen(screen) and replace the assignment
screen.app_context = app_context plus stale-wallet usage with logic that looks
up the wallet/asset by its id from the new app_context (or sets the screen to
None/exit) and then call invalidate_address_input()/equivalent after successful
re-resolution.
In `@src/ui/network_chooser_screen.rs`:
- Around line 2192-2198: The display_message function currently unconditionally
calls self.reinit_banner.take_and_clear() and resets self.config_save_failed,
which clears reconnect progress and save-failure state for unrelated messages;
remove those two lines from display_message and instead clear reinit_banner via
take_and_clear() and reset config_save_failed only in the reconnect-task
completion/failure handlers (e.g., wherever CoreClientReinitialized or the
reconnect task result is processed), so the reconnect banner and save-failed
flag remain tied to the lifecycle of the reconnect task rather than every
message.
- Around line 2158-2163: The code currently only updates networks when
Config::load_from(...).config_for_network(network) returns Some(...), so
discovered DAPI addresses for absent networks are discarded; modify the block
around current_app_context(), Config::load_from and config_for_network to handle
the None case by creating a new network config (e.g., new NetworkConfig or
default), setting its dapi_addresses to Some(addresses_csv), and then calling
config.update_config_for_network(network, network_cfg) so the network entry is
created/updated even when config_for_network(network) was None; keep usage of
network_cfg, Config::load_from, config_for_network, and
update_config_for_network to locate the change.
- Around line 2165-2187: The success banner is shown unconditionally even if
config.save or Arc::clone(app_context).reinit_core_client_and_sdk() failed;
change the flow so the MessageBanner::set_global success message is only shown
when both persistence (config.save) and reinit (reinit_core_client_and_sdk)
succeed, and otherwise show an error/warning banner using MessageBanner (with
technical details attached via BannerHandle::with_details()); specifically,
after calling config.save(...) and after calling reinit_core_client_and_sdk()
(found via context_for_network and current_app_context), track success booleans
or early-return on error, log and present a failure banner with actionable text
and BannerHandle::with_details() carrying the error string(s), and only call
MessageBanner::set_global(... "Updated to {count} node addresses.") when both
operations succeeded.
- Around line 217-223: The method current_app_context should not fall back to
another network's AppContext; change current_app_context (and any callers) to
return Option<&Arc<AppContext>> by using
self.context_for_network(self.current_network) directly (remove the .or_else(||
self.network_contexts.values().next()) and the expect), and update
network-scoped UI handlers to explicitly handle the None case using shared state
(self.data_dir / self.db) for actions that are global; ensure all references to
current_app_context in network_chooser_screen.rs and related handlers are
updated to handle Option and avoid assuming a context exists.
In `@src/ui/wallets/wallets_screen/mod.rs`:
- Around line 2884-2888: The code is using self.app_context.network() when
converting PlatformAddress back to core addresses, which can be wrong after a
network switch; change the conversion to use the originating network carried on
the successful result instead. Modify the success type that produces balances
(e.g., PlatformAddressBalances / the result struct that contains balances) to
include an originating_network field, populate that field where the result is
constructed, and then here replace let network = self.app_context.network() with
the originating network from the result (the variable already holding balances),
before calling platform_addr.to_address_with_network(...) and
wallet.set_platform_address_info(...).
---
Nitpick comments:
In `@src/config.rs`:
- Around line 71-77: Add a regression unit test that asserts
default_rpc_port(Network::Regtest) returns 19898 when core_rpc_port is unset:
write a test that calls config::default_rpc_port(Network::Regtest) (or
constructs the config with core_rpc_port = None and verifies the resolved RPC
port) and expects 19898; also include quick assertions that default_rpc_port for
Network::Mainnet/Testnet/Devnet still return 9998/19998/29998 to prevent future
drift. Name the test to indicate it pins the regtest fallback (e.g.,
regtest_default_rpc_port_fallback) and place it alongside other config tests.
In `@src/ui/tokens/add_token_by_id_screen.rs`:
- Line 137: Update the user-facing "Import Token" strings to sentence-style copy
everywhere they appear: replace the button label in
ComponentStyles::add_primary_button(...) from "Import Token" to "Import a
token." and likewise update the breadcrumb and heading code paths that render
"Import Token" (the other occurrences referenced in this file) to "Import a
token." so all labels, breadcrumbs, and headings use a complete-sentence
i18n-ready string with punctuation.
In `@src/ui/tokens/tokens_screen/mod.rs`:
- Line 2827: Replace the UI label literal "Import Token" with a sentence-style,
i18n-ready string "Import a token." and ensure it is passed through the
project's localization helper (use the same i18n/localization macro or function
used elsewhere in this module, e.g., the tokens_screen localization pattern) so
the new sentence is translated as part of the app.
In `@src/ui/tokens/tokens_screen/my_tokens.rs`:
- Line 265: Update the empty-state button text to a complete sentence for i18n
compliance: replace the literal "Import Token" used in RichText::new(...) in
my_tokens.rs with a sentence-style string such as "Import a token." so all
user-facing text follows the sentence-case guideline and is ready for
localization.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: efc7faf4-74e9-43f2-9f24-2d63b8c17fd2
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (26)
- CLAUDE.md
- CONTRIBUTING.md
- Cargo.toml
- docs/MCP.md
- docs/MCP_TOOL_DEVELOPMENT.md
- docs/ai-design/2026-03-30-dapi-discovery-button/ux-spec.md
- docs/ai-design/2026-03-30-dapi-discovery-button/wireframe.html
- src/app.rs
- src/backend_task/error.rs
- src/backend_task/mod.rs
- src/backend_task/platform_info.rs
- src/backend_task/wallet/fetch_platform_address_balances.rs
- src/config.rs
- src/context/mod.rs
- src/context/wallet_lifecycle.rs
- src/mcp/error.rs
- src/mcp/resolve.rs
- src/mcp/server.rs
- src/mcp/tools/network.rs
- src/mcp/tools/wallet.rs
- src/ui/mod.rs
- src/ui/network_chooser_screen.rs
- src/ui/tokens/add_token_by_id_screen.rs
- src/ui/tokens/tokens_screen/mod.rs
- src/ui/tokens/tokens_screen/my_tokens.rs
- src/ui/wallets/wallets_screen/mod.rs
…ie auth

Replace the direct Client::new(Auth::UserPass(...)) call in reinit_core_client_and_sdk() with Self::create_core_rpc_client(), which tries cookie authentication first and falls back to user/pass. Fixes setups that rely on .cookie auth, which was silently bypassed on reinit.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/ui/network_chooser_screen.rs (1)
2178-2194: ⚠️ Potential issue | 🟡 Minor: Success banner shown even when config save fails.
When `config.save()` fails at line 2178, the error is logged but the code continues to show "Updated to {count} node addresses" at lines 2190-2194. This misleads users into thinking the discovery succeeded when the addresses weren't actually persisted.

Consider tracking the save result and showing a warning banner when save fails, or only showing success when both save and (if applicable) the reinit scheduling succeed.
🛠️ Suggested approach
```diff
-        if let Err(e) = config.save(data_dir) {
-            tracing::error!("Failed to save config after DAPI discovery: {e}");
-        }
+        let save_ok = match config.save(data_dir) {
+            Ok(_) => true,
+            Err(e) => {
+                tracing::error!("Failed to save config after DAPI discovery: {e}");
+                false
+            }
+        };

         // Update in-memory config and schedule async SDK reinit
         if let Some(app_context) = self.context_for_network(network) {
             if let Ok(mut cfg_lock) = app_context.config.write() {
                 *cfg_lock = network_cfg;
             }
             self.pending_reinit_after_discovery = true;
         }

-        MessageBanner::set_global(
-            self.current_app_context().egui_ctx(),
-            format!("Updated to {count} node addresses."),
-            MessageType::Success,
-        );
+        if save_ok {
+            MessageBanner::set_global(
+                self.current_app_context().egui_ctx(),
+                format!("Updated to {count} node addresses."),
+                MessageType::Success,
+            );
+        } else {
+            MessageBanner::set_global(
+                self.current_app_context().egui_ctx(),
+                format!("Discovered {count} nodes but failed to save. Changes apply for this session only."),
+                MessageType::Warning,
+            );
+        }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/ui/network_chooser_screen.rs` around lines 2178 - 2194, The save error is logged but the UI always shows a success banner; modify the logic around config.save(data_dir) in the network discovery flow so you track the Result and only proceed to update in-memory config, set pending_reinit_after_discovery, and call MessageBanner::set_global with a success MessageType when config.save(...) returned Ok; if config.save(...) returns Err, keep the existing tracing::error(...) and instead call MessageBanner::set_global with a warning or error MessageType and an explanatory message. Update references around config.save, context_for_network(...), pending_reinit_after_discovery, and MessageBanner::set_global to branch on the save result so state mutation and success UI only happen after a successful save.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/ui/network_chooser_screen.rs`:
- Around line 2199-2206: display_message currently clears reinit state
(self.reinit_banner.take_and_clear() and self.config_save_failed = false) for
every incoming message which can dismiss the ReinitCoreClientAndSdk banner
prematurely; change the logic so these two reinit-related fields are only
cleared when the incoming message pertains to the reinit flow (e.g., check the
MessageType or message content for ReinitCoreClientAndSdk/
CoreClientReinitialized) or remove both clears from display_message and instead
handle all reinit state updates exclusively inside display_task_result (the
functions/fields to touch are display_message, display_task_result,
self.reinit_banner, and self.config_save_failed).
---
Duplicate comments:
In `@src/ui/network_chooser_screen.rs`:
- Around line 2178-2194: The save error is logged but the UI always shows a
success banner; modify the logic around config.save(data_dir) in the network
discovery flow so you track the Result and only proceed to update in-memory
config, set pending_reinit_after_discovery, and call MessageBanner::set_global
with a success MessageType when config.save(...) returned Ok; if
config.save(...) returns Err, keep the existing tracing::error(...) and instead
call MessageBanner::set_global with a warning or error MessageType and an
explanatory message. Update references around config.save,
context_for_network(...), pending_reinit_after_discovery, and
MessageBanner::set_global to branch on the save result so state mutation and
success UI only happen after a successful save.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 2d1dda2d-64e6-4d35-a767-51fc727dcb3b
📒 Files selected for processing (8)
- .env.example
- CLAUDE.md
- src/app.rs
- src/backend_task/error.rs
- src/backend_task/mod.rs
- src/context/mod.rs
- src/ui/mod.rs
- src/ui/network_chooser_screen.rs
🚧 Files skipped from review as they are similar to previous changes (6)
- CLAUDE.md
- src/ui/mod.rs
- src/context/mod.rs
- src/app.rs
- src/backend_task/mod.rs
- src/backend_task/error.rs
…th sanitization

- Use chosen_network (not saved_network) for NetworkChooserScreen so the UI reflects the actual fallback network after init failure
- Block ALL overlapping network switches, not just duplicates to the same network, preventing state corruption from out-of-order completion
- Use OnceCell::const_new() in new_shared() — the pre-filled guard was misleading since Shared mode never enters the init path
- Move core_backend_mode store/persist after provider bind succeeds so a failed bind does not leave the mode and provider out of sync
- Catch and sanitize init_app_context() errors in MCP ctx() to avoid leaking filesystem paths to MCP callers

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…e, SPV status

- Escape control characters in InvalidTokenNameCharacter display to prevent unreadable banners from tab/newline-injected token names
- Log warning when PlatformAddress re-encoding fails instead of silently dropping entries from the balances map
- Add diagnostic detail field to NetworkContextCreationFailed for Debug output (user-facing message unchanged)
- Check actual SPV status via ConnectionStatus on no-op network switch instead of hardcoding spv_started: true

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…stale screen handling, address network

- Replace direct is_developer_mode() calls with FeatureGate::DeveloperMode pattern in wallets_screen for UI consistency
- Add reset_transient_state() to WalletsBalancesScreen to clear pending operations on network switch (platform balance refresh, unlock flags, asset lock search, core wallet dialog)
- Clear wallet references in WalletSendScreen, SingleKeyWalletSendScreen, and CreateAssetLockScreen on network switch to prevent stale wallet Arcs from the previous context
- Add network field to PlatformAddressBalances result so the display handler can verify the result matches the current network, discarding stale results

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Pull request overview
Copilot reviewed 28 out of 29 changed files in this pull request and generated 2 comments.
thepastaclaw
left a comment
Code Review
I validated the current SHA directly after the review agents stalled. Two previously posted issues are fixed on this head (cookie-auth reinit and first-time DAPI bootstrap), but the lazy-context refactor still has three high-confidence state-management regressions: all contexts still share one mutable ConnectionStatus, startup silently falls back to another network if the saved one cannot initialize, and lazy GUI switches can still force-start SPV from the auto-start flag without preserving the earlier startup gate semantics.
Reviewed commit: f06b4e2
🔴 3 blocking
1 additional finding
🔴 blocking: All lazy network contexts still share one mutable `ConnectionStatus`
src/app.rs (lines 294-308)
AppState::new_inner still constructs a single Arc<ConnectionStatus> and passes clones of it into every AppContext it creates. The lazy BackendTask::SwitchNetwork path also reuses self.connection_status.clone() when it creates new contexts later. Each context's SPV manager stores that handle and emits status updates through it, so background work from an inactive network can still overwrite the active network's connection indicators after a switch. This defeats the per-network isolation the lazy-context refactor is trying to introduce.
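A minimal sketch of the per-network isolation this finding asks for. The types here (`Network`, `ConnectionStatus`, `AppContext`, `network_contexts`) are simplified stand-ins mirroring the names quoted above, not the real dash-evo-tool definitions:

```rust
use std::{collections::BTreeMap, sync::Arc};

#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum Network { Mainnet, Testnet }

#[derive(Default)]
struct ConnectionStatus; // peer count, sync height, last error, ...

struct AppContext {
    connection_status: Arc<ConnectionStatus>,
}

impl AppContext {
    fn new() -> Self {
        // Each context owns a fresh status handle instead of cloning one
        // shared Arc, so a still-running SPV manager on an inactive
        // network has nothing to overwrite on the active one.
        Self { connection_status: Arc::new(ConnectionStatus::default()) }
    }
}

struct AppState {
    network_contexts: BTreeMap<Network, Arc<AppContext>>,
}

impl AppState {
    /// The UI reads status only from the active network's own context.
    fn active_status(&self, net: Network) -> Option<Arc<ConnectionStatus>> {
        self.network_contexts
            .get(&net)
            .map(|ctx| Arc::clone(&ctx.connection_status))
    }
}
```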
🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/app.rs`:
- [BLOCKING] lines 294-308: All lazy network contexts still share one mutable `ConnectionStatus`
`AppState::new_inner` still constructs a single `Arc<ConnectionStatus>` and passes clones of it into every `AppContext` it creates. The lazy `BackendTask::SwitchNetwork` path also reuses `self.connection_status.clone()` when it creates new contexts later. Each context's SPV manager stores that handle and emits status updates through it, so background work from an inactive network can still overwrite the active network's connection indicators after a switch. This defeats the per-network isolation the lazy-context refactor is trying to introduce.
- [BLOCKING] lines 330-351: Startup silently switches to a different network when the saved network context fails to initialize
If the persisted `chosen_network` cannot initialize, startup now silently falls back to the first successfully created context from `network_contexts.iter().next()`. That means a user who last selected one network can launch the app into a different network without an explicit failure state or confirmation, even though the problem was specific to the originally selected network. In a multi-network wallet app, silently landing on the wrong network is a high-risk behavior change because subsequent identity or wallet actions can be performed against an unexpected environment.
- [BLOCKING] lines 752-760: Lazy GUI network switches still force-start SPV from `auto_start_spv` alone
The slow-path GUI switch still derives `start_spv` directly from `db.get_auto_start_spv()` and forwards it into `BackendTask::SwitchNetwork`. In that backend task, `start_spv=true` still unconditionally flips the new context to `CoreBackendMode::Spv` and starts SPV. That is broader than the existing GUI startup gate in `try_auto_start_spv()`, which only auto-starts SPV when developer mode is enabled and the selected backend is already SPV. As written, a user with auto-start enabled who switches to a not-yet-loaded network can still be silently moved from RPC to SPV just by switching networks.
```rust
        .filter(|n| *n != saved_network),
);
for net in try_order {
    if let Some(ctx) = make_context(net) {
        network_contexts.insert(net, ctx);
        break;
    }
    if net == saved_network {
        tracing::warn!(
            "Could not create context for saved network {:?}. \
             Check your node addresses. Trying other networks...",
            saved_network
        );
    }
}
if network_contexts.is_empty() {
    return Err(
        "No network could be initialized. Check that at least one network has \
         DAPI node addresses configured in your settings file. You can use the \
         \"Fetch Node List\" button in Network Settings to get addresses."
            .into(),
    );
```
🔴 Blocking: Startup silently switches to a different network when the saved network context fails to initialize
If the persisted chosen_network cannot initialize, startup now silently falls back to the first successfully created context from network_contexts.iter().next(). That means a user who last selected one network can launch the app into a different network without an explicit failure state or confirmation, even though the problem was specific to the originally selected network. In a multi-network wallet app, silently landing on the wrong network is a high-risk behavior change because subsequent identity or wallet actions can be performed against an unexpected environment.
source: unknown
🤖 Fix this with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/app.rs`:
- [BLOCKING] lines 330-351: Startup silently switches to a different network when the saved network context fails to initialize
If the persisted `chosen_network` cannot initialize, startup now silently falls back to the first successfully created context from `network_contexts.iter().next()`. That means a user who last selected one network can launch the app into a different network without an explicit failure state or confirmation, even though the problem was specific to the originally selected network. In a multi-network wallet app, silently landing on the wrong network is a high-risk behavior change because subsequent identity or wallet actions can be performed against an unexpected environment.
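A sketch of one way to surface the fallback instead of hiding it. Everything here is a placeholder for the real initialization state in `new_inner()`; the second tuple field flags a fallback so the caller can show an explicit warning before any wallet action:

```rust
use std::collections::BTreeMap;

#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
enum Network { Mainnet, Testnet }

struct AppContext;

fn pick_startup_network(
    saved: Network,
    contexts: &BTreeMap<Network, AppContext>,
) -> Result<(Network, bool), String> {
    if contexts.contains_key(&saved) {
        return Ok((saved, false)); // saved network came up normally
    }
    match contexts.keys().next() {
        // Fallback: the caller must warn the user that the saved
        // network failed before acting on the substitute.
        Some(net) => Ok((*net, true)),
        None => Err("no network could be initialized".to_owned()),
    }
}
```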
```rust
if self.context_available_for_network(network) {
    self.finalize_network_switch(network);
    return;
}

// Slow path: dispatch SwitchNetwork as a backend task. The result
// (NetworkContextCreated) comes back through the task result channel
// and is handled in update(). Same path used by MCP tools.
self.network_switch_pending = Some(network);
```
🔴 Blocking: Lazy GUI network switches still force-start SPV from auto_start_spv alone
The slow-path GUI switch still derives start_spv directly from db.get_auto_start_spv() and forwards it into BackendTask::SwitchNetwork. In that backend task, start_spv=true still unconditionally flips the new context to CoreBackendMode::Spv and starts SPV. That is broader than the existing GUI startup gate in try_auto_start_spv(), which only auto-starts SPV when developer mode is enabled and the selected backend is already SPV. As written, a user with auto-start enabled who switches to a not-yet-loaded network can still be silently moved from RPC to SPV just by switching networks.
source: unknown
🤖 Fix this with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/app.rs`:
- [BLOCKING] lines 752-760: Lazy GUI network switches still force-start SPV from `auto_start_spv` alone
The slow-path GUI switch still derives `start_spv` directly from `db.get_auto_start_spv()` and forwards it into `BackendTask::SwitchNetwork`. In that backend task, `start_spv=true` still unconditionally flips the new context to `CoreBackendMode::Spv` and starts SPV. That is broader than the existing GUI startup gate in `try_auto_start_spv()`, which only auto-starts SPV when developer mode is enabled and the selected backend is already SPV. As written, a user with auto-start enabled who switches to a not-yet-loaded network can still be silently moved from RPC to SPV just by switching networks.
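A minimal sketch of the three-part gate the finding describes. The struct collapses the calls quoted in the review (`get_auto_start_spv`, `is_developer_mode`, `core_backend_mode`) into plain fields; the real types and signatures differ:

```rust
#[derive(PartialEq)]
enum CoreBackendMode { Rpc, Spv }

struct Ctx {
    auto_start_spv: bool,
    developer_mode: bool,
    mode: CoreBackendMode,
}

/// Same gate as try_auto_start_spv(): SPV auto-starts only when the
/// flag is set, developer mode is on, and the selected backend is
/// already SPV — never from the auto-start flag alone.
fn should_start_spv(ctx: &Ctx) -> bool {
    ctx.auto_start_spv && ctx.developer_mode && ctx.mode == CoreBackendMode::Spv
}
```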
thepastaclaw
left a comment
There was a problem hiding this comment.
Code Review
The new push fixed several earlier issues, including booting the network chooser with the fallback network, blocking overlapping GUI network switches, and reporting the real SPV state on MCP no-op switches. However, the current SHA still has multiple correctness bugs in the new network-switching lifecycle: MCP HTTP switches bypass the GUI state machine, per-network SPV status is still globally shared, and deferred follow-up work can run against the wrong network.
Reviewed commit: b9283d1
🔴 4 blocking | 🟡 3 suggestion(s)
1 additional finding
🔴 blocking: All lazily created network contexts still share one mutable ConnectionStatus
src/app.rs (lines 297-309)
new_inner() allocates a single Arc<ConnectionStatus> and passes the same handle into every AppContext it creates. Each context's SPV manager is wired to push status updates into that shared object, so after a switch an older network's still-running SPV manager can write peer counts, sync state, or errors back into the status object that the newly selected network reads. Resetting the shared status during finalize_network_switch() only clears it temporarily; it does not stop stale writers.
🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/mcp/tools/network.rs`:
- [BLOCKING] lines 244-250: HTTP-mode MCP network_switch still splits MCP state from the GUI app state
On success this path only calls `service.swap_context(context)`. In HTTP mode that updates the MCP service's shared `ArcSwap`, but it never updates `AppState::chosen_network`, screen contexts, ZMQ listeners, persisted settings, or the GUI-side connection-status lifecycle that `finalize_network_switch()` maintains. That leaves two authoritative active-network states in the same process: MCP starts operating on the new context while the GUI keeps rendering and dispatching against the old one.
In `src/app.rs`:
- [BLOCKING] lines 297-309: All lazily created network contexts still share one mutable ConnectionStatus
`new_inner()` allocates a single `Arc<ConnectionStatus>` and passes the same handle into every `AppContext` it creates. Each context's SPV manager is wired to push status updates into that shared object, so after a switch an older network's still-running SPV manager can write peer counts, sync state, or errors back into the status object that the newly selected network reads. Resetting the shared status during `finalize_network_switch()` only clears it temporarily; it does not stop stale writers.
- [SUGGESTION] lines 751-775: Fast-path switches to an existing context still skip SPV auto-start
When the target context already exists, `change_network()` immediately calls `finalize_network_switch()` and returns. Only the slow path passes `start_spv` into `BackendTask::SwitchNetwork`, and `finalize_network_switch()` never calls `try_auto_start_spv()`. As a result, switching back to a previously created network will not restart SPV after the user enables auto-start or after SPV stopped on that context earlier, so behavior depends on whether the context was created in this switch or an earlier one.
In `src/backend_task/mod.rs`:
- [BLOCKING] lines 485-489: SwitchNetwork still bypasses the developer-mode SPV gate for GUI callers
When `start_spv` is true, this handler always forces `CoreBackendMode::Spv` and calls `start_spv()`. That is looser than `AppState::try_auto_start_spv()`, which only auto-starts SPV when `auto_start_spv`, `is_developer_mode()`, and SPV mode are all true. On the GUI slow path, `start_spv` comes from the persisted auto-start setting, so a non-developer user can still have SPV forced on during a newly created network switch, bypassing the app's own safety gate.
In `src/ui/network_chooser_screen.rs`:
- [BLOCKING] lines 2104-2107: Deferred SDK reinit after DAPI discovery can still run against the wrong network
The follow-up reinit is tracked as a bare boolean, so the next frame always dispatches `BackendTask::ReinitCoreClientAndSdk` against whatever `current_app_context()` happens to be current then. The discovery result itself is network-specific and updates the discovered network's config first, but if the user switches networks before the next frame, the reinit runs on the newly selected context instead. That leaves the discovered network with stale SDK/Core clients while the UI reports success.
- [SUGGESTION] lines 2199-2201: Unrelated screen messages still clear reinit progress and save-failure state
`display_message()` unconditionally clears `reinit_banner` and resets `config_save_failed` for every incoming message, even though those fields belong to the async reinit flow handled in `display_task_result()`. If any unrelated info/error banner arrives while reinit is still in flight, the progress banner disappears early and the eventual completion path loses whether the config save failed.
In `src/mcp/error.rs`:
- [SUGGESTION] lines 39-47: MCP TaskFailed responses still expose the full Debug error chain
This branch serializes `format!("{task_err:?}")` into the JSON-RPC error `data` field, which sends MCP clients the complete `Debug` representation of `TaskError`. That can include filesystem paths and low-level SDK details that the rest of the app intentionally keeps out of user-facing `Display` messages. Even if MCP is usually local, this still breaks the codebase's Display/Debug separation and leaks more internal detail than necessary.
```rust
match result {
    BackendTaskSuccessResult::NetworkContextCreated {
        context,
        spv_started,
        ..
    } => {
        service.swap_context(context);
```
🔴 Blocking: HTTP-mode MCP network_switch still splits MCP state from the GUI app state
On success this path only calls service.swap_context(context). In HTTP mode that updates the MCP service's shared ArcSwap, but it never updates AppState::chosen_network, screen contexts, ZMQ listeners, persisted settings, or the GUI-side connection-status lifecycle that finalize_network_switch() maintains. That leaves two authoritative active-network states in the same process: MCP starts operating on the new context while the GUI keeps rendering and dispatching against the old one.
source: ['claude-general', 'codex-general', 'claude-rust-quality', 'codex-rust-quality']
🤖 Fix this with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/mcp/tools/network.rs`:
- [BLOCKING] lines 244-250: HTTP-mode MCP network_switch still splits MCP state from the GUI app state
On success this path only calls `service.swap_context(context)`. In HTTP mode that updates the MCP service's shared `ArcSwap`, but it never updates `AppState::chosen_network`, screen contexts, ZMQ listeners, persisted settings, or the GUI-side connection-status lifecycle that `finalize_network_switch()` maintains. That leaves two authoritative active-network states in the same process: MCP starts operating on the new context while the GUI keeps rendering and dispatching against the old one.
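One possible shape for keeping a single authoritative active network, sketched with a hypothetical channel and result type rather than the project's actual plumbing: after swapping the MCP context, forward the same result to the GUI's task-result channel so `finalize_network_switch()` runs there too.

```rust
use std::sync::mpsc::Sender;

// Hypothetical stand-in for the GUI task-result type.
enum TaskResult {
    NetworkContextCreated { network: &'static str },
}

fn apply_mcp_switch(gui_tx: &Sender<TaskResult>, network: &'static str) {
    // 1) service.swap_context(context) on the MCP side (omitted here)
    // 2) notify the GUI so it finalizes the same switch next frame,
    //    updating chosen_network, screens, and ZMQ listeners in one place
    let _ = gui_tx.send(TaskResult::NetworkContextCreated { network });
}
```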
```rust
let spv_started = if start_spv {
    if new_ctx.core_backend_mode() != CoreBackendMode::Spv {
        new_ctx.set_core_backend_mode_volatile(CoreBackendMode::Spv);
    }
    match new_ctx.start_spv() {
```
🔴 Blocking: SwitchNetwork still bypasses the developer-mode SPV gate for GUI callers
When start_spv is true, this handler always forces CoreBackendMode::Spv and calls start_spv(). That is looser than AppState::try_auto_start_spv(), which only auto-starts SPV when auto_start_spv, is_developer_mode(), and SPV mode are all true. On the GUI slow path, start_spv comes from the persisted auto-start setting, so a non-developer user can still have SPV forced on during a newly created network switch, bypassing the app's own safety gate.
source: ['claude-general', 'codex-general', 'claude-rust-quality', 'codex-rust-quality']
🤖 Fix this with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/backend_task/mod.rs`:
- [BLOCKING] lines 485-489: SwitchNetwork still bypasses the developer-mode SPV gate for GUI callers
When `start_spv` is true, this handler always forces `CoreBackendMode::Spv` and calls `start_spv()`. That is looser than `AppState::try_auto_start_spv()`, which only auto-starts SPV when `auto_start_spv`, `is_developer_mode()`, and SPV mode are all true. On the GUI slow path, `start_spv` comes from the persisted auto-start setting, so a non-developer user can still have SPV forced on during a newly created network switch, bypassing the app's own safety gate.
```rust
// Dispatch deferred SDK reinit after DAPI discovery
if self.pending_reinit_after_discovery {
    self.pending_reinit_after_discovery = false;
    action |= AppAction::BackendTask(BackendTask::ReinitCoreClientAndSdk);
```
🔴 Blocking: Deferred SDK reinit after DAPI discovery can still run against the wrong network
The follow-up reinit is tracked as a bare boolean, so the next frame always dispatches BackendTask::ReinitCoreClientAndSdk against whatever current_app_context() happens to be current then. The discovery result itself is network-specific and updates the discovered network's config first, but if the user switches networks before the next frame, the reinit runs on the newly selected context instead. That leaves the discovered network with stale SDK/Core clients while the UI reports success.
source: ['claude-general', 'codex-general', 'claude-rust-quality', 'codex-rust-quality']
🤖 Fix this with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/ui/network_chooser_screen.rs`:
- [BLOCKING] lines 2104-2107: Deferred SDK reinit after DAPI discovery can still run against the wrong network
The follow-up reinit is tracked as a bare boolean, so the next frame always dispatches `BackendTask::ReinitCoreClientAndSdk` against whatever `current_app_context()` happens to be current then. The discovery result itself is network-specific and updates the discovered network's config first, but if the user switches networks before the next frame, the reinit runs on the newly selected context instead. That leaves the discovered network with stale SDK/Core clients while the UI reports success.
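A sketch of the fix: replace the bare bool with `Option<Network>` so the deferred reinit stays pinned to the network the discovery ran against. Field names here are hypothetical:

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum Network { Mainnet, Testnet }

struct Screen {
    pending_reinit_after_discovery: Option<Network>,
    current_network: Network,
}

impl Screen {
    /// Returns true only when the reinit should be dispatched now.
    fn take_pending_reinit(&mut self) -> bool {
        match self.pending_reinit_after_discovery.take() {
            // Still on the discovered network: safe to reinit.
            Some(net) if net == self.current_network => true,
            // User switched away first: drop (or requeue) rather than
            // reinitializing the wrong context.
            _ => false,
        }
    }
}
```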
```rust
// Fast path: context already exists — switch immediately.
if self.context_available_for_network(network) {
    self.finalize_network_switch(network);
    return;
}

// Slow path: dispatch SwitchNetwork as a backend task. The result
// (NetworkContextCreated) comes back through the task result channel
// and is handled in update(). Same path used by MCP tools.
self.network_switch_pending = Some(network);
self.network_switch_banner = Some(MessageBanner::set_global(
    self.current_app_context().egui_ctx(),
    format!("Connecting to {network:?}..."),
    MessageType::Info,
));
let start_spv = self
    .current_app_context()
    .db
    .get_auto_start_spv()
    .unwrap_or(false);
self.handle_backend_task(BackendTask::SwitchNetwork { network, start_spv });
}

/// Complete the network switch after the context is available.
fn finalize_network_switch(&mut self, network: Network) {
```
🟡 Suggestion: Fast-path switches to an existing context still skip SPV auto-start
When the target context already exists, change_network() immediately calls finalize_network_switch() and returns. Only the slow path passes start_spv into BackendTask::SwitchNetwork, and finalize_network_switch() never calls try_auto_start_spv(). As a result, switching back to a previously created network will not restart SPV after the user enables auto-start or after SPV stopped on that context earlier, so behavior depends on whether the context was created in this switch or an earlier one.
source: ['claude-general', 'codex-general', 'claude-rust-quality', 'codex-rust-quality']
🤖 Fix this with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/app.rs`:
- [SUGGESTION] lines 751-775: Fast-path switches to an existing context still skip SPV auto-start
When the target context already exists, `change_network()` immediately calls `finalize_network_switch()` and returns. Only the slow path passes `start_spv` into `BackendTask::SwitchNetwork`, and `finalize_network_switch()` never calls `try_auto_start_spv()`. As a result, switching back to a previously created network will not restart SPV after the user enables auto-start or after SPV stopped on that context earlier, so behavior depends on whether the context was created in this switch or an earlier one.
```rust
fn display_message(&mut self, _msg: &str, msg_type: MessageType) {
    self.reinit_banner.take_and_clear();
    self.config_save_failed = false;
```
🟡 Suggestion: Unrelated screen messages still clear reinit progress and save-failure state
display_message() unconditionally clears reinit_banner and resets config_save_failed for every incoming message, even though those fields belong to the async reinit flow handled in display_task_result(). If any unrelated info/error banner arrives while reinit is still in flight, the progress banner disappears early and the eventual completion path loses whether the config save failed.
source: ['claude-general', 'codex-general', 'claude-rust-quality', 'codex-rust-quality']
🤖 Fix this with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/ui/network_chooser_screen.rs`:
- [SUGGESTION] lines 2199-2201: Unrelated screen messages still clear reinit progress and save-failure state
`display_message()` unconditionally clears `reinit_banner` and resets `config_save_failed` for every incoming message, even though those fields belong to the async reinit flow handled in `display_task_result()`. If any unrelated info/error banner arrives while reinit is still in flight, the progress banner disappears early and the eventual completion path loses whether the config save failed.
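A sketch of scoping the clears to the reinit flow; `MessageKind` is a placeholder for however the screen distinguishes reinit-flow messages from unrelated banners:

```rust
enum MessageKind { ReinitDone, ReinitFailed, Other }

struct Screen {
    reinit_in_flight: bool,
    config_save_failed: bool,
}

impl Screen {
    fn display_message(&mut self, kind: MessageKind) {
        match kind {
            // Only the reinit lifecycle clears reinit state.
            MessageKind::ReinitDone | MessageKind::ReinitFailed => {
                self.reinit_in_flight = false;
                self.config_save_failed = false;
            }
            // Unrelated banners no longer dismiss the progress banner
            // or lose the save-failure flag.
            MessageKind::Other => {}
        }
    }
}
```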
```rust
McpToolError::TaskFailed(task_err) => {
    // Include the full Debug error chain so MCP clients can see
    // the underlying cause (e.g. SDK/DAPI errors) instead of just
    // the user-friendly Display message.
    let details = format!("{task_err:?}");
    (
        CODE_TASK_FAILED,
        e.to_string(),
        Some(serde_json::Value::String(details)),
```
🟡 Suggestion: MCP TaskFailed responses still expose the full Debug error chain
This branch serializes format!("{task_err:?}") into the JSON-RPC error data field, which sends MCP clients the complete Debug representation of TaskError. That can include filesystem paths and low-level SDK details that the rest of the app intentionally keeps out of user-facing Display messages. Even if MCP is usually local, this still breaks the codebase's Display/Debug separation and leaks more internal detail than necessary.
source: ['claude-general', 'codex-general', 'claude-rust-quality', 'codex-rust-quality']
🤖 Fix this with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/mcp/error.rs`:
- [SUGGESTION] lines 39-47: MCP TaskFailed responses still expose the full Debug error chain
This branch serializes `format!("{task_err:?}")` into the JSON-RPC error `data` field, which sends MCP clients the complete `Debug` representation of `TaskError`. That can include filesystem paths and low-level SDK details that the rest of the app intentionally keeps out of user-facing `Display` messages. Even if MCP is usually local, this still breaks the codebase's Display/Debug separation and leaks more internal detail than necessary.
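A sketch of a sanitized mapping. The error code and tuple shape follow the excerpt above, but the payload is an illustration, not the project's actual `McpToolError` implementation:

```rust
use serde_json::json;

const CODE_TASK_FAILED: i32 = -32000; // placeholder value

fn task_failed_response(
    display_msg: &str,
    debug_chain: &str, // format!("{task_err:?}") stays server-side
) -> (i32, String, Option<serde_json::Value>) {
    // Log the full Debug chain (paths, SDK internals) on the server;
    // send only the user-facing Display message over the wire.
    tracing::warn!("task failed: {debug_chain}");
    (
        CODE_TASK_FAILED,
        display_msg.to_owned(),
        Some(json!({ "kind": "task_failed" })), // typed, no Debug dump
    )
}
```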
…orm-wallet2

Merge feat/platform-wallet2 into the backport branch and fix 16 compile errors + 1 test failure from API divergence:

- PlatformAddress import (backported #814 vs pw2 module structure)
- FeatureGate import (missing after merge)
- AppState field access: replace mainnet/testnet/devnet/local _app_context fields with network_contexts iteration (#814 lazy contexts vs pw2 restructure)
- MigrationError conversions: chain .migration_err() on v34-v39 migration steps (pw2 error API vs #816 FK cleanup)
- PlatformAddressBalances pattern: add missing `network` field
- test_v33_migration_with_orphaned_fk_rows: adjust assertion — pw2 migration 37 drops/recreates wallet_transactions with per-account attribution, so v33 FK cleanup survivors are gone. Test verifies cleanup of other FK tables (wallet_addresses, etc.) which survive.

382 tests pass, 72 kittest pass.
Summary
Replaces #803 (rebased — original branch was based on squash-merged PR #767, making merge impossible).
- `BackendTask::SwitchNetwork` handler replaces scattered switch logic
- `network_switch` tool for programmatic network changes

Changes
27 files changed, +2407 / -1071 lines. Key files:

- `src/app.rs` — ThemeState, lazy contexts, simplified update loop
- `src/backend_task/mod.rs` — SwitchNetwork, DiscoverDapiNodes tasks
- `src/backend_task/error.rs` — network operation error variants
- `src/mcp/tools/network.rs` — network_switch, network_info tools
- `src/context/mod.rs` — optional network contexts
- `src/ui/network_chooser_screen.rs` — DAPI discovery UI
src/app.rs— ThemeState, lazy contexts, simplified update loopsrc/backend_task/mod.rs— SwitchNetwork, DiscoverDapiNodes taskssrc/backend_task/error.rs— network operation error variantssrc/mcp/tools/network.rs— network_switch, network_info toolssrc/context/mod.rs— optional network contextssrc/ui/network_chooser_screen.rs— DAPI discovery UIReview fixes
Addressed 32 of 38 review comments across 7 commits:
reinit_core_client_and_sdk()now usescreate_core_rpc_client()to preserve cookie auth fallbackunwrap_or_default()so unconfigured networks can bootstrapchosen_network(post-fallback) instead ofsaved_networkcore_backend_mode.store()moved after provider binding succeedsAppContext::new()wrapped inblock_in_place()set_ctx!macro uses exhaustiveset:/skip:lists — compiler catches new variantsNetworkContextCreationFailedincludes detail field for DebugFeatureGate::DeveloperMode{network}Display format, double-persist removed, OnceCell constructor, ConfirmationDialog consistencyAudit
Independent auditor verified all 17 features from original PR #803 are present. No unique work lost. Differences from the old branch are exclusively from PRs merged into v1.0-dev since (#805 FeatureGate, #809 SPV EventHandler, #767 SDK update).
Test plan
- `cargo clippy --all-features --all-targets -- -D warnings` clean
- `cargo test --all-features --workspace` — 526 passed, 0 failed
- `network_switch` tool

🤖 Co-authored by Claudius the Magnificent AI Agent
Summary by CodeRabbit
Release Notes
New Features
Documentation
Chores