refactor(spv): eliminate separate thread/runtime, improve shutdown and UX#577
Conversation
fix: update dashpay/platform to d6f4eb9 for DIP-18 HRP fix

The previous platform revision used incorrect bech32m HRP prefixes (evo/tevo) for Platform addresses. The updated commit uses the correct DIP-0018 prefixes (dash/tdash).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
refactor(spv): use dash-spv SyncProgress API directly

Replace the intermediate SyncProgress/DetailedSyncProgress/SyncStage translation layer with direct use of dash_spv::sync::SyncProgress. This eliminates ~120 lines of bridge code in spawn_progress_watcher() and determine_sync_stage(), and lets the UI query per-manager progress (headers, filter_headers, filters, masternodes, blocks) via the upstream API.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
fix(spv): improve sync progress accuracy and robustness

- Log headers() error before discarding for easier debugging
- Restore download-window optimization for headers progress on checkpoint-resumed syncs (progress starts near 0%, not 83%)
- Replace catch-all `_ => 0.0` with explicit SyncState variant matches so new variants produce compile errors
- Fix filters progress to use current_height/target_height instead of downloaded/target_height (session count vs absolute height mismatch)
- Restore peer count in sync status text
- Add "Querying peer heights" label and diffs_processed to masternode status for more informative sync messages

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
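The catch-all removal above can be sketched in a few lines: matching every variant explicitly (no `_` arm) turns a newly added upstream variant into a compile error instead of a silent 0.0. The enum here is a stand-in for illustration, not dash-spv's actual SyncState.

```rust
// Stand-in enum; the real SyncState lives in dash-spv.
enum SyncState {
    Idle,
    Syncing { current: u32, target: u32 },
    Synced,
    Error,
}

fn progress(state: &SyncState) -> f64 {
    match state {
        SyncState::Idle => 0.0,
        SyncState::Syncing { current, target } if *target > 0 => {
            *current as f64 / *target as f64
        }
        SyncState::Syncing { .. } => 0.0,
        SyncState::Synced => 1.0,
        SyncState::Error => 0.0,
        // No `_ => 0.0` arm: if a variant is added upstream, this match
        // stops compiling instead of silently reporting zero progress.
    }
}

fn main() {
    assert_eq!(progress(&SyncState::Synced), 1.0);
    assert_eq!(progress(&SyncState::Syncing { current: 50, target: 100 }), 0.5);
}
```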
fix(spv): use height-based blocks progress to prevent bar from jumping

The blocks progress bar was bouncing backward because it used processed/requested session counters whose denominator grows as filters discover more matching blocks. Switch to last_processed block height relative to headers target_height, which only increases. Display "current / target" heights instead of percentage on the blocks bar.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
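The height-based computation can be sketched as a pure function; the values and function name are illustrative. For a fixed target, last_processed only grows, so the ratio is monotone, whereas a session-counter ratio can shrink when its denominator grows.

```rust
// Progress from last processed block height against a fixed headers target.
// Monotone for a fixed target_height, unlike processed/requested counters.
fn blocks_progress(last_processed: u32, target_height: u32) -> f64 {
    if target_height == 0 {
        return 0.0; // no target yet: report zero rather than dividing by zero
    }
    (last_processed.min(target_height) as f64) / target_height as f64
}

fn main() {
    // Session counters can move backward: 50/100 = 50%, then the filter scan
    // finds more matching blocks and 60/150 = 40% -- the bar jumps back.
    // Height-based progress never decreases for a fixed target:
    assert!(blocks_progress(500, 1000) <= blocks_progress(600, 1000));
    assert_eq!(blocks_progress(1000, 1000), 1.0);
}
```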
fix(spv): improve progress bar resilience across resume and network switch

- Add windowed progress tracking for filters (consistent with headers/filter_headers)
- Reset blocks_stage_start on Error state so recovery gets a fresh window
- Track spv_progress_network to detect network changes and rebuild progress state from the new network's sync_progress instead of resetting to zero
- Eliminate redundant clone in progress watcher (move instead of clone twice)
- Consolidate all progress state reset logic into rebuild_spv_progress_state()

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Propagate SPV status into ConnectionStatus right after start_spv() and stop_spv() so the UI reflects the change on the next frame instead of waiting for the next throttled trigger_refresh() cycle (2-10 seconds). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…own hang

SPV finality and reconcile listeners blocked app shutdown for up to 10s because they never checked the global cancellation token. Add cancel branches to their tokio::select! loops so they exit immediately on shutdown.
…down trace logging

Wrap reconcile_spv_wallets(), handle_spv_finality_event(), and the debounce sleep inside tokio::select! with the cancellation token so shutdown can interrupt them even when blocked on locks held by the SPV sync thread. Add trace-level logging to TaskManager::shutdown() for per-task join timing to aid future shutdown diagnostics.
…aster

- Disable the Disconnect button while SPV status is Stopping to prevent double-clicks and provide visual feedback
- Poll SPV status every 200ms during Stopping instead of the 10s connected interval so the Stopped transition is reflected within 1s
- Reset the throttle timer in stop_spv() so fast polling starts immediately
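The throttle-reset trick works by back-dating the last-update instant by the full refresh interval, so the very next poll fires immediately. A std-only sketch; the struct and constant names mirror the description above but are illustrative.

```rust
use std::time::{Duration, Instant};

const REFRESH_CONNECTED: Duration = Duration::from_secs(10);

struct Throttle {
    last_update: Instant,
}

impl Throttle {
    fn should_refresh(&self) -> bool {
        self.last_update.elapsed() >= REFRESH_CONNECTED
    }

    /// Back-date the timer so the next poll refreshes without waiting.
    fn reset_timer(&mut self) {
        self.last_update = Instant::now() - REFRESH_CONNECTED;
    }
}

fn main() {
    let mut t = Throttle { last_update: Instant::now() };
    assert!(!t.should_refresh()); // freshly updated: still throttled
    t.reset_timer();              // e.g. called from stop_spv()
    assert!(t.should_refresh());  // fast polling kicks in immediately
}
```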
…diagnostics

Change spawn_sync() signature to require a &'static str name. The JoinSet now yields the task name on completion, letting shutdown() log which tasks finished and which ones timed out.
Note: Reviews paused. It looks like this branch is under active development, so CodeRabbit has automatically paused this review to avoid overwhelming you with comments as new commits arrive. You can configure this behavior in the CodeRabbit settings.
No actionable comments were generated in the recent review. 🎉

📝 Walkthrough

Moves SPV from its own OS thread/runtime into the main Tokio runtime as named TaskManager tasks; unifies SPV progress types to SpvSyncProgress; adds task naming/tracking and shutdown diagnostics; updates connection/wallet lifecycle timing and cancellation behavior; adds docs, tests, and a security audit.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant UI as "UI / NetworkChooser"
    participant TM as "TaskManager"
    participant RT as "Tokio Runtime"
    participant SM as "SpvManager"
    participant NW as "Network / SPV backend"
    UI->>TM: request SPV start (spawn_sync "spv_main_loop")
    TM->>RT: submit named task
    RT->>SM: run_spv_loop (task)
    SM->>NW: start sync / event subscriptions
    NW-->>SM: sync progress events
    SM->>TM: spawn named event handlers
    UI->>SM: status snapshot request
    SM-->>UI: SpvSyncProgress / status
    UI->>SM: request stop
    SM->>TM: cancel stop_token
    TM->>RT: cancel tasks / join (report active_names)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: ✅ 3 passed
Replace the dedicated SPV thread + 4-worker tokio runtime with a spawned task on the main 12-worker runtime via TaskManager::spawn_sync. This simplifies shutdown (the SPV loop is now tracked in the unified JoinSet), removes cross-runtime complexity, and improves debuggability.

Additional changes:
- Fix: zeroize xprv_str after wallet import (security H-1)
- Fix: sanitize devnet_name in build_spv_data_dir to prevent path traversal
- Add 21 integration tests covering lifecycle, concurrency, deadlock detection, and live testnet sync

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The spv_request_handler selected on stop_token, which was independent of the global cancellation token. During window-close shutdown only the global token was cancelled, leaving the request handler running and causing a ~5s hang until the TaskManager timeout aborted it. Fix by creating stop_token as a child_token() of the global cancel. This also simplifies run_spv_loop and run_sync_and_monitor by removing the redundant global_cancel parameter — a single stop_token now covers both explicit SpvManager::stop() and application-wide shutdown. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
… shutdown tracing

Pin rust-dashcore patches to commit d8bc066, which includes:
- sync manager task loop exits on network errors instead of logging indefinitely
- sync coordinator signal_shutdown() cancels tasks before network disconnect
- connection tasks race Peer::connect() against the shutdown token

Add debug tracing to SpvManager::stop() and run_sync_and_monitor() to measure client.stop() duration, and trace-level polling of SPV status in ConnectionStatus.
* fix: compute relative timestamps from actual data (#581)
  * fix: compute relative timestamps from actual data
    Replace hardcoded display strings like "Received: 1 day ago" with real relative timestamps computed from document createdAt/updatedAt fields using chrono::Utc::now(). Fixes #579
  * style: fix import ordering per cargo fmt
  * refactor: extract format_relative_time to shared dashpay module
    Deduplicate the identical format_relative_time function that existed in both contact_requests.rs and send_payment.rs. Move it to the dashpay mod.rs as a pub(crate) function and import from both files.
  Co-authored-by: PastaClaw <thepastaclaw@users.noreply.github.com>
* fix: update platform for DIP-18 HRP and improve SPV sync progress (#575)
  * fix: update dashpay/platform to d6f4eb9 for DIP-18 HRP fix
  * refactor(spv): use dash-spv SyncProgress API directly
  * fix(spv): improve sync progress accuracy and robustness
  * fix(spv): use height-based blocks progress to prevent bar from jumping
  * fix(spv): improve progress bar resilience across resume and network switch
  Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
* build: add git and cargo permissions to Claude Code workflow (#565)
  * build: add git and cargo permissions to Claude Code workflow
    Allow Claude to run git fetch/merge/checkout/rebase/push and cargo build/test/clippy/fmt commands. Switch model to opus.
  * fix: use claude_args for model and allowed tools
    The `model` and `allowed_tools` inputs are not declared in claude-code-action@v1 and are silently ignored. Move them to `claude_args` with --model and --allowedTools flags. Also fix deprecated colon syntax (`:*`) to space syntax (` *`).
  * fix: use single quotes to prevent glob expansion in allowed tools
  * build: sandbox cargo commands and add git permissions for Claude
    Add safe-cargo.sh wrapper that strips CI secrets before running cargo. Use --allowedTools for git and safe-cargo, --disallowedTools for raw cargo. Document safe-cargo usage in CLAUDE.md.
  * fix: switch safe-cargo.sh from denylist to allowlist approach
    Use `env -i` (start with empty environment, explicitly pass only what cargo needs) instead of `env -u` (strip known secrets). This is more robust against future secrets being added to the workflow.
  * build: deny Claude from editing CI scripts and workflows
    Prevent Claude from modifying .github/scripts/ and .github/workflows/ to ensure the safe-cargo wrapper cannot be tampered with.
  * fix: conditionally pass optional env vars in safe-cargo.sh
    Empty PROTOC="" caused prost build scripts to fail with "protoc not found". Now optional vars (PROTOC, CC, CXX, etc.) are only passed when set and non-empty. Tested: build, test, fmt all pass through the wrapper.
  * rabbit feedback
  * build: move safe-cargo.sh to scripts/ and allow +nightly fmt
    Move safe-cargo.sh from .github/scripts/ to top-level scripts/ for better discoverability. Add detailed comment explaining why the wrapper exists (prevent CI secret exfiltration via build scripts). Update all references in claude.yml, CLAUDE.md, and permission settings. Add `+nightly fmt` to allowedTools so Claude can follow CLAUDE.md formatting instructions in CI.
  Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

Co-authored-by: PastaClaw <thepastaclaw@users.noreply.github.com>
Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
- Cargo.toml: keep the PR's [lints.rust.unexpected_cfgs] section alongside v1.0-dev's [lints.clippy]
- Cargo.lock: regenerated
🧹 Nitpick comments (7)
src/context/connection_status.rs (1)
110-115: Minor inconsistency in mutex poisoning handling.

`reset_timer` silently ignores a poisoned `last_update` mutex (`if let Ok`), while `trigger_refresh` on line 236 recovers from it via `poisoned.into_inner()`. If the mutex is poisoned, `reset_timer` becomes a no-op and the faster polling during shutdown won't kick in. Align with the existing recovery pattern:

```diff
 pub fn reset_timer(&self) {
-    if let Ok(mut last) = self.last_update.lock() {
-        *last = Instant::now() - REFRESH_CONNECTED;
-    }
+    let mut last = match self.last_update.lock() {
+        Ok(guard) => guard,
+        Err(poisoned) => poisoned.into_inner(),
+    };
+    *last = Instant::now() - REFRESH_CONNECTED;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/context/connection_status.rs` around lines 110-115: reset_timer currently ignores a poisoned last_update mutex because it uses `if let Ok(...)`, causing a no-op on poisoning; update reset_timer to handle poisoning the same way trigger_refresh does by calling lock() and using the poison guard's into_inner() on Err so you always get a mutable Instant and set `*last = Instant::now() - REFRESH_CONNECTED`; mirror the poison recovery pattern used in trigger_refresh.

src/ui/dashpay/mod.rs (1)
40-41: The `contains("seconds")` check is coupled to `chrono_humanize`'s English output format.

If `chrono_humanize` ever changes its output wording or if localization is added, this heuristic would silently break. This is low-risk today since the library's output is stable and the crate is English-only, but worth noting. A more robust alternative would be to check the time delta directly:

```diff
-match dt_result {
-    LocalResult::Single(dt) => {
-        let human = HumanTime::from(dt).to_string();
-        if human.contains("seconds") {
-            Some("just now".to_string())
-        } else {
-            Some(human)
-        }
-    }
-    _ => None,
-}
+match dt_result {
+    LocalResult::Single(dt) => {
+        let delta = Utc::now() - dt;
+        if delta.num_seconds().abs() < 60 {
+            Some("just now".to_string())
+        } else {
+            Some(HumanTime::from(dt).to_string())
+        }
+    }
+    _ => None,
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/ui/dashpay/mod.rs` around lines 40-41: the heuristic using human.contains("seconds") is brittle because it depends on chrono_humanize's English text; replace that text-match with a direct duration check instead: compute the time delta between now and the timestamp (use the same DateTime variable used to produce human), convert to seconds (or use chrono::Duration), and if delta.num_seconds() < threshold (e.g., 60) return "just now", otherwise fall back to the humanized string. Update the branch that currently checks human.contains("seconds") to use the duration comparison so the logic no longer relies on chrono_humanize output.

Cargo.toml (1)
93-102: Minor URL inconsistency between patch key and replacement URLs.

The `[patch]` section key uses `https://github.com/dashpay/rust-dashcore` (no `www`) while all replacement entries use `https://www.github.com/dashpay/rust-dashcore` (with `www`). This works because Cargo resolves git URLs independently, but it's an inconsistency that could confuse future maintainers. Consider aligning the URLs.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@Cargo.toml` around lines 93-102: the patch entries under [patch."https://github.com/dashpay/rust-dashcore"] use replacement git URLs with "https://www.github.com/…", which is inconsistent; update the replacement URLs for all patched crates (dash-network, dash-spv, dashcore, dashcore-private, dashcore-rpc, dashcore-rpc-json, dashcore_hashes, key-wallet, key-wallet-manager) to match the patch key by removing the "www" prefix (use https://github.com/dashpay/rust-dashcore for each) so the patch key and replacement URLs are consistent.

docs/ai-design/2026-02-16-spv-single-runtime-refactor/security-audit.md (1)
1-8: Documentation note: audit is labeled as automated.

Line 4 states `Auditor: Security Engineer (automated)`. Consider adding context about the tool/process that generated this audit (e.g., AI-assisted) so readers understand the provenance and calibrate their trust level accordingly.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/ai-design/2026-02-16-spv-single-runtime-refactor/security-audit.md` around lines 1-8: update the audit metadata so the "Auditor: Security Engineer (automated)" entry clearly identifies the tool/process and provenance: either expand that line to include the tool name and "AI-assisted", or add a short "Provenance" or "Tooling" field immediately after the Auditor line that states the generator (tool/version), whether AI assistance was used, and a brief note about limitations, so readers can calibrate trust.

src/utils/tasks.rs (1)
32-42: `active_names` grows unboundedly during normal (non-shutdown) operation.

Names are pushed in `spawn_sync` but only removed during the shutdown join loop (lines 81-85). For a long-running desktop session that spawns many short-lived tasks, this `Vec` will grow continuously. In practice this is negligible for a desktop app, but if you want precision in the remaining-tasks diagnostic, consider removing names as tasks complete outside of shutdown as well (e.g., a background reaper that polls `join_next`).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/utils/tasks.rs` around lines 32-42: spawn_sync currently pushes the task name into active_names, but names are only removed during shutdown; change the lifecycle so each spawned task removes its name when it completes by adding cleanup to the task wrapper: after the future finishes, lock active_names and remove the matching name (e.g., search by equality and remove the first match), handling poisoned lock errors with the same pattern used elsewhere; this keeps the join/shutdown logic intact but prevents active_names from growing unbounded during normal operation.

src/spv/manager.rs (1)
1317-1338: Solid path traversal mitigation for devnet names.

The sanitization replaces anything outside `[a-zA-Z0-9_-]` with `_` and falls back to `"devnet"` on empty. This prevents directory traversal via names like `../../etc`. One optional thought: consider enforcing a reasonable max length (e.g., 64 chars) to prevent excessively long directory paths, though this is low risk since devnet names come from configuration.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/spv/manager.rs` around lines 1317-1338: the devnet name sanitization in the Network::Devnet branch (using config.devnet_name and producing sanitized) should also enforce a maximum length to avoid excessively long filesystem names; after building sanitized (and after the empty-check fallback), truncate it to a reasonable limit (e.g., 64 characters) before returning so the returned string is at most that length.

src/spv/tests.rs (1)
219-242: Consider asserting `Idle` is not a valid post-start status.

The assertion on lines 231-237 allows `Starting`, `Syncing`, or `Error`. Since `start()` synchronously sets the status to `Starting` before spawning the async task, it should not be possible for the status to still be `Idle` at this point. The current assertion implicitly handles this correctly, but adding an explicit exclusion of `Idle` in the message would make the intent clearer. This is very minor; the test logic is sound.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/spv/tests.rs` around lines 219-242: the test test_start_sets_starting_status should explicitly exclude SpvStatus::Idle after calling manager.start(0): update the assertion (or add an assert_ne) using manager.status() / snapshot to ensure snapshot.status is not SpvStatus::Idle while still allowing Starting, Syncing, or Error, so the intent that start() synchronously transitions out of Idle is explicit.
merge: resolve v1.0-dev conflicts for #577
@QuantumExplorer I will get rid of |
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/context/connection_status.rs`:
- Around line 114-119: The function reset_timer currently opens an if let and
closes it but is missing the final closing brace for the function, causing a
compile error; add the missing closing brace to terminate the pub fn
reset_timer(&self) { ... } block (referencing the reset_timer method and the
self.last_update.lock() if-let) so the function is properly closed.
In `@src/ui/network_chooser_screen.rs`:
- Line 19: Remove the unused import ProgressPercentage from the use statement
that currently imports dash_sdk::dash_spv::sync::{ProgressPercentage,
SyncProgress as SpvSyncProgress, SyncState}; — edit the use line to only import
the symbols actually used (e.g., keep SyncProgress as SpvSyncProgress and
SyncState) so the redundant ProgressPercentage identifier is eliminated.
* fix: skip best chain lock polling in SPV mode (#567) * Initial plan * Skip chain lock refresh in SPV mode Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com> * Document SPV guard intent Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com> --------- Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com> Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com> * fix: compute relative timestamps from actual data (#581) * fix: compute relative timestamps from actual data Replace hardcoded display strings like "Received: 1 day ago" with real relative timestamps computed from document createdAt/updatedAt fields using chrono::Utc::now(). Fixes #579 * style: fix import ordering per cargo fmt * refactor: extract format_relative_time to shared dashpay module Deduplicate the identical format_relative_time function that existed in both contact_requests.rs and send_payment.rs. Move it to the dashpay mod.rs as a pub(crate) function and import from both files. --------- Co-authored-by: PastaClaw <thepastaclaw@users.noreply.github.com> * fix: update platform for DIP-18 HRP and improve SPV sync progress (#575) * fix: update dashpay/platform to d6f4eb9 for DIP-18 HRP fix The previous platform revision used incorrect bech32m HRP prefixes (evo/tevo) for Platform addresses. The updated commit uses the correct DIP-0018 prefixes (dash/tdash). Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * refactor(spv): use dash-spv SyncProgress API directly Replace the intermediate SyncProgress/DetailedSyncProgress/SyncStage translation layer with direct use of dash_spv::sync::SyncProgress. This eliminates ~120 lines of bridge code in spawn_progress_watcher() and determine_sync_stage(), and lets the UI query per-manager progress (headers, filter_headers, filters, masternodes, blocks) via the upstream API. 
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix(spv): improve sync progress accuracy and robustness - Log headers() error before discarding for easier debugging - Restore download-window optimization for headers progress on checkpoint-resumed syncs (progress starts near 0% not 83%) - Replace catch-all _ => 0.0 with explicit SyncState variant matches so new variants produce compile errors - Fix filters progress to use current_height/target_height instead of downloaded/target_height (session count vs absolute height mismatch) - Restore peer count in sync status text - Add "Querying peer heights" label and diffs_processed to masternode status for more informative sync messages Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix(spv): use height-based blocks progress to prevent bar from jumping The blocks progress bar was bouncing backward because it used processed/requested session counters whose denominator grows as filters discover more matching blocks. Switch to last_processed block height relative to headers target_height, which only increases. Display "current / target" heights instead of percentage on the blocks bar. 
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix(spv): improve progress bar resilience across resume and network switch - Add windowed progress tracking for filters (consistent with headers/filter_headers) - Reset blocks_stage_start on Error state so recovery gets a fresh window - Track spv_progress_network to detect network changes and rebuild progress state from the new network's sync_progress instead of resetting to zero - Eliminate redundant clone in progress watcher (move instead of clone twice) - Consolidate all progress state reset logic into rebuild_spv_progress_state() Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> --------- Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> * build: add git and cargo permissions to Claude Code workflow (#565) * build: add git and cargo permissions to Claude Code workflow Allow Claude to run git fetch/merge/checkout/rebase/push and cargo build/test/clippy/fmt commands. Switch model to opus. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: use claude_args for model and allowed tools The `model` and `allowed_tools` inputs are not declared in claude-code-action@v1 and are silently ignored. Move them to `claude_args` with --model and --allowedTools flags. Also fix deprecated colon syntax (`:*`) to space syntax (` *`). 
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: use single quotes to prevent glob expansion in allowed tools

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* build: sandbox cargo commands and add git permissions for Claude

  - Add safe-cargo.sh wrapper that strips CI secrets before running cargo
  - Use --allowedTools for git and safe-cargo, --disallowedTools for raw cargo
  - Document safe-cargo usage in CLAUDE.md

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: switch safe-cargo.sh from denylist to allowlist approach

  Use `env -i` (start with an empty environment and explicitly pass only what cargo needs) instead of `env -u` (strip known secrets). This is more robust against future secrets being added to the workflow.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* build: deny Claude from editing CI scripts and workflows

  Prevent Claude from modifying .github/scripts/ and .github/workflows/ to ensure the safe-cargo wrapper cannot be tampered with.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: conditionally pass optional env vars in safe-cargo.sh

  An empty PROTOC="" caused prost build scripts to fail with "protoc not found". Now optional vars (PROTOC, CC, CXX, etc.) are only passed when set and non-empty. Tested: build, test, and fmt all pass through the wrapper.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* rabbit feedback

* build: move safe-cargo.sh to scripts/ and allow +nightly fmt

  Move safe-cargo.sh from .github/scripts/ to top-level scripts/ for better discoverability. Add a detailed comment explaining why the wrapper exists (to prevent CI secret exfiltration via build scripts). Update all references in claude.yml, CLAUDE.md, and permission settings. Add `+nightly fmt` to allowedTools so Claude can follow CLAUDE.md formatting instructions in CI.
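safe-cargo.sh implements the allowlist in shell with `env -i`. The same idea can be sketched in Rust with the standard library's `Command::env_clear()`: start from an empty child environment and add back only an explicit allowlist, so any newly added CI secret is excluded by default. This is an illustration of the principle, not the wrapper itself:

```rust
use std::process::Command;

fn main() {
    // Allowlist approach: clear the inherited environment, then pass
    // only what the child process needs. New secrets in the parent
    // environment never reach the child unless explicitly listed.
    let mut cmd = Command::new("cargo");
    cmd.env_clear()
        .env("PATH", std::env::var("PATH").unwrap_or_default())
        .env("HOME", std::env::var("HOME").unwrap_or_default())
        .arg("--version");

    // Inspect the configured child environment without running anything:
    // only the two allowlisted variables are present.
    let mut names: Vec<String> = cmd
        .get_envs()
        .filter_map(|(k, v)| v.map(|_| k.to_string_lossy().into_owned()))
        .collect();
    names.sort();
    assert_eq!(names, ["HOME", "PATH"]);
    println!("child env allowlist: {names:?}");
}
```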
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* fix: handle errors instead of panicking on corrupted database blobs (#561)

* fix: return error instead of panicking on corrupted database blobs (#560)

  - Replace `unreachable!()` in `get_scheduled_votes` with a warning log, defaulting to false for unexpected `executed` column values
  - Change `QualifiedIdentity::from_bytes()` to return `Result` instead of panicking via `.expect()`
  - Propagate deserialization errors as `rusqlite::Error` in all 6 callers so a corrupted database is surfaced to the user rather than silently ignored or crashing the app
  - Add `CorruptedBlobError` newtype in the database module to eliminate repeated `FromSqlConversionFailure` boilerplate

  Closes #560

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore: impl thiserror

* chore: fmt

* build: add Claude Code GitHub workflow and settings (#552)

  Cherry-pick from v1.0-dev to enable @claude mentions in PRs.
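The `CorruptedBlobError` newtype centralizes the "bad blob becomes an error, not a panic" conversion. A std-only sketch of the pattern (the real code uses thiserror and converts into `rusqlite::Error`; the decode function here is a hypothetical stand-in):

```rust
use std::error::Error;
use std::fmt;

/// Newtype wrapping a deserialization failure from a database blob, so
/// every caller can convert it into the DB error type in one place
/// instead of repeating conversion boilerplate at each call site.
#[derive(Debug)]
struct CorruptedBlobError(String);

impl fmt::Display for CorruptedBlobError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "corrupted database blob: {}", self.0)
    }
}

impl Error for CorruptedBlobError {}

/// Fallible decode instead of `.expect()`: a bad blob becomes an error
/// the caller can surface to the user, not a panic that aborts the app.
fn decode_identity(bytes: &[u8]) -> Result<String, CorruptedBlobError> {
    String::from_utf8(bytes.to_vec()).map_err(|e| CorruptedBlobError(e.to_string()))
}

fn main() {
    assert_eq!(decode_identity(b"alice").unwrap(), "alice");
    assert!(decode_identity(&[0xff, 0xfe]).is_err());
    println!("corrupted blob surfaced as an error, not a panic");
}
```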
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore: fix after merge

* fix: skip corrupted identity blobs in get_wallets instead of aborting

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore: fail on corrupted identity

* doc: document error handling in the db

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* feat: track DAPI endpoint availability in connection status (#533)

* fix: connection status not updated

* chore: rabbit feedback

* Initial plan

* Track DAPI connection status and display in tooltips and connection info

  - Add dapi_total_endpoints and dapi_available fields to ConnectionStatus
  - Factor DAPI availability into the overall_connected status (RED when no endpoints are available)
  - Query the SDK AddressList during periodic refresh for endpoint counts and availability
  - Display DAPI status in the connection indicator tooltip
  - Display DAPI status in the network chooser Connection Status card (all modes)
  - Add dapi_status_label() helper for consistent status text formatting

  Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com>

* Address review feedback: store available endpoint count and extract DRY helper

  - Changed dapi_available from AtomicBool to AtomicU16 (dapi_available_endpoints) to store the count of available endpoints instead of just a boolean
  - Display format now shows "Available ({available}/{total} endpoints)"
  - Extracted repeated DAPI status rendering into an add_dapi_status_label() helper function in network_chooser_screen.rs to eliminate code duplication

  Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com>

* Fix rustfmt formatting issues in connection_status.rs

  - Reorder atomic imports (AtomicU8 before AtomicU16) per rustfmt
  - Wrap the long .store() call to respect line length
  - Wrap long format!() strings to respect line length

  Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com>

* Fix borrow checker error: extract DAPI status values before mutable self borrow

  The add_dapi_status_label helper was capturing a &ConnectionStatus reference (derived from self) in closures, which extended the immutable borrow past the self.render_spv_sync_progress() mutable borrow on line 594. Fix: change add_dapi_status_label to accept pre-computed owned values (dapi_total, dapi_available, dapi_label) instead of &ConnectionStatus, and extract those values early, alongside the other status fields.

  Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com>

* Switch Platform to c2c88e4 and use get_live_addresses() for accurate available count

  Updated the dash-sdk dependency to Platform commit c2c88e4a988ce930, which adds the AddressList::get_live_addresses() method. Replaced the workaround that used get_live_address().is_some() (which could only tell whether at least one endpoint was live) with get_live_addresses().len() to get the exact count of available, non-banned DAPI endpoints.

  Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com>

* chore: typo + network changes

* chore: apply feedback

* chore: rabbit review

* chore: rabbit feedback

* chore: rabbitting

* Remove overall_connected_with method (deleted upstream in base branch)

  Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com>

* Merge v1.0-dev into copilot/update-dapi-connection-status

  Resolved a modify/delete conflict on src/context.rs: removed the file, since v1.0-dev refactored it into src/context/mod.rs and submodules, which already include our Arc<ConnectionStatus> changes.
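The borrow-checker fix described above is a general pattern: copy plain values out of a shared field before taking a mutable borrow of `self`, instead of holding a reference across the mutable call. A minimal stand-alone sketch (types heavily simplified, not the real ConnectionStatus or screen code):

```rust
struct Status {
    total: u16,
    available: u16,
}

struct Screen {
    status: Status,
    frames: u32,
}

impl Screen {
    fn render_progress(&mut self) {
        self.frames += 1; // stand-in for render_spv_sync_progress()
    }

    fn ui(&mut self) -> String {
        // Copy the plain values out first; the immutable borrow of
        // self.status ends here, so the mutable call below is allowed.
        // Holding a &Status across render_progress() would not compile.
        let (total, available) = (self.status.total, self.status.available);
        self.render_progress();
        format!("Available ({available}/{total} endpoints)")
    }
}

fn main() {
    let mut s = Screen {
        status: Status { total: 5, available: 3 },
        frames: 0,
    };
    assert_eq!(s.ui(), "Available (3/5 endpoints)");
    assert_eq!(s.frames, 1);
    println!("{}", s.ui());
}
```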
Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com>

* chore: fix platform versioning issues

* chore: add todo

* chore: fmt

---------

Co-authored-by: Lukasz Klimek <842586+lklimek@users.noreply.github.com>
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>

* refactor(spv): eliminate separate thread/runtime, improve shutdown and UX (#577)

* fix: update dashpay/platform to d6f4eb9 for DIP-18 HRP fix

  The previous platform revision used incorrect bech32m HRP prefixes (evo/tevo) for Platform addresses. The updated commit uses the correct DIP-0018 prefixes (dash/tdash).

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor(spv): use dash-spv SyncProgress API directly

  Replace the intermediate SyncProgress/DetailedSyncProgress/SyncStage translation layer with direct use of dash_spv::sync::SyncProgress. This eliminates ~120 lines of bridge code in spawn_progress_watcher() and determine_sync_stage(), and lets the UI query per-manager progress (headers, filter_headers, filters, masternodes, blocks) via the upstream API.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(spv): improve sync progress accuracy and robustness

  - Log headers() error before discarding for easier debugging
  - Restore download-window optimization for headers progress on checkpoint-resumed syncs (progress starts near 0%, not 83%)
  - Replace catch-all `_ => 0.0` with explicit SyncState variant matches so new variants produce compile errors
  - Fix filters progress to use current_height/target_height instead of downloaded/target_height (session count vs. absolute height mismatch)
  - Restore peer count in sync status text
  - Add "Querying peer heights" label and diffs_processed to masternode status for more informative sync messages

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(spv): use height-based blocks progress to prevent bar from jumping

  The blocks progress bar was bouncing backward because it used processed/requested session counters whose denominator grows as filters discover more matching blocks. Switch to last_processed block height relative to the headers target_height, which only increases. Display "current / target" heights instead of a percentage on the blocks bar.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(spv): improve progress bar resilience across resume and network switch

  - Add windowed progress tracking for filters (consistent with headers/filter_headers)
  - Reset blocks_stage_start on Error state so recovery gets a fresh window
  - Track spv_progress_network to detect network changes and rebuild progress state from the new network's sync_progress instead of resetting to zero
  - Eliminate redundant clone in progress watcher (move instead of clone twice)
  - Consolidate all progress state reset logic into rebuild_spv_progress_state()

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(spv): update ConnectionStatus immediately on connect/disconnect

  Propagate SPV status into ConnectionStatus right after start_spv() and stop_spv() so the UI reflects the change on the next frame instead of waiting for the next throttled trigger_refresh() cycle (2-10 seconds).

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(spv): check cancellation token in listener tasks to prevent shutdown hang

  The SPV finality and reconcile listeners blocked app shutdown for up to 10s because they never checked the global cancellation token. Add cancel branches to their tokio::select! loops so they exit immediately on shutdown.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore: remove log spamming line

* fix(spv): make listener handler calls cancellation-aware and add shutdown trace logging

  Wrap reconcile_spv_wallets(), handle_spv_finality_event(), and the debounce sleep inside tokio::select! with the cancellation token so shutdown can interrupt them even when they are blocked on locks held by the SPV sync thread. Add trace-level logging to TaskManager::shutdown() for per-task join timing to aid future shutdown diagnostics.
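The upstream listeners use tokio::select! with a CancellationToken. The same loop shape, a worker that re-checks a cancel signal on every tick instead of blocking indefinitely on the next event, can be sketched with std threads (an analogue of the principle only, not the actual listener code):

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{mpsc, Arc};
use std::thread;
use std::time::Duration;

/// Cancel-aware event loop: wait for events with a short timeout and
/// re-check the shared cancel flag each iteration, so shutdown is
/// observed within one tick instead of hanging until the next event.
fn drain(rx: mpsc::Receiver<u32>, cancel: Arc<AtomicBool>) -> u32 {
    let mut handled = 0;
    loop {
        if cancel.load(Ordering::Relaxed) {
            break; // exit promptly on shutdown
        }
        match rx.recv_timeout(Duration::from_millis(10)) {
            Ok(_event) => handled += 1,
            Err(mpsc::RecvTimeoutError::Timeout) => continue,
            Err(mpsc::RecvTimeoutError::Disconnected) => break,
        }
    }
    handled
}

fn main() {
    let cancel = Arc::new(AtomicBool::new(false));
    let (tx, rx) = mpsc::channel();
    let flag = Arc::clone(&cancel);
    let worker = thread::spawn(move || drain(rx, flag));

    tx.send(1).unwrap();
    tx.send(2).unwrap();
    thread::sleep(Duration::from_millis(50));
    cancel.store(true, Ordering::Relaxed); // request shutdown
    assert_eq!(worker.join().unwrap(), 2);
    println!("clean shutdown after 2 events");
}
```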
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(spv): disable Disconnect button during stopping and poll status faster

  - Disable the Disconnect button while the SPV status is Stopping, to prevent double-clicks and provide visual feedback
  - Poll SPV status every 200ms during Stopping instead of the 10s connected interval, so the Stopped transition is reflected within 1s
  - Reset the throttle timer in stop_spv() so fast polling starts immediately

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor(tasks): add mandatory task names to spawn_sync for shutdown diagnostics

  Change the spawn_sync() signature to require a &'static str name. The JoinSet now yields the task name on completion, letting shutdown() log which tasks finished and which ones timed out.

* chore: log task names

* refactor(spv): eliminate separate OS thread and tokio runtime

  Replace the dedicated SPV thread + 4-worker tokio runtime with a task spawned on the main 12-worker runtime via TaskManager::spawn_sync. This simplifies shutdown (the SPV loop is now tracked in the unified JoinSet), removes cross-runtime complexity, and improves debuggability.

  Additional changes:

  - Fix: zeroize xprv_str after wallet import (security H-1)
  - Fix: sanitize devnet_name in build_spv_data_dir to prevent path traversal
  - Add 21 integration tests covering lifecycle, concurrency, deadlock detection, and live testnet sync

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(spv): derive stop_token as child of global cancel for clean shutdown

  The spv_request_handler selected on stop_token, which was independent of the global cancellation token. During window-close shutdown only the global token was cancelled, leaving the request handler running and causing a ~5s hang until the TaskManager timeout aborted it. Fix by creating stop_token as a child_token() of the global cancel.
  This also simplifies run_spv_loop and run_sync_and_monitor by removing the redundant global_cancel parameter: a single stop_token now covers both explicit SpvManager::stop() and application-wide shutdown.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* CLAUDE: minor changes about docs

* fix(spv): bump rust-dashcore to d8bc066 for faster disconnect and add shutdown tracing

  Pin the rust-dashcore patches to commit d8bc066, which includes:

  - sync manager task loop exits on network errors instead of logging indefinitely
  - sync coordinator signal_shutdown() cancels tasks before network disconnect
  - connection tasks race Peer::connect() against the shutdown token

  Add debug tracing to SpvManager::stop() and run_sync_and_monitor() to measure client.stop() duration, and trace-level polling of SPV status in ConnectionStatus.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore: update rust-dashcore rev

* merge: resolve v1.0-dev conflicts for #577 (#585)

* fix: compute relative timestamps from actual data (#581)

* fix: compute relative timestamps from actual data

  Replace hardcoded display strings like "Received: 1 day ago" with real relative timestamps computed from document createdAt/updatedAt fields using chrono::Utc::now().

  Fixes #579

* style: fix import ordering per cargo fmt

* refactor: extract format_relative_time to shared dashpay module

  Deduplicate the identical format_relative_time function that existed in both contact_requests.rs and send_payment.rs. Move it to the dashpay mod.rs as a pub(crate) function and import it from both files.

---------

Co-authored-by: PastaClaw <thepastaclaw@users.noreply.github.com>

* fix: update platform for DIP-18 HRP and improve SPV sync progress (#575)

* fix: update dashpay/platform to d6f4eb9 for DIP-18 HRP fix

  The previous platform revision used incorrect bech32m HRP prefixes (evo/tevo) for Platform addresses. The updated commit uses the correct DIP-0018 prefixes (dash/tdash).
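The relative-timestamp commit above replaces hardcoded strings with values computed from real timestamps. The upstream helper uses chrono and document createdAt/updatedAt; a std-only sketch of the bucketing such a format_relative_time helper performs (the thresholds and labels here are illustrative, not the actual implementation):

```rust
use std::time::Duration;

/// Turn an elapsed duration into a coarse human-readable label,
/// computed from data rather than hardcoded like "Received: 1 day ago".
fn format_relative_time(elapsed: Duration) -> String {
    let secs = elapsed.as_secs();
    match secs {
        0..=59 => "just now".to_string(),
        60..=3_599 => format!("{} minutes ago", secs / 60),
        3_600..=86_399 => format!("{} hours ago", secs / 3_600),
        _ => format!("{} days ago", secs / 86_400),
    }
}

fn main() {
    // In real code, `elapsed` would come from now - createdAt.
    assert_eq!(format_relative_time(Duration::from_secs(30)), "just now");
    assert_eq!(format_relative_time(Duration::from_secs(120)), "2 minutes ago");
    assert_eq!(format_relative_time(Duration::from_secs(200_000)), "2 days ago");
    println!("{}", format_relative_time(Duration::from_secs(7_200)));
}
```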
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor(spv): use dash-spv SyncProgress API directly

  Replace the intermediate SyncProgress/DetailedSyncProgress/SyncStage translation layer with direct use of dash_spv::sync::SyncProgress. This eliminates ~120 lines of bridge code in spawn_progress_watcher() and determine_sync_stage(), and lets the UI query per-manager progress (headers, filter_headers, filters, masternodes, blocks) via the upstream API.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(spv): improve sync progress accuracy and robustness

  - Log headers() error before discarding for easier debugging
  - Restore download-window optimization for headers progress on checkpoint-resumed syncs (progress starts near 0%, not 83%)
  - Replace catch-all `_ => 0.0` with explicit SyncState variant matches so new variants produce compile errors
  - Fix filters progress to use current_height/target_height instead of downloaded/target_height (session count vs. absolute height mismatch)
  - Restore peer count in sync status text
  - Add "Querying peer heights" label and diffs_processed to masternode status for more informative sync messages

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(spv): use height-based blocks progress to prevent bar from jumping

  The blocks progress bar was bouncing backward because it used processed/requested session counters whose denominator grows as filters discover more matching blocks. Switch to last_processed block height relative to the headers target_height, which only increases. Display "current / target" heights instead of a percentage on the blocks bar.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(spv): improve progress bar resilience across resume and network switch

  - Add windowed progress tracking for filters (consistent with headers/filter_headers)
  - Reset blocks_stage_start on Error state so recovery gets a fresh window
  - Track spv_progress_network to detect network changes and rebuild progress state from the new network's sync_progress instead of resetting to zero
  - Eliminate redundant clone in progress watcher (move instead of clone twice)
  - Consolidate all progress state reset logic into rebuild_spv_progress_state()

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* build: add git and cargo permissions to Claude Code workflow (#565)

* build: add git and cargo permissions to Claude Code workflow

  Allow Claude to run git fetch/merge/checkout/rebase/push and cargo build/test/clippy/fmt commands. Switch model to opus.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: use claude_args for model and allowed tools

  The `model` and `allowed_tools` inputs are not declared in claude-code-action@v1 and are silently ignored. Move them to `claude_args` with --model and --allowedTools flags. Also fix deprecated colon syntax (`:*`) to space syntax (` *`).

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: use single quotes to prevent glob expansion in allowed tools

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* build: sandbox cargo commands and add git permissions for Claude

  - Add safe-cargo.sh wrapper that strips CI secrets before running cargo
  - Use --allowedTools for git and safe-cargo, --disallowedTools for raw cargo
  - Document safe-cargo usage in CLAUDE.md

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: switch safe-cargo.sh from denylist to allowlist approach

  Use `env -i` (start with an empty environment and explicitly pass only what cargo needs) instead of `env -u` (strip known secrets). This is more robust against future secrets being added to the workflow.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* build: deny Claude from editing CI scripts and workflows

  Prevent Claude from modifying .github/scripts/ and .github/workflows/ to ensure the safe-cargo wrapper cannot be tampered with.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: conditionally pass optional env vars in safe-cargo.sh

  An empty PROTOC="" caused prost build scripts to fail with "protoc not found". Now optional vars (PROTOC, CC, CXX, etc.) are only passed when set and non-empty. Tested: build, test, and fmt all pass through the wrapper.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* rabbit feedback

* build: move safe-cargo.sh to scripts/ and allow +nightly fmt

  Move safe-cargo.sh from .github/scripts/ to top-level scripts/ for better discoverability. Add a detailed comment explaining why the wrapper exists (to prevent CI secret exfiltration via build scripts). Update all references in claude.yml, CLAUDE.md, and permission settings. Add `+nightly fmt` to allowedTools so Claude can follow CLAUDE.md formatting instructions in CI.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: PastaClaw <thepastaclaw@users.noreply.github.com>
Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* chore: bump rust-dashcore

* chore: imports

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Pasta Lil Claw <pasta+claw@dashboost.org>
Co-authored-by: PastaClaw <thepastaclaw@users.noreply.github.com>
Co-authored-by: PastaPastaPasta <6443210+PastaPastaPasta@users.noreply.github.com>

* ci: cancel in-progress workflow runs on new push (#590)

* ci: cancel in-progress workflow runs on new push

  Add concurrency groups to the Tests and Clippy workflows so that previous runs are automatically cancelled when a new commit is pushed to the same branch or PR.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* trigger ci

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* fix: use correct DashPay profile field "publicMessage" instead of "bio" (#582)

  The DashPay contract schema defines the field as "publicMessage", not "bio". This caused profile bios to never load in contact info.

  Co-authored-by: PastaClaw <thepastaclaw@users.noreply.github.com>

---------

Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com>
Co-authored-by: PastaClaw <thepastaclaw@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: PastaPastaPasta <6443210+PastaPastaPasta@users.noreply.github.com>
The SPV refactor (#577) replaced `SyncProgress.header_height` with `progress.headers().current_height()` via the ProgressPercentage trait, and removed `detailed_progress` in favor of the `state()`/`is_synced()` methods.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Summary

- Run the SPV sync loop on the main runtime via `subtasks.spawn_sync()`, simplifying shutdown and reducing resource usage
- Derive `stop_token` as a child of the global cancellation token so SPV shuts down automatically on app exit; bump rust-dashcore to include fixes for sync manager tick loop exits and cancellation-aware peer connections
- Race `Peer::connect()` against the shutdown token (was ~28s, now near-instant)
- Use the dash-spv `SyncProgress` API directly; height-based blocks progress; resilient progress bar across resume/network switch
- Mandatory task names in `spawn_sync` for shutdown diagnostics
- Debug-trace `client.stop()` duration and trace-level SPV status polling

Test plan

- `cargo test --all-features --workspace`
- Live testnet sync (`#[ignore]` test that connects, syncs 10s, asserts clean shutdown)

🤖 Generated with Claude Code
Summary by CodeRabbit

- New Features
- Bug Fixes
- Documentation
- Tests