fix(spv): restore sync progress bars via EventHandler (#809)
Conversation
The dash-spv SDK now dispatches sync progress, network events, and sync events via the `EventHandler` trait. DET was passing `Arc::new(())` (a no-op handler), so `sync_progress_state` was never written and the UI showed "SPV Sync Status" with no progress bars.

- Implement `SpvEventHandler` with `on_progress`, `on_sync_event`, and `on_network_event` — replacing the dead `spawn_progress_watcher`, `spawn_sync_event_handler`, and `spawn_network_event_handler` methods
- Pass the handler to `DashSpvClient::new` instead of `Arc::new(())`
- Restore finality event forwarding (InstantLock/ChainLock) for asset lock proof construction
- Restore reconciliation signaling on block/sync events
- Remove dead `spawn_progress_watcher` method

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
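The trait-based dispatch the commit describes can be sketched roughly as follows. This is a minimal illustration only: the trait name and callback signatures here are simplified stand-ins for dash-spv's actual `EventHandler` API, and `SpvEventHandler` is reduced to a single progress field.

```rust
use std::sync::{Arc, Mutex};

// Simplified stand-in for an event-handler trait: every callback has a
// default no-op body, so `()` can serve as a handler that silently
// drops all events (the bug this PR fixes).
trait EventHandler: Send + Sync {
    fn on_progress(&self, _percent: u8) {}
}

impl EventHandler for () {}

// A real handler that writes progress into shared state a UI can poll.
struct SpvEventHandler {
    progress: Arc<Mutex<u8>>,
}

impl EventHandler for SpvEventHandler {
    fn on_progress(&self, percent: u8) {
        if let Ok(mut p) = self.progress.lock() {
            *p = percent;
        }
    }
}

fn main() {
    let progress = Arc::new(Mutex::new(0u8));
    let handler = SpvEventHandler { progress: Arc::clone(&progress) };

    // With the no-op handler the event vanishes; with the real one,
    // shared state is updated and the UI has something to render.
    ().on_progress(42);
    handler.on_progress(42);
    println!("progress = {}", *progress.lock().unwrap()); // 42
}
```

Passing `Arc::new(())` type-checks precisely because the default method bodies make unit a valid handler, which is why the regression was silent.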
Build/FFI-only changes upstream — no API changes, zero code modifications. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
📝 Walkthrough

Updated the dash-sdk git dependency revision and refactored the SPV manager's event handling by introducing a new `SpvEventHandler`.

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Review Gate. Commit:
Actionable comments posted: 1
🧹 Nitpick comments (3)
src/spv/manager.rs (3)
183-206: Consider simplifying the user-facing error message.

Line 190 includes technical implementation details ("reported by SPV progress channel") in the error message stored in `last_error`, which is exposed via `SpvStatusSnapshot` and likely displayed in the UI. As per coding guidelines, user-facing error messages should not include technical details.

Suggested fix
```diff
- let msg = format!("Sync failed: {phase} (reported by SPV progress channel)");
+ let msg = format!("Sync failed during {phase}.");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/spv/manager.rs` around lines 183 - 206, The user-facing error message set into self.last_error currently includes internal detail "reported by SPV progress channel"; change the message built in the SpvManager::failed_manager_name(progress) branch so it only contains a concise user-facing string (e.g., format!("Sync failed: {phase}")), leave internal/detail text out of the value assigned to *err_guard / error_msg, and continue to call cs.set_spv_last_error(Some(msg)) with that cleaned message; if you want to preserve the original detailed context for debugging, log the full message to the logger instead of storing it in last_error (use the same scope where error_msg is created).
1287-1393: Consider removing superseded dead code.

`spawn_sync_event_handler` and `spawn_network_event_handler` (lines 1287-1393 and 1431-1470) are now marked `#[allow(dead_code)]` since their functionality has been moved to `SpvEventHandler`. Keeping them adds maintenance burden and risks divergence. If they're retained intentionally as a fallback, consider adding a comment explaining the rationale. Otherwise, they can be removed in a follow-up cleanup.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/spv/manager.rs` around lines 1287 - 1393, The two dead functions spawn_sync_event_handler and spawn_network_event_handler are kept with #[allow(dead_code)] but their logic moved to SpvEventHandler; remove these obsolete functions to avoid maintenance drift (or if they must stay as fallbacks, add a TODO/comment explaining why they remain). Locate the definitions of spawn_sync_event_handler and spawn_network_event_handler in manager.rs, delete their implementations and any callers or locks (reconcile_tx, finality_tx, status, last_error, subtasks) that become unused, or alternatively add a clear explanatory comment and a TODO referencing SpvEventHandler to justify retaining them.
222-243: User-facing error includes technical details.

The error message on line 228 includes the internal manager name and raw error substring, which are then exposed to the UI via `ConnectionStatus`. Per coding guidelines for `src/**/*.rs`: never include technical details in user-facing error messages. Consider using a generic user-friendly message and attaching the technical details via `BannerHandle::with_details(e)` or a similar mechanism if available in this context.

Suggested approach

```diff
  let limit = error.floor_char_boundary(100);
- let msg = format!("Sync manager {manager} failed: {}", &error[..limit]);
+ let msg = "Synchronization encountered an error. Please try again or check your connection.".to_string();
+ tracing::debug!("Sync manager {manager} failed: {}", &error[..limit]);
```

The technical detail is already logged on line 223; the user-facing message can be simplified.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/spv/manager.rs` around lines 222 - 243, The current handler for SyncEvent::ManagerError exposes internal manager name and raw error text to the UI via self.last_error and cs.set_spv_last_error; change it to keep the detailed technical text only in logs and internal storage while sending a generic, user-friendly message to the UI. Concretely, keep the tracing::error! line as-is and set the user-facing msg to something non-technical (e.g., "Synchronization failed, please retry" or similar) when calling cs.set_spv_last_error and when populating self.last_error for UI consumption; store the detailed substring (currently built with error.floor_char_boundary(100)) only in an internal field or attach it via a detail mechanism such as BannerHandle::with_details(e) if available, and ensure cs.set_spv_status(SpvStatus::Error) remains unchanged. Ensure references to SyncEvent::ManagerError, SpvStatus::Error, self.last_error, and cs.set_spv_last_error are updated accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/spv/manager.rs`:
- Around line 288-301: The on_network_event handler currently casts
connected_count to u16 directly which can silently truncate large values; update
the conversion in on_network_event (handling NetworkEvent::PeersUpdated) to
bound connected_count first (e.g., let capped = connected_count.min(u16::MAX as
usize)) and then cast capped as u16 before calling
cs.set_spv_connected_peers(...) and before storing into connected_peers,
mirroring the bounding used in spawn_network_event_handler to ensure consistent,
safe behavior.
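The bounded cast this finding asks for can be written as a small helper. This is an illustrative sketch only; the helper name `clamp_peer_count` is not from the codebase.

```rust
/// Convert a peer count to u16 without silent truncation:
/// values above u16::MAX saturate instead of wrapping.
fn clamp_peer_count(connected_count: usize) -> u16 {
    u16::try_from(connected_count).unwrap_or(u16::MAX)
}

fn main() {
    println!("{}", clamp_peer_count(8));       // 8
    println!("{}", clamp_peer_count(100_000)); // 65535
}
```

This is equivalent to the `connected_count.min(u16::MAX as usize) as u16` form the review suggests; `try_from(..).unwrap_or(u16::MAX)` just makes the saturation intent explicit at the type level.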
---
Nitpick comments:
In `@src/spv/manager.rs`:
- Around line 183-206: The user-facing error message set into self.last_error
currently includes internal detail "reported by SPV progress channel"; change
the message built in the SpvManager::failed_manager_name(progress) branch so it
only contains a concise user-facing string (e.g., format!("Sync failed:
{phase}")), leave internal/detail text out of the value assigned to *err_guard /
error_msg, and continue to call cs.set_spv_last_error(Some(msg)) with that
cleaned message; if you want to preserve the original detailed context for
debugging, log the full message to the logger instead of storing it in
last_error (use the same scope where error_msg is created).
- Around line 1287-1393: The two dead functions spawn_sync_event_handler and
spawn_network_event_handler are kept with #[allow(dead_code)] but their logic
moved to SpvEventHandler; remove these obsolete functions to avoid maintenance
drift (or if they must stay as fallbacks, add a TODO/comment explaining why they
remain). Locate the definitions of spawn_sync_event_handler and
spawn_network_event_handler in manager.rs, delete their implementations and any
callers or locks (reconcile_tx, finality_tx, status, last_error, subtasks) that
become unused, or alternatively add a clear explanatory comment and a TODO
referencing SpvEventHandler to justify retaining them.
- Around line 222-243: The current handler for SyncEvent::ManagerError exposes
internal manager name and raw error text to the UI via self.last_error and
cs.set_spv_last_error; change it to keep the detailed technical text only in
logs and internal storage while sending a generic, user-friendly message to the
UI. Concretely, keep the tracing::error! line as-is and set the user-facing msg
to something non-technical (e.g., "Synchronization failed, please retry" or
similar) when calling cs.set_spv_last_error and when populating self.last_error
for UI consumption; store the detailed substring (currently built with
error.floor_char_boundary(100)) only in an internal field or attach it via a
detail mechanism such as BannerHandle::with_details(e) if available, and ensure
cs.set_spv_status(SpvStatus::Error) remains unchanged. Ensure references to
SyncEvent::ManagerError, SpvStatus::Error, self.last_error, and
cs.set_spv_last_error are updated accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 2afdea22-d3ae-4d7c-b114-1422b91bfc45
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (2)
- `Cargo.toml`
- `src/spv/manager.rs`
Pull request overview
This PR restores SPV sync UI feedback by wiring dash-spv’s new EventHandler-based callbacks into DET’s SpvManager, and updates dash-sdk to a newer platform v3.1-dev revision.
Changes:
- Introduce `SpvEventHandler` and pass it into `DashSpvClient::new()` so progress/sync/network events update shared SPV state and `ConnectionStatus`.
- Forward sync lifecycle + finality events (InstantLock/ChainLock) and trigger reconciliation signals from the new handler.
- Bump `dash-sdk` git revision (and corresponding lockfile updates).
Reviewed changes
Copilot reviewed 2 out of 3 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| `src/spv/manager.rs` | Adds `SpvEventHandler` and updates SPV client type/build to use trait-based event dispatch; adjusts channels to be shareable with the handler. |
| `Cargo.toml` | Updates dash-sdk git revision. |
| `Cargo.lock` | Locks dependency graph to the new dash-sdk/dashcore revisions. |
Comments suppressed due to low confidence (2)
src/spv/manager.rs:1291
`spawn_sync_event_handler` appears to be unused (no call sites found) and duplicates logic now handled by `SpvEventHandler::on_sync_event`. Consider removing it (and any related subscription setup) to avoid drift and confusion about which path is authoritative.

```rust
#[allow(dead_code)]
fn spawn_sync_event_handler(&self, mut sync_rx: tokio::sync::broadcast::Receiver<SyncEvent>) {
    let reconcile_tx = self.reconcile_tx.lock().ok().and_then(|g| g.clone());
    let finality_tx = self.finality_tx.lock().ok().and_then(|g| g.clone());
    let status = Arc::clone(&self.status);
```
src/spv/manager.rs:306
AssetLockFinalityEventvariants are now constructed inSpvEventHandler::on_sync_event, so the#[allow(dead_code)]and comment about variants being used "when EventHandler is fully implemented" are out of date. Please remove/update to avoid suppressing legitimate dead-code warnings in the future.
/// Events forwarded from SPV to AppContext for asset lock proof construction.
#[allow(dead_code)] // Variants will be used when EventHandler is fully implemented
pub(crate) enum AssetLockFinalityEvent {
- Clamp connected_count before u16 cast to prevent overflow (threads 1, 2)
- Remove dead spawn_sync_event_handler and spawn_network_event_handler, now replaced by SpvEventHandler trait methods (thread 4)
- Remove stale #[allow(dead_code)] from failed_manager_name() and AssetLockFinalityEvent — both are now actively used (thread 5)
- Log reconcile_tx send errors instead of silently discarding (thread 6)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
lklimek
left a comment
Review comments from code review.
IMO ready to merge
thepastaclaw
left a comment
Code Review
Well-executed migration from dead no-op handlers to a properly wired SpvEventHandler. One real bug: on_progress unconditionally clears ConnectionStatus error after a ManagerError, creating divergent internal/UI state. The rest are low-risk consistency and hygiene items.
Reviewed commit: bed63bd
🔴 1 blocking | 🟡 2 suggestion(s) | 💬 2 nitpick(s)
1 additional finding
🟡 suggestion: spawn_wallet_event_handler captures reconcile_tx eagerly; diverges from SpvEventHandler's Mutex pattern
src/spv/manager.rs (lines 1290-1294)
Verified. spawn_wallet_event_handler (line 1294) clones reconcile_tx once at spawn time. SpvEventHandler::on_sync_event (line 284) acquires the Mutex on every event. The current call ordering is safe — wallet_lifecycle.rs:71-72 explicitly registers the channel before start() — but if register_reconcile_channel() were ever called again, the wallet handler would keep sending on a stale (disconnected) sender while SpvEventHandler picks up the new one. Two agents flagged this independently (claude-general, claude-rust-quality).
💡 Suggested change
Align `spawn_wallet_event_handler` to also read from the Mutex each iteration, matching SpvEventHandler. Or document the single-registration invariant on `register_reconcile_channel` more prominently.
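The difference between the two patterns can be illustrated with a small sketch (assumed types only; `reconcile_tx` here is a plain std channel rather than the tokio channel in the codebase). Reading the sender from the Mutex on every event means a re-registered channel is picked up immediately, while an eagerly captured clone would keep sending into the old, disconnected receiver.

```rust
use std::sync::{mpsc, Arc, Mutex};

struct Handler {
    // Shared slot: a register_reconcile_channel()-style call
    // replaces the sender stored here.
    reconcile_tx: Arc<Mutex<Option<mpsc::Sender<u32>>>>,
}

impl Handler {
    // Mutex-per-event: re-read the current sender on every event, so
    // a later re-registration is picked up (the SpvEventHandler pattern).
    fn on_event(&self, v: u32) {
        if let Some(tx) = self.reconcile_tx.lock().ok().and_then(|g| g.clone()) {
            let _ = tx.send(v);
        }
    }
}

fn main() {
    let slot = Arc::new(Mutex::new(None));
    let handler = Handler { reconcile_tx: Arc::clone(&slot) };

    let (tx1, rx1) = mpsc::channel();
    *slot.lock().unwrap() = Some(tx1);
    handler.on_event(1);

    // Re-register: an eagerly captured clone of tx1 would still send
    // into the now-stale rx1; the per-event Mutex read reaches rx2.
    let (tx2, rx2) = mpsc::channel();
    *slot.lock().unwrap() = Some(tx2);
    handler.on_event(2);

    assert_eq!(rx1.try_recv(), Ok(1));
    assert_eq!(rx2.try_recv(), Ok(2));
    println!("ok");
}
```

The per-event lock costs an uncontended Mutex acquisition per event, which is cheap relative to losing reconcile signals after a re-registration.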
🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/spv/manager.rs`:
- [BLOCKING] lines 196-206: on_progress clears ConnectionStatus error message after ManagerError, creating divergent state
**Verified against source.** After `on_sync_event(ManagerError{..})` sets `self.status = Error` and `cs.set_spv_last_error(Some(msg))`, the very next non-error `on_progress` call hits the `else if !is_error` branch at line 202 and calls `cs.set_spv_last_error(None)`. Meanwhile `new_status` is `None` because line 170-172 refuses to overwrite `SpvStatus::Error`, so `cs.set_spv_status` is never called. Result: ConnectionStatus shows Error with no error message; internal `self.last_error` still holds the message. Two independent agents flagged related symptoms (codex-rust-quality: blocking, claude-rust-quality: suggestion).
- [SUGGESTION] lines 1290-1294: spawn_wallet_event_handler captures reconcile_tx eagerly; diverges from SpvEventHandler's Mutex pattern
**Verified.** `spawn_wallet_event_handler` (line 1294) clones `reconcile_tx` once at spawn time. `SpvEventHandler::on_sync_event` (line 284) acquires the Mutex on every event. The current call ordering is safe — `wallet_lifecycle.rs:71-72` explicitly registers the channel before `start()` — but if `register_reconcile_channel()` were ever called again, the wallet handler would keep sending on a stale (disconnected) sender while SpvEventHandler picks up the new one. Two agents flagged this independently (claude-general, claude-rust-quality).
- [SUGGESTION] lines 149-305: No unit tests for SpvEventHandler state machine transitions
**Verified.** Existing tests (lines 446-468) only verify that channel registration returns empty channels. `SpvEventHandler`'s three methods (`on_progress`, `on_sync_event`, `on_network_event`) drive all SPV state transitions — pure state-machine logic on Arc-wrapped fields, readily testable without a live SPV client. Three agents flagged this (claude-general, codex-general, codex-rust-quality).
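As a sketch of what such a unit test could exercise, the error-status guard from the blocking finding is directly testable without a live SPV client. The types below are hypothetical stand-ins mirroring the finding's description; the real `SpvStatus` and field layout will differ.

```rust
use std::sync::{Arc, RwLock};

#[derive(Clone, Copy, Debug, PartialEq)]
enum SpvStatus {
    Syncing,
    Error,
}

struct Handler {
    status: Arc<RwLock<SpvStatus>>,
    last_error: Arc<RwLock<Option<String>>>,
}

impl Handler {
    fn on_progress(&self, is_error: bool, phase: &str) {
        let mut st = self.status.write().unwrap();
        if is_error {
            *st = SpvStatus::Error;
            *self.last_error.write().unwrap() = Some(format!("Sync failed during {phase}."));
        } else if *st != SpvStatus::Error {
            // Only clear the error message when not already in Error, so a
            // routine progress tick cannot erase a ManagerError report.
            *self.last_error.write().unwrap() = None;
            *st = SpvStatus::Syncing;
        }
    }
}

fn main() {
    let h = Handler {
        status: Arc::new(RwLock::new(SpvStatus::Syncing)),
        last_error: Arc::new(RwLock::new(None)),
    };
    h.on_progress(true, "headers");
    h.on_progress(false, "headers"); // must NOT clear the error
    assert_eq!(*h.status.read().unwrap(), SpvStatus::Error);
    assert!(h.last_error.read().unwrap().is_some());
    println!("error state preserved");
}
```

Because the handler's state lives behind Arc-wrapped locks, each transition can be asserted synchronously with no tokio runtime or network peers involved.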
…ress

# Conflicts:
#	Cargo.lock
#	Cargo.toml
#	src/spv/manager.rs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fix error message clearing: don't clear ConnectionStatus error when SpvStatus is already Error (prevents on_progress from erasing ManagerError messages)
- Silence reconcile try_send warning — channel backpressure during heavy sync is expected and debounced downstream
- Remove tombstone comments for deleted methods (git history suffices)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
thepastaclaw
left a comment
Code Review
A clean incremental change addressing prior review feedback. Removes dead code (old spawn_sync_event_handler and spawn_network_event_handler), removes #[allow(dead_code)] annotations, adds safe u16::MAX saturation for the peer count cast, and improves reconcile try_send warning logging. The tombstone comment cleanup (prior nitpick) was partially addressed — two tombstone comments remain, but the dead methods they referenced are now gone. The blocking finding about on_progress clearing error state was not addressed in this push.
Reviewed commit: d153224
🟡 1 suggestion(s)
🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/spv/manager.rs`:
- [SUGGESTION] lines 196-204: on_progress error-clearing regression not yet addressed (prior blocking finding)
The prior review flagged that `on_progress` unconditionally clears `ConnectionStatus.spv_last_error(None)` via the `else if !is_error` branch (line 202), even after `on_sync_event` has set `SpvStatus::Error` + error message. This creates divergent state: ConnectionStatus shows Error with no message while `self.last_error` still holds it.
This push doesn't modify the `on_progress` method. The suggested fix from the prior review: guard the `set_spv_last_error(None)` call with a check that the current status is not `SpvStatus::Error`:
```rust
else if !is_error && !matches!(self.status.read().map(|g| *g), Ok(SpvStatus::Error)) {
cs.set_spv_last_error(None);
}
```
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Summary
- The dash-spv SDK now dispatches events via the `EventHandler` trait, but DET was passing `Arc::new(())` (no-op). Implemented `SpvEventHandler` with `on_progress`, `on_sync_event`, and `on_network_event`.
- Updated the dash-sdk revision (`a1019039`): build/FFI changes only, no API changes.
Root Cause

After the `rust-dashcore` SDK update, sync progress, network events, and sync lifecycle events are dispatched via the `EventHandler` trait on `DashSpvClient`. DET was passing `Arc::new(())`, the no-op default implementation, so none of the events reached the app. Three existing handler methods (`spawn_progress_watcher`, `spawn_sync_event_handler`, `spawn_network_event_handler`) were dead code, never subscribed to the new trait-based system.

Test plan
- `cargo clippy --all-features --all-targets -- -D warnings` clean
- `cargo build` succeeds

🤖 Co-authored by Claudius the Magnificent AI Agent