
fix(spv): restore sync progress bars via EventHandler #809

Merged
lklimek merged 7 commits into v1.0-dev from fix/spv-sync-progress
Apr 1, 2026

Conversation

@lklimek
Contributor

@lklimek lklimek commented Apr 1, 2026

Summary

  • Fix SPV sync progress bars not appearing — the dash-spv SDK now dispatches events via EventHandler trait, but DET was passing Arc::new(()) (no-op). Implemented SpvEventHandler with on_progress, on_sync_event, and on_network_event.
  • Restore finality event forwarding — InstantLock/ChainLock events for asset lock proofs were also broken by the same issue.
  • Update dash-sdk to latest platform v3.1-dev (a1019039) — build/FFI changes only, no API changes.

Root Cause

After the rust-dashcore SDK update, sync progress, network events, and sync lifecycle events are dispatched via the EventHandler trait on DashSpvClient. DET was passing Arc::new(()) — the no-op default implementation — so none of the events reached the app. Three existing handler methods (spawn_progress_watcher, spawn_sync_event_handler, spawn_network_event_handler) were dead code, never subscribed to the new trait-based system.

Test plan

  • cargo clippy --all-features --all-targets -- -D warnings runs clean
  • cargo build succeeds
  • Manual: connect in SPV mode, verify progress bars appear in "SPV Sync Status" section
  • Manual: verify status transitions (Syncing → Synced) work correctly
  • Manual: verify peer count updates in connection status

🤖 Co-authored by Claudius the Magnificent AI Agent

Summary by CodeRabbit

  • Chores

    • Updated dependency revision to latest version.
  • Refactor

    • Improved blockchain synchronization event handling and progress tracking for more reliable status updates and finality event propagation.

lklimek and others added 2 commits April 1, 2026 12:04
The dash-spv SDK now dispatches sync progress, network events, and sync
events via the `EventHandler` trait. DET was passing `Arc::new(())`
(no-op handler), so `sync_progress_state` was never written and the UI
showed "SPV Sync Status" with no progress bars.

- Implement `SpvEventHandler` with `on_progress`, `on_sync_event`,
  and `on_network_event` — replacing the dead `spawn_progress_watcher`,
  `spawn_sync_event_handler`, and `spawn_network_event_handler` methods
- Pass the handler to `DashSpvClient::new` instead of `Arc::new(())`
- Restore finality event forwarding (InstantLock/ChainLock) for asset
  lock proof construction
- Restore reconciliation signaling on block/sync events
- Remove dead `spawn_progress_watcher` method

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Build/FFI-only changes upstream — no API changes, zero code modifications.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@coderabbitai
Contributor

coderabbitai Bot commented Apr 1, 2026



📥 Commits

Reviewing files that changed from the base of the PR and between bed63bd and baf9065.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (4)
  • Cargo.toml
  • src/backend_task/core/mod.rs
  • src/context/wallet_lifecycle.rs
  • src/spv/manager.rs
📝 Walkthrough

Walkthrough

Updated the dash-sdk git dependency revision and refactored SPV manager's event handling by introducing a new SpvEventHandler that processes dash-spv callbacks directly into shared state and channels, eliminating the previous separate progress watcher task.

Changes

  • Dependency Update — Cargo.toml: Updated dash-sdk git dependency revision from 94cefb30d9d8ad84b1d45e0a152341a2425f920b to a10190399d7033e7e56e3f756411e9a5dab87829.
  • SPV Event Handler Refactoring — src/spv/manager.rs: Introduced SpvEventHandler struct implementing the EventHandler trait to process dash-spv push callbacks. Removed the spawn_progress_watcher task; event propagation is now handled through the new handler. Updated the SpvClient type alias to include the SpvEventHandler generic parameter. Changed the reconcile_tx and finality_tx fields to Arc<Mutex<...>> for shared access with the event handler. Simplified build_client() to pass the event handler into client construction.

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Poem

🐰 A handler so nimble, events dancing through,
No watcher to spawn, just callbacks that brew,
From spv's deep streams to channels so bright,
One Arc wraps them all, shared and tight,
Progress flows smoothly through fiber and state,
The refactor's complete—our SPV's first-rate! 🚀

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: Docstring coverage is 50.00%, below the required 80.00% threshold. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
  • Description Check — ✅ Passed: Check skipped — CodeRabbit’s high-level summary is enabled.
  • Title Check — ✅ Passed: The title 'fix(spv): restore sync progress bars via EventHandler' directly matches the main objective: implementing SpvEventHandler to restore sync progress functionality that was broken when using a no-op handler.



@lklimek lklimek requested review from Copilot and shumkov April 1, 2026 10:29
@lklimek lklimek marked this pull request as ready for review April 1, 2026 10:29
@thepastaclaw
Collaborator

thepastaclaw commented Apr 1, 2026

Review Gate

Commit: 1a6489ec

  • Debounce: 2m ago (need 30m)

  • CI checks: checks still running (2 pending)

  • CodeRabbit review: comment found


Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (3)
src/spv/manager.rs (3)

183-206: Consider simplifying the user-facing error message.

Line 190 includes technical implementation details ("reported by SPV progress channel") in the error message stored in last_error, which is exposed via SpvStatusSnapshot and likely displayed in the UI. As per coding guidelines, user-facing error messages should not include technical details.

Suggested fix

```diff
-            let msg = format!("Sync failed: {phase} (reported by SPV progress channel)");
+            let msg = format!("Sync failed during {phase}.");
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/spv/manager.rs` around lines 183 - 206, The user-facing error message set
into self.last_error currently includes internal detail "reported by SPV
progress channel"; change the message built in the
SpvManager::failed_manager_name(progress) branch so it only contains a concise
user-facing string (e.g., format!("Sync failed: {phase}")), leave
internal/detail text out of the value assigned to *err_guard / error_msg, and
continue to call cs.set_spv_last_error(Some(msg)) with that cleaned message; if
you want to preserve the original detailed context for debugging, log the full
message to the logger instead of storing it in last_error (use the same scope
where error_msg is created).

1287-1393: Consider removing superseded dead code.

spawn_sync_event_handler and spawn_network_event_handler (lines 1287-1393 and 1431-1470) are now marked #[allow(dead_code)] since their functionality has been moved to SpvEventHandler. Keeping them adds maintenance burden and risks divergence.

If they're retained intentionally as fallback, consider adding a comment explaining the rationale. Otherwise, they can be removed in a follow-up cleanup.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/spv/manager.rs` around lines 1287 - 1393, The two dead functions
spawn_sync_event_handler and spawn_network_event_handler are kept with
#[allow(dead_code)] but their logic moved to SpvEventHandler; remove these
obsolete functions to avoid maintenance drift (or if they must stay as
fallbacks, add a TODO/comment explaining why they remain). Locate the
definitions of spawn_sync_event_handler and spawn_network_event_handler in
manager.rs, delete their implementations and any callers or locks (reconcile_tx,
finality_tx, status, last_error, subtasks) that become unused, or alternatively
add a clear explanatory comment and a TODO referencing SpvEventHandler to
justify retaining them.

222-243: User-facing error includes technical details.

The error message on line 228 includes the internal manager name and raw error substring, which are then exposed to the UI via ConnectionStatus. Per coding guidelines for src/**/*.rs: never include technical details in user-facing error messages.

Consider using a generic user-friendly message and attaching the technical details via BannerHandle::with_details(e) or similar mechanism if available in this context.

Suggested approach

```diff
             let limit = error.floor_char_boundary(100);
-            let msg = format!("Sync manager {manager} failed: {}", &error[..limit]);
+            let msg = "Synchronization encountered an error. Please try again or check your connection.".to_string();
+            tracing::debug!("Sync manager {manager} failed: {}", &error[..limit]);
```

The technical detail is already logged on line 223; the user-facing message can be simplified.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/spv/manager.rs` around lines 222 - 243, The current handler for
SyncEvent::ManagerError exposes internal manager name and raw error text to the
UI via self.last_error and cs.set_spv_last_error; change it to keep the detailed
technical text only in logs and internal storage while sending a generic,
user-friendly message to the UI. Concretely, keep the tracing::error! line as-is
and set the user-facing msg to something non-technical (e.g., "Synchronization
failed, please retry" or similar) when calling cs.set_spv_last_error and when
populating self.last_error for UI consumption; store the detailed substring
(currently built with error.floor_char_boundary(100)) only in an internal field
or attach it via a detail mechanism such as BannerHandle::with_details(e) if
available, and ensure cs.set_spv_status(SpvStatus::Error) remains unchanged.
Ensure references to SyncEvent::ManagerError, SpvStatus::Error, self.last_error,
and cs.set_spv_last_error are updated accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/spv/manager.rs`:
- Around line 288-301: The on_network_event handler currently casts
connected_count to u16 directly which can silently truncate large values; update
the conversion in on_network_event (handling NetworkEvent::PeersUpdated) to
bound connected_count first (e.g., let capped = connected_count.min(u16::MAX as
usize)) and then cast capped as u16 before calling
cs.set_spv_connected_peers(...) and before storing into connected_peers,
mirroring the bounding used in spawn_network_event_handler to ensure consistent,
safe behavior.



📥 Commits

Reviewing files that changed from the base of the PR and between 92c34f5 and bed63bd.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (2)
  • Cargo.toml
  • src/spv/manager.rs

Contributor

Copilot AI left a comment


Pull request overview

This PR restores SPV sync UI feedback by wiring dash-spv’s new EventHandler-based callbacks into DET’s SpvManager, and updates dash-sdk to a newer platform v3.1-dev revision.

Changes:

  • Introduce SpvEventHandler and pass it into DashSpvClient::new() so progress/sync/network events update shared SPV state and ConnectionStatus.
  • Forward sync lifecycle + finality events (InstantLock/ChainLock) and trigger reconciliation signals from the new handler.
  • Bump dash-sdk git revision (and corresponding lockfile updates).

Reviewed changes

Copilot reviewed 2 out of 3 changed files in this pull request and generated 4 comments.

  • src/spv/manager.rs — Adds SpvEventHandler and updates the SPV client type/build to use trait-based event dispatch; adjusts channels to be shareable with the handler.
  • Cargo.toml — Updates the dash-sdk git revision.
  • Cargo.lock — Locks the dependency graph to the new dash-sdk/dashcore revisions.
Comments suppressed due to low confidence (2)

src/spv/manager.rs:1291

  • spawn_sync_event_handler appears to be unused (no call sites found) and duplicates logic now handled by SpvEventHandler::on_sync_event. Consider removing it (and any related subscription setup) to avoid drift and confusion about which path is authoritative.
    #[allow(dead_code)]
    fn spawn_sync_event_handler(&self, mut sync_rx: tokio::sync::broadcast::Receiver<SyncEvent>) {
        let reconcile_tx = self.reconcile_tx.lock().ok().and_then(|g| g.clone());
        let finality_tx = self.finality_tx.lock().ok().and_then(|g| g.clone());
        let status = Arc::clone(&self.status);

src/spv/manager.rs:306

  • AssetLockFinalityEvent variants are now constructed in SpvEventHandler::on_sync_event, so the #[allow(dead_code)] and comment about variants being used "when EventHandler is fully implemented" are out of date. Please remove/update to avoid suppressing legitimate dead-code warnings in the future.
/// Events forwarded from SPV to AppContext for asset lock proof construction.
#[allow(dead_code)] // Variants will be used when EventHandler is fully implemented
pub(crate) enum AssetLockFinalityEvent {


- Clamp connected_count before u16 cast to prevent overflow (threads 1, 2)
- Remove dead spawn_sync_event_handler and spawn_network_event_handler,
  now replaced by SpvEventHandler trait methods (thread 4)
- Remove stale #[allow(dead_code)] from failed_manager_name() and
  AssetLockFinalityEvent — both are now actively used (thread 5)
- Log reconcile_tx send errors instead of silently discarding (thread 6)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
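
The clamp in the first bullet is a saturating usize-to-u16 conversion; sketched standalone (the function name here is hypothetical):

```rust
// Saturating usize -> u16 conversion: values above u16::MAX clamp to 65535
// instead of silently wrapping, as a bare `as u16` cast would.
fn clamp_peer_count(connected_count: usize) -> u16 {
    connected_count.min(u16::MAX as usize) as u16
}

fn main() {
    assert_eq!(clamp_peer_count(8), 8);
    assert_eq!(clamp_peer_count(70_000), u16::MAX); // bare cast would wrap to 4464
}
```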
Contributor Author

@lklimek lklimek left a comment


Review comments from code review.

@lklimek
Contributor Author

lklimek commented Apr 1, 2026

IMO ready to merge

Collaborator

@thepastaclaw thepastaclaw left a comment


Code Review

Well-executed migration from dead no-op handlers to a properly wired SpvEventHandler. One real bug: on_progress unconditionally clears ConnectionStatus error after a ManagerError, creating divergent internal/UI state. The rest are low-risk consistency and hygiene items.

Reviewed commit: bed63bd

🔴 1 blocking | 🟡 2 suggestion(s) | 💬 2 nitpick(s)

1 additional finding

🟡 suggestion: spawn_wallet_event_handler captures reconcile_tx eagerly; diverges from SpvEventHandler's Mutex pattern

src/spv/manager.rs (lines 1290-1294)

Verified. spawn_wallet_event_handler (line 1294) clones reconcile_tx once at spawn time. SpvEventHandler::on_sync_event (line 284) acquires the Mutex on every event. The current call ordering is safe — wallet_lifecycle.rs:71-72 explicitly registers the channel before start() — but if register_reconcile_channel() were ever called again, the wallet handler would keep sending on a stale (disconnected) sender while SpvEventHandler picks up the new one. Two agents flagged this independently (claude-general, claude-rust-quality).

💡 Suggested change
Align `spawn_wallet_event_handler` to also read from the Mutex each iteration, matching SpvEventHandler. Or document the single-registration invariant on `register_reconcile_channel` more prominently.
🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.

In `src/spv/manager.rs`:
- [BLOCKING] lines 196-206: on_progress clears ConnectionStatus error message after ManagerError, creating divergent state
  **Verified against source.** After `on_sync_event(ManagerError{..})` sets `self.status = Error` and `cs.set_spv_last_error(Some(msg))`, the very next non-error `on_progress` call hits the `else if !is_error` branch at line 202 and calls `cs.set_spv_last_error(None)`. Meanwhile `new_status` is `None` because line 170-172 refuses to overwrite `SpvStatus::Error`, so `cs.set_spv_status` is never called. Result: ConnectionStatus shows Error with no error message; internal `self.last_error` still holds the message. Two independent agents flagged related symptoms (codex-rust-quality: blocking, claude-rust-quality: suggestion).
- [SUGGESTION] lines 1290-1294: spawn_wallet_event_handler captures reconcile_tx eagerly; diverges from SpvEventHandler's Mutex pattern
  **Verified.** `spawn_wallet_event_handler` (line 1294) clones `reconcile_tx` once at spawn time. `SpvEventHandler::on_sync_event` (line 284) acquires the Mutex on every event. The current call ordering is safe — `wallet_lifecycle.rs:71-72` explicitly registers the channel before `start()` — but if `register_reconcile_channel()` were ever called again, the wallet handler would keep sending on a stale (disconnected) sender while SpvEventHandler picks up the new one. Two agents flagged this independently (claude-general, claude-rust-quality).
- [SUGGESTION] lines 149-305: No unit tests for SpvEventHandler state machine transitions
  **Verified.** Existing tests (lines 446-468) only verify that channel registration returns empty channels. `SpvEventHandler`'s three methods (`on_progress`, `on_sync_event`, `on_network_event`) drive all SPV state transitions — pure state-machine logic on Arc-wrapped fields, readily testable without a live SPV client. Three agents flagged this (claude-general, codex-general, codex-rust-quality).

@lklimek lklimek added the human review This PR is ready for human review. label Apr 1, 2026
lklimek and others added 3 commits April 1, 2026 13:25
…ress

# Conflicts:
#	Cargo.lock
#	Cargo.toml
#	src/spv/manager.rs
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Fix error message clearing: don't clear ConnectionStatus error when
  SpvStatus is already Error (prevents on_progress from erasing
  ManagerError messages)
- Silence reconcile try_send warning — channel backpressure during
  heavy sync is expected and debounced downstream
- Remove tombstone comments for deleted methods (git history suffices)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Collaborator

@thepastaclaw thepastaclaw left a comment


Code Review

A clean incremental update addressing prior review feedback. Removes dead code (the old spawn_sync_event_handler and spawn_network_event_handler), removes #[allow(dead_code)] annotations, adds safe u16::MAX saturation for the peer count cast, and improves reconcile try_send warning logging. The tombstone comment cleanup (prior nitpick) was partially addressed — two tombstone comments remain, but the dead methods they referenced are now gone. The blocking finding about on_progress clearing error state was not addressed in this push.

Reviewed commit: d153224

🟡 1 suggestion(s)

🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.

In `src/spv/manager.rs`:
- [SUGGESTION] lines 196-204: on_progress error-clearing regression not yet addressed (prior blocking finding)
  The prior review flagged that `on_progress` unconditionally clears `ConnectionStatus.spv_last_error(None)` via the `else if !is_error` branch (line 202), even after `on_sync_event` has set `SpvStatus::Error` + error message. This creates divergent state: ConnectionStatus shows Error with no message while `self.last_error` still holds it.

This push doesn't modify the `on_progress` method. The suggested fix from the prior review: guard the `set_spv_last_error(None)` call with a check that the current status is not `SpvStatus::Error`:
```rust
else if !is_error && !matches!(self.status.read().map(|g| *g), Ok(SpvStatus::Error)) {
    cs.set_spv_last_error(None);
}
```

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@lklimek lklimek enabled auto-merge (squash) April 1, 2026 11:49
@lklimek lklimek merged commit 9acacca into v1.0-dev Apr 1, 2026
4 checks passed
@lklimek lklimek deleted the fix/spv-sync-progress branch April 1, 2026 11:56

Labels

human review This PR is ready for human review.
