fix(spv): wait for wallet bootstrap before filter scan begins #833
lklimek wants to merge 2 commits into
Conversation
## Problem

On app start, SPV initialisation and wallet-from-DB bootstrap run as two parallel async flows. SPV's `run_spv_loop` has a built-in wait that blocks the compact-block-filter scan until `wm.wallet_count() >= expected_wallet_count`. The signal passed to that wait — `expected_wallet_count` — was computed from the in-memory wallet set, which at the moment `start_spv()` runs is empty because `bootstrap_loaded_wallets` hasn't decrypted any seeds yet. The wait loop interpreted `expected_wallet_count == 0` as "no wallets expected, don't wait", fell through to `build_client(has_wallets=false)` with `start_height = u32::MAX`, and scanned the entire filter range against an empty monitored set. Wallet transactions on-chain were never matched. Balance displayed 0 until the user manually cleared SPV data and re-synced.

## Reproduction

1. Launch DET with one or more wallets already in the local database.
2. Let SPV sync on startup without touching anything.
3. Expected: wallet balances show correctly on the main screen.
4. Actual: balances display as 0, transactions absent from the Wallet screen.

## Fix

When the in-memory count is 0, consult the database via a new `Database::wallet_count_for_network()` helper. If the DB holds wallets for the active network, report 1 so the wait loop blocks until `bootstrap_loaded_wallets` loads the first wallet into memory. The existing `wallet_count() >= expected` condition then naturally releases as soon as Flow B's side effect lands.

The choice of `.min(1)` is deliberate: on fresh installs with no DB rows, the fallback correctly reports 0 and the wait loop continues to skip (no wallets to wait for); on established installs with N wallets the fallback reports 1 rather than N, so one successful wallet unlock releases the wait without being blocked by wallets that may fail to decrypt later.

Part of the zero-balance fix chain tracked in #829.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
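The fallback described in the Fix section can be sketched as a pure function. This is a hypothetical extraction for illustration — the real logic lives inline in `AppContext::start_spv()` and reads `self.db`, so the function name and parameters here are assumptions:

```rust
// Hypothetical pure sketch of the start_spv() fallback described above.
// `db_count_for_network` stands in for Database::wallet_count_for_network().
fn expected_wallets(in_memory: usize, db_count_for_network: usize) -> usize {
    if in_memory == 0 {
        // Cold start: clamp to at most 1 so a fresh install (DB count 0)
        // still skips the wait, while any populated DB blocks the SPV
        // wait loop until the first wallet finishes bootstrapping.
        db_count_for_network.min(1)
    } else {
        // Wallets already open in memory: trust the live count.
        in_memory
    }
}
```

Extracting the branch this way also makes the three cases (fresh install, cold start with DB wallets, wallets already open) directly assertable.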
Pull request overview
Adjusts SPV startup signaling so the SPV filter scan waits for wallet bootstrap to populate monitored addresses, avoiding “zero balance until clear+resync” on cold start with existing wallets.
Changes:
- Add a DB-backed fallback for `expected_wallets` in `AppContext::start_spv()` when the in-memory open-wallet count is 0.
- Introduce `Database::wallet_count_for_network()` to count wallets per network.
- Add a unit test validating the per-network wallet count helper.
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| `src/context/wallet_lifecycle.rs` | Adds DB fallback logic for `expected_wallets` to prevent SPV starting its scan before wallets are loaded. |
| `src/database/wallet.rs` | Adds `wallet_count_for_network()` plus a focused unit test (and test insert helper) for per-network counting. |
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
✅ Review complete (commit 8c3c9aa)
thepastaclaw
left a comment
Code Review
Small SPV startup race fix has a load-bearing logic bug: the `.min(1)` clamp on the DB-based wallet count both undercounts multi-wallet setups and includes password-protected wallets that `bootstrap_loaded_wallets` cannot auto-unlock. Secondary concerns are that the DB read now aborts `start_spv` on transient errors instead of falling back to 0, and that the signaling fix itself is only covered by manual testing; the new DB helper test uses a hand-written insert helper that may drift from the real write path.
Reviewed commit: 8c3c9aa
🔴 1 blocking | 🟡 2 suggestion(s) | 💬 1 nitpick(s)
```rust
let expected_wallets = if expected_wallets == 0 {
    self.db.wallet_count_for_network(&self.network)?.min(1)
} else {
    expected_wallets
};
```
🔴 Blocking: .min(1) fallback undercounts wallets and counts password-protected ones that cannot auto-unlock
`run_spv_loop` waits until `wm.wallet_count() >= expected_wallet_count` before building the client and starting the initial filter scan (src/spv/manager.rs:1045-1075). The DB fallback here has two problems that both cause the race this PR is meant to fix to persist:

- **Multi-wallet undercount.** Clamping the DB count with `.min(1)` means that when the database contains multiple wallets for this network, SPV only waits for the first one to load. As soon as wallet #1 is imported into the `WalletManager`, `wallet_count() >= 1` passes and the compact-filter scan starts, but wallets #2..N are still being loaded asynchronously by `bootstrap_loaded_wallets`, so their addresses are absent from `monitored_addresses()` when the scan begins. Their historical transactions are skipped for the session, reproducing the original bug for every wallet past the first.
- **Password-protected wallets are counted but will never load.** The in-memory count above filters by `is_open() && seed_bytes().is_ok()`, meaning only wallets with decrypted seeds. The DB fallback is unfiltered, so password-protected wallets (still `WalletSeed::Closed` after bootstrap) raise the expected count, but `handle_wallet_unlocked` → `wallet_seed_snapshot` returns `None` for them, so `queue_spv_wallet_load` is never called. `wallet_count()` stays at 0 and `run_spv_loop` burns its full 30-second deadline on every startup for users whose wallets are all password-protected.
Both issues share the same root cause: the fallback should count the full set of wallets that can actually auto-load into SPV, not collapse everything to 1 and not include locked wallets that can never satisfy the wait condition.
💡 Suggested change

```diff
+// Fallback: if no open wallets in memory yet, expect as many wallets
+// as the database holds for this network that can auto-unlock on
+// bootstrap. Password-protected wallets are excluded because
+// bootstrap_loaded_wallets cannot unlock them, so waiting for them
+// would burn run_spv_loop's 30s deadline without progress.
 let expected_wallets = if expected_wallets == 0 {
-    self.db.wallet_count_for_network(&self.network)?.min(1)
+    self.db
+        .auto_openable_wallet_count_for_network(&self.network)?
 } else {
     expected_wallets
 };
```
source: ['claude', 'codex']
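The wait this finding refers to can be illustrated with a synchronous sketch. All names, the polling interval, and the boolean return are assumptions made for this snippet — the real `run_spv_loop` is async and uses a 30 s deadline:

```rust
use std::time::{Duration, Instant};

// Illustrative sketch (not project code) of the wait pattern described
// above: poll a live wallet count until it reaches the expected count
// or a deadline expires.
fn wait_for_wallets(
    mut wallet_count: impl FnMut() -> usize,
    expected: usize,
    deadline: Duration,
) -> bool {
    let start = Instant::now();
    loop {
        if wallet_count() >= expected {
            // Expected set is loaded: safe to start the filter scan.
            return true;
        }
        if start.elapsed() >= deadline {
            // This is the path password-protected wallets hit: `expected`
            // stays above the live count, so the full deadline is burned.
            return false;
        }
        std::thread::sleep(Duration::from_millis(10));
    }
}
```

With `expected = 0` the function returns immediately, which is why preserving 0 for fresh installs matters; with an expected count that can never be satisfied, every startup pays the full deadline.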
```rust
let expected_wallets = if expected_wallets == 0 {
    self.db.wallet_count_for_network(&self.network)?.min(1)
} else {
    expected_wallets
};
```
🟡 Suggestion: DB error now aborts start_spv instead of falling back to 0
This commit replaced `.unwrap_or(0).min(1)` with `?.min(1)`, converting a previously silent fallback into a hard failure. Any transient `rusqlite::Error` now propagates as `TaskError::Database` and prevents SPV from starting at all, even though the wait loop could safely proceed with `expected_wallets = 0`. The in-memory wallet read immediately above still uses `.unwrap_or(0)` for lock poisoning, so the two paths treat "couldn't determine count" asymmetrically. This is low-risk in practice because the DB is already open at this point, but it is still a behavior change worth making explicit or softening with a warning + fallback.
💡 Suggested change

```diff
 let expected_wallets = if expected_wallets == 0 {
-    self.db.wallet_count_for_network(&self.network)?.min(1)
+    self.db
+        .wallet_count_for_network(&self.network)
+        .inspect_err(|e| tracing::warn!(?e, "wallet_count_for_network failed, assuming 0"))
+        .unwrap_or(0)
+        .min(1)
 } else {
     expected_wallets
 };
```
source: ['claude']
```rust
fn insert_test_wallet(db: &Database, seed_hash: &[u8; 32], network: Network) {
    let network_str = network.to_string();
    let conn = db.conn.lock().unwrap();
    conn.execute(
        "INSERT INTO wallet (seed_hash, encrypted_seed, salt, nonce, master_ecdsa_bip44_account_0_epk, uses_password, network)
         VALUES (?, ?, ?, ?, ?, 0, ?)",
        rusqlite::params![
            seed_hash.as_slice(),
            vec![0u8; 64],
            vec![0u8; 16],
            vec![0u8; 12],
            vec![0u8; 78],
            network_str,
        ],
    )
    .expect("Failed to insert test wallet");
}

#[test]
fn test_wallet_count_for_network_counts_per_network() {
    let db = create_test_database().expect("create db");
    let mut seed_a = create_test_seed_hash();
    let mut seed_b = create_test_seed_hash();
    seed_a[0] = 0xA0;
    seed_b[0] = 0xB0;

    insert_test_wallet(&db, &seed_a, Network::Testnet);
    insert_test_wallet(&db, &seed_b, Network::Testnet);

    assert_eq!(
        db.wallet_count_for_network(&Network::Testnet)
            .expect("count"),
        2
    );
    assert_eq!(
        db.wallet_count_for_network(&Network::Mainnet)
            .expect("count"),
        0
    );
}
```
🟡 Suggestion: Signaling fix has no automated coverage — only the DB helper is tested
`test_wallet_count_for_network_counts_per_network` verifies that the new helper returns the right per-network count, but the actual bug fixed here — that `start_spv` passes a non-zero `expected_wallet_count` when DB wallets exist but memory is still empty — is only exercised by the manual test plan. The original race was a subtle TOCTOU between async flows and the fallback is easy to regress. A unit test that either calls `start_spv()` in a controlled state, or extracts the `expected_wallets` computation into a pure helper and tests its branches, would give cheap protection.
source: ['claude']
💬 Nitpick: `insert_test_wallet` hand-writes INSERT SQL instead of going through `store_wallet`
The new test helper covers only part of the wallet table schema and relies on the remaining columns staying NULLable. That creates a second source of truth for the write path: a future schema change can break this helper without exercising the real insertion logic. The surrounding DB tests already use real `Wallet` fixtures with `store_wallet` / `store_wallet_with_addresses`, which both avoids duplicated SQL and exercises a realistic path.
source: ['claude']
thepastaclaw
left a comment
Code Review
Targeted, well-scoped startup race fix. Two real issues with the new fallback: (1) the DB count includes password-locked wallets that bootstrap can't load, producing a guaranteed 30s startup stall for password-only profiles — a clear UX regression; (2) the `.min(1)` clamp leaves the multi-wallet startup case under-served and the inline comment misreads the operator. Both converge on a cleaner fix: filter the DB count by `uses_password = 0` and drop the clamp, so `expected_wallets` represents exactly the set that bootstrap will actually load. Test coverage on the new fallback branch in `start_spv()` is also missing.
Reviewed commit: 8c3c9aa
🔴 1 blocking | 🟡 1 suggestion(s) | 💬 1 nitpick(s)
```rust
/// Count how many wallets are stored in the database for a given network.
pub fn wallet_count_for_network(&self, network: &Network) -> rusqlite::Result<usize> {
    let conn = self.conn.lock().unwrap();
    conn.query_row(
        "SELECT COUNT(*) FROM wallet WHERE network = ?",
        [network.to_string()],
        |row| row.get(0),
    )
}
```
🔴 Blocking: DB fallback counts password-locked wallets and forces a 30s startup stall on password-only profiles
`wallet_count_for_network` counts every row in `wallet`, but `get_wallets()` reconstructs password-protected rows as `WalletSeed::Closed` (src/database/wallet.rs:549-553), and `wallet_seed_snapshot()` only yields a load when `is_open()` is true (src/context/wallet_lifecycle.rs:292-305). For a profile whose only wallets are password-protected, the new fallback at src/context/wallet_lifecycle.rs:76-78 sets `expected_wallets = 1` while `SpvManager::wallet_count()` stays at 0 because `bootstrap_loaded_wallets` cannot decrypt them. `run_spv_loop` then sleeps for the full 30s deadline at src/spv/manager.rs:1045-1069 before proceeding. Pre-PR this case proceeded immediately (with the empty-scan bug); post-PR it adds a deterministic 30s startup delay on top of still requiring a separate unlock-triggered SPV reload. The fix is to count only wallets bootstrap can actually load — filter the SQL by `uses_password = 0`. That also lets us drop the `.min(1)` clamp safely, since locked wallets are no longer in the total.
💡 Suggested change

```diff
-/// Count how many wallets are stored in the database for a given network.
+/// Count how many wallets are stored in the database for a given network
+/// that will be auto-loaded into SPV at startup (i.e. not password-protected).
 pub fn wallet_count_for_network(&self, network: &Network) -> rusqlite::Result<usize> {
     let conn = self.conn.lock().unwrap();
     conn.query_row(
-        "SELECT COUNT(*) FROM wallet WHERE network = ?",
+        "SELECT COUNT(*) FROM wallet WHERE network = ? AND uses_password = 0",
         [network.to_string()],
         |row| row.get(0),
     )
 }
```
source: ['claude', 'codex']
🟡 Suggestion: .min(1) clamps the count at most 1 — comment says "at least 1" and the clamp masks multi-wallet startup
`expected_wallets = self.db.wallet_count_for_network(...)?.min(1)` returns 0 when the DB count is 0 and 1 for any positive count — i.e. clamps at most 1, not at least 1. Two issues: (1) the inline comment says "expect at least 1," which a future reader (or `clippy::redundant_min_max` autofix) may "correct" to `.max(1)`, silently regressing the cold-start path (DB empty → `expected = 1` → 30s wait, then empty scan); (2) the clamp also limits the multi-wallet startup case: with N>1 not-password-protected wallets in DB and 0 currently in memory, the SPV wait loop releases as soon as the first wallet finishes importing (src/spv/manager.rs:1051), even though wallets 2..N may still be deriving addresses, so they may miss the initial filter scan. If the DB query is filtered to `uses_password = 0` (see other finding), the count then represents exactly the loadable set and the `.min(1)` clamp can be removed entirely — `expected_wallets` becomes the precise number of wallets bootstrap will populate. If keeping the clamp, expand the comment to capture both (a) why we clamp (avoid stalling on a wallet that fails to decrypt) and (b) why 0 is preserved when the DB is empty.
💡 Suggested change

```diff
-// Fallback: if no open wallets in memory yet but the database has
-// wallets for this network, expect at least 1.
-// bootstrap_loaded_wallets will load them shortly and the SPV wait
-// loop will block until load completes.
+// Fallback: when no wallets are open in memory yet but the database
+// has wallets for this network, expect bootstrap to load all of them.
+// wallet_count_for_network already excludes password-protected rows
+// (which bootstrap cannot decrypt), so the count matches exactly what
+// run_spv_loop will see in monitored_addresses() once loads complete.
 let expected_wallets = if expected_wallets == 0 {
-    self.db.wallet_count_for_network(&self.network)?.min(1)
+    self.db.wallet_count_for_network(&self.network)?
 } else {
     expected_wallets
 };
```
source: ['claude', 'codex']
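The operator semantics this finding warns about are easy to check in isolation. The snippet below is a standalone illustration, not project code — the helper names are made up purely to contrast the two clamps:

```rust
// usize::min clamps to *at most* the argument; usize::max to *at least*.
// A .min(1) → .max(1) "autofix" would change the empty-DB result from 0 to 1.
fn min_clamp(db_count: usize) -> usize {
    db_count.min(1) // 0 stays 0, any positive count becomes 1
}

fn max_clamp(db_count: usize) -> usize {
    db_count.max(1) // the dangerous flip: an empty DB now reports 1
}
```

Under `max_clamp`, a fresh install would report `expected_wallets = 1`, stall the full wait deadline, and then run an empty scan — exactly the cold-start regression the comment describes.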
```rust
fn insert_test_wallet(db: &Database, seed_hash: &[u8; 32], network: Network) {
    let network_str = network.to_string();
    let conn = db.conn.lock().unwrap();
    conn.execute(
        "INSERT INTO wallet (seed_hash, encrypted_seed, salt, nonce, master_ecdsa_bip44_account_0_epk, uses_password, network)
         VALUES (?, ?, ?, ?, ?, 0, ?)",
        rusqlite::params![
            seed_hash.as_slice(),
            vec![0u8; 64],
            vec![0u8; 16],
            vec![0u8; 12],
            vec![0u8; 78],
            network_str,
        ],
    )
    .expect("Failed to insert test wallet");
}

#[test]
fn test_wallet_count_for_network_counts_per_network() {
    let db = create_test_database().expect("create db");
    let mut seed_a = create_test_seed_hash();
    let mut seed_b = create_test_seed_hash();
    seed_a[0] = 0xA0;
    seed_b[0] = 0xB0;

    insert_test_wallet(&db, &seed_a, Network::Testnet);
    insert_test_wallet(&db, &seed_b, Network::Testnet);

    assert_eq!(
        db.wallet_count_for_network(&Network::Testnet)
            .expect("count"),
        2
    );
    assert_eq!(
        db.wallet_count_for_network(&Network::Mainnet)
            .expect("count"),
        0
    );
}
```
💬 Nitpick: No test exercises the `start_spv()` fallback branch

`test_wallet_count_for_network_counts_per_network` covers the new helper directly, which is appropriate at the DB layer. However, the actual race-prevention logic (the `if expected_wallets == 0 { ... }` block in `start_spv()`) has no automated coverage and is verified only via the manual test plan. Given the bug class (a TOCTOU between two parallel async flows during `AppContext` bring-up), a regression here is exactly what tests should catch. Either extract the small computation into a pure helper that takes `(in_memory_count, db_count) -> usize` and assert `(0, 0) → 0`, `(0, N) → 1`, `(M, _) → M`, or add a kittest scenario that pre-populates the DB and asserts SPV starts with a non-zero `expected_wallets`. Optional, but cheap insurance against a future `.min`/`.max` flip.
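The suggested extraction could look roughly like this; the helper name, signature, and the `.min(1)` fallback mirror the PR's description, but the exact shape is illustrative:

```rust
/// Pure form of the start_spv() fallback described in the PR:
/// prefer the in-memory count; when it is 0, consult the DB count
/// clamped to at most 1.
fn expected_wallet_count(in_memory: usize, db_count: usize) -> usize {
    if in_memory == 0 {
        db_count.min(1)
    } else {
        in_memory
    }
}

fn main() {
    assert_eq!(expected_wallet_count(0, 0), 0); // fresh install: nothing to wait for
    assert_eq!(expected_wallet_count(0, 3), 1); // cold start: block until first wallet loads
    assert_eq!(expected_wallet_count(2, 3), 2); // wallets already in memory win
}
```

Because the helper is pure, the three assertions above pin down the cold-start, fresh-install, and warm-start paths without any DB or async machinery.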
## Summary

On app startup, imported wallets displayed a zero balance until the user manually cleared SPV data and re-synced. Root cause is an init-order + signaling race between two parallel async flows during `AppContext` bring-up. SPV's filter-scan wait loop was fed a stale reading of `expected_wallet_count`, interpreted it as "no wallets", skipped the wait, and ran the full filter scan against an empty monitored set. Fix: consult the database when the in-memory wallet count is zero.

## Reproduction
Expected: SPV waits until the DB wallets are loaded into the in-memory set before it begins scanning compact block filters. Wallet transactions present on-chain are matched during the scan. Balances show correctly on the main screen.
Actual: SPV begins filter scanning with zero monitored addresses. Nothing matches. Wallet transactions are permanently skipped for the session. Balances display as 0 until the user manually clears SPV data and relaunches.
## Root cause: init-order + signaling race
Two independent async flows fire during `AppContext` construction:

- Flow A (`start_spv`): count wallets → `spv_manager.start(expected)` → `run_spv_loop` → filter scan.
- Flow B (`bootstrap_loaded_wallets`): read DB → unlock seeds → register in-memory → load into SPV's monitored set.

Flow A needs Flow B's result (monitored addresses) before scanning. The code already has the right synchronisation primitive: `run_spv_loop` contains a wait loop that blocks until `wm.wallet_count() >= expected_wallet_count`. The bug is in the signal passed to that primitive.

### The stale signal
`start_spv()` computed `expected_wallet_count` with two filters: `is_open()` (seed decrypted) and `seed_bytes().is_ok()` (bytes accessible). At the moment `start_spv()` runs, Flow B hasn't decrypted any seeds yet; those decryptions happen later on a separate task. So `expected_wallets = 0`.

When `expected_wallets = 0`, `run_spv_loop` skips its wait loop entirely. Downstream, `build_client(has_wallets=false)` sets `start_height = u32::MAX` as its "no wallets, use latest checkpoint" sentinel. The filter scan runs against an empty address set. Zero matches. Every wallet transaction on-chain is invisible until the next full rescan.

### Classic TOCTOU
The check was done too early to be meaningful. The wait was correctly skipped, but for the wrong reason.
## Fix shape
Signal correction, not reorder. Keep the parallelism, but make `expected_wallets` reflect the eventual count rather than the current count.

Two key ideas:

1. Consult the database when the in-memory count is 0. The new helper `Database::wallet_count_for_network()` runs a parameterised `SELECT COUNT(*) FROM wallet WHERE network = ?` and returns the count of wallets that will be loaded shortly, even if none are loaded right now.
2. Clamp to 1, not the full count. SPV's wait condition is `wallet_count() >= expected_wallets`. As soon as `bootstrap_loaded_wallets` loads the first wallet, the wait unblocks. Reporting 1 (rather than N) means one successful unlock releases the wait without being blocked by wallets that might later fail to decrypt (corrupt seed, wrong password, etc.).

## Why signal-correction over reorder
## Files changed

- `src/context/wallet_lifecycle.rs`: `expected_wallets` calculation in `start_spv()`
- `src/database/wallet.rs`: `Database::wallet_count_for_network()` helper, `insert_test_wallet` test helper, and the `test_wallet_count_for_network_counts_per_network` unit test

Total: +63 lines, 0 removed. Zero changes to the SPV manager, the wallet lifecycle outside `start_spv`, or any other subsystem.

## Test plan
### Automated

- `cargo build --all-features`: clean
- `cargo clippy --all-features --all-targets -- -D warnings`: clean
- `cargo +nightly fmt --all -- --check`: clean
- `cargo test --all-features --workspace --lib`: 453 tests pass, including the new `test_wallet_count_for_network_counts_per_network`, which verifies the DB helper counts per-network correctly.

### Manual on testnet
- Expected log: `SPV: all wallets loaded with monitored addresses, proceeding expected=1 loaded=N`, where N is the actual wallet count.
- `rm -rf ~/.config/dash-evo-tool/data.db ~/.config/dash-evo-tool/spv/`, then launch DET with no wallets. Expected: SPV starts from latest checkpoint without hanging on the wait loop (since `expected_wallets = 0`).

## Scope
This PR carries only the Flow-A / Flow-B startup signaling fix (bug 1). Sibling PR #830 carries the atomic BIP44 receive-pool extension fix (bug 2) and the platform pin bump to rust-dashcore 309fac8.
Earlier drafts of #830 also carried a mid-session-wallet-import SPV restart workaround (bug 3). That workaround has since been stripped from #830 because it is blocked upstream by rust-dashcore#651 constraint #4 (`SegmentCache::first_valid_offset` under-reports on reload, clamping `scan_start` back at the chain tip). Until upstream #651 lands, the sanctioned workaround for mid-session imports is Clear SPV Data. The stripped attempt is preserved on `archive/spv-wallet-address-sync-2026-04-20` for future reference.

If this PR lands first, the `51bd044e` commit currently on #830 (which contains an earlier combined bug-1+bug-2 fix) will be rebased or dropped; its bug-1 portion is identical to what this PR ships.

## Related