
fix: handle errors instead of panicking on corrupted database blobs #561

Merged
lklimek merged 10 commits into v1.0-dev from fix/560 on Feb 17, 2026

Conversation

Contributor

@lklimek lklimek commented Feb 11, 2026

Summary

  • Replace unreachable!() in get_scheduled_votes with warning log + default for unexpected executed column values
  • Change QualifiedIdentity::from_bytes() to return Result instead of panicking via .expect()
  • Propagate deserialization errors as rusqlite::Error in all 6 callers so a corrupted database is surfaced to the user rather than silently ignored or crashing the app
  • Add CorruptedBlobError newtype in database/mod.rs to eliminate repeated FromSqlConversionFailure boilerplate

Closes #560
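The CorruptedBlobError newtype described above can be sketched as follows. This is a minimal, self-contained illustration: the real code converts into rusqlite::Error, but a stand-in DbError enum and a hypothetical load_blob helper are used here so the example compiles without the rusqlite crate.

```rust
use std::fmt;

// Newtype wrapping the decode-failure message, per the PR summary.
#[derive(Debug)]
pub struct CorruptedBlobError(pub String);

impl fmt::Display for CorruptedBlobError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "corrupted database blob: {}", self.0)
    }
}

// Stand-in for rusqlite::Error so this sketch is self-contained.
#[derive(Debug)]
pub enum DbError {
    Corrupted(String),
}

// The From impl is what removes the repeated conversion boilerplate:
// callers can write `.map_err(CorruptedBlobError)?` and the error flows
// through the normal database error path.
impl From<CorruptedBlobError> for DbError {
    fn from(e: CorruptedBlobError) -> Self {
        DbError::Corrupted(e.0)
    }
}

// Hypothetical caller: a decode failure becomes an error, not a panic.
fn load_blob(bytes: &[u8]) -> Result<Vec<u8>, DbError> {
    if bytes.is_empty() {
        return Err(CorruptedBlobError("empty blob".to_string()).into());
    }
    Ok(bytes.to_vec())
}

fn main() {
    assert!(load_blob(&[1, 2, 3]).is_ok());
    assert!(matches!(load_blob(&[]), Err(DbError::Corrupted(_))));
    println!("ok");
}
```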

Test plan

  • cargo clippy --all-features --all-targets -- -D warnings — clean
  • cargo test --all-features --workspace — all tests pass

🤖 Generated with Claude Code

Summary by CodeRabbit

  • Bug Fixes

    • Explicitly surface and log corrupted/invalid stored data during identity reconstruction instead of failing silently.
    • Avoid panics on unexpected database values; operations now log a warning and continue safely.
  • Stability Improvements

    • Improved validation when reading stored data, reducing silent errors and making failures diagnosable.
    • Greater resilience across database workflows when encountering malformed or unexpected blobs.

Contributor

coderabbitai Bot commented Feb 11, 2026

Warning

Rate limit exceeded

@lklimek has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 5 minutes and 33 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📝 Walkthrough

Walkthrough

This pull request addresses database corruption handling issues by introducing a new CorruptedBlobError type and converting panic-prone deserialization paths to graceful error handling. Identity blob parsing and scheduled vote processing now log warnings and skip/default corrupted entries instead of panicking.

Changes

Error Type Introduction (src/database/mod.rs)
  Added CorruptedBlobError(pub String) with pub(crate) visibility and a From impl converting it into rusqlite::Error, enabling corruption errors to propagate through the standard database error pathway.

Identity Deserialization (src/database/identities.rs)
  Updated five identity loading paths (get_local_qualified_identities, get_local_qualified_identities_in_wallets, get_local_voting_identities, get_local_user_identities, get_identity_by_id) to use .map_err(super::CorruptedBlobError)? when deserializing identity blobs, converting panics into catchable errors.

Graceful Corruption Handling (src/database/wallet.rs, src/database/scheduled_votes.rs)
  Modified wallet identity reconstruction to log and skip corrupted entries instead of failing; replaced unreachable!() in get_scheduled_votes with a warning log and a false default for unexpected executed column values.

API Signature Change (src/model/qualified_identity/mod.rs)
  Changed QualifiedIdentity::from_bytes() return type from Self to Result<Self, String>, replacing the internal expect() with explicit handling of the bincode deserialization result.
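The from_bytes() signature change can be sketched as below. The real method decodes with bincode; the one-byte "tag" decoder here is a stand-in so the example compiles without external crates, and the error message is illustrative.

```rust
// Decode failures now return Err(String) instead of panicking via .expect().
#[derive(Debug, PartialEq)]
struct QualifiedIdentity {
    tag: u8,
}

impl QualifiedIdentity {
    // Before: fn from_bytes(bytes: &[u8]) -> Self, panicking on bad input.
    // After: the caller decides how to handle a corrupted blob.
    fn from_bytes(bytes: &[u8]) -> Result<Self, String> {
        match bytes.first() {
            Some(&tag) => Ok(QualifiedIdentity { tag }),
            None => Err("failed to decode QualifiedIdentity: empty blob".to_string()),
        }
    }
}

fn main() {
    assert_eq!(
        QualifiedIdentity::from_bytes(&[7]),
        Ok(QualifiedIdentity { tag: 7 })
    );
    assert!(QualifiedIdentity::from_bytes(&[]).is_err());
    println!("ok");
}
```

Callers can then choose between propagating the error (single-identity lookups) and skip-and-warn (batch loaders), which is exactly the trade-off debated in the review below.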

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐰 A hop through the data so fine,
Where corruption once made the code pine,
Now errors are caught with a gentle care,
And graceful defaults float through the air,
No panics to break the database's line!

🚥 Pre-merge checks | ✅ 4 | ❌ 1
❌ Failed checks (1 warning)
Docstring Coverage ⚠️ Warning: docstring coverage is 63.64%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.
✅ Passed checks (4 passed)
  • Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Linked Issues Check ✅ Passed: all coding objectives from issue #560 are met: QualifiedIdentity::from_bytes returns Result, get_scheduled_votes replaces unreachable!() with logging and a default, and callers propagate errors via CorruptedBlobError.
  • Out of Scope Changes Check ✅ Passed: all changes are directly scoped to fixing the two panic issues in #560: error handling in from_bytes, the CorruptedBlobError conversion, get_scheduled_votes logging, and caller-side error propagation.
  • Title Check ✅ Passed: the title accurately describes the main objective, converting panics on corrupted database blobs into error handling.



@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/database/identities.rs (1)

176-211: ⚠️ Potential issue | 🟠 Major

Same skip-vs-error concern: corrupted blobs should be skipped in batch-loading functions.

get_local_qualified_identities, get_local_qualified_identities_in_wallets, get_local_voting_identities, and get_local_user_identities all use map_err(super::CorruptedBlobError)? which causes a single corrupted blob to fail the entire collection. Per issue #560: corrupted entries should be skipped with a warning so that the remaining identities still load.

♻️ Suggested pattern (apply to all Vec-returning identity loaders)
         for row in identity_iter {
             let (data, alias, wallet_index, status) = row?;
-            let mut identity = QualifiedIdentity::from_bytes(&data)
-                .map_err(super::CorruptedBlobError)?;
+            let mut identity = match QualifiedIdentity::from_bytes(&data) {
+                Ok(id) => id,
+                Err(e) => {
+                    tracing::warn!(error = %e, "Skipping corrupted identity blob");
+                    continue;
+                }
+            };
🤖 Fix all issues with AI agents
In `@src/database/wallet.rs`:
- Around line 817-818: The call to QualifiedIdentity::from_bytes(&identity_data)
currently uses the ? operator which aborts get_wallets on a single corrupted
blob; change this to handle the Result explicitly (e.g., match or if let Err(e)
= QualifiedIdentity::from_bytes(&identity_data) { log a warning including e and
some context about the wallet/key being processed using your crate's logger
(e.g., warn! or tracing::warn!), then continue the enclosing loop } else bind
the Ok(identity) and proceed), so corrupted entries are skipped with a warning
instead of failing the whole operation; keep the CorruptedBlobError type
available if you need to count or metrics-log these skips.

Contributor Author

lklimek commented Feb 11, 2026

@claude please merge v1.0-dev into this branch and ensure tests pass.


claude Bot commented Feb 11, 2026

Claude finished @lklimek's task in 40s (View job)


Merging v1.0-dev and Running Tests

  • Fetch latest v1.0-dev
  • Merge v1.0-dev into fix/560
  • Resolve any merge conflicts
  • Run cargo test suite
  • Run cargo clippy
  • Push changes if successful

Issue Encountered

I need approval to run git commands (git fetch and git merge). These commands require user permission before I can execute them.

What you can do:

  1. Approve the pending git commands when prompted
  2. Or manually run these commands locally:
    git fetch origin v1.0-dev
    git merge origin/v1.0-dev
    cargo test --all-features --workspace
    cargo clippy --all-features --all-targets -- -D warnings
    git push origin fix/560

Once you approve the git commands, I'll be able to complete the merge and run the tests.


Cherry-pick from v1.0-dev to enable @claude mentions in PRs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Contributor Author

lklimek commented Feb 12, 2026

@claude please merge v1.0-dev into this branch and ensure tests pass.


claude Bot commented Feb 12, 2026

Claude finished @lklimek's task in 34s (View job)


Merging v1.0-dev and Running Tests

  • Fetch latest v1.0-dev
  • Merge v1.0-dev into fix/560
  • Resolve any merge conflicts
  • Run cargo test suite
  • Run cargo clippy
  • Push changes if successful

Issue Encountered

I need approval to run git commands (git fetch and git merge). These commands require user permission before I can execute them.

What you can do:

  1. Approve the pending git commands when prompted
  2. Or manually run these commands locally:
    git fetch origin v1.0-dev
    git merge origin/v1.0-dev
    cargo test --all-features --workspace
    cargo clippy --all-features --all-targets -- -D warnings
    git push origin fix/560

Once you approve the git commands, I'll be able to complete the merge and run the tests.




@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
src/database/identities.rs (2)

176-211: ⚠️ Potential issue | 🟠 Major

One corrupted identity aborts loading all identities — consider skip-and-warn.

The ? operator on Line 183 causes the entire function to return an error if any single identity blob is corrupted. While this is a significant improvement over panicking, the linked issue #560 suggests skipping corrupted entries with a warning so the remaining identities still load.

Consider logging a warning and continuing instead:

Proposed skip-and-warn pattern
         for row in identity_iter {
             let (data, alias, wallet_index, status) = row?;
-            let mut identity =
-                QualifiedIdentity::from_bytes(&data).map_err(super::CorruptedBlobError)?;
+            let mut identity = match QualifiedIdentity::from_bytes(&data) {
+                Ok(id) => id,
+                Err(e) => {
+                    tracing::warn!("Skipping corrupted identity blob: {e}");
+                    continue;
+                }
+            };

The same pattern applies to the other four loading functions in this file (get_local_qualified_identities_in_wallets, get_identity_by_id, get_local_voting_identities, get_local_user_identities).


239-272: ⚠️ Potential issue | 🟡 Minor

get_local_qualified_identities_in_wallets doesn't set identity.status.

Unlike get_local_qualified_identities (Line 187), this function doesn't query or set identity.status. The SQL on Line 226 selects only data, alias, wallet_index — no status column. The identity will retain whatever default status comes from from_bytes deserialization, which may differ from the database value.

If this is intentional (e.g., the function is #[allow(dead_code)]), a comment would help. Otherwise, consider aligning with the other function.

🤖 Fix all issues with AI agents
In @.claude/hooks/session-start.sh:
- Around line 93-116: The script currently injects both TENDERDASH_DIR and
CARGO_TARGET_DIR into the global Cargo [env] table (via the sed / heredoc
logic), which can unintentionally affect other build scripts; change the
injection so only TENDERDASH_DIR is written into the [env] table (remove
CARGO_TARGET_DIR from both the sed insertion and the heredoc), or alternatively
make setting CARGO_TARGET_DIR conditional (e.g., only when a specific marker or
project context is detected) and prefer scoping such a cache path to the
Tenderdash build invocation rather than the global Cargo config; update the sed
insertion and the heredoc that reference CARGO_TARGET_DIR to reflect this
change.
🧹 Nitpick comments (2)
.github/workflows/claude.yml (1)

15-19: Consider restricting who can trigger Claude runs.

Any user who can comment on issues or PRs can trigger this workflow by mentioning @claude. This could lead to unintended API credit consumption or unauthorized code changes. Consider adding an actor/membership check, e.g.:

if: |
  github.event.sender.type == 'User' &&
  (github.event_name == 'issue_comment' && contains(github.event.comment.body, '@claude') &&
   contains(fromJSON('["OWNER", "MEMBER", "COLLABORATOR"]'), github.event.comment.author_association)) || ...

(GitHub Actions expressions have no `in` operator; membership tests use contains() over a fromJSON() array.)

Or use github.event.comment.author_association to restrict to trusted contributors.

.claude/hooks/session-start.sh (1)

18-22: Hardcoded x86_64 architecture for protoc download.

The protoc zip filename is hardcoded to linux-x86_64. If the Claude Code remote environment runs on aarch64, this will install the wrong binary (or fail).

Consider detecting the architecture:

Proposed fix
   PROTOC_VERSION="25.2"
-  PROTOC_ZIP="protoc-${PROTOC_VERSION}-linux-x86_64.zip"
+  ARCH=$(uname -m)
+  case "$ARCH" in
+    x86_64)  PROTOC_ARCH="x86_64" ;;
+    aarch64) PROTOC_ARCH="aarch_64" ;;
+    *)       echo "[session-start] Unsupported architecture: $ARCH"; exit 1 ;;
+  esac
+  PROTOC_ZIP="protoc-${PROTOC_VERSION}-linux-${PROTOC_ARCH}.zip"


@coderabbitai coderabbitai Bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@src/database/identities.rs`:
- Around line 303-304: The current use of query_map(...).collect() in
get_local_voting_identities and get_local_user_identities causes the iterator to
abort on the first Err (a corrupted blob) because
QualifiedIdentity::from_bytes(&data).map_err(...) returns Err and collect
short-circuits; change the processing to iterate with .filter_map or an explicit
loop that attempts QualifiedIdentity::from_bytes for each row, logs a warning on
Err (including identifying info), and only pushes successful identities into the
result so corrupted blobs are skipped instead of aborting the entire load.
Ensure you update the code paths that reference QualifiedIdentity::from_bytes
and the query_map result handling so they match this skip-and-warn behavior for
both functions.
- Around line 151-152: The code currently uses
QualifiedIdentity::from_bytes(&data).map_err(...) and the ? operator which
aborts the whole batch on the first corrupted blob; change the batch-loading
loops (the function containing QualifiedIdentity::from_bytes call and also
get_local_qualified_identities_in_wallets) to match on
QualifiedIdentity::from_bytes(&data) and on Err log a warning (include key/id
and error) and continue the loop instead of returning an error, while on Ok push
the identity as before; keep get_identity_by_id as-is (returning an error for a
single-identity lookup) unless you want the single-lookup behavior changed
explicitly.


Copilot AI left a comment


Pull request overview

This PR improves robustness around corrupted/invalid data in the SQLite database by removing panic paths and converting identity-blob decoding failures into recoverable errors (or warnings), so the app can handle unexpected DB state more safely.

Changes:

  • Updated QualifiedIdentity::from_bytes() to return a Result instead of panicking on decode failure.
  • Replaced an unreachable!() in scheduled vote loading with a warning + safe default.
  • Introduced CorruptedBlobError to reduce boilerplate when converting decode failures into rusqlite::Error, and updated identity-loading call sites accordingly.
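The unreachable!() replacement mentioned above can be sketched as a small helper. This is an illustration, not the actual code: executed_from_column is a hypothetical function name, and eprintln! stands in for the tracing::warn! the PR actually uses.

```rust
// The `executed` column should only hold 0 or 1; a corrupted row now
// logs a warning and defaults to false instead of panicking.
fn executed_from_column(value: i64) -> bool {
    match value {
        0 => false,
        1 => true,
        other => {
            eprintln!("unexpected `executed` value {other}, defaulting to false");
            false
        }
    }
}

fn main() {
    assert!(executed_from_column(1));
    assert!(!executed_from_column(0));
    assert!(!executed_from_column(999)); // corrupted value: warn and continue
    println!("ok");
}
```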

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 3 comments.

Summary per file:

  • src/model/qualified_identity/mod.rs: makes identity deserialization fallible to avoid panics on corrupted blobs.
  • src/database/identities.rs: propagates corrupted identity blob decode failures as rusqlite::Error via CorruptedBlobError.
  • src/database/wallet.rs: avoids panicking on corrupted identity blobs when loading wallets by logging and skipping.
  • src/database/scheduled_votes.rs: avoids panicking on unexpected executed values; logs a warning and defaults safely.
  • src/database/mod.rs: adds the CorruptedBlobError newtype and a From conversion into rusqlite::Error.


@lklimek lklimek changed the title fix: return error instead of panicking on corrupted database blobs fix: handle errors instead of panicking on corrupted database blobs Feb 12, 2026

Copilot AI left a comment


Pull request overview

Copilot reviewed 5 out of 5 changed files in this pull request and generated 1 comment.



@lklimek lklimek enabled auto-merge (squash) February 17, 2026 08:58
@lklimek lklimek merged commit 70587a8 into v1.0-dev Feb 17, 2026
4 checks passed
@lklimek lklimek deleted the fix/560 branch February 17, 2026 09:33
PastaPastaPasta added a commit that referenced this pull request Feb 17, 2026
* fix: skip best chain lock polling in SPV mode (#567)

* Initial plan

* Skip chain lock refresh in SPV mode

Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com>

* Document SPV guard intent

Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com>

* fix: compute relative timestamps from actual data (#581)

* fix: compute relative timestamps from actual data

Replace hardcoded display strings like "Received: 1 day ago" with
real relative timestamps computed from document createdAt/updatedAt
fields using chrono::Utc::now().

Fixes #579

* style: fix import ordering per cargo fmt

* refactor: extract format_relative_time to shared dashpay module

Deduplicate the identical format_relative_time function that existed
in both contact_requests.rs and send_payment.rs. Move it to the
dashpay mod.rs as a pub(crate) function and import from both files.

---------

Co-authored-by: PastaClaw <thepastaclaw@users.noreply.github.com>

* fix: update platform for DIP-18 HRP and improve SPV sync progress (#575)

* fix: update dashpay/platform to d6f4eb9 for DIP-18 HRP fix

The previous platform revision used incorrect bech32m HRP prefixes
(evo/tevo) for Platform addresses. The updated commit uses the
correct DIP-0018 prefixes (dash/tdash).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor(spv): use dash-spv SyncProgress API directly

Replace the intermediate SyncProgress/DetailedSyncProgress/SyncStage
translation layer with direct use of dash_spv::sync::SyncProgress.
This eliminates ~120 lines of bridge code in spawn_progress_watcher()
and determine_sync_stage(), and lets the UI query per-manager progress
(headers, filter_headers, filters, masternodes, blocks) via the
upstream API.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(spv): improve sync progress accuracy and robustness

- Log headers() error before discarding for easier debugging
- Restore download-window optimization for headers progress on
  checkpoint-resumed syncs (progress starts near 0% not 83%)
- Replace catch-all _ => 0.0 with explicit SyncState variant matches
  so new variants produce compile errors
- Fix filters progress to use current_height/target_height instead of
  downloaded/target_height (session count vs absolute height mismatch)
- Restore peer count in sync status text
- Add "Querying peer heights" label and diffs_processed to masternode
  status for more informative sync messages

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(spv): use height-based blocks progress to prevent bar from jumping

The blocks progress bar was bouncing backward because it used
processed/requested session counters whose denominator grows as filters
discover more matching blocks. Switch to last_processed block height
relative to headers target_height, which only increases. Display
"current / target" heights instead of percentage on the blocks bar.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(spv): improve progress bar resilience across resume and network switch

- Add windowed progress tracking for filters (consistent with headers/filter_headers)
- Reset blocks_stage_start on Error state so recovery gets a fresh window
- Track spv_progress_network to detect network changes and rebuild progress
  state from the new network's sync_progress instead of resetting to zero
- Eliminate redundant clone in progress watcher (move instead of clone twice)
- Consolidate all progress state reset logic into rebuild_spv_progress_state()

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* build: add git and cargo permissions to Claude Code workflow (#565)

* build: add git and cargo permissions to Claude Code workflow

Allow Claude to run git fetch/merge/checkout/rebase/push and
cargo build/test/clippy/fmt commands. Switch model to opus.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: use claude_args for model and allowed tools

The `model` and `allowed_tools` inputs are not declared in
claude-code-action@v1 and are silently ignored. Move them to
`claude_args` with --model and --allowedTools flags. Also fix
deprecated colon syntax (`:*`) to space syntax (` *`).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: use single quotes to prevent glob expansion in allowed tools

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* build: sandbox cargo commands and add git permissions for Claude

- Add safe-cargo.sh wrapper that strips CI secrets before running cargo
- Use --allowedTools for git and safe-cargo, --disallowedTools for raw cargo
- Document safe-cargo usage in CLAUDE.md

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: switch safe-cargo.sh from denylist to allowlist approach

Use `env -i` (start with empty environment, explicitly pass only
what cargo needs) instead of `env -u` (strip known secrets). This
is more robust against future secrets being added to the workflow.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* build: deny Claude from editing CI scripts and workflows

Prevent Claude from modifying .github/scripts/ and .github/workflows/
to ensure the safe-cargo wrapper cannot be tampered with.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: conditionally pass optional env vars in safe-cargo.sh

Empty PROTOC="" caused prost build scripts to fail with
"protoc not found". Now optional vars (PROTOC, CC, CXX, etc.)
are only passed when set and non-empty.

Tested: build, test, fmt all pass through the wrapper.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* rabbit feedback

* build: move safe-cargo.sh to scripts/ and allow +nightly fmt

Move safe-cargo.sh from .github/scripts/ to top-level scripts/ for
better discoverability. Add detailed comment explaining why the wrapper
exists (prevent CI secret exfiltration via build scripts). Update all
references in claude.yml, CLAUDE.md, and permission settings. Add
`+nightly fmt` to allowedTools so Claude can follow CLAUDE.md
formatting instructions in CI.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* fix: handle errors instead of panicking on corrupted database blobs (#561)

* fix: return error instead of panicking on corrupted database blobs (#560)

- Replace `unreachable!()` in `get_scheduled_votes` with warning log and
  default to false for unexpected `executed` column values
- Change `QualifiedIdentity::from_bytes()` to return `Result` instead of
  panicking via `.expect()`
- Propagate deserialization errors as `rusqlite::Error` in all 6 callers
  so corrupted database is surfaced to the user rather than silently
  ignored or crashing the app
- Add `CorruptedBlobError` newtype in database module to eliminate
  repeated `FromSqlConversionFailure` boilerplate

Closes #560

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore: impl thiserror

* chore: fmt

* build: add Claude Code GitHub workflow and settings (#552)

Cherry-pick from v1.0-dev to enable @claude mentions in PRs.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore: fix after merge

* fix: skip corrupted identity blobs in get_wallets instead of aborting

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore: fail on corrupted identity

* doc: document error handling in the db

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* feat: track DAPI endpoint availability in connection status (#533)

* fix: connection status not updated

* chore: rabbit feedback

* Initial plan

* Track DAPI connection status and display in tooltips and connection info

- Add dapi_total_endpoints and dapi_available fields to ConnectionStatus
- Factor DAPI availability into overall_connected status (RED when no endpoints available)
- Query SDK AddressList during periodic refresh for endpoint counts and availability
- Display DAPI status in connection indicator tooltip
- Display DAPI status in network chooser Connection Status card (all modes)
- Add dapi_status_label() helper for consistent status text formatting

Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com>

* Address review feedback: store available endpoint count and extract DRY helper

- Changed dapi_available from AtomicBool to AtomicU16 (dapi_available_endpoints) to store
  the count of available endpoints instead of just a boolean
- Display format now shows "Available ({available}/{total} endpoints)"
- Extracted repeated DAPI status rendering into add_dapi_status_label() helper function
  in network_chooser_screen.rs to eliminate code duplication

Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com>

* Fix rustfmt formatting issues in connection_status.rs

- Reorder atomic imports (AtomicU8 before AtomicU16) per rustfmt
- Wrap long .store() call to respect line length
- Wrap long format!() strings to respect line length

Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com>

* Fix borrow checker error: extract DAPI status values before mutable self borrow

The add_dapi_status_label helper was capturing a &ConnectionStatus reference
(derived from self) in closures, which extended the immutable borrow past the
self.render_spv_sync_progress() mutable borrow on line 594.

Fix: change add_dapi_status_label to accept pre-computed owned values
(dapi_total, dapi_available, dapi_label) instead of &ConnectionStatus, and
extract those values early alongside other status fields.

Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com>

* Switch Platform to c2c88e4 and use get_live_addresses() for accurate available count

Updated dash-sdk dependency to Platform commit c2c88e4a988ce930 which adds
AddressList::get_live_addresses() method. Replaced the workaround that used
get_live_address().is_some() (which could only tell if at least one endpoint
was live) with get_live_addresses().len() to get the exact count of available
non-banned DAPI endpoints.

Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com>

* chore: typo + network changes

* chore: apply feedback

* chore: rabbit review

* chore: rabbit feedback

* chore: rabbitting

* Remove overall_connected_with method (deleted upstream in base branch)

Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com>

* Merge v1.0-dev into copilot/update-dapi-connection-status

Resolved modify/delete conflict on src/context.rs: removed the file
since v1.0-dev refactored it into src/context/mod.rs and submodules,
which already include our Arc<ConnectionStatus> changes.

Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com>

* chore: fix platfom versioning issues

* chore: add todo

* chore: fmt

---------

Co-authored-by: Lukasz Klimek <842586+lklimek@users.noreply.github.com>
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>

* refactor(spv): eliminate separate thread/runtime, improve shutdown and UX (#577)

* fix: update dashpay/platform to d6f4eb9 for DIP-18 HRP fix

The previous platform revision used incorrect bech32m HRP prefixes
(evo/tevo) for Platform addresses. The updated commit uses the
correct DIP-0018 prefixes (dash/tdash).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor(spv): use dash-spv SyncProgress API directly

Replace the intermediate SyncProgress/DetailedSyncProgress/SyncStage
translation layer with direct use of dash_spv::sync::SyncProgress.
This eliminates ~120 lines of bridge code in spawn_progress_watcher()
and determine_sync_stage(), and lets the UI query per-manager progress
(headers, filter_headers, filters, masternodes, blocks) via the
upstream API.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(spv): improve sync progress accuracy and robustness

- Log headers() error before discarding for easier debugging
- Restore download-window optimization for headers progress on
  checkpoint-resumed syncs (progress starts near 0% not 83%)
- Replace catch-all _ => 0.0 with explicit SyncState variant matches
  so new variants produce compile errors
- Fix filters progress to use current_height/target_height instead of
  downloaded/target_height (session count vs absolute height mismatch)
- Restore peer count in sync status text
- Add "Querying peer heights" label and diffs_processed to masternode
  status for more informative sync messages

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
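The download-window idea above can be sketched as a pure function (a hypothetical illustration; the field names are taken from the commit message, not the actual dash-spv API):

```rust
/// Hypothetical sketch of windowed progress: on a checkpoint-resumed
/// sync, measure headers progress relative to the height where this
/// session started, so the bar begins near 0% instead of at the
/// checkpoint's absolute ratio (e.g. 83%).
fn windowed_progress(stage_start: u64, current: u64, target: u64) -> f64 {
    if target <= stage_start {
        return 1.0; // nothing left to download in this window
    }
    (current.saturating_sub(stage_start) as f64) / ((target - stage_start) as f64)
}
```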

* fix(spv): use height-based blocks progress to prevent bar from jumping

The blocks progress bar was bouncing backward because it used
processed/requested session counters whose denominator grows as filters
discover more matching blocks. Switch to last_processed block height
relative to headers target_height, which only increases. Display
"current / target" heights instead of percentage on the blocks bar.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
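The height-based calculation described above reduces to a small monotonic function (an illustrative sketch; `last_processed` and `target_height` are the fields named in the commit, not a literal excerpt of the code):

```rust
/// Hypothetical sketch of the monotonic blocks progress: derive the
/// fraction from block heights, which only increase, instead of
/// session counters whose denominator can grow as filters discover
/// more matching blocks.
fn blocks_progress(last_processed: u64, target_height: u64) -> f64 {
    if target_height == 0 {
        return 0.0; // no headers target yet; avoid division by zero
    }
    (last_processed.min(target_height) as f64) / (target_height as f64)
}
```

The UI then renders `last_processed / target_height` as raw heights rather than a percentage, per the commit message.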

* fix(spv): improve progress bar resilience across resume and network switch

- Add windowed progress tracking for filters (consistent with headers/filter_headers)
- Reset blocks_stage_start on Error state so recovery gets a fresh window
- Track spv_progress_network to detect network changes and rebuild progress
  state from the new network's sync_progress instead of resetting to zero
- Eliminate redundant clone in progress watcher (move instead of clone twice)
- Consolidate all progress state reset logic into rebuild_spv_progress_state()

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(spv): update ConnectionStatus immediately on connect/disconnect

Propagate SPV status into ConnectionStatus right after start_spv() and
stop_spv() so the UI reflects the change on the next frame instead of
waiting for the next throttled trigger_refresh() cycle (2-10 seconds).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(spv): check cancellation token in listener tasks to prevent shutdown hang

SPV finality and reconcile listeners blocked app shutdown for up to 10s
because they never checked the global cancellation token. Add cancel
branches to their tokio::select! loops so they exit immediately on
shutdown.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore: remove log spamming line

* fix(spv): make listener handler calls cancellation-aware and add shutdown trace logging

Wrap reconcile_spv_wallets(), handle_spv_finality_event(), and the
debounce sleep inside tokio::select! with the cancellation token so
shutdown can interrupt them even when blocked on locks held by the
SPV sync thread.

Add trace-level logging to TaskManager::shutdown() for per-task join
timing to aid future shutdown diagnostics.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(spv): disable Disconnect button during stopping and poll status faster

- Disable the Disconnect button while SPV status is Stopping to prevent
  double-clicks and provide visual feedback
- Poll SPV status every 200ms during Stopping instead of the 10s
  connected interval so the Stopped transition is reflected within 1s
- Reset the throttle timer in stop_spv() so fast polling starts
  immediately

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
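The interval selection above reduces to a small match (hypothetical enum and function names; the real status type in the app has more variants):

```rust
use std::time::Duration;

// Illustrative status enum; the real SPV status has more states.
enum SpvStatus {
    Stopping,
    Connected,
}

// Poll quickly while stopping so the Stopped transition is reflected
// within about a second, but back off to the slow interval once
// connected to avoid needless work.
fn poll_interval(status: &SpvStatus) -> Duration {
    match status {
        SpvStatus::Stopping => Duration::from_millis(200),
        SpvStatus::Connected => Duration::from_secs(10),
    }
}
```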

* refactor(tasks): add mandatory task names to spawn_sync for shutdown diagnostics

Change spawn_sync() signature to require a &'static str name. The
JoinSet now yields the task name on completion, letting shutdown()
log which tasks finished and which ones timed out.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
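A std-only sketch of the naming idea (the real code uses a tokio `JoinSet` inside `TaskManager`; the helper below is illustrative, not the actual `spawn_sync`):

```rust
use std::thread;

// Each spawned task carries a &'static str name and yields it on
// completion, so the supervisor can log which tasks finished and
// which ones hung past the shutdown timeout.
fn spawn_named<F>(name: &'static str, work: F) -> thread::JoinHandle<&'static str>
where
    F: FnOnce() + Send + 'static,
{
    thread::spawn(move || {
        work();
        name // returned to the joiner for shutdown diagnostics
    })
}
```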

* chore: log task names

* refactor(spv): eliminate separate OS thread and tokio runtime

Replace the dedicated SPV thread + 4-worker tokio runtime with a
spawned task on the main 12-worker runtime via TaskManager::spawn_sync.

This simplifies shutdown (SPV loop now tracked in unified JoinSet),
removes cross-runtime complexity, and improves debuggability.

Additional changes:
- Fix: zeroize xprv_str after wallet import (security H-1)
- Fix: sanitize devnet_name in build_spv_data_dir to prevent path traversal
- Add 21 integration tests covering lifecycle, concurrency, deadlock
  detection, and live testnet sync

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
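The devnet-name sanitization mentioned above could look roughly like this (a hypothetical sketch, not the actual `build_spv_data_dir` code): restrict the name to characters safe for a single directory component so `..` and path separators cannot escape the data directory.

```rust
/// Hypothetical sketch of devnet-name sanitization: keep only
/// characters safe for a directory component, dropping path
/// separators and the dots that form ".." traversal sequences.
fn sanitize_devnet_name(name: &str) -> String {
    name.chars()
        .filter(|c| c.is_ascii_alphanumeric() || *c == '-' || *c == '_')
        .collect()
}
```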

* fix(spv): derive stop_token as child of global cancel for clean shutdown

The spv_request_handler selected on stop_token, which was independent
of the global cancellation token.  During window-close shutdown only
the global token was cancelled, leaving the request handler running
and causing a ~5s hang until the TaskManager timeout aborted it.

Fix by creating stop_token as a child_token() of the global cancel.
This also simplifies run_spv_loop and run_sync_and_monitor by removing
the redundant global_cancel parameter — a single stop_token now covers
both explicit SpvManager::stop() and application-wide shutdown.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* CLAUDE: minor changes about docs

* fix(spv): bump rust-dashcore to d8bc066 for faster disconnect and add shutdown tracing

Pin rust-dashcore patches to commit d8bc066 which includes:
- sync manager task loop exits on network errors instead of logging indefinitely
- sync coordinator signal_shutdown() cancels tasks before network disconnect
- connection tasks race Peer::connect() against shutdown token

Add debug tracing to SpvManager::stop() and run_sync_and_monitor() to
measure client.stop() duration, and trace-level polling of SPV status
in ConnectionStatus.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* chore: update rust-dashcore rev

* merge: resolve v1.0-dev conflicts for #577 (#585)

* fix: compute relative timestamps from actual data (#581)

* fix: compute relative timestamps from actual data

Replace hardcoded display strings like "Received: 1 day ago" with
real relative timestamps computed from document createdAt/updatedAt
fields using chrono::Utc::now().

Fixes #579

* style: fix import ordering per cargo fmt

* refactor: extract format_relative_time to shared dashpay module

Deduplicate the identical format_relative_time function that existed
in both contact_requests.rs and send_payment.rs. Move it to the
dashpay mod.rs as a pub(crate) function and import from both files.
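The shared helper could look roughly like this (a std-only sketch; the real function lives in the dashpay module and computes elapsed time from chrono `createdAt`/`updatedAt` timestamps, so the thresholds and labels below are illustrative):

```rust
/// Hypothetical sketch of a shared relative-time formatter, assuming
/// the caller has already computed elapsed seconds since the
/// document's createdAt/updatedAt field.
fn format_relative_time(elapsed_secs: u64) -> String {
    const MINUTE: u64 = 60;
    const HOUR: u64 = 60 * MINUTE;
    const DAY: u64 = 24 * HOUR;
    match elapsed_secs {
        s if s < MINUTE => "just now".to_string(),
        s if s < HOUR => format!("{}m ago", s / MINUTE),
        s if s < DAY => format!("{}h ago", s / HOUR),
        s => format!("{}d ago", s / DAY),
    }
}
```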

---------

Co-authored-by: PastaClaw <thepastaclaw@users.noreply.github.com>

* fix: update platform for DIP-18 HRP and improve SPV sync progress (#575)

* fix: update dashpay/platform to d6f4eb9 for DIP-18 HRP fix

The previous platform revision used incorrect bech32m HRP prefixes
(evo/tevo) for Platform addresses. The updated commit uses the
correct DIP-0018 prefixes (dash/tdash).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* refactor(spv): use dash-spv SyncProgress API directly

Replace the intermediate SyncProgress/DetailedSyncProgress/SyncStage
translation layer with direct use of dash_spv::sync::SyncProgress.
This eliminates ~120 lines of bridge code in spawn_progress_watcher()
and determine_sync_stage(), and lets the UI query per-manager progress
(headers, filter_headers, filters, masternodes, blocks) via the
upstream API.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(spv): improve sync progress accuracy and robustness

- Log headers() error before discarding for easier debugging
- Restore download-window optimization for headers progress on
  checkpoint-resumed syncs (progress starts near 0% not 83%)
- Replace catch-all _ => 0.0 with explicit SyncState variant matches
  so new variants produce compile errors
- Fix filters progress to use current_height/target_height instead of
  downloaded/target_height (session count vs absolute height mismatch)
- Restore peer count in sync status text
- Add "Querying peer heights" label and diffs_processed to masternode
  status for more informative sync messages

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(spv): use height-based blocks progress to prevent bar from jumping

The blocks progress bar was bouncing backward because it used
processed/requested session counters whose denominator grows as filters
discover more matching blocks. Switch to last_processed block height
relative to headers target_height, which only increases. Display
"current / target" heights instead of percentage on the blocks bar.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(spv): improve progress bar resilience across resume and network switch

- Add windowed progress tracking for filters (consistent with headers/filter_headers)
- Reset blocks_stage_start on Error state so recovery gets a fresh window
- Track spv_progress_network to detect network changes and rebuild progress
  state from the new network's sync_progress instead of resetting to zero
- Eliminate redundant clone in progress watcher (move instead of clone twice)
- Consolidate all progress state reset logic into rebuild_spv_progress_state()

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* build: add git and cargo permissions to Claude Code workflow (#565)

* build: add git and cargo permissions to Claude Code workflow

Allow Claude to run git fetch/merge/checkout/rebase/push and
cargo build/test/clippy/fmt commands. Switch model to opus.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: use claude_args for model and allowed tools

The `model` and `allowed_tools` inputs are not declared in
claude-code-action@v1 and are silently ignored. Move them to
`claude_args` with --model and --allowedTools flags. Also fix
deprecated colon syntax (`:*`) to space syntax (` *`).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: use single quotes to prevent glob expansion in allowed tools

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* build: sandbox cargo commands and add git permissions for Claude

- Add safe-cargo.sh wrapper that strips CI secrets before running cargo
- Use --allowedTools for git and safe-cargo, --disallowedTools for raw cargo
- Document safe-cargo usage in CLAUDE.md

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: switch safe-cargo.sh from denylist to allowlist approach

Use `env -i` (start with empty environment, explicitly pass only
what cargo needs) instead of `env -u` (strip known secrets). This
is more robust against future secrets being added to the workflow.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
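The allowlist principle can be demonstrated in isolation (an illustrative demo, not the actual safe-cargo.sh; variable names are made up):

```shell
# A secret exported in the parent environment...
SECRET_TOKEN="hunter2"
export SECRET_TOKEN
# ...does not survive `env -i`, which starts from an empty environment
# and passes through only what is explicitly listed (here, just PATH).
env -i PATH="$PATH" sh -c 'echo "SECRET_TOKEN is ${SECRET_TOKEN:-unset}"'
```

In the real wrapper the same `env -i` invocation would list the variables cargo needs and then exec `cargo "$@"`.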

* build: deny Claude from editing CI scripts and workflows

Prevent Claude from modifying .github/scripts/ and .github/workflows/
to ensure the safe-cargo wrapper cannot be tampered with.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: conditionally pass optional env vars in safe-cargo.sh

Empty PROTOC="" caused prost build scripts to fail with
"protoc not found". Now optional vars (PROTOC, CC, CXX, etc.)
are only passed when set and non-empty.

Tested: build, test, fmt all pass through the wrapper.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* rabbit feedback

* build: move safe-cargo.sh to scripts/ and allow +nightly fmt

Move safe-cargo.sh from .github/scripts/ to top-level scripts/ for
better discoverability. Add detailed comment explaining why the wrapper
exists (prevent CI secret exfiltration via build scripts). Update all
references in claude.yml, CLAUDE.md, and permission settings. Add
`+nightly fmt` to allowedTools so Claude can follow CLAUDE.md
formatting instructions in CI.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: PastaClaw <thepastaclaw@users.noreply.github.com>
Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* chore: bump rust-dashcore

* chore: imports

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: Pasta Lil Claw <pasta+claw@dashboost.org>
Co-authored-by: PastaClaw <thepastaclaw@users.noreply.github.com>
Co-authored-by: PastaPastaPasta <6443210+PastaPastaPasta@users.noreply.github.com>

* ci: cancel in-progress workflow runs on new push (#590)

* ci: cancel in-progress workflow runs on new push

Add concurrency groups to Tests and Clippy workflows so that
previous runs are automatically cancelled when a new commit is
pushed to the same branch or PR.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* trigger ci

---------

Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>

* fix: use correct DashPay profile field "publicMessage" instead of "bio" (#582)

The DashPay contract schema defines the field as "publicMessage",
not "bio". This caused profile bios to never load in contact info.

Co-authored-by: PastaClaw <thepastaclaw@users.noreply.github.com>

---------

Co-authored-by: Copilot <198982749+Copilot@users.noreply.github.com>
Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com>
Co-authored-by: PastaClaw <thepastaclaw@users.noreply.github.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
Co-authored-by: PastaPastaPasta <6443210+PastaPastaPasta@users.noreply.github.com>


Development

Successfully merging this pull request may close these issues.

Bug: Panic-on-corrupted-data in scheduled_votes and identity loading
