feat: repeat token claim as long as there are outstanding tokens #421
lklimek wants to merge 20 commits into
Conversation
📝 Walkthrough
Adds a new claim_all option to the token claim flow: a claim_all_tokens() method that repeats single claims until none remain, a TokensClaimed(TokenAmount) success variant, and a "Claim all" checkbox in the UI (enabled by default).
Sequence Diagram(s)
sequenceDiagram
participant UI as ClaimTokensScreen
participant Handler as TokenTaskHandler
participant ClaimLogic as Claiming Logic
participant DBBalance as Balance/State Update
rect rgba(100, 150, 200, 0.5)
note over UI,Handler: Single-claim flow (claim_all = false)
UI->>Handler: dispatch TokenTask::ClaimTokens{claim_all:false}
Handler->>ClaimLogic: call claim_token()
ClaimLogic->>DBBalance: update balance/state with claimed_amount
ClaimLogic-->>Handler: TokensClaimed(amount)
Handler-->>UI: success result
UI->>UI: set claimed_amount, status Complete
end
rect rgba(150, 100, 200, 0.5)
note over UI,ClaimLogic: Bulk-claim flow (claim_all = true)
UI->>Handler: dispatch TokenTask::ClaimTokens{claim_all:true}
Handler->>ClaimLogic: call claim_all_tokens()
loop until exhausted or no rewards
ClaimLogic->>ClaimLogic: call claim_token() internally
ClaimLogic->>DBBalance: update balance/state incrementally
ClaimLogic->>ClaimLogic: aggregate total_claimed
end
ClaimLogic-->>Handler: TokensClaimed(total_claimed)
Handler-->>UI: success result
UI->>UI: set claimed_amount, status Complete
end
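In Rust, the bulk-claim lane of the diagram reduces to a small loop. Below is a minimal sketch with stand-in types; Claimer, ClaimOutcome, and the claim_token signature are assumptions, while the real code in src/backend_task/tokens/claim_tokens.rs returns BackendTaskSuccessResult::TokensClaimed and, per the review comments below, breaks when a round claims zero.

```rust
type TokenAmount = u64;

enum ClaimOutcome {
    TokensClaimed(TokenAmount),
}

struct Claimer;

impl Claimer {
    // Stand-in for a single platform claim; assumed to return the amount
    // claimed in this round (0 when nothing is left to claim).
    fn claim_token(&self) -> Result<ClaimOutcome, String> {
        Ok(ClaimOutcome::TokensClaimed(0))
    }

    // Repeat single claims until a round claims nothing, aggregating a total.
    fn claim_all_tokens(&self) -> Result<ClaimOutcome, String> {
        let mut total_claimed: TokenAmount = 0;
        loop {
            let ClaimOutcome::TokensClaimed(amount) = self.claim_token()?;
            if amount == 0 {
                break; // exhausted: no outstanding tokens remain
            }
            total_claimed += amount;
        }
        Ok(ClaimOutcome::TokensClaimed(total_claimed))
    }
}
```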
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Pull Request Overview
This PR implements a feature that executes all available token claims when a user clicks "claim", handling situations where platform limits prevent claiming all tokens in a single operation. The solution repeatedly executes claims until no more tokens are available, ensuring users can claim their full token balance.
Key changes:
- Added a new claim_all_tokens method that iterates through all available claims
- Enhanced error handling and logging with structured tracing instead of println statements (see the sketch after this list)
- Updated the UI to display the total number of tokens claimed
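For the tracing change, the conversion is from positional println output to structured fields. A hedged illustration follows; the function names and messages are invented, and only the pattern of swapping println! for tracing macros comes from the PR description.

```rust
use tracing::{error, info};

// Before (per the description): println!-style logging, e.g.
// println!("claimed {} tokens for {}", amount, token_id);

// After: structured events with named fields that log subscribers can filter on.
fn log_claim(token_id: &str, amount: u64) {
    info!(token_id, amount, "tokens claimed");
}

fn log_claim_failure(token_id: &str, reason: &str) {
    error!(token_id, reason, "token claim failed");
}
```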
Reviewed Changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| src/backend_task/tokens/claim_tokens.rs | Implements new claim_all_tokens method and improves error handling with structured logging |
| src/backend_task/mod.rs | Adds TokensClaimed variant to success result enum |
| src/backend_task/tokens/mod.rs | Updates call to use new claim_all_tokens method |
| src/ui/tokens/claim_tokens_screen.rs | Adds UI handling for displaying token claim results |
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Sometimes we don't want to make all claims. We should give the user the option to do so or not.
@copilot please take over this PR, merge v1.0-dev into it, and add a checkbox "Claim all" that will control if we will repeat token claim or not, enabled by default. |
Resolved conflicts by taking v1.0-dev as base, then reapplying token claim feature changes:
- TokensClaimed(TokenAmount) variant
- claim_all_tokens() and claim_token() methods
- claim_all boolean in ClaimTokens task
- UI checkbox for claim all option
- Added claim_all boolean field to ClaimTokens task
- Updated handler to call claim_all_tokens or claim_token based on flag
- Added claim_all checkbox to UI (enabled by default)
- Added claimed_amount field to track tokens claimed
- Updated display_task_result to save and display claimed amount
- Updated success screen to show amount of tokens claimed
- Wire checkbox state to ClaimTokens backend task

Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com>
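A minimal egui sketch of the checkbox wiring described above, with assumed field and method names; the real screen is src/ui/tokens/claim_tokens_screen.rs and its state struct has many more fields.

```rust
use eframe::egui;

struct ClaimTokensScreen {
    claim_all: bool, // enabled by default, per the commit message above
}

impl Default for ClaimTokensScreen {
    fn default() -> Self {
        Self { claim_all: true }
    }
}

impl ClaimTokensScreen {
    fn render_claim_controls(&mut self, ui: &mut egui::Ui) {
        // The checkbox mutates the flag in place each frame.
        ui.checkbox(&mut self.claim_all, "Claim all");
        if ui.button("Claim").clicked() {
            // Illustrative dispatch: the flag rides along on the backend task,
            // e.g. TokenTask::ClaimTokens { claim_all: self.claim_all, .. }.
        }
    }
}
```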
* feat: wallet unlock on wallet screen (#415) * refactor: Amount type and AmountInput component (#417) * feat: amount input, first building version * test(amount): fixed tests * chore: I think final * chore: my_tokens display correct amount * chore: transfer tokens update * chore: hide unit on rewards estimate column * chore: two new helper methods * chore: I think finals * cargo fmt * feat: component trait * impl Component for AmountInput * chore: updated component trait * chore: update for egui enabled state mgmt * doc: component design pattern doc * chore: component design pattern continued * chore: amount improvements * chore: copilot review * chore: amount improvements * backport: amount component from * chore: fix imports * chore: refactor * chore: futher refactor * chore: further refactor based on feedback * doc: simplified component design pattern description * chore: peer review * doc: update docs * chore: amount input * test: run tests using github actions and fix failing tests (#422) * fix: failing tests * gha: run tests * fix: tests fail due to lack of .env file * fix: incorrect decimals handling on token create, mint and burn screens (#419) * feat: amount input, first building version * test(amount): fixed tests * chore: I think final * chore: my_tokens display correct amount * chore: transfer tokens update * chore: hide unit on rewards estimate column * chore: two new helper methods * chore: I think finals * cargo fmt * feat: component trait * impl Component for AmountInput * chore: updated component trait * chore: update for egui enabled state mgmt * doc: component design pattern doc * chore: component design pattern continued * chore: amount improvements * chore: copilot review * chore: amount improvements * refactor: mint and burn token screens use AmountInput * chore: use AmountInput on token creator screen * fix: burn error handling * feat: errors displayed in the AmountInput component * fix: vertical align of amount input * backport: amount component from * chore: fix imports * chore: refactor * chore: futher refactor * chore: further refactor based on feedback * doc: simplified component design pattern description * chore: peer review * doc: update docs * chore: amount input * chore: fixes after merge * chore: self-review * feat: amout set decimal places + rename label => with_label * refactor: amount input init on token screen * chore: fix token creator layout * chore: format base amount with leading zeros in confirmation * chore: base supply 0 by default * feat: add network field to identity structures (#423) * feat: add network field to identity structures * fix * feat: add display for private key in both WIF and Hex formats (#424) * feat: add display for private key in both WIF and Hex formats * fix: display private keys as normal text and depend on color theme --------- Co-authored-by: pauldelucia <pauldelucia2@gmail.com> * feat: allow setting zmq uri in .env (#425) * feat: allow setting ZMQ URI * chore: rename zmq_endpoint to core_zmq_endpoint * fix: failing test in token screen needed update * fix: update CI to use rust version 1.88 to match rust-toolchain * feat: better display in key selector (#429) * refactor: unified confirmation dialog (#413) * refactor: unified alert window * chore: update CLAUDE to create reusable components whenever appropriate * feat: correct confirm dialog on set token price screen * chore: remove callbacks which are overkill * chore: cargo fmt * chore: doctest fix * chore: impl Component for ConfirmationDialog * chore: use WidgetText 
* feat: Add Escape key handling for confirmation dialog * chore: button inactive when in progress * chore: fixes after merge * chore: some theme improvements * fmt and visual appeal --------- Co-authored-by: pauldelucia <pauldelucia2@gmail.com> * feat: nicer contract chooser panel (#426) * feat: nicer contract chooser panel * fmt * clippy * fix: token purchasability was not refreshing unless app restart (#432) * fix: token action buttons alignment * feat: nicer expanding tabs in token creator (#431) * feat: nicer expanding tabs in token creator * fix * feat: implement new ConfirmationDialog component throughout app (#430) * feat: implement new ConfirmationDialog component throughout app * fix: remove backup files * fix: confirmation dialog on withdrawal screen always showing "loading" * feat: Dash Masternode List and Quorum Viewer (#428) * a lot of work on DML viewer * more work * more work * more work * more work * more work * more work * more work * more work * ui improvements * ui improvements * optimizations * fast start * more work * more work * more work * fmt * much more work * fixes * updates * fix * fmt * revert * update for dashcore 40 * params for testnet * added testnet diff * fixes * clippy * more clippy * fixes * clean UI * use backend tasks * backend messages * coderabbit suggestions to add timeouts and prevent infinite loops * transient and fatal errors * update dash core configs for testnet * fmt * fix timeout * fix h-3 error * clear state when switching networks --------- Co-authored-by: pauldelucia <pauldelucia2@gmail.com> * chore: bump version to 1.0.0-dev (#439) The 1.0-dev branch builds were still reporting as 0.9.0 * feat: add left panel button labels and create contract subscreen chooser panel (#441) * feat: add left panel button labels and create contract subscreen chooser panel * fmt * feat: clarify identity index description for identity creation * fix: screen button labels display hard to see in dark mode * feat: left panel scroll (#443) * feat: left panel scroll * clippy * feat: add existing identity by wallet + identity index (#444) * feat: add existing identity by wallet + identity index * fmt * remove logging * update derivation index label * add robustness * fix: screen button labels display hard to see in dark mode * feat: left panel scroll (#443) * feat: left panel scroll * clippy * feat: load all identities up to an index * clippy * fix * feat: remove wallet (#445) * feat: remove wallet * clippy * Clarify wallet removal warning message Updated warning message for wallet removal to clarify that keys will no longer work unless the wallet is re-imported. 
* fix: identity creation and top-up via QR code handling (#447) * fix: identity creation handling * cleanup * cleanup * fix identity topup too * remove note * fix: keyword search required new platform version plus error handling and UI updates (#449) * fix: keyword search not displaying errors * update platform version and keyword search cleanup * fix * fix: handle decimals and fix schedules on price and purchase screens (#412) * fix: inactive button on set price group action * chore: apply review feedback * refactor: use token amount input * chore: update AmountInput * chore: tiered pricing * chore: direct purchase * chore: remove total agreed price * chore: max amount input defaults to max_credits * fmt * fix * clippy --------- Co-authored-by: pauldelucia <pauldelucia2@gmail.com> Co-authored-by: Claude <noreply@anthropic.com> * fix: showing used asset locks as unused (#448) * chore: remove unused task sender * chore: update to Platform V10 * feat: GroveSTARK ZK proofs (#450) * works * works * fix * ok * ok * ok * ok * ok * fix verification message showing in generation screen * remove advanced settings. set default 16 grinding bits and 128 bit security * ok * lock * ok * clippy * fmt * clippy and fmt * pin grovestark dep * fix * ok * clippy * rename file * remove ProofData * cleanup * fix * rename GroveStark to GroveSTARK * rename zk_proofs_screen to grovestark_screen * rename ZKProofs tool subscreen to GroveSTARK * move grovestark prover to model * rename ContractsDashpayComingSoon to ContractsDashpay * rename ContractsDashpay to Dashpay * rename dashpay stuff * fixes * ok * update grovestark rev * update * chore: update grovestark rev * chore: update platform to v2.1.2 * feat: derive keys from loaded wallets when loading identity by ID (#446) * feat: derive keys from loaded wallets when loading identity * clippy * chore: update platform version * ok * cleanup ui * cleanup ui * set max index search to 30 * set max const * k * fix: wallet screen not updating wallets after network change (#451) * fix: wallet screen not updating wallets after network change * ui improvement * clippy * fix: some document actions were showing error when it was success (#452) * fix: some document actions were showing error when it was success * add space at bottom * feat: wallet unlock button (#459) * fix: scrollable unused asset locks on identity creation (#460) * fix: no error display when trying to import a wallet twice on different networks (#466) * fix: no error display when trying to import a wallet twice on different networks * feat: add waiting for valid seed phrase message * fix * fix: logging should interpret RUST_LOG env variable (#467) * feat: signed dmg for mac (#437) * feat: signed mac binaries and dmg * fix * fix * extra check * app bundle * fix * fix icon * feat: simplify icon padding to 3% for correct sizing * ok * ok * fmt * fix: split dmgs into arm64 and x86 instead of universal * cleanup * fmt * fix naming * ok * fix * disk space fix * fix * fix * fix attempt * fix: enhance disk space cleanup for macOS DMG creation - Add more aggressive cleanup of Xcode caches and derived data - Remove additional Cargo build artifacts (.rlib, .d files) - Clean system-level caches to free maximum space - Add cleanup step for both ARM64 and x86_64 macOS builds - Remove temporary icon files after app bundle creation This should resolve the "No space left on device" error during DMG creation on GitHub Actions runners. 
🤖 Generated with [Claude Code](https://claude.ai/code) Co-Authored-By: Claude <noreply@anthropic.com> * macos-latest * latest for arm too * revert * re-revert * clear cache * fix * cleanup * try large * omg * apply suggestion * cleanup * fix * fix * fix --------- Co-authored-by: Claude <noreply@anthropic.com> * feat: HD wallet system, DashPay integration, SPV support, and more (#464) * feat: dashpay * ok * ok * remove doc * ok * ok * database * ok * fix: use proper BackendTaskSuccessResults rather than Message * fmt and clippy * remove dashpay dip file * logo and cleanup * fix top panel spacing * ui stuff * fmt * dip 14 * image fetching * some fixes * cleaning up * todos * feat: SPV phase 1 (#440) * feat: spv in det * ui * progress bars * working progress updates * fmt * fast sync * add back progress monitoring * back to git dep * newer dashcore version * deduplication * spv context provider * fixes * small fix * clippy fixes * fixes --------- Co-authored-by: Quantum Explorer <quantum@dash.org> * feat: spv progress bars and sync from genesis when wallets are loaded (#454) * feat: spv progress bars * feat: start from 0 if syncing with wallets * clippy * fmt * update * fixes * fix: add sdk features * feat: add accounts and transactions to wallet screen * clippy * fmt * fix: dashpay database table initialization was hanging on new db (#463) * fix: display stuff * add accounts and much progress * feat: transaction history in wallet * clippy * ok * feat: hd wallet * send tx via wallet and wallet lock/unlock * update to PeerNetworkManager * ok * ok * ok * fmt * clippy * clippy * feat: clear SPV data button * fix: switch from dash core rpc to spv mode * feat: switch to DashSpvClientInterface * clean spv button fix * feat: send from wallet screen * feat: platform addresses * feat: platform address transitions * feat: platform address transitions * feat: move dashpay screen to top level * platform addresses sdk * remove async context provider fn * lock * update for new sdk with addresses and handle sdk result data * chore: update grovestark dependency * chore: update dash-sdk rev * fmt * remove elliptic curves patch * clippy * feat: use BackendTaskSuccessResult variants instead of Message(String) * fmt * clippy * update DIP18 implementation, fixes and some features around wallets and PAs * fix * wallet unlock on wallet screen * info popup component * small fixes * wallet unlock component * fix startup panic * welcome screen and dashpay stuff * cargo fix * cargo fmt * chore: update deps to use most recent dash-sdk * simplify welcome screen and move spv storage clear to advanced settings * default connect to peers via SPV DNS seeds * fix: exit app without waiting * fix error message and reverse quorum hash on lookup * fmt * settings to turn off spv auto sync * clippy * work on getting started flow * AddressProvider stuff * address sync stuff * many fixes all in one * fix * many fixes * fixes * amount input stuff * alignment in identity create screen for adding keys * more fixes * many improvements * simple token creator, word count selection for wallet seed phrases, more * fix: platform address info storage * feat: generate platform addresses and address sync post-checkpoint * many things * chore: update to recent changes in platform * docs: added few lines about .env file * fix: banned addresses and failure during address sync * fix: updates for new platform and fee estimation * fix: only take PA credits after checkpoint * feat: fee estimation and display * feat: fee estimation follow-up * more on 
fee displays * fix: proof logs in data contract create * fix: white on white text for fee estimation display * fix: always start as closed for advanced settings dropdown in settings screen * fix: hard to see data contracts in the register contract screen * fix: document create screen scroll * fix: fee estimations accuracy * fmt * clippy auto fixes * clippy manual fixes and pin dash-sdk dep * fix failing tests * fmt * fix: update rust toolchain to 1.92 * refactor: hide SPV behind dev mode * feat: warning about SPV being experimental * cleanup based on claude review * fmt * cleanup based on another claude review * fix: kittest failing * fix: remove fee result display in success screens and clean up dashpay avatar display * dashpay fixes and platform address fixes * refactor: move some AppContext functions to backend_task module * clippy * clippy * fix: refresh mode alignment in wallet screen and fmt * fix: failing test due to test_db issue * fix: dashpay avatar loading slow * fix: add rocksdb deps to ci workflow * fix: free up disk space in ci workflow * fix: display warning when partial wallet refresh success occurs * fix: propogate error for terminal balance sync * feat: more aggressive disk cleanup in ci workflow --------- Co-authored-by: Quantum Explorer <quantum@dash.org> Co-authored-by: Lukasz Klimek <842586+lklimek@users.noreply.github.com> Co-authored-by: Ivan Shumkov <ivanshumkov@gmail.com> * feat: comprehensive test coverage implementation (#481) * feat: comprehensive test suite * fmt * fix: cannot create identity from address due to wrong identity id (#470) * feat: dashpay * ok * ok * remove doc * ok * ok * database * ok * fix: use proper BackendTaskSuccessResults rather than Message * fmt and clippy * remove dashpay dip file * logo and cleanup * fix top panel spacing * ui stuff * fmt * dip 14 * image fetching * some fixes * cleaning up * todos * feat: SPV phase 1 (#440) * feat: spv in det * ui * progress bars * working progress updates * fmt * fast sync * add back progress monitoring * back to git dep * newer dashcore version * deduplication * spv context provider * fixes * small fix * clippy fixes * fixes --------- Co-authored-by: Quantum Explorer <quantum@dash.org> * feat: spv progress bars and sync from genesis when wallets are loaded (#454) * feat: spv progress bars * feat: start from 0 if syncing with wallets * clippy * fmt * update * fixes * fix: add sdk features * feat: add accounts and transactions to wallet screen * clippy * fmt * fix: dashpay database table initialization was hanging on new db (#463) * fix: display stuff * add accounts and much progress * feat: transaction history in wallet * clippy * ok * feat: hd wallet * send tx via wallet and wallet lock/unlock * update to PeerNetworkManager * ok * ok * ok * fmt * clippy * clippy * feat: clear SPV data button * fix: switch from dash core rpc to spv mode * feat: switch to DashSpvClientInterface * clean spv button fix * feat: send from wallet screen * feat: platform addresses * feat: platform address transitions * feat: platform address transitions * feat: move dashpay screen to top level * platform addresses sdk * remove async context provider fn * lock * update for new sdk with addresses and handle sdk result data * chore: update grovestark dependency * chore: update dash-sdk rev * fmt * remove elliptic curves patch * clippy * feat: use BackendTaskSuccessResult variants instead of Message(String) * fmt * clippy * update DIP18 implementation, fixes and some features around wallets and PAs * fix * wallet unlock on 
wallet screen * info popup component * small fixes * wallet unlock component * fix startup panic * welcome screen and dashpay stuff * cargo fix * cargo fmt * chore: update deps to use most recent dash-sdk * simplify welcome screen and move spv storage clear to advanced settings * default connect to peers via SPV DNS seeds * fix: exit app without waiting * fix error message and reverse quorum hash on lookup * fmt * settings to turn off spv auto sync * clippy * work on getting started flow * AddressProvider stuff * address sync stuff * many fixes all in one * fix * many fixes * fixes * amount input stuff * alignment in identity create screen for adding keys * more fixes * many improvements * simple token creator, word count selection for wallet seed phrases, more * fix: platform address info storage * feat: generate platform addresses and address sync post-checkpoint * many things * fix: cannot create identity from address * chore: update to recent changes in platform * docs: added few lines about .env file * fix: banned addresses and failure during address sync * fix: updates for new platform and fee estimation * fix: only take PA credits after checkpoint * feat: fee estimation and display * feat: fee estimation follow-up * chore: fix build * chore: correct platform revision * refactor: don't query for nonce when creating identity * more on fee displays * fix: proof logs in data contract create * fix: white on white text for fee estimation display * fix: always start as closed for advanced settings dropdown in settings screen * fix: hard to see data contracts in the register contract screen * fix: document create screen scroll * fix: fee estimations accuracy * fmt * clippy auto fixes * clippy manual fixes and pin dash-sdk dep * fix failing tests * fmt * fix: update rust toolchain to 1.92 * refactor: hide SPV behind dev mode * feat: warning about SPV being experimental * fix: platform nonces not updated in sync * chore: cargo fmt * deps: update platform repo * chore: clippy and rabbit * chore: rabbit review * chore: enforce address networ with require_network() * cleanup based on claude review * fmt * cleanup based on another claude review * fix: kittest failing * fix: remove fee result display in success screens and clean up dashpay avatar display * dashpay fixes and platform address fixes * deps: update platform repo * chore: fixes after merge * refactor: move some AppContext functions to backend_task module * clippy * clippy * fix: refresh mode alignment in wallet screen and fmt * chore: fmt * fmt --------- Co-authored-by: pauldelucia <pauldelucia2@gmail.com> Co-authored-by: Paul DeLucia <69597248+pauldelucia@users.noreply.github.com> Co-authored-by: Quantum Explorer <quantum@dash.org> Co-authored-by: Ivan Shumkov <ivanshumkov@gmail.com> * docs: add CLAUDE.md for Claude Code guidance (#484) Adds documentation to help Claude Code (claude.ai/code) work effectively with this codebase, including build commands, testing instructions, architecture overview, and UI component patterns. Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com> * build: update dash-sdk to use rust-dashcore v0.42-dev (#483) * build: update dash-sdk to use rust-dashcore v0.41.0 Updates dash-sdk dependency to rev 7b2669c0b250504a4cded9e2b9e78551583e6744 which includes rust-dashcore v0.41.0. 
Breaking changes addressed: - WalletBalance: confirmed field renamed to spendable(), fields now methods - WalletManager::new() now requires Network argument - import_wallet_from_extended_priv_key() no longer takes network argument - current_height() no longer takes network argument - RequestSettings: removed max_decoding_message_size field - sync_to_tip() removed - initial sync now handled by monitor_network() Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * style: apply cargo fmt Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * clippy --------- Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com> Co-authored-by: pauldelucia <pauldelucia2@gmail.com> * feat: hint text next to withdrawal destination input in dev mode with advanced options and owner key * feat: implement asset lock creation and detail screens (#418) * feat: implement asset lock creation and detail screens * clippy fix * some fixes * fix: ui cleanup and backend fixes * fix: clippy * feat: tests * fix: remove identity index selector for top ups * fmt * refactor: use `AmountInput` component * fixes * fix: clippy --------- Co-authored-by: pauldelucia <pauldelucia2@gmail.com> * fix: dev mode wouldnt toggle in left panel when unchecked in settings if config load failed (#488) * fix: force Core RPC mode on app start if not in dev mode (#489) * fix: display WIF format for manually added private keys (#496) * fix: display WIF format for manually added private keys Externally loaded identities with manually added private keys now show both WIF and hex formats, matching the display for wallet-derived keys. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * clippy * fmt --------- Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com> Co-authored-by: pauldelucia <pauldelucia2@gmail.com> * fix: security level validation for add key screen (#494) * fix: auto-set security level based on key purpose in add key screen When selecting a key purpose, automatically set the appropriate security level and restrict available options: ENCRYPTION/DECRYPTION only allow MEDIUM, TRANSFER only allows CRITICAL, and AUTHENTICATION allows CRITICAL/HIGH/MEDIUM. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * fix: disable security level dropdown when only one option is valid Visually indicate to the user that the security level is fixed by the protocol for certain key purposes (ENCRYPTION, DECRYPTION, TRANSFER). Adds a hover tooltip explaining why the dropdown is disabled. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * fix: remove redundant `contract_bounds_enabled` check for `has_multiple_security_levels` bool * fmt --------- Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com> Co-authored-by: pauldelucia <pauldelucia2@gmail.com> * fix: identity registration default key was using wrong `ContractBounds` (#487) * fix: identity registration default key was using wrong ContractBounds * feat: add tests * fmt * doc: update key descriptions * feat: add coderabbit configuration file * fix: document actions were auto selecting master keys but shouldnt (#493) * fix: top up with QR was validating against wallet max balance (#492) * fix: view platform address popup showing wrong address format (#500) * fix: view platform address popup showing wrong address format * fix: use helper * feat: update sdk v3.0.1 hotfix.1 (#503) * chore: update dash-sdk to v3.0.1-hotfix.1 Updates dash-sdk to v3.0.1-hotfix.1 to match testnet. 
Breaking API changes addressed: - WalletBalance.unconfirmed and .total are now fields instead of methods - RequestSettings now requires max_decoding_message_size field Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * fmt --------- Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com> * fix: ensure dev mode is off and spv sync off for new users (#504) * fix: ensure dev mode is off and spv sync off for new users * fix tests * fix * fix: update platform address prefix from dashevo to evo (#505) * fix: update platform address prefix from dashevo to evo Update address detection and UI hints to use the new shorter platform address prefixes (evo1/tevo1) instead of the old format (dashevo1/tdashevo1) to match SDK v3.0.1 changes. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * fix: validate platform address network matches app network Add network validation when parsing Bech32m Platform addresses to ensure the address network prefix matches the app's configured network, showing a descriptive error on mismatch. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * fmt --------- Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com> * fix: platform address balance doubling after transfer and refresh (#502) * fix: platform address balance doubling after transfer and refresh * fix * feat: add fee multiplier caching and use it across all UI screens (#506) * feat: add fee multiplier caching and use it across all UI screens - Add fee_multiplier_permille support to PlatformFeeEstimator - Add fee multiplier caching in AppContext with getter/setter - Add fee_estimator() helper method on AppContext - Cache the fee multiplier when epoch info is fetched - Display note when fee multiplier cache is updated - Update all UI screens to use app_context.fee_estimator() Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * fmt * fix: apply fee multiplier to all estimation functions Update all estimate_* functions to apply the fee multiplier to the combined total (storage + processing/registration) rather than to individual components, ensuring consistent fee calculation across all state transition types per the Dash Platform spec. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * refactor: reuse fee_estimator instance in token screens Avoid duplicate construction and keep consistent multiplier snapshot within the frame by reusing the existing fee_estimator variable. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * refactor: use DEFAULT_FEE_MULTIPLIER_PERMILLE constant in AppContext Keep the AppContext default in sync with the estimator's canonical value instead of hardcoding 1000. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> --------- Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com> * fix: take from multiple platform addresses in simple send flow (#501) * fix: take from multiple platform addresses in simple send flow * address selection algorithm * fix * fix: fee payer index lookup * cleanup * fix: use PlatformFeeEstimator for address transfer fees - Replace custom fee calculation with PlatformFeeEstimator - Simplify allocation logic by calculating fee upfront - Use 20% safety buffer on fee estimates - Remove complex iterative allocation loop Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * fmt * Add fee deficit check in platform address allocation The fee payer must have enough remaining balance after their contribution to cover the transaction fee. 
This change calculates the fee deficit (if fee payer's remaining balance < estimated fee) and adds it to the shortfall, preventing impossible sends from being attempted. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * Use actual input count for fee estimation in error messages Changed estimate_platform_fee calls to use addresses_available (the actual number of available inputs) instead of MAX_PLATFORM_INPUTS. This provides more accurate fee estimates and max sendable amounts in the error messages when transactions exceed available balance. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * Filter destination address from preview allocation The preview allocation now parses the destination platform address and passes it to allocate_platform_addresses, ensuring the destination is never shown as an input source. This matches the behavior of the actual send logic. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * fmt * Filter destination from max calculation and improve warning message The max amount calculation now excludes the destination address, matching the allocation logic. This prevents showing an inflated max when the destination is one of your own addresses. Also improved the warning message to distinguish between exceeding the address limit vs insufficient balance (including fees). Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * fmt * feat: use cached fee multiplier in send_screen Update estimate_platform_fee and allocate_platform_addresses to accept a PlatformFeeEstimator parameter, allowing the send screen to use the cached network fee multiplier from app_context.fee_estimator(). Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * fix: address review comments in send_screen - Add early return for empty sorted_addresses in allocate_platform_addresses - Use safe string slicing in render_platform_source_breakdown to avoid panic - Remove duplicate doc comment Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> --------- Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com> * fix: prevent platform address balance doubling after internal updates (#507) * fix: prevent platform address balance doubling after internal updates Adds last_full_sync_balance column to track the balance checkpoint for terminal sync pre-population, separate from the current balance. Key changes: - Add DB migration (v26) for last_full_sync_balance column - Sync operations update last_full_sync_balance for next sync baseline - Internal updates (after transfers) preserve last_full_sync_balance - Terminal sync pre-populates with last_full_sync_balance, applies AddToCredits correctly without double-counting This fixes the scenario where: 1. Transfer completes, balance updated internally 2. App restarts, refresh triggers terminal sync 3. 
Same AddToCredits was being re-applied because the pre-population baseline wasn't distinguishing between sync and internal updates Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * docs: fix docblocks for is_sync_operation and last_full_sync_balance - Remove duplicate/obsolete is_full_sync wording from set_platform_address_info - Update last_full_sync_balance field doc to accurately describe when it's updated (during syncs) vs preserved (during internal updates) Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * fix: preserve last_full_sync_balance when removing duplicate address entries Search for last_full_sync_balance from any canonical-equivalent entry BEFORE removing duplicates, so the value isn't lost when the existing entry has a different Address representation. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> --------- Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com> * fix: fetch fresh nonces from platform for identity creation (#509) When creating an identity with Platform Address funding, the cached nonce could be stale (terminal-only sync only updates balance, not nonce). This caused "Invalid address nonce" errors. Now fetches fresh nonces from platform using AddressInfo::fetch_many() before identity creation. Note: The SDK's put_with_address_funding requires caller-provided nonces, unlike transfer_address_funds which handles nonces internally. This inconsistency should be addressed in the SDK. Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com> * fix: don't auto-generate platform address when opening receive dialog (#508) Use platform_addresses() which checks watched_addresses for derived Platform addresses, instead of only checking platform_address_info which only contains synced addresses with balances. A new address is now only generated if there are truly no Platform addresses derived for the wallet, not just because they haven't been synced yet. Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com> * fix: show N/A for UTXOs and Total Received columns on Platform addresses (#510) Platform addresses don't have UTXOs (they have credits on Platform layer) and we don't track historical received amounts for them. Showing 0 was misleading - N/A is more accurate. Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com> * fix: reserve fees when using max button for platform address identity funding (#511) * fix: reserve fees when using max button for platform address identity funding When clicking the max button to fund identity creation or top-up from a platform address, the full balance was used without reserving room for transaction fees, causing "Insufficient combined address balances" errors. Changes: - Subtract estimated fee from max amount in identity creation screen - Subtract estimated fee from max amount in identity top-up screen - Fix fee estimation to use identity_create_asset_lock_cost (200M credits) instead of address_funding_asset_lock_cost (50M credits) - Add storage-based fees to identity creation/top-up fee estimates - Add new estimate_identity_topup_from_addresses() for platform address top-ups - Add 20% safety buffer to account for fee variability - Show hint explaining fee reservation when max is used Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * docs: fix docstrings to correctly state 20% safety buffer The docstrings incorrectly stated 10% safety buffer while the code applies 20% via total / 5. 
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> --------- Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com> * Release SPV storage lock on shutdown (#517) * Fix dark mode text colors (#515) * Fix dark mode text and borders * style: apply cargo fmt formatting Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> --------- Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com> * fix: invalid min amount when transferring funds (#523) * fix: invalid min amount when transferring funds * chore: rabbit's feedback * chore: update bincode to 2.0.1, dash-sdk to v3.1-dev, and grovestark (#526) * chore: update bincode to 2.0.1, dash-sdk to v3.1-dev, and grovestark - Update bincode from 2.0.0-rc.3 to 2.0.1 - Update dash-sdk to latest v3.1-dev (98a0ebec) - Update grovestark to a70bdb1a - Fix Decode/BorrowDecode implementations for bincode 2.0.1 API changes Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * chore: migrate to using dashpay/grovestark --------- Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com> Co-authored-by: pasta <pasta@dashboost.org> * feat: use DAPI instead of Core RPC for historical asset lock transaction info (#528) Replace core_client.get_raw_transaction_info() calls with a new get_transaction_info_via_dapi() function that queries transaction status via DAPI gRPC. This removes the dependency on a local Core RPC node for checking whether asset lock transactions have been chainlocked, which is needed for identity registration, top-up, and platform address funding from asset locks. Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com> * fix: not enough balance to pay a fee in platform to core transfers (#513) * fix: fee estimation for AddressCreditWithdrawalTransition * chore: max button * refactor: remove redundant code * chore: better hint in platform-to-core * Refactor token creator helpers (#518) * refactor(tokens): simplify token creator helpers * style: cargo fmt * Simplify wallets UI helpers and send flow (#519) * Simplify wallets UI helpers and send flow * Fix wallets screen helper placement * Fix refresh_on_arrival borrow * Stabilize token UI tests config * Simplify wallet send address type helper * fix: handle RegisteredTokenContract result to clear token creation spinner (#529) The backend task returned RegisteredTokenContract on success but the tokens screen had no handler for it, causing the spinner to spin forever. Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com> * fix: build fails for windows target (#527) * fix: build fails for windows target * build: only create a release when tag is set * chore: remove redundant settings from release.yml * doc: windows requirements * fix: windows icon * build: change grovestark revision * chore: fmt * fix: don't open console window on windows * build: grovestark recent commit * fix: critical wallet bugs (GH#522, GH#476, GH#478, GH#85) (#534) * fix: auto-refresh wallet UTXOs on app startup (GH#522) On startup, bootstrap_loaded_wallets now spawns background tasks to refresh UTXOs from Core RPC for all HD and single-key wallets. This ensures balances are current without requiring the user to manually click Refresh. Only applies in RPC mode; SPV handles UTXOs via reconciliation. Task: 1.1a * fix: respect fee_deduct_from_output flag in platform address funding (GH#476) When the user selects "deduct from input", use an explicit output amount for the destination and a separate change address for the fee remainder, so the destination receives the exact requested amount. 
Task: 1.1b * fix: reserve estimated fees in wallet balance top-up max button (GH#478) Task: 1.1c * fix: prevent funding address reuse across identities (GH#85) Task: 1.1d * fix: add 5-minute timeout to asset lock proof wait loop (wallet-008) Task: 1.1e * fix: use wallet-controlled change address and epoch-aware fee estimator Use a fresh wallet receive address for platform funding change output instead of an ephemeral key-derived address, so surplus credits remain spendable. Also switch to epoch-aware fee estimator for accurate fee estimation when the network fee multiplier is above 1x. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: address critical review issues in wallet funding - Use try_lock() in tokio::select! timeout arm to avoid blocking the async runtime when another thread holds the mutex - Use change_address() (BIP44 internal path) instead of receive_address() for deriving change addresses, ensuring proper path separation - Replace .unwrap_or(0) with .ok_or() on BTreeMap index lookup to surface errors instead of silently targeting the wrong output Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: auto-refresh wallet UTXOs on asset lock proof timeout (RPC mode) When the 5-minute timeout fires, the asset lock tx has already been broadcast and UTXOs removed locally. Trigger a background wallet refresh in RPC mode so spent inputs are reconciled against chain state. SPV mode handles its own reconciliation. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * style: fix rustfmt formatting in fee estimator chain Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix: replace saturating_mul with checked_mul for duffs-to-credits conversion Overflow now returns an explicit error with diagnostic info instead of silently clamping to u64::MAX. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> --------- Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> * fix(tokens): restore token reorder assignment and distribution field checks (#535) * Extract dialog rendering into wallets_screen/dialogs.rs module (#539) Moved 4 dialog rendering functions and 10+ helper methods (~1151 lines) from wallets_screen/mod.rs into a new dialogs.rs module. Includes send, receive, fund platform, and private key dialogs plus their state structs. mod.rs reduced from 3824 to 2673 lines. Task: 3.2a * build: update all dependencies (#531) * fix: build fails for windows target * build: only create a release when tag is set * chore: remove redundant settings from release.yml * doc: windows requirements * fix: windows icon * build: change grovestark revision * chore: fmt * fix: don't open console window on windows * build: grovestark recent commit * build: update dependencies * chore: clippy * chore: fmt * build: update platform to most recent v3.1-dev * feat: add key exchange protocol and state transition signing Add YAPPR key exchange flow for DashPay, QR scanner improvements, state transition signing screen, and refactor UI helpers. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * feat: migrate SPV event handling to broadcast channels Replace orphaned SpvEvent handler with three new broadcast-based handlers (SyncEvent, WalletEvent, NetworkEvent) that subscribe to the actual event producers in dash-spv. This fixes wallet reconciliation never being triggered by SPV events. 
- Bump dash-sdk to fb055ec008 (picks up rust-dashcore 4d2a7018 with WalletEvent support) - Add spawn_sync_event_handler: triggers reconcile on BlockProcessed, ChainLockReceived, InstantLockReceived, SyncComplete - Add spawn_wallet_event_handler: triggers reconcile on all wallet events (TransactionReceived, BalanceUpdated) - Add spawn_network_event_handler: updates connected_peers count on PeersUpdated - Add connected_peers field to SpvManager and SpvStatusSnapshot - Fix reconcile channel race: register channel before starting SPV thread so handlers always capture a valid sender - Improve reconcile: track in-memory UTXOs, log summaries, guard against wiping transaction history Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * fix: refresh receive dialog balances from wallet on each frame The receive dialog cached address balances at open time and never updated them, so SPV balance changes were not reflected until the dialog was closed and reopened. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * refactor: remove YAPPR key exchange and state transition signing Move these features to the feat/yappr-key-exchange branch. This branch now focuses on SPV broadcast channel migration, dependency bumps, and reconciliation fixes only. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * feat: replace Core RPC with DAPI for asset lock operations in SPV mode Migrate transaction info queries, broadcasting, and finality detection away from direct Core RPC calls so that identity registration and top-up work in SPV-only mode. Key changes: - Add `get_transaction_info_via_dapi()` using DAPI GetTransaction - Add `broadcast_raw_transaction()` that routes via RPC or SPV - Add SPV finality listener forwarding instant/chain lock events - Make asset lock creation methods async for SPV broadcasting - Use longer proof timeouts (5min) in SPV mode vs 2min for RPC - Update address balances after asset lock UTXO consumption - Add BIP32 derivation path detection for SPV address mapping - Bump platform SDK rev to bcb41de347 Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * style: apply cargo fmt formatting Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * fix: handle ignored Results and avoid panic on poisoned lock - Propagate update_address_balance errors via ? in create_asset_lock.rs - Replace .expect() with .map_err()? for core_client lock in broadcast_raw_transaction - Log try_send failures for finality events in SPV manager instead of silently dropping Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * fix: clean up finality map on success and remove poisoned lock panic Remove stale entries from transactions_waiting_for_finality on the success path of asset lock proof timeout loops, preventing unbounded map growth. Applied to both top_up_identity.rs and register_identity.rs. Also replace .expect() on poisoned RwLock read of core_client with graceful error propagation via map_err. 
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com> * refactor: address PR #525 review feedback from @lklimek - Rename get_transaction_info_via_dapi to get_transaction_info - Extract wait_for_asset_lock_proof helper on AppContext (4 copies → 1) - Extract recalculate_affected_address_balances and recalculate_address_balance helpers on Wallet (7 copies → helper calls) - Fix unused height variable suppression in ChainLock handler - Document single-subscriber semantics on register_finality_channel and register_reconcile_channel - Fix pre-existing collapsible_if clippy warnings in determine_sync_stage Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * Refactor masternode list diff screen state (#520) * refactor: split masternode list diff screen state * fix: avoid borrows in diff screen renders * Initial plan * refactor: split context.rs into focused submodules (#543) Extract the monolithic 1795-line context.rs into a context/ module with 5 focused submodules, each grouping related AppContext methods: - mod.rs (543 lines): struct definition, constructor, config/SDK methods - wallet_lifecycle.rs (572 lines): SPV/wallet management and reconciliation - identity_db.rs (205 lines): identity and DPNS database operations - contract_token_db.rs (198 lines): contract and token database operations - settings_db.rs (84 lines): settings cache and persistence - transaction_processing.rs (231 lines): transaction finality handling Pure extraction with no logic changes. Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> * refactor: extract address table, asset locks, and single-key view from wallets_screen (#542) Continue the wallets_screen/mod.rs refactoring started in #539 (dialogs extraction). Extract three more focused submodules: - address_table.rs: SortColumn/SortOrder enums, AddressData struct, sorting logic, categorize_path, and the full address table renderer - asset_locks.rs: asset lock table rendering and fund dialog triggers - single_key_view.rs: single-key wallet detail view with UTXO display Reduces mod.rs from 2662 to 1947 lines (-27%). Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> * fix(startup): harden cookie parsing, config save, and logger init (#537) * fix(startup): harden cookie parsing, config save, and logger init * style(logging): apply rustfmt in logger init * fix(startup): use sync_all, fix Windows rename, and remove credential leak - Use with_file_name instead of with_extension for temp file path clarity - Replace flush() with sync_all() for crash-safe atomic config writes - Handle Windows rename semantics by removing target before rename - Remove raw cookie value from error message to prevent credential leakage Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * style: apply rustfmt Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * fix(config): use NamedTempFile::persist() for atomic config save on Windows Replace manual remove-then-rename sequence with tempfile::NamedTempFile which uses MoveFileEx(MOVEFILE_REPLACE_EXISTING) on Windows for atomic file replacement. 
This fixes two issues: - Config loss if process crashes between remove_file and rename - Open file handle on tmp file preventing rename on Windows Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> --------- Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> * fix: propagate errors from wallet balance recalculation instead of silently dropping Replace `let _ = wallet.recalculate_*()` with `?` in 5 call sites so database errors during address-balance recalculation are surfaced instead of silently swallowed. This matches the existing pattern in create_asset_lock.rs which already propagates these errors. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * chore: update dash-sdk to platform rev 060515987a Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * chore: cargo fmt * fix: connection status not clear (#532) * fix: connection status not updated * chore: rabbit feedback * chore: typo + network changes * chore: apply feedback * chore: rabbit review * chore: rabbit feedback * chore: rabbitting * chore: fmt --------- Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com> * fix: error when SPV sync height unknown instead of defaulting to 0 Passing height=0 into transaction construction when SPV hasn't synced yet can lead to incorrect locktime/maturity behavior. Return an error instead so the caller knows to wait for sync progress. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com> * Merge pull request #548 from dashpay/chore/improve-claude-md chore: improve CLAUDE.md with architecture documentation * docs: use v1.0-dev for diffs etc agents (#551) * fix: remove dead code with guaranteed mutex deadlock (#559) * Initial plan * Remove unused insert_remote_identity_if_not_exists function with deadlock bug Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com> --------- Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com> Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com> * feat: add "Claim all" checkbox and complete merge with v1.0-dev - Added claim_all boolean field to ClaimTokens task - Updated handler to call claim_all_tokens or claim_token based on flag - Added claim_all checkbox to UI (enabled by default) - Added claimed_amount field to track tokens claimed - Updated display_task_result to save and display claimed amount - Updated success screen to show amount of tokens claimed - Wire checkbox state to ClaimTokens backend task Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com> --------- Co-authored-by: Paul DeLucia <69597248+pauldelucia@users.noreply.github.com> Co-authored-by: lklimek <842586+lklimek@users.noreply.github.com> Co-authored-by: QuantumExplorer <quantum@dash.org> Co-authored-by: pauldelucia <pauldelucia2@gmail.com> Co-authored-by: thephez <thephez@users.noreply.github.com> Co-authored-by: Claude <noreply@anthropic.com> Co-authored-by: Ivan Shumkov <ivanshumkov@gmail.com> Co-authored-by: PastaPastaPasta <6443210+PastaPastaPasta@users.noreply.github.com> Co-authored-by: pasta <pasta@dashboost.org> Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/backend_task/tokens/claim_tokens.rs (1)
140-184: ⚠️ Potential issue | 🟡 Minor
Add warning logs when document parsing fails or token_id is unavailable.
Lines 140–184 silently return claimed_amount = 0 when data_contract.token_id(token_position) is None or the claim result document lacks claimerId/amount fields. Since claim_all_tokens breaks its loop on TokensClaimed(0) (line 52), this causes on-chain successful claims to be silently discarded without retry or user notification.
Log a warning when the token_id is absent or expected document fields are missing, so claim successes aren't lost to schema changes or unexpected document structures.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend_task/tokens/claim_tokens.rs` around lines 140 - 184, The code currently drops to returning TokensClaimed(0) silently if data_contract.token_id(token_position) is None or the claim document lacks expected fields; add warning logs to surface these cases: log a warning when data_contract.token_id(token_position) returns None (include token_position and any data_contract identifier/context available), and inside both ClaimResult::Document and ClaimResult::GroupActionWithDocument branches, when the (claimerId, amount) pattern match fails or the types are not Value::Identifier/Value::U64, emit a tracing::warn with the document and token_position; also log a warning if Identifier::from_bytes returns Err (include the raw bytes and token_id) before continuing, keeping the existing tracing::error on insert_token_identity_balance failures. This uses the existing symbols token_id, token_position, ClaimResult::Document, ClaimResult::GroupActionWithDocument, Identifier::from_bytes, insert_token_identity_balance and ensures claim_all_tokens won't silently drop successful claims without a warning.
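A sketch of the guard this prompt asks for, using stand-in types named after the prompt (Value, token_position); the real parsing in claim_tokens.rs operates on platform document values, so the signatures here are assumptions.

```rust
use tracing::warn;

// Stand-ins for the platform value types named in the review comment.
#[allow(dead_code)] // variants are only matched in this standalone sketch
enum Value {
    Identifier([u8; 32]),
    U64(u64),
}

// Surface the failure modes the reviewer lists instead of silently
// treating them as a zero claim.
fn parse_claimed_amount(
    token_id: Option<[u8; 32]>,
    token_position: u16,
    claimer: Option<&Value>,
    amount: Option<&Value>,
) -> u64 {
    let Some(token_id) = token_id else {
        warn!(token_position, "no token_id at this position; treating claim as 0");
        return 0;
    };
    match (claimer, amount) {
        (Some(Value::Identifier(_)), Some(Value::U64(n))) => *n,
        _ => {
            warn!(?token_id, token_position, "claim document missing claimerId/amount fields");
            0
        }
    }
}
```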
🧹 Nitpick comments (2)
src/database/identities.rs (1)
139-144: TOCTOU: check-then-insert without a single atomic statement.The SELECT COUNT and INSERT are two separate statements. Even after fixing the deadlock (using
conn.execute), a concurrent thread could insert the same identity between the check and the insert, causing a constraint violation or duplicate. Consider usingINSERT OR IGNORE(orINSERT ... ON CONFLICT DO NOTHING) to make this atomic, which also simplifies the code by removing the SELECT entirely.♻️ Proposed simplification using INSERT OR IGNORE
```diff
-        // Check if the identity already exists
-        let conn = self.conn.lock().unwrap();
-        let mut stmt =
-            conn.prepare("SELECT COUNT(*) FROM identity WHERE id = ? AND network = ?")?;
-        let count: i64 = stmt.query_row(params![id, network], |row| row.get(0))?;
-
-        // If the identity doesn't exist, insert it
-        if count == 0 {
-            self.execute(
-                "INSERT INTO identity (id, data, is_local, alias, identity_type, network)
-                 VALUES (?, ?, 0, ?, ?, ?)",
-                params![id, data, alias, identity_type, network],
-            )?;
-        }
+        let conn = self.conn.lock().unwrap();
+        conn.execute(
+            "INSERT OR IGNORE INTO identity (id, data, is_local, alias, identity_type, network)
+             VALUES (?, ?, 0, ?, ?, ?)",
+            params![id, data, alias, identity_type, network],
+        )?;
```

This also eliminates the deadlock issue from the previous comment. As per coding guidelines, SQLite uses a single connection wrapped in `Mutex<Connection>`, so the `INSERT OR IGNORE` approach is safe and idiomatic.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/database/identities.rs` around lines 139 - 144, The current logic does a separate SELECT COUNT followed by self.execute INSERT which is TOCTOU-prone; replace the check-then-insert with a single atomic statement such as "INSERT OR IGNORE" (or "INSERT ... ON CONFLICT DO NOTHING") so the insert is no-op if the identity exists, remove the SELECT COUNT path entirely, and handle the execute result from the INSERT OR IGNORE in the same function (use the existing self.execute call site in identities.rs where the insert currently occurs) to avoid deadlocks and duplicate constraint errors.

src/backend_task/tokens/claim_tokens.rs (1)
38-80: Unbounded loop — consider adding a safety limit.

The `loop` relies on the platform eventually returning zero or a consensus error to terminate. If something unexpected happens (e.g., a bug where the claimed amount never reaches zero, or transient network issues that cause partial success states), this could loop indefinitely. A max-iteration safety guard would improve robustness.

💡 Suggested safety guard
```diff
     let mut total_claimed = 0;
+    const MAX_CLAIM_ITERATIONS: usize = 10_000;
+    let mut iterations = 0;
     loop {
+        iterations += 1;
+        if iterations > MAX_CLAIM_ITERATIONS {
+            return Err(format!(
+                "Exceeded maximum claim iterations ({}); claimed so far: {}",
+                MAX_CLAIM_ITERATIONS, total_claimed
+            ));
+        }
         let result = self
             .claim_token(
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/backend_task/tokens/claim_tokens.rs` around lines 38 - 80, Add a bounded safety guard to the unbounded loop that repeatedly calls self.claim_token: introduce a max attempts constant (e.g., MAX_CLAIM_ITERATIONS) or an attempts counter and increment it each iteration, break and return an Err if the limit is exceeded (including total_claimed in the error), and optionally back off briefly between retries; update the loop surrounding self.claim_token(...) that inspects BackendTaskSuccessResult::TokensClaimed and the consensus error path to use this bounded loop (tracking attempts alongside total_claimed) so the function cannot loop indefinitely.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/backend_task/mod.rs`:
- Around line 221-222: Remove the dead enum variant ClaimedTokens(FeeResult)
from the backend task enum definition (leave TokensClaimed(TokenAmount) as-is),
and delete any now-unused imports or type references tied to FeeResult; ensure
no pattern matches or constructors reference ClaimedTokens (search for
ClaimedTokens and remove or refactor them) and run tests/build to confirm
nothing else depends on it.
In `@src/database/identities.rs`:
- Around line 133-144: The code holds self.conn.lock() (conn guard) and then
calls self.execute(...), which deadlocks because self.execute also locks
self.conn; replace the call to self.execute inside the guarded section with a
direct call on the local connection (e.g., use conn.execute(...) or
conn.prepare(...).execute(...) with the same SQL and params![id, data, alias,
identity_type, network]) so the insert runs using the existing guard instead of
attempting to re-lock self.conn; ensure you remove/self-contained the
transaction on the conn guard and keep parameter order identical.
In `@src/ui/tokens/claim_tokens_screen.rs`:
- Around line 209-223: The code calls selected_key.clone().expect("No key
selected") inside the TokenTask::ClaimTokens construction which can panic if
self.selected_key is None; before building the action (or before user
confirmation handler) check self.selected_key and handle the None case instead
of unwrapping — e.g., disable the Confirm path when self.selected_key.is_none(),
or return/emit a user-facing error/log and do not set action; specifically
change the logic around selected_key.clone().expect(...) in the ClaimTokens
creation (and the surrounding confirmation handler that assigns action) to
early-return or set an error state when selected_key is None so no panic occurs.
- Around line 247-251: The displayed claimed_amount is in base units and must be
adjusted by the token decimals before rendering; update the UI code that
currently labels "Amount claimed: {} tokens" to convert self.claimed_amount
(u64) to a decimal token amount using the token decimal info from
self.token_configuration.conventions().decimals(), reusing the same
conversion/formatting logic used by token_decimal_multiplier() in
direct_token_purchase_screen.rs (or extract that helper into a shared function),
and then show the formatted human-readable amount (e.g., dividing by 10^decimals
and formatting to a suitable precision) instead of the raw u64 value.
---
Outside diff comments:
In `@src/backend_task/tokens/claim_tokens.rs`:
- Around line 140-184: The code currently drops to returning TokensClaimed(0)
silently if data_contract.token_id(token_position) is None or the claim document
lacks expected fields; add warning logs to surface these cases: log a warning
when data_contract.token_id(token_position) returns None (include token_position
and any data_contract identifier/context available), and inside both
ClaimResult::Document and ClaimResult::GroupActionWithDocument branches, when
the (claimerId, amount) pattern match fails or the types are not
Value::Identifier/Value::U64, emit a tracing::warn with the document and
token_position; also log a warning if Identifier::from_bytes returns Err
(include the raw bytes and token_id) before continuing, keeping the existing
tracing::error on insert_token_identity_balance failures. This uses the existing
symbols token_id, token_position, ClaimResult::Document,
ClaimResult::GroupActionWithDocument, Identifier::from_bytes,
insert_token_identity_balance and ensures claim_all_tokens won't silently drop
successful claims without a warning.
---
Nitpick comments:
In `@src/backend_task/tokens/claim_tokens.rs`:
- Around line 38-80: Add a bounded safety guard to the unbounded loop that
repeatedly calls self.claim_token: introduce a max attempts constant (e.g.,
MAX_CLAIM_ITERATIONS) or an attempts counter and increment it each iteration,
break and return an Err if the limit is exceeded (including total_claimed in the
error), and optionally back off briefly between retries; update the loop
surrounding self.claim_token(...) that inspects
BackendTaskSuccessResult::TokensClaimed and the consensus error path to use this
bounded loop (tracking attempts alongside total_claimed) so the function cannot
loop indefinitely.
In `@src/database/identities.rs`:
- Around line 139-144: The current logic does a separate SELECT COUNT followed
by self.execute INSERT which is TOCTOU-prone; replace the check-then-insert with
a single atomic statement such as "INSERT OR IGNORE" (or "INSERT ... ON CONFLICT
DO NOTHING") so the insert is no-op if the identity exists, remove the SELECT
COUNT path entirely, and handle the execute result from the INSERT OR IGNORE in
the same function (use the existing self.execute call site in identities.rs
where the insert currently occurs) to avoid deadlocks and duplicate constraint
errors.
I think I'd rather we just claim the max amount, and show to the user they may need to claim again? It seems a bit strange and potentially error-prone.
thepastaclaw
left a comment
Code Review
Repeated token claiming loop is well-motivated but has an unbounded loop with no iteration limit, and the fee display shows single-claim cost even when claim_all may submit many transitions. Both agents converge on the infinite loop risk.
Reviewed commit: 1d3d035
🔴 1 blocking | 🟡 1 suggestion(s)
🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/backend_task/tokens/claim_tokens.rs`:
- [BLOCKING] lines 38-80: Unbounded loop in claim_all_tokens — no iteration limit
The `loop` in `claim_all_tokens` has no upper bound. If the platform has a bug where claims always return a positive amount, or a transient issue prevents the termination signal, this loop runs forever — blocking the tokio task, burning credits on fees, and leaving the UI stuck on 'Claiming...' indefinitely.
Add a `const MAX_CLAIM_ITERATIONS: usize = 1000;` safety cap and break with partial success after hitting it.
In `src/ui/tokens/claim_tokens_screen.rs`:
- [SUGGESTION] line 563: Claim-all mode still shows fee for only one claim
When `claim_all` is enabled, the backend may broadcast multiple claim transitions, but the screen displays `estimate_document_batch(1)` and the confirmation text doesn't warn about repeated fees. Users authorize what looks like a single cheap claim but may pay that fee multiple times.
```rust
    });
    ui.add_space(10.0);

    // Fee estimation display
```
🟡 Suggestion: Claim-all mode still shows fee for only one claim
When claim_all is enabled, the backend may broadcast multiple claim transitions, but the screen displays estimate_document_batch(1) and the confirmation text doesn't warn about repeated fees. Users authorize what looks like a single cheap claim but may pay that fee multiple times.
source: ['codex-general']
🤖 Fix this with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/ui/tokens/claim_tokens_screen.rs`:
- [SUGGESTION] line 563: Claim-all mode still shows fee for only one claim
When `claim_all` is enabled, the backend may broadcast multiple claim transitions, but the screen displays `estimate_document_batch(1)` and the confirmation text doesn't warn about repeated fees. Users authorize what looks like a single cheap claim but may pay that fee multiple times.
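One possible shape for the relabeled fee line, sketched in egui style; `self.claim_all` matches the field this PR adds, while `estimated_fee` and the label copy are assumptions for illustration:

```rust
// Sketch only: `estimated_fee` is a hypothetical local; label copy is illustrative.
let fee_label = if self.claim_all {
    "Estimated fee per claim:"
} else {
    "Estimated fee:"
};
ui.label(format!("{} {} credits", fee_label, estimated_fee));
if self.claim_all {
    ui.label("Claim all may submit several transactions, each charged this fee.");
}
```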
…dc8011

# Conflicts:
#	src/backend_task/tokens/claim_tokens.rs
#	src/backend_task/tokens/mod.rs
#	src/ui/tokens/claim_tokens_screen.rs
🔍 Review in progress — actively reviewing now (commit 5032cfd)
…SERT OR IGNORE

Previously the function locked `self.conn` to run a SELECT COUNT, then called `self.execute()` which locks the same mutex again — a guaranteed deadlock on the std `Mutex` (same thread, same lock) as soon as any caller invoked it.

Replace the check-then-insert with a single `INSERT OR IGNORE`:

- acquires the connection mutex exactly once, removing the deadlock,
- closes the TOCTOU window between the SELECT and the INSERT,
- simplifies the code.

Reported by CodeRabbit on PR #421.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
`BackendTaskSuccessResult::TokensClaimed(TokenAmount)` was added in an earlier iteration but never constructed anywhere — the entire claim flow reports results through `ClaimedTokens(FeeResult)`. Drop the dead variant; the `TokenAmount` import stays (used by `TokenBalances` above).

Reported by CodeRabbit on PR #421.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
`claim_all_tokens` looped until the platform reported `NoTokensToClaim`. A misbehaving node or an SDK bug that keeps returning positive-amount claim results would have looped forever — blocking the tokio task, burning credits, and leaving the UI stuck on "Claiming...".

Cap the loop at 1000 iterations (well above any legitimate backlog given the 128-cycle / 32,767-cycle per-call limits). When the cap is hit, log a warning and return the fees accumulated so far as a success; the user can press Claim again if more tokens remain.

Reported by thepastaclaw and CodeRabbit on PR #421.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
With Claim all enabled the backend repeats the claim until the distribution is exhausted, potentially broadcasting many state transitions. The screen used to show a single-claim fee estimate and a generic confirmation message, so users thought they were authorising one cheap claim when several could actually be submitted.

- Relabel the fee line to "Estimated fee per claim:" when Claim all is on and add a secondary line explaining that multiple transactions may be submitted, each charged that fee.
- Update the confirmation dialog message to spell out the same thing.

Reported by thepastaclaw on PR #421.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
This helper was introduced with `#[allow(dead_code)]` as speculative support for caching remote identities, but no code path calls it. It is unrelated to the "Claim all" feature on this branch, carries non-obvious semantics (empty-string `identity_type` when `qualified_identity` is `None`), and its previous implementation also carried a deadlock bug.

Delete it rather than keep dead code around. If a future change needs this capability, we can add it back with a real caller and tests in its own PR.

Supersedes commit 8e839f4 which fixed a deadlock inside this function: with the function removed the deadlock is moot.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Pull request overview
Copilot reviewed 6 out of 6 changed files in this pull request and generated 7 comments.
```rust
        }
    }

    let total_fee = PlatformFeeEstimator::new().estimate_document_batch(iterations);
```
iterations is explicitly typed as usize and then passed to estimate_document_batch(iterations). If estimate_document_batch expects a fixed-width integer (commonly u16/u32 for “count”), this will fail to compile because Rust won’t implicitly convert from usize. Use the same type that estimate_document_batch expects (or convert explicitly with a checked/limited cast) to avoid a compile error and potential truncation.
```diff
-let total_fee = PlatformFeeEstimator::new().estimate_document_batch(iterations);
+let total_fee = PlatformFeeEstimator::new().estimate_document_batch(
+    iterations
+        .try_into()
+        .expect("claim iteration count exceeds supported batch size"),
+);
```
```rust
/// Calls [`Self::claim_tokens`] in a loop, accumulating fees across
/// iterations. Stops cleanly when the platform reports
/// [`TaskError::NoTokensToClaim`]. If no iteration succeeded (the very
/// first call reported nothing to claim), the error is propagated so the
/// user gets a proper "nothing to claim" banner instead of a silent
/// success.
```
The doc says this “accumulat[es] fees across iterations,” but the implementation doesn’t accumulate returned fees from each successful claim_tokens call; it discards per-iteration fee results and re-estimates once at the end based on iterations. Either update the documentation to describe “re-estimates total fee based on number of iterations,” or change the implementation to actually aggregate the fee results returned by claim_tokens.
```diff
-/// Calls [`Self::claim_tokens`] in a loop, accumulating fees across
-/// iterations. Stops cleanly when the platform reports
-/// [`TaskError::NoTokensToClaim`]. If no iteration succeeded (the very
-/// first call reported nothing to claim), the error is propagated so the
-/// user gets a proper "nothing to claim" banner instead of a silent
-/// success.
+/// Calls [`Self::claim_tokens`] in a loop and counts successful claim
+/// iterations. When the platform reports [`TaskError::NoTokensToClaim`],
+/// it stops cleanly and re-estimates the total fee from the number of
+/// successful iterations. If no iteration succeeded (the very first call
+/// reported nothing to claim), the error is propagated so the user gets a
+/// proper "nothing to claim" banner instead of a silent success.
```
```rust
// Return success with fee result
use crate::backend_task::FeeResult;
use crate::model::fee_estimation::PlatformFeeEstimator;
let estimated_fee = PlatformFeeEstimator::new().estimate_document_batch(1);
let fee_result = FeeResult::new(estimated_fee, estimated_fee);
Ok(BackendTaskSuccessResult::ClaimedTokens(fee_result))
```
The PR description and metadata mention returning/reporting the total number of tokens claimed (and introduces BackendTaskSuccessResult::TokensClaimed(TokenAmount)), but claim_tokens still returns ClaimedTokens(FeeResult) and does not report claimed amount. If the intended user-visible outcome is “X tokens claimed,” this function (and claim_all_tokens) should return the amount (likely via TokensClaimed) or the PR description should be updated to reflect that only fee info is returned.
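If the amount-bearing result is the intended outcome, the return site might look like this sketch; `total_claimed` is a hypothetical accumulator and the variant name comes from the PR metadata rather than the current code:

```rust
// Sketch only: accumulate the per-iteration claimed amount, then report it.
let mut total_claimed: TokenAmount = 0;
// ... inside the loop: total_claimed += claimed_amount; ...
Ok(BackendTaskSuccessResult::TokensClaimed(total_claimed))
```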
```rust
    selected_wallet,
    wallet_open_attempted: false,
    wallet_unlock_popup: WalletUnlockPopup::new(),
    claim_all: true,
```
Defaulting claim_all to true changes the default behavior from single claim to potentially multiple sequential claims, which can increase time-to-complete and total fees/requests. If this is not intended as a breaking UX change, consider defaulting to false (preserving current behavior) or persisting the user’s last choice so they opt into claim-all explicitly.
```diff
-    claim_all: true,
+    claim_all: false,
```
thepastaclaw
left a comment
Code Review
Claim-all loop logic is sound but has three notable issues: (1) the total-fee calculation uses a default PlatformFeeEstimator and silently diverges from the per-claim figure the UI shows when the network multiplier isn't 1x, (2) reaching the 1,000-iteration safety cap returns plain ClaimedTokens success with no signal to the UI that tokens may still remain, and (3) there are no unit tests for the new termination, first-iteration error, and cap-break branches. A couple of smaller architectural nitpicks (dead Ok(other) arm, dual-role NoTokensToClaim, claim_all defaulting to true) are also worth noting.
Reviewed commit: 5032cfd
🟡 3 suggestion(s) | 💬 3 nitpick(s)
🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/backend_task/tokens/claim_tokens.rs`:
- [SUGGESTION] lines 180-183: Total fee ignores the cached network fee multiplier
`PlatformFeeEstimator::new()` uses the default 1x multiplier, while `AppContext::fee_estimator()` (src/context/mod.rs:528) wraps the cached `fee_multiplier_permille` that is updated from epoch info. The claim screen already uses `self.app_context.fee_estimator()` for the per-claim preview (src/ui/tokens/claim_tokens_screen.rs:576), so whenever the active multiplier isn't 1x the success total returned here will systematically disagree with what the user saw moments earlier. Switch to `self.fee_estimator()` so the aggregated fee respects the active multiplier. (The pre-existing single-claim path at line 110 has the same bug and should be updated together.)
- [SUGGESTION] lines 145-184: Hitting the iteration cap is indistinguishable from full success in the UI
When `iterations >= MAX_CLAIM_ALL_ITERATIONS` the function logs `warn!` and then falls through to `Ok(BackendTaskSuccessResult::ClaimedTokens(...))`. The banner the user sees is identical to a normal completion (`Claimed Successfully!`), so if the cap ever fires — e.g. a misbehaving node or SDK bug that never surfaces `NoTokensToClaim` — the user is told "Claim all" finished even though rewards may remain. The doc comment says the user can press Claim again, but nothing in the returned result or the UI indicates they should. Prefer adding a dedicated success variant (e.g. `ClaimedTokensPartial { iterations }`) and surfacing a warning banner at the call site so the partial-completion case is observable in the UI, not only in the log.
- [SUGGESTION] lines 132-184: `claim_all_tokens` has no unit-test coverage for its new branches
The loop introduces four load-bearing branches: first-iteration `NoTokensToClaim` is propagated as an error; later `NoTokensToClaim` breaks with an aggregated fee; any other error short-circuits; the 1,000-iteration cap breaks and still reports success. None of these are covered by an inline `#[test]` in the file, and a grep of `tests/` shows no `claim_tokens` / `claim_all_tokens` coverage (`tests/backend-e2e/token_tasks.rs` exercises other token operations but not claim). CLAUDE.md calls out that new backend tasks should have unit-test coverage, and this is exactly the kind of control flow where regressions are easy to miss. Consider extracting the termination logic behind an injected `claim_once` closure (or a small trait) so the three exit paths can be unit-tested without standing up the real SDK and DB.
```rust
let total_fee = PlatformFeeEstimator::new().estimate_document_batch(iterations);
Ok(BackendTaskSuccessResult::ClaimedTokens(FeeResult::new(
    total_fee, total_fee,
)))
```
🟡 Suggestion: Total fee ignores the cached network fee multiplier
PlatformFeeEstimator::new() uses the default 1x multiplier, while AppContext::fee_estimator() (src/context/mod.rs:528) wraps the cached fee_multiplier_permille that is updated from epoch info. The claim screen already uses self.app_context.fee_estimator() for the per-claim preview (src/ui/tokens/claim_tokens_screen.rs:576), so whenever the active multiplier isn't 1x the success total returned here will systematically disagree with what the user saw moments earlier. Switch to self.fee_estimator() so the aggregated fee respects the active multiplier. (The pre-existing single-claim path at line 110 has the same bug and should be updated together.)
💡 Suggested change

```diff
-let total_fee = PlatformFeeEstimator::new().estimate_document_batch(iterations);
+let total_fee = self.fee_estimator().estimate_document_batch(iterations);
 Ok(BackendTaskSuccessResult::ClaimedTokens(FeeResult::new(
     total_fee, total_fee,
 )))
```
source: ['claude']
🤖 Fix this with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/backend_task/tokens/claim_tokens.rs`:
- [SUGGESTION] lines 180-183: Total fee ignores the cached network fee multiplier
`PlatformFeeEstimator::new()` uses the default 1x multiplier, while `AppContext::fee_estimator()` (src/context/mod.rs:528) wraps the cached `fee_multiplier_permille` that is updated from epoch info. The claim screen already uses `self.app_context.fee_estimator()` for the per-claim preview (src/ui/tokens/claim_tokens_screen.rs:576), so whenever the active multiplier isn't 1x the success total returned here will systematically disagree with what the user saw moments earlier. Switch to `self.fee_estimator()` so the aggregated fee respects the active multiplier. (The pre-existing single-claim path at line 110 has the same bug and should be updated together.)
```rust
            if iterations >= MAX_CLAIM_ALL_ITERATIONS {
                tracing::warn!(
                    iterations,
                    "claim_all_tokens hit iteration cap; returning partial success so the user can retry"
                );
                break;
            }
            let outcome = self
                .claim_tokens(
                    data_contract.clone(),
                    token_position,
                    actor_identity,
                    distribution_type,
                    signing_key.clone(),
                    public_note.clone(),
                    sdk,
                )
                .await;

            match outcome {
                Ok(BackendTaskSuccessResult::ClaimedTokens(_)) => {
                    iterations = iterations.saturating_add(1);
                }
                // Any other success variant is unexpected — propagate verbatim.
                Ok(other) => return Ok(other),
                Err(TaskError::NoTokensToClaim) => {
                    if iterations == 0 {
                        return Err(TaskError::NoTokensToClaim);
                    }
                    break;
                }
                Err(e) => return Err(e),
            }
        }

        let total_fee = PlatformFeeEstimator::new().estimate_document_batch(iterations);
        Ok(BackendTaskSuccessResult::ClaimedTokens(FeeResult::new(
            total_fee, total_fee,
        )))
    }
```
🟡 Suggestion: Hitting the iteration cap is indistinguishable from full success in the UI
When iterations >= MAX_CLAIM_ALL_ITERATIONS the function logs warn! and then falls through to Ok(BackendTaskSuccessResult::ClaimedTokens(...)). The banner the user sees is identical to a normal completion (Claimed Successfully!), so if the cap ever fires — e.g. a misbehaving node or SDK bug that never surfaces NoTokensToClaim — the user is told "Claim all" finished even though rewards may remain. The doc comment says the user can press Claim again, but nothing in the returned result or the UI indicates they should. Prefer adding a dedicated success variant (e.g. ClaimedTokensPartial { iterations }) and surfacing a warning banner at the call site so the partial-completion case is observable in the UI, not only in the log.
source: ['claude', 'codex']
🤖 Fix this with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/backend_task/tokens/claim_tokens.rs`:
- [SUGGESTION] lines 145-184: Hitting the iteration cap is indistinguishable from full success in the UI
When `iterations >= MAX_CLAIM_ALL_ITERATIONS` the function logs `warn!` and then falls through to `Ok(BackendTaskSuccessResult::ClaimedTokens(...))`. The banner the user sees is identical to a normal completion (`Claimed Successfully!`), so if the cap ever fires — e.g. a misbehaving node or SDK bug that never surfaces `NoTokensToClaim` — the user is told "Claim all" finished even though rewards may remain. The doc comment says the user can press Claim again, but nothing in the returned result or the UI indicates they should. Prefer adding a dedicated success variant (e.g. `ClaimedTokensPartial { iterations }`) and surfacing a warning banner at the call site so the partial-completion case is observable in the UI, not only in the log.
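One possible shape for the dedicated variant, with the name and fields taken from the suggestion rather than the codebase:

```rust
// Hypothetical variant; nothing below exists in the current enum.
pub enum BackendTaskSuccessResult {
    // ... existing variants ...
    ClaimedTokens(FeeResult),
    /// Claim-all stopped at the iteration cap; more rewards may remain.
    ClaimedTokensPartial { iterations: usize, fee: FeeResult },
}
```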
```rust
    #[allow(clippy::too_many_arguments)]
    pub async fn claim_all_tokens(
        &self,
        data_contract: Arc<DataContract>,
        token_position: u16,
        actor_identity: &QualifiedIdentity,
        distribution_type: TokenDistributionType,
        signing_key: IdentityPublicKey,
        public_note: Option<String>,
        sdk: &Sdk,
    ) -> Result<BackendTaskSuccessResult, TaskError> {
        let mut iterations: usize = 0;
        loop {
            if iterations >= MAX_CLAIM_ALL_ITERATIONS {
                tracing::warn!(
                    iterations,
                    "claim_all_tokens hit iteration cap; returning partial success so the user can retry"
                );
                break;
            }
            let outcome = self
                .claim_tokens(
                    data_contract.clone(),
                    token_position,
                    actor_identity,
                    distribution_type,
                    signing_key.clone(),
                    public_note.clone(),
                    sdk,
                )
                .await;

            match outcome {
                Ok(BackendTaskSuccessResult::ClaimedTokens(_)) => {
                    iterations = iterations.saturating_add(1);
                }
                // Any other success variant is unexpected — propagate verbatim.
                Ok(other) => return Ok(other),
                Err(TaskError::NoTokensToClaim) => {
                    if iterations == 0 {
                        return Err(TaskError::NoTokensToClaim);
                    }
                    break;
                }
                Err(e) => return Err(e),
            }
        }

        let total_fee = PlatformFeeEstimator::new().estimate_document_batch(iterations);
        Ok(BackendTaskSuccessResult::ClaimedTokens(FeeResult::new(
            total_fee, total_fee,
        )))
    }
```
🟡 Suggestion: claim_all_tokens has no unit-test coverage for its new branches
The loop introduces four load-bearing branches: first-iteration NoTokensToClaim is propagated as an error; later NoTokensToClaim breaks with an aggregated fee; any other error short-circuits; the 1,000-iteration cap breaks and still reports success. None of these are covered by an inline #[test] in the file, and a grep of tests/ shows no claim_tokens / claim_all_tokens coverage (tests/backend-e2e/token_tasks.rs exercises other token operations but not claim). CLAUDE.md calls out that new backend tasks should have unit-test coverage, and this is exactly the kind of control flow where regressions are easy to miss. Consider extracting the termination logic behind an injected claim_once closure (or a small trait) so the three exit paths can be unit-tested without standing up the real SDK and DB.
source: ['claude', 'codex']
🤖 Fix this with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/backend_task/tokens/claim_tokens.rs`:
- [SUGGESTION] lines 132-184: `claim_all_tokens` has no unit-test coverage for its new branches
The loop introduces four load-bearing branches: first-iteration `NoTokensToClaim` is propagated as an error; later `NoTokensToClaim` breaks with an aggregated fee; any other error short-circuits; the 1,000-iteration cap breaks and still reports success. None of these are covered by an inline `#[test]` in the file, and a grep of `tests/` shows no `claim_tokens` / `claim_all_tokens` coverage (`tests/backend-e2e/token_tasks.rs` exercises other token operations but not claim). CLAUDE.md calls out that new backend tasks should have unit-test coverage, and this is exactly the kind of control flow where regressions are easy to miss. Consider extracting the termination logic behind an injected `claim_once` closure (or a small trait) so the three exit paths can be unit-tested without standing up the real SDK and DB.
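One way to make those branches testable, sketched as a synchronous, dependency-free reduction of the loop. The names (`claim_all_with`, the stand-in `TaskError`) are hypothetical; the real function is async and returns a fee result rather than an iteration count:

```rust
// Stand-in error type so the termination logic can be exercised without the SDK.
#[derive(Debug, PartialEq)]
enum TaskError {
    NoTokensToClaim,
}

const MAX_CLAIM_ALL_ITERATIONS: usize = 1_000;

// The loop body is injected, so each exit path is reachable from a unit test.
fn claim_all_with(
    mut claim_once: impl FnMut() -> Result<(), TaskError>,
) -> Result<usize, TaskError> {
    let mut iterations = 0usize;
    loop {
        if iterations >= MAX_CLAIM_ALL_ITERATIONS {
            return Ok(iterations); // cap hit: partial success
        }
        match claim_once() {
            Ok(()) => iterations += 1,
            Err(TaskError::NoTokensToClaim) if iterations > 0 => return Ok(iterations),
            Err(e) => return Err(e), // first-call NoTokensToClaim or a hard error
        }
    }
}

#[test]
fn first_call_no_rewards_is_propagated() {
    assert_eq!(
        claim_all_with(|| Err(TaskError::NoTokensToClaim)),
        Err(TaskError::NoTokensToClaim)
    );
}

#[test]
fn later_no_rewards_terminates_with_count() {
    let mut calls = 0;
    let result = claim_all_with(|| {
        calls += 1;
        if calls <= 3 { Ok(()) } else { Err(TaskError::NoTokensToClaim) }
    });
    assert_eq!(result, Ok(3));
}

#[test]
fn cap_breaks_with_partial_success() {
    assert_eq!(claim_all_with(|| Ok(())), Ok(MAX_CLAIM_ALL_ITERATIONS));
}
```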
```rust
            match outcome {
                Ok(BackendTaskSuccessResult::ClaimedTokens(_)) => {
                    iterations = iterations.saturating_add(1);
                }
                // Any other success variant is unexpected — propagate verbatim.
                Ok(other) => return Ok(other),
                Err(TaskError::NoTokensToClaim) => {
                    if iterations == 0 {
                        return Err(TaskError::NoTokensToClaim);
                    }
                    break;
                }
                Err(e) => return Err(e),
            }
```
💬 Nitpick: Ok(other) arm silently discards already-broadcast claims
claim_tokens only ever returns BackendTaskSuccessResult::ClaimedTokens, so the Ok(other) => return Ok(other) arm is effectively dead today. If another variant is ever added, returning it here throws away every claim already broadcast in the loop — the user sees an unrelated result and has no indication real transactions were paid for. Either turn this into a debug_assert!/unreachable! to make the invariant explicit, or log a warning and still fall through to the accumulated ClaimedTokens result. Low impact now, cheap to harden.
source: ['claude']
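A sketch of the log-and-fall-through option; whether `other` can be logged in full depends on a `Debug` impl, so it is summarized rather than printed here:

```rust
// Sketch only: break to the aggregated ClaimedTokens result instead of
// returning an unrelated success variant mid-loop.
Ok(other) => {
    debug_assert!(false, "claim_tokens returned an unexpected success variant");
    tracing::warn!("unexpected success variant from claim_tokens; ending claim-all loop");
    let _ = other; // deliberately dropped; the aggregated result is returned below
    break;
}
```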
```rust
    /// No tokens are currently available to claim for this distribution.
    ///
    /// Emitted when the platform's `InvalidTokenClaimNoCurrentRewards` consensus
    /// error is returned — the identity has nothing left to claim right now.
    /// Also used by `claim_all` as a loop-termination signal.
    #[error("No tokens are currently available to claim. Please check back later.")]
    NoTokensToClaim,
```
💬 Nitpick: NoTokensToClaim conflates user-facing error with loop-termination signal
TaskError::NoTokensToClaim is documented as both a user-facing error (its #[error] text is rendered in MessageBanner) and an internal termination signal for claim_all_tokens. Future refactors of claim_tokens that legitimately return this variant mid-loop would silently change the aggregated semantics. A cleaner shape is to have claim_tokens return a typed enum ClaimOutcome { Claimed(FeeResult), Empty } on Ok, reserving TaskError for real failures, and map Empty explicitly into the first-iteration error at the outer call site.
source: ['claude']
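The typed outcome suggested above might look like this (hypothetical names):

```rust
// Hypothetical shape: Ok(Empty) becomes the loop-termination signal,
// reserving TaskError for real failures.
enum ClaimOutcome {
    /// A claim transition was broadcast and committed, with its fee.
    Claimed(FeeResult),
    /// The platform reported no current rewards; nothing was broadcast.
    Empty,
}
```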
```rust
    selected_wallet,
    wallet_open_attempted: false,
    wallet_unlock_popup: WalletUnlockPopup::new(),
    claim_all: true,
```
💬 Nitpick: claim_all defaults to true on every screen open
Even with the new confirmation dialog and "Claim all may submit several transactions" disclaimer, the checkbox resets to true on every visit — a user who unchecks it today will see it re-checked next time. Defaulting to false (explicit opt-in) or persisting the preference alongside other settings would reduce the chance of an accidental multi-fee claim for users who habitually click Claim without reading the dialog. UX judgement call.
source: ['claude']
thepastaclaw
left a comment
Code Review
The new claim_all loop has two real correctness issues: the consensus-error predicate matches only one SDK error wrapper while the rest of the codebase checks both, and a mid-loop failure after one or more successful iterations is reported to the user as a plain error, hiding partial on-chain state changes and already-spent fees. The aggregated success result also discards iteration count and cap-hit signal, leaving the UI unable to communicate partial completion. Multiple agents flagged the absence of any test coverage for this consensus-adjacent state-transition loop, which CLAUDE.md explicitly requires.
Reviewed commit: 5032cfd
Fresh dispatcher run for this queue item. A same-SHA review already existed, so this records the fresh verification without duplicating inline threads.
🔴 2 blocking | 🟡 3 suggestion(s) | 💬 1 nitpick(s)
🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/backend_task/tokens/claim_tokens.rs`:
- [BLOCKING] lines 198-207: `is_no_current_rewards_error` only recognizes one of the two SDK wrappers used elsewhere in this crate
The predicate that decides whether `claim_all_tokens` can terminate cleanly only matches `SdkError::Protocol(ProtocolError::ConsensusError(...))`. Every other consensus-cause check in this crate handles both wrappers — see `is_instant_lock_proof_invalid` (src/backend_task/error.rs:1037) and `shielded_broadcast_error` (src/backend_task/error.rs:1106), each of which also unpacks `SdkError::StateTransitionBroadcastError(broadcast_err) => broadcast_err.cause.as_ref()`. If `InvalidTokenClaimNoCurrentRewards` is delivered through the broadcast wrapper (which is the path most state-transition rejections take in this codebase), `claim_tokens` will fail to produce `TaskError::NoTokensToClaim`, the loop will mis-classify the terminal condition as a hard error after partial success, and the single-claim path will bypass the friendly "nothing to claim" banner. Widen the matcher to mirror the existing convention and add a unit test for both shapes so future SDK refactors fail loudly instead of silently degrading.
- [BLOCKING] lines 164-177: Mid-loop error after successful iterations is reported to the user as a full failure
When iteration N succeeds (state transition broadcast and committed, fee paid) and iteration N+1 fails with anything other than `NoTokensToClaim`, `claim_all_tokens` returns `Err(e)`. `ClaimTokensScreen::display_task_result` only transitions to `Complete` on `BackendTaskSuccessResult::ClaimedTokens`, so the user sees a generic error banner with no signal that earlier claims already committed on-chain and consumed credit fees. A retry will then perform fewer iterations than expected, looking like a separate failure. Either return a partial-success success-result variant carrying the iteration count and the underlying error for warning display, or have the loop surface successful iterations through a tracing/banner channel before propagating the error so the user can self-resolve ("X claims succeeded, then we hit a problem — please retry to claim the rest"). Today the doc comment promise ("Any other error short-circuits the loop") translates into UX that hides on-chain state changes from the user.
- [SUGGESTION] lines 142-194: Aggregated success result hides iteration count and cap-hit state from the UI
`claim_all_tokens` returns the same `BackendTaskSuccessResult::ClaimedTokens(FeeResult)` shape as a single claim, with no field for `iterations_run` or `hit_cap`. Two consequences:
1. **Cap path silently truncates legitimate use.** The doc comment itself states fixed-amount distributions can have up to 32,767 cycles per call, but `MAX_CLAIM_ALL_ITERATIONS` is 1,000. A genuinely large fixed-amount backlog hits the cap, logs a warning, and returns the same success shape as a fully drained loop. The user has no signal to press Claim again — the success screen shows "Claimed Successfully!" with no indication more remain. Either raise the cap above the documented maximum (relying on `NoTokensToClaim` for normal termination) or surface the cap-hit state.
2. **Identical UX for 1 vs 1000 transactions.** The user cannot tell from the success screen how many claims actually ran or what they paid in aggregate.
Introduce a dedicated success variant such as `ClaimedAllTokens { iterations: usize, hit_cap: bool, fee: FeeResult }` so the UI can communicate partial completion and prompt the user to retry, matching the typed-success surface the rest of the codebase relies on for accurate user messaging.
- [SUGGESTION] lines 132-207: New loop, predicate, and termination semantics ship without any tests
CLAUDE.md states: "Changes to consensus-adjacent logic (fee calc, state transitions) MUST have tests." The new logic includes the `is_no_current_rewards_error` predicate, the loop-termination behavior (first-call no-rewards propagated, later no-rewards swallowed), the iteration-cap path, fee accumulation, and propagation of unexpected `Ok` variants — none of which has unit, integration, or UI coverage. The predicate in particular is straightforward to unit-test by constructing each enum shape (and would have caught the wrapper-match issue above). The loop semantics can be covered by extracting the iteration body into a closure-injected helper and asserting termination across all five branches. Without tests, future `SdkError`/`ConsensusError`/`StateError` refactors will silently regress termination since `matches!` degrades to `false` rather than producing a compile error.
- [SUGGESTION] lines 142-184: Long-running claim-all loop has no progress reporting or cancellation
Each iteration is a full platform state-transition broadcast plus proof verification — easily multiple seconds per call. With a 1,000-iteration cap, the worst-case task duration is in the multi-minute range with an unchanging banner and no cooperative cancellation point if the user navigates away. Other long-running backend tasks in the codebase emit incremental `TaskResult::Refresh`/progress updates. Consider sending an intermediate progress message after every N iterations or honoring a cancellation token from `AppContext`. At minimum, document that this task cannot be interrupted once started.
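A sketch of the widened predicate the first blocking finding asks for. The enum paths mirror the two wrappers the review cites (`is_instant_lock_proof_invalid`, `shielded_broadcast_error`) and are assumptions about the SDK's shape, not verified signatures:

```rust
// Sketch only: SdkError / ProtocolError / ConsensusError paths are assumed
// from the helpers cited in the review (error.rs:1037 and error.rs:1106).
fn is_no_current_rewards_error(error: &SdkError) -> bool {
    let cause = match error {
        SdkError::Protocol(ProtocolError::ConsensusError(e)) => Some(e.as_ref()),
        SdkError::StateTransitionBroadcastError(broadcast_err) => {
            broadcast_err.cause.as_ref()
        }
        _ => None,
    };
    matches!(
        cause,
        Some(ConsensusError::StateError(
            StateError::InvalidTokenClaimNoCurrentRewards(_)
        ))
    )
}
```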
thepastaclaw
left a comment
There was a problem hiding this comment.
Code Review
Two real correctness issues in the new claim_all loop survive at this head: is_no_current_rewards_error only matches one of the two SDK error wrappers handled elsewhere in this crate, and a mid-loop failure after one or more committed claims is collapsed into a plain Err, hiding partial on-chain state and consumed fees from the UI. The aggregated success surface also cannot distinguish a fully-drained run from a cap-hit truncation, and the new consensus-adjacent loop ships with zero tests despite CLAUDE.md's explicit requirement.
Reviewed commit: 5032cfd
Fresh dispatcher run for this queue item. A same-SHA review already existed, so this records the fresh verification without duplicating inline threads.
🔴 2 blocking | 🟡 3 suggestion(s) | 💬 1 nitpick(s)
🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/backend_task/tokens/claim_tokens.rs`:
- [BLOCKING] lines 198-207: `is_no_current_rewards_error` only matches one of the two SDK error wrappers used elsewhere in this crate
The loop's termination predicate only matches `SdkError::Protocol(ProtocolError::ConsensusError(...))`. Every other consensus-cause helper in this crate handles both wrappers — see `is_instant_lock_proof_invalid` (src/backend_task/error.rs:1037-1049) and `shielded_broadcast_error` (src/backend_task/error.rs:1106-1140), each of which unwraps `SdkError::StateTransitionBroadcastError(broadcast_err) => broadcast_err.cause.as_ref()` as well. If `InvalidTokenClaimNoCurrentRewards` is delivered through the broadcast wrapper — the path most state-transition rejections take in this codebase — `claim_tokens` will not produce `TaskError::NoTokensToClaim`, `claim_all_tokens` will misclassify the normal terminal condition as a hard error after partial success, and the single-claim path will bypass the friendly "nothing to claim" banner. Because this is a Rust enum-pattern bug, it compiles silently and just routes to the wrong branch at runtime. Mirror the existing convention and add a unit test for both shapes.
- [BLOCKING] lines 164-177: Mid-loop error after successful iterations is reported to the user as a full failure
When iteration N succeeds (state transition broadcast and committed, fee paid) and iteration N+1 fails with anything other than `NoTokensToClaim`, `claim_all_tokens` returns `Err(e)` immediately at line 176, discarding the iteration count and the fees already spent. `ClaimTokensScreen::display_task_result` (src/ui/tokens/claim_tokens_screen.rs:294-300) only transitions to `Complete` on `BackendTaskSuccessResult::ClaimedTokens`, so the user sees an undifferentiated error banner with no signal that earlier claims already committed on-chain and consumed credit fees. A subsequent retry will perform fewer iterations than expected, looking like a separate failure. The typed success surface needs a partial-success variant carrying iteration count and the underlying error (or progress messages emitted before propagation), so the user can self-resolve ("X claims succeeded, then we hit a problem — please retry to claim the rest"). Today this hides irreversible on-chain state changes from the user.
- [SUGGESTION] lines 142-194: Aggregated success result hides iteration count and cap-hit state from the UI
`claim_all_tokens` returns the same `BackendTaskSuccessResult::ClaimedTokens(FeeResult)` shape as a single claim, with no field for iteration count or cap-hit state. Two consequences:
1. **Cap path silently truncates legitimate use.** The doc comment notes fixed-amount distributions can have up to 32,767 cycles per call, but `MAX_CLAIM_ALL_ITERATIONS` is 1,000. A genuinely large fixed-amount backlog hits the cap, logs a warning, and returns the same success shape as a fully drained loop — the success screen shows "Claimed Successfully!" with no signal that more rewards remain.
2. **Identical UX for 1 vs 1000 transactions.** The user cannot tell from the success screen how many claims actually ran or what they paid in aggregate.
Introduce a dedicated success variant such as `ClaimedAllTokens { iterations: usize, hit_cap: bool, fee: FeeResult }` so the UI can communicate partial completion and prompt retry — matching the typed-success surface the rest of the codebase relies on. Or raise the cap above the documented per-call maximum and rely on `NoTokensToClaim` for normal termination.
- [SUGGESTION] lines 115-207: New retry loop, termination predicate, and cap semantics ship without any tests
CLAUDE.md states: "Changes to consensus-adjacent logic (fee calc, state transitions) MUST have tests." The new logic adds the `is_no_current_rewards_error` predicate, the loop-termination behavior (first-call no-rewards propagated, later no-rewards swallowed), the iteration-cap path, fee accumulation, and propagation of unexpected `Ok` variants — none of which has unit, integration, or UI coverage. The predicate is straightforward to unit-test by constructing each enum shape (and would have caught the wrapper-match issue above). The loop semantics can be covered by extracting the iteration body into a closure-injected helper and asserting termination across all five branches. Without tests, future `SdkError`/`ConsensusError`/`StateError` refactors will silently regress termination since `matches!` degrades to `false` rather than producing a compile error.
- [SUGGESTION] lines 142-184: Long-running claim-all loop has no progress reporting or cancellation
Each iteration is a full platform state-transition broadcast plus proof verification — easily multiple seconds per call. With a 1,000-iteration cap, worst-case task duration runs into multiple minutes with an unchanging banner and no cooperative cancellation point if the user navigates away. Other long-running backend tasks in the codebase emit incremental `TaskResult::Refresh`/progress updates. Consider sending an intermediate progress message every N iterations or honoring a cancellation token from `AppContext`. At minimum, document that this task cannot be interrupted once started.
Problem to solve
When a user clicks "claim" and has a large amount of tokens to claim, above Platform's limit for a single claim operation, only part of the awaiting tokens is claimed. The user is not notified that there are more tokens available to claim.
Solution
The claim is repeated for as long as there are outstanding tokens waiting to be claimed.
Copilot's summary
This pull request introduces a new feature to handle token claims more effectively and includes improvements to error handling and logging. The main changes involve adding a new `claim_all_tokens` method, updating the success result type to include token claim amounts, and enhancing the user interface to display token claim results.

Token Claim Enhancements:

- Added a `claim_all_tokens` method in `AppContext` to handle multiple token claims in a loop until no more claims are available. This method returns the total number of tokens claimed. (src/backend_task/tokens/claim_tokens.rs)
- Extended the `BackendTaskSuccessResult` enum with a `TokensClaimed(TokenAmount)` variant for reporting the number of tokens claimed. (src/backend_task/mod.rs)
- Updated the `claim_token` method to return the `TokensClaimed` result with the claimed amount instead of a generic message. (src/backend_task/tokens/claim_tokens.rs)

Error Handling and Logging Improvements:

- Replaced `eprintln!` calls with structured logging using `tracing::error!` for better error reporting when updating token balances. (src/backend_task/tokens/claim_tokens.rs)

User Interface Updates:

- Added a `display_task_result` method to the `ClaimTokensScreen` to show success or error messages based on the token claim results. This includes displaying the number of tokens claimed or an error message if no tokens are available. (src/ui/tokens/claim_tokens_screen.rs)

These changes improve the robustness of the token claim process and provide better feedback to users.
Summary by CodeRabbit
New Features
Bug Fixes / Improvements