fix: prevent duplicate loaded contracts #854
Conversation
📝 Walkthrough
This PR centralizes system contract handling by introducing accessor methods.
Changes: System Contract Centralization & Safeguards
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
✅ Review complete (commit c04173a)
@coderabbitai review
✅ Actions performed: Review triggered.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/context/contract_token_db.rs (1)
60-73: ⚠️ Potential issue | 🟡 Minor | ⚡ Quick win
`get_contracts` re-injects system contracts, ignoring `limit`/`offset`. DB-layer pagination correctly excludes the 5 system contracts (per `database/contracts.rs:267-294`), but here the wrapper unconditionally inserts all 5 at positions 0-4 regardless of `offset`, and after the DB has already returned up to `limit` rows. Net effect once pagination is exercised:
- Result length = `db_rows + 5`, exceeding the requested `limit`.
- System contracts appear on every page, not just the first.

All current callers pass `(None, None)`, so this is latent — but it contradicts the PR objective stating pagination is applied after duplicates are excluded, and it will silently break the first paginated caller.
🛠️ Suggested fix: only prepend on the first page and shrink the DB request accordingly
```diff
 pub fn get_contracts(
     &self,
     limit: Option<u32>,
     offset: Option<u32>,
 ) -> Result<Vec<QualifiedContract>> {
-    // Get contracts from the database
-    let mut contracts = self.db.get_contracts(self, limit, offset)?;
-
-    for (index, system_contract) in self.system_contracts().into_iter().enumerate() {
-        contracts.insert(index, system_contract);
-    }
-
-    Ok(contracts)
+    let system = self.system_contracts();
+    let system_count = system.len() as u32;
+    let effective_offset = offset.unwrap_or(0);
+    let prepend_count = system_count.saturating_sub(effective_offset);
+    let prepend_count = match limit {
+        Some(l) => prepend_count.min(l),
+        None => prepend_count,
+    };
+
+    let db_offset = effective_offset.saturating_sub(system_count);
+    let db_limit = limit.map(|l| l.saturating_sub(prepend_count));
+
+    let mut contracts = if db_limit == Some(0) {
+        Vec::new()
+    } else {
+        self.db.get_contracts(self, db_limit, Some(db_offset))?
+    };
+
+    let skip = effective_offset.min(system_count) as usize;
+    let take = prepend_count as usize;
+    for (i, sc) in system.into_iter().skip(skip).take(take).enumerate() {
+        contracts.insert(i, sc);
+    }
+
+    Ok(contracts)
 }
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/context/contract_token_db.rs` around lines 60 - 73, get_contracts currently unconditionally prepends all system_contracts, breaking pagination; modify it so system_contracts are only prepended for the first page (offset == None or offset == 0) and shrink the DB request when doing so: compute system_count = self.system_contracts().len(), if offset.unwrap_or(0) == 0 then call self.db.get_contracts with adjusted_limit = limit.map(|l| l.saturating_sub(system_count as u32)) (ensuring non-negative) and then insert the system_contracts at the front; otherwise call self.db.get_contracts with the original limit/offset and do not insert system_contracts. Use the existing symbols get_contracts, system_contracts, and self.db.get_contracts.
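The arithmetic in the suggested fix can be checked in isolation. Below is a hedged sketch loosely mirroring the `compute_contract_pagination` helper a later revision of this PR introduces; the struct name, fields, and exact signature here are assumptions for illustration, not the PR's code:

```rust
// Illustrative split: the first `system_count` logical rows come from the
// cached system contracts, everything past them from the database.
#[derive(Debug, PartialEq)]
struct ContractPagination {
    system_skip: usize,    // cached system contracts hidden by the offset
    system_take: usize,    // cached system contracts visible on this page
    db_limit: Option<u32>, // rows still needed from the database
    db_offset: u32,        // offset into the database rows proper
}

fn compute_contract_pagination(
    system_count: u32,
    limit: Option<u32>,
    offset: Option<u32>,
) -> ContractPagination {
    let offset = offset.unwrap_or(0);
    // System contracts remaining after the offset skips past some of them.
    let available = system_count.saturating_sub(offset);
    let system_take = match limit {
        Some(l) => available.min(l),
        None => available,
    };
    ContractPagination {
        system_skip: offset.min(system_count) as usize,
        system_take: system_take as usize,
        db_limit: limit.map(|l| l.saturating_sub(system_take)),
        db_offset: offset.saturating_sub(system_count),
    }
}

fn main() {
    // limit 3, no offset: the page is filled entirely by system contracts.
    let first = compute_contract_pagination(5, Some(3), None);
    assert_eq!((first.system_take, first.db_limit), (3, Some(0)));

    // offset 7 skips all 5 system contracts plus 2 database rows.
    let later = compute_contract_pagination(5, Some(4), Some(7));
    assert_eq!((later.system_take, later.db_offset), (0, 2));
    println!("pagination split ok");
}
```

This makes the two failure modes in the finding concrete: without the split, `db_limit` would never shrink and `system_take` would never drop to zero on later pages.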
🧹 Nitpick comments (3)
src/database/contracts.rs (1)
27-37: 💤 Low value
Consider logging the silent skip on system-contract insert.
Symmetric to `AppContext::remove_contract`: silent success when given a system-contract ID. The UI pre-filter is the front-line defense, but a `tracing::debug!` (or warn) here would make it possible to detect when an unexpected callsite is feeding system contracts into `insert_contract_if_not_exists` instead of failing closed.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/database/contracts.rs` around lines 27-37: in `insert_contract_if_not_exists`, add a `tracing::debug!` (or `tracing::warn!`) right before returning `Ok(())` when `AppContext::is_system_contract_id(&data_contract.id())` is true so the skip is recorded; include the `data_contract.id()` and `contract_alias` (and any relevant context like `insert_tokens_too`) in the log message to make it easy to detect callsites that are attempting to insert system contracts, while keeping the existing early-return behavior intact.

src/context/contract_token_db.rs (2)
100-106: 💤 Low value
Silent `Ok(())` on attempted system-contract removal.
UI guards in `contract_chooser_panel.rs` are the primary defense, but any other caller (current or future backend task) will see this as a successful deletion when nothing happened. Consider returning a typed error (or at minimum a `tracing::warn!`) so callers can distinguish "removed" from "ignored".
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/context/contract_token_db.rs` around lines 100 - 106, The method remove_contract currently returns Ok(()) when given a system contract id, hiding that no deletion occurred; change it to surface this as an error so callers can distinguish "ignored" from "removed": in remove_contract (and callers of ContractTokenDb::remove_contract) detect is_system_contract_id(contract_id) and return a typed error (e.g., add a new enum variant like ContractDbError::SystemContractDeletionForbidden or similar in your Result/Error type) instead of Ok(()); update signature usages or propagate the new error accordingly (alternatively, if you cannot add an error variant now, at minimum emit tracing::warn! with contract_id and still return an Err to signal failure). Ensure identifiers referenced: remove_contract, is_system_contract_id, Identifier, and the Result/Error type are updated consistently.
16-57: ⚡ Quick win
Avoid deep-cloning all 5 system contracts on every helper call.
`system_contracts()` allocates a `[QualifiedContract; 5]` and deep-clones each `DataContract` (via `Arc::clone(&self.X).as_ref().clone()`, where the `Arc::clone` is itself wasted because the result is dereferenced and cloned immediately). Every invocation of `system_contract_ids`, `is_system_contract_id`, and `system_contract_by_id` routes through this and clones five contracts — including from per-frame UI paths (`contract_chooser_panel`, `update_contract_screen`) and the SDK `get_data_contract` provider hot path. ID-only lookups don't need full `QualifiedContract` values.
♻️ Proposed refactor: ID lookups without cloning, full clone only on hit
```diff
+    fn system_contract_arcs(&self) -> [(&Arc<DataContract>, &'static str); 5] {
+        [
+            (&self.dpns_contract, "dpns"),
+            (&self.token_history_contract, "token_history"),
+            (&self.withdraws_contract, "withdrawals"),
+            (&self.keyword_search_contract, "keyword_search"),
+            (&self.dashpay_contract, "dashpay"),
+        ]
+    }
+
     pub(crate) fn system_contracts(&self) -> [QualifiedContract; 5] {
-        [
-            QualifiedContract {
-                contract: Arc::clone(&self.dpns_contract).as_ref().clone(),
-                alias: Some("dpns".to_string()),
-            },
-            // ... four more identical entries
-        ]
+        self.system_contract_arcs().map(|(arc, alias)| QualifiedContract {
+            contract: arc.as_ref().clone(),
+            alias: Some(alias.to_string()),
+        })
     }

     pub(crate) fn system_contract_ids(&self) -> [Identifier; 5] {
-        self.system_contracts()
-            .map(|system_contract| system_contract.contract.id())
+        self.system_contract_arcs().map(|(arc, _)| arc.id())
     }

     pub(crate) fn system_contract_by_id(
         &self,
         contract_id: &Identifier,
     ) -> Option<QualifiedContract> {
-        self.system_contracts()
-            .into_iter()
-            .find(|system_contract| system_contract.contract.id() == *contract_id)
+        self.system_contract_arcs()
+            .into_iter()
+            .find(|(arc, _)| arc.id() == *contract_id)
+            .map(|(arc, alias)| QualifiedContract {
+                contract: arc.as_ref().clone(),
+                alias: Some(alias.to_string()),
+            })
     }

     pub(crate) fn is_system_contract_id(&self, contract_id: &Identifier) -> bool {
-        self.system_contract_ids().contains(contract_id)
+        self.system_contract_arcs()
+            .iter()
+            .any(|(arc, _)| arc.id() == *contract_id)
     }
```
For the `context_provider::resolve_data_contract` path, also consider exposing an `Arc<DataContract>` accessor so it can return `Arc::clone(arc)` instead of `Arc::new(system_contract.contract)`, eliminating the deep clone there entirely.
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@src/context/contract_token_db.rs` around lines 16 - 57, system_contracts() currently deep-clones all five DataContract instances on every call; change the helpers to avoid that by: implement system_contract_ids() to directly return an array (or Vec) of ids by calling id() on each Arc (self.dpns_contract.id(), self.token_history_contract.id(), self.withdraws_contract.id(), self.keyword_search_contract.id(), self.dashpay_contract.id()) instead of calling system_contracts(); implement is_system_contract_id() to check against those id values (or inline compare contract_id to each Arc's id) without allocating contracts; and change system_contract_by_id(contract_id) to test each Arc's id and only on a match construct and return a QualifiedContract by cloning/constructing from the matching Arc (e.g., QualifiedContract { contract: Arc::clone(&self.dpns_contract).as_ref().clone() } or preferably return an Arc<DataContract> wrapper), so you only deep-clone the DataContract when there is a hit; keep references to the fields dpns_contract, token_history_contract, withdraws_contract, keyword_search_contract, dashpay_contract and the QualifiedContract type names to locate where to change.
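The wasted-clone pattern this finding flags can be demonstrated with any `Arc`. A minimal illustration, using `String` in place of `DataContract` purely for the sketch:

```rust
use std::sync::Arc;

fn main() {
    // `String` stands in for `DataContract` for illustration only.
    let cached: Arc<String> = Arc::new("system contract".to_string());

    // A plain Arc::clone only bumps the refcount; no data is copied.
    let shared = Arc::clone(&cached);
    assert!(Arc::ptr_eq(&cached, &shared));
    assert_eq!(Arc::strong_count(&cached), 2);

    // The flagged pattern: the temporary Arc handle is immediately
    // dereferenced and the inner value deep-cloned into a fresh
    // allocation, so the Arc::clone buys nothing.
    let deep: String = Arc::clone(&cached).as_ref().clone();
    assert_eq!(deep, *cached);
    // The temporary handle dropped at the end of that statement,
    // so the refcount is back to 2.
    assert_eq!(Arc::strong_count(&cached), 2);
    println!("arc demo ok");
}
```

This is why the proposed refactor routes ID comparisons through borrowed `Arc`s and only materializes an owned value on an actual hit.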
ℹ️ Review info
⚙️ Run configuration: path `.coderabbit.yaml`, review profile CHILL, plan Pro, Run ID d0c17948-429a-46c6-9906-ce5695d33023
📒 Files selected for processing (6)
- src/context/contract_token_db.rs
- src/context_provider.rs
- src/database/contracts.rs
- src/ui/components/contract_chooser_panel.rs
- src/ui/contracts_documents/add_contracts_screen.rs
- src/ui/contracts_documents/update_contract_screen.rs
thepastaclaw left a comment
Code Review
The PR correctly centralizes system-contract handling and closes the duplicate-add paths reported in #841, but the centralization regresses the cached Arc model into repeated deep clones on hot paths (per-frame UI rendering and SDK context resolution). One mutation path (replace_contract) was not given the same system-contract guard added to insert_contract_if_not_exists / remove_contract, and the new SQL pagination + UI duplicate-detection landed without targeted regression coverage. No blocking issues.
Reviewed commit: adfd1c6
🟡 2 suggestion(s) | 💬 2 nitpick(s)
1 additional finding
🟡 suggestion: replace_contract still permits writing system contracts back into the database
src/context/contract_token_db.rs (lines 108-114)
The PR hardens insert_contract_if_not_exists (database/contracts.rs:34) and AppContext::remove_contract against system IDs, but AppContext::replace_contract forwards every ID straight to Database::replace_contract. Callers in src/backend_task/update_data_contract.rs:112,152 and src/backend_task/tokens/update_token_config.rs:103 will happily recreate the hidden duplicate rows the rest of the PR is trying to eliminate if a system contract ID ever reaches them. UI flows currently filter system contracts out of the update paths, so this is defense-in-depth rather than a live bug — but with get_contracts now masking system rows via NOT IN (...), any reintroduction would be silent and harder to notice. Apply the same is_system_contract_id short-circuit here for consistency with the sibling guards.
💡 Suggested change
```rust
pub fn replace_contract(
    &self,
    contract_id: Identifier,
    new_contract: &DataContract,
) -> Result<()> {
    if self.is_system_contract_id(&contract_id) {
        return Ok(());
    }
    self.db.replace_contract(contract_id, new_contract, self)
}
```
🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/context/contract_token_db.rs`:
- [SUGGESTION] lines 16-57: system_contracts() helper forces a deep DataContract clone on every call
`system_contracts()` builds `[QualifiedContract; 5]` via `Arc::clone(&self.x_contract).as_ref().clone()` — i.e. a full deep clone of each cached `DataContract` (document types, schemas, tokens, keywords). It backs three lookups that have no need for owned data:
- `is_system_contract_id` is invoked per-row, per-frame in `src/ui/components/contract_chooser_panel.rs:519` and again in `src/ui/contracts_documents/update_contract_screen.rs:84`, plus on every `Database::insert_contract_if_not_exists` and `Database::get_contracts` SQL build. At 60 fps with N loaded contracts, that is 5×N deep clones every frame just to compare 32-byte identifiers.
- `system_contract_by_id` is called by `resolve_data_contract` in `src/context_provider.rs:23`. The previous implementation returned `Arc::clone(contract)` on a hit (refcount bump). After the refactor it deep-clones the cached `DataContract` and wraps it in a brand-new `Arc::new(...)` (line 24), defeating the point of caching the system contracts as `Arc<DataContract>` on `AppContext`. This path is exercised by the SDK during state-transition validation and document fetches.
Split the API: keep a borrowed iterator over the cached `Arc`s for the read-only callers, and only build owned `QualifiedContract` values at the actual `get_contracts` / `get_contract_by_id` callsites. Sketch:
```rust
fn system_contract_arcs(&self) -> [(&Arc<DataContract>, &'static str); 5] {
[
(&self.dpns_contract, "dpns"),
(&self.token_history_contract, "token_history"),
(&self.withdraws_contract, "withdrawals"),
(&self.keyword_search_contract, "keyword_search"),
(&self.dashpay_contract, "dashpay"),
]
}
pub(crate) fn system_contract_ids(&self) -> [Identifier; 5] {
self.system_contract_arcs().map(|(c, _)| c.id())
}
pub(crate) fn is_system_contract_id(&self, id: &Identifier) -> bool {
self.system_contract_arcs().iter().any(|(c, _)| c.id() == *id)
}
```
Expose `system_contract_arc_by_id(&self) -> Option<Arc<DataContract>>` for `resolve_data_contract` so it can return `Arc::clone(arc)` and recover the original zero-copy semantics.
Force-pushed from 662a070 to 7f5ab47.
Tracker #1224 follow-up is pushed in
Verification:
thepastaclaw left a comment
Code Review
Codex's blocking finding is verified: update_data_contract still calls self.db.replace_contract(...) directly in two places (success path and proof-error recovery), bypassing the new AppContext::replace_contract system-contract guard introduced by this PR. Combined with the update screen's free-form JSON paste box, this leaves a path for users to recreate the duplicate persisted system-contract rows the PR is trying to eliminate. The test-coverage suggestion is also valid — three new validation paths and rewritten pagination logic ship without unit/kittest coverage.
Reviewed commit: 7f5ab47
🔴 1 blocking | 🟡 1 suggestion(s) | 💬 1 nitpick(s)
2 additional findings
🔴 blocking: update_data_contract bypasses the new system-contract write guard
src/backend_task/update_data_contract.rs (lines 111-112)
This PR added AppContext::replace_contract() specifically to no-op when the target ID is a system contract, but update_data_contract() still writes through self.db.replace_contract(...) directly — both here on the success path (lines 111–112) and in the proof-error recovery path (lines 151–153). The update screen invites users to paste arbitrary contract JSON (src/ui/contracts_documents/update_contract_screen.rs:577), so although the combo box filters out system contracts, a user can still paste a system contract document and broadcast it; on success, both DB writes will persist a duplicate row for that system contract, recreating the exact bug #841 fixed. Route both writes through self.replace_contract(...) so the system-ID guard is enforced uniformly. Note that AppContext::replace_contract takes (Identifier, &DataContract) (no third self arg).
💡 Suggested change
```rust
self.replace_contract(data_contract.id(), &returned_contract)?;
```
🟡 suggestion: New duplicate-rejection and pagination logic ships without regression tests
src/ui/contracts_documents/add_contracts_screen.rs (lines 80-158)
The PR adds several behaviors that are easy to regress silently and have no targeted tests: (1) add_contracts_clicked rejecting duplicate IDs in a single submission, (2) rejecting IDs already loaded via AppContext::get_contracts, (3) the non-trivial new pagination math in AppContext::get_contracts mixing cached system contracts with DB offset/limit (no current caller exercises a non-zero offset), and (4) the is_system_contract_id short-circuits in Database::insert_contract_if_not_exists and AppContext::replace_contract plus the startup cleanup_persisted_system_contracts() path. The duplicate-input check is pure logic on Vec<Identifier> and could be lifted into model/ for unit testing; the pagination math and persistence-layer guards are pure functions of AppContext and trivially testable with an in-memory DB. At minimum, add a unit test for AppContext::get_contracts covering offset combinations crossing the system/DB boundary, a kittest exercising the duplicate-input and already-loaded paths in AddContractsScreen, and a unit test asserting replace_contract and cleanup_persisted_system_contracts behave correctly for system IDs.
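The duplicate-input check called out in point (1) really is pure logic. A hedged sketch of the suggested lift into `model/`; the helper name `find_duplicate_input` is hypothetical, and fixed-size byte arrays stand in for `Identifier`:

```rust
use std::collections::HashSet;

// Hypothetical standalone form of the duplicate-input check: return the
// first ID that appears more than once in a single submission.
fn find_duplicate_input(ids: &[[u8; 32]]) -> Option<[u8; 32]> {
    let mut seen = HashSet::new();
    // HashSet::insert returns false when the value was already present,
    // so the first failed insert identifies the first duplicate.
    ids.iter().find(|id| !seen.insert(**id)).copied()
}

fn main() {
    let a = [1u8; 32];
    let b = [2u8; 32];
    assert_eq!(find_duplicate_input(&[a, b]), None);
    assert_eq!(find_duplicate_input(&[a, b, a]), Some(a));
    println!("duplicate check ok");
}
```

Lifting the check this way makes the unit test a two-line assertion rather than a screen-level exercise.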
Force-pushed from 06d22b8 to c591ca4.
Addressed review
thepastaclaw left a comment
Code Review
Prior blocking issues are resolved at c591ca4: update_data_contract now routes through AppContext::replace_contract, the replace/remove guards are wired up with unit tests, pagination is correctly split via compute_contract_pagination with 9 tests, and the system-contract Arc-iter avoids deep cloning on ID-only paths. Remaining items are non-blocking quality suggestions: get_contracts still deep-clones every cached system DataContract on hot per-frame paths (chooser panel), startup cleanup runs 5 separate transactions even when nothing needs cleanup, and integration coverage for the new contract-listing semantics could be tighter.
Reviewed commit: c591ca4
🟡 1 suggestion(s) | 💬 3 nitpick(s)
1 additional finding
💬 nitpick: Screen-level wiring of the new duplicate guards is not covered by a kittest
src/ui/contracts_documents/add_contracts_screen.rs (lines 104-173)
find_duplicate_input and find_already_loaded_id are well unit-tested as pure helpers. What is not covered is the screen-level integration in add_contracts_clicked: that hitting either rejection path (a) sets add_contracts_status = Error, (b) emits a MessageBanner::set_global with the duplicate Base58 ID, and (c) returns AppAction::None without dispatching ContractTask::FetchContracts. A future refactor of this method (status-enum changes, banner refactor, early-return reordering) could regress one of these without breaking any test. A small tests/kittest/ exercise of both rejection paths would close the gap.
```rust
pub fn get_contracts(
    &self,
    limit: Option<u32>,
    offset: Option<u32>,
) -> Result<Vec<QualifiedContract>> {
    let system_contract_count = self.system_contract_arcs().len() as u32;
    let pagination = compute_contract_pagination(system_contract_count, limit, offset);

    let mut contracts = self
        .system_contract_arcs()
        .into_iter()
        .skip(pagination.system_skip)
        .take(pagination.system_take)
        .map(|(system_contract, alias)| QualifiedContract {
            contract: system_contract.as_ref().clone(),
            alias: Some(alias.to_string()),
        })
        .collect::<Vec<_>>();

    if pagination.db_limit != Some(0) {
        contracts.extend(self.db.get_contracts(
            self,
            pagination.db_limit,
            Some(pagination.db_offset),
        )?);
    }

    Ok(contracts)
}
```
🟡 Suggestion: get_contracts deep-clones every cached system DataContract on per-frame paths
AppContext::get_contracts materializes owned QualifiedContract values for the visible system contracts via system_contract.as_ref().clone() — a full DataContract deep clone, not an Arc bump. The provider lookup regression was fixed by introducing system_contract_arc_by_id, but the same ownership cost remains here in a hotter path: contract_chooser_panel.rs:162 and helpers.rs:717 call get_contracts(None, None) per frame, and update_contract_screen.rs:80-85 immediately filters the cloned system contracts back out. On every chooser frame, the five built-in contracts are reserialized into fresh owned values just to be re-borrowed (or discarded). Consider exposing a borrowed/Arc-backed read-only listing API for screens that only iterate, or a dedicated non-system listing helper for callers that exclude built-ins. Not a correctness bug, but a meaningful per-frame allocation cost that the rest of this PR's Arc-based plumbing was designed to avoid.
source: ['codex', 'claude']
```rust
pub(crate) fn cleanup_persisted_system_contracts_for(
    db: &Database,
    system_contract_ids: &[Identifier],
    network: &Network,
) -> Result<()> {
    for contract_id in system_contract_ids {
        db.remove_contract_for_network(contract_id.as_bytes(), network)?;
    }
    Ok(())
}
```
💬 Nitpick: Startup cleanup opens five separate transactions on every AppContext::new
cleanup_persisted_system_contracts_for iterates the 5 system contract IDs and calls Database::remove_contract_for_network for each. That helper now opens its own unchecked_transaction and runs four DELETE statements (identity_token_balances, token_order, token, contract). Result: up to 5 transactions × 4 statements = 20 SQL operations on every AppContext::new, even in the steady state where no rows match. SQLite handles empty deletes cheaply and this is a cold path, so it is not urgent — but a single SELECT EXISTS guard, or folding the IDs into one WHERE contract_id IN (?, ?, ?, ?, ?) transaction, would remove the churn introduced by this defense-in-depth layer.
source: ['claude']
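A sketch of the batching idea, assuming hypothetical `in_placeholders`/`cleanup_query` helpers: build one `IN (?, ?, ...)` clause for all five IDs so the deletes can run as a single statement per table inside one transaction, instead of five transactions of four statements each.

```rust
// Hypothetical helper: build a `?, ?, ...` placeholder list for a SQLite
// `IN (...)` clause covering every system-contract ID at once.
fn in_placeholders(n: usize) -> String {
    std::iter::repeat("?").take(n).collect::<Vec<_>>().join(", ")
}

// Hypothetical: the contract-table delete; the dependent-table deletes
// (identity_token_balances, token_order, token) would reuse the same
// placeholder list and run in the same transaction.
fn cleanup_query(id_count: usize) -> String {
    format!(
        "DELETE FROM contract WHERE network = ? AND contract_id IN ({})",
        in_placeholders(id_count)
    )
}

fn main() {
    let q = cleanup_query(5);
    assert_eq!(
        q,
        "DELETE FROM contract WHERE network = ? AND contract_id IN (?, ?, ?, ?, ?)"
    );
    println!("{q}");
}
```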
```rust
pub fn get_contracts(
    &self,
    limit: Option<u32>,
    offset: Option<u32>,
) -> Result<Vec<QualifiedContract>> {
    let system_contract_count = self.system_contract_arcs().len() as u32;
    let pagination = compute_contract_pagination(system_contract_count, limit, offset);

    let mut contracts = self
        .system_contract_arcs()
        .into_iter()
        .skip(pagination.system_skip)
        .take(pagination.system_take)
        .map(|(system_contract, alias)| QualifiedContract {
            contract: system_contract.as_ref().clone(),
            alias: Some(alias.to_string()),
        })
        .collect::<Vec<_>>();

    if pagination.db_limit != Some(0) {
        contracts.extend(self.db.get_contracts(
            self,
            pagination.db_limit,
            Some(pagination.db_offset),
        )?);
    }

    Ok(contracts)
}
```
💬 Nitpick: Public get_contracts path lacks an integration-style test against a real DB
This PR changes the observable semantics of contract listing in two layers: Database::get_contracts now filters persisted system IDs in SQL via NOT IN (...), and AppContext::get_contracts splices cached system contracts back in according to the compute_contract_pagination split. The new tests cover compute_contract_pagination arithmetic and the cleanup helpers thoroughly, but never exercise the public get_contracts path against an in-memory database with real rows. A focused test that seeds a few real contract rows plus a stale persisted system-contract row, then asserts the merge boundary (system contracts first, no duplicates, correct count under various limits/offsets) would catch off-by-one and SQL-binding regressions that the helper-only tests cannot.
source: ['codex']
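For context on what such a test would need to pin down, here is a plausible sketch of the split a helper like this could perform (illustrative only, not the actual `compute_contract_pagination` implementation): system contracts occupy positions `0..system_count` and DB rows follow, so the offset is consumed by the system slice first and the remainder of the limit is forwarded to the DB.

```rust
#[derive(Debug, PartialEq)]
struct Split {
    system_skip: usize,
    system_take: usize,
    db_limit: Option<u32>,
    db_offset: u32,
}

// Plausible split logic: consume `offset` from the system slice first,
// then forward whatever limit remains to the DB query.
fn split(system_count: u32, limit: Option<u32>, offset: Option<u32>) -> Split {
    let offset = offset.unwrap_or(0);
    let system_skip = offset.min(system_count);
    let system_available = system_count - system_skip;
    let system_take = match limit {
        Some(l) => l.min(system_available),
        None => system_available,
    };
    Split {
        system_skip: system_skip as usize,
        system_take: system_take as usize,
        db_limit: limit.map(|l| l - system_take),
        db_offset: offset.saturating_sub(system_count),
    }
}

fn main() {
    // Page of 4 starting at offset 3 over 5 system contracts:
    // system items 3..5 (2 of them), then 2 DB rows from offset 0.
    let s = split(5, Some(4), Some(3));
    assert_eq!(s.system_skip, 3);
    assert_eq!(s.system_take, 2);
    assert_eq!(s.db_limit, Some(2));
    assert_eq!(s.db_offset, 0);

    // Offset past all system contracts: DB offset shifts down by 5.
    let s = split(5, Some(10), Some(7));
    assert_eq!(s.system_take, 0);
    assert_eq!(s.db_offset, 2);
    assert_eq!(s.db_limit, Some(10));
}
```

An integration test would assert exactly these boundary cases against real DB rows rather than against the arithmetic alone.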
c591ca4 to
0d90ba2
Addressed the remaining performance suggestion in
thepastaclaw
left a comment
Code Review
On SHA 0d90ba2, prior blocking issues (per-frame deep-clones, system-contract write guards, duplicate-detection helpers, pagination split) are all resolved. The only new defect introduced by this push is in test coverage: the new get_contracts_arc_pagination_matches_get_contracts test seeds rows with PLACEHOLDER bytes that fail deserialization, so the cross-boundary merge it claims to exercise is silently elided in every case. Two carry-forward nitpicks (cleanup transactions, missing kittest) were already posted in prior reviews and the underlying code is unchanged here.
Reviewed commit: 0d90ba2
🟡 1 suggestion(s)
🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/context/contract_token_db.rs`:
- [SUGGESTION] lines 822-863: New pagination test never reaches the DB-backed merge branch
`get_contracts_arc_pagination_matches_get_contracts` is the regression test for the new public listing semantics across the system / DB boundary, but it seeds its three user rows via `insert_contract_row`, which writes `PLACEHOLDER = b"placeholder-not-a-real-contract"` rather than a serialized `DataContract` (lines 400-414). `Database::get_contracts` in `src/database/contracts.rs:321-334` calls `DataContract::versioned_deserialize` and `continue`s on error, so for every (limit, offset) case in the test both `get_contracts` and `get_contracts_arc` see only the 5 cached system contracts — they never touch SQL filtering, offset handling, or the merge boundary this PR changed. The test currently asserts agreement between two paths after the only branch unique to this PR has been elided, so it would still pass if the cross-boundary pagination logic regressed. Either seed at least one row with a real serialized `DataContract` (round-tripped through `versioned_serialize`) or assert directly on `(system_skip, system_take, db_offset, db_limit)` for the cases that should cross the boundary, so a regression in `Database::get_contracts` filtering can actually fail the assertion.
0d90ba2 to
fed3724
thepastaclaw
left a comment
Code Review
On fed3724, the prior blocking pagination-test issue is resolved by the new insert_real_contract_row helper and explicit cross-boundary assertions. Four valid suggestion/nitpick-level findings remain, all surfaced by Codex: two test-coverage gaps (screen-level wiring of new duplicate-rejection paths, and a stale persisted system-row case still seeded with non-deserializable bytes), one semantic divergence between loaded_contract_ids and the visible contract set on corrupted rows, and one cold-path startup transaction churn nit. No blocking issues.
Reviewed commit: fed3724
🟡 3 suggestion(s) | 💬 1 nitpick(s)
```rust
fn add_contracts_clicked(&mut self) -> AppAction {
    match self.parse_identifiers() {
        Ok(identifiers) => {
            if let Some(identifier) = find_duplicate_input(&identifiers) {
                let message = format!(
                    "Contract {} was entered more than once. Remove the duplicate entry before adding contracts.",
                    identifier.to_string(Encoding::Base58)
                );
                self.add_contracts_status = AddContractsStatus::Error;
                MessageBanner::set_global(
                    self.app_context.egui_ctx(),
                    &message,
                    MessageType::Error,
                );
                return AppAction::None;
            }

            let existing_contract_ids = match self.app_context.loaded_contract_ids() {
                Ok(ids) => ids,
                Err(e) => {
                    tracing::warn!("Failed to check loaded contracts before adding: {e}");
                    self.add_contracts_status = AddContractsStatus::Error;
                    MessageBanner::set_global(
                        self.app_context.egui_ctx(),
                        "Unable to check whether those contracts are already loaded. Please try again.",
                        MessageType::Error,
                    );
                    return AppAction::None;
                }
            };

            if let Some(identifier) =
                find_already_loaded_id(&identifiers, &existing_contract_ids)
            {
                let message = format!(
                    "Contract {} is already loaded. Select it from the existing contracts list or enter a different contract ID.",
                    identifier.to_string(Encoding::Base58)
                );
                self.add_contracts_status = AddContractsStatus::Error;
                MessageBanner::set_global(
                    self.app_context.egui_ctx(),
                    &message,
                    MessageType::Error,
                );
                return AppAction::None;
            }
```
🟡 Suggestion: Screen-level duplicate-rejection branches in add_contracts_clicked have no UI test
add_contracts_clicked() now owns three new user-visible rejection branches: duplicate IDs in one submission, an ID already loaded, and the loaded_contract_ids() failure path. Each sets add_contracts_status = Error, calls MessageBanner::set_global(...), and returns AppAction::None instead of dispatching ContractTask::FetchContracts. The new tests only exercise the pure helpers find_duplicate_input and find_already_loaded_id; the screen-level wiring (status state, banner, suppressed task dispatch) is uncovered. Add a tests/kittest/ case that drives the screen through each branch and asserts the action returned and the resulting status, so future refactors of add_contracts_clicked() cannot silently break the visible behavior.
source: ['codex-general']
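For reference, plausible generic sketches of the two pure helpers the existing tests do cover (the real ones operate on dpp `Identifier` values; these shapes are assumptions, not the actual code):

```rust
use std::collections::HashSet;
use std::hash::Hash;

// Plausible sketch of `find_duplicate_input`: first value appearing twice.
fn find_duplicate_input<T: Eq + Hash + Clone>(items: &[T]) -> Option<T> {
    let mut seen: HashSet<&T> = HashSet::new();
    for item in items {
        if !seen.insert(item) {
            return Some(item.clone());
        }
    }
    None
}

// Plausible sketch of `find_already_loaded_id`: first value already loaded.
fn find_already_loaded_id<T: Eq + Hash + Clone>(items: &[T], loaded: &[T]) -> Option<T> {
    let loaded: HashSet<&T> = loaded.iter().collect();
    items.iter().find(|item| loaded.contains(*item)).cloned()
}

fn main() {
    assert_eq!(find_duplicate_input(&[1, 2, 3, 2]), Some(2));
    assert_eq!(find_duplicate_input(&[1, 2, 3]), None);
    assert_eq!(find_already_loaded_id(&[4, 5], &[5, 6]), Some(5));
    assert_eq!(find_already_loaded_id(&[4, 7], &[5, 6]), None);
}
```

The suggested kittest case would exercise the wiring around these helpers (status, banner, suppressed dispatch), which unit tests of the helpers alone cannot.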
```rust
fn get_user_contracts_excludes_system_contracts() {
    let temp_dir = tempfile::tempdir().expect("create temp data dir");
    let db = Arc::new(create_test_database().expect("create in-memory db"));
    let app_context = make_test_app_context(temp_dir.path(), Arc::clone(&db));

    // Insert a row whose id collides with a system contract — even if
    // such a row exists (e.g. legacy data), get_user_contracts should
    // not return it because db.get_contracts excludes system ids.
    let system_id = app_context.dpns_contract.id();
    insert_contract_row(&db, &system_id, &app_context.network);

    let user_contracts = app_context
        .get_user_contracts()
        .expect("get_user_contracts succeeds");
    for qc in &user_contracts {
        assert!(
            !app_context.is_system_contract_id(&qc.contract.id()),
            "get_user_contracts must not return any system contract"
        );
    }
}
```
🟡 Suggestion: System-exclusion test cannot prove the SQL NOT IN filter works
get_user_contracts_excludes_system_contracts is the only test of the new NOT IN (...) SQL exclusion in Database::get_contracts, but it seeds the colliding row via insert_contract_row(). By design that helper writes intentionally non-deserializable bytes that Database::get_contracts silently drops in its DataContract::versioned_deserialize loop (src/database/contracts.rs:322-334). If the SQL exclusion regressed, the row would still be filtered by deserialization and the test would still pass. The newer get_contracts_arc_pagination_matches_get_contracts correctly uses insert_real_contract_row, but never seeds a row whose ID collides with a system contract. Add a case that inserts a real serialized contract under a system-contract ID and asserts both get_user_contracts() and get_contracts(None, None) omit it while still returning the other DB rows.
source: ['codex-general']
```rust
/// Returns the IDs of every loaded contract (system + database) without
/// materializing the contract bodies. Use this for cheap membership
/// checks ("is this contract id already loaded?").
pub fn loaded_contract_ids(&self) -> Result<Vec<Identifier>> {
    let mut ids: Vec<Identifier> = self.system_contract_ids().to_vec();
    ids.extend(self.db.get_contract_ids_for_network(&self.network)?);
    Ok(ids)
}
```
🟡 Suggestion: loaded_contract_ids treats corrupted DB rows as already loaded but they are invisible elsewhere
loaded_contract_ids() extends the system IDs with Database::get_contract_ids_for_network(), which only validates Identifier::from_bytes and does not touch the contract blob (src/database/contracts.rs:344-363). Database::get_contracts() and get_contract_by_id() instead silently skip rows whose blob fails DataContract::versioned_deserialize (src/database/contracts.rs:322-334). The new add-screen guard at add_contracts_screen.rs:121-149 uses loaded_contract_ids() and rejects any matching ID. Net effect: a row with a malformed contract body disappears from listings and lookups, but the add screen reports it as already loaded, so the user can no longer self-repair by re-fetching. Either filter loaded_contract_ids to the same invariant as the visible set, or have the cleanup path delete malformed rows so the membership check stays in sync.
source: ['codex-rust-quality']
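A toy sketch of the first option (filtering membership by the same validity check the visible set uses), with `u32` ids and a stand-in `deserializes` predicate replacing `DataContract::versioned_deserialize` — both are assumptions for illustration only:

```rust
// Hypothetical row shape: (id, serialized contract blob).
type Row = (u32, Vec<u8>);

// Stand-in for `DataContract::versioned_deserialize`: here a blob is
// "valid" when it is non-empty (assumption for illustration only).
fn deserializes(blob: &[u8]) -> bool {
    !blob.is_empty()
}

// Membership aligned with the visible set: only ids whose blob would
// also survive the listing path's deserialization count as loaded.
fn loaded_contract_ids(system_ids: &[u32], rows: &[Row]) -> Vec<u32> {
    let mut ids = system_ids.to_vec();
    ids.extend(
        rows.iter()
            .filter(|(_, blob)| deserializes(blob))
            .map(|(id, _)| *id),
    );
    ids
}

fn main() {
    let rows = vec![(10, vec![1, 2, 3]), (11, vec![])]; // 11 is corrupted
    let ids = loaded_contract_ids(&[1, 2], &rows);
    assert_eq!(ids, vec![1, 2, 10]); // corrupted row 11 is not "loaded"
}
```

The alternative (deleting malformed rows during cleanup) keeps the cheap ids-only query but restores the same invariant from the write side.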
```rust
pub(crate) fn cleanup_persisted_system_contracts_for(
    db: &Database,
    system_contract_ids: &[Identifier],
    network: &Network,
) -> Result<()> {
    for contract_id in system_contract_ids {
        db.remove_contract_for_network(contract_id.as_bytes(), network)?;
    }
    Ok(())
}
```
💬 Nitpick: Startup cleanup opens five transactions on every AppContext construction
AppContext::new() unconditionally calls cleanup_persisted_system_contracts() (src/context/mod.rs:375), which iterates the five built-in IDs and invokes Database::remove_contract_for_network() per ID. Each call opens its own transaction and runs four DELETEs (src/database/contracts.rs:378-412), even in the steady state where there is nothing to remove — up to five commits and twenty no-op SQL statements per startup. Cold path, so non-blocking, but batching the IDs into a single transaction (or short-circuiting when the IDs are absent from the contract table) would preserve the safety fix without the recurring churn.
source: ['codex-rust-quality']
thepastaclaw
left a comment
Code Review
Refactor centralizes system-contract handling on AppContext, adds duplicate/already-loaded checks before dispatch, and introduces an Arc-backed listing path. Logic, ownership, and test coverage are solid. Main valid concerns: duplicate-rejection rule is enforced only in the UI (not at the task/persistence boundary), replace_contract/remove_contract silently return Ok(()) for system IDs (masking caller intent), and the user-driven remove_contract was rewritten to cascade-delete tokens — an unrelated, untested behavior change.
Reviewed commit: fed3724
🟡 5 suggestion(s) | 💬 3 nitpick(s)
2 additional findings
🟡 suggestion: Duplicate-contract rejection lives in UI only — not enforced at the task boundary
src/backend_task/contract.rs (lines 49-70)
AddContractsScreen (src/ui/contracts_documents/add_contracts_screen.rs:107-149) now rejects duplicate inputs and already-loaded IDs before dispatching, but ContractTask::FetchContracts itself still blindly fetches every requested ID and always returns FetchedContracts(results) with Some(contract) for each. Persistence relies on INSERT OR IGNORE plus the new system-contract guard in insert_contract_if_not_exists, so the backend cannot distinguish 'newly persisted' from 'silently skipped'. Two consequences: (1) any non-UI caller of FetchContracts or SaveDataContract (e.g. MCP tools, future automation) is not subject to the new rule; (2) even the UI path has a TOCTOU window between loaded_contract_ids() and the async fetch. Lift the duplicate/already-loaded check into the task itself — either by short-circuiting per-ID before fetch or by returning a typed result that distinguishes 'inserted' vs 'already loaded' so callers can react meaningfully.
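A toy sketch of short-circuiting at the task boundary, with `u32` ids standing in for `Identifier`, a hypothetical `FetchOutcome` type, and the async Platform fetch elided:

```rust
use std::collections::HashSet;

// Hypothetical typed per-id result so every caller (UI, MCP, automation)
// sees the same already-loaded rule and can react meaningfully.
#[derive(Debug, PartialEq)]
enum FetchOutcome {
    Fetched(u32),      // would carry the fetched contract in the real task
    AlreadyLoaded(u32),
}

fn fetch_contracts(requested: &[u32], loaded: &HashSet<u32>) -> Vec<FetchOutcome> {
    requested
        .iter()
        .map(|id| {
            if loaded.contains(id) {
                // Short-circuit before any network fetch or DB insert.
                FetchOutcome::AlreadyLoaded(*id)
            } else {
                FetchOutcome::Fetched(*id)
            }
        })
        .collect()
}

fn main() {
    let loaded: HashSet<u32> = [1, 2].into_iter().collect();
    let results = fetch_contracts(&[2, 3], &loaded);
    assert_eq!(
        results,
        vec![FetchOutcome::AlreadyLoaded(2), FetchOutcome::Fetched(3)]
    );
}
```

Rechecking membership inside the task also narrows (though does not fully close) the TOCTOU window between the UI check and the async fetch.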
💬 nitpick: `LIMIT -1` is a SQLite-specific idiom — add a comment so it isn't 'fixed' later
src/database/contracts.rs (lines 283-290)
The new else if offset.is_some() { query.push_str(" LIMIT -1"); } branch relies on SQLite's documented behavior that a negative LIMIT means 'no upper bound'. The codebase is SQLite-only so this is correct, but it's the kind of magic literal that gets 'cleaned up' to i64::MAX by a future contributor. A one-line comment (// SQLite: LIMIT -1 means 'no limit', required when OFFSET is set without LIMIT) prevents that.
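A sketch of the clause builder with that comment in place (the function name and shape here are illustrative, not the actual code):

```rust
fn push_pagination(query: &mut String, limit: Option<u32>, offset: Option<u32>) {
    if let Some(limit) = limit {
        query.push_str(&format!(" LIMIT {limit}"));
    } else if offset.is_some() {
        // SQLite: LIMIT -1 means "no limit"; SQLite only accepts OFFSET
        // after a LIMIT clause, so one is required whenever OFFSET is set.
        query.push_str(" LIMIT -1");
    }
    if let Some(offset) = offset {
        query.push_str(&format!(" OFFSET {offset}"));
    }
}

fn main() {
    let mut q = String::from("SELECT * FROM contract");
    push_pagination(&mut q, None, Some(5));
    assert_eq!(q, "SELECT * FROM contract LIMIT -1 OFFSET 5");

    let mut q = String::from("SELECT * FROM contract");
    push_pagination(&mut q, Some(10), None);
    assert_eq!(q, "SELECT * FROM contract LIMIT 10");
}
```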
🤖 Prompt for all review comments with AI agents
These findings are from an automated code review. Verify each finding against the current code and only fix it if needed.
In `src/ui/contracts_documents/add_contracts_screen.rs`:
- [SUGGESTION] lines 121-134: `loaded_contract_ids` failure surfaces a generic banner and drops the technical detail
`loaded_contract_ids` failure surfaces a generic banner and drops the technical detail. When `self.app_context.loaded_contract_ids()` fails, the screen emits `tracing::warn!("Failed to check loaded contracts before adding: {e}")` and shows a generic banner. The error returned is a typed `rusqlite::Error` and the project convention (CLAUDE.md, 'Error banners' / 'Logging') is to attach upstream errors via `BannerHandle::with_details(e)` so users get a collapsible details panel and the technical detail isn't lost on the floor. Either capture the handle returned by `MessageBanner::set_global` and call `.with_details(e)` on it, or lift the wording into a dedicated `TaskError::ContractListLookupFailed { #[source] rusqlite::Error }` variant.
```diff
@@ -95,6 +237,11 @@ impl AppContext {
         contract_id: Identifier,
         new_contract: &DataContract,
     ) -> Result<()> {
+        if self.is_system_contract_id(&contract_id) {
+            tracing::warn!(?contract_id, "Ignoring request to replace system contract");
+            return Ok(());
+        }
+
         self.db.replace_contract(contract_id, new_contract, self)
     }
```
🟡 Suggestion: replace_contract / remove_contract silently return Ok(()) for system IDs
Both methods log tracing::warn! and return Ok(()) when the ID is a system contract. Callers like broadcast_and_wait_state_transition (which now uses self.replace_contract) can't tell 'persisted' from 'ignored' and may report BackendTaskSuccessResult::UpdatedContract for an op that never happened. The end user sees no banner — only a tracing warning. Encode the outcome in the type system: either return enum ReplaceOutcome { Replaced, IgnoredSystemContract } or add a dedicated TaskError::SystemContractImmutable variant so the caller is forced to surface a meaningful message. Returning the success type for an operation that didn't perform the requested mutation is the kind of silent no-op that makes future regressions invisible.
source: ['claude']
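A minimal sketch of the typed-outcome option, with `Ctx` and `u32` ids as placeholders for the real `AppContext`/`Identifier` (assumptions, not the project's API):

```rust
use std::collections::HashSet;

// Sketch of the suggested outcome type (name taken from the suggestion).
#[derive(Debug, PartialEq)]
enum ReplaceOutcome {
    Replaced,
    IgnoredSystemContract,
}

struct Ctx {
    system_ids: HashSet<u32>,
}

impl Ctx {
    // Instead of silently returning Ok(()), surface which branch ran so
    // the caller must decide how to report it.
    fn replace_contract(&self, contract_id: u32) -> Result<ReplaceOutcome, String> {
        if self.system_ids.contains(&contract_id) {
            return Ok(ReplaceOutcome::IgnoredSystemContract);
        }
        // ... persist the replacement here ...
        Ok(ReplaceOutcome::Replaced)
    }
}

fn main() {
    let ctx = Ctx { system_ids: [1, 2].into_iter().collect() };
    assert_eq!(ctx.replace_contract(1), Ok(ReplaceOutcome::IgnoredSystemContract));
    assert_eq!(ctx.replace_contract(9), Ok(ReplaceOutcome::Replaced));

    // The caller can now warn instead of claiming success:
    match ctx.replace_contract(1).unwrap() {
        ReplaceOutcome::Replaced => println!("contract updated"),
        ReplaceOutcome::IgnoredSystemContract => println!("system contracts are immutable"),
    }
}
```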
```diff
 pub fn remove_contract(
     &self,
     contract_id: &[u8],
     app_context: &AppContext,
 ) -> rusqlite::Result<()> {
-    let network = app_context.network.to_string();
-
-    // 1) remove the contract itself
-    self.execute(
-        "DELETE FROM contract
-         WHERE contract_id = ? AND network = ?",
-        params![contract_id, network],
-    )?;
-
-    Ok(())
+    self.remove_contract_for_network(contract_id, &app_context.network)
+}
+
+/// Network-only variant of [`Self::remove_contract`] for callers (e.g. tests
+/// and free helpers in `context::contract_token_db`) that do not have a
+/// full [`AppContext`] available. Production paths should keep using
+/// [`Self::remove_contract`].
+pub(crate) fn remove_contract_for_network(
+    &self,
+    contract_id: &[u8],
+    network: &Network,
+) -> rusqlite::Result<()> {
+    let network = network.to_string();
+    let conn = self.conn.lock().unwrap();
+    let tx = conn.unchecked_transaction()?;
+
+    tx.execute(
+        "DELETE FROM identity_token_balances
+         WHERE network = ?
+           AND token_id IN (
+             SELECT id FROM token WHERE data_contract_id = ? AND network = ?
+           )",
+        params![&network, contract_id, &network],
+    )?;
+    tx.execute(
+        "DELETE FROM token_order
+         WHERE token_id IN (
+             SELECT id FROM token WHERE data_contract_id = ? AND network = ?
+           )",
+        params![contract_id, &network],
+    )?;
+    tx.execute(
+        "DELETE FROM token WHERE data_contract_id = ? AND network = ?",
+        params![contract_id, &network],
+    )?;
+    tx.execute(
+        "DELETE FROM contract
+         WHERE contract_id = ? AND network = ?",
+        params![contract_id, &network],
+    )?;
+
+    tx.commit()
 }
```
🟡 Suggestion: User-driven remove_contract now cascade-deletes tokens — unrelated change with no test
remove_contract is rewritten to delegate to remove_contract_for_network, which performs explicit cascading deletes across identity_token_balances, token_order, token, and contract inside a transaction. Previously remove_contract just deleted the contract row and (incorrectly) relied on FK CASCADE, which never fires because no PRAGMA foreign_keys = ON is run on production connections. So this is plausibly a fix — but: (1) it isn't called out in the PR scope ('prevent duplicate loaded contracts'); (2) the new tests (replace_contract_is_noop_for_system_contract_ids, remove_contract_is_noop_for_system_contract_ids) only verify the no-op branch; the actual cascade behavior on a non-system contract with associated tokens/balances/orders is silently introduced and untested; (3) any user with a stale token row tied to a previously removed contract will see those rows deleted on the next removal of any contract. Either split this into its own PR with dedicated tests, or add a test that inserts a user contract + token + balance + order, calls remove_contract, and asserts all related rows are gone.
source: ['claude']
```rust
pub(crate) fn cleanup_persisted_system_contracts_for(
    db: &Database,
    system_contract_ids: &[Identifier],
    network: &Network,
) -> Result<()> {
    for contract_id in system_contract_ids {
        db.remove_contract_for_network(contract_id.as_bytes(), network)?;
    }
    Ok(())
}
```
🟡 Suggestion: Startup cleanup is not atomic across system IDs — partial cleanup persists silently
cleanup_persisted_system_contracts_for iterates system_contract_ids and calls remove_contract_for_network per ID, each opening its own SQLite transaction. If the second call fails, the first ID's deletion is already committed and the remaining IDs are skipped. The caller in context/mod.rs:375 only logs a warning, so the partially-cleaned state silently persists across runs. Fold the loop into a single transaction (acquire unchecked_transaction once, run all dependent-table + contract DELETEs scoped to the system-id list, commit at the end) so cleanup is all-or-nothing. This also avoids 5× lock acquisitions on the shared Mutex<Connection> at startup.
source: ['claude']
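The all-or-nothing behavior this suggests can be modeled without SQLite. Below is a std-only sketch (`Store`, `remove_one`, and the hard-coded failing ID are illustrative stand-ins, not the project's types) contrasting per-ID commits with staging every delete and committing once:

```rust
use std::collections::HashSet;

// Stand-in for the database; the real code would hold SQLite tables.
#[derive(Clone)]
struct Store {
    contracts: HashSet<u32>,
}

fn remove_one(store: &mut Store, id: u32) -> Result<(), String> {
    if id == 99 {
        return Err("disk error".into()); // simulated mid-cleanup failure
    }
    store.contracts.remove(&id);
    Ok(())
}

/// Per-ID commits: each successful delete persists even if a later one fails.
fn remove_all_per_id(store: &mut Store, ids: &[u32]) -> Result<(), String> {
    for &id in ids {
        remove_one(store, id)?;
    }
    Ok(())
}

/// Single "transaction": work on a staged copy, swap it in only on full success.
fn remove_all_atomic(store: &mut Store, ids: &[u32]) -> Result<(), String> {
    let mut staged = store.clone();
    for &id in ids {
        remove_one(&mut staged, id)?;
    }
    *store = staged; // commit point
    Ok(())
}

fn main() {
    let ids = [1, 99, 3]; // 99 fails partway through
    let mut a = Store { contracts: [1, 2, 3, 99].into_iter().collect() };
    let mut b = a.clone();

    assert!(remove_all_per_id(&mut a, &ids).is_err());
    assert!(!a.contracts.contains(&1)); // the first delete was already committed

    assert!(remove_all_atomic(&mut b, &ids).is_err());
    assert!(b.contracts.contains(&1)); // nothing was committed
}
```

In the real code the staged copy corresponds to a single uncommitted rusqlite transaction: run every DELETE on one transaction and call commit only after the loop completes.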
```rust
let existing_contract_ids = match self.app_context.loaded_contract_ids() {
    Ok(ids) => ids,
    Err(e) => {
        tracing::warn!("Failed to check loaded contracts before adding: {e}");
        self.add_contracts_status = AddContractsStatus::Error;
        MessageBanner::set_global(
            self.app_context.egui_ctx(),
            "Unable to check whether those contracts are already loaded. Please try again.",
            MessageType::Error,
        );
        return AppAction::None;
    }
};
```
🟡 Suggestion: loaded_contract_ids failure surfaces a generic banner and drops the technical detail
When `self.app_context.loaded_contract_ids()` fails, the screen emits `tracing::warn!("Failed to check loaded contracts before adding: {e}")` and shows a generic banner. The error returned is a typed `rusqlite::Error`, and the project convention (CLAUDE.md, 'Error banners' / 'Logging') is to attach upstream errors via `BannerHandle::with_details(e)` so users get a collapsible details panel and the technical detail isn't dropped on the floor. Either capture the handle returned by `MessageBanner::set_global` and call `.with_details(e)` on it, or lift the wording into a dedicated `TaskError::ContractListLookupFailed { #[source] rusqlite::Error }` variant.
source: ['claude']
```rust
pub fn loaded_contract_ids(&self) -> Result<Vec<Identifier>> {
    let mut ids: Vec<Identifier> = self.system_contract_ids().to_vec();
    ids.extend(self.db.get_contract_ids_for_network(&self.network)?);
    Ok(ids)
}
```
💬 Nitpick: loaded_contract_ids may return duplicates when a system ID is also persisted
loaded_contract_ids concatenates system_contract_ids() with db.get_contract_ids_for_network(...). The DB-side ID lister does not exclude system IDs (unlike Database::get_contracts, which filters via NOT IN (...)). The startup cleanup_persisted_system_contracts and the is_system_contract_id guards in insert_contract_if_not_exists / replace_contract / remove_contract make collision unlikely, but they are not airtight (partial-failure cleanup, schema migration, external mutation). Today's consumers (find_already_loaded_id, transition_visualizer_screen) only do contains checks and tolerate duplicates — but the API name suggests a unique set, and a future caller using len() would silently double-count. Either filter system IDs out of the DB result here or return a HashSet<Identifier>.
source: ['claude']
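A minimal std-only illustration of the double-count risk (with `Identifier` shrunk to a 4-byte stand-in; `loaded_ids_concat` and `loaded_ids_unique` are hypothetical names, not the project's API):

```rust
use std::collections::BTreeSet;

// Stand-in for the real 32-byte Identifier.
type Identifier = [u8; 4];

/// Concatenation double-counts an ID present in both sources.
fn loaded_ids_concat(system: &[Identifier], db: &[Identifier]) -> Vec<Identifier> {
    let mut ids = system.to_vec();
    ids.extend_from_slice(db);
    ids
}

/// Set-based variant: the membership semantics the API name implies.
fn loaded_ids_unique(system: &[Identifier], db: &[Identifier]) -> BTreeSet<Identifier> {
    system.iter().chain(db.iter()).copied().collect()
}

fn main() {
    let system: [Identifier; 2] = [[1, 1, 1, 1], [2, 2, 2, 2]];
    // A stray persisted row reusing a system ID.
    let db: [Identifier; 2] = [[2, 2, 2, 2], [3, 3, 3, 3]];

    assert_eq!(loaded_ids_concat(&system, &db).len(), 4); // double-counts
    assert_eq!(loaded_ids_unique(&system, &db).len(), 3); // unique set
}
```

Callers doing only `contains` checks are unaffected either way; the set variant protects the first caller that counts or keys by ID.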
```diff
     pub fn get_contracts_arc(
         &self,
         limit: Option<u32>,
         offset: Option<u32>,
     ) -> Result<Vec<QualifiedContractRef>> {
+        let system_contract_count = self.system_contract_arcs().len() as u32;
+        let pagination = compute_contract_pagination(system_contract_count, limit, offset);

-        // Add the keyword search contract to the list
-        let keyword_search_contract = QualifiedContract {
-            contract: Arc::clone(&self.keyword_search_contract).as_ref().clone(),
-            alias: Some("keyword_search".to_string()),
-        };
+        let mut contracts = self
+            .system_contract_arcs()
+            .into_iter()
+            .skip(pagination.system_skip)
+            .take(pagination.system_take)
+            .map(|(system_contract, alias)| QualifiedContractRef {
+                contract: Arc::clone(system_contract),
+                alias: Some(alias.to_string()),
+            })
+            .collect::<Vec<_>>();

-        // Insert the keyword search contract at 3
-        contracts.insert(3, keyword_search_contract);
+        if pagination.db_limit != Some(0) {
+            contracts.extend(
+                self.db
+                    .get_contracts(self, pagination.db_limit, Some(pagination.db_offset))?
+                    .into_iter()
+                    .map(|qc| QualifiedContractRef {
+                        contract: Arc::new(qc.contract),
+                        alias: qc.alias,
+                    }),
+            );
+        }

-        // Add the DashPay contract to the list
-        let dashpay_contract = QualifiedContract {
-            contract: Arc::clone(&self.dashpay_contract).as_ref().clone(),
-            alias: Some("dashpay".to_string()),
-        };
+        Ok(contracts)
+    }

-        // Insert the DashPay contract at 4
-        contracts.insert(4, dashpay_contract);
+    /// Returns the IDs of every loaded contract (system + database) without
+    /// materializing the contract bodies. Use this for cheap membership
+    /// checks ("is this contract id already loaded?").
+    pub fn loaded_contract_ids(&self) -> Result<Vec<Identifier>> {
+        let mut ids: Vec<Identifier> = self.system_contract_ids().to_vec();
+        ids.extend(self.db.get_contract_ids_for_network(&self.network)?);
+        Ok(ids)
+    }

-        Ok(contracts)
+    /// Returns only the user-added (database-backed) contracts, skipping the
+    /// built-in system contracts entirely. Callers that immediately filter
+    /// out system contracts should use this rather than building them only to
+    /// discard them.
+    pub fn get_user_contracts(&self) -> Result<Vec<QualifiedContract>> {
+        self.db.get_contracts(self, None, None)
+    }
```
💬 Nitpick: Newly-added listing helpers are pub but only used inside the crate
get_contracts_arc, loaded_contract_ids, and get_user_contracts are declared pub, but every consumer is inside dash-evo-tool. The companion helpers added in this PR (system_contract_ids, system_contract_by_id, system_contract_arc_by_id, is_system_contract_id, cleanup_persisted_system_contracts) are correctly pub(crate). Tightening the listing helpers to pub(crate) matches that discipline and avoids accidental coupling from feature crates (det_cli, mcp). The pre-existing pub get_contracts is grandfathered, but the new helpers have no such constraint.
source: ['claude']
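For reference, the arithmetic such a pagination split has to perform can be sketched as follows. The field names mirror the quoted `get_contracts_arc`; the body of `compute_contract_pagination` here is an illustrative reconstruction, not the project's actual implementation:

```rust
/// Sketch: split a caller-facing (limit, offset) across the in-memory system
/// contracts, which occupy positions 0..system_count, and the DB rows after them.
struct Pagination {
    system_skip: usize,
    system_take: usize,
    db_limit: Option<u32>,
    db_offset: u32,
}

fn compute_contract_pagination(
    system_count: u32,
    limit: Option<u32>,
    offset: Option<u32>,
) -> Pagination {
    let offset = offset.unwrap_or(0);
    // The offset consumes system contracts first.
    let system_skip = offset.min(system_count);
    let system_take = match limit {
        Some(l) => l.min(system_count - system_skip),
        None => system_count - system_skip,
    };
    Pagination {
        system_skip: system_skip as usize,
        system_take: system_take as usize,
        // Whatever budget the system slice did not consume falls through to the DB.
        db_limit: limit.map(|l| l - system_take),
        db_offset: offset.saturating_sub(system_count),
    }
}

fn main() {
    // Page 1: limit 3 over 5 system contracts leaves zero budget for the DB.
    let p = compute_contract_pagination(5, Some(3), Some(0));
    assert_eq!((p.system_skip, p.system_take, p.db_limit, p.db_offset), (0, 3, Some(0), 0));

    // Page 3: offset 6 skips past all 5 system contracts into the DB rows.
    let p = compute_contract_pagination(5, Some(3), Some(6));
    assert_eq!((p.system_skip, p.system_take, p.db_limit, p.db_offset), (5, 0, Some(3), 1));
}
```

Under this reading, the `db_limit != Some(0)` guard in the quoted code is what skips the DB query when a page is fully satisfied by system contracts.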
Force-pushed from fed3724 to b215860
thepastaclaw left a comment
Code Review
Cumulative round of fixes from prior cycles is solid: duplicate / already-loaded / system-immutable rejection now lifted to the task boundary, replace_contract / remove_contract surface SystemContractImmutable, cleanup runs in a single transaction, and remove_contract_cascades_dependent_rows covers the new cascade. Three carry-forward suggestions remain — the SQL NOT IN regression test still seeds non-deserializable PLACEHOLDER bytes (so the filter isn't actually exercised), loaded_contract_ids includes IDs whose blobs fail deserialization while get_contracts/get_contract_by_id silently drop them (user is told 'already loaded' for an invisible row), and the screen-level wiring of the new rejection branches has no kittest. Codex's blocking finding about cross-network token_order deletion is overstated: token.id is a unique PRIMARY KEY so only one network's row exists per token_id at any moment, and the old remove_contract path either cascaded the same way (FKs on) or left orphans (FKs off) — the new explicit cascade is not a behavioral regression introduced by this PR.
Reviewed commit: b215860
🟡 4 suggestion(s) | 💬 3 nitpick(s)
1 additional finding
💬 nitpick: LIMIT -1 is a SQLite-specific idiom — add a comment so it isn't 'fixed' later
src/database/contracts.rs (lines 283-290)
The new else if offset.is_some() { query.push_str(" LIMIT -1"); } branch relies on SQLite's documented behavior that a negative LIMIT means 'no upper bound' and is required when OFFSET is specified without LIMIT. The codebase is SQLite-only so this is correct, but the -1 literal is the kind of magic value a future contributor might 'clean up' to i64::MAX (which would silently change pagination semantics on large tables) or remove altogether (which would make a parameter-bound query produce a syntax error). A one-line comment (// SQLite: LIMIT -1 means 'no limit', required when OFFSET is set without LIMIT) prevents that.
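The rule generalizes to a small clause builder. This is a hedged sketch of the SQLite idiom, not the project's query code:

```rust
/// SQLite treats a negative LIMIT as "no upper bound", and an OFFSET clause
/// is only valid after a LIMIT clause, so "LIMIT -1 OFFSET n" expresses
/// "skip n rows, no limit".
fn limit_offset_clause(limit: Option<u32>, offset: Option<u32>) -> String {
    let mut clause = String::new();
    if let Some(limit) = limit {
        clause.push_str(&format!(" LIMIT {limit}"));
    } else if offset.is_some() {
        // SQLite: LIMIT -1 means "no limit"; required when OFFSET is set
        // without LIMIT. Do not "clean up" to i64::MAX or drop it.
        clause.push_str(" LIMIT -1");
    }
    if let Some(offset) = offset {
        clause.push_str(&format!(" OFFSET {offset}"));
    }
    clause
}

fn main() {
    assert_eq!(limit_offset_clause(Some(10), Some(5)), " LIMIT 10 OFFSET 5");
    assert_eq!(limit_offset_clause(None, Some(5)), " LIMIT -1 OFFSET 5");
    assert_eq!(limit_offset_clause(Some(7), None), " LIMIT 7");
    assert_eq!(limit_offset_clause(None, None), "");
}
```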
```rust
fn get_user_contracts_excludes_system_contracts() {
    let temp_dir = tempfile::tempdir().expect("create temp data dir");
    let db = Arc::new(create_test_database().expect("create in-memory db"));
    let app_context = make_test_app_context(temp_dir.path(), Arc::clone(&db));

    // Insert a row whose id collides with a system contract — even if
    // such a row exists (e.g. legacy data), get_user_contracts should
    // not return it because db.get_contracts excludes system ids.
    let system_id = app_context.dpns_contract.id();
    insert_contract_row(&db, &system_id, &app_context.network);

    let user_contracts = app_context
        .get_user_contracts()
        .expect("get_user_contracts succeeds");
    for qc in &user_contracts {
        assert!(
            !app_context.is_system_contract_id(&qc.contract.id()),
            "get_user_contracts must not return any system contract"
        );
    }
}
```
🟡 Suggestion: get_user_contracts_excludes_system_contracts cannot detect regression of the new SQL NOT IN filter
The test seeds the colliding row via insert_contract_row(&db, &system_id, ...) at line 907, which writes the PLACEHOLDER = b"placeholder-not-a-real-contract" bytes. Database::get_contracts (src/database/contracts.rs:317-334) deserialises every row via DataContract::versioned_deserialize and continues on error, so the placeholder row is silently dropped at the deserialisation step regardless of the SQL filter. If the new AND contract_id NOT IN ({placeholders}) clause introduced in this PR were removed, the placeholder row would still be filtered out by the deserialisation skip and the test would still pass — it cannot prove the SQL exclusion works. The PR already has insert_real_contract_row (lines 509-529) that produces round-trippable bytes; reuse it to seed a real serialized contract under a system-contract id, then assert both get_user_contracts() and get_contracts(None, None) omit it while still returning sibling user rows. That is the only way to actually exercise the new SQL filter end-to-end.
source: ['claude', 'codex']
```rust
pub fn loaded_contract_ids(&self) -> Result<Vec<Identifier>> {
    let mut ids: Vec<Identifier> = self.system_contract_ids().to_vec();
    ids.extend(self.db.get_contract_ids_for_network(&self.network)?);
    Ok(ids)
}
```
🟡 Suggestion: loaded_contract_ids reports corrupted DB rows as already-loaded but they are invisible to listings and lookups
loaded_contract_ids extends the system IDs with Database::get_contract_ids_for_network() (src/database/contracts.rs:344-364), which only validates Identifier::from_bytes on the 32-byte key — it never touches the contract blob column. In contrast, Database::get_contracts (lines 322-334) and Database::get_contract_by_id (lines 132-136) silently continue/return None past rows whose blob fails DataContract::versioned_deserialize. The new task-boundary guard in ContractTask::FetchContracts (src/backend_task/contract.rs:87-94) and the screen guard in add_contracts_screen.rs:121-149 both consult loaded_contract_ids. Net effect: a row with a valid 32-byte ID but a malformed contract body disappears from listings and lookups, but add_contracts_clicked and the task path both report it as 'already loaded' — the user can no longer self-resolve by re-fetching, which violates the project's 'every error must include a concrete user-resolvable action' rule. Either filter loaded_contract_ids to the same invariant as the visible set (skip rows whose blob fails to deserialize), or have a cleanup path delete malformed rows so the membership check stays in sync with what the rest of the app sees.
source: ['claude', 'codex']
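One way to keep the membership check aligned with the visible set is to apply the same decode filter when collecting IDs. A std-only sketch, where `decode` is a toy stand-in for `DataContract::versioned_deserialize` and the row tuples stand in for the real schema:

```rust
/// Toy decoder: a blob "decodes" only if it is UTF-8 and carries the
/// expected prefix. Stands in for DataContract::versioned_deserialize.
fn decode(blob: &[u8]) -> Option<&str> {
    std::str::from_utf8(blob).ok().filter(|s| s.starts_with("contract:"))
}

/// Collect IDs only for rows the listings/lookups would also surface, so
/// the "already loaded" check never names an invisible row.
fn loaded_ids_consistent(rows: &[(u32, &[u8])]) -> Vec<u32> {
    rows.iter()
        .filter(|(_, blob)| decode(blob).is_some())
        .map(|(id, _)| *id)
        .collect()
}

fn main() {
    let rows: Vec<(u32, &[u8])> = vec![
        (1, b"contract:dpns".as_slice()),
        (2, b"\xff\xfe garbage".as_slice()), // malformed blob: invisible to listings
    ];
    assert_eq!(loaded_ids_consistent(&rows), vec![1]);
}
```

The alternative path the comment mentions, deleting malformed rows in a cleanup pass, achieves the same invariant from the other direction.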
```rust
fn add_contracts_clicked(&mut self) -> AppAction {
    match self.parse_identifiers() {
        Ok(identifiers) => {
            if let Some(identifier) = find_duplicate_input(&identifiers) {
                let message = format!(
                    "Contract {} was entered more than once. Remove the duplicate entry before adding contracts.",
                    identifier.to_string(Encoding::Base58)
                );
                self.add_contracts_status = AddContractsStatus::Error;
                MessageBanner::set_global(
                    self.app_context.egui_ctx(),
                    &message,
                    MessageType::Error,
                );
                return AppAction::None;
            }

            let existing_contract_ids = match self.app_context.loaded_contract_ids() {
                Ok(ids) => ids,
                Err(e) => {
                    tracing::warn!("Failed to check loaded contracts before adding: {e}");
                    self.add_contracts_status = AddContractsStatus::Error;
                    let handle = MessageBanner::set_global(
                        self.app_context.egui_ctx(),
                        "Unable to check whether those contracts are already loaded. Please try again.",
                        MessageType::Error,
                    );
                    handle.with_details(e);
                    return AppAction::None;
                }
            };

            if let Some(identifier) =
                find_already_loaded_id(&identifiers, &existing_contract_ids)
            {
                let message = format!(
                    "Contract {} is already loaded. Select it from the existing contracts list or enter a different contract ID.",
                    identifier.to_string(Encoding::Base58)
                );
                self.add_contracts_status = AddContractsStatus::Error;
                MessageBanner::set_global(
                    self.app_context.egui_ctx(),
                    &message,
                    MessageType::Error,
                );
                return AppAction::None;
```
🟡 Suggestion: Screen-level duplicate / already-loaded / lookup-failure branches in add_contracts_clicked have no UI test
add_contracts_clicked now owns four user-visible early-return branches (parse error, duplicate input, loaded_contract_ids failure, already-loaded). Each sets add_contracts_status = Error, posts a MessageBanner::set_global(...) (the lookup-failure case also attaches with_details(e) — the only place that uses the typed rusqlite::Error for the collapsible details panel), and returns AppAction::None instead of dispatching ContractTask::FetchContracts. The new tests only cover the pure helpers find_duplicate_input / find_already_loaded_id; the screen-level wiring (status state, suppressed task dispatch, with_details(e) lifecycle) is uncovered. A tests/kittest/ case driving the screen through each branch would prevent silent regressions of the visible behavior when add_contracts_clicked is refactored — and would lock in the with_details lifecycle that the project conventions specifically call out. The task-boundary check is defense in depth but does not exercise the screen wiring.
source: ['claude', 'codex']
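For context, the two pure helpers the tests do cover are straightforward to sketch. The bodies here are plausible reconstructions over a 4-byte `Identifier` stand-in, not the actual implementations:

```rust
use std::collections::HashSet;

// Stand-in for the real 32-byte Identifier.
type Identifier = [u8; 4];

/// First identifier the user entered more than once, if any.
fn find_duplicate_input(ids: &[Identifier]) -> Option<Identifier> {
    let mut seen = HashSet::new();
    // HashSet::insert returns false for an already-seen value.
    ids.iter().copied().find(|id| !seen.insert(*id))
}

/// First entered identifier that is already loaded, if any.
fn find_already_loaded_id(ids: &[Identifier], existing: &[Identifier]) -> Option<Identifier> {
    let existing: HashSet<_> = existing.iter().copied().collect();
    ids.iter().copied().find(|id| existing.contains(id))
}

fn main() {
    let a: Identifier = [1, 0, 0, 0];
    let b: Identifier = [2, 0, 0, 0];

    assert_eq!(find_duplicate_input(&[a, b, a]), Some(a));
    assert_eq!(find_duplicate_input(&[a, b]), None);
    assert_eq!(find_already_loaded_id(&[a, b], &[b]), Some(b));
    assert_eq!(find_already_loaded_id(&[a], &[b]), None);
}
```

A kittest case would exercise the wiring around these helpers (status state, banner lifecycle, suppressed task dispatch), which unit tests of the pure functions cannot.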
```rust
tx.execute(
    "DELETE FROM token_order
     WHERE token_id IN (
         SELECT id FROM token WHERE data_contract_id = ? AND network = ?
     )",
    params![contract_id, &network],
)?;
```
🟡 Suggestion: Explicit DELETE FROM token_order depends on a token row whose network field can be stale across networks
token_order (src/database/tokens.rs:513-521) has no network column — its rows are scoped indirectly via the FK on token.id. The new explicit DELETE FROM token_order WHERE token_id IN (SELECT id FROM token WHERE data_contract_id = ? AND network = ?) (and the equivalent in cleanup_persisted_system_contracts_for at src/context/contract_token_db.rs:101-115) thus depends on the token table's network field being authoritative. Because token.id BLOB PRIMARY KEY is unique across rows and insert_token uses ON CONFLICT(id) DO UPDATE SET network = excluded.network (src/database/tokens.rs:124-142), the token row's network can be overwritten when the same contract is added on a second network — leaving any prior token_order rows pointing at a token whose row now claims a different network. Removing that contract on the now-current network will then delete those previously-orphaned ordering rows. Today this is largely masked by save_token_order doing a global DELETE FROM token_order followed by re-insert (src/database/tokens.rs:534), so the table is essentially network-agnostic by design. Worth either adding a network column to token_order and scoping deletions by it, or documenting that token-level state is single-network at a time and the cross-network token_order surface is intentional.
source: ['codex']
```rust
pub fn loaded_contract_ids(&self) -> Result<Vec<Identifier>> {
    let mut ids: Vec<Identifier> = self.system_contract_ids().to_vec();
    ids.extend(self.db.get_contract_ids_for_network(&self.network)?);
    Ok(ids)
}
```
💬 Nitpick: loaded_contract_ids may return duplicates when a system ID is also persisted
loaded_contract_ids concatenates system_contract_ids() with db.get_contract_ids_for_network(...). The DB-side ID lister (src/database/contracts.rs:344-364) does not exclude system IDs (unlike Database::get_contracts, which now filters via NOT IN (...)). The startup cleanup_persisted_system_contracts (now atomic) and the is_system_contract_id guards in insert_contract_if_not_exists / replace_contract / remove_contract make collision unlikely, but they are not airtight (cleanup runs only at startup; nothing prevents an external SQLite write or schema migration leaving a stray row mid-session). Today's consumers (find_already_loaded_id, add_contracts_clicked, transition_visualizer_screen) only do contains checks and tolerate duplicates — but the API name suggests a unique set, and a future caller using len() or building a HashMap keyed by ID would silently double-count. Either filter system IDs out of the DB result here (or in get_contract_ids_for_network), or return a HashSet<Identifier>.
source: ['claude']
```diff
 pub fn get_contracts_arc(
     &self,
     limit: Option<u32>,
     offset: Option<u32>,
 ) -> Result<Vec<QualifiedContractRef>> {
+    let system_contract_count = self.system_contract_arcs().len() as u32;
+    let pagination = compute_contract_pagination(system_contract_count, limit, offset);

-    // Insert the withdrawal contract at 2
-    contracts.insert(2, withdraws_contract);
+    let mut contracts = self
+        .system_contract_arcs()
+        .into_iter()
+        .skip(pagination.system_skip)
+        .take(pagination.system_take)
+        .map(|(system_contract, alias)| QualifiedContractRef {
+            contract: Arc::clone(system_contract),
+            alias: Some(alias.to_string()),
+        })
+        .collect::<Vec<_>>();

-    // Add the keyword search contract to the list
-    let keyword_search_contract = QualifiedContract {
-        contract: Arc::clone(&self.keyword_search_contract).as_ref().clone(),
-        alias: Some("keyword_search".to_string()),
-    };

-    // Insert the keyword search contract at 3
-    contracts.insert(3, keyword_search_contract);
+    if pagination.db_limit != Some(0) {
+        contracts.extend(
+            self.db
+                .get_contracts(self, pagination.db_limit, Some(pagination.db_offset))?
+                .into_iter()
+                .map(|qc| QualifiedContractRef {
+                    contract: Arc::new(qc.contract),
+                    alias: qc.alias,
+                }),
+        );
+    }

-    // Add the DashPay contract to the list
-    let dashpay_contract = QualifiedContract {
-        contract: Arc::clone(&self.dashpay_contract).as_ref().clone(),
-        alias: Some("dashpay".to_string()),
-    };
+    Ok(contracts)
+}

-    // Insert the DashPay contract at 4
-    contracts.insert(4, dashpay_contract);
+/// Returns the IDs of every loaded contract (system + database) without
+/// materializing the contract bodies. Use this for cheap membership
+/// checks ("is this contract id already loaded?").
+pub fn loaded_contract_ids(&self) -> Result<Vec<Identifier>> {
+    let mut ids: Vec<Identifier> = self.system_contract_ids().to_vec();
+    ids.extend(self.db.get_contract_ids_for_network(&self.network)?);
+    Ok(ids)
+}

-    Ok(contracts)
+/// Returns only the user-added (database-backed) contracts, skipping the
+/// built-in system contracts entirely. Callers that immediately filter
+/// out system contracts should use this rather than building them only to
+/// discard them.
+pub fn get_user_contracts(&self) -> Result<Vec<QualifiedContract>> {
+    self.db.get_contracts(self, None, None)
+}
```
💬 Nitpick: New listing helpers are pub but only used inside this crate
get_contracts_arc, loaded_contract_ids, and get_user_contracts are declared pub, but every consumer is inside the dash-evo-tool library crate (src/ui/**, src/backend_task/contract.rs). The companion helpers added in this PR (system_contract_ids, system_contract_by_id, system_contract_arc_by_id, is_system_contract_id, cleanup_persisted_system_contracts) are correctly pub(crate). Tightening these listing helpers to pub(crate) matches that discipline and avoids accidental coupling from feature crates / future binaries (det_cli, mcp). The pre-existing pub get_contracts is grandfathered, but the new helpers have no such constraint.
source: ['claude']
Force-pushed b215860 to e43bec8 (Compare)
thepastaclaw left a comment
Code Review
All previously-blocking findings on PR #854 are resolved at e43bec8 — system-contract immutability is enforced at the task boundary, the SQL NOT IN exclusion test now seeds real serialized contracts, loaded_contract_ids skips malformed blobs (with regression test), startup cleanup runs in a single transaction, and tests/kittest/add_contracts_screen.rs covers all four early-return branches with the with_details lifecycle. Remaining concerns are suggestion- and nitpick-level: a stale docstring on loaded_contract_ids, type-erasure of Identifier in three new TaskError variants, duplicated error wording between the screen and TaskError templates, and a few carried-forward nits.
Reviewed commit: e43bec8
🟡 3 suggestion(s) | 💬 4 nitpick(s)
1 additional finding
💬 nitpick: LIMIT -1 is a SQLite-specific idiom — add a comment so it isn't 'fixed' later
src/database/contracts.rs (lines 283-290)
The else if offset.is_some() { query.push_str(" LIMIT -1"); } branch relies on SQLite's documented behaviour that a negative LIMIT means 'no upper bound' and is required when OFFSET is bound without LIMIT. Codebase is SQLite-only so this is correct, but the magic -1 is the kind of literal a future contributor might 'clean up' to i64::MAX (silently changing pagination semantics on large tables) or remove altogether (turning a parameter-bound query into a SQLite syntax error at runtime). A one-line comment (// SQLite: LIMIT -1 means 'no upper bound', required when OFFSET is bound without LIMIT) prevents that. Carried forward from prior reviews.
```rust
/// A user-driven mutation targeted a built-in system contract. System
/// contracts (DPNS, DashPay, withdrawals, token history, keyword search)
/// are managed by the application and cannot be modified or removed.
#[error(
    "Contract {contract_id} is a built-in system contract and cannot be modified or removed. \
     Use a different contract."
)]
SystemContractImmutable {
    /// Base58-encoded ID of the system contract whose mutation was rejected.
    contract_id: String,
},

/// The same contract identifier appeared more than once in a single
/// add-contracts request.
#[error(
    "Contract {contract_id} was entered more than once. Remove the duplicate entry before adding contracts."
)]
DuplicateContractInRequest {
    /// Base58-encoded ID of the duplicated contract.
    contract_id: String,
},

/// An add-contracts request referenced a contract that is already loaded
/// (either persisted in the local database or one of the built-in system
/// contracts).
#[error(
    "Contract {contract_id} is already loaded. Select it from the existing contracts list or enter a different contract ID."
)]
ContractAlreadyLoaded {
    /// Base58-encoded ID of the already-loaded contract.
    contract_id: String,
},
```
🟡 Suggestion: New TaskError variants store Identifier as Base58 String, erasing the typed value
SystemContractImmutable, DuplicateContractInRequest, and ContractAlreadyLoaded each carry contract_id: String. Every construction site (contract_token_db.rs:304/320, contract.rs:83/91/268/276) re-runs identifier.to_string(Encoding::Base58), so the typed Identifier is erased before the variant is built. Downstream consumers (future MCP tools, scripting, structural matching) cannot recover the original ID. Storing Identifier directly and formatting via Display in the #[error(...)] template (or a small wrapper) preserves the type, centralises Base58 encoding, and keeps the user-visible string identical. Storing typed IDs is consistent with the project rule allowing Base58 IDs in user-facing messages — the concern is the type erasure, not the encoding.
source: ['claude']
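A minimal std-only sketch of the typed-field alternative this suggestion describes. The real code uses `thiserror` and dpp's 32-byte `Identifier` with Base58 encoding; the 4-byte stand-in, the hand-written `Display` impls, and the hex encoding here are all simplifications for illustration:

```rust
use std::fmt;

// Hypothetical 4-byte stand-in for dpp's 32-byte `Identifier`.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Identifier([u8; 4]);

impl fmt::Display for Identifier {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Real code would Base58-encode; hex keeps the sketch dependency-free.
        for b in self.0 {
            write!(f, "{b:02x}")?;
        }
        Ok(())
    }
}

// Storing the typed ID instead of a pre-rendered String keeps the value
// recoverable by downstream consumers; Display centralises encoding once.
#[derive(Debug)]
enum TaskError {
    ContractAlreadyLoaded { contract_id: Identifier },
}

impl fmt::Display for TaskError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            TaskError::ContractAlreadyLoaded { contract_id } => write!(
                f,
                "Contract {contract_id} is already loaded. Select it from the existing contracts list or enter a different contract ID."
            ),
        }
    }
}
```

With `thiserror`, the same effect comes from keeping `contract_id: Identifier` in the variant and letting the `#[error(...)]` template call `Display`, so the user-visible string is unchanged while the typed value survives.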
```rust
fn add_contracts_clicked(&mut self) -> AppAction {
    match self.parse_identifiers() {
        Ok(identifiers) => {
            if let Some(identifier) = find_duplicate_input(&identifiers) {
                let message = format!(
                    "Contract {} was entered more than once. Remove the duplicate entry before adding contracts.",
                    identifier.to_string(Encoding::Base58)
                );
                self.add_contracts_status = AddContractsStatus::Error;
                MessageBanner::set_global(
                    self.app_context.egui_ctx(),
                    &message,
                    MessageType::Error,
                );
                return AppAction::None;
            }

            let existing_contract_ids = match self.app_context.loaded_contract_ids() {
                Ok(ids) => ids,
                Err(e) => {
                    tracing::warn!("Failed to check loaded contracts before adding: {e}");
                    self.add_contracts_status = AddContractsStatus::Error;
                    let handle = MessageBanner::set_global(
                        self.app_context.egui_ctx(),
                        "Unable to check whether those contracts are already loaded. Please try again.",
                        MessageType::Error,
                    );
                    handle.with_details(e);
                    return AppAction::None;
                }
            };

            if let Some(identifier) =
                find_already_loaded_id(&identifiers, &existing_contract_ids)
            {
                let message = format!(
                    "Contract {} is already loaded. Select it from the existing contracts list or enter a different contract ID.",
                    identifier.to_string(Encoding::Base58)
                );
                self.add_contracts_status = AddContractsStatus::Error;
                MessageBanner::set_global(
                    self.app_context.egui_ctx(),
                    &message,
                    MessageType::Error,
                );
                return AppAction::None;
            }
```
🟡 Suggestion: Screen duplicates the wording already centralised in DuplicateContractInRequest / ContractAlreadyLoaded
Lines 108-117 and 139-148 build error strings via format!() that are byte-for-byte identical to the #[error(...)] templates on TaskError::DuplicateContractInRequest and TaskError::ContractAlreadyLoaded (src/backend_task/error.rs:475-491). Project conventions explicitly say repeated wording belongs in a TaskError variant — but here the variants exist and the screen still hand-rolls the same literals, creating two parallel sources of truth. Construct the variant and call Display on it (e.g. TaskError::DuplicateContractInRequest{contract_id: ...}.to_string()) so the wording lives in exactly one place. The screen-level pre-check itself is correct defense-in-depth; only the duplicated string is the issue.
source: ['claude']
```rust
/// Returns the IDs of every loaded contract (system + database) without
/// materializing the contract bodies. Use this for cheap membership
/// checks ("is this contract id already loaded?").
pub fn loaded_contract_ids(&self) -> Result<Vec<Identifier>> {
    let mut ids: Vec<Identifier> = self.system_contract_ids().to_vec();
    ids.extend(self.db.get_contract_ids_for_network(self)?);
    Ok(ids)
}
```
🟡 Suggestion: loaded_contract_ids docstring still claims 'cheap membership / no body materialization' but the impl now deserialises every blob
The docstring promises cheap membership checks 'without materializing the contract bodies', but Database::get_contract_ids_for_network (src/database/contracts.rs:347-384) was changed to SELECT contract_id, contract and run DataContract::versioned_deserialize on every row to keep loaded_contract_ids consistent with the visible set. So the call now fully materialises bodies (just discarding the deserialised value). This is fine for today's call sites — Add Contracts click in add_contracts_screen.rs:121, transition_visualizer_screen.rs:462-467, ContractTask::FetchContracts (backend_task/contract.rs:87) and SaveDataContract (:272) — but the docstring will mislead the next contributor into picking this helper over a real cheap-set check, and tightly couples membership to platform-version deserialisation cost. Either reword to reflect the actual cost ('returns IDs validated through the same deserialization path as get_contracts; not free — equivalent to a full listing minus the QualifiedContract construction'), or split into two helpers: a cheap raw-id lister, and a separate validated lister used where set agreement matters. A reconciliation pass that prunes malformed rows on detection (or at startup) would let the membership check stay O(rows) at the SQLite layer.
source: ['claude']
```rust
pub fn loaded_contract_ids(&self) -> Result<Vec<Identifier>> {
    let mut ids: Vec<Identifier> = self.system_contract_ids().to_vec();
    ids.extend(self.db.get_contract_ids_for_network(self)?);
    Ok(ids)
}
```
💬 Nitpick: loaded_contract_ids can return duplicates if a system ID is also persisted
loaded_contract_ids concatenates system_contract_ids() with db.get_contract_ids_for_network(...). The DB-side lister filters malformed blobs but does NOT exclude system IDs (unlike Database::get_contracts, which uses NOT IN). Startup cleanup_persisted_system_contracts plus the is_system_contract_id guards in insert_contract_if_not_exists/replace_contract/remove_contract make collision unlikely, but it isn't airtight (cleanup runs only at AppContext::new; nothing prevents a stray row from arriving mid-session via external SQLite write or future migration). Today's consumers only do contains() and tolerate duplicates, but the API name suggests a unique set — a future caller using len() or building a HashMap<Identifier, _> would silently double-count. Filter system IDs in the DB lister (or here), or return HashSet.
source: ['claude']
```diff
 pub fn get_contracts_arc(
     &self,
     limit: Option<u32>,
     offset: Option<u32>,
 ) -> Result<Vec<QualifiedContractRef>> {
+    let system_contract_count = self.system_contract_arcs().len() as u32;
+    let pagination = compute_contract_pagination(system_contract_count, limit, offset);

-    // Add the keyword search contract to the list
-    let keyword_search_contract = QualifiedContract {
-        contract: Arc::clone(&self.keyword_search_contract).as_ref().clone(),
-        alias: Some("keyword_search".to_string()),
-    };
+    let mut contracts = self
+        .system_contract_arcs()
+        .into_iter()
+        .skip(pagination.system_skip)
+        .take(pagination.system_take)
+        .map(|(system_contract, alias)| QualifiedContractRef {
+            contract: Arc::clone(system_contract),
+            alias: Some(alias.to_string()),
+        })
+        .collect::<Vec<_>>();

-    // Insert the keyword search contract at 3
-    contracts.insert(3, keyword_search_contract);
+    if pagination.db_limit != Some(0) {
+        contracts.extend(
+            self.db
+                .get_contracts(self, pagination.db_limit, Some(pagination.db_offset))?
+                .into_iter()
+                .map(|qc| QualifiedContractRef {
+                    contract: Arc::new(qc.contract),
+                    alias: qc.alias,
+                }),
+        );
+    }

-    // Add the DashPay contract to the list
-    let dashpay_contract = QualifiedContract {
-        contract: Arc::clone(&self.dashpay_contract).as_ref().clone(),
-        alias: Some("dashpay".to_string()),
-    };
+    Ok(contracts)
+}

-    // Insert the DashPay contract at 4
-    contracts.insert(4, dashpay_contract);
+/// Returns the IDs of every loaded contract (system + database) without
+/// materializing the contract bodies. Use this for cheap membership
+/// checks ("is this contract id already loaded?").
+pub fn loaded_contract_ids(&self) -> Result<Vec<Identifier>> {
+    let mut ids: Vec<Identifier> = self.system_contract_ids().to_vec();
+    ids.extend(self.db.get_contract_ids_for_network(self)?);
+    Ok(ids)
+}

-    Ok(contracts)
+/// Returns only the user-added (database-backed) contracts, skipping the
+/// built-in system contracts entirely. Callers that immediately filter
+/// out system contracts should use this rather than building them only to
+/// discard them.
+pub fn get_user_contracts(&self) -> Result<Vec<QualifiedContract>> {
+    self.db.get_contracts(self, None, None)
+}
```
💬 Nitpick: New listing helpers are pub but only used inside the crate
get_contracts_arc, loaded_contract_ids, and get_user_contracts are declared pub, but every consumer is inside the dash-evo-tool library crate (src/ui/, src/backend_task/contract.rs, src/ui/tools/). The companion helpers added in this PR (system_contract_ids, system_contract_by_id, system_contract_arc_by_id, is_system_contract_id, cleanup_persisted_system_contracts) are correctly pub(crate). Tightening the new listing helpers to pub(crate) matches that discipline and avoids accidental coupling from feature crates / future binaries (det_cli, mcp). The pre-existing pub get_contracts is grandfathered, but the three brand-new helpers have no such constraint.
source: ['claude']
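The diffs above call a `compute_contract_pagination` helper that is never shown in this conversation. A plausible std-only reconstruction of its arithmetic follows; the struct and field names are taken from the diff, but the implementation itself is an assumption, not the PR's code:

```rust
// Assumed shape: split a (limit, offset) window across the fixed block of
// system contracts followed by database-backed contracts.
struct ContractPagination {
    system_skip: usize,
    system_take: usize,
    db_limit: Option<u32>,
    db_offset: u32,
}

fn compute_contract_pagination(
    system_count: u32,
    limit: Option<u32>,
    offset: Option<u32>,
) -> ContractPagination {
    let offset = offset.unwrap_or(0);
    // How many system contracts the requested window skips, then keeps.
    let system_skip = offset.min(system_count);
    let system_take = match limit {
        Some(l) => l.min(system_count - system_skip),
        None => system_count - system_skip,
    };
    ContractPagination {
        system_skip: system_skip as usize,
        system_take: system_take as usize,
        // Remaining budget is served from the database; db_limit == Some(0)
        // matches the `!= Some(0)` short-circuit in get_contracts_arc.
        db_limit: limit.map(|l| l - system_take),
        db_offset: offset.saturating_sub(system_count),
    }
}
```

Under this model a first page of 10 with 5 system contracts yields 5 system rows plus 5 DB rows, and any later page is served entirely from the database.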
```rust
pub(crate) fn cleanup_persisted_system_contracts_for(
    db: &Database,
    system_contract_ids: &[Identifier],
    network: &Network,
) -> Result<()> {
    if system_contract_ids.is_empty() {
        return Ok(());
    }

    let network_str = network.to_string();
    let placeholders = vec!["?"; system_contract_ids.len()].join(", ");

    let conn = db.shared_connection();
    let conn = conn.lock().unwrap();
    let tx = conn.unchecked_transaction()?;

    // identity_token_balances → tokens belonging to the listed contracts on this network.
    let sql = format!(
        "DELETE FROM identity_token_balances
         WHERE network = ?
           AND token_id IN (
               SELECT id FROM token
               WHERE network = ? AND data_contract_id IN ({placeholders})
           )"
    );
    let mut params_vec: Vec<&dyn rusqlite::ToSql> =
        Vec::with_capacity(2 + system_contract_ids.len());
    params_vec.push(&network_str);
    params_vec.push(&network_str);
    let id_byte_vecs: Vec<Vec<u8>> = system_contract_ids.iter().map(|id| id.to_vec()).collect();
    for id_bytes in &id_byte_vecs {
        params_vec.push(id_bytes);
    }
    tx.execute(&sql, rusqlite::params_from_iter(params_vec))?;

    // token_order has no `network` column; scope by joining against `token`.
    let sql = format!(
        "DELETE FROM token_order
         WHERE token_id IN (
             SELECT id FROM token
             WHERE network = ? AND data_contract_id IN ({placeholders})
         )"
    );
    let mut params_vec: Vec<&dyn rusqlite::ToSql> =
        Vec::with_capacity(1 + system_contract_ids.len());
    params_vec.push(&network_str);
    for id_bytes in &id_byte_vecs {
        params_vec.push(id_bytes);
    }
    tx.execute(&sql, rusqlite::params_from_iter(params_vec))?;

    // tokens for the listed contracts on this network.
    let sql = format!(
        "DELETE FROM token
         WHERE network = ? AND data_contract_id IN ({placeholders})"
    );
    let mut params_vec: Vec<&dyn rusqlite::ToSql> =
        Vec::with_capacity(1 + system_contract_ids.len());
    params_vec.push(&network_str);
    for id_bytes in &id_byte_vecs {
        params_vec.push(id_bytes);
    }
    tx.execute(&sql, rusqlite::params_from_iter(params_vec))?;

    // contract rows themselves.
    let sql = format!(
        "DELETE FROM contract
         WHERE network = ? AND contract_id IN ({placeholders})"
    );
    let mut params_vec: Vec<&dyn rusqlite::ToSql> =
        Vec::with_capacity(1 + system_contract_ids.len());
    params_vec.push(&network_str);
    for id_bytes in &id_byte_vecs {
        params_vec.push(id_bytes);
    }
    tx.execute(&sql, rusqlite::params_from_iter(params_vec))?;

    tx.commit()
}
```
💬 Nitpick: cleanup_persisted_system_contracts_for repeats the placeholder/params construction four times
Four near-identical blocks each format!() the SQL with the shared placeholders, allocate a fresh params_vec, push &network_str (twice for the first), then push every id_bytes slice. The shape is identical — only the FROM/WHERE template and leading-network-param count vary. A small closure that takes a SQL template and leading params, then appends id-bytes and executes via tx.execute, would collapse this to four call sites. Cold path so non-blocking, but the current shape is exactly the kind of repetitive boilerplate that drifts (e.g. someone updating the network filter in three of four blocks).
source: ['claude']
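A dependency-free sketch of the shared-assembly idea behind this nitpick. `Param` is a hypothetical stand-in for rusqlite's dynamic parameter list; the real helper would hand the result to `tx.execute(&sql, params_from_iter(...))`:

```rust
// Minimal stand-in for rusqlite's heterogeneous parameter values.
#[derive(Debug, PartialEq)]
enum Param {
    Text(String),
    Blob(Vec<u8>),
}

/// Build the parameter list every DELETE in the cleanup shares:
/// `leading` copies of the network string, then every contract-id blob.
fn scoped_delete_params(network: &str, leading: usize, id_blobs: &[Vec<u8>]) -> Vec<Param> {
    let mut params = Vec::with_capacity(leading + id_blobs.len());
    params.extend(std::iter::repeat_with(|| Param::Text(network.to_string())).take(leading));
    params.extend(id_blobs.iter().cloned().map(Param::Blob));
    params
}
```

With this shape each of the four DELETE blocks collapses to one `format!` for the SQL plus one `scoped_delete_params(..)` call, so a future change to the network filter cannot drift across only three of the four statements.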
Force-pushed e43bec8 to c04173a (Compare)
thepastaclaw left a comment
Code Review
All previously-blocking findings remain resolved at c04173a. Six nitpick-level concerns survive verification: a stale comment in transition_visualizer_screen.rs that contradicts the canonical docstring, a redundant tracing::warn! next to a banner that already logs, a duplicate-ID risk in loaded_contract_ids, public-visibility concerns on new helpers, repetitive boilerplate in cleanup_persisted_system_contracts_for, and an undocumented SQLite LIMIT -1 idiom. No blocking issues.
Reviewed commit: c04173a
💬 6 nitpick(s)
1 additional finding
💬 nitpick: SQLite-specific LIMIT -1 idiom is undocumented
src/database/contracts.rs (lines 283-290)
The else if offset.is_some() { query.push_str(" LIMIT -1"); } branch relies on SQLite's documented behaviour that a negative LIMIT means "no upper bound" and is required when OFFSET is bound without LIMIT. The codebase is SQLite-only so this is correct, but the magic -1 is the kind of literal a future contributor might "clean up" to i64::MAX (silently changing pagination semantics on large tables) or remove altogether (turning the parameter-bound query into a SQLite syntax error at runtime). A one-line comment prevents that.
💡 Suggested change
```rust
if limit.is_some() {
    query.push_str(" LIMIT ?");
} else if offset.is_some() {
    // SQLite requires LIMIT when OFFSET is present; -1 means "no upper bound".
    query.push_str(" LIMIT -1");
}
```
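The full clause-building logic the finding describes can be sketched as a small pure function. This is an illustrative reconstruction, not the PR's exact code; the base query text is a placeholder:

```rust
// Append SQLite pagination clauses: OFFSET is only valid after a LIMIT
// clause, and LIMIT -1 means "no upper bound".
fn paginated_query(base: &str, limit: Option<u32>, offset: Option<u32>) -> String {
    let mut query = base.to_string();
    if limit.is_some() {
        query.push_str(" LIMIT ?");
    } else if offset.is_some() {
        // SQLite requires a LIMIT clause before OFFSET; -1 disables the cap.
        query.push_str(" LIMIT -1");
    }
    if offset.is_some() {
        query.push_str(" OFFSET ?");
    }
    query
}
```

Replacing `-1` with `i64::MAX` would keep the query valid but trades a documented sentinel for a magic number; dropping the branch entirely would make the offset-only case a syntax error at runtime.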
```rust
// Check if contract already exists. Use the cheap
// ID-only lookup so we don't deserialize every
// loaded contract just to compare base58 strings.
```
💬 Nitpick: Comment claims loaded_contract_ids() is a cheap ID-only lookup, but the helper's docstring says otherwise
The new comment says "Use the cheap ID-only lookup so we don't deserialize every loaded contract just to compare base58 strings." That directly contradicts the canonical docstring on AppContext::loaded_contract_ids (src/context/contract_token_db.rs:257-265), which explicitly states the helper is not a cheap raw-id-only check and costs the same as scanning and deserializing every persisted blob. The actual win here is avoiding QualifiedContract allocation, not avoiding deserialization. As written, the comment will mislead the next contributor into using loaded_contract_ids as a lightweight membership probe in hot UI paths. Reword to match reality (e.g. "avoids constructing QualifiedContracts; still deserializes every blob").
source: ['claude']
```rust
tracing::warn!("Failed to check loaded contracts before adding: {e}");
self.add_contracts_status = AddContractsStatus::Error;
let handle = MessageBanner::set_global(
    self.app_context.egui_ctx(),
    "Unable to check whether those contracts are already loaded. Please try again.",
    MessageType::Error,
);
handle.with_details(e);
```
💬 Nitpick: Redundant tracing::warn! next to a banner that already logs with details
Line 125 emits tracing::warn!("Failed to check loaded contracts before adding: {e}") immediately before lines 127-132 set a global banner whose with_details(e) attachment already logs the full error chain. CLAUDE.md states: "MessageBanner logs all displayed messages (with details) automatically. Additional logging is unnecessary." The rusqlite error is now emitted twice (warn via tracing, error via banner). Drop the explicit tracing::warn!.
source: ['claude']
```rust
pub fn loaded_contract_ids(&self) -> Result<Vec<Identifier>> {
    let mut ids: Vec<Identifier> = self.system_contract_ids().to_vec();
    ids.extend(self.db.get_contract_ids_for_network(self)?);
    Ok(ids)
}
```
💬 Nitpick: loaded_contract_ids can double-report a persisted system contract ID
loaded_contract_ids starts with system_contract_ids().to_vec() and then blindly extends with db.get_contract_ids_for_network(self)?. The DB lister validates blob deserialization but does not exclude system IDs (unlike Database::get_contracts, which uses NOT IN). Startup cleanup_persisted_system_contracts plus is_system_contract_id guards in mutation paths make collision unlikely, but they aren't airtight: cleanup runs only at AppContext::new, and nothing prevents a stray row from external SQLite writes or a future migration mid-session. Today's consumers do contains checks and tolerate duplicates, but the API name reads like a unique set — a future caller using len() or building a HashMap<Identifier, _> would silently double-count. Filter system IDs at the DB layer or here, or return HashSet<Identifier>.
💡 Suggested change
```diff
 pub fn loaded_contract_ids(&self) -> Result<Vec<Identifier>> {
     let mut ids: Vec<Identifier> = self.system_contract_ids().to_vec();
-    ids.extend(self.db.get_contract_ids_for_network(self)?);
+    ids.extend(
+        self.db
+            .get_contract_ids_for_network(self)?
+            .into_iter()
+            .filter(|id| !self.is_system_contract_id(id)),
+    );
     Ok(ids)
 }
```
source: ['claude', 'codex']
| pub fn get_contracts_arc( | ||
| &self, | ||
| limit: Option<u32>, | ||
| offset: Option<u32>, | ||
| ) -> Result<Vec<QualifiedContractRef>> { | ||
| let system_contract_count = self.system_contract_arcs().len() as u32; | ||
| let pagination = compute_contract_pagination(system_contract_count, limit, offset); | ||
| | ||
| // Add the keyword search contract to the list | ||
| let keyword_search_contract = QualifiedContract { | ||
| contract: Arc::clone(&self.keyword_search_contract).as_ref().clone(), | ||
| alias: Some("keyword_search".to_string()), | ||
| }; | ||
| let mut contracts = self | ||
| .system_contract_arcs() | ||
| .into_iter() | ||
| .skip(pagination.system_skip) | ||
| .take(pagination.system_take) | ||
| .map(|(system_contract, alias)| QualifiedContractRef { | ||
| contract: Arc::clone(system_contract), | ||
| alias: Some(alias.to_string()), | ||
| }) | ||
| .collect::<Vec<_>>(); | ||
| | ||
| // Insert the keyword search contract at 3 | ||
| contracts.insert(3, keyword_search_contract); | ||
| if pagination.db_limit != Some(0) { | ||
| contracts.extend( | ||
| self.db | ||
| .get_contracts(self, pagination.db_limit, Some(pagination.db_offset))? | ||
| .into_iter() | ||
| .map(|qc| QualifiedContractRef { | ||
| contract: Arc::new(qc.contract), | ||
| alias: qc.alias, | ||
| }), | ||
| ); | ||
| } | ||
| | ||
| // Add the DashPay contract to the list | ||
| let dashpay_contract = QualifiedContract { | ||
| contract: Arc::clone(&self.dashpay_contract).as_ref().clone(), | ||
| alias: Some("dashpay".to_string()), | ||
| }; | ||
| Ok(contracts) | ||
| } | ||
| | ||
| // Insert the DashPay contract at 4 | ||
| contracts.insert(4, dashpay_contract); | ||
| /// Returns the IDs of every loaded contract (system + database). | ||
| /// | ||
| /// Database-backed IDs are validated through the same deserialization | ||
| /// path as [`Self::get_contracts`] and [`Self::get_contract_by_id`], so | ||
| /// the returned set matches the visible loaded contract set (rows whose | ||
| /// contract blobs fail to deserialize are skipped just as they are in | ||
| /// the listing helpers). This is **not** a cheap raw-id-only check: its | ||
| /// cost is equivalent to scanning and deserializing every persisted | ||
| /// contract blob, minus the cost of constructing full listing objects. | ||
| pub fn loaded_contract_ids(&self) -> Result<Vec<Identifier>> { | ||
| let mut ids: Vec<Identifier> = self.system_contract_ids().to_vec(); | ||
| ids.extend(self.db.get_contract_ids_for_network(self)?); | ||
| Ok(ids) | ||
| } | ||
| | ||
| Ok(contracts) | ||
| /// Returns only the user-added (database-backed) contracts, skipping the | ||
| /// built-in system contracts entirely. Callers that immediately filter | ||
| /// out system contracts should use this rather than building them only to | ||
| /// discard them. | ||
| pub fn get_user_contracts(&self) -> Result<Vec<QualifiedContract>> { | ||
| self.db.get_contracts(self, None, None) | ||
| } |
💬 Nitpick: New listing helpers are pub but every consumer is crate-local
get_contracts_arc, loaded_contract_ids, and get_user_contracts are declared pub, but every consumer is inside the dash-evo-tool library crate (src/ui/, src/backend_task/contract.rs, src/ui/tools/). The companion helpers added in this PR (system_contract_ids, system_contract_by_id, system_contract_arc_by_id, is_system_contract_id, cleanup_persisted_system_contracts) are correctly pub(crate). Tightening these new helpers to pub(crate) matches that discipline and avoids accidental coupling from feature crates / future binaries (det_cli, mcp). The pre-existing pub get_contracts is grandfathered, but these new helpers have no such constraint.
source: ['claude', 'codex']
| pub(crate) fn cleanup_persisted_system_contracts_for( | ||
| db: &Database, | ||
| system_contract_ids: &[Identifier], | ||
| network: &Network, | ||
| ) -> Result<()> { | ||
| if system_contract_ids.is_empty() { | ||
| return Ok(()); | ||
| } | ||
| | ||
| let network_str = network.to_string(); | ||
| let placeholders = vec!["?"; system_contract_ids.len()].join(", "); | ||
| | ||
| let conn = db.shared_connection(); | ||
| let conn = conn.lock().unwrap(); | ||
| let tx = conn.unchecked_transaction()?; | ||
| | ||
| // identity_token_balances → tokens belonging to the listed contracts on this network. | ||
| let sql = format!( | ||
| "DELETE FROM identity_token_balances | ||
| WHERE network = ? | ||
| AND token_id IN ( | ||
| SELECT id FROM token | ||
| WHERE network = ? AND data_contract_id IN ({placeholders}) | ||
| )" | ||
| ); | ||
| let mut params_vec: Vec<&dyn rusqlite::ToSql> = | ||
| Vec::with_capacity(2 + system_contract_ids.len()); | ||
| params_vec.push(&network_str); | ||
| params_vec.push(&network_str); | ||
| let id_byte_vecs: Vec<Vec<u8>> = system_contract_ids.iter().map(|id| id.to_vec()).collect(); | ||
| for id_bytes in &id_byte_vecs { | ||
| params_vec.push(id_bytes); | ||
| } | ||
| tx.execute(&sql, rusqlite::params_from_iter(params_vec))?; | ||
| | ||
| // token_order has no `network` column; scope by joining against `token`. | ||
| let sql = format!( | ||
| "DELETE FROM token_order | ||
| WHERE token_id IN ( | ||
| SELECT id FROM token | ||
| WHERE network = ? AND data_contract_id IN ({placeholders}) | ||
| )" | ||
| ); | ||
| let mut params_vec: Vec<&dyn rusqlite::ToSql> = | ||
| Vec::with_capacity(1 + system_contract_ids.len()); | ||
| params_vec.push(&network_str); | ||
| for id_bytes in &id_byte_vecs { | ||
| params_vec.push(id_bytes); | ||
| } | ||
| tx.execute(&sql, rusqlite::params_from_iter(params_vec))?; | ||
| | ||
| // tokens for the listed contracts on this network. | ||
| let sql = format!( | ||
| "DELETE FROM token | ||
| WHERE network = ? AND data_contract_id IN ({placeholders})" | ||
| ); | ||
| let mut params_vec: Vec<&dyn rusqlite::ToSql> = | ||
| Vec::with_capacity(1 + system_contract_ids.len()); | ||
| params_vec.push(&network_str); | ||
| for id_bytes in &id_byte_vecs { | ||
| params_vec.push(id_bytes); | ||
| } | ||
| tx.execute(&sql, rusqlite::params_from_iter(params_vec))?; | ||
| | ||
| // contract rows themselves. | ||
| let sql = format!( | ||
| "DELETE FROM contract | ||
| WHERE network = ? AND contract_id IN ({placeholders})" | ||
| ); | ||
| let mut params_vec: Vec<&dyn rusqlite::ToSql> = | ||
| Vec::with_capacity(1 + system_contract_ids.len()); | ||
| params_vec.push(&network_str); | ||
| for id_bytes in &id_byte_vecs { | ||
| params_vec.push(id_bytes); | ||
| } | ||
| tx.execute(&sql, rusqlite::params_from_iter(params_vec))?; | ||
| | ||
| tx.commit() | ||
| } |
💬 Nitpick: cleanup_persisted_system_contracts_for repeats placeholder/params construction four times
Four near-identical blocks each format!() the SQL with the shared placeholders, allocate a fresh params_vec, push &network_str (twice in the first block, once in the others), then push every id_bytes slice. Only the FROM/WHERE template and the leading network-param count vary. A small closure that takes a SQL template and the leading params, appends the id bytes, and executes via tx.execute would collapse this to four call sites. This is a cold path, so the duplication is non-blocking, but the current shape is exactly the kind of repetitive boilerplate that drifts when one of the four blocks is updated (e.g. someone tweaking the network filter in only three of the four places).
source: ['claude']
Prevent duplicate loaded contracts
Summary
Fixes #841.
Validation
cargo fmt --check
cargo check
cargo clippy -- -D warnings
code-review dashpay/dash-evo-tool upstream/v1.0-dev fix-system-contract-duplicate-load "Prevent loading duplicate contracts globally, including duplicates in the same submission and app-context system contracts; keep system contracts resolvable by ID; hide persisted system-contract duplicates from contract lists without breaking DB pagination; and ensure system-contract mutating UI paths use the same registry." — issues found: 0
Summary by CodeRabbit
New Features
Bug Fixes