chore: clean load client helpers #2173
Conversation
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
sdk-libs/macros/src/finalize/codegen.rs (1)
459-460: Inconsistency: `generate_pre_init_mints_only` still uses hardcoded `version: 3`.

The `generate_pre_init_pdas_and_mints` path now uses `TokenDataVersion::ShaFlat as u8` (line 295), but `generate_pre_init_mints_only` still uses `version: 3`. This creates an inconsistency: if the enum value changes, only one code path would be updated.

🔧 Suggested fix

```diff
 let compressed_mint_data = light_token_interface::instructions::mint_action::MintInstructionData {
     supply: 0,
     decimals: #decimals,
     metadata: light_token_interface::state::MintMetadata {
-        version: 3,
+        version: light_token_interface::state::TokenDataVersion::ShaFlat as u8,
         mint: mint_pda.to_bytes().into(),
```

sdk-libs/compressible-client/src/decompress_mint.rs (2)
155-166: Add bounds check before accessing `proof_result.accounts[0]`.

Line 158 directly indexes `proof_result.accounts[0]` without verifying the array isn't empty. While this should be populated when `ProofRequired` passes, defensive coding prevents panics from unexpected indexer responses.

Suggested fix

```diff
 // Proof required for cold mint
 let proof_result = validity_proof.ok_or(DecompressMintError::ProofRequired)?;

+if proof_result.accounts.is_empty() {
+    return Err(DecompressMintError::MissingMintData);
+}
+
 // Extract tree info from proof result
 let account_info = &proof_result.accounts[0];
```
317-329: Fix step numbering and add bounds check.

Two issues here:

- Comment numbering: line 317 says "5." and line 322 also says "5."; they should be "5." and "6." respectively
- Line 329 accesses `proof_result.accounts[0]` without a bounds check (same issue as `build_decompress_mint`)

Suggested fix

```diff
 // 5. Check if already decompressed flag is set - return empty vec (idempotent)
 if mint_data.metadata.mint_decompressed {
     return Ok(vec![]);
 }

-// 5. Get validity proof
+// 6. Get validity proof
 let proof_result = indexer
     .get_validity_proof(vec![compressed_account.hash], vec![], None)
     .await?
     .value;

-// 6. Extract tree info from proof result
+// 7. Extract tree info from proof result
+if proof_result.accounts.is_empty() {
+    return Err(DecompressMintError::MissingMintData);
+}
 let account_info = &proof_result.accounts[0];
```
🤖 Fix all issues with AI agents
In `@sdk-libs/compressible-client/src/load_accounts.rs`:
- Around line 180-197: fetch_mint_proofs currently awaits
indexer.get_validity_proof sequentially per hash which increases latency; modify
fetch_mint_proofs to perform the requests concurrently (or batch if the Indexer
supports multi-hash calls) by mapping hashes to futures calling
indexer.get_validity_proof and then awaiting them with
futures::future::try_join_all (or call a bulk get_validity_proof API with all
hashes) and collect the returned ValidityProofWithContext values before
returning; update error handling to propagate the first failing future via the
Result as before.
♻️ Duplicate comments (8)
sdk-libs/compressible-client/Cargo.toml (1)
30-30: Use workspace dependency for `smallvec`.

Direct version pinning (`smallvec = "1.15"`) can lead to version inconsistencies. Prefer `smallvec = { workspace = true }` if it's defined in the workspace, or add it there first.

🔧 Suggested fix

```diff
-smallvec = "1.15"
+smallvec = { workspace = true }
```

sdk-libs/compressible-client/src/decompress_mint.rs (1)
42-55: Consider adding an `AlreadyDecompressed` variant to `MintState`.

The `DecompressMintError` enum includes `AlreadyDecompressed` (line 32), but `MintState` only has `Hot`, `Cold`, and `None`. When a compressed account exists but has empty data (the "already decompressed" case at lines 506-507 in light_program_test.rs), the code returns `MintState::None`.

This conflates "mint never existed" with "mint was decompressed (shell remains)". Adding a `MintState::AlreadyDecompressed` variant would let callers distinguish these states programmatically without relying on context.

sdk-libs/compressible-client/src/lib.rs (3)
424-433: Potential panic if a token account's `cpi_context` is `None`.

Line 429 calls `.unwrap()` on `acc.tree_info.cpi_context`, assuming it is always `Some` when `has_tokens` is true. However, a token account could exist without a CPI context populated.

Suggested fix

```diff
 if has_pdas && has_tokens {
     // Find the first token account's CPI context
     let first_token_cpi_context = compressed_accounts
         .iter()
         .find(|(acc, _)| acc.owner == LIGHT_TOKEN_PROGRAM_ID)
-        .map(|(acc, _)| acc.tree_info.cpi_context.unwrap())
-        .expect("has_tokens is true so there must be a token");
+        .and_then(|(acc, _)| acc.tree_info.cpi_context)
+        .ok_or("Token compressed account missing cpi_context")?;
```
615-625: Return `Result` instead of panicking on missing decompression context.

Line 621 uses `expect()`, which will crash if a cold account somehow lacks `decompression_context`. Even though the filter on line 602 ensures only cold accounts pass, defensive error handling is preferred in SDK code.
135-168: `From` implementations can panic on malformed cold accounts.

These conversions use `expect()`, which will panic if a cold account is constructed without `load_context`. While this "shouldn't happen" with proper construction, library code should prefer fallible conversions.

Consider implementing `TryFrom` instead, or ensuring the invariant is enforced at construction time so the panic is unreachable.

sdk-libs/program-test/src/program_test/light_program_test.rs (1)
616-634: Ambiguous query semantics may return the wrong token account.

This method queries `get_compressed_token_accounts_by_owner(address, None, None)` with no mint filter, then takes the first result. If the address owns multiple compressed token accounts across different mints, this returns an arbitrary one.

The comment on line 616 states "address is the token account owner for ctoken", but the function signature implies callers pass a specific token account address. This semantic mismatch can lead to subtle bugs.
Consider one of:

- Requiring a `mint` parameter to constrain the query
- Validating the returned account's address matches expectations
- Clarifying in docs that this returns "the first" token account for an owner
sdk-libs/compressible-client/src/account_interface.rs (1)
245-248: Replace `expect()` with proper error handling in `TokenAccountInterface::cold`.

Line 247 uses `expect("pack should never fail")`, which can panic. While `SplTokenAccount::pack` is unlikely to fail with valid inputs, SDK code should be defensive.

Suggested approach

Change `cold()` to return `Result<Self, AccountInterfaceError>` and use `map_err`:

```diff
-pub fn cold(...) -> Self {
+pub fn cold(...) -> Result<Self, AccountInterfaceError> {
     // ...
     let mut data = vec![0u8; SplTokenAccount::LEN];
-    SplTokenAccount::pack(parsed, &mut data).expect("pack should never fail");
+    SplTokenAccount::pack(parsed, &mut data)
+        .map_err(|e| AccountInterfaceError::ParseError(e.to_string()))?;
     // ...
-    Self { ... }
+    Ok(Self { ... })
 }
```

This requires updating callers but provides a safer API.
sdk-libs/compressible-client/src/load_accounts.rs (1)
241-253: Avoid `expect()` on the decompression context.

Line 248 uses `expect("Cold account must have decompression context")`, which can panic. This is the same pattern flagged in `lib.rs`. Convert to fallible iteration:

Suggested fix

```diff
-let compressed_accounts: Vec<_> = accounts
-    .iter()
-    .map(|a| {
-        let compressed_account = a
-            .account_interface
-            .decompression_context
-            .as_ref()
-            .expect("Cold account must have decompression context")
-            .compressed_account
-            .clone();
-        (compressed_account, a.variant.clone())
-    })
-    .collect();
+let mut compressed_accounts = Vec::with_capacity(accounts.len());
+for a in accounts {
+    let compressed_account = a
+        .account_interface
+        .decompression_context
+        .as_ref()
+        .ok_or_else(|| {
+            LoadAccountsError::BuildInstruction(
+                "Cold account missing decompression context".into(),
+            )
+        })?
+        .compressed_account
+        .clone();
+    compressed_accounts.push((compressed_account, a.variant.clone()));
+}
```
```rust
async fn fetch_mint_proofs<I: Indexer>(
    hashes: &[[u8; 32]],
    indexer: &I,
) -> Result<Vec<ValidityProofWithContext>, IndexerError> {
    if hashes.is_empty() {
        return Ok(vec![]);
    }

    // Each mint needs its own proof
    let mut proofs = Vec::with_capacity(hashes.len());
    for hash in hashes {
        let result = indexer
            .get_validity_proof(vec![*hash], vec![], None)
            .await?;
        proofs.push(result.value);
    }
    Ok(proofs)
}
```
🧹 Nitpick | 🔵 Trivial
Sequential mint proof fetching could be batched.
fetch_mint_proofs fetches each mint's proof sequentially in a loop (lines 190-195). While this works, it adds latency proportional to the number of mints. Consider batching if the indexer supports it, or using futures::future::try_join_all for concurrent fetches:
```rust
let futures: Vec<_> = hashes.iter().map(|hash| {
    indexer.get_validity_proof(vec![*hash], vec![], None)
}).collect();
let results = futures::future::try_join_all(futures).await?;
```

🤖 Prompt for AI Agents
In `@sdk-libs/compressible-client/src/load_accounts.rs` around lines 180 - 197,
fetch_mint_proofs currently awaits indexer.get_validity_proof sequentially per
hash which increases latency; modify fetch_mint_proofs to perform the requests
concurrently (or batch if the Indexer supports multi-hash calls) by mapping
hashes to futures calling indexer.get_validity_proof and then awaiting them with
futures::future::try_join_all (or call a bulk get_validity_proof API with all
hashes) and collect the returned ValidityProofWithContext values before
returning; update error handling to propagate the first failing future via the
Result as before.
📝 Walkthrough

This PR introduces a unified hot/cold account interface abstraction layer for the compressible-client SDK, enabling seamless handling of both on-chain and compressed accounts. It adds core modules for decompression workflows (load_accounts, tx_size), account abstractions (account_interface, account_interface_ext), and refactors CMint-specific terminology to generic Mint semantics throughout the token-SDK. The old decompress_atas module and test helper methods are removed.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client
    participant LoadAccts as load_accounts
    participant Indexer as Indexer
    participant ProofSvc as Proof Service
    participant InstrBuilder as Instruction Builders
    Client->>LoadAccts: create_load_accounts_instructions(accounts, ...)
    LoadAccts->>LoadAccts: Filter cold accounts (PDAs, ATAs, Mints)
    LoadAccts->>LoadAccts: Validate hashes & contexts
    par Fetch PDA/ATA Proofs
        LoadAccts->>Indexer: fetch_proof_if_needed(hash)
        Indexer-->>ProofSvc: query compressed state
        ProofSvc-->>Indexer: proof
        Indexer-->>LoadAccts: proof
    and Fetch Mint Proofs
        LoadAccts->>ProofSvc: fetch_mint_proofs(mints)
        ProofSvc-->>LoadAccts: mint proofs
    end
    LoadAccts->>InstrBuilder: create_decompress_idempotent_instructions(PDAs, proof)
    InstrBuilder-->>LoadAccts: Instruction
    LoadAccts->>InstrBuilder: create_decompress_ata_instructions(ATAs, proof)
    InstrBuilder-->>LoadAccts: Vec<Instruction>
    LoadAccts->>InstrBuilder: create_decompress_mint_instructions(Mint, proof)
    InstrBuilder-->>LoadAccts: Instruction
    LoadAccts-->>Client: Vec<Instruction>
```
```mermaid
sequenceDiagram
    participant Caller as Caller
    participant Trait as AccountInterfaceExt
    participant RPC as RPC (On-Chain)
    participant Indexer as Indexer (Compressed)
    participant Interface as Interface Constructor
    Caller->>Trait: get_token_account_interface(address)
    Trait->>RPC: fetch_account(address)
    alt On-Chain Account Found
        RPC-->>Interface: Account data
        Interface-->>Trait: Hot TokenAccountInterface
    else On-Chain Not Found
        RPC-->>Trait: NotFound
        Trait->>Indexer: query_compressed_token_accounts(owner, mint)
        Indexer-->>Interface: Compressed data
        Interface-->>Trait: Cold TokenAccountInterface
    end
    Trait-->>Caller: TokenAccountInterface (hot or cold)
```
Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Rationale: The PR spans 15+ files with heterogeneous changes: new account abstraction layers with intricate hot/cold fallback logic (account_interface_ext), concurrent proof-fetching orchestration (load_accounts with async patterns), transaction size estimation heuristics (tx_size), and broad terminology renames across token-SDK (CMint → Mint, CreateCTokenAta → CreateTokenAta). While individual modules are well-structured with clear patterns, the density of new logic, cross-module interactions, API removals from tests, and the breadth of scope demand careful verification of proof-fetching flows, error handling consistency, and interface completeness.

Possibly related PRs
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 inconclusive

❌ Failed checks (1 inconclusive)

✅ Passed checks (2 passed)
Actionable comments posted: 6
🤖 Fix all issues with AI agents
In `@sdk-libs/compressible-client/src/load_accounts.rs`:
- Around line 266-269: The current collection using filter_map silently drops
accounts missing load_context; change the logic that builds contexts (the let
contexts: SmallVec<[&TokenLoadContext; 8]> = ...) to explicitly check each
account: for each account in accounts, if account.load_context.is_some() push
its &TokenLoadContext into contexts; if account.load_context.is_none() and the
account is cold (e.g., account.is_cold or the cold-ATA condition used in this
module) return an Err indicating a missing load_context for a cold ATA. Ensure
you reference the same symbols (accounts, load_context, TokenLoadContext, and
contexts) and return the appropriate error type used by the surrounding function
instead of silently skipping entries.
- Around line 308-312: The loop over contexts (for (i, ctx) in
contexts.iter().enumerate()) directly indexes tree_infos.packed_tree_infos[i],
which can panic on length mismatch; replace that indexing with
tree_infos.packed_tree_infos.get(i) and, if it returns None, return an
appropriate LoadAccountsError (with a clear message) instead of panicking;
update any downstream uses of tree_info in that iteration to use the borrowed
value from get(i) to continue processing safely.
- Around line 98-111: The current pda_hashes construction silently drops
cold_pdas missing account_interface.decompression_context by using filter_map;
change this to fail fast by explicitly checking each cold_pda and returning an
error if decompression_context or compressed_account.hash is missing. Replace
the filter_map on cold_pdas with an iterator that maps each entry to its
c.compressed_account.hash or returns a descriptive Err (propagated with ? or
collected via collect::<Result<Vec<_>, _>>()), referencing pda_hashes,
cold_pdas, account_interface, decompression_context and compressed_account.hash
so missing decompression contexts cause an early, explicit failure instead of
being omitted. Ensure similar behavior is considered where appropriate for
ata_hashes/mint_hashes if they must not be silently dropped.
In `@sdk-libs/compressible-client/src/tx_size.rs`:
- Around line 31-33: The loop currently clones current_batch into trial for each
instruction which causes quadratic allocations; instead, reuse current_batch by
pushing the instruction, measuring the resulting size, then popping it to
restore state. Replace the trial clone logic in the loop that iterates over
instructions: do current_batch.push(ix.clone()), compute whatever measurement
you previously did on trial (same calls that used trial), then call
current_batch.pop() to remove the pushed instruction; remove the trial variable
and its clone to eliminate per-iteration cloning of current_batch.
- Around line 31-41: The loop that builds batches (iterating over instructions,
using current_batch, batches and calling estimate_tx_size(&trial, payer,
num_signers)) does not handle the case where a single instruction alone exceeds
max_size; that produces an oversized batch. Modify the logic so when
estimate_tx_size(&[ix], payer, num_signers) > max_size for a lone instruction
you return/propagate a descriptive error (e.g., Err/Result or a specific error
variant) instead of pushing it into batches; otherwise keep the existing
batching behavior (push current_batch when full and start new batch with ix).
Ensure the change surfaces the error from the function that owns this loop so
callers can handle oversized-instruction cases deterministically.
- Around line 52-82: The estimate_tx_size logic relies on an externally provided
num_signers which can be undercounted; modify split_by_tx_size so it no longer
takes num_signers as a parameter but instead computes the signer count inside by
iterating instructions and their AccountMeta.is_signer flags plus the payer
(ensure payer is always counted), then pass that computed count into
estimate_tx_size; update the split_by_tx_size signature and all callers/tests
(e.g., the test at line 115) to remove the num_signers argument so sizing is
always derived from instruction metadata and payer.
♻️ Duplicate comments (5)
sdk-libs/compressible-client/src/lib.rs (3)
426-433: Return an error instead of unwrapping the token CPI context.

This `unwrap()` can panic if a token compressed account lacks `cpi_context`. Make it fallible and propagate a clear error.

🔧 Proposed fix

```diff
-let first_token_cpi_context = compressed_accounts
-    .iter()
-    .find(|(acc, _)| acc.owner == LIGHT_TOKEN_PROGRAM_ID)
-    .map(|(acc, _)| acc.tree_info.cpi_context.unwrap())
-    .expect("has_tokens is true so there must be a token");
+let first_token_cpi_context = compressed_accounts
+    .iter()
+    .find(|(acc, _)| acc.owner == LIGHT_TOKEN_PROGRAM_ID)
+    .and_then(|(acc, _)| acc.tree_info.cpi_context)
+    .ok_or("Token compressed account missing cpi_context")?;
```
617-624: Avoid a panic on missing decompression context in the idempotent build.

`expect(...)` will crash in edge cases. Make this mapping fallible and return a structured error.

🔧 Proposed fix

```diff
-let compressed_accounts: Vec<(CompressedAccount, V)> = cold_accounts
-    .into_iter()
-    .map(|a| {
-        let compressed_account = a
-            .account_interface
-            .decompression_context
-            .expect("Cold account must have decompression context")
-            .compressed_account;
-        (compressed_account, a.variant)
-    })
-    .collect();
+let compressed_accounts: Vec<(CompressedAccount, V)> = cold_accounts
+    .into_iter()
+    .map(|a| {
+        let compressed_account = a
+            .account_interface
+            .decompression_context
+            .ok_or("Cold account missing decompression context")?
+            .compressed_account;
+        Ok((compressed_account, a.variant))
+    })
+    .collect::<Result<_, Box<dyn std::error::Error>>>()?;
```
137-165: Make `AccountInterface` conversions fallible instead of panicking.

`expect(...)` can crash callers if a cold interface is constructed without `load_context`. Prefer `TryFrom` and return `AccountInterfaceError`.

🔧 Proposed fix (sketch)

```diff
-impl From<&AccountInfoInterface> for AccountInterface {
-    fn from(info: &AccountInfoInterface) -> Self {
-        if info.is_cold {
-            Self::cold(
-                info.pubkey,
-                info.load_context
-                    .as_ref()
-                    .expect("cold account must have load_context")
-                    .compressed
-                    .clone(),
-            )
-        } else {
-            Self::hot(info.pubkey)
-        }
-    }
-}
+impl TryFrom<&AccountInfoInterface> for AccountInterface {
+    type Error = AccountInterfaceError;
+    fn try_from(info: &AccountInfoInterface) -> Result<Self, Self::Error> {
+        if info.is_cold {
+            let compressed = info
+                .load_context
+                .as_ref()
+                .ok_or(AccountInterfaceError::MissingLoadContext)?
+                .compressed
+                .clone();
+            Ok(Self::cold(info.pubkey, compressed))
+        } else {
+            Ok(Self::hot(info.pubkey))
+        }
+    }
+}
```

(Apply the same pattern to `TokenAccountInterface`.)

sdk-libs/compressible-client/src/load_accounts.rs (2)
184-191: Batch or parallelize mint proof fetching to cut latency.

This loop awaits each mint proof sequentially. If the indexer supports it, use a batch call or `try_join_all` for concurrency.
233-241: Avoid a panic when the decompression context is missing.

`expect(...)` can crash clients; return a `LoadAccountsError` instead.

🔧 Proposed fix (fallible collection)

```diff
 let compressed_accounts: Vec<_> = accounts
     .iter()
     .map(|a| {
         let compressed_account = a
             .account_interface
             .decompression_context
             .as_ref()
-            .expect("Cold account must have decompression context")
+            .ok_or_else(|| {
+                LoadAccountsError::BuildInstruction(
+                    "Cold account missing decompression context".into(),
+                )
+            })?
             .compressed_account
             .clone();
-        (compressed_account, a.variant.clone())
+        Ok((compressed_account, a.variant.clone()))
     })
-    .collect();
+    .collect::<Result<_, _>>()?;
```
pls revert changes to this file
Actionable comments posted: 7
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
sdk-libs/compressible-client/src/decompress_mint.rs (1)
317-322: Duplicate step comment: two "step 5" labels.

Lines 317 and 322 both say "5."; the second should be "6." to match the numbering.

📝 Suggested fix

```diff
 // 5. Check if already decompressed flag is set - return empty vec (idempotent)
 if mint_data.metadata.mint_decompressed {
     return Ok(vec![]);
 }

-// 5. Get validity proof
+// 6. Get validity proof
 let proof_result = indexer
```
🤖 Fix all issues with AI agents
In `@sdk-libs/compressible-client/src/account_interface_ext.rs`:
- Around line 22-26: The file has an incomplete TODO in the doc comment above
the extension trait (the comment immediately preceding #[async_trait] for the
account interface extension trait) — either replace "TODO:" with a short
description of remaining work (e.g., explain missing behaviors, constraints, or
planned features for the blanket-implemented extension for all `Rpc + Indexer`
types) or remove the TODO entirely if nothing remains to do; update the doc
comment above the trait (the extension trait in account_interface_ext.rs)
accordingly so the comment is complete and self-contained.
- Around line 137-146: The current code fabricates an empty
solana_account::Account and returns AccountInfoInterface::hot(...) for
non-existent accounts which is misleading; instead return an explicit "none"
variant (e.g., AccountInfoInterface::None or a constructor like
AccountInfoInterface::none()) when neither on-chain nor compressed state exists,
or if that variant does not exist add it mirroring MintState::None and use it
here (refer to the call site that currently constructs solana_account::Account,
the AccountInfoInterface::hot invocation, and program_id/address to locate the
change); also update the trait method docs to document that the method returns
the explicit None variant for missing accounts.
- Around line 184-225: In get_ata_interface, the call to
get_compressed_token_accounts_by_owner incorrectly passes the derived ATA PDA
(variable ata) instead of the actual owner pubkey; update the call in
get_ata_interface to pass owner (the function parameter) to
get_compressed_token_accounts_by_owner so the indexer is queried by the real
owner, leaving the rest of the logic (derive_token_ata,
TokenAccountInterface::cold, and error handling) unchanged.
- Around line 148-182: The get_token_account_interface function is using
get_compressed_token_accounts_by_owner(address, ...) incorrectly because the
input address is a token account pubkey, not the owner/wallet pubkey; update the
logic to query compressed token accounts by the actual owner/mint instead of the
token account address: either change the function signature to accept (owner,
mint) like get_ata_interface and call
get_compressed_token_accounts_by_owner(owner, Some(mint), ...), or when an
on-chain account exists use get_account(address) to parse and extract the owner
and mint and then query compressed accounts by that owner/mint, or derive the
expected ATA and use that for the compressed query; ensure
TokenAccountInterface::cold is still constructed with the correct owner/mint
values when returning a cold interface.
In `@sdk-libs/compressible-client/src/decompress_mint.rs`:
- Around line 30-39: The AlreadyDecompressed error variant is defined but never
used; update behavior to either remove the variant or return it from idempotent
paths so callers can distinguish "already decompressed" from success. Change
either build_decompress_mint and/or decompress_mint to return
Err(CompressibleClientError::AlreadyDecompressed) instead of Ok(vec![]) when the
mint is already decompressed (preserve current idempotent behavior behind a new
boolean flag or create a new non-idempotent helper), or remove the
AlreadyDecompressed enum variant if you prefer only idempotent Ok(vec![])
responses; modify the function signatures/flow in build_decompress_mint and
decompress_mint accordingly and update any callers to handle the new Err variant
or the removed variant.
In `@sdk-libs/macros/src/compressible/instructions.rs`:
- Line 1474: The macro currently does `let __params_clone = #params_ident.clone();`, which introduces a hidden `Clone` bound on the user type (`params_ident`) and can break user code; change the macro to avoid requiring Clone by either (a) passing a reference to the original params into
(params_ident) and can break user code; change the macro to avoid requiring
Clone by either (a) passing a reference to the original params into
light_finalize (use a borrowed &#params_ident or construct a minimal borrowed
view) or (b) clone only the specific fields actually needed by light_finalize
instead of cloning the entire params struct, or if you prefer to keep cloning
add an explicit compile-time check that emits a clear compile_error! if
params_ident does not implement Clone and/or document the Clone requirement in
the macro docs — update occurrences around the light_finalize invocation and the
generated __params_clone identifier accordingly.
- Around line 1467-1496: The unsafe is unsound because __accounts_ptr is
pointing into the ctx struct (via &ctx.accounts) which can be moved by the
handler; instead capture a pointer to the actual accounts data that ctx.accounts
references so it remains valid after ctx is moved. Change the creation of
__accounts_ptr to take the referenced-data pointer (i.e., derive the pointer
from ctx.accounts itself, not &ctx.accounts), keep __remaining_ptr and
__params_clone as-is, and then use that saved data pointer when calling <_ as
LightFinalize>::light_finalize after __light_handler_result succeeds.
♻️ Duplicate comments (5)
sdk-libs/compressible-client/Cargo.toml (1)
31-31: Use workspace dependency for `smallvec`.

Direct version pinning (`smallvec = "1.15"`) can lead to version drift across the monorepo. This should use `{ workspace = true }` if smallvec is defined in the workspace, or be added to the workspace dependencies first.

Suggested fix

```diff
-smallvec = "1.15"
+smallvec = { workspace = true }
```

sdk-libs/compressible-client/src/tx_size.rs (1)
43-86: Solid implementation addressing prior feedback.

The function now:

- Returns `Result` with an explicit error for oversized single instructions (lines 64-70)
- Derives the signer count from instruction metadata via `count_signers` (line 105)

One remaining optimization: lines 58-59 clone `current_batch` each iteration. For typical transaction counts this is fine, but if you want to avoid quadratic allocations, consider a push/measure/pop pattern.

Optional: Avoid per-iteration clone
```diff
 for (idx, ix) in instructions.into_iter().enumerate() {
-    let mut trial = current_batch.clone();
-    trial.push(ix.clone());
-
-    if estimate_tx_size(&trial, payer) > max_size {
+    current_batch.push(ix);
+    if estimate_tx_size(&current_batch, payer) > max_size {
+        let ix = current_batch.pop().expect("just pushed");
         // Check if this single instruction alone exceeds max_size
         let single_ix_size = estimate_tx_size(&[ix.clone()], payer);
         if single_ix_size > max_size {
             return Err(InstructionTooLargeError {
                 instruction_index: idx,
                 estimated_size: single_ix_size,
                 max_size,
             });
         }
         if !current_batch.is_empty() {
             batches.push(current_batch);
         }
         current_batch = vec![ix];
-    } else {
-        current_batch.push(ix);
     }
 }
```

sdk-libs/compressible-client/src/lib.rs (3)

139-172: Panic-prone `From` implementations remain unaddressed.

The `expect()` calls on lines 146 and 163 will panic if a cold interface lacks `load_context`. This was flagged in a prior review. Consider implementing `TryFrom` instead to surface errors gracefully, or enforce the invariant at construction time so callers can't trigger a panic.

428-437: Panic on missing token CPI context remains unaddressed.

The `.unwrap()` on line 433 and `.expect()` on line 434 bypass the `Result` contract and can crash callers. This was flagged in a prior review. Consider making this fallible with `.ok_or(...)`.

619-629: Panic on missing decompression context remains unaddressed.

The `.expect()` on line 625 can crash callers when a cold account unexpectedly lacks decompression context. This was flagged in a prior review. Consider using `.ok_or(...)` to return a proper error.
```rust
// Doesn't exist
let account = solana_account::Account {
    lamports: 0,
    data: vec![],
    owner: *program_id,
    executable: false,
    rent_epoch: 0,
};
Ok(AccountInfoInterface::hot(*address, account))
```
Returning "hot" for non-existent accounts may be misleading.
When neither on-chain nor compressed state exists, you return `AccountInfoInterface::hot()` with a fabricated empty account. This could mislead callers into thinking the account exists on-chain when it doesn't.
Consider either:

- Returning an explicit `None` variant (similar to `MintState::None`)
- Documenting this behavior clearly in the trait method docs
🤖 Prompt for AI Agents
In `@sdk-libs/compressible-client/src/account_interface_ext.rs` around lines 137 -
146, The current code fabricates an empty solana_account::Account and returns
AccountInfoInterface::hot(...) for non-existent accounts which is misleading;
instead return an explicit "none" variant (e.g., AccountInfoInterface::None or a
constructor like AccountInfoInterface::none()) when neither on-chain nor
compressed state exists, or if that variant does not exist add it mirroring
MintState::None and use it here (refer to the call site that currently
constructs solana_account::Account, the AccountInfoInterface::hot invocation,
and program_id/address to locate the change); also update the trait method docs
to document that the method returns the explicit None variant for missing
accounts.
```rust
async fn get_token_account_interface(
    &self,
    address: &Pubkey,
) -> Result<TokenAccountInterface, RpcError> {
    use light_sdk::constants::LIGHT_TOKEN_PROGRAM_ID;

    // On-chain first
    if let Some(account) = self.get_account(*address).await? {
        return TokenAccountInterface::hot(*address, account)
            .map_err(|e| RpcError::CustomError(format!("parse error: {}", e)));
    }

    // Compressed state
    let result = self
        .get_compressed_token_accounts_by_owner(address, None, None)
        .await
        .map_err(indexer_err)?;

    if let Some(compressed) = result.value.items.into_iter().next() {
        let mint = compressed.token.mint;
        return Ok(TokenAccountInterface::cold(
            *address,
            compressed,
            *address,
            mint,
            0,
            LIGHT_TOKEN_PROGRAM_ID.into(),
        ));
    }

    Err(RpcError::CustomError(format!(
        "token account not found: {}",
        address
    )))
}
```
🧩 Analysis chain
🏁 Script executed:
# Find the definition of get_compressed_token_accounts_by_owner
rg "fn get_compressed_token_accounts_by_owner" -A 10 --type rust
Repository: Lightprotocol/light-protocol
Length of output: 200
🏁 Script executed:
# Check the file in question to see full context
fd "account_interface_ext.rs" --type f
Repository: Lightprotocol/light-protocol
Length of output: 200
🏁 Script executed:
# Search for usage patterns of this function
rg "get_compressed_token_accounts_by_owner" -B 3 -A 3 --type rust
Repository: Lightprotocol/light-protocol
Length of output: 200
🏁 Script executed:
# Look for CLAUDE.md files that might document token account behavior
fd "CLAUDE.md" --type f -x grep -l "token.*account\|owner" {}
Repository: Lightprotocol/light-protocol
Length of output: 303
🏁 Script executed:
# Check if we can access program-examples repo
git ls-files sdk-libs/ 2>/dev/null | head -20
Repository: Lightprotocol/light-protocol
Length of output: 813
🌐 Web query:
Light Protocol get_compressed_token_accounts_by_owner owner parameter documentation
💡 Result:
The owner parameter should be a Solana public key (base58-encoded Pubkey string) representing the account/wallet whose compressed token accounts you want to query. It’s passed as the "owner" field in the JSON-RPC params for methods like getCompressedTokenBalancesByOwner and getCompressedAccountsByOwner. [1][2]
Sources:
- getCompressedTokenBalancesByOwner API docs (shows "owner" param in params). [1]
- getCompressedAccountsByOwner API docs (shows "owner" param in params). [2]
🌐 Web query:
Light Protocol compressed token account owner wallet semantics
💡 Result:
Summary of owner-wallet semantics for Light Protocol compressed token accounts:
- Compressed token accounts store the same logical fields as SPL token accounts (balance, owner/authority, mint) inside compressed account data; owner is the wallet (or PDA) that controls that token account. [1][8]
- Light provides light-ATAs whose address is deterministically derived from (owner pubkey, Light token program id, mint, bump). That means an owner's Light-ATA for a given mint is predictable and program-derived (not a private-keyed address). Use this ATA convention to find/derive an owner's compressed token account. [2][3]
- Ownership/authority checks are enforced by the Light System/Token program during compressed-state updates: transactions must include the compressed account state being read/written (and proofs/roots) so the program can validate the owner field and other invariants before accepting the update. In short, control is enforced on-chain by the Light program the same way SPL enforces ownership, but on compressed state via the protocol's validation. [9][1]
- Compressed accounts may be either addressless (identified by hash) or have an optional persistent address (an address tree / PDA). For fungible token accounts the address field is optional, but when present it serves as the permanent identifier for that owner/account across state updates. Use PDAs if you need a stable owner-specific account address. [3][11]
- Active vs. compressed lifecycle: Light exposes both on-chain cToken (regular Solana account wrapper) and compressed token accounts. A cToken account can be active (regular account) and later be compressed (migrated into the compressed state tree) and decompressed back; the owner semantics (who can sign to move funds or change owner) follow the token program rules and are validated when compressing/decompressing. [7][2]
- Practical implications for wallets/integrations:
  - Wallets should derive the owner's Light-ATA to query balances via Light RPCs (Photon/etc.). [2][8]
  - Authority changes, transfers, and decompression require invoking the Light Token/System program so the protocol can validate proofs/roots—clients must supply the appropriate compressed-state data in transactions. [1][9]
Sources:
[1] Light Protocol docs — Compressed Account Model / In a Nutshell.
[2] Light Token Program docs — light-ATA derivation and compressed token account overview.
[3] Light protocol whitepaper — compressed PDA/account layout and address semantics.
[7] light-ctoken-sdk docs (cToken / compressed token lifecycle).
[8] Lightprotocol compressed tokens overview / wallet support.
🏁 Script executed:
rg "get_token_account_interface" -B 2 -A 2 --type rust
Repository: Lightprotocol/light-protocol
Length of output: 1331
🏁 Script executed:
# Look for the trait definition that contains this method
rg "trait.*\{" -A 20 --type rust | grep -A 20 "get_token_account_interface"
Repository: Lightprotocol/light-protocol
Length of output: 1825
🏁 Script executed:
# Check the compressible-client implementation to understand the context better
fd "compressible-client" -type d
Repository: Lightprotocol/light-protocol
Length of output: 242
🏁 Script executed:
# Search for where this is called from - likely in tests or client usage
rg "\.get_token_account_interface\(" -B 5 --type rust
Repository: Lightprotocol/light-protocol
Length of output: 632
🏁 Script executed:
cat -n sdk-libs/compressible-client/src/account_interface_ext.rs | sed -n '140,190p'
Repository: Lightprotocol/light-protocol
Length of output: 1934
🏁 Script executed:
# Check what get_compressed_token_accounts_by_owner actually expects
rg "fn get_compressed_token_accounts_by_owner" -B 2 -A 15
Repository: Lightprotocol/light-protocol
Length of output: 10721
🏁 Script executed:
# Look at the ATA interface method to see how it differs
rg "fn get_ata_interface" -B 2 -A 15
Repository: Lightprotocol/light-protocol
Length of output: 3374
Semantic mismatch: address parameter is a token account, not an owner wallet.
At line 162, you call get_compressed_token_accounts_by_owner(address, ...) where address is the token account address passed as input. However, this method filters compressed accounts by token_data.owner, which should be a wallet/owner pubkey—not the token account address.
For Light Protocol compressed token accounts, the owner field is stored in the compressed account data and is independent of the token account address. Passing the token account address as the owner will not find the matching compressed account.
Consider either: (1) requiring owner and mint parameters like get_ata_interface does, (2) extracting the owner from the on-chain account first (if present), or (3) deriving the expected ATA address and using a different query strategy.
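Option (1), querying by an explicit owner and mint rather than by the token account address, can be sketched with simplified stand-in types (the struct and function names below are illustrative, not the real indexer API):

```rust
// Simplified stand-in for the indexer's compressed token account model.
#[derive(Debug, Clone, PartialEq)]
struct CompressedTokenAccount {
    owner: [u8; 32], // wallet (or PDA) that controls the token account
    mint: [u8; 32],
}

// Filter by the owner field stored in the compressed account data,
// never by the token account's own address.
fn find_compressed_for_owner(
    accounts: &[CompressedTokenAccount],
    owner: &[u8; 32],
    mint: &[u8; 32],
) -> Option<CompressedTokenAccount> {
    accounts
        .iter()
        .find(|a| &a.owner == owner && &a.mint == mint)
        .cloned()
}
```

The key point is that the lookup key is the owner wallet; matching against the token account address would never hit, which is exactly the bug described above.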
🤖 Prompt for AI Agents
In `@sdk-libs/compressible-client/src/account_interface_ext.rs` around lines 148 -
182, The get_token_account_interface function is using
get_compressed_token_accounts_by_owner(address, ...) incorrectly because the
input address is a token account pubkey, not the owner/wallet pubkey; update the
logic to query compressed token accounts by the actual owner/mint instead of the
token account address: either change the function signature to accept (owner,
mint) like get_ata_interface and call
get_compressed_token_accounts_by_owner(owner, Some(mint), ...), or when an
on-chain account exists use get_account(address) to parse and extract the owner
and mint and then query compressed accounts by that owner/mint, or derive the
expected ATA and use that for the compressed query; ensure
TokenAccountInterface::cold is still constructed with the correct owner/mint
values when returning a cold interface.
```rust
async fn get_ata_interface(
    &self,
    owner: &Pubkey,
    mint: &Pubkey,
) -> Result<AtaInterface, RpcError> {
    use light_sdk::constants::LIGHT_TOKEN_PROGRAM_ID;

    let (ata, bump) = derive_token_ata(owner, mint);

    // On-chain first
    if let Some(account) = self.get_account(ata).await? {
        let inner = TokenAccountInterface::hot(ata, account)
            .map_err(|e| RpcError::CustomError(format!("parse error: {}", e)))?;
        return Ok(AtaInterface::new(inner));
    }

    // Compressed state
    let options = Some(GetCompressedTokenAccountsByOwnerOrDelegateOptions::new(
        Some(*mint),
    ));
    let result = self
        .get_compressed_token_accounts_by_owner(&ata, options, None)
        .await
        .map_err(indexer_err)?;

    if let Some(compressed) = result.value.items.into_iter().next() {
        let inner = TokenAccountInterface::cold(
            ata,
            compressed,
            *owner,
            *mint,
            bump,
            LIGHT_TOKEN_PROGRAM_ID.into(),
        );
        return Ok(AtaInterface::new(inner));
    }

    Err(RpcError::CustomError(format!(
        "ATA not found: owner={} mint={}",
        owner, mint
    )))
}
```
🧩 Analysis chain
🏁 Script executed:
# Find TokenAccountInterface definition and cold method
rg "impl.*TokenAccountInterface|fn cold" -A 8 --type rust | head -100
Repository: Lightprotocol/light-protocol
Length of output: 4549
🏁 Script executed:
# Check get_compressed_token_accounts_by_owner usage patterns
rg "get_compressed_token_accounts_by_owner" -B 3 -A 3 --type rust
Repository: Lightprotocol/light-protocol
Length of output: 50385
🏁 Script executed:
# Find the file structure of compressible-client to understand the context better
fd "account_interface" --type f
Repository: Lightprotocol/light-protocol
Length of output: 183
🏁 Script executed:
# Check the derive_token_ata function to understand its semantics
rg "fn derive_token_ata" -A 5 --type rust
Repository: Lightprotocol/light-protocol
Length of output: 1199
🏁 Script executed:
# Check the specific file context around line 205 in account_interface_ext.rs
sed -n '180,230p' sdk-libs/compressible-client/src/account_interface_ext.rs
Repository: Lightprotocol/light-protocol
Length of output: 1466
Query compressed token accounts by the actual owner, not the derived ATA address.
At line 205, the code queries get_compressed_token_accounts_by_owner(&ata, ...) where ata is a derived PDA address. The ATA is created as a PDA from seeds [owner, program_id, mint], but it is not the account owner. The function already receives the actual owner parameter, which should be used for the query. The semantic contract of get_compressed_token_accounts_by_owner expects the owner pubkey, and this is consistent with all usage patterns throughout the codebase.
Change line 205 to:
self.get_compressed_token_accounts_by_owner(owner, options, None)
🤖 Prompt for AI Agents
In `@sdk-libs/compressible-client/src/account_interface_ext.rs` around lines 184 -
225, In get_ata_interface, the call to get_compressed_token_accounts_by_owner
incorrectly passes the derived ATA PDA (variable ata) instead of the actual
owner pubkey; update the call in get_ata_interface to pass owner (the function
parameter) to get_compressed_token_accounts_by_owner so the indexer is queried
by the real owner, leaving the rest of the logic (derive_token_ata,
TokenAccountInterface::cold, and error handling) unchanged.
```diff
     ProgramError(#[from] solana_program_error::ProgramError),

     #[error("Mint already decompressed")]
     AlreadyDecompressed,

-    #[error("Proof required for cold mint")]
+    #[error("Validity proof required for cold mint")]
     ProofRequired,

     #[error("Indexer error: {0}")]
     IndexerError(#[from] light_client::indexer::IndexerError),
```
🧹 Nitpick | 🔵 Trivial
AlreadyDecompressed error variant is unused.
The AlreadyDecompressed variant is defined but never returned. Both build_decompress_mint (line 151) and decompress_mint (line 226) return Ok(vec![]) for idempotent behavior when already decompressed. Either remove the unused variant or use it for non-idempotent callers who want to distinguish "nothing to do" from "success."
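The strict-versus-idempotent split suggested above can be sketched with simplified stand-ins (a unit error enum and `Vec<u8>` in place of the real instruction list; names are illustrative):

```rust
// Simplified stand-in for the client's error type.
#[derive(Debug, PartialEq)]
enum DecompressMintError {
    AlreadyDecompressed,
}

// Idempotent path: "already decompressed" is treated as success with no work.
fn build_decompress_idempotent(already_decompressed: bool) -> Result<Vec<u8>, DecompressMintError> {
    if already_decompressed {
        return Ok(vec![]);
    }
    Ok(vec![1]) // placeholder for the real instruction bytes
}

// Strict path: non-idempotent callers can distinguish "nothing to do" from success.
fn build_decompress_strict(already_decompressed: bool) -> Result<Vec<u8>, DecompressMintError> {
    if already_decompressed {
        return Err(DecompressMintError::AlreadyDecompressed);
    }
    Ok(vec![1])
}
```

Keeping both entry points preserves the current idempotent behavior while giving the unused variant a purpose.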
🤖 Prompt for AI Agents
In `@sdk-libs/compressible-client/src/decompress_mint.rs` around lines 30 - 39,
The AlreadyDecompressed error variant is defined but never used; update behavior
to either remove the variant or return it from idempotent paths so callers can
distinguish "already decompressed" from success. Change either
build_decompress_mint and/or decompress_mint to return
Err(CompressibleClientError::AlreadyDecompressed) instead of Ok(vec![]) when the
mint is already decompressed (preserve current idempotent behavior behind a new
boolean flag or create a new non-idempotent helper), or remove the
AlreadyDecompressed enum variant if you prefer only idempotent Ok(vec![])
responses; modify the function signatures/flow in build_decompress_mint and
decompress_mint accordingly and update any callers to handle the new Err variant
or the removed variant.
```rust
// 3. We only use these pointers if the handler succeeds
let __accounts_ptr = &ctx.accounts as *const _ as *const ();
let __remaining_ptr = ctx.remaining_accounts as *const [solana_account_info::AccountInfo];
let __params_clone = #params_ident.clone();
```
Implicit Clone bound may break user code.
The #params_ident.clone() call introduces a hidden trait bound: the params type must implement Clone. Users whose parameter structs don't derive Clone will get cryptic errors pointing into macro-generated code.
Consider either:
- Documenting this requirement clearly in the macro's doc comments
- Extracting only the fields needed for `light_finalize` instead of cloning the entire params struct
- Adding a compile-time error with a helpful message if Clone isn't implemented
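A compile-time Clone check the macro could emit might look like the following sketch (`assert_clone` is a hypothetical helper, not an existing API); the failure then names the user's type directly instead of pointing into generated code:

```rust
// Hypothetical helper the macro could emit once: compiles only if T: Clone.
fn assert_clone<T: Clone>() {}

// A user params struct; removing `Clone` from the derive list would make
// `assert_clone::<MyParams>()` fail to compile with an error naming MyParams.
#[derive(Clone, Debug, PartialEq)]
struct MyParams {
    amount: u64,
}
```

The zero-cost monomorphized call documents and enforces the hidden bound at the point the macro introduces it.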
#!/bin/bash
# Check if the macro documentation mentions the Clone requirement
rg -n -B2 -A10 'rentfree_program|wrap_function_with_rentfree' --type=md
🤖 Prompt for AI Agents
In `@sdk-libs/macros/src/compressible/instructions.rs` at line 1474, The macro
currently does let __params_clone = `#params_ident.clone`(); which introduces a
hidden Clone bound on the user type (params_ident) and can break user code;
change the macro to avoid requiring Clone by either (a) passing a reference to
the original params into light_finalize (use a borrowed &#params_ident or
construct a minimal borrowed view) or (b) clone only the specific fields
actually needed by light_finalize instead of cloning the entire params struct,
or if you prefer to keep cloning add an explicit compile-time check that emits a
clear compile_error! if params_ident does not implement Clone and/or document
the Clone requirement in the macro docs — update occurrences around the
light_finalize invocation and the generated __params_clone identifier
accordingly.
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (10)
sdk-libs/macros/src/compressible/instructions.rs (1)
759-806: Consider using explicit exhaustive matching instead of `unreachable!()`. Multiple match arms use `unreachable!()` for `PdaOnly` and `TokenOnly` variants. While this works if the current invariant holds (the caller always passes `Mixed`), it obscures the intent and will panic at macro expansion time if the invariant is violated. If these variants are genuinely never expected here, consider either:
- Refactoring to accept only `Mixed` (via a newtype or different enum)
- Adding a brief comment explaining why these are unreachable

This is a minor observation—the current approach is acceptable for a proc macro context where panics surface as compile errors.
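The newtype alternative can be sketched like this (the enum and struct here are illustrative stand-ins for the macro's actual spec types): the conversion happens once at the boundary, and downstream codegen takes only the variant it supports, so no `unreachable!()` arm is needed.

```rust
// Illustrative stand-in for the macro's spec enum.
enum Spec {
    PdaOnly(u32),
    TokenOnly(u32),
    Mixed(u32, u32),
}

// Dedicated type: holding one of these proves the "always Mixed" invariant.
struct MixedSpec {
    pdas: u32,
    tokens: u32,
}

// Callers handle the other variants here instead of panicking inside codegen.
fn into_mixed(spec: Spec) -> Option<MixedSpec> {
    match spec {
        Spec::Mixed(pdas, tokens) => Some(MixedSpec { pdas, tokens }),
        Spec::PdaOnly(_) | Spec::TokenOnly(_) => None,
    }
}
```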
sdk-libs/compressible-client/src/decompress_mint.rs (3)
156-166: Unchecked array access may panic if proof has no accounts. At line 158, `proof_result.accounts[0]` will panic if the `accounts` vector is empty. While this is likely never the case for a valid proof, defensive programming would handle this edge case gracefully.

🛡️ Proposed defensive check

```diff
 // Proof required for cold mint
 let proof_result = validity_proof.ok_or(DecompressMintError::ProofRequired)?;

-// Extract tree info from proof result
-let account_info = &proof_result.accounts[0];
+// Extract tree info from proof result
+let account_info = proof_result
+    .accounts
+    .first()
+    .ok_or(DecompressMintError::MissingMintData)?;
```
317-326: Duplicate step numbering in comments. Step 5 appears twice (lines 317 and 322). The second should be step 6 to match the subsequent "// 7." at line 339.

📝 Fix step numbering

```diff
 // 5. Check if already decompressed flag is set - return empty vec (idempotent)
 if mint_data.metadata.mint_decompressed {
     return Ok(vec![]);
 }

-// 5. Get validity proof
+// 6. Get validity proof
 let proof_result = indexer
```
328-337: Same unchecked array access pattern in `decompress_mint_idempotent`. Line 329 also accesses `proof_result.accounts[0]` without bounds checking. Consider applying the same defensive pattern here.

sdk-libs/macros/src/compressible/file_scanner.rs (1)
122-136: Inconsistent error handling: inline module errors are silently swallowed. While `scan_rust_file` (lines 88-104) now properly logs extraction errors, `scan_inline_module` silently ignores `Err` results from `try_extract_from_struct`. This creates a diagnostic blind spot for errors in inline modules.

♻️ Apply consistent error handling

```diff
 for item in content {
     match item {
         Item::Struct(item_struct) => {
-            if let Ok(Some((info, struct_name))) = try_extract_from_struct(item_struct) {
-                result.pda_specs.extend(info.pda_fields);
-                result.token_specs.extend(info.token_fields);
-                result.rentfree_struct_names.insert(struct_name);
+            match try_extract_from_struct(item_struct) {
+                Ok(Some((info, struct_name))) => {
+                    result.pda_specs.extend(info.pda_fields);
+                    result.token_specs.extend(info.token_fields);
+                    result.rentfree_struct_names.insert(struct_name);
+                }
+                Ok(None) => {
+                    // Struct didn't have rentfree fields, that's ok
+                }
+                Err(e) => {
+                    result.errors.push(format!(
+                        "Error extracting from inline struct {}: {}",
+                        item_struct.ident, e
+                    ));
+                }
             }
         }
```

sdk-libs/program-test/src/compressible.rs (1)
453-466: Comment could be clearer about why `mint` is `None`. The comment "Mint is decompressed, data lives in Mint account" at line 465 is slightly confusing in the context of `CompressAndCloseMint`. The `mint: None` tells the on-chain program to read current state from the Solana account rather than instruction data—this is about data freshness, not decompression status.

📝 Clarified comment

```diff
- mint: None, // Mint is decompressed, data lives in Mint account
+ mint: None, // Read current mint state from on-chain account, not stale instruction data
```

sdk-libs/compressible-client/src/lib.rs (3)
334-336: Consider propagating serialization errors. The `expect()` on `try_to_vec()` will panic on serialization failure. While unlikely for these simple structs, the function could return `Result` to maintain consistency with other builders in this module.

♻️ Suggested fix

```diff
- let serialized_data = instruction_data
-     .try_to_vec()
-     .expect("Failed to serialize instruction data");
+ let serialized_data = instruction_data.try_to_vec()?;
```

This requires changing the return type to `Result<Instruction, Box<dyn std::error::Error>>`.
543-548: Add explicit validation that `validity_proof_with_context.accounts` is non-empty in `build_decompress_idempotent_raw` before calling `.unwrap()` on `state_trees`. Line 454-458 calls `.unwrap()` on `packed_tree_infos.state_trees.as_ref()`, which will panic if `validity_proof_with_context.accounts` is empty (causing `pack_tree_infos()` to set `state_trees = None`). While compressed_accounts is validated at line 400 and comments suggest accounts should match in length, there's no explicit validation preventing the caller from passing an empty accounts list. This differs from `compress_accounts_idempotent` (lines 543-548), which explicitly checks `validity_proof_with_context.accounts.is_empty()` at lines 525-527. Add the same validation here before line 454 to prevent panics and improve code clarity.
358-358: The second parameter specifies config version, not bump—clarify the design intent.

`CompressibleConfig::derive_pda(program_id, 0)` uses version 0 for the PDA seed (not a bump; the bump is derived and returned). However, `initialize_compression_config()` at line 310 accepts this as a parameter with a default, while `update_compression_config()` at line 358 hardcodes it to 0. The documentation identifies version 1 as the standard. Consider either accepting a version parameter in `update_compression_config()` for consistency, or add a comment explaining why version 0 is always used for updates.
171-213: Update the comment to reflect actual DecompressMint account order (no mint_signer). The comment on lines 171–186 incorrectly lists `mint_signer` as account #2. For `DecompressMint`, `mint_signer` is `None` per the conditional logic in `MintActionMetaConfig::to_account_metas()` (which explicitly notes "mint_signer needs to sign for create_mint, not for decompress_mint"). The `build_account_infos()` implementation is correct—remove the mint_signer entry from the comment:

🔧 Corrected account order

```diff
 // Account order must match to_account_metas() from MintActionMetaConfig:
 // 1. light_system_program
-// 2. mint_signer (no sign for decompress)
-// 3. authority (signer)
-// 4. compressible_config
-// 5. mint
-// 6. rent_sponsor
-// 7. fee_payer (signer)
-// 8. cpi_authority_pda
-// 9. registered_program_pda
-// 10. account_compression_authority
-// 11. account_compression_program
-// 12. system_program
-// 13. output_queue
-// 14. tree_pubkey (state_tree)
-// 15. input_queue
+// 2. authority (signer)
+// 3. compressible_config
+// 4. mint
+// 5. rent_sponsor
+// 6. fee_payer (signer)
+// 7. cpi_authority_pda
+// 8. registered_program_pda
+// 9. account_compression_authority
+// 10. account_compression_program
+// 11. system_program
+// 12. output_queue
+// 13. tree_pubkey (state_tree)
+// 14. input_queue
```
🤖 Fix all issues with AI agents
In `@sdk-libs/compressible-client/src/account_interface_ext.rs`:
- Around line 25-26: Remove or complete the orphaned `// TODO:` comment in
account_interface_ext.rs: either expand it with the intended task or delete the
line so it no longer sits above the `#[async_trait]` attribute and confusing
future readers; ensure any new TODO clearly states the action and references the
surrounding item (the async trait implementation) so maintainers understand what
remains to be done.
- Around line 160-176: The code calls
get_compressed_token_accounts_by_owner(address, ...) with a token account
address instead of the account's owner; update the logic in the block that
constructs TokenAccountInterface::cold so you query by the true owner or by
account address using the appropriate indexer method: either (A) derive/fetch
the compressed account on-chain to obtain its owner pubkey and then call
get_compressed_token_accounts_by_owner(owner_pubkey, ...) or (B) replace the
call with a direct lookup method (e.g., get_compressed_token_account_by_address
or equivalent) that accepts the token account address, then use the returned
compressed account and its mint when constructing TokenAccountInterface::cold;
ensure you reference get_compressed_token_accounts_by_owner and
TokenAccountInterface::cold when making the change.
In `@sdk-libs/macros/src/compressible/instructions.rs`:
- Line 1474: The generated code calls .clone() on the params (see the
__params_clone binding and the params identifier `#params_ident`), which requires
the params type to implement Clone; add an explicit compile-time guarantee so
users get a clear error: either add a Clone bound on the generated function/impl
signature for the params type (e.g., include T: Clone / ParamsType: Clone where
you emit the function) or emit a small static assertion near the use (e.g., a
zero-sized helper that requires the params type to implement Clone) so the
compiler error points to a clear message instead of the macro expansion.
- Around line 1467-1496: The current unsafe saves raw pointers (__accounts_ptr,
__remaining_ptr) to parts of ctx before executing `#fn_block` which may move ctx,
leading to dangling pointers when later calling <_ as
LightFinalize>::light_finalize; instead, restructure to avoid raw-pointer
resurrection by destructuring ctx beforehand (e.g., bind let accounts_ref =
&ctx.accounts and let remaining_ref = ctx.remaining_accounts or move accounts
into a ManuallyDrop/Pin wrapper) and use those stable bindings (and a cloned
`#params_ident`) for the pre-init and the post-success light_finalize call; update
the code paths around __light_handler_result, light_pre_init, and the final
light_finalize invocation to use these borrowable/stable references rather than
casting to *const ().
In `@sdk-libs/program-test/src/compressible.rs`:
- Around line 272-276: The loop currently calls compress_mint_forester for each
mint_pubkey individually (using compress_mint_accounts and
stored_compressible_accounts), which is inefficient for many mints; refactor to
process mints in batches like the CToken handling by chunking
compress_mint_accounts (e.g., N-sized chunks) and either implement a batched
entry point (compress_mint_forester_batch) that accepts a slice of pubkeys or
run an async parallel batch using FuturesUnordered/join_all for each chunk, then
remove corresponding keys from stored_compressible_accounts after the batch
completes; update references to compress_mint_forester and
compress_mint_accounts accordingly.
In `@sdk-libs/sdk/src/compressible/close.rs`:
- Around line 14-15: The call to info.resize(0).map_err(|_|
LightSdkError::ConstraintViolation) discards the original error; change this to
preserve context by either (A) mapping to a new error variant like
LightSdkError::ResizeFailed that wraps the original error (e.g., map_err(|e|
LightSdkError::ResizeFailed(e.into()))) and add that variant to the
LightSdkError enum, or (B) log/trace the original error before converting (e.g.,
trace/error with the error) and then map to ConstraintViolation; apply the same
change to the second occurrence around lines 41-42 and update any error
construction sites or tests that reference LightSdkError accordingly.
♻️ Duplicate comments (6)
sdk-libs/compressible-client/Cargo.toml (1)
31-31: Use workspace dependency for `smallvec` to maintain version consistency. Direct version pinning (`smallvec = "1.15"`) can lead to version inconsistencies across the monorepo. If `smallvec` is used elsewhere in the workspace, prefer `smallvec = { workspace = true }`.

sdk-libs/compressible-client/src/tx_size.rs (2)
88-99: Consider `HashSet` for signer deduplication if instruction sets grow. `count_signers` uses a `Vec` with linear `.contains()` checks. For typical transactions with few signers this is fine, but if instruction sets grow large, a `HashSet` would provide O(1) lookups. Given current usage patterns, this is a minor optimization.
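A `HashSet`-based version of the deduplication could look like this sketch (pubkeys modeled as raw byte arrays rather than the real `Pubkey` type; the function name is illustrative):

```rust
use std::collections::HashSet;

// Count unique signers with O(1) average-case lookups instead of a
// linear Vec::contains scan per signer.
fn count_unique_signers(signers: &[[u8; 32]]) -> usize {
    let mut seen: HashSet<[u8; 32]> = HashSet::new();
    for s in signers {
        seen.insert(*s); // duplicate inserts are ignored by the set
    }
    seen.len()
}
```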
57-78: Quadratic allocation from cloning `current_batch` each iteration. The pattern of cloning `current_batch` into `trial` on each iteration (lines 58-59) causes O(n²) allocations in the worst case. You can avoid this by pushing, measuring, then popping if the batch exceeds the limit.

♻️ Avoid per-iteration clone

```diff
 for (idx, ix) in instructions.into_iter().enumerate() {
-    let mut trial = current_batch.clone();
-    trial.push(ix.clone());
-
-    if estimate_tx_size(&trial, payer) > max_size {
+    current_batch.push(ix.clone());
+
+    if estimate_tx_size(&current_batch, payer) > max_size {
+        let ix = current_batch.pop().expect("just pushed");
+
         // Check if this single instruction alone exceeds max_size
         let single_ix_size = estimate_tx_size(&[ix.clone()], payer);
         if single_ix_size > max_size {
             return Err(InstructionTooLargeError {
                 instruction_index: idx,
                 estimated_size: single_ix_size,
                 max_size,
             });
         }
         if !current_batch.is_empty() {
             batches.push(current_batch);
         }
         current_batch = vec![ix];
-    } else {
-        current_batch.push(ix);
     }
 }
```
139-172: Panic risk in `From` implementations remains unaddressed. These `From` implementations use `expect()` which can panic if a cold interface lacks `load_context`. The previous review correctly identified this—consider implementing `TryFrom` instead to make the conversion fallible and return a proper error.
428-437: Panic risk on missing CPI context remains. The chain `.map(|(acc, _)| acc.tree_info.cpi_context.unwrap()).expect(...)` can panic in two places:

- `unwrap()` if `cpi_context` is `None`
- `expect()` if no token account is found (though `has_tokens` guards this)

The first case is the real concern—a token account with `None` cpi_context would crash.

🔧 Suggested fix

```diff
- let first_token_cpi_context = compressed_accounts
-     .iter()
-     .find(|(acc, _)| acc.owner == LIGHT_TOKEN_PROGRAM_ID)
-     .map(|(acc, _)| acc.tree_info.cpi_context.unwrap())
-     .expect("has_tokens is true so there must be a token");
+ let first_token_cpi_context = compressed_accounts
+     .iter()
+     .find(|(acc, _)| acc.owner == LIGHT_TOKEN_PROGRAM_ID)
+     .and_then(|(acc, _)| acc.tree_info.cpi_context)
+     .ok_or("Token compressed account missing cpi_context")?;
```
619-629: Panic risk on missing decompression context remains. The `expect()` at line 625 can crash callers. While cold accounts should have decompression context by construction, this is a public API that could receive malformed inputs.

🔧 Suggested fix

```diff
- let compressed_accounts: Vec<(CompressedAccount, V)> = cold_accounts
-     .into_iter()
-     .map(|a| {
-         let compressed_account = a
-             .account_interface
-             .decompression_context
-             .expect("Cold account must have decompression context")
-             .compressed_account;
-         (compressed_account, a.variant)
-     })
-     .collect();
+ let compressed_accounts: Vec<(CompressedAccount, V)> = cold_accounts
+     .into_iter()
+     .map(|a| {
+         let compressed_account = a
+             .account_interface
+             .decompression_context
+             .ok_or("Cold account missing decompression context")?
+             .compressed_account;
+         Ok((compressed_account, a.variant))
+     })
+     .collect::<Result<_, Box<dyn std::error::Error>>>()?;
```
```rust
// Compressed state
let result = self
    .get_compressed_token_accounts_by_owner(address, None, None)
    .await
    .map_err(indexer_err)?;

if let Some(compressed) = result.value.items.into_iter().next() {
    let mint = compressed.token.mint;
    return Ok(TokenAccountInterface::cold(
        *address,
        compressed,
        *address,
        mint,
        0,
        LIGHT_TOKEN_PROGRAM_ID.into(),
    ));
}
```
Semantic mismatch: token account address passed as owner.
At line 162, get_compressed_token_accounts_by_owner(address, ...) is called where address is the token account's address. However, get_compressed_token_accounts_by_owner expects an owner pubkey, not a token account address. This will likely fail to find the intended account.
If you need to look up a specific compressed token account by its address, consider using a different indexer method or deriving the owner from the on-chain account data first.
🤖 Prompt for AI Agents
In `@sdk-libs/compressible-client/src/account_interface_ext.rs` around lines 160 -
176, The code calls get_compressed_token_accounts_by_owner(address, ...) with a
token account address instead of the account's owner; update the logic in the
block that constructs TokenAccountInterface::cold so you query by the true owner
or by account address using the appropriate indexer method: either (A)
derive/fetch the compressed account on-chain to obtain its owner pubkey and then
call get_compressed_token_accounts_by_owner(owner_pubkey, ...) or (B) replace
the call with a direct lookup method (e.g.,
get_compressed_token_account_by_address or equivalent) that accepts the token
account address, then use the returned compressed account and its mint when
constructing TokenAccountInterface::cold; ensure you reference
get_compressed_token_accounts_by_owner and TokenAccountInterface::cold when
making the change.
```diff
 // Save raw pointers to accounts and remaining_accounts before ctx is potentially moved.
 // This is safe because:
 // 1. The accounts struct lives on the stack for the entire function duration
 // 2. remaining_accounts is a slice that lives for 'info lifetime
 // 3. We only use these pointers if the handler succeeds
 let __accounts_ptr = &ctx.accounts as *const _ as *const ();
 let __remaining_ptr = ctx.remaining_accounts as *const [solana_account_info::AccountInfo];
 let __params_clone = #params_ident.clone();

 // Execute the original handler body (may move ctx and params)
 let __light_handler_result: anchor_lang::Result<()> = #fn_block;

-// Phase 2: On success, finalize compression
+// Phase 2: On success, finalize compression using saved pointers
 if __light_handler_result.is_ok() {
-    ctx.accounts.light_finalize(ctx.remaining_accounts, &#params_ident, __has_pre_init)
-        .map_err(|e| {
+    // SAFETY: accounts struct is still valid on the stack, we're just accessing it
+    // through a raw pointer because the borrow checker can't track that ctx was
+    // only partially moved (accounts field is still there).
+    unsafe {
+        let __accounts_ref = &*(__accounts_ptr as *const _);
+        let __remaining_ref = &*__remaining_ptr;
+        <_ as LightFinalize>::light_finalize(
+            __accounts_ref,
+            __remaining_ref,
+            &__params_clone,
+            __has_pre_init
+        ).map_err(|e| {
+            let pe: solana_program_error::ProgramError = e.into();
+            pe
+        })?;
+    }
```
Unsound unsafe code: raw pointer may dangle if handler moves ctx.
The safety comment on lines 1468-1471 states that "the accounts struct lives on the stack for the entire function duration." However, this is only true if #fn_block does not move ctx. If user code does something like delegate_handler(ctx), the pointer at __accounts_ptr becomes dangling, and dereferencing it on line 1485 is undefined behavior.
Additionally, the type erasure pattern (casting to *const () and back via inference) is fragile—if the inferred type doesn't match, this silently produces UB.
Suggested alternatives:
- Restrict handler patterns: Document (and ideally enforce via a proc-macro check) that the wrapped handler must only borrow `ctx`, never move it.
- Destructure `ctx` before the handler: Extract the accounts and remaining_accounts into separate bindings before calling the handler, so the borrow checker can track them properly.
- Use `ManuallyDrop` or `Pin`: If you need to preserve the address, ensure the value cannot be moved.
Example: Safer restructure using destructuring
// Instead of raw pointers, consider:
let accounts = &ctx.accounts;
let remaining = ctx.remaining_accounts;
let params_ref = &#params_ident;
let __has_pre_init = accounts.light_pre_init(remaining, params_ref)
.map_err(|e| { let pe: solana_program_error::ProgramError = e.into(); pe })?;
// If the handler needs ctx, consider passing by reference or
// splitting the handler to return any needed state.
// Then finalize with the borrowed references.🤖 Prompt for AI Agents
In `@sdk-libs/macros/src/compressible/instructions.rs` around lines 1467 - 1496,
The current unsafe saves raw pointers (__accounts_ptr, __remaining_ptr) to parts
of ctx before executing `#fn_block` which may move ctx, leading to dangling
pointers when later calling <_ as LightFinalize>::light_finalize; instead,
restructure to avoid raw-pointer resurrection by destructuring ctx beforehand
(e.g., bind let accounts_ref = &ctx.accounts and let remaining_ref =
ctx.remaining_accounts or move accounts into a ManuallyDrop/Pin wrapper) and use
those stable bindings (and a cloned `#params_ident`) for the pre-init and the
post-success light_finalize call; update the code paths around
__light_handler_result, light_pre_init, and the final light_finalize invocation
to use these borrowable/stable references rather than casting to *const ().
| // 3. We only use these pointers if the handler succeeds | ||
| let __accounts_ptr = &ctx.accounts as *const _ as *const (); | ||
| let __remaining_ptr = ctx.remaining_accounts as *const [solana_account_info::AccountInfo]; | ||
| let __params_clone = #params_ident.clone(); |
Implicit Clone bound on params type may cause confusing errors.
The generated code calls .clone() on the params, requiring the user's params type to implement Clone. If it doesn't, compilation fails with an error pointing at generated macro code, which is confusing for users.
Consider either:
- Adding a compile-time check with a clear error message (e.g., using a static assertion or trait bound)
- Documenting this requirement prominently in the macro's doc comment
Example: Add clearer compile-time assertion
+ // Ensure params implements Clone - required for finalization
+ fn __assert_clone<T: Clone>(_: &T) {}
+ __assert_clone(&#params_ident);
let __params_clone = #params_ident.clone();
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| let __params_clone = #params_ident.clone(); | |
| // Ensure params implements Clone - required for finalization | |
| fn __assert_clone<T: Clone>(_: &T) {} | |
| __assert_clone(&#params_ident); | |
| let __params_clone = #params_ident.clone(); |
🤖 Prompt for AI Agents
In `@sdk-libs/macros/src/compressible/instructions.rs` at line 1474, The generated
code calls .clone() on the params (see the __params_clone binding and the params
identifier `#params_ident`), which requires the params type to implement Clone;
add an explicit compile-time guarantee so users get a clear error: either add a
Clone bound on the generated function/impl signature for the params type (e.g.,
include T: Clone / ParamsType: Clone where you emit the function) or emit a
small static assertion near the use (e.g., a zero-sized helper that requires the
params type to implement Clone) so the compiler error points to a clear message
instead of the macro expansion.
| // Process Mint accounts via mint_action | ||
| for mint_pubkey in compress_mint_accounts { | ||
| compress_mint_forester(rpc, mint_pubkey, &payer).await?; | ||
| stored_compressible_accounts.remove(&mint_pubkey); | ||
| } |
🧹 Nitpick | 🔵 Trivial
Mint compression uses single-item processing instead of batching.
Unlike CToken accounts which are batched (lines 257-270), Mint accounts are processed one at a time. While this works, it may be less efficient for large numbers of mints. Consider whether batch processing would be beneficial here.
🤖 Prompt for AI Agents
In `@sdk-libs/program-test/src/compressible.rs` around lines 272 - 276, The loop
currently calls compress_mint_forester for each mint_pubkey individually (using
compress_mint_accounts and stored_compressible_accounts), which is inefficient
for many mints; refactor to process mints in batches like the CToken handling by
chunking compress_mint_accounts (e.g., N-sized chunks) and either implement a
batched entry point (compress_mint_forester_batch) that accepts a slice of
pubkeys or run an async parallel batch using FuturesUnordered/join_all for each
chunk, then remove corresponding keys from stored_compressible_accounts after
the batch completes; update references to compress_mint_forester and
compress_mint_accounts accordingly.
| info.resize(0) | ||
| .map_err(|_| LightSdkError::ConstraintViolation)?; |
🧹 Nitpick | 🔵 Trivial
Error mapping discards original error context.
The map_err(|_| LightSdkError::ConstraintViolation) pattern is consistent with other error conversions in this file (lines 24, 29, 36), so this aligns well with the broader normalization effort. However, discarding the original error with |_| makes debugging harder if resize fails for an unexpected reason.
Consider whether a more specific error variant (e.g., ResizeFailed) would help distinguish resize issues from actual constraint violations, or at minimum, log/trace the original error before discarding it. That said, resize failures on closing accounts are rare in practice, so this is a minor concern.
Also applies to: 41-42
🤖 Prompt for AI Agents
In `@sdk-libs/sdk/src/compressible/close.rs` around lines 14 - 15, The call to
info.resize(0).map_err(|_| LightSdkError::ConstraintViolation) discards the
original error; change this to preserve context by either (A) mapping to a new
error variant like LightSdkError::ResizeFailed that wraps the original error
(e.g., map_err(|e| LightSdkError::ResizeFailed(e.into()))) and add that variant
to the LightSdkError enum, or (B) log/trace the original error before converting
(e.g., trace/error with the error) and then map to ConstraintViolation; apply
the same change to the second occurrence around lines 41-42 and update any error
construction sites or tests that reference LightSdkError accordingly.
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@sdk-libs/macros/src/compressible/instructions.rs`:
- Around line 1518-1534: The generated code wrongly casts the owned ctx.accounts
into a raw pointer; change the raw-pointer creation to take a mutable reference
first (use &mut ctx.accounts cast to *mut `#accounts_type`<#lt>) and similarly
ensure ctx.remaining_accounts is cast from a slice reference (e.g.
ctx.remaining_accounts as *const [solana_account_info::AccountInfo<#lt>]) rather
than casting an owned value; update the SAFETY comment next to
__accounts_ptr/__remaining_ptr to state that we obtain a &mut reference to
ctx.accounts before converting to a raw pointer and explain that the pointer
remains valid because the underlying stack-allocated T outlives the raw pointer,
and apply this same fix to the other occurrence around the second cast (the
block referenced at lines ~1567-1577).
In `@sdk-libs/sdk/src/compressible/close.rs`:
- Around line 14-15: Replace the deprecated AccountInfo::realloc calls with
AccountInfo::resize(0) in close.rs: find usages of info.realloc(0, false) (two
call sites) and change them to info.resize(0) while preserving the error mapping
(e.g., .map_err(|_| LightSdkError::ConstraintViolation)?); remove the now-unused
zero_init parameter since resize automatically zeroes new space.
♻️ Duplicate comments (1)
sdk-libs/macros/src/compressible/instructions.rs (1)
1535-1535: Hidden `Clone` bound on params type (still applies).
`#params_ident.clone()` forces a `Clone` bound and produces opaque macro errors for users who don't implement it. Please keep the explicit bound/assertion or documentation you intended here.
Also applies to: 1578-1578
| /// Check if a function body is a simple delegation pattern (e.g., `instructions::foo(ctx, params)`). | ||
| /// Returns true if the body is just delegating to another function. | ||
| fn is_delegation_body(block: &syn::Block) -> bool { | ||
| // Check if the block has a single statement/expression | ||
| if block.stmts.len() != 1 { | ||
| return false; | ||
| } | ||
|
|
||
| match &block.stmts[0] { | ||
| // Expression without semicolon at the end | ||
| syn::Stmt::Expr(expr, _) => is_delegation_call(expr), | ||
| _ => false, | ||
| } | ||
| } | ||
|
|
||
| fn is_delegation_call(expr: &syn::Expr) -> bool { | ||
| match expr { | ||
| // Direct function call: instructions::foo(ctx, params) | ||
| syn::Expr::Call(call) => { | ||
| // Check if any argument is named "ctx" (indicating delegation) | ||
| call.args.iter().any(|arg| { | ||
| if let syn::Expr::Path(path) = arg { | ||
| path.path | ||
| .segments | ||
| .last() | ||
| .is_some_and(|seg| seg.ident == "ctx") | ||
| } else { | ||
| false | ||
| } | ||
| }) | ||
| } | ||
| // Try expression: instructions::foo(ctx, params)? | ||
| syn::Expr::Try(try_expr) => is_delegation_call(&try_expr.expr), | ||
| _ => false, | ||
| } | ||
| } | ||
|
|
||
| /// Wrap a function with pre_init/finalize logic. | ||
| fn wrap_function_with_rentfree(fn_item: &syn::ItemFn, params_ident: &Ident) -> syn::ItemFn { | ||
| fn wrap_function_with_rentfree( | ||
| fn_item: &syn::ItemFn, | ||
| params_ident: &Ident, | ||
| accounts_type: &Ident, | ||
| params_type: &syn::Type, | ||
| ) -> syn::ItemFn { | ||
| let fn_vis = &fn_item.vis; | ||
| let fn_sig = &fn_item.sig; | ||
| let fn_block = &fn_item.block; | ||
| let fn_attrs = &fn_item.attrs; | ||
|
|
||
| let wrapped: syn::ItemFn = syn::parse_quote! { | ||
| #(#fn_attrs)* | ||
| #fn_vis #fn_sig { | ||
| // Phase 1: Pre-init (creates mints via CPI context write, registers compressed addresses) | ||
| use light_sdk::compressible::{LightPreInit, LightFinalize}; | ||
| let __has_pre_init = ctx.accounts.light_pre_init(ctx.remaining_accounts, &#params_ident) | ||
| .map_err(|e| { | ||
| // Skip wrapping for delegation patterns - they don't work with the wrapper | ||
| // because ctx gets moved to the delegated function. | ||
| if is_delegation_body(fn_block) { | ||
| // Return unchanged - user must handle compression manually in the delegated function | ||
| return fn_item.clone(); |
Delegation heuristic can silently skip required rentfree wrapping.
Any handler that’s just a single call with ctx (e.g., validate(ctx, params)?) will now be left unwrapped, dropping pre_init/finalize even if the callee doesn’t do it. At the same time, delegation patterns written with an explicit `return` or a trailing semicolon still won’t be detected. Consider making delegation an explicit opt-out (attribute) or narrowing the heuristic to known instruction delegates.
🔧 Proposed guard to make delegation explicit
+fn has_rentfree_delegate_attr(fn_item: &syn::ItemFn) -> bool {
+ fn_item
+ .attrs
+ .iter()
+ .any(|a| a.path().is_ident("rentfree_delegate"))
+}
...
- if is_delegation_body(fn_block) {
+ if has_rentfree_delegate_attr(fn_item) && is_delegation_body(fn_block) {
// Return unchanged - user must handle compression manually in the delegated function
return fn_item.clone();
    }
| // Phase 1: Pre-init (creates mints via CPI, registers compressed addresses) | ||
| // Note: ctx.accounts is &mut T, so we pass it directly (not &mut ctx.accounts) | ||
| let __has_pre_init = <#accounts_type<#lt> as LightPreInit<#lt, #params_type>>::light_pre_init( | ||
| ctx.accounts, | ||
| ctx.remaining_accounts, | ||
| &#params_ident | ||
| ).map_err(|e| { | ||
| let pe: solana_program_error::ProgramError = e.into(); | ||
| pe | ||
| })?; | ||
|
|
||
| // Execute the original handler body in a closure | ||
| let __light_handler_result = (|| #fn_block)(); | ||
| // Save raw pointers to accounts and remaining_accounts before ctx is potentially moved. | ||
| // SAFETY: ctx.accounts is &mut T (a mutable reference). The underlying T struct | ||
| // lives on the stack in the Anchor-generated code. When ctx is moved, only the | ||
| // reference is moved, not the T itself. The pointer remains valid. | ||
| let __accounts_ptr: *mut #accounts_type<#lt> = ctx.accounts as *mut #accounts_type<#lt>; | ||
| let __remaining_ptr: *const [solana_account_info::AccountInfo<#lt>] = ctx.remaining_accounts; |
🧩 Analysis chain
🏁 Script executed:
# First, let's locate and examine the file in question
find . -name "instructions.rs" -path "*/sdk-libs/macros/*" | head -5Repository: Lightprotocol/light-protocol
Length of output: 122
🏁 Script executed:
# Check the imports and Context definition in the target file
head -100 sdk-libs/macros/src/compressible/instructions.rs | grep -E "(use|Context|accounts)" || echo "Reading more..."Repository: Lightprotocol/light-protocol
Length of output: 217
🏁 Script executed:
# Search for Context type definitions in the codebase
rg -n "struct\s+Context<" --type=rs -A 5 | head -50
rg -n "type\s+Context\b" --type=rs | head -20Repository: Lightprotocol/light-protocol
Length of output: 125
🏁 Script executed:
# Check Anchor version and imports
rg -n "anchor-lang" Cargo.lock Cargo.tomlRepository: Lightprotocol/light-protocol
Length of output: 1375
🏁 Script executed:
# Look at the specific lines mentioned in the review
sed -n '1515,1540p' sdk-libs/macros/src/compressible/instructions.rs
sed -n '1564,1580p' sdk-libs/macros/src/compressible/instructions.rsRepository: Lightprotocol/light-protocol
Length of output: 2507
🏁 Script executed:
# Search for any custom Context struct definitions in the light-protocol repo
rg -n "pub\s+struct\s+Context" --type rust | head -20Repository: Lightprotocol/light-protocol
Length of output: 200
🏁 Script executed:
# Look at how Context is imported in the compressible module
rg -n "use.*Context" sdk-libs/macros/src/compressible/instructions.rs | head -10Repository: Lightprotocol/light-protocol
Length of output: 161
🏁 Script executed:
# Check Anchor's Context definition from the dependency
# In Anchor 0.31.1, Context should be defined in anchor-lang
# Let's verify what the standard definition looks like by searching documentation or examples
rg -n "pub struct Context" --type rust -A 8 | grep -A 8 "anchor" | head -30Repository: Lightprotocol/light-protocol
Length of output: 54
🏁 Script executed:
# Let's also check how ctx is actually used in real handler examples in this repo
rg -n "ctx\.accounts\s*:" --type rust | head -10
rg -n "fn\s+\w+\(.*ctx.*Context" --type rust | head -5Repository: Lightprotocol/light-protocol
Length of output: 730
🏁 Script executed:
# Check if there's anchor-lang source or docs available
# Let's look at the actual generated wrapper to understand how it handles the context
sed -n '1515,1580p' sdk-libs/macros/src/compressible/instructions.rs | cat -nRepository: Lightprotocol/light-protocol
Length of output: 3958
🏁 Script executed:
# Search for the actual Anchor Context import and see how it's used elsewhere
rg -n "anchor_lang::context::Context|anchor_lang::Context" --type rust | head -10Repository: Lightprotocol/light-protocol
Length of output: 54
🏁 Script executed:
# Let's check the light_pre_init function signature to understand what it expects
rg -n "trait LightPreInit|fn light_pre_init" --type rust -A 3 | head -30Repository: Lightprotocol/light-protocol
Length of output: 1457
🏁 Script executed:
# Check how the macro processes the handler to understand the flow better
sed -n '1400,1420p' sdk-libs/macros/src/compressible/instructions.rs | cat -nRepository: Lightprotocol/light-protocol
Length of output: 1512
🏁 Script executed:
# Check Anchor 0.31.1 documentation or source by looking at how Context is used
# Search for Context field definitions
rg -n "pub\s+accounts\s*:" --type rust | head -20Repository: Lightprotocol/light-protocol
Length of output: 961
🏁 Script executed:
# Let's look at actual handler examples to see how ctx is used
rg -n "ctx\.accounts\." --type rust | head -10Repository: Lightprotocol/light-protocol
Length of output: 1203
🏁 Script executed:
# Check the Anchor documentation or look at the light_pre_init trait implementation
# to understand if it takes &mut self or ownership
sed -n '26,50p' sdk-libs/sdk/src/compressible/finalize.rsRepository: Lightprotocol/light-protocol
Length of output: 1070
🏁 Script executed:
# Look at the full LightPreInit trait definition
rg -n "pub trait LightPreInit" --type rust -A 20 | head -40Repository: Lightprotocol/light-protocol
Length of output: 1872
🏁 Script executed:
# Try to find where Context comes from in Anchor
rg -n "^use.*anchor_lang" sdk-libs/macros/src/compressible/instructions.rs | head -5Repository: Lightprotocol/light-protocol
Length of output: 54
🏁 Script executed:
# Let me check the actual Anchor Context definition by looking at its usage pattern
# In Anchor, Context<T> has accounts: T (owned), not &mut T
# Let's verify by checking how handlers typically access ctx.accounts
rg -n "Context<.*>" programs/account-compression/src/lib.rs -A 5 | head -20Repository: Lightprotocol/light-protocol
Length of output: 923
🏁 Script executed:
# Check if there's any custom Context wrapper in this codebase
rg -n "impl.*Context" --type rust | head -10Repository: Lightprotocol/light-protocol
Length of output: 1032
🏁 Script executed:
# Let's look at the actual macro output more carefully - check how it uses ctx.accounts
# in the handler body to understand the implications
sed -n '1500,1545p' sdk-libs/macros/src/compressible/instructions.rs | cat -nRepository: Lightprotocol/light-protocol
Length of output: 2898
🏁 Script executed:
# Verify the Anchor Context from documentation or source
# Standard Anchor Context has: pub struct Context<'info, T, U, V> { pub accounts: T, ... }
# This means accounts is OWNED, not &mut T
# Let's see if there's any evidence in the codebase
rg -n "pub.*accounts\s*:\s*&" --type rust | grep -i context | head -5Repository: Lightprotocol/light-protocol
Length of output: 740
🏁 Script executed:
# This is critical - let's check if this code actually compiles
# by looking at build results or test files that use this macro
# First, find test files that use the compressible macro
rg -n "#\[light_compressible\]" --type rust | head -10Repository: Lightprotocol/light-protocol
Length of output: 54
🏁 Script executed:
# Let's look at one of these test files to see if the generated code compiles
find . -name "*.rs" -path "*/sdk-tests/*" | xargs grep -l "light_compressible" | head -3Repository: Lightprotocol/light-protocol
Length of output: 268
🏁 Script executed:
# Check the comment more carefully - it says ctx.accounts is &mut T
# But in Anchor 0.31.1, Context has accounts: T (owned)
# Let's verify by checking the actual cast line and how Rust handles it
sed -n '1524,1530p' sdk-libs/macros/src/compressible/instructions.rsRepository: Lightprotocol/light-protocol
Length of output: 425
🏁 Script executed:
# The key issue: can you cast ctx.accounts (T) to *mut T directly?
# In Rust, you need a reference. Let's search for how this is actually used
# to understand if there's a deref coercion happening
rg -n "ctx\.accounts\s+as\s+\*mut" sdk-libs/macros/src/compressible/instructions.rsRepository: Lightprotocol/light-protocol
Length of output: 289
🏁 Script executed:
# Let's examine the test file to see if this macro usage compiles
head -150 ./sdk-tests/csdk-anchor-full-derived-test/src/instruction_accounts.rs | tail -100Repository: Lightprotocol/light-protocol
Length of output: 2859
🏁 Script executed:
# Let's check if there's a Deref implementation that would make this work
rg -n "impl.*Deref.*for.*Context" --type rust -A 5 | head -20Repository: Lightprotocol/light-protocol
Length of output: 684
🏁 Script executed:
# More importantly, let's verify if Anchor's actual Context implementation
# Let's check if the test files have any build errors or if they pass
ls -la sdk-tests/csdk-anchor-full-derived-test/ | head -20Repository: Lightprotocol/light-protocol
Length of output: 542
🏁 Script executed:
# Let's trace through what ctx.accounts actually is
# by looking at the function parameter type in the test
rg -n "light_compressible" ./sdk-tests/csdk-anchor-full-derived-test/src/instruction_accounts.rs -B 5 -A 10 | head -30Repository: Lightprotocol/light-protocol
Length of output: 442
🏁 Script executed:
# Check if there are any casts or deref operations happening before the pointer cast
# Let's look at how ctx is used in the handler by examining macro-generated code patterns
rg -n "ctx\.accounts" sdk-libs/macros/src/compressible/instructions.rs | head -20Repository: Lightprotocol/light-protocol
Length of output: 1560
🏁 Script executed:
# Let me check the Anchor version and see if there's a custom Context type
# by examining actual usage in real programs
cat sdk-tests/csdk-anchor-full-derived-test/Cargo.toml | grep -A 5 anchor-langRepository: Lightprotocol/light-protocol
Length of output: 838
🏁 Script executed:
# Final check - look at what the handler signature looks like
# to understand ctx binding
rg -n "#\[light_compressible\]" ./sdk-tests/csdk-anchor-full-derived-test/src/instruction_accounts.rs -A 8 | head -20Repository: Lightprotocol/light-protocol
Length of output: 54
🏁 Script executed:
# Let's find the actual handler function that uses light_compressible
rg -n "light_compressible" ./sdk-tests/csdk-anchor-full-derived-test/src/ -A 15 | head -50Repository: Lightprotocol/light-protocol
Length of output: 1694
🏁 Script executed:
# Check if there's a handler implementation
find ./sdk-tests/csdk-anchor-full-derived-test/src -name "*.rs" | xargs grep -l "light_handler\|pub fn " | head -5Repository: Lightprotocol/light-protocol
Length of output: 124
🏁 Script executed:
# Let's see the full instruction file
cat ./sdk-tests/csdk-anchor-full-derived-test/src/lib.rs | head -100Repository: Lightprotocol/light-protocol
Length of output: 3483
🏁 Script executed:
# Most importantly, let's see if there are compilation errors in the test output
# by checking if the project builds
ls -la sdk-tests/csdk-anchor-full-derived-test/target/ 2>/dev/null | head -20 || echo "No target directory"Repository: Lightprotocol/light-protocol
Length of output: 54
🏁 Script executed:
# Let's look at Anchor's actual Context definition for 0.31.1
# by checking how it's used in actual handler signatures
rg -n "pub fn.*ctx: Context" programs/ -A 5 | head -40Repository: Lightprotocol/light-protocol
Length of output: 2997
🏁 Script executed:
# Critical: Let's see how the cast actually works
# by checking if there are any references involved
sed -n '1518,1540p' sdk-libs/macros/src/compressible/instructions.rs | cat -nRepository: Lightprotocol/light-protocol
Length of output: 1712
🏁 Script executed:
# Let's check if there's a custom Context type in light-sdk
rg -n "pub struct Context" sdk-libs/sdk/ --type rust -A 5 | head -20Repository: Lightprotocol/light-protocol
Length of output: 54
🏁 Script executed:
# Most definitive check: look at the Anchor version used and verify Context definition
# by examining the exact generic parameters
rg -n "Context<" programs/account-compression/src/lib.rs | head -5Repository: Lightprotocol/light-protocol
Length of output: 472
🏁 Script executed:
# Let's verify this is a real compilation issue by checking if the code compiles
# Look at whether there's a custom Deref impl for Context or the accounts field
rg -n "impl.*Deref" anchor-lang src/ 2>/dev/null || \
rg -n "impl.*Deref" . --type rust | grep -i "context\|accounts" | head -10Repository: Lightprotocol/light-protocol
Length of output: 206
🏁 Script executed:
# Check if Lightprotocol's custom Anchor fork modifies Context definition
rg -n "pub struct Context" . --type rust -B 2 -A 8 | head -40Repository: Lightprotocol/light-protocol
Length of output: 1551
🏁 Script executed:
# Most importantly: verify the actual type by looking at real code that works
# If ctx.accounts can be used with &mut operator, it's NOT a reference
rg -n "let.*=.*&mut.*ctx\.accounts" . --type rust | head -10Repository: Lightprotocol/light-protocol
Length of output: 299
🏁 Script executed:
# Final check: Does the generated code actually compile?
# by looking for any test build outputs or compilation errors
rg -n "error\[" sdk-tests/csdk-anchor-full-derived-test/ 2>/dev/null | head -5 || echo "No error patterns found in test output"Repository: Lightprotocol/light-protocol
Length of output: 54
🏁 Script executed:
# Check if there's documentation about why this pattern works
# by searching for comments about the cast or type conversion
rg -n "as \*mut" sdk-libs/macros/src/compressible/instructions.rs -B 3 -A 3Repository: Lightprotocol/light-protocol
Length of output: 1043
The generated code contains a type casting error: ctx.accounts as *mut T cannot cast an owned value to a raw pointer.
In Anchor 0.31.1, Context<T>.accounts is of type T (owned), not &mut T. The code attempts to cast this owned value directly to *mut T, which Rust's type system forbids—only mutable references (&mut T) can be cast to raw pointers.
The SAFETY comment claiming "ctx.accounts is &mut T (a mutable reference)" is factually incorrect. Handler code in this codebase demonstrates ctx.accounts is owned by using &mut ctx.accounts.field, which requires the &mut operator.
This generates non-compiling code. Correct the cast to &mut ctx.accounts as *mut T, or verify that handlers using this macro actually compile and document why the cast succeeds despite the type mismatch.
Also applies to: 1567-1577
🤖 Prompt for AI Agents
In `@sdk-libs/macros/src/compressible/instructions.rs` around lines 1518 - 1534,
The generated code wrongly casts the owned ctx.accounts into a raw pointer;
change the raw-pointer creation to take a mutable reference first (use &mut
ctx.accounts cast to *mut `#accounts_type`<#lt>) and similarly ensure
ctx.remaining_accounts is cast from a slice reference (e.g.
ctx.remaining_accounts as *const [solana_account_info::AccountInfo<#lt>]) rather
than casting an owned value; update the SAFETY comment next to
__accounts_ptr/__remaining_ptr to state that we obtain a &mut reference to
ctx.accounts before converting to a raw pointer and explain that the pointer
remains valid because the underlying stack-allocated T outlives the raw pointer,
and apply this same fix to the other occurrence around the second cast (the
block referenced at lines ~1567-1577).
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@sdk-libs/macros/src/compressible/instructions.rs`:
- Around line 1447-1501: The current early-return in wrap_function_with_rentfree
uses the unsafe is_delegation_body/is_delegation_call heuristic to skip
wrapping; change this so the wrapper only skips when the function is explicitly
annotated with #[rentfree_delegate]. Concretely, remove the unconditional early
return based on is_delegation_body, and instead detect the attribute on
fn_item.attrs (search for an attribute path ident "rentfree_delegate") and only
if present (true) then call is_delegation_body to validate and return the
original fn_item.clone(); otherwise always proceed with wrapping (and update the
comment to state delegation is only honored when #[rentfree_delegate] is
present). Ensure references: wrap_function_with_rentfree, is_delegation_body,
is_delegation_call, and the new attribute name rentfree_delegate are used.
♻️ Duplicate comments (1)
sdk-libs/macros/src/compressible/instructions.rs (1)
1529-1555: Hidden `Clone` requirement on params persists. Line 1535 and Line 1578 still call `.clone()` on the params, which injects a hidden `Clone` bound and yields cryptic errors for users. Make the requirement explicit (or document it) so failures are actionable.
🔧 Make the `Clone` requirement explicit (apply in both branches)
+ // Explicitly require Clone for clearer errors.
+ fn __assert_clone<T: Clone>(_: &T) {}
+ __assert_clone(&#params_ident);
  let __params_clone: #params_type = #params_ident.clone();
Please confirm whether `Clone` is intended; if so, add an explicit check or doc note.
Also applies to: 1576-1595
```rust
/// Check if a function body is a simple delegation pattern (e.g., `instructions::foo(ctx, params)`).
/// Returns true if the body is just delegating to another function.
fn is_delegation_body(block: &syn::Block) -> bool {
    // Check if the block has a single statement/expression
    if block.stmts.len() != 1 {
        return false;
    }

    match &block.stmts[0] {
        // Expression without semicolon at the end
        syn::Stmt::Expr(expr, _) => is_delegation_call(expr),
        _ => false,
    }
}

fn is_delegation_call(expr: &syn::Expr) -> bool {
    match expr {
        // Direct function call: instructions::foo(ctx, params)
        syn::Expr::Call(call) => {
            // Check if any argument is named "ctx" (indicating delegation)
            call.args.iter().any(|arg| {
                if let syn::Expr::Path(path) = arg {
                    path.path
                        .segments
                        .last()
                        .is_some_and(|seg| seg.ident == "ctx")
                } else {
                    false
                }
            })
        }
        // Try expression: instructions::foo(ctx, params)?
        syn::Expr::Try(try_expr) => is_delegation_call(&try_expr.expr),
        _ => false,
    }
}

/// Wrap a function with pre_init/finalize logic.
fn wrap_function_with_rentfree(
    fn_item: &syn::ItemFn,
    params_ident: &Ident,
    accounts_type: &Ident,
    params_type: &syn::Type,
) -> syn::ItemFn {
    let fn_vis = &fn_item.vis;
    let fn_sig = &fn_item.sig;
    let fn_block = &fn_item.block;
    let fn_attrs = &fn_item.attrs;

    // Skip wrapping for delegation patterns - they don't work with the wrapper
    // because ctx gets moved to the delegated function.
    if is_delegation_body(fn_block) {
        // Return unchanged - user must handle compression manually in the delegated function
        return fn_item.clone();
    }

    let wrapped: syn::ItemFn = syn::parse_quote! {
        #(#fn_attrs)*
        #fn_vis #fn_sig {
            // Phase 1: Pre-init (creates mints via CPI context write, registers compressed addresses)
            use light_sdk::compressible::{LightPreInit, LightFinalize};
            let __has_pre_init = ctx.accounts.light_pre_init(ctx.remaining_accounts, &#params_ident)
                .map_err(|e| {
```
🧩 Analysis chain
🏁 Script executed:
# First, let's find and examine the instructions.rs file
find . -name "instructions.rs" -path "*/compressible/*" | head -20

Repository: Lightprotocol/light-protocol
Length of output: 122
🏁 Script executed:
# Get the size of the file to understand how much we need to read
wc -l ./sdk-libs/macros/src/compressible/instructions.rs
Length of output: 127
🏁 Script executed:
# Check the context around lines 1447-1501 and look for the complete function
sed -n '1420,1550p' ./sdk-libs/macros/src/compressible/instructions.rs
Length of output: 5596
🏁 Script executed:
# Search for any existing #[rentfree_delegate] or similar attributes in the file
rg "rentfree_delegate|skip_wrap|opt.?out" ./sdk-libs/macros/src/compressible/
Length of output: 54
🏁 Script executed:
# Look for tests that exercise delegation patterns
find ./sdk-libs/macros -name "*.rs" -type f -exec grep -l "is_delegation" {} \;
Length of output: 122
🏁 Script executed:
# Check for CLAUDE.md or similar documentation
find . -name "CLAUDE.md" -o -name "claude.md" | head -5
Length of output: 256
🏁 Script executed:
# Look for documentation about compression flows and rentfree requirements
find ./sdk-libs/macros -name "README.md" -o -name "*.md" | xargs grep -l "pre_init\|finalize" 2>/dev/null
Length of output: 56
🏁 Script executed:
# Read the main CLAUDE.md to understand design principles
head -100 ./CLAUDE.md
Length of output: 5665
🏁 Script executed:
# Read compressible CLAUDE.md for specific requirements
cat ./program-libs/compressible/CLAUDE.md
Length of output: 2216
🏁 Script executed:
# Search for actual delegating handler calls to understand usage
rg "instructions::\w+\(ctx" ./sdk-libs/macros/src/compressible/ -A 2
Length of output: 1085
🏁 Script executed:
# Look for test cases or examples showing delegation patterns
rg "is_delegation|wrap_function" ./sdk-libs/macros/src/compressible/ -B 3 -A 5
Length of output: 4973
🏁 Script executed:
# Check if there are any usages of delegating patterns in handlers
rg -t rs "fn.*\(.*ctx.*\)" ./sdk-libs/macros/src/compressible/instructions.rs | head -20
Length of output: 98
🏁 Script executed:
# Look at the compress and decompress context implementations mentioned in learnings
find ./sdk-libs/macros -name "*compress*context*" -type f
Length of output: 128
🏁 Script executed:
# Check the decompress_context.rs implementation
cat ./sdk-libs/macros/src/compressible/decompress_context.rs | head -150
Length of output: 7044
🏁 Script executed:
# Look for test cases or examples of the rentfree macro
find ./sdk-libs/macros -name "*.rs" | xargs grep -l "#\[rentfree\]" | head -10
Length of output: 293
🏁 Script executed:
# Check if there are actual test cases showing delegation behavior
find ./sdk-libs/macros -name "tests.rs" -o -name "*test*.rs" | xargs grep -l "delegation\|is_delegation" 2>/dev/null
Length of output: 56
🏁 Script executed:
# Look at the README in compressible macros
cat ./sdk-libs/macros/src/compressible/README.md
Length of output: 1517
🏁 Script executed:
# Search for any documentation about the delegation skip behavior
rg "delegation|skip.*wrap|manual.*compress" ./sdk-libs/macros/src/compressible/ -B 2 -A 2
Length of output: 3315
Gate delegation skip behind explicit #[rentfree_delegate] attribute.
The delegation heuristic is unsafe: is_delegation_call treats any single-call body passing a ctx argument as delegation, even when the callee is a helper function, not another rentfree handler. This silently skips light_pre_init/light_finalize wrappers and breaks compression flows.
Per learnings and the #[rentfree] macro design, all wrapped handlers must execute pre-init/finalize hooks. Require developers to opt in explicitly to skip wrapping:
```diff
+fn has_rentfree_delegate_attr(attrs: &[syn::Attribute]) -> bool {
+    attrs.iter().any(|attr| attr.path().is_ident("rentfree_delegate"))
+}
+
 // Skip wrapping for delegation patterns - they don't work with the wrapper
 // because ctx gets moved to the delegated function.
-if is_delegation_body(fn_block) {
+if has_rentfree_delegate_attr(fn_attrs) && is_delegation_body(fn_block) {
     // Return unchanged - user must handle compression manually in the delegated function
     return fn_item.clone();
 }
```

This preserves the delegation optimization for explicit cases while keeping default behavior safe.
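Stripped of `syn`, the proposed gating reduces to a conjunction of an explicit opt-in and the existing heuristic. A minimal sketch, with attributes modeled as plain strings (a hypothetical simplification; the real check inspects `syn::Attribute` paths):

```rust
// Attributes modeled as plain strings instead of syn::Attribute.
fn has_rentfree_delegate_attr(attrs: &[&str]) -> bool {
    attrs.iter().any(|a| *a == "rentfree_delegate")
}

// Wrapping is skipped only when the author opted in explicitly AND the
// body actually looks like a delegation call; the heuristic alone is
// no longer sufficient.
fn should_skip_wrapping(attrs: &[&str], is_delegation_body: bool) -> bool {
    has_rentfree_delegate_attr(attrs) && is_delegation_body
}
```

With this shape, a helper call that merely happens to pass `ctx` no longer disables the pre-init/finalize hooks by accident.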
🤖 Prompt for AI Agents
In `@sdk-libs/macros/src/compressible/instructions.rs` around lines 1447 - 1501,
The current early-return in wrap_function_with_rentfree uses the unsafe
is_delegation_body/is_delegation_call heuristic to skip wrapping; change this so
the wrapper only skips when the function is explicitly annotated with
#[rentfree_delegate]. Concretely, remove the unconditional early return based on
is_delegation_body, and instead detect the attribute on fn_item.attrs (search
for an attribute path ident "rentfree_delegate"); only if it is present, call
is_delegation_body to validate, then return the original fn_item.clone();
otherwise always proceed with wrapping (and update the comment to state
delegation is only honored when #[rentfree_delegate] is present). Ensure these
references are used: wrap_function_with_rentfree, is_delegation_body,
is_delegation_call, and the new attribute name rentfree_delegate.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@sdk-libs/compressible-client/src/load_accounts.rs`:
- Around line 67-69: Fix the typo in the doc comment in load_accounts.rs: change
the phrase "Exists fast if all accounts are hot." to "Exits fast if all accounts
are hot." in the documentation above the function that builds load instructions
for cold accounts (the doc block describing building load instructions for cold
accounts).
- Around line 432-435: The code directly indexes proof.accounts[0], which can
panic if the accounts vec is empty; change this to safely handle the empty case
by using proof.accounts.get(0) or check proof.accounts.is_empty() and
return/propagate a clear error instead of panicking, then unwrap the returned
Option into account_info before deriving state_tree and input_queue (i.e.,
adjust the bindings for account_info, state_tree, and input_queue to work with
the safe lookup and early-return error path).
♻️ Duplicate comments (7)
sdk-libs/compressible-client/src/account_interface_ext.rs (3)
137-145: Missing accounts should not be reported as “hot.”

Returning a fabricated hot account with zero lamports can mislead callers into assuming the account exists on-chain. Prefer an explicit “None/missing” variant or clearly document this behavior.
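The explicit variant the comment asks for can be sketched with invented names (`AccountStatus` and `classify` are illustrative; the crate's actual types differ):

```rust
#[derive(Debug, PartialEq)]
enum AccountStatus {
    /// Account exists on-chain with real lamports.
    Hot { lamports: u64 },
    /// Account exists only as compressed state.
    Cold,
    /// Account found neither on-chain nor in the indexer.
    Missing,
}

// Classify from what the RPC and indexer actually returned, so a missing
// account can never masquerade as a zero-lamport hot one.
fn classify(onchain_lamports: Option<u64>, has_compressed: bool) -> AccountStatus {
    match (onchain_lamports, has_compressed) {
        (Some(lamports), _) => AccountStatus::Hot { lamports },
        (None, true) => AccountStatus::Cold,
        (None, false) => AccountStatus::Missing,
    }
}
```

Callers then match on all three cases instead of inspecting lamports to guess existence.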
160-175: get_compressed_token_accounts_by_owner is called with a token account address.

The API expects an owner pubkey; passing the token account address will usually miss the intended compressed account, and TokenAccountInterface::cold is built with owner = *address. Consider changing the signature to accept (owner, mint) or parsing owner/mint first (on-chain or from compressed data), or using an address-based indexer method if available.
200-217: Query compressed ATAs by the actual owner, not the derived ATA address.
get_compressed_token_accounts_by_owner expects the wallet owner. Passing the derived ATA PDA will not match compressed token accounts.

🔧 Proposed fix

```diff
-        let result = self
-            .get_compressed_token_accounts_by_owner(&ata, options, None)
+        let result = self
+            .get_compressed_token_accounts_by_owner(owner, options, None)
             .await
             .map_err(indexer_err)?;
```

sdk-libs/compressible-client/src/load_accounts.rs (4)
204-221: Sequential mint proof fetching is a known performance limitation.

The TODO on line 141 already acknowledges this. For now, each mint requires its own proof due to indexer constraints. When a batched proof RPC endpoint becomes available, this can be optimized with futures::future::try_join_all.
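Until a batched proof endpoint exists, the shape of that optimization can be sketched with std scoped threads standing in for `futures::future::try_join_all`; `fetch_proof` here is a hypothetical stand-in for the per-mint RPC round-trip:

```rust
use std::thread;

// Hypothetical stand-in for a per-mint proof fetch; in the real client
// each call is an async RPC round-trip.
fn fetch_proof(mint_id: u32) -> Result<String, String> {
    Ok(format!("proof-for-{mint_id}"))
}

// Fetch all proofs concurrently instead of one after another.
fn fetch_all_proofs(mints: &[u32]) -> Result<Vec<String>, String> {
    thread::scope(|s| {
        // Spawn one fetch per mint so the round-trips overlap.
        let handles: Vec<_> = mints
            .iter()
            .map(|&m| s.spawn(move || fetch_proof(m)))
            .collect();
        // Join in order; the first Err aborts the collect.
        handles
            .into_iter()
            .map(|h| h.join().expect("fetch thread panicked"))
            .collect()
    })
}
```

The fail-fast collect mirrors `try_join_all` semantics: any single failed fetch surfaces as the overall error.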
261-273: Avoid panic in public library function.

While the caller (create_load_accounts_instructions) validates contexts exist, this function is pub and could be called directly. The expect() on line 268 will panic if decompression_context is None.

🔧 Suggested fix

```diff
-        let compressed_accounts: Vec<_> = accounts
-            .iter()
-            .map(|a| {
-                let compressed_account = a
-                    .account_interface
-                    .decompression_context
-                    .as_ref()
-                    .expect("Cold account must have decompression context")
-                    .compressed_account
-                    .clone();
-                (compressed_account, a.variant.clone())
-            })
-            .collect();
+        let mut compressed_accounts = Vec::with_capacity(accounts.len());
+        for a in accounts {
+            let compressed_account = a
+                .account_interface
+                .decompression_context
+                .as_ref()
+                .ok_or_else(|| {
+                    LoadAccountsError::MissingPdaDecompressionContext {
+                        index: compressed_accounts.len(),
+                        pubkey: a.account_interface.pubkey,
+                    }
+                })?
+                .compressed_account
+                .clone();
+            compressed_accounts.push((compressed_account, a.variant.clone()));
+        }
```
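The core of the fix, converting an `Option` into a typed error instead of panicking, can be shown in isolation with illustrative types (`Account`, `LoadError`, and `collect_contexts` are invented for the sketch):

```rust
#[derive(Debug, PartialEq)]
enum LoadError {
    MissingContext { index: usize },
}

struct Account {
    context: Option<String>,
}

// Propagate a typed error instead of panicking when a context is absent.
fn collect_contexts(accounts: &[Account]) -> Result<Vec<String>, LoadError> {
    let mut out = Vec::with_capacity(accounts.len());
    for (i, a) in accounts.iter().enumerate() {
        let ctx = a
            .context
            .as_ref()
            .ok_or(LoadError::MissingContext { index: i })?;
        out.push(ctx.clone());
    }
    Ok(out)
}
```

A direct caller now gets a recoverable `Err` naming the offending index, rather than an `expect()` panic with no actionable context.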
294-297: filter_map silently drops cold ATAs missing load_context.

This can cause incomplete decompression without any error. For cold accounts, a missing load_context should be an explicit failure, not a silent skip.

🔧 Suggested fix

```diff
-        let contexts: SmallVec<[&TokenLoadContext; 8]> = accounts
-            .iter()
-            .filter_map(|a| a.load_context.as_ref())
-            .collect();
+        let contexts: SmallVec<[&TokenLoadContext; 8]> = accounts
+            .iter()
+            .enumerate()
+            .map(|(i, a)| {
+                a.load_context.as_ref().ok_or_else(|| {
+                    LoadAccountsError::MissingAtaLoadContext {
+                        index: i,
+                        pubkey: a.pubkey,
+                    }
+                })
+            })
+            .collect::<Result<_, _>>()?;
```
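The behavioral difference between `filter_map` and collecting into a `Result` can be demonstrated with plain `Option`s (illustrative, not the crate's types):

```rust
// filter_map: a None is silently dropped; the caller never learns.
fn lenient(items: &[Option<u32>]) -> Vec<u32> {
    items.iter().filter_map(|x| *x).collect()
}

// Collecting into Result: the first None aborts with an error
// carrying its index, so nothing is skipped silently.
fn strict(items: &[Option<u32>]) -> Result<Vec<u32>, usize> {
    items
        .iter()
        .enumerate()
        .map(|(i, x)| (*x).ok_or(i))
        .collect()
}
```

This uses the standard `FromIterator` impl for `Result`, which short-circuits on the first `Err`, exactly the fail-fast behavior the review asks for.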
336-338: Guard against index out-of-bounds panic.

Direct indexing tree_infos.packed_tree_infos[i] will panic if the proof contains fewer tree infos than contexts. Use bounds-checked access:

🔧 Suggested fix

```diff
-        let tree_info = &tree_infos.packed_tree_infos[i];
+        let tree_info = tree_infos
+            .packed_tree_infos
+            .get(i)
+            .ok_or_else(|| {
+                LoadAccountsError::BuildInstruction(format!(
+                    "Tree info index {} out of bounds (len {})",
+                    i,
+                    tree_infos.packed_tree_infos.len()
+                ))
+            })?;
```
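Both bounds-checked patterns in this review come down to `slice::get`/`slice::first` plus `ok_or_else`; a std-only sketch with invented error strings:

```rust
// Bounds-checked access at an arbitrary index.
fn tree_info_at(infos: &[u8], i: usize) -> Result<u8, String> {
    infos
        .get(i)
        .copied()
        .ok_or_else(|| format!("Tree info index {} out of bounds (len {})", i, infos.len()))
}

// Bounds-checked access to the first element.
fn first_account(accounts: &[u8]) -> Result<u8, String> {
    accounts
        .first()
        .copied()
        .ok_or_else(|| "proof contains no accounts".to_string())
}
```

Unlike `infos[i]` or `accounts[0]`, neither function can panic; an unexpected indexer response surfaces as an `Err` the caller can report.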
```rust
/// Build load instructions for cold accounts.
/// Exists fast if all accounts are hot.
/// Else, fetches proofs, returns instructions.
```
Typo in doc comment.
"Exists fast" should be "Exits fast" on line 68.
```diff
 /// Build load instructions for cold accounts.
-/// Exists fast if all accounts are hot.
+/// Exits fast if all accounts are hot.
 /// Else, fetches proofs, returns instructions.
```

🤖 Prompt for AI Agents
In `@sdk-libs/compressible-client/src/load_accounts.rs` around lines 67 - 69, Fix
the typo in the doc comment in load_accounts.rs: change the phrase "Exists fast
if all accounts are hot." to "Exits fast if all accounts are hot." in the
documentation above the function that builds load instructions for cold accounts
(the doc block describing building load instructions for cold accounts).
```rust
// get tree info
let account_info = &proof.accounts[0];
let state_tree = account_info.tree_info.tree;
let input_queue = account_info.tree_info.queue;
```
Guard against empty proof accounts.
Direct indexing proof.accounts[0] will panic if the proof contains no accounts. While a valid mint proof should always have at least one account, defensive coding prevents cryptic panics:
🔧 Suggested fix

```diff
-    let account_info = &proof.accounts[0];
+    let account_info = proof.accounts.first().ok_or_else(|| {
+        LoadAccountsError::BuildInstruction("Mint proof contains no accounts".into())
+    })?;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
// get tree info
let account_info = proof.accounts.first().ok_or_else(|| {
    LoadAccountsError::BuildInstruction("Mint proof contains no accounts".into())
})?;
let state_tree = account_info.tree_info.tree;
let input_queue = account_info.tree_info.queue;
```
🤖 Prompt for AI Agents
In `@sdk-libs/compressible-client/src/load_accounts.rs` around lines 432 - 435,
The code directly indexes proof.accounts[0], which can panic if the accounts vec
is empty; change this to safely handle the empty case by using
proof.accounts.get(0) or check proof.accounts.is_empty() and return/propagate a
clear error instead of panicking, then unwrap the returned Option into
account_info before deriving state_tree and input_queue (i.e., adjust the
bindings for account_info, state_tree, and input_queue to work with the safe
lookup and early-return error path).
cbd2467 to 158d68a
- wip add mint support to sdk and macros
- sync tests, fix program-test compressioninfo parsing
- refactor macros
- update test flow
- wip
- wip - mix
- wip - force merge
- move to preinit
- wip
- wip: separate decomp stage
- separate ata and cmint handling in decompression
- wip - try atomic decomp
- wio
- wip create_pdas_and_mint_auto
- ref test_create_pdas_and_mint_auto: compress cmint
- feat(program-test): implement CMint auto-compression in warp_slot_forward
  - Add compress_cmint_forester() to handle CMint compression via mint_action
  - Track ACCOUNT_TYPE_MINT accounts in claim_and_compress
  - Key fix: pass mint: None to tell on-chain to read from CMint Solana account
  - Update test to rely on auto-compression instead of explicit compression
  - Auto-compress coverage now includes: CToken, Program PDAs, and CMint
- wip - autocompress
- fix address derive path
- fix address derivation for cpdas
- cleanup basic_test.rs
- cleanup macro
- wip works
- cargo test-sbf
- attempt to clean up ov
- works: decompress_accounts_idempotent
- stage: before macro refactor
- specs for macro refactor 1
- update refactor spec
- ph1
- ph2
- wip3 - before ctokenseedprovider decompress refactor
- macros owrks
- killed compressible macro
- cleanup
- rm non derived
- cleanup, ctoken cpi clean
- resolve mc
- fix mc
- wip fixing
- fix 2
- resolve more mcs
- fix
- fix - fix
- fix
- fix
- fix
- rm docstring
- clean, fmt, lint
- fix build
- clean up and fix interface getters
- clean load
158d68a to ee4baf8
