
chore: refactor macros #2171

Merged
SwenSchaeferjohann merged 9 commits into main from swen/clean-decompress-base
Jan 16, 2026
Conversation

SwenSchaeferjohann (Contributor) commented Jan 16, 2026

Summary by CodeRabbit

Release Notes

  • New Features

    • Added token account decompression with support for batch operations and various account types.
    • Added mint decompression functionality with CPI context support for multi-operation transactions.
    • Added helpers for creating and initializing compressed accounts and tokens.
    • Enhanced associated token account management and creation workflows.
  • Improvements

    • Reorganized SDK structure and APIs for better usability and consistency.


coderabbitai bot (Contributor) commented Jan 16, 2026

📝 Walkthrough

This PR comprehensively refactors the compression/decompression infrastructure: it replaces CTOKEN_PROGRAM_ID with LIGHT_TOKEN_PROGRAM_ID, introduces a new public struct (CreateAccountsProof), and adds extensive client-side compression operations through new modules (create_accounts_proof, decompress_atas, decompress_mint, initialize_config, pack). It also includes significant macro overhauls for rentfree account handling, a simplified TokenSeedProvider trait, new CPI-context-aware builders for token operations, and expanded test utilities.

Changes

Cohort (Files): Summary

  • Documentation & Type Renames (ctoken_for_payments.md, programs/compressed-token/program/docs/ACCOUNTS.md, programs/compressed-token/program/docs/ctoken/CREATE.md): Program ID constant replacements (CTOKEN_PROGRAM_ID → LIGHT_TOKEN_PROGRAM_ID), import path updates in code examples, and documentation formatting/organization improvements without functional logic changes.
  • Core Struct & Manifest Additions (program-libs/compressible/src/lib.rs, program-libs/compressible/Cargo.toml): New public CreateAccountsProof struct with ValidityProof and PackedAddressTreeInfo fields; workspace dependency on light-sdk-types added.
  • Module Path Updates (programs/compressed-token/program/tests/compress_and_close.rs, rebase.md): Public API path refactoring from ctoken-qualified to lighter module paths; detailed rebase resolution notes on package naming consolidation and Phase 8 trait simplifications.
  • New Compressible Client Modules (sdk-libs/compressible-client/src/create_accounts_proof.rs, sdk-libs/compressible-client/src/decompress_atas.rs, sdk-libs/compressible-client/src/decompress_mint.rs, sdk-libs/compressible-client/src/initialize_config.rs, sdk-libs/compressible-client/src/pack.rs): Five substantial new public modules (200–800+ lines each) providing client-side helpers for validity proof assembly, ATA/mint decompression, CPI initialization, and proof packing; introduces error types, context structures, and async/sync APIs.
  • Compressible Client Manifest & Integration (sdk-libs/compressible-client/Cargo.toml, sdk-libs/compressible-client/src/lib.rs): New anchor feature, workspace dependencies (solana-program-error, spl-token-2022, light-token-sdk, light-token-interface, light-compressed-account), and substantial re-export expansion for new modules and account abstractions.
  • Macro Removal, Legacy Accounts (sdk-libs/macros/src/accounts.rs, sdk-libs/macros/src/traits.rs, sdk-libs/macros/src/program.rs): Complete removal of legacy light_accounts, light_traits, and light_program procedural macro support (~1300 lines deleted).
  • Macro Removal, GUIDE & Restructuring (sdk-libs/macros/src/compressible/GUIDE.md, sdk-libs/macros/src/compressible/README.md): Deletion of legacy GUIDE documentation; public API renames (CompressedAccountVariant → RentFreeAccountVariant, add_compressible_instructions → rentfree attribute macro).
  • New Compressible Macro Modules (sdk-libs/macros/src/compressible/anchor_seeds.rs, sdk-libs/macros/src/compressible/file_scanner.rs, sdk-libs/macros/src/compressible/light_compressible.rs): Three new complex macro modules (~250–650 lines each) enabling compile-time seed extraction, crate scanning, and consolidated derive macro generation.
  • Compressible Macro Core Updates (sdk-libs/macros/src/compressible/decompress_context.rs, sdk-libs/macros/src/compressible/seed_providers.rs, sdk-libs/macros/src/compressible/variant_enum.rs, sdk-libs/macros/src/compressible/utils.rs, sdk-libs/macros/src/compressible/traits.rs): Substantial refactoring: ctx-seed-aware variant generation, RentFreeAccountVariant adoption, PackedTokenDataWithVariant naming, skip-field logic, ctoken-to-token enum renames, and token seed provider generation with new entry points.
  • Finalize Macro Module (sdk-libs/macros/src/finalize/codegen.rs, sdk-libs/macros/src/finalize/parse.rs, sdk-libs/macros/src/finalize/mod.rs): New finalize-centric macro infrastructure (~390–680 lines) with LightPreInit/LightFinalize trait generation, comprehensive parsing of #[rentfree(...)] and #[light_mint(...)] attributes, and multi-path pre-init codegen.
  • Macro Entry Point Overhaul (sdk-libs/macros/src/lib.rs, sdk-libs/macros/src/compressible/mod.rs): Removal of legacy macros (light_accounts, light_traits, light_system_accounts); introduction of rentfree_program, rent_free, and light_instruction attributes; module re-exports for new anchor_seeds, file_scanner, light_compressible.
  • SDK Core Trait & Module Updates (sdk-libs/sdk/src/compressible/finalize.rs, sdk-libs/sdk/src/compressible/traits.rs, sdk-libs/sdk/src/compressible/decompress_runtime.rs, sdk-libs/sdk/src/compressible/mod.rs): New LightPreInit/LightFinalize traits, IntoVariant/IntoCTokenVariant traits, simplified TokenSeedProvider (removed Accounts type), narrowed decompress_runtime feature gating (requires both v2 and cpi-context).
  • SDK Export Updates (sdk-libs/sdk/src/lib.rs): Removal of light_system_accounts and LightTraits re-exports from light_sdk_macros.
  • Token SDK Decompress & Pack (sdk-libs/token-sdk/src/compressible/decompress_runtime.rs, sdk-libs/token-sdk/src/pack.rs, sdk-libs/token-sdk/src/compressed_token/v2/decompress_full.rs): TokenSeedProvider usage refactoring (unpacked variant-based seed resolution), Pack/Unpack trait definitions, PackedCTokenDataWithVariant → PackedTokenDataWithVariant rename, is_ata flag addition for ATA decompression handling.
  • Token SDK CPI Builders (sdk-libs/token-sdk/src/token/create.rs, sdk-libs/token-sdk/src/token/create_ata.rs, sdk-libs/token-sdk/src/token/decompress_mint.rs): Comprehensive CPI builder refactoring: removal of the compressible field from CreateTokenAccount, new rent-free builder pattern (CreateTokenAccountRentFreeCpi), multi-stage ATA builder (CreateCTokenAtaCpi → idempotent → rent_free), new DecompressCMintWithCpiContext/CpiWithContext structures, and three new CPI context creation helpers.
  • Token SDK Module & Feature Updates (sdk-libs/token-sdk/src/token/mod.rs, sdk-libs/token-sdk/Cargo.toml, sdk-libs/token-sdk/tests/pack_test.rs): SystemAccountInfos Clone derivation, CreateCTokenAtaCpi aliasing and re-exports, compressible feature now enables cpi-context, and test type alias updates (PackedCTokenDataWithVariant → PackedCompressibleTokenDataWithVariant).
  • Program Test Infrastructure (sdk-libs/program-test/Cargo.toml, sdk-libs/program-test/src/compressible.rs, sdk-libs/program-test/src/program_test/light_program_test.rs): light-token-interface made non-optional; extract_compression_info returns a compression_only flag; new compress_cmint_forester function; separation of Token/CMint compression flows; three new public interface getter methods (get_ata_account_interface, get_ata_interface, get_mint_interface).
  • Minor Comments & Gating (sdk-libs/sdk/src/compressible/compress_account.rs, sdk-libs/sdk/src/compressible/compress_account_on_init.rs): v2 address derivation comment addition; commented-out fail-fast handling for with_data (no active behavior change).

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs

Suggested labels

ai-review, macro-changes, cpi-builder-refactor, sdk-api-changes

Suggested reviewers

  • sergeytimoshin
  • ananas-block

A dance of types and tokens bright,
Compressed accounts taking flight,
Seeds extracted, proofs now packed,
Builders rent-free, never cracked.
🎯 The rentfree dawn arrives at last!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 inconclusive)
  • Title check: ❓ Inconclusive. The title 'chore: refactor macros' is vague and generic, lacking specificity about which macros were refactored or what the primary change accomplishes. Resolution: consider a more descriptive title that captures the key refactoring objective, such as 'chore: refactor macros to support rent-free accounts and token decompression' or 'chore: consolidate compressible macros under finalize module'.

✅ Passed checks (2 passed)
  • Description check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Docstring coverage: ✅ Passed. Docstring coverage is 100.00%, which meets the required threshold of 70.00%.



@SwenSchaeferjohann SwenSchaeferjohann force-pushed the swen/clean-decompress-base branch from 5b2ab0e to a606eb1 Compare January 16, 2026 16:05
```toml
xtask = "run --package xtask --"

[resolver]
incompatible-rust-versions = "fallback"
```
what does this do?

```rust
pub signer: Pubkey,       // Mint signer (seed)
pub address_tree: Pubkey, // Address tree
pub compressed_address: [u8; 32], // Compressed address
pub state: MintState,     // Hot/Cold/None
```
Why do we handle it differently here than in AtaInterface which has the raw_account?

```rust
use light_compressible_client::{decompress_atas, decompress_cmint};

async fn decompress_all(
```

the name should reflect that it is only for atas and mints.

```rust
    owners: &[Pubkey],
    fee_payer: Pubkey,
    payer: &Keypair,
) -> Result<(), Box<dyn std::error::Error>> {
```
we should return the correct error

Suggested change:

```rust
) -> Result<(), Box<dyn std::error::Error>> {
```

```rust
    ],
    bump,
)]
#[rentfree]
```
nice

```diff
@@ -1,72 +1,118 @@
use anchor_lang::prelude::*;
use light_compressible::CreateAccountsProof;
use light_sdk_macros::RentFree;
```
should we add a new macro crate light_rentfree_macros?

```diff
@@ -1,72 +1,118 @@
use anchor_lang::prelude::*;
use light_compressible::CreateAccountsProof;
```
ananas-block (Contributor) commented Jan 16, 2026:
rename light_compressible to light_rentfree ?

@SwenSchaeferjohann SwenSchaeferjohann force-pushed the swen/clean-decompress-base branch from a606eb1 to e5c6692 Compare January 16, 2026 16:37
**`instructions.rs`** - Instruction generation

- Main macro: `add_compressible_instructions`
- Main macro: `#[compressible]`
Suggested change:

```diff
-- Main macro: `#[compressible]`
+- Main macro: `#[rentfree]`
```

?

```rust
all_account_infos.push(token_rent_sponsor.clone());
all_account_infos.push(config.clone());

// Add required system accounts for transfer2 instruction
```
ananas-block (Contributor) commented Jan 16, 2026:

can't we just pass all account infos or the correct range?

@SwenSchaeferjohann SwenSchaeferjohann force-pushed the swen/clean-decompress-base branch from 559bbc8 to 3489d14 Compare January 16, 2026 18:34
wip

add mint support to sdk and macros

sync tests, fix program-test compressioninfo parsing

refactor macros

update test flow

wip

wip - mix

wip - force merge

move to preinit

wip

wip: separate decomp

stage separate ata and cmint handling in decompression

wip - try atomic decomp

wio

wip

create_pdas_and_mint_auto ref

test_create_pdas_and_mint_auto: compress cmint

feat(program-test): implement CMint auto-compression in warp_slot_forward

- Add compress_cmint_forester() to handle CMint compression via mint_action
- Track ACCOUNT_TYPE_MINT accounts in claim_and_compress
- Key fix: pass mint: None to tell on-chain to read from CMint Solana account
- Update test to rely on auto-compression instead of explicit compression

Auto-compress coverage now includes: CToken, Program PDAs, and CMint

wip - autocompress

fix address derive path

fix address derivation for cpdas

cleanup basic_test.rs

cleanup macro

wip

works cargo test-sbf

attempt to clean

up ov

works: decompress_accounts_idempotent

stage: before macro refactor

specs for macro refactor 1

update refactor spec

ph1 ph2

wip3 - before ctokenseedprovider

decompress refactor macros owrks

killed compressible macro

cleanup

rm non derived

cleanup, ctoken cpi clean
wip fixing

fix 2

resolve more mcs

fix

fix -

fix
@SwenSchaeferjohann SwenSchaeferjohann force-pushed the swen/clean-decompress-base branch from 3489d14 to 28c2eae Compare January 16, 2026 18:35
@SwenSchaeferjohann SwenSchaeferjohann marked this pull request as ready for review January 16, 2026 18:38
coderabbitai bot left a comment:

Actionable comments posted: 48

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
sdk-libs/macros/src/compressible/traits.rs (1)

86-133: #[skip] fields will make generated CompressAs code fail to compile.
By skipping fields in the struct literal, any struct that uses #[skip] will trigger “missing field” errors. Consider using struct update syntax (or build from self.clone()) so skipped fields are still populated while compression logic ignores them.

🔧 Proposed fix (populate skipped fields via struct update)
```diff
-                std::borrow::Cow::Owned(Self {
-                    compression_info: None,
-                    #(#field_assignments)*
-                })
+                std::borrow::Cow::Owned(Self {
+                    compression_info: None,
+                    #(#field_assignments)*
+                    ..self.clone()
+                })
```
programs/compressed-token/program/docs/ACCOUNTS.md (1)

19-22: Fix path reference and struct name in documentation.

The documentation references an incorrect path and struct name:

  • Path states program-libs/token-interface/src/state/ctoken/ctoken_struct.rs but should be program-libs/token-interface/src/state/token/token_struct.rs
  • Struct is named Token, not CToken — update all code examples in lines 47–61 to import and use Token instead of CToken

The TokenDataVersion enum documentation (lines 92–97) is correct and matches the actual implementation.

sdk-libs/token-sdk/src/pack.rs (1)

247-264: Clarify the Unpack assumption in the comment.

The comment on lines 257-258 notes this impl "assumes V is already unpacked." This could be confusing since CTokenDataWithVariant<V> with V: Clone suggests V contains Pubkeys directly. Consider whether this impl should exist at all, or if users should always go through PackedCTokenDataWithVariant::unpack instead.

sdk-libs/compressible-client/src/lib.rs (1)

450-472: Index-based matching requires caller to maintain ordering.

The comment explains that compressed_accounts must match the order of validity_proof_with_context.accounts. This is a subtle contract; consider adding a length-mismatch check between compressed_accounts.len() and packed_tree_infos_slice.len() before the loop, since the .get(i) could otherwise fail with a confusing error message.

💡 Suggested improvement
```diff
+        // Verify lengths match before iteration
+        if compressed_accounts.len() != packed_tree_infos_slice.len() {
+            return Err(format!(
+                "Mismatch: {} compressed accounts but {} tree infos in proof",
+                compressed_accounts.len(),
+                packed_tree_infos_slice.len()
+            ).into());
+        }
+
         for (i, (compressed_account, data)) in compressed_accounts.iter().enumerate() {
```
🤖 Fix all issues with AI agents
In `@program-libs/compressible/Cargo.toml`:
- Line 35: The crate dependency is inverted: program-libs/compressible depends
on light-sdk-types from sdk-libs to get PackedAddressTreeInfo, which breaks the
intended dependency boundary; fix by moving the instruction/type definitions (at
least PackedAddressTreeInfo) into program-libs (either relocate the entire
light-sdk-types crate into program-libs or extract the instruction types into a
new program-libs crate such as program-libs/instruction-types or
program-libs/compressed-account) and then update Cargo.toml in compressible to
depend on that program-libs crate instead of sdk-libs so program-libs stays
independent of sdk-libs.

In `@programs/compressed-token/program/docs/ctoken/CREATE.md`:
- Around line 90-92: The phrase "compression_only" rule contains a duplicated
word "only" in the documentation; edit the CREATE.md section that describes
validation rules (the lines referencing compression_only in items 4.4 and 4.5)
and remove the extra "only" so the rule reads clearly (e.g., "Validate
compression_only requirement for restricted extensions" and "Validate
compression_only is only set for mints with restricted extensions"). Ensure
occurrences of the duplicated word are corrected consistently wherever the
compression_only rule is described.

In `@rebase.md`:
- Around line 37-40: Update the ambiguous rename notes in rebase.md where the
bullets show "light_token_sdk → light_token_sdk": for the entry referencing
Phase 8 and the later occurrence (around the other bullets), replace the
confusing arrow form with a clear statement—either show the actual intended
rename mapping if different or simplify to a single phrase like "use main's
naming: light_token_sdk" (reference the "Phase 8" bullet and the token
"light_token_sdk" to locate and fix both occurrences).

In `@sdk-libs/compressible-client/decompress-atas.md`:
- Around line 443-466: The fenced flow-diagram block in decompress-atas.md lacks
a language tag; update the opening triple-backtick to include a plain text
specifier (e.g., ```text or ```plaintext) so the block containing the steps
(starting with "User calls decompress_atas_idempotent([{wallet_owner, mint}])"
and listing derive_ctoken_ata, indexer.get_compressed_token_accounts_by_owner,
indexer.get_validity_proof, CreateAssociatedCTokenAccount::idempotent,
decompress_full_ctoken_accounts_with_indices, etc.) is marked as text for lint
compliance.

In `@sdk-libs/compressible-client/decompress-mint.md`:
- Around line 435-440: The documentation incorrectly lists AlreadyDecompressed
under "Error Handling"; update the "Error Handling" section in
decompress-mint.md to remove AlreadyDecompressed or explicitly mark it as an
idempotent non-error (matching the behavior described at line 167 where
AlreadyDecompressed returns an empty vec). Ensure references to the
AlreadyDecompressed case in the text (e.g., the description that it "is NOT an
error" and the behavior of the function that returns an empty vec) are
consistent with the Error Handling list.
- Around line 368-395: The fenced code block containing the flow diagram lacks a
language specifier; update the block to use a language tag (e.g., "text") so
markdown linting passes. Locate the diagram around the decompress_mint flow
(references: decompress_mint, derive_cmint_compressed_address,
indexer.get_compressed_account, parse_compressed_mint_data,
indexer.get_validity_proof, DecompressMint, .instruction()) and change the
opening fence from ``` to ```text (or another appropriate language) so the
diagram is a properly annotated fenced code block.

In `@sdk-libs/compressible-client/DECOMPRESSION.md`:
- Around line 139-171: Update the misleading comment for TokenData.owner inside
the TokenData struct: change the note that currently says "for ATAs, this is the
ATA pubkey" to state that owner follows SPL semantics and is the wallet (token
account owner), not the ATA account address; adjust the comment near
TokenData.owner in the AtaInterface/TokenData declaration so readers know the
field is populated from the SPL token account owner (as in the test that uses
spl_account.owner) rather than the ATA pubkey.

In `@sdk-libs/compressible-client/helper.md`:
- Around line 91-97: Update the fenced code block under the "When to use CPI
context" heading to include a language identifier (e.g., use ```text) so the
block is lint-friendly; locate the block containing the three lines "PDA-only tx
… Token-only tx … Mixed PDA + token …" and replace the opening ``` with ```text
to satisfy markdownlint.

In `@sdk-libs/compressible-client/proof_helper.md`:
- Around line 213-218: The check that converts
packed.packed_tree_infos.address_trees.first().copied().ok_or(CreateAccountsProofError::EmptyInputs)?
can produce a misleading EmptyInputs error if pack_proof returned an empty
address_trees despite validated non-empty inputs; update the error handling to
return a more specific error (e.g., CreateAccountsProofError::PackingFailed or
CreateAccountsProofError::UnexpectedEmptyPackedTrees) or add a clear comment
explaining why an empty address_trees is impossible, and replace the ok_or usage
around packed_tree_infos/address_trees/address_tree_info with that new error
variant to provide accurate context tied to pack_proof and
packed_tree_infos/address_trees/address_tree_info.

In `@sdk-libs/compressible-client/src/create_accounts_proof.rs`:
- Around line 18-32: Add a fail-fast error path when minting requires CPI
context: in the code path that checks has_mints and then calls pack_proof with
state_tree_info.cpi_context, return an explicit error if has_mints is true and
state_tree_info.cpi_context.is_none(); add a new CreateAccountsProofError
variant (e.g., CpiContextRequired or MissingCpiContext) and use it to
short-circuit before calling pack_proof so we never build an instruction missing
required CPI accounts.

In `@sdk-libs/compressible-client/src/decompress_atas.rs`:
- Around line 180-183: The state() accessor unnecessarily calls .clone() on a
Copy type; change the implementation of state() to return the AccountState
directly (remove .clone()) by returning self.token_data.state so you rely on the
AccountState being Copy; update the function signature if needed to keep
returning AccountState from state() and ensure references to token_data.state in
decompress_atas.rs (the state() method) are used without cloning.
- Around line 703-786: Add a unit test that constructs a valid SPL token Account
binary blob, calls parse_token_account_interface with that blob and the derived
ATA/mint/owner values, and asserts the returned AtaInterface fields
(token_data.amount, mint, owner, state, delegate, tlv) match expected unpacked
values; locate parse_token_account_interface and use derive_token_ata to get
ata/bump and TokenData/AccountState/AtaInterface types to validate both
successful parsing and correct handling of raw_account and is_hot/is_none flags.

In `@sdk-libs/compressible-client/src/decompress_mint.rs`:
- Around line 166-175: Add a brief clarifying comment explaining the fallback on
output_queue: when extracting from proof_result (see variables account_info,
state_tree, input_queue, output_queue) document that output_queue uses
next_tree_info.queue when present and falls back to input_queue when
next_tree_info is absent—e.g., "When the tree hasn't rolled over yet, use
input_queue as the output queue." Place this comment immediately above the
output_queue assignment to explain the rationale.
- Around line 125-128: Document the derivation for the constants
DEFAULT_RENT_PAYMENT and DEFAULT_WRITE_TOP_UP by expanding their comments to
explain how the values were calculated (e.g., epochs per day for
DEFAULT_RENT_PAYMENT = 2 and the rent-per-write calculation that yields 766
lamports), and add a reference/link to program-libs/compressible/docs/RENT.md
for the full rent formula; update the comment near DEFAULT_RENT_PAYMENT and
DEFAULT_WRITE_TOP_UP to include the formula/inputs used (rent rate, write size,
epoch duration) and a short example calculation so future readers can reproduce
the numbers.
- Around line 346-350: There are duplicate "// 5." comments starting at the
decompressed flag check and the validity proof retrieval; renumber the inline
step comments so they are sequential from the decompressed flag check onward:
keep the decompressed-flag comment as "// 5. Check decompressed flag" (where the
code checks compressed_account.decompressed), change the comment before
indexer.get_validity_proof(vec![compressed_account.hash], ...) from "// 5. Get
validity proof" to "// 6. Get validity proof", update the comment before
extracting tree info (the one referencing tree info) from "// 6. Extract tree
info" to "// 7. Extract tree info", change the comment before building
CompressedMintWithContext from "// 7. Build CompressedMintWithContext" to "// 8.
Build CompressedMintWithContext", and change the comment before building
DecompressMint instruction from "// 8. Build DecompressMint instruction" to "//
9. Build DecompressMint instruction" so all step numbers are sequential.

In `@sdk-libs/compressible-client/src/initialize_config.rs`:
- Around line 81-85: The build() default should derive and use the V1 config PDA
instead of defaulting config_bump to 0; update the
CompressibleClientConfigBuilder::build (or equivalent build() method) to call
CompressibleConfig::derive_v1_config_pda(&self.program_id) and use the returned
bump when constructing the config (instead of relying on the config_bump field
default of 0), and remove or change the default config_bump behavior so callers
who don’t set config_bump get the correct V1 PDA; if multiple config versions
are required, make the version explicit in the builder API rather than silently
defaulting to 0.
- Around line 20-26: The builder currently allows any Vec<Pubkey> for
address_space but on-chain expects exactly one entry
(MAX_ADDRESS_TREES_PER_SPACE == 1); update the address_space(...) builder method
to validate that the supplied Vec<Pubkey> has len() == 1 and fail fast if not
(return a Result::Err or panic with a clear message depending on the builder's
error pattern), so callers get immediate feedback instead of a runtime on-chain
failure; reference the InitializeCompressionConfigAnchorData struct and the
address_space(...) builder and use MAX_ADDRESS_TREES_PER_SPACE in the check and
error message.

In `@sdk-libs/compressible-client/src/lib.rs`:
- Around line 592-617: The code currently calls .expect(...) on
account_interface.decompression_context when building compressed_accounts, which
can panic if someone constructed an inconsistent AccountInterface; replace the
panic with defensive error handling by mapping the Option to a Result (e.g.,
using ok_or or ok_or_else with a descriptive error) and propagate that error out
of the enclosing function instead of panicking. Update the compressed_accounts
construction (the closure mapping to (CompressedAccount, V)) to return a
Result<(CompressedAccount, V), _> by extracting compressed_account via
account_interface.decompression_context.ok_or(...)? (or map_err) and adjust the
surrounding function signature/flow to propagate the error, ensuring references
to cold_accounts, CompressedAccount, V, and
account_interface.decompression_context are the points of change.

In `@sdk-libs/compressible-client/src/pack.rs`:
- Around line 99-118: The code unsafely casts system_offset (a usize) to u8 when
building PackedProofResult, which will silently truncate if pre-accounts > 255;
replace the direct cast with u8::try_from(system_offset) and return an Err from
the current function on conversion failure so the overflow is handled. Update
the assembly that constructs PackedProofResult to use the try_from result for
system_accounts_offset, and add a complementary guard inside
PackedAccounts::insert_or_get_config (or where next_index is advanced) to
check/return an error when adding would exceed 255 accounts to prevent silent
wrapping of next_index; reference symbols: system_offset, PackedProofResult,
PackedAccounts::insert_or_get_config, insert_or_get.

In `@sdk-libs/macros/MACRO_REFACTOR_V2.md`:
- Around line 587-621: The migration guide omits operational rollout
guidance—add a "Step 0: Understand On-Chain Impact" section stating that the
change is compile-time only and does not alter PDA derivation, compressed
account addresses, or existing account data (so deployed programs continue to
work); add a "Step 5: Incremental Migration Strategy" describing how to
coexistently migrate (keep module-level #[compressible(...)] while adding
#[compressible] to individual fields, test, then remove the module macro) and
validation steps; and add a "Step 6: Dynamic Seed Handling" section that
documents patterns for programs that compute seeds at runtime (examples for
handling computed seeds, testing, and migration hints) and explicitly mention
coexistence of #[compressible(...)] and #[compressible_program] during rollout
and that #[compressible_token(..., authority = ...)] mappings must be validated
to preserve on-chain addresses.
- Line 442: The panic in the match arm "_ => panic!(\"Unsupported seed
expression: {:?}\", expr)" should be replaced with a compile-time error: return
Err(syn::Error::new_spanned(expr, "Unsupported seed expression")); then
propagate Result<T, syn::Error> through the call chain (update the functions
that call the function containing that match arm to return Result and propagate
?), and convert the final syn::Error into a TokenStream at the proc-macro entry
point using error.to_compile_error() so the compiler emits a proper
compile_error! at the expression span instead of a runtime panic.
- Around line 520-531: The spec currently treats a type used with different PDA
seeds (e.g., UserRecord referenced in CreateUserRecord vs MigrateUserRecord) as
an error; change the macro/design to allow multiple seed schemes per logical
type by either (A) supporting a variant type parameter pattern so the same data
layout can be instantiated as UserRecord<V1> and UserRecord<V2> (update the
macro handling of #[compressible] and Account<'info, UserRecord<...>> parsing
and seed resolution), or (B) adding a per-field override attribute (e.g.,
support a #[compressible(seed_scheme = "...")] or similar on fields) so the
macro uses that seed scheme when deriving PDAs; alternatively, if you choose not
to implement runtime support, update MACRO_REFACTOR_V2.md to clearly document
the limitation and add a migration tool plan (referencing UserRecord,
CreateUserRecord, MigrateUserRecord, #[compressible], and seed_scheme) so
consumers can follow a supported migration path.
- Around line 552-567: The current seed-type inference in the macro silently
falls back to Pubkey for `.as_ref()` and u64 for `.to_le_bytes()` when no
matching field is found, which can hide refactor errors; update the macro logic
(the inference path that maps seeds like params.owner and params.category_id to
UserRecordSeeds.* by scanning UserRecord fields) to emit a compile-time
syn::Error if no match is found (including field_span, seed name and data type
name in the message) rather than using the Pubkey/u64 defaults; alternatively
(or additionally) add support for an explicit override like a seed_types(...)
attribute so callers can annotate types directly when inference fails (document
the new error and the seed_types annotation usage in the macro handling code).
- Around line 56-267: The docs need actionable cross-references and concrete
examples: update MACRO_REFACTOR_V2.md to (1) add explicit source links for the
components referenced in the diagrams (e.g., link the "file scanner" box to
macros/src/compressible/file_scanner.rs and the "CODEGEN OUTPUT"/instruction
handler to macros/src/compressible/instructions.rs, referencing symbols like
file_scanner.rs, SEED EXPRESSION PARSER, extract_account_inner_type, and
decompress_accounts_idempotent), (2) include real generated-code examples pulled
from the test snapshot (refer to tests/snapshots/generated_code.rs) adjacent to
each diagram instead of pseudocode, (3) add a short header with the Light
Protocol refactor version and minimum Rust toolchain required, and (4) append a
concise troubleshooting section mapping common errors (e.g., seed
derivation/verification failures, missing ctx fields) to their root causes and
where to inspect (pointing to extract_account_inner_type, seed parser, and
decompress_accounts_idempotent). Ensure links and filenames are inserted near
the COMPILE TIME and Seed Extraction Flow diagrams for quick navigation.
- Around line 624-631: Add repository-wide migration and performance checks: run
a scan across all .rs files to detect usages of the old #[compress_as(...)] and
new macro forms and produce a pass/fail summary (include results for
csdk-anchor-full-derived-test and
test_create_decompress_compress_single_account), and add benchmark tests that
measure macro expansion and file-scanning times for large workspaces to quantify
overhead; update CI to fail if scans find unvalidated programs or if benchmarks
exceed acceptable thresholds.
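The compile-error propagation pattern described in the panic-replacement bullet above (line 442) can be sketched without proc-macro machinery; here `MacroError` is a hypothetical stand-in for `syn::Error`, and the function names are illustrative only:

```rust
// Minimal sketch of the suggested panic-to-compile-error refactor.
// `MacroError` stands in for `syn::Error`; all names are hypothetical.
#[derive(Debug)]
struct MacroError(String);

fn parse_seed_expr(expr: &str) -> Result<String, MacroError> {
    match expr {
        "literal" | "path" => Ok(format!("seed::{expr}")),
        // Previously: `_ => panic!("Unsupported seed expression: {:?}", expr)`.
        other => Err(MacroError(format!("Unsupported seed expression: {other:?}"))),
    }
}

// Callers propagate the error with `?` instead of unwinding at expansion time;
// the proc-macro entry point would then convert the final error via
// `error.to_compile_error()` so the compiler reports it at the right span.
fn parse_all(exprs: &[&str]) -> Result<Vec<String>, MacroError> {
    exprs.iter().map(|e| parse_seed_expr(e)).collect()
}
```

The key design point is that every function between the match arm and the macro entry point returns `Result`, so the error surfaces as a `compile_error!` rather than a panic inside rustc.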

In `@sdk-libs/macros/MACRO_REFACTOR.md`:
- Around line 57-58: Several fenced code blocks in MACRO_REFACTOR.md (for
example the block showing #[derive(Accounts, LightCompressible)]) lack language
identifiers and trigger MD040; update each fenced block by adding the
appropriate language tag (e.g., ```rust for Rust snippets, ```toml for TOML, or
```text for plain text) immediately after the opening backticks so every fence
has a language identifier and the markdown linter stops reporting MD040.

In `@sdk-libs/macros/MACRO-NEW.md`:
- Around line 303-336: The markdown contains multiple fenced code blocks without
language identifiers (violating markdownlint MD040); update each fence in
MACRO-NEW.md around the examples that show "UNPACK / DERIVE PDA / VERIFY /
CREATE/WRITE" and "CPI Context Batching (Mixed PDAs + Tokens)" and the other
listed sections so every triple-backtick fence includes an appropriate language
tag (e.g., ```text for pseudocode, ```rust for Rust snippets, ```bash for shell
examples); ensure all other similar fences (the ones referencing the CRITICAL
note and nearby blocks) are updated consistently to include the correct language
identifier.
- Line 84: The markdown contains emphasis markers with extra inner spaces
(violating MD037); locate the bold/italic segment near the "### 2.2 Seeds
Structs (All Seeds - ctx._ + data._)" heading and remove the internal padding
inside the emphasis markers (e.g., change "* _text_ *" or "** _text_ **" to
"*_text_*" or "**_text_**") so there are no spaces between the
asterisks/underscores and the emphasized text.

In `@sdk-libs/macros/OPTION_A_PLAN.md`:
- Around line 380-390: Replace the emphasized line "**Total estimated time: 5-6
hours**" with a proper markdown heading (e.g., "### Total estimated time: 5-6
hours") in OPTION_A_PLAN.md so markdownlint MD036 is satisfied; update only the
final estimate line formatting and keep the existing text content and position
intact.

In `@sdk-libs/macros/OPTION_A_STATE_FLOW.md`:
- Around line 11-50: The fenced diagram/code blocks in OPTION_A_STATE_FLOW.md
(e.g., the flow block containing symbols like decompress_accounts_idempotent,
parse CompressedAccountData[], PDAs/Tokens/Mints, and CPIs such as
CreateCTokenAccountCpi, CreateAssociatedCTokenAccountCpi, DecompressMintCpi) are
missing language identifiers; update each triple-backtick fence to include the
appropriate language tag (likely mermaid for flow diagrams, or text/rust where
code snippets appear) for all similar blocks (including the other blocks
referenced in the review) so markdownlint MD040 is satisfied.

In `@sdk-libs/macros/SPEC_OPTION_A.md`:
- Around line 252-333: The runtime currently verifies the derived ATA via
derive_ctoken_ata(…) and compares derived_ata to ata_info.key but does not check
that packed_ata.token_data.owner actually points to that ATA index; update
process_decompress_tokens_runtime to validate that packed_ata.token_data.owner
(an index) resolves to the same Pubkey as derived_ata/ata_info.key before
building the MultiInputTokenDataWithContext and DecompressFullIndices: look up
packed_accounts[packed_ata.token_data.owner as usize], compare its key to
derived_ata (or ata_info.key), and return an appropriate ProgramError (e.g.,
InvalidAccountData) with msg! if they mismatch so decompression destination
cannot be mismatched.
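A minimal sketch of the owner-index check requested above; `Pubkey`, `PackedAta`, and the error strings are hypothetical stand-ins for the real light-protocol types (`ProgramError::InvalidAccountData` plus `msg!` in the actual runtime):

```rust
// Hypothetical sketch: verify that the packed owner index resolves to the
// same pubkey as the independently derived ATA before decompressing.
type Pubkey = [u8; 32];

struct PackedAta {
    /// Index into `packed_accounts`, mirroring `token_data.owner`.
    owner_index: usize,
}

fn validate_decompress_destination(
    packed_accounts: &[Pubkey],
    packed_ata: &PackedAta,
    derived_ata: &Pubkey,
) -> Result<(), &'static str> {
    let resolved = packed_accounts
        .get(packed_ata.owner_index)
        .ok_or("owner index out of bounds")?;
    if resolved != derived_ata {
        // Real code: msg!(...) then return ProgramError::InvalidAccountData.
        return Err("token_data.owner does not resolve to the derived ATA");
    }
    Ok(())
}
```

Running this check before building `MultiInputTokenDataWithContext` ensures a mismatched destination fails fast instead of decompressing to the wrong account.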

In `@sdk-libs/macros/SPEC_OPTION_B.md`:
- Around line 182-192: The validation currently uses generic
ProgramError::NotEnoughAccountKeys with only msg! logs when has_standard_atas or
has_standard_mints are true; update the implementation to return explicit,
descriptive errors instead of the generic one by adding new error variants
(e.g., MissingCtokenConfig and MissingCtokenRentSponsor) and using those in the
ctx.ctoken_config().ok_or_else(...) and
ctx.ctoken_rent_sponsor().ok_or_else(...) branches (replace
ProgramError::NotEnoughAccountKeys with the new specific errors and keep or
enhance the msg! text to reference the same symbols).
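The suggested variant-specific errors might look like the following sketch; the enum, struct, and field names here are hypothetical stand-ins, not the real SDK API:

```rust
// Sketch of replacing the generic NotEnoughAccountKeys with descriptive
// error variants for the ctoken validation branches.
#[derive(Debug, PartialEq)]
enum CtokenError {
    MissingCtokenConfig,
    MissingCtokenRentSponsor,
}

struct Ctx {
    ctoken_config: Option<u8>,
    ctoken_rent_sponsor: Option<u8>,
}

fn validate(ctx: &Ctx, has_standard_atas: bool) -> Result<(), CtokenError> {
    if has_standard_atas {
        // Each missing account now reports its own variant instead of a
        // generic "not enough account keys" error.
        ctx.ctoken_config.ok_or(CtokenError::MissingCtokenConfig)?;
        ctx.ctoken_rent_sponsor
            .ok_or(CtokenError::MissingCtokenRentSponsor)?;
    }
    Ok(())
}
```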

In `@sdk-libs/macros/src/compressible/anchor_seeds.rs`:
- Around line 653-691: The function generate_seed_derivation is currently
unused; either mark it explicitly as intentionally dead code by adding
#[allow(dead_code)] above the generate_seed_derivation function and include a
short TODO comment referencing upcoming macro work, or remove the
generate_seed_derivation implementation (and any associated ClassifiedSeed
helpers it solely supports) until it is needed; update any module exports or
references to ClassifiedSeed if you remove it to avoid unused warnings
elsewhere.
- Around line 693-731: Add an explicit dead-code allowance and brief doc comment
to both helper functions so the unused lint is suppressed until they're
consumed: annotate get_ctx_fields and get_data_fields with #[allow(dead_code)]
and add a short /// comment describing their purpose (e.g., "Extracts ctx/data
field idents from ClassifiedSeed for future seed derivation code") so
maintainers know why they exist.
- Around line 33-46: The struct ExtractedSeedSpec contains fields field_name and
is_boxed that are populated but never read; either annotate the struct with
#[allow(dead_code)] and add a short comment on intended future use for
field_name and is_boxed (so their presence is documented), or remove those
unused fields from ExtractedSeedSpec and update any code that constructs
ExtractedSeedSpec (so callers stop populating field_name and is_boxed);
reference the ExtractedSeedSpec definition and update its usages accordingly.

In `@sdk-libs/macros/src/compressible/decompress_context.rs`:
- Around line 51-106: The two branches conditioned on ctx_fields duplicate the
RentFreeAccountVariant match arms; extract the common body that calls
light_sdk::compressible::handle_packed_pda_variant (including the Result
handling) into a shared helper/closure and invoke it from both match arms,
keeping only the differing pattern pieces and the resolve_ctx_seeds setup in
each branch; update the generated tokens around ctx_seeds_construction,
idx_field_patterns, resolve_ctx_seeds, packed_name and pda_type so the match
arms emit the concise pattern plus a call to the shared helper to eliminate the
~90% duplication.

In `@sdk-libs/macros/src/compressible/seed_providers.rs`:
- Around line 25-85: The extractor misses ctx.* when wrapped in method-call
arguments or other wrapper exprs; update extract_ctx_from_expr to recursively
descend into syn::Expr::MethodCall by iterating method_call.args (in addition to
method_call.receiver) and ensure syn::Expr::Call and syn::Expr::Reference
already recurse (add if missing), and mirror the same recursion logic in
extract_ctx_field_name so it unwraps MethodCall, Call, Reference, and nested
Field/Path patterns; apply the same recursive-unwrapping fix to the seed ref
generation and client-side seed functions mentioned (the code around the seed
ref generation and client-side seed functions) so any nested
max_key(&ctx.accounts.foo.key()), &ctx.field, or similar wrappers will yield the
ctx field name.
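The recursive descent can be illustrated on a toy expression tree; this `Expr` enum only models the `syn::Expr` variants relevant here and is not the real type:

```rust
// Toy model of the recursive descent requested above. `Expr` stands in for
// syn::Expr, keeping only the variants the extractor must unwrap.
enum Expr {
    CtxField(&'static str),
    MethodCall { receiver: Box<Expr>, args: Vec<Expr> },
    Call(Vec<Expr>),
    Reference(Box<Expr>),
    Other,
}

fn extract_ctx_field(expr: &Expr) -> Option<&'static str> {
    match expr {
        Expr::CtxField(name) => Some(name),
        // Descend into the receiver *and* every argument, so wrappers like
        // `max_key(&ctx.accounts.foo.key())` still yield the ctx field.
        Expr::MethodCall { receiver, args } => extract_ctx_field(receiver)
            .or_else(|| args.iter().find_map(extract_ctx_field)),
        Expr::Call(args) => args.iter().find_map(extract_ctx_field),
        Expr::Reference(inner) => extract_ctx_field(inner),
        Expr::Other => None,
    }
}
```

Mirroring the same unwrapping in `extract_ctx_from_expr`, the seed-ref generation, and the client-side seed functions keeps all three paths consistent.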

In `@sdk-libs/macros/src/finalize/codegen.rs`:
- Around line 284-286: The code currently uses expect on
`#params_ident.create_accounts_proof.proof.0` when constructing __proof:
light_token_sdk::CompressedProof which panics if missing; change this to return
a typed error instead (e.g., convert the Option to a Result and return a
LightSdkError or ProgramError) by replacing the expect with an ok_or / match
that returns Err(LightSdkError::MissingProof) or Err(ProgramError::Custom(...))
from the enclosing function; do the same replacement for the other occurrence
around the code that clones create_accounts_proof.proof.0 so all missing-proof
paths return a Result error instead of panicking.
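The `expect`-to-`Result` change reduces to the following pattern; the type and variant names are hypothetical stand-ins for the SDK's types:

```rust
// Sketch of replacing a panicking `.expect(...)` on an optional proof with
// a typed error returned from the enclosing function.
#[derive(Debug, PartialEq)]
enum LightSdkError {
    MissingProof,
}

struct CreateAccountsProof {
    proof: Option<[u8; 32]>,
}

fn require_proof(p: &CreateAccountsProof) -> Result<[u8; 32], LightSdkError> {
    // Previously: p.proof.expect("missing proof") — a runtime panic.
    p.proof.ok_or(LightSdkError::MissingProof)
}
```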

In `@sdk-libs/macros/src/finalize/parse.rs`:
- Around line 278-280: The if condition in parse.rs erroneously checks
"light_token_program" twice; update the second variant to "ctoken_program" so
the branch that sets ctoken_program_field (referenced as ctoken_program_field
and field_ident in this block) correctly matches both naming variants; locate
the if statement comparing field_name (inside the parsing logic) and replace the
duplicated string literal with "ctoken_program" to ensure the field is captured.
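The corrected condition, reduced to a standalone predicate for illustration (the surrounding parse.rs state is omitted):

```rust
// Predicate form of the fixed branch in parse.rs: before the fix, both arms
// compared against "light_token_program", so "ctoken_program" never matched.
fn matches_ctoken_program_field(field_name: &str) -> bool {
    field_name == "light_token_program" || field_name == "ctoken_program"
}
```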

In `@sdk-libs/program-test/src/program_test/light_program_test.rs`:
- Around line 461-517: In get_ata_account_interface, don't use the ATA pubkey
for indexer lookup or as the SPL account owner: call
get_compressed_token_accounts_by_owner with the wallet owner (the function param
owner) instead of &ata, and call pack_token_data_to_spl_bytes with the wallet
owner (owner or &owner) as the account owner argument instead of &ata; keep
pubkey = ata and set decompression_context.wallet_owner = *owner as-is.
- Around line 551-625: The hot-path that builds TokenData (used in AtaInterface)
incorrectly sets TokenData.owner from spl_account.owner; change it to set owner
to the ATA pubkey (ata) so TokenData.owner semantics match the cold/compressed
paths and fallbacks—update the TokenData construction inside the "if let
Some(account) = self.context.get_account(&ata)" branch to assign owner: ata (not
spl_account.owner) while keeping other fields (mint, amount, delegate, state,
tlv) the same.

In `@sdk-libs/sdk/src/compressible/compress_account_on_init.rs`:
- Around line 55-60: The commented-out fail-fast block around the with_data
conditional in compress_account_on_init.rs conflicts with the TODO and creates
ambiguity about whether with_data=true is supported; decide the intended
behavior and either (A) re-enable the fail-fast logic by uncommenting the block
that logs msg!("with_data=true is not supported yet") and returns
LightSdkError::ConstraintViolation to explicitly disallow with_data, or (B)
remove the TODO and the commented block so macro-generated code intentionally
supports with_data=true; update any related comments in the file and ensure the
behavior matches the macro-generated handling of the with_data flag.

In `@sdk-libs/token-sdk/src/compressible/decompress_runtime.rs`:
- Around line 218-256: Replace the use of cpi_accounts.account_infos().first()
with the explicit cpi_accounts.authority() accessor to make it clear we are
retrieving the CPI authority account; push cpi_accounts.authority().map_err(|_|
ProgramError::InvalidAccountData)?.clone() into all_account_infos instead, and
update the preceding comment to state that index 0 is the CPI authority (used to
invoke the light system program) rather than the light system program itself;
keep the other pushes (registered_program_pda(),
account_compression_authority(), account_compression_program(),
system_program()) and the cpi_context() handling unchanged.

In `@sdk-libs/token-sdk/src/token/create_ata.rs`:
- Around line 267-305: The current code clones self.system_program and also
moves it into CompressibleParamsCpi::new_ata(), causing a redundant clone; fix
by first binding let system_program = self.system_program; then use that local:
set InternalCreateAtaCpi.system_program to system_program.clone() and pass the
moved system_program into CompressibleParamsCpi::new_ata(..., system_program).
Apply the same change inside CreateCTokenAtaRentFreeCpi::invoke and
::invoke_signed so InternalCreateAtaCpi and CompressibleParamsCpi::new_ata use
the local binding (referencing InternalCreateAtaCpi,
CreateCTokenAtaRentFreeCpi::invoke/invoke_signed, and
CompressibleParamsCpi::new_ata).
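The bind-once pattern might look like this sketch, using hypothetical stand-in types for `AccountInfo` and the two CPI structs:

```rust
// Sketch of the suggested single-clone pattern: bind the account once,
// clone it for the first consumer, then move the original into the second.
#[derive(Clone, Debug, PartialEq)]
struct AccountInfo(u8);

struct InternalCreateAtaCpi {
    system_program: AccountInfo,
}

struct CompressibleParamsCpi {
    system_program: AccountInfo,
}

fn build(system_program: AccountInfo) -> (InternalCreateAtaCpi, CompressibleParamsCpi) {
    let internal = InternalCreateAtaCpi {
        system_program: system_program.clone(), // the only clone
    };
    let params = CompressibleParamsCpi { system_program }; // move, no clone
    (internal, params)
}
```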

In `@sdk-libs/token-sdk/src/token/create.rs`:
- Around line 199-237: The invoke_signed method currently derives bump from
seeds.last() without validation, allowing empty or malformed seeds to produce
bump=0 and invalid PDAs; update invoke_signed to first verify seeds is non-empty
and that the last seed exists and has length exactly 1 (the bump byte),
returning an explicit ProgramError (e.g., InvalidArgument) on failure, then
extract the bump byte and build seed_vecs from all preceding seeds before
constructing CompressToPubkey; reference the seeds variable, bump,
invoke_signed, and CompressToPubkey when applying this guard.
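The guard can be sketched as a small helper; the error strings stand in for an explicit `ProgramError::InvalidArgument` in the real `invoke_signed`:

```rust
// Sketch of the seed/bump guard: the seeds slice must be non-empty and its
// last element must be exactly the one-byte bump.
fn split_seeds_and_bump<'a>(
    seeds: &'a [&'a [u8]],
) -> Result<(&'a [&'a [u8]], u8), &'static str> {
    let (last, rest) = seeds.split_last().ok_or("seeds must not be empty")?;
    if last.len() != 1 {
        return Err("last seed must be the single bump byte");
    }
    Ok((rest, last[0]))
}
```

The preceding seeds (`rest`) then feed the seed vectors for `CompressToPubkey`, and malformed input fails with an explicit error instead of silently producing `bump = 0`.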

In `@sdk-libs/token-sdk/src/token/decompress_mint.rs`:
- Around line 271-302: The account meta ordering is mismatched because
MintActionMetaConfig currently cannot set a custom mint_seed or CPI authority,
causing missing mint_seed and wrong cpi_authority in
DecompressCMintWithCpiContext::instruction(); update MintActionMetaConfig by
adding builder methods with_mint_seed(mint_seed_pubkey) and
with_cpi_authority(ctoken_cpi_authority_pubkey) and modify to_account_metas() to
include mint_seed at the same position as build_account_infos() and to use the
provided ctoken_cpi_authority instead of the default cpi_authority_pda;
alternatively (if you prefer minimal change) construct the account_metas vector
manually inside DecompressCMintWithCpiContext::instruction() using the same
ordering and pubkeys as build_account_infos(), ensuring mint_seed and
ctoken_cpi_authority occupy the correct slots.

In `@sdk-libs/token-sdk/src/token/mod.rs`:
- Around line 6-8: Update the documentation bullet that currently references
CreateAssociatedCTokenAccount to match the exported API name
CreateAssociatedTokenAccount (the public export and examples use
CreateAssociatedTokenAccount); ensure all other occurrences of the incorrect
symbol (e.g., the same bullets around the other list entries such as
CreateCTokenAtaCpi and CreateCTokenAccount) are corrected so the doc links align
with the actual exported symbols.

light-heap = { workspace = true, optional = true }
light-account-checks = { workspace= true }
light-compressed-account = { workspace= true }
light-sdk-types = { workspace = true }

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

```shell
# Find workspace root and check members
find . -name "Cargo.toml" -type f | head -3
cat Cargo.toml | grep -A 50 "^\[workspace\]" | head -60
```

Repository: Lightprotocol/light-protocol

Length of output: 1777


🏁 Script executed:

```shell
# Locate light-sdk-types definition
fd "light-sdk-types" --type d
find . -path "*/light-sdk-types/Cargo.toml" -type f
```

Repository: Lightprotocol/light-protocol

Length of output: 54


🏁 Script executed:

```shell
# Check the file under review and surrounding context
cat -n program-libs/compressible/Cargo.toml | sed -n '1,45p'
```

Repository: Lightprotocol/light-protocol

Length of output: 2164


🏁 Script executed:

```shell
# Look for CLAUDE.md files with governance rules
find . -name "CLAUDE.md" -type f | xargs ls -la
cat CLAUDE.md | head -100
cat sdk-libs/CLAUDE.md 2>/dev/null | head -100 || echo "No sdk-libs CLAUDE.md"
cat program-libs/CLAUDE.md 2>/dev/null | head -100 || echo "No program-libs CLAUDE.md"
```

Repository: Lightprotocol/light-protocol

Length of output: 6705


🏁 Script executed:

```shell
# Check light-sdk-types definition and contents
cat -n sdk-libs/sdk-types/Cargo.toml | head -30
echo "---"
cat sdk-libs/sdk-types/src/lib.rs | head -50
```

Repository: Lightprotocol/light-protocol

Length of output: 1984


🏁 Script executed:

```shell
# Check program-libs/compressible CLAUDE.md for context
cat program-libs/compressible/CLAUDE.md
```

Repository: Lightprotocol/light-protocol

Length of output: 2216


🏁 Script executed:

```shell
# Check what program-libs/compressible actually imports from light-sdk-types
rg "light-sdk-types|light_sdk_types" program-libs/compressible/src/ -A 2 -B 2
```

Repository: Lightprotocol/light-protocol

Length of output: 473


🏁 Script executed:

```shell
# Check if similar types exist in program-libs/token-interface
cat program-libs/token-interface/Cargo.toml | head -30
echo "---"
ls -la program-libs/token-interface/src/
```

Repository: Lightprotocol/light-protocol

Length of output: 1776


Move instruction types to program-libs instead of depending on sdk-libs.

Program libs must not depend on SDK libs—this violates the dependency boundary. The light-sdk-types crate is currently in sdk-libs, but program-libs/compressible needs PackedAddressTreeInfo from it. Either:

  1. Move light-sdk-types to program-libs (since it only depends on program-libs anyway), or
  2. Extract the instruction types to a shared location like program-libs/instruction-types or program-libs/compressed-account

Keeping program-libs dependencies unidirectional (inbound only from sdk-libs) is essential for clean separation of concerns.

🤖 Prompt for AI Agents
In `@program-libs/compressible/Cargo.toml` at line 35, The crate dependency is
inverted: program-libs/compressible depends on light-sdk-types from sdk-libs to
get PackedAddressTreeInfo, which breaks the intended dependency boundary; fix by
moving the instruction/type definitions (at least PackedAddressTreeInfo) into
program-libs (either relocate the entire light-sdk-types crate into program-libs
or extract the instruction types into a new program-libs crate such as
program-libs/instruction-types or program-libs/compressed-account) and then
update Cargo.toml in compressible to depend on that program-libs crate instead
of sdk-libs so program-libs stays independent of sdk-libs.

Comment on lines +90 to +92
4.4. Validate compression_only requirement for restricted extensions: - If mint has restricted extensions (e.g., TransferFee) and compression_only == 0 - Error: `ErrorCode::CompressionOnlyRequired`
4.5. Validate compression_only is only set for mints with restricted extensions: - If compression_only != 0 and mint has no restricted extensions - Error: `ErrorCode::CompressionOnlyNotAllowed`
4.6. Validate rent_payment is not exactly 1 epoch (must cover more than the current rent epoch or be 0) - Check: `compressible_config.rent_payment != 1` - Error: `ErrorCode::OneEpochPrefundingNotAllowed` if validation fails - Purpose: Prevent accounts from becoming immediately compressible due to epoch boundary timing

⚠️ Potential issue | 🟡 Minor

Remove the duplicated “only” in the compression_only rule.

Minor wording fix for clarity.

✏️ Suggested wording
```diff
-   4.5. Validate compression_only is only set for mints with restricted extensions:
+   4.5. Validate compression_only is set only for mints with restricted extensions:
```
🧰 Tools
🪛 LanguageTool

[style] ~91-~91: This adverb was used twice in the sentence. Consider removing one of them or replacing them with a synonym.
Context: ...d` 4.5. Validate compression_only is only set for mints with restricted extension...

(ADVERB_REPETITION_PREMIUM)

🤖 Prompt for AI Agents
In `@programs/compressed-token/program/docs/ctoken/CREATE.md` around lines 90 -
92, The phrase "compression_only" rule contains a duplicated word "only" in the
documentation; edit the CREATE.md section that describes validation rules (the
lines referencing compression_only in items 4.4 and 4.5) and reword item 4.5 so
the rule reads clearly (e.g., "Validate compression_only is set only for mints
with restricted extensions"). Ensure occurrences of the duplicated adverb are
corrected consistently wherever the compression_only rule is described.

Comment on lines +37 to +40
1. **sdk-libs/macros/src/compressible/instructions.rs**
- Keep user's Phase 8 changes (simplified API)
- Use main's naming (`light_token_sdk` not `light_token_sdk`)
- Resolution: Take HEAD's code, update package names to main's convention

⚠️ Potential issue | 🟡 Minor

Fix ambiguous rename notes.

Both rename bullets read “light_token_sdk → light_token_sdk”, which is confusing in rebase notes. Consider simplifying to avoid implying a no-op rename.

📝 Suggested edit
```diff
-   - Use main's naming (`light_token_sdk` not `light_token_sdk`)
+   - Use main's naming (`light_token_sdk`)
@@
-- **csdk-anchor-full-derived-test** - Keep user's changes, update imports to `light_token_sdk` → `light_token_sdk`
+- **csdk-anchor-full-derived-test** - Keep user's changes, update imports to `light_token_sdk`
```

Also applies to: 88-89

🤖 Prompt for AI Agents
In `@rebase.md` around lines 37 - 40, Update the ambiguous rename notes in
rebase.md where the bullets show "light_token_sdk → light_token_sdk": for the
entry referencing Phase 8 and the later occurrence (around the other bullets),
replace the confusing arrow form with a clear statement—either show the actual
intended rename mapping if different or simplify to a single phrase like "use
main's naming: light_token_sdk" (reference the "Phase 8" bullet and the token
"light_token_sdk" to locate and fix both occurrences).

Comment on lines +443 to +466
```
User calls decompress_atas_idempotent([{wallet_owner, mint}])
|
v
derive_ctoken_ata(wallet_owner, mint) -> ata_pubkey
|
v
indexer.get_compressed_token_accounts_by_owner(ata_pubkey)
|
v
[CompressedTokenAccount { owner: ata_pubkey, is_ata: true, ... }]
|
v
indexer.get_validity_proof([hash1, hash2, ...]) -> single proof
|
v
CreateAssociatedCTokenAccount::idempotent() -> create_ata_ix
|
v
decompress_full_ctoken_accounts_with_indices(proof, indices) -> decompress_ix
|
v
Return [create_ata_ix, decompress_ix]
```

⚠️ Potential issue | 🟡 Minor

Add language specifier to fenced code block.

The flow diagram is missing a language identifier. Consider adding `text` or `plaintext` for lint compliance.

Proposed fix
```diff
-```
+```text
 User calls decompress_atas_idempotent([{wallet_owner, mint}])
```
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)

443-443: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents
In `@sdk-libs/compressible-client/decompress-atas.md` around lines 443 - 466, The
fenced flow-diagram block in decompress-atas.md lacks a language tag; update the
opening triple-backtick to include a plain text specifier (e.g., ```text or
```plaintext) so the block containing the steps (starting with "User calls
decompress_atas_idempotent([{wallet_owner, mint}])" and listing
derive_ctoken_ata, indexer.get_compressed_token_accounts_by_owner,
indexer.get_validity_proof, CreateAssociatedCTokenAccount::idempotent,
decompress_full_ctoken_accounts_with_indices, etc.) is marked as text for lint
compliance.

Comment on lines +368 to +395
```
User calls decompress_mint(mint_seed_pubkey, fee_payer, indexer)
|
v
derive_cmint_compressed_address(mint_seed_pubkey, address_tree) -> compressed_address
|
v
indexer.get_compressed_account(compressed_address) -> CompressedAccount { data, hash, tree_info }
|
v
parse_compressed_mint_data() -> CompressedMint { metadata.cmint_decompressed? }
|
+--[if cmint_decompressed == true]--> Return empty vec (idempotent)
|
v [if cmint_decompressed == false]
indexer.get_validity_proof([hash]) -> ValidityProofWithContext
|
v
DecompressMint {
mint_seed_pubkey,
payer: fee_payer,
authority: fee_payer, // Permissionless!
...
}.instruction() -> Instruction
|
v
Return vec![instruction] (caller signs and sends)
```

⚠️ Potential issue | 🟡 Minor

Add language specifier to fenced code block.

Same lint issue as in decompress-atas.md. The flow diagram should have a language identifier.

Proposed fix
```diff
-```
+```text
 User calls decompress_mint(mint_seed_pubkey, fee_payer, indexer)
```
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)

368-368: Fenced code blocks should have a language specified

(MD040, fenced-code-language)

🤖 Prompt for AI Agents
In `@sdk-libs/compressible-client/decompress-mint.md` around lines 368 - 395, The
fenced code block containing the flow diagram lacks a language specifier; update
the block to use a language tag (e.g., "text") so markdown linting passes.
Locate the diagram around the decompress_mint flow (references: decompress_mint,
derive_cmint_compressed_address, indexer.get_compressed_account,
parse_compressed_mint_data, indexer.get_validity_proof, DecompressMint,
.instruction()) and change the opening fence from ``` to ```text (or another
appropriate language) so the diagram is a properly annotated fenced code block.

Comment on lines +55 to +60
// TODO: consider not supporting yet.
// Fail-fast: with_data=true is not yet supported in macro-generated code
// if with_data {
// msg!("with_data=true is not supported yet");
// return Err(LightSdkError::ConstraintViolation.into());
// }

🛠️ Refactor suggestion | 🟠 Major

Contradictory TODO comment and commented-out code.

The TODO says "consider not supporting yet," but by commenting out the fail-fast block, with_data=true is now actively supported. This creates confusion about the intended behavior:

  1. If with_data=true should be supported, remove the commented-out block and update/remove the TODO.
  2. If it shouldn't be supported yet, uncomment the fail-fast logic.

Leaving this in limbo makes it unclear whether this is intentional enablement or an accidental change.

🔧 Suggested resolution (if supporting with_data=true)
```diff
-    // TODO: consider not supporting yet.
-    // Fail-fast: with_data=true is not yet supported in macro-generated code
-    // if with_data {
-    //     msg!("with_data=true is not supported yet");
-    //     return Err(LightSdkError::ConstraintViolation.into());
-    // }
-
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
```rust
// TODO: consider not supporting yet.
// Fail-fast: with_data=true is not yet supported in macro-generated code
// if with_data {
//     msg!("with_data=true is not supported yet");
//     return Err(LightSdkError::ConstraintViolation.into());
// }
```
🤖 Prompt for AI Agents
In `@sdk-libs/sdk/src/compressible/compress_account_on_init.rs` around lines 55 -
60, The commented-out fail-fast block around the with_data conditional in
compress_account_on_init.rs conflicts with the TODO and creates ambiguity about
whether with_data=true is supported; decide the intended behavior and either (A)
re-enable the fail-fast logic by uncommenting the block that logs
msg!("with_data=true is not supported yet") and returns
LightSdkError::ConstraintViolation to explicitly disallow with_data, or (B)
remove the TODO and the commented block so macro-generated code intentionally
supports with_data=true; update any related comments in the file and ensure the
behavior matches the macro-generated handling of the with_data flag.

Comment on lines +218 to +256
// Add required system accounts for transfer2 instruction
// Light system program is at index 0 in the cpi_accounts slice
all_account_infos.push(
cpi_accounts
.account_infos()
.first()
.ok_or(ProgramError::NotEnoughAccountKeys)?
.clone(),
);
all_account_infos.push(
cpi_accounts
.registered_program_pda()
.map_err(|_| ProgramError::InvalidAccountData)?
.clone(),
);
all_account_infos.push(
cpi_accounts
.account_compression_authority()
.map_err(|_| ProgramError::InvalidAccountData)?
.clone(),
);
all_account_infos.push(
cpi_accounts
.account_compression_program()
.map_err(|_| ProgramError::InvalidAccountData)?
.clone(),
);
all_account_infos.push(
cpi_accounts
.system_program()
.map_err(|_| ProgramError::InvalidAccountData)?
.clone(),
);

// Add CPI context if present
if let Ok(cpi_context) = cpi_accounts.cpi_context() {
all_account_infos.push(cpi_context.clone());
}


🧹 Nitpick | 🔵 Trivial

🧩 Analysis chain

🏁 Script executed:

```shell
# First, find the actual file and read the context
find . -name "decompress_runtime.rs" -type f
```

Repository: Lightprotocol/light-protocol

Length of output: 185


🏁 Script executed:

```shell
# Search for CpiAccounts definition and its accessor methods
rg "impl.*CpiAccounts" --type rust -A 30 | head -200
```

Repository: Lightprotocol/light-protocol

Length of output: 14769


🏁 Script executed:

```shell
# Check for light_system_program method/accessor
rg "fn light_system_program" --type rust -B 2 -A 5
```

Repository: Lightprotocol/light-protocol

Length of output: 5689


🏁 Script executed:

```shell
# Look for CpiAccounts definition more broadly
rg "struct CpiAccounts|pub struct CpiAccounts" --type rust -A 20
```

Repository: Lightprotocol/light-protocol

Length of output: 6772


🏁 Script executed:

```shell
# Check which CpiAccounts version is being used in decompress_runtime.rs
head -50 sdk-libs/token-sdk/src/compressible/decompress_runtime.rs | grep -E "use|import"
```

Repository: Lightprotocol/light-protocol

Length of output: 511


🏁 Script executed:

```shell
# Get more context around the usage in decompress_runtime.rs
sed -n '210,260p' sdk-libs/token-sdk/src/compressible/decompress_runtime.rs
```

Repository: Lightprotocol/light-protocol

Length of output: 1788


🏁 Script executed:

# Check if there's a v2 implementation with light_system_program accessor
rg "pub fn light_system_program" sdk-libs/sdk-types/src/cpi_accounts/v2.rs -A 5

Repository: Lightprotocol/light-protocol

Length of output: 54


🏁 Script executed:

# Get the full v2 CpiAccounts implementation to see all available methods
sed -n '/impl.*CpiAccounts.*T.*Clone.*for CpiAccounts/,/^}/p' sdk-libs/sdk-types/src/cpi_accounts/v2.rs | head -200

Repository: Lightprotocol/light-protocol

Length of output: 54


🏁 Script executed:

# Search for all public methods on CpiAccounts in v2
rg "pub fn" sdk-libs/sdk-types/src/cpi_accounts/v2.rs -A 2

Repository: Lightprotocol/light-protocol

Length of output: 3083


🏁 Script executed:

# Check the account_infos method specifically in v2
rg "fn account_infos|fn registered_program_pda|fn account_compression" sdk-libs/sdk-types/src/cpi_accounts/v2.rs -B 1 -A 5

Repository: Lightprotocol/light-protocol

Length of output: 1250


🏁 Script executed:

# Check what CompressionCpiAccountIndex enum contains in v2
rg "enum CompressionCpiAccountIndex" sdk-libs/sdk-types/src/cpi_accounts/v2.rs -A 20

Repository: Lightprotocol/light-protocol

Length of output: 976


🏁 Script executed:

# Verify the v1 has light_system_program but check if it's different from v2's authority
rg "enum CompressionCpiAccountIndex" sdk-libs/sdk-types/src/cpi_accounts/v1.rs -A 20

Repository: Lightprotocol/light-protocol

Length of output: 576


🏁 Script executed:

# Verify that v2 is what's actually being used and understand the account ordering
cat sdk-libs/sdk-types/src/cpi_accounts/v2.rs | grep -A 50 "pub fn authority"

Repository: Lightprotocol/light-protocol

Length of output: 2030


🏁 Script executed:

# Double-check: is there a light_system_program accessor anywhere in v2?
rg "light_system_program" sdk-libs/sdk-types/src/cpi_accounts/v2.rs

Repository: Lightprotocol/light-protocol

Length of output: 54


🏁 Script executed:

# Verify the actual code once more - what methods does the code use from cpi_accounts?
sed -n '210,260p' sdk-libs/token-sdk/src/compressible/decompress_runtime.rs

Repository: Lightprotocol/light-protocol

Length of output: 1788


🏁 Script executed:

# Check the documentation/comments in CpiAccounts::new() to understand the account ordering guarantee
sed -n '1,100p' sdk-libs/sdk-types/src/cpi_accounts/v2.rs | grep -A 20 "pub fn new"

Repository: Lightprotocol/light-protocol

Length of output: 938


🏁 Script executed:

# Final check: look at the actual order in v2 to confirm what .first() returns
sed -n '1,50p' sdk-libs/sdk-types/src/cpi_accounts/v2.rs

Repository: Lightprotocol/light-protocol

Length of output: 1811


🏁 Script executed:

# Get the full enum definition with indices
cat sdk-libs/sdk-types/src/cpi_accounts/v2.rs | grep -A 15 "pub enum CompressionCpiAccountIndex"

Repository: Lightprotocol/light-protocol

Length of output: 825


Use authority() accessor instead of .account_infos().first() for clarity.

The code correctly retrieves the Authority account at index 0, but using .account_infos().first() obscures the intent. The v2 CpiAccounts API provides an authority() accessor method that's clearer and safer:

-    // Light system program is at index 0 in the cpi_accounts slice
     all_account_infos.push(
         cpi_accounts
-            .account_infos()
-            .first()
-            .ok_or(ProgramError::NotEnoughAccountKeys)?
+            .authority()
+            .map_err(|_| ProgramError::InvalidAccountData)?
             .clone(),
     );

Note: The account at index 0 is the CPI authority (used to invoke the light system program), not the light system program itself. The comment in the current code is misleading.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// Add required system accounts for transfer2 instruction
// Light system program is at index 0 in the cpi_accounts slice
all_account_infos.push(
cpi_accounts
.account_infos()
.first()
.ok_or(ProgramError::NotEnoughAccountKeys)?
.clone(),
);
all_account_infos.push(
cpi_accounts
.registered_program_pda()
.map_err(|_| ProgramError::InvalidAccountData)?
.clone(),
);
all_account_infos.push(
cpi_accounts
.account_compression_authority()
.map_err(|_| ProgramError::InvalidAccountData)?
.clone(),
);
all_account_infos.push(
cpi_accounts
.account_compression_program()
.map_err(|_| ProgramError::InvalidAccountData)?
.clone(),
);
all_account_infos.push(
cpi_accounts
.system_program()
.map_err(|_| ProgramError::InvalidAccountData)?
.clone(),
);
// Add CPI context if present
if let Ok(cpi_context) = cpi_accounts.cpi_context() {
all_account_infos.push(cpi_context.clone());
}
// Add required system accounts for transfer2 instruction
all_account_infos.push(
cpi_accounts
.authority()
.map_err(|_| ProgramError::InvalidAccountData)?
.clone(),
);
all_account_infos.push(
cpi_accounts
.registered_program_pda()
.map_err(|_| ProgramError::InvalidAccountData)?
.clone(),
);
all_account_infos.push(
cpi_accounts
.account_compression_authority()
.map_err(|_| ProgramError::InvalidAccountData)?
.clone(),
);
all_account_infos.push(
cpi_accounts
.account_compression_program()
.map_err(|_| ProgramError::InvalidAccountData)?
.clone(),
);
all_account_infos.push(
cpi_accounts
.system_program()
.map_err(|_| ProgramError::InvalidAccountData)?
.clone(),
);
// Add CPI context if present
if let Ok(cpi_context) = cpi_accounts.cpi_context() {
all_account_infos.push(cpi_context.clone());
}
🤖 Prompt for AI Agents
In `@sdk-libs/token-sdk/src/compressible/decompress_runtime.rs` around lines 218 -
256, Replace the use of cpi_accounts.account_infos().first() with the explicit
cpi_accounts.authority() accessor to make it clear we are retrieving the CPI
authority account; push cpi_accounts.authority().map_err(|_|
ProgramError::InvalidAccountData)?.clone() into all_account_infos instead, and
update the preceding comment to state that index 0 is the CPI authority (used to
invoke the light system program) rather than the light system program itself;
keep the other pushes (registered_program_pda(),
account_compression_authority(), account_compression_program(),
system_program()) and the cpi_context() handling unchanged.

Comment on lines +199 to +237
/// Invoke CPI with PDA signing for program-owned accounts.
///
/// Seeds are used for both signing AND deriving the compress_to address.
pub fn invoke_signed(self, seeds: &[&[u8]]) -> Result<(), ProgramError> {
let defaults = CompressibleParams::default();

// Build CompressToPubkey from signer seeds
let bump = seeds.last().and_then(|s| s.first()).copied().unwrap_or(0);

let seed_vecs: Vec<Vec<u8>> = seeds
.iter()
.take(seeds.len().saturating_sub(1))
.map(|s| s.to_vec())
.collect();

let compress_to = CompressToPubkey {
bump,
program_id: self.program_id.to_bytes(),
seeds: seed_vecs,
};

let cpi = LegacyCreateTokenAccountCpi {
payer: self.base.payer,
account: self.base.account,
mint: self.base.mint,
owner: self.base.owner,
compressible: CompressibleParamsCpi {
compressible_config: self.config,
rent_sponsor: self.sponsor,
system_program: self.system_program,
pre_pay_num_epochs: defaults.pre_pay_num_epochs,
lamports_per_write: defaults.lamports_per_write,
compress_to_account_pubkey: Some(compress_to),
token_account_version: defaults.token_account_version,
compression_only: defaults.compression_only,
},
};
cpi.invoke_signed(&[seeds])
}

⚠️ Potential issue | 🟡 Minor


Validate signer seeds before deriving bump/compress_to.

Empty or malformed seeds silently produce bump = 0 and an invalid PDA, causing confusing CPI failures downstream when CompressToPubkey.check_seeds() derives the wrong address. Return an explicit error when seeds are empty or the bump seed isn't exactly 1 byte.

🔧 Suggested guard for seed validation
 pub fn invoke_signed(self, seeds: &[&[u8]]) -> Result<(), ProgramError> {
     let defaults = CompressibleParams::default();

     // Build CompressToPubkey from signer seeds
-    let bump = seeds.last().and_then(|s| s.first()).copied().unwrap_or(0);
+    if seeds.is_empty() || seeds.last().map(|s| s.len() != 1).unwrap_or(true) {
+        return Err(ProgramError::InvalidArgument);
+    }
+    let bump = seeds.last().unwrap()[0];
📝 Committable suggestion


Suggested change
/// Invoke CPI with PDA signing for program-owned accounts.
///
/// Seeds are used for both signing AND deriving the compress_to address.
pub fn invoke_signed(self, seeds: &[&[u8]]) -> Result<(), ProgramError> {
let defaults = CompressibleParams::default();
// Build CompressToPubkey from signer seeds
let bump = seeds.last().and_then(|s| s.first()).copied().unwrap_or(0);
let seed_vecs: Vec<Vec<u8>> = seeds
.iter()
.take(seeds.len().saturating_sub(1))
.map(|s| s.to_vec())
.collect();
let compress_to = CompressToPubkey {
bump,
program_id: self.program_id.to_bytes(),
seeds: seed_vecs,
};
let cpi = LegacyCreateTokenAccountCpi {
payer: self.base.payer,
account: self.base.account,
mint: self.base.mint,
owner: self.base.owner,
compressible: CompressibleParamsCpi {
compressible_config: self.config,
rent_sponsor: self.sponsor,
system_program: self.system_program,
pre_pay_num_epochs: defaults.pre_pay_num_epochs,
lamports_per_write: defaults.lamports_per_write,
compress_to_account_pubkey: Some(compress_to),
token_account_version: defaults.token_account_version,
compression_only: defaults.compression_only,
},
};
cpi.invoke_signed(&[seeds])
}
/// Invoke CPI with PDA signing for program-owned accounts.
///
/// Seeds are used for both signing AND deriving the compress_to address.
pub fn invoke_signed(self, seeds: &[&[u8]]) -> Result<(), ProgramError> {
let defaults = CompressibleParams::default();
// Build CompressToPubkey from signer seeds
if seeds.is_empty() || seeds.last().map(|s| s.len() != 1).unwrap_or(true) {
return Err(ProgramError::InvalidArgument);
}
let bump = seeds.last().unwrap()[0];
let seed_vecs: Vec<Vec<u8>> = seeds
.iter()
.take(seeds.len().saturating_sub(1))
.map(|s| s.to_vec())
.collect();
let compress_to = CompressToPubkey {
bump,
program_id: self.program_id.to_bytes(),
seeds: seed_vecs,
};
let cpi = LegacyCreateTokenAccountCpi {
payer: self.base.payer,
account: self.base.account,
mint: self.base.mint,
owner: self.base.owner,
compressible: CompressibleParamsCpi {
compressible_config: self.config,
rent_sponsor: self.sponsor,
system_program: self.system_program,
pre_pay_num_epochs: defaults.pre_pay_num_epochs,
lamports_per_write: defaults.lamports_per_write,
compress_to_account_pubkey: Some(compress_to),
token_account_version: defaults.token_account_version,
compression_only: defaults.compression_only,
},
};
cpi.invoke_signed(&[seeds])
}
🤖 Prompt for AI Agents
In `@sdk-libs/token-sdk/src/token/create.rs` around lines 199 - 237, The
invoke_signed method currently derives bump from seeds.last() without
validation, allowing empty or malformed seeds to produce bump=0 and invalid
PDAs; update invoke_signed to first verify seeds is non-empty and that the last
seed exists and has length exactly 1 (the bump byte), returning an explicit
ProgramError (e.g., InvalidArgument) on failure, then extract the bump byte and
build seed_vecs from all preceding seeds before constructing CompressToPubkey;
reference the seeds variable, bump, invoke_signed, and CompressToPubkey when
applying this guard.

Comment on lines +271 to +302
pub fn instruction(self) -> Result<Instruction, ProgramError> {
// Derive CMint PDA
let (cmint_pda, _cmint_bump) = crate::token::find_mint_address(&self.mint_seed_pubkey);

// Build DecompressMintAction
let action = DecompressMintAction {
rent_payment: self.rent_payment,
write_top_up: self.write_top_up,
};

// Build instruction data with CPI context
let instruction_data = MintActionCompressedInstructionData::new(
self.compressed_mint_with_context,
self.proof.0,
)
.with_decompress_mint(action)
.with_cpi_context(self.cpi_context.clone());

// Build account metas with compressible CMint and CPI context
// Use provided config/rent_sponsor instead of hardcoded defaults
let mut meta_config = MintActionMetaConfig::new(
self.payer,
self.authority,
self.state_tree,
self.input_queue,
self.output_queue,
)
.with_compressible_mint(cmint_pda, self.compressible_config, self.rent_sponsor);

meta_config.cpi_context = Some(self.cpi_context_pubkey);

let account_metas = meta_config.to_account_metas();

⚠️ Potential issue | 🟠 Major


Add support for custom CPI authority and mint_seed account in MintActionMetaConfig

The DecompressCMintWithCpiContext::instruction() method builds account metas via MintActionMetaConfig, but the resulting meta order doesn't match build_account_infos(), causing CPI failures:

  • Position 2: Metas are missing mint_seed (config is initialized with mint_signer: None), but build_account_infos() includes it.
  • Position 8: Metas use the default cpi_authority_pda, but build_account_infos() explicitly requires ctoken_cpi_authority (a distinct account). The code comment confirms this is intentional: "Use ctoken's CPI authority for the CPI, not the calling program's authority."

MintActionMetaConfig lacks methods to set a custom CPI authority or include mint_seed. Either extend the config struct with new builder methods (with_mint_seed(), with_cpi_authority()) and update to_account_metas() accordingly, or manually construct account metas in DecompressCMintWithCpiContext::instruction() to match the CPI account order exactly.

Applies to: 388-407

🤖 Prompt for AI Agents
In `@sdk-libs/token-sdk/src/token/decompress_mint.rs` around lines 271 - 302, The
account meta ordering is mismatched because MintActionMetaConfig currently
cannot set a custom mint_seed or CPI authority, causing missing mint_seed and
wrong cpi_authority in DecompressCMintWithCpiContext::instruction(); update
MintActionMetaConfig by adding builder methods with_mint_seed(mint_seed_pubkey)
and with_cpi_authority(ctoken_cpi_authority_pubkey) and modify
to_account_metas() to include mint_seed at the same position as
build_account_infos() and to use the provided ctoken_cpi_authority instead of
the default cpi_authority_pda; alternatively (if you prefer minimal change)
construct the account_metas vector manually inside
DecompressCMintWithCpiContext::instruction() using the same ordering and pubkeys
as build_account_infos(), ensuring mint_seed and ctoken_cpi_authority occupy the
correct slots.

@coderabbitai coderabbitai bot left a comment

Review continued from previous batch...

Comment on lines +703 to +786
#[cfg(test)]
mod tests {
use super::*;

#[test]
fn test_derive_ata() {
let wallet = Pubkey::new_unique();
let mint = Pubkey::new_unique();
let (ata, bump) = derive_token_ata(&wallet, &mint);
assert_ne!(ata, wallet);
assert_ne!(ata, mint);
let _ = bump;
}

#[test]
fn test_ata_interface_is_cold() {
let wallet = Pubkey::new_unique();
let mint = Pubkey::new_unique();
let (ata, bump) = derive_token_ata(&wallet, &mint);

let hot_ata = AtaInterface {
ata,
owner: wallet,
mint,
bump,
is_cold: false,
token_data: TokenData {
mint,
owner: ata,
amount: 100,
delegate: None,
state: AccountState::Initialized,
tlv: None,
},
raw_account: Some(Account::default()),
decompression: None,
};
assert!(!hot_ata.is_cold());
assert!(hot_ata.is_hot());
assert_eq!(hot_ata.amount(), 100);

let none_ata = AtaInterface {
ata,
owner: wallet,
mint,
bump,
is_cold: false,
token_data: TokenData::default(),
raw_account: None,
decompression: None,
};
assert!(!none_ata.is_cold());
assert!(!none_ata.is_hot());
assert!(none_ata.is_none());
}

#[test]
fn test_build_decompress_atas_fast_exit() {
let wallet = Pubkey::new_unique();
let mint = Pubkey::new_unique();
let (ata, bump) = derive_token_ata(&wallet, &mint);

// All hot - should return empty vec
let hot_atas = vec![AtaInterface {
ata,
owner: wallet,
mint,
bump,
is_cold: false,
token_data: TokenData {
mint,
owner: ata,
amount: 50,
delegate: None,
state: AccountState::Initialized,
tlv: None,
},
raw_account: Some(Account::default()),
decompression: None,
}];

let result = build_decompress_atas(&hot_atas, wallet, None).unwrap();
assert!(result.is_empty());
}

🧹 Nitpick | 🔵 Trivial

Unit tests cover basic scenarios.

Tests verify:

  • ATA derivation produces unique address
  • Hot/cold/none state detection on AtaInterface
  • Fast-exit when all accounts are hot

Consider adding a test for parse_token_account_interface to verify SPL unpacking.

🤖 Prompt for AI Agents
In `@sdk-libs/compressible-client/src/decompress_atas.rs` around lines 703 - 786,
Add a unit test that constructs a valid SPL token Account binary blob, calls
parse_token_account_interface with that blob and the derived ATA/mint/owner
values, and asserts the returned AtaInterface fields (token_data.amount, mint,
owner, state, delegate, tlv) match expected unpacked values; locate
parse_token_account_interface and use derive_token_ata to get ata/bump and
TokenData/AccountState/AtaInterface types to validate both successful parsing
and correct handling of raw_account and is_hot/is_none flags.

Comment on lines +346 to +350
// 5. Get validity proof
let proof_result = indexer
.get_validity_proof(vec![compressed_account.hash], vec![], None)
.await?
.value;

⚠️ Potential issue | 🟡 Minor

Comment numbering is off.

There are two step comments labeled "5.": the decompressed-flag check at line 341 and the validity-proof retrieval at line 346. For context, the preceding steps are:

  1. Line 314: Derive addresses
  2. Line 321: Fetch compressed mint
  3. Line 330: Check if data is empty
  4. Line 337: Parse mint data
  5. Line 341: Check decompressed flag

Fix by renumbering from line 346 onward:

📝 Fix comment numbering
-    // 5. Get validity proof
+    // 6. Get validity proof

Also update the subsequent step numbers:

  • Line 352: // 6. Extract tree info → // 7. Extract tree info
  • Line 363: // 7. Build CompressedMintWithContext → // 8. Build CompressedMintWithContext
  • Line 377: // 8. Build DecompressMint instruction → // 9. Build DecompressMint instruction

🤖 Prompt for AI Agents
In `@sdk-libs/compressible-client/src/decompress_mint.rs` around lines 346 - 350,
There are duplicate "// 5." comments starting at the decompressed flag check and
the validity proof retrieval; renumber the inline step comments so they are
sequential from the decompressed flag check onward: keep the decompressed-flag
comment as "// 5. Check decompressed flag" (where the code checks
compressed_account.decompressed), change the comment before
indexer.get_validity_proof(vec![compressed_account.hash], ...) from "// 5. Get
validity proof" to "// 6. Get validity proof", update the comment before
extracting tree info (the one referencing tree info) from "// 6. Extract tree
info" to "// 7. Extract tree info", change the comment before building
CompressedMintWithContext from "// 7. Build CompressedMintWithContext" to "// 8.
Build CompressedMintWithContext", and change the comment before building
DecompressMint instruction from "// 8. Build DecompressMint instruction" to "//
9. Build DecompressMint instruction" so all step numbers are sequential.

Comment on lines +592 to +617
let cold_accounts: Vec<_> = accounts
.into_iter()
.filter(|a| a.account_interface.is_cold)
.collect();

if cold_accounts.is_empty() {
return Ok(None);
}

// Extract pubkeys and (CompressedAccount, variant) pairs
let decompressed_account_addresses: Vec<Pubkey> = cold_accounts
.iter()
.map(|a| a.account_interface.pubkey)
.collect();

let compressed_accounts: Vec<(CompressedAccount, V)> = cold_accounts
.into_iter()
.map(|a| {
let compressed_account = a
.account_interface
.decompression_context
.expect("Cold account must have decompression context")
.compressed_account;
(compressed_account, a.variant)
})
.collect();

🧹 Nitpick | 🔵 Trivial

Cold account filtering with expect() on decompression_context.

Line 613's .expect("Cold account must have decompression context") is safe because the filter on line 594 only includes accounts where is_cold is true, and by the AccountInterface::cold() constructor, is_cold: true always implies decompression_context: Some(...). However, if someone constructs AccountInterface manually with inconsistent state, this panics. Consider using ok_or instead for defensive coding.

🛡️ Defensive alternative
         let compressed_accounts: Vec<(CompressedAccount, V)> = cold_accounts
             .into_iter()
-            .map(|a| {
-                let compressed_account = a
-                    .account_interface
-                    .decompression_context
-                    .expect("Cold account must have decompression context")
-                    .compressed_account;
-                (compressed_account, a.variant)
-            })
+            .filter_map(|a| {
+                let ctx = a.account_interface.decompression_context?;
+                Some((ctx.compressed_account, a.variant))
+            })
             .collect();
🤖 Prompt for AI Agents
In `@sdk-libs/compressible-client/src/lib.rs` around lines 592 - 617, The code
currently calls .expect(...) on account_interface.decompression_context when
building compressed_accounts, which can panic if someone constructed an
inconsistent AccountInterface; replace the panic with defensive error handling
by mapping the Option to a Result (e.g., using ok_or or ok_or_else with a
descriptive error) and propagate that error out of the enclosing function
instead of panicking. Update the compressed_accounts construction (the closure
mapping to (CompressedAccount, V)) to return a Result<(CompressedAccount, V), _>
by extracting compressed_account via
account_interface.decompression_context.ok_or(...)? (or map_err) and adjust the
surrounding function signature/flow to propagate the error, ensuring references
to cold_accounts, CompressedAccount, V, and
account_interface.decompression_context are the points of change.
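The `ok_or` + `?` propagation the prompt describes can be sketched standalone; all types here (`CompressedAccount`, `AccountInterface`, `ColdAccount`, `ClientError`) are simplified hypothetical stand-ins for the SDK types, not the real definitions:

```rust
// Hypothetical simplified stand-ins for the compressible-client types.
#[derive(Debug, PartialEq)]
struct CompressedAccount(u64);

struct AccountInterface {
    decompression_context: Option<CompressedAccount>,
}

struct ColdAccount<V> {
    account_interface: AccountInterface,
    variant: V,
}

#[derive(Debug, PartialEq)]
enum ClientError {
    MissingDecompressionContext,
}

// Propagate a descriptive error with `?` instead of panicking when a
// cold account lacks its decompression context; `collect` gathers the
// per-item Results into a single Result<Vec<_>, _>.
fn collect_compressed<V>(
    cold_accounts: Vec<ColdAccount<V>>,
) -> Result<Vec<(CompressedAccount, V)>, ClientError> {
    cold_accounts
        .into_iter()
        .map(|a| {
            let ctx = a
                .account_interface
                .decompression_context
                .ok_or(ClientError::MissingDecompressionContext)?;
            Ok((ctx, a.variant))
        })
        .collect()
}
```

Unlike the `filter_map` variant, this surfaces an inconsistent account as an error rather than silently dropping it.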

Comment on lines +693 to +731
/// Get ctx field names from classified seeds
pub fn get_ctx_fields(seeds: &[ClassifiedSeed]) -> Vec<Ident> {
let mut fields = Vec::new();
for seed in seeds {
match seed {
ClassifiedSeed::CtxAccount(ident) => {
if !fields.iter().any(|f: &Ident| f == ident) {
fields.push(ident.clone());
}
}
ClassifiedSeed::FunctionCall { ctx_args, .. } => {
for arg in ctx_args {
if !fields.iter().any(|f: &Ident| f == arg) {
fields.push(arg.clone());
}
}
}
_ => {}
}
}
fields
}

/// Get data field names from classified seeds
pub fn get_data_fields(seeds: &[ClassifiedSeed]) -> Vec<(Ident, Option<Ident>)> {
let mut fields = Vec::new();
for seed in seeds {
if let ClassifiedSeed::DataField {
field_name,
conversion,
} = seed
{
if !fields.iter().any(|(f, _): &(Ident, _)| f == field_name) {
fields.push((field_name.clone(), conversion.clone()));
}
}
}
fields
}

🧹 Nitpick | 🔵 Trivial

get_ctx_fields and get_data_fields are unused.

Same situation as generate_seed_derivation. These helper functions extract field identifiers from classified seeds but aren't consumed yet.

🧹 Suggested: Add dead_code allowance with documentation
+/// Get ctx field names from classified seeds.
+/// 
+/// NOTE: Currently unused but will be needed for generating seed derivation
+/// code in the variant enum macro.
+#[allow(dead_code)]
 pub fn get_ctx_fields(seeds: &[ClassifiedSeed]) -> Vec<Ident> {
🧰 Tools
🪛 GitHub Check: Forester e2e test

[warning] 694-694:
function get_ctx_fields is never used

🤖 Prompt for AI Agents
In `@sdk-libs/macros/src/compressible/anchor_seeds.rs` around lines 693 - 731, Add
an explicit dead-code allowance and brief doc comment to both helper functions
so the unused lint is suppressed until they're consumed: annotate get_ctx_fields
and get_data_fields with #[allow(dead_code)] and add a short /// comment
describing their purpose (e.g., "Extracts ctx/data field idents from
ClassifiedSeed for future seed derivation code") so maintainers know why they
exist.

Comment on lines +267 to +305
impl<'info> CreateCTokenAtaRentFreeCpi<'info> {
/// Invoke CPI.
pub fn invoke(self) -> Result<(), ProgramError> {
InternalCreateAtaCpi {
owner: self.owner,
mint: self.mint,
payer: self.payer,
associated_token_account: self.ata,
system_program: self.system_program.clone(),
bump: self.bump,
compressible: CompressibleParamsCpi::new_ata(
self.config,
self.sponsor,
self.system_program,
),
idempotent: self.idempotent,
}
.invoke()
}

/// Invoke CPI with signer seeds (when caller needs to sign for another account).
pub fn invoke_signed(self, signer_seeds: &[&[&[u8]]]) -> Result<(), ProgramError> {
InternalCreateAtaCpi {
owner: self.owner,
mint: self.mint,
payer: self.payer,
associated_token_account: self.ata,
system_program: self.system_program.clone(),
bump: self.bump,
compressible: CompressibleParamsCpi::new_ata(
self.config,
self.sponsor,
self.system_program,
),
idempotent: self.idempotent,
}
.invoke_signed(signer_seeds)
}
}

🧹 Nitpick | 🔵 Trivial

system_program is cloned and then immediately moved—potential inefficiency.

In both invoke() (lines 275, 280) and invoke_signed() (lines 294, 299), self.system_program is cloned to pass to InternalCreateAtaCpi, then the original is moved into CompressibleParamsCpi::new_ata():

system_program: self.system_program.clone(),  // Clone here
// ...
compressible: CompressibleParamsCpi::new_ata(
    self.config,
    self.sponsor,
    self.system_program,  // Original moved here
),

This works, and since AccountInfo is cheap to clone the cost is minor, but the clone is still avoidable. The semantic intent is also unclear: is the same system_program needed in both places?

♻️ Consider reordering to avoid clone
 pub fn invoke(self) -> Result<(), ProgramError> {
+    let compressible = CompressibleParamsCpi::new_ata(
+        self.config,
+        self.sponsor,
+        self.system_program.clone(),
+    );
     InternalCreateAtaCpi {
         owner: self.owner,
         mint: self.mint,
         payer: self.payer,
         associated_token_account: self.ata,
-        system_program: self.system_program.clone(),
+        system_program: self.system_program,
         bump: self.bump,
-        compressible: CompressibleParamsCpi::new_ata(
-            self.config,
-            self.sponsor,
-            self.system_program,
-        ),
+        compressible,
         idempotent: self.idempotent,
     }
     .invoke()
 }
🤖 Prompt for AI Agents
In `@sdk-libs/token-sdk/src/token/create_ata.rs` around lines 267 - 305, The
current code clones self.system_program and also moves it into
CompressibleParamsCpi::new_ata(), causing a redundant clone; fix by first
binding let system_program = self.system_program; then use that local: set
InternalCreateAtaCpi.system_program to system_program.clone() and pass the moved
system_program into CompressibleParamsCpi::new_ata(..., system_program). Apply
the same change inside CreateCTokenAtaRentFreeCpi::invoke and ::invoke_signed so
InternalCreateAtaCpi and CompressibleParamsCpi::new_ata use the local binding
(referencing InternalCreateAtaCpi,
CreateCTokenAtaRentFreeCpi::invoke/invoke_signed, and
CompressibleParamsCpi::new_ata).
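A minimal standalone sketch of the bind-first reordering; `Handle`, `Params`, and `Cpi` are hypothetical stand-ins (`Handle` plays the role of the cheap-to-clone AccountInfo):

```rust
// Hypothetical stand-in for AccountInfo: cheap to clone, but a
// redundant clone still obscures intent.
#[derive(Clone, Debug, PartialEq)]
struct Handle(u32);

struct Params {
    handle: Handle,
}

struct Cpi {
    handle: Handle,
    params: Params,
}

// Build the inner params first with one explicit clone, then move the
// original into the outer struct: one clone total, intent explicit.
fn build(handle: Handle) -> Cpi {
    let params = Params {
        handle: handle.clone(),
    };
    Cpi { handle, params }
}
```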

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 39

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (7)
sdk-libs/sdk/src/compressible/compress_account.rs (1)

57-61: Unchecked slice access can panic on empty address_space.

Line 59 accesses address_space[0] without validating the slice is non-empty. If a caller passes an empty slice, this will panic rather than returning a graceful error.

🐛 Proposed fix using safe access
-    let derived_c_pda = derive_address(
-        &account_info.key.to_bytes(),
-        &address_space[0].to_bytes(),
-        &program_id.to_bytes(),
-    );
+    let address_tree = address_space
+        .first()
+        .ok_or(LightSdkError::ConstraintViolation)?;
+    let derived_c_pda = derive_address(
+        &account_info.key.to_bytes(),
+        &address_tree.to_bytes(),
+        &program_id.to_bytes(),
+    );
sdk-libs/token-sdk/src/compressed_token/v2/decompress_full.rs (1)

201-248: LGTM - Clean ATA-aware packing logic.

The owner_is_signer = !is_ata inversion on line 221 correctly drives the signer configuration. The function properly propagates the is_ata flag to the returned struct.

Minor documentation suggestion: Consider adding to the doc comment that when is_ata=true, the caller is responsible for ensuring the wallet pubkey has already been added as a signer to packed_accounts before calling this function. The current comment mentions "Wallet owner is already added as signer by the caller" but doesn't explicitly state this is a precondition the caller must satisfy.

📝 Optional documentation enhancement
 /// Pack accounts for decompress with ATA support.
 ///
 /// For ATA decompress (is_ata=true):
 /// - Owner (ATA pubkey) is added without signer flag (ATA can't sign)
-/// - Wallet owner is already added as signer by the caller
+/// - **Precondition**: Caller must have already added the wallet pubkey as a signer
+///   to `packed_accounts` before calling this function
 ///
 /// For non-ATA decompress:
 /// - Owner is added as signer (normal case)
sdk-libs/macros/src/compressible/traits.rs (2)

64-84: Trait impl relies on panic for missing compression_info – consider returning Result.

The compression_info() and compression_info_mut() methods use .expect() which will panic if compression_info is None. While this is documented as expected behavior ("must be set"), a runtime panic in production can be unforgiving.

If this is intentional (i.e., calling these methods on an uninitialized account is always a programming error), consider adding a brief doc comment to the trait methods in the generated code to make this contract explicit.

💡 Optional: Add doc comments to generated trait methods
 impl light_sdk::compressible::HasCompressionInfo for `#struct_name` {
+    /// Returns a reference to the compression info.
+    /// # Panics
+    /// Panics if `compression_info` is `None`.
     fn compression_info(&self) -> &light_sdk::compressible::CompressionInfo {
         self.compression_info.as_ref().expect("compression_info must be set")
     }

174-178: try_to_vec() in size calculation could fail silently with .expect().

The size calculation uses try_to_vec().expect("Failed to serialize") which will panic if serialization fails. For a size() method, this might be unexpected behavior.

Consider whether returning a Result<usize, _> or using a more graceful fallback would be appropriate. That said, if the struct is already validated as serializable elsewhere, this may be acceptable.

sdk-libs/token-sdk/src/compressible/decompress_runtime.rs (1)

107-189: ATA-only flow never triggers; seedless tokens still invoke_signed.

token_signers_seed_groups is pushed for every token, so the Line 260 check is never true when token_accounts is non-empty. If any token has no program-owned seeds (ATA-style), we still call invoke_signed with an empty seed group and is_ata stays false, which contradicts the intent and can break signature derivation / ATA handling.

Consider deriving is_ata (e.g., from empty seeds) and only collecting signer groups for program-owned tokens so the “no signer” path can actually trigger.

💡 Suggested fix (seedless/ATA-aware branching)
-        let decompress_index = crate::compressed_token::decompress_full::DecompressFullIndices {
+        let is_ata = ctoken_signer_seeds.is_empty();
+        let decompress_index = crate::compressed_token::decompress_full::DecompressFullIndices {
             source,
             destination_index: owner_index,
             tlv: None,
-            is_ata: false, // Program-owned token: owner is a signer (via CPI seeds)
+            is_ata,
         };
         token_decompress_indices.push(decompress_index);
-        token_signers_seed_groups.push(ctoken_signer_seeds);
+        if !is_ata {
+            token_signers_seed_groups.push(ctoken_signer_seeds);
+        }

Also applies to: 259-275

sdk-libs/macros/src/lib.rs (1)

288-457: Duplicate RentFree derive macro breaks compilation.

Two #[proc_macro_derive(RentFree...)] entries collide (CI already reports this). Rename one derive and update docs/usages accordingly.

🛠️ One possible fix (rename the Accounts derive)
-/// #[derive(Accounts, RentFree)]
+/// #[derive(Accounts, RentFreeAccounts)]
 ...
-/// Generates `RentFree` trait implementation for rent-free accounts and light-mints.
+/// Generates `RentFreeAccounts` trait implementation for rent-free accounts and light-mints.
 ...
-#[proc_macro_derive(
-    RentFree,
-    attributes(rentfree, rentfree_token, light_mint, instruction)
-)]
-pub fn rent_free_derive(input: TokenStream) -> TokenStream {
+#[proc_macro_derive(
+    RentFreeAccounts,
+    attributes(rentfree, rentfree_token, light_mint, instruction)
+)]
+pub fn rent_free_accounts_derive(input: TokenStream) -> TokenStream {
     let input = parse_macro_input!(input as DeriveInput);
     into_token_stream(finalize::derive_light_finalize(input))
 }
sdk-libs/compressible-client/src/lib.rs (1)

413-429: Critical CPI context ordering logic - well-documented.

The comment on lines 413-415 explains a critical detail: when both PDAs and tokens exist, the CPI context must come from the first TOKEN's context, not the first PDA's, because tokens execute last and consume the context.

The expect on line 422 is safe given the has_tokens guard, but consider using a more descriptive error:

Optional: More descriptive unwrap
-            let first_token_cpi_context = compressed_accounts
-                .iter()
-                .find(|(acc, _)| acc.owner == LIGHT_TOKEN_PROGRAM_ID.into())
-                .map(|(acc, _)| acc.tree_info.cpi_context.unwrap())
-                .expect("has_tokens is true so there must be a token");
+            let first_token_cpi_context = compressed_accounts
+                .iter()
+                .find(|(acc, _)| acc.owner == LIGHT_TOKEN_PROGRAM_ID.into())
+                .and_then(|(acc, _)| acc.tree_info.cpi_context)
+                .ok_or("Token account found but missing CPI context")?;
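The `find` + `and_then` + `ok_or` chain from the optional suggestion can be illustrated in isolation; `Account` and its fields are hypothetical simplifications of the real compressed-account shape:

```rust
// Hypothetical simplified account: only the two fields at issue.
struct Account {
    owner: u8,
    cpi_context: Option<u32>,
}

// `find` + `and_then` flattens the nested Option access, and `ok_or`
// turns a missing context into an error instead of a panic.
fn first_token_cpi_context(
    accounts: &[Account],
    token_owner: u8,
) -> Result<u32, &'static str> {
    accounts
        .iter()
        .find(|a| a.owner == token_owner)
        .and_then(|a| a.cpi_context)
        .ok_or("no token account with a CPI context")
}
```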
🤖 Fix all issues with AI agents
In `@program-libs/compressible/Cargo.toml`:
- Line 35: The compressible crate currently depends on light-sdk-types which
creates an upward dependency; extract the instruction types (e.g.,
PackedAddressTreeInfo and any sibling structs/enums in
sdk-libs/sdk-types/src/instruction/) into a new or existing program-libs crate
(name it something like program-types), update all source imports in program and
sdk code to use program-types::instruction::PackedAddressTreeInfo, remove the
light-sdk-types dependency from program-libs/compressible/Cargo.toml, add the
new program-types crate as a dependency in compressible's Cargo.toml, and update
any use/path references across the repo to point to the new crate so no
program-libs crate depends on sdk-libs.

In `@programs/compressed-token/program/docs/ACCOUNTS.md`:
- Around line 46-58: Update the examples to import and use the real struct name
`Token` instead of the non-existent `CToken`: replace `use
light_token_interface::state::token::CToken;` with `use
light_token_interface::state::token::Token;` in both the zero-copy examples that
call `ZeroCopyAt`/`ZeroCopyAtMut` (where you destructure to `(token, _)` / `(mut
token, _)`) and in the borsh example that uses `BorshDeserialize`, ensuring all
references to `CToken` are renamed to `Token`.

In `@programs/compressed-token/program/docs/ctoken/CREATE.md`:
- Line 91: Replace the redundant phrase "Validate compression_only is only set
for mints with restricted extensions" in the CREATE.md docs with a clearer
rewording such as "Validate compression_only is set only for mints with
restricted extensions" (or "Validate compression_only is set for mints that have
restricted extensions") so the sentence uses "only" once and reads clearly;
update the line in the 4.5 validation bullet accordingly.
- Around line 14-16: Update the documentation to reference the correct account
struct and path: replace the incorrect `CToken`/`state/ctoken/ctoken_struct.rs`
reference with the actual struct `Token` and its file
`program-libs/token-interface/src/state/token/token_struct.rs`; also note that
`Token` is re-exported via `state::token::*` (see
`program-libs/token-interface/src/state/token/mod.rs`) so any mentions or
namespaces using `ctoken` should be changed to the `token` module and `Token`
struct.

In
`@programs/compressed-token/program/src/compressed_token/transfer2/compression/ctoken/compress_and_close.rs`:
- Around line 22-26: The import path is incorrect for CloseTokenAccountAccounts;
replace the erroneous import of
crate::token::close::accounts::CloseTokenAccountAccounts with
crate::ctoken::close::accounts::CloseTokenAccountAccounts (keep the
#[cfg(target_os = "solana")] guard and the same use statement structure) so the
module correctly references the ctoken close accounts module used by this file.

In `@rebase.md`:
- Around line 11-16: The rebase notes contain no-op/ambiguous rename bullets
(e.g., "light_token_sdk → light_token_sdk" and other duplicated entries) which
should be corrected or removed; update the bullets referencing ctoken-sdk,
token-sdk, light_token_sdk, ctoken → token module, sdk-ctoken-test →
sdk-light-token-test, light_token_interface, and type names CToken → Token so
each entry shows an actual, meaningful change (or drop the bullet if there is no
change), ensuring consistent naming across the document and removing
duplicate/typo entries that cause confusion.

In `@sdk-libs/compressible-client/decompress_ux.md`:
- Around line 260-268: Rewrite the error-handling bullet fragments into full
sentences by adding subjects and verbs: for from_seeds, change
"compressed_data() might be None (hot account passed to cold-only method)" to
something like "The call to compressed_data() may return None when a hot account
is passed to a cold-only method," and change "into_variant() can fail (seed
verification, deserialization)" to "into_variant() can fail due to seed
verification errors or deserialization failures." For from_ctoken, change
"compressed_data() might be None (hot account passed)" to "The call to
compressed_data() may return None if a hot account is passed," and change
"TokenData::try_from_slice() can fail (malformed data)" to
"TokenData::try_from_slice() can fail when the token data is malformed." Ensure
you update the bullets containing from_seeds, from_ctoken, compressed_data(),
into_variant(), and TokenData::try_from_slice() accordingly.

In `@sdk-libs/compressible-client/decompress-atas.md`:
- Around line 443-465: The fenced flow diagram in decompress-atas.md is missing
a language tag; update the code fence that wraps the flow (the block showing the
call chain involving decompress_atas_idempotent, derive_ctoken_ata,
indexer.get_compressed_token_accounts_by_owner, indexer.get_validity_proof,
CreateAssociatedCTokenAccount::idempotent and
decompress_full_ctoken_accounts_with_indices) to include a language identifier
(e.g., ```text) so markdownlint MD040 is satisfied.

In `@sdk-libs/compressible-client/decompress-mint.md`:
- Around line 458-463: Update the test plan to match the documented idempotent
behavior: change the "Already decompress" test to expect an empty vector result
instead of an AlreadyDecompressed error when calling decompress_mint (or the
documented decompression API) on an already-decompressed mint; keep the other
tests (MintNotFound and Custom address tree) as-is and ensure references to the
AlreadyDecompressed symbol are removed or replaced with the empty-vec
expectation in the Test Cases section.
- Around line 368-395: The fenced flow diagram block in decompress-mint.md lacks
a language tag (MD040); update the opening triple-backtick that precedes the
diagram (the block starting with "User calls decompress_mint(mint_seed_pubkey,
fee_payer, indexer)") to include a language specifier such as text (e.g.,
```text) so the diagram is properly linted; do not change the diagram contents
or the referenced symbols like decompress_mint, derive_cmint_compressed_address,
indexer.get_compressed_account, parse_compressed_mint_data, or
DecompressMint.instruction().

In `@sdk-libs/compressible-client/DECOMPRESSION.md`:
- Around line 153-157: The comment on TokenData.owner is inconsistent with the
parsing code; update the TokenData struct comment to match actual behavior by
stating that TokenData.owner is populated from spl_account.owner (i.e., the
wallet owner for standard SPL Token accounts) and not the ATA pubkey, and add a
note explaining that for ATAs this field contains the wallet owner rather than
the ATA account pubkey; reference TokenData and the population logic in
decompress_atas.rs where spl_account.owner is used so readers know which
behavior is authoritative.
- Around line 55-62: Update the documentation entry for the "simple" variant to
show the correct signature and pattern: replace the incorrect
`decompress_mint(signer, fee_payer, &indexer)` entry with
`decompress_mint_idempotent(request: DecompressMintRequest, fee_payer,
&indexer)` (or explicitly note that the wrapper accepts a DecompressMintRequest
containing mint_seed_pubkey: Pubkey) so it matches the actual function and the
ATA-pattern used elsewhere; ensure the doc references the DecompressMintRequest
type and that this variant delegates to decompress_mint_idempotent.

In `@sdk-libs/compressible-client/helper.md`:
- Around line 82-87: The example builds a MyInstruction but hides other fields
with a "// ..." placeholder; update the example to either list the remaining
field names (with placeholder values) or add an inline comment enumerating them
so readers know what else MyInstruction requires — specifically show the full
shape being passed to MyInstruction (e.g., include the other required properties
alongside proof.proof, packed.packed_tree_infos.address_trees and
packed.output_tree_index) before calling .data() so users can see all fields
expected by MyInstruction.
- Around line 56-57: The example calls derive_address and
derive_cmint_compressed_address without showing their source; update the snippet
to either add the appropriate import statements for those helpers or fully
qualify them (for example light_sdk::derive_address and
light_sdk::derive_cmint_compressed_address) so readers know which crate/module
provides them—ensure references to user_pda, tree, program_id, and mint_signer
remain unchanged while adding the import or qualification to the top of the
example.
- Around line 91-97: Add short, concrete examples for each CPI context case to
clarify when to use cpi_context: None vs tree_info.cpi_context; specifically,
for "PDA-only tx" show a minimal example scenario like creating/updating
compressed PDAs without token ops, for "Token-only tx" show a compressed token
transfer/mint example without PDA changes, and for "Mixed PDA + token" show a
transaction that creates a token-related PDA and mints/transfers tokens in the
same tx that requires cpi_context = tree_info.cpi_context (Option<Pubkey>);
reference the symbols cpi_context, PDA-only tx, Token-only tx, Mixed PDA +
token, and tree_info.cpi_context so readers can map examples to the rules.
- Around line 33-34: Clarify the purpose of the pub system_accounts_offset: u8
field by replacing "if needed" with a concise explanation of when it must be set
(e.g., when the program inflates a combined accounts array or when system
accounts are placed at a non-zero index in the flattened layout) and add a short
usage example in the surrounding flow example that shows setting
system_accounts_offset and how it shifts indices when mapping account slots;
reference the symbol system_accounts_offset in the doc text and the flow example
so readers can see both the semantic rule and a minimal example of computing
adjusted account indices.
- Line 43: Clarify the phrase "mixing PDAs+tokens" used in the cpi_context
parameter docs by explaining that it refers to instruction invocations that
include both program-derived addresses (PDAs) and token accounts in the same
transaction/operation (i.e., when accounts passed include PDAs and SPL token
accounts that require token program authority), and either add a one-line
definition next to `cpi_context` (referencing `tree_info.cpi_context`) or link
to the existing "When to use CPI context" section for a fuller explanation so
users can distinguish PDA-only, token-only, and mixed scenarios.
- Line 75: The docs use two different names for the same structure which is
confusing—normalize the naming so examples and prose match: replace occurrences
of output_tree.cpi_context with tree_info.cpi_context (or vice versa) across the
function doc and example so both refer to the same symbol (tree_info or
output_tree) and update any descriptive text to state that
tree_info/output_tree.cpi_context holds the CPI context for mixed PDA+token
cases; ensure the chosen name (tree_info or output_tree) is used consistently in
the function signature, example variables, and explanatory sentences.

In `@sdk-libs/compressible-client/src/create_accounts_proof.rs`:
- Around line 18-32: The RpcError returned from the RPC call in
get_create_accounts_proof is not automatically convertible into
CreateAccountsProofError because the Rpc variant lacks a #[from] attribute;
either add #[from] to the Rpc(RpcError) variant in the CreateAccountsProofError
enum or explicitly map the RPC call's error to
CreateAccountsProofError::Rpc(...) inside get_create_accounts_proof (e.g., using
.map_err(CreateAccountsProofError::Rpc)) so the ? operator will compile; locate
the enum CreateAccountsProofError and the function get_create_accounts_proof to
apply the change.
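For illustration, a dependency-free sketch of what the `#[from]` attribute provides; `RpcError`, the enum shape, and `rpc_call` are hypothetical stand-ins, and thiserror's `#[from]` generates an impl equivalent to the manual one here:

```rust
// Hypothetical minimal error types mirroring the review discussion.
#[derive(Debug, PartialEq)]
struct RpcError(String);

#[derive(Debug, PartialEq)]
enum CreateAccountsProofError {
    Rpc(RpcError),
}

// This From impl is what `#[from]` on the Rpc variant would generate;
// it is what makes the `?` operator's implicit conversion compile.
impl From<RpcError> for CreateAccountsProofError {
    fn from(e: RpcError) -> Self {
        CreateAccountsProofError::Rpc(e)
    }
}

// Stand-in for the RPC call made inside get_create_accounts_proof.
fn rpc_call(fail: bool) -> Result<u8, RpcError> {
    if fail {
        Err(RpcError("timeout".into()))
    } else {
        Ok(42)
    }
}

// Without the From impl, `?` below would not compile; with it, the
// RpcError is converted into CreateAccountsProofError automatically.
fn get_proof(fail: bool) -> Result<u8, CreateAccountsProofError> {
    Ok(rpc_call(fail)?)
}
```

The alternative fix, `.map_err(CreateAccountsProofError::Rpc)`, does the same conversion explicitly at the call site.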

In `@sdk-libs/compressible-client/src/decompress_atas.rs`:
- Around line 179-183: The state() accessor clones AccountState unnecessarily;
if the AccountState enum is small and can derive Copy, change AccountState to
derive/implement Copy + Clone and then update state(&self) -> AccountState to
return self.token_data.state directly (remove .clone()); ensure the AccountState
type derives Copy in its definition so returning by value is valid.
- Around line 651-682: The code sets compression_index = 0 for every
CompressedOnly TLV entry which causes DuplicateCompressionIndex errors when
decompressing multiple ATAs in a batch; update the mapping that builds tlv_vec
(where ExtensionInstructionData::CompressedOnly and
CompressedOnlyExtensionInstructionData are constructed) to assign
compression_index from the per-account loop counter (e.g., use the current index
variable of the surrounding loop) so each entry pushed into in_tlv_data has a
unique compression_index; apply the same change in the analogous mapping in
sdk-libs/token-sdk/src/token/decompress.rs.
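A minimal sketch of deriving the index from the loop position via `enumerate`; `CompressedOnlyEntry` is a hypothetical simplification of the real TLV type:

```rust
// Hypothetical simplified TLV entry: only the field at issue is kept.
#[derive(Debug, PartialEq)]
struct CompressedOnlyEntry {
    compression_index: u8,
}

// Take the index from the enumerated position of each account so every
// entry in a batch gets a unique compression_index, avoiding the
// DuplicateCompressionIndex error described above.
fn build_tlv_entries<T>(accounts: &[T]) -> Vec<CompressedOnlyEntry> {
    accounts
        .iter()
        .enumerate()
        .map(|(i, _)| CompressedOnlyEntry {
            compression_index: i as u8,
        })
        .collect()
}
```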

In `@sdk-libs/compressible-client/src/decompress_mint.rs`:
- Around line 221-303: There are two functions named decompress_mint (one taking
&MintInterface and one taking mint_seed_pubkey) which conflicts; rename the
MintInterface-based helper (e.g., to decompress_mint_from_interface or
decompress_mint_with_interface) and update all call sites, docs/comments, and
any internal references to the old name (look for the function with signature
decompress_mint<I: Indexer>(mint: &MintInterface, fee_payer: Pubkey, indexer:
&I) and rename it accordingly), leaving the public API function
decompress_mint(mint_seed_pubkey, fee_payer, indexer) intact.

In `@sdk-libs/compressible-client/src/initialize_config.rs`:
- Around line 81-109: The address_space builder method currently accepts any
Vec<Pubkey> and the config defaults to a single ADDRESS_TREE_V2 entry; update
the builder/serialization path so address_space is validated and normalized to
exactly 4 entries before being serialized (pad with Pubkey::default() or
ADDRESS_TREE_V2 as appropriate and return an error if length > 4), specifically
amend the address_space(self, address_space: Vec<Pubkey>) method and the code in
initialize_compression_config (or the struct's serialize/to_bytes path) to
enforce/normalize a fixed [Pubkey; 4] layout expected on-chain.
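A sketch of the validate-and-pad normalization, assuming default-pubkey padding; `Pubkey` here is a hypothetical byte-array alias, not the real solana type:

```rust
// Hypothetical stand-in for solana_pubkey::Pubkey.
type Pubkey = [u8; 32];

#[derive(Debug, PartialEq)]
enum ConfigError {
    TooManyAddressSpaces,
}

// Validate and pad a user-supplied list into the fixed 4-slot layout
// the on-chain config expects, erroring if more than 4 are given.
fn normalize_address_space(spaces: Vec<Pubkey>) -> Result<[Pubkey; 4], ConfigError> {
    if spaces.len() > 4 {
        return Err(ConfigError::TooManyAddressSpaces);
    }
    let mut out = [[0u8; 32]; 4]; // pad remaining slots with the default pubkey
    for (slot, space) in out.iter_mut().zip(spaces) {
        *slot = space;
    }
    Ok(out)
}
```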

In `@sdk-libs/compressible-client/src/lib.rs`:
- Around line 32-36: The pub re-export list contains a duplicate symbol
"decompress_mint"; remove the second occurrence of decompress_mint from the pub
use statement (the one alongside build_decompress_mint, create_mint_interface,
decompress_mint_idempotent, DecompressMintError, DecompressMintRequest,
MintInterface, MintState, DEFAULT_RENT_PAYMENT, DEFAULT_WRITE_TOP_UP) so each
item (especially decompress_mint) is exported only once and the module compiles
without duplicate re-export warnings/errors.

In `@sdk-libs/compressible-client/wrapper.md`:
- Around line 404-426: Update the "Transaction Batching Considerations" /
"Transaction Batching Rules" section to clarify that the "~200k CU per
decompression" and "~7 decompressions per tx" figures are approximate: add a
concise note inside or directly under the "Compute limits" bullet stating these
values vary with proof complexity, account sizes, and runtime changes, and
recommend measuring per-deployment; ensure the clarification sits next to the
existing "Compute limits" bullets so readers see it's a caveat to those specific
numbers.

In `@sdk-libs/macros/MACRO_REFACTOR_V2.md`:
- Around line 1-641: Fix markdown lint issues: remove stray spaces inside
emphasis markers (e.g., change "ctx.* " to "ctx.*" in the document where
emphasis is used) and add language specifiers to the fenced code blocks that
currently lack them (e.g., the code blocks in the "Seed Extraction Flow", "New
Syntax Specification", "Macro Implementation", generated code examples, and
migration guide sections). Search for the literal emphasis "ctx.* " and the
undirected triple-backtick fences in MACRO_REFACTOR_V2.md (the examples under
headings like "Seed Extraction Flow", "New Syntax Specification", "Macro
Implementation", "Generated Code Examples", and "Migration Guide") and update
each fence to include an appropriate language tag (rust, text, or others) and
remove the extra space inside emphasis markers.

In `@sdk-libs/macros/MACRO_REFACTOR.md`:
- Line 327: Replace the emphasized section titles (e.g., "**Option A: Remaining
Accounts with Typed Indices**") with proper Markdown headings (for example "##
Option A: Remaining Accounts with Typed Indices") to resolve emphasis-as-heading
linter warnings; do the same for the corresponding "Option B" occurrence (the
similar emphasized text around line 346) so both option sections use heading
syntax for better navigation and TOC generation.

In `@sdk-libs/macros/OPTION_A_PLAN.md`:
- Around line 382-390: In sdk-libs/macros/OPTION_A_PLAN.md locate the emphasized
line "**Total estimated time: 5-6 hours**" and replace the emphasis with a
Markdown heading (e.g., add one or more leading '#' characters so it reads like
"## Total estimated time: 5-6 hours") to satisfy MD036; ensure you remove the
surrounding asterisks and keep the exact text "Total estimated time: 5-6 hours".

In `@sdk-libs/macros/SPEC_OPTION_B.md`:
- Around line 73-108: PackedStandardAtaData and PackedTokenData both define
mint_index, causing redundant data; either remove PackedTokenData::mint_index
and use PackedStandardAtaData::mint_index when constructing/serializing token
info, or keep both but enforce equality during deserialization/validation.
Update the structs and all call sites that construct/serialize PackedTokenData
(or implement a custom AnchorDeserialize for PackedStandardAtaData) so
token_data.mint_index is derived from the outer
PackedStandardAtaData::mint_index or validate token_data.mint_index ==
outer.mint_index and return an error if they differ; reference the structs
PackedStandardAtaData and PackedTokenData and adjust
serialization/deserialization paths accordingly.
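The "keep both but enforce equality" option can be sketched like this. The structs are reduced to the one field the check needs; the real `PackedTokenData` / `PackedStandardAtaData` carry more fields and use Borsh/Anchor derives.

```rust
// Minimal model of validating the duplicated mint_index during deserialization.
struct PackedTokenData {
    mint_index: u8,
}

struct PackedStandardAtaData {
    mint_index: u8,
    token_data: PackedTokenData,
}

fn validate_mint_index(ata: &PackedStandardAtaData) -> Result<(), String> {
    if ata.token_data.mint_index != ata.mint_index {
        return Err(format!(
            "mint_index mismatch: outer={} inner={}",
            ata.mint_index, ata.token_data.mint_index
        ));
    }
    Ok(())
}

fn main() {
    let good = PackedStandardAtaData {
        mint_index: 2,
        token_data: PackedTokenData { mint_index: 2 },
    };
    let bad = PackedStandardAtaData {
        mint_index: 2,
        token_data: PackedTokenData { mint_index: 3 },
    };
    assert!(validate_mint_index(&good).is_ok());
    assert!(validate_mint_index(&bad).is_err());
    println!("ok");
}
```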

In `@sdk-libs/macros/src/compressible/anchor_seeds.rs`:
- Around line 115-149: The code currently allows empty seed lists from
extract_anchor_seeds(&field.attrs) for rentfree/rentfree_token handling which
yields invalid PDAs; after each seeds = extract_anchor_seeds(...) call in the
rentfree branch (where you build ExtractedSeedSpec) and the token branch (where
you build ExtractedTokenSpec), check if seeds.is_empty() and return a
compile-time syn error instead of continuing. Use
syn::Error::new_spanned(field_ident, "rentfree/rentfree_token requires at least
one seed") (or similar message) to produce the compile error so the macro fails
fast when no seeds are provided. Ensure you return the Error from the
surrounding function (propagate via the Result) so no empty-seed spec gets
pushed into pda_fields or token_fields.
- Around line 653-685: The ClassifiedSeed::Constant arm in
generate_seed_derivation naively prefixes crate:: to the provided syn::Path
(quote! { crate::#path.as_ref() }), causing crate::crate::FOO for multi-segment
or crate-qualified paths; change it to conditionally prefix only for
single-segment unqualified identifiers: if path.segments.len() == 1 and
path.leading_colon.is_none() then emit quote! { crate::#path.as_ref() } else
emit quote! { #path.as_ref() } so multi-segment paths (e.g. some::BAR or
crate::FOO) are used as-is.
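The conditional-prefix logic can be modeled without pulling in syn/quote. Here `SeedPath` is a stand-in for `syn::Path` (segments plus a leading-colon flag), and the function renders the token stream as a string for illustration.

```rust
// Simplified model of the fix: only single-segment, unqualified identifiers
// get the crate:: prefix; qualified paths are emitted as-is.
struct SeedPath {
    segments: Vec<&'static str>,
    leading_colon: bool,
}

fn render_constant_seed(path: &SeedPath) -> String {
    let joined = if path.leading_colon {
        format!("::{}", path.segments.join("::"))
    } else {
        path.segments.join("::")
    };
    if path.segments.len() == 1 && !path.leading_colon {
        format!("crate::{joined}.as_ref()")
    } else {
        format!("{joined}.as_ref()")
    }
}

fn main() {
    let bare = SeedPath { segments: vec!["FOO"], leading_colon: false };
    let qualified = SeedPath { segments: vec!["crate", "FOO"], leading_colon: false };
    assert_eq!(render_constant_seed(&bare), "crate::FOO.as_ref()");
    // Already-qualified paths no longer get a second crate:: prefix.
    assert_eq!(render_constant_seed(&qualified), "crate::FOO.as_ref()");
    println!("ok");
}
```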

In `@sdk-libs/macros/src/compressible/file_scanner.rs`:
- Around line 68-85: The code currently swallows parse and extraction errors;
update the syn::parse_str(&contents) Err branch to record the parse error
(including file/path context) into the scanner result (e.g., push a descriptive
message into result.errors) instead of returning silently, and likewise change
the try_extract_from_struct handling inside the Item::Struct match: when
try_extract_from_struct returns Err(e) or Ok(None) where appropriate, append a
clear error entry to result.errors (with item/struct name and e) rather than
ignoring it, so all parse/extraction failures are surfaced to the caller; ensure
you reference and update the scanner result struct’s errors collection and keep
existing success path that extends result.pda_specs/result.token_specs.
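Surfacing the failures might look like the sketch below. It assumes a `ScanResult` with an `errors: Vec<String>` collection as suggested above, and `parse_file` is a toy stand-in for `syn::parse_str::<syn::File>` so the example stays dependency-free.

```rust
// Sketch: record parse failures with file context instead of returning silently.
#[derive(Default)]
struct ScanResult {
    pda_specs: Vec<String>,
    errors: Vec<String>,
}

fn parse_file(contents: &str) -> Result<Vec<String>, String> {
    // Toy parser standing in for syn::parse_str.
    if contents.contains("struct") {
        Ok(vec!["SomePda".to_string()])
    } else {
        Err("expected item".to_string())
    }
}

fn scan(path: &str, contents: &str, result: &mut ScanResult) {
    match parse_file(contents) {
        Ok(specs) => result.pda_specs.extend(specs),
        // The failure is appended to result.errors rather than swallowed.
        Err(e) => result.errors.push(format!("{path}: parse error: {e}")),
    }
}

fn main() {
    let mut result = ScanResult::default();
    scan("src/good.rs", "struct Foo;", &mut result);
    scan("src/bad.rs", "not rust", &mut result);
    assert_eq!(result.pda_specs.len(), 1);
    assert_eq!(
        result.errors,
        vec!["src/bad.rs: parse error: expected item".to_string()]
    );
    println!("ok");
}
```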

In `@sdk-libs/macros/src/compressible/utils.rs`:
- Around line 107-115: The helper generate_empty_ctoken_enum should be renamed
to reflect the enum it emits (TokenAccountVariant); rename the function to
something like generate_empty_token_account_variant_enum (or
generate_empty_token_account_variant) and update all call sites that reference
generate_empty_ctoken_enum to the new name, plus update the function doc comment
to mention TokenAccountVariant for clarity; ensure any imports or references to
the old function name in tests, modules, or macros are updated so compilation
succeeds.

In `@sdk-libs/macros/src/finalize/codegen.rs`:
- Around line 441-444: The mints-only path is using the address queue index
instead of the output state queue; replace the use of
__tree_info.address_queue_pubkey_index when selecting output_queue with the
output state tree index from create_accounts_proof (e.g., use
create_accounts_proof.output_state_tree_index as the usize index passed to
cpi_accounts.get_tree_account_info) so that output_queue is obtained from the
correct tree account; update the assignment that sets output_queue (currently
using __tree_info.address_queue_pubkey_index) to use
create_accounts_proof.output_state_tree_index instead and keep address_tree and
__tree_pubkey logic unchanged.

In `@sdk-libs/macros/src/finalize/parse.rs`:
- Around line 278-280: The if condition currently checks "light_token_program"
twice so ctoken_program_field never matches; update the condition in parse.rs
where field_name is compared (the block that assigns ctoken_program_field using
field_ident) to check for both "ctoken_program" and "light_token_program" (e.g.,
if field_name == "ctoken_program" || field_name == "light_token_program") so
that fields named ctoken_program (or its alias) are correctly detected and
assigned to ctoken_program_field.
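The corrected predicate is small enough to show directly; this is the fix the prompt describes, extracted into a standalone function for illustration.

```rust
// Accept either field name instead of comparing against
// "light_token_program" twice.
fn is_ctoken_program_field(field_name: &str) -> bool {
    field_name == "ctoken_program" || field_name == "light_token_program"
}

fn main() {
    assert!(is_ctoken_program_field("ctoken_program"));
    assert!(is_ctoken_program_field("light_token_program"));
    assert!(!is_ctoken_program_field("token_program"));
    println!("ok");
}
```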

In `@sdk-libs/program-test/src/compressible.rs`:
- Around line 317-322: The account meta for compression_authority is currently
passed read-only via AccountMeta::new_readonly but the generated
CompressAccountsIdempotent struct and its #[account(mut)] annotation require it
to be writable; in the vector named program_metas (where
AccountMeta::new(payer.pubkey(), true) etc. are set) replace the
AccountMeta::new_readonly(compression_authority, false) entry with
AccountMeta::new(compression_authority, false) (AccountMeta::new marks the
account writable; the second argument is is_signer, which stays false) so the
account metas match the CompressAccountsIdempotent definition.

In `@sdk-libs/program-test/src/program_test/light_program_test.rs`:
- Around line 494-505: The synthetic SPL account is being built with
pack_token_data_to_spl_bytes(mint, &ata, token_data) which makes the SPL account
owner point to the ATA itself; change the second argument to the wallet
authority pubkey (the compressed account's owner/authority) so the synthesized
SPL Token Account has the correct owner, i.e., call
pack_token_data_to_spl_bytes(mint, &<wallet_authority_pubkey>, token_data) (use
the compressed account's authority field rather than ata) and keep the created
solana_sdk::account::Account owner/routine unchanged.

In `@sdk-libs/sdk/src/compressible/compress_account_on_init.rs`:
- Around line 55-60: The commented-out guard that claims "with_data=true is not
supported" is misleading; update the function (e.g., the
compress_account_on_init function) to either enforce the guard or remove the
dead comment: either (A) re-enable and implement the guard by checking the
with_data parameter and returning Err(LightSdkError::ConstraintViolation.into())
with a clear process/msg when with_data is true, or (B) delete the commented-out
block and its TODO so callers and reviewers aren't confused—pick one approach
and ensure the code and comments reflect the actual behavior.

In `@sdk-libs/token-sdk/src/token/create.rs`:
- Around line 199-236: invoke_signed currently defaults bump to 0 when seeds are
empty or malformed; add explicit validation at the start of invoke_signed to
ensure seeds is non-empty and the last seed slice has length exactly 1,
otherwise return Err(ProgramError::InvalidSeeds). After validation, extract bump
from seeds.last().unwrap()[0] and build seed_vecs from
seeds.iter().take(seeds.len() - 1) as you already do; then construct
CompressToPubkey with that bump and proceed to call
LegacyCreateTokenAccountCpi::invoke_signed as before.
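The seed validation described above can be sketched as follows. It returns a plain error string in place of `ProgramError::InvalidSeeds` so the example is dependency-free; the real function would map both failure cases to that error.

```rust
// Sketch: reject empty seed lists and non-1-byte bump seeds instead of
// silently defaulting the bump to 0.
fn split_bump(seeds: &[&[u8]]) -> Result<(u8, Vec<Vec<u8>>), &'static str> {
    let last = seeds.last().ok_or("InvalidSeeds: seeds must be non-empty")?;
    if last.len() != 1 {
        return Err("InvalidSeeds: last seed must be the 1-byte bump");
    }
    let bump = last[0];
    // Everything before the bump becomes the compress_to derivation seeds.
    let seed_vecs = seeds[..seeds.len() - 1].iter().map(|s| s.to_vec()).collect();
    Ok((bump, seed_vecs))
}

fn main() {
    let bump_arr = [42u8];
    let seeds: Vec<&[u8]> = vec![b"vault".as_slice(), bump_arr.as_slice()];
    let (bump, prefix) = split_bump(&seeds).unwrap();
    assert_eq!(bump, 42);
    assert_eq!(prefix, vec![b"vault".to_vec()]);
    assert!(split_bump(&[]).is_err());
    assert!(split_bump(&[b"vault".as_slice()]).is_err());
    println!("ok");
}
```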

light-heap = { workspace = true, optional = true }
light-account-checks = { workspace = true }
light-compressed-account = { workspace = true }
light-sdk-types = { workspace = true }

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Find light-sdk-types in the workspace
echo "=== Finding light-sdk-types definition ==="
rg -l 'name\s*=\s*"light-sdk-types"' --glob 'Cargo.toml'

# Check the structure of the workspace
echo "=== Checking workspace structure ==="
find . -name "light-sdk-types" -type d | head -5

Repository: Lightprotocol/light-protocol

Length of output: 200


🏁 Script executed:

#!/bin/bash
# Once we find it, get its location and dependencies
CARGO_PATH=$(rg -l 'name\s*=\s*"light-sdk-types"' --glob 'Cargo.toml' | head -1)

if [ -n "$CARGO_PATH" ]; then
  echo "=== light-sdk-types Cargo.toml ==="
  cat "$CARGO_PATH"
  
  echo ""
  echo "=== Directory structure around light-sdk-types ==="
  dirname "$CARGO_PATH" | xargs ls -la
fi

Repository: Lightprotocol/light-protocol

Length of output: 1373


🏁 Script executed:

#!/bin/bash
# Check CLAUDE.md files in program-libs for dependency guidance
echo "=== CLAUDE.md files in program-libs ==="
find program-libs -name "CLAUDE.md" -type f | head -10

# Show relevant content from program-libs CLAUDE.md
if [ -f "program-libs/CLAUDE.md" ]; then
  echo "=== program-libs/CLAUDE.md content ==="
  cat "program-libs/CLAUDE.md" | head -100
fi

Repository: Lightprotocol/light-protocol

Length of output: 317


🏁 Script executed:

#!/bin/bash
# Find the exact path of light-sdk-types
echo "=== Exact location of light-sdk-types ==="
rg -l 'name\s*=\s*"light-sdk-types"' --glob 'Cargo.toml'

# Check what's in program-libs/compressible/CLAUDE.md
echo ""
echo "=== program-libs/compressible/CLAUDE.md ==="
cat program-libs/compressible/CLAUDE.md

# Also check the broader CLAUDE.md or guidelines
echo ""
echo "=== program-libs/compressible/docs/CLAUDE.md ==="
cat program-libs/compressible/docs/CLAUDE.md

Repository: Lightprotocol/light-protocol

Length of output: 2383


🏁 Script executed:

#!/bin/bash
# Check the current Cargo.toml of compressible to see all dependencies
echo "=== program-libs/compressible/Cargo.toml (full) ==="
cat program-libs/compressible/Cargo.toml | head -80

# Verify: is light-sdk-types in program-libs or sdk-libs?
echo ""
echo "=== Checking if light-sdk-types path ==="
fd -t d 'light-sdk-types' | head -5

Repository: Lightprotocol/light-protocol

Length of output: 2129


🏁 Script executed:

#!/bin/bash
# Find workspace root and list all crates to understand structure
echo "=== Root workspace.members ==="
rg '\[\[workspace\]\]' -A 50 'Cargo.toml' 2>/dev/null | head -60

# More direct: list Cargo.toml files with their package names
echo ""
echo "=== Package locations in workspace (sample) ==="
fd -t f 'Cargo.toml' -x grep -H 'name = "' {} \; | grep -E '(light-sdk-types|program-libs|sdk-libs)' | head -20

Repository: Lightprotocol/light-protocol

Length of output: 1365


🏁 Script executed:

#!/bin/bash
# Verify the dependency direction and check if there are already program-libs crates for types
echo "=== Looking for other program-libs type crates ==="
ls -la program-libs/ | grep -E 'type|interface'

# Check what light-sdk-types is actually used for in program-libs/compressible
echo ""
echo "=== How light-sdk-types is used in compressible ==="
rg 'light-sdk-types|use.*light_sdk_types' program-libs/compressible/src/ | head -20

Repository: Lightprotocol/light-protocol

Length of output: 335


🏁 Script executed:

#!/bin/bash
# Check if light-token-interface (program-libs tier) might be the right place for these types
echo "=== light-token-interface location and purpose ==="
head -50 program-libs/token-interface/src/lib.rs

# See if ValidityProof and PackedAddressTreeInfo mentioned in original review are in light-sdk-types
echo ""
echo "=== Types exported from light-sdk-types ==="
grep -E 'pub struct|pub enum' program-libs/sdk-types/src/lib.rs 2>/dev/null || grep -E 'pub struct|pub enum' sdk-libs/sdk-types/src/lib.rs | head -20

Repository: Lightprotocol/light-protocol

Length of output: 663


🏁 Script executed:

#!/bin/bash
# Check full light-sdk-types/src to see what instruction types are there
echo "=== light-sdk-types/src contents ==="
ls -la sdk-libs/sdk-types/src/

# Check if PackedAddressTreeInfo is defined there
echo ""
echo "=== Looking for PackedAddressTreeInfo definition ==="
rg 'PackedAddressTreeInfo' sdk-libs/sdk-types/src/ -B 2 -A 5 | head -30

Repository: Lightprotocol/light-protocol

Length of output: 1998


Move instruction types to program-libs tier—program-libs crates cannot depend on sdk-libs.

The light-sdk-types crate is located in sdk-libs/sdk-types/, which creates a dependency from program-libs/compressible upward into the SDK layer. This violates the dependency hierarchy: SDK libraries should depend on program libraries, not the reverse.

PackedAddressTreeInfo and similar instruction types are pure data structures that belong in the program-libs tier so that both programs and SDKs can use them. Per the established architecture, types needed by programs should be isolated in program-libs (similar to how light-token-interface handles token types), allowing SDKs to depend on them without creating backward dependencies.

Action: Move PackedAddressTreeInfo and related instruction types from sdk-libs/sdk-types/src/instruction/ to a new or existing program-libs type crate, then update dependencies accordingly.

🤖 Prompt for AI Agents
In `@program-libs/compressible/Cargo.toml` at line 35, The compressible crate
currently depends on light-sdk-types which creates an upward dependency; extract
the instruction types (e.g., PackedAddressTreeInfo and any sibling structs/enums
in sdk-libs/sdk-types/src/instruction/) into a new or existing program-libs
crate (name it something like program-types), update all source imports in
program and sdk code to use program-types::instruction::PackedAddressTreeInfo,
remove the light-sdk-types dependency from program-libs/compressible/Cargo.toml,
add the new program-types crate as a dependency in compressible's Cargo.toml,
and update any use/path references across the repo to point to the new crate so
no program-libs crate depends on sdk-libs.

Comment on lines +14 to +16
2. account layout `CToken` is defined in path: program-libs/token-interface/src/state/ctoken/ctoken_struct.rs
3. extension layout `CompressionInfo` is defined in path:
program-libs/token-interface/src/state/extensions/compressible.rs

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for CToken struct definition and verify paths
echo "=== Searching for ctoken_struct.rs file ==="
find program-libs/token-interface -name "ctoken_struct.rs" -o -name "*ctoken*.rs" 2>/dev/null | sort

echo ""
echo "=== Checking state directory structure ==="
fd -t f -e rs . program-libs/token-interface/src/state/ 2>/dev/null | sort

echo ""
echo "=== Searching for CToken struct definition ==="
rg -n 'pub struct CToken' program-libs/token-interface/src --type=rust

echo ""
echo "=== Checking module path in source files ==="
rg -n 'mod.*token|mod.*ctoken' program-libs/token-interface/src --type=rust | head -20

Repository: Lightprotocol/light-protocol

Length of output: 2943


🏁 Script executed:

#!/bin/bash
# Verify CToken struct is in token_struct.rs
echo "=== Checking token_struct.rs for CToken ==="
rg -n 'CToken' program-libs/token-interface/src/state/token/token_struct.rs

echo ""
echo "=== Check module visibility in token/mod.rs ==="
cat program-libs/token-interface/src/state/token/mod.rs

Repository: Lightprotocol/light-protocol

Length of output: 314


🏁 Script executed:

#!/bin/bash
# Read the token_struct.rs file to see what's defined
echo "=== Content of token_struct.rs ==="
cat -n program-libs/token-interface/src/state/token/token_struct.rs | head -100

echo ""
echo "=== Searching for any Token struct or similar ==="
rg -n 'pub struct' program-libs/token-interface/src/state/token/token_struct.rs

Repository: Lightprotocol/light-protocol

Length of output: 4488


🏁 Script executed:

#!/bin/bash
# Search for CToken type alias
echo "=== Searching for CToken type alias ==="
rg -n 'type CToken|pub use.*Token' program-libs/token-interface/src --type=rust

echo ""
echo "=== Checking CREATE.md context around line 99 ==="
sed -n '95,105p' programs/compressed-token/program/docs/ctoken/CREATE.md

Repository: Lightprotocol/light-protocol

Length of output: 2611


🏁 Script executed:

#!/bin/bash
# Check the exact content around line 14 in CREATE.md
echo "=== Lines 10-20 of CREATE.md ==="
sed -n '10,20p' programs/compressed-token/program/docs/ctoken/CREATE.md

echo ""
echo "=== Full search for CToken in the entire codebase ==="
rg -n 'CToken' . --type=rust 2>/dev/null | head -20

Repository: Lightprotocol/light-protocol

Length of output: 2648


🏁 Script executed:

#!/bin/bash
# Check if there's a CLAUDE.md or README in the compressed-token program docs
echo "=== Checking for CLAUDE.md or docs structure guidance ==="
ls -la programs/compressed-token/program/docs/ 2>/dev/null || echo "Path not found"

echo ""
echo "=== Check pub use statements in token-interface ==="
grep -r "pub use.*Token" program-libs/token-interface/src/state/token/ --include="*.rs"

echo ""
echo "=== Check if any file mentions both CToken and Token mapping ==="
rg -n 'CToken.*Token|pub.*Token' program-libs/token-interface/src/state/token/mod.rs -A 3

Repository: Lightprotocol/light-protocol

Length of output: 1023


🏁 Script executed:

#!/bin/bash
# Read CLAUDE.md to understand documentation structure
echo "=== Content of programs/compressed-token/program/docs/CLAUDE.md ==="
cat programs/compressed-token/program/docs/CLAUDE.md

echo ""
echo "=== Check ACCOUNTS.md for account naming conventions ==="
head -50 programs/compressed-token/program/docs/ACCOUNTS.md

Repository: Lightprotocol/light-protocol

Length of output: 4176


🏁 Script executed:

#!/bin/bash
# Check if CToken is a public type alias in the token module
echo "=== Full content of token/mod.rs ==="
cat program-libs/token-interface/src/state/token/mod.rs

echo ""
echo "=== Check lib.rs for any CToken exports ==="
rg -n 'CToken|pub.*token' program-libs/token-interface/src/lib.rs | head -20

echo ""
echo "=== Verify the exact line in ACCOUNTS.md that uses state::token::CToken ==="
grep -n 'state::token::CToken' programs/compressed-token/program/docs/ACCOUNTS.md

Repository: Lightprotocol/light-protocol

Length of output: 558


🏁 Script executed:

#!/bin/bash
# Check if CToken is defined as a type alias
echo "=== Full content of token_struct.rs including any type aliases ==="
cat program-libs/token-interface/src/state/token/token_struct.rs | grep -E '(pub type|pub use).*CToken|^pub.*Token' -A 2 -B 2

echo ""
echo "=== Check for any CToken alias in the entire token directory ==="
grep -r 'type CToken\|pub.*CToken' program-libs/token-interface/src/state/token/ --include="*.rs"

echo ""
echo "=== Broader search for CToken type definition ==="
grep -r 'type CToken\|pub.*CToken' program-libs/token-interface/src/ --include="*.rs"

Repository: Lightprotocol/light-protocol

Length of output: 447


🏁 Script executed:

#!/bin/bash
# Let me verify by checking if the documentation compiles - check what actual exports exist
echo "=== Check state module exports ==="
grep -n 'pub' program-libs/token-interface/src/state/mod.rs

echo ""
echo "=== List what's exported from state::token ==="
rg -n '^pub' program-libs/token-interface/src/state/token/ --type=rust

echo ""
echo "=== Check if there's actually a ctoken module anywhere ==="
find program-libs/token-interface -type d -name ctoken

Repository: Lightprotocol/light-protocol

Length of output: 1785


🏁 Script executed:

#!/bin/bash
# Let me check the actual code examples in ACCOUNTS.md more carefully
echo "=== Lines 45-65 of ACCOUNTS.md ==="
sed -n '45,65p' programs/compressed-token/program/docs/ACCOUNTS.md

echo ""
echo "=== Check if CToken is perhaps a type alias somewhere we missed ==="
rg -n 'CToken' program-libs/token-interface/src --type=rust | grep -E '(type|alias|= Token)'

Repository: Lightprotocol/light-protocol

Length of output: 719


🏁 Script executed:

#!/bin/bash
# Check the zero_copy module where those methods come from
echo "=== Content of token/zero_copy.rs ==="
cat program-libs/token-interface/src/state/token/zero_copy.rs

echo ""
echo "=== Check if CToken is defined in lib.rs or any other high-level module ==="
rg -n 'type CToken' program-libs/token-interface/src --type=rust

Repository: Lightprotocol/light-protocol

Length of output: 30853


Fix incorrect path and struct references in line 14.

The documentation references a non-existent path and struct name. The account layout is struct Token defined at program-libs/token-interface/src/state/token/token_struct.rs (not in state/ctoken/ directory). Update line 14 to reference the correct path from the repository root:

2. account layout `Token` is defined in path: program-libs/token-interface/src/state/token/token_struct.rs

Note: The struct is named Token and is re-exported through state::token::* (see module visibility in program-libs/token-interface/src/state/token/mod.rs). The directory structure changed from state/ctoken/ to state/token/, so any references using the old module namespace must be updated accordingly.

🤖 Prompt for AI Agents
In `@programs/compressed-token/program/docs/ctoken/CREATE.md` around lines 14 -
16, Update the documentation to reference the correct account struct and path:
replace the incorrect `CToken`/`state/ctoken/ctoken_struct.rs` reference with
the actual struct `Token` and its file
`program-libs/token-interface/src/state/token/token_struct.rs`; also note that
`Token` is re-exported via `state::token::*` (see
`program-libs/token-interface/src/state/token/mod.rs`) so any mentions or
namespaces using `ctoken` should be changed to the `token` module and `Token`
struct.

4.2. Validate CompressibleConfig is active (not inactive or deprecated) - Error: `CompressibleError::InvalidState` if not active
4.3. If with compress_to_pubkey: - Validates: derives address from provided seeds/bump and verifies it matches token_account pubkey - Security: ensures account is a derivable PDA, preventing compression to non-signable addresses
4.4. Validate compression_only requirement for restricted extensions: - If mint has restricted extensions (e.g., TransferFee) and compression_only == 0 - Error: `ErrorCode::CompressionOnlyRequired`
4.5. Validate compression_only is only set for mints with restricted extensions: - If compression_only != 0 and mint has no restricted extensions - Error: `ErrorCode::CompressionOnlyNotAllowed`

🧹 Nitpick | 🔵 Trivial

Minor style: adverb repetition.

The phrase "Validate compression_only is only set" has redundant usage of "only." Consider rephrasing to improve clarity:

📝 Suggested rewording
-   4.5. Validate compression_only is only set for mints with restricted extensions:
+   4.5. Validate compression_only is set exclusively for mints with restricted extensions:
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
4.5. Validate compression_only is only set for mints with restricted extensions: - If compression_only != 0 and mint has no restricted extensions - Error: `ErrorCode::CompressionOnlyNotAllowed`
4.5. Validate compression_only is set exclusively for mints with restricted extensions: - If compression_only != 0 and mint has no restricted extensions - Error: `ErrorCode::CompressionOnlyNotAllowed`
🧰 Tools
🪛 LanguageTool

[style] ~91-~91: This adverb was used twice in the sentence. Consider removing one of them or replacing them with a synonym.
Context: ...d` 4.5. Validate compression_only is only set for mints with restricted extension...

(ADVERB_REPETITION_PREMIUM)

🤖 Prompt for AI Agents
In `@programs/compressed-token/program/docs/ctoken/CREATE.md` at line 91, Replace
the redundant phrase "Validate compression_only is only set for mints with
restricted extensions" in the CREATE.md docs with a clearer rewording such as
"Validate compression_only is set only for mints with restricted extensions" (or
"Validate compression_only is set for mints that have restricted extensions") so
the sentence uses "only" once and reads clearly; update the line in the 4.5
validation bullet accordingly.

Comment on lines 317 to 322
let program_metas = vec![
    AccountMeta::new(payer.pubkey(), true),
    AccountMeta::new_readonly(config_pda, false),
    AccountMeta::new(rent_sponsor, false),
    AccountMeta::new_readonly(compression_authority, false),
    AccountMeta::new(compression_authority, false),
];

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, let's examine the context around lines 312-322 in the file
cd /repo && cat -n sdk-libs/program-test/src/compressible.rs | sed -n '310,325p'

Repository: Lightprotocol/light-protocol

Length of output: 127


🏁 Script executed:

# Search for CompressAccountsIdempotent struct definition
rg -nA 25 "struct CompressAccountsIdempotent" --type rust

Repository: Lightprotocol/light-protocol

Length of output: 4566


🏁 Script executed:

# Search for the compress_accounts_idempotent instruction definition
rg -nA 15 "pub fn compress_accounts_idempotent" --type rust

Repository: Lightprotocol/light-protocol

Length of output: 2923


🏁 Script executed:

# Also check if there's any generated code or account constraint attributes
rg -B 5 -A 20 "compression_authority" sdk-libs/program-test/src/compressible.rs | head -80

Repository: Lightprotocol/light-protocol

Length of output: 3330


🏁 Script executed:

rg -n "sdk-libs/program-test/src/compressible.rs" --type rust | head -5

Repository: Lightprotocol/light-protocol

Length of output: 54


🏁 Script executed:

# Find the file and check the context around lines 312-322
git ls-files "*program-test/src/compressible.rs"

Repository: Lightprotocol/light-protocol

Length of output: 113


🏁 Script executed:

# Check the context around the problematic lines
sed -n '305,330p' sdk-libs/program-test/src/compressible.rs

Repository: Lightprotocol/light-protocol

Length of output: 952


Change compression_authority to writable in the account metas.

The generated CompressAccountsIdempotent struct explicitly marks compression_authority with #[account(mut)] (sdk-libs/macros/src/compressible/instructions.rs:1188), making it a required mutable account. However, line 320 passes it as read-only with AccountMeta::new_readonly(compression_authority, false), which contradicts both the struct definition and your own comment on line 316. Note that AccountMeta::new(pubkey, is_signer) already produces a writable meta and its second argument controls is_signer, not writability. Replace the new_readonly entry with AccountMeta::new(compression_authority, false).

🤖 Prompt for AI Agents
In `@sdk-libs/program-test/src/compressible.rs` around lines 317 - 322, The
account meta for compression_authority is currently passed read-only via
AccountMeta::new_readonly but the generated CompressAccountsIdempotent struct
and its #[account(mut)] annotation require it to be writable; in the vector
named program_metas (where AccountMeta::new(payer.pubkey(), true) etc. are set)
replace the AccountMeta::new_readonly(compression_authority, false) entry with
AccountMeta::new(compression_authority, false) (AccountMeta::new marks the
account writable; the second argument remains is_signer = false) so the account
metas match the CompressAccountsIdempotent definition.

Comment on lines +494 to +505
// Synthesize SPL Token Account bytes from TokenData
let token_data = &compressed.token;
let data = pack_token_data_to_spl_bytes(mint, &ata, token_data).to_vec();

// Create synthetic Account
let account = solana_sdk::account::Account {
lamports: 0, // Compressed accounts don't have lamports
data,
owner: LIGHT_TOKEN_PROGRAM_ID.into(),
executable: false,
rent_epoch: 0,
};

⚠️ Potential issue | 🟠 Major

Synthetic ATA bytes should use the wallet owner, not the ATA pubkey.

pack_token_data_to_spl_bytes sets the SPL account owner field. Passing ata (Line 496) makes the synthetic SPL account claim it’s owned by itself instead of the wallet authority, which can break parsing/authorization logic downstream.

🐛 Suggested fix
-            let data = pack_token_data_to_spl_bytes(mint, &ata, token_data).to_vec();
+            let data = pack_token_data_to_spl_bytes(mint, owner, token_data).to_vec();
🤖 Prompt for AI Agents
In `@sdk-libs/program-test/src/program_test/light_program_test.rs` around lines
494 - 505, The synthetic SPL account is being built with
pack_token_data_to_spl_bytes(mint, &ata, token_data) which makes the SPL account
owner point to the ATA itself; change the second argument to the wallet
authority pubkey (the compressed account's owner/authority) so the synthesized
SPL Token Account has the correct owner, i.e., call
pack_token_data_to_spl_bytes(mint, &<wallet_authority_pubkey>, token_data) (use
the compressed account's authority field rather than ata) and keep the created
solana_sdk::account::Account owner/routine unchanged.

Comment on lines +55 to +60
// TODO: consider not supporting yet.
// Fail-fast: with_data=true is not yet supported in macro-generated code
// if with_data {
// msg!("with_data=true is not supported yet");
// return Err(LightSdkError::ConstraintViolation.into());
// }

🧹 Nitpick | 🔵 Trivial

Avoid commented-out guard; either enforce or delete.

The comment says with_data=true isn’t supported, yet the function proceeds to handle it. Commented-out logic is misleading—either implement the guard or remove the dead block to keep intent clear.

♻️ Suggested cleanup (no behavior change)
-    // TODO: consider not supporting yet.
-    // Fail-fast: with_data=true is not yet supported in macro-generated code
-    // if with_data {
-    //     msg!("with_data=true is not supported yet");
-    //     return Err(LightSdkError::ConstraintViolation.into());
-    // }
+    // TODO: Decide whether with_data=true should be supported here.
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
// TODO: consider not supporting yet.
// Fail-fast: with_data=true is not yet supported in macro-generated code
// if with_data {
// msg!("with_data=true is not supported yet");
// return Err(LightSdkError::ConstraintViolation.into());
// }
// TODO: Decide whether with_data=true should be supported here.
🤖 Prompt for AI Agents
In `@sdk-libs/sdk/src/compressible/compress_account_on_init.rs` around lines 55 -
60, The commented-out guard that claims "with_data=true is not supported" is
misleading; update the function (e.g., the compress_account_on_init function) to
either enforce the guard or remove the dead comment: either (A) re-enable and
implement the guard by checking the with_data parameter and returning
Err(LightSdkError::ConstraintViolation.into()) with a clear process/msg when
with_data is true, or (B) delete the commented-out block and its TODO so callers
and reviewers aren't confused—pick one approach and ensure the code and comments
reflect the actual behavior.

Comment on lines +199 to +236
/// Invoke CPI with PDA signing for program-owned accounts.
///
/// Seeds are used for both signing AND deriving the compress_to address.
pub fn invoke_signed(self, seeds: &[&[u8]]) -> Result<(), ProgramError> {
let defaults = CompressibleParams::default();

// Build CompressToPubkey from signer seeds
let bump = seeds.last().and_then(|s| s.first()).copied().unwrap_or(0);

let seed_vecs: Vec<Vec<u8>> = seeds
.iter()
.take(seeds.len().saturating_sub(1))
.map(|s| s.to_vec())
.collect();

let compress_to = CompressToPubkey {
bump,
program_id: self.program_id.to_bytes(),
seeds: seed_vecs,
};

let cpi = LegacyCreateTokenAccountCpi {
payer: self.base.payer,
account: self.base.account,
mint: self.base.mint,
owner: self.base.owner,
compressible: CompressibleParamsCpi {
compressible_config: self.config,
rent_sponsor: self.sponsor,
system_program: self.system_program,
pre_pay_num_epochs: defaults.pre_pay_num_epochs,
lamports_per_write: defaults.lamports_per_write,
compress_to_account_pubkey: Some(compress_to),
token_account_version: defaults.token_account_version,
compression_only: defaults.compression_only,
},
};
cpi.invoke_signed(&[seeds])

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, let's find and examine the file in question
find . -path "*sdk-libs/token-sdk/src/token/create.rs" -type f

Repository: Lightprotocol/light-protocol

Length of output: 200


🏁 Script executed:

# Also search for the CompressToPubkey struct to understand seed handling
rg "struct CompressToPubkey" -A 5

Repository: Lightprotocol/light-protocol

Length of output: 200


🏁 Script executed:

# Check how invoke_signed is used elsewhere in the codebase
rg "invoke_signed" --type rs -B 2 -A 2 | head -100

Repository: Lightprotocol/light-protocol

Length of output: 200


🏁 Script executed:

# Look for PDA derivation patterns with seeds and bumps
rg "find_program_address|create_program_address" --type rs -B 2 -A 2 | head -100

Repository: Lightprotocol/light-protocol

Length of output: 200


🌐 Web query:

Solana PDA seed validation best practices create_program_address

💡 Result:

Short best-practices checklist for PDA seed validation and using create_program_address

  1. Always derive PDAs with the canonical bump (find_program_address / findProgramDerivedAddress) and validate that bump on-chain. Do not accept a PDA/account without verifying seeds + canonical bump produce that address via create_program_address. [1][2][3]

  2. Limit seeds to the protocol bounds: ≤16 seeds and each ≤32 bytes. If a logical seed may be larger, hash it (e.g., SHA256) first and use the digest as the seed. This keeps costs predictable and avoids MaxSeedLengthExceeded errors. [1][4]

  3. Prevent ambiguous seed concatenation collisions. Either (a) use fixed-length seeds, (b) insert explicit separators, or (c) include type prefixes/static strings so [{a,b}] ≠ [{ab}]. [1][4]

  4. Store the canonical bump (and/or the full seed list) at initialization and reuse it. Recomputing find_program_address every instruction is expensive; storing the bump lets on-chain code verify seeds cheaply by calling create_program_address once. [1][2][3]

  5. Verify on-chain that the provided account.pubkey == create_program_address(seeds + [bump], program_id). Reject if create_program_address errors or returns a different address. This defends against different-valid-bump/address spoofing. [1][3][5]

  6. Treat seeds as public, not secret. Never rely on seed entropy for authorization; use PDAs as program-owned authorities only after explicit on-chain checks. [1][4]

  7. In frameworks (e.g., Anchor) use the seeds + bump constraints (seeds = [...], bump) so the runtime enforces the canonical-check automatically. Still store/verify bump when you need custom logic. [2]

  8. For variable inputs (user data, timestamps, vesting counters): avoid directly using large/unchecked variable data as seeds; canonicalize (serialize) and/or hash them first to fixed size and type. [1][4]

References

  • Solana Program-Derived Addresses / docs & cookbook. [1]
  • Solana Bump Seed Canonicalization / Anchor guidance. [2]
  • Solana create_program_address / create vs find semantics. [3]
  • Practical PDA seed rules and examples (Cookbook + community guidance). [4]
  • Security note on validating bump seeds (blog/audit guidance). [5]

If you want, I can paste exact code snippets (Rust/JS/Anchor) showing secure on-chain PDA validation.


Validate PDA seed shape before deriving CompressToPubkey.

The current code silently defaults bump to 0 when seeds are empty or the last seed is malformed, which can cause compress_to to derive an incorrect address. Per Solana best practices, PDAs must be verified with their canonical bump on-chain. Add explicit validation to fail fast:

  • Check that seeds is non-empty
  • Verify the last seed is exactly 1 byte (the bump)
  • Return ProgramError::InvalidSeeds if validation fails

This prevents silent address mismatches that could result in token accounts being created at unintended addresses.

Suggested validation guard
 pub fn invoke_signed(self, seeds: &[&[u8]]) -> Result<(), ProgramError> {
     let defaults = CompressibleParams::default();

-    // Build CompressToPubkey from signer seeds
-    let bump = seeds.last().and_then(|s| s.first()).copied().unwrap_or(0);
-
-    let seed_vecs: Vec<Vec<u8>> = seeds
-        .iter()
-        .take(seeds.len().saturating_sub(1))
-        .map(|s| s.to_vec())
-        .collect();
+    if seeds.is_empty() {
+        return Err(ProgramError::InvalidSeeds);
+    }
+    let (seed_parts, bump_slice) = seeds.split_at(seeds.len() - 1);
+    let bump_slice = bump_slice.first().ok_or(ProgramError::InvalidSeeds)?;
+    if bump_slice.len() != 1 {
+        return Err(ProgramError::InvalidSeeds);
+    }
+    let bump = bump_slice[0];
+
+    let seed_vecs: Vec<Vec<u8>> = seed_parts
+        .iter()
+        .map(|s| s.to_vec())
+        .collect();
🤖 Prompt for AI Agents
In `@sdk-libs/token-sdk/src/token/create.rs` around lines 199 - 236, invoke_signed
currently defaults bump to 0 when seeds are empty or malformed; add explicit
validation at the start of invoke_signed to ensure seeds is non-empty and the
last seed slice has length exactly 1, otherwise return
Err(ProgramError::InvalidSeeds). After validation, extract bump from
seeds.last().unwrap()[0] and build seed_vecs from seeds.iter().take(seeds.len()
- 1) as you already do; then construct CompressToPubkey with that bump and
proceed to call LegacyCreateTokenAccountCpi::invoke_signed as before.

@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 4

🤖 Fix all issues with AI agents
In `@sdk-libs/compressible-client/src/decompress_mint.rs`:
- Around line 10-13: The doc comment lists the three APIs but mistakenly names
two bullets `decompress_mint`; update the third bullet to clearly reflect the
MintInterface wrapper (e.g., `decompress_mint_idempotent` or "decompress_mint
(MintInterface wrapper)") so the three entries read: `decompress_mint`,
`build_decompress_mint`, and the MintInterface wrapper
(`decompress_mint_idempotent` or similar). Ensure the text references the actual
symbol for the wrapper (MintInterface) so readers can locate the corresponding
function.
- Around line 125-128: Update the inaccurate comment for DEFAULT_RENT_PAYMENT:
replace “~24 hours per epoch” with the correct approximate epoch duration (≈2.5
days) to match Solana rent guidance in
program-libs/compressible/docs/SOLANA_RENT.md; ensure the comment next to the
constant DEFAULT_RENT_PAYMENT clearly states the corrected duration (e.g., “~2.5
days per epoch”) and, if helpful, reference the SOLANA_RENT.md doc for further
context.
- Around line 166-176: The code currently indexes proof_result.accounts[0]
directly (used to set account_info, state_tree, input_queue, output_queue) which
can panic when the accounts vec is empty; update the code to use
proof_result.accounts.get(0) and return a typed error if None is returned (e.g.,
add a new error variant in the function's error type or the existing Error
enum), propagate that error instead of panicking, and adjust subsequent uses of
account_info (the variable derived from the optional) to work with the safely
obtained reference; ensure the error is descriptive (e.g., "EmptyProofAccounts")
and used wherever proof_result.accounts[0] was referenced.

In `@sdk-libs/compressible-client/src/lib.rs`:
- Around line 148-231: The methods from_seeds and from_ctoken currently map a
missing compressed payload (interface.compressed_data() -> None) to
anchor_lang::error::ErrorCode::AccountNotInitialized which is misleading; change
the error returned when compressed_data() is None to a more descriptive error
(e.g., anchor_lang::error::ErrorCode::InvalidAccountData or a crate-specific
Anchor error you define) and update the doc comments for from_seeds and
from_ctoken to state the precondition that the provided AccountInterface must be
a cold/compressed account; locate the checks using
interface.compressed_data().ok_or_else(...) in the functions and replace the
ErrorCode::AccountNotInitialized construction and the doc text accordingly.
♻️ Duplicate comments (8)
sdk-libs/macros/src/compressible/file_scanner.rs (1)

74-91: Don’t silently drop parse/extraction errors while scanning.
Line 74-80 and Line 86-91 / Line 112-116 currently swallow failures, which can silently omit seed specs and lead to incorrect macro output. Capture errors into result.errors (or fail fast).

🛠️ Suggested error capture
-    let parsed: syn::File = match syn::parse_str(&contents) {
-        Ok(f) => f,
-        Err(e) => {
-            // Not all files may be valid on their own (e.g., test files with main)
-            // Just skip them silently
-            let _ = e;
-            return;
-        }
-    };
+    let parsed: syn::File = match syn::parse_str(&contents) {
+        Ok(f) => f,
+        Err(e) => {
+            result
+                .errors
+                .push(format!("Failed to parse {:?}: {}", path, e));
+            return;
+        }
+    };
@@
-            Item::Struct(item_struct) => {
-                if let Ok(Some((info, struct_name))) = try_extract_from_struct(&item_struct) {
-                    result.pda_specs.extend(info.pda_fields);
-                    result.token_specs.extend(info.token_fields);
-                    result.rentfree_struct_names.insert(struct_name);
-                }
-            }
+            Item::Struct(item_struct) => match try_extract_from_struct(&item_struct) {
+                Ok(Some((info, struct_name))) => {
+                    result.pda_specs.extend(info.pda_fields);
+                    result.token_specs.extend(info.token_fields);
+                    result.rentfree_struct_names.insert(struct_name);
+                }
+                Ok(None) => {}
+                Err(e) => result
+                    .errors
+                    .push(format!("Failed to extract {:?}: {}", path, e)),
+            },
@@
-            Item::Struct(item_struct) => {
-                if let Ok(Some((info, struct_name))) = try_extract_from_struct(item_struct) {
-                    result.pda_specs.extend(info.pda_fields);
-                    result.token_specs.extend(info.token_fields);
-                    result.rentfree_struct_names.insert(struct_name);
-                }
-            }
+            Item::Struct(item_struct) => match try_extract_from_struct(item_struct) {
+                Ok(Some((info, struct_name))) => {
+                    result.pda_specs.extend(info.pda_fields);
+                    result.token_specs.extend(info.token_fields);
+                    result.rentfree_struct_names.insert(struct_name);
+                }
+                Ok(None) => {}
+                Err(e) => result.errors.push(format!(
+                    "Failed to extract in inline module {}: {}",
+                    module.ident, e
+                )),
+            },

Also applies to: 109-116

sdk-libs/compressible-client/decompress-mint.md (3)

368-395: Add a language tag to the flow diagram fence (MD040).
markdownlint flags fences without a language specifier.

📝 Proposed fix
-```
+```text
 User calls decompress_mint(mint_seed_pubkey, fee_payer, indexer)

435-440: Error Handling contradicts idempotent behavior.
Line 167 says “AlreadyDecompressed is NOT an error,” but the Error Handling list says it is. Align the section to reflect idempotent success.

📝 Suggested clarification
- - `AlreadyDecompressed`: Mint is already on-chain (idempotent case)
+ - Already decompressed: Returns empty vec (idempotent success, not an error)

458-463: Test plan contradicts idempotent semantics.
The “Already decompressed” test should expect an empty vec, not an error.

📝 Suggested fix
-2. **Already decompressed**: Returns `AlreadyDecompressed` error
+2. **Already decompressed**: Returns empty vec (idempotent)
sdk-libs/compressible-client/src/decompress_mint.rs (1)

326-333: Step numbering is off.
There are two “// 5.” comments; renumber the second one and subsequent steps.

📝 Suggested fix
-    // 5. Get validity proof
+    // 6. Get validity proof
sdk-libs/compressible-client/src/lib.rs (1)

607-617: Consider defensive error handling instead of expect().

This was flagged in a past review. The expect() at line 613 is logically safe because the filter at line 594 only includes accounts where is_cold is true, and AccountInterface::cold() always sets decompression_context: Some(...). However, if someone constructs an AccountInterface manually with inconsistent state (is_cold: true but decompression_context: None), this panics.

For a public API, defensive handling is preferable:

🛡️ Defensive alternative using filter_map
         let compressed_accounts: Vec<(CompressedAccount, V)> = cold_accounts
             .into_iter()
-            .map(|a| {
-                let compressed_account = a
-                    .account_interface
-                    .decompression_context
-                    .expect("Cold account must have decompression context")
-                    .compressed_account;
-                (compressed_account, a.variant)
-            })
+            .filter_map(|a| {
+                let ctx = a.account_interface.decompression_context?;
+                Some((ctx.compressed_account, a.variant))
+            })
             .collect();
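The trade-off above (aborting on inconsistent state via `expect()` versus silently skipping it via `filter_map`) can be shown in a dependency-free sketch; the struct and field names are illustrative stand-ins for `AccountInterface`:

```rust
#[derive(Clone, Debug, PartialEq)]
struct ColdAccount {
    is_cold: bool,
    decompression_context: Option<u64>, // stand-in for the real context payload
}

// filter_map drops accounts whose invariant (cold => context present) is
// broken, where `.expect()` would panic and abort the whole batch.
fn contexts(accounts: Vec<ColdAccount>) -> Vec<u64> {
    accounts
        .into_iter()
        .filter(|a| a.is_cold)
        .filter_map(|a| a.decompression_context)
        .collect()
}

fn main() {
    let good = ColdAccount { is_cold: true, decompression_context: Some(1) };
    let inconsistent = ColdAccount { is_cold: true, decompression_context: None };
    let hot = ColdAccount { is_cold: false, decompression_context: None };
    // The malformed entry is skipped instead of panicking.
    assert_eq!(contexts(vec![good, inconsistent, hot]), vec![1]);
    println!("ok");
}
```

Whether skipping is acceptable depends on the API contract; a third option is returning a `Result` so the caller sees the inconsistency explicitly.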
sdk-libs/compressible-client/DECOMPRESSION.md (2)

55-62: API table entry for simple decompress_mint has incorrect signature.

The table at line 61 shows decompress_mint(signer, fee_payer, &indexer) but based on past review analysis, the actual function takes a DecompressMintRequest or MintInterface. This contradicts line 59 which shows the correct high-perf wrapper signature.

Verify the actual function signatures and update the table for consistency.

#!/bin/bash
# Verify decompress_mint function signatures
rg -nP 'pub\s+(async\s+)?fn\s+decompress_mint\b' sdk-libs/compressible-client/src/decompress_mint.rs -A 5

153-161: TokenData.owner comment is misleading.

Line 156 states // Note: for ATAs, this is the ATA pubkey but this contradicts standard SPL Token semantics where owner is the wallet address that owns the token account, not the token account's pubkey itself.

Based on past review analysis, the actual code populates TokenData.owner from spl_account.owner (the wallet owner). The comment should be corrected.

📝 Suggested fix
 pub struct TokenData {
     pub mint: Pubkey,
-    pub owner: Pubkey,  // Note: for ATAs, this is the ATA pubkey
+    pub owner: Pubkey,  // Token account owner (wallet address, following SPL semantics)
     pub amount: u64,

Comment on lines +10 to +13
//! Three APIs are provided:
//! - `decompress_mint`: Simple async API (fetches state + proof internally)
//! - `build_decompress_mint`: Sync, caller provides pre-fetched state + proof
//! - `decompress_mint`: High-perf wrapper (takes MintInterface, fetches proof internally)

⚠️ Potential issue | 🟡 Minor

Clarify the API list (two bullets share the same name).
Line 11 and Line 13 both say decompress_mint. Consider renaming the second occurrence (the third bullet) to decompress_mint_idempotent or “mint-interface wrapper” to match the actual functions.

🤖 Prompt for AI Agents
In `@sdk-libs/compressible-client/src/decompress_mint.rs` around lines 10 - 13,
The doc comment lists the three APIs but mistakenly names two bullets
`decompress_mint`; update the third bullet to clearly reflect the MintInterface
wrapper (e.g., `decompress_mint_idempotent` or "decompress_mint (MintInterface
wrapper)") so the three entries read: `decompress_mint`,
`build_decompress_mint`, and the MintInterface wrapper
(`decompress_mint_idempotent` or similar). Ensure the text references the actual
symbol for the wrapper (MintInterface) so readers can locate the corresponding
function.

Comment on lines +125 to +128
/// Default rent payment in epochs (~24 hours per epoch)
pub const DEFAULT_RENT_PAYMENT: u8 = 2;
/// Default write top-up lamports (~3 hours rent per write)
pub const DEFAULT_WRITE_TOP_UP: u32 = 766;

⚠️ Potential issue | 🟡 Minor

Epoch duration note is inaccurate.
The comment says “~24 hours per epoch,” but Solana epochs are ~2.5 days (432,000 slots). Update the comment or adjust the stated duration. Based on learnings, please align with program-libs/compressible/docs/SOLANA_RENT.md.

🤖 Prompt for AI Agents
In `@sdk-libs/compressible-client/src/decompress_mint.rs` around lines 125 - 128,
Update the inaccurate comment for DEFAULT_RENT_PAYMENT: replace “~24 hours per
epoch” with the correct approximate epoch duration (≈2.5 days) to match Solana
rent guidance in program-libs/compressible/docs/SOLANA_RENT.md; ensure the
comment next to the constant DEFAULT_RENT_PAYMENT clearly states the corrected
duration (e.g., “~2.5 days per epoch”) and, if helpful, reference the
SOLANA_RENT.md doc for further context.
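The arithmetic behind the correction can be checked quickly: 432,000 slots at the nominal 400 ms slot time give a 48-hour floor, and observed slot times closer to ~500 ms stretch an epoch toward the commonly quoted ~2.5 days (the 500 ms figure is an approximation, not a protocol constant):

```rust
fn main() {
    const SLOTS_PER_EPOCH: u64 = 432_000;
    const NOMINAL_SLOT_MS: u64 = 400;

    // Minimum epoch duration at the nominal slot time.
    let epoch_hours = SLOTS_PER_EPOCH * NOMINAL_SLOT_MS / (1000 * 60 * 60);
    assert_eq!(epoch_hours, 48); // 2 days minimum, clearly not "~24 hours"

    // With slots averaging ~500 ms in practice, an epoch approaches 2.5 days.
    let observed_hours = SLOTS_PER_EPOCH * 500 / (1000 * 60 * 60);
    assert_eq!(observed_hours, 60); // 60 h = 2.5 days
    println!("nominal: {epoch_hours}h, observed: {observed_hours}h");
}
```

Either way, "~24 hours per epoch" understates the duration by at least a factor of two, which matters when `DEFAULT_RENT_PAYMENT` is expressed in epochs.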

Comment on lines +166 to +176
// Extract tree info from proof result
let account_info = &proof_result.accounts[0];
let state_tree = account_info.tree_info.tree;
let input_queue = account_info.tree_info.queue;
let output_queue = account_info
.tree_info
.next_tree_info
.as_ref()
.map(|next| next.queue)
.unwrap_or(input_queue);


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Check file exists and get its total lines
wc -l sdk-libs/compressible-client/src/decompress_mint.rs

Repository: Lightprotocol/light-protocol

Length of output: 127


🏁 Script executed:

#!/bin/bash
# Read the code around the first location (lines 160-180)
sed -n '160,180p' sdk-libs/compressible-client/src/decompress_mint.rs | cat -n

Repository: Lightprotocol/light-protocol

Length of output: 921


🏁 Script executed:

#!/bin/bash
# Read the code around the second location (lines 330-350)
sed -n '330,350p' sdk-libs/compressible-client/src/decompress_mint.rs | cat -n

Repository: Lightprotocol/light-protocol

Length of output: 962


🏁 Script executed:

#!/bin/bash
# Find the error enum definition
grep -n "enum DecompressMintError" sdk-libs/compressible-client/src/decompress_mint.rs

Repository: Lightprotocol/light-protocol

Length of output: 105


🏁 Script executed:

#!/bin/bash
# Read the error enum to see current variants
sed -n '/enum DecompressMintError/,/^}/p' sdk-libs/compressible-client/src/decompress_mint.rs | head -20

Repository: Lightprotocol/light-protocol

Length of output: 495


🏁 Script executed:

#!/bin/bash
# Find ProofResult definition
rg "struct ProofResult|pub struct ProofResult" --type rs | head -5

Repository: Lightprotocol/light-protocol

Length of output: 98


🏁 Script executed:

#!/bin/bash
# Search for validity_proof type definition
rg "ValidityProof" --type rs -A 2 | head -30

Repository: Lightprotocol/light-protocol

Length of output: 98


🏁 Script executed:

#!/bin/bash
# Check if there are guards/checks on proof_result.accounts elsewhere in the file
grep -n "proof_result.accounts" sdk-libs/compressible-client/src/decompress_mint.rs

Repository: Lightprotocol/light-protocol

Length of output: 179


🏁 Script executed:

#!/bin/bash
# Check if there are other similar direct indexing patterns on proof results in the codebase
rg "proof.*accounts\[0\]" --type rs

Repository: Lightprotocol/light-protocol

Length of output: 98


🏁 Script executed:

#!/bin/bash
# Find ProofResult definition
rg "struct ProofResult" -A 5

Repository: Lightprotocol/light-protocol

Length of output: 374


🏁 Script executed:

#!/bin/bash
# Find ValidityProof definition
rg "ValidityProof" -A 3 | head -40

Repository: Lightprotocol/light-protocol

Length of output: 2551


🏁 Script executed:

#!/bin/bash
# Check imports in decompress_mint.rs to understand where ProofResult comes from
head -50 sdk-libs/compressible-client/src/decompress_mint.rs | grep -E "^use|^mod"

Repository: Lightprotocol/light-protocol

Length of output: 622


🏁 Script executed:

#!/bin/bash
# Search for similar unguarded indexing in the codebase
rg "\\.accounts\[0\]" -B 2 -A 2

Repository: Lightprotocol/light-protocol

Length of output: 20192


🏁 Script executed:

#!/bin/bash
# Find ValidityProofWithContext definition
rg "struct ValidityProofWithContext" -A 10

Repository: Lightprotocol/light-protocol

Length of output: 850


🏁 Script executed:

#!/bin/bash
# Search for documentation about what accounts should contain
rg "accounts" sdk-libs/compressible-client/src/decompress_mint.rs -B 5 | head -60

Repository: Lightprotocol/light-protocol

Length of output: 691


🏁 Script executed:

#!/bin/bash
# Check if there's any validation or assertion before using accounts[0]
sed -n '160,176p' sdk-libs/compressible-client/src/decompress_mint.rs

Repository: Lightprotocol/light-protocol

Length of output: 582


🏁 Script executed:

#!/bin/bash
# Check context around line 338 as well
sed -n '330,346p' sdk-libs/compressible-client/src/decompress_mint.rs

Repository: Lightprotocol/light-protocol

Length of output: 598


🏁 Script executed:

#!/bin/bash
# Search for get_validity_proof implementation or documentation
rg "fn get_validity_proof" -A 15

Repository: Lightprotocol/light-protocol

Length of output: 10178


🏁 Script executed:

#!/bin/bash
# Check if there's any documentation about the indexer API contract
rg "pub async fn get_validity_proof|/// Get validity proof" -B 5 -A 15 | head -80

Repository: Lightprotocol/light-protocol

Length of output: 3842


🏁 Script executed:

#!/bin/bash
# Look at how the indexer is mocked in tests to understand expected behavior
find . -name "*.rs" -path "*/test*" -exec grep -l "get_validity_proof" {} \; | head -3

Repository: Lightprotocol/light-protocol

Length of output: 1552


🏁 Script executed:

#!/bin/bash
# Check the test indexer implementation more carefully
sed -n '/async fn get_validity_proof/,/^    }/p' sdk-libs/program-test/src/indexer/test_indexer.rs | head -100

Repository: Lightprotocol/light-protocol

Length of output: 4849


🏁 Script executed:

#!/bin/bash
# Check photon indexer to see if it might return empty accounts
sed -n '/async fn get_validity_proof/,/^        }\)/p' sdk-libs/client/src/indexer/photon_indexer.rs | head -80

Repository: Lightprotocol/light-protocol

Length of output: 121


🏁 Script executed:

#!/bin/bash
# Check if there are any guards on accounts in other files that successfully use this API
rg "accounts\[0\]" sdk-libs/compressible-client/src/lib.rs -B 5 -A 5

Repository: Lightprotocol/light-protocol

Length of output: 1080


Guard against empty proof accounts to prevent panics.
Lines 167 and 338 index proof_result.accounts[0] without checking the vector length. The indexer's validity proof response can return an empty accounts vector when filtering by conditions (as seen in the test indexer implementation), which causes a panic at runtime. Add a typed error and use safe access with .get(0).

Suggested fix
 pub enum DecompressMintError {
     #[error("Indexer error: {0}")]
     Indexer(#[from] IndexerError),
@@ -40,6 +40,8 @@
     MissingMintData,
     #[error("Program error: {0}")]
     ProgramError(#[from] ProgramError),
+    #[error("Validity proof returned no accounts")]
+    ProofAccountMissing,
     #[error("Proof required for cold mint")]
     ProofRequired,
 }
@@ -166,7 +168,9 @@
     // Proof required for cold mint
     let proof_result = validity_proof.ok_or(DecompressMintError::ProofRequired)?;
 
     // Extract tree info from proof result
-    let account_info = &proof_result.accounts[0];
+    let account_info = proof_result
+        .accounts
+        .get(0)
+        .ok_or(DecompressMintError::ProofAccountMissing)?;
     let state_tree = account_info.tree_info.tree;
     let input_queue = account_info.tree_info.queue;
@@ -337,7 +341,9 @@
     // 6. Extract tree info from proof result
-    let account_info = &proof_result.accounts[0];
+    let account_info = proof_result
+        .accounts
+        .get(0)
+        .ok_or(DecompressMintError::ProofAccountMissing)?;
     let state_tree = account_info.tree_info.tree;
     let input_queue = account_info.tree_info.queue;
🤖 Prompt for AI Agents
In `@sdk-libs/compressible-client/src/decompress_mint.rs` around lines 166 - 176,
The code currently indexes proof_result.accounts[0] directly (used to set
account_info, state_tree, input_queue, output_queue) which can panic when the
accounts vec is empty; update the code to use proof_result.accounts.get(0) and
return a typed error if None is returned (e.g., add a new error variant in the
function's error type or the existing Error enum), propagate that error instead
of panicking, and adjust subsequent uses of account_info (the variable derived
from the optional) to work with the safely obtained reference; ensure the error
is descriptive (e.g., "EmptyProofAccounts") and used wherever
proof_result.accounts[0] was referenced.
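The panic-to-typed-error pattern in isolation, with illustrative names (`ProofError`, `first_account`); note `.first()` is the idiomatic equivalent of `.get(0)`:

```rust
#[derive(Debug, PartialEq)]
enum ProofError {
    MissingProofAccount,
}

#[derive(Debug, Clone, PartialEq)]
struct AccountInfo {
    tree: u64, // stand-in for tree_info and friends
}

fn first_account(accounts: &[AccountInfo]) -> Result<&AccountInfo, ProofError> {
    // `.first()` returns an Option, letting us surface a typed error instead
    // of the index-out-of-bounds panic that `accounts[0]` raises when the
    // indexer returns an empty accounts vec.
    accounts.first().ok_or(ProofError::MissingProofAccount)
}

fn main() {
    let accounts = vec![AccountInfo { tree: 7 }];
    assert_eq!(first_account(&accounts).unwrap().tree, 7);
    assert_eq!(first_account(&[]), Err(ProofError::MissingProofAccount));
    println!("ok");
}
```

Applied at both call sites (lines 167 and 338), the `?` operator then propagates the error through `DecompressMintError` as the suggested diff shows.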

Comment on lines +148 to +231
/// A rent-free decompression request combining account interface and variant.
/// Generic over V (the CompressedAccountVariant type from the program).
#[derive(Clone, Debug)]
pub struct RentFreeDecompressAccount<V> {
/// The account interface (contains pubkey and cold/hot state)
pub account_interface: AccountInterface,
/// The typed variant (e.g., CompressedAccountVariant::UserRecord { ... })
pub variant: V,
}

impl<V> RentFreeDecompressAccount<V> {
/// Create a new decompression request
pub fn new(account_interface: AccountInterface, variant: V) -> Self {
Self {
account_interface,
variant,
}
}

/// Create decompression request from account interface and seeds.
///
/// The seeds type determines which variant constructor to call.
/// Data is extracted from interface, passed to `IntoVariant::into_variant()`.
///
/// # Arguments
/// * `interface` - The account interface (must be cold/compressed)
/// * `seeds` - Seeds struct (e.g., `UserRecordSeeds`) that implements `IntoVariant<V>`
///
/// # Example
/// ```ignore
/// RentFreeDecompressAccount::from_seeds(
/// AccountInterface::cold(user_record_pda, compressed_user),
/// UserRecordSeeds { authority, mint_authority, owner, category_id },
/// )?
/// ```
#[cfg(feature = "anchor")]
pub fn from_seeds<S>(
interface: AccountInterface,
seeds: S,
) -> Result<Self, anchor_lang::error::Error>
where
S: light_sdk::compressible::IntoVariant<V>,
{
let data = interface.compressed_data().ok_or_else(|| {
anchor_lang::error::Error::from(anchor_lang::error::ErrorCode::AccountNotInitialized)
})?;
let variant = seeds.into_variant(data)?;
Ok(Self::new(interface, variant))
}

/// Create decompression request for CToken account.
///
/// Parses TokenData from interface.compressed_data() internally.
/// The CToken variant type determines how to wrap into the full variant.
///
/// # Arguments
/// * `interface` - The account interface (must be cold/compressed)
/// * `ctoken_variant` - CToken variant (e.g., `TokenAccountVariant::Vault { cmint }`)
///
/// # Example
/// ```ignore
/// RentFreeDecompressAccount::from_ctoken(
/// AccountInterface::cold(vault_pda, compressed_vault.account),
/// TokenAccountVariant::Vault { cmint: cmint_pda },
/// )?
/// ```
#[cfg(feature = "anchor")]
pub fn from_ctoken<T>(
interface: AccountInterface,
ctoken_variant: T,
) -> Result<Self, anchor_lang::error::Error>
where
T: light_sdk::compressible::IntoCTokenVariant<V, TokenData>,
{
use anchor_lang::AnchorDeserialize;

let data = interface.compressed_data().ok_or_else(|| {
anchor_lang::error::Error::from(anchor_lang::error::ErrorCode::AccountNotInitialized)
})?;
let token_data = TokenData::deserialize(&mut &data[..])?;
let variant = ctoken_variant.into_ctoken_variant(token_data);
Ok(Self::new(interface, variant))
}
}

🧹 Nitpick | 🔵 Trivial

RentFreeDecompressAccount provides good ergonomics for variant construction.

The from_seeds and from_ctoken constructors (anchor-gated) offer convenient ways to build decompression requests. However, both methods return anchor_lang::error::Error with AccountNotInitialized when compressed_data() returns None. That error code doesn't semantically fit: the account is initialized, it is simply hot rather than cold, so the caller passed a hot account where a cold one was expected.

Consider a more descriptive error or at least a doc comment clarifying this precondition.

💡 Suggested improvement for error clarity
     #[cfg(feature = "anchor")]
     pub fn from_seeds<S>(
         interface: AccountInterface,
         seeds: S,
     ) -> Result<Self, anchor_lang::error::Error>
     where
         S: light_sdk::compressible::IntoVariant<V>,
     {
+        // Precondition: interface must be cold (compressed). Hot accounts have no compressed_data.
         let data = interface.compressed_data().ok_or_else(|| {
-            anchor_lang::error::Error::from(anchor_lang::error::ErrorCode::AccountNotInitialized)
+            // AccountNotInitialized is used here to indicate missing compressed data,
+            // which occurs when a hot account is passed instead of a cold one.
+            anchor_lang::error::Error::from(anchor_lang::error::ErrorCode::AccountNotInitialized)
         })?;
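The more descriptive error this comment asks for could look like the following minimal sketch. This is plain std Rust with hypothetical names (RentfreeError, the free-standing compressed_data and from_seeds helpers) standing in for a crate-specific Anchor #[error_code] enum and the real AccountInterface API:

```rust
use std::fmt;

// Hypothetical crate-specific error; in the real SDK this would be an
// Anchor #[error_code] enum rather than a hand-rolled type.
#[derive(Debug)]
enum RentfreeError {
    // The caller passed a hot (on-chain) account where a cold
    // (compressed) one was expected, so no compressed payload exists.
    MissingCompressedData,
}

impl fmt::Display for RentfreeError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            RentfreeError::MissingCompressedData => write!(
                f,
                "account has no compressed data (hot account passed where a cold one was expected)"
            ),
        }
    }
}

// Stand-in for AccountInterface::compressed_data().
fn compressed_data(interface: Option<&[u8]>) -> Option<&[u8]> {
    interface
}

// Precondition: the interface must be cold (compressed).
fn from_seeds(interface: Option<&[u8]>) -> Result<Vec<u8>, RentfreeError> {
    let data = compressed_data(interface).ok_or(RentfreeError::MissingCompressedData)?;
    Ok(data.to_vec())
}

fn main() {
    assert!(from_seeds(Some(&[1, 2, 3])).is_ok());
    assert!(matches!(
        from_seeds(None),
        Err(RentfreeError::MissingCompressedData)
    ));
}
```

The point is that the error name and message now describe the actual precondition violation instead of reusing AccountNotInitialized.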
🤖 Prompt for AI Agents
In `@sdk-libs/compressible-client/src/lib.rs` around lines 148 - 231, The methods
from_seeds and from_ctoken currently map a missing compressed payload
(interface.compressed_data() -> None) to
anchor_lang::error::ErrorCode::AccountNotInitialized which is misleading; change
the error returned when compressed_data() is None to a more descriptive error
(e.g., anchor_lang::error::ErrorCode::InvalidAccountData or a crate-specific
Anchor error you define) and update the doc comments for from_seeds and
from_ctoken to state the precondition that the provided AccountInterface must be
a cold/compressed account; locate the checks using
interface.compressed_data().ok_or_else(...) in the functions and replace the
ErrorCode::AccountNotInitialized construction and the doc text accordingly.

Contributor
@coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@sdk-libs/compressible-client/src/decompress_mint.rs`:
- Around line 31-48: The code indexes proof_result.accounts[0] without checking
for emptiness and can panic; add a new enum variant to DecompressMintError
(e.g., MissingProofAccount) and replace both uses of proof_result.accounts[0]
with
proof_result.accounts.get(0).ok_or(DecompressMintError::MissingProofAccount)? so
the function returns an error instead of panicking in both locations where
proof_result.accounts is accessed (the two sites referenced in the review).

In `@sdk-libs/compressible-client/src/lib.rs`:
- Around line 416-425: The code currently unwraps the CPI context on
compressed_accounts iteration which can panic (see the use of .map(|(acc, _)|
acc.tree_info.cpi_context.unwrap()) when has_tokens is true); change this to
return a descriptive Err when cpi_context is None instead of panicking: locate
the logic building first_token_cpi_context (using compressed_accounts and the
found token account) and if tree_info.cpi_context is None return
Err(Box::new(std::io::Error::new(...)) or a custom error) with a message
referencing the token account and suggestive context, then pass the unwrapped
Pubkey into SystemAccountMetaConfig::new_with_cpi_context(*program_id,
first_token_cpi_context) and keep calling
remaining_accounts.add_system_accounts_v2(system_config)?.
♻️ Duplicate comments (6)
sdk-libs/macros/src/compressible/file_scanner.rs (1)

74-91: Surface parse/extraction failures instead of skipping.

Line 74-81 and Line 87-91/111-116 swallow parse/extraction errors, which can silently omit rentfree specs and generate incorrect macro output. Record these in ScannedModuleInfo.errors (or fail fast) so users see actionable diagnostics.

🛠️ Suggested error capture
-    let parsed: syn::File = match syn::parse_str(&contents) {
-        Ok(f) => f,
-        Err(e) => {
-            // Not all files may be valid on their own (e.g., test files with main)
-            // Just skip them silently
-            let _ = e;
-            return;
-        }
-    };
+    let parsed: syn::File = match syn::parse_str(&contents) {
+        Ok(f) => f,
+        Err(e) => {
+            result.errors.push(format!("Failed to parse {:?}: {}", path, e));
+            return;
+        }
+    };
@@
-            Item::Struct(item_struct) => {
-                if let Ok(Some((info, struct_name))) = try_extract_from_struct(&item_struct) {
-                    result.pda_specs.extend(info.pda_fields);
-                    result.token_specs.extend(info.token_fields);
-                    result.rentfree_struct_names.insert(struct_name);
-                }
-            }
+            Item::Struct(item_struct) => match try_extract_from_struct(&item_struct) {
+                Ok(Some((info, struct_name))) => {
+                    result.pda_specs.extend(info.pda_fields);
+                    result.token_specs.extend(info.token_fields);
+                    result.rentfree_struct_names.insert(struct_name);
+                }
+                Ok(None) => {}
+                Err(e) => result
+                    .errors
+                    .push(format!("Failed to extract {:?}: {}", path, e)),
+            },
@@
-            Item::Struct(item_struct) => {
-                if let Ok(Some((info, struct_name))) = try_extract_from_struct(item_struct) {
-                    result.pda_specs.extend(info.pda_fields);
-                    result.token_specs.extend(info.token_fields);
-                    result.rentfree_struct_names.insert(struct_name);
-                }
-            }
+            Item::Struct(item_struct) => match try_extract_from_struct(item_struct) {
+                Ok(Some((info, struct_name))) => {
+                    result.pda_specs.extend(info.pda_fields);
+                    result.token_specs.extend(info.token_fields);
+                    result.rentfree_struct_names.insert(struct_name);
+                }
+                Ok(None) => {}
+                Err(e) => result
+                    .errors
+                    .push(format!("Failed to extract in inline module {}: {}", module.ident, e)),
+            },

Also applies to: 111-116

sdk-libs/compressible-client/src/decompress_mint.rs (4)

125-128: Document DEFAULT_* derivation (epoch length + rent math).

The comments are too terse and the epoch duration appears off. Please document the rent formula/inputs (epoch length, rent-per-byte, write size) so users can recompute these defaults confidently.


170-175: Add a short note for the output_queue fallback.

Line 170-175 and Line 341-346 fall back to input_queue when next_tree_info is absent. A brief comment explaining the no-rollover case would improve readability.

Also applies to: 341-346


326-377: Fix duplicate step numbering in the decompression flow comments.

Line 331 repeats “5.”; renumber subsequent steps to keep the sequence readable.

📝 Suggested renumbering
-    // 5. Get validity proof
+    // 6. Get validity proof
@@
-    // 6. Extract tree info from proof result
+    // 7. Extract tree info from proof result
@@
-    // 7. Build CompressedMintWithContext
+    // 8. Build CompressedMintWithContext
@@
-    // 8. Build DecompressMint instruction
+    // 9. Build DecompressMint instruction

10-14: Doc list duplicates decompress_mint and omits the idempotent API.

Line 11-13 lists decompress_mint twice; update the bullets to reflect the actual APIs (e.g., include decompress_mint_idempotent) to avoid confusion.

sdk-libs/compressible-client/src/lib.rs (1)

607-614: Avoid .expect() on decompression_context in cold-account mapping.

If an AccountInterface is constructed inconsistently, this panics. Prefer ok_or and propagate the error.

🛡️ Defensive alternative
-        let compressed_accounts: Vec<(CompressedAccount, V)> = cold_accounts
-            .into_iter()
-            .map(|a| {
-                let compressed_account = a
-                    .account_interface
-                    .decompression_context
-                    .expect("Cold account must have decompression context")
-                    .compressed_account;
-                (compressed_account, a.variant)
-            })
-            .collect();
+        let compressed_accounts: Vec<(CompressedAccount, V)> = cold_accounts
+            .into_iter()
+            .map(|a| {
+                let compressed_account = a
+                    .account_interface
+                    .decompression_context
+                    .ok_or("Cold account missing decompression context")?
+                    .compressed_account;
+                Ok((compressed_account, a.variant))
+            })
+            .collect::<Result<_, Box<dyn std::error::Error>>>()?;
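The collect-into-Result pattern used in the defensive alternative is plain std Rust and can be sketched in isolation. The names below (ColdAccount, contexts) are illustrative, not the SDK's:

```rust
#[derive(Debug)]
struct ColdAccount {
    // Stand-in for the real decompression context type.
    decompression_context: Option<u64>,
}

// Map each account to its context, short-circuiting with an Err on the
// first account that has none, instead of panicking via .expect().
fn contexts(accounts: &[ColdAccount]) -> Result<Vec<u64>, String> {
    accounts
        .iter()
        .map(|a| {
            a.decompression_context
                .ok_or_else(|| "Cold account missing decompression context".to_string())
        })
        .collect() // Iterator<Item = Result<T, E>> collects into Result<Vec<T>, E>
}

fn main() {
    let ok = vec![
        ColdAccount { decompression_context: Some(1) },
        ColdAccount { decompression_context: Some(2) },
    ];
    assert_eq!(contexts(&ok), Ok(vec![1, 2]));

    let bad = vec![ColdAccount { decompression_context: None }];
    assert!(contexts(&bad).is_err());
}
```

Because collect over Results short-circuits, the first missing context surfaces as an error to the caller rather than a panic inside the mapping closure.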

Comment on lines +31 to +48
/// Error type for decompress mint operations.
#[derive(Debug, Error)]
pub enum DecompressMintError {
#[error("Indexer error: {0}")]
Indexer(#[from] IndexerError),

#[error("Compressed mint not found for signer {signer:?}")]
MintNotFound { signer: Pubkey },

#[error("Missing compressed mint data in account")]
MissingMintData,

#[error("Program error: {0}")]
ProgramError(#[from] ProgramError),

#[error("Proof required for cold mint")]
ProofRequired,
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

(Collapsed verification trace: scripts located sdk-libs/compressible-client/src/decompress_mint.rs, inspected the code around lines 167 and 338, searched for every `proof_result.accounts` access, and checked the `get_validity_proof` / `ValidityProofWithContext` definitions in the indexer crates. Script output omitted.)

Add bounds checking before accessing validity proof accounts to prevent panics.

Lines 167 and 338 index [0] on proof_result.accounts without checking if the vector is empty. If the indexer returns no accounts, this will panic. Use .get(0).ok_or() with a new error variant instead.

Proposed fix
 pub enum DecompressMintError {
@@
     #[error("Proof required for cold mint")]
     ProofRequired,
+    #[error("Validity proof is missing account entries")]
+    EmptyProofAccounts,
 }
@@
-    let account_info = &proof_result.accounts[0];
+    let account_info = proof_result
+        .accounts
+        .get(0)
+        .ok_or(DecompressMintError::EmptyProofAccounts)?;

Apply to both locations: lines 167 and 338.
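Generically, the non-panicking first-element lookup looks like this sketch (types simplified; EmptyProofAccounts mirrors the proposed new error variant):

```rust
#[derive(Debug, PartialEq)]
enum DecompressMintError {
    // Validity proof is missing account entries.
    EmptyProofAccounts,
}

// Stand-in for ValidityProofWithContext.accounts; u32 replaces the real
// account-info type for illustration.
fn first_account(accounts: &[u32]) -> Result<u32, DecompressMintError> {
    accounts
        .first() // equivalent to .get(0), and what clippy suggests
        .copied()
        .ok_or(DecompressMintError::EmptyProofAccounts)
}

fn main() {
    assert_eq!(first_account(&[7, 8]), Ok(7));
    // An empty proof now yields a typed error instead of a panic.
    assert_eq!(first_account(&[]), Err(DecompressMintError::EmptyProofAccounts));
}
```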

🤖 Prompt for AI Agents
In `@sdk-libs/compressible-client/src/decompress_mint.rs` around lines 31 - 48,
The code indexes proof_result.accounts[0] without checking for emptiness and can
panic; add a new enum variant to DecompressMintError (e.g., MissingProofAccount)
and replace both uses of proof_result.accounts[0] with
proof_result.accounts.get(0).ok_or(DecompressMintError::MissingProofAccount)? so
the function returns an error instead of panicking in both locations where
proof_result.accounts is accessed (the two sites referenced in the review).

Comment on lines 416 to 425
if has_pdas && has_tokens {
let cpi_context_of_first_input =
compressed_accounts[0].0.tree_info.cpi_context.unwrap();
let system_config = SystemAccountMetaConfig::new_with_cpi_context(
*program_id,
cpi_context_of_first_input,
);
// Find the first token account's CPI context
let first_token_cpi_context = compressed_accounts
.iter()
.find(|(acc, _)| acc.owner == LIGHT_TOKEN_PROGRAM_ID.into())
.map(|(acc, _)| acc.tree_info.cpi_context.unwrap())
.expect("has_tokens is true so there must be a token");
let system_config =
SystemAccountMetaConfig::new_with_cpi_context(*program_id, first_token_cpi_context);
remaining_accounts.add_system_accounts_v2(system_config)?;

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

(Collapsed verification trace: scripts located sdk-libs/compressible-client/src/lib.rs, inspected the `has_pdas && has_tokens` branch and the surrounding function plus its error type, and confirmed in sdk-libs/client/src/indexer/tree_info.rs that `cpi_context` is an `Option<Pubkey>`. Script output omitted.)


Replace unwrap panic with descriptive error handling for missing CPI context.

Line 421 uses .unwrap() on cpi_context, an Option<Pubkey> from the indexer. If the indexer omits this value, the code panics instead of returning a proper error. Since the function already returns Result<Instruction, Box<dyn std::error::Error>>, handle the missing context gracefully.

🛠️ Suggested fix
let first_token_cpi_context = compressed_accounts
    .iter()
    .find(|(acc, _)| acc.owner == LIGHT_TOKEN_PROGRAM_ID.into())
+   .and_then(|(acc, _)| acc.tree_info.cpi_context)
+   .ok_or("Token account missing CPI context from indexer")?;
-   .map(|(acc, _)| acc.tree_info.cpi_context.unwrap())
-   .expect("has_tokens is true so there must be a token");
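The find + and_then + ok_or chain from the fix composes as in the sketch below (types simplified; the real code iterates over (CompressedAccount, V) tuples and compares against LIGHT_TOKEN_PROGRAM_ID):

```rust
#[derive(Debug)]
struct Acc {
    owner: u8,
    cpi_context: Option<u64>,
}

const TOKEN_OWNER: u8 = 1;

fn first_token_cpi_context(accounts: &[Acc]) -> Result<u64, String> {
    accounts
        .iter()
        .find(|acc| acc.owner == TOKEN_OWNER)
        // and_then flattens Option<&Acc> -> Option<u64>, so both "no token
        // account" and "token account without context" become None.
        .and_then(|acc| acc.cpi_context)
        .ok_or_else(|| "Token account missing CPI context from indexer".to_string())
}

fn main() {
    let accounts = vec![
        Acc { owner: 0, cpi_context: None },
        Acc { owner: TOKEN_OWNER, cpi_context: Some(42) },
    ];
    assert_eq!(first_token_cpi_context(&accounts), Ok(42));

    // A token account without a CPI context now yields an error, not a panic.
    let missing = vec![Acc { owner: TOKEN_OWNER, cpi_context: None }];
    assert!(first_token_cpi_context(&missing).is_err());
}
```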
🤖 Prompt for AI Agents
In `@sdk-libs/compressible-client/src/lib.rs` around lines 416 - 425, The code
currently unwraps the CPI context on compressed_accounts iteration which can
panic (see the use of .map(|(acc, _)| acc.tree_info.cpi_context.unwrap()) when
has_tokens is true); change this to return a descriptive Err when cpi_context is
None instead of panicking: locate the logic building first_token_cpi_context
(using compressed_accounts and the found token account) and if
tree_info.cpi_context is None return Err(Box::new(std::io::Error::new(...)) or a
custom error) with a message referencing the token account and suggestive
context, then pass the unwrapped Pubkey into
SystemAccountMetaConfig::new_with_cpi_context(*program_id,
first_token_cpi_context) and keep calling
remaining_accounts.add_system_accounts_v2(system_config)?.

Contributor
@coderabbitai bot left a comment

Actionable comments posted: 14

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
sdk-libs/macros/src/compressible/seed_providers.rs (2)

272-364: Consider extracting duplicated seed-to-TokenStream conversion logic.

The seed processing logic (lines 272-310 for token seeds and lines 326-364 for authority seeds) is nearly identical—handling literals, byte strings, uppercase constants, and ctx fields the same way. This duplication makes maintenance harder.

♻️ Suggested refactor

Extract a helper function:

fn seed_element_to_token_stream(seed: &SeedElement) -> TokenStream {
    match seed {
        SeedElement::Literal(lit) => {
            let value = lit.value();
            quote! { #value.as_bytes() }
        }
        SeedElement::Expression(expr) => {
            // Handle byte string literals
            if let syn::Expr::Lit(lit_expr) = &**expr {
                if let syn::Lit::ByteStr(byte_str) = &lit_expr.lit {
                    let bytes = byte_str.value();
                    return quote! { &[#(#bytes),*] };
                }
            }
            // Handle uppercase constants
            if let syn::Expr::Path(path_expr) = &**expr {
                if let Some(ident) = path_expr.path.get_ident() {
                    let ident_str = ident.to_string();
                    if ident_str.chars().all(|c| c.is_uppercase() || c == '_' || c.is_ascii_digit()) {
                        if ident_str == "LIGHT_CPI_SIGNER" {
                            return quote! { crate::#ident.cpi_signer.as_ref() };
                        } else {
                            return quote! { { let __seed: &[u8] = crate::#ident.as_ref(); __seed } };
                        }
                    }
                }
            }
            // Handle ctx.* field references
            if let Some(field_name) = extract_ctx_field_name(expr) {
                return quote! { #field_name.as_ref() };
            }
            // Fallback
            quote! { (#expr).as_ref() }
        }
    }
}

Then use it in both places:

let token_seed_refs: Vec<TokenStream> = spec.seeds.iter()
    .map(seed_element_to_token_stream)
    .collect();
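The uppercase-constant heuristic used when classifying path expressions can be exercised on its own. This sketch extracts the same predicate (with an added non-empty guard) outside of any macro context:

```rust
// True when an identifier looks like a Rust constant: only uppercase
// letters, underscores, or ASCII digits. Mixed-case names like `A1b2`
// fail the check and are treated as context accounts instead.
fn is_constant_ident(ident_str: &str) -> bool {
    !ident_str.is_empty()
        && ident_str
            .chars()
            .all(|c| c.is_uppercase() || c == '_' || c.is_ascii_digit())
}

fn main() {
    assert!(is_constant_ident("MY_SEED_123"));
    assert!(is_constant_ident("LIGHT_CPI_SIGNER"));
    assert!(!is_constant_ident("user"));
    assert!(!is_constant_ident("A1b2"));
}
```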

598-606: Redundant branching—both branches are identical.

Both the Some(_segment) and else branches push the exact same code. The segment is captured but never used, and the logic is duplicated.

♻️ Simplified version
-if let Some(_segment) = path.path.segments.first() {
-    parameters.push(quote! { #field_name: &solana_pubkey::Pubkey });
-    expressions
-        .push(quote! { #field_name.as_ref() });
-} else {
-    parameters.push(quote! { #field_name: &solana_pubkey::Pubkey });
-    expressions
-        .push(quote! { #field_name.as_ref() });
-}
+parameters.push(quote! { #field_name: &solana_pubkey::Pubkey });
+expressions.push(quote! { #field_name.as_ref() });
sdk-libs/token-sdk/src/pack.rs (1)

247-264: Clarify the Unpack assumption in the comment.

Lines 257-258 note that the impl assumes V is already unpacked (has Pubkeys). This is important API guidance. Since the note sits inside the function body, a regular // comment is the right form at that spot; /// doc comments are not valid on statements, so to surface the constraint in generated documentation, add a /// doc comment to the impl block or to the unpack method itself instead.

sdk-libs/compressible-client/src/lib.rs (1)

449-471: Index-based tree info matching requires careful ordering guarantees.

The comment at lines 449-453 explains the assumption: compressed accounts must be in the same order as validity_proof_with_context.accounts. This is a critical invariant—if violated, accounts will be matched with wrong tree infos, causing proof verification failures.

This is documented well, but consider adding a debug assertion or explicit length check to catch mismatches early:

debug_assert_eq!(
    compressed_accounts.len(),
    packed_tree_infos_slice.len(),
    "Compressed accounts must match validity proof accounts"
);
🤖 Fix all issues with AI agents
In `@sdk-libs/compressible-client/src/initialize_config.rs`:
- Around line 144-150: The hardcoded DISCRIMINATOR constant in
initialize_config.rs (const DISCRIMINATOR and the string
"global:initialize_compression_config") is brittle; add a verification that the
constant equals SHA256("global:initialize_compression_config")[..8] by
implementing a test or compile-time assertion: compute the SHA256 of the literal
instruction name and compare its first 8 bytes to DISCRIMINATOR (fail the
build/test if mismatched). Place the check referencing DISCRIMINATOR and the
literal "global:initialize_compression_config" (e.g., in a #[cfg(test)] unit
test or a build-time check) so future name changes will surface as a failing
assertion.

In `@sdk-libs/macros/src/compressible/anchor_seeds.rs`:
- Around line 458-473: Add a brief doc comment above the Expr::Path match arm
explaining the identifier classification heuristic: that identifiers matching
all uppercase letters, underscores, or ASCII digits are treated as constants
(ClassifiedSeed::Constant) while any other single-segment identifier becomes a
context account (ClassifiedSeed::CtxAccount); reference the Expr::Path match,
the ident variable extraction, and the ClassifiedSeed::Constant /
ClassifiedSeed::CtxAccount outcomes so future maintainers understand examples
like MY_SEED_123 vs user or mixed-case names like A1b2.
- Around line 270-276: The loop that calls classify_seed_expr(elem) silently
ignores Err results and drops seeds (in the block that sets authority_seeds from
array.elems); change this to propagate or report errors instead of swallowing
them: replace the for/if let Ok(...) pattern with an iterator that collects
Results (e.g., map(classify_seed_expr).collect::<Result<Vec<_>, _>>() ) and use
? to return the error or convert and return a descriptive error from the
surrounding function, so authority_seeds is only set on success; apply the same
fix to the similar pattern at the other location (the block around lines
291-295) to ensure all seed classification errors are surfaced.

In `@sdk-libs/macros/src/compressible/decompress_context.rs`:
- Around line 164-170: Replace the magic number assignment to token_data.version
with a named constant and document why version 3 is used: instead of setting
data.token_data.version = 3 inside the RentFreeAccountVariant::PackedCTokenData
arm, define and use a constant (e.g., CTOKEN_DATA_VERSION) and add a short
comment above the assignment explaining the protocol semantic or compatibility
reason for version 3; update the assignment in that arm (and any related places
if present) to use CTOKEN_DATA_VERSION and ensure compressed_token_accounts
logic remains unchanged.

In `@sdk-libs/macros/src/compressible/file_scanner.rs`:
- Around line 183-187: The test test_resolve_path currently only prints
resolve_crate_src_path() which means it never fails; update the test to assert
meaningful properties of resolve_crate_src_path(): call
resolve_crate_src_path(), assert the returned Path (or String) is non-empty and
that Path::exists() is true, then assert it points to either a directory
containing a "Cargo.toml" or contains a "src" subdirectory (or is a file within
the crate), so failures surface; modify test_resolve_path to use assertions
(e.g., assert! and/or assert_eq!) around resolve_crate_src_path() to validate
these properties.
- Around line 158-177: The function resolve_crate_src_path currently returns the
manifest directory itself when CARGO_MANIFEST_DIR is set but manifest_dir/src
doesn't exist; change this to return the manifest_dir.join("src") path (i.e.
return PathBuf::from(manifest_dir).join("src")) so callers consistently get a
src path to validate/handle, rather than a manifest root that may contain
Cargo.toml/target; update the branch that currently returns
PathBuf::from(manifest_dir) to return the joined src_path instead and keep the
existing fallback to current_dir for when CARGO_MANIFEST_DIR is not set.
- Around line 28-38: The function scan_module_for_compressible currently ignores
its _module: &ItemMod parameter and always returns Ok(...), so simplify the API
by removing the unused _module argument from scan_module_for_compressible,
change its return type from syn::Result<ScannedModuleInfo> to ScannedModuleInfo,
call scan_directory_recursive(base_path, &mut result) as before and return
result directly (no Ok wrapper), and update all call sites to match the new
signature; if _module is intentionally reserved, instead add a short doc comment
to ItemMod parameter explaining it’s reserved and actually reference it or keep
it named module to avoid leading underscore. Also ensure callers no longer
expect a Result and continue to read errors from result.errors.
- Around line 52-60: The loop currently uses entries.flatten() which silently
drops Err variants; instead iterate over entries as Result<DirEntry, io::Error>
and handle the Err cases (e.g., log or collect and propagate them) so
permission/corruption errors aren’t lost; for each Ok(entry) call entry.path()
and then continue to call scan_directory_recursive(&path, result) or
scan_rust_file(&path, result) as before, but ensure the function signature(s)
(e.g., scan_directory_recursive and scan_rust_file) and the caller properly
propagate or record entry-level errors rather than ignoring them.
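The two file_scanner fixes above (returning `manifest_dir/src` and surfacing per-entry IO errors) can be sketched together in standalone form; `ScanResult` and the function signatures are illustrative stand-ins, not the macro crate's actual types:

```rust
use std::fs;
use std::io;
use std::path::{Path, PathBuf};

// Illustrative stand-in for the scanner's accumulator type.
#[derive(Default, Debug)]
struct ScanResult {
    rust_files: Vec<PathBuf>,
    errors: Vec<String>,
}

// Always return `<manifest_dir>/src` when the manifest dir is known, so
// callers consistently validate a src path instead of the crate root.
fn resolve_src_path(manifest_dir: Option<&str>, fallback: PathBuf) -> PathBuf {
    match manifest_dir {
        Some(dir) => PathBuf::from(dir).join("src"),
        None => fallback,
    }
}

// Record per-entry IO errors instead of letting `entries.flatten()` drop them.
fn scan_directory_recursive(path: &Path, result: &mut ScanResult) -> io::Result<()> {
    for entry in fs::read_dir(path)? {
        match entry {
            Ok(entry) => {
                let p = entry.path();
                if p.is_dir() {
                    scan_directory_recursive(&p, result)?;
                } else if p.extension() == Some(std::ffi::OsStr::new("rs")) {
                    result.rust_files.push(p);
                }
            }
            Err(e) => result.errors.push(format!("{}: {e}", path.display())),
        }
    }
    Ok(())
}

fn main() -> io::Result<()> {
    assert_eq!(
        resolve_src_path(Some("/tmp/my-crate"), PathBuf::from(".")),
        PathBuf::from("/tmp/my-crate/src")
    );

    let dir = std::env::temp_dir().join("scan_demo_example");
    fs::create_dir_all(&dir)?;
    fs::write(dir.join("lib.rs"), "pub fn f() {}")?;
    let mut result = ScanResult::default();
    scan_directory_recursive(&dir, &mut result)?;
    assert!(result.rust_files.iter().any(|p| p.ends_with("lib.rs")));
    assert!(result.errors.is_empty());
    println!("ok");
    Ok(())
}
```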

In `@sdk-libs/macros/src/compressible/light_compressible.rs`:
- Around line 161-167: The assertions are too loose and
`output.contains("Pack")` can match substrings like `PackedUserRecord`; tighten
them by checking for concrete impls and type declarations instead — replace the
three contains checks with stronger checks such as verifying `output` contains
the exact impl lines `impl Pack for UserRecord` and `impl Unpack for UserRecord`
and a proper struct declaration like `pub struct PackedUserRecord` (or parse the
TokenStream and assert presence of Impl and Struct items named `Pack`, `Unpack`,
and `PackedUserRecord`), referencing the existing variables/assertion block that
uses `output`, `PackedUserRecord`, `Pack`, and `Unpack`.

In `@sdk-libs/macros/src/compressible/seed_providers.rs`:
- Around line 713-719: The match arm for syn::Expr::Lit currently only handles
syn::Lit::ByteStr and silently ignores other literal kinds, causing a
parameters/expressions length mismatch; update the syn::Expr::Lit branch to
produce an explicit compile-time error for non-ByteStr literals by using
syn::Error::new_spanned on lit_expr (or the offending literal) with a clear
message like "expected byte string literal (b\"...\")" and propagate it as a
compile_error (e.g., via to_compile_error()) so the macro fails loudly instead
of skipping entries; keep the existing handling of byte_str and
expressions.push(...) intact.

In `@sdk-libs/macros/src/finalize/parse.rs`:
- Around line 312-320: The default expressions for address_tree_info and
output_tree hardcode the identifier params, which breaks when the instruction's
first argument has a different name; instead, capture the actual first-arg
identifier (e.g., let first_arg_ident = &args.first_arg_ident or however the
code accesses the first instruction arg) and use it when building the
syn::parse_quote! defaults (replace params.create_accounts_proof... with
<first_arg_ident>.create_accounts_proof... via interpolation into parse_quote!).
Apply the same pattern to output_tree and to the #[light_mint]
default-generation code paths so all defaults reference the actual first
argument identifier rather than the literal params identifier.

In `@sdk-libs/token-sdk/src/compressible/decompress_runtime.rs`:
- Around line 256-273: The current branch decision uses
token_signers_seed_groups.is_empty() which is false whenever any token was
processed (even if all groups are empty), causing ATA-only flows to call
solana_cpi::invoke_signed; change the logic to detect non-empty signer groups
instead (e.g., let has_signers = token_signers_seed_groups.iter().any(|g|
!g.is_empty())); if has_signers call solana_cpi::invoke_signed(&ctoken_ix,
all_account_infos.as_slice(), signer_seed_slices.as_slice()) after building
signer_seed_refs/signer_seed_slices only for the non-empty groups, otherwise
call solana_cpi::invoke(&ctoken_ix, all_account_infos.as_slice()) for the
ATA-only path.
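The branch decision reduces to a one-line check on the inner groups. This sketch returns a label instead of calling the actual `solana_cpi::invoke` / `invoke_signed`, which require a runtime:

```rust
// Decide the CPI flavor from the signer seed groups. An ATA-only flow may
// push one (empty) group per token, so checking the outer vec's is_empty()
// would wrongly pick invoke_signed; inspect the inner groups instead.
fn choose_invocation(token_signers_seed_groups: &[Vec<Vec<u8>>]) -> &'static str {
    let has_signers = token_signers_seed_groups.iter().any(|g| !g.is_empty());
    if has_signers {
        "invoke_signed"
    } else {
        "invoke"
    }
}

fn main() {
    // ATA-only flow: a group was pushed per token, but every group is empty.
    assert_eq!(choose_invocation(&[vec![], vec![]]), "invoke");
    // PDA flow: at least one group carries seeds.
    assert_eq!(
        choose_invocation(&[vec![], vec![b"vault".to_vec()]]),
        "invoke_signed"
    );
    println!("ok");
}
```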
♻️ Duplicate comments (25)
sdk-libs/macros/src/compressible/file_scanner.rs (3)

75-93: Parse and extraction errors are silently swallowed.

This concern was already raised in a previous review. Parse failures (lines 77-82) and extraction errors (line 88's if let Ok(Some(...))) are both silently ignored, which can lead to missing seed specs without any user feedback.


110-124: Same silent error swallowing in inline module scanning.

This matches the pattern already flagged in the previous review. Extraction errors from try_extract_from_struct are silently ignored.


133-142: Malformed #[derive(...)] attributes silently fail the check.

If attr.parse_args_with(...) fails (line 135-137), the closure returns false, treating the struct as if it doesn't have #[derive(Accounts)]. While this might be acceptable (truly malformed derives would fail compilation anyway), you lose the opportunity to surface helpful diagnostics during macro expansion.

sdk-libs/macros/src/compressible/anchor_seeds.rs (1)

107-124: Empty seeds validation still missing for #[rentfree] and #[rentfree_token] fields.

When extract_anchor_seeds returns an empty vector (no seeds = [...] found), the code proceeds to construct ExtractedSeedSpec or ExtractedTokenSpec with empty seeds. A PDA with no seeds derives solely from the program ID, which is almost certainly unintended.

Consider failing fast with a compile error when seeds are empty:

🛠️ Suggested validation
             // Extract seeds from #[account(seeds = [...])]
             let seeds = extract_anchor_seeds(&field.attrs)?;
+            if seeds.is_empty() {
+                return Err(syn::Error::new_spanned(
+                    &field_ident,
+                    "#[rentfree] field requires #[account(seeds = [...])]",
+                ));
+            }

Apply similarly for the rentfree_token branch at line 124.

sdk-libs/macros/src/compressible/decompress_context.rs (1)

47-100: The two ctx_fields branches still share ~90% identical code.

The match arm bodies for empty vs non-empty ctx_fields are nearly identical — the only differences are the pattern destructuring and the resolve_ctx_seeds block. While this works, it's a maintenance burden if the handle_packed_pda_variant call signature ever changes.

That said, this is a refactor suggestion, not a blocker. The generated code is correct.

sdk-libs/macros/src/compressible/seed_providers.rs (2)

74-83: Method-call argument recursion still missing in extract_ctx_from_expr.

This helper only descends into method_call.receiver but skips method_call.args. Seeds like max_key(&ctx.accounts.foo.key()) will fail to extract the foo field from the argument.

The fix from the previous review still needs to be applied:

 } else if let syn::Expr::MethodCall(method_call) = expr {
     extract_ctx_from_expr(&method_call.receiver, ctx_fields, seen);
+    for arg in &method_call.args {
+        extract_ctx_from_expr(arg, ctx_fields, seen);
+    }
 }

413-442: extract_ctx_field_name still lacks recursive unwrapping for wrapper expressions.

As noted in the previous review, this function doesn't recurse into MethodCall, Reference, or Call expressions. When a seed uses &ctx.accounts.foo.key(), the function returns None instead of Some(foo).

The proposed fix from the previous review should be applied to handle these wrapper patterns.

sdk-libs/macros/src/finalize/codegen.rs (2)

285-286: Avoid expect on proofs in generated CPI paths.
Panics are fatal on-chain; convert missing proofs into a typed error instead of aborting. (Apply to both mint paths.)

✅ Proposed fix
-            let __proof: light_token_sdk::CompressedProof = #params_ident.create_accounts_proof.proof.0.clone()
-                .expect("proof is required for mint creation");
+            let __proof: light_token_sdk::CompressedProof = #params_ident
+                .create_accounts_proof
+                .proof
+                .0
+                .clone()
+                .ok_or_else(|| light_sdk::error::LightSdkError::MissingProof)?;

Use an existing error variant (or add one) appropriate for “missing proof”.

Also applies to: 450-451


442-445: Mint-only path picks the address queue, not the output state queue.
Mint creation should use create_accounts_proof.output_state_tree_index for the output queue; using the address queue can route the mint to the wrong queue.

✅ Proposed fix
-            let output_queue = cpi_accounts.get_tree_account_info(__tree_info.address_queue_pubkey_index as usize)?;
+            let output_queue_index = #params_ident.create_accounts_proof.output_state_tree_index;
+            let output_queue = cpi_accounts.get_tree_account_info(output_queue_index as usize)?;
sdk-libs/token-sdk/src/compressible/decompress_runtime.rs (1)

215-223: Prefer authority() accessor over account_infos().first() for clarity.
This is the same concern raised previously; the comment currently claims index 0 is the light system program, but it’s the CPI authority.

🔧 Suggested fix
-    // Add required system accounts for transfer2 instruction
-    // Light system program is at index 0 in the cpi_accounts slice
+    // Add required system accounts for transfer2 instruction
+    // Index 0 is the CPI authority (used to invoke the light system program)
     all_account_infos.push(
         cpi_accounts
-            .account_infos()
-            .first()
-            .ok_or(ProgramError::NotEnoughAccountKeys)?
+            .authority()
+            .map_err(|_| ProgramError::InvalidAccountData)?
             .clone(),
     );
sdk-libs/compressible-client/src/pack.rs (1)

101-119: Guard system_accounts_offset against u8 truncation.
A plain cast can silently wrap if the pre-accounts exceed 255. Use u8::try_from and return a proper error.

🔧 Suggested fix
 pub enum PackError {
     #[error("Failed to add system accounts: {0}")]
     SystemAccounts(#[from] light_sdk::error::LightSdkError),
+    #[error("system_accounts_offset exceeds u8::MAX: {0}")]
+    SystemAccountsOffsetTooLarge(usize),
 }
@@
-    Ok(PackedProofResult {
+    let system_accounts_offset = u8::try_from(system_offset)
+        .map_err(|_| PackError::SystemAccountsOffsetTooLarge(system_offset))?;
+    Ok(PackedProofResult {
         remaining_accounts,
         packed_tree_infos,
         output_tree_index,
-        system_accounts_offset: system_offset as u8,
+        system_accounts_offset,
     })
sdk-libs/token-sdk/src/token/mod.rs (1)

6-8: Doc bullet references a non-exported symbol.
CreateAssociatedCTokenAccount doesn’t appear in exports; use CreateAssociatedTokenAccount to keep links valid.

🔧 Suggested fix
-//! - [`CreateAssociatedCTokenAccount`] - Create associated ctoken account (ATA) instruction
+//! - [`CreateAssociatedTokenAccount`] - Create associated ctoken account (ATA) instruction
sdk-libs/compressible-client/src/decompress_mint.rs (5)

33-50: Add error variant for empty proof accounts.

The error enum is missing a variant for when proof_result.accounts is empty. Lines 170 and 341 index [0] directly, which will panic if the indexer returns no accounts.

🐛 Proposed fix
 pub enum DecompressMintError {
     #[error("Indexer error: {0}")]
     Indexer(#[from] IndexerError),

     #[error("Compressed mint not found for signer {signer:?}")]
     MintNotFound { signer: Pubkey },

     #[error("Missing compressed mint data in account")]
     MissingMintData,

     #[error("Program error: {0}")]
     ProgramError(#[from] ProgramError),

     #[error("Proof required for cold mint")]
     ProofRequired,
+
+    #[error("Validity proof returned no accounts")]
+    EmptyProofAccounts,
 }

128-131: Epoch duration comment is inaccurate.

The comment says "~24 hours per epoch" but Solana epochs are approximately 2.5 days (432,000 slots). Based on learnings from program-libs/compressible/docs/SOLANA_RENT.md, update the comment to reflect the correct duration.

📝 Fix comment
-/// Default rent payment in epochs (~24 hours per epoch)
+/// Default rent payment in epochs (~2.5 days per epoch)
 pub const DEFAULT_RENT_PAYMENT: u8 = 2;

10-13: Clarify API list—two entries share the same name.

Lines 11 and 13 both say decompress_mint. One is the simple async API and the other is described as a "high-perf wrapper." Rename the third entry to match the actual function name or clarify the distinction.

📝 Suggested fix
 //! Three APIs are provided:
 //! - `decompress_mint`: Simple async API (fetches state + proof internally)
 //! - `build_decompress_mint`: Sync, caller provides pre-fetched state + proof
-//! - `decompress_mint`: High-perf wrapper (takes MintInterface, fetches proof internally)
+//! - `decompress_mint_idempotent`: Idempotent API via DecompressMintRequest builder

167-178: Guard against empty proof accounts to prevent panic.

proof_result.accounts[0] will panic if the vector is empty. Use .get(0) with proper error handling.

🐛 Proposed fix
     // Proof required for cold mint
     let proof_result = validity_proof.ok_or(DecompressMintError::ProofRequired)?;

     // Extract tree info from proof result
-    let account_info = &proof_result.accounts[0];
+    let account_info = proof_result
+        .accounts
+        .get(0)
+        .ok_or(DecompressMintError::EmptyProofAccounts)?;
     let state_tree = account_info.tree_info.tree;

329-349: Fix comment numbering and guard against empty proof accounts.

Two issues here:

  1. Lines 329 and 334 both have "// 5." comments—renumber sequentially
  2. Line 341 indexes proof_result.accounts[0] without bounds checking
🐛 Proposed fixes
-    // 5. Check if already decompressed flag is set - return empty vec (idempotent)
+    // 5. Check decompressed flag
     if mint_data.metadata.cmint_decompressed {
         return Ok(vec![]);
     }

-    // 5. Get validity proof
+    // 6. Get validity proof
     let proof_result = indexer
         .get_validity_proof(vec![compressed_account.hash], vec![], None)
         .await?
         .value;

-    // 6. Extract tree info from proof result
-    let account_info = &proof_result.accounts[0];
+    // 7. Extract tree info from proof result
+    let account_info = proof_result
+        .accounts
+        .get(0)
+        .ok_or(DecompressMintError::EmptyProofAccounts)?;
sdk-libs/compressible-client/src/decompress_atas.rs (2)

178-182: Consider removing unnecessary clone on Copy type.

AccountState appears to be a simple enum that should implement Copy. The .clone() works but is unnecessary overhead.

♻️ Minor optimization
     #[inline]
     pub fn state(&self) -> AccountState {
-        self.token_data.state.clone()
+        self.token_data.state
     }

650-681: compression_index must be unique per account in batch.

Line 664 hardcodes compression_index: 0 for every CompressedOnly extension entry. When decompressing multiple accounts with this extension in a single batch, the transfer2 instruction will reject the transaction with DuplicateCompressionIndex.

Use the loop counter to assign unique indices:

🐛 Fix duplicate compression_index
                         Some(ExtensionInstructionData::CompressedOnly(
                             CompressedOnlyExtensionInstructionData {
                                 delegated_amount: co.delegated_amount,
                                 withheld_transfer_fee: co.withheld_transfer_fee,
                                 is_frozen,
-                                compression_index: 0,
+                                compression_index: i as u8,
                                 is_ata: true,
                                 bump: ctx.ata_bump,
                                 owner_index, // Wallet owner who signs
                             },
                         ))
program-libs/compressible/Cargo.toml (1)

35-35: Dependency direction violation: program-libs cannot depend on sdk-libs.

The light-sdk-types crate is located in sdk-libs/sdk-types/, which creates an upward dependency from program-libs/compressible into the SDK layer. This violates the established dependency hierarchy:

  • ✅ SDK libs → program-libs (allowed)
  • ❌ Program-libs → SDK libs (not allowed)

Per the coding guidelines and CLAUDE.md: "Program-libs must depend only on other program-libs or external crates."

The types needed by program-libs/compressible (likely PackedAddressTreeInfo and related instruction types) should be moved to a program-libs crate so both layers can use them without creating backwards dependencies.

🏗️ Suggested resolution
  1. Move the instruction types from sdk-libs/sdk-types/src/instruction/ to program-libs/instruction-types/ or similar
  2. Update program-libs/compressible to depend on the new program-libs crate
  3. Update sdk-libs/sdk-types to re-export from the program-libs crate for backward compatibility
#!/bin/bash
# Verify what types compressible imports from light-sdk-types
rg "light_sdk_types" program-libs/compressible/src/ -A 2 -B 2
echo "---"
# Check where light-sdk-types is defined
cat sdk-libs/sdk-types/Cargo.toml | head -10
sdk-libs/compressible-client/src/lib.rs (2)

415-428: Replace .unwrap() with proper error handling for missing CPI context.

Line 420 uses .unwrap() on acc.tree_info.cpi_context, which is an Option<Pubkey>. If the indexer omits this value, the code panics instead of returning a proper error. Since the function already returns Result<..., Box<dyn std::error::Error>>, handle this gracefully.

Suggested fix
         if has_pdas && has_tokens {
-            // Find the first token account's CPI context
-            let first_token_cpi_context = compressed_accounts
-                .iter()
-                .find(|(acc, _)| acc.owner == LIGHT_TOKEN_PROGRAM_ID)
-                .map(|(acc, _)| acc.tree_info.cpi_context.unwrap())
-                .expect("has_tokens is true so there must be a token");
+            // Find the first token account's CPI context
+            let first_token_cpi_context = compressed_accounts
+                .iter()
+                .find(|(acc, _)| acc.owner == LIGHT_TOKEN_PROGRAM_ID)
+                .and_then(|(acc, _)| acc.tree_info.cpi_context)
+                .ok_or("Token account missing CPI context from indexer")?;

606-616: Defensive coding: expect() could be replaced with filter_map.

The expect("Cold account must have decompression context") at line 612 is logically safe because the filter at line 593 only includes accounts where is_cold is true, and by the AccountInterface::cold() constructor, is_cold: true always implies decompression_context: Some(...).

However, if someone constructs AccountInterface manually with inconsistent state, this panics. A filter_map approach would silently skip such malformed entries, which may or may not be desired.

Alternative defensive approach
         let compressed_accounts: Vec<(CompressedAccount, V)> = cold_accounts
             .into_iter()
-            .map(|a| {
-                let compressed_account = a
-                    .account_interface
-                    .decompression_context
-                    .expect("Cold account must have decompression context")
-                    .compressed_account;
-                (compressed_account, a.variant)
-            })
+            .filter_map(|a| {
+                let ctx = a.account_interface.decompression_context?;
+                Some((ctx.compressed_account, a.variant))
+            })
             .collect();
sdk-libs/compressible-client/src/initialize_config.rs (2)

81-85: Default config_bump: 0 creates V0 config instead of standard V1.

The default config_bump: 0 derives a V0 config PDA, but all actual usage in the codebase (forester, token tests, program-test) uses derive_v1_config_pda() which corresponds to bump=1. Callers who don't explicitly call .config_bump(1) will silently create the wrong config account.

Per the documentation in program-libs/compressible/docs/CONFIG_ACCOUNT.md, V1 is the standard version used by the CToken program.

Suggested fix

Either change the default to 1:

-            config_bump: 0,
+            config_bump: 1, // V1 is the standard config version

Or derive the correct PDA in build() using derive_v1_config_pda:

-        let (config_pda, _) = CompressibleConfig::derive_pda(&self.program_id, self.config_bump);
+        let (config_pda, _) = CompressibleConfig::derive_v1_config_pda(&self.program_id);

107-109: Missing validation for address_space length.

The builder accepts any Vec<Pubkey> but the on-chain instruction enforces address_space.len() == 1 (based on MAX_ADDRESS_TREES_PER_SPACE). Callers passing an incorrect length will fail at runtime with a cryptic error.

Suggested fix - fail fast in builder
     pub fn address_space(mut self, address_space: Vec<Pubkey>) -> Self {
+        assert!(
+            address_space.len() == 1,
+            "address_space must contain exactly 1 pubkey (MAX_ADDRESS_TREES_PER_SPACE), found: {}",
+            address_space.len()
+        );
         self.address_space = address_space;
         self
     }
sdk-libs/compressible-client/src/create_accounts_proof.rs (1)

174-181: Missing validation when mints require CPI context but indexer returns None.

When has_mints is true, the code passes state_tree_info.cpi_context directly to pack_proof. If the indexer returns None for cpi_context, the resulting instruction will be missing required CPI context accounts and fail at execution time.

Proposed fix - fail fast with descriptive error
+    #[error("Missing CPI context for mint operations")]
+    MissingCpiContext,
     let cpi_context = if has_mints {
-        state_tree_info.cpi_context
+        Some(
+            state_tree_info
+                .cpi_context
+                .ok_or(CreateAccountsProofError::MissingCpiContext)?,
+        )
     } else {
         None
     };

Comment on lines +144 to +150
// Anchor discriminator for "initialize_compression_config"
// SHA256("global:initialize_compression_config")[..8]
const DISCRIMINATOR: [u8; 8] = [133, 228, 12, 169, 56, 76, 222, 61];

let serialized_data = instruction_data
.try_to_vec()
.expect("Failed to serialize instruction data");

🧹 Nitpick | 🔵 Trivial

Hardcoded discriminator is correct but brittle.

The discriminator [133, 228, 12, 169, 56, 76, 222, 61] is derived from SHA256("global:initialize_compression_config")[..8]. This is the standard Anchor pattern, but if the instruction name changes, this will silently break.

Consider adding a compile-time or test assertion to verify this matches the expected hash.

🤖 Prompt for AI Agents
In `@sdk-libs/compressible-client/src/initialize_config.rs` around lines 144 -
150, The hardcoded DISCRIMINATOR constant in initialize_config.rs (const
DISCRIMINATOR and the string "global:initialize_compression_config") is brittle;
add a verification that the constant equals
SHA256("global:initialize_compression_config")[..8] by implementing a test or
compile-time assertion: compute the SHA256 of the literal instruction name and
compare its first 8 bytes to DISCRIMINATOR (fail the build/test if mismatched).
Place the check referencing DISCRIMINATOR and the literal
"global:initialize_compression_config" (e.g., in a #[cfg(test)] unit test or a
build-time check) so future name changes will surface as a failing assertion.

Comment on lines +270 to +276
let mut seeds = Vec::new();
for elem in &array.elems {
if let Ok(seed) = classify_seed_expr(elem) {
seeds.push(seed);
}
}
authority_seeds = Some(seeds);

⚠️ Potential issue | 🟡 Minor

Silently swallowing seed classification errors may hide issues.

When classify_seed_expr(elem) fails, the error is discarded and the seed is simply omitted from the list. This could lead to incomplete seed derivation at runtime without any compile-time warning.

Consider propagating the error or at least emitting a warning:

🛠️ Suggested fix
                 for elem in &array.elems {
-                    if let Ok(seed) = classify_seed_expr(elem) {
-                        seeds.push(seed);
-                    }
+                    seeds.push(classify_seed_expr(elem)?);
                 }

The same pattern appears at lines 291-295.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
 let mut seeds = Vec::new();
 for elem in &array.elems {
-    if let Ok(seed) = classify_seed_expr(elem) {
-        seeds.push(seed);
-    }
+    seeds.push(classify_seed_expr(elem)?);
 }
 authority_seeds = Some(seeds);
🤖 Prompt for AI Agents
In `@sdk-libs/macros/src/compressible/anchor_seeds.rs` around lines 270 - 276, The
loop that calls classify_seed_expr(elem) silently ignores Err results and drops
seeds (in the block that sets authority_seeds from array.elems); change this to
propagate or report errors instead of swallowing them: replace the for/if let
Ok(...) pattern with an iterator that collects Results (e.g.,
map(classify_seed_expr).collect::<Result<Vec<_>, _>>() ) and use ? to return the
error or convert and return a descriptive error from the surrounding function,
so authority_seeds is only set on success; apply the same fix to the similar
pattern at the other location (the block around lines 291-295) to ensure all
seed classification errors are surfaced.

Comment on lines +458 to +473
// CONSTANT (all uppercase path)
Expr::Path(path) => {
if let Some(ident) = path.path.get_ident() {
let name = ident.to_string();
if name
.chars()
.all(|c| c.is_uppercase() || c == '_' || c.is_ascii_digit())
{
return Ok(ClassifiedSeed::Constant(path.path.clone()));
}
// Otherwise it's a variable reference - treat as ctx account
return Ok(ClassifiedSeed::CtxAccount(ident.clone()));
}
// Multi-segment path is a constant
Ok(ClassifiedSeed::Constant(path.path.clone()))
}

🧹 Nitpick | 🔵 Trivial

Identifier classification heuristic is reasonable but worth documenting.

The logic treats all-uppercase identifiers (plus _ and digits) as constants, others as context accounts. This works for conventional Rust naming (MY_CONST vs account), but unusual names like A1b2 would be context accounts while A1B2 would be constants.

Consider adding a brief doc comment explaining this heuristic so future maintainers understand the classification rules:

// CONSTANT detection: uppercase + underscore + digit (e.g., MY_SEED_123)
// Otherwise: context account reference (e.g., user, owner)
🤖 Prompt for AI Agents
In `@sdk-libs/macros/src/compressible/anchor_seeds.rs` around lines 458 - 473, Add
a brief doc comment above the Expr::Path match arm explaining the identifier
classification heuristic: that identifiers matching all uppercase letters,
underscores, or ASCII digits are treated as constants (ClassifiedSeed::Constant)
while any other single-segment identifier becomes a context account
(ClassifiedSeed::CtxAccount); reference the Expr::Path match, the ident variable
extraction, and the ClassifiedSeed::Constant / ClassifiedSeed::CtxAccount
outcomes so future maintainers understand examples like MY_SEED_123 vs user or
mixed-case names like A1b2.
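The heuristic under discussion is easy to exercise in isolation; the enum and function names below are illustrative stand-ins for `ClassifiedSeed` and the macro's match arm:

```rust
// Standalone version of the identifier heuristic: identifiers made only of
// uppercase letters, underscores, and ASCII digits are constants; any other
// single-segment identifier is treated as a context account reference.
#[derive(Debug, PartialEq)]
enum Classified {
    Constant,
    CtxAccount,
}

fn classify_ident(name: &str) -> Classified {
    if name
        .chars()
        .all(|c| c.is_uppercase() || c == '_' || c.is_ascii_digit())
    {
        Classified::Constant
    } else {
        Classified::CtxAccount
    }
}

fn main() {
    assert_eq!(classify_ident("MY_SEED_123"), Classified::Constant);
    assert_eq!(classify_ident("user"), Classified::CtxAccount);
    // Mixed case flips the outcome: one lowercase char makes it an account.
    assert_eq!(classify_ident("A1b2"), Classified::CtxAccount);
    assert_eq!(classify_ident("A1B2"), Classified::Constant);
    println!("ok");
}
```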

Comment on lines +164 to 170
RentFreeAccountVariant::PackedCTokenData(mut data) => {
data.token_data.version = 3;
compressed_token_accounts.push((data, meta));
}
CompressedAccountVariant::CTokenData(_) => {
RentFreeAccountVariant::CTokenData(_) => {
unreachable!();
}

🧹 Nitpick | 🔵 Trivial

Hardcoded version = 3 warrants a comment or constant.

Line 165 sets data.token_data.version = 3 directly. If this version number has semantic meaning in the protocol, consider:

  1. Using a named constant (e.g., CTOKEN_DATA_VERSION)
  2. Adding a comment explaining why version 3 is used here

This helps future maintainers understand the intent rather than wondering about the magic number.

🤖 Prompt for AI Agents
In `@sdk-libs/macros/src/compressible/decompress_context.rs` around lines 164 -
170, Replace the magic number assignment to token_data.version with a named
constant and document why version 3 is used: instead of setting
data.token_data.version = 3 inside the RentFreeAccountVariant::PackedCTokenData
arm, define and use a constant (e.g., CTOKEN_DATA_VERSION) and add a short
comment above the assignment explaining the protocol semantic or compatibility
reason for version 3; update the assignment in that arm (and any related places
if present) to use CTOKEN_DATA_VERSION and ensure compressed_token_accounts
logic remains unchanged.

Comment on lines +28 to +38
pub fn scan_module_for_compressible(
_module: &ItemMod,
base_path: &Path,
) -> syn::Result<ScannedModuleInfo> {
let mut result = ScannedModuleInfo::default();

// Scan all .rs files in the src directory
scan_directory_recursive(base_path, &mut result);

Ok(result)
}

🧹 Nitpick | 🔵 Trivial

Unused _module parameter clutters the API.

The _module: &ItemMod parameter isn't used anywhere in the function body. This creates a confusing API where callers must provide something that's ignored. If it's reserved for future use, consider documenting that intent. Otherwise, simplify the signature.

Additionally, the function always returns Ok(...), making syn::Result somewhat misleading—it suggests parsing can fail here, but actual failures are accumulated in result.errors instead.

♻️ Consider simplifying the signature
-pub fn scan_module_for_compressible(
-    _module: &ItemMod,
-    base_path: &Path,
-) -> syn::Result<ScannedModuleInfo> {
+pub fn scan_module_for_compressible(base_path: &Path) -> ScannedModuleInfo {
     let mut result = ScannedModuleInfo::default();
 
     // Scan all .rs files in the src directory
     scan_directory_recursive(base_path, &mut result);
 
-    Ok(result)
+    result
 }
🤖 Prompt for AI Agents
In `@sdk-libs/macros/src/compressible/file_scanner.rs` around lines 28 - 38, The
function scan_module_for_compressible currently ignores its _module: &ItemMod
parameter and always returns Ok(...), so simplify the API by removing the unused
_module argument from scan_module_for_compressible, change its return type from
syn::Result<ScannedModuleInfo> to ScannedModuleInfo, call
scan_directory_recursive(base_path, &mut result) as before and return result
directly (no Ok wrapper), and update all call sites to match the new signature;
if _module is intentionally reserved, instead add a short doc comment on the ItemMod
parameter explaining that it is reserved, or keep it named module (without the leading
underscore) and actually reference it. Also ensure callers no longer expect a
Result and continue to read errors from result.errors.
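The simplified shape can be illustrated without syn; `ScannedModuleInfo` here is a minimal stand-in and the path-list parameter replaces the real filesystem walk:

```rust
// Minimal stand-in for the scanner's accumulator type.
#[derive(Default)]
struct ScannedModuleInfo {
    files_scanned: usize,
    errors: Vec<String>,
}

// Infallible signature: failures accumulate in `result.errors`
// instead of being surfaced through a misleading syn::Result.
fn scan_module_for_compressible(paths: &[&str]) -> ScannedModuleInfo {
    let mut result = ScannedModuleInfo::default();
    for path in paths {
        if path.ends_with(".rs") {
            result.files_scanned += 1;
        } else {
            result.errors.push(format!("skipped non-Rust file: {path}"));
        }
    }
    result
}

fn main() {
    let info = scan_module_for_compressible(&["lib.rs", "notes.txt"]);
    assert_eq!(info.files_scanned, 1);
    assert_eq!(info.errors.len(), 1);
}
```

Callers then read `result.errors` directly rather than unwrapping a `Result` that can never be `Err`.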

Comment on lines +713 to +719
syn::Expr::Lit(lit_expr) => {
// Handle byte string literals: b"seed" -> use directly
if let syn::Lit::ByteStr(byte_str) = &lit_expr.lit {
let bytes = byte_str.value();
expressions.push(quote! { &[#(#bytes),*] });
}
}

⚠️ Potential issue | 🟡 Minor

Non-ByteStr literals silently ignored.

This match arm handles syn::Expr::Lit but only processes ByteStr variants. Other literal types (int, char, etc.) won't push anything to expressions, causing a silent mismatch between parameters.len() and expressions.len() for that iteration.

If non-ByteStr literals are invalid here, consider adding an explicit error. Otherwise, handle them appropriately.

🐛 Suggested fix
 syn::Expr::Lit(lit_expr) => {
     // Handle byte string literals: b"seed" -> use directly
     if let syn::Lit::ByteStr(byte_str) = &lit_expr.lit {
         let bytes = byte_str.value();
         expressions.push(quote! { &[#(#bytes),*] });
+    } else {
+        // Fallback for other literals - convert to expression
+        expressions.push(quote! { (#expr).as_ref() });
     }
 }
🤖 Prompt for AI Agents
In `@sdk-libs/macros/src/compressible/seed_providers.rs` around lines 713 - 719,
The match arm for syn::Expr::Lit currently only handles syn::Lit::ByteStr and
silently ignores other literal kinds, causing a parameters/expressions length
mismatch; update the syn::Expr::Lit branch to produce an explicit compile-time
error for non-ByteStr literals by using syn::Error::new_spanned on lit_expr (or
the offending literal) with a clear message like "expected byte string literal
(b\"...\")" and propagate it as a compile_error (e.g., via to_compile_error())
so the macro fails loudly instead of skipping entries; keep the existing
handling of byte_str and expressions.push(...) intact.
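The fail-loudly pattern the prompt asks for can be shown without syn; `Lit` and the string error below are simplified stand-ins for `syn::Lit` and `syn::Error`:

```rust
// Simplified stand-in for syn::Lit.
enum Lit {
    ByteStr(Vec<u8>),
    Int(i64),
}

// Return an explicit error for unsupported literal kinds instead of
// silently pushing nothing, which would desync the parallel
// parameters/expressions vectors.
fn seed_expr(lit: &Lit) -> Result<String, String> {
    match lit {
        Lit::ByteStr(bytes) => Ok(format!("&{bytes:?}")),
        _ => Err("expected byte string literal (b\"...\")".to_string()),
    }
}

fn main() {
    assert!(seed_expr(&Lit::ByteStr(b"seed".to_vec())).is_ok());
    assert!(seed_expr(&Lit::Int(7)).is_err());
}
```

In the real macro the `Err` arm would be `syn::Error::new_spanned(...)` converted via `to_compile_error()` so the failure surfaces at compile time.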

Comment on lines +181 to +199
fn parse_instruction_attr(attrs: &[syn::Attribute]) -> Option<Vec<InstructionArg>> {
for attr in attrs {
if attr.path().is_ident("instruction") {
if let Ok(args) = attr.parse_args_with(|input: ParseStream| {
let content: Punctuated<InstructionArgParsed, Token![,]> =
Punctuated::parse_terminated(input)?;
Ok(content
.into_iter()
.map(|arg| InstructionArg {
name: arg.name,
ty: arg.ty,
})
.collect::<Vec<_>>())
}) {
return Some(args);
}
}
}
None

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

find . -name "parse.rs" -path "*/sdk-libs/macros/*" | head -5

Repository: Lightprotocol/light-protocol

Length of output: 111


🏁 Script executed:

cat -n sdk-libs/macros/src/finalize/parse.rs | head -250

Repository: Lightprotocol/light-protocol

Length of output: 10234


🏁 Script executed:

# Search for all calls to parse_instruction_attr
rg "parse_instruction_attr" sdk-libs/macros/

Repository: Lightprotocol/light-protocol

Length of output: 297


🏁 Script executed:

# Look for error handling patterns in the macro module
rg "Result|Error" sdk-libs/macros/src/finalize/ -A 2 -B 2 | head -100

Repository: Lightprotocol/light-protocol

Length of output: 7123


🏁 Script executed:

# Check if there's a CLAUDE.md or docs that might guide this
find . -name "CLAUDE.md" -o -name "*.md" | grep -i claude

Repository: Lightprotocol/light-protocol

Length of output: 511


🏁 Script executed:

# See the full context of how instruction_args is used after line 238
cat -n sdk-libs/macros/src/finalize/parse.rs | sed -n '234,350p'

Repository: Lightprotocol/light-protocol

Length of output: 5988


🏁 Script executed:

# Check how instruction_args is used in the codebase (codegen, etc.)
rg "instruction_args" sdk-libs/macros/src/finalize/

Repository: Lightprotocol/light-protocol

Length of output: 463


🏁 Script executed:

# Look for the CLAUDE.md file in the root or sdk-libs/macros to understand guidelines
cat ./CLAUDE.md 2>/dev/null || echo "File not found at root"

Repository: Lightprotocol/light-protocol

Length of output: 10409


🏁 Script executed:

# Check if there's a CLAUDE.md in sdk-libs/macros
find sdk-libs/macros -name "CLAUDE.md" -exec cat {} \;

Repository: Lightprotocol/light-protocol

Length of output: 54


🏁 Script executed:

# Verify instruction_args is only called once
rg "parse_instruction_attr" sdk-libs/macros/src/

Repository: Lightprotocol/light-protocol

Length of output: 297


🏁 Script executed:

# Check the full context around line 238 to see how instruction_args is used after assignment
cat -n sdk-libs/macros/src/finalize/parse.rs | sed -n '380,410p'

Repository: Lightprotocol/light-protocol

Length of output: 392


🏁 Script executed:

# Look at codegen.rs to see how instruction_args is used
cat -n sdk-libs/macros/src/finalize/codegen.rs | grep -A 5 -B 5 "instruction_args"

Repository: Lightprotocol/light-protocol

Length of output: 840


🏁 Script executed:

# Check if there are any tests for parse_instruction_attr behavior
find sdk-libs/macros -name "*.rs" -type f | xargs grep -l "parse_instruction_attr" 2>/dev/null

Repository: Lightprotocol/light-protocol

Length of output: 109


🏁 Script executed:

# Look for tests related to instruction attribute parsing
find sdk-libs/macros -path "*/tests/*" -o -path "*/test/*" | head -20

Repository: Lightprotocol/light-protocol

Length of output: 171


🏁 Script executed:

# Verify the error handling pattern is consistent by checking other parse functions
cat -n sdk-libs/macros/src/finalize/parse.rs | sed -n '234,246p'

Repository: Lightprotocol/light-protocol

Length of output: 734


Propagate parse errors for #[instruction(...)] attribute syntax.

The function silently swallows parse failures (line 184 if let Ok) and returns None, masking malformed attributes and generating no diagnostic. This is inconsistent with the rest of the module—#[rentfree(...)] (line 302) and #[light_mint(...)] (line 342) both use attr.parse_args()? to propagate errors. Since parse_compressible_struct already returns Result<_, Error>, propagate instruction parsing errors upward.

✅ Proposed fix
-fn parse_instruction_attr(attrs: &[syn::Attribute]) -> Option<Vec<InstructionArg>> {
+fn parse_instruction_attr(attrs: &[syn::Attribute]) -> Result<Option<Vec<InstructionArg>>, Error> {
     for attr in attrs {
         if attr.path().is_ident("instruction") {
-            if let Ok(args) = attr.parse_args_with(|input: ParseStream| {
+            let args = attr.parse_args_with(|input: ParseStream| {
                 let content: Punctuated<InstructionArgParsed, Token![,]> =
                     Punctuated::parse_terminated(input)?;
                 Ok(content
                     .into_iter()
                     .map(|arg| InstructionArg {
                         name: arg.name,
                         ty: arg.ty,
                     })
                     .collect::<Vec<_>>())
-            }) {
-                return Some(args);
-            }
+            })?;
+            return Ok(Some(args));
         }
     }
-    None
+    Ok(None)
 }
-    let instruction_args = parse_instruction_attr(&input.attrs);
+    let instruction_args = parse_instruction_attr(&input.attrs)?;
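The behavioral difference between swallowing and propagating parse failures can be demonstrated with plain `Result` types; `parse_args` below is a hypothetical stand-in for `attr.parse_args_with`:

```rust
// Hypothetical stand-in for a fallible attribute parser.
fn parse_args(input: &str) -> Result<Vec<String>, String> {
    if input.is_empty() {
        return Err("empty #[instruction(...)] args".to_string());
    }
    Ok(input.split(',').map(str::to_string).collect())
}

// Swallowing variant: malformed input is indistinguishable from
// "attribute absent", so the caller gets no diagnostic.
fn swallowed(input: &str) -> Option<Vec<String>> {
    parse_args(input).ok()
}

// Propagating variant: malformed input surfaces as Err.
fn propagated(input: &str) -> Result<Option<Vec<String>>, String> {
    Ok(Some(parse_args(input)?))
}

fn main() {
    assert_eq!(swallowed(""), None); // error masked
    assert!(propagated("").is_err()); // error reported
    assert_eq!(propagated("a,b").unwrap().unwrap().len(), 2);
}
```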

Comment on lines +312 to +320
// Use defaults if not specified:
// - address_tree_info defaults to params.create_accounts_proof.address_tree_info
// - output_tree defaults to params.create_accounts_proof.output_state_tree_index
let address_tree_info = args.address_tree_info.unwrap_or_else(|| {
syn::parse_quote!(params.create_accounts_proof.address_tree_info)
});
let output_tree = args.output_tree.unwrap_or_else(|| {
syn::parse_quote!(params.create_accounts_proof.output_state_tree_index)
});

⚠️ Potential issue | 🟠 Major

Default expressions hardcode params identifier.
If the instruction arg isn’t literally named params, the generated code references an undefined ident. Either enforce the name or use the actual first arg ident when building defaults.

✅ Proposed fix
-    let instruction_args = parse_instruction_attr(&input.attrs)?;
+    let instruction_args = parse_instruction_attr(&input.attrs)?;
+    let params_ident = instruction_args
+        .as_ref()
+        .and_then(|args| args.first())
+        .map(|arg| arg.name.clone());
-                let address_tree_info = args.address_tree_info.unwrap_or_else(|| {
-                    syn::parse_quote!(params.create_accounts_proof.address_tree_info)
-                });
+                let address_tree_info = match args.address_tree_info {
+                    Some(expr) => expr,
+                    None => {
+                        let params_ident = params_ident.as_ref().ok_or_else(|| {
+                            Error::new_spanned(&field.ty, "missing #[instruction(...)] params for default address_tree_info")
+                        })?;
                        syn::parse_quote!(#params_ident.create_accounts_proof.address_tree_info)
+                    }
+                };

Apply the same pattern for output_tree and the #[light_mint] defaults.

Also applies to: 355-358

🤖 Prompt for AI Agents
In `@sdk-libs/macros/src/finalize/parse.rs` around lines 312 - 320, The default
expressions for address_tree_info and output_tree hardcode the identifier
params, which breaks when the instruction's first argument has a different name;
instead, capture the actual first-arg identifier (e.g., let first_arg_ident =
&args.first_arg_ident or however the code accesses the first instruction arg)
and use it when building the syn::parse_quote! defaults (replace
params.create_accounts_proof... with <first_arg_ident>.create_accounts_proof...
via interpolation into parse_quote!). Apply the same pattern to output_tree and
to the #[light_mint] default-generation code paths so all defaults reference the
actual first argument identifier rather than the literal params identifier.

Comment on lines +70 to 76
let cpi_context_pubkey = if has_prior_context {
// PDAs/Mints wrote to context, tokens consume it
cpi_accounts.cpi_context().ok().map(|ctx| *ctx.key)
} else {
// Tokens-only: execute directly without CPI context
None
};

⚠️ Potential issue | 🟠 Major

Fail fast if CPI context is required but missing.
When has_prior_context is true, silently dropping to None can yield invalid or inconsistent decompression. Return an error instead.

🔧 Suggested fix
-    let cpi_context_pubkey = if has_prior_context {
-        // PDAs/Mints wrote to context, tokens consume it
-        cpi_accounts.cpi_context().ok().map(|ctx| *ctx.key)
-    } else {
-        // Tokens-only: execute directly without CPI context
-        None
-    };
+    let cpi_context_pubkey = if has_prior_context {
+        // PDAs/Mints wrote to context, tokens consume it
+        Some(
+            *cpi_accounts
+                .cpi_context()
+                .map_err(|_| ProgramError::InvalidAccountData)?
+                .key,
+        )
+    } else {
+        // Tokens-only: execute directly without CPI context
+        None
+    };
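The fail-fast shape, reduced to plain `Option`s; the 32-byte key and `InvalidAccountData` struct below are stand-ins for the real account key and `ProgramError` variant:

```rust
// Stand-in for ProgramError::InvalidAccountData.
#[derive(Debug, PartialEq)]
struct InvalidAccountData;

// Require the context key whenever prior context exists;
// only a tokens-only flow may legitimately proceed without one.
fn resolve_cpi_context(
    has_prior_context: bool,
    cpi_context: Option<[u8; 32]>,
) -> Result<Option<[u8; 32]>, InvalidAccountData> {
    if has_prior_context {
        cpi_context.map(Some).ok_or(InvalidAccountData)
    } else {
        Ok(None)
    }
}

fn main() {
    assert_eq!(resolve_cpi_context(false, None), Ok(None));
    assert_eq!(resolve_cpi_context(true, None), Err(InvalidAccountData));
    assert!(resolve_cpi_context(true, Some([0; 32])).unwrap().is_some());
}
```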

Comment on lines +256 to +273
// Only include signer seeds for program-owned tokens
if token_signers_seed_groups.is_empty() {
// All tokens were ATAs - no program signing needed
solana_cpi::invoke(&ctoken_ix, all_account_infos.as_slice())?;
} else {
let signer_seed_refs: Vec<Vec<&[u8]>> = token_signers_seed_groups
.iter()
.map(|group| group.iter().map(|s| s.as_slice()).collect())
.collect();
let signer_seed_slices: Vec<&[&[u8]]> =
signer_seed_refs.iter().map(|g| g.as_slice()).collect();

solana_cpi::invoke_signed(
&ctoken_ix,
all_account_infos.as_slice(),
signer_seed_slices.as_slice(),
)?;
}

⚠️ Potential issue | 🟡 Minor

ATA-only path never hits the invoke branch.
token_signers_seed_groups is always non-empty once any token is processed, so ATA-only flows still call invoke_signed. Track non-empty signer groups instead.

🔧 Suggested fix
-    if token_signers_seed_groups.is_empty() {
+    let has_signer_seeds = token_signers_seed_groups.iter().any(|g| !g.is_empty());
+    if !has_signer_seeds {
         // All tokens were ATAs - no program signing needed
         solana_cpi::invoke(&ctoken_ix, all_account_infos.as_slice())?;
     } else {
-        let signer_seed_refs: Vec<Vec<&[u8]>> = token_signers_seed_groups
-            .iter()
-            .map(|group| group.iter().map(|s| s.as_slice()).collect())
-            .collect();
+        let signer_seed_refs: Vec<Vec<&[u8]>> = token_signers_seed_groups
+            .iter()
+            .filter(|g| !g.is_empty())
+            .map(|group| group.iter().map(|s| s.as_slice()).collect())
+            .collect();
         let signer_seed_slices: Vec<&[&[u8]]> =
             signer_seed_refs.iter().map(|g| g.as_slice()).collect();
🤖 Prompt for AI Agents
In `@sdk-libs/token-sdk/src/compressible/decompress_runtime.rs` around lines 256 -
273, The current branch decision uses token_signers_seed_groups.is_empty() which
is false whenever any token was processed (even if all groups are empty),
causing ATA-only flows to call solana_cpi::invoke_signed; change the logic to
detect non-empty signer groups instead (e.g., let has_signers =
token_signers_seed_groups.iter().any(|g| !g.is_empty())); if has_signers call
solana_cpi::invoke_signed(&ctoken_ix, all_account_infos.as_slice(),
signer_seed_slices.as_slice()) after building
signer_seed_refs/signer_seed_slices only for the non-empty groups, otherwise
call solana_cpi::invoke(&ctoken_ix, all_account_infos.as_slice()) for the
ATA-only path.
