Conversation
📝 Walkthrough

Centralizes compressible bootstrap into a reusable `run_bootstrap` returning `BootstrapResult`; adds Photon indexer interface models and POST endpoints; introduces rich indexer/client interface types and conversions (`AccountInterface`/`TokenAccountInterface`); extends the RPC surface and program-test helpers; enhances event parsing for ATA owners; updates tests, configs, and tooling.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client as Client (RPC)
    participant LightClient as LightClient
    participant Photon as Photon API
    participant OnChain as On-Chain RPC
    Client->>LightClient: get_account_interface(address, config)
    activate LightClient
    alt Hot path
        LightClient->>OnChain: getAccount(address)
        OnChain-->>LightClient: onchain account / not found
    else Cold path
        LightClient->>Photon: POST /getAccountInterface(params)
        Photon-->>LightClient: AccountInterface or None
    end
    LightClient->>LightClient: convert model -> public AccountInterface
    LightClient-->>Client: Response<Option<AccountInterface>>
    deactivate LightClient
```
```mermaid
sequenceDiagram
    participant App as Forester Bootstrap
    participant Bootstrap as run_bootstrap
    participant HTTP as HTTP Client / RPC V2
    participant Processor as process_fn callback
    App->>Bootstrap: run_bootstrap(rpc_url, program_id, filters, shutdown_rx, process_fn, label)
    activate Bootstrap
    Bootstrap->>Bootstrap: detect localhost (choose API path)
    Bootstrap->>HTTP: request pages / getProgramAccounts
    HTTP-->>Bootstrap: account batches (pages)
    loop per account
        Bootstrap->>Processor: process_fn(RawAccountData)
        Processor-->>Bootstrap: continue/stop
    end
    Bootstrap-->>App: BootstrapResult { pages, fetched, inserted }
    deactivate Bootstrap
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 17
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
forester/src/config.rs (1)
365-399: Include the offending entry in `compressible_pda_programs` parse errors.

When multiple entries are provided, returning only the parser error can make it hard to spot which value failed. Consider echoing the raw entry alongside the error.
Proposed refinement
```diff
-            .map(|s| {
-                s.parse::<crate::compressible::config::PdaProgramConfig>()
-                    .map_err(|e| ConfigError::InvalidArguments {
-                        field: "compressible_pda_programs",
-                        invalid_values: vec![e],
-                    })
-            })
+            .map(|s| {
+                let raw = s.clone();
+                s.parse::<crate::compressible::config::PdaProgramConfig>()
+                    .map_err(|e| ConfigError::InvalidArguments {
+                        field: "compressible_pda_programs",
+                        invalid_values: vec![format!("{raw}: {e}")],
+                    })
+            })
```

forester/tests/test_compressible_ctoken.rs (1)
252-270: Wait for subscription-driven tracker updates before asserting.

These assertions can race the websocket-driven tracker updates (airdrop + compression), which will intermittently flake under load. Add a small polling loop with timeout before asserting lamports and remaining accounts.
🐛 Proposed fix (apply the same pattern after compression)
```diff
-    let accounts = tracker.get_all_token_accounts();
-    assert_eq!(accounts[0].lamports, lamports + 10_000_000);
+    let accounts = tokio::time::timeout(Duration::from_secs(5), async {
+        loop {
+            let accounts = tracker.get_all_token_accounts();
+            if accounts[0].lamports == lamports + 10_000_000 {
+                break accounts;
+            }
+            sleep(Duration::from_millis(200)).await;
+        }
+    })
+    .await
+    .expect("Tracker did not observe lamports update in time");
+    assert_eq!(accounts[0].lamports, lamports + 10_000_000);
```

Also applies to: 334-341
forester/src/compressible/subscriber.rs (1)
176-225: Handle account closures by invoking `handle_removal`.

`programSubscribe` sends updates for closed accounts with `lamports == 0` (often empty data). The current path always tries to decode and `handle_update`, which leaves stale entries in trackers. Add a guard to call `handle_removal` when lamports are zero or data is empty.

🐛 Proposed fix

```diff
-        // Decode Base64 account data
+        // Account closed? Remove it from the tracker
+        if response.value.account.lamports == 0 {
+            self.handler.handle_removal(&pubkey);
+            return;
+        }
+
+        // Decode Base64 account data
         use solana_account_decoder::UiAccountData;
```
🤖 Fix all issues with AI agents
In `@forester/Cargo.toml`:
- Line 46: The crate currently pins hex = "0.4" in its Cargo.toml; replace that
explicit version with a workspace-managed dependency by changing the dependency
entry to use workspace = true (e.g., hex = { workspace = true }) and ensure the
workspace root contains a [workspace.dependencies] entry for hex (adding hex =
"0.4" there if missing) so all workspace crates share the same hex version.
In `@forester/src/compressible/bootstrap_helpers.rs`:
- Around line 126-161: The get_current_slot function duplicates timeout, HTTP
and RPC error handling; refactor it to call the existing send_rpc_request helper
instead of performing its own client.post/.send/.json and timeout logic. Build
the same JSON payload inside get_current_slot, call send_rpc_request(rpc_url,
payload, RPC_REQUEST_TIMEOUT, client) (or the send_rpc_request signature used in
the repo), then extract the slot via result.get("result").and_then(|v|
v.as_u64()).ok_or_else(|| anyhow::anyhow!("Failed to extract slot from
response")); remove the timeout block and the duplicated error mapping so slot
fetching reuses send_rpc_request’s consistent HTTP status and RPC error
handling.
In `@forester/src/compressible/config.rs`:
- Around line 35-38: Update the doc comments that currently mention
`discriminator_hex` and `--pda-program` to reflect that parsing/serde uses
base58 and the CLI flag is `--compressible-pda-program`; specifically edit the
top comment block for the compressible PDA program configuration (the doc above
the compressible PDA config struct/type in config.rs) and the other occurrences
around lines 148-151 to say "base58" instead of "discriminator_hex" and to
reference `--compressible-pda-program` instead of `--pda-program`, so the
documentation matches the actual parsing and CLI wiring.
In `@forester/src/compressible/ctoken/state.rs`:
- Around line 77-115: The method update_from_token must skip mint/non-token
accounts so the tracker only stores actual token accounts; add an early guard in
update_from_token that detects when the provided Token (ctoken) represents a
mint (not a token account) and return Ok(()) without inserting; use whatever
discriminator the Token type provides (e.g., an is_mint()/is_account() check or
pattern match) to decide this, keeping the rest of the logic
(calculate_compressible_slot, building CTokenAccountState, and calling
self.insert) unchanged; this prevents get_ready_to_compress from including mint
accounts.
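The guard suggested above can be sketched in isolation. This is a minimal stand-in, not the forester's real types: `TokenAccountKind` is a hypothetical discriminator standing in for whatever check the actual `Token` type exposes (e.g. an `is_mint()` method), and `Tracker` is a toy container.

```rust
// Hypothetical discriminator; the real Token type may expose is_mint()/is_account() instead.
#[derive(Debug, PartialEq)]
enum TokenAccountKind {
    Mint,
    Account,
}

struct Token {
    kind: TokenAccountKind,
    amount: u64,
}

struct Tracker {
    stored: Vec<u64>,
}

impl Tracker {
    fn update_from_token(&mut self, token: &Token) -> Result<(), String> {
        // Early guard: skip mints so get_ready_to_compress never sees them.
        if token.kind == TokenAccountKind::Mint {
            return Ok(());
        }
        // Remainder of the real logic (slot calculation, insert) unchanged.
        self.stored.push(token.amount);
        Ok(())
    }
}

fn main() {
    let mut tracker = Tracker { stored: Vec::new() };
    tracker
        .update_from_token(&Token { kind: TokenAccountKind::Mint, amount: 1 })
        .unwrap();
    tracker
        .update_from_token(&Token { kind: TokenAccountKind::Account, amount: 2 })
        .unwrap();
    assert_eq!(tracker.stored, vec![2]);
}
```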
In `@forester/src/compressible/mint/compressor.rs`:
- Around line 52-121: compress_batch currently builds all CompressAndCloseMint
instructions and sends them in a single transaction (using instruction_futures
-> instructions -> rpc.create_and_send_transaction), which can exceed Solana
transaction size/compute limits; fix by chunking mint_states into smaller
batches (e.g., max_instructions_per_tx constant) and for each chunk build
instructions via the existing instruction_futures logic and call
create_and_send_transaction per chunk, collecting/returning the final
Signature(s) or changing the method to return a Vec<Signature> or the last
signature, and ensure errors from create_and_send_transaction are mapped as they
are now; update compress_batch signature/return accordingly and reuse existing
variables (rpc_pool, payer_keypair, MintActionParams,
create_mint_action_instruction) so only batching logic is added.
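The chunking shape being asked for can be sketched with plain data. `build_ix`, `send_tx`, and `MAX_INSTRUCTIONS_PER_TX` below are placeholders, not the real forester APIs; the point is only the `chunks(...)` loop that sends one transaction per bounded batch.

```rust
// Hypothetical per-transaction cap; the real value must respect Solana size/compute limits.
const MAX_INSTRUCTIONS_PER_TX: usize = 5;

// Stand-in for building one CompressAndCloseMint instruction per mint state.
fn build_ix(state: &u32) -> String {
    format!("ix-{state}")
}

// Stand-in for create_and_send_transaction; returns a fake signature per chunk.
fn send_tx(instructions: &[String]) -> String {
    format!("sig({} ixs)", instructions.len())
}

// One transaction per chunk, collecting all signatures.
fn compress_batch(mint_states: &[u32]) -> Vec<String> {
    let mut signatures = Vec::new();
    for chunk in mint_states.chunks(MAX_INSTRUCTIONS_PER_TX) {
        let instructions: Vec<String> = chunk.iter().map(build_ix).collect();
        signatures.push(send_tx(&instructions));
    }
    signatures
}

fn main() {
    let states: Vec<u32> = (0..12).collect();
    let sigs = compress_batch(&states);
    assert_eq!(sigs.len(), 3); // chunks of 5 + 5 + 2
}
```

Returning `Vec<String>` here mirrors the suggestion to change `compress_batch` to return all signatures rather than a single one.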
- Around line 183-186: The call to
futures::stream::iter(compression_futures).buffer_unordered(max_concurrent) can
panic if max_concurrent is 0; clamp the concurrency to at least 1 before calling
buffer_unordered (e.g., replace uses of max_concurrent with
max_concurrent.max(1) or create let concurrency = max_concurrent.max(1) and pass
that) in the compressor.rs spot where compression_futures and max_concurrent are
used, and apply the same guard to the PDA compressor and any other
buffer_unordered call that accepts user-supplied concurrency values.
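The clamp itself is a one-liner; this tiny sketch just pins down the behavior being requested before any `buffer_unordered` call:

```rust
// Clamp user-supplied concurrency so buffer_unordered never receives 0.
fn effective_concurrency(max_concurrent: usize) -> usize {
    max_concurrent.max(1)
}

fn main() {
    // 0 would panic inside buffer_unordered; clamped to 1 instead.
    assert_eq!(effective_concurrency(0), 1);
    assert_eq!(effective_concurrency(8), 8);
}
```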
In `@forester/src/compressible/pda/compressor.rs`:
- Around line 189-276: The current implementation builds and sends a single
batched instruction using account_states (via build_compress_accounts_idempotent
and create_and_send_transaction), which can exceed Solana transaction/account
limits; modify compressible logic to chunk account_states into safe-sized
batches (e.g., configurable MAX_BATCH_SIZE) and for each chunk repeat deriving
compressed_addresses, fetching compressed_accounts/hashes, calling
get_validity_proof, building the ix with build_compress_accounts_idempotent, and
sending via create_and_send_transaction, aggregating/returning the signatures
(or return the last signature) and ensure errors are handled per-chunk;
reference account_states, derive_address, compressed_addresses,
compressed_accounts, hashes, proof_with_context,
build_compress_accounts_idempotent, and create_and_send_transaction when
implementing the chunking.
- Around line 165-168: Clamp the concurrency value passed into buffer_unordered
to at least 1 to avoid runtime panics: compute a local capped value (e.g., let
cap = std::cmp::max(1, max_concurrent_batches as usize) or std::cmp::max(1,
max_concurrent_sends as usize) depending on the site) and replace usages of
buffer_unordered(max_concurrent) / buffer_unordered(max_concurrent_batches) with
buffer_unordered(cap). Apply the same pattern where you iterate over
compression_futures / mint compression futures / send transaction futures /
epoch_manager futures so the variables used by buffer_unordered are never zero.
In `@forester/src/compressible/pda/types.rs`:
- Around line 1-28: Add PartialEq and Eq derives to PdaAccountState so it can be
compared in tests; update the struct declaration for PdaAccountState (the pub
struct PdaAccountState { ... }) to include #[derive(Clone, Debug, PartialEq,
Eq)] and ensure no fields (Pubkey or u64) prevent deriving Eq/PartialEq.
In `@forester/src/epoch_manager.rs`:
- Around line 2035-2055: The extra pda_tracker.remove(&account_state.pubkey)
call in the results processing loop duplicates removal already performed by
PdaCompressor::compress_batch_concurrent (and similarly the MintCompressor
counterpart mentioned for lines 2110–2123); remove the redundant remove call(s)
here so that only the compressor implementation owns tracker mutation, leaving
the match arms to only log success/error and update counters (e.g., keep
total_compressed increment on Ok, but delete the pda_tracker.remove invocation),
and ensure the compressor functions remain responsible for calling
pda_tracker.remove when they succeed.
In `@forester/src/lib.rs`:
- Around line 241-259: The current code only spawns the CToken bootstrap when
shutdown_bootstrap is Some, skipping tracker initialization when it's None;
always spawn the tokio task and pass the optional shutdown signal through.
Extract rpc_url and clone ctoken_tracker as before, capture shutdown_bootstrap
(the Option) into a local variable (or clone/move it into the closure) and call
compressible::bootstrap_ctoken_accounts(rpc_url, tracker_clone,
shutdown_bootstrap_option).await inside tokio::spawn, logging errors or
completion as before so the bootstrap runs regardless of whether the shutdown
receiver is present.
In `@forester/tests/test_compressible_mint.rs`:
- Around line 138-148: Replace the current multi-step checks that call
rpc.get_account(mint_pda), deserialize via Mint::deserialize and individually
assert mint.metadata.mint_decompressed with a single assert_eq against an
expected Mint instance: deserialize the on‑chain account into `mint` (using
Mint::deserialize), extract any runtime values (PDAs, timestamps, lamports,
etc.) you need, construct an `expected_mint: Mint` using those runtime values,
then assert_eq!(mint, expected_mint); factor the construction into a small
helper (e.g., build_expected_mint) to keep the test DRY and reuse later in this
file.
- Around line 57-69: The comment and the test disagree about
CreateMintParams.rent_payment semantics: the comment says CreateMint
auto-decompresses when rent_payment > 0 but the test sets rent_payment: 0;
update one to match the intended behavior. Either change the comment to say
"rent_payment == 0 => immediately compressible for testing" or set
CreateMintParams { ... rent_payment: 1, ... } to trigger auto-decompression;
adjust the inline comment accordingly so CreateMintParams and its rent_payment
comment are consistent.
In `@forester/tests/test_compressible_pda.rs`:
- Around line 323-329: Wrap the calls to get_create_accounts_proof (which
internally calls get_validity_proof) in a retry loop at the four test call
sites: inside test_compressible_pda_bootstrap,
test_compressible_pda_compression, and both places in
test_compressible_pda_subscription; perform up to 3 attempts with a 5-second
backoff between attempts (use tokio::time::sleep(Duration::from_secs(5))). On
each attempt await get_create_accounts_proof and break on success; if it errors,
log or store the error and retry until attempts exhausted, then return or expect
the final error so the test still fails if all retries fail. Ensure you only
change the invocation sites (wrapping the existing call) and preserve the
original arguments and behavior.
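The retry shape described above (3 attempts, fixed backoff, surface the final error) can be sketched generically. This is a synchronous stand-in using `std::thread::sleep`; the actual test helper would be async and use `tokio::time::sleep` instead.

```rust
use std::time::Duration;

// Generic retry: up to max_attempts, fixed backoff between failures,
// returning the last error once attempts are exhausted.
fn retry<T, E: std::fmt::Debug>(
    max_attempts: u32,
    backoff: Duration,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut last_err = None;
    for attempt in 1..=max_attempts {
        match op() {
            Ok(v) => return Ok(v),
            Err(e) => {
                last_err = Some(e);
                if attempt < max_attempts {
                    std::thread::sleep(backoff);
                }
            }
        }
    }
    Err(last_err.unwrap())
}

fn main() {
    let mut calls = 0;
    // Fails twice, succeeds on the third attempt.
    let result = retry(3, Duration::from_millis(1), || {
        calls += 1;
        if calls < 3 { Err("transient") } else { Ok(calls) }
    });
    assert_eq!(result, Ok(3));
}
```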
In `@sdk-libs/client/src/indexer/options.rs`:
- Around line 116-158: GetCompressedMintsByAuthorityOptions currently leaves
authority_type as None which relies on server defaults; change the Rust default
so clients default to MintAuthorityType::Both. Update the Default implementation
or the GetCompressedMintsByAuthorityOptions::new() constructor to set
authority_type = Some(MintAuthorityType::Both) (and keep the existing Optional
type/signature), or adjust docs to explicitly state None means Both—preferably
set Some(MintAuthorityType::Both) in new()/Default to make client behavior
explicit; reference GetCompressedMintsByAuthorityOptions, new(), Default,
authority_type, and MintAuthorityType::Both when making the change.
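The requested change can be sketched with local stand-ins. The enum variants besides `Both` and the `cursor`/`limit` fields below are assumptions about the options struct's shape, not the real SDK definitions; the point is the explicit `Default` that no longer relies on server-side behavior.

```rust
// Variants other than Both are assumed for illustration.
#[derive(Debug, Clone, Copy, PartialEq)]
enum MintAuthorityType {
    Mint,
    Freeze,
    Both,
}

// Hypothetical field set; only authority_type matters here.
#[derive(Debug)]
struct GetCompressedMintsByAuthorityOptions {
    authority_type: Option<MintAuthorityType>,
    cursor: Option<String>,
    limit: Option<u16>,
}

impl Default for GetCompressedMintsByAuthorityOptions {
    fn default() -> Self {
        Self {
            // Explicit client-side default instead of None (server-decided).
            authority_type: Some(MintAuthorityType::Both),
            cursor: None,
            limit: None,
        }
    }
}

fn main() {
    let opts = GetCompressedMintsByAuthorityOptions::default();
    assert_eq!(opts.authority_type, Some(MintAuthorityType::Both));
}
```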
In `@sdk-libs/client/src/local_test_validator.rs`:
- Around line 10-14: The upgradeable_programs field currently uses a tuple
Vec<(String, String, String)> which is unclear; replace it with a named struct
(e.g., UpgradeableProgramConfig with fields program_id, program_path,
upgrade_authority) and change the type of upgradeable_programs in
LightValidatorConfig to Vec<UpgradeableProgramConfig>, updating any construction
and pattern matching sites that reference upgradeable_programs or the tuple
order, while leaving sbf_programs as Vec<(String,String)> if desired.
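A minimal sketch of that refactor, using the field names proposed in the comment (they may differ in the final API):

```rust
// Named struct replacing the opaque (String, String, String) tuple.
#[derive(Debug, Clone)]
struct UpgradeableProgramConfig {
    program_id: String,
    program_path: String,
    upgrade_authority: String,
}

#[derive(Debug, Default)]
struct LightValidatorConfig {
    // sbf_programs may stay a tuple if desired.
    sbf_programs: Vec<(String, String)>,
    upgradeable_programs: Vec<UpgradeableProgramConfig>,
}

fn main() {
    let mut config = LightValidatorConfig::default();
    config.upgradeable_programs.push(UpgradeableProgramConfig {
        program_id: "Prog111".into(),
        program_path: "target/deploy/prog.so".into(),
        upgrade_authority: "Auth111".into(),
    });
    // Named access replaces positional .0/.1/.2 tuple access.
    assert_eq!(
        config.upgradeable_programs[0].program_path,
        "target/deploy/prog.so"
    );
}
```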
In `@sdk-libs/program-test/src/indexer/test_indexer.rs`:
- Around line 999-1022: Replace the todo!() panics in get_compressed_mint,
get_compressed_mint_by_pda, and get_compressed_mints_by_authority with a
deterministic Err(IndexerError) return so callers get an error result instead of
a panic; follow the existing error pattern used elsewhere in the indexer (e.g.,
return an IndexerError variant such as UnimplementedOperation or a new
IndexerError::new("get_compressed_mint not implemented") consistent with how
other unimplemented RPCs are reported) and include the method name in the error
message to aid diagnostics.
```rust
async fn get_compressed_mint(
    &self,
    _address: Address,
    _config: Option<IndexerRpcConfig>,
) -> Result<Response<Option<CompressedMint>>, IndexerError> {
    todo!("get_compressed_mint not implemented")
}

async fn get_compressed_mint_by_pda(
    &self,
    _mint_pda: &Pubkey,
    _config: Option<IndexerRpcConfig>,
) -> Result<Response<Option<CompressedMint>>, IndexerError> {
    todo!("get_compressed_mint_by_pda not implemented")
}

async fn get_compressed_mints_by_authority(
    &self,
    _authority: &Pubkey,
    _options: Option<GetCompressedMintsByAuthorityOptions>,
    _config: Option<IndexerRpcConfig>,
) -> Result<Response<ItemsWithCursor<CompressedMint>>, IndexerError> {
    todo!("get_compressed_mints_by_authority not implemented")
}
```
Avoid todo!() panics in compressed-mint stub methods.
These three trait-required methods will panic if called during testing or local development. Instead, return a deterministic error that allows test code to handle the unimplemented operation gracefully.
Replace todo!("...") with an appropriate IndexerError variant (e.g., a structured error or similar pattern used elsewhere in the indexer for unimplemented methods) so callers receive a proper error result rather than a panic.
🤖 Prompt for AI Agents
In `@sdk-libs/program-test/src/indexer/test_indexer.rs` around lines 999 - 1022,
Replace the todo!() panics in get_compressed_mint, get_compressed_mint_by_pda,
and get_compressed_mints_by_authority with a deterministic Err(IndexerError)
return so callers get an error result instead of a panic; follow the existing
error pattern used elsewhere in the indexer (e.g., return an IndexerError
variant such as UnimplementedOperation or a new
IndexerError::new("get_compressed_mint not implemented") consistent with how
other unimplemented RPCs are reported) and include the method name in the error
message to aid diagnostics.
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
forester/src/compressible/ctoken/compressor.rs (1)
145-152: Verify the sentinel used for `delegate_index` when no delegate is present.

Lines 145-152 use `unwrap_or(0)`. If `0` is not the explicit "none" sentinel for `CompressAndCloseIndices`, this can incorrectly point to the first packed account (often the output queue), causing instruction failure or misbinding.

Run the following to confirm how `delegate_index` is interpreted and which sentinel is expected:

```bash
#!/bin/bash
# Inspect CompressAndCloseIndices and delegate_index usage
rg -n "CompressAndCloseIndices|delegate_index" -S
rg -n "compress_and_close|delegate_index" -S programs/ program-libs/ sdk-libs/ forester/ || true
```
🤖 Fix all issues with AI agents
In `@forester/src/compressible/mint/bootstrap.rs`:
- Around line 18-106: The bootstrap_mint_accounts function duplicates common
scaffolding (shutdown flag setup, reqwest client, localhost branching, logging)
found across ctoken/mint/pda bootstraps; extract that into a shared helper
(e.g., run_bootstrap or BootstrapRunner) that accepts rpc_url, program_id
(Pubkey), filters (Option<Vec<serde_json::Value>>), shutdown_rx
(Option<oneshot::Receiver<()>>), a processing closure (FnMut(RawAccountData) ->
bool) and a label for logs, and then call that helper from
bootstrap_mint_accounts while keeping only mint-specific bits
(MINT_ACCOUNT_TYPE_FILTER, ACCOUNT_TYPE_OFFSET, the process_account closure, and
calls to tracker.update_from_account) and replace direct calls to
bootstrap_standard_api/bootstrap_v2_api with the helper’s internal selection
logic; ensure the helper returns the same tuple shapes (pages/fetched/inserted)
so existing call sites can be adapted with minimal changes.
In `@forester/src/compressible/mint/state.rs`:
- Around line 18-32: The compressible-slot math in calculate_compressible_slot
is off-by-one: use the first unpaid epoch as the compressible boundary. Update
calculate_compressible_slot to compute compressible_slot as
(get_last_funded_epoch(...) + 1) * SLOTS_PER_EPOCH (referencing
calculate_compressible_slot, get_last_funded_epoch and SLOTS_PER_EPOCH) so
is_ready_to_compress (which checks current_slot > compressible_slot) only
returns true once the account has entered an unpaid epoch.
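The boundary arithmetic can be checked with a worked example. `SLOTS_PER_EPOCH` below is an assumed constant and `last_funded_epoch` stands in for the result of `get_last_funded_epoch(...)`; the point is that with the `+ 1` fix, `current_slot > compressible_slot` only fires once the account is inside an unpaid epoch.

```rust
// Assumed value for illustration; use the real constant from the codebase.
const SLOTS_PER_EPOCH: u64 = 432_000;

// Compressible boundary = first slot of the first unpaid epoch.
fn calculate_compressible_slot(last_funded_epoch: u64) -> u64 {
    (last_funded_epoch + 1) * SLOTS_PER_EPOCH
}

fn is_ready_to_compress(current_slot: u64, compressible_slot: u64) -> bool {
    current_slot > compressible_slot
}

fn main() {
    // Epochs 0..=2 are funded, so epoch 3 is the first unpaid epoch.
    let compressible = calculate_compressible_slot(2);
    assert_eq!(compressible, 3 * SLOTS_PER_EPOCH);
    // At the boundary slot the check does not fire yet...
    assert!(!is_ready_to_compress(3 * SLOTS_PER_EPOCH, compressible));
    // ...but one slot into the unpaid epoch it does.
    assert!(is_ready_to_compress(3 * SLOTS_PER_EPOCH + 1, compressible));
}
```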
In `@forester/src/compressible/pda/bootstrap.rs`:
- Around line 99-106: The memcmp filter built for the discriminator (see
discriminator_base58, filters, and program_config.discriminator) omits an
explicit encoding field; update the JSON filter to include "encoding": "base58"
alongside "offset" and "bytes" so RPCs that require explicit encoding receive
base58, matching the pattern used in mint/bootstrap.rs.
In `@forester/src/compressible/pda/compressor.rs`:
- Around line 284-305: The batch transaction path logs a warning when
rpc.confirm_transaction(signature) returns false but still returns
Ok(signature), causing callers (like compress()) to treat it as success; update
the compress_accounts_idempotent flow so that when confirmed is false you return
an Err (e.g., anyhow::anyhow!("Batch transaction not confirmed: {}", signature))
instead of Ok(signature), keep the tracing::warn log, and ensure you only remove
entries from self.tracker when confirmed is true (no tracker changes on the
error path); use the same error crate/type used elsewhere in this file to match
existing error handling.
In `@forester/tests/test_compressible_mint.rs`:
- Around line 354-439: Replace the if/else that checks ready_accounts with a
precondition assert and then run the compression logic unconditionally: remove
the surrounding if !ready_accounts.is_empty() { ... } else { panic!(...) }
wrapper and add assert!(!ready_accounts.is_empty(), "Mint should be ready to
compress with rent_payment=0"); above the existing code that creates
MintCompressor::new(...), calls
compressor.compress_batch(&ready_accounts).await, polls
rpc.get_account(mint_pda), and exercises
rpc.get_compressed_account/get_compressed_mint/get_compressed_mint_by_pda so the
body no longer needs nesting or the else panic.
In `@forester/tests/test_compressible_pda.rs`:
- Around line 403-411: Extract the repeated polling pattern into a reusable
async helper like wait_for_condition(timeout: Duration, poll_interval: Duration,
condition: F) -> bool (as suggested) and replace the manual loops (e.g., the
loop that uses start = tokio::time::Instant::now(); while start.elapsed() <
timeout { if !tracker.is_empty() { ... }
sleep(Duration::from_millis(500)).await; }) with calls to that helper passing
the 60s timeout, 500ms interval and a closure that checks tracker.is_empty();
ensure the helper uses tokio::time::Instant and tokio::time::sleep so tests
remain async-compatible and update other occurrences (lines referenced) to call
wait_for_condition to remove duplication and centralize timeout behavior.
In `@program-libs/compressible/Cargo.toml`:
- Line 38: The solana-rent dependency in Cargo.toml is declared unconditionally
but is only used as a mock fallback when not on-chain (see the
#[cfg(not(all(...)))] guard in src/rent/mod.rs), so mark the dependency optional
by adding optional = true to the solana-rent entry in Cargo.toml while not tying
it to any feature flags (do not add it to the solana feature or other features);
keep all existing uses and cfg guards unchanged so solana-rent remains available
for tests/local dev but not forced for consumers.
In `@sdk-libs/client/src/local_test_validator.rs`:
- Around line 71-78: The loop that appends upgradeable program args (iterating
config.upgradeable_programs and calling path.push_str with
upgradeable_program.program_path) can break if paths contain spaces/special
chars; update the push to quote or shell-escape each argument (e.g., wrap
program_path/program_id/upgrade_authority in quotes or use a shell-escaping
helper like shell_escape::escape) so the constructed command string safely
preserves spaces and special characters, and ensure the module imports the
escape helper if you choose shell_escape.
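A minimal sketch of the quoting approach, assuming a hand-rolled POSIX single-quote escape rather than the `shell_escape` crate (either works). The `--upgradeable-program` flag name and field names are taken from the review's suggestion, not verified against the final CLI.

```rust
// POSIX-safe quoting: wrap in single quotes and escape embedded
// single quotes as '\'' so spaces and special characters survive.
fn quote_arg(arg: &str) -> String {
    format!("'{}'", arg.replace('\'', r"'\''"))
}

fn main() {
    let mut cmd = String::from("solana-test-validator");
    // Hypothetical example entry with a space in the path.
    let programs = [("Prog111", "/tmp/my programs/prog.so", "Auth111")];
    for (program_id, path, authority) in programs {
        cmd.push_str(" --upgradeable-program ");
        cmd.push_str(&quote_arg(program_id));
        cmd.push(' ');
        cmd.push_str(&quote_arg(path));
        cmd.push(' ');
        cmd.push_str(&quote_arg(authority));
    }
    assert!(cmd.contains("'/tmp/my programs/prog.so'"));
}
```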
♻️ Duplicate comments (7)
forester/tests/test_compressible_pda.rs (1)
323-329: Add retry/backoff around `get_create_accounts_proof` to avoid CI flakiness.

This call to `get_create_accounts_proof` (which internally calls `get_validity_proof`) is susceptible to transient indexer failures, as evidenced by the pipeline failure. The same issue exists at lines 511, 773, and 835.

Consider wrapping these calls with retry logic (max 3 attempts, 5-second backoff) to improve CI stability. Based on learnings, CI should run batched tree transfer tests with retry logic due to known flakiness.

🔧 Suggested retry wrapper

```rust
async fn get_create_accounts_proof_with_retry<R: Rpc + Indexer>(
    rpc: &R,
    program_id: &Pubkey,
    inputs: Vec<CreateAccountsProofInput>,
) -> Result<CreateAccountsProofResult, Box<dyn std::error::Error>> {
    let max_attempts = 3;
    let mut last_error = None;
    for attempt in 1..=max_attempts {
        match get_create_accounts_proof(rpc, program_id, inputs.clone()).await {
            Ok(result) => return Ok(result),
            Err(e) => {
                println!("Attempt {}/{} failed: {:?}", attempt, max_attempts, e);
                last_error = Some(e);
                if attempt < max_attempts {
                    sleep(Duration::from_secs(5)).await;
                }
            }
        }
    }
    Err(last_error.unwrap().into())
}
```

sdk-libs/program-test/src/indexer/test_indexer.rs (1)
999-1023: Avoid `todo!()` panics in compressed-mint stubs.

These methods will panic if the new Indexer APIs are invoked during tests; return a deterministic `IndexerError` instead so callers can handle the unimplemented path.

🔧 Proposed fix

```diff
 async fn get_compressed_mint(
     &self,
     _address: Address,
     _config: Option<IndexerRpcConfig>,
 ) -> Result<Response<Option<CompressedMint>>, IndexerError> {
-    todo!("get_compressed_mint not implemented")
+    Err(IndexerError::NotImplemented(
+        "get_compressed_mint not implemented".to_string(),
+    ))
 }

 async fn get_compressed_mint_by_pda(
     &self,
     _mint_pda: &Pubkey,
     _config: Option<IndexerRpcConfig>,
 ) -> Result<Response<Option<CompressedMint>>, IndexerError> {
-    todo!("get_compressed_mint_by_pda not implemented")
+    Err(IndexerError::NotImplemented(
+        "get_compressed_mint_by_pda not implemented".to_string(),
+    ))
 }

 async fn get_compressed_mints_by_authority(
     &self,
     _authority: &Pubkey,
     _authority_type: MintAuthorityType,
     _options: Option<GetCompressedMintsByAuthorityOptions>,
     _config: Option<IndexerRpcConfig>,
 ) -> Result<Response<ItemsWithCursor<CompressedMint>>, IndexerError> {
-    todo!("get_compressed_mints_by_authority not implemented")
+    Err(IndexerError::NotImplemented(
+        "get_compressed_mints_by_authority not implemented".to_string(),
+    ))
 }
```
241-259: CToken bootstrap is skipped when `shutdown_bootstrap` is `None`.

That leaves the ctokens tracker without historical state while PDA/Mint bootstraps still run. Spawn the task unconditionally and pass the optional shutdown receiver through.

🔧 Suggested fix

```diff
-    // Spawn bootstrap task for ctokens
-    if let Some(shutdown_bootstrap_rx) = shutdown_bootstrap {
-        let tracker_clone = ctoken_tracker.clone();
-        let rpc_url = config.external_services.rpc_url.clone();
-
-        tokio::spawn(async move {
-            if let Err(e) = compressible::bootstrap_ctoken_accounts(
-                rpc_url,
-                tracker_clone,
-                Some(shutdown_bootstrap_rx),
-            )
-            .await
-            {
-                tracing::error!("CToken bootstrap failed: {:?}", e);
-            } else {
-                tracing::info!("CToken bootstrap complete");
-            }
-        });
-    }
+    // Spawn bootstrap task for ctokens
+    let tracker_clone = ctoken_tracker.clone();
+    let rpc_url = config.external_services.rpc_url.clone();
+    let shutdown_bootstrap_rx = shutdown_bootstrap;
+
+    tokio::spawn(async move {
+        if let Err(e) = compressible::bootstrap_ctoken_accounts(
+            rpc_url,
+            tracker_clone,
+            shutdown_bootstrap_rx,
+        )
+        .await
+        {
+            tracing::error!("CToken bootstrap failed: {:?}", e);
+        } else {
+            tracing::info!("CToken bootstrap complete");
+        }
+    });
```

forester/src/compressible/mint/compressor.rs (1)
208-211: Prevent panic when `max_concurrent == 0`.

`buffer_unordered(0)` will panic; clamp or validate before calling.

🔧 Proposed fix

```diff
-        let results: Vec<_> = futures::stream::iter(compression_futures)
-            .buffer_unordered(max_concurrent)
+        let concurrency = max_concurrent.max(1);
+        let results: Vec<_> = futures::stream::iter(compression_futures)
+            .buffer_unordered(concurrency)
```

forester/src/compressible/pda/compressor.rs (1)
165-168: Guard `max_concurrent` against zero before `buffer_unordered`.

Lines 165-168 can panic if `max_concurrent == 0`; this was already flagged earlier.

forester/src/epoch_manager.rs (1)
2035-2055: Redundant tracker removal after `compress_batch_concurrent`.

This was flagged in a prior review: both `PdaCompressor::compress_batch_concurrent` and `MintCompressor::compress_batch_concurrent` already remove successful accounts from their trackers internally. The explicit `pda_tracker.remove(&account_state.pubkey)` (line 2043) and `mint_tracker.remove(&mint_state.pubkey)` (line 2115) calls duplicate that work, causing unnecessary DashMap writes.

Consider letting only the compressor implementation own removal, leaving these match arms to just log and update counters.
Also applies to: 2110-2123
forester/tests/test_compressible_mint.rs (1)
91-104: Comment contradicts actual `rent_payment` value.

Line 91 states "CreateMint auto-decompresses (rent_payment > 0)" but line 102 sets `rent_payment: 0`. This inconsistency was flagged previously. Either update the comment to reflect that `rent_payment=0` means "immediately compressible for testing" or adjust the value if decompression behavior differs.

📝 Suggested fix

```diff
-    // Build params - CreateMint auto-decompresses (rent_payment > 0)
+    // Build params - rent_payment=0 makes mint immediately compressible for testing
     let params = CreateMintParams {
```
```rust
let timeout = Duration::from_secs(60);

while start.elapsed() < timeout {
    if !tracker.is_empty() {
        println!("Bootstrap found {} PDA accounts", tracker.len());
        break;
    }
    sleep(Duration::from_millis(500)).await;
}
```
🧹 Nitpick | 🔵 Trivial
Consider extracting the polling timeout pattern.
The 60-second timeout with 500ms polling interval appears in multiple places (here, line 814, line 875, line 942). Consider extracting this into a helper to reduce duplication and ensure consistent timeout behavior across tests.
```rust
async fn wait_for_condition<F, Fut>(
    timeout: Duration,
    poll_interval: Duration,
    mut condition: F,
) -> bool
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = bool>,
{
    let start = tokio::time::Instant::now();
    while start.elapsed() < timeout {
        if condition().await {
            return true;
        }
        sleep(poll_interval).await;
    }
    false
}
```

🤖 Prompt for AI Agents
In `@forester/tests/test_compressible_pda.rs` around lines 403 - 411, Extract the
repeated polling pattern into a reusable async helper like
wait_for_condition(timeout: Duration, poll_interval: Duration, condition: F) ->
bool (as suggested) and replace the manual loops (e.g., the loop that uses start
= tokio::time::Instant::now(); while start.elapsed() < timeout { if
!tracker.is_empty() { ... } sleep(Duration::from_millis(500)).await; }) with
calls to that helper passing the 60s timeout, 500ms interval and a closure that
checks tracker.is_empty(); ensure the helper uses tokio::time::Instant and
tokio::time::sleep so tests remain async-compatible and update other occurrences
(lines referenced) to call wait_for_condition to remove duplication and
centralize timeout behavior.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@forester/src/compressible/bootstrap_helpers.rs`:
- Around line 117-123: Replace uses of the deprecated base64::decode(...) with
the engine-based API so the code compiles under base64 v0.20+; for example,
change the call that assigns data from base64::decode(data_str) (the variable
named data in bootstrap_helpers.rs) to use
base64::engine::general_purpose::STANDARD.decode(data_str) (or import
base64::Engine and call STANDARD.decode(data_str)), preserve the existing error
handling and debug log that references pubkey, and apply the same refactor to
other occurrences in subscriber.rs and tree_data_sync.rs so all base64 decoding
uses the engine-based API.
In `@forester/src/compressible/pda/compressor.rs`:
- Around line 78-103: The code currently deserializes the config with
LightConfig::try_from_slice (skipping SDK checks) and uses a hardcoded
derive_pda(program_id, 0); replace the raw deserialization with the SDK
validator by calling LightConfig::load_checked on the fetched config account
(the account returned for config_pda) so the SDK can enforce config_bump == 0,
verify the PDA derivation matches config_pda, and apply any other constraints;
keep using LightConfig::derive_pda(program_id, 0) to locate config_pda, fetch
the account via rpc.get_account(...) as you already do, then pass that account
into LightConfig::load_checked and propagate any error instead of using
LightConfig::try_from_slice.
♻️ Duplicate comments (2)
forester/src/compressible/pda/compressor.rs (1)
165-169: Potential runtime panic with `buffer_unordered(0)`.

If `max_concurrent` is passed as `0`, `buffer_unordered(0)` will panic at runtime. This value can originate from configuration (e.g., environment variables), so defensive clamping is warranted.

🔧 Proposed fix

```diff
 // Execute concurrently with limit
+let concurrency = max_concurrent.max(1);
 let results: Vec<_> = futures::stream::iter(compression_futures)
-    .buffer_unordered(max_concurrent)
+    .buffer_unordered(concurrency)
     .collect()
     .await;
```

forester/tests/test_compressible_mint.rs (1)

92-104: Comment doesn't match the code behavior.

Line 92 says "CreateMint auto-decompresses (rent_payment > 0)" but line 103 sets `rent_payment: 0`. The code works correctly for testing (immediately compressible), but the comment is misleading.

📝 Proposed fix

```diff
-    // Build params - CreateMint auto-decompresses (rent_payment > 0)
+    // Build params - rent_payment=0 makes the mint immediately compressible for testing
     let params = CreateMintParams {
```
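The clamp itself is just `usize::max`, which can be isolated into a one-line helper; a minimal stand-alone sketch (the function name is hypothetical, not from the PR):

```rust
// Clamp the configured concurrency so buffer_unordered never receives 0,
// which would panic at runtime in futures' buffered streams.
fn effective_concurrency(max_concurrent: usize) -> usize {
    max_concurrent.max(1)
}
```

A config value of `0` then degrades to sequential processing instead of a panic.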
```rust
let data = match base64::decode(data_str) {
    Ok(bytes) => bytes,
    Err(e) => {
        debug!("Failed to decode base64 for account {}: {:?}", pubkey, e);
        return None;
    }
};
```
🧹 Nitpick | 🔵 Trivial
🧩 Analysis chain
🏁 Script executed:

```bash
cat forester/Cargo.toml | grep -A 5 "base64"
```

Repository: Lightprotocol/light-protocol
Length of output: 323

🏁 Script executed:

```bash
cat -n forester/src/compressible/bootstrap_helpers.rs | sed -n '110,130p'
```

Repository: Lightprotocol/light-protocol
Length of output: 705

🏁 Script executed:

```bash
find . -name "CLAUDE.md" -type f
```

Repository: Lightprotocol/light-protocol
Length of output: 645

🏁 Script executed:

```bash
head -100 Cargo.toml | grep -A 10 "\[workspace"
```

Repository: Lightprotocol/light-protocol
Length of output: 745

🏁 Script executed:

```bash
cat Cargo.toml | grep -B 5 -A 5 "base64"
```

Repository: Lightprotocol/light-protocol
Length of output: 320

🏁 Script executed:

```bash
cat CLAUDE.md | head -150
```

Repository: Lightprotocol/light-protocol
Length of output: 7261

🏁 Script executed:

```bash
cat forester/CLAUDE.md
```

Repository: Lightprotocol/light-protocol
Length of output: 122

🏁 Script executed:

```bash
grep -r "base64::" forester/src/ | head -20
```

Repository: Lightprotocol/light-protocol
Length of output: 382

🏁 Script executed:

```bash
cat Cargo.toml | grep -A 2 'base64 = "0.13"'
```

Repository: Lightprotocol/light-protocol
Length of output: 121
Consider this refactoring when upgrading to base64 0.20+.
The base64::decode API is still the standard in the current version (0.13), but it was deprecated in base64 v0.20+. When you eventually upgrade the dependency, update to the engine-based API:
```diff
- let data = match base64::decode(data_str) {
+ use base64::{Engine as _, engine::general_purpose::STANDARD};
+ let data = match STANDARD.decode(data_str) {
```

Apply the same pattern to other instances in subscriber.rs and tree_data_sync.rs for consistency.
🤖 Prompt for AI Agents
In `@forester/src/compressible/bootstrap_helpers.rs` around lines 117 - 123,
Replace uses of the deprecated base64::decode(...) with the engine-based API so
the code compiles under base64 v0.20+; for example, change the call that assigns
data from base64::decode(data_str) (the variable named data in
bootstrap_helpers.rs) to use
base64::engine::general_purpose::STANDARD.decode(data_str) (or import
base64::Engine and call STANDARD.decode(data_str)), preserve the existing error
handling and debug log that references pubkey, and apply the same refactor to
other occurrences in subscriber.rs and tree_data_sync.rs so all base64 decoding
uses the engine-based API.
Force-pushed c499bd8 to 0093038 (compare)
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
forester/src/compressible/ctoken/state.rs (1)
93-101: Avoid leaving stale state when slot calculation fails.

On failure you now return `Ok(())` without updating or removing the existing entry, so a previously tracked account can remain with an outdated `compressible_slot` and still be considered for compression. Consider removing the entry (or bubbling the error) to avoid acting on stale data.

🛠️ Suggested fix (remove stale entry on failure)

```diff
 Err(e) => {
     warn!(
         "Failed to calculate compressible slot for {}: {}",
         pubkey, e
     );
+    self.remove(&pubkey);
     return Ok(());
 }
```

forester/src/compressible/bootstrap_helpers.rs (1)
223-225: Consider extending localhost detection for edge cases.

The current check covers the common cases (`localhost`, `127.0.0.1`), but you may want to also handle:

- `0.0.0.0` (sometimes used for local binding)
- `[::1]` or `::1` (IPv6 localhost)

This is a minor edge case—the current implementation works for typical local development setups.
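A helper covering all four spellings can be sketched with a naive substring check; this is a hypothetical illustration (a real implementation would parse the URL and compare the host component exactly):

```rust
// Hypothetical localhost check covering the edge cases listed above.
// Substring matching is naive but sufficient for typical dev RPC URLs.
fn is_localhost(url: &str) -> bool {
    const LOCAL_HOSTS: [&str; 4] = ["localhost", "127.0.0.1", "0.0.0.0", "[::1]"];
    LOCAL_HOSTS.iter().any(|h| url.contains(h))
}
```

Note that `"[::1]"` already covers the bare `::1` form when URLs always bracket IPv6 hosts; matching on `::1` directly would also catch unbracketed input.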
forester/tests/test_compressible_mint.rs (1)
321-343: Prefer `build_expected_mint` here to keep assertions consistent.

This reduces duplication and keeps the expected‑state logic centralized.

♻️ Suggested refactor

```diff
-    // Build expected Mint
-    let expected_mint = Mint {
-        base: BaseMint {
-            mint_authority: Some(payer.pubkey().to_bytes().into()),
-            supply: 0,
-            decimals: 9,
-            is_initialized: true,
-            freeze_authority: None,
-        },
-        metadata: MintMetadata {
-            version: metadata_version,
-            mint_decompressed: true,
-            mint: mint_pda.to_bytes().into(),
-            mint_signer: mint_seed.pubkey().to_bytes(),
-            bump,
-        },
-        reserved: [0u8; 16],
-        account_type: ACCOUNT_TYPE_MINT,
-        compression,
-        extensions: None,
-    };
+    // Build expected Mint
+    let expected_mint = build_expected_mint(
+        &payer.pubkey(),
+        9,
+        &mint_pda,
+        &mint_seed.pubkey().to_bytes(),
+        bump,
+        metadata_version,
+        compression,
+    );
```
🤖 Fix all issues with AI agents
In `@forester/src/compressible/bootstrap_helpers.rs`:
- Around line 399-405: The spawned shutdown listener (tokio::spawn using
rx.await and shutdown_flag_clone.store) can leak if run_bootstrap returns early;
change the approach to either (1) race the bootstrap work against the shutdown
signal using tokio::select! (pass shutdown_rx into
bootstrap_standard_api/bootstrap_v2_api and check it there) so no background
task is spawned, or (2) keep the JoinHandle from tokio::spawn and explicitly
abort or await it before run_bootstrap returns (drop/abort the handle on all
return paths) to ensure the spawned task is cleaned up; update the code paths
that currently use the shutdown_rx + shutdown_flag_clone + rx.await spawn block
accordingly.
In `@forester/src/compressible/mint/compressor.rs`:
- Around line 153-170: The code currently pushes every chunk's signature into
signatures regardless of `confirmed`; update the logic in the function handling
the batching (the loop that uses `confirmed`, `chunk`, `chunk_idx`, `signature`,
and `self.tracker.remove`) so that you only push `signature` into the
`signatures` Vec when `confirmed` is true (i.e., move the
`signatures.push(signature)` inside the `if confirmed { ... }` block and keep
`self.tracker.remove(&mint_state.pubkey)` behavior unchanged), ensuring only
confirmed signatures are returned to callers.
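The filtering the prompt asks for can be isolated from the RPC plumbing; a minimal stand-alone sketch with simplified stand-in types (not the forester's actual types):

```rust
// Sketch of the suggested behavior: a signature is collected only when its
// chunk confirmed. Each tuple is (signature, confirmed) as a stand-in for
// the per-chunk send result.
fn collect_confirmed(results: Vec<(String, bool)>) -> Vec<String> {
    results
        .into_iter()
        .filter_map(|(signature, confirmed)| confirmed.then_some(signature))
        .collect()
}
```

Tracker removal can still happen unconditionally per chunk; only the returned signature list is gated on confirmation.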
In `@forester/tests/test_compressible_ctoken.rs`:
- Around line 528-532: The current loop uses .expect(&format!(...)) which
allocates even on success; change the lookup chain (the iterator .find(...)
calling .expect(...)) to defer allocation by replacing the expect/format call
with unwrap_or_else that panics only on the failure path (e.g., use
.unwrap_or_else(|| panic!("Bootstrap should have found account {}", pubkey))).
Update the code around expected_pubkeys, accounts, and account_state to use this
unwrap_or_else approach so the formatted string is only created when the lookup
fails.
In `@program-libs/CLAUDE.md`:
- Around line 63-72: Add a blank line between the "## Testing" heading and the
fenced code block so the triple-backtick block is preceded by an empty line;
locate the "## Testing" section in CLAUDE.md and insert a single blank line
before the code fence (the ``` bash block containing the three cargo test lines)
to follow Markdown best practices.
- Around line 1-4: Replace the current paragraph summary in CLAUDE.md with a 2–5
bullet "Summary" section, and add "Used In" and "Navigation" headings with brief
bullet lists; keep the existing high-level description but reformat it into the
header structure ("## Summary", "## Used In", "## Navigation", plus any existing
high-level sections) so the file matches the CLAUDE.md template and provides
clear links to on-chain programs, sdk-libs, and relevant tests or crate docs.
♻️ Duplicate comments (3)
forester/src/compressible/pda/compressor.rs (1)
174-178: Guard against `max_concurrent == 0` to prevent `buffer_unordered` panic.

This was flagged in a previous review and addressed in `mint/compressor.rs` (lines 191-198), but the same fix is missing here. `buffer_unordered(0)` panics at runtime.

🔧 Proposed fix

```diff
+// Guard against max_concurrent == 0 to avoid buffer_unordered panic
+if max_concurrent == 0 {
+    return account_states
+        .iter()
+        .cloned()
+        .map(|state| Err((state, anyhow::anyhow!("max_concurrent must be > 0"))))
+        .collect();
+}
+
 // Execute concurrently with limit
 let results: Vec<_> = futures::stream::iter(compression_futures)
     .buffer_unordered(max_concurrent)
```
sdk-libs/client/src/local_test_validator.rs (1)
71-77: Consider shell-escaping upgradeable program args.

If any path contains spaces/special characters, `sh -c` will misparse the command line. This is low-risk for test infra but still worth hardening if paths can vary.

sdk-libs/program-test/src/indexer/test_indexer.rs (1)
999-1023: Replace `todo!()` with deterministic errors to avoid test panics.

These stub methods will panic if called; return an `IndexerError` instead so callers can handle unimplemented paths.

✅ Safer stub implementation

```diff
 async fn get_compressed_mint(
     &self,
     _address: Address,
     _config: Option<IndexerRpcConfig>,
 ) -> Result<Response<Option<CompressedMint>>, IndexerError> {
-    todo!("get_compressed_mint not implemented")
+    Err(IndexerError::CustomError(
+        "get_compressed_mint not implemented".to_string(),
+    ))
 }
 async fn get_compressed_mint_by_pda(
     &self,
     _mint_pda: &Pubkey,
     _config: Option<IndexerRpcConfig>,
 ) -> Result<Response<Option<CompressedMint>>, IndexerError> {
-    todo!("get_compressed_mint_by_pda not implemented")
+    Err(IndexerError::CustomError(
+        "get_compressed_mint_by_pda not implemented".to_string(),
+    ))
 }
 async fn get_compressed_mints_by_authority(
     &self,
     _authority: &Pubkey,
     _authority_type: MintAuthorityType,
     _options: Option<GetCompressedMintsByAuthorityOptions>,
     _config: Option<IndexerRpcConfig>,
 ) -> Result<Response<ItemsWithCursor<CompressedMint>>, IndexerError> {
-    todo!("get_compressed_mints_by_authority not implemented")
+    Err(IndexerError::CustomError(
+        "get_compressed_mints_by_authority not implemented".to_string(),
+    ))
 }
```
Force-pushed 7671bfc to e2663c4 (compare)
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
program-libs/CLAUDE.md (1)
68-70: Include `--all-features` for light-batched-merkle-tree tests.

The testing guidance here should match the standard unit test command set for program-libs; batched-merkle-tree also needs `--all-features`. As per coding guidelines, keep CLAUDE.md accurate. Based on learnings, update the command.

✅ Proposed doc fix

```diff
-cargo test -p light-batched-merkle-tree
+cargo test -p light-batched-merkle-tree --all-features
```

forester/src/compressible/pda/compressor.rs (1)
227-243: Consider adding error context for individual account failures in parallel fetch.

When `try_join_all` fails, you lose visibility into which specific compressed account caused the failure. For debugging larger batches, capturing which address failed would be helpful.

♻️ Optional improvement

```diff
 let compressed_account_futures = compressed_addresses.iter().map(|addr| {
     let rpc_clone = self.rpc_pool.clone();
     let addr = *addr;
     async move {
         let rpc = rpc_clone.get_connection().await?;
         rpc.get_compressed_account(addr, None)
             .await
-            .map_err(|e| anyhow::anyhow!("Failed to get compressed account: {:?}", e))?
+            .map_err(|e| anyhow::anyhow!("Failed to get compressed account {:?}: {:?}", addr, e))?
             .value
-            .ok_or_else(|| anyhow::anyhow!("Compressed account not found: {:?}", addr))
+            .ok_or_else(|| anyhow::anyhow!("Compressed account not found for address: {:?}", addr))
     }
 });
```

forester/src/compressible/ctoken/state.rs (1)
93-102: Silent failure on compressible slot calculation may mask upstream issues.

Changing from error propagation to `warn!` + `Ok(())` means callers won't know when accounts fail to be tracked due to missing extensions or invalid rent data. This is fine for graceful degradation, but consider whether you want metrics or alerting on these "soft failures" in production—especially if many accounts start hitting this path unexpectedly.

If this is intentional (e.g., you expect some accounts to lack the Compressible extension), a brief inline comment documenting that expectation would help future readers.
🤖 Fix all issues with AI agents
In `@forester/package.json`:
- Line 6: The package.json "build" npm script currently runs "cargo build" which
generates artifacts under target/debug, but NX expects outputs at
{workspaceRoot}/target/release; update the "build" script to run cargo build
--release (i.e., change the "build" script entry in package.json) so outputs
land in target/release, or alternatively change the NX configuration outputs to
point to {workspaceRoot}/target/debug to match the existing "build"
script—modify either the package.json "build" script or the NX outputs setting
to make them consistent.
In `@forester/src/compressible/bootstrap_helpers.rs`:
- Around line 415-473: Both branches of the RPC bootstrap (the is_localhost
branch calling bootstrap_standard_api and the else branch calling
bootstrap_v2_api) repeat the same post-call steps: aborting
shutdown_listener_handle, unwrapping the api_result, logging with label, and
constructing a BootstrapResult; extract that shared logic into a small helper or
closure (e.g., a function handle_bootstrap_result) that takes the api_result (or
its Ok tuple) and label and performs handle.abort(), info! logging, and
BootstrapResult construction, then call that helper from both places to remove
duplication while keeping bootstrap_standard_api and bootstrap_v2_api calls
intact.
In `@forester/tests/test_compressible_mint.rs`:
- Around line 759-767: The test currently finds a mint by comparing
m.mint.decimals == 9 which can collide with other mints; update the lookup in
the mints_by_authority query to use a unique identifier (e.g., compare m.address
or m.mint.compression_address against the known compression_address_1) so the
assert targets the specific mint under test (adjust the variable referenced in
the closure used by found_mint and keep the subsequent assert the same).
In `@sdk-libs/client/src/indexer/types.rs`:
- Around line 1058-1061: The doc comment for the struct field `extensions` is
incorrect: it states "Serialized extensions (base64 encoded)" but the field
`extensions: Option<Vec<u8>>` actually contains decoded bytes (base64 decoding
occurs in the `TryFrom` implementation). Update the comment on `extensions` in
types.rs to reflect that it stores decoded/serialized bytes (e.g., "Serialized
extensions (decoded bytes)") and, if helpful, add a brief note pointing to the
`TryFrom` implementation that performs base64 decoding.
In `@sdk-libs/photon-api/src/models/_get_compressed_mint_post_request_params.rs`:
- Around line 22-28: The current GetCompressedMintPostRequestParams::new()
constructs an instance with both address and mint_pda as None which allows
invalid requests; replace it with a checked constructor that enforces exactly
one selector is set (either address or mint_pda). Implement either two explicit
constructors (e.g., GetCompressedMintPostRequestParams::with_address(address:
String) and ::with_mint_pda(mint_pda: String)) or a single checked factory
(e.g., ::try_new(address: Option<String>, mint_pda: Option<String>) ->
Result<Self, Error>) that returns an error if both or neither are provided;
update callers to use the new constructors and remove/privatize the unsafe
public new() to prevent creating malformed params.
♻️ Duplicate comments (2)
forester/src/compressible/pda/compressor.rs (1)
176-180: `buffer_unordered(0)` will panic at runtime if `max_concurrent` is zero.

This issue was flagged in a previous review and remains unaddressed. If `max_concurrent` (derived from `max_concurrent_batches` config) is set to 0 via environment variable, this will cause a runtime panic. Guard with `.max(1)` before passing to `buffer_unordered()`.

🔧 Proposed fix

```diff
 // Execute concurrently with limit
+let concurrency = max_concurrent.max(1);
 let results: Vec<_> = futures::stream::iter(compression_futures)
-    .buffer_unordered(max_concurrent)
+    .buffer_unordered(concurrency)
     .collect()
     .await;
```

sdk-libs/program-test/src/indexer/test_indexer.rs (1)
999-1023: Avoid `todo!()` panics for the new compressed-mint methods. These should return a deterministic `Err(IndexerError::...)` so tests can handle the unimplemented path gracefully instead of crashing.
sdk-libs/photon-api/src/models/_get_compressed_mint_post_request_params.rs (outdated, resolved)
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
forester/src/compressible/ctoken/state.rs (1)
93-101: Avoid stale tracker entries when compressible-slot calc fails.

Returning `Ok(())` without updating/removing leaves any previously tracked state in place, which can schedule compression at the wrong slot. Consider removing the existing entry (or propagate the error) on failure.

🔧 Suggested fix

```diff
 Err(e) => {
     warn!(
         "Failed to calculate compressible slot for {}: {}",
         pubkey, e
     );
+    // Avoid stale entries if this account was previously tracked.
+    self.remove(&pubkey);
     return Ok(());
 }
```

forester/tests/test_compressible_ctoken.rs (2)
100-156: Add a timeout around registration/active phase waits to avoid long hangs.

When `current_slot` is already in active phase, the new logic waits for the next epoch's registration start with no upper bound. That can stall CI for a long time if epochs are long or slot progress is slow. A bounded wait (with a clear error) keeps the test deterministic.

Consider guarding the two `while rpc.get_slot() < ...` loops with a timeout and failing fast if exceeded.

🔧 Suggested timeout guard

```diff
+let wait_deadline = tokio::time::Instant::now() + Duration::from_secs(60);
 // Wait for registration phase
 while rpc.get_slot().await? < register_phase_start {
+    if tokio::time::Instant::now() > wait_deadline {
+        return Err("Timed out waiting for registration phase".into());
+    }
     sleep(Duration::from_millis(400)).await;
 }

+let wait_deadline = tokio::time::Instant::now() + Duration::from_secs(60);
 // Wait for active phase
 while rpc.get_slot().await? < active_phase_start {
+    if tokio::time::Instant::now() > wait_deadline {
+        return Err("Timed out waiting for active phase".into());
+    }
     sleep(Duration::from_millis(400)).await;
 }
```
53-67: Mitigate airdrop rate-limit flakiness with retry/backoff.

CI already failed with "airdrop lamports request failed (rate limit)". Add a small retry helper with backoff to make these tests robust.

🔧 Example retry helper

```diff
+async fn airdrop_with_retry<R: Rpc>(
+    rpc: &mut R,
+    pubkey: &Pubkey,
+    lamports: u64,
+) -> Result<(), Box<dyn std::error::Error>> {
+    let mut delay = Duration::from_millis(200);
+    for attempt in 1..=5 {
+        match rpc.airdrop_lamports(pubkey, lamports).await {
+            Ok(_) => return Ok(()),
+            Err(e) if attempt < 5 => {
+                tracing::warn!("Airdrop failed (attempt {}): {:?}. Retrying...", attempt, e);
+                sleep(delay).await;
+                delay *= 2;
+            }
+            Err(e) => return Err(e.into()),
+        }
+    }
+    Ok(())
+}

-rpc.airdrop_lamports(&governance_pubkey, 1_000_000_000)
-    .await?;
+airdrop_with_retry(rpc, &governance_pubkey, 1_000_000_000).await?;

-rpc.airdrop_lamports(&forester_pubkey, 10_000_000_000)
-    .await?;
+airdrop_with_retry(rpc, &forester_pubkey, 10_000_000_000).await?;

-rpc.airdrop_lamports(&payer.pubkey(), 10_000_000_000)
-    .await
-    .expect("Failed to airdrop lamports");
+airdrop_with_retry(&mut rpc, &payer.pubkey(), 10_000_000_000)
+    .await
+    .expect("Failed to airdrop lamports");
```

Also applies to: 231-233
🤖 Fix all issues with AI agents
In `@forester/tests/test_compressible_mint.rs`:
- Around line 398-419: The test currently treats a mint as closed only if
rpc.get_account(mint_pda) returns None; update the loop that queries get_account
(variable mint_after) to also treat Some(account) with account.lamports == 0 as
closed (set account_closed = true and break) so zero‑lamport leftover accounts
are accepted as closed (same pattern used in the ctoken test).
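The closed-account predicate the prompt describes is small enough to isolate; here is a minimal stand-in sketch (the `Account` struct is hypothetical, only `lamports` matters for the check):

```rust
// Minimal stand-in for the closed-account check suggested above.
struct Account {
    lamports: u64,
}

// Treat both "no account" and a zero-lamport leftover as closed,
// since compression may leave a zero-lamport shell behind.
fn is_closed(account: Option<&Account>) -> bool {
    match account {
        None => true,
        Some(a) => a.lamports == 0,
    }
}
```

The polling loop then becomes `if is_closed(mint_after.as_ref()) { account_closed = true; break; }` instead of checking `is_none()` alone.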
In `@sdk-libs/photon-api/src/models/compressed_mint_list.rs`:
- Around line 11-19: The CompressedMintList struct derives Serialize and
Deserialize but the serde traits are not imported; add an explicit import for
serde::Serialize and serde::Deserialize at the top of the file so the derives on
the CompressedMintList struct (and its fields like items:
Vec<models::CompressedMint>, cursor: Option<String>) compile correctly.
♻️ Duplicate comments (2)
sdk-libs/client/src/local_test_validator.rs (1)
71-76: Consider shell-escaping upgradeable program args.

This is still vulnerable to spaces/special chars in paths. Quoting or `shell_escape` would make the command safer.

🔧 Example quoting

```diff
-path.push_str(&format!(
-    " --upgradeable-program {} {} {}",
-    upgradeable_program.program_id,
-    upgradeable_program.program_path,
-    upgradeable_program.upgrade_authority
-));
+path.push_str(&format!(
+    " --upgradeable-program {} \"{}\" {}",
+    upgradeable_program.program_id,
+    upgradeable_program.program_path,
+    upgradeable_program.upgrade_authority
+));
```

sdk-libs/program-test/src/indexer/test_indexer.rs (1)
999-1022: Avoid `todo!()` panics in the new compressed mint stubs. These will still panic if invoked; return a deterministic `IndexerError` instead so tests can handle unimplemented behavior gracefully.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
forester/tests/test_compressible_mint.rs (2)
322-341: Consider using `build_expected_mint` helper for consistency.

You've defined the `build_expected_mint` helper at lines 26-58 and use it in `test_compressible_mint_bootstrap`, but here you're constructing the expected Mint inline. Using the helper keeps things DRY and ensures any future changes to Mint structure only need updating in one place.

♻️ Suggested refactor

```diff
-    // Build expected Mint
-    let expected_mint = Mint {
-        base: BaseMint {
-            mint_authority: Some(payer.pubkey().to_bytes().into()),
-            supply: 0,
-            decimals: 9,
-            is_initialized: true,
-            freeze_authority: None,
-        },
-        metadata: MintMetadata {
-            version: metadata_version,
-            mint_decompressed: true,
-            mint: mint_pda.to_bytes().into(),
-            mint_signer: mint_seed.pubkey().to_bytes(),
-            bump,
-        },
-        reserved: [0u8; 16],
-        account_type: ACCOUNT_TYPE_MINT,
-        compression,
-        extensions: None,
-    };
+    // Build expected Mint using helper
+    let expected_mint = build_expected_mint(
+        &payer.pubkey(),
+        9,
+        &mint_pda,
+        &mint_seed.pubkey().to_bytes(),
+        bump,
+        metadata_version,
+        compression,
+    );
```
641-652: Same zero-lamport handling needed here.

Consistent with the other test, this loop should also treat zero-lamport accounts as closed.

🔧 Suggested fix

```diff
 while start.elapsed() < timeout {
     let mint_after = rpc
         .get_account(mint_pda_1)
         .await
         .expect("Failed to query mint account");
-    if mint_after.is_none() {
+    if mint_after.is_none()
+        || mint_after.as_ref().map(|a| a.lamports) == Some(0)
+    {
         account_closed = true;
         println!("First mint account closed successfully!");
         break;
     }
     sleep(Duration::from_millis(500)).await;
 }
```
♻️ Duplicate comments (1)
forester/tests/test_compressible_mint.rs (1)
403-414: Handle zero-lamport accounts as "closed".

A past review noted that compression might leave a zero-lamport account rather than deleting it outright. The current check only treats `None` as closed. Treating `lamports == 0` as closed too makes the test more robust.

🔧 Suggested fix

```diff
 while start.elapsed() < timeout {
     let mint_after = rpc
         .get_account(mint_pda)
         .await
         .expect("Failed to query mint account");
-    if mint_after.is_none() {
+    if mint_after.is_none()
+        || mint_after.as_ref().map(|a| a.lamports) == Some(0)
+    {
         account_closed = true;
         println!("Mint account closed successfully!");
         break;
     }
     sleep(Duration::from_millis(500)).await;
 }
```
Force-pushed 4844a79 to 5268a7f (compare)
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
forester/tests/test_compressible_ctoken.rs (1)
94-160: Add a bounded wait for phase transitions to prevent hangs.

With target_epoch potentially rolling forward, the two "wait for phase" loops can stall for an unbounded duration if phase timing drifts or the local validator stalls. Please add a maximum wait and fail fast with a clear error so CI doesn't hang indefinitely.
🤖 Fix all issues with AI agents
In `@forester/src/compressible/mint/bootstrap.rs`:
- Around line 46-61: Update the final info log to include the number of
successfully inserted accounts so failures are visible: when logging the
bootstrap summary after run_bootstrap returns, include result.inserted alongside
tracker.len() and result.fetched (the closure that processes accounts is
process_account, and the bootstrap call is run_bootstrap), so the message
reports total tracked, fetched, inserted, and pages to aid debugging in cases
where process_account may return false.
♻️ Duplicate comments (4)
sdk-libs/photon-api/src/models/compressed_mint_list.rs (1)
11-19: Add missing serde derives import (compile error otherwise).
If there isn't a crate-level `#[macro_use] extern crate serde;` or module prelude, these derives won't resolve.

🔧 Proposed fix

```diff
 use crate::models;
+use serde::{Deserialize, Serialize};
```

Run this to confirm whether serde derive macros are already in scope via crate-wide imports:

```bash
#!/bin/bash
# Check for macro_use or re-exports that would make Serialize/Deserialize available
rg -n "macro_use.*serde|extern crate serde|pub use serde::\{.*Serialize.*Deserialize.*\}" sdk-libs/photon-api
```

sdk-libs/client/src/indexer/types.rs (1)
1058-1061: Fix `extensions` doc comment to match decoded bytes.

The field holds decoded bytes (base64 is decoded in the `TryFrom` implementation), so the current wording is misleading.

📝 Suggested doc comment tweak

```diff
-    /// Serialized extensions (base64 encoded)
+    /// Serialized extensions (decoded bytes from the API's base64 representation)
     pub extensions: Option<Vec<u8>>,
```

forester/tests/test_compressible_mint.rs (1)
403-413: Treat zero‑lamport mint accounts as closed.

Some compression paths can leave a zero‑lamport account instead of `None`; the current polling can falsely fail. Consider applying the same check in the subscription test loop around Line 646 as well.

🔧 Suggested update

```diff
-if mint_after.is_none() {
+if mint_after.is_none()
+    || mint_after.as_ref().map(|a| a.lamports) == Some(0)
+{
     account_closed = true;
     println!("Mint account closed successfully!");
     break;
 }
```

```diff
-if mint_after.is_none() {
+if mint_after.is_none()
+    || mint_after.as_ref().map(|a| a.lamports) == Some(0)
+{
     account_closed = true;
     println!("First mint account closed successfully!");
     break;
 }
```

Also applies to: 641-649
sdk-libs/program-test/src/indexer/test_indexer.rs (1)
999-1022: Avoid `todo!()` panics in compressed-mint stubs.

Same concern as prior review: return a deterministic `Err(IndexerError)` so tests can handle unimplemented calls without panicking.
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@forester/tests/test_indexer_interface.rs`:
- Line 183: Calls to wait_for_indexer are using needless mutable borrows
(wait_for_indexer(&mut rpc)); change those call sites to pass an immutable
reference (wait_for_indexer(&rpc)) since wait_for_indexer only requires
&RpcClient (or equivalent) and does not mutate rpc. Update all occurrences
(e.g., the call at wait_for_indexer(&mut rpc) and the similar calls around lines
noted) so they use &rpc, and remove any unnecessary mut on local variable
declarations only if no other uses require mutation.
In `@sdk-libs/client/src/indexer/types.rs`:
- Around line 1244-1300: The TokenAccountInterface TryFrom implementation
duplicates AccountInterface construction; replace the manual reconstruction in
impl TryFrom<&photon_api::models::TokenAccountInterface> for
TokenAccountInterface (the block building `account: AccountInterface { ... }`)
by calling AccountInterface::try_from(&tai.account) and propagate errors, then
use the returned AccountInterface when building TokenAccountInterface; ensure
you keep the existing token parsing logic (TokenData construction) unchanged and
adjust imports/traits if needed to resolve TryFrom conversions.
- Around line 1311-1341: The MintInterface TryFrom implementation duplicates
AccountInterface construction; replace the manual reconstruction block that
builds AccountInterface (address, lamports, owner, data, executable, rent_epoch,
resolved_from, resolved_slot, compressed_context) with a call to
AccountInterface::try_from(&mi.account)? to reuse the existing conversion logic,
then keep the existing MintData::try_from(&mi.mint_data)? and return
Ok(MintInterface { account, mint_data }); ensure any now-unused local variables
are removed and necessary trait/imports for TryFrom are present.
In `@sdk-libs/photon-api/src/models/_get_token_account_interface_post_request.rs`:
- Around line 27-33: The constructor GetTokenAccountInterfacePostRequest::new
currently sets a hardcoded id ("test-id") which breaks JSON-RPC response
correlation; change it to generate a unique UUID v4 string for the id (use
uuid::Uuid::new_v4().to_string()) when building the struct (inside
GetTokenAccountInterfacePostRequest::new where id is set alongside jsonrpc,
method, params) and update any other similar constructors (e.g.,
GetAccountInterfacePostRequest::new, GetMintInterfacePostRequest::new) to use
the same UUID v4 pattern so each request gets a unique id.
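If pulling in the `uuid` crate is undesirable for some constructor, even a process-local atomic counter avoids the fixed `"test-id"` problem, since JSON-RPC only needs the id to be unique per in-flight request. A hypothetical stdlib-only sketch (a real client would likely prefer `uuid::Uuid::new_v4()` as suggested above):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Process-local monotonically increasing request id.
static REQUEST_ID: AtomicU64 = AtomicU64::new(0);

// Returns a unique id string for correlating JSON-RPC requests/responses.
fn next_request_id() -> String {
    format!("req-{}", REQUEST_ID.fetch_add(1, Ordering::Relaxed))
}
```

Each constructor would call `next_request_id()` (or `Uuid::new_v4().to_string()`) instead of hardcoding the id.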
sdk-libs/photon-api/src/models/_get_token_account_interface_post_request.rs (resolved)
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@sdk-libs/photon-api/src/models/cold_data.rs`:
- Around line 13-25: The discriminator field currently allows any length
(Vec<u8>) which contradicts the “first 8 bytes” doc; change
ColdData::discriminator to a fixed-size [u8; 8] (update the struct type and the
ColdData::new signature to accept [u8; 8]) so invalid-length payloads are
impossible, and update call sites to construct or convert into [u8; 8] (or,
alternatively, if you must accept variable input, add explicit validation inside
ColdData::new that checks discriminator.len() == 8 and return a Result or panic
on mismatch); refer to the ColdData struct and impl ColdData::new to make these
changes.
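The fixed-size discriminator can be sketched as follows. This is an illustrative stand-in for photon-api's `ColdData` with a simplified field set; only the discriminator handling is the point, and the `try_new` adapter name is an assumption, not the crate's API:

```rust
// Illustrative stand-in for photon-api's ColdData; fields simplified.
#[derive(Debug, Clone, PartialEq)]
pub struct ColdData {
    /// First 8 bytes of the account data.
    pub discriminator: [u8; 8],
    pub data: Vec<u8>,
}

impl ColdData {
    /// Infallible: the type system guarantees exactly 8 bytes.
    pub fn new(discriminator: [u8; 8], data: Vec<u8>) -> Self {
        Self { discriminator, data }
    }

    /// Fallible adapter for callers that still hold a variable-length slice.
    pub fn try_new(discriminator: &[u8], data: Vec<u8>) -> Result<Self, String> {
        let disc: [u8; 8] = discriminator
            .try_into()
            .map_err(|_| format!("discriminator must be 8 bytes, got {}", discriminator.len()))?;
        Ok(Self { discriminator: disc, data })
    }
}
```

With `[u8; 8]` in the struct, invalid-length payloads are rejected at the conversion boundary instead of propagating into consumers.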
2c976fc to cf0f163
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
sdk-libs/client/src/interface/account_interface_ext.rs (1)

238-239: TODO noted: concurrent fetching. The sequential loop with `TODO: concurrent with futures` is fine for now, but for production performance with many accounts, you'll want to use `futures::future::join_all` or similar to parallelize the RPC calls. Happy to help sketch that out when you're ready.

forester/tests/test_compressible_mint.rs (1)
313-315: Minor: Redundant clone before unwrap. The `clone()` on line 314 is unnecessary since you immediately `unwrap()` it. You can simplify this.

♻️ Suggested simplification

```diff
- let mint_data = mint_account.clone().unwrap();
+ let mint_data = mint_account.unwrap();
```
🤖 Fix all issues with AI agents
In `@forester/src/compressible/mint/state.rs`:
- Around line 22-46: The early-return duplicates available_balance/epochs_funded
logic and can diverge from get_last_funded_epoch (leading to inconsistent
results when epochs_funded == 0 and compression_info.last_claimed_slot > 0);
remove the local early-return that checks rent_per_epoch == 0 ||
available_balance / rent_per_epoch == 0 and instead always call
get_last_funded_epoch (using compression_info, account_size, lamports, config,
and rent_exemption) and return (last_funded_epoch + 1) * SLOTS_PER_EPOCH, or if
you prefer to keep a short-circuit, make the zero-epochs behavior match
get_last_funded_epoch by returning last_claimed_epoch.saturating_sub(1)
converted to slot before multiplying by SLOTS_PER_EPOCH (referencing
get_last_funded_epoch, available_balance, rent_per_epoch,
compression_info.last_claimed_slot, and SLOTS_PER_EPOCH).
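The structural fix the prompt asks for, one source of truth for the epoch math, can be sketched like this. All names, the funding arithmetic, and the `SLOTS_PER_EPOCH` value are illustrative assumptions, not the forester's actual formulas; the point is that the short-circuit path delegates to the same function instead of duplicating it:

```rust
// Illustrative constant; the real value lives in the forester config.
const SLOTS_PER_EPOCH: u64 = 432_000;

// Single source of truth for the zero-epochs-funded behavior.
fn get_last_funded_epoch(last_claimed_epoch: u64, available_balance: u64, rent_per_epoch: u64) -> u64 {
    if rent_per_epoch == 0 {
        return last_claimed_epoch.saturating_sub(1);
    }
    let epochs_funded = available_balance / rent_per_epoch;
    if epochs_funded == 0 {
        last_claimed_epoch.saturating_sub(1)
    } else {
        last_claimed_epoch + epochs_funded
    }
}

// Always derives from get_last_funded_epoch rather than re-implementing it.
fn compressible_at_slot(last_claimed_epoch: u64, available_balance: u64, rent_per_epoch: u64) -> u64 {
    (get_last_funded_epoch(last_claimed_epoch, available_balance, rent_per_epoch) + 1) * SLOTS_PER_EPOCH
}
```

Because both the zero and non-zero cases flow through one function, the two code paths can no longer diverge.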
In `@forester/tests/test_indexer_interface.rs`:
- Around line 41-44: The test imports from outdated paths in light_token_client;
update the imports so the symbols resolve (or add crate re-exports/feature
flags). Locate the test's use of create_compressible_token_account and
CreateCompressibleTokenAccountInputs and change their import path to the crate's
current module (or add pub use in light_token_client) and likewise ensure
create_mint_action_instruction, MintActionParams, and MintActionType are
imported from the correct instructions::mint_action module (or re-export them
from a top-level module); adjust the test imports to match the crate's API (or
add the necessary pub use re-exports / feature-gated exports in
light_token_client) so the compiler can find these symbols.
In `@sdk-libs/client/src/indexer/types.rs`:
- Around line 1167-1226: The convert_cold_context function duplicates the same
tree_info and data decoding across the Account/Token/Mint arms; extract that
logic into small helpers (e.g., decode_tree_info(tree_info:
&photon_api::models::TreeInfo) -> Result<InterfaceTreeInfo, IndexerError> and
decode_cold_data(data: &photon_api::models::ColdData) -> Result<ColdData,
IndexerError>) that reuse decode_base58_to_fixed_array, Pubkey::new_from_array,
base64::decode_config and map base64 errors to
IndexerError::InvalidResponseData, then call these helpers from each arm in
convert_cold_context to remove duplication while preserving existing return
types (ColdContext variants).
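The helper extraction can be sketched as below. The types and the "decoder" are toy stand-ins for the real photon-api models and the bs58/base64 crates; only the shape of the refactor (one shared helper called from every match arm) is the point:

```rust
// Stand-ins for the photon-api model and the client-side interface type.
struct PhotonTreeInfo {
    tree: String,
}

#[derive(Debug, PartialEq)]
struct InterfaceTreeInfo {
    tree: [u8; 32],
}

#[derive(Debug, PartialEq)]
enum IndexerError {
    InvalidResponseData,
}

// Toy "decode": the real code would use bs58 decoding + Pubkey::new_from_array.
fn decode_base58_to_fixed_array(s: &str) -> Result<[u8; 32], IndexerError> {
    let bytes = s.as_bytes();
    if bytes.len() != 32 {
        return Err(IndexerError::InvalidResponseData);
    }
    let mut arr = [0u8; 32];
    arr.copy_from_slice(bytes);
    Ok(arr)
}

// One helper shared by the Account/Token/Mint arms instead of three copies.
fn decode_tree_info(t: &PhotonTreeInfo) -> Result<InterfaceTreeInfo, IndexerError> {
    Ok(InterfaceTreeInfo {
        tree: decode_base58_to_fixed_array(&t.tree)?,
    })
}
```

Each `convert_cold_context` arm would then call `decode_tree_info` (and a sibling `decode_cold_data`) rather than inlining the decoding.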
In
`@sdk-libs/photon-api/src/models/_get_multiple_account_interfaces_post_request_params.rs`:
- Around line 10-21: GetMultipleAccountInterfacesPostRequestParams currently
accepts any-length addresses in its new constructor despite the doc comment "max
100"; add a checked constructor or validation method that enforces
addresses.len() <= 100 and returns a Result (e.g., try_new or validate)
indicating an error when the limit is exceeded; update or add unit tests to
exercise the new behavior and ensure callers migrate to the fallible constructor
(keep the existing new as convenience if desired but document it doesn’t
validate).
In
`@sdk-libs/photon-api/src/models/_get_multiple_account_interfaces_post_request.rs`:
- Around line 27-33: The constructor
GetMultipleAccountInterfacesPostRequest::new currently hardcodes id ("test-id")
and jsonrpc ("2.0"); change new to accept a caller-supplied id (and optionally
jsonrpc) or add a builder-style API so callers provide id to avoid
collisions—update the GetMultipleAccountInterfacesPostRequest::new signature to
take an id: String (and/or jsonrpc: String) and set self.id = id, leaving method
and params assignment as-is, or implement a
GetMultipleAccountInterfacesPostRequest::builder() that lets callers set
id/jsonrpc before finalizing the struct.
In `@sdk-libs/photon-api/src/models/interface_result.rs`:
- Around line 12-25: The enum InterfaceResult derives Serialize and Deserialize
but the serde traits are not imported; add the serde trait imports so the
derives resolve by importing Serialize and Deserialize (e.g., add a use
serde::{Serialize, Deserialize}; near the top of the file) and keep the existing
#[derive(...)] on InterfaceResult unchanged so the Account/Token/Mint variants
compile correctly.
```rust
/// GetMultipleAccountInterfacesPostRequestParams : Request parameters for getMultipleAccountInterfaces
#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]
pub struct GetMultipleAccountInterfacesPostRequestParams {
    /// List of account addresses to look up (max 100)
    #[serde(rename = "addresses")]
    pub addresses: Vec<String>,
}

impl GetMultipleAccountInterfacesPostRequestParams {
    pub fn new(addresses: Vec<String>) -> Self {
        Self { addresses }
    }
```
Enforce the 100-address cap to prevent avoidable API errors.
The doc says “max 100,” but the constructor accepts any size. Consider adding a checked constructor (or a validate method) so callers can fail fast.
✅ Suggested addition (non-breaking helper)

```diff
 impl GetMultipleAccountInterfacesPostRequestParams {
     pub fn new(addresses: Vec<String>) -> Self {
         Self { addresses }
     }
+
+    pub fn new_checked(addresses: Vec<String>) -> Result<Self, String> {
+        if addresses.len() > 100 {
+            return Err(format!(
+                "addresses.len() must be <= 100, got {}",
+                addresses.len()
+            ));
+        }
+        Ok(Self { addresses })
+    }
 }
```

🤖 Prompt for AI Agents
In
`@sdk-libs/photon-api/src/models/_get_multiple_account_interfaces_post_request_params.rs`
around lines 10 - 21, GetMultipleAccountInterfacesPostRequestParams currently
accepts any-length addresses in its new constructor despite the doc comment "max
100"; add a checked constructor or validation method that enforces
addresses.len() <= 100 and returns a Result (e.g., try_new or validate)
indicating an error when the limit is exceeded; update or add unit tests to
exercise the new behavior and ensure callers migrate to the fallible constructor
(keep the existing new as convenience if desired but document it doesn’t
validate).
```rust
impl GetMultipleAccountInterfacesPostRequest {
    pub fn new(params: models::GetMultipleAccountInterfacesPostRequestParams) -> Self {
        Self {
            id: "test-id".to_string(),
            jsonrpc: "2.0".to_string(),
            method: "getMultipleAccountInterfaces".to_string(),
            params: Box::new(params),
```
🧹 Nitpick | 🔵 Trivial
Prefer caller-supplied JSON-RPC id to avoid collisions.
Hardcoding "test-id" makes it easy to accidentally reuse IDs across requests. Consider accepting id (and optionally jsonrpc) in new() or adding a builder-style constructor.
♻️ Suggested refactor

```diff
-    pub fn new(params: models::GetMultipleAccountInterfacesPostRequestParams) -> Self {
+    pub fn new(id: String, params: models::GetMultipleAccountInterfacesPostRequestParams) -> Self {
         Self {
-            id: "test-id".to_string(),
+            id,
             jsonrpc: "2.0".to_string(),
             method: "getMultipleAccountInterfaces".to_string(),
             params: Box::new(params),
         }
     }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 impl GetMultipleAccountInterfacesPostRequest {
-    pub fn new(params: models::GetMultipleAccountInterfacesPostRequestParams) -> Self {
+    pub fn new(id: String, params: models::GetMultipleAccountInterfacesPostRequestParams) -> Self {
         Self {
-            id: "test-id".to_string(),
+            id,
             jsonrpc: "2.0".to_string(),
             method: "getMultipleAccountInterfaces".to_string(),
             params: Box::new(params),
```
🤖 Prompt for AI Agents
In
`@sdk-libs/photon-api/src/models/_get_multiple_account_interfaces_post_request.rs`
around lines 27 - 33, The constructor
GetMultipleAccountInterfacesPostRequest::new currently hardcodes id ("test-id")
and jsonrpc ("2.0"); change new to accept a caller-supplied id (and optionally
jsonrpc) or add a builder-style API so callers provide id to avoid
collisions—update the GetMultipleAccountInterfacesPostRequest::new signature to
take an id: String (and/or jsonrpc: String) and set self.id = id, leaving method
and params assignment as-is, or implement a
GetMultipleAccountInterfacesPostRequest::builder() that lets callers set
id/jsonrpc before finalizing the struct.
Question: Can we remove … Context: The PR adds interface methods to …

Logic Review: Interface Methods

C1 (Option Return Types): Interface methods should return … Current inconsistency: … Recommended: …

C2 (Owner Semantics Mismatch): Cold account construction uses different owner semantics: … Recommendation: Document owner field semantics per function.

C3 (Silent First-Selection): Both call sites use `result.value.items.into_iter().next()`. This silently takes the first result without validation. If the indexer returns multiple matches, only the first is used with no warning. Recommendation: Add validation or log a warning if multiple results are returned.
Error Context Loss in Decode Operations

All base58/base64 decoding failures (13+ points) collapse to the generic `IndexerError::InvalidResponseData`.

Locations: …

Impact: Cannot determine which field failed or why.

Fix: Use existing contextual error variants:

```rust
// Instead of:
.map_err(|_| IndexerError::InvalidResponseData)?

// Use:
.map_err(|_| IndexerError::base58_decode_error("mint_pda", "invalid base58"))?
```

Note:
Unused Error Variants

```rust
// error.rs:23-30
#[error("Base58 decode error: {field} - {message}")]
Base58DecodeError { field: String, message: String },

#[error("Data decode error: {field} - {message}")]
DataDecodeError { field: String, message: String },

// error.rs:76-88 - helper methods
pub fn base58_decode_error(field: &str, message: &str) -> Self { ... }
pub fn decode_error(field: &str, message: &str) -> Self { ... }
```

All conversions use the generic `IndexerError::InvalidResponseData` instead.
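The contextual-error pattern the review asks for can be sketched without external crates as follows. This is a minimal stand-in (no `thiserror`, and a toy decoder replaces the real bs58 crate); the real `IndexerError` lives in `sdk-libs/client/src/indexer/error.rs`:

```rust
// Minimal stand-in for the crate's error type.
#[derive(Debug, PartialEq)]
enum IndexerError {
    InvalidResponseData,
    Base58DecodeError { field: String, message: String },
}

impl IndexerError {
    fn base58_decode_error(field: &str, message: &str) -> Self {
        IndexerError::Base58DecodeError {
            field: field.to_string(),
            message: message.to_string(),
        }
    }
}

// Toy decoder standing in for bs58::decode; NOT real base58.
fn bs58_decode(s: &str) -> Result<Vec<u8>, String> {
    if !s.is_empty() && s.chars().all(|c| c.is_ascii_alphanumeric()) {
        Ok(s.as_bytes().to_vec())
    } else {
        Err(format!("invalid base58 input {:?}", s))
    }
}

// Names the failing field instead of collapsing to the generic variant.
fn decode_pubkey_field(field: &str, input: &str) -> Result<Vec<u8>, IndexerError> {
    bs58_decode(input).map_err(|e| IndexerError::base58_decode_error(field, &e))
}
```

A failure now carries which field broke and why, which is exactly what the generic `InvalidResponseData` loses.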
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
sdk-libs/client/src/indexer/base58.rs (1)
30-34: Pre-existing: `from_bytes` will panic on incorrect slice length. This isn't part of your PR changes, but worth flagging: `copy_from_slice` panics if `bytes.len() != 32`. This creates an inconsistent API where `from_base58` returns a `Result` but `from_bytes` can panic.

🛡️ Optional defensive fix

```diff
 fn from_bytes(bytes: &[u8]) -> Result<Self, IndexerError> {
+    if bytes.len() != 32 {
+        return Err(IndexerError::base58_decode_error(
+            "bytes",
+            format!("expected 32 bytes, got {}", bytes.len()),
+        ));
+    }
     let mut arr = [0u8; 32];
     arr.copy_from_slice(bytes);
     Ok(arr)
 }
```

forester/src/compressible/mint/state.rs (1)
56-86: Avoid stale tracker entries when validation fails. If a tracked pubkey later becomes invalid (closed, reallocated, or not decompressed), the early returns leave stale state in `accounts`. Consider removing the entry on validation failures to keep tracker state consistent.

🔧 Suggested patch (apply pattern to all early-return branches)

```diff
-    if account_data.len() <= ACCOUNT_TYPE_OFFSET {
-        debug!("Mint account {} too short, skipping", pubkey);
-        return Ok(());
-    }
+    if account_data.len() <= ACCOUNT_TYPE_OFFSET {
+        debug!("Mint account {} too short, skipping", pubkey);
+        self.remove(&pubkey);
+        return Ok(());
+    }

-    if account_data[ACCOUNT_TYPE_OFFSET] != ACCOUNT_TYPE_MINT {
-        debug!("Account {} is not a Mint, skipping", pubkey);
-        return Ok(());
-    }
+    if account_data[ACCOUNT_TYPE_OFFSET] != ACCOUNT_TYPE_MINT {
+        debug!("Account {} is not a Mint, skipping", pubkey);
+        self.remove(&pubkey);
+        return Ok(());
+    }
```
🤖 Fix all issues with AI agents
In `@forester/tests/test_indexer_interface.rs`:
- Around line 125-131: Replace fixed sleeps (calls to
sleep(Duration::from_secs(...))) with deterministic readiness checks by using
the existing wait_for_indexer helper consistently: remove the sleep calls and
before each step that needs the indexer, call
wait_for_indexer(&rpc).await.expect(...). If wait_for_indexer lacks a
configurable timeout or retry interval, extend it (e.g., add a timeout or
max_retries parameter) and use that enhanced wait_for_indexer in the tests so
they no longer rely on hard sleeps and become deterministic and faster.
- Around line 554-559: The test assertion is expecting a non-existent
ColdContext::Mint variant; update the assertion to check for
ColdContext::Account instead so it matches the actual enum and the conversion
implemented in From<MintInterface> for AccountInterface (see
decompress_mint.rs:135). Specifically, change the matches on
compressed_mint_interface.account.cold to assert it is Some(ColdContext::Account
{ .. }) and keep the existing failure message ("Cold context should be the Mint
variant") or update it to reflect "Account" if you prefer.
In `@sdk-libs/client/src/indexer/types.rs`:
- Around line 1118-1128: The current logic logs a warning when derived_pda (from
find_mint_address(mint_signer_pubkey)) doesn't equal self.mint_pda, but we must
fail fast; replace the warn! path in the block that computes mint_signer,
derived_pda, and bump so that the function returns an error (or otherwise
aborts) instead of continuing with inconsistent metadata — e.g., propagate a
Result::Err or use the crate’s common error/bail mechanism with a clear message
including derived_pda, self.mint_pda, and bump to ensure callers cannot receive
a mint with mismatched PDA/bump.
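The fail-fast shape can be as simple as the sketch below. The function name and the `String` error are illustrative; the real code compares `Pubkey`s derived via `find_mint_address` and would use the crate's error type:

```rust
// Abort conversion on mismatch instead of logging a warning and continuing
// with inconsistent metadata.
fn verify_mint_pda(derived_pda: [u8; 32], stored_pda: [u8; 32], bump: u8) -> Result<(), String> {
    if derived_pda != stored_pda {
        return Err(format!(
            "mint PDA mismatch: derived {:02x?}.. != stored {:02x?}.. (bump {})",
            &derived_pda[..4],
            &stored_pda[..4],
            bump
        ));
    }
    Ok(())
}
```

Callers then propagate the `Err` with `?`, so no path can hand out a mint whose PDA and bump disagree.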
In `@sdk-libs/client/src/interface/decompress_mint.rs`:
- Around line 138-142: The conversion for MintState::None should not fabricate a
valid AccountInterface via Account::default(); update the mapping in
decompress_mint.rs so that when matching MintState::None you propagate absence
(e.g., set account to None by changing the Account field on the target type to
Option<AccountInterface>) instead of Account::default(), and adjust the related
types/From impls and any callers of the struct (e.g., the From<MintState> impl
and any uses of the resulting MintInterface/MintStruct) to handle
Option<AccountInterface>; alternatively, if changing the type is unacceptable,
add an explicit documented sentinel (e.g., account = None or account_is_missing
flag) and ensure callers are updated to check that sentinel rather than relying
on lamports == 0.
- Around line 102-145: The Cold mint arm in the From<MintInterface> for
AccountInterface impl is constructing ColdContext::Account(compressed) but tests
expect ColdContext::Mint { .. }; change the MintState::Cold branch to build
ColdContext::Mint and include the previously discarded mint_data (do not use
mint_data: _), populating the ColdContext::Mint fields from compressed and
mint_data so AccountInterface::cold holds ColdContext::Mint and account remains
constructed from compressed.lamports/data/owner as before.
In `@sdk-libs/client/src/rpc/client.rs`:
- Around line 440-559: convert_account_interface currently hardcodes
TreeInfo.queue = tree_info.tree and TreeType::StateV1 when building
CompressedAccount in all cold branches; change it to take the real values from
the indexer response (or map them) instead: use tree_info.queue (not
tree_info.tree), and use tree_info.tree_type (or derive via a mapping function)
when populating TreeInfo in the Account, Token and Mint branches inside
convert_account_interface so v2/address trees keep correct queue and TreeType.
- Around line 562-624: In convert_token_account_interface, the owner_override
passed to TokenAccountInterface::cold is wrong (you're passing
indexer_tai.account.key — the token account address); instead set owner_override
to the token's owner (the wallet owner for ATAs or the PDA for program-owned
accounts) by using the correct owner field from the indexer payload (e.g., the
owner from indexer_tai.token or fall back to indexer_tai.account.account.owner
as appropriate) and update the TokenAccountInterface::cold call accordingly;
also fix convert_account_interface where unwrap_or_default() silently discards
token data deserialization errors—replace that unwrap_or_default() with proper
Result/error propagation so deserialization failures return an RpcError rather
than a default value.
In `@sdk-libs/program-test/src/program_test/rpc.rs`:
- Around line 548-557: The current code returns Some(MintInterface { state:
MintState::None }) for a not-found mint which is inconsistent with
get_account_interface/get_token_account_interface; update the RPC handler that
constructs the Response to return value: None when the mint isn't found instead
of Some(MintInterface { state: MintState::None }), so callers receive
Result<Option<...>, RpcError> with None = not found (adjust any local variables
like address_tree/compressed_address usage accordingly and remove the
MintState::None construction).
- Around line 560-577: The current implementation of
get_multiple_account_interfaces iterates and calls get_account_interface per
address causing an N+1 query pattern; update it to use the Photon indexer's
batch endpoint by invoking get_multiple_account_interfaces_post (or equivalent
batch RPC) instead of looping so all addresses are fetched in one request;
modify the get_multiple_account_interfaces function to call the batch method,
handle the returned Vec<Option<AccountInterface>> and construct the Response
with Context { slot } as before, and keep fallback to the per-address loop only
if the batch endpoint is not available.
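The batch-first-with-fallback control flow can be sketched generically. The closure-based types here are toy stand-ins (a `batch` returning `None` models an indexer without the batch endpoint); the real code would call `get_multiple_account_interfaces_post` and the per-address method:

```rust
// One round-trip via the batch endpoint when available; the N+1 per-address
// loop survives only as a fallback.
fn get_multiple(
    batch: impl Fn(&[u32]) -> Option<Vec<Option<u64>>>,
    single: impl Fn(u32) -> Option<u64>,
    addresses: &[u32],
) -> Vec<Option<u64>> {
    if let Some(results) = batch(addresses) {
        return results;
    }
    addresses.iter().map(|&a| single(a)).collect()
}
```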
Actionable comments posted: 6
🤖 Fix all issues with AI agents
In `@sdk-libs/program-test/src/indexer/test_indexer.rs`:
- Around line 1403-1424: The current find_compressed_account_by_onchain_pubkey
and find_multiple_compressed_accounts_by_onchain_pubkeys perform O(n) linear
scans over compressed_accounts; switch them to use the onchain_pubkey_index
HashMap for O(1) lookups by retrieving the stored index
(onchain_pubkey_index.get(onchain_pubkey) -> idx) and then returning
compressed_accounts.get(idx), and update
find_multiple_compressed_accounts_by_onchain_pubkeys to map each key through the
new lookup; alternatively, if maintaining onchain_pubkey_index is undesirable
for tests, remove the onchain_pubkey_index field and its usage and add a brief
comment documenting that linear search is acceptable for test workloads.
- Around line 110-112: The onchain_pubkey_index HashMap is never updated; either
remove it or maintain it on inserts/removals: when adding a compressed account
in process_v1_compressed_account and add_event_and_compressed_accounts, insert
mapping from the account's onchain_pubkey ([u8;32]) to its index (usize) into
onchain_pubkey_index; when removing/replacing compressed_accounts update or
remove the corresponding key; then change
find_compressed_account_by_onchain_pubkey to look up the index in
onchain_pubkey_index and return compressed_accounts[index] (fall back to a safe
linear scan if missing) and add/adjust tests to assert index consistency.
- Around line 1427-1438: find_token_account_by_onchain_pubkey currently does a
linear scan over token_compressed_accounts using
Self::extract_onchain_pubkey_from_data; either add an indexed lookup or document
the linear-scan choice: to index, maintain a HashMap<[u8;32],
TokenDataWithMerkleContext> (or map to indexes) alongside
token_compressed_accounts and populate it where compressed token accounts are
inserted/updated, then change find_token_account_by_onchain_pubkey to query that
map using the extracted onchain pubkey; alternatively, add a short doc comment
on the find_token_account_by_onchain_pubkey function stating the linear search
is intentional for test simplicity so reviewers know it’s not an omission.
- Around line 1383-1384: The onchain_pubkey_index HashMap is created empty in
new() but never maintained when accounts change; update
process_v1_compressed_account to insert mapping entries into
onchain_pubkey_index whenever an account is added/updated (use the account's
onchain pubkey as key and relevant interface lookup info as value) and remove
the corresponding entry when an account is nullified or deleted, ensuring any
branch that mutates accounts (updates, overwrites, or clears) performs the same
insert/remove bookkeeping so the index remains in sync with the accounts
vector/registry.
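The index bookkeeping the prompts describe can be sketched with simplified types (a `(pubkey, payload)` tuple stands in for the real compressed-account record; the struct and method names are illustrative, not TestIndexer's API):

```rust
use std::collections::HashMap;

// Keep the index map in sync on every insert/remove so lookups are O(1).
struct Tracker {
    compressed_accounts: Vec<([u8; 32], u64)>, // (onchain_pubkey, payload)
    onchain_pubkey_index: HashMap<[u8; 32], usize>,
}

impl Tracker {
    fn new() -> Self {
        Self {
            compressed_accounts: Vec::new(),
            onchain_pubkey_index: HashMap::new(),
        }
    }

    fn insert(&mut self, key: [u8; 32], payload: u64) {
        self.compressed_accounts.push((key, payload));
        self.onchain_pubkey_index.insert(key, self.compressed_accounts.len() - 1);
    }

    fn find(&self, key: &[u8; 32]) -> Option<&([u8; 32], u64)> {
        self.onchain_pubkey_index
            .get(key)
            .and_then(|&i| self.compressed_accounts.get(i))
    }

    // swap_remove keeps removal O(1) but the moved entry must be re-indexed.
    fn remove(&mut self, key: &[u8; 32]) {
        if let Some(i) = self.onchain_pubkey_index.remove(key) {
            self.compressed_accounts.swap_remove(i);
            if let Some(moved) = self.compressed_accounts.get(i) {
                self.onchain_pubkey_index.insert(moved.0, i);
            }
        }
    }
}
```

The re-index step in `remove` is the part that is easy to forget and the reason an unmaintained index silently returns wrong entries.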
In `@sdk-libs/program-test/src/program_test/rpc.rs`:
- Around line 458-463: The comment highlights confusing owner semantics in
TokenAccountInterface::cold construction where owner is set to *address with a
misleading comment; update the cold-path to use the actual token account owner
(token_acc.token_data.owner) when available, or clearly document the differing
semantics between get_account_interface and get_token_account_interface so
callers aren't misled; locate the TokenAccountInterface::cold invocation (using
compressed_token and *address) and replace the owner argument with the token
owner field or add a concise comment explaining why *address is used and how
owner differs across get_account_interface/get_token_account_interface.
- Around line 396-397: The conversion error raised by the .try_into() call is
wrapped into a generic RpcError::CustomError losing field context; change the
map_err to include the failing field and use the contextual
IndexerError::DataDecodeError (or construct a message like
format!("DataDecodeError on <field_name>: {:?}", e)) and convert that into
RpcError (e.g., IndexerError::DataDecodeError { field: "<field_name>".into(),
source: Box::new(e) } then .into() or format into RpcError::CustomError) so the
error identifies which field failed—update the .try_into().map_err(|e|
RpcError::CustomError(...)) to include the field name and IndexerError variant
instead of the current generic message.
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In `@sdk-libs/client/src/indexer/types.rs`:
- Around line 434-439: The MissingResult error messages lack the tree
identifier; update the IndexerError::MissingResult construction so the message
includes the missing tree id (e.g., the key from value.merkle_trees[i]) when
QUEUE_TREE_MAPPING.get(...) returns None — do this for the occurrence using
QUEUE_TREE_MAPPING and value.merkle_trees[i] (and the other similar sites
flagged in the review), by formatting the message to include the tree id string
so logs show which tree key failed.
In `@sdk-libs/client/src/rpc/client.rs`:
- Around line 493-539: The code builds a cold token AccountInterface using the
program owner (indexer_ai.account.owner) which is incorrect; change the owner
argument passed to AccountInterface::cold_token to the token's wallet owner
(e.g., token_data.owner or compressed_token.token.owner) when creating the
AccountInterface in the IndexerColdContext::Token branch so parsed.owner
reflects the actual token owner; update the call to
AccountInterface::cold_token(indexer_ai.key, compressed_token, /* use token
owner here instead of indexer_ai.account.owner */) and ensure the owner value's
type matches the expected parameter.
- Around line 1317-1324: The code currently takes
resp.value.items.into_iter().next() and uses *owner as the
TokenAccountInterface.key which silently hides multiple matches and uses the
wrong key; change this to explicitly check resp.value.items length and error if
there are 0 or >1 matches, and when there is exactly one match extract the token
account's compressed account address (e.g., a field on token_acc like
compressed_address or similar) to use as the TokenAccountInterface key instead
of *owner, returning an error if that compressed address is missing; update the
call site that uses TokenAccountInterface::cold to accept/propagate a Result so
callers handle the new error case and keep LIGHT_TOKEN_PROGRAM_ID usage
unchanged.
In `@sdk-libs/program-test/src/indexer/test_indexer.rs`:
- Around line 1402-1437: Add a duplicate-match guard to the single-lookups so we
don't silently return the first hit: in
find_compressed_account_by_onchain_pubkey and
find_token_account_by_onchain_pubkey, after finding the first matching entry
(using the existing predicate that calls
Self::extract_onchain_pubkey_from_data), check whether a second match exists
(e.g., iterate with filter using the same predicate and take(2) to detect >1
without allocating) and emit a debug_assert! or a warning log if a duplicate is
found; keep the original return value but ensure the duplicate detection uses
the same helper (Self::extract_onchain_pubkey_from_data) so behavior stays
consistent.
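The allocation-free duplicate guard via `take(2)` can be sketched as a small helper (the function name is illustrative; returning a flag instead of `debug_assert!` keeps the ambiguity observable to callers and testable):

```rust
// First match plus a flag telling the caller whether the match was ambiguous;
// take(2) detects a duplicate without scanning the rest or allocating.
fn find_unique<'a, T>(items: &'a [T], pred: impl Fn(&T) -> bool) -> (Option<&'a T>, bool) {
    let mut matches = items.iter().filter(|&x| pred(x)).take(2);
    let first = matches.next();
    let has_duplicate = matches.next().is_some();
    (first, has_duplicate)
}
```

In the test indexer the caller would log a warning (or `debug_assert!`) when the flag is set, while still returning the first match.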
In `@sdk-libs/program-test/src/program_test/rpc.rs`:
- Around line 496-505: The current code in rpc.rs blindly uses
items.into_iter().next() (e.g., when building the Response with
TokenAccountInterface::cold) which silently picks an arbitrary match when the
indexer returns multiple entries; update each of the three call sites (the
blocks that currently call items.into_iter().next() and return Ok(Response { ...
TokenAccountInterface::cold ... })) to first inspect the collection: if
items.is_empty() return the existing not-found behavior, if items.len() > 1
return an explicit Err (or at minimum log/warn and return an error) indicating
ambiguous/multiple indexer results, and only when len() == 1 proceed to use the
single item to construct the Response; ensure the error path uses the same error
type returned by the surrounding function so callers can handle it.
- Around line 592-614: The get_token_account_by_owner_mint path needs an
on-chain hot lookup before the indexer fallback: replicate the pattern used in
get_token_account_interface() and get_ata_interface() by first calling
self.context.get_account(owner). If that returns an account owned by the
expected token program and its data/mint matches the requested mint, construct
and return a TokenAccountInterface::hot (using the same field order as other hot
responses). Only if the on-chain check yields no matching token account should
you query self.indexer.get_compressed_token_accounts_by_owner(...) as the
existing fallback.
- Around line 640-651: The hot-mint branch in get_mint_interface currently
treats any non-zero-lamport account as a mint; update the check after
self.context.get_account(address) so it also verifies account.owner ==
LIGHT_TOKEN_PROGRAM_ID before returning the MintInterface with MintState::Hot {
account }; mirror the ownership validation used in get_token_account_interface
and get_ata_interface to ensure only accounts owned by LIGHT_TOKEN_PROGRAM_ID
are classified as hot mints.
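The hot-first, indexer-fallback lookup order these prompts describe can be sketched generically. The closure parameters and string payloads are toy stand-ins for the on-chain RPC and the Photon indexer; only the ordering and the hot/cold tagging are the point:

```rust
// On-chain ("hot") account first; indexer ("cold") only as fallback.
fn get_interface(
    onchain: impl Fn(&str) -> Option<String>,
    indexer: impl Fn(&str) -> Option<String>,
    address: &str,
) -> Option<(String, &'static str)> {
    if let Some(account) = onchain(address) {
        return Some((account, "hot"));
    }
    indexer(address).map(|account| (account, "cold"))
}
```

In the real methods the hot branch would additionally verify `account.owner == LIGHT_TOKEN_PROGRAM_ID` before classifying the account as hot, per the review above.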
```ts
    required: false,
    default: 8080,
}),
"compressible-pda-program": Flags.string({
```

Suggested change:

```diff
-"compressible-pda-program": Flags.string({
+"light-pda-program": Flags.string({
```
- Introduced `get_mint_interface` method in the `Rpc` trait to fetch mint data from on-chain or compressed sources.
- Updated `fetch_accounts` method to utilize the new mint interface.
- Enhanced `TreeInfo` struct to include `slot_created` field for tracking account creation/compression slots.
- Modified various test files to accommodate changes in the interface methods, ensuring proper fetching of account and mint interfaces.
- Refactored tests to check for existence of interfaces after fetching.
4575ee5 to 5a0a078
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
program-libs/CLAUDE.md (1)
65-71: ⚠️ Potential issue | 🟡 Minor

Add `--all-features` to the batched-merkle-tree test command. The testing guidance for program-libs expects batched-merkle-tree to run with `--all-features`, matching the other crates listed.

✍️ Proposed doc fix

```diff
 cargo test -p light-hasher --all-features
 cargo test -p light-compressed-account --all-features
-cargo test -p light-batched-merkle-tree
+cargo test -p light-batched-merkle-tree --all-features
```

Based on learnings: “Run unit tests with cargo test using --all-features flag for hasher, compressed-account, and batched-merkle-tree crates”.

sdk-libs/event/src/event.rs (1)
sdk-libs/event/src/event.rs (1)
20-44: 🛠️ Refactor suggestion | 🟠 MajorExpand public event type docs to cover ATA owner semantics.
PublicTransactionEvent and BatchPublicTransactionEvent are public API surfaces; with the new
ata_ownersfield, please add/expand type-level docs describing required contents (input/output compressed account hashes, accounts/addresses/sequence numbers, and ATA owner mappings) so consumers can rely on explicit contracts.As per coding guidelines: Maintain comprehensive documentation for public types and functions including
PublicTransactionEvent,BatchPublicTransactionEvent, andevent_from_light_transactionin the event parsing module; Provide type descriptions for event types:PublicTransactionEventshould contain input/output compressed account hashes, andBatchPublicTransactionEventshould contain accounts, addresses, and sequence numbers.sdk-libs/client/src/indexer/photon_indexer.rs (1)
1173-1187: ⚠️ Potential issue | 🟠 Major

Guard against underflow when trimming proof canopy. If the proof length is less than `STATE_MERKLE_TREE_CANOPY_DEPTH`, this subtraction can panic or wrap. Add an explicit check and error out on malformed responses.

🛡️ Proposed fix

```diff
-let mut proof_vec = x.proof.clone();
-proof_vec.truncate(proof_vec.len() - STATE_MERKLE_TREE_CANOPY_DEPTH);
+let mut proof_vec = x.proof.clone();
+let new_len = proof_vec
+    .len()
+    .checked_sub(STATE_MERKLE_TREE_CANOPY_DEPTH)
+    .ok_or_else(|| IndexerError::InvalidResponseData(
+        "proof shorter than canopy depth".to_string()
+    ))?;
+proof_vec.truncate(new_len);
```

forester/tests/test_compressible_mint.rs (1)
325-345: 🧹 Nitpick | 🔵 Trivial

Prefer reusing `build_expected_mint` here to keep expected-state logic consistent. This avoids drift between tests and the helper.

♻️ Suggested refactor

```diff
-    let expected_mint = Mint {
-        base: BaseMint {
-            mint_authority: Some(payer.pubkey().to_bytes().into()),
-            supply: 0,
-            decimals: 9,
-            is_initialized: true,
-            freeze_authority: None,
-        },
-        metadata: MintMetadata {
-            version: metadata_version,
-            mint_decompressed: true,
-            mint: mint_pda.to_bytes().into(),
-            mint_signer: mint_seed.pubkey().to_bytes(),
-            bump,
-        },
-        reserved: [0u8; 16],
-        account_type: ACCOUNT_TYPE_MINT,
-        compression,
-        extensions: None,
-    };
+    let expected_mint = build_expected_mint(
+        &payer.pubkey(),
+        9,
+        &mint_pda,
+        &mint_seed.pubkey().to_bytes(),
+        bump,
+        metadata_version,
+        compression,
+    );
```
🤖 Fix all issues with AI agents
In
`@sdk-libs/photon-api/src/models/_get_multiple_account_interfaces_post_request.rs`:
- Around line 10-12: The derives Serialize and Deserialize are not in scope; add
the serde trait imports so the derive macros compile by inserting a use
serde::{Serialize, Deserialize}; near the top of the file (alongside use
crate::models) so the #[derive(Serialize, Deserialize)] on the struct is
resolved (ensure the file's existing #[derive(Clone, Default, Debug, PartialEq,
Serialize, Deserialize)] continues to work).
use crate::models;

#[derive(Clone, Default, Debug, PartialEq, Serialize, Deserialize)]
Add missing serde imports for the derive macros.
Serialize/Deserialize are derived but not in scope, which fails compilation.
🔧 Proposed fix
use crate::models;
+use serde::{Deserialize, Serialize};

🤖 Prompt for AI Agents
In
`@sdk-libs/photon-api/src/models/_get_multiple_account_interfaces_post_request.rs`
around lines 10 - 12, The derives Serialize and Deserialize are not in scope;
add the serde trait imports so the derive macros compile by inserting a use
serde::{Serialize, Deserialize}; near the top of the file (alongside use
crate::models) so the #[derive(Serialize, Deserialize)] on the struct is
resolved (ensure the file's existing #[derive(Clone, Default, Debug, PartialEq,
Serialize, Deserialize)] continues to work).
- Renamed `get_ata_interface` to `get_associated_token_account_interface` for clarity and consistency across the codebase. - Updated all instances of the old method name in tests and implementations to the new name. - Simplified the creation of `AccountInterface` from mint interfaces by removing redundant code that manually handled compressed data. - Ensured that all tests reflect these changes, maintaining functionality while improving code readability and maintainability.
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
sdk-libs/event/src/event.rs (1)
20-35: ⚠️ Potential issue | 🟡 Minor
Add type-level docs reflecting ATA owner mappings.
With the new ata_owners field, the public struct's rustdoc should explicitly describe the event's input/output compressed account hashes and the ATA owner mapping semantics.
📝 Suggested rustdoc update

-#[derive(Debug, Clone, BorshSerialize, BorshDeserialize, Default, PartialEq)]
-pub struct PublicTransactionEvent {
+/// Public transaction event containing input/output compressed account hashes and
+/// output accounts, plus optional ATA owner mappings (output_index → wallet_owner)
+/// for compress_and_close when is_ata=true.
+#[derive(Debug, Clone, BorshSerialize, BorshDeserialize, Default, PartialEq)]
+pub struct PublicTransactionEvent {

As per coding guidelines: Maintain comprehensive documentation for public types and functions including PublicTransactionEvent, BatchPublicTransactionEvent, and event_from_light_transaction in the event parsing module.

sdk-libs/client/src/interface/decompress_mint.rs (1)
161-163: ⚠️ Potential issue | 🟠 Major
Don't silently treat missing mints as hot.
Returning Ok(vec![]) for MintState::None hides a not-found condition and makes decompression a no-op. Surface MintNotFound so callers can react deterministically.
🛠️ Suggested fix

-    let mint_data = match &mint.state {
-        MintState::Hot { .. } | MintState::None => return Ok(vec![]),
-        MintState::Cold { mint_data, .. } => mint_data,
-    };
+    let mint_data = match &mint.state {
+        MintState::Hot { .. } => return Ok(vec![]),
+        MintState::None => {
+            return Err(DecompressMintError::MintNotFound { address: mint.mint });
+        }
+        MintState::Cold { mint_data, .. } => mint_data,
+    };
@@
-    let hash = match mint.hash() {
-        Some(h) => h,
-        None => return Ok(vec![]),
-    };
+    let hash = match mint.hash() {
+        Some(h) => h,
+        None if mint.is_hot() => return Ok(vec![]),
+        None => {
+            return Err(DecompressMintError::MintNotFound { address: mint.mint });
+        }
+    };

Also applies to: 223-226
🤖 Fix all issues with AI agents
In `@forester/tests/test_indexer_interface.rs`:
- Around line 12-13: The test queries the PhotonIndexer too soon; after
initializing the validator and before the first call to
photon_indexer.get_account_interface(), call wait_for_indexer(&rpc) so the
asynchronous indexer has time to ingest transactions; locate where the validator
is created/initialized and insert a call to wait_for_indexer(&rpc) immediately
before invoking photon_indexer.get_account_interface() (ensure you pass the
existing rpc variable).
In `@sdk-libs/client/src/indexer/photon_indexer.rs`:
- Around line 1177-1179: The truncation of the Merkle canopy blindly does
proof_vec.truncate(proof_vec.len() - STATE_MERKLE_TREE_CANOPY_DEPTH) which will
underflow if x.proof is shorter than STATE_MERKLE_TREE_CANOPY_DEPTH; modify the
code around proof_vec and x.proof to check that proof_vec.len() >=
STATE_MERKLE_TREE_CANOPY_DEPTH before truncating and, if not, return a
contextual error (include the current length and the constant) from the
surrounding function (e.g., where proof_vec is created/used in
photon_indexer.rs) instead of panicking.
In `@sdk-libs/client/src/indexer/types/account.rs`:
- Around line 173-179: The MissingResult error raised when looking up
QUEUE_TREE_MAPPING for account.tree should include the missing tree key so
debugging is easier: update the error construction in the block that sets
tree_info (where QUEUE_TREE_MAPPING.get(&account.tree) is called) to include
account.tree (or its Display/Debug) in the message/context passed to
IndexerError::MissingResult so the logged/returned error identifies which tree
key was not found.
- Around line 30-67: The CompressedAccount conversion (impl
TryFrom<CompressedAccountWithMerkleContext> for CompressedAccount)
inconsistently handles missing QUEUE_TREE_MAPPING entries by warning and setting
TreeInfo.cpi_context to None, while the other conversion
(TryFrom<&photon_api::models::Account>) returns an error; make behavior
consistent by either returning an IndexerError when QUEUE_TREE_MAPPING lookup
fails or by introducing a configurable strictness flag to choose between error
vs warn; update the lookup in the TryFrom<CompressedAccountWithMerkleContext>
implementation (the block using QUEUE_TREE_MAPPING.get and producing cpi_context
for TreeInfo) to either propagate IndexerError::missing_tree_info (or similar)
on None or consult a feature/config flag to decide whether to warn and continue
or return an error, and ensure the chosen behavior matches the other conversion
path.
- Around line 63-65: The code is using sentinel values seq: None and
slot_created: u64::MAX when constructing the target type (from a
CompressedAccountWithMerkleContext conversion); instead of using u64::MAX,
retrieve the real values from the source (CompressedAccountWithMerkleContext)
and set seq = Some(source.seq) / slot_created = Some(source.slot_created) where
available, otherwise change the target field slot_created to Option<u64> and use
None, or document in a comment on the conversion function (the constructor or
impl From/try_from you are editing) that these values are unavailable; avoid
using u64::MAX as a sentinel and prefer Option or explicit 0 only if documented
and intended.
In `@sdk-libs/client/src/indexer/types/proof.rs`:
- Around line 301-306: The address-path lookup using QUEUE_TREE_MAPPING
currently returns a generic IndexerError::MissingResult with message "expected
value was None"; change it to include the missing tree key like the account-path
does: when QUEUE_TREE_MAPPING.get(&value.merkle_trees[i]) is None, construct
IndexerError::MissingResult with context "conversion" and a message that embeds
the missing key (value.merkle_trees[i].to_string() or formatted) so the error
mirrors the more specific account-path message and aids debugging.
- Line 199: The call to next.pack_output_tree_index(packed_accounts).unwrap()
(and the other unwraps) can panic if pack_output_tree_index returns None;
replace these unwraps with error propagation by converting the Option to a
Result (e.g., using .ok_or_else(|| SomeError::Variant(...))?) and return or
propagate that Result from the surrounding function, or map the None into a
descriptive error; locate the occurrences on the
next.pack_output_tree_index(...) calls (symbol: pack_output_tree_index,
variable: next, packed_accounts) and change them to use ok_or_else or equivalent
and update the surrounding function signature to return Result so failures are
reported instead of panicking.
- Around line 88-120: The factory names are confusing: new_none() actually
creates a RootIndex that uses proof-by-index, and new_some() creates one with a
fixed root; rename constructors to clearly express intent (e.g., rename new_none
-> new_proof_by_index or new_dynamic and rename new_some -> new_fixed or
new_with_root_index), update any callers to use the new names, and keep the
existing accessor names (proof_by_index, root_index) unchanged so behavior is
preserved while improving clarity in RootIndex,
new_proof_by_index/new_with_root_index, and any usages of those constructors.
In `@sdk-libs/client/src/indexer/types/queue.rs`:
- Around line 69-88: reconstruct_proof currently indexes
self.low_element_indices[address_idx] without checks and can panic; first
validate that address_idx < self.low_element_indices.len() and return a
descriptive Err(IndexerError::...) (add an IndexerError variant like
InvalidIndex/OutOfBounds if none exists), then validate the recovered leaf_index
against the given tree_height (e.g., ensure leaf_index < (1usize << tree_height)
and tree_height within a sane bound) and return Err on invalid values; keep the
existing loop but ensure any arithmetic that could underflow/overflow is guarded
(use checked ops or bounds checks) before computing sibling_pos and encoding
node indices so the function returns an IndexerError instead of panicking.
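The two guards described above (bounds-checked indexing and leaf-index validation) can be sketched generically. This is a hypothetical illustration with `String` errors standing in for an `IndexerError` variant; the function and field names mirror the prompt, not the actual `reconstruct_proof` signature:

```rust
/// Bounds-checked lookup: index via `get` and map a miss to a descriptive
/// error instead of letting `low_element_indices[address_idx]` panic.
fn low_element_at(indices: &[usize], address_idx: usize) -> Result<usize, String> {
    indices.get(address_idx).copied().ok_or_else(|| {
        format!(
            "address_idx {} out of bounds (len {})",
            address_idx,
            indices.len()
        )
    })
}

/// Validate a recovered leaf index against the tree height before it is
/// used to compute sibling positions; checked_shl also rejects absurd heights.
fn check_leaf_index(leaf_index: u64, tree_height: u32) -> Result<(), String> {
    let capacity = 1u64
        .checked_shl(tree_height)
        .ok_or_else(|| format!("tree_height {} too large", tree_height))?;
    if leaf_index < capacity {
        Ok(())
    } else {
        Err(format!(
            "leaf_index {} exceeds tree capacity {}",
            leaf_index, capacity
        ))
    }
}
```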
In `@sdk-libs/client/src/indexer/types/tree.rs`:
- Around line 34-56: The NextTreeInfo construction is duplicated between the
impl NextTreeInfo::from_api_model and the
TryFrom<&photon_api::models::TreeContextInfo> implementation; refactor by having
one implementation delegate to the other (e.g., implement TryFrom for
NextTreeInfo to call NextTreeInfo::from_api_model or make from_api_model call
TryFrom) so only a single construction path builds tree_type, tree, queue, and
cpi_context using TreeType::from,
Pubkey::new_from_array(decode_base58_to_fixed_array(...)), and
decode_base58_option_to_pubkey, propagating IndexerError as before.
In `@sdk-libs/client/src/rpc/client.rs`:
- Around line 590-592: The current match arm converts parse failures from
TokenAccountInterface::hot into a generic RpcError::CustomError with a formatted
string, losing the original error type and chain; change the error handling in
the indexer_tai.account.cold match arm so it preserves the original error (e.g.,
return a dedicated RpcError::ParseError or wrap the original error as a source
via Box<dyn Error>), i.e., propagate or map the error from
TokenAccountInterface::hot into an RpcError variant that retains the underlying
error instead of using format!("parse error: {}", e), so callers can inspect the
original error type and source.
In `@sdk-libs/event/src/parse.rs`:
- Around line 633-637: Update the public docs for event_from_light_transaction
(and ensure PublicTransactionEvent and BatchPublicTransactionEvent docs reflect
this) to explain that ata_owners is now populated from
associated_instructions.token_instruction via extract_ata_owners, stating the
condition under which ATA owner data is included and precisely how each entry in
ata_owners maps to outputs in the event (e.g., which output index or address it
corresponds to and any None/default behavior). Mention any guarantees or
ordering (if any) produced by extract_ata_owners so users can correlate
ata_owners entries with the event outputs.
In `@sdk-libs/event/tests/parse_test.rs`:
- Around line 142-255: Tests test_rnd_functional and test_rnd_failing are
non-deterministic and slow because they use ThreadRng to pick a seed and run
100_000 iterations; change both to use a fixed deterministic seed constant
(e.g., 0xDEADBEEF) passed to StdRng::seed_from_u64, reduce num_iters from
100_000 to 10_000, and optionally allow overriding the seed and iterations from
environment variables (check env and parse into seed/num_iters) so CI is
reproducible and developers can override locally; update the seed construction
around StdRng::seed_from_u64 and the num_iters variable in both functions
(test_rnd_functional and test_rnd_failing) accordingly.
use std::collections::HashMap;
🧩 Analysis chain
🏁 Script executed:
# First, let's examine the test file
cat -n forester/tests/test_indexer_interface.rs | head -100

Repository: Lightprotocol/light-protocol
Length of output: 4640
🏁 Script executed:
# Check if PhotonIndexer has get_indexer_slot method and understand its usage
rg "get_indexer_slot" forester/ -A 3 -B 1Repository: Lightprotocol/light-protocol
Length of output: 54
🏁 Script executed:
# Search for existing wait/poll patterns for indexer in forester tests
rg "wait.*indexer|poll.*indexer|sleep.*indexer" forester/ -i -A 2Repository: Lightprotocol/light-protocol
Length of output: 10267
🏁 Script executed:
# Check if there are any existing async wait patterns or slot-catching logic
rg "get_latest_blockhash|indexer.*catch|slot.*sync" forester/ -A 2 -B 1Repository: Lightprotocol/light-protocol
Length of output: 4596
🏁 Script executed:
# Look at other tests in forester to see common patterns
fd "test_.*\.rs" forester/tests/ -x head -30 {}Repository: Lightprotocol/light-protocol
Length of output: 7724
🏁 Script executed:
# Find the wait_for_indexer implementation
fd "utils.rs" forester-utils/ -o forester/ | xargs grep -l "wait_for_indexer" 2>/dev/null | head -5Repository: Lightprotocol/light-protocol
Length of output: 209
🏁 Script executed:
# Search for wait_for_indexer definition
rg "fn wait_for_indexer" forester/ -A 15Repository: Lightprotocol/light-protocol
Length of output: 54
🏁 Script executed:
# Check PhotonIndexer implementation for get_indexer_slot
rg "get_indexer_slot" --type rust -A 3 -B 1Repository: Lightprotocol/light-protocol
Length of output: 6738
🏁 Script executed:
# Look for PhotonIndexer definition and its methods
rg "impl.*PhotonIndexer|trait.*Indexer" -A 20 --type rust | head -100Repository: Lightprotocol/light-protocol
Length of output: 7542
🏁 Script executed:
# Read more of test_indexer_interface.rs to see where assertions happen and what it's testing
sed -n '200,350p' forester/tests/test_indexer_interface.rs

Repository: Lightprotocol/light-protocol
Length of output: 5679
🏁 Script executed:
# Get the full wait_for_indexer implementation
rg "pub async fn wait_for_indexer" forester-utils/ -A 30Repository: Lightprotocol/light-protocol
Length of output: 1966
🏁 Script executed:
# Look at the end of test_indexer_interface.rs to see assertions
tail -200 forester/tests/test_indexer_interface.rs

Repository: Lightprotocol/light-protocol
Length of output: 8226
🏁 Script executed:
# Search for PhotonIndexer usage in test_indexer_interface specifically
rg "photon_indexer\|get_token_account_interface\|getTokenAccountInterface" forester/tests/test_indexer_interface.rs -B 2 -A 5Repository: Lightprotocol/light-protocol
Length of output: 54
🏁 Script executed:
# Check if wait_for_indexer is imported/used in test_indexer_interface.rs
head -50 forester/tests/test_indexer_interface.rs | grep -E "use|import|wait_for"

Repository: Lightprotocol/light-protocol
Length of output: 732
🏁 Script executed:
# Look for assertion patterns that query the indexer
rg "assert|expect" forester/tests/test_indexer_interface.rs | head -20Repository: Lightprotocol/light-protocol
Length of output: 979
Add wait_for_indexer() before querying PhotonIndexer interface methods.
After creating and executing transactions, the indexer ingests asynchronously. The test immediately queries interface endpoints without waiting, which causes assertions to race with indexer catch-up. Call wait_for_indexer(&rpc) between validator initialization and the first photon_indexer.get_account_interface() call.
🛠️ Suggested fix
-use std::collections::HashMap;
+use std::collections::HashMap;
+use forester_utils::utils::wait_for_indexer;
@@
let photon_indexer = PhotonIndexer::new("http://localhost:8784".to_string(), None);
+ wait_for_indexer(&rpc).await.expect("Failed to wait for indexer");
// ============ Test 1: getAccountInterface with compressible token account (on-chain) ============
     println!("Test 1: getAccountInterface with compressible token account (on-chain)...");

🤖 Prompt for AI Agents
In `@forester/tests/test_indexer_interface.rs` around lines 12 - 13, The test
queries the PhotonIndexer too soon; after initializing the validator and before
the first call to photon_indexer.get_account_interface(), call
wait_for_indexer(&rpc) so the asynchronous indexer has time to ingest
transactions; locate where the validator is created/initialized and insert a
call to wait_for_indexer(&rpc) immediately before invoking
photon_indexer.get_account_interface() (ensure you pass the existing rpc
variable).
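The retry behavior `wait_for_indexer` provides can be illustrated with a generic, synchronous polling loop. This is a std-only sketch; the real helper is async and compares slots against the Photon indexer, so `wait_until`, its parameters, and the error type here are illustrative assumptions:

```rust
use std::{thread, time::Duration};

/// Poll a condition until it holds or the attempt budget is exhausted,
/// sleeping between attempts. Mirrors the shape of an indexer catch-up wait.
fn wait_until<F: FnMut() -> bool>(
    mut ready: F,
    attempts: u32,
    delay: Duration,
) -> Result<(), String> {
    for _ in 0..attempts {
        if ready() {
            return Ok(());
        }
        thread::sleep(delay);
    }
    Err(format!("condition not met after {} attempts", attempts))
}
```

In the test above, the closure would wrap "indexer slot >= rpc slot" instead of a local counter.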
impl TryFrom<CompressedAccountWithMerkleContext> for CompressedAccount {
    type Error = IndexerError;

    fn try_from(account: CompressedAccountWithMerkleContext) -> Result<Self, Self::Error> {
        let hash = account
            .hash()
            .map_err(|e| IndexerError::decode_error("data", e))?;
        // Breaks light-program-test
        let tree_info = QUEUE_TREE_MAPPING.get(
            &Pubkey::new_from_array(account.merkle_context.merkle_tree_pubkey.to_bytes())
                .to_string(),
        );
        let cpi_context = if let Some(tree_info) = tree_info {
            tree_info.cpi_context
        } else {
            warn!("Cpi context not found in queue tree mapping");
            None
        };
        Ok(CompressedAccount {
            address: account.compressed_account.address,
            data: account.compressed_account.data,
            hash,
            lamports: account.compressed_account.lamports,
            leaf_index: account.merkle_context.leaf_index,
            tree_info: TreeInfo {
                tree: Pubkey::new_from_array(account.merkle_context.merkle_tree_pubkey.to_bytes()),
                queue: Pubkey::new_from_array(account.merkle_context.queue_pubkey.to_bytes()),
                tree_type: account.merkle_context.tree_type,
                cpi_context,
                next_tree_info: None,
            },
            owner: Pubkey::new_from_array(account.compressed_account.owner.to_bytes()),
            prove_by_index: account.merkle_context.prove_by_index,
            seq: None,
            slot_created: u64::MAX,
        })
    }
}
Inconsistent error handling for missing QUEUE_TREE_MAPPING entries.
When the tree info lookup fails (lines 42-47), the code logs a warning and silently defaults cpi_context to None. This differs from the TryFrom<&photon_api::models::Account> implementation (lines 173-179) which returns an error for the same scenario.
This inconsistency can mask configuration issues in production. The comment "Breaks light-program-test" suggests this is a workaround rather than intentional design.
Consider either:
- Making both paths consistent (both error or both warn)
- Adding a feature flag to control strictness in test vs production
🤖 Prompt for AI Agents
In `@sdk-libs/client/src/indexer/types/account.rs` around lines 30 - 67, The
CompressedAccount conversion (impl TryFrom<CompressedAccountWithMerkleContext>
for CompressedAccount) inconsistently handles missing QUEUE_TREE_MAPPING entries
by warning and setting TreeInfo.cpi_context to None, while the other conversion
(TryFrom<&photon_api::models::Account>) returns an error; make behavior
consistent by either returning an IndexerError when QUEUE_TREE_MAPPING lookup
fails or by introducing a configurable strictness flag to choose between error
vs warn; update the lookup in the TryFrom<CompressedAccountWithMerkleContext>
implementation (the block using QUEUE_TREE_MAPPING.get and producing cpi_context
for TreeInfo) to either propagate IndexerError::missing_tree_info (or similar)
on None or consult a feature/config flag to decide whether to warn and continue
or return an error, and ensure the chosen behavior matches the other conversion
path.
            seq: None,
            slot_created: u64::MAX,
        })
🧹 Nitpick | 🔵 Trivial
Hardcoded sentinel values for seq and slot_created.
Setting seq: None and slot_created: u64::MAX loses information that may be available from the source CompressedAccountWithMerkleContext. If these values are genuinely unavailable from the source type, consider documenting this limitation. Using u64::MAX as a sentinel is unconventional—0 or an Option would be clearer.
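The difference matters in downstream comparisons: a `u64::MAX` sentinel silently participates in ordering, while `Option<u64>` forces callers to handle the unknown case. A minimal sketch with hypothetical names (`AccountMeta`, `is_created_before` are illustrations, not the crate's API):

```rust
/// With Option<u64>, "unknown" is unrepresentable as a real slot, unlike a
/// u64::MAX sentinel that would compare as the largest possible slot.
#[derive(Debug)]
struct AccountMeta {
    seq: Option<u64>,
    slot_created: Option<u64>,
}

/// None propagates instead of a sentinel accidentally winning comparisons.
fn is_created_before(meta: &AccountMeta, slot: u64) -> Option<bool> {
    meta.slot_created.map(|s| s < slot)
}
```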
🤖 Prompt for AI Agents
In `@sdk-libs/client/src/indexer/types/account.rs` around lines 63 - 65, The code
is using sentinel values seq: None and slot_created: u64::MAX when constructing
the target type (from a CompressedAccountWithMerkleContext conversion); instead
of using u64::MAX, retrieve the real values from the source
(CompressedAccountWithMerkleContext) and set seq = Some(source.seq) /
slot_created = Some(source.slot_created) where available, otherwise change the
target field slot_created to Option<u64> and use None, or document in a comment
on the conversion function (the constructor or impl From/try_from you are
editing) that these values are unavailable; avoid using u64::MAX as a sentinel
and prefer Option or explicit 0 only if documented and intended.
        let tree_info =
            QUEUE_TREE_MAPPING
                .get(&account.tree)
                .ok_or(IndexerError::MissingResult {
                    context: "conversion".into(),
                    message: "expected value was None".into(),
                })?;
Error message lacks specificity for missing tree mapping.
The error message "expected value was None" doesn't identify which tree key was missing. This makes debugging difficult when the indexer returns an unknown tree.
🔧 Suggested improvement
let tree_info =
QUEUE_TREE_MAPPING
.get(&account.tree)
.ok_or(IndexerError::MissingResult {
context: "conversion".into(),
- message: "expected value was None".into(),
+ message: format!("tree not found in QUEUE_TREE_MAPPING: {}", &account.tree),
                })?;

🤖 Prompt for AI Agents
In `@sdk-libs/client/src/indexer/types/account.rs` around lines 173 - 179, The
MissingResult error raised when looking up QUEUE_TREE_MAPPING for account.tree
should include the missing tree key so debugging is easier: update the error
construction in the block that sets tree_info (where
QUEUE_TREE_MAPPING.get(&account.tree) is called) to include account.tree (or its
Display/Debug) in the message/context passed to IndexerError::MissingResult so
the logged/returned error identifies which tree key was not found.
        match indexer_tai.account.cold {
            None => TokenAccountInterface::hot(indexer_tai.account.key, account)
                .map_err(|e| RpcError::CustomError(format!("parse error: {}", e))),
🧹 Nitpick | 🔵 Trivial
Hot token account parse error loses context.
The error transformation format!("parse error: {}", e) discards the original error type. Consider using a more specific error variant or preserving the error chain for debugging.
🔧 Suggested improvement
None => TokenAccountInterface::hot(indexer_tai.account.key, account)
- .map_err(|e| RpcError::CustomError(format!("parse error: {}", e))),
+                .map_err(|e| RpcError::CustomError(format!("TokenAccountInterface::hot parse error for key {}: {}", indexer_tai.account.key, e))),

🤖 Prompt for AI Agents
In `@sdk-libs/client/src/rpc/client.rs` around lines 590 - 592, The current match
arm converts parse failures from TokenAccountInterface::hot into a generic
RpcError::CustomError with a formatted string, losing the original error type
and chain; change the error handling in the indexer_tai.account.cold match arm
so it preserves the original error (e.g., return a dedicated
RpcError::ParseError or wrap the original error as a source via Box<dyn Error>),
i.e., propagate or map the error from TokenAccountInterface::hot into an
RpcError variant that retains the underlying error instead of using
format!("parse error: {}", e), so callers can inspect the original error type
and source.
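Preserving the error chain, rather than flattening it into a string, typically means an error variant that stores the cause and exposes it via `std::error::Error::source`. A std-only sketch; `RpcErrorSketch` is a hypothetical stand-in for the crate's `RpcError`:

```rust
use std::{error::Error, fmt};

/// A parse-error variant that keeps the underlying error as `source()`
/// instead of discarding it in a formatted string.
#[derive(Debug)]
enum RpcErrorSketch {
    Parse { key: String, source: Box<dyn Error> },
}

impl fmt::Display for RpcErrorSketch {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            RpcErrorSketch::Parse { key, .. } => write!(f, "parse error for key {}", key),
        }
    }
}

impl Error for RpcErrorSketch {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        match self {
            RpcErrorSketch::Parse { source, .. } => Some(source.as_ref()),
        }
    }
}
```

Callers can then walk `source()` to inspect the original failure, which `format!("parse error: {}", e)` makes impossible.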
        ata_owners: associated_instructions
            .token_instruction
            .as_ref()
            .map(extract_ata_owners)
            .unwrap_or_default(),
Document ATA-owner extraction in the public event parser.
Now that ata_owners is populated, update the rustdoc on event_from_light_transaction to mention when ATA owner data is included and how it maps to outputs.
📝 Suggested doc update
-/// 3. Create batched transaction events.
+/// 3. Create batched transaction events (including ATA owner mappings when a
+/// registry + token Transfer2 instruction is present).

As per coding guidelines: Maintain comprehensive documentation for public types and functions including PublicTransactionEvent, BatchPublicTransactionEvent, and event_from_light_transaction in the event parsing module.
🤖 Prompt for AI Agents
In `@sdk-libs/event/src/parse.rs` around lines 633 - 637, Update the public docs
for event_from_light_transaction (and ensure PublicTransactionEvent and
BatchPublicTransactionEvent docs reflect this) to explain that ata_owners is now
populated from associated_instructions.token_instruction via extract_ata_owners,
stating the condition under which ATA owner data is included and precisely how
each entry in ata_owners maps to outputs in the event (e.g., which output index
or address it corresponds to and any None/default behavior). Mention any
guarantees or ordering (if any) produced by extract_ata_owners so users can
correlate ata_owners entries with the event outputs.
#[test]
fn test_rnd_functional() {
    let mut thread_rng = ThreadRng::default();
    let seed = thread_rng.next_u64();
    // Keep this print so that in case the test fails
    // we can use the seed to reproduce the error.
    println!("\n\ntest seed {}\n\n", seed);
    let mut rng = StdRng::seed_from_u64(seed);
    let num_iters = 100000;
    for _ in 0..num_iters {
        let len_pre = rng.gen_range(0..6);
        let rnd_vec_pre = get_rnd_program_ids(&mut rng, len_pre, false);
        let len_post = rng.gen_range(0..6);
        let rnd_vec_post = get_rnd_program_ids(&mut rng, len_post, false);
        let num_mid = rng.gen_range(1..6);

        let program_ids = [
            rnd_vec_pre.as_slice(),
            [ProgramId::LightSystem].as_slice(),
            vec![ProgramId::SolanaSystem; num_mid].as_slice(),
            [ProgramId::AccountCompression].as_slice(),
            rnd_vec_post.as_slice(),
        ]
        .concat();
        let start_index = program_ids.len() - 1 - len_post;
        let system_index = program_ids.len() - 1 - len_post - num_mid - 1;
        let vec = find_cpi_patterns(&program_ids);
        let expected = Indices {
            system: system_index,
            cpi: vec![],
            insert_into_queues: start_index,
            found_solana_system_program_instruction: true,
            found_system: true,
            token: None,
            found_registry: false,
        };
        assert!(
            vec.contains(&expected),
            "program ids {:?} parsed events {:?} expected {:?} ",
            program_ids,
            vec,
            expected,
        );
    }

    for _ in 0..num_iters {
        let len_pre = rng.gen_range(0..6);
        let rnd_vec_pre = get_rnd_program_ids(&mut rng, len_pre, true);
        let len_post = rng.gen_range(0..6);
        let rnd_vec_post = get_rnd_program_ids(&mut rng, len_post, true);
        let num_mid = rng.gen_range(1..6);

        let program_ids = [
            rnd_vec_pre.as_slice(),
            [ProgramId::LightSystem].as_slice(),
            vec![ProgramId::SolanaSystem; num_mid].as_slice(),
            [ProgramId::AccountCompression].as_slice(),
            rnd_vec_post.as_slice(),
        ]
        .concat();
        let start_index = program_ids.len() - 1 - len_post;
        let system_index = program_ids.len() - 1 - len_post - num_mid - 1;
        let vec = find_cpi_patterns(&program_ids);
        let expected = Indices {
            system: system_index,
            cpi: vec![],
            insert_into_queues: start_index,
            found_solana_system_program_instruction: true,
            found_system: true,
            token: None,
            found_registry: false,
        };
        assert!(
            vec.iter().any(|x| x.system == expected.system
                && x.insert_into_queues == expected.insert_into_queues),
            "program ids {:?} parsed events {:?} expected {:?} ",
            program_ids,
            vec,
            expected,
        );
    }
}

#[test]
fn test_rnd_failing() {
    let mut thread_rng = ThreadRng::default();
    let seed = thread_rng.next_u64();
    // Keep this print so that in case the test fails
    // we can use the seed to reproduce the error.
    println!("\n\ntest seed {}\n\n", seed);
    let mut rng = StdRng::seed_from_u64(seed);
    let num_iters = 100000;
    for _ in 0..num_iters {
        let len = rng.gen_range(0..20);
        let mut program_ids = get_rnd_program_ids(&mut rng, len, true);
        // if any ProgramId::LightSystem is followed by ProgramId::SolanaSystem
        // overwrite ProgramId::SolanaSystem with ProgramId::Unknown
        for i in 0..program_ids.len().saturating_sub(1) {
            if matches!(program_ids[i], ProgramId::LightSystem)
                && matches!(program_ids[i + 1], ProgramId::SolanaSystem)
            {
                program_ids[i + 1] = ProgramId::Unknown;
            }
        }

        let vec = find_cpi_patterns(&program_ids);

        assert!(
            vec.is_empty(),
            "program_ids {:?} result {:?}",
            program_ids,
            vec
        );
    }
}
🧩 Analysis chain
🏁 Script executed:
find . -name "CLAUDE.md" -o -name "claude.md"Repository: Lightprotocol/light-protocol
Length of output: 718
🏁 Script executed:
cat -n sdk-libs/event/tests/parse_test.rs | sed -n '142,255p'

Repository: Lightprotocol/light-protocol
Length of output: 5005
🏁 Script executed:
# Find other randomized/property tests in the repo
rg -l "ThreadRng|StdRng|rand::|property|proptest|quickcheck" --type rs | head -20Repository: Lightprotocol/light-protocol
Length of output: 98
🏁 Script executed:
# Check if there's fuzzing test guidance in the repo
find . -name "*fuzz*" -type f | grep -E "\.(rs|md|toml)$" | head -15Repository: Lightprotocol/light-protocol
Length of output: 124
🏁 Script executed:
cat CLAUDE.md

Repository: Lightprotocol/light-protocol
Length of output: 11426
🏁 Script executed:
cat sdk-libs/macros/src/light_pdas_tests/fuzz_tests.rs

Repository: Lightprotocol/light-protocol
Length of output: 9485
Use deterministic seeds and reduce default iterations for these fuzz tests.
The current approach seeds from ThreadRng each run plus 100k iterations, making tests non-deterministic and slow. Compare with the existing fuzz_tests.rs in sdk-libs/macros, which uses fixed seeds (like 0xDEADBEEF) and 10k iterations—much better for reproducible CI. Apply to both test_rnd_functional() and test_rnd_failing():
Suggested fix (deterministic seed with env override)
- let mut thread_rng = ThreadRng::default();
- let seed = thread_rng.next_u64();
+ let seed = std::env::var("LIGHT_EVENT_FUZZ_SEED")
+ .ok()
+ .and_then(|v| v.parse().ok())
+ .unwrap_or(0xDEADBEEF_u64);
- let num_iters = 100000;
+ let num_iters = std::env::var("LIGHT_EVENT_FUZZ_ITERS")
+ .ok()
+ .and_then(|v| v.parse().ok())
+ .unwrap_or(10_000);🤖 Prompt for AI Agents
In `@sdk-libs/event/tests/parse_test.rs` around lines 142 - 255, Tests
test_rnd_functional and test_rnd_failing are non-deterministic and slow because
they use ThreadRng to pick a seed and run 100_000 iterations; change both to use
a fixed deterministic seed constant (e.g., 0xDEADBEEF) passed to
StdRng::seed_from_u64, reduce num_iters from 100_000 to 10_000, and optionally
allow overriding the seed and iterations from environment variables (check env
and parse into seed/num_iters) so CI is reproducible and developers can override
locally; update the seed construction around StdRng::seed_from_u64 and the
num_iters variable in both functions (test_rnd_functional and test_rnd_failing)
accordingly.
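The seed-resolution logic from the suggested fix can be factored into a small testable helper. A std-only sketch: `seed_from` isolates the parse-with-fallback step, and `resolve_seed` wires it to an environment variable (the variable name is the one suggested above; the helper names are illustrative):

```rust
use std::env;

/// Parse an optional override into a seed, falling back to a fixed default
/// so CI runs stay reproducible.
fn seed_from(opt: Option<String>, default: u64) -> u64 {
    opt.and_then(|v| v.parse().ok()).unwrap_or(default)
}

/// Resolve the fuzz seed from an env var, e.g. LIGHT_EVENT_FUZZ_SEED.
fn resolve_seed(var: &str, default: u64) -> u64 {
    seed_from(env::var(var).ok(), default)
}
```

The same shape works for the iteration count; an unparseable override silently falls back to the default, which keeps local typos from breaking the test run.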
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
sdk-libs/event/src/event.rs (1)
10-35: 🛠️ Refactor suggestion | 🟠 Major

Add rustdoc for the public event structs (new ATA field needs to be described).
Line 10 onward defines public event types but still lacks type-level docs. Please add rustdoc that describes what each event contains and mentions the new `ata_owners` semantics.

✍️ Suggested documentation scaffolding

```diff
+/// Batched event containing accounts, addresses, and sequence numbers
+/// derived from a single light transaction.
 #[derive(Debug, Clone, Default, PartialEq)]
 pub struct BatchPublicTransactionEvent {
@@
-#[derive(Debug, Clone, BorshSerialize, BorshDeserialize, Default, PartialEq)]
+/// Public event containing input/output compressed account hashes and
+/// related transaction metadata.
+#[derive(Debug, Clone, BorshSerialize, BorshDeserialize, Default, PartialEq)]
 pub struct PublicTransactionEvent {
@@
-    /// ATA owner info for compressed ATAs (output_index -> wallet_owner_pubkey).
+    /// ATA owner info for compressed ATAs (output_index -> wallet_owner_pubkey).
     /// Only populated for compress_and_close operations where is_ata=true.
     pub ata_owners: Vec<AssociatedTokenAccountOwnerInfo>,
 }
```

As per coding guidelines: Maintain comprehensive documentation for public types and functions including `PublicTransactionEvent`, `BatchPublicTransactionEvent`, and `event_from_light_transaction` in the event parsing module; Provide type descriptions for event types: `PublicTransactionEvent` should contain input/output compressed account hashes, and `BatchPublicTransactionEvent` should contain accounts, addresses, and sequence numbers.

sdk-libs/client/src/interface/decompress_mint.rs (1)
175-177: ⚠️ Potential issue | 🟠 Major

Avoid unchecked access to validity‑proof accounts.
`accounts[0]` will panic if the indexer returns an empty list. Use `get(0)` and surface a structured error so callers can handle malformed responses.

🛠 Proposed fix

```diff
- let account_info = &proof_result.accounts[0];
+ let account_info = proof_result.accounts.get(0).ok_or_else(|| {
+     DecompressMintError::IndexerError(crate::indexer::IndexerError::missing_result(
+         "build_decompress_mint",
+         "validity_proof.accounts[0] missing",
+     ))
+ })?;
```

```diff
- let account_info = &proof_result.accounts[0];
+ let account_info = proof_result.accounts.get(0).ok_or_else(|| {
+     DecompressMintError::IndexerError(crate::indexer::IndexerError::missing_result(
+         "decompress_mint_idempotent",
+         "validity_proof.accounts[0] missing",
+     ))
+ })?;
```

Also applies to: 327-328
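The `.get(0)`-with-structured-error pattern can be shown in isolation. The error enum below is a stand-in for the crate's `DecompressMintError`/`IndexerError`:

```rust
#[derive(Debug, PartialEq)]
enum SketchError {
    // Stand-in for IndexerError::missing_result(context, message).
    MissingResult(&'static str),
}

/// Returns the first account or a structured error instead of panicking
/// on an empty indexer response.
fn first_account(accounts: &[u32]) -> Result<&u32, SketchError> {
    accounts
        .first()
        .ok_or(SketchError::MissingResult("validity_proof.accounts[0] missing"))
}

fn main() {
    assert_eq!(first_account(&[7, 8]), Ok(&7));
    assert!(first_account(&[]).is_err());
    println!("ok");
}
```

The caller can then propagate the error with `?` rather than crashing on malformed responses.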
🤖 Fix all issues with AI agents
In `@sdk-libs/client/src/indexer/photon_indexer.rs`:
- Around line 1176-1179: The code truncates proof_vec using an unchecked
subtraction which can wrap on short proofs; replace the direct truncate call
with a checked subtraction using
proof_vec.len().checked_sub(STATE_MERKLE_TREE_CANOPY_DEPTH) and handle the None
case by failing fast (e.g. return an Err or propagate a descriptive error) so
short proofs are detected instead of silently wrapping; update the closure that
maps over x.proof (the proof_vec logic) to compute let new_len =
proof_vec.len().checked_sub(STATE_MERKLE_TREE_CANOPY_DEPTH).ok_or("proof shorter
than canopy depth")?; then call proof_vec.truncate(new_len) and propagate the
error from that mapping site.
In `@sdk-libs/client/src/indexer/types/account.rs`:
- Around line 105-130: Multiple base58 decodes in the Account -> TreeInfo
construction (owner, address, hash, merkle_context.tree, merkle_context.queue,
and cpi_context via decode_base58_option_to_pubkey) produce generic errors;
change each call to wrap the decode error with contextual information (e.g., use
a helper like base58_decode_error(field_name, source_err) or map_err to add
"base58_decode_error: <field>") so failures indicate which field failed (apply
same pattern for NextTreeInfo decoding locations around lines 151-167 as well).
In `@sdk-libs/client/src/indexer/types/proof.rs`:
- Around line 301-306: The error created when looking up a merkle tree in
QUEUE_TREE_MAPPING is misleading; update the IndexerError::MissingResult
produced in the lookup for tree_info (where you call
QUEUE_TREE_MAPPING.get(&value.merkle_trees[i])) so the message clearly states
the tree key was not found (include the tree id or mention it couldn't be found
in QUEUE_TREE_MAPPING) and keep the same context "conversion" to match existing
style; locate the lookup that assigns tree_info and replace the message
"expected value was None" with a descriptive message like "merkle tree not found
in QUEUE_TREE_MAPPING" (or include the actual key) to make debugging clearer.
In `@sdk-libs/client/src/indexer/types/queue.rs`:
- Around line 69-110: The reconstruct_proof and reconstruct_all_proofs paths
currently index directly into low_element_indices and node_hashes which can
panic on malformed data; change reconstruct_proof to validate that address_idx
is in bounds by using self.low_element_indices.get(address_idx) and return an
IndexerError::MissingResult with context "reconstruct_proof" if missing, and
likewise when you find a sibling index use self.nodes.iter().position(...) as
now but then fetch the hash with self.node_hashes.get(hash_idx) and return a
structured IndexerError if that get() returns None; ensure
reconstruct_all_proofs propagates those Result errors (keep calling
self.reconstruct_proof(i, tree_height) but avoid unwraps) and include references
to the symbols reconstruct_proof, reconstruct_all_proofs, low_element_indices,
nodes, node_hashes, and encode_node_index in your changes.
In `@sdk-libs/client/src/indexer/types/signature.rs`:
- Around line 10-19: The conversion from photon_api::models::SignatureInfo to
SignatureWithMetadata is infallible, so replace the impl
TryFrom<&photon_api::models::SignatureInfo> for SignatureWithMetadata (including
type Error and the Result return) with an impl
From<&photon_api::models::SignatureInfo> for SignatureWithMetadata; change the
fn signature to fn from(sig_info: &photon_api::models::SignatureInfo) -> Self
and return SignatureWithMetadata directly (copying block_time,
signature.clone(), and slot) instead of wrapping in Ok(...).
In `@sdk-libs/client/src/indexer/types/token.rs`:
- Around line 20-100: The two TryFrom implementations for CompressedTokenAccount
(for photon_api::models::TokenAccount and TokenAccountV2) duplicate TokenData
construction; extract that logic into a helper fn (e.g., parse_token_data) that
takes mint, owner, amount, delegate, state, and tlv and returns
Result<TokenData, IndexerError>, reusing decode_base58_to_fixed_array,
Pubkey::new_from_array, base64::decode_config,
Vec::<ExtensionStruct>::deserialize and IndexerError::decode_error inside it,
then call parse_token_data from both TryFrom::try_from implementations and keep
CompressedAccount::try_from(token_account.account.as_ref()) as-is so each impl
reduces to constructing account and calling the helper for token.
In `@sdk-libs/client/src/indexer/types/tree.rs`:
- Around line 34-56: Both from_api_model and the
TryFrom<&photon_api::models::TreeContextInfo> impl for NextTreeInfo contain
identical decoding logic; consolidate them to a single implementation to avoid
duplication. Replace one of them (e.g., keep the TryFrom impl for NextTreeInfo
as the canonical decoder) and have from_api_model delegate to that TryFrom (or
vice versa), referencing the same decoding steps (TreeType::from,
Pubkey::new_from_array with decode_base58_to_fixed_array for tree and queue, and
decode_base58_option_to_pubkey for cpi_context) so only one block of decoding
logic exists and both callers reuse it.
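The delegation shape can be sketched with simplified stand-ins for `TreeContextInfo` and `NextTreeInfo` (the real decoding steps involve base58 and `TreeType`, elided here):

```rust
struct TreeContextInfo {
    tree: String,
}

#[derive(Debug, PartialEq)]
struct NextTreeInfo {
    tree: String,
}

// Canonical decoder: the only place the decoding logic lives.
impl TryFrom<&TreeContextInfo> for NextTreeInfo {
    type Error = String;
    fn try_from(ctx: &TreeContextInfo) -> Result<Self, Self::Error> {
        if ctx.tree.is_empty() {
            return Err("empty tree".into());
        }
        Ok(NextTreeInfo { tree: ctx.tree.clone() })
    }
}

impl NextTreeInfo {
    // from_api_model now delegates instead of duplicating the decoding.
    fn from_api_model(ctx: &TreeContextInfo) -> Result<Self, String> {
        Self::try_from(ctx)
    }
}

fn main() {
    let ctx = TreeContextInfo { tree: "abc".into() };
    assert_eq!(NextTreeInfo::from_api_model(&ctx).unwrap().tree, "abc");
    println!("ok");
}
```

Any future fix to the decoding then lands in one place and both entry points pick it up.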
In `@sdk-libs/client/src/rpc/client.rs`:
- Around line 1226-1270: The hot branch must validate that ai.account is
actually a mint: check ai.account.owner equals the expected mint program owner
and attempt to deserialize ai.account.data with Mint::try_from_slice; if either
check fails treat it as missing/invalid mint (e.g., return an
RpcError::CustomError or None consistent with the function’s error handling)
instead of blindly returning MintState::Hot containing ai.account; modify the
MintState::Hot branch (where ai.account is used) to perform these owner and
deserialization checks and only construct MintState::Hot when they pass,
otherwise surface an appropriate error like "Invalid or non-mint hot account".
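The validate-before-trusting guard can be sketched with stand-in types; `parse_mint` here is a hypothetical substitute for `Mint::try_from_slice`, and the 32-byte owner check mirrors the expected-program-owner comparison:

```rust
#[derive(Debug, PartialEq)]
struct Account {
    owner: [u8; 32],
    data: Vec<u8>,
}

// Stand-in for Mint::try_from_slice: reads an 8-byte little-endian field.
fn parse_mint(data: &[u8]) -> Option<u64> {
    data.get(..8).map(|b| u64::from_le_bytes(b.try_into().unwrap()))
}

/// Only accept the hot account as a mint when both the owner matches the
/// expected program and the data deserializes; otherwise surface an error.
fn hot_mint(account: &Account, expected_owner: &[u8; 32]) -> Result<u64, String> {
    if &account.owner != expected_owner {
        return Err("Invalid or non-mint hot account".into());
    }
    parse_mint(&account.data).ok_or_else(|| "Invalid or non-mint hot account".into())
}

fn main() {
    let owner = [1u8; 32];
    let good = Account { owner, data: 42u64.to_le_bytes().to_vec() };
    assert_eq!(hot_mint(&good, &owner), Ok(42));
    let bad = Account { owner: [2u8; 32], data: vec![] };
    assert!(hot_mint(&bad, &owner).is_err());
    println!("ok");
}
```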
In `@sdk-libs/event/tests/parse_test.rs`:
- Around line 142-255: The randomized tests test_rnd_functional and
test_rnd_failing run 100_000 iterations each (num_iters) which slows CI; change
the test harness to use a much smaller default iteration count (e.g., 1_000) or
read an environment variable (e.g., TEST_RND_ITERS) to allow increasing locally,
and apply this to the num_iters variable in both tests (test_rnd_functional and
test_rnd_failing) so they use the configurable/smaller value instead of the
hard-coded 100_000. Ensure both occurrences that declare num_iters are updated
and keep the seed printing behavior unchanged.
```rust
let owner = Pubkey::new_from_array(decode_base58_to_fixed_array(&account.owner)?);
let address = account
    .address
    .as_ref()
    .map(|address| decode_base58_to_fixed_array(address))
    .transpose()?;
let hash = decode_base58_to_fixed_array(&account.hash)?;

let tree_info = TreeInfo {
    tree: Pubkey::new_from_array(decode_base58_to_fixed_array(
        &account.merkle_context.tree,
    )?),
    queue: Pubkey::new_from_array(decode_base58_to_fixed_array(
        &account.merkle_context.queue,
    )?),
    tree_type: TreeType::from(account.merkle_context.tree_type as u64),
    cpi_context: super::super::base58::decode_base58_option_to_pubkey(
        &account.merkle_context.cpi_context,
    )?,
    next_tree_info: account
        .merkle_context
        .next_tree_context
        .as_ref()
        .map(|ctx| NextTreeInfo::try_from(ctx.as_ref()))
        .transpose()?,
};
```
Add field context to base58 decode failures.
Right now decode errors report a generic “base58” field, which makes it hard to tell whether owner/address/hash/tree/queue failed. Wrap with a contextual base58_decode_error (or add a helper) so diagnostics identify the exact field.

🛠 Example refinement (apply to all base58 fields)

```diff
- let owner = Pubkey::new_from_array(decode_base58_to_fixed_array(&account.owner)?);
+ let owner = Pubkey::new_from_array(
+     decode_base58_to_fixed_array(&account.owner)
+         .map_err(|e| IndexerError::base58_decode_error("owner", e))?,
+ );
@@
- let hash = decode_base58_to_fixed_array(&account.hash)?;
+ let hash = decode_base58_to_fixed_array(&account.hash)
+     .map_err(|e| IndexerError::base58_decode_error("hash", e))?;
```

Also applies to: 151-167
🤖 Prompt for AI Agents
In `@sdk-libs/client/src/indexer/types/account.rs` around lines 105 - 130,
Multiple base58 decodes in the Account -> TreeInfo construction (owner, address,
hash, merkle_context.tree, merkle_context.queue, and cpi_context via
decode_base58_option_to_pubkey) produce generic errors; change each call to wrap
the decode error with contextual information (e.g., use a helper like
base58_decode_error(field_name, source_err) or map_err to add
"base58_decode_error: <field>") so failures indicate which field failed (apply
same pattern for NextTreeInfo decoding locations around lines 151-167 as well).
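The field-context wrapping can be shown end to end with a simplified decoder (hex instead of base58, and a plain `String` error standing in for `IndexerError`):

```rust
// Stand-in for decode_base58_to_fixed_array: decodes a 4-byte hex string.
fn decode_hex_to_fixed_array(s: &str) -> Result<[u8; 4], String> {
    let bytes = (0..s.len())
        .step_by(2)
        .map(|i| u8::from_str_radix(&s[i..i + 2], 16).map_err(|e| e.to_string()))
        .collect::<Result<Vec<u8>, _>>()?;
    bytes.try_into().map_err(|_| "wrong length".to_string())
}

// Stand-in for IndexerError::base58_decode_error(field, source):
// prefixes the underlying error with the field that failed.
fn base58_decode_error(field: &str, err: String) -> String {
    format!("base58_decode_error: {}: {}", field, err)
}

fn main() {
    // Success path: no context needed.
    assert_eq!(decode_hex_to_fixed_array("01020304").unwrap(), [1, 2, 3, 4]);
    // Failure path: the error now names the field.
    let owner = decode_hex_to_fixed_array("zz")
        .map_err(|e| base58_decode_error("owner", e));
    assert!(owner.unwrap_err().starts_with("base58_decode_error: owner"));
    println!("ok");
}
```

The same `map_err` wrapping applies unchanged to the hash, tree, queue, and cpi_context decodes.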
```rust
let tree_info = QUEUE_TREE_MAPPING.get(&value.merkle_trees[i]).ok_or(
    IndexerError::MissingResult {
        context: "conversion".into(),
        message: "expected value was None".into(),
    },
)?;
```
Error message doesn't describe the actual failure.
The error message "expected value was None" doesn't accurately describe what went wrong—the issue is that the tree wasn't found in QUEUE_TREE_MAPPING. Compare with the more descriptive error at line 271-274:
🔧 Proposed fix
```diff
 let tree_info = QUEUE_TREE_MAPPING.get(&value.merkle_trees[i]).ok_or(
     IndexerError::MissingResult {
         context: "conversion".into(),
-        message: "expected value was None".into(),
+        message: format!(
+            "address tree not found in QUEUE_TREE_MAPPING: {}",
+            &value.merkle_trees[i]
+        ),
     },
 )?;
```
)?;📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
let tree_info = QUEUE_TREE_MAPPING.get(&value.merkle_trees[i]).ok_or(
    IndexerError::MissingResult {
        context: "conversion".into(),
        message: format!(
            "address tree not found in QUEUE_TREE_MAPPING: {}",
            &value.merkle_trees[i]
        ),
    },
)?;
```
🤖 Prompt for AI Agents
In `@sdk-libs/client/src/indexer/types/proof.rs` around lines 301 - 306, The error
created when looking up a merkle tree in QUEUE_TREE_MAPPING is misleading;
update the IndexerError::MissingResult produced in the lookup for tree_info
(where you call QUEUE_TREE_MAPPING.get(&value.merkle_trees[i])) so the message
clearly states the tree key was not found (include the tree id or mention it
couldn't be found in QUEUE_TREE_MAPPING) and keep the same context "conversion"
to match existing style; locate the lookup that assigns tree_info and replace
the message "expected value was None" with a descriptive message like "merkle
tree not found in QUEUE_TREE_MAPPING" (or include the actual key) to make
debugging clearer.
```rust
#[test]
fn test_rnd_functional() {
    let mut thread_rng = ThreadRng::default();
    let seed = thread_rng.next_u64();
    // Keep this print so that in case the test fails
    // we can use the seed to reproduce the error.
    println!("\n\ntest seed {}\n\n", seed);
    let mut rng = StdRng::seed_from_u64(seed);
    let num_iters = 100000;
    for _ in 0..num_iters {
        let len_pre = rng.gen_range(0..6);
        let rnd_vec_pre = get_rnd_program_ids(&mut rng, len_pre, false);
        let len_post = rng.gen_range(0..6);
        let rnd_vec_post = get_rnd_program_ids(&mut rng, len_post, false);
        let num_mid = rng.gen_range(1..6);

        let program_ids = [
            rnd_vec_pre.as_slice(),
            [ProgramId::LightSystem].as_slice(),
            vec![ProgramId::SolanaSystem; num_mid].as_slice(),
            [ProgramId::AccountCompression].as_slice(),
            rnd_vec_post.as_slice(),
        ]
        .concat();
        let start_index = program_ids.len() - 1 - len_post;
        let system_index = program_ids.len() - 1 - len_post - num_mid - 1;
        let vec = find_cpi_patterns(&program_ids);
        let expected = Indices {
            system: system_index,
            cpi: vec![],
            insert_into_queues: start_index,
            found_solana_system_program_instruction: true,
            found_system: true,
            token: None,
            found_registry: false,
        };
        assert!(
            vec.contains(&expected),
            "program ids {:?} parsed events {:?} expected {:?} ",
            program_ids,
            vec,
            expected,
        );
    }

    for _ in 0..num_iters {
        let len_pre = rng.gen_range(0..6);
        let rnd_vec_pre = get_rnd_program_ids(&mut rng, len_pre, true);
        let len_post = rng.gen_range(0..6);
        let rnd_vec_post = get_rnd_program_ids(&mut rng, len_post, true);
        let num_mid = rng.gen_range(1..6);

        let program_ids = [
            rnd_vec_pre.as_slice(),
            [ProgramId::LightSystem].as_slice(),
            vec![ProgramId::SolanaSystem; num_mid].as_slice(),
            [ProgramId::AccountCompression].as_slice(),
            rnd_vec_post.as_slice(),
        ]
        .concat();
        let start_index = program_ids.len() - 1 - len_post;
        let system_index = program_ids.len() - 1 - len_post - num_mid - 1;
        let vec = find_cpi_patterns(&program_ids);
        let expected = Indices {
            system: system_index,
            cpi: vec![],
            insert_into_queues: start_index,
            found_solana_system_program_instruction: true,
            found_system: true,
            token: None,
            found_registry: false,
        };
        assert!(
            vec.iter().any(|x| x.system == expected.system
                && x.insert_into_queues == expected.insert_into_queues),
            "program ids {:?} parsed events {:?} expected {:?} ",
            program_ids,
            vec,
            expected,
        );
    }
}

#[test]
fn test_rnd_failing() {
    let mut thread_rng = ThreadRng::default();
    let seed = thread_rng.next_u64();
    // Keep this print so that in case the test fails
    // we can use the seed to reproduce the error.
    println!("\n\ntest seed {}\n\n", seed);
    let mut rng = StdRng::seed_from_u64(seed);
    let num_iters = 100000;
    for _ in 0..num_iters {
        let len = rng.gen_range(0..20);
        let mut program_ids = get_rnd_program_ids(&mut rng, len, true);
        // if any ProgramId::LightSystem is followed by ProgramId::SolanaSystem
        // overwrite ProgramId::SolanaSystem with ProgramId::Unknown
        for i in 0..program_ids.len().saturating_sub(1) {
            if matches!(program_ids[i], ProgramId::LightSystem)
                && matches!(program_ids[i + 1], ProgramId::SolanaSystem)
            {
                program_ids[i + 1] = ProgramId::Unknown;
            }
        }

        let vec = find_cpi_patterns(&program_ids);

        assert!(
            vec.is_empty(),
            "program_ids {:?} result {:?}",
            program_ids,
            vec
        );
    }
}
```
Reduce randomized test iterations to keep CI fast and stable.
Line 150 and Line 233 each run 100k iterations; this doubles runtime for CI and local runs with little added value. Consider lowering the count or gating it behind an env var.
⚡ Suggested change

```diff
- let num_iters = 100000;
+ let num_iters = 10_000;
@@
- let num_iters = 100000;
+ let num_iters = 10_000;
```

🤖 Prompt for AI Agents
In `@sdk-libs/event/tests/parse_test.rs` around lines 142 - 255, The randomized
tests test_rnd_functional and test_rnd_failing run 100_000 iterations each
(num_iters) which slows CI; change the test harness to use a much smaller
default iteration count (e.g., 1_000) or read an environment variable (e.g.,
TEST_RND_ITERS) to allow increasing locally, and apply this to the num_iters
variable in both tests (test_rnd_functional and test_rnd_failing) so they use
the configurable/smaller value instead of the hard-coded 100_000. Ensure both
occurrences that declare num_iters are updated and keep the seed printing
behavior unchanged.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In `@sdk-libs/client/src/indexer/photon_indexer.rs`:
- Around line 1342-1353: After truncating proof_vec by
ADDRESS_TREE_CANOPY_DEPTH, validate that proof_vec.len() == 16 before calling
proof_arr.copy_from_slice to avoid a panic on malformed external input; if the
length is not 16 return an IndexerError::InvalidParameters with a descriptive
message (mentioning the observed length and expected 16) instead of calling
copy_from_slice on proof_arr in photon_indexer.rs.
In `@sdk-libs/client/src/indexer/types/token.rs`:
- Around line 67-99: The conversion impl for
Into<Vec<light_token::compat::TokenDataWithMerkleContext>> unnecessarily clones
token_account.account; update the iterator closure to consume/move the
CompressedTokenAccount fields instead of cloning (e.g. destructure or take
token_account.account directly) and pass the owned account into
CompressedAccountWithMerkleContext::from so no .clone() is needed; keep using
CompressedTokenAccount, token_account.token, and
CompressedAccountWithMerkleContext::from as the referenced symbols.
```rust
const ADDRESS_TREE_CANOPY_DEPTH: usize = 10;
if proof_vec.len() < ADDRESS_TREE_CANOPY_DEPTH {
    return Err(IndexerError::InvalidParameters(format!(
        "Address proof length ({}) is less than canopy depth ({})",
        proof_vec.len(),
        ADDRESS_TREE_CANOPY_DEPTH,
    )));
}
proof_vec.truncate(proof_vec.len() - ADDRESS_TREE_CANOPY_DEPTH);
let mut proof_arr = [[0u8; 32]; 16];
proof_arr.copy_from_slice(&proof_vec);
```
|
Validate proof length after truncation to avoid panics. copy_from_slice will panic unless the post‑truncation length is exactly 16; this is external data, so return a structured error instead.
🛠️ Suggested fix
```diff
 const ADDRESS_TREE_CANOPY_DEPTH: usize = 10;
+const ADDRESS_PROOF_LEN: usize = 16;
 if proof_vec.len() < ADDRESS_TREE_CANOPY_DEPTH {
     return Err(IndexerError::InvalidParameters(format!(
         "Address proof length ({}) is less than canopy depth ({})",
         proof_vec.len(),
         ADDRESS_TREE_CANOPY_DEPTH,
     )));
 }
 proof_vec.truncate(proof_vec.len() - ADDRESS_TREE_CANOPY_DEPTH);
-let mut proof_arr = [[0u8; 32]; 16];
+if proof_vec.len() != ADDRESS_PROOF_LEN {
+    return Err(IndexerError::InvalidParameters(format!(
+        "Address proof length after canopy truncation ({}) != expected ({})",
+        proof_vec.len(),
+        ADDRESS_PROOF_LEN,
+    )));
+}
+let mut proof_arr = [[0u8; 32]; ADDRESS_PROOF_LEN];
 proof_arr.copy_from_slice(&proof_vec);
```

🤖 Prompt for AI Agents
In `@sdk-libs/client/src/indexer/photon_indexer.rs` around lines 1342 - 1353,
After truncating proof_vec by ADDRESS_TREE_CANOPY_DEPTH, validate that
proof_vec.len() == 16 before calling proof_arr.copy_from_slice to avoid a panic
on malformed external input; if the length is not 16 return an
IndexerError::InvalidParameters with a descriptive message (mentioning the
observed length and expected 16) instead of calling copy_from_slice on proof_arr
in photon_indexer.rs.
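The full guard can be sketched in isolation; the `String` error stands in for `IndexerError::InvalidParameters`, and the checked subtraction also covers the too-short case:

```rust
const ADDRESS_TREE_CANOPY_DEPTH: usize = 10;
const ADDRESS_PROOF_LEN: usize = 16;

/// Truncates the canopy from a proof and converts it to a fixed array,
/// returning errors instead of panicking on malformed indexer data.
fn to_proof_array(mut proof_vec: Vec<[u8; 32]>) -> Result<[[u8; 32]; 16], String> {
    // checked_sub rejects proofs shorter than the canopy depth.
    let new_len = proof_vec
        .len()
        .checked_sub(ADDRESS_TREE_CANOPY_DEPTH)
        .ok_or_else(|| "proof shorter than canopy depth".to_string())?;
    proof_vec.truncate(new_len);
    // copy_from_slice panics on any length mismatch, so validate first.
    if proof_vec.len() != ADDRESS_PROOF_LEN {
        return Err(format!(
            "proof length after truncation ({}) != expected ({})",
            proof_vec.len(),
            ADDRESS_PROOF_LEN
        ));
    }
    let mut proof_arr = [[0u8; 32]; ADDRESS_PROOF_LEN];
    proof_arr.copy_from_slice(&proof_vec);
    Ok(proof_arr)
}

fn main() {
    assert!(to_proof_array(vec![[1u8; 32]; 26]).is_ok()); // 26 - 10 == 16
    assert!(to_proof_array(vec![[1u8; 32]; 20]).is_err()); // 20 - 10 != 16
    assert!(to_proof_array(vec![[1u8; 32]; 4]).is_err()); // shorter than canopy
    println!("ok");
}
```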
```rust
#[allow(clippy::from_over_into)]
impl Into<light_token::compat::TokenDataWithMerkleContext> for CompressedTokenAccount {
    fn into(self) -> light_token::compat::TokenDataWithMerkleContext {
        let compressed_account = CompressedAccountWithMerkleContext::from(self.account);

        light_token::compat::TokenDataWithMerkleContext {
            token_data: self.token,
            compressed_account,
        }
    }
}

#[allow(clippy::from_over_into)]
impl Into<Vec<light_token::compat::TokenDataWithMerkleContext>>
    for super::super::response::Response<
        super::super::response::ItemsWithCursor<CompressedTokenAccount>,
    >
{
    fn into(self) -> Vec<light_token::compat::TokenDataWithMerkleContext> {
        self.value
            .items
            .into_iter()
            .map(
                |token_account| light_token::compat::TokenDataWithMerkleContext {
                    token_data: token_account.token,
                    compressed_account: CompressedAccountWithMerkleContext::from(
                        token_account.account.clone(),
                    ),
                },
            )
            .collect::<Vec<light_token::compat::TokenDataWithMerkleContext>>()
    }
}
```
🧹 Nitpick | 🔵 Trivial
Avoid cloning account in the conversion loop. You own each CompressedTokenAccount in the iterator, so move the fields to skip an extra clone.
♻️ Suggested refactor

```diff
-            .map(
-                |token_account| light_token::compat::TokenDataWithMerkleContext {
-                    token_data: token_account.token,
-                    compressed_account: CompressedAccountWithMerkleContext::from(
-                        token_account.account.clone(),
-                    ),
-                },
-            )
+            .map(|CompressedTokenAccount { token, account }| {
+                light_token::compat::TokenDataWithMerkleContext {
+                    token_data: token,
+                    compressed_account: CompressedAccountWithMerkleContext::from(account),
+                }
+            })
```

🤖 Prompt for AI Agents
In `@sdk-libs/client/src/indexer/types/token.rs` around lines 67 - 99, The
conversion impl for Into<Vec<light_token::compat::TokenDataWithMerkleContext>>
unnecessarily clones token_account.account; update the iterator closure to
consume/move the CompressedTokenAccount fields instead of cloning (e.g.
destructure or take token_account.account directly) and pass the owned account
into CompressedAccountWithMerkleContext::from so no .clone() is needed; keep
using CompressedTokenAccount, token_account.token, and
CompressedAccountWithMerkleContext::from as the referenced symbols.
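The move-instead-of-clone refactor can be shown with simplified stand-ins for `CompressedTokenAccount` and the target type (field types reduced for illustration):

```rust
#[derive(Debug, PartialEq)]
struct TokenData(u64);
#[derive(Debug, PartialEq)]
struct AccountData(Vec<u8>);

struct CompressedTokenAccount {
    token: TokenData,
    account: AccountData,
}

#[derive(Debug, PartialEq)]
struct TokenDataWithMerkleContext {
    token_data: TokenData,
    compressed_account: AccountData,
}

fn convert(items: Vec<CompressedTokenAccount>) -> Vec<TokenDataWithMerkleContext> {
    items
        .into_iter()
        // Destructuring the owned item moves both fields,
        // so no clone of `account` is needed.
        .map(|CompressedTokenAccount { token, account }| TokenDataWithMerkleContext {
            token_data: token,
            compressed_account: account,
        })
        .collect()
}

fn main() {
    let out = convert(vec![CompressedTokenAccount {
        token: TokenData(1),
        account: AccountData(vec![2]),
    }]);
    assert_eq!(out[0].token_data, TokenData(1));
    println!("ok");
}
```

Because `into_iter()` yields owned values, the closure can consume them; the clippy lint `redundant_clone` would also flag the original clone.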
Summary by CodeRabbit
New Features
Improvements
Bug Fixes
Tests