chore: client set default top up to 0, fix crate reexports #2182

Merged
ananas-block merged 12 commits into main from jorrit/fix-client on Jan 20, 2026
Conversation


@ananas-block ananas-block commented Jan 19, 2026

  1. Documentation: Added section on getting Anchor program
    instruction discriminators to CLAUDE.md
  2. Dependency Management: Updated anchor-spl to use Light
    Protocol's fork (git dependency instead of registry crate) across
    all packages
  3. Cargo.lock Consolidation: Removed duplicate anchor-spl entry
    (0.31.1), keeping single entry pointing to Light Protocol fork
  4. SDK Re-exports: Added new module structure in light-sdk for
    macro-generated code:
    - compressed_account module re-exporting types from
    light-compressed-account
    - sdk_types module re-exporting types from light-sdk-types
    - hasher module as pub extern crate for downstream path
    compatibility
  5. Macro Code Updates: Updated import paths in sdk-libs/macros to
    use new SDK re-exports instead of direct crate references
  6. Default Configuration: Changed DEFAULT_WRITE_TOP_UP from 766 to
    0 in decompress_mint interface
  7. ForesterConfig Move: Relocated ForesterConfig import from
    light-registry to light-program-test in e2e_test_env and
    setup_forester
  8. Token SDK Features: Enhanced light-token-sdk with new features
    and dependencies:
    - Added anchor-discriminator as default feature
    - Added idl-build feature for client-side tooling
    - Added optional anchor-spl and light-sdk-macros dependencies
    - Changed light-sdk dependency to include cpi-context feature
  9. New Anchor Integration Module: Created
    sdk-libs/token-sdk/src/anchor.rs providing single import point for
    Anchor programs with Light Protocol re-exports
  10. SDK Dependency Addition: Added solana-program to
    light-account-checks dependencies
  11. Dependency Addition: Added light-anchor-spl dependency
  12. Feature Defaults: Made v2 the default feature in SDK crates
  13. Crate Rename: Renamed light-token-sdk to light-token
  14. Module Rename: Renamed light_token::token to light_token::instruction

Summary by CodeRabbit

  • Bug Fixes

    • Default top-up for mint decompression changed from 766 to 0 when unspecified.
  • Documentation

    • Added "Getting Anchor Program Instruction Discriminators" guide with examples and run instructions.
  • New Features

    • Unconditionally available local program-test registry/tooling to simplify end-to-end testing.
    • New public SDK re-exports and namespace reorganizations to simplify downstream imports.
  • Chores

    • Package/module renames and import path harmonization across the SDK (token crate renaming and related adjustments).



coderabbitai bot commented Jan 19, 2026

Caution

Review failed

The head commit changed during the review from c452ecf to 3fc743b.

📝 Walkthrough

DEFAULT_WRITE_TOP_UP constant in DecompressMint changed from 766 to 0. Large additions and refactors to program-test: new local registry_sdk, ungated devenv paths, instruction builders, many crate-path canonicalizations and SDK re-exports for testing and macro-generated code.

Changes

Cohort / File(s) Change Summary
Decompress Mint Configuration
sdk-libs/client/src/interface/decompress_mint.rs
pub const DEFAULT_WRITE_TOP_UP: u32 changed from 766 to 0.
Program-test: local registry SDK (new/large)
sdk-libs/program-test/src/registry_sdk.rs
Added a self-contained registry SDK with program ID, PDAs, discriminators, account structs, (de)serializers, instruction builders (claim, compress_and_close, register_forester/epoch, finalize), and test setup helpers.
Program-test: forester & compressible flows
sdk-libs/program-test/src/forester/*, sdk-libs/program-test/src/compressible.rs, sdk-libs/program-test/src/utils/register_test_forester.rs
Replaced Anchor/manual account assembly with registry_sdk builders, removed/adjusted devenv gating, switched to REGISTRY_PROGRAM_ID/LIGHT_TOKEN_PROGRAM_ID, simplified instruction construction and account handling.
Program-test: exports & harness changes
sdk-libs/program-test/src/lib.rs, sdk-libs/program-test/src/program_test/*, sdk-libs/program-test/src/program_test/test_rpc.rs, sdk-libs/program-test/src/utils/mod.rs
Exposed registry_sdk, made forester and several TestRpc/LightProgramTest methods and fields unconditional (removed devenv guards), adjusted re-exports and initialization flows.
Macros / generated paths
sdk-libs/macros/src/... (multiple files)
Canonicalized generated type paths to ::light_sdk, light_token, and ::light_sdk::sdk_types; updated return types/signatures for compressed account types, hasher traits, rent sponsor, and CPI types.
SDK & top-level re-exports
sdk-libs/sdk/src/lib.rs, sdk-libs/sdk/src/interface/mod.rs
Added public modules hasher, compressed_account, sdk_types; exposed rent and CreateAccountsProof from light_compressible; replaced pub use light_hasher with pub extern crate light_hasher.
Token SDK & token crate rename
sdk-libs/token-sdk/*, sdk-libs/token-types/Cargo.toml, sdk-libs/token-sdk/Cargo.toml, sdk-libs/token-sdk/src/anchor.rs
Renamed/reshuffled token-sdk → token paths, added anchor module, adjusted features (anchor-discriminator, idl-build), removed some gated re-exports (decompress_runtime), and added an anchor re-export hub.
Cargo / feature defaults
sdk-libs/program-test/Cargo.toml, sdk-libs/sdk/Cargo.toml, sdk-libs/sdk-types/Cargo.toml, sdk-libs/token-sdk/Cargo.toml, sdk-libs/token-types/Cargo.toml, sdk-libs/sdk-pinocchio/Cargo.toml, sdk-libs/client/Cargo.toml
Adjusted feature defaults (added v2), added idl-build features, toggled optional solana-program, replaced many workspace dependency keys to new crate names.
Client / indexer / interface path updates
sdk-libs/client/src/**, sdk-libs/client/Cargo.toml, sdk-libs/client/src/indexer/types.rs
Repointed many imports from light_token_sdk to light_token, updated TokenData/compat types, adjusted error source types, and updated some program ID import locations.
Docs & minor examples
CLAUDE.md, sdk-libs/macros/docs/features/anchor-spl-features.md, various docs
Added "Getting Anchor Program Instruction Discriminators" doc and updated example import paths (anchor_spl → light_anchor_spl, light_token_sdk → light_token).

Sequence Diagram(s)

sequenceDiagram
    participant TestHarness as "Test Harness / caller"
    participant RegistrySDK as "registry_sdk (instruction builder)"
    participant RPC as "RPC client"
    participant Runtime as "Solana runtime"
    participant TokenProg as "Compressed Token Program"

    TestHarness->>RegistrySDK: build_claim_instruction(token_accounts, params)
    RegistrySDK-->>TestHarness: Instruction (PDAs, metas, discriminator)
    TestHarness->>RPC: create_and_send_transaction(Instruction)
    RPC->>Runtime: submit transaction
    Runtime->>TokenProg: execute claim/compress_and_close
    TokenProg-->>Runtime: execution result
    Runtime-->>RPC: transaction result
    RPC-->>TestHarness: confirmation/result

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested labels

ai-review

Suggested reviewers

  • sergeytimoshin
  • SwenSchaeferjohann

Poem

A tiny constant shifted down,
A registry rose to claim its crown,
PDAs and discriminators play,
Builders stitch the test-time way,
Small edit, broader testing grown. 🎉

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed

❌ Failed checks (1 warning)

  • Title check ⚠️ Warning: The title focuses on the DEFAULT_WRITE_TOP_UP change and crate re-exports, but the changeset also includes extensive refactoring: SDK module restructuring, token SDK enhancements, registry SDK additions, feature-flag updates, and the light-token-sdk → light-token rename. Resolution: revise the title to reflect the scope (e.g., "refactor: restructure SDK re-exports, rename token-sdk to token, add registry module") or split into focused PRs by concern (config defaults vs. SDK restructuring).

✅ Passed checks (2 passed)

  • Description check ✅ Passed: Check skipped; CodeRabbit's high-level summary is enabled.
  • Docstring coverage ✅ Passed: Docstring coverage is 100.00%, which meets the required threshold of 70.00%.




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
sdk-libs/client/src/interface/decompress_mint.rs (2)

125-126: Unchecked array index access could panic.

The code assumes proof_result.accounts has at least one element. If the indexer ever returns an empty accounts array (e.g., due to a bug or edge case), this will panic with an index-out-of-bounds error.

Consider a defensive check:

-    // Extract tree info from proof result
-    let account_info = &proof_result.accounts[0];
+    // Extract tree info from proof result
+    let account_info = proof_result.accounts.first()
+        .ok_or(DecompressMintError::ProofRequired)?;

The same pattern appears at line 275 in decompress_mint_idempotent.


274-275: Same unchecked index access pattern here.

This mirrors the issue at line 126. Both locations should use .first() with proper error handling for consistency and safety.

🤖 Fix all issues with AI agents
In `@sdk-libs/client/src/interface/decompress_mint.rs`:
- Line 101: The change of DEFAULT_WRITE_TOP_UP from 766 to 0 alters
DecompressMint's default rent-top-up behavior and can break callers that relied
on implicit top-ups; restore the previous heuristic by setting
DEFAULT_WRITE_TOP_UP back to 766 and add a clarifying doc comment near
DEFAULT_WRITE_TOP_UP explaining its purpose and instructing callers to use
with_write_top_up() on DecompressMint when they want explicit control (reference
DEFAULT_WRITE_TOP_UP and method with_write_top_up on the DecompressMint
builder).


pub const DEFAULT_RENT_PAYMENT: u8 = 2;
- pub const DEFAULT_WRITE_TOP_UP: u32 = 766;
+ pub const DEFAULT_WRITE_TOP_UP: u32 = 0;


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, let's understand the context of this change in the file
cat -n sdk-libs/client/src/interface/decompress_mint.rs | head -120

Repository: Lightprotocol/light-protocol



🏁 Script executed:

# Search for write_top_up usage and documentation
rg -n 'write_top_up' --type rust -A 2 -B 2



🏁 Script executed:

# Look for CLAUDE.md files that might document DecompressMint or rent/top-up behavior
find . -name "CLAUDE.md" -type f | head -20



🏁 Script executed:

# Check for any documentation about top-up values and rent calculations
rg -n 'DEFAULT_WRITE_TOP_UP|write_top_up|top.up' --type rust programs/compressed-token/program/src/compressed_token/ | head -30



🏁 Script executed:

# Search for tests related to decompress_mint
fd 'test' --type f --path '*/decompress*' | grep -E '\.(rs|test)$'



🏁 Script executed:

# Look for the DecompressMint instruction implementation to understand what write_top_up does
rg -n 'struct DecompressMint|impl.*DecompressMint' --type rust programs/compressed-token/



🏁 Script executed:

# Check CLAUDE.md files for design guidance on write_top_up and default values
rg -n 'write_top_up|DEFAULT_WRITE_TOP_UP|default.*top.up' --type markdown CLAUDE.md programs/compressed-token/program/CLAUDE.md



🏁 Script executed:

# Look for any recent commits or comments explaining the rationale
rg -n 'write_top_up|766|top.up' sdk-libs/client/src/interface/decompress_mint.rs -B 5 -A 5 | head -50



🏁 Script executed:

# Check if there are comments in the client library about why this default was chosen
head -110 sdk-libs/client/src/interface/decompress_mint.rs | tail -20



Default write_top_up reduced from 766 to 0: impacts caller rent management.

This changes the client's default behavior for DecompressMint operations. The old value of 766 was a heuristic to cover future write operations (~3 hours per write); with the new default of 0, callers no longer get automatic top-up unless explicitly provided via with_write_top_up().

The on-chain instruction accepts this value without issue, and tests confirm it works. However, any code that relied on implicit rent management during decompression will need to explicitly provide write_top_up values if rent coverage for subsequent operations is required. This is a conservative default shift—callers must opt into top-up behavior rather than having it implicitly.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
sdk-libs/program-test/src/forester/compress_and_close_forester.rs (1)

56-56: Remove debug print statement.

This println! appears to be leftover debug output that shouldn't be in the final code. It will clutter test output unnecessarily.

Proposed fix
-    println!("config compression_authority {:?}", compression_authority);
🤖 Fix all issues with AI agents
In `@CLAUDE.md`:
- Around line 276-280: Add a blank line before and after the fenced code block
and include a language tag (e.g., "text") for the example output block in
CLAUDE.md: update the fence surrounding the "Claim: ..." and "CompressAndClose:
..." lines so it is preceded and followed by an empty line and the opening fence
is ```text instead of ``` to satisfy MD031/MD040.

In `@sdk-libs/program-test/src/registry_sdk.rs`:
- Around line 16-65: Add a devenv-only parity test (annotated with
#[cfg(all(test, feature = "devenv"))]) that loads the same constants from the
on-chain registry crate and asserts equality with the local constants (e.g.,
REGISTRY_PROGRAM_ID, FORESTER_SEED, FORESTER_EPOCH_SEED,
PROTOCOL_CONFIG_PDA_SEED, and all discriminator constants like
CLAIM_DISCRIMINATOR, COMPRESS_AND_CLOSE_DISCRIMINATOR,
REGISTER_FORESTER_DISCRIMINATOR, REGISTER_FORESTER_EPOCH_DISCRIMINATOR,
FINALIZE_REGISTRATION_DISCRIMINATOR, REPORT_WORK_DISCRIMINATOR,
PROTOCOL_CONFIG_PDA_DISCRIMINATOR, FORESTER_PDA_DISCRIMINATOR,
FORESTER_EPOCH_PDA_DISCRIMINATOR, EPOCH_PDA_DISCRIMINATOR); implement a test
function (e.g., test_devenv_discriminator_and_seed_parity) that imports the
registry crate symbols and uses assert_eq! for each pair to fail the build when
any seed/discriminator/program id drifts.
- Around line 71-103: Change the epoch byte-array usages to take a slice
reference instead of relying on array-only APIs: in get_forester_epoch_pda
replace epoch.to_le_bytes().as_slice() with epoch.to_le_bytes().as_ref() (or
&epoch.to_le_bytes()) when building the seed slice, and in get_epoch_pda_address
replace the &[&epoch.to_le_bytes()] seed with a slice-ref like
&[epoch.to_le_bytes().as_ref()] (or &[&epoch.to_le_bytes()]) so the seeds use
&[u8] refs consistently; these edits touch get_forester_epoch_pda and
get_epoch_pda_address.
- Around line 396-419: Add an explicit 8‑byte discriminator check before calling
ProtocolConfigPda::deserialize and ForesterPda::deserialize: in
deserialize_protocol_config_pda and deserialize_forester_pda first ensure
data.len() >= 8, then compute/get the expected 8‑byte discriminator for
ProtocolConfigPda and ForesterPda respectively and compare it to &data[..8]; if
it does not match return an std::io::Error (InvalidData) indicating a
discriminator mismatch, otherwise proceed with deserializing from &data[8..].
Ensure you reference the exact types ProtocolConfigPda and ForesterPda when
obtaining the expected discriminator and produce a clear error on mismatch.
- Around line 251-275: The function build_compress_and_close_instruction
currently calls indices.serialize(&mut data).unwrap(), which can panic; change
its signature to return Result<Instruction, std::io::Error> and remove the
unwrap so the serialization error is propagated (use the ? operator or map_err
as appropriate), then construct and return Ok(instruction) instead of
Instruction directly; update any callers/tests to handle the Result. Reference:
build_compress_and_close_instruction and the indices.serialize(&mut data) call.

@ananas-block force-pushed the jorrit/fix-client branch 5 times, most recently from 5d3ac2b to 5013dbe (January 20, 2026 02:19)
@ananas-block changed the title from "chore: client set default top up to 0" to "chore: client set default top up to 0, fix crate reexports" on Jan 20, 2026

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@sdk-libs/token-sdk/Cargo.toml`:
- Line 16: The cpi-context feature is effectively forced on because the
light-sdk dependency is being enabled unconditionally; to fix this, make the
light-sdk/cpi-context dependency optional and wire it to the token-sdk feature
(set the dependency entry for "light-sdk/cpi-context" to optional = true in
Cargo.toml and keep cpi-context = ["light-sdk/cpi-context"] under [features]),
and also remove any unconditional enabling of that feature in the light-sdk
crate (stop listing "cpi-context" in light-sdk's default features or remove the
always-on mapping) so consumers can opt in to the cpi-context feature.

anchor = ["anchor-lang", "light-token-types/anchor", "light-token-interface/anchor", "dep:light-sdk-macros", "dep:anchor-spl"]
# idl-build feature enables IDL generation for client-side tooling and SDK generation
idl-build = ["anchor-spl/idl-build", "anchor", "anchor-lang/idl-build", "light-sdk/idl-build"]
cpi-context = ["light-sdk/cpi-context"]


⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

# First, locate and examine the Cargo.toml file
fd "token-sdk.*Cargo.toml" | head -5



🏁 Script executed:

# Read the token-sdk Cargo.toml file to see the feature and dependency configuration
cat -n sdk-libs/token-sdk/Cargo.toml



🏁 Script executed:

# Check for any documentation or CLAUDE.md files that might clarify feature flag usage
find sdk-libs/token-sdk -name "*.md" -type f



🏁 Script executed:

# Check if cpi-context feature is actually used in conditional compilation
rg 'cfg.*feature.*cpi-context' sdk-libs/token-sdk/ -A 2 -B 2



🏁 Script executed:

# Check if there's any code that depends on cpi-context being enabled
rg 'cpi-context' sdk-libs/token-sdk/ --type rust



🏁 Script executed:

# Look for any documentation about feature flags in token-sdk
find sdk-libs/token-sdk -name "*.md" -o -name "CLAUDE.md" | xargs cat 2>/dev/null



🏁 Script executed:

# Check the light-sdk dependency to understand what cpi-context does
rg 'cpi-context' Cargo.lock | head -20



🏁 Script executed:

# Let's look at the conditional compilation logic in more detail
cat -n sdk-libs/token-sdk/src/compressed_token/v2/compress_and_close.rs | sed -n '1,50p'



🏁 Script executed:

# Check if there's any documentation in the token-sdk about feature requirements
find sdk-libs -name "README.md" -o -name "CLAUDE.md" | xargs grep -l "token-sdk\|cpi-context" 2>/dev/null



🏁 Script executed:

# Look at what the cpi-context feature provides in light-sdk
fd "Cargo.toml" | xargs grep -l "light-sdk" | head -3



🏁 Script executed:

# Check if other packages in the workspace conditionally depend on cpi-context
rg 'light-sdk.*features.*cpi-context' --type toml



Feature flag cpi-context on line 16 is currently bypassed—unconditionally enabled on line 35.

While the codebase includes conditional compilation paths for #[cfg(feature = "cpi-context")] and #[cfg(not(feature = "cpi-context"))] throughout the token-sdk (e.g., in decompress_full.rs and compress_and_close.rs), the feature itself has no effect because light-sdk is always built with cpi-context enabled. This means consumers cannot opt out of this dependency, even if they want a minimal build.

Since the conditional compilation infrastructure exists in the code, the feature was clearly intended to be optional. Gating light-sdk/cpi-context via the actual feature flag preserves consumer choice:

🔧 Suggested fix
- light-sdk = { workspace = true, features = ["v2", "cpi-context"] }
+ light-sdk = { workspace = true, features = ["v2"] }

This allows the feature on line 16 to function as designed—consumers can enable cpi-context explicitly when needed, rather than always inheriting it.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In `@programs/compressed-token/anchor/Cargo.toml`:
- Line 28: The dependency declaration for anchor-spl currently forces the
idl-build feature; change the anchor-spl entry so it inherits the workspace
definition instead of forcing features (e.g., replace the current anchor-spl = {
version = "0.31.1", features = ["idl-build"] } with a workspace-inheriting spec
such as anchor-spl = { workspace = true }) so the crate-level feature flag
idl-build (referenced near the top of the Cargo.toml) can control whether IDL
build wiring is included; update the anchor-spl dependency line and ensure no
forced features remain so --no-default-features works as intended.

In `@sdk-libs/token-sdk/src/anchor.rs`:
- Around line 39-40: The public glob re-export `crate::token::*` exposes all
token internals and risks expanding the public API; replace it with explicit pub
uses for only the high-level types you want exported (e.g., concrete builders,
CPI key structs) instead of `crate::token::*`, and change the unclear alias
`ValidityProofAlias` to a descriptive name like `TokenValidityProof` or
`CompressedTokenProof` (while keeping `CompressedProof` as-is) so callers
understand which proof to use; update the single re-export line to list each
allowed symbol explicitly and rename the alias reference accordingly.
♻️ Duplicate comments (2)
sdk-libs/token-sdk/Cargo.toml (2)

10-16: Feature structure looks well-designed.

The new features are properly organized:

  • anchor-discriminator as default makes sense for typical use cases where discriminator derivation is needed
  • idl-build correctly chains through anchor to ensure all required dependencies are available
  • The comment on line 14 is helpful for documenting the purpose

One thing worth noting: the cpi-context feature on line 16 is currently ineffective because line 35 unconditionally enables light-sdk/cpi-context. This was flagged in a previous review—consumers cannot opt out of cpi-context even if they want a minimal build.


35-35: cpi-context unconditionally enabled here—previously flagged.

This line forces cpi-context on for all consumers, which defeats the purpose of having an optional feature on line 16. The conditional compilation infrastructure (#[cfg(feature = "cpi-context")]) throughout the token-sdk becomes ineffective.

If this is intentional (i.e., cpi-context should always be available in this SDK), consider removing the cpi-context feature from line 16 entirely to avoid confusion. Otherwise, remove "cpi-context" from this dependency and let the feature flag control it.

Comment on lines +39 to +40
// Re-export token SDK types
pub use crate::{token::*, CompressedProof, ValidityProof as ValidityProofAlias};

🧹 Nitpick | 🔵 Trivial

🧩 Analysis chain

🏁 Script executed:

# First, let's examine the anchor.rs file context to understand the full picture
echo "=== Examining anchor.rs around lines 39-40 ==="
head -n 50 sdk-libs/token-sdk/src/anchor.rs | tail -n +30

echo ""
echo "=== Checking token module structure ==="
# Try to find the token module
find sdk-libs/token-sdk/src -name "*.rs" -type f | head -20



🏁 Script executed:

# Check token/mod.rs to see what's being glob-imported
echo "=== Contents of token module (mod.rs) ==="
cat sdk-libs/token-sdk/src/token/mod.rs 2>/dev/null | head -100

echo ""
echo "=== Check crate root lib.rs for CompressedProof and ValidityProof ==="
cat sdk-libs/token-sdk/src/lib.rs



🏁 Script executed:

# Search for ValidityProof definitions across token-sdk
echo "=== Searching for ValidityProof definitions ==="
rg -n "(?:pub\s+)(?:struct|type|enum)\s+ValidityProof" sdk-libs/token-sdk/src/ -A 3

echo ""
echo "=== Searching for CompressedProof definitions ==="
rg -n "(?:pub\s+)(?:struct|type|enum)\s+CompressedProof" sdk-libs/token-sdk/src/ -A 3



🏁 Script executed:

# Check what light_sdk exports that might collide
echo "=== Check if light_sdk is imported and its instruction module ==="
rg -n "light_sdk" sdk-libs/token-sdk/src/anchor.rs -B 2 -A 2

echo ""
echo "=== Look for pub use statements that export from light_sdk ==="
rg -n "pub use light_sdk" sdk-libs/token-sdk/src/anchor.rs -A 1



Use explicit exports instead of glob import to maintain API stability.

The glob import crate::token::* in a public re-export module creates maintenance risk: any new type added to the token module automatically becomes part of this crate's public API without explicit review. Given the learning that "data structures must be isolated in the light-token-interface crate so SDKs can import types without pulling in program dependencies," an uncontrolled public surface here undermines that isolation.

Additionally, ValidityProofAlias is an unclear name. The suffix "Alias" doesn't explain why two proof types coexist or which to use. Consider a descriptive name like CompressedTokenProof or TokenValidityProof to communicate the semantic distinction.

Suggested approach: Replace the glob import with explicit exports of high-level types (builders and key CPI structs from token::*), and rename the proof type alias to clarify its purpose.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (14)
sdk-libs/token-sdk/src/instruction/create_mint.rs (2)

42-84: Doc example shows stale default value for write_top_up.

The PR objective states that DEFAULT_WRITE_TOP_UP changed from 766 to 0, but the example on line 74 still demonstrates write_top_up: 766. While this is just an example and users can choose any value, it would be clearer to either:

  1. Update the example to show write_top_up: 0 (the new default), or
  2. Add a comment noting this is an explicit override of the default

This prevents confusion when developers copy-paste the example and wonder why their code behaves differently than expected defaults.

📝 Suggested documentation fix
-///     write_top_up: 766, // ~3 hours per write
+///     write_top_up: 0, // default; set higher for frequent writes (~766 = ~3 hours per write)

37-39: Field documentation states incorrect default value.

The doc comment says "Default: 766" but per the PR objectives, DEFAULT_WRITE_TOP_UP has been changed to 0. This documentation will mislead developers about the actual default behavior.

📝 Suggested fix
     /// Lamports allocated for future write operations.
-    /// Default: 766 (~3 hours per write)
+    /// Default: 0 (set to ~766 for ~3 hours per write if frequent updates expected)
     pub write_top_up: u32,
sdk-libs/macros/docs/features/anchor-spl-features.md (2)

3-7: Align intro wording with the Light fork name.

The text still says “anchor-spl crate,” but all examples now use light_anchor_spl. That mismatch can cause users to add the wrong dependency. Consider updating the intro to explicitly state it documents the Light Protocol fork.

📝 Suggested edit
-This document covers the 12 SPL-specific features available in Anchor through the `anchor-spl` crate for working with tokens and mints.
+This document covers the 12 SPL-specific features available in Anchor through the Light Protocol fork, `light_anchor_spl`, for working with tokens and mints.

9-410: The documentation examples reference non-existent module paths in light_anchor_spl.

The actual codebase only imports from light_anchor_spl::token_interface::{Mint, TokenAccount, TokenInterface}. The documented examples incorrectly claim the following modules exist:

  • light_anchor_spl::token (with Token, TokenAccount, Mint)
  • light_anchor_spl::token_2022
  • light_anchor_spl::associated_token

None of these modules are used in any test or source files in the repository. Update the examples to use only light_anchor_spl::token_interface:: for all token account and mint operations, and remove or correct sections referencing non-existent modules.

sdk-libs/token-sdk/src/instruction/transfer_to_spl.rs (1)

192-194: Existing TODO: pool_index is hardcoded to 0.

This is pre-existing, but since you're touching this file—the hardcoded pool_index: 0 may cause issues if multiple pools are supported in the future. Would you like me to open an issue to track making this dynamic, or help draft a solution?

sdk-libs/macros/src/light_pdas/accounts/builder.rs (1)

236-259: Align CPI account paths to sdk_types for consistency in macro-generated code.

The ::light_sdk::sdk_types module is explicitly designed for macro-generated code (as noted in the re-export comments), making it the preferred path for this file. Line 299 still uses light_sdk::cpi::CpiAccountsConfig—update it to ::light_sdk::sdk_types::CpiAccountsConfig to match the pattern established in lines 239 and 254:

-                light_sdk::cpi::CpiAccountsConfig::new_with_cpi_context(crate::LIGHT_CPI_SIGNER),
+                ::light_sdk::sdk_types::CpiAccountsConfig::new_with_cpi_context(crate::LIGHT_CPI_SIGNER),

This is a straightforward consistency improvement that aligns your macro code with the SDK's intended re-export structure.

sdk-libs/macros/src/light_pdas/program/variant_enum.rs (2)

1-9: Minor inconsistency: :: prefix usage varies across light_sdk submodules.

I notice that hasher paths use ::light_sdk::hasher::* (with leading ::) while other light_sdk paths like light_sdk::interface::* and light_sdk::error::* omit the prefix. This is likely intentional for hasher types specifically, but consider applying :: consistently to all light_sdk::* paths in generated code for uniform macro hygiene.

This isn't a bug—both styles work—but uniform prefixing would make the generated code more robust against potential shadowing in user modules.
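The shadowing concern can be illustrated with a std-only sketch. Everything here is contrived for the example: a deliberately empty local `core` module stands in for any user item that collides with a crate name a macro-generated path references.

```rust
mod core {
    // Deliberately empty: inside this crate, bare paths starting with
    // `core::` now resolve to this local module, not the `core` crate.
    pub mod result {}
}

// With the local `core` module above, a bare `core::result::Result` would
// resolve to the empty user module and fail to compile. The leading `::`
// forces resolution through the extern prelude, reaching the real `core`
// crate regardless of local shadowing -- which is why generated code
// benefits from uniform `::`-prefixed paths.
fn checked_halve(x: i32) -> ::core::result::Result<i32, &'static str> {
    if x % 2 == 0 {
        Ok(x / 2)
    } else {
        Err("odd")
    }
}
```

The same reasoning applies to `::light_sdk::*` paths emitted by the macros: the prefix costs nothing and guards against a user module named `light_sdk`.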


177-207: Use ::light_hasher::* paths for consistency with other macros in this crate.

The generated DataHasher implementation currently uses ::light_sdk::hasher::* paths, but this creates an inconsistency within the macros crate. All other macro implementations (e.g., to_byte_array.rs, light_hasher.rs) correctly reference ::light_hasher::* directly.

Additionally, light_sdk declares light_hasher only as a dev-dependency, not a main dependency. The re-export in light_sdk/src/lib.rs exists but lacks proper dependency backing, making the ::light_sdk::hasher::* path fragile for generated code that executes in downstream user crates.

Change all paths in lines 184–200 to use ::light_hasher::DataHasher, ::light_hasher::Hasher, and ::light_hasher::HasherError to align with the established pattern and ensure generated code compiles correctly in user crates.

sdk-libs/token-client/src/instructions/transfer2.rs (3)

206-210: Inconsistent error handling: unwrap() can panic on RPC failures.

This unwrap() on the validity proof result will panic if the RPC call fails. Compare with the CompressAndClose branch (lines 590-593) which properly uses map_err and ok_or. Consider propagating errors consistently:

Proposed fix
     let rpc_proof_result = rpc
         .get_validity_proof(hashes, vec![], None)
         .await
-        .unwrap()
+        .map_err(|_| TokenSdkError::InvalidAccountData)?
         .value;

299-304: Double unwrap() chain can panic if source account doesn't exist.

The rpc.get_account(...).await.unwrap().unwrap() pattern will panic if either the RPC call fails or the account doesn't exist. This is inconsistent with the error handling in CompressAndClose (lines 590-593). Consider returning a descriptive error instead.

Proposed fix
-                let source_account_owner = rpc
+                let source_account_owner = match rpc
                     .get_account(input.solana_token_account)
                     .await
-                    .unwrap()
-                    .unwrap()
-                    .owner;
+                {
+                    Ok(Some(account)) => account.owner,
+                    _ => return Err(TokenSdkError::InvalidAccountData),
+                };

359-364: Same unwrap().unwrap() pattern for recipient account lookup.

This mirrors the issue at lines 299-304 — a missing or inaccessible recipient account will cause a panic rather than returning a meaningful error.

sdk-libs/token-client/src/actions/mint_action.rs (1)

164-164: Remove debug println! before merging.

This debug output will clutter logs in production. Consider using tracing::debug! if you need runtime visibility, or remove it entirely.

Proposed fix
-    println!("params {:?}", params);
sdk-libs/program-test/src/utils/register_test_forester.rs (1)

51-88: Consider extracting common logic to reduce duplication.

Both assert_registered_forester implementations share identical comparison logic—they differ only in how the account is fetched and deserialized. While the current approach works, the duplication could grow if assertion logic becomes more complex.

A potential refactor would be to have a single assert_registered_forester that accepts a deserialized ForesterPda and let a separate cfg-gated helper handle the fetch+deserialize:

// Shared assertion logic
fn assert_forester_matches(actual: &ForesterPda, expected: &ForesterPda) -> Result<(), RpcError> {
    if actual != expected {
        return Err(RpcError::AssertRpcError(...));
    }
    Ok(())
}

That said, the current implementation is perfectly functional for the scope of this PR.

sdk-libs/program-test/src/forester/register_forester.rs (1)

64-87: Step numbering inconsistency.

The comment on line 64 says "3. Get current slot..." but this follows step 2 (Get protocol config). Then line 87 jumps to "4. Finalize registration". The intermediate step of creating register_forester_epoch_pda_instruction (lines 68-80) isn't numbered, which makes the sequence confusing.

Consider renumbering for clarity:

    1. Get protocol config
    2. Get current slot
    3. Register forester epoch PDA
    4. Finalize registration
🤖 Fix all issues with AI agents
In `@rebase.md`:
- Line 12: The documentation line showing a package rename is incorrect: replace
the mistaken mapping `light_token` → `light_token` with the intended migration
mapping `light_token_sdk` → `light_token` so the sequence matches the
surrounding mappings like `ctoken-sdk` → `token-sdk` and `ctoken` → `token`;
update the text in rebase.md to use `light_token_sdk` as the source and
`light_token` as the target to reflect the import migration pattern used
throughout the PR.

In `@sdk-libs/macros/src/light_pdas/program/instructions.rs`:
- Line 344: The code uses ::light_sdk::interface::rent::RentConfig for the
rent_config parameter while most of the SDK and clients use
light_compressible::rent::RentConfig; either change the macro-generated
signature (rent_config: ::light_sdk::interface::rent::RentConfig) to use the
direct type light_compressible::rent::RentConfig for consistency, or add a
concise inline comment next to the rent_config declaration and the equivalent
place at the other occurrence (the other RentConfig usage) explaining why the
macro emits the absolute ::light_sdk re-export path to avoid confusion; update
both occurrences to keep the codebase consistent.

In `@sdk-libs/program-test/Cargo.toml`:
- Around line 22-23: Remove the forbidden sdk-libs dependency "light-token" from
the light-program-test Cargo.toml (the dependency entry "light-token = {
workspace = true }") because it creates an inter-SDK-lib coupling; instead rely
on the already-present "light-token-interface" from program-libs for compressed
token types/interfaces and keep "light-compressible" as-is. Update any code in
the light-program-test crate that imported symbols from "light-token" to import
the corresponding types/interfaces from "light-token-interface" (or refactor to
use wrappers/adapters if names differ) and run cargo build/tests to ensure no
remaining references to "light-token" remain.

In `@sdk-libs/program-test/src/program_test/light_program_test.rs`:
- Around line 373-382: The duplicate logic that iterates over
context.config.additional_programs and pushes missing pids into
context.auto_mine_cold_state_programs should be extracted into a single helper
(e.g., register_additional_programs) to avoid duplication; implement a method on
the same type that clones/iterates context.config.additional_programs and
performs the contains-check/push using context.auto_mine_cold_state_programs,
then replace both existing blocks (the devenv path and the non-devenv path) with
calls to that helper (referencing the helper name and the fields
config.additional_programs and auto_mine_cold_state_programs to locate the
code).

In `@sdk-libs/token-client/src/actions/transfer2/spl_to_ctoken.rs`:
- Around line 5-8: The file imports SPL_TOKEN_PROGRAM_ID, TransferFromSpl, and
find_spl_interface_pda from the SDK lib crate light_token, which violates the
rule that SDK libs must not depend on other SDK libs; replace those imports with
equivalents from the program-libs package (token-interface) and remove the
light_token dependency from Cargo.toml (and also remove light-sdk and
light-client SDK-lib deps). Concretely: update the use statements in
spl_to_ctoken.rs to import SPL_TOKEN_PROGRAM_ID, TransferFromSpl, and
find_spl_interface_pda from the program-libs token-interface crate (the crate
that exposes the token interface types/PDAs), adjust any paths/names to match
that crate’s public API, and then update Cargo.toml to drop light_token,
light-sdk, and light-client and add the program-libs/token-interface dependency
if missing.

In `@sdk-libs/token-sdk/src/instruction/compressible.rs`:
- Line 5: Update the doc comments that currently reference
ctoken::COMPRESSIBLE_CONFIG_V1 and ctoken::RENT_SPONSOR to use the correct
import path light_token::instruction::COMPRESSIBLE_CONFIG_V1 and
light_token::instruction::RENT_SPONSOR (or mention the equivalent helper
functions ctoken::config_pda() and ctoken::rent_sponsor_pda()) so the examples
match the crate reorganization and the imports used in this module (symbols:
COMPRESSIBLE_CONFIG_V1, RENT_SPONSOR, config_pda, rent_sponsor_pda).

In `@sdk-libs/token-sdk/src/lib.rs`:
- Around line 63-67: The crate-level documentation still references items under
the old `token::...` path but the module was renamed to `instruction` (`pub mod
instruction`), which breaks docs and doctests; update the lib-level docs (near
`pub mod instruction`) to either (a) change all operation table links from
`token::...` to `instruction::...` to match the new module, or (b) add targeted
re-exports (e.g., `pub use instruction::<TypeOrFn>` for the formerly exported
`token::...` items) so the old `token::` paths remain valid; ensure the
operations table links and any intra-doc links point to the actual public
symbols (reference `instruction` module and any types/functions moved) so docs
build and doctests pass.
♻️ Duplicate comments (9)
programs/compressed-token/anchor/Cargo.toml (1)

26-29: Don’t force anchor-spl/idl-build outside the feature gate.
You already gate idl-build; hardcoding the feature here makes --no-default-features ineffective. Prefer workspace dependency and let the feature flag control IDL wiring.

♻️ Proposed fix
-anchor-spl = { version = "0.31.1", features = ["idl-build"] }
+anchor-spl = { workspace = true }
sdk-libs/token-sdk/src/anchor.rs (1)

39-40: Glob import and unclear alias warrant attention.

Two concerns here:

  1. Glob import instruction::*: This exports the entire instruction module's public surface. If internal types are added to that module later, they automatically become part of this crate's public API. Consider explicit exports for API stability.

  2. ValidityProofAlias naming: This is confusing—line 12 already exports ValidityProof from light_sdk::instruction, so having ValidityProofAlias from the crate root suggests a naming collision. If these are the same type, the alias is redundant. If they're different, the name doesn't explain which to use or why.

♻️ Suggested approach

Replace the glob import with explicit exports and clarify the proof type situation:

-pub use crate::{instruction::*, CompressedProof, ValidityProof as ValidityProofAlias};
+// Export specific instruction types needed for Anchor integration
+pub use crate::instruction::{CreateMintsCpi, CreateMintsParams, SingleMintParams, /* other needed types */};
+pub use crate::CompressedProof;
+// If ValidityProof differs from light_sdk::instruction::ValidityProof, use a descriptive name
+// pub use crate::ValidityProof as TokenValidityProof;
CLAUDE.md (1)

276-280: Fix markdown fence formatting for lint compliance.

The fenced code block needs a blank line before/after and a language tag to satisfy MD031/MD040.

📝 Suggested fix
 **Example output:**
-```
+
+```text
 Claim: [62, 198, 214, 193, 213, 159, 108, 210]
 CompressAndClose: [96, 94, 135, 18, 121, 42, 213, 117]
-```
+```
+
sdk-libs/token-sdk/Cargo.toml (1)

35-35: The cpi-context feature on line 16 has no effect.

Since light-sdk is unconditionally enabled with features = ["v2", "cpi-context"], the cpi-context feature declared on line 16 cannot be toggled off. This was flagged in a previous review.

If the intent is to allow consumers to opt out of cpi-context, the dependency should be:

light-sdk = { workspace = true, features = ["v2"] }
sdk-libs/program-test/src/registry_sdk.rs (5)

16-66: Hard-coded constants require maintenance discipline.

These discriminators and seeds are duplicated from the registry program. If the registry program changes, these values will silently diverge, causing runtime failures that are hard to debug.

A previous review suggested adding a #[cfg(all(test, feature = "devenv"))] test to verify these match the actual registry program values. That remains a good practice to prevent drift.


82-91: Consider using &epoch.to_le_bytes() for consistency.

As noted in a previous review, using &epoch.to_le_bytes() instead of epoch.to_le_bytes().as_slice() is more idiomatic and avoids reliance on array-specific methods.
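For illustration, a std-only sketch of the two forms (the `epoch_seed` helper name is invented):

```rust
/// Little-endian seed bytes for an epoch, as would be fed into PDA
/// derivation (helper name is illustrative).
fn epoch_seed(epoch: u64) -> [u8; 8] {
    epoch.to_le_bytes()
}

/// Both forms produce the same `&[u8]`. The borrow form is preferred
/// because `&[u8; 8]` coerces to `&[u8]` automatically, without relying
/// on the array-specific `as_slice` method.
fn seeds_match(epoch: u64) -> bool {
    let bytes = epoch_seed(epoch);
    let via_borrow: &[u8] = &bytes; // idiomatic: &[u8; 8] -> &[u8]
    via_borrow == bytes.as_slice() // works, but array-specific
}
```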


269-274: Serialization unwrap() can panic in tests.

As noted in a previous review, indices.serialize(&mut data).unwrap() will panic on serialization failure rather than returning an error. While Borsh serialization of simple types rarely fails, returning a Result would be cleaner for a library function.
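A minimal std-only sketch of the error-propagating shape, with `std::io::Write` standing in for Borsh serialization (the function name is illustrative):

```rust
use std::io::{self, Write};

/// Builds instruction data by writing a discriminator followed by a
/// serialized payload, propagating I/O errors with `?` instead of
/// panicking via `unwrap()`.
fn build_instruction_data(discriminator: [u8; 8], payload: &[u8]) -> io::Result<Vec<u8>> {
    let mut data = Vec::with_capacity(8 + payload.len());
    data.write_all(&discriminator)?;
    data.write_all(payload)?;
    Ok(data)
}
```

Callers then decide how to surface the error rather than having the library panic mid-test.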


398-420: Add discriminator validation before deserializing.

As flagged in a previous review, both deserialize_protocol_config_pda and deserialize_forester_pda only check data length, not the actual discriminator bytes. This means they could successfully deserialize the wrong account type if the data happens to be large enough.

Adding a discriminator check would catch misuse early:

if data[..8] != PROTOCOL_CONFIG_PDA_DISCRIMINATOR {
    return Err(std::io::Error::new(
        std::io::ErrorKind::InvalidData,
        "ProtocolConfig PDA discriminator mismatch",
    ));
}

308-315: Same serialization panic risk in create_register_forester_instruction.

Line 312 uses config.serialize(&mut data).unwrap(). This has the same concern as the previous comment—consider propagating errors rather than panicking.

♻️ Suggested approach

Return Result<Instruction, std::io::Error> and use ? instead of unwrap():

pub fn create_register_forester_instruction(
    // ... params ...
) -> Result<Instruction, std::io::Error> {
    // ...
    config.serialize(&mut data)?;
    // ...
    Ok(Instruction { ... })
}


- `ctoken-sdk` → `token-sdk`
- `light_token_sdk` → `light_token_sdk`
- `light_token` → `light_token`

⚠️ Potential issue | 🟡 Minor

Typo in package rename documentation.

Line 12 shows `light_token` → `light_token`, which isn't a rename. Based on the surrounding context (line 11 shows `ctoken-sdk` → `token-sdk` and line 13 shows the `ctoken` module → `token` module), this should likely read:

-- `light_token` → `light_token`
+- `light_token_sdk` → `light_token`

This matches the pattern seen throughout the PR where imports are migrating from light_token_sdk to light_token.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
- `light_token` → `light_token`
- `light_token_sdk` → `light_token`
🤖 Prompt for AI Agents
In `@rebase.md` at line 12, The documentation line showing a package rename is
incorrect: replace the mistaken mapping `light_token` → `light_token` with the
intended migration mapping `light_token_sdk` → `light_token` so the sequence
matches the surrounding mappings like `ctoken-sdk` → `token-sdk` and `ctoken` →
`token`; update the text in rebase.md to use `light_token_sdk` as the source and
`light_token` as the target to reflect the import migration pattern used
throughout the PR.

rent_sponsor: Pubkey,
compression_authority: Pubkey,
rent_config: light_compressible::rent::RentConfig,
rent_config: ::light_sdk::interface::rent::RentConfig,

⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash

echo "=== Searching for RentConfig definition in light_sdk ==="
rg -n "RentConfig" sdk-libs/light-sdk --type rust | head -30

echo ""
echo "=== Check light_compressible RentConfig export ==="
rg -n "RentConfig" program-libs/compressible --type rust | grep -E "(pub|mod|struct)" | head -20

echo ""
echo "=== Verify client code at sdk-libs/client/src/interface/initialize_config.rs ==="
rg -n "RentConfig" sdk-libs/client/src/interface/initialize_config.rs --type rust -B2 -A2

echo ""
echo "=== Check sdk-libs/macros/src/light_pdas/program/instructions.rs for RentConfig ==="
rg -n "RentConfig" sdk-libs/macros/src/light_pdas/program/instructions.rs --type rust -B1 -A1

Repository: Lightprotocol/light-protocol

Length of output: 2199


🏁 Script executed:

#!/bin/bash

echo "=== Find light_sdk crate location ==="
fd -t d "light.sdk|light-sdk" --max-depth 3

echo ""
echo "=== Search all Cargo.toml files for light_sdk ==="
rg -l "light.sdk|light-sdk" --type toml | head -20

echo ""
echo "=== Check if light_sdk exists anywhere ==="
find . -name "light*sdk*" -type d 2>/dev/null

echo ""
echo "=== Search for interface/rent module ==="
rg -n "pub.*mod.*interface|pub.*mod.*rent" --type rust | head -30

echo ""
echo "=== Check what's exported from light_sdk ==="
rg -n "light_sdk::" --type rust | head -20

echo ""
echo "=== Look for re-exports of RentConfig ==="
rg -n "pub.*use.*RentConfig|pub use.*rent" --type rust

Repository: Lightprotocol/light-protocol

Length of output: 3873


🏁 Script executed:

#!/bin/bash

echo "=== View sdk-libs/sdk/src/interface/mod.rs ==="
cat -n sdk-libs/sdk/src/interface/mod.rs | head -60

echo ""
echo "=== View sdk-libs/sdk/src/lib.rs to check module exports ==="
cat -n sdk-libs/sdk/src/lib.rs | head -180

echo ""
echo "=== Search for instructions.rs or similar macro usage of RentConfig ==="
rg -n "initialize_compression_config|update_compression_config" --type rust -A5 | head -40

Repository: Lightprotocol/light-protocol

Length of output: 14812


🏁 Script executed:

#!/bin/bash

echo "=== Check for other RentConfig usages in client/SDK code ==="
rg -n "RentConfig" sdk-libs/ --type rust | grep -v ".rs~" | grep -v "test"

echo ""
echo "=== Verify no compilation issues by checking if SDK is properly exporting interface ==="
rg -n "pub use light_compressible" sdk-libs/sdk/src/ --type rust -B2 -A2

Repository: Lightprotocol/light-protocol

Length of output: 2375


Clarify path inconsistency for maintainability.

The migration to ::light_sdk::interface::rent::RentConfig is valid—the re-export at sdk-libs/sdk/src/interface/mod.rs makes this path resolve correctly to the underlying light_compressible::rent::RentConfig type. No type incompatibility issues.

However, this diverges from the rest of the SDK codebase: compression_info.rs, config.rs, and the client code all use the direct import light_compressible::rent::RentConfig. Using an absolute path in macro-generated code is understandable for portability, but the inconsistency makes the codebase harder to follow.

Either standardize all SDK/client code to use the re-exported path, or add a comment explaining why macro-generated code uses a different import path than the rest of the interface module.

Also applies to: 372-372

🤖 Prompt for AI Agents
In `@sdk-libs/macros/src/light_pdas/program/instructions.rs` at line 344, The code
uses ::light_sdk::interface::rent::RentConfig for the rent_config parameter
while most of the SDK and clients use light_compressible::rent::RentConfig;
either change the macro-generated signature (rent_config:
::light_sdk::interface::rent::RentConfig) to use the direct type
light_compressible::rent::RentConfig for consistency, or add a concise inline
comment next to the rent_config declaration and the equivalent place at the
other occurrence (the other RentConfig usage) explaining why the macro emits the
absolute ::light_sdk re-export path to avoid confusion; update both occurrences
to keep the codebase consistent.

Comment on lines +22 to +23
light-compressible = { workspace = true }
light-token = { workspace = true }

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

# First, find the workspace structure and locate the light-token crate
git ls-files | grep -E "^Cargo.toml$|light-token" | head -20

Repository: Lightprotocol/light-protocol

Length of output: 200


🏁 Script executed:

# Check the workspace members and their locations
cat Cargo.toml | grep -A 100 "\[workspace\]" | head -50

Repository: Lightprotocol/light-protocol

Length of output: 1642


🏁 Script executed:

# Examine the specific file mentioned in the review
cat sdk-libs/program-test/Cargo.toml

Repository: Lightprotocol/light-protocol

Length of output: 2805


🏁 Script executed:

# Search for light-token crate definition across the repo
fd -t f "Cargo.toml" -x grep -l "^\[package\]" {} \; | xargs -I {} sh -c 'grep -q "name = \"light-token\"" {} && echo {}'

Repository: Lightprotocol/light-protocol

Length of output: 105


🏁 Script executed:

# Check the package name in token-sdk/Cargo.toml
head -20 sdk-libs/token-sdk/Cargo.toml | grep -A 5 "^\[package\]"

Repository: Lightprotocol/light-protocol

Length of output: 234


🏁 Script executed:

# Also check if there's a light-token in program-libs
fd -t d "^token" program-libs/

Repository: Lightprotocol/light-protocol

Length of output: 189


🏁 Script executed:

# Look at the learnings more carefully - check CLAUDE.md for SDK libs
cat sdk-libs/CLAUDE.md 2>/dev/null || echo "No CLAUDE.md in sdk-libs"

Repository: Lightprotocol/light-protocol

Length of output: 96


🏁 Script executed:

# Verify the workspace location of light-token
grep -E '"light-token"' Cargo.toml

Repository: Lightprotocol/light-protocol

Length of output: 54


🏁 Script executed:

# Check the dependencies section of token-sdk to see what it depends on
sed -n '/^\[dependencies\]/,/^\[/p' sdk-libs/token-sdk/Cargo.toml | head -30

Repository: Lightprotocol/light-protocol

Length of output: 1256


🏁 Script executed:

# Check if light-compressible is in program-libs as allowed
fd -t d "^compressible$" program-libs/

Repository: Lightprotocol/light-protocol

Length of output: 98


🏁 Script executed:

# Let me verify the exact workspace path for light-token by checking Cargo.toml members
grep "light-token\|token-sdk" Cargo.toml

Repository: Lightprotocol/light-protocol

Length of output: 461


Remove light-token dependency—violates SDK lib architecture constraint.

The workspace resolves light-token to sdk-libs/token-sdk. Per the established dependency rules, SDK libraries (sdk-libs) must depend only on program-libs, light-prover-client, and external crates. Adding light-token from sdk-libs as a dependency to light-program-test (also in sdk-libs) creates an impermissible inter-SDK-lib dependency.

If compressed token functionality is needed, use the types and interfaces from light-token-interface (in program-libs), which is already a dependency and appropriate for SDK libs to use. The light-compressible dependency (program-libs/compressible) is correct and can remain.

🤖 Prompt for AI Agents
In `@sdk-libs/program-test/Cargo.toml` around lines 22 - 23, Remove the forbidden
sdk-libs dependency "light-token" from the light-program-test Cargo.toml (the
dependency entry "light-token = { workspace = true }") because it creates an
inter-SDK-lib coupling; instead rely on the already-present
"light-token-interface" from program-libs for compressed token types/interfaces
and keep "light-compressible" as-is. Update any code in the light-program-test
crate that imported symbols from "light-token" to import the corresponding
types/interfaces from "light-token-interface" (or refactor to use
wrappers/adapters if names differ) and run cargo build/tests to ensure no
remaining references to "light-token" remain.

Comment on lines +373 to +382

// Register additional programs for auto-compression of their PDAs
// In non-devenv mode, always register since we can't configure otherwise
if let Some(programs) = context.config.additional_programs.clone() {
    for (_, pid) in programs.into_iter() {
        if !context.auto_mine_cold_state_programs.contains(&pid) {
            context.auto_mine_cold_state_programs.push(pid);
        }
    }
}

🧹 Nitpick | 🔵 Trivial

Consider extracting the duplicate program registration logic.

This block is nearly identical to lines 162-170 in the devenv path. Consider extracting to a helper method to reduce duplication:

♻️ Optional refactor
fn register_additional_programs(&mut self) {
    if let Some(programs) = self.config.additional_programs.clone() {
        for (_, pid) in programs.into_iter() {
            if !self.auto_mine_cold_state_programs.contains(&pid) {
                self.auto_mine_cold_state_programs.push(pid);
            }
        }
    }
}
🤖 Prompt for AI Agents
In `@sdk-libs/program-test/src/program_test/light_program_test.rs` around lines
373 - 382, The duplicate logic that iterates over
context.config.additional_programs and pushes missing pids into
context.auto_mine_cold_state_programs should be extracted into a single helper
(e.g., register_additional_programs) to avoid duplication; implement a method on
the same type that clones/iterates context.config.additional_programs and
performs the contains-check/push using context.auto_mine_cold_state_programs,
then replace both existing blocks (the devenv path and the non-devenv path) with
calls to that helper (referencing the helper name and the fields
config.additional_programs and auto_mine_cold_state_programs to locate the
code).

Comment on lines +5 to 8
use light_token::{
    constants::SPL_TOKEN_PROGRAM_ID, instruction::TransferFromSpl,
    spl_interface::find_spl_interface_pda,
};

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# First, find the actual file and understand the crate structure
echo "=== Finding light_token related crates ==="
fd -t d "light.token" --hidden -e Cargo.toml | head -20

echo "=== Looking for light_token crate/module definitions ==="
fd -t f "Cargo.toml" | xargs grep -l "name.*light.token" | head -20

echo "=== Checking sdk-libs/token-client/Cargo.toml ==="
cat sdk-libs/token-client/Cargo.toml 2>/dev/null || echo "File not found"

Repository: Lightprotocol/light-protocol

Length of output: 1355


🏁 Script executed:

#!/bin/bash
# Search for lib.rs files in sdk-libs to understand module structure
echo "=== SDK libs structure ==="
find sdk-libs -name "lib.rs" -type f | head -20

echo "=== Check if light_token is in sdk-libs ==="
find . -type d -name "light_token" -o -name "light-token*" | grep -v ".git" | head -20

Repository: Lightprotocol/light-protocol

Length of output: 525


🏁 Script executed:

#!/bin/bash
# Look for where these symbols are actually defined
echo "=== Search for SPL_TOKEN_PROGRAM_ID definition ==="
rg "SPL_TOKEN_PROGRAM_ID" --type rust -l | head -10

echo "=== Search for TransferFromSpl definition ==="
rg "struct TransferFromSpl|pub struct TransferFromSpl" --type rust -B 2 | head -30

echo "=== Search for find_spl_interface_pda definition ==="
rg "fn find_spl_interface_pda" --type rust -B 2 | head -20

Repository: Lightprotocol/light-protocol

Length of output: 2119


🏁 Script executed:

#!/bin/bash
# Find light-token crate (there might be a Cargo.toml or it could be a path in workspace)
echo "=== Search for light-token in workspace root ==="
cat Cargo.toml | grep -A 5 "light-token"

echo "=== Search for light-token lib definition ==="
find . -path "*/light-token/Cargo.toml" -o -path "*/light_token/Cargo.toml" | grep -v ".git"

echo "=== Check if light-token is defined in root workspace ==="
ls -la | grep -i token

Repository: Lightprotocol/light-protocol

Length of output: 1519


🏁 Script executed:

#!/bin/bash
# Check program-libs structure for light-token
echo "=== Program-libs structure ==="
ls -la program-libs/ | grep -i token

echo "=== SDK-libs structure ==="
ls -la sdk-libs/ | grep -i token

Repository: Lightprotocol/light-protocol

Length of output: 380


🏁 Script executed:

#!/bin/bash
# Look for any light-token or light_token module
echo "=== Searching for light_token module/crate definitions ==="
find . -name "light_token" -o -name "light-token" 2>/dev/null | grep -v ".git" | head -20

echo "=== Check workspace members in root Cargo.toml ==="
head -100 Cargo.toml | grep -A 200 "members"

Repository: Lightprotocol/light-protocol

Length of output: 3055


🏁 Script executed:

#!/bin/bash
# Check token-sdk's lib.rs to see what it exports
echo "=== SDK token-sdk lib.rs exports ==="
cat sdk-libs/token-sdk/src/lib.rs | head -100

echo "=== Check if token-sdk has spl_interface module/re-export ==="
cat sdk-libs/token-sdk/src/lib.rs | grep -i "spl_interface"

Repository: Lightprotocol/light-protocol

Length of output: 4153


🏁 Script executed:

#!/bin/bash
# Check the actual module structure in token-sdk
echo "=== Token-SDK modules ==="
ls -la sdk-libs/token-sdk/src/

echo "=== Check spl_interface.rs in token-sdk ==="
head -30 sdk-libs/token-sdk/src/spl_interface.rs

Length of output: 1952


🏁 Script executed:

#!/bin/bash
# Check if token-sdk's lib.rs re-exports from token-interface
echo "=== Full token-sdk lib.rs ==="
cat sdk-libs/token-sdk/src/lib.rs

Length of output: 4061


🏁 Script executed:

#!/bin/bash
# Also check the actual file being reviewed to be thorough
echo "=== Review file: spl_to_ctoken.rs (first 15 lines) ==="
head -15 sdk-libs/token-client/src/actions/transfer2/spl_to_ctoken.rs

Length of output: 522


🏁 Script executed:

#!/bin/bash
# Check CLAUDE.md files for context on dependencies
echo "=== Check for CLAUDE.md files ==="
find . -name "CLAUDE.md" -type f | head -10

echo "=== Check sdk-libs CLAUDE.md if exists ==="
cat sdk-libs/CLAUDE.md 2>/dev/null || echo "No sdk-libs/CLAUDE.md"

echo "=== Check token-client CLAUDE.md if exists ==="
cat sdk-libs/token-client/CLAUDE.md 2>/dev/null || echo "No token-client CLAUDE.md"

Length of output: 640


🏁 Script executed:

#!/bin/bash
# Final verification: check if the coding guideline about sdk-libs dependencies is actually being followed
echo "=== Check all sdk-libs Cargo.toml for dependencies ==="
for file in $(find sdk-libs -name "Cargo.toml" -type f); do
  echo "=== $file ==="
  grep -A 50 "\[dependencies\]" "$file" | grep -E "^(light-|solana-|spl-)" | head -15
done

Length of output: 5867


SDK lib dependencies violate coding guidelines.

The import paths themselves are correct: all three symbols (SPL_TOKEN_PROGRAM_ID, TransferFromSpl, find_spl_interface_pda) are properly exported from light_token (the sdk-libs/token-sdk crate). However, sdk-libs/token-client depending on light-token, another SDK lib, violates the established guideline that SDK libs may depend only on program-libs, light-prover-client, and external crates. The Cargo.toml also shows SDK-lib dependencies on light-sdk and light-client with the same issue. Restructure the dependencies to use only program-libs/token-interface and avoid cross-SDK-lib dependencies.
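The guideline check above can be sketched as a self-contained script. The sample manifest and the disallowed-crate list (light-token, light-sdk, light-client) are assumptions for illustration, not the repository's actual allow-list:

```shell
#!/bin/bash
# Hypothetical sketch: flag cross-SDK-lib dependencies in a Cargo.toml.
# The sample manifest below is illustrative only.
workdir=$(mktemp -d)
cat > "$workdir/Cargo.toml" <<'EOF'
[dependencies]
light-token = { path = "../token-sdk" }
light-compressed-account = { workspace = true }
solana-program = "2.2"
EOF

# SDK libs may depend on program-libs, light-prover-client, and external
# crates; any dependency matching another SDK lib is a violation.
violations=$(grep -E '^(light-token|light-sdk|light-client)[[:space:]]*=' "$workdir/Cargo.toml" || true)
if [ -n "$violations" ]; then
  echo "cross-SDK-lib dependency found:"
  echo "$violations"
fi
rm -rf "$workdir"
```

The anchored pattern matches only dependency keys, so workspace crates like light-compressed-account (a program-lib) pass the check.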

🤖 Prompt for AI Agents
In `@sdk-libs/token-client/src/actions/transfer2/spl_to_ctoken.rs` around lines
5-8, the file imports SPL_TOKEN_PROGRAM_ID, TransferFromSpl, and
find_spl_interface_pda from the SDK lib crate light_token, which violates the
rule that SDK libs must not depend on other SDK libs. Replace those imports
with equivalents from the program-libs package (token-interface) and remove
the light-token dependency from Cargo.toml (along with the light-sdk and
light-client SDK-lib deps). Concretely: update the use statements in
spl_to_ctoken.rs to import SPL_TOKEN_PROGRAM_ID, TransferFromSpl, and
find_spl_interface_pda from the program-libs token-interface crate (the crate
that exposes the token interface types/PDAs), adjusting any paths/names to
match that crate's public API; then update Cargo.toml to drop light-token,
light-sdk, and light-client and add the program-libs/token-interface
dependency if missing.
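The mechanical part of that prompt can be sketched in shell. The file contents, crate names, and paths below (light_token_interface, program-libs/token-interface) are hypothetical stand-ins and must be checked against the crate's actual public API:

```shell
#!/bin/bash
# Hypothetical sketch of the mechanical rewrite: point the use statement at
# the program-libs crate and drop the SDK-lib dependency from the manifest.
workdir=$(mktemp -d)
cat > "$workdir/spl_to_ctoken.rs" <<'EOF'
use light_token::{find_spl_interface_pda, TransferFromSpl, SPL_TOKEN_PROGRAM_ID};
EOF
cat > "$workdir/Cargo.toml" <<'EOF'
[dependencies]
light-token = { path = "../token-sdk" }
light-token-interface = { path = "../../program-libs/token-interface" }
EOF

# 1. Swap the SDK-lib crate for the program-libs crate in the use statement.
sed -i.bak 's/^use light_token::/use light_token_interface::/' "$workdir/spl_to_ctoken.rs"

# 2. Delete the SDK-lib dependency line; the anchored pattern leaves the
#    light-token-interface line untouched.
sed -i.bak '/^light-token[[:space:]]*=/d' "$workdir/Cargo.toml"

cat "$workdir/spl_to_ctoken.rs"
cat "$workdir/Cargo.toml"
```

This only covers the find-and-replace portion; any symbol renames between the two crates' public APIs would still need manual adjustment.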

@ananas-block ananas-block merged commit 0c5471c into main Jan 20, 2026
30 of 31 checks passed
@ananas-block ananas-block deleted the jorrit/fix-client branch January 20, 2026 05:37