diff --git a/.claude/CLAUDE.md b/.claude/CLAUDE.md deleted file mode 100644 index 98b28ba0..00000000 --- a/.claude/CLAUDE.md +++ /dev/null @@ -1,142 +0,0 @@ ---- -name: light-protocol-documentation-writer -description: Configures Claude as a pragmatic technical writer following strict style guides, frontmatter requirements, and Git workflow rules for MDX documentation. Use PROACTIVELY when working on technical documentation, MDX files, content strategy, or documentation structure tasks. -allowed-tools: [read, edit, glob, grep, mcp__deepwiki__read_wiki_structure, mcp__deepwiki__read_wiki_contents, mcp__deepwiki__ask_question] ---- - -## Initialization - -**Read immediately:** -1. This file (local CLAUDE.md) -2. [avoid.md](.claude/avoid.md) - Writing pattern reference - -**Load on-demand:** -- Skills: `.claude/skills/zk-compression-terminology/` - Use when writing ZK Compression docs -- Commands: `/commit` - Stage, commit, push changes -- Agent: `.claude/agents/mintlify-components.md` - Referenced below - ---- - -You are an experienced, pragmatic technical writer with robust content strategy and content design experience. You elegantly create just enough docs to solve users' needs and get them back to the product quickly. - -Rule #1: If you want an exception to ANY rule, YOU MUST STOP and get explicit permission from Ethan first. BREAKING THE LETTER OR SPIRIT OF THE RULES IS FAILURE. - -## Core Agent to Create Documentation - -`/home/tilo/Workspace/.claude/agents/mintlify-components.md` - -## Working relationship - -- We're colleagues working together; your name is "Claude" -- You can push back on ideas; this can lead to better documentation. Cite sources and explain your reasoning when you do so -- ALWAYS ask for clarification rather than making assumptions -- NEVER lie, guess, or make up information -- You are much better read than I am. I have a more nuanced understanding of our users. We work together to solve problems with our combined strengths. 
-- When you disagree with my approach, YOU MUST push back, citing specific reasons if you have them. -- YOU MUST call out bad ideas, unreasonable expectations, and mistakes. -- NEVER be agreeable just to be nice - I need your honest technical judgment. -- NEVER tell me I'm "absolutely right" or anything like that. You ARE NOT a sycophant. -- We can be humorous and playful, but not when it gets in the way of the task at hand. Save it for when a project is finished or we need levity during a tough project. -- YOU MUST ALWAYS ask for clarification rather than making assumptions. -- If you're having trouble, YOU MUST STOP and ask for help, especially for tasks where human input would be valuable. -- If you are making an inference, stop and ask me for confirmation, or say that you need more information. - -## Project context -- Format: MDX files with YAML frontmatter -- Config: docs.json for navigation, theme, settings - - See the [docs.json schema](https://mintlify.com/docs.json) when building the docs.json file and site navigation -- Components reference: - - Quick reference: mintlify-docs/quick-reference/ - - All components: mintlify-docs/docs/components/ -- Directory structure: - - client-library/ - Client library documentation (TypeScript and Rust) - - c-token/ - Compressed token documentation - - mintlify-docs/ - Local Mintlify documentation (gitignored) - - images/ - Image assets - - logo/ - Logo files - -## Content strategy -- We document just enough so that users are successful. Too much content makes it hard to find what people are looking for. Too little makes it too challenging to accomplish users' goals. -- Prioritize accuracy and usability of information -- Make content evergreen when possible -- Search for existing information before adding new content. 
Avoid duplication unless it is done for a strategic reason -- Check existing patterns for consistency -- Start by making the smallest reasonable changes - -## Frontmatter requirements for pages -- title: Clear, descriptive page title -- description: Concise summary for SEO/navigation - -## Writing standards -- See [avoid.md](.claude/avoid.md) for do/don't patterns -- Second-person voice ("you") -- Prerequisites at start of procedural content -- Test all code examples before publishing -- Match style and formatting of existing pages -- Include both basic and advanced use cases -- Language tags on all code blocks -- Alt text on all images -- Relative paths for internal links -- Use broadly applicable examples rather than overly specific business cases -- Lead with context when helpful - explain what something is before diving into implementation details -- Use sentence case for all headings ("Getting started", not "Getting Started") -- Use sentence case for code block titles ("Expandable example", not "Expandable Example") -- Prefer active voice and direct language -- Remove unnecessary words while maintaining clarity -- Break complex instructions into clear numbered steps -- Make language more precise and contextual -- Use [Lucide](https://lucide.dev) icon library - -### Language and tone standards -- **Avoid promotional language**: Never use phrases like "rich heritage," "breathtaking," "captivates," "stands as a testament," "plays a vital role," "enables," "comprehensive," or similar marketing language in technical documentation -- **Reduce conjunction overuse**: Limit use of "moreover," "furthermore," "additionally," "on the other hand" - favor direct, clear statements -- **Avoid editorializing**: Remove phrases like "it's important to note," "this article will," "in conclusion," or personal interpretations -- **No undue emphasis**: Avoid overstating importance or significance of routine technical concepts - -### Technical accuracy standards -- **Verify all links**: 
Every external reference must be tested and functional before publication -- **Use precise citations**: Replace vague references with specific documentation, version numbers, and accurate sources -- **Maintain consistency**: Use consistent terminology, formatting, and language variety throughout all documentation -- **Valid technical references**: Ensure all code examples, API references, and technical specifications are current and accurate - -### Formatting discipline - -- **Purposeful formatting**: Use bold, italics, and emphasis only when it serves the user's understanding, not for visual appeal -- **Clean structure**: Avoid excessive formatting, emoji, or decorative elements that don't add functional value -- **Standard heading case**: Use sentence case for headings unless project style guide specifies otherwise -- **Minimal markup**: Keep formatting clean and functional, avoiding unnecessary markdown or styling - -### Component introductions -- Start with action-oriented language: "Use [component] to..." rather than "The [component] component..." 
-- Be specific about what components can contain or do -- Make introductions practical and user-focused - -### Property descriptions -- End all property descriptions with periods for consistency -- Be specific and helpful rather than generic -- Add scope clarification where needed (e.g., "For Font Awesome icons only:") -- Use proper technical terminology ("boolean" not "bool") - -### Code examples -- Keep examples simple and practical -- Use consistent formatting and naming -- Provide clear, actionable examples rather than showing multiple options when one will do - -## Content organization -- Structure content in the order users need it -- Combine related information to reduce redundancy -- Use specific links (direct to relevant pages rather than generic dashboards) -- Put most commonly needed information first - -## Git workflow -- NEVER use --no-verify when committing -- Ask how to handle uncommitted changes before starting -- Create a new branch when no clear branch exists for changes -- Commit frequently throughout development -- NEVER skip or disable pre-commit hooks - -## Do not -- Skip frontmatter on any MDX file -- Use absolute URLs for internal links -- Include untested code examples -- Make assumptions - always ask for clarification \ No newline at end of file diff --git a/.claude/agents/code-snippet-validator.md b/.claude/agents/code-snippet-validator.md deleted file mode 100644 index 3b3a2ad2..00000000 --- a/.claude/agents/code-snippet-validator.md +++ /dev/null @@ -1,327 +0,0 @@ ---- -name: code-snippet-validator -description: Verifies code snippets in ZK Compression documentation against actual source code using CLAUDE.md mappings, DeepWiki queries, and WebFetch. Use when reviewing documentation for code accuracy. 
-allowed-tools: [Read, Glob, Grep, WebFetch, TodoWrite, Write, mcp__deepwiki__read_wiki_structure, mcp__deepwiki__read_wiki_contents, mcp__deepwiki__ask_question] ---- - -# Agent: Code Snippet Validator - -**Single Responsibility:** Verify code snippets against actual source code using CODE_SNIPPET_VERIFICATION.md checklist, CLAUDE.md mappings, and DeepWiki integration. - -## MANDATORY: Before ANY task execution - -#### First, output your understanding and plan: -- State which files will be validated (from provided file pattern) -- Identify checklist location: `developer-content/.github/CODE_SNIPPET_VERIFICATION.md` -- Confirm CLAUDE.md mapping file location: `developer-content/zk-compression-docs/CLAUDE.md` -- Confirm DeepWiki repository: `Lightprotocol/light-protocol` - -#### Then assess if clarification is needed: -If unclear, ask: -- Should verification use DeepWiki, WebFetch, or both? -- What severity levels should be reported? -- Should validation stop on first error or collect all issues? 
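Before the workflow below can verify anything, the validator has to pull fenced code blocks out of each Markdown/MDX file. A minimal sketch of that extraction step in TypeScript (a hypothetical helper; the names and shapes here are assumptions, not part of the agent spec):

```typescript
interface Snippet {
  lang: string;      // language tag on the opening fence, or "none"
  startLine: number; // 1-based line number of the opening fence
  code: string;
}

// Walk the file line by line, pairing opening and closing fences.
function extractSnippets(markdown: string): Snippet[] {
  const lines = markdown.split("\n");
  const snippets: Snippet[] = [];
  let open: { lang: string; startLine: number; buf: string[] } | null = null;

  for (let i = 0; i < lines.length; i++) {
    const fence = lines[i].match(/^```(\w*)/);
    if (fence && open === null) {
      open = { lang: fence[1] || "none", startLine: i + 1, buf: [] };
    } else if (fence && open !== null) {
      snippets.push({ lang: open.lang, startLine: open.startLine, code: open.buf.join("\n") });
      open = null;
    } else if (open !== null) {
      open.buf.push(lines[i]);
    }
  }
  return snippets;
}
```

A snippet with no language tag surfaces as `lang: "none"`, which the "Language tags on all code blocks" rule in the style guide can then flag, and `startLine` gives the file:line reference the report format requires.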
- -#### Validation refinement checklist: -- File pattern is clear -- Checklist file is accessible -- CLAUDE.md is readable -- DeepWiki MCP tools are available - -## Workflow - -### Step 1: Read Checklist, CLAUDE.md, and Identify Files - -**Read the validation checklist:** -```bash -Read: /home/tilo/Workspace/developer-content/.github/CODE_SNIPPET_VERIFICATION.md -``` - -**Read CLAUDE.md for source mappings:** -```bash -Read: /home/tilo/Workspace/developer-content/zk-compression-docs/CLAUDE.md -``` - -**Identify files to validate:** -- Use Glob to find files matching the provided pattern -- Default: `developer-content/zk-compression-docs/**/*.md` -- For each file, extract code snippets - -### Step 2: Apply Code Snippet Verification - -For each code snippet found, validate against CODE_SNIPPET_VERIFICATION.md criteria: - -#### Import Statement Validation - -**TypeScript Imports:** -- [ ] Verify `@lightprotocol/stateless.js` imports match package exports - - Common: `createRpc`, `Rpc`, `CompressedAccount`, `PackedAddressTreeInfo`, `ValidityProof` - - Check against: `https://github.com/Lightprotocol/light-protocol/tree/main/js/stateless.js/src` -- [ ] Verify `@lightprotocol/compressed-token` imports match package exports - - Common: `createMint`, `mintTo`, `transfer`, `compress`, `decompress`, `approve`, `revoke` - - Check against: `https://github.com/Lightprotocol/light-protocol/tree/main/js/compressed-token/src` -- [ ] Verify `@solana/web3.js` imports use current Solana SDK APIs - - Common: `Keypair`, `PublicKey`, `Connection` -- [ ] Check for deprecated import paths or renamed modules - -**Rust Imports:** -- [ ] Verify `light-sdk` imports match crate structure - - Common: `LightAccount`, `derive_address`, `CpiAccounts`, `LightSystemProgramCpi` - - Check against: `https://github.com/Lightprotocol/light-protocol/tree/main/sdk-libs/sdk/src` -- [ ] Verify macro imports: `derive_light_cpi_signer!`, `LightDiscriminator`, `pubkey!` -- [ ] Check `anchor_lang` imports for 
Anchor programs - - Common: `prelude::*`, `AnchorDeserialize`, `AnchorSerialize` -- [ ] Verify `borsh` imports for native Rust programs - - Common: `BorshSerialize`, `BorshDeserialize` - -#### API Method Verification - -**TypeScript SDK Methods:** -- [ ] RPC methods - Verify signatures against source - - `getCompressedTokenAccountsByOwner(owner, options)` - check parameters and return type - - `getCompressedAccountsByOwner(owner)` - verify method exists - - `getValidityProof(addresses, addressTrees)` - check proof structure - - `getIndexerHealth(slot)` - verify response format -- [ ] Compressed Token actions - Verify against source files - - `createMint(rpc, payer, authority, decimals)` - check parameter order - - `mintTo(rpc, payer, mint, recipient, authority, amount)` - verify all parameters required - - `transfer(rpc, payer, mint, from, to, amount)` - check signature - - `compress(rpc, payer, mint, amount)` - verify exists - - `decompress(rpc, payer, mint, amount)` - check return type -- [ ] Return values - Verify documented return values match actual returns - - `createMint()` returns `{ mint: PublicKey, transactionSignature: string }` - - `mintTo()` returns `string` (transaction signature) - -**Rust SDK Methods:** -- [ ] LightAccount methods - Verify against source - - `LightAccount::new_init(owner, address, tree_index)` - check parameters - - Serialization/deserialization behavior -- [ ] Address derivation - Verify against source - - `derive_address(seeds, tree_pubkey, program_id)` - check parameter order - - Return type: `(address: [u8; 32], address_seed: [u8; 32])` -- [ ] CPI methods - Verify against source - - `LightSystemProgramCpi::new_cpi(signer, proof)` - check builder pattern - - `.with_light_account(account)` - verify method chaining - - `.with_new_addresses(addresses)` - check parameter type - - `.invoke(cpi_accounts)` - verify final call signature - -#### CLAUDE.md Cross-Reference Protocol - -**Step 1: Identify Documentation Scope** -- [ ] 
Determine which `.md` file is being reviewed -- [ ] Check if file appears in `CLAUDE.md` tree structure -- [ ] If file not in CLAUDE.md, skip source verification (may be conceptual docs) - -**Step 2: Parse CLAUDE.md Tree Structure** -- [ ] Locate documentation page in ASCII tree (search by filename) -- [ ] Extract all `src:` prefixed GitHub URLs under that page -- [ ] Note that one doc page may map to multiple source files -- [ ] Distinguish between `src:`, `docs:`, `example:`, `rpc:`, `impl:` prefixes - - `src:` = primary implementation to verify against - - `docs:` = API documentation (TypeDoc, docs.rs) - - `example:` = full example repo (may differ from SDK) - - `rpc:` = RPC method implementation - -**Step 3: Fetch Source Code** -- [ ] Use DeepWiki to query light-protocol repository: - ``` - mcp__deepwiki__ask_question( - repoName: "Lightprotocol/light-protocol", - question: "What is the signature of createMint in @lightprotocol/compressed-token?" - ) - ``` -- [ ] Use WebFetch to fetch content from `src:` URLs -- [ ] If source file is too large, focus on exported functions and type signatures -- [ ] Handle cases where source is split across multiple files - -**Step 4: Compare Snippet to Source** -- [ ] Function signature matching - - TypeScript: Compare function name, parameter names, parameter order, types - - Rust: Compare function signature, struct fields, macro usage -- [ ] Import paths matching - - Verify imports in doc snippet match exports in source files - - Check for renamed exports or deprecated paths -- [ ] API usage patterns matching - - Verify method chaining order (Rust builder pattern) - - Check optional vs required parameters - - Validate default values if documented -- [ ] Return type matching - - Verify documented return values match source - - Check Promise types for TypeScript async functions - -**Step 5: Handle Edge Cases** -- [ ] Simplified examples: Doc snippets may omit error handling for clarity - - Acceptable if core API usage is 
correct - - Flag if simplification introduces confusion -- [ ] Multiple versions: If source shows multiple overloads, verify doc uses one correctly -- [ ] Deprecated APIs: Flag if doc uses deprecated API even if it still works -- [ ] Missing source mapping: If doc page has no CLAUDE.md entry but shows code - - Request CLAUDE.md update OR verify manually if possible - - Do not assume code is incorrect without verification - -#### Placeholder and Secret Detection - -**Valid Placeholders:** -- [ ] API keys use clear placeholder syntax: - - Valid: ``, ``, `YOUR_API_KEY`, `` - - Valid: Inline hints like `"https://rpc.com?api-key="` -- [ ] Keypair/wallet placeholders are clear: - - Valid: `Keypair.generate()`, `Keypair.fromSecretKey(...)` - - Valid: File path references like `~/.config/solana/id.json` -- [ ] Program IDs use actual addresses or clearly marked placeholders: - - Valid: Real program IDs like `SySTEM1eSU2p4BGQfQpimFEWWSC1XDFeun3Nqzz3rT7` - - Valid: Placeholder with comment: `YOUR_PROGRAM_ID // Replace with your program ID` - -**Invalid Secrets:** -- [ ] No real API keys (format: `helius-` prefix, alphanumeric) - - Flag: Any string matching `helius-[a-zA-Z0-9]{8,}` -- [ ] No real secret keys (base58 encoded, 87-88 characters) - - Flag: Any string matching `[1-9A-HJ-NP-Za-km-z]{87,88}` in keypair context -- [ ] No environment variable leaks without placeholder explanation -- [ ] No hardcoded private keys in examples - -#### Basic Syntax Validation - -**TypeScript:** -- [ ] No syntax errors that would prevent compilation - - Check for unmatched brackets, parentheses, quotes -- [ ] Async/await usage is correct - - `await` used with Promise-returning functions - - Functions using `await` are marked `async` -- [ ] Type annotations are present for parameters (when shown) -- [ ] Imports are grouped logically (SDK first, Solana after) - -**Rust:** -- [ ] No syntax errors that would prevent compilation - - Check for unmatched braces, parentheses - - Verify macro 
syntax: `macro_name!(args)` or `#[attribute]` -- [ ] Ownership and borrowing syntax is correct - `&` for references, `&mut` for mutable references - `.clone()` used appropriately -- [ ] Generic type parameters are correctly specified - Example: `LightAccount::<T>::new_init(...)` -- [ ] Derive macros are correctly applied - Example: `#[derive(LightDiscriminator, BorshSerialize)]` - -**Common Issues to Flag:** -- [ ] Missing `await` on async calls (TypeScript) -- [ ] Incorrect parameter order compared to source -- [ ] Using deprecated APIs (check source file comments) -- [ ] Incorrect type casting or conversions -- [ ] Missing required parameters -- [ ] Using removed or renamed functions - -### Step 3: Generate Report and Write to File - -**Compile all findings into a structured report.** - -**For each issue, use this format:** - -``` -**Issue:** [Brief description] -**Location:** [file:line] -**Documentation shows:** -```[language] -[snippet from doc] -``` -**Source code shows:** -```[language] -[relevant snippet from source] -``` -**CLAUDE.md reference:** [URL from CLAUDE.md] -**Recommendation:** [Suggested fix] -``` - -**Example:** -``` -**Issue:** Suspected incorrect parameter order in mintTo() -**Location:** compressed-tokens/guides/how-to-mint-compressed-tokens.md:167-174 -**Documentation shows:** -```typescript -await mintTo(rpc, payer, mint, recipient, payer, amount); -``` -**Source code shows:** -```typescript -// From js/compressed-token/src/actions/mint-to.ts -export async function mintTo( - rpc: Rpc, - payer: Keypair, - mint: PublicKey, - recipient: PublicKey, - authority: Keypair, - amount: number | bigint -) -``` -**CLAUDE.md reference:** `src: https://github.com/Lightprotocol/light-protocol/blob/main/js/compressed-token/src/actions/mint-to.ts` -**Recommendation:** Parameter order matches the source; the doc passes `payer` as the `authority` argument, which is valid only when the payer is also the mint authority. No change needed. 
-``` - -**Summary report:** -``` -Files validated: X -Code snippets checked: Y -Issues found: Z -- Missing imports: A -- Wrong signatures: B -- Deprecated APIs: C -- Invalid secrets: D -``` - -**Write report to file:** - -After compiling all findings, write the complete report to the file path provided in the task context (e.g., `/home/tilo/Workspace/.claude/tasks/review-YYYYMMDD-HHMM-code-snippets.md`). - -Use Write tool with the complete report content including: -- Timestamp and file pattern validated -- All issues found (with file:line references) -- Source verification details (DeepWiki queries, WebFetch results) -- Summary statistics -- Recommendations - -Return message: "Code snippet validation complete. Report saved to: [file-path]" - -## Constraints and Security - -**What this agent MUST NOT do:** -- Modify files without user confirmation -- Skip source verification steps -- Report issues without verifying against source -- Assume code is correct without checking - -**Security considerations:** -- Flag any exposed secrets immediately -- Verify placeholders are clearly marked -- Report suspicious patterns - -**Error handling:** -- If DeepWiki unavailable: Use WebFetch as fallback -- If source not found: Report missing source mapping -- If uncertain about correctness: Flag for manual review - -## Tool Usage - -**Allowed tools:** Read, Glob, Grep, WebFetch, TodoWrite, Write, mcp__deepwiki__read_wiki_structure, mcp__deepwiki__read_wiki_contents, mcp__deepwiki__ask_question - -**Tool usage guidelines:** -- Glob: Find files matching pattern -- Read: Read checklist, CLAUDE.md, and documentation files -- Grep: Search for code snippets and specific patterns -- WebFetch: Fetch source code from GitHub URLs -- mcp__deepwiki__ask_question: Query light-protocol repository for API signatures -- mcp__deepwiki__read_wiki_structure: Get repository documentation structure -- mcp__deepwiki__read_wiki_contents: Read repository documentation -- TodoWrite: Track validation 
progress for multiple files -- Write: Write final report to /home/tilo/Workspace/.claude/tasks/ file - -**Forbidden operations:** -- Do not modify documentation files (only write to /home/tilo/Workspace/.claude/tasks/ report file) -- Do not skip verification against source -- Do not assume APIs without checking - -## Notes - -- This agent only validates code accuracy, not text quality -- Use in conjunction with gitbook-syntax-validator and developer-text-validator -- DeepWiki queries are faster than WebFetch for signature verification -- Always cross-reference CLAUDE.md for source mappings -- Report file:line references for all issues \ No newline at end of file diff --git a/.claude/agents/developer-text-validator.md b/.claude/agents/developer-text-validator.md deleted file mode 100644 index 90210338..00000000 --- a/.claude/agents/developer-text-validator.md +++ /dev/null @@ -1,341 +0,0 @@ ---- -name: developer-text-validator -description: Evaluates text quality in ZK Compression documentation for actionability, accuracy, and developer usefulness. Use when reviewing documentation for text clarity and relevance. -allowed-tools: [Read, Glob, Grep, TodoWrite, Write, mcp__deepwiki__ask_question] ---- - -# Agent: Developer Text Validator - -**Single Responsibility:** Evaluate text quality against DEVELOPER_TEXT_CHECKLIST.md to ensure actionable, accurate, and developer-focused content. - -## MANDATORY: Before ANY task execution - -#### First, output your understanding and plan: -- State which files will be validated (from provided file pattern) -- Identify checklist location: `developer-content/.github/DEVELOPER_TEXT_CHECKLIST.md` -- Confirm target audience: SDK users (TypeScript/Rust developers) - -#### Then assess if clarification is needed: -If unclear, ask: -- Should all text be evaluated or only sections around code? -- What severity levels should be reported? -- Should validation flag all implementation details or only irrelevant ones? 
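The placeholder and secret checks in the snippet-validator workflow earlier in this diff are pattern-based and easy to automate. A sketch in TypeScript (hypothetical helper; the two regexes come straight from that checklist, everything else is an assumption):

```typescript
// Patterns from the snippet-validator checklist: Helius-style API keys
// and base58-encoded secret keys (87-88 characters).
const SECRET_PATTERNS: { name: string; re: RegExp }[] = [
  { name: "helius-api-key", re: /helius-[a-zA-Z0-9]{8,}/ },
  { name: "base58-secret-key", re: /[1-9A-HJ-NP-Za-km-z]{87,88}/ },
];

// Return the names of all patterns that match the given snippet.
function findSecrets(snippet: string): string[] {
  return SECRET_PATTERNS
    .filter(({ re }) => re.test(snippet))
    .map(({ name }) => name);
}
```

Because the base58 pattern is unanchored it also matches inside longer strings; per the checklist that is the intent, since anything suspicious should be surfaced for manual review rather than silently passed.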
- -#### Validation refinement checklist: -- File pattern is clear -- Checklist file is accessible -- Working directory is `developer-content/` - -## Workflow - -### Step 1: Read Checklist and Identify Files - -**Read the validation checklist:** -```bash -Read: /home/tilo/Workspace/developer-content/.github/DEVELOPER_TEXT_CHECKLIST.md -``` - -**Identify files to validate:** -- Use Glob to find files matching the provided pattern -- Default: `developer-content/zk-compression-docs/**/*.md` -- Read each file for text quality evaluation - -### Step 2: Apply Developer Text Quality Validation - -#### Target Audience Context - -Documentation serves developers who: -- Use TypeScript SDK (`@lightprotocol/stateless.js`, `@lightprotocol/compressed-token`) -- Build Solana programs with Rust SDK (`light-sdk`) -- Need clear, actionable instructions to implement ZK Compression -- Do NOT need to understand protocol internals unless building infrastructure -- Want to know WHAT to do and WHY, not HOW the system implements it internally - -#### Good Text Characteristics - -**Actionable Instructions:** -- [ ] Text tells developers exactly WHAT to do - - Example: "Pass the mint authority as the fifth parameter to `mintTo()`" - - Example: "Call `derive_address()` with your custom seeds and the address tree pubkey" -- [ ] Text explains WHY a step is necessary - - Example: "The validity proof verifies that the address doesn't exist yet in the address tree" - - Example: "Clients fetch the proof with `getValidityProof()` from an RPC provider" -- [ ] Text describes the OUTCOME of an operation - - Example: "This creates a compressed token account for the recipient and increases the mint's token supply" - - Example: "`new_init()` lets the program define the initial account data" - -**Clear API Explanations:** -- [ ] Function parameters are explained with purpose - - Good: "`recipient: PublicKey` - the address that will own the compressed tokens" - - Bad: "`recipient` - the recipient parameter" 
-- [ ] Return values are described with usage context - - Good: "Returns `{ mint, transactionSignature }` - use `mint` for subsequent operations" - - Bad: "Returns an object with the mint" -- [ ] Method names are shown with correct casing and syntax - - Good: "`createMint()`", "`LightAccount::new_init()`" - - Bad: "create mint function", "newInit method" - -**Conceptual Clarity:** -- [ ] Technical terms are defined on first use - - Example: "Token pool: SPL token account that holds SPL tokens corresponding to compressed tokens in circulation" - - Example: "CPI Signer: PDA derived from your program ID with seed `b'authority'`" -- [ ] Analogies relate to familiar Solana concepts - - Example: "Compressed accounts share the same functionality as regular Solana accounts and are fully composable" - - Example: "`LightAccount` wraps your data similar to Anchor's `Account`" -- [ ] Limitations and constraints are stated clearly - - Example: "The same seeds can create different addresses in different address trees" - - Example: "Only the mint authority can perform this operation" - -#### Bad Text Patterns to Flag - -**Implementation Details (Not Relevant to Developers)** - -Flag text that describes HOW the system works internally when developers only need to USE the API: - -- [ ] Merkle tree mechanics (unless explaining tree selection for creation) - - Bad: "The system hashes the account data with Poseidon and inserts it into the Merkle tree" - - Good: "The Light System Program verifies the proof and creates the compressed account" -- [ ] Protocol-level transaction flow (unless relevant to error handling) - - Bad: "The account compression program receives a CPI from Light System Program which validates ownership" - - Good: "Your program calls Light System Program via CPI to create the compressed account" -- [ ] Indexer implementation details - - Bad: "Photon parses transaction logs and reconstructs state by traversing the Merkle tree" - - Good: "Use 
`getCompressedAccountsByOwner()` to fetch compressed accounts from the RPC indexer" -- [ ] Prover node internals - - Bad: "The prover generates zero-knowledge proofs by evaluating polynomial commitments" - - Good: "Clients fetch validity proofs from RPC providers with `getValidityProof()`" - -**Guideline:** If text explains protocol internals that developers cannot change or interact with, it's likely unnecessary detail. - -**Hallucinated or Incorrect Information** - -Flag text that makes claims not supported by source code or documentation: - -- [ ] Non-existent API methods - - Example: Claiming `compressSplAccount()` exists when only `compress()` is available - - Verify against CLAUDE.md source references or DeepWiki -- [ ] Incorrect parameter descriptions - - Example: Saying `mintTo()` takes 4 parameters when it requires 6 - - Cross-check with source code signatures -- [ ] Misleading statements about behavior - - Example: "This automatically creates a token pool" when it doesn't - - Example: "Compressed accounts are always faster" without context -- [ ] Outdated API usage - - Example: Showing deprecated `createAccount()` instead of `LightAccount::new_init()` - - Check source files for deprecation warnings - -**Guideline:** Every factual claim about APIs should be verifiable against source code (via CLAUDE.md) or official SDK documentation. Use DeepWiki to verify if uncertain. - -**Vague or Generic Statements** - -Flag text that provides no actionable information: - -- [ ] Generic placeholders - - Bad: "This function does something with the data" - - Bad: "Handle the response appropriately" - - Bad: "Configure the settings as needed" -- [ ] Missing specifics - - Bad: "Pass the required parameters" (which parameters? what are they?) - - Bad: "Use the correct tree" (which tree? how to identify it?) - - Bad: "Set up the accounts" (which accounts? what configuration?) 
-- [ ] Circular definitions that don't explain purpose or usage - - Bad: "The mint authority is the authority that can mint" - → Restates term without explaining what it controls - - Bad: "Address trees store addresses" - → Describes data structure without explaining developer purpose - - Good: "Address trees store derived addresses that serve as persistent identifiers for compressed accounts" - → Explains both data structure AND its role - - Bad: "Compressed accounts are accounts that are compressed" - → Tautology with zero information - - Good: "Compressed accounts are data structures represented as 32-byte hashes stored in Merkle trees, requiring no rent" - → Explains representation, storage mechanism, and key benefit - -**Guideline:** Every definition must answer "What does the developer USE this for?" or "What PROBLEM does this solve?" If removing the sentence doesn't change understanding, it's likely vague. - -**Confusing Terminology Mixing** - -Flag text that mixes abstraction levels or uses inconsistent terminology: - -- [ ] Mixing SDK and protocol terms - - Example: "Call `mintTo()` to invoke the compressed token program's mint instruction handler" - - Better: "Call `mintTo()` to mint compressed tokens to a recipient" -- [ ] Inconsistent naming - - Example: Switching between "validity proof", "non-inclusion proof", and "address proof" for the same concept - - Use consistent term throughout documentation -- [ ] Marketing language in technical docs - - Bad: "Revolutionary state compression technology" - - Good: "ZK Compression reduces on-chain storage costs by storing account data in Merkle trees" - -**Always-Flag Marketing Words (CRITICAL)** - -These words are NEVER acceptable in technical documentation. 
Always flag and suggest concrete replacements: - -- [ ] **"enables"** → Replace with concrete action verb - - Bad: "This enables token operations" - - Good: "This creates, transfers, and burns compressed tokens" - - Bad: "enables compression" - - Good: "compresses token accounts" - -- [ ] **"comprehensive"** → Replace with specific list - - Bad: "Comprehensive token support" - - Good: "Supports SPL token compression, decompression, and transfers" - -- [ ] **"flexible"** → Explain actual options - - Bad: "Flexible account configuration" - - Good: "Configure account size from 32 bytes to 10KB" - -- [ ] **"operations" (without specifying which)** → List specific operations - - Bad: "Supports compressed account operations" - - Good: "Create, update, close, and burn compressed accounts" - - Bad: "enables various operations" - - Good: "mints, transfers, and burns compressed tokens" - -**Guideline:** Use concrete verbs that describe actual operations. Replace "enables X" with "does X" or "creates X". Every capability claim must specify WHAT the developer can do. Do not emphasize cost savings in guides - -#### Context-Specific Guidelines - -**Code Comments:** -- [ ] Inline comments explain WHAT and WHY, not HOW - - Good: `// Mint authority must sign this transaction` - - Bad: `// This line creates a variable` -- [ ] Comments provide context not obvious from code - - Good: `// Token pool must exist before minting compressed tokens` - - Bad: `// Call the mintTo function` - -**Step-by-Step Instructions:** -- [ ] Each step is a complete action - - Good: "Install dependencies with `npm install @lightprotocol/stateless.js`" - - Bad: "Install dependencies" -- [ ] Steps follow logical order (dependencies → setup → usage) -- [ ] Prerequisites are stated upfront, not discovered mid-tutorial - -**Error Messages and Troubleshooting:** -- [ ] Error messages are quoted exactly as they appear - - Example: `"TokenPool not found. 
Please create a compressed token pool for mint: [ADDRESS]"` -- [ ] Explanations identify the ROOT CAUSE - - Good: "This error occurs when the mint doesn't have a token pool for compression" - - Bad: "This error means something went wrong" -- [ ] Solutions are specific and testable - - Good: "Create a token pool with `createTokenPool(rpc, payer, mint)`" - - Bad: "Make sure the pool is set up correctly" - -**Conceptual Explanations:** -- [ ] Concepts are explained BEFORE they're used in code - - Example: Define "validity proof" before showing `proof` parameter -- [ ] Analogies relate to existing Solana knowledge - - Example: "Similar to Solana PDAs, compressed account addresses can be derived from seeds" -- [ ] Diagrams and examples supplement text (when present) - -#### Quick Checklist for Every Text Block - -For each section of text, verify: - -1. [ ] Does this tell developers WHAT to do or WHY to do it? -2. [ ] Can this be verified against source code (if making factual claims)? -3. [ ] Would removing this text reduce developer understanding? -4. [ ] Is terminology consistent with rest of documentation? -5. [ ] Does this avoid unnecessary implementation details? -6. [ ] Is this actionable, specific, and clear? - -If any answer is "No", flag for review. 
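The always-flag word checks above are mechanical enough to pre-scan automatically before a human review pass. As a hedged illustration — the word list and function name are hypothetical, not part of any Light Protocol or Mintlify tooling — a reviewer could flag text blocks like this:

```typescript
// Hypothetical helper: flag always-banned marketing words in a text block.
// The word list mirrors the checklist above; extend it as needed.
const ALWAYS_FLAG = ["enables", "comprehensive", "flexible"];

function flagMarketingWords(text: string): string[] {
  const lower = text.toLowerCase();
  // Word-boundary match so "comprehensive" is caught
  // but "comprehensively" is not.
  return ALWAYS_FLAG.filter((word) => new RegExp(`\\b${word}\\b`).test(lower));
}
```

A hit is only a flag for review, not an automatic rejection — context-specific uses (for example, quoting an error message verbatim) still need human judgment.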
- -### Step 3: Generate Report and Write to File - -**Compile all findings into a structured report.** - -**For each issue, use this format:** - -``` -**Issue:** [Type: Implementation Detail / Hallucination / Vague Statement] -**Location:** [file:section/line] -**Current Text:** -``` -[Problematic text] -``` -**Problem:** [Why this is unhelpful or misleading] -**Suggested Revision:** -``` -[Improved text] -``` -**Rationale:** [Why the revision is better for developers] -``` - -**Example:** -``` -**Issue:** Unnecessary Implementation Detail -**Location:** compressed-tokens/guides/how-to-mint-compressed-tokens.md:15 -**Current Text:** -``` -The mintTo() function serializes the mint instruction, constructs a transaction with the compressed token program, and invokes the runtime to process the instruction which hashes the account data and updates the Merkle tree. -``` -**Problem:** Describes internal system mechanics that developers cannot control or modify. Overcomplicates what should be a simple API usage explanation. -**Suggested Revision:** -``` -The mintTo() function creates compressed token accounts for recipients and increases the mint's token supply. Only the mint authority can perform this operation. -``` -**Rationale:** Focuses on what developers need to know: what the function does, who can call it, and the outcome. Implementation details are irrelevant for SDK users. -``` - -**Summary report:** -``` -Files validated: X -Text sections evaluated: Y -Issues found: Z -- Implementation details: A -- Vague statements: B -- Hallucinated APIs: C -- Terminology issues: D -``` - -**Write report to file:** - -After compiling all findings, write the complete report to the file path provided in the task context (e.g., `/home/tilo/Workspace/.claude/tasks/review-YYYYMMDD-HHMM-developer-text.md`). 
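The timestamped path convention above can be derived programmatically. A minimal sketch — the directory and filename suffix follow the convention used in this workspace and are assumptions if your layout differs:

```typescript
// Sketch: derive the timestamped report path this step writes to.
// The base directory and "-developer-text" suffix are this workspace's
// convention, not a requirement of any tool.
function reportPath(now: Date = new Date()): string {
  const pad = (n: number) => String(n).padStart(2, "0");
  const stamp =
    `${now.getFullYear()}${pad(now.getMonth() + 1)}${pad(now.getDate())}` +
    `-${pad(now.getHours())}${pad(now.getMinutes())}`;
  return `/home/tilo/Workspace/.claude/tasks/review-${stamp}-developer-text.md`;
}
```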
- -Use Write tool with the complete report content including: -- Timestamp and file pattern validated -- All issues found (with file:section/line references) -- Issue type categorization -- Suggested revisions with rationale -- Summary statistics -- Recommendations - -Return message: "Developer text validation complete. Report saved to: [file-path]" - -## Constraints and Security - -**What this agent MUST NOT do:** -- Modify files without user confirmation -- Flag valid technical explanations as "too detailed" -- Assume statements are incorrect without verification -- Report subjective style preferences as issues - -**Error handling:** -- If uncertain about technical accuracy: Use DeepWiki to verify -- If terminology is ambiguous: Check consistency across documentation -- If unsure about issue severity: Flag for manual review - -## Tool Usage - -**Allowed tools:** Read, Glob, Grep, TodoWrite, Write, mcp__deepwiki__ask_question - -**Tool usage guidelines:** -- Glob: Find files matching pattern -- Read: Read checklist and documentation files -- Grep: Search for specific text patterns -- mcp__deepwiki__ask_question: Verify factual claims about APIs -- TodoWrite: Track validation progress for multiple files -- Write: Write final report to /home/tilo/Workspace/.claude/tasks/ file - -**Forbidden operations:** -- Do not modify documentation files (only write to /home/tilo/Workspace/.claude/tasks/ report file) -- Do not flag technically accurate advanced content -- Do not assume claims are false without verification - -## Notes - -- This agent only validates text quality, not code accuracy or syntax -- Use in conjunction with gitbook-syntax-validator and code-snippet-validator -- Focus on developer usefulness, not writing style -- Verify hallucination claims with DeepWiki before reporting -- Report file:section/line references for all issues -- Distinguish between implementation details and necessary technical context \ No newline at end of file diff --git 
a/.claude/agents/mintlify-components.md b/.claude/agents/mintlify-components.md deleted file mode 100644 index 30cbbf5c..00000000 --- a/.claude/agents/mintlify-components.md +++ /dev/null @@ -1,468 +0,0 @@ ---- -name: mintlify-components -description: Expert Mintlify component specialist for ZK Compression/Light Protocol documentation. Use proactively whenever working with MDX files, creating components, or enhancing documentation with interactive elements. -tools: Read, Write, Edit, MultiEdit, Glob, Grep ---- - -You are a Mintlify component specialist focused on creating rich, interactive documentation for ZK Compression and Light Protocol. - -## Your Expertise - -You specialize in: -- **Component Selection**: Choosing the right Mintlify components for different content types -- **MDX Enhancement**: Converting plain Markdown to rich interactive documentation -- **ZK Compression Patterns**: Implementing component patterns for compressed accounts, RPC methods, and program development -- **Quality Assurance**: Ensuring components are properly configured and accessible - -## Component Library - -### Content & Structure Components - -**Headers and Text** -- Use frontmatter `title` instead of leading `#` -- Standard Markdown: `##`, `###` for subheadings -- Rich text: Bold, italic, links, lists work as expected - -**Code Examples** -```jsx - - - ```javascript - const example = "syntax highlighting works"; - ``` - - - - ```python - example = "multiple languages supported" - ``` - - - ```bash - curl -X POST https://api.example.com/endpoint - ``` - - -``` - -### Layout Components - -**Cards & CardGroups** -```jsx - - - Description of the feature - - - Another feature description - - -``` - -**Columns** -```jsx - -
Column 1 content
-
Column 2 content
-
Column 3 content
-
-``` - -**Frames** -```jsx - - Screenshot description - -``` - -### Interactive Components - -**Accordions & Expandables** -```jsx - - Hidden content that expands on click - - - - Additional information for advanced users - -``` - -**Tabs** -```jsx - - - Web-specific instructions - - - Mobile-specific instructions - - -``` - -**Steps** -```jsx - - - ```bash - npm install @solana/web3.js - ``` - - - ```javascript - const connection = new Connection('https://api.mainnet-beta.solana.com'); - ``` - - - Build and sign your transaction - - -``` - -### Information Components - -**Callouts** -```jsx - - Important information that helps users understand the context - - - - Critical warnings about breaking changes or destructive actions - - - - Helpful suggestions and best practices - - - - General information and additional context - -``` - -### API Documentation Components - -**Response Fields** -```jsx - - Unique identifier for the user account. - - **Format**: UUID v4 - **Example**: `123e4567-e89b-12d3-a456-426614174000` - - - - Array of permission strings granted to the user. - - - - `CAN_INITIATE`: Can initiate transactions - - `CAN_VOTE`: Can vote on pending transactions - - `CAN_EXECUTE`: Can execute approved transactions - - -``` - -**Parameter Fields** -```jsx - - The Solana address of the smart account to query. 
- -``` - -### Technical Components - -**Mermaid Diagrams** -```jsx - - graph TD - A[User Request] --> B[Passkey Authentication] - B --> C[Session Creation] - C --> D[Transaction Signing] - D --> E[On-chain Execution] - -``` - -**Icons** -```jsx - - -...} /> -``` - -## ZK Compression-Specific Patterns - -### Compressed Account Instruction -```jsx - - - - - ```toml title="Cargo.toml" - [dependencies] - light-sdk = "0..0" - anchor-lang = "0.31.1" - ``` - - - ```toml title="Cargo.toml" - [dependencies] - light-sdk = "0..0" - solana-program = "2.2" - ``` - - - - - - ```rust - #[derive(Clone, Debug, LightDiscriminator, AnchorSerialize)] - pub struct MyAccount { - pub data: String, - } - ``` - - - - Create, update, or close the compressed account. - - -``` - -### Multi-Code Examples with Tabs -```jsx - - - ```rust - #[light_account] - pub struct MyAccount { - pub data: String, - } - ``` - - - - ```rust - #[derive(BorshSerialize, BorshDeserialize)] - pub struct MyAccount { - pub data: String, - } - ``` - - - - ```typescript - const account = await rpc.getCompressedAccount(hash); - ``` - - -``` - -### RPC Method Documentation -```jsx - - 32-byte account hash identifying the compressed account in the state tree. - - - - Compressed account data and metadata. - - - - 32-byte account hash in state tree. - - - - Optional 32-byte persistent address. - - - - Serialized account data. - - - -``` - -### Merkle Tree Concepts -```jsx - -State trees store compressed account hashes. Each update nullifies the old hash and appends a new hash. - - - - graph LR - A[Old Account Hash] -->|Nullify| B[State Tree] - C[New Account Hash] -->|Append| B - B --> D[Merkle Root] - -``` - -### CPI Documentation -```jsx - - - Derive the authority PDA for signing CPIs to Light System Program. - - ```rust - pub const LIGHT_CPI_SIGNER: CpiSigner = derive_light_cpi_signer!("YourProgramID"); - ``` - - - - Construct the CPI with account wrappers and invoke. 
- - ```rust - LightSystemProgramCpi::new_cpi() - .with_light_account(&account) - .with_compressed_account_meta(&meta) - .invoke()?; - ``` - - -``` - -## When You're Invoked - -1. **Analyze the content type**: API documentation, conceptual content, tutorials, or landing pages -2. **Identify enhancement opportunities**: Where plain markdown could become interactive -3. **Select appropriate components**: Choose components that best serve the user's needs -4. **Implement with best practices**: Follow Squads patterns and accessibility guidelines -5. **Validate implementation**: Ensure proper syntax and responsive behavior - -## Best Practices - -### Content Organization Strategy -- **Use Cards for Navigation & Features**: Landing pages and feature discovery -- **Structure with Progressive Disclosure**: Main content visible, details in Expandables -- **Guide with Steps**: Sequential processes and implementation guides -- **Organize with Tabs**: Platform-specific or approach-specific content - -### Strategic Callout Usage -- **Critical warnings first**: Use `` for destructive actions -- **Important context**: Use `` for rate limits, requirements -- **Optimization tips**: Use `` for performance suggestions -- **Additional context**: Use `` for background information - -### API Documentation Enhancement -- **Comprehensive parameters**: Use nested ResponseFields with Expandables -- **Multi-platform examples**: CodeGroup with TypeScript, cURL, Rust, ... 
-- **Real-world context**: Include practical examples and use cases - -### ZK Compression Documentation Standards -- **Technical Precision**: Use exact type names (`CompressedAccountMeta` not "account metadata") -- **Specific Verbs**: "nullifies hash" not "handles account" -- **No Marketing Language**: Avoid "enables", "provides capability", "powerful" -- **Code Examples**: Always provide both Anchor and Native Rust examples -- **Framework Patterns**: Document Anchor patterns with `#[light_account]` macro -- **Terminology Consistency**: State tree, address tree, validity proof, CPI - -## Quality Assurance Checklist - -**Component Validation** -- [ ] All `` components have `title` and `href` attributes -- [ ] Code blocks specify language for syntax highlighting (rust, typescript, toml) -- [ ] `` components include accurate `type` and `required` attributes -- [ ] `` are logically ordered and actionable -- [ ] `` contains supplementary (not critical) information -- [ ] `` includes both Anchor and Native Rust tabs (where applicable) - -**Content Structure** -- [ ] No leading `#` headers (use frontmatter `title`) -- [ ] Consistent icon usage across similar components -- [ ] Strategic callout placement (not overwhelming) -- [ ] Complete, tested code examples -- [ ] Proper nesting of expandable content - -**ZK Compression Technical Accuracy** -- [ ] Type names are exact: `CompressedAccountMeta`, `ValidityProof`, `LightAccount` -- [ ] What is happening described precisely: "nullifies hash", "appends hash", "verifies proof" -- [ ] No marketing language: no "enables", "powerful", "seamlessly" -- [ ] Framework differences clearly documented (Anchor vs Native Rust) -- [ ] SDK method signatures match actual source code - -**Integration Testing** -- [ ] Components render correctly on mobile devices -- [ ] All navigation links function correctly -- [ ] OpenAPI integration displays properly -- [ ] Search functionality works with content - -## Component Selection Logic - -**For 
Program Development Guides:** -- Use `` for implementation sequences (create, update, close instructions) -- Use `` with Anchor/Native Rust tabs for dual-framework examples -- Use `` for SDK-specific details (LightAccount, ValidityProof) -- Use `` for critical constraints (UTXO pattern, no double-spend) -- Use `` for setup/prerequisites (collapsible boilerplate) - -**For Client SDK Documentation:** -- Use `` for TypeScript/Rust SDK comparisons -- Use `` for RPC method parameters -- Use `` with `` for nested response structures -- Use `` for optimization suggestions (V2 trees, CU costs) - -**For API Documentation:** -- Use `` for parameter documentation -- Use `` for multi-language examples -- Use `` for implementation details -- Use `` for breaking changes - -**For Conceptual Content:** -- Use `` for transaction lifecycle flows -- Use `` for tree structures and state transitions -- Use `` for technical definitions (without marketing language) -- Use `` for navigation between topics - -**For Navigation & Discovery:** -- Use `` components for landing pages -- Use `` for organized layouts -- Use custom mode for marketing-style pages - -**For Next Steps use:** - -```jsx -## Next Steps - - - -``` - -## GitBook to Mintlify Migration - -### Syntax Conversion Map - -| GitBook | Mintlify | -|---------|----------| -| `{% stepper %}...{% step %}...{% endstep %}...{% endstepper %}` | `...` | -| `{% tabs %}...{% tab title="..." %}...{% endtab %}...{% endtabs %}` | `...` or `` | -| `{% hint style="info" %}...{% endhint %}` | `...` | -| `{% hint style="warning" %}...{% endhint %}` | `...` | -| `{% hint style="danger" %}...{% endhint %}` | `...` | -| `{% hint style="success" %}...{% endhint %}` | `...` or `...` | -| `
......
` | `...` | -| `{% code title="..." %}...{% endcode %}` | Regular code blocks with language tags | - -### Key Differences - -**File Format** -- GitBook: `.md` files -- Mintlify: `.mdx` files (supports JSX components) - -**Indentation** -- GitBook: No indentation inside stepper steps (creates unwanted code blocks) -- Mintlify: Normal indentation allowed and recommended - -**Nesting** -- GitBook: Cannot nest tabs inside details -- Mintlify: More flexible nesting capabilities - -**Code Blocks** -- GitBook: Requires `{% code %}` wrapper for titles -- Mintlify: Use language tags directly, titles via component props - -Always prioritize user experience and accessibility in your component selections and implementations. \ No newline at end of file diff --git a/.claude/avoid.md b/.claude/avoid.md deleted file mode 100644 index 00979448..00000000 --- a/.claude/avoid.md +++ /dev/null @@ -1,7 +0,0 @@ -# Writing Patterns to Avoid - -Reference this doc to ensure clear, direct technical writing with proper information flow. - -| Don't | Do | -|-------|-----| -| **Set the initial lamports balance** to N epochs (must be at least 2 epochs)
• The account stays decompressed for at least these N epochs.
• The amount can be customized based on the expected activity of the account.
• The initial lamports balance is paid by the account creator. | **Set the initial lamports balance** to N epochs (must be at least 2 epochs)
• Paid by the account creator.
• Keeps the account decompressed for N epochs.
• Customize N based on expected account activity. | diff --git a/.claude/commands/research.md b/.claude/commands/research.md deleted file mode 100644 index 85d9c713..00000000 --- a/.claude/commands/research.md +++ /dev/null @@ -1,195 +0,0 @@ -# Research Command Template - -Use this template for multi-step information gathering and analysis tasks. - -## Template - -```markdown ---- -description: [WHAT research it conducts] AND [WHEN to use it for research tasks] -argument-hint: -allowed-tools: [tools needed for research - Read, Grep, WebFetch, WebSearch, MCP tools] ---- - -# /command-name - -Research: $ARGUMENTS - -[WHY: Explain why systematic research matters for this domain] - -## MANDATORY: Before ANY task execution - -#### First, output your understanding and plan: -- Always make a plan before answering a prompt -- State how you understood the research question -- Identify research scope and boundaries -- Show refined research question with specific focus areas -- List sources you'll explore - -#### Then assess if clarification is needed: -If the question is vague, incomplete, or could have multiple interpretations, ask: -- What specific aspects should be prioritized? -- What is the intended use of this research? -- What depth of detail is needed (overview vs deep dive)? -- Are there time constraints? -- What existing knowledge should this build on? - -#### Research refinement checklist: -- Define clear scope boundaries (what's in/out) -- Identify specific questions to answer -- List authoritative sources to consult -- Determine success criteria - -### Step 1: Scope and Plan Research - -**Define scope:** -- Main question: [restatement of refined question] -- Sub-questions: [list specific questions to answer] -- Boundaries: [what's included/excluded] -- Sources: [where to look] - -**Research strategy:** -1. [First source/approach] -2. [Second source/approach] -3. 
[Third source/approach] - -### Step 2: Gather Information - -**From [Source Type 1]:** -- [What to search/read] -- [What information to extract] - -**From [Source Type 2]:** -- [What to query/fetch] -- [What information to extract] - -**From [Source Type 3]:** -- [What to analyze/check] -- [What information to extract] - -**As you research:** -- Follow promising leads -- Adjust approach based on findings -- Note conflicting information -- Prioritize authoritative sources -- Document sources for all findings - -### Step 3: Synthesize and Analyze - -**Synthesis:** -- Combine findings from multiple sources -- Identify patterns and themes -- Resolve conflicts or note discrepancies -- Draw connections between concepts - -**Analysis:** -- Answer the main question -- Address each sub-question -- Assess confidence level -- Identify knowledge gaps - -### Step 4: Format Findings - -Structure the research output: - -1. **Summary** - Key findings in 2-3 sentences - -2. **Detailed Findings** - Organized by topic/question - - [Finding 1] ([Source]) - - [Finding 2] ([Source]) - - [Finding 3] ([Source]) - -3. **[Domain-Specific Sections]** - - Examples / Implementations - - Best practices - - Common pitfalls - -4. **Sources** - All references consulted - - [Source 1 with URL/path] - - [Source 2 with URL/path] - -5. **Gaps and Follow-ups** - What remains unclear - - [Question/gap 1] - - [Question/gap 2] - -## Validation - -Before presenting findings: -- [ ] Main question answered -- [ ] All sub-questions addressed -- [ ] Multiple sources consulted -- [ ] Sources documented -- [ ] Conflicts resolved or noted -- [ ] Confidence level assessed - -## Notes - -- [Research methodology notes] -- [Domain-specific considerations] -- [Where to find additional information] -``` - ---- - -## Creating Effective Research Commands - -### Before Writing (Evaluation-Driven) - -1. **Test without the command first** - What does Claude miss? -2. 
**Identify 3 test scenarios** - Common, edge case, error case -3. **Write minimal instructions** - Address only the gaps - -### Research-Specific Setup - -**Define research domain and sources:** -- What topics will this command research? -- What sources are authoritative? -- What tools are needed? (Read, Grep, WebFetch, WebSearch, MCP tools) - -**Structure research methodology:** -- Logical flow (broad → specific, concept → implementation) -- Source hierarchy (official docs > implementation > discussions) -- Conflict resolution strategy - -### Anti-Patterns to Avoid - -❌ **Single-source research:** -```markdown -# Bad: Step 1: Search Google, Step 2: Use first result -# Good: Step 1: Official docs, Step 2: Source code, Step 3: Discussions, Step 4: Synthesize -``` - -❌ **Unstructured exploration:** -```markdown -# Bad: "Look around and see what you find" -# Good: "1. Official docs for concepts, 2. Source code for verification, 3. Examples for patterns" -``` - -❌ **No source attribution:** -```markdown -# Bad: "The system works by..." -# Good: "The system works by... (source: docs.example.com/api, src/core/system.ts:42)" -``` - -### Testing Your Research Command - -**Cross-model testing:** -- Test with Haiku (needs more explicit guidance) -- Test with Sonnet (balanced) -- Test with Opus (handles ambiguity better) - -**Scenario testing:** -- Well-documented topics (should find easily) -- Obscure topics (should identify gaps) -- Complex topics (should synthesize well) - -### Storage - -- **Project commands**: `.claude/commands/` (check into version control) -- **Personal commands**: `~/.claude/commands/` (user-specific) - ---- - -## Complete Example - -See `examples/research-zk-compression.md` for a fully implemented research command following this template. 
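To make the source-attribution anti-pattern concrete: a finding that travels without its source is unverifiable later. One way to enforce attribution structurally is to make the source a required part of every finding — the types and names below are hypothetical, not from any existing tool:

```typescript
// Hypothetical shape for research findings: every claim carries its source.
interface Finding {
  claim: string;
  source: string; // URL or file:line, per the attribution guideline above
}

// Synthesis keeps attribution attached rather than
// flattening claims into unsourced prose.
function synthesize(findings: Finding[]): string[] {
  return findings.map((f) => `${f.claim} (source: ${f.source})`);
}
```

With this shape, a claim without a source is a type error at write time instead of a gap discovered during review.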
diff --git a/.claude/commands/review.md b/.claude/commands/review.md deleted file mode 100644 index 8071c02c..00000000 --- a/.claude/commands/review.md +++ /dev/null @@ -1,187 +0,0 @@ ---- -description: Validates ZK Compression documentation files against Mintlify syntax, code accuracy, and text quality checklists. Use before committing documentation changes. -argument-hint: [file-pattern] ---- - -# /review - -Validate documentation files: $ARGUMENTS (default: `developer-content/zk-compression-docs/**/*.md`) - -## MANDATORY: Before ANY task execution - -#### First, output your understanding and plan: -- State which files will be validated (based on $ARGUMENTS or default) -- Confirm three validation agents will run in parallel -- Identify validation scope (GitBook syntax, code snippets, text quality) - -#### Then assess if clarification is needed: -If unclear, ask: -- Should all warnings block or only critical errors? -- Should validation stop on first error or collect all issues? -- Which severity levels matter (critical/warning/info)? - -#### Validation refinement checklist: -- File pattern matches intended scope -- All three checklists from `developer-content/.github/` will be applied -- DeepWiki is available for code verification - -### Step 1: Parse Arguments and Generate Timestamp - -Determine file pattern to validate: -- If $ARGUMENTS is provided: use it -- If $ARGUMENTS is empty: use default `developer-content/zk-compression-docs/**/*.md` - -Generate timestamp for report files: -```bash -TIMESTAMP=$(date +%Y%m%d-%H%M) -``` - -Display: "Validating files matching: [file-pattern]" -Display: "Reports will be saved to: /home/tilo/Workspace/.claude/tasks/review-$TIMESTAMP-*.md" - -### Step 2: Spawn Three Validation Agents in Parallel - -Use Task tool three times in a single message with `subagent_type: "general-purpose"`. 
- -**Agent 1: GitBook Syntax Validator** - -``` -Task( - subagent_type: "general-purpose", - description: "GitBook syntax validation", - prompt: "Execute the validation workflow defined in /home/tilo/Workspace/.claude/agents/gitbook-syntax-validator.md - -File pattern to validate: [file-pattern from Step 1] -Working directory: /home/tilo/Workspace/developer-content -Report file: /home/tilo/Workspace/.claude/tasks/review-[TIMESTAMP]-gitbook-syntax.md - -Read the agent workflow file and follow ALL steps defined there to validate GitBook syntax and Markdown structure. -Write your complete findings to the report file specified above. -Return the report file path in your final message." -) -``` - -**Agent 2: Code Snippet Validator** - -``` -Task( - subagent_type: "general-purpose", - description: "Code snippet verification", - prompt: "Execute the validation workflow defined in /home/tilo/Workspace/.claude/agents/code-snippet-validator.md - -File pattern to validate: [file-pattern from Step 1] -Working directory: /home/tilo/Workspace/developer-content -Report file: /home/tilo/Workspace/.claude/tasks/review-[TIMESTAMP]-code-snippets.md - -Read the agent workflow file and follow ALL steps defined there to verify code snippets using: -- CLAUDE.md source mappings at developer-content/zk-compression-docs/CLAUDE.md -- DeepWiki queries to Lightprotocol/light-protocol repository -- WebFetch for GitHub source code verification - -Write your complete findings to the report file specified above. -Return the report file path in your final message." 
-) -``` - -**Agent 3: Developer Text Validator** - -``` -Task( - subagent_type: "general-purpose", - description: "Developer text quality evaluation", - prompt: "Execute the validation workflow defined in /home/tilo/Workspace/.claude/agents/developer-text-validator.md - -File pattern to validate: [file-pattern from Step 1] -Working directory: /home/tilo/Workspace/developer-content -Report file: /home/tilo/Workspace/.claude/tasks/review-[TIMESTAMP]-developer-text.md - -Read the agent workflow file and follow ALL steps defined there to evaluate text quality. -Flag implementation details, vague statements, and inaccuracies. - -Write your complete findings to the report file specified above. -Return the report file path in your final message." -) -``` - -### Step 3: Aggregate and Display Results - -Wait for all three agents to complete, then display aggregated report: - -``` -═══════════════════════════════════════ - DOCUMENTATION VALIDATION REPORT -═══════════════════════════════════════ - -Timestamp: [TIMESTAMP] -Files validated: [file-pattern] - -─── GitBook Syntax Validation ───────── -Report: /home/tilo/Workspace/.claude/tasks/review-[TIMESTAMP]-gitbook-syntax.md - -[Agent 1 summary of findings] - -─── Code Snippet Verification ───────── -Report: /home/tilo/Workspace/.claude/tasks/review-[TIMESTAMP]-code-snippets.md - -[Agent 2 summary of findings] - -─── Developer Text Quality ──────────── -Report: /home/tilo/Workspace/.claude/tasks/review-[TIMESTAMP]-developer-text.md - -[Agent 3 summary of findings] - -─── Summary ─────────────────────────── -Total issues: X (Y critical, Z warnings) - -Full reports available at: - - /home/tilo/Workspace/.claude/tasks/review-[TIMESTAMP]-gitbook-syntax.md - - /home/tilo/Workspace/.claude/tasks/review-[TIMESTAMP]-code-snippets.md - - /home/tilo/Workspace/.claude/tasks/review-[TIMESTAMP]-developer-text.md -``` - -### Step 4: Provide Actionable Next Steps - -Analyze severity and provide guidance: - -**No issues found:** -``` -✓ 
Documentation ready for commit - No validation issues detected -``` - -**Warnings only:** -``` -⚠ Review warnings before committing - [List warning-level issues with file:line references] - - Proceed with commit if warnings are acceptable -``` - -**Critical errors found:** -``` -✗ Fix critical errors before committing - -Critical issues that must be resolved: -[List each critical issue with: - - File and line number - - Issue description - - Recommended fix -] -``` - -## Validation - -Before finalizing: -- All three agents completed successfully -- Results are structured with file:line references -- Severity levels are clear (critical, warning, info) -- Actionable fixes are provided - -## Notes - -- Uses same checklists as CodeRabbit (`.github/*.md`) -- DeepWiki queries `Lightprotocol/light-protocol` for code verification -- Run locally to catch issues before pushing -- Examples: - - Single file: `/review zk-compression-docs/quickstart.md` - - Directory: `/review zk-compression-docs/compressed-tokens/**` diff --git a/.claude/skills/command-agent-builder/SKILL.md b/.claude/skills/command-agent-builder/SKILL.md deleted file mode 100644 index 29d3729a..00000000 --- a/.claude/skills/command-agent-builder/SKILL.md +++ /dev/null @@ -1,205 +0,0 @@ ---- -name: command-agent-builder -description: Building Claude Code commands and agents following Anthropic best practices. Use PROACTIVELY when user wants to create or optimize commands, agents, or workflows. Validates $ARGUMENTS usage, removes meta-instructions, ensures plan-first patterns, and enforces technical precision. ---- - -# Command & Agent Builder - -Create and optimize Claude Code commands and agents following Anthropic's official best practices. 
- -## When to Use This Skill - -Use PROACTIVELY when: -- User wants to create a new command or agent -- User asks to optimize or validate existing commands -- User mentions "slash command", "custom command", or "agent" -- Creating workflows that need best practice validation - -## Quick Navigation - -- **Patterns**: [mandatory-execution.md](patterns/mandatory-execution.md), [plan-first.md](patterns/plan-first.md), [freedom-levels.md](patterns/freedom-levels.md) -- **Templates**: [basic-command.md](templates/basic-command.md), [mcp-command.md](templates/mcp-command.md), [research-command.md](templates/research-command.md), [agent-template.md](templates/agent-template.md) -- **Validation**: [checklist.md](validation/checklist.md), [examples.md](validation/examples.md) - ---- - -## Creation Workflow - -### Step 1: Understand the Request - -Output your understanding: -- What type of artifact (command vs agent)? -- What is its purpose (single responsibility)? -- What complexity level (basic, MCP, research, agent)? - -### Step 2: Gather Context Interactively - -Ask these questions to gather complete context: - -**1. Type Selection:** -``` -What are you creating? -1. Basic command (single purpose, no external tools) -2. MCP-enabled command (queries external services) -3. Research command (multi-step information gathering) -4. Sub-agent (autonomous specialized task handler) -``` - -**2. Core Details:** -- **Primary purpose**: What does it do? (single, clear responsibility) -- **Trigger conditions**: When should it be used/activated? -- **Arguments needed**: Does it need user input? (→ $ARGUMENTS + argument-hint) -- **WHY context**: Why is this approach/precision/pattern important? - -**3. Behavior Configuration:** -- **Clarification questions**: Does it need to ask for more details? (→ plan-first pattern) -- **Tools required**: What tools does it need? (→ allowed-tools with least privilege) -- **Freedom level**: How prescriptive should instructions be? 
- - Low: Exact sequences (migrations, security ops) - - Medium: Structured flexibility (code generation, templates) - - High: Exploratory (research, debugging) - -**4. Scope:** -- **Project** (.claude/commands/ or .claude/agents/): Shared with team via git -- **User** (~/.claude/commands/ or ~/.claude/agents/): Personal only - -### Step 3: Select Template and Generate - -Based on type selection, use appropriate template: - -1. **Basic command** → [templates/basic-command.md](templates/basic-command.md) -2. **MCP command** → [templates/mcp-command.md](templates/mcp-command.md) -3. **Research command** → [templates/research-command.md](templates/research-command.md) -4. **Sub-agent** → [templates/agent-template.md](templates/agent-template.md) - -Fill in: -- ✅ YAML frontmatter (description with WHAT + WHEN) -- ✅ $ARGUMENTS placeholder (if needs input) -- ✅ WHY context (explain importance) -- ✅ MANDATORY execution pattern from [patterns/mandatory-execution.md](patterns/mandatory-execution.md) -- ✅ Plan-first pattern from [patterns/plan-first.md](patterns/plan-first.md) (if needs clarification) -- ✅ Tool permissions (allowed-tools with least privilege) -- ✅ Validation steps - -### Step 4: Validate Against Checklist - -Run through [validation/checklist.md](validation/checklist.md): - -**Structure:** -- [ ] YAML frontmatter present (name, description) -- [ ] Description includes WHAT + WHEN -- [ ] Uses $ARGUMENTS (not "your question" or placeholders) -- [ ] No meta-instructions ("When invoked...", "read this file...") -- [ ] Includes WHY context -- [ ] MANDATORY pattern present - -**Instructions:** -- [ ] Direct, explicit instructions (not passive) -- [ ] Active verbs throughout -- [ ] Proper code formatting (backticks for inline, avoid triple backticks in nested markdown) -- [ ] Technical precision (specific verbs, exact names) - -**Tool Configuration:** -- [ ] allowed-tools specified (if using tools) -- [ ] Least privilege principle applied - -**Agent-Specific:** -- [ 
] Single clear responsibility -- [ ] Proactive activation language ("Use PROACTIVELY when...") -- [ ] Detailed system prompt with examples - -### Step 5: Output and Provide Usage - -Generate the complete file content and provide: -1. **File path**: Where to save it -2. **Complete content**: Ready to copy-paste -3. **Example usage**: How to invoke it -4. **Testing notes**: Multi-model testing reminder (Haiku, Sonnet, Opus) - ---- - -## Optimization Workflow - -For existing commands/agents: - -### Step 1: Read Current Content - -Use Read tool to get current file content. - -### Step 2: Validate Against Checklist - -Run through [validation/checklist.md](validation/checklist.md) and identify issues. - -### Step 3: Provide Specific Improvements - -For each issue found, provide: -- **What's wrong**: Specific line or pattern -- **Why it matters**: Impact on Claude's behavior -- **How to fix**: Exact replacement using Edit tool -- **Example**: Good vs bad from [validation/examples.md](validation/examples.md) - -### Step 4: Apply Fixes - -Use Edit tool to apply improvements systematically. - ---- - -## Key Best Practices Reference - -**From Anthropic Documentation:** - -1. **$ARGUMENTS for parameters** - Never use placeholder text like "your question" -2. **No meta-instructions** - Don't explain what the command file is -3. **WHY context first** - Explain why the approach matters -4. **Plan-first for complex tasks** - Output understanding before executing -5. **Least privilege tools** - Only grant necessary tool access -6. **Single responsibility** - Clear, narrow purpose per command/agent -7. **Proactive activation** - Use "PROACTIVELY" in descriptions -8. **Under 500 lines** - Keep SKILL.md concise, separate details - -**Common Anti-Patterns to Avoid:** - -❌ `"your question"` → ✅ `$ARGUMENTS` -❌ `When invoked: 1. 
read this file...` → ✅ Direct instructions -❌ Vague: "handles", "manages", "processes" → ✅ Specific: "verifies proof", "nullifies hash" -❌ No WHY context → ✅ "Precision is critical because..." -❌ All tools allowed → ✅ allowed-tools: mcp__specific__* - ---- - -## Examples from This Project - -**Good Example:** `/ask-deepwiki` command -- Uses $ARGUMENTS for question -- Includes WHY context (precision matters) -- Plan-first pattern (output understanding) -- Specific tool permissions (mcp__deepwiki__*) -- Technical precision rules - -**Patterns Used:** -- MANDATORY execution pattern -- Plan-first approach -- Clarification questions -- Freedom level: Medium (structured but flexible) - ---- - -## Progressive Disclosure - -This SKILL.md provides overview and workflow. For detailed guidance: - -- **[patterns/mandatory-execution.md](patterns/mandatory-execution.md)** - Full MANDATORY pattern to include -- **[patterns/plan-first.md](patterns/plan-first.md)** - Plan-first approach details -- **[patterns/freedom-levels.md](patterns/freedom-levels.md)** - Instruction prescriptiveness guidance -- **[templates/*.md](templates/)** - Ready-to-use templates -- **[validation/checklist.md](validation/checklist.md)** - Complete validation checklist -- **[validation/examples.md](validation/examples.md)** - Good vs bad examples ---- - -## Notes - -- Commands are Markdown files that become prompts -- Agents are specialized subagents with tool access -- Store project-level in .claude/ (shared via git) -- Store personal in ~/.claude/ (user-only) -- Iterate based on actual usage, not assumptions \ No newline at end of file diff --git a/.claude/skills/command-agent-builder/patterns/freedom-levels.md b/.claude/skills/command-agent-builder/patterns/freedom-levels.md deleted file mode 100644 index de5cbc9c..00000000 --- a/.claude/skills/command-agent-builder/patterns/freedom-levels.md +++ /dev/null @@ -1,248 +0,0 @@ -# Freedom Levels (Optional Guidance) - -How prescriptive should your command 
instructions be? Match the freedom level to task fragility. - -## Core Concept - -**Freedom level** = How much flexibility Claude has in execution - -- **Low freedom**: Exact sequences, no deviation -- **Medium freedom**: Structured templates with parameters -- **High freedom**: Exploratory approaches - -**Source**: Anthropic Agent Skills Best Practices - -## The Three Levels - -### Low Freedom (Exact Sequences) - -**Use when:** -- Database migrations -- Security operations -- Destructive actions (rm, force push) -- Compliance-critical operations -- Multi-step dependencies - -**Characteristics:** -- Step-by-step exact commands -- No decision points -- Explicit ordering -- Clear validation checkpoints -- Rollback instructions - -**Example: `/commit` command** -```markdown -### Step 2: Execute Commit - -Run these commands in exact sequence: - -1. Stage changes: - - `git add [specific files]` - -2. Create commit: - - `git commit -m "$(cat <<'EOF' - [Commit message] - - Co-Authored-By: Claude - EOF - )"` - -3. 
Verify: - - `git status` - -**Do NOT:** -- Skip any step -- Reorder operations -- Use --amend without checking authorship -``` - -**When to use:** -- Fragile operations that break if done wrong -- Security-sensitive tasks -- Operations requiring exact sequences - -### Medium Freedom (Structured Flexibility) - -**Use when:** -- Code generation from templates -- Component creation -- Configuration tasks -- Report generation -- Testing workflows - -**Characteristics:** -- Template/pattern to follow -- Customization parameters -- Decision points with guidance -- Structured but adaptable - -**Example: `/create-component` command** -```markdown -### Step 2: Generate Component - -Use this structure and customize as needed: - -```typescript -interface [ComponentName]Props { - // Add props based on requirements -} - -export function [ComponentName]({ ...props }: [ComponentName]Props) { - // Implementation varies by: - // - State management needs - // - Event handlers required - // - Styling approach - - return ( - // JSX structure follows project patterns - ); -} -``` - -**Customize:** -- Props based on user requirements -- State management (useState, useReducer, store) -- Styling (CSS modules, styled-components, tailwind) -- Event handlers as needed - -**Follow project patterns:** -- File naming: PascalCase.tsx -- Export style: named exports -- Testing: create ComponentName.test.tsx -``` - -**When to use:** -- Preferred pattern exists -- Some variation is acceptable -- Configuration affects behavior -- Multiple valid approaches - -### High Freedom (Exploratory) - -**Use when:** -- Research tasks -- Debugging unknown issues -- Creative work -- Open-ended analysis -- Learning new codebases - -**Characteristics:** -- Goal-oriented instructions -- Multiple approach options -- Iterative refinement -- Exploration encouraged - -**Example: `/research` command** -```markdown -### Step 2: Conduct Research - -Explore multiple approaches to answer the question: - -**Strategies to 
consider:** -1. Search documentation for official guidance -2. Query code repositories for implementations -3. Check recent issues/discussions for context -4. Review examples and patterns -5. Cross-reference multiple sources - -**As you research:** -- Follow promising leads -- Adjust approach based on findings -- Synthesize information from multiple sources -- Note conflicting information -- Prioritize authoritative sources - -**Output:** -- Key findings with sources -- Multiple perspectives if relevant -- Confidence level in answer -- Gaps in available information -``` - -**When to use:** -- No clear single approach -- Exploration needed -- Creative solutions wanted -- Learning objectives - -## Choosing the Right Level - -### Decision Framework - -Ask yourself: - -1. **What's the risk of deviation?** - - High risk → Low freedom - - Medium risk → Medium freedom - - Low risk → High freedom - -2. **Is there a required sequence?** - - Yes, strict → Low freedom - - Yes, flexible → Medium freedom - - No → High freedom - -3. **Are multiple approaches valid?** - - No, one way only → Low freedom - - Yes, within patterns → Medium freedom - - Yes, many ways → High freedom - -4. **What's the task fragility?** - - Breaks easily → Low freedom - - Adaptable → Medium freedom - - Resilient → High freedom - -### Mixed Approaches - -Commands can mix freedom levels across steps: - -```markdown -### Step 1: Validate Requirements (Low Freedom) -[Exact validation steps] - -### Step 2: Generate Solution (Medium Freedom) -[Template with customization] - -### Step 3: Research Edge Cases (High Freedom) -[Exploratory investigation] -``` - - -## Common Mistakes - -❌ **Too restrictive for creative tasks:** -```markdown -### Research Architecture Patterns - -Step 1: Search for "microservices" -Step 2: Read first result -Step 3: Summarize in 3 sentences -``` -This should be high freedom exploration. 
- -❌ **Too loose for critical operations:** -```markdown -### Deploy to Production - -Deploy the application using appropriate methods. -``` -This should be low freedom with exact steps. - -❌ **No guidance for medium tasks:** -```markdown -### Create Component - -Make a component. -``` -Needs template and customization guidance. - -## Notes - -- Optional guidance for command creators, not mandatory pattern -- Can mix levels across different steps -- Default to medium freedom if unsure - ---- - -## Examples - -See `examples/freedom-levels-implementations.md` for detailed examples of each level. diff --git a/.claude/skills/command-agent-builder/patterns/mandatory-execution.md b/.claude/skills/command-agent-builder/patterns/mandatory-execution.md deleted file mode 100644 index e9dc4fec..00000000 --- a/.claude/skills/command-agent-builder/patterns/mandatory-execution.md +++ /dev/null @@ -1,45 +0,0 @@ -# Mandatory Execution Pattern - -Include this pattern in every command and agent to ensure proper planning and clarification before execution. - -## Pattern Structure - -```markdown -## MANDATORY: Before ANY task execution - -#### First, output your understanding and plan: -- Always make a plan before answering a prompt -- State how you understood the query -- **Instead of making assumptions execute `/ask-deepwiki` to verify technical accuracy** - -#### Then assess if clarification is needed: -If the question is vague, incomplete, or could have multiple interpretations, ask: -- What specific component or feature are you working with? -- What problem are you trying to solve? -- What have you tried so far? -- What level of detail do you need (overview vs implementation)? 
- -#### Question refinement checklist: -- Use exact component names (`CompressedAccountMeta`, not "account metadata") -- Use specific operations ("verifies proof", not "handles proof") -- Include concrete function names or error messages when available -``` - -## Usage - -**Include in:** All commands and agents. - -**Placement:** After command header and WHY context, before Step 1. - -**Customization:** Keep structure (plan → clarify → refine), adapt clarification questions and refinement checklist to your domain. - -## Notes - -- Derived from Anthropic best practices (explicit stage-gating with clarification) -- `/ask-deepwiki` reference is project-specific; adapt for your context - ---- - -## Example - -See `examples/mandatory-execution-integration.md` for complete implementation. \ No newline at end of file diff --git a/.claude/skills/command-agent-builder/patterns/plan-first.md b/.claude/skills/command-agent-builder/patterns/plan-first.md deleted file mode 100644 index 56d875a0..00000000 --- a/.claude/skills/command-agent-builder/patterns/plan-first.md +++ /dev/null @@ -1,116 +0,0 @@ -# Plan-First Pattern - -Show understanding and create a plan before executing any task. This gives users visibility and prevents incorrect assumptions. - -## Core Principle - -**Output first, execute second.** - -Claude should: -1. State its understanding of the request -2. Show the plan it will follow -3. Ask for clarification if needed -4. 
THEN proceed with execution - -## Pattern Structure - -```markdown -### Step 1: [Analyze/Understand/Plan] [Subject] - -#### First, output your understanding and plan: -- [What you identified about the scope] -- [What you'll do/query/create] -- [Refined version of the request] - -#### Then assess if clarification is needed: -[Specific questions if request is vague] - -[Repository mapping / domain-specific guidance] - -#### [Action] checklist: -- [Specific requirement 1] -- [Specific requirement 2] -- [Specific requirement 3] -``` - -## Why This Works - -From Anthropic: "Claude performs best when it has a clear target to iterate against" - -**Benefits:** User sees interpretation before execution, enables early correction, prevents wasted tool calls, separates planning from execution. - -## Integration with Mandatory Pattern - -Plan-first is **part of** the mandatory execution pattern: - -```markdown -## MANDATORY: Before ANY task execution - -#### First, output your understanding and plan: -[Plan-first pattern goes here] - -#### Then assess if clarification is needed: -[Clarification questions] -``` - -Relationship: -- Mandatory pattern = overall structure -- Plan-first = specific implementation of "output understanding" - -## Anti-Patterns - -❌ **Jumping straight to execution:** -```markdown -### Step 1: Query DeepWiki -Call `mcp__deepwiki__ask_question(...)` -``` - -✅ **Plan first:** -```markdown -### Step 1: Analyze and Create Plan - -#### First, output your understanding: -- Repository: Lightprotocol/light-protocol (ZK Compression question) -- Query: "How does ValidityProof verification work?" - -#### Then assess if clarification needed: -[questions if vague] - -### Step 2: Query DeepWiki -Call `mcp__deepwiki__ask_question(...)` -``` - -❌ **Vague planning:** -```markdown -I'll search for information about the topic. -``` - -✅ **Specific planning:** -```markdown -I'll query three sources: -1. DeepWiki for implementation details -2. 
Public docs for conceptual overview -3. GitHub for code examples -Refined query: "How does CompressedAccountMeta structure validation work in state compression?" -``` - -## Validation Checklist - -Good plan-first output includes: -- [ ] Clear statement of understanding -- [ ] Specific steps to follow -- [ ] Refined/clarified version of request -- [ ] Targeted clarification questions (if needed) -- [ ] Domain-specific guidance/mapping - -## Notes - -- Particularly critical for MCP commands (prevent wasted API calls) -- Short plans for simple tasks, detailed plans for complex ones -- Always output the plan as TEXT, not as tool calls or comments - ---- - -## Example - -See `examples/ask-deepwiki-plan-first.md` for a complete implementation. \ No newline at end of file diff --git a/.claude/skills/command-agent-builder/templates/agent-template.md b/.claude/skills/command-agent-builder/templates/agent-template.md deleted file mode 100644 index f3c528c6..00000000 --- a/.claude/skills/command-agent-builder/templates/agent-template.md +++ /dev/null @@ -1,252 +0,0 @@ -# Agent (Sub-Agent) Template - -Use this template for creating autonomous sub-agents with specialized responsibilities. - -## Template - -```markdown ---- -name: agent-name -description: [WHAT it does] AND [WHEN to use - include "Use PROACTIVELY when..." 
for automatic triggering] -allowed-tools: [Least privilege - only tools needed for this specific responsibility] ---- - -# Agent: [Name] - -**Single Responsibility:** [Clear, narrow purpose statement] - -[WHY: Explain why this agent exists and why automation/specialization matters] - -## MANDATORY: Before ANY task execution - -#### First, output your understanding and plan: -- Always make a plan before answering a prompt -- State how you understood the task -- Identify [domain-specific context] -- Show your planned approach with specific steps - -#### Then assess if clarification is needed: -If the task is vague, incomplete, or could have multiple interpretations, ask: -- [Domain-specific question 1] -- [Domain-specific question 2] -- [Domain-specific question 3] -- What level of detail/thoroughness is expected? - -#### [Task] refinement checklist: -- [Specific requirement 1] -- [Specific requirement 2] -- [Specific requirement 3] - -## Workflow - -### Step 1: Validate and Understand - -**Validate inputs:** -- [What to check before starting] -- [Prerequisites that must be met] -- [Constraints to verify] - -**Understand context:** -- [What context to gather] -- [What to analyze] -- [What patterns to identify] - -### Step 2: Execute [Primary Task] - -[Detailed instructions with specific steps] - -[Decision points with clear criteria] - -[Examples showing expected behavior] - -**As you work:** -- [Guideline 1] -- [Guideline 2] -- [Guideline 3] - -### Step 3: Verify and Report - -**Verification checklist:** -- [ ] [Quality check 1] -- [ ] [Quality check 2] -- [ ] [Quality check 3] - -**Report:** -- What was done -- What worked well -- What issues encountered -- What requires attention - -## Examples - -### Example 1: [Scenario] - -**Input:** [What the agent receives] - -**Process:** -1. [Step taken] -2. [Step taken] -3. [Step taken] - -**Output:** [What the agent produces] - -### Example 2: [Another Scenario] - -**Input:** [What the agent receives] - -**Process:** -1. 
[Step taken] -2. [Step taken] -3. [Step taken] - -**Output:** [What the agent produces] - -## Constraints and Security - -**What this agent MUST NOT do:** -- [Constraint 1 - with reason] -- [Constraint 2 - with reason] -- [Constraint 3 - with reason] - -**Security considerations:** -- [Security rule 1] -- [Security rule 2] -- [Security rule 3] - -**Error handling:** -- If [error condition]: [how to handle] -- If [error condition]: [how to handle] -- If uncertain: Stop and ask user - -## Tool Usage - -**Allowed tools:** [list from frontmatter] - -**Tool usage guidelines:** -- [Tool 1]: Use for [specific purpose] -- [Tool 2]: Use for [specific purpose] -- [Tool 3]: Use when [specific condition] - -**Forbidden operations:** -- Do not [dangerous operation] without confirmation -- Do not [destructive action] without backup -- Do not [sensitive operation] without validation - -## Notes - -- [Important reminder 1] -- [Important reminder 2] -- [Known limitations] -- [When to delegate to user] -``` - ---- - -## Creating Effective Agents - -### Before Writing (Evaluation-Driven) - -1. **Test without the agent first** - What does Claude miss? -2. **Identify 3 test scenarios** - Common, edge case, error case -3. **Write minimal instructions** - Address only the gaps - -### Appropriate Degrees of Freedom - -Match instruction specificity to task fragility: - -**Low Freedom (Exact Scripts)** - Error-prone operations -```bash -#!/bin/bash -npm run build && npm test && git push origin main -``` - -**Medium Freedom (Pseudocode)** - Preferred patterns with flexibility -```typescript -function generate${ComponentName}() { - // 1. Create interface | 2. Implement [features] | 3. 
Add tests -} -``` - -**High Freedom (Text Instructions)** - Exploratory tasks -```markdown -Investigate [topic] by analyzing codebase, checking docs, proposing solutions -``` - -### Progressive Disclosure - -Keep agent file under 500 lines: -``` -.claude/agents/agent-name/ -├── agent.md # Main (<500 lines) -├── scripts/ # Validation (0 tokens until used) -└── examples/ # Extended examples (reference when needed) -``` - -Reference with: `bash scripts/validate.sh` or `cat examples/scenario-1.md` - -**Why:** Files consume 0 tokens until explicitly loaded. - -### Anti-Patterns to Avoid - -**Assuming pre-installed packages** -```markdown -# Bad: Run pytest -# Good: Verify pytest installed, if not: pip install pytest, then run -``` - -**Windows paths** -```markdown -# Bad: C:\scripts\validate.bat -# Good: scripts/validate.sh -``` - -**Deeply nested references** -```markdown -# Bad: See reference.md → see details.md → see examples.md -# Good: See examples/code-review-example.md (one level deep) -``` - -**Excessive options without defaults** -```markdown -# Bad: Choose format: JSON, YAML, TOML, XML, CSV, or custom -# Good: Output as JSON (override with --format flag if needed) -``` - -**Vague descriptions** -```markdown -# Bad: description: Helper agent for code tasks -# Good: description: Reviews code for security vulnerabilities. Use PROACTIVELY after authentication/database code is written. 
-``` - -### Testing Your Agent - -**Cross-model testing:** -- Test with Haiku (needs more explicit guidance) -- Test with Sonnet (balanced) -- Test with Opus (handles ambiguity better) -- Adjust instructions if Haiku struggles - -### Agent vs Command - -**Use agent when:** -- Task needs autonomous execution -- Multiple steps with decision points -- Should activate automatically -- Needs tool access control - -**Use command when:** -- User explicitly invokes -- Needs user input ($ARGUMENTS) -- Simpler workflow -- More user interaction - -### Storage - -- **Project agents**: `.claude/agents/` (check into version control) -- **Personal agents**: `~/.claude/agents/` (user-specific) - ---- - -## Complete Example - -See `examples/code-reviewer.md` for a fully implemented agent following this template. diff --git a/.claude/skills/command-agent-builder/templates/basic-command.md b/.claude/skills/command-agent-builder/templates/basic-command.md deleted file mode 100644 index ef2ed19c..00000000 --- a/.claude/skills/command-agent-builder/templates/basic-command.md +++ /dev/null @@ -1,173 +0,0 @@ -# Basic Command Template - -Use this template for simple, single-purpose commands that don't require external tools. - -## Template - -```markdown ---- -description: [WHAT it does] AND [WHEN to use it - be explicit about triggering conditions] -argument-hint: ---- - -# /command-name - -[Task]: $ARGUMENTS - -[WHY context: Explain why this approach/precision/pattern matters] - -## MANDATORY: Before ANY task execution - -#### First, output your understanding and plan: -- Always make a plan before answering a prompt -- State how you understood the query -- [Domain-specific understanding points] - -#### Then assess if clarification is needed: -If the question is vague, incomplete, or could have multiple interpretations, ask: -- [Domain-specific question 1] -- [Domain-specific question 2] -- [Domain-specific question 3] -- What level of detail do you need (overview vs implementation)? 
- -#### [Action] refinement checklist: -- [Specific requirement 1] -- [Specific requirement 2] -- [Specific requirement 3] - -### Step 1: [Active verb] [What] - -[Direct, explicit instructions] - -[If needed: provide options or decision points] - -### Step 2: [Active verb] [What] - -[Direct, explicit instructions] - -[If applicable: validation steps] - -### Step 3: [Active verb] [What] - -[Direct, explicit instructions] - -[If applicable: output formatting] - -## Validation - -Before finalizing: -- [Checkpoint 1] -- [Checkpoint 2] -- [Checkpoint 3] - -## Notes - -- [Key reminder 1] -- [Key reminder 2] -- [Cross-references to documentation] -``` - ---- - -## Creating Effective Commands - -### Before Writing (Evaluation-Driven) - -1. **Test without the command first** - What does Claude miss? -2. **Identify 3 test scenarios** - Common, edge case, error case -3. **Write minimal instructions** - Address only the gaps - -### Appropriate Degrees of Freedom - -Commands typically use **medium freedom** - clear steps with flexibility. - -**Adjust based on task fragility:** -```bash -# Low freedom (critical operations) -git add . && git commit -m "message" && git push origin main - -# Medium freedom (most commands) -Run formatter on $ARGUMENTS: -- JavaScript/TypeScript: prettier --write -- Python: black -- Verify with git diff - -# High freedom (exploratory) -Analyze $ARGUMENTS for patterns and suggest improvements -``` - -### Progressive Disclosure - -For complex commands, organize supporting files: -``` -.claude/commands/ -├── command-name.md # Main command (<300 lines) -├── scripts/ -│ └── validate.sh # Executable helpers -└── templates/ - └── output-template.txt # Output formatting -``` - -Reference with: `bash scripts/validate.sh` or `cat templates/output-template.txt` - -### Anti-Patterns to Avoid - -**Vague descriptions** -```markdown -# Bad: description: Format code -# Good: description: Format code according to project style guide. Use before commits. 
-``` - -**Not using $ARGUMENTS** -```markdown -# Bad: Run formatter on "the files you want to format" -# Good: Run formatter on $ARGUMENTS -``` - -**Passive instructions** -```markdown -# Bad: The code should be formatted using prettier -# Good: Run `prettier --write $ARGUMENTS` -``` - -**Missing validation** -```markdown -# Bad: Step 3: Done! -# Good: Step 3: Verify - Check all files formatted, no errors, formatting-only changes -``` - -**Assuming pre-installed tools** -```markdown -# Bad: Run pytest -# Good: Verify pytest installed (pip install pytest if needed), then run pytest -``` - -**Windows paths** -```markdown -# Bad: C:\scripts\validate.bat -# Good: scripts/validate.sh -``` - -### Testing Your Command - -**Cross-model testing:** -- Test with Haiku (needs more explicit guidance) -- Test with Sonnet (balanced) -- Test with Opus (handles ambiguity better) - -**Scenario testing:** -- Typical arguments -- Edge cases (no files, many files, special characters) -- Invalid arguments -- Error conditions - -### Storage - -- **Project commands**: `.claude/commands/` (check into version control) -- **Personal commands**: `~/.claude/commands/` (user-specific) - ---- - -## Complete Example - -See `examples/format-code.md` for a fully implemented command following this template. diff --git a/.claude/skills/command-agent-builder/templates/mcp-command.md b/.claude/skills/command-agent-builder/templates/mcp-command.md deleted file mode 100644 index 0944ff34..00000000 --- a/.claude/skills/command-agent-builder/templates/mcp-command.md +++ /dev/null @@ -1,192 +0,0 @@ -# MCP Command Template - -Use this template for commands that query external services via Model Context Protocol (MCP). 
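Whichever template a command starts from, several of the frontmatter and $ARGUMENTS rules above can be checked mechanically before the command ships. A minimal sketch in Python (the helper name and the exact rule set are illustrative, not part of any template):

```python
import re

def lint_command(text: str) -> list[str]:
    """Flag common command-file anti-patterns in Markdown source."""
    issues = []
    # YAML frontmatter must open and close with --- delimiters.
    if not re.match(r"^---\n.*?\n---\n", text, re.DOTALL):
        issues.append("missing YAML frontmatter")
    # User input should flow through $ARGUMENTS, not placeholder prose.
    if "$ARGUMENTS" not in text:
        issues.append("does not use $ARGUMENTS")
    if '"your question"' in text:
        issues.append("placeholder text instead of $ARGUMENTS")
    return issues

good = """---
description: Format code according to project style guide. Use before commits.
---
Run `prettier --write $ARGUMENTS` and verify with `git diff`.
"""
print(lint_command(good))  # → []
```

A check like this is cheap to run in CI against `.claude/commands/`, catching regressions before cross-model testing.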
- -## Template - -```markdown ---- -description: [WHAT + WHEN - emphasize precision/accuracy if relevant] -argument-hint: -allowed-tools: mcp__service__* ---- - -# /command-name - -Answer: $ARGUMENTS - -[WHY: Explain importance of precision/accuracy for this domain] - -## MANDATORY: Before ANY task execution - -#### First, output your understanding and plan: -- Always make a plan before answering a prompt -- State how you understood the query -- Identify [data source/repository/service] scope -- Show the refined [question/query] you'll use - -#### Then assess if clarification is needed: -If the question is vague, incomplete, or could have multiple interpretations, ask: -- What specific [component/feature/topic] are you working with? -- What problem are you trying to solve? -- What have you tried so far? -- What level of detail do you need (overview vs implementation)? - -#### [Query] refinement checklist: -- Use exact [terms/names/identifiers] from [domain] -- Use specific [operations/verbs] ([precise action], not [vague action]) -- Include concrete [references/examples/context] when available - -### Step 1: [Analyze/Determine] [Scope/Source] - -**[Source/Repository] mapping:** -- [Option A] → `service/repo-a` -- [Option B] → `service/repo-b` -- [Option C] → `service/repo-c` - -Select appropriate source based on query content. - -### Step 2: Query [Service Name] - -For the appropriate [source], call in sequence: - -**For [Source A]:** -- `mcp__service__[tool_1]("param")` -- `mcp__service__[tool_2]("param")` -- `mcp__service__[tool_3]("param", $ARGUMENTS)` - -**For [Source B]:** -- `mcp__service__[tool_1]("param")` -- `mcp__service__[tool_2]("param")` -- `mcp__service__[tool_3]("param", $ARGUMENTS)` - -**For complex questions:** Query multiple sources as needed. - -### Step 3: Format Response with [Domain] Precision - -Structure: -1. **Direct answer** - Immediate [domain] explanation -2. **[Domain] details** - Specific implementations, data structures -3. 
**Code examples** - With inline comments explaining key points -4. **Source references** - [References format] from [service] -5. **Related concepts** - Connections to other [domain concepts] (if relevant) - -**Precision Rules:** - -AVOID: -- Vague verbs: "handles", "manages", "processes", "enables", "provides" -- Abstract terms: "operations", "management", "coordination" -- Marketing language: "powerful", "seamless", "easy" -- Generic descriptions: [vague example] instead of [precise example] - -USE: -- Exact [function/method/API] names: `[example1]()`, `[example2]()` -- Concrete data structures: `[Type1]`, `[Type2]`, `[Type3]` -- Specific operations: "[precise verb 1]", "[precise verb 2]", "[precise verb 3]" -- Precise [field/parameter] names: `[field1]`, `[field2]`, `[field3]` -- [Reference format] from [service] responses - -**Cross-reference with:** -- [Documentation URL 1] -- [Documentation URL 2] -- [Source repository URL] - -## Notes - -- Always include source references from [service] responses -- Provide runnable code examples for implementation questions -- Ask follow-up questions to [service] for clarification when needed -``` - ---- - -## Creating Effective MCP Commands - -### Before Writing (Evaluation-Driven) - -1. **Test without the command first** - What does Claude miss? -2. **Identify 3 test scenarios** - Common, edge case, error case -3. **Write minimal instructions** - Address only the gaps - -### MCP-Specific Setup - -**Identify the MCP service and tools:** -```markdown -allowed-tools: mcp__service__* # All tools from service -allowed-tools: mcp__service__tool1, mcp__service__tool2 # Specific tools only -``` - -**Define source/repository mapping** - What data sources exist and how to choose between them. - -**Structure MCP call sequences** - Show call order for each source, include $ARGUMENTS in final query tool. 
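The source mapping and call-sequence steps above can be sketched as plain data plus a small dispatch function. The topic keys and the second repository below are placeholders; only the `mcp__deepwiki__*` tool names come from this project:

```python
# Hypothetical mapping from query topic to repository identifier.
SOURCES = {
    "zk-compression": "Lightprotocol/light-protocol",
    "docs": "org/public-docs",  # placeholder repository
}

def plan_calls(topic: str, question: str) -> list[str]:
    """Build the MCP call sequence; the user's question ($ARGUMENTS) goes in the final call."""
    repo = SOURCES.get(topic)
    if repo is None:
        raise ValueError(f"no source mapped for topic {topic!r}")
    return [
        f'mcp__deepwiki__read_wiki_structure("{repo}")',
        f'mcp__deepwiki__ask_question("{repo}", "{question}")',
    ]

calls = plan_calls("zk-compression", "How does ValidityProof verification work?")
print(calls[-1])
```

Keeping the mapping as data makes it easy to extend the command with new sources without touching the call-sequence logic.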
- -### MCP Call Formatting - -**In nested markdown, use backticks (NOT triple backticks):** - -❌ Don't use triple backticks (breaks outer code block): -```markdown -``` -mcp__service__call("param") -``` -``` - -✅ Use inline backticks: -```markdown -- `mcp__service__call("param")` -- `mcp__service__call("param", $ARGUMENTS)` -``` - -**For multiple calls, use bullet points:** -```markdown -**For [scenario]:** -- `tool_1("param")` -- `tool_2("param")` -- `tool_3("param", $ARGUMENTS)` -``` - -### Anti-Patterns to Avoid - -**Vague queries** -```markdown -# Bad: "How does it work?" -# Good: "How does CompressedAccountMeta.new_init() create the account hash?" -``` - -**Missing precision rules** -```markdown -# Bad: No guidance on terminology -# Good: AVOID "handles proof" → USE "verifies proof against state root" -``` - -**Not using $ARGUMENTS** -```markdown -# Bad: ask_question("repo", "user question") -# Good: ask_question("repo", $ARGUMENTS) -``` - -**Triple backticks in nested markdown** -```markdown -# Bad: ```mcp__call()``` (breaks template) -# Good: `mcp__call()` (inline backticks) -``` - -### Testing Your Command - -**Cross-model testing:** -- Test with Haiku (needs more explicit guidance) -- Test with Sonnet (balanced) -- Test with Opus (handles ambiguity better) -- Adjust instructions if Haiku struggles - -### Storage - -- **Project commands**: `.claude/commands/` (check into version control) -- **Personal commands**: `~/.claude/commands/` (user-specific) -- **Document required MCP server** in project README - ---- - -## Complete Example - -See `examples/ask-deepwiki.md` for a fully implemented MCP command following this template. 
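The AVOID list in the precision rules lends itself to a quick mechanical spot check on a drafted answer. A minimal sketch (the tokenization is deliberately naive):

```python
# Vague verbs taken from the AVOID list in the precision rules.
VAGUE_VERBS = {"handles", "manages", "processes", "enables", "provides"}

def flag_vague_verbs(answer: str) -> list[str]:
    """Return vague verbs that appear in a draft answer, sorted."""
    words = {word.strip(".,:;()").lower() for word in answer.split()}
    return sorted(words & VAGUE_VERBS)

draft = "The program handles the proof and manages account state."
print(flag_vague_verbs(draft))  # → ['handles', 'manages']
```

Any flagged verb is a prompt to rewrite with a specific operation, e.g. "verifies proof against state root" instead of "handles the proof".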
diff --git a/.claude/skills/command-agent-builder/validation/checklist.md b/.claude/skills/command-agent-builder/validation/checklist.md deleted file mode 100644 index 075a30de..00000000 --- a/.claude/skills/command-agent-builder/validation/checklist.md +++ /dev/null @@ -1,265 +0,0 @@ -# Validation Checklist - -Use this checklist to validate commands and agents against Anthropic's best practices. - -## Structure Validation - -### Frontmatter (YAML) - -- [ ] **Frontmatter present** with `---` delimiters -- [ ] **description field** present and non-empty -- [ ] **Description includes WHAT** - What the command/agent does -- [ ] **Description includes WHEN** - When to use it / triggering conditions -- [ ] **description under 1024 characters** -- [ ] **argument-hint present** if command uses $ARGUMENTS -- [ ] **argument-hint format** is `<param>` not `[param]` or `{param}` -- [ ] **allowed-tools specified** if using MCP or specific tools -- [ ] **allowed-tools follows least privilege** - only necessary tools - -**For agents only:** -- [ ] **name field present** (lowercase, hyphens, max 64 chars) -- [ ] **name follows gerund form** (verb-ing) if applicable -- [ ] **name does not contain** "anthropic" or "claude" - -### File Structure - -- [ ] **Header present** with # /command-name or # Agent: Name -- [ ] **WHY context included** - Explains importance/rationale -- [ ] **MANDATORY pattern present** - Before ANY task execution section -- [ ] **Steps clearly numbered** - ### Step 1, ### Step 2, etc. 
-- [ ] **Notes section** at end (if applicable) -- [ ] **File under 500 lines** for SKILL.md (templates can be longer) - -## Content Validation - -### Arguments and Placeholders - -- [ ] **Uses $ARGUMENTS** not placeholder text -- [ ] **No "your question"** or similar placeholders -- [ ] **No "the files"** or similar vague references -- [ ] **$ARGUMENTS placed correctly** - where user input should go -- [ ] **Argument referenced in command** if argument-hint present - -### Instructions Quality - -- [ ] **No meta-instructions** - No "When invoked..." or "read this file..." -- [ ] **Direct instructions** - No "The code should be..." passive voice -- [ ] **Active verbs** throughout - "Run", "Execute", "Create", not "is run", "should be" -- [ ] **Explicit and clear** - No ambiguous instructions -- [ ] **Specific actions** - "Run `npm test`" not "test the code" -- [ ] **Stage-gating present** - Plan before execute, validate before finalize - -### MANDATORY Pattern - -- [ ] **Plan-first section** - "First, output your understanding and plan" -- [ ] **Clarification section** - "Then assess if clarification is needed" -- [ ] **Refinement checklist** - Domain-specific requirements -- [ ] **Plan asks for understanding** - Not just "create plan" -- [ ] **Clarification questions relevant** to domain -- [ ] **Checklist items specific** not generic - -### Technical Precision - -- [ ] **Avoids vague verbs** - No "handles", "manages", "processes", "enables", "provides" -- [ ] **Uses specific verbs** - "verifies", "nullifies", "appends", "executes" -- [ ] **Avoids abstract terms** - No "operations", "management", "coordination" -- [ ] **Uses concrete terms** - Specific functions, types, methods -- [ ] **No marketing language** - No "powerful", "seamless", "easy", "simply" -- [ ] **Uses exact names** - CompressedAccountMeta not "account metadata" - -### Code and Formatting - -- [ ] **Inline code uses backticks** - `code` not ```code``` -- [ ] **No triple backticks in nested 
markdown** - Breaks outer code block -- [ ] **Code blocks properly formatted** - When appropriate -- [ ] **Commands are runnable** - Actual syntax, not pseudocode (unless intentional) -- [ ] **File paths are accurate** - If referencing specific files - -## Tool Configuration - -### Tool Permissions - -- [ ] **allowed-tools follows glob pattern** - `mcp__service__*` for all tools -- [ ] **allowed-tools is specific** - Lists only needed tools -- [ ] **Least privilege applied** - Not granted unnecessary tools -- [ ] **Tool usage explained** - Why each tool is needed - -### Tool Usage in Commands - -- [ ] **Tools used correctly** - Proper syntax for each tool -- [ ] **MCP calls formatted properly** - With backticks in lists -- [ ] **Tool calls include $ARGUMENTS** - Where user input should go -- [ ] **Error handling for tool failures** - What to do if tool fails - -## Workflow Validation - -### Step Structure - -- [ ] **Step 1 includes planning** - For complex commands -- [ ] **Steps are ordered logically** - Natural progression -- [ ] **Each step has clear action** - Not vague goals -- [ ] **Decision points have criteria** - When to choose option A vs B -- [ ] **Validation steps included** - Check before finalizing - -### Examples (For Agents) - -- [ ] **At least 2 examples** provided -- [ ] **Examples show input/output** - What agent receives and produces -- [ ] **Examples demonstrate key scenarios** - Common use cases -- [ ] **Examples show decision-making** - Not just happy path - -### Constraints (For Agents) - -- [ ] **Constraints section present** - What agent must NOT do -- [ ] **Security considerations listed** - Dangerous operations identified -- [ ] **Error handling defined** - What to do when uncertain -- [ ] **Constraints have rationale** - Why each constraint exists - -## Agent-Specific Validation - -### Responsibility - -- [ ] **Single, clear responsibility** stated -- [ ] **Responsibility is narrow** - Not trying to do everything -- [ ] **Proactive 
activation language** - "Use PROACTIVELY when..." -- [ ] **Triggering conditions clear** - When to invoke automatically - -### Autonomy - -- [ ] **Can run without user input** - No mid-execution questions -- [ ] **Decision criteria provided** - For all decision points -- [ ] **Stopping conditions defined** - When to stop and ask user -- [ ] **Error recovery specified** - How to handle failures - -### Safety - -- [ ] **Destructive operations guarded** - Require confirmation or prevented -- [ ] **No force operations** without explicit user request -- [ ] **Rollback instructions** if applicable -- [ ] **Security boundaries enforced** - No credential access, etc. - -## Command Type-Specific - -### Basic Commands - -- [ ] **Single purpose focus** - Does one thing well -- [ ] **Clear usage example** in notes -- [ ] **Output format specified** - What user should see - -### MCP Commands - -- [ ] **Source/repository mapping** provided -- [ ] **MCP call sequence** shown for each source -- [ ] **$ARGUMENTS in query tools** - Passed to ask_question or similar -- [ ] **Precision rules included** - Domain-specific terminology -- [ ] **Cross-references provided** - Documentation URLs - -### Research Commands - -- [ ] **Research scope defined** - Boundaries clear -- [ ] **Multiple sources consulted** - Not single-source -- [ ] **Synthesis step included** - Combine findings -- [ ] **Source attribution required** - All references documented -- [ ] **Gap identification** - Note what's unclear - -## Best Practices Adherence - -### From Prompt Engineering - -- [ ] **Clear and direct** - No subtle hints -- [ ] **Structured with headers** - Easy to navigate -- [ ] **Examples where helpful** - Show don't just tell - -### From Claude Code Best Practices - -- [ ] **$ARGUMENTS for parameters** - Dynamic input -- [ ] **Extended thinking triggers** - Plan-first approach -- [ ] **Clear targets provided** - Success criteria - -### From Sub-Agents Guide - -- [ ] **Single responsibility** - 
Focused purpose -- [ ] **Detailed prompt** - Comprehensive instructions -- [ ] **Least privilege** - Minimal tool access -- [ ] **Proactive if appropriate** - Automatic activation - -### From Agent Skills Best Practices - -- [ ] **Concise** - No unnecessary words -- [ ] **Progressive disclosure** - Main file navigates to details -- [ ] **Appropriate freedom level** - Matches task fragility -- [ ] **Multi-model consideration** - Works across Haiku/Sonnet/Opus - -## Final Checks - -### Completeness - -- [ ] **All sections present** - Nothing obviously missing -- [ ] **Cross-references work** - Links point to actual files -- [ ] **Examples are complete** - Not TODO or placeholder - -### Clarity - -- [ ] **Instructions are understandable** - Clear to another person -- [ ] **No ambiguous terms** - Specific throughout -- [ ] **Logical flow** - Natural progression - -### Testability - -- [ ] **Can be tested** - Possible to verify it works -- [ ] **Success criteria clear** - Know when it's done right -- [ ] **Failure modes identified** - Know what could go wrong - -## Severity Levels - -### Critical Issues (Must Fix) - -- Missing $ARGUMENTS (uses placeholder) -- No description or malformed frontmatter -- Meta-instructions present -- No MANDATORY pattern -- Security vulnerabilities (in agents) -- All tools granted (violates least privilege) - -### Important Issues (Should Fix) - -- Missing WHY context -- Passive or vague instructions -- No validation steps -- Missing examples (for agents) -- No constraints defined (for agents) -- Poor technical precision - -### Minor Issues (Nice to Fix) - -- Formatting inconsistencies -- Could use better terminology -- Missing cross-references -- Light on examples -- Could be more concise - -## Using This Checklist - -**When creating:** -- Use as template guide -- Check off items as you add them -- Refer to validation/examples.md for good patterns - -**When validating:** -- Go through each section systematically -- Note all issues found 
with severity -- Provide specific fixes for each issue -- Reference examples.md for good vs bad patterns - -**When optimizing:** -- Focus on critical issues first -- Group related issues together -- Provide concrete edit instructions -- Verify fixes don't break other aspects - -## Notes - -- Not all items apply to all command types -- Use judgment for context-specific items -- When in doubt, check official Anthropic docs -- Validation examples.md provides concrete good/bad examples diff --git a/.claude/skills/prompt-template/SKILL.md b/.claude/skills/prompt-template/SKILL.md deleted file mode 100644 index 4f7d2dc9..00000000 --- a/.claude/skills/prompt-template/SKILL.md +++ /dev/null @@ -1,95 +0,0 @@ ---- -name: prompt-template -description: Generate structured implementation prompts for SDK integration, API setup, or feature implementation. Use when user wants to create a prompt for implementing something in their codebase. ---- - -# Prompt Template Skill - -## When to Use - -User says: -- "create a prompt for [SDK/feature]" -- "help me write a prompt to implement [X]" -- "I need to integrate [library]" -- Shows you documentation and wants an implementation prompt - -## Core Purpose - -Transform documentation into structured implementation prompts that: -- Extract exact technical requirements from source -- Gather user's application context -- Generate copy-paste ready prompts for any LLM - -## Process - -### Step 1: Identify Source Documentation - -Ask: -1. What is the official documentation URL? -2. What specific page/section covers this? -3. What is the source code directory URL? (GitHub folder/file) -4. Any GitHub repos with examples? 
- -### Step 2: Read Documentation and Extract - -From the source documentation, extract: -- **Installation**: Exact commands with versions -- **Imports**: Exact import statements -- **Configuration**: All options with types -- **Initialization**: Complete working example -- **Key APIs**: Core methods/functions needed - -### Step 3: Gather User Context - -Ask about their application: -1. **Framework**: React/Next.js/Express/Rust/etc.? -2. **Language**: TypeScript/JavaScript/Rust? Version? -3. **Service organization**: How do they structure clients/services? -4. **Environment management**: .env files, config service, other? -5. **Error handling**: try/catch, error types, logging pattern? -6. **Type system**: TypeScript? Strict mode? - -### Step 4: Generate Structured Prompt - -Use the template from resources/implementation-prompt-template.md - -Fill in: -- Task overview (one sentence) -- User's tech stack context -- Technical requirements extracted from docs -- Specific implementation deliverables -- Documentation references with exact URLs - -### Step 5: Validate - -Before delivering, check: -- [ ] Source documentation URL included (specific page, not homepage) -- [ ] Source code directory URL included (GitHub folder/file where implemented) -- [ ] Technical requirements from official docs (not assumptions) -- [ ] User context gathered (or questions asked) -- [ ] Installation commands include versions -- [ ] Initialization example is complete working code -- [ ] Deliverables are specific and actionable -- [ ] No assumptions about user's patterns - -## Key Principles - -1. **Never assume** - Always ask about user's patterns -2. **Extract from source** - Get technical details from official docs, not memory -3. **Be specific** - No vague requests like "set up properly" -4. **Include WHY** - Context about why patterns matter from docs -5. 
**Link precisely** - Reference exact documentation pages - -## Example Templates - -Load resources/implementation-prompt-template.md for: -- SDK Client Setup (like Grid, ZK Compression) -- API Integration (REST, GraphQL) -- Feature Implementation (new functionality) -- Migration (library upgrade, framework switch) - -## Integration - -- Use alongside zk-compression-terminology for ZK Compression specific prompts -- Reference technical precision patterns from CLAUDE.md -- Follow progressive disclosure: only load full template when generating \ No newline at end of file diff --git a/.claude/skills/prompt-template/resources/implementation-prompt-template.md b/.claude/skills/prompt-template/resources/implementation-prompt-template.md deleted file mode 100644 index cf195f76..00000000 --- a/.claude/skills/prompt-template/resources/implementation-prompt-template.md +++ /dev/null @@ -1,274 +0,0 @@ -# Implementation Prompt Template - -Use this template to generate structured prompts for any SDK, library, or feature implementation. - -## Base Template Structure - -```markdown -# IMPLEMENT [FEATURE NAME IN CAPS] - -## TASK OVERVIEW -[One clear sentence describing what needs to be implemented] - -## MY APPLICATION CONTEXT -**Tech Stack:** -- Framework: [User's framework] -- Language: [Language + version] -- Service architecture: [How user organizes code] -- Environment management: [How user handles config/env vars] -- Error handling: [User's error handling approach] - -## TECHNICAL REQUIREMENTS - -**Installation:** -```bash -[Exact install commands from docs with versions] -``` - -**Import:** -```[language] -[Exact import statements from documentation] -``` - -**Configuration Options:** -[Bullet list of all config options with types from docs] - -**Complete Initialization Example:** -```[language] -[Full working initialization code from official documentation] -``` - -## IMPLEMENTATION REQUEST - -Create [specific deliverable] that: -1. 
Follows my application's service patterns -2. Handles environment configuration properly (dev/staging/prod) -3. Includes comprehensive error handling matching my patterns -4. Provides clean interface for other parts of my app -5. Includes proper TypeScript types (if applicable) -6. [Any additional specific requirements based on the feature] - -Show me the complete implementation with file structure and code. - -## DOCUMENTATION REFERENCES -- Primary documentation: [URL to specific page, not homepage] -- Source code directory: [URL to GitHub folder/file where this is implemented] -- API reference: [URL if applicable] -- GitHub repository: [URL if applicable] -- Example implementations: [URLs if applicable] -``` - ---- - -## Example 1: SDK Client Setup (Grid) - -```markdown -# IMPLEMENT GRID ACCOUNTS CLIENT SETUP - -## TASK OVERVIEW -Set up Grid SDK client initialization in my application following my existing patterns. - -## MY APPLICATION CONTEXT -**Tech Stack:** -- Framework: Next.js 14 -- Language: TypeScript 5.2 -- Service architecture: /services folder with singleton pattern -- Environment management: .env.local with Zod validation -- Error handling: Custom error classes with structured logging - -## TECHNICAL REQUIREMENTS - -**Installation:** -```bash -npm install @sqds/grid -``` - -**Import:** -```typescript -import { GridClient } from '@sqds/grid'; -``` - -**Configuration Options:** -- environment: 'sandbox' | 'production' -- apiKey: string (from Grid dashboard at https://grid.squads.xyz/dashboard) -- baseUrl: string (optional, defaults to "https://grid.squads.xyz") - -**Complete Initialization Example:** -```typescript -const gridClient = new GridClient({ - environment: process.env.NODE_ENV === 'production' ? 'production' : 'sandbox', - apiKey: process.env.GRID_API_KEY!, - baseUrl: "https://grid.squads.xyz", -}); -``` - -## IMPLEMENTATION REQUEST - -Create a Grid client service that: -1. Follows my application's singleton service pattern -2. 
Handles environment configuration with Zod validation -3. Includes comprehensive error handling with custom GridError class -4. Provides clean interface for other parts of my app -5. Includes proper TypeScript types and JSDoc comments - -Show me the complete implementation with file structure and code. - -## DOCUMENTATION REFERENCES -- Grid SDK Documentation: https://www.npmjs.com/package/@sqds/grid -- Source code directory: https://github.com/Squads-Protocol/grid-sdk/tree/main/src -- API Dashboard: https://grid.squads.xyz/dashboard -``` - ---- - -## Example 2: ZK Compression Client (TypeScript) - -```markdown -# IMPLEMENT ZK COMPRESSION CLIENT SETUP - -## TASK OVERVIEW -Set up Light Protocol SDK client for compressed account operations in my Solana application. - -## MY APPLICATION CONTEXT -**Tech Stack:** -- Framework: React Native with Expo -- Language: TypeScript 5.0 -- Service architecture: Context providers with hooks -- Environment management: Expo SecureStore for keys, env vars for endpoints -- Error handling: React Error Boundaries with Sentry logging - -## TECHNICAL REQUIREMENTS - -**Installation:** -```bash -npm install @lightprotocol/stateless.js@0.22.1-alpha.1 \ - @lightprotocol/compressed-token@0.22.1-alpha.1 \ - @solana/web3.js -``` - -**Import:** -```typescript -import { Rpc, createRpc } from '@lightprotocol/stateless.js'; -``` - -**Configuration Options:** -- RPC_ENDPOINT: string (Helius or custom RPC) -- COMPRESSION_RPC_ENDPOINT: string (separate compression endpoint or same as RPC) -- Commitment level: 'confirmed' | 'finalized' - -**Complete Initialization Example:** -```typescript -const RPC_ENDPOINT = process.env.RPC_ENDPOINT || 'https://devnet.helius-rpc.com?api-key=YOUR_KEY'; -const COMPRESSION_RPC_ENDPOINT = process.env.COMPRESSION_RPC_ENDPOINT || RPC_ENDPOINT; - -const rpc: Rpc = createRpc(RPC_ENDPOINT, COMPRESSION_RPC_ENDPOINT); -``` - -## IMPLEMENTATION REQUEST - -Create a ZK Compression client provider that: -1. 
Follows React Context pattern with custom hook -2. Handles environment configuration for devnet/mainnet switching -3. Provides clean RPC interface for compressed account operations -4. Includes proper TypeScript types for all RPC methods -5. Handles connection errors with React Error Boundary integration -6. Supports reconnection logic for mobile network interruptions - -Show me the complete implementation with file structure and code. - -## DOCUMENTATION REFERENCES -- Client Library Guide: https://www.zkcompression.com/compressed-pdas/client-library -- Source code directory: https://github.com/Lightprotocol/light-protocol/tree/main/js/stateless.js/src -- TypeScript SDK API: https://lightprotocol.github.io/light-protocol/stateless.js/index.html -- GitHub Examples: https://github.com/Lightprotocol/program-examples -- Complete Documentation: https://www.zkcompression.com/llms-full.txt -``` - ---- - -## Example 3: API Integration (REST Client) - -```markdown -# IMPLEMENT STRIPE PAYMENT CLIENT - -## TASK OVERVIEW -Set up Stripe SDK client for payment processing in my e-commerce backend. 
- -## MY APPLICATION CONTEXT -**Tech Stack:** -- Framework: Express.js with TypeScript -- Language: TypeScript 5.1 -- Service architecture: Layered architecture (controllers/services/repositories) -- Environment management: dotenv with @types/node for env vars -- Error handling: Custom AppError class with express-async-errors - -## TECHNICAL REQUIREMENTS - -**Installation:** -```bash -npm install stripe -``` - -**Import:** -```typescript -import Stripe from 'stripe'; -``` - -**Configuration Options:** -- apiKey: string (secret key from Stripe dashboard) -- apiVersion: '2023-10-16' (Stripe API version) -- typescript: true (enables TypeScript support) -- timeout: number (optional, request timeout in ms) -- maxNetworkRetries: number (optional, default 0) - -**Complete Initialization Example:** -```typescript -const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!, { - apiVersion: '2023-10-16', - typescript: true, -}); -``` - -## IMPLEMENTATION REQUEST - -Create a Stripe payment service that: -1. Follows layered architecture with service class -2. Handles environment configuration with validation -3. Includes comprehensive error handling for Stripe errors -4. Provides clean interface for payment operations (create intent, confirm, refund) -5. Includes proper TypeScript types and JSDoc comments -6. Implements webhook signature verification -7. Includes retry logic for network failures - -Show me the complete implementation with file structure and code. 
- -## DOCUMENTATION REFERENCES -- Stripe Node.js SDK: https://stripe.com/docs/api -- Source code directory: https://github.com/stripe/stripe-node/tree/master/src -- TypeScript Integration: https://github.com/stripe/stripe-node#usage-with-typescript -- Webhook Guide: https://stripe.com/docs/webhooks -``` - ---- - -## Template Selection Guide - -**Use SDK Client Setup template for:** -- Client library initialization -- Service/API wrappers -- SDK configuration - -**Use API Integration template for:** -- REST API clients -- GraphQL clients -- Third-party service integrations - -**Use Feature Implementation template for:** -- New application features -- Component development -- Business logic implementation - -**Use Migration template for:** -- Library upgrades -- Framework migrations -- Refactoring tasks diff --git a/.claude/skills/zk-compression-terminology/SKILL.md b/.claude/skills/zk-compression-terminology/SKILL.md deleted file mode 100644 index cd2794aa..00000000 --- a/.claude/skills/zk-compression-terminology/SKILL.md +++ /dev/null @@ -1,117 +0,0 @@ ---- -name: zk-compression-terminology -description: Precise technical definitions for ZK Compression compressed account operations extracted from official documentation ---- - -# ZK Compression Terminology Skill - -## When to Use - -This skill provides precise technical definitions when: -- Writing compressed account documentation -- Validating terminology accuracy in guides -- Checking correct type names (CompressedAccountMeta vs "account metadata") -- Verifying SDK method signatures and parameters -- Understanding exact behavior of Light System Program operations -- Ensuring consistent technical language across documentation - -## Core Principle - -**Describe exactly what happens. 
Avoid vague language.** - -AVOID: -- Abstract concepts: "operations", "management", "coordination" -- Vague verbs: "handles", "manages", "processes" -- Marketing language: "enables", "provides capability" -- Generic descriptions: "account metadata" instead of "CompressedAccountMeta" -- ZK terminology in user-facing docs: "inclusion proof", "non-inclusion proof" → Instead: "prove the account hash exists", "prove the address doesn't exist" - -## What This Skill Contains - -### Compressed Account Operations Terminology -`resources/compressed-accounts-terminology.md` (~6-7k tokens) - -Complete terminology extracted from 5 official guides: -- how-to-create-compressed-accounts.md -- how-to-update-compressed-accounts.md -- how-to-close-compressed-accounts.md -- how-to-reinitialize-compressed-accounts.md -- how-to-burn-compressed-accounts.md - -**Includes:** -- 100+ terms with precise definitions -- SDK method signatures with exact parameters -- System accounts array specification -- Operation state transition table -- Instruction data patterns for all operations -- Required dependencies and traits - -## Usage Pattern - -### Fast Lookup -Check this SKILL.md for the core principle and scope. If you need a specific term definition, load the terminology table. 
- -### Writing Documentation -Load `compressed-accounts-terminology.md` when writing or editing documentation to ensure: -- Correct type names -- Precise technical descriptions -- Consistent verb usage -- Accurate SDK method calls - -### Validation -Use terminology table to verify: -- `CompressedAccountMeta` contains tree_info, address, output_state_tree_index -- `CompressedAccountMetaBurn` omits output_state_tree_index field -- ValidityProof proves "address doesn't exist" (create) or "account hash exists" (update/close/reinit/burn) -- State trees are "fungible" not "interchangeable" or "equivalent" -- Operations nullify hashes, don't "invalidate" or "mark as spent" - -## Example Corrections - -| Instead of | Write | -|-----------|-------| -| "enables developers to create accounts" | "creates new account hash and inserts address into address tree" | -| "handles account updates" | "nullifies old account hash and appends new hash with updated data" | -| "manages state transitions" | "atomically nullifies input hash and creates output hash" | -| "provides burn functionality" | "nullifies account hash and creates no output state" | -| "account metadata" | "CompressedAccountMeta struct containing tree_info, address, and output_state_tree_index" | -| "proves account exists" (vague) | "proves account hash exists in state tree using 128-byte validity proof" | -| "non-inclusion proof" | "proof that address doesn't exist in address tree" | -| "processes transactions" | "verifies validity proof and invokes Account Compression Program" | - -## Term Categories in Table - -1. **Core Types**: LightAccount, CompressedAccountMeta, ValidityProof, CpiAccounts -2. **SDK Methods**: new_init(), new_mut(), new_close(), new_empty(), new_burn() -3. **CPI Components**: CpiSigner, derive_light_cpi_signer!, with_light_account() -4. **Tree Structures**: State Tree, Address Tree, PackedStateTreeInfo -5. **Operations**: Create, Update, Close, Reinitialize, Burn -6. 
**System Accounts**: 8 required accounts for every CPI -7. **Traits & Derives**: LightDiscriminator, BorshSerialize, Clone, Debug, Default -8. **Frameworks**: Anchor (anchor_lang) vs Native Rust (borsh) - -## Integration with Documentation - -This skill works alongside: -- [GitBook Assistant](/home/tilo/Workspace/.claude/skills/gitbook-assistant/SKILL.md) - For syntax and formatting -- [CLAUDE.md](/home/tilo/.claude/CLAUDE.md) - For writing standards -- [Local CLAUDE.md](../../developer-content/CLAUDE.md) - For project guidelines - -## Validation Checklist - -When writing documentation, verify: -- [ ] Type names are exact: `CompressedAccountMeta` not "metadata" -- [ ] Methods include parentheses: `new_init()` not "new_init" -- [ ] Proofs describe action: "proves address doesn't exist" not "non-inclusion proof" -- [ ] Verbs are concrete: "nullifies", "appends", "verifies" not "handles", "manages" -- [ ] No marketing language: no "enables", "provides", "powerful" -- [ ] State transitions are explicit: "nullifies old hash, appends new hash" -- [ ] Account types are specific: "LightAccount" not "account wrapper" - -## Notes - -- Terminology extracted directly from official Light Protocol documentation -- Definitions describe implementation behavior, not abstract concepts -- SDK signatures show exact parameter types and names -- All 8 system accounts listed with pubkeys and descriptions -- Operation state transitions show input/output hashes explicitly diff --git a/.claude/skills/zk-compression-terminology/resources/compressed-accounts-terminology.md b/.claude/skills/zk-compression-terminology/resources/compressed-accounts-terminology.md deleted file mode 100644 index 67cacf21..00000000 --- a/.claude/skills/zk-compression-terminology/resources/compressed-accounts-terminology.md +++ /dev/null @@ -1,111 +0,0 @@ -# Compressed Account Operations - Terminology Reference - -**Source:** how-to-create, update, close, reinitialize, burn compressed accounts - ---- - -| Term | 
Precise Definition | Source | Avoid |
-|------|-------------------|---------|-------|
-| **Account Hash** | 32-byte identifier calculated from account data, owner, address, and tree position for locating account in state tree. Recalculated and changes on every write to the account. | All operations | "account identifier", "account reference" |
-| **Address** | 32-byte persistent identifier for compressed account, derived from seeds and stored in address tree for PDA-like behavior. Does not change across state transitions. | Create (derived), Update/Close/Reinit/Burn (referenced) | "account address", "persistent identifier" |
-| **Address Seed** | 32-byte value returned by `derive_address()` for passing to Light System Program to insert address into address tree. Required parameter for `with_new_addresses()`. | Create | "seed for address", "address derivation input" |
-| **Address Tree** | Binary Merkle tree storing addresses for compressed accounts. Address derived from same seeds and program ID produces different address in different tree. Ensures address uniqueness within tree scope. | Create | "address storage", "uniqueness tree" |
-| **anchor_lang** | Rust crate for Solana program development with automatic instruction deserialization and account validation. | — | "Anchor framework", "Anchor library" |
-| **AnchorSerialize / AnchorDeserialize** | Traits for serializing account structs in Anchor programs. Applied via `#[derive()]` attribute. | — | "Anchor serialization", "serialization traits" |
-| **borsh** | Binary serialization crate for native Rust programs. Smaller serialized size than bincode. | — | "serialization library", "Borsh framework" |
-| **BorshSerialize / BorshDeserialize** | Traits for serializing account structs in native Rust programs. Applied via `#[derive()]` attribute. | — | "Borsh serialization", "serialization traits" |
-| **b"authority"** | Seed bytes used to derive CPI signer PDA from program ID. Light System Program verifies CPI signer uses this seed. | — | "authority seed", "CPI seed" |
-| **Burn** | Instruction that nullifies existing account hash in state tree and creates no output state. Account cannot be reinitialized after burn. | — | "permanent close operation", "account destruction" |
-| **Close** | Instruction that nullifies existing account hash and creates new hash with zero discriminator and empty data. Account can be reinitialized after close. | — | "close operation", "account closure" |
-| **Clone, Debug, Default** | Standard Rust traits required on compressed account struct for `LightAccount` wrapper. `Default` required for `new_empty()`. | — | "standard traits", "required traits" |
-| **CompressedAccountMeta** | Account tree position metadata for instructions that create new account state (update, close, reinit). Contains `tree_info: PackedStateTreeInfo`, `address: [u8; 32]`, and `output_state_tree_index: u8` field. | Update, Close, Reinitialize | "account metadata", "compressed account data" |
-| **CompressedAccountMetaBurn** | Account tree position metadata for permanent burn instructions. Contains `tree_info: PackedStateTreeInfo` and `address: [u8; 32]` but no `output_state_tree_index` field since account is permanently destroyed. | Burn | "burn metadata", "account metadata for burn" |
-| **CPI (Cross-Program Invocation)** | Call from your program to Light System Program with signed PDA and accounts for state transitions. Executes atomically within same transaction. | All operations | "program call", "cross-program operation" |
-| **CPI Authority PDA** | PDA with seed `b"authority"` derived from your program ID for signing all CPIs to Light System Program. Verified by Light System Program during CPI. | All operations | "CPI signer", "authority PDA" |
-| **CpiAccounts** | Struct parsing signer and remaining_accounts into accounts array for Light System Program CPI. Created with `CpiAccounts::new()`. | — | "CPI accounts wrapper", "accounts for CPI" |
-| **CpiSigner** | Struct containing PDA pubkey and bump for signing CPIs. Derived at compile time with `derive_light_cpi_signer!` macro. | — | "CPI signer struct", "signer configuration" |
-| **Create** | Instruction that proves address doesn't exist in address tree, inserts address, and appends new account hash to state tree. | — | "create operation", "account initialization" |
-| **ctx.accounts.signer** | Anchor account struct field containing transaction signer. Accessed in Anchor instructions via `Context` parameter. | — | "signer account", "transaction signer" |
-| **ctx.remaining_accounts** | Anchor field containing slice of additional accounts: system accounts and packed tree accounts. Passed to `CpiAccounts::new()`. | — | "remaining accounts", "additional accounts" |
-| **declare_id!** | Anchor macro defining program's unique public key. Generates `ID` constant and `id()` function. | — | "program ID macro", "ID declaration" |
-| **derive_address()** | Function that derives address from custom_seeds, address_tree_pubkey, and program_id. Returns `(address, address_seed)` tuple. | — | "address derivation", "generates address" |
-| **derive_light_cpi_signer!** | Macro that derives CPI signer PDA at compile time from program ID string. Creates `CpiSigner` constant. | — | "CPI signer macro", "derives CPI signer" |
-| **Discriminator** | 8-byte unique type ID for compressed account struct. Stored in separate field, not first 8 bytes of data like Anchor accounts. | — | "type ID", "account discriminator" |
-| **entrypoint!** | Macro defining entry point for native Rust programs. Routes to `process_instruction(program_id, accounts, instruction_data)`. | — | "program entry point", "entrypoint macro" |
-| **getCompressedAccount()** | RPC method fetching current compressed account by address or hash. Returns account data, tree position, and metadata. | — | "fetch account", "get account data" |
-| **getValidityProof()** | RPC method generating proof that account hash exists in state tree or address doesn't exist in address tree. Returns `ValidityProof` struct. | — | "get proof", "generate proof" |
-| **get_tree_pubkey()** | Method on `PackedAddressTreeInfo` and `PackedStateTreeInfo` that unpacks u8 index to retrieve actual tree account pubkey from `CpiAccounts`. | — | "retrieve tree pubkey", "unpack tree pubkey" |
-| **into_new_address_params_packed()** | Method on `PackedAddressTreeInfo` converting tree info and address_seed into `NewAddressParamsPacked` for CPI. | — | "create address params", "convert to params" |
-| **invoke()** | Final method in CPI builder chain that executes CPI to Light System Program with parsed accounts. Returns `Result<()>`. | — | "execute CPI", "call program" |
-| **LightAccount** | Wrapper type for compressed account struct. Similar to Anchor's `Account` but for compressed accounts. | — | "account wrapper", "compressed account wrapper" |
-| **LightAccount::new_burn()** | Creates `LightAccount` that hashes current account data as input and creates no output state. Account permanently destroyed. | — | "burn wrapper", "permanent destruction wrapper" |
-| **LightAccount::new_close()** | Creates `LightAccount` that hashes current account data as input and creates output with zero discriminator and empty data. | — | "close wrapper", "closure wrapper" |
-| **LightAccount::new_empty()** | Creates `LightAccount` that reconstructs closed account hash (zero values) as input and creates output with default-initialized values. | — | "reinit wrapper", "empty account wrapper" |
-| **LightAccount::new_init()** | Creates `LightAccount` with no input hash and output containing initial account data at specified address and output state tree. | — | "init wrapper", "initialization wrapper" |
-| **LightAccount::new_mut()** | Creates `LightAccount` that hashes current account data as input and allows modifying output state. Returns mutable reference. | — | "update wrapper", "mutation wrapper" |
-| **LightDiscriminator** | Trait deriving 8-byte type ID from struct name. Applied via `#[derive(LightDiscriminator)]` on compressed account struct. | — | "discriminator trait", "type ID trait" |
-| **light-sdk** | Rust crate providing macros, CPI interface, and account wrappers for compressed accounts. Core dependency for compressed account programs. | — | "Light SDK", "compression SDK" |
-| **LightSystemProgramCpi** | Builder struct for constructing CPI instruction to Light System Program. Created with `new_cpi()`, configured with `with_*()` methods. | — | "CPI builder", "instruction builder" |
-| **Light System Program** | Program verifying validity proofs, checking account ownership, and invoking Account Compression Program to update trees. Program ID: SySTEM1eSU2p4BGQfQpimFEWWSC1XDFeun3Nqzz3rT7 | — | "Light System", "system program" |
-| **Account Compression Program** | Program writing to state and address tree accounts. Invoked by Light System Program, never directly by client or user program. Program ID: compr6CUsB5m2jS4Y3831ztGSTnDpnKJTKS95d64XVq | — | "compression program", "tree program" |
-| **Noop Program** | Program logging compressed account state to Solana ledger for indexers to parse (v1 only). Program ID: noopb9bkMVfRPU8AsbpTUg8AQkHtKwMYZiFUjNRtMmV | — | "logging program", "noop" |
-| **System Program** | Solana program for lamport transfers between accounts. Program ID: 11111111111111111111111111111111 | — | "Solana System Program", "native program" |
-| **msg!** | Macro writing string to program logs visible in transaction response. Used for debugging. | — | "log macro", "logging" |
-| **new_cpi()** | Static method on `LightSystemProgramCpi` initializing CPI instruction with `CpiSigner` and `ValidityProof`. First call in builder chain. | — | "create CPI", "initialize CPI" |
-| **Nullification** | Marks existing account hash as spent in state tree by setting leaf to nullified state. Prevents double spending. 
| "nullify operation", "account invalidation" | -| **output_state_tree_index** | u8 index pointing to state tree account in packed accounts array. Specifies which state tree stores new account hash. | "output tree index", "state tree index" | -| **PackedAccounts** | Client-side pattern to pack account pubkeys into an accounts array to pass u8 indices instead of 32-byte pubkeys in instruction data. Reduces transaction size. | "packed accounts pattern", "account packing" | -| **PackedAddressTreeInfo** | Struct with `address_merkle_tree_pubkey_index: u8` pointing to address tree account in packed accounts array. | "address tree info", "packed address tree" | -| **PackedStateTreeInfo** | Struct with `state_merkle_tree_pubkey_index: u8` pointing to state tree account in packed accounts array. | "state tree info", "packed state tree" | -| **process_instruction** | Entry point function for native Rust programs. Receives `program_id: &Pubkey`, `accounts: &[AccountInfo]`, `instruction_data: &[u8]`. | "instruction processor", "entry function" | -| **#[program]** | Anchor attribute marking module as program implementation. Contains instruction handler functions. | "program module", "program attribute" | -| **Pubkey** | 32-byte Solana public key type from `solana_program` crate. Used for addresses, program IDs, and tree accounts. | "public key", "address type" | -| **Registered Program PDA** | PDA controlling which programs can invoke Account Compression Program. Derived from Light System Program. | "registration PDA", "access control PDA" | -| **Reinitialize** | Instruction that proves closed account hash exists in state tree, nullifies it, and creates new hash with default values at same address. | "reinit operation", "reopening account" | -| **remaining_accounts** | Slice of accounts after signer in native Rust programs. Contains system accounts (8 accounts) followed by packed tree accounts. 
| "additional accounts", "extra accounts" | -| **#[repr(u8)]** | Rust attribute specifying enum uses u8 as discriminant. Used for `InstructionType` enum in native programs. | "enum representation", "u8 enum" | -| **signer** | First account in accounts array that signs transaction and pays fees. Extracted with `accounts.first()` in native programs. | "fee payer", "transaction signer" | -| **Signer<'info>** | Anchor account type ensuring account signed transaction. Applied via `#[account(mut)]` attribute on signer field. | "Anchor signer", "signer account type" | -| **split_first()** | Rust slice method separating first element from rest. Used to extract signer from remaining accounts in native programs. | "split accounts", "extract signer" | -| **State Root** | Root hash of state tree, cryptographic commitment to all account hashes in tree. Included in validity proof and verified on-chain. | "tree root", "Merkle root" | -| **State Tree** | Binary Merkle tree storing compressed account hashes. Multiple state trees can exist; they are fungible. | "account tree", "hash tree" | -| **try_from_slice** | Borsh method deserializing bytes into typed struct. Returns `Result`. | "deserialize", "parse bytes" | -| **Update** | Instruction that proves account hash exists in state tree, nullifies old hash, and appends new hash with updated data. Uses UTXO pattern. | "update operation", "account modification" | -| **UTXO Pattern** | Pattern where update instruction consumes existing account hash and produces new hash with different data. Prevents in-place mutation. | "update pattern", "consume-produce pattern" | -| **ValidityProof** | Struct with proof that address doesn't exist in address tree (for create) or account hash exists in state tree (for update/close/reinit/burn). Constant 128 bytes. | "proof struct", "zero-knowledge proof" | -| **with_light_account()** | Method on `LightSystemProgramCpi` adding `LightAccount` wrapper to CPI instruction. 
Second call in builder chain after `new_cpi()`. | "add account", "set account data" | -| **with_new_addresses()** | Method on `LightSystemProgramCpi` adding address parameters for inserting addresses into address tree. Only used in create instructions. | "add addresses", "set new addresses" | -| **Zero Discriminator** | Discriminator set to `[0u8; 8]` in closed account. Indicates account has no type and is closed. | "null discriminator", "closed state discriminator" | -| **Zero Values** | Account with zero discriminator and empty data vector. Created by close instruction, consumed by reinitialize instruction. | "empty account", "closed account state" | - ---- - -## SDK Method Signatures - -| Method | Signature | Returns | Description | -|--------|-----------|---------|-------------| -| **CpiAccounts::new** | `(signer: &AccountInfo, remaining: &[AccountInfo], cpi_signer: CpiSigner)` | `CpiAccounts` | Parse accounts array into system accounts and tree accounts for Light System Program CPI | -| **LightAccount::<T>::new_init** | `(program_id: &Pubkey, address: Option<[u8; 32]>, output_state_tree_index: u8)` | `LightAccount<T>` | Create wrapper with no input hash and output containing initial values | -| **LightAccount::<T>::new_mut** | `(program_id: &Pubkey, account_meta: &CompressedAccountMeta, current_data: T)` | `Result<LightAccount<T>>` | Create wrapper hashing current_data as input and allowing output modification | -| **LightAccount::<T>::new_close** | `(program_id: &Pubkey, account_meta: &CompressedAccountMeta, current_data: T)` | `Result<LightAccount<T>>` | Create wrapper hashing current_data as input and output with zero discriminator and empty data | -| **LightAccount::<T>::new_empty** | `(program_id: &Pubkey, account_meta: &CompressedAccountMeta)` | `Result<LightAccount<T>>` | Create wrapper reconstructing closed hash as input and output with default values | -| **LightAccount::<T>::new_burn** | `(program_id: &Pubkey, account_meta: &CompressedAccountMetaBurn, current_data: T)` | `Result<LightAccount<T>>` | Create wrapper hashing
current_data as input and no output state | -| **LightSystemProgramCpi::new_cpi** | `(cpi_signer: CpiSigner, proof: ValidityProof)` | `LightSystemProgramCpi` | Initialize CPI builder with signer and proof | -| **derive_address** | `(custom_seeds: &[&[u8]], address_tree_pubkey: &Pubkey, program_id: &Pubkey)` | `([u8; 32], [u8; 32])` | Derive address from seeds and tree, return (address, address_seed) | - ---- - -## System Accounts Array (All Operations) - -All CPIs to Light System Program require these 8 accounts in remaining_accounts: - -| Index | Account | Pubkey/PDA | Description | -|-------|---------|------------|-------------| -| 0 | Light System Program | SySTEM1eSU2p4BGQfQpimFEWWSC1XDFeun3Nqzz3rT7 | Verifies proof, checks ownership, invokes Account Compression Program | -| 1 | CPI Signer | PDA from your program ID + `b"authority"` | Signs CPI from your program to Light System Program | -| 2 | Registered Program PDA | PDA from Light System Program | Controls which programs invoke Account Compression Program | -| 3 | Noop Program | noopb9bkMVfRPU8AsbpTUg8AQkHtKwMYZiFUjNRtMmV | Logs account state to ledger (v1 only) | -| 4 | Account Compression Authority | HZH7qSLcpAeDqCopVU4e5XkhT9j3JFsQiq8CmruY3aru | Signs CPI from Light System to Account Compression Program | -| 5 | Account Compression Program | compr6CUsB5m2jS4Y3831ztGSTnDpnKJTKS95d64XVq | Writes to state and address tree accounts | -| 6 | Invoking Program | Your program ID | Derives CPI signer and sets owner on created accounts | -| 7 | System Program | 11111111111111111111111111111111 | Transfers lamports for fees | \ No newline at end of file diff --git a/.claude/tasks/README.md b/.claude/tasks/README.md deleted file mode 100644 index 9422c84b..00000000 --- a/.claude/tasks/README.md +++ /dev/null @@ -1 +0,0 @@ -Collection of tasks, plans, and reports by subagents for ZK Compression documentation work. 
diff --git a/.context/trigger-docs-sync.yml b/.context/trigger-docs-sync.yml deleted file mode 100644 index 2fcecddd..00000000 --- a/.context/trigger-docs-sync.yml +++ /dev/null @@ -1,21 +0,0 @@ -# Add this file to the program-examples repo at: -# .github/workflows/trigger-docs-sync.yml - -name: Trigger Docs Sync - -on: - push: - branches: - - main - -jobs: - trigger-sync: - runs-on: ubuntu-latest - steps: - - name: Trigger docs repo sync - run: | - curl -X POST \ - -H "Accept: application/vnd.github.v3+json" \ - -H "Authorization: token ${{ secrets.DOCS_REPO_TOKEN }}" \ - https://api.github.com/repos/Lightprotocol/docs/dispatches \ - -d '{"event_type":"sync-examples"}' diff --git a/.gitignore b/.gitignore index 90c68241..f2dee70c 100644 --- a/.gitignore +++ b/.gitignore @@ -2,3 +2,4 @@ mintlify mintlify-docs/ /CLAUDE.md .windsurf.context/ +.claude/ diff --git a/client-library/client-guide.mdx b/client-library/client-guide.mdx new file mode 100644 index 00000000..bdbb3e0b --- /dev/null +++ b/client-library/client-guide.mdx @@ -0,0 +1,1471 @@ +--- +title: Client Guide +description: >- + Rust and Typescript client guides with step-by-step implementation and full + code examples. +--- + +import SystemAccountsList from '/snippets/compressed-pdas-system-accounts-list.mdx'; + +ZK Compression provides Rust and Typescript clients to interact with compressed accounts and tokens on Solana. + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Language | Package | Description |
| --- | --- | --- |
| **TypeScript** | [@lightprotocol/stateless.js](https://lightprotocol.github.io/light-protocol/stateless.js/index.html) | Client SDK for Compressed Accounts |
| **TypeScript** | [@lightprotocol/compressed-token](https://lightprotocol.github.io/light-protocol/compressed-token/index.html) | Client SDK for Compressed Tokens |
| **Rust** | [light-client](https://docs.rs/light-client) | Client SDK for Compressed Accounts and Tokens |
+ +# Key Points + +* **Fetch current and provide new data**: Include current and new account data in instructions for on-chain verification. +* **Validity proof**: Every instruction includes a cryptographic proof from the RPC that verifies a new address does not exist and/or the current account state. +* **Packed accounts**: Instructions require Light System Program and Merkle tree accounts. `PackedAccounts` converts their pubkeys to `u8` indices pointing to accounts in the instruction. + + + +
+ + ![](/images/client-create%20(1).png) + +
+
+ + ![](/images/client-create.png) + +
+
+ +
+ + ![](/images/client-update%20(1).png) + +
+
+ + ![](/images/client-update.png) + +
+
+ +
+ + ![](/images/client-close%20(1).png) + +
+
+ + ![](/images/client-close.png) + +
+
+ +
+ + ![](/images/client-reinit%20(1).png) + +
+
+ + ![](/images/client-reinit.png) + +
+
+ +
+ + ![](/images/client-burn%20(1).png) + +
+
+ + ![](/images/client-burn.png) + +
+
+
+ +# Get Started + + +## Setup + + + + + +Use the [API documentation](https://lightprotocol.github.io/light-protocol/) to look up specific function signatures, parameters, and return types. + + +### 1. Installation + + + + +```bash +npm install --save \ + @lightprotocol/stateless.js@0.22.1-alpha.1 \ + @lightprotocol/compressed-token@0.22.1-alpha.1 \ + @solana/web3.js +``` + + + + + +```bash +yarn add \ + @lightprotocol/stateless.js@0.22.1-alpha.1 \ + @lightprotocol/compressed-token@0.22.1-alpha.1 \ + @solana/web3.js +``` + + + + + +```bash +pnpm add \ + @lightprotocol/stateless.js@0.22.1-alpha.1 \ + @lightprotocol/compressed-token@0.22.1-alpha.1 \ + @solana/web3.js +``` + + + + +### 2. RPC Connection + +`Rpc` is a thin wrapper extending Solana's web3.js `Connection` class with compression-related endpoints. + + + + +```typescript +const rpc = createRpc('https://mainnet.helius-rpc.com/?api-key=YOUR_API_KEY'); +``` + + + + + +```typescript +const rpc = createRpc('https://devnet.helius-rpc.com/?api-key=YOUR_API_KEY'); +``` + + + + +1. Install the CLI + +```bash +npm install -g @lightprotocol/zk-compression-cli +``` + +2. Start a local Solana test validator, photon indexer, and prover server on default ports 8899, 8784, and 3001. + +```bash +light test-validator +``` + + + + + + +### 1. Dependencies + +```toml +[dependencies] +light-client = "0.16.0" +light-sdk = "0.16.0" +``` + +### 2. RPC Connection + +Connect to an RPC provider that supports ZK Compression, such as Helius and Triton. 
+ + + + +```rust +let config = LightClientConfig::new( + "https://api.mainnet-beta.solana.com".to_string(), + Some("https://mainnet.helius.xyz".to_string()), + Some("YOUR_API_KEY".to_string()) +); + +let mut client = LightClient::new(config).await?; + +client.payer = read_keypair_file("~/.config/solana/id.json")?; +``` + + + + + +```rust +let config = LightClientConfig::devnet( + Some("https://devnet.helius-rpc.com".to_string()), + Some("YOUR_API_KEY".to_string()) +); + +let mut client = LightClient::new(config).await?; + +client.payer = read_keypair_file("~/.config/solana/id.json")?; +``` + + + + + +```rust +let config = LightClientConfig::local(); + +let mut client = LightClient::new(config).await?; + +client.payer = read_keypair_file("~/.config/solana/id.json")?; +``` + +1. Install the CLI + +```bash +npm install -g @lightprotocol/zk-compression-cli +``` + +2. Start a single-node Solana cluster, an RPC node, and a prover node at ports 8899, 8784, and 3001. + +```bash +light test-validator +``` + + + + + + + + +## Address + +Derive a persistent address as a unique identifier for your compressed account, similar to [program-derived addresses (PDAs)](https://solana.com/docs/core/pda). 
+ +You derive addresses in two scenarios: +* **At account creation** - derive the address to create the account's persistent identifier, then pass it to `getValidityProofV0()` in the address array +* **Before building instructions** - derive the address to fetch existing accounts using `rpc.getCompressedAccount()` + + + + + + +```typescript +const addressTree = getDefaultAddressTreeInfo(); +const seed = deriveAddressSeed( + [Buffer.from('my-seed')], + programId +); +const address = deriveAddress( + seed, + addressTree.tree +); +``` + + + + +```typescript +const addressTree = await rpc.getAddressTreeInfoV2(); +const seed = deriveAddressSeedV2( + [Buffer.from('my-seed')] +); + +const address = deriveAddressV2( + seed, + addressTree.tree, + programId +); +``` + + + + + + + + + +```rust +use light_sdk::address::v1::derive_address; + +let address_tree_info = rpc.get_address_tree_v1(); +let (address, _) = derive_address( + &[b"my-seed"], + &address_tree_info.tree, + &program_id, +); +``` + + + + + +```rust +use light_sdk::address::v2::derive_address; + +let address_tree_info = rpc.get_address_tree_v2(); +let (address, _) = derive_address( + &[b"my-seed"], + &address_tree_info.tree, + &program_id, +); +``` + + + + + + +Like PDAs, compressed account addresses don't have a private key; rather, they're derived from the program that owns them. +* The key difference to PDAs is compressed addresses are stored in an address tree and include this tree in the address derivation. +* Different trees produce different addresses from identical seeds. You should check the address tree in your program. + + +The protocol maintains Merkle trees. You don't need to initialize custom trees. Find the [pubkeys for Merkle trees here](https://www.zkcompression.com/resources/addresses-and-urls). 
+ + + + +## Validity Proof + +Transactions with compressed accounts must include a validity proof: +* To **create** a compressed account, you prove the **new address doesn't already exist** in the address tree. +* In **other instructions**, you **prove the compressed account hash exists** in a state tree. +* You can **combine multiple addresses and hashes in one proof** to optimize compute cost and instruction data. + + +You fetch a validity proof from your RPC provider that supports ZK Compression, such as Helius or Triton. + + + + + + + +```typescript +const proof = await rpc.getValidityProofV0( + [], + [{ + address: bn(address.toBytes()), + tree: addressTree.tree, + queue: addressTree.queue + }] +); +``` + +**1. Pass these parameters**: + +* **Specify the new address**, `tree` and `queue` pubkeys from the address tree `TreeInfo`. +* When you create an account you don't reference a compressed account hash in the hash array (`[]`). The account doesn't exist in a state Merkle tree yet. + +For account creation, you prove the address does not exist yet in the address tree. + +**2. The RPC returns**: + +* The proof that the new address does not exist in the address tree. It is used in the instruction data. +* `rootIndices` array with root index. + * The root index points to the root in the address tree accounts root history array. + * This root is used by the `LightSystemProgram` to verify the validity proof. + + + + +```typescript +const proof = await rpc.getValidityProofV0( + [{ + hash: compressedAccount.hash, + tree: compressedAccount.treeInfo.tree, + queue: compressedAccount.treeInfo.queue + }], + [] +); +``` + +**1. Pass these parameters**: + +Specify the **account hash**, `tree` and `queue` pubkeys from the compressed account's `TreeInfo`. + + +* You don't specify the address for update, close, reinitialize, and burn instructions. +* The proof **verifies the account hash exists in the state tree** for these instructions. 
+* The validity proof structure is identical. The difference is in your program's instruction handler. + + +**2. The RPC returns**: + +* The proof that the account hash exists in the state tree for your instruction data. +* `rootIndices` and `leafIndices` arrays with proof metadata to pack accounts. + + + + + + + + +```rust +let rpc_result = rpc + .get_validity_proof( + vec![], + vec![AddressWithTree { + address: *address, + tree: address_tree_info.tree + }], + None, + ) + .await? + .value; +``` + + +**1. Pass these parameters**: +* **Specify the new address** and `tree` pubkey from the address tree `TreeInfo`. The `queue` pubkey is only required in TypeScript. +* When you create an account you don't reference a compressed account hash in the hash array (`vec![]`). + +For account creation, you prove the address does not exist yet in the address tree. + + +**2. The RPC returns `ValidityProofWithContext`**: + +* The proof that the new address does not exist in the address tree for your instruction data. +* `addresses` with the public key and metadata of the address tree to pack accounts. + + + + +```rust +let rpc_result = rpc + .get_validity_proof( + vec![compressed_account.hash], + vec![], + None, + ) + .await? + .value; +``` + +**1. Pass these parameters**: + +Specify the **account hash**, `tree` and `queue` pubkeys from the compressed account's `TreeInfo`. + +* You don't specify the address for update, close, reinitialize, and burn instructions. +* The proof **verifies the account hash exists in the state tree** for these instructions. +* The validity proof structure is identical. The difference is in your program's instruction handler. + +**2. The RPC returns `ValidityProofWithContext`**: + +* The proof that the **account hash exists in the state tree** for your instruction data +* `accounts` with the **public key and metadata of the state tree** to pack accounts. 
+ + + + + +### Optimize with Combined Proofs + +Depending on the **Merkle tree version** (V1 or V2), you can prove **in a single proof**: +* multiple addresses, +* multiple account hashes, or +* a combination of addresses and account hashes. + + + +| | | +| ----------------------- | --------------------------------------------------- | +| Account Hash-only | 1, 2, 3, 4, or 8 hashes | +| Address-only | 1 and 2 addresses | +| Mixed (hash + address) | Any combination of
**1, 2, 3, 4, or 8** account hashes **and**
**1 or 2** new addresses | +
+ + + +| | | +| ----------------------- | --------------------------------------------------- | +| Account Hash-only | 1 to 8 hashes | +| Address-only | 1 to 8 addresses | +| Mixed (hash + address) | Any combination of
**1 to 4** account hashes **and**
**1 to 4** new addresses |
+
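As a sketch of the limits in the tables above, here is a hypothetical helper (not part of the SDK) that checks whether a combination of account hashes and new addresses fits a single V1 validity proof:

```typescript
// Hypothetical helper, not part of @lightprotocol/stateless.js:
// checks a (hashes, addresses) combination against the V1 limits above.
const V1_HASH_COUNTS = [1, 2, 3, 4, 8]; // valid hash counts (alone or mixed)
const V1_ADDRESS_COUNTS = [1, 2];       // valid address counts (alone or mixed)

function fitsSingleV1Proof(hashCount: number, addressCount: number): boolean {
  if (hashCount === 0 && addressCount === 0) return false; // nothing to prove
  if (addressCount === 0) return V1_HASH_COUNTS.includes(hashCount);
  if (hashCount === 0) return V1_ADDRESS_COUNTS.includes(addressCount);
  // Mixed: any combination of 1, 2, 3, 4, or 8 hashes and 1 or 2 addresses.
  return V1_HASH_COUNTS.includes(hashCount) && V1_ADDRESS_COUNTS.includes(addressCount);
}
```

For example, one hash plus one address (the create-and-update case) fits a single proof, while 5 hashes alone would need to be split across proofs.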
+ + +**Advantages of combined proofs**: +* You only add **one 128 byte validity proof** to your instruction data. +* This can **optimize** your **transaction's size** to stay inside the 1232 byte instruction data limit. +* **Compute unit consumption is 100k CU** per `ValidityProof` verification by the Light System Program. + + +### Example Create Address & Update Account in one Proof + +In this example, we generate one proof that proves that an account exists and that a new address does not exist yet. + + + + +```typescript +const proof = await rpc.getValidityProofV0( + [{ + hash: compressedAccount.hash, + tree: compressedAccount.treeInfo.tree, + queue: compressedAccount.treeInfo.queue + }], + [{ + address: bn(address.toBytes()), + tree: addressTree.tree, + queue: addressTree.queue + }] +); +``` + +**1. Pass these parameters**: + +* Specify one or more **account hashes**, `tree` and `queue` pubkeys from the compressed account's `TreeInfo`. +* Specify one or more **new addresses** with their `tree` and `queue` pubkeys from the address tree `TreeInfo`. + +**2. The RPC returns**: + +* A single combined proof that proves both the **account hash exists in the state tree** and the **new address does not exist in the address tree** for your instruction data +* `rootIndices` and `leafIndices` arrays with proof metadata to pack accounts. + + + + +```rust +let rpc_result = rpc + .get_validity_proof( + vec![compressed_account.hash], + vec![AddressWithTree { + address: *address, + tree: address_tree_info.tree + }], + None, + ) + .await? + .value; +``` + +**1. Pass these parameters**: + +* Specify one or more **compressed account hashes**. +* Specify one or more **derived addresses** with their `tree` pubkeys from the address tree `TreeInfo`. The `queue` pubkey is only required in TypeScript. + +**2. 
The RPC returns `ValidityProofWithContext`**: + +* A single combined proof that verifies both the **account hash exists in the state tree** and the **new address does not exist in the address tree** for your instruction data +* New `addresses` with the public key and metadata of the address tree to pack accounts. +* `accounts` with the public key and metadata of the state tree to pack accounts. + + + + +See the full [create-and-update program example for this proof combination with tests](https://github.com/Lightprotocol/program-examples/tree/main/create-and-update). + +
+ + +## Accounts + +To interact with a compressed account you need system accounts, such as the Light System Program, +and Merkle tree accounts. + +Compressed account metadata (`TreeInfo`) includes Merkle tree pubkeys. +To optimize instruction data, we pack the `pubkeys` of `TreeInfo` into the `u8` indices of `PackedTreeInfo`. + +The `u8` indices point to the Merkle tree accounts in the instruction's accounts. +You can create the instruction's accounts and indices with `PackedAccounts`. + +We recommend appending the `PackedAccounts` accounts after your program-specific accounts; in Anchor, pass them in `remaining_accounts`. + + +``` + PackedAccounts + ┌--------------------------------------------┐ +[custom accounts] [pre accounts][system accounts][tree accounts] + ↑ ↑ ↑ + Signers, Light System State trees, + fee payer accounts address trees, +``` + +Custom accounts are program-specific accounts you pass manually in your instruction, typically through Anchor's account struct. + + + + +Optionally, custom accounts (signers, PDAs for CPIs) and other accounts can be added as pre accounts. +Pre accounts can simplify building the accounts for Pinocchio and native programs. + + + + + +**Light System accounts** are 6 required accounts for proof verification and CPI calls to update state and address trees. + + + + + + +**Merkle tree accounts** are the accounts of state trees and address trees that store compressed account hashes and addresses. + + + + + + + + + + +```typescript +// 1. Initialize helper +const packedAccounts + = new PackedAccounts(); + +// 2. Add light system accounts +const systemAccountConfig + = SystemAccountMetaConfig.new(programId); +packedAccounts.addSystemAccounts(systemAccountConfig); + +// 3. 
Get indices for tree accounts +const addressMerkleTreePubkeyIndex + = packedAccounts.insertOrGet(addressTree); +const addressQueuePubkeyIndex + = packedAccounts.insertOrGet(addressQueue); + +const packedAddressTreeInfo = { + rootIndex: proofRpcResult.rootIndices[0], + addressMerkleTreePubkeyIndex, + addressQueuePubkeyIndex, +}; + +// 4. Get index for output state tree +const stateTreeInfos = await rpc.getStateTreeInfos(); +const outputStateTree = selectStateTreeInfo(stateTreeInfos).tree; +const outputStateTreeIndex + = packedAccounts.insertOrGet(outputStateTree); + +// 5. Convert to Account Metas +const { remainingAccounts } + = packedAccounts.toAccountMetas(); +``` + + + +```typescript +// 1. Initialize helper +const packedAccounts + = new PackedAccounts(); + +// 2. Add system accounts +const systemAccountConfig + = SystemAccountMetaConfig.new(programId); +packedAccounts.addSystemAccounts(systemAccountConfig); + +// 3. Get indices for tree accounts +const merkleTreePubkeyIndex + = packedAccounts.insertOrGet(compressedAccount.treeInfo.tree); +const queuePubkeyIndex + = packedAccounts.insertOrGet(compressedAccount.treeInfo.queue); + +const packedInputAccounts = { + merkleTreePubkeyIndex, + queuePubkeyIndex, + leafIndex: proofRpcResult.leafIndices[0], + rootIndex: proofRpcResult.rootIndices[0], +}; + +const outputStateTreeIndex + = packedAccounts.insertOrGet(outputStateTree); + +// 4. Convert to Account Metas +const { remainingAccounts } + = packedAccounts.toAccountMetas(); +``` + + + + + + + + + +```rust +// 1. Initialize helper +let mut remaining_accounts = PackedAccounts::default(); + +// 2. Add system accounts +let config + = SystemAccountMetaConfig::new(program_id); + remaining_accounts.add_system_accounts(config)?; + +// 3. Get indices for tree accounts +let packed_accounts + = rpc_result.pack_tree_infos(&mut remaining_accounts); + +// 4. 
Get index for output state tree +let output_state_tree_info = rpc.get_random_state_tree_info()?; +let output_state_tree_index + = output_state_tree_info.pack_output_tree_index(&mut remaining_accounts)?; + +// 5. Convert to Account Metas +let (remaining_accounts_metas, _, _) + = remaining_accounts.to_account_metas(); +``` + + + + +```rust +// 1. Initialize helper +let mut remaining_accounts = PackedAccounts::default(); + +// 2. Add system accounts +let config + = SystemAccountMetaConfig::new(program_id); + remaining_accounts.add_system_accounts(config)?; + +// 3. Get indices for tree accounts +let packed_tree_accounts = rpc_result + .pack_tree_infos(&mut remaining_accounts) + .state_trees // includes output_state_tree_index + .unwrap(); + +// 4. Convert to Account Metas +let (remaining_accounts_metas, _, _) + = remaining_accounts.to_account_metas(); +``` + + + + + + + +Depending on your instruction you must include different tree and queue accounts. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Instruction | Address Tree | State Tree | Nullifier Queue | Output State Tree |
| --- | --- | --- | --- | --- |
| Create | ✓ | - | - | ✓ |
| Update / Close / Reinit | - | ✓ | ✓ | ✓ |
| Burn | - | ✓ | ✓ | - |
+ +* The **Address tree** is used to derive and store a new address (create-only) +* The **State tree** is used to reference the existing compressed account hash. Therefore not used by create. +* The **Nullifier queue** is used to nullify the existing compressed account hash to prevent double spending. Therefore not used by create. +* The **Output State tree** is used to store the new or updated compressed account hash. + * **Create only** - Choose any available state tree, or use a pre-selected tree to store the new compressed account. + * **Update/Close/Reinit** - Use the state tree of the existing compressed account as output state tree. + * **Mixed instructions (create + update in same tx)** - Use the state tree from the existing account as output state tree. + * **Burn** - Burn does not produce output state and does not need an output state tree. + +
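The bullet points above can be condensed into a small lookup table (a hypothetical sketch mirroring the V1 requirements, not an SDK type):

```typescript
// Which tree accounts each V1 instruction must include, per the table above.
// Hypothetical sketch for illustration, not an SDK type.
interface V1TreeAccounts {
  addressTree: boolean;     // derives and stores a new address (create only)
  stateTree: boolean;       // references the existing account hash
  nullifierQueue: boolean;  // nullifies the existing account hash
  outputStateTree: boolean; // stores the new or updated account hash
}

const V1_TREE_ACCOUNTS: Record<string, V1TreeAccounts> = {
  create: { addressTree: true,  stateTree: false, nullifierQueue: false, outputStateTree: true  },
  update: { addressTree: false, stateTree: true,  nullifierQueue: true,  outputStateTree: true  },
  close:  { addressTree: false, stateTree: true,  nullifierQueue: true,  outputStateTree: true  },
  reinit: { addressTree: false, stateTree: true,  nullifierQueue: true,  outputStateTree: true  },
  burn:   { addressTree: false, stateTree: true,  nullifierQueue: true,  outputStateTree: false },
};
```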
+ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
| Instruction | Address Tree | State Tree (includes nullifier queue) | Output Queue |
| --- | --- | --- | --- |
| Create | ✓ | - | ✓ |
| Update / Close / Reinit | - | ✓ | ✓ |
| Burn | - | ✓ | - |
+ +* **Address tree**: only used to derive and store a new address. +* **State tree**: used to reference the existing compressed account hash. Therefore not used by create. V2 combines the state tree and nullifier queue into a single account. +* **Output Queue**: used to store compressed account hashes. A forester node updates the state tree asynchronously. + * **Create only** - Choose any available queue, or use a pre-selected queue to store the new compressed account. + * **Update/Close/Reinit** - Use the queue of the existing compressed account as output queue. + * **Mixed instructions (create + update in same tx)** - Use the queue from the existing account as output queue. + * **Burn** - Do not include an output queue. +
+
+ + +V2 is on Devnet and reduces compute unit consumption by up to 70%. + + +
+ + +## Instruction Data + +Build your instruction data with the validity proof, tree account indices, and account data. + + + + + + +```typescript +const proof = { + 0: proofRpcResult.compressedProof, +}; + +const instructionData = { + proof, + addressTreeInfo: packedAddressTreeInfo, + outputStateTreeIndex: outputStateTreeIndex, + message, +}; +``` + +1. Include `proof` to **prove the address does not exist** in the address tree +2. Specify the **Merkle trees that store the address and account hash** with the indices from your packed accounts. +3. Pass **initial account data** + + + + +```typescript +const proof = { + 0: proofRpcResult.compressedProof, +}; + +const instructionData = { + proof, + accountMeta: { + treeInfo: packedStateTreeInfo, + address: compressedAccount.address, + outputStateTreeIndex: outputStateTreeIndex + }, + currentMessage: currentAccount.message, + newMessage, +}; +``` + +1. Include `proof` to prove the **account hash exists** in the state tree +2. Specify the existing account's address, its `packedStateTreeInfo` and the output state tree to store the updated compressed account hash. +3. Pass **current account data** and **new data** + +Use the state tree of the existing compressed account as output state tree. + + + + + +```typescript +const proof = { + 0: proofRpcResult.compressedProof, +}; + +const instructionData = { + proof, + accountMeta: { + treeInfo: packedStateTreeInfo, + address: compressedAccount.address, + outputStateTreeIndex: outputStateTreeIndex + }, + currentMessage: currentAccount.message, +}; +``` + +1. Include `proof` to prove the **account hash exists** in the state tree +2. Specify the existing account's address, its `packedStateTreeInfo` and the output state tree to store the **hash with zero values** for the closed account. +3. Pass **current account data** + +Use the state tree of the existing compressed account as output state tree. 
+ + + + + +```typescript +const proof = { + 0: proofRpcResult.compressedProof, +}; + +const instructionData = { + proof, + accountMeta: { + treeInfo: packedStateTreeInfo, + address: compressedAccount.address, + outputStateTreeIndex: outputStateTreeIndex + }, +}; +``` + +1. Include `proof` to prove the **account hash exists** in the state tree +2. Specify the existing account's address, its `packedStateTreeInfo` and the output state tree that will store the reinitialized account hash +3. Reinitialize creates an account with **default-initialized values** +* These values are `Pubkey` as all zeros, numbers as `0`, strings as empty. +* To set custom values, update the account in the same or a separate transaction. + +Use the state tree of the existing compressed account as output state tree. + + + + + +```typescript +const proof = { + 0: proofRpcResult.compressedProof, +}; + +const instructionData = { + proof, + accountMeta: { + treeInfo: packedStateTreeInfo, + address: compressedAccount.address + }, + currentMessage: currentAccount.message, +}; +``` + +1. Include `proof` to prove the **account hash exists** in the state tree +2. Specify the existing account's address and its `packedStateTreeInfo`. You don't need to specify the output state tree, since burn permanently removes the account. +3. Pass **current account data** + + + + + + + + +```rust +let instruction_data = create::instruction::CreateAccount { + proof: rpc_result.proof, + address_tree_info: packed_accounts.address_trees[0], + output_state_tree_index: output_state_tree_index, + message, +} +.data(); +``` + +1. Include `proof` to prove the **address does not exist** in the address tree +2. Specify the **address tree and output state tree** with the indices from your packed accounts +3. 
Pass **initial account data** + + + + +```rust +let instruction_data = update::instruction::UpdateAccount { + proof: rpc_result.proof, + current_account, + account_meta: CompressedAccountMeta { + tree_info: packed_tree_accounts.packed_tree_infos[0], + address: compressed_account.address.unwrap(), + output_state_tree_index: packed_tree_accounts.output_tree_index, + }, + new_message, +} +.data(); +``` + + +Use the state tree of the existing compressed account as output state tree. + + +1. Include `proof` to prove the **account hash exists** in the state tree +2. Specify the existing account's address, its `packed_tree_infos` and the output state tree to store the updated compressed account hash +3. Pass **current account data** and **new data** + + + + + +```rust +let instruction_data = close::instruction::CloseAccount { + proof: rpc_result.proof, + account_meta: CompressedAccountMeta { + tree_info: packed_tree_accounts.packed_tree_infos[0], + address: compressed_account.address.unwrap(), + output_state_tree_index: packed_tree_accounts.output_tree_index, + }, + current_message, +} +.data(); +``` + + +Use the state tree of the existing compressed account as output state tree. + + +1. Include `proof` to prove the **account hash exists** in the state tree +2. Specify the existing account's address, its `packed_tree_infos` and the output state tree to store the **hash with zero values** for the closed account +3. Pass **current account data** + + + + + +```rust +let instruction_data = reinit::instruction::ReinitAccount { + proof: rpc_result.proof, + account_meta: CompressedAccountMeta { + tree_info: packed_tree_accounts.packed_tree_infos[0], + address: compressed_account.address.unwrap(), + output_state_tree_index: packed_tree_accounts.output_tree_index, + }, +} +.data(); +``` + + +Use the state tree of the existing compressed account as output state tree. + + +1. Include `proof` to prove the **account hash exists** in the state tree +2. 
Specify the existing account's address, its `packed_tree_infos` and the output state tree that will store the reinitialized account hash +3. Reinitialize creates an account with **default-initialized values** +* These values are `Pubkey` as all zeros, numbers as `0`, strings as empty. +* To set custom values, update the account in the same or a separate transaction. + + + + +```rust +let instruction_data = burn::instruction::BurnAccount { + proof: rpc_result.proof, + account_meta: CompressedAccountMetaBurn { + tree_info: packed_tree_accounts.packed_tree_infos[0], + address: compressed_account.address.unwrap(), + }, + current_message, +} +.data(); +``` + +1. Include `proof` to prove the **account hash exists** in the state tree +2. Specify the existing account's address and its `packed_tree_infos`. You don't need to specify the output state tree, since burn permanently removes the account +3. Pass **current account data** + + + + + + +* When creating or updating multiple accounts in a single transaction, use one output state tree. +* Minimize the number of different trees per transaction to keep instruction data light. + + + + + +## Instruction + +Build the instruction with your `program_id`, `accounts`, and `data`. +* Accounts combine your program-specific accounts and `PackedAccounts`. +* Data includes your compressed accounts, validity proof, and other instruction data. 
+ + + + +```typescript +// Accounts +// ┌-------------------------------┐ +// .accounts() .remainingAccounts() +// [custom] [PackedAccounts] + +const instruction = await program.methods + .yourInstruction(instructionData) + .accounts({ + signer: signer.publicKey, + }) + .remainingAccounts(remainingAccounts) + .instruction(); +``` + + + + + +```rust +// Accounts +// ┌---------------------------------┐ +// [custom accounts] [PackedAccounts] +let accounts = [vec![AccountMeta::new(payer.pubkey(), true)], remaining_accounts].concat(); + +let instruction = Instruction { + program_id: program_id, + accounts, + data: instruction_data, +}; +``` + + + + + + + +## Send Transaction + + +
+ +# Full Code Examples + + + +```typescript expandable wrap +// create.ts +import * as anchor from "@coral-xyz/anchor"; +import { Program, web3 } from "@coral-xyz/anchor"; +import { Create } from "../target/types/create"; +import idl from "../target/idl/create.json"; +import { + bn, + CompressedAccountWithMerkleContext, + confirmTx, + createRpc, + defaultStaticAccountsStruct, + defaultTestStateTreeAccounts, + deriveAddress, + deriveAddressSeed, + LightSystemProgram, + PackedAccounts, + Rpc, + sleep, + SystemAccountMetaConfig, +} from "@lightprotocol/stateless.js"; +import * as assert from "assert"; + +const path = require("path"); +const os = require("os"); +require("dotenv").config(); + +const anchorWalletPath = path.join(os.homedir(), ".config/solana/id.json"); +process.env.ANCHOR_WALLET = anchorWalletPath; + +describe("test-anchor", () => { + const program = anchor.workspace.Create as Program; + const coder = new anchor.BorshCoder(idl as anchor.Idl); + + it("create compressed account", async () => { + let signer = new web3.Keypair(); + let rpc = createRpc( + "http://127.0.0.1:8899", + "http://127.0.0.1:8784", + "http://127.0.0.1:3001", + { + commitment: "confirmed", + }, + ); + let lamports = web3.LAMPORTS_PER_SOL; + await rpc.requestAirdrop(signer.publicKey, lamports); + await sleep(2000); + + const outputStateTree = defaultTestStateTreeAccounts().merkleTree; + const addressTree = defaultTestStateTreeAccounts().addressTree; + const addressQueue = defaultTestStateTreeAccounts().addressQueue; + + const messageSeed = new TextEncoder().encode("message"); + const seed = deriveAddressSeed( + [messageSeed, signer.publicKey.toBytes()], + new web3.PublicKey(program.idl.address), + ); + const address = deriveAddress(seed, addressTree); + + // Create compressed account with message + const txId = await createCompressedAccount( + rpc, + addressTree, + addressQueue, + address, + program, + outputStateTree, + signer, + "Hello, compressed world!", + ); + 
console.log("Transaction ID:", txId); + + // Wait for indexer to process the transaction + const slot = await rpc.getSlot(); + await rpc.confirmTransactionIndexed(slot); + + let compressedAccount = await rpc.getCompressedAccount(bn(address.toBytes())); + let myAccount = coder.types.decode( + "MyCompressedAccount", + compressedAccount.data.data, + ); + + console.log("Decoded data owner:", myAccount.owner.toBase58()); + console.log("Decoded data message:", myAccount.message); + + // Verify account data + assert.ok( + myAccount.owner.equals(signer.publicKey), + "Owner should match signer public key" + ); + assert.strictEqual( + myAccount.message, + "Hello, compressed world!", + "Message should match the created message" + ); + }); +}); + +async function createCompressedAccount( + rpc: Rpc, + addressTree: anchor.web3.PublicKey, + addressQueue: anchor.web3.PublicKey, + address: anchor.web3.PublicKey, + program: anchor.Program, + outputStateTree: anchor.web3.PublicKey, + signer: anchor.web3.Keypair, + message: string, +) { + const proofRpcResult = await rpc.getValidityProofV0( + [], + [ + { + tree: addressTree, + queue: addressQueue, + address: bn(address.toBytes()), + }, + ], + ); + const systemAccountConfig = new SystemAccountMetaConfig(program.programId); + let remainingAccounts = new PackedAccounts(); + remainingAccounts.addSystemAccounts(systemAccountConfig); + + const addressMerkleTreePubkeyIndex = + remainingAccounts.insertOrGet(addressTree); + const addressQueuePubkeyIndex = remainingAccounts.insertOrGet(addressQueue); + const packedAddressTreeInfo = { + rootIndex: proofRpcResult.rootIndices[0], + addressMerkleTreePubkeyIndex, + addressQueuePubkeyIndex, + }; + const outputStateTreeIndex = + remainingAccounts.insertOrGet(outputStateTree); + let proof = { + 0: proofRpcResult.compressedProof, + }; + const computeBudgetIx = web3.ComputeBudgetProgram.setComputeUnitLimit({ + units: 1000000, + }); + let tx = await program.methods + .createAccount(proof, 
packedAddressTreeInfo, outputStateTreeIndex, message) + .accounts({ + signer: signer.publicKey, + }) + .preInstructions([computeBudgetIx]) + .remainingAccounts(remainingAccounts.toAccountMetas().remainingAccounts) + .signers([signer]) + .transaction(); + tx.recentBlockhash = (await rpc.getRecentBlockhash()).blockhash; + tx.sign(signer); + + const sig = await rpc.sendTransaction(tx, [signer]); + await confirmTx(rpc, sig); + return sig; +} +``` + + + +```rust expandable wrap +// test.rs +#![cfg(feature = "test-sbf")] + +use anchor_lang::AnchorDeserialize; +use light_program_test::{ + program_test::LightProgramTest, AddressWithTree, Indexer, ProgramTestConfig, Rpc, RpcError, +}; +use light_sdk::{ + address::v1::derive_address, + instruction::{PackedAccounts, SystemAccountMetaConfig}, +}; +use create::MyCompressedAccount; +use solana_sdk::{ + instruction::{AccountMeta, Instruction}, + signature::{Keypair, Signature, Signer}, +}; + +#[tokio::test] +async fn test_create() { + let config = ProgramTestConfig::new(true, Some(vec![("create", create::ID)])); + let mut rpc = LightProgramTest::new(config).await.unwrap(); + let payer = rpc.get_payer().insecure_clone(); + + let address_tree_info = rpc.get_address_tree_v1(); + + let (address, _) = derive_address( + &[b"message", payer.pubkey().as_ref()], + &address_tree_info.tree, + &create::ID, + ); + + create_compressed_account(&mut rpc, &payer, &address, "Hello, compressed world!".to_string()) + .await + .unwrap(); + + let compressed_account = rpc + .get_compressed_account(address, None) + .await + .unwrap() + .value + .unwrap(); + let data = &compressed_account.data.as_ref().unwrap().data; + let account = MyCompressedAccount::deserialize(&mut &data[..]).unwrap(); + assert_eq!(account.owner, payer.pubkey()); + assert_eq!(account.message, "Hello, compressed world!"); +} + +async fn create_compressed_account( + rpc: &mut LightProgramTest, + payer: &Keypair, + address: &[u8; 32], + message: String, +) -> Result { + let config = 
SystemAccountMetaConfig::new(create::ID); + let mut remaining_accounts = PackedAccounts::default(); + remaining_accounts.add_system_accounts(config)?; + + let address_tree_info = rpc.get_address_tree_v1(); + + let rpc_result = rpc + .get_validity_proof( + vec![], + vec![AddressWithTree { + address: *address, + tree: address_tree_info.tree, + }], + None, + ) + .await? + .value; + let packed_accounts = rpc_result.pack_tree_infos(&mut remaining_accounts); + + let output_state_tree_index = rpc + .get_random_state_tree_info()? + .pack_output_tree_index(&mut remaining_accounts)?; + + let (remaining_accounts, _, _) = remaining_accounts.to_account_metas(); + + let instruction = Instruction { + program_id: create::ID, + accounts: [ + vec![AccountMeta::new(payer.pubkey(), true)], + remaining_accounts, + ] + .concat(), + data: { + use anchor_lang::InstructionData; + create::instruction::CreateAccount { + proof: rpc_result.proof, + address_tree_info: packed_accounts.address_trees[0], + output_state_tree_index: output_state_tree_index, + message, + } + .data() + }, + }; + + rpc.create_and_send_transaction(&[instruction], &payer.pubkey(), &[payer]) + .await +} +``` + + + +Find all [full code examples with Rust and Typescript tests here](https://github.com/Lightprotocol/program-examples/tree/add-basic-operations-examples/basic-operations/anchor) for the following instructions: +- **create** - Initialize a new compressed account +- **update** - Modify data of an existing compressed account +- **close** - Close a compressed account (it can be initialized again). +- **reinit** - Reinitialize a closed account +- **burn** - Permanently delete a compressed account (it cannot be initialized again). + + +For help with debugging, see the [Error Cheatsheet](https://www.zkcompression.com/resources/error-cheatsheet) and [AskDevin](https://deepwiki.com/Lightprotocol/light-protocol/3.1-javascripttypescript-sdks). 
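The full examples derive each compressed account's address deterministically from seeds, the address tree, and the program ID. As a conceptual sketch of that determinism (using SHA-256 as a stand-in; the SDK's `deriveAddressSeed`/`deriveAddress` helpers use their own hash construction):

```typescript
import { createHash } from "crypto";

// Conceptual sketch only: a deterministic 32-byte digest over
// (program id, address tree, seeds). Illustrates why the same seeds
// always resolve to the same address; not the real SDK derivation.
function sketchDeriveAddress(
  programId: string,
  addressTree: string,
  seeds: Uint8Array[],
): Buffer {
  const h = createHash("sha256");
  h.update(programId);
  h.update(addressTree);
  for (const seed of seeds) h.update(seed);
  return h.digest(); // 32 bytes
}

const enc = new TextEncoder();
const a = sketchDeriveAddress("Prog111", "Tree111", [enc.encode("message")]);
const b = sketchDeriveAddress("Prog111", "Tree111", [enc.encode("message")]);
const c = sketchDeriveAddress("Prog111", "Tree111", [enc.encode("other")]);
// a and b are identical; c differs because the seed changed
```

This is why the tests can fetch the account again later with nothing but the seeds: the address is recomputable client-side without storing it anywhere.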
+ + +# Next Steps + + + diff --git a/compressed-pdas/client-library.mdx b/compressed-pdas/client-library.mdx deleted file mode 100644 index 8244cc27..00000000 --- a/compressed-pdas/client-library.mdx +++ /dev/null @@ -1,4 +0,0 @@ ---- -title: Client Library -description: Overview to Rust and Typescript client guides. Guides include step-by-step implementation and full code examples. ---- diff --git a/compressed-pdas/create-a-program-with-compressed-pdas.mdx b/compressed-pdas/create-a-program-with-compressed-pdas.mdx index 43d97337..9a68059f 100644 --- a/compressed-pdas/create-a-program-with-compressed-pdas.mdx +++ b/compressed-pdas/create-a-program-with-compressed-pdas.mdx @@ -182,9 +182,9 @@ Caused by: # Next Steps diff --git a/compressed-pdas/guides.mdx b/compressed-pdas/guides.mdx index 7baf8592..c25c1694 100644 --- a/compressed-pdas/guides.mdx +++ b/compressed-pdas/guides.mdx @@ -1,6 +1,6 @@ --- -title: Guides -description: Overview and comparison of guides to create, update, close, reinitialize, and burn permanently compressed accounts. Guides include step-by-step implementation and full code examples. +title: Overview +description: Overview to guides for Solana programs to create, update, close, reinitialize, and burn permanently compressed accounts. 
sidebarTitle: "Overview" --- diff --git a/compressed-pdas/guides/how-to-burn-compressed-accounts.mdx b/compressed-pdas/guides/how-to-burn-compressed-accounts.mdx index 416d6635..7e433509 100644 --- a/compressed-pdas/guides/how-to-burn-compressed-accounts.mdx +++ b/compressed-pdas/guides/how-to-burn-compressed-accounts.mdx @@ -531,7 +531,7 @@ fn burn(accounts: &[AccountInfo], instruction_data: &[u8]) -> Result<(), LightSd title="Build a client for your program" icon="chevron-right" color="#0066ff" - href="/compressed-pdas/client-library" + href="/client-library/client-guide" horizontal /> Result<(), LightS title="Build a client for your program" icon="chevron-right" color="#0066ff" - href="/compressed-pdas/client-library" + href="/client-library/client-guide" horizontal /> Result<(), Light title="Build a client for your program" icon="chevron-right" color="#0066ff" - href="/compressed-pdas/client-library" + href="/client-library/client-guide" horizontal /> Basic Operations include: -- **create** - Initialize a new compressed account. -- **update** - Modify data in an existing compressed account. -- **close** - Clear account data and preserve its address. -- **reinit** - Reinitialize a closed account with the same address. -- **burn** - Permanently delete a compressed account. +- **create** - Initialize a new compressed account +- **update** - Modify data of an existing compressed account +- **close** - Close a compressed account (it can be initialized again). +- **reinit** - Reinitialize a closed account +- **burn** - Permanently delete a compressed account (it cannot be initialized again).
## Counter Program @@ -47,6 +47,6 @@ Full compressed account lifecycle (create, increment, decrement, reset, close): title="Follow our guides to these program examples." icon="chevron-right" color="#0066ff" - href="/compressed-pdas/guides/guides" + href="/compressed-pdas/guides" horizontal /> \ No newline at end of file diff --git a/docs.json b/docs.json index 3b508785..f19bc54b 100644 --- a/docs.json +++ b/docs.json @@ -74,7 +74,7 @@ "pages": [ "compressed-pdas/create-a-program-with-compressed-pdas", { - "group": "Guides", + "group": "Program Guides", "pages": [ "compressed-pdas/guides", "compressed-pdas/guides/how-to-create-compressed-accounts", @@ -84,7 +84,246 @@ "compressed-pdas/guides/how-to-burn-compressed-accounts" ] }, - "compressed-pdas/program-examples" + "compressed-pdas/program-examples", + { + "group": "Program Examples (Source)", + "hidden": true, + "pages": [ + "compressed-pdas/program-examples-mdx/README", + { + "group": "Account Comparison", + "pages": [ + "compressed-pdas/program-examples-mdx/account-comparison/Anchor-toml", + "compressed-pdas/program-examples-mdx/account-comparison/Cargo-toml", + "compressed-pdas/program-examples-mdx/account-comparison/package-json", + "compressed-pdas/program-examples-mdx/account-comparison/tsconfig-json", + "compressed-pdas/program-examples-mdx/account-comparison/programs/account-comparison/Cargo-toml", + "compressed-pdas/program-examples-mdx/account-comparison/programs/account-comparison/Xargo-toml", + "compressed-pdas/program-examples-mdx/account-comparison/programs/account-comparison/src/lib-rs", + "compressed-pdas/program-examples-mdx/account-comparison/programs/account-comparison/tests/test_compressed_account-rs", + "compressed-pdas/program-examples-mdx/account-comparison/programs/account-comparison/tests/test_solana_account-rs" + ] + }, + { + "group": "Basic Operations", + "pages": [ + { + "group": "Anchor", + "pages": [ + "compressed-pdas/program-examples-mdx/basic-operations/anchor/README", + 
"compressed-pdas/program-examples-mdx/basic-operations/anchor/package-json", + { + "group": "Burn", + "pages": [ + "compressed-pdas/program-examples-mdx/basic-operations/anchor/burn/Anchor-toml", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/burn/Cargo-toml", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/burn/package-json", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/burn/tsconfig-json", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/burn/programs/burn/Cargo-toml", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/burn/programs/burn/Xargo-toml", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/burn/programs/burn/src/lib-rs", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/burn/programs/burn/tests/test-rs", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/burn/tests/burn-ts" + ] + }, + { + "group": "Close", + "pages": [ + "compressed-pdas/program-examples-mdx/basic-operations/anchor/close/Anchor-toml", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/close/Cargo-toml", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/close/package-json", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/close/tsconfig-json", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/close/programs/close/Cargo-toml", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/close/programs/close/Xargo-toml", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/close/programs/close/src/lib-rs", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/close/programs/close/tests/test-rs", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/close/tests/close-ts" + ] + }, + { + "group": "Create", + "pages": [ + "compressed-pdas/program-examples-mdx/basic-operations/anchor/create/Anchor-toml", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/create/Cargo-toml", 
+ "compressed-pdas/program-examples-mdx/basic-operations/anchor/create/package-json", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/create/tsconfig-json", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/create/programs/create/Cargo-toml", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/create/programs/create/Xargo-toml", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/create/programs/create/src/lib-rs", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/create/programs/create/tests/test-rs", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/create/tests/create-ts" + ] + }, + { + "group": "Reinit", + "pages": [ + "compressed-pdas/program-examples-mdx/basic-operations/anchor/reinit/Anchor-toml", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/reinit/Cargo-toml", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/reinit/package-json", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/reinit/tsconfig-json", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/reinit/programs/reinit/Cargo-toml", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/reinit/programs/reinit/Xargo-toml", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/reinit/programs/reinit/src/lib-rs", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/reinit/programs/reinit/tests/test-rs", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/reinit/tests/reinit-ts" + ] + }, + { + "group": "Update", + "pages": [ + "compressed-pdas/program-examples-mdx/basic-operations/anchor/update/Anchor-toml", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/update/Cargo-toml", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/update/package-json", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/update/tsconfig-json", + 
"compressed-pdas/program-examples-mdx/basic-operations/anchor/update/programs/update/Cargo-toml", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/update/programs/update/Xargo-toml", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/update/programs/update/src/lib-rs", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/update/programs/update/tests/test-rs", + "compressed-pdas/program-examples-mdx/basic-operations/anchor/update/tests/update-ts" + ] + } + ] + }, + { + "group": "Native", + "pages": [ + "compressed-pdas/program-examples-mdx/basic-operations/native/README", + "compressed-pdas/program-examples-mdx/basic-operations/native/Cargo-toml", + "compressed-pdas/program-examples-mdx/basic-operations/native/package-json", + "compressed-pdas/program-examples-mdx/basic-operations/native/tsconfig-json", + { + "group": "Burn", + "pages": [ + "compressed-pdas/program-examples-mdx/basic-operations/native/programs/burn/Cargo-toml", + "compressed-pdas/program-examples-mdx/basic-operations/native/programs/burn/src/lib-rs", + "compressed-pdas/program-examples-mdx/basic-operations/native/programs/burn/src/test_helpers-rs", + "compressed-pdas/program-examples-mdx/basic-operations/native/programs/burn/tests/test-rs" + ] + }, + { + "group": "Close", + "pages": [ + "compressed-pdas/program-examples-mdx/basic-operations/native/programs/close/Cargo-toml", + "compressed-pdas/program-examples-mdx/basic-operations/native/programs/close/src/lib-rs", + "compressed-pdas/program-examples-mdx/basic-operations/native/programs/close/src/test_helpers-rs", + "compressed-pdas/program-examples-mdx/basic-operations/native/programs/close/tests/test-rs" + ] + }, + { + "group": "Create", + "pages": [ + "compressed-pdas/program-examples-mdx/basic-operations/native/programs/create/Cargo-toml", + "compressed-pdas/program-examples-mdx/basic-operations/native/programs/create/Xargo-toml", + 
"compressed-pdas/program-examples-mdx/basic-operations/native/programs/create/src/lib-rs", + "compressed-pdas/program-examples-mdx/basic-operations/native/programs/create/src/test_helpers-rs", + "compressed-pdas/program-examples-mdx/basic-operations/native/programs/create/tests/test-rs" + ] + }, + { + "group": "Reinit", + "pages": [ + "compressed-pdas/program-examples-mdx/basic-operations/native/programs/reinit/Cargo-toml", + "compressed-pdas/program-examples-mdx/basic-operations/native/programs/reinit/src/lib-rs", + "compressed-pdas/program-examples-mdx/basic-operations/native/programs/reinit/src/test_helpers-rs", + "compressed-pdas/program-examples-mdx/basic-operations/native/programs/reinit/tests/test-rs" + ] + }, + { + "group": "Update", + "pages": [ + "compressed-pdas/program-examples-mdx/basic-operations/native/programs/update/Cargo-toml", + "compressed-pdas/program-examples-mdx/basic-operations/native/programs/update/Xargo-toml", + "compressed-pdas/program-examples-mdx/basic-operations/native/programs/update/src/lib-rs", + "compressed-pdas/program-examples-mdx/basic-operations/native/programs/update/src/test_helpers-rs", + "compressed-pdas/program-examples-mdx/basic-operations/native/programs/update/tests/test-rs" + ] + } + ] + } + ] + }, + { + "group": "Counter", + "pages": [ + { + "group": "Anchor", + "pages": [ + "compressed-pdas/program-examples-mdx/counter/anchor/README", + "compressed-pdas/program-examples-mdx/counter/anchor/Anchor-toml", + "compressed-pdas/program-examples-mdx/counter/anchor/Cargo-toml", + "compressed-pdas/program-examples-mdx/counter/anchor/package-json", + "compressed-pdas/program-examples-mdx/counter/anchor/tsconfig-json", + "compressed-pdas/program-examples-mdx/counter/anchor/programs/counter/Cargo-toml", + "compressed-pdas/program-examples-mdx/counter/anchor/programs/counter/Xargo-toml", + "compressed-pdas/program-examples-mdx/counter/anchor/programs/counter/src/lib-rs", + 
"compressed-pdas/program-examples-mdx/counter/anchor/programs/counter/tests/test-rs", + "compressed-pdas/program-examples-mdx/counter/anchor/tests/test-ts" + ] + }, + { + "group": "Native", + "pages": [ + "compressed-pdas/program-examples-mdx/counter/native/Cargo-toml", + "compressed-pdas/program-examples-mdx/counter/native/Xargo-toml", + "compressed-pdas/program-examples-mdx/counter/native/src/lib-rs", + "compressed-pdas/program-examples-mdx/counter/native/tests/test-rs" + ] + }, + { + "group": "Pinocchio", + "pages": [ + "compressed-pdas/program-examples-mdx/counter/pinocchio/Cargo-toml", + "compressed-pdas/program-examples-mdx/counter/pinocchio/Xargo-toml", + "compressed-pdas/program-examples-mdx/counter/pinocchio/src/lib-rs", + "compressed-pdas/program-examples-mdx/counter/pinocchio/tests/test-rs" + ] + } + ] + }, + { + "group": "Create and Update", + "pages": [ + "compressed-pdas/program-examples-mdx/create-and-update/README", + "compressed-pdas/program-examples-mdx/create-and-update/Anchor-toml", + "compressed-pdas/program-examples-mdx/create-and-update/Cargo-toml", + "compressed-pdas/program-examples-mdx/create-and-update/package-json", + "compressed-pdas/program-examples-mdx/create-and-update/tsconfig-json", + "compressed-pdas/program-examples-mdx/create-and-update/programs/create-and-update/Cargo-toml", + "compressed-pdas/program-examples-mdx/create-and-update/programs/create-and-update/Xargo-toml", + "compressed-pdas/program-examples-mdx/create-and-update/programs/create-and-update/src/lib-rs", + "compressed-pdas/program-examples-mdx/create-and-update/programs/create-and-update/tests/test-rs", + "compressed-pdas/program-examples-mdx/create-and-update/programs/create-and-update/tests/test_create_two_accounts-rs", + "compressed-pdas/program-examples-mdx/create-and-update/tests/create_and_update-ts" + ] + }, + { + "group": "Read Only", + "pages": [ + "compressed-pdas/program-examples-mdx/read-only/README", + 
"compressed-pdas/program-examples-mdx/read-only/Cargo-toml", + "compressed-pdas/program-examples-mdx/read-only/Xargo-toml", + "compressed-pdas/program-examples-mdx/read-only/src/lib-rs", + "compressed-pdas/program-examples-mdx/read-only/tests/test-rs" + ] + }, + { + "group": "ZK ID", + "pages": [ + "compressed-pdas/program-examples-mdx/zk-id/README", + "compressed-pdas/program-examples-mdx/zk-id/Cargo-toml", + "compressed-pdas/program-examples-mdx/zk-id/Xargo-toml", + "compressed-pdas/program-examples-mdx/zk-id/build-rs", + "compressed-pdas/program-examples-mdx/zk-id/package-json", + "compressed-pdas/program-examples-mdx/zk-id/circuits/README", + "compressed-pdas/program-examples-mdx/zk-id/src/lib-rs", + "compressed-pdas/program-examples-mdx/zk-id/src/verifying_key-rs", + "compressed-pdas/program-examples-mdx/zk-id/tests/circuit-rs", + "compressed-pdas/program-examples-mdx/zk-id/tests/test-rs" + ] + } + ] + }, + "client-library/client-guide" ] }, { diff --git a/intro-pages/landing.mdx b/intro-pages/landing.mdx index 2d475977..d06b93e0 100644 --- a/intro-pages/landing.mdx +++ b/intro-pages/landing.mdx @@ -68,30 +68,28 @@ ZK Compression is a framework that reduces the storage cost of Solana accounts b * This hash allows transactions to use the account data inside Solana's virtual machine as if it were stored on-chain. - * The protocol uses small zero-knowledge proofs (validity proofs) to verify the integrity of the compressed accounts. + * The protocol uses 128 byte zero-knowledge proofs (validity proofs) to verify the integrity of the compressed accounts. * By default, this is all done under the hood. You can fetch validity proofs from RPC providers that support ZK Compression.
 ### Using AI to work with ZK Compression

-Integrate ZK Compression in your existing AI workflow by following the steps below.
-
-| Tool | Description | Link |
-|:---------------------|:------------------------------------------------------------------------------|:--------------------------------------|
-| DeepWiki/AskDevin | Query the Light Protocol codebase and documentation in natural language | Ask DeepWiki |
-| MCP | Connect AI tools to the Light Protocol repository via Model Context Protocol | [Setup Guide](https://www.zkcompression.com/references/ai-tools-guide#mcp) |
-| Docs AI Search | Search documentation with AI in the search bar. | Available throughout the documentation |
-
-**AI powered navigation**: Use AI search to quickly find information, get code examples, and learn complex topics. Available throughout our documentation.
+**Look up documentation, code examples and guides using the Docs' AI search.**
-
+Integrate ZK Compression in your development:
+
+| Tool | Description | Link |
+|:---------------------|:------------------------------------------------------------------------------|:--------------------------------------|
+| MCP | Connect AI tools to the Light Protocol repository via Model Context Protocol | [Setup Guide](https://www.zkcompression.com/references/ai-tools-guide#mcp) |
+| DeepWiki/AskDevin | Use AskDevin for advanced AI assistance with your development. | Ask DeepWiki |
+
+
 ### Resources
+- url: https://devnet.helius-rpc.com
 paths:
   /getBatchAddressUpdateInfo:
     summary: getBatchAddressUpdateInfo
diff --git a/openapi/getCompressedAccount.yaml b/openapi/getCompressedAccount.yaml
index 5c8a72c2..a78aa870 100644
--- a/openapi/getCompressedAccount.yaml
+++ b/openapi/getCompressedAccount.yaml
@@ -6,7 +6,7 @@ info:
     name: Apache-2.0
   version: 0.50.0
 servers:
-- url: https://mainnet.helius-rpc.com?api-key=
+- url: https://mainnet.helius-rpc.com
 paths:
   /getCompressedAccount:
     summary: getCompressedAccount
diff --git a/openapi/getCompressedAccountBalance.yaml b/openapi/getCompressedAccountBalance.yaml
index c81337be..7b3ab8dd 100644
--- a/openapi/getCompressedAccountBalance.yaml
+++ b/openapi/getCompressedAccountBalance.yaml
@@ -6,7 +6,7 @@ info:
     name: Apache-2.0
   version: 0.50.0
 servers:
-- url: https://mainnet.helius-rpc.com?api-key=
+- url: https://mainnet.helius-rpc.com
 paths:
   /:
     summary: getCompressedAccountBalance
diff --git a/openapi/getCompressedAccountProof.yaml b/openapi/getCompressedAccountProof.yaml
index 3fe5e364..235eb4e2 100644
--- a/openapi/getCompressedAccountProof.yaml
+++ b/openapi/getCompressedAccountProof.yaml
@@ -6,7 +6,7 @@ info:
     name: Apache-2.0
   version: 0.50.0
 servers:
-- url: https://mainnet.helius-rpc.com?api-key=
+- url: https://mainnet.helius-rpc.com
 paths:
   /:
     summary: getCompressedAccountProof
diff --git a/openapi/getCompressedAccountsByOwner.yaml b/openapi/getCompressedAccountsByOwner.yaml
index fed347a9..b00570bb 100644
--- a/openapi/getCompressedAccountsByOwner.yaml
+++ b/openapi/getCompressedAccountsByOwner.yaml
@@ -6,7 +6,7 @@ info:
     name: Apache-2.0
   version: 0.50.0
 servers:
-- url: https://mainnet.helius-rpc.com?api-key=
+- url: https://mainnet.helius-rpc.com
 paths:
   /getCompressedAccount:
     summary: getCompressedAccount
diff --git a/openapi/getCompressedBalanceByOwner.yaml b/openapi/getCompressedBalanceByOwner.yaml
index 5262bc87..51a83fc5 100644
--- a/openapi/getCompressedBalanceByOwner.yaml
+++ b/openapi/getCompressedBalanceByOwner.yaml
@@ -6,7 +6,7 @@ info:
     name: Apache-2.0
   version: 0.50.0
 servers:
-- url: https://mainnet.helius-rpc.com?api-key=
+- url: https://mainnet.helius-rpc.com
 paths:
   /getCompressedAccount:
     summary: getCompressedAccount
diff --git a/openapi/getCompressedMintTokenHolders.yaml b/openapi/getCompressedMintTokenHolders.yaml
index dcc1ad4a..faf2a3d8 100644
--- a/openapi/getCompressedMintTokenHolders.yaml
+++ b/openapi/getCompressedMintTokenHolders.yaml
@@ -6,7 +6,7 @@ info:
     name: Apache-2.0
   version: 0.50.0
 servers:
-- url: https://mainnet.helius-rpc.com?api-key=
+- url: https://mainnet.helius-rpc.com
 paths:
   /:
     summary: getCompressedMintTokenHolders
diff --git a/openapi/getCompressedTokenAccountBalance.yaml b/openapi/getCompressedTokenAccountBalance.yaml
index e7e98873..e8c86af4 100644
--- a/openapi/getCompressedTokenAccountBalance.yaml
+++ b/openapi/getCompressedTokenAccountBalance.yaml
@@ -6,7 +6,7 @@ info:
     name: Apache-2.0
   version: 0.50.0
 servers:
-- url: https://mainnet.helius-rpc.com?api-key=
+- url: https://mainnet.helius-rpc.com
 paths:
   /getCompressedAccount:
     summary: getCompressedAccount
diff --git a/openapi/getCompressedTokenAccountsByDelegate.yaml b/openapi/getCompressedTokenAccountsByDelegate.yaml
index 9f1e56d6..d67c062b 100644
--- a/openapi/getCompressedTokenAccountsByDelegate.yaml
+++ b/openapi/getCompressedTokenAccountsByDelegate.yaml
@@ -6,7 +6,7 @@ info:
     name: Apache-2.0
   version: 0.50.0
 servers:
-- url: https://mainnet.helius-rpc.com?api-key=
+- url: https://mainnet.helius-rpc.com
 paths:
   /getCompressedAccount:
     summary: getCompressedAccount
diff --git a/openapi/getCompressedTokenAccountsByOwner.yaml b/openapi/getCompressedTokenAccountsByOwner.yaml
index a7ec38b5..b5c5efe5 100644
--- a/openapi/getCompressedTokenAccountsByOwner.yaml
+++ b/openapi/getCompressedTokenAccountsByOwner.yaml
@@ -6,7 +6,7 @@ info:
     name: Apache-2.0
   version: 0.50.0
 servers:
-- url: https://mainnet.helius-rpc.com?api-key=
+- url: https://mainnet.helius-rpc.com
 paths:
   /getCompressedAccount:
     summary: getCompressedAccount
diff --git a/openapi/getCompressedTokenBalancesByOwner.yaml b/openapi/getCompressedTokenBalancesByOwner.yaml
index 5ea55809..2e4db65e 100644
--- a/openapi/getCompressedTokenBalancesByOwner.yaml
+++ b/openapi/getCompressedTokenBalancesByOwner.yaml
@@ -6,7 +6,7 @@ info:
     name: Apache-2.0
   version: 0.50.0
 servers:
-- url: https://mainnet.helius-rpc.com?api-key=
+- url: https://mainnet.helius-rpc.com
 paths:
   /getCompressedAccount:
     summary: getCompressedAccount
diff --git a/openapi/getCompressedTokenBalancesByOwnerV2.yaml b/openapi/getCompressedTokenBalancesByOwnerV2.yaml
index 71d1dddc..e89ceba7 100644
--- a/openapi/getCompressedTokenBalancesByOwnerV2.yaml
+++ b/openapi/getCompressedTokenBalancesByOwnerV2.yaml
@@ -6,7 +6,7 @@ info:
     name: Apache-2.0
   version: 0.50.0
 servers:
-- url: https://mainnet.helius-rpc.com?api-key=
+- url: https://mainnet.helius-rpc.com
 paths:
   /:
     summary: getCompressedTokenBalancesByOwnerV2
diff --git a/openapi/getCompressionSignaturesForAccount.yaml b/openapi/getCompressionSignaturesForAccount.yaml
index e73a2d12..7f3e8fda 100644
--- a/openapi/getCompressionSignaturesForAccount.yaml
+++ b/openapi/getCompressionSignaturesForAccount.yaml
@@ -6,7 +6,7 @@ info:
     name: Apache-2.0
   version: 0.50.0
 servers:
-- url: https://mainnet.helius-rpc.com?api-key=
+- url: https://mainnet.helius-rpc.com
 paths:
   /getCompressedAccount:
     summary: getCompressedAccount
diff --git a/openapi/getCompressionSignaturesForAddress.yaml b/openapi/getCompressionSignaturesForAddress.yaml
index f3f9850c..a7d9d742 100644
--- a/openapi/getCompressionSignaturesForAddress.yaml
+++ b/openapi/getCompressionSignaturesForAddress.yaml
@@ -6,7 +6,7 @@ info:
     name: Apache-2.0
   version: 0.50.0
 servers:
-- url: https://mainnet.helius-rpc.com?api-key=
+- url: https://mainnet.helius-rpc.com
 paths:
   /getCompressedAccount:
     summary: getCompressedAccount
diff --git a/openapi/getCompressionSignaturesForOwner.yaml b/openapi/getCompressionSignaturesForOwner.yaml
index 5b18e040..94384bc7 100644
--- a/openapi/getCompressionSignaturesForOwner.yaml
+++ b/openapi/getCompressionSignaturesForOwner.yaml
@@ -6,7 +6,7 @@ info:
     name: Apache-2.0
   version: 0.50.0
 servers:
-- url: https://mainnet.helius-rpc.com?api-key=
+- url: https://mainnet.helius-rpc.com
 paths:
   /getCompressedAccount:
     summary: getCompressedAccount
diff --git a/openapi/getCompressionSignaturesForTokenOwner.yaml b/openapi/getCompressionSignaturesForTokenOwner.yaml
index b5407929..385a9750 100644
--- a/openapi/getCompressionSignaturesForTokenOwner.yaml
+++ b/openapi/getCompressionSignaturesForTokenOwner.yaml
@@ -6,7 +6,7 @@ info:
     name: Apache-2.0
   version: 0.50.0
 servers:
-- url: https://mainnet.helius-rpc.com?api-key=
+- url: https://mainnet.helius-rpc.com
 paths:
   /getCompressedAccount:
     summary: getCompressedAccount
diff --git a/openapi/getIndexerHealth.yaml b/openapi/getIndexerHealth.yaml
index d40866cb..fd7766cf 100644
--- a/openapi/getIndexerHealth.yaml
+++ b/openapi/getIndexerHealth.yaml
@@ -6,7 +6,7 @@ info:
     name: Apache-2.0
   version: 0.50.0
 servers:
-- url: https://mainnet.helius-rpc.com?api-key=
+- url: https://mainnet.helius-rpc.com
 paths:
   /:
     summary: getIndexerHealth
diff --git a/openapi/getIndexerSlot.yaml b/openapi/getIndexerSlot.yaml
index 29edaa6d..1185f9bc 100644
--- a/openapi/getIndexerSlot.yaml
+++ b/openapi/getIndexerSlot.yaml
@@ -6,7 +6,7 @@ info:
     name: Apache-2.0
   version: 0.50.0
 servers:
-- url: https://mainnet.helius-rpc.com?api-key=
+- url: https://mainnet.helius-rpc.com
 paths:
   /:
     summary: getIndexerSlot
diff --git a/openapi/getLatestCompressionSignatures.yaml b/openapi/getLatestCompressionSignatures.yaml
index 902f34b9..5eb35e52 100644
--- a/openapi/getLatestCompressionSignatures.yaml
+++ b/openapi/getLatestCompressionSignatures.yaml
@@ -6,7 +6,7 @@ info:
     name: Apache-2.0
   version: 0.50.0
 servers:
-- url: https://mainnet.helius-rpc.com?api-key=
+- url: https://mainnet.helius-rpc.com
 paths:
   /:
     summary: getLatestCompressionSignatures
diff --git a/openapi/getLatestNonVotingSignatures.yaml b/openapi/getLatestNonVotingSignatures.yaml
index 004c673a..79a21a19 100644
--- a/openapi/getLatestNonVotingSignatures.yaml
+++ b/openapi/getLatestNonVotingSignatures.yaml
@@ -6,7 +6,7 @@ info:
     name: Apache-2.0
   version: 0.50.0
 servers:
-- url: https://mainnet.helius-rpc.com?api-key=
+- url: https://mainnet.helius-rpc.com
 paths:
   /:
     summary: getLatestNonVotingSignatures
diff --git a/openapi/getMultipleCompressedAccountProofs.yaml b/openapi/getMultipleCompressedAccountProofs.yaml
index c520f8f9..dda122e0 100644
--- a/openapi/getMultipleCompressedAccountProofs.yaml
+++ b/openapi/getMultipleCompressedAccountProofs.yaml
@@ -6,7 +6,7 @@ info:
     name: Apache-2.0
   version: 0.50.0
 servers:
-- url: https://mainnet.helius-rpc.com?api-key=
+- url: https://mainnet.helius-rpc.com
 paths:
   /:
     summary: getMultipleCompressedAccountProofs
diff --git a/openapi/getMultipleCompressedAccounts.yaml b/openapi/getMultipleCompressedAccounts.yaml
index de54f52d..ed545f8b 100644
--- a/openapi/getMultipleCompressedAccounts.yaml
+++ b/openapi/getMultipleCompressedAccounts.yaml
@@ -6,7 +6,7 @@ info:
     name: Apache-2.0
   version: 0.50.0
 servers:
-- url: https://mainnet.helius-rpc.com?api-key=
+- url: https://mainnet.helius-rpc.com
 paths:
   /getCompressedAccount:
     summary: getCompressedAccount
diff --git a/openapi/getMultipleNewAddressProofs.yaml b/openapi/getMultipleNewAddressProofs.yaml
index 95efb887..dfd62736 100644
--- a/openapi/getMultipleNewAddressProofs.yaml
+++ b/openapi/getMultipleNewAddressProofs.yaml
@@ -6,7 +6,7 @@ info:
     name: Apache-2.0
   version: 0.50.0
 servers:
-- url: https://mainnet.helius-rpc.com?api-key=
+- url: https://mainnet.helius-rpc.com
 paths:
   /:
     summary: getMultipleNewAddressProofs
diff --git a/openapi/getMultipleNewAddressProofsV2.yaml b/openapi/getMultipleNewAddressProofsV2.yaml
index 399efde3..fe9864f4 100644
--- a/openapi/getMultipleNewAddressProofsV2.yaml
+++ b/openapi/getMultipleNewAddressProofsV2.yaml
@@ -6,7 +6,7 @@ info:
     name: Apache-2.0
   version: 0.50.0
 servers:
-- url: https://mainnet.helius-rpc.com?api-key=
+- url: https://mainnet.helius-rpc.com
 paths:
   /:
     summary: getMultipleNewAddressProofsV2
diff --git a/openapi/getTransactionWithCompressionInfo.yaml b/openapi/getTransactionWithCompressionInfo.yaml
index a7e00abf..d5d0eda7 100644
--- a/openapi/getTransactionWithCompressionInfo.yaml
+++ b/openapi/getTransactionWithCompressionInfo.yaml
@@ -6,7 +6,7 @@ info:
     name: Apache-2.0
   version: 0.50.0
 servers:
-- url: https://mainnet.helius-rpc.com?api-key=
+- url: https://mainnet.helius-rpc.com
 paths:
   /getCompressedAccount:
     summary: getCompressedAccount
diff --git a/openapi/getValidityProof.yaml b/openapi/getValidityProof.yaml
index d2abd4eb..a695da64 100644
--- a/openapi/getValidityProof.yaml
+++ b/openapi/getValidityProof.yaml
@@ -6,7 +6,7 @@ info:
     name: Apache-2.0
   version: 0.50.0
 servers:
-- url: https://mainnet.helius-rpc.com?api-key=
+- url: https://mainnet.helius-rpc.com
 paths:
   /:
     summary: getValidityProof
diff --git a/resources/sdks/client-development.mdx b/resources/sdks/client-development.mdx
index 7709ad8b..a26c6ee7 100644
--- a/resources/sdks/client-development.mdx
+++ b/resources/sdks/client-development.mdx
@@ -79,7 +79,7 @@ light-sdk = "0.16.0"
   title="Build your client with this guide."
   icon="chevron-right"
   color="#0066ff"
-  href="/compressed-pdas/client-library"
+  href="/client-library/client-guide"
   horizontal
 >
\ No newline at end of file
diff --git a/resources/sdks/program-development.mdx b/resources/sdks/program-development.mdx
index 2feef9f9..cba6b888 100644
--- a/resources/sdks/program-development.mdx
+++ b/resources/sdks/program-development.mdx
@@ -47,10 +47,10 @@ Build your own program or view program examples.
   4
-  Noop Program
-
-
-  Logs compressed account state to the Solana ledger (used only in v1).
-
-  Indexers parse transaction logs to reconstruct compressed account state.
-
-
-
-  5
   Account Compression Authority
   Signs CPI calls from the Light System Program to the Account Compression Program.
-  6
+  5
   Account Compression Program
-
   Writes to state and address tree accounts.
@@ -48,17 +40,7 @@
-  7
-  Invoking Program
-
-  Your program's ID, used by the Light System Program to:
-  - Derive the CPI Signer PDA.
-  - Verify the CPI Signer matches your program ID.
-  - Set the owner of created compressed accounts.
-
-
-
-  8
+  6
   System Program
   Solana System Program used to transfer lamports.
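The OpenAPI edits above all make the same change: the documented server URL drops the trailing `?api-key=` placeholder, so the spec lists the bare host and the client supplies its own key. A small sketch of how a client might build the authenticated endpoint (the helper name is hypothetical, not part of the specs):

```typescript
// Append an API key to the bare server URL now listed in the openapi specs.
// Helper name and usage are illustrative only.
function rpcEndpoint(base: string, apiKey: string): string {
  const url = new URL(base); // URL parsing normalizes the empty path to "/"
  url.searchParams.set("api-key", apiKey);
  return url.toString();
}

// rpcEndpoint("https://mainnet.helius-rpc.com", "my-key")
// → "https://mainnet.helius-rpc.com/?api-key=my-key"
```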