From 0302eb4317b82c8715dc539625c17fd6f512bffc Mon Sep 17 00:00:00 2001 From: tcsenpai Date: Sat, 6 Dec 2025 09:52:43 +0100 Subject: [PATCH 01/29] Sync AGENTS.md from testnet --- .beads/issues.jsonl | 13 +++++ AGENTS.md | 136 ++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 149 insertions(+) create mode 100644 .beads/issues.jsonl create mode 100644 AGENTS.md diff --git a/.beads/issues.jsonl b/.beads/issues.jsonl new file mode 100644 index 000000000..21979ea5a --- /dev/null +++ b/.beads/issues.jsonl @@ -0,0 +1,13 @@ +{"id":"node-1q8","title":"Phase 1: Categorized Logger Utility","description":"Create a new categorized Logger utility that serves as a drop-in replacement for the current logger. Must support categories and be TUI-ready.","design":"## Logger Categories\n\n- **CORE** - Main bootstrap, warmup, general operations\n- **NETWORK** - RPC server, connections, HTTP endpoints\n- **PEER** - Peer management, peer gossip, peer bootstrap\n- **CHAIN** - Blockchain, blocks, mempool\n- **SYNC** - Synchronization operations\n- **CONSENSUS** - PoR BFT consensus operations\n- **IDENTITY** - GCR, identity management\n- **MCP** - MCP server operations\n- **MULTICHAIN** - Cross-chain/XM operations\n- **DAHR** - DAHR-specific operations\n\n## API Design\n\n```typescript\n// New logger interface\ninterface LogEntry {\n level: LogLevel;\n category: LogCategory;\n message: string;\n timestamp: Date;\n}\n\ntype LogLevel = 'debug' | 'info' | 'warning' | 'error' | 'critical';\ntype LogCategory = 'CORE' | 'NETWORK' | 'PEER' | 'CHAIN' | 'SYNC' | 'CONSENSUS' | 'IDENTITY' | 'MCP' | 'MULTICHAIN' | 'DAHR';\n\n// Usage:\nlogger.info('CORE', 'Starting the node');\nlogger.error('NETWORK', 'Connection failed');\nlogger.debug('CHAIN', 'Block validated #45679');\n```\n\n## Features\n\n1. Emit events for TUI to subscribe to\n2. Maintain backward compatibility with file logging\n3. Ring buffer for in-memory log storage (TUI display)\n4. Category-based filtering\n5. 
Log level filtering","acceptance_criteria":"- [ ] LogCategory type with all 10 categories defined\n- [ ] New Logger class with category-aware methods\n- [ ] Event emitter for TUI integration\n- [ ] Ring buffer for last N log entries (configurable, default 1000)\n- [ ] File logging preserved (backward compatible)\n- [ ] Unit tests for logger functionality","status":"closed","priority":1,"issue_type":"feature","assignee":"claude","created_at":"2025-12-04T15:45:22.238751684+01:00","updated_at":"2025-12-04T15:57:01.3507118+01:00","closed_at":"2025-12-04T15:57:01.3507118+01:00","labels":["logger","phase-1","tui"],"dependencies":[{"issue_id":"node-1q8","depends_on_id":"node-wrd","type":"parent-child","created_at":"2025-12-04T15:46:41.663898616+01:00","created_by":"daemon"}]} +{"id":"node-66u","title":"Phase 2: TUI Framework Setup","description":"Set up the TUI framework using terminal-kit (already installed). Create the basic layout structure with panels.","design":"## Layout Structure\n\n```\n┌─────────────────────────────────────────────────────────────────┐\n│ HEADER: Node info, status, version │\n├─────────────────────────────────────────────────────────────────┤\n│ TABS: Category selection │\n├─────────────────────────────────────────────────────────────────┤\n│ │\n│ LOG AREA: Scrollable log display │\n│ │\n├─────────────────────────────────────────────────────────────────┤\n│ FOOTER: Controls and status │\n└─────────────────────────────────────────────────────────────────┘\n```\n\n## Components\n\n1. **TUIManager** - Main orchestrator\n2. **HeaderPanel** - Node info display\n3. **TabBar** - Category tabs\n4. **LogPanel** - Scrollable log view\n5. 
**FooterPanel** - Controls and input\n\n## terminal-kit Features to Use\n\n- ScreenBuffer for double-buffering\n- Input handling (keyboard shortcuts)\n- Color support\n- Box drawing characters","acceptance_criteria":"- [ ] TUIManager class created\n- [ ] Basic layout with 4 panels renders correctly\n- [ ] Terminal resize handling\n- [ ] Keyboard input capture working\n- [ ] Clean exit handling (restore terminal state)","status":"closed","priority":1,"issue_type":"feature","assignee":"claude","created_at":"2025-12-04T15:45:22.405530697+01:00","updated_at":"2025-12-04T16:03:17.66943608+01:00","closed_at":"2025-12-04T16:03:17.66943608+01:00","labels":["phase-2","tui","ui"],"dependencies":[{"issue_id":"node-66u","depends_on_id":"node-1q8","type":"blocks","created_at":"2025-12-04T15:46:29.51715706+01:00","created_by":"daemon"},{"issue_id":"node-66u","depends_on_id":"node-wrd","type":"parent-child","created_at":"2025-12-04T15:46:41.730819864+01:00","created_by":"daemon"}]} +{"id":"node-67f","title":"Phase 5: Migrate Existing Logging","description":"Replace all existing console.log, term.*, and Logger calls with the new categorized logger throughout the codebase.","design":"## Migration Strategy\n\n1. Create compatibility layer in old Logger that redirects to new\n2. Map existing tags to categories:\n - `[MAIN]`, `[BOOTSTRAP]` → CORE\n - `[RPC]`, `[SERVER]` → NETWORK\n - `[PEER]`, `[PEERROUTINE]` → PEER\n - `[CHAIN]`, `[BLOCK]`, `[MEMPOOL]` → CHAIN\n - `[SYNC]`, `[MAINLOOP]` → SYNC\n - `[CONSENSUS]`, `[PORBFT]` → CONSENSUS\n - `[GCR]`, `[IDENTITY]` → IDENTITY\n - `[MCP]` → MCP\n - `[XM]`, `[MULTICHAIN]` → MULTICHAIN\n - `[DAHR]`, `[WEB2]` → DAHR\n\n3. 
Search and replace patterns:\n - `console.log(...)` → `logger.info('CATEGORY', ...)`\n - `term.green(...)` → `logger.info('CATEGORY', ...)`\n - `log.info(...)` → `logger.info('CATEGORY', ...)`\n\n## Files to Update (174+ console.log calls)\n\n- src/index.ts (25 calls)\n- src/utilities/*.ts\n- src/libs/**/*.ts\n- src/features/**/*.ts","acceptance_criteria":"- [ ] All console.log calls replaced\n- [ ] All term.* calls replaced\n- [ ] All old Logger calls migrated\n- [ ] No terminal output bypasses TUI\n- [ ] Lint passes\n- [ ] Type-check passes","notes":"Core migration complete:\n- Replaced src/utilities/logger.ts with re-export of LegacyLoggerAdapter\n- All existing log.* calls now route through CategorizedLogger\n- Migrated console.log and term.* calls in index.ts (main entry point)\n- Migrated mainLoop.ts\n\nRemaining legacy calls (lower priority):\n- ~129 console.log calls in 20 files (many in tests/client/cli)\n- ~56 term.* calls in 13 files (excluding TUIManager which needs them)\n\nThe core logging infrastructure is now TUI-ready. 
Legacy calls will still work but bypass TUI display.","status":"in_progress","priority":2,"issue_type":"task","assignee":"claude","created_at":"2025-12-04T15:45:22.92693117+01:00","updated_at":"2025-12-04T16:11:41.686770383+01:00","labels":["phase-5","refactor","tui"],"dependencies":[{"issue_id":"node-67f","depends_on_id":"node-1q8","type":"blocks","created_at":"2025-12-04T15:46:29.724713609+01:00","created_by":"daemon"},{"issue_id":"node-67f","depends_on_id":"node-s48","type":"blocks","created_at":"2025-12-04T15:46:29.777335113+01:00","created_by":"daemon"},{"issue_id":"node-67f","depends_on_id":"node-wrd","type":"parent-child","created_at":"2025-12-04T15:46:41.885331922+01:00","created_by":"daemon"}]} +{"id":"node-8ka","title":"ZK Identity System - Phase 6-8: Node Integration","description":"ProofVerifier, GCR transaction types (zk_commitment_add, zk_attestation_add), RPC endpoints (/zk/merkle-root, /zk/merkle/proof, /zk/nullifier)","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-06T09:43:09.277685498+01:00","updated_at":"2025-12-06T09:43:25.850988068+01:00","closed_at":"2025-12-06T09:43:25.850988068+01:00","labels":["gcr","node","zk"],"dependencies":[{"issue_id":"node-8ka","depends_on_id":"node-94a","type":"blocks","created_at":"2025-12-06T09:43:16.947262666+01:00","created_by":"daemon"}]} +{"id":"node-94a","title":"ZK Identity System - Phase 1-5: Core Cryptography","description":"Core ZK-SNARK cryptographic foundation using Groth16/Poseidon. Includes circuits, Merkle tree, database entities.","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-06T09:43:09.180321179+01:00","updated_at":"2025-12-06T09:43:25.782519636+01:00","closed_at":"2025-12-06T09:43:25.782519636+01:00","labels":["cryptography","groth16","zk"]} +{"id":"node-9q4","title":"ZK Identity System - Phase 9: SDK Integration","description":"SDK CommitmentService (poseidon-lite), ProofGenerator (snarkjs), ZKIdentity class. 
Located in ../sdks/src/encryption/zK/","status":"closed","priority":1,"issue_type":"epic","created_at":"2025-12-06T09:43:09.360890667+01:00","updated_at":"2025-12-06T09:43:25.896325192+01:00","closed_at":"2025-12-06T09:43:25.896325192+01:00","labels":["sdk","zk"],"dependencies":[{"issue_id":"node-9q4","depends_on_id":"node-8ka","type":"blocks","created_at":"2025-12-06T09:43:16.997274204+01:00","created_by":"daemon"}]} +{"id":"node-a95","title":"ZK Identity System - Future: Verify-and-Delete Flow","description":"zk_verified_commitment: OAuth verify + create ZK commitment + skip public record (privacy preservation). See serena memory: zk_verify_and_delete_plan","status":"open","priority":2,"issue_type":"feature","created_at":"2025-12-06T09:43:09.576634316+01:00","updated_at":"2025-12-06T09:43:09.576634316+01:00","labels":["future","privacy","zk"],"dependencies":[{"issue_id":"node-a95","depends_on_id":"node-dj4","type":"blocks","created_at":"2025-12-06T09:43:17.134669302+01:00","created_by":"daemon"}]} +{"id":"node-bj2","title":"ZK Identity System - Phase 10: Trusted Setup Ceremony","description":"Multi-party ceremony with 40+ nodes. Script: src/features/zk/scripts/ceremony.ts. Generates final proving/verification keys.","notes":"Currently running ceremony with 40+ nodes on separate repo. 
Script ready at src/features/zk/scripts/ceremony.ts","status":"in_progress","priority":1,"issue_type":"epic","created_at":"2025-12-06T09:43:09.430249817+01:00","updated_at":"2025-12-06T09:43:25.957018289+01:00","labels":["ceremony","security","zk"],"dependencies":[{"issue_id":"node-bj2","depends_on_id":"node-9q4","type":"blocks","created_at":"2025-12-06T09:43:17.036700285+01:00","created_by":"daemon"}]} +{"id":"node-d82","title":"Phase 4: Info Panel and Controls","description":"Implement the header info panel showing node status and the footer with control commands.","design":"## Header Panel Info\n\n- Node version\n- Status indicator (🟢 Running / 🟡 Syncing / 🔴 Stopped)\n- Public key (truncated with copy option)\n- Server port\n- Connected peers count\n- Current block number\n- Sync status\n\n## Footer Controls\n\n- **[S]** - Start node (if stopped)\n- **[P]** - Pause/Stop node\n- **[R]** - Restart node\n- **[Q]** - Quit application\n- **[L]** - Toggle log level filter\n- **[F]** - Filter/Search logs\n- **[C]** - Clear current log view\n- **[H]** - Help overlay\n\n## Real-time Updates\n\n- Subscribe to sharedState for live updates\n- Peer count updates\n- Block number updates\n- Sync status changes","acceptance_criteria":"- [ ] Header shows all node info\n- [ ] Info updates in real-time\n- [ ] All control keys functional\n- [ ] Start/Stop/Restart commands work\n- [ ] Help overlay accessible\n- [ ] Graceful quit (cleanup)","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-12-04T15:45:22.750471894+01:00","updated_at":"2025-12-04T16:05:56.222574924+01:00","closed_at":"2025-12-04T16:05:56.222574924+01:00","labels":["phase-4","tui","ui"],"dependencies":[{"issue_id":"node-d82","depends_on_id":"node-66u","type":"blocks","created_at":"2025-12-04T15:46:29.652996097+01:00","created_by":"daemon"},{"issue_id":"node-d82","depends_on_id":"node-wrd","type":"parent-child","created_at":"2025-12-04T15:46:41.831349124+01:00","created_by":"daemon"}]} 
+{"id":"node-dj4","title":"ZK Identity System - Phase 11: CDN Deployment","description":"Upload WASM, proving keys to CDN. Update SDK ProofGenerator with CDN URLs. See serena memory: zk_technical_architecture","status":"open","priority":2,"issue_type":"epic","created_at":"2025-12-06T09:43:09.507162284+01:00","updated_at":"2025-12-06T09:43:09.507162284+01:00","labels":["cdn","deployment","zk"],"dependencies":[{"issue_id":"node-dj4","depends_on_id":"node-bj2","type":"blocks","created_at":"2025-12-06T09:43:17.091861452+01:00","created_by":"daemon"}]} +{"id":"node-s48","title":"Phase 3: Log Display with Tabs","description":"Implement the tabbed log display with filtering by category. Users can switch between All logs and category-specific views.","design":"## Tab Structure\n\n- **[All]** - Shows all logs from all categories\n- **[Core]** - CORE category only\n- **[Network]** - NETWORK category only\n- **[Peer]** - PEER category only\n- **[Chain]** - CHAIN category only\n- **[Sync]** - SYNC category only\n- **[Consensus]** - CONSENSUS category only\n- **[Identity]** - IDENTITY category only\n- **[MCP]** - MCP category only\n- **[XM]** - MULTICHAIN category only\n- **[DAHR]** - DAHR category only\n\n## Navigation\n\n- Number keys 0-9 for quick tab switching\n- Arrow keys for tab navigation\n- Tab key to cycle through tabs\n\n## Log Display Features\n\n- Color-coded by log level (green=info, yellow=warning, red=error, magenta=debug)\n- Auto-scroll to bottom (toggle with 'A')\n- Manual scroll with Page Up/Down, Home/End\n- Search/filter with '/' key","acceptance_criteria":"- [ ] Tab bar with all categories displayed\n- [ ] Tab switching via keyboard (numbers, arrows, tab)\n- [ ] Log filtering by selected category works\n- [ ] Color-coded log levels\n- [ ] Scrolling works (auto and manual)\n- [ ] Visual indicator for active 
tab","status":"closed","priority":1,"issue_type":"feature","created_at":"2025-12-04T15:45:22.577437178+01:00","updated_at":"2025-12-04T16:05:56.159601702+01:00","closed_at":"2025-12-04T16:05:56.159601702+01:00","labels":["phase-3","tui","ui"],"dependencies":[{"issue_id":"node-s48","depends_on_id":"node-66u","type":"blocks","created_at":"2025-12-04T15:46:29.57958254+01:00","created_by":"daemon"},{"issue_id":"node-s48","depends_on_id":"node-wrd","type":"parent-child","created_at":"2025-12-04T15:46:41.781338648+01:00","created_by":"daemon"}]} +{"id":"node-w8x","title":"Phase 6: Testing and Polish","description":"Final testing, edge case handling, documentation, and polish for the TUI implementation.","design":"## Testing Scenarios\n\n1. Normal startup and operation\n2. Multiple nodes on same machine\n3. Terminal resize during operation\n4. High log volume stress test\n5. Long-running stability test\n6. Graceful shutdown scenarios\n7. Error recovery\n\n## Polish Items\n\n1. Smooth scrolling animations\n2. Loading indicators\n3. Timestamp formatting options\n4. Log export functionality\n5. Configuration persistence\n\n## Documentation\n\n1. Update README with TUI usage\n2. Keyboard shortcuts reference\n3. 
Configuration options","acceptance_criteria":"- [ ] All test scenarios pass\n- [ ] No memory leaks in long-running test\n- [ ] Terminal state always restored on exit\n- [ ] Documentation complete\n- [ ] README updated","status":"open","priority":2,"issue_type":"task","created_at":"2025-12-04T15:45:23.120288464+01:00","updated_at":"2025-12-04T15:45:23.120288464+01:00","labels":["phase-6","testing","tui"],"dependencies":[{"issue_id":"node-w8x","depends_on_id":"node-67f","type":"blocks","created_at":"2025-12-04T15:46:29.841151783+01:00","created_by":"daemon"},{"issue_id":"node-w8x","depends_on_id":"node-wrd","type":"parent-child","created_at":"2025-12-04T15:46:41.94294082+01:00","created_by":"daemon"}]} +{"id":"node-wrd","title":"TUI Implementation - Epic","description":"Transform the Demos node from a scrolling wall of text into a proper TUI (Terminal User Interface) with categorized logging, tabbed views, control panel, and node info display.","status":"open","priority":1,"issue_type":"epic","created_at":"2025-12-04T15:44:37.186782378+01:00","updated_at":"2025-12-04T15:44:37.186782378+01:00","labels":["logging","tui","ux"]} diff --git a/AGENTS.md b/AGENTS.md new file mode 100644 index 000000000..c06265633 --- /dev/null +++ b/AGENTS.md @@ -0,0 +1,136 @@ +# AI Agent Instructions for Demos Network + +## Issue Tracking with bd (beads) + +**IMPORTANT**: This project uses **bd (beads)** for ALL issue tracking. Do NOT use markdown TODOs, task lists, or other tracking methods. + +### Why bd? 
+ +- Dependency-aware: Track blockers and relationships between issues +- Git-friendly: Auto-syncs to JSONL for version control +- Agent-optimized: JSON output, ready work detection, discovered-from links +- Prevents duplicate tracking systems and confusion + +### Quick Start + +**Check for ready work:** +```bash +bd ready --json +``` + +**Create new issues:** +```bash +bd create "Issue title" -t bug|feature|task -p 0-4 --json +bd create "Issue title" -p 1 --deps discovered-from:bd-123 --json +``` + +**Claim and update:** +```bash +bd update bd-42 --status in_progress --json +bd update bd-42 --priority 1 --json +``` + +**Complete work:** +```bash +bd close bd-42 --reason "Completed" --json +``` + +### Issue Types + +- `bug` - Something broken +- `feature` - New functionality +- `task` - Work item (tests, docs, refactoring) +- `epic` - Large feature with subtasks +- `chore` - Maintenance (dependencies, tooling) + +### Priorities + +- `0` - Critical (security, data loss, broken builds) +- `1` - High (major features, important bugs) +- `2` - Medium (default, nice-to-have) +- `3` - Low (polish, optimization) +- `4` - Backlog (future ideas) + +### Workflow for AI Agents + +1. **Check ready work**: `bd ready` shows unblocked issues +2. **Claim your task**: `bd update <id> --status in_progress` +3. **Work on it**: Implement, test, document +4. **Discover new work?** Create linked issue: + - `bd create "Found bug" -p 1 --deps discovered-from:<id>` +5. **Complete**: `bd close <id> --reason "Done"` +6. **Commit together**: Always commit the `.beads/issues.jsonl` file together with the code changes so issue state stays in sync with code state + +### Auto-Sync + +bd automatically syncs with git: +- Exports to `.beads/issues.jsonl` after changes (5s debounce) +- Imports from JSONL when newer (e.g., after `git pull`) +- No manual export/import needed! + +### GitHub Copilot Integration + +If using GitHub Copilot, also create `.github/copilot-instructions.md` for automatic instruction loading.
+Run `bd onboard` to get the content, or see step 2 of the onboard instructions. + +### MCP Server (Recommended) + +If using Claude or MCP-compatible clients, install the beads MCP server: + +```bash +pip install beads-mcp +``` + +Add to MCP config (e.g., `~/.config/claude/config.json`): +```json +{ + "beads": { + "command": "beads-mcp", + "args": [] + } +} +``` + +Then use `mcp__beads__*` functions instead of CLI commands. + +### Managing AI-Generated Planning Documents + +AI assistants often create planning and design documents during development: +- PLAN.md, IMPLEMENTATION.md, ARCHITECTURE.md +- DESIGN.md, CODEBASE_SUMMARY.md, INTEGRATION_PLAN.md +- TESTING_GUIDE.md, TECHNICAL_DESIGN.md, and similar files + +**Best Practice: Use a dedicated directory for these ephemeral files** + +**Recommended approach:** +- Create a `history/` directory in the project root +- Store ALL AI-generated planning/design docs in `history/` +- Keep the repository root clean and focused on permanent project files +- Only access `history/` when explicitly asked to review past planning + +**Example .gitignore entry (optional):** +``` +# AI planning documents (ephemeral) +history/ +``` + +**Benefits:** +- Clean repository root +- Clear separation between ephemeral and permanent documentation +- Easy to exclude from version control if desired +- Preserves planning history for archeological research +- Reduces noise when browsing the project + +### Important Rules + +- Use bd for ALL task tracking +- Always use `--json` flag for programmatic use +- Link discovered work with `discovered-from` dependencies +- Check `bd ready` before asking "what should I work on?" +- Store AI planning docs in `history/` directory +- Do NOT create markdown TODO lists +- Do NOT use external issue trackers +- Do NOT duplicate tracking systems +- Do NOT clutter repo root with planning documents + +For more details, see README.md and QUICKSTART.md. 
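The StoragePrograms patches that follow price on-chain storage at 1 DEM per started 10KB chunk, with a 1 DEM minimum and a 1MB size cap. As a minimal TypeScript sketch of that fee rule (the function name and range check are illustrative, not the node's actual API):

```typescript
// Illustrative sketch of the StoragePrograms fee rule:
// 1 DEM per started 10KB chunk, minimum 1 DEM, data capped at 1MB.
const CHUNK_BYTES = 10_240 // 10KB pricing chunk
const MAX_SIZE_BYTES = 1_048_576 // 1MB limit for both encodings

function storageFeeDem(sizeBytes: number): bigint {
    if (sizeBytes < 0 || sizeBytes > MAX_SIZE_BYTES) {
        throw new Error(`data size out of range: ${sizeBytes} bytes`)
    }
    const chunks = Math.ceil(sizeBytes / CHUNK_BYTES)
    return BigInt(Math.max(1, chunks)) // minimum fee is 1 DEM
}

console.log(storageFeeDem(0)) // 1n (empty data still pays the minimum)
console.log(storageFeeDem(10_241)) // 2n (second chunk started)
console.log(storageFeeDem(1_048_576)) // 103n (102.4 chunks, rounded up)
```

This mirrors the chunk arithmetic that patch 03's `validateStorageProgramPayload` applies during the confirm flow.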
From ef943e6f5e9a93770c96bf2c7d764716cff41416 Mon Sep 17 00:00:00 2001 From: tcsenpai Date: Tue, 13 Jan 2026 16:05:03 +0100 Subject: [PATCH 02/29] feat(storage): add GCR_StorageProgram entity for unified storage Phase 2 of StoragePrograms feature implementation: - Create GCR_StorageProgram TypeORM entity with full field support - Add indexes for owner, programName, encoding, storageLocation - Support JSON and Binary encoding via 'encoding' field - Include robust ACL (mode, allowed, blacklisted, groups) as jsonb - Add IPFS stubs (storageLocation, ipfsCid) for future hybrid storage - Track fees paid (totalFeesPaid) and soft delete (isDeleted) - Update datasource.ts to register the new entity - Update to demosdk 2.8.11 with unified StorageProgram types Co-Authored-By: Claude Opus 4.5 --- .../memories/feature_storage_programs_plan.md | 105 ++++++++++++ package.json | 2 +- src/model/datasource.ts | 2 + .../entities/GCRv2/GCR_StorageProgram.ts | 156 ++++++++++++++++++ 4 files changed, 264 insertions(+), 1 deletion(-) create mode 100644 .serena/memories/feature_storage_programs_plan.md create mode 100644 src/model/entities/GCRv2/GCR_StorageProgram.ts diff --git a/.serena/memories/feature_storage_programs_plan.md b/.serena/memories/feature_storage_programs_plan.md new file mode 100644 index 000000000..4e8cff1a7 --- /dev/null +++ b/.serena/memories/feature_storage_programs_plan.md @@ -0,0 +1,105 @@ +# StoragePrograms Feature Plan + +## Summary +Unified storage solution for Demos Network supporting both JSON (structured) and Binary (raw) data with robust ACL and size-based pricing. + +## Design Decision +Single unified StorageProgram with `encoding: "json" | "binary"` parameter. Both encodings share identical features. 
+ +## Core Specifications + +### Limits & Pricing +- **Max Size**: 1MB (1,048,576 bytes) for both encodings +- **Pricing**: 1 DEM per 10KB (minimum 1 DEM) +- **JSON Nesting**: Max 64 levels depth + +### Access Control (ACL) +```typescript +interface StorageProgramACL { + mode: "owner" | "public" | "restricted" + owner: string // Always has full access + allowed?: string[] // Explicitly allowed addresses + blacklisted?: string[] // Blocked (highest priority) + groups?: Record<string, StorageGroupPermissions> +} +``` + +**ACL Resolution Priority**: +1. Owner → FULL ACCESS (always) +2. Blacklisted → DENIED (even if in allowed/groups) +3. Allowed → permissions granted +4. Groups → check group permissions +5. Mode fallback: owner/restricted → DENIED, public → READ only + +### Operations +- CREATE_STORAGE_PROGRAM +- WRITE_STORAGE +- READ_STORAGE +- UPDATE_ACCESS_CONTROL +- DELETE_STORAGE_PROGRAM + +### Storage +- **Location**: On-chain (PostgreSQL) initially +- **IPFS**: Stubs ready for future hybrid storage +- **Retention**: Permanent, owner/ACL-deletable only +- **Legacy**: Old Storage transactions kept for backward compatibility + +## Key Files + +### SDK (../sdks) +- `src/types/blockchain/TransactionSubtypes/StorageProgramTransaction.ts` - Types +- `src/storage/StorageProgram.ts` - Main class + +### Node +- `src/model/entities/GCRv2/GCR_StorageProgram.ts` - Entity (new) +- `src/libs/blockchain/gcr/handleGCR.ts` - Handler implementation +- Confirm flow validation in transaction handlers + +## Database Schema +```sql +CREATE TABLE gcr_storage_programs ( + "storageAddress" TEXT PRIMARY KEY, + "owner" TEXT NOT NULL, + "programName" TEXT NOT NULL, + "encoding" TEXT NOT NULL, -- 'json' | 'binary' + "data" TEXT NOT NULL, + "sizeBytes" INTEGER NOT NULL, + "acl" JSONB NOT NULL, + "metadata" JSONB DEFAULT '{}', + "storageLocation" TEXT DEFAULT 'onchain', + "ipfsCid" TEXT, -- STUB for future + "salt" TEXT DEFAULT '', + "createdByTx" TEXT NOT NULL, + "lastModifiedByTx" TEXT NOT NULL, + "totalFeesPaid" BIGINT NOT 
NULL, + "createdAt" TIMESTAMP, + "updatedAt" TIMESTAMP +); +``` + +## Implementation Guidelines +- **Elegant**: Clean, readable code following existing patterns +- **Maintainable**: Well-documented, consistent with codebase style +- **No overengineering**: Simple solutions, YAGNI principle +- **Use existing patterns**: Follow TLSNotary, IPFS handler patterns + +## Related +- feature_ipfs_transactions (similar pricing model) +- arch_gcr_entities (entity patterns) +- Legacy StorageTransaction.ts (retrocompat) + +## SDK Workflow Reminder + +**CRITICAL**: After ANY changes to `../sdks`: +1. Run `bun run build` in ../sdks +2. Commit changes +3. Push to remote +4. **STOP AND TELL USER TO PUBLISH NEW VERSION** before continuing with node work + +This ensures the node can use the updated SDK types. + +## Last Updated +2026-01-13 - Initial planning document diff --git a/package.json b/package.json index badd4e224..d189ceecb 100644 --- a/package.json +++ b/package.json @@ -59,7 +59,7 @@ "@fastify/cors": "^9.0.1", "@fastify/swagger": "^8.15.0", "@fastify/swagger-ui": "^4.1.0", - "@kynesyslabs/demosdk": "^2.8.6", + "@kynesyslabs/demosdk": "^2.8.11", "@metaplex-foundation/js": "^0.20.1", "@modelcontextprotocol/sdk": "^1.13.3", "@noble/ed25519": "^3.0.0", diff --git a/src/model/datasource.ts b/src/model/datasource.ts index d644b0228..61f2a1e44 100644 --- a/src/model/datasource.ts +++ b/src/model/datasource.ts @@ -22,6 +22,7 @@ import { GCRHashes } from "./entities/GCRv2/GCRHashes.js" import { GCRSubnetsTxs } from "./entities/GCRv2/GCRSubnetsTxs.js" import { GCRMain } from "./entities/GCRv2/GCR_Main.js" import { GCRTLSNotary } from "./entities/GCRv2/GCR_TLSNotary.js" +import { GCRStorageProgram } from "./entities/GCRv2/GCR_StorageProgram.js" import { GCRTracker } from "./entities/GCR/GCRTracker.js" export const dataSource = new DataSource({ @@ -45,6 +46,7 @@ export const dataSource = new DataSource({ GCRTracker, GCRMain, GCRTLSNotary, + GCRStorageProgram, ], synchronize: true, 
logging: false, diff --git a/src/model/entities/GCRv2/GCR_StorageProgram.ts b/src/model/entities/GCRv2/GCR_StorageProgram.ts new file mode 100644 index 000000000..e9ab09dda --- /dev/null +++ b/src/model/entities/GCRv2/GCR_StorageProgram.ts @@ -0,0 +1,156 @@ +import { + Column, + CreateDateColumn, + UpdateDateColumn, + Entity, + Index, + PrimaryColumn, +} from "typeorm" + +// REVIEW: GCR_StorageProgram entity for unified JSON/Binary storage + +// Type definitions matching SDK types to avoid import resolution issues +type StorageEncoding = "json" | "binary" +type StorageLocation = "onchain" | "ipfs" +type StorageACLMode = "owner" | "public" | "restricted" +interface StorageGroupPermissions { + members: string[] + permissions: ("read" | "write" | "delete")[] +} +interface StorageProgramACL { + mode: StorageACLMode + allowed?: string[] + blacklisted?: string[] + groups?: Record<string, StorageGroupPermissions> +} + +/** + * GCR StorageProgram Entity + * + * Stores data for StorageProgram transactions with support for: + * - JSON (structured key-value) or Binary (base64 raw) encoding + * - Robust ACL: owner, allowed, blacklisted, public, groups + * - Max 1MB data, priced at 1 DEM per 10KB + * - IPFS-ready with storageLocation and ipfsCid fields (stubs for future) + * + * @see feature_storage_programs_plan.md for specification + */ +@Entity("gcr_storageprogram") +@Index("idx_gcr_storageprogram_owner", ["owner"]) +@Index("idx_gcr_storageprogram_programname", ["programName"]) +@Index("idx_gcr_storageprogram_encoding", ["encoding"]) +@Index("idx_gcr_storageprogram_storagelocation", ["storageLocation"]) +export class GCRStorageProgram { + /** + * Unique storage address (stor-{sha256(deployer:name:salt).substring(0,40)}) + */ + @PrimaryColumn({ type: "text", name: "storageAddress" }) + storageAddress: string + + /** + * Owner address (deployer who created the storage program) + */ + @Column({ type: "text", name: "owner" }) + owner: string + + /** + * Human-readable name for the storage program + */ + 
@Column({ type: "text", name: "programName" }) + programName: string + + /** + * Encoding format: "json" for structured data, "binary" for raw base64 + */ + @Column({ type: "text", name: "encoding" }) + encoding: StorageEncoding + + /** + * Stored data - either JSON object or base64 string depending on encoding + * For JSON: Record<string, unknown> (max 64 nesting levels) + * For Binary: base64 encoded string + */ + @Column({ type: "jsonb", name: "data", nullable: true }) + data: Record<string, unknown> | string | null + + /** + * Size of the data in bytes (used for fee calculation) + */ + @Column({ type: "integer", name: "sizeBytes" }) + sizeBytes: number + + /** + * Robust Access Control List + * Contains: mode, allowed, blacklisted, groups + */ + @Column({ type: "jsonb", name: "acl" }) + acl: StorageProgramACL + + /** + * Optional metadata (filename, mimeType, description, etc.) + */ + @Column({ type: "jsonb", name: "metadata", nullable: true }) + metadata: Record<string, unknown> | null + + /** + * Storage location: "onchain" (current) or "ipfs" (future) + */ + @Column({ type: "text", name: "storageLocation", default: "onchain" }) + storageLocation: StorageLocation + + /** + * IPFS Content Identifier (stub for future IPFS integration) + * Will contain CID when storageLocation is "ipfs" + */ + @Column({ type: "text", name: "ipfsCid", nullable: true }) + ipfsCid: string | null + + /** + * Optional salt used in address derivation + */ + @Column({ type: "text", name: "salt", nullable: true }) + salt: string | null + + /** + * Transaction hash that created this storage program + */ + @Column({ type: "text", name: "createdByTx" }) + createdByTx: string + + /** + * Transaction hash of the last modification (write/update) + */ + @Column({ type: "text", name: "lastModifiedByTx" }) + lastModifiedByTx: string + + /** + * Total fees paid for this storage program (cumulative) + */ + @Column({ + type: "bigint", + name: "totalFeesPaid", + transformer: { + to: (v: bigint) => v.toString(), + from: (v: string | number) => 
BigInt(v), + }, + }) + totalFeesPaid: bigint + + /** + * Whether this storage program has been deleted (soft delete) + */ + @Column({ type: "boolean", name: "isDeleted", default: false }) + isDeleted: boolean + + /** + * Transaction hash that deleted this program (if deleted) + */ + @Column({ type: "text", name: "deletedByTx", nullable: true }) + deletedByTx: string | null + + @CreateDateColumn({ type: "timestamp", name: "createdAt" }) + createdAt: Date + + @UpdateDateColumn({ type: "timestamp", name: "updatedAt" }) + updatedAt: Date +} From 33887e7ac6441802216e7b8ef7df822f07b59601 Mon Sep 17 00:00:00 2001 From: tcsenpai Date: Tue, 13 Jan 2026 16:15:14 +0100 Subject: [PATCH 03/29] feat(storage): add StorageProgram confirm/broadcast flow handlers Phase 3 of StoragePrograms feature implementation: - Add GCRStorageProgramRoutines with validation and fee calculation - validateStorageProgramPayload() for confirm flow validation - GCRStorageProgramRoutines.apply() for broadcast flow GCREdit handling - Support CREATE, WRITE, UPDATE_ACL, DELETE operations - Integrate with handleGCR.ts routing and repository management - Follow SDK GCREditStorageProgram structure (target, context.operation, context.data) Fee calculation: 1 DEM per 10KB chunk (minimum 1 DEM) ACL validation: mode, allowed, blacklisted, groups with permissions Soft delete pattern with isDeleted flag preservation Co-Authored-By: Claude Opus 4.5 --- .../gcr_routines/GCRStorageProgramRoutines.ts | 637 ++++++++++++++++++ src/libs/blockchain/gcr/handleGCR.ts | 11 +- 2 files changed, 647 insertions(+), 1 deletion(-) create mode 100644 src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts diff --git a/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts b/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts new file mode 100644 index 000000000..7c0da830d --- /dev/null +++ b/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts @@ -0,0 +1,637 @@ +/** + * GCR 
StorageProgram Routines + * + * Handles StorageProgram transaction validation and fee calculation + * for the confirm/broadcast two-step transaction flow. + * + * @fileoverview StorageProgram GCR routines for storage operations + */ + +import type { Repository } from "typeorm" +import { types, storage } from "@kynesyslabs/demosdk" + +import { GCRStorageProgram } from "@/model/entities/GCRv2/GCR_StorageProgram" +import log from "@/utilities/logger" +import type { GCRResult } from "../handleGCR" + +// Re-export SDK types for convenience +type GCREdit = types.GCREdit +type GCREditStorageProgram = types.GCREditStorageProgram +type StorageProgramPayload = storage.StorageProgramPayload + +// REVIEW: StorageProgram fee constants matching SDK +const STORAGE_PROGRAM_MAX_SIZE_BYTES = 1048576 // 1MB +const STORAGE_PROGRAM_PRICING_CHUNK_BYTES = 10240 // 10KB +const STORAGE_PROGRAM_FEE_PER_CHUNK = 1n // 1 DEM per chunk + +/** + * StorageProgram cost breakdown for confirm flow + */ +export interface StorageProgramCostBreakdown { + /** Base cost for the operation (currently 0) */ + baseCost: bigint + /** Storage cost based on data size */ + storageCost: bigint + /** Data size in bytes */ + sizeBytes: number + /** Encoding type used */ + encoding: "json" | "binary" + /** Number of 10KB chunks */ + chunks: number +} + +/** + * Validates a StorageProgram payload and calculates fees + * + * @param payload - The StorageProgram payload to validate + * @param senderAddress - The sender's address + * @returns Validation result with fee breakdown + */ +export function validateStorageProgramPayload( + payload: StorageProgramPayload, + senderAddress: string, +): { + valid: boolean + message: string + breakdown?: StorageProgramCostBreakdown + totalFee?: bigint +} { + const encoding = payload.encoding || "json" + + // Validate operation type + const validOperations = [ + "CREATE_STORAGE_PROGRAM", + "WRITE_STORAGE", + "READ_STORAGE", + "UPDATE_ACCESS_CONTROL", + "DELETE_STORAGE_PROGRAM", + ] + 
if (!validOperations.includes(payload.operation)) { + return { + valid: false, + message: `Invalid operation: ${payload.operation}`, + } + } + + // Validate storage address format + if (!payload.storageAddress || !payload.storageAddress.startsWith("stor-")) { + return { + valid: false, + message: "Invalid storage address format. Expected: stor-{hash}", + } + } + + // For CREATE, validate required fields + if (payload.operation === "CREATE_STORAGE_PROGRAM") { + if (!payload.programName || payload.programName.trim() === "") { + return { + valid: false, + message: "Program name is required for CREATE_STORAGE_PROGRAM", + } + } + } + + // Validate data if present + let sizeBytes = 0 + if (payload.data !== undefined && payload.data !== null) { + sizeBytes = calculateDataSize(payload.data, encoding) + + // Check size limit + if (sizeBytes > STORAGE_PROGRAM_MAX_SIZE_BYTES) { + return { + valid: false, + message: `Data size ${sizeBytes} bytes exceeds maximum ${STORAGE_PROGRAM_MAX_SIZE_BYTES} bytes (1MB)`, + } + } + + // For JSON encoding, validate nesting depth + if (encoding === "json" && typeof payload.data === "object") { + const nestingDepth = calculateJsonNestingDepth(payload.data) + if (nestingDepth > 64) { + return { + valid: false, + message: `JSON nesting depth ${nestingDepth} exceeds maximum 64 levels`, + } + } + } + + // For binary encoding, validate base64 format + if (encoding === "binary" && typeof payload.data === "string") { + if (!isValidBase64(payload.data)) { + return { + valid: false, + message: "Binary data must be valid base64 encoded string", + } + } + } + } + + // Validate ACL structure if present + if (payload.acl) { + const aclValidation = validateACLStructure(payload.acl) + if (!aclValidation.valid) { + return aclValidation + } + } + + // Calculate fee + const chunks = Math.ceil(sizeBytes / STORAGE_PROGRAM_PRICING_CHUNK_BYTES) + const storageCost = BigInt(Math.max(1, chunks)) * STORAGE_PROGRAM_FEE_PER_CHUNK + const baseCost = 0n + const totalFee = 
baseCost + storageCost + + const breakdown: StorageProgramCostBreakdown = { + baseCost, + storageCost, + sizeBytes, + encoding, + chunks: Math.max(1, chunks), + } + + log.debug( + `[StorageProgram] Validated ${payload.operation}: ${sizeBytes} bytes, ${chunks} chunks, ${totalFee} DEM fee`, + ) + + return { + valid: true, + message: `StorageProgram ${payload.operation} validated. Fee: ${totalFee} DEM`, + breakdown, + totalFee, + } +} + +/** + * Calculate data size in bytes + */ +function calculateDataSize( + data: Record<string, unknown> | string, + encoding: "json" | "binary", +): number { + if (encoding === "binary") { + // Binary data is base64 encoded, decode to get actual size + if (typeof data === "string") { + // Base64 size = original_size * 4/3 (with padding) + // Actual decoded size = length * 3/4 (minus padding) + const padding = (data.match(/=/g) || []).length + return Math.floor((data.length * 3) / 4) - padding + } + return 0 + } + + // JSON encoding - use the UTF-8 byte length of the serialized JSON + return Buffer.byteLength(JSON.stringify(data), "utf8") +} + +/** + * Calculate JSON nesting depth recursively + */ +function calculateJsonNestingDepth(obj: unknown, currentDepth = 0): number { + if (typeof obj !== "object" || obj === null) { + return currentDepth + } + + let maxDepth = currentDepth + 1 + + if (Array.isArray(obj)) { + for (const item of obj) { + maxDepth = Math.max(maxDepth, calculateJsonNestingDepth(item, currentDepth + 1)) + } + } else { + for (const value of Object.values(obj)) { + maxDepth = Math.max(maxDepth, calculateJsonNestingDepth(value, currentDepth + 1)) + } + } + + return maxDepth +} + +/** + * Validate base64 string format + */ +function isValidBase64(str: string): boolean { + if (str.length === 0) return true + const base64Regex = /^[A-Za-z0-9+/]*={0,2}$/ + return base64Regex.test(str) && str.length % 4 === 0 +} + +/** + * Validate ACL structure + */ +function validateACLStructure(acl: unknown): { valid: boolean; message: string } { + if (!acl || typeof acl !== "object") { 
+ return { valid: false, message: "ACL must be an object" } + } + + const aclObj = acl as Record<string, unknown> + + // Validate mode + const validModes = ["owner", "public", "restricted"] + if (!aclObj.mode || !validModes.includes(aclObj.mode as string)) { + return { + valid: false, + message: `ACL mode must be one of: ${validModes.join(", ")}`, + } + } + + // Validate allowed addresses if present + if (aclObj.allowed !== undefined) { + if (!Array.isArray(aclObj.allowed)) { + return { valid: false, message: "ACL allowed must be an array of addresses" } + } + for (const addr of aclObj.allowed) { + if (typeof addr !== "string") { + return { valid: false, message: "ACL allowed must contain string addresses" } + } + } + } + + // Validate blacklisted addresses if present + if (aclObj.blacklisted !== undefined) { + if (!Array.isArray(aclObj.blacklisted)) { + return { valid: false, message: "ACL blacklisted must be an array of addresses" } + } + for (const addr of aclObj.blacklisted) { + if (typeof addr !== "string") { + return { valid: false, message: "ACL blacklisted must contain string addresses" } + } + } + } + + // Validate groups if present + if (aclObj.groups !== undefined) { + if (typeof aclObj.groups !== "object" || aclObj.groups === null) { + return { valid: false, message: "ACL groups must be an object" } + } + for (const [groupName, group] of Object.entries(aclObj.groups)) { + const groupObj = group as Record<string, unknown> + if (!Array.isArray(groupObj.members)) { + return { + valid: false, + message: `ACL group ${groupName} must have members array`, + } + } + if (!Array.isArray(groupObj.permissions)) { + return { + valid: false, + message: `ACL group ${groupName} must have permissions array`, + } + } + const validPermissions = ["read", "write", "delete"] + for (const perm of groupObj.permissions) { + if (!validPermissions.includes(perm as string)) { + return { + valid: false, + message: `Invalid permission ${perm} in group ${groupName}`, + } + } + } + } + } + + return { valid: true, 
message: "ACL structure valid" } +} + +/** + * GCRStorageProgramRoutines handles the storage and retrieval of StorageProgram data. + * Programs are stored via CREATE_STORAGE_PROGRAM and WRITE_STORAGE operations. + */ +export class GCRStorageProgramRoutines { + /** + * Apply a StorageProgram GCR edit operation + * @param editOperation - The GCREditStorageProgram operation + * @param gcrStorageProgramRepository - TypeORM repository for GCRStorageProgram + * @param simulate - If true, don't persist changes + */ + static async apply( + editOperation: GCREdit, + gcrStorageProgramRepository: Repository<GCRStorageProgram>, + simulate: boolean, + ): Promise<GCRResult> { + const spEdit = editOperation as GCREditStorageProgram + + if (spEdit.type !== "storageProgram") { + return { success: false, message: "Invalid edit type for StorageProgram" } + } + + // SDK GCREditStorageProgram structure: + // - target: storage address (stor-xxx) + // - context.operation: CREATE_STORAGE_PROGRAM, WRITE_STORAGE, etc. + // - context.sender: sender address + // - context.data: { variables, metadata } + const operation = spEdit.context.operation + const storageAddress = spEdit.target + + log.info(`[StorageProgram] Processing ${operation} for ${storageAddress}`) + + switch (operation) { + case "CREATE_STORAGE_PROGRAM": { + return this.handleCreate(spEdit, gcrStorageProgramRepository, simulate) + } + case "WRITE_STORAGE": { + return this.handleWrite(spEdit, gcrStorageProgramRepository, simulate) + } + case "UPDATE_ACCESS_CONTROL": { + return this.handleUpdateAcl(spEdit, gcrStorageProgramRepository, simulate) + } + case "DELETE_STORAGE_PROGRAM": { + return this.handleDelete(spEdit, gcrStorageProgramRepository, simulate) + } + default: { + log.warning(`[StorageProgram] Unknown operation: ${operation}`) + return { success: false, message: `Unknown operation: ${operation}` } + } + } + } + + /** + * Handle CREATE_STORAGE_PROGRAM operation + * + * SDK GCREditStorageProgram structure: + * - target: storageAddress + * - 
context.sender: owner/sender + * - context.data.variables: StorageProgramPayload fields + * - context.data.metadata: optional metadata + */ + private static async handleCreate( + edit: GCREditStorageProgram, + repository: Repository<GCRStorageProgram>, + simulate: boolean, + ): Promise<GCRResult> { + const storageAddress = edit.target + const sender = edit.context.sender + const variables = edit.context.data?.variables as StorageProgramPayload | undefined + + if (!variables) { + return { success: false, message: "Missing data.variables for create operation" } + } + + // Check if storage program already exists + const existing = await repository.findOneBy({ storageAddress }) + if (existing && !existing.isDeleted) { + return { + success: false, + message: `Storage program already exists: ${storageAddress}`, + } + } + + if (simulate) { + log.debug(`[StorageProgram] Simulated create: ${storageAddress}`) + return { success: true, message: "Simulated create successful" } + } + + // Calculate size and fee + const encoding = variables.encoding || "json" + const sizeBytes = variables.data + ? 
calculateDataSize(variables.data, encoding) + : 0 + const chunks = Math.ceil(sizeBytes / STORAGE_PROGRAM_PRICING_CHUNK_BYTES) + const fee = BigInt(Math.max(1, chunks)) * STORAGE_PROGRAM_FEE_PER_CHUNK + + // Create new storage program + const program = new GCRStorageProgram() + program.storageAddress = storageAddress + program.owner = sender + program.programName = variables.programName || "" + program.encoding = encoding + program.data = variables.data || null + program.sizeBytes = sizeBytes + program.acl = variables.acl || { mode: "owner" } + program.metadata = (edit.context.data?.metadata as Record<string, unknown>) || variables.metadata || null + program.storageLocation = variables.storageLocation || "onchain" + program.ipfsCid = null + program.salt = variables.salt || null + program.createdByTx = edit.txhash + program.lastModifiedByTx = edit.txhash + program.totalFeesPaid = fee + program.isDeleted = false + program.deletedByTx = null + + await repository.save(program) + log.info(`[StorageProgram] Created: ${storageAddress}`) + + return { success: true, message: `Storage program created: ${storageAddress}` } + } + + /** + * Handle WRITE_STORAGE operation + */ + private static async handleWrite( + edit: GCREditStorageProgram, + repository: Repository<GCRStorageProgram>, + simulate: boolean, + ): Promise<GCRResult> { + const storageAddress = edit.target + const variables = edit.context.data?.variables as StorageProgramPayload | undefined + + if (!variables) { + return { success: false, message: "Missing data.variables for write operation" } + } + + // Find existing storage program + const program = await repository.findOneBy({ storageAddress }) + + if (!program) { + return { + success: false, + message: `Storage program not found: ${storageAddress}`, + } + } + + if (program.isDeleted) { + return { + success: false, + message: `Storage program has been deleted: ${storageAddress}`, + } + } + + if (simulate) { + log.debug(`[StorageProgram] Simulated write: ${storageAddress}`) + return { success: true, message: 
"Simulated write successful" } + } + + // Calculate new size and fee + const encoding = variables.encoding || program.encoding + const newSizeBytes = variables.data + ? calculateDataSize(variables.data, encoding) + : program.sizeBytes + const chunks = Math.ceil(newSizeBytes / STORAGE_PROGRAM_PRICING_CHUNK_BYTES) + const fee = BigInt(Math.max(1, chunks)) * STORAGE_PROGRAM_FEE_PER_CHUNK + + // Update data + program.data = variables.data ?? program.data + program.sizeBytes = newSizeBytes + program.encoding = encoding + program.lastModifiedByTx = edit.txhash + program.totalFeesPaid = program.totalFeesPaid + fee + + if (variables.metadata || edit.context.data?.metadata) { + const newMetadata = (edit.context.data?.metadata as Record<string, unknown>) || variables.metadata + program.metadata = { ...program.metadata, ...newMetadata } + } + + await repository.save(program) + log.info(`[StorageProgram] Updated: ${storageAddress}`) + + return { success: true, message: `Storage program updated: ${storageAddress}` } + } + + /** + * Handle UPDATE_ACCESS_CONTROL operation + */ + private static async handleUpdateAcl( + edit: GCREditStorageProgram, + repository: Repository<GCRStorageProgram>, + simulate: boolean, + ): Promise<GCRResult> { + const storageAddress = edit.target + const sender = edit.context.sender + const variables = edit.context.data?.variables as StorageProgramPayload | undefined + + if (!variables?.acl) { + return { success: false, message: "Missing acl in data.variables for updateAcl operation" } + } + + const program = await repository.findOneBy({ storageAddress }) + + if (!program) { + return { + success: false, + message: `Storage program not found: ${storageAddress}`, + } + } + + if (program.isDeleted) { + return { + success: false, + message: `Storage program has been deleted: ${storageAddress}`, + } + } + + // Only owner can update ACL + if (program.owner !== sender) { + return { + success: false, + message: "Only owner can update access control", + } + } + + if (simulate) { + log.debug(`[StorageProgram] 
Simulated ACL update: ${storageAddress}`) + return { success: true, message: "Simulated ACL update successful" } + } + + program.acl = variables.acl + program.lastModifiedByTx = edit.txhash + + await repository.save(program) + log.info(`[StorageProgram] ACL updated: ${storageAddress}`) + + return { success: true, message: `ACL updated: ${storageAddress}` } + } + + /** + * Handle DELETE_STORAGE_PROGRAM operation (soft delete) + */ + private static async handleDelete( + edit: GCREditStorageProgram, + repository: Repository<GCRStorageProgram>, + simulate: boolean, + ): Promise<GCRResult> { + const storageAddress = edit.target + const sender = edit.context.sender + + const program = await repository.findOneBy({ storageAddress }) + + if (!program) { + return { + success: false, + message: `Storage program not found: ${storageAddress}`, + } + } + + if (program.isDeleted) { + return { + success: false, + message: `Storage program already deleted: ${storageAddress}`, + } + } + + // Check delete permission (owner or ACL) + const canDelete = + program.owner === sender || + checkDeletePermission(program.acl, sender) + + if (!canDelete) { + return { + success: false, + message: "No permission to delete this storage program", + } + } + + if (simulate) { + log.debug(`[StorageProgram] Simulated delete: ${storageAddress}`) + return { success: true, message: "Simulated delete successful" } + } + + // Soft delete + program.isDeleted = true + program.deletedByTx = edit.txhash + program.lastModifiedByTx = edit.txhash + + await repository.save(program) + log.info(`[StorageProgram] Deleted: ${storageAddress}`) + + return { success: true, message: `Storage program deleted: ${storageAddress}` } + } + + /** + * Read a storage program by address + */ + static async getStorageProgram( + storageAddress: string, + repository: Repository<GCRStorageProgram>, + ): Promise<GCRStorageProgram | null> { + const program = await repository.findOneBy({ storageAddress }) + if (program?.isDeleted) { + return null + } + return program + } + + /** + * Get all storage programs 
owned by an address + */ + static async getStorageProgramsByOwner( + owner: string, + repository: Repository<GCRStorageProgram>, + ): Promise<GCRStorageProgram[]> { + return repository.find({ + where: { owner, isDeleted: false }, + order: { createdAt: "DESC" }, + }) + } +} + +/** + * Check if address has delete permission in ACL + */ +function checkDeletePermission( + acl: { mode: string; allowed?: string[]; blacklisted?: string[]; groups?: Record<string, { members: string[]; permissions: string[] }> }, + address: string, +): boolean { + // Check blacklist first + if (acl.blacklisted?.includes(address)) { + return false + } + + // Check groups + if (acl.groups) { + for (const group of Object.values(acl.groups)) { + if (group.members.includes(address) && group.permissions.includes("delete")) { + return true + } + } + } + + return false +} diff --git a/src/libs/blockchain/gcr/handleGCR.ts b/src/libs/blockchain/gcr/handleGCR.ts index 45e4738d6..76b7acefc 100644 --- a/src/libs/blockchain/gcr/handleGCR.ts +++ b/src/libs/blockchain/gcr/handleGCR.ts @@ -50,6 +50,8 @@ import { Repository } from "typeorm" import GCRIdentityRoutines from "./gcr_routines/GCRIdentityRoutines" import { GCRTLSNotaryRoutines } from "./gcr_routines/GCRTLSNotaryRoutines" import { GCRTLSNotary } from "@/model/entities/GCRv2/GCR_TLSNotary" +import { GCRStorageProgramRoutines } from "./gcr_routines/GCRStorageProgramRoutines" +import { GCRStorageProgram } from "@/model/entities/GCRv2/GCR_StorageProgram" import { Referrals } from "@/features/incentive/referrals" // REVIEW: TLSNotary token management for native operations import { createToken, extractDomain } from "@/features/tlsnotary/tokenManager" @@ -285,11 +287,17 @@ export default class HandleGCR { log.debug(`Assigning GCREdit ${editOperation.type}`) return { success: true, message: "Not implemented" } case "smartContract": - case "storageProgram": case "escrow": // TODO implementations log.debug(`GCREdit ${editOperation.type} not yet implemented`) return { success: true, message: "Not implemented" } + // REVIEW: StorageProgram unified 
storage operations + case "storageProgram": + return GCRStorageProgramRoutines.apply( + editOperation, + repositories.storageProgram as Repository<GCRStorageProgram>, + simulate, + ) // REVIEW: TLSNotary attestation proof storage case "tlsnotary": return GCRTLSNotaryRoutines.apply( @@ -549,6 +557,7 @@ export default class HandleGCR { subnetsTxs: dataSource.getRepository(GCRSubnetsTxs), tracker: dataSource.getRepository(GCRTracker), tlsnotary: dataSource.getRepository(GCRTLSNotary), + storageProgram: dataSource.getRepository(GCRStorageProgram), } } From 4e3e95949df60d81c85b0a9238e221eea27fba7c Mon Sep 17 00:00:00 2001 From: tcsenpai Date: Wed, 14 Jan 2026 14:43:38 +0100 Subject: [PATCH 04/29] feat(storage): add StorageProgram RPC endpoints and ACL read check Phase 5 of StoragePrograms implementation: - Add GET /storage-program/:address endpoint for reading by address - Add GET /storage-program/owner/:owner endpoint for listing by owner - Implement checkReadPermission in GCRStorageProgramRoutines - ACL check enforces: public (allow all), owner (only owner), restricted (allowed/groups) - Proper error handling: NOT_FOUND, PERMISSION_DENIED, INTERNAL_ERROR - Update SDK to 2.8.13 for HandleStorageProgramOperations support Related: DEM-548 Co-Authored-By: Claude Opus 4.5 --- package.json | 2 +- src/features/storageprogram/index.ts | 36 +++ src/features/storageprogram/routes.ts | 254 ++++++++++++++++++ .../gcr_routines/GCRStorageProgramRoutines.ts | 68 +++++ src/libs/network/server_rpc.ts | 8 + 5 files changed, 367 insertions(+), 1 deletion(-) create mode 100644 src/features/storageprogram/index.ts create mode 100644 src/features/storageprogram/routes.ts diff --git a/package.json b/package.json index d189ceecb..b40591c42 100644 --- a/package.json +++ b/package.json @@ -59,7 +59,7 @@ "@fastify/cors": "^9.0.1", "@fastify/swagger": "^8.15.0", "@fastify/swagger-ui": "^4.1.0", - "@kynesyslabs/demosdk": "^2.8.11", + "@kynesyslabs/demosdk": "^2.8.13", "@metaplex-foundation/js": "^0.20.1", 
"@modelcontextprotocol/sdk": "^1.13.3", "@noble/ed25519": "^3.0.0", diff --git a/src/features/storageprogram/index.ts b/src/features/storageprogram/index.ts new file mode 100644 index 000000000..d85ab27e1 --- /dev/null +++ b/src/features/storageprogram/index.ts @@ -0,0 +1,36 @@ +/** + * StorageProgram Feature Module + * + * Provides unified storage capabilities for JSON and binary data on the Demos Network. + * Supports robust ACL (owner, public, restricted modes with groups and blacklists). + * + * Features: + * - JSON storage with 64-level nesting support + * - Binary storage with base64 encoding + * - Max 1MB data, priced at 1 DEM per 10KB + * - Robust ACL with owner, allowed, blacklisted, and group-based permissions + * - IPFS-ready with storageLocation and ipfsCid fields (stubs for future) + * + * @module features/storageprogram + */ + +// REVIEW: StorageProgram feature module - entry point for unified storage feature + +import type { BunServer } from "@/libs/network/bunServer" +import { registerStorageProgramRoutes } from "./routes" +import log from "@/utilities/logger" + +// Re-export routes for direct access if needed +export { registerStorageProgramRoutes } from "./routes" + +/** + * Initialize StorageProgram feature + * + * Registers HTTP routes with BunServer for storage program access. + * + * @param server - BunServer instance for route registration + */ +export function initializeStorageProgram(server: BunServer): void { + registerStorageProgramRoutes(server) + log.info("[StorageProgram] Feature initialized") +} diff --git a/src/features/storageprogram/routes.ts b/src/features/storageprogram/routes.ts new file mode 100644 index 000000000..9f4c8c032 --- /dev/null +++ b/src/features/storageprogram/routes.ts @@ -0,0 +1,254 @@ +/** + * StorageProgram RPC Routes + * + * Provides HTTP endpoints for reading StorageProgram data with ACL enforcement. 
+ * + * Routes: + * - GET /storage-program/:address - Read a storage program by address + * - GET /storage-program/owner/:owner - List storage programs by owner + * + * @module features/storageprogram/routes + */ + +// REVIEW: StorageProgram RPC routes for unified storage access + +import type { BunServer } from "@/libs/network/bunServer" +import { jsonResponse } from "@/libs/network/bunServer" +import log from "@/utilities/logger" +import Datasource from "@/model/datasource" +import { GCRStorageProgram } from "@/model/entities/GCRv2/GCR_StorageProgram" +import { GCRStorageProgramRoutines } from "@/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines" + +// ============================================================================ +// Response Types +// ============================================================================ + +/** + * Storage program read response + */ +interface StorageProgramResponse { + success: boolean + storageAddress?: string + owner?: string + programName?: string + encoding?: "json" | "binary" + data?: Record<string, unknown> | string | null + metadata?: Record<string, unknown> | null + storageLocation?: string + sizeBytes?: number + createdAt?: string + updatedAt?: string + error?: string + errorCode?: "NOT_FOUND" | "PERMISSION_DENIED" | "DELETED" | "INTERNAL_ERROR" +} + +/** + * Storage programs list response + */ +interface StorageProgramsListResponse { + success: boolean + programs?: Array<{ + storageAddress: string + programName: string + encoding: "json" | "binary" + sizeBytes: number + storageLocation: string + createdAt: string + updatedAt: string + }> + count?: number + error?: string +} + +// ============================================================================ +// Route Handlers +// ============================================================================ + +/** + * Get storage program by address + * + * Enforces ACL read permissions based on the requester's identity. + * For public storage programs, anyone can read. 
+ * For owner/restricted, identity header is required. + */ +async function getStorageProgramHandler(req: Request): Promise<Response> { + try { + // Extract address from URL path + const url = new URL(req.url) + const pathParts = url.pathname.split("/") + const storageAddress = pathParts[pathParts.length - 1] + + if (!storageAddress || !storageAddress.startsWith("stor-")) { + const response: StorageProgramResponse = { + success: false, + error: "Invalid storage address format. Expected: stor-{hash}", + errorCode: "NOT_FOUND", + } + return jsonResponse(response, 400) + } + + // Get requester identity from header (optional for public programs) + const identity = req.headers.get("identity") + let requesterAddress: string | undefined + + if (identity) { + // Parse identity header (format: algorithm:publicKey or just publicKey) + const splits = identity.split(":") + requesterAddress = splits.length > 1 ? splits[1] : identity + } + + // Get repository + const db = await Datasource.getInstance() + const repository = db.getDataSource().getRepository(GCRStorageProgram) + + // Fetch storage program + const program = await GCRStorageProgramRoutines.getStorageProgram( + storageAddress, + repository, + ) + + if (!program) { + const response: StorageProgramResponse = { + success: false, + error: `Storage program not found: ${storageAddress}`, + errorCode: "NOT_FOUND", + } + return jsonResponse(response, 404) + } + + // Check read permission + const hasReadAccess = GCRStorageProgramRoutines.checkReadPermission( + program, + requesterAddress, + ) + + if (!hasReadAccess) { + const response: StorageProgramResponse = { + success: false, + error: "Permission denied: You do not have read access to this storage program", + errorCode: "PERMISSION_DENIED", + } + return jsonResponse(response, 403) + } + + // Return storage program data + const response: StorageProgramResponse = { + success: true, + storageAddress: program.storageAddress, + owner: program.owner, + programName: program.programName, + 
encoding: program.encoding, + data: program.data, + metadata: program.metadata, + storageLocation: program.storageLocation, + sizeBytes: program.sizeBytes, + createdAt: program.createdAt.toISOString(), + updatedAt: program.updatedAt.toISOString(), + } + + log.debug(`[StorageProgram] Read: ${storageAddress} by ${requesterAddress || "anonymous"}`) + return jsonResponse(response) + } catch (error) { + log.error(`[StorageProgram] Error reading storage program: ${error}`) + const response: StorageProgramResponse = { + success: false, + error: error instanceof Error ? error.message : "Internal server error", + errorCode: "INTERNAL_ERROR", + } + return jsonResponse(response, 500) + } +} + +/** + * List storage programs by owner + * + * Returns a list of storage programs owned by the specified address. + * Only returns programs that the requester has permission to see (public or owned). + */ +async function listByOwnerHandler(req: Request): Promise<Response> { + try { + // Extract owner from URL path + const url = new URL(req.url) + const pathParts = url.pathname.split("/") + const owner = pathParts[pathParts.length - 1] + + if (!owner) { + const response: StorageProgramsListResponse = { + success: false, + error: "Owner address is required", + } + return jsonResponse(response, 400) + } + + // Get requester identity from header + const identity = req.headers.get("identity") + let requesterAddress: string | undefined + + if (identity) { + const splits = identity.split(":") + requesterAddress = splits.length > 1 ? 
splits[1] : identity + } + + // Get repository + const db = await Datasource.getInstance() + const repository = db.getDataSource().getRepository(GCRStorageProgram) + + // Fetch all programs by owner + const programs = await GCRStorageProgramRoutines.getStorageProgramsByOwner( + owner, + repository, + ) + + // Filter to only programs the requester can read + const accessiblePrograms = programs.filter(program => + GCRStorageProgramRoutines.checkReadPermission(program, requesterAddress), + ) + + // Map to response format (without full data for list view) + const response: StorageProgramsListResponse = { + success: true, + programs: accessiblePrograms.map(p => ({ + storageAddress: p.storageAddress, + programName: p.programName, + encoding: p.encoding, + sizeBytes: p.sizeBytes, + storageLocation: p.storageLocation, + createdAt: p.createdAt.toISOString(), + updatedAt: p.updatedAt.toISOString(), + })), + count: accessiblePrograms.length, + } + + log.debug(`[StorageProgram] Listed ${accessiblePrograms.length} programs for owner ${owner}`) + return jsonResponse(response) + } catch (error) { + log.error(`[StorageProgram] Error listing storage programs: ${error}`) + const response: StorageProgramsListResponse = { + success: false, + error: error instanceof Error ? 
error.message : "Internal server error", + } + return jsonResponse(response, 500) + } +} + +// ============================================================================ +// Route Registration +// ============================================================================ + +/** + * Register StorageProgram routes with BunServer + * + * Routes: + * - GET /storage-program/:address - Read a storage program by address + * - GET /storage-program/owner/:owner - List storage programs by owner + * + * @param server - BunServer instance + */ +export function registerStorageProgramRoutes(server: BunServer): void { + // Read storage program by address + // Note: BunServer uses pattern matching, so we register the specific route + server.get("/storage-program/owner/*", listByOwnerHandler) + server.get("/storage-program/*", getStorageProgramHandler) + + log.info("[StorageProgram] Routes registered: /storage-program/:address, /storage-program/owner/:owner") +} diff --git a/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts b/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts index 7c0da830d..7e4d72bc4 100644 --- a/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts +++ b/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts @@ -610,6 +610,74 @@ export class GCRStorageProgramRoutines { order: { createdAt: "DESC" }, }) } + + + /** + * Check if an address has read permission for a storage program + * @param program - The storage program to check + * @param requesterAddress - The address requesting read access (optional for public data) + * @returns true if read is allowed, false otherwise + */ + static checkReadPermission( + program: GCRStorageProgram, + requesterAddress?: string, + ): boolean { + const acl = program.acl + + // Public mode - everyone can read + if (acl.mode === "public") { + // Still check blacklist for public mode + if (requesterAddress && acl.blacklisted?.includes(requesterAddress)) { + return false + } + 
return true + } + + // Owner mode - only owner can read + if (acl.mode === "owner") { + return requesterAddress === program.owner + } + + // Restricted mode - check allowed list and groups + if (acl.mode === "restricted") { + // No requester means anonymous - denied in restricted mode + if (!requesterAddress) { + return false + } + + // Check blacklist first + if (acl.blacklisted?.includes(requesterAddress)) { + return false + } + + // Owner always has access + if (requesterAddress === program.owner) { + return true + } + + // Check allowed list + if (acl.allowed?.includes(requesterAddress)) { + return true + } + + // Check groups for read permission + if (acl.groups) { + for (const group of Object.values(acl.groups)) { + if ( + group.members.includes(requesterAddress) && + group.permissions.includes("read") + ) { + return true + } + } + } + + return false + } + + // Unknown mode - deny by default + return false + } } /** diff --git a/src/libs/network/server_rpc.ts b/src/libs/network/server_rpc.ts index c1688cb66..4367a0ea4 100644 --- a/src/libs/network/server_rpc.ts +++ b/src/libs/network/server_rpc.ts @@ -463,6 +463,14 @@ export async function serverRpcBun() { } } + // REVIEW: Register StorageProgram routes for unified storage access + try { + const { registerStorageProgramRoutes } = await import("@/features/storageprogram/routes") + registerStorageProgramRoutes(server) + } catch (error) { + log.warning("[RPC] Failed to register StorageProgram routes: " + error) + } + log.info("[RPC Call] Server is running on 0.0.0.0:" + port, true) return server.start() } From f4d9837f116013d187b8c0df5bf6b2d27dd00e5a Mon Sep 17 00:00:00 2001 From: tcsenpai Date: Wed, 14 Jan 2026 15:02:43 +0100 Subject: [PATCH 05/29] added storage programs docs --- specs/storageprogram/01-overview.mdx | 156 ++++++++++ specs/storageprogram/02-architecture.mdx | 251 +++++++++++++++ specs/storageprogram/03-operations.mdx | 266 ++++++++++++++++ specs/storageprogram/04-acl.mdx | 355 
++++++++++++++++++++ specs/storageprogram/05-rpc-endpoints.mdx | 335 ++++++++++++++++++++++++++ 5 files changed, 1363 insertions(+) create mode 100644 specs/storageprogram/01-overview.mdx create mode 100644 specs/storageprogram/02-architecture.mdx create mode 100644 specs/storageprogram/03-operations.mdx create mode 100644 specs/storageprogram/04-acl.mdx create mode 100644 specs/storageprogram/05-rpc-endpoints.mdx diff --git a/specs/storageprogram/01-overview.mdx b/specs/storageprogram/01-overview.mdx new file mode 100644 index 000000000..1ae5f48a9 --- /dev/null +++ b/specs/storageprogram/01-overview.mdx @@ -0,0 +1,156 @@ +--- +title: "StorageProgram Overview" +description: "Introduction to unified storage for JSON and binary data on the Demos Network" +--- + +# StorageProgram Overview + +The StorageProgram feature provides unified on-chain storage for JSON and binary data on the Demos Network, with robust access control and a simple pricing model. + +## What is StorageProgram? + +StorageProgram is a general-purpose storage system that allows: + +- **JSON Storage** - Structured data with up to 64-level nesting depth +- **Binary Storage** - Base64-encoded files and assets +- **Access Control** - Owner, public, and restricted modes with groups +- **Deterministic Addressing** - Each program is content addressed by a deterministic storage address + +## Key Features + +| Feature | Description | +|---------|-------------| +| **Dual Encoding** | Support for JSON objects and binary (base64) data | +| **Robust ACL** | Owner-only, public, or restricted with allowed/blacklisted addresses and groups | +| **Max 1MB** | Data size limit of 1MB per storage program | +| **Simple Pricing** | 1 DEM per 10KB chunk | +| **Soft Delete** | Programs can be deleted but records are preserved for auditability | +| **IPFS-Ready** | Storage location field prepared for future IPFS integration | + +## Storage Address Format + +Each storage program is identified by a unique address: + +``` +stor-{keccak256-hash} +``` + 
+The hash is derived from the combination of owner address, program name, and optional salt for uniqueness. + +## Quick Start + +### Create a Storage Program + +```typescript +import { storage } from "@kynesyslabs/demosdk" + +// Create a storage program for JSON data +const payload = storage.buildStorageProgramPayload({ + operation: "CREATE_STORAGE_PROGRAM", + owner: "your-demos-address", + programName: "my-config", + encoding: "json", + data: { + settings: { + theme: "dark", + language: "en" + } + }, + acl: { mode: "owner" } +}) + +// The SDK calculates the storage address deterministically +console.log(payload.storageAddress) // stor-abc123... +``` + +### Write to Storage + +```typescript +// Update the storage program data +const writePayload = storage.buildStorageProgramPayload({ + operation: "WRITE_STORAGE", + storageAddress: "stor-abc123...", + data: { + settings: { + theme: "light", + language: "fr" + } + } +}) +``` + +### Read Storage via RPC + +```typescript +// Read storage program via HTTP +const response = await fetch( + "https://rpc.demos.network/storage-program/stor-abc123...", + { + headers: { + "identity": "ed25519:your-public-key", + "signature": "your-signature" + } + } +) + +const data = await response.json() +console.log(data.data) // { settings: { theme: "light", ... 
} } +``` + +## Access Control Modes + +StorageProgram supports three ACL modes: + +| Mode | Read Access | Write Access | +|------|-------------|--------------| +| **owner** | Owner only | Owner only | +| **public** | Anyone (except blacklisted) | Owner only | +| **restricted** | Owner + allowed + group members | Owner + group members with "write" permission | + +## Pricing Model + +Storage costs are calculated based on data size: + +- **Chunk Size**: 10KB (10,240 bytes) +- **Price per Chunk**: 1 DEM +- **Minimum Fee**: 1 DEM (even for small data) +- **Maximum Size**: 1MB (1,048,576 bytes) + +Example calculations: + +| Data Size | Chunks | Fee | +|-----------|--------|-----| +| 5 KB | 1 | 1 DEM | +| 15 KB | 2 | 2 DEM | +| 100 KB | 10 | 10 DEM | +| 1 MB | 103 | 103 DEM | + +## Operations + +StorageProgram supports four operations: + +1. **CREATE_STORAGE_PROGRAM** - Create a new storage program +2. **WRITE_STORAGE** - Update data in an existing storage program +3. **UPDATE_ACCESS_CONTROL** - Modify ACL settings +4. **DELETE_STORAGE_PROGRAM** - Soft delete the storage program + +## Current Limitations + +### On-Chain Only Storage + +Currently, all data is stored on-chain. The `storageLocation` field is prepared for future IPFS integration but only "onchain" is supported at this time. + +```typescript +// This will log a warning and fall back to "onchain" +const payload = storage.buildStorageProgramPayload({ + // ... 
+ storageLocation: "ipfs" // Not yet implemented +}) +``` + +## Next Steps + +- [Architecture](/storageprogram/architecture) - System design and data flow +- [Operations](/storageprogram/operations) - Detailed operation reference +- [ACL](/storageprogram/acl) - Access control configuration +- [RPC Reference](/storageprogram/rpc-endpoints) - HTTP API documentation diff --git a/specs/storageprogram/02-architecture.mdx b/specs/storageprogram/02-architecture.mdx new file mode 100644 index 000000000..06e7b280c --- /dev/null +++ b/specs/storageprogram/02-architecture.mdx @@ -0,0 +1,251 @@ +--- +title: "StorageProgram Architecture" +description: "System design and data flow for StorageProgram operations" +--- + +# StorageProgram Architecture + +This document describes the internal architecture and data flow of the StorageProgram feature. + +## System Overview + +``` +┌─────────────────────────────────────────────────────────────────────┐ +│ Client Application │ +└─────────────────────────────────────────────────────────────────────┘ + │ + │ SDK + ▼ +┌─────────────────────────────────────────────────────────────────────┐ +│ @kynesyslabs/demosdk │ +│ ┌─────────────────────────────────────────────────────────────┐ │ +│ │ storage.buildStorageProgramPayload() │ │ +│ │ - Generates deterministic storage addresses │ │ +│ │ - Validates payload structure │ │ +│ │ - Calculates fees │ │ +│ └─────────────────────────────────────────────────────────────┘ │ +└─────────────────────────────────────────────────────────────────────┘ + │ + │ Transaction + ▼ +┌─────────────────────────────────────────────────────────────────────┐ +│ Demos Network Node │ +│ ┌─────────────────────────────────────────────────────────────┐ │ +│ │ RPC Layer │ │ +│ │ - POST / (confirm/broadcast flow) │ │ +│ │ - GET /storage-program/:address (read) │ │ +│ │ - GET /storage-program/owner/:owner (list) │ │ +│ └─────────────────────────────────────────────────────────────┘ │ +│ │ │ +│ ▼ │ +│ 
┌─────────────────────────────────────────────────────────────┐ │ +│ │ GCR Processing Layer │ │ +│ │ - GCRStorageProgramRoutines.apply() │ │ +│ │ - Validation and fee verification │ │ +│ │ - ACL enforcement │ │ +│ └─────────────────────────────────────────────────────────────┘ │ +│ │ │ +│ ▼ │ +│ ┌─────────────────────────────────────────────────────────────┐ │ +│ │ Storage Layer │ │ +│ │ - GCR_StorageProgram entity (TypeORM) │ │ +│ │ - PostgreSQL persistence │ │ +│ └─────────────────────────────────────────────────────────────┘ │ +└─────────────────────────────────────────────────────────────────────┘ +``` + +## Components + +### SDK Layer + +The SDK provides the `storage` module with functions for building StorageProgram payloads: + +```typescript +import { storage } from "@kynesyslabs/demosdk" + +// Build a payload with automatic address generation +const payload = storage.buildStorageProgramPayload({ + operation: "CREATE_STORAGE_PROGRAM", + owner: senderAddress, + programName: "my-storage", + // ... +}) +``` + +The SDK handles: +- **Address Generation**: Deterministic `stor-{hash}` addresses +- **Payload Validation**: Structure and field validation +- **Fee Calculation**: Based on data size and pricing constants + +### GCR Processing Layer + +The `GCRStorageProgramRoutines` class processes StorageProgram operations: + +```typescript +// File: src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts + +class GCRStorageProgramRoutines { + static async apply( + editOperation: GCREdit, + repository: Repository, + simulate: boolean + ): Promise +} +``` + +Operations flow: +1. **Validation**: Payload structure and permissions +2. **Fee Verification**: Confirm sufficient funds +3. **State Mutation**: Create/update/delete records +4. 
**Persistence**: Save to database + +### Storage Layer + +The `GCR_StorageProgram` entity stores program data: + +```typescript +@Entity({ name: "gcr_storage_program" }) +export class GCRStorageProgram { + @PrimaryColumn() + storageAddress: string // stor-{hash} + + @Column() + owner: string + + @Column() + programName: string + + @Column() + encoding: "json" | "binary" + + @Column({ type: "jsonb", nullable: true }) + data: Record | string | null + + @Column({ type: "jsonb" }) + acl: StorageProgramACL + + @Column() + storageLocation: string // "onchain" or future "ipfs" + + @Column({ nullable: true }) + ipfsCid: string | null // Future IPFS support + + // ... additional fields +} +``` + +## Transaction Flow + +### Confirm Phase + +1. Client builds payload using SDK +2. Client sends to RPC with `method: "execute"` and `extra: "confirmTx"` +3. Node validates payload and calculates fees +4. Node returns fee breakdown and validation result + +### Broadcast Phase + +1. Client signs and sends confirmed transaction +2. Node processes through GCR layer +3. `GCRStorageProgramRoutines.apply()` handles the operation +4. State changes are persisted to database + +``` +Client Node + │ │ + │─── Confirm Request ──────────▶│ + │ │ Validate payload + │ │ Calculate fees + │◀── Fee Breakdown ─────────────│ + │ │ + │─── Broadcast (signed) ───────▶│ + │ │ Apply GCR edit + │ │ Persist state + │◀── Success Response ──────────│ +``` + +## Data Flow by Operation + +### CREATE_STORAGE_PROGRAM + +``` +1. SDK generates stor-{hash} address +2. Node validates address doesn't exist +3. Node validates payload structure +4. Node calculates storage fee +5. New GCRStorageProgram record created +6. Owner, ACL, data, and metadata stored +``` + +### WRITE_STORAGE + +``` +1. Node looks up existing program by address +2. Node verifies sender has write permission +3. Node calculates fee for new data size +4. Data field updated +5. Metadata merged if provided +6. 
lastModifiedByTx and totalFeesPaid updated +``` + +### UPDATE_ACCESS_CONTROL + +``` +1. Node looks up existing program +2. Node verifies sender is owner +3. ACL structure validated +4. ACL field replaced with new value +5. lastModifiedByTx updated +``` + +### DELETE_STORAGE_PROGRAM + +``` +1. Node looks up existing program +2. Node verifies sender has delete permission +3. isDeleted flag set to true +4. deletedByTx recorded +5. Data preserved for auditability +``` + +## RPC Endpoints + +### Read Operations + +| Endpoint | Method | Description | +|----------|--------|-------------| +| `/storage-program/:address` | GET | Read a storage program by address | +| `/storage-program/owner/:owner` | GET | List programs owned by an address | + +### Write Operations + +All write operations go through the standard confirm/broadcast flow on the main RPC endpoint (`POST /`). + +## ACL Enforcement + +Read permissions are checked in the RPC layer: + +```typescript +static checkReadPermission( + program: GCRStorageProgram, + requesterAddress?: string +): boolean +``` + +Write and delete permissions are checked in the GCR layer during operation processing. + +## Future: IPFS Integration + +The architecture is prepared for IPFS storage: + +```typescript +// Current stub implementation +program.storageLocation = "onchain" // Always onchain +program.ipfsCid = null // Populated when IPFS implemented + +// Future flow: +// 1. Pin data to IPFS network +// 2. Store CID in ipfsCid field +// 3. Set storageLocation to "ipfs" +// 4. 
Data field contains CID reference only +``` diff --git a/specs/storageprogram/03-operations.mdx b/specs/storageprogram/03-operations.mdx new file mode 100644 index 000000000..cbbbe2559 --- /dev/null +++ b/specs/storageprogram/03-operations.mdx @@ -0,0 +1,266 @@ +--- +title: "StorageProgram Operations" +description: "Detailed reference for StorageProgram transaction operations" +--- + +# StorageProgram Operations + +This document provides detailed reference for all StorageProgram operations. + +## Operation Types + +| Operation | Description | Permission Required | +|-----------|-------------|---------------------| +| `CREATE_STORAGE_PROGRAM` | Create a new storage program | None (anyone can create) | +| `WRITE_STORAGE` | Update data in existing program | Owner or group write permission | +| `UPDATE_ACCESS_CONTROL` | Modify ACL settings | Owner only | +| `DELETE_STORAGE_PROGRAM` | Soft delete the program | Owner or group delete permission | + +## CREATE_STORAGE_PROGRAM + +Creates a new storage program at a deterministic address. 
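The address derivation can be sketched as follows. This is an illustrative sketch only: per the spec the network hashes owner, program name, and salt with keccak256, but SHA-256 (via Node's `crypto`) is substituted here purely so the snippet runs without extra dependencies, and the helper name `deriveStorageAddress` is hypothetical.

```typescript
import { createHash } from "node:crypto"

// Sketch of deterministic address derivation. The spec uses
// keccak256(owner + programName + salt); SHA-256 stands in here
// only so the sketch is self-contained.
function deriveStorageAddress(
    owner: string,
    programName: string,
    salt?: string,
): string {
    const preimage = owner + programName + (salt ?? "")
    const hash = createHash("sha256").update(preimage, "utf8").digest("hex")
    return `stor-${hash}`
}

// Same inputs always yield the same address; adding a salt yields a new one.
const a = deriveStorageAddress("ed25519:abc123", "my-config")
const b = deriveStorageAddress("ed25519:abc123", "my-config")
const c = deriveStorageAddress("ed25519:abc123", "my-config", "v2")
```

Because the address is a pure function of its inputs, creating the "same" program twice collides, which is why the optional `salt` exists.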
+ +### Parameters + +| Field | Type | Required | Description | +|-------|------|----------|-------------| +| `operation` | string | Yes | Must be "CREATE_STORAGE_PROGRAM" | +| `storageAddress` | string | Yes | Generated `stor-{hash}` address | +| `programName` | string | Yes | Human-readable name (non-empty) | +| `encoding` | string | No | "json" (default) or "binary" | +| `data` | object/string | No | Initial data content | +| `acl` | object | No | Access control (defaults to owner mode) | +| `metadata` | object | No | Optional metadata | +| `salt` | string | No | Optional salt for address uniqueness | +| `storageLocation` | string | No | "onchain" only (default) | + +### Example + +```typescript +import { storage } from "@kynesyslabs/demosdk" + +const payload = storage.buildStorageProgramPayload({ + operation: "CREATE_STORAGE_PROGRAM", + owner: "ed25519:abc123...", + programName: "user-preferences", + encoding: "json", + data: { + theme: "dark", + notifications: true, + language: "en" + }, + acl: { + mode: "owner" + }, + metadata: { + version: "1.0", + createdBy: "my-app" + } +}) +``` + +### Response + +```typescript +{ + success: true, + message: "Storage program created: stor-abc123..." +} +``` + +### Errors + +| Error | Cause | +|-------|-------| +| "Storage program already exists" | Address already taken | +| "Program name is required" | Empty or missing programName | +| "Data size exceeds maximum" | Data larger than 1MB | +| "JSON nesting depth exceeds 64" | Too deeply nested JSON | + +## WRITE_STORAGE + +Updates the data in an existing storage program. 
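The update semantics (data replaced wholesale, metadata shallow-merged over the existing metadata) can be sketched with a hypothetical helper; this is not the node's actual implementation, just an illustration of the behavior:

```typescript
interface StoredProgram {
    data: Record<string, unknown> | string | null
    metadata: Record<string, unknown> | null
}

// Hypothetical helper illustrating WRITE_STORAGE semantics:
// `data` is replaced entirely (never deep-merged), while
// `metadata` keys are shallow-merged into the existing metadata.
function applyWrite(
    program: StoredProgram,
    newData: Record<string, unknown> | string,
    newMetadata?: Record<string, unknown>,
): StoredProgram {
    return {
        data: newData, // full replacement
        metadata: newMetadata
            ? { ...(program.metadata ?? {}), ...newMetadata }
            : program.metadata,
    }
}

const before: StoredProgram = {
    data: { theme: "dark", language: "en" },
    metadata: { version: "1.0" },
}
const after = applyWrite(
    before,
    { theme: "light" },
    { lastUpdated: "2024-01-16" },
)
// after.data is exactly { theme: "light" } -- the old `language` key is gone,
// while after.metadata keeps `version` and gains `lastUpdated`.
```

In practice this means clients must resend the full data object on every write, but can add metadata keys incrementally.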
+ +### Parameters + +| Field | Type | Required | Description | +|-------|------|----------|-------------| +| `operation` | string | Yes | Must be "WRITE_STORAGE" | +| `storageAddress` | string | Yes | Target storage address | +| `data` | object/string | Yes | New data content | +| `encoding` | string | No | Can change encoding type | +| `metadata` | object | No | Metadata to merge | + +### Example + +```typescript +const payload = storage.buildStorageProgramPayload({ + operation: "WRITE_STORAGE", + storageAddress: "stor-abc123...", + data: { + theme: "light", + notifications: false, + language: "fr" + }, + metadata: { + lastUpdated: new Date().toISOString() + } +}) +``` + +### Behavior + +- Data is **replaced** entirely, not merged +- Metadata is **merged** with existing metadata +- Encoding can be changed between writes +- Storage fees are calculated on new data size + +### Errors + +| Error | Cause | +|-------|-------| +| "Storage program not found" | Invalid address | +| "Storage program has been deleted" | Program was soft deleted | +| "Data size exceeds maximum" | New data larger than 1MB | + +## UPDATE_ACCESS_CONTROL + +Modifies the ACL settings of a storage program. + +### Parameters + +| Field | Type | Required | Description | +|-------|------|----------|-------------| +| `operation` | string | Yes | Must be "UPDATE_ACCESS_CONTROL" | +| `storageAddress` | string | Yes | Target storage address | +| `acl` | object | Yes | New ACL configuration | + +### Example + +```typescript +const payload = storage.buildStorageProgramPayload({ + operation: "UPDATE_ACCESS_CONTROL", + storageAddress: "stor-abc123...", + acl: { + mode: "restricted", + allowed: [ + "ed25519:friend1...", + "ed25519:friend2..." + ], + blacklisted: [ + "ed25519:blocked..." 
+ ], + groups: { + editors: { + members: ["ed25519:editor1..."], + permissions: ["read", "write"] + } + } + } +}) +``` + +### ACL Structure + +```typescript +interface StorageProgramACL { + mode: "owner" | "public" | "restricted" + allowed?: string[] // Addresses with read access + blacklisted?: string[] // Addresses denied access + groups?: { + [groupName: string]: { + members: string[] // Group member addresses + permissions: ("read" | "write" | "delete")[] + } + } +} +``` + +### Errors + +| Error | Cause | +|-------|-------| +| "Only owner can update access control" | Sender is not owner | +| "ACL mode must be one of: owner, public, restricted" | Invalid mode | +| "Invalid permission in group" | Permission not in allowed list | + +## DELETE_STORAGE_PROGRAM + +Soft deletes a storage program. Data is preserved for auditability. + +### Parameters + +| Field | Type | Required | Description | +|-------|------|----------|-------------| +| `operation` | string | Yes | Must be "DELETE_STORAGE_PROGRAM" | +| `storageAddress` | string | Yes | Target storage address | + +### Example + +```typescript +const payload = storage.buildStorageProgramPayload({ + operation: "DELETE_STORAGE_PROGRAM", + storageAddress: "stor-abc123..." 
+}) +``` + +### Behavior + +- Sets `isDeleted = true` +- Records `deletedByTx` transaction hash +- Data and metadata are preserved +- Program cannot be read via RPC after deletion +- Address cannot be reused (hash collision prevention) + +### Errors + +| Error | Cause | +|-------|-------| +| "Storage program not found" | Invalid address | +| "Storage program already deleted" | Already soft deleted | +| "No permission to delete" | Sender lacks delete permission | + +## Fee Calculation + +All operations that create or update data incur storage fees: + +```typescript +const CHUNK_SIZE = 10240 // 10KB +const FEE_PER_CHUNK = 1n // 1 DEM + +const sizeBytes = Buffer.byteLength(JSON.stringify(data), "utf8") +const chunks = Math.ceil(sizeBytes / CHUNK_SIZE) +const fee = BigInt(Math.max(1, chunks)) * FEE_PER_CHUNK +``` + +### Fee Examples + +| Data Size | Calculation | Fee | +|-----------|-------------|-----| +| 0 bytes | min(1) × 1 | 1 DEM | +| 5 KB | ceil(5120/10240) = 1 × 1 | 1 DEM | +| 10 KB | ceil(10240/10240) = 1 × 1 | 1 DEM | +| 10.1 KB | ceil(10340/10240) = 2 × 1 | 2 DEM | +| 100 KB | ceil(102400/10240) = 10 × 1 | 10 DEM | +| 1 MB | ceil(1048576/10240) = 103 × 1 | 103 DEM | + +## Validation Rules + +### Data Validation + +| Encoding | Validation | +|----------|------------| +| json | Valid JSON, max 64 nesting levels | +| binary | Valid base64 string, length % 4 == 0 | + +### Size Limits + +- Maximum data size: 1MB (1,048,576 bytes) +- For binary: decoded size is checked (base64 size × 3/4) +- For JSON: UTF-8 byte length of stringified data + +### Address Format + +- Must match pattern: `stor-{hex-hash}` +- Hash is keccak256 of (owner + programName + salt) +- Addresses are case-sensitive diff --git a/specs/storageprogram/04-acl.mdx b/specs/storageprogram/04-acl.mdx new file mode 100644 index 000000000..2de58d4cd --- /dev/null +++ b/specs/storageprogram/04-acl.mdx @@ -0,0 +1,355 @@ +--- +title: "StorageProgram Access Control" +description: "Detailed ACL configuration 
for StorageProgram read and write permissions" +--- + +# StorageProgram Access Control + +StorageProgram implements a robust Access Control List (ACL) system supporting owner-only, public, and restricted access modes with group-based permissions. + +## ACL Modes + +### Owner Mode + +The most restrictive mode. Only the storage program owner can read or modify data. + +```typescript +acl: { + mode: "owner" +} +``` + +| Operation | Permission | +|-----------|------------| +| Read | Owner only | +| Write | Owner only | +| Delete | Owner only | +| Update ACL | Owner only | + +### Public Mode + +Anyone can read the data, but only the owner can modify it. + +```typescript +acl: { + mode: "public", + blacklisted: [ + "ed25519:blocked-address..." // Optional: denied even in public mode + ] +} +``` + +| Operation | Permission | +|-----------|------------| +| Read | Anyone (except blacklisted) | +| Write | Owner only | +| Delete | Owner only | +| Update ACL | Owner only | + +### Restricted Mode + +Fine-grained control over who can read, write, and delete. + +```typescript +acl: { + mode: "restricted", + allowed: [ + "ed25519:reader1...", + "ed25519:reader2..." + ], + blacklisted: [ + "ed25519:blocked..." 
+ ], + groups: { + editors: { + members: ["ed25519:editor1..."], + permissions: ["read", "write"] + }, + admins: { + members: ["ed25519:admin1..."], + permissions: ["read", "write", "delete"] + } + } +} +``` + +| Operation | Permission | +|-----------|------------| +| Read | Owner, allowed list, or group with "read" | +| Write | Owner or group with "write" | +| Delete | Owner or group with "delete" | +| Update ACL | Owner only | + +## ACL Structure + +```typescript +interface StorageProgramACL { + /** + * Access mode: + * - "owner": Only owner can read/write + * - "public": Anyone can read, only owner writes + * - "restricted": Configurable via allowed/groups + */ + mode: "owner" | "public" | "restricted" + + /** + * Addresses with read access (restricted mode only) + * These addresses can read but cannot write unless in a group + */ + allowed?: string[] + + /** + * Addresses denied access regardless of mode + * Takes precedence over allowed list and group membership + */ + blacklisted?: string[] + + /** + * Named groups with specific permissions + * Members inherit all permissions assigned to the group + */ + groups?: { + [groupName: string]: { + members: string[] + permissions: ("read" | "write" | "delete")[] + } + } +} +``` + +## Permission Resolution + +Permission checks follow this precedence order: + +``` +1. Check blacklist (deny if present) +2. Check if owner (allow if owner) +3. Check allowed list (read only in restricted mode) +4. Check group membership and permissions +5. Check mode default (public allows read, owner denies all) +``` + +### Flowchart + +``` +Is address blacklisted? + │ + ├─ Yes → DENY + │ + └─ No → Is address the owner? + │ + ├─ Yes → ALLOW + │ + └─ No → Is mode "public"? + │ + ├─ Yes → ALLOW READ + │ (DENY WRITE/DELETE) + │ + └─ No → Is mode "restricted"? + │ + ├─ No (owner mode) → DENY + │ + └─ Yes → Is in allowed list? 
+ │ + ├─ Yes → ALLOW READ + │ + └─ No → Check groups for permission + │ + ├─ Has permission → ALLOW + │ + └─ No permission → DENY +``` + +## Group Permissions + +Groups provide a way to assign multiple permissions to multiple addresses: + +| Permission | Grants | +|------------|--------| +| `read` | Can read storage program data via RPC | +| `write` | Can update data using WRITE_STORAGE operation | +| `delete` | Can soft delete using DELETE_STORAGE_PROGRAM operation | + +### Group Examples + +**Read-Only Group** +```typescript +groups: { + viewers: { + members: ["ed25519:viewer1...", "ed25519:viewer2..."], + permissions: ["read"] + } +} +``` + +**Editor Group** +```typescript +groups: { + editors: { + members: ["ed25519:editor1..."], + permissions: ["read", "write"] + } +} +``` + +**Admin Group** +```typescript +groups: { + admins: { + members: ["ed25519:admin1..."], + permissions: ["read", "write", "delete"] + } +} +``` + +**Multiple Groups** +```typescript +groups: { + readers: { + members: ["ed25519:reader1...", "ed25519:reader2..."], + permissions: ["read"] + }, + contributors: { + members: ["ed25519:contrib1..."], + permissions: ["read", "write"] + }, + maintainers: { + members: ["ed25519:maint1..."], + permissions: ["read", "write", "delete"] + } +} +``` + +## Blacklist Behavior + +The blacklist takes precedence over all other permissions: + +```typescript +acl: { + mode: "public", + blacklisted: ["ed25519:bad-actor..."] +} +``` + +- Blacklisted addresses are denied even in public mode +- Blacklist is checked first in all permission evaluations +- Owner cannot be blacklisted (owner always has full access) + +## Common Patterns + +### Public Read, Owner Write + +```typescript +acl: { + mode: "public" +} +``` + +### Team Collaboration + +```typescript +acl: { + mode: "restricted", + groups: { + team: { + members: [ + "ed25519:alice...", + "ed25519:bob...", + "ed25519:charlie..." 
+ ], + permissions: ["read", "write"] + } + } +} +``` + +### Read-Only Sharing + +```typescript +acl: { + mode: "restricted", + allowed: [ + "ed25519:client1...", + "ed25519:client2..." + ] +} +``` + +### Tiered Access + +```typescript +acl: { + mode: "restricted", + groups: { + free: { + members: ["ed25519:free-user1..."], + permissions: ["read"] + }, + premium: { + members: ["ed25519:premium-user1..."], + permissions: ["read", "write"] + }, + enterprise: { + members: ["ed25519:enterprise-user1..."], + permissions: ["read", "write", "delete"] + } + } +} +``` + +### Public with Exceptions + +```typescript +acl: { + mode: "public", + blacklisted: [ + "ed25519:spammer1...", + "ed25519:abuser2..." + ] +} +``` + +## Updating ACL + +Only the owner can update ACL settings: + +```typescript +const payload = storage.buildStorageProgramPayload({ + operation: "UPDATE_ACCESS_CONTROL", + storageAddress: "stor-abc123...", + acl: { + mode: "restricted", + allowed: ["ed25519:new-reader..."], + groups: { + editors: { + members: ["ed25519:new-editor..."], + permissions: ["read", "write"] + } + } + } +}) +``` + +**Important**: The entire ACL is replaced, not merged. Include all desired settings. + +## RPC Identity Header + +To access restricted storage programs, clients must provide identity headers: + +```http +GET /storage-program/stor-abc123... +identity: ed25519:your-public-key +signature: your-signed-message +``` + +Without identity headers: +- Public mode: Access granted +- Owner/Restricted mode: Access denied + +## Security Considerations + +1. **Owner immutability**: The owner cannot be changed after creation +2. **Blacklist precedence**: Always checked first for security +3. **Group validation**: Group names and permissions are validated +4. **Address format**: All addresses must be valid Demos addresses +5. 
**No anonymous writes**: All write operations require authentication diff --git a/specs/storageprogram/05-rpc-endpoints.mdx b/specs/storageprogram/05-rpc-endpoints.mdx new file mode 100644 index 000000000..679e3a5e7 --- /dev/null +++ b/specs/storageprogram/05-rpc-endpoints.mdx @@ -0,0 +1,335 @@ +--- +title: "StorageProgram RPC Endpoints" +description: "HTTP API reference for reading StorageProgram data" +--- + +# StorageProgram RPC Endpoints + +This document provides the HTTP API reference for reading StorageProgram data. + +## Endpoints Overview + +| Endpoint | Method | Description | +|----------|--------|-------------| +| `/storage-program/:address` | GET | Read a storage program by address | +| `/storage-program/owner/:owner` | GET | List storage programs by owner | + +## Read Storage Program + +Retrieves a storage program by its address. + +### Request + +```http +GET /storage-program/stor-abc123def456... +``` + +### Headers + +| Header | Required | Description | +|--------|----------|-------------| +| `identity` | Conditional | Required for owner/restricted mode. Format: `algorithm:publicKey` or just `publicKey` | +| `signature` | Conditional | Required with identity. Signature of the public key | + +**Note**: Headers are optional for public storage programs. 
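How the node resolves read access from these headers can be sketched as follows. The sketch mirrors the precedence rules in the ACL spec (owner exempt from the blacklist, blacklist otherwise checked before everything else); the real RPC-layer check also verifies the signature, which is omitted here:

```typescript
type Permission = "read" | "write" | "delete"

interface StorageProgramACL {
    mode: "owner" | "public" | "restricted"
    allowed?: string[]
    blacklisted?: string[]
    groups?: Record<string, { members: string[]; permissions: Permission[] }>
}

// Sketch of read-permission resolution (signature verification omitted).
function canRead(
    owner: string,
    acl: StorageProgramACL,
    requester?: string,
): boolean {
    // Owner always has full access and cannot be blacklisted.
    if (requester === owner) return true
    // Blacklist takes precedence over every other grant.
    if (requester && acl.blacklisted?.includes(requester)) return false
    if (acl.mode === "public") return true
    if (acl.mode === "owner") return false
    if (acl.mode === "restricted") {
        // Anonymous requests are denied in restricted mode.
        if (!requester) return false
        if (acl.allowed?.includes(requester)) return true
        for (const group of Object.values(acl.groups ?? {})) {
            if (
                group.members.includes(requester) &&
                group.permissions.includes("read")
            ) {
                return true
            }
        }
    }
    return false
}
```

A missing `identity` header simply means `requester` is undefined, which is why only public programs are readable anonymously.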
+ +### Response + +**Success (200)** + +```json +{ + "success": true, + "storageAddress": "stor-abc123def456...", + "owner": "ed25519:owner-public-key...", + "programName": "my-storage-program", + "encoding": "json", + "data": { + "key": "value", + "nested": { + "data": "here" + } + }, + "metadata": { + "version": "1.0", + "createdBy": "my-app" + }, + "storageLocation": "onchain", + "sizeBytes": 1234, + "createdAt": "2024-01-15T10:30:00.000Z", + "updatedAt": "2024-01-16T14:20:00.000Z" +} +``` + +**Not Found (404)** + +```json +{ + "success": false, + "error": "Storage program not found: stor-abc123...", + "errorCode": "NOT_FOUND" +} +``` + +**Permission Denied (403)** + +```json +{ + "success": false, + "error": "Permission denied: You do not have read access to this storage program", + "errorCode": "PERMISSION_DENIED" +} +``` + +**Invalid Address (400)** + +```json +{ + "success": false, + "error": "Invalid storage address format. Expected: stor-{hash}", + "errorCode": "NOT_FOUND" +} +``` + +### Response Fields + +| Field | Type | Description | +|-------|------|-------------| +| `success` | boolean | Whether the request succeeded | +| `storageAddress` | string | The storage program address | +| `owner` | string | Owner's address | +| `programName` | string | Human-readable name | +| `encoding` | string | "json" or "binary" | +| `data` | object/string/null | Stored data content | +| `metadata` | object/null | Optional metadata | +| `storageLocation` | string | Storage location ("onchain") | +| `sizeBytes` | number | Data size in bytes | +| `createdAt` | string | ISO 8601 creation timestamp | +| `updatedAt` | string | ISO 8601 last update timestamp | +| `error` | string | Error message (on failure) | +| `errorCode` | string | Error code (on failure) | + +### Error Codes + +| Code | HTTP Status | Description | +|------|-------------|-------------| +| `NOT_FOUND` | 404 | Storage program doesn't exist or invalid address | +| `PERMISSION_DENIED` | 403 | Requester lacks 
read access | +| `DELETED` | 404 | Storage program was soft deleted | +| `INTERNAL_ERROR` | 500 | Server error during processing | + +### Example: cURL + +```bash +# Public storage program (no auth needed) +curl https://rpc.demos.network/storage-program/stor-abc123... + +# Protected storage program (with auth) +curl https://rpc.demos.network/storage-program/stor-abc123... \ + -H "identity: ed25519:your-public-key-hex" \ + -H "signature: your-signature-hex" +``` + +### Example: JavaScript + +```javascript +// Read public storage +const response = await fetch( + 'https://rpc.demos.network/storage-program/stor-abc123...' +) +const data = await response.json() + +// Read protected storage +const response = await fetch( + 'https://rpc.demos.network/storage-program/stor-abc123...', { + headers: { + 'identity': 'ed25519:' + publicKeyHex, + 'signature': signatureHex + } + } +) +``` + +## List Storage Programs by Owner + +Retrieves all storage programs owned by an address. + +### Request + +```http +GET /storage-program/owner/ed25519:owner-public-key... +``` + +### Headers + +Same as Read Storage Program. Used to filter results based on read permissions. 
+ +### Response + +**Success (200)** + +```json +{ + "success": true, + "programs": [ + { + "storageAddress": "stor-abc123...", + "programName": "config-v1", + "encoding": "json", + "sizeBytes": 1024, + "storageLocation": "onchain", + "createdAt": "2024-01-15T10:30:00.000Z", + "updatedAt": "2024-01-16T14:20:00.000Z" + }, + { + "storageAddress": "stor-def456...", + "programName": "user-data", + "encoding": "binary", + "sizeBytes": 51200, + "storageLocation": "onchain", + "createdAt": "2024-01-10T08:00:00.000Z", + "updatedAt": "2024-01-10T08:00:00.000Z" + } + ], + "count": 2 +} +``` + +**No Programs Found (200)** + +```json +{ + "success": true, + "programs": [], + "count": 0 +} +``` + +### Response Fields + +| Field | Type | Description | +|-------|------|-------------| +| `success` | boolean | Whether the request succeeded | +| `programs` | array | List of storage programs (without full data) | +| `count` | number | Total programs in the list | +| `error` | string | Error message (on failure) | + +### Program List Item Fields + +| Field | Type | Description | +|-------|------|-------------| +| `storageAddress` | string | The storage program address | +| `programName` | string | Human-readable name | +| `encoding` | string | "json" or "binary" | +| `sizeBytes` | number | Data size in bytes | +| `storageLocation` | string | Storage location ("onchain") | +| `createdAt` | string | ISO 8601 creation timestamp | +| `updatedAt` | string | ISO 8601 last update timestamp | + +**Note**: The list response does not include full `data` content to reduce response size. Use the individual read endpoint to get full data. + +### Filtering Behavior + +The list only returns programs that the requester can read: +- **Anonymous**: Only public programs +- **Authenticated**: Public + programs where requester has read access + +### Example: cURL + +```bash +# List all public programs by owner +curl https://rpc.demos.network/storage-program/owner/ed25519:abc123... 
+ +# List all accessible programs (including restricted) +curl https://rpc.demos.network/storage-program/owner/ed25519:abc123... \ + -H "identity: ed25519:your-public-key-hex" \ + -H "signature: your-signature-hex" +``` + +## Authentication + +### Identity Header Format + +The `identity` header supports multiple formats: + +``` +# With algorithm prefix (recommended) +identity: ed25519:public-key-hex + +# Without prefix (defaults to ed25519) +identity: public-key-hex + +# Other supported algorithms +identity: falcon:public-key-hex +identity: ml-dsa:public-key-hex +``` + +### Signature Header + +The signature should be the result of signing the public key with the corresponding private key: + +```javascript +// Using SDK +const { ucrypto } = require("@kynesyslabs/demosdk/encryption") + +const message = publicKeyHex // Sign the public key itself +const signature = await ucrypto.sign(privateKey, message) +const signatureHex = Buffer.from(signature).toString("hex") +``` + +### Signature Verification + +The server verifies: +1. The signature is valid for the public key +2. The algorithm matches the identity prefix +3. The identity has read permission for the requested resource + +## Write Operations + +Write operations (CREATE, WRITE, UPDATE_ACCESS_CONTROL, DELETE) are not performed via these endpoints. They use the standard RPC confirm/broadcast flow: + +```http +POST / +Content-Type: application/json + +{ + "method": "execute", + "params": [{ + "data": { + "content": { + "data": ["storageProgram", { /* payload */ }] + } + }, + "extra": "confirmTx" + }] +} +``` + +See the SDK documentation for building and submitting StorageProgram transactions. 
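For completeness, wrapping a StorageProgram payload into the request body above can be sketched with a hypothetical helper; `buildConfirmRequest` is not an SDK function, and only the field names shown in the example request are assumed:

```typescript
// Hypothetical helper producing the confirm-phase RPC envelope.
// Field names match the example POST body; the broadcast phase
// additionally requires a signed transaction (see the SDK docs).
function buildConfirmRequest(payload: Record<string, unknown>) {
    return {
        method: "execute",
        params: [
            {
                data: {
                    content: {
                        data: ["storageProgram", payload],
                    },
                },
                extra: "confirmTx",
            },
        ],
    }
}

const req = buildConfirmRequest({
    operation: "DELETE_STORAGE_PROGRAM",
    storageAddress: "stor-abc123...",
})
// req can then be POSTed as JSON to the node's root endpoint
// to obtain the fee breakdown before broadcasting.
```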
+ +## Rate Limiting + +The RPC endpoints respect the node's rate limiting configuration: + +- Per-IP request limits +- Per-identity transaction limits +- Automatic blocking for abuse + +Rate limit errors return: + +```json +{ + "error": "Rate limit exceeded", + "retryAfter": null +} +``` + +## CORS + +The endpoints support CORS for browser-based applications: + +``` +Access-Control-Allow-Origin: * +Access-Control-Allow-Methods: GET, POST, OPTIONS +Access-Control-Allow-Headers: Content-Type, identity, signature +``` From 11790a7ca971469dfe68519ad20f0a7aac3e2025 Mon Sep 17 00:00:00 2001 From: tcsenpai Date: Wed, 14 Jan 2026 15:02:51 +0100 Subject: [PATCH 06/29] added ipfs stubs --- package.json | 2 +- .../gcr_routines/GCRStorageProgramRoutines.ts | 24 +++++++++++++++++-- 2 files changed, 23 insertions(+), 3 deletions(-) diff --git a/package.json b/package.json index b40591c42..cb2a5b66e 100644 --- a/package.json +++ b/package.json @@ -59,7 +59,7 @@ "@fastify/cors": "^9.0.1", "@fastify/swagger": "^8.15.0", "@fastify/swagger-ui": "^4.1.0", - "@kynesyslabs/demosdk": "^2.8.13", + "@kynesyslabs/demosdk": "^2.8.15", "@metaplex-foundation/js": "^0.20.1", "@modelcontextprotocol/sdk": "^1.13.3", "@noble/ed25519": "^3.0.0", diff --git a/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts b/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts index 7e4d72bc4..d8dc890b3 100644 --- a/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts +++ b/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts @@ -397,8 +397,18 @@ export class GCRStorageProgramRoutines { program.sizeBytes = sizeBytes program.acl = variables.acl || { mode: "owner" } program.metadata = (edit.context.data?.metadata as Record) || variables.metadata || null - program.storageLocation = variables.storageLocation || "onchain" - program.ipfsCid = null + // REVIEW: IPFS storage location handling - stub for future implementation + // Currently only supports "onchain" 
storage. IPFS integration planned for future release. + const requestedLocation = variables.storageLocation || "onchain" + if (requestedLocation !== "onchain") { + log.warning( + "[StorageProgram] IPFS storage not yet implemented. " + + "Requested \"" + requestedLocation + "\", falling back to \"onchain\". " + + "Address: " + storageAddress, + ) + } + program.storageLocation = "onchain" // Always onchain for now + program.ipfsCid = null // IPFS CID stub - will be populated when IPFS is implemented program.salt = variables.salt || null program.createdByTx = edit.txhash program.lastModifiedByTx = edit.txhash @@ -464,6 +474,16 @@ export class GCRStorageProgramRoutines { program.lastModifiedByTx = edit.txhash program.totalFeesPaid = program.totalFeesPaid + fee + // REVIEW: IPFS storage location handling - stub for future implementation + // Write operations cannot change storageLocation after creation (always stays "onchain" for now) + if (variables.storageLocation && variables.storageLocation !== "onchain") { + log.warning( + "[StorageProgram] IPFS storage not yet implemented. " + + "Write operation requested \"" + variables.storageLocation + "\", but storage location " + + "cannot be changed after creation. 
Address: " + storageAddress, + ) + } + if (variables.metadata || edit.context.data?.metadata) { const newMetadata = (edit.context.data?.metadata as Record) || variables.metadata program.metadata = { ...program.metadata, ...newMetadata } From 633614b3d4bcba7dd365f8aa2d5819ac3fb52f9d Mon Sep 17 00:00:00 2001 From: tcsenpai Date: Wed, 14 Jan 2026 15:07:02 +0100 Subject: [PATCH 07/29] docs(storage): add comprehensive SDK examples for StorageProgram - Add full transaction flow diagram (confirm/broadcast) - Include examples for all CRUD operations - Document ACL helper methods and validation utilities - Add complete working example with error handling Co-Authored-By: Claude Opus 4.5 --- specs/storageprogram/06-examples.mdx | 570 +++++++++++++++++++++++++++ 1 file changed, 570 insertions(+) create mode 100644 specs/storageprogram/06-examples.mdx diff --git a/specs/storageprogram/06-examples.mdx b/specs/storageprogram/06-examples.mdx new file mode 100644 index 000000000..c587e6bc7 --- /dev/null +++ b/specs/storageprogram/06-examples.mdx @@ -0,0 +1,570 @@ +--- +title: "StorageProgram Examples" +description: "Complete SDK examples for StorageProgram operations" +--- + +# StorageProgram Examples + +Complete examples showing the full transaction flow for StorageProgram operations. + +## Transaction Flow Overview + +All write operations follow the standard Demos Network confirm/broadcast flow: + +``` +┌─────────────────────────────────────────────────────────────────────────┐ +│ StorageProgram Flow │ +├─────────────────────────────────────────────────────────────────────────┤ +│ │ +│ 1. BUILD PAYLOAD │ +│ ┌─────────────────────────────────────────────────────────────┐ │ +│ │ StorageProgram.createStorageProgram(...) │ │ +│ │ StorageProgram.writeStorage(...) │ │ +│ │ StorageProgram.updateAccessControl(...) │ │ +│ │ StorageProgram.deleteStorageProgram(...) │ │ +│ └─────────────────────────────────────────────────────────────┘ │ +│ ↓ │ +│ 2. 
CREATE TRANSACTION │ +│ ┌─────────────────────────────────────────────────────────────┐ │ +│ │ Transaction { │ │ +│ │ content: { │ │ +│ │ type: "storageProgram", │ │ +│ │ data: [ "storageProgram", payload ] │ │ +│ │ } │ │ +│ │ } │ │ +│ └─────────────────────────────────────────────────────────────┘ │ +│ ↓ │ +│ 3. CONFIRM TRANSACTION (RPC) │ +│ ┌─────────────────────────────────────────────────────────────┐ │ +│ │ POST / │ │ +│ │ { method: "execute", │ │ +│ │ params: [{ data: { content: tx.content }, │ │ +│ │ extra: "confirmTx" }] } │ │ +│ └─────────────────────────────────────────────────────────────┘ │ +│ ↓ │ +│ 4. RPC RETURNS VALIDITY DATA │ +│ ┌─────────────────────────────────────────────────────────────┐ │ +│ │ { success: true, │ │ +│ │ validityData: { │ │ +│ │ hash: "0x...", │ │ +│ │ fee: 1, │ │ +│ │ nonce: 42, │ │ +│ │ ... │ │ +│ │ } │ │ +│ │ } │ │ +│ └─────────────────────────────────────────────────────────────┘ │ +│ ↓ │ +│ 5. SIGN TRANSACTION │ +│ ┌─────────────────────────────────────────────────────────────┐ │ +│ │ signature = sign(validityData.hash, privateKey) │ │ +│ │ tx.signature = signature │ │ +│ │ tx.hash = validityData.hash │ │ +│ └─────────────────────────────────────────────────────────────┘ │ +│ ↓ │ +│ 6. BROADCAST TRANSACTION (RPC) │ +│ ┌─────────────────────────────────────────────────────────────┐ │ +│ │ POST / │ │ +│ │ { method: "execute", │ │ +│ │ params: [{ data: signedTransaction, │ │ +│ │ extra: "broadcastTx" }] } │ │ +│ └─────────────────────────────────────────────────────────────┘ │ +│ ↓ │ +│ 7. TRANSACTION CONFIRMED │ +│ ┌─────────────────────────────────────────────────────────────┐ │ +│ │ { success: true, │ │ +│ │ message: "Storage program created: stor-..." 
} │ │ +│ └─────────────────────────────────────────────────────────────┘ │ +│ │ +└─────────────────────────────────────────────────────────────────────────┘ +``` + +## Setup + +```typescript +import { StorageProgram, types, ucrypto } from "@kynesyslabs/demosdk" + +const RPC_URL = "https://rpc.demos.network" + +// Your wallet +const privateKey = "your-private-key-hex" +const publicKey = ucrypto.getPublicKey(privateKey) +const address = `ed25519:${publicKey}` +``` + +## Helper: Execute Transaction + +This helper handles the full confirm/broadcast flow: + +```typescript +async function executeStorageTransaction( + payload: types.StorageProgramPayload +): Promise { + // 1. Build transaction content + const txContent = { + type: "storageProgram" as const, + data: ["storageProgram", payload] as [string, types.StorageProgramPayload] + } + + // 2. Confirm transaction - get validity data + const confirmResponse = await fetch(RPC_URL, { + method: "POST", + headers: { "Content-Type": "application/json" }, + body: JSON.stringify({ + method: "execute", + params: [{ + data: { content: txContent }, + extra: "confirmTx" + }] + }) + }) + + const confirmResult = await confirmResponse.json() + if (!confirmResult.success) { + throw new Error(`Confirm failed: ${confirmResult.error}`) + } + + const { validityData } = confirmResult + + // 3. Sign the transaction hash + const signature = await ucrypto.sign(privateKey, validityData.hash) + const signatureHex = Buffer.from(signature).toString("hex") + + // 4. Build signed transaction + const signedTx = { + content: txContent, + hash: validityData.hash, + signature: signatureHex, + ed25519_signature: signatureHex, + blockNumber: null, + status: "pending" + } + + // 5. 
Broadcast transaction + const broadcastResponse = await fetch(RPC_URL, { + method: "POST", + headers: { "Content-Type": "application/json" }, + body: JSON.stringify({ + method: "execute", + params: [{ + data: signedTx, + extra: "broadcastTx" + }] + }) + }) + + return broadcastResponse.json() +} +``` + +## Example 1: Create JSON Storage Program + +Store structured JSON data with public read access. + +```typescript +// Derive the storage address first (deterministic) +const programName = "user-preferences" +const storageAddress = StorageProgram.deriveStorageAddress( + address, + programName +) +console.log("Storage address will be:", storageAddress) +// => stor-7a8b9c0d1e2f3a4b5c6d7e8f9a0b1c2d3e4f5a6b + +// Create the storage program payload +const payload = StorageProgram.createStorageProgram( + address, // deployer (becomes owner) + programName, // unique name + { // initial data (JSON) + theme: "dark", + notifications: true, + language: "en", + settings: { + fontSize: 14, + autoSave: true + } + }, + "json", // encoding + { mode: "public" }, // ACL - anyone can read + { // options + metadata: { + version: "1.0", + createdBy: "my-dapp" + } + } +) + +// Execute the transaction +const result = await executeStorageTransaction(payload) +console.log(result) +// => { success: true, message: "Storage program created: stor-7a8b9c..." } +``` + +## Example 2: Create Binary Storage Program + +Store binary data (images, files) with restricted access. 
+ +```typescript +import * as fs from "fs" + +// Read file and convert to base64 +const imageBuffer = fs.readFileSync("./avatar.png") +const base64Image = imageBuffer.toString("base64") + +// Check size before creating +const sizeBytes = StorageProgram.getDataSize(base64Image, "binary") +console.log(`Image size: ${sizeBytes} bytes`) + +if (!StorageProgram.validateSize(base64Image, "binary")) { + throw new Error("Image exceeds 1MB limit") +} + +// Calculate fee +const fee = StorageProgram.calculateStorageFee(base64Image, "binary") +console.log(`Storage fee: ${fee} DEM`) + +// Create payload with restricted ACL +const payload = StorageProgram.createStorageProgram( + address, + "team-avatar", + base64Image, + "binary", + { + mode: "restricted", + allowed: [ + "ed25519:teammate1-pubkey...", + "ed25519:teammate2-pubkey..." + ] + }, + { + metadata: { + filename: "avatar.png", + mimeType: "image/png", + uploadedAt: new Date().toISOString() + } + } +) + +const result = await executeStorageTransaction(payload) +``` + +## Example 3: Create Storage with Groups + +Use group-based ACL for team collaboration. + +```typescript +const payload = StorageProgram.createStorageProgram( + address, + "project-docs", + { + title: "Project Documentation", + sections: ["overview", "api", "deployment"] + }, + "json", + StorageProgram.groupACL({ + admins: { + members: ["ed25519:admin1...", "ed25519:admin2..."], + permissions: ["read", "write", "delete"] + }, + editors: { + members: ["ed25519:editor1...", "ed25519:editor2..."], + permissions: ["read", "write"] + }, + viewers: { + members: ["ed25519:viewer1...", "ed25519:viewer2..."], + permissions: ["read"] + } + }) +) + +const result = await executeStorageTransaction(payload) +``` + +## Example 4: Write/Update Storage + +Update data in an existing storage program. 
+ +```typescript +const payload = StorageProgram.writeStorage( + "stor-7a8b9c0d1e2f3a4b5c6d7e8f9a0b1c2d3e4f5a6b", + { + theme: "light", // changed + notifications: false, // changed + language: "fr", // changed + settings: { + fontSize: 16, // changed + autoSave: true + }, + newField: "new value" // added + }, + "json" +) + +const result = await executeStorageTransaction(payload) +// => { success: true, message: "Storage program updated: stor-7a8b9c..." } +``` + +## Example 5: Update Access Control + +Change who can access the storage program. + +```typescript +// Switch from owner-only to public +const payload = StorageProgram.updateAccessControl( + "stor-7a8b9c0d1e2f3a4b5c6d7e8f9a0b1c2d3e4f5a6b", + { mode: "public" } +) + +const result = await executeStorageTransaction(payload) +``` + +```typescript +// Add a blacklist to public storage +const payload = StorageProgram.updateAccessControl( + "stor-7a8b9c0d1e2f3a4b5c6d7e8f9a0b1c2d3e4f5a6b", + { + mode: "public", + blacklisted: [ + "ed25519:spammer1...", + "ed25519:spammer2..." + ] + } +) +``` + +```typescript +// Convert to group-based access +const payload = StorageProgram.updateAccessControl( + "stor-7a8b9c0d1e2f3a4b5c6d7e8f9a0b1c2d3e4f5a6b", + { + mode: "restricted", + groups: { + team: { + members: ["ed25519:alice...", "ed25519:bob..."], + permissions: ["read", "write"] + } + } + } +) +``` + +## Example 6: Delete Storage Program + +Soft-delete a storage program (owner or ACL-permissioned only). + +```typescript +const payload = StorageProgram.deleteStorageProgram( + "stor-7a8b9c0d1e2f3a4b5c6d7e8f9a0b1c2d3e4f5a6b" +) + +const result = await executeStorageTransaction(payload) +// => { success: true, message: "Storage program deleted: stor-7a8b9c..." } +``` + +## Example 7: Read Storage (RPC GET) + +Reading does NOT require a transaction. 
Use the RPC GET endpoints: + +```typescript +// Read public storage - no auth needed +const response = await fetch( + `${RPC_URL}/storage-program/stor-7a8b9c0d1e2f3a4b5c6d7e8f9a0b1c2d3e4f5a6b` +) +const data = await response.json() + +console.log(data) +// => { +// success: true, +// storageAddress: "stor-7a8b9c...", +// owner: "ed25519:owner-pubkey...", +// programName: "user-preferences", +// encoding: "json", +// data: { theme: "dark", ... }, +// metadata: { version: "1.0", ... }, +// storageLocation: "onchain", +// sizeBytes: 256, +// createdAt: "2024-01-15T10:30:00.000Z", +// updatedAt: "2024-01-16T14:20:00.000Z" +// } +``` + +```typescript +// Read protected storage - with auth headers +const message = publicKey +const signature = await ucrypto.sign(privateKey, message) +const signatureHex = Buffer.from(signature).toString("hex") + +const response = await fetch( + `${RPC_URL}/storage-program/stor-7a8b9c0d1e2f3a4b5c6d7e8f9a0b1c2d3e4f5a6b`, + { + headers: { + "identity": `ed25519:${publicKey}`, + "signature": signatureHex + } + } +) +const data = await response.json() +``` + +## Example 8: List Storage by Owner + +```typescript +// List all public programs by owner +const response = await fetch( + `${RPC_URL}/storage-program/owner/${address}` +) +const data = await response.json() + +console.log(data) +// => { +// success: true, +// programs: [ +// { storageAddress: "stor-abc...", programName: "config-v1", ... }, +// { storageAddress: "stor-def...", programName: "user-data", ... } +// ], +// count: 2 +// } +``` + +## ACL Helper Methods + +The SDK provides convenient ACL builders: + +```typescript +// Private/owner-only (default) +const privateAcl = StorageProgram.privateACL() +// => { mode: "owner" } + +// Public read, owner write +const publicAcl = StorageProgram.publicACL() +// => { mode: "public" } + +// Restricted to specific addresses +const restrictedAcl = StorageProgram.restrictedACL([ + "ed25519:friend1...", + "ed25519:friend2..." 
+]) +// => { mode: "restricted", allowed: ["ed25519:friend1...", ...] } + +// Group-based access +const groupAcl = StorageProgram.groupACL({ + admins: { members: [...], permissions: ["read", "write", "delete"] }, + users: { members: [...], permissions: ["read"] } +}) + +// Public with blacklist +const blacklistAcl = StorageProgram.blacklistACL( + "public", + ["ed25519:blocked1...", "ed25519:blocked2..."] +) +// => { mode: "public", blacklisted: [...] } +``` + +## Validation Utilities + +```typescript +// Check data size +const size = StorageProgram.getDataSize(data, "json") +const isValid = StorageProgram.validateSize(data, "json") + +// Check JSON nesting depth (max 64 levels) +const validNesting = StorageProgram.validateNestingDepth(data) + +// Calculate fee before transaction +const fee = StorageProgram.calculateStorageFee(data, "json") +console.log(`This will cost ${fee} DEM`) + +// Check permission +const acl = { mode: "restricted", allowed: ["ed25519:user1..."] } +const canRead = StorageProgram.checkPermission(acl, owner, user, "read") +const canWrite = StorageProgram.checkPermission(acl, owner, user, "write") +``` + +## Error Handling + +```typescript +try { + const result = await executeStorageTransaction(payload) + + if (!result.success) { + switch (result.errorCode) { + case "ALREADY_EXISTS": + console.error("Storage program already exists at this address") + break + case "NOT_FOUND": + console.error("Storage program not found") + break + case "PERMISSION_DENIED": + console.error("You don't have permission for this operation") + break + case "SIZE_EXCEEDED": + console.error("Data exceeds 1MB limit") + break + case "INSUFFICIENT_BALANCE": + console.error("Not enough DEM for storage fee") + break + default: + console.error(`Error: ${result.error}`) + } + } +} catch (error) { + console.error("Transaction failed:", error) +} +``` + +## Complete Working Example + +Here's a full example you can run: + +```typescript +import { StorageProgram, ucrypto } from 
"@kynesyslabs/demosdk" + +const RPC_URL = "https://rpc.demos.network" + +async function main() { + // Setup wallet + const privateKey = process.env.DEMOS_PRIVATE_KEY! + const publicKey = ucrypto.getPublicKey(privateKey) + const address = `ed25519:${publicKey}` + + console.log("Wallet address:", address) + + // 1. Create a storage program + const programName = "my-first-storage" + const storageAddress = StorageProgram.deriveStorageAddress(address, programName) + + console.log("Creating storage at:", storageAddress) + + const createPayload = StorageProgram.createStorageProgram( + address, + programName, + { message: "Hello, Demos!", timestamp: Date.now() }, + "json", + { mode: "public" } + ) + + const createResult = await executeStorageTransaction(createPayload) + console.log("Create result:", createResult) + + // 2. Read the storage + const readResponse = await fetch(`${RPC_URL}/storage-program/${storageAddress}`) + const readData = await readResponse.json() + console.log("Read result:", readData) + + // 3. Update the storage + const writePayload = StorageProgram.writeStorage( + storageAddress, + { message: "Updated message!", timestamp: Date.now(), version: 2 }, + "json" + ) + + const writeResult = await executeStorageTransaction(writePayload) + console.log("Write result:", writeResult) + + // 4. 
Read again to verify + const verifyResponse = await fetch(`${RPC_URL}/storage-program/${storageAddress}`) + const verifyData = await verifyResponse.json() + console.log("Updated data:", verifyData.data) +} + +main().catch(console.error) +``` From 03154509206a7e186633019986f4c38fed56f373 Mon Sep 17 00:00:00 2001 From: tcsenpai Date: Wed, 14 Jan 2026 16:21:29 +0100 Subject: [PATCH 08/29] fix(storage): ACL permission fixes and error handling improvements - Add missing owner field to CREATE_STORAGE_PROGRAM spec docs - Add INVALID_REQUEST error code for malformed requests (routes.ts) - Remove READ_STORAGE from valid transaction operations (reads are RPC-only) - Add write permission check in handleWriteStorage (was missing ACL enforcement) - Fix ACL priority: owner check now before blacklist (owner cannot be blacklisted) - Add checkWritePermission helper for ACL group/public mode validation Co-Authored-By: Claude Opus 4.5 --- specs/storageprogram/03-operations.mdx | 1 + src/features/storageprogram/routes.ts | 4 +- .../gcr_routines/GCRStorageProgramRoutines.ts | 56 ++++++++++++++++--- 3 files changed, 52 insertions(+), 9 deletions(-) diff --git a/specs/storageprogram/03-operations.mdx b/specs/storageprogram/03-operations.mdx index cbbbe2559..602681fce 100644 --- a/specs/storageprogram/03-operations.mdx +++ b/specs/storageprogram/03-operations.mdx @@ -25,6 +25,7 @@ Creates a new storage program at a deterministic address. 
| Field | Type | Required | Description | |-------|------|----------|-------------| | `operation` | string | Yes | Must be "CREATE_STORAGE_PROGRAM" | +| `owner` | string | Yes | Owner address (e.g., "ed25519:...") | | `storageAddress` | string | Yes | Generated `stor-{hash}` address | | `programName` | string | Yes | Human-readable name (non-empty) | | `encoding` | string | No | "json" (default) or "binary" | diff --git a/src/features/storageprogram/routes.ts b/src/features/storageprogram/routes.ts index 9f4c8c032..a12f0f769 100644 --- a/src/features/storageprogram/routes.ts +++ b/src/features/storageprogram/routes.ts @@ -39,7 +39,7 @@ interface StorageProgramResponse { createdAt?: string updatedAt?: string error?: string - errorCode?: "NOT_FOUND" | "PERMISSION_DENIED" | "DELETED" | "INTERNAL_ERROR" + errorCode?: "NOT_FOUND" | "PERMISSION_DENIED" | "DELETED" | "INTERNAL_ERROR" | "INVALID_REQUEST" } /** @@ -82,7 +82,7 @@ async function getStorageProgramHandler(req: Request): Promise { const response: StorageProgramResponse = { success: false, error: "Invalid storage address format. 
Expected: stor-{hash}", - errorCode: "NOT_FOUND", + errorCode: "INVALID_REQUEST", } return jsonResponse(response, 400) } diff --git a/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts b/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts index d8dc890b3..b8157d0da 100644 --- a/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts +++ b/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts @@ -59,10 +59,10 @@ export function validateStorageProgramPayload( const encoding = payload.encoding || "json" // Validate operation type + // Note: READ_STORAGE is not a transaction operation - reads are handled via RPC endpoints const validOperations = [ "CREATE_STORAGE_PROGRAM", "WRITE_STORAGE", - "READ_STORAGE", "UPDATE_ACCESS_CONTROL", "DELETE_STORAGE_PROGRAM", ] @@ -454,6 +454,19 @@ export class GCRStorageProgramRoutines { } } + // Check write permission (owner or ACL) + const sender = edit.context.sender + const canWrite = + program.owner === sender || + checkWritePermission(program.acl, sender) + + if (!canWrite) { + return { + success: false, + message: "No permission to write to this storage program", + } + } + if (simulate) { log.debug(`[StorageProgram] Simulated write: ${storageAddress}`) return { success: true, message: "Simulated write successful" } @@ -665,16 +678,16 @@ export class GCRStorageProgramRoutines { return false } - // Check blacklist first - if (acl.blacklisted?.includes(requesterAddress)) { - return false - } - - // Owner always has access + // Owner always has access (check BEFORE blacklist - owner cannot be blacklisted) if (requesterAddress === program.owner) { return true } + // Check blacklist + if (acl.blacklisted?.includes(requesterAddress)) { + return false + } + // Check allowed list if (acl.allowed?.includes(requesterAddress)) { return true @@ -723,3 +736,32 @@ function checkDeletePermission( return false } + +/** + * Check if address has write permission in ACL + */ +function 
checkWritePermission( + acl: { mode: string; allowed?: string[]; blacklisted?: string[]; groups?: Record }, + address: string, +): boolean { + // Check blacklist first + if (acl.blacklisted?.includes(address)) { + return false + } + + // Public mode allows anyone to write (if not blacklisted) + if (acl.mode === "public") { + return true + } + + // Check groups for write permission + if (acl.groups) { + for (const group of Object.values(acl.groups)) { + if (group.members.includes(address) && group.permissions.includes("write")) { + return true + } + } + } + + return false +} From 9173350d9320e63ac89595b7381ce7443a5be58c Mon Sep 17 00:00:00 2001 From: tcsenpai Date: Wed, 14 Jan 2026 17:39:33 +0100 Subject: [PATCH 09/29] fix(storage): address CodeRabbit review findings - Fix logging inconsistency: rename chunks variable to rawChunks for raw value, use chunks for effective value (min 1) ensuring log output matches actual fee calculation - Add null/object type guard in ACL group validation to prevent runtime errors from malformed ACL input Co-Authored-By: Claude Opus 4.5 --- .../gcr_routines/GCRStorageProgramRoutines.ts | 16 ++++++++++++---- 1 file changed, 12 insertions(+), 4 deletions(-) diff --git a/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts b/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts index b8157d0da..26f7296b5 100644 --- a/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts +++ b/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts @@ -135,8 +135,9 @@ export function validateStorageProgramPayload( } // Calculate fee - const chunks = Math.ceil(sizeBytes / STORAGE_PROGRAM_PRICING_CHUNK_BYTES) - const storageCost = BigInt(Math.max(1, chunks)) * STORAGE_PROGRAM_FEE_PER_CHUNK + const rawChunks = Math.ceil(sizeBytes / STORAGE_PROGRAM_PRICING_CHUNK_BYTES) + const chunks = Math.max(1, rawChunks) // Minimum 1 chunk even for empty data + const storageCost = BigInt(chunks) * STORAGE_PROGRAM_FEE_PER_CHUNK 
const baseCost = 0n const totalFee = baseCost + storageCost @@ -145,11 +146,11 @@ export function validateStorageProgramPayload( storageCost, sizeBytes, encoding, - chunks: Math.max(1, chunks), + chunks, } log.debug( - `[StorageProgram] Validated ${payload.operation}: ${sizeBytes} bytes, ${chunks} chunks, ${totalFee} DEM fee`, + `[StorageProgram] Validated ${payload.operation}: ${sizeBytes} bytes, ${chunks} chunk(s), ${totalFee} DEM fee`, ) return { @@ -263,6 +264,13 @@ function validateACLStructure(acl: unknown): { valid: boolean; message: string } return { valid: false, message: "ACL groups must be an object" } } for (const [groupName, group] of Object.entries(aclObj.groups)) { + // Guard against null or non-object group entries + if (!group || typeof group !== "object") { + return { + valid: false, + message: `ACL group ${groupName} must be an object`, + } + } const groupObj = group as Record if (!Array.isArray(groupObj.members)) { return { From bc5a7a5b6f3acd9d6d8284fd5d1d4dcb833ce1b9 Mon Sep 17 00:00:00 2001 From: tcsenpai Date: Thu, 15 Jan 2026 17:21:04 +0100 Subject: [PATCH 10/29] fix(storage): validate stor- address format for storageProgram transactions StorageProgram transactions use stor-{40 hex chars} format for the 'to' field instead of Ed25519 public keys. Added validateStorageAddress() method and modified structured() to route validation based on tx type. This fixes the TypeError when confirming storage transactions where validateToField() expected 32-byte Ed25519 keys but received stor- addresses. 
Co-Authored-By: Claude Opus 4.5 --- src/libs/blockchain/transaction.ts | 58 ++++++++++++++++++++++++++---- 1 file changed, 52 insertions(+), 6 deletions(-) diff --git a/src/libs/blockchain/transaction.ts b/src/libs/blockchain/transaction.ts index 723b534e5..97abfce92 100644 --- a/src/libs/blockchain/transaction.ts +++ b/src/libs/blockchain/transaction.ts @@ -393,17 +393,63 @@ export default class Transaction implements ITransaction { return null } } - // Modify the structured method to use the new validation - public static structured(tx: Transaction): { + /** + * Validates a storage address format (stor-{40 hex chars}) + * Used for StorageProgram transaction type where 'to' field is a storage address + */ + private static validateStorageAddress(to: string): { valid: boolean message: string } { - // Validate TO field - const toValidation = this.validateToField(tx.content.to) - if (!toValidation.valid) { + log.debug(`[TX] validateStorageAddress - Validating storage address: ${to}`) + + if (!to || typeof to !== "string") { + return { + valid: false, + message: "Missing or invalid storage address", + } + } + + // Storage address format: stor-{40 hex chars} + const storageAddressRegex = /^stor-[0-9a-f]{40}$/i + if (!storageAddressRegex.test(to)) { + log.debug(`[TX] validateStorageAddress - Invalid storage address format: ${to}`) return { valid: false, - message: toValidation.message, + message: `Invalid storage address format: ${to}. 
Expected: stor-{40 hex chars}`, + } + } + + log.debug("[TX] validateStorageAddress - Storage address is valid") + return { + valid: true, + message: "Storage address is valid", + } + } + + // Modify the structured method to use the new validation + public static structured(tx: Transaction): { + valid: boolean + message: string + } { + // REVIEW: StorageProgram transactions use stor-{hash} format for 'to' field + // instead of Ed25519 public key, so we use different validation + if (tx.content.type === "storageProgram") { + const storageValidation = this.validateStorageAddress(tx.content.to as string) + if (!storageValidation.valid) { + return { + valid: false, + message: storageValidation.message, + } + } + } else { + // Validate TO field as Ed25519 public key for non-storage transactions + const toValidation = this.validateToField(tx.content.to) + if (!toValidation.valid) { + return { + valid: false, + message: toValidation.message, + } } } From ab2ca93ef9c1cd661c763977849d61c55b36d589 Mon Sep 17 00:00:00 2001 From: tcsenpai Date: Thu, 15 Jan 2026 17:59:57 +0100 Subject: [PATCH 11/29] feat(storage): add search by name endpoint with partial matching - Add searchStorageProgramsByName to GCRStorageProgramRoutines - Support exact match and ILIKE partial matching - Add pagination support (limit, offset) - Add /storage-program/search?q=name endpoint - ACL filtering applied to search results Co-Authored-By: Claude Opus 4.5 --- src/features/storageprogram/routes.ts | 84 ++++++++++++++++++- .../gcr_routines/GCRStorageProgramRoutines.ts | 60 +++++++++++-- 2 files changed, 136 insertions(+), 8 deletions(-) diff --git a/src/features/storageprogram/routes.ts b/src/features/storageprogram/routes.ts index a12f0f769..e19f16520 100644 --- a/src/features/storageprogram/routes.ts +++ b/src/features/storageprogram/routes.ts @@ -231,6 +231,83 @@ async function listByOwnerHandler(req: Request): Promise { } } +/** + * Search storage programs by name (supports partial matching) + * + * 
Query parameters: + * - q: Search query (required) + * - exact: If "true", performs exact match instead of partial (optional) + * - limit: Max results to return, default 50 (optional) + * - offset: Pagination offset, default 0 (optional) + */ +async function searchByNameHandler(req: Request): Promise { + try { + const url = new URL(req.url) + const query = url.searchParams.get("q") + const exactMatch = url.searchParams.get("exact") === "true" + const limit = parseInt(url.searchParams.get("limit") || "50", 10) + const offset = parseInt(url.searchParams.get("offset") || "0", 10) + + if (!query || query.trim() === "") { + const response: StorageProgramsListResponse = { + success: false, + error: "Search query 'q' parameter is required", + } + return jsonResponse(response, 400) + } + + // Get requester identity from header + const identity = req.headers.get("identity") + let requesterAddress: string | undefined + + if (identity) { + const splits = identity.split(":") + requesterAddress = splits.length > 1 ? 
splits[1] : identity + } + + // Get repository + const db = await Datasource.getInstance() + const repository = db.getDataSource().getRepository(GCRStorageProgram) + + // Search programs by name + const programs = await GCRStorageProgramRoutines.searchStorageProgramsByName( + query.trim(), + repository, + { limit, offset, exactMatch }, + ) + + // Filter to only programs the requester can read + const accessiblePrograms = programs.filter(program => + GCRStorageProgramRoutines.checkReadPermission(program, requesterAddress), + ) + + // Map to response format (without full data for list view) + const response: StorageProgramsListResponse = { + success: true, + programs: accessiblePrograms.map(p => ({ + storageAddress: p.storageAddress, + programName: p.programName, + encoding: p.encoding, + sizeBytes: p.sizeBytes, + storageLocation: p.storageLocation, + createdAt: p.createdAt.toISOString(), + updatedAt: p.updatedAt.toISOString(), + })), + count: accessiblePrograms.length, + } + + log.debug(`[StorageProgram] Search "${query}" found ${accessiblePrograms.length} programs`) + return jsonResponse(response) + } catch (error) { + log.error(`[StorageProgram] Error searching storage programs: ${error}`) + const response: StorageProgramsListResponse = { + success: false, + error: error instanceof Error ? 
error.message : "Internal server error", + } + return jsonResponse(response, 500) + } +} + // ============================================================================ // Route Registration // ============================================================================ @@ -241,14 +318,15 @@ async function listByOwnerHandler(req: Request): Promise { * Routes: * - GET /storage-program/:address - Read a storage program by address * - GET /storage-program/owner/:owner - List storage programs by owner + * - GET /storage-program/search?q=name - Search storage programs by name (partial match) * * @param server - BunServer instance */ export function registerStorageProgramRoutes(server: BunServer): void { - // Read storage program by address - // Note: BunServer uses pattern matching, so we register the specific route + // Register specific routes first (more specific paths before wildcards) + server.get("/storage-program/search", searchByNameHandler) server.get("/storage-program/owner/*", listByOwnerHandler) server.get("/storage-program/*", getStorageProgramHandler) - log.info("[StorageProgram] Routes registered: /storage-program/:address, /storage-program/owner/:owner") + log.info("[StorageProgram] Routes registered: /storage-program/:address, /storage-program/owner/:owner, /storage-program/search") } diff --git a/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts b/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts index 26f7296b5..4aeff9ec9 100644 --- a/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts +++ b/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts @@ -652,6 +652,45 @@ export class GCRStorageProgramRoutines { }) } + /** + * Search storage programs by name (supports partial matching) + * @param namePattern - The name or partial name to search for + * @param repository - TypeORM repository for GCRStorageProgram + * @param options - Search options (limit, offset, exactMatch) + * @returns Array of 
matching storage programs + */ + static async searchStorageProgramsByName( + namePattern: string, + repository: Repository<GCRStorageProgram>, + options?: { + limit?: number + offset?: number + exactMatch?: boolean + }, + ): Promise<GCRStorageProgram[]> { + const limit = options?.limit ?? 50 + const offset = options?.offset ?? 0 + const exactMatch = options?.exactMatch ?? false + + if (exactMatch) { + return repository.find({ + where: { programName: namePattern, isDeleted: false }, + order: { createdAt: "DESC" }, + take: limit, + skip: offset, + }) + } + + // Partial match using ILIKE (case-insensitive) + return repository + .createQueryBuilder("sp") + .where("sp.programName ILIKE :pattern", { pattern: `%${namePattern}%` }) + .andWhere("sp.isDeleted = false") + .orderBy("sp.createdAt", "DESC") + .take(limit) + .skip(offset) + .getMany() + } /** * Check if an address has read permission for a storage program @@ -746,7 +785,13 @@ function checkDeletePermission( } /** - * Check if address has write permission in ACL + * Check if address has write permission in ACL (non-owner). + * Note: Owner check is done separately in handleWrite before calling this.
+ * + * Per spec (04-acl.mdx): + * - Owner mode: Only owner can write (this function returns false) + * - Public mode: Only owner can write (this function returns false) + * - Restricted mode: Owner or group with "write" permission */ function checkWritePermission( acl: { mode: string; allowed?: string[]; blacklisted?: string[]; groups?: Record<string, { members: string[]; permissions: string[] }> }, @@ -757,13 +802,18 @@ return false } - // Public mode allows anyone to write (if not blacklisted) + // Owner mode: only owner can write (handled by caller) + if (acl.mode === "owner") { + return false + } + + // Public mode: only owner can write (handled by caller) if (acl.mode === "public") { - return true + return false } - // Check groups for write permission - if (acl.groups) { + // Restricted mode: check groups for write permission + if (acl.mode === "restricted" && acl.groups) { for (const group of Object.values(acl.groups)) { if (group.members.includes(address) && group.permissions.includes("write")) { return true From 54c79f4fc03c5402d2b8146486cc8a953eea36b5 Mon Sep 17 00:00:00 2001 From: tcsenpai Date: Thu, 15 Jan 2026 18:09:30 +0100 Subject: [PATCH 12/29] feat(gcr): track assignedTxs for all successful transactions - Modified HandleGCR.apply() to update sender's assignedTxs on success - Added addAssignedTx() helper with duplicate prevention - Non-blocking: assignedTxs update failure doesn't fail the operation Relates to: DEM-549 Co-Authored-By: Claude Opus 4.5 --- src/libs/blockchain/gcr/handleGCR.ts | 65 +++++++++++++++++++++++++--- 1 file changed, 58 insertions(+), 7 deletions(-) diff --git a/src/libs/blockchain/gcr/handleGCR.ts b/src/libs/blockchain/gcr/handleGCR.ts index 76b7acefc..b562eea0e 100644 --- a/src/libs/blockchain/gcr/handleGCR.ts +++ b/src/libs/blockchain/gcr/handleGCR.ts @@ -261,53 +261,104 @@ export default class HandleGCR { editOperation.isRollback = true } + let result: GCRResult + // Applying the edit operations switch (editOperation.type) { case "balance": -
return GCRBalanceRoutines.apply( + result = await GCRBalanceRoutines.apply( editOperation, repositories.main as Repository<GCRMain>, simulate, ) + break case "nonce": - return GCRNonceRoutines.apply( + result = await GCRNonceRoutines.apply( editOperation, repositories.main as Repository<GCRMain>, simulate, ) + break case "identity": - return GCRIdentityRoutines.apply( + result = await GCRIdentityRoutines.apply( editOperation, repositories.main as Repository<GCRMain>, simulate, ) + break case "assign": case "subnetsTx": // TODO implementations log.debug(`Assigning GCREdit ${editOperation.type}`) - return { success: true, message: "Not implemented" } + result = { success: true, message: "Not implemented" } + break case "smartContract": case "escrow": // TODO implementations log.debug(`GCREdit ${editOperation.type} not yet implemented`) - return { success: true, message: "Not implemented" } + result = { success: true, message: "Not implemented" } + break // REVIEW: StorageProgram unified storage operations case "storageProgram": - return GCRStorageProgramRoutines.apply( + result = await GCRStorageProgramRoutines.apply( editOperation, repositories.storageProgram as Repository<GCRStorageProgram>, simulate, ) + break // REVIEW: TLSNotary attestation proof storage case "tlsnotary": - return GCRTLSNotaryRoutines.apply( + result = await GCRTLSNotaryRoutines.apply( editOperation, repositories.tlsnotary as Repository, simulate, ) + break default: return { success: false, message: "Invalid GCREdit type" } } + + // REVIEW: Update assignedTxs for the transaction sender on successful operations + // This tracks all transactions associated with an account + const sender = tx.content?.from + if (result.success && !simulate && tx.hash && sender) { + try { + await this.addAssignedTx(sender, tx.hash, repositories.main) + } catch (error) { + log.warn( + `[HandleGCR] Failed to update assignedTxs for ${sender}: ${error}`, + ) + // Don't fail the operation if assignedTxs update fails + } + } + + return result + } + + /** + * Adds a transaction hash to the account's assignedTxs array + * @param pubkey The account public key + * @param txHash The transaction hash to add + * @param repository The GCRMain repository + */ + private static async addAssignedTx( + pubkey: string, + txHash: string, + repository: Repository<GCRMain>, + ): Promise<void> { + let account = await repository.findOneBy({ pubkey }) + + if (!account) { + // Create account if it doesn't exist + account = await this.createAccount(pubkey) + } + + // Avoid duplicates + if (!account.assignedTxs.includes(txHash)) { + account.assignedTxs.push(txHash) + await repository.save(account) + log.debug(`[HandleGCR] Added tx ${txHash} to assignedTxs for ${pubkey}`) + } } /** From 233984b7443abc9694b276625c50e6bc6d58de5f Mon Sep 17 00:00:00 2001 From: tcsenpai Date: Sun, 18 Jan 2026 16:26:47 +0100 Subject: [PATCH 13/29] feat(storage): implement granular storage program API Add field-level read/write operations for storage programs instead of full blob operations. This enables efficient granular access to storage program data.
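As a rough sketch of what the granular payloads look like (illustrative only: the real StorageProgramPayload type comes from the SDK, and the storage address below is a placeholder), together with a minimal mirror of the validation rules enforced by validateStorageProgramPayload:

```typescript
// Hypothetical payload shapes for the granular write operations.
// Field names mirror validateStorageProgramPayload; "stor-example" is a placeholder.
type GranularOp = "SET_FIELD" | "SET_ITEM" | "APPEND_ITEM" | "DELETE_FIELD" | "DELETE_ITEM"

interface GranularPayload {
  operation: GranularOp
  storageAddress: string
  field: string
  index?: number
  value?: unknown
}

// Minimal mirror of the validation rules: every granular op needs a field,
// SET_ITEM/DELETE_ITEM need an index, SET_FIELD/SET_ITEM/APPEND_ITEM need a value.
function isValidGranular(p: GranularPayload): boolean {
  if (!p.field) return false
  if ((p.operation === "SET_ITEM" || p.operation === "DELETE_ITEM") && typeof p.index !== "number") {
    return false
  }
  if (
    (p.operation === "SET_FIELD" || p.operation === "SET_ITEM" || p.operation === "APPEND_ITEM") &&
    p.value === undefined
  ) {
    return false
  }
  return true
}

// Example: append a new entry to an array field of a JSON-encoded program.
const appendPost: GranularPayload = {
  operation: "APPEND_ITEM",
  storageAddress: "stor-example",
  field: "posts",
  value: { id: 1, text: "hello" },
}
```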
Read methods (manageNodeCall.ts): - getStorageProgramFields - list top-level field names - getStorageProgramValue - get specific field value - getStorageProgramItem - get array element by index - hasStorageProgramField - check field existence - getStorageProgramFieldType - get field type info - getStorageProgramAll - full data (retrocompat) Write routines (GCRStorageProgramRoutines.ts): - SET_FIELD - set/create field value - SET_ITEM - set array element at index - APPEND_ITEM - push to array - DELETE_FIELD - remove field - DELETE_ITEM - remove array element Features: - ACL enforcement on all operations - Fee calculation based on size delta - Binary encoding detection (error for granular) - Bounds checking for array operations Co-Authored-By: Claude Opus 4.5 --- package.json | 2 +- specs/storageprogram/04-acl.mdx | 74 +- specs/storageprogram/05-rpc-endpoints.mdx | 5 +- .../gcr_routines/GCRStorageProgramRoutines.ts | 401 ++++++++++ src/libs/blockchain/gcr/handleGCR.ts | 4 + src/libs/network/manageGCRRoutines.ts | 160 ++++ src/libs/network/manageNodeCall.ts | 702 ++++++++++++++++++ .../entities/GCRv2/GCR_StorageProgram.ts | 8 + 8 files changed, 1322 insertions(+), 34 deletions(-) diff --git a/package.json b/package.json index cb2a5b66e..877ab7e91 100644 --- a/package.json +++ b/package.json @@ -59,7 +59,7 @@ "@fastify/cors": "^9.0.1", "@fastify/swagger": "^8.15.0", "@fastify/swagger-ui": "^4.1.0", - "@kynesyslabs/demosdk": "^2.8.15", + "@kynesyslabs/demosdk": "^2.9.0", "@metaplex-foundation/js": "^0.20.1", "@modelcontextprotocol/sdk": "^1.13.3", "@noble/ed25519": "^3.0.0", diff --git a/specs/storageprogram/04-acl.mdx b/specs/storageprogram/04-acl.mdx index 2de58d4cd..e50c69e87 100644 --- a/specs/storageprogram/04-acl.mdx +++ b/specs/storageprogram/04-acl.mdx @@ -119,47 +119,59 @@ interface StorageProgramACL { ## Permission Resolution -Permission checks follow this precedence order: +Permission checks follow this precedence order (varies slightly by mode): +**Public 
Mode:** ``` -1. Check blacklist (deny if present) -2. Check if owner (allow if owner) -3. Check allowed list (read only in restricted mode) +1. Check blacklist (deny if blacklisted) +2. Allow read (anyone can read) +3. Deny write/delete (owner only, checked separately) +``` + +**Owner Mode:** +``` +1. Check if owner (allow all if owner) +2. Deny all (non-owners have no access) +``` + +**Restricted Mode:** +``` +1. Check if owner (allow all - owner cannot be blacklisted) +2. Check blacklist (deny if blacklisted) +3. Check allowed list (read access) 4. Check group membership and permissions -5. Check mode default (public allows read, owner denies all) +5. Deny by default ``` ### Flowchart ``` -Is address blacklisted? - │ - ├─ Yes → DENY - │ - └─ No → Is address the owner? - │ - ├─ Yes → ALLOW - │ - └─ No → Is mode "public"? - │ - ├─ Yes → ALLOW READ - │ (DENY WRITE/DELETE) - │ - └─ No → Is mode "restricted"? - │ - ├─ No (owner mode) → DENY - │ - └─ Yes → Is in allowed list? - │ - ├─ Yes → ALLOW READ - │ - └─ No → Check groups for permission - │ - ├─ Has permission → ALLOW - │ - └─ No permission → DENY +┌─ What is the ACL mode? +│ +├─ PUBLIC +│ └─ Is address blacklisted? +│ ├─ Yes → DENY +│ └─ No → ALLOW READ (write/delete: owner only) +│ +├─ OWNER +│ └─ Is address the owner? +│ ├─ Yes → ALLOW ALL +│ └─ No → DENY ALL +│ +└─ RESTRICTED + └─ Is address the owner? + ├─ Yes → ALLOW ALL (owner cannot be blacklisted) + └─ No → Is address blacklisted? + ├─ Yes → DENY + └─ No → Is in allowed list? + ├─ Yes → ALLOW READ + └─ No → Check groups for permission + ├─ Has permission → ALLOW + └─ No permission → DENY ``` +> **Note**: In restricted mode, owner is checked before blacklist because the owner cannot be blacklisted from their own storage program. 
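
The resolution order above can be sketched as a standalone TypeScript predicate (names here are illustrative, not the node's actual API; the real checks live in GCRStorageProgramRoutines):

```typescript
// Illustrative sketch of the per-mode resolution order described by the flowchart.
type Permission = "read" | "write"

interface ACL {
  mode: "public" | "owner" | "restricted"
  allowed?: string[]
  blacklisted?: string[]
  groups?: Record<string, { members: string[]; permissions: Permission[] }>
}

function resolvePermission(acl: ACL, owner: string, address: string, perm: Permission): boolean {
  // Owner is checked first: the owner cannot be blacklisted out of their own program.
  if (address === owner) return true
  switch (acl.mode) {
    case "owner":
      // Non-owners have no access at all.
      return false
    case "public":
      // Anyone not blacklisted may read; writes remain owner-only.
      if (acl.blacklisted?.includes(address)) return false
      return perm === "read"
    case "restricted":
      if (acl.blacklisted?.includes(address)) return false
      // The allowed list grants read access only.
      if (perm === "read" && acl.allowed?.includes(address)) return true
      // Groups may grant any permission they list.
      for (const group of Object.values(acl.groups ?? {})) {
        if (group.members.includes(address) && group.permissions.includes(perm)) return true
      }
      return false
  }
}
```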
+ ## Group Permissions Groups provide a way to assign multiple permissions to multiple addresses: diff --git a/specs/storageprogram/05-rpc-endpoints.mdx b/specs/storageprogram/05-rpc-endpoints.mdx index 679e3a5e7..617e502ef 100644 --- a/specs/storageprogram/05-rpc-endpoints.mdx +++ b/specs/storageprogram/05-rpc-endpoints.mdx @@ -87,7 +87,7 @@ GET /storage-program/stor-abc123def456... { "success": false, "error": "Invalid storage address format. Expected: stor-{hash}", - "errorCode": "NOT_FOUND" + "errorCode": "INVALID_REQUEST" } ``` @@ -113,7 +113,8 @@ GET /storage-program/stor-abc123def456... | Code | HTTP Status | Description | |------|-------------|-------------| -| `NOT_FOUND` | 404 | Storage program doesn't exist or invalid address | +| `NOT_FOUND` | 404 | Storage program doesn't exist | +| `INVALID_REQUEST` | 400 | Invalid address format or malformed request | | `PERMISSION_DENIED` | 403 | Requester lacks read access | | `DELETED` | 404 | Storage program was soft deleted | | `INTERNAL_ERROR` | 500 | Server error during processing | diff --git a/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts b/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts index 4aeff9ec9..a60659955 100644 --- a/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts +++ b/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts @@ -65,6 +65,12 @@ export function validateStorageProgramPayload( "WRITE_STORAGE", "UPDATE_ACCESS_CONTROL", "DELETE_STORAGE_PROGRAM", + // REVIEW: Granular field operations (JSON encoding only) + "SET_FIELD", + "SET_ITEM", + "APPEND_ITEM", + "DELETE_FIELD", + "DELETE_ITEM", ] if (!validOperations.includes(payload.operation)) { return { @@ -73,6 +79,37 @@ export function validateStorageProgramPayload( } } + // Granular operations require field name and JSON encoding + const granularOperations = ["SET_FIELD", "SET_ITEM", "APPEND_ITEM", "DELETE_FIELD", "DELETE_ITEM"] + if 
(granularOperations.includes(payload.operation)) { + // Validate field name is present + const payloadWithField = payload as StorageProgramPayload & { field?: string; index?: number; value?: unknown } + if (!payloadWithField.field || typeof payloadWithField.field !== "string") { + return { + valid: false, + message: `Field name is required for ${payload.operation} operation`, + } + } + + // SET_ITEM and DELETE_ITEM require index + if ((payload.operation === "SET_ITEM" || payload.operation === "DELETE_ITEM") && + (payloadWithField.index === undefined || typeof payloadWithField.index !== "number")) { + return { + valid: false, + message: `Index is required for ${payload.operation} operation`, + } + } + + // SET_FIELD, SET_ITEM, and APPEND_ITEM require value + if ((payload.operation === "SET_FIELD" || payload.operation === "SET_ITEM" || payload.operation === "APPEND_ITEM") && + payloadWithField.value === undefined) { + return { + valid: false, + message: `Value is required for ${payload.operation} operation`, + } + } + } + // Validate storage address format if (!payload.storageAddress || !payload.storageAddress.startsWith("stor-")) { return { @@ -344,6 +381,22 @@ export class GCRStorageProgramRoutines { case "DELETE_STORAGE_PROGRAM": { return this.handleDelete(spEdit, gcrStorageProgramRepository, simulate) } + // REVIEW: Granular field operations + case "SET_FIELD": { + return this.handleSetField(spEdit, gcrStorageProgramRepository, simulate) + } + case "SET_ITEM": { + return this.handleSetItem(spEdit, gcrStorageProgramRepository, simulate) + } + case "APPEND_ITEM": { + return this.handleAppendItem(spEdit, gcrStorageProgramRepository, simulate) + } + case "DELETE_FIELD": { + return this.handleDeleteField(spEdit, gcrStorageProgramRepository, simulate) + } + case "DELETE_ITEM": { + return this.handleDeleteItem(spEdit, gcrStorageProgramRepository, simulate) + } default: { log.warning(`[StorageProgram] Unknown operation: ${operation}`) return { success: false, message: 
`Unknown operation: ${operation}` } @@ -420,6 +473,7 @@ export class GCRStorageProgramRoutines { program.salt = variables.salt || null program.createdByTx = edit.txhash program.lastModifiedByTx = edit.txhash + program.interactionTxs = [edit.txhash] program.totalFeesPaid = fee program.isDeleted = false program.deletedByTx = null @@ -493,6 +547,7 @@ export class GCRStorageProgramRoutines { program.sizeBytes = newSizeBytes program.encoding = encoding program.lastModifiedByTx = edit.txhash + program.interactionTxs = [...(program.interactionTxs || []), edit.txhash] program.totalFeesPaid = program.totalFeesPaid + fee // REVIEW: IPFS storage location handling - stub for future implementation @@ -563,6 +618,7 @@ export class GCRStorageProgramRoutines { program.acl = variables.acl program.lastModifiedByTx = edit.txhash + program.interactionTxs = [...(program.interactionTxs || []), edit.txhash] await repository.save(program) log.info(`[StorageProgram] ACL updated: ${storageAddress}`) @@ -618,6 +674,7 @@ export class GCRStorageProgramRoutines { program.isDeleted = true program.deletedByTx = edit.txhash program.lastModifiedByTx = edit.txhash + program.interactionTxs = [...(program.interactionTxs || []), edit.txhash] await repository.save(program) log.info(`[StorageProgram] Deleted: ${storageAddress}`) @@ -758,6 +815,350 @@ export class GCRStorageProgramRoutines { // Unknown mode - deny by default return false } + + // ========================================================================= + // REVIEW: Granular Field Operations + // ========================================================================= + + /** + * Handle SET_FIELD operation - set a single field value + */ + private static async handleSetField( + edit: GCREditStorageProgram, + repository: Repository<GCRStorageProgram>, + simulate: boolean, + ): Promise<GCRResult> { + const storageAddress = edit.target + const sender = edit.context.sender + const variables = edit.context.data?.variables as StorageProgramPayload & { field: string; value: unknown } | undefined + + if (!variables?.field) { + return { success: false, message: "Field name is required for SET_FIELD operation" } + } + + const program = await repository.findOneBy({ storageAddress }) + if (!program) { + return { success: false, message: `Storage program not found: ${storageAddress}` } + } + if (program.isDeleted) { + return { success: false, message: `Storage program has been deleted: ${storageAddress}` } + } + + // Granular operations only work with JSON encoding + if (program.encoding === "binary") { + return { success: false, message: "SET_FIELD operation not supported for binary encoding. Use WRITE_STORAGE instead." } + } + + // Check write permission + if (program.owner !== sender && !checkWritePermission(program.acl, sender)) { + return { success: false, message: "No permission to write to this storage program" } + } + + if (simulate) { + log.debug(`[StorageProgram] Simulated SET_FIELD: ${storageAddress}.${variables.field}`) + return { success: true, message: "Simulated SET_FIELD successful" } + } + + // Get current data or initialize empty object + const currentData = (program.data as Record<string, unknown>) || {} + const oldSizeBytes = calculateDataSize(currentData, "json") + + // Set the field value + currentData[variables.field] = variables.value + const newSizeBytes = calculateDataSize(currentData, "json") + + // Check size limit + if (newSizeBytes > STORAGE_PROGRAM_MAX_SIZE_BYTES) { + return { success: false, message: `Data size ${newSizeBytes} bytes exceeds maximum ${STORAGE_PROGRAM_MAX_SIZE_BYTES} bytes (1MB)` } + } + + // Calculate delta-based fee (only charge if size increased) + const deltaBytes = Math.max(0, newSizeBytes - oldSizeBytes) + const deltaChunks = Math.ceil(deltaBytes / STORAGE_PROGRAM_PRICING_CHUNK_BYTES) + const fee = deltaChunks > 0 ?
BigInt(deltaChunks) * STORAGE_PROGRAM_FEE_PER_CHUNK : 0n + + // Update program + program.data = currentData + program.sizeBytes = newSizeBytes + program.lastModifiedByTx = edit.txhash + program.interactionTxs = [...(program.interactionTxs || []), edit.txhash] + program.totalFeesPaid = program.totalFeesPaid + fee + + await repository.save(program) + log.info(`[StorageProgram] SET_FIELD: ${storageAddress}.${variables.field} (delta: +${deltaBytes} bytes, fee: ${fee} DEM)`) + + return { success: true, message: `Field ${variables.field} set successfully` } + } + + /** + * Handle SET_ITEM operation - set an item at a specific array index + */ + private static async handleSetItem( + edit: GCREditStorageProgram, + repository: Repository<GCRStorageProgram>, + simulate: boolean, + ): Promise<GCRResult> { + const storageAddress = edit.target + const sender = edit.context.sender + const variables = edit.context.data?.variables as StorageProgramPayload & { field: string; index: number; value: unknown } | undefined + + if (!variables?.field || variables.index === undefined) { + return { success: false, message: "Field name and index are required for SET_ITEM operation" } + } + + const program = await repository.findOneBy({ storageAddress }) + if (!program) { + return { success: false, message: `Storage program not found: ${storageAddress}` } + } + if (program.isDeleted) { + return { success: false, message: `Storage program has been deleted: ${storageAddress}` } + } + + if (program.encoding === "binary") { + return { success: false, message: "SET_ITEM operation not supported for binary encoding" } + } + + if (program.owner !== sender && !checkWritePermission(program.acl, sender)) { + return { success: false, message: "No permission to write to this storage program" } + } + + const currentData = (program.data as Record<string, unknown>) || {} + const fieldValue = currentData[variables.field] + + if (!Array.isArray(fieldValue)) { + return { success: false, message: `Field ${variables.field} is not an array` } + } + + if (variables.index < 0 || variables.index >= fieldValue.length) { + return { success: false, message: `Index ${variables.index} out of bounds for array ${variables.field} (length: ${fieldValue.length})` } + } + + if (simulate) { + log.debug(`[StorageProgram] Simulated SET_ITEM: ${storageAddress}.${variables.field}[${variables.index}]`) + return { success: true, message: "Simulated SET_ITEM successful" } + } + + const oldSizeBytes = calculateDataSize(currentData, "json") + fieldValue[variables.index] = variables.value + const newSizeBytes = calculateDataSize(currentData, "json") + + if (newSizeBytes > STORAGE_PROGRAM_MAX_SIZE_BYTES) { + return { success: false, message: `Data size ${newSizeBytes} bytes exceeds maximum ${STORAGE_PROGRAM_MAX_SIZE_BYTES} bytes (1MB)` } + } + + const deltaBytes = Math.max(0, newSizeBytes - oldSizeBytes) + const deltaChunks = Math.ceil(deltaBytes / STORAGE_PROGRAM_PRICING_CHUNK_BYTES) + const fee = deltaChunks > 0 ? BigInt(deltaChunks) * STORAGE_PROGRAM_FEE_PER_CHUNK : 0n + + program.data = currentData + program.sizeBytes = newSizeBytes + program.lastModifiedByTx = edit.txhash + program.interactionTxs = [...(program.interactionTxs || []), edit.txhash] + program.totalFeesPaid = program.totalFeesPaid + fee + + await repository.save(program) + log.info(`[StorageProgram] SET_ITEM: ${storageAddress}.${variables.field}[${variables.index}] (delta: +${deltaBytes} bytes, fee: ${fee} DEM)`) + + return { success: true, message: `Item at ${variables.field}[${variables.index}] set successfully` } + } + + /** + * Handle APPEND_ITEM operation - append an item to an array field + */ + private static async handleAppendItem( + edit: GCREditStorageProgram, + repository: Repository<GCRStorageProgram>, + simulate: boolean, + ): Promise<GCRResult> { + const storageAddress = edit.target + const sender = edit.context.sender + const variables = edit.context.data?.variables as StorageProgramPayload & { field: string; value: unknown } | undefined + + if (!variables?.field) { + return { success:
false, message: "Field name is required for APPEND_ITEM operation" } + } + + const program = await repository.findOneBy({ storageAddress }) + if (!program) { + return { success: false, message: `Storage program not found: ${storageAddress}` } + } + if (program.isDeleted) { + return { success: false, message: `Storage program has been deleted: ${storageAddress}` } + } + + if (program.encoding === "binary") { + return { success: false, message: "APPEND_ITEM operation not supported for binary encoding" } + } + + if (program.owner !== sender && !checkWritePermission(program.acl, sender)) { + return { success: false, message: "No permission to write to this storage program" } + } + + const currentData = (program.data as Record<string, unknown>) || {} + let fieldValue = currentData[variables.field] + + // If field doesn't exist, create empty array + if (fieldValue === undefined) { + fieldValue = [] + currentData[variables.field] = fieldValue + } + + if (!Array.isArray(fieldValue)) { + return { success: false, message: `Field ${variables.field} is not an array` } + } + + if (simulate) { + log.debug(`[StorageProgram] Simulated APPEND_ITEM: ${storageAddress}.${variables.field}`) + return { success: true, message: "Simulated APPEND_ITEM successful" } + } + + const oldSizeBytes = calculateDataSize(currentData, "json") + fieldValue.push(variables.value) + const newSizeBytes = calculateDataSize(currentData, "json") + + if (newSizeBytes > STORAGE_PROGRAM_MAX_SIZE_BYTES) { + return { success: false, message: `Data size ${newSizeBytes} bytes exceeds maximum ${STORAGE_PROGRAM_MAX_SIZE_BYTES} bytes (1MB)` } + } + + const deltaBytes = Math.max(0, newSizeBytes - oldSizeBytes) + const deltaChunks = Math.ceil(deltaBytes / STORAGE_PROGRAM_PRICING_CHUNK_BYTES) + const fee = deltaChunks > 0 ? BigInt(deltaChunks) * STORAGE_PROGRAM_FEE_PER_CHUNK : 0n + + program.data = currentData + program.sizeBytes = newSizeBytes + program.lastModifiedByTx = edit.txhash + program.interactionTxs = [...(program.interactionTxs || []), edit.txhash] + program.totalFeesPaid = program.totalFeesPaid + fee + + await repository.save(program) + log.info(`[StorageProgram] APPEND_ITEM: ${storageAddress}.${variables.field} (new length: ${fieldValue.length}, delta: +${deltaBytes} bytes, fee: ${fee} DEM)`) + + return { success: true, message: `Item appended to ${variables.field} successfully (new length: ${fieldValue.length})` } + } + + /** + * Handle DELETE_FIELD operation - delete a single field + */ + private static async handleDeleteField( + edit: GCREditStorageProgram, + repository: Repository<GCRStorageProgram>, + simulate: boolean, + ): Promise<GCRResult> { + const storageAddress = edit.target + const sender = edit.context.sender + const variables = edit.context.data?.variables as StorageProgramPayload & { field: string } | undefined + + if (!variables?.field) { + return { success: false, message: "Field name is required for DELETE_FIELD operation" } + } + + const program = await repository.findOneBy({ storageAddress }) + if (!program) { + return { success: false, message: `Storage program not found: ${storageAddress}` } + } + if (program.isDeleted) { + return { success: false, message: `Storage program has been deleted: ${storageAddress}` } + } + + if (program.encoding === "binary") { + return { success: false, message: "DELETE_FIELD operation not supported for binary encoding" } + } + + // DELETE_FIELD requires write permission (same as write operations) + if (program.owner !== sender && !checkWritePermission(program.acl, sender)) { + return { success: false, message: "No permission to write to this storage program" } + } + + const currentData = (program.data as Record<string, unknown>) || {} + + if (!(variables.field in currentData)) { + return { success: false, message: `Field ${variables.field} does not exist` } + }
if (simulate) { + log.debug(`[StorageProgram] Simulated DELETE_FIELD: ${storageAddress}.${variables.field}`) + return { success: true, message: "Simulated DELETE_FIELD successful" } + } + + // Delete field (no fee for deletions - they reduce storage) + delete currentData[variables.field] + const newSizeBytes = calculateDataSize(currentData, "json") + + program.data = currentData + program.sizeBytes = newSizeBytes + program.lastModifiedByTx = edit.txhash + program.interactionTxs = [...(program.interactionTxs || []), edit.txhash] + // No fee added for deletions + + await repository.save(program) + log.info(`[StorageProgram] DELETE_FIELD: ${storageAddress}.${variables.field} (new size: ${newSizeBytes} bytes)`) + + return { success: true, message: `Field ${variables.field} deleted successfully` } + } + + /** + * Handle DELETE_ITEM operation - delete an item at a specific array index + */ + private static async handleDeleteItem( + edit: GCREditStorageProgram, + repository: Repository<GCRStorageProgram>, + simulate: boolean, + ): Promise<GCRResult> { + const storageAddress = edit.target + const sender = edit.context.sender + const variables = edit.context.data?.variables as StorageProgramPayload & { field: string; index: number } | undefined + + if (!variables?.field || variables.index === undefined) { + return { success: false, message: "Field name and index are required for DELETE_ITEM operation" } + } + + const program = await repository.findOneBy({ storageAddress }) + if (!program) { + return { success: false, message: `Storage program not found: ${storageAddress}` } + } + if (program.isDeleted) { + return { success: false, message: `Storage program has been deleted: ${storageAddress}` } + } + + if (program.encoding === "binary") { + return { success: false, message: "DELETE_ITEM operation not supported for binary encoding" } + } + + if (program.owner !== sender && !checkWritePermission(program.acl, sender)) { + return { success: false, message: "No permission to write to this storage program" } + } + + const currentData = (program.data as Record<string, unknown>) || {} + const fieldValue = currentData[variables.field] + + if (!Array.isArray(fieldValue)) { + return { success: false, message: `Field ${variables.field} is not an array` } + } + + if (variables.index < 0 || variables.index >= fieldValue.length) { + return { success: false, message: `Index ${variables.index} out of bounds for array ${variables.field} (length: ${fieldValue.length})` } + } + + if (simulate) { + log.debug(`[StorageProgram] Simulated DELETE_ITEM: ${storageAddress}.${variables.field}[${variables.index}]`) + return { success: true, message: "Simulated DELETE_ITEM successful" } + } + + // Remove item at index (splice modifies array in place) + fieldValue.splice(variables.index, 1) + const newSizeBytes = calculateDataSize(currentData, "json") + + program.data = currentData + program.sizeBytes = newSizeBytes + program.lastModifiedByTx = edit.txhash + program.interactionTxs = [...(program.interactionTxs || []), edit.txhash] + // No fee added for deletions + + await repository.save(program) + log.info(`[StorageProgram] DELETE_ITEM: ${storageAddress}.${variables.field}[${variables.index}] (new length: ${fieldValue.length})`) + + return { success: true, message: `Item at ${variables.field}[${variables.index}] deleted successfully (new length: ${fieldValue.length})` } + } } /** diff --git a/src/libs/blockchain/gcr/handleGCR.ts b/src/libs/blockchain/gcr/handleGCR.ts index b562eea0e..8cb87c90f 100644 --- a/src/libs/blockchain/gcr/handleGCR.ts +++ b/src/libs/blockchain/gcr/handleGCR.ts @@ -391,6 +391,10 @@ export default class HandleGCR { // Keep track of applied edits to be able to rollback them const appliedEdits: GCREdit[] = [] for (const edit of tx.content.gcr_edits) { + // REVIEW: Ensure txhash is set on each GCR edit from the transaction + // This is needed because client-side GCR edits don't have the txhash + // (it's cleared during validation for hash comparison) + edit.txhash = tx.hash
log.debug("[applyToTx] Executing GCREdit: " + edit.type) try { const result = await HandleGCR.apply( diff --git a/src/libs/network/manageGCRRoutines.ts b/src/libs/network/manageGCRRoutines.ts index f2246e531..6bc945348 100644 --- a/src/libs/network/manageGCRRoutines.ts +++ b/src/libs/network/manageGCRRoutines.ts @@ -8,6 +8,9 @@ import { Referrals } from "@/features/incentive/referrals" import GCR from "../blockchain/gcr/gcr" import { NomisIdentityProvider } from "@/libs/identity/providers/nomisIdentityProvider" import { BroadcastManager } from "../communications/broadcastManager" +import { GCRStorageProgramRoutines } from "../blockchain/gcr/gcr_routines/GCRStorageProgramRoutines" +import Datasource from "@/model/datasource" +import { GCRStorageProgram } from "@/model/entities/GCRv2/GCR_StorageProgram" interface GCRRoutinePayload { method: string @@ -177,6 +180,163 @@ export default async function manageGCRRoutines( // SECTION Web2 Identity Management + // SECTION StorageProgram Query Methods + + // REVIEW: Get storage program by address + case "getStorageProgram": { + const storageAddress = params[0] + const requesterAddress = params[1] // Optional identity for ACL check + + if (!storageAddress) { + response.result = 400 + response.response = null + response.extra = { error: "Storage address is required" } + break + } + + try { + const db = await Datasource.getInstance() + const repository = db.getDataSource().getRepository(GCRStorageProgram) + + const program = await GCRStorageProgramRoutines.getStorageProgram( + storageAddress, + repository, + ) + + if (!program) { + response.result = 404 + response.response = null + response.extra = { error: `Storage program not found: ${storageAddress}` } + break + } + + // Check read permission + const hasReadAccess = GCRStorageProgramRoutines.checkReadPermission( + program, + requesterAddress, + ) + + if (!hasReadAccess) { + response.result = 403 + response.response = null + response.extra = { error: "Permission denied: You 
do not have read access to this storage program" } + break + } + + response.response = { + storageAddress: program.storageAddress, + owner: program.owner, + programName: program.programName, + encoding: program.encoding, + data: program.data, + metadata: program.metadata, + storageLocation: program.storageLocation, + sizeBytes: program.sizeBytes, + createdAt: program.createdAt.toISOString(), + updatedAt: program.updatedAt.toISOString(), + } + } catch (error) { + response.result = 500 + response.response = null + response.extra = { error: error instanceof Error ? error.message : String(error) } + } + break + } + + // REVIEW: Get storage programs by owner + case "getStorageProgramsByOwner": { + const owner = params[0] + const requesterAddress = params[1] // Optional identity for ACL filtering + + if (!owner) { + response.result = 400 + response.response = null + response.extra = { error: "Owner address is required" } + break + } + + try { + const db = await Datasource.getInstance() + const repository = db.getDataSource().getRepository(GCRStorageProgram) + + const programs = await GCRStorageProgramRoutines.getStorageProgramsByOwner( + owner, + repository, + ) + + // Filter to only programs the requester can read + const accessiblePrograms = programs.filter(program => + GCRStorageProgramRoutines.checkReadPermission(program, requesterAddress), + ) + + // Map to response format (without full data for list view) + response.response = accessiblePrograms.map(p => ({ + storageAddress: p.storageAddress, + programName: p.programName, + encoding: p.encoding, + sizeBytes: p.sizeBytes, + storageLocation: p.storageLocation, + createdAt: p.createdAt.toISOString(), + updatedAt: p.updatedAt.toISOString(), + })) + } catch (error) { + response.result = 500 + response.response = null + response.extra = { error: error instanceof Error ? 
error.message : String(error) } + } + break + } + + // REVIEW: Search storage programs by name + case "searchStoragePrograms": { + const query = params[0] + const options = params[1] || {} // { limit, offset, exactMatch } + const requesterAddress = params[2] // Optional identity for ACL filtering + + if (!query || (typeof query === "string" && query.trim() === "")) { + response.result = 400 + response.response = null + response.extra = { error: "Search query is required" } + break + } + + try { + const db = await Datasource.getInstance() + const repository = db.getDataSource().getRepository(GCRStorageProgram) + + const programs = await GCRStorageProgramRoutines.searchStorageProgramsByName( + typeof query === "string" ? query.trim() : String(query), + repository, + { + limit: options.limit || 50, + offset: options.offset || 0, + exactMatch: options.exactMatch || false, + }, + ) + + // Filter to only programs the requester can read + const accessiblePrograms = programs.filter(program => + GCRStorageProgramRoutines.checkReadPermission(program, requesterAddress), + ) + + // Map to response format (without full data for list view) + response.response = accessiblePrograms.map(p => ({ + storageAddress: p.storageAddress, + programName: p.programName, + encoding: p.encoding, + sizeBytes: p.sizeBytes, + storageLocation: p.storageLocation, + createdAt: p.createdAt.toISOString(), + updatedAt: p.updatedAt.toISOString(), + })) + } catch (error) { + response.result = 500 + response.response = null + response.extra = { error: error instanceof Error ? 
error.message : String(error) } + } + break + } + default: response.response = false break diff --git a/src/libs/network/manageNodeCall.ts b/src/libs/network/manageNodeCall.ts index 7d207d242..e24f69362 100644 --- a/src/libs/network/manageNodeCall.ts +++ b/src/libs/network/manageNodeCall.ts @@ -33,6 +33,28 @@ import { uint8ArrayToHex, } from "@kynesyslabs/demosdk/encryption" import { DTRManager } from "./dtr/dtrmanager" +// REVIEW: StorageProgram query imports +import { GCRStorageProgramRoutines } from "../blockchain/gcr/gcr_routines/GCRStorageProgramRoutines" +import { GCRStorageProgram } from "@/model/entities/GCRv2/GCR_StorageProgram" +import Datasource from "@/model/datasource" + +/** + * Normalizes a storage address to ensure it has the 'stor-' prefix. + * Storage addresses can come with or without the prefix from various sources. + * @param address - The storage address to normalize + * @returns The normalized address with 'stor-' prefix + */ +function normalizeStorageAddress(address: string): string { + if (!address) return address + // Remove any 0x prefix if present (legacy addresses) + const normalized = address.startsWith("0x") ? 
address.slice(2) : address + // Already normalized if the stor- prefix is present (covers legacy 0xstor- once 0x is stripped above) + if (normalized.startsWith("stor-")) { + return normalized + } + // Add stor- prefix if missing + return `stor-${normalized}` +} export interface NodeCall { message: string @@ -700,6 +722,686 @@ export async function manageNodeCall(content: NodeCall): Promise { break } + // REVIEW: StorageProgram query methods (public, no authentication required) + case "getStorageProgram": { + const rawStorageAddress = data.storageAddress + const requesterAddress = data.requesterAddress // Optional identity for ACL check + + if (!rawStorageAddress) { + response.result = 400 + response.response = null + response.extra = { error: "Storage address is required" } + break + } + + // Normalize address to ensure stor- prefix + const storageAddress = normalizeStorageAddress(rawStorageAddress) + + try { + const db = await Datasource.getInstance() + const repository = db.getDataSource().getRepository(GCRStorageProgram) + + const program = await GCRStorageProgramRoutines.getStorageProgram( + storageAddress, + repository, + ) + + if (!program) { + response.result = 404 + response.response = null + response.extra = { error: `Storage program not found: ${storageAddress}` } + break + } + + // Check read permission + const hasReadAccess = GCRStorageProgramRoutines.checkReadPermission( + program, + requesterAddress, + ) + + if (!hasReadAccess) { + response.result = 403 + response.response = null + response.extra = { error: "Permission denied: You do not have read access to this storage program" } + break + } + + response.response = { + storageAddress: program.storageAddress, + owner: program.owner, + programName: program.programName, + encoding: program.encoding, + data: program.data, + metadata: program.metadata, + storageLocation: program.storageLocation, + sizeBytes: program.sizeBytes, + createdAt: program.createdAt.toISOString(), + updatedAt: program.updatedAt.toISOString(), + createdByTx: program.createdByTx, + 
lastModifiedByTx: program.lastModifiedByTx, + } + } catch (error) { + log.error(`[manageNodeCall] getStorageProgram error: ${error}`) + response.result = 500 + response.response = null + response.extra = { error: error instanceof Error ? error.message : String(error) } + } + break + } + + case "getStorageProgramsByOwner": { + const owner = data.owner + const requesterAddress = data.requesterAddress // Optional identity for ACL filtering + + if (!owner) { + response.result = 400 + response.response = null + response.extra = { error: "Owner address is required" } + break + } + + try { + const db = await Datasource.getInstance() + const repository = db.getDataSource().getRepository(GCRStorageProgram) + + const programs = await GCRStorageProgramRoutines.getStorageProgramsByOwner( + owner, + repository, + ) + + // Filter to only programs the requester can read + const accessiblePrograms = programs.filter(program => + GCRStorageProgramRoutines.checkReadPermission(program, requesterAddress), + ) + + // Map to response format (without full data for list view) + response.response = accessiblePrograms.map(p => ({ + storageAddress: p.storageAddress, + programName: p.programName, + encoding: p.encoding, + sizeBytes: p.sizeBytes, + storageLocation: p.storageLocation, + createdAt: p.createdAt.toISOString(), + updatedAt: p.updatedAt.toISOString(), + })) + } catch (error) { + log.error(`[manageNodeCall] getStorageProgramsByOwner error: ${error}`) + response.result = 500 + response.response = null + response.extra = { error: error instanceof Error ? 
error.message : String(error) } + } + break + } + + case "searchStoragePrograms": { + const query = data.query + const options = data.options || {} // { limit, offset, exactMatch } + const requesterAddress = data.requesterAddress // Optional identity for ACL filtering + + if (!query || (typeof query === "string" && query.trim() === "")) { + response.result = 400 + response.response = null + response.extra = { error: "Search query is required" } + break + } + + try { + const db = await Datasource.getInstance() + const repository = db.getDataSource().getRepository(GCRStorageProgram) + + const programs = await GCRStorageProgramRoutines.searchStorageProgramsByName( + typeof query === "string" ? query.trim() : String(query), + repository, + { + limit: options.limit || 50, + offset: options.offset || 0, + exactMatch: options.exactMatch || false, + }, + ) + + // Filter to only programs the requester can read + const accessiblePrograms = programs.filter(program => + GCRStorageProgramRoutines.checkReadPermission(program, requesterAddress), + ) + + // Map to response format (without full data for list view) + response.response = accessiblePrograms.map(p => ({ + storageAddress: p.storageAddress, + programName: p.programName, + encoding: p.encoding, + sizeBytes: p.sizeBytes, + storageLocation: p.storageLocation, + createdAt: p.createdAt.toISOString(), + updatedAt: p.updatedAt.toISOString(), + })) + } catch (error) { + log.error(`[manageNodeCall] searchStoragePrograms error: ${error}`) + response.result = 500 + response.response = null + response.extra = { error: error instanceof Error ? 
error.message : String(error) } + } + break + } + + // REVIEW: Storage Program Standard Calls - Granular Read Methods + // These methods provide fine-grained access to storage program data fields + // All methods enforce ACL permissions and reject binary-encoded data + + /** + * getStorageProgramFields - Returns all field names (keys) from the storage program data + * @param storageAddress - The storage program address (with or without stor- prefix) + * @param requesterAddress - Optional requester identity for ACL check + * @returns Array of field names + */ + case "getStorageProgramFields": { + const rawStorageAddress = data.storageAddress + const requesterAddress = data.requesterAddress + + if (!rawStorageAddress) { + response.result = 400 + response.response = null + response.extra = { error: "Storage address is required" } + break + } + + const storageAddress = normalizeStorageAddress(rawStorageAddress) + + try { + const db = await Datasource.getInstance() + const repository = db.getDataSource().getRepository(GCRStorageProgram) + + const program = await GCRStorageProgramRoutines.getStorageProgram( + storageAddress, + repository, + ) + + if (!program) { + response.result = 404 + response.response = null + response.extra = { error: `Storage program not found: ${storageAddress}` } + break + } + + // Check read permission + const hasReadAccess = GCRStorageProgramRoutines.checkReadPermission( + program, + requesterAddress, + ) + + if (!hasReadAccess) { + response.result = 403 + response.response = null + response.extra = { error: "Permission denied: You do not have read access to this storage program" } + break + } + + // Reject binary encoding - granular access requires JSON + if (program.encoding === "binary") { + response.result = 400 + response.response = null + response.extra = { error: "Granular field access is not supported for binary-encoded storage programs. Use getStorageProgram for full data access." 
} + break + } + + // Return field names + const fields = program.data && typeof program.data === "object" + ? Object.keys(program.data) + : [] + + response.response = { + storageAddress: program.storageAddress, + fields, + } + } catch (error) { + log.error(`[manageNodeCall] getStorageProgramFields error: ${error}`) + response.result = 500 + response.response = null + response.extra = { error: error instanceof Error ? error.message : String(error) } + } + break + } + + /** + * getStorageProgramValue - Returns the value of a specific field + * @param storageAddress - The storage program address + * @param field - The field name to retrieve + * @param requesterAddress - Optional requester identity for ACL check + * @returns The field value + */ + case "getStorageProgramValue": { + const rawStorageAddress = data.storageAddress + const field = data.field + const requesterAddress = data.requesterAddress + + if (!rawStorageAddress) { + response.result = 400 + response.response = null + response.extra = { error: "Storage address is required" } + break + } + + if (!field || typeof field !== "string") { + response.result = 400 + response.response = null + response.extra = { error: "Field name is required and must be a string" } + break + } + + const storageAddress = normalizeStorageAddress(rawStorageAddress) + + try { + const db = await Datasource.getInstance() + const repository = db.getDataSource().getRepository(GCRStorageProgram) + + const program = await GCRStorageProgramRoutines.getStorageProgram( + storageAddress, + repository, + ) + + if (!program) { + response.result = 404 + response.response = null + response.extra = { error: `Storage program not found: ${storageAddress}` } + break + } + + const hasReadAccess = GCRStorageProgramRoutines.checkReadPermission( + program, + requesterAddress, + ) + + if (!hasReadAccess) { + response.result = 403 + response.response = null + response.extra = { error: "Permission denied: You do not have read access to this storage program" 
} + break + } + + if (program.encoding === "binary") { + response.result = 400 + response.response = null + response.extra = { error: "Granular field access is not supported for binary-encoded storage programs. Use getStorageProgram for full data access." } + break + } + + // Check if field exists + if (!program.data || typeof program.data !== "object" || !(field in program.data)) { + response.result = 404 + response.response = null + response.extra = { error: `Field not found: ${field}` } + break + } + + response.response = { + storageAddress: program.storageAddress, + field, + value: (program.data as Record)[field], + } + } catch (error) { + log.error(`[manageNodeCall] getStorageProgramValue error: ${error}`) + response.result = 500 + response.response = null + response.extra = { error: error instanceof Error ? error.message : String(error) } + } + break + } + + /** + * getStorageProgramItem - Returns an item from an array field at a specific index + * @param storageAddress - The storage program address + * @param field - The field name (must be an array) + * @param index - The array index to retrieve + * @param requesterAddress - Optional requester identity for ACL check + * @returns The item at the specified index + */ + case "getStorageProgramItem": { + const rawStorageAddress = data.storageAddress + const field = data.field + const index = data.index + const requesterAddress = data.requesterAddress + + if (!rawStorageAddress) { + response.result = 400 + response.response = null + response.extra = { error: "Storage address is required" } + break + } + + if (!field || typeof field !== "string") { + response.result = 400 + response.response = null + response.extra = { error: "Field name is required and must be a string" } + break + } + + if (typeof index !== "number" || !Number.isInteger(index) || index < 0) { + response.result = 400 + response.response = null + response.extra = { error: "Index is required and must be a non-negative integer" } + break + } + + 
const storageAddress = normalizeStorageAddress(rawStorageAddress) + + try { + const db = await Datasource.getInstance() + const repository = db.getDataSource().getRepository(GCRStorageProgram) + + const program = await GCRStorageProgramRoutines.getStorageProgram( + storageAddress, + repository, + ) + + if (!program) { + response.result = 404 + response.response = null + response.extra = { error: `Storage program not found: ${storageAddress}` } + break + } + + const hasReadAccess = GCRStorageProgramRoutines.checkReadPermission( + program, + requesterAddress, + ) + + if (!hasReadAccess) { + response.result = 403 + response.response = null + response.extra = { error: "Permission denied: You do not have read access to this storage program" } + break + } + + if (program.encoding === "binary") { + response.result = 400 + response.response = null + response.extra = { error: "Granular field access is not supported for binary-encoded storage programs. Use getStorageProgram for full data access." } + break + } + + if (!program.data || typeof program.data !== "object" || !(field in program.data)) { + response.result = 404 + response.response = null + response.extra = { error: `Field not found: ${field}` } + break + } + + const fieldValue = (program.data as Record)[field] + if (!Array.isArray(fieldValue)) { + response.result = 400 + response.response = null + response.extra = { error: `Field '${field}' is not an array` } + break + } + + if (index >= fieldValue.length) { + response.result = 404 + response.response = null + response.extra = { error: `Index ${index} out of bounds. Array length: ${fieldValue.length}` } + break + } + + response.response = { + storageAddress: program.storageAddress, + field, + index, + item: fieldValue[index], + } + } catch (error) { + log.error(`[manageNodeCall] getStorageProgramItem error: ${error}`) + response.result = 500 + response.response = null + response.extra = { error: error instanceof Error ? 
error.message : String(error) } + } + break + } + + /** + * hasStorageProgramField - Checks if a field exists in the storage program data + * @param storageAddress - The storage program address + * @param field - The field name to check + * @param requesterAddress - Optional requester identity for ACL check + * @returns Boolean indicating if field exists + */ + case "hasStorageProgramField": { + const rawStorageAddress = data.storageAddress + const field = data.field + const requesterAddress = data.requesterAddress + + if (!rawStorageAddress) { + response.result = 400 + response.response = null + response.extra = { error: "Storage address is required" } + break + } + + if (!field || typeof field !== "string") { + response.result = 400 + response.response = null + response.extra = { error: "Field name is required and must be a string" } + break + } + + const storageAddress = normalizeStorageAddress(rawStorageAddress) + + try { + const db = await Datasource.getInstance() + const repository = db.getDataSource().getRepository(GCRStorageProgram) + + const program = await GCRStorageProgramRoutines.getStorageProgram( + storageAddress, + repository, + ) + + if (!program) { + response.result = 404 + response.response = null + response.extra = { error: `Storage program not found: ${storageAddress}` } + break + } + + const hasReadAccess = GCRStorageProgramRoutines.checkReadPermission( + program, + requesterAddress, + ) + + if (!hasReadAccess) { + response.result = 403 + response.response = null + response.extra = { error: "Permission denied: You do not have read access to this storage program" } + break + } + + if (program.encoding === "binary") { + response.result = 400 + response.response = null + response.extra = { error: "Granular field access is not supported for binary-encoded storage programs. Use getStorageProgram for full data access." 
} + break + } + + // Coerce to a strict boolean: the short-circuit chain would otherwise yield null when program.data is null + const hasField = Boolean(program.data && + typeof program.data === "object" && + field in program.data) + + response.response = { + storageAddress: program.storageAddress, + field, + exists: hasField, + } + } catch (error) { + log.error(`[manageNodeCall] hasStorageProgramField error: ${error}`) + response.result = 500 + response.response = null + response.extra = { error: error instanceof Error ? error.message : String(error) } + } + break + } + + /** + * getStorageProgramFieldType - Returns the type of a specific field + * @param storageAddress - The storage program address + * @param field - The field name to check + * @param requesterAddress - Optional requester identity for ACL check + * @returns The field type (string, number, boolean, array, object, null, undefined) + */ + case "getStorageProgramFieldType": { + const rawStorageAddress = data.storageAddress + const field = data.field + const requesterAddress = data.requesterAddress + + if (!rawStorageAddress) { + response.result = 400 + response.response = null + response.extra = { error: "Storage address is required" } + break + } + + if (!field || typeof field !== "string") { + response.result = 400 + response.response = null + response.extra = { error: "Field name is required and must be a string" } + break + } + + const storageAddress = normalizeStorageAddress(rawStorageAddress) + + try { + const db = await Datasource.getInstance() + const repository = db.getDataSource().getRepository(GCRStorageProgram) + + const program = await GCRStorageProgramRoutines.getStorageProgram( + storageAddress, + repository, + ) + + if (!program) { + response.result = 404 + response.response = null + response.extra = { error: `Storage program not found: ${storageAddress}` } + break + } + + const hasReadAccess = GCRStorageProgramRoutines.checkReadPermission( + program, + requesterAddress, + ) + + if (!hasReadAccess) { + response.result = 403 + response.response = null + response.extra = { error: "Permission denied: You 
do not have read access to this storage program" } + break + } + + if (program.encoding === "binary") { + response.result = 400 + response.response = null + response.extra = { error: "Granular field access is not supported for binary-encoded storage programs. Use getStorageProgram for full data access." } + break + } + + if (!program.data || typeof program.data !== "object" || !(field in program.data)) { + response.result = 404 + response.response = null + response.extra = { error: `Field not found: ${field}` } + break + } + + const value = (program.data as Record)[field] + let fieldType: string + + if (value === null) { + fieldType = "null" + } else if (Array.isArray(value)) { + fieldType = "array" + } else { + fieldType = typeof value + } + + response.response = { + storageAddress: program.storageAddress, + field, + type: fieldType, + } + } catch (error) { + log.error(`[manageNodeCall] getStorageProgramFieldType error: ${error}`) + response.result = 500 + response.response = null + response.extra = { error: error instanceof Error ? 
error.message : String(error) } + } + break + } + + /** + * getStorageProgramAll - Alias for getStorageProgram, returns all data + * This provides consistency with the granular method naming convention + * @param storageAddress - The storage program address + * @param requesterAddress - Optional requester identity for ACL check + * @returns Full storage program data (same as getStorageProgram) + */ + case "getStorageProgramAll": { + const rawStorageAddress = data.storageAddress + const requesterAddress = data.requesterAddress + + if (!rawStorageAddress) { + response.result = 400 + response.response = null + response.extra = { error: "Storage address is required" } + break + } + + const storageAddress = normalizeStorageAddress(rawStorageAddress) + + try { + const db = await Datasource.getInstance() + const repository = db.getDataSource().getRepository(GCRStorageProgram) + + const program = await GCRStorageProgramRoutines.getStorageProgram( + storageAddress, + repository, + ) + + if (!program) { + response.result = 404 + response.response = null + response.extra = { error: `Storage program not found: ${storageAddress}` } + break + } + + const hasReadAccess = GCRStorageProgramRoutines.checkReadPermission( + program, + requesterAddress, + ) + + if (!hasReadAccess) { + response.result = 403 + response.response = null + response.extra = { error: "Permission denied: You do not have read access to this storage program" } + break + } + + response.response = { + storageAddress: program.storageAddress, + owner: program.owner, + programName: program.programName, + encoding: program.encoding, + data: program.data, + metadata: program.metadata, + storageLocation: program.storageLocation, + sizeBytes: program.sizeBytes, + createdAt: program.createdAt.toISOString(), + updatedAt: program.updatedAt.toISOString(), + createdByTx: program.createdByTx, + lastModifiedByTx: program.lastModifiedByTx, + } + } catch (error) { + log.error(`[manageNodeCall] getStorageProgramAll error: 
${error}`) + response.result = 500 + response.response = null + response.extra = { error: error instanceof Error ? error.message : String(error) } + } + break + } + // NOTE Don't look past here, go away // INFO For real, nothing here to be seen // REVIEW DTR: Handle relayed transactions from non-validator nodes diff --git a/src/model/entities/GCRv2/GCR_StorageProgram.ts b/src/model/entities/GCRv2/GCR_StorageProgram.ts index e9ab09dda..87891d16e 100644 --- a/src/model/entities/GCRv2/GCR_StorageProgram.ts +++ b/src/model/entities/GCRv2/GCR_StorageProgram.ts @@ -145,6 +145,14 @@ export class GCRStorageProgram { /** * Transaction hash that deleted this program (if deleted) */ + + /** + * Array of all transaction hashes that interacted with this storage program + * Provides complete history of modifications + */ + @Column({ type: "simple-array", name: "interactionTxs", default: "" }) + interactionTxs: string[] + @Column({ type: "text", name: "deletedByTx", nullable: true }) deletedByTx: string | null From 107f8c8b041e9a3b2004edbc46c9e8df7fd2136a Mon Sep 17 00:00:00 2001 From: tcsenpai Date: Mon, 19 Jan 2026 16:40:53 +0100 Subject: [PATCH 14/29] docs(specs): add granular storage API specifications Add detailed specifications for Storage Program granular operations: - 03-operations.mdx: Add GRANULAR_WRITE operation with 5 operation types (SET_FIELD, SET_ITEM, APPEND_ITEM, DELETE_FIELD, DELETE_ITEM), GranularWriteOperation interface, and comparison table vs WRITE_STORAGE - 05-rpc-endpoints.mdx: Add all granular read endpoints (/fields, /field/:field, /field/:field/item/:index, /has/:field, /type/:field, /all) with request/ response examples and error codes Co-Authored-By: Claude Opus 4.5 --- specs/storageprogram/03-operations.mdx | 132 ++++++++++++ specs/storageprogram/05-rpc-endpoints.mdx | 248 ++++++++++++++++++++++ 2 files changed, 380 insertions(+) diff --git a/specs/storageprogram/03-operations.mdx b/specs/storageprogram/03-operations.mdx index 602681fce..1816cf4fb 
100644 --- a/specs/storageprogram/03-operations.mdx +++ b/specs/storageprogram/03-operations.mdx @@ -13,6 +13,7 @@ This document provides detailed reference for all StorageProgram operations. |-----------|-------------|---------------------| | `CREATE_STORAGE_PROGRAM` | Create a new storage program | None (anyone can create) | | `WRITE_STORAGE` | Update data in existing program | Owner or group write permission | +| `GRANULAR_WRITE` | Field-level data modifications | Owner or group write permission | | `UPDATE_ACCESS_CONTROL` | Modify ACL settings | Owner only | | `DELETE_STORAGE_PROGRAM` | Soft delete the program | Owner or group delete permission | @@ -221,6 +222,137 @@ const payload = storage.buildStorageProgramPayload({ | "Storage program already deleted" | Already soft deleted | | "No permission to delete" | Sender lacks delete permission | +## Granular Write Operations + +In addition to full data replacement via `WRITE_STORAGE`, the StorageProgram supports granular field-level operations for more efficient updates. + +### GRANULAR_WRITE + +Performs field-level modifications to storage program data without replacing the entire dataset. 
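The atomic, in-order semantics described here can be sketched in a few lines. This is an illustrative helper only — `applyGranularOps` is not part of the SDK — assuming JSON-encoded data held as a plain object; it mirrors the five operation types and the error strings from this operation's error table:

```typescript
// Illustrative only: applies GRANULAR_WRITE operations to a copy of the data,
// so a failing operation leaves the original object untouched (atomicity).
type GranularWriteOperation = {
    type: "SET_FIELD" | "SET_ITEM" | "APPEND_ITEM" | "DELETE_FIELD" | "DELETE_ITEM"
    field: string
    value?: unknown
    index?: number
}

function applyGranularOps(
    data: Record<string, unknown>,
    operations: GranularWriteOperation[],
): Record<string, unknown> {
    if (!operations?.length) throw new Error("Operations array is required")
    const next = structuredClone(data) // commit only if every operation succeeds
    for (const op of operations) {
        switch (op.type) {
            case "SET_FIELD":
                next[op.field] = op.value
                break
            case "DELETE_FIELD":
                if (!(op.field in next)) throw new Error("Field does not exist")
                delete next[op.field]
                break
            case "APPEND_ITEM":
            case "SET_ITEM":
            case "DELETE_ITEM": {
                const arr = next[op.field]
                if (!Array.isArray(arr)) throw new Error("Field is not an array")
                if (op.type === "APPEND_ITEM") {
                    arr.push(op.value)
                    break
                }
                if (op.index == null || op.index < 0 || op.index >= arr.length) {
                    throw new Error("Index out of bounds")
                }
                if (op.type === "SET_ITEM") arr[op.index] = op.value
                else arr.splice(op.index, 1)
                break
            }
            default:
                throw new Error("Invalid operation type")
        }
    }
    return next
}
```

Working on a deep copy is one way to get the all-or-nothing behavior: the caller only persists the returned object, so a thrown error mid-batch discards every earlier operation in that batch.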
+ +### Parameters + +| Field | Type | Required | Description | +|-------|------|----------|-------------| +| `operation` | string | Yes | Must be "GRANULAR_WRITE" | +| `storageAddress` | string | Yes | Target storage address | +| `operations` | array | Yes | Array of granular operations | + +### Operation Types + +| Operation Type | Description | Required Fields | +|----------------|-------------|-----------------| +| `SET_FIELD` | Set a top-level field value | `field`, `value` | +| `SET_ITEM` | Set an item in an array by index | `field`, `index`, `value` | +| `APPEND_ITEM` | Append an item to an array field | `field`, `value` | +| `DELETE_FIELD` | Delete a top-level field | `field` | +| `DELETE_ITEM` | Delete an item from an array by index | `field`, `index` | + +### Operation Structure + +```typescript +interface GranularWriteOperation { + type: "SET_FIELD" | "SET_ITEM" | "APPEND_ITEM" | "DELETE_FIELD" | "DELETE_ITEM" + field: string // Target field name + value?: any // Value for SET/APPEND operations + index?: number // Array index for SET_ITEM/DELETE_ITEM +} +``` + +### Examples + +#### SET_FIELD - Update a single field + +```typescript +import { storage } from "@kynesyslabs/demosdk" + +const payload = storage.buildStorageProgramPayload({ + operation: "GRANULAR_WRITE", + storageAddress: "stor-abc123...", + operations: [ + { type: "SET_FIELD", field: "theme", value: "dark" }, + { type: "SET_FIELD", field: "lastLogin", value: Date.now() } + ] +}) +``` + +#### SET_ITEM - Update an array element + +```typescript +const payload = storage.buildStorageProgramPayload({ + operation: "GRANULAR_WRITE", + storageAddress: "stor-abc123...", + operations: [ + { type: "SET_ITEM", field: "posts", index: 0, value: { title: "Updated Post", content: "..." 
} } + ] +}) +``` + +#### APPEND_ITEM - Add to an array + +```typescript +const payload = storage.buildStorageProgramPayload({ + operation: "GRANULAR_WRITE", + storageAddress: "stor-abc123...", + operations: [ + { type: "APPEND_ITEM", field: "posts", value: { title: "New Post", content: "..." } } + ] +}) +``` + +#### DELETE_FIELD - Remove a field + +```typescript +const payload = storage.buildStorageProgramPayload({ + operation: "GRANULAR_WRITE", + storageAddress: "stor-abc123...", + operations: [ + { type: "DELETE_FIELD", field: "temporaryData" } + ] +}) +``` + +#### DELETE_ITEM - Remove from an array + +```typescript +const payload = storage.buildStorageProgramPayload({ + operation: "GRANULAR_WRITE", + storageAddress: "stor-abc123...", + operations: [ + { type: "DELETE_ITEM", field: "posts", index: 2 } + ] +}) +``` + +### Behavior + +- Operations are applied **atomically** in order +- Field names must be top-level keys (nested paths not supported) +- Array operations validate index bounds +- Fees are calculated on the resulting data size difference +- Multiple operations can be batched in a single transaction + +### Errors + +| Error | Cause | +|-------|-------| +| "Storage program not found" | Invalid address | +| "Field does not exist" | DELETE_FIELD/SET_ITEM on non-existent field | +| "Field is not an array" | Array operation on non-array field | +| "Index out of bounds" | SET_ITEM/DELETE_ITEM with invalid index | +| "Invalid operation type" | Unrecognized operation type | +| "Operations array is required" | Missing or empty operations array | + +### Comparison: WRITE_STORAGE vs GRANULAR_WRITE + +| Aspect | WRITE_STORAGE | GRANULAR_WRITE | +|--------|---------------|----------------| +| Data replacement | Full replacement | Field-level updates | +| Bandwidth | Entire data sent | Only changes sent | +| Use case | Complete updates | Incremental changes | +| Atomicity | Single operation | Multiple operations batched | +| Fee calculation | Based on total size | 
Based on size difference | + ## Fee Calculation All operations that create or update data incur storage fees: diff --git a/specs/storageprogram/05-rpc-endpoints.mdx b/specs/storageprogram/05-rpc-endpoints.mdx index 617e502ef..b70a043a4 100644 --- a/specs/storageprogram/05-rpc-endpoints.mdx +++ b/specs/storageprogram/05-rpc-endpoints.mdx @@ -13,6 +13,13 @@ This document provides the HTTP API reference for reading StorageProgram data. |----------|--------|-------------| | `/storage-program/:address` | GET | Read a storage program by address | | `/storage-program/owner/:owner` | GET | List storage programs by owner | +| `/storage-program/:address/fields` | GET | List all field names | +| `/storage-program/:address/field/:field` | GET | Get a field's value | +| `/storage-program/:address/field/:field/item/:index` | GET | Get an array item | +| `/storage-program/:address/has/:field` | GET | Check if field exists | +| `/storage-program/:address/type/:field` | GET | Get field value type | +| `/storage-program/:address/all` | GET | Get all data | +| `/storage-program/search/:name` | GET | Search by program name | ## Read Storage Program @@ -247,6 +254,247 @@ curl https://rpc.demos.network/storage-program/owner/ed25519:abc123... \ -H "signature: your-signature-hex" ``` +## Granular Read Endpoints + +These endpoints provide field-level access to storage program data for more efficient querying. + +### Get All Fields + +Retrieves all top-level field names from a storage program's data. + +```http +GET /storage-program/:address/fields +``` + +**Response (200)** + +```json +{ + "success": true, + "storageAddress": "stor-abc123...", + "fields": ["theme", "notifications", "settings", "posts"] +} +``` + +### Get Field Value + +Retrieves the value of a specific field. 
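On the client side these granular reads are plain unauthenticated GETs. A minimal sketch of a field-read helper — `getStorageField` is a hypothetical name, and the `fetchFn` parameter is injectable purely so the helper can be exercised offline; the response shape (`success`/`value`/`error`/`errorCode`) follows the examples in this spec:

```typescript
// Hypothetical client helper for GET /storage-program/:address/field/:field.
type FetchLike = (url: string) => Promise<{ json(): Promise<any> }>

async function getStorageField(
    host: string,
    address: string,
    field: string,
    fetchFn: FetchLike,
): Promise<unknown> {
    // Field names are URL path segments, so encode them
    const url = `${host}/storage-program/${address}/field/${encodeURIComponent(field)}`
    const body = await (await fetchFn(url)).json()
    if (!body.success) throw new Error(`${body.errorCode ?? "ERROR"}: ${body.error}`)
    return body.value
}
```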
+ +```http +GET /storage-program/:address/field/:field +``` + +**Response (200)** + +```json +{ + "success": true, + "storageAddress": "stor-abc123...", + "field": "theme", + "value": "dark", + "type": "string" +} +``` + +**Field Not Found (404)** + +```json +{ + "success": false, + "error": "Field not found: unknownField", + "errorCode": "FIELD_NOT_FOUND" +} +``` + +### Get Array Item + +Retrieves a specific item from an array field by index. + +```http +GET /storage-program/:address/field/:field/item/:index +``` + +**Response (200)** + +```json +{ + "success": true, + "storageAddress": "stor-abc123...", + "field": "posts", + "index": 0, + "value": { + "title": "First Post", + "content": "Hello World" + } +} +``` + +**Index Out of Bounds (400)** + +```json +{ + "success": false, + "error": "Index out of bounds: 10 (array length: 3)", + "errorCode": "INDEX_OUT_OF_BOUNDS" +} +``` + +**Field Not Array (400)** + +```json +{ + "success": false, + "error": "Field is not an array: theme", + "errorCode": "INVALID_FIELD_TYPE" +} +``` + +### Check Field Exists + +Checks if a field exists in the storage program. + +```http +GET /storage-program/:address/has/:field +``` + +**Response (200)** + +```json +{ + "success": true, + "storageAddress": "stor-abc123...", + "field": "theme", + "exists": true +} +``` + +### Get Field Type + +Returns the type of a specific field's value. 
+ +```http +GET /storage-program/:address/type/:field +``` + +**Response (200)** + +```json +{ + "success": true, + "storageAddress": "stor-abc123...", + "field": "posts", + "type": "array" +} +``` + +**Field Types** + +| Type | Description | +|------|-------------| +| `string` | Text value | +| `number` | Numeric value (integer or float) | +| `boolean` | True or false | +| `array` | Ordered list of values | +| `object` | Key-value mapping | +| `null` | Null value | +| `undefined` | Field exists but value is undefined | + +### Get All Data + +Retrieves all data from a storage program (equivalent to the base read endpoint). + +```http +GET /storage-program/:address/all +``` + +**Response (200)** + +```json +{ + "success": true, + "storageAddress": "stor-abc123...", + "data": { + "theme": "dark", + "notifications": true, + "settings": { "language": "en" }, + "posts": [{ "title": "Post 1" }] + } +} +``` + +### Search by Name + +Search for storage programs by name with partial matching. + +```http +GET /storage-program/search/:name +``` + +**Response (200)** + +```json +{ + "success": true, + "query": "user", + "programs": [ + { + "storageAddress": "stor-abc123...", + "programName": "user-preferences", + "owner": "ed25519:...", + "sizeBytes": 1024 + }, + { + "storageAddress": "stor-def456...", + "programName": "user-settings", + "owner": "ed25519:...", + "sizeBytes": 512 + } + ], + "count": 2 +} +``` + +### Granular Endpoints Summary + +| Endpoint | Method | Description | +|----------|--------|-------------| +| `/storage-program/:address/fields` | GET | List all field names | +| `/storage-program/:address/field/:field` | GET | Get a field's value | +| `/storage-program/:address/field/:field/item/:index` | GET | Get an array item | +| `/storage-program/:address/has/:field` | GET | Check if field exists | +| `/storage-program/:address/type/:field` | GET | Get field value type | +| `/storage-program/:address/all` | GET | Get all data | +| `/storage-program/search/:name` | 
GET | Search by program name | + +### Example: JavaScript with Granular Endpoints + +```javascript +// Get all field names +const fields = await fetch('/storage-program/stor-abc.../fields') + .then(r => r.json()) +console.log(fields.fields) // ["theme", "posts", "settings"] + +// Get specific field value +const theme = await fetch('/storage-program/stor-abc.../field/theme') + .then(r => r.json()) +console.log(theme.value) // "dark" + +// Get array item +const post = await fetch('/storage-program/stor-abc.../field/posts/item/0') + .then(r => r.json()) +console.log(post.value) // { title: "First Post", ... } + +// Check if field exists +const hasField = await fetch('/storage-program/stor-abc.../has/theme') + .then(r => r.json()) +console.log(hasField.exists) // true + +// Get field type +const fieldType = await fetch('/storage-program/stor-abc.../type/posts') + .then(r => r.json()) +console.log(fieldType.type) // "array" +``` + ## Authentication ### Identity Header Format From 8c7b774c3f4c7562e5a2f31d14acff7e240a4483 Mon Sep 17 00:00:00 2001 From: tcsenpai Date: Mon, 19 Jan 2026 16:52:11 +0100 Subject: [PATCH 15/29] docs: add Serena session memories and key derivation spec - Add Serena session memories for storage program API work - Add key derivation specification document --- .../session_2026-01-18_storage_program_api.md | 46 +++ ...ion_2026-01-19_storage_poc_granular_api.md | 58 +++ ...sion_storage_program_queries_2026_01_18.md | 72 ++++ specs/key_derivation.md | 381 ++++++++++++++++++ 4 files changed, 557 insertions(+) create mode 100644 .serena/memories/session_2026-01-18_storage_program_api.md create mode 100644 .serena/memories/session_2026-01-19_storage_poc_granular_api.md create mode 100644 .serena/memories/session_storage_program_queries_2026_01_18.md create mode 100644 specs/key_derivation.md diff --git a/.serena/memories/session_2026-01-18_storage_program_api.md b/.serena/memories/session_2026-01-18_storage_program_api.md new file mode 100644 index 
000000000..5afce3c34 --- /dev/null +++ b/.serena/memories/session_2026-01-18_storage_program_api.md @@ -0,0 +1,46 @@ +# Session: Storage Program Standard Calls API +**Date**: 2026-01-18 +**Branch**: storage_v2 + +## Summary +Implemented granular storage program API - node-side read/write methods and SDK wrappers. + +## Completed Tasks + +### Core Implementation (✅ Done) +- **node-tytc / DEM-551**: Node read methods in `manageNodeCall.ts` + - getStorageProgramFields, getStorageProgramValue, getStorageProgramItem + - hasStorageProgramField, getStorageProgramFieldType, getStorageProgramAll + +- **node-d3bv / DEM-552**: Node write methods in `GCRStorageProgramRoutines.ts` + - SET_FIELD, SET_ITEM, APPEND_ITEM, DELETE_FIELD, DELETE_ITEM + - Fee calculation based on size delta + +- **node-ekwj / DEM-553**: SDK wrapper methods in `../sdks/src/storage/StorageProgram.ts` + - 6 read methods: getFields, getValue, getItem, hasField, getFieldType, getAll + - 5 write payload builders: setField, setItem, appendItem, deleteField, deleteItem + - **SDK v2.9.0 published** + +## Remaining Tasks (Epic: node-9idc) + +### NEXT SESSION START HERE: +- **node-dsbw**: Update `../storage-poc` to demonstrate new standard calls API + +### Also Remaining: +- **node-22zq**: Testing & edge cases for standard calls +- **node-h5tu**: Update `../documentation-mintlify` public docs +- **node-i8b7**: Update `specs/storageprogram/*.mdx` internal specs + +## Key Files Modified +- `/home/tcsenpai/kynesys/node/src/libs/network/manageNodeCall.ts` - Read endpoints +- `/home/tcsenpai/kynesys/node/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts` - Write handlers +- `/home/tcsenpai/kynesys/sdks/src/storage/StorageProgram.ts` - SDK methods + +## Linear Issues +- DEM-551: Done +- DEM-552: Done +- DEM-553: Done + +## Notes +- Beads issues can't be closed while epic (node-9idc) is open - marked with COMPLETED notes instead +- SDK granular methods use nodeCall pattern for reads, payload builders for 
writes diff --git a/.serena/memories/session_2026-01-19_storage_poc_granular_api.md b/.serena/memories/session_2026-01-19_storage_poc_granular_api.md new file mode 100644 index 000000000..2ba3dd528 --- /dev/null +++ b/.serena/memories/session_2026-01-19_storage_poc_granular_api.md @@ -0,0 +1,58 @@ +# Session: Storage POC Granular API Update + +## Date +2026-01-19 + +## Summary +Updated the storage-poc application to demonstrate the new granular storage program API with a new "Granular API" tab. + +## Completed Work + +### Task: node-dsbw (CLOSED) +- Added new "Granular API" tab to `/home/tcsenpai/kynesys/storage-poc/src/App.tsx` +- Updated SDK from v2.8.24 to v2.9.0 + +### Read Operations Implemented +1. `getFields(rpcUrl, address, identity?)` - List all top-level field names +2. `getValue(rpcUrl, address, field, identity?)` - Get specific field value +3. `getItem(rpcUrl, address, field, index, identity?)` - Get array element +4. `hasField(rpcUrl, address, field, identity?)` - Check field existence +5. `getFieldType(rpcUrl, address, field, identity?)` - Get field type + +### Write Operations Implemented +1. `setField(address, field, value)` - Set/create field +2. `setItem(address, field, index, value)` - Set array element +3. `appendItem(address, field, value)` - Push to array +4. `deleteField(address, field)` - Remove field +5. 
`deleteItem(address, field, index)` - Remove array element + +### Fee Display +- Fee extracted from `confirmResult.response?.data?.transaction?.content?.transaction_fee` +- Total fee = `network_fee + rpc_fee + additional_fee` +- Display format: `Fee: ${(totalFee / 1e18).toFixed(6)} DEM` + +## Technical Discoveries + +### SDK Type Structure +- `TxFee` interface: `{ network_fee: number, rpc_fee: number, additional_fee: number }` +- Fee is NOT on ValidityData.data.fee (doesn't exist) +- Fee is on `ValidityData.data.transaction.content.transaction_fee` + +### UI Architecture +- Two-column layout: READ operations (left), WRITE operations (right) +- Optional identity field for ACL-protected storage programs +- Proper validation per operation type (field required for getValue, index for getItem, etc.) + +## Git State +- Branch: `storage_v2` +- Commit: `233984b7 feat(storage): implement granular storage program API` +- Pushed: ✅ to origin/storage_v2 + +## Remaining Epic Tasks (node-9idc) +- `node-22zq` - Testing & edge cases for standard calls +- `node-h5tu` - SDK integration (if still needed) +- `node-i8b7` - Documentation + +## Related +- session_2026-01-18_storage_program_api (previous session) +- feature_storage_programs_plan (planning doc) diff --git a/.serena/memories/session_storage_program_queries_2026_01_18.md b/.serena/memories/session_storage_program_queries_2026_01_18.md new file mode 100644 index 000000000..a4c4bd719 --- /dev/null +++ b/.serena/memories/session_storage_program_queries_2026_01_18.md @@ -0,0 +1,72 @@ +# Session: Storage Program Query Methods - 2026-01-18 + +## Summary +Fixed SDK storage program query methods to work without authentication and resolved address format normalization issue. + +## Work Completed + +### 1. Unauthenticated Storage Program Queries +**Problem**: SDK storage program queries (`getByAddress`, `getByOwner`, `searchByName`) were returning null/empty because `gcr_routine` requires authentication headers. 
+ +**Solution**: +- Added storage program query methods to `manageNodeCall.ts` (unauthenticated endpoint) +- Updated SDK `StorageProgram.ts` to use `nodeCall` instead of `gcr_routine` + +**Files Modified**: +- `src/libs/network/manageNodeCall.ts` - Added 3 new cases: `getStorageProgram`, `getStorageProgramsByOwner`, `searchStoragePrograms` +- `../sdks/src/storage/StorageProgram.ts` - Changed from `gcr_routine` to `nodeCall` + +### 2. createdByTx Field Population +**Problem**: `createdByTx` field in `GCRStorageProgram` entity was not being populated during transaction processing. + +**Root Cause**: In `endpointHandlers.ts:109`, `gcredit.txhash = ""` is set during validation for hash comparison, but never restored. + +**Solution**: Added `edit.txhash = tx.hash` in `handleGCR.ts` `applyToTx()` method before applying edits. + +**File Modified**: +- `src/libs/blockchain/gcr/handleGCR.ts` - Added txhash assignment in applyToTx loop + +### 3. Storage Address Normalization +**Problem**: `getStorageProgram` endpoint returned null because: +- DB stores addresses as `stor-{hash}` (with prefix) +- Client was sending `{hash}` (without prefix) + +**Solution**: Added `normalizeStorageAddress()` helper function in `manageNodeCall.ts` that: +- Strips `0x` prefix if present (legacy addresses) +- Adds `stor-` prefix if missing + +**File Modified**: +- `src/libs/network/manageNodeCall.ts` - Added normalizeStorageAddress() function + +## Database Observations +- Table name: `gcr_storageprogram` (no underscore) +- Entity name: `GCRStorageProgram` +- Storage addresses in DB: `stor-{40char_hash}` format +- Some legacy addresses have `0xstor-` prefix + +## Technical Details + +### nodeCall vs gcr_routine +- `nodeCall`: Public endpoint, no authentication required +- `gcr_routine`: Requires `signature` and `identity` headers + +### Storage Address Formats Observed +``` +0xstor-53ad58410dfcd0b93c18f0928d84ad43c1bbf5f5 (legacy with 0x) +stor-7e40fde1086c8ed4cf0486ed12c010d30abd715f (current 
format) +7e40fde1086c8ed4cf0486ed12c010d30abd715f (raw hash, needs normalization) +``` + +## Testing Notes +- Node needs restart after changes for them to take effect +- Storage POC at `../storage-poc/` can be used for testing +- PostgreSQL container: `postgres_5332` on port 5332 (user: demosuser, db: demos) + +## Related Files +- `src/libs/network/manageGCRRoutines.ts` - Contains authenticated storage methods (kept for backward compatibility) +- `src/model/entities/GCRv2/GCR_StorageProgram.ts` - Entity definition +- `src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts` - Shared query routines + +## Next Steps +- Test the endpoints after node restart +- Consider adding similar normalization to other storage-related endpoints if needed diff --git a/specs/key_derivation.md b/specs/key_derivation.md new file mode 100644 index 000000000..2fa947ae3 --- /dev/null +++ b/specs/key_derivation.md @@ -0,0 +1,381 @@ +# Demos Network Key Derivation Specification + +**Version**: 1.0.0 +**Date**: 2026-01-17 +**Status**: Final (Production) + +## Abstract + +This document specifies the key derivation process used by the Demos Network to convert a BIP39 mnemonic into an Ed25519 keypair. This specification is essential for: +- Hardware wallet implementations +- Third-party wallet integrations +- Cross-platform compatibility testing +- Security audits + +## Overview + +The Demos Network uses a multi-step key derivation process that transforms a 12-word BIP39 mnemonic into an Ed25519 keypair. The process involves SHA3-512 hashing, HKDF key derivation, and a final SHA256 transformation. 
+ +### High-Level Flow + +``` +┌─────────────────┐ +│ Mnemonic │ 12 BIP39 words +│ (12 words) │ +└────────┬────────┘ + │ + ▼ +┌─────────────────┐ +│ SHA3-512 │ Hash the mnemonic string +│ │ +└────────┬────────┘ + │ + ▼ +┌─────────────────┐ +│ Hex Encode │ 64 bytes → 128-char hex string (no 0x prefix) +│ │ +└────────┬────────┘ + │ + ▼ +┌─────────────────┐ +│ ASCII Encode │ 128-char string → 128-byte array (ASCII codes) +│ (TextEncoder) │ +└────────┬────────┘ + │ + ▼ +┌─────────────────┐ +│ HKDF │ Derive 32-byte key using: +│ (SHA-256) │ - Salt: "master seed" +│ │ - Info: "ed25519" +└────────┬────────┘ + │ + ▼ +┌─────────────────┐ +│ Decimal String │ Convert bytes to comma-separated decimals +│ Conversion │ +└────────┬────────┘ + │ + ▼ +┌─────────────────┐ +│ SHA-256 │ Hash the decimal string +│ │ +└────────┬────────┘ + │ + ▼ +┌─────────────────┐ +│ Ed25519 │ Generate keypair from 32-byte seed +│ Keypair Gen │ +└─────────────────┘ +``` + +## Detailed Specification + +### Step 1: Mnemonic Normalization + +**Input**: 12-word BIP39 mnemonic phrase +**Output**: Trimmed string + +**Process**: +1. Accept mnemonic as string input +2. Trim leading and trailing whitespace +3. 
Validate against BIP39 English wordlist (2048 words) + +**Example**: +``` +Input: " abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon about " +Output: "abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon about" +``` + +### Step 2: SHA3-512 Hash + +**Input**: Normalized mnemonic string (UTF-8 encoded) +**Output**: 64-byte hash (Uint8Array) + +**Algorithm**: SHA3-512 (FIPS 202) + +**Important**: This is SHA3-512, NOT: +- Keccak-512 (pre-standardization SHA3) +- SHA-512 (SHA-2 family) + +**Example**: +``` +Input: "abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon about" +Output: 77c9f69156488defa99b21d8704b62b060804c1125fa088bf9495f358fe2242514f31bc4d960e74758b88eb2fd7eff4a84fc7c14df9819b55ec7663d654b48af (hex) +``` + +### Step 3: Hex Encoding + +**Input**: 64-byte SHA3-512 hash +**Output**: 128-character lowercase hexadecimal string (NO `0x` prefix) + +**Process**: +1. Convert each byte to 2-character hex representation +2. Use lowercase letters (a-f) +3. Ensure NO `0x` prefix is included + +**Example**: +``` +Input: [0x77, 0xc9, 0xf6, 0x91, ...] +Output: "77c9f69156488defa99b21d8704b62b060804c1125fa088bf9495f358fe2242514f31bc4d960e74758b88eb2fd7eff4a84fc7c14df9819b55ec7663d654b48af" +``` + +### Step 4: ASCII Encoding (Master Seed Creation) + +**Input**: 128-character hex string +**Output**: 128-byte Uint8Array (the "master seed") + +**Process**: Convert each character of the hex string to its ASCII code. + +**Critical Detail**: This is NOT parsing the hex string as binary data. Each hex character becomes one byte representing its ASCII value. + +**Example**: +``` +Input: "77c9f6..." (128 chars) +Output: [55, 55, 99, 57, 102, 54, ...] 
(128 bytes) + ↑ ↑ ↑ ↑ ↑ ↑ + '7' '7' 'c' '9' 'f' '6' (ASCII codes) +``` + +**JavaScript equivalent**: +```javascript +const masterSeed = new TextEncoder().encode(hexString); +``` + +### Step 5: HKDF Derivation + +**Input**: 128-byte master seed +**Output**: 32-byte derived seed + +**Algorithm**: HKDF (RFC 5869) with SHA-256 + +**Parameters**: +| Parameter | Value | +|-----------|-------| +| Hash Function | SHA-256 | +| IKM (Input Key Material) | master seed (128 bytes) | +| Salt | `"master seed"` (UTF-8 encoded, 11 bytes) | +| Info | `"ed25519"` (UTF-8 encoded, 7 bytes) | +| Output Length | 32 bytes | + +**Example**: +``` +Input: [55, 55, 99, 57, 102, 54, ...] (128 bytes) +Output: 0715507ffd6a856581ab612104aed8736ccaa8c4a287321bcef1e99fda35003d (hex) +``` + +### Step 6: Decimal String Conversion + +**Input**: 32-byte derived seed +**Output**: Comma-separated decimal string + +**Process**: Convert each byte to its decimal value, join with commas. + +**Example**: +``` +Input: [7, 21, 80, 127, 253, 106, 133, 101, ...] (32 bytes) +Output: "7,21,80,127,253,106,133,101,129,171,97,33,4,174,216,115,108,202,168,196,162,135,50,27,206,241,233,159,218,53,0,61" +``` + +**JavaScript equivalent**: +```javascript +const decimalString = derivedSeed.toString(); +// Uint8Array.prototype.toString() produces comma-separated decimals +``` + +### Step 7: Final SHA-256 Hash + +**Input**: Decimal string from Step 6 +**Output**: 32-byte Ed25519 seed + +**Algorithm**: SHA-256 + +**Example**: +``` +Input: "7,21,80,127,253,106,133,101,..." 
+Output: 9c059a934eed1a4244dc564888d780e60a3b55bc20b67603ddf8633d9ac72959 (hex) +``` + +### Step 8: Ed25519 Keypair Generation + +**Input**: 32-byte Ed25519 seed +**Output**: Ed25519 keypair (32-byte public key, 64-byte private key) + +**Algorithm**: Ed25519 (RFC 8032) + +**Example**: +``` +Input: 9c059a934eed1a4244dc564888d780e60a3b55bc20b67603ddf8633d9ac72959 (hex) +Output: + Public Key: 263af3be8487729727d99b35dcfdc61bf920a9164249ad117b292e6d3c7194f8 + Private Key: 9c059a934eed1a4244dc564888d780e6... (64 bytes) +``` + +## Address Derivation + +The Demos Network address is simply the public key with a `0x` prefix. + +``` +Address = "0x" + hex(publicKey) +``` + +**Example**: +``` +Public Key: 263af3be8487729727d99b35dcfdc61bf920a9164249ad117b292e6d3c7194f8 +Address: 0x263af3be8487729727d99b35dcfdc61bf920a9164249ad117b292e6d3c7194f8 +``` + +**Note**: The address is NOT derived from hashing the public key (unlike some other blockchain networks). + +## Complete Test Vector + +### Input +``` +Mnemonic: "abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon about" +Message: "Hello, Demos Hardware Wallet!" 
+``` + +### Intermediate Values +``` +Step 2 - SHA3-512 Hash: + 77c9f69156488defa99b21d8704b62b060804c1125fa088bf9495f358fe2242514f31bc4d960e74758b88eb2fd7eff4a84fc7c14df9819b55ec7663d654b48af + +Step 4 - Master Seed: + Length: 128 bytes + First 10 bytes: [55, 55, 99, 57, 102, 54, 57, 49, 53, 54] + +Step 5 - HKDF Derived Seed: + 0715507ffd6a856581ab612104aed8736ccaa8c4a287321bcef1e99fda35003d + +Step 6 - Decimal String: + "7,21,80,127,253,106,133,101,129,171,97,33,4,174,216,115,108,202,168,196,162,135,50,27,206,241,233,159,218,53,0,61" + +Step 7 - Ed25519 Seed: + 9c059a934eed1a4244dc564888d780e60a3b55bc20b67603ddf8633d9ac72959 +``` + +### Expected Outputs +``` +Public Key: + 263af3be8487729727d99b35dcfdc61bf920a9164249ad117b292e6d3c7194f8 + +Address: + 0x263af3be8487729727d99b35dcfdc61bf920a9164249ad117b292e6d3c7194f8 + +Signature (of message): + 8ab34f7d52a08c78ea2b62a5cb6c973169c00ae5302f7a47ae45d8f3f2260244c933528d5c2aa3cbacf41e37b14d3729f06efc5ae1b9a84e368a0cb5b79adf01 +``` + +## Implementation Notes + +### Required Cryptographic Primitives + +1. **SHA3-512**: FIPS 202 compliant implementation +2. **SHA-256**: FIPS 180-4 compliant implementation +3. **HKDF**: RFC 5869 compliant implementation +4. **Ed25519**: RFC 8032 compliant implementation + +### Common Implementation Pitfalls + +1. **Using wrong SHA3 variant**: Must be SHA3-512, not Keccak-512 +2. **Including 0x prefix in hex string**: The hex string must NOT have a 0x prefix +3. **Parsing hex as binary**: Step 4 encodes characters as ASCII, not as binary hex values +4. **Wrong HKDF parameters**: Salt and Info must be exact strings specified +5. 
**Wrong string conversion**: Must use comma-separated decimals, not other formats + +### Language-Specific Implementations + +#### JavaScript/TypeScript +```javascript +import { sha3_512 } from "@noble/hashes/sha3"; +import { hkdf } from "@noble/hashes/hkdf"; +import { sha256 } from "@noble/hashes/sha256"; +import forge from "node-forge"; + +function deriveKeypair(mnemonic) { + // Steps 1-2: Trim + SHA3-512 + const hash = sha3_512(mnemonic.trim()); + + // Step 3: Hex encode (no 0x prefix) + const hexString = Array.from(hash).map(b => b.toString(16).padStart(2, '0')).join(''); + + // Step 4: ASCII encode + const masterSeed = new TextEncoder().encode(hexString); + + // Step 5: HKDF + const derivedSeed = hkdf(sha256, masterSeed, "master seed", "ed25519", 32); + + // Step 6-7: Decimal string + SHA256 + const decimalString = derivedSeed.toString(); + const md = forge.md.sha256.create(); + md.update(decimalString); + const ed25519Seed = md.digest().toHex(); + + // Step 8: Generate keypair + return forge.pki.ed25519.generateKeyPair({ + seed: Buffer.from(ed25519Seed, "hex") + }); +} +``` + +#### C/C++ (Arduino/ESP32) +```cpp +// Pseudocode - see demos-hw-wallet implementation for details +void deriveKeypair(const char* mnemonic, uint8_t* publicKey, uint8_t* privateKey) { + uint8_t sha3Hash[64]; + char hexString[129]; + uint8_t masterSeed[128]; + uint8_t derivedSeed[32]; + char decimalString[256]; + uint8_t ed25519Seed[32]; + + // Step 2: SHA3-512 + sha3_512(mnemonic, strlen(mnemonic), sha3Hash); + + // Step 3: Hex encode + bytesToHex(sha3Hash, 64, hexString); // No 0x prefix! 
+ + // Step 4: ASCII encode + for (int i = 0; i < 128; i++) { + masterSeed[i] = (uint8_t)hexString[i]; + } + + // Step 5: HKDF + hkdf_sha256(derivedSeed, 32, masterSeed, 128, "master seed", "ed25519"); + + // Step 6: Decimal string + bytesToDecimalString(derivedSeed, 32, decimalString); + + // Step 7: SHA256 + sha256(decimalString, strlen(decimalString), ed25519Seed); + + // Step 8: Ed25519 keypair + ed25519_create_keypair(publicKey, privateKey, ed25519Seed); +} +``` + +## Security Considerations + +1. **Mnemonic Protection**: The mnemonic must be kept secret and secure +2. **Memory Clearing**: Clear all intermediate values from memory after use +3. **Timing Attacks**: Use constant-time comparison for cryptographic operations +4. **Side Channels**: Be aware of side-channel attack vectors in embedded implementations + +## Historical Context + +The current derivation process differs from standard BIP32/BIP44 due to a legacy decision that was preserved for backward compatibility with existing testnet wallets. The comment in the SDK source code explains: + +> "NOTE: Reverted this bug to keep generating the same keypair with the same mnemonic for mnemonics added to testnet during the incentives campaign." 
+ +## References + +- [FIPS 202 - SHA-3 Standard](https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.202.pdf) +- [FIPS 180-4 - SHA-2 Standard](https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.180-4.pdf) +- [RFC 5869 - HKDF](https://tools.ietf.org/html/rfc5869) +- [RFC 8032 - Ed25519](https://tools.ietf.org/html/rfc8032) +- [BIP39 - Mnemonic code](https://github.com/bitcoin/bips/blob/master/bip-0039.mediawiki) + +## Changelog + +| Version | Date | Changes | +|---------|------|---------| +| 1.0.0 | 2026-01-17 | Initial specification | From 2a36b5d9785b6a6d43b0eb55de35413502151f2a Mon Sep 17 00:00:00 2001 From: tcsenpai Date: Mon, 19 Jan 2026 16:53:17 +0100 Subject: [PATCH 16/29] ignores --- .gitignore | 2 ++ 1 file changed, 2 insertions(+) diff --git a/.gitignore b/.gitignore index 0c059c16e..c66a91fc1 100644 --- a/.gitignore +++ b/.gitignore @@ -215,3 +215,5 @@ ipfs_53550/data_53550/ipfs src/features/tlsnotary/SDK_INTEGRATION.md src/features/tlsnotary/SDK_INTEGRATION.md ipfs/data_53550/ipfs +nohup.out +\*.db From aa8b5e47b93a0c7e7a9141e59d47cbc33214a2c2 Mon Sep 17 00:00:00 2001 From: tcsenpai Date: Wed, 21 Jan 2026 10:51:19 +0100 Subject: [PATCH 17/29] added trunk config --- .trunk/configs/.hadolint.yaml | 4 +++ .trunk/configs/.markdownlint.yaml | 2 ++ .trunk/configs/.shellcheckrc | 7 +++++ .trunk/configs/.yamllint.yaml | 7 +++++ .trunk/configs/svgo.config.mjs | 14 +++++++++ .trunk/trunk.yaml | 48 +++++++++++++++++++++++++++++++ 6 files changed, 82 insertions(+) create mode 100644 .trunk/configs/.hadolint.yaml create mode 100644 .trunk/configs/.markdownlint.yaml create mode 100644 .trunk/configs/.shellcheckrc create mode 100644 .trunk/configs/.yamllint.yaml create mode 100644 .trunk/configs/svgo.config.mjs create mode 100644 .trunk/trunk.yaml diff --git a/.trunk/configs/.hadolint.yaml b/.trunk/configs/.hadolint.yaml new file mode 100644 index 000000000..ea894f49f --- /dev/null +++ b/.trunk/configs/.hadolint.yaml @@ -0,0 +1,4 @@ +# Following source doesn't work in most 
setups +ignored: + - SC1090 + - SC1091 diff --git a/.trunk/configs/.markdownlint.yaml b/.trunk/configs/.markdownlint.yaml new file mode 100644 index 000000000..b40ee9d7a --- /dev/null +++ b/.trunk/configs/.markdownlint.yaml @@ -0,0 +1,2 @@ +# Prettier friendly markdownlint config (all formatting rules disabled) +extends: markdownlint/style/prettier diff --git a/.trunk/configs/.shellcheckrc b/.trunk/configs/.shellcheckrc new file mode 100644 index 000000000..8c7b1ada8 --- /dev/null +++ b/.trunk/configs/.shellcheckrc @@ -0,0 +1,7 @@ +enable=all +source-path=SCRIPTDIR +disable=SC2154 + +# If you're having issues with shellcheck following source, disable the errors via: +# disable=SC1090 +# disable=SC1091 diff --git a/.trunk/configs/.yamllint.yaml b/.trunk/configs/.yamllint.yaml new file mode 100644 index 000000000..0ce3fd823 --- /dev/null +++ b/.trunk/configs/.yamllint.yaml @@ -0,0 +1,7 @@ +rules: + quoted-strings: + required: only-when-needed + extra-allowed: ["{|}"] + key-duplicates: {} + octal-values: + forbid-implicit-octal: true diff --git a/.trunk/configs/svgo.config.mjs b/.trunk/configs/svgo.config.mjs new file mode 100644 index 000000000..87908149d --- /dev/null +++ b/.trunk/configs/svgo.config.mjs @@ -0,0 +1,14 @@ +export default { + plugins: [ + { + name: "preset-default", + params: { + overrides: { + removeViewBox: false, // https://github.com/svg/svgo/issues/1128 + sortAttrs: true, + removeOffCanvasPaths: true, + }, + }, + }, + ], +} diff --git a/.trunk/trunk.yaml b/.trunk/trunk.yaml new file mode 100644 index 000000000..e3d806736 --- /dev/null +++ b/.trunk/trunk.yaml @@ -0,0 +1,48 @@ +# This file controls the behavior of Trunk: https://docs.trunk.io/cli +# To learn more about the format of this file, see https://docs.trunk.io/reference/trunk-yaml +version: 0.1 +cli: + version: 1.25.0 +# Trunk provides extensibility via plugins. 
(https://docs.trunk.io/plugins) +plugins: + sources: + - id: trunk + ref: v1.7.4 + uri: https://github.com/trunk-io/plugins +# Many linters and tools depend on runtimes - configure them here. (https://docs.trunk.io/runtimes) +runtimes: + enabled: + - go@1.21.0 + - node@22.16.0 + - python@3.10.8 +# This is the section where you manage your linters. (https://docs.trunk.io/check/configuration) +lint: + ignore: + - linters: [ALL] + paths: + # Ignore markdown-ish files + - src/**/*.md + - specs/**/* + - .github/workflows/*.yml + - "*.md" + enabled: + - actionlint@1.7.10 + - checkov@3.2.497 + - dotenv-linter@4.0.0 + - eslint@8.57.0 + - git-diff-check + - hadolint@2.14.0 + - markdownlint@0.47.0 + - oxipng@10.0.0 + - prettier@3.8.0 + - shellcheck@0.11.0 + - shfmt@3.6.0 + - svgo@4.0.0 + - taplo@0.10.0 + - trufflehog@3.92.5 + - yamllint@1.38.0 +tools: + enabled: + - tsc@5.9.3 + - uv@0.9.26 + - ts-node@10.9.2 From 04cb83b1e2383f16acf0309eede46791667ff2f2 Mon Sep 17 00:00:00 2001 From: tcsenpai Date: Wed, 21 Jan 2026 10:52:11 +0100 Subject: [PATCH 18/29] trunk first linting --- .github/copilot-instructions.md | 1 + .github/workflows/claude-merge-fix.yml | 114 +- .github/workflows/claude-merge-notify.yml | 58 +- .github/workflows/fix-beads-conflicts.yml | 116 +- .github/workflows/fix-serena-conflicts.yml | 124 +- .github/workflows/notify-beads-merging.yml | 56 +- .github/workflows/notify-serena-merging.yml | 58 +- .serena/memories/_continue_here.md | 6 + .serena/memories/_index.md | 7 +- .serena/memories/code_style_conventions.md | 16 +- .serena/memories/codebase_structure.md | 10 +- .serena/memories/development_patterns.md | 34 +- .serena/memories/devnet_docker_setup.md | 11 +- .serena/memories/feature_storage_programs_plan.md | 31 +- .../omniprotocol_complete_2025_11_11.md | 88 +- .../omniprotocol_session_2025-12-01.md | 10 + .serena/memories/omniprotocol_wave8.1_complete.md | 61 +- .../omniprotocol_wave8_tcp_physical_layer.md | 310 +- .../memories/project_context_consolidated.md | 21 
+-
 .serena/memories/project_purpose.md | 7 +-
 .../session_2026-01-18_storage_program_api.md | 26 +-
 ...ession_2026-01-19_storage_docs_complete.md | 79 +
 ...ion_2026-01-19_storage_poc_granular_api.md | 11 +
 ...sion_storage_program_queries_2026_01_18.md | 18 +-
 ...on_ud_ownership_verification_2025_10_21.md | 52 +-
 ...ion_ud_points_implementation_2025_01_31.md | 21 +
 .serena/memories/suggested_commands.md | 16 +-
 .../memories/task_completion_guidelines.md | 27 +-
 .serena/memories/tech_stack.md | 11 +-
 .../memories/tlsnotary_integration_context.md | 60 +-
 .../typescript_audit_complete_2025_12_17.md | 46 +-
 .serena/memories/ud_architecture_patterns.md | 58 +-
 .serena/memories/ud_integration_complete.md | 40 +-
 .serena/memories/ud_phase5_complete.md | 72 +-
 .serena/memories/ud_phases_tracking.md | 88 +-
 .serena/memories/ud_security_patterns.md | 46 +-
 .serena/memories/ud_technical_reference.md | 9 +
 .serena/project.yml | 2 +-
 AGENTS.md | 34 +-
 CONSOLE_LOG_AUDIT.md | 131 +-
 CONTRIBUTING.md | 16 +-
 GUIDELINES/CODING.md | 166 +-
 GUIDELINES/PR.md | 8 +-
 GUIDELINES/VIBES.md | 29 +-
 INSTALL.md | 82 +-
 OMNIPROTOCOL_SETUP.md | 29 +-
 OMNIPROTOCOL_TLS_GUIDE.md | 62 +-
 README.md | 68 +-
 REPO_ANALYSIS/Onboarding_Documentation.md | 132 +-
 TG_IDENTITY_PLAN.md | 69 +-
 TO_FIX.md | 70 +-
 devnet/README.md | 40 +-
 devnet/docker-compose.yml | 274 +-
 devnet/run-devnet | 68 +-
 devnet/scripts/attach.sh | 42 +-
 devnet/scripts/generate-identities.sh | 36 +-
 devnet/scripts/generate-peerlist.sh | 44 +-
 devnet/scripts/logs.sh | 56 +-
 devnet/scripts/setup.sh | 12 +-
 devnet/scripts/watch-all.sh | 58 +-
 documentation/bridges/rubic.md | 10 +-
 documentation/protected-endpoints.md | 8 +-
 documentation/referral-system.md | 61 +-
 .../DTR_MINIMAL_IMPLEMENTATION.md | 71 +-
 dtr_implementation/README.md | 55 +-
 .../validator_status_minimal.md | 22 +-
 fixtures/address_info.json | 48 +-
 fixtures/block_header.json | 35 +-
 fixtures/consensus/greenlight_01.json | 36 +-
 fixtures/consensus/greenlight_02.json | 36 +-
 fixtures/consensus/greenlight_03.json | 36 +-
 fixtures/consensus/greenlight_04.json | 36 +-
 fixtures/consensus/greenlight_05.json | 36 +-
 fixtures/consensus/greenlight_06.json | 36 +-
 fixtures/consensus/greenlight_07.json | 36 +-
 fixtures/consensus/greenlight_08.json | 36 +-
 fixtures/consensus/greenlight_09.json | 36 +-
 fixtures/consensus/greenlight_10.json | 36 +-
 fixtures/consensus/proposeBlockHash_01.json | 54 +-
 fixtures/consensus/proposeBlockHash_02.json | 54 +-
 fixtures/consensus/setValidatorPhase_01.json | 48 +-
 fixtures/consensus/setValidatorPhase_02.json | 48 +-
 fixtures/consensus/setValidatorPhase_03.json | 48 +-
 fixtures/consensus/setValidatorPhase_04.json | 48 +-
 fixtures/consensus/setValidatorPhase_05.json | 48 +-
 fixtures/consensus/setValidatorPhase_06.json | 48 +-
 fixtures/consensus/setValidatorPhase_07.json | 48 +-
 fixtures/consensus/setValidatorPhase_08.json | 48 +-
 fixtures/consensus/setValidatorPhase_09.json | 48 +-
 fixtures/consensus/setValidatorPhase_10.json | 48 +-
 fixtures/last_block_number.json | 2 +-
 fixtures/mempool.json | 2 +-
 fixtures/peerlist.json | 46 +-
 fixtures/peerlist_hash.json | 7 +-
 install-deps.sh | 17 +-
 jest.config.ts | 56 +-
 knip.json | 14 +-
 monitoring/README.md | 107 +-
 monitoring/docker-compose.yml | 226 +-
 monitoring/grafana/branding/demos-icon.svg | 4 +-
 .../grafana/branding/demos-logo-morph.svg | 16 +-
 .../grafana/branding/demos-logo-white.svg | 15 +-
 monitoring/grafana/branding/favicon.png | Bin 1571 -> 1407 bytes
 .../provisioning/dashboards/dashboard.yml | 22 +-
 .../dashboards/json/consensus-blockchain.json | 1316 +++---
 .../dashboards/json/demos-overview.json | 2258 ++++-----
 .../dashboards/json/network-peers.json | 1768 +++----
 .../dashboards/json/system-health.json | 2506 +++++-----
 .../provisioning/datasources/prometheus.yml | 30 +-
 monitoring/prometheus/prometheus.yml | 62 +-
 node-doctor | 558 +--
 package.json | 1 +
 reset-node | 330 +-
 run | 1530 +++---
 scripts/ceremony_contribute.sh | 820 ++--
 src/benchmark.ts | 161 +-
 .../signalingServer/signalingServer.ts | 5 +-
 .../signalingServer/types/IMMessage.ts | 1 -
 src/features/activitypub/fedistore.ts | 7 +-
 src/features/fhe/fhe_test.ts | 79 +-
 src/features/incentive/PointSystem.ts | 59 +-
 src/features/incentive/referrals.ts | 6 +-
 src/features/mcp/MCPServer.ts | 37 +-
 src/features/mcp/examples/remoteExample.ts | 44 +-
 src/features/mcp/examples/simpleExample.ts | 37 +-
 src/features/mcp/index.ts | 22 +-
 src/features/mcp/tools/demosTools.ts | 37 +-
 src/features/metrics/MetricsCollector.ts | 66 +-
 src/features/metrics/MetricsServer.ts | 6 +-
 src/features/metrics/MetricsService.ts | 46 +-
 .../chainwares/aptoswares/Move.toml | 2 +-
 .../routines/executors/aptos_balance_query.ts | 22 +-
 .../executors/aptos_contract_write.ts | 5 +-
 .../routines/executors/balance_query.ts | 8 +-
 .../multichain/routines/executors/pay.ts | 32 +-
 src/features/storageprogram/routes.ts | 68 +-
 src/features/tlsnotary/TLSNotaryService.ts | 1438 +++---
 src/features/tlsnotary/ffi.ts | 746 +--
 src/features/tlsnotary/index.ts | 118 +-
 src/features/tlsnotary/portAllocator.ts | 186 +-
 src/features/tlsnotary/proxyManager.ts | 833 ++--
 src/features/tlsnotary/routes.ts | 247 +-
 src/features/tlsnotary/tokenManager.ts | 377 +-
 src/features/web2/dahr/DAHR.ts | 6 +-
 src/features/web2/dahr/DAHRFactory.ts | 15 +-
 src/features/web2/proxy/Proxy.ts | 8 +-
 src/index.ts | 26 +-
 src/libs/abstraction/web2/discord.ts | 108 +-
 src/libs/blockchain/UDTypes/uns_sol.json | 3676 ++++++---------
 src/libs/blockchain/UDTypes/uns_sol.ts | 4180 ++++++++---------
 .../gcr/gcr_routines/GCRIdentityRoutines.ts | 11 +-
 .../gcr/gcr_routines/GCRTLSNotaryRoutines.ts | 33 +-
 .../gcr_routines/handleNativeOperations.ts | 53 +-
 .../gcr/gcr_routines/udIdentityManager.ts | 5 +-
 .../gcr_routines/udSolanaResolverHelper.ts | 1336 +++---
 src/libs/blockchain/gcr/handleGCR.ts | 24 +-
 .../routines/beforeFindGenesisHooks.ts | 4 +-
 .../routines/executeNativeTransaction.ts | 14 +-
 src/libs/blockchain/routines/subOperations.ts | 4 +-
 .../routines/validateTransaction.ts | 28 +-
 src/libs/blockchain/transaction.ts | 56 +-
 src/libs/consensus/v2/interfaces.ts | 3 +-
 src/libs/consensus/v2/routines/getShard.ts | 4 +-
 src/libs/consensus/v2/routines/isValidator.ts | 2 +-
 .../v2/routines/orderTransactions.ts | 4 +-
 .../consensus/v2/types/secretaryManager.ts | 26 +-
 src/libs/crypto/cryptography.ts | 5 +-
 src/libs/identity/tools/twitter.ts | 4 +-
 src/libs/l2ps/parallelNetworks.ts | 67 +-
 src/libs/network/endpointHandlers.ts | 15 +-
 src/libs/network/index.ts | 2 +-
 src/libs/network/manageAuth.ts | 12 +-
 src/libs/network/manageConsensusRoutines.ts | 13 +-
 src/libs/network/manageGCRRoutines.ts | 100 +-
 src/libs/network/manageNativeBridge.ts | 19 +-
 src/libs/network/manageNodeCall.ts | 493 +-
 src/libs/network/middleware/rateLimiter.ts | 16 +-
 .../routines/nodecalls/getBlockByHash.ts | 3 +-
 .../nodecalls/getBlockHeaderByHash.ts | 6 +-
 .../nodecalls/getBlockHeaderByNumber.ts | 6 +-
 src/libs/network/routines/timeSync.ts | 4 +-
 .../transactions/handleIdentityRequest.ts | 9 +-
 .../routines/transactions/handleL2PS.ts | 11 +-
 .../transactions/handleWeb2ProxyRequest.ts | 6 +-
 src/libs/network/server_rpc.ts | 11 +-
 src/libs/omniprotocol/auth/parser.ts | 23 +-
 src/libs/omniprotocol/auth/types.ts | 16 +-
 src/libs/omniprotocol/auth/verifier.ts | 31 +-
 .../omniprotocol/integration/BaseAdapter.ts | 16 +-
 .../integration/consensusAdapter.ts | 54 +-
 src/libs/omniprotocol/integration/index.ts | 5 +-
 .../omniprotocol/integration/peerAdapter.ts | 15 +-
 src/libs/omniprotocol/integration/startup.ts | 26 +-
 src/libs/omniprotocol/protocol/dispatcher.ts | 4 +-
 .../protocol/handlers/consensus.ts | 98 +-
 .../omniprotocol/protocol/handlers/control.ts | 41 +-
 .../omniprotocol/protocol/handlers/gcr.ts | 415 +-
 .../omniprotocol/protocol/handlers/meta.ts | 43 +-
 .../omniprotocol/protocol/handlers/sync.ts | 48 +-
 .../protocol/handlers/transaction.ts | 127 +-
 .../omniprotocol/protocol/handlers/utils.ts | 1 -
 src/libs/omniprotocol/protocol/opcodes.ts | 23 +-
 src/libs/omniprotocol/protocol/registry.ts | 420 +-
 .../omniprotocol/ratelimit/RateLimiter.ts | 13 +-
 .../omniprotocol/serialization/consensus.ts | 5 +-
 .../omniprotocol/serialization/control.ts | 76 +-
 src/libs/omniprotocol/serialization/gcr.ts | 1 -
 .../serialization/jsonEnvelope.ts | 6 +-
 src/libs/omniprotocol/serialization/meta.ts | 28 +-
 .../omniprotocol/serialization/primitives.ts | 55 +-
 src/libs/omniprotocol/serialization/sync.ts | 58 +-
 .../omniprotocol/serialization/transaction.ts | 17 +-
 .../omniprotocol/server/OmniProtocolServer.ts | 31 +-
 .../server/ServerConnectionManager.ts | 9 +-
 src/libs/omniprotocol/server/TLSServer.ts | 41 +-
 src/libs/omniprotocol/tls/certificates.ts | 44 +-
 src/libs/omniprotocol/tls/initialize.ts | 6 +-
 src/libs/omniprotocol/tls/types.ts | 16 +-
 .../transport/ConnectionFactory.ts | 6 +-
 .../omniprotocol/transport/ConnectionPool.ts | 7 +-
 .../omniprotocol/transport/MessageFramer.ts | 4 +-
 .../omniprotocol/transport/TLSConnection.ts | 14 +-
 src/libs/omniprotocol/transport/types.ts | 14 +-
 src/libs/omniprotocol/types/errors.ts | 10 +-
 src/libs/omniprotocol/types/message.ts | 4 +-
 src/libs/peer/PeerManager.ts | 14 +-
 src/libs/peer/routines/broadcast.ts | 4 +-
 .../peer/routines/getPeerConnectionString.ts | 7 +-
 src/libs/peer/routines/getPeerIdentity.ts | 8 +-
 src/libs/peer/routines/peerBootstrap.ts | 30 +-
 src/libs/peer/routines/peerGossip.ts | 8 +-
 src/libs/utils/calibrateTime.ts | 5 +-
 src/libs/utils/demostdlib/groundControl.ts | 6 +-
 src/libs/utils/keyMaker.ts | 5 +-
 src/libs/utils/showPubkey.ts | 24 +-
 src/migrations/AddReferralSupport.ts | 14 +-
 src/model/datasource.ts | 1 -
 src/model/entities/GCRv2/GCRSubnetsTxs.ts | 2 +-
 src/model/entities/GCRv2/GCR_TLSNotary.ts | 8 +-
 src/model/entities/Mempool.ts | 7 +-
 src/types/nomis-augmentations.d.ts | 54 +-
 src/utilities/Diagnostic.ts | 148 +-
 src/utilities/cli_libraries/wallet.ts | 5 +-
 src/utilities/mainLoop.ts | 2 +-
 src/utilities/sharedState.ts | 2 +-
 src/utilities/tui/CategorizedLogger.ts | 5 +-
 src/utilities/tui/LegacyLoggerAdapter.ts | 14 +-
 src/utilities/tui/TUIManager.ts | 95 +-
 src/utilities/validateUint8Array.ts | 8 +-
 src/utilities/waiter.ts | 4 +-
 start_db | 260 +-
 tests/mocks/demosdk-encryption.ts | 9 +-
 tests/mocks/demosdk-types.ts | 6 +-
 tests/mocks/demosdk-websdk.ts | 10 +-
 tests/omniprotocol/consensus.test.ts | 119 +-
 tests/omniprotocol/fixtures.test.ts | 4 +-
 tests/omniprotocol/gcr.test.ts | 42 +-
 tests/omniprotocol/peerOmniAdapter.test.ts | 20 +-
 tests/omniprotocol/transaction.test.ts | 16 +-
 tlsnotary/docker-compose.yml | 40 +-
 260 files changed, 18026 insertions(+), 16292 deletions(-)
 create mode 100644 .serena/memories/session_2026-01-19_storage_docs_complete.md

diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md
index e8a438f91..573ebe00a 100644
--- a/.github/copilot-instructions.md
+++ b/.github/copilot-instructions.md
@@ -5,6 +5,7 @@ This project is the Demos Network node/RPC implementation. We use **bd (beads)** for all task tracking.
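The copilot-instructions hunk above notes that bd (beads) tracks every task as one JSON object per line in `.beads/issues.jsonl`. Because the store is line-oriented, quick counts work with plain text tools and no bd binary. A minimal sketch, assuming only the `status` field visible in this patch's issues file — the two sample records below are illustrative, not real issues:

```shell
# Count open vs. closed issues in a beads-style JSONL file.
# The sample records stand in for .beads/issues.jsonl.
issues=$(mktemp)
cat > "$issues" <<'EOF'
{"id":"node-1q8","title":"Categorized Logger Utility","status":"closed","priority":1}
{"id":"node-66u","title":"TUI Framework Setup","status":"open","priority":1}
EOF

# grep -c counts matching lines; good enough for one-object-per-line JSONL.
open_count=$(grep -c '"status":"open"' "$issues")
closed_count=$(grep -c '"status":"closed"' "$issues")
echo "open=$open_count closed=$closed_count"
rm -f "$issues"
```

In practice `bd` (or `jq`) is the right tool for anything beyond counting, since a naive grep would also match these substrings inside description text.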
**Key Features:** + - Dependency-aware issue tracking - Auto-sync with Git via JSONL - AI-optimized CLI with JSON output diff --git a/.github/workflows/claude-merge-fix.yml b/.github/workflows/claude-merge-fix.yml index e20500f94..379ab3e61 100644 --- a/.github/workflows/claude-merge-fix.yml +++ b/.github/workflows/claude-merge-fix.yml @@ -1,66 +1,66 @@ name: Preserve Claude Memory Files on: - push: - branches: ['**'] + push: + branches: ["**"] jobs: - preserve-claude: - runs-on: ubuntu-latest - steps: - - name: Checkout repository - uses: actions/checkout@v4 - with: - fetch-depth: 2 - token: ${{ secrets.GITHUB_TOKEN }} + preserve-claude: + runs-on: ubuntu-latest + steps: + - name: Checkout repository + uses: actions/checkout@v4 + with: + fetch-depth: 2 + token: ${{ secrets.GITHUB_TOKEN }} - - name: Check if this was a merge commit - id: check_merge - run: | - if git log -1 --pretty=format:"%P" | grep -q " "; then - echo "is_merge=true" >> $GITHUB_OUTPUT - echo "✅ Detected merge commit" - else - echo "is_merge=false" >> $GITHUB_OUTPUT - exit 0 - fi + - name: Check if this was a merge commit + id: check_merge + run: | + if git log -1 --pretty=format:"%P" | grep -q " "; then + echo "is_merge=true" >> $GITHUB_OUTPUT + echo "✅ Detected merge commit" + else + echo "is_merge=false" >> $GITHUB_OUTPUT + exit 0 + fi - - name: Check for .claude changes in merge - if: steps.check_merge.outputs.is_merge == 'true' - id: check_claude - run: | - if git log -1 --name-only | grep -q "^\.claude/"; then - echo "claude_changed=true" >> $GITHUB_OUTPUT - echo "🚨 .claude files were modified in merge - will revert!" - else - echo "claude_changed=false" >> $GITHUB_OUTPUT - exit 0 - fi + - name: Check for .claude changes in merge + if: steps.check_merge.outputs.is_merge == 'true' + id: check_claude + run: | + if git log -1 --name-only | grep -q "^\.claude/"; then + echo "claude_changed=true" >> $GITHUB_OUTPUT + echo "🚨 .claude files were modified in merge - will revert!" 
+ else + echo "claude_changed=false" >> $GITHUB_OUTPUT + exit 0 + fi - - name: Revert .claude to pre-merge state - if: steps.check_merge.outputs.is_merge == 'true' && steps.check_claude.outputs.claude_changed == 'true' - run: | - CURRENT_BRANCH=$(git branch --show-current) - echo "🔄 Reverting .claude/ to pre-merge state on $CURRENT_BRANCH" - - MERGE_BASE=$(git log -1 --pretty=format:"%P" | cut -d' ' -f1) - git checkout $MERGE_BASE -- .claude/ 2>/dev/null || echo "No .claude in base commit" - - git config user.name "github-actions[bot]" - git config user.email "41898282+github-actions[bot]@users.noreply.github.com" - - if git diff --staged --quiet; then - git add .claude/ - fi - - if ! git diff --cached --quiet; then - git commit -m "🔒 Preserve branch-specific .claude files + - name: Revert .claude to pre-merge state + if: steps.check_merge.outputs.is_merge == 'true' && steps.check_claude.outputs.claude_changed == 'true' + run: | + CURRENT_BRANCH=$(git branch --show-current) + echo "🔄 Reverting .claude/ to pre-merge state on $CURRENT_BRANCH" - Reverted .claude/ changes from merge to keep $CURRENT_BRANCH version. - [skip ci]" - - git push origin $CURRENT_BRANCH - echo "✅ Successfully preserved $CURRENT_BRANCH .claude files" - else - echo "ℹ️ No changes to revert" - fi + MERGE_BASE=$(git log -1 --pretty=format:"%P" | cut -d' ' -f1) + git checkout $MERGE_BASE -- .claude/ 2>/dev/null || echo "No .claude in base commit" + + git config user.name "github-actions[bot]" + git config user.email "41898282+github-actions[bot]@users.noreply.github.com" + + if git diff --staged --quiet; then + git add .claude/ + fi + + if ! git diff --cached --quiet; then + git commit -m "🔒 Preserve branch-specific .claude files + + Reverted .claude/ changes from merge to keep $CURRENT_BRANCH version. 
+ [skip ci]" + + git push origin $CURRENT_BRANCH + echo "✅ Successfully preserved $CURRENT_BRANCH .claude files" + else + echo "ℹ️ No changes to revert" + fi diff --git a/.github/workflows/claude-merge-notify.yml b/.github/workflows/claude-merge-notify.yml index f32480ca7..95949b380 100644 --- a/.github/workflows/claude-merge-notify.yml +++ b/.github/workflows/claude-merge-notify.yml @@ -1,38 +1,38 @@ name: Claude PR Warning on: - pull_request: - branches: ['**'] - types: [opened, synchronize] + pull_request: + branches: ["**"] + types: [opened, synchronize] jobs: - claude-warning: - runs-on: ubuntu-latest - steps: - - name: Checkout PR - uses: actions/checkout@v4 - with: - fetch-depth: 0 + claude-warning: + runs-on: ubuntu-latest + steps: + - name: Checkout PR + uses: actions/checkout@v4 + with: + fetch-depth: 0 - - name: Check for .claude changes - run: | - echo "🔍 Checking if PR touches .claude/ files..." - - if git diff --name-only origin/${{ github.base_ref }}...HEAD | grep -q "^\.claude/"; then - echo "⚠️ This PR modifies .claude/ files" - - COMMENT_BODY="⚠️ **Claude Memory Files Detected** + - name: Check for .claude changes + run: | + echo "🔍 Checking if PR touches .claude/ files..." - This PR modifies \`.claude/\` files. After merge, these changes will be **automatically reverted** to preserve branch-specific Claude conversation context. + if git diff --name-only origin/${{ github.base_ref }}...HEAD | grep -q "^\.claude/"; then + echo "⚠️ This PR modifies .claude/ files" + + COMMENT_BODY="⚠️ **Claude Memory Files Detected** - **Files that will be reverted:** - $(git diff --name-only origin/${{ github.base_ref }}...HEAD | grep '^\.claude/' | sed 's/^/- /' | head -10) + This PR modifies \`.claude/\` files. After merge, these changes will be **automatically reverted** to preserve branch-specific Claude conversation context. - This is expected behavior to keep Claude conversation context branch-specific. 
✅" - - gh pr comment ${{ github.event.number }} --body "$COMMENT_BODY" || echo "Could not post comment" - else - echo "✅ No .claude files affected" - fi - env: - GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} + **Files that will be reverted:** + $(git diff --name-only origin/${{ github.base_ref }}...HEAD | grep '^\.claude/' | sed 's/^/- /' | head -10) + + This is expected behavior to keep Claude conversation context branch-specific. ✅" + + gh pr comment ${{ github.event.number }} --body "$COMMENT_BODY" || echo "Could not post comment" + else + echo "✅ No .claude files affected" + fi + env: + GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/fix-beads-conflicts.yml b/.github/workflows/fix-beads-conflicts.yml index c71d8fa6f..2d37f5942 100644 --- a/.github/workflows/fix-beads-conflicts.yml +++ b/.github/workflows/fix-beads-conflicts.yml @@ -1,73 +1,73 @@ name: Preserve Branch-Specific Beads Files on: - push: - branches: ['**'] + push: + branches: ["**"] jobs: - preserve-beads: - runs-on: ubuntu-latest - steps: - - name: Checkout repository - uses: actions/checkout@v4 - with: - fetch-depth: 2 - token: ${{ secrets.GITHUB_TOKEN }} + preserve-beads: + runs-on: ubuntu-latest + steps: + - name: Checkout repository + uses: actions/checkout@v4 + with: + fetch-depth: 2 + token: ${{ secrets.GITHUB_TOKEN }} - - name: Check if this was a merge commit - id: check_merge - run: | - if git log -1 --pretty=format:"%P" | grep -q " "; then - echo "is_merge=true" >> $GITHUB_OUTPUT - echo "✅ Detected merge commit" - else - echo "is_merge=false" >> $GITHUB_OUTPUT - exit 0 - fi + - name: Check if this was a merge commit + id: check_merge + run: | + if git log -1 --pretty=format:"%P" | grep -q " "; then + echo "is_merge=true" >> $GITHUB_OUTPUT + echo "✅ Detected merge commit" + else + echo "is_merge=false" >> $GITHUB_OUTPUT + exit 0 + fi - - name: Check for .beads changes in merge - if: steps.check_merge.outputs.is_merge == 'true' - id: check_beads - run: | - if git log -1 
--name-only | grep -qE "^\.beads/(issues\.jsonl|deletions\.jsonl|metadata\.json)$"; then - echo "beads_changed=true" >> $GITHUB_OUTPUT - echo "🚨 .beads files were modified in merge - will revert!" - else - echo "beads_changed=false" >> $GITHUB_OUTPUT - exit 0 - fi + - name: Check for .beads changes in merge + if: steps.check_merge.outputs.is_merge == 'true' + id: check_beads + run: | + if git log -1 --name-only | grep -qE "^\.beads/(issues\.jsonl|deletions\.jsonl|metadata\.json)$"; then + echo "beads_changed=true" >> $GITHUB_OUTPUT + echo "🚨 .beads files were modified in merge - will revert!" + else + echo "beads_changed=false" >> $GITHUB_OUTPUT + exit 0 + fi - - name: Revert .beads to pre-merge state - if: steps.check_merge.outputs.is_merge == 'true' && steps.check_beads.outputs.beads_changed == 'true' - run: | - CURRENT_BRANCH=$(git branch --show-current) - echo "🔄 Reverting .beads/ issue tracking files to pre-merge state on $CURRENT_BRANCH" + - name: Revert .beads to pre-merge state + if: steps.check_merge.outputs.is_merge == 'true' && steps.check_beads.outputs.beads_changed == 'true' + run: | + CURRENT_BRANCH=$(git branch --show-current) + echo "🔄 Reverting .beads/ issue tracking files to pre-merge state on $CURRENT_BRANCH" - # Get the first parent (target branch before merge) - MERGE_BASE=$(git log -1 --pretty=format:"%P" | cut -d' ' -f1) + # Get the first parent (target branch before merge) + MERGE_BASE=$(git log -1 --pretty=format:"%P" | cut -d' ' -f1) - # Restore specific .beads files from the target branch's state before merge - git checkout $MERGE_BASE -- .beads/issues.jsonl 2>/dev/null || echo "No issues.jsonl in base commit" - git checkout $MERGE_BASE -- .beads/deletions.jsonl 2>/dev/null || echo "No deletions.jsonl in base commit" - git checkout $MERGE_BASE -- .beads/metadata.json 2>/dev/null || echo "No metadata.json in base commit" + # Restore specific .beads files from the target branch's state before merge + git checkout $MERGE_BASE -- 
.beads/issues.jsonl 2>/dev/null || echo "No issues.jsonl in base commit" + git checkout $MERGE_BASE -- .beads/deletions.jsonl 2>/dev/null || echo "No deletions.jsonl in base commit" + git checkout $MERGE_BASE -- .beads/metadata.json 2>/dev/null || echo "No metadata.json in base commit" - # Configure git - git config user.name "github-actions[bot]" - git config user.email "41898282+github-actions[bot]@users.noreply.github.com" + # Configure git + git config user.name "github-actions[bot]" + git config user.email "41898282+github-actions[bot]@users.noreply.github.com" - # Commit the reversion - if git diff --staged --quiet; then - git add .beads/issues.jsonl .beads/deletions.jsonl .beads/metadata.json 2>/dev/null || true - fi + # Commit the reversion + if git diff --staged --quiet; then + git add .beads/issues.jsonl .beads/deletions.jsonl .beads/metadata.json 2>/dev/null || true + fi - if ! git diff --cached --quiet; then - git commit -m "🔒 Preserve branch-specific .beads issue tracking files + if ! git diff --cached --quiet; then + git commit -m "🔒 Preserve branch-specific .beads issue tracking files - Reverted .beads/ changes from merge to keep $CURRENT_BRANCH version intact. - [skip ci]" + Reverted .beads/ changes from merge to keep $CURRENT_BRANCH version intact. 
+ [skip ci]" - git push origin $CURRENT_BRANCH - echo "✅ Successfully preserved $CURRENT_BRANCH .beads files" - else - echo "ℹ️ No changes to revert" - fi + git push origin $CURRENT_BRANCH + echo "✅ Successfully preserved $CURRENT_BRANCH .beads files" + else + echo "ℹ️ No changes to revert" + fi diff --git a/.github/workflows/fix-serena-conflicts.yml b/.github/workflows/fix-serena-conflicts.yml index 00a4ad53d..81fffa883 100644 --- a/.github/workflows/fix-serena-conflicts.yml +++ b/.github/workflows/fix-serena-conflicts.yml @@ -1,71 +1,71 @@ name: Preserve Branch-Specific Serena Files on: - push: - branches: ['**'] + push: + branches: ["**"] jobs: - preserve-serena: - runs-on: ubuntu-latest - steps: - - name: Checkout repository - uses: actions/checkout@v4 - with: - fetch-depth: 2 - token: ${{ secrets.GITHUB_TOKEN }} + preserve-serena: + runs-on: ubuntu-latest + steps: + - name: Checkout repository + uses: actions/checkout@v4 + with: + fetch-depth: 2 + token: ${{ secrets.GITHUB_TOKEN }} - - name: Check if this was a merge commit - id: check_merge - run: | - if git log -1 --pretty=format:"%P" | grep -q " "; then - echo "is_merge=true" >> $GITHUB_OUTPUT - echo "✅ Detected merge commit" - else - echo "is_merge=false" >> $GITHUB_OUTPUT - exit 0 - fi + - name: Check if this was a merge commit + id: check_merge + run: | + if git log -1 --pretty=format:"%P" | grep -q " "; then + echo "is_merge=true" >> $GITHUB_OUTPUT + echo "✅ Detected merge commit" + else + echo "is_merge=false" >> $GITHUB_OUTPUT + exit 0 + fi - - name: Check for .serena changes in merge - if: steps.check_merge.outputs.is_merge == 'true' - id: check_serena - run: | - if git log -1 --name-only | grep -q "^\.serena/"; then - echo "serena_changed=true" >> $GITHUB_OUTPUT - echo "🚨 .serena files were modified in merge - will revert!" 
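All of the preserve/notify workflows in this patch gate on the same merge test: `git log -1 --pretty=format:"%P"` prints the parent hashes of HEAD separated by spaces, so a commit with two or more parents (a merge) is detected simply by grepping for a space. A self-contained sketch of that check, using placeholder hashes rather than a live repository:

```shell
# %P yields one hash for an ordinary commit and space-separated hashes
# for a merge, so the workflows above just grep for a space.
is_merge() {
    # $1 stands in for the output of: git log -1 --pretty=format:"%P"
    if printf '%s' "$1" | grep -q " "; then
        echo true
    else
        echo false
    fi
}

single=$(is_merge "3f2a9c1")           # one parent  -> false
merged=$(is_merge "3f2a9c1 b77d0e4")   # two parents -> true
echo "single=$single merged=$merged"
```

The same reasoning explains the revert step: the first hash in that `%P` list is the target branch's tip before the merge, which is why the workflows take `cut -d' ' -f1` as the restore point.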
- else - echo "serena_changed=false" >> $GITHUB_OUTPUT - exit 0 - fi + - name: Check for .serena changes in merge + if: steps.check_merge.outputs.is_merge == 'true' + id: check_serena + run: | + if git log -1 --name-only | grep -q "^\.serena/"; then + echo "serena_changed=true" >> $GITHUB_OUTPUT + echo "🚨 .serena files were modified in merge - will revert!" + else + echo "serena_changed=false" >> $GITHUB_OUTPUT + exit 0 + fi - - name: Revert .serena to pre-merge state - if: steps.check_merge.outputs.is_merge == 'true' && steps.check_serena.outputs.serena_changed == 'true' - run: | - CURRENT_BRANCH=$(git branch --show-current) - echo "🔄 Reverting .serena/ to pre-merge state on $CURRENT_BRANCH" - - # Get the first parent (target branch before merge) - MERGE_BASE=$(git log -1 --pretty=format:"%P" | cut -d' ' -f1) - - # Restore .serena from the target branch's state before merge - git checkout $MERGE_BASE -- .serena/ 2>/dev/null || echo "No .serena in base commit" - - # Configure git - git config user.name "github-actions[bot]" - git config user.email "41898282+github-actions[bot]@users.noreply.github.com" - - # Commit the reversion - if git diff --staged --quiet; then - git add .serena/ - fi - - if ! git diff --cached --quiet; then - git commit -m "🔒 Preserve branch-specific .serena files + - name: Revert .serena to pre-merge state + if: steps.check_merge.outputs.is_merge == 'true' && steps.check_serena.outputs.serena_changed == 'true' + run: | + CURRENT_BRANCH=$(git branch --show-current) + echo "🔄 Reverting .serena/ to pre-merge state on $CURRENT_BRANCH" - Reverted .serena/ changes from merge to keep $CURRENT_BRANCH version intact. 
- [skip ci]" - - git push origin $CURRENT_BRANCH - echo "✅ Successfully preserved $CURRENT_BRANCH .serena files" - else - echo "ℹ️ No changes to revert" - fi + # Get the first parent (target branch before merge) + MERGE_BASE=$(git log -1 --pretty=format:"%P" | cut -d' ' -f1) + + # Restore .serena from the target branch's state before merge + git checkout $MERGE_BASE -- .serena/ 2>/dev/null || echo "No .serena in base commit" + + # Configure git + git config user.name "github-actions[bot]" + git config user.email "41898282+github-actions[bot]@users.noreply.github.com" + + # Commit the reversion + if git diff --staged --quiet; then + git add .serena/ + fi + + if ! git diff --cached --quiet; then + git commit -m "🔒 Preserve branch-specific .serena files + + Reverted .serena/ changes from merge to keep $CURRENT_BRANCH version intact. + [skip ci]" + + git push origin $CURRENT_BRANCH + echo "✅ Successfully preserved $CURRENT_BRANCH .serena files" + else + echo "ℹ️ No changes to revert" + fi diff --git a/.github/workflows/notify-beads-merging.yml b/.github/workflows/notify-beads-merging.yml index e47ffbaa7..d8472a2a1 100644 --- a/.github/workflows/notify-beads-merging.yml +++ b/.github/workflows/notify-beads-merging.yml @@ -1,37 +1,37 @@ name: Beads Merge Warning on: - pull_request: - branches: ['**'] + pull_request: + branches: ["**"] jobs: - beads-warning: - runs-on: ubuntu-latest - steps: - - name: Check for .beads changes - uses: actions/checkout@v4 - with: - fetch-depth: 0 + beads-warning: + runs-on: ubuntu-latest + steps: + - name: Check for .beads changes + uses: actions/checkout@v4 + with: + fetch-depth: 0 - - name: Warn about .beads files - run: | - # Check if PR touches .beads issue tracking files - if git diff --name-only origin/${{ github.base_ref }}...HEAD | grep -qE "^\.beads/(issues\.jsonl|deletions\.jsonl|metadata\.json)$"; then - echo "⚠️ This PR modifies .beads/ issue tracking files" - echo "🤖 After merge, these will be auto-reverted to preserve 
branch-specific issues" - echo "" - echo "Files affected:" - git diff --name-only origin/${{ github.base_ref }}...HEAD | grep -E "^\.beads/(issues\.jsonl|deletions\.jsonl|metadata\.json)$" | sed 's/^/ - /' + - name: Warn about .beads files + run: | + # Check if PR touches .beads issue tracking files + if git diff --name-only origin/${{ github.base_ref }}...HEAD | grep -qE "^\.beads/(issues\.jsonl|deletions\.jsonl|metadata\.json)$"; then + echo "⚠️ This PR modifies .beads/ issue tracking files" + echo "🤖 After merge, these will be auto-reverted to preserve branch-specific issues" + echo "" + echo "Files affected:" + git diff --name-only origin/${{ github.base_ref }}...HEAD | grep -E "^\.beads/(issues\.jsonl|deletions\.jsonl|metadata\.json)$" | sed 's/^/ - /' - # Post comment on PR - gh pr comment ${{ github.event.number }} --body "⚠️ **Beads Issue Tracking Files Detected** + # Post comment on PR + gh pr comment ${{ github.event.number }} --body "⚠️ **Beads Issue Tracking Files Detected** - This PR modifies \`.beads/\` issue tracking files. After merge, these changes will be **automatically reverted** to preserve branch-specific issue tracking. + This PR modifies \`.beads/\` issue tracking files. After merge, these changes will be **automatically reverted** to preserve branch-specific issue tracking. 
- Files that will be reverted: - $(git diff --name-only origin/${{ github.base_ref }}...HEAD | grep -E '^\.beads/(issues\.jsonl|deletions\.jsonl|metadata\.json)$' | sed 's/^/- /')" || echo "Could not post comment" - else - echo "✅ No .beads issue tracking files affected" - fi - env: - GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} + Files that will be reverted: + $(git diff --name-only origin/${{ github.base_ref }}...HEAD | grep -E '^\.beads/(issues\.jsonl|deletions\.jsonl|metadata\.json)$' | sed 's/^/- /')" || echo "Could not post comment" + else + echo "✅ No .beads issue tracking files affected" + fi + env: + GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} diff --git a/.github/workflows/notify-serena-merging.yml b/.github/workflows/notify-serena-merging.yml index 8d52a163a..60cf891ee 100644 --- a/.github/workflows/notify-serena-merging.yml +++ b/.github/workflows/notify-serena-merging.yml @@ -1,37 +1,37 @@ name: Serena Merge Warning on: - pull_request: - branches: ['**'] + pull_request: + branches: ["**"] jobs: - serena-warning: - runs-on: ubuntu-latest - steps: - - name: Check for .serena changes - uses: actions/checkout@v4 - with: - fetch-depth: 0 + serena-warning: + runs-on: ubuntu-latest + steps: + - name: Check for .serena changes + uses: actions/checkout@v4 + with: + fetch-depth: 0 - - name: Warn about .serena files - run: | - # Check if PR touches .serena files - if git diff --name-only origin/${{ github.base_ref }}...HEAD | grep -q "^\.serena/"; then - echo "⚠️ This PR modifies .serena/ files" - echo "🤖 After merge, these will be auto-reverted to preserve branch-specific memories" - echo "" - echo "Files affected:" - git diff --name-only origin/${{ github.base_ref }}...HEAD | grep "^\.serena/" | sed 's/^/ - /' - - # Post comment on PR - gh pr comment ${{ github.event.number }} --body "⚠️ **MCP Memory Files Detected** + - name: Warn about .serena files + run: | + # Check if PR touches .serena files + if git diff --name-only origin/${{ github.base_ref }}...HEAD | grep -q 
"^\.serena/"; then + echo "⚠️ This PR modifies .serena/ files" + echo "🤖 After merge, these will be auto-reverted to preserve branch-specific memories" + echo "" + echo "Files affected:" + git diff --name-only origin/${{ github.base_ref }}...HEAD | grep "^\.serena/" | sed 's/^/ - /' + + # Post comment on PR + gh pr comment ${{ github.event.number }} --body "⚠️ **MCP Memory Files Detected** - This PR modifies \`.serena/\` files. After merge, these changes will be **automatically reverted** to preserve branch-specific MCP memories. + This PR modifies \`.serena/\` files. After merge, these changes will be **automatically reverted** to preserve branch-specific MCP memories. - Files that will be reverted: - $(git diff --name-only origin/${{ github.base_ref }}...HEAD | grep '^\.serena/' | sed 's/^/- /')" || echo "Could not post comment" - else - echo "✅ No .serena files affected" - fi - env: - GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} + Files that will be reverted: + $(git diff --name-only origin/${{ github.base_ref }}...HEAD | grep '^\.serena/' | sed 's/^/- /')" || echo "Could not post comment" + else + echo "✅ No .serena files affected" + fi + env: + GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} diff --git a/.serena/memories/_continue_here.md b/.serena/memories/_continue_here.md index e477da238..13dd195d2 100644 --- a/.serena/memories/_continue_here.md +++ b/.serena/memories/_continue_here.md @@ -1,26 +1,32 @@ # Continue Here - Last Session: 2025-12-17 ## Last Activity + TypeScript type audit completed successfully. 
## Status + - **Branch**: custom_protocol - **Type errors**: 0 production, 2 test-only (fhe_test.ts - not planned) - **Epic node-tsaudit**: CLOSED ## Recent Commits + - `c684bb2a` - fix: remove dead crypto code and fix showPubkey type - `20137452` - fix: resolve OmniProtocol type errors - `fc5abb9e` - fix: resolve 22 TypeScript type errors ## Key Memories + - `typescript_audit_complete_2025_12_17` - Full audit details and patterns ## Previous Work (2025-12-16) + - Console.log migration epic COMPLETE (node-7d8) - OmniProtocol 90% complete (node-99g) ## Ready For + - New feature development - Further code quality improvements - Any pending tasks in beads diff --git a/.serena/memories/_index.md b/.serena/memories/_index.md index c6ba9770a..6d0b4c003 100644 --- a/.serena/memories/_index.md +++ b/.serena/memories/_index.md @@ -1,15 +1,18 @@ # Serena Memory Index - Quick Navigation ## Current Work (Start Here) -- **_continue_here** - Active work streams and next actions + +- **\_continue_here** - Active work streams and next actions ## OmniProtocol Implementation + - **omniprotocol_complete_2025_11_11** - Comprehensive status (90% complete) - **omniprotocol_wave8_tcp_physical_layer** - TCP layer implementation - **omniprotocol_wave8.1_complete** - Wave 8.1 completion details - **omniprotocol_session_2025-12-01** - Recent session notes ## UD Integration + - **ud_phases_tracking** - Complete phases 1-6 overview - **ud_phase5_complete** - Detailed Phase 5 implementation - **ud_integration_complete** - Current status, dependencies, next steps @@ -20,6 +23,7 @@ - **session_ud_points_implementation_2025_01_31** - Points system session ## Project Core + - **project_purpose** - Demos Network node software overview - **project_context_consolidated** - Consolidated project context - **tech_stack** - Languages, frameworks, tools @@ -28,6 +32,7 @@ - **development_patterns** - Established code patterns ## Development Workflow + - **suggested_commands** - Common CLI commands - 
**task_completion_guidelines** - Workflow patterns diff --git a/.serena/memories/code_style_conventions.md b/.serena/memories/code_style_conventions.md index 380a46056..f550b4a02 100644 --- a/.serena/memories/code_style_conventions.md +++ b/.serena/memories/code_style_conventions.md @@ -1,13 +1,16 @@ # Demos Network Node Software - Code Style & Conventions ## ESLint Configuration + ### Naming Conventions (enforced by @typescript-eslint/naming-convention) + - **Variables/Functions/Methods**: camelCase (leading/trailing underscores allowed) - **Classes/Types/Interfaces**: PascalCase - **Interfaces**: PascalCase (no "I" prefix - explicitly forbidden) - **Type Aliases**: PascalCase ### Code Style Rules + - **Quotes**: Double quotes (`"`) required - **Semicolons**: None (`;` forbidden) - **Indentation**: 4 spaces (via Prettier) @@ -15,6 +18,7 @@ - **Switch Cases**: Colon spacing enforced ## Prettier Configuration + - **Print Width**: 80 characters - **Tab Width**: 4 spaces - **Single Quote**: false (use double quotes) @@ -25,28 +29,32 @@ - **Bracket Spacing**: true ## TypeScript Configuration + - **Target**: ESNext - **Module**: ESNext with bundler resolution - **Strict Mode**: Enabled with exceptions: - - `strictNullChecks`: false - - `noImplicitAny`: false - - `strictBindCallApply`: false + - `strictNullChecks`: false + - `noImplicitAny`: false + - `strictBindCallApply`: false - **Decorators**: Experimental decorators enabled - **Source Maps**: Enabled for debugging ## Import Conventions + - **Path Aliases**: Use `@/` instead of relative imports (`../../../`) - **Import Style**: ES6 imports with destructuring where appropriate - **Restricted Imports**: Warnings for certain import patterns ## File Organization + - **License Headers**: All files start with KyneSys Labs license - **Feature-based Structure**: Code organized in `src/features/` by domain - **Utilities**: Shared utilities in `src/utilities/` and `src/libs/` - **Types**: Centralized type definitions in 
`src/types/` ## Comments & Documentation + - **License**: CC BY-NC-ND 4.0 header in all source files - **JSDoc**: Expected for public APIs and complex functions - **Review Comments**: Use `// REVIEW:` for new features needing attention -- **FIXME Comments**: For temporary workarounds needing later fixes \ No newline at end of file +- **FIXME Comments**: For temporary workarounds needing later fixes diff --git a/.serena/memories/codebase_structure.md b/.serena/memories/codebase_structure.md index a67dbf6f9..d381859a7 100644 --- a/.serena/memories/codebase_structure.md +++ b/.serena/memories/codebase_structure.md @@ -1,6 +1,7 @@ # Demos Network Node Software - Codebase Structure ## Root Directory Structure + ``` / ├── src/ # Main source code @@ -16,6 +17,7 @@ ``` ## Source Code Structure (`src/`) + ``` src/ ├── index.ts # Main application entry point @@ -52,22 +54,26 @@ src/ ## Key Architecture Patterns ### Feature-Based Organization + - Each major feature has its own directory under `src/features/` - Features are self-contained with their own models, services, and utilities - Cross-feature communication through well-defined interfaces ### Core Library Structure + - `libs/network/`: RPC server, API endpoints, networking protocols - `libs/blockchain/`: Genesis block management, chain operations - `libs/peer/`: P2P networking, peer discovery, connection management - `libs/utils/`: Shared utilities like time calibration, cryptographic operations ### Database Layer + - TypeORM-based models in `src/model/` - Migration files in `src/migrations/` - Connection configuration in `src/model/datasource.ts` ### Configuration Files + - `package.json`: Dependencies and scripts - `tsconfig.json`: TypeScript configuration - `.eslintrc.cjs`: ESLint rules and naming conventions @@ -76,12 +82,14 @@ src/ - `.env.example`: Environment variable template ## Entry Points + - **Main Application**: `src/index.ts` - **Key Generation**: `src/libs/utils/keyMaker.ts` - **Backup/Restore**: 
`src/utilities/backupAndRestore.ts` ## Important Directories + - **Runtime Data**: `data/` (chain.db, logs) - **Identity Files**: `.demos_identity`, `public.key` - **Peer Configuration**: `demos_peerlist.json` -- **Environment**: `.env` file \ No newline at end of file +- **Environment**: `.env` file diff --git a/.serena/memories/development_patterns.md b/.serena/memories/development_patterns.md index fa82d991e..a63b08b11 100644 --- a/.serena/memories/development_patterns.md +++ b/.serena/memories/development_patterns.md @@ -3,6 +3,7 @@ ## Architecture Principles ### Feature-Based Architecture + - Organize code by business domain in `src/features/` - Each feature is self-contained with clear boundaries - Cross-feature communication through well-defined interfaces @@ -11,6 +12,7 @@ ### Established Patterns to Follow #### Import Patterns + ```typescript // ✅ GOOD: Use path aliases import { someUtility } from "@/utilities/someUtility" @@ -21,6 +23,7 @@ import { someUtility } from "../../../utilities/someUtility" ``` #### License Headers + ```typescript /* LICENSE @@ -35,12 +38,13 @@ KyneSys Labs: https://www.kynesys.xyz/ ``` #### TypeScript Conventions + ```typescript // ✅ GOOD: Follow naming conventions -class UserManager { } // PascalCase for classes -interface UserData { } // PascalCase, no "I" prefix -function getUserData() { } // camelCase for functions -const userName = "john" // camelCase for variables +class UserManager {} // PascalCase for classes +interface UserData {} // PascalCase, no "I" prefix +function getUserData() {} // camelCase for functions +const userName = "john" // camelCase for variables // ✅ GOOD: Use proper module exports export { default as server_rpc } from "./server_rpc" @@ -52,12 +56,14 @@ import { getSharedState } from "./utilities/sharedState" ## Development Guidelines ### Code Quality Standards + 1. **Maintainability First**: Clean, readable, well-documented code -2. **Error Handling**: Comprehensive error handling and validation +2. 
**Error Handling**: Comprehensive error handling and validation 3. **Type Safety**: Full TypeScript coverage, run lint after changes 4. **Testing**: Follow existing test patterns in `src/tests/` ### Workflow Patterns + 1. **Plan Before Coding**: Create implementation plans for complex features 2. **Phases Workflow**: Use `*_PHASES.md` files for complex feature development 3. **Incremental Development**: Focused, reviewable changes @@ -67,6 +73,7 @@ import { getSharedState } from "./utilities/sharedState" ### Integration Patterns #### SDK Integration + ```typescript // ✅ Use the published package import { SomeSDKFunction } from "@kynesyslabs/demosdk" @@ -75,19 +82,21 @@ import { SomeSDKFunction } from "@kynesyslabs/demosdk" ``` #### Database Integration (TypeORM) + ```typescript // Follow existing entity patterns @Entity() export class SomeEntity { @PrimaryGeneratedColumn() id: number - + @Column() name: string } ``` #### Network Layer Integration + ```typescript // Use established server patterns from src/libs/network/ import { server_rpc } from "@/libs/network" @@ -96,22 +105,26 @@ import { server_rpc } from "@/libs/network" ## Project-Specific Conventions ### Demos Network Terminology + - **XM/Crosschain**: Multichain capabilities (interchangeable terms) - **GCR**: Always refers to GCRv2 methods unless specified - **Consensus**: Always refers to PoRBFTv2 when present - **SDK/demosdk**: Refers to `@kynesyslabs/demosdk` package ### Special Branch Considerations + - **native_bridges branch**: Reference `./bridges_docs/` for status - **SDK imports**: Sometimes import from `../sdks/build` with `// FIXME` comment ### File Creation Guidelines + - **NEVER create files unless absolutely necessary** -- **ALWAYS prefer editing existing files** +- **ALWAYS prefer editing existing files** - **NEVER proactively create documentation** unless explicitly requested - **Use feature-based organization** for new modules ### Review and Documentation + ```typescript // REVIEW: New 
authentication middleware implementation export class AuthMiddleware { @@ -122,27 +135,32 @@ export class AuthMiddleware { ## Best Practices ### Error Messages + - Provide clear, actionable error messages - Include context for debugging - Use consistent error formatting ### Naming Conventions + - Use descriptive names expressing intent - Follow TypeScript/JavaScript conventions - Maintain consistency with existing codebase ### Documentation Standards + - JSDoc for all new methods and functions - Inline comments for complex logic - Document architectural decisions ### Performance Considerations + - Consider resource usage and optimization - Follow established patterns for database queries - Use appropriate data structures and algorithms ## Testing Strategy + - **NEVER start the node directly** during testing - Use `bun run lint:fix` for syntax validation - Follow existing test patterns in `src/tests/` -- Manual testing only in controlled environments \ No newline at end of file +- Manual testing only in controlled environments diff --git a/.serena/memories/devnet_docker_setup.md b/.serena/memories/devnet_docker_setup.md index 943843b62..0f52eb34f 100644 --- a/.serena/memories/devnet_docker_setup.md +++ b/.serena/memories/devnet_docker_setup.md @@ -1,14 +1,17 @@ # Devnet Docker Compose Setup ## Overview + A Docker Compose setup for running 4 Demos Network nodes locally, replacing the need for 4 VPSes during development. ## Location + `/devnet/` directory in the main repository. 
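A sketch of what `scripts/generate-peerlist.sh` might emit for the 4-node devnet described above, assuming the Docker hostnames `node-1`…`node-4` and the RPC ports from the port-mapping table (53551–53554); the exact JSON shape of `demos_peerlist.json` is an assumption, not the real file format:

```typescript
// Hypothetical shape of a demos_peerlist.json entry for the 4-node devnet.
// Only the hostnames and port numbers come from the setup docs; the field
// names here are illustrative.
interface DevnetPeer {
    name: string
    url: string
}

function buildDevnetPeerlist(nodeCount: number, basePort: number): DevnetPeer[] {
    return Array.from({ length: nodeCount }, (_, i) => ({
        name: `node-${i + 1}`,
        url: `http://node-${i + 1}:${basePort + i}`,
    }))
}

const peers = buildDevnetPeerlist(4, 53551)
```

This matches the `EXPOSED_URL` convention (`http://node-1:53551`) used for peer discovery inside the Compose network.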
## Key Components ### Files + - `docker-compose.yml` - Orchestrates postgres + 4 nodes - `Dockerfile` - Bun-based image with native module support - `run-devnet` - Simplified node runner (no git, bun install, postgres management) @@ -18,7 +21,9 @@ A Docker Compose setup for running 4 Demos Network nodes locally, replacing the - `scripts/generate-peerlist.sh` - Creates demos_peerlist.json with Docker hostnames ### Environment Variables for Nodes + Each node requires: + - `PG_HOST` - PostgreSQL hostname (default: postgres) - `PG_PORT` - PostgreSQL port (default: 5432) - `PG_USER`, `PG_PASSWORD`, `PG_DATABASE` @@ -27,23 +32,27 @@ Each node requires: - `EXPOSED_URL` - Self URL for peer discovery (e.g., `http://node-1:53551`) ### Port Mapping + | Node | RPC Port | Omni Port | -|--------|----------|-----------| +| ------ | -------- | --------- | | node-1 | 53551 | 53561 | | node-2 | 53552 | 53562 | | node-3 | 53553 | 53563 | | node-4 | 53554 | 53564 | ## Build Optimization + - Uses BuildKit: `DOCKER_BUILDKIT=1 docker-compose build` - Layer caching: package.json copied first, deps installed, then rest - Native modules: `bufferutil`, `utf-8-validate` compiled with build-essential + python3-setuptools ## Related Changes + - `src/model/datasource.ts` - Added env var support for external DB - `./run` - Added `--external-db` / `-e` flag ## Usage + ```bash cd devnet ./scripts/setup.sh # One-time setup diff --git a/.serena/memories/feature_storage_programs_plan.md b/.serena/memories/feature_storage_programs_plan.md index 4e8cff1a7..8b5189b7f 100644 --- a/.serena/memories/feature_storage_programs_plan.md +++ b/.serena/memories/feature_storage_programs_plan.md @@ -1,33 +1,41 @@ # StoragePrograms Feature Plan ## Summary + Unified storage solution for Demos Network supporting both JSON (structured) and Binary (raw) data with robust ACL and size-based pricing. ## Design Decision + Single unified StorageProgram with `encoding: "json" | "binary"` parameter. 
Both encodings share identical features. ## Core Specifications ### Limits & Pricing + - **Max Size**: 1MB (1,048,576 bytes) for both encodings - **Pricing**: 1 DEM per 10KB (minimum 1 DEM) - **JSON Nesting**: Max 64 levels depth ### Access Control (ACL) + ```typescript interface StorageProgramACL { mode: "owner" | "public" | "restricted" - owner: string // Always has full access - allowed?: string[] // Explicitly allowed addresses - blacklisted?: string[] // Blocked (highest priority) - groups?: Record + owner: string // Always has full access + allowed?: string[] // Explicitly allowed addresses + blacklisted?: string[] // Blocked (highest priority) + groups?: Record< + string, + { + members: string[] + permissions: ("read" | "write" | "delete")[] + } + > } ``` **ACL Resolution Priority**: + 1. Owner → FULL ACCESS (always) 2. Blacklisted → DENIED (even if in allowed/groups) 3. Allowed → permissions granted @@ -35,6 +43,7 @@ interface StorageProgramACL { 5. Mode fallback: owner/restricted → DENIED, public → READ only ### Operations + - CREATE_STORAGE_PROGRAM - WRITE_STORAGE - READ_STORAGE @@ -42,6 +51,7 @@ interface StorageProgramACL { - DELETE_STORAGE_PROGRAM ### Storage + - **Location**: On-chain (PostgreSQL) initially - **IPFS**: Stubs ready for future hybrid storage - **Retention**: Permanent, owner/ACL-deletable only @@ -50,15 +60,18 @@ interface StorageProgramACL { ## Key Files ### SDK (../sdks) + - `src/types/blockchain/TransactionSubtypes/StorageProgramTransaction.ts` - Types - `src/storage/StorageProgram.ts` - Main class ### Node + - `src/model/entities/GCRv2/GCR_StorageProgram.ts` - Entity (new) - `src/libs/blockchain/gcr/handleGCR.ts` - Handler implementation - Confirm flow validation in transaction handlers ## Database Schema + ```sql CREATE TABLE gcr_storage_programs ( "storageAddress" TEXT PRIMARY KEY, @@ -81,12 +94,14 @@ CREATE TABLE gcr_storage_programs ( ``` ## Implementation Guidelines + - **Elegant**: Clean, readable code following existing 
patterns - **Maintainable**: Well-documented, consistent with codebase style - **No overengineering**: Simple solutions, YAGNI principle - **Use existing patterns**: Follow TLSNotary, IPFS handler patterns ## Related + - feature_ipfs_transactions (similar pricing model) - arch_gcr_entities (entity patterns) - Legacy StorageTransaction.ts (retrocompat) @@ -94,6 +109,7 @@ CREATE TABLE gcr_storage_programs ( ## SDK Workflow Reminder **CRITICAL**: After ANY changes to `../sdks`: + 1. Run `bun run build` in ../sdks 2. Commit changes 3. Push to remote @@ -102,4 +118,5 @@ CREATE TABLE gcr_storage_programs ( This ensures the node can use the updated SDK types. ## Last Updated + 2026-01-13 - Initial planning document diff --git a/.serena/memories/omniprotocol_complete_2025_11_11.md b/.serena/memories/omniprotocol_complete_2025_11_11.md index 218c8a1ae..5afeeab68 100644 --- a/.serena/memories/omniprotocol_complete_2025_11_11.md +++ b/.serena/memories/omniprotocol_complete_2025_11_11.md @@ -24,6 +24,7 @@ OmniProtocol replaces HTTP JSON-RPC with a **custom binary TCP protocol** for no ## Architecture Overview ### Message Format + ``` [12-byte header] + [optional auth block] + [payload] + [4-byte CRC32] @@ -34,6 +35,7 @@ Checksum: CRC32 validation ``` ### Connection Flow + ``` Client Server | | @@ -62,6 +64,7 @@ Client Server ### ✅ 100% Complete Components #### 1. Authentication System + - **Ed25519 signature verification** using @noble/ed25519 - **Timestamp-based replay protection** (±5 minute window) - **5 signature modes** (SIGN_PUBKEY, SIGN_MESSAGE_ID, SIGN_FULL_PAYLOAD, etc.) @@ -70,11 +73,13 @@ Client Server - **Automatic verification** in dispatcher middleware **Files**: + - `src/libs/omniprotocol/auth/types.ts` (90 lines) - `src/libs/omniprotocol/auth/parser.ts` (120 lines) - `src/libs/omniprotocol/auth/verifier.ts` (150 lines) #### 2. 
TCP Server Infrastructure + - **OmniProtocolServer** - Main TCP listener with event-driven architecture - **ServerConnectionManager** - Connection lifecycle management - **InboundConnection** - Per-connection handler with state machine @@ -84,11 +89,13 @@ Client Server - **Graceful startup and shutdown** **Files**: + - `src/libs/omniprotocol/server/OmniProtocolServer.ts` (220 lines) - `src/libs/omniprotocol/server/ServerConnectionManager.ts` (180 lines) - `src/libs/omniprotocol/server/InboundConnection.ts` (260 lines) #### 3. TLS/SSL Encryption + - **Certificate generation** using openssl (self-signed) - **Certificate validation** and expiry checking - **TLSServer** - TLS-wrapped TCP server @@ -99,6 +106,7 @@ Client Server - **Connection factory** for tcp:// vs tls:// routing **Files**: + - `src/libs/omniprotocol/tls/types.ts` (70 lines) - `src/libs/omniprotocol/tls/certificates.ts` (210 lines) - `src/libs/omniprotocol/tls/initialize.ts` (95 lines) @@ -107,6 +115,7 @@ Client Server - `src/libs/omniprotocol/transport/ConnectionFactory.ts` (60 lines) #### 4. Rate Limiting (DoS Protection) + - **Per-IP connection limits** (default: 10 concurrent) - **Per-IP request rate limits** (default: 100 req/s) - **Per-identity request rate limits** (default: 200 req/s) @@ -117,10 +126,12 @@ Client Server - **Integrated into both TCP and TLS servers** **Files**: + - `src/libs/omniprotocol/ratelimit/types.ts` (90 lines) - `src/libs/omniprotocol/ratelimit/RateLimiter.ts` (380 lines) #### 5. 
Message Framing & Transport + - **MessageFramer** - Parse TCP stream into messages - **PeerConnection** - Client-side connection with state machine - **ConnectionPool** - Pool of persistent connections @@ -129,12 +140,14 @@ Client Server - **Automatic reconnection** and error handling **Files**: + - `src/libs/omniprotocol/transport/MessageFramer.ts` (215 lines) - `src/libs/omniprotocol/transport/PeerConnection.ts` (338 lines) - `src/libs/omniprotocol/transport/ConnectionPool.ts` (301 lines) - `src/libs/omniprotocol/transport/types.ts` (162 lines) #### 6. Node Integration + - **Key management** - Integration with getSharedState keypair - **Startup integration** - Server wired into src/index.ts - **Environment variable configuration** @@ -142,6 +155,7 @@ Client Server - **PeerOmniAdapter** - Automatic authentication and HTTP fallback **Files**: + - `src/libs/omniprotocol/integration/keys.ts` (80 lines) - `src/libs/omniprotocol/integration/startup.ts` (180 lines) - `src/libs/omniprotocol/integration/peerAdapter.ts` (modified) @@ -152,22 +166,26 @@ Client Server ### ❌ Not Implemented (10% remaining) #### 1. Testing (0% - CRITICAL GAP) + - ❌ Unit tests (auth, framing, server, TLS, rate limiting) - ❌ Integration tests (client-server roundtrip) - ❌ Load tests (1000+ concurrent connections) #### 2. Metrics & Monitoring + - ❌ Prometheus integration - ❌ Latency tracking - ❌ Throughput monitoring - ⚠️ Basic stats available via getStats() #### 3. Post-Quantum Cryptography (Optional) + - ❌ Falcon signature verification - ❌ ML-DSA signature verification - ⚠️ Only Ed25519 supported #### 4. 
Advanced Features (Optional) + - ❌ Push messages (server-initiated) - ❌ Multiplexing (multiple requests per connection) - ❌ Protocol versioning @@ -177,12 +195,14 @@ Client Server ## Environment Variables ### TCP Server + ```bash OMNI_ENABLED=false # Enable OmniProtocol server OMNI_PORT=3001 # Server port (default: HTTP port + 1) ``` ### TLS/SSL Encryption + ```bash OMNI_TLS_ENABLED=false # Enable TLS OMNI_TLS_MODE=self-signed # self-signed or ca @@ -193,6 +213,7 @@ OMNI_TLS_MIN_VERSION=TLSv1.3 # TLSv1.2 or TLSv1.3 ``` ### Rate Limiting + ```bash OMNI_RATE_LIMIT_ENABLED=true # Default: true OMNI_MAX_CONNECTIONS_PER_IP=10 # Max concurrent per IP @@ -205,16 +226,19 @@ OMNI_MAX_REQUESTS_PER_SECOND_PER_IDENTITY=200 # Max req/s per identity ## Performance Characteristics ### Message Overhead + - **HTTP JSON**: ~500-800 bytes minimum (headers + envelope) - **OmniProtocol**: 12-110 bytes minimum (header + optional auth + checksum) - **Savings**: 60-97% overhead reduction ### Connection Performance + - **HTTP**: New TCP connection per request (~40-120ms handshake) - **OmniProtocol**: Persistent connection (~10-30ms after initial) - **Improvement**: 70-90% latency reduction for subsequent requests ### Scalability Targets + - **1,000 peers**: ~400-800 KB memory - **10,000 peers**: ~4-8 MB memory - **Throughput**: 10,000+ requests/second @@ -224,6 +248,7 @@ OMNI_MAX_REQUESTS_PER_SECOND_PER_IDENTITY=200 # Max req/s per identity ## Security Features ### ✅ Implemented + - Ed25519 signature verification - Timestamp-based replay protection (±5 minutes) - Per-handler authentication requirements @@ -238,6 +263,7 @@ OMNI_MAX_REQUESTS_PER_SECOND_PER_IDENTITY=200 # Max req/s per identity - CRC32 checksum validation ### ⚠️ Gaps + - No nonce tracking (optional additional replay protection) - No comprehensive security audit - No automated testing @@ -253,6 +279,7 @@ OMNI_MAX_REQUESTS_PER_SECOND_PER_IDENTITY=200 # Max req/s per identity **Documentation**: ~8,000 lines ### File Breakdown + 
- Authentication: 360 lines (3 files) - TCP Server: 660 lines (3 files) - TLS/SSL: 970 lines (6 files) @@ -281,48 +308,52 @@ All commits on branch: `claude/custom-tcp-protocol-011CV1uA6TQDiV9Picft86Y5` ## Next Steps ### P0 - Critical (Before Mainnet) + 1. **Testing Infrastructure** - - Unit tests for all components - - Integration tests (localhost client-server) - - Load tests (1000+ concurrent connections with rate limiting) + - Unit tests for all components + - Integration tests (localhost client-server) + - Load tests (1000+ concurrent connections with rate limiting) 2. **Security Audit** - - Professional security review - - Penetration testing - - Code audit + - Professional security review + - Penetration testing + - Code audit 3. **Monitoring & Observability** - - Prometheus metrics integration - - Latency/throughput tracking - - Error rate monitoring + - Prometheus metrics integration + - Latency/throughput tracking + - Error rate monitoring ### P1 - Important + 4. **Operational Documentation** - - Operator runbook - - Deployment guide - - Troubleshooting guide - - Performance tuning guide + - Operator runbook + - Deployment guide + - Troubleshooting guide + - Performance tuning guide 5. **Connection Health** - - Heartbeat mechanism - - Health check endpoints - - Dead connection detection + - Heartbeat mechanism + - Health check endpoints + - Dead connection detection ### P2 - Optional + 6. **Post-Quantum Cryptography** - - Falcon library integration - - ML-DSA library integration + - Falcon library integration + - ML-DSA library integration 7. 
**Advanced Features** - - Push messages (server-initiated) - - Protocol versioning - - Connection multiplexing enhancements + - Push messages (server-initiated) + - Protocol versioning + - Connection multiplexing enhancements --- ## Deployment Recommendations ### For Controlled Deployment (Now) + ```bash OMNI_ENABLED=true OMNI_TLS_ENABLED=true # Recommended @@ -330,11 +361,13 @@ OMNI_RATE_LIMIT_ENABLED=true # Default, recommended ``` **Use with**: + - Trusted peer networks - Internal testing environments - Controlled rollout to subset of peers ### For Mainnet Deployment (After Testing) + - ✅ Complete comprehensive testing - ✅ Conduct security audit - ✅ Add Prometheus monitoring @@ -347,15 +380,18 @@ OMNI_RATE_LIMIT_ENABLED=true # Default, recommended ## Documentation Files **Specifications**: + - `OmniProtocol/08_TCP_SERVER_IMPLEMENTATION.md` (1,238 lines) - `OmniProtocol/09_AUTHENTICATION_IMPLEMENTATION.md` (800+ lines) - `OmniProtocol/10_TLS_IMPLEMENTATION_PLAN.md` (383 lines) **Guides**: + - `OMNIPROTOCOL_SETUP.md` (Setup guide) - `OMNIPROTOCOL_TLS_GUIDE.md` (TLS usage guide, 455 lines) **Status Tracking**: + - `src/libs/omniprotocol/IMPLEMENTATION_STATUS.md` (Updated 2025-11-11) - `OmniProtocol/IMPLEMENTATION_SUMMARY.md` (Updated 2025-11-11) @@ -364,22 +400,23 @@ OMNI_RATE_LIMIT_ENABLED=true # Default, recommended ## Known Limitations 1. **JSON Payloads**: Still using JSON envelopes for payload encoding (hybrid format) - - Future: Full binary encoding for 60-70% additional bandwidth savings + - Future: Full binary encoding for 60-70% additional bandwidth savings 2. **Single Connection per Peer**: Default max 1 connection per peer - - Future: Multiple connections for high-traffic peers + - Future: Multiple connections for high-traffic peers 3. **No Push Messages**: Only request-response pattern supported - - Future: Server-initiated push notifications + - Future: Server-initiated push notifications 4. 
**Limited Observability**: Only basic stats available - - Future: Prometheus metrics, detailed latency tracking + - Future: Prometheus metrics, detailed latency tracking --- ## Success Metrics **Current Achievement**: + - ✅ 90% production-ready - ✅ All critical security features implemented - ✅ DoS protection via rate limiting @@ -388,6 +425,7 @@ OMNI_RATE_LIMIT_ENABLED=true # Default, recommended - ✅ Integrated into node startup **Production Readiness Criteria**: + - [ ] 100% test coverage for critical paths - [ ] Security audit completed - [ ] Load tested with 1000+ connections diff --git a/.serena/memories/omniprotocol_session_2025-12-01.md b/.serena/memories/omniprotocol_session_2025-12-01.md index cd0e5ddb6..a06a3a5c2 100644 --- a/.serena/memories/omniprotocol_session_2025-12-01.md +++ b/.serena/memories/omniprotocol_session_2025-12-01.md @@ -1,48 +1,58 @@ # OmniProtocol Session - December 1, 2025 ## Session Summary + Continued work on OmniProtocol integration, fixing authentication and message routing issues. ## Key Fixes Implemented ### 1. Authentication Fix (c1f642a3) + - **Problem**: Server only extracted peerIdentity after `hello_peer` (opcode 0x01) - **Impact**: NODE_CALL messages with valid auth blocks had `peerIdentity=null` - **Solution**: Extract peerIdentity from auth block for ANY authenticated message at top of `handleMessage()` ### 2. Mempool Routing Fix (59ffd328) + - **Problem**: `mempool` is a top-level RPC method, not a nodeCall message - **Impact**: Mempool merge requests got "Unknown message" error - **Solution**: Added routing in `handleNodeCall` to detect `method === "mempool"` and route to `ServerHandlers.handleMempool()` ### 3. 
Identity Format Fix (1fe432fd) + - **Problem**: OmniProtocol used `Buffer.toString("hex")` without `0x` prefix - **Impact**: PeerManager couldn't find peers (expects `0x` prefix) - **Solution**: Added `0x` prefix in `InboundConnection.ts` and `verifier.ts` ## Architecture Verification + All peer-to-peer communication now uses OmniProtocol TCP binary transport: + - `peer.call()` → `omniAdapter.adaptCall()` → TCP - `peer.longCall()` → internal `this.call()` → TCP - `consensus_routine` → NODE_CALL opcode → TCP - `mempool` merge → NODE_CALL opcode → TCP HTTP fallback only triggers on: + - OmniProtocol disabled - Node keys unavailable - TCP connection failure ## Commits This Session + 1. `1fe432fd` - Fix 0x prefix for peer identity 2. `c1f642a3` - Authenticate on ANY message with valid auth block 3. `59ffd328` - Route mempool RPC method to ServerHandlers ## Pending Work + - Test transactions with OmniProtocol (XM, native, DAHR) - Consider dedicated opcodes for frequently used methods - Clean up debug logging before production ## Key Files Modified + - `src/libs/omniprotocol/server/InboundConnection.ts` - `src/libs/omniprotocol/protocol/handlers/control.ts` - `src/libs/omniprotocol/auth/verifier.ts` diff --git a/.serena/memories/omniprotocol_wave8.1_complete.md b/.serena/memories/omniprotocol_wave8.1_complete.md index 598cb9207..3cbeff1f8 100644 --- a/.serena/memories/omniprotocol_wave8.1_complete.md +++ b/.serena/memories/omniprotocol_wave8.1_complete.md @@ -11,9 +11,11 @@ Wave 8.1 successfully implements **persistent TCP transport** to replace HTTP JS ## Components Implemented ### 1. 
MessageFramer.ts (215 lines) + **Purpose**: Parse TCP byte stream into complete OmniProtocol messages **Features**: + - Buffer accumulation from TCP socket - 12-byte header parsing: `[version:2][opcode:1][flags:1][payloadLength:4][sequence:4]` - CRC32 checksum validation @@ -23,9 +25,11 @@ Wave 8.1 successfully implements **persistent TCP transport** to replace HTTP JS **Location**: `src/libs/omniprotocol/transport/MessageFramer.ts` ### 2. PeerConnection.ts (338 lines) + **Purpose**: Wrap TCP socket with state machine and request tracking **Features**: + - Connection state machine: UNINITIALIZED → CONNECTING → AUTHENTICATING → READY → IDLE_PENDING → CLOSING → CLOSED - Request-response correlation via sequence IDs - In-flight request tracking with timeout @@ -36,9 +40,11 @@ Wave 8.1 successfully implements **persistent TCP transport** to replace HTTP JS **Location**: `src/libs/omniprotocol/transport/PeerConnection.ts` ### 3. ConnectionPool.ts (301 lines) + **Purpose**: Manage pool of persistent TCP connections **Features**: + - Per-peer connection pooling (max 1 connection per peer by default) - Global connection limit (max 100 total by default) - Lazy connection creation (create on first use) @@ -50,9 +56,11 @@ Wave 8.1 successfully implements **persistent TCP transport** to replace HTTP JS **Location**: `src/libs/omniprotocol/transport/ConnectionPool.ts` ### 4. types.ts (162 lines) + **Purpose**: Shared type definitions for transport layer **Key Types**: + - `ConnectionState`: State machine states - `ConnectionOptions`: Timeout, retries, priority - `PendingRequest`: Request tracking structure @@ -64,7 +72,9 @@ Wave 8.1 successfully implements **persistent TCP transport** to replace HTTP JS **Location**: `src/libs/omniprotocol/transport/types.ts` ### 5. 
peerAdapter.ts Integration + **Changes**: + - Added `ConnectionPool` initialization in constructor - Replaced HTTP placeholder in `adaptCall()` with TCP transport - Added `httpToTcpConnectionString()` converter @@ -74,7 +84,9 @@ Wave 8.1 successfully implements **persistent TCP transport** to replace HTTP JS **Location**: `src/libs/omniprotocol/integration/peerAdapter.ts` ### 6. Configuration Updates + **Added to ConnectionPoolConfig**: + - `maxTotalConnections: 100` - Global TCP connection limit **Location**: `src/libs/omniprotocol/types/config.ts` @@ -82,10 +94,11 @@ Wave 8.1 successfully implements **persistent TCP transport** to replace HTTP JS ## Architecture Transformation ### Before (Wave 7.x - HTTP Transport) + ``` peerAdapter.adaptCall() ↓ -peer.call() +peer.call() ↓ axios.post(url, json_payload) ↓ @@ -95,6 +108,7 @@ One TCP connection per request (closed after response) ``` ### After (Wave 8.1 - TCP Transport) + ``` peerAdapter.adaptCall() ↓ @@ -116,16 +130,19 @@ Correlate response via sequence ID ## Performance Benefits ### Connection Efficiency + - **Persistent connections**: Reuse TCP connections across requests (no 3-way handshake overhead) - **Connection pooling**: Efficient resource management - **Multiplexing**: Single TCP connection handles multiple concurrent requests via sequence IDs ### Protocol Efficiency + - **Binary framing**: Fixed-size header vs HTTP text headers - **Direct socket I/O**: No HTTP layer overhead - **CRC32 validation**: Integrity checking at protocol level ### Resource Management + - **Configurable limits**: Global and per-peer connection limits - **Idle cleanup**: Automatic cleanup of unused connections after 10 minutes - **Health monitoring**: Pool statistics for observability @@ -133,11 +150,13 @@ Correlate response via sequence ID ## Current Encoding (Wave 8.1) **Still using JSON payloads** in hybrid format: + - Header: Binary (12 bytes) - Payload: JSON envelope (length-prefixed) - Checksum: Binary (4 bytes CRC32) **Wave 
8.2 will replace** JSON with full binary encoding for: + - Request/response payloads - Complex data structures - All handler communication @@ -145,20 +164,22 @@ Correlate response via sequence ID ## Migration Configuration ### Current Default (HTTP Only) + ```typescript DEFAULT_OMNIPROTOCOL_CONFIG = { migration: { - mode: "HTTP_ONLY", // ← TCP transport NOT used + mode: "HTTP_ONLY", // ← TCP transport NOT used omniPeers: new Set(), autoDetect: true, fallbackTimeout: 1000, - } + }, } ``` ### To Enable TCP Transport **Option 1: Global Enable** + ```typescript const adapter = new PeerOmniAdapter({ config: { @@ -168,12 +189,13 @@ const adapter = new PeerOmniAdapter({ omniPeers: new Set(), autoDetect: true, fallbackTimeout: 1000, - } - } + }, + }, }) ``` **Option 2: Per-Peer Enable** + ```typescript adapter.markOmniPeer(peerIdentity) // Mark specific peer for TCP // OR @@ -181,6 +203,7 @@ adapter.markHttpPeer(peerIdentity) // Force HTTP for specific peer ``` ### Migration Modes + - `HTTP_ONLY`: Never use TCP, always HTTP (current default) - `OMNI_PREFERRED`: Try TCP first, fall back to HTTP on failure (recommended) - `OMNI_ONLY`: Force TCP only, error if TCP fails (production after testing) @@ -188,12 +211,14 @@ adapter.markHttpPeer(peerIdentity) // Force HTTP for specific peer ## Testing Status **Not yet tested** - infrastructure is complete but: + 1. No unit tests written yet 2. No integration tests written yet 3. No end-to-end testing with real nodes 4. Migration mode is HTTP_ONLY (TCP not active) **To test**: + 1. Enable `OMNI_PREFERRED` mode 2. Mark test peer with `markOmniPeer()` 3. Make RPC calls and verify TCP connection establishment @@ -225,6 +250,7 @@ adapter.markHttpPeer(peerIdentity) // Force HTTP for specific peer **Goal**: Replace JSON payloads with full binary encoding **Approach**: + 1. Implement binary encoders for common types (string, number, array, object) 2. Create request/response binary serialization 3. 
Update handlers to use binary encoding @@ -232,6 +258,7 @@ adapter.markHttpPeer(peerIdentity) // Force HTTP for specific peer 5. Maintain backward compatibility during transition **Files to Modify**: + - `src/libs/omniprotocol/serialization/` - Add binary encoders/decoders - Handler files - Update payload encoding - peerAdapter - Switch to binary encoding @@ -239,50 +266,62 @@ adapter.markHttpPeer(peerIdentity) // Force HTTP for specific peer ## Files Created/Modified ### Created + - `src/libs/omniprotocol/transport/types.ts` (162 lines) - `src/libs/omniprotocol/transport/MessageFramer.ts` (215 lines) - `src/libs/omniprotocol/transport/PeerConnection.ts` (338 lines) - `src/libs/omniprotocol/transport/ConnectionPool.ts` (301 lines) ### Modified + - `src/libs/omniprotocol/integration/peerAdapter.ts` - Added ConnectionPool integration - `src/libs/omniprotocol/types/config.ts` - Added maxTotalConnections to pool config ### Total Lines of Code + **~1,016 lines** across 4 new files + integration ## Decision Log ### Why Persistent Connections? + HTTP's connection-per-request model has significant overhead: + - TCP 3-way handshake for every request - TLS handshake for HTTPS - No request multiplexing Persistent connections eliminate this overhead and enable: + - Request-response correlation via sequence IDs - Concurrent requests on single connection - Lower latency for subsequent requests ### Why Connection Pool? + - Prevents connection exhaustion (DoS protection) - Enables resource monitoring and limits - Automatic cleanup of idle connections - Health tracking for observability ### Why Idle Timeout 10 Minutes? + Balance between: + - Connection reuse efficiency (longer is better) - Resource usage (shorter is better) - Standard practice for persistent connections ### Why Sequence IDs vs Connection IDs? + Sequence IDs enable: + - Multiple concurrent requests on same connection - Request-response correlation - Better resource utilization ### Why CRC32? 
+ - Fast computation (hardware acceleration available) - Sufficient for corruption detection - Standard in network protocols @@ -291,41 +330,51 @@ Sequence IDs enable: ## Potential Issues & Mitigations ### Issue: TCP Connection Failures + **Mitigation**: Automatic fallback to HTTP on TCP failure, automatic peer marking ### Issue: Resource Exhaustion + **Mitigation**: Connection pool limits (global and per-peer), idle cleanup ### Issue: Request Timeout + **Mitigation**: Per-request timeout configuration, automatic cleanup of timed-out requests ### Issue: Connection State Management + **Mitigation**: Clear state machine with documented transitions, error state handling ### Issue: Partial Message Handling + **Mitigation**: MessageFramer buffer accumulation, wait for complete messages ## Performance Targets ### Connection Establishment + - Target: <100ms for local connections - Target: <500ms for remote connections ### Request-Response Latency + - Target: <10ms overhead for connection reuse - Target: <100ms for first request (includes connection establishment) ### Connection Pool Efficiency + - Target: >90% connection reuse rate - Target: <1% connection pool capacity usage under normal load ### Resource Usage + - Target: <1MB memory per connection - Target: <100 open connections under normal load ## Monitoring Recommendations ### Metrics to Track + - Connection establishment time - Connection reuse rate - Pool capacity usage @@ -336,10 +385,12 @@ Sequence IDs enable: - TCP vs HTTP request distribution ### Alerts to Configure + - Pool capacity >80% - Connection timeout rate >5% - Fallback rate >10% - Average latency >100ms ## Wave 8.1 Completion Date + **2025-11-02** diff --git a/.serena/memories/omniprotocol_wave8_tcp_physical_layer.md b/.serena/memories/omniprotocol_wave8_tcp_physical_layer.md index bd14ceb0e..12b47cd1e 100644 --- a/.serena/memories/omniprotocol_wave8_tcp_physical_layer.md +++ b/.serena/memories/omniprotocol_wave8_tcp_physical_layer.md @@ -9,7 +9,9 @@ 
## Current State Analysis ### What We Have (Wave 7.1-7.4 Complete) + ✅ **40 Binary Handlers Implemented**: + - Control & Infrastructure: 5 opcodes (0x03-0x07) - Data Sync: 8 opcodes (0x20-0x28) - Protocol Meta: 5 opcodes (0xF0-0xF4) @@ -18,6 +20,7 @@ - Transactions: 5 opcodes (0x10-0x12, 0x15-0x16) ✅ **Architecture Components**: + - Complete opcode registry with typed handlers - JSON envelope serialization (intermediate format) - Binary message header structures defined @@ -25,6 +28,7 @@ - Feature flags and migration modes configured ❌ **What We're Missing**: + - TCP socket transport layer - Connection pooling and lifecycle management - Full binary payload encoding (still using JSON envelopes) @@ -32,12 +36,14 @@ - Connection state machine implementation ### What We're Currently Using + ``` Handler → JSON Envelope → HTTP Transport (Wave 7.x) (peerAdapter.ts:78-81) ``` ### What Wave 8 Will Build + ``` Handler → Binary Encoding → TCP Transport (new encoders) (new ConnectionPool) @@ -46,33 +52,36 @@ Handler → Binary Encoding → TCP Transport ## Wave 8 Implementation Plan ### Wave 8.1: TCP Connection Infrastructure (Foundation) + **Duration**: 3-5 days **Priority**: CRITICAL - Core transport layer #### Deliverables + 1. **ConnectionPool Class** (`src/libs/omniprotocol/transport/ConnectionPool.ts`) - - Per-peer connection management - - Connection state machine (UNINITIALIZED → CONNECTING → AUTHENTICATING → READY → IDLE → CLOSED) - - Idle timeout handling (10 minutes) - - Connection limits (1000 total, 1 per peer initially) - - LRU eviction when at capacity + - Per-peer connection management + - Connection state machine (UNINITIALIZED → CONNECTING → AUTHENTICATING → READY → IDLE → CLOSED) + - Idle timeout handling (10 minutes) + - Connection limits (1000 total, 1 per peer initially) + - LRU eviction when at capacity 2. 
**PeerConnection Class** (`src/libs/omniprotocol/transport/PeerConnection.ts`) - - TCP socket wrapper with Node.js `net` module - - Connection lifecycle (connect, authenticate, ready, close) - - Message ID generation and tracking - - Request-response correlation (Map) - - Idle timer management - - Graceful shutdown with proto_disconnect (0xF4) + - TCP socket wrapper with Node.js `net` module + - Connection lifecycle (connect, authenticate, ready, close) + - Message ID generation and tracking + - Request-response correlation (Map) + - Idle timer management + - Graceful shutdown with proto_disconnect (0xF4) 3. **Message Framing** (`src/libs/omniprotocol/transport/MessageFramer.ts`) - - TCP stream → complete messages parsing - - Buffer accumulation and boundary detection - - Header parsing (12-byte: version, opcode, sequence, payloadLength) - - Checksum validation - - Partial message buffering + - TCP stream → complete messages parsing + - Buffer accumulation and boundary detection + - Header parsing (12-byte: version, opcode, sequence, payloadLength) + - Checksum validation + - Partial message buffering #### Key Technical Decisions + - **One Connection Per Peer**: Sufficient for current traffic patterns, can scale later - **TCP_NODELAY**: Disabled (Nagle's algorithm) for low latency - **SO_KEEPALIVE**: Enabled with 60s interval @@ -81,6 +90,7 @@ Handler → Binary Encoding → TCP Transport - **Idle Timeout**: 10 minutes #### Integration Points + ```typescript // peerAdapter.ts will use ConnectionPool instead of HTTP async adaptCall(peer: Peer, request: RPCRequest): Promise { @@ -97,6 +107,7 @@ async adaptCall(peer: Peer, request: RPCRequest): Promise { ``` #### Tests + - Connection establishment and authentication flow - Message send/receive round-trip - Timeout handling (connect, auth, request) @@ -106,10 +117,12 @@ async adaptCall(peer: Peer, request: RPCRequest): Promise { - Connection pool limits and LRU eviction ### Wave 8.2: Binary Payload Encoding (Performance) + 
**Duration**: 4-6 days **Priority**: HIGH - Bandwidth savings #### Current JSON Envelope Format + ```typescript // From jsonEnvelope.ts export function encodeJsonRequest(payload: unknown): Buffer { @@ -120,50 +133,54 @@ export function encodeJsonRequest(payload: unknown): Buffer { ``` #### Target Binary Format (from 05_PAYLOAD_STRUCTURES.md) + ```typescript // Example: Transaction structure interface BinaryTransaction { - hash: Buffer // 32 bytes fixed - type: number // 1 byte - from: Buffer // 32 bytes (address) - to: Buffer // 32 bytes (address) - amount: bigint // 8 bytes (uint64) - nonce: bigint // 8 bytes - timestamp: bigint // 8 bytes - fees: bigint // 8 bytes - signature: Buffer // length-prefixed - data: Buffer[] // count-prefixed array - gcrEdits: Buffer[] // count-prefixed array - raw: Buffer // length-prefixed + hash: Buffer // 32 bytes fixed + type: number // 1 byte + from: Buffer // 32 bytes (address) + to: Buffer // 32 bytes (address) + amount: bigint // 8 bytes (uint64) + nonce: bigint // 8 bytes + timestamp: bigint // 8 bytes + fees: bigint // 8 bytes + signature: Buffer // length-prefixed + data: Buffer[] // count-prefixed array + gcrEdits: Buffer[] // count-prefixed array + raw: Buffer // length-prefixed } ``` #### Deliverables + 1. **Binary Encoders** (`src/libs/omniprotocol/serialization/`) - - Update existing `transaction.ts` to use full binary encoding - - Update `gcr.ts` beyond just addressInfo - - Update `consensus.ts` for remaining consensus types - - Update `sync.ts` for block/mempool/peerlist structures - - Keep `primitives.ts` as foundation (already exists) + - Update existing `transaction.ts` to use full binary encoding + - Update `gcr.ts` beyond just addressInfo + - Update `consensus.ts` for remaining consensus types + - Update `sync.ts` for block/mempool/peerlist structures + - Keep `primitives.ts` as foundation (already exists) 2. 
**Encoder Registry Pattern** - ```typescript - // Map opcode → binary encoder/decoder - interface PayloadCodec { - encode(data: T): Buffer - decode(buffer: Buffer): T - } - - const PAYLOAD_CODECS = new Map>() - ``` + + ```typescript + // Map opcode → binary encoder/decoder + interface PayloadCodec { + encode(data: T): Buffer + decode(buffer: Buffer): T + } + + const PAYLOAD_CODECS = new Map>() + ``` 3. **Gradual Migration Strategy** - - Phase 1: Keep JSON envelope for complex structures (GCR edits, bridge trades) - - Phase 2: Binary encode simple structures (addresses, hashes, numbers) - - Phase 3: Full binary encoding for all payloads - - Always maintain decoder parity with encoder + - Phase 1: Keep JSON envelope for complex structures (GCR edits, bridge trades) + - Phase 2: Binary encode simple structures (addresses, hashes, numbers) + - Phase 3: Full binary encoding for all payloads + - Always maintain decoder parity with encoder #### Bandwidth Savings Analysis + ``` Current (JSON envelope): Simple request (getPeerInfo): ~120 bytes @@ -177,6 +194,7 @@ Target (Binary): ``` #### Tests + - Round-trip encoding/decoding for all opcodes - Edge cases (empty arrays, max values, unicode strings) - Backward compatibility (can still decode JSON envelopes) @@ -184,10 +202,12 @@ Target (Binary): - Malformed data handling ### Wave 8.3: Timeout & Retry Enhancement (Reliability) + **Duration**: 2-3 days **Priority**: MEDIUM - Better than HTTP's fixed delays #### Current HTTP Behavior (from Peer.ts) + ```typescript // Fixed retry logic async longCall(request, isAuthenticated, sleepTime = 250, retries = 3) { @@ -202,34 +222,37 @@ async longCall(request, isAuthenticated, sleepTime = 250, retries = 3) { ``` #### Enhanced Retry Strategy (from 04_CONNECTION_MANAGEMENT.md) + ```typescript interface RetryOptions { - maxRetries: number // Default: 3 - initialDelay: number // Default: 250ms + maxRetries: number // Default: 3 + initialDelay: number // Default: 250ms backoffMultiplier: 
number // Default: 1.0 (linear), 2.0 (exponential) - maxDelay: number // Default: 1000ms - allowedErrors: number[] // Don't retry for these status codes - retryOnTimeout: boolean // Default: true + maxDelay: number // Default: 1000ms + allowedErrors: number[] // Don't retry for these status codes + retryOnTimeout: boolean // Default: true } ``` #### Deliverables + 1. **RetryManager** (`src/libs/omniprotocol/transport/RetryManager.ts`) - - Exponential backoff support - - Per-operation timeout configuration - - Error classification (transient, degraded, fatal) + - Exponential backoff support + - Per-operation timeout configuration + - Error classification (transient, degraded, fatal) 2. **CircuitBreaker** (`src/libs/omniprotocol/transport/CircuitBreaker.ts`) - - 5 failures → OPEN state - - 30 second timeout → HALF_OPEN - - 2 successes → CLOSED - - Prevents cascading failures when peer is consistently offline + - 5 failures → OPEN state + - 30 second timeout → HALF_OPEN + - 2 successes → CLOSED + - Prevents cascading failures when peer is consistently offline 3. **TimeoutManager** (`src/libs/omniprotocol/transport/TimeoutManager.ts`) - - Adaptive timeouts based on peer latency history - - Per-operation type timeouts (consensus 1s, sync 30s, etc.) + - Adaptive timeouts based on peer latency history + - Per-operation type timeouts (consensus 1s, sync 30s, etc.) #### Integration + ```typescript // Enhanced PeerConnection.sendMessage with circuit breaker async sendMessage(opcode, payload, timeout) { @@ -243,6 +266,7 @@ async sendMessage(opcode, payload, timeout) { ``` #### Tests + - Exponential backoff timing verification - Circuit breaker state transitions - Adaptive timeout calculation from latency history @@ -250,31 +274,34 @@ async sendMessage(opcode, payload, timeout) { - Timeout vs retry interaction ### Wave 8.4: Concurrency & Resource Management (Scalability) + **Duration**: 3-4 days **Priority**: MEDIUM - Handles 1000+ peers #### Deliverables + 1. 
**Request Slot Management** (PeerConnection enhancement) - - Max 100 concurrent requests per connection - - Backpressure queue when at limit - - Slot acquisition/release pattern + - Max 100 concurrent requests per connection + - Backpressure queue when at limit + - Slot acquisition/release pattern 2. **AsyncMutex** (`src/libs/omniprotocol/transport/AsyncMutex.ts`) - - Thread-safe send operations (one message at a time per connection) - - Lock queue for waiting operations + - Thread-safe send operations (one message at a time per connection) + - Lock queue for waiting operations 3. **BufferPool** (`src/libs/omniprotocol/transport/BufferPool.ts`) - - Reusable buffers for common message sizes (256, 1K, 4K, 16K, 64K) - - Max 100 buffers per size to prevent memory bloat - - Security: Zero-fill buffers on release + - Reusable buffers for common message sizes (256, 1K, 4K, 16K, 64K) + - Max 100 buffers per size to prevent memory bloat + - Security: Zero-fill buffers on release 4. **Connection Metrics** (`src/libs/omniprotocol/transport/MetricsCollector.ts`) - - Per-peer latency tracking (p50, p95, p99) - - Error counts (connection, timeout, auth) - - Resource usage (memory, in-flight requests) - - Connection pool statistics + - Per-peer latency tracking (p50, p95, p99) + - Error counts (connection, timeout, auth) + - Resource usage (memory, in-flight requests) + - Connection pool statistics #### Memory Targets + ``` 1,000 peers: - Active connections: 50-100 (5-10% typical) @@ -289,6 +316,7 @@ async sendMessage(opcode, payload, timeout) { ``` #### Tests + - Concurrent request limiting (100 per connection) - Buffer pool acquire/release cycles - Metrics collection and calculation @@ -296,57 +324,61 @@ async sendMessage(opcode, payload, timeout) { - Connection pool scaling (simulate 1000 peers) ### Wave 8.5: Integration & Migration (Production Readiness) + **Duration**: 3-5 days **Priority**: CRITICAL - Safe rollout #### Deliverables + 1. 
**PeerAdapter Enhancement** (`src/libs/omniprotocol/integration/peerAdapter.ts`) - - Remove HTTP fallback placeholder (lines 78-81) - - Implement full TCP transport path - - Maintain dual-protocol support (HTTP + TCP based on connection string) + - Remove HTTP fallback placeholder (lines 78-81) + - Implement full TCP transport path + - Maintain dual-protocol support (HTTP + TCP based on connection string) 2. **Peer.ts Integration** - ```typescript - async call(request: RPCRequest, isAuthenticated = true): Promise { - // Detect protocol from connection string - if (this.connection.string.startsWith('tcp://')) { - return await this.callOmniProtocol(request, isAuthenticated) - } else if (this.connection.string.startsWith('http://')) { - return await this.callHTTP(request, isAuthenticated) - } - } - ``` + + ```typescript + async call(request: RPCRequest, isAuthenticated = true): Promise { + // Detect protocol from connection string + if (this.connection.string.startsWith('tcp://')) { + return await this.callOmniProtocol(request, isAuthenticated) + } else if (this.connection.string.startsWith('http://')) { + return await this.callHTTP(request, isAuthenticated) + } + } + ``` 3. **Connection String Format** - - HTTP: `http://ip:port` or `https://ip:port` - - TCP: `tcp://ip:port` or `tcps://ip:port` (TLS) - - Auto-detection based on peer capabilities + - HTTP: `http://ip:port` or `https://ip:port` + - TCP: `tcp://ip:port` or `tcps://ip:port` (TLS) + - Auto-detection based on peer capabilities 4. **Migration Modes** (already defined in config) - - `HTTP_ONLY`: All peers use HTTP (Wave 7.x default) - - `OMNI_PREFERRED`: Use TCP for peers in `omniPeers` set, HTTP fallback - - `OMNI_ONLY`: TCP only, fail if TCP unavailable (production target) + - `HTTP_ONLY`: All peers use HTTP (Wave 7.x default) + - `OMNI_PREFERRED`: Use TCP for peers in `omniPeers` set, HTTP fallback + - `OMNI_ONLY`: TCP only, fail if TCP unavailable (production target) 5. 
**Error Handling & Fallback** - ```typescript - // Dual protocol with automatic fallback - async call(request) { - if (this.supportsOmni() && config.mode !== 'HTTP_ONLY') { - try { - return await this.callOmniProtocol(request) - } catch (error) { - if (config.mode === 'OMNI_PREFERRED') { - log.warning('TCP failed, falling back to HTTP', error) - return await this.callHTTP(request) - } - throw error // OMNI_ONLY mode - } - } - return await this.callHTTP(request) - } - ``` + ```typescript + // Dual protocol with automatic fallback + async call(request) { + if (this.supportsOmni() && config.mode !== 'HTTP_ONLY') { + try { + return await this.callOmniProtocol(request) + } catch (error) { + if (config.mode === 'OMNI_PREFERRED') { + log.warning('TCP failed, falling back to HTTP', error) + return await this.callHTTP(request) + } + throw error // OMNI_ONLY mode + } + } + return await this.callHTTP(request) + } + ``` #### Tests + - End-to-end flow: handler → binary encoding → TCP → response - HTTP fallback when TCP unavailable - Migration mode switching (HTTP_ONLY → OMNI_PREFERRED → OMNI_ONLY) @@ -355,44 +387,48 @@ async sendMessage(opcode, payload, timeout) { - Performance benchmarking: TCP vs HTTP latency comparison ### Wave 8.6: Monitoring & Debugging (Observability) + **Duration**: 2-3 days **Priority**: LOW - Can be deferred #### Deliverables + 1. **Logging Infrastructure** - - Connection lifecycle events (connect, auth, ready, close) - - Message send/receive with opcodes and sizes - - Error details with classification - - Circuit breaker state changes + - Connection lifecycle events (connect, auth, ready, close) + - Message send/receive with opcodes and sizes + - Error details with classification + - Circuit breaker state changes 2. 
**Debug Mode** - - Packet-level inspection (hex dumps) - - Message flow tracing (message ID tracking) - - Connection state visualization + - Packet-level inspection (hex dumps) + - Message flow tracing (message ID tracking) + - Connection state visualization 3. **Metrics Dashboard** (future enhancement) - - Real-time connection count - - Latency histograms - - Error rate trends - - Bandwidth savings vs HTTP + - Real-time connection count + - Latency histograms + - Error rate trends + - Bandwidth savings vs HTTP 4. **Health Check Endpoint** - - OmniProtocol status (enabled/disabled) - - Active connections count - - Circuit breaker states - - Recent errors summary + - OmniProtocol status (enabled/disabled) + - Active connections count + - Circuit breaker states + - Recent errors summary ## Pending Handlers (Can Implement in Parallel) While Wave 8 is being built, we can continue implementing remaining handlers using JSON envelope pattern: ### Medium Priority + - `0x13 bridge_getTrade` (likely redundant with 0x12) - `0x14 bridge_executeTrade` (likely redundant with 0x12) - `0x50-0x5F` Browser/client operations (16 opcodes) - `0x60-0x62` Admin operations (3 opcodes) ### Low Priority + - `0x30 consensus_generic` (wrapper opcode) - `0x40 gcr_generic` (wrapper opcode) - `0x32 voteBlockHash` (deprecated in PoRBFTv2) @@ -400,26 +436,29 @@ While Wave 8 is being built, we can continue implementing remaining handlers usi ## Wave 8 Success Criteria ### Technical Validation + ✅ All existing HTTP tests pass with TCP transport ✅ Binary encoding round-trip tests for all 40 opcodes ✅ Connection pool handles 1000 simulated peers ✅ Circuit breaker prevents cascading failures ✅ Graceful fallback from TCP to HTTP works -✅ Memory usage within targets (<1MB for 1000 peers) +✅ Memory usage within targets (<1MB for 1000 peers) ### Performance Targets + ✅ Cold connection: <120ms (TCP handshake + auth) ✅ Warm connection: <30ms (message send + response) ✅ Bandwidth savings: >60% vs HTTP for 
typical payloads ✅ Throughput: >10,000 req/s with connection reuse -✅ Latency p95: <50ms for warm connections +✅ Latency p95: <50ms for warm connections ### Production Readiness + ✅ Feature flag controls (HTTP_ONLY, OMNI_PREFERRED, OMNI_ONLY) ✅ Dual protocol support (HTTP + TCP) ✅ Error handling and logging comprehensive ✅ No breaking changes to existing Peer class API -✅ Safe rollout strategy documented +✅ Safe rollout strategy documented ## Timeline Estimate @@ -428,6 +467,7 @@ While Wave 8 is being built, we can continue implementing remaining handlers usi **Conservative**: 35-42 days (with buffer for issues) ### Parallel Work Opportunities + - Wave 8.1 (TCP infra) can be built while finishing Wave 7.5 (testing) - Wave 8.2 (binary encoding) can start before 8.1 completes - Remaining handlers (browser/admin ops) can be implemented anytime @@ -436,25 +476,33 @@ While Wave 8 is being built, we can continue implementing remaining handlers usi ## Risk Analysis ### High Risk + 🔴 **TCP Connection Management Complexity** + - Mitigation: Start with single connection per peer, scale later - Fallback: Keep HTTP as safety net during migration 🔴 **Binary Encoding Bugs** + - Mitigation: Extensive round-trip testing, fixture validation - Fallback: JSON envelope mode for complex structures ### Medium Risk + 🟡 **Performance Doesn't Meet Targets** + - Mitigation: Profiling and optimization sprints - Fallback: Hybrid mode (TCP for hot paths, HTTP for bulk) 🟡 **Memory Leaks in Connection Pool** + - Mitigation: Long-running stress tests, memory profiling - Fallback: Aggressive idle timeout, connection limits ### Low Risk + 🟢 **Protocol Versioning** + - Already designed in message header - Backward compatibility maintained @@ -462,12 +510,12 @@ While Wave 8 is being built, we can continue implementing remaining handlers usi 1. **Review this plan** with the team/stakeholders 2. 
**Start Wave 8.1** (TCP Connection Infrastructure) - - Create `src/libs/omniprotocol/transport/` directory - - Implement ConnectionPool and PeerConnection classes - - Write connection lifecycle tests + - Create `src/libs/omniprotocol/transport/` directory + - Implement ConnectionPool and PeerConnection classes + - Write connection lifecycle tests 3. **Continue Wave 7.5** (Testing & Hardening) in parallel - - Complete remaining handler tests - - Integration test suite for existing opcodes + - Complete remaining handler tests + - Integration test suite for existing opcodes 4. **Document Wave 8.1 progress** in memory updates ## References diff --git a/.serena/memories/project_context_consolidated.md b/.serena/memories/project_context_consolidated.md index e5bdf8f37..2a1815c1c 100644 --- a/.serena/memories/project_context_consolidated.md +++ b/.serena/memories/project_context_consolidated.md @@ -1,6 +1,7 @@ # Demos Network Node - Complete Project Context ## Project Overview + **Repository**: Demos Network RPC Node Implementation **Version**: 0.9.5 (early development) **Branch**: `tg_identities_v2` @@ -8,6 +9,7 @@ **Working Directory**: `/Users/tcsenpai/kynesys/node` ## Architecture & Key Components + ``` src/ ├── features/ # Feature modules (multichain, incentives) @@ -15,20 +17,21 @@ src/ │ ├── blockchain/ # Chain, consensus (PoRBFTv2), GCR (v2) │ ├── peer/ # Peer networking │ └── network/ # RPC server, GCR routines -├── model/ # TypeORM entities & database config +├── model/ # TypeORM entities & database config ├── utilities/ # Utility functions ├── types/ # TypeScript definitions └── tests/ # Test files ``` ## Essential Development Commands + ```bash # Code Quality (REQUIRED after changes) bun run lint:fix # ESLint validation + auto-fix bun tsc --noEmit # Type checking (MANDATORY) bun format # Code formatting -# Development +# Development bun dev # Development mode with auto-reload bun start:bun # Production start @@ -37,6 +40,7 @@ bun test:chains # Jest tests for 
chain functionality ``` ## Critical Development Rules + - **NEVER start the node directly** during development or testing - **Use `bun run lint:fix`** for error checking (not node startup) - **Always run type checking** before marking tasks complete @@ -46,30 +50,35 @@ bun test:chains # Jest tests for chain functionality - **Add `// REVIEW:` comments** for new features ## Code Standards -- **Naming**: camelCase (variables/functions), PascalCase (classes/interfaces) + +- **Naming**: camelCase (variables/functions), PascalCase (classes/interfaces) - **Style**: Double quotes, no semicolons, trailing commas - **Imports**: Use `@/` aliases (not `../../../`) - **Comments**: JSDoc for functions, `// REVIEW:` for new features - **ESLint**: Supports both camelCase and UPPER_CASE variables ## Task Completion Checklist + **Before marking any task complete**: -1. ✅ Run type checking (`bun tsc --noEmit`) + +1. ✅ Run type checking (`bun tsc --noEmit`) 2. ✅ Run linting (`bun lint:fix`) 3. ✅ Add `// REVIEW:` comments on new code 4. ✅ Use `@/` imports instead of relative paths 5. 
✅ Add JSDoc for new functions ## Technology Notes + - **GCR**: Always refers to GCRv2 unless specified otherwise -- **Consensus**: Always refers to PoRBFTv2 unless specified otherwise +- **Consensus**: Always refers to PoRBFTv2 unless specified otherwise - **XM/Crosschain**: Multichain capabilities in `src/features/multichain` - **SDK**: `@kynesyslabs/demosdk` package (current version 2.4.7) - **Database**: PostgreSQL + SQLite3 with TypeORM - **Framework**: Fastify with Socket.io ## Testing & Quality Assurance + - **Node Startup**: Only in production or controlled environments - **Development Testing**: Use ESLint validation for code correctness - **Resource Efficiency**: ESLint prevents unnecessary node startup overhead -- **Environment Stability**: Maintains clean development environment \ No newline at end of file +- **Environment Stability**: Maintains clean development environment diff --git a/.serena/memories/project_purpose.md b/.serena/memories/project_purpose.md index c5e515310..8d88fe53d 100644 --- a/.serena/memories/project_purpose.md +++ b/.serena/memories/project_purpose.md @@ -1,21 +1,25 @@ # Demos Network Node Software - Project Purpose ## Overview + The Demos Network Node Software is the official RPC implementation for the Demos Network. This repository contains the core network infrastructure components that allow machines to participate in the Demos Network as nodes. 
## Key Components + - **Demos Network RPC**: Core network infrastructure and node functionality - **Demos Network SDK**: Full SDK implementation (`@kynesyslabs/demosdk` package) - **Multi-chain capabilities**: Cross-chain functionality referred to as "XM" or "Crosschain" - **Various features**: Including bridges, FHE, ZK, post-quantum cryptography, incentives, and more ## Target Environment + - Early development stage (not production-ready) - Designed for Linux, macOS, and WSL2 on Windows - Uses TypeScript with modern ES modules - Requires Node.js 20.x+, Bun, and Docker ## Architecture + - Modular feature-based architecture in `src/features/` - Database integration with TypeORM and PostgreSQL - RESTful API endpoints via Fastify @@ -23,7 +27,8 @@ The Demos Network Node Software is the official RPC implementation for the Demos - Identity management with cryptographic keys ## Development Context + - Licensed under CC BY-NC-ND 4.0 by KyneSys Labs - Private repository (not for public distribution) - Active development with frequent updates -- Focus on maintainability, type safety, and comprehensive error handling \ No newline at end of file +- Focus on maintainability, type safety, and comprehensive error handling diff --git a/.serena/memories/session_2026-01-18_storage_program_api.md b/.serena/memories/session_2026-01-18_storage_program_api.md index 5afce3c34..45411f504 100644 --- a/.serena/memories/session_2026-01-18_storage_program_api.md +++ b/.serena/memories/session_2026-01-18_storage_program_api.md @@ -1,46 +1,52 @@ # Session: Storage Program Standard Calls API + **Date**: 2026-01-18 **Branch**: storage_v2 ## Summary + Implemented granular storage program API - node-side read/write methods and SDK wrappers. 
## Completed Tasks ### Core Implementation (✅ Done) + - **node-tytc / DEM-551**: Node read methods in `manageNodeCall.ts` - - getStorageProgramFields, getStorageProgramValue, getStorageProgramItem - - hasStorageProgramField, getStorageProgramFieldType, getStorageProgramAll - + - getStorageProgramFields, getStorageProgramValue, getStorageProgramItem + - hasStorageProgramField, getStorageProgramFieldType, getStorageProgramAll - **node-d3bv / DEM-552**: Node write methods in `GCRStorageProgramRoutines.ts` - - SET_FIELD, SET_ITEM, APPEND_ITEM, DELETE_FIELD, DELETE_ITEM - - Fee calculation based on size delta - + - SET_FIELD, SET_ITEM, APPEND_ITEM, DELETE_FIELD, DELETE_ITEM + - Fee calculation based on size delta - **node-ekwj / DEM-553**: SDK wrapper methods in `../sdks/src/storage/StorageProgram.ts` - - 6 read methods: getFields, getValue, getItem, hasField, getFieldType, getAll - - 5 write payload builders: setField, setItem, appendItem, deleteField, deleteItem - - **SDK v2.9.0 published** + - 6 read methods: getFields, getValue, getItem, hasField, getFieldType, getAll + - 5 write payload builders: setField, setItem, appendItem, deleteField, deleteItem + - **SDK v2.9.0 published** ## Remaining Tasks (Epic: node-9idc) ### NEXT SESSION START HERE: + - **node-dsbw**: Update `../storage-poc` to demonstrate new standard calls API ### Also Remaining: + - **node-22zq**: Testing & edge cases for standard calls - **node-h5tu**: Update `../documentation-mintlify` public docs - **node-i8b7**: Update `specs/storageprogram/*.mdx` internal specs ## Key Files Modified + - `/home/tcsenpai/kynesys/node/src/libs/network/manageNodeCall.ts` - Read endpoints - `/home/tcsenpai/kynesys/node/src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts` - Write handlers - `/home/tcsenpai/kynesys/sdks/src/storage/StorageProgram.ts` - SDK methods ## Linear Issues + - DEM-551: Done -- DEM-552: Done +- DEM-552: Done - DEM-553: Done ## Notes + - Beads issues can't be closed while epic 
(node-9idc) is open - marked with COMPLETED notes instead - SDK granular methods use nodeCall pattern for reads, payload builders for writes diff --git a/.serena/memories/session_2026-01-19_storage_docs_complete.md b/.serena/memories/session_2026-01-19_storage_docs_complete.md new file mode 100644 index 000000000..d4364a982 --- /dev/null +++ b/.serena/memories/session_2026-01-19_storage_docs_complete.md @@ -0,0 +1,79 @@ +# Session: Storage Program Documentation Completion + +**Date**: 2026-01-19 +**Branch**: storage_v2 + +## Summary + +Completed documentation for Storage Program Granular API across documentation-mintlify and node specs. + +## Work Completed + +### Documentation (../documentation-mintlify) + +Updated `sdk/storage-programs/rpc-queries.md` with: + +- **Granular Read Endpoints table** - 7 endpoints documented +- **Get All Field Names** - `/storage-program/:address/fields` +- **Get Field Value** - `/storage-program/:address/field/:field` +- **Get Array Item** - `/storage-program/:address/field/:field/item/:index` +- **Check Field Exists** - `/storage-program/:address/has/:field` +- **Get Field Type** - `/storage-program/:address/type/:field` +- **Get All Data** - `/storage-program/:address/all` +- **Search by Name** - `/storage-program/search/:name` +- **When to Use comparison table** +- **Practical examples** (data discovery, conditional access, parallel queries) + +### Node Specs (specs/storageprogram/) + +- `03-operations.mdx` - Added GRANULAR_WRITE documentation (132 lines) +- `05-rpc-endpoints.mdx` - Added granular read endpoints (248 lines) + +## Git Commits + +- Documentation: `d5534f7` (rebased from `f21e97d`) +- Node specs: `107f8c8b` + +## Beads Status + +Epic `node-9idc` (Storage Program Standard Calls API): + +- ✓ node-d3bv: Node write methods +- ✓ node-ekwj: SDK wrapper methods +- ✓ node-tytc: Node read methods +- ✓ node-dsbw: storage-poc integration +- ✓ node-h5tu: documentation-mintlify (closed this session) +- ✓ node-i8b7: 
specs/storageprogram (closed this session) +- ○ node-22zq: Testing & edge cases (remaining) + +## Technical Reference + +### Granular Read Methods (6) + +| Method | Returns | Use Case | +| ----------------------- | ------------------ | ------------------------ | +| `getFields()` | `string[]` | Discover data structure | +| `getValue(field)` | `any` + `type` | Read single field | +| `getItem(field, index)` | `any` | Access array elements | +| `hasField(field)` | `boolean` | Check before accessing | +| `getFieldType(field)` | `StorageFieldType` | Type validation | +| `getAll()` | Full data | When you need everything | + +### Granular Write Operations (5) + +| Type | Required Fields | Description | +| -------------- | ------------------------- | -------------------- | +| `SET_FIELD` | `field`, `value` | Set top-level field | +| `SET_ITEM` | `field`, `index`, `value` | Update array element | +| `APPEND_ITEM` | `field`, `value` | Append to array | +| `DELETE_FIELD` | `field` | Remove field | +| `DELETE_ITEM` | `field`, `index` | Remove array element | + +### StorageFieldType Enum + +`string`, `number`, `boolean`, `array`, `object`, `null`, `undefined` + +## Next Steps + +- Complete testing task `node-22zq` with edge case coverage +- Consider closing epic `node-9idc` once testing is done diff --git a/.serena/memories/session_2026-01-19_storage_poc_granular_api.md b/.serena/memories/session_2026-01-19_storage_poc_granular_api.md index 2ba3dd528..797c31931 100644 --- a/.serena/memories/session_2026-01-19_storage_poc_granular_api.md +++ b/.serena/memories/session_2026-01-19_storage_poc_granular_api.md @@ -1,18 +1,22 @@ # Session: Storage POC Granular API Update ## Date + 2026-01-19 ## Summary + Updated the storage-poc application to demonstrate the new granular storage program API with a new "Granular API" tab. 
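The granular read methods and `StorageFieldType` enum tabled earlier boil down to simple key/type semantics. A minimal local sketch of those semantics over a plain object (the real SDK methods call RPC endpoints; every name here is illustrative, not the actual SDK API):

```typescript
// Illustrative sketch of the granular read semantics over a plain object.
// The real SDK methods hit RPC endpoints; these local helpers only mirror
// the behavior described in the tables above.
type StorageFieldType =
    | "string" | "number" | "boolean"
    | "array" | "object" | "null" | "undefined"

function fieldTypeOf(value: unknown): StorageFieldType {
    if (value === null) return "null"
    if (Array.isArray(value)) return "array"
    return typeof value as StorageFieldType
}

const data: Record<string, unknown> = { name: "demo", tags: ["a", "b"] }

const getFields = (): string[] => Object.keys(data) // discover structure
const hasField = (field: string): boolean => field in data // check first
const getItem = (field: string, index: number): unknown =>
    (data[field] as unknown[])[index] // array element access

console.log(getFields(), hasField("name"), fieldTypeOf(data.tags), getItem("tags", 1))
```

The `hasField` → `getFieldType` → `getValue`/`getItem` order mirrors the "check before accessing" use case in the read-methods table.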
## Completed Work ### Task: node-dsbw (CLOSED) + - Added new "Granular API" tab to `/home/tcsenpai/kynesys/storage-poc/src/App.tsx` - Updated SDK from v2.8.24 to v2.9.0 ### Read Operations Implemented + 1. `getFields(rpcUrl, address, identity?)` - List all top-level field names 2. `getValue(rpcUrl, address, field, identity?)` - Get specific field value 3. `getItem(rpcUrl, address, field, index, identity?)` - Get array element @@ -20,6 +24,7 @@ Updated the storage-poc application to demonstrate the new granular storage prog 5. `getFieldType(rpcUrl, address, field, identity?)` - Get field type ### Write Operations Implemented + 1. `setField(address, field, value)` - Set/create field 2. `setItem(address, field, index, value)` - Set array element 3. `appendItem(address, field, value)` - Push to array @@ -27,6 +32,7 @@ Updated the storage-poc application to demonstrate the new granular storage prog 5. `deleteItem(address, field, index)` - Remove array element ### Fee Display + - Fee extracted from `confirmResult.response?.data?.transaction?.content?.transaction_fee` - Total fee = `network_fee + rpc_fee + additional_fee` - Display format: `Fee: ${(totalFee / 1e18).toFixed(6)} DEM` @@ -34,25 +40,30 @@ Updated the storage-poc application to demonstrate the new granular storage prog ## Technical Discoveries ### SDK Type Structure + - `TxFee` interface: `{ network_fee: number, rpc_fee: number, additional_fee: number }` - Fee is NOT on ValidityData.data.fee (doesn't exist) - Fee is on `ValidityData.data.transaction.content.transaction_fee` ### UI Architecture + - Two-column layout: READ operations (left), WRITE operations (right) - Optional identity field for ACL-protected storage programs - Proper validation per operation type (field required for getValue, index for getItem, etc.) 
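The fee display logic above is compact but easy to get wrong. A sketch assuming the `TxFee` shape noted in this session (the example fee values are made up):

```typescript
// Sketch of the fee display described above: sum the three TxFee components
// and render in DEM (totals are 1e18-denominated). Example numbers are made up.
interface TxFee {
    network_fee: number
    rpc_fee: number
    additional_fee: number
}

function formatFee(fee: TxFee): string {
    const totalFee = fee.network_fee + fee.rpc_fee + fee.additional_fee
    return `Fee: ${(totalFee / 1e18).toFixed(6)} DEM`
}

console.log(formatFee({ network_fee: 1e15, rpc_fee: 5e14, additional_fee: 0 }))
// → Fee: 0.001500 DEM
```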
## Git State + - Branch: `storage_v2` - Commit: `233984b7 feat(storage): implement granular storage program API` - Pushed: ✅ to origin/storage_v2 ## Remaining Epic Tasks (node-9idc) + - `node-22zq` - Testing & edge cases for standard calls - `node-h5tu` - SDK integration (if still needed) - `node-i8b7` - Documentation ## Related + - session_2026-01-18_storage_program_api (previous session) - feature_storage_programs_plan (planning doc) diff --git a/.serena/memories/session_storage_program_queries_2026_01_18.md b/.serena/memories/session_storage_program_queries_2026_01_18.md index a4c4bd719..e2f0f38f8 100644 --- a/.serena/memories/session_storage_program_queries_2026_01_18.md +++ b/.serena/memories/session_storage_program_queries_2026_01_18.md @@ -1,22 +1,27 @@ # Session: Storage Program Query Methods - 2026-01-18 ## Summary + Fixed SDK storage program query methods to work without authentication and resolved address format normalization issue. ## Work Completed ### 1. Unauthenticated Storage Program Queries + **Problem**: SDK storage program queries (`getByAddress`, `getByOwner`, `searchByName`) were returning null/empty because `gcr_routine` requires authentication headers. -**Solution**: +**Solution**: + - Added storage program query methods to `manageNodeCall.ts` (unauthenticated endpoint) - Updated SDK `StorageProgram.ts` to use `nodeCall` instead of `gcr_routine` **Files Modified**: + - `src/libs/network/manageNodeCall.ts` - Added 3 new cases: `getStorageProgram`, `getStorageProgramsByOwner`, `searchStoragePrograms` - `../sdks/src/storage/StorageProgram.ts` - Changed from `gcr_routine` to `nodeCall` ### 2. createdByTx Field Population + **Problem**: `createdByTx` field in `GCRStorageProgram` entity was not being populated during transaction processing. **Root Cause**: In `endpointHandlers.ts:109`, `gcredit.txhash = ""` is set during validation for hash comparison, but never restored. 
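The root cause above follows a blank-then-restore pattern: validation zeroes `txhash` so hashes compare on a canonical form, which means the apply step must put the real hash back. A minimal sketch with illustrative types (not the actual GCR entities):

```typescript
// Illustrative blank-then-restore pattern behind the createdByTx bug:
// validation blanks txhash for hash comparison; the apply step restores it.
interface GcrEdit {
    txhash: string
    field: string
}

function canonicalizeForHashing(edit: GcrEdit): GcrEdit {
    // Hash comparison needs a canonical form with txhash blanked
    return { ...edit, txhash: "" }
}

function applyToTx(edit: GcrEdit, txHash: string): GcrEdit {
    // Re-attach the transaction hash before applying the edit
    return { ...edit, txhash: txHash }
}

const blanked = canonicalizeForHashing({ txhash: "0xabc", field: "createdByTx" })
const applied = applyToTx(blanked, "0xabc")
console.log(blanked.txhash, applied.txhash)
```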
@@ -24,21 +29,27 @@ Fixed SDK storage program query methods to work without authentication and resol **Solution**: Added `edit.txhash = tx.hash` in `handleGCR.ts` `applyToTx()` method before applying edits. **File Modified**: + - `src/libs/blockchain/gcr/handleGCR.ts` - Added txhash assignment in applyToTx loop ### 3. Storage Address Normalization + **Problem**: `getStorageProgram` endpoint returned null because: + - DB stores addresses as `stor-{hash}` (with prefix) - Client was sending `{hash}` (without prefix) **Solution**: Added `normalizeStorageAddress()` helper function in `manageNodeCall.ts` that: + - Strips `0x` prefix if present (legacy addresses) - Adds `stor-` prefix if missing **File Modified**: + - `src/libs/network/manageNodeCall.ts` - Added normalizeStorageAddress() function ## Database Observations + - Table name: `gcr_storageprogram` (no underscore) - Entity name: `GCRStorageProgram` - Storage addresses in DB: `stor-{40char_hash}` format @@ -47,10 +58,12 @@ Fixed SDK storage program query methods to work without authentication and resol ## Technical Details ### nodeCall vs gcr_routine + - `nodeCall`: Public endpoint, no authentication required - `gcr_routine`: Requires `signature` and `identity` headers ### Storage Address Formats Observed + ``` 0xstor-53ad58410dfcd0b93c18f0928d84ad43c1bbf5f5 (legacy with 0x) stor-7e40fde1086c8ed4cf0486ed12c010d30abd715f (current format) @@ -58,15 +71,18 @@ stor-7e40fde1086c8ed4cf0486ed12c010d30abd715f (current format) ``` ## Testing Notes + - Node needs restart after changes for them to take effect - Storage POC at `../storage-poc/` can be used for testing - PostgreSQL container: `postgres_5332` on port 5332 (user: demosuser, db: demos) ## Related Files + - `src/libs/network/manageGCRRoutines.ts` - Contains authenticated storage methods (kept for backward compatibility) - `src/model/entities/GCRv2/GCR_StorageProgram.ts` - Entity definition - `src/libs/blockchain/gcr/gcr_routines/GCRStorageProgramRoutines.ts` - 
Shared query routines ## Next Steps + - Test the endpoints after node restart - Consider adding similar normalization to other storage-related endpoints if needed diff --git a/.serena/memories/session_ud_ownership_verification_2025_10_21.md b/.serena/memories/session_ud_ownership_verification_2025_10_21.md index 319da1085..dbb5df736 100644 --- a/.serena/memories/session_ud_ownership_verification_2025_10_21.md +++ b/.serena/memories/session_ud_ownership_verification_2025_10_21.md @@ -1,6 +1,7 @@ # Session: UD Domain Ownership Verification - October 21, 2025 ## Session Overview + **Duration**: ~1 hour **Branch**: `ud_identities` **Commit**: `2ac51f02` - fix(ud): add ownership verification to deductUdDomainPoints and fix import path @@ -8,47 +9,57 @@ ## Work Completed ### 1. Code Review Analysis + **Reviewer Concerns Analyzed**: + 1. UD domain ownership verification missing in `deductUdDomainPoints` (LEGITIMATE) 2. Import path using explicit `node_modules/` path in udIdentityManager.ts (LEGITIMATE) ### 2. Security Implementation + **File**: `src/features/incentive/PointSystem.ts` **Changes**: + - Added UDIdentityManager import for domain resolution - Implemented blockchain-verified ownership check in `deductUdDomainPoints()` - Verification flow: - 1. Get user's linked wallets from GCR via `getUserIdentitiesFromGCR()` - 2. Resolve domain on-chain via `UDIdentityManager.resolveUDDomain()` - 3. Extract wallet addresses from linkedWallets format ("chain:address") - 4. Verify at least one user wallet matches domain's authorized addresses - 5. Handle case-sensitive comparison for Solana, case-insensitive for EVM - 6. Return 400 error if ownership verification fails - 7. Only proceed with point deduction if verified + 1. Get user's linked wallets from GCR via `getUserIdentitiesFromGCR()` + 2. Resolve domain on-chain via `UDIdentityManager.resolveUDDomain()` + 3. Extract wallet addresses from linkedWallets format ("chain:address") + 4. 
Verify at least one user wallet matches domain's authorized addresses + 5. Handle case-sensitive comparison for Solana, case-insensitive for EVM + 6. Return 400 error if ownership verification fails + 7. Only proceed with point deduction if verified **Security Vulnerability Addressed**: + - **Before**: Users could deduct points for domains they no longer own after transfer - **After**: Blockchain-verified ownership required before point deduction - **Impact**: Prevents points inflation from same domain generating multiple points across accounts ### 3. Infrastructure Fix + **File**: `src/libs/blockchain/gcr/gcr_routines/udIdentityManager.ts` **Changes**: + - Line 3: Fixed import path from `node_modules/@kynesyslabs/demosdk/build/types/abstraction` to `@kynesyslabs/demosdk/build/types/abstraction` - Line 258: Made `resolveUDDomain()` public (was private) to enable ownership verification from PointSystem **Rationale**: + - Explicit node_modules paths break module resolution across different environments - Public visibility required for PointSystem to verify domain ownership on-chain ## Technical Decisions ### Why UD Domains Need Ownership Verification + **Key Insight**: UD domains are NFTs (blockchain assets) that can be transferred/sold **Vulnerability Scenario**: + 1. Alice links `alice.crypto` → earns 3 points ✅ 2. Alice transfers domain to Bob on blockchain 🔄 3. Bob links `alice.crypto` → earns 3 points ✅ @@ -56,20 +67,24 @@ 5. **Result**: Same domain generates 6 points (should be max 3) **Solution**: Match linking security pattern + - Linking: Verifies signature from authorized wallet via `UDIdentityManager.verifyPayload()` - Unlinking: Now verifies current ownership via `UDIdentityManager.resolveUDDomain()` ### Implementation Pattern + **Ownership Verification Strategy**: + ```typescript // 1. Get user's linked wallets from GCR const { linkedWallets } = await this.getUserIdentitiesFromGCR(userId) // 2. 
Resolve domain to get current on-chain authorized addresses -const domainResolution = await UDIdentityManager.resolveUDDomain(normalizedDomain) +const domainResolution = + await UDIdentityManager.resolveUDDomain(normalizedDomain) // 3. Extract wallet addresses (format: "chain:address" → "address") -const userWalletAddresses = linkedWallets.map(wallet => wallet.split(':')[1]) +const userWalletAddresses = linkedWallets.map(wallet => wallet.split(":")[1]) // 4. Verify ownership with chain-specific comparison const isOwner = domainResolution.authorizedAddresses.some(authAddr => @@ -80,40 +95,46 @@ const isOwner = domainResolution.authorizedAddresses.some(authAddr => } // EVM: case-insensitive hex return authAddr.address.toLowerCase() === userAddr.toLowerCase() - }) + }), ) ``` ## Validation Results + - **ESLint**: ✅ No errors in modified files - **Type Safety**: ✅ All changes type-safe - **Import Verification**: ✅ UDIdentityAssignPayload confirmed exported from SDK - **Pattern Consistency**: ✅ Matches linking flow security architecture ## Files Modified + 1. `src/features/incentive/PointSystem.ts` (+56 lines) - - Added UDIdentityManager import - - Implemented ownership verification in deductUdDomainPoints() + - Added UDIdentityManager import + - Implemented ownership verification in deductUdDomainPoints() 2. `src/libs/blockchain/gcr/gcr_routines/udIdentityManager.ts` (+2, -2 lines) - - Fixed import path (line 3) - - Made resolveUDDomain() public (line 258) + - Fixed import path (line 3) + - Made resolveUDDomain() public (line 258) ## Key Learnings ### UD Domain Resolution Flow + **Multi-Chain Priority**: + 1. Polygon UNS → Base UNS → Sonic UNS → Ethereum UNS → Ethereum CNS 2. Fallback to Solana for .demos and other Solana domains 3. Returns UnifiedDomainResolution with authorizedAddresses array ### Points System Security Principles + 1. **Consistency**: Award and deduct operations must have matching security 2. 
**Blockchain Truth**: On-chain state is source of truth for ownership 3. **Chain Awareness**: Different signature validation (case-sensitive Solana vs case-insensitive EVM) 4. **Error Clarity**: Return meaningful 400 errors when verification fails ### Import Path Best Practices + - Never use explicit `node_modules/` paths in TypeScript imports - Use package name directly: `@kynesyslabs/demosdk/build/types/abstraction` - Ensures module resolution works across all environments (dev, build, production) @@ -121,17 +142,20 @@ const isOwner = domainResolution.authorizedAddresses.some(authAddr => ## Project Context Updates ### UD Integration Status + - **Phase 5**: Complete (domain linking with multi-chain support) - **Security Enhancement**: Ownership verification now complete for both award and deduct flows - **Points Integrity**: Protected against domain transfer abuse ### Related Memories + - `ud_integration_complete`: Base UD domain integration - `ud_phase5_complete`: Multi-chain UD support completion - `ud_technical_reference`: UD resolution and verification patterns - `ud_architecture_patterns`: UD domain system architecture ## Next Potential Work + 1. Consider adding similar ownership verification for Web3 wallet deduction 2. Review other identity deduction flows for consistency 3. Add integration tests for UD ownership verification edge cases diff --git a/.serena/memories/session_ud_points_implementation_2025_01_31.md b/.serena/memories/session_ud_points_implementation_2025_01_31.md index e6adc7ee3..88036c7bc 100644 --- a/.serena/memories/session_ud_points_implementation_2025_01_31.md +++ b/.serena/memories/session_ud_points_implementation_2025_01_31.md @@ -5,19 +5,23 @@ **Commit**: c833679d ## Task Summary + Implemented missing UD domain points methods in PointSystem to resolve TypeScript errors identified during pre-existing issue analysis. 
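The point determination these methods share is a single TLD check. A sketch using the point values and the case-insensitive `endsWith(".demos")` logic recorded in this session (constant names are illustrative):

```typescript
// TLD-based point determination: .demos domains earn 3 points, other UD
// domains earn 1, per this session's notes. Constant names are illustrative.
const LINK_UD_DOMAIN_DEMOS = 3
const LINK_UD_DOMAIN = 1

function udDomainPoints(domain: string): number {
    // Case-insensitive TLD check
    return domain.toLowerCase().endsWith(".demos")
        ? LINK_UD_DOMAIN_DEMOS
        : LINK_UD_DOMAIN
}

console.log(udDomainPoints("Alice.DEMOS")) // → 3
console.log(udDomainPoints("alice.crypto")) // → 1
```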
## Implementation Details ### Point Values Added + - `LINK_UD_DOMAIN_DEMOS: 3` - For .demos TLD domains - `LINK_UD_DOMAIN: 1` - For other UD domains ### Methods Implemented #### 1. awardUdDomainPoints(userId, domain, referralCode?) + **Location**: src/features/incentive/PointSystem.ts:866-934 **Functionality**: + - TLD-based point determination (.demos = 3, others = 1) - Duplicate domain linking detection - Referral code support @@ -25,8 +29,10 @@ Implemented missing UD domain points methods in PointSystem to resolve TypeScrip - Returns RPCResponse with points awarded and total #### 2. deductUdDomainPoints(userId, domain) + **Location**: src/features/incentive/PointSystem.ts:942-1001 **Functionality**: + - TLD-based point determination - Domain-specific point tracking verification - GCR integration for point deduction @@ -35,34 +41,42 @@ Implemented missing UD domain points methods in PointSystem to resolve TypeScrip ### Type System Updates #### 1. GCR_Main Entity (src/model/entities/GCRv2/GCR_Main.ts) + - Added `udDomains: { [domain: string]: number }` to breakdown (line 36) - Added `telegram: number` to socialAccounts (line 34) #### 2. SDK Types (sdks/src/types/abstraction/index.ts) + - Added `udDomains: { [domain: string]: number }` to UserPoints breakdown (line 283) #### 3. 
Local UserPoints Interface (PointSystem.ts:12-33) + - Created local interface matching GCR entity structure - Includes all fields: web3Wallets, socialAccounts (with telegram), udDomains, referrals, demosFollow ### Infrastructure Updates #### Extended addPointsToGCR() + - Added "udDomains" type support (line 146) - Implemented udDomains breakdown handling (lines 221-228) #### Updated getUserPointsInternal() + - Added udDomains initialization in breakdown return (line 130) - Added telegram to socialAccounts initialization (line 128) ## Integration Points ### IncentiveManager Hooks + The implemented methods are called by existing hooks in IncentiveManager.ts: + - `udDomainLinked()` → calls `awardUdDomainPoints()` - `udDomainUnlinked()` → calls `deductUdDomainPoints()` ## Testing & Validation + - ✅ TypeScript compilation: All UD-related errors resolved - ✅ ESLint: All files pass linting - ✅ Pattern consistency: Follows existing web3Wallets/socialAccounts patterns @@ -71,7 +85,9 @@ The implemented methods are called by existing hooks in IncentiveManager.ts: ## Technical Decisions ### Why Local UserPoints Interface? + Created local interface instead of importing from SDK to: + 1. Avoid circular dependency issues during development 2. Ensure type consistency with GCR entity structure 3. Enable rapid iteration without SDK rebuilds @@ -80,23 +96,28 @@ Created local interface instead of importing from SDK to: Note: Added FIXME comment for future SDK import migration ### Domain Identification Logic + Uses `domain.toLowerCase().endsWith(".demos")` for TLD detection: + - Simple and reliable - Case-insensitive - Minimal processing overhead ## Files Modified + 1. src/features/incentive/PointSystem.ts (+182 lines) 2. src/model/entities/GCRv2/GCR_Main.ts (+2 lines) 3. 
sdks/src/types/abstraction/index.ts (+1 line) ## Commit Information + ``` feat(ud): implement UD domain points system with TLD-based rewards Commit: c833679d ``` ## Session Metadata + - Duration: ~45 minutes - Complexity: Moderate (extending existing system) - Dependencies: GCR entity, IncentiveManager, SDK types diff --git a/.serena/memories/suggested_commands.md b/.serena/memories/suggested_commands.md index e68a36edb..8d596960e 100644 --- a/.serena/memories/suggested_commands.md +++ b/.serena/memories/suggested_commands.md @@ -1,7 +1,9 @@ # Demos Network Node Software - Essential Commands ## Development Commands + ### Code Quality & Linting + ```bash bun run lint # Check code style and linting bun run lint:fix # Auto-fix ESLint issues @@ -10,6 +12,7 @@ bun run prettier-format # Format specific modules ``` ### Node Operations + ```bash bun install # Install dependencies bun run start # Start the node with tsx @@ -21,6 +24,7 @@ bun run dev # Development mode with auto-restart ``` ### Database Operations + ```bash bun run migration:generate # Generate TypeORM migration bun run migration:run # Run pending migrations @@ -28,11 +32,13 @@ bun run migration:revert # Revert last migration ``` ### Testing + ```bash bun run test:chains # Run chain-specific tests ``` ### Utilities + ```bash bun run keygen # Generate cryptographic keys bun run restore # Backup and restore utility @@ -41,7 +47,9 @@ bun run upgrade_deps # Interactive dependency updates ``` ## Production Commands + ### Running the Node + ```bash ./run # Start database and node (recommended) ./run -p # Custom node port @@ -52,7 +60,9 @@ bun run upgrade_deps # Interactive dependency updates ``` ## System Commands (macOS/Darwin) + ### Essential Unix Tools + ```bash ls -la # List files with details cd /path/to/dir # Change directory (use /usr/bin/zoxide if available) @@ -61,6 +71,7 @@ find . 
-name "*.ts" # Find files by pattern ``` ### Process Management + ```bash lsof -i :53550 # Check if node port is in use lsof -i :5332 # Check if database port is in use @@ -69,6 +80,7 @@ kill -9 # Force kill process ``` ### Docker Operations + ```bash docker info # Check Docker status docker ps # List running containers @@ -77,6 +89,7 @@ docker compose down # Stop services ``` ## Git Workflow + ```bash git status # Check current status git branch # List branches @@ -86,6 +99,7 @@ git commit -m "message" # Commit changes ``` ## Troubleshooting Commands + ```bash # Check system requirements node --version # Should be 20.x+ @@ -99,4 +113,4 @@ sudo lsof -i :53550 # Node port # Log inspection tail -f logs/node.log # Node logs tail -f postgres_*/postgres.log # Database logs -``` \ No newline at end of file +``` diff --git a/.serena/memories/task_completion_guidelines.md b/.serena/memories/task_completion_guidelines.md index 54de6a6f9..17cf18269 100644 --- a/.serena/memories/task_completion_guidelines.md +++ b/.serena/memories/task_completion_guidelines.md @@ -3,21 +3,26 @@ ## Essential Quality Checks After Code Changes ### 1. Code Quality Validation + ```bash bun run lint:fix # ALWAYS run after code changes ``` + - Fixes ESLint issues automatically - Validates naming conventions (camelCase, PascalCase) - Ensures code style compliance - **CRITICAL**: This is the primary validation method - NEVER skip ### 2. Type Safety Verification + Since this project uses TypeScript with strict settings: + - TypeScript compilation happens during `bun run lint:fix` - Watch for type errors in the output - Address any type-related warnings ### 3. Code Review Preparation + - Add `// REVIEW:` comments before newly added features - Document complex logic with inline comments - Ensure JSDoc comments for new public methods @@ -25,6 +30,7 @@ Since this project uses TypeScript with strict settings: ## Development Workflow Completion ### When Adding New Features + 1. 
**Implement the feature** following established patterns 2. **Run `bun run lint:fix`** to validate syntax and style 3. **Add review comments** for significant changes @@ -32,12 +38,14 @@ Since this project uses TypeScript with strict settings: 5. **Test manually** if applicable (avoid starting the node directly) ### When Modifying Existing Code + 1. **Understand existing patterns** before making changes 2. **Maintain consistency** with current codebase style 3. **Run `bun run lint:fix`** to catch any issues 4. **Verify imports** use `@/` path aliases instead of relative paths ### When Working with Database Models + 1. **Generate migrations** if schema changes: `bun run migration:generate` 2. **Review generated migrations** before committing 3. **Test migration** in development environment if possible @@ -45,6 +53,7 @@ Since this project uses TypeScript with strict settings: ## Important "DON'Ts" for Task Completion ### ❌ NEVER Do These: + - **Start the node directly** during development (`bun run start`, `./run`) - **Skip linting** - always run `bun run lint:fix` - **Use relative imports** - use `@/` path aliases instead @@ -52,6 +61,7 @@ Since this project uses TypeScript with strict settings: - **Ignore naming conventions** - follow camelCase/PascalCase rules ### ✅ ALWAYS Do These: + - **Run `bun run lint:fix`** after any code changes - **Use established patterns** from existing code - **Follow the license header** format in new files @@ -60,15 +70,17 @@ Since this project uses TypeScript with strict settings: ## Validation Commands Summary -| Task Type | Required Command | Purpose | -|-----------|-----------------|---------| -| Any code change | `bun run lint:fix` | Syntax, style, type checking | -| New features | `// REVIEW:` comments | Mark for code review | -| Database changes | `bun run migration:generate` | Create schema migrations | -| Dependency updates | `bun install` | Ensure deps are current | +| Task Type | Required Command | Purpose | +| 
------------------ | ---------------------------- | ---------------------------- | +| Any code change | `bun run lint:fix` | Syntax, style, type checking | +| New features | `// REVIEW:` comments | Mark for code review | +| Database changes | `bun run migration:generate` | Create schema migrations | +| Dependency updates | `bun install` | Ensure deps are current | ## Quality Gates + Before considering any task complete: + 1. ✅ Code passes `bun run lint:fix` without errors 2. ✅ All new code follows established patterns 3. ✅ Path aliases (`@/`) used instead of relative imports @@ -76,7 +88,8 @@ Before considering any task complete: 5. ✅ No unnecessary new files created ## Special Project Considerations + - **Node Testing**: Use ESLint validation instead of starting the node - **SDK Integration**: Reference `@kynesyslabs/demosdk` package, not source - **Bun Preference**: Always use `bun` commands over `npm`/`yarn` -- **License Compliance**: CC BY-NC-ND 4.0 headers in all new source files \ No newline at end of file +- **License Compliance**: CC BY-NC-ND 4.0 headers in all new source files diff --git a/.serena/memories/tech_stack.md b/.serena/memories/tech_stack.md index 5527eb839..68a63561b 100644 --- a/.serena/memories/tech_stack.md +++ b/.serena/memories/tech_stack.md @@ -1,30 +1,36 @@ # Demos Network Node Software - Technology Stack ## Core Technologies + - **Runtime**: Bun (preferred over npm/yarn) with Node.js 20.x+ compatibility - **Language**: TypeScript with ES modules - **Module System**: ESNext with bundler resolution - **Package Manager**: Bun (primary), with npm fallback ## Database & ORM + - **Database**: PostgreSQL (port 5332 by default) - **ORM**: TypeORM with decorators and migrations - **Connection**: Custom datasource configuration in `src/model/datasource.ts` ## Web Framework & APIs + - **Primary Framework**: Fastify with CORS support - **API Documentation**: Swagger/OpenAPI integration - **Alternative**: Express.js (legacy support) - **WebSocket**: 
Socket.io for real-time communication ## Key Dependencies + ### Core Network & Blockchain + - `@kynesyslabs/demosdk`: ^2.3.22 (Demos Network SDK) - `@cosmjs/encoding`: Cosmos blockchain integration - `web3`: ^4.16.0 (Ethereum integration) - `rubic-sdk`: ^5.57.4 (Cross-chain bridge integration) ### Cryptography & Security + - `node-forge`: ^1.3.1 (Cryptographic operations) - `openpgp`: ^5.11.0 (PGP encryption) - `superdilithium`: ^2.0.6 (Post-quantum cryptography) @@ -32,6 +38,7 @@ - `rijndael-js`: ^2.0.0 (AES encryption) ### Development Tools + - **TypeScript**: ^5.8.3 - **ESLint**: ^8.57.1 with @typescript-eslint - **Prettier**: ^2.8.0 @@ -39,12 +46,14 @@ - **tsx**: ^3.12.8 (TypeScript execution) ## Infrastructure + - **Containerization**: Docker with docker-compose - **Networking**: Custom P2P networking implementation - **Time Synchronization**: NTP client integration - **Terminal Interface**: terminal-kit for CLI interactions ## Path Resolution + - **Base URL**: `./` (project root) - **Path Aliases**: `@/*` maps to `src/*` -- **Module Resolution**: Bundler-style with tsconfig-paths \ No newline at end of file +- **Module Resolution**: Bundler-style with tsconfig-paths diff --git a/.serena/memories/tlsnotary_integration_context.md b/.serena/memories/tlsnotary_integration_context.md index 25eefa30f..11ff419ba 100644 --- a/.serena/memories/tlsnotary_integration_context.md +++ b/.serena/memories/tlsnotary_integration_context.md @@ -4,36 +4,43 @@ - **Epic**: `node-6lo` - TLSNotary Backend Integration - **Tasks** (in dependency order): - 1. `node-3yq` - Copy pre-built .so library (READY) - 2. `node-ebc` - Create FFI bindings - 3. `node-r72` - Create TLSNotaryService - 4. `node-9kw` - Create Fastify routes - 5. `node-mwm` - Create feature entry point - 6. `node-2fw` - Integrate with node startup - 7. `node-hgf` - Add SDK discovery endpoint - 8. `node-8sq` - Type check and lint + 1. `node-3yq` - Copy pre-built .so library (READY) + 2. 
`node-ebc` - Create FFI bindings + 3. `node-r72` - Create TLSNotaryService + 4. `node-9kw` - Create Fastify routes + 5. `node-mwm` - Create feature entry point + 6. `node-2fw` - Integrate with node startup + 7. `node-hgf` - Add SDK discovery endpoint + 8. `node-8sq` - Type check and lint ## Reference Code Locations ### Pre-built Binary + ``` /home/tcsenpai/tlsn/demos_tlsnotary/node/rust/target/release/libtlsn_notary.so ``` + Target: `libs/tlsn/libtlsn_notary.so` ### FFI Reference Implementation + ``` /home/tcsenpai/tlsn/demos_tlsnotary/node/ts/TLSNotary.ts ``` + Complete working bun:ffi bindings to adapt for `src/features/tlsnotary/ffi.ts` ### Demo App Reference + ``` /home/tcsenpai/tlsn/demos_tlsnotary/demo/src/app.tsx ``` + Browser-side attestation flow with tlsn-js WASM ### Integration Documentation + ``` /home/tcsenpai/tlsn/demos_tlsnotary/BACKEND_INTEGRATION.md /home/tcsenpai/tlsn/demos_tlsnotary/INTEGRATION.md @@ -43,28 +50,42 @@ Browser-side attestation flow with tlsn-js WASM ```typescript const symbols = { - tlsn_init: { args: [], returns: FFIType.i32 }, - tlsn_notary_create: { args: [FFIType.ptr], returns: FFIType.ptr }, - tlsn_notary_start_server: { args: [FFIType.ptr, FFIType.u16], returns: FFIType.i32 }, - tlsn_notary_stop_server: { args: [FFIType.ptr], returns: FFIType.i32 }, - tlsn_verify_attestation: { args: [FFIType.ptr, FFIType.u64], returns: FFIType.ptr }, - tlsn_notary_get_public_key: { args: [FFIType.ptr, FFIType.ptr, FFIType.u64], returns: FFIType.i32 }, - tlsn_notary_destroy: { args: [FFIType.ptr], returns: FFIType.void }, - tlsn_free_verification_result: { args: [FFIType.ptr], returns: FFIType.void }, - tlsn_free_string: { args: [FFIType.ptr], returns: FFIType.void }, -}; + tlsn_init: { args: [], returns: FFIType.i32 }, + tlsn_notary_create: { args: [FFIType.ptr], returns: FFIType.ptr }, + tlsn_notary_start_server: { + args: [FFIType.ptr, FFIType.u16], + returns: FFIType.i32, + }, + tlsn_notary_stop_server: { args: [FFIType.ptr], returns: 
FFIType.i32 }, + tlsn_verify_attestation: { + args: [FFIType.ptr, FFIType.u64], + returns: FFIType.ptr, + }, + tlsn_notary_get_public_key: { + args: [FFIType.ptr, FFIType.ptr, FFIType.u64], + returns: FFIType.i32, + }, + tlsn_notary_destroy: { args: [FFIType.ptr], returns: FFIType.void }, + tlsn_free_verification_result: { + args: [FFIType.ptr], + returns: FFIType.void, + }, + tlsn_free_string: { args: [FFIType.ptr], returns: FFIType.void }, +} ``` ## FFI Struct Layouts ### NotaryConfig (40 bytes) + - signing_key ptr (8 bytes) -- signing_key_len (8 bytes) +- signing_key_len (8 bytes) - max_sent_data (8 bytes) - max_recv_data (8 bytes) - server_port (2 bytes + padding) ### VerificationResultFFI (40 bytes) + - status (4 bytes + 4 padding) - server_name ptr (8 bytes) - connection_time (8 bytes) @@ -75,5 +96,6 @@ const symbols = { ## SDK Integration (Already Complete) Package `@kynesyslabs/demosdk` v2.7.2 has `tlsnotary/` module with: + - TLSNotary class: initialize(), attest(), verify(), getTranscript() - Located in `/home/tcsenpai/kynesys/sdks/src/tlsnotary/` diff --git a/.serena/memories/typescript_audit_complete_2025_12_17.md b/.serena/memories/typescript_audit_complete_2025_12_17.md index 58fe8125a..c6a74cf86 100644 --- a/.serena/memories/typescript_audit_complete_2025_12_17.md +++ b/.serena/memories/typescript_audit_complete_2025_12_17.md @@ -3,37 +3,44 @@ ## Date: 2025-12-17 ## Summary + Comprehensive TypeScript type-check audit completed. Reduced errors from 38 to 2 (95% reduction). Remaining 2 errors in fhe_test.ts closed as not planned. Production code has 0 type errors. 
## Issues Completed ### Fixed Issues -| Issue | Category | Errors Fixed | Solution | -|-------|----------|--------------|----------| -| node-c98 | UrlValidationResult | 6 | Type imports and interface fixes | -| node-01y | executeNativeTransaction | 2 | Return type fixes | -| node-u9a | IMP Signaling | 2 | log.debug args, signedData→signature | -| node-tus | Network Module | 6 | Named exports, signature type, originChainType | -| node-eph | SDK Missing Exports | 4 | Created local types.ts for EncryptedTransaction, SubnetPayload | -| node-9x8 | OmniProtocol | 11 | Catch blocks, bigint→number, Buffer casts, union types | -| node-clk | Deprecated Crypto | 2 | Removed dead code (saveEncrypted/loadEncrypted) | -| (untracked) | showPubkey.ts | 1 | Uint8Array cast | + +| Issue | Category | Errors Fixed | Solution | +| ----------- | ------------------------ | ------------ | -------------------------------------------------------------- | +| node-c98 | UrlValidationResult | 6 | Type imports and interface fixes | +| node-01y | executeNativeTransaction | 2 | Return type fixes | +| node-u9a | IMP Signaling | 2 | log.debug args, signedData→signature | +| node-tus | Network Module | 6 | Named exports, signature type, originChainType | +| node-eph | SDK Missing Exports | 4 | Created local types.ts for EncryptedTransaction, SubnetPayload | +| node-9x8 | OmniProtocol | 11 | Catch blocks, bigint→number, Buffer casts, union types | +| node-clk | Deprecated Crypto | 2 | Removed dead code (saveEncrypted/loadEncrypted) | +| (untracked) | showPubkey.ts | 1 | Uint8Array cast | ### Excluded/Not Planned -| Issue | Category | Errors | Reason | -|-------|----------|--------|--------| -| node-2e8 | Tests | 4 | Excluded src/tests from tsconfig | -| node-a96 | FHE Test | 2 | Closed as not planned | + +| Issue | Category | Errors | Reason | +| -------- | -------- | ------ | -------------------------------- | +| node-2e8 | Tests | 4 | Excluded src/tests from tsconfig | +| node-a96 | FHE Test | 2 | 
Closed as not planned | ## Key Patterns Discovered ### SDK Type Gaps + When SDK types exist but aren't exported, create local type definitions: + - Created `src/libs/l2ps/types.ts` with EncryptedTransaction, SubnetPayload - Mirror SDK internal types until SDK exports are updated ### Catch Block Error Handling + Standard pattern for unknown error type in catch blocks: + ```typescript } catch (error) { throw new Error(`Message: ${(error as Error).message}`) @@ -41,30 +48,37 @@ Standard pattern for unknown error type in catch blocks: ``` ### Union Type Narrowing + When TypeScript narrows to `never` in switch defaults: + ```typescript message: `Unsupported: ${(payload as KnownType).property}` ``` ### Dead Code Detection + `createCipher`/`createDecipher` were undefined in Bun but node worked fine = dead code paths never executed. ## Configuration Changes + - Added `"src/tests"` to tsconfig.json exclude list ## Files Modified (Key) + - src/libs/l2ps/types.ts (NEW) - src/libs/crypto/cryptography.ts (removed dead code) -- src/libs/omniprotocol/* (11 fixes) -- src/libs/network/* (multiple fixes) +- src/libs/omniprotocol/\* (11 fixes) +- src/libs/network/\* (multiple fixes) - tsconfig.json (exclude src/tests) ## Commits + 1. `fc5abb9e` - fix: resolve 22 TypeScript type errors (38→16 remaining) 2. `20137452` - fix: resolve OmniProtocol type errors (16→5 remaining) 3. `c684bb2a` - fix: remove dead crypto code and fix showPubkey type (4→2 errors) ## Final State + - Production errors: 0 - Test-only errors: 2 (fhe_test.ts - not planned) - Epic node-tsaudit: CLOSED diff --git a/.serena/memories/ud_architecture_patterns.md b/.serena/memories/ud_architecture_patterns.md index 8689b5b27..cdd8e8324 100644 --- a/.serena/memories/ud_architecture_patterns.md +++ b/.serena/memories/ud_architecture_patterns.md @@ -3,6 +3,7 @@ ## Resolution Flow ### Multi-Chain Cascade (5-Network Fallback) + ``` 1. Try Polygon L2 UNS → Success? Return UnifiedDomainResolution 2. Try Base L2 UNS → Success? 
Return UnifiedDomainResolution @@ -14,6 +15,7 @@ ``` ### UnifiedDomainResolution Structure + ```typescript { domain: string // "example.crypto" @@ -36,11 +38,12 @@ ## Verification Flow ### Multi-Address Authorization + ```typescript verifyPayload(payload) { // 1. Resolve domain → get all authorized addresses const resolution = await resolveUDDomain(domain) - + // 2. Check signing address is authorized const matchingAddress = resolution.authorizedAddresses.find( auth => auth.address.toLowerCase() === signingAddress.toLowerCase() @@ -48,7 +51,7 @@ verifyPayload(payload) { if (!matchingAddress) { throw `Address ${signingAddress} not authorized for ${domain}` } - + // 3. Verify signature based on type if (matchingAddress.signatureType === "evm") { const recovered = ethers.verifyMessage(signedData, signature) @@ -61,10 +64,10 @@ verifyPayload(payload) { ) if (!isValid) throw "Invalid Solana signature" } - + // 4. Verify challenge contains Demos public key if (!signedData.includes(demosPublicKey)) throw "Invalid challenge" - + // 5. Store in GCR await saveToGCR(demosAddress, { domain, signingAddress, signatureType, ... }) } @@ -73,28 +76,37 @@ verifyPayload(payload) { ## Storage Pattern (JSONB) ### GCR Structure + ```typescript gcr_main.identities = { - xm: { /* cross-chain */ }, - web2: { /* social */ }, - pqc: { /* post-quantum */ }, - ud: [ // Array of UD identities - { - domain: "example.crypto", - signingAddress: "0x...", // Address that signed - signatureType: "evm", - signature: "0x...", - network: "polygon", - registryType: "UNS", - publicKey: "", - timestamp: 1234567890, - signedData: "Link ... to Demos ..." 
- } - ] + xm: { + /* cross-chain */ + }, + web2: { + /* social */ + }, + pqc: { + /* post-quantum */ + }, + ud: [ + // Array of UD identities + { + domain: "example.crypto", + signingAddress: "0x...", // Address that signed + signatureType: "evm", + signature: "0x...", + network: "polygon", + registryType: "UNS", + publicKey: "", + timestamp: 1234567890, + signedData: "Link ... to Demos ...", + }, + ], } ``` ### Defensive Initialization + ```typescript // New accounts (handleGCR.ts) identities: { xm: {}, web2: {}, pqc: {}, ud: [] } @@ -106,6 +118,7 @@ gcr.identities.ud = gcr.identities.ud || [] ## Helper Methods Pattern ### Conversion Helpers + ```typescript // EVM → Unified evmToUnified(evmResolution): UnifiedDomainResolution @@ -115,6 +128,7 @@ solanaToUnified(solanaResolution): UnifiedDomainResolution ``` ### Signature Detection + ```typescript detectAddressType(address: string): "evm" | "solana" | null validateAddressType(address, expectedType): boolean @@ -122,6 +136,7 @@ isSignableAddress(address): boolean ``` ### Record Extraction + ```typescript fetchDomainRecords(domain, tokenId, provider, registry): Record extractSignableAddresses(records): SignableAddress[] @@ -130,6 +145,7 @@ extractSignableAddresses(records): SignableAddress[] ## Error Messages ### Authorization Failure + ``` Address 0x123... is not authorized for domain example.crypto. Authorized addresses: @@ -138,9 +154,11 @@ Authorized addresses: ``` ### Success Message + ``` Verified ownership of example.crypto via evm signature from crypto.ETH.address ``` ## Future: .demos TLD Support + **Zero code changes required** - domain resolution handles all TLDs automatically via `ethers.namehash()` and registry contracts. 
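The 5-network fallback cascade and the catch-block error pattern described in these memories can be sketched together as follows. This is a minimal illustration, not the actual `udIdentityManager` code: the resolver signature and the trimmed `UnifiedDomainResolution` shape are assumptions based on this memory file.

```typescript
// Sketch: try each network's resolver in priority order, return the first
// success, and aggregate failures using the standard catch-block pattern
// (narrowing the `unknown` error to `Error`).
interface UnifiedDomainResolution {
  domain: string;
  network: string;
}

type Resolver = (domain: string) => Promise<UnifiedDomainResolution>;

async function resolveWithCascade(
  domain: string,
  resolvers: Array<[network: string, resolve: Resolver]>,
): Promise<UnifiedDomainResolution> {
  const errors: string[] = [];
  for (const [network, resolve] of resolvers) {
    try {
      // First network that resolves the domain wins
      return await resolve(domain);
    } catch (error) {
      errors.push(`${network}: ${(error as Error).message}`);
    }
  }
  throw new Error(`Domain ${domain} not found on any network:\n${errors.join("\n")}`);
}
```

In the real flow the resolver list would be Polygon UNS, Base UNS, Sonic UNS, Ethereum UNS, Ethereum CNS, then the Solana helper; the loop shape stays the same.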
diff --git a/.serena/memories/ud_integration_complete.md b/.serena/memories/ud_integration_complete.md index 459300041..20f54b988 100644 --- a/.serena/memories/ud_integration_complete.md +++ b/.serena/memories/ud_integration_complete.md @@ -3,8 +3,10 @@ **Status**: Phase 5 + Points ✅ | **Branch**: `ud_identities` | **Next**: Phase 6 ## ⚠️ IMPORTANT: Solana Integration Note + The Solana integration uses **UD helper pattern** NOT the reverse engineering/API approach documented in old exploration memories. Current implementation: -- Uses existing `udSolanaResolverHelper.ts` + +- Uses existing `udSolanaResolverHelper.ts` - Fetches records directly via Solana program - NO API key required for resolution - Converts to UnifiedDomainResolution format @@ -13,6 +15,7 @@ The Solana integration uses **UD helper pattern** NOT the reverse engineering/AP ## Current Implementation ### Completed Phases + 1. ✅ Signature detection utility (`signatureDetector.ts`) 2. ✅ EVM records fetching (all 5 networks) 3. ✅ Solana integration + UnifiedDomainResolution (via helper) @@ -22,33 +25,38 @@ The Solana integration uses **UD helper pattern** NOT the reverse engineering/AP 7. 
⏸️ SDK client updates (pending) ### Phase 5 Breaking Changes + ```typescript // SavedUdIdentity - NEW structure interface SavedUdIdentity { - domain: string - signingAddress: string // CHANGED from resolvedAddress - signatureType: SignatureType // NEW: "evm" | "solana" - signature: string - publicKey: string - timestamp: number - signedData: string - network: "polygon" | "ethereum" | "base" | "sonic" | "solana" // ADDED solana - registryType: "UNS" | "CNS" + domain: string + signingAddress: string // CHANGED from resolvedAddress + signatureType: SignatureType // NEW: "evm" | "solana" + signature: string + publicKey: string + timestamp: number + signedData: string + network: "polygon" | "ethereum" | "base" | "sonic" | "solana" // ADDED solana + registryType: "UNS" | "CNS" } ``` ### Points System Implementation (NEW) + **Commit**: `c833679d` | **Date**: 2025-01-31 **Point Values**: + - `.demos` TLD domains: **3 points** - Other UD domains: **1 point** **Methods**: + - `awardUdDomainPoints(userId, domain, referralCode?)` - Awards points with duplicate detection - `deductUdDomainPoints(userId, domain)` - Deducts points on domain unlink **Type Extensions**: + ```typescript // GCR_Main.ts - points.breakdown udDomains: { [domain: string]: number } // Track points per domain @@ -72,6 +80,7 @@ interface UserPoints { ``` **Integration**: + - IncentiveManager hooks call PointSystem methods automatically - `udDomainLinked()` → `awardUdDomainPoints()` - `udDomainUnlinked()` → `deductUdDomainPoints()` @@ -79,6 +88,7 @@ interface UserPoints { **Details**: See `session_ud_points_implementation_2025_01_31` memory ### Key Capabilities + - **Multi-chain resolution**: Polygon L2 → Base L2 → Sonic → Ethereum L1 UNS → Ethereum L1 CNS → Solana (via helper) - **Multi-address auth**: Sign with ANY address in domain records (not just owner) - **Dual signature types**: EVM (secp256k1) + Solana (ed25519) @@ -88,7 +98,9 @@ interface UserPoints { ## Integration Status ### Node Repository + 
**Modified**: + - `udIdentityManager.ts`: Resolution + verification logic + Solana integration - `GCRIdentityRoutines.ts`: Field extraction and validation - `IncentiveManager.ts`: Points for domain linking @@ -97,14 +109,17 @@ interface UserPoints { - `GCR_Main.ts`: udDomains breakdown field **Created**: + - `signatureDetector.ts`: Auto-detect signature types - `udSolanaResolverHelper.ts`: Solana resolution (existing, reused) ### SDK Repository + **Current**: v2.4.24 (with UD types from Phase 0-5) **Pending**: Phase 6 client method updates ## Testing Status + - ✅ Type definitions compile - ✅ Field validation functional - ✅ JSONB storage compatible (no migration) @@ -114,6 +129,7 @@ interface UserPoints { ## Next Phase 6 Requirements **SDK Updates** (`../sdks/`):\ + 1. Update `UDIdentityPayload` with `signingAddress` + `signatureType` 2. Remove old `resolvedAddress` field 3. Update `addUnstoppableDomainIdentity()` signature @@ -122,14 +138,17 @@ interface UserPoints { 6. Add `getUDSignableAddresses()` helper method **Files to modify**: + - `src/types/abstraction/index.ts` - `src/abstraction/Identities.ts` ## Dependencies + - Node: `tweetnacl@1.0.3`, `bs58@6.0.0` (for Solana signatures) - SDK: `ethers` (already present) ## Commit History + - `ce3c32a8`: Phase 1 signature detection - `7b9826d8`: Phase 2 EVM records - `10460e41`: Phase 3 & 4 Solana + multi-sig @@ -138,6 +157,7 @@ interface UserPoints { - **Next**: Phase 6 SDK client updates ## Reference + - **Phase 5 details**: See `ud_phase5_complete` memory - **Points implementation**: See `session_ud_points_implementation_2025_01_31` memory - **Phases tracking**: See `ud_phases_tracking` memory for complete timeline diff --git a/.serena/memories/ud_phase5_complete.md b/.serena/memories/ud_phase5_complete.md index e42bfe23a..a5eacb2fd 100644 --- a/.serena/memories/ud_phase5_complete.md +++ b/.serena/memories/ud_phase5_complete.md @@ -15,21 +15,23 @@ Successfully updated identity type definitions to support 
multi-address verifica **File**: `src/model/entities/types/IdentityTypes.ts` **BREAKING CHANGES from Phase 4**: + ```typescript export interface SavedUdIdentity { - domain: string // Unchanged: "brad.crypto" or "example.demos" - signingAddress: string // ✅ CHANGED from resolvedAddress + domain: string // Unchanged: "brad.crypto" or "example.demos" + signingAddress: string // ✅ CHANGED from resolvedAddress signatureType: SignatureType // ✅ NEW: "evm" | "solana" - signature: string // Unchanged - publicKey: string // Unchanged - timestamp: number // Unchanged - signedData: string // Unchanged + signature: string // Unchanged + publicKey: string // Unchanged + timestamp: number // Unchanged + signedData: string // Unchanged network: "polygon" | "ethereum" | "base" | "sonic" | "solana" // ✅ ADDED "solana" registryType: "UNS" | "CNS" // Unchanged } ``` **Key Changes**: + - `resolvedAddress` → `signingAddress`: More accurate - this is the address that SIGNED, not necessarily the domain owner - Added `signatureType`: Indicates whether to use EVM (ethers.verifyMessage) or Solana (nacl.sign.detached.verify) - Added `"solana"` to network union: Supports .demos domains on Solana @@ -41,6 +43,7 @@ export interface SavedUdIdentity { **Method**: `applyUdIdentityAdd()` (lines 470-560) Updated to extract and validate new fields: + ```typescript const { domain, @@ -75,15 +78,22 @@ const data: SavedUdIdentity = { ### 3. Database Storage **Storage Structure** (JSONB column, no migration needed): + ```typescript gcr_main.identities = { - xm: { /* ... */ }, - web2: { /* ... */ }, - pqc: { /* ... */ }, + xm: { + /* ... */ + }, + web2: { + /* ... */ + }, + pqc: { + /* ... 
*/ + }, ud: [ { domain: "example.crypto", - signingAddress: "0x123...", // Address that signed + signingAddress: "0x123...", // Address that signed signatureType: "evm", signature: "0xabc...", network: "polygon", @@ -91,13 +101,13 @@ gcr_main.identities = { }, { domain: "alice.demos", - signingAddress: "ABCD...xyz", // Solana address + signingAddress: "ABCD...xyz", // Solana address signatureType: "solana", signature: "base58...", network: "solana", // ... - } - ] + }, + ], } ``` @@ -108,6 +118,7 @@ gcr_main.identities = { **Method**: `udDomainLinked()` (line 117+) Awards points for first-time UD domain linking: + ```typescript static async udDomainLinked( demosAddress: string, @@ -122,6 +133,7 @@ static async udDomainLinked( ## Documentation Comments Added Added comprehensive JSDoc comments to `SavedUdIdentity`: + ```typescript /** * The Unstoppable Domains identity saved in the GCR @@ -141,6 +153,7 @@ Added comprehensive JSDoc comments to `SavedUdIdentity`: ## Type Safety Verification ✅ **No type errors** in affected files: + - `src/model/entities/types/IdentityTypes.ts` - `src/libs/blockchain/gcr/gcr_routines/GCRIdentityRoutines.ts` - `src/libs/blockchain/gcr/gcr_routines/udIdentityManager.ts` @@ -153,11 +166,12 @@ Added comprehensive JSDoc comments to `SavedUdIdentity`: **No database migration required** ✅ Why: + - `identities` column is JSONB (flexible JSON storage) - Defensive initialization in `GCRIdentityRoutines.applyUdIdentityAdd()`: - ```typescript - accountGCR.identities.ud = accountGCR.identities.ud || [] - ``` + ```typescript + accountGCR.identities.ud = accountGCR.identities.ud || [] + ``` - New accounts: Include `ud: []` in default initialization (handled by GCR system) - Existing accounts: Key auto-added on first UD link operation @@ -166,10 +180,17 @@ Why: ### With Phase 4 (Multi-Signature Verification) Phase 4's `verifyPayload()` method already expects these fields (with backward compatibility): + ```typescript // Phase 4 comment: "Phase 5 will 
update SDK to use signingAddress + signatureType" -const { domain, resolvedAddress, signature, signedData, network, registryType } = - payload.payload +const { + domain, + resolvedAddress, + signature, + signedData, + network, + registryType, +} = payload.payload // Phase 5 completed this - now properly uses signingAddress ``` @@ -177,6 +198,7 @@ const { domain, resolvedAddress, signature, signedData, network, registryType } ### With Storage System All UD identities stored in `gcr_main.identities.ud[]` array: + - Each entry is a `SavedUdIdentity` object - Supports mixed signature types (EVM + Solana in same account) - Queried via `GCRIdentityRoutines` methods @@ -184,6 +206,7 @@ All UD identities stored in `gcr_main.identities.ud[]` array: ### With Incentive System First-time domain linking triggers points: + ```typescript const isFirst = await this.isFirstConnection( "ud", @@ -204,27 +227,31 @@ if (isFirst) { ## Files Modified **Node Repository** (this repo): + - `src/model/entities/types/IdentityTypes.ts` - Interface updates - `src/libs/blockchain/gcr/gcr_routines/GCRIdentityRoutines.ts` - Field extraction and validation - Documentation comments added throughout **SDK Repository** (../sdks) - **Phase 6 pending**: + - Still uses old `UDIdentityPayload` format in `src/types/abstraction/index.ts` - Needs update to match node-side changes ## Backward Compatibility **Breaking Changes**: + - `SavedUdIdentity.resolvedAddress` removed (now `signingAddress`) - New required field: `signatureType` - Network type expanded: added `"solana"` **Migration Path for Existing Data**: + - N/A - No existing UD identities in production yet - If there were, would need script to: - 1. Rename `resolvedAddress` → `signingAddress` - 2. Detect and add `signatureType` based on address format - 3. Update network if needed + 1. Rename `resolvedAddress` → `signingAddress` + 2. Detect and add `signatureType` based on address format + 3. 
Update network if needed ## Testing Checklist @@ -239,6 +266,7 @@ if (isFirst) { **Phase 6: Update SDK Client Methods** (../sdks repository) Required changes: + 1. Update `UDIdentityPayload` in `src/types/abstraction/index.ts` 2. Remove old payload format 3. Use new payload format from `UDResolution.ts` @@ -255,6 +283,6 @@ Required changes: ✅ GCR storage logic updated ✅ Incentive system integration working ✅ No type errors or lint issues -✅ Backward compatibility considered +✅ Backward compatibility considered **Phase 5 Status: COMPLETE** ✅ diff --git a/.serena/memories/ud_phases_tracking.md b/.serena/memories/ud_phases_tracking.md index c4a839a40..f36d17ce1 100644 --- a/.serena/memories/ud_phases_tracking.md +++ b/.serena/memories/ud_phases_tracking.md @@ -4,15 +4,15 @@ ## Phase Status Overview -| Phase | Status | Commit | Description | -|-------|--------|--------|-------------| -| Phase 1 | ✅ Complete | `ce3c32a8` | Signature detection utility | -| Phase 2 | ✅ Complete | `7b9826d8` | EVM records fetching | -| Phase 3 | ✅ Complete | `10460e41` | Solana integration + UnifiedDomainResolution | -| Phase 4 | ✅ Complete | `10460e41` | Multi-signature verification (EVM + Solana) | -| Phase 5 | ✅ Complete | `eff3af6c` | IdentityTypes updates (breaking changes) | -| **Points** | ✅ Complete | `c833679d` | **UD domain points system implementation** | -| Phase 6 | ⏸️ Pending | - | SDK client method updates | +| Phase | Status | Commit | Description | +| ---------- | ----------- | ---------- | -------------------------------------------- | +| Phase 1 | ✅ Complete | `ce3c32a8` | Signature detection utility | +| Phase 2 | ✅ Complete | `7b9826d8` | EVM records fetching | +| Phase 3 | ✅ Complete | `10460e41` | Solana integration + UnifiedDomainResolution | +| Phase 4 | ✅ Complete | `10460e41` | Multi-signature verification (EVM + Solana) | +| Phase 5 | ✅ Complete | `eff3af6c` | IdentityTypes updates (breaking changes) | +| **Points** | ✅ Complete | `c833679d` | **UD domain points 
system implementation** | +| Phase 6 | ⏸️ Pending | - | SDK client method updates | --- @@ -22,11 +22,13 @@ **File**: `src/libs/blockchain/gcr/gcr_routines/signatureDetector.ts` **Created**: + - `detectSignatureType(address)` - Auto-detect EVM vs Solana from address format - `validateAddressType(address, expectedType)` - Validate address matches type - `isSignableAddress(address)` - Check if address is recognized format **Patterns**: + - EVM: `/^0x[0-9a-fA-F]{40}$/` (secp256k1) - Solana: `/^[1-9A-HJ-NP-Za-km-z]{32,44}$/` (ed25519) @@ -38,6 +40,7 @@ **File**: `src/libs/blockchain/gcr/gcr_routines/udIdentityManager.ts` **Changes**: + - `resolveUDDomain()` return type: simple object → `EVMDomainResolution` - Added resolver ABI with `get()` method - Defined `UD_RECORD_KEYS` array (8 common crypto address records) @@ -46,6 +49,7 @@ - Applied to all 5 EVM networks: Polygon, Base, Sonic, Ethereum UNS, Ethereum CNS **Record Keys**: + ```typescript const UD_RECORD_KEYS = [ "crypto.ETH.address", @@ -67,6 +71,7 @@ const UD_RECORD_KEYS = [ **File**: `src/libs/blockchain/gcr/gcr_routines/udIdentityManager.ts` **Changes**: + - Added imports: `UnifiedDomainResolution`, `SolanaDomainResolver` - Created `evmToUnified()` - Converts `EVMDomainResolution` → `UnifiedDomainResolution` - Created `solanaToUnified()` - Converts Solana helper result → `UnifiedDomainResolution` @@ -74,6 +79,7 @@ const UD_RECORD_KEYS = [ - Added Solana fallback after all EVM networks fail **Resolution Cascade**: + 1. Polygon L2 UNS → unified format 2. Base L2 UNS → unified format 3. Sonic UNS → unified format @@ -83,6 +89,7 @@ const UD_RECORD_KEYS = [ 7. 
Throw if domain not found on any network **Temporary Phase 3 Limitation**: + - `verifyPayload()` only supports EVM domains - Solana domains fail with "Phase 3 limitation" message - Phase 4 implements full multi-address verification @@ -95,15 +102,18 @@ const UD_RECORD_KEYS = [ **File**: `src/libs/blockchain/gcr/gcr_routines/udIdentityManager.ts` **Dependencies Added**: + - `tweetnacl@1.0.3` - Solana signature verification - `bs58@6.0.0` - Base58 encoding/decoding **Changes**: + - Completely rewrote `verifyPayload()` for multi-address support - Added `verifySignature()` helper method for dual signature type support - Enhanced error messages with authorized address lists **Verification Flow**: + ```typescript 1. Resolve domain → get UnifiedDomainResolution with authorizedAddresses 2. Check domain has authorized addresses (fail if empty) @@ -114,12 +124,14 @@ const UD_RECORD_KEYS = [ ``` **EVM Signature**: + ```typescript const recoveredAddress = ethers.verifyMessage(signedData, signature) if (recoveredAddress !== authorizedAddress.address) fail ``` **Solana Signature**: + ```typescript const signatureBytes = bs58.decode(signature) const messageBytes = new TextEncoder().encode(signedData) @@ -128,7 +140,7 @@ const publicKeyBytes = bs58.decode(authorizedAddress.address) const isValid = nacl.sign.detached.verify( messageBytes, signatureBytes, - publicKeyBytes + publicKeyBytes, ) ``` @@ -139,22 +151,24 @@ const isValid = nacl.sign.detached.verify( ## Phase 5: Update IdentityTypes ✅ **Commit**: `eff3af6c` -**Files**: +**Files**: + - `src/model/entities/types/IdentityTypes.ts` - `src/libs/blockchain/gcr/gcr_routines/GCRIdentityRoutines.ts` **Breaking Changes**: + ```typescript // OLD (Phase 4) interface SavedUdIdentity { - resolvedAddress: string // ❌ REMOVED + resolvedAddress: string // ❌ REMOVED // ... 
} // NEW (Phase 5) interface SavedUdIdentity { domain: string - signingAddress: string // ✅ CHANGED from resolvedAddress + signingAddress: string // ✅ CHANGED from resolvedAddress signatureType: SignatureType // ✅ NEW: "evm" | "solana" signature: string publicKey: string @@ -166,6 +180,7 @@ interface SavedUdIdentity { ``` **Changes in GCRIdentityRoutines**: + - Updated `applyUdIdentityAdd()` to extract `signingAddress` and `signatureType` - Added field validation for new required fields - Updated storage logic to use new field names @@ -180,22 +195,26 @@ interface SavedUdIdentity { **Commit**: `c833679d` **Date**: 2025-01-31 -**Files**: +**Files**: + - `src/features/incentive/PointSystem.ts` - `src/model/entities/GCRv2/GCR_Main.ts` **Purpose**: Incentivize UD domain linking with TLD-based rewards ### Point Values + - `.demos` TLD domains: **3 points** - Other UD domains: **1 point** ### Methods Implemented #### awardUdDomainPoints(userId, domain, referralCode?) + **Location**: PointSystem.ts:866-934 **Features**: + - Automatic TLD detection (`domain.toLowerCase().endsWith(".demos")`) - Duplicate domain linking prevention - Referral code support @@ -203,6 +222,7 @@ interface SavedUdIdentity { - Returns `RPCResponse` with points awarded and updated total **Logic Flow**: + ```typescript 1. Determine point value based on TLD 2. Check for duplicate domain in GCR breakdown.udDomains @@ -211,15 +231,18 @@ interface SavedUdIdentity { ``` #### deductUdDomainPoints(userId, domain) + **Location**: PointSystem.ts:942-1001 **Features**: + - TLD-based point calculation (matching award logic) - Domain-specific point tracking verification - Safe deduction (checks if points exist first) - Returns `RPCResponse` with points deducted and updated total **Logic Flow**: + ```typescript 1. Determine point value based on TLD 2. 
Verify domain exists in GCR breakdown.udDomains @@ -230,6 +253,7 @@ interface SavedUdIdentity { ### Infrastructure Updates #### GCR Entity Extensions (GCR_Main.ts) + ```typescript // Added to points.breakdown udDomains: { [domain: string]: number } // Track points per domain @@ -237,21 +261,23 @@ telegram: number // Added to socialAccounts ``` #### PointSystem Type Updates + ```typescript // Extended addPointsToGCR() type parameter type: "web3Wallets" | "socialAccounts" | "udDomains" // Added udDomains handling in addPointsToGCR() if (type === "udDomains") { - account.points.breakdown.udDomains = + account.points.breakdown.udDomains = account.points.breakdown.udDomains || {} - account.points.breakdown.udDomains[platform] = - oldDomainPoints + points + account.points.breakdown.udDomains[platform] = oldDomainPoints + points } ``` #### Local UserPoints Interface + Created local interface matching GCR structure to avoid SDK circular dependencies: + ```typescript interface UserPoints { // ... existing fields @@ -259,11 +285,11 @@ interface UserPoints { web3Wallets: { [chain: string]: number } socialAccounts: { twitter: number - github: number + github: number discord: number - telegram: number // ✅ NEW + telegram: number // ✅ NEW } - udDomains: { [domain: string]: number } // ✅ NEW + udDomains: { [domain: string]: number } // ✅ NEW referrals: number demosFollow: number } @@ -274,6 +300,7 @@ interface UserPoints { ### Integration with IncentiveManager **Existing Hooks** (IncentiveManager.ts:117-137): + ```typescript static async udDomainLinked( userId: string, @@ -298,6 +325,7 @@ static async udDomainUnlinked( These hooks are called automatically when UD identities are added/removed via `udIdentityManager`. 
### Testing & Validation + - ✅ TypeScript compilation: All errors resolved - ✅ ESLint: All files pass linting - ✅ Pattern consistency: Matches web3Wallets/socialAccounts implementation @@ -306,17 +334,20 @@ These hooks are called automatically when UD identities are added/removed via `u ### Design Decisions **Why TLD-based rewards?** + - `.demos` domains directly promote Demos Network branding - Higher reward incentivizes ecosystem adoption - Simple rule: easy for users to understand **Why local UserPoints interface?** + - Avoid SDK circular dependencies during rapid iteration - Ensure type consistency with GCR entity structure - Enable development without rebuilding SDK - FIXME comment added for future SDK migration **Why domain-level tracking in breakdown?** + - Prevents duplicate point awards for same domain - Enables accurate point deduction on unlink - Matches existing pattern (web3Wallets per chain, socialAccounts per platform) @@ -338,11 +369,12 @@ These hooks are called automatically when UD identities are added/removed via `u ### Required Changes #### 1. Update Types (`src/types/abstraction/index.ts`) + ```typescript // REMOVE old format export interface UDIdentityPayload { domain: string - resolvedAddress: string // ❌ DELETE + resolvedAddress: string // ❌ DELETE signature: string publicKey: string signedData: string @@ -351,7 +383,7 @@ export interface UDIdentityPayload { // ADD new format export interface UDIdentityPayload { domain: string - signingAddress: string // ✅ NEW + signingAddress: string // ✅ NEW signatureType: SignatureType // ✅ NEW signature: string publicKey: string @@ -362,6 +394,7 @@ export interface UDIdentityPayload { #### 2. 
Update Methods (`src/abstraction/Identities.ts`) **Update `generateUDChallenge()`**: + ```typescript // OLD generateUDChallenge(demosPublicKey: string): string @@ -376,6 +409,7 @@ generateUDChallenge( ``` **Update `addUnstoppableDomainIdentity()`**: + ```typescript // OLD async addUnstoppableDomainIdentity( @@ -397,7 +431,7 @@ async addUnstoppableDomainIdentity( ) { // Detect signature type from address format const signatureType = detectAddressType(signingAddress) - + const payload: UDIdentityAssignPayload = { method: "ud_identity_assign", payload: { @@ -414,6 +448,7 @@ async addUnstoppableDomainIdentity( ``` #### 3. Add Helper Method (NEW) + ```typescript /** * Get all signable addresses for a UD domain @@ -430,11 +465,13 @@ async getUDSignableAddresses( ### Phase 6 Testing Requirements **Unit Tests**: + - Challenge generation with signing address - Signature type auto-detection - Multi-address payload creation **Integration Tests**: + - End-to-end UD identity verification flow - EVM domain + EVM signature - Solana domain + Solana signature @@ -449,7 +486,7 @@ async getUDSignableAddresses( **Phase 3 → Phase 4**: UnifiedDomainResolution provides authorizedAddresses **Phase 4 → Phase 5**: Verification logic expects new type structure **Phase 5 → Points**: Identity storage structure enables points tracking -**Points → Phase 6**: SDK must match node implementation for client usage +**Points → Phase 6**: SDK must match node implementation for client usage --- @@ -459,8 +496,9 @@ async getUDSignableAddresses( **Latest Commit**: `c833679d` (UD points system) **Next Action**: Update SDK client methods in `../sdks/` repository **Breaking Changes**: Phases 4, 5, 6 all introduce breaking changes -**Testing**: End-to-end testing blocked until Phase 6 complete +**Testing**: End-to-end testing blocked until Phase 6 complete For detailed implementation sessions: + - Phase 5 details: See `ud_phase5_complete` memory - Points implementation: See 
`session_ud_points_implementation_2025_01_31` memory diff --git a/.serena/memories/ud_security_patterns.md b/.serena/memories/ud_security_patterns.md index a5fad31aa..2534a94e0 100644 --- a/.serena/memories/ud_security_patterns.md +++ b/.serena/memories/ud_security_patterns.md @@ -3,24 +3,26 @@ ## Ownership Verification Architecture ### Core Principle + **Blockchain State as Source of Truth**: UD domains are NFTs that can be transferred. All ownership decisions must be verified on-chain, not from cached GCR data. ### Verification Flow Pattern + ```typescript // STANDARD PATTERN for UD ownership verification async verifyUdDomainOwnership(userId: string, domain: string): boolean { // 1. Get user's linked wallets from GCR const { linkedWallets } = await getUserIdentitiesFromGCR(userId) - + // 2. Resolve domain on-chain to get current authorized addresses const domainResolution = await UDIdentityManager.resolveUDDomain(domain) - + // 3. Extract wallet addresses (format: "chain:address" → "address") const userWalletAddresses = linkedWallets.map(wallet => { const parts = wallet.split(':') return parts.length > 1 ? parts[1] : wallet }) - + // 4. 
Check ownership with chain-specific comparison
     const isOwner = domainResolution.authorizedAddresses.some(authAddr =>
         userWalletAddresses.some(userAddr => {
@@ -32,7 +34,7 @@ async verifyUdDomainOwnership(userId: string, domain: string): boolean {
             return authAddr.address.toLowerCase() === userAddr.toLowerCase()
         })
     )
-
+
     return isOwner
 }
 ```
@@ -40,16 +42,20 @@ async verifyUdDomainOwnership(userId: string, domain: string): boolean {
 ## Security Checkpoints

 ### Domain Linking (Award Points)
+
 **Location**: `src/features/incentive/PointSystem.ts::awardUdDomainPoints()`
 **Security**: ✅ Verified via UDIdentityManager.verifyPayload()
+
 - Resolves domain to get authorized addresses
 - Verifies signature from authorized wallet
 - Checks Demos public key in challenge message
 - Only awards points if all verification passes

 ### Domain Unlinking (Deduct Points)
+
 **Location**: `src/features/incentive/PointSystem.ts::deductUdDomainPoints()`
 **Security**: ✅ Verified via UDIdentityManager.resolveUDDomain()
+
 - Resolves domain to get current authorized addresses
 - Compares against user's linked wallets
 - Blocks deduction if user doesn't own domain
@@ -58,7 +64,9 @@ async verifyUdDomainOwnership(userId: string, domain: string): boolean {
 ## Multi-Chain Considerations

 ### Domain Resolution Priority
+
 **EVM Networks** (in order):
+
 1. Polygon UNS Registry
 2. Base UNS Registry
 3. Sonic UNS Registry
@@ -66,16 +74,20 @@ async verifyUdDomainOwnership(userId: string, domain: string): boolean {
 5. Ethereum CNS Registry (legacy)

 **Solana Network**:
+
 - Fallback for .demos and other Solana domains
 - Uses SolanaDomainResolver for resolution

 ### Signature Type Handling
+
 **EVM Addresses**:
+
 - Format: 0x-prefixed hex (40 characters)
 - Comparison: Case-insensitive
 - Verification: ethers.verifyMessage()

 **Solana Addresses**:
+
 - Format: Base58-encoded (32 bytes)
 - Comparison: Case-sensitive
 - Verification: nacl.sign.detached.verify()
@@ -83,6 +95,7 @@ async verifyUdDomainOwnership(userId: string, domain: string): boolean {
 ## Error Handling Patterns

 ### Domain Not Resolvable
+
 ```typescript
 try {
     domainResolution = await UDIdentityManager.resolveUDDomain(domain)
@@ -92,19 +105,20 @@ try {
         response: {
             message: `Cannot verify ownership: domain ${domain} is not resolvable`,
         },
-        extra: { error: error.message }
+        extra: { error: error.message },
     }
 }
 ```

 ### Ownership Verification Failed
+
 ```typescript
 if (!isOwner) {
     return {
         result: 400,
         response: {
             message: `Cannot deduct points: domain ${domain} is not owned by any of your linked wallets`,
-        }
+        },
     }
 }
 ```
@@ -112,6 +126,7 @@ if (!isOwner) {
 ## Testing Considerations

 ### Test Scenarios
+
 1. **Happy Path**: User owns domain → deduction succeeds
 2. **Transfer Scenario**: User transferred domain → deduction fails with 400
 3. **Resolution Failure**: Domain expired/deleted → returns 400 with clear error
@@ -123,17 +138,20 @@ if (!isOwner) {
 ## Integration Points

 ### UDIdentityManager API
+
 **Public Methods**:
+
 - `resolveUDDomain(domain: string): Promise`
-  - Returns authorized addresses and network metadata
-  - Throws if domain not resolvable
-
+    - Returns authorized addresses and network metadata
+    - Throws if domain not resolvable
 - `verifyPayload(payload: UDIdentityAssignPayload, sender: string)`
-  - Full signature verification for domain linking
-  - Includes ownership + signature validation
+    - Full signature verification for domain linking
+    - Includes ownership + signature validation

 ### PointSystem Integration
+
 **Dependencies**:
+
 - `getUserIdentitiesFromGCR()`: Get user's linked wallets
 - `UDIdentityManager.resolveUDDomain()`: Get current domain ownership
 - `addPointsToGCR()`: Execute point changes after verification
@@ -141,17 +159,23 @@ if (!isOwner) {
 ## Security Vulnerability Prevention

 ### Prevented Attack: Domain Transfer Abuse
+
 **Scenario**: Attacker transfers domain after earning points
+
 - ✅ **Protected**: Ownership verified on-chain before deduction
 - ✅ **Result**: Attacker loses points when domain transferred

 ### Prevented Attack: Same Domain Multiple Accounts
+
 **Scenario**: Same domain linked to multiple accounts
+
 - ✅ **Protected**: Duplicate linking check in awardUdDomainPoints()
 - ✅ **Protected**: Ownership verification in deductUdDomainPoints()
 - ✅ **Result**: Each domain can only earn points once per account

 ### Prevented Attack: Expired Domain Points
+
 **Scenario**: Domain expires but points remain
+
 - ✅ **Protected**: Resolution failure prevents deduction
 - ⚠️ **Note**: Points remain awarded (acceptable - user earned them legitimately)
diff --git a/.serena/memories/ud_technical_reference.md b/.serena/memories/ud_technical_reference.md
index 20606c259..c878cabf4 100644
--- a/.serena/memories/ud_technical_reference.md
+++ b/.serena/memories/ud_technical_reference.md
@@ -3,6 +3,7 @@
 ## Network Configuration

 ### EVM Networks (Priority Order)
+
 1. **Polygon L2**: `0x0E2846C302E5E05C64d5FaA0365b1C2aE48AD2Ad` | `https://polygon-rpc.com`
 2. **Base L2**: `0xF6c1b83977DE3dEffC476f5048A0a84d3375d498` | `https://mainnet.base.org`
 3. **Sonic**: `0xDe1DAdcF11a7447C3D093e97FdbD513f488cE3b4` | `https://rpc.soniclabs.com`
@@ -10,6 +11,7 @@
 5. **Ethereum CNS**: `0xD1E5b0FF1287aA9f9A268759062E4Ab08b9Dacbe` | `https://eth.llamarpc.com`

 ### Solana Network
+
 - **UD Program**: `6eLvwb1dwtV5coME517Ki53DojQaRLUctY9qHqAsS9G2`
 - **RPC**: `https://api.mainnet-beta.solana.com`
 - **Resolution**: Via `udSolanaResolverHelper.ts` (direct Solana program interaction)
@@ -18,6 +20,7 @@
 ## Record Keys Priority

 **Signable Records** (support multi-address verification):
+
 - `crypto.ETH.address` - Primary EVM
 - `crypto.SOL.address` - Primary Solana
 - `crypto.MATIC.address` - Polygon native
@@ -27,6 +30,7 @@
 - `token.SOL.SOL.USDC.address` - Solana USDC

 **Non-Signable** (skip):
+
 - `crypto.BTC.address` - Bitcoin can't sign Demos challenges
 - `ipfs.html.value` - Not an address
 - `dns.*` - Not an address
@@ -34,6 +38,7 @@
 ## Signature Detection Patterns

 ### Address Formats
+
 ```typescript
 // EVM: 0x prefix + 40 hex chars
 /^0x[0-9a-fA-F]{40}$/
@@ -43,22 +48,26 @@
 ```

 ### Verification Methods
+
 **EVM**: `ethers.verifyMessage(signedData, signature)` → recoveredAddress
 **Solana**: `nacl.sign.detached.verify(messageBytes, signatureBytes, publicKeyBytes)` → boolean

 ## Test Data Examples

 ### EVM Domain (sir.crypto on Polygon)
+
 - Owner: `0x45238D633D6a1d18ccde5fFD234958ECeA46eB86`
 - Records: Sparse (2/11 populated)
 - Signable: 1 EVM address

 ### Solana Domain (thecookingsenpai.demos)
+
 - Records: Rich (4/11 populated)
 - Signable: 2 EVM + 2 Solana addresses
 - Multi-chain from start

 ## Environment Variables
+
 ```bash
 ETHEREUM_RPC=https://eth.llamarpc.com # EVM resolution
 # Solana resolution via helper - no API key needed
diff --git a/.serena/project.yml b/.serena/project.yml
index 93009accd..1fd3f0609 100644
--- a/.serena/project.yml
+++ b/.serena/project.yml
@@ -21,7 +21,7 @@ read_only: false
 # list of tool names to exclude. We recommend not excluding any tools, see the readme for more details.
 # Below is the complete list of tools for convenience.
-# To make sure you have the latest list of tools, and to view their descriptions, 
+# To make sure you have the latest list of tools, and to view their descriptions,
 # execute `uv run scripts/print_tool_overview.py`.
 #
 # * `activate_project`: Activates a project by name.
diff --git a/AGENTS.md b/AGENTS.md
index b83240c64..10448f961 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -1,4 +1,5 @@
 # AI Agent Instructions for Demos Network
+
 # Demos Network Agent Instructions

 ## Issue Tracking with bd (beads)
@@ -15,11 +16,13 @@
 ### Quick Start

 **Check for ready work:**
+
 ```bash
 bd ready --json
 ```

 **Create new issues:**
+
 ```bash
 bd create "Issue title" -t bug|feature|task -p 0-4 --json
 bd create "Issue title" -p 1 --deps discovered-from:bd-123 --json
@@ -27,12 +30,14 @@ bd create "Subtask" --parent --json # Hierarchical subtask (gets ID l
 ```

 **Claim and update:**
+
 ```bash
 bd update bd-42 --status in_progress --json
 bd update bd-42 --priority 1 --json
 ```

 **Complete work:**
+
 ```bash
 bd close bd-42 --reason "Completed" --json
 ```
@@ -59,13 +64,14 @@ bd close bd-42 --reason "Completed" --json
 2. **Claim your task**: `bd update --status in_progress`
 3. **Work on it**: Implement, test, document
 4. **Discover new work?** Create linked issue:
-   - `bd create "Found bug" -p 1 --deps discovered-from:`
+    - `bd create "Found bug" -p 1 --deps discovered-from:`
 5. **Complete**: `bd close --reason "Done"`
 6. **Commit together**: Always commit the `.beads/issues.jsonl` file together with the code changes so issue state stays in sync with code state

 ### Auto-Sync

 bd automatically syncs with git:
+
 - Exports to `.beads/issues.jsonl` after changes (5s debounce)
 - Imports from JSONL when newer (e.g., after `git pull`)
 - No manual export/import needed!
@@ -84,12 +90,13 @@ pip install beads-mcp
 ```

 Add to MCP config (e.g., `~/.config/claude/config.json`):
+
 ```json
 {
-  "beads": {
-    "command": "beads-mcp",
-    "args": []
-  }
+    "beads": {
+        "command": "beads-mcp",
+        "args": []
+    }
 }
 ```
@@ -98,6 +105,7 @@ Then use `mcp__beads__*` functions instead of CLI commands.
 ### Managing AI-Generated Planning Documents

 AI assistants often create planning and design documents during development:
+
 - PLAN.md, IMPLEMENTATION.md, ARCHITECTURE.md
 - DESIGN.md, CODEBASE_SUMMARY.md, INTEGRATION_PLAN.md
 - TESTING_GUIDE.md, TECHNICAL_DESIGN.md, and similar files
@@ -105,18 +113,21 @@ AI assistants often create planning and design documents during development:
 **Best Practice: Use a dedicated directory for these ephemeral files**

 **Recommended approach:**
+
 - Create a `history/` directory in the project root
 - Store ALL AI-generated planning/design docs in `history/`
 - Keep the repository root clean and focused on permanent project files
 - Only access `history/` when explicitly asked to review past planning

 **Example .gitignore entry (optional):**
+
 ```
 # AI planning documents (ephemeral)
 history/
 ```

 **Benefits:**
+
 - Clean repository root
 - Clear separation between ephemeral and permanent documentation
 - Easy to exclude from version control if desired
@@ -153,17 +164,18 @@ For more details, see README.md and QUICKSTART.md.
 2. **Run quality gates** (if code changed) - Tests, linters, builds
 3. **Update issue status** - Close finished work, update in-progress items
 4. **PUSH TO REMOTE** - This is MANDATORY:
-   ```bash
-   git pull --rebase
-   bd sync
-   git push
-   git status # MUST show "up to date with origin"
-   ```
+    ```bash
+    git pull --rebase
+    bd sync
+    git push
+    git status # MUST show "up to date with origin"
+    ```
 5. **Clean up** - Clear stashes, prune remote branches
 6. **Verify** - All changes committed AND pushed
 7. **Hand off** - Provide context for next session

 **CRITICAL RULES:**
+
 - Work is NOT complete until `git push` succeeds
 - NEVER stop before pushing - that leaves work stranded locally
 - NEVER say "ready to push when you are" - YOU must push
diff --git a/CONSOLE_LOG_AUDIT.md b/CONSOLE_LOG_AUDIT.md
index 2cdf8a5c0..2bf9ea6a0 100644
--- a/CONSOLE_LOG_AUDIT.md
+++ b/CONSOLE_LOG_AUDIT.md
@@ -14,58 +14,63 @@ These bypass the async buffering optimization and can block the event loop.
 These run during normal node operation and should be converted to CategorizedLogger:

 ### Consensus Module (`src/libs/consensus/`)
-| File | Lines | Category |
-|------|-------|----------|
-| `v2/PoRBFT.ts` | 245, 332-333, 527, 533 | CONSENSUS |
-| `v2/types/secretaryManager.ts` | 900 | CONSENSUS |
-| `v2/routines/getShard.ts` | 18 | CONSENSUS |
-| `routines/proofOfConsensus.ts` | 15-57 (many) | CONSENSUS |
+
+| File                           | Lines                  | Category  |
+| ------------------------------ | ---------------------- | --------- |
+| `v2/PoRBFT.ts`                 | 245, 332-333, 527, 533 | CONSENSUS |
+| `v2/types/secretaryManager.ts` | 900                    | CONSENSUS |
+| `v2/routines/getShard.ts`      | 18                     | CONSENSUS |
+| `routines/proofOfConsensus.ts` | 15-57 (many)           | CONSENSUS |

 ### Network Module (`src/libs/network/`)
-| File | Lines | Category |
-|------|-------|----------|
-| `endpointHandlers.ts` | 112-642 (many) | NETWORK |
-| `server_rpc.ts` | 431-432 | NETWORK |
-| `manageExecution.ts` | 19-117 (many) | NETWORK |
-| `manageNodeCall.ts` | 47-466 (many) | NETWORK |
-| `manageHelloPeer.ts` | 36 | NETWORK |
-| `manageConsensusRoutines.ts` | 194-333 | CONSENSUS |
-| `routines/timeSync.ts` | 30-84 (many) | NETWORK |
-| `routines/nodecalls/*.ts` | Multiple files | NETWORK |
+
+| File                         | Lines          | Category  |
+| ---------------------------- | -------------- | --------- |
+| `endpointHandlers.ts`        | 112-642 (many) | NETWORK   |
+| `server_rpc.ts`              | 431-432        | NETWORK   |
+| `manageExecution.ts`         | 19-117 (many)  | NETWORK   |
+| `manageNodeCall.ts`          | 47-466 (many)  | NETWORK   |
+| `manageHelloPeer.ts`         | 36             | NETWORK   |
+| `manageConsensusRoutines.ts` | 194-333        | CONSENSUS |
+| `routines/timeSync.ts`       | 30-84 (many)   | NETWORK   |
+| `routines/nodecalls/*.ts`    | Multiple files | NETWORK   |

 ### Peer Module (`src/libs/peer/`)
-| File | Lines | Category |
-|------|-------|----------|
-| `Peer.ts` | 113, 125 | PEER |
-| `PeerManager.ts` | 52-371 (many) | PEER |
-| `routines/checkOfflinePeers.ts` | 9-27 | PEER |
-| `routines/peerBootstrap.ts` | 31-100 (many) | PEER |
-| `routines/peerGossip.ts` | 228 | PEER |
-| `routines/getPeerConnectionString.ts` | 35-39 | PEER |
-| `routines/getPeerIdentity.ts` | 32-76 (many) | PEER |
+
+| File                                  | Lines         | Category |
+| ------------------------------------- | ------------- | -------- |
+| `Peer.ts`                             | 113, 125      | PEER     |
+| `PeerManager.ts`                      | 52-371 (many) | PEER     |
+| `routines/checkOfflinePeers.ts`       | 9-27          | PEER     |
+| `routines/peerBootstrap.ts`           | 31-100 (many) | PEER     |
+| `routines/peerGossip.ts`              | 228           | PEER     |
+| `routines/getPeerConnectionString.ts` | 35-39         | PEER     |
+| `routines/getPeerIdentity.ts`         | 32-76 (many)  | PEER     |

 ### Blockchain Module (`src/libs/blockchain/`)
-| File | Lines | Category |
-|------|-------|----------|
-| `transaction.ts` | 115-490 (many) | CHAIN |
-| `chain.ts` | 57-666 (many) | CHAIN |
-| `routines/Sync.ts` | 283, 368 | SYNC |
-| `routines/validateTransaction.ts` | 38-288 (many) | CHAIN |
-| `routines/executeOperations.ts` | 51-98 | CHAIN |
-| `gcr/gcr.ts` | 212-1052 (many) | CHAIN |
-| `gcr/handleGCR.ts` | 280-399 (many) | CHAIN |
+
+| File                              | Lines           | Category |
+| --------------------------------- | --------------- | -------- |
+| `transaction.ts`                  | 115-490 (many)  | CHAIN    |
+| `chain.ts`                        | 57-666 (many)   | CHAIN    |
+| `routines/Sync.ts`                | 283, 368        | SYNC     |
+| `routines/validateTransaction.ts` | 38-288 (many)   | CHAIN    |
+| `routines/executeOperations.ts`   | 51-98           | CHAIN    |
+| `gcr/gcr.ts`                      | 212-1052 (many) | CHAIN    |
+| `gcr/handleGCR.ts`                | 280-399 (many)  | CHAIN    |

 ### OmniProtocol Module (`src/libs/omniprotocol/`)
-| File | Lines | Category |
-|------|-------|----------|
-| `transport/PeerConnection.ts` | 407, 464 | NETWORK |
-| `transport/ConnectionPool.ts` | 409 | NETWORK |
-| `transport/TLSConnection.ts` | 104-189 (many) | NETWORK |
-| `server/OmniProtocolServer.ts` | 76-181 (many) | NETWORK |
-| `server/InboundConnection.ts` | 55-227 (many) | NETWORK |
-| `server/TLSServer.ts` | 110-289 (many) | NETWORK |
-| `protocol/handlers/*.ts` | Multiple files | NETWORK |
-| `integration/*.ts` | Multiple files | NETWORK |
+
+| File                           | Lines          | Category |
+| ------------------------------ | -------------- | -------- |
+| `transport/PeerConnection.ts`  | 407, 464       | NETWORK  |
+| `transport/ConnectionPool.ts`  | 409            | NETWORK  |
+| `transport/TLSConnection.ts`   | 104-189 (many) | NETWORK  |
+| `server/OmniProtocolServer.ts` | 76-181 (many)  | NETWORK  |
+| `server/InboundConnection.ts`  | 55-227 (many)  | NETWORK  |
+| `server/TLSServer.ts`          | 110-289 (many) | NETWORK  |
+| `protocol/handlers/*.ts`       | Multiple files | NETWORK  |
+| `integration/*.ts`             | Multiple files | NETWORK  |

 ---

@@ -74,36 +79,41 @@ These run less frequently but still during operation:
 ### Identity Module (`src/libs/identity/`)
-| File | Lines | Category |
-|------|-------|----------|
+
+| File               | Lines    | Category |
+| ------------------ | -------- | -------- |
 | `tools/twitter.ts` | 456, 572 | IDENTITY |
-| `tools/discord.ts` | 106 | IDENTITY |
+| `tools/discord.ts` | 106      | IDENTITY |

 ### Abstraction Module (`src/libs/abstraction/`)
-| File | Lines | Category |
-|------|-------|----------|
-| `index.ts` | 253 | IDENTITY |
-| `web2/github.ts` | 25 | IDENTITY |
-| `web2/parsers.ts` | 53 | IDENTITY |
+
+| File              | Lines | Category |
+| ----------------- | ----- | -------- |
+| `index.ts`        | 253   | IDENTITY |
+| `web2/github.ts`  | 25    | IDENTITY |
+| `web2/parsers.ts` | 53    | IDENTITY |

 ### Crypto Module (`src/libs/crypto/`)
-| File | Lines | Category |
-|------|-------|----------|
-| `cryptography.ts` | 28-271 (many) | CORE |
-| `forgeUtils.ts` | 8-45 | CORE |
-| `pqc/enigma.ts` | 47 | CORE |
+
+| File              | Lines         | Category |
+| ----------------- | ------------- | -------- |
+| `cryptography.ts` | 28-271 (many) | CORE     |
+| `forgeUtils.ts`   | 8-45          | CORE     |
+| `pqc/enigma.ts`   | 47            | CORE     |

 ---

 ## 🟢 LOW PRIORITY - Cold Paths

 ### Startup/Shutdown (`src/index.ts`)
+
 - Lines: 387, 477-565 (shutdown handlers, startup logs)
 - These run once, acceptable as console for visibility

 ### Feature Modules (Occasional Use)
+
 - `src/features/multichain/*.ts` - XM operations
-- `src/features/fhe/*.ts` - FHE operations 
+- `src/features/fhe/*.ts` - FHE operations
 - `src/features/bridges/*.ts` - Bridge operations
 - `src/features/web2/*.ts` - Web2 proxy
 - `src/features/InstantMessagingProtocol/*.ts` - IM server
@@ -129,16 +139,19 @@ These are CLI utilities where console.log is appropriate:
 ## Recommendations

 ### Immediate Actions (P0)
-1. Convert consensus hot path logs to `log.debug()` 
+
+1. Convert consensus hot path logs to `log.debug()`
 2. Convert peer/network hot path logs to `log.debug()`
 3. Convert blockchain validation logs to `log.debug()`

 ### Short Term (P1)
+
 4. Convert OmniProtocol logs to CategorizedLogger
 5. Convert GCR operation logs to CategorizedLogger
 6. Add `OMNI` or similar category for OmniProtocol

 ### Medium Term (P2)
+
 7. Audit feature modules and convert where needed
 8. Consider adding more log categories for better filtering
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 6f2c3a16f..f20e961b4 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -31,11 +31,13 @@ Please refer to [INSTALL.md](INSTALL.md) for all the necessary informations on h
 ### 1. Create a Feature Branch

 Always create a feature branch for your work:
+
 ```bash
 git checkout -b feature/your-feature-name
 ```

 Branch naming conventions:
+
 - `feature/` - New features
 - `fix/` - Bug fixes
 - `refactor/` - Code refactoring
@@ -62,11 +64,13 @@ bun run lint:fix # Auto-fix issues
 ### 4. Commit Guidelines

 Write clear, descriptive commit messages:
+
 - Use present tense ("Add feature" not "Added feature")
 - Keep the first line under 50 characters
 - Reference issues when applicable (`Fixes #123`)

 Example:
+
 ```
 Add multichain transaction validation
@@ -92,14 +96,15 @@ Fixes #456

 ### Pull Request Template

 When opening a PR, please include:
+
 - **Description** - What does this PR do?
 - **Motivation** - Why is this change needed?
 - **Testing** - How has this been tested?
 - **Breaking changes** - Does this break existing functionality?
 - **Issues** - Link to related issues
-
 ### Key Principles
+
 - **Modularity** - Keep features isolated and reusable
 - **Type Safety** - Leverage TypeScript for full type coverage
 - **Error Handling** - Comprehensive error handling and validation
@@ -108,7 +113,9 @@ When opening a PR, please include:
 ## 🐛 Reporting Issues

 ### Bug Reports
+
 Include:
+
 - Clear description of the bug
 - Steps to reproduce
 - Expected vs actual behavior
@@ -116,7 +123,9 @@ Include:
 - Relevant logs or error messages

 ### Feature Requests
+
 Include:
+
 - Use case and motivation
 - Proposed solution
 - Alternative solutions considered
@@ -125,11 +134,13 @@ Include:
 ## 💡 Development Tips

 ### Using Bun Effectively
+
 - **Always use Bun** for package management (`bun add`, not `npm install`)
 - Run TypeScript directly with Bun (`bun src/index.ts`)
 - Leverage Bun's built-in test runner (`bun test`)

 ### Code Style
+
 - Use double quotes for strings
 - No semicolons at statement ends
 - camelCase for variables and functions
@@ -137,6 +148,7 @@ Include:
 - See [GUIDELINES/CODING.md](GUIDELINES/CODING.md) for complete style guide

 ### Performance Considerations
+
 - Optimize for readability first, then performance
 - Use async/await for asynchronous operations
 - Implement proper caching strategies
@@ -145,12 +157,14 @@ Include:
 ## 🤝 Community

 ### Code of Conduct
+
 - Be respectful and inclusive
 - Welcome newcomers and help them get started
 - Focus on constructive feedback
 - Report unacceptable behavior to maintainers

 ### Getting Help
+
 - Review existing documentation
 - Search through existing issues
 - Ask questions in discussions
diff --git a/GUIDELINES/CODING.md b/GUIDELINES/CODING.md
index c02086c0f..80209a9fd 100644
--- a/GUIDELINES/CODING.md
+++ b/GUIDELINES/CODING.md
@@ -5,130 +5,150 @@ This document provides natural language coding guidelines extracted from the pro
 ## 1. Code Formatting

 ### 1.1 Quotes and Semicolons
+
 - **Always use double quotes** for string literals
-  - ✅ `const message = "Hello World"`
-  - ❌ `const message = 'Hello World'`
+    - ✅ `const message = "Hello World"`
+    - ❌ `const message = 'Hello World'`
 - **Never use semicolons** at the end of statements
-  - ✅ `const value = 42`
-  - ❌ `const value = 42;`
+    - ✅ `const value = 42`
+    - ❌ `const value = 42;`

 ### 1.2 Comma Usage
+
 - **Always include trailing commas** in multi-line structures (arrays, objects, function parameters)
-  - ✅ Multi-line with trailing comma:
-    ```typescript
-    const config = {
-      host: "localhost",
-      port: 53550,
-      debug: true,
-    }
-    ```
-  - ❌ Multi-line without trailing comma:
-    ```typescript
-    const config = {
-      host: "localhost",
-      port: 53550,
-      debug: true
-    }
-    ```
+    - ✅ Multi-line with trailing comma:
+
+        ```typescript
+        const config = {
+            host: "localhost",
+            port: 53550,
+            debug: true,
+        }
+        ```
+
+    - ❌ Multi-line without trailing comma:
+
+        ```typescript
+        const config = {
+            host: "localhost",
+            port: 53550,
+            debug: true
+        }
+        ```

 ### 1.3 Switch Statements
+
 - **Add space after colon** in switch case statements, but not before
-  - ✅ `case "test": return value`
-  - ❌ `case "test" :return value`
-  - ❌ `case "test":return value`
+    - ✅ `case "test": return value`
+    - ❌ `case "test" :return value`
+    - ❌ `case "test":return value`

 ## 2. Naming Conventions

 ### 2.1 Variables and Functions
+
 - **Use camelCase** for all variables and function names
-  - ✅ `const userName = "Alice"`
-  - ✅ `function calculateTotal() { }`
-  - ❌ `const user_name = "Alice"`
-  - ❌ `function CalculateTotal() { }`
+    - ✅ `const userName = "Alice"`
+    - ✅ `function calculateTotal() { }`
+    - ❌ `const user_name = "Alice"`
+    - ❌ `function CalculateTotal() { }`
 - **Leading underscores are allowed** but should be used sparingly (typically for private/internal properties)
-  - ✅ `const _internalState = {}`
-  - ✅ `const normalVariable = {}`
+    - ✅ `const _internalState = {}`
+    - ✅ `const normalVariable = {}`

 ### 2.2 Methods
+
 - **Use camelCase** for all class and object methods
-  - ✅ `class Service { processData() { } }`
-  - ❌ `class Service { ProcessData() { } }`
+    - ✅ `class Service { processData() { } }`
+    - ❌ `class Service { ProcessData() { } }`

 ### 2.3 Types and Interfaces
+
 - **Use PascalCase** for all type definitions
-  - ✅ `type UserProfile = { }`
-  - ✅ `interface Configuration { }`
-  - ❌ `type userProfile = { }`
+    - ✅ `type UserProfile = { }`
+    - ✅ `interface Configuration { }`
+    - ❌ `type userProfile = { }`
 - **Don't prefix interfaces with 'I'**
-  - ✅ `interface UserService { }`
-  - ❌ `interface IUserService { }`
+    - ✅ `interface UserService { }`
+    - ❌ `interface IUserService { }`

 ### 2.4 Classes
+
 - **Use PascalCase** for all class names
-  - ✅ `class NetworkManager { }`
-  - ❌ `class networkManager { }`
-  - ❌ `class network_manager { }`
+    - ✅ `class NetworkManager { }`
+    - ❌ `class networkManager { }`
+    - ❌ `class network_manager { }`

 ### 2.5 Type Aliases
+
 - **Use PascalCase** for type aliases
-  - ✅ `type ResponseStatus = "success" | "error"`
-  - ❌ `type responseStatus = "success" | "error"`
+    - ✅ `type ResponseStatus = "success" | "error"`
+    - ❌ `type responseStatus = "success" | "error"`

 ## 3. TypeScript Specific Guidelines

 ### 3.1 Type Safety
+
 - **Using `any` is allowed** when necessary, but should be avoided when possible
-  - Prefer specific types or `unknown` when the type is truly unknown
-  - Document why `any` is used when it's necessary
+    - Prefer specific types or `unknown` when the type is truly unknown
+    - Document why `any` is used when it's necessary

 ### 3.2 Empty Functions
+
 - **Empty functions are permitted** (useful for default callbacks, placeholders, or optional handlers)
-  - ✅ `const noop = () => {}`
-  - ✅ `onError: () => {} // Default no-op handler`
+    - ✅ `const noop = () => {}`
+    - ✅ `onError: () => {} // Default no-op handler`

 ### 3.3 CommonJS Requires
+
 - **`require()` statements are allowed** when needed for dynamic imports or CommonJS compatibility
-  - However, prefer ES6 `import` statements when possible
+    - However, prefer ES6 `import` statements when possible

 ### 3.4 Variable Declarations
+
 - **`var` keyword is technically allowed** but strongly discouraged
-  - Always prefer `const` for values that won't be reassigned
-  - Use `let` for values that will be reassigned
-  - ✅ `const API_URL = "https://api.example.com"`
-  - ✅ `let counter = 0`
-  - ⚠️ `var oldStyle = "avoid this"`
+    - Always prefer `const` for values that won't be reassigned
+    - Use `let` for values that will be reassigned
+    - ✅ `const API_URL = "https://api.example.com"`
+    - ✅ `let counter = 0`
+    - ⚠️ `var oldStyle = "avoid this"`

 ## 4. Code Quality

 ### 4.1 Unused Variables
+
 - **Unused variables are currently not enforced** by the linter
-  - However, you should still remove unused code for cleanliness
-  - Consider commenting out code that might be needed later with explanation
+    - However, you should still remove unused code for cleanliness
+    - Consider commenting out code that might be needed later with explanation

 ### 4.2 Console Statements
+
 - **Console statements are allowed** (no warning for console.log, console.error, etc.)
-  - Use them appropriately for debugging and logging
-  - Consider using a proper logging system for production code
+    - Use them appropriately for debugging and logging
+    - Consider using a proper logging system for production code

 ### 4.3 Extra Semicolons
+
 - **No extra semicolons allowed** (this is enforced as an error)
-  - ❌ `const value = 42;;`
-  - ❌ `function test() { };`
+    - ❌ `const value = 42;;`
+    - ❌ `function test() { };`

 ## 5. Import Guidelines

 ### 5.1 Import Restrictions
+
 - **Import restrictions are configured as warnings**
-  - Follow the project's module structure
-  - Avoid circular dependencies
-  - Use proper path aliases when configured
+    - Follow the project's module structure
+    - Avoid circular dependencies
+    - Use proper path aliases when configured

 ## 6. Environment and Compatibility

 ### 6.1 Target Environment
+
 - Code runs primarily in **Bun runtime** (with Node.js compatibility)
 - **Bun is the preferred package manager and runtime**
 - **ES6 modules** are the primary module system (CommonJS supported for compatibility)
@@ -137,6 +157,7 @@ This document provides natural language coding guidelines extracted from the pro
 - TypeScript is executed directly via Bun without compilation step

 ### 6.2 Global Variables
+
 - `NodeJS` namespace is available (read-only)
 - `globalThis` is available for global scope access
 - Bun-specific globals are available when running under Bun runtime
@@ -144,21 +165,25 @@ This document provides natural language coding guidelines extracted from the pro
 ## 7. Best Practices (Beyond ESLint)

 ### 7.1 File Organization
+
 - Keep files focused on a single responsibility
 - Group related functionality in feature modules
 - Use clear, descriptive file names

 ### 7.2 Error Handling
+
 - Always handle errors appropriately
 - Use try-catch blocks for async operations
 - Provide meaningful error messages

 ### 7.3 Comments and Documentation
+
 - Write self-documenting code when possible
 - Add comments for complex logic
 - Use JSDoc comments for public APIs

 ### 7.4 Testing
+
 - Write tests for new features
 - Maintain existing test coverage
 - Follow the established testing patterns in the codebase
@@ -168,22 +193,25 @@ This document provides natural language coding guidelines extracted from the pro
 ## 8. Package Management and Runtime

 ### 8.1 Package Manager
+
 - **Always use Bun** as the package manager
-  - ✅ `bun install`
-  - ✅ `bun add `
-  - ❌ `npm install`
-  - ❌ `yarn add`
+    - ✅ `bun install`
+    - ✅ `bun add `
+    - ❌ `npm install`
+    - ❌ `yarn add`

 ### 8.2 Running Scripts
+
 - **Use Bun to run TypeScript directly**
-  - ✅ `bun run