
feat(l2ps-messaging): add crypto and integration tests for messaging …#686

Open

Shitikyan wants to merge 3 commits into testnet from feat-l2ps-messaging
Conversation


@Shitikyan commented Mar 10, 2026

…protocol

  • Implemented unit tests for message hashing, encryption, and decryption functionalities.
  • Added integration tests for WebSocket server handling peer registration, messaging, discovery, and error handling.
  • Defined protocol types for messaging, including message envelopes and server/client message structures.
  • Enhanced setup scripts for zk keys and verification keys for improved reliability.
  • Updated datasource to include L2PS messaging entities for database integration.

Summary by CodeRabbit

  • New Features

    • Real-time peer-to-peer messaging: online delivery, offline queuing, history, peer discovery, public-key lookup; optional runtime enable via L2PS_MESSAGING_ENABLED and configurable port.
  • Documentation

    • Added comprehensive L2PS Messaging quickstart with protocol examples, workflows, and troubleshooting.
  • Tests

    • Extensive unit, integration, and end-to-end tests covering protocol, crypto, delivery, offline behavior, and lifecycle.
  • Chores

    • Messaging persistence model integrated and startup/shutdown lifecycle support added; small tooling/path resolution fix.
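To make the protocol summary above concrete, here is a minimal sketch of the client frames a peer would construct for registration and sending. The field names are assumptions inferred from this PR's summary and quickstart excerpts, not a published SDK API.

```typescript
// Hypothetical frame shapes for the L2PS messaging protocol described above.
// Field names are illustrative assumptions, not the PR's actual exports.
type RegisterFrame = {
    type: "register"
    payload: { publicKey: string; l2psUid: string; proof: string }
    timestamp: number
}

type SendFrame = {
    type: "send"
    payload: {
        to: string
        encrypted: { ciphertext: string; nonce: string }
        messageHash: string
    }
    timestamp: number
}

// Build a register frame for a peer joining an L2PS network.
function buildRegister(publicKey: string, l2psUid: string, proof: string): RegisterFrame {
    return { type: "register", payload: { publicKey, l2psUid, proof }, timestamp: Date.now() }
}

// Build a send frame carrying an already-encrypted payload.
function buildSend(to: string, ciphertext: string, nonce: string, messageHash: string): SendFrame {
    return {
        type: "send",
        payload: { to, encrypted: { ciphertext, nonce }, messageHash },
        timestamp: Date.now(),
    }
}
```

A client would serialize these frames with `JSON.stringify` and write them to the WebSocket after connecting.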


coderabbitai bot commented Mar 10, 2026

Walkthrough

Adds an opt-in L2PS Messaging feature: a Bun WebSocket server, a messaging service that bridges messages into L2PS transactions, crypto helpers, DB entity, tests, docs, CLI e2e script, and startup lifecycle wiring controlled by env vars.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Configuration & Lifecycle**<br>`.env.example`, `src/index.ts` | New env vars `L2PS_MESSAGING_ENABLED` and `L2PS_MESSAGING_PORT`; conditional startup/shutdown and indexState fields for the messaging server. |
| **Protocol Types**<br>`src/features/l2ps-messaging/types.ts` | Complete WebSocket protocol types: envelopes, client/server frames, encrypted payloads, storage shapes, errors, and peer metadata. |
| **WebSocket Server**<br>`src/features/l2ps-messaging/L2PSMessagingServer.ts` | New Bun-based server implementing register/send/history/discover/request_public_key, peer management, offline queuing/delivery, validation, and integration with service/ParallelNetworks. |
| **Service Layer**<br>`src/features/l2ps-messaging/L2PSMessagingService.ts` | Singleton service for dedupe, persistence, offline rate limits, submit-to-L2PS pipeline, mempool/execution handling, queued retrieval, history, and delivery marking. |
| **Crypto Utilities**<br>`src/features/l2ps-messaging/crypto.ts` | Message hashing, AES-256-GCM encrypt/decrypt helpers, and symmetric key generation (Web Crypto API). |
| **Persistence**<br>`src/features/l2ps-messaging/entities/L2PSMessage.ts`, `src/model/datasource.ts` | New TypeORM entity L2PSMessage, added to the DataSource entities with indexed fields and a status lifecycle. |
| **Module Export & Lifecycle API**<br>`src/features/l2ps-messaging/index.ts` | Exports server/service/entity/crypto/types and singleton lifecycle helpers: startL2PSMessaging, stopL2PSMessaging, isL2PSMessagingEnabled. |
| **Tests (unit/integration/e2e)**<br>`src/features/l2ps-messaging/tests/*`, `scripts/l2ps-messaging-test.ts` | Unit tests for crypto/service, server protocol tests, full integration tests, and an end-to-end CLI script simulating Alice/Bob flows. |
| **Docs & Quickstart**<br>`src/features/l2ps-messaging/L2PS_MESSAGING_QUICKSTART.md` | New quickstart and protocol reference covering config, protocol frames, message lifecycle, and troubleshooting. |
| **Misc Utility**<br>`src/features/zk/scripts/setup-zk.ts` | Replaced hardcoded npx path with dynamic `which npx` resolution. |

Sequence Diagram(s)

sequenceDiagram
    actor Client as Peer (Client)
    participant WS as L2PSMessagingServer
    participant Service as L2PSMessagingService
    participant L2PS as L2PS Pipeline
    participant DB as Database

    rect rgba(100,200,100,0.5)
    Note over Client,DB: Send Flow (online recipient)
    Client->>WS: send {to, encrypted, messageHash, id, timestamp}
    activate WS
    WS->>WS: validate sender registration & payload
    WS->>Service: processMessage(from,to,encrypted,messageHash,recipientOnline)
    activate Service
    Service->>DB: dedupe check & persist (status: delivered)
    Service->>L2PS: submitToL2PS(tx)
    activate L2PS
    L2PS-->>Service: txHash / mempool ack
    deactivate L2PS
    Service->>DB: update l2psTxHash & status
    Service-->>WS: {success, l2psTxHash}
    deactivate Service
    WS->>Client: message_sent {messageId}
    WS->>Client: incoming_message -> recipient
    Client-->>WS: ack
    deactivate WS
    end

    rect rgba(100,150,200,0.5)
    Note over Client,DB: Offline Delivery on Register
    Client->>WS: register {publicKey, l2psUid, proof}
    activate WS
    WS->>Service: getQueuedMessages(toKey,l2psUid)
    activate Service
    Service->>DB: fetch queued messages (ordered)
    Service-->>WS: [queued messages]
    deactivate Service
    WS->>Client: incoming_message (1..N)
    WS->>Service: markDelivered([ids])
    Service->>DB: update statuses -> delivered
    WS-->>Client: registered
    deactivate WS
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

  • Feat zk integration #662: Related L2PS/ZK integration and lifecycle changes that overlap with messaging’s ParallelNetworks and submission paths.
  • L2ps implementation #514: Prior changes to L2PS subsystems (ParallelNetworks, mempool/executor) that messaging integrates with.

Suggested labels

Review effort 5/5

Suggested reviewers

  • cwilvx

"I hop through sockets, nibble keys with care,
I tuck messages safe in a green leafy lair.
From Alice to Bob I carry the cheer,
Queues hold the whispers till partners appear.
Bun, bytes, and rollups — a rabbit's small cheer!" 🐇✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 37.93%, below the required 80.00% threshold. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title accurately describes the main focus of the changeset: adding crypto and integration tests for the L2PS messaging feature, which comprise multiple new test files and supporting infrastructure. |



Review Summary by Qodo

Add L2PS Messaging: Real-time encrypted chat with L2PS persistence

✨ Enhancement 🧪 Tests


Walkthroughs

Description
• Implemented L2PS-backed real-time messaging with WebSocket server
  - Peer registration with ed25519 proof verification
  - Message routing, offline queueing, and delivery
  - Conversation history queries with pagination
• Added comprehensive unit and integration tests
  - Protocol validation, peer management, message routing
  - Offline delivery, rate limiting, error handling
  - 37 passing tests across crypto, server, and service
• Created crypto helpers for message encryption/decryption
  - AES-256-GCM symmetric encryption with random nonces
  - Message hashing for deduplication
• Integrated with L2PS mempool for message persistence
  - Messages submitted as L2PS transactions with zero fees
  - Automatic batching and L1 rollup pipeline
Diagram
flowchart LR
  Client["Client SDK"]
  WS["WebSocket Server"]
  Service["L2PS Messaging Service"]
  DB["l2ps_messages Table"]
  L2PS["L2PS Mempool"]
  L1["L1 Rollup"]
  
  Client -->|register/send| WS
  WS -->|route/queue| Service
  Service -->|persist| DB
  Service -->|submit tx| L2PS
  L2PS -->|batch| L1
  WS -->|instant delivery| Client
  DB -->|history query| Client


File Changes

1. `scripts/l2ps-messaging-test.ts` 🧪 Tests +249/-0 — E2E test script for messaging protocol
2. `src/features/l2ps-messaging/L2PSMessagingServer.ts` ✨ Enhancement +438/-0 — WebSocket server for real-time message routing
3. `src/features/l2ps-messaging/L2PSMessagingService.ts` ✨ Enhancement +277/-0 — Service layer bridging messages to L2PS
4. `src/features/l2ps-messaging/crypto.ts` ✨ Enhancement +87/-0 — Message encryption and hashing utilities
5. `src/features/l2ps-messaging/entities/L2PSMessage.ts` ✨ Enhancement +37/-0 — TypeORM entity for message persistence
6. `src/features/l2ps-messaging/index.ts` ✨ Enhancement +34/-0 — Feature exports and lifecycle management
7. `src/features/l2ps-messaging/tests/L2PSMessagingServer.test.ts` 🧪 Tests +310/-0 — Protocol and peer management unit tests
8. `src/features/l2ps-messaging/tests/L2PSMessagingService.test.ts` 🧪 Tests +279/-0 — Service logic and message lifecycle tests
9. `src/features/l2ps-messaging/tests/crypto.test.ts` 🧪 Tests +130/-0 — Encryption and hashing function tests
10. `src/features/l2ps-messaging/tests/integration.test.ts` 🧪 Tests +473/-0 — Full WebSocket integration tests
11. `src/features/l2ps-messaging/types.ts` ✨ Enhancement +251/-0 — Protocol types and message envelope definitions
12. `src/features/l2ps-messaging/L2PS_MESSAGING_QUICKSTART.md` 📝 Documentation +426/-0 — Comprehensive messaging feature documentation
13. `src/features/zk/scripts/setup-zk.ts` 🐞 Bug fix +2/-2 — Resolve npx path dynamically from PATH
14. `src/index.ts` ✨ Enhancement +29/-0 — Initialize L2PS messaging server on startup
15. `src/model/datasource.ts` ✨ Enhancement +4/-0 — Register L2PSMessage entity with TypeORM
16. `.env.example` ⚙️ Configuration changes +6/-0 — Add L2PS messaging configuration variables


qodo-code-review bot commented Mar 10, 2026

Code Review by Qodo

🐞 Bugs (7) 📘 Rule violations (0) 📎 Requirement gaps (0)



Action required

1. Re-register drops new socket 🐞 Bug ⛯ Reliability
Description
In L2PSMessagingServer.handleRegister(), closing an existing connection for the same publicKey can
trigger handleClose() later, which deletes the peer map entry by publicKey without checking the
socket instance. This can remove the newly-registered connection and break routing/notifications
until the client registers again.
Code

src/features/l2ps-messaging/L2PSMessagingServer.ts[R352-361]

```diff
+    private handleClose(ws: ServerWebSocket<WSData>): void {
+        const publicKey = ws.data.publicKey
+        if (!publicKey) return
+
+        const peer = this.peers.get(publicKey)
+        if (!peer) return
+
+        const l2psUid = peer.l2psUid
+        this.peers.delete(publicKey)
+
```
Evidence
handleRegister() closes the previous ws for the same publicKey and then stores the new ws under the
same key; handleClose() later deletes by publicKey only, so a delayed close from the old ws can
delete the new peer entry.

src/features/l2ps-messaging/L2PSMessagingServer.ts[154-168]
src/features/l2ps-messaging/L2PSMessagingServer.ts[352-361]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Re-registration can inadvertently disconnect the new connection: the old socket’s `close` event can run after the new socket is stored, and `handleClose()` deletes the peer entry by `publicKey` without verifying the socket instance.
### Issue Context
This causes the newly registered peer to disappear from `this.peers`, breaking message routing and peer notifications.
### Fix Focus Areas
- src/features/l2ps-messaging/L2PSMessagingServer.ts[154-168]
- src/features/l2ps-messaging/L2PSMessagingServer.ts[352-361]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
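One way to address this finding is to make the cleanup check socket identity before evicting the map entry. The sketch below uses simplified stand-in types for the PR's `WSData`/peer shapes; it is illustrative, not the actual fix.

```typescript
// Sketch: delete the peer entry only if the closing socket is still the
// registered one, so a delayed close from a replaced socket is a no-op.
type FakeSocket = { id: number }
type Peer = { ws: FakeSocket; publicKey: string }

const peers = new Map<string, Peer>()

function handleClose(ws: FakeSocket, publicKey: string): void {
    const peer = peers.get(publicKey)
    // A stale close event from the old connection must not evict the new one.
    if (!peer || peer.ws !== ws) return
    peers.delete(publicKey)
}
```

The identity comparison (`peer.ws !== ws`) is what distinguishes the old socket's delayed close from a genuine disconnect of the current registration.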


2. L2PS failure always success 🐞 Bug ✓ Correctness
Description
L2PSMessagingService.processMessage() returns success:true regardless of whether submitToL2PS()
fails, so L2PSMessagingServer.handleSend() will never emit L2PS_SUBMIT_FAILED. This makes
client/server behavior inconsistent and can acknowledge messages even when L2PS persistence failed.
Code

src/features/l2ps-messaging/L2PSMessagingService.ts[R79-93]

```diff
+        // Submit to L2PS mempool (non-blocking for real-time delivery)
+        const l2psResult = await this.submitToL2PS(l2psUid, fromKey, toKey, messageId, messageHash, encrypted, now)
+
+        if (l2psResult.success && l2psResult.txHash) {
+            await repo.update(msg.id, {
+                l2psTxHash: l2psResult.txHash,
+                status: recipientOnline ? "delivered" : "queued",
+            })
+        }
+
+        return {
+            success: true,
+            l2psTxHash: l2psResult.txHash,
+            error: l2psResult.error,
+        }
```
Evidence
processMessage() always returns `{ success: true, ... }` even if submitToL2PS() returned `{ success: false }`, while the server relies on `result.success` to decide whether to send L2PS_SUBMIT_FAILED.

src/features/l2ps-messaging/L2PSMessagingService.ts[79-93]
src/features/l2ps-messaging/L2PSMessagingService.ts[160-163]
src/features/l2ps-messaging/L2PSMessagingServer.ts[244-252]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`processMessage()` always returns `success: true`, even when L2PS mempool submission fails. This makes the server’s `if (!result.success)` error handling unreachable.
### Issue Context
The WebSocket server expects `result.success` to represent whether L2PS submission succeeded.
### Fix Focus Areas
- src/features/l2ps-messaging/L2PSMessagingService.ts[79-93]
- src/features/l2ps-messaging/L2PSMessagingService.ts[160-163]
- src/features/l2ps-messaging/L2PSMessagingServer.ts[244-270]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
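The straightforward remedy is to propagate the submission outcome instead of hardcoding `success: true`. This sketch stubs out the pipeline call; the real `submitToL2PS` lives on the service.

```typescript
// Sketch: return the real L2PS submission outcome so the server's
// L2PS_SUBMIT_FAILED branch becomes reachable. submitToL2PS is a stub here.
type SubmitResult = { success: boolean; txHash?: string; error?: string }

async function processMessage(
    submitToL2PS: () => Promise<SubmitResult>,
): Promise<{ success: boolean; l2psTxHash?: string; error?: string }> {
    const l2psResult = await submitToL2PS()
    return {
        success: l2psResult.success, // was hardcoded true in the flagged code
        l2psTxHash: l2psResult.txHash,
        error: l2psResult.error,
    }
}
```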


3. Discover leaks peer keys 🐞 Bug ⛨ Security
Description
L2PSMessagingServer.handleDiscover() returns all peers when ws.data.l2psUid is null, which is the
default for an unregistered connection. This allows unauthenticated enumeration of online peer
public keys across all L2PS networks.
Code

src/features/l2ps-messaging/L2PSMessagingServer.ts[R318-323]

```diff
+    private handleDiscover(ws: ServerWebSocket<WSData>): void {
+        const l2psUid = ws.data.l2psUid
+        const peers = Array.from(this.peers.values())
+            .filter(p => !l2psUid || p.l2psUid === l2psUid)
+            .map(p => p.publicKey)
```
+
Evidence
Connections start with { l2psUid: null } and discover filters with !l2psUid || ..., which
becomes true and includes all peers when the caller is unregistered.

src/features/l2ps-messaging/L2PSMessagingServer.ts[43-50]
src/features/l2ps-messaging/L2PSMessagingServer.ts[318-323]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`discover` currently treats an unregistered connection (`l2psUid === null`) as authorized to list all online peers across networks.
### Issue Context
WS upgrade initializes `l2psUid` to null; discover uses `!l2psUid || ...` which makes null equivalent to ‘no filter’.
### Fix Focus Areas
- src/features/l2ps-messaging/L2PSMessagingServer.ts[43-50]
- src/features/l2ps-messaging/L2PSMessagingServer.ts[318-329]
- src/features/l2ps-messaging/L2PSMessagingServer.ts[333-348]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
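A tighter policy, sketched below, is to treat `l2psUid === null` as unauthorized and return nothing, so only registered peers can enumerate their own network. This is one possible fix under the assumptions above, not the PR's actual code.

```typescript
// Sketch: unregistered callers (null l2psUid) may not enumerate peers;
// registered callers see only peers on their own L2PS network.
type PeerInfo = { publicKey: string; l2psUid: string }

function discoverPeers(peers: PeerInfo[], callerUid: string | null): string[] {
    if (!callerUid) return [] // was `!l2psUid || ...`, which disabled the filter
    return peers.filter(p => p.l2psUid === callerUid).map(p => p.publicKey)
}
```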



Remediation recommended

4. Offline limit resets early 🐞 Bug ⛨ Security
Description
Offline message rate-limiting is tracked per-sender in offlineMessageCounts, but
deliverQueuedMessages() clears the sender’s count after sending each queued message. This resets the
limit too aggressively and allows senders to repeatedly queue up to the maximum again after any
delivery occurs.
Code

src/features/l2ps-messaging/L2PSMessagingServer.ts[R388-402]

```diff
+        for (const msg of queued) {
+            try {
+                this.send(ws, {
+                    type: "message",
+                    payload: {
+                        from: msg.from,
+                        encrypted: msg.encrypted,
+                        messageHash: msg.messageHash,
+                        offline: true,
+                    },
+                    timestamp: Date.now(),
+                })
+                deliveredIds.push(msg.id)
+                this.service.resetOfflineCount(msg.from)
+            } catch {
```
Evidence
The map is keyed only by sender (fromKey) and incremented per queued message; delivery calls
resetOfflineCount(from) which deletes the entire sender counter, not decrementing or accounting for
remaining queued messages to other recipients.

src/features/l2ps-messaging/L2PSMessagingService.ts[57-64]
src/features/l2ps-messaging/L2PSMessagingServer.ts[388-405]
src/features/l2ps-messaging/L2PSMessagingService.ts[271-276]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Offline message counts are per-sender but are wiped on delivery of any queued message, effectively weakening the intended max-queued limit.
### Issue Context
The counter is incremented when queueing but deleted during delivery without considering remaining queued messages.
### Fix Focus Areas
- src/features/l2ps-messaging/L2PSMessagingService.ts[57-64]
- src/features/l2ps-messaging/L2PSMessagingService.ts[271-276]
- src/features/l2ps-messaging/L2PSMessagingServer.ts[378-411]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
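Per-sender accounting that decrements on each delivery, rather than deleting the whole counter, preserves the limit for messages still queued elsewhere. A minimal sketch:

```typescript
// Sketch: decrement the sender's offline count per delivered message
// instead of wiping it, so remaining queued messages still count.
const offlineCounts = new Map<string, number>()

function incrementOfflineCount(sender: string): number {
    const next = (offlineCounts.get(sender) ?? 0) + 1
    offlineCounts.set(sender, next)
    return next
}

function onQueuedMessageDelivered(sender: string): void {
    const current = offlineCounts.get(sender) ?? 0
    // Subtract only the delivered message; keep counting the rest.
    if (current <= 1) offlineCounts.delete(sender)
    else offlineCounts.set(sender, current - 1)
}
```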


5. Messaging port NaN start 🐞 Bug ⛯ Reliability
Description
indexState.L2PS_MESSAGING_PORT is parsed without a NaN fallback, so a non-numeric env value becomes
NaN and getNextAvailablePort() returns null. This can prevent the messaging server from starting due
to an invalid port value.
Code

src/index.ts[R110-113]

```diff
+    // L2PS Messaging
+    L2PS_MESSAGING_ENABLED: process.env.L2PS_MESSAGING_ENABLED?.toLowerCase() === "true",
+    L2PS_MESSAGING_PORT: parseInt(process.env.L2PS_MESSAGING_PORT ?? "3006", 10),
+    l2psMessagingServer: null as any,
```
Evidence
parseInt can yield NaN; getNextAvailablePort’s loop won’t run for NaN and returns null, which then
gets passed into server startup.

src/index.ts[110-113]
src/index.ts[217-227]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Invalid `L2PS_MESSAGING_PORT` env values can turn into `NaN`, leading to `getNextAvailablePort(NaN)` returning `null` and breaking messaging startup.
### Issue Context
Other ports in the code often use `|| <default>` after parsing; this one does not.
### Fix Focus Areas
- src/index.ts[110-113]
- src/index.ts[217-227]
- src/index.ts[618-630]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
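A NaN-safe parse with an explicit fallback, sketched below, covers both missing and malformed env values (3006 is the default from the flagged snippet).

```typescript
// Sketch: parse L2PS_MESSAGING_PORT with a NaN fallback so malformed
// env values cannot break server startup.
function parseMessagingPort(raw: string | undefined, fallback = 3006): number {
    const parsed = parseInt(raw ?? "", 10)
    // parseInt("abc") and parseInt("") both yield NaN; use the default then.
    return Number.isNaN(parsed) ? fallback : parsed
}
```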


6. ZK setup nonportable npx 🐞 Bug ⛯ Reliability
Description
setup-zk.ts executes execSync("which npx") at module load time, which throws immediately in
environments without which or npx in PATH. This prevents the script’s own error handling and
reduces portability.
Code

src/features/zk/scripts/setup-zk.ts[R18-20]

```diff
+// npx path - resolved dynamically from PATH
+const NPX = execSync("which npx").toString().trim()
```
Evidence
NPX resolution is performed via execSync without a surrounding try/catch and runs before the helper
exec() error handling is used.

src/features/zk/scripts/setup-zk.ts[18-20]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
The ZK setup script can fail at import time due to `execSync('which npx')`, preventing controlled error reporting.
### Issue Context
This happens before the script’s `exec()` helper which otherwise prints structured failure messages.
### Fix Focus Areas
- src/features/zk/scripts/setup-zk.ts[18-20]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
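Wrapping the lookup in a function with a try/catch, as sketched below, defers any failure to the call site where the script's own error handling can report it. Falling back to the bare `npx` command is an assumption about acceptable behavior, not the PR's fix.

```typescript
// Sketch: guarded npx resolution that cannot throw at module load time.
import { execSync } from "node:child_process"

function resolveNpx(): string {
    try {
        const resolved = execSync("which npx").toString().trim()
        return resolved.length > 0 ? resolved : "npx"
    } catch {
        // `which` or npx missing from PATH: fall back to the bare command and
        // let the actual invocation report a structured failure.
        return "npx"
    }
}
```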



Advisory comments

7. messageHash docs inconsistent 🐞 Bug ✓ Correctness
Description
The Quickstart examples describe messageHash as sha256_of_plaintext, while the provided
computeMessageHash helper hashes from:to:content:timestamp and the DB enforces global uniqueness
on message_hash. This inconsistency can lead client implementations to choose a non-unique hash
scheme that conflicts with server-side dedup/DB constraints.
Code

src/features/l2ps-messaging/L2PS_MESSAGING_QUICKSTART.md[R98-110]

```json
{
  "type": "send",
  "payload": {
    "to": "<recipient_public_key_hex>",
    "encrypted": {
      "ciphertext": "<base64_encrypted_data>",
      "nonce": "<base64_aes_gcm_nonce>",
      "ephemeralKey": "<hex_x25519_ephemeral_key>"
    },
    "messageHash": "<sha256_of_plaintext>"
  },
  "timestamp": 1709312400000
}
```
Evidence
Documentation and helper code describe different messageHash semantics, and the DB schema makes
messageHash globally unique, so clients need a clearly unique construction to avoid unexpected
duplicate rejections.

src/features/l2ps-messaging/L2PS_MESSAGING_QUICKSTART.md[98-111]
src/features/l2ps-messaging/crypto.ts[14-26]
src/features/l2ps-messaging/entities/L2PSMessage.ts[22-23]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
Docs and helper code describe different `messageHash` computations while the DB enforces a unique `message_hash`.
### Issue Context
Clients following the Quickstart example may compute a hash that is not unique enough for the DB constraint.
### Fix Focus Areas
- src/features/l2ps-messaging/L2PS_MESSAGING_QUICKSTART.md[96-112]
- src/features/l2ps-messaging/crypto.ts[14-26]
- src/features/l2ps-messaging/entities/L2PSMessage.ts[22-23]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools
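The helper's described semantics (hashing `from:to:content:timestamp` rather than the plaintext alone) can be sketched as follows; including the timestamp is what lets two identical texts satisfy the DB's unique `message_hash` constraint.

```typescript
// Sketch of a message hash over from:to:content:timestamp, per the helper's
// described behavior (not the Quickstart's sha256-of-plaintext).
import { createHash } from "node:crypto"

function computeMessageHash(from: string, to: string, content: string, timestamp: number): string {
    return createHash("sha256").update(`${from}:${to}:${content}:${timestamp}`).digest("hex")
}
```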



Comment on lines +318 to +323

```ts
private handleDiscover(ws: ServerWebSocket<WSData>): void {
    const l2psUid = ws.data.l2psUid
    const peers = Array.from(this.peers.values())
        .filter(p => !l2psUid || p.l2psUid === l2psUid)
        .map(p => p.publicKey)
```


Action required

3. Discover leaks peer keys 🐞 Bug ⛨ Security

L2PSMessagingServer.handleDiscover() returns all peers when ws.data.l2psUid is null, which is the
default for an unregistered connection. This allows unauthenticated enumeration of online peer
public keys across all L2PS networks.
Agent Prompt
### Issue description
`discover` currently treats an unregistered connection (`l2psUid === null`) as authorized to list all online peers across networks.

### Issue Context
WS upgrade initializes `l2psUid` to null; discover uses `!l2psUid || ...` which makes null equivalent to ‘no filter’.

### Fix Focus Areas
- src/features/l2ps-messaging/L2PSMessagingServer.ts[43-50]
- src/features/l2ps-messaging/L2PSMessagingServer.ts[318-329]
- src/features/l2ps-messaging/L2PSMessagingServer.ts[333-348]

ⓘ Copy this prompt and use it to remediate the issue with your preferred AI generation tools

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 11

🧹 Nitpick comments (7)
src/features/l2ps-messaging/tests/L2PSMessagingServer.test.ts (2)

8-8: Unused imports from bun:test: beforeEach, afterEach, mock, spyOn.

These test utilities are imported but not used in any test.

♻️ Proposed fix
```diff
-import { describe, it, expect, beforeEach, afterEach, mock, spyOn } from "bun:test"
+import { describe, it, expect } from "bun:test"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/l2ps-messaging/tests/L2PSMessagingServer.test.ts` at line 8, The
test file imports unused symbols from bun:test (beforeEach, afterEach, mock,
spyOn); remove these unused imports from the import statement so only describe,
it, and expect remain (update the import line that currently references
beforeEach, afterEach, mock, spyOn) to eliminate unused imports and any
linter/test warnings.

9-10: Unused imports: L2PSMessagingServer and L2PSMessagingService.

These imports are declared but never used in the test file. The tests validate protocol logic and data structures using mock objects rather than actual server instances.

♻️ Proposed fix
```diff
 import { describe, it, expect, beforeEach, afterEach, mock, spyOn } from "bun:test"
-import { L2PSMessagingServer } from "../L2PSMessagingServer"
-import { L2PSMessagingService } from "../L2PSMessagingService"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/l2ps-messaging/tests/L2PSMessagingServer.test.ts` around lines 9
- 10, Remove the unused imports L2PSMessagingServer and L2PSMessagingService
from the test file; locate the import statements referencing L2PSMessagingServer
and L2PSMessagingService at the top of
src/features/l2ps-messaging/tests/L2PSMessagingServer.test.ts and delete those
lines so the file only imports the mocks/utilities actually used in the tests,
then run lint/tests to confirm no unused-import errors remain.
src/features/l2ps-messaging/tests/crypto.test.ts (2)

93-104: Use expect().rejects.toThrow() instead of try/catch with unreachable assertion.

The pattern expect(true).toBe(false) to indicate unreachable code is an anti-pattern. Bun's test framework supports expect().rejects.toThrow() for async error assertions.

♻️ Proposed fix
```diff
     it("should fail to decrypt with wrong key", async () => {
         const key1 = generateSymmetricKey()
         const key2 = generateSymmetricKey()
         const encrypted = await encryptMessage("secret", key1)

-        try {
-            await decryptMessage(encrypted, key2)
-            expect(true).toBe(false) // should not reach
-        } catch (error) {
-            expect(error).toBeDefined()
-        }
+        await expect(decryptMessage(encrypted, key2)).rejects.toThrow()
     })
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/l2ps-messaging/tests/crypto.test.ts` around lines 93 - 104, The
test currently uses a try/catch with an unreachable assertion; replace that with
an async rejection assertion using expect(...).rejects.toThrow(): call
generateSymmetricKey() twice, encrypt with encryptMessage("secret", key1), then
assert await expect(decryptMessage(encrypted, key2)).rejects.toThrow() so the
test fails when decryption does not throw; locate the test case using the
function names generateSymmetricKey, encryptMessage, and decryptMessage to
update the assertion.

106-121: Same pattern improvement for tampered ciphertext test.

♻️ Proposed fix
```diff
     it("should fail to decrypt with tampered ciphertext", async () => {
         const key = generateSymmetricKey()
         const encrypted = await encryptMessage("secret", key)

         // Tamper with ciphertext
         const bytes = Buffer.from(encrypted.ciphertext, "base64")
         bytes[0] ^= 0xff
-        encrypted.ciphertext = bytes.toString("base64")
+        const tampered = { ...encrypted, ciphertext: bytes.toString("base64") }

-        try {
-            await decryptMessage(encrypted, key)
-            expect(true).toBe(false)
-        } catch (error) {
-            expect(error).toBeDefined()
-        }
+        await expect(decryptMessage(tampered, key)).rejects.toThrow()
     })
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/l2ps-messaging/tests/crypto.test.ts` around lines 106 - 121,
Replace the manual try/catch assertion in the tampered-ciphertext test with
Jest's promise rejection matcher: create a tampered copy of the encrypted result
(use generateSymmetricKey/encryptMessage to get the original, then modify a
copy's ciphertext bytes), then use await
expect(decryptMessage(tamperedEncrypted, key)).rejects.toBeDefined() or
.rejects.toThrow() instead of try { await decryptMessage...;
expect(true).toBe(false) } catch {...}; this uses the functions
generateSymmetricKey, encryptMessage and decryptMessage and the local
encrypted/tamperedEncrypted variables referenced in the diff.
src/features/l2ps-messaging/tests/integration.test.ts (2)

32-33: Port selection has collision risk in parallel CI runs.

The random offset Math.floor(Math.random() * 1000) could still collide when multiple test processes run simultaneously. Consider using port 0 to let the OS assign an available port, then retrieve the actual port from the server.

♻️ Proposed alternative using OS-assigned port
```diff
 beforeAll(() => {
-    port = 19876 + Math.floor(Math.random() * 1000)
     server = Bun.serve<WSData>({
-        port,
+        port: 0, // Let OS assign available port
         fetch: (req, server) => {
             // ...
         },
         // ...
     })
+    port = server.port // Retrieve actual assigned port
 })
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/l2ps-messaging/tests/integration.test.ts` around lines 32 - 33,
The test's port allocation (in beforeAll using port = 19876 +
Math.floor(Math.random() * 1000)) risks collisions in parallel CI; change the
test to bind the server on port 0 (call server.listen(0) in beforeAll), then
read the assigned port via server.address().port and assign that to the test's
port variable so all subsequent requests use the real bound port; ensure any
teardown still closes the server and update any places that construct the base
URL to use the retrieved port.

32-136: Potential resource leak: beforeEach clears peers map but doesn't close existing WebSocket connections.

When peers.clear() is called, the WebSocket connections stored in the map are not explicitly closed. While the connections may eventually be garbage collected, this could cause issues in rapid test execution.

♻️ Proposed fix
 beforeEach(() => {
+    // Close any existing connections before clearing
+    for (const peer of peers.values()) {
+        try { peer.ws.close() } catch {}
+    }
     peers.clear()
 })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/l2ps-messaging/tests/integration.test.ts` around lines 32 - 136,
The beforeEach currently calls peers.clear() which drops references but doesn't
close active WebSocket connections; update the beforeEach to iterate over peers
(the peers Map) and call close() on each peer.ws (e.g., p.ws.close()) to
gracefully terminate connections before calling peers.clear(), ensuring active
sockets are closed and avoiding resource leaks in tests that create WebSocket
server connections.
src/features/l2ps-messaging/L2PS_MESSAGING_QUICKSTART.md (1)

26-28: Add language specifiers to fenced code blocks.

Several code blocks are missing language identifiers which affects syntax highlighting and accessibility. Lines 26-28, 56-58, 245-260, 293-299, and 339-350 should specify appropriate languages.

📝 Example fixes
-```
+```text
 [L2PS] Loaded network: testnet_l2ps_001

For the architecture diagram at line 245:

-```
+```text
 Sender (SDK)                       Node                         Recipient (SDK)

For test output at line 293:

-```
+```text
 bun test v1.3.3

For directory structure at line 339:

-```
+```text
 src/features/l2ps-messaging/
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/l2ps-messaging/L2PS_MESSAGING_QUICKSTART.md` around lines 26 -
28, Add explicit language specifiers to the fenced code blocks mentioned (e.g.,
the block containing "[L2PS] Loaded network: testnet_l2ps_001", the architecture
diagram block starting with "Sender (SDK)   Node   Recipient (SDK)", the test
output block starting with "bun test v1.3.3", and the directory structure block
starting with "src/features/l2ps-messaging/") by changing their opening fences
from ``` to ```text (or a more specific language if appropriate) so they provide
proper syntax highlighting and accessibility.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@scripts/l2ps-messaging-test.ts`:
- Around line 185-200: The sender ACK waiter is attached after awaiting the
recipient message which can miss fast ACKs; for the Bob receive block, call
waitForAny(wsAlice, ["message_sent","message_queued","error"]) and store the
promise (e.g., ackAlicePromise) before sending/awaiting the recipient message
promise (msgPromiseBob), then await msgPromiseBob and finally await
ackAlicePromise; do the same change for the symmetric Bob→Alice flow (the block
using waitForAny around lines 203-218), referencing the existing waitForAny,
msgPromiseBob/msgPromiseAlice, wsAlice and wsBob symbols so the ACK waiter is
armed before awaiting the recipient frame.

In `@src/features/l2ps-messaging/crypto.ts`:
- Around line 36-45: The encryptMessage implementation accepts any key length
and passes sharedKey.buffer (which can be misaligned) to
crypto.subtle.importKey; update encryptMessage to validate that
sharedKey.byteLength === 32 and throw a clear error if not, and pass the
Uint8Array itself (not .buffer) as the key material to crypto.subtle.importKey;
make the identical change in the counterpart decrypt function (decryptMessage)
so both enforce AES-256-GCM key length and import the raw key using the
sharedKey Uint8Array directly.

In `@src/features/l2ps-messaging/entities/L2PSMessage.ts`:
- Around line 4-23: The message-hash uniqueness is currently enforced globally
on the L2PSMessage.messageHash column and L2PSMessagingService.processMessage()
looks up by messageHash only, which allows cross-network collisions; change the
DB schema and lookup to be network-scoped by making the unique key apply to
(l2psUid, messageHash) instead of messageHash alone and update
L2PSMessagingService.processMessage() to use findOneBy({ l2psUid, messageHash })
when deduplicating; ensure the L2PSMessage entity removes the single-column
unique constraint on messageHash and instead defines uniqueness for the pair,
and update any related queries that assume global uniqueness to include l2psUid.
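
The scoping change can be illustrated with a small in-memory dedupe sketch (illustrative only; the real dedupe lives in the DB unique index and the findOneBy lookup):

```typescript
// Dedupe keyed on (l2psUid, messageHash): the same hash is rejected only
// within one network, never across networks.
const seen = new Set<string>()

function acceptMessage(l2psUid: string, messageHash: string): boolean {
    const key = `${l2psUid}\u0000${messageHash}`
    if (seen.has(key)) return false
    seen.add(key)
    return true
}
```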

In `@src/features/l2ps-messaging/L2PSMessagingServer.ts`:
- Around line 318-328: handleDiscover currently returns all peers when
ws.data.l2psUid is falsy, allowing unauthenticated sockets to enumerate users;
update the handleDiscover method to require the socket be registered before
responding: check that ws.data.l2psUid (or a dedicated registration flag on
ws.data) is present/true and if not, send an error response or a
discover_response with an empty peers list and do not enumerate this.peers; keep
using the existing send(...) call for the response and reference
ws.data.l2psUid, this.peers and handleDiscover to locate where to add the gate.
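
The gate can be sketched as a pure function (PeerInfo and the registration check are assumptions based on the prompt; the real logic sits inside handleDiscover):

```typescript
interface PeerInfo {
    publicKey: string
    l2psUid: string
}

// Unregistered sockets (no l2psUid on ws.data) get an empty peer list instead
// of an enumeration of this.peers; registered sockets see only their network.
function discoverablePeers(
    requester: { l2psUid?: string },
    peers: Map<string, PeerInfo>,
): PeerInfo[] {
    if (!requester.l2psUid) return []
    return [...peers.values()].filter(p => p.l2psUid === requester.l2psUid)
}
```
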
- Around line 136-152: Add a freshness check around the proof verification to
reject stale or future timestamps: parse msg.timestamp (in both the register
verification block and the similar history verification at the other location)
into a number, verify it is within an allowed skew/window (e.g., now ±
MAX_PROOF_AGE_MS), and call sendError(ws, "STALE_PROOF", "...") (or reuse
"INVALID_PROOF") if the timestamp is missing, unparsable, too old, or too far in
the future before performing ucrypto.verify; define a single MAX_PROOF_AGE_MS
constant used by both the register proof (proofMessage) and the history proof
paths to keep behavior consistent.
- Around line 282-307: The handler currently passes limit ?? 50 straight into
this.service.getHistory which can be arbitrarily large; before calling
getHistory you must sanitize msg.payload.limit: parse it to a number (use
default 50 when undefined), reject non-positive values by calling
this.sendError(ws, "INVALID_LIMIT", "Limit must be a positive integer"), and
clamp it to a sane maximum (e.g., const MAX_HISTORY_LIMIT = 100) — then call
this.service.getHistory(myKey, peerKey, l2psUid, before, clampedLimit). Ensure
you reference the local variable limit (from msg.payload), use l2psUid, myKey
and peerKey as-is, and replace the direct use of limit ?? 50 with the
validated/clamped value.
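
The validate-and-clamp step could be sketched like this (MAX_HISTORY_LIMIT is the assumed cap from the review; null signals that the caller should send INVALID_LIMIT):

```typescript
const MAX_HISTORY_LIMIT = 100 // assumed cap

// Returns the clamped limit, or null when the input is not a positive integer.
function sanitizeLimit(raw: unknown): number | null {
    if (raw === undefined) return 50 // keep the existing default
    const n = Number(raw)
    if (!Number.isInteger(n) || n <= 0) return null
    return Math.min(n, MAX_HISTORY_LIMIT)
}
```
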
- Around line 124-145: The proof verification currently uses
getSharedState.signingAlgorithm which can vary; pin the L2PS messaging protocol
to ed25519 by replacing uses of getSharedState.signingAlgorithm in the proof
verification blocks (the verify call in L2PSMessagingServer where proofMessage
is constructed and the similar block around lines 288-296) with the fixed
algorithm identifier "ed25519"; ensure the same change is applied to any other
verify calls in this file that validate client proofs, and keep existing helpers
like hexToUint8Array and sendError unchanged.
- Around line 235-270: The code currently ignores the numeric return value of
ws.send(); update the WebSocket wrapper method send(...) to return the
underlying ws.send(...) numeric result (positive bytes, 0 failure, -1
backpressure) and expose a drain() promise/event for -1; then in
L2PSMessagingServer where you call this.send(recipientPeer!.ws, ...)
(deliver-to-recipient), and where you send acknowledgements (the message_sent
and message_queued responses) and where offline messages are later marked
delivered, check the returned value: treat 0 as a delivery failure (do not
confirm delivery or mark delivered; call sendError or retry), treat -1 as
backpressure (await the wrapper drain() before confirming/marking), and only
proceed with DB updates/confirmations when ws.send() returns >0; ensure the same
logic is applied for the send path referenced by this.service.processMessage
flow and the offline-delivery path so false confirmations are avoided.
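
Bun documents ws.send() as returning a number (>0 bytes sent, 0 dropped, -1 backpressure); folding that into a small classifier keeps the three outcomes explicit (drain handling itself is omitted here):

```typescript
type SendResult = "sent" | "failed" | "backpressure"

// Maps the numeric ws.send() result onto the three delivery outcomes the
// review distinguishes: only "sent" may confirm delivery or update the DB.
function classifySend(bytesOrFlag: number): SendResult {
    if (bytesOrFlag > 0) return "sent"
    if (bytesOrFlag === 0) return "failed"
    return "backpressure" // -1: await the socket's drain before confirming
}
```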

In `@src/features/l2ps-messaging/L2PSMessagingService.ts`:
- Around line 20-25: The offline quota handling in L2PSMessagingService is too
coarse: offlineMessageCounts is keyed only by sender and resetOfflineCount()
deletes the entire sender entry, restoring the full quota after any successful
delivery; change this to either (A) decrement the sender’s counter by the actual
number of messages successfully delivered (use the delivery result to subtract N
from offlineMessageCounts.get(sender)) or (B) make the key more specific
(compose a key from sender + recipient or sender + network) so counts are
tracked per recipient/route instead of globally; update any logic that sets or
resets counts (references: offlineMessageCounts, resetOfflineCount(),
MAX_OFFLINE_MESSAGES_PER_SENDER) to implement one of these approaches and ensure
counts never go negative and respect the MAX_OFFLINE_MESSAGES_PER_SENDER limit.
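
Option (A), decrementing the counter by the number actually delivered, could be sketched as follows (the quota value is assumed):

```typescript
const MAX_OFFLINE_MESSAGES_PER_SENDER = 100 // assumed quota
const offlineMessageCounts = new Map<string, number>()

// Reserve one offline slot; refuse once the sender hits the quota.
function reserveOffline(sender: string): boolean {
    const current = offlineMessageCounts.get(sender) ?? 0
    if (current >= MAX_OFFLINE_MESSAGES_PER_SENDER) return false
    offlineMessageCounts.set(sender, current + 1)
    return true
}

// Release exactly as many slots as were delivered, never going negative.
function releaseOffline(sender: string, delivered: number): void {
    const current = offlineMessageCounts.get(sender) ?? 0
    offlineMessageCounts.set(sender, Math.max(0, current - delivered))
}
```
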

In `@src/features/l2ps-messaging/types.ts`:
- Around line 41-48: ClientMessageType currently exposes the "ack" literal even
though L2PSMessagingServer.handleMessage() has no case to handle it, causing
valid-looking frames to be rejected; either remove "ack" from the exported
ClientMessageType union (in types.ts) until handleMessage implements support, or
add a corresponding handling branch in L2PSMessagingServer.handleMessage() that
processes "ack" frames (match the expected payload shape and return the
appropriate response/ack path). Ensure the symbol names referenced are
ClientMessageType and L2PSMessagingServer.handleMessage so the change is applied
to the right places.

In `@src/features/zk/scripts/setup-zk.ts`:
- Around line 18-19: The NPX resolution using const NPX = execSync("which
npx")... is Unix-only and can throw at module load; change this to detect
platform (process.platform === "win32") and call "where npx" on Windows and
"which npx" elsewhere, wrap the execSync call in a try/catch to handle missing
npx gracefully, and set a clear fallback or throw an informative error; update
the NPX constant assignment and any code that uses NPX to rely on this safe
resolution (refer to the NPX constant and its execSync call).
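
A hedged sketch of that safe resolution (resolveNpx is an illustrative name):

```typescript
import { execSync } from "node:child_process"

// Resolve the npx binary path cross-platform, falling back to a bare "npx"
// (resolved via PATH at spawn time) if the lookup command fails.
function resolveNpx(): string {
    const lookup = process.platform === "win32" ? "where npx" : "which npx"
    try {
        const first = execSync(lookup, { encoding: "utf8" }).split(/\r?\n/)[0].trim()
        if (first) return first
    } catch {
        // npx not found by where/which; fall through to the PATH fallback
    }
    return "npx"
}
```

Because resolution happens inside a function rather than at module load, a missing npx no longer throws during import.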

---

Nitpick comments:
In `@src/features/l2ps-messaging/L2PS_MESSAGING_QUICKSTART.md`:
- Around line 26-28: Add explicit language specifiers to the fenced code blocks
mentioned (e.g., the block containing "[L2PS] Loaded network: testnet_l2ps_001",
the architecture diagram block starting with "Sender (SDK)   Node   Recipient
(SDK)", the test output block starting with "bun test v1.3.3", and the directory
structure block starting with "src/features/l2ps-messaging/") by changing their
opening fences from ``` to ```text (or a more specific language if appropriate)
so they provide proper syntax highlighting and accessibility.

In `@src/features/l2ps-messaging/tests/crypto.test.ts`:
- Around line 93-104: The test currently uses a try/catch with an unreachable
assertion; replace that with an async rejection assertion using
expect(...).rejects.toThrow(): call generateSymmetricKey() twice, encrypt with
encryptMessage("secret", key1), then assert await
expect(decryptMessage(encrypted, key2)).rejects.toThrow() so the test fails when
decryption does not throw; locate the test case using the function names
generateSymmetricKey, encryptMessage, and decryptMessage to update the
assertion.
- Around line 106-121: Replace the manual try/catch assertion in the
tampered-ciphertext test with Jest's promise rejection matcher: create a
tampered copy of the encrypted result (use generateSymmetricKey/encryptMessage
to get the original, then modify a copy's ciphertext bytes), then use await
expect(decryptMessage(tamperedEncrypted, key)).rejects.toBeDefined() or
.rejects.toThrow() instead of try { await decryptMessage...;
expect(true).toBe(false) } catch {...}; this uses the functions
generateSymmetricKey, encryptMessage and decryptMessage and the local
encrypted/tamperedEncrypted variables referenced in the diff.

In `@src/features/l2ps-messaging/tests/integration.test.ts`:
- Around line 32-33: The test's port allocation (in beforeAll using port = 19876
+ Math.floor(Math.random() * 1000)) risks collisions in parallel CI; change the
test to bind the server on port 0 (pass port: 0 to Bun.serve in beforeAll), then
read the assigned port via server.port and assign that to the test's port
variable so all subsequent requests use the real bound port; ensure any teardown
still closes the server and update any places that construct the base URL to use
the retrieved port.
- Around line 32-136: The beforeEach currently calls peers.clear() which drops
references but doesn't close active WebSocket connections; update the beforeEach
to iterate over peers (the peers Map) and call close() on each peer.ws (e.g.,
p.ws.close()) to gracefully terminate connections before calling peers.clear(),
ensuring active sockets are closed and avoiding resource leaks in tests that
create WebSocket server connections.

In `@src/features/l2ps-messaging/tests/L2PSMessagingServer.test.ts`:
- Line 8: The test file imports unused symbols from bun:test (beforeEach,
afterEach, mock, spyOn); remove these unused imports from the import statement
so only describe, it, and expect remain (update the import line that currently
references beforeEach, afterEach, mock, spyOn) to eliminate unused imports and
any linter/test warnings.
- Around line 9-10: Remove the unused imports L2PSMessagingServer and
L2PSMessagingService from the test file; locate the import statements
referencing L2PSMessagingServer and L2PSMessagingService at the top of
src/features/l2ps-messaging/tests/L2PSMessagingServer.test.ts and delete those
lines so the file only imports the mocks/utilities actually used in the tests,
then run lint/tests to confirm no unused-import errors remain.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 0195a4f3-2008-4ce1-96f2-5757c2a0595a

📥 Commits

Reviewing files that changed from the base of the PR and between ba1c1bd and 42e09a9.

📒 Files selected for processing (16)
  • .env.example
  • scripts/l2ps-messaging-test.ts
  • src/features/l2ps-messaging/L2PSMessagingServer.ts
  • src/features/l2ps-messaging/L2PSMessagingService.ts
  • src/features/l2ps-messaging/L2PS_MESSAGING_QUICKSTART.md
  • src/features/l2ps-messaging/crypto.ts
  • src/features/l2ps-messaging/entities/L2PSMessage.ts
  • src/features/l2ps-messaging/index.ts
  • src/features/l2ps-messaging/tests/L2PSMessagingServer.test.ts
  • src/features/l2ps-messaging/tests/L2PSMessagingService.test.ts
  • src/features/l2ps-messaging/tests/crypto.test.ts
  • src/features/l2ps-messaging/tests/integration.test.ts
  • src/features/l2ps-messaging/types.ts
  • src/features/zk/scripts/setup-zk.ts
  • src/index.ts
  • src/model/datasource.ts

Comment on lines +185 to +200
const msgPromiseBob = waitFor(wsBob, "message")
wsAlice.send(frame("send", {
    to: bob.publicKey,
    encrypted: {
        ciphertext: Buffer.from("Hello Bob from Alice!").toString("base64"),
        nonce: Buffer.from("test_nonce_1").toString("base64"),
    },
    messageHash: "hash_alice_to_bob_" + Date.now(),
}))

const msgBob = await msgPromiseBob
log("Bob", `Received message from ${msgBob.payload.from.slice(0, 12)}...`)
log("Bob", `Decoded: ${Buffer.from(msgBob.payload.encrypted.ciphertext, "base64").toString()}`)

const ackAlice = await waitForAny(wsAlice, ["message_sent", "message_queued", "error"])
log("Alice", `Ack: type=${ackAlice.type}`)

⚠️ Potential issue | 🟠 Major

Arm the sender ACK waiter before awaiting the recipient frame.

Both flows subscribe for "message_sent" | "message_queued" | "error" only after await msgPromise.... If processMessage() finishes quickly, the ACK can arrive on the sender socket before waitForAny() is attached, so this script will fail intermittently.

Suggested fix
-    const msgPromiseBob = waitFor(wsBob, "message")
+    const msgPromiseBob = waitFor(wsBob, "message")
+    const ackAlicePromise = waitForAny(wsAlice, ["message_sent", "message_queued", "error"])
     wsAlice.send(frame("send", {
         to: bob.publicKey,
         encrypted: {
             ciphertext: Buffer.from("Hello Bob from Alice!").toString("base64"),
             nonce: Buffer.from("test_nonce_1").toString("base64"),
         },
         messageHash: "hash_alice_to_bob_" + Date.now(),
     }))
 
     const msgBob = await msgPromiseBob
 ...
-    const ackAlice = await waitForAny(wsAlice, ["message_sent", "message_queued", "error"])
+    const ackAlice = await ackAlicePromise

Apply the same pattern to the Bob → Alice block.

Also applies to: 203-218

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@scripts/l2ps-messaging-test.ts` around lines 185 - 200, The sender ACK waiter
is attached after awaiting the recipient message which can miss fast ACKs; for
the Bob receive block, call waitForAny(wsAlice,
["message_sent","message_queued","error"]) and store the promise (e.g.,
ackAlicePromise) before sending/awaiting the recipient message promise
(msgPromiseBob), then await msgPromiseBob and finally await ackAlicePromise; do
the same change for the symmetric Bob→Alice flow (the block using waitForAny
around lines 203-218), referencing the existing waitForAny,
msgPromiseBob/msgPromiseAlice, wsAlice and wsBob symbols so the ACK waiter is
armed before awaiting the recipient frame.

Comment on lines +36 to +45
export async function encryptMessage(
    plaintext: string,
    sharedKey: Uint8Array,
): Promise<SerializedEncryptedMessage> {
    const nonce = crypto.getRandomValues(new Uint8Array(12))
    const encoded = new TextEncoder().encode(plaintext)

    const cryptoKey = await crypto.subtle.importKey(
        "raw", sharedKey.buffer as ArrayBuffer, "AES-GCM", false, ["encrypt"],
    )

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Scripts executed against kynesyslabs/node (collapsed outputs omitted):

git ls-files "src/features/l2ps-messaging/crypto.ts"
head -100 src/features/l2ps-messaging/crypto.ts
cat -n src/features/l2ps-messaging/crypto.ts | sed -n '30,75p'
# Search for test files or usage of these functions
find . -name "*crypto*test*" -o -name "*crypto*spec*" 2>/dev/null | head -20
rg "encryptMessage|decryptMessage" --type ts --context 3 | head -100
cat -n src/features/l2ps-messaging/tests/crypto.test.ts
# Check if there are any tests that use subarrays or different key sizes
rg "subarray|slice" src/features/l2ps-messaging/tests/crypto.test.ts
# Verify Web Crypto API spec behavior with .buffer vs Uint8Array
# Look at how generateSymmetricKey creates keys
cat -n src/features/l2ps-messaging/crypto.ts | sed -n '80,90p'


🌐 Web query:

Web Crypto API SubtleCrypto importKey raw buffer vs Uint8Array

💡 Result:

crypto.subtle.importKey()’s keyData parameter is a BufferSource (or JWK), meaning you can pass either:

  • an ArrayBuffer (raw bytes), or
  • an ArrayBufferView like Uint8Array / any TypedArray / DataView (a view over bytes). The WebCrypto spec defines importKey(format, (BufferSource or JsonWebKey) keyData, ...). [2]

So for "raw" keys, both ArrayBuffer and Uint8Array work—and MDN even shows "raw" import using a Uint8Array. [1]

The gotcha: Uint8Array.buffer may include extra bytes

If your Uint8Array is a slice/view into a larger buffer (non-zero byteOffset or shorter byteLength), then passing u8.buffer imports the entire underlying buffer, not just the view’s window. Prefer passing the view itself, or slice the buffer:

// Best: pass the view (imports exactly the bytes in the view)
await crypto.subtle.importKey("raw", u8, "AES-GCM", false, ["encrypt","decrypt"]);

// If you must pass an ArrayBuffer, slice it to the view range
const ab = u8.buffer.slice(u8.byteOffset, u8.byteOffset + u8.byteLength);
await crypto.subtle.importKey("raw", ab, "AES-GCM", false, ["encrypt","decrypt"]);

Node.js note

In Node, WebCrypto commonly accepts ArrayBuffer | TypedArray | DataView | Buffer, and the error message explicitly lists those accepted types when you pass something else. [3]

Sources: MDN importKey() docs/examples [1], WebCrypto spec signature for importKey() [2], Node.js issue showing the accepted-type error text [3].


🏁 Scripts executed (collapsed outputs omitted):

# Check if these functions are used externally and how keys are passed
rg "encryptMessage|decryptMessage" --type ts -B 3 -A 3 | grep -v "crypto.test.ts" | grep -v "crypto.ts" | head -80
# Search for any patterns where a subarray might be passed as a key
rg "\.subarray\(|\.slice\(" src/features/l2ps-messaging/ --type ts
# Search more broadly for actual calls to encryptMessage in service files
rg "encryptMessage" src/features/l2ps-messaging/ --type ts -B 5 -A 5
# Check the L2PSMessagingService for how it uses these functions
cat -n src/features/l2ps-messaging/L2PSMessagingService.ts | head -150


Add AES-256 key length validation and pass the Uint8Array directly instead of .buffer.

The API is documented as AES-256-GCM but accepts any key size (16, 24, or 32 bytes). Additionally, sharedKey.buffer passes the entire underlying ArrayBuffer, which breaks if a caller ever passes key.subarray(). Pass the Uint8Array directly and validate that sharedKey.byteLength === 32.

Suggested fix
+function assertAes256Key(sharedKey: Uint8Array): Uint8Array {
+    if (sharedKey.byteLength !== 32) {
+        throw new Error("sharedKey must be exactly 32 bytes")
+    }
+    return sharedKey
+}
+
 export async function encryptMessage(
     plaintext: string,
     sharedKey: Uint8Array,
 ): Promise<SerializedEncryptedMessage> {
 ...
     const cryptoKey = await crypto.subtle.importKey(
-        "raw", sharedKey.buffer as ArrayBuffer, "AES-GCM", false, ["encrypt"],
+        "raw", assertAes256Key(sharedKey), "AES-GCM", false, ["encrypt"],
     )
 ...
 export async function decryptMessage(
     encrypted: SerializedEncryptedMessage,
     sharedKey: Uint8Array,
 ): Promise<string> {
 ...
     const cryptoKey = await crypto.subtle.importKey(
-        "raw", sharedKey.buffer as ArrayBuffer, "AES-GCM", false, ["decrypt"],
+        "raw", assertAes256Key(sharedKey), "AES-GCM", false, ["decrypt"],
     )

Also applies to: 62-71

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/l2ps-messaging/crypto.ts` around lines 36 - 45, The
encryptMessage implementation accepts any key length and passes sharedKey.buffer
(which can be misaligned) to crypto.subtle.importKey; update encryptMessage to
validate that sharedKey.byteLength === 32 and throw a clear error if not, and
pass the Uint8Array itself (not .buffer) as the key material to
crypto.subtle.importKey; make the identical change in the counterpart decrypt
function (decryptMessage) so both enforce AES-256-GCM key length and import the
raw key using the sharedKey Uint8Array directly.

Comment on lines +4 to +23
@Entity("l2ps_messages")
export class L2PSMessage {
    /** UUID v4 generated by sender */
    @PrimaryColumn("text", { name: "id" })
    id: string

    @Index()
    @Column("text", { name: "from_key" })
    fromKey: string

    @Index()
    @Column("text", { name: "to_key" })
    toKey: string

    @Index()
    @Column("text", { name: "l2ps_uid" })
    l2psUid: string

    @Column("text", { name: "message_hash", unique: true })
    messageHash: string

⚠️ Potential issue | 🟠 Major

Make message-hash uniqueness network-scoped.

Right now the database and L2PSMessagingService.processMessage() dedupe on messageHash alone. Because the exported hash helper does not include l2psUid, a valid message on one network can collide with the same sender/to/content/timestamp tuple on another network. This should be unique per (l2psUid, messageHash), not globally.

Suggested fix
 @Entity("l2ps_messages")
+@Index(["l2psUid", "messageHash"], { unique: true })
 export class L2PSMessage {
 ...
-    @Column("text", { name: "message_hash", unique: true })
+    @Column("text", { name: "message_hash" })
     messageHash: string

L2PSMessagingService.processMessage() should also change its lookup to findOneBy({ l2psUid, messageHash }).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/l2ps-messaging/entities/L2PSMessage.ts` around lines 4 - 23, The
message-hash uniqueness is currently enforced globally on the
L2PSMessage.messageHash column and L2PSMessagingService.processMessage() looks
up by messageHash only, which allows cross-network collisions; change the DB
schema and lookup to be network-scoped by making the unique key apply to
(l2psUid, messageHash) instead of messageHash alone and update
L2PSMessagingService.processMessage() to use findOneBy({ l2psUid, messageHash })
when deduplicating; ensure the L2PSMessage entity removes the single-column
unique constraint on messageHash and instead defines uniqueness for the pair,
and update any related queries that assume global uniqueness to include l2psUid.

Comment on lines +124 to +145
if (publicKey.length < MIN_PUBLIC_KEY_LENGTH || !/^[0-9a-fA-F]+$/.test(publicKey)) {
    this.sendError(ws, "INVALID_MESSAGE", "Invalid publicKey format (expected hex)")
    return
}

// Verify L2PS network exists
const l2ps = await ParallelNetworks.getInstance().getL2PS(l2psUid)
if (!l2ps) {
    this.sendError(ws, "L2PS_NOT_FOUND", `L2PS network ${l2psUid} not found`)
    return
}

// Verify proof of key ownership: sign("register:{publicKey}:{timestamp}")
const proofMessage = `register:${publicKey}:${msg.timestamp}`
try {
    const valid = await ucrypto.verify({
        algorithm: getSharedState.signingAlgorithm,
        message: new TextEncoder().encode(proofMessage),
        publicKey: this.hexToUint8Array(publicKey),
        signature: this.hexToUint8Array(proof),
    })
    if (!valid) {

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Scripts executed against kynesyslabs/node (collapsed outputs omitted):

find . -name "L2PSMessagingServer.ts" -type f
wc -l src/features/l2ps-messaging/L2PSMessagingServer.ts
sed -n '120,150p' src/features/l2ps-messaging/L2PSMessagingServer.ts
sed -n '284,300p' src/features/l2ps-messaging/L2PSMessagingServer.ts
sed -n '1,50p' src/features/l2ps-messaging/L2PSMessagingServer.ts
find . -name "sharedState.ts" -o -name "sharedState.js"
rg "ucrypto\.verify" src/features/l2ps-messaging/L2PSMessagingServer.ts
rg "signingAlgorithm" --type ts --type js | head -20
cat -n src/utilities/sharedState.ts
sed -n '100,200p' src/features/l2ps-messaging/L2PSMessagingServer.ts
rg "signingAlgorithm\s*=" src/utilities/sharedState.ts
rg "sharedState\.signingAlgorithm\s*=" --type ts --type js
sed -n '1,80p' src/features/l2ps-messaging/L2PSMessagingServer.ts
rg "signingAlgorithm" src/utilities/sharedState.ts -A 2 -B 2
rg "signingAlgorithm\s*:" src/utilities/sharedState.ts
rg "SIGNING_ALGORITHM|ALGORITHM" --type ts --type js | head -30
sed -n '115,135p' src/features/l2ps-messaging/L2PSMessagingServer.ts
rg "signingAlgorithm\s*=" src/ --type ts -B 5 -A 5
cat -n src/libs/network/verifySignature.ts
rg "getSharedState.*signingAlgorithm" src/features/l2ps-messaging/ -B 3 -A 8
rg "SharedState|getSharedState" src/features/l2ps-messaging/L2PSMessagingServer.ts -B 2 -A 2
rg "process\.env" src/utilities/sharedState.ts | grep -i "algo\|sign\|crypto"
cat -n src/features/l2ps-messaging/L2PSMessagingService.ts | head -80


Pin the L2PS messaging protocol to ed25519 or embed the signing algorithm in client proofs.

Lines 124-145 and 288-296 verify client proofs using getSharedState.signingAlgorithm. While currently hardcoded to ed25519, the L2PS messaging protocol accepts plain hex public keys without specifying the signing algorithm. If the node is configured with Falcon/ML-DSA, ed25519-signed client proofs will fail. Either hardcode the protocol to ed25519 (recommended for this use case) or extend the message format to include the algorithm with each proof.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/l2ps-messaging/L2PSMessagingServer.ts` around lines 124 - 145,
The proof verification currently uses getSharedState.signingAlgorithm which can
vary; pin the L2PS messaging protocol to ed25519 by replacing uses of
getSharedState.signingAlgorithm in the proof verification blocks (the verify
call in L2PSMessagingServer where proofMessage is constructed and the similar
block around lines 288-296) with the fixed algorithm identifier "ed25519";
ensure the same change is applied to any other verify calls in this file that
validate client proofs, and keep existing helpers like hexToUint8Array and
sendError unchanged.

Comment on lines +136 to +152
// Verify proof of key ownership: sign("register:{publicKey}:{timestamp}")
const proofMessage = `register:${publicKey}:${msg.timestamp}`
try {
    const valid = await ucrypto.verify({
        algorithm: getSharedState.signingAlgorithm,
        message: new TextEncoder().encode(proofMessage),
        publicKey: this.hexToUint8Array(publicKey),
        signature: this.hexToUint8Array(proof),
    })
    if (!valid) {
        this.sendError(ws, "INVALID_PROOF", "Signature verification failed")
        return
    }
} catch (error) {
    this.sendError(ws, "INVALID_PROOF", `Proof verification error: ${error}`)
    return
}

⚠️ Potential issue | 🔴 Critical

Add freshness checks to signed proofs.

Both proof strings include msg.timestamp, but neither path rejects stale timestamps. A captured register or history frame can therefore be replayed indefinitely to re-register a peer or re-read history until the key rotates.

Also applies to: 288-304

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/l2ps-messaging/L2PSMessagingServer.ts` around lines 136 - 152,
Add a freshness check around the proof verification to reject stale or future
timestamps: parse msg.timestamp (in both the register verification block and the
similar history verification at the other location) into a number, verify it is
within an allowed skew/window (e.g., now ± MAX_PROOF_AGE_MS), and call
sendError(ws, "STALE_PROOF", "...") (or reuse "INVALID_PROOF") if the timestamp
is missing, unparsable, too old, or too far in the future before performing
ucrypto.verify; define a single MAX_PROOF_AGE_MS constant used by both the
register proof (proofMessage) and the history proof paths to keep behavior
consistent.
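A freshness window for proof timestamps could look like the following sketch; the window sizes and function name are illustrative values, not constants from the codebase, and timestamps are assumed to be epoch milliseconds:

```typescript
// Illustrative values: tune MAX_PROOF_AGE_MS / skew to deployment needs.
const MAX_PROOF_AGE_MS = 5 * 60 * 1000 // reject proofs older than 5 minutes
const MAX_FUTURE_SKEW_MS = 30 * 1000   // tolerate small clock drift

function isProofFresh(timestamp: unknown, now: number = Date.now()): boolean {
    const ts = typeof timestamp === "number" ? timestamp : Number(timestamp)
    if (!Number.isFinite(ts)) return false          // missing or unparsable
    if (now - ts > MAX_PROOF_AGE_MS) return false   // too old: possible replay
    if (ts - now > MAX_FUTURE_SKEW_MS) return false // too far in the future
    return true
}
```

Calling this before ucrypto.verify in both the register and history paths, and sending INVALID_PROOF (or STALE_PROOF) when it returns false, bounds the replay window to a few minutes.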

Comment on lines +282 to +307
```typescript
const { peerKey, before, limit, proof } = msg.payload
if (!peerKey || !proof) {
    this.sendError(ws, "INVALID_MESSAGE", "Missing peerKey or proof")
    return
}

// Verify proof: sign("history:{peerKey}:{timestamp}")
const proofMessage = `history:${peerKey}:${msg.timestamp}`
try {
    const valid = await ucrypto.verify({
        algorithm: getSharedState.signingAlgorithm,
        message: new TextEncoder().encode(proofMessage),
        publicKey: this.hexToUint8Array(myKey),
        signature: this.hexToUint8Array(proof),
    })
    if (!valid) {
        this.sendError(ws, "INVALID_PROOF", "History proof failed")
        return
    }
} catch {
    this.sendError(ws, "INVALID_PROOF", "Proof verification error")
    return
}

const l2psUid = ws.data.l2psUid!
const result = await this.service.getHistory(myKey, peerKey, l2psUid, before, limit ?? 50)
```

⚠️ Potential issue | 🟠 Major

Clamp limit before querying history.

limit ?? 50 allows arbitrary page sizes, and getHistory() turns that straight into .take(limit + 1). One client can request a huge page and force an expensive DB read. Bound it to a sane maximum and reject non-positive values.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/l2ps-messaging/L2PSMessagingServer.ts` around lines 282 - 307,
The handler currently passes limit ?? 50 straight into this.service.getHistory
which can be arbitrarily large; before calling getHistory you must sanitize
msg.payload.limit: parse it to a number (use default 50 when undefined), reject
non-positive values by calling this.sendError(ws, "INVALID_LIMIT", "Limit must
be a positive integer"), and clamp it to a sane maximum (e.g., const
MAX_HISTORY_LIMIT = 100) — then call this.service.getHistory(myKey, peerKey,
l2psUid, before, clampedLimit). Ensure you reference the local variable limit
(from msg.payload), use l2psUid, myKey and peerKey as-is, and replace the direct
use of limit ?? 50 with the validated/clamped value.
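A minimal validator for the limit could be sketched as follows; MAX_HISTORY_LIMIT is an illustrative cap, and returning null signals that the caller should send INVALID_MESSAGE:

```typescript
const DEFAULT_HISTORY_LIMIT = 50
const MAX_HISTORY_LIMIT = 100 // illustrative cap, not an existing constant

// Returns a safe page size, or null when the client-supplied value is invalid.
function clampHistoryLimit(raw: unknown): number | null {
    if (raw === undefined) return DEFAULT_HISTORY_LIMIT
    const n = Number(raw)
    if (!Number.isInteger(n) || n <= 0) return null // reject NaN, zero, negatives
    return Math.min(n, MAX_HISTORY_LIMIT)           // cap oversized requests
}
```

The handler would then pass the clamped value to getHistory, keeping the downstream .take(limit + 1) bounded.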

Comment on lines +20 to +25
```typescript
const MAX_OFFLINE_MESSAGES_PER_SENDER = 200

export class L2PSMessagingService {
    private static instance: L2PSMessagingService
    private offlineMessageCounts = new Map<string, number>()
```

⚠️ Potential issue | 🟠 Major

The offline quota resets too aggressively.

offlineMessageCounts is keyed only by sender, but resetOfflineCount() deletes the whole entry. As soon as one queued delivery succeeds, that sender gets the full quota back even if they still have many queued messages for other peers. Decrement by the number actually delivered, or scope the counter by sender + recipient/network instead of blanket-resetting it.

Also applies to: 57-64, 274-276

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/l2ps-messaging/L2PSMessagingService.ts` around lines 20 - 25,
The offline quota handling in L2PSMessagingService is too coarse:
offlineMessageCounts is keyed only by sender and resetOfflineCount() deletes the
entire sender entry, restoring the full quota after any successful delivery;
change this to either (A) decrement the sender’s counter by the actual number of
messages successfully delivered (use the delivery result to subtract N from
offlineMessageCounts.get(sender)) or (B) make the key more specific (compose a
key from sender + recipient or sender + network) so counts are tracked per
recipient/route instead of globally; update any logic that sets or resets counts
(references: offlineMessageCounts, resetOfflineCount(),
MAX_OFFLINE_MESSAGES_PER_SENDER) to implement one of these approaches and ensure
counts never go negative and respect the MAX_OFFLINE_MESSAGES_PER_SENDER limit.
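Approach (A) — decrementing by the number actually delivered — can be sketched like this; the OfflineQuota class is hypothetical and only mirrors the Map and constant discussed above:

```typescript
const MAX_OFFLINE_MESSAGES_PER_SENDER = 200

// Hypothetical helper: quota is reserved per queued message and released
// only for messages that were actually delivered, not blanket-reset.
class OfflineQuota {
    private counts = new Map<string, number>()

    tryReserve(sender: string): boolean {
        const count = this.counts.get(sender) ?? 0
        if (count >= MAX_OFFLINE_MESSAGES_PER_SENDER) return false
        this.counts.set(sender, count + 1)
        return true
    }

    // Called after delivery commits: release only what was delivered.
    release(sender: string, delivered: number): void {
        const count = this.counts.get(sender) ?? 0
        const next = Math.max(0, count - delivered) // never go negative
        if (next === 0) this.counts.delete(sender)
        else this.counts.set(sender, next)
    }

    remaining(sender: string): number {
        return MAX_OFFLINE_MESSAGES_PER_SENDER - (this.counts.get(sender) ?? 0)
    }
}
```

A sender with messages still queued for other peers keeps those reservations until they too are delivered.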

Comment on lines +41 to +48
```typescript
export type ClientMessageType =
    | "register"
    | "send"
    | "history"
    | "discover"
    | "request_public_key"
    | "ack"
```

⚠️ Potential issue | 🟡 Minor

Do not expose "ack" until the server handles it.

ClientMessageType advertises "ack", but L2PSMessagingServer.handleMessage() has no matching case. A client built against this exported union can send a valid-looking frame and still get INVALID_MESSAGE.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/l2ps-messaging/types.ts` around lines 41 - 48, ClientMessageType
currently exposes the "ack" literal even though
L2PSMessagingServer.handleMessage() has no case to handle it, causing
valid-looking frames to be rejected; either remove "ack" from the exported
ClientMessageType union (in types.ts) until handleMessage implements support, or
add a corresponding handling branch in L2PSMessagingServer.handleMessage() that
processes "ack" frames (match the expected payload shape and return the
appropriate response/ack path). Ensure the symbol names referenced are
ClientMessageType and L2PSMessagingServer.handleMessage so the change is applied
to the right places.

Comment on lines +18 to +19
```typescript
// npx path - resolved dynamically from PATH
const NPX = execSync("which npx").toString().trim()
```

⚠️ Potential issue | 🟠 Major

which npx fails on Windows and lacks error handling.

The which command is Unix-specific and does not exist on Windows (use where instead). Additionally, if npx is not installed or not in PATH, execSync throws an unhandled exception at module load time with a cryptic error, degrading the user experience.

This contradicts the PR goal of improving reliability and is inconsistent with the cross-platform awareness shown elsewhere in this file (e.g., using unlinkSync instead of shell rm).

Proposed fix with cross-platform support and error handling
```diff
-// npx path - resolved dynamically from PATH
-const NPX = execSync("which npx").toString().trim()
+// npx path - resolved dynamically from PATH (cross-platform)
+function resolveNpx(): string {
+    const isWindows = process.platform === "win32"
+    const command = isWindows ? "where npx" : "which npx"
+    try {
+        const result = execSync(command, { encoding: "utf-8", stdio: ["pipe", "pipe", "pipe"] })
+        // `where` on Windows may return multiple lines; take the first
+        return result.split(/\r?\n/)[0].trim()
+    } catch {
+        throw new Error(
+            "npx not found in PATH. Please install Node.js/npm first.\n" +
+            "  → https://nodejs.org/"
+        )
+    }
+}
+
+const NPX = resolveNpx()
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/zk/scripts/setup-zk.ts` around lines 18 - 19, The NPX resolution
using const NPX = execSync("which npx")... is Unix-only and can throw at module
load; change this to detect platform (process.platform === "win32") and call
"where npx" on Windows and "which npx" elsewhere, wrap the execSync call in a
try/catch to handle missing npx gracefully, and set a clear fallback or throw an
informative error; update the NPX constant assignment and any code that uses NPX
to rely on this safe resolution (refer to the NPX constant and its execSync
call).


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/features/l2ps-messaging/L2PSMessagingServer.ts`:
- Around line 235-250: The recipient frame is being sent before persistence/L2PS
submission completes; move the this.send(...) that targets recipientPeer!.ws
(inside the recipientOnline branch) to after awaiting
this.service.processMessage(...) and after verifying result.success is true, so
the live recipient only receives messages that were successfully processed; if
result.success is false, keep calling this.sendError(ws, "L2PS_SUBMIT_FAILED",
result.error) and do not deliver to recipient; preserve the same payload fields
(from: senderKey, encrypted, messageHash, offline: false, timestamp) when
sending post-success.
- Around line 405-423: The offline quota is being reset inside the send loop via
this.service.resetOfflineCount(msg.from) before the DB update
this.service.markDelivered(deliveredIds) commits; move quota adjustment so it
runs only after markDelivered succeeds and base it on the actual number of rows
marked delivered. Concretely: stop calling
this.service.resetOfflineCount(msg.from) inside the try/send loop, call
markDelivered(deliveredIds) first, inspect the result (e.g., number of rows
updated or returned IDs) and then call this.service.resetOfflineCount(msg.from,
count) or equivalent with the actual delivered count for that sender (use
deliveredIds or the confirmed-updated IDs) to ensure quota is only freed after a
successful commit.

In `@src/features/l2ps-messaging/L2PSMessagingService.ts`:
- Around line 48-52: The preflight read using repo.findOneBy({ messageHash }) in
L2PSMessagingService is TOCTOU-prone; preserve that read as best-effort but make
the unique DB constraint in the L2PSMessage entity the true guard: remove/keep
the early-return as non-fatal and wrap the repo.save(...) call in a try/catch to
detect duplicate-key/unique-constraint DB errors (the error thrown when
inserting a row violating message_hash uniqueness), and on that error return the
same { success: false, error: "Duplicate message" } response instead of
surfacing an internal error; apply the same pattern around the other insertion
block referenced (lines ~66-77) so all saves handle duplicate-key errors
gracefully.
- Around line 57-64: The offline-message quota is incremented before DB/L2PS
persistence and never rolled back on submission failure; also the code writes
the undeclared status "failed". Fix by: when handling the L2PS submission
failure in L2PSMessagingService (the block that sets the message status to
"failed"), decrement this.offlineMessageCounts for the same fromKey if recipient
was offline and the stored count > 0 to roll back the quota, and replace the
literal "failed" with a valid MessageStatus value (or add "failed" to the
MessageStatus type and L2PSMessage entity if you prefer modeling a failed
terminal state) so the status written to the DB is declared.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: ec534a51-66db-4ad2-b5d7-b60310db587d

📥 Commits

Reviewing files that changed from the base of the PR and between 42e09a9 and 4df26f3.

📒 Files selected for processing (2)
  • src/features/l2ps-messaging/L2PSMessagingServer.ts
  • src/features/l2ps-messaging/L2PSMessagingService.ts

Comment on lines +57 to +64
// Rate-limit offline messages
if (!recipientOnline) {
const count = this.offlineMessageCounts.get(fromKey) ?? 0
if (count >= MAX_OFFLINE_MESSAGES_PER_SENDER) {
return { success: false, error: "Offline message limit reached" }
}
this.offlineMessageCounts.set(fromKey, count + 1)
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail

echo "== Status definitions =="
sed -n '1,220p' src/features/l2ps-messaging/types.ts
echo
sed -n '1,220p' src/features/l2ps-messaging/entities/L2PSMessage.ts
echo

echo "== Current writes/usages of \"failed\" in messaging code =="
rg -n '"failed"|status\s*:' src/features/l2ps-messaging src/model
```

Repository: kynesyslabs/node

Length of output: 1781


🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail

echo "== Checking for L2PS messaging files =="
find . -type f -name "*L2PSMessage*" -o -name "*L2PSMessaging*" 2>/dev/null | head -20
echo

echo "== Checking types.ts and entities structure =="
ls -la src/features/l2ps-messaging/ 2>/dev/null || echo "Directory not found"
```

Repository: kynesyslabs/node

Length of output: 1080


🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail

echo "== L2PSMessagingService.ts lines 50-90 (context around 57-64 and 82-85) =="
sed -n '50,90p' src/features/l2ps-messaging/L2PSMessagingService.ts

echo
echo "== Full types.ts to check status definitions =="
cat src/features/l2ps-messaging/types.ts

echo
echo "== Full L2PSMessage.ts entity =="
cat src/features/l2ps-messaging/entities/L2PSMessage.ts

echo
echo "== Search for 'failed' status writes in messaging code =="
rg -n '"failed"|status.*failed|failed.*status' src/features/l2ps-messaging/ -A 2 -B 2
```

Repository: kynesyslabs/node

Length of output: 11481


Roll back offline message quota on L2PS submission failure.

When a recipient is offline, the sender's offline message count is incremented at line 63 before the message is durably saved to the database at line 69. If the subsequent L2PS submission fails at line 82, the code updates the message status to "failed" at line 84 but never decrements the quota. A few transient DB/L2PS failures can exhaust the sender's offline quota even though no messages were actually queued. Additionally, "failed" is not a declared status in the MessageStatus type (types.ts) or the L2PSMessage entity (line 27), yet it is written to the database.

Either decrement offlineMessageCounts on failure, or model "failed" as a valid status end-to-end rather than using an undeclared value.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/l2ps-messaging/L2PSMessagingService.ts` around lines 57 - 64,
The offline-message quota is incremented before DB/L2PS persistence and never
rolled back on submission failure; also the code writes the undeclared status
"failed". Fix by: when handling the L2PS submission failure in
L2PSMessagingService (the block that sets the message status to "failed"),
decrement this.offlineMessageCounts for the same fromKey if recipient was
offline and the stored count > 0 to roll back the quota, and replace the literal
"failed" with a valid MessageStatus value (or add "failed" to the MessageStatus
type and L2PSMessage entity if you prefer modeling a failed terminal state) so
the status written to the DB is declared.
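The rollback pattern can be sketched as follows; `persist` is a hypothetical stand-in for the repo.save + L2PS submission path, and the map/constant are simplified local versions of the service's fields:

```typescript
// Simplified stand-ins for the service's quota state.
const MAX_OFFLINE = 200
const offlineCounts = new Map<string, number>()

// Consume quota only for messages that were durably queued; roll back on failure.
async function queueOfflineMessage(
    fromKey: string,
    persist: () => Promise<void>, // hypothetical: repo.save + L2PS submission
): Promise<{ success: boolean; error?: string }> {
    const count = offlineCounts.get(fromKey) ?? 0
    if (count >= MAX_OFFLINE) {
        return { success: false, error: "Offline message limit reached" }
    }
    offlineCounts.set(fromKey, count + 1)
    try {
        await persist()
        return { success: true }
    } catch (error) {
        // Roll back: submission failed, so no message was actually queued.
        offlineCounts.set(fromKey, count)
        return { success: false, error: String(error) }
    }
}
```

Transient DB/L2PS failures then leave the sender's quota untouched instead of silently consuming it.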

@sonarqubecloud

Quality Gate failed

Failed conditions
3 Security Hotspots
B Reliability Rating on New Code (required ≥ A)

See analysis details on SonarQube Cloud



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (6)
src/features/l2ps-messaging/types.ts (1)

41-48: ⚠️ Potential issue | 🟡 Minor

Remove "ack" until the server implements it.

ClientMessageType advertises an "ack" frame, but L2PSMessagingServer.handleMessage() has no matching case, so a type-correct client still gets INVALID_MESSAGE.

Suggested diff
```diff
 export type ClientMessageType =
     | "register"
     | "send"
     | "history"
     | "discover"
     | "request_public_key"
-    | "ack"
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/l2ps-messaging/types.ts` around lines 41 - 48, ClientMessageType
currently includes the "ack" variant but L2PSMessagingServer.handleMessage()
does not handle it, causing valid-typed clients to receive INVALID_MESSAGE;
remove the "ack" member from the ClientMessageType union in types.ts until you
implement handling in L2PSMessagingServer.handleMessage(), or alternatively add
a matching case in L2PSMessagingServer.handleMessage() that processes "ack"
frames consistently with the protocol.
src/features/l2ps-messaging/L2PSMessagingService.ts (1)

24-25: ⚠️ Potential issue | 🟠 Major

Resetting the whole sender quota still frees too much capacity.

offlineMessageCounts is keyed only by sender, and resetOfflineCount() deletes the entire entry. Delivering one queued message to one peer restores the full quota even if that sender still has other queued messages outstanding.

Also applies to: 289-290

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/l2ps-messaging/L2PSMessagingService.ts` around lines 24 - 25,
offlineMessageCounts is currently keyed only by sender and resetOfflineCount()
deletes the entire sender entry, which wrongly restores full quota when only one
queued message was delivered; change the counting strategy to track
per-sender-per-recipient (or per-queue-item) counts instead of a single
sender-wide count, update where counts are incremented/decremented (references:
offlineMessageCounts and resetOfflineCount) so delivering one queued message
decrements the specific sender-recipient counter (or decrements the sender total
by one) rather than deleting the whole map entry, and ensure any other usages
(e.g., the code paths at the other referenced locations) are updated to use the
new composite key or decrement behavior.
src/features/l2ps-messaging/L2PSMessagingServer.ts (4)

136-152: ⚠️ Potential issue | 🔴 Critical

Reject stale or future proof timestamps before verifying signatures.

Both proof paths bind msg.timestamp, but neither checks that it is present, parseable, and within a small skew window. A captured register or history frame can still be replayed indefinitely.

Also applies to: 284-300

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/l2ps-messaging/L2PSMessagingServer.ts` around lines 136 - 152,
The proof verification currently uses msg.timestamp without validating it;
update the L2PSMessagingServer registration and the analogous history
verification block (the other proof path) to first ensure msg.timestamp exists,
can be parsed as an integer/number, and falls within an acceptable clock skew
window (e.g., ±300 seconds) before calling ucrypto.verify; if missing,
unparsable, or outside the window call this.sendError(ws, "INVALID_PROOF",
"...") and return. Locate the verification blocks around the proofMessage
construction and hexToUint8Array usage in L2PSMessagingServer and add the
timestamp parsing/validation logic there (and mirror the same check in the block
at the other referenced location).

139-143: ⚠️ Potential issue | 🟠 Major

Pin client-proof verification to ed25519, or include the algorithm in the signed payload.

The protocol comments/types describe client keys as ed25519, but these verifies follow the node-wide getSharedState.signingAlgorithm. A node configured for a different signing algorithm will reject otherwise valid client proofs.

Verify by comparing the protocol contract with the runtime algorithm source:

```shell
#!/bin/bash
rg -nC2 'signingAlgorithm|ed25519' \
  src/features/l2ps-messaging/L2PSMessagingServer.ts \
  src/features/l2ps-messaging/types.ts \
  src/utilities/sharedState.ts
```

Also applies to: 287-291

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/l2ps-messaging/L2PSMessagingServer.ts` around lines 139 - 143,
The verification uses node-wide getSharedState.signingAlgorithm which can
mismatch the protocol's ed25519 keys; change the client-proof verification in
L2PSMessagingServer to explicitly use ed25519 (i.e., call ucrypto.verify with
algorithm set to 'ed25519') or alternatively embed the algorithm in the signed
payload and parse it before calling ucrypto.verify; update both occurrences that
pass getSharedState.signingAlgorithm (the verify call that encodes proofMessage
and hexToUint8Array(publicKey)/hexToUint8Array(proof) and the similar block at
the later verify call around lines 287-291) so the algorithm is
deterministically derived from the protocol (prefer hardcoding 'ed25519' for
client proofs) rather than from getSharedState.signingAlgorithm.

431-436: ⚠️ Potential issue | 🔴 Critical

Return Bun's ws.send() result instead of swallowing it.

This helper currently turns every send into a fire-and-forget operation, so callers cannot tell success from 0 (dropped) or -1 (backpressure). That leaks into false message_sent confirmations and queued-message rows being marked delivered even when the socket write did not succeed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/l2ps-messaging/L2PSMessagingServer.ts` around lines 431 - 436,
The send helper currently swallows Bun's ServerWebSocket.send() return value;
change L2PSMessagingServer.send to return the numeric result from ws.send(...)
(use the existing ServerWebSocket<WSData> and ProtocolFrame types) so callers
can detect 0 (dropped) or -1 (backpressure/error). Concretely: change the method
signature from void to return number, return ws.send(JSON.stringify(frame)) in
the try block, and in the catch log the error with log.debug and return -1 (or
another sentinel) instead of swallowing; then update callers that mark messages
delivered (message_sent / queued-message flow) to check this numeric result and
only mark delivered on a success code.

278-303: ⚠️ Potential issue | 🟠 Major

Clamp and validate limit before querying history.

limit ?? 50 still flows straight into L2PSMessagingService.getHistory(), which turns it into .take(limit + 1). A non-positive or huge value can produce odd pagination or an unnecessarily expensive DB read.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/l2ps-messaging/L2PSMessagingServer.ts` around lines 278 - 303,
The code passes limit ?? 50 straight into service.getHistory which can lead to
non-positive or huge values; validate and normalize limit before calling
getHistory: parse/ensure msg.payload.limit is a positive integer, default to 50
if undefined, clamp it to a sane max (e.g., MAX_HISTORY_LIMIT = 100 or a config
constant) and ensure a minimum of 1, and if the provided limit is invalid (NaN
or <=0) either send an INVALID_MESSAGE error or coerce to the default; then call
this.service.getHistory(myKey, peerKey, l2psUid, before, clampedLimit) instead
of using limit ?? 50 so downstream .take(limit + 1) is safe.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/features/l2ps-messaging/L2PSMessagingServer.ts`:
- Around line 156-157: The empty catch in the socket-replacement block swallows
errors when closing a stale socket: in L2PSMessagingServer.ts where you check
"if (existing)" and call "(existing.ws as ServerWebSocket<WSData>).close()",
remove the empty catch and replace it with proper error handling—catch the
exception and log it at debug level (via the server's logger or processLogger)
including context that this occurred while closing/replacing an existing
connection so cleanup failures are observable; ensure the original close is
still attempted and that the catch only logs the error without rethrowing.

In `@src/features/l2ps-messaging/L2PSMessagingService.ts`:
- Around line 57-85: The offlineMessageCounts increment for fromKey happens
before persisting/queueing (see offlineMessageCounts.set(fromKey, ...),
repo.save(msg), and subsequent L2PS submission code); move or protect the
increment so that any exception during repo.save(msg) or any later pre-queue
error rolls the counter back: decrement offlineMessageCounts for fromKey in the
catch/finally path (or delay increment until after durable queueing) to ensure
the sender’s quota is only consumed for successfully queued messages and handle
duplicate-key (saveError.code === "23505" or
saveError.message.includes("duplicate key")) and other errors by restoring the
count.
- Around line 54-78: The code currently sets status = "delivered" based on
recipientOnline before the websocket write, which can falsely mark messages as
delivered; change the flow so you always persist the new L2PSMessage
(L2PSMessage instance persisted via repo.save) with status "queued" initially,
then attempt the socket send when recipientOnline is true, and only after the
socket write succeeds update the same message record to status = "delivered"
(use repo.save/repo.update on the existing msg.id and set l2psTxHash if
available); if the socket send fails or the peer disconnects, keep it queued so
offline retry logic can handle it. Apply the same change to the other save path
that currently sets delivered prematurely (the other repo.save block referenced
in the comment).

---

Duplicate comments:
In `@src/features/l2ps-messaging/L2PSMessagingServer.ts`:
- Around line 136-152: The proof verification currently uses msg.timestamp
without validating it; update the L2PSMessagingServer registration and the
analogous history verification block (the other proof path) to first ensure
msg.timestamp exists, can be parsed as an integer/number, and falls within an
acceptable clock skew window (e.g., ±300 seconds) before calling ucrypto.verify;
if missing, unparsable, or outside the window call this.sendError(ws,
"INVALID_PROOF", "...") and return. Locate the verification blocks around the
proofMessage construction and hexToUint8Array usage in L2PSMessagingServer and
add the timestamp parsing/validation logic there (and mirror the same check in
the block at the other referenced location).
- Around line 139-143: The verification uses node-wide
getSharedState.signingAlgorithm which can mismatch the protocol's ed25519 keys;
change the client-proof verification in L2PSMessagingServer to explicitly use
ed25519 (i.e., call ucrypto.verify with algorithm set to 'ed25519') or
alternatively embed the algorithm in the signed payload and parse it before
calling ucrypto.verify; update both occurrences that pass
getSharedState.signingAlgorithm (the verify call that encodes proofMessage and
hexToUint8Array(publicKey)/hexToUint8Array(proof) and the similar block at the
later verify call around lines 287-291) so the algorithm is deterministically
derived from the protocol (prefer hardcoding 'ed25519' for client proofs) rather
than from getSharedState.signingAlgorithm.
- Around line 431-436: The send helper currently swallows Bun's
ServerWebSocket.send() return value; change L2PSMessagingServer.send to return
the numeric result from ws.send(...) (use the existing ServerWebSocket<WSData>
and ProtocolFrame types) so callers can detect 0 (dropped) or -1
(backpressure/error). Concretely: change the method signature from void to
return number, return ws.send(JSON.stringify(frame)) in the try block, and in
the catch log the error with log.debug and return -1 (or another sentinel)
instead of swallowing; then update callers that mark messages delivered
(message_sent / queued-message flow) to check this numeric result and only mark
delivered on a success code.
- Around line 278-303: The code passes limit ?? 50 straight into
service.getHistory which can lead to non-positive or huge values; validate and
normalize limit before calling getHistory: parse/ensure msg.payload.limit is a
positive integer, default to 50 if undefined, clamp it to a sane max (e.g.,
MAX_HISTORY_LIMIT = 100 or a config constant) and ensure a minimum of 1, and if
the provided limit is invalid (NaN or <=0) either send an INVALID_MESSAGE error
or coerce to the default; then call this.service.getHistory(myKey, peerKey,
l2psUid, before, clampedLimit) instead of using limit ?? 50 so downstream
.take(limit + 1) is safe.

In `@src/features/l2ps-messaging/L2PSMessagingService.ts`:
- Around line 24-25: offlineMessageCounts is currently keyed only by sender and
resetOfflineCount() deletes the entire sender entry, which wrongly restores full
quota when only one queued message was delivered; change the counting strategy
to track per-sender-per-recipient (or per-queue-item) counts instead of a single
sender-wide count, update where counts are incremented/decremented (references:
offlineMessageCounts and resetOfflineCount) so delivering one queued message
decrements the specific sender-recipient counter (or decrements the sender total
by one) rather than deleting the whole map entry, and ensure any other usages
(e.g., the code paths at the other referenced locations) are updated to use the
new composite key or decrement behavior.

In `@src/features/l2ps-messaging/types.ts`:
- Around line 41-48: ClientMessageType currently includes the "ack" variant but
L2PSMessagingServer.handleMessage() does not handle it, causing valid-typed
clients to receive INVALID_MESSAGE; remove the "ack" member from the
ClientMessageType union in types.ts until you implement handling in
L2PSMessagingServer.handleMessage(), or alternatively add a matching case in
L2PSMessagingServer.handleMessage() that processes "ack" frames consistently
with the protocol.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: c6226e79-2608-4adb-9f86-ac1f6d791acd

📥 Commits

Reviewing files that changed from the base of the PR and between 4df26f3 and 540a783.

📒 Files selected for processing (3)
  • src/features/l2ps-messaging/L2PSMessagingServer.ts
  • src/features/l2ps-messaging/L2PSMessagingService.ts
  • src/features/l2ps-messaging/types.ts

Comment on lines +156 to +157
if (existing) {
try { (existing.ws as ServerWebSocket<WSData>).close() } catch {}

⚠️ Potential issue | 🟡 Minor

Remove the empty catch here.

Line 157 currently trips ESLint no-empty, and it also hides failures while replacing a stale socket. At least log at debug level so the cleanup path is observable.

🧰 Tools
🪛 ESLint

[error] 157-157: Empty block statement.

(no-empty)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/l2ps-messaging/L2PSMessagingServer.ts` around lines 156 - 157,
The empty catch in the socket-replacement block swallows errors when closing a
stale socket: in L2PSMessagingServer.ts where you check "if (existing)" and call
"(existing.ws as ServerWebSocket<WSData>).close()", remove the empty catch and
replace it with proper error handling—catch the exception and log it at debug
level (via the server's logger or processLogger) including context that this
occurred while closing/replacing an existing connection so cleanup failures are
observable; ensure the original close is still attempted and that the catch only
logs the error without rethrowing.
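A sketch of the suggested fix, with an assumed logger interface standing in for the server's `processLogger`:

```typescript
// Assumed minimal logger shape; the real server would use its own logger.
interface DebugLogger {
    debug(msg: string): void
}

function closeStaleSocket(ws: { close(): void }, log: DebugLogger): void {
    try {
        ws.close()
    } catch (err) {
        // Log at debug level instead of swallowing, and don't rethrow:
        // replacing a stale connection should never fail on cleanup.
        log.debug(`Failed to close stale socket while replacing connection: ${String(err)}`)
    }
}
```

This satisfies ESLint `no-empty` and makes the cleanup path observable without changing the replace-socket behavior.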

Comment on lines +54 to +78
const status = recipientOnline ? "delivered" : "queued"
const now = Date.now()

// Rate-limit offline messages
if (!recipientOnline) {
const count = this.offlineMessageCounts.get(fromKey) ?? 0
if (count >= MAX_OFFLINE_MESSAGES_PER_SENDER) {
return { success: false, error: "Offline message limit reached" }
}
this.offlineMessageCounts.set(fromKey, count + 1)
}

// Store message in local DB
const msg = new L2PSMessage()
msg.id = messageId
msg.fromKey = fromKey
msg.toKey = toKey
msg.l2psUid = l2psUid
msg.messageHash = messageHash
msg.encrypted = encrypted
msg.l2psTxHash = null
msg.timestamp = String(now)
msg.status = status
try {
await repo.save(msg)

⚠️ Potential issue | 🔴 Critical

Don't persist live-recipient messages as delivered before the socket write succeeds.

recipientOnline is only a snapshot from the peer map. If the recipient disconnects, or the later websocket send is dropped, this row is already stored as delivered and will never be retried from the offline queue.

Also applies to: 101-105

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/l2ps-messaging/L2PSMessagingService.ts` around lines 54 - 78,
The code currently sets status = "delivered" based on recipientOnline before the
websocket write, which can falsely mark messages as delivered; change the flow
so you always persist the new L2PSMessage (L2PSMessage instance persisted via
repo.save) with status "queued" initially, then attempt the socket send when
recipientOnline is true, and only after the socket write succeeds update the
same message record to status = "delivered" (use repo.save/repo.update on the
existing msg.id and set l2psTxHash if available); if the socket send fails or
the peer disconnects, keep it queued so offline retry logic can handle it. Apply
the same change to the other save path that currently sets delivered prematurely
(the other repo.save block referenced in the comment).
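The corrected flow can be sketched against assumed repo and send interfaces — persist as `"queued"` first, attempt the socket write, and only flip the row to `"delivered"` after the write succeeds:

```typescript
type Status = "queued" | "delivered"

interface MessageRow { id: string; status: Status }

// Assumed repository shape; the real service uses a TypeORM repo.
interface Repo {
    save(row: MessageRow): Promise<void>
    updateStatus(id: string, status: Status): Promise<void>
}

async function storeAndDeliver(
    repo: Repo,
    messageId: string,
    recipientOnline: boolean,
    sendToRecipient: () => Promise<void>,
): Promise<Status> {
    // Always durably queue first, so a dropped send can be retried later.
    await repo.save({ id: messageId, status: "queued" })

    if (recipientOnline) {
        try {
            await sendToRecipient()
            // Only a successful socket write earns the "delivered" status.
            await repo.updateStatus(messageId, "delivered")
            return "delivered"
        } catch {
            // Peer vanished mid-send: leave the row queued for offline retry.
            return "queued"
        }
    }
    return "queued"
}
```

Because `recipientOnline` is only a snapshot of the peer map, the send attempt itself — not the snapshot — decides the final status.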

Comment on lines +57 to +85
// Rate-limit offline messages
if (!recipientOnline) {
const count = this.offlineMessageCounts.get(fromKey) ?? 0
if (count >= MAX_OFFLINE_MESSAGES_PER_SENDER) {
return { success: false, error: "Offline message limit reached" }
}
this.offlineMessageCounts.set(fromKey, count + 1)
}

// Store message in local DB
const msg = new L2PSMessage()
msg.id = messageId
msg.fromKey = fromKey
msg.toKey = toKey
msg.l2psUid = l2psUid
msg.messageHash = messageHash
msg.encrypted = encrypted
msg.l2psTxHash = null
msg.timestamp = String(now)
msg.status = status
try {
await repo.save(msg)
} catch (saveError: any) {
// Catch duplicate-key constraint violation (TOCTOU race)
if (saveError?.code === "23505" || saveError?.message?.includes("duplicate key")) {
return { success: false, error: "Duplicate message" }
}
throw saveError
}

⚠️ Potential issue | 🟠 Major

Roll back the offline quota on any pre-queue failure, not just L2PS submission failure.

The counter increments before repo.save(msg). If that save throws, or any later exception escapes before the explicit rollback block, the sender loses quota even though nothing was durably queued.

Also applies to: 90-98

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/features/l2ps-messaging/L2PSMessagingService.ts` around lines 57 - 85,
The offlineMessageCounts increment for fromKey happens before
persisting/queueing (see offlineMessageCounts.set(fromKey, ...), repo.save(msg),
and subsequent L2PS submission code); move or protect the increment so that any
exception during repo.save(msg) or any later pre-queue error rolls the counter
back: decrement offlineMessageCounts for fromKey in the catch/finally path (or
delay increment until after durable queueing) to ensure the sender’s quota is
only consumed for successfully queued messages and handle duplicate-key
(saveError.code === "23505" or saveError.message.includes("duplicate key")) and
other errors by restoring the count.
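The rollback pattern can be sketched like this. The `Map`-based counter and the Postgres duplicate-key detection mirror the snippet under review; the quota value and function signature are assumptions:

```typescript
// Assumed quota; the real constant lives in L2PSMessagingService.
const MAX_OFFLINE = 100

async function queueOfflineMessage(
    counts: Map<string, number>,
    fromKey: string,
    save: () => Promise<void>,
): Promise<{ success: boolean; error?: string }> {
    const count = counts.get(fromKey) ?? 0
    if (count >= MAX_OFFLINE) {
        return { success: false, error: "Offline message limit reached" }
    }
    counts.set(fromKey, count + 1)
    try {
        await save()
        return { success: true }
    } catch (saveError: any) {
        // Nothing was durably queued, so give the reserved slot back
        // before reporting the error.
        const current = counts.get(fromKey) ?? 1
        if (current <= 1) counts.delete(fromKey)
        else counts.set(fromKey, current - 1)
        if (saveError?.code === "23505" || saveError?.message?.includes("duplicate key")) {
            return { success: false, error: "Duplicate message" }
        }
        throw saveError
    }
}
```

With the decrement inside the catch path, a failed `repo.save` (duplicate key or otherwise) no longer consumes the sender's quota.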
