feat(l2ps-messaging): add crypto and integration tests for messaging …#686
Conversation
…protocol

- Implemented unit tests for message hashing, encryption, and decryption.
- Added integration tests for the WebSocket server covering peer registration, messaging, discovery, and error handling.
- Defined protocol types for messaging, including message envelopes and server/client message structures.
- Enhanced setup scripts for zk keys and verification keys for improved reliability.
- Updated the datasource to include L2PS messaging entities for database integration.
Walkthrough

Adds an opt-in L2PS Messaging feature: a Bun WebSocket server, a messaging service that bridges messages into L2PS transactions, crypto helpers, a DB entity, tests, docs, a CLI e2e script, and startup lifecycle wiring controlled by env vars.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor Client as Peer (Client)
    participant WS as L2PSMessagingServer
    participant Service as L2PSMessagingService
    participant L2PS as L2PS Pipeline
    participant DB as Database

    rect rgba(100,200,100,0.5)
        Note over Client,DB: Send Flow (online recipient)
        Client->>WS: send {to, encrypted, messageHash, id, timestamp}
        activate WS
        WS->>WS: validate sender registration & payload
        WS->>Service: processMessage(from,to,encrypted,messageHash,recipientOnline)
        activate Service
        Service->>DB: dedupe check & persist (status: delivered)
        Service->>L2PS: submitToL2PS(tx)
        activate L2PS
        L2PS-->>Service: txHash / mempool ack
        deactivate L2PS
        Service->>DB: update l2psTxHash & status
        Service-->>WS: {success, l2psTxHash}
        deactivate Service
        WS->>Client: message_sent {messageId}
        WS->>Client: incoming_message -> recipient
        Client-->>WS: ack
        deactivate WS
    end

    rect rgba(100,150,200,0.5)
        Note over Client,DB: Offline Delivery on Register
        Client->>WS: register {publicKey, l2psUid, proof}
        activate WS
        WS->>Service: getQueuedMessages(toKey,l2psUid)
        activate Service
        Service->>DB: fetch queued messages (ordered)
        Service-->>WS: [queued messages]
        deactivate Service
        WS->>Client: incoming_message (1..N)
        WS->>Service: markDelivered([ids])
        Service->>DB: update statuses -> delivered
        WS-->>Client: registered
        deactivate WS
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Review Summary by Qodo: Add L2PS Messaging: real-time encrypted chat with L2PS persistence

Description:
- Implemented L2PS-backed real-time messaging with a WebSocket server
  - Peer registration with ed25519 proof verification
  - Message routing, offline queueing, and delivery
  - Conversation history queries with pagination
- Added comprehensive unit and integration tests
  - Protocol validation, peer management, message routing
  - Offline delivery, rate limiting, error handling
  - 37 passing tests across crypto, server, and service
- Created crypto helpers for message encryption/decryption
  - AES-256-GCM symmetric encryption with random nonces
  - Message hashing for deduplication
- Integrated with the L2PS mempool for message persistence
  - Messages submitted as L2PS transactions with zero fees
  - Automatic batching and L1 rollup pipeline

Diagram:

```mermaid
flowchart LR
    Client["Client SDK"]
    WS["WebSocket Server"]
    Service["L2PS Messaging Service"]
    DB["l2ps_messages Table"]
    L2PS["L2PS Mempool"]
    L1["L1 Rollup"]
    Client -->|register/send| WS
    WS -->|route/queue| Service
    Service -->|persist| DB
    Service -->|submit tx| L2PS
    L2PS -->|batch| L1
    WS -->|instant delivery| Client
    DB -->|history query| Client
```
File Changes:

1. scripts/l2ps-messaging-test.ts

Code Review by Qodo
```typescript
private handleDiscover(ws: ServerWebSocket<WSData>): void {
    const l2psUid = ws.data.l2psUid
    const peers = Array.from(this.peers.values())
        .filter(p => !l2psUid || p.l2psUid === l2psUid)
        .map(p => p.publicKey)
```
3. Discover leaks peer keys 🐞 Bug ⛨ Security
L2PSMessagingServer.handleDiscover() returns all peers when ws.data.l2psUid is null, which is the default for an unregistered connection. This allows unauthenticated enumeration of online peer public keys across all L2PS networks.
Agent Prompt
### Issue description
`discover` currently treats an unregistered connection (`l2psUid === null`) as authorized to list all online peers across networks.
### Issue Context
WS upgrade initializes `l2psUid` to null; discover uses `!l2psUid || ...` which makes null equivalent to ‘no filter’.
### Fix Focus Areas
- src/features/l2ps-messaging/L2PSMessagingServer.ts[43-50]
- src/features/l2ps-messaging/L2PSMessagingServer.ts[318-329]
- src/features/l2ps-messaging/L2PSMessagingServer.ts[333-348]
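A minimal sketch of such a gate; the types and helper below are illustrative stand-ins for the PR's actual `WSData` and peer map, not its real implementation:

```typescript
// Illustrative sketch only: gate discover on registration so an
// unregistered socket (l2psUid still null) cannot enumerate peers.
type WSData = { l2psUid: string | null }
type Peer = { publicKey: string; l2psUid: string }

function discoverablePeers(data: WSData, peers: Map<string, Peer>): string[] | null {
    // Unregistered connections get nothing instead of "no filter";
    // the caller should respond with an error (or an empty list).
    if (!data.l2psUid) return null
    return Array.from(peers.values())
        .filter(p => p.l2psUid === data.l2psUid)
        .map(p => p.publicKey)
}

const peers = new Map<string, Peer>([
    ["a", { publicKey: "pkA", l2psUid: "net1" }],
    ["b", { publicKey: "pkB", l2psUid: "net2" }],
])

console.log(discoverablePeers({ l2psUid: null }, peers))   // null
console.log(discoverablePeers({ l2psUid: "net1" }, peers)) // ["pkA"]
```

The key change is that a null `l2psUid` is treated as "not authorized" rather than "no filter".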
Actionable comments posted: 11
🧹 Nitpick comments (7)
src/features/l2ps-messaging/tests/L2PSMessagingServer.test.ts (2)
8-8: Unused imports from `bun:test`: `beforeEach`, `afterEach`, `mock`, `spyOn`. These test utilities are imported but not used in any test.
♻️ Proposed fix
```diff
-import { describe, it, expect, beforeEach, afterEach, mock, spyOn } from "bun:test"
+import { describe, it, expect } from "bun:test"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/l2ps-messaging/tests/L2PSMessagingServer.test.ts` at line 8, The test file imports unused symbols from bun:test (beforeEach, afterEach, mock, spyOn); remove these unused imports from the import statement so only describe, it, and expect remain (update the import line that currently references beforeEach, afterEach, mock, spyOn) to eliminate unused imports and any linter/test warnings.
9-10: Unused imports: `L2PSMessagingServer` and `L2PSMessagingService`. These imports are declared but never used in the test file. The tests validate protocol logic and data structures using mock objects rather than actual server instances.
♻️ Proposed fix
```diff
 import { describe, it, expect, beforeEach, afterEach, mock, spyOn } from "bun:test"
-import { L2PSMessagingServer } from "../L2PSMessagingServer"
-import { L2PSMessagingService } from "../L2PSMessagingService"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/l2ps-messaging/tests/L2PSMessagingServer.test.ts` around lines 9 - 10, Remove the unused imports L2PSMessagingServer and L2PSMessagingService from the test file; locate the import statements referencing L2PSMessagingServer and L2PSMessagingService at the top of src/features/l2ps-messaging/tests/L2PSMessagingServer.test.ts and delete those lines so the file only imports the mocks/utilities actually used in the tests, then run lint/tests to confirm no unused-import errors remain.

src/features/l2ps-messaging/tests/crypto.test.ts (2)
Verify each finding against the current code and only fix it if needed. In `@src/features/l2ps-messaging/tests/L2PSMessagingServer.test.ts` around lines 9 - 10, Remove the unused imports L2PSMessagingServer and L2PSMessagingService from the test file; locate the import statements referencing L2PSMessagingServer and L2PSMessagingService at the top of src/features/l2ps-messaging/tests/L2PSMessagingServer.test.ts and delete those lines so the file only imports the mocks/utilities actually used in the tests, then run lint/tests to confirm no unused-import errors remain.src/features/l2ps-messaging/tests/crypto.test.ts (2)
93-104: Use `expect().rejects.toThrow()` instead of try/catch with an unreachable assertion. The pattern `expect(true).toBe(false)` to indicate unreachable code is an anti-pattern. Bun's test framework supports `expect().rejects.toThrow()` for async error assertions.

♻️ Proposed fix
```diff
 it("should fail to decrypt with wrong key", async () => {
     const key1 = generateSymmetricKey()
     const key2 = generateSymmetricKey()
     const encrypted = await encryptMessage("secret", key1)

-    try {
-        await decryptMessage(encrypted, key2)
-        expect(true).toBe(false) // should not reach
-    } catch (error) {
-        expect(error).toBeDefined()
-    }
+    await expect(decryptMessage(encrypted, key2)).rejects.toThrow()
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/l2ps-messaging/tests/crypto.test.ts` around lines 93 - 104, The test currently uses a try/catch with an unreachable assertion; replace that with an async rejection assertion using expect(...).rejects.toThrow(): call generateSymmetricKey() twice, encrypt with encryptMessage("secret", key1), then assert await expect(decryptMessage(encrypted, key2)).rejects.toThrow() so the test fails when decryption does not throw; locate the test case using the function names generateSymmetricKey, encryptMessage, and decryptMessage to update the assertion.
106-121: Same pattern improvement for the tampered-ciphertext test.

♻️ Proposed fix
```diff
 it("should fail to decrypt with tampered ciphertext", async () => {
     const key = generateSymmetricKey()
     const encrypted = await encryptMessage("secret", key)

     // Tamper with ciphertext
     const bytes = Buffer.from(encrypted.ciphertext, "base64")
     bytes[0] ^= 0xff
-    encrypted.ciphertext = bytes.toString("base64")
+    const tampered = { ...encrypted, ciphertext: bytes.toString("base64") }

-    try {
-        await decryptMessage(encrypted, key)
-        expect(true).toBe(false)
-    } catch (error) {
-        expect(error).toBeDefined()
-    }
+    await expect(decryptMessage(tampered, key)).rejects.toThrow()
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/l2ps-messaging/tests/crypto.test.ts` around lines 93 - 104, The test currently uses a try/catch with an unreachable assertion; replace that with an async rejection assertion using expect(...).rejects.toThrow(): call generateSymmetricKey() twice, encrypt with encryptMessage("secret", key1), then assert await expect(decryptMessage(encrypted, key2)).rejects.toThrow() so the test fails when decryption does not throw; locate the test case using the function names generateSymmetricKey, encryptMessage, and decryptMessage to update the assertion.

src/features/l2ps-messaging/tests/integration.test.ts (2)
32-33: Port selection has collision risk in parallel CI runs. The random offset `Math.floor(Math.random() * 1000)` could still collide when multiple test processes run simultaneously. Consider using port 0 to let the OS assign an available port, then retrieve the actual port from the server.

♻️ Proposed alternative using OS-assigned port
```diff
 beforeAll(() => {
-    port = 19876 + Math.floor(Math.random() * 1000)
     server = Bun.serve<WSData>({
-        port,
+        port: 0, // Let OS assign available port
         fetch: (req, server) => {
             // ...
         },
         // ...
     })
+    port = server.port // Retrieve actual assigned port
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/l2ps-messaging/tests/integration.test.ts` around lines 32 - 33, The test's port allocation (in beforeAll using port = 19876 + Math.floor(Math.random() * 1000)) risks collisions in parallel CI; change the test to bind the server on port 0 (call server.listen(0) in beforeAll), then read the assigned port via server.address().port and assign that to the test's port variable so all subsequent requests use the real bound port; ensure any teardown still closes the server and update any places that construct the base URL to use the retrieved port.
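For illustration, the OS-assigned-port pattern with Node's `http` module (the PR's tests use `Bun.serve`, where the equivalent is `port: 0` and reading `server.port` back):

```typescript
import { createServer } from "node:http"
import type { AddressInfo } from "node:net"

async function main(): Promise<number> {
    // Bind to port 0 so the OS picks a free port, then read the real port back.
    const server = createServer((_req, res) => res.end("ok"))
    await new Promise<void>(resolve => server.listen(0, resolve))
    const port = (server.address() as AddressInfo).port
    server.close()
    return port
}

main().then(port => console.log(port > 0)) // true: a free, collision-free port was assigned
```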
32-136: Potential resource leak: `beforeEach` clears the peers map but doesn't close existing WebSocket connections. When `peers.clear()` is called, the WebSocket connections stored in the map are not explicitly closed. While the connections may eventually be garbage collected, this could cause issues in rapid test execution.

♻️ Proposed fix
```diff
 beforeEach(() => {
+    // Close any existing connections before clearing
+    for (const peer of peers.values()) {
+        try { peer.ws.close() } catch {}
+    }
     peers.clear()
 })
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/l2ps-messaging/tests/integration.test.ts` around lines 32 - 136, The beforeEach currently calls peers.clear() which drops references but doesn't close active WebSocket connections; update the beforeEach to iterate over peers (the peers Map) and call close() on each peer.ws (e.g., p.ws.close()) to gracefully terminate connections before calling peers.clear(), ensuring active sockets are closed and avoiding resource leaks in tests that create WebSocket server connections.

src/features/l2ps-messaging/L2PS_MESSAGING_QUICKSTART.md (1)
26-28: Add language specifiers to fenced code blocks. Several code blocks are missing language identifiers, which affects syntax highlighting and accessibility. Lines 26-28, 56-58, 245-260, 293-299, and 339-350 should specify appropriate languages.
📝 Example fixes

For the startup log at line 26:

````diff
-```
+```text
 [L2PS] Loaded network: testnet_l2ps_001
````

For the architecture diagram at line 245:

````diff
-```
+```text
 Sender (SDK) Node Recipient (SDK)
````

For test output at line 293:

````diff
-```
+```text
 bun test v1.3.3
````

For directory structure at line 339:

````diff
-```
+```text
 src/features/l2ps-messaging/
````

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/l2ps-messaging/L2PS_MESSAGING_QUICKSTART.md` around lines 26 - 28, Add explicit language specifiers to the fenced code blocks mentioned (e.g., the block containing "[L2PS] Loaded network: testnet_l2ps_001", the architecture diagram block starting with "Sender (SDK) Node Recipient (SDK)", the test output block starting with "bun test v1.3.3", and the directory structure block starting with "src/features/l2ps-messaging/") by changing their opening fences from ``` to ```text (or a more specific language if appropriate) so they provide proper syntax highlighting and accessibility.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@scripts/l2ps-messaging-test.ts`:
- Around line 185-200: The sender ACK waiter is attached after awaiting the
recipient message which can miss fast ACKs; for the Bob receive block, call
waitForAny(wsAlice, ["message_sent","message_queued","error"]) and store the
promise (e.g., ackAlicePromise) before sending/awaiting the recipient message
promise (msgPromiseBob), then await msgPromiseBob and finally await
ackAlicePromise; do the same change for the symmetric Bob→Alice flow (the block
using waitForAny around lines 203-218), referencing the existing waitForAny,
msgPromiseBob/msgPromiseAlice, wsAlice and wsBob symbols so the ACK waiter is
armed before awaiting the recipient frame.
In `@src/features/l2ps-messaging/crypto.ts`:
- Around line 36-45: The encryptMessage implementation accepts any key length
and passes sharedKey.buffer (which can be misaligned) to
crypto.subtle.importKey; update encryptMessage to validate that
sharedKey.byteLength === 32 and throw a clear error if not, and pass the
Uint8Array itself (not .buffer) as the key material to crypto.subtle.importKey;
make the identical change in the counterpart decrypt function (decryptMessage)
so both enforce AES-256-GCM key length and import the raw key using the
sharedKey Uint8Array directly.
In `@src/features/l2ps-messaging/entities/L2PSMessage.ts`:
- Around line 4-23: The message-hash uniqueness is currently enforced globally
on the L2PSMessage.messageHash column and L2PSMessagingService.processMessage()
looks up by messageHash only, which allows cross-network collisions; change the
DB schema and lookup to be network-scoped by making the unique key apply to
(l2psUid, messageHash) instead of messageHash alone and update
L2PSMessagingService.processMessage() to use findOneBy({ l2psUid, messageHash })
when deduplicating; ensure the L2PSMessage entity removes the single-column
unique constraint on messageHash and instead defines uniqueness for the pair,
and update any related queries that assume global uniqueness to include l2psUid.
In `@src/features/l2ps-messaging/L2PSMessagingServer.ts`:
- Around line 318-328: handleDiscover currently returns all peers when
ws.data.l2psUid is falsy, allowing unauthenticated sockets to enumerate users;
update the handleDiscover method to require the socket be registered before
responding: check that ws.data.l2psUid (or a dedicated registration flag on
ws.data) is present/true and if not, send an error response or a
discover_response with an empty peers list and do not enumerate this.peers; keep
using the existing send(...) call for the response and reference
ws.data.l2psUid, this.peers and handleDiscover to locate where to add the gate.
- Around line 136-152: Add a freshness check around the proof verification to
reject stale or future timestamps: parse msg.timestamp (in both the register
verification block and the similar history verification at the other location)
into a number, verify it is within an allowed skew/window (e.g., now ±
MAX_PROOF_AGE_MS), and call sendError(ws, "STALE_PROOF", "...") (or reuse
"INVALID_PROOF") if the timestamp is missing, unparsable, too old, or too far in
the future before performing ucrypto.verify; define a single MAX_PROOF_AGE_MS
constant used by both the register proof (proofMessage) and the history proof
paths to keep behavior consistent.
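The freshness window described above can be sketched as a small pure helper; `MAX_PROOF_AGE_MS` follows the prompt's naming, while the millisecond timestamp format and the 60-second window are assumptions for illustration:

```typescript
// Illustrative sketch: reject proofs whose timestamp is missing, unparsable,
// too old, or too far in the future (clock skew), per the note above.
const MAX_PROOF_AGE_MS = 60_000 // assumed window, not from the PR

function isFreshTimestamp(ts: unknown, now = Date.now()): boolean {
    const t = typeof ts === "number" ? ts : Number(ts)
    if (!Number.isFinite(t)) return false
    return Math.abs(now - t) <= MAX_PROOF_AGE_MS
}

console.log(isFreshTimestamp(Date.now()))           // true
console.log(isFreshTimestamp(Date.now() - 120_000)) // false: stale
console.log(isFreshTimestamp("not-a-number"))       // false: unparsable
```

Sharing one constant between the register and history paths keeps the two proof checks consistent, as the prompt suggests.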
- Around line 282-307: The handler currently passes limit ?? 50 straight into
this.service.getHistory which can be arbitrarily large; before calling
getHistory you must sanitize msg.payload.limit: parse it to a number (use
default 50 when undefined), reject non-positive values by calling
this.sendError(ws, "INVALID_LIMIT", "Limit must be a positive integer"), and
clamp it to a sane maximum (e.g., const MAX_HISTORY_LIMIT = 100) — then call
this.service.getHistory(myKey, peerKey, l2psUid, before, clampedLimit). Ensure
you reference the local variable limit (from msg.payload), use l2psUid, myKey
and peerKey as-is, and replace the direct use of limit ?? 50 with the
validated/clamped value.
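A minimal sketch of that limit sanitization, assuming the default of 50 and the illustrative `MAX_HISTORY_LIMIT` of 100 named in the prompt:

```typescript
// Illustrative sketch: default missing limits, reject non-positive or
// non-integer values (caller sends INVALID_LIMIT), clamp the rest.
const MAX_HISTORY_LIMIT = 100

function clampLimit(raw: unknown, fallback = 50): number | null {
    if (raw === undefined || raw === null) return fallback
    const n = Number(raw)
    if (!Number.isInteger(n) || n <= 0) return null // signal INVALID_LIMIT
    return Math.min(n, MAX_HISTORY_LIMIT)
}

console.log(clampLimit(undefined)) // 50
console.log(clampLimit(0))         // null
console.log(clampLimit(500))       // 100
console.log(clampLimit(7))         // 7
```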
- Around line 124-145: The proof verification currently uses
getSharedState.signingAlgorithm which can vary; pin the L2PS messaging protocol
to ed25519 by replacing uses of getSharedState.signingAlgorithm in the proof
verification blocks (the verify call in L2PSMessagingServer where proofMessage
is constructed and the similar block around lines 288-296) with the fixed
algorithm identifier "ed25519"; ensure the same change is applied to any other
verify calls in this file that validate client proofs, and keep existing helpers
like hexToUint8Array and sendError unchanged.
- Around line 235-270: The code currently ignores the numeric return value of
ws.send(); update the WebSocket wrapper method send(...) to return the
underlying ws.send(...) numeric result (positive bytes, 0 failure, -1
backpressure) and expose a drain() promise/event for -1; then in
L2PSMessagingServer where you call this.send(recipientPeer!.ws, ...)
(deliver-to-recipient), and where you send acknowledgements (the message_sent
and message_queued responses) and where offline messages are later marked
delivered, check the returned value: treat 0 as a delivery failure (do not
confirm delivery or mark delivered; call sendError or retry), treat -1 as
backpressure (await the wrapper drain() before confirming/marking), and only
proceed with DB updates/confirmations when ws.send() returns >0; ensure the same
logic is applied for the send path referenced by this.service.processMessage
flow and the offline-delivery path so false confirmations are avoided.
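The return-value convention described above (positive = bytes written, 0 = failed, -1 = backpressure) can be centralized in a small classifier; this is an illustrative sketch, not the PR's code:

```typescript
// Illustrative mapping of Bun's ws.send() return value to an action:
// only a positive result should lead to DB updates or delivery confirmations.
type SendResult = "sent" | "failed" | "backpressure"

function classifySend(code: number): SendResult {
    if (code > 0) return "sent"
    if (code === -1) return "backpressure"
    return "failed"
}

console.log(classifySend(42)) // "sent"
console.log(classifySend(0))  // "failed"
console.log(classifySend(-1)) // "backpressure"
```

On "failed" the server should not mark the message delivered; on "backpressure" it should wait for the socket to drain before confirming.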
In `@src/features/l2ps-messaging/L2PSMessagingService.ts`:
- Around line 20-25: The offline quota handling in L2PSMessagingService is too
coarse: offlineMessageCounts is keyed only by sender and resetOfflineCount()
deletes the entire sender entry, restoring the full quota after any successful
delivery; change this to either (A) decrement the sender’s counter by the actual
number of messages successfully delivered (use the delivery result to subtract N
from offlineMessageCounts.get(sender)) or (B) make the key more specific
(compose a key from sender + recipient or sender + network) so counts are
tracked per recipient/route instead of globally; update any logic that sets or
resets counts (references: offlineMessageCounts, resetOfflineCount(),
MAX_OFFLINE_MESSAGES_PER_SENDER) to implement one of these approaches and ensure
counts never go negative and respect the MAX_OFFLINE_MESSAGES_PER_SENDER limit.
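Option B from the note above (more specific keys) could look like this; all names are illustrative and the quota value is an assumption:

```typescript
// Illustrative sketch: track offline quotas per sender→recipient route
// instead of per sender, and decrement by actual deliveries, clamped at 0.
const MAX_OFFLINE_MESSAGES_PER_SENDER = 100 // assumed value
const offlineMessageCounts = new Map<string, number>()

const routeKey = (sender: string, recipient: string) => `${sender}→${recipient}`

function tryQueueOffline(sender: string, recipient: string): boolean {
    const key = routeKey(sender, recipient)
    const count = offlineMessageCounts.get(key) ?? 0
    if (count >= MAX_OFFLINE_MESSAGES_PER_SENDER) return false
    offlineMessageCounts.set(key, count + 1)
    return true
}

function onDelivered(sender: string, recipient: string, delivered: number): void {
    const key = routeKey(sender, recipient)
    const next = Math.max(0, (offlineMessageCounts.get(key) ?? 0) - delivered)
    if (next === 0) offlineMessageCounts.delete(key)
    else offlineMessageCounts.set(key, next)
}
```

Delivering messages on one route no longer restores the sender's full quota everywhere, and the counter can never go negative.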
In `@src/features/l2ps-messaging/types.ts`:
- Around line 41-48: ClientMessageType currently exposes the "ack" literal even
though L2PSMessagingServer.handleMessage() has no case to handle it, causing
valid-looking frames to be rejected; either remove "ack" from the exported
ClientMessageType union (in types.ts) until handleMessage implements support, or
add a corresponding handling branch in L2PSMessagingServer.handleMessage() that
processes "ack" frames (match the expected payload shape and return the
appropriate response/ack path). Ensure the symbol names referenced are
ClientMessageType and L2PSMessagingServer.handleMessage so the change is applied
to the right places.
In `@src/features/zk/scripts/setup-zk.ts`:
- Around line 18-19: The NPX resolution using const NPX = execSync("which
npx")... is Unix-only and can throw at module load; change this to detect
platform (process.platform === "win32") and call "where npx" on Windows and
"which npx" elsewhere, wrap the execSync call in a try/catch to handle missing
npx gracefully, and set a clear fallback or throw an informative error; update
the NPX constant assignment and any code that uses NPX to rely on this safe
resolution (refer to the NPX constant and its execSync call).
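A hedged sketch of the platform-aware lookup; the fallback to the bare `npx` command is an assumption for illustration, not what the script currently does:

```typescript
import { execSync } from "node:child_process"

// Illustrative cross-platform npx resolution with graceful fallback,
// per the note above: "where" on Windows, "which" elsewhere.
function resolveNpx(): string {
    const cmd = process.platform === "win32" ? "where npx" : "which npx"
    try {
        const out = execSync(cmd).toString().split(/\r?\n/)[0].trim()
        if (out) return out
    } catch {
        // npx not found on PATH; fall back to the bare command
        // (or throw an informative error, depending on the script's needs)
    }
    return "npx"
}

console.log(resolveNpx().length > 0) // true
```

Wrapping the lookup in a function also avoids throwing at module load time.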
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 0195a4f3-2008-4ce1-96f2-5757c2a0595a
📒 Files selected for processing (16)
- .env.example
- scripts/l2ps-messaging-test.ts
- src/features/l2ps-messaging/L2PSMessagingServer.ts
- src/features/l2ps-messaging/L2PSMessagingService.ts
- src/features/l2ps-messaging/L2PS_MESSAGING_QUICKSTART.md
- src/features/l2ps-messaging/crypto.ts
- src/features/l2ps-messaging/entities/L2PSMessage.ts
- src/features/l2ps-messaging/index.ts
- src/features/l2ps-messaging/tests/L2PSMessagingServer.test.ts
- src/features/l2ps-messaging/tests/L2PSMessagingService.test.ts
- src/features/l2ps-messaging/tests/crypto.test.ts
- src/features/l2ps-messaging/tests/integration.test.ts
- src/features/l2ps-messaging/types.ts
- src/features/zk/scripts/setup-zk.ts
- src/index.ts
- src/model/datasource.ts
```typescript
const msgPromiseBob = waitFor(wsBob, "message")
wsAlice.send(frame("send", {
    to: bob.publicKey,
    encrypted: {
        ciphertext: Buffer.from("Hello Bob from Alice!").toString("base64"),
        nonce: Buffer.from("test_nonce_1").toString("base64"),
    },
    messageHash: "hash_alice_to_bob_" + Date.now(),
}))

const msgBob = await msgPromiseBob
log("Bob", `Received message from ${msgBob.payload.from.slice(0, 12)}...`)
log("Bob", `Decoded: ${Buffer.from(msgBob.payload.encrypted.ciphertext, "base64").toString()}`)

const ackAlice = await waitForAny(wsAlice, ["message_sent", "message_queued", "error"])
log("Alice", `Ack: type=${ackAlice.type}`)
```
Arm the sender ACK waiter before awaiting the recipient frame.
Both flows subscribe for "message_sent" | "message_queued" | "error" only after await msgPromise.... If processMessage() finishes quickly, the ACK can arrive on the sender socket before waitForAny() is attached, so this script will fail intermittently.
Suggested fix

```diff
-const msgPromiseBob = waitFor(wsBob, "message")
+const msgPromiseBob = waitFor(wsBob, "message")
+const ackAlicePromise = waitForAny(wsAlice, ["message_sent", "message_queued", "error"])
 wsAlice.send(frame("send", {
     to: bob.publicKey,
     encrypted: {
         ciphertext: Buffer.from("Hello Bob from Alice!").toString("base64"),
         nonce: Buffer.from("test_nonce_1").toString("base64"),
     },
     messageHash: "hash_alice_to_bob_" + Date.now(),
 }))
 const msgBob = await msgPromiseBob
 ...
-const ackAlice = await waitForAny(wsAlice, ["message_sent", "message_queued", "error"])
+const ackAlice = await ackAlicePromise
```

Apply the same pattern to the Bob → Alice block.
Also applies to: 203-218
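The fix generalizes to a simple rule: arm the listener-backed promise before triggering the action whose reply it must catch. A self-contained illustration with Node's EventEmitter (names are illustrative, not from the PR):

```typescript
import { EventEmitter } from "node:events"

// Resolve when the named event next fires.
function waitForEvent(em: EventEmitter, name: string): Promise<unknown> {
    return new Promise(resolve => em.once(name, resolve))
}

async function demo(): Promise<boolean> {
    const em = new EventEmitter()

    // Arm the waiter first...
    const ackPromise = waitForEvent(em, "ack")
    // ...then trigger the work. This ack fires synchronously, so a listener
    // attached only after the emit would never see it.
    em.emit("ack", { ok: true })

    const ack = (await ackPromise) as { ok: boolean }
    return ack.ok
}

demo().then(ok => console.log(ok)) // true
```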
```typescript
export async function encryptMessage(
    plaintext: string,
    sharedKey: Uint8Array,
): Promise<SerializedEncryptedMessage> {
    const nonce = crypto.getRandomValues(new Uint8Array(12))
    const encoded = new TextEncoder().encode(plaintext)

    const cryptoKey = await crypto.subtle.importKey(
        "raw", sharedKey.buffer as ArrayBuffer, "AES-GCM", false, ["encrypt"],
    )
```
🧩 Analysis chain
🏁 Script executed:
git ls-files "src/features/l2ps-messaging/crypto.ts"Repository: kynesyslabs/node
Length of output: 97
🏁 Script executed:
head -100 src/features/l2ps-messaging/crypto.tsRepository: kynesyslabs/node
Length of output: 2715
🏁 Script executed:
cat -n src/features/l2ps-messaging/crypto.ts | sed -n '30,75p'Repository: kynesyslabs/node
Length of output: 1902
🏁 Script executed:
# Search for test files or usage of these functions
find . -name "*crypto*test*" -o -name "*crypto*spec*" 2>/dev/null | head -20Repository: kynesyslabs/node
Length of output: 110
🏁 Script executed:
rg "encryptMessage|decryptMessage" --type ts --context 3 | head -100Repository: kynesyslabs/node
Length of output: 8093
🏁 Script executed:
cat -n src/features/l2ps-messaging/tests/crypto.test.tsRepository: kynesyslabs/node
Length of output: 5481
🏁 Script executed:
# Check if there are any tests that use subarrays or different key sizes
rg "subarray|slice" src/features/l2ps-messaging/tests/crypto.test.tsRepository: kynesyslabs/node
Length of output: 42
🏁 Script executed:
# Verify Web Crypto API spec behavior with .buffer vs Uint8Array
# Look at how generateSymmetricKey creates keys
cat -n src/features/l2ps-messaging/crypto.ts | sed -n '80,90p'Repository: kynesyslabs/node
Length of output: 294
🌐 Web query:
Web Crypto API SubtleCrypto importKey raw buffer vs Uint8Array
💡 Result:
crypto.subtle.importKey()’s keyData parameter is a BufferSource (or JWK), meaning you can pass either:
- an
ArrayBuffer(raw bytes), or - an
ArrayBufferViewlikeUint8Array/ any TypedArray /DataView(a view over bytes). The WebCrypto spec definesimportKey(format, (BufferSource or JsonWebKey) keyData, ...). [2]
So for "raw" keys, both ArrayBuffer and Uint8Array work—and MDN even shows "raw" import using a Uint8Array. [1]
The gotcha: Uint8Array.buffer may include extra bytes
If your Uint8Array is a slice/view into a larger buffer (non-zero byteOffset or shorter byteLength), then passing u8.buffer imports the entire underlying buffer, not just the view’s window. Prefer passing the view itself, or slice the buffer:
// Best: pass the view (imports exactly the bytes in the view)
await crypto.subtle.importKey("raw", u8, "AES-GCM", false, ["encrypt","decrypt"]);
// If you must pass an ArrayBuffer, slice it to the view range
const ab = u8.buffer.slice(u8.byteOffset, u8.byteOffset + u8.byteLength);
await crypto.subtle.importKey("raw", ab, "AES-GCM", false, ["encrypt","decrypt"]);Node.js note
In Node, WebCrypto commonly accepts ArrayBuffer | TypedArray | DataView | Buffer, and the error message explicitly lists those accepted types when you pass something else. [3]
Sources: MDN importKey() docs/examples [1], WebCrypto spec signature for importKey() [2], Node.js issue showing the accepted-type error text [3].
🏁 Script executed:
# Check if these functions are used externally and how keys are passed
rg "encryptMessage|decryptMessage" --type ts -B 3 -A 3 | grep -v "crypto.test.ts" | grep -v "crypto.ts" | head -80Repository: kynesyslabs/node
Length of output: 707
🏁 Script executed:
# Search for any patterns where a subarray might be passed as a key
rg "\.subarray\(|\.slice\(" src/features/l2ps-messaging/ --type ts
Length of output: 1080
🏁 Script executed:
# Search more broadly for actual calls to encryptMessage in service files
rg "encryptMessage" src/features/l2ps-messaging/ --type ts -B 5 -A 5
Length of output: 9462
🏁 Script executed:
# Check the L2PSMessagingService for how it uses these functions
cat -n src/features/l2ps-messaging/L2PSMessagingService.ts | head -150
Length of output: 6460
Add AES-256 key length validation and pass the Uint8Array directly instead of .buffer.
The API is documented as AES-256-GCM but accepts any key size (16, 24, or 32 bytes). Additionally, sharedKey.buffer passes the entire underlying ArrayBuffer, which breaks if a caller ever passes key.subarray(). Pass the Uint8Array directly and validate that sharedKey.byteLength === 32.
Suggested fix
+function assertAes256Key(sharedKey: Uint8Array): Uint8Array {
+ if (sharedKey.byteLength !== 32) {
+ throw new Error("sharedKey must be exactly 32 bytes")
+ }
+ return sharedKey
+}
+
export async function encryptMessage(
plaintext: string,
sharedKey: Uint8Array,
): Promise<SerializedEncryptedMessage> {
...
const cryptoKey = await crypto.subtle.importKey(
- "raw", sharedKey.buffer as ArrayBuffer, "AES-GCM", false, ["encrypt"],
+ "raw", assertAes256Key(sharedKey), "AES-GCM", false, ["encrypt"],
)
...
export async function decryptMessage(
encrypted: SerializedEncryptedMessage,
sharedKey: Uint8Array,
): Promise<string> {
...
const cryptoKey = await crypto.subtle.importKey(
- "raw", sharedKey.buffer as ArrayBuffer, "AES-GCM", false, ["decrypt"],
+ "raw", assertAes256Key(sharedKey), "AES-GCM", false, ["decrypt"],
    )

Also applies to: 62-71
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/features/l2ps-messaging/crypto.ts` around lines 36 - 45, The
encryptMessage implementation accepts any key length and passes sharedKey.buffer
(which can be misaligned) to crypto.subtle.importKey; update encryptMessage to
validate that sharedKey.byteLength === 32 and throw a clear error if not, and
pass the Uint8Array itself (not .buffer) as the key material to
crypto.subtle.importKey; make the identical change in the counterpart decrypt
function (decryptMessage) so both enforce AES-256-GCM key length and import the
raw key using the sharedKey Uint8Array directly.
@Entity("l2ps_messages")
export class L2PSMessage {
    /** UUID v4 generated by sender */
    @PrimaryColumn("text", { name: "id" })
    id: string

    @Index()
    @Column("text", { name: "from_key" })
    fromKey: string

    @Index()
    @Column("text", { name: "to_key" })
    toKey: string

    @Index()
    @Column("text", { name: "l2ps_uid" })
    l2psUid: string

    @Column("text", { name: "message_hash", unique: true })
    messageHash: string
Make message-hash uniqueness network-scoped.
Right now the database and L2PSMessagingService.processMessage() dedupe on messageHash alone. Because the exported hash helper does not include l2psUid, a valid message on one network can collide with the same sender/to/content/timestamp tuple on another network. This should be unique per (l2psUid, messageHash), not globally.
Suggested fix
 @Entity("l2ps_messages")
+@Index(["l2psUid", "messageHash"], { unique: true })
 export class L2PSMessage {
 ...
-    @Column("text", { name: "message_hash", unique: true })
+    @Column("text", { name: "message_hash" })
     messageHash: string

L2PSMessagingService.processMessage() should also change its lookup to findOneBy({ l2psUid, messageHash }).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/features/l2ps-messaging/entities/L2PSMessage.ts` around lines 4 - 23, The
message-hash uniqueness is currently enforced globally on the
L2PSMessage.messageHash column and L2PSMessagingService.processMessage() looks
up by messageHash only, which allows cross-network collisions; change the DB
schema and lookup to be network-scoped by making the unique key apply to
(l2psUid, messageHash) instead of messageHash alone and update
L2PSMessagingService.processMessage() to use findOneBy({ l2psUid, messageHash })
when deduplicating; ensure the L2PSMessage entity removes the single-column
unique constraint on messageHash and instead defines uniqueness for the pair,
and update any related queries that assume global uniqueness to include l2psUid.
if (publicKey.length < MIN_PUBLIC_KEY_LENGTH || !/^[0-9a-fA-F]+$/.test(publicKey)) {
    this.sendError(ws, "INVALID_MESSAGE", "Invalid publicKey format (expected hex)")
    return
}

// Verify L2PS network exists
const l2ps = await ParallelNetworks.getInstance().getL2PS(l2psUid)
if (!l2ps) {
    this.sendError(ws, "L2PS_NOT_FOUND", `L2PS network ${l2psUid} not found`)
    return
}

// Verify proof of key ownership: sign("register:{publicKey}:{timestamp}")
const proofMessage = `register:${publicKey}:${msg.timestamp}`
try {
    const valid = await ucrypto.verify({
        algorithm: getSharedState.signingAlgorithm,
        message: new TextEncoder().encode(proofMessage),
        publicKey: this.hexToUint8Array(publicKey),
        signature: this.hexToUint8Array(proof),
    })
    if (!valid) {
🧩 Analysis chain
🏁 Script executed:
find . -name "L2PSMessagingServer.ts" -type f
Length of output: 112
🏁 Script executed:
wc -l src/features/l2ps-messaging/L2PSMessagingServer.ts
Length of output: 114
🏁 Script executed:
sed -n '120,150p' src/features/l2ps-messaging/L2PSMessagingServer.ts
Length of output: 1377
🏁 Script executed:
sed -n '284,300p' src/features/l2ps-messaging/L2PSMessagingServer.ts
Length of output: 751
🏁 Script executed:
sed -n '1,50p' src/features/l2ps-messaging/L2PSMessagingServer.ts
Length of output: 1656
🏁 Script executed:
find . -name "sharedState.ts" -o -name "sharedState.js"
Length of output: 90
🏁 Script executed:
rg "ucrypto\.verify" src/features/l2ps-messaging/L2PSMessagingServer.ts
Length of output: 157
🏁 Script executed:
rg "signingAlgorithm" --type ts --type js | head -20
Length of output: 1896
🏁 Script executed:
cat -n src/utilities/sharedState.ts
Length of output: 16228
🏁 Script executed:
sed -n '100,200p' src/features/l2ps-messaging/L2PSMessagingServer.ts
Length of output: 3773
🏁 Script executed:
rg "signingAlgorithm\s*=" src/utilities/sharedState.ts
Length of output: 112
🏁 Script executed:
rg "sharedState\.signingAlgorithm\s*=" --type ts --type js
Length of output: 42
🏁 Script executed:
sed -n '1,80p' src/features/l2ps-messaging/L2PSMessagingServer.ts
Length of output: 2653
🏁 Script executed:
rg "signingAlgorithm" src/utilities/sharedState.ts -A 2 -B 2
Length of output: 249
🏁 Script executed:
rg "signingAlgorithm\s*:" src/utilities/sharedState.ts
Length of output: 42
🏁 Script executed:
rg "SIGNING_ALGORITHM|ALGORITHM" --type ts --type js | head -30
Length of output: 622
🏁 Script executed:
sed -n '115,135p' src/features/l2ps-messaging/L2PSMessagingServer.ts
Length of output: 854
🏁 Script executed:
rg "signingAlgorithm\s*=" src/ --type ts -B 5 -A 5
Length of output: 691
🏁 Script executed:
cat -n src/libs/network/verifySignature.ts
Length of output: 6045
🏁 Script executed:
rg "getSharedState.*signingAlgorithm" src/features/l2ps-messaging/ -B 3 -A 8
Length of output: 2344
🏁 Script executed:
rg "SharedState|getSharedState" src/features/l2ps-messaging/L2PSMessagingServer.ts -B 2 -A 2
Length of output: 820
🏁 Script executed:
rg "process\.env" src/utilities/sharedState.ts | grep -i "algo\|sign\|crypto"
Length of output: 42
🏁 Script executed:
cat -n src/features/l2ps-messaging/L2PSMessagingService.ts | head -80
Length of output: 3581
Pin the L2PS messaging protocol to ed25519 or embed the signing algorithm in client proofs.
Lines 124-145 and 288-296 verify client proofs using getSharedState.signingAlgorithm. While currently hardcoded to ed25519, the L2PS messaging protocol accepts plain hex public keys without specifying the signing algorithm. If the node is configured with Falcon/ML-DSA, ed25519-signed client proofs will fail. Either hardcode the protocol to ed25519 (recommended for this use case) or extend the message format to include the algorithm with each proof.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/features/l2ps-messaging/L2PSMessagingServer.ts` around lines 124 - 145,
The proof verification currently uses getSharedState.signingAlgorithm which can
vary; pin the L2PS messaging protocol to ed25519 by replacing uses of
getSharedState.signingAlgorithm in the proof verification blocks (the verify
call in L2PSMessagingServer where proofMessage is constructed and the similar
block around lines 288-296) with the fixed algorithm identifier "ed25519";
ensure the same change is applied to any other verify calls in this file that
validate client proofs, and keep existing helpers like hexToUint8Array and
sendError unchanged.
// Verify proof of key ownership: sign("register:{publicKey}:{timestamp}")
const proofMessage = `register:${publicKey}:${msg.timestamp}`
try {
    const valid = await ucrypto.verify({
        algorithm: getSharedState.signingAlgorithm,
        message: new TextEncoder().encode(proofMessage),
        publicKey: this.hexToUint8Array(publicKey),
        signature: this.hexToUint8Array(proof),
    })
    if (!valid) {
        this.sendError(ws, "INVALID_PROOF", "Signature verification failed")
        return
    }
} catch (error) {
    this.sendError(ws, "INVALID_PROOF", `Proof verification error: ${error}`)
    return
}
Add freshness checks to signed proofs.
Both proof strings include msg.timestamp, but neither path rejects stale timestamps. A captured register or history frame can therefore be replayed indefinitely to re-register a peer or re-read history until the key rotates.
Also applies to: 288-304
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/features/l2ps-messaging/L2PSMessagingServer.ts` around lines 136 - 152,
Add a freshness check around the proof verification to reject stale or future
timestamps: parse msg.timestamp (in both the register verification block and the
similar history verification at the other location) into a number, verify it is
within an allowed skew/window (e.g., now ± MAX_PROOF_AGE_MS), and call
sendError(ws, "STALE_PROOF", "...") (or reuse "INVALID_PROOF") if the timestamp
is missing, unparsable, too old, or too far in the future before performing
ucrypto.verify; define a single MAX_PROOF_AGE_MS constant used by both the
register proof (proofMessage) and the history proof paths to keep behavior
consistent.
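A sketch of the shared freshness check, with MAX_PROOF_AGE_MS and isFreshTimestamp as assumed names and values (the repo may choose a different window or placement):

```typescript
// Reject proofs whose timestamp is missing, unparsable, or outside now ± MAX_PROOF_AGE_MS.
const MAX_PROOF_AGE_MS = 5 * 60 * 1000 // 5 minutes; tune for expected clock skew

function isFreshTimestamp(timestamp: unknown, now: number = Date.now()): boolean {
    const ts = Number(timestamp)
    if (!Number.isFinite(ts)) return false
    return Math.abs(now - ts) <= MAX_PROOF_AGE_MS
}
```

Both the register and history paths would call this before ucrypto.verify and respond with STALE_PROOF (or reuse INVALID_PROOF) on failure, so a captured frame stops being replayable once the window closes.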
const { peerKey, before, limit, proof } = msg.payload
if (!peerKey || !proof) {
    this.sendError(ws, "INVALID_MESSAGE", "Missing peerKey or proof")
    return
}

// Verify proof: sign("history:{peerKey}:{timestamp}")
const proofMessage = `history:${peerKey}:${msg.timestamp}`
try {
    const valid = await ucrypto.verify({
        algorithm: getSharedState.signingAlgorithm,
        message: new TextEncoder().encode(proofMessage),
        publicKey: this.hexToUint8Array(myKey),
        signature: this.hexToUint8Array(proof),
    })
    if (!valid) {
        this.sendError(ws, "INVALID_PROOF", "History proof failed")
        return
    }
} catch {
    this.sendError(ws, "INVALID_PROOF", "Proof verification error")
    return
}

const l2psUid = ws.data.l2psUid!
const result = await this.service.getHistory(myKey, peerKey, l2psUid, before, limit ?? 50)
Clamp limit before querying history.
limit ?? 50 allows arbitrary page sizes, and getHistory() turns that straight into .take(limit + 1). One client can request a huge page and force an expensive DB read. Bound it to a sane maximum and reject non-positive values.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/features/l2ps-messaging/L2PSMessagingServer.ts` around lines 282 - 307,
The handler currently passes limit ?? 50 straight into this.service.getHistory
which can be arbitrarily large; before calling getHistory you must sanitize
msg.payload.limit: parse it to a number (use default 50 when undefined), reject
non-positive values by calling this.sendError(ws, "INVALID_LIMIT", "Limit must
be a positive integer"), and clamp it to a sane maximum (e.g., const
MAX_HISTORY_LIMIT = 100) — then call this.service.getHistory(myKey, peerKey,
l2psUid, before, clampedLimit). Ensure you reference the local variable limit
(from msg.payload), use l2psUid, myKey and peerKey as-is, and replace the direct
use of limit ?? 50 with the validated/clamped value.
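One way to sketch that validation; MAX_HISTORY_LIMIT and the null-for-invalid convention are assumptions, not the repo's existing API:

```typescript
const DEFAULT_HISTORY_LIMIT = 50
const MAX_HISTORY_LIMIT = 100 // assumed cap; could come from config

// Returns a safe page size, or null when the caller should respond with INVALID_MESSAGE.
function clampHistoryLimit(raw: unknown): number | null {
    if (raw === undefined || raw === null) return DEFAULT_HISTORY_LIMIT
    const n = Number(raw)
    if (!Number.isInteger(n) || n <= 0) return null
    return Math.min(n, MAX_HISTORY_LIMIT)
}
```

The handler would then pass the clamped value into getHistory, so the downstream .take(limit + 1) is bounded regardless of what the client sends.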
const MAX_OFFLINE_MESSAGES_PER_SENDER = 200

export class L2PSMessagingService {
    private static instance: L2PSMessagingService
    private offlineMessageCounts = new Map<string, number>()
The offline quota resets too aggressively.
offlineMessageCounts is keyed only by sender, but resetOfflineCount() deletes the whole entry. As soon as one queued delivery succeeds, that sender gets the full quota back even if they still have many queued messages for other peers. Decrement by the number actually delivered, or scope the counter by sender + recipient/network instead of blanket-resetting it.
Also applies to: 57-64, 274-276
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/features/l2ps-messaging/L2PSMessagingService.ts` around lines 20 - 25,
The offline quota handling in L2PSMessagingService is too coarse:
offlineMessageCounts is keyed only by sender and resetOfflineCount() deletes the
entire sender entry, restoring the full quota after any successful delivery;
change this to either (A) decrement the sender’s counter by the actual number of
messages successfully delivered (use the delivery result to subtract N from
offlineMessageCounts.get(sender)) or (B) make the key more specific (compose a
key from sender + recipient or sender + network) so counts are tracked per
recipient/route instead of globally; update any logic that sets or resets counts
(references: offlineMessageCounts, resetOfflineCount(),
MAX_OFFLINE_MESSAGES_PER_SENDER) to implement one of these approaches and ensure
counts never go negative and respect the MAX_OFFLINE_MESSAGES_PER_SENDER limit.
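A self-contained sketch of option (A), decrementing by the delivered count instead of deleting the entry; OfflineQuota is an illustrative class, not the repo's actual shape:

```typescript
class OfflineQuota {
    private counts = new Map<string, number>()

    constructor(private readonly max: number = 200) {}

    // Reserve one queued-message slot for a sender; false when the quota is exhausted.
    tryReserve(sender: string): boolean {
        const current = this.counts.get(sender) ?? 0
        if (current >= this.max) return false
        this.counts.set(sender, current + 1)
        return true
    }

    // Free exactly the slots that were actually delivered, never going negative.
    release(sender: string, delivered: number): void {
        const current = this.counts.get(sender) ?? 0
        const next = Math.max(0, current - delivered)
        if (next === 0) this.counts.delete(sender)
        else this.counts.set(sender, next)
    }
}
```

With this shape, delivering one queued message frees exactly one slot, so a sender with messages still queued for other peers cannot burst past the cap.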
export type ClientMessageType =
    | "register"
    | "send"
    | "history"
    | "discover"
    | "request_public_key"
    | "ack"
Do not expose "ack" until the server handles it.
ClientMessageType advertises "ack", but L2PSMessagingServer.handleMessage() has no matching case. A client built against this exported union can send a valid-looking frame and still get INVALID_MESSAGE.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/features/l2ps-messaging/types.ts` around lines 41 - 48, ClientMessageType
currently exposes the "ack" literal even though
L2PSMessagingServer.handleMessage() has no case to handle it, causing
valid-looking frames to be rejected; either remove "ack" from the exported
ClientMessageType union (in types.ts) until handleMessage implements support, or
add a corresponding handling branch in L2PSMessagingServer.handleMessage() that
processes "ack" frames (match the expected payload shape and return the
appropriate response/ack path). Ensure the symbol names referenced are
ClientMessageType and L2PSMessagingServer.handleMessage so the change is applied
to the right places.
// npx path - resolved dynamically from PATH
const NPX = execSync("which npx").toString().trim()
which npx fails on Windows and lacks error handling.
The which command is Unix-specific and does not exist on Windows (use where instead). Additionally, if npx is not installed or not in PATH, execSync throws an unhandled exception at module load time with a cryptic error, degrading the user experience.
This contradicts the PR goal of improving reliability and is inconsistent with the cross-platform awareness shown elsewhere in this file (e.g., using unlinkSync instead of shell rm).
Proposed fix with cross-platform support and error handling
-// npx path - resolved dynamically from PATH
-const NPX = execSync("which npx").toString().trim()
+// npx path - resolved dynamically from PATH (cross-platform)
+function resolveNpx(): string {
+ const isWindows = process.platform === "win32"
+ const command = isWindows ? "where npx" : "which npx"
+ try {
+ const result = execSync(command, { encoding: "utf-8", stdio: ["pipe", "pipe", "pipe"] })
+ // `where` on Windows may return multiple lines; take the first
+ return result.split(/\r?\n/)[0].trim()
+ } catch {
+ throw new Error(
+ "npx not found in PATH. Please install Node.js/npm first.\n" +
+ " → https://nodejs.org/"
+ )
+ }
+}
+
+const NPX = resolveNpx()📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| // npx path - resolved dynamically from PATH | |
| const NPX = execSync("which npx").toString().trim() | |
| // npx path - resolved dynamically from PATH (cross-platform) | |
| function resolveNpx(): string { | |
| const isWindows = process.platform === "win32" | |
| const command = isWindows ? "where npx" : "which npx" | |
| try { | |
| const result = execSync(command, { encoding: "utf-8", stdio: ["pipe", "pipe", "pipe"] }) | |
| // `where` on Windows may return multiple lines; take the first | |
| return result.split(/\r?\n/)[0].trim() | |
| } catch { | |
| throw new Error( | |
| "npx not found in PATH. Please install Node.js/npm first.\n" + | |
| " → https://nodejs.org/" | |
| ) | |
| } | |
| } | |
| const NPX = resolveNpx() |
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/features/zk/scripts/setup-zk.ts` around lines 18 - 19, The NPX resolution
using const NPX = execSync("which npx")... is Unix-only and can throw at module
load; change this to detect platform (process.platform === "win32") and call
"where npx" on Windows and "which npx" elsewhere, wrap the execSync call in a
try/catch to handle missing npx gracefully, and set a clear fallback or throw an
informative error; update the NPX constant assignment and any code that uses NPX
to rely on this safe resolution (refer to the NPX constant and its execSync
call).
…ry and public key requests
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/features/l2ps-messaging/L2PSMessagingServer.ts`:
- Around line 235-250: The recipient frame is being sent before persistence/L2PS
submission completes; move the this.send(...) that targets recipientPeer!.ws
(inside the recipientOnline branch) to after awaiting
this.service.processMessage(...) and after verifying result.success is true, so
the live recipient only receives messages that were successfully processed; if
result.success is false, keep calling this.sendError(ws, "L2PS_SUBMIT_FAILED",
result.error) and do not deliver to recipient; preserve the same payload fields
(from: senderKey, encrypted, messageHash, offline: false, timestamp) when
sending post-success.
- Around line 405-423: The offline quota is being reset inside the send loop via
this.service.resetOfflineCount(msg.from) before the DB update
this.service.markDelivered(deliveredIds) commits; move quota adjustment so it
runs only after markDelivered succeeds and base it on the actual number of rows
marked delivered. Concretely: stop calling
this.service.resetOfflineCount(msg.from) inside the try/send loop, call
markDelivered(deliveredIds) first, inspect the result (e.g., number of rows
updated or returned IDs) and then call this.service.resetOfflineCount(msg.from,
count) or equivalent with the actual delivered count for that sender (use
deliveredIds or the confirmed-updated IDs) to ensure quota is only freed after a
successful commit.
In `@src/features/l2ps-messaging/L2PSMessagingService.ts`:
- Around line 48-52: The preflight read using repo.findOneBy({ messageHash }) in
L2PSMessagingService is TOCTOU-prone; preserve that read as best-effort but make
the unique DB constraint in the L2PSMessage entity the true guard: remove/keep
the early-return as non-fatal and wrap the repo.save(...) call in a try/catch to
detect duplicate-key/unique-constraint DB errors (the error thrown when
inserting a row violating message_hash uniqueness), and on that error return the
same { success: false, error: "Duplicate message" } response instead of
surfacing an internal error; apply the same pattern around the other insertion
block referenced (lines ~66-77) so all saves handle duplicate-key errors
gracefully.
- Around line 57-64: The offline-message quota is incremented before DB/L2PS
persistence and never rolled back on submission failure; also the code writes
the undeclared status "failed". Fix by: when handling the L2PS submission
failure in L2PSMessagingService (the block that sets the message status to
"failed"), decrement this.offlineMessageCounts for the same fromKey if recipient
was offline and the stored count > 0 to roll back the quota, and replace the
literal "failed" with a valid MessageStatus value (or add "failed" to the
MessageStatus type and L2PSMessage entity if you prefer modeling a failed
terminal state) so the status written to the DB is declared.
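A hedged sketch of the save-and-catch dedupe pattern described above; the error-shape checks are assumptions (Postgres reports unique violations as code 23505, SQLite messages contain "UNIQUE"), and the repository interface is stubbed for illustration:

```typescript
interface MessageRepo {
    save(msg: Record<string, unknown>): Promise<Record<string, unknown>>
}

// Let the unique constraint be the real dedupe guard; map constraint
// violations to the same "Duplicate message" result the preflight check returns.
async function saveMessage(repo: MessageRepo, msg: Record<string, unknown>) {
    try {
        return { success: true as const, message: await repo.save(msg) }
    } catch (error: any) {
        const text = String(error?.code ?? "") + " " + String(error?.message ?? "")
        if (text.includes("23505") || text.toUpperCase().includes("UNIQUE")) {
            return { success: false as const, error: "Duplicate message" }
        }
        throw error // anything else is a real failure
    }
}
```

This keeps the best-effort preflight read for the common case while closing the TOCTOU window: two concurrent inserts race at the database, and the loser gets a clean duplicate response instead of an internal error.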
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: ec534a51-66db-4ad2-b5d7-b60310db587d
📒 Files selected for processing (2)
src/features/l2ps-messaging/L2PSMessagingServer.ts
src/features/l2ps-messaging/L2PSMessagingService.ts
// Rate-limit offline messages
if (!recipientOnline) {
    const count = this.offlineMessageCounts.get(fromKey) ?? 0
    if (count >= MAX_OFFLINE_MESSAGES_PER_SENDER) {
        return { success: false, error: "Offline message limit reached" }
    }
    this.offlineMessageCounts.set(fromKey, count + 1)
}
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== Status definitions =="
sed -n '1,220p' src/features/l2ps-messaging/types.ts
echo
sed -n '1,220p' src/features/l2ps-messaging/entities/L2PSMessage.ts
echo
echo "== Current writes/usages of \"failed\" in messaging code =="
rg -n '"failed"|status\s*:' src/features/l2ps-messaging src/model
Length of output: 1781
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== Checking for L2PS messaging files =="
find . -type f -name "*L2PSMessage*" -o -name "*L2PSMessaging*" 2>/dev/null | head -20
echo
echo "== Checking types.ts and entities structure =="
ls -la src/features/l2ps-messaging/ 2>/dev/null || echo "Directory not found"
Length of output: 1080
🏁 Script executed:
#!/bin/bash
set -euo pipefail
echo "== L2PSMessagingService.ts lines 50-90 (context around 57-64 and 82-85) =="
sed -n '50,90p' src/features/l2ps-messaging/L2PSMessagingService.ts
echo
echo "== Full types.ts to check status definitions =="
cat src/features/l2ps-messaging/types.ts
echo
echo "== Full L2PSMessage.ts entity =="
cat src/features/l2ps-messaging/entities/L2PSMessage.ts
echo
echo "== Search for 'failed' status writes in messaging code =="
rg -n '"failed"|status.*failed|failed.*status' src/features/l2ps-messaging/ -A 2 -B 2
Length of output: 11481
Roll back offline message quota on L2PS submission failure.
When a recipient is offline, the sender's offline message count is incremented at line 63 before the message is durably saved to the database at line 69. If the subsequent L2PS submission fails at line 82, the code updates the message status to "failed" at line 84 but never decrements the quota. A few transient DB/L2PS failures can exhaust the sender's offline quota even though no messages were actually queued. Additionally, "failed" is not a declared status in the MessageStatus type (types.ts) or the L2PSMessage entity (line 27), yet it is written to the database.
Either decrement offlineMessageCounts on failure, or model "failed" as a valid status end-to-end rather than using an undeclared value.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/features/l2ps-messaging/L2PSMessagingService.ts` around lines 57 - 64,
The offline-message quota is incremented before DB/L2PS persistence and never
rolled back on submission failure; also the code writes the undeclared status
"failed". Fix by: when handling the L2PS submission failure in
L2PSMessagingService (the block that sets the message status to "failed"),
decrement this.offlineMessageCounts for the same fromKey if recipient was
offline and the stored count > 0 to roll back the quota, and replace the literal
"failed" with a valid MessageStatus value (or add "failed" to the MessageStatus
type and L2PSMessage entity if you prefer modeling a failed terminal state) so
the status written to the DB is declared.
…or L2PS submissions
Actionable comments posted: 3
♻️ Duplicate comments (6)
src/features/l2ps-messaging/types.ts (1)
41-48: ⚠️ Potential issue | 🟡 Minor

Remove "ack" until the server implements it.

ClientMessageType advertises an "ack" frame, but L2PSMessagingServer.handleMessage() has no matching case, so a type-correct client still gets INVALID_MESSAGE.

Suggested diff

 export type ClientMessageType =
     | "register"
     | "send"
     | "history"
     | "discover"
     | "request_public_key"
-    | "ack"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/l2ps-messaging/types.ts` around lines 41 - 48, ClientMessageType currently includes the "ack" variant but L2PSMessagingServer.handleMessage() does not handle it, causing valid-typed clients to receive INVALID_MESSAGE; remove the "ack" member from the ClientMessageType union in types.ts until you implement handling in L2PSMessagingServer.handleMessage(), or alternatively add a matching case in L2PSMessagingServer.handleMessage() that processes "ack" frames consistently with the protocol.

src/features/l2ps-messaging/L2PSMessagingService.ts (1)
24-25: ⚠️ Potential issue | 🟠 Major

Resetting the whole sender quota still frees too much capacity.

offlineMessageCounts is keyed only by sender, and resetOfflineCount() deletes the entire entry. Delivering one queued message to one peer restores the full quota even if that sender still has other queued messages outstanding.

Also applies to: 289-290
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/l2ps-messaging/L2PSMessagingService.ts` around lines 24 - 25, offlineMessageCounts is currently keyed only by sender and resetOfflineCount() deletes the entire sender entry, which wrongly restores full quota when only one queued message was delivered; change the counting strategy to track per-sender-per-recipient (or per-queue-item) counts instead of a single sender-wide count, update where counts are incremented/decremented (references: offlineMessageCounts and resetOfflineCount) so delivering one queued message decrements the specific sender-recipient counter (or decrements the sender total by one) rather than deleting the whole map entry, and ensure any other usages (e.g., the code paths at the other referenced locations) are updated to use the new composite key or decrement behavior.

src/features/l2ps-messaging/L2PSMessagingServer.ts (4)
136-152: ⚠️ Potential issue | 🔴 Critical

Reject stale or future proof timestamps before verifying signatures.

Both proof paths bind msg.timestamp, but neither checks that it is present, parseable, and within a small skew window. A captured register or history frame can still be replayed indefinitely.

Also applies to: 284-300
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/l2ps-messaging/L2PSMessagingServer.ts` around lines 136 - 152, The proof verification currently uses msg.timestamp without validating it; update the L2PSMessagingServer registration and the analogous history verification block (the other proof path) to first ensure msg.timestamp exists, can be parsed as an integer/number, and falls within an acceptable clock skew window (e.g., ±300 seconds) before calling ucrypto.verify; if missing, unparsable, or outside the window call this.sendError(ws, "INVALID_PROOF", "...") and return. Locate the verification blocks around the proofMessage construction and hexToUint8Array usage in L2PSMessagingServer and add the timestamp parsing/validation logic there (and mirror the same check in the block at the other referenced location).
139-143: ⚠️ Potential issue | 🟠 Major

Pin client-proof verification to ed25519, or include the algorithm in the signed payload.

The protocol comments/types describe client keys as ed25519, but these verifies follow the node-wide getSharedState.signingAlgorithm. A node configured for a different signing algorithm will reject otherwise valid client proofs.

Verify by comparing the protocol contract with the runtime algorithm source:

#!/bin/bash
rg -nC2 'signingAlgorithm|ed25519' \
    src/features/l2ps-messaging/L2PSMessagingServer.ts \
    src/features/l2ps-messaging/types.ts \
    src/utilities/sharedState.ts

Also applies to: 287-291
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/l2ps-messaging/L2PSMessagingServer.ts` around lines 139 - 143, The verification uses node-wide getSharedState.signingAlgorithm which can mismatch the protocol's ed25519 keys; change the client-proof verification in L2PSMessagingServer to explicitly use ed25519 (i.e., call ucrypto.verify with algorithm set to 'ed25519') or alternatively embed the algorithm in the signed payload and parse it before calling ucrypto.verify; update both occurrences that pass getSharedState.signingAlgorithm (the verify call that encodes proofMessage and hexToUint8Array(publicKey)/hexToUint8Array(proof) and the similar block at the later verify call around lines 287-291) so the algorithm is deterministically derived from the protocol (prefer hardcoding 'ed25519' for client proofs) rather than from getSharedState.signingAlgorithm.
431-436: ⚠️ Potential issue | 🔴 Critical

Return Bun's ws.send() result instead of swallowing it.

This helper currently turns every send into a fire-and-forget operation, so callers cannot tell success from 0 (dropped) or -1 (backpressure). That leaks into false message_sent confirmations and queued-message rows being marked delivered even when the socket write did not succeed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/features/l2ps-messaging/L2PSMessagingServer.ts` around lines 431 - 436, The send helper currently swallows Bun's ServerWebSocket.send() return value; change L2PSMessagingServer.send to return the numeric result from ws.send(...) (use the existing ServerWebSocket<WSData> and ProtocolFrame types) so callers can detect 0 (dropped) or -1 (backpressure/error). Concretely: change the method signature from void to return number, return ws.send(JSON.stringify(frame)) in the try block, and in the catch log the error with log.debug and return -1 (or another sentinel) instead of swallowing; then update callers that mark messages delivered (message_sent / queued-message flow) to check this numeric result and only mark delivered on a success code.
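A minimal sketch of the suggested change, using a structural stand-in for Bun's `ServerWebSocket` (the `safeSend` name is illustrative; per Bun's docs, `send()` returns `-1` on backpressure, `0` when the message is dropped, and the number of bytes sent on success):

```typescript
// Structural type standing in for Bun's ServerWebSocket; only send() matters here.
interface Sendable {
    send(data: string): number
}

// Return the numeric send result instead of swallowing it, so callers can
// refuse to mark a message delivered unless the write actually succeeded.
function safeSend(ws: Sendable, frame: unknown): number {
    try {
        return ws.send(JSON.stringify(frame))
    } catch {
        return -1 // treat a throwing socket like backpressure/failure
    }
}

// Caller side: only treat the message as delivered on a positive result.
const okSocket: Sendable = { send: (d) => d.length }
const deadSocket: Sendable = { send: () => { throw new Error("closed") } }
const sentBytes = safeSend(okSocket, { type: "message_sent", id: "m1" })
const failed = safeSend(deadSocket, { type: "message_sent", id: "m1" })
```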
278-303: ⚠️ Potential issue | 🟠 Major

Clamp and validate limit before querying history.

limit ?? 50 still flows straight into L2PSMessagingService.getHistory(), which turns it into .take(limit + 1). A non-positive or huge value can produce odd pagination or an unnecessarily expensive DB read.
Verify each finding against the current code and only fix it if needed. In `@src/features/l2ps-messaging/L2PSMessagingServer.ts` around lines 278 - 303, The code passes limit ?? 50 straight into service.getHistory which can lead to non-positive or huge values; validate and normalize limit before calling getHistory: parse/ensure msg.payload.limit is a positive integer, default to 50 if undefined, clamp it to a sane max (e.g., MAX_HISTORY_LIMIT = 100 or a config constant) and ensure a minimum of 1, and if the provided limit is invalid (NaN or <=0) either send an INVALID_MESSAGE error or coerce to the default; then call this.service.getHistory(myKey, peerKey, l2psUid, before, clampedLimit) instead of using limit ?? 50 so downstream .take(limit + 1) is safe.
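The normalization the prompt describes fits in one pure helper (constant names such as `MAX_HISTORY_LIMIT` are assumptions, mirroring the prompt's example):

```typescript
// Illustrative clamp for the history limit; constant names are assumptions.
const DEFAULT_HISTORY_LIMIT = 50
const MAX_HISTORY_LIMIT = 100

function clampHistoryLimit(raw: unknown): number {
    const n = typeof raw === "number" ? raw : Number.parseInt(String(raw ?? ""), 10)
    // NaN, non-integers, and non-positive values fall back to the default.
    if (!Number.isInteger(n) || n <= 0) return DEFAULT_HISTORY_LIMIT
    return Math.min(n, MAX_HISTORY_LIMIT)
}
```

With this in place, `.take(clampedLimit + 1)` downstream is always bounded between 2 and `MAX_HISTORY_LIMIT + 1` rows.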
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/features/l2ps-messaging/L2PSMessagingServer.ts`:
- Around line 156-157: The empty catch in the socket-replacement block swallows
errors when closing a stale socket: in L2PSMessagingServer.ts where you check
"if (existing)" and call "(existing.ws as ServerWebSocket<WSData>).close()",
remove the empty catch and replace it with proper error handling—catch the
exception and log it at debug level (via the server's logger or processLogger)
including context that this occurred while closing/replacing an existing
connection so cleanup failures are observable; ensure the original close is
still attempted and that the catch only logs the error without rethrowing.
In `@src/features/l2ps-messaging/L2PSMessagingService.ts`:
- Around line 57-85: The offlineMessageCounts increment for fromKey happens
before persisting/queueing (see offlineMessageCounts.set(fromKey, ...),
repo.save(msg), and subsequent L2PS submission code); move or protect the
increment so that any exception during repo.save(msg) or any later pre-queue
error rolls the counter back: decrement offlineMessageCounts for fromKey in the
catch/finally path (or delay increment until after durable queueing) to ensure
the sender’s quota is only consumed for successfully queued messages and handle
duplicate-key (saveError.code === "23505" or
saveError.message.includes("duplicate key")) and other errors by restoring the
count.
- Around line 54-78: The code currently sets status = "delivered" based on
recipientOnline before the websocket write, which can falsely mark messages as
delivered; change the flow so you always persist the new L2PSMessage
(L2PSMessage instance persisted via repo.save) with status "queued" initially,
then attempt the socket send when recipientOnline is true, and only after the
socket write succeeds update the same message record to status = "delivered"
(use repo.save/repo.update on the existing msg.id and set l2psTxHash if
available); if the socket send fails or the peer disconnects, keep it queued so
offline retry logic can handle it. Apply the same change to the other save path
that currently sets delivered prematurely (the other repo.save block referenced
in the comment).
---
Duplicate comments:
In `@src/features/l2ps-messaging/L2PSMessagingServer.ts`:
- Around line 136-152: The proof verification currently uses msg.timestamp
without validating it; update the L2PSMessagingServer registration and the
analogous history verification block (the other proof path) to first ensure
msg.timestamp exists, can be parsed as an integer/number, and falls within an
acceptable clock skew window (e.g., ±300 seconds) before calling ucrypto.verify;
if missing, unparsable, or outside the window call this.sendError(ws,
"INVALID_PROOF", "...") and return. Locate the verification blocks around the
proofMessage construction and hexToUint8Array usage in L2PSMessagingServer and
add the timestamp parsing/validation logic there (and mirror the same check in
the block at the other referenced location).
- Around line 139-143: The verification uses node-wide
getSharedState.signingAlgorithm which can mismatch the protocol's ed25519 keys;
change the client-proof verification in L2PSMessagingServer to explicitly use
ed25519 (i.e., call ucrypto.verify with algorithm set to 'ed25519') or
alternatively embed the algorithm in the signed payload and parse it before
calling ucrypto.verify; update both occurrences that pass
getSharedState.signingAlgorithm (the verify call that encodes proofMessage and
hexToUint8Array(publicKey)/hexToUint8Array(proof) and the similar block at the
later verify call around lines 287-291) so the algorithm is deterministically
derived from the protocol (prefer hardcoding 'ed25519' for client proofs) rather
than from getSharedState.signingAlgorithm.
- Around line 431-436: The send helper currently swallows Bun's
ServerWebSocket.send() return value; change L2PSMessagingServer.send to return
the numeric result from ws.send(...) (use the existing ServerWebSocket<WSData>
and ProtocolFrame types) so callers can detect 0 (dropped) or -1
(backpressure/error). Concretely: change the method signature from void to
return number, return ws.send(JSON.stringify(frame)) in the try block, and in
the catch log the error with log.debug and return -1 (or another sentinel)
instead of swallowing; then update callers that mark messages delivered
(message_sent / queued-message flow) to check this numeric result and only mark
delivered on a success code.
- Around line 278-303: The code passes limit ?? 50 straight into
service.getHistory which can lead to non-positive or huge values; validate and
normalize limit before calling getHistory: parse/ensure msg.payload.limit is a
positive integer, default to 50 if undefined, clamp it to a sane max (e.g.,
MAX_HISTORY_LIMIT = 100 or a config constant) and ensure a minimum of 1, and if
the provided limit is invalid (NaN or <=0) either send an INVALID_MESSAGE error
or coerce to the default; then call this.service.getHistory(myKey, peerKey,
l2psUid, before, clampedLimit) instead of using limit ?? 50 so downstream
.take(limit + 1) is safe.
In `@src/features/l2ps-messaging/L2PSMessagingService.ts`:
- Around line 24-25: offlineMessageCounts is currently keyed only by sender and
resetOfflineCount() deletes the entire sender entry, which wrongly restores full
quota when only one queued message was delivered; change the counting strategy
to track per-sender-per-recipient (or per-queue-item) counts instead of a single
sender-wide count, update where counts are incremented/decremented (references:
offlineMessageCounts and resetOfflineCount) so delivering one queued message
decrements the specific sender-recipient counter (or decrements the sender total
by one) rather than deleting the whole map entry, and ensure any other usages
(e.g., the code paths at the other referenced locations) are updated to use the
new composite key or decrement behavior.
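The per-sender-per-recipient bookkeeping this prompt asks for could look like the following sketch; the class and method names are hypothetical, not from L2PSMessagingService:

```typescript
// Hypothetical quota tracker keyed by sender:recipient instead of sender alone.
class OfflineQuota {
    private counts = new Map<string, number>()

    private key(from: string, to: string): string {
        return `${from}:${to}`
    }

    increment(from: string, to: string): number {
        const next = (this.counts.get(this.key(from, to)) ?? 0) + 1
        this.counts.set(this.key(from, to), next)
        return next
    }

    // Delivering ONE queued message releases one slot, not the whole quota.
    decrement(from: string, to: string): void {
        const k = this.key(from, to)
        const next = (this.counts.get(k) ?? 0) - 1
        if (next <= 0) this.counts.delete(k)
        else this.counts.set(k, next)
    }

    get(from: string, to: string): number {
        return this.counts.get(this.key(from, to)) ?? 0
    }
}
```

The key point is the `decrement` path: it replaces the current `resetOfflineCount()` delete-the-whole-entry behavior, so delivering one queued message no longer restores the sender's full quota.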
In `@src/features/l2ps-messaging/types.ts`:
- Around line 41-48: ClientMessageType currently includes the "ack" variant but
L2PSMessagingServer.handleMessage() does not handle it, causing valid-typed
clients to receive INVALID_MESSAGE; remove the "ack" member from the
ClientMessageType union in types.ts until you implement handling in
L2PSMessagingServer.handleMessage(), or alternatively add a matching case in
L2PSMessagingServer.handleMessage() that processes "ack" frames consistently
with the protocol.
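If the second option is taken — handling "ack" instead of dropping it from the union — the branch could look like this sketch (the frame shape and `markDelivered` callback are illustrative assumptions):

```typescript
// Hypothetical "ack" branch for handleMessage(); frame shape is illustrative.
interface AckFrame {
    type: "ack"
    payload?: { messageId?: string }
}

function handleAck(
    frame: AckFrame,
    markDelivered: (messageId: string) => void,
): boolean {
    const id = frame.payload?.messageId
    // Reject malformed acks rather than answering with INVALID_MESSAGE.
    if (typeof id !== "string" || id.length === 0) return false
    markDelivered(id)
    return true
}
```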
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: c6226e79-2608-4adb-9f86-ac1f6d791acd
📒 Files selected for processing (3)
- src/features/l2ps-messaging/L2PSMessagingServer.ts
- src/features/l2ps-messaging/L2PSMessagingService.ts
- src/features/l2ps-messaging/types.ts
```typescript
if (existing) {
    try { (existing.ws as ServerWebSocket<WSData>).close() } catch {}
```
Remove the empty catch here.
Line 157 currently trips ESLint no-empty, and it also hides failures while replacing a stale socket. At least log at debug level so the cleanup path is observable.
🧰 Tools
🪛 ESLint
[error] 157-157: Empty block statement.
(no-empty)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/features/l2ps-messaging/L2PSMessagingServer.ts` around lines 156 - 157,
The empty catch in the socket-replacement block swallows errors when closing a
stale socket: in L2PSMessagingServer.ts where you check "if (existing)" and call
"(existing.ws as ServerWebSocket<WSData>).close()", remove the empty catch and
replace it with proper error handling—catch the exception and log it at debug
level (via the server's logger or processLogger) including context that this
occurred while closing/replacing an existing connection so cleanup failures are
observable; ensure the original close is still attempted and that the catch only
logs the error without rethrowing.
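The replacement for the empty catch is small; this sketch uses an assumed minimal logger shape rather than the server's actual processLogger:

```typescript
// Illustrative replacement for the empty catch; the logger shape is an assumption.
interface DebugLogger {
    debug(msg: string): void
}

function closeStaleSocket(ws: { close(): void }, log: DebugLogger): void {
    try {
        ws.close()
    } catch (err) {
        // Still non-fatal, but the cleanup failure is now observable.
        log.debug(`Failed to close stale socket while replacing connection: ${String(err)}`)
    }
}
```

This also satisfies ESLint's no-empty rule, since the catch block now contains a statement.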
```typescript
const status = recipientOnline ? "delivered" : "queued"
const now = Date.now()

// Rate-limit offline messages
if (!recipientOnline) {
    const count = this.offlineMessageCounts.get(fromKey) ?? 0
    if (count >= MAX_OFFLINE_MESSAGES_PER_SENDER) {
        return { success: false, error: "Offline message limit reached" }
    }
    this.offlineMessageCounts.set(fromKey, count + 1)
}

// Store message in local DB
const msg = new L2PSMessage()
msg.id = messageId
msg.fromKey = fromKey
msg.toKey = toKey
msg.l2psUid = l2psUid
msg.messageHash = messageHash
msg.encrypted = encrypted
msg.l2psTxHash = null
msg.timestamp = String(now)
msg.status = status
try {
    await repo.save(msg)
```
Don't persist live-recipient messages as delivered before the socket write succeeds.
recipientOnline is only a snapshot from the peer map. If the recipient disconnects, or the later websocket send is dropped, this row is already stored as delivered and will never be retried from the offline queue.
Also applies to: 101-105
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/features/l2ps-messaging/L2PSMessagingService.ts` around lines 54 - 78,
The code currently sets status = "delivered" based on recipientOnline before the
websocket write, which can falsely mark messages as delivered; change the flow
so you always persist the new L2PSMessage (L2PSMessage instance persisted via
repo.save) with status "queued" initially, then attempt the socket send when
recipientOnline is true, and only after the socket write succeeds update the
same message record to status = "delivered" (use repo.save/repo.update on the
existing msg.id and set l2psTxHash if available); if the socket send fails or
the peer disconnects, keep it queued so offline retry logic can handle it. Apply
the same change to the other save path that currently sets delivered prematurely
(the other repo.save block referenced in the comment).
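The queued-first flow this prompt describes can be sketched with in-memory stand-ins for the repository and the socket (all names here are illustrative, not the service's actual API):

```typescript
// Illustrative: persist as "queued" first, flip to "delivered" only after a
// successful socket write. Repo and socket are in-memory stand-ins.
type Status = "queued" | "delivered"
interface StoredMessage { id: string; status: Status }

const db = new Map<string, StoredMessage>()
const repo = {
    save: async (m: StoredMessage) => { db.set(m.id, { ...m }) },
    update: async (id: string, patch: Partial<StoredMessage>) => {
        const row = db.get(id)
        if (row) db.set(id, { ...row, ...patch })
    },
}

async function deliver(
    id: string,
    recipientOnline: boolean,
    send: () => number, // Bun-style: positive result means bytes were written
): Promise<Status> {
    await repo.save({ id, status: "queued" }) // always durable first
    if (recipientOnline && send() > 0) {
        await repo.update(id, { status: "delivered" })
    }
    return db.get(id)!.status
}
```

A dropped write (`send()` returning 0 or -1) or an offline recipient both leave the row queued, so the offline retry path picks it up instead of silently losing it.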
```typescript
// Rate-limit offline messages
if (!recipientOnline) {
    const count = this.offlineMessageCounts.get(fromKey) ?? 0
    if (count >= MAX_OFFLINE_MESSAGES_PER_SENDER) {
        return { success: false, error: "Offline message limit reached" }
    }
    this.offlineMessageCounts.set(fromKey, count + 1)
}

// Store message in local DB
const msg = new L2PSMessage()
msg.id = messageId
msg.fromKey = fromKey
msg.toKey = toKey
msg.l2psUid = l2psUid
msg.messageHash = messageHash
msg.encrypted = encrypted
msg.l2psTxHash = null
msg.timestamp = String(now)
msg.status = status
try {
    await repo.save(msg)
} catch (saveError: any) {
    // Catch duplicate-key constraint violation (TOCTOU race)
    if (saveError?.code === "23505" || saveError?.message?.includes("duplicate key")) {
        return { success: false, error: "Duplicate message" }
    }
    throw saveError
}
```
Roll back the offline quota on any pre-queue failure, not just L2PS submission failure.
The counter increments before repo.save(msg). If that save throws, or any later exception escapes before the explicit rollback block, the sender loses quota even though nothing was durably queued.
Also applies to: 90-98
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/features/l2ps-messaging/L2PSMessagingService.ts` around lines 57 - 85,
The offlineMessageCounts increment for fromKey happens before
persisting/queueing (see offlineMessageCounts.set(fromKey, ...), repo.save(msg),
and subsequent L2PS submission code); move or protect the increment so that any
exception during repo.save(msg) or any later pre-queue error rolls the counter
back: decrement offlineMessageCounts for fromKey in the catch/finally path (or
delay increment until after durable queueing) to ensure the sender’s quota is
only consumed for successfully queued messages and handle duplicate-key
(saveError.code === "23505" or saveError.message.includes("duplicate key")) and
other errors by restoring the count.
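The rollback pattern the prompt asks for — give the quota slot back whenever the durable save fails — can be sketched as follows (the `queueOffline` wrapper and injected `save` callback are illustrative):

```typescript
// Illustrative rollback of the offline quota when the durable save fails.
const offlineMessageCounts = new Map<string, number>()
const MAX_OFFLINE_MESSAGES_PER_SENDER = 100

async function queueOffline(
    fromKey: string,
    save: () => Promise<void>,
): Promise<{ success: boolean; error?: string }> {
    const count = offlineMessageCounts.get(fromKey) ?? 0
    if (count >= MAX_OFFLINE_MESSAGES_PER_SENDER) {
        return { success: false, error: "Offline message limit reached" }
    }
    offlineMessageCounts.set(fromKey, count + 1)
    try {
        await save()
        return { success: true }
    } catch (err) {
        // Nothing was durably queued (duplicate key, DB error, ...),
        // so give the slot back before reporting failure.
        offlineMessageCounts.set(fromKey, count)
        return { success: false, error: String(err) }
    }
}
```

Deferring the increment until after `save()` succeeds is the equivalent alternative the prompt mentions; the catch-path rollback shown here keeps the reservation-style semantics of the current code.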




…protocol
Summary by CodeRabbit

- New Features
- Documentation
- Tests
- Chores