Conversation
Summary by CodeRabbit
Walkthrough

Replace the custom FrameProcessor framing with tokio_util::codec::LengthDelimitedCodec across docs, examples, tests and app wiring; remove the processor module and its exports; add a FrameMetadata trait; expand LengthFormat’s public API; and drop builder-level frame_processor configuration.

Changes
Sequence diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor Client
    participant C as Client Codec<br/>(LengthDelimitedCodec)
    participant T as Transport (TCP)
    participant S as Server Codec<br/>(LengthDelimitedCodec)
    participant A as WireframeApp
    participant H as Handler / Serializer
    Client->>C: Provide payload
    C->>T: Send framed bytes<br/>(len-prefix + payload)
    T->>S: Deliver bytes
    S->>A: Deliver decoded frame
    A->>H: Route and handle request
    H-->>A: Return response payload
    A->>S: Provide response to codec
    S->>T: Send framed response
    T->>C: Deliver bytes
    C->>Client: Provide decoded response
    Note over C,S: Framing handled by tokio_util codecs; FrameProcessor removed
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Reviewer's Guide

This PR removes the legacy FrameProcessor/LengthPrefixedProcessor infrastructure and obsolete builder APIs, adopting tokio_util’s LengthDelimitedCodec throughout code, tests, and examples; refactors the LengthFormat and metadata modules with a fallible API and new FrameMetadata trait; introduces unified test helpers; and overhauls documentation to reflect codec-based framing.

Class diagram for updated frame metadata and codec integration:

```mermaid
classDiagram
    class LengthFormat {
        +bytes: usize
        +endianness: Endianness
        +new(bytes, endianness)
        +try_new(bytes, endianness)
        +bytes()
        +endianness()
        +read_len(bytes)
        +write_len(len, dst)
    }
    class FrameMetadata {
        <<trait>>
        +Frame
        +Error
        +parse(src)
    }
    class LengthDelimitedCodec {
        +builder()
        +max_frame_length(len)
        +new_codec()
        +encode(item, dst)
        +decode(src)
    }
    LengthFormat <.. LengthDelimitedCodec : "used by"
    FrameMetadata <|.. LengthDelimitedCodec : "implemented for custom codecs"
```
Class diagram for removal of FrameProcessor and LengthPrefixedProcessor:

```mermaid
classDiagram
    class FrameProcessor {
        <<trait>>
        -Frame
        -Error
        -decode(src)
        -encode(frame, dst)
    }
    class LengthPrefixedProcessor {
        -format: LengthFormat
        -decode(src)
        -encode(frame, dst)
    }
    FrameProcessor <|-- LengthPrefixedProcessor
    %% Both classes are now removed from the codebase
```
File-Level Changes
Possibly linked issues
@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix: Inbound processing now builds a LengthDelimitedCodec with default settings. That codec caps frames at 8 MiB by default, whereas the builder still exposes buffer_capacity up to 16 MiB and the previous LengthPrefixedProcessor accepted any usize length. After this change, frames between 8 MiB and the configured capacity will be rejected and the connection closed even though the configuration suggests they are allowed; the same default is used in send_response, so large responses fail to encode as well. The codec should be initialised with an explicit max_frame_length (and mirrored on the send path) derived from the configured buffer size to avoid the regression.
This comment was marked as resolved.
@coderabbitai Has the following now been resolved? Inbound processing now builds a LengthDelimitedCodec with default settings. That codec caps frames at 8 MiB by default, whereas the builder still exposes buffer_capacity up to 16 MiB and the previous LengthPrefixedProcessor accepted any usize length. After this change, frames between 8 MiB and the configured capacity will be rejected and the connection closed even though the configuration suggests they are allowed; the same default is used in send_response, so large responses fail to encode as well. The codec should be initialised with an explicit max_frame_length (and mirrored on the send path) derived from the configured buffer size to avoid the regression.
This comment was marked as resolved.
@coderabbitai re: "deserializes" is correct for en-GB-oxendict (-ize, -yse, -our) as used by this project
@leynos Thank you for the correction! You're absolutely right about the Oxford English Dictionary's preference for -ize endings. I apologize for the overly broad assumption about British English spelling conventions. The en-GB-oxendict variant does indeed use -ize endings (serialize/deserialize) rather than -ise endings, following Oxford Dictionary conventions. I'll note this project-specific spelling preference for future reference. 🐰📚

✏️ Learnings added
Actionable comments posted: 12
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (11)
docs/wireframe-client-design.md (1)
98-101: Prefer inline links over reference-style links. Inline the router-design link inside the footnote and drop the reference block.

```diff
-[^1]: See [wireframe router design][router-design].
-
-[router-design]: rust-binary-router-library-design.md#implementation-details
+[^1]: See [wireframe router design](rust-binary-router-library-design.md#implementation-details).
```

tests/routes.rs (2)
87-95: Assert no trailing bytes after decode. Tighten the single-frame round-trip by asserting the buffer is empty post-decode.

```diff
 let frame = codec
     .decode(&mut buf)
     .expect("decode failed")
     .expect("frame missing");
+assert!(buf.is_empty(), "unexpected trailing bytes after decode");
```

Also applies to: 91-95
121-130: Add trailing-bytes assertion in second single-frame test. Mirror the stricter check used elsewhere.

```diff
 let frame = codec
     .decode(&mut buf)
     .expect("decode failed")
     .expect("missing frame");
+assert!(buf.is_empty(), "unexpected trailing bytes after decode");
```

Also applies to: 126-137
README.md (1)
10-20: Align prose with en-GB Oxford "-ize" convention. Project guidance specifies en-GB-oxendict with "-ize" endings. Update prose spellings (retain API identifiers like `serializer`).

```diff
-Manual handling of binary protocols typically involves verbose serialisation
+Manual handling of binary protocols typically involves verbose serialization
@@
-- **Serialisation engine** using `bincode` or a `wire-rs` wrapper
+- **Serialization engine** using `bincode` or a `wire-rs` wrapper
```

wireframe_testing/src/helpers.rs (1)
92-95: Export a shared test frame cap. Expose a single constant to keep tests aligned with helper encode caps.

```diff
 const DEFAULT_CAPACITY: usize = 4096;
 const MAX_CAPACITY: usize = 1024 * 1024 * 10; // 10MB limit
 pub(crate) const EMPTY_SERVER_CAPACITY: usize = 64;
+/// Shared frame cap used by helpers and tests to avoid drift.
+pub const TEST_MAX_FRAME: usize = DEFAULT_CAPACITY;
```

tests/response.rs (1)
62-71: Assert no trailing bytes after decode. Guarantee a single-frame round-trip in this path too.

```diff
 let (decoded, _) = TestResp::from_bytes(&frame).expect("deserialize failed");
 assert_eq!(decoded, TestResp(7));
+assert!(buf.is_empty(), "unexpected trailing bytes after decode");
```

docs/generic-message-fragmentation-and-re-assembly-design.md (5)
28-38: Use markdownlint MD013 next-line scope, not block disable. Limit the MD013 override to the table only.

Apply this diff:

```diff
-<!-- markdownlint-disable MD013 -->
+<!-- markdownlint-disable-next-line MD013 -->
 ...
-<!-- markdownlint-enable MD013 -->
```
60-86: Capture expected total to improve pre-allocation and bounds checks. Track `expected_total` when headers advertise total size.

Apply this diff:

```diff
 struct PartialMessage {
     /// The buffer holding the accumulating payload.
     buffer: BytesMut,
+    /// The advertised total payload size, if known.
+    expected_total: Option<usize>,
     /// The sequence number of the last fragment received.
     last_sequence: u64,
     /// The time the first fragment was received.
     started_at: Instant,
 }
```
124-155: Make header encode fallible; return a Result. Allow strategies to signal encoding failures (e.g., insufficient dst capacity).

Apply this diff:

```diff
-    fn encode_header(
+    fn encode_header(
         &self,
         dst: &mut BytesMut,
         // The full logical frame being fragmented, for context.
         frame: &Self::Frame,
         // Metadata for the current fragment being encoded.
         payload_len: usize,
         is_final: bool,
         msg_id: u64,
         seq: u64,
-    );
+    ) -> io::Result<()>;
```
286-298: Tie success metrics to configured limits, not hard-coded sizes. Avoid implying fixed 64 MiB/1 GiB limits that may diverge from product settings.

Apply this diff:

```diff
-| Functionality | A large logical frame is correctly split into N fragments, and a sequence of N fragments is correctly re-assembled into the original frame. | An end-to-end test confirms byte-for-byte identity of a 64 MiB payload after being fragmented and re-assembled. |
+| Functionality | A large logical frame is correctly split into N fragments, and a sequence of N fragments is correctly re-assembled into the original frame. | An end-to-end test confirms byte-for-byte identity of a payload at the configured max_message_size after being fragmented and re-assembled. |
@@
-| Resilience | The adapter protects against memory exhaustion from oversized messages. | A test sending fragments for a 2 GiB message against a 1 GiB max_message_size limit must terminate the connection and not allocate more than ~1 GiB of buffer memory. |
+| Resilience | The adapter protects against memory exhaustion from oversized messages. | A test sending fragments that exceed max_message_size must terminate the connection and not allocate beyond the configured cap (including allocator overhead). |
```
12-23: Flag this as a design proposal; avoid committing to stable API prematurely. Add an explicit status banner to signal that names and signatures may change.

Apply this diff:

```diff
 This document details the design for a generic, protocol-agnostic
 fragmentation and re-assembly layer. The core philosophy is to treat this as a
 **transparent middleware**. Application-level code, such as handlers, should
 remain unaware of the underlying fragmentation, dealing only with complete,
 logical messages. This new layer will be responsible for automatically
 splitting oversized outbound frames and meticulously re-assembling inbound
 fragments into a single, coherent message before they reach the router.
+
+> Status: design proposal. API names, trait bounds, and configuration shapes
+> may change before stabilisation.
```
♻️ Duplicate comments (2)
docs/asynchronous-outbound-messaging-design.md (1)
740-757: Replace pseudocode with a runnable layering example. The fragmentation snippet still contains pseudocode and is not runnable. Provide a concrete, compile-ready example that layers fragmentation over LengthDelimitedCodec and uses Framed{Read,Write} or Framed.

Goal: make the layering explicit and runnable.

Edits (docs/asynchronous-outbound-messaging-design.md around lines 740-757):

1. Replace the pseudocode with a complete Rust example:
   - use `tokio_util::codec::{LengthDelimitedCodec, Framed}`;
   - construct the codec via `.builder().max_frame_length(MAX_FRAME).new_codec()`;
   - construct the actual fragmentation adapter from this repo (use the real type and constructor);
   - wrap the length-delimited codec with the adapter (use the real API, not `.wrap` pseudocode);
   - show `Framed::new(stream, layered_codec)` sending and receiving a payload that exceeds MAX_FRAME to demonstrate fragmentation.
2. Annotate the code block as `rust,no_run` and include any required `use` imports.
3. Add one sentence before the block clarifying that application code writes logical frames; the adapter performs fragmentation transparently.

Rationale: a prior review requested a runnable example to clarify adapter layering; pseudocode does not meet that requirement.

src/app/connection.rs (1)
40-44: Clarify whether buffer_capacity caps payload bytes or on-wire bytes. LengthDelimitedCodec’s max_frame_length applies to the payload (excludes the prefix). If buffer_capacity is intended as a total on-wire cap, subtract the prefix width to avoid off-by-4 rejections; if it is a payload cap, document that alongside the field.

```diff
 fn new_length_codec(&self) -> LengthDelimitedCodec {
-    LengthDelimitedCodec::builder()
-        .max_frame_length(self.buffer_capacity)
-        .new_codec()
+    const LEN_PREFIX: usize = 4; // default for LengthDelimitedCodec
+    let payload_cap = self.buffer_capacity.saturating_sub(LEN_PREFIX);
+    LengthDelimitedCodec::builder()
+        .max_frame_length(payload_cap)
+        .new_codec()
 }
```

Run to verify consistency and find any remaining uncapped codecs:

```bash
#!/usr/bin/env bash
rg -nP '\bLengthDelimitedCodec::new\s*\(' -S
rg -nP '(?s)LengthDelimitedCodec::builder\(\)\s*(?!.*max_frame_length)' -S
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- Jira integration is disabled
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (12)
- README.md (4 hunks)
- docs/asynchronous-outbound-messaging-design.md (3 hunks)
- docs/generic-message-fragmentation-and-re-assembly-design.md (9 hunks)
- docs/wireframe-client-design.md (7 hunks)
- examples/metadata_routing.rs (3 hunks)
- src/app/connection.rs (3 hunks)
- src/frame/metadata.rs (1 hunks)
- tests/lifecycle.rs (3 hunks)
- tests/middleware_order.rs (3 hunks)
- tests/response.rs (7 hunks)
- tests/routes.rs (5 hunks)
- wireframe_testing/src/helpers.rs (3 hunks)
🧰 Additional context used
📓 Path-based instructions (6)
**/*.rs
📄 CodeRabbit inference engine (AGENTS.md)
**/*.rs: Use precise names; boolean names should start with is/has/should
Use en-GB-oxendict spelling and grammar in comments
Function documentation must include clear examples; test documentation should omit redundant examples
Keep code files ≤ 400 lines; split long switch/dispatch logic by feature; move large test data to external files
Disallow Clippy warnings
Fix warnings emitted during tests in code rather than silencing them
Extract helper functions for long functions; adhere to separation of concerns and CQRS
Group related parameters into meaningful structs when functions have many parameters
Consider using Arc for large error returns to reduce data size
Each Rust module must begin with a module-level //! comment describing purpose and utility
Document public APIs with Rustdoc /// comments to enable cargo doc generation
Prefer immutable data; avoid unnecessary mut
Handle errors with Result instead of panicking where feasible
Avoid unsafe code unless necessary and document any usage clearly
Place function attributes after doc comments
Do not use return in single-line functions
Use predicate functions for conditional criteria with more than two branches
Do not silence lints except as a last resort
Lint suppressions must be tightly scoped and include a clear reason
Prefer #[expect(..)] over #[allow(..)] for lints
Prefer .expect() over .unwrap()
Use concat!() to combine long string literals rather than escaping newlines
Prefer single-line function bodies where appropriate (e.g., pub fn new(id: u64) -> Self { Self(id) })
Prefer semantic error enums deriving std::error::Error via thiserror for inspectable conditions
Files:
src/app/connection.rs, tests/middleware_order.rs, tests/routes.rs, src/frame/metadata.rs, tests/lifecycle.rs, wireframe_testing/src/helpers.rs, tests/response.rs, examples/metadata_routing.rs
⚙️ CodeRabbit configuration file
**/*.rs:

- Seek to keep the cyclomatic complexity of functions no more than 12.
- Adhere to single responsibility and CQRS.
- Place function attributes after doc comments.
- Do not use `return` in single-line functions.
- Move conditionals with >2 branches into a predicate function.
- Avoid `unsafe` unless absolutely necessary.
- Every module must begin with a `//!` doc comment that explains the module's purpose and utility.
- Comments and docs must follow en-GB-oxendict (-ize / -our) spelling and grammar.
- Lints must not be silenced except as a last resort.
- `#[allow]` is forbidden.
- Only narrowly scoped `#[expect(lint, reason = "...")]` is allowed.
- No lint groups, no blanket or file-wide suppression.
- Include `FIXME:` with link if a fix is expected.
- Where code is only used by specific features, it must be conditionally compiled or a conditional expectation for unused_code applied.
- Use `rstest` fixtures for shared setup and to avoid repetition between tests.
- Replace duplicated tests with `#[rstest(...)]` parameterised cases.
- Prefer `mockall` for mocks/stubs.
- Prefer `.expect()` over `.unwrap()`.
- Ensure that any API or behavioural changes are reflected in the documentation in `docs/`.
- Ensure that any completed roadmap steps are recorded in the appropriate roadmap in `docs/`.
- Files must not exceed 400 lines in length.
- Large modules must be decomposed.
- Long match statements or dispatch tables should be decomposed by domain and collocated with targets.
- Large blocks of inline data (e.g., test fixtures, constants or templates) must be moved to external files and inlined at compile-time or loaded at run-time.
- Environment access (env::set_var and env::remove_var) is always unsafe in Rust 2024 and MUST be marked as such.
- For testing of functionality depending upon environment variables, dependency injection and the `mockable` crate are the preferred option.
- If mockable cannot be used, env mutations in tests ...
Files:
src/app/connection.rs, tests/middleware_order.rs, tests/routes.rs, src/frame/metadata.rs, tests/lifecycle.rs, wireframe_testing/src/helpers.rs, tests/response.rs, examples/metadata_routing.rs
{src,tests}/**/*.rs
📄 CodeRabbit inference engine (AGENTS.md)
Write unit and behavioural tests for new functionality
Files:
src/app/connection.rs, tests/middleware_order.rs, tests/routes.rs, src/frame/metadata.rs, tests/lifecycle.rs, tests/response.rs
tests/**/*.rs
📄 CodeRabbit inference engine (AGENTS.md)
tests/**/*.rs: Use rstest fixtures for shared setup
Replace duplicated tests with #[rstest(...)] parameterised cases
Prefer mockall for mocks/stubs
Files:
tests/middleware_order.rs, tests/routes.rs, tests/lifecycle.rs, tests/response.rs
**/*.md
📄 CodeRabbit inference engine (AGENTS.md)
**/*.md: Markdown paragraphs and bullet points must be wrapped at 80 columns
Markdown code blocks must be wrapped at 120 columns
Do not wrap tables and headings in Markdown
Use dashes (-) for list bullets in Markdown
Use GitHub-flavoured Markdown footnotes ([^1])
Files:
README.md, docs/wireframe-client-design.md, docs/asynchronous-outbound-messaging-design.md, docs/generic-message-fragmentation-and-re-assembly-design.md
⚙️ CodeRabbit configuration file
**/*.md: * Avoid 2nd person or 1st person pronouns ("I", "you", "we")
- Use en-GB-oxendict (-ize / -our) spelling and grammar
- Headings must not be wrapped.
- Documents must start with a level 1 heading
- Headings must correctly increase or decrease by no more than one level at a time
- Use GitHub-flavoured Markdown style for footnotes and endnotes.
- Numbered footnotes must be numbered by order of appearance in the document.
Files:
README.md, docs/wireframe-client-design.md, docs/asynchronous-outbound-messaging-design.md, docs/generic-message-fragmentation-and-re-assembly-design.md
docs/**/*.md
📄 CodeRabbit inference engine (docs/documentation-style-guide.md)
docs/**/*.md:

- Use British English based on the Oxford English Dictionary (en-oxendict) for documentation text.
- The word "outwith" is acceptable in documentation.
- Keep US spelling when used in an API, for example `color`.
- Use the Oxford comma in documentation text.
- Treat company names as collective nouns in documentation (e.g., "Lille Industries are expanding").
- Write headings in sentence case in documentation.
- Use Markdown headings (`#`, `##`, `###`, etc.) in order without skipping levels.
- Follow markdownlint recommendations for Markdown files.
- Provide code blocks and lists using standard Markdown syntax.
- Always provide a language identifier for fenced code blocks; use `plaintext` for non-code text.
- Use `-` as the first level bullet and renumber lists when items change.
- Prefer inline links using `[text](url)` or angle brackets around the URL; avoid reference-style links like `[foo][bar]`.
- Ensure blank lines before and after bulleted lists and fenced blocks in Markdown.
- Ensure tables have a delimiter line below the header row in Markdown.
- Expand any uncommon acronym on first use, for example, Continuous Integration (CI).
- Wrap paragraphs at 80 columns in documentation.
- Wrap code at 120 columns in documentation.
- Do not wrap tables in documentation.
- Use sequentially numbered footnotes referenced with `[^1]` and place definitions at the end of the file.
- Where it adds clarity, include Mermaid diagrams in documentation.
- When embedding figures, provide concise alt text describing the content.
- Add a brief description before each Mermaid diagram in documentation for screen readers.
- Document examples showing how to deprecate old message versions gracefully.
- Write the official documentation for the new features. Create separate guides for "Duplex Messaging & Pushes", "Streaming Responses", and "Message Fragmentation". Each guide must include runnable examples and explain the relevant concepts and APIs.

docs/**/*.md: Use docs/ markdown ...
Files:
docs/wireframe-client-design.md, docs/asynchronous-outbound-messaging-design.md, docs/generic-message-fragmentation-and-re-assembly-design.md
docs/**/*.{md,rs}
📄 CodeRabbit inference engine (docs/multi-packet-and-streaming-responses-design.md)
docs/**/*.{md,rs}: The official documentation and examples must exclusively use the declarativeResponsemodel for handler responses.
The async-stream pattern must be documented as the canonical approach for dynamic stream generation.
Files:
docs/wireframe-client-design.md, docs/asynchronous-outbound-messaging-design.md, docs/generic-message-fragmentation-and-re-assembly-design.md
🧬 Code graph analysis (5)
tests/routes.rs (1)
wireframe_testing/src/helpers.rs (1)
drive_with_bincode(334-356)
src/frame/metadata.rs (1)
examples/metadata_routing.rs (1)
parse(43-57)
tests/lifecycle.rs (1)
wireframe_testing/src/helpers.rs (2)
run_app (379-422), run_with_duplex_server (467-475)
wireframe_testing/src/helpers.rs (1)
src/app/builder.rs (1)
new(144-144)
tests/response.rs (2)
src/frame/format.rs (3)
bytes (54-54), new (32-35), try_new (42-50)

wireframe_testing/src/helpers.rs (1)
run_app(379-422)
🪛 LanguageTool
README.md
[style] ~10-~10: Would you like to use the Oxford spelling “serialization”? The spelling ‘serialisation’ is also correct.
Context: ...ry protocols typically involves verbose serialisation code, custom frame parsing, and complex...
(OXFORD_SPELLING_Z_NOT_S)
[style] ~19-~19: Would you like to use the Oxford spelling “Serialization”? The spelling ‘Serialisation’ is also correct.
Context: ...tering success or failure callbacks - Serialisation engine using bincode or a wire-rs...
(OXFORD_SPELLING_Z_NOT_S)
[style] ~129-~129: Would you like to use the Oxford spelling “serialization”? The spelling ‘serialisation’ is also correct.
Context: ... modifying handle_frame. ## Response serialisation and framing Handlers can return types ...
(OXFORD_SPELLING_Z_NOT_S)
[style] ~285-~285: Would you like to use the Oxford spelling “serialization”? The spelling ‘serialisation’ is also correct.
Context: ...ng routing - ping_pong.rs — showcases serialisation and middleware in a ping/pong protoco...
(OXFORD_SPELLING_Z_NOT_S)
docs/wireframe-client-design.md
[style] ~67-~67: Would you like to use the Oxford spelling “initialization”? The spelling ‘initialisation’ is also correct.
Context: ...ycle callbacks, so both sides can share initialisation logic. ## Example usage ```rust #[tok...
(OXFORD_SPELLING_Z_NOT_S)
docs/generic-message-fragmentation-and-re-assembly-design.md
[style] ~239-~239: You have used the passive voice repeatedly in nearby sentences. To make your writing clearer and easier to read, consider using active voice.
Context: ...ap, and the complete logical frame is passed down the codec chain. 4. **Timeout han...
(REP_PASSIVE_VOICE)
🔍 Remote MCP
Here are a few concrete points to keep in mind when reviewing this PR:

- Complete removal of the old framing abstraction
  - The `FrameProcessor` trait and its concrete `LengthPrefixedProcessor` implementation in `src/frame/processor.rs` have been deleted. Any downstream code that previously implemented or referenced `FrameProcessor` will no longer compile.
  - All documentation snippets (in multiple `docs/*.md` files) have been updated to replace "FrameProcessor" with `tokio_util::codec`-based terminology; verify no stray references remain.
- Adoption of Tokio's length-delimited codec
  - Framing is now done via `tokio_util::codec::LengthDelimitedCodec` instead of a custom processor. The default codec is instantiated via `LengthDelimitedCodec::new()`, which uses a 4-byte big-endian prefix by default and can be configured via its builder for prefix length and endianness.
  - Examples and tests under `examples/` and `tests/` have been converted to use `LengthDelimitedCodec::builder().new_codec()`, replacing all `processor.encode`/`decode` calls with `codec.encode`/`codec.decode`.
- Expanded public API in `LengthFormat`
  - In `src/frame/format.rs`, the struct's `bytes` and `endianness` fields and the methods `pub fn read_len(&self, …)` and `pub fn write_len(&self, …)` have been made public. Tests now exercise error cases (invalid prefix sizes, overly large lengths) via these methods, so reviewers should confirm these signatures align with the intended semver bump.
- New `FrameMetadata` trait
  - A standalone `FrameMetadata` trait has been introduced in `src/frame/metadata.rs`, exposing a `parse(&self, src: &[u8]) -> Result<(Self::Frame, usize), Self::Error>` method for header-only inspection. This replaces the old metadata trait that lived in `processor.rs` and is now the only frame-metadata entrypoint.
- Simplified builder API
  - In `src/app/builder.rs`, all fields and methods related to `frame_processor` (including the deprecated `new_with_envelope` constructor and the `frame_processor` setter) have been removed. Framing is now entirely controlled by the serializer/codec configuration in the builder chain.

Reviewer guidance:

- Ensure no residual `FrameProcessor` mentions remain.
- Verify test coverage for both standard and error framing scenarios still passes under the new codec approach.
- Confirm that the expanded public API surface (in `LengthFormat` and `FrameMetadata`) is properly documented and versioned.

Citations:

- Encoder/Decoder traits and their role in framing via `tokio_util::codec`
- Default and configurable construction of `LengthDelimitedCodec`
⏰ Context from checks skipped due to timeout of 120000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build-test
🔇 Additional comments (24)
docs/asynchronous-outbound-messaging-design.md (3)
299-299: PushHandle visibility and example look good. Keep this snippet as-is; it matches the public surface and compiles in isolation.

320-323: Finalise the `try_push` signature in docs. The signature and fenced block are consistent; no further change needed.

324-353: Demonstrate streaming with async-stream: LGTM. The example uses async-stream as the canonical pattern and is marked no_run.
src/frame/metadata.rs (1)
1-41: FrameMetadata trait and docs: LGTM
- Module-level docs are present.
- Doctest is no_run and self-contained.
- British English is used in prose; API names retain American spelling where required.
src/app/connection.rs (3)
40-44: Centralise codec construction. Good extraction; this keeps send/receive in lockstep on limits and format.

64-64: Use the helper for outbound framing. Outbound now inherits the same max_frame_length as inbound.

167-168: Use the helper for inbound framing. Inbound correctly applies the configured cap via the helper.
tests/lifecycle.rs (3)
15-16: Import codec traits: LGTM. Decoder/Encoder imports are correct for the new path.

25-26: Pin the frame cap via a constant and verify repo-wide usage. MAX_FRAME is defined; ensure all tests/examples also set explicit caps.

Objective: confirm no test/example relies on the default 8 MiB limit.

Commands:

```bash
rg -nP '\bLengthDelimitedCodec::new\s*\('
rg -nP '(?s)LengthDelimitedCodec::builder\(\)\s*(?!.*max_frame_length)'
```

Expected: no matches.
153-156: Encode path under LengthDelimitedCodec: LGTM. The codec uses the explicit cap and encodes into a BytesMut buffer.
Also applies to: 157-158
examples/metadata_routing.rs (4)
10-12: Imports for codec-based encoding: LGTM. Use of Encoder and LengthDelimitedCodec is correct.

15-16: Define MAX_FRAME constant: LGTM. This removes magic numbers and documents intent.

53-56: Update comment to refer to the codec: LGTM. The prose matches the new framing model.

97-103: Bound the codec and size the buffer. The explicit cap prevents OOM; the buffer capacity accounts for the prefix.
tests/routes.rs (2)
149-168: LGTM: multi-frame encode/decode and buffer hygiene are correct. Using a single codec instance, tight buffer preallocation, and trailing-bytes assertion is spot on.

Also applies to: 170-188, 183-187

220-237: LGTM: single-frame encode path is aligned with decode limits. Preallocating for prefix + payload and asserting buffer emptiness makes this robust.

Also applies to: 232-237
tests/middleware_order.rs (1)
17-18: LGTM: explicit frame cap, encode/decode symmetry, and trailing-bytes check. The codec is capped deterministically and the round-trip asserts buffer hygiene.
Also applies to: 75-78, 92-97
tests/response.rs (2)
80-86: LGTM: header/body consumption semantics documented and tested. The tests accurately exercise and document LengthDelimitedCodec’s behaviour.
Also applies to: 94-102
213-239: Verify there are no remaining uncapped codec builders. Run a focused scan to ensure all LengthDelimitedCodec instances set max_frame_length, avoiding regressions elsewhere.

```bash
#!/bin/bash
set -euo pipefail
echo "Builder() with no max_frame_length before new_codec:"
rg -nPU '(?s)LengthDelimitedCodec::builder\(\)(?:(?!max_frame_length).)*?\.new_codec\(\)' -g '!**/target/**'
echo
echo "Direct ::new() calls (should be none):"
rg -nP '\bLengthDelimitedCodec::new\s*\(' -g '!**/target/**'
```

Also applies to: 241-277
docs/generic-message-fragmentation-and-re-assembly-design.md (5)
1-1: Heading is correct and follows style. Keep the level-1 heading; title casing and en-GB hyphenation are sound.

40-48: Align architecture description with codec-centric stack. State explicitly that the adapter sits in a codec stack, and keep the diagram.

192-194: Adopt canonical async-stream guidance. The note correctly states `async-stream` as the canonical approach.

93-101: Terminology is clear and consistent. Keep `FragmentStrategy` as the protocol hook; naming aligns with the narrative.

176-190: Remove incorrect suggestion; `Response::Vec(vec![vec![0_u8; 128 * 1024]])` correctly matches `Response<Vec<u8>>`. The `Vec` variant wraps a `Vec<F>` where `F = Vec<u8>`, so returning `Vec<Vec<u8>>` is intentional and type-correct. Likely an incorrect or invalid review comment.
@coderabbitai pause |
✅ Actions performed: reviews paused.
@sourcery-ai review |
Hey there - I've reviewed your changes - here's some feedback:
- Consider consolidating the repeated test boilerplate for setting up the length-delimited codec and decoding frames (including asserting that no trailing bytes remain) into reusable helper functions to reduce duplication and improve readability.
- Rather than manually configuring `LengthDelimitedCodec::builder()` in examples and tests, consider exposing a public API (e.g. on `WireframeApp`) to create a preconfigured length-delimited codec capped by the app’s buffer capacity to keep setup consistent across the codebase.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- Consider consolidating the repeated test boilerplate for setting up the length-delimited codec and decoding frames (including asserting that no trailing bytes remain) into reusable helper functions to reduce duplication and improve readability.
- Rather than manually configuring `LengthDelimitedCodec::builder()` in examples and tests, consider exposing a public API (e.g. on `WireframeApp`) to create a preconfigured length-delimited codec capped by the app’s buffer capacity to keep setup consistent across the codebase.
## Individual Comments
### Comment 1
<location> `src/frame/format.rs:32` </location>
<code_context>
+ /// Panics if `bytes` is not in `1..=8`.
#[must_use]
- pub const fn new(bytes: usize, endianness: Endianness) -> Self { Self { bytes, endianness } }
+ pub const fn new(bytes: usize, endianness: Endianness) -> Self {
+ assert!(matches!(bytes, 1..=8), "invalid length-prefix width");
+ Self { bytes, endianness }
</code_context>
<issue_to_address>
Add unit tests for LengthFormat::new to verify panic on invalid prefix width.
The new const constructor for LengthFormat now panics if bytes is not in 1..=8. Add unit tests to verify correct behaviour, including panic on invalid input.
</issue_to_address>
```diff
 /// Panics if `bytes` is not in `1..=8`.
 #[must_use]
-pub const fn new(bytes: usize, endianness: Endianness) -> Self { Self { bytes, endianness } }
+pub const fn new(bytes: usize, endianness: Endianness) -> Self {
```
issue (review_instructions): Add unit tests for LengthFormat::new to verify panic on invalid prefix width.
The new const constructor for LengthFormat now panics if bytes is not in 1..=8. Add unit tests to verify correct behaviour, including panic on invalid input.
Review instructions:
Path patterns: **/*
Instructions:
For any new feature or change to an existing feature, both behavioural and unit tests are required.
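A sketch of the requested tests, using a stand-in function so the example stays self-contained. The `check_prefix_width` helper merely mirrors the constructor's documented contract (panic when the width is outside `1..=8`); the real target is `LengthFormat::new`, and the project's preferred shape would be `rstest` parameterised cases with `#[should_panic]` rather than `catch_unwind`.

```rust
/// Stand-in mirroring `LengthFormat::new`'s documented contract: panic when
/// the length-prefix width is not in `1..=8`.
const fn check_prefix_width(bytes: usize) -> usize {
    assert!(matches!(bytes, 1..=8), "invalid length-prefix width");
    bytes
}

fn main() {
    // Boundary values are accepted.
    assert_eq!(check_prefix_width(1), 1);
    assert_eq!(check_prefix_width(8), 8);

    // Out-of-range widths panic, as the doc comment promises.
    assert!(std::panic::catch_unwind(|| check_prefix_width(0)).is_err());
    assert!(std::panic::catch_unwind(|| check_prefix_width(9)).is_err());
}
```

In the real suite, the panic cases would become `#[rstest]` cases annotated `#[should_panic(expected = "invalid length-prefix width")]` against `LengthFormat::new` itself.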
Summary
Testing
- make fmt
- make lint
- make test
- make markdownlint

https://chatgpt.com/codex/tasks/task_e_68b0925ebe48832299603ca21dc5c28d
Summary by Sourcery
Remove the deprecated FrameProcessor infrastructure in favour of using tokio_util's LengthDelimitedCodec end-to-end, enrich length-prefix handling, and clean up obsolete APIs and docs.
New Features:
Bug Fixes:
Enhancements:
Documentation:
Tests:
Chores: