
Refactor LengthFormat helpers to remove duplication#162

Merged
leynos merged 14 commits into main from
codex/refactor-lengthformat-helpers-to-remove-duplication
Jul 4, 2025

Conversation

@leynos
Owner

@leynos leynos commented Jul 4, 2025

Summary

  • centralise prefix conversions in helper functions
  • use helper functions in read_len and write_len

Testing

  • make fmt
  • make lint
  • make test

https://chatgpt.com/codex/tasks/task_e_686720af025883228d307b8f7a9b61bd

Summary by Sourcery

Extract length prefix conversion into reusable helpers and refactor framing code to use them, adding comprehensive unit tests and refining test error messages.

Enhancements:

  • Centralize length prefix encoding and decoding by introducing bytes_to_u64 and u64_to_bytes helper functions
  • Refactor LengthFormat’s read_len and write_len methods to use the new helper functions and remove duplicated match logic
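A minimal sketch of how such a decoding helper might look (the names come from the PR description; the bounds checks, error messages, and exact structure here are illustrative assumptions, not the PR's actual code):

```rust
use std::io;

// Illustrative stand-in for the crate's endianness enum.
#[derive(Clone, Copy)]
enum Endianness {
    Big,
    Little,
}

/// Decode a `size`-byte length prefix into a `u64` (hypothetical sketch).
fn bytes_to_u64(bytes: &[u8], size: usize, endianness: Endianness) -> io::Result<u64> {
    // Reject unsupported prefix widths before touching the buffer.
    if !matches!(size, 1 | 2 | 4 | 8) {
        return Err(io::Error::new(
            io::ErrorKind::InvalidInput,
            "unsupported length prefix size",
        ));
    }
    // Guard against short input so the slicing below cannot panic.
    if bytes.len() < size {
        return Err(io::Error::new(
            io::ErrorKind::UnexpectedEof,
            "incomplete length prefix",
        ));
    }
    // Widen the prefix into an 8-byte buffer, aligned for the chosen endianness.
    let mut buf = [0u8; 8];
    match endianness {
        Endianness::Big => {
            buf[8 - size..].copy_from_slice(&bytes[..size]);
            Ok(u64::from_be_bytes(buf))
        }
        Endianness::Little => {
            buf[..size].copy_from_slice(&bytes[..size]);
            Ok(u64::from_le_bytes(buf))
        }
    }
}
```

Centralising this logic means `read_len` only needs to call the helper and convert the result, rather than repeating a `match` over every size/endianness pair.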

Tests:

  • Add unit tests for bytes_to_u64 and u64_to_bytes covering various sizes, endianness, and error cases
  • Update response tests to improve error expectation messages for send_response, encode, and decode failures

@sourcery-ai
Contributor

sourcery-ai Bot commented Jul 4, 2025

Reviewer's Guide

This PR refactors length-formatting by extracting prefix conversion logic into reusable helper functions with centralized error constants, updating read_len/write_len to delegate to them, and adding targeted unit tests while aligning existing test expectations.

Class diagram for LengthFormat and new helper functions

```mermaid
classDiagram
    class LengthFormat {
        - bytes: usize
        - endianness: Endianness
        + read_len(bytes: &[u8]) io::Result<usize>
        + write_len(len: usize, dst: &mut BytesMut) io::Result<()>
    }
    class Endianness {
        <<enum>>
        + Big
        + Little
    }
    class bytes_to_u64 {
        + bytes_to_u64(bytes: &[u8], size: usize, endianness: Endianness) io::Result<u64>
    }
    class u64_to_bytes {
        + u64_to_bytes(len: usize, size: usize, endianness: Endianness, out: &mut [u8; 8]) io::Result<usize>
    }
    LengthFormat --> Endianness
    LengthFormat ..> bytes_to_u64 : uses
    LengthFormat ..> u64_to_bytes : uses
```

Class diagram for error constant centralization

```mermaid
classDiagram
    class ERR_UNSUPPORTED_PREFIX {
        <<constant>>
        + ERR_UNSUPPORTED_PREFIX: &str
    }
    class ERR_FRAME_TOO_LARGE {
        <<constant>>
        + ERR_FRAME_TOO_LARGE: &str
    }
    bytes_to_u64 ..> ERR_UNSUPPORTED_PREFIX : uses
    u64_to_bytes ..> ERR_UNSUPPORTED_PREFIX : uses
    u64_to_bytes ..> ERR_FRAME_TOO_LARGE : uses
    LengthFormat ..> ERR_FRAME_TOO_LARGE : uses
```

File-Level Changes

Change Details Files
Extract and centralize length prefix conversion logic
  • Introduce bytes_to_u64 and u64_to_bytes helper functions
  • Define shared error message constants for unsupported prefixes and oversized frames
src/frame.rs
Refactor LengthFormat methods to use helpers
  • Replace inline serialization/deserialization branches in read_len with bytes_to_u64
  • Replace inline serialization/deserialization branches in write_len with u64_to_bytes and buffer slicing
src/frame.rs
Add unit tests for conversion helpers
  • Add parameterized tests for valid bytes_to_u64 and u64_to_bytes conversions
  • Add error-case tests for truncated prefixes and oversized frames
src/frame.rs
Align existing response tests with updated error messages
  • Adjust expect_err messages in tests/response.rs to reflect helper-based errors
tests/response.rs


@coderabbitai
Contributor

coderabbitai Bot commented Jul 4, 2025

Warning

Rate limit exceeded

@leynos has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 7 minutes and 8 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 7179eb7 and 940ca2a.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (3)
  • Cargo.toml (1 hunks)
  • src/frame.rs (5 hunks)
  • tests/response.rs (4 hunks)

Summary by CodeRabbit

  • Refactor

    • Improved internal logic for encoding and decoding length prefixes, resulting in more robust error handling and consistent behaviour.
  • Tests

    • Added comprehensive tests to verify length prefix handling across various sizes and endianness.
    • Enhanced clarity of test failure messages.
  • Documentation

    • Updated example code to use safer and more idiomatic methods for reading length prefixes, and added frame size validation for improved clarity.
  • Chores

    • Added the byteorder dependency to support improved byte handling.

Walkthrough

This update introduces two crate-private utility functions for converting between byte slices and u64 integers with support for various prefix sizes and endianness. The LengthFormat methods are refactored to use these utilities, and error handling is centralised. Parameterised tests are added to verify the new logic.

Changes

File(s) Change Summary
src/frame.rs Added bytes_to_u64 and u64_to_bytes utility functions; refactored LengthFormat methods to use them; introduced error message constants; added a new test module for conversions and error cases.
tests/response.rs Updated test error expectation messages for clarity and specificity without changing test logic.
Cargo.toml Added byteorder crate dependency version "1".
docs/rust-binary-router-library-design.md Reformatted example code to use byteorder crate for big-endian length prefix reading; improved indentation and clarity without changing logic.
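For reference, reading a big-endian length prefix as the updated documentation example does can be sketched in std-only Rust (the real example uses the `byteorder` crate; this hypothetical equivalent avoids that dependency):

```rust
/// Read a 4-byte big-endian length prefix from the front of a frame,
/// returning `None` when fewer than four bytes are available.
fn read_be_u32_prefix(frame: &[u8]) -> Option<u32> {
    // `get` avoids panicking on short input; `try_into` produces the
    // fixed-size array that `from_be_bytes` requires.
    let prefix: [u8; 4] = frame.get(..4)?.try_into().ok()?;
    Some(u32::from_be_bytes(prefix))
}
```

With `byteorder`, the same read would typically go through `ReadBytesExt::read_u32::<BigEndian>()` on any `Read` implementor.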

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Caller
    participant LengthFormat
    participant Utils

    Caller->>LengthFormat: read_len(bytes, endianness)
    LengthFormat->>Utils: bytes_to_u64(bytes, size, endianness)
    Utils-->>LengthFormat: Result<u64, io::Error>
    LengthFormat-->>Caller: Result<u64, io::Error>

    Caller->>LengthFormat: write_len(u64, endianness)
    LengthFormat->>Utils: u64_to_bytes(u64, size, endianness)
    Utils-->>LengthFormat: Result<[u8], io::Error>
    LengthFormat-->>Caller: Result<[u8], io::Error>
```

Possibly related PRs

  • Add configurable length prefixed framing #87: Refactored length prefix encoding/decoding by introducing utility functions for byte-to-u64 conversions and centralising error handling, building on the flexible length prefix framing infrastructure.
  • Test LengthFormat error handling #166: Expanded tests for LengthFormat error cases, complementing the utility function refactor with enhanced test coverage and error clarity.

Poem

A nibble, a byte, a prefix anew,
Now errors are clearer, and tests run through.
With endianness tamed and sizes in tow,
The framing’s precise—just watch it go!
🐇✨


Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey @leynos - I've reviewed your changes and found some issues that need to be addressed.

Blocking issues:

  • Check for sufficient length of 'bytes' before slicing to avoid panics. (link)

General comments:

  • Avoid panics in bytes_to_u64 by checking bytes.len() >= size and propagating a proper io::Error instead of using unwrap().
  • Consider eliminating the heap allocation in u64_to_bytes—e.g. write directly into a preallocated array or the destination buffer—to improve performance.
  • Extract repeated error messages like "unsupported length prefix size" and "frame too large" into shared constants to reduce duplication.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- Avoid panics in bytes_to_u64 by checking bytes.len() >= size and propagating a proper io::Error instead of using unwrap().
- Consider eliminating the heap allocation in u64_to_bytes—e.g. write directly into a preallocated array or the destination buffer—to improve performance.
- Extract repeated error messages like "unsupported length prefix size" and "frame too large" into shared constants to reduce duplication.

## Individual Comments

### Comment 1
<location> `src/frame.rs:152` </location>
<code_context>
-                ));
-            }
-        }
+        let prefix = u64_to_bytes(len, self.bytes, self.endianness)?;
+        dst.put_slice(&prefix);
         Ok(())
</code_context>

<issue_to_address>
Intermediate allocation in write_len could be avoided for performance.

Consider updating u64_to_bytes to write directly into the buffer or a mutable slice to avoid the extra Vec<u8> allocation and improve performance.

Suggested implementation:

```rust
        let mut prefix_buf = [0u8; 8];
        let prefix_len = u64_to_bytes(len, self.bytes, self.endianness, &mut prefix_buf)?;
        dst.put_slice(&prefix_buf[..prefix_len]);
        Ok(())

```

You will also need to update the definition of `u64_to_bytes` elsewhere in your codebase to have the following signature (and implementation):

```rust
fn u64_to_bytes(
    value: u64,
    num_bytes: usize,
    endianness: Endianness,
    out: &mut [u8; 8],
) -> Result<usize, SomeErrorType> {
    // Write the bytes directly into `out` according to `endianness` and `num_bytes`
    // Return the number of bytes written (should be `num_bytes`)
}
```

Replace `SomeErrorType` with your actual error type.

Update all other call sites of `u64_to_bytes` to use the new signature.
</issue_to_address>

### Comment 2
<location> `src/frame.rs:16` </location>
<code_context>
+///
+/// Returns an error if the size is unsupported.
+fn bytes_to_u64(bytes: &[u8], size: usize, endianness: Endianness) -> io::Result<u64> {
+    Ok(match (size, endianness) {
+        (1, _) => u64::from(u8::from_ne_bytes([bytes[0]])),
+        (2, Endianness::Big) => u64::from(u16::from_be_bytes(bytes[..2].try_into().unwrap())),
</code_context>

<issue_to_address>
Check for sufficient length of 'bytes' before slicing to avoid panics.

The function 'bytes_to_u64' slices 'bytes' without checking if it contains enough elements for the requested size, which can cause a panic if 'bytes' is too short. Add a check to ensure 'bytes' has at least 'size' elements before slicing.
</issue_to_address>

### Comment 3
<location> `src/frame.rs:15` </location>
<code_context>
+/// endianness.
+///
+/// Returns an error if the size is unsupported.
+fn bytes_to_u64(bytes: &[u8], size: usize, endianness: Endianness) -> io::Result<u64> {
+    Ok(match (size, endianness) {
+        (1, _) => u64::from(u8::from_ne_bytes([bytes[0]])),
</code_context>

<issue_to_address>
Document or handle the case where 'size' is not 1, 2, 4, or 8 explicitly.

The function only supports sizes 1, 2, 4, or 8, but this is not enforced at the function boundary. Consider documenting this in the function's docstring or adding a debug assertion to clarify the contract.
</issue_to_address>

### Comment 4
<location> `src/frame.rs:37` </location>
<code_context>
+/// endianness.
+///
+/// Returns an error if the integer does not fit into the requested size.
+fn u64_to_bytes(len: usize, size: usize, endianness: Endianness) -> io::Result<Vec<u8>> {
+    Ok(match (size, endianness) {
+        (1, _) => vec![
</code_context>

<issue_to_address>
Document or enforce that 'size' must be 1, 2, 4, or 8 for 'u64_to_bytes'.

The function only supports certain sizes, but this is not enforced at the function boundary. Consider documenting this requirement or adding a debug assertion to clarify the contract.
</issue_to_address>

### Comment 5
<location> `src/frame.rs:39` </location>
<code_context>
+/// Returns an error if the integer does not fit into the requested size.
+fn u64_to_bytes(len: usize, size: usize, endianness: Endianness) -> io::Result<Vec<u8>> {
+    Ok(match (size, endianness) {
+        (1, _) => vec![
+            u8::try_from(len)
+                .map_err(|_| io::Error::new(io::ErrorKind::InvalidInput, "frame too large"))?,
</code_context>

<issue_to_address>
`.unwrap()` is used instead of `.expect()` in `try_into().unwrap()`.

Please use `.expect("conversion should not fail: [context]")` instead of `.unwrap()` to provide better error context and avoid panics without explanation.
</issue_to_address>
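Comment 1's suggested buffer-writing signature can be fleshed out as a self-contained sketch, with `io::Error` standing in for `SomeErrorType` (the overflow check and error wording here are assumptions for illustration):

```rust
use std::io;

// Illustrative stand-in for the crate's endianness enum.
#[derive(Clone, Copy)]
enum Endianness {
    Big,
    Little,
}

/// Encode `len` into `out` as a `size`-byte prefix and return the number of
/// bytes written (hypothetical sketch of the suggested signature).
fn u64_to_bytes(
    len: usize,
    size: usize,
    endianness: Endianness,
    out: &mut [u8; 8],
) -> io::Result<usize> {
    if !matches!(size, 1 | 2 | 4 | 8) {
        return Err(io::Error::new(
            io::ErrorKind::InvalidInput,
            "unsupported length prefix size",
        ));
    }
    let value = len as u64;
    // Reject values that cannot be represented in `size` bytes.
    if size < 8 && value >= 1u64 << (size * 8) {
        return Err(io::Error::new(io::ErrorKind::InvalidInput, "frame too large"));
    }
    let full = match endianness {
        Endianness::Big => value.to_be_bytes(),
        Endianness::Little => value.to_le_bytes(),
    };
    // Copy only the significant bytes for the chosen endianness; big-endian
    // values sit at the end of the 8-byte form, little-endian at the start.
    match endianness {
        Endianness::Big => out[..size].copy_from_slice(&full[8 - size..]),
        Endianness::Little => out[..size].copy_from_slice(&full[..size]),
    }
    Ok(size)
}
```

Because the helper writes into a caller-supplied stack array, `write_len` can slice `&prefix_buf[..prefix_len]` straight into the destination buffer with no heap allocation.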

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

@leynos
Owner Author

leynos commented Jul 4, 2025

@sourcery-ai review

@leynos
Owner Author

leynos commented Jul 4, 2025

@coderabbitai review

@coderabbitai
Contributor

coderabbitai Bot commented Jul 4, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey @leynos - I've reviewed your changes and they look great!



@leynos
Owner Author

leynos commented Jul 4, 2025

@sourcery-ai resolve

@leynos
Owner Author

leynos commented Jul 4, 2025

@coderabbitai review

@coderabbitai
Contributor

coderabbitai Bot commented Jul 4, 2025

✅ Actions performed

Review triggered.


Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b2bc64c and c3f8193.

📒 Files selected for processing (1)
  • src/frame.rs (4 hunks)
🧰 Additional context used
📓 Path-based instructions (1)

**/*.rs: * Seek to keep the cyclomatic complexity of functions no more than 12.

  • Adhere to single responsibility and CQRS

  • Place function attributes after doc comments.

  • Do not use return in single-line functions.

  • Move conditionals with >2 branches into a predicate function.

  • Avoid unsafe unless absolutely necessary.

  • Every module must begin with a //! doc comment that explains the module's purpose and utility.

  • Comments must use en-GB-oxendict spelling and grammar.

  • Lints must not be silenced except as a last resort.

    • #[allow] is forbidden.
    • Only narrowly scoped #[expect(lint, reason = "...")] is allowed.
    • No lint groups, no blanket or file-wide suppression.
    • Include FIXME: with link if a fix is expected.
  • Use rstest fixtures for shared setup and to avoid repetition between tests.

  • Replace duplicated tests with #[rstest(...)] parameterised cases.

  • Prefer mockall for mocks/stubs.

  • Prefer .expect() over .unwrap()

  • Ensure that any API or behavioural changes are reflected in the documentation in docs/

  • Ensure that any completed roadmap steps are recorded in the appropriate roadmap in docs/

⚙️ Source: CodeRabbit Configuration File

List of files the instruction was applied to:

  • src/frame.rs
🔇 Additional comments (7)
src/frame.rs (7)

7-8: Import additions look appropriate.

The added imports for TryInto and io are necessary for the new conversion functions and error handling.


9-10: Good practice centralising error messages.

Using constants for error messages improves maintainability and consistency across the codebase.


22-58: Well-implemented utility function with proper validation.

The bytes_to_u64 function correctly addresses the input validation concerns from previous reviews by checking buffer length before slicing. The error handling uses appropriate io::ErrorKind variants and the logic covers all supported size/endianness combinations.


67-129: Efficient implementation avoiding intermediate allocations.

The u64_to_bytes function addresses the performance concern from previous reviews by writing directly to the provided buffer, avoiding the Vec<u8> allocation that was previously flagged. The error handling properly validates size constraints and value ranges.


190-191: Clean refactoring eliminates code duplication.

The refactored read_len method now delegates to the centralised bytes_to_u64 function, reducing code duplication whilst maintaining the same functionality.


207-210: Efficient buffer usage avoids allocations.

The refactored write_len method now uses a stack-allocated buffer and writes directly to it, addressing the performance concern from previous reviews about intermediate allocations.


336-393: Comprehensive parameterised tests following best practices.

The test module correctly uses rstest for parameterised test cases as specified in the coding guidelines. The tests cover both success scenarios (various sizes and endianness) and error conditions (insufficient input, oversized values), providing robust coverage of the new utility functions.

@leynos
Owner Author

leynos commented Jul 4, 2025

@coderabbitai resolve

@coderabbitai
Contributor

coderabbitai Bot commented Jul 4, 2025

✅ Actions performed

Comments resolved and changes approved.

@leynos
Owner Author

leynos commented Jul 4, 2025

@coderabbitai review

@coderabbitai
Contributor

coderabbitai Bot commented Jul 4, 2025

✅ Actions performed

Review triggered.


@leynos
Owner Author

leynos commented Jul 4, 2025

@sourcery-ai review

Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey @leynos - I've reviewed your changes and they look great!



Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (1)
src/frame.rs (1)

22-30: Validate prefix size before checking buffer length

Returning UnexpectedEof for an unsupported prefix size hides the real configuration error.
Reject an invalid size early so the caller always receives InvalidInput, regardless of how many bytes are present.

```diff
+    if !matches!(size, 1 | 2 | 4 | 8) {
+        return Err(io::Error::new(
+            io::ErrorKind::InvalidInput,
+            ERR_UNSUPPORTED_PREFIX,
+        ));
+    }
     if bytes.len() < size {
         return Err(io::Error::new(
             io::ErrorKind::UnexpectedEof,
             "incomplete length prefix",
         ));
```
📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c3f8193 and dcfe939.

📒 Files selected for processing (2)
  • src/frame.rs (4 hunks)
  • tests/response.rs (4 hunks)
🧰 Additional context used
📓 Path-based instructions (1)

**/*.rs: * Seek to keep the cyclomatic complexity of functions no more than 12.

  • Adhere to single responsibility and CQRS

  • Place function attributes after doc comments.

  • Do not use return in single-line functions.

  • Move conditionals with >2 branches into a predicate function.

  • Avoid unsafe unless absolutely necessary.

  • Every module must begin with a //! doc comment that explains the module's purpose and utility.

  • Comments must use en-GB-oxendict spelling and grammar.

  • Lints must not be silenced except as a last resort.

    • #[allow] is forbidden.
    • Only narrowly scoped #[expect(lint, reason = "...")] is allowed.
    • No lint groups, no blanket or file-wide suppression.
    • Include FIXME: with link if a fix is expected.
  • Use rstest fixtures for shared setup and to avoid repetition between tests.

  • Replace duplicated tests with #[rstest(...)] parameterised cases.

  • Prefer mockall for mocks/stubs.

  • Prefer .expect() over .unwrap()

  • Ensure that any API or behavioural changes are reflected in the documentation in docs/

  • Ensure that any completed roadmap steps are recorded in the appropriate roadmap in docs/

⚙️ Source: CodeRabbit Configuration File

List of files the instruction was applied to:

  • tests/response.rs
  • src/frame.rs
🔇 Additional comments (1)
tests/response.rs (1)

130-136: Improved expectation messages look good

The new, specific .expect_err() strings make test failures far easier to interpret. 👍

@coderabbitai
Contributor

coderabbitai Bot commented Jul 4, 2025

✅ Actions performed

Review triggered.


@leynos
Owner Author

leynos commented Jul 4, 2025

@coderabbitai review

@coderabbitai
Contributor

coderabbitai Bot commented Jul 4, 2025

✅ Actions performed

Review triggered.


Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

♻️ Duplicate comments (4)
src/frame.rs (4)

15-17: Consider a more descriptive function name.

The generic name cast doesn't convey that this performs a checked downcast with overflow mapping to InvalidInput. A name like checked_prefix_cast would be clearer.


32-32: Remove ignore flag from doc test.

The ignore flag prevents compilation; no_run alone is sufficient for examples that shouldn't execute but should compile.

-/// ```rust,no_run,ignore
+/// ```rust,no_run

75-75: Remove ignore flag from doc test.

Same issue as in bytes_to_u64 - the ignore flag prevents compilation.

-/// ```rust,no_run,ignore
+/// ```rust,no_run

118-118: Consider if zero-filling is necessary.

Since write_len creates a fresh [u8; 8] buffer each time, there's no risk of stale data. The zero-filling adds overhead that may be unnecessary.

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between de0cb4a and bfcd5de.

⛔ Files ignored due to path filters (1)
  • Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (3)
  • Cargo.toml (1 hunks)
  • docs/rust-binary-router-library-design.md (1 hunks)
  • src/frame.rs (4 hunks)
🧰 Additional context used
📓 Path-based instructions (10)

Cargo.toml: Use explicit version ranges in Cargo.toml and keep dependencies up-to-date.

📄 Source: CodeRabbit Inference Engine (AGENTS.md)

List of files the instruction was applied to:

  • Cargo.toml

docs/**/*.md: Documentation must use en-GB-oxendict spelling and grammar (with the exception of "license" which is to be left unchanged for community consistency).

📄 Source: CodeRabbit Inference Engine (AGENTS.md)

List of files the instruction was applied to:

  • docs/rust-binary-router-library-design.md

**/*.md: Validate Markdown files using markdownlint *.md **/*.md.
Run mdformat-all after any documentation changes to format all Markdown files and fix table markup.
Validate Markdown Mermaid diagrams using the nixie CLI. The tool is already installed; run nixie *.md **/*.md directly instead of using npx.
Markdown paragraphs and bullet points must be wrapped at 80 columns.
Code blocks must be wrapped at 120 columns.
Tables and headings must not be wrapped.

📄 Source: CodeRabbit Inference Engine (AGENTS.md)

List of files the instruction was applied to:

  • docs/rust-binary-router-library-design.md
`docs/**/*.md`: Provide user guides and examples demonstrating server-initiated messaging.

📄 Source: CodeRabbit Inference Engine (docs/asynchronous-outbound-messaging-roadmap.md)

List of files the instruction was applied to:

  • docs/rust-binary-router-library-design.md
`docs/**/*.md`: Conventions for writing project documentation should follow the rules outlined in the documentation style guide.

📄 Source: CodeRabbit Inference Engine (docs/contents.md)

List of files the instruction was applied to:

  • docs/rust-binary-router-library-design.md
docs/**/*.md: Use British English based on the Oxford English Dictionary (en-oxendict) for documentation.
The word "outwith" is acceptable in documentation.
Keep US spelling when used in an API, for example color.
Use the Oxford comma in documentation.
Company names are treated as collective nouns (e.g., "Lille Industries are expanding").
Write headings in sentence case in documentation.
Use Markdown headings (#, ##, ###, etc.) in order without skipping levels.
Follow markdownlint recommendations for Markdown files.
Provide code blocks and lists using standard Markdown syntax.
Always use fenced code blocks with a language identifier; use plaintext for non-code text.
Use - as the first level bullet and renumber lists when items change.
Prefer inline links using [text](url) or angle brackets around the URL in Markdown.
Expand any uncommon acronym on first use, for example, Continuous Integration (CI).
Wrap paragraphs at 80 columns in documentation.
Wrap code at 120 columns in documentation.
Do not wrap tables in documentation.
Use footnotes referenced with [^label] in documentation.
Include Mermaid diagrams in documentation where it adds clarity.
When embedding figures in documentation, use ![alt text](path/to/image) and provide concise alt text describing the content.
Add a short description before each Mermaid diagram in documentation so screen readers can understand it.

📄 Source: CodeRabbit Inference Engine (docs/documentation-style-guide.md)

List of files the instruction was applied to:

  • docs/rust-binary-router-library-design.md
docs/**/*.md: Write the official documentation for the new features. Create separate guides for "Duplex Messaging & Pushes", "Streaming Responses", and "Message Fragmentation". Each guide must include runnable examples and explain the relevant concepts and APIs.

📄 Source: CodeRabbit Inference Engine (docs/wireframe-1-0-detailed-development-roadmap.md)

List of files the instruction was applied to:

  • docs/rust-binary-router-library-design.md
**/*.md: * Avoid 2nd person or 1st person pronouns ("I", "you", "we")

  • Use en-oxendict spelling and grammar.
  • Paragraphs and bullets must be wrapped to 80 columns, except where a long URL would prevent this (in which case, silence MD013 for that line)
  • Code blocks should be wrapped to 120 columns.
  • Headings must not be wrapped.

⚙️ Source: CodeRabbit Configuration File

List of files the instruction was applied to:

  • docs/rust-binary-router-library-design.md
**/*.rs: Comment why, not what. Explain assumptions, edge cases, trade-offs, or complexity. Don't echo the obvious.
Comments must use en-GB-oxendict spelling and grammar.
Function documentation must include clear examples.
Every module must begin with a module level (//!) comment explaining the module's purpose and utility.
Document public APIs using Rustdoc comments (///) so documentation can be generated with cargo doc.
Place function attributes after doc comments.
Do not use return in single-line functions.
Use predicate functions for conditional criteria with more than two branches.
Lints must not be silenced except as a last resort.
Lint rule suppressions must be tightly scoped and include a clear reason.
Prefer expect over allow.
Prefer .expect() over .unwrap().
Clippy warnings MUST be disallowed.
Fix any warnings emitted during tests in the code itself rather than silencing them.
Where a function is too long, extract meaningfully named helper functions adhering to separation of concerns and CQRS.
Where a function has too many parameters, group related parameters in meaningfully named structs.
Where a function is returning a large error consider using Arc to reduce the amount of data returned.
Write unit and behavioural tests for new functionality. Run both before and after making any change.
Prefer immutable data and avoid unnecessary mut bindings.
Handle errors with the Result type instead of panicking where feasible.
Avoid unsafe code unless absolutely necessary and document any usage clearly.

📄 Source: CodeRabbit Inference Engine (AGENTS.md)

List of files the instruction was applied to:

  • src/frame.rs
**/*.rs: * Seek to keep the cyclomatic complexity of functions no more than 12.

  • Adhere to single responsibility and CQRS

  • Place function attributes after doc comments.

  • Do not use return in single-line functions.

  • Move conditionals with >2 branches into a predicate function.

  • Avoid unsafe unless absolutely necessary.

  • Every module must begin with a //! doc comment that explains the module's purpose and utility.

  • Comments must use en-GB-oxendict spelling and grammar.

  • Lints must not be silenced except as a last resort.

    • #[allow] is forbidden.
    • Only narrowly scoped #[expect(lint, reason = "...")] is allowed.
    • No lint groups, no blanket or file-wide suppression.
    • Include FIXME: with link if a fix is expected.
  • Use rstest fixtures for shared setup and to avoid repetition between tests.

  • Replace duplicated tests with #[rstest(...)] parameterised cases.

  • Prefer mockall for mocks/stubs.

  • Prefer .expect() over .unwrap()

  • Ensure that any API or behavioural changes are reflected in the documentation in docs/

  • Ensure that any completed roadmap steps are recorded in the appropriate roadmap in docs/

⚙️ Source: CodeRabbit Configuration File

List of files the instruction was applied to:

  • src/frame.rs
🔇 Additional comments (6)
src/frame.rs (6)

9-9: LGTM: Import aligns with refactoring goals.

The byteorder import supports the cleaner implementation using ReadBytesExt methods.


11-13: Well done centralizing error messages.

This prevents inconsistencies and makes error handling more maintainable.


52-61: Excellent use of unreachable!() macro.

The early size validation makes this match arm unreachable, and the macro clearly documents this invariant.


182-185: Excellent refactoring to use centralized helpers.

The delegation to bytes_to_u64 eliminates duplication and improves maintainability.


200-203: Clean implementation using the helper function.

The refactoring successfully reduces code duplication whilst maintaining the same functionality.


329-423: Comprehensive test coverage using rstest.

The parameterized tests cover all supported sizes, both endianness variants, and error conditions. This provides robust validation of the refactored functions.

Comment thread Cargo.toml
Comment thread docs/rust-binary-router-library-design.md
Comment on lines +1139 to 1145
fn decode(&mut self, src: &mut BytesMut) -> Result<Option<Self::Item>, Self::Error> {
if src.len() < 4 { return Ok(None); } // Not enough data for length prefix

let mut length_bytes = [0u8; 4];
length_bytes.copy_from_slice(&src[..4]);
let length = u32::from_be_bytes(length_bytes) as usize;
let length = (&src[..4])
.read_u32::<BigEndian>()
.expect("slice length checked") as usize;

Contributor

@coderabbitai coderabbitai Bot Jul 4, 2025


🛠️ Refactor suggestion

Avoid panicking in the decoder example.

Using expect in library-style examples is brittle; a malformed or truncated prefix will unwind the entire task. Propagate the io::Error instead.

-    let length = (&src[..4])
-        .read_u32::<BigEndian>()
-        .expect("slice length checked") as usize;
+    let length = (&src[..4])
+        .read_u32::<BigEndian>()
+        .map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))? as usize;
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
fn decode(&mut self, src: &mut BytesMut) -> Result<Option<Self::Item>, Self::Error> {
if src.len() < 4 { return Ok(None); } // Not enough data for length prefix
let mut length_bytes = [0u8; 4];
length_bytes.copy_from_slice(&src[..4]);
let length = u32::from_be_bytes(length_bytes) as usize;
let length = (&src[..4])
.read_u32::<BigEndian>()
.expect("slice length checked") as usize;
fn decode(&mut self, src: &mut BytesMut) -> Result<Option<Self::Item>, Self::Error> {
if src.len() < 4 { return Ok(None); } // Not enough data for length prefix
let length = (&src[..4])
.read_u32::<BigEndian>()
.map_err(|e| io::Error::new(io::ErrorKind::InvalidData, e))? as usize;
🤖 Prompt for AI Agents
In docs/rust-binary-router-library-design.md around lines 1139 to 1145, the
decode function uses expect to read the length prefix, which can cause a panic
on malformed input. Replace the expect call with proper error propagation by
returning the io::Error from the read_u32 operation instead of panicking,
ensuring the decoder gracefully handles malformed or truncated prefixes.

Owner Author


@coderabbitai new GitHub issue for this, please

Contributor


🧩 Analysis chain

@leynos I'll create a GitHub issue to track this improvement.


🏁 Script executed:

#!/bin/bash
gh issue create \
  --title "Fix decoder example to avoid panicking on malformed input" \
  --body "The decoder example in the design documentation uses \`.expect()\` when reading the length prefix, which can cause the entire task to panic on malformed or truncated input.

The example should demonstrate proper error handling by propagating the \`io::Error\` instead of panicking.

**Location:** \`docs/rust-binary-router-library-design.md\` around lines 1139-1145

**Context:** 
- PR: https://github.com/leynos/wireframe/pull/162
- Comment: https://github.com/leynos/wireframe/pull/162#discussion_r2186125245
- Requested by: @leynos

The documentation should serve as a good example of robust error handling practices rather than demonstrating brittle code that could fail unexpectedly." \
  --assignee leynos

Length of output: 842


Perfect! I've created GitHub issue #173 to track this improvement: #173

The issue documents the need to fix the decoder example to use proper error handling instead of .expect(), which could cause panics on malformed input. It's assigned to you and includes backlinks to this PR and comment for context.

Comment on lines +1159 to +1164
fn encode(&mut self, item: T, dst: &mut BytesMut) -> Result<(), Self::Error> {
let data = item.as_ref();
dst.reserve(4 + data.len());
dst.put_u32(data.len() as u32);
dst.put_slice(data);
Ok(())
Contributor

@coderabbitai coderabbitai Bot Jul 4, 2025


🧹 Nitpick (assertive)

Encode path silently truncates frames > 4 GiB.

data.len() as u32 wraps for payloads above u32::MAX, producing corrupt output. At minimum, validate length and return an error.

-    dst.put_u32(data.len() as u32);
+    if data.len() > u32::MAX as usize {
+        return Err(io::Error::new(
+            io::ErrorKind::InvalidInput,
+            "payload exceeds 4 GiB limit",
+        ));
+    }
+    dst.put_u32(data.len() as u32);
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
fn encode(&mut self, item: T, dst: &mut BytesMut) -> Result<(), Self::Error> {
let data = item.as_ref();
dst.reserve(4 + data.len());
dst.put_u32(data.len() as u32);
dst.put_slice(data);
Ok(())
fn encode(&mut self, item: T, dst: &mut BytesMut) -> Result<(), Self::Error> {
let data = item.as_ref();
dst.reserve(4 + data.len());
if data.len() > u32::MAX as usize {
return Err(io::Error::new(
io::ErrorKind::InvalidInput,
"payload exceeds 4 GiB limit",
));
}
dst.put_u32(data.len() as u32);
dst.put_slice(data);
Ok(())
🤖 Prompt for AI Agents
In docs/rust-binary-router-library-design.md around lines 1159 to 1164, the
encode function casts data length to u32 without checking if it exceeds
u32::MAX, causing silent truncation for frames larger than 4 GiB. Fix this by
adding a validation step before casting: if data.len() is greater than u32::MAX,
return an error instead of proceeding. This prevents corrupt output by ensuring
only valid frame sizes are encoded.

Owner Author


@coderabbitai new GitHub issue for this, please

Contributor


🧩 Analysis chain

I'll create a GitHub issue for the documentation bug you've identified.

@leynos I've created a GitHub issue to track the documentation bug regarding the potential overflow in the frame encoder example. The issue includes details about the problem, the suggested fix, and references back to this PR and comment.


🏁 Script executed:

#!/bin/bash
# Create GitHub issue for the documentation overflow bug
gh issue create \
  --title "Documentation: Frame encoder silently truncates payloads > 4 GiB" \
  --body "## Description

The example code in \`docs/rust-binary-router-library-design.md\` for the \`LengthPrefixedCodec\` encoder has a potential overflow bug that could cause silent truncation for large payloads.

## Problem

In the \`encode\` function, the line:
\`\`\`rust
dst.put_u32(data.len() as u32);
\`\`\`

Casts \`data.len()\` to \`u32\` without validation. For payloads larger than 4 GiB (\`u32::MAX\`), this will wrap around and produce corrupt output with an incorrect length prefix.

## Solution

Add validation before the cast:
\`\`\`rust
if data.len() > u32::MAX as usize {
    return Err(io::Error::new(
        io::ErrorKind::InvalidInput,
        \"payload exceeds 4 GiB limit\",
    ));
}
dst.put_u32(data.len() as u32);
\`\`\`

## Context

- **PR**: https://github.com/leynos/wireframe/pull/162
- **Comment**: https://github.com/leynos/wireframe/pull/162#discussion_r2186125248
- **Requested by**: @leynos

While this is example code in documentation rather than production code, it's important to demonstrate best practices to avoid misleading developers who might copy this pattern." \
  --assignee leynos

Length of output: 1241


Perfect! I've successfully created GitHub issue #174 for the documentation bug. You can view it here: #174

The issue includes:

  • Clear description of the overflow problem in the example code
  • The specific problematic line and suggested fix
  • Links back to this PR and comment thread
  • Assignment to you as requested

This will help track the need to fix the documentation example to demonstrate proper bounds checking for large payload sizes.

@leynos
Owner Author

leynos commented Jul 4, 2025

@coderabbitai resume

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

♻️ Duplicate comments (2)
docs/rust-binary-router-library-design.md (1)

1142-1146: expect will still panic – propagate the I/O error instead
The example keeps read_u32::<BigEndian>().expect("slice length checked"), which will unwind the entire task on malformed or truncated prefixes. The earlier review/issue already requested swapping this for proper error-propagation (map_err(...)).

Please update the snippet (or mark it as intentionally simplified) to avoid showing brittle patterns in the documentation.
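A standard-library-only sketch of the propagation being requested, using `u32::from_be_bytes` instead of the `byteorder` extension traits purely for illustration (the function name is hypothetical):

```rust
use std::io;

/// Illustrative only: read a 4-byte big-endian length prefix, returning an
/// `io::Error` on truncation rather than panicking via `expect`.
fn read_length_prefix(src: &[u8]) -> io::Result<usize> {
    let bytes: [u8; 4] = src
        .get(..4)
        .and_then(|s| s.try_into().ok())
        .ok_or_else(|| io::Error::new(io::ErrorKind::UnexpectedEof, "length prefix truncated"))?;
    Ok(u32::from_be_bytes(bytes) as usize)
}
```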

src/frame.rs (1)

204-207: Avoid the temporary 8-byte buffer in hot encode path
write_len allocates a [u8; 8] then copies into dst. Writing directly into dst (reserve, put_slice each field) removes one copy and one stack allocation:

-let mut buf = [0u8; 8];
-let written = u64_to_bytes(len, self.bytes, self.endianness, &mut buf)?;
-dst.put_slice(&buf[..written]);
+let mut tmp = [0u8; 8];
+let written = u64_to_bytes(len, self.bytes, self.endianness, &mut tmp)?;
+dst.put_slice(&tmp[..written]);

…or pass a mutable slice pointing straight into dst.

This was noted by Sourcery earlier and still stands as a micro-optimisation.
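A standard-library sketch of the direct-write variant, with `Vec<u8>` standing in for `BytesMut` and a simplified big-endian-only encoding; the real `u64_to_bytes` signature and endianness handling may differ:

```rust
use std::io;

/// Illustrative only: encode the length prefix straight into the output
/// buffer, skipping the intermediate `[u8; 8]` copy described above.
/// Overflow checking of `len` against the width is omitted for brevity.
fn write_len_direct(len: u64, width: usize, dst: &mut Vec<u8>) -> io::Result<()> {
    if width == 0 || width > 8 {
        return Err(io::Error::new(io::ErrorKind::InvalidInput, "unsupported prefix width"));
    }
    let start = dst.len();
    dst.resize(start + width, 0);
    // Take the low `width` bytes of the big-endian representation.
    dst[start..].copy_from_slice(&len.to_be_bytes()[8 - width..]);
    Ok(())
}
```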

📜 Review details

Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between bfcd5de and 7179eb7.

📒 Files selected for processing (2)
  • docs/rust-binary-router-library-design.md (1 hunks)
  • src/frame.rs (4 hunks)

Comment thread src/frame.rs
Comment on lines +54 to +67
let mut cur = io::Cursor::new(&bytes[..size]);
let val = match (size, endianness) {
(1, _) => cur.read_u8().map(u64::from),
(2, Endianness::Big) => cur.read_u16::<BigEndian>().map(u64::from),
(2, Endianness::Little) => cur.read_u16::<LittleEndian>().map(u64::from),
(4, Endianness::Big) => cur.read_u32::<BigEndian>().map(u64::from),
(4, Endianness::Little) => cur.read_u32::<LittleEndian>().map(u64::from),
(8, Endianness::Big) => cur.read_u64::<BigEndian>(),
(8, Endianness::Little) => cur.read_u64::<LittleEndian>(),
// size is validated above so this branch is unreachable
_ => unreachable!(),
}?;
Ok(val)
}
Contributor

@coderabbitai coderabbitai Bot Jul 4, 2025


🧹 Nitpick (assertive)

io::Cursor is overkill here
ReadBytesExt requires wrapping the &[u8] in std::io::Cursor, and constructing a cursor for every call creates a non-trivial object even though it avoids heap allocation. A plain match on the slice plus from_be_bytes/from_le_bytes avoids the cursor altogether and shaves a tiny amount off the hot path.

Not critical, yet worth keeping in mind if this helper sits on the critical decode path.
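A standard-library sketch of the cursor-free alternative, assuming the caller has already validated the width as in the original helper (the function name is illustrative):

```rust
/// Illustrative only: decode a validated 1/2/4/8-byte prefix by matching on
/// the slice length and using `from_be_bytes`/`from_le_bytes` directly,
/// with no intermediate `io::Cursor`.
fn bytes_to_u64_direct(bytes: &[u8], big_endian: bool) -> u64 {
    match (bytes.len(), big_endian) {
        (1, _) => u64::from(bytes[0]),
        (2, true) => u64::from(u16::from_be_bytes([bytes[0], bytes[1]])),
        (2, false) => u64::from(u16::from_le_bytes([bytes[0], bytes[1]])),
        (4, true) => u64::from(u32::from_be_bytes(bytes.try_into().expect("length checked"))),
        (4, false) => u64::from(u32::from_le_bytes(bytes.try_into().expect("length checked"))),
        (8, true) => u64::from_be_bytes(bytes.try_into().expect("length checked")),
        (8, false) => u64::from_le_bytes(bytes.try_into().expect("length checked")),
        // The caller validates the width, mirroring the original helper.
        _ => unreachable!("size validated by caller"),
    }
}
```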

🤖 Prompt for AI Agents
In src/frame.rs around lines 54 to 67, the current code uses io::Cursor to read
bytes with ReadBytesExt, which is unnecessary overhead. Replace the cursor usage
by directly matching on the byte slice and use from_be_bytes or from_le_bytes
methods on fixed-size arrays to convert bytes to integers according to
endianness. This avoids creating a Cursor object and improves performance on the
critical decode path.

Owner Author


New GitHub issue for this, please

Contributor


🧩 Analysis chain

I'll create a GitHub issue to track this performance optimization suggestion.

@leynos I've created a GitHub issue to track the performance optimization for replacing io::Cursor with direct byte slice matching in the bytes_to_u64 function. The issue includes the context from the code review and has been assigned to you.


🏁 Script executed:

#!/bin/bash
gh issue create \
  --title "Performance: Replace io::Cursor with direct byte slice matching in bytes_to_u64" \
  --body "## Description

The current implementation of \`bytes_to_u64\` in \`src/frame.rs\` uses \`io::Cursor\` with \`ReadBytesExt\` to read bytes, which adds unnecessary overhead for every call. This can be optimized by directly matching on the byte slice and using \`from_be_bytes\`/\`from_le_bytes\` methods on fixed-size arrays.

## Current Implementation

The function currently creates an \`io::Cursor\` object for each call:
\`\`\`rust
let mut cur = io::Cursor::new(&bytes[..size]);
let val = match (size, endianness) {
    (1, _) => cur.read_u8().map(u64::from),
    (2, Endianness::Big) => cur.read_u16::<BigEndian>().map(u64::from),
    // ... etc
}?;
\`\`\`

## Proposed Optimization

Replace the cursor usage by directly matching on the byte slice and use \`from_be_bytes\`/\`from_le_bytes\` methods to convert bytes to integers according to endianness. This avoids creating a Cursor object and improves performance on the critical decode path.

## Context

This optimization was identified during code review of PR #162 which refactored LengthFormat helpers to remove duplication.

## References

- PR: https://github.com/leynos/wireframe/pull/162
- Review comment: https://github.com/leynos/wireframe/pull/162#discussion_r2186146839

Requested by: @leynos" \
  --assignee leynos

Length of output: 1419


Perfect! I've successfully created GitHub issue #179 to track the performance optimization suggestion. The issue has been assigned to you and includes:

  • A clear description of the current io::Cursor usage in bytes_to_u64
  • The proposed optimization using direct byte slice matching with from_be_bytes/from_le_bytes
  • Context from the code review
  • Backlinks to the original PR and comment

You can view the issue at: #179

The issue is now ready for future implementation when you have time to tackle this performance optimization.


🧬 Code Graph Analysis Results

Relevant snippets from src/serializer.rs (lines 54-56)

fn parse(&self, src: &[u8]) -> Result<(Self::Frame, usize), Self::Error> {
        crate::app::Envelope::from_bytes(src)
    }
  • This snippet shows a parse method that attempts to parse a frame from a byte slice, returning the frame and the number of bytes consumed. It is relevant because FrameMetadata trait in src/frame.rs defines a similar parse method.

Relevant snippets from examples/metadata_routing.rs (lines 42-56)

fn parse(&self, src: &[u8]) -> Result<(Envelope, usize), io::Error> {
    if src.len() < 3 {
        return Err(io::Error::new(io::ErrorKind::UnexpectedEof, "header"));
    }
    let id = u32::from(u16::from_be_bytes([src[0], src[1]]));
    // The third byte carries message flags. This example intentionally
    // ignores the flags, but a real protocol might parse and act on these
    // bits.
    let _ = src[2];
    let payload = src[3..].to_vec();
    // `parse` receives the complete frame because `LengthPrefixedProcessor`
    // ensures `src` contains exactly one message. Returning `src.len()` is
    // therefore correct for this demo.
    Ok((Envelope::new(id, payload), src.len()))
}
  • This is an example implementation of a parse method for frame metadata, showing how to extract header fields and payload from a byte slice. It is relevant to understanding how frame metadata parsing might be implemented in conjunction with the FrameMetadata trait in src/frame.rs.

Relevant snippets from src/middleware.rs (lines 20-20 and 44-44)

pub fn new(frame: Vec<u8>) -> Self { Self { frame } }
  • These snippets show constructors for types wrapping a frame represented as a Vec<u8>. This is relevant because LengthPrefixedProcessor in src/frame.rs uses Vec<u8> as its frame type.

Relevant snippets from src/app.rs (lines 230-242)

fn default() -> Self {
    Self {
        routes: HashMap::new(),
        services: Vec::new(),
        middleware: Vec::new(),
        frame_processor: Box::new(LengthPrefixedProcessor::new(LengthFormat::default())),
        serializer: S::default(),
        app_data: HashMap::new(),
        on_connect: None,
        on_disconnect: None,
        protocol: None,
    }
}
  • This snippet shows a default initialization of an application struct that uses LengthPrefixedProcessor as its frame processor. It is relevant for understanding how LengthPrefixedProcessor is used in the broader application context.

Relevant snippets from src/rewind_stream.rs (lines 25-31)

pub fn new(leftover: Vec<u8>, inner: S) -> Self {
    Self {
        leftover,
        pos: 0,
        inner,
    }
}
  • This snippet shows a constructor for a type managing leftover bytes and an inner stream. It may be relevant for understanding buffering or framing strategies related to FrameProcessor implementations.

Relevant snippets from src/extractor.rs (lines 113-121)

pub fn new(inner: Arc<T>) -> Self { Self(inner) }

fn from(inner: Arc<T>) -> Self { Self(inner) }

fn from(inner: T) -> Self { Self(Arc::new(inner)) }
  • These snippets show constructors and conversions involving Arc wrappers. They may be relevant if frame processors or metadata parsers are wrapped or shared via Arc.

Relevant snippets from tests/metadata.rs (lines 51-54 and 95-98)

fn parse(&self, src: &[u8]) -> Result<(Self::Frame, usize), Self::Error> {
    self.0.fetch_add(1, Ordering::SeqCst);
    BincodeSerializer.parse(src)
}

fn parse(&self, _src: &[u8]) -> Result<(Self::Frame, usize), Self::Error> {
    self.0.fetch_add(1, Ordering::SeqCst);
    Err(bincode::error::DecodeError::OtherString("fail".into()))
}
  • These snippets show test implementations of parse methods for frame metadata, including success and failure cases. They are relevant for testing or understanding error handling in frame metadata parsing.

No other snippets appear directly relevant to the src/frame.rs file or the user comment about creating a new GitHub issue related to this file.

Comment thread src/frame.rs
Comment thread src/frame.rs
Comment thread src/frame.rs
@coderabbitai
Contributor

coderabbitai Bot commented Jul 4, 2025

✅ Actions performed

Reviews resumed.

@leynos leynos merged commit 3a82193 into main Jul 4, 2025
5 checks passed
@leynos leynos deleted the codex/refactor-lengthformat-helpers-to-remove-duplication branch July 4, 2025 22:33