
Add slow reader/writer back-pressure testkit, tests, and docs #495

Merged
leynos merged 20 commits into main from add-slow-reader-writer-simulation-p8g9fg on Mar 11, 2026

Conversation

@leynos
Owner

@leynos leynos commented Mar 5, 2026

Summary

  • Add slow-I/O back-pressure testkit as first-class helpers in wireframe_testing, including public API surface for paced I/O (SlowIoPacing, SlowIoConfig), a core slow-I/O driver, and accompanying tests and docs (ExecPlan item 8.5.2).
  • Introduce integration and behavioural tests, fixtures, and feature/scenario wiring to validate deterministic back-pressure behaviour under paused Tokio time.
  • Update related docs and roadmap to reflect the new capabilities and completed item.

Changes

  • Added docs/execplans/8-5-2-slow-reader-and-writer-simulation.md containing the full execution plan for roadmap item 8.5.2.
  • Implemented slow reader and slow writer simulation in wireframe_testing:
    • New slow_io helper module with SlowIoPacing- and SlowIoConfig-driven pacing for duplex I/O.
    • Public API surface additions: drive_with_slow_frames, drive_with_slow_payloads, drive_with_slow_codec_payloads, drive_with_slow_codec_frames.
    • Updated wireframe_testing/lib.rs to re-export new helpers; updated helpers.rs to expose new SlowIo types.
  • Added integration and behavioural tests for slow-I/O back-pressure:
    • tests/slow_io_backpressure.rs (integration tests)
    • tests/features/slow_io_backpressure.feature
    • tests/fixtures/slow_io_backpressure.rs
    • tests/steps/slow_io_backpressure_steps.rs
    • tests/scenarios/slow_io_backpressure_scenarios.rs
    • tests/scenarios/mod.rs and tests/fixtures/mod.rs updated accordingly
  • Documentation updates to reflect new testkit capabilities and roadmap state:
    • docs/adr-002-streaming-requests-and-shared-message-assembly.md: Implementation note for 8.5.2 added
    • docs/wireframe-testing-crate.md: Slow-I/O drivers section added
    • docs/users-guide.md: Added section on slow-reader/slow-writer simulation with API and usage examples
    • docs/roadmap.md: Marked 8.5.2 as done
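
The pacing model behind these helpers splits a byte stream into fixed-size chunks and sleeps between them. A minimal std-only sketch of the chunking step follows; the `SlowIoPacing` field names are taken from this PR, but the implementation here is an illustrative assumption, not the crate's actual code:

```rust
use std::num::NonZeroUsize;
use std::time::Duration;

/// Pacing parameters, mirroring the `SlowIoPacing` described in this PR
/// (illustrative sketch only; the real type lives in wireframe_testing).
struct SlowIoPacing {
    chunk_size: NonZeroUsize,
    delay: Duration,
}

/// Split `bytes` into chunks of at most `chunk_size` bytes, pairing each
/// chunk with the delay a paced writer would await before sending it.
fn paced_chunks<'a>(bytes: &'a [u8], pacing: &SlowIoPacing) -> Vec<(&'a [u8], Duration)> {
    bytes
        .chunks(pacing.chunk_size.get())
        .map(|chunk| (chunk, pacing.delay))
        .collect()
}
```

Under paused Tokio time, each of those per-chunk delays would be crossed with an explicit `advance` rather than a real sleep, which is what keeps the tests deterministic.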

Rationale

  • Enables deterministic back-pressure testing by explicitly pacing inbound writes and outbound reads, giving the testkit first-class support for slow peers without altering existing API surfaces.
  • Keeps the API additive with a compact surface that covers raw, default-framed, and codec-aware test needs while avoiding changes to existing helpers.
  • Provides a structured path (rstest and rstest-bdd) to validate slow-I/O behaviour before broader code changes.

Plan of work (Stage A–G)

  • The ExecPlan has been implemented and executed in this PR to deliver additive slow-I/O helpers, exports, tests, and docs as outlined in 8.5.2. See docs/execplans/8-5-2-slow-reader-and-writer-simulation.md for the full plan and milestones.

Acceptance criteria

  • ExecPlan clearly documents the additive API contract, pacing model, and staged plan.
  • Reviewers can validate feasibility and alignment with existing testing patterns (rstest, rstest-bdd).
  • Roadmap references updated appropriately when implementation proceeds.
  • Subsequent PRs implement the slow reader/writer helpers and associated tests, following the staged plan.

Validation and testing approach

  • Deterministic pacing via paused Tokio time and explicit advances rather than real-time sleeps.
  • Validate integration with existing wireframe_testing helpers and test harness patterns.
  • Run through both unit-level rstest tests and behavioural rstest-bdd scenarios to ensure coverage.

Risks and mitigations

  • Timing-sensitive tests may be flaky. Mitigation: use paused Tokio time and explicit advances.
  • API surface could grow large. Mitigation: keep a shared pacing config with a small additive API.
  • Potential deadlocks in misconfigured pacing. Mitigation: bounded capacity, clear shutdown order, and deterministic assertions.

Documentation and maintenance

  • Update docs/roadmap.md to reflect 8-5-2 status as appropriate after milestones.
  • Add public API docs once implementation details are settled.

Related files

  • docs/execplans/8-5-2-slow-reader-and-writer-simulation.md (new)
  • wireframe_testing/src/helpers/slow_io.rs (new)
  • wireframe_testing/src/helpers.rs (updated)
  • wireframe_testing/src/lib.rs (updated)
  • tests/slow_io_backpressure.rs (new)
  • tests/features/slow_io_backpressure.feature (new)
  • tests/fixtures/slow_io_backpressure.rs (new)
  • tests/steps/slow_io_backpressure_steps.rs (new)
  • tests/scenarios/slow_io_backpressure_scenarios.rs (new)
  • tests/scenarios/mod.rs (updated)
  • tests/fixtures/mod.rs (updated)
  • docs/adr-002-streaming-requests-and-shared-message-assembly.md (update note)
  • docs/wireframe-testing-crate.md (update)
  • docs/users-guide.md (update)
  • docs/roadmap.md (update with 8.5.2 done)

Revision notes

  • Added ExecPlan document for roadmap item 8-5-2, including stages, tests, and documentation efforts.

📎 Task: https://www.devboxer.com/task/b6a14c4c-c5f5-4778-bad9-400deb738a15

@sourcery-ai
Contributor

sourcery-ai Bot commented Mar 5, 2026

Reviewer's Guide

Implements a new slow I/O back-pressure testkit in wireframe_testing, including SlowIoPacing/SlowIoConfig, paced duplex drivers and helper functions, plus integration and BDD tests and documentation/roadmap updates to support deterministic slow reader/writer simulation.

Sequence diagram for drive_with_slow_codec_payloads slow I/O flow

sequenceDiagram
    actor Test as Test
    participant Helper as SlowIoHelpers
    participant App as WireframeApp
    participant Codec as FrameCodec
    participant Tokio as TokioRuntime
    participant Writer as ClientWriter
    participant Reader as ClientReader

    Test->>Helper: drive_with_slow_codec_payloads(app, codec, payloads, config)
    Helper->>Helper: encode_payloads_with_codec(codec, payloads)
    Helper->>Tokio: duplex(config.capacity)
    Tokio-->>Helper: (client_stream, server_stream)
    Helper->>Tokio: spawn app.handle_connection(server_stream)
    Helper->>Tokio: split(client_stream)
    Tokio-->>Helper: (Reader, Writer)

    par paced_writer
        alt writer_pacing is Some
            loop while bytes_remaining
                Helper->>Writer: write_with_optional_pacing(chunk, writer_pacing)
            end
        else writer_pacing is None
            Helper->>Writer: write_all(all_bytes)
        end
        Helper->>Writer: shutdown()
    and paced_reader
        alt reader_pacing is Some
            loop until EOF
                Helper->>Reader: read_with_optional_pacing(reader_pacing)
                Reader-->>Helper: chunk_bytes
            end
        else reader_pacing is None
            Helper->>Reader: read_to_end()
            Reader-->>Helper: all_bytes
        end
    end

    Helper->>Helper: decode_frames_with_codec(codec, raw_bytes)
    Helper->>Helper: extract_payloads(frames)
    Helper-->>Test: io::Result<Vec<Vec<u8>>>

Updated class diagram for slow I/O helpers in wireframe_testing

classDiagram
    direction LR

    class SlowIoPacing {
        +NonZeroUsize chunk_size
        +Duration delay
        +new(chunk_size: NonZeroUsize, delay: Duration) SlowIoPacing
    }

    class SlowIoConfig {
        +Option~SlowIoPacing~ writer_pacing
        +Option~SlowIoPacing~ reader_pacing
        +usize capacity
        +new() SlowIoConfig
        +with_writer_pacing(pacing: SlowIoPacing) SlowIoConfig
        +with_reader_pacing(pacing: SlowIoPacing) SlowIoConfig
        +with_capacity(capacity: usize) SlowIoConfig
        -validate() io::Result~SlowIoConfig~
    }

    class SlowIoHelpers {
        <<utility>>
        +drive_with_slow_frames(app: WireframeApp, frames: Vec~Vec~u8~~, config: SlowIoConfig) io::Result~Vec~u8~~
        +drive_with_slow_payloads(app: WireframeApp, payloads: Vec~Vec~u8~~, config: SlowIoConfig) io::Result~Vec~u8~~
        +drive_with_slow_codec_payloads(app: WireframeApp, codec: FrameCodec, payloads: Vec~Vec~u8~~, config: SlowIoConfig) io::Result~Vec~Vec~u8~~~
        +drive_with_slow_codec_frames(app: WireframeApp, codec: FrameCodec, payloads: Vec~Vec~u8~~, config: SlowIoConfig) io::Result~Vec~Frame~~
        -pause_between_chunks(delay: Duration, should_pause: bool) void
        -write_with_optional_pacing(writer: AsyncWriteExt, bytes: &[u8], pacing: Option~SlowIoPacing~) io::Result~()~
        -read_with_optional_pacing(reader: AsyncReadExt, pacing: Option~SlowIoPacing~) io::Result~Vec~u8~~
        -drive_slow_internal(server_fn: FnOnce(DuplexStream), wire_bytes: Vec~u8~, config: SlowIoConfig) io::Result~Vec~u8~~
        -encode_length_delimited_payloads(payloads: Vec~Vec~u8~~) io::Result~Vec~u8~~
        -drive_slow_codec_frames_internal(handler: FnOnce(DuplexStream), codec: FrameCodec, payloads: Vec~Vec~u8~~, config: SlowIoConfig) io::Result~Vec~Frame~~
    }

    class WireframeApp {
        +handle_connection(stream: DuplexStream) Future~()~
    }

    class FrameCodec {
        <<interface>>
        +Frame
    }

    class LengthDelimitedCodec {
        +encode(item: Bytes, dst: BytesMut) Result~()~
    }

    class TestSerializer {
        <<interface>>
    }

    SlowIoConfig --> SlowIoPacing : uses
    SlowIoHelpers ..> SlowIoConfig : config
    SlowIoHelpers ..> SlowIoPacing : pacing
    SlowIoHelpers ..> WireframeApp : drives
    SlowIoHelpers ..> FrameCodec : encodes_decodes
    SlowIoHelpers ..> LengthDelimitedCodec : default_framing
    WireframeApp ..> TestSerializer : generic_over

File-Level Changes

Change | Details | Files
Introduce configurable slow I/O pacing types and core paced duplex driver helpers in wireframe_testing.
  • Add SlowIoPacing struct for per-direction chunk-size and delay configuration.
  • Add SlowIoConfig struct with optional reader/writer pacing and bounded duplex capacity, including validation for zero/oversized capacities.
  • Implement internal drive_slow_internal that runs the app over a tokio::io::DuplexStream using optional paced writes and reads with panic-to-io::Error conversion.
  • Provide drive_with_slow_frames and drive_with_slow_payloads helpers that encode frames/payloads then delegate to the paced driver.
  • Provide drive_with_slow_codec_frames and drive_with_slow_codec_payloads helpers that integrate the paced driver with FrameCodec-based apps and existing codec utilities.
wireframe_testing/src/helpers/slow_io.rs
Wire the new slow I/O helpers into the existing helper and crate public API surfaces and runtime dependencies.
  • Register the slow_io module in helpers.rs and re-export SlowIoConfig, SlowIoPacing and the four drive_with_slow_* helpers.
  • Re-export SlowIoConfig, SlowIoPacing and the slow I/O drive helpers from wireframe_testing::lib for external consumers.
  • Extend the tokio dependency for wireframe_testing to include the time feature required for virtual-time pacing.
wireframe_testing/src/helpers.rs
wireframe_testing/src/lib.rs
wireframe_testing/Cargo.toml
Add rstest-based integration tests covering slow writer, slow reader, combined pacing, and config validation.
  • Build echo WireframeApp instances using HotlineFrameCodec and BincodeSerializer and helper constructors for re-use across tests.
  • Implement paced codec tests that compare baseline fast-path behaviour with slow I/O runs and assert on pending state under paused Tokio time and final echoed payload lengths.
  • Add a combined slow reader + writer test that exercises duplex back-pressure with small buffer capacity and verifies clean round trips.
  • Add tests ensuring invalid SlowIoConfig capacities (zero and >MAX_CAPACITY) are rejected with well-typed InvalidInput io::Error messages.
tests/slow_io_backpressure.rs
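
The capacity-validation behaviour those tests assert can be sketched with std only; `MAX_SLOW_IO_CAPACITY` exists in the PR, but its value and the exact error messages below are placeholder assumptions:

```rust
use std::io;

/// Upper bound used for illustration; the PR exposes a `MAX_SLOW_IO_CAPACITY`
/// constant, but its actual value is not shown in this conversation.
const MAX_SLOW_IO_CAPACITY: usize = 1 << 20;

/// Validate a duplex buffer capacity the way the tests above expect:
/// zero and oversized capacities yield `io::ErrorKind::InvalidInput`.
fn validate_capacity(capacity: usize) -> io::Result<usize> {
    if capacity == 0 {
        return Err(io::Error::new(
            io::ErrorKind::InvalidInput,
            "capacity must be non-zero",
        ));
    }
    if capacity > MAX_SLOW_IO_CAPACITY {
        return Err(io::Error::new(
            io::ErrorKind::InvalidInput,
            format!("capacity exceeds {MAX_SLOW_IO_CAPACITY}"),
        ));
    }
    Ok(capacity)
}
```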
Add rstest-bdd behavioural test flow for slow I/O back-pressure using a dedicated world fixture and step definitions.
  • Introduce SlowIoBackpressureWorld fixture managing a current-thread Tokio runtime, paused time, app configuration, and spawned slow I/O tasks.
  • Add ReaderDriveConfig and CombinedDriveConfig types with FromStr implementations to parse compact slash-delimited pacing/capacity parameters from feature files.
  • Expose fixture methods to configure the app, start slow-writer, slow-reader, and combined drives, assert pending state, advance virtual time, and assert echoed payload length.
  • Define BDD steps and scenarios that bind to the fixture, using descriptive Gherkin in slow_io_backpressure.feature and register new fixture/steps/scenarios modules.
tests/fixtures/slow_io_backpressure.rs
tests/steps/slow_io_backpressure_steps.rs
tests/scenarios/slow_io_backpressure_scenarios.rs
tests/features/slow_io_backpressure.feature
tests/fixtures/mod.rs
tests/scenarios/mod.rs
tests/steps/mod.rs
Document the slow I/O helpers, update the streaming ADR with implementation notes, add the 8.5.2 ExecPlan, and mark the roadmap item complete.
  • Extend wireframe-testing-crate.md with a "Slow-I/O drivers" section including API examples, sequencing diagram, and entry points for the new helpers.
  • Update users-guide.md with a "Simulating slow readers and writers" section showing end-to-end usage of SlowIoConfig/SlowIoPacing with drive_with_slow_codec_payloads and notes on deterministic tests under paused time.
  • Add an implementation note to adr-002 describing the SlowIoPacing/SlowIoConfig model and helper coverage for frames/payloads/codec-aware round trips.
  • Add the detailed execplans/8-5-2-slow-reader-and-writer-simulation.md plan describing stages, risks, tolerances, and validation strategy.
  • Mark roadmap item 8.5.2 as complete and make minor formatting/word-wrapping tweaks in an existing 11.1.2 execplan doc.
docs/wireframe-testing-crate.md
docs/users-guide.md
docs/adr-002-streaming-requests-and-shared-message-assembly.md
docs/execplans/8-5-2-slow-reader-and-writer-simulation.md
docs/roadmap.md
docs/execplans/11-1-2-structured-logging-and-tracing-spans.md


@coderabbitai
Contributor

coderabbitai Bot commented Mar 5, 2026

No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: b850f2f5-c064-4610-911b-5f1d35804605

📥 Commits

Reviewing files that changed from the base of the PR and between 182d133 and fd1cf02.

📒 Files selected for processing (3)
  • tests/fixtures/slow_io_backpressure.rs
  • tests/scenarios/slow_io_backpressure_scenarios.rs
  • tests/steps/slow_io_backpressure_steps.rs

Summary by CodeRabbit

  • New Features

    • Deterministic slow I/O simulation helpers for back‑pressure testing with configurable reader/writer/combined pacing and public pacing/config API.
  • Improvements

    • Panic message formatting now preserves string payloads verbatim while using debug formatting for other payloads.
  • Documentation

    • New guides, exec plan, roadmap update and user‑guide examples covering slow‑I/O pacing, capacity tuning and zero‑copy payload extraction.
  • Tests

    • BDD feature scenarios, fixtures, step definitions and integration tests exercising slow‑I/O back‑pressure.

Walkthrough

Introduce deterministic slow I/O back‑pressure testing: add SlowIoPacing and SlowIoConfig, four slow‑I/O driving helpers and re‑exports in wireframe_testing, add BDD integration tests and fixtures, update docs/roadmap and crate docs, and preserve string panic payloads verbatim in panic formatting.

Changes

Cohort / File(s) | Summary
Slow I/O Public API
wireframe_testing/src/helpers/slow_io.rs, wireframe_testing/src/helpers.rs, wireframe_testing/src/lib.rs
Add new slow‑I/O module and public API: SlowIoPacing, SlowIoConfig, MAX_SLOW_IO_CAPACITY, and driving helpers (drive_with_slow_frames, drive_with_slow_payloads, drive_with_slow_codec_payloads, drive_with_slow_codec_frames). Re‑export from helpers and crate root.
Wireframe Testing Configuration
wireframe_testing/Cargo.toml
Enable Tokio time feature for deterministic time control in tests.
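
A dependency stanza along these lines would enable the feature. Note that `tokio::time::pause`/`advance` additionally require tokio's `test-util` feature, so the real manifest likely enables that for tests; the exact stanza below is an assumption, not the PR's actual Cargo.toml:

```toml
# Sketch only — the actual wireframe_testing/Cargo.toml may differ.
[dependencies]
tokio = { version = "1", features = ["io-util", "time"] }

[dev-dependencies]
# `test-util` provides tokio::time::pause() and advance() for paused-time tests.
tokio = { version = "1", features = ["time", "test-util", "macros", "rt"] }
```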
Helpers Integration
wireframe_testing/src/helpers.rs
Import slow_io module, replace MAX_CAPACITY with MAX_SLOW_IO_CAPACITY, and re‑export slow‑I/O types and helpers.
Tests: Fixtures & Runtime
tests/fixtures/slow_io_backpressure.rs, tests/fixtures/mod.rs
Add SlowIoBackpressureWorld fixture with paused runtime control, pacing orchestration, start/advance/assert helpers; export module in fixtures mod.
Tests: Integration & Unit
tests/slow_io_backpressure.rs
Add comprehensive tests covering paced frames/payloads, codec round‑trips, panic→io::Error mapping, invalid config validation and paced/combined scenarios.
Tests: BDD Scenarios & Steps
tests/features/slow_io_backpressure.feature, tests/scenarios/slow_io_backpressure_scenarios.rs, tests/scenarios/mod.rs, tests/steps/slow_io_backpressure_steps.rs, tests/steps/mod.rs
Add Gherkin feature with three back‑pressure scenarios and BDD wiring (scenarios + step definitions) that call fixture methods to configure, drive, advance virtual time and assert outcomes.
Docs & Planning
docs/execplans/8-5-2-slow-reader-and-writer-simulation.md, docs/wireframe-testing-crate.md, docs/users-guide.md, docs/roadmap.md, docs/adr-002-streaming-requests-and-shared-message-assembly.md
Add ExecPlan, crate and user‑guide docs describing slow‑I/O pacing, drivers and examples; mark roadmap item 8.5.2 complete; add implementation notes referencing SlowIoConfig/SlowIoPacing.
Panic Formatting & Tests
src/panic.rs, src/server/connection_spawner.rs
Preserve &'static str and String panic payloads verbatim in format_panic; update test expectations to match new panic string behaviour.
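
That panic-formatting change can be sketched with std only. The real `format_panic` lives in src/panic.rs; the fallback string used here for non-string payloads is a placeholder assumption (the PR describes debug formatting for those):

```rust
use std::any::Any;

/// Format a panic payload as described above: `&'static str` and `String`
/// payloads are preserved verbatim; anything else gets a generic fallback.
/// (Sketch only; the crate's actual fallback uses debug formatting.)
fn format_panic(payload: &(dyn Any + Send)) -> String {
    if let Some(s) = payload.downcast_ref::<&'static str>() {
        (*s).to_string()
    } else if let Some(s) = payload.downcast_ref::<String>() {
        s.clone()
    } else {
        "<non-string panic payload>".to_string()
    }
}
```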
Test Module Registrations
tests/scenarios/mod.rs, tests/steps/mod.rs
Register new slow‑I/O scenario and step modules.

Sequence Diagram(s)

sequenceDiagram
    autonumber
    participant Client as "Client"
    participant WriterTask as "Writer Task"
    participant AppServer as "App / Server"
    participant ReaderTask as "Reader Task"
    participant Runtime as "Runtime / Time"

    Client->>WriterTask: call drive_with_slow_*
    activate WriterTask

    WriterTask->>WriterTask: chunk data, wait between chunks
    WriterTask->>AppServer: send chunk
    activate AppServer

    AppServer->>AppServer: process request
    AppServer-->>ReaderTask: produce response
    deactivate AppServer

    activate ReaderTask
    ReaderTask->>ReaderTask: chunk response, wait between chunks
    ReaderTask->>Runtime: rely on paused/advance time

    activate Runtime
    Runtime->>Runtime: advance simulated time
    Runtime-->>ReaderTask: resume
    deactivate Runtime

    ReaderTask->>Client: deliver chunked response
    deactivate ReaderTask
    WriterTask-->>Client: complete result
    deactivate WriterTask

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Poem

🐢 Slow bytes tip‑toe down the lane,
Tests wind clocks and bend the frame,
Writers pause and readers sigh,
Echoes return, unhurried, spry. ✨

🚥 Pre-merge checks — 3 passed
  • Title check ✅ Passed: The title directly and clearly describes the main change: adding slow reader/writer back-pressure simulation testkit, tests, and documentation.
  • Description check ✅ Passed: The description is thorough and directly related to the changeset, covering implementation details, tests, documentation updates, rationale, and acceptance criteria.
  • Docstring coverage ✅ Passed: Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.


@leynos leynos changed the title from "Add ExecPlan for slow reader/writer back-pressure simulation" to "Implement slow reader/writer back-pressure simulation (testkit)" on Mar 6, 2026
codescene-delta-analysis[bot]

This comment was marked as outdated.

@leynos
Owner Author

leynos commented Mar 6, 2026

@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix:

tests/slow_io_backpressure.rs

Comment on file

fn build_echo_app(
    codec: HotlineFrameCodec,
) -> io::Result<WireframeApp<BincodeSerializer, (), Envelope, HotlineFrameCodec>> {
    WireframeApp::<BincodeSerializer, (), Envelope>::new()
        .map_err(|e| io::Error::other(format!("app init: {e}")))?
        .with_codec(codec)
        .route(
            1,
            Arc::new(|_: &Envelope| -> BoxFuture<'static, ()> { Box::pin(async {}) }),
        )
        .map_err(|e| io::Error::other(format!("route: {e}")))
}

❌ New issue: Code Duplication
The module contains 4 functions with similar structure: build_echo_app,build_length_delimited_echo_app,slow_reader_delays_outbound_draining,slow_writer_delays_inbound_completion

@coderabbitai

This comment was marked as resolved.

@devboxerhub devboxerhub Bot force-pushed the add-slow-reader-writer-simulation-p8g9fg branch from 67d621d to e95b36b on March 6, 2026 11:41
codescene-delta-analysis[bot]

This comment was marked as outdated.

@leynos leynos changed the title from "Implement slow reader/writer back-pressure simulation (testkit)" to "Add slow reader/writer back-pressure testkit and tests (8.5.2)" on Mar 6, 2026
codescene-delta-analysis[bot]

This comment was marked as outdated.

@leynos
Owner Author

leynos commented Mar 6, 2026

@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix:

tests/slow_io_backpressure.rs

Comment on file

async fn slow_writer_delays_inbound_completion() -> io::Result<()> {
    let pacing = SlowIoPacing::new(
        NonZeroUsize::new(8).ok_or_else(|| io::Error::other("non-zero"))?,
        Duration::from_millis(5),
    );
    run_paced_codec_test(
        vec![b'a'; 64],
        hotline_codec(),
        SlowIoConfig::new().with_writer_pacing(pacing),
        100,
    )
    .await
}

❌ New issue: Code Duplication
The module contains 2 functions with similar structure: slow_reader_delays_outbound_draining,slow_writer_delays_inbound_completion

@coderabbitai

This comment was marked as resolved.

@leynos
Owner Author

leynos commented Mar 6, 2026

@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix:

Code Duplication

tests/slow_io_backpressure.rs:

What led to degradation?

The module contains 4 functions with similar structure: build_echo_app,build_length_delimited_echo_app,slow_reader_delays_outbound_draining,slow_writer_delays_inbound_completion

Why does this problem occur?

Duplicated code often leads to code that's harder to change since the same logical change has to be done in multiple functions. More duplication gives lower code health.

How to fix it?

A certain degree of duplicated code might be acceptable. The problems start when the same behaviour is duplicated across the functions in the module, i.e. a violation of the Don't Repeat Yourself (DRY) principle. DRY violations lead to code that is changed together in predictable patterns, which is both expensive and risky. DRY violations can be identified using CodeScene's X-Ray analysis to detect clusters of change-coupled functions with high code similarity.
Once you have identified the similarities across functions, look to extract and encapsulate the concept that varies into its own function(s). These shared abstractions can then be re-used, which minimizes the amount of duplication and simplifies change.

@coderabbitai
Contributor

coderabbitai Bot commented Mar 6, 2026

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
fd slow_io_backpressure.rs --exec cat {}

Length of output: 19743


@leynos: The CodeScene report is stale — it was computed against an earlier commit. Both previous fixes have already been applied in the current branch:

Previous issue → Current state
  • echo_route() not extracted; inline closure duplicated in both builders → ✅ echo_route() extracted; both builders call it
  • slow_writer_delays_inbound_completion / slow_reader_delays_outbound_draining duplicated → ✅ Merged into paced_codec_single_payload (rstest named cases)

The only residual structural similarity is between build_echo_app and build_length_delimited_echo_app, and this cannot be eliminated — they return different generic instantiations of WireframeApp (...HotlineFrameCodec vs the default codec), so Rust's type system prevents merging them into a single function. The shared echo_route() call is the best achievable de-duplication for those two.

No further action is required. The CodeScene score should improve once the scan re-runs against the current commit.

@leynos leynos marked this pull request as ready for review March 6, 2026 14:10
sourcery-ai[bot]

This comment was marked as resolved.


@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 669c611a1d

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment thread wireframe_testing/src/helpers/slow_io.rs Outdated
codescene-delta-analysis[bot]

This comment was marked as outdated.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 8

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/execplans/8-5-2-slow-reader-and-writer-simulation.md`:
- Line 76: Update the checkbox line containing the phrase "Finalised additive
slow-I/O helper API around" to use Oxford English "-ized" spelling by replacing
"Finalised" with "Finalized" so the text reads "Finalized additive slow-I/O
helper API around"; locate the line with the checked item (the "[x] (2026-03-06
01:35Z) Finalised..." entry) and make the single-word substitution.
- Around line 84-85: The checklist line "- [x] (2026-03-06 01:35Z) Updated
design docs, users guide, and roadmap entry `8.5.2`." needs a possessive
apostrophe: change "users guide" to "user's guide" so it reads "Updated design
docs, user's guide, and roadmap entry `8.5.2`."; update that exact text in the
document "8-5-2-slow-reader-and-writer-simulation.md".
- Line 202: The heading "### Stage A: Finalise slow-I/O helper contract (no
behavioural changes yet)" uses Oxford English spelling with "-ise"; change
"Finalise" to "Finalize" and update any other occurrences of
"finalise"/"finalised" in this document to the "-ize" form to comply with the
en-GB-oxendict guideline; look for the exact string "Stage A: Finalise slow-I/O
helper contract (no behavioural changes yet)" and replace it and related
variants while preserving punctuation and capitalization.

In `@tests/fixtures/slow_io_backpressure.rs`:
- Around line 218-230: The start_drive function currently allows starting a new
drive while a prior async task (self.task) is still running which detaches the
old JoinHandle; add a guard at the top of start_drive that checks
self.task.as_ref().is_some_and(|t| !t.is_finished()) and return an Err("slow-io
drive is already running".into()) if true, preventing a second drive from being
started until the existing JoinHandle has finished or been awaited; keep the
rest of start_drive (build_app, codec, payload, pause, spawn) unchanged and only
set self.task after this check.

In `@tests/scenarios/slow_io_backpressure_scenarios.rs`:
- Around line 11-35: Remove the repeated #[expect(unused_variables)] attributes
and instead consume the fixture parameter inside each empty test function by
adding a single-use binding like let _ = slow_io_backpressure_world; in
slow_writer_delays, slow_reader_delays, and combined_slow_io_round_trip so the
parameter is used without changing behavior.

In `@tests/slow_io_backpressure.rs`:
- Around line 247-279: Replace the current length-only verification with a full
payload equality check: after awaiting the task and getting response from
drive_with_slow_codec_payloads, call deserialize_echo_payloads(&response)
(instead of deserialize_echo_lengths) and compare the resulting Vec<u8> payloads
to the original payload_a and payload_b (use clones or references as needed); if
they differ, return an io::Error containing both expected and actual payloads in
the message to fail the test on byte-level corruption or reordering.

In `@wireframe_testing/src/helpers/slow_io.rs`:
- Around lines 131-133: Replace uses of io::Error::new(io::ErrorKind::Other, ...)
with io::Error::other(...) in three places: the writer chunk slice (the let
chunk = bytes.get(offset..end).ok_or_else(|| { ... })? expression), the
corresponding ok_or_else in the reader chunk slice, and the server task error
that currently builds io::Error::new with format!("server task failed:
{panic_msg}"); for the server task, call
io::Error::other(format!("server task failed: {panic_msg}")).
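The two constructions are interchangeable: io::Error::other, stable since Rust 1.74, is simply the shorter spelling of the ErrorKind::Other case (projects with an older MSRV would keep the long form). A small sketch:

```rust
use std::io;

// Both forms produce ErrorKind::Other with the same message; `other` is
// just less noise at each call site.
fn server_failed_verbose(panic_msg: &str) -> io::Error {
    io::Error::new(
        io::ErrorKind::Other,
        format!("server task failed: {panic_msg}"),
    )
}

fn server_failed_concise(panic_msg: &str) -> io::Error {
    io::Error::other(format!("server task failed: {panic_msg}"))
}
```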

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: c3cb23c2-4173-4d9f-8f96-b749d4f9f978

📥 Commits

Reviewing files that changed from the base of the PR and between 926468d and f9f6ac4.

📒 Files selected for processing (20)
  • docs/adr-002-streaming-requests-and-shared-message-assembly.md
  • docs/execplans/11-1-2-structured-logging-and-tracing-spans.md
  • docs/execplans/8-5-2-slow-reader-and-writer-simulation.md
  • docs/roadmap.md
  • docs/users-guide.md
  • docs/wireframe-testing-crate.md
  • src/panic.rs
  • src/server/connection_spawner.rs
  • tests/features/slow_io_backpressure.feature
  • tests/fixtures/mod.rs
  • tests/fixtures/slow_io_backpressure.rs
  • tests/scenarios/mod.rs
  • tests/scenarios/slow_io_backpressure_scenarios.rs
  • tests/slow_io_backpressure.rs
  • tests/steps/mod.rs
  • tests/steps/slow_io_backpressure_steps.rs
  • wireframe_testing/Cargo.toml
  • wireframe_testing/src/helpers.rs
  • wireframe_testing/src/helpers/slow_io.rs
  • wireframe_testing/src/lib.rs

Comment thread docs/execplans/8-5-2-slow-reader-and-writer-simulation.md Outdated
Comment thread docs/execplans/8-5-2-slow-reader-and-writer-simulation.md Outdated
Comment thread docs/execplans/8-5-2-slow-reader-and-writer-simulation.md Outdated
Comment thread tests/fixtures/slow_io_backpressure.rs
Comment thread tests/scenarios/slow_io_backpressure_scenarios.rs Outdated
Comment thread tests/slow_io_backpressure.rs
Comment thread wireframe_testing/src/helpers/slow_io.rs
Comment thread wireframe_testing/src/helpers/slow_io.rs Outdated
@leynos
Owner Author

leynos commented Mar 7, 2026

@coderabbitai Have the following now been resolved?

Annul any requirements that violate the en-GB-oxendict spelling (-ize / -yse / -our) conventions (for example a request to replace "normalize" with "normalise" or "artefact" with "artifact"), or where the requirement unnecessarily increases cyclomatic complexity.

Please address the comments from this code review:

## Overall Comments
- In `slow_io.rs`, the helper bounds use `AsyncReadExt`/`AsyncWriteExt` as traits (`R: AsyncReadExt + Unpin`, `W: AsyncWriteExt + Unpin`); consider switching these to `AsyncRead`/`AsyncWrite` to depend on the core I/O traits rather than the extension traits and make the helpers usable with any Tokio reader/writer.
- `slow_io.rs` constructs several `io::Error` values via `io::Error::other(...)`; if you want to keep compatibility with a broader Rust MSRV, it may be safer to use `io::Error::new(io::ErrorKind::Other, ...)` consistently instead of relying on the newer `other` constructor.

## Individual Comments

### Comment 1
<location path="tests/slow_io_backpressure.rs" line_range="15-20" />
<code_context>
+use std::{num::NonZeroUsize, time::Duration};
+use wireframe::app::WireframeApp;
+use wireframe::codec::examples::HotlineFrameCodec;
+use wireframe_testing::{
+    SlowIoConfig, SlowIoPacing, drive_with_slow_codec_payloads,
+};
</code_context>
<issue_to_address>
**suggestion (testing):** Add explicit coverage for `drive_with_slow_frames` and `drive_with_slow_payloads` happy paths

Current tests only cover `drive_with_slow_frames` for invalid config and do not exercise `drive_with_slow_payloads` at all. Please add small tests that:

- Use `drive_with_slow_frames` to drive a simple echo app with multiple frames and assert the raw bytes are as expected.
- Use `drive_with_slow_payloads` (length‑delimited) to drive the same app and assert the echoed payloads.

This will give positive-path coverage for all three slow-I/O entry points and guard against regressions in framing/payload wiring.

Suggested implementation:

```rust
const MAX_CAPACITY_PLUS_ONE: usize = (1024 * 1024 * 10) + 1;
type EchoRoute = Arc<dyn Fn(&Envelope) -> BoxFuture<'static, ()> + Send + Sync>;

#[tokio::test]
async fn slow_frames_echo_happy_path() {
    // Arrange: simple echo app and slow I/O config
    let app = WireframeApp::new(); // replace with the actual constructor used elsewhere in this file
    let config = SlowIoConfig {
        // Adjust field names to match SlowIoConfig definition
        read_pacing: Some(SlowIoPacing {
            max_chunk_size: NonZeroUsize::new(3).expect("non-zero"),
            interval: Duration::from_millis(1),
        }),
        write_pacing: Some(SlowIoPacing {
            max_chunk_size: NonZeroUsize::new(2).expect("non-zero"),
            interval: Duration::from_millis(1),
        }),
        ..Default::default()
    };

    // Input bytes contain multiple frames as understood by HotlineFrameCodec.
    // Adjust the framing to match HotlineFrameCodec’s expected format.
    // For example, if HotlineFrameCodec is length-delimited:
    // [len][payload][len][payload]...
    let input_bytes = vec![
        0x00, 0x00, 0x00, 0x03, b'f', b'o', b'o', // "foo"
        0x00, 0x00, 0x00, 0x03, b'b', b'a', b'r', // "bar"
    ];

    // Act
    let output_bytes = drive_with_slow_frames::<HotlineFrameCodec, _>(
        config,
        input_bytes.clone(),
        app,
    )
    .await
    .expect("drive_with_slow_frames should succeed");

    // Assert: echo app should reflect bytes 1:1
    assert_eq!(output_bytes, input_bytes);
}

#[tokio::test]
async fn slow_payloads_echo_happy_path() {
    // Arrange: same echo app but exercising payload-level helpers
    let app = WireframeApp::new(); // replace with the actual constructor used elsewhere in this file
    let serializer = BincodeSerializer::default();
    let config = SlowIoConfig {
        read_pacing: Some(SlowIoPacing {
            max_chunk_size: NonZeroUsize::new(2).expect("non-zero"),
            interval: Duration::from_millis(1),
        }),
        write_pacing: Some(SlowIoPacing {
            max_chunk_size: NonZeroUsize::new(3).expect("non-zero"),
            interval: Duration::from_millis(1),
        }),
        ..Default::default()
    };

    let input_payloads = vec![
        String::from("hello"),
        String::from("world"),
        String::from("slow-io"),
    ];

    // Act: drive via length-delimited codec/payload helper
    let echoed_payloads: Vec<String> = drive_with_slow_codec_payloads::<
        HotlineFrameCodec,
        _,
        _,
    >(config, serializer, input_payloads.clone(), app)
    .await
    .expect("drive_with_slow_codec_payloads should succeed");

    // Assert
    assert_eq!(echoed_payloads, input_payloads);
}

```

The edits above assume specific constructor and API shapes that you will likely need to align with the existing code:

1. **WireframeApp construction / echo behavior**
   - Replace `WireframeApp::new()` with the actual constructor or helper used elsewhere in `slow_io_backpressure.rs` to build the echo app under test.
   - Ensure the app is wired so that for a given `Envelope` it simply echoes the payload back to the caller; reuse any existing `EchoRoute`/`WireframeApp` helpers if already defined later in the file.

2. **`SlowIoConfig` and `SlowIoPacing` field names**
   - Adjust the field names (`read_pacing`, `write_pacing`, `max_chunk_size`, `interval`) and the `..Default::default()` usage to match the actual `SlowIoConfig` and `SlowIoPacing` definitions in `wireframe_testing`.
   - If `SlowIoConfig` does not implement `Default`, initialize all required fields explicitly.

3. **`drive_with_slow_frames` signature**
   - Update the call to `drive_with_slow_frames::<HotlineFrameCodec, _>(...)` to match its true signature (argument order and count). The intent is:
     - use `SlowIoConfig` for pacing,
     - feed in raw input bytes containing multiple frames,
     - pass the echo app,
     - assert that the returned bytes match the input bytes.

4. **`drive_with_slow_codec_payloads` signature and generics**
   - Align the generic parameters and arguments with the actual helper signature. If it accepts the app before payloads or requires a serializer by reference, adjust accordingly.
   - Ensure the payload type (`String` here) matches what your echo route and serializer expect; if other types (e.g. a custom request struct) are used in existing tests, mirror that instead.

5. **HotlineFrameCodec framing format**
   - The sample frame encoding in `input_bytes` assumes a 4-byte big-endian length prefix followed by the UTF-8 payload. If `HotlineFrameCodec` uses a different framing strategy (e.g. newline-delimited), adjust `input_bytes` accordingly and update the expectation comment.
</issue_to_address>

### Comment 2
<location path="tests/slow_io_backpressure.rs" line_range="81-86" />
<code_context>
+    Ok(())
+}
+
+async fn run_paced_codec_test(
+    payload: Vec<u8>,
+    codec: HotlineFrameCodec,
+    config: SlowIoConfig,
+    final_advance_millis: u64,
+) -> io::Result<()> {
+    let serialized = serialize_envelope(&payload)?;
+
</code_context>
<issue_to_address>
**suggestion (testing):** Consider adding a test that exercises panic-to-`io::Error` mapping in the slow I/O driver

The slow I/O helper wraps the server future in `catch_unwind` and maps panics to `io::Error::other("server task failed: …")`, but tests only cover success and config errors. To verify the behavior promised in the PR, please add a test that triggers a panic in a handler, runs it through `drive_with_slow_codec_payloads` (or another slow helper), and asserts that the resulting `io::Error` has the expected kind and panic message. This will lock in the panic-handling semantics and prevent regressions.

Suggested implementation:

```rust
use wireframe::{
    app::{Envelope, WireframeApp},
    codec::examples::HotlineFrameCodec,
    serializer::{BincodeSerializer, Serializer},
};

#[tokio::test]
async fn panic_in_server_is_mapped_to_io_error_other() {
    // Arrange: a payload that will cause the handler to panic.
    let payload = b"panic-test-payload".to_vec();

    // Use a default / minimal slow I/O configuration.
    let config = SlowIoConfig::default();

    // Act: drive a server future that panics through the slow I/O driver.
    let codec = HotlineFrameCodec::default();
    let result = drive_with_slow_codec_payloads(
        async move {
            // This panic should be caught and turned into an io::Error by the slow I/O helper.
            panic!("intentional handler panic for test");
            #[allow(unreachable_code)]
            Ok::<_, io::Error>(())
        },
        codec,
        config,
        vec![payload],
    )
    .await;

    // Assert: the panic has been mapped to an `io::Error` with the expected kind and message.
    let err = result.expect_err("expected panic to be mapped into io::Error");
    assert_eq!(err.kind(), io::ErrorKind::Other);
    let msg = err.to_string();
    assert!(
        msg.contains("server task failed"),
        "error message did not contain panic preface: {msg}"
    );
    assert!(
        msg.contains("intentional handler panic for test"),
        "error message did not contain original panic message: {msg}"
    );
}

```

To fully wire this up you will also need to:

1. Ensure `SlowIoConfig` and `drive_with_slow_codec_payloads` are in scope:
   - If they live in this module, no extra imports are needed.
   - If they are re-exported from a helper module, add an appropriate `use` (for example, `use crate::tests::slow_io::{SlowIoConfig, drive_with_slow_codec_payloads};` or whatever the actual path is).

2. Adjust the signature of the `drive_with_slow_codec_payloads` call if it differs:
   - The test assumes a signature roughly like:
     `async fn drive_with_slow_codec_payloads<Fut>(server: Fut, codec: HotlineFrameCodec, config: SlowIoConfig, payloads: Vec<Vec<u8>>) -> io::Result<Vec<Vec<u8>>> where Fut: Future<Output = io::Result<()>> + Send + 'static;`
   - If your helper instead takes a closure, e.g. `|| async { ... }`, wrap the `async move` block accordingly.
   - If it does not require `payloads`, drop the `vec![payload]` argument.

3. If `SlowIoConfig` does not implement `Default`, construct it the same way your existing tests do (e.g. `SlowIoConfig { read_chunk_size: ..., write_chunk_size: ..., tick_millis: ... }`).

4. If the slow I/O helper uses a different error preface string than `"server task failed"`, update the `msg.contains("server task failed")` assertion to match the actual wording used in the panic-to-`io::Error` mapping.
</issue_to_address>

### Comment 3
<location path="docs/users-guide.md" line_range="459-462" />
<code_context>
+  output bytes.
+- `drive_with_slow_codec_payloads` — codec-aware payloads, returns decoded
+  payload byte vectors.
+- `drive_with_slow_codec_frames` — codec-aware payloads, returns decoded
+  `F::Frame` values.
+
</code_context>
<issue_to_address>
**issue (typo):** Clarify that `drive_with_slow_codec_frames` is codec-aware for frames, not payloads.

This should read “codec-aware frames” rather than “codec-aware payloads” to match the function name and the fact it returns decoded `F::Frame` values.

```suggestion
- `drive_with_slow_payloads` — default length-delimited payloads, returns raw
  output bytes.
- `drive_with_slow_codec_payloads` — codec-aware payloads, returns decoded
  payload byte vectors.
- `drive_with_slow_codec_frames` — codec-aware frames, returns decoded
  `F::Frame` values.
```
</issue_to_address>

### Comment 4
<location path="docs/execplans/8-5-2-slow-reader-and-writer-simulation.md" line_range="376-380" />
<code_context>
+    pub capacity: usize,
+}
+
+pub async fn drive_with_slow_io_payloads<S, C, E>(
+    app: wireframe::app::WireframeApp<S, C, E>,
+    payloads: Vec<Vec<u8>>,
+    config: SlowIoConfig,
+) -> std::io::Result<Vec<u8>>
+where
+    S: wireframe_testing::TestSerializer,
</code_context>
<issue_to_address>
**issue:** Make the planned function name consistent with the documented public helper (`drive_with_slow_payloads`).

The function here is named `drive_with_slow_io_payloads`, while the rest of the documentation and public helper list use `drive_with_slow_payloads`. Please align this example with the documented public helper name to avoid API confusion.
</issue_to_address>

### Comment 5
<location path="wireframe_testing/src/helpers/slow_io.rs" line_range="235" />
<code_context>
+    Ok(wire)
+}
+
+async fn drive_slow_codec_frames_internal<F, H, Fut>(
+    handler: H,
+    codec: &F,
</code_context>
<issue_to_address>
**issue (complexity):** Consider simplifying the slow codec-driving helpers by inlining the internal helper into `drive_with_slow_codec_frames` and reusing the shared codec-encoding utility instead of duplicating encoding logic.

You can reduce indirection and duplication without changing behavior by:

1. **Inlining `drive_slow_codec_frames_internal` into `drive_with_slow_codec_frames`**

`drive_slow_codec_frames_internal` is only called from `drive_with_slow_codec_frames` with a closure that always does `app.handle_connection(server)`. You can remove the extra generic handler and keep a single flow:

```rust
pub async fn drive_with_slow_codec_frames<S, C, E, F>(
    app: WireframeApp<S, C, E, F>,
    codec: &F,
    payloads: Vec<Vec<u8>>,
    config: SlowIoConfig,
) -> io::Result<Vec<F::Frame>>
where
    S: TestSerializer,
    C: Send + 'static,
    E: Packet,
    F: FrameCodec,
{
    let encoded = encode_payloads_with_codec(codec, payloads)?;
    let wire_bytes: Vec<u8> = encoded.into_iter().flatten().collect();

    let raw = drive_slow_internal(
        |server| async move { app.handle_connection(server).await },
        wire_bytes,
        config,
    )
    .await?;

    decode_frames_with_codec(codec, raw)
}
```

You can then delete `drive_slow_codec_frames_internal`, which removes one generic function and one more level of abstraction.

2. **Reusing `encode_payloads_with_codec` from `encode_length_delimited_payloads`**

`encode_length_delimited_payloads` manually repeats codec usage that `encode_payloads_with_codec` already encapsulates. You can construct the default codec and delegate:

```rust
fn encode_length_delimited_payloads(payloads: Vec<Vec<u8>>) -> io::Result<Vec<u8>> {
    let codec: LengthDelimitedCodec = new_test_codec(DEFAULT_CAPACITY);
    let frames = encode_payloads_with_codec(&codec, payloads)?;
    Ok(frames.into_iter().flatten().collect())
}
```

This removes the manual `LengthFormat` and per-payload encoding loop, and keeps a single implementation of “encode payloads with a codec” logic.
</issue_to_address>

### Comment 6
<location path="docs/execplans/8-5-2-slow-reader-and-writer-simulation.md" line_range="38" />
<code_context>
+- Keep deterministic test behaviour; avoid wall-clock-only assertions when
+  paused Tokio time can be used.
+- Update relevant design documentation with implementation decisions.
+- Update `docs/users-guide.md` for any new public testkit API.
+- On feature completion, update roadmap checkbox `8.5.2` from `[ ]` to `[x]`.
+
</code_context>
<issue_to_address>
**issue (review_instructions):** The acronym “API” is used here without being expanded on first use in this document.

To follow the style guidance, expand the acronym the first time it appears in this execplan, for example: "application programming interface (API)".

<details>
<summary>Review instructions:</summary>

**Path patterns:** `**/*.md`

**Instructions:**
Define uncommon acronyms on first use.

</details>
</issue_to_address>

### Comment 7
<location path="docs/execplans/8-5-2-slow-reader-and-writer-simulation.md" line_range="75" />
<code_context>
+- [x] (2026-03-05 17:40Z) Drafted ExecPlan for roadmap item `8.5.2`.
+- [x] (2026-03-06 00:10Z) Updated scope tolerance to fit the required helper,
+  BDD, and documentation footprint.
+- [x] (2026-03-06 01:35Z) Finalised additive slow-I/O helper API around
+  `SlowIoPacing` and `SlowIoConfig`.
+- [x] (2026-03-06 01:35Z) Implemented slow writer and slow reader simulation in
</code_context>
<issue_to_address>
**issue (review_instructions):** “Finalised” uses -ise rather than the requested -ize Oxford spelling.

To match the en-GB-oxendict (-ize) convention, change “Finalised” to “Finalized”.

<details>
<summary>Review instructions:</summary>

**Path patterns:** `**/*.md`

**Instructions:**
Use en-GB-oxendict (-ize / -yse / -our) spelling and grammar.

</details>
</issue_to_address>

@leynos
Owner Author

leynos commented Mar 7, 2026

@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix:

tests/slow_io_backpressure.rs

Comment on file

fn deserialize_echo_lengths(bytes: &[Vec<u8>]) -> io::Result<Vec<usize>> {
    bytes
        .iter()
        .map(|raw| {
            let (env, _) = BincodeSerializer
                .deserialize::<Envelope>(raw)
                .map_err(|e| {
                    io::Error::new(io::ErrorKind::InvalidData, format!("deserialize: {e}"))
                })?;
            Ok(env.payload_bytes().len())
        })
        .collect()
}

❌ New issue: Code Duplication
The module contains 2 functions with similar structure: deserialize_echo_lengths, deserialize_echo_payloads
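One way to collapse the duplicated pair is a single generic walker that decodes each raw buffer once and applies a caller-supplied projection. In this sketch the Envelope type and decoder are trivial stand-ins for the crate's Envelope and BincodeSerializer::deserialize, so the types are placeholders, not the real API.

```rust
use std::io;

// Stand-in for the deserialized Envelope; the real type comes from wireframe.
struct Envelope {
    payload: Vec<u8>,
}

// Stand-in decoder; the real code calls BincodeSerializer::deserialize.
fn decode(raw: &[u8]) -> io::Result<Envelope> {
    Ok(Envelope { payload: raw.to_vec() })
}

// Shared walker: decode every raw buffer, then project the envelope.
fn map_echo_envelopes<T>(
    bytes: &[Vec<u8>],
    project: impl Fn(&Envelope) -> T,
) -> io::Result<Vec<T>> {
    bytes.iter().map(|raw| Ok(project(&decode(raw)?))).collect()
}

fn deserialize_echo_lengths(bytes: &[Vec<u8>]) -> io::Result<Vec<usize>> {
    map_echo_envelopes(bytes, |env| env.payload.len())
}

fn deserialize_echo_payloads(bytes: &[Vec<u8>]) -> io::Result<Vec<Vec<u8>>> {
    map_echo_envelopes(bytes, |env| env.payload.clone())
}
```

Both public helpers keep their signatures; only the shared decoding loop moves into one place.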


@leynos
Owner Author

leynos commented Mar 8, 2026

@coderabbitai review

@coderabbitai
Contributor

coderabbitai Bot commented Mar 8, 2026

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@coderabbitai coderabbitai Bot added the codex label Mar 8, 2026
@devboxerhub devboxerhub Bot force-pushed the add-slow-reader-writer-simulation-p8g9fg branch from 24d9be6 to 47e3162 on March 10, 2026 14:17

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 6

♻️ Duplicate comments (1)
tests/fixtures/slow_io_backpressure.rs (1)

329-340: ⚠️ Potential issue | 🟠 Major

Reject trailing bytes when reading the echoed envelope.

Capture the consumed length from BincodeSerializer::deserialize and fail when it does not match raw.len(). As written, a valid Envelope plus trailing garbage still satisfies the behavioural assertion and can hide framing regressions.

Proposed fix
-        let (env, _) = BincodeSerializer
+        let (env, consumed) = BincodeSerializer
             .deserialize::<Envelope>(raw)
             .map_err(|e| format!("deserialize: {e}"))?;
+        if consumed != raw.len() {
+            return Err(
+                format!(
+                    "deserialize: trailing bytes after envelope: consumed {consumed} of {}",
+                    raw.len()
+                )
+                .into(),
+            );
+        }
         let actual_len = env.payload_bytes().len();
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/fixtures/slow_io_backpressure.rs` around lines 329 - 340, In
assert_completed_payload_len capture the consumed byte count returned by
BincodeSerializer::deserialize::<Envelope>(raw) (e.g. let (env, consumed) =
BincodeSerializer.deserialize::<Envelope>(raw)...) and immediately check that
consumed == raw.len(); if not, return an error indicating trailing bytes
(including consumed and raw.len() in the message) before proceeding to read
env.payload_bytes().len(); this ensures trailing garbage after a valid Envelope
is rejected.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/execplans/11-1-2-structured-logging-and-tracing-spans.md`:
- Line 18: Remove the unresolved merge conflict marker `<<<<<<< LEFT` from the
file; search for and delete any other leftover conflict markers (`<<<<<<<`,
`=======`, `>>>>>>>`) in the document, then save the Markdown so it parses
cleanly (optionally run your Markdown linter/preview to confirm no remaining
conflict artifacts).

In `@docs/users-guide.md`:
- Around line 430-453: The snippet is feeding raw bytes vec![1,2,3] into
drive_with_slow_codec_payloads but
WireframeApp::new()?.with_codec(codec.clone()) expects codec-framed Envelope
messages; replace the payload with a properly serialized Envelope (create a
wireframe::Envelope with the desired request/headers, encode/serialize it using
HotlineFrameCodec or the codec.encode/serialize helper to produce Vec<u8>) and
pass that Vec<u8> to drive_with_slow_codec_payloads, or alternatively switch to
the raw-frame helper variant (e.g., drive_with_slow_codec_raw_frames or
similarly named helper) that accepts pre-framed raw frames instead of codec
payloads; ensure you use the existing HotlineFrameCodec and Envelope types so
the app receives a valid request.

In `@docs/wireframe-testing-crate.md`:
- Around line 190-192: The sequence diagram references a non-existent helper
name drive_with_slow_io_payloads; update the diagram label to the exported
helper name used elsewhere (either drive_with_slow_payloads or
drive_with_slow_codec_payloads) so the diagram matches the public API; locate
the Runtime->>Helper line and replace drive_with_slow_io_payloads(app, payloads,
config) with the correct exported helper name (choose drive_with_slow_payloads
or drive_with_slow_codec_payloads consistent with the surrounding docs).

In `@tests/slow_io_backpressure.rs`:
- Line 27: Replace the hard-coded 10 MiB literal and matching error text with
the public test constant: import or reference wireframe_testing::TEST_MAX_FRAME
and change MAX_CAPACITY_PLUS_ONE to be (TEST_MAX_FRAME as usize) + 1 (or
otherwise derive the ceiling case from TEST_MAX_FRAME) and update any
assertions/error messages that mention "10 MiB" to interpolate or assert against
TEST_MAX_FRAME so the test relies on TEST_MAX_FRAME instead of a duplicated
numeric literal (also apply the same replacement at the other occurrence around
lines 335-340).
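Deriving the ceiling case from the exported constant keeps the test and the library in lockstep. In this sketch TEST_MAX_FRAME is a local stand-in whose 10 MiB value is an assumption; the real constant is re-exported by wireframe_testing.

```rust
// Stand-in for wireframe_testing::TEST_MAX_FRAME; the 10 MiB value is assumed.
const TEST_MAX_FRAME: usize = 10 * 1024 * 1024;

// Derived rather than duplicated as a bare numeric literal, so the test
// cannot drift if the library constant changes.
const MAX_CAPACITY_PLUS_ONE: usize = TEST_MAX_FRAME + 1;
```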

In `@wireframe_testing/src/helpers/slow_io.rs`:
- Around line 249-259: The example uses vec![1, 2, 3] which suggests a payload
but the function drive_with_slow_frames expects pre-framed/raw-frame bytes;
update the snippet to show an actual encoded frame (i.e., replace the bare
payload with the wire-format bytes) or else switch to the payload helper API so
it’s clear you’re supplying a payload — adjust the example around
drive_with_slow_frames, SlowIoConfig, SlowIoPacing, and WireframeApp to use the
correct pre-framed bytes or the payload helper accordingly.
- Around line 206-208: Remove the unnecessary Send bound on the Fut generic:
update the function/type bounds where F: FnOnce(DuplexStream) -> Fut and Fut:
std::future::Future<Output = ()> + Send to drop the + Send so Fut is only
required to be a Future<Output = ()>; this applies to the same generic bounds
locations (e.g., the helper functions that spawn server_fut, writer_fut,
reader_fut which are polled via tokio::try_join! on the current task and never
passed to tokio::spawn).

---

Duplicate comments:
In `@tests/fixtures/slow_io_backpressure.rs`:
- Around line 329-340: In assert_completed_payload_len capture the consumed byte
count returned by BincodeSerializer::deserialize::<Envelope>(raw) (e.g. let
(env, consumed) = BincodeSerializer.deserialize::<Envelope>(raw)...) and
immediately check that consumed == raw.len(); if not, return an error indicating
trailing bytes (including consumed and raw.len() in the message) before
proceeding to read env.payload_bytes().len(); this ensures trailing garbage
after a valid Envelope is rejected.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 59e2de79-cdae-416a-aa72-8ae46f087a0f

📥 Commits

Reviewing files that changed from the base of the PR and between 24d9be6 and 47e3162.

📒 Files selected for processing (20)
  • docs/adr-002-streaming-requests-and-shared-message-assembly.md
  • docs/execplans/11-1-2-structured-logging-and-tracing-spans.md
  • docs/execplans/8-5-2-slow-reader-and-writer-simulation.md
  • docs/roadmap.md
  • docs/users-guide.md
  • docs/wireframe-testing-crate.md
  • src/panic.rs
  • src/server/connection_spawner.rs
  • tests/features/slow_io_backpressure.feature
  • tests/fixtures/mod.rs
  • tests/fixtures/slow_io_backpressure.rs
  • tests/scenarios/mod.rs
  • tests/scenarios/slow_io_backpressure_scenarios.rs
  • tests/slow_io_backpressure.rs
  • tests/steps/mod.rs
  • tests/steps/slow_io_backpressure_steps.rs
  • wireframe_testing/Cargo.toml
  • wireframe_testing/src/helpers.rs
  • wireframe_testing/src/helpers/slow_io.rs
  • wireframe_testing/src/lib.rs

Comment thread docs/execplans/11-1-2-structured-logging-and-tracing-spans.md Outdated
Comment thread docs/users-guide.md Outdated
Comment thread docs/wireframe-testing-crate.md
Comment thread tests/slow_io_backpressure.rs Outdated
Comment thread wireframe_testing/src/helpers/slow_io.rs Outdated
Comment thread wireframe_testing/src/helpers/slow_io.rs Outdated
@leynos
Owner Author

leynos commented Mar 10, 2026

@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix:

Bumpy Road Ahead

wireframe_testing/src/helpers/slow_io.rs: SlowIoConfig.validate

What led to degradation?

SlowIoConfig.validate has 2 blocks with nested conditional logic. Any nesting of 2 or deeper is considered. Threshold is 2 blocks per function

Why does this problem occur?

A Bumpy Road is a function that contains multiple chunks of nested conditional logic inside the same function. The deeper the nesting and the more bumps, the lower the code health.
A bumpy code road represents a lack of encapsulation, which becomes an obstacle to comprehension. In imperative languages there is also an increased risk of feature entanglement, which leads to complex state management. CodeScene considers the following rules for the code health impact:

1. The deeper the nested conditional logic of each bump, the higher the tax on our working memory.
2. The more bumps inside a function, the more expensive it is to refactor, as each bump represents a missing abstraction.
3. The larger each bump (that is, the more lines of code it spans), the harder it is to build up a mental model of the function.

The nesting depth for what is considered a bump is levels of conditionals.

How to fix it?

Bumpy Road implementations indicate a lack of encapsulation. Check out the detailed description of the Bumpy Road code health issue.
A Bumpy Road often suggests that the function/method does too many things. The first refactoring step is to identify the different possible responsibilities of the function. Consider extracting those responsibilities into smaller, cohesive, and well-named functions. The EXTRACT FUNCTION refactoring is the primary response.


Refactor slow I/O example code in the users guide to use
BincodeSerializer and Envelope for request serialization.
This clarifies how to prepare payloads, enhancing documentation
accuracy and usability.

Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>

…duplication

Extracted chunk size validation logic for writer and reader pacing into a new helper function validate_pacing_chunk_size to improve code clarity and maintainability in slow_io.rs.

Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
@leynos leynos changed the title from "Add slow reader/writer back-pressure testkit, tests, and docs (8.5.2)" to "Implement slow reader and writer back-pressure simulation" on Mar 10, 2026
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (1)
wireframe_testing/src/helpers/slow_io.rs (1)

309-323: ⚠️ Potential issue | 🟡 Minor

Serialize an Envelope in this rustdoc example.

drive_with_slow_codec_payloads injects codec payload bytes straight into the app. With the default serializer still in place, vec![1] is not a valid request payload, so this example documents a round trip that will not succeed if copied into a test.

🛠️ Proposed fix
 /// # use std::{num::NonZeroUsize, time::Duration};
-/// # use wireframe::app::WireframeApp;
+/// # use wireframe::{
+/// #     app::{Envelope, WireframeApp},
+/// #     serializer::{BincodeSerializer, Serializer},
+/// # };
 /// # use wireframe::codec::examples::HotlineFrameCodec;
 /// # use wireframe_testing::{
 /// #     drive_with_slow_codec_payloads, SlowIoConfig, SlowIoPacing,
 /// # };
-/// # async fn demo() -> std::io::Result<()> {
+/// # async fn demo() -> Result<(), Box<dyn std::error::Error>> {
 /// let codec = HotlineFrameCodec::new(4096);
 /// let app = WireframeApp::new().expect("app").with_codec(codec.clone());
 /// let config = SlowIoConfig::new().with_reader_pacing(SlowIoPacing::new(
 ///     NonZeroUsize::new(32).expect("non-zero"),
 ///     Duration::from_millis(5),
 /// ));
-/// let out = drive_with_slow_codec_payloads(app, &codec, vec![vec![1]], config).await?;
+/// let request =
+///     BincodeSerializer.serialize(&Envelope::new(1, Some(7), vec![1]))?;
+/// let out =
+///     drive_with_slow_codec_payloads(app, &codec, vec![request], config).await?;
 /// # let _ = out;
 /// # Ok(())
 /// # }

As per coding guidelines, **/*.rs: Document public Rust APIs using Rustdoc comments (///) so documentation can be generated with cargo doc.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@wireframe_testing/src/helpers/slow_io.rs` around lines 309 - 323, The rustdoc
example passes raw bytes to drive_with_slow_codec_payloads which won’t
round-trip with the default serializer; replace the literal vec![1] payload with
a serialized Envelope and adjust the example imports and return type: import
Envelope and a Serializer implementation (e.g. BincodeSerializer) from
crate::app/serializer, change the demo signature to return Result<(), Box<dyn
std::error::Error>> or similar, serialize an Envelope::new(id,
maybe_response_to, vec![1]) via BincodeSerializer.serialize(...) and pass that
serialized byte vector to drive_with_slow_codec_payloads(app, &codec,
vec![request], config). Ensure the example uses the symbols Envelope,
BincodeSerializer (or Serializer) and drive_with_slow_codec_payloads so it
compiles and demonstrates a working round trip.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/wireframe-testing-crate.md`:
- Around line 193-209: The sequence diagram misrepresents pacing by sleeping
before the first transfer; update the diagram so
start_paced_writes(payloads, config.writer_pacing) and
start_paced_reads(config.reader_pacing) invoke an immediate
send_first_chunk()/read_first_chunk(), then enter loops that sleep
(Time.sleep(config.*.delay)) between the remaining chunks (loop for each
remaining chunk / loop while app_has_more_output) before calling
send_chunk()/read_chunk(); rename the loop guards and config identifiers in the
diagram to writer_pacing/reader_pacing to match the proposed contract.

In `@tests/fixtures/slow_io_backpressure.rs`:
- Around line 238-248: In take_outputs_from_task ensure the spawned task truly
finished before block_on: check the Task handle with task.is_finished(); if it
is not finished, put the Task back into self.task and return an Err indicating
the slow-io drive is still pending and that Tokio time should be advanced (e.g.,
"slow-io drive is still pending; advance Tokio time before collecting outputs"),
otherwise proceed to block_on and collect outputs as before.
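The guard described above can be sketched with std's thread API, whose `JoinHandle::is_finished` mirrors tokio's method of the same name; the task body, names, and messages here are illustrative stand-ins, not the fixture's real code.

```rust
use std::thread;
use std::time::Duration;

// Stand-in for the slow-io drive task; it sleeps long enough that the
// is_finished() check below observes a still-running task.
fn spawn_drive() -> thread::JoinHandle<&'static str> {
    thread::spawn(|| {
        thread::sleep(Duration::from_millis(50));
        "outputs"
    })
}

fn main() {
    let task = spawn_drive();
    // The review's guard: refuse to collect while the task is pending.
    // The real fixture would put the handle back and return an Err here.
    if !task.is_finished() {
        eprintln!("slow-io drive is still pending; advance Tokio time before collecting outputs");
    }
    let out = task.join().expect("drive task panicked");
    assert_eq!(out, "outputs");
}
```

In the real fixture the `Err` path replaces the `eprintln!`, keeping the handle in `self.task` so the scenario can advance virtual time and retry.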

---

Duplicate comments:
In `@wireframe_testing/src/helpers/slow_io.rs`:
- Around line 309-323: The rustdoc example passes raw bytes to
drive_with_slow_codec_payloads which won’t round-trip with the default
serializer; replace the literal vec![1] payload with a serialized Envelope and
adjust the example imports and return type: import Envelope and a Serializer
implementation (e.g. BincodeSerializer) from crate::app/serializer, change the
demo signature to return Result<(), Box<dyn std::error::Error>> or similar,
serialize an Envelope::new(id, maybe_response_to, vec![1]) via
BincodeSerializer.serialize(...) and pass that serialized byte vector to
drive_with_slow_codec_payloads(app, &codec, vec![request], config). Ensure the
example uses the symbols Envelope, BincodeSerializer (or Serializer) and
drive_with_slow_codec_payloads so it compiles and demonstrates a working round
trip.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 6a35e89a-c209-4e5b-8f46-d58e597b7715

📥 Commits

Reviewing files that changed from the base of the PR and between 47e3162 and efdbb24.

📒 Files selected for processing (4)
  • docs/users-guide.md
  • docs/wireframe-testing-crate.md
  • tests/fixtures/slow_io_backpressure.rs
  • wireframe_testing/src/helpers/slow_io.rs

- Moved MAX_SLOW_IO_CAPACITY constant to slow_io.rs helper module
- Replaced scattered capacity limits with MAX_SLOW_IO_CAPACITY
- Exported MAX_SLOW_IO_CAPACITY and used it across tests and helpers
- Adjusted tests to use the centralized constant and updated error messages accordingly

This change removes hardcoded capacity values and aligns capacity checks on a single constant to improve maintainability and reduce duplication.

Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@wireframe_testing/src/helpers/slow_io.rs`:
- Around line 249-263: The examples use .expect() in public Rustdoc (e.g.,
WireframeApp::new(), NonZeroUsize::new()) but the doctest signature returns
std::io::Result<()>, so replace .expect() with fallible propagation: convert
WireframeApp::new() errors into std::io::Error using map_err(...)? and propagate
with ?, and replace NonZeroUsize::new(...).expect(...) with .ok_or_else(||
std::io::Error::other("..."))? to produce an io::Error; apply the same pattern
in both examples (the blocks involving WireframeApp::new and NonZeroUsize::new
in the first and second examples).
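The substitution the comment asks for can be sketched in isolation; `chunk_size` and the error text are hypothetical names, and `std::io::Error::other` is assumed available (Rust 1.74+).

```rust
use std::io;
use std::num::NonZeroUsize;

// Hypothetical helper showing the suggested pattern: map a zero chunk
// size to an io::Error instead of panicking via .expect().
fn chunk_size(n: usize) -> io::Result<NonZeroUsize> {
    NonZeroUsize::new(n).ok_or_else(|| io::Error::other("chunk size must be non-zero"))
}

fn main() -> io::Result<()> {
    // Propagates with `?`, matching a doctest whose signature returns
    // std::io::Result<()>.
    let size = chunk_size(32)?;
    assert_eq!(size.get(), 32);
    assert!(chunk_size(0).is_err());
    Ok(())
}
```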

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 33bcce7c-f484-43d1-bb93-47d88eee0cb3

📥 Commits

Reviewing files that changed from the base of the PR and between efdbb24 and 2071965.

📒 Files selected for processing (4)
  • tests/slow_io_backpressure.rs
  • wireframe_testing/src/helpers.rs
  • wireframe_testing/src/helpers/slow_io.rs
  • wireframe_testing/src/lib.rs

- Updated documentation diagrams and descriptions to accurately reflect
  pacing configuration parameters for writer and reader.
- Enhanced slow IO helpers doc comments with richer examples and improved
  error mapping in example code.
- Added details for first chunk sending/reading in pacing sequences.

Also:
- Added a check in slow-io backpressure test fixture to return error if
  drive task is still pending, guiding proper Tokio time advancement before output collection.

Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@docs/wireframe-testing-crate.md`:
- Around line 150-166: Replace the panic-based
NonZeroUsize::new(...).expect("non-zero") calls in the example with the
error-propagating pattern used in slow_io.rs so the snippet compiles as a
rustdoc example; specifically, change both
NonZeroUsize::new(8).expect("non-zero") and
NonZeroUsize::new(32).expect("non-zero") to NonZeroUsize::new(...).ok_or_else(||
"chunk size must be non-zero")? (used with SlowIoPacing::new) so the example
follows the .ok_or_else(...)? convention shown in slow_io.rs.

In `@tests/fixtures/slow_io_backpressure.rs`:
- Around line 134-139: The runtime is being paused per-drive causing a panic
when tokio::time::pause() is called a second time; modify impl Default for
SlowIoBackpressureWorld (the Default::default implementation that builds the
tokio runtime) to set the runtime into paused mode by adding .start_paused(true)
to tokio::runtime::Builder::new_current_thread().enable_all().build(), and then
remove the per-drive call to tokio::time::pause() from start_drive() so pause is
only done once at runtime creation.
- Around line 329-330: The test's advance_millis method uses
tokio::time::advance but doesn't yield control afterwards, so timer-backed tasks
may not finish on their first poll; update advance_millis (the pub fn
advance_millis method) to include a tokio::task::yield_now().await inside the
same block_on async block after tokio::time::advance(...) so the scheduler gives
woken timer tasks an extra turn and ensures task.is_finished() reflects
completed timers.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 79b65b22-5617-4f92-8170-10af4701d6e6

📥 Commits

Reviewing files that changed from the base of the PR and between 2071965 and 8b6cea5.

📒 Files selected for processing (3)
  • docs/wireframe-testing-crate.md
  • tests/fixtures/slow_io_backpressure.rs
  • wireframe_testing/src/helpers/slow_io.rs

…handling in slow IO tests

- Start Tokio runtime paused to better control virtual time in tests
- Add task yield after advancing Tokio's virtual time to allow scheduled tasks to run
- Remove redundant pause call as starting paused suffices
- Update example code in docs to use error handling instead of unwraps

These changes improve the reliability and clarity of the slow IO backpressure tests and their related documentation example.

Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/fixtures/slow_io_backpressure.rs`:
- Around line 105-109: The parse error message in CombinedDriveConfig::from_str
uses a backslash-continued string literal; replace that with a concat!() call to
join the long literal instead (e.g., use concat!("expected combined config ",
"payload_len/...", " got {s}")). Update the Err(format!(...)) invocation to pass
a concat!() produced string for the static text and keep the {s} interpolation
in the format! call so the final message remains identical.
- Around line 17-23: The fixture currently defers runtime creation errors by
storing runtime: Option and runtime_error; change the fixture constructor
function slow_io_backpressure_world to return
TestResult<SlowIoBackpressureWorld> (i.e. Result<..., _>) so you can create a
real tokio::runtime::Runtime eagerly with ? and fail the test at setup time,
change the SlowIoBackpressureWorld struct to hold runtime:
tokio::runtime::Runtime (not Option) and remove runtime_error, and adjust any
code that constructs or destructs task/outputs to account for the non-optional
runtime; apply the same pattern to the other fixtures referenced around lines
134-156 and 169-194.
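The `concat!()` substitution in the first item can be sketched as follows; the message text here is illustrative, not the fixture's exact wording.

```rust
// concat! joins the static pieces into one string literal at compile
// time, so format! still handles the runtime interpolation.
fn parse_error(s: &str) -> String {
    format!(
        concat!("expected combined config ", "in the documented form, ", "got {}"),
        s
    )
}

fn main() {
    let msg = parse_error("bogus");
    assert!(msg.starts_with("expected combined config"));
    assert!(msg.ends_with("got bogus"));
}
```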

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: a8ce8d28-5a02-4b62-b85d-95cabcd35b29

📥 Commits

Reviewing files that changed from the base of the PR and between 8b6cea5 and 93eea8a.

📒 Files selected for processing (2)
  • docs/wireframe-testing-crate.md
  • tests/fixtures/slow_io_backpressure.rs

…sure fixture

Refactored the error-message construction in the FromStr implementation for CombinedDriveConfig to use the concat! macro, improving readability without changing behaviour.

Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
Contributor

@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (1)
tests/fixtures/slow_io_backpressure.rs (1)

17-23: 🛠️ Refactor suggestion | 🟠 Major

Return TestResult<SlowIoBackpressureWorld> from the fixture and fail eagerly.

The current pattern carries runtime: Option<Runtime> and runtime_error: Option<String> through the world, deferring construction failure to unrelated later steps. Store a plain tokio::runtime::Runtime in the struct, make slow_io_backpressure_world() return TestResult<SlowIoBackpressureWorld>, and let callers apply ? to surface runtime-creation failures immediately. This also applies to the Default implementation (lines 138–160) and runtime() accessor (lines 181–188).

As per coding guidelines, "Consume fallible Rust fixtures in rstest by making the test return Result and applying ? to the fixture."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/fixtures/slow_io_backpressure.rs` around lines 17 - 23, Replace the
optional runtime pattern by storing a concrete tokio::runtime::Runtime inside
SlowIoBackpressureWorld and remove runtime_error: Option<String>; change the
fixture constructor slow_io_backpressure_world() to return
TestResult<SlowIoBackpressureWorld> so it can return Err(...) on runtime
creation failure, update the Default implementation to construct and return a
ready Runtime (or fail via the TestResult path) instead of leaving runtime None,
and adjust the runtime() accessor to return &Runtime (or &mut Runtime) rather
than Option<Runtime>; update callers to use the fixture as a fallible TestResult
and apply ? so runtime creation failures surface immediately.
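The fixture contract described above can be sketched without tokio; `World`, `TestResult`, and `build_runtime` here are illustrative stand-ins for the real types.

```rust
type TestResult<T> = Result<T, String>;

// Stand-in for tokio::runtime::Runtime; the real fixture stores the
// runtime directly rather than Option<Runtime> plus an error field.
struct World {
    runtime: String,
}

fn build_runtime() -> Result<String, String> {
    Ok("current-thread".to_string())
}

// Fixture constructor: fails eagerly instead of deferring the error
// to an unrelated later step.
fn slow_io_backpressure_world() -> TestResult<World> {
    let runtime = build_runtime().map_err(|e| format!("failed to build runtime: {e}"))?;
    Ok(World { runtime })
}

fn main() -> TestResult<()> {
    // Callers apply `?` so setup failures surface immediately.
    let world = slow_io_backpressure_world()?;
    assert_eq!(world.runtime, "current-thread");
    Ok(())
}
```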
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@tests/fixtures/slow_io_backpressure.rs`:
- Around line 17-23: Replace the optional runtime pattern by storing a concrete
tokio::runtime::Runtime inside SlowIoBackpressureWorld and remove runtime_error:
Option<String>; change the fixture constructor slow_io_backpressure_world() to
return TestResult<SlowIoBackpressureWorld> so it can return Err(...) on runtime
creation failure, update the Default implementation to construct and return a
ready Runtime (or fail via the TestResult path) instead of leaving runtime None,
and adjust the runtime() accessor to return &Runtime (or &mut Runtime) rather
than Option<Runtime>; update callers to use the fixture as a fallible TestResult
and apply ? so runtime creation failures surface immediately.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 3cab29e4-878e-4ee7-bdf4-3b30bf16d6f3

📥 Commits

Reviewing files that changed from the base of the PR and between 93eea8a and 182d133.

📒 Files selected for processing (1)
  • tests/fixtures/slow_io_backpressure.rs

…backpressure fixture

Replace SlowIoBackpressureWorld option-based runtime handling with direct runtime.
Change fixture to return TestResult to handle runtime build errors gracefully.
Update test scenarios and steps to propagate TestResult and unwrap safely.
Improves error handling and removes deprecated Default impl for the test fixture.

Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
@leynos leynos changed the title Implement slow reader and writer back-pressure simulation Add slow reader/writer back-pressure testkit, tests, and docs Mar 11, 2026
@leynos leynos merged commit 1eff26c into main Mar 11, 2026
6 checks passed
@leynos leynos deleted the add-slow-reader-writer-simulation-p8g9fg branch March 11, 2026 18:10