Add slow reader/writer back-pressure testkit, tests, and docs #495
Conversation
Reviewer's Guide

Implements a new slow I/O back-pressure testkit in wireframe_testing, including SlowIoPacing/SlowIoConfig, paced duplex drivers and helper functions, plus integration and BDD tests and documentation/roadmap updates to support deterministic slow reader/writer simulation.

Sequence diagram for drive_with_slow_codec_payloads slow I/O flow

sequenceDiagram
actor Test as Test
participant Helper as SlowIoHelpers
participant App as WireframeApp
participant Codec as FrameCodec
participant Tokio as TokioRuntime
participant Writer as ClientWriter
participant Reader as ClientReader
Test->>Helper: drive_with_slow_codec_payloads(app, codec, payloads, config)
Helper->>Helper: encode_payloads_with_codec(codec, payloads)
Helper->>Tokio: duplex(config.capacity)
Tokio-->>Helper: (client_stream, server_stream)
Helper->>Tokio: spawn app.handle_connection(server_stream)
Helper->>Tokio: split(client_stream)
Tokio-->>Helper: (Reader, Writer)
par paced_writer
alt writer_pacing is Some
loop while bytes_remaining
Helper->>Writer: write_with_optional_pacing(chunk, writer_pacing)
end
else writer_pacing is None
Helper->>Writer: write_all(all_bytes)
end
Helper->>Writer: shutdown()
and paced_reader
alt reader_pacing is Some
loop until EOF
Helper->>Reader: read_with_optional_pacing(reader_pacing)
Reader-->>Helper: chunk_bytes
end
else reader_pacing is None
Helper->>Reader: read_to_end()
Reader-->>Helper: all_bytes
end
end
Helper->>Helper: decode_frames_with_codec(codec, raw_bytes)
Helper->>Helper: extract_payloads(frames)
Helper-->>Test: io::Result<Vec<Vec<u8>>>
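The paced-writer branch above can be sketched synchronously; the names and signatures here are illustrative stand-ins (the real helpers are async and Tokio-based), but the chunk-then-pause shape matches the diagram:

```rust
use std::io::Write;
use std::num::NonZeroUsize;
use std::time::Duration;

/// Minimal sketch of the paced-write loop: split the bytes into
/// `chunk_size` pieces and pause between chunks, never before the first.
/// `pause` stands in for the async sleep the real helper would await.
fn write_with_pacing<W: Write>(
    writer: &mut W,
    bytes: &[u8],
    chunk_size: NonZeroUsize,
    delay: Duration,
    mut pause: impl FnMut(Duration),
) -> std::io::Result<usize> {
    let mut chunks_written = 0;
    for chunk in bytes.chunks(chunk_size.get()) {
        if chunks_written > 0 {
            // Sleep only between chunks, so the first write is immediate.
            pause(delay);
        }
        writer.write_all(chunk)?;
        chunks_written += 1;
    }
    Ok(chunks_written)
}
```

With 10 bytes and a chunk size of 4 this produces three writes (4, 4, 2 bytes) and two pauses.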
Updated class diagram for slow I/O helpers in wireframe_testing

classDiagram
direction LR
class SlowIoPacing {
+NonZeroUsize chunk_size
+Duration delay
+new(chunk_size: NonZeroUsize, delay: Duration) SlowIoPacing
}
class SlowIoConfig {
+Option~SlowIoPacing~ writer_pacing
+Option~SlowIoPacing~ reader_pacing
+usize capacity
+new() SlowIoConfig
+with_writer_pacing(pacing: SlowIoPacing) SlowIoConfig
+with_reader_pacing(pacing: SlowIoPacing) SlowIoConfig
+with_capacity(capacity: usize) SlowIoConfig
-validate() io::Result~SlowIoConfig~
}
class SlowIoHelpers {
<<utility>>
+drive_with_slow_frames(app: WireframeApp, frames: Vec~Vec~u8~~, config: SlowIoConfig) io::Result~Vec~u8~~
+drive_with_slow_payloads(app: WireframeApp, payloads: Vec~Vec~u8~~, config: SlowIoConfig) io::Result~Vec~u8~~
+drive_with_slow_codec_payloads(app: WireframeApp, codec: FrameCodec, payloads: Vec~Vec~u8~~, config: SlowIoConfig) io::Result~Vec~Vec~u8~~~
+drive_with_slow_codec_frames(app: WireframeApp, codec: FrameCodec, payloads: Vec~Vec~u8~~, config: SlowIoConfig) io::Result~Vec~Frame~~
-pause_between_chunks(delay: Duration, should_pause: bool) void
-write_with_optional_pacing(writer: AsyncWriteExt, bytes: &[u8], pacing: Option~SlowIoPacing~) io::Result~()~
-read_with_optional_pacing(reader: AsyncReadExt, pacing: Option~SlowIoPacing~) io::Result~Vec~u8~~
-drive_slow_internal(server_fn: FnOnce(DuplexStream), wire_bytes: Vec~u8~, config: SlowIoConfig) io::Result~Vec~u8~~
-encode_length_delimited_payloads(payloads: Vec~Vec~u8~~) io::Result~Vec~u8~~
-drive_slow_codec_frames_internal(handler: FnOnce(DuplexStream), codec: FrameCodec, payloads: Vec~Vec~u8~~, config: SlowIoConfig) io::Result~Vec~Frame~~
}
class WireframeApp {
+handle_connection(stream: DuplexStream) Future~()~
}
class FrameCodec {
<<interface>>
+Frame
}
class LengthDelimitedCodec {
+encode(item: Bytes, dst: BytesMut) Result~()~
}
class TestSerializer {
<<interface>>
}
SlowIoConfig --> SlowIoPacing : uses
SlowIoHelpers ..> SlowIoConfig : config
SlowIoHelpers ..> SlowIoPacing : pacing
SlowIoHelpers ..> WireframeApp : drives
SlowIoHelpers ..> FrameCodec : encodes_decodes
SlowIoHelpers ..> LengthDelimitedCodec : default_framing
WireframeApp ..> TestSerializer : generic_over
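A minimal stand-in for the builder API in the diagram, assuming a default pipe capacity of 4096 (the real SlowIoConfig in wireframe_testing may differ in defaults and validation):

```rust
use std::num::NonZeroUsize;
use std::time::Duration;

// Stand-in types mirroring the class diagram; the real definitions live in
// wireframe_testing and may differ in detail.
#[derive(Clone, Copy, Debug, PartialEq)]
struct SlowIoPacing {
    chunk_size: NonZeroUsize,
    delay: Duration,
}

impl SlowIoPacing {
    fn new(chunk_size: NonZeroUsize, delay: Duration) -> Self {
        Self { chunk_size, delay }
    }
}

#[derive(Debug, Default)]
struct SlowIoConfig {
    writer_pacing: Option<SlowIoPacing>,
    reader_pacing: Option<SlowIoPacing>,
    capacity: usize,
}

impl SlowIoConfig {
    fn new() -> Self {
        // Assumed default duplex capacity; purely illustrative.
        Self { capacity: 4096, ..Self::default() }
    }
    // Builder methods consume and return Self so calls chain.
    fn with_writer_pacing(mut self, pacing: SlowIoPacing) -> Self {
        self.writer_pacing = Some(pacing);
        self
    }
    fn with_reader_pacing(mut self, pacing: SlowIoPacing) -> Self {
        self.reader_pacing = Some(pacing);
        self
    }
    fn with_capacity(mut self, capacity: usize) -> Self {
        self.capacity = capacity;
        self
    }
}
```

Leaving a pacing field as None selects the unpaced fast path (write_all/read_to_end) shown in the sequence diagram.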
File-Level Changes
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID:
📒 Files selected for processing (3)
Summary by CodeRabbit
Walkthrough

Introduce deterministic slow I/O back-pressure testing: add SlowIoPacing and SlowIoConfig, four slow-I/O driving helpers and re-exports in wireframe_testing, add BDD integration tests and fixtures, update docs/roadmap and crate docs, and preserve string panic payloads verbatim in panic formatting.
Sequence Diagram(s)

sequenceDiagram
autonumber
participant Client as "Client"
participant WriterTask as "Writer Task"
participant AppServer as "App / Server"
participant ReaderTask as "Reader Task"
participant Runtime as "Runtime / Time"
Client->>WriterTask: call drive_with_slow_*
activate WriterTask
WriterTask->>WriterTask: chunk data, wait between chunks
WriterTask->>AppServer: send chunk
activate AppServer
AppServer->>AppServer: process request
AppServer-->>ReaderTask: produce response
deactivate AppServer
activate ReaderTask
ReaderTask->>ReaderTask: chunk response, wait between chunks
ReaderTask->>Runtime: rely on paused/advance time
activate Runtime
Runtime->>Runtime: advance simulated time
Runtime-->>ReaderTask: resume
deactivate Runtime
ReaderTask->>Client: deliver chunked response
deactivate ReaderTask
WriterTask-->>Client: complete result
deactivate WriterTask
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 3 passed
@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix:

fn build_echo_app(
codec: HotlineFrameCodec,
) -> io::Result<WireframeApp<BincodeSerializer, (), Envelope, HotlineFrameCodec>> {
WireframeApp::<BincodeSerializer, (), Envelope>::new()
.map_err(|e| io::Error::other(format!("app init: {e}")))?
.with_codec(codec)
.route(
1,
Arc::new(|_: &Envelope| -> BoxFuture<'static, ()> { Box::pin(async {}) }),
)
.map_err(|e| io::Error::other(format!("route: {e}")))
}

❌ New issue: Code Duplication
This comment was marked as resolved.
67d621d to e95b36b
@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix:

async fn slow_writer_delays_inbound_completion() -> io::Result<()> {
let pacing = SlowIoPacing::new(
NonZeroUsize::new(8).ok_or_else(|| io::Error::other("non-zero"))?,
Duration::from_millis(5),
);
run_paced_codec_test(
vec![b'a'; 64],
hotline_codec(),
SlowIoConfig::new().with_writer_pacing(pacing),
100,
)
.await
}

❌ New issue: Code Duplication
This comment was marked as resolved.
@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix:

Code Duplication: tests/slow_io_backpressure.rs

What led to degradation? The module contains 4 functions with similar structure: build_echo_app, build_length_delimited_echo_app, slow_reader_delays_outbound_draining, slow_writer_delays_inbound_completion.

Why does this problem occur? Duplicated code often leads to code that is harder to change, since the same logical change has to be made in multiple functions. More duplication gives lower code health.

How to fix it? A certain degree of duplicated code might be acceptable. The problems start when the same behaviour is duplicated across the functions in the module, i.e. a violation of the Don't Repeat Yourself (DRY) principle. DRY violations lead to code that is changed together in predictable patterns, which is both expensive and risky. DRY violations can be identified using CodeScene's X-Ray analysis to detect clusters of change-coupled functions with high code similarity. Read More
Tip: For best results, initiate chat on the files or code changes.
The only residual structural similarity is between the functions flagged above. No further action is required. The CodeScene score should improve once the scan re-runs against the current commit.
There was a problem hiding this comment.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 669c611a1d
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
Actionable comments posted: 8
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/execplans/8-5-2-slow-reader-and-writer-simulation.md`:
- Line 76: Update the checkbox line containing the phrase "Finalised additive
slow-I/O helper API around" to use Oxford English "-ized" spelling by replacing
"Finalised" with "Finalized" so the text reads "Finalized additive slow-I/O
helper API around"; locate the line with the checked item (the "[x] (2026-03-06
01:35Z) Finalised..." entry) and make the single-word substitution.
- Around line 84-85: The checklist line "- [x] (2026-03-06 01:35Z) Updated
design docs, users guide, and roadmap entry `8.5.2`." needs a possessive
apostrophe: change "users guide" to "user's guide" so it reads "Updated design
docs, user's guide, and roadmap entry `8.5.2`."; update that exact text in the
document "8-5-2-slow-reader-and-writer-simulation.md".
- Line 202: The heading "### Stage A: Finalise slow-I/O helper contract (no
behavioural changes yet)" uses Oxford English spelling with "-ise"; change
"Finalise" to "Finalize" and update any other occurrences of
"finalise"/"finalised" in this document to the "-ize" form to comply with the
en-GB-oxendict guideline; look for the exact string "Stage A: Finalise slow-I/O
helper contract (no behavioural changes yet)" and replace it and related
variants while preserving punctuation and capitalization.
In `@tests/fixtures/slow_io_backpressure.rs`:
- Around line 218-230: The start_drive function currently allows starting a new
drive while a prior async task (self.task) is still running which detaches the
old JoinHandle; add a guard at the top of start_drive that checks
self.task.as_ref().is_some_and(|t| !t.is_finished()) and return an Err("slow-io
drive is already running".into()) if true, preventing a second drive from being
started until the existing JoinHandle has finished or been awaited; keep the
rest of start_drive (build_app, codec, payload, pause, spawn) unchanged and only
set self.task after this check.
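The guard the prompt describes can be sketched with a std thread handle standing in for the Tokio JoinHandle (Drive and its fields are hypothetical stand-ins for the fixture):

```rust
use std::sync::mpsc;
use std::thread;

/// Hypothetical stand-in for the slow-io fixture: holds at most one
/// running drive task.
struct Drive {
    task: Option<thread::JoinHandle<()>>,
}

impl Drive {
    fn start_drive(&mut self, work: impl FnOnce() + Send + 'static) -> Result<(), String> {
        // Refuse to start while a previous drive is still running, so the
        // old JoinHandle is never silently detached.
        if self.task.as_ref().is_some_and(|t| !t.is_finished()) {
            return Err("slow-io drive is already running".into());
        }
        self.task = Some(thread::spawn(work));
        Ok(())
    }
}
```

The same is_some_and(|t| !t.is_finished()) shape applies to a tokio::task::JoinHandle, which exposes an equivalent is_finished method.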
In `@tests/scenarios/slow_io_backpressure_scenarios.rs`:
- Around line 11-35: Remove the repeated #[expect(unused_variables)] attributes
and instead consume the fixture parameter inside each empty test function by
adding a single-use binding like let _ = slow_io_backpressure_world; in
slow_writer_delays, slow_reader_delays, and combined_slow_io_round_trip so the
parameter is used without changing behavior.
In `@tests/slow_io_backpressure.rs`:
- Around line 247-279: Replace the current length-only verification with a full
payload equality check: after awaiting the task and getting response from
drive_with_slow_codec_payloads, call deserialize_echo_payloads(&response)
(instead of deserialize_echo_lengths) and compare the resulting Vec<u8> payloads
to the original payload_a and payload_b (use clones or references as needed); if
they differ, return an io::Error containing both expected and actual payloads in
the message to fail the test on byte-level corruption or reordering.
In `@wireframe_testing/src/helpers/slow_io.rs`:
- Around line 131-133: Replace uses of io::Error::new(io::ErrorKind::Other, ...)
with io::Error::other(...) in three places: the writer chunk slice (the let
chunk = bytes.get(offset..end).ok_or_else(|| { ... })? expression), the reader
chunk slice ok_or_else in the corresponding reader code, and the server task
Err(...) creation that currently builds an io::Error::new with format!("server
task failed: {panic_msg}"); for the server task call
io::Error::other(format!("server task failed: {panic_msg}")) so the error
construction uses io::Error::other in the same places.
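For context, io::Error::other (stable since Rust 1.74) is shorthand for io::Error::new(io::ErrorKind::Other, ...); a minimal sketch of the server-task case:

```rust
use std::io;

// io::Error::other wraps any error payload (here a String) into an
// io::Error with kind ErrorKind::Other.
fn server_task_error(panic_msg: &str) -> io::Error {
    io::Error::other(format!("server task failed: {panic_msg}"))
}
```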
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: c3cb23c2-4173-4d9f-8f96-b749d4f9f978
📒 Files selected for processing (20)
- docs/adr-002-streaming-requests-and-shared-message-assembly.md
- docs/execplans/11-1-2-structured-logging-and-tracing-spans.md
- docs/execplans/8-5-2-slow-reader-and-writer-simulation.md
- docs/roadmap.md
- docs/users-guide.md
- docs/wireframe-testing-crate.md
- src/panic.rs
- src/server/connection_spawner.rs
- tests/features/slow_io_backpressure.feature
- tests/fixtures/mod.rs
- tests/fixtures/slow_io_backpressure.rs
- tests/scenarios/mod.rs
- tests/scenarios/slow_io_backpressure_scenarios.rs
- tests/slow_io_backpressure.rs
- tests/steps/mod.rs
- tests/steps/slow_io_backpressure_steps.rs
- wireframe_testing/Cargo.toml
- wireframe_testing/src/helpers.rs
- wireframe_testing/src/helpers/slow_io.rs
- wireframe_testing/src/lib.rs
@coderabbitai Have the following now been resolved? Annul any requirements that violate the en-GB-oxendict spelling (-ize / -yse / -our) conventions (for example a request to replace "normalize" with "normalise" or "artefact" with "artifact"), or where the requirement unnecessarily increases cyclomatic complexity. |
@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix:

fn deserialize_echo_lengths(bytes: &[Vec<u8>]) -> io::Result<Vec<usize>> {
bytes
.iter()
.map(|raw| {
let (env, _) = BincodeSerializer
.deserialize::<Envelope>(raw)
.map_err(|e| {
io::Error::new(io::ErrorKind::InvalidData, format!("deserialize: {e}"))
})?;
Ok(env.payload_bytes().len())
})
.collect()
}

❌ New issue: Code Duplication
This comment was marked as resolved.
@coderabbitai review

✅ Actions performed: Review triggered.
24d9be6 to 47e3162
Actionable comments posted: 6
♻️ Duplicate comments (1)
tests/fixtures/slow_io_backpressure.rs (1)
329-340: ⚠️ Potential issue | 🟠 Major

Reject trailing bytes when reading the echoed envelope.
Capture the consumed length from
BincodeSerializer::deserializeand fail when it does not matchraw.len(). As written, a validEnvelopeplus trailing garbage still satisfies the behavioural assertion and can hide framing regressions.Proposed fix
-    let (env, _) = BincodeSerializer
+    let (env, consumed) = BincodeSerializer
         .deserialize::<Envelope>(raw)
         .map_err(|e| format!("deserialize: {e}"))?;
+    if consumed != raw.len() {
+        return Err(
+            format!(
+                "deserialize: trailing bytes after envelope: consumed {consumed} of {}",
+                raw.len()
+            )
+            .into(),
+        );
+    }
     let actual_len = env.payload_bytes().len();

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/fixtures/slow_io_backpressure.rs` around lines 329 - 340, In assert_completed_payload_len capture the consumed byte count returned by BincodeSerializer::deserialize::<Envelope>(raw) (e.g. let (env, consumed) = BincodeSerializer.deserialize::<Envelope>(raw)...) and immediately check that consumed == raw.len(); if not, return an error indicating trailing bytes (including consumed and raw.len() in the message) before proceeding to read env.payload_bytes().len(); this ensures trailing garbage after a valid Envelope is rejected.
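The consumed-length check described above can be sketched with a stand-in decoder; the real code uses BincodeSerializer::deserialize, which likewise returns the number of bytes consumed alongside the value:

```rust
/// Decode a value and reject any trailing bytes. The inner `deserialize`
/// is a stand-in that "consumes" exactly one byte as the value.
fn decode_exact(raw: &[u8]) -> Result<u8, String> {
    let deserialize = |bytes: &[u8]| -> Result<(u8, usize), String> {
        bytes
            .first()
            .copied()
            .map(|b| (b, 1))
            .ok_or_else(|| "empty input".into())
    };
    let (value, consumed) = deserialize(raw)?;
    // Guard: a valid value followed by trailing garbage is still an error.
    if consumed != raw.len() {
        return Err(format!(
            "deserialize: trailing bytes after envelope: consumed {consumed} of {}",
            raw.len()
        ));
    }
    Ok(value)
}
```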
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/execplans/11-1-2-structured-logging-and-tracing-spans.md`:
- Line 18: Remove the unresolved merge conflict marker `<<<<<<< LEFT` from the
file; search for and delete any other leftover conflict markers (`<<<<<<<`,
`=======`, `>>>>>>>`) in the document, then save the Markdown so it parses
cleanly (optionally run your Markdown linter/preview to confirm no remaining
conflict artifacts).
In `@docs/users-guide.md`:
- Around line 430-453: The snippet is feeding raw bytes vec![1,2,3] into
drive_with_slow_codec_payloads but
WireframeApp::new()?.with_codec(codec.clone()) expects codec-framed Envelope
messages; replace the payload with a properly serialized Envelope (create a
wireframe::Envelope with the desired request/headers, encode/serialize it using
HotlineFrameCodec or the codec.encode/serialize helper to produce Vec<u8>) and
pass that Vec<u8> to drive_with_slow_codec_payloads, or alternatively switch to
the raw-frame helper variant (e.g., drive_with_slow_codec_raw_frames or
similarly named helper) that accepts pre-framed raw frames instead of codec
payloads; ensure you use the existing HotlineFrameCodec and Envelope types so
the app receives a valid request.
In `@docs/wireframe-testing-crate.md`:
- Around line 190-192: The sequence diagram references a non-existent helper
name drive_with_slow_io_payloads; update the diagram label to the exported
helper name used elsewhere (either drive_with_slow_payloads or
drive_with_slow_codec_payloads) so the diagram matches the public API; locate
the Runtime->>Helper line and replace drive_with_slow_io_payloads(app, payloads,
config) with the correct exported helper name (choose drive_with_slow_payloads
or drive_with_slow_codec_payloads consistent with the surrounding docs).
In `@tests/slow_io_backpressure.rs`:
- Line 27: Replace the hard-coded 10 MiB literal and matching error text with
the public test constant: import or reference wireframe_testing::TEST_MAX_FRAME
and change MAX_CAPACITY_PLUS_ONE to be (TEST_MAX_FRAME as usize) + 1 (or
otherwise derive the ceiling case from TEST_MAX_FRAME) and update any
assertions/error messages that mention "10 MiB" to interpolate or assert against
TEST_MAX_FRAME so the test relies on TEST_MAX_FRAME instead of a duplicated
numeric literal (also apply the same replacement at the other occurrence around
lines 335-340).
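A sketch of the suggested derivation, with the 10 MiB value assumed here as a stand-in for wireframe_testing::TEST_MAX_FRAME:

```rust
// Hypothetical stand-in for wireframe_testing::TEST_MAX_FRAME (assumed 10 MiB).
// Deriving the over-capacity test value from the constant keeps the test and
// the limit in lockstep instead of duplicating a numeric literal.
const TEST_MAX_FRAME: u64 = 10 * 1024 * 1024;
const MAX_CAPACITY_PLUS_ONE: usize = (TEST_MAX_FRAME as usize) + 1;
```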
In `@wireframe_testing/src/helpers/slow_io.rs`:
- Around line 249-259: The example uses vec![1, 2, 3] which suggests a payload
but the function drive_with_slow_frames expects pre-framed/raw-frame bytes;
update the snippet to show an actual encoded frame (i.e., replace the bare
payload with the wire-format bytes) or else switch to the payload helper API so
it’s clear you’re supplying a payload — adjust the example around
drive_with_slow_frames, SlowIoConfig, SlowIoPacing, and WireframeApp to use the
correct pre-framed bytes or the payload helper accordingly.
- Around line 206-208: Remove the unnecessary Send bound on the Fut generic:
update the function/type bounds where F: FnOnce(DuplexStream) -> Fut and Fut:
std::future::Future<Output = ()> + Send to drop the + Send so Fut is only
required to be a Future<Output = ()>; this applies to the same generic bounds
locations (e.g., the helper functions that spawn server_fut, writer_fut,
reader_fut which are polled via tokio::try_join! on the current task and never
passed to tokio::spawn).
---
Duplicate comments:
In `@tests/fixtures/slow_io_backpressure.rs`:
- Around line 329-340: In assert_completed_payload_len capture the consumed byte
count returned by BincodeSerializer::deserialize::<Envelope>(raw) (e.g. let
(env, consumed) = BincodeSerializer.deserialize::<Envelope>(raw)...) and
immediately check that consumed == raw.len(); if not, return an error indicating
trailing bytes (including consumed and raw.len() in the message) before
proceeding to read env.payload_bytes().len(); this ensures trailing garbage
after a valid Envelope is rejected.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 59e2de79-cdae-416a-aa72-8ae46f087a0f
📒 Files selected for processing (20)
- docs/adr-002-streaming-requests-and-shared-message-assembly.md
- docs/execplans/11-1-2-structured-logging-and-tracing-spans.md
- docs/execplans/8-5-2-slow-reader-and-writer-simulation.md
- docs/roadmap.md
- docs/users-guide.md
- docs/wireframe-testing-crate.md
- src/panic.rs
- src/server/connection_spawner.rs
- tests/features/slow_io_backpressure.feature
- tests/fixtures/mod.rs
- tests/fixtures/slow_io_backpressure.rs
- tests/scenarios/mod.rs
- tests/scenarios/slow_io_backpressure_scenarios.rs
- tests/slow_io_backpressure.rs
- tests/steps/mod.rs
- tests/steps/slow_io_backpressure_steps.rs
- wireframe_testing/Cargo.toml
- wireframe_testing/src/helpers.rs
- wireframe_testing/src/helpers/slow_io.rs
- wireframe_testing/src/lib.rs
@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix:

Bumpy Road Ahead: wireframe_testing/src/helpers/slow_io.rs, SlowIoConfig.validate

What led to degradation? SlowIoConfig.validate has 2 blocks with nested conditional logic. Any nesting of 2 or deeper is considered. Threshold is 2 blocks per function.

Why does this problem occur? A Bumpy Road is a function that contains multiple chunks of nested conditional logic inside the same function. The deeper the nesting and the more bumps, the lower the code health.

How to fix it? Bumpy Road implementations indicate a lack of encapsulation. Check out the detailed description of the Bumpy Road code health issue.
This comment was marked as resolved.
Refactor slow I/O example code in the users guide to use BincodeSerializer and Envelope for request serialization. This clarifies how to prepare payloads, enhancing documentation accuracy and usability. Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
…duplication Extracted chunk size validation logic for writer and reader pacing into a new helper function validate_pacing_chunk_size to improve code clarity and maintainability in slow_io.rs. Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
Actionable comments posted: 2
♻️ Duplicate comments (1)
wireframe_testing/src/helpers/slow_io.rs (1)
309-323: ⚠️ Potential issue | 🟡 Minor

Serialize an Envelope in this rustdoc example.

drive_with_slow_codec_payloads injects codec payload bytes straight into the app. With the default serializer still in place, vec![1] is not a valid request payload, so this example documents a round trip that will not succeed if copied into a test.

🛠️ Proposed fix
 /// # use std::{num::NonZeroUsize, time::Duration};
-/// # use wireframe::app::WireframeApp;
+/// # use wireframe::{
+/// #     app::{Envelope, WireframeApp},
+/// #     serializer::{BincodeSerializer, Serializer},
+/// # };
 /// # use wireframe::codec::examples::HotlineFrameCodec;
 /// # use wireframe_testing::{
 /// #     drive_with_slow_codec_payloads, SlowIoConfig, SlowIoPacing,
 /// # };
-/// # async fn demo() -> std::io::Result<()> {
+/// # async fn demo() -> Result<(), Box<dyn std::error::Error>> {
 /// let codec = HotlineFrameCodec::new(4096);
 /// let app = WireframeApp::new().expect("app").with_codec(codec.clone());
 /// let config = SlowIoConfig::new().with_reader_pacing(SlowIoPacing::new(
 ///     NonZeroUsize::new(32).expect("non-zero"),
 ///     Duration::from_millis(5),
 /// ));
-/// let out = drive_with_slow_codec_payloads(app, &codec, vec![vec![1]], config).await?;
+/// let request =
+///     BincodeSerializer.serialize(&Envelope::new(1, Some(7), vec![1]))?;
+/// let out =
+///     drive_with_slow_codec_payloads(app, &codec, vec![request], config).await?;
 /// # let _ = out;
 /// # Ok(())
 /// # }

As per coding guidelines: **/*.rs: Document public Rust APIs using Rustdoc comments (///) so documentation can be generated with cargo doc.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@wireframe_testing/src/helpers/slow_io.rs` around lines 309 - 323, The rustdoc example passes raw bytes to drive_with_slow_codec_payloads which won’t round-trip with the default serializer; replace the literal vec![1] payload with a serialized Envelope and adjust the example imports and return type: import Envelope and a Serializer implementation (e.g. BincodeSerializer) from crate::app/serializer, change the demo signature to return Result<Box<dyn std::error::Error>> or similar, serialize an Envelope::new(id, maybe_response_to, vec![1]) via BincodeSerializer.serialize(...) and pass that serialized byte vector to drive_with_slow_codec_payloads(app, &codec, vec![request], config). Ensure the example uses the symbols Envelope, BincodeSerializer (or Serializer) and drive_with_slow_codec_payloads so it compiles and demonstrates a working round trip.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/wireframe-testing-crate.md`:
- Around line 193-209: The sequence diagram misrepresents pacing by sleeping
before the first transfer; update the diagram to show
start_paced_writes(start_paced_writes(payloads, config.writer_pacing)) and
start_paced_reads(start_paced_reads(config.reader_pacing)) invoking an immediate
send_first_chunk()/read_first_chunk(), then enter loops that sleep
(Time.sleep(config.*.delay)) between remaining chunks (loop for each remaining
chunk / loop while app_has_more_output) before calling
send_chunk()/read_chunk(); rename the loop guards and config identifiers in the
diagram to writer_pacing/reader_pacing to match the proposed contract.
In `@tests/fixtures/slow_io_backpressure.rs`:
- Around line 238-248: In take_outputs_from_task ensure the spawned task truly
finished before block_on: check the Task handle with task.is_finished(); if it
is not finished, put the Task back into self.task and return an Err indicating
the slow-io drive is still pending and that Tokio time should be advanced (e.g.,
"slow-io drive is still pending; advance Tokio time before collecting outputs"),
otherwise proceed to block_on and collect outputs as before.
---
Duplicate comments:
In `@wireframe_testing/src/helpers/slow_io.rs`:
- Around line 309-323: The rustdoc example passes raw bytes to
drive_with_slow_codec_payloads which won’t round-trip with the default
serializer; replace the literal vec![1] payload with a serialized Envelope and
adjust the example imports and return type: import Envelope and a Serializer
implementation (e.g. BincodeSerializer) from crate::app/serializer, change the
demo signature to return Result<Box<dyn std::error::Error>> or similar,
serialize an Envelope::new(id, maybe_response_to, vec![1]) via
BincodeSerializer.serialize(...) and pass that serialized byte vector to
drive_with_slow_codec_payloads(app, &codec, vec![request], config). Ensure the
example uses the symbols Envelope, BincodeSerializer (or Serializer) and
drive_with_slow_codec_payloads so it compiles and demonstrates a working round
trip.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 6a35e89a-c209-4e5b-8f46-d58e597b7715
📒 Files selected for processing (4)
- docs/users-guide.md
- docs/wireframe-testing-crate.md
- tests/fixtures/slow_io_backpressure.rs
- wireframe_testing/src/helpers/slow_io.rs
- Moved MAX_SLOW_IO_CAPACITY constant to slow_io.rs helper module - Replaced scattered capacity limits with MAX_SLOW_IO_CAPACITY - Exported MAX_SLOW_IO_CAPACITY and used it across tests and helpers - Adjusted tests to use the centralized constant and updated error messages accordingly This change removes hardcoded capacity values and aligns capacity checks on a single constant to improve maintainability and reduce duplication. Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@wireframe_testing/src/helpers/slow_io.rs`:
- Around line 249-263: The examples use .expect() in public Rustdoc (e.g.,
WireframeApp::new(), NonZeroUsize::new()) but the doctest signature returns
std::io::Result<()>, so replace .expect() with fallible propagation: convert
WireframeApp::new() errors into std::io::Error using map_err(...)? and propagate
with ?, and replace NonZeroUsize::new(...).expect(...) with .ok_or_else(||
std::io::Error::other("..."))? to produce an io::Error; apply the same pattern
in both examples (the blocks involving WireframeApp::new and NonZeroUsize::new
in the first and second examples).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 33bcce7c-f484-43d1-bb93-47d88eee0cb3
📒 Files selected for processing (4)
- tests/slow_io_backpressure.rs
- wireframe_testing/src/helpers.rs
- wireframe_testing/src/helpers/slow_io.rs
- wireframe_testing/src/lib.rs
- Updated documentation diagrams and descriptions to accurately reflect pacing configuration parameters for writer and reader. - Enhanced slow IO helpers doc comments with richer examples and improved error mapping in example code. - Added details for first chunk sending/reading in pacing sequences. Also: - Added a check in slow-io backpressure test fixture to return error if drive task is still pending, guiding proper Tokio time advancement before output collection. Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/wireframe-testing-crate.md`:
- Around line 150-166: Replace the panic-based
NonZeroUsize::new(...).expect("non-zero") calls in the example with the
error-propagating pattern used in slow_io.rs so the snippet compiles as a
rustdoc example; specifically, change both
NonZeroUsize::new(8).expect("non-zero") and
NonZeroUsize::new(32).expect("non-zero") to NonZeroUsize::new(...).ok_or_else(||
"chunk size must be non-zero")? (used with SlowIoPacing::new) so the example
follows the .ok_or_else(...)? convention shown in slow_io.rs.
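The .ok_or_else(...)? convention the prompt refers to, in a minimal fallible form (the error type here is illustrative):

```rust
use std::num::NonZeroUsize;

// Map a zero chunk size to an error instead of panicking with .expect(),
// so the doctest's Result return type can propagate the failure.
fn chunk_size(n: usize) -> Result<NonZeroUsize, Box<dyn std::error::Error>> {
    Ok(NonZeroUsize::new(n).ok_or_else(|| "chunk size must be non-zero")?)
}
```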
In `@tests/fixtures/slow_io_backpressure.rs`:
- Around line 134-139: The runtime is being paused per-drive causing a panic
when tokio::time::pause() is called a second time; modify impl Default for
SlowIoBackpressureWorld (the Default::default implementation that builds the
tokio runtime) to set the runtime into paused mode by adding .start_paused(true)
to tokio::runtime::Builder::new_current_thread().enable_all().build(), and then
remove the per-drive call to tokio::time::pause() from start_drive() so pause is
only done once at runtime creation.
- Around line 329-330: The test's advance_millis method uses
tokio::time::advance but doesn't yield control afterwards, so timer-backed tasks
may not finish on their first poll; update advance_millis (the pub fn
advance_millis method) to include a tokio::task::yield_now().await inside the
same block_on async block after tokio::time::advance(...) so the scheduler gives
woken timer tasks an extra turn and ensures task.is_finished() reflects
completed timers.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 79b65b22-5617-4f92-8170-10af4701d6e6
📒 Files selected for processing (3)
- docs/wireframe-testing-crate.md
- tests/fixtures/slow_io_backpressure.rs
- wireframe_testing/src/helpers/slow_io.rs
…handling in slow IO tests - Start Tokio runtime paused to better control virtual time in tests - Add task yield after advancing Tokio's virtual time to allow scheduled tasks to run - Remove redundant pause call as starting paused suffices - Update example code in docs to use error handling instead of unwraps These changes improve the reliability and clarity of the slow IO backpressure tests and their related documentation example. Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/fixtures/slow_io_backpressure.rs`:
- Around line 105-109: The parse error message in CombinedDriveConfig::from_str
uses a backslash-continued string literal; replace that with a concat!() call to
join the long literal instead (e.g., use concat!("expected combined config ",
"payload_len/...", " got {s}")). Update the Err(format!(...)) invocation to pass
a concat!() produced string for the static text and keep the {s} interpolation
in the format! call so the final message remains identical.
- Around line 17-23: The fixture currently defers runtime creation errors by
storing runtime: Option and runtime_error; change the fixture constructor
function slow_io_backpressure_world to return
TestResult<SlowIoBackpressureWorld> (i.e. Result<..., _>) so you can create a
real tokio::runtime::Runtime eagerly with ? and fail the test at setup time,
change the SlowIoBackpressureWorld struct to hold runtime:
tokio::runtime::Runtime (not Option) and remove runtime_error, and adjust any
code that constructs or destructs task/outputs to account for the non-optional
runtime; apply the same pattern to the other fixtures referenced around lines
134-156 and 169-194.
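The `concat!()` form suggested for the parse error can be sketched like this; the message text and function name are illustrative, not the fixture's exact wording:

```rust
/// Long static error text joined with concat!() instead of a
/// backslash-continued string literal; the `{s}` interpolation stays
/// in the format! call, so the final message is unchanged.
fn combined_config_error(s: &str) -> String {
    format!(
        concat!(
            "expected combined config ",
            "payload_len/chunk/delay_ms, ",
            "got {s}"
        ),
        s = s,
    )
}
```

`concat!()` expands to a single string literal at compile time, so `format!` accepts it directly as its format string.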
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: a8ce8d28-5a02-4b62-b85d-95cabcd35b29
📒 Files selected for processing (2)
docs/wireframe-testing-crate.md
tests/fixtures/slow_io_backpressure.rs
…sure fixture

Refactored the error message construction in the CombinedDriveConfig::FromStr implementation to use the concat! macro for better readability and formatting, without changing functionality.

Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
♻️ Duplicate comments (1)
tests/fixtures/slow_io_backpressure.rs (1)
17-23: 🛠️ Refactor suggestion | 🟠 Major

Return `TestResult<SlowIoBackpressureWorld>` from the fixture and fail eagerly.

The current pattern carries `runtime: Option<Runtime>` and `runtime_error: Option<String>` through the world, deferring construction failure to unrelated later steps. Store a plain `tokio::runtime::Runtime` in the struct, make `slow_io_backpressure_world()` return `TestResult<SlowIoBackpressureWorld>`, and let callers apply `?` to surface runtime-creation failures immediately. This also applies to the `Default` implementation (lines 138–160) and the `runtime()` accessor (lines 181–188).

As per coding guidelines, "Consume fallible Rust fixtures in rstest by making the test return Result and applying ? to the fixture."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/fixtures/slow_io_backpressure.rs` around lines 17 - 23, Replace the optional runtime pattern by storing a concrete tokio::runtime::Runtime inside SlowIoBackpressureWorld and remove runtime_error: Option<String>; change the fixture constructor slow_io_backpressure_world() to return TestResult<SlowIoBackpressureWorld> so it can return Err(...) on runtime creation failure, update the Default implementation to construct and return a ready Runtime (or fail via the TestResult path) instead of leaving runtime None, and adjust the runtime() accessor to return &Runtime (or &mut Runtime) rather than Option<Runtime>; update callers to use the fixture as a fallible TestResult and apply ? so runtime creation failures surface immediately.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@tests/fixtures/slow_io_backpressure.rs`:
- Around line 17-23: Replace the optional runtime pattern by storing a concrete
tokio::runtime::Runtime inside SlowIoBackpressureWorld and remove runtime_error:
Option<String>; change the fixture constructor slow_io_backpressure_world() to
return TestResult<SlowIoBackpressureWorld> so it can return Err(...) on runtime
creation failure, update the Default implementation to construct and return a
ready Runtime (or fail via the TestResult path) instead of leaving runtime None,
and adjust the runtime() accessor to return &Runtime (or &mut Runtime) rather
than Option<Runtime>; update callers to use the fixture as a fallible TestResult
and apply ? so runtime creation failures surface immediately.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 3cab29e4-878e-4ee7-bdf4-3b30bf16d6f3
📒 Files selected for processing (1)
tests/fixtures/slow_io_backpressure.rs
…backpressure fixture

Replace SlowIoBackpressureWorld option-based runtime handling with a direct runtime. Change the fixture to return TestResult to handle runtime build errors gracefully. Update test scenarios and steps to propagate TestResult and unwrap safely.

Improves error handling and removes the deprecated Default impl for the test fixture.

Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
Summary
Changes
Rationale
Plan of work (Stage A–G)
Acceptance criteria
Validation and testing approach
Risks and mitigations
Documentation and maintenance
Related files
Revision notes
📎 Task: https://www.devboxer.com/task/b6a14c4c-c5f5-4778-bad9-400deb738a15