34 changes: 17 additions & 17 deletions docs/asynchronous-outbound-messaging-design.md
@@ -39,13 +39,13 @@ sections describe how to build that actor from first principles using the biased

The implementation must satisfy the following core requirements:

| ID | Requirement |
| --- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
| G1 | Any async task must be able to push frames to a live connection. |
| G2 | Ordering-safety: Pushed frames must interleave correctly with normal request/response traffic and respect any per-message sequencing rules. |
| G3 | Back-pressure: Writers must block (or fail fast) when the peer cannot drain the socket, preventing unbounded memory consumption. |
| G4 | Generic—independent of any particular protocol; usable by both servers and clients built on wireframe. |
| G5 | Preserve the simple “return a reply” path for code that does not need pushes, ensuring backward compatibility and low friction for existing users. |

Comment on lines +42 to 49
Contributor

@coderabbitai (Bot) — Jun 29, 2025


🧹 Nitpick (assertive)

Table rows exceed 80 columns

The requirement rows G2–G5 are ~130 chars each. To satisfy MD013 you can either
soft-wrap the cell contents or silence the rule for those lines with
<!-- markdownlint-disable-line MD013 -->.


Owner Author


Table rows are allowed to exceed 80 columns

Contributor


@leynos, understood! I've noted that table rows are exempt from the 80-column wrapping requirement for markdown files.


✏️ Learnings added
Learnt from: leynos
PR: leynos/wireframe#154
File: docs/asynchronous-outbound-messaging-design.md:42-49
Timestamp: 2025-06-29T16:50:55.852Z
Learning: In markdown files, table rows are allowed to exceed 80 columns and are exempt from the MD013 wrapping rule, even though paragraphs and bullets must be wrapped to 80 columns.


## 3. Core Architecture: The Connection Actor

@@ -70,7 +70,7 @@ manage two distinct, bounded `tokio::mpsc` channels for pushed frames:
messages like heartbeats, session control notifications, or protocol-level
pings.

2. `low_priority_push_rx: mpsc::Receiver<F>`: For standard, non-urgent
background messages like log forwarding or secondary status updates.

The bounded nature of these channels provides an inherent and robust
@@ -90,13 +90,13 @@ The polling order will be:
1. **Graceful Shutdown Signal:** The `CancellationToken` will be checked first
to ensure immediate reaction to a server-wide shutdown request.

2. **High-Priority Push Channel:** Messages from `high_priority_push_rx` will be
drained next.

3. **Low-Priority Push Channel:** Messages from `low_priority_push_rx` will be
processed after all high-priority messages.

4. **Handler Response Stream:** Frames from the active request's
`Response::Stream` will be processed last.

@@ -684,11 +684,11 @@ sequenceDiagram

## 8. Measurable Objectives & Success Criteria

| Category | Objective | Success Metric |
| --------------- | ------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| API Correctness | The PushHandle, SessionRegistry, and WireframeProtocol trait are implemented exactly as specified in this document. | 100% of the public API surface is present and correctly typed. |
| Functionality | Pushed frames are delivered reliably and in the correct order of priority. | A test with concurrent high-priority, low-priority, and streaming producers must show that all frames are delivered and that the final written sequence respects the strict priority order. |
| Back-pressure | A slow consumer must cause producer tasks to suspend without consuming unbounded memory. | A test with a slow consumer and a fast producer must show the producer's push().await call blocks, and the process memory usage remains stable. |
| Resilience | The SessionRegistry must not leak memory when connections are terminated. | A long-running test that creates and destroys thousands of connections must show no corresponding growth in the SessionRegistry's size or the process's overall memory footprint. |
| Performance | The overhead of the push mechanism should be minimal for connections that do not use it. | A benchmark of a simple request-response workload with the push feature enabled (but unused) should show < 2% performance degradation compared to a build without the feature. |
| Performance | The latency for a high-priority push under no contention should be negligible. | The time from push_high_priority().await returning to the frame being written to the socket buffer should be < 10µs. |
4 changes: 3 additions & 1 deletion docs/asynchronous-outbound-messaging-roadmap.md
@@ -29,8 +29,10 @@ design documents.
- [x] **Leak-proof `SessionRegistry`** using `dashmap::DashMap` and `Weak`
pointers ([Design §4.2][design-registry],
[Resilience Guide §3.2][resilience-registry]).
- [x] **Document `async-stream`** for creating `Response::Stream` values
([Roadmap #2.4][roadmap-2-4]).
- [ ] **Example handler using `async-stream`** demonstrating `Response::Stream`
generation in the examples directory.
Comment on lines +32 to +35
Contributor


🧹 Nitpick (assertive)

Bullet exceeds 80-column guideline and should be wrapped

The added bullet for the example handler is > 100 characters, violating the
project’s 80-column wrap rule for paragraphs / bullets.

- - [ ] **Example handler using `async-stream`** demonstrating `Response::Stream` generation in the examples directory.
+ - [ ] **Example handler using `async-stream`** demonstrating `Response::Stream`
+   generation in the *examples* directory.

Committable suggestion skipped: line range outside the PR's diff.


- [ ] **Tests covering streams and push delivery** drawing on
[Testing Guide §4][testing-guide-advanced].

21 changes: 10 additions & 11 deletions docs/multi-packet-and-streaming-responses-design.md
@@ -38,11 +38,10 @@ The implementation must satisfy the following core requirements:
## 3. Core Architecture: Declarative Streaming

The cornerstone of this design is a move to a purely **declarative streaming
model**. Handlers declaratively return a description of the entire response.
This approach significantly simplifies the API surface, improves testability,
and eliminates a class of resource management issues associated with sink-based
designs.

### 3.1 The Connection Actor's Role

@@ -61,12 +60,12 @@ explicit channel management.
### 3.2 The `async-stream` Crate

To provide an ergonomic way for developers to generate streams using
imperative-style logic (e.g., inside a `for` loop), `wireframe` adopts and
recommends the `async-stream` crate. This crate provides macros (`stream!` and
`try_stream!`) that transform imperative `yield` statements into a fully
compliant `Stream` object. This gives developers the intuitive feel of
imperative code generation with minimal API complexity. The library recommends
this pattern as the canonical way to build `Response::Stream` values.
Comment on lines +63 to +68
Contributor


🧹 Nitpick (assertive)

Minor line-length breach – wrap for consistency

Lines 63-66 run a little past 80 columns. While tables are exempt, regular
paragraphs are not. Consider a soft wrap to stay consistent with the coding
guidelines.



## 4. Public API Surface

@@ -252,7 +251,7 @@ hang.
| Category | Objective | Success Metric |
| API Correctness | The Response enum and FrameStream type alias are implemented exactly as specified in this document. | 100% of the public API surface is present and correctly typed. |
| Functionality | A handler returning a stream of N frames results in N frames being written to the socket in the correct order. | A test suite confirms 100% frame delivery and strict ordering for Response::Vec and Response::Stream. |
| Ergonomics | The async-stream pattern is documented as the canonical approach for dynamic stream generation. | The official examples and documentation exclusively use the declarative Response model. |
| Performance | The Response::Vec variant has measurably lower allocation and dispatch overhead than Response::Stream for small, fixed-size responses. | A criterion benchmark confirms that Response::Vec is at least 50% faster and performs fewer allocations than Response::Stream for a response of 10 frames. |
| Error Handling | A WireframeError::Protocol error yielded from a stream correctly triggers the handle_error protocol callback without terminating the connection. | An integration test confirms that a protocol-level error is correctly formatted and sent to the client, while the connection remains open. |

@@ -40,10 +40,9 @@ server-initiated pushes and streaming responses.

#### The Unified `Response` Enum and Declarative Handler Model

To provide a clean, unified API, the handler return type will evolve. A more
ergonomic, declarative approach replaces the previous imperative model.
Handlers will return an enhanced `Response` enum, giving developers clear and
efficient ways to express their intent.
@@ -64,11 +63,11 @@ pub enum Response<F = Frame, E = MyProtocolError> {

This design is powered by the `async-stream` crate, which allows developers to
write imperative-looking logic that generates a declarative `Stream` object. It
provides the best of both worlds: the intuitive feel of a `for` loop for
generating frames with minimal API complexity. The project recommends
`async-stream` as the canonical method for constructing `Response::Stream`
values.
coderabbitai[bot] marked this conversation as resolved.

