Implement structured logging, tracing spans, and TracingConfig API #488
Conversation
Reviewer's Guide

Introduces a configurable client-side tracing system with a new TracingConfig API, centralized span/timing helpers, and end-to-end instrumentation across connect/send/receive/call/streaming/close, plus tests, BDD scenarios, and documentation describing the behavior.

Sequence diagram for client connect and call with tracing spans

```mermaid
sequenceDiagram
    actor Developer
    participant Builder as WireframeClientBuilder
    participant Client as WireframeClient
    participant Helpers as TracingHelpers
    participant Subscriber as TracingSubscriber
    Developer->>Builder: TracingConfig::default().with_all_timing(true)
    Developer->>Builder: tracing_config(config)
    Developer->>Builder: connect(addr)
    Builder->>Helpers: connect_span(tracing_config, addr)
    Helpers-->>Builder: Span(connect)
    Builder->>Builder: enter connect span
    Builder->>Builder: start = connect_timing.then(Instant::now)
    Builder->>Client: construct WireframeClient(tracing_config)
    Builder->>Helpers: emit_timing_event(start)
    Helpers-->>Subscriber: debug(elapsed_us) [connect]
    Developer->>Client: call(request)
    Client->>Helpers: call_span(tracing_config)
    Helpers-->>Client: Span(call)
    Client->>Client: enter call span
    Client->>Client: start = call_timing.then(Instant::now)
    Client->>Client: send(request)
    Client->>Helpers: send_span(tracing_config, frame_bytes)
    Helpers-->>Client: Span(send)
    Client->>Client: enter send span
    Client->>Helpers: emit_timing_event(start_send)
    Helpers-->>Subscriber: debug(elapsed_us) [send]
    Client->>Client: receive()
    Client->>Helpers: receive_span(tracing_config)
    Helpers-->>Client: Span(receive)
    Client->>Client: enter receive span
    Client-->>Client: result = Ok(response)
    Client->>Helpers: emit_timing_event(start_receive)
    Helpers-->>Subscriber: debug(elapsed_us) [receive]
    Client-->>Helpers: record result="ok" on call span
    Client->>Helpers: emit_timing_event(start)
    Helpers-->>Subscriber: debug(elapsed_us) [call]
    Client-->>Developer: response
```
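The timing pattern in the diagram (`start = connect_timing.then(Instant::now)` followed by `emit_timing_event(start)`) can be sketched in isolation. The helpers below are simplified stand-ins for the crate's actual functions; the stand-in returns the elapsed microseconds instead of emitting a tracing event so the sketch runs without a subscriber.

```rust
use std::time::Instant;

// Stand-in for the timing pattern: timing_start is Some(Instant) only when
// the per-operation flag is enabled, so the disabled path costs a single
// branch and no clock read.
fn timing_start(enabled: bool) -> Option<Instant> {
    enabled.then(Instant::now)
}

// Stand-in for emit_timing_event: returns the elapsed microseconds instead
// of emitting a tracing debug event.
fn emit_timing_event(start: Option<Instant>) -> Option<u128> {
    start.map(|s| s.elapsed().as_micros())
}

fn main() {
    // Timing disabled: nothing is measured or emitted.
    assert!(emit_timing_event(timing_start(false)).is_none());
    // Timing enabled: an elapsed value is produced.
    assert!(emit_timing_event(timing_start(true)).is_some());
}
```

The `Option<Instant>` carrier is what lets the same `emit_timing_event` call sit on every exit path while staying a no-op when timing is disabled.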
Class diagram for TracingConfig and client tracing integration

```mermaid
classDiagram
    class TracingConfig {
        +Level connect_level
        +Level send_level
        +Level receive_level
        +Level call_level
        +Level streaming_level
        +Level close_level
        +bool connect_timing
        +bool send_timing
        +bool receive_timing
        +bool call_timing
        +bool streaming_timing
        +bool close_timing
        +with_connect_level(level Level) TracingConfig
        +with_connect_timing(enabled bool) TracingConfig
        +with_send_level(level Level) TracingConfig
        +with_send_timing(enabled bool) TracingConfig
        +with_receive_level(level Level) TracingConfig
        +with_receive_timing(enabled bool) TracingConfig
        +with_call_level(level Level) TracingConfig
        +with_call_timing(enabled bool) TracingConfig
        +with_streaming_level(level Level) TracingConfig
        +with_streaming_timing(enabled bool) TracingConfig
        +with_close_level(level Level) TracingConfig
        +with_close_timing(enabled bool) TracingConfig
        +with_all_levels(level Level) TracingConfig
        +with_all_timing(enabled bool) TracingConfig
    }
    class WireframeClientBuilder {
        +SocketOptions socket_options
        +Option~PreambleConfig~ preamble_config
        +LifecycleHooks lifecycle_hooks
        +RequestHooks request_hooks
        +TracingConfig tracing_config
        +tracing_config(config TracingConfig) WireframeClientBuilder
        +connect(addr SocketAddr) WireframeClient
    }
    class WireframeClient {
        +FramedTransport framed
        +Serializer serializer
        +Option~ConnectionState~ connection_state
        +Option~OnDisconnectHandler~ on_disconnect
        +Option~ClientErrorHandler~ on_error
        +RequestHooks request_hooks
        +TracingConfig tracing_config
        +AtomicU64 correlation_counter
        +send(request Req) Result
        +receive() Result
        +call(request Req) Result
        +close() void
    }
    class TracingHelpers {
        +connect_span(config TracingConfig, peer_addr str) Span
        +send_span(config TracingConfig, frame_bytes usize) Span
        +receive_span(config TracingConfig) Span
        +send_envelope_span(config TracingConfig, correlation_id u64, frame_bytes usize) Span
        +call_span(config TracingConfig) Span
        +call_correlated_span(config TracingConfig) Span
        +streaming_span(config TracingConfig, correlation_id u64) Span
        +close_span(config TracingConfig) Span
        +emit_timing_event(start Option~Instant~) void
    }
    class ResponseStream {
        +WireframeClient client
        +u64 correlation_id
        +bool terminated
        +usize frame_count
        +is_terminated() bool
        +frame_count() usize
        +poll_next(cx Context) Poll
        +process_frame(bytes &[u8]) Option~Result~
    }
    TracingConfig "1" <-- "1" WireframeClientBuilder : holds
    TracingConfig "1" <-- "1" WireframeClient : holds
    WireframeClientBuilder --> WireframeClient : builds
    WireframeClient "1" --> "*" ResponseStream : creates
    TracingHelpers ..> TracingConfig : reads
    WireframeClient ..> TracingHelpers : uses
    ResponseStream ..> TracingHelpers : uses debug events
```
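The builder-style `TracingConfig` surface in the class diagram can be mimicked with a small sketch. The field and method names below follow the diagram, but `Level` is a local stub standing in for `tracing::Level`, and only a subset of the fields is modelled.

```rust
// Minimal sketch of the builder-style TracingConfig surface shown above.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Level {
    Info,
    Debug,
}

#[derive(Clone, Copy, Debug)]
struct TracingConfig {
    connect_level: Level,
    send_level: Level,
    connect_timing: bool,
    send_timing: bool,
}

impl Default for TracingConfig {
    // Defaults per the review thread: connect at INFO, send at DEBUG,
    // all timing flags off.
    fn default() -> Self {
        Self {
            connect_level: Level::Info,
            send_level: Level::Debug,
            connect_timing: false,
            send_timing: false,
        }
    }
}

impl TracingConfig {
    // Builder-style setters consume self and return it, so calls chain.
    fn with_send_level(mut self, level: Level) -> Self {
        self.send_level = level;
        self
    }

    fn with_all_timing(mut self, enabled: bool) -> Self {
        self.connect_timing = enabled;
        self.send_timing = enabled;
        self
    }
}

fn main() {
    let cfg = TracingConfig::default()
        .with_send_level(Level::Info)
        .with_all_timing(true);
    assert_eq!(cfg.send_level, Level::Info);
    assert!(cfg.connect_timing && cfg.send_timing);
    // connect_level keeps its default.
    assert_eq!(cfg.connect_level, Level::Info);
}
```

Because the config is `Copy` and owned by both the builder and the client, each operation can read its own level and timing flag without locking.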
Summary by CodeRabbit

Walkthrough

Introduce configurable client-side structured tracing: add TracingConfig, wire it through builder and runtime, provide tracing_helpers with span factories and timing emission, instrument connect/send/receive/call/streaming/close, add ResponseStream frame counting, and add unit and BDD tests plus docs updates.

Estimated code review effort: 🎯 4 (Complex), ⏱️ ~60 minutes
🚥 Pre-merge checks: 3 passed.
@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix:

```rust
async fn connect_emits_span_with_peer_address() {
    let (addr, server) = spawn_echo_server().await;
    let _client = WireframeClient::builder()
        .tracing_config(TracingConfig::default().with_connect_timing(true))
        .connect(addr)
        .await
        .expect("connect");
    server.abort();
    logs_assert(|lines: &[&str]| {
        lines
            .iter()
            .find(|line| line.contains("client.connect") && line.contains(&addr.to_string()))
            .map(|_| ())
            .ok_or_else(|| {
                format!(
                    "client.connect span with peer address not found in:\n{}",
                    lines.join("\n")
                )
            })
    });
}
```

❌ New issue: Code Duplication
This comment was marked as resolved.
@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix:

```rust
async fn send_emits_span_with_frame_bytes() {
    with_echo_client(
        TracingConfig::default().with_send_timing(true),
        |mut client, _addr| async move {
            let envelope = Envelope::new(1, None, vec![1, 2, 3]);
            client.send(&envelope).await.expect("send");
        },
    )
    .await;
    logs_assert(span_assertion("client.send", &["frame.bytes"]));
}
```

❌ New issue: Code Duplication
…nfig Implemented structured tracing spans around client operations: connect, send, receive, call, streaming, and close. Added a new TracingConfig to configure per-operation span levels and optional per-command elapsed-time events. When no tracing subscriber is installed, instrumentation incurs zero cost. Spans emit structured fields such as frame size, correlation IDs, and result status. Streaming responses emit per-frame debug events including frame counts. Added unit tests with tracing-test and BDD scenarios to validate spans and timing events. Updated client builder with tracing_config method and documented new tracing features in users guide and design docs. Marked roadmap item 11.1.2 as done. This improves client observability with fine-grained, configurable tracing instrumentation without breaking backwards compatibility. Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
Refactored client tracing tests by introducing a helper function `with_echo_client` that spawns the echo server, connects the client with given tracing config, runs a closure, then tears down the server. This removed duplication of setup/teardown code across tests. Also standardized span assertion closures with `span_assertion` helper for cleaner log checks. Updated tests to leverage these helpers, improving readability and maintainability without changing test logic or coverage. Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
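The `with_echo_client` pattern from the commit above can be illustrated with a simplified synchronous analogue. All names and types here are stand-ins: the real helper is async, spawns a TCP echo server, and hands the closure a connected client.

```rust
// Stand-ins for the echo server and client owned by the helper.
struct EchoServer {
    stopped: bool,
}

struct Client {
    timing_enabled: bool,
}

// The helper owns setup and teardown; each test supplies only its body
// as a closure, removing duplicated boilerplate across tests.
fn with_echo_client<F: FnOnce(&mut Client)>(timing: bool, body: F) -> bool {
    let mut server = EchoServer { stopped: false }; // spawn_echo_server()
    let mut client = Client { timing_enabled: timing }; // builder().connect(addr)
    body(&mut client); // the test-specific logic
    server.stopped = true; // server.abort()
    server.stopped
}

fn main() {
    let torn_down = with_echo_client(true, |client| {
        // The test body sees the client configured by the helper.
        assert!(client.timing_enabled);
    });
    // Teardown ran regardless of what the body did.
    assert!(torn_down);
}
```

Centralizing setup/teardown this way means a future change to server startup or shutdown touches one function instead of every test.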
This comment was marked as resolved.

267693f to 8f7e69b
@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix:

**Code Duplication:** `src/client/tests/tracing.rs`

**What led to degradation?** The module contains 5 functions with similar structure: `call_emits_wrapping_span`, `receive_emits_span_with_result`, `send_emits_span_with_frame_bytes`, `send_envelope_emits_span_with_correlation_id_and_frame_bytes`, and 1 more function.

**Why does this problem occur?** Duplicated code often leads to code that's harder to change, since the same logical change has to be made in multiple functions. More duplication means lower code health.

**How to fix it?** A certain degree of duplicated code might be acceptable. The problems start when it is the same behavior that is duplicated across the functions in the module, i.e. a violation of the Don't Repeat Yourself (DRY) principle. DRY violations lead to code that is changed together in predictable patterns, which is both expensive and risky. They can be identified using CodeScene's X-Ray analysis to detect clusters of change-coupled functions with high code similarity.
This comment was marked as resolved.
…ing tests Replaced repeated test patterns in tracing.rs with the test_span_emission! macro. This macro simplifies async span emission tests by combining client setup, operation execution, and log assertions, reducing boilerplate and improving test clarity. Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
This comment was marked as resolved.
This comment was marked as resolved.
Hey - I've found 15 issues, and left some high-level feedback:

- Per the docstring stating that timing events fire on both success and error paths, `emit_timing_event` is not called on early-return error paths in higher-level operations like `call()` and `call_correlated()` (e.g. when `send`/`send_envelope` fail), so consider adding matching `emit_timing_event(timing_start)` calls before those early returns for consistency with the lower-level methods.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- Per the docstring that timing events fire on both success and error paths, `emit_timing_event` is not called on early-return error paths in higher-level operations like `call()` and `call_correlated()` (e.g. when `send`/`send_envelope` fail), so consider adding matching `emit_timing_event(timing_start)` calls before those early returns for consistency with the lower-level methods.
## Individual Comments
### Comment 1
<location path="src/client/messaging.rs" line_range="137-146" />
<code_context>
return Err(err);
}
};
+ let span = send_envelope_span(&self.tracing_config, correlation_id, bytes.len());
+ let _guard = span.enter();
+ let timing_start = self.tracing_config.send_timing.then(Instant::now);
self.invoke_before_send_hooks(&mut bytes);
if let Err(e) = self.framed.send(Bytes::from(bytes)).await {
let err = ClientError::from(e);
self.invoke_error_hook(&err).await;
return Err(err);
}
+ emit_timing_event(timing_start);
Ok(correlation_id)
}
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Timing events are only emitted on success, skipping errored send paths.
`timing_start` is set before hooks and `send`, but `emit_timing_event(timing_start)` only runs after a successful send. On the `Err` path from `self.framed.send(..).await`, the function returns early and no timing is recorded. If you need timings for all send attempts (including failures), refactor so `emit_timing_event` is called on both success and error paths (e.g., in a shared `match`/`if let` block or via a small scope-guard helper).
Suggested implementation:
```rust
let span = send_envelope_span(&self.tracing_config, correlation_id, bytes.len());
let _guard = span.enter();
let timing_start = self.tracing_config.send_timing.then(Instant::now);
self.invoke_before_send_hooks(&mut bytes);
let send_result = self.framed.send(Bytes::from(bytes)).await;
emit_timing_event(timing_start);
if let Err(e) = send_result {
    let err = ClientError::from(e);
    self.invoke_error_hook(&err).await;
    return Err(err);
}
```
If there are other send-like functions in this file (or nearby modules) that also use `send_envelope_span` and `emit_timing_event`, consider applying the same pattern so timing is consistently recorded for both successful and failed send attempts across the codebase.
</issue_to_address>
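The "small scope-guard helper" mentioned in the comment above could look like the following sketch. `TimingGuard` is hypothetical (the crate currently calls `emit_timing_event` explicitly), but the Drop-based pattern guarantees the timing event fires on every exit path, including `?` early returns.

```rust
use std::time::Instant;

// Hypothetical scope guard: the timing event fires in Drop, so every exit
// path (success, error, early return) records it exactly once.
struct TimingGuard(Option<Instant>);

impl TimingGuard {
    fn new(enabled: bool) -> Self {
        Self(enabled.then(Instant::now))
    }
}

impl Drop for TimingGuard {
    fn drop(&mut self) {
        if let Some(start) = self.0.take() {
            // Stand-in for: tracing::debug!(elapsed_us, "operation timing");
            let _elapsed_us = start.elapsed().as_micros();
        }
    }
}

// Toy operation standing in for a send: either path drops the guard.
fn send(fail: bool) -> Result<(), &'static str> {
    let _timing = TimingGuard::new(true);
    if fail {
        // The guard still drops here, so timing is emitted on the error path.
        return Err("send failed");
    }
    Ok(())
}

fn main() {
    assert!(send(false).is_ok());
    assert!(send(true).is_err());
}
```

The trade-off versus explicit calls is that Drop runs after the final statements of the function, so the recorded span must still be current when the guard drops.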
### Comment 2
<location path="src/client/runtime.rs" line_range="154-164" />
<code_context>
return Err(err);
}
};
+ let span = send_span(&self.tracing_config, bytes.len());
+ let _guard = span.enter();
+ let timing_start = self.tracing_config.send_timing.then(Instant::now);
self.invoke_before_send_hooks(&mut bytes);
if let Err(e) = self.framed.send(Bytes::from(bytes)).await {
let err = ClientError::from(e);
self.invoke_error_hook(&err).await;
return Err(err);
}
+ emit_timing_event(timing_start);
Ok(())
}
</code_context>
<issue_to_address>
**suggestion:** Send timing is not emitted when the underlying framed send fails.
This helper mirrors `send_envelope` by starting timing but only emitting it on success. If failed sends should also contribute to timing metrics (useful for diagnosing latency/failure patterns), call `emit_timing_event(timing_start)` in both the `Ok` and `Err` paths before returning.
```suggestion
let span = send_span(&self.tracing_config, bytes.len());
let _guard = span.enter();
let timing_start = self.tracing_config.send_timing.then(Instant::now);
self.invoke_before_send_hooks(&mut bytes);
if let Err(e) = self.framed.send(Bytes::from(bytes)).await {
    let err = ClientError::from(e);
    self.invoke_error_hook(&err).await;
    emit_timing_event(timing_start);
    return Err(err);
}
emit_timing_event(timing_start);
Ok(())
```
</issue_to_address>
### Comment 3
<location path="src/client/builder/connect.rs" line_range="60-66" />
<code_context>
+Thread `tracing_config` through the `WireframeClient` constructor.
+
+```rust
+let span = connect_span(&self.tracing_config, &addr.to_string());
+let _guard = span.enter();
+let start = self.tracing_config.connect_timing.then(Instant::now);
</code_context>
<issue_to_address>
**suggestion:** Connect timing is skipped on early errors during socket setup or connect.
Because `timing_start` is created before socket creation/connect, any error returned via `?` before `emit_timing_event` is reached will skip recording the timing. If you want timing for failed connect attempts as well (e.g., dial timeouts, socket option errors), consider restructuring so `emit_timing_event` is invoked on both success and error paths (e.g., via a `match`/`if let` that always records the elapsed time).
</issue_to_address>
### Comment 4
<location path="src/client/streaming.rs" line_range="93-99" />
<code_context>
request.set_correlation_id(Some(correlation_id));
}
+ let span = streaming_span(&self.tracing_config, correlation_id);
+ let _guard = span.enter();
+ let timing_start = self.tracing_config.streaming_timing.then(Instant::now);
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Streaming call timing is not emitted when serialization or send fails.
In `call_streaming`, `timing_start` is captured before serialization and send, but `emit_timing_event` is only called after a successful send. Early returns from `serialize` or `self.framed.send(..)` therefore skip timing emission. Please emit the timing event on all exit paths so failed streaming calls are also recorded consistently.
Suggested implementation:
```rust
let mut bytes = match self.serializer.serialize(&request) {
    Ok(bytes) => bytes,
    Err(e) => {
        // Ensure streaming timing is emitted even on serialization failure
        emit_timing_event(timing_start);
        let err = ClientError::from(e);
        self.invoke_error_hook(&err).await;
        return Err(err);
    }
};
```
To fully implement the suggestion, you should also:
1. In the `if let Err(e) = self.framed.send(Bytes::from(bytes)).await { ... }` block (which is truncated in the snippet), call `emit_timing_event(timing_start);` before returning or propagating the send error.
2. Ensure any other early-return paths in `call_streaming` (e.g., validation errors, hook failures) that occur after `timing_start` is set also call `emit_timing_event` before exiting.
3. If the actual signature of `emit_timing_event` differs (e.g., requires additional parameters such as an operation name or result status), adjust the arguments accordingly to match how it is used elsewhere in the codebase.
</issue_to_address>
### Comment 5
<location path="src/client/messaging.rs" line_range="228-234" />
<code_context>
where
P: Packet + EncodeWith<S> + DecodeWith<S>,
{
+ let span = call_correlated_span(&self.tracing_config);
+ let _guard = span.enter();
+ let timing_start = self.tracing_config.call_timing.then(Instant::now);
</code_context>
<issue_to_address>
**suggestion:** call_correlated does not emit timing or record span result when send_envelope fails.
Because the error is propagated with `?`, failures from `send_envelope` return early and skip both `span.record("result", ...)` and `emit_timing_event`. If you also need tracing for failed calls, handle the `send_envelope` result explicitly (e.g., with `match`), record `result = "err"`, emit timing, and then return the error.
```suggestion
let span = call_correlated_span(&self.tracing_config);
let _guard = span.enter();
let timing_start = self.tracing_config.call_timing.then(Instant::now);
let correlation_id = match self.send_envelope(request).await {
    Ok(correlation_id) => correlation_id,
    Err(err) => {
        span.record("result", "err");
        emit_timing_event(timing_start);
        return Err(err);
    }
};
span.record("correlation_id", correlation_id);
let response: P = self.receive_envelope().await?;
```
</issue_to_address>
### Comment 6
<location path="src/client/tests/tracing.rs" line_range="207-216" />
<code_context>
+async fn call_streaming_emits_span_with_correlation_id() {
</code_context>
<issue_to_address>
**issue (testing):** The `call_streaming_emits_span_with_correlation_id` test does not actually assert the `call_streaming` span name or its fields.
This currently only checks that some log line contains `"correlation_id"`, which could come from unrelated logs. Please assert against the specific span instead: e.g. look up the `"client.call_streaming"` span and verify that within that span you see the `correlation_id` field (and optionally its expected value). For instance, the last line could be:
```rust
logs_assert(span_assertion("client.call_streaming", &["correlation_id"]));
```
</issue_to_address>
### Comment 7
<location path="src/client/tests/tracing.rs" line_range="345-354" />
<code_context>
+async fn response_stream_emits_frame_events() {
</code_context>
<issue_to_address>
**suggestion (testing):** ResponseStream tests assert logging but never exercise or verify the new `frame_count()` accessor.
The new `frame_count()` accessor isn’t exercised here; the test only checks the logging events. Please add an assertion that `stream.frame_count()` matches the number of successfully decoded data frames and that the terminator frame doesn’t increment the count (e.g., asserting `stream.frame_count() == 3` after the `while let Some(result) = stream.next().await { ... }` loop).
Suggested implementation:
```rust
#[rstest]
#[traced_test]
#[tokio::test]
async fn response_stream_emits_frame_events() {
    let cid = CorrelationId::new(42);
    let frames = vec![
        TestStreamEnvelope::data(MessageId::new(1), cid, Payload::new(vec![10])),
        TestStreamEnvelope::data(MessageId::new(2), cid, Payload::new(vec![20])),
        TestStreamEnvelope::data(MessageId::new(3), cid, Payload::new(vec![30])),
        TestStreamEnvelope::terminator(cid),
    ];
    let (mut client, _server) = setup_streaming_test(frames).await.expect("setup");
```
In the same `response_stream_emits_frame_events` function, after the existing `while let Some(result) = stream.next().await { ... }` loop that drives the `ResponseStream`, add:
```rust
// We sent 3 data frames and 1 terminator. Only data frames should be counted.
assert_eq!(
    stream.frame_count(),
    3,
    "frame_count should only include successfully decoded data frames"
);
```
This assertion must be placed **after** the `while let Some(result)` loop so that all frames (including the terminator) have been processed, and `frame_count()` reflects the final value. Ensure it uses the same `stream` variable that the loop is iterating over, and that no early returns occur before this assertion.
</issue_to_address>
### Comment 8
<location path="src/client/tests/tracing.rs" line_range="387-73" />
<code_context>
+#[rstest]
+#[traced_test]
+#[tokio::test]
+async fn default_config_is_backwards_compatible() {
+ // No tracing_config() call — uses the default. Verifies no panic
+ // occurs and basic operations succeed with default configuration.
+ let (addr, server) = spawn_echo_server().await;
+ let mut client = WireframeClient::builder()
+ .connect(addr)
+ .await
+ .expect("connect");
+ let envelope = Envelope::new(1, None, vec![1, 2, 3]);
+ let _response: Envelope = client.call(&envelope).await.expect("call");
+ server.abort();
+}
</code_context>
<issue_to_address>
**suggestion (testing):** There are no direct tests for `TracingConfig` defaults and level/timing setters; current coverage is mostly behavioural and indirect.
This verifies that the default config doesn’t break basic operations, but it doesn’t exercise `TracingConfig` itself (defaults and `with_*`/`with_all_*` helpers). Because `TracingConfig` is public and central to tracing, please add a few focused unit tests that:
- Assert default levels (`connect_level`/`close_level` = `INFO`, others = `DEBUG`).
- Assert default timing flags are all `false`.
- Verify `with_all_levels` updates all level fields.
- Verify `with_all_timing(true/false)` toggles all timing flags.
- Verify individual setters (e.g. `with_send_timing(true)`) only affect their specific flag.
These can directly instantiate `TracingConfig::default()` and assert fields to keep them fast and robust.
Suggested implementation:
```rust
#[test]
fn tracing_config_defaults_are_as_expected() {
    let cfg = TracingConfig::default();
    // Default levels: connect/close at INFO, others at DEBUG.
    assert_eq!(cfg.connect_level, Level::INFO);
    assert_eq!(cfg.close_level, Level::INFO);
    assert_eq!(cfg.send_level, Level::DEBUG);
    assert_eq!(cfg.receive_level, Level::DEBUG);
    // Default timing flags: all false.
    assert!(!cfg.connect_timing);
    assert!(!cfg.close_timing);
    assert!(!cfg.send_timing);
    assert!(!cfg.receive_timing);
}

#[test]
fn tracing_config_with_all_levels_updates_all_level_fields() {
    let cfg = TracingConfig::default().with_all_levels(Level::TRACE);
    assert_eq!(cfg.connect_level, Level::TRACE);
    assert_eq!(cfg.close_level, Level::TRACE);
    assert_eq!(cfg.send_level, Level::TRACE);
    assert_eq!(cfg.receive_level, Level::TRACE);
}

#[test]
fn tracing_config_with_all_timing_toggles_all_flags() {
    let cfg_true = TracingConfig::default().with_all_timing(true);
    assert!(cfg_true.connect_timing);
    assert!(cfg_true.close_timing);
    assert!(cfg_true.send_timing);
    assert!(cfg_true.receive_timing);
    let cfg_false = cfg_true.with_all_timing(false);
    assert!(!cfg_false.connect_timing);
    assert!(!cfg_false.close_timing);
    assert!(!cfg_false.send_timing);
    assert!(!cfg_false.receive_timing);
}

#[test]
fn tracing_config_individual_timing_setters_only_affect_their_flag() {
    let cfg = TracingConfig::default()
        .with_send_timing(true)
        .with_receive_timing(false)
        .with_connect_timing(true)
        .with_close_timing(false);
    assert!(cfg.send_timing);
    assert!(!cfg.receive_timing);
    assert!(cfg.connect_timing);
    assert!(!cfg.close_timing);
}

/// Spawn an echo server, connect a client with the given tracing config,
/// run an async closure against it, then tear down the server.
///
/// The closure takes ownership of the client so that operations such as
```
To make these tests compile and match your actual API, please verify and adjust the following:
1. **Imports**
Ensure `TracingConfig` and `Level` are in scope at the top of `src/client/tests/tracing.rs`, for example:
- If `TracingConfig` lives in your crate:
```rust
use crate::client::tracing::TracingConfig;
```
- For the level type, adjust to whatever you use (examples):
```rust
use tracing::Level;
// or
use log::Level;
```
2. **Field and method names**
Align the tests with the actual `TracingConfig` definition:
- Level fields: confirm `connect_level`, `close_level`, `send_level`, `receive_level` match the real field names.
- Timing fields: confirm `connect_timing`, `close_timing`, `send_timing`, `receive_timing` similarly.
- Helper methods: update `with_all_levels`, `with_all_timing`, `with_send_timing`, `with_receive_timing`, `with_connect_timing`, `with_close_timing` to your actual method names if they differ.
3. **Default expectations**
If your documented defaults differ (e.g. non-`DEBUG` for send/recv), adjust the `assert_eq!` expectations accordingly to keep the tests aligned with your intended behavior.
</issue_to_address>
### Comment 9
<location path="src/client/response_stream.rs" line_range="158" />
<code_context>
+ clippy::cognitive_complexity,
+ reason = "linear poll_next with per-frame tracing events"
+ )]
fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
let this = self.get_mut();
</code_context>
<issue_to_address>
**issue (complexity):** Consider extracting the frame-counting and tracing logic into a helper method so that `poll_next` remains a simple, linear orchestration of control flow.
You can keep the new functionality while making `poll_next` linear again by extracting the tracing + frame-count logic into a helper. That removes the need to interleave side-effects with the control flow.
For example:
```rust
impl<'a, P, S, T, C> ResponseStream<'a, P, S, T, C>
where
P: Packet + DecodeWith<S>,
S: Serializer + Send + Sync,
T: ClientStream,
{
fn on_frame_received(
&mut self,
frame_bytes: usize,
result: &Option<Result<P, ClientError>>,
) {
if let Some(Ok(_)) = result {
self.frame_count = self.frame_count.saturating_add(1);
tracing::debug!(
frame.bytes = frame_bytes,
stream.frames_received = self.frame_count,
correlation_id = self.correlation_id,
"stream frame received"
);
}
}
}
```
And then simplify `poll_next` to just orchestrate the flow:
```rust
impl<P, S, T, C> Stream for ResponseStream<'_, P, S, T, C>
where
P: Packet + DecodeWith<S>,
S: Serializer + Send + Sync,
T: ClientStream,
{
type Item = Result<P, ClientError>;
fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
let this = self.get_mut();
if this.terminated {
return Poll::Ready(None);
}
match Pin::new(&mut this.client.framed).poll_next(cx) {
Poll::Pending => Poll::Pending,
Poll::Ready(None) => {
this.terminated = true;
Poll::Ready(Some(Err(ClientError::disconnected())))
}
Poll::Ready(Some(Err(e))) => {
this.terminated = true;
Poll::Ready(Some(Err(ClientError::from(e))))
}
Poll::Ready(Some(Ok(mut bytes))) => {
let frame_bytes = bytes.len();
this.client.invoke_after_receive_hooks(&mut bytes);
let result = this.process_frame(&bytes);
this.on_frame_received(frame_bytes, &result);
Poll::Ready(result)
}
}
}
}
```
This keeps all behavior (frame counting, per-frame tracing, and the existing terminator logging inside `process_frame`) while:
- Making `poll_next` purely about polling + delegating.
- Localizing the tracing/counter side effects in one small, testable helper.
- Likely allowing you to drop the `#[expect(clippy::cognitive_complexity, ...)]` attribute.
</issue_to_address>
### Comment 10
<location path="src/client/messaging.rs" line_range="243" />
<code_context>
expected: Some(correlation_id),
received: response_correlation_id,
};
+ span.record("result", "err");
self.invoke_error_hook(&err).await;
return Err(err);
</code_context>
<issue_to_address>
**issue (complexity):** Consider extracting the repeated span result + timing + error-hook logic into small helper functions to simplify control flow and reduce duplication in these methods.
You can reduce the added complexity by centralizing the “record result + timing” pattern into a small helper and using it uniformly in the early-return paths.
For example, you currently repeat:
```rust
span.record("result", "err");
emit_timing_event(timing_start);
self.invoke_error_hook(&err).await;
return Err(err);
```
You can wrap that into a helper that keeps the control-flow in `receive_internal` (and others) much cleaner:
```rust
async fn handle_traced_error(
&self,
span: &tracing::Span,
timing_start: Option<Instant>,
err: ClientError,
) -> ClientError {
span.record("result", "err");
emit_timing_event(timing_start);
self.invoke_error_hook(&err).await;
err
}
```
Then `receive_internal` becomes:
```rust
pub(crate) async fn receive_internal<R: DecodeWith<S>>(&mut self) -> Result<R, ClientError> {
let span = receive_span(&self.tracing_config);
let _guard = span.enter();
let timing_start = self.tracing_config.receive_timing.then(Instant::now);
let Some(frame) = self.framed.next().await else {
let err = ClientError::disconnected();
return Err(self.handle_traced_error(&span, timing_start, err).await);
};
let mut bytes = match frame {
Ok(bytes) => bytes,
Err(e) => {
let err = ClientError::from(e);
return Err(self.handle_traced_error(&span, timing_start, err).await);
}
};
span.record("frame.bytes", bytes.len());
self.invoke_after_receive_hooks(&mut bytes);
let (message, _consumed) = match self.serializer.deserialize(&bytes) {
Ok(result) => result,
Err(e) => {
let err = ClientError::decode(e);
return Err(self.handle_traced_error(&span, timing_start, err).await);
}
};
span.record("result", "ok");
emit_timing_event(timing_start);
Ok(message)
}
```
Similarly, for `call_correlated` you can avoid manual `result` recording in multiple branches by using small helpers:
```rust
fn record_success(span: &tracing::Span, timing_start: Option<Instant>) {
span.record("result", "ok");
emit_timing_event(timing_start);
}
async fn record_failure_and_hook(
&self,
span: &tracing::Span,
timing_start: Option<Instant>,
err: ClientError,
) -> ClientError {
span.record("result", "err");
emit_timing_event(timing_start);
self.invoke_error_hook(&err).await;
err
}
```
Usage in `call_correlated`:
```rust
pub async fn call_correlated<P>(&mut self, request: P) -> Result<P, ClientError>
where
P: Packet + EncodeWith<S> + DecodeWith<S>,
{
let span = call_correlated_span(&self.tracing_config);
let _guard = span.enter();
let timing_start = self.tracing_config.call_timing.then(Instant::now);
let correlation_id = self.send_envelope(request).await?;
span.record("correlation_id", correlation_id);
let response: P = self.receive_envelope().await?;
let response_correlation_id = response.correlation_id();
if response_correlation_id != Some(correlation_id) {
let err = ClientError::CorrelationMismatch {
expected: Some(correlation_id),
received: response_correlation_id,
};
return Err(
self.record_failure_and_hook(&span, timing_start, err).await
);
}
record_success(&span, timing_start);
Ok(response)
}
```
This preserves all behavior (spans, timing, hooks) while removing duplicated tracing boilerplate and making future branches less error-prone.
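A complementary way to make the "timing must fire on every early return" invariant structural rather than per-branch is a drop guard. The sketch below is std-only and hedged: `TimingGuard` and its `println!` are invented stand-ins for the crate's `emit_timing_event` helper, and the `bool.then(Instant::now)` gate mirrors the `self.tracing_config.send_timing.then(Instant::now)` pattern shown in the diff above.

```rust
use std::time::Instant;

/// Illustrative sketch only: a guard that reports elapsed time when dropped,
/// so every exit path (`?`, early `return`, success) emits timing.
/// `TimingGuard` is a hypothetical name; the real client calls
/// `emit_timing_event(timing_start)` rather than `println!`.
struct TimingGuard {
    start: Option<Instant>,
    label: &'static str,
}

impl TimingGuard {
    fn new(enabled: bool, label: &'static str) -> Self {
        // Mirrors the `timing.then(Instant::now)` gate used in the client.
        Self { start: enabled.then(Instant::now), label }
    }
}

impl Drop for TimingGuard {
    fn drop(&mut self) {
        if let Some(start) = self.start {
            println!("{} elapsed_us={}", self.label, start.elapsed().as_micros());
        }
    }
}

fn fallible_send(fail: bool) -> Result<u32, String> {
    let _timing = TimingGuard::new(true, "client.send");
    if fail {
        // Early return: the guard still fires when `_timing` drops here.
        return Err("transport error".into());
    }
    Ok(42)
}

fn main() {
    assert_eq!(fallible_send(false), Ok(42));
    assert!(fallible_send(true).is_err());
    println!("ok");
}
```

Note the limitation: `Drop` cannot `.await`, so the async `invoke_error_hook` call still needs explicit handling on error paths, which is why the helper-function approach above remains useful alongside a guard.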
</issue_to_address>
### Comment 11
<location path="docs/wireframe-client-design.md" line_range="70" />
<code_context>
| Setup hook | `on_connection_setup(...)` | Disabled | Store per-connection state for metrics, auth context, or counters. |
| Teardown hook | `on_connection_teardown(...)` | Disabled | Flush metrics and release per-connection resources on `close()`. |
| Error hook | `on_error(...)` | Disabled | Centralize client transport/decode/correlation error reporting. |
+| Tracing config | `tracing_config(TracingConfig)` | INFO connect/close, DEBUG data ops, timing off | Customise tracing span levels and per-command timing. |
### Request/response helpers
</code_context>
<issue_to_address>
**suggestion (review_instructions):** "Customise" should use -ize spelling to follow the specified en-GB-oxendict (-ize) convention.
The instructions call for en-GB-oxendict spelling with -ize forms, so this should be "Customise" → "Customize".
<details>
<summary>Review instructions:</summary>
**Path patterns:** `**/*.md`
**Instructions:**
Use en-GB-oxendict (-ize / -yse / -our) spelling and grammar.
</details>
</issue_to_address>
### Comment 12
<location path="docs/users-guide.md" line_range="1143" />
<code_context>
| Error hook | `on_error(...)` | Disabled | Transport and decode failures must be routed to observability. |
| Before-send hook | `before_send(...)` | Disabled | Inspect or mutate serialized bytes before every outgoing frame. |
| After-receive hook | `after_receive(...)` | Disabled | Inspect or mutate raw bytes after every incoming frame is read. |
+| Tracing config | `tracing_config(TracingConfig)` | INFO connect/close, DEBUG data ops, timing off | Customise tracing span levels and per-command timing. |
```rust
</code_context>
<issue_to_address>
**suggestion (review_instructions):** "Customise" should be spelled with -ize to match the required en-GB-oxendict convention.
To align with the -ize requirement, change "Customise tracing span levels and per-command timing." to "Customize tracing span levels and per-command timing."
<details>
<summary>Review instructions:</summary>
**Path patterns:** `**/*.md`
**Instructions:**
Use en-GB-oxendict (-ize / -yse / -our) spelling and grammar.
</details>
</issue_to_address>
### Comment 13
<location path="docs/execplans/11-1-2-structured-logging-and-tracing-spans.md" line_range="37" />
<code_context>
+- All code must pass `make check-fmt`, `make lint` (clippy `-D warnings`),
+ and `make test`.
+- Comments and documentation must use en-GB-oxendict spelling.
+- Tests must use `rstest` fixtures; BDD tests use `rstest-bdd` v0.5.0.
+- Existing public API signatures must not change (backwards compatibility).
+- Clients configured without tracing must behave identically to today.
</code_context>
<issue_to_address>
**suggestion (review_instructions):** "BDD" is introduced without expansion; it should be defined on first use.
Please expand BDD on first use, for example: "Behaviour-driven development (BDD) tests use `rstest-bdd` v0.5.0."
<details>
<summary>Review instructions:</summary>
**Path patterns:** `**/*.md`
**Instructions:**
Define uncommon acronyms on first use.
</details>
</issue_to_address>
### Comment 14
<location path="docs/execplans/11-1-2-structured-logging-and-tracing-spans.md" line_range="75" />
<code_context>
+ addition is ~26 lines = ~380. If it approaches 400, extract `close()`
+ instrumentation to a dedicated `src/client/close.rs` file.
+
+- Risk: `dynamic_span!` macro causes binary bloat from 5 instantiations per
+ call site. Severity: low. Likelihood: medium. Mitigation: each branch is a
+ single `tracing::*_span!` call. LTO eliminates unused branches. Total call
</code_context>
<issue_to_address>
**suggestion (review_instructions):** "LTO" is used later in this bullet without being defined as an acronym.
Within this risk/mitigation bullet, LTO is mentioned without expansion. Please define it on first use, e.g. "link-time optimisation (LTO) eliminates unused branches."
<details>
<summary>Review instructions:</summary>
**Path patterns:** `**/*.md`
**Instructions:**
Define uncommon acronyms on first use.
</details>
</issue_to_address>
### Comment 15
<location path="docs/execplans/11-1-2-structured-logging-and-tracing-spans.md" line_range="148" />
<code_context>
+ `stream.frames_total` fields in tracing events. Also useful as a public API
+ for users who want to track stream progress without tracing.
+
+- Decision: place span factory functions in a dedicated `tracing_helpers.rs`
+ module, not inline in each method. Rationale: centralises span creation,
+ keeps instrumentation logic out of the hot-path methods, and enables
</code_context>
<issue_to_address>
**suggestion (review_instructions):** "centralises" in the rationale below should use -ize spelling per the en-GB-oxendict guidance.
In the rationale sentence that follows this decision, "centralises span creation" should be "centralizes span creation" to comply with the required -ize spelling.
<details>
<summary>Review instructions:</summary>
**Path patterns:** `**/*.md`
**Instructions:**
Use en-GB-oxendict (-ize / -yse / -our) spelling and grammar.
</details>
</issue_to_address>
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 71b1215374
Actionable comments posted: 13
Caution: Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/client/messaging.rs (1)
Lines 232-250: ⚠️ Potential issue | 🟠 Major — Emit timing and set `result="err"` on all `call_correlated` early returns.

Lines 232 and 234 short-circuit with `?`, which skips both error-result recording and timing emission.

🛠️ Proposed fix

```diff
-        let correlation_id = self.send_envelope(request).await?;
+        let correlation_id = match self.send_envelope(request).await {
+            Ok(correlation_id) => correlation_id,
+            Err(err) => {
+                span.record("result", "err");
+                emit_timing_event(timing_start);
+                return Err(err);
+            }
+        };
         span.record("correlation_id", correlation_id);
-        let response: P = self.receive_envelope().await?;
+        let response: P = match self.receive_envelope().await {
+            Ok(response) => response,
+            Err(err) => {
+                span.record("result", "err");
+                emit_timing_event(timing_start);
+                return Err(err);
+            }
+        };
 @@
             let err = ClientError::CorrelationMismatch {
                 expected: Some(correlation_id),
                 received: response_correlation_id,
             };
             span.record("result", "err");
+            emit_timing_event(timing_start);
             self.invoke_error_hook(&err).await;
             return Err(err);
         }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/client/messaging.rs` around lines 232 - 250, The call_correlated flow currently short-circuits on errors from send_envelope and receive_envelope (the `?` uses) and skips recording span result="err" and calling emit_timing_event; change those `?`-returns to explicit error handling (match or map_err) that on error records span.record("result","err"), invokes self.invoke_error_hook(&err).await, calls emit_timing_event(timing_start), and then returns Err(err); also add emit_timing_event(timing_start) before the existing correlation-mismatch early return (which already records result="err" and invokes invoke_error_hook) so all early returns from this function consistently record result="err" and emit timing.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/roadmap.md`:
- Around line 496-498: The completed checklist item that currently reads
"Provide structured logging and tracing spans around connect, send, receive, and
stream lifecycle events, plus configuration for per-command timing." should be
updated to reflect the delivered tracing scope by adding the missing lifecycle
operations `call` and `close` (e.g., "...around connect, call, send, receive,
close, and stream lifecycle events..."), ensuring the checkbox remains checked
and the wording stays consistent with surrounding items in docs/roadmap.md.
In `@docs/users-guide.md`:
- Line 1143: The table row for the tracing config uses British spelling
"Customise"; update that cell to the Oxford -ize spelling by replacing
"Customise tracing span levels and per-command timing." with "Customize tracing
span levels and per-command timing." in the row containing the symbol
"tracing_config(TracingConfig)" (the Tracing config table entry).
In `@docs/wireframe-client-design.md`:
- Line 70: Update the table row that describes Tracing config (the row
containing "Tracing config" and the signature `tracing_config(TracingConfig)`)
to use Oxford -ize spelling by changing the description text from "Customise
tracing span levels and per-command timing." to "Customize tracing span levels
and per-command timing."
- Around line 278-283: Replace the existing guidance around using Span::enter()
across .await points: explain that holding the entered guard across awaits is
unsafe and instead recommend using Future::instrument(span) or the
#[tracing::instrument] macro for async methods, and advise Span::in_scope() only
for purely synchronous sections; reference Span::enter(), Future::instrument(),
#[tracing::instrument], and Span::in_scope() so maintainers can update the docs
and examples accordingly.
In `@src/client/builder/connect.rs`:
- Around line 60-63: The connect function currently creates a span guard
(connect_span / _guard) and calls Instant::now across awaits which violates
async instrumentation and only records timing on the success path; fix by moving
span creation into an instrumented async block (use the span to instrument the
async block rather than holding _guard across awaits) and run the entire
connection logic inside that instrumented block, capture the Result from that
block, then compute and emit timing using tracing_config.connect_timing and the
Instant captured at block entry (Instant::now) after the block returns so timing
is logged for every exit path (success or error); update references to
connect_span, tracing_config, connect_timing, and Instant::now accordingly and
ensure timing emission occurs after the block result is available.
In `@src/client/messaging.rs`:
- Around line 141-147: The send timing is skipped on transport failure because
the function returns before calling emit_timing_event; update the send path in
the function containing self.framed.send(Bytes::from(bytes)).await so that
emit_timing_event(timing_start) is always called regardless of success or error
(e.g., call emit_timing_event before returning Err or use a drop/finally-style
guard), and then continue to invoke self.invoke_error_hook(&err).await and
return Err(err) using the same correlation_id semantics.
In `@src/client/runtime.rs`:
- Around line 154-164: The current implementations of send, call, and close
enter a tracing span with a guard that lives across await points (e.g. the span
+ _guard usage in send, call, close), which breaks async tracing and can skip
emit_timing_event on early returns; instead import tracing::Instrument and wrap
the async work in a small async block, call .instrument(span).await to drive the
future, capture its Result into a variable, then call
emit_timing_event(timing_start) and return the captured result. Specifically
update send (function send), call (function call), and close (function close)
to: create span with send_span/call_span/close_span, compute timing_start, build
the inner async block that runs
invoke_before_send_hooks/invoke_before_call_hooks/invoke_before_close_hooks,
awaits framed.send/framed.call/framed.close and invokes invoke_error_hook on
errors, then .instrument(span).await into a local result, call
emit_timing_event(timing_start), and return result; remove any long-lived span
guard variables so no guard spans across awaits.
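To see why instrumenting the future differs from holding an entered guard: `Future::instrument` re-enters the span on every `poll`, so the span is active exactly while the task is actually running, even when the future suspends at an `.await`. The block below is a std-only toy model of that behaviour, not the tracing crate's real machinery; `Instrumented`, `YieldOnce`, `noop_waker`, and the enter counter are all invented for illustration.

```rust
use std::cell::Cell;
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

/// Toy model of tracing's `Future::instrument`: the "span" is entered for the
/// duration of each `poll` only, never held across a suspension point.
struct Instrumented<'a, F> {
    inner: F,
    enters: &'a Cell<u32>,
}

impl<F: Future + Unpin> Future for Instrumented<'_, F> {
    type Output = F::Output;
    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<F::Output> {
        let this = self.get_mut();
        this.enters.set(this.enters.get() + 1); // "enter" the span
        let out = Pin::new(&mut this.inner).poll(cx);
        // the "span guard" ends here, before the task yields back
        out
    }
}

/// A future that suspends once before completing, like an I/O wait.
struct YieldOnce(bool);

impl Future for YieldOnce {
    type Output = ();
    fn poll(mut self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<()> {
        if self.0 {
            Poll::Ready(())
        } else {
            self.0 = true;
            Poll::Pending
        }
    }
}

fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    // SAFETY: all vtable functions are no-ops, so a null data pointer is fine.
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

/// Poll the instrumented future to completion and report how many times the
/// "span" was entered.
fn run() -> u32 {
    let enters = Cell::new(0);
    let mut fut = Instrumented { inner: YieldOnce(false), enters: &enters };
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    assert!(Pin::new(&mut fut).poll(&mut cx).is_pending());
    assert!(Pin::new(&mut fut).poll(&mut cx).is_ready());
    enters.get()
}

fn main() {
    // Re-entered once per poll — the behaviour `fut.instrument(span)` provides.
    assert_eq!(run(), 2);
    println!("ok");
}
```

In the real client this would look like `self.framed.send(bytes).instrument(span).await` with `use tracing::Instrument;`, as the prompts above describe.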
In `@src/client/streaming.rs`:
- Around line 105-106: The span currently records frame.bytes before hooks run,
misreporting size if invoke_before_send_hooks mutates bytes; move the
span.record("frame.bytes", bytes.len()) call to immediately after
self.invoke_before_send_hooks(&mut bytes) so it records the post-hook length
(refer to span.record, invoke_before_send_hooks, and the bytes variable in this
function).
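The ordering issue is easy to demonstrate without the client types: any before-send hook that mutates the buffer invalidates a length recorded earlier. A hedged std-only sketch follows; the signature-appending hook is invented purely for illustration.

```rust
/// Hypothetical before-send hook that mutates the outgoing payload,
/// standing in for the client's `invoke_before_send_hooks`.
fn run_hooks(bytes: &mut Vec<u8>) {
    bytes.extend_from_slice(b"-signed"); // e.g. a hook appending a signature
}

/// Returns (length recorded before hooks, length after hooks).
fn sizes() -> (usize, usize) {
    let mut bytes = b"ping".to_vec();
    let recorded_too_early = bytes.len(); // what the span would have recorded
    run_hooks(&mut bytes);
    let recorded_after_hooks = bytes.len(); // matches what hits the wire
    (recorded_too_early, recorded_after_hooks)
}

fn main() {
    // "ping" is 4 bytes; the hook appends 7 more, so only the post-hook
    // length (11) reflects the frame actually sent.
    assert_eq!(sizes(), (4, 11));
    println!("ok");
}
```

Recording `frame.bytes` after the hooks run, as the comment suggests, keeps the span metadata consistent with the transport.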
In `@src/client/tests/tracing.rs`:
- Around line 250-273: The test receive_error_records_result_err should
explicitly assert that client.receive() returned an Err variant; update the
assertion to destructure the result (e.g., let Err(err) = result else {
panic!("expected Err, got {:?}", result); }) so the test name matches the check
and you get the actual error value for inspection; locate the assertion near the
call to WireframeClient::receive() in function receive_error_records_result_err
and replace the current generic check with this explicit pattern match.
- Around line 227-230: The current test only checks for the "correlation_id"
field and can be satisfied by unrelated logs; update the assertion that calls
span_assertion("correlation_id", &[]) so it also asserts the span name (e.g.,
"call_streaming") together with the correlation_id. Locate the
logs_assert(span_assertion("correlation_id", &[])) call in tracing.rs and
replace it with a variant that requires both the span name "call_streaming" and
the "correlation_id" field (for example by calling a span_assertion overload or
helper that accepts the span name and field), ensuring the test only matches the
per-frame streaming span.
In `@tests/fixtures/client_tracing.rs`:
- Around line 6-13: Remove the file-level expect_used suppression and make the
shared fixture fallible: remove the #![expect(clippy::expect_used,...)]
attributes, change the fixture function signatures (the test/fixture entry
functions in client_tracing.rs) to return Result<..., Box<dyn std::error::Error
+ Send + Sync>> (or a suitable Result type) so errors can be propagated, and
replace all .expect(...) usages (especially Mutex/.lock().expect(...) and other
panic-y calls) with non-panicking handling — convert PoisonError/lock errors
into Err(...) and propagate with ? (or map_err(...)?) and similarly replace
other .expect calls with Result propagation so the fixture is lint-clean and
does not panic.
In `@tests/steps/client_tracing_steps.rs`:
- Around line 3-6: The crate-level suppression for clippy::unnecessary_wraps is
too broad; restrict it by either annotating only the truly infallible step
functions with #[expect(clippy::unnecessary_wraps, reason = "...")]
(specifically the step functions given_client_with_connect_timing() and
when_client_closes()), or remove the infallible functions from returning
TestResult and change their signatures to return () so the linter can run
normally for other steps (adjust any step-macro expectations accordingly).
Locate the functions given_client_with_connect_timing() and when_client_closes()
and apply one of these two fixes consistently across those definitions.
- Around line 30-35: The test step functions (e.g., given_connected_send_timing)
create a new Tokio runtime per call which causes TcpStream I/O registration
issues; refactor ClientTracingWorld to own a single tokio::runtime::Runtime
(following the existing ClientRuntimeWorld pattern), add a block_on(&mut self,
fut) helper that runs futures on that owned runtime, update ClientTracingWorld
methods like start_echo_server() and connect() to be awaited via
this.block_on(...) instead of creating new runtimes inside step functions, and
change all step functions (given_connected_send_timing and the other listed
steps) to call client_tracing_world.block_on(...) so the same runtime is reused
for all async ops.
---
Outside diff comments:
In `@src/client/messaging.rs`:
- Around line 232-250: The call_correlated flow currently short-circuits on
errors from send_envelope and receive_envelope (the `?` uses) and skips
recording span result="err" and calling emit_timing_event; change those
`?`-returns to explicit error handling (match or map_err) that on error records
span.record("result","err"), invokes self.invoke_error_hook(&err).await, calls
emit_timing_event(timing_start), and then returns Err(err); also add
emit_timing_event(timing_start) before the existing correlation-mismatch early
return (which already records result="err" and invokes invoke_error_hook) so all
early returns from this function consistently record result="err" and emit
timing.
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (24)
- docs/execplans/11-1-2-structured-logging-and-tracing-spans.md
- docs/roadmap.md
- docs/users-guide.md
- docs/wireframe-client-design.md
- src/client/builder/connect.rs
- src/client/builder/core.rs
- src/client/builder/mod.rs
- src/client/builder/tracing.rs
- src/client/messaging.rs
- src/client/mod.rs
- src/client/response_stream.rs
- src/client/runtime.rs
- src/client/streaming.rs
- src/client/tests/mod.rs
- src/client/tests/tracing.rs
- src/client/tracing_config.rs
- src/client/tracing_helpers.rs
- tests/features/client_tracing.feature
- tests/fixtures/client_tracing.rs
- tests/fixtures/mod.rs
- tests/scenarios/client_tracing_scenarios.rs
- tests/scenarios/mod.rs
- tests/steps/client_tracing_steps.rs
- tests/steps/mod.rs
…n recording
- Emit timing events immediately after send/receive/call operations to capture accurate duration metrics.
- Add `traced_error` and `traced_ok` helpers to record span results and invoke error hooks consistently.
- Refactor response stream to emit per-frame received events with correlation ID in tracing.
- Add new tests verifying tracing spans emit result fields and timing events on errors.
- Centralizing tracing span result and timing emission improves observability and error handling.

Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
Actionable comments posted: 5
♻️ Duplicate comments (5)
src/client/builder/connect.rs (1)
Lines 60-66: ⚠️ Potential issue | 🟠 Major — Instrument the connect future instead of holding an entered span guard.

Wrap the async connect flow with `.instrument(span)` and keep timing emission after awaited completion so tracing context remains correct across poll cycles.

Check tracing best practice for async connect flows: should span context be applied with `Future::instrument(span)` rather than `Span::enter()` guards?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/client/builder/connect.rs` around lines 60 - 66, The current connect function enters a span guard and then awaits connect_inner, which can lose proper tracing across async poll cycles; instead, wrap the future returned by self.connect_inner(addr) with Future::instrument(span) using the span from connect_span(&self.tracing_config, &addr.to_string()), await that instrumented future, and only then call emit_timing_event(timing_start); update references in the connect function (span, _guard, timing_start, self.connect_inner, emit_timing_event, connect_span, and self.tracing_config) accordingly so the span is applied to the future rather than held via Span::enter().

docs/wireframe-client-design.md (1)
Lines 278-283: ⚠️ Potential issue | 🟠 Major — Replace the async `Span::enter()` safety claim with async-safe tracing guidance.

Document `Future::instrument(span)` or `#[tracing::instrument]` for async methods, and reserve `Span::in_scope()` for synchronous sections only.

According to tracing crate docs, should async code avoid holding `Span::enter()` guards across await points and prefer `Future::instrument` or `#[tracing::instrument]`?

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/wireframe-client-design.md` around lines 278 - 283, Replace the current claim that using Span::enter() across await points is safe with explicit async-safe tracing guidance: update the documentation around Span::enter(), Span::in_scope(), and client methods to state that Span::enter() guards must not be held across .await points and recommend using Future::instrument(span) or the #[tracing::instrument] attribute for async methods; reserve Span::in_scope() for purely synchronous sections and add a short example or pointer to the tracing crate docs showing Future::instrument(span) and #[tracing::instrument] as the correct patterns for instrumenting async code.

src/client/streaming.rs (1)
Lines 106-107: ⚠️ Potential issue | 🟠 Major — Record `frame.bytes` after hook mutation.

Move frame-size recording to the post-hook state so emitted metadata reflects the actual bytes written to transport.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/client/streaming.rs` around lines 106 - 107, The span.record("frame.bytes", bytes.len()) call is currently before self.invoke_before_send_hooks(&mut bytes) so the recorded frame size may not reflect hook-induced mutations; move the span.record call to after invoke_before_send_hooks so it records the post-hook bytes.len(), updating the metadata to match the actual bytes sent by the transport (look for the span.record invocation and self.invoke_before_send_hooks(&mut bytes) in the same function and swap their order).

src/client/runtime.rs (1)
Lines 154-165: ⚠️ Potential issue | 🟠 Major — Refactor runtime operations to instrument futures instead of entered guards.

Apply async-safe instrumentation for `send`, `call`, and `close`; avoid keeping entered span guards alive across `.await` boundaries.

For tracing 0.1 async code, confirm whether `Span::enter()` guards must not live across await points and whether `Future::instrument(span)` is the correct pattern.

Also applies to: 244-262, 371-383

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/client/runtime.rs` around lines 154 - 165, The code currently calls send_span(...) and keeps the span guard from span.enter() across an await (in send, call, close), which is not async-safe; replace entering the span with instrumenting the future: remove span.enter() and _guard, create the send future (e.g., let fut = self.framed.send(Bytes::from(bytes));), apply tracing::Instrument (fut.instrument(span)) and await that instrumented future, keeping timing (timing_start/emit_timing_event) and hooks (invoke_before_send_hooks, invoke_error_hook, ClientError::from) logic the same; apply the same pattern to the other occurrences (the methods around lines referenced for call and close) so no Span::enter() guard lives across an await.

src/client/tests/tracing.rs (1)
Lines 255-278: ⚠️ Potential issue | 🟡 Minor — Assert `result="err"` in the receive-error tracing assertion.

Tighten the assertion to prove the error outcome is recorded, not only that the span exists.

🛠️ Proposed fix

```diff
-    logs_assert(span_assertion("client.receive", &[]));
+    logs_assert(span_assertion("client.receive", &["result=\"err\""]));
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/client/tests/tracing.rs` around lines 255 - 278, The test receive_error_records_result_err currently only asserts the span exists; update it to also assert that the recorded span includes result="err". After calling client.receive() and asserting result.is_err(), adjust the logs_assert/span_assertion usage (referencing span_assertion("client.receive", &[])) to include the tracing attribute for the error (e.g., result="err") so the assertion verifies the span contains the error result produced by WireframeClient::builder() with TracingConfig::default().with_receive_timing(true) and the client.receive() call.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/execplans/11-1-2-structured-logging-and-tracing-spans.md`:
- Line 78: Several documentation instances use British spellings with "-isation"
and other variants; update all occurrences of "optimisation", "Centralising",
"Initialise", "serialisation", and "initialised" (and any other -isation
variants flagged at the comment locations) to the en-GB-oxendict '-ize' forms:
"optimization", "Centralizing", "Initialize", "serialization", and
"initialized". Search the document for these tokens (and the other reported
locations around lines referenced in the review) and perform a simple
find-and-replace, ensuring case is preserved (e.g., "Centralising" ->
"Centralizing", "Initialise" -> "Initialize") so the spelling rules are
consistent across the doc.
In `@src/client/messaging.rs`:
- Around line 138-143: Current code holds a span guard across an .await
(span.enter() stored in _guard) which can misattribute logs; instead import
tracing::Instrument and wrap the futures with span.instrument(...) so the span
is entered only while the future is polled: replace usage of let _guard =
span.enter(); timing_start = ...; self.framed.send(...).await;
emit_timing_event(...) with creating the future (e.g.
self.framed.send(Bytes::from(bytes)) ), apply span.instrument(future) and await
that instrumented future, and do the same pattern for the other two async send
paths referenced (the calls around lines using invoke_before_send_hooks,
framed.send, emit_timing_event and the other two async paths), ensuring
timing_start and emit_timing_event remain outside the entered guard but inside
the instrumented future poll region.
In `@src/client/runtime.rs`:
- Around line 154-158: The span is computed using bytes.len() before
invoke_before_send_hooks mutates the payload, so move or recreate the span after
the call to invoke_before_send_hooks(&mut bytes) so the sent frame size matches
the span; specifically, call invoke_before_send_hooks(&mut bytes) first, then
create the span via send_span(&self.tracing_config, bytes.len()) and enter it
(let _guard = span.enter()) immediately before calling
self.framed.send(Bytes::from(bytes)).await, preserving the timing_start logic
(timing_start = self.tracing_config.send_timing.then(Instant::now)) as needed.
In `@src/client/streaming.rs`:
- Around line 93-110: Remove the Span::enter() guard and stop holding it across
await points: create the span via streaming_span(&self.tracing_config,
correlation_id) but do not call span.enter(); instead wrap the async part that
awaits (the send future around self.framed.send(Bytes::from(bytes)).await and
the subsequent error handling and emit_timing_event call) with
Future::instrument(span) so the span is re-entered on each poll. Also move the
span.record("frame.bytes", ...) call to after invoke_before_send_hooks(&mut
bytes) so the recorded length reflects any mutations by the hooks; keep
timing_start (self.tracing_config.streaming_timing.then(Instant::now)),
serializer.serialize(&request) error handling, invoke_error_hook(&err).await,
and emit_timing_event(timing_start) behavior unchanged but ensure the awaited
send and its error path are inside the instrumented future.
In `@src/client/tests/tracing.rs`:
- Around line 136-141: The test receive_emits_span_with_result currently calls
test_span_emission! with an empty expected attributes list; update that
invocation (the macro call around
TracingConfig::default().with_receive_timing(true) and the "client.receive" test
name) to assert the span contains result="ok" (e.g. add the expected result
field to the attribute list passed to test_span_emission! so the test requires
result="ok"). Ensure the change is applied to the test_span_emission! invocation
in the async fn receive_emits_span_with_result closure.
---
Duplicate comments:
In `@docs/wireframe-client-design.md`:
- Around line 278-283: Replace the current claim that using Span::enter() across
await points is safe with explicit async-safe tracing guidance: update the
documentation around Span::enter(), Span::in_scope(), and client methods to
state that Span::enter() guards must not be held across .await points and
recommend using Future::instrument(span) or the #[tracing::instrument] attribute
for async methods; reserve Span::in_scope() for purely synchronous sections and
add a short example or pointer to the tracing crate docs showing
Future::instrument(span) and #[tracing::instrument] as the correct patterns for
instrumenting async code.
In `@src/client/builder/connect.rs`:
- Around line 60-66: The current connect function enters a span guard and then
awaits connect_inner, which can lose proper tracing across async poll cycles;
instead, wrap the future returned by self.connect_inner(addr) with
Future::instrument(span) using the span from connect_span(&self.tracing_config,
&addr.to_string()), await that instrumented future, and only then call
emit_timing_event(timing_start); update references in the connect function
(span, _guard, timing_start, self.connect_inner, emit_timing_event,
connect_span, and self.tracing_config) accordingly so the span is applied to the
future rather than held via Span::enter().
In `@src/client/runtime.rs`:
- Around line 154-165: The code currently calls send_span(...) and keeps the
span guard from span.enter() across an await (in send, call, close), which is
not async-safe; replace entering the span with instrumenting the future: remove
span.enter() and _guard, create the send future (e.g., let fut =
self.framed.send(Bytes::from(bytes));), apply tracing::Instrument
(fut.instrument(span)) and await that instrumented future, keeping timing
(timing_start/emit_timing_event) and hooks (invoke_before_send_hooks,
invoke_error_hook, ClientError::from) logic the same; apply the same pattern to
the other occurrences (the methods around lines referenced for call and close)
so no Span::enter() guard lives across an await.
In `@src/client/streaming.rs`:
- Around line 106-107: The span.record("frame.bytes", bytes.len()) call is
currently before self.invoke_before_send_hooks(&mut bytes) so the recorded frame
size may not reflect hook-induced mutations; move the span.record call to after
invoke_before_send_hooks so it records the post-hook bytes.len(), updating the
metadata to match the actual bytes sent by the transport (look for the
span.record invocation and self.invoke_before_send_hooks(&mut bytes) in the same
function and swap their order).
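The ordering concern can be shown with a std-only sketch; `before_send_hook` below is a hypothetical stand-in for the client's `invoke_before_send_hooks` chain:

```rust
// Std-only sketch of the record-after-hooks ordering. `before_send_hook` is a
// hypothetical stand-in for the client's hook chain, which may mutate the payload.
fn before_send_hook(bytes: &mut Vec<u8>) {
    // e.g. a hook that appends a two-byte trailer such as a checksum
    bytes.extend_from_slice(&[0xAA, 0xBB]);
}

/// Returns the frame size that should be recorded on the span: measured AFTER
/// hooks run, so it matches what the transport actually sends.
fn recorded_frame_bytes(mut bytes: Vec<u8>) -> usize {
    before_send_hook(&mut bytes); // hooks may grow or shrink the payload
    bytes.len()                   // record the post-hook length on the span
}

fn main() {
    let payload = vec![1u8, 2, 3];
    // Recording before the hook would report 3; the transport sends 5 bytes.
    assert_eq!(recorded_frame_bytes(payload), 5);
    println!("ok");
}
```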
In `@src/client/tests/tracing.rs`:
- Around line 255-278: The test receive_error_records_result_err currently only
asserts the span exists; update it to also assert that the recorded span
includes result="err". After calling client.receive() and asserting
result.is_err(), adjust the logs_assert/span_assertion usage (referencing
span_assertion("client.receive", &[])) to include the tracing attribute for the
error (e.g., result="err") so the assertion verifies the span contains the error
result produced by WireframeClient::builder() with
TracingConfig::default().with_receive_timing(true) and the client.receive()
call.
ℹ️ Review info
Configuration used: Organization UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (12)
docs/execplans/11-1-2-structured-logging-and-tracing-spans.md, docs/users-guide.md, docs/wireframe-client-design.md, src/client/builder/connect.rs, src/client/messaging.rs, src/client/response_stream.rs, src/client/runtime.rs, src/client/streaming.rs, src/client/tests/mod.rs, src/client/tests/tracing.rs, src/client/tests/tracing_streaming.rs, src/client/tracing_config.rs
✅ RESOLVED (15/16 comments)

- Timing on error paths (Comments 1-5): all timing emission issues have been completely resolved.
- Testing improvements (Comments 6-7)
- Code complexity (Comments 9-10)
- Documentation (Comments 11-15)

❌ NOT RESOLVED (1/16 comments)

- Comment 8: TracingConfig unit tests…

Would you like me to generate a plan to add these unit tests?
…nstrument

Refactor client async methods to use `tracing::Instrument::instrument(span)` to wrap futures instead of holding `Span::enter()` guards across `.await` points. This change applies to connection, messaging, runtime, streaming, and response stream logic. This approach improves async span safety by ensuring spans are entered only while the future is polled, which is required for correctness on multi-threaded runtimes. Corresponding documentation and examples were updated to clarify the async-safe span instrumentation pattern.

Additional improvements include:
- Adding frame count tracking in ResponseStream.
- Emitting timing events only within instrumented spans.
- Updating tests to expect tracing event result annotations.
- Enhancing `TracingConfig` tests for level and timing flag consistency.

This change aligns with the `tracing` crate best practices for async instrumentation, enhancing reliability and observability of client operations.

Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/client/runtime.rs (1)
147-158: ⚠️ Potential issue | 🟠 Major — Emit `elapsed_us` when `send()` fails during serialization.

Line 157 starts timing after serialization, so Line 150 returns without a timing event on `ClientError::Serialize`.

🔧 Proposed fix

- let mut bytes = match self.serializer.serialize(message) {
+ let timing_start = self.tracing_config.send_timing.then(Instant::now);
+ let mut bytes = match self.serializer.serialize(message) {
      Ok(bytes) => bytes,
      Err(e) => {
          let err = ClientError::Serialize(e);
+         emit_timing_event(timing_start);
          self.invoke_error_hook(&err).await;
          return Err(err);
      }
  };
  self.invoke_before_send_hooks(&mut bytes);
  let span = send_span(&self.tracing_config, bytes.len());
- let timing_start = self.tracing_config.send_timing.then(Instant::now);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/client/runtime.rs` around lines 147 - 158, The send timing is started after serialization so a ClientError::Serialize returns without emitting elapsed_us; move the tracing timing setup so it begins before serialization (create timing_start using self.tracing_config.send_timing.then(Instant::now) and call send_span(&self.tracing_config, ...) before calling self.serializer.serialize), then ensure the serialization Err branch computes elapsed_us from timing_start (if present) and includes that timing info when calling self.invoke_error_hook(&err).await (or otherwise emits the timing event) so failures during self.serializer.serialize produce elapsed_us.
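The fix reduces to a plain std timing pattern; the following is a hedged sketch with a stubbed `serialize` standing in for the client's serializer (names are illustrative, not the actual wireframe API):

```rust
use std::time::Instant;

// Mirrors the helper described in the review: emits only when timing is enabled.
fn emit_timing_event(start: Option<Instant>) -> Option<u128> {
    start.map(|s| s.elapsed().as_micros())
}

// Stub standing in for `self.serializer.serialize(message)`.
fn serialize(ok: bool) -> Result<Vec<u8>, &'static str> {
    if ok { Ok(vec![0u8; 4]) } else { Err("serialize failed") }
}

/// Sketch of the corrected `send`: timing starts BEFORE serialization so the
/// error branch can still emit `elapsed_us`.
fn send(timing_enabled: bool, serialize_ok: bool) -> (Result<(), &'static str>, Option<u128>) {
    let timing_start = timing_enabled.then(Instant::now);
    match serialize(serialize_ok) {
        Ok(_bytes) => (Ok(()), emit_timing_event(timing_start)),
        Err(e) => (Err(e), emit_timing_event(timing_start)), // error path is now timed
    }
}

fn main() {
    let (res, elapsed) = send(true, false);
    assert!(res.is_err() && elapsed.is_some()); // timing emitted even on failure
    let (res, elapsed) = send(false, true);
    assert!(res.is_ok() && elapsed.is_none()); // timing disabled -> no event
    println!("ok");
}
```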
♻️ Duplicate comments (2)
docs/execplans/11-1-2-structured-logging-and-tracing-spans.md (1)
24-25: ⚠️ Potential issue | 🟡 Minor — Expand `BDD` on first occurrence.

Rewrite Line 24 to `behaviour-driven development (BDD)` so the acronym is expanded at first use.

As per coding guidelines: "Expand any uncommon acronym on first use, for example, Continuous Integration (CI)."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/execplans/11-1-2-structured-logging-and-tracing-spans.md` around lines 24 - 25, Replace the first occurrence of the acronym "BDD" with its expansion "behaviour-driven development (BDD)" in the sentence that currently reads "rstest with `tracing-test`, and BDD tests using rstest-bdd v0.5.0, validate the instrumentation." so the acronym is expanded on first use; update the text around `tracing-test`/`rstest-bdd` as needed to preserve grammar and punctuation.

src/client/messaging.rs (1)
130-139: ⚠️ Potential issue | 🟠 Major — Emit send timing on serialization errors in `send_envelope()`.

Line 139 initializes timing after serialization. Line 132 can return `ClientError::Serialize` without emitting `elapsed_us`.

🔧 Proposed fix

- let mut bytes = match self.serializer.serialize(&envelope) {
+ let timing_start = self.tracing_config.send_timing.then(Instant::now);
+ let mut bytes = match self.serializer.serialize(&envelope) {
      Ok(bytes) => bytes,
      Err(e) => {
          let err = ClientError::Serialize(e);
+         emit_timing_event(timing_start);
          self.invoke_error_hook(&err).await;
          return Err(err);
      }
  };
- let timing_start = self.tracing_config.send_timing.then(Instant::now);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/client/messaging.rs` around lines 130 - 139, The send timing is only started after serialization so a serialization error in send_envelope() (the Err(e) branch that returns ClientError::Serialize) never emits elapsed_us; move or ensure timing is started before serialization (e.g., initialize timing_start = self.tracing_config.send_timing.then(Instant::now) before calling self.serializer.serialize) or, alternatively, compute and emit elapsed_us in the Err(e) branch using Instant::now minus a previously captured start; update the serialize error path where ClientError::Serialize is created and where invoke_error_hook is called so elapsed timing is emitted consistently.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/client/messaging.rs`:
- Around line 138-140: The span for send_envelope is created before
invoke_before_send_hooks mutates the payload, so move or recreate the call to
send_envelope_span(&self.tracing_config, correlation_id, bytes.len()) to after
invoke_before_send_hooks(&mut bytes) so the span uses the post-hook bytes.len();
ensure you update any use of the earlier `span` variable (and recompute
timing_start if you intend it to reflect post-hook timing) so the tracing
reflects the mutated payload size.
In `@src/client/tracing_config.rs`:
- Around line 229-230: Add a module-level documentation comment above the test
module declaration: insert a brief `//!` comment immediately before `mod tests
{` that explains the purpose and utility of the `tests` module (e.g., what
aspects of tracing_config it verifies and any test setup invariants), so the
module complies with the rule that every Rust module begins with a `//!`
module-level comment and clearly documents its intent.
- Around line 307-329: The two existing tests only exercise the send setters;
add parameterized rstest cases to cover every public builder setter so each
with_* toggles only its own flag/field. Update the tests
individual_timing_setters_only_affect_their_flag and
individual_level_setters_only_affect_their_field to use #[rstest] with cases for
with_connect_timing, with_receive_timing, with_call_timing,
with_streaming_timing, with_close_timing (and the corresponding
with_connect_level, with_receive_level, with_call_level, with_streaming_level,
with_close_level) by creating TracingConfig::default().with_<X>_timing(true) or
.with_<X>_level(Level::TRACE) in each case and asserting flags via
assert_timing_flags(&cfg, [...]) and levels via assert_levels(&cfg, [...]) so
only the targeted position changes while others remain at defaults.
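The property those parameterized cases verify, that each builder setter toggles exactly one field, can be sketched std-only with a simplified stand-in for `TracingConfig` (the real tests would use `#[rstest]` cases against the actual type):

```rust
// Simplified stand-in for `TracingConfig`, used only to illustrate the
// "each setter toggles exactly one flag" table-driven check.
#[derive(Clone, Copy, Default, Debug, PartialEq)]
struct Cfg {
    connect_timing: bool,
    send_timing: bool,
    receive_timing: bool,
}

impl Cfg {
    fn with_connect_timing(mut self, v: bool) -> Self { self.connect_timing = v; self }
    fn with_send_timing(mut self, v: bool) -> Self { self.send_timing = v; self }
    fn with_receive_timing(mut self, v: bool) -> Self { self.receive_timing = v; self }
    fn flags(self) -> [bool; 3] { [self.connect_timing, self.send_timing, self.receive_timing] }
}

fn main() {
    // One case per setter: (builder call, expected flag pattern).
    let cases: [(fn(Cfg) -> Cfg, [bool; 3]); 3] = [
        (|c| c.with_connect_timing(true), [true, false, false]),
        (|c| c.with_send_timing(true), [false, true, false]),
        (|c| c.with_receive_timing(true), [false, false, true]),
    ];
    for (setter, expected) in cases {
        let cfg = setter(Cfg::default());
        assert_eq!(cfg.flags(), expected, "setter must only toggle its own flag");
    }
    println!("all setters isolated");
}
```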
---
Outside diff comments:
In `@src/client/runtime.rs`:
- Around line 147-158: The send timing is started after serialization so a
ClientError::Serialize returns without emitting elapsed_us; move the tracing
timing setup so it begins before serialization (create timing_start using
self.tracing_config.send_timing.then(Instant::now) and call
send_span(&self.tracing_config, ...) before calling self.serializer.serialize),
then ensure the serialization Err branch computes elapsed_us from timing_start
(if present) and includes that timing info when calling
self.invoke_error_hook(&err).await (or otherwise emits the timing event) so
failures during self.serializer.serialize produce elapsed_us.
---
Duplicate comments:
In `@docs/execplans/11-1-2-structured-logging-and-tracing-spans.md`:
- Around line 24-25: Replace the first occurrence of the acronym "BDD" with its
expansion "behaviour-driven development (BDD)" in the sentence that currently
reads "rstest with `tracing-test`, and BDD tests using rstest-bdd v0.5.0,
validate the instrumentation." so the acronym is expanded on first use; update
the text around `tracing-test`/`rstest-bdd` as needed to preserve grammar and
punctuation.
In `@src/client/messaging.rs`:
- Around line 130-139: The send timing is only started after serialization so a
serialization error in send_envelope() (the Err(e) branch that returns
ClientError::Serialize) never emits elapsed_us; move or ensure timing is started
before serialization (e.g., initialize timing_start =
self.tracing_config.send_timing.then(Instant::now) before calling
self.serializer.serialize) or, alternatively, compute and emit elapsed_us in the
Err(e) branch using Instant::now minus a previously captured start; update the
serialize error path where ClientError::Serialize is created and where
invoke_error_hook is called so elapsed timing is emitted consistently.
📒 Files selected for processing (8)
docs/execplans/11-1-2-structured-logging-and-tracing-spans.md, docs/wireframe-client-design.md, src/client/builder/connect.rs, src/client/messaging.rs, src/client/runtime.rs, src/client/streaming.rs, src/client/tests/tracing.rs, src/client/tracing_config.rs
@coderabbitai Have the following now been resolved? Please address the following concerns and ensure all commit gates succeed:
The implementation uses explicit …
…nfig testing

- Emit timing events on send errors and completions in messaging and runtime modules.
- Refactor timing_start assignment to include early error paths.
- Add comprehensive rstest parameterized tests for TracingConfig level and timing flag setters.
- Minor documentation and formatting improvements in tracing and execplans.

These changes improve observability by accurately emitting timing events and increase test coverage for tracing configuration.

Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
- Added a Tokio runtime to the ClientTracingWorld test fixture.
- Use a cloned handle from the runtime to run async steps, avoiding multiple runtime creations.
- Improved async invocations with runtime handle for better runtime management.
- Removed redundant runtime construction in test step definitions.
- Updated locking and error handling on captured output buffer with more robust error management.
- Minor docs update reflecting expanded tracing lifecycle events in roadmap.

This refactor simplifies async test code by reusing a single runtime handle, reducing overhead and preventing potential conflicts from multiple Tokio runtimes in tests.

Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
Actionable comments posted: 1
♻️ Duplicate comments (1)
tests/fixtures/client_tracing.rs (1)
83-87: ⚠️ Potential issue | 🟠 Major — Remove panic-based runtime construction from this shared fixture.

Stop using `Runtime::new().expect(...)` with lint suppression here. Make world construction fallible and propagate the error path through fixture setup.

🔧 Proposed refactor shape

-fn new_world(config: TracingConfig) -> ClientTracingWorld {
-    #[expect(
-        clippy::expect_used,
-        reason = "fixture constructor has no fallible return path"
-    )]
-    let runtime = tokio::runtime::Runtime::new().expect("create test runtime");
+type WorldResult<T> = Result<T, Box<dyn std::error::Error + Send + Sync>>;
+
+fn new_world(config: TracingConfig) -> WorldResult<ClientTracingWorld> {
+    let runtime = tokio::runtime::Runtime::new()?;
 @@
-    ClientTracingWorld {
+    Ok(ClientTracingWorld {
         runtime,
         addr: None,
         server: None,
         client: None,
         tracing_config: config,
         captured,
         _subscriber_guard: guard,
-    }
+    })
 }

 #[rustfmt::skip]
 #[fixture]
-pub fn client_tracing_world() -> ClientTracingWorld {
-    new_world(TracingConfig::default())
+pub fn client_tracing_world() -> WorldResult<ClientTracingWorld> {
+    new_world(TracingConfig::default())
 }

As per coding guidelines, "In Rust production code and shared fixtures, avoid `.expect()` entirely: return `Result` and use `?` to propagate errors instead of panicking."
Verify each finding against the current code and only fix it if needed. In `@tests/fixtures/client_tracing.rs` around lines 83 - 87, The test fixture currently panics on failure by calling tokio::runtime::Runtime::new().expect(...); update the fixture constructor to be fallible (return Result<..., E>) and propagate errors instead of panicking: remove the #[expect clippy::expect_used] suppression and the .expect call, call tokio::runtime::Runtime::new()? (or map the error) into the fixture's Result, and propagate the error from the function that constructs the shared world so callers can handle it; update the signature where the `runtime` variable is created and any callers to use ? to forward the error.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/execplans/11-1-2-structured-logging-and-tracing-spans.md`:
- Around line 12-14: Update the purpose sentence that currently reads "client
connect, send, receive, and stream lifecycle events" to include the missing
lifecycle stages "call" and "close" so it reads something like "client connect,
call, send, receive, close, and stream lifecycle events", and ensure the
following clause still states that every client operation emits a `tracing`
span; adjust surrounding wording if needed so the sentence remains grammatical
and consistent with the rest of the document.
---
Duplicate comments:
In `@tests/fixtures/client_tracing.rs`:
- Around line 83-87: The test fixture currently panics on failure by calling
tokio::runtime::Runtime::new().expect(...); update the fixture constructor to be
fallible (return Result<..., E>) and propagate errors instead of panicking:
remove the #[expect clippy::expect_used] suppression and the .expect call, call
tokio::runtime::Runtime::new()? (or map the error) into the fixture's Result,
and propagate the error from the function that constructs the shared world so
callers can handle it; update the signature where the `runtime` variable is
created and any callers to use ? to forward the error.
📒 Files selected for processing (7)
docs/execplans/11-1-2-structured-logging-and-tracing-spans.md, docs/roadmap.md, src/client/messaging.rs, src/client/runtime.rs, src/client/tracing_config.rs, tests/fixtures/client_tracing.rs, tests/steps/client_tracing_steps.rs
@coderabbitai Have the following now been resolved?

❌ NOT RESOLVED (1/16 comments) — Comment 8: TracingConfig unit tests…
The implementation goes beyond the original request by using … All 16 comments are now resolved. ✅
…ization ClientTracingWorld now optionally owns a Tokio runtime and carries an error message if initialization fails. All test usage points of the runtime handle now gracefully handle runtime initialization failure, returning errors instead of panicking. This change improves robustness and error reporting in client tracing BDD test fixtures and steps. Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/execplans/11-1-2-structured-logging-and-tracing-spans.md`:
- Around line 144-147: The sentence "Also useful as a public API for users who
want to track stream progress without tracing." needs a comma after the
conjunctive adverb "Also"; update the sentence in the paragraph that mentions
`frame_count` and `frame_count()` on `ResponseStream` to read "Also, useful as a
public API..." (or better: "Also, it is useful as a public API...") so the style
convention is followed.
📒 Files selected for processing (3)
docs/execplans/11-1-2-structured-logging-and-tracing-spans.md, tests/fixtures/client_tracing.rs, tests/steps/client_tracing_steps.rs
Correct line breaks in the 'structured logging and tracing spans' documentation for better readability and formatting without changing content. Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
Actionable comments posted: 1
♻️ Duplicate comments (1)
docs/execplans/11-1-2-structured-logging-and-tracing-spans.md (1)
146: ⚠️ Potential issue | 🟡 Minor — Insert comma after the conjunctive adverb.
Add a comma after "Also" at the beginning of the sentence, as required for conjunctive adverbs in British English.
📝 Proposed fix
- `stream.frames_total` fields in tracing events. Also useful as a public API
+ `stream.frames_total` fields in tracing events. Also, useful as a public API
  for users who want to track stream progress without tracing.

As per coding guidelines: "Documentation must use en-GB-oxendict spelling and grammar."
🧰 Tools
🪛 LanguageTool
[uncategorized] ~146-~146: A comma may be missing after the conjunctive/linking adverb 'Also'.
Context: ...frames_total` fields in tracing events. Also useful as a public API for users who ...(SENT_START_CONJUNCTIVE_LINKING_ADVERB_COMMA)
🤖 Prompt for AI agents
Verify each finding against the current code and only fix it if needed. In `@docs/execplans/11-1-2-structured-logging-and-tracing-spans.md` at line 146, insert a comma after "Also" so the sentence reads "Also, useful as a public API for users who want to track stream progress without tracing."🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/execplans/11-1-2-structured-logging-and-tracing-spans.md` at line 146, The sentence that begins "Also useful as a public API for users who want to track stream progress without tracing." is missing a comma after the conjunctive adverb; update the text around the `stream.frames_total` mention so it reads "Also, useful as a public API for users who want to track stream progress without tracing." (locate the sentence containing `stream.frames_total` and the following "Also useful..." fragment and insert the comma after "Also").
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/execplans/11-1-2-structured-logging-and-tracing-spans.md`:
- Line 69: Update the compound adjective "multi-threaded" to the single-word
form "multithreaded" in the sentence that currently reads "safe under
multi-threaded runtimes." (Search for that exact phrase in the
docs/execplans/11-1-2-structured-logging-and-tracing-spans.md content to locate
the occurrence.) Replace only the adjective to "multithreaded" so the sentence
reads "safe under multithreaded runtimes."
---
Duplicate comments:
In `@docs/execplans/11-1-2-structured-logging-and-tracing-spans.md`:
- Line 146: The sentence that begins "Also useful as a public API for users who
want to track stream progress without tracing." is missing a comma after the
conjunctive adverb; update the text around the `stream.frames_total` mention so
it reads "Also, useful as a public API for users who want to track stream
progress without tracing." (locate the sentence containing `stream.frames_total`
and the following "Also useful..." fragment and insert the comma after "Also").
📒 Files selected for processing (1)
docs/execplans/11-1-2-structured-logging-and-tracing-spans.md
Corrected 'multi-threaded' to 'multithreaded' and added a missing comma after 'Also' to improve clarity in the execplans documentation on structured logging and tracing spans. Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/execplans/11-1-2-structured-logging-and-tracing-spans.md`:
- Line 197: Replace the machine-specific absolute path `/home/user/project` in
the phrase "The wireframe crate (`/home/user/project`) is a Rust async
networking framework" with a repository-relative wording such as "the wireframe
crate (run from the repository root)" or "the wireframe crate (relative to the
repository root)" so the guidance is portable; update the sentence in
docs/execplans/11-1-2-structured-logging-and-tracing-spans.md wherever that path
appears (and the similar occurrence noted at line 580) to use the
repository-root phrasing instead of an absolute file path.
- Around line 569-576: The ExecPlan currently hardcodes the markdownlint binary
by invoking "make markdownlint MDLINT=/root/.bun/bin/markdownlint-cli2"; update
that invocation to the project-standard "make markdownlint" so the command uses
the default make target and environment, and do the same replacement for the
identical invocation referenced at 601-601; modify the line containing "make
markdownlint MDLINT=/root/.bun/bin/markdownlint-cli2" to simply "make
markdownlint" (leaving the surrounding pipefail/tee/echo pattern intact).
📒 Files selected for processing (1)
docs/execplans/11-1-2-structured-logging-and-tracing-spans.md
The documentation now refers to running commands from the repository root instead of a hardcoded path (/home/user/project). This improves clarity and makes instructions more generic and applicable to all users. Additionally, the call to `make markdownlint` was simplified by removing a hardcoded path to the markdownlint executable, streamlining the instructions. Co-authored-by: devboxerhub[bot] <devboxerhub[bot]@users.noreply.github.com>
Summary
What’s new
Changes by area
Usage example
Testing plan
Risks and mitigations
If you'd like, I can add a short example snippet to the users guide showing a minimal tracing_config usage alongside a quick output sample for a connect/send/receive cycle.
▢ Generated by DevBoxer ◰
ℹ️ Tag @devboxerhub to ask questions and address PR feedback
📎 Task: https://www.devboxer.com/task/6cf37ed1-6f69-4348-94f8-cb5bc21c328e
Summary by Sourcery
Add configurable structured tracing spans and per-command timing across the wireframe client lifecycle, expose a public TracingConfig API, and validate the instrumentation with new unit, BDD tests, and documentation updates.
New Features:
Enhancements:
Documentation:
Tests: