Reviewer's Guide

Refactored ConnectionActor's polling mechanism by extracting event determination into a dedicated async helper, introducing a PollEvent enum to unify polling outcomes, and simplifying the poll_sources method by matching on PollEvent instead of using an inline `tokio::select!`.

Sequence diagram for ConnectionActor polling with PollEvent:

```mermaid
sequenceDiagram
    participant CA as ConnectionActor
    participant State as ActorState
    participant Out as Vec<F>
    participant PollEvent
    CA->>CA: next_event(State)
    CA-->>PollEvent: Await PollEvent (Shutdown | High | Low | Response)
    CA->>CA: match PollEvent
    alt PollEvent::Shutdown
        CA->>CA: process_shutdown(State)
    else PollEvent::High
        CA->>CA: process_high(res, State, Out)
    else PollEvent::Low
        CA->>CA: process_low(res, State, Out)
    else PollEvent::Response
        CA->>CA: process_response(res, State, Out)
    end
```
Class diagram for PollEvent enum and ConnectionActor polling changes:

```mermaid
classDiagram
    class ConnectionActor {
        +async fn poll_sources(state: &mut ActorState, out: &mut Vec<F>) -> Result<(), WireframeError<E>>
        +async fn next_event(&mut self, state: &ActorState) -> PollEvent<F, E>
        -fn process_shutdown(state: &mut ActorState)
        -fn process_high(res, state: &mut ActorState, out: &mut Vec<F>)
        -fn process_low(res, state: &mut ActorState, out: &mut Vec<F>)
        -fn process_response(res, state: &mut ActorState, out: &mut Vec<F>)
    }
    class PollEvent {
        <<enum>>
        Shutdown
        High(Option<F>)
        Low(Option<F>)
        Response(Option<Result<F, WireframeError<E>>>)
    }
    ConnectionActor --> PollEvent : uses
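The diagrams above can be condensed into a small, self-contained sketch. `WireframeError` is stubbed here, and `next_event_sync` is a hypothetical synchronous stand-in for the async `next_event` (which awaits a biased `tokio::select!`); the priority ordering it models (shutdown > high > low > response) is the one the review discusses.

```rust
// Stub of the crate's error type so the sketch stands alone.
#[derive(Debug, PartialEq)]
enum WireframeError<E> {
    Protocol(E),
}

// The PollEvent enum from the class diagram above.
#[derive(Debug, PartialEq)]
enum PollEvent<F, E> {
    Shutdown,
    High(Option<F>),
    Low(Option<F>),
    Response(Option<Result<F, WireframeError<E>>>),
}

// Hypothetical synchronous stand-in for `next_event`: checks sources in the
// same order the biased `select!` enforces, falling through to Response so
// there is always a result even when the queues are empty.
fn next_event_sync<F, E>(
    shutdown: bool,
    high: Option<F>,
    low: Option<F>,
    response: Option<Result<F, WireframeError<E>>>,
) -> PollEvent<F, E> {
    if shutdown {
        PollEvent::Shutdown
    } else if high.is_some() {
        PollEvent::High(high)
    } else if low.is_some() {
        PollEvent::Low(low)
    } else {
        PollEvent::Response(response)
    }
}

fn main() {
    // Shutdown wins even when a high-priority frame is queued.
    assert_eq!(
        next_event_sync::<u8, ()>(true, Some(1), None, None),
        PollEvent::Shutdown
    );
    // With no shutdown, the high-priority queue is drained before the low one.
    assert_eq!(
        next_event_sync::<u8, ()>(false, Some(1), Some(2), None),
        PollEvent::High(Some(1))
    );
}
```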
@coderabbitai review

✅ Actions performed: Review triggered.
Hey @leynos - I've reviewed your changes - here's some feedback:
- Ensure `next_event`'s `tokio::select!` always has at least one enabled branch (or add a fallback) so it can't panic when all guards are false.
- Add a comment or test to confirm that the biased `select!` in `next_event` still enforces the intended priority ordering (shutdown > high > low > response) from the original implementation.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- Ensure `next_event`’s `tokio::select!` always has at least one enabled branch (or add a fallback) so it can’t panic when all guards are false.
- Add a comment or test to confirm that the biased `select!` in `next_event` still enforces the intended priority ordering (shutdown > high > low > response) from the original implementation.
## Individual Comments
### Comment 1
<location> `src/connection.rs:347` </location>
<code_context>
}
+
+ /// Determine which event should be processed next.
+ async fn next_event(&mut self, state: &ActorState) -> PollEvent<F, E> {
+ let high_available = self.high_rx.is_some();
+ let low_available = self.low_rx.is_some();
</code_context>
<issue_to_address>
Consider documenting or clarifying the bias order in next_event's select.
Since the order of branches in 'tokio::select! { biased; ... }' affects event prioritization, please clarify if prioritizing the shutdown signal is intentional. Documenting this will help maintainers understand the impact on shutdown responsiveness and event throughput.
</issue_to_address>
<suggested_fix>
<<<<<<< SEARCH
/// Determine which event should be processed next.
async fn next_event(&mut self, state: &ActorState) -> PollEvent<F, E> {
let high_available = self.high_rx.is_some();
let low_available = self.low_rx.is_some();
=======
/// Determine which event should be processed next.
///
/// # Event Selection Bias
///
/// This function uses `tokio::select! { biased; ... }` to prioritize the shutdown signal branch.
/// The order of branches in the select block is intentional: the shutdown signal is checked first,
/// ensuring that shutdown requests are handled promptly, even if other events are available.
/// This improves shutdown responsiveness, but may slightly reduce event throughput if shutdown
/// signals and other events arrive simultaneously.
async fn next_event(&mut self, state: &ActorState) -> PollEvent<F, E> {
let high_available = self.high_rx.is_some();
let low_available = self.low_rx.is_some();
>>>>>>> REPLACE
</suggested_fix>
### Comment 2
<location> `src/connection.rs:367` </location>
<code_context>
+}
+
+/// Outcome of polling the various sources.
+enum PollEvent<F, E> {
+ /// Shutdown signal triggered.
+ Shutdown,
</code_context>
<issue_to_address>
Consider deriving standard traits for PollEvent for easier debugging and usage.
Adding derives like Debug, PartialEq, or Clone (if applicable) can improve logging, testing, and code manipulation. Consider including them unless there are specific constraints.
Suggested implementation:
```rust
#[derive(Debug, PartialEq, Clone)]
/// Outcome of polling the various sources.
enum PollEvent<F, E> {
```
```rust
enum PollEvent<F, E>
where
F: Debug + PartialEq + Clone,
E: Debug + PartialEq + Clone,
{
```
</issue_to_address>
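A runnable illustration of the derive suggestion: `#[derive(...)]` on a generic enum automatically bounds the generated impls (e.g. `impl<F: Debug, E: Debug> Debug for PollEvent<F, E>`), so the `where` clause on the enum definition shown in the second snippet is not required. The `Response` variant's error type is simplified to a bare `E` here so the sketch stands alone.

```rust
// Sketch only: the real `Response` variant wraps `WireframeError<E>`,
// simplified to a plain `E` to keep the example self-contained.
#[derive(Debug, PartialEq, Clone)]
enum PollEvent<F, E> {
    Shutdown,
    High(Option<F>),
    Low(Option<F>),
    Response(Option<Result<F, E>>),
}

fn main() {
    let ev: PollEvent<u8, String> = PollEvent::High(Some(7));
    // Clone and PartialEq come from the derive; the generated impls are
    // bounded on F and E automatically, with no `where` clause on the enum.
    let copy = ev.clone();
    assert_eq!(copy, PollEvent::High(Some(7)));
    // Debug enables logging and test diagnostics.
    assert_eq!(format!("{ev:?}"), "High(Some(7))");
}
```

Note that the derived `Debug` impl is only usable when both `F` and `E` implement `Debug`; concrete `F`/`E` types without those traits can still use the enum itself.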
Inline review context (the guarded `select!` branches in `next_event`):

```rust
() = Self::wait_shutdown(self.shutdown.clone()), if state.is_active() => PollEvent::Shutdown,
res = Self::poll_receiver(self.high_rx.as_mut()), if high_available => PollEvent::High(res),
```
Summary

- Introduce `PollEvent` to enumerate polling outcomes
- Match on `PollEvent` in `poll_sources`
- Extract event selection into a `next_event` helper

Testing

- `make fmt`
- `make lint`
- `make test`

https://chatgpt.com/codex/tasks/task_e_686720cf84f48322b0d5db9b12ce721e
Summary by Sourcery
Simplify the polling logic in ConnectionActor by introducing a PollEvent enum and a next_event helper to centralize event selection and replace the inline tokio::select! usage.
Enhancements: