
Simplify polling logic in ConnectionActor#164

Closed
leynos wants to merge 2 commits into main from codex/refactor-polling-logic-in-connectionactor

Conversation

@leynos
Owner

@leynos leynos commented Jul 4, 2025

Summary

  • introduce PollEvent to enumerate polling outcomes
  • match on PollEvent in poll_sources
  • derive events via new next_event helper

Testing

  • make fmt
  • make lint
  • make test

https://chatgpt.com/codex/tasks/task_e_686720cf84f48322b0d5db9b12ce721e

Summary by Sourcery

Simplify the polling logic in ConnectionActor by introducing a PollEvent enum and a next_event helper to centralize event selection and replace the inline tokio::select! usage.

Enhancements:

  • Extract polling logic into a new async next_event helper.
  • Introduce a PollEvent enum to enumerate shutdown, high, low, and response events.
  • Replace the inline tokio::select! branches with a match on PollEvent.
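The shape these bullets describe can be sketched as follows. The enum mirrors the variants named in the PR, but the async plumbing (`tokio::select!`, the receivers, `WireframeError`) is elided, so the dispatch function and its return values are illustrative, not copied from `src/connection.rs`:

```rust
// Simplified mirror of the PR's PollEvent enum (WireframeError<E> is
// replaced by a bare E here to keep the sketch self-contained).
#[derive(Debug)]
enum PollEvent<F, E> {
    /// Shutdown signal triggered.
    Shutdown,
    /// Result of polling the high-priority queue.
    High(Option<F>),
    /// Result of polling the low-priority queue.
    Low(Option<F>),
    /// Result of polling the response stream.
    Response(Option<Result<F, E>>),
}

// poll_sources now reduces to a match over the event, delegating to one
// handler per variant; the handler names follow the PR description.
fn dispatch<F, E>(event: PollEvent<F, E>) -> &'static str {
    match event {
        PollEvent::Shutdown => "process_shutdown",
        PollEvent::High(_) => "process_high",
        PollEvent::Low(_) => "process_low",
        PollEvent::Response(_) => "process_response",
    }
}

fn main() {
    let event: PollEvent<u32, String> = PollEvent::High(Some(7));
    println!("{}", dispatch(event)); // prints "process_high"
}
```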

@sourcery-ai
Contributor

sourcery-ai Bot commented Jul 4, 2025

Reviewer's Guide

Refactored ConnectionActor’s polling mechanism by extracting event determination into a dedicated async helper, introducing a PollEvent enum to unify polling outcomes, and simplifying the poll_sources method by matching on PollEvent instead of using an inline tokio::select.

Sequence diagram for ConnectionActor polling with PollEvent

sequenceDiagram
    participant CA as ConnectionActor
    participant State as ActorState
    participant Out as Vec<F>
    participant PollEvent
    CA->>CA: next_event(State)
    CA-->>PollEvent: Await PollEvent (Shutdown | High | Low | Response)
    CA->>CA: match PollEvent
    alt PollEvent::Shutdown
        CA->>CA: process_shutdown(State)
    else PollEvent::High
        CA->>CA: process_high(res, State, Out)
    else PollEvent::Low
        CA->>CA: process_low(res, State, Out)
    else PollEvent::Response
        CA->>CA: process_response(res, State, Out)
    end

Class diagram for PollEvent enum and ConnectionActor polling changes

classDiagram
    class ConnectionActor {
        +async fn poll_sources(state: &mut ActorState, out: &mut Vec<F>) -> Result<(), WireframeError<E>>
        +async fn next_event(&mut self, state: &ActorState) -> PollEvent<F, E>
        -fn process_shutdown(state: &mut ActorState)
        -fn process_high(res, state: &mut ActorState, out: &mut Vec<F>)
        -fn process_low(res, state: &mut ActorState, out: &mut Vec<F>)
        -fn process_response(res, state: &mut ActorState, out: &mut Vec<F>)
    }
    class PollEvent {
        <<enum>>
        Shutdown
        High(Option<F>)
        Low(Option<F>)
        Response(Option<Result<F, WireframeError<E>>>)
    }
    ConnectionActor --> PollEvent : uses

File-Level Changes

Change Details Files
Extract polling logic into a next_event helper
  • Moved polling branches out of poll_sources into a new async next_event method
  • Centralized shutdown, high, low, and response polling inside tokio::select in next_event
  • Wrapped availability checks for each source in next_event before polling
src/connection.rs
Introduce PollEvent enum to represent polling outcomes
  • Defined PollEvent with Shutdown, High, Low, and Response variants
  • Made next_event return PollEvent variants instead of raw results
src/connection.rs
Simplify poll_sources by matching on PollEvent
  • Replaced inline tokio::select in poll_sources with a match on PollEvent
  • Delegated to process_shutdown, process_high, process_low, and process_response based on the enum
src/connection.rs
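The priority order that `next_event` centralizes (shutdown, then high, then low, then response) can be modelled synchronously. This is a stand-in with boolean readiness flags, not the actual async helper — the real code awaits each source inside `tokio::select! { biased; ... }`:

```rust
#[derive(Debug, PartialEq)]
enum PollEvent {
    Shutdown,
    High,
    Low,
    Response,
}

// A biased select checks branches top-to-bottom; this function encodes
// the same ordering over readiness flags (illustrative only).
fn next_event(shutdown: bool, high: bool, low: bool, response: bool) -> Option<PollEvent> {
    if shutdown {
        return Some(PollEvent::Shutdown);
    }
    if high {
        return Some(PollEvent::High);
    }
    if low {
        return Some(PollEvent::Low);
    }
    if response {
        return Some(PollEvent::Response);
    }
    None
}

fn main() {
    // When several sources are ready at once, shutdown wins.
    assert_eq!(next_event(true, true, true, true), Some(PollEvent::Shutdown));
    // With shutdown quiet, the high-priority queue is drained first.
    assert_eq!(next_event(false, true, true, true), Some(PollEvent::High));
}
```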

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.

Getting Help

@coderabbitai
Contributor

coderabbitai Bot commented Jul 4, 2025

Warning

Rate limit exceeded

@leynos has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 0 minutes and 58 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between a9a483d and 8f238a8.

📒 Files selected for processing (1)
  • src/connection.rs (2 hunks)

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai auto-generate unit tests to generate unit tests for this PR.
  • @coderabbitai resolve to resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.

Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey @leynos - I've reviewed your changes and they look great!


Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

@leynos
Owner Author

leynos commented Jul 4, 2025

@coderabbitai review

@coderabbitai
Contributor

coderabbitai Bot commented Jul 4, 2025

✅ Actions performed

Review triggered.

Note: CodeRabbit is an incremental review system and does not re-review already reviewed commits. This command is applicable only when automatic reviews are paused.

@leynos
Owner Author

leynos commented Jul 4, 2025

@sourcery-ai review

Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey @leynos - I've reviewed your changes - here's some feedback:

  • Ensure next_event’s tokio::select! always has at least one enabled branch (or add a fallback) so it can’t panic when all guards are false.
  • Add a comment or test to confirm that the biased select! in next_event still enforces the intended priority ordering (shutdown > high > low > response) from the original implementation.
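The first point can be made concrete with a synchronous model of the guard logic: a `tokio::select!` whose guarded branches are all disabled panics at runtime, so an explicit fallback keeps the caller safe. The `Idle` variant below is hypothetical, added only to illustrate the fallback — it is not part of the PR:

```rust
#[derive(Debug, PartialEq)]
enum Event {
    Shutdown,
    High,
    Low,
    Response,
    // Hypothetical fallback: returned when every branch guard is false,
    // instead of panicking the way an all-disabled select! would.
    Idle,
}

fn select_with_fallback(guards: [bool; 4]) -> Event {
    let [shutdown, high, low, response] = guards;
    if shutdown {
        Event::Shutdown
    } else if high {
        Event::High
    } else if low {
        Event::Low
    } else if response {
        Event::Response
    } else {
        Event::Idle // the fallback branch the reviewer asks for
    }
}

fn main() {
    assert_eq!(select_with_fallback([false; 4]), Event::Idle);
}
```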
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- Ensure `next_event`’s `tokio::select!` always has at least one enabled branch (or add a fallback) so it can’t panic when all guards are false.
- Add a comment or test to confirm that the biased `select!` in `next_event` still enforces the intended priority ordering (shutdown > high > low > response) from the original implementation.

## Individual Comments

### Comment 1
<location> `src/connection.rs:347` </location>
<code_context>
     }
+
+    /// Determine which event should be processed next.
+    async fn next_event(&mut self, state: &ActorState) -> PollEvent<F, E> {
+        let high_available = self.high_rx.is_some();
+        let low_available = self.low_rx.is_some();
</code_context>

<issue_to_address>
Consider documenting or clarifying the bias order in next_event's select.

Since the order of branches in 'tokio::select! { biased; ... }' affects event prioritization, please clarify if prioritizing the shutdown signal is intentional. Documenting this will help maintainers understand the impact on shutdown responsiveness and event throughput.
</issue_to_address>

<suggested_fix>
<<<<<<< SEARCH
    /// Determine which event should be processed next.
    async fn next_event(&mut self, state: &ActorState) -> PollEvent<F, E> {
        let high_available = self.high_rx.is_some();
        let low_available = self.low_rx.is_some();
=======
    /// Determine which event should be processed next.
    ///
    /// # Event Selection Bias
    ///
    /// This function uses `tokio::select! { biased; ... }` to prioritize the shutdown signal branch.
    /// The order of branches in the select block is intentional: the shutdown signal is checked first,
    /// ensuring that shutdown requests are handled promptly, even if other events are available.
    /// This improves shutdown responsiveness, but may slightly reduce event throughput if shutdown
    /// signals and other events arrive simultaneously.
    async fn next_event(&mut self, state: &ActorState) -> PollEvent<F, E> {
        let high_available = self.high_rx.is_some();
        let low_available = self.low_rx.is_some();
>>>>>>> REPLACE

</suggested_fix>

### Comment 2
<location> `src/connection.rs:367` </location>
<code_context>
+}
+
+/// Outcome of polling the various sources.
+enum PollEvent<F, E> {
+    /// Shutdown signal triggered.
+    Shutdown,
</code_context>

<issue_to_address>
Consider deriving standard traits for PollEvent for easier debugging and usage.

Adding derives like Debug, PartialEq, or Clone (if applicable) can improve logging, testing, and code manipulation. Consider including them unless there are specific constraints.

Suggested implementation:

```rust
#[derive(Debug, PartialEq, Clone)]
/// Outcome of polling the various sources.
enum PollEvent<F, E> {

```

```rust
enum PollEvent<F, E>
where
    F: Debug + PartialEq + Clone,
    E: Debug + PartialEq + Clone,
{

```
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

Comment thread src/connection.rs
Comment on lines +346 to +349

() = Self::wait_shutdown(self.shutdown.clone()), if state.is_active() => PollEvent::Shutdown,

res = Self::poll_receiver(self.high_rx.as_mut()), if high_available => PollEvent::High(res),
Contributor


suggestion: Consider documenting or clarifying the bias order in next_event's select.

Since the order of branches in 'tokio::select! { biased; ... }' affects event prioritization, please clarify if prioritizing the shutdown signal is intentional. Documenting this will help maintainers understand the impact on shutdown responsiveness and event throughput.

Suggested change
/// Determine which event should be processed next.
async fn next_event(&mut self, state: &ActorState) -> PollEvent<F, E> {
let high_available = self.high_rx.is_some();
let low_available = self.low_rx.is_some();
/// Determine which event should be processed next.
///
/// # Event Selection Bias
///
/// This function uses `tokio::select! { biased; ... }` to prioritize the shutdown signal branch.
/// The order of branches in the select block is intentional: the shutdown signal is checked first,
/// ensuring that shutdown requests are handled promptly, even if other events are available.
/// This improves shutdown responsiveness, but may slightly reduce event throughput if shutdown
/// signals and other events arrive simultaneously.
async fn next_event(&mut self, state: &ActorState) -> PollEvent<F, E> {
let high_available = self.high_rx.is_some();
let low_available = self.low_rx.is_some();

Comment thread src/connection.rs
/// Result of polling the low-priority queue.
Low(Option<F>),
/// Result of polling the response stream.
Response(Option<Result<F, WireframeError<E>>>),
Contributor


suggestion: Consider deriving standard traits for PollEvent for easier debugging and usage.

Adding derives like Debug, PartialEq, or Clone (if applicable) can improve logging, testing, and code manipulation. Consider including them unless there are specific constraints.

Suggested implementation:

#[derive(Debug, PartialEq, Clone)]
/// Outcome of polling the various sources.
enum PollEvent<F, E> {
enum PollEvent<F, E>
where
    F: Debug + PartialEq + Clone,
    E: Debug + PartialEq + Clone,
{
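A note on the bot's second snippet: the extra `where F: Debug + PartialEq + Clone, ...` clause on the enum definition is not needed, because `#[derive(...)]` on a generic enum already bounds `F` and `E` in the generated impls. A minimal check (the enum body abbreviated from the PR):

```rust
// The derive attribute alone suffices; the generated impls require
// F: Debug / Clone / PartialEq (and likewise for E) only at use sites.
#[derive(Debug, Clone, PartialEq)]
enum PollEvent<F, E> {
    Shutdown,
    High(Option<F>),
    Low(Option<F>),
    Response(Option<Result<F, E>>),
}

fn main() {
    let event: PollEvent<u8, String> = PollEvent::Low(Some(3));
    // Debug and PartialEq both work without any manual where-clause.
    assert_eq!(format!("{event:?}"), "Low(Some(3))");
    assert_eq!(event.clone(), event);
}
```

One caveat worth weighing before adopting the suggestion wholesale: deriving `PartialEq` here forces `F` and `E` to be comparable wherever equality is used, which may be too strong if `E` wraps I/O errors.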

@leynos leynos closed this Jul 4, 2025