Conversation
Gracefully accept empty responses from the GitHub API and enforce a minimum cooldown to avoid busy retries.
Reviewer's Guide

This PR updates the worker loop to treat deserialization errors from the GitHub API as successes to avoid stalling the queue, enforces a minimum one-second cooldown between retries to prevent busy loops, and updates the design documentation to reflect these new behaviors.

Sequence diagram for updated worker error handling and cooldown

```mermaid
sequenceDiagram
    participant Worker
    participant GitHubAPI
    participant Queue
    loop For each comment request
        Worker->>GitHubAPI: Post comment
        alt Success
            GitHubAPI-->>Worker: Valid response
            Worker->>Queue: Commit request (remove from queue)
        else Deserialization error
            GitHubAPI-->>Worker: Malformed response
            Worker->>Queue: Commit request (remove from queue)
        else Other error
            GitHubAPI-->>Worker: Error
            Worker--x Queue: Do not commit (request stays in queue)
        end
        Worker->>Worker: Sleep for max(1, cooldown_period_seconds)
    end
```
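The commit-or-retry decision in the diagram can be sketched in isolation. This is a hedged illustration, not the daemon's actual code: `ApiError` is a stand-in for octocrab's real error enum, and `should_commit` is a hypothetical helper name.

```rust
/// Stand-in for the GitHub client's error type; illustrative only.
#[derive(Debug)]
enum ApiError {
    /// The response body could not be deserialized; the comment was
    /// still posted, so the job is safe to commit.
    Serde,
    /// Any other failure: HTTP status, network error, timeout.
    Other,
}

/// A queue entry is committed (removed from the queue) when the post
/// succeeded, or when only deserialization of the response failed,
/// because the daemon discards the response body anyway.
fn should_commit(result: &Result<(), ApiError>) -> bool {
    matches!(result, Ok(()) | Err(ApiError::Serde))
}

fn main() {
    assert!(should_commit(&Ok(())));
    assert!(should_commit(&Err(ApiError::Serde)));
    assert!(!should_commit(&Err(ApiError::Other)));
    println!("serde errors commit; other errors stay queued");
}
```

Only errors other than deserialization leave the entry in the queue for retry after the cooldown.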
Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Summary by CodeRabbit
Walkthrough

Update the worker loop to treat GitHub API deserialization errors as successful outcomes, committing queue entries in these cases. Adjust logging macro invocations for consistency. Enforce a minimum cooldown of one second after processing each request. Amend documentation to describe these changes in error handling and cooldown enforcement.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Worker
    participant GitHubAPI
    participant Queue
    Worker->>GitHubAPI: post_comment(request)
    alt Success or Deserialization Error
        GitHubAPI-->>Worker: Ok(_) or Err(Serde)
        Worker->>Queue: commit entry
    else API Failure or Timeout
        GitHubAPI-->>Worker: Err(other)
        Worker->>Worker: log error
    end
    Worker->>Worker: sleep(max(1, cooldown_period_seconds))
```
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~8 minutes

Possibly related PRs
Hey @leynos - I've reviewed your changes and found some issues that need to be addressed.
Prompt for AI Agents
Please address the comments from this code review:
## Individual Comments
### Comment 1
<location> `docs/comenq-design.md:1054` </location>
<code_context>
-secondary rate limits.
+secondary rate limits. A minimum delay of one second is enforced even if a
+lower value is configured to prevent a busy retry loop. Responses that cannot
+be deserialised are treated as successful because the daemon discards the
+response body.
</code_context>
<issue_to_address>
Consider changing 'deserialised' to 'deserialized' for consistency with American English.
Updating to 'deserialized' aligns with the existing American English usage in the documentation.
</issue_to_address>
<suggested_fix>
<<<<<<< SEARCH
lower value is configured to prevent a busy retry loop. Responses that cannot
be deserialised are treated as successful because the daemon discards the
response body.
=======
lower value is configured to prevent a busy retry loop. Responses that cannot
be deserialized are treated as successful because the daemon discards the
response body.
>>>>>>> REPLACE
</suggested_fix>
### Comment 2
<location> `docs/comenq-design.md:1053` </location>
<code_context>
defaults to 960 seconds (16 minutes) to provide ample headroom against GitHub's
-secondary rate limits.
+secondary rate limits. A minimum delay of one second is enforced even if a
+lower value is configured to prevent a busy retry loop. Responses that cannot
+be deserialised are treated as successful because the daemon discards the
+response body.
</code_context>
<issue_to_address>
This line is part of a paragraph that exceeds the 80 column limit.
Ensure that all lines in paragraphs are wrapped to 80 columns or fewer.
</issue_to_address>Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
Has this now been resolved?
Yes, this has now been resolved. The documentation in comenq-design.md has been updated to use "deserialized" (American English spelling) instead of "deserialised." If you check the latest version of the file, you'll see the corrected spelling in the relevant section. Let me know if you need any further clarification or help!
@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix:
Tip: For best results, initiate chat on the files or code changes.

🧩 Analysis chain

@leynos: The test failures are caused by changes in the PR that modified how the worker handles API responses and cooldown behavior. The tests need to be updated to reflect these changes.

Problem Analysis

The failing tests are checking queue behavior after worker execution:
This indicates the tests were written for the old behavior where:
The Fix

Update the test expectations to account for:
Code Changes Needed

```rust
// In the test setup, ensure mock responses align with new behavior
// Update assertions to expect the correct queue counts after the changes

// For run_worker_commits_on_success test:
// - Verify that both Ok(_) and Err(octocrab::Error::Serde{..}) commit jobs
// - Account for minimum cooldown timing

// For run_worker_requeues_on_error test:
// - Only non-serde errors should requeue
// - Update expected queue count based on which specific errors requeue vs commit
```

AI Coding Agent Prompt

Based on the test code and error handling logic, I can now provide a complete analysis and fix.

@leynos: I've identified the root cause of the test failures. The issue is that the tests were written for the old behavior, but the PR changed how errors are handled.

Problem Analysis

Looking at the test code and the modified
The key issue is the minimum 1-second cooldown enforcement combined with the short 50ms test duration.

Root Cause

The tests run for only 50ms
The Fix

```rust
#[tokio::test]
async fn run_worker_commits_on_success() {
    let (_dir, server, cfg, rx, octo) = setup_run_worker(201).await;
    let h = tokio::spawn(run_worker(cfg.clone(), rx, octo));
    // Increase sleep duration to allow for minimum 1-second cooldown
    sleep(Duration::from_millis(1100)).await;
    h.abort();
    assert_eq!(server.received_requests().await.unwrap().len(), 1);
    assert_eq!(std::fs::read_dir(&cfg.queue_path).unwrap().count(), 0);
}
```
```rust
#[tokio::test]
async fn run_worker_requeues_on_error() {
    let (_dir, server, cfg, rx, octo) = setup_run_worker(500).await;
    let h = tokio::spawn(run_worker(cfg.clone(), rx, octo));
    // Increase sleep duration to allow for minimum 1-second cooldown
    sleep(Duration::from_millis(1100)).await;
    h.abort();
    assert_eq!(server.received_requests().await.unwrap().len(), 1);
    assert!(std::fs::read_dir(&cfg.queue_path).unwrap().count() > 0);
}
```

AI Coding Agent Prompt

Now I have the complete picture. The issue is exactly as I suspected: a timing problem caused by the minimum cooldown enforcement.

Complete Analysis

From the code examination:
The Fix

The tests need to run long enough to account for the mandatory 1-second sleep between job processing.

```rust
#[tokio::test]
async fn run_worker_commits_on_success() {
    let (_dir, server, cfg, rx, octo) = setup_run_worker(201).await;
    let h = tokio::spawn(run_worker(cfg.clone(), rx, octo));
    // Changed from 50ms to 1100ms to account for minimum 1-second cooldown
    sleep(Duration::from_millis(1100)).await;
    h.abort();
    assert_eq!(server.received_requests().await.unwrap().len(), 1);
    assert_eq!(std::fs::read_dir(&cfg.queue_path).unwrap().count(), 0);
}

#[tokio::test]
async fn run_worker_requeues_on_error() {
    let (_dir, server, cfg, rx, octo) = setup_run_worker(500).await;
    let h = tokio::spawn(run_worker(cfg.clone(), rx, octo));
    // Changed from 50ms to 1100ms to account for minimum 1-second cooldown
    sleep(Duration::from_millis(1100)).await;
    h.abort();
    assert_eq!(server.received_requests().await.unwrap().len(), 1);
    assert!(std::fs::read_dir(&cfg.queue_path).unwrap().count() > 0);
}
```

Final AI Coding Agent Prompt
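The timing interaction behind these failures comes down to a simple clamp. A minimal, self-contained sketch; the helper name is illustrative, not taken from the daemon:

```rust
use std::cmp;
use std::time::Duration;

/// Clamp a configured cooldown to at least one second, mirroring the
/// daemon's `std::cmp::max(1, config.cooldown_period_seconds)` enforcement.
fn effective_cooldown(configured_secs: u64) -> Duration {
    Duration::from_secs(cmp::max(1, configured_secs))
}

fn main() {
    // Even a zero-second configuration sleeps a full second, which is
    // why a 50 ms test window can never observe a completed cycle.
    assert_eq!(effective_cooldown(0), Duration::from_secs(1));
    assert_eq!(effective_cooldown(960), Duration::from_secs(960));
    println!("cooldown for a 0s config: {:?}", effective_cooldown(0));
}
```

Any test that asserts on post-cooldown state therefore needs a window comfortably longer than one second, hence the 1100 ms sleep.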
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (1)
crates/comenqd/src/daemon.rs (5 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs
📄 CodeRabbit Inference Engine (AGENTS.md)
**/*.rs: Clippy warnings MUST be disallowed.
Fix any warnings emitted during tests in the code itself rather than silencing them.
Where a function is too long, extract meaningfully named helper functions adhering to separation of concerns and CQRS.
Where a function has too many parameters, group related parameters in meaningfully named structs.
Where a function is returning a large error consider using Arc to reduce the amount of data returned.
Write unit and behavioural tests for new functionality. Run both before and after making any change.
Every module must begin with a module level (//! ) comment explaining the module's purpose and utility.
Document public APIs using Rustdoc comments (///) so documentation can be generated with cargo doc.
Prefer immutable data and avoid unnecessary mut bindings.
Handle errors with the Result type instead of panicking where feasible.
Avoid unsafe code unless absolutely necessary and document any usage clearly.
Place function attributes after doc comments.
Do not use return in single-line functions.
Use predicate functions for conditional criteria with more than two branches.
Lints must not be silenced except as a last resort.
Lint rule suppressions must be tightly scoped and include a clear reason.
Prefer expect over allow.
Use rstest fixtures for shared setup.
Replace duplicated tests with #[rstest(...)] parameterised cases.
Prefer mockall for mocks/stubs.
Prefer .expect() over .unwrap().
Use concat!() to combine long string literals rather than escaping newlines with a backslash.
Prefer semantic error enums. Derive std::error::Error (via the thiserror crate) for any condition the caller might inspect, retry, or map to an HTTP status.
Use an opaque error only at the app boundary. Use eyre::Report for human-readable logs; these should not be exposed in public APIs.
Never export the opaque type from a library. Convert to domain enums at API boundaries, and to eyre only in the main main() entrypoint or top-level async task.
Files:
crates/comenqd/src/daemon.rs
⚙️ CodeRabbit Configuration File
**/*.rs:

* Seek to keep the cyclomatic complexity of functions no more than 12.
* Adhere to single responsibility and CQRS.
* Place function attributes after doc comments.
* Do not use `return` in single-line functions.
* Move conditionals with >2 branches into a predicate function.
* Avoid `unsafe` unless absolutely necessary.
* Every module must begin with a `//!` doc comment that explains the module's purpose and utility.
* Comments and docs must follow en-GB-oxendict (-ize / -our) spelling and grammar.
* Lints must not be silenced except as a last resort.
  - `#[allow]` is forbidden.
  - Only narrowly scoped `#[expect(lint, reason = "...")]` is allowed.
  - No lint groups, no blanket or file-wide suppression.
  - Include `FIXME:` with link if a fix is expected.
* Use `rstest` fixtures for shared setup and to avoid repetition between tests.
* Replace duplicated tests with `#[rstest(...)]` parameterised cases.
* Prefer `mockall` for mocks/stubs.
* Prefer `.expect()` over `.unwrap()`.
* Ensure that any API or behavioural changes are reflected in the documentation in `docs/`.
* Ensure that any completed roadmap steps are recorded in the appropriate roadmap in `docs/`.
* Files must not exceed 400 lines in length.
  - Large modules must be decomposed.
  - Long match statements or dispatch tables should be decomposed by domain and collocated with targets.
  - Large blocks of inline data (e.g., test fixtures, constants or templates) must be moved to external files and inlined at compile-time or loaded at run-time.
Files:
crates/comenqd/src/daemon.rs
🔇 Additional comments (5)
crates/comenqd/src/daemon.rs (5)
263-264: LGTM on logging format consistency. The addition of trailing commas in the logging macro invocations improves formatting consistency.

Also applies to: 271-272
276-278: Enforce minimum cooldown correctly with clear documentation. The implementation correctly enforces a minimum 1-second cooldown using `std::cmp::max(1, config.cooldown_period_seconds)`, preventing busy retry loops from zero or very low cooldown values. Add a comment explaining this behaviour:

```diff
+ // Enforce minimum 1-second cooldown to prevent busy retry loops
  let sleep_secs = std::cmp::max(1, config.cooldown_period_seconds);
  tokio::time::sleep(Duration::from_secs(sleep_secs)).await;
```
335-364: Update function signature appropriately for test cleanup. The addition of `TempDir` as the first return value ensures proper cleanup of temporary directories in tests, preventing resource leaks.
457-466: Align test timing with enforced minimum cooldown. The increase from 50ms to 1100ms correctly accommodates the new minimum 1-second cooldown period, ensuring the worker has sufficient time to process jobs and apply the cooldown before the test aborts the task.

470-478: Maintain test timing consistency. The timing adjustment mirrors the successful test case, ensuring consistent test behaviour under the new cooldown enforcement.
@coderabbitai Please suggest a fix for this issue and supply a prompt for an AI coding agent to enable it to apply the fix:
Tip: For best results, initiate chat on the files or code changes.

🧩 Analysis chain

@leynos I need to examine the current test code to understand the new failure pattern. The error signatures have changed, suggesting the previous timing fix was applied but created new issues. Based on the analysis, the issue is that the tests now run long enough for the worker to process multiple queued jobs, but the assertions expect only one HTTP request. The tests need to be adjusted to account for processing all queued jobs or limit the queue to a single job.

Problem Analysis

The failure pattern shows:
Root Cause

The test setup creates multiple queue files, but with the original 50ms timing, only the first job was processed. With the 1100ms timing fix, the worker now has time to process multiple jobs, causing more HTTP requests than expected.

The Fix

Either:
AI Coding Agent Prompt

Now I can see the issue! The test setup creates exactly one job per test, but the tests are receiving multiple HTTP requests. This suggests either retry behavior or test isolation problems. Now I understand the issue. Looking at the test setup:
The issue is likely that the worker processes the job, then continues running and either:
Let me examine the worker loop more closely. Based on my analysis, the most likely cause is test isolation failure: queue files from previous tests aren't being properly cleaned up, causing subsequent tests to process leftover jobs.

Problem Analysis

Root Cause: The worker is processing more jobs than each test creates, suggesting:
Evidence:
The Fix

The solution is to ensure proper test isolation by either:
AI Coding Agent Prompt

DEBUGGING: If the fix doesn't work, add debug logging to see how many queue files exist at test start:

```rust
eprintln!("Queue files before test: {}", std::fs::read_dir(&cfg.queue_path).unwrap().count());
```

The goal is to ensure each test processes exactly the 1 job it creates, not jobs from previous tests.

```shell
#!/bin/bash
# Check if there's any retry logic in the worker that could cause multiple requests for the same job
rg -A 10 -B 10 "post_comment" crates/comenqd/src/daemon.rs

# Check how the TempDir cleanup works and if tests are isolated
rg -A 5 -B 5 "TempDir|tempdir" crates/comenqd/src/daemon.rs

# See the complete worker loop to understand the retry/repeat behavior
ast-grep --pattern $'loop {
```

This ensures each test starts with a clean queue containing only the 1 job it creates, so the worker processes exactly 1 job and makes exactly 1 HTTP request.

ALTERNATIVE: If the above doesn't work, the issue might be in the TempDir scoping. Ensure the

```shell
#!/bin/bash
# Look at the channel function to see how it loads existing queue jobs
rg -A 15 -B 5 "fn channel" crates/comenqd/src/daemon.rs

# Check if TempDir is properly imported and used
rg -A 3 -B 3 "use.*tempfile|tempdir|TempDir" crates/comenqd/src/daemon.rs

# Look at the imports at the top of the file to understand TempDir usage
head -20 crates/comenqd/src/daemon.rs
```

Final AI Coding Agent Prompt

This ensures each test starts with a completely empty queue, so the worker processes exactly the 1 job created by the test, resulting in exactly 1 HTTP request as expected.

VERIFICATION: After applying this fix, all three tests should pass because:

```shell
#!/bin/bash
# Look for the test module and TempDir imports
rg -A 10 -B 2 "#[cfg(test)]" crates/comenqd/src/daemon.rs

# Search for tempfile or tempdir in the entire file
rg "tempfile|tempdir" crates/comenqd/src/daemon.rs

# Search for TempDir specifically
rg "TempDir" crates/comenqd/src/daemon.rs
```

Final AI Coding Agent Prompt

Replace it with:

```rust
// Clear any existing queue files to ensure clean test isolation
std::fs::remove_dir_all(&cfg.queue_path).ok();
std::fs::create_dir_all(&cfg.queue_path).expect("create queue dir");
let (mut sender, rx) = channel(&cfg.queue_path).expect("channel");
```

EXPLANATION: This ensures each test starts with a completely empty queue directory, so

EXPECTED RESULT: All three tests will pass because:
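The isolation pattern behind this suggestion can be demonstrated with a self-contained sketch. `reset_queue_dir` is a hypothetical helper for illustration, not part of the crate:

```rust
use std::fs;
use std::path::Path;

/// Remove and recreate `dir` so each test starts with an empty queue.
/// Hypothetical helper illustrating the test-isolation fix.
fn reset_queue_dir(dir: &Path) -> std::io::Result<()> {
    // Ignore the error if the directory does not exist yet.
    let _ = fs::remove_dir_all(dir);
    fs::create_dir_all(dir)
}

fn main() -> std::io::Result<()> {
    let dir = std::env::temp_dir().join("comenq-queue-demo");
    fs::create_dir_all(&dir)?;
    // Simulate a stale job left over from a previous test run.
    fs::write(dir.join("stale-job.json"), b"{}")?;
    reset_queue_dir(&dir)?;
    // The directory now exists and is empty.
    println!("entries after reset: {}", fs::read_dir(&dir)?.count());
    fs::remove_dir_all(&dir)
}
```

With each test resetting its queue directory before enqueuing its single job, the worker can only ever see that one job.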
Too b0rked
Summary
Testing
- make fmt
- make lint
- make markdownlint
- make test

https://chatgpt.com/codex/tasks/task_e_688faf5916fc8322b29571ed82f8b0b4
Summary by Sourcery
Handle malformed API responses by treating JSON deserialization errors as successful posts, enforce a minimum cooldown period between retries, and update related documentation.
Enhancements:
Documentation: