feat: add AI code review integration (#1067) (PR #1068)
Conversation
Add a thin HTTP client that calls the external pr-ai-reviewer service for AI-powered code review on PRs.

- New `ai-review` top-level config (server-url, providers, triggers)
- Thin HTTP client in `webhook_server/libs/ai_review.py` (~100 lines)
- Fire-and-forget on PR opened and synchronize (non-clean rebase)
- Same pattern as the test-oracle integration
- 15 new tests; all 1508 tests pass, 90.67% coverage

Closes #1067
Walkthrough

This PR implements AI-powered code review functionality that analyzes pull requests via an external AI review service. It adds a configuration schema for AI review settings, a new module for orchestrating asynchronous review requests, integration into the PR handler to trigger reviews on PR opened and synchronized events, and comprehensive test coverage.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor GitHub
    participant WebhookHandler as PR Handler
    participant AIReview as ai_review.py
    participant AIService as AI Review Service
    participant GitHubAPI as GitHub API
    GitHub->>WebhookHandler: PR opened/synchronized webhook
    WebhookHandler->>WebhookHandler: Validate trigger & config
    WebhookHandler->>AIReview: call_ai_reviewer() [async background]
    AIReview->>AIService: GET /health (5s timeout)
    alt Health Check Fails
        AIService-->>AIReview: Connection error / HTTP error
        AIReview->>GitHubAPI: Post issue comment (review skipped)
        AIReview-->>WebhookHandler: Return (log warning)
    else Health Check Passes
        AIService-->>AIReview: 200 OK
        AIReview->>AIReview: Build review request payload<br/>(pr_url, providers, token, timeouts)
        AIReview->>AIService: POST /review (computed timeout)
        AIService-->>AIReview: Review response (comments, summary)
        alt Review Success
            AIReview->>AIReview: Log completion details
            AIReview-->>WebhookHandler: Return (fire-and-forget)
        else Review/Parsing Errors
            AIReview->>AIReview: Log error (no PR comment)
            AIReview-->>WebhookHandler: Return (no interrupt)
        end
    end
    WebhookHandler-->>GitHub: Webhook response 200
```
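The health-gate branch of the flow above can be sketched as follows. `fake_health_check` stands in for the real `GET /health` round trip, and the returned strings are placeholders for the actual comment-and-log behavior; none of these names are the project's real API:

```python
import asyncio

# Stand-in for the HTTP health check; `delay` simulates a slow or dead service.
async def fake_health_check(delay: float) -> str:
    await asyncio.sleep(delay)
    return "200 OK"

async def gated_review(health_delay: float, timeout: float = 5.0) -> str:
    # If /health does not answer within the timeout, skip the review
    # instead of failing the webhook (the real code also posts a PR comment).
    try:
        await asyncio.wait_for(fake_health_check(health_delay), timeout=timeout)
    except asyncio.TimeoutError:
        return "review skipped"
    return "review requested"  # real code would proceed to POST /review

print(asyncio.run(gated_review(0.0)))                # healthy service
print(asyncio.run(gated_review(0.2, timeout=0.05)))  # unresponsive service
```

Because the handler only gates on the health check and never awaits the review result, a slow review service can never delay the webhook response.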
Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

The diff introduces new business logic across multiple files with heterogeneous purposes (schema, async orchestration, integration, tests), but follows consistent patterns.
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@webhook_server/config/schema.yaml`:
- Around line 216-282: The ai-review object schema is duplicated; extract the
shared schema into a single reusable definition under $defs (e.g., define
aiReview object with properties server-url, providers, max-rounds,
timeout-minutes, triggers, required and additionalProperties) and replace both
inline ai-review occurrences with a $ref to that $defs entry; ensure that
properties like providers (with its ai-provider/ai-model items and constraints),
default values, min/max constraints, required fields, and additionalProperties:
false are preserved exactly in the $defs so both references validate
identically.
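The refactor described above could look roughly like the following. The `$defs` key name and property shapes mirror the fields named in the comment, but the exact schema is illustrative, not the project's actual `schema.yaml`:

```yaml
# Shared definition, declared once under $defs (illustrative sketch).
$defs:
  ai-review:
    type: object
    additionalProperties: false
    required: [server-url]
    properties:
      server-url:
        type: string
      providers:
        type: array
        items:
          type: object
          properties:
            ai-provider: {type: string}
            ai-model: {type: string}
      max-rounds:
        type: integer
      timeout-minutes:
        type: integer
      triggers:
        type: array
        items: {type: string}

# Both former inline occurrences collapse to a one-line reference:
properties:
  ai-review:
    $ref: "#/$defs/ai-review"
```

With a single definition, defaults and constraints can only drift in one place, so top-level and repo-level configs always validate identically.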
In `@webhook_server/libs/ai_review.py`:
- Around line 70-77: The code is using asyncio.to_thread to read
pull_request.html_url which is a cached webhook property and doesn't need a
thread hop; replace the async to_thread call with a direct property access
(assign pr_url = pull_request.html_url) and remove the unnecessary
asyncio.to_thread wrapper where pr_url is set in ai_review.py so the rest of the
logic (providers_config, providers_payload) remains unchanged.
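A minimal sketch of the simplification, using a `SimpleNamespace` stand-in for the webhook's pull request object (the real object comes from the GitHub webhook payload):

```python
import asyncio
from types import SimpleNamespace

# Hypothetical stand-in for the webhook pull request: html_url is a plain
# cached attribute, so reading it involves no I/O and needs no thread hop.
pull_request = SimpleNamespace(html_url="https://github.com/org/repo/pull/1068")

async def build_request() -> dict:
    # Before (unnecessary): pr_url = await asyncio.to_thread(lambda: pull_request.html_url)
    # After: direct attribute access, no executor round trip.
    pr_url = pull_request.html_url
    return {"pr_url": pr_url}

payload = asyncio.run(build_request())
print(payload["pr_url"])
```

`asyncio.to_thread` only pays off when the wrapped call can block; for a cached attribute it just adds an executor hop and a context switch.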
In `@webhook_server/tests/test_ai_review.py`:
- Around line 176-180: Combine the nested context managers in the
test_cancelled_error_reraised test into a single with statement: replace the
nested "with patch(...): with pytest.raises(...):" with "with
patch('webhook_server.libs.ai_review.httpx.AsyncClient',
side_effect=asyncio.CancelledError), pytest.raises(asyncio.CancelledError):" so
the call to call_ai_reviewer(mock_github_webhook, mock_pull_request,
trigger='pr-opened') runs inside one combined context.
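The combined form (ruff rule SIM117) can be sketched as below. `FakeHttpx` is a stand-in for the patched `httpx` attribute, and `contextlib.suppress` stands in for `pytest.raises` so the snippet is self-contained without pytest:

```python
import asyncio
import contextlib
from unittest.mock import patch

class FakeHttpx:
    """Stand-in for the httpx attribute that the real test patches."""
    AsyncClient = None

async def call_ai_reviewer() -> None:
    # The real coroutine does far more; here it only constructs the client.
    FakeHttpx.AsyncClient()

completed = False
# SIM117 fix: both context managers live in a single `with` statement.
with patch.object(FakeHttpx, "AsyncClient", side_effect=asyncio.CancelledError), \
        contextlib.suppress(asyncio.CancelledError):
    asyncio.run(call_ai_reviewer())
    completed = True  # never reached: CancelledError propagates first

print(completed)  # False
```

The flattened form reads the same as the nested version but drops one indentation level, which is all the lint rule asks for.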
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
Run ID: e89544b2-2b9b-427e-9c84-ddab56929c29
📒 Files selected for processing (10)
- CLAUDE.md
- examples/config.yaml
- webhook_server/config/schema.yaml
- webhook_server/libs/ai_review.py
- webhook_server/libs/github_api.py
- webhook_server/libs/handlers/pull_request_handler.py
- webhook_server/tests/test_ai_review.py
- webhook_server/tests/test_clean_rebase_detection.py
- webhook_server/tests/test_pull_request_handler.py
- webhook_server/tests/test_runner_handler.py
- Extract ai-review schema to $defs, use $ref in both locations
- Remove unnecessary asyncio.to_thread for cached html_url property
- Combine nested with statements (SIM117)
Summary
Add AI-powered code review integration via the external pr-ai-reviewer service. Same architecture as the existing test-oracle integration — thin HTTP client in the webhook server, heavy logic in the external service.
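The fire-and-forget pattern can be sketched as below; the function names are illustrative, not the actual API of `ai_review.py`:

```python
import asyncio

results: list[str] = []

async def send_review_request(pr_url: str) -> None:
    await asyncio.sleep(0)  # stands in for the POST /review call
    results.append(f"reviewed {pr_url}")

def schedule_review(pr_url: str) -> "asyncio.Task[None]":
    # Fire-and-forget: schedule the coroutine and return without awaiting it,
    # so the webhook handler can respond immediately.
    return asyncio.create_task(send_review_request(pr_url))

async def main() -> str:
    task = schedule_review("https://example.com/org/repo/pull/1")
    status = "200"  # the webhook answers before the review completes
    await task      # drained here only so this demo exits cleanly
    return status

print(asyncio.run(main()), results)
```

Keeping the heavy logic in the external service means the webhook server only ever pays for scheduling a task, never for the review itself.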
What Changed
New Files
- `webhook_server/libs/ai_review.py`
- `webhook_server/tests/test_ai_review.py`

Modified Files

- `webhook_server/config/schema.yaml`: `ai-review` top-level config + repo-level (server-url, providers, triggers)
- `webhook_server/libs/github_api.py`: `ai_review_config` from config
- `webhook_server/libs/handlers/pull_request_handler.py`: `call_ai_reviewer` on opened + synchronize (non-clean)
- `AGENTS.md`, `examples/config.yaml`
- Tests: `ai_review_config = None` added to mocks + 4 handler scheduling tests

Configuration
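An illustrative example of what an `ai-review` block might look like. The key names come from the schema described in this PR, but all values are placeholders and `pr-opened` is the only trigger name confirmed by the tests; see `examples/config.yaml` in the repo for the authoritative shape:

```yaml
ai-review:
  server-url: "http://pr-ai-reviewer:8000"  # placeholder URL
  providers:
    - ai-provider: "openai"                 # placeholder provider
      ai-model: "gpt-4o"                    # placeholder model
  max-rounds: 3                             # placeholder value
  timeout-minutes: 10                       # placeholder value
  triggers:
    - pr-opened                             # trigger name from the tests
```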
Testing

15 new tests; all 1508 tests pass with 90.67% coverage.
Related
Closes #1067