From 62340b94248253899a41d5373ca795fa8a9236af Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Kacper=20Miko=C5=82ajczak?= Date: Thu, 8 Jan 2026 21:13:18 +0100 Subject: [PATCH 1/7] add ai reviewer guide --- .../philosophies/AI-REVIEWER.md | 113 ++++++++++++++++++ contributingGuides/philosophies/INDEX.md | 1 + 2 files changed, 114 insertions(+) create mode 100644 contributingGuides/philosophies/AI-REVIEWER.md diff --git a/contributingGuides/philosophies/AI-REVIEWER.md b/contributingGuides/philosophies/AI-REVIEWER.md new file mode 100644 index 0000000000000..10576f5931099 --- /dev/null +++ b/contributingGuides/philosophies/AI-REVIEWER.md @@ -0,0 +1,113 @@ +# AI Reviewer Philosophy +This philosophy guides our approach to AI-assisted code and documentation review, explaining when to use each reviewer and how to respond to their feedback. + +#### Terminology +- **AI Reviewer** - Automated agent that analyzes PRs or issues and provides feedback +- **Holistic Reviewer** - A reviewer without predefined rules that provides general feedback +- **Smart Linter** - The code-inline-reviewer; a rule-based reviewer with predefined patterns +- **Rule Violation** - Specific pattern that triggers rule-based reviewer feedback + +## Guiding Principles + +These are recommendations for working effectively with AI reviewers, not strict requirements. + +### Treat AI feedback as suggestions +AI reviewers provide automated feedback to assist human reviewers, but their output is not infallible. Contributors and reviewers should evaluate each piece of feedback on its merits rather than blindly accepting or rejecting it. + +### Validate AI feedback before requesting changes +When AI reviewers flag potential issues, human reviewers should verify the feedback is accurate and applicable before asking contributors to make changes. This prevents unnecessary work from false positives. 
+ +### Report false positives to maintainers +When AI feedback is incorrect or not applicable, reach out to the AI reviewer maintainers to help improve the system. You can either tag them directly in a reply to the reviewer's comment or reach out through Slack. This feedback helps refine the reviewers and prevents the same issues from recurring. + +### Keep rule documentation in sync with AI reviewer prompts +When adding or modifying rules in AI reviewer agent files, the corresponding documentation should be updated. The agent files in `.claude/agents/` are the source of truth for specific rules. + +## Reviewer Setup + +### Available AI Reviewers + +**code-inline-reviewer (Smart Linter)** +- Reviews source code PRs for specific, predefined performance violations +- Creates inline comments on lines that violate rules +- See `.claude/agents/code-inline-reviewer.md` for current rule definitions + +**Holistic Reviewer** +- Provides general code review without predefined rules +- Catches issues that don't fit into specific rule categories +- Acts as a counterweight to the Smart Linter +- Outputs general code quality feedback and suggestions +- Currently implemented using Codex, configured at the repository level + +**helpdot-inline-reviewer** +- Reviews HelpDot documentation PRs for readability, AI readiness, and style compliance +- Creates inline comments for specific violations +- See `.claude/agents/helpdot-inline-reviewer.md` for criteria + +**helpdot-summary-reviewer** +- Provides overall quality assessment with scoring for documentation PRs +- Posts a top-level PR comment with summary and recommendations +- See `.claude/agents/helpdot-summary-reviewer.md` for scoring criteria + +**deploy-blocker-investigator** +- Investigates deploy blocker issues to identify the causing PR +- Posts findings and recommendations on the issue +- See `.claude/agents/deploy-blocker-investigator.md` for investigation process + +### When to Use Each Reviewer + +#### Code PRs +Code PRs 
benefit from the **two-reviewer approach**: + +1. **Smart Linter (code-inline-reviewer)**: Catches specific, well-defined performance anti-patterns with consistent, rule-based feedback +2. **Holistic Reviewer**: Catches general code quality issues, design concerns, and anything not covered by specific rules + +Together they balance precision (rules) with coverage (holistic review). + +#### Documentation PRs +Documentation PRs in the HelpDot system use two complementary reviewers: + +1. **helpdot-inline-reviewer**: Line-specific feedback on violations +2. **helpdot-summary-reviewer**: Overall quality assessment with scores + +#### Deploy Blocker Issues +When a deploy blocker issue is created: + +1. **deploy-blocker-investigator**: Analyzes the issue, identifies the likely causing PR, and recommends resolution + +## Working with AI Feedback + +### Addressing Valid Feedback +When AI feedback is accurate: +1. Make the suggested changes +2. If the fix differs from the suggestion, explain your approach + +### Handling False Positives +When AI feedback is incorrect or not applicable: +1. Evaluate whether the feedback applies to your specific context +2. Reach out to AI reviewer maintainers by tagging them in a reply or through Slack +3. Your feedback helps refine the reviewers and prevent recurring issues + +### Escalating to Human Reviewers +Escalate to human reviewers when: +- You're unsure whether AI feedback is valid +- The AI feedback conflicts with other requirements +- The suggested fix would require significant architectural changes + +### Examples + +#### Appropriate Response to Valid Feedback +**AI Comment**: "PERF-4: This object passed as a prop should be memoized to prevent unnecessary re-renders." + +✅ **Good Response**: Wrap the object in `useMemo` or refactor to avoid creating new references. + +❌ **Bad Response**: Ignore the feedback without consideration. 
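To make the ✅ response above concrete, here is a minimal sketch — plain TypeScript, outside React — of the referential-equality problem behind this kind of feedback: an inline object literal produces a new reference on every call, while a `useMemo`-style cache returns a stable one. The `memoizeByDep` helper is purely illustrative and not part of the codebase.

```typescript
// A component rendering <Child style={{width: 100}} /> builds a new object on
// every render, so a shallow comparison (what React.memo performs) always
// detects a "changed" prop.
function buildStyle(): {width: number} {
    return {width: 100}; // fresh reference each call
}

const firstRender = buildStyle();
const secondRender = buildStyle();
const isStableWithoutMemo = firstRender === secondRender; // false

// useMemo-style caching: reuse the previous object while the dependency is unchanged.
function memoizeByDep(): (width: number) => {width: number} {
    let lastWidth: number | undefined;
    let lastStyle: {width: number} | undefined;
    return (width) => {
        if (lastStyle === undefined || lastWidth !== width) {
            lastWidth = width;
            lastStyle = {width};
        }
        return lastStyle;
    };
}

const getStyle = memoizeByDep();
const isStableWithMemo = getStyle(100) === getStyle(100); // true
```

In a real component the equivalent fix is wrapping the object in `useMemo` with the appropriate dependency array, or hoisting a constant object out of the component entirely.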
+ +#### Appropriate Response to False Positive +**AI Comment**: "PERF-4: This object passed as a prop should be memoized." + +**Context**: The parent component is already optimized by React Compiler. + +✅ **Good Response**: Tag the AI reviewer maintainers or reach out through Slack with explanation of incorrect suggestion. + +❌ **Bad Response**: Apply the change anyway, adding unnecessary complexity. diff --git a/contributingGuides/philosophies/INDEX.md b/contributingGuides/philosophies/INDEX.md index 5e8ac3d7ace88..3150fa5cb10ea 100644 --- a/contributingGuides/philosophies/INDEX.md +++ b/contributingGuides/philosophies/INDEX.md @@ -5,6 +5,7 @@ The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "S "OPTIONAL" are to be interpreted as described in [RFC 2119](https://datatracker.ietf.org/doc/html/rfc2119). ## Contents +* [AI Reviewer Philosophy](/contributingGuides/philosophies/AI-REVIEWER.md) * [Beta Usage Philosophy](/contributingGuides/philosophies/BETAS.md) * [Cross-Platform Philosophy](/contributingGuides/philosophies/CROSS-PLATFORM.md) * [Data Flow Philosophy](/contributingGuides/philosophies/DATA-FLOW.md) From bbb9cffdf5e4e62f8f3ce0bfc136e001f1eb0b48 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Kacper=20Miko=C5=82ajczak?= Date: Thu, 8 Jan 2026 21:16:43 +0100 Subject: [PATCH 2/7] add mermaid --- .../philosophies/AI-REVIEWER.md | 27 +++++++++++++++++++ 1 file changed, 27 insertions(+) diff --git a/contributingGuides/philosophies/AI-REVIEWER.md b/contributingGuides/philosophies/AI-REVIEWER.md index 10576f5931099..dd45fc95879b3 100644 --- a/contributingGuides/philosophies/AI-REVIEWER.md +++ b/contributingGuides/philosophies/AI-REVIEWER.md @@ -56,6 +56,33 @@ When adding or modifying rules in AI reviewer agent files, the corresponding doc ### When to Use Each Reviewer +```mermaid +flowchart TD + subgraph trigger [Reviewer pipeline] + A{Contribution Type} + end + + A -->|Code PR| B[Smart Linter] + A -->|Code PR| C[Holistic Reviewer] + A 
-->|HelpDot PR| D[helpdot-inline-reviewer] + A -->|HelpDot PR| E[helpdot-summary-reviewer] + A -->|Deploy Blocker Issue| F[deploy-blocker-investigator] + + subgraph code [Code Review] + B -->|Rule-based| G[Inline comments for violations] + C -->|General| H[Quality feedback] + end + + subgraph docs [Documentation Review] + D -->|Inline| I[Line-specific feedback] + E -->|Summary| J[Scores and recommendations] + end + + subgraph deploy [Issue Investigation] + F --> K[Identify causing PR] + end +``` + #### Code PRs Code PRs benefit from the **two-reviewer approach**: From 1ef988f07c1cb29ae0235dc39ddf7e1fb74caaab Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Kacper=20Miko=C5=82ajczak?= Date: Mon, 12 Jan 2026 12:40:21 +0100 Subject: [PATCH 3/7] add why section --- contributingGuides/philosophies/AI-REVIEWER.md | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) diff --git a/contributingGuides/philosophies/AI-REVIEWER.md b/contributingGuides/philosophies/AI-REVIEWER.md index dd45fc95879b3..5e221f21d3f22 100644 --- a/contributingGuides/philosophies/AI-REVIEWER.md +++ b/contributingGuides/philosophies/AI-REVIEWER.md @@ -7,6 +7,22 @@ This philosophy guides our approach to AI-assisted code and documentation review - **Smart Linter** - The code-inline-reviewer; a rule-based reviewer with predefined patterns - **Rule Violation** - Specific pattern that triggers rule-based reviewer feedback +## Why We Use AI Reviewers + +AI reviewers serve several key purposes in our development workflow: + +### Scale human reviewer capacity +With a high volume of PRs, human reviewers can't catch every detail. AI reviewers provide consistent, automated first-pass review that catches common issues before human review, allowing human reviewers to focus on architectural decisions, business logic, and nuanced feedback. + +### Enforce institutional knowledge consistently +Performance patterns, coding standards, and documentation guidelines are often tribal knowledge. 
AI reviewers codify this knowledge into repeatable checks, ensuring every PR benefits from the same expertise regardless of which human reviewer is assigned. + +### Reduce review turnaround time +Contributors get immediate feedback on common issues without waiting for human reviewer availability. This enables faster iteration cycles and reduces the back-and-forth that slows down PR merges. + +### Maintain quality at scale +As the codebase and contributor base grow, AI reviewers help maintain consistent quality standards without linearly increasing human reviewer burden. + ## Guiding Principles These are recommendations for working effectively with AI reviewers, not strict requirements. From 78c50e9afab14b48dc054fc3eacfbf36c3c5b8e9 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Kacper=20Miko=C5=82ajczak?= Date: Mon, 12 Jan 2026 12:54:51 +0100 Subject: [PATCH 4/7] add triggers info --- .../philosophies/AI-REVIEWER.md | 66 ++++++++++++++----- 1 file changed, 50 insertions(+), 16 deletions(-) diff --git a/contributingGuides/philosophies/AI-REVIEWER.md b/contributingGuides/philosophies/AI-REVIEWER.md index 5e221f21d3f22..4927a842d0096 100644 --- a/contributingGuides/philosophies/AI-REVIEWER.md +++ b/contributingGuides/philosophies/AI-REVIEWER.md @@ -70,36 +70,53 @@ When adding or modifying rules in AI reviewer agent files, the corresponding doc - Posts findings and recommendations on the issue - See `.claude/agents/deploy-blocker-investigator.md` for investigation process -### When to Use Each Reviewer +### Triggers and When Reviewers Run + +AI reviewers are triggered automatically based on contribution type and file changes. 
The diagram below shows the reviewer pipeline: ```mermaid flowchart TD - subgraph trigger [Reviewer pipeline] - A{Contribution Type} + subgraph triggers [GitHub Events] + T1[PR opened/ready_for_review] + T2[workflow_dispatch] + end + + subgraph filters [Path Filters] + T1 --> F1{src/** changed?} + T1 --> F2{docs/**/*.md changed?} end - A -->|Code PR| B[Smart Linter] - A -->|Code PR| C[Holistic Reviewer] - A -->|HelpDot PR| D[helpdot-inline-reviewer] - A -->|HelpDot PR| E[helpdot-summary-reviewer] - A -->|Deploy Blocker Issue| F[deploy-blocker-investigator] + F1 -->|Yes| B[Smart Linter] + F1 -->|Yes| C[Holistic Reviewer] + F2 -->|Yes| D[helpdot-inline-reviewer] + F2 -->|Yes| E[helpdot-summary-reviewer] + T2 -->|Manual trigger| F[deploy-blocker-investigator] - subgraph code [Code Review] - B -->|Rule-based| G[Inline comments for violations] - C -->|General| H[Quality feedback] + subgraph code [Code Review Output] + B --> G[Inline comments for violations] + C --> H[Quality feedback] end - subgraph docs [Documentation Review] - D -->|Inline| I[Line-specific feedback] - E -->|Summary| J[Scores and recommendations] + subgraph docs [Documentation Review Output] + D --> I[Line-specific feedback] + E --> J[Scores and recommendations] end - subgraph deploy [Issue Investigation] + subgraph deploy [Issue Investigation Output] F --> K[Identify causing PR] end ``` #### Code PRs + +**Trigger conditions:** +- PR is opened or marked ready for review +- PR modifies files in `src/**` +- PR is not a draft +- PR title does not contain "Revert" + +**How to re-run it?** Convert your PR to draft, then mark it ready for review again. + Code PRs benefit from the **two-reviewer approach**: 1. **Smart Linter (code-inline-reviewer)**: Catches specific, well-defined performance anti-patterns with consistent, rule-based feedback @@ -108,13 +125,30 @@ Code PRs benefit from the **two-reviewer approach**: Together they balance precision (rules) with coverage (holistic review). 
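The Code PR trigger conditions above can be summarized as a small predicate. This is an illustrative sketch only — the actual gating lives in the GitHub Actions workflow configuration, and the `PullRequestInfo` type and `shouldRunCodeReviewers` function are hypothetical names, not real repository code:

```typescript
// Hypothetical model of the documented trigger conditions for Code PR reviewers:
// runs when the PR touches src/**, is not a draft, and is not a revert.
type PullRequestInfo = {
    isDraft: boolean;
    title: string;
    changedFiles: string[];
};

function shouldRunCodeReviewers(pr: PullRequestInfo): boolean {
    const touchesSource = pr.changedFiles.some((file) => file.startsWith('src/'));
    const isRevert = pr.title.includes('Revert');
    return touchesSource && !pr.isDraft && !isRevert;
}

const runs = shouldRunCodeReviewers({
    isDraft: false,
    title: 'Fix report list re-renders',
    changedFiles: ['src/components/ReportList.tsx'],
});

const skippedRevert = shouldRunCodeReviewers({
    isDraft: false,
    title: 'Revert "Fix report list re-renders"',
    changedFiles: ['src/components/ReportList.tsx'],
});
```

Whenever a condition flips back in your favor (for example, the title no longer contains "Revert"), re-running is the same: convert the PR to draft and mark it ready for review again.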
#### Documentation PRs + +**Trigger conditions:** +- PR is opened or marked ready for review +- PR modifies files in `docs/**/*.md` or `docs/**/*.csv` +- PR is not a draft +- PR title does not contain "Revert" + +**How to re-run it?** Convert your PR to draft, then mark it ready for review again. + Documentation PRs in the HelpDot system use two complementary reviewers: 1. **helpdot-inline-reviewer**: Line-specific feedback on violations 2. **helpdot-summary-reviewer**: Overall quality assessment with scores #### Deploy Blocker Issues -When a deploy blocker issue is created: + +**Trigger conditions:** +- Manually triggered via `workflow_dispatch` +- Issue must have the `DeployBlockerCash` label +- Actor must have write access to the repository + +**How to re-run it?** Navigate to Actions → "Investigate Deploy Blocker" workflow → Run workflow with the issue URL. + +When a deploy blocker issue needs investigation: 1. **deploy-blocker-investigator**: Analyzes the issue, identifies the likely causing PR, and recommends resolution From 30a997e9286ad3a85ce35117fedef41a9a12402c Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Kacper=20Miko=C5=82ajczak?= Date: Mon, 12 Jan 2026 13:01:33 +0100 Subject: [PATCH 5/7] content fixes --- contributingGuides/philosophies/AI-REVIEWER.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/contributingGuides/philosophies/AI-REVIEWER.md b/contributingGuides/philosophies/AI-REVIEWER.md index 4927a842d0096..f4ec3b146c3db 100644 --- a/contributingGuides/philosophies/AI-REVIEWER.md +++ b/contributingGuides/philosophies/AI-REVIEWER.md @@ -30,8 +30,8 @@ These are recommendations for working effectively with AI reviewers, not strict ### Treat AI feedback as suggestions AI reviewers provide automated feedback to assist human reviewers, but their output is not infallible. Contributors and reviewers should evaluate each piece of feedback on its merits rather than blindly accepting or rejecting it. 
-### Validate AI feedback before requesting changes
-When AI reviewers flag potential issues, human reviewers should verify the feedback is accurate and applicable before asking contributors to make changes. This prevents unnecessary work from false positives.
+### Discuss vague feedback first
+When AI feedback is unclear or ambiguous, contributors will benefit from discussing it first with C+ reviewers before jumping to implementation. As mentioned in the first principle, reviewer feedback should be treated as suggestions only.

### Report false positives to maintainers
When AI feedback is incorrect or not applicable, reach out to the AI reviewer maintainers to help improve the system. You can either tag them directly in a reply to the reviewer's comment or reach out through Slack. This feedback helps refine the reviewers and prevents the same issues from recurring.
@@ -44,7 +44,7 @@ When adding or modifying rules in AI reviewer agent files, the corresponding doc

### Available AI Reviewers

**code-inline-reviewer (Smart Linter)**
-- Reviews source code PRs for specific, predefined performance violations
+- Reviews source code PRs for specific, predefined violations
- Creates inline comments on lines that violate rules
- See `.claude/agents/code-inline-reviewer.md` for current rule definitions
@@ -119,7 +119,7 @@ flowchart TD

Code PRs benefit from the **two-reviewer approach**:

-1. **Smart Linter (code-inline-reviewer)**: Catches specific, well-defined performance anti-patterns with consistent, rule-based feedback
+1. **Smart Linter (code-inline-reviewer)**: Catches specific, well-defined anti-patterns with consistent, rule-based feedback
2. **Holistic Reviewer**: Catches general code quality issues, design concerns, and anything not covered by specific rules

Together they balance precision (rules) with coverage (holistic review). 
From 1d36daf25dd317391163348b88b6fda157f029da Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Kacper=20Miko=C5=82ajczak?= Date: Mon, 12 Jan 2026 14:26:19 +0100 Subject: [PATCH 6/7] add helpdot to cspell dict --- cspell.json | 1 + 1 file changed, 1 insertion(+) diff --git a/cspell.json b/cspell.json index 1c61490d481e2..b450f59656f5e 100644 --- a/cspell.json +++ b/cspell.json @@ -294,6 +294,7 @@ "headshot", "healthcheck", "Heathrow", + "helpdot", "helpsite", "Highfive", "Highlightable", From 6f9f160cfbc7614ea878238c73348870c84b703b Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Kacper=20Miko=C5=82ajczak?= Date: Tue, 13 Jan 2026 10:40:34 +0100 Subject: [PATCH 7/7] reach out through slack --- contributingGuides/philosophies/AI-REVIEWER.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/contributingGuides/philosophies/AI-REVIEWER.md b/contributingGuides/philosophies/AI-REVIEWER.md index f4ec3b146c3db..43fe9931e6850 100644 --- a/contributingGuides/philosophies/AI-REVIEWER.md +++ b/contributingGuides/philosophies/AI-REVIEWER.md @@ -34,7 +34,7 @@ AI reviewers provide automated feedback to assist human reviewers, but their out When AI feedback is unclear or ambiguous, contributors will benefit from discussing it first with C+ reviewers before jumping to implementation. As mentioned in the first principle, reviewer feedback should be treated as suggestions only. ### Report false positives to maintainers -When AI feedback is incorrect or not applicable, reach out to the AI reviewer maintainers to help improve the system. You can either tag them directly in a reply to the reviewer's comment or reach out through Slack. This feedback helps refine the reviewers and prevents the same issues from recurring. +When AI feedback is incorrect or not applicable, reach out to the AI reviewer maintainers in the #expensify-open-source Slack channel to help improve the system. This feedback helps refine the reviewers and prevents the same issues from recurring. 
### Keep rule documentation in sync with AI reviewer prompts
When adding or modifying rules in AI reviewer agent files, the corresponding documentation should be updated. The agent files in `.claude/agents/` are the source of truth for specific rules.
@@ -162,7 +162,7 @@ When AI feedback is accurate:
### Handling False Positives
When AI feedback is incorrect or not applicable:
1. Evaluate whether the feedback applies to your specific context
-2. Reach out to AI reviewer maintainers by tagging them in a reply or through Slack
+2. Reach out to AI reviewer maintainers in the #expensify-open-source Slack channel
3. Your feedback helps refine the reviewers and prevent recurring issues

### Escalating to Human Reviewers
Escalate to human reviewers when:
@@ -185,6 +185,6 @@ Escalate to human reviewers when:

**Context**: The parent component is already optimized by React Compiler.

-✅ **Good Response**: Tag the AI reviewer maintainers or reach out through Slack with explanation of incorrect suggestion.
+✅ **Good Response**: Reach out in the #expensify-open-source Slack channel with an explanation of the incorrect suggestion.

❌ **Bad Response**: Apply the change anyway, adding unnecessary complexity.
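To illustrate why the ❌ response adds only complexity, here is a plain TypeScript sketch. The `memoizeOnce` helper is hypothetical and merely stands in for React Compiler's automatic memoization — it is not a real API. Once an outer layer already returns a stable reference, a second manual cache changes nothing:

```typescript
// Minimal memoization helper: caches the first result and returns it thereafter.
function memoizeOnce<T>(factory: () => T): () => T {
    let cached: T | undefined;
    return () => {
        if (cached === undefined) {
            cached = factory();
        }
        return cached;
    };
}

// "Compiler-optimized" source of the prop object: already returns a stable reference.
const getStyle = memoizeOnce(() => ({width: 100}));

// Redundant manual layer a contributor might add just to satisfy the AI comment.
const getStyleAgain = memoizeOnce(getStyle);

const alreadyStable = getStyle() === getStyle();       // stable without extra work
const redundantLayer = getStyleAgain() === getStyle(); // the extra layer gains nothing
```

The redundant wrapper never changes which object the child receives; it only adds code to read and maintain — which is exactly why reporting the false positive is the better response.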