feat: add BMAD Analyst feature ideation workflow (#60)
Conversation
📝 Walkthrough
Adds a new GitHub Actions workflow that, weekly and on manual dispatch, collects repository signals (issues, recent closed issues, Discussions in the Ideas category, releases, merged PRs, label subsets) and uploads a signals artifact for a downstream analysis job.
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Scheduler as Scheduler\n(GH Actions)
    participant Gather as gather-signals\n(Job)
    participant GitHubAPI as GitHub API
    participant Artifact as Artifact Storage
    participant Analyze as analyze\n(Job)
    participant Claude as Claude\n(AI Action)
    participant Discussions as GitHub Discussions
    Scheduler->>Gather: trigger (cron or manual)
    Gather->>GitHubAPI: query issues, closed(30d), discussions(category ID), releases, merged PRs, labels
    GitHubAPI-->>Gather: return signals
    Gather->>Artifact: upload signals.json (artifact)
    Artifact-->>Gather: confirm upload
    Scheduler->>Analyze: start analyze job
    Analyze->>Artifact: download signals.json
    Artifact-->>Analyze: deliver artifact
    Analyze->>Claude: invoke action with signals + focus_area + research_depth + repo context
    Claude-->>Analyze: return proposals & discussion updates
    Analyze->>GitHubAPI: create/update Discussions, apply labels, post comments
    GitHubAPI-->>Analyze: confirmation
```
Pull request overview
Adds an automated GitHub Actions workflow to periodically collect repository “signals” (issues/PRs/releases/discussions) and run the Claude Code Action as a “BMAD Analyst” agent to propose and maintain feature-idea Discussions over time.
Changes:
- Introduces a scheduled + manually-triggerable workflow for weekly feature ideation.
- Implements a “gather signals” job that aggregates repo activity into a JSON payload for the agent.
- Runs a Claude Code Action job with web research enabled to create/update per-idea GitHub Discussions.
…, SHA
- Add issues:read, pull-requests:read, discussions:read permissions to
gather-signals job (was missing with top-level permissions: {})
- Switch from job output to artifact for signals data to avoid GitHub
Actions output size limits with large issue/PR payloads
- Drop issue body from queries to reduce payload size
- Use category-filtered GraphQL query for Ideas discussions instead of
fetching all and post-filtering (scales to >50 discussions)
- Increase discussion pagination from 50 to 100
- Clarify DST behavior in cron schedule comment (3 AM EDT / 2 AM EST)
- Align claude-code-action SHA to v1.0.89 matching claude.yml
Addresses Copilot review comments on PR #60.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Actionable comments posted: 2
🧹 Nitpick comments (1)
.github/workflows/feature-ideation.yml (1)
74-79: These 30-day signals are sampled, not complete.
Both queries take the first 30 results and only then apply the 30-day cutoff. On an active repo, that undercounts recent closed issues or merged PRs and skews the evidence Mary scores against. Consider paginating until `closedAt`/`mergedAt` falls outside the window, or at least raising the cap enough that the window stays representative.
Also applies to: 123-127
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/feature-ideation.yml around lines 74 - 79, The current GH queries (using THIRTY_DAYS_AGO and CLOSED_ISSUES via `gh issue list --limit 30`) sample the first 30 results then filter, which undercounts recent activity; update the logic so you paginate/fetch all pages until you encounter an item with closedAt (or mergedAt for PRs) older than THIRTY_DAYS_AGO (or use `--paginate` / a much higher `--limit` like 1000 as a simpler alternative), i.e., iterate pages from `gh issue list` (and the analogous `gh pr list` that populates MERGED_PRS) appending results and stop when the last item's closedAt/mergedAt < THIRTY_DAYS_AGO, then run the date filter against the accumulated set before JSON query.
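The paginate-until-the-window-ends loop this comment describes can be sketched language-agnostically — a minimal Python illustration, where `fetch_page` and its canned pages are hypothetical stand-ins for the real `gh issue list` calls:

```python
from datetime import datetime, timedelta, timezone

# Canned pages standing in for successive `gh issue list` pages (illustrative data,
# newest-first, as the CLI returns them).
PAGES = [
    [{"number": 3, "closedAt": "2024-06-28T00:00:00Z"},
     {"number": 2, "closedAt": "2024-06-10T00:00:00Z"}],
    [{"number": 1, "closedAt": "2024-04-01T00:00:00Z"}],  # already older than the window
]

def fetch_page(page):
    """Hypothetical stand-in for one paginated API call."""
    return PAGES[page] if page < len(PAGES) else []

def parse(ts):
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def closed_in_window(now, days=30):
    cutoff = now - timedelta(days=days)
    results, page = [], 0
    while True:
        items = fetch_page(page)
        if not items:
            break
        results.extend(items)
        # Results are sorted newest-first, so once the last item on a page
        # falls outside the window, no later page can contain in-window items.
        if parse(items[-1]["closedAt"]) < cutoff:
            break
        page += 1
    # Final date filter against the accumulated set, as the comment suggests.
    return [i for i in results if parse(i["closedAt"]) >= cutoff]
```

With `now` at 2024-07-01, the loop fetches a second page only because the first page still ends inside the window, then discards the out-of-window item — the behavior a fixed `--limit 30` cannot guarantee.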
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: bd764232-9ffe-4e6a-b628-0fae91f0b91c
📒 Files selected for processing (1)
.github/workflows/feature-ideation.yml
```bash
IDEAS_CAT_ID=$(gh api graphql -f query='
  query($repo: String!, $owner: String!) {
    repository(name: $repo, owner: $owner) {
      discussionCategories(first: 20) {
        nodes { id name }
      }
    }
  }' -f owner="${{ github.repository_owner }}" \
     -f repo="${{ github.event.repository.name }}" \
     --jq '[.data.repository.discussionCategories.nodes[] | select(.name == "Ideas")][0].id' \
     2>/dev/null || echo "")

if [ -n "$IDEAS_CAT_ID" ] && [ "$IDEAS_CAT_ID" != "null" ]; then
  IDEAS_DISCUSSIONS=$(gh api graphql -f query='
    query($repo: String!, $owner: String!, $categoryId: ID!) {
      repository(name: $repo, owner: $owner) {
        discussions(first: 100, orderBy: {field: UPDATED_AT, direction: DESC},
                    categoryId: $categoryId) {
          nodes {
            id number title
            createdAt updatedAt
            labels(first: 10) { nodes { name } }
            comments(first: 1) { totalCount }
          }
        }
      }
    }' -f owner="${{ github.repository_owner }}" \
       -f repo="${{ github.event.repository.name }}" \
       -f categoryId="$IDEAS_CAT_ID" \
       --jq '.data.repository.discussions.nodes' \
       2>/dev/null || echo '[]')
else
  IDEAS_DISCUSSIONS='[]'
fi
```
Page through the full discussion inventory before deduping.
repository.discussions(first: 100, ...) only returns one page; GitHub's GraphQL API caps each page at 100 items and expects cursor pagination with pageInfo/after for the rest. Because this block also degrades any lookup failure to an empty list, the analyzer can conclude that an existing idea is new and create a duplicate Discussion. The General fallback has the same problem, since this job never inventories that category at all. Please paginate the inventory and fail the run if it cannot be built reliably. (docs.github.com)
Also applies to: 325-327
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/feature-ideation.yml around lines 83 - 116, The GraphQL
query for repository.discussions (used to populate IDEAS_DISCUSSIONS after
resolving IDEAS_CAT_ID) only fetches a single page (first:100) and falls back to
'[]' on any error, which can cause duplicate idea creation; update the script to
perform proper cursor pagination: run the discussions query in a loop using
pageInfo.endCursor/hasNextPage (pass an after variable) and append
.data.repository.discussions.nodes into IDEAS_DISCUSSIONS until hasNextPage is
false, and if any API call fails or returns unexpected data, exit non‑zero
rather than setting an empty list; apply the same paginated inventory logic to
the General category fetch (the analogous General discussion variable) so both
categories are fully enumerated before deduping.
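The cursor loop this prompt asks for can be sketched as follows — a minimal Python illustration, with `run_query` and its canned pages as hypothetical stand-ins for the real `gh api graphql` call; note that a failure raises instead of degrading to `[]`:

```python
# Canned GraphQL-style pages keyed by cursor (illustrative data only).
PAGES = {
    None: {"nodes": [{"number": 10}, {"number": 11}],
           "pageInfo": {"hasNextPage": True, "endCursor": "c1"}},
    "c1": {"nodes": [{"number": 12}],
           "pageInfo": {"hasNextPage": False, "endCursor": "c2"}},
}

def run_query(after):
    """Hypothetical stand-in for one `repository.discussions` query page."""
    if after not in PAGES:
        raise RuntimeError("GraphQL query failed")  # fail loudly, never return []
    return PAGES[after]

def all_discussions():
    """Accumulate nodes across pages using pageInfo.endCursor/hasNextPage."""
    nodes, after = [], None
    while True:
        page = run_query(after)   # any API error propagates and fails the run
        nodes.extend(page["nodes"])
        info = page["pageInfo"]
        if not info["hasNextPage"]:
            return nodes
        after = info["endCursor"]
```

The same loop applies unchanged to the General category inventory; only the category ID passed into the query differs.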
```yaml
env:
  GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  FOCUS_AREA: ${{ inputs.focus_area || '' }}
  RESEARCH_DEPTH: ${{ inputs.research_depth || 'standard' }}
```
🧩 Analysis chain
🏁 Script executed:
```bash
cat -n .github/workflows/feature-ideation.yml | sed -n '200,240p'
```
Repository: petry-projects/TalkTerm
Length of output: 2319
Use GitHub expressions to pass environment variables into the prompt.
The env variables are correctly defined (lines 207-210), but they're not accessible to Claude in the direct_prompt because plain text variables like $FOCUS_AREA, $RESEARCH_DEPTH, and $(date -u +%Y-%m-%d) are not evaluated by GitHub Actions. Claude receives these as literal strings.
Replace the problematic lines (230-234) with either:
- GitHub expressions: `- Focus area: ${{ env.FOCUS_AREA }}` and `- Research depth: ${{ env.RESEARCH_DEPTH }}`
- Or instruct Claude to read them via Bash:

```diff
- - Date: $(date -u +%Y-%m-%d)
- - Focus area: $FOCUS_AREA (empty = open-ended exploration)
- - Research depth: $RESEARCH_DEPTH
+ - Determine the current UTC date with `date -u +%Y-%m-%d` and use it for all `<DATE>` placeholders.
+ - Determine the focus area with `printf '%s\n' "${FOCUS_AREA:-Open exploration}"`.
+ - Determine the research depth with `printf '%s\n' "${RESEARCH_DEPTH:-standard}"`.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/feature-ideation.yml around lines 207 - 210, The prompt is
sending literal shell variables ($FOCUS_AREA, $RESEARCH_DEPTH, $(date -u
+%Y-%m-%d)) to Claude; replace those with GitHub Actions expressions or explicit
bash evaluation so the values are expanded before being passed. Update the
direct_prompt construction to use GitHub expressions like ${{ env.FOCUS_AREA }}
and ${{ env.RESEARCH_DEPTH }} (and ${{ env.GITHUB_TIME or format(...) }} / or
evaluate date via run step) or run a small bash step to export evaluated
variables into env lines before invoking the prompt; target the block that
builds the prompt (the direct_prompt/ env interpolation code) and swap the plain
$... and $(date...) tokens for the evaluated expressions or exported bash
variables so Claude receives the actual values.
Scheduled workflow (Fridays 3am ET) that uses Claude Code Action as the BMAD Analyst (Mary) to research market trends, analyze project signals, and create per-idea Discussion threads in the Ideas category. Each run creates new proposals or updates existing ones with evolving findings.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…ipeline

- Fix action inputs: direct_prompt→prompt, allowed_tools→claude_args, remove invalid timeout_minutes
- Set ANTHROPIC_MODEL=claude-opus-4-6 for highest-quality reasoning
- Add "think hard" and "research freely" directives at the top of prompt
- Replace single-phase ideation with 4-skill iterative pipeline:
  - Phase 2: Market Research (evidence gathering with iteration loops)
  - Phase 3: Brainstorming (divergent ideation, 8-15 raw ideas)
  - Phase 4: Party Mode (collaborative amplify/connect/ground refinement)
  - Phase 5: Adversarial (5-question stress test, only survivors proceed)
- Add Adversarial Review section to Discussion body template
- Increase job timeout to 60min for deeper analysis

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Force-pushed 34b8d4d to 95a330b
Actionable comments posted: 1
🧹 Nitpick comments (2)
.github/workflows/feature-ideation.yml (2)
384-404: Consider passing resolved IDs through the signals artifact.
Phase 6 instructs Claude to resolve the repository ID and Ideas category ID via GraphQL, but the category ID was already resolved in the `gather-signals` job (lines 83-93). Including these IDs in `signals.json` would eliminate a redundant API call and ensure consistency between the inventory and the creation target.

💡 Example: Extend signals.json to include resolved IDs
In the `gather-signals` job, after resolving `IDEAS_CAT_ID`, also fetch the repository ID and include both in the JSON:

```diff
+ # Fetch repository ID for later use
+ REPO_ID=$(gh api graphql -f query='
+   query($repo: String!, $owner: String!) {
+     repository(name: $repo, owner: $owner) { id }
+   }' -f owner="${{ github.repository_owner }}" \
+     -f repo="${{ github.event.repository.name }}" \
+     --jq '.data.repository.id' 2>/dev/null || echo "")
+
  jq -n \
    --arg scan_date "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    --arg repo "$REPO" \
+   --arg repo_id "$REPO_ID" \
+   --arg ideas_category_id "$IDEAS_CAT_ID" \
    ...
```

Then update the prompt to read these IDs from `signals.json` instead of re-querying.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/feature-ideation.yml around lines 384 - 404, The workflow redundantly re-resolves repository and discussion category IDs in "Phase 6" instead of reusing values from the earlier gather-signals step; update the gather-signals job to include the resolved repository ID and IDEAS_CAT_ID (and fallback GENERAL_CAT_ID) inside the signals.json artifact, then change Phase 6 to read repo ID and category ID from signals.json (instead of running the GraphQL query) so the steps that reference IDEAS_CAT_ID, repository ID, and the "Ideas"/"General" fallbacks consume the artifact values for consistency and to avoid duplicate API calls.
411-416: Consider providing concrete fuzzy matching guidance.
"Use fuzzy matching — the exact title may differ but the idea should be recognizably the same" is subjective and could lead to inconsistent behavior across runs. Claude might match too aggressively (false positives causing missed updates) or too loosely (duplicates created).

Consider adding specific criteria:

```diff
 ### Matching Logic
 Search existing Ideas discussions (from the signals data) for a title that
-matches the proposal's core concept. Use fuzzy matching — the exact title
-may differ but the idea should be recognizably the same.
+matches the proposal's core concept. Match criteria:
+- Same core feature/capability (ignore phrasing differences)
+- Same target user problem or use case
+- If >70% conceptual overlap, treat as the same idea
+- When uncertain, prefer updating an existing Discussion over creating a duplicate
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/feature-ideation.yml around lines 411 - 416, The fuzzy-matching guidance is too vague; update the workflow text and matching config so the matching algorithm and concrete thresholds are explicit: document which algorithm to use (e.g., token_set_ratio or Levenshtein for title-level, or embedding cosine for semantic match) and add config keys like fuzzy_matching.method, fuzzy_matching.threshold (e.g., token_set_ratio >= 85 or cosine >= 0.85), fuzzy_matching.min_shared_tokens, and fuzzy_matching.ignore_stopwords; reference and update the matching implementation (match_proposal_to_existing_idea / find_best_idea_match) to normalize case/diacritics, strip stopwords, and apply those thresholds so runs are deterministic and consistent.
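As a concrete illustration of the deterministic matching the prompt asks for — a minimal Python sketch using stdlib `difflib` (one possible metric, not necessarily the `token_set_ratio` the prompt names; the 0.85 threshold and stopword list are assumptions):

```python
from difflib import SequenceMatcher

STOPWORDS = {"a", "an", "the", "for", "of", "to", "in"}

def normalize(title):
    # Lowercase, strip the 💡 marker and stopwords, then sort tokens so
    # word order does not affect the score.
    tokens = [t for t in title.lower().replace("💡", "").split()
              if t not in STOPWORDS]
    return " ".join(sorted(tokens))

def is_same_idea(a, b, threshold=0.85):
    """Deterministic stand-in for the 'recognizably the same idea' guidance."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold
```

Because normalization and the threshold are fixed, repeated runs over the same Discussion inventory produce the same match decisions, which is the property the nitpick is after.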
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 93961fbe-b4c1-44fe-92e3-df31197c0087
📒 Files selected for processing (1)
.github/workflows/feature-ideation.yml
```bash
# --- Open issues (exclude bot-generated, omit body to stay within output limits) ---
OPEN_ISSUES=$(gh issue list --repo "$REPO" --state open --limit 50 \
  --json number,title,labels,createdAt,author \
  -q '[.[] | select(.author.login != "dependabot[bot]" and .author.login != "github-actions[bot]")]' \
  2>/dev/null || echo '[]')

# --- Recently closed issues (last 30 days) ---
THIRTY_DAYS_AGO=$(date -u -d '30 days ago' +%Y-%m-%d 2>/dev/null || date -u -v-30d +%Y-%m-%d)
CLOSED_ISSUES=$(gh issue list --repo "$REPO" --state closed --limit 30 \
  --json number,title,labels,closedAt \
  -q "[.[] | select(.closedAt >= \"${THIRTY_DAYS_AGO}\")]" \
  2>/dev/null || echo '[]')
```
Silent error swallowing masks API failures.
The 2>/dev/null || echo '[]' pattern throughout the signal collection (lines 72, 79, 93, 113, 121, 127) swallows all errors—including authentication failures, rate limits, and network issues—returning empty arrays that downstream analysis treats as "no data." This can cause the analyzer to make decisions based on incomplete information (e.g., concluding there are no open issues when the API call actually failed).
Consider distinguishing between "no results" and "query failed":
🛡️ Proposed fix: Add error handling with fallback indication
```diff
-OPEN_ISSUES=$(gh issue list --repo "$REPO" --state open --limit 50 \
-  --json number,title,labels,createdAt,author \
-  -q '[.[] | select(.author.login != "dependabot[bot]" and .author.login != "github-actions[bot]")]' \
-  2>/dev/null || echo '[]')
+if ! OPEN_ISSUES=$(gh issue list --repo "$REPO" --state open --limit 50 \
+  --json number,title,labels,createdAt,author \
+  -q '[.[] | select(.author.login != "dependabot[bot]" and .author.login != "github-actions[bot]")]'); then
+  echo "::warning::Failed to fetch open issues"
+  OPEN_ISSUES='[]'
+fi
```

Alternatively, for critical data like the Ideas discussions inventory (which affects duplicate detection), consider failing the job rather than proceeding with empty data.
📝 Committable suggestion
‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```bash
# --- Open issues (exclude bot-generated, omit body to stay within output limits) ---
if ! OPEN_ISSUES=$(gh issue list --repo "$REPO" --state open --limit 50 \
  --json number,title,labels,createdAt,author \
  -q '[.[] | select(.author.login != "dependabot[bot]" and .author.login != "github-actions[bot]")]'); then
  echo "::warning::Failed to fetch open issues"
  OPEN_ISSUES='[]'
fi

# --- Recently closed issues (last 30 days) ---
THIRTY_DAYS_AGO=$(date -u -d '30 days ago' +%Y-%m-%d 2>/dev/null || date -u -v-30d +%Y-%m-%d)
CLOSED_ISSUES=$(gh issue list --repo "$REPO" --state closed --limit 30 \
  --json number,title,labels,closedAt \
  -q "[.[] | select(.closedAt >= \"${THIRTY_DAYS_AGO}\")]" \
  2>/dev/null || echo '[]')
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/feature-ideation.yml around lines 68 - 79, The current use
of the "2>/dev/null || echo '[]'" fallback for GH queries (seen on the
OPEN_ISSUES and CLOSED_ISSUES assignments using "gh issue list") silently
swallows all errors and returns an empty array indistinguishable from a valid
"no results" response; change the error handling so failures are detectable:
capture the exit status and stderr from the "gh issue list" calls for
OPEN_ISSUES and CLOSED_ISSUES (and other similar signal variables), log or emit
a clear sentinel (e.g., an object/JSON with an "error" field or a non-empty
message) when the command fails, and for critical data consider exiting non-zero
instead of returning '[]' so the workflow fails rather than proceeding with
missing inventory.
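The sentinel idea this prompt describes — making a failed call distinguishable from a legitimately empty result — can be sketched as a minimal Python wrapper (the `fetch_signal` helper and its return shape are illustrative, not part of the workflow):

```python
import subprocess

def fetch_signal(cmd):
    """Run a CLI query; distinguish 'no results' from 'query failed'.

    Returns {"ok": True, "data": <stdout>} on success, or
    {"ok": False, "error": <stderr>} on failure, so downstream code can
    never mistake a failure for an empty list.
    """
    proc = subprocess.run(cmd, capture_output=True, text=True)
    if proc.returncode != 0:
        return {"ok": False, "error": proc.stderr.strip() or "command failed"}
    return {"ok": True, "data": proc.stdout}
```

Callers can then branch on `ok`: log a `::warning::` and substitute `[]` for optional signals, or exit non-zero when a critical inventory (like the Ideas discussions) could not be built.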



Summary

How it works
Job 1 — Gather Signals: Collects open issues, feature requests, bug patterns, merged PRs, existing Ideas discussions, and releases into a structured JSON payload.
Job 2 — BMAD Analyst: Claude Code Action embodies the BMAD Analyst persona (Mary) and:

Per-idea Discussion strategy
Each feature proposal gets its own Discussion (titled `💡 <Idea>`) in the Ideas category. On subsequent runs, existing proposals are updated with evolving findings rather than duplicated.

Inputs
- `focus_area`
- `research_depth` (default `standard`) — `quick`/`standard`/`deep`, controls web search volume

Sample run
A test execution created 5 Discussion threads: #55–#59

Dependencies
- `CLAUDE_CODE_OAUTH_TOKEN` org secret (already configured)

Test plan
- `workflow_dispatch` with `focus_area` input steers research appropriately
- `quick` depth completes in < 15 minutes

🤖 Generated with Claude Code