Summary
Each consumer of the agent-team catalog ends up manually correlating two data sources to file useful upstream feedback: (1) GitHub Actions run logs (stage durations, failure reasons, tool-call waste) and (2) local Claude Code session transcripts (what the human had to fix by hand, which workarounds stuck, which prompts the agent got wrong). This is why only power users produce issues like #47, #48, and the 10 sibling issues filed alongside this one — the analysis takes an hour of manual correlation. Every other consumer silently works around gaps without reporting them.
A built-in skill that packages this workflow would turn every consumer into a feedback source and compound the catalog's improvement rate.
Proposal
Ship `.claude/skills/agent-team-feedback.md` (or an equivalent slash command / skill node in the plugin) that, when invoked:
- Scans recent agent-team runs via `gh run list --workflow=*-agent.yml`, grouping by `issue_number` to reconstruct full pipelines (a rough sketch of this grouping step follows the list).
- Computes per-stage durations, failure reasons (from log tails), and iteration counts, then cross-references the most recent local Claude Code session (`~/.claude/projects/<slug>/*.jsonl`) for human-side interventions — e.g. manual re-dispatches, PRs the user merged to unblock, issue comments that correspond to observed pipeline hiccups (a transcript-scan sketch also follows the list).
- Surfaces a prioritized list of upstream-improvement opportunities: timeout pressure, prompt drift, missing tools, recovery friction, heuristic fragility.
- For each item, drafts a GitHub issue body in the #48 style: problem statement, timeline table with concrete run links, proposal with a minimal diff, explicit acceptance criteria.
- Confirms with the user before filing on verkyyi/github-agent-runner.
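The run-grouping and duration step is the mechanical core of the skill. Below is a minimal sketch of what a helper behind the skill could do, with several assumptions that are not catalog facts: that each run's `displayTitle` embeds the issue number, that a stage's duration can be approximated as `updatedAt` minus `createdAt`, and that the workflow-name filter stands in for however the catalog actually tags its agent workflows.

```python
import json
import re
import subprocess
from collections import defaultdict
from datetime import datetime

FIELDS = "databaseId,workflowName,displayTitle,conclusion,createdAt,updatedAt,url"


def recent_agent_runs(limit: int = 200) -> list[dict]:
    """Pull recent runs and keep the agent workflows.

    Filtering in Python avoids depending on glob support in --workflow;
    the 'agent' substring check is a placeholder for the real convention.
    """
    out = subprocess.run(
        ["gh", "run", "list", "--limit", str(limit), "--json", FIELDS],
        capture_output=True, text=True, check=True,
    ).stdout
    return [r for r in json.loads(out) if "agent" in r["workflowName"].lower()]


def issue_number(run: dict) -> str | None:
    # Assumption: the display title mentions the issue, e.g. "plan agent: #123".
    m = re.search(r"#(\d+)", run["displayTitle"])
    return m.group(1) if m else None


def duration_minutes(run: dict) -> float:
    # Approximation: wall-clock time between run creation and last update.
    start = datetime.fromisoformat(run["createdAt"].replace("Z", "+00:00"))
    end = datetime.fromisoformat(run["updatedAt"].replace("Z", "+00:00"))
    return round((end - start).total_seconds() / 60, 1)


def pipelines(runs: list[dict]) -> dict[str, list[dict]]:
    grouped: dict[str, list[dict]] = defaultdict(list)
    for run in runs:
        num = issue_number(run)
        if num:
            grouped[num].append(run)
    for stages in grouped.values():
        stages.sort(key=lambda r: r["createdAt"])  # chronological stage order
    return grouped


if __name__ == "__main__":
    for num, stages in pipelines(recent_agent_runs()).items():
        print(f"issue #{num}")
        for s in stages:
            state = s["conclusion"] or "in progress"
            print(f"  {s['workflowName']:<24} {state:<12} {duration_minutes(s):>6} min  {s['url']}")
```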
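For the transcript side, the exact Claude Code session schema isn't specified here, so this sketch treats every field name as an assumption: each JSONL line is taken to be an object with a `type` field and, for user turns, a `message.content` payload that is either a string or a list of text blocks. The intervention keywords are purely illustrative.

```python
import json
from pathlib import Path

# Illustrative signals that a human had to step in by hand; not a real taxonomy.
INTERVENTION_HINTS = ("re-run", "re-dispatch", "rerun", "merged manually", "workaround")


def latest_session(project_slug: str) -> Path | None:
    """Most recently modified transcript under ~/.claude/projects/<slug>/."""
    sessions = sorted(
        Path.home().glob(f".claude/projects/{project_slug}/*.jsonl"),
        key=lambda p: p.stat().st_mtime,
    )
    return sessions[-1] if sessions else None


def text_of(message: dict) -> str:
    # Content may be a plain string or a list of content blocks (assumed shape).
    content = message.get("content", "")
    if isinstance(content, list):
        content = " ".join(b.get("text", "") for b in content if isinstance(b, dict))
    return content


def human_interventions(session: Path) -> list[str]:
    """Return user-turn snippets that look like manual fixes or re-dispatches."""
    hits: list[str] = []
    for line in session.read_text().splitlines():
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue
        if entry.get("type") != "user":
            continue
        text = text_of(entry.get("message", {}))
        if any(hint in text.lower() for hint in INTERVENTION_HINTS):
            hits.append(text[:200])
    return hits
```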
Why in the catalog repo, not individually authored
- The skill needs to know the catalog's shape (stage names, comment markers, label conventions) to produce useful drafts. Shipping it alongside the catalog keeps it in sync as the catalog evolves.
- It normalizes the format of incoming feedback — less variance in issue quality → faster triage.
- It lowers the cost of filing from "1 hour of correlation" to "1 slash command + a confirmation", which matters for the volume of feedback received.
Acceptance
- A skill file ships in the `agent-team` catalog directory (or wherever the plugin surfaces consumer skills).
- Invoking it produces issue drafts on par with #47/#48 (concrete run links, timeline tables, acceptance criteria).
- The user confirms every `gh issue create` call — no silent filing.