feat: add /readiness skill with report card and mobile-friendly image (#3)
Conversation
Introduces a new agent-guided skill that evaluates any codebase across 8 pillars (style, testing, hooks, documentation, agent config, code quality, dev environment, agentic workflow) and 37 criteria, producing a scored readiness report with maturity levels 1-5. Inspired by Factory.ai's Agent Readiness framework but designed around our harness philosophy: the agent does the analysis using the setup skill's templates and scripts as a reference library, producing surgical recommendations that respect existing project configuration.

Key features:
- 3 parallel subagents for clean context during evaluation
- Report saved to .claude/readiness-report.md with YAML frontmatter for delta tracking across runs
- Monorepo support with per-app scoring
- Surgical remediation recommendations (not "run /setup")
- Recommends Superpowers if no agentic workflow system detected

Also removes competitive-analysis.md (an old working file that didn't belong as a setup skill reference) and updates the README with full readiness documentation.

https://claude.ai/code/session_01TZt5Uy6Rx9BCjYJUtxJpFy
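The delta tracking described above could be implemented roughly like this; a minimal sketch, assuming the report's frontmatter carries `level` and `score` fields (the field names are hypothetical, not confirmed by the skill's actual output):

```python
# Sketch of delta tracking across /readiness runs: parse the YAML frontmatter
# at the top of .claude/readiness-report.md and compare against the last run.
# Field names ("level", "score") are hypothetical.

def parse_frontmatter(report_text: str) -> dict:
    """Extract key: value pairs from a leading '---'-delimited frontmatter block."""
    lines = report_text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    fields = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break
        key, _, value = line.partition(":")
        fields[key.strip()] = value.strip()
    return fields

def level_delta(previous: str, current: str) -> int:
    """Maturity-level change between two report snapshots."""
    prev = int(parse_frontmatter(previous).get("level", 0))
    curr = int(parse_frontmatter(current).get("level", 0))
    return curr - prev

old = "---\nlevel: 2\nscore: 18/37\n---\n# Readiness Report\n"
new = "---\nlevel: 3\nscore: 27/37\n---\n# Readiness Report\n"
print(level_delta(old, new))  # → 1
```

Keeping the machine-readable fields in frontmatter lets later runs diff scores without re-parsing the conversational body of the report.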
…hooting

Users didn't know they could just ask naturally after install. Added conversational trigger examples for both skills, a troubleshooting table for common issues, and clarified that slash commands aren't required — Claude auto-triggers skills from natural language.

https://claude.ai/code/session_01TZt5Uy6Rx9BCjYJUtxJpFy
Adds a Skill Creator-style eval framework that tests the /readiness skill against fixture repos with known maturity levels. The runner launches the Claude CLI with --plugin-dir, captures output, and the grader validates the generated report against expected criteria (level range, pillar scores, recommendation content, insight detection).

Fixtures:
- level-1-bare: minimal project, should score Level 1
- level-3-enforced: linting/tests/hooks/CLAUDE.md, should score Level 2-3
- level-5-autonomous: full harness coverage, should score Level 4-5

https://claude.ai/code/session_01TZt5Uy6Rx9BCjYJUtxJpFy
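A minimal sketch of what such a grader might check, assuming the report states a line like "Level 2" and that fixtures declare an expected level range plus required recommendation phrases (both shapes are illustrative, not the framework's actual schema):

```python
# Hypothetical grader sketch: validate a generated readiness report against
# a fixture's expected criteria (level range, recommendation content).
import re

def grade(report: str, expected: dict) -> list[str]:
    """Return failure messages; an empty list means the fixture passed."""
    failures = []
    match = re.search(r"Level\s+(\d)", report)
    if not match:
        failures.append("no maturity level found in report")
    else:
        level = int(match.group(1))
        lo, hi = expected["level_range"]
        if not lo <= level <= hi:
            failures.append(f"level {level} outside expected range {lo}-{hi}")
    # Recommendation-content check: each expected phrase must appear somewhere.
    for phrase in expected.get("must_recommend", []):
        if phrase.lower() not in report.lower():
            failures.append(f"missing recommendation: {phrase}")
    return failures

report = "# Readiness Report\n\nOverall: Level 2\n\nRecommend adding pre-commit hooks."
expected = {"level_range": (2, 3), "must_recommend": ["pre-commit hooks"]}
print(grade(report, expected))  # → []
```

Grading a level *range* rather than an exact score keeps the eval robust to the agent's judgment calls on borderline criteria.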
Three bugs found from inspecting session logs:
1. jq piped all test case JSON (including expected scores) into the prompt
2. Default permission mode caused hangs waiting for tool approval
3. Claude analyzed fixture dirs instead of the temp working directory

Fixes: strip expected values from the test case data passed to the loop, add --permission-mode acceptEdits, and prepend a CWD-only instruction.

https://claude.ai/code/session_01TZt5Uy6Rx9BCjYJUtxJpFy
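The first fix is done with jq in the actual runner; this Python sketch mirrors the same idea, with the key names (`expected`, etc.) invented for illustration:

```python
# Sketch of bug fix #1: never leak grading data into the prompt.
# Key names ("expected", "expected_level", "expected_pillars") are hypothetical.
import json

def prompt_safe(test_case: dict) -> str:
    """Serialize a test case with grading-only keys removed."""
    redacted = {k: v for k, v in test_case.items()
                if k not in ("expected", "expected_level", "expected_pillars")}
    return json.dumps(redacted, sort_keys=True)

case = {"name": "level-1-bare", "fixture": "fixtures/level-1-bare",
        "expected": {"level_range": [1, 1]}}
print(prompt_safe(case))
# → {"fixture": "fixtures/level-1-bare", "name": "level-1-bare"}
```

The jq equivalent is a `del()` on the same keys before the loop builds each prompt; either way, the agent under test never sees the answers it is being graded against.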
Plan-before-build is inherently part of the agentic workflow systems we check for (BMAD, Superpowers, and gStack all include planning phases). Removes the redundant criterion, reducing Pillar 8 from 3 to 2 criteria and the total from 37 to 36. https://claude.ai/code/session_01TZt5Uy6Rx9BCjYJUtxJpFy
Dark terminal-style card visualizing the 8-pillar readiness scorecard with color-coded progress bars. Designed for LinkedIn/social posts. https://claude.ai/code/session_01TZt5Uy6Rx9BCjYJUtxJpFy
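A color-coded progress bar in SVG amounts to stacked rects per pillar; a hedged sketch of the idea, where the colors, dimensions, and pillar data are all made up for illustration and are not the actual asset:

```python
# Illustrative sketch of a terminal-style scorecard bar as an SVG fragment.
# Colors, sizes, and scores below are invented; the real card is hand-designed.

def pillar_bar(name: str, score: float, y: int, width: int = 400) -> str:
    """One labeled, color-coded progress bar; score is in [0, 1]."""
    color = "#4ade80" if score >= 0.8 else "#facc15" if score >= 0.5 else "#f87171"
    filled = int(width * score)
    return (
        f'<text x="0" y="{y}" fill="#e5e7eb">{name}</text>'
        f'<rect x="160" y="{y - 12}" width="{width}" height="14" fill="#1f2937"/>'
        f'<rect x="160" y="{y - 12}" width="{filled}" height="14" fill="{color}"/>'
    )

svg = '<svg xmlns="http://www.w3.org/2000/svg" width="600" height="120">'
for i, (name, score) in enumerate([("Testing", 0.75), ("Hooks", 1.0)]):
    svg += pillar_bar(name, score, y=30 + i * 40)
svg += "</svg>"
print('fill="#facc15"' in svg)  # → True (Testing at 0.75 falls in the yellow band)
```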
Reorganize into assets/social/src/ (SVG sources) and assets/social/ (exported PNGs). LinkedIn requires image files, not SVGs; a 1200x780 PNG is exported via sharp-cli for LinkedIn post use. https://claude.ai/code/session_01TZt5Uy6Rx9BCjYJUtxJpFy
…ility

Bumped all font sizes from 11-14px to 15-22px, increased bar heights, and expanded the viewBox vertically to accommodate the larger layout.

https://claude.ai/code/session_01DfRtdiyWXCTZWwKaX4wgzm
Caution: Review failed (the pull request is closed). Review profile: CHILL
⛔ Files ignored due to path filters (2)
📒 Files selected for processing (48)
📝 Walkthrough
Sequence Diagram

```mermaid
sequenceDiagram
    participant Agent as Readiness Agent
    participant Scanner as Environment<br/>Scanner
    participant Evaluators as Parallel Pillar<br/>Evaluators (3×)
    participant Scorer as Scorer &<br/>Report Writer
    participant User as User /<br/>Fixer
    Agent->>Scanner: Detect project environment<br/>(manifests, git, languages, monorepo)
    Scanner-->>Agent: Project metadata
    Agent->>Evaluators: Run subagents on pillar<br/>groups (pass/fail + insights)
    par Evaluate Pillars
        Evaluators->>Evaluators: Check style, validation,<br/>testing rules
        Evaluators->>Evaluators: Check git hooks,<br/>documentation
        Evaluators->>Evaluators: Check agent config,<br/>code quality, dev env
    end
    Evaluators-->>Agent: Pillar results JSON
    Agent->>Scorer: Compute maturity level<br/>1–5 with gating logic
    Scorer->>Scorer: Calculate per-pillar scores,<br/>aggregate per-app if monorepo
    Scorer-->>Agent: Level score (1–5)
    Agent->>Scorer: Write YAML frontmatter<br/>+ conversational analysis
    Scorer->>Scorer: Output .claude/readiness-report.md
    Scorer-->>Agent: Report written
    Agent->>Agent: Generate prioritized<br/>remediation recommendations
    Agent-->>User: Display findings & offer fixes
    opt User applies fixes
        User->>Agent: Request fix application
        Agent->>User: Apply templates/scripts<br/>without overwriting
        User-->>Scorer: Remediated codebase
    end
```
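The "gating logic" step in the diagram might look roughly like this; the thresholds and the gate rule are assumptions invented for illustration, since the skill's actual scoring rubric isn't reproduced here:

```python
# Hypothetical sketch of maturity-level gating: the overall pass rate sets a
# baseline level 1-5, but a near-empty pillar gates (caps) the result so one
# strong area can't mask a missing one. All thresholds here are invented.

def maturity_level(pillar_scores: dict[str, float]) -> int:
    """Map per-pillar pass rates (0.0-1.0) to a maturity level 1-5."""
    overall = sum(pillar_scores.values()) / len(pillar_scores)
    baseline = 1 + min(4, int(overall * 5))   # 0.0-1.0 → level 1-5
    weakest = min(pillar_scores.values())
    if weakest < 0.25:                        # any near-empty pillar
        baseline = min(baseline, 2)           # gates the level at 2
    return baseline

scores = {"style": 1.0, "testing": 0.8, "hooks": 0.6, "docs": 0.5,
          "agent_config": 0.75, "quality": 0.6, "dev_env": 0.5, "workflow": 0.0}
print(maturity_level(scores))  # → 2: the empty workflow pillar gates the result
```

The gate is what makes the level a maturity claim rather than an average: here the raw pass rate would land at Level 3, but the missing agentic-workflow pillar caps it at 2.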
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Summary
- Add the /readiness skill for codebase analysis and scoring

Test plan
- Run the /readiness skill against a sample project

https://claude.ai/code/session_01DfRtdiyWXCTZWwKaX4wgzm
Summary by CodeRabbit

New Features
- /readiness skill to assess codebase readiness across 8 pillars with maturity level scoring, generating detailed reports to .claude/readiness-report.md and prioritized remediation recommendations.

Documentation