tx2z/claude-code-test-coverage-audit

# Claude Code Test Coverage Audit

A comprehensive test quality auditing command for Claude Code that goes beyond line coverage to analyze test effectiveness, quality, and maintainability using specialized AI agents.

## Features

- **Coverage Gap Detection** - Finds untested functions, branches, error paths, and edge cases
- **Test Quality Analysis** - Evaluates naming, patterns, isolation, assertions, and readability
- **Mutation Testing Analysis** - Identifies weak assertions and tests that pass with wrong code
- **Flaky Test Detection** - Finds time-dependent, order-dependent, and non-deterministic tests
- **Test Performance Analysis** - Identifies slow tests and optimization opportunities
- **Test Organization Review** - Evaluates structure, naming conventions, and test categorization

## Requirements

- Claude Code CLI installed and configured
- A project with existing tests to audit

## Installation

1. Clone or download this repository.
2. Copy the folders to your project's `.claude/` directory:

```sh
# From your project root
cp -r path/to/claude-code-test-coverage-audit/commands .claude/
cp -r path/to/claude-code-test-coverage-audit/testing .claude/
```

Your project structure should look like:

```
your-project/
├── .claude/
│   ├── commands/
│   │   └── test-coverage-audit.md
│   └── testing/
│       ├── agents/
│       │   ├── coverage-gaps.md
│       │   ├── test-quality.md
│       │   ├── mutation-testing.md
│       │   ├── flaky-tests.md
│       │   ├── test-performance.md
│       │   └── test-organization.md
│       └── templates/
│           └── test-audit-report.md
├── src/
├── tests/
└── ...
```

3. (Optional) Add `test-audit-reports/` to your `.gitignore`:

```sh
echo "test-audit-reports/" >> .gitignore
```

## Optional: Optimize for Your Tech Stack

After installation, you can optimize the test auditor for your specific codebase. This improves audit accuracy by focusing on your testing frameworks and conventions.

Run this prompt in Claude Code:

```
I just installed the test-coverage-audit command in .claude/. Please:

1. Analyze my codebase to detect my testing stack (test framework, assertion library, mocking tools, E2E framework)
2. Read the command files in .claude/commands/test-coverage-audit.md and .claude/testing/agents/
3. Optimize each audit agent by:
   - Removing patterns for test frameworks I don't use
   - Adding quality checks specific to my test framework (Jest matchers, pytest fixtures, etc.)
   - Configuring flaky test detection based on my async patterns
   - Adjusting performance thresholds based on my test runner
4. Keep the agent structure, audit categories, and output format unchanged

Show me what you'll change before applying.
```

## Usage

In Claude Code, run the test audit command:

```
/test-coverage-audit
```

### Audit Modes

| Command | Description |
| --- | --- |
| `/test-coverage-audit` | Full audit (all checks) |
| `/test-coverage-audit full` | Full audit (all checks) |
| `/test-coverage-audit gaps` | Coverage gap analysis only |
| `/test-coverage-audit quality` | Test quality analysis only |
| `/test-coverage-audit unit` | Analyze unit tests only |
| `/test-coverage-audit integration` | Analyze integration tests only |
| `/test-coverage-audit e2e` | Analyze end-to-end tests only |
| `/test-coverage-audit mutation` | Mutation testing analysis |
| `/test-coverage-audit flaky` | Flaky test detection |
| `/test-coverage-audit performance` | Test performance analysis |

## Audit Categories

### Coverage Gap Analysis (TEST01)

| Check | Description |
| --- | --- |
| Untested Functions | Functions/methods without any test coverage |
| Untested Branches | `if`/`else` paths not exercised by tests |
| Untested Error Paths | Exception handlers and error cases |
| Edge Cases | Boundary conditions and special values |
| Missing Integrations | Component interactions without tests |
| UI Components | Frontend components lacking tests |
| API Endpoints | Backend routes without coverage |
| Critical Paths | Business-critical logic left untested |
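To make the first few checks concrete, here is a hypothetical `parse_port` function (not part of this repository) together with the happy-path, error-path, and boundary tests a coverage-gap audit expects to find. Python is used here for illustration; the same gaps apply in any supported stack.

```python
# Hypothetical function: a branch, an error path, and two boundary conditions.
def parse_port(value: str) -> int:
    """Parse a TCP port number from a string."""
    n = int(value)               # error path: raises ValueError on non-numeric input
    if not 1 <= n <= 65535:      # branch with two boundary conditions
        raise ValueError(f"port out of range: {n}")
    return n

# Happy path: often the only test that gets written.
assert parse_port("8080") == 8080

# Error path: non-numeric input.
try:
    parse_port("http")
    raise AssertionError("expected ValueError")
except ValueError:
    pass

# Edge cases: both boundaries, and just outside them.
assert parse_port("1") == 1
assert parse_port("65535") == 65535
for bad in ("0", "65536"):
    try:
        parse_port(bad)
        raise AssertionError("expected ValueError")
    except ValueError:
        pass
```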

### Test Quality Analysis (TEST02)

| Check | Description |
| --- | --- |
| Naming Quality | Descriptive, consistent test names |
| AAA Pattern | Arrange-Act-Assert structure |
| Test Isolation | No shared state between tests |
| Mock Appropriateness | Correct mock usage and boundaries |
| Assertion Quality | Specific vs. vague assertions |
| Readability | Clear, maintainable test code |
| Magic Values | Unexplained constants in tests |
| Test Data | Proper test data management |
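To illustrate what the naming, magic-value, and AAA checks flag, here is a hypothetical `apply_discount` function tested twice in plain Python: first in the style the audit reports, then restructured with Arrange-Act-Assert.

```python
# Hypothetical function under test.
def apply_discount(price: float, percent: float) -> float:
    return round(price * (1 - percent / 100), 2)

# Flagged by the audit: vague name, magic values, assertion that only checks truthiness.
def test_1():
    assert apply_discount(100.0, 10.0)

# Preferred: descriptive name, named values, specific assertion, AAA structure.
def test_ten_percent_discount_reduces_price_by_a_tenth():
    # Arrange
    original_price = 100.0
    discount_percent = 10.0
    # Act
    discounted = apply_discount(original_price, discount_percent)
    # Assert
    assert discounted == 90.0

test_1()
test_ten_percent_discount_reduces_price_by_a_tenth()
```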

### Mutation Testing Analysis (TEST03)

| Check | Description |
| --- | --- |
| Surviving Mutants | Code changes not caught by tests |
| Weak Assertions | Assertions that pass even with incorrect code |
| Boundary Tests | Off-by-one and limit testing |
| Operator Coverage | Arithmetic/logical operator testing |
| Conditional Coverage | Boolean expression testing |
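A minimal sketch of the idea behind this analysis, using a hypothetical `is_adult` function: a "mutant" flips the `>=` operator, a weak test passes against both versions (so the mutant survives), and a boundary test distinguishes them.

```python
# Hypothetical function under test.
def is_adult(age: int) -> bool:
    return age >= 18

# A mutant the audit reasons about: `>=` flipped to `>`.
def is_adult_mutant(age: int) -> bool:
    return age > 18

# Weak test: true for both versions, so this mutant would survive.
assert is_adult(30)
assert is_adult_mutant(30)

# Boundary test: the same input distinguishes the original from the mutant.
assert is_adult(18)
assert not is_adult_mutant(18)
```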

### Flaky Test Detection (TEST04)

| Check | Description |
| --- | --- |
| Time Dependencies | Tests relying on the current time |
| Order Dependencies | Tests depending on execution order |
| Race Conditions | Async timing issues |
| External Dependencies | Network, file system, services |
| Random Data | Non-deterministic test inputs |
| Environment Dependencies | System-specific assumptions |
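As an example of the first category, here is a hypothetical time-dependent check and the deterministic rewrite such an audit would suggest: inject the clock as a parameter instead of reading it inside the function (in real suites, libraries such as `freezegun` for Python serve the same purpose).

```python
from datetime import datetime, timedelta

# Flaky: reads the real clock, so the outcome depends on when the test runs.
def is_expired_flaky(deadline: datetime) -> bool:
    return datetime.now() > deadline

# Deterministic: the current time is a parameter the test controls.
def is_expired(deadline: datetime, now: datetime) -> bool:
    return now > deadline

fixed_now = datetime(2024, 1, 15, 12, 0, 0)
one_hour_ago = fixed_now - timedelta(hours=1)
assert is_expired(one_hour_ago, now=fixed_now) is True
assert is_expired(one_hour_ago + timedelta(days=1), now=fixed_now) is False
```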

### Test Performance Analysis (TEST05)

| Check | Description |
| --- | --- |
| Slow Tests | Tests exceeding time thresholds |
| Database Tests | Unnecessary DB operations |
| Setup Overhead | Expensive repeated setup |
| Parallelization | Missing concurrent execution |
| Fixture Size | Oversized test fixtures |
| Memory Usage | Tests with memory issues |
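One common fix for setup overhead, sketched in plain Python under the assumption that the setup is deterministic: compute the expensive state once and share it across tests, which is the effect a module- or session-scoped pytest fixture gives you.

```python
from functools import lru_cache

def build_large_dataset() -> list:
    # Stand-in for slow setup work: DB seeding, file parsing, service startup.
    return [n * n for n in range(100_000)]

# Slow pattern: every test pays the full setup cost again.
def test_sum_slow():
    data = build_large_dataset()
    assert sum(data) > 0

# Fast pattern: cache the setup so only the first caller pays for it
# (a module- or session-scoped pytest fixture achieves the same sharing).
@lru_cache(maxsize=1)
def shared_dataset() -> tuple:
    return tuple(build_large_dataset())  # tuple: immutable, safe to share

def test_sum_fast():
    assert sum(shared_dataset()) > 0

def test_max_fast():
    assert shared_dataset()[-1] == 99_999 ** 2

test_sum_slow()
test_sum_fast()
test_max_fast()
```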

### Test Organization Analysis (TEST06)

| Check | Description |
| --- | --- |
| File Structure | Proper test file organization |
| Naming Conventions | Consistent naming patterns |
| Categorization | Unit/integration/e2e separation |
| Documentation | Test purpose and context |
| Shared Utilities | Common test helper usage |
| Configuration | Test config organization |

## Supported Tech Stacks

The audit auto-detects and adapts to:

**JavaScript/TypeScript:**

- Jest, Vitest, Mocha, Jasmine
- Testing Library (React, Vue, Angular)
- Cypress, Playwright (E2E)

**Python:**

- pytest, unittest, nose
- pytest-cov, coverage.py

**PHP:**

- PHPUnit, Pest
- Codeception

**.NET:**

- xUnit, NUnit, MSTest
- FluentAssertions

**Go:**

- `testing` package
- testify, gomock

**Java:**

- JUnit 4/5, TestNG
- Mockito, AssertJ

**Rust:**

- Built-in test framework
- `cargo test`

## Output

Reports are saved to `test-audit-reports/YYYY-MM-DD-HHmm-audit.md` and include:

- **Executive Summary** (quality score, coverage metrics)
- **Critical Gaps** (untested critical paths)
- **Quality Issues** (prioritized by impact)
- **Weak Tests** (tests needing strengthening)
- **Flaky Test Candidates** (unreliable tests)
- **Performance Issues** (slow tests)
- **Suggested Test Cases** (with code examples)
- **Recommendations** (prioritized actions)

## How It Works

1. **Tech Stack Detection** - Automatically detects test frameworks and patterns
2. **Coverage Analysis** - Examines existing coverage and identifies gaps
3. **Quality Assessment** - Evaluates test code quality and patterns
4. **Agent Execution** - Spawns specialized agents for each audit domain
5. **Finding Aggregation** - Combines and prioritizes all findings
6. **Report Generation** - Creates an actionable markdown report
7. **Test Generation** - Optionally generates suggested test cases

## Customization

To adapt the audit for your specific needs:

1. **Framework detection** - Modify Step 1 in `test-coverage-audit.md`
2. **Quality thresholds** - Adjust scoring in the agent files
3. **Report format** - Modify `testing/templates/test-audit-report.md`
4. **Agent behavior** - Edit the individual agents in `testing/agents/`

## Contributing

Contributions welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Submit a pull request

## License

MIT License. See the LICENSE file for details.

## Disclaimer

This tool performs static analysis and pattern matching on test code. It provides recommendations based on best practices but cannot guarantee test effectiveness. Always combine with actual test execution and coverage tools for production systems.
