From 6e4b1e72039fcc93bfa82697ef891f7f7c47155a Mon Sep 17 00:00:00 2001 From: Claude Date: Tue, 14 Apr 2026 18:50:57 +0200 Subject: [PATCH 01/12] chore: repo standardization - Align template .claude commands, hooks, rules, and skills - Replace several .jinja command stubs with static .md where appropriate - Add preflight, validate_dor, task template, and justfile/.gitignore updates Made-with: Cursor --- .gitignore | 24 +- assets/task_template.yaml | 97 +++ justfile | 18 + scripts/preflight.sh | 67 ++ scripts/validate_dor.py | 107 +++ template/.claude/.refactor-edit-count | 1 + .../{coverage.md.jinja => coverage.md} | 8 +- template/.claude/commands/dependency-check.md | 100 +++ .../{docs-check.md.jinja => docs-check.md} | 4 +- ...ate.md.jinja => guided-template-update.md} | 10 +- .../commands/{release.md.jinja => release.md} | 8 +- .../commands/{review.md.jinja => review.md} | 7 +- .../{standards.md.jinja => standards.md} | 4 +- template/.claude/commands/update-claude-md.md | 44 ++ template/.claude/commands/validate-release.md | 129 ++++ template/.claude/hooks/README.md | 10 +- .../hooks/post-bash-test-coverage-reminder.sh | 60 ++ .../hooks/post-write-test-structure.sh | 48 ++ .../hooks/pre-bash-branch-protection.sh | 46 ++ .../.claude/hooks/pre-delete-protection.sh | 51 ++ .../hooks/pre-write-src-require-test.sh | 68 +- .../hooks/pre-write-src-test-reminder.sh | 54 +- template/.claude/rules/bash/security.md | 62 +- template/.claude/rules/common/code-review.md | 56 +- template/.claude/rules/common/git-workflow.md | 41 +- template/.claude/rules/common/hooks.md | 7 + template/.claude/rules/common/security.md | 51 +- .../rules/copier/template-conventions.md | 183 +++++ template/.claude/rules/jinja/coding-style.md | 107 +++ template/.claude/rules/jinja/testing.md | 91 +++ .../.claude/rules/markdown/conventions.md | 60 +- template/.claude/rules/python/coding-style.md | 138 ++++ template/.claude/rules/python/hooks.md | 8 +- template/.claude/rules/python/patterns.md | 164 
+++++ template/.claude/rules/python/security.md | 55 +- template/.claude/rules/python/testing.md | 73 +- template/.claude/rules/yaml/conventions.md | 76 ++ template/.claude/settings.json | 41 ++ .../.claude/skills/claude_commands/SKILL.md | 293 ++++++++ .../claude_commands/claude-commands.skill | Bin 0 -> 15481 bytes .../references/command-patterns.md | 448 +++++++++++ .../references/frontmatter-reference.md | 219 ++++++ .../templates/command-template.md | 159 ++++ template/.claude/skills/claude_hooks/SKILL.md | 428 +++++++++++ .../assets/templates/hook-template.py | 274 +++++++ .../assets/templates/hook-template.sh | 173 +++++ .../assets/templates/settings-example.json | 211 ++++++ .../skills/claude_hooks/claude-hooks.skill | Bin 0 -> 19011 bytes .../skills/claude_hooks/references/events.md | 696 ++++++++++++++++++ .../.claude/skills/config-management/SKILL.md | 93 +++ .../references/complete-configs.md | 257 +++++++ template/.claude/skills/linting/SKILL.md | 70 ++ .../skills/linting/references/pre-commit.md | 221 ++++++ .../.claude/skills/linting/references/ruff.md | 203 +++++ template/.claude/skills/prepare_pr/SKILL.md | 10 +- .../prepare_pr/references/section-rules.md | 2 +- template/.claude/skills/pytest/SKILL.md | 23 +- .../skills/pytest/references/fixtures.md | 4 + .../pytest/references/test-organization.md | 81 +- .../skills/pytest/references/test-types.md | 10 +- .../.claude/skills/sdlc-workflow/SKILL.md | 294 ++++++++ .../sdlc-workflow/references/stage-banner.md | 37 + template/.claude/skills/security/SKILL.md | 43 ++ .../skills/security/references/bandit.md | 171 +++++ .../skills/security/references/semgrep.md | 226 ++++++ .../.claude/skills/tdd-test-planner/SKILL.md | 4 +- .../references/pytest-patterns.md | 18 +- .../skills/test-quality-reviewer/SKILL.md | 2 +- .../references/advanced-patterns.md | 15 +- .../.claude/skills/type-checking/SKILL.md | 57 ++ .../type-checking/references/basedpyright.md | 247 +++++++ 71 files changed, 6934 insertions(+), 233 
deletions(-) create mode 100644 assets/task_template.yaml create mode 100644 scripts/preflight.sh create mode 100644 scripts/validate_dor.py create mode 100644 template/.claude/.refactor-edit-count rename template/.claude/commands/{coverage.md.jinja => coverage.md} (85%) create mode 100644 template/.claude/commands/dependency-check.md rename template/.claude/commands/{docs-check.md.jinja => docs-check.md} (92%) rename template/.claude/commands/{guided-template-update.md.jinja => guided-template-update.md} (95%) rename template/.claude/commands/{release.md.jinja => release.md} (89%) rename template/.claude/commands/{review.md.jinja => review.md} (83%) rename template/.claude/commands/{standards.md.jinja => standards.md} (91%) create mode 100644 template/.claude/commands/update-claude-md.md create mode 100644 template/.claude/commands/validate-release.md create mode 100644 template/.claude/hooks/post-bash-test-coverage-reminder.sh create mode 100644 template/.claude/hooks/post-write-test-structure.sh create mode 100644 template/.claude/hooks/pre-bash-branch-protection.sh create mode 100644 template/.claude/hooks/pre-delete-protection.sh create mode 100644 template/.claude/rules/copier/template-conventions.md create mode 100644 template/.claude/rules/jinja/coding-style.md create mode 100644 template/.claude/rules/jinja/testing.md create mode 100644 template/.claude/rules/python/coding-style.md create mode 100644 template/.claude/rules/python/patterns.md create mode 100644 template/.claude/rules/yaml/conventions.md create mode 100644 template/.claude/skills/claude_commands/SKILL.md create mode 100644 template/.claude/skills/claude_commands/claude-commands.skill create mode 100644 template/.claude/skills/claude_commands/references/command-patterns.md create mode 100644 template/.claude/skills/claude_commands/references/frontmatter-reference.md create mode 100644 template/.claude/skills/claude_commands/templates/command-template.md create mode 100644 
template/.claude/skills/claude_hooks/SKILL.md create mode 100644 template/.claude/skills/claude_hooks/assets/templates/hook-template.py create mode 100644 template/.claude/skills/claude_hooks/assets/templates/hook-template.sh create mode 100644 template/.claude/skills/claude_hooks/assets/templates/settings-example.json create mode 100644 template/.claude/skills/claude_hooks/claude-hooks.skill create mode 100644 template/.claude/skills/claude_hooks/references/events.md create mode 100644 template/.claude/skills/config-management/SKILL.md create mode 100644 template/.claude/skills/config-management/references/complete-configs.md create mode 100644 template/.claude/skills/linting/SKILL.md create mode 100644 template/.claude/skills/linting/references/pre-commit.md create mode 100644 template/.claude/skills/linting/references/ruff.md create mode 100644 template/.claude/skills/sdlc-workflow/SKILL.md create mode 100644 template/.claude/skills/sdlc-workflow/references/stage-banner.md create mode 100644 template/.claude/skills/security/SKILL.md create mode 100644 template/.claude/skills/security/references/bandit.md create mode 100644 template/.claude/skills/security/references/semgrep.md create mode 100644 template/.claude/skills/type-checking/SKILL.md create mode 100644 template/.claude/skills/type-checking/references/basedpyright.md diff --git a/.gitignore b/.gitignore index 8894b78..1d58b4c 100644 --- a/.gitignore +++ b/.gitignore @@ -1,9 +1,3 @@ -# -------------------------------------------------------------------------- -# Local developer ignores (not enforced in CI) -# -------------------------------------------------------------------------- -runner.sh -full_runner.sh - # ========================================================================== # Python Core # ========================================================================== @@ -237,5 +231,21 @@ data/ # Cursor Related files .cursor/ -# Mac specific files +# Stray root file (accidental) +/1 1 + +# Task 
artifacts (generated outputs) +tasks/ +task_channel/ + + +# Benchmark data +.benchmarks/ + +# -------------------------------------------------------------------------- +# Local developer ignores (not enforced in CI) +# -------------------------------------------------------------------------- +runner.sh +full_runner.sh + diff --git a/assets/task_template.yaml b/assets/task_template.yaml new file mode 100644 index 0000000..f2e1117 --- /dev/null +++ b/assets/task_template.yaml @@ -0,0 +1,97 @@ +# ───────────────────────────────────────────── +# TASK DEFINITION TEMPLATE (assets/task_template.yaml) +# Copy to tasks/TASK_<NNN>.yaml and fill every field. +# Fields marked ★ are required; all others are optional. +# ───────────────────────────────────────────── + +# ── Identity ───────────────────────────────── +task_id: "TASK_000" # ★ Unique sequential ID, e.g. TASK_042 +title: "" # ★ One-line description (imperative, ≤72 chars) +type: feature # ★ feature | fix | refactor | chore | perf | docs +status: draft # draft | ready | in-progress | review | done | blocked + +# ── Ownership & scheduling ─────────────────── +owner: "" # GitHub username or team slug +created_at: "" # ISO 8601, e.g. 2026-04-14 +target_branch: "" # Branch to create, e.g. feat/TASK-042-short-slug +base_branch: main # Branch to merge into + +# ── Sizing ─────────────────────────────────── +complexity: medium # trivial | small | medium | large | epic +estimated_hours: null # Rough estimate; omit if unknown + +# ── Definition of Ready checklist ──────────── +# All must be true before the SDLC pipeline starts. +definition_of_ready: + requirements_clear: false # Requirement is unambiguous + acceptance_criteria_written: false + dependencies_identified: false + no_blockers: false + branch_available: false + +# ── Requirement ────────────────────────────── +requirement: | + # ★ Full requirement statement in Given / When / Then form. + # Given <precondition>, + # When <action>, + # Then <expected result>.
+ +# ── Acceptance criteria ─────────────────────── +# ★ List every testable criterion. +# Each item becomes at least one test in RED phase. +acceptance_criteria: + - id: AC-1 + given: "" + when: "" + then: "" + test_marker: unit # unit | integration | e2e | performance + # - id: AC-2 + # given: "" + # when: "" + # then: "" + # test_marker: unit + +# ── Constraints ────────────────────────────── +constraints: + - "" # e.g. "Do NOT use eval() or built-in shortcuts" + +# ── Files likely affected ──────────────────── +files_affected: + source: + - "" # e.g. src/my_library/parser.py + tests: + - "" # e.g. tests/unit/test_parser.py + +# ── Dependencies ───────────────────────────── +depends_on: [] # List of TASK_IDs that must be done first +blocks: [] # List of TASK_IDs that this task unblocks + +# ── Risk & rollback ────────────────────────── +risk_level: low # low | medium | high | critical +breaking_change: false # Does this change any public API / contract? +rollback_plan: | + # Describe how to revert if something goes wrong post-merge. + # e.g. "Revert commit <sha>; no DB migration to undo." +migration_required: false # Is a DB or data migration needed? + +# ── Testing strategy ───────────────────────── +testing: + target_coverage: 85 # Minimum % for new code + property_based: false # Use Hypothesis for property-based tests? + mutation_testing: false # Run mutmut after GREEN? + performance_baseline: false # Capture benchmark before refactor? + +# ── Implementation guide ───────────────────── +# Optional step-by-step notes for the implementer. +implementation_guide: | + # Step 1 — ... + # Step 2 — ... + +# ── Notes ──────────────────────────────────── +notes: | + # Any extra context: related issues, design decisions, links.
+ +# ── Changelog entry (auto-filled by pipeline) ─ +changelog: + category: "" # Added | Changed | Fixed | Removed | Security + entry: "" # One-line changelog line, filled at commit stage diff --git a/justfile b/justfile index 08b3e33..58ec1ee 100644 --- a/justfile +++ b/justfile @@ -271,3 +271,21 @@ sync-check: # Print a conventional PR title + PR body (template + git log) for pr-policy compliance pr-draft: @uv run python scripts/pr_commit_policy.py draft + +# ------------------------------------------------------------------------- +# SDLC: Task management +# ------------------------------------------------------------------------- + +# Validate a task YAML against Definition of Ready +dor-check TASK_ID: + python3 scripts/validate_dor.py tasks/{{TASK_ID}}.yaml + +# List all tasks and their statuses +tasks: + @echo "Task ID Status Title" + @echo "---------- ---------- -----" + @python3 -c "import yaml; from pathlib import Path; [print(f\"{d['task_id']:<14}{d['status']:<14}{d['title']}\") for p in sorted(Path('tasks').glob('TASK_*.yaml')) if (d := yaml.safe_load(p.read_text()))]" + +# Run pre-flight checks before starting SDLC pipeline +preflight TASK_ID: + bash scripts/preflight.sh {{TASK_ID}} diff --git a/scripts/preflight.sh b/scripts/preflight.sh new file mode 100644 index 0000000..bb0a285 --- /dev/null +++ b/scripts/preflight.sh @@ -0,0 +1,67 @@ +#!/usr/bin/env bash +set -euo pipefail + +TASK_ID="${1:?Usage: preflight.sh TASK_ID}" +TASK_FILE="tasks/${TASK_ID}.yaml" + +echo "=== Pre-flight checks for ${TASK_ID} ===" + +# 1. Task file exists +if [[ ! -f "$TASK_FILE" ]]; then + echo "FAIL: $TASK_FILE not found" + exit 1 +fi +echo " [ok] Task file exists" + +# 2. DoR validation +python3 scripts/validate_dor.py "$TASK_FILE" > /dev/null +echo " [ok] Definition of Ready met" + +# 3. 
No uncommitted changes +if [[ -n "$(git status --porcelain)" ]]; then + echo "FAIL: Uncommitted changes in working tree" + echo " Run: git stash or git commit" + exit 1 +fi +echo " [ok] Working tree clean" + +# 4. Base branch is up to date +BASE_BRANCH=$(python3 -c "import yaml; print(yaml.safe_load(open('$TASK_FILE'))['base_branch'])") +git fetch origin "$BASE_BRANCH" --quiet 2>/dev/null || true +LOCAL=$(git rev-parse "$BASE_BRANCH" 2>/dev/null || echo "none") +REMOTE=$(git rev-parse "origin/$BASE_BRANCH" 2>/dev/null || echo "none") +if [[ "$LOCAL" != "$REMOTE" ]]; then + echo "WARN: $BASE_BRANCH is behind origin/$BASE_BRANCH" + echo " Run: git checkout $BASE_BRANCH && git pull" +fi +echo " [ok] Base branch checked" + +# 5. Target branch does not already exist (prevent stale work) +TARGET_BRANCH=$(python3 -c "import yaml; print(yaml.safe_load(open('$TASK_FILE'))['target_branch'])") +if git show-ref --verify --quiet "refs/heads/$TARGET_BRANCH" 2>/dev/null; then + echo "WARN: Branch $TARGET_BRANCH already exists locally" +fi +echo " [ok] Branch check done" + +# 6. Python version check +REQUIRED_PYTHON=$(python3 -c " +import tomllib +with open('pyproject.toml', 'rb') as f: + d = tomllib.load(f) +print(d.get('project', {}).get('requires-python', '')) +" 2>/dev/null || echo "") +ACTUAL_PYTHON=$(python3 --version | awk '{print $2}') +echo " [ok] Python $ACTUAL_PYTHON (requires: $REQUIRED_PYTHON)" + +# 7. Baseline CI passes +echo " Running baseline CI (just ci)..." 
+if just ci > /dev/null 2>&1; then + echo " [ok] Baseline CI passes" +else + echo "FAIL: Baseline CI does not pass on unmodified code" + echo " Fix baseline issues before starting new work" + exit 1 +fi + +echo "" +echo "=== All pre-flight checks passed ===" diff --git a/scripts/validate_dor.py b/scripts/validate_dor.py new file mode 100644 index 0000000..3e54f02 --- /dev/null +++ b/scripts/validate_dor.py @@ -0,0 +1,107 @@ +#!/usr/bin/env python3 +"""Validate a task YAML file against the Definition of Ready schema.""" + +import sys +from pathlib import Path + +import yaml + +VALID_TYPES = {"feature", "fix", "refactor", "chore", "perf", "docs"} +VALID_STATUSES = {"draft", "ready", "in-progress", "review", "done", "blocked"} + + +def _check_required_fields(data: dict, errors: list[str]) -> None: + """Check required string fields.""" + for field in ("task_id", "title", "requirement"): + value = data.get(field, "") + if not value or not str(value).strip(): + errors.append(f"{field}: must be non-empty") + + +def _check_title_length(data: dict, errors: list[str]) -> None: + """Check title does not exceed 72 characters.""" + title = data.get("title", "") + if len(str(title)) > 72: + errors.append(f"title: exceeds 72 characters ({len(title)})") + + +def _check_type_enum(data: dict, errors: list[str]) -> None: + """Check type is valid enum value.""" + task_type = data.get("type", "") + if task_type not in VALID_TYPES: + errors.append(f"type: '{task_type}' not in {VALID_TYPES}") + + +def _check_status_enum(data: dict, errors: list[str]) -> None: + """Check status is valid enum value.""" + status = data.get("status", "") + if status not in VALID_STATUSES: + errors.append(f"status: '{status}' not in {VALID_STATUSES}") + + +def _check_acceptance_criteria(data: dict, errors: list[str]) -> None: + """Check acceptance criteria have required fields.""" + ac = data.get("acceptance_criteria", []) + if not ac: + errors.append("acceptance_criteria: must have at least 1 item") + return 
+ + for i, criterion in enumerate(ac): + for key in ("id", "given", "when", "then"): + if not criterion.get(key): + msg = f"acceptance_criteria[{i}].{key}: must be non-empty" + errors.append(msg) + + +def _check_definition_of_ready(data: dict, errors: list[str]) -> None: + """Check all definition of ready flags are true.""" + dor = data.get("definition_of_ready", {}) + for flag, value in dor.items(): + if value is not True: + msg = f"definition_of_ready.{flag}: must be true (got {value})" + errors.append(msg) + + +def _check_dependencies(data: dict, errors: list[str]) -> None: + """Flag unresolved dependencies as blocking until they are done.""" + deps = data.get("depends_on", []) + if deps: + msg = f"depends_on: has {len(deps)} dependencies -- verify they are done" + errors.append(msg) + + +def validate(path: str) -> list[str]: + """Validate task YAML and return list of error messages.""" + errors: list[str] = [] + data = yaml.safe_load(Path(path).read_text()) + + _check_required_fields(data, errors) + _check_title_length(data, errors) + _check_type_enum(data, errors) + _check_status_enum(data, errors) + _check_acceptance_criteria(data, errors) + _check_definition_of_ready(data, errors) + _check_dependencies(data, errors) + + return errors + + +def main() -> None: + """Run validation on the given task YAML file.""" + if len(sys.argv) != 2: + print(f"Usage: {sys.argv[0]} <task_file.yaml>") + sys.exit(2) + + errors = validate(sys.argv[1]) + if errors: + print("FAIL -- Definition of Ready not met:") + for err in errors: + print(f" - {err}") + sys.exit(1) + else: + print("PASS -- Definition of Ready met") + sys.exit(0) + + +if __name__ == "__main__": + main() diff --git a/template/.claude/.refactor-edit-count b/template/.claude/.refactor-edit-count new file mode 100644 index 0000000..8351c19 --- /dev/null +++ b/template/.claude/.refactor-edit-count @@ -0,0 +1 @@ +14 diff --git a/template/.claude/commands/coverage.md.jinja b/template/.claude/commands/coverage.md similarity index 85% rename from
template/.claude/commands/coverage.md.jinja rename to template/.claude/commands/coverage.md index c28e535..c932207 100644 --- a/template/.claude/commands/coverage.md.jinja +++ b/template/.claude/commands/coverage.md @@ -4,7 +4,7 @@ Analyse test coverage and write tests for any gaps found. 1. **Run coverage** — execute `just coverage` to get the full report with missing lines: ``` - uv run --active pytest tests/ --cov={{ package_name }} --cov-report=term-missing + uv run --active pytest tests/ --cov=my_library --cov-report=term-missing ``` 2. **Parse results** — from the output identify: @@ -12,7 +12,7 @@ Analyse test coverage and write tests for any gaps found. - Every module below 85 % (list module name + actual percentage + missing line ranges) 3. **Gap analysis** — for each under-covered module: - - Open the source file at `src/{{ package_name }}/` + - Open the source file at `src/my_library/` - Read the lines flagged as uncovered - Identify what scenarios, branches, or edge cases those lines represent @@ -28,14 +28,14 @@ Analyse test coverage and write tests for any gaps found. ## Report format ``` -## Coverage Report — {{ package_name }} +## Coverage Report — my_library Overall: X% (target: ≥ 85%) ### Modules below threshold | Module | Coverage | Missing lines | |---------------|----------|----------------------| -| {{ package_name }}.core | 72% | 45-52, 78 | +| my_library.core | 72% | 45-52, 78 | ### Tests written - tests/.../test_core.py: added test_edge_case_x, test_branch_y diff --git a/template/.claude/commands/dependency-check.md b/template/.claude/commands/dependency-check.md new file mode 100644 index 0000000..551660c --- /dev/null +++ b/template/.claude/commands/dependency-check.md @@ -0,0 +1,100 @@ +Validate that `uv.lock` is in sync with `pyproject.toml` and committed. + +This command ensures dependencies are reproducible and locked, catching silent drift that +can cause issues in CI or when collaborators pull changes. 
+ +## Prerequisites + +None — runs on the current state of the repo. + +## Steps + +1. **Check if uv.lock exists** + ```bash + test -f uv.lock && echo "uv.lock exists" || echo "ERROR: uv.lock not found" + ``` + If missing, the repo is not using uv's lock mechanism. Warn the user. + +2. **Verify uv.lock is committed to git** + ```bash + git ls-files uv.lock | grep -q uv.lock && echo "Committed" || echo "NOT COMMITTED" + ``` + If not committed, warn: "uv.lock must be committed to ensure reproducible installs." + +3. **Check for uncommitted changes to uv.lock** + ```bash + git diff uv.lock | wc -l + ``` + If output > 0, warn: "uv.lock has uncommitted changes. Run `git add uv.lock && git commit` to sync." + +4. **Verify all extras are locked** + + Read `pyproject.toml` and extract all extra names under `[project.optional-dependencies]`: + ``` + dev, test, docs (if include_docs enabled) + ``` + + Then check that `uv.lock` contains entries for all extras. Verify by scanning lock file for: + ``` + [package.optional-dependencies] + dev = [...] + test = [...] + docs = [...] (if applicable) + ``` + +5. **Verify lock file age** + ```bash + LOCK_DATE=$(stat -f%Sm -t%Y-%m-%d uv.lock) # macOS + # OR (Linux) + LOCK_DATE=$(date -r uv.lock +%Y-%m-%d) + ``` + If lock is older than 30 days, suggest: "Consider running `just update` to refresh dependencies." + +6. **Dry-run dependency sync** (optional, helps catch issues early) + ```bash + uv sync --frozen --extra dev --dry-run + ``` + If this fails, report the error. Common issues: + - `pyproject.toml` has unpinned versions (should be allowed, but check for overconstraint) + - Missing dependencies specified in extras + +## Output Format + +``` +## Dependency Check + +✓ uv.lock exists and is committed +✓ No uncommitted changes to uv.lock +✓ All extras locked: dev, test, docs +✓ Lock file is recent (2026-04-02) +✓ Dry-run sync successful + +All dependency checks passed.
+``` + +OR (with issues): + +``` +## Dependency Check + +✓ uv.lock exists and is committed +✗ UNCOMMITTED CHANGES to uv.lock + → Run: git add uv.lock && git commit -m "chore: refresh dependencies" +✓ All extras locked: dev, test +✗ Missing extra: docs (declared in pyproject.toml but not in lock) + → Run: uv lock --upgrade && uv sync --frozen --extra docs +✗ Lock file is 42 days old + → Consider refreshing: just update + +Action items (before pushing): +1. Commit uv.lock changes +2. Lock the missing extras +3. Refresh dependencies to latest versions +``` + +## Tips + +- Run this before every commit to catch dependency drift +- If `uv.lock` is gitignored (unusual but possible), you're not benefiting from reproducible installs +- Lock file age is advisory only; very old locks are fine if dependencies are stable +- Always commit `uv.lock` to the repository — this is a uv best practice diff --git a/template/.claude/commands/docs-check.md.jinja b/template/.claude/commands/docs-check.md similarity index 92% rename from template/.claude/commands/docs-check.md.jinja rename to template/.claude/commands/docs-check.md index 95e1bd3..8ef1f06 100644 --- a/template/.claude/commands/docs-check.md.jinja +++ b/template/.claude/commands/docs-check.md @@ -5,7 +5,7 @@ Audit and repair documentation across all Python source files. 1. **Run ruff docstring check** — `uv run --active ruff check --select D src/ tests/ scripts/` Report every violation with file, line, and rule code. -2. **Deep symbol scan** — for every `.py` file under `src/{{ package_name }}/`, `tests/`, and `scripts/`: +2. **Deep symbol scan** — for every `.py` file under `src/my_library/`, `tests/`, and `scripts/`: - Read the file - Identify all public symbols: module-level functions, classes, methods not prefixed with `_` - For each symbol check: @@ -30,7 +30,7 @@ Audit and repair documentation across all Python source files. 
## Output format ``` -## Documentation Audit — {{ package_name }} +## Documentation Audit — my_library ### Ruff violations: N found [list violations] diff --git a/template/.claude/commands/guided-template-update.md.jinja b/template/.claude/commands/guided-template-update.md similarity index 95% rename from template/.claude/commands/guided-template-update.md.jinja rename to template/.claude/commands/guided-template-update.md index b0264a6..f9a0292 100644 --- a/template/.claude/commands/guided-template-update.md.jinja +++ b/template/.claude/commands/guided-template-update.md @@ -108,7 +108,7 @@ changed, asking for confirmation at critical steps, and verifying the update wit ### Success (no conflicts, CI passes) ``` -## Template Update — {{ project_name }} +## Template Update — My Library ✓ Update available: vX.Y.Z → vA.B.C @@ -133,13 +133,13 @@ Your project is now synced with the latest template! ### Conflicts Detected ``` -## Template Update — {{ project_name }} +## Template Update — My Library ✓ Update available: vX.Y.Z → vA.B.C ✓ Update applied ✗ Conflicts found in 2 files: - - src/{{ package_name }}/core.py.rej + - src/my_library/core.py.rej - pyproject.toml.rej Action required: @@ -153,14 +153,14 @@ Action required: ### CI Failures After Update ``` -## Template Update — {{ project_name }} +## Template Update — My Library ✓ Update applied ✓ No conflicts ✗ CI failed at: ruff lint Error: - src/{{ package_name }}/core.py:42: D100 Missing docstring in public module + src/my_library/core.py:42: D100 Missing docstring in public module This is likely due to new standards from the updated template (e.g., new ruff D rules). 
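The conflict report in this command lists `.rej` files left behind after the update. A hypothetical helper for enumerating them (a minimal sketch, not what copier itself provides) could look like:

```python
import tempfile
from pathlib import Path


def find_reject_files(root: str) -> list[str]:
    """Collect *.rej conflict files left behind by a template update."""
    return sorted(str(p) for p in Path(root).rglob("*.rej"))


# Demonstrate against a throwaway directory with two fake conflicts
with tempfile.TemporaryDirectory() as tmp:
    (Path(tmp) / "src").mkdir()
    (Path(tmp) / "src" / "core.py.rej").write_text("<<< conflict >>>")
    (Path(tmp) / "pyproject.toml.rej").write_text("<<< conflict >>>")
    print(sorted(Path(p).name for p in find_reject_files(tmp)))  # ['core.py.rej', 'pyproject.toml.rej']
```

An empty result would correspond to the "No conflicts" path of the report above.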
diff --git a/template/.claude/commands/release.md.jinja b/template/.claude/commands/release.md similarity index 89% rename from template/.claude/commands/release.md.jinja rename to template/.claude/commands/release.md index 5e1798f..fab2244 100644 --- a/template/.claude/commands/release.md.jinja +++ b/template/.claude/commands/release.md @@ -1,11 +1,11 @@ Orchestrate a new release: verify CI, bump version, tag, and push. -This command automates the release workflow for {{ project_name }}. +This command automates the release workflow for My Library. ## Prerequisites - All changes must be committed (no dirty working tree) -- You must have push access to origin (https://github.com/{{ github_username }}/{{ project_slug }}) +- You must have push access to origin (https://github.com/yourusername/my-library) - The main/master branch must be up to date with origin ## Steps @@ -64,7 +64,7 @@ This command automates the release workflow for {{ project_name }}. ``` 9. **Create a GitHub Release** (if not automated) - - Go to: https://github.com/{{ github_username }}/{{ project_slug }}/releases + - Go to: https://github.com/yourusername/my-library/releases - Click "Draft a new release" - Select the tag you just pushed - Add release notes (summary of changes, new features, breaking changes, etc.) 
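The bump step in the release flow above can be illustrated with a toy semantic-version helper. This is an assumption-laden sketch for intuition only; it is not the tooling the template actually uses for version bumps:

```python
def bump(version: str, part: str) -> str:
    """Return a MAJOR.MINOR.PATCH version with the given part incremented."""
    major, minor, patch = (int(x) for x in version.split("."))
    if part == "major":
        return f"{major + 1}.0.0"
    if part == "minor":
        return f"{major}.{minor + 1}.0"
    return f"{major}.{minor}.{patch + 1}"  # default: patch bump


print(bump("1.4.2", "minor"))  # 1.5.0
```

The chosen part would normally follow from the nature of the changes (breaking change → major, new feature → minor, fix → patch).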
@@ -76,7 +76,7 @@ Report to the user: ``` ✓ Release vX.Y.Z created successfully -Release page: https://github.com/{{ github_username }}/{{ project_slug }}/releases/tag/vX.Y.Z +Release page: https://github.com/yourusername/my-library/releases/tag/vX.Y.Z Next steps: - Check that the tag pushed successfully diff --git a/template/.claude/commands/review.md.jinja b/template/.claude/commands/review.md similarity index 83% rename from template/.claude/commands/review.md.jinja rename to template/.claude/commands/review.md index 64a3611..5e4010d 100644 --- a/template/.claude/commands/review.md.jinja +++ b/template/.claude/commands/review.md @@ -8,7 +8,12 @@ Perform a thorough pre-merge code review of all recently modified Python files. - `just type` — basedpyright type check; report all errors with file + line - `just docs-check` — ruff `--select D` docstring check -2. **Manual symbol scan** — for every `.py` file under `src/{{ package_name }}/` that was + For detailed guidance on each tool, load the relevant skill: + - Linting: `.claude/skills/linting/SKILL.md` + - Type checking: `.claude/skills/type-checking/SKILL.md` + - Docstrings: `.claude/skills/python-docstrings/SKILL.md` + +2. **Manual symbol scan** — for every `.py` file under `src/my_library/` that was added or modified (use `git diff --name-only`): - Read the file - List every public function, class, and method (names not prefixed with `_`) diff --git a/template/.claude/commands/standards.md.jinja b/template/.claude/commands/standards.md similarity index 91% rename from template/.claude/commands/standards.md.jinja rename to template/.claude/commands/standards.md index 699259a..ab278d8 100644 --- a/template/.claude/commands/standards.md.jinja +++ b/template/.claude/commands/standards.md @@ -9,7 +9,7 @@ This is the "am I ready to merge?" command. It runs all checks and aggregates re - basedpyright: `standard` mode type checking 2. 
**Docstring coverage** — `just docs-check` - - All public symbols in `src/{{ package_name }}/` have Google-style docstrings + - All public symbols in `src/my_library/` have Google-style docstrings - All modules have module-level docstrings 3. **Test coverage** — `just coverage` @@ -26,7 +26,7 @@ This is the "am I ready to merge?" command. It runs all checks and aggregates re ## Output format ``` -## Standards Report — {{ project_name }} — <date> +## Standards Report — My Library — <date> ### ✓/✗ Static Analysis (ruff + basedpyright) [errors or "All clean"] diff --git a/template/.claude/commands/update-claude-md.md b/template/.claude/commands/update-claude-md.md new file mode 100644 index 0000000..d924fb9 --- /dev/null +++ b/template/.claude/commands/update-claude-md.md @@ -0,0 +1,44 @@ +Detect and fix drift between CLAUDE.md and the actual project configuration files. + +CLAUDE.md is the project's living standards contract. It must stay in sync with +`pyproject.toml`, `justfile`, and `copier.yml`. This command performs that sync. + +## Steps + +1. **Read the source-of-truth files**: + - `pyproject.toml` — extract: + - Active ruff rules (`[tool.ruff.lint]` select + ignore) + - Pydocstyle convention + - BasedPyright mode and key settings + - pytest addopts and markers + - `justfile` — extract all public recipe names (non-`_` prefixed) and their + one-line description comment (the `# ...` line directly above each recipe) + - `copier.yml` — extract all prompt variable names and `_skip_if_exists` paths + +2. **Read CLAUDE.md** and identify every section that references configuration: + - "Code style" section — ruff rules, line length, pydocstyle convention + - "Common development commands" table — just recipes + - Any mention of specific tool versions or settings + +3. **Compare and identify drift**: + - Are all ruff rules in CLAUDE.md's code style section accurate and complete? + - Does the commands table include all current `just` recipes?
+ - Are any new recipes added to the justfile missing from the table? + - Are any removed recipes still listed in CLAUDE.md? + - Does the "Recent improvements" section need a new entry for today's changes? + +4. **Update CLAUDE.md** to match reality: + - Correct any stale ruff rule lists + - Add any missing `just` recipes to the commands table (with description) + - Remove any obsolete entries + - Add a new bullet to "Recent improvements" summarising what changed today + (use today's date from `date +%Y-%m-%d`) + +5. **Report** what was changed (or "CLAUDE.md is already in sync — no changes needed"). + +## Important + +- Do not rewrite the whole file; make surgical edits to keep existing prose intact. +- Preserve the document structure and heading hierarchy. +- Do not fabricate descriptions — if a recipe's purpose is unclear, read the justfile + recipe body to infer it. diff --git a/template/.claude/commands/validate-release.md b/template/.claude/commands/validate-release.md new file mode 100644 index 0000000..699fb60 --- /dev/null +++ b/template/.claude/commands/validate-release.md @@ -0,0 +1,129 @@ +Simulate a release by rendering and testing the template with all feature combinations. + +This command validates that a release won't break existing or new users by testing the template +against all possible feature combinations before you tag and ship. + +## Prerequisites + +- Working tree must be clean (`git status`) +- All changes must be committed +- `just ci` must pass + +## Steps + +1. **Verify prerequisites** + ```bash + git status # must show "nothing to commit" + just ci # must pass completely + ``` + If CI fails, fix issues and re-run before proceeding. + +2. **Extract template version** from `pyproject.toml` + ```bash + VERSION=$(grep "^version = " pyproject.toml | sed 's/.*"\([^"]*\)".*/\1/') + echo "Testing release for version: $VERSION" + ``` + +3. 
**Generate test projects** — Create 4 temporary projects covering all feature combinations: + + **a) Minimal config** (all optional features disabled) + ```bash + copier copy . /tmp/test-minimal --trust --defaults \ + --data include_docs=false \ + --data include_numpy=false \ + --data include_pandas_support=false + cd /tmp/test-minimal && just ci && cd - + ``` + + **b) Full-featured config** (all optional features enabled) + ```bash + copier copy . /tmp/test-full --trust --defaults \ + --data include_docs=true \ + --data include_numpy=true \ + --data include_pandas_support=true + cd /tmp/test-full && just ci && cd - + ``` + + **c) Docs only** + ```bash + copier copy . /tmp/test-docs --trust --defaults \ + --data include_docs=true \ + --data include_numpy=false \ + --data include_pandas_support=false + cd /tmp/test-docs && just ci && cd - + ``` + + **d) Data science stack** (numpy + pandas, no docs) + ```bash + copier copy . /tmp/test-datascience --trust --defaults \ + --data include_docs=false \ + --data include_numpy=true \ + --data include_pandas_support=true + cd /tmp/test-datascience && just ci && cd - + ``` + +4. **Capture results** — for each test project, record: + - Config used (feature combination) + - Exit code of `just ci` + - Any error messages (if failed) + +5. **Clean up** temporary directories + ```bash + rm -rf /tmp/test-minimal /tmp/test-full /tmp/test-docs /tmp/test-datascience + ``` + +6. **Report results** to the user + +## Success Criteria + +All four feature combinations must pass `just ci` completely. If any combination fails: +- ✗ **Do NOT release** — fix the underlying issue first +- Report which combination failed and why +- Suggest fixes (e.g., missing dependency, syntax error in template) + +## Output Format + +``` +## Release Validation — v<VERSION> + +Testing template against all feature combinations...
+ +| Config | Status | Time | Error | +|--------|--------|------|-------| +| Minimal (no features) | ✓ PASS | 45s | — | +| Full featured | ✓ PASS | 52s | — | +| Docs only | ✓ PASS | 48s | — | +| Data science stack | ✓ PASS | 50s | — | + +✓ All feature combinations passed. +Ready to release v<VERSION> with confidence. + +Next step: Run `/release` to tag and push. +``` + +OR (if a test fails): + +``` +## Release Validation — v<VERSION> + +Testing template against all feature combinations... + +| Config | Status | Time | Error | +|--------|--------|------|-------| +| Minimal | ✓ PASS | 45s | — | +| Full featured | ✗ FAIL | 42s | pytest: missing test files | +| Docs only | ✓ PASS | 48s | — | +| Data science stack | ✓ PASS | 50s | — | + +✗ One configuration failed. Fix the issue and re-run validation. + +**Issue:** Full-featured config (`include_docs=true, include_numpy=true, include_pandas_support=true`) fails at pytest stage. +**Likely cause:** Test template not rendering correctly when all features enabled. +**Suggested fix:** Review `template/tests/unit/test_core.py.jinja` conditional logic.
+``` + +## Tips + +- Keep temp directories if a test fails (for debugging) +- If uv.lock generation is slow, consider passing `--skip-tasks` to copier (after validation passes) +- This is a pre-release gate — run it before `/release` every time diff --git a/template/.claude/hooks/README.md b/template/.claude/hooks/README.md index b840fd9..8a35ee2 100644 --- a/template/.claude/hooks/README.md +++ b/template/.claude/hooks/README.md @@ -210,15 +210,19 @@ exit 0 |---|---|---|---| | `post-edit-python.sh` | PostToolUse | Edit\|Write | ruff + basedpyright after `.py` edits | | `post-edit-markdown.sh` | PostToolUse | Edit | Warn if `.md` edited outside `docs/` | +| `post-edit-refactor-test-guard.sh` | PostToolUse | Edit\|Write | Remind to run tests after several `src/` edits | +| `post-bash-test-coverage-reminder.sh` | PostToolUse | Bash | Surface modules below 85% after test runs | +| `post-write-test-structure.sh` | PostToolUse | Write | Check test file structure and markers | | `pre-bash-block-no-verify.sh` | PreToolUse | Bash | Block `git --no-verify` | | `pre-bash-git-push-reminder.sh` | PreToolUse | Bash | Warn to review before push | | `pre-bash-commit-quality.sh` | PreToolUse | Bash | Secret/debug scan before commit | +| `pre-bash-coverage-gate.sh` | PreToolUse | Bash | Warn before `git commit` if coverage below threshold | +| `pre-bash-branch-protection.sh` | PreToolUse | Bash | Block `git push` to main/master | +| `pre-delete-protection.sh` | PreToolUse | Bash | Block `rm` of critical files | | `pre-config-protection.sh` | PreToolUse | Write\|Edit\|MultiEdit | Block weakening ruff/basedpyright config | | `pre-protect-uv-lock.sh` | PreToolUse | Write\|Edit | Block direct edits to `uv.lock` | -| `pre-write-src-require-test.sh` | PreToolUse | Write\|Edit | Block if `tests/<package>/test_<module>.py` missing for top-level `src/<package>/<module>.py` (strict TDD) | -| `pre-bash-coverage-gate.sh` | PreToolUse | Bash | Warn before `git commit` if coverage below threshold | +|
`pre-write-src-require-test.sh` | PreToolUse | Write\|Edit | Block if test file missing in `tests/unit/`, `tests/integration/`, or `tests/e2e/` for `src/<package>/<module>.py` (strict TDD) | | `pre-write-src-test-reminder.sh` | (optional) | Write\|Edit | Non-blocking alternative to `pre-write-src-require-test.sh` — **do not register both** | -| `post-edit-refactor-test-guard.sh` | PostToolUse | Edit\|Write | Remind to run tests after several `src/` edits | ## Adding a new hook diff --git a/template/.claude/hooks/post-bash-test-coverage-reminder.sh b/template/.claude/hooks/post-bash-test-coverage-reminder.sh new file mode 100644 index 0000000..36d561b --- /dev/null +++ b/template/.claude/hooks/post-bash-test-coverage-reminder.sh @@ -0,0 +1,60 @@ +#!/usr/bin/env bash +set -uo pipefail + +# PostToolUse hook for Bash: after pytest/just test runs, parse coverage output +# and surface modules below 85%. + +INPUT=$(cat) + +# Extract the command's stdout from the tool result (coverage output lives here) +COMMAND=$(echo "$INPUT" | python3 -c " +import json, sys +data = json.loads(sys.stdin.read()) +print(data.get('tool_result', {}).get('stdout', '') if 'tool_result' in data else '') +" 2>/dev/null) || { echo "$INPUT"; exit 0; } + +TOOL_INPUT=$(echo "$INPUT" | python3 -c " +import json, sys +data = json.loads(sys.stdin.read()) +print(data.get('tool_input', {}).get('command', '')) +" 2>/dev/null) || { echo "$INPUT"; exit 0; } + +# Check if the command was a test/coverage command +case "$TOOL_INPUT" in + *pytest*|*"just test"*|*"just coverage"*|*"just ci"*) ;; + *) echo "$INPUT"; exit 0 ;; +esac + +# Look for coverage output in the result +if echo "$COMMAND" | grep -q "TOTAL"; then + # Parse modules below 85% + LOW_COVERAGE=$(echo "$COMMAND" | python3 -c " +import sys + +lines = sys.stdin.read().strip().split('\n') +low = [] +for line in lines: + parts = line.split() + if len(parts) >= 4 and parts[0].startswith('src/'): + try: + pct = int(parts[-1].rstrip('%')) + if pct < 85: + low.append(f'{parts[0]}: {pct}%') + except
(ValueError, IndexError): + pass +if low: + print('\n'.join(low)) +" 2>/dev/null) + + if [[ -n "$LOW_COVERAGE" ]]; then + echo "┌─ Coverage reminder" + echo "│ Modules below 85% threshold:" + echo "$LOW_COVERAGE" | while IFS= read -r line; do + echo "│ ⚠ $line" + done + echo "└─ Consider writing tests to close gaps before committing" + fi +fi + +echo "$INPUT" +exit 0 diff --git a/template/.claude/hooks/post-write-test-structure.sh b/template/.claude/hooks/post-write-test-structure.sh new file mode 100644 index 0000000..bbfd0bb --- /dev/null +++ b/template/.claude/hooks/post-write-test-structure.sh @@ -0,0 +1,48 @@ +#!/usr/bin/env bash +set -uo pipefail + +# PostToolUse hook for Write: after creating test_*.py files, check for proper +# test structure: test_ functions, no unittest.TestCase, proper marker usage. + +INPUT=$(cat) + +FILE_PATH=$(echo "$INPUT" | python3 -c " +import json, sys +data = json.loads(sys.stdin.read()) +print(data.get('tool_input', {}).get('file_path', '')) +" 2>/dev/null) || { echo "$INPUT"; exit 0; } + +# Only check test files +case "$FILE_PATH" in + */test_*.py|*tests/*.py) ;; + *) echo "$INPUT"; exit 0 ;; +esac + +# Verify the file exists +[[ -f "$FILE_PATH" ]] || { echo "$INPUT"; exit 0; } + +WARNINGS="" + +# Check for test_ functions +if ! grep -q "^def test_\|^ def test_\|^async def test_" "$FILE_PATH"; then + WARNINGS="${WARNINGS}\n│ ⚠ No test_ functions found — file may not contain any tests" +fi + +# Check for unittest.TestCase (discouraged) +if grep -q "unittest.TestCase\|class.*TestCase" "$FILE_PATH"; then + WARNINGS="${WARNINGS}\n│ ⚠ unittest.TestCase detected — prefer pytest function-based tests" +fi + +# Check for pytest markers (pytestmark at module level or @pytest.mark on functions) +if ! grep -q "pytestmark\|@pytest.mark\." "$FILE_PATH"; then + WARNINGS="${WARNINGS}\n│ ⚠ No pytest markers found — every test file must set pytestmark = pytest.mark.<type>
at module level" +fi + +if [[ -n "$WARNINGS" ]]; then + echo "┌─ Test structure check: $(basename "$FILE_PATH")" + echo -e "$WARNINGS" + echo "└─ Fix these issues to maintain test quality standards" +fi + +echo "$INPUT" +exit 0 diff --git a/template/.claude/hooks/pre-bash-branch-protection.sh b/template/.claude/hooks/pre-bash-branch-protection.sh new file mode 100644 index 0000000..9839414 --- /dev/null +++ b/template/.claude/hooks/pre-bash-branch-protection.sh @@ -0,0 +1,46 @@ +#!/usr/bin/env bash +set -uo pipefail + +# PreToolUse hook for Bash: block git push to main/master branches. +# Feature branch pushes are allowed. + +INPUT=$(cat) + +COMMAND=$(echo "$INPUT" | python3 -c " +import json, sys +data = json.loads(sys.stdin.read()) +print(data.get('tool_input', {}).get('command', '')) +" 2>/dev/null) || { echo "$INPUT"; exit 0; } + +# Only check git push commands +case "$COMMAND" in + *"git push"*) ;; + *) echo "$INPUT"; exit 0 ;; +esac + +# Check if pushing to main or master +if echo "$COMMAND" | grep -qE "git push\s+\S+\s+(main|master)(\s|$)"; then + echo "┌─ Branch protection" >&2 + echo "│ ✗ BLOCKED: cannot push directly to main/master" >&2 + echo "│ Use a feature branch and create a pull request instead." 
>&2 + echo "└─ Example: git push origin feat/my-feature" >&2 + exit 2 +fi + +# Check if on main/master and pushing without specifying a branch +CURRENT_BRANCH=$(git branch --show-current 2>/dev/null) +if [[ "$CURRENT_BRANCH" == "main" || "$CURRENT_BRANCH" == "master" ]]; then + # If the push command doesn't specify a remote branch, it pushes current + if echo "$COMMAND" | grep -qE "^git push\s*$|^git push\s+origin\s*$|^git push\s+-u\s+origin\s*$"; then + echo "┌─ Branch protection" >&2 + echo "│ ✗ BLOCKED: currently on '$CURRENT_BRANCH' — cannot push directly" >&2 + echo "│ Create a feature branch first:" >&2 + echo "│ git checkout -b feat/my-feature" >&2 + echo "│ git push -u origin feat/my-feature" >&2 + echo "└─" >&2 + exit 2 + fi +fi + +echo "$INPUT" +exit 0 diff --git a/template/.claude/hooks/pre-delete-protection.sh b/template/.claude/hooks/pre-delete-protection.sh new file mode 100644 index 0000000..373a573 --- /dev/null +++ b/template/.claude/hooks/pre-delete-protection.sh @@ -0,0 +1,51 @@ +#!/usr/bin/env bash +set -uo pipefail + +# PreToolUse hook for Bash: block rm/del of critical project files. + +INPUT=$(cat) + +COMMAND=$(echo "$INPUT" | python3 -c " +import json, sys +data = json.loads(sys.stdin.read()) +print(data.get('tool_input', {}).get('command', '')) +" 2>/dev/null) || { echo "$INPUT"; exit 0; } + +# Only check rm commands +case "$COMMAND" in + *rm*) ;; + *) echo "$INPUT"; exit 0 ;; +esac + +# Protected files — never delete these +PROTECTED_FILES=( + "pyproject.toml" + "justfile" + "CLAUDE.md" + ".pre-commit-config.yaml" + ".copier-answers.yml" + "uv.lock" + ".claude/settings.json" +) + +for protected in "${PROTECTED_FILES[@]}"; do + if echo "$COMMAND" | grep -q "$protected"; then + echo "┌─ Delete protection" >&2 + echo "│ ✗ BLOCKED: cannot delete critical file: $protected" >&2 + echo "│ These files are essential to the project infrastructure." >&2 + echo "└─ If you must remove this file, do it manually outside Claude." 
>&2 + exit 2 + fi +done + +# Block rm -rf on .claude/ directory +if echo "$COMMAND" | grep -qE "rm\s+(-[a-zA-Z]*r[a-zA-Z]*\s+|)\.claude(/|$|\s)"; then + echo "┌─ Delete protection" >&2 + echo "│ ✗ BLOCKED: cannot recursively delete .claude/ directory" >&2 + echo "│ This directory contains skills, hooks, and settings." >&2 + echo "└─ Delete specific files within .claude/ instead." >&2 + exit 2 +fi + +echo "$INPUT" +exit 0 diff --git a/template/.claude/hooks/pre-write-src-require-test.sh b/template/.claude/hooks/pre-write-src-require-test.sh index 3c3531d..2c90209 100755 --- a/template/.claude/hooks/pre-write-src-require-test.sh +++ b/template/.claude/hooks/pre-write-src-require-test.sh @@ -1,14 +1,18 @@ #!/usr/bin/env bash # Claude PreToolUse hook — Write|Edit # TDD enforcement: block writing to src/<package>/<module>.py if the corresponding -# test file tests/<package>/test_<module>.py does not exist yet. +# test file does not exist in tests/unit/, tests/integration/, or tests/e2e/. # # This is the strict version of pre-write-src-test-reminder.sh. That hook warns; # this one blocks. Use this hook when the project follows strict TDD discipline. # -# Scope: only paths matching src/<package>/<module>.py (exactly one directory -# between src/ and the file), excluding __init__.py. Nested layouts such as -# src/<package>/common/foo.py are skipped. +# Scope: paths matching src/<package>/<module>.py (top-level modules) and +# src/<package>/common/<module>.py (common subpackage), excluding __init__.py. +# +# Test file search order: +# 1. tests/unit/test_<module>.py (or tests/unit/common/test_<module>.py) +# 2. tests/integration/test_<module>.py +# 3. tests/e2e/test_<module>.py # # Exits: 0 = allow | 2 = block (test file missing) @@ -35,35 +39,51 @@ print(data.get("tool_input", {}).get("file_path", "")) FP="${FILE_PATH//\\//}" FP="${FP#./}" -# Only match top-level package modules: src/<package>/<module>.py -if [[ !
"$FP" =~ ^(.*/)?src/([^/]+)/([^/]+)\.py$ ]]; then +# Match top-level package modules: src/<package>/<module>.py +# or common subpackage modules: src/<package>/common/<module>.py +IS_COMMON=false +if [[ "$FP" =~ ^(.*/)?src/([^/]+)/common/([^/]+)\.py$ ]]; then + PKG="${BASH_REMATCH[2]}" + MOD="${BASH_REMATCH[3]}" + IS_COMMON=true +elif [[ "$FP" =~ ^(.*/)?src/([^/]+)/([^/]+)\.py$ ]]; then + PKG="${BASH_REMATCH[2]}" + MOD="${BASH_REMATCH[3]}" +else echo "$INPUT" exit 0 fi -PKG="${BASH_REMATCH[2]}" -MOD="${BASH_REMATCH[3]}" - # Skip __init__.py — these rarely need a dedicated test file. if [[ "$MOD" == "__init__" ]]; then echo "$INPUT" exit 0 fi -EXPECTED_TEST="tests/${PKG}/test_${MOD}.py" - -# If the test file already exists, allow the write. -if [[ -f "$EXPECTED_TEST" ]]; then - echo "$INPUT" - exit 0 +# Search for the test file in type subdirectories. +if [[ "$IS_COMMON" == true ]]; then + SEARCH_PATHS=( + "tests/unit/common/test_${MOD}.py" + "tests/integration/common/test_${MOD}.py" + "tests/e2e/common/test_${MOD}.py" + ) + EXPECTED_TEST="tests/unit/common/test_${MOD}.py" +else + SEARCH_PATHS=( + "tests/unit/test_${MOD}.py" + "tests/integration/test_${MOD}.py" + "tests/e2e/test_${MOD}.py" + ) + EXPECTED_TEST="tests/unit/test_${MOD}.py" fi -# Also check if the test file exists at the flat layout: tests/test_<module>.py -EXPECTED_TEST_FLAT="tests/test_${MOD}.py" -if [[ -f "$EXPECTED_TEST_FLAT" ]]; then - echo "$INPUT" - exit 0 -fi +# If a test file exists in any type subdirectory, allow the write. +for path in "${SEARCH_PATHS[@]}"; do + if [[ -f "$path" ]]; then + echo "$INPUT" + exit 0 + fi +done # Block: test file does not exist. echo "┌─ TDD enforcement: test-first required" >&2 @@ -76,7 +96,11 @@ echo "│ This hook enforces RED → GREEN discipline: no implementation" >&2 echo "│ code before a failing test exists."
>&2 echo "│" >&2 echo "│ To create the test file:" >&2 -echo "│ mkdir -p tests/${PKG} && touch $EXPECTED_TEST" >&2 +if [[ "$IS_COMMON" == true ]]; then + echo "│ mkdir -p tests/unit/common && touch $EXPECTED_TEST" >&2 +else + echo "│ mkdir -p tests/unit && touch $EXPECTED_TEST" >&2 +fi echo "└─ ✗ Blocked — write test first" >&2 exit 2 diff --git a/template/.claude/hooks/pre-write-src-test-reminder.sh b/template/.claude/hooks/pre-write-src-test-reminder.sh index 81a044a..8351185 100755 --- a/template/.claude/hooks/pre-write-src-test-reminder.sh +++ b/template/.claude/hooks/pre-write-src-test-reminder.sh @@ -1,11 +1,11 @@ #!/usr/bin/env bash # Claude PreToolUse hook — Write|Edit -# Remind to add a pytest module when touching a top-level package source file. +# Remind to add a pytest module when touching a package source file. # -# Scope: only paths matching src/<package>/<module>.py (exactly one directory between -# src/ and the file), excluding __init__.py. Nested layouts such as -# src/<package>/common/foo.py are skipped — they are often covered by shared tests -# (e.g. test_support.py). +# Scope: paths matching src/<package>/<module>.py (top-level modules) and +# src/<package>/common/<module>.py (common subpackage), excluding __init__.py. +# +# Test files are searched in tests/unit/, tests/integration/, and tests/e2e/. # # Reference : https://github.com/affaan-m/everything-claude-code/blob/main/hooks/README.md # (recipe: Require test files alongside new source files — pytest layout) @@ -34,34 +34,54 @@ print(data.get("tool_input", {}).get("file_path", "")) FP="${FILE_PATH//\\//}" FP="${FP#./}" -if [[ ! "$FP" =~ ^(.*/)?src/([^/]+)/([^/]+)\.py$ ]]; then +# Match top-level package modules or common subpackage modules.
+IS_COMMON=false +if [[ "$FP" =~ ^(.*/)?src/([^/]+)/common/([^/]+)\.py$ ]]; then + MOD="${BASH_REMATCH[3]}" + IS_COMMON=true +elif [[ "$FP" =~ ^(.*/)?src/([^/]+)/([^/]+)\.py$ ]]; then + MOD="${BASH_REMATCH[3]}" +else echo "$INPUT" exit 0 fi -PKG="${BASH_REMATCH[2]}" -MOD="${BASH_REMATCH[3]}" - if [[ "$MOD" == "__init__" ]]; then echo "$INPUT" exit 0 fi -EXPECTED_TEST="tests/${PKG}/test_${MOD}.py" - -if [[ -f "$EXPECTED_TEST" ]]; then - echo "$INPUT" - exit 0 +# Search for the test file in type subdirectories. +if [[ "$IS_COMMON" == true ]]; then + SEARCH_PATHS=( + "tests/unit/common/test_${MOD}.py" + "tests/integration/common/test_${MOD}.py" + "tests/e2e/common/test_${MOD}.py" + ) + EXPECTED_TEST="tests/unit/common/test_${MOD}.py" +else + SEARCH_PATHS=( + "tests/unit/test_${MOD}.py" + "tests/integration/test_${MOD}.py" + "tests/e2e/test_${MOD}.py" + ) + EXPECTED_TEST="tests/unit/test_${MOD}.py" fi +for path in "${SEARCH_PATHS[@]}"; do + if [[ -f "$path" ]]; then + echo "$INPUT" + exit 0 + fi +done + echo "┌─ Test module reminder" >&2 echo "│" >&2 echo "│ Source : $FP" >&2 echo "│ No file: $EXPECTED_TEST" >&2 echo "│" >&2 -echo "│ Add tests under tests/<package>/ (pytest: test_<module>.py) or extend an" >&2 -echo "│ existing test module that imports this code." >&2 -echo "│ Nested src paths (e.g. src/<package>/common/…) are not checked." >&2 +echo "│ Add tests under tests/unit/ (or tests/integration/, tests/e2e/) using" >&2 +echo "│ the naming convention test_<module>.py."
>&2 echo "└─" >&2 echo "$INPUT" diff --git a/template/.claude/rules/bash/security.md b/template/.claude/rules/bash/security.md index 908bb7f..e7e9b4c 100644 --- a/template/.claude/rules/bash/security.md +++ b/template/.claude/rules/bash/security.md @@ -17,35 +17,75 @@ case "$action" in esac ``` -## Validate all inputs +## Never use `shell=True` equivalent + +Constructing commands via string interpolation and passing to a shell interpreter +enables injection: + +```bash +# Wrong — $filename could contain shell metacharacters +system("process $filename") + +# Correct — pass as separate argument +process_file "$filename" +``` + +When calling external programs, pass arguments as separate words, never concatenated +into a single string. + +## Validate and sanitise all inputs + +Scripts that accept arguments or read from environment variables must validate them +before use: ```bash -file_path="${1:?Usage: script.sh <file_path>}" +file_path="${1:?Usage: script.sh <file_path>}" # fail with message if empty +# Reject paths containing traversal sequences if [[ "$file_path" == *..* ]]; then echo "Error: path traversal not allowed" >&2 exit 1 fi ``` -## Secrets in environment +## Secrets in environment variables - Do not echo or log environment variables that may contain secrets. -- Validate required variables exist before use: - ```bash - : "${API_KEY:?API_KEY environment variable is required}" - ``` +- Do not write secrets to temporary files unless the file is created with `mktemp` + and cleaned up in an `EXIT` trap. +- Check that required environment variables exist before using them: -## Temporary files +```bash +: "${API_KEY:?API_KEY environment variable is required}" +``` + +## Temporary file handling + +Use `mktemp` for temporary files and clean up with a trap: ```bash TMPFILE=$(mktemp) trap 'rm -f "$TMPFILE"' EXIT + +# Use $TMPFILE safely +some_command > "$TMPFILE" +process_output "$TMPFILE" ``` -Never use predictable names like `/tmp/output.txt`.
+Never use predictable filenames like `/tmp/output.txt` — they are vulnerable to +symlink attacks. + +## Subprocess calls in hook scripts -## Subprocess calls +Hook scripts in `.claude/hooks/` execute in the context of the developer's machine. +They should: +- Only call trusted binaries (`uv`, `git`, `python3`, `ruff`, `basedpyright`). +- Never download or execute code from the network. +- Avoid `curl | bash` patterns. +- Not modify files outside the project directory. + +The `pre-bash-block-no-verify.sh` hook blocks `git commit --no-verify` to ensure +pre-commit security gates cannot be bypassed. Pass arguments as separate words; never concatenate into a shell string: @@ -56,5 +96,3 @@ git status --porcelain # Wrong — injection risk sh -c "git $user_command" ``` - -Avoid `curl | bash` patterns. diff --git a/template/.claude/rules/common/code-review.md b/template/.claude/rules/common/code-review.md index 457cb4b..34cdee8 100644 --- a/template/.claude/rules/common/code-review.md +++ b/template/.claude/rules/common/code-review.md @@ -1,5 +1,16 @@ # Code Review Standards +## When to review + +Review your own code before every commit, and request a peer review before merging to +`main`. Self-review catches the majority of issues before CI runs. + +**Mandatory review triggers:** +- Any change to security-sensitive code: authentication, authorisation, secret handling, + user input processing. +- Architectural changes: new modules, changed public APIs, modified data models. +- Changes to CI/CD configuration or deployment scripts. + ## Self-review checklist **Correctness** @@ -9,13 +20,14 @@ **Code quality** - [ ] Functions are focused; no function exceeds 80 lines. -- [ ] No deep nesting (> 4 levels); use early returns. -- [ ] No commented-out code. +- [ ] No deep nesting (> 4 levels); use early returns instead. +- [ ] Variable and function names are clear and consistent with the codebase. +- [ ] No commented-out code left in. 
**Testing** - [ ] Every new public symbol has at least one test. -- [ ] `just coverage` shows no module below 85 %. -- [ ] Tests are isolated and order-independent. +- [ ] `just coverage` shows no module below threshold. +- [ ] Tests are isolated and do not depend on order. **Documentation** - [ ] Every new public function/class/method has a Google-style docstring. @@ -23,22 +35,40 @@ **Security** - [ ] No hardcoded secrets. +- [ ] No new external dependencies without justification. - [ ] User inputs are validated. **CI** -- [ ] `just ci` passes locally before pushing. +- [ ] `just ci` passes with zero errors locally before pushing. ## Severity levels -| Level | Meaning | Action | -|-------|---------|--------| +| Level | Meaning | Required action | +|-------|---------|-----------------| | CRITICAL | Security vulnerability or data-loss risk | Block merge — must fix | | HIGH | Bug or significant quality issue | Should fix before merge | -| MEDIUM | Maintainability concern | Consider fixing | -| LOW | Style suggestion | Optional | +| MEDIUM | Maintainability or readability concern | Consider fixing | +| LOW | Style suggestion or minor improvement | Optional | + +## Common issues to catch + +| Issue | Example | Fix | +|-------|---------|-----| +| Large functions | > 80 lines | Extract helpers | +| Deep nesting | `if a: if b: if c:` | Early returns | +| Missing error handling | Bare `except:` | Handle specifically | +| Hardcoded magic values | `if status == 3:` | Named constant | +| Missing type annotations | `def foo(x):` | Add type hints | +| Missing docstring | No docstring on public function | Add Google-style docstring | +| Debug artefacts | `print("here")` | Remove or use logger | + +## Integration with automated checks + +The review checklist is enforced at multiple layers: -## Automated enforcement +- **PostToolUse hooks**: ruff + basedpyright fire after every `.py` edit in a Claude session. 
+- **Pre-commit hooks**: ruff, basedpyright, secret scan on every `git commit`. +- **CI**: full `just ci` run on every push and pull request. -- **PostToolUse hooks**: ruff + basedpyright after every `.py` edit. -- **Pre-commit hooks**: ruff, basedpyright, secret scan on `git commit`. -- **CI**: full `just ci` on every push and pull request. +Fix violations at the earliest layer — it is cheaper to fix a ruff error immediately +after editing a file than to fix it after the CI pipeline fails. diff --git a/template/.claude/rules/common/git-workflow.md b/template/.claude/rules/common/git-workflow.md index 7c8c2b8..2b3f44b 100644 --- a/template/.claude/rules/common/git-workflow.md +++ b/template/.claude/rules/common/git-workflow.md @@ -6,33 +6,45 @@ <type>: <subject> +<body> +<footer> ``` **Types**: `feat`, `fix`, `refactor`, `docs`, `test`, `chore`, `perf`, `ci`, `build` Rules: -- Subject line ≤ 72 characters; imperative mood ("Add feature", not "Added feature"). +- Subject line ≤ 72 characters; use imperative mood ("Add feature", not "Added feature"). +- Body wraps at 80 characters. - Reference issues in the footer (`Closes #123`), not the subject. -- One logical change per commit. +- One logical change per commit. Do not bundle unrelated fixes. + +## Branch naming + +``` +<type>/<short-description> # e.g. feat/add-logging-manager +<type>/<issue>-description # e.g. fix/42-null-pointer +``` ## What never goes in a commit -- Hardcoded secrets, API keys, passwords, or tokens. -- Generated artefacts reproducible from source (`.pyc`, `.venv/`, `dist/`). -- Merge-conflict markers or `*.rej` files. -- Debug statements (`print()`, `pdb.set_trace()`). +- Hardcoded secrets, API keys, tokens, or passwords. +- Generated artefacts that are reproducible from source (build output, `*.pyc`, `.venv/`). +- Merge-conflict markers (`<<<<<<<`, `=======`, `>>>>>>>`). +- `*.rej` files left by Copier update conflicts. +- Debug statements (`print()`, `debugger`, `pdb.set_trace()`). -The `pre-bash-commit-quality.sh` hook scans staged files before every commit.
+The `pre-bash-commit-quality.sh` hook scans staged files for the above before every commit. ## Protected operations -These commands are blocked by hooks and must not be run without explicit justification: +These commands are **blocked** by pre-commit hooks and must not be run without explicit +justification: - `git commit --no-verify` — bypasses quality gates. - `git push --force` — rewrites shared history. -- `git push` directly to the default branch (for example `main`) — use pull requests. +- `git push` directly to `main` — use pull requests. -**Maintainers:** configure branch protection and squash-only merges on GitHub; see -`docs/github-repository-settings.md` in this repository (single checklist). +**Maintainers:** enforce PR-only `main` and squash merges in GitHub **Settings** / branch +protection; see `docs/github-repository-settings.md` in this repository (single checklist). ## TDD commit conventions @@ -59,5 +71,8 @@ Tests: test_calculate_discount_* in tests/test_pricing.py ## Pull request workflow 1. Run `just review` (lint + types + docstrings + tests) before opening a PR. -2. Write a PR description explaining the *why*, not just the *what*. -3. All CI checks must be green before requesting review. +2. Use `git diff main...HEAD` to review all changes since branching. +3. Write a PR description that explains the *why* behind the change, not just the *what*. +4. Include a test plan: which scenarios were verified manually or with automated tests. +5. All CI checks must be green before requesting review. +6. Squash-merge feature branches; preserve merge commits for release branches. diff --git a/template/.claude/rules/common/hooks.md b/template/.claude/rules/common/hooks.md index 9b9654d..088e26c 100644 --- a/template/.claude/rules/common/hooks.md +++ b/template/.claude/rules/common/hooks.md @@ -3,6 +3,13 @@ This file provides cross-language guidance for Claude Code hooks (PreToolUse/PostToolUse/Stop). 
Language-specific hook documentation should live in `../python/hooks.md`, `../bash/...`, etc. +## General rules + +- Hooks must be fast and side-effect free unless explicitly intended. +- **PreToolUse** hooks may block actions (exit code `2`) and should avoid long-running commands. +- **PostToolUse** hooks must not block (exit code is ignored); use them for reminders and checks. +- Prefer printing concise, actionable guidance on stderr for warnings and blocks. + When documenting a new hook: - List the script in the relevant `hooks/README.md` table. - Document the lifecycle event, matcher, and whether it blocks (exit code 2) or warns. diff --git a/template/.claude/rules/common/security.md b/template/.claude/rules/common/security.md index 72ec121..c65f6e1 100644 --- a/template/.claude/rules/common/security.md +++ b/template/.claude/rules/common/security.md @@ -1,38 +1,53 @@ # Security Guidelines -## Pre-commit checklist +## Pre-commit security checklist -Before every commit: +Before every commit, verify: -- [ ] No hardcoded secrets (API keys, passwords, tokens, private keys). +- [ ] No hardcoded secrets: API keys, passwords, tokens, private keys. +- [ ] No credentials in comments or docstrings. - [ ] All user-supplied inputs are validated before use. -- [ ] Parameterised queries used for all database operations. +- [ ] Parameterised queries used for all database operations (no string-formatted SQL). - [ ] File paths from user input are sanitised (no path traversal). -- [ ] Error messages do not expose internal paths or configuration values. -- [ ] New dependencies are from trusted sources and pinned. +- [ ] Error messages do not expose stack traces, internal paths, or configuration values + to end users. +- [ ] New dependencies are from trusted sources and pinned to specific versions. + +The `pre-bash-commit-quality.sh` hook performs a basic automated scan, but it does not +replace manual review. 
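The kind of automated scan described above can be sketched as a small bash helper. This is a hypothetical illustration, not the actual implementation of `pre-bash-commit-quality.sh`; the function names and patterns are deliberately minimal:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a staged-file secret scan. A real hook would use
# a much fuller pattern set (see tools like gitleaks for production rules).

scan_for_secrets() {
  # Return 1 (fail) if the file matches a likely-secret pattern.
  local file="$1"
  local patterns='AKIA[0-9A-Z]{16}|BEGIN (RSA|EC|OPENSSH) PRIVATE KEY|password[[:space:]]*=[[:space:]]*"[^"]{8,}"'
  if grep -qE "$patterns" "$file" 2>/dev/null; then
    echo "possible secret in: $file" >&2
    return 1
  fi
  return 0
}

scan_staged() {
  # Scan every file staged for commit; return non-zero on any hit.
  local status=0 file
  while IFS= read -r file; do
    [[ -f "$file" ]] && { scan_for_secrets "$file" || status=1; }
  done < <(git diff --cached --name-only)
  return "$status"
}
```

A sketch like this only reduces the odds of an accident; treat its silence as "no obvious hit", never as proof that no secret is present.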
## Secret management +- **Never** hardcode secrets in source files, configuration, or documentation. +- Use environment variables; load them with a library (e.g. `python-dotenv`) rather + than direct `os.environ` reads scattered across code. +- Validate that required secrets are present at application startup; fail fast with a + clear error rather than silently using a default. +- Rotate any secret that may have been exposed. Treat exposure as certain if the secret + ever appeared in a commit, even briefly. + ```python -# Correct: fail fast with a clear error if the secret is missing -api_key = os.environ["OPENAI_API_KEY"] +# Correct: fail fast, clear message +api_key = os.environ["OPENAI_API_KEY"] # raises KeyError if missing -# Wrong: silent fallback masks misconfiguration +# Wrong: silent fallback hides misconfiguration api_key = os.environ.get("OPENAI_API_KEY", "") ``` -Use `python-dotenv` in development. Never commit `.env` files. Document required -variables in `.env.example` with placeholder values. - ## Dependency security - Pin all dependency versions in `pyproject.toml` and commit `uv.lock`. -- Review Dependabot / Renovate PRs promptly. +- Review `dependabot` PRs promptly; do not let them accumulate. +- Run `just security` (if present) or `uv run pip-audit` before major releases. ## Security response protocol -1. Stop work on the current feature. -2. For critical issues: open a private security advisory, not a public issue. -3. Rotate any exposed secrets before fixing the code. -4. Write a test that reproduces the vulnerability before patching. -5. Search the codebase for similar patterns after fixing. +If a security issue is discovered: + +1. Stop work on the current feature immediately. +2. Assess severity: can it expose user data, allow privilege escalation, or cause data loss? +3. For **critical** issues: open a private GitHub security advisory, do not discuss in + public issues. +4. Rotate any exposed secrets before fixing the code. +5. 
Write a test that reproduces the vulnerability before patching. +6. Review the entire codebase for similar patterns after fixing. diff --git a/template/.claude/rules/copier/template-conventions.md b/template/.claude/rules/copier/template-conventions.md new file mode 100644 index 0000000..8c40e38 --- /dev/null +++ b/template/.claude/rules/copier/template-conventions.md @@ -0,0 +1,183 @@ +# Copier Template Conventions + +# applies-to: copier.yml, template/** + +These rules apply to `copier.yml` and all files under `template/`. They are +specific to the template meta-repo and are **not** propagated to generated projects. + +## copier.yml structure + +Organise `copier.yml` into clearly labelled sections with separator comments: + +```yaml +# --- Template metadata --- +_min_copier_version: "9.11.2" +_subdirectory: template + +# --- Jinja extensions --- +_jinja_extensions: + - jinja2_time.TimeExtension + +# --- Questions / Prompts --- +project_name: + type: str + help: Human-readable project name + +# --- Computed values --- +current_year: + type: str + when: false + default: "{% now 'utc', '%Y' %}" + +# --- Post-generation tasks --- +_tasks: + - command: uv lock +``` + +## Questions (prompts) + +- Every question must have a `help:` string that clearly explains what the value is used for. +- Every question must have a sensible `default:` so users can accept defaults with `--defaults`. +- Use `choices:` for questions with a fixed set of valid answers (license, Python version). +- Use `validator:` with a Jinja expression for format validation (e.g. package name must be + a valid Python identifier). +- Use `when: "{{ some_bool }}"` to conditionally show questions that are only relevant + when a related option is enabled. + +## Computed variables + +Computed variables (not prompted) must use `when: false`: + +```yaml +current_year: + type: str + when: false + default: "{% now 'utc', '%Y' %}" +``` + +They are not stored in `.copier-answers.yml` and not shown to users. 
Use them for +derived values (year, Python version matrix) to keep templates DRY. + +## Secrets and third-party tokens + +Do **not** add Copier prompts for API tokens or upload keys (Codecov, PyPI, npm, +and so on). Those belong in the CI provider’s **encrypted secrets** (for example +GitHub Actions **Settings → Secrets**) and in maintainer documentation +(`README.md`, `docs/ci.md`), not in `.copier-answers.yml`. + +If you must accept a rare secret interactively, use `secret: true` with a safe +`default` so it is **not** written to the answers file — and still prefer +documenting the secret-in-CI workflow instead of prompting. + +## _skip_if_exists + +Files in `_skip_if_exists` are preserved on `copier update` — user edits are not +overwritten. Add a file here when: +- The user is expected to customise it significantly (`pyproject.toml`, `README.md`). +- Overwriting it on update would destroy user work. + +Do **not** add files here unless there is a clear reason. Too many skipped files +make updates less useful. + +## _tasks + +Post-generation tasks run after both `copier copy` and `copier update`. Design them +to be **idempotent** (safe to run multiple times) and **fast** (do not download +large artefacts unconditionally). + +Use `copier update --skip-tasks` to bypass tasks when only the template content needs +to be refreshed. + +Tasks use `/bin/sh` (POSIX shell), not bash. Use POSIX-compatible syntax. + +## Template file conventions + +- Add `.jinja` suffix to every file that contains Jinja2 expressions. +- Files without `.jinja` are copied verbatim (no Jinja processing). +- Template file names may themselves contain Jinja expressions: + `src/{{ package_name }}/__init__.py.jinja` → `src/mylib/__init__.py`. +- Keep Jinja expressions in file names simple (variable substitution only). + +## .copier-answers.yml + +- Never edit `.copier-answers.yml` by hand in generated projects. +- The answers file is managed by Copier's update algorithm. 
Manual edits cause + unpredictable diffs on the next `copier update`. +- The template file `{{_copier_conf.answers_file}}.jinja` generates the answers file. + Changes to its structure require careful migration testing. + +## Versioning and releases + +- Tag releases with **PEP 440** versions: `1.0.0`, `1.2.3`, `2.0.0a1`. +- `copier update` uses these tags to select the appropriate template version. +- Introduce `_migrations` in `copier.yml` when a new version renames or removes template + files, to guide users through the update. +- See `src/{{ package_name }}/common/bump_version.py` and `.github/workflows/release.yml` for the release + automation workflow. + +## Skill descriptions (if adding skills to template) + +Every skill SKILL.md frontmatter must include a `description:` field with these constraints: + +- **Max 1024 characters** (hard limit) — skill descriptions are used in Claude's command + suggestions and UI; longer descriptions are truncated and unusable. +- Use `>-` for multi-line descriptions (block scalar without trailing newline). +- Lead with the skill's **primary purpose** (what it does). +- Include **trigger keywords** if applicable (e.g., "use this when the user says..."). +- Mention **what the skill outputs** (e.g., "produces test skeletons with AAA structure"). + +**Template:** + +```yaml +--- +name: skill-name +description: >- + + + + + +--- +``` + +**Example (tdd-test-planner):** + +```yaml +description: >- + Convert a requirement into a structured pytest test plan. + Use when the user says: "plan my tests", "write tests first", "TDD approach". + Produces categorised test cases (happy path, errors, boundaries, edges, integration) + plus pytest skeletons with AAA structure and fixture guidance. +``` + +**Measurement:** Count characters in the `description:` value (not including frontmatter). 
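The 1024-character limit can be checked mechanically. A sketch assuming PyYAML is available and the standard `---`-delimited frontmatter shown above (the skill content here is illustrative):

```python
import yaml

skill_md = """---
name: tdd-test-planner
description: >-
  Convert a requirement into a structured pytest test plan.
  Produces categorised test cases plus pytest skeletons.
---
"""

# The frontmatter sits between the first two `---` markers
frontmatter = yaml.safe_load(skill_md.split("---")[1])
desc = frontmatter["description"]
if len(desc) > 1024:
    raise ValueError(f"description is {len(desc)} chars (limit 1024)")
```

Because `>-` folds line breaks into spaces and strips the trailing newline, `len(desc)` measures exactly the characters Claude sees.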
+ +--- + +## Dual-hierarchy maintenance + +This repo has two parallel `.claude/` trees: + +``` +.claude/ ← used while developing the template +template/.claude/ ← rendered into generated projects +``` + +When adding or modifying hooks, commands, or rules: +- Decide whether the change applies to the template maintainer only, generated projects + only, or both. +- If both: add to both trees. The `post-edit-template-mirror.sh` hook will remind you. +- Rules specific to Copier/Jinja/YAML belong only in the root tree. +- Rules for Python/Bash/Markdown typically belong in both trees. + +## Testing template changes + +Every change to `copier.yml` or a template file requires a test update in +`tests/integration/test_template.py`. Run: + +```bash +just test # run all template tests +copier copy . /tmp/test-output --trust --defaults --vcs-ref HEAD +``` + +Clean up: `rm -rf /tmp/test-output` diff --git a/template/.claude/rules/jinja/coding-style.md b/template/.claude/rules/jinja/coding-style.md new file mode 100644 index 0000000..e500ab5 --- /dev/null +++ b/template/.claude/rules/jinja/coding-style.md @@ -0,0 +1,107 @@ +# Jinja2 Coding Style + +# applies-to: **/*.jinja + +Jinja2 templates are used in this repository to generate Python project files via +Copier. These rules apply to every `.jinja` file under `template/`. + +## Enabled extensions + +The following Jinja2 extensions are active (configured in `copier.yml`): + +- `jinja2_time.TimeExtension` — provides `{% now 'utc', '%Y' %}` for date injection. +- `jinja2.ext.do` — enables `{% do list.append(item) %}` for side-effect statements. +- `jinja2.ext.loopcontrols` — enables `{% break %}` and `{% continue %}` in loops. + +Always use these extensions rather than working around them with complex filter chains. + +## Variable substitution + +Use `{{ variable_name }}` for all substitutions. 
Keep a single space inside the braces
+(`{{ project_name }}`, not `{{project_name}}`); add extra internal spacing only for
+readability in complex expressions, not as a general rule:
+
+```jinja
+{{ project_name }}
+{{ author_name | lower | replace(" ", "-") }}
+{{ python_min_version }}
+```
+
+## Control structures
+
+Indent template logic blocks consistently with the surrounding file content.
+Use `{%- -%}` (dash-trimmed) tags to suppress blank lines produced by control blocks
+when the rendered output should be compact:
+
+```jinja
+{%- if include_docs %}
+mkdocs:
+  site_name: {{ project_name }}
+{%- endif %}
+```
+
+Use `{% if %}...{% elif %}...{% else %}...{% endif %}` for branching.
+Avoid deeply nested conditions; extract to a Jinja macro or simplify the data model.
+
+## Whitespace control
+
+- Use `{%- ... -%}` to strip leading/trailing whitespace around control blocks that
+  should not produce blank lines in the output.
+- Never strip whitespace blindly on every tag — it makes templates hard to read.
+- Test rendered output with `copier copy . /tmp/test-output --trust --defaults` and
+  inspect for spurious blank lines before committing.
+
+## Filters
+
+Prefer built-in Jinja2 filters over custom Python logic in templates:
+
+| Goal | Filter |
+|------|--------|
+| Lowercase | `\| lower` |
+| Replace characters | `\| replace("x", "y")` |
+| Default value | `\| default("fallback")` |
+| Join list | `\| join(", ")` |
+| Trim whitespace | `\| trim` |
+
+Avoid complex filter chains longer than 3 steps; compute the value in `copier.yml`
+as a computed variable instead.
+
+## Macros
+
+Define reusable template fragments as macros at the top of the file or in a dedicated
+`_macros.jinja` file (if the project grows to warrant it):
+
+```jinja
+{% macro license_header(year, author) %}
+# Copyright (c) {{ year }} {{ author }}. All rights reserved.
+{% endmacro %}
+```
+
+## File naming
+
+- Suffix: `.jinja` (e.g. `pyproject.toml.jinja`, `__init__.py.jinja`).
+- File names can themselves be Jinja expressions: + `src/{{ package_name }}/__init__.py.jinja` renders to `src/mylib/__init__.py`. +- Keep file name expressions simple: variable substitution only, no filters. + +## Commenting + +Use Jinja comments (`{# comment #}`) for notes that should not appear in the rendered +output. Use the target language's comment syntax for notes that should survive rendering: + +```jinja +{# This block is only rendered when pandas support is requested #} +{% if include_pandas_support %} +pandas>=2.0 +{% endif %} + +# This Python comment will appear in the generated file +import os +``` + +## Do not embed logic that belongs in copier.yml + +Templates should be presentation, not computation. Move conditional logic to: +- `copier.yml` computed variables (`when: false`, `default: "{% if ... %}"`) +- Copier's `_tasks` for post-generation side effects + +Deeply nested `{% if %}{% if %}{% if %}` blocks in a template are a signal to refactor. diff --git a/template/.claude/rules/jinja/testing.md b/template/.claude/rules/jinja/testing.md new file mode 100644 index 0000000..0f1feff --- /dev/null +++ b/template/.claude/rules/jinja/testing.md @@ -0,0 +1,91 @@ +# Jinja2 Template Testing + +# applies-to: **/*.jinja + +> This file extends [common/testing.md](../common/testing.md) with Jinja2-specific content. + +## What to test + +Every Jinja2 template change requires a corresponding test in `tests/integration/test_template.py`. +Tests render the template with `copier copy` and assert on the output. + +Scenarios to cover for each template file: + +1. **Default values** — render with `--defaults` and assert the file exists with + expected content. +2. **Boolean feature flags** — render with each combination of `include_*` flags that + affects the template and assert the relevant sections are present or absent. +3. **Variable substitution** — render with non-default values (e.g. 
a custom + `project_name`) and assert the value appears correctly in the output. +4. **Whitespace correctness** — spot-check that blank lines are not spuriously added or + removed by whitespace-control tags. + +## Test utilities + +Tests use the `copier` Python API directly (not the CLI) for reliability: + +```python +from copier import run_copy + +def test_renders_with_pandas(tmp_path): + run_copy( + src_path=str(ROOT), + dst_path=str(tmp_path), + data={"include_pandas_support": True}, + defaults=True, + overwrite=True, + unsafe=True, + vcs_ref="HEAD", + ) + pyproject = (tmp_path / "pyproject.toml").read_text() + assert "pandas" in pyproject +``` + +The `ROOT` constant points to the repository root. Use the `tmp_path` fixture for the +destination directory so pytest cleans it up automatically. + +## Syntax validation + +The `post-edit-jinja.sh` PostToolUse hook validates Jinja2 syntax automatically after +every `.jinja` file edit in a Claude Code session: + +``` +┌─ Jinja2 syntax check: template/pyproject.toml.jinja +│ ✓ Jinja2 syntax OK +└─ Done +``` + +If the hook reports a syntax error, fix it before running tests — `copier copy` will +fail with a less helpful error message if template syntax is broken. + +## Manual rendering for inspection + +Render the full template to inspect a complex template change: + +```bash +copier copy . /tmp/test-output --trust --defaults --vcs-ref HEAD +``` + +Inspect specific files: + +```bash +cat /tmp/test-output/pyproject.toml +cat /tmp/test-output/src/my_library/__init__.py +``` + +Clean up: + +```bash +rm -rf /tmp/test-output +``` + +## Update testing + +Test `copier update` scenarios when changing `_skip_if_exists` or the `.copier-answers.yml` +template. The `tests/integration/test_template.py` file includes update scenario tests; add new ones +when you add new `_skip_if_exists` entries. + +## Coverage for template branches + +Aim to cover every `{% if %}` branch in every template file with at least one test. 
+Untested branches can produce invalid code in generated projects. diff --git a/template/.claude/rules/markdown/conventions.md b/template/.claude/rules/markdown/conventions.md index de26f03..5673b3e 100644 --- a/template/.claude/rules/markdown/conventions.md +++ b/template/.claude/rules/markdown/conventions.md @@ -7,42 +7,62 @@ Any Markdown file created as part of a workflow, analysis, or investigation output **must be placed inside the `docs/` folder**. -**Allowed exceptions** (may be placed anywhere): +**Allowed exceptions** (may be placed at the repository root or any location): - `README.md` - `CLAUDE.md` -- `.claude/rules/**/*.md` -- `.github/**/*.md` +- `.claude/rules/**/*.md` (rules documentation) +- `.claude/hooks/README.md` (hooks documentation) +- `.github/**/*.md` (GitHub community files) -Do **not** create free-standing files such as `ANALYSIS.md` or `NOTES.md` at the -repository root or inside `src/`, `tests/`, or `scripts/`. +Do **not** create free-standing files such as `ANALYSIS.md`, `NOTES.md`, or +`LOGGING_ANALYSIS.md` at the repository root or inside `src/`, `tests/`, or `scripts/`. -This is enforced by `.claude/hooks/post-edit-markdown.sh`. +This rule is enforced by: +- `pre-write-doc-file-warning.sh` (PreToolUse: blocks writing `.md` outside allowed locations) +- `post-edit-markdown.sh` (PostToolUse: warns if an existing `.md` is edited in the wrong place) ## Headings -- ATX headings (`#`, `##`, `###`), not Setext underlines. -- One `# Title` per file. -- Do not skip heading levels. -- Sentence-case headings (capitalise first word and proper nouns only). +- Use ATX headings (`#`, `##`, `###`), not Setext underlines (`===`, `---`). +- One top-level heading (`# Title`) per file. +- Do not skip heading levels (e.g. `##` → `####` without a `###` in between). +- Headings should be sentence-case (capitalise first word only) unless the subject + is a proper noun or acronym. 
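A skeleton that satisfies these heading rules (one H1, ATX style, no skipped levels, sentence case):

```markdown
# Release process

## Versioning

### Pre-release tags
```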
+
+## Line length and wrapping
+
+- Wrap prose at 100 characters for readability in editors and diffs.
+- Do not wrap code blocks, tables, or long URLs.

## Code blocks

-Always specify the language:
-```
-\```python
-\```bash
-\```yaml
-```
+- Always specify the language for fenced code blocks:
+  ```
+  \```python
+  \```bash
+  \```yaml
+  \```
+  ```
+- Use inline code (backticks) for: file names, directory names, command names,
+  variable names, and short code snippets within prose.

## Lists

-- Use `-` for unordered lists.
-- Use `1.` for all ordered list items.
+- Use `-` for unordered lists (not `*` or `+`).
+- Use `1.` for all items in ordered lists (the renderer handles numbering).
+- Nest lists with 2-space or 4-space indentation consistently within a file.
+
+## Tables
+
+- Align columns with spaces for readability in source (optional but preferred).
+- Include a header row and a separator row.
+- Keep tables narrow enough to read without horizontal scrolling.

## Links

-- Relative links for internal files: `[CLAUDE.md](../CLAUDE.md)`.
-- Descriptive link text: `[setup guide](./setup.md)`, not `[click here](./setup.md)`.
+- Use reference-style links for URLs that appear more than once.
+- Use relative links for internal project files:
+  `[CLAUDE.md](../CLAUDE.md)` not `https://github.com/…/CLAUDE.md`.
+- Do not embed bare URLs in prose; always use `[descriptive text](url)`.

## CLAUDE.md maintenance

diff --git a/template/.claude/rules/python/coding-style.md b/template/.claude/rules/python/coding-style.md
new file mode 100644
index 0000000..f906f2e
--- /dev/null
+++ b/template/.claude/rules/python/coding-style.md
@@ -0,0 +1,138 @@
+# Python Coding Style
+
+# applies-to: **/*.py, **/*.pyi
+
+> This file extends [common/coding-style.md](../common/coding-style.md) with Python-specific content.
+
+## Formatter and linter
+
+- **ruff** handles both formatting and linting. Do not use black, isort, or flake8 alongside it.
+- Run `just fmt` to format, `just lint` to lint, `just fix` to auto-fix safe violations. +- Active rule sets: `E`, `F`, `I`, `UP`, `B`, `SIM`, `C4`, `RUF`, `TCH`, `PGH`, `PT`, `ARG`, + `D`, `C90`, `PERF`. +- Line length: 100 characters. `E501` is disabled (formatter handles wrapping). + +## Type annotations + +All public functions and methods must have complete type annotations: + +```python +# Correct +def calculate_discount(price: float, tier: str) -> float: ... + +# Wrong +def calculate_discount(price, tier): ... +``` + +- **basedpyright** strict mode for type checking (`just type`). +- Prefer `X | Y` union syntax over `Optional[X]` / `Union[X, Y]`. + +## Docstrings — Google style + +Every public function, class, and method must have a Google-style docstring: + +```python +def fetch_user(user_id: int, *, include_deleted: bool = False) -> User | None: + """Fetch a user by ID from the database. + + Args: + user_id: The primary key of the user to retrieve. + include_deleted: When True, soft-deleted users are also returned. + + Returns: + The matching User instance, or None if not found. + + Raises: + DatabaseError: If the database connection fails. + """ +``` + +Test files (`tests/**`) and scripts (`scripts/**`) are exempt from docstring requirements. 
+ +## Naming + +| Symbol | Convention | Example | +|--------|-----------|---------| +| Module | `snake_case` | `file_manager.py` | +| Class | `PascalCase` | `LoggingManager` | +| Function / method | `snake_case` | `configure_logging()` | +| Variable | `snake_case` | `retry_count` | +| Constant | `UPPER_SNAKE_CASE` | `MAX_RETRIES` | +| Private | leading `_` | `_internal_helper()` | + +## Immutability + +```python +from dataclasses import dataclass +from typing import NamedTuple + +@dataclass(frozen=True) +class Config: + host: str + port: int + +class Point(NamedTuple): + x: float + y: float +``` + +## Error handling + +```python +# Correct: specific exception, meaningful message +try: + result = parse_config(path) +except FileNotFoundError: + raise ConfigError(f"Config file not found: {path}") from None + +# Wrong: silently swallowed +try: + result = parse_config(path) +except Exception: + result = None +``` + +## Logging + +**Mandatory:** use the public APIs in `my_library.common.logging_manager` for all logging +setup and shared log formatting — `configure_logging()`, `get_logger()`, `bind_context()`, +`clear_context()`, `log_section` / `log_sub_section` / `log_section_divider`, `log_fields`, +`log_message`, `wrap_text`, `list_to_numbered_string`, and `write_machine_stdout_line` (raw stdout +for `$(...)` capture only). Call `configure_logging()` once at the app entry point before other +package code logs. + +**Event logs:** after `configure_logging()`, use `structlog.get_logger()` and level methods with +event names and keyword fields (no f-strings in the event string). + +**Never** in `src/my_library/` (except inside `logging_manager` itself): `print()`, +`logging.getLogger()`, `logging.basicConfig()`, or `structlog.configure()`. 
+ +```python +import structlog + +from my_library.common.logging_manager import configure_logging + +configure_logging() +log = structlog.get_logger() + +log.info("user_created", user_id=user.id) +log.error("payment_failed", order_id=order_id, reason=str(exc), llm=True) +``` + +## Reuse `common/` in package code + +Outside `src/my_library/common/`, prefer imports from `my_library.common` (`file_manager`, +`decorators`, `utils`, `logging_manager`) instead of duplicating the same behaviour in `core.py`, +`cli.py`, or other modules. + +## Imports + +```python +# stdlib → third-party → local +import os +from pathlib import Path + +import structlog + +from my_library.common.utils import slugify +``` diff --git a/template/.claude/rules/python/hooks.md b/template/.claude/rules/python/hooks.md index f92d2af..75fae84 100644 --- a/template/.claude/rules/python/hooks.md +++ b/template/.claude/rules/python/hooks.md @@ -70,12 +70,14 @@ the same scope; running both would warn and then block on the same condition. | Hook | Trigger | What it does | |------|---------|----------------| -| `pre-write-src-require-test.sh` | `Write` or `Edit` on `src//.py` | **Blocks** write if `tests//test_.py` does not exist (strict TDD). **Registered by default** in `.claude/settings.json`. | +| `pre-write-src-require-test.sh` | `Write` or `Edit` on `src//.py` | **Blocks** write if a matching test file does not exist in `tests/unit/`, `tests/integration/`, or `tests/e2e/` (strict TDD). **Registered by default** in `.claude/settings.json`. | | `pre-write-src-test-reminder.sh` | Same | Warns only (non-blocking). Swap into `settings.json` **instead of** `pre-write-src-require-test.sh` if you want reminders without blocking. | | `pre-bash-coverage-gate.sh` | `Bash` on `git commit` | Warns if test coverage is below 85% threshold | -Both source/test hooks only check top-level package modules (`src//.py`, -excluding `__init__.py`). Nested packages are skipped. 
+Both source/test hooks check top-level package modules (`src//.py`, +excluding `__init__.py`) and common subpackage modules (`src//common/.py`). +Test files are searched in `tests/unit/`, `tests/integration/`, and `tests/e2e/` +subdirectories (e.g. `tests/unit/test_.py` or `tests/unit/common/test_.py`). ### How to swap to the warn-only reminder diff --git a/template/.claude/rules/python/patterns.md b/template/.claude/rules/python/patterns.md new file mode 100644 index 0000000..75d73ad --- /dev/null +++ b/template/.claude/rules/python/patterns.md @@ -0,0 +1,164 @@ +# Python Patterns + +# applies-to: **/*.py, **/*.pyi + +> This file extends [common/patterns.md](../common/patterns.md) with Python-specific content. + +## Dataclasses as data transfer objects + +Use `@dataclass` (or `@dataclass(frozen=True)`) for plain data containers. +Use `NamedTuple` when immutability and tuple unpacking are both needed: + +```python +from dataclasses import dataclass, field + +@dataclass +class CreateOrderRequest: + user_id: int + items: list[str] = field(default_factory=list) + notes: str | None = None + +@dataclass(frozen=True) +class Money: + amount: float + currency: str = "USD" +``` + +## Protocol for duck typing + +Prefer `Protocol` over abstract base classes for interface definitions: + +```python +from typing import Protocol + +class Repository(Protocol): + def find_by_id(self, id: int) -> dict | None: ... + def save(self, entity: dict) -> dict: ... + def delete(self, id: int) -> None: ... 
+```
+
+## Context managers for resource management
+
+Use context managers (`with` statement) for all resources that need cleanup:
+
+```python
+# Files
+with open(path, encoding="utf-8") as fh:
+    content = fh.read()
+
+# Custom resources: implement __enter__ / __exit__ or use @contextmanager
+from contextlib import contextmanager
+
+@contextmanager
+def managed_connection(url: str):
+    conn = connect(url)
+    try:
+        yield conn
+    finally:
+        conn.close()
+```
+
+## Generators for lazy evaluation
+
+Use generators instead of building full lists when iterating once:
+
+```python
+from collections.abc import Generator
+from pathlib import Path
+
+def read_lines(path: Path) -> Generator[str, None, None]:
+    with open(path, encoding="utf-8") as fh:
+        yield from fh
+
+# Caller controls materialisation
+lines = list(read_lines(log_path))  # full list when needed
+first_error = next(
+    (line for line in read_lines(log_path) if "ERROR" in line), None
+)
+```
+
+## Dependency injection over globals
+
+Pass dependencies as constructor arguments or function parameters. Avoid module-level
+singletons that cannot be replaced in tests:
+
+```python
+# Preferred
+class OrderService:
+    def __init__(self, repo: Repository, logger: BoundLogger) -> None:
+        self._repo = repo
+        self._log = logger
+
+# Avoid
+class OrderService:
+    _repo = GlobalRepository()  # hard to test, hard to swap
+```
+
+## Configuration objects
+
+Centralise configuration in a typed dataclass or pydantic model loaded once at startup:
+
+```python
+@dataclass(frozen=True)
+class AppConfig:
+    database_url: str
+    log_level: str = "INFO"
+    max_retries: int = 3
+
+    @classmethod
+    def from_env(cls) -> "AppConfig":
+        return cls(
+            database_url=os.environ["DATABASE_URL"],
+            log_level=os.environ.get("LOG_LEVEL", "INFO"),
+        )
+```
+
+## Exception hierarchy
+
+Define a project-level base exception and derive domain-specific exceptions from it:
+
+```python
+class AppError(Exception):
+    """Base class for all application errors."""
+
+class ConfigError(AppError):
+    """Raised when configuration is missing
or invalid.""" + +class DatabaseError(AppError): + """Raised when a database operation fails.""" +``` + +Catch `AppError` at the top level; catch specific subclasses where recovery is possible. + +## Common refactoring patterns + +When refactoring during TDD's REFACTOR phase, apply these transformations: + +- **Extract function** — if a function body exceeds 20 lines or does more than one + thing, extract a named helper. The original function becomes a coordinator. +- **Reduce nesting** — replace `if cond: ... else: ...` with early returns + (`if not cond: return`). Target max 3 levels of indentation. +- **Eliminate duplication** — if the same 3+ line block appears twice, extract it. + For test code, consider `@pytest.mark.parametrize` instead. +- **Replace magic values** — bare literals (`42`, `"pending"`, `0.85`) become named + constants at module level. +- **Simplify conditionals** — use `dict` dispatch or `match` statements instead of + long `if/elif` chains. + +After every refactoring step, run `just test` to confirm tests still pass. + +## `__all__` for public API + +Define `__all__` in every package `__init__.py` to make the public interface explicit: + +```python +# src/mypackage/__init__.py +__all__ = ["AppContext", "configure_logging", "AppError"] +``` + +## structlog context binding + +```python +from my_library.common.logging_manager import bind_context, clear_context + +bind_context(request_id="req-abc", user_id=42) +# ... 
all logs in this scope carry these fields +clear_context() +``` diff --git a/template/.claude/rules/python/security.md b/template/.claude/rules/python/security.md index 580449f..39cef2d 100644 --- a/template/.claude/rules/python/security.md +++ b/template/.claude/rules/python/security.md @@ -6,26 +6,43 @@ ## Secret management +Load secrets from environment variables; never hardcode them: + ```python import os -# Correct: fails immediately with a clear error +# Correct: fails immediately with a clear error if the secret is missing api_key = os.environ["OPENAI_API_KEY"] -# Wrong: silent fallback masks misconfiguration +# Wrong: silently falls back to an empty string, masking misconfiguration api_key = os.environ.get("OPENAI_API_KEY", "") ``` -Use `python-dotenv` for development. Never commit `.env` files. +For development, use `python-dotenv` to load a `.env` file that is listed in `.gitignore`: + +```python +from dotenv import load_dotenv +load_dotenv() # reads .env if present; silently skips if absent +api_key = os.environ["OPENAI_API_KEY"] +``` + +Never commit `.env` files. Add them to `.gitignore` and document required variables in +`.env.example` with placeholder values. ## Input validation -Validate all external inputs at the boundary: +Validate all external inputs at the boundary before passing them into application logic: ```python +# Correct: validate early, raise with context def process_order(order_id: str) -> Order: if not order_id.isalnum(): - raise ValueError(f"Invalid order ID: {order_id!r}") + raise ValueError(f"Invalid order ID format: {order_id!r}") + ... 
+ +# Wrong: trusting input, using it unvalidated +def process_order(order_id: str) -> Order: + return db.query(f"SELECT * FROM orders WHERE id = '{order_id}'") ``` ## SQL injection prevention @@ -40,6 +57,8 @@ cursor.execute(f"SELECT * FROM users WHERE email = '{email}'") ## Path traversal prevention +Sanitise file paths from user input: + ```python from pathlib import Path @@ -50,21 +69,39 @@ if not str(user_path).startswith(str(base_dir)): raise PermissionError("Path traversal detected") ``` +## Static security analysis + +Run **bandit** before release to catch common Python security issues: + +```bash +uv run bandit -r src/ -ll # report issues at medium severity and above +``` + +Bandit is not a substitute for code review, but it catches common patterns like +hardcoded passwords, use of `shell=True` in subprocess calls, and insecure random. + ## Subprocess calls +Avoid `shell=True` when calling external processes; it enables shell injection: + ```python +import subprocess + # Correct: list form, no shell expansion result = subprocess.run(["git", "status"], capture_output=True, check=True) -# Wrong: shell=True + user input = injection risk +# Wrong: shell=True + user-controlled input = injection result = subprocess.run(f"git {user_cmd}", shell=True, check=True) ``` -## Tokens and random values +## Cryptography + +- Do not implement cryptographic primitives. Use `cryptography` or `hashlib` for + standard algorithms. +- Use `secrets` (stdlib) for generating tokens, not `random`. +- Use bcrypt or Argon2 for password hashing; never SHA-256 or MD5 for passwords. ```python import secrets token = secrets.token_urlsafe(32) # cryptographically secure ``` - -Never use `random` for security-sensitive values. diff --git a/template/.claude/rules/python/testing.md b/template/.claude/rules/python/testing.md index 541d036..15823e6 100644 --- a/template/.claude/rules/python/testing.md +++ b/template/.claude/rules/python/testing.md @@ -8,14 +8,79 @@ Use **pytest** exclusively. 
Do not use `unittest.TestCase` for new tests. +## Mandatory pytest markers + +Every test function and test class must have a pytest marker. Use module-level +`pytestmark` for files where all tests share the same marker. + +### Module-level pattern (required in every test file) + +```python +import pytest + +pytestmark = pytest.mark.unit + +def test_example(): + assert 1 + 1 == 2 +``` + +### Marker decision table + +| Situation | Marker | +|---|---| +| Pure logic, no I/O, mocked boundaries | `unit` | +| Touches filesystem, subprocess, or database | `integration` | +| Full application flow with external systems | `e2e` | +| Test that guards a specific fixed bug | `regression` | +| Test that consistently runs over 1 second | `slow` | +| Minimal health check or deploy verification | `smoke` | + +### Available markers + +Defined in `pyproject.toml` under `[tool.pytest.ini_options].markers`: +- `unit` — fast isolated tests, boundaries mocked +- `integration` — cross-module, real I/O, subprocesses +- `e2e` — full stack or external systems +- `regression` — guards a fixed bug +- `slow` — exceeds ~1s +- `smoke` — minimal deploy/health checks + +### Enforcement + +- `--strict-markers` is enabled — any unlisted marker causes a test failure. +- Unmarked tests will fail collection. +- Every new test file **must** set `pytestmark = pytest.mark.` at module level. +- To add a new marker, define it in `pyproject.toml` first. + ## Directory layout ``` tests/ -├── conftest.py -├── test_core.py -└── / - └── test_module.py +├── conftest.py # global fixtures (db, config, factories, etc.) +├── test_imports.py +├── e2e/ +│ └── test_*.py +├── integration/ +│ └── test_*.py +└── unit/ + ├── test_*.py + └── common/ + └── test_*.py # tests for src//common/ modules +``` + +Test files are organised by test type into subdirectories: + +- `tests/unit/` — fast isolated tests, boundaries mocked. +- `tests/integration/` — cross-module, real I/O, subprocesses. 
+- `tests/e2e/` — full application flow with external systems. + +Source-to-test mapping: + +``` +src/<package>/core.py → tests/unit/test_core.py +src/<package>/cli.py → tests/unit/test_cli.py +src/<package>/common/utils.py → tests/unit/common/test_utils.py +src/<package>/common/decorators.py → tests/unit/common/test_decorators.py ``` Files: `test_<module>.py`. Functions: start with `test_`. diff --git a/template/.claude/rules/yaml/conventions.md b/template/.claude/rules/yaml/conventions.md new file mode 100644 index 0000000..0706c20 --- /dev/null +++ b/template/.claude/rules/yaml/conventions.md @@ -0,0 +1,76 @@ +# YAML Conventions + +# applies-to: **/*.yml, **/*.yaml + +YAML files in this repository include `copier.yml`, GitHub Actions workflows +(`.github/workflows/*.yml`), `mkdocs.yml`, and `.pre-commit-config.yaml`. + +## Formatting + +- Indentation: 2 spaces. Never use tabs. +- No trailing whitespace on any line. +- End each file with a single newline. +- Wrap long string values with block scalars (`|` or `>`) rather than quoted strings + when the value spans multiple lines. + +```yaml +# Preferred for multiline strings +description: >- + A long description that wraps cleanly + and is easy to read in source. + +# Avoid +description: "A long description that wraps cleanly and is easy to read in source." +``` + +## Quoting strings + +- Quote strings that contain YAML special characters: `:`, `{`, `}`, `[`, `]`, + `,`, `#`, `&`, `*`, `?`, `|`, `-`, `<`, `>`, `=`, `!`, `%`, `@`, `\`. +- Quote strings that could be misinterpreted as other types (`"true"`, `"1.0"`, + `"null"`). +- Do not quote simple alphanumeric strings unnecessarily. + +## Booleans and nulls + +Use YAML 1.2 style (as recognised by Copier and most modern parsers): +- Boolean: `true` / `false` (lowercase, unquoted). +- Null: `null` or `~`. +- Avoid the YAML 1.1 aliases (`yes`, `no`, `on`, `off`) — they are ambiguous. + +## Comments + +- Use `#` comments to explain non-obvious configuration choices.
+- Separate logical sections with a blank line and a comment header: + +```yaml +# ------------------------------------------------------------------------- +# Post-generation tasks +# ------------------------------------------------------------------------- +_tasks: + - command: uv lock +``` + +## GitHub Actions specific + +- Pin third-party actions to a full commit SHA, not a floating tag: + ```yaml + uses: actions/checkout@v4 # acceptable if you review tag-SHA mapping + uses: actions/checkout@abc1234 # preferred for production workflows + ``` +- Use `env:` at the step or job level for environment variables; avoid top-level `env:` + unless the variable is used across all jobs. +- Name every step with a descriptive `name:` field. +- Prefer `actions/setup-python` with an explicit `python-version` matrix over hardcoded + versions. + +## copier.yml specific + +See [copier/template-conventions.md](../copier/template-conventions.md) for Copier-specific +YAML conventions. Rules here cover general YAML style; Copier semantics are covered there. + +## .pre-commit-config.yaml specific + +- Pin `rev:` to a specific version tag, not `HEAD` or `latest`. +- Group hooks by repository with a blank line between repos. +- List hooks in logical order: formatters before linters, linters before type checkers. 
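Put together, the pinning and ordering rules above can be sketched in a minimal config. The repository URL, `rev` values, and the local type-checker hook below are illustrative, not prescribed versions:

```yaml
repos:
  # Formatters first, then linters (same repo here, so order the hook ids)
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.9            # pinned version tag, never HEAD or latest
    hooks:
      - id: ruff-format    # formatter runs before the linter
      - id: ruff

  # Type checkers last
  - repo: local
    hooks:
      - id: basedpyright
        name: basedpyright
        entry: uv run basedpyright
        language: system
        types: [python]
        pass_filenames: false
```

A local hook is used for the type checker so it runs against the project's own environment rather than an isolated one.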
diff --git a/template/.claude/settings.json b/template/.claude/settings.json index 985743a..cd50eb2 100644 --- a/template/.claude/settings.json +++ b/template/.claude/settings.json @@ -4,6 +4,7 @@ "Bash(just:*)", "Bash(uv:*)", "Bash(copier:*)", + "Bash(gh:*)", "Bash(git status)", "Bash(git diff:*)", "Bash(git log:*)", @@ -92,6 +93,26 @@ } ], "description": "Coverage gate: warn before git commit if test coverage is below 85%" + }, + { + "matcher": "Bash", + "hooks": [ + { + "type": "command", + "command": "bash .claude/hooks/pre-delete-protection.sh" + } + ], + "description": "Block rm of critical files: pyproject.toml, justfile, CLAUDE.md, .pre-commit-config.yaml, .copier-answers.yml, uv.lock, .claude/settings.json" + }, + { + "matcher": "Bash", + "hooks": [ + { + "type": "command", + "command": "bash .claude/hooks/pre-bash-branch-protection.sh" + } + ], + "description": "Block git push to main/master branches — use feature branches and PRs" } ], "PostToolUse": [ @@ -124,6 +145,26 @@ } ], "description": "TDD refactor guard: remind to run tests after multiple src/ edits" + }, + { + "matcher": "Bash", + "hooks": [ + { + "type": "command", + "command": "bash .claude/hooks/post-bash-test-coverage-reminder.sh" + } + ], + "description": "After pytest/just test runs, surface modules below 85% coverage" + }, + { + "matcher": "Write", + "hooks": [ + { + "type": "command", + "command": "bash .claude/hooks/post-write-test-structure.sh" + } + ], + "description": "After creating test files, check for test_ functions, no unittest.TestCase, and proper pytest markers" } ] } diff --git a/template/.claude/skills/claude_commands/SKILL.md b/template/.claude/skills/claude_commands/SKILL.md new file mode 100644 index 0000000..2419ba9 --- /dev/null +++ b/template/.claude/skills/claude_commands/SKILL.md @@ -0,0 +1,293 @@ +--- +name: claude-commands +description: Write, create, edit, manage, and organize .claude/commands and .claude/skills files for Claude Code. 
Use whenever the user wants to create a new slash command, write a custom command, add a command to their project, manage their command library, scaffold a .claude/commands directory, understand command syntax, build a skill file, or set up personal or project-scoped commands. Also trigger when the user says things like "add a /command", "make a slash command", "create a custom command", "set up commands for my project", or "help me write a SKILL.md". +--- + +# Claude Commands & Skills + +This skill helps you write, manage, and organize custom slash commands for Claude Code. It covers both the legacy `.claude/commands/` format and the modern `.claude/skills/` system — both produce `/command-name` shortcuts. + +## Quick orientation + +| Format | Path | Slash command | Extras | +|--------|------|---------------|--------| +| Legacy command | `.claude/commands/deploy.md` | `/deploy` | Simple, works everywhere | +| Modern skill | `.claude/skills/deploy/SKILL.md` | `/deploy` | Supports supporting files, auto-invocation | +| Personal command | `~/.claude/commands/deploy.md` | `/deploy` | Available across all projects | +| Personal skill | `~/.claude/skills/deploy/SKILL.md` | `/deploy` | Available across all projects | + +**Rule of thumb**: Use `.claude/commands/` for quick, simple prompts. Graduate to `.claude/skills/` when you need supporting files (scripts, templates, examples) or want Claude to auto-invoke it from natural language. + +If both a command and skill share a name, the skill takes precedence. + +--- + +## Anatomy of a command file + +Every command is a Markdown file. The filename (without `.md`) becomes the slash command name. + +``` +.claude/commands/ +├── commit.md → /commit +├── review.md → /review +├── fix-issue.md → /fix-issue +└── git/ + ├── push.md → /git:push + └── sync.md → /git:sync +``` + +Subdirectories create namespaced commands using `:` as the separator. Use namespaces to group related commands and avoid collisions. 
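For orientation, a complete command file can be this small. The filename and prompt below are illustrative; the `description` frontmatter is optional but recommended:

```markdown
---
description: Summarize the current repository state in one short paragraph
---

Look at the current git branch, staged and unstaged changes, and recent
commits. Summarize what appears to be in progress in one short paragraph,
flagging anything unusual (detached HEAD, merge in progress, large diff).
```

Saved as `.claude/commands/repo-summary.md`, this file is invoked as `/repo-summary`.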
+ +--- + +## Frontmatter fields + +All frontmatter is optional, but `description` is strongly recommended. Place frontmatter between `---` markers at the top of the file. + +```markdown +--- +name: my-command # Override display name (default: filename) +description: One-line summary of what this command does and when to use it +argument-hint: [file] [flags] # Shown in autocomplete as usage hint +allowed-tools: Read Grep Bash(git *) # Pre-approve tools (space-separated) +disable-model-invocation: true # Prevent Claude from auto-triggering this +model: claude-opus-4-6 # Pin to a specific model +context: fork # Run in subagent: "fork" | "agent" | inline (default) +when_to_use: | # Extended trigger context for auto-invocation + Use when the user mentions deployments, releases, or shipping code. +--- +``` + +See `references/frontmatter-reference.md` for the full field reference with all options and caveats. + +--- + +## Argument handling + +### All arguments as one string: `$ARGUMENTS` + +```markdown +--- +argument-hint: <branch-name> +--- +Create a new feature branch named `$ARGUMENTS` from main: +1. git checkout main && git pull +2. git checkout -b feature/$ARGUMENTS +3. Confirm the branch was created +``` + +Invoked as: `/branch my-feature` → `$ARGUMENTS` = `"my-feature"` + +### Positional arguments: `$1`, `$2`, ... + +```markdown +--- +argument-hint: <issue-number> <priority> +--- +Fix GitHub issue #$1 with priority $2. +1. Read the issue description +2. Implement the fix +3. Write a test +4. Commit with message: "fix(#$1): [description] - priority $2" +``` + +Invoked as: `/fix-issue 42 high` → `$1`=`"42"`, `$2`=`"high"` + +### No arguments + +Some commands need no arguments at all — they operate on the current context (open file, git state, etc.). + +```markdown +Run a security audit of this codebase: +1. Check for hardcoded secrets using `grep -r "api_key\|password\|secret" --include="*.ts"` +2. Scan for SQL injection vectors +3. List all external HTTP calls and their destinations +4.
Report findings grouped by severity +``` + +--- + +## Bash injection (dynamic context) + +When `allowed-tools` includes `Bash`, you can inject live shell output into the prompt using `` !`command` `` syntax. The output is inserted before Claude sees the prompt. + +```markdown +--- +allowed-tools: Read Grep Bash(git *) +description: Review staged changes before committing +--- + +## Current branch +!`git branch --show-current` + +## Staged diff +!`git diff --cached` + +## Files changed +!`git diff --cached --name-only` + +Review the staged changes above. Check for: +1. Obvious bugs or logic errors +2. Missing error handling +3. Hardcoded values that should be env vars +4. Tests that should accompany these changes + +Be concise and actionable. +``` + +**Security note**: `Bash(git *)` restricts bash to only `git` commands. Always scope `Bash(...)` as narrowly as possible. + +--- + +## File and context references + +### Reference a file with `@` + +```markdown +Analyze @src/auth/login.ts and identify all security vulnerabilities. +``` + +### Dynamic file reference via argument + +```markdown +--- +argument-hint: <file> +--- +Review the file @$ARGUMENTS for: +- Code quality issues +- Missing error handling +- Performance concerns +- Documentation gaps +``` + +### Read file contents at run time + +```markdown +--- +allowed-tools: Read +--- + +Read the file at path: $ARGUMENTS + +Then refactor it to: +1. Extract magic numbers into named constants +2. Add JSDoc comments to all exported functions +3.
Replace any `var` declarations with `const` or `let` +``` + +--- + +## Command categories & patterns + +See `references/command-patterns.md` for full examples organized by category: +- **Git workflows**: commit, pr, sync, changelog +- **Code quality**: review, lint-fix, refactor, test-gen +- **Project management**: plan, estimate, standup +- **Documentation**: doc-gen, readme-update, api-docs +- **Debugging**: trace, explain-error, profile +- **Onboarding**: orientation, architecture-tour + +--- + +## Writing effective commands + +### 1. Front-load the action + +Start with the verb. Claude reads the beginning first. + +```markdown +# ✓ Good — action is clear immediately +Generate a commit message for the staged changes... + +# ✗ Weak — buries the action +You are a helpful assistant. When the user runs this command, your job is to... +``` + +### 2. Explain the *why*, not just the *what* + +Commands that explain reasoning are more robust than rigid checklists: + +```markdown +# ✓ Good — explains purpose +Review this PR focusing on correctness and security. We care less about style +(linting handles that) and more about logic errors, missing edge cases, and +anything that could fail in production. + +# ✗ Brittle — mechanical list with no context +1. Check variable names +2. Check function names +3. Check comments +4. Check imports +``` + +### 3. Scope `allowed-tools` tightly + +```markdown +# ✓ Tight — only git read commands +allowed-tools: Bash(git log:*) Bash(git diff:*) Bash(git show:*) + +# ✗ Wide — grants all bash +allowed-tools: Bash +``` + +### 4. Use `disable-model-invocation` for side-effect commands + +Any command that writes, deploys, or sends data should not auto-trigger: + +```markdown +--- +disable-model-invocation: true +allowed-tools: Bash(npm *) Bash(git push:*) +--- +Deploy to production... +``` + +### 5. Keep commands single-purpose + +One command, one job. Chain them at the call site (`/commit` then `/pr`), not inside the command file. + +### 6. 
Namespace related commands + +``` +.claude/commands/ +├── db/ +│ ├── migrate.md → /db:migrate +│ ├── seed.md → /db:seed +│ └── rollback.md → /db:rollback +└── deploy/ + ├── staging.md → /deploy:staging + └── production.md → /deploy:production +``` + +--- + +## Quick scaffolding workflow + +When asked to create commands for a project: + +1. **Audit the project** — check for existing `.claude/commands/`, `CLAUDE.md`, `package.json` scripts, common dev tasks +2. **Identify repetitive workflows** — what does the developer run / explain repeatedly? +3. **Choose scope** — project-level (`.claude/`) for team commands, personal (`~/.claude/`) for individual shortcuts +4. **Write the commands** — start from the template in `templates/command-template.md` +5. **Name deliberately** — short, verb-first, kebab-case: `gen-types`, `fix-lint`, `check-deps` +6. **Test by invocation** — mentally trace what Claude will see after bash injection and argument substitution + +--- + +## Managing an existing command library + +When asked to audit, reorganize, or improve existing commands: + +1. **Read all files** in `.claude/commands/` (and `~/.claude/commands/` if relevant) +2. **Check for**: duplicate commands, overly broad `allowed-tools`, missing `description` fields, commands that should be namespaced +3. **Suggest graduation** — commands that have grown complex enough for supporting files → migrate to `.claude/skills/` +4. **Verify no conflicts** — if a skill and command share a name, the skill wins silently +5. 
**Update `CLAUDE.md`** to list available commands so Claude (and humans) know they exist + +--- + +## Reference files + +- `references/frontmatter-reference.md` — Complete frontmatter field docs with all options +- `references/command-patterns.md` — Ready-to-use command examples by category +- `templates/command-template.md` — Annotated template for writing a new command from scratch diff --git a/template/.claude/skills/claude_commands/claude-commands.skill b/template/.claude/skills/claude_commands/claude-commands.skill new file mode 100644 index 0000000000000000000000000000000000000000..b7bf0fe996d422425d4fbde3c2f747b81f8bba2b GIT binary patch literal 15481
za!s5jb}_IeoN()a>*39J>@NNnKF(?6U1pH>wV2GZht=+zR_d{5Dt!1a=@-vkf@)c) z-o+dHR!WUw3DDPr*j&XQj;-Z|j;R}9bCv0;)_r9TnGbYHM&Siep&8a=K{zXY1AgM~ zZ-YFzcXQ>!19@lU<;?q!r(wW1@yFKoM2jd8-ck73%N4y9!EDAVCn4c)b=!AMRt4j+ zdTpL8a_LzOXd))ow}>a7f(%8WkA;x!n5MF5=mf6=`l#fWg@E~Tv*6YKYiK(2QGU!c z043%zWGU$DV(+u!{cKV_35cw6qc_R|WbM}~R&8{=QLZ@8( z55z>@Zmuu-b71SM+c`s%^thO9EE!IzSt(^TZBKg-mvqW*wolj1nB;4;tW@orY0 zjUnD@fTu!TC|mG6gwd0piB*eYNd2ACmKFgYrhW+vG_(znYpI5cce#9i4US!9p|%6i|rkd z`J{Q(`vWgCF_Wb+LJXN0;!<=%THH<{hxXudu1{Pk)bpfjHQAUpKXMtW%DioH*0DTg zGWdEIqQ6`nR<2Nal}iU+u-7oim~$Y;98ZdnP<1b^$eMO5VD~t3OLR2vtDiB_#lz9f z&B0GJk0GwRDC7Q#n`S#Rf|^nZ8{5IjFds8~Jh5?G0_g^=e|%^Q(!VAbBx>tIMI5`efiU zE+p6kcB^FkmPN-s)g*-ip#b8IbBhq9~9>EzCNehRRvV+jk}&uyz(B2l=^+a`5fcK$4beCw#-Fk zpEMB(jDNP86(!%DWmT?=#Dz}DD<=sruxE23KRKp4ht_7YWa3Sw*KCA$M@XS`Z1GQ} z+9xtm1%oS>hn0S}OP@a}3pE@hWafEbpk~48N)UR{xxT8y1X)zVL+R;qkp=p8_ zzPi?8XPF-&W(bjyi_{UVx9)aKxOh7K#2oGXj*qzNJ)vuT37cE$*SnzS4GDu`66Om4 z`;Btcx*j~AoWqYwZDB}ZCTkVqoMoT`cMSLUEyKKZOGRy>JnjexMO0D zdDK=5<+H)06Xtb0RL@oRktW5h-ugiE){_ZMMMbIgB}d>EW@=j75C%66+{Knqk&YIa z7g@YPertR((&t0zjlk~OyWogl?pw_G@#)gsB`p1h0e+iUJtuOxJcV;|vvAej8FEZF z1;@H?ULG8ShV9M8#6W?ef`r1?FP|Ooey$9AoS&& z5A*Bbpf@HZ+nW6t&Z&-Z-h80+FKlVm@mJ_=%qR{qh+51e4eKG~3XDm7CDFkS zd|qE}GnvEQE?7v339o@=GQXqdf|P+2PnDBVpwNmOykstNLvSeTAkp~h%A7d0UmcAo zODv9x_yBRgbwpS-hN#P-K7>bfFBvZwd!g3YX=BS;p>U)fXYdJ%9NyQWg2y!ZV%yZ+ zr^vYoW?2xy$QMYJR-SRFfWiyy8=;~MFbDzYUyVRw{bLvs3Ft!T&&$6Why?%pK;*ya zioaw3)i~tev4Q_UEecfpOYHw1j{M&^{;C193yuzuGbXp!l;L^FN01SIwCJ zN|EuG6#vyI`Dbh9A2k1=r24Pf`k#%N|C+7){|n7uw`u-C{~ua3|CRoKR*?Udehlxg z68SGt{!@bffA!@5E%3kR_kZf>f6XsH(_b_5W$kHp{*fBtr${y2<4fq;Oy{=EBtoD!|= literal 0 HcmV?d00001 diff --git a/template/.claude/skills/claude_commands/references/command-patterns.md b/template/.claude/skills/claude_commands/references/command-patterns.md new file mode 100644 index 0000000..03ca500 --- /dev/null +++ b/template/.claude/skills/claude_commands/references/command-patterns.md @@ -0,0 +1,448 @@ +# Command Patterns Reference + +Ready-to-use 
command examples organized by category. Copy, adapt, and drop into `.claude/commands/`. + +--- + +## Git Workflows + +### `/commit` — Smart commit message generator + +```markdown +--- +description: Generate a conventional commit message from staged changes +allowed-tools: Bash(git diff:*) Bash(git log:*) +--- + +## Staged changes +!`git diff --cached` + +## Recent commit style (for reference) +!`git log --oneline -5` + +Generate a conventional commit message for the staged changes above. + +Format: `(): ` + +Types: feat, fix, docs, style, refactor, perf, test, chore, ci + +Rules: +- Subject line under 72 chars +- Use imperative mood ("add" not "added") +- If the change is complex, add a blank line then a short body +- Do NOT include "Co-authored-by" or AI attribution lines + +Output only the commit message, nothing else. +``` + +### `/pr` — PR description writer + +```markdown +--- +description: Write a pull request title and description from branch changes +argument-hint: [base-branch] +allowed-tools: Bash(git log:*) Bash(git diff:*) +--- + +## Branch changes vs $ARGUMENTS +!`git log $ARGUMENTS..HEAD --oneline 2>/dev/null || git log main..HEAD --oneline` + +## Full diff +!`git diff $ARGUMENTS..HEAD 2>/dev/null || git diff main..HEAD` + +Write a GitHub pull request description. + +Structure: +## What +One sentence: what does this PR do? + +## Why +Why is this change needed? What problem does it solve? + +## How +Brief explanation of the approach taken (skip if obvious from the diff). + +## Testing +How was this tested? What should reviewers verify? + +Keep it factual and concise. No fluff. 
+``` + +### `/changelog` — Changelog entry from commits + +```markdown +--- +description: Generate a changelog entry from recent commits +argument-hint: [from-tag-or-commit] +allowed-tools: Bash(git log:*) Bash(git tag:*) +--- + +## Recent commits +!`git log $ARGUMENTS..HEAD --pretty=format:"%h %s" 2>/dev/null || git log --oneline -20` + +## Latest tags +!`git tag --sort=-version:refname | head -5` + +Generate a changelog entry in Keep a Changelog format (https://keepachangelog.com). + +Group commits under: +- **Added** — new features +- **Changed** — changes to existing functionality +- **Fixed** — bug fixes +- **Removed** — removed features +- **Security** — security fixes + +Skip chore/ci/style commits. Use user-facing language, not technical jargon. +``` + +--- + +## Code Quality + +### `/review` — In-depth code review + +```markdown +--- +description: Thorough code review of a file or directory +argument-hint: <file-or-directory> +allowed-tools: Read Grep Glob +disable-model-invocation: true +model: claude-opus-4-6 +--- + +Review the code at @$ARGUMENTS + +Focus areas (in priority order): +1. **Correctness** — Does the logic match the intent? Any edge cases missed? +2. **Security** — Input validation, auth checks, injection risks, exposed data +3. **Error handling** — Unhandled exceptions, missing null checks, silent failures +4. **Performance** — Unnecessary work, blocking calls, memory leaks +5. **Readability** — Is the intent clear without over-commenting? + +For each issue found: +- Cite the exact line(s) +- Explain the problem +- Suggest the fix + +Skip pure style issues (naming conventions, formatting) — those belong in linting config. +``` + +### `/refactor` — Targeted refactor + +```markdown +--- +description: Refactor a file or function for clarity and maintainability +argument-hint: <file> [focus-area] +allowed-tools: Read Edit Write +--- + +Refactor @$1 + +Goal: improve readability and maintainability without changing external behavior. + +Steps: +1.
Read the full file first +2. Identify: magic numbers, duplicated logic, deeply nested conditionals, long functions +3. Apply refactors one at a time +4. Verify the public API / exports are unchanged +5. Add or update inline comments where the logic is genuinely non-obvious + +Do NOT: change algorithms, rename public exports, or add new features. +Show a summary of what you changed and why. +``` + +### `/test-gen` — Generate missing tests + +```markdown +--- +description: Generate unit tests for a file or function +argument-hint: <file> +allowed-tools: Read Grep Glob Write +--- + +Generate tests for @$ARGUMENTS + +Steps: +1. Read the source file to understand what needs testing +2. Check for existing test files (`*.test.*`, `*.spec.*`, `__tests__/`) +3. Identify the testing framework in use (Jest, Vitest, pytest, etc.) +4. Write tests that cover: + - Happy path for each exported function + - Edge cases: empty input, null, zero, max values + - Error cases: invalid input, missing required fields + - Any business logic with multiple branches + +Match the existing test style (imports, assertion style, describe/it vs test). +Place tests in the appropriate file/directory for this project. +``` + +--- + +## Debugging + +### `/trace` — Trace an error to its root + +```markdown +--- +description: Debug an error message or unexpected behavior +argument-hint: <error-or-description> +allowed-tools: Read Grep Glob Bash(git log:*) +--- + +Debug this issue: $ARGUMENTS + +Investigation steps: +1. Search for the error string or relevant keywords in the codebase +2. Trace the call stack from the error site upward +3. Identify what data/state could cause this +4. Check recent changes to relevant files: `git log --oneline -10 -- <file>` +5. Propose the most likely cause and a fix + +Be systematic. Show your reasoning. If multiple causes are possible, rank them.
+```
+
+### `/explain-error` — Plain-language error explanation
+
+````markdown
+---
+description: Explain what an error means and how to fix it
+argument-hint: <error-text>
+---
+
+Explain this error in plain language:
+
+```
+$ARGUMENTS
+```
+
+Tell me:
+1. What this error means
+2. What usually causes it
+3. How to fix it (most common fix first)
+4. How to prevent it going forward
+
+Assume I understand the language but may not know this specific error.
+````
+
+---
+
+## Project Management
+
+### `/standup` — Generate standup notes
+
+```markdown
+---
+description: Generate yesterday/today/blockers from recent git activity
+allowed-tools: Bash(git log:*) Bash(git branch:*) Bash(git status:*)
+---
+
+## My recent commits (last 2 days)
+!`git log --author="$(git config user.name)" --since="2 days ago" --oneline`
+
+## Current branch and status
+!`git branch --show-current`
+!`git status --short`
+
+Generate a standup update from the activity above.
+
+Format:
+**Yesterday**: What I worked on
+**Today**: What I plan to work on
+**Blockers**: Anything blocking me (say "None" if nothing)
+
+Keep it brief — 2-3 bullet points per section max.
+Use natural language, not commit message syntax.
+```
+
+### `/estimate` — Task size estimation
+
+```markdown
+---
+description: Estimate effort for a task or feature
+argument-hint: <task-description>
+---
+
+Estimate the effort for: $ARGUMENTS
+
+Given this codebase, provide:
+1. **T-shirt size**: XS / S / M / L / XL
+2. **Hour range**: rough low-high estimate
+3. **Key unknowns**: what would make this take longer?
+4. **Suggested breakdown**: 3-5 subtasks
+
+Be honest about uncertainty. If you need to see specific files to estimate, ask.
+```
+
+---
+
+## Documentation
+
+### `/doc-gen` — Generate JSDoc / docstrings
+
+```markdown
+---
+description: Add documentation comments to a file
+argument-hint: <file>
+allowed-tools: Read Edit
+---
+
+Add documentation to @$ARGUMENTS
+
+Rules:
+- Add JSDoc (JS/TS), docstrings (Python), or equivalent for the language
+- Document every exported function, class, and type
+- Each doc must include: description, @param (with types), @returns, @throws (if applicable)
+- For complex functions, add a one-line @example
+- Do NOT document obvious internal helpers — only public API surfaces
+- Preserve all existing code exactly; only add/modify comments
+```
+
+### `/readme-update` — Sync README with codebase
+
+```markdown
+---
+description: Update README to match current codebase state
+allowed-tools: Read Glob Edit
+---
+
+Review the README and update it to match the current codebase.
+
+Steps:
+1. Read README.md
+2. Read package.json or equivalent for actual scripts/dependencies
+3. Check entry points and key files
+4. Update sections that are stale:
+   - Installation instructions
+   - Available scripts / commands
+   - Configuration options
+   - API reference (if present)
+
+Keep the existing tone and structure. Only change what's factually incorrect or missing.
+Flag any sections you're unsure about rather than guessing.
+```
+
+---
+
+## Onboarding & Navigation
+
+### `/architecture` — Explain project structure
+
+```markdown
+---
+description: Explain the codebase architecture and key components
+allowed-tools: Read Glob Grep
+---
+
+Explain the architecture of this codebase.
+
+Cover:
+1. **Purpose**: What does this project do?
+2. **Structure**: Major directories and what lives in each
+3. **Entry points**: Where does execution begin? (main file, API routes, etc.)
+4. **Key layers**: How is the code layered? (e.g., API → service → repository)
+5. **Data flow**: How does a typical request/operation flow through the system?
+6.
**External dependencies**: Key libraries, databases, services
+
+Draw an ASCII diagram showing how components relate.
+Flag anything unusual or that took you a moment to understand — that's what new devs will also find confusing.
+```
+
+### `/find` — Find where something is implemented
+
+```markdown
+---
+description: Find where a feature, function, or concept is implemented
+argument-hint: <term-or-concept>
+allowed-tools: Grep Glob Read
+---
+
+Find where "$ARGUMENTS" is implemented in this codebase.
+
+Search strategy:
+1. Grep for the exact term and close synonyms
+2. Check index/barrel files for exports
+3. Look at the most likely directories based on project structure
+4. Trace from entry points if needed
+
+Report:
+- The primary file(s) where this lives
+- Any related files (tests, types, config)
+- A brief explanation of how it works
+```
+
+---
+
+## Security
+
+### `/security-scan` — Vulnerability audit
+
+```markdown
+---
+description: Scan codebase for common security vulnerabilities
+allowed-tools: Read Grep Glob Bash(git log:*)
+disable-model-invocation: true
+model: claude-opus-4-6
+context: fork
+---
+
+Perform a security audit of this codebase.
+
+Check for:
+1. **Secrets/credentials** — hardcoded API keys, passwords, tokens
+2. **Injection risks** — SQL, command, template injection
+3. **Auth gaps** — unprotected endpoints, missing authorization checks
+4. **Input validation** — untrusted data used without sanitization
+5. **Dependency risks** — check package.json for known vulnerable patterns
+6. **XSS** (if web app) — unescaped user content rendered as HTML
+7. **CORS/CSP misconfiguration** — overly permissive origins
+
+For each finding:
+- Severity: Critical / High / Medium / Low
+- File and line number
+- Description of the risk
+- Recommended fix
+
+Report "None found" for categories with no issues.
+```
+
+---
+
+## Workflow Recipes
+
+### Multi-step: "ship it" workflow
+
+```markdown
+---
+name: ship
+description: "Run full pre-ship checklist: tests, lint, build, then commit and PR"
+allowed-tools: Bash(npm *) Bash(git *)
+disable-model-invocation: true
+---
+
+Run the complete ship workflow:
+
+1. **Lint**: `npm run lint` — fix any errors, warn about warnings
+2. **Test**: `npm test` — do not proceed if tests fail
+3. **Build**: `npm run build` — confirm it compiles cleanly
+4. **Commit**: Generate and apply a conventional commit for staged changes
+5. **Push**: `git push origin HEAD`
+6. **PR**: Summarize what's in this branch for a PR description
+
+Stop and report if any step fails. Do not continue past failures.
+```
+
+### Template command for project-specific tasks
+
+```markdown
+---
+name: task-name
+description: [One line: what it does. When to use it.]
+argument-hint: [arg description if needed]
+allowed-tools: Read # add only what's needed
+disable-model-invocation: true # if has side effects
+---
+
+[Main prompt here]
+
+$ARGUMENTS
+```
diff --git a/template/.claude/skills/claude_commands/references/frontmatter-reference.md b/template/.claude/skills/claude_commands/references/frontmatter-reference.md
new file mode 100644
index 0000000..e606f45
--- /dev/null
+++ b/template/.claude/skills/claude_commands/references/frontmatter-reference.md
@@ -0,0 +1,219 @@
+# Frontmatter Reference
+
+Complete reference for all YAML frontmatter fields in `.claude/commands/*.md` and `.claude/skills/*/SKILL.md` files.
+
+---
+
+## `name`
+
+**Type**: string
+**Default**: directory name (for skills) or filename without `.md` (for commands)
+**Constraints**: lowercase letters, numbers, hyphens only; max 64 characters
+
+Overrides the display name and the `/slash-command` trigger. Usually you want the filename to be the name, so this field is rarely needed.
+
+```yaml
+name: deploy-prod
+```
+
+---
+
+## `description`
+
+**Type**: string
+**Default**: first paragraph of markdown body
+**Recommended**: always include
+
+The primary text Claude reads to decide whether to auto-invoke this skill. Also shown in `/help` listings and autocomplete.
+
+- Front-load the key use case — descriptions are truncated at ~1,536 chars in the skill listing
+- Be specific about *when* to use it, not just *what* it does
+- For `disable-model-invocation: true` commands, you can be terser since Claude never auto-triggers it
+
+```yaml
+description: Generate a conventional commit message from staged changes. Use when committing code or when the user asks to "commit", "save changes", or "create a commit".
+```
+
+---
+
+## `when_to_use`
+
+**Type**: string (supports multi-line with `|`)
+**Default**: none
+
+Additional context for auto-invocation. Extends `description` without cluttering it. Useful for listing synonyms, edge cases, and trigger phrases.
+
+```yaml
+when_to_use: |
+  Also trigger when user says: "ship it", "push this", "make a commit",
+  "save my work", "check in these changes".
+  Do NOT trigger for git operations that aren't commits (pull, push, branch).
+```
+
+---
+
+## `argument-hint`
+
+**Type**: string
+**Default**: none
+
+Hint shown in the autocomplete dropdown describing expected arguments. Purely cosmetic — does not validate or parse arguments.
+
+```yaml
+argument-hint: <file> [--strict]
+```
+
+Common conventions:
+- `<required>` — required positional argument
+- `[optional]` — optional argument
+- `<arg>...` — variadic (multiple values)
+- `key=<value>` — key-value pair
+
+---
+
+## `allowed-tools`
+
+**Type**: space-separated string
+**Default**: none (uses session permissions)
+
+Pre-approves specific tools so Claude can use them without per-call confirmation when this command is active. This does NOT grant tools not already available in the session — it only pre-approves prompts for tools that are available.
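+
+As a concrete sketch (the command itself is illustrative, not a built-in), a
+history-inspection command might pre-approve only read tools and one scoped
+Bash pattern:
+
+```yaml
+allowed-tools: Read Grep Bash(git log:*)
+```
+
+Anything outside that list (for example `Bash(rm *)`) still goes through the
+normal session permission prompt.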
+ +### Tool identifiers + +```yaml +# File tools +allowed-tools: Read Write Edit + +# Search tools +allowed-tools: Grep Glob + +# Combined +allowed-tools: Read Grep Glob + +# Bash — ALWAYS scope with a pattern +allowed-tools: Bash(git *) # All git subcommands +allowed-tools: Bash(npm run:*) # Only npm run scripts +allowed-tools: Bash(pytest *) # Only pytest +allowed-tools: Bash(git log:*) Bash(git diff:*) # Multiple bash patterns +``` + +### Bash scoping patterns + +`Bash(command *)` — all arguments to `command` +`Bash(command subcommand:*)` — only `command subcommand` and its args +`Bash(command subcommand flag)` — exact command only + +**Best practice**: never use bare `Bash` without a scope pattern for production commands. It grants unrestricted shell access. + +--- + +## `disable-model-invocation` + +**Type**: boolean +**Default**: `false` + +When `true`, prevents Claude from auto-triggering this command based on context. The user must invoke it explicitly with `/command-name`. + +Use for any command with side effects: + +```yaml +disable-model-invocation: true +``` + +Good candidates: +- Deployment commands (`/deploy`, `/release`) +- Database mutations (`/db:migrate`, `/db:seed`) +- Git push/publish commands +- Any command that sends external requests +- Destructive operations (`/db:reset`, `/clean`) + +--- + +## `model` + +**Type**: string +**Default**: session default (usually Sonnet) + +Pins this command to a specific Claude model. Useful when a command requires Opus for complex reasoning, or Haiku for fast/cheap operations. + +```yaml +model: claude-opus-4-6 # Best reasoning, slower, pricier +model: claude-sonnet-4-6 # Balanced (session default) +model: claude-haiku-4-5-20251001 # Fast, cheap, good for simple tasks +``` + +--- + +## `context` + +**Type**: string +**Default**: inline (runs in current conversation) + +Controls where the skill executes. 
+ +| Value | Behavior | +|-------|----------| +| (none / inline) | Runs in the current conversation context | +| `fork` | Runs in a new subagent with a copy of current context; results summarized back | +| `agent` | Runs in a fresh subagent with no inherited context | + +```yaml +context: fork # Good for long-running analysis you want isolated +context: agent # Good for clean-slate tasks (security scan, full review) +``` + +`fork` is the right choice for most complex commands — the subagent can do deep work without polluting your conversation context. + +--- + +## Complete example: production-ready command + +```yaml +--- +name: pr-review +description: > + Comprehensive pull request review covering correctness, security, and + performance. Use when reviewing a PR, before merging, or when asked to + "review these changes" or "check this PR". +argument-hint: [branch-or-diff-ref] +allowed-tools: Read Grep Glob Bash(git log:*) Bash(git diff:*) +disable-model-invocation: false +model: claude-opus-4-6 +context: fork +when_to_use: | + Trigger for: "review this PR", "check my changes", "look at this diff", + "is this ready to merge", "code review". + Do not trigger for: simple file edits, asking about code without a review intent. +--- + +## Changed files +!`git diff --name-only $ARGUMENTS..HEAD 2>/dev/null || git diff --cached --name-only` + +## Diff +!`git diff $ARGUMENTS..HEAD 2>/dev/null || git diff --cached` + +Review the diff above. Prioritize: +1. **Correctness** — logic errors, off-by-ones, missing edge cases +2. **Security** — injection risks, exposed secrets, auth bypasses +3. **Error handling** — unhandled exceptions, missing null checks +4. **Performance** — N+1 queries, unnecessary re-renders, blocking calls + +Skip style comments — linting handles those. Be specific and actionable. +Group findings by file, severity first. 
+```
+
+---
+
+## Minimal viable command (no frontmatter)
+
+When you just want a reusable prompt with no configuration, you can omit frontmatter entirely:
+
+```markdown
+Analyze this codebase and explain its architecture:
+1. Identify the main layers (API, business logic, data access)
+2. List the key entry points
+3. Draw an ASCII diagram of how components relate
+4. Highlight any unusual patterns or technical debt
+```
+
+Save as `.claude/commands/architecture.md` → `/architecture`
diff --git a/template/.claude/skills/claude_commands/templates/command-template.md b/template/.claude/skills/claude_commands/templates/command-template.md
new file mode 100644
index 0000000..2f926a9
--- /dev/null
+++ b/template/.claude/skills/claude_commands/templates/command-template.md
@@ -0,0 +1,159 @@
+# Command Template
+
+Use this annotated template when writing a new `.claude/commands/*.md` file from scratch.
+Delete the comment lines (`<!-- … -->`) before saving your command.
+
+---
+
+```markdown
+---
+<!-- Strongly recommended: drives auto-invocation and the /help listing -->
+description: [Action verb] [what it does]. Use when [trigger conditions].
+
+<!-- Optional: autocomplete hint; cosmetic only -->
+argument-hint: [optional-arg]
+
+<!-- Optional: pre-approve only the tools this command needs -->
+allowed-tools: Read Grep
+
+<!-- Set true for any command with side effects -->
+disable-model-invocation: true
+
+<!-- Optional: pin a specific model -->
+# model: claude-sonnet-4-6
+
+<!-- Optional: run isolated in a subagent -->
+# context: fork
+---
+
+<!-- Body: the prompt Claude receives when the command runs -->
+[Primary action statement — what Claude should do]
+
+<!-- Use $ARGUMENTS (all args) or $1, $2 (positional) to reference user input -->
+
+Steps:
+1. [First action]
+2. [Second action]
+3. [What to report / output format]
+```
+
+---
+
+## Template variants
+
+### Minimal (no frontmatter needed)
+
+```markdown
+[Action]. $ARGUMENTS
+
+1. [Step]
+2. [Step]
+3. Report: [format]
+```
+
+### With bash context injection
+
+```markdown
+---
+allowed-tools: Bash(git log:*) Bash(git diff:*)
+description: [description]
+---
+
+## Context
+!`git log --oneline -10`
+!`git diff HEAD~1`
+
+[Instructions using the injected context above]
+```
+
+### Positional args
+
+```markdown
+---
+argument-hint: <source-file> <target-format>
+allowed-tools: Read Write
+---
+
+Convert @$1 to $2 format.
+
+1. Read the source file
+2. Transform to the target format
+3.
Write the result to $1.$2 (same name, new extension) +4. Confirm the output path +``` + +### Subagent + disable auto-invoke + +```markdown +--- +description: [Destructive or long-running operation] +disable-model-invocation: true +context: fork +allowed-tools: Bash(npm *) Bash(git *) +--- + +[Instructions for a long-running or side-effect operation] + +Important: If any step fails, stop immediately and report what failed. +Do not attempt to continue past errors. +``` + +--- + +## Naming conventions + +| Pattern | Example | Use for | +|---------|---------|---------| +| `verb-noun` | `gen-types`, `fix-lint` | Single-purpose actions | +| `noun` | `commit`, `review`, `deploy` | Well-known workflows | +| `namespace/verb` | `db/migrate`, `deploy/staging` | Grouped commands | +| `check-noun` | `check-deps`, `check-types` | Read-only audits | + +Avoid: +- Names that shadow built-ins: `help`, `clear`, `compact`, `init` +- Generic names: `run`, `do`, `execute`, `task` +- Camel case: use `fix-lint` not `fixLint` diff --git a/template/.claude/skills/claude_hooks/SKILL.md b/template/.claude/skills/claude_hooks/SKILL.md new file mode 100644 index 0000000..b4dfb77 --- /dev/null +++ b/template/.claude/skills/claude_hooks/SKILL.md @@ -0,0 +1,428 @@ +--- +name: claude-hooks +description: > + Write, manage, and debug .claude/hooks — Claude Code's lifecycle automation system. + Use this skill whenever the user mentions hooks, .claude/settings.json hook config, + PreToolUse, PostToolUse, SessionStart, Stop, or any Claude Code hook event. Also + trigger for: "automate Claude Code", "enforce rules in Claude Code", "run a script + when Claude edits a file", "block dangerous commands", "notify me when Claude finishes", + "hook into Claude Code", or any request to make Claude Code run custom scripts + automatically. If the user is working inside a .claude/ directory or discussing + Claude Code automation at all — use this skill. 
+--- + +# Claude Code Hooks Skill + +Hooks let you attach shell commands, HTTP endpoints, LLM prompts, or subagents to +specific moments in Claude Code's lifecycle — turning guidelines into guaranteed, +deterministic enforcement. + +## Quick mental model + +``` +Claude Code event fires + → matcher filter (tool name / session type / etc.) + → if condition (optional fine-grained filter) + → hook handler runs (command / http / prompt / agent) + → exit code + JSON output control what happens next +``` + +--- + +## 1. Where to put hooks + +| File | Scope | Commit? | +|------|-------|---------| +| `~/.claude/settings.json` | All your projects | No — machine-local | +| `.claude/settings.json` | This project | Yes — share with team | +| `.claude/settings.local.json` | This project | No — gitignored | +| Plugin `hooks/hooks.json` | When plugin enabled | Yes — bundled | + +**Best practice:** keep project-wide enforcement (linting, security gates) in +`.claude/settings.json`. Keep personal preferences (notifications, logging) in +`~/.claude/settings.json`. Never commit secrets — use `.claude/settings.local.json` +for hooks that reference API keys. + +**Always reference scripts via `$CLAUDE_PROJECT_DIR`** to avoid path breakage when +Claude Code changes working directory: + +```json +"command": "\"$CLAUDE_PROJECT_DIR\"/.claude/hooks/my-script.sh" +``` + +Store hook scripts in `.claude/hooks/` and make them executable (`chmod +x`). + +--- + +## 2. Configuration schema + +```json +{ + "hooks": { + "PreToolUse": [ + { + "matcher": "Bash", + "hooks": [ + { + "type": "command", + "command": "\"$CLAUDE_PROJECT_DIR\"/.claude/hooks/validate-bash.sh", + "timeout": 10, + "statusMessage": "Validating command…" + } + ] + } + ] + } +} +``` + +Three nesting levels: +1. **Hook event** — lifecycle point (`PreToolUse`, `Stop`, etc.) +2. **Matcher group** — regex/string filter for when it fires +3. 
**Hook handler** — the actual command/endpoint/prompt to run
+
+### Matcher rules
+
+| Matcher value | Behavior |
+|---|---|
+| `"*"`, `""`, or omitted | Match everything |
+| Letters/digits/`_`/`\|` only | Exact string or `\|`-separated list: `"Edit\|Write"` |
+| Any other character | JavaScript regex: `"^Notebook"`, `"mcp__memory__.*"` |
+
+MCP tools: `mcp__<server>__<tool>` — use `mcp__memory__.*` to match all tools
+from a server (the `.*` suffix is required).
+
+### Handler types
+
+| `type` | How it works | Best for |
+|--------|-------------|----------|
+| `"command"` | Shell script, stdin=JSON, stdout+exit code=result | Most use cases |
+| `"http"` | POST JSON to a URL, response body=result | Remote services, webhooks |
+| `"prompt"` | Asks a Claude model, returns yes/no JSON | Semantic checks |
+| `"agent"` | Spawns subagent with Read/Grep/Glob tools | Complex file analysis |
+
+### Common handler fields
+
+| Field | Default | Purpose |
+|-------|---------|---------|
+| `type` | — | Required: `command`, `http`, `prompt`, `agent` |
+| `if` | — | Permission-rule syntax fine filter: `"Bash(git *)"`, `"Edit(*.ts)"` |
+| `timeout` | 600/30/60s | Seconds before cancelling |
+| `statusMessage` | — | Spinner text while hook runs |
+| `async` | false | Run in background, don't block Claude |
+| `asyncRewake` | false | Background, but wake Claude on exit 2 |
+
+---
+
+## 3. Exit codes and JSON output
+
+### Exit codes (command hooks)
+
+| Code | Meaning |
+|------|---------|
+| `0` | Success — Claude Code parses stdout for JSON |
+| `2` | **Blocking error** — stderr fed back to Claude/user, blocks action |
+| Other | Non-blocking error — shows first line of stderr, execution continues |
+
+> **Critical:** Use `exit 2` to enforce policy. `exit 1` is non-blocking and
+> will not stop the action, even though it's the conventional Unix failure code.
+
+### JSON output (on exit 0)
+
+Print a JSON object to stdout.
Fields: + +```json +{ + "continue": false, // Stop Claude entirely (all events) + "stopReason": "message", // Shown to user when continue=false + "suppressOutput": true, // Hide stdout from debug log + "systemMessage": "warn", // Warning shown to user + "decision": "block", // Block action (PostToolUse, Stop, etc.) + "reason": "why", // Explanation when decision=block + "hookSpecificOutput": { ... } // Event-specific fields (see below) +} +``` + +**You must choose one approach per hook:** exit codes OR exit-0-plus-JSON. +JSON is only processed on exit 0. + +--- + +## 4. Event decision control quick-reference + +| Event | Can block? | How to block | +|-------|-----------|--------------| +| `PreToolUse` | ✅ | `hookSpecificOutput.permissionDecision: "deny"` | +| `PermissionRequest` | ✅ | `hookSpecificOutput.decision.behavior: "deny"` | +| `UserPromptSubmit` | ✅ | `decision: "block"` OR exit 2 | +| `Stop` / `SubagentStop` | ✅ | `decision: "block"` (with `reason`) | +| `TeammateIdle` | ✅ | exit 2 (teammate keeps working) | +| `TaskCreated` / `TaskCompleted` | ✅ | exit 2 | +| `ConfigChange` | ✅ | `decision: "block"` OR exit 2 | +| `PreCompact` | ✅ | exit 2 OR `decision: "block"` | +| `Elicitation` | ✅ | exit 2 (denies request) | +| `PostToolUse` | ⚠️ No | `decision: "block"` sends feedback to Claude | +| `SessionStart/End` | ❌ | Side effects only | +| `Notification` | ❌ | Side effects only | + +### PreToolUse — richest control + +```json +{ + "hookSpecificOutput": { + "hookEventName": "PreToolUse", + "permissionDecision": "deny", // allow / deny / ask / defer + "permissionDecisionReason": "...", // shown to Claude on deny + "updatedInput": { ... }, // modify tool args before execution + "additionalContext": "..." // inject context for Claude + } +} +``` + +Precedence when multiple hooks conflict: `deny > defer > ask > allow` + +--- + +## 5. All hook events + +See `references/events.md` for the complete event reference with input schemas. 
+The most important events are: + +**Security / enforcement:** `PreToolUse`, `UserPromptSubmit`, `PermissionRequest` +**Quality gates:** `PostToolUse`, `Stop`, `TeammateIdle`, `TaskCompleted` +**Automation / logging:** `SessionStart`, `SessionEnd`, `Notification`, `PostToolUse` +**Environment management:** `SessionStart`, `CwdChanged`, `FileChanged` +**Advanced:** `Elicitation`, `WorktreeCreate`, `PreCompact`, `SubagentStop` + +--- + +## 6. Writing hook scripts + +### Canonical shell script template + +See `assets/templates/hook-template.sh` for a complete, copy-paste ready template. + +Core pattern every command hook follows: + +```bash +#!/usr/bin/env bash +set -euo pipefail + +# 1. Read the full JSON payload from stdin +INPUT=$(cat) + +# 2. Extract what you need +TOOL=$(echo "$INPUT" | jq -r '.tool_name // empty') +CMD=$(echo "$INPUT" | jq -r '.tool_input.command // empty') + +# 3. Business logic +if [[ "$CMD" == *"rm -rf /"* ]]; then + # exit 2 = blocking error; stderr goes to Claude + echo "Blocked: cannot delete root filesystem" >&2 + exit 2 +fi + +# 4a. Allow with no output +exit 0 + +# 4b. OR: allow with JSON decision output +jq -n '{ + hookSpecificOutput: { + hookEventName: "PreToolUse", + permissionDecision: "allow", + additionalContext: "Command looks safe" + } +}' +``` + +### Python script template + +See `assets/templates/hook-template.py` for a Python template. + +### Key rules for scripts + +1. **Parse stdin fully before writing to stdout** — Claude Code reads the entire + stdout after the script exits, not line-by-line. +2. **Redirect non-JSON output to stderr** — any non-JSON text on stdout when you + intend to return JSON will break parsing. +3. **Keep SessionStart and SessionEnd hooks fast** — they run on every session; + SessionEnd has a 1.5s default timeout. +4. **Check `stop_hook_active`** in Stop hooks to prevent infinite loops. +5. **Use `jq` for JSON parsing** in shell scripts — never use `grep/awk` on JSON. +6. 
**Test locally with echo pipe**: `echo '{"tool_name":"Bash","tool_input":{"command":"ls"}}' | ./my-hook.sh` + +--- + +## 7. Common patterns + +### Auto-format after writes + +```json +{ + "hooks": { + "PostToolUse": [{ + "matcher": "Write|Edit|MultiEdit", + "hooks": [{ + "type": "command", + "command": "npx prettier --write \"$CLAUDE_TOOL_INPUT_FILE_PATH\"", + "async": true + }] + }] + } +} +``` + +### Block dangerous shell commands + +```json +{ + "hooks": { + "PreToolUse": [{ + "matcher": "Bash", + "hooks": [{ + "type": "command", + "if": "Bash(rm *)", + "command": "\"$CLAUDE_PROJECT_DIR\"/.claude/hooks/block-dangerous.sh" + }] + }] + } +} +``` + +### Desktop notification on completion (macOS) + +```json +{ + "hooks": { + "Stop": [{ + "hooks": [{ + "type": "command", + "command": "osascript -e 'display notification \"Claude finished\" with title \"Claude Code\"'", + "async": true + }] + }] + } +} +``` + +### Run tests before Stop (quality gate) + +```json +{ + "hooks": { + "Stop": [{ + "hooks": [{ + "type": "command", + "command": "\"$CLAUDE_PROJECT_DIR\"/.claude/hooks/run-tests-gate.sh" + }] + }] + } +} +``` + +### Reload direnv on directory change + +```json +{ + "hooks": { + "FileChanged": [{ + "matcher": ".envrc", + "hooks": [{ + "type": "command", + "command": "bash -c 'direnv export bash >> \"$CLAUDE_ENV_FILE\"'" + }] + }], + "CwdChanged": [{ + "hooks": [{ + "type": "command", + "command": "bash -c 'direnv export bash >> \"$CLAUDE_ENV_FILE\" 2>/dev/null || true'" + }] + }] + } +} +``` + +### Log all tool calls (audit trail) + +```json +{ + "hooks": { + "PreToolUse": [{ + "hooks": [{ + "type": "command", + "command": "bash -c 'cat >> ~/.claude/audit.jsonl'", + "async": true + }] + }] + } +} +``` + +### Inject context at session start + +```json +{ + "hooks": { + "SessionStart": [{ + "matcher": "startup", + "hooks": [{ + "type": "command", + "command": "\"$CLAUDE_PROJECT_DIR\"/.claude/hooks/inject-context.sh" + }] + }] + } +} +``` + +--- + +## 8. 
Debugging hooks + +1. **Run `/hooks`** in Claude Code — read-only browser for all configured hooks, + shows source file, type, matcher, and full command/URL. + +2. **Test the script directly:** + ```bash + echo '{"hook_event_name":"PreToolUse","tool_name":"Bash","tool_input":{"command":"ls"}}' \ + | bash .claude/hooks/my-hook.sh + echo "Exit: $?" + ``` + +3. **Start Claude Code with `--debug`** — all hook stdout/stderr appears in the + debug log even when not shown in the transcript. + +4. **Common problems:** + - Hook not firing → wrong event name casing (it's `PreToolUse` not `preToolUse`) + - JSON not parsed → something printed to stdout before your JSON object + - Path not found → use `$CLAUDE_PROJECT_DIR` prefix; check the script is executable + - exit 1 doesn't block → use `exit 2` for blocking errors + - Infinite Stop loop → check `stop_hook_active` field before blocking + +5. **Temporarily disable all hooks:** + ```json + { "disableAllHooks": true } + ``` + +--- + +## 9. Security considerations + +Hooks execute arbitrary shell commands. Before writing or accepting hook config: + +- **Never put secrets in the command string** — use env vars referenced via + `allowedEnvVars` in HTTP hooks, or load from a secrets file in scripts. +- **Validate all JSON inputs** — don't blindly eval anything from `tool_input`. +- **Avoid writing hooks that read from user-controlled input into shell** without + sanitization — command injection is real. +- **Review third-party hook configurations** before enabling them. +- **Commit `.claude/settings.json` to source control** so team changes are visible. +- **Use `.claude/settings.local.json` for personal overrides** — it's gitignored. 
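+
+As a sketch of the secrets guidance above (the script and variable names are
+illustrative), the committed config references only the script path, and the
+secret stays in the environment:
+
+```json
+{
+  "hooks": {
+    "Stop": [{
+      "hooks": [{
+        "type": "command",
+        "command": "\"$CLAUDE_PROJECT_DIR\"/.claude/hooks/notify.sh",
+        "async": true
+      }]
+    }]
+  }
+}
+```
+
+`notify.sh` would then read something like `$NOTIFY_WEBHOOK_URL` from the
+environment rather than hardcoding it in a file under source control.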
+```
+
+---
+
+## Reference files
+
+- `references/events.md` — full event input schemas and decision control for all 26 events
+- `assets/templates/hook-template.sh` — production-ready bash template
+- `assets/templates/hook-template.py` — Python template with full JSON handling
+- `assets/templates/settings-example.json` — annotated settings.json starter
diff --git a/template/.claude/skills/claude_hooks/assets/templates/hook-template.py b/template/.claude/skills/claude_hooks/assets/templates/hook-template.py
new file mode 100644
index 0000000..53535bb
--- /dev/null
+++ b/template/.claude/skills/claude_hooks/assets/templates/hook-template.py
@@ -0,0 +1,274 @@
+#!/usr/bin/env python3
+"""
+Claude Code Hook Script Template (Python)
+==========================================
+Usage: Copy to .claude/hooks/<name>.py
+Make executable: chmod +x .claude/hooks/<name>.py
+
+Reference in settings.json:
+  "command": "python3 \"$CLAUDE_PROJECT_DIR\"/.claude/hooks/<name>.py"
+
+For UV single-file scripts with inline dependencies:
+  "command": "uv run \"$CLAUDE_PROJECT_DIR\"/.claude/hooks/<name>.py"
+
+If using UV, add a dependencies block at the top:
+  # /// script
+  # dependencies = ["requests>=2.28"]
+  # ///
+"""
+
+import json
+import sys
+import os
+import subprocess
+from pathlib import Path
+
+
+# =============================================================================
+# OUTPUT HELPERS
+# =============================================================================
+
+def allow(additional_context: str | None = None) -> None:
+    """Allow the action, optionally injecting context."""
+    if additional_context:
+        _exit_json({
+            "hookSpecificOutput": {
+                "hookEventName": event_name,
+                "additionalContext": additional_context,
+            }
+        })
+    sys.exit(0)
+
+
+def deny(reason: str) -> None:
+    """Block the tool call (PreToolUse).
Reason shown to Claude.""" + _exit_json({ + "hookSpecificOutput": { + "hookEventName": "PreToolUse", + "permissionDecision": "deny", + "permissionDecisionReason": reason, + } + }) + + +def ask_user(reason: str) -> None: + """Ask user to confirm (PreToolUse). Reason shown to user.""" + _exit_json({ + "hookSpecificOutput": { + "hookEventName": "PreToolUse", + "permissionDecision": "ask", + "permissionDecisionReason": reason, + } + }) + + +def block(reason: str) -> None: + """Block action with feedback (PostToolUse, Stop, etc.).""" + _exit_json({"decision": "block", "reason": reason}) + + +def blocking_error(message: str) -> None: + """Exit with code 2: blocking error, message sent to Claude.""" + print(message, file=sys.stderr) + sys.exit(2) + + +def _exit_json(data: dict) -> None: + """Print JSON to stdout and exit 0.""" + print(json.dumps(data)) + sys.exit(0) + + +def log(message: str) -> None: + """Write to a log file (NOT stderr, which goes to Claude).""" + log_path = Path(os.environ.get("CLAUDE_PROJECT_DIR", Path.home())) / ".claude" / "hooks.log" + try: + with open(log_path, "a") as f: + from datetime import datetime, timezone + ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ") + f.write(f"[{ts}] {message}\n") + except Exception: + pass # Never let logging break the hook + + +# ============================================================================= +# READ INPUT +# ============================================================================= + +try: + raw = sys.stdin.read() + payload = json.loads(raw) +except json.JSONDecodeError as e: + print(f"Failed to parse hook input JSON: {e}", file=sys.stderr) + sys.exit(1) + +# Common fields +event_name = payload.get("hook_event_name", "") +session_id = payload.get("session_id", "") +cwd = payload.get("cwd", "") +perm_mode = payload.get("permission_mode", "") + +# Tool fields (present in PreToolUse, PostToolUse, etc.) 
+tool_name = payload.get("tool_name", "") +tool_input = payload.get("tool_input", {}) +tool_response = payload.get("tool_response", {}) + +# Common sub-fields +command = tool_input.get("command", "") +file_path = tool_input.get("file_path", "") + + +# ============================================================================= +# PRETOOLUSE — validate before tool runs +# ============================================================================= + +if event_name == "PreToolUse": + + if tool_name == "Bash": + # Block dangerous deletion patterns + dangerous_patterns = [ + "rm -rf /", + "rm -rf ~", + "rm -rf $HOME", + "> /dev/sda", + "mkfs.", + ] + for pattern in dangerous_patterns: + if pattern in command: + blocking_error(f"BLOCKED: Dangerous command pattern detected: {pattern!r}") + + # Block cloud metadata endpoint access + if "169.254.169.254" in command or "metadata.google.internal" in command: + deny("Access to cloud metadata endpoints is blocked") + + # Log the command + log(f"Bash command: {command[:120]}") + + if tool_name in ("Write", "Edit", "MultiEdit"): + # Protect sensitive files + sensitive_names = {".env", ".env.local", ".env.production", "credentials.json"} + sensitive_paths = {"secrets/", "private/", ".ssh/"} + + basename = Path(file_path).name + if basename in sensitive_names: + ask_user(f"Writing to sensitive file: {basename}") + + if any(p in file_path for p in sensitive_paths): + ask_user(f"Writing inside a sensitive directory: {file_path}") + + # Default: allow + allow() + + +# ============================================================================= +# POSTTOOLUSE — react after tool ran +# ============================================================================= + +elif event_name == "PostToolUse": + + if tool_name in ("Write", "Edit") and file_path.endswith(".py"): + # Run ruff linter if available + try: + result = subprocess.run( + ["ruff", "check", file_path, "--quiet"], + capture_output=True, text=True, timeout=30 + ) + if
result.returncode != 0: + issues = result.stdout[:500] or result.stderr[:500] + block(f"Ruff lint errors in {file_path}:\n{issues}\nFix before proceeding.") + except (FileNotFoundError, subprocess.TimeoutExpired): + pass # ruff not installed or timed out — don't block + + allow() + + +# ============================================================================= +# STOP — quality gate before Claude finishes +# ============================================================================= + +elif event_name == "Stop": + stop_active = payload.get("stop_hook_active", False) + + # CRITICAL: prevent infinite stop loops + if stop_active: + sys.exit(0) + + # Example: check for TODO/FIXME comments left behind + # Uncomment to enable: + # result = subprocess.run( + # ["grep", "-r", "--include=*.py", "-l", "TODO\|FIXME", cwd], + # capture_output=True, text=True + # ) + # if result.stdout.strip(): + # files = result.stdout.strip() + # block(f"TODO/FIXME comments remain in:\n{files}\nResolve before finishing.") + + sys.exit(0) + + +# ============================================================================= +# USERPROMPTSUBMIT — augment or validate prompt +# ============================================================================= + +elif event_name == "UserPromptSubmit": + prompt = payload.get("prompt", "") + + # Example: add git branch context + try: + branch = subprocess.check_output( + ["git", "-C", cwd, "branch", "--show-current"], + stderr=subprocess.DEVNULL, text=True + ).strip() + context = f"Current git branch: {branch}" + except (subprocess.CalledProcessError, FileNotFoundError): + context = None + + if context: + _exit_json({ + "hookSpecificOutput": { + "hookEventName": "UserPromptSubmit", + "additionalContext": context, + } + }) + + sys.exit(0) + + +# ============================================================================= +# SESSIONSTART — inject environment and context +# ============================================================================= + 
+elif event_name == "SessionStart": + env_file = os.environ.get("CLAUDE_ENV_FILE") + + if env_file: + # Persist env vars for all subsequent Bash commands + with open(env_file, "a") as f: + f.write('export PATH="./node_modules/.bin:$PATH"\n') + + # Inject project summary as context (plain stdout) + readme = Path(cwd) / "README.md" + if readme.exists(): + content = readme.read_text()[:1000] + print(f"Project README (truncated):\n{content}") + + sys.exit(0) + + +# ============================================================================= +# SESSIONEND — cleanup +# ============================================================================= + +elif event_name == "SessionEnd": + reason = payload.get("reason", "other") + log(f"Session ended: reason={reason} session_id={session_id}") + sys.exit(0) + + +# ============================================================================= +# FALLBACK — unknown or unhandled event +# ============================================================================= + +else: + # Silently allow anything we don't handle + sys.exit(0) diff --git a/template/.claude/skills/claude_hooks/assets/templates/hook-template.sh b/template/.claude/skills/claude_hooks/assets/templates/hook-template.sh new file mode 100644 index 0000000..539910f --- /dev/null +++ b/template/.claude/skills/claude_hooks/assets/templates/hook-template.sh @@ -0,0 +1,173 @@ +#!/usr/bin/env bash +# ============================================================================= +# Claude Code Hook Script Template +# ============================================================================= +# Usage: Copy and adapt this file to .claude/hooks/<hook-name>.sh +# Make executable: chmod +x .claude/hooks/<hook-name>.sh +# +# Reference in settings.json: +# "command": "\"$CLAUDE_PROJECT_DIR\"/.claude/hooks/<hook-name>.sh" +# +# Supported hook events this template handles: +# PreToolUse, PostToolUse, Stop, UserPromptSubmit, SessionStart, SessionEnd +#
============================================================================= + +set -euo pipefail + +# ----------------------------------------------------------------------------- +# 1. READ FULL JSON INPUT FROM STDIN +# Always read the complete payload before doing anything else. +# ----------------------------------------------------------------------------- +INPUT=$(cat) + +# ----------------------------------------------------------------------------- +# 2. EXTRACT COMMON FIELDS +# ----------------------------------------------------------------------------- +EVENT=$(echo "$INPUT" | jq -r '.hook_event_name // empty') +SESSION=$(echo "$INPUT" | jq -r '.session_id // empty') +CWD=$(echo "$INPUT" | jq -r '.cwd // empty') +PERM_MODE=$(echo "$INPUT" | jq -r '.permission_mode // empty') + +# Tool-specific fields (PreToolUse / PostToolUse) +TOOL_NAME=$(echo "$INPUT" | jq -r '.tool_name // empty') +COMMAND=$(echo "$INPUT" | jq -r '.tool_input.command // empty') +FILE_PATH=$(echo "$INPUT" | jq -r '.tool_input.file_path // empty') + +# Stop-specific fields +STOP_ACTIVE=$(echo "$INPUT" | jq -r '.stop_hook_active // false') +LAST_MSG=$(echo "$INPUT" | jq -r '.last_assistant_message // empty') + +# ----------------------------------------------------------------------------- +# 3. OPTIONAL: LOG FOR DEBUGGING +# Comment out or redirect to file in production. +# Note: stderr is shown to user/Claude on exit 2; write logs to a file instead. +# ----------------------------------------------------------------------------- +LOG_FILE="${CLAUDE_PROJECT_DIR:-$HOME}/.claude/hooks.log" +echo "[$(date -u +%Y-%m-%dT%H:%M:%SZ)] EVENT=$EVENT TOOL=$TOOL_NAME CMD=${COMMAND:0:80}" \ + >> "$LOG_FILE" 2>/dev/null || true + +# ----------------------------------------------------------------------------- +# 4. 
BUSINESS LOGIC — adapt to your use case +# ----------------------------------------------------------------------------- + +# --- PreToolUse: validate before tool runs --- +if [[ "$EVENT" == "PreToolUse" ]]; then + + # Example: block rm -rf on root or home directory + if [[ "$TOOL_NAME" == "Bash" ]]; then + if echo "$COMMAND" | grep -qE 'rm\s+-rf\s+(/|~|/home|/Users)\b'; then + # exit 2 = blocking error; stderr goes to Claude as error message + echo "BLOCKED: Recursive delete on root/home directory is not allowed." >&2 + exit 2 + fi + + # Example: block network requests to internal metadata servers + if echo "$COMMAND" | grep -qE '169\.254\.169\.254|metadata\.google\.internal'; then + echo "BLOCKED: Access to cloud metadata endpoints is not permitted." >&2 + exit 2 + fi + fi + + # Example: protect sensitive files from being written + if [[ "$TOOL_NAME" == "Write" || "$TOOL_NAME" == "Edit" ]]; then + BASENAME=$(basename "$FILE_PATH") + # Note: glob patterns like *.pem must be unquoted inside [[ ]] to match + if [[ "$BASENAME" == ".env" || "$BASENAME" == *.pem || "$FILE_PATH" == *"secrets"* ]]; then + # Use JSON output for richer control + jq -n '{ + hookSpecificOutput: { + hookEventName: "PreToolUse", + permissionDecision: "ask", + permissionDecisionReason: "Writing to a sensitive file — please confirm" + } + }' + exit 0 + fi + fi + + # Default: allow the tool call (exit 0 with no JSON = allow) + exit 0 +fi + +# --- PostToolUse: react after tool completed --- +if [[ "$EVENT" == "PostToolUse" ]]; then + TOOL_RESPONSE=$(echo "$INPUT" | jq -r '.tool_response // {}') + + # Example: run linter after Python file is written + if [[ "$TOOL_NAME" == "Write" && "$FILE_PATH" == *.py ]]; then + if command -v ruff &>/dev/null; then + if !
ruff check "$FILE_PATH" --quiet 2>/dev/null; then + ISSUES=$(ruff check "$FILE_PATH" 2>&1 | head -20) + jq -n --arg issues "$ISSUES" '{ + decision: "block", + reason: ("Ruff found lint issues:\n" + $issues + "\nFix before proceeding.") + }' + exit 0 + fi + fi + fi + + exit 0 +fi + +# --- Stop: quality gate before Claude finishes --- +if [[ "$EVENT" == "Stop" ]]; then + + # CRITICAL: check stop_hook_active to prevent infinite loops + if [[ "$STOP_ACTIVE" == "true" ]]; then + # Already in a stop-hook cycle — do not block again + exit 0 + fi + + # Example: run tests and block completion if they fail + # Uncomment and adapt: + # if ! npm test --silent 2>/dev/null; then + # jq -n '{"decision":"block","reason":"Test suite is failing. Fix all failing tests before finishing."}' + # exit 0 + # fi + + exit 0 +fi + +# --- UserPromptSubmit: validate or augment prompt --- +if [[ "$EVENT" == "UserPromptSubmit" ]]; then + PROMPT=$(echo "$INPUT" | jq -r '.prompt // empty') + + # Example: add context about current git branch to every prompt + BRANCH=$(git -C "$CWD" branch --show-current 2>/dev/null || echo "unknown") + jq -n --arg branch "$BRANCH" '{ + hookSpecificOutput: { + hookEventName: "UserPromptSubmit", + additionalContext: ("Current git branch: " + $branch) + } + }' + exit 0 +fi + +# --- SessionStart: inject environment / context --- +if [[ "$EVENT" == "SessionStart" ]]; then + + # Persist environment variables for all subsequent Bash commands + if [[ -n "${CLAUDE_ENV_FILE:-}" ]]; then + # Example: add project-local node_modules to PATH + echo 'export PATH="./node_modules/.bin:$PATH"' >> "$CLAUDE_ENV_FILE" + fi + + # Inject project context into Claude's context window + if [[ -f "$CWD/README.md" ]]; then + echo "Project README summary:" + head -30 "$CWD/README.md" 2>/dev/null || true + fi + + exit 0 +fi + +# --- SessionEnd: cleanup --- +if [[ "$EVENT" == "SessionEnd" ]]; then + REASON=$(echo "$INPUT" | jq -r '.reason // "other"') + echo "[$(date -u +%Y-%m-%dT%H:%M:%SZ)] 
Session ended: $REASON (id=$SESSION)" \ + >> "$LOG_FILE" 2>/dev/null || true + exit 0 +fi + +# Fallback: if none of the events matched, just exit 0 +exit 0 diff --git a/template/.claude/skills/claude_hooks/assets/templates/settings-example.json b/template/.claude/skills/claude_hooks/assets/templates/settings-example.json new file mode 100644 index 0000000..d72550f --- /dev/null +++ b/template/.claude/skills/claude_hooks/assets/templates/settings-example.json @@ -0,0 +1,211 @@ +{ + "_comment": "Project Claude Code hooks — commit this file. For personal overrides, use .claude/settings.local.json", + + "hooks": { + + "_comment_PreToolUse": "Fires before every tool call. Can block, modify input, or inject context.", + + "PreToolUse": [ + + { + "_comment": "Security gate — block dangerous shell commands", + "matcher": "Bash", + "hooks": [ + { + "type": "command", + "command": "\"$CLAUDE_PROJECT_DIR\"/.claude/hooks/block-dangerous.sh", + "timeout": 10, + "statusMessage": "Validating command…" + } + ] + }, + + { + "_comment": "Protect sensitive files from being overwritten", + "matcher": "Write|Edit|MultiEdit", + "hooks": [ + { + "type": "command", + "if": "Write(.env*)|Edit(.env*)", + "command": "\"$CLAUDE_PROJECT_DIR\"/.claude/hooks/protect-secrets.sh", + "timeout": 5 + } + ] + }, + + { + "_comment": "Log all tool calls for audit trail (async — doesn't block Claude)", + "hooks": [ + { + "type": "command", + "command": "bash -c 'cat >> \"$CLAUDE_PROJECT_DIR\"/.claude/audit.jsonl'", + "async": true + } + ] + }, + + { + "_comment": "Auto-approve read-only git commands in default permission mode", + "matcher": "Bash", + "hooks": [ + { + "type": "command", + "if": "Bash(git status*)|Bash(git log*)|Bash(git diff*)|Bash(git show*)", + "command": "bash -c 'jq -n \"{hookSpecificOutput:{hookEventName:\\\"PreToolUse\\\",permissionDecision:\\\"allow\\\",permissionDecisionReason:\\\"Read-only git command\\\"}}\"'" + } + ] + } + + ], + + "_comment_PostToolUse": "Fires after every 
tool succeeds. Cannot undo; can give Claude feedback.", + + "PostToolUse": [ + + { + "_comment": "Auto-format edited files with Prettier", + "matcher": "Write|Edit|MultiEdit", + "hooks": [ + { + "type": "command", + "command": "bash -c 'FILE=$(jq -r \".tool_input.file_path // empty\"); [ -n \"$FILE\" ] && npx prettier --write \"$FILE\" 2>/dev/null || true'", + "async": true, + "statusMessage": "Formatting…" + } + ] + }, + + { + "_comment": "Run Python linter after Python file writes", + "matcher": "Write|Edit", + "hooks": [ + { + "type": "command", + "if": "Write(*.py)|Edit(*.py)", + "command": "\"$CLAUDE_PROJECT_DIR\"/.claude/hooks/lint-python.sh" + } + ] + } + + ], + + "_comment_Stop": "Fires when Claude finishes responding. Can block to enforce quality gates.", + + "Stop": [ + + { + "_comment": "Desktop notification when Claude finishes (macOS). Linux: use notify-send.", + "hooks": [ + { + "type": "command", + "command": "bash -c 'ACTIVE=$(jq -r .stop_hook_active); [ \"$ACTIVE\" = \"true\" ] && exit 0; osascript -e \"display notification \\\"Claude Code finished\\\" with title \\\"Claude Code\\\"\" 2>/dev/null || notify-send \"Claude Code\" \"Task completed\" 2>/dev/null || true'", + "async": true + } + ] + } + + ], + + "_comment_UserPromptSubmit": "Fires when user submits a prompt. Can add context or block.", + + "UserPromptSubmit": [ + + { + "_comment": "Inject current git branch into every prompt for context", + "hooks": [ + { + "type": "command", + "command": "bash -c 'BRANCH=$(git branch --show-current 2>/dev/null || echo unknown); jq -n --arg b \"$BRANCH\" \"{hookSpecificOutput:{hookEventName:\\\"UserPromptSubmit\\\",additionalContext:(\\\"Git branch: \\\" + $b)}}\"'" + } + ] + } + + ], + + "_comment_SessionStart": "Fires when session begins. 
Use for context injection and env setup.", + + "SessionStart": [ + + { + "_comment": "Inject project context and set up env vars on session startup", + "matcher": "startup", + "hooks": [ + { + "type": "command", + "command": "\"$CLAUDE_PROJECT_DIR\"/.claude/hooks/session-setup.sh" + } + ] + } + + ], + + "_comment_SessionEnd": "Fires at session end. Use for cleanup and logging.", + + "SessionEnd": [ + + { + "_comment": "Archive transcript on session end", + "hooks": [ + { + "type": "command", + "command": "bash -c 'TRANSCRIPT=$(jq -r .transcript_path); DEST=\"$CLAUDE_PROJECT_DIR/.claude/transcripts/$(date +%Y%m%d-%H%M%S).jsonl\"; cp \"$TRANSCRIPT\" \"$DEST\" 2>/dev/null || true'", + "async": true + } + ] + } + + ], + + "_comment_FileChanged": "Fires when a watched file changes on disk. Matcher = literal filenames to watch.", + + "FileChanged": [ + + { + "_comment": "Reload direnv when .envrc changes", + "matcher": ".envrc", + "hooks": [ + { + "type": "command", + "command": "bash -c 'direnv export bash >> \"$CLAUDE_ENV_FILE\" 2>/dev/null || true'" + } + ] + } + + ], + + "_comment_CwdChanged": "Fires when the working directory changes.", + + "CwdChanged": [ + + { + "_comment": "Activate direnv on directory change", + "hooks": [ + { + "type": "command", + "command": "bash -c 'direnv export bash >> \"$CLAUDE_ENV_FILE\" 2>/dev/null || true'" + } + ] + } + + ], + + "_comment_Notification": "Fires for permission prompts, idle notifications, etc.", + + "Notification": [ + + { + "_comment": "Send Slack message when Claude needs permission (for unattended runs)", + "matcher": "permission_prompt", + "hooks": [ + { + "type": "command", + "command": "\"$CLAUDE_PROJECT_DIR\"/.claude/hooks/slack-notify.sh", + "async": true + } + ] + } + + ] + + } +} diff --git a/template/.claude/skills/claude_hooks/claude-hooks.skill b/template/.claude/skills/claude_hooks/claude-hooks.skill new file mode 100644 index 
0000000000000000000000000000000000000000..e3dce15bc4d4635de40c9f61ef6373edc617302d GIT binary patch literal 19011 zcmbTdV~{Rg*QI-xZQHhO+x9Now!O=?ZQHhO?y|KD{XAcHpN@E=&*_en5i=wIQ&9#46dLgFWflBe_rD(g#|;I53oy1Zay2ofv#__fcBXT-wzRQfP*a5hfSgJhmOc$nVoM@#QYBV6Xdrnc5~SgLfQk6xs{XZYrEzDl50*NbH3`$KSRM=8tCr^&I zD{8V=CK`UQ z_)NZhksh~r4u*$ug_(y6a+%8JP0vE8i-G!)r>u_p`DDkGLt1suGcQF2-Z}T<^Sq>gIf_=Z=P?f%l9=r{A?#P%;qDaz^=B+<5;zhS|sctvi(yldxn4?Osg08`iZ&&ZPoqt)C(S@uAiD##b z@RM1v@#AF~gZ=sh&ckW`Sbo2f`Ty$un zX88~ZV&!LDdvfj`2QkLRGI1E?V9BX0lvY8GC)~p<GLF6N>2X)HY`daT5yvX;StgKqKHB?H0Ye&&q-LoK&BIim!z628 zdZc$gw{2K)+B;Y>*Dg81Ze+=nezrY%&gKONrZDND#SYJgK7!UgMw zhl|JG|M~D9o^;jjY-QwNV?n3cBAAJjC%Zy_Zn|*`|GYlmIj;{_^usJ(9g*Sn6&jgC zWrw;)zG9wixCBA>)wn}YL((i#$q=H(I4Mz+KNK0@-;F*|FzHb%WB_rY^@D&|T9{lJ z6x*J~Szd8)j`yvLZ=j#s&BMd*&FAy?W+#s!T!W%)>~_}xPJuz9E6bz82sT zQ9atiUsCWpR0l{~AQtV`OMAaw+59tqkOt{SALWuk!{I9-WOXP=Avwa~Mq>CxxWzMG zsKe!n{2W#p?;of48=JKxQ6ahjK?ZX2&Wg~bv`Eqod-*${N~79jZY`Vj!OEAacG>1? za0Ht!x#~WPY@+yrEr~b>0Auxbx6oM3#V94gZ5_7iT9DNC1)$G z+T#Iw$fv&qs&$GAAC2D=b`-gqOm~{A^Tn10&MSY3!n73oQ|TkS`?|t|2*YiHQg#Fu zTa*oxUu0I2x@x@@U;Ftj`bwW~P7Hd3*34|@a6F6*39S1QUV*Tco1illBU7*Wc?V{P zoh~fRk=?KAQbjGVkn%baDB{U}M~&2zIwBZQAwcH?6_t{^0n^6)iJO0*_3aN%1>%5& zTf~_@Ga0{&Q<1D{w_{R!tb@Ue5Pefgj6H$KpA=ZXO!(vVDSm{w*1kNv#XO;lQuiI| z)0A!z1P?|i`2L4VN9WhZA`Bs+#3LgC{0|mQe+0vQ^oK!WLt`YGrLgVpLRU4E^v>N< zwd|`Aq|Oo_8ux)45r{02`q9P6cWijNj6MZQN}8EqIGLUnu122R z<%bnUL%q8d=F}4I#Az)vq3~melo`;k;7Fz1s)~$~XNcA|gbdlAI!%0sxZSqYaJiNY zrhPg|Gg_N_SBb35)m(0wN18cJT$M#`Sx1SAn$#dB7VU1tRI+VCx+D~Kcuxvj5suS{ zkK$5Dz=GDRrtKgCJ9f$12BkakIJu(r+ALEfP=h!fVK;xt1A@B?Bs=(@4by5Y1Ty_T zk;uhsBq?SRoWH1AHiXVseW*`?MY7m()4UN`>8V#Hp- zPt5F6r5ci@ZKpI%x9~s>eM=!-!Dx=N*`&D_?CN=chv*9Ik|9uAX8@5JrV=*%7RVxE zB2)t{JQe42KvN=sEjAe_#8gGTh!D=MmOt0KC^@{o7pRr?hvFPGZ-wre;c@s~0JC5E^ zkeayrlR4->wP5WQdwPj3^4qiywzqDkk}S|2*}~ZahYzG5!D5NZ4`mMj4CmrpHOoYu z_k&pEQZEL{-P7GnuxC-~(TIw)jW&(f(`AEg^BIaw4UmkYyUK_=+d5HgXtC|3Dg|u9zoeL#?fo#_AS1H z_2>%WjB?{1_H@f!E1h@U39)svqNA+#f}0Ef()F+1+raWHzdx*6O)z1ZlSpl88+Ux_ 
zicq_D|4xw2i3BFov)ahQLW=)NIWNJ+g(=(Yt5~naA9$Du3-T_qeA8 z&U`bKmwN-+GzEBLB&!Do+`!QhyoB%=>Q0X$@o>}B3L%*<2c=R-Vcg8Aib2mvjZe@A zds>|SmcqZt8k!_vJX%L7yG6Brq~T=Ctkl?|%$m=~0l7ZQV3z+CC2RRCqVwqlNS*Gh z?_4?eI^w`^z#_BztT_djKZYbnyEjAKK*w04o}g1O{8jhumHdRiW8+0T9}?F~j2okvxHe z0V4Z?F89`0^Ae!F=-S=9Z^*U-_$)c@I>!O+`_7<{u=?WHtl6!dc5_GJD? zY}a>YcYVJf+U73)%wAkP6ACXE=RQSRtb^Ig)aOIik|E;0?aUrc*ctRh3OwX5iY1_p za%r#P3?}x5j9ID3043vG*QA(yMqS14V?Y?1i@C{=p*uAf*GP^-VHw{mpk5gCUtbPN zrk1@cvpSf7aqwa0cf|&}IrV(&VcY@Mt-?e@OuWg>ht0-AG*odtQ|}fStS74jJZTJi zM_l1)jZWgb2UMMJR?Tvu7k}7AM+4zfTj(IR=FrOV%ti=-0>xlt6bVTamgz~ryuC6H zGac|WdOQ_onkN}HS3Q$dLUSjAw@~}bRADWpsgCobeYl;d-_^e524p{w)JvyN^|@DV zN?FrxO;zIkUO!>IP(BjwLXVx+sP5)w0Lxl@yciKvR-CL$0xia1v#zuekvRu!_{R_zB_w~3u?!)-p1Zmpz0~7h_q7n)otKh zaAZwc;PMUf``9fB5vkfM{U9n?05DSU;WMwJCu)%LG^R!X-Kw{-u|;Gzd8(}Av^d+* zWUrip;P$u}edKLi-b%9RFdwbF!f1m44rUR4psFMa#>0ysVsOy{+e284XTbB3|Iggd z*R6kfPtVXXyCx=$m-y7!ZvfQ1yF=25=*82Aauw;6lO~O7#K6O7!sQ%G3~)dEBBhk3 zA(J5$(|!nl*%pOjs8DBpX8o;WgT2C zVM04+H{Zm0d;fDpz)J=+bR{BF_KGx7Yjz|NMP!OlyUhlcTzm-K?~jF^+`v0KiuXR+ zB{ww+bxG;9e6pe-;1aT--Aq*BomjC3K9{4FnHC#Jryv(>EHt%hd-yqp>aThg#Bolz z28G`THU_bQ4)R8q%vbi)(@UKNG6oJ6^D$8##iiDi(DI4*jXUvkN4cCM{En>6VzJv4 zTuk0t*$wv{FU7+cW@lj&k=OFR^qks9K2QsBu$hoNYq) zBG^#Xj=6yDLWO5-wRIv9E3=^&LtDU@3hl=J& zTO;c@889JZXsIpV&|f=@-56)9$CjcRZJQWL#_+cGvcoz%=^E6M{^rFCy*^LxqVZxLre5JX`CmC}mO+#e)U*RaifNTnsSMj#T>04}~GmVK+3& zOIb^XNB(@uroP)m&<#fOD|h4LNh`jo;&egKfSl8r)Hwgcm0PZ-&VN9II436sf0b-7 zYnUvXs9cbOx`__Bn)dLIY7=CVSgU4o&251Jo~+6yu>|5F5hSvCvp5!COlpw2buOhT z_RLf!V3pxRJk!#EQX-H@kOI|NWk=>+$^dj-sWb?+a*%Q49emlWKxb-WkY*~;_}4Xw zcMD;cTuN(xjX<~+$vpQ`rZXUz@8&EWic-U>Z3M?sH~tx|sC6|xnh(pbUinV7@$9`K zQs#3mEct?fIY&nin_R6}psBVQuS}7>*v7cF{okMp%E>5uEssJ=~)pxL|_ zm&G2N80Y%drQiCiqQK{{nuT1p_=hn8a`gbChtEbRp=cjY(idX?G9igGU;+xmQLE;I z>`6vH53UR5rkFR^hOENL70oECUZT?yI&xoCco{^PtSBe=bYN7V>N z3L%C>+wKyWlJ>wqugHraUUEN#wj08}^I(J5GLib&SeiqH>4C#Cvm56nuF`=ArOFsf z9~9Dd&<e?LzWk3KA{wDURMzv^=@;( z>s?lFL5k8sCj3AJeVV0J^gVitcKkeJj=;N5c4VUKqJpYr0PHvMJU3txJY)gXKXTyp 
z+0s{euQbj_Y_e_6`fKaStClW`oLy@W8bQI8BqS9Eg#Q z8Nj0r!vB6a4ngor8^}PJ^Uux_i=7e+7HJi$t zqqr`2osDQ^YjRpZzVww_2N1)`D&jqXW)=wz!{#qY?o&M!?y$AyFiZ)3RX;YrD_@`1 z<2;oxIbp0xagG=mzC;M+EL1r=Kfc~>-&Ch|D+o}QGurnXYVbiH3lgeaitydsC+@D= z(Iqa(IfuiGu!lb}R36TseCXY6r%fd2=$z|d(C@bOiYHlnsIU9N z8J+qe0OLB(wd!Oi?a9|wllxt{W7>1&>oJBTyaz??vQXhO!zpve+P(?1y_Vg1xccxO zBZpNJQE_~jHMM6~ z>GVd?MAXLP7qX{z6Y7VSlOo8jzv;cdO;3hzw{63VM;_6EiR-q1K2?bQ<@iA1=_3UH zgeu7T3imz!{IBR%;yheEg+tUA29Nh6AOw#{Pri-bqgN>2P{|trw z{!gLszjVV7Dr`4!0H7QL0O0z+r2q2)E|zxY&UB_8M*nm)rMGgnxBIV{_}`ZQ)_=sr zJDLY}TkMG6d1aiKQkTuCJ8zQ}yF1ELa6 zj35<>%PRr0Nt3ta%aaEsRpwTKZCZ&x^f;i*AP;?Hc`l5yu`Vo_(BZoX`go$ka0}E-nWowT8oYGEJ*aY1x8GDA$TbQGaNs5OscK@3XV;~JcMbw+SdYe;g;fUjXS)% z{?79kIV}c1yH>fc@V_^Aq>b@G>Z%W};nCgJ(qJc9C!sEq+rE~XQWRF9(l$|=)nR+o z?8RtF1`2ZQzoyRmXiCJ^qR(F6mt1Xw>!AF9lb9)K{W%8n%0}9Z$wdU##3-_nzIQ0M zyL%#dLaWnGaM7T!ydo9&YRgRv4$ZZ~TNz2xtE2btb3k|_wFi3vpS>)>% zNl+K{>`hoOTkIhacU>kAL0o`|#0?ikN$8u5TdZROnB+k`5JY_ATNxXQrIa!R zBK0v&E*q|JAt5wZb|i!SDwl1CM+}!3tHIb(8x1AWUEa{+Gf2L)@nS$S@hk%3l-jP= z+3gTo&NE;q?zSjTPF!0_IeV@t-cy}{?}_fn z-06#IQ2{e&;UJzJQ)aSbemXY30=@g&QK3P3_~PJtDaDs@Vmc+T+hujH9z1NHK(o5j zzUIhHCCPzTpp7U+Eft$ul5LGrLmZ`A@lGNSOnoR7amQ3LOM1IzGti~`L_Ua@kGCXC zDD6yK1RgHjx*C5W8dAs8<;s~#-Lw5DS7*RhG+9&lF~nlq=jr#u*I9#VxN&0K5#p`&T5%t+2bZ(CoC$f>a zy!KAC3tyB|=YmaVudEcGHdH^!9yRA-4%`K34lr51@uqJqVS$Oqpvs+gcnU1Aa#7{S z0P$hJ+`??*j*h!%7+%`T4bTYoQEk!UaYlk3E*~|+!451A2d1p2JBtapp(#-u=``MM zmtDJ@-s_!J_dPoG8OvsVRXfqI26p4#8>1!RvWm@* zpC2tNqG@>f#%t})(onP*jI0TZFw?LFuthM??F^9G-anEbEGbG!7-bueR+Tww-eC3E zizmZMr0>8CL~YnA8oM~|iZoZJu6`EOqT8@cdag5dy8Lr?-_E=)y)Fs{fEZQU|riCwcq#U1y$M9 zE8|RU=+;@pdIeS&5q8M@8ggqT$m&-lom>?|Zy`+?meYG8`Uvq&oQ+>jyY2zCKwcK0 zjly*bd*mz1anI4co0ZFUv+$B=EXi(q3QslFs_{EKBO>z!_os;2Np$Q#l1+rtGl^uN zY6yqkw0zF}vetw47w>YgXeCWV;!u?2cIGs2`EEAhvOj9i`GEPDm`Sa1{z*mt5DeD+ z;DEA$JB2|{+~empwzn1~nY6@KL@pCmq&jjT-!=f1>7o;^4xmH#!`9aOl@XmpbLsnY zb@woy1uqet*Zpnc<-*ErNt z9aoL~Y0HMD;(vzU4i?Z{8zC>Ds_{r*qK9H`#f&bY46DYC{3Ji3%@=Mao<%2pJmHPB 
zkI=#&w60p&B%0GGKRe%s$I_J#OG9xw%z^7!cpE`#7RE*rkXrA7gN-wtRu;$rr%ibh za*K?|_3#)_zKDOJU=u72KBl`wbf;2!1G?EFpG_4t@B?5AQ>qag^@#sQb zcASv3rT8^_qvFE3S&`ziJz0LnXc;V|9=d9^yF5v0Ff0sU`ModeL(IE-V<{;o&~MGJ zdY9>66{OMIo^u-XU~slp&`t&!nhmHBWisF$U=BQbU;@D7>mcWr%uI@aA0CTe|&7|BXzrveGiQ=Y|Y!`P=%>)w_wg3x5O z1sXjMw(H0#zxE!!3ZGA#q}$B9=S%@hyFZ!dNxMKGxD8_HO7+VC zmRxktDsFr3$WY!4mRM&ZQQEEEJjw`)mpzOm3kd9w-gIC$R!VCNR+>;QjIUC`lu$0gfsW3SjA3Q~f|fkC5gz)~|KfJ7V4(*;lzP?sHvTM}1oIC_UbBF)bvQ zmqn;m)(;lL^_|jil)jLZ$KknN!`x$webX4gC}A*=B-x~@#98L$ z?jaHHze1a~_;1-k}lC^Jylo?kDYU@N4Ow$FcGgxA;TN1!$|<+xnzTC!H4cRc}nK7K`>m&)JiwS{r14ECmRhnK>1@6Rus8 zLkd#&d^{efddP6mVONXeXj3Y?&gzJ zJWa&TVz_-`I=>u=!QM9>Y|W)@&AJ$EM?HZ`<4`<)t~XsL#d?#7dpX>XyD0kvcv0R# z5%VR`KGBJ=JAlrpbOd=-e5M+dxsp9j(wt9+>G9`t_VaJjKu25eOQegi*BVkmUvFc3 zY=}-)H@!svV~|60h+JZp64<&*Fni|;=)j(~KC)Kc9QapWPZ2#A^}G7Ht)spqnJNP- z=$J|A?mgPYKKEsC$O+Lg_2SsPLjK(}<|({iKJ^H8#p>AJ&jR1uy?d>vHd`T-?lv{N zqj`)F(#;@Kz?wq!?XEl;)@p%p$>QPlFeGx;Vn)~uPqt8E;Qds#eyw3CGL=gESpi z10Kq+8}T&($nOcNB}50P89G(8vfwrZL*Fpuaay@yxwU%M(|Hzd9tc?JO6_I=Vzvdx+04^FA)rSE)Wn1o zMt}{R3(xUI1m(Tr=L>Q6T-Y$^%QjqP;ttX#ZFQ5(OQr15U-AY-9Ghk~>s5>uaRCRY zTR846B!OdODX*8VikHTaK+P`dUkiVZoBcgj!3wBh`<=QWx8YxfkBu`ZWjn&k6LpGe z*TTIJzKNHP43d?|v9gwEcc7|wE|@==Q%gh>zybH>OQZ{nwxX$yxS&VCa>O)u7A zKm2Aoz$(MabjYguamxmWiYR@>c_Qa58YEeCdPWd~?H1nnRqb_#^cE58046>M@KY{n#3t%uT)~wK~=Th zj3&@EV=AWv++HN4*!^Wdx3K| zc>IGVY;&1HHGt!Dw5}R#?sw_&0Wy}AzPgSRrvQO*G;tuAmGbQ;D*L0Xg;^>es@y9# zy>Ol1@Z@QvE;y&!V(NX&v!ixjWqiV+?r8XLBbd}so&!)&CBz&{;c(E)=hn%EslSUe&u@XIu} zudK)Si)T0v%o|=U40sO$u^P(%I3q7x5$odu&VpF~p3m=*c&rIm#arApX4BW&>h2X@mJ5DS=vJFdkWk+{(~zEx@0yIzh>A<$S<-7T7z%3IRN-?wGw zB(eddtthi4ZYiPE#-fK!Z_24sZXCtB#+EiT<1CXgpzXjqe-ZzN{3YGuBiU$#lk!8< z2xQ3obGc^lIz7t}D_b?ni3~?yYc%~--3TI~@Gk6GlWs~Bo;vJ*TigvSps18M7(bVk zUh+v%-aT(Ksi&Gj8xV=8Ij(5L9nC6Rm@YzHfREXP&C)E%@!cAqQMquIY3S^y%aL$i zPXqs%3ZXSQ$^;E-=8QEMy+yDQ`xnAz5Dfz;Tr%L)5<9Y@)_+yG{SCDr>e%P^3l}|S zntY4vZa3$%_?OVFLLD1!PAf#3{83jXZqejiqUb-rdotVVRXvpKe~< 
z(C{ZS_s)NkJ(b9(8GNdeGcX0|KBr_;ROH2;DMj@;*N{+sCReO*D1<2F_Ckw8Tk$5v zi$dkqQ`YAH-c;#T+`Z(6?LMW-*2~Tt%JZv(xwiZH1!*3v5Ak_m*%cj~c@L*!p_m_Ua!>+fUQwRhLl>ExuNr30hz!`(0O1%aJW*lscM2L$8E5I?E z{so@8_w4Vr{wonw)z>Dl3u@>|O$|aB0u8-AoZafnQ;(4O@}C^9EglVT<1xub$YTjx{I=*%tv5PY8uWtjs}ZRa5Vc^R)8oAD z32oX1$oC{gx7ly)v@Jt3ZAMtcV|`N{r!Xq>N>&hL^oH0HD=|;cTT(>`buy2%7A?io zfQ_Gck=ujP&Bf>8c$%jGXQ?I+9^kYqAvLPdmSv(h2v^D{4CLk}%#R=yoQzla?bzwt z)qK>h-QTR~XyU~>Q?ol0RQ)AlE7b;PaQo5P%2{(8I=JC+Z#%mA_~Cv7rq zq?ZBF#tV60!@awqd1a`(!E3Ju;1ZBfLT}-6^4f)bb`mvmhIR2ss*U=KKloP-O@!{< z5OXQrY|*{#`G4!k;v<(ks;m+kW(st8(bO+5>$RIXniOWeQdVjzX3RG?BFpU`Nx&vS z)db_=X0Mn*@cVj+z$vIQX7lXtFkg7n#~5Sj24)(^%0aqGOM@I|h@v-YrdkDhL*Gx? zXrWPX8l{G)OdL*j$2`g+BgmpD*P0nq8Thd>s{05T}J=gQK*!4L2+E}b&k)z zx=ZeviV>go_X!-rB|NSG4=pKn*L0ecYK5jT#~`(b7YSNen*T(*d~8mFWCt>>IEYG! z@}xtI>e~6=xZ#^;0{@W>p5dY*u~$Z|sO#b>$aEAwL(DnCr=KaP7oIOT zOFRnRtMbgPUFS2-gBpq}Ucsy@8433gL7V9w3{xqS9r;kXasYW7(2UFrCtsY=wVJbc z)q7dxd9#YTE(o~Sj2fh-)|95HB#Yg)r~awBrNwqAol%D7I~jk;HI#h@_4bqiE`ydvv+b}f>^CpdNu)`Q^DWV1lnqQe%p4QU&- ztX>k$7(#QyTr><5&p1cUl}_qh5e;4fk6eVxfFFa&)Xi8hrl%m(1oiwAzW4;Tb-11^ z=61#`uTv8cpO^V9;nDo}A#u1k(X-qS=v2Qyz6|oA!pn2%C=&v_k~jm$5|cl+gajY_ zOg>U42Uy^@&i+_K`2ax}skAvNL#K=|4kWZ7BoXMBiY1E$p>aT?iktr#lc9}% ze=<1t6+CSgb{b04A(M^Gq~;X9{6)_93sbjCaZBTrL}`m?10(Bo2CaDB9 zl_%5cbV#!h)?qAo33Y5~PAt|gI&PiCO6};dGz8A}<4YzvkKqcmM0PWGYb)|vQ2bz7 zpj1mDj6k`Ka?QoTw#;L*;B9CvA^dY&QGki#)Q2qY;-(X(7^3ewLn=}EM8>c^hW{Vw znGxgJc|JcG_q5YB zlCoXOI=khu1c$0puB(ltoiHl-NYPF9kd6n_yZGJc1b+nv|DH%uN_%xbd0JE)7rOQP zi${kc5tv>-`mNf*JEQjH!4{TSTb;h%&D5uPqNn@P`fgn@@UaCclL>b~fXrI8*vvNw zhe_38YmE5Fwd8hDHXjSs*Lnl5hV+tv1WJ`U&XvT05?fz3PrbxF0e6hO=sv6wm)Ssy zOeB&r;Q`hx1AWUG?MN$nZ^#^0!Is_3-Z#Sk@?ZasrWnCMYq_ET01He20M7qD_f(OQ zl9m0RlOF$$sQg?1VY7O4cAU0HoqG4m(gkbp#$c+8jcK6p7RZ!%B_0+K5TrI-C!N}x z`mg9^X#qw6dX7T*oUEiO)e`EIDplo_DqfM?v46)s)^=C`Zc@^%lV}DJT`;n9vI?(j z2JUe~jJU^MMlY(_f5@2*x+!O`37Oby7aQfHeH5+{l0RyttyxrIaxSsHFL;$AGgw*B zCfbqWR0O`>_m+Wv^5LU~ZM0HKR8Z;;rzv-qM}(6kO*LBCqamFPbc!xKty1_Dnq=wD z$T=`d2kph#&D(b8Z3 
z;{vu*EtOvCX>|r}FX+#XS&Nix(Ypq@!?}j5~YsCfv)2s(}H*&$FQo^ z8nxVH=`POMYpRrU#K~A7MT#-Ey{{)%URrEby%<1t_%OVTI6g=R;i~r<4(;5j<2a7kO@bgs1In14# z32UN_#fBB|BjrB)~XVRgZ|IlzdOE!^ydJH#B|nsIsJ}+`9ie<5xbjD0%ELKhH626eUDwz7;i7~(v(x6hth*K( z7&v`EH?1V_d|P+58rU4*zT6|}8@_|hY=BK(&epEn7y4RG`*es-#5-VHxc$qFb3jX9m6IOXDVdFB{UV)>Sb6) zOZR=SV4Y-?ioGUT6|L6pMrIXUTYO`_tr(Yn+}}VVDQGOTUgU>%Jk63l$C${E2TRh_ zDqHA^{N=kqFT|^$`+-oc)=F`|bB!Kpa1ylfO=Wr-d&=M`R>Ue--kO5X;`9Z$czL+} zo@WjQ{%j~-XepLu08W0^973fEWN0*3*U}%a53CQlU|2ZWEr9eOXn27i~NH5#QJ{rlp(<%T+NIbJb=0L z-Sw^>@FvX%AC}ZST&PM?A;JwAy?DNVByuyC5ydDX7i+u##S^hMd@DvD<_FCy$&fTb z%=l+T&k1}K7dL|0TsVq`3L#p}rNj1;>Nm$5op)A1%#CHlYODGG$)e#gp- z_f^Sqedwvn9^$(P{Wy~a7pF$<3LxqBGDIuZ|%WZaT|MITR)(a@^g5-Ip=VfK)BIfWp~u6A~%yRpkIwx z(K_w%?G78+X_TL2otx+$-~lo()`L%+cJ|C4l{Us^T*0D}$0IQnzREs-%VuO&2&1NY zsL__fKBFzd@wQ2gDEEbm^A@4VZ=TXu4|6t==~D>59HNbXp3mw#A>1KZ`~}i;M#pE4 zvP&dl*a#HP#O;pVaX)_BsKw6Yv#%Y5K&_IH7Rl3C7M-)y|CbdJHIY>iCB2fG_tO~A zTS;Y@MDwLqPT2EC5#wI=vap-5P5TYU?NvcvN*D+JRSI|vRe{Wk!5mD2?JI5 zL_C8t8alv_?hZ&UO-?PRTOl4TG#{19n@$r;9hLMv+2~;+`n78SwhV0~Lv0vP_iMB8 z=^ic#p6lf>yf9-1=(kQP!7nNdp%@=hCj52psrTx{L zP^dCs&BkmIQhvKvcFMFpK1~Rno%$opA|wrw&0nDU)v!Uk?4eMn@Tth{0WaXJL6U{^ zFk`>MGc$25Z z`7WB5)ov($F4K{0y-VPf}7pu>81nO#1)wf5-K9zMj+r3P2-yg2njdvzLf zg{PT&b^RSH zREh>~#TTh03km2DY5-H?jhAl7N>a*#2iC`XY`e3I*q#eB;a&)Ug9M*Jg*cj+I!Lam zdG`3iNT{yg(ax&rD@hbvHxmJ+e;^kWKRcW@;BgB`LXz#K+O%l|5z5EsA&dEr>)YSE zceNEa#gWf#N4r_LKcgWa@APn{J*h*Odi)zh{#(~BVkoGR1m1~a26VWQg67=4pQ9M01V4M(2SR3-@9A*LqN=_=$`GMA8sPBL~Ge}q^Fe^H}GqVM> zHujiUhC;b*8S5%>J<~wkKt0phCFTU~l(Mnb?*rC#g#17mzUUQrnv)LizG+)KfK1tI zEOP@RXP?2+!eDu|O2H4yai{*0B)~HdFgm7ZYfaiG0fUKFBgGlWnOAk|c0%U%nXGym zMa3>Ws!#!5O{SMDrbbfbpdw^!78f)eSvL^-A;ujAjRW<=Mu@W;k`n(0aeQg3qyYN7 zT|B(#_Xi$hb8QUL@v)f8I{SA2<18t@=(Z?H)M8zGhlyOU zA~J&O4zy8uB6=fBbNKtLT^4J8>Z19psUG5Pd zNN-WloQEdB*S^aOQ**uUBi}rD-0$AwyJxvni zg49~@b5S-YofE#2xk(!qlz7vwSAm9gl#F4m@ah%IYQ1J zGQk-o=TH;sA65z~ja{kiMNf7#_g{94w?w!>0;j8#0sY%nJ3F_j$1XSa=SQk)EsB7~GX1?kvIR)R{ 
z7d&{sVz!K=BHZri)Ie2a+K9^<-6rIseV^1W)a(;u{{&U%4!xyt6?%Rb&g);NsO3v& z-WuC#^t|)!smB_uzW&9f+?ze;GBQA^Q!2I1C~&>`9SG%C;-OiEyhWC?l+Im@%lZQa z4$zmK9AIpToH3KH^%cm0k21)^6W?+6V|p`#w#uZ>Mw*rQP7Hz3l`NIRe@?j2 z!cvNr5F}5j969XB2Uyp;)+?Ye-Q9)o-nLe|^1-SM=h#?+;G=_j-1P*A$8GMuzt9C~ z3c=xRV-0(wsr^yjh4Vn1Fcc}f!`3&2y&s8jN`75KWHIWomiR^47d~Y%=&tq_LT|6Y z97;Y_T>H7yr0buZzU<2M-6p1`xfr_U{Hie;uTs-v3ETgPA^N;(o69B0#pC1l{UAS| zI&93Op?G7De3IF%W0RKTiQ~9YuQot>Ba5xfQf*$%`zWE7D$^fk7oNJArMvj>w59&d zXVj%|^XV(frVztN!EnK$k5={}OVuvjIDcQ2l!H(esp}~B-{v?!Ppb7QpSbhlZIiAr ztq*_AT_e2aexv6-mUS!VU7Pt*PFQF^=Qg{mc`HtFtFO9U)54hkJGWo-|NoUwpU-3n zFn^-j4qQ?uzqKLxuu5+dkA>XRytlk^FMaAO+HKWpXE(bD#2nqvmL1OT`jdoV_(L^+D-g+(f-d%@5beC zI^t^gsLu9U(d^PqGnV}^J@fhb&J`jWPTLpv?_AOUr(T+IwK}i*xqttJH-|8-+}KgJ zWre@<(&F`#C!7wL%*8qH_3doF#-jYg7DC5wPrBBnE0xt2AY}C9*!v6jEhoQTFz4CN zuz#-~KYwufdvws1$J?6JZgQxonIHbB>iW8) z7GYQQ6c=o}Iq%n-*9UY96!wJg$cvvD)X(euE~v-$^|w~}#9PYKKU|D8Gw`wE&uCWJ zzLz&sz)<=5Kf&qepV)qSaOduANeiwwKjK1tj5crZ>sP$r^5o;?Df${0On0Y-J~0mW zee}lS+L6MmX4@BD<9cmU6jNsZrXyt8u`fN=wVd8tj~+TPw{z010JWDw^70ZtkHxIF zR6W#ZqP{GoCA(b@u-8l6qRp?UnmpXcEV?b91~Vgp~VU58Lzp<;1D$ zmp=|`v;Qo~biCM0xT2uOdz0c^$^D`Fg$kWVS1v4?vo2_d_ztf4DWB95trw?v96DaE zRgkY(UjJ%BTS0){&ow*$+;?52`|@D$%VlftY}oYg)#?|l!eNs|ra5)Bmw4~N8D?S^KY)cu+^O5@2nLIC3ao1iJdT|N+?q zo^jdt*9Asqp1<3yS8VY)Sg@U)V{>ba_1AE|mqxE#vw~+VK2h^`BlqMtA#d-VWqKBL zQ0K3s!I={C;OetS9&g)rHdWVXdvw=hmGzEnfgen*aFuZjHF|Zyw!wNZc25mGC*;rfGY~r%fu=8MTuRUmq{C?NSq2Fy%#9ex9^w$g8dC6W89?(v z+cNM380Mx7Wb0NG|~O>%4oEh7NhjAaP{vdx%7=EyeBHo|H%VgMaz0hYn@ X0B=^{78?czP9QwN%)pT92I2t#KHdEo literal 0 HcmV?d00001 diff --git a/template/.claude/skills/claude_hooks/references/events.md b/template/.claude/skills/claude_hooks/references/events.md new file mode 100644 index 0000000..bd00799 --- /dev/null +++ b/template/.claude/skills/claude_hooks/references/events.md @@ -0,0 +1,696 @@ +# Hook Events Reference + +Complete input schemas and decision control for all Claude Code hook events. 
+ +## Table of Contents + +| Category | Events | +|----------|--------| +| Session lifecycle | [SessionStart](#sessionstart), [SessionEnd](#sessionend) | +| Prompt handling | [UserPromptSubmit](#userpromptsubmit) | +| Tool lifecycle | [PreToolUse](#pretooluse), [PermissionRequest](#permissionrequest), [PermissionDenied](#permissiondenied), [PostToolUse](#posttooluse), [PostToolUseFailure](#posttoolusefailure) | +| Agent lifecycle | [SubagentStart](#subagentstart), [SubagentStop](#subagentstop) | +| Stop events | [Stop](#stop), [StopFailure](#stopfailure) | +| Team events | [TeammateIdle](#teammateidle), [TaskCreated](#taskcreated), [TaskCompleted](#taskcompleted) | +| Compaction | [PreCompact](#precompact), [PostCompact](#postcompact) | +| Config & files | [InstructionsLoaded](#instructionsloaded), [ConfigChange](#configchange), [CwdChanged](#cwdchanged), [FileChanged](#filechanged) | +| Worktrees | [WorktreeCreate](#worktreecreate), [WorktreeRemove](#worktreeremove) | +| Notifications | [Notification](#notification) | +| MCP Elicitation | [Elicitation](#elicitation), [ElicitationResult](#elicitationresult) | + +--- + +## Common input fields (all events) + +```json +{ + "session_id": "abc123", + "transcript_path": "/Users/.../.claude/projects/.../transcript.jsonl", + "cwd": "/Users/my-project", + "permission_mode": "default", + "hook_event_name": "PreToolUse" +} +``` + +`permission_mode` values: `default`, `plan`, `acceptEdits`, `auto`, `dontAsk`, `bypassPermissions` + +When running inside a subagent: +- `agent_id` — unique subagent identifier +- `agent_type` — agent name (e.g., `"Explore"`, `"security-reviewer"`) + +--- + +## SessionStart + +**When:** Session begins or resumes. Only `type: "command"` hooks supported. 
+
+**Matcher values:** `startup` | `resume` | `clear` | `compact`
+
+**Input:**
+```json
+{
+  "source": "startup",
+  "model": "claude-sonnet-4-6"
+}
+```
+
+**Output:**
+- stdout text → added to Claude's context
+- `hookSpecificOutput.additionalContext` → string added to context
+- `CLAUDE_ENV_FILE` — append `export VAR=value` lines to persist env vars to Bash
+
+**Cannot block.** Use for context injection and environment setup.
+
+**Pattern — inject git context:**
+```bash
+#!/usr/bin/env bash
+INPUT=$(cat)
+BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
+RECENT=$(git log --oneline -5 2>/dev/null || echo "no git history")
+echo "Current branch: $BRANCH"
+echo "Recent commits:"
+echo "$RECENT"
+```
+
+---
+
+## SessionEnd
+
+**When:** Session terminates. Default timeout: 1.5s (configurable via `CLAUDE_CODE_SESSIONEND_HOOKS_TIMEOUT_MS`).
+
+**Matcher values:** `clear` | `resume` | `logout` | `prompt_input_exit` | `bypass_permissions_disabled` | `other`
+
+**Input:**
+```json
+{
+  "reason": "other"
+}
+```
+
+**Cannot block.** Use for cleanup, logging, archiving transcripts.
+
+---
+
+## UserPromptSubmit
+
+**When:** User submits a prompt, before Claude processes it.
+
+**No matcher support** — fires on every prompt.
+
+**Input:**
+```json
+{
+  "prompt": "Write a function to calculate factorial"
+}
+```
+
+**Output:**
+- Plain stdout (non-JSON) → added as context visible to Claude
+- `hookSpecificOutput.additionalContext` → added more discreetly
+- `hookSpecificOutput.sessionTitle` → renames the session
+- `decision: "block"` + `reason` → rejects the prompt
+
+**Block a prompt:**
+```bash
+INPUT=$(cat)
+PROMPT=$(echo "$INPUT" | jq -r '.prompt')
+if echo "$PROMPT" | grep -qi "drop table\|delete from"; then
+  jq -n '{"decision":"block","reason":"SQL destructive operations require human review"}'
+  exit 0
+fi
+exit 0
+```
+
+---
+
+## PreToolUse
+
+**When:** After Claude creates tool parameters, before the tool executes. Can block.
+
+**Matcher values:** `Bash` | `Edit` | `MultiEdit` | `Write` | `Read` | `Glob` | `Grep` | `Agent` | `WebFetch` | `WebSearch` | `AskUserQuestion` | `ExitPlanMode` | MCP tools (`mcp__<server>__<tool>`)
+
+**Input:**
+```json
+{
+  "tool_name": "Bash",
+  "tool_use_id": "toolu_01ABC...",
+  "tool_input": {
+    "command": "npm test",
+    "description": "Run test suite",
+    "timeout": 120000,
+    "run_in_background": false
+  }
+}
+```
+
+**tool_input by tool:**
+
+| Tool | Key fields |
+|------|-----------|
+| `Bash` | `command`, `description`, `timeout`, `run_in_background` |
+| `Write` | `file_path`, `content` |
+| `Edit` | `file_path`, `old_string`, `new_string`, `replace_all` |
+| `Read` | `file_path`, `offset`, `limit` |
+| `Glob` | `pattern`, `path` |
+| `Grep` | `pattern`, `path`, `glob`, `output_mode`, `-i`, `multiline` |
+| `WebFetch` | `url`, `prompt` |
+| `WebSearch` | `query`, `allowed_domains`, `blocked_domains` |
+| `Agent` | `prompt`, `description`, `subagent_type`, `model` |
+
+**Output (hookSpecificOutput):**
+
+```json
+{
+  "hookSpecificOutput": {
+    "hookEventName": "PreToolUse",
+    "permissionDecision": "allow",
+    "permissionDecisionReason": "Reason shown to user (allow/ask) or Claude (deny)",
+    "updatedInput": { "command": "modified command" },
+    "additionalContext": "Extra context for Claude before tool runs"
+  }
+}
+```
+
+`permissionDecision` values:
+- `"allow"` — skip permission prompt, proceed
+- `"deny"` — block the tool call (reason shown to Claude)
+- `"ask"` — show permission dialog to user (reason shown to user)
+- `"defer"` — pause for external input (non-interactive `-p` mode only)
+
+**Precedence:** `deny > defer > ask > allow`
+
+**Pattern — block specific file paths:**
+```bash
+INPUT=$(cat)
+FILE=$(echo "$INPUT" | jq -r '.tool_input.file_path // empty')
+if [[ "$FILE" == *".env"* || "$FILE" == *"secrets"* ]]; then
+  jq -n '{hookSpecificOutput:{hookEventName:"PreToolUse",permissionDecision:"deny",permissionDecisionReason:"Access to secret files is blocked"}}'
+  exit 0
+fi
+exit 0
+```
+
+**Pattern — auto-approve safe git commands:**
+```bash
+INPUT=$(cat)
+CMD=$(echo "$INPUT" | jq -r '.tool_input.command // empty')
+if [[ "$CMD" =~ ^git\ (status|log|diff|show|branch|fetch) ]]; then
+  jq -n '{hookSpecificOutput:{hookEventName:"PreToolUse",permissionDecision:"allow",permissionDecisionReason:"Read-only git command"}}'
+  exit 0
+fi
+exit 0
+```
+
+---
+
+## PermissionRequest
+
+**When:** A permission dialog is about to be shown to the user.
+
+**Matcher values:** Same as PreToolUse tool names.
+
+**Input:**
+```json
+{
+  "tool_name": "Bash",
+  "tool_input": { "command": "rm -rf node_modules" },
+  "permission_suggestions": [
+    {
+      "type": "addRules",
+      "rules": [{"toolName": "Bash", "ruleContent": "rm -rf node_modules"}],
+      "behavior": "allow",
+      "destination": "localSettings"
+    }
+  ]
+}
+```
+
+**Output:**
+```json
+{
+  "hookSpecificOutput": {
+    "hookEventName": "PermissionRequest",
+    "decision": {
+      "behavior": "allow",
+      "updatedInput": { "command": "modified" },
+      "updatedPermissions": [ ... ]
+    }
+  }
+}
+```
+
+`updatedPermissions` entry types: `addRules`, `replaceRules`, `removeRules`, `setMode`, `addDirectories`, `removeDirectories`
+`destination` values: `session`, `localSettings`, `projectSettings`, `userSettings`
+
+---
+
+## PermissionDenied
+
+**When:** Auto mode classifier denies a tool call. Only fires in auto mode.
+
+**Input:**
+```json
+{
+  "tool_name": "Bash",
+  "tool_input": { "command": "rm -rf /tmp/build" },
+  "tool_use_id": "toolu_01...",
+  "reason": "Auto mode denied: command targets a path outside the project"
+}
+```
+
+**Output:** Return `{"hookSpecificOutput":{"hookEventName":"PermissionDenied","retry":true}}` to let Claude retry.
+
+---
+
+## PostToolUse
+
+**When:** Tool completes successfully. Cannot undo — can only give Claude feedback.
+
+**Matcher values:** Same as PreToolUse tool names.
+ +**Input:** +```json +{ + "tool_name": "Write", + "tool_use_id": "toolu_01...", + "tool_input": { "file_path": "/path/file.txt", "content": "..." }, + "tool_response": { "filePath": "/path/file.txt", "success": true } +} +``` + +**Output:** +```json +{ + "decision": "block", + "reason": "Tests failed after this file was written", + "hookSpecificOutput": { + "hookEventName": "PostToolUse", + "additionalContext": "...", + "updatedMCPToolOutput": { ... } + } +} +``` + +`updatedMCPToolOutput` — replaces MCP tool response (MCP tools only). + +**Pattern — run linter after file write:** +```bash +INPUT=$(cat) +FILE=$(echo "$INPUT" | jq -r '.tool_input.file_path // empty') +EXT="${FILE##*.}" +if [[ "$EXT" == "py" ]]; then + if ! python -m ruff check "$FILE" 2>&1; then + jq -n --arg f "$FILE" '{"decision":"block","reason":"Ruff linting failed on \($f). Fix lint errors before proceeding."}' + exit 0 + fi +fi +exit 0 +``` + +--- + +## PostToolUseFailure + +**When:** Tool execution throws an error or returns failure. + +**Input:** +```json +{ + "tool_name": "Bash", + "tool_use_id": "toolu_01...", + "tool_input": { "command": "npm test" }, + "error": "Command exited with non-zero status code 1", + "is_interrupt": false +} +``` + +**Output:** +```json +{ + "hookSpecificOutput": { + "hookEventName": "PostToolUseFailure", + "additionalContext": "Hint: check package.json for correct test script name" + } +} +``` + +Cannot block (tool already failed). Use to inject diagnostic context for Claude. + +--- + +## SubagentStart + +**When:** A subagent is spawned via the Agent tool. + +**Matcher values:** `Bash` | `Explore` | `Plan` | custom agent names + +**Input:** +```json +{ + "agent_id": "agent-abc123", + "agent_type": "Explore" +} +``` + +**Output:** `hookSpecificOutput.additionalContext` — injected into subagent's context. + +--- + +## SubagentStop + +**When:** A subagent finishes responding. + +**Matcher values:** Same as SubagentStart. 
+ +**Input:** +```json +{ + "stop_hook_active": false, + "agent_id": "agent-abc123", + "agent_type": "Explore", + "agent_transcript_path": "~/.claude/projects/.../subagents/agent-abc123.jsonl", + "last_assistant_message": "Analysis complete. Found 3 issues..." +} +``` + +Uses the same decision control as [Stop](#stop). + +--- + +## Stop + +**When:** Main Claude Code agent finishes responding. Does not fire on user interrupt. + +**No matcher support.** + +**Input:** +```json +{ + "stop_hook_active": true, + "last_assistant_message": "I've completed the refactoring..." +} +``` + +⚠️ **Always check `stop_hook_active`** to avoid infinite loops when blocking. + +**Output:** +```json +{ + "decision": "block", + "reason": "Tests are still failing. Run npm test and fix any errors." +} +``` + +**Pattern — quality gate (safe):** +```bash +INPUT=$(cat) +ACTIVE=$(echo "$INPUT" | jq -r '.stop_hook_active') +# Only gate on first Stop, not recursive ones +if [[ "$ACTIVE" == "true" ]]; then + exit 0 +fi +if ! npm test --silent 2>/dev/null; then + jq -n '{"decision":"block","reason":"Test suite is failing. Fix all tests before finishing."}' + exit 0 +fi +exit 0 +``` + +--- + +## StopFailure + +**When:** Turn ends due to API error (rate limit, auth, billing, etc.). + +**Matcher values:** `rate_limit` | `authentication_failed` | `billing_error` | `invalid_request` | `server_error` | `max_output_tokens` | `unknown` + +**Input:** +```json +{ + "error": "rate_limit", + "error_details": "429 Too Many Requests", + "last_assistant_message": "API Error: Rate limit reached" +} +``` + +**Cannot block.** Use for alerting and logging only. + +--- + +## TeammateIdle + +**When:** An agent team teammate is about to stop working. 
+ +**No matcher support.** + +**Input:** +```json +{ + "teammate_name": "researcher", + "team_name": "my-project" +} +``` + +**Control:** +- `exit 2` + stderr → teammate keeps working with your feedback +- `{"continue": false, "stopReason": "..."}` → stops teammate entirely + +--- + +## TaskCreated + +**When:** A task is being created via `TaskCreate`. + +**No matcher support.** + +**Input:** +```json +{ + "task_id": "task-001", + "task_subject": "Implement user authentication", + "task_description": "Add login and signup endpoints", + "teammate_name": "implementer", + "team_name": "my-project" +} +``` + +**Control:** exit 2 blocks creation; `{"continue":false}` stops teammate. + +--- + +## TaskCompleted + +**When:** A task is being marked complete via `TaskUpdate`. + +**No matcher support.** Same input shape as TaskCreated. Same control options. + +--- + +## PreCompact + +**When:** Before context compaction. + +**Matcher values:** `manual` | `auto` + +**Input:** +```json +{ + "trigger": "manual", + "custom_instructions": "" +} +``` + +**Can block:** exit 2 or `{"decision":"block"}`. + +--- + +## PostCompact + +**When:** After compaction completes. + +**Matcher values:** `manual` | `auto` + +**Input:** +```json +{ + "trigger": "auto", + "compact_summary": "Summary of compacted conversation..." +} +``` + +**Cannot block.** Use for logging or archiving summaries. + +--- + +## InstructionsLoaded + +**When:** A `CLAUDE.md` or `.claude/rules/*.md` file is loaded into context. + +**Matcher values:** `session_start` | `nested_traversal` | `path_glob_match` | `include` | `compact` + +**Input:** +```json +{ + "file_path": "/Users/my-project/CLAUDE.md", + "memory_type": "Project", + "load_reason": "session_start", + "globs": ["src/**"], + "trigger_file_path": "/Users/my-project/src/index.ts" +} +``` + +**No decision control.** Observability / audit only. + +--- + +## ConfigChange + +**When:** A settings file changes during a session. 
+ +**Matcher values:** `user_settings` | `project_settings` | `local_settings` | `policy_settings` | `skills` + +**Input:** +```json +{ + "source": "project_settings", + "file_path": "/Users/my-project/.claude/settings.json" +} +``` + +**Can block** (except `policy_settings`): `{"decision":"block","reason":"..."}` or exit 2. + +--- + +## CwdChanged + +**When:** Working directory changes (e.g., Claude runs `cd`). + +**No matcher support.** + +**Input:** +```json +{ + "old_cwd": "/Users/my-project", + "new_cwd": "/Users/my-project/src" +} +``` + +**Has access to `CLAUDE_ENV_FILE`.** Return `watchPaths` array to update FileChanged watch list. + +--- + +## FileChanged + +**When:** A watched file changes on disk. + +**Matcher / watch list:** `|`-separated literal filenames: `".envrc|.env"`. The same value both defines which files to watch AND filters hooks when they fire. + +**Input:** +```json +{ + "file_path": "/Users/my-project/.envrc", + "event": "change" +} +``` + +`event` values: `change` | `add` | `unlink` + +**Has access to `CLAUDE_ENV_FILE`.** Return `watchPaths` to update dynamic watch list. + +--- + +## WorktreeCreate + +**When:** Creating a worktree via `--worktree` or `isolation: "worktree"`. Replaces default git behavior. + +**Input:** +```json +{ + "name": "feature-auth" +} +``` + +**Output:** Hook MUST print the absolute path to the new worktree directory on stdout. +HTTP hooks: `hookSpecificOutput.worktreePath`. + +--- + +## WorktreeRemove + +**When:** Removing a worktree at session exit or when a subagent finishes. + +**Input:** +```json +{ + "worktree_path": "/Users/.../my-project/.claude/worktrees/feature-auth" +} +``` + +**No decision control.** Use for cleanup. + +--- + +## Notification + +**When:** Claude Code sends a notification. 
+ +**Matcher values:** `permission_prompt` | `idle_prompt` | `auth_success` | `elicitation_dialog` + +**Input:** +```json +{ + "message": "Claude needs your permission to use Bash", + "title": "Permission needed", + "notification_type": "permission_prompt" +} +``` + +**Cannot block.** Return `additionalContext` in `hookSpecificOutput` to inject context. + +--- + +## Elicitation + +**When:** An MCP server requests user input during a tool call. + +**Matcher values:** MCP server names. + +**Input (form mode):** +```json +{ + "mcp_server_name": "my-mcp-server", + "message": "Please provide your credentials", + "mode": "form", + "requested_schema": { + "type": "object", + "properties": { + "username": { "type": "string", "title": "Username" } + } + } +} +``` + +**Output:** +```json +{ + "hookSpecificOutput": { + "hookEventName": "Elicitation", + "action": "accept", + "content": { "username": "alice" } + } +} +``` + +`action` values: `accept` | `decline` | `cancel`. Exit 2 denies. + +--- + +## ElicitationResult + +**When:** After user responds to MCP elicitation, before response is sent to server. + +**Matcher values:** MCP server names. + +**Can block** (exit 2 → response becomes decline). + +**Output:** Same schema as Elicitation output — can override user's response. diff --git a/template/.claude/skills/config-management/SKILL.md b/template/.claude/skills/config-management/SKILL.md new file mode 100644 index 0000000..d4560cb --- /dev/null +++ b/template/.claude/skills/config-management/SKILL.md @@ -0,0 +1,93 @@ +--- +name: config-management +description: >- + Guide for pyproject.toml, justfile, pre-commit, and CI pipeline configuration. + Use this skill when modifying project configuration files, adding dependencies, + configuring tools, or troubleshooting CI. Trigger on mentions of: pyproject.toml, + justfile, pre-commit, CI configuration, GitHub Actions, tool configuration, + dependency management, or any request to add/modify project config. 
+model: haiku
+---
+
+# Config Management Skill
+
+Guidance for managing project-level configuration across the toolchain.
+
+## Configuration files
+
+| File | Purpose | When to modify |
+|---|---|---|
+| `pyproject.toml` | Package metadata, dependencies, tool configs (ruff, basedpyright, pytest, coverage) | Adding deps, changing tool settings |
+| `justfile` | Task runner recipes (test, lint, fmt, ci, etc.) | Adding new workflows, modifying commands |
+| `.pre-commit-config.yaml` | Git pre-commit hooks (ruff, basedpyright) | Adding/updating hook repos |
+| `.github/workflows/ci.yml` | GitHub Actions CI pipeline | Modifying CI steps, adding jobs |
+| `.copier-answers.yml` | Copier template answers — **never edit manually** | Only via `copier update` |
+| `uv.lock` | Dependency lockfile — **never edit manually** | Only via `uv lock` or `just update` |
+
+## pyproject.toml structure
+
+Key sections and their tool owners:
+
+| Section | Tool |
+|---|---|
+| `[project]` | Package metadata, version, dependencies |
+| `[project.optional-dependencies]` | Extra dependency groups (dev, test, docs) |
+| `[build-system]` | Build backend (hatchling) |
+| `[tool.ruff]` | Ruff linter + formatter config |
+| `[tool.ruff.lint]` | Rule selection, per-file-ignores |
+| `[tool.basedpyright]` | Type checker mode, Python version |
+| `[tool.pytest.ini_options]` | Pytest markers, flags, test paths |
+| `[tool.coverage.*]` | Coverage thresholds, source paths |
+
+## justfile quick reference
+
+| Recipe | What it runs |
+|---|---|
+| `just test` | `pytest -q` |
+| `just test-parallel` | `pytest -q -n auto` |
+| `just coverage` | `pytest --cov=src --cov-report=term-missing` |
+| `just lint` | `ruff check src/ tests/` |
+| `just fmt` | `ruff format src/ tests/` |
+| `just fix` | `ruff check --fix src/ tests/` |
+| `just type` | `basedpyright` |
+| `just docs-check` | `ruff check --select D src/` |
+| `just ci` | Full pipeline: fix + fmt + lint + type + docs-check + test + precommit |
+| `just review` | Pre-merge review: fix + lint + type + docs-check + test |
+
+## Dependency management
+
+```bash
+# Add a runtime dependency
+uv add <package>
+
+# Add a dev dependency
+uv add --optional dev <package>
+
+# Sync from lockfile (no changes to lockfile)
+just sync    # or: uv sync --frozen --extra dev --extra test --extra docs
+
+# Update lockfile and resync
+just update  # or: uv lock && uv sync --extra dev --extra test --extra docs
+```
+
+## Pre-commit configuration
+
+Hooks run on `git commit`. Managed via `.pre-commit-config.yaml`:
+- **ruff** — lint + format check on staged `.py` files
+- **basedpyright** — type check on `src/`
+
+Run all hooks manually: `just precommit`
+
+## Rules for config changes
+
+1. **Never weaken** ruff or basedpyright settings — the `pre-config-protection.sh` hook blocks this.
+2. **Never edit** `uv.lock` directly — the `pre-protect-uv-lock.sh` hook blocks this.
+3. **Never edit** `.copier-answers.yml` manually — use `copier update`.
+4. Keep `justfile` recipes simple — one concern per recipe.
+5. Pin all dependency versions in `pyproject.toml`.
+
+## Quick reference: where to go deeper
+
+| Topic | Reference file |
+|-----------------------------|------------------------------------------------------------------|
+| Complete tool configs | [references/complete-configs.md](references/complete-configs.md) |
diff --git a/template/.claude/skills/config-management/references/complete-configs.md b/template/.claude/skills/config-management/references/complete-configs.md
new file mode 100644
index 0000000..bda78c2
--- /dev/null
+++ b/template/.claude/skills/config-management/references/complete-configs.md
@@ -0,0 +1,257 @@
+# Complete configurations
+
+Ready-to-use configuration files for all tools in this project. Copy these into
+your project and adjust versions, paths, and rule selections as needed.
+
+---
+
+## pyproject.toml — all tool sections
+
+```toml
+# ── Ruff ────────────────────────────────────────────────────────────────────
+[tool.ruff]
+target-version = "py311"
+line-length = 88
+src = ["src"]
+exclude = [".git", ".venv", "__pycache__", "build", "dist"]
+
+[tool.ruff.lint]
+select = ["E", "W", "F", "I", "UP", "B", "SIM"]
+ignore = ["E501"]
+fixable = ["ALL"]
+
+[tool.ruff.lint.per-file-ignores]
+"tests/**/*.py" = ["PLR2004", "F401"]
+"**/__init__.py" = ["F401"]
+
+[tool.ruff.format]
+# Note: line-length lives under [tool.ruff]; the format section rejects it.
+quote-style = "double"
+indent-style = "space"
+skip-magic-trailing-comma = false
+line-ending = "lf"
+
+# ── basedpyright ─────────────────────────────────────────────────────────────
+[tool.basedpyright]
+pythonVersion = "3.11"
+include = ["src"]
+exclude = ["**/__pycache__", ".venv", "build", "dist"]
+venvPath = "."
+venv = ".venv"
+typeCheckingMode = "standard"
+
+# ── Bandit ───────────────────────────────────────────────────────────────────
+[tool.bandit]
+targets = ["src"]
+skips = []  # add rule IDs here only with a comment explaining why
+exclude_dirs = [".venv", "build", "dist", "tests"]
+# Note: severity/confidence thresholds are CLI flags (-ll -ii), not pyproject keys.
+``` + +--- + +## .pre-commit-config.yaml — all hooks + +```yaml +minimum_pre_commit_version: "3.0.0" + +default_language_version: + python: python3.11 + +repos: + # ── Ruff: lint + format ────────────────────────────────────────────────── + - repo: https://github.com/astral-sh/ruff-pre-commit + rev: v0.9.0 + hooks: + - id: ruff + args: ["--fix"] + - id: ruff-format + + # ── Bandit: security lint ───────────────────────────────────────────────── + - repo: https://github.com/PyCQA/bandit + rev: 1.8.3 + hooks: + - id: bandit + args: ["-c", "pyproject.toml", "-ll", "-ii"] + files: ^src/ + + # ── Semgrep: pattern-based security scan ───────────────────────────────── + - repo: https://github.com/semgrep/semgrep + rev: v1.60.0 + hooks: + - id: semgrep + args: ["--config", "p/python", "--config", ".semgrep.yml", "--severity", "ERROR"] + files: ^src/ + + # ── basedpyright: type checking ────────────────────────────────────────── + # Requires basedpyright to be installed in the active venv. + - repo: local + hooks: + - id: basedpyright + name: basedpyright + entry: basedpyright + language: system + types: [python] + pass_filenames: false + + # ── General hygiene ────────────────────────────────────────────────────── + - repo: https://github.com/pre-commit/pre-commit-hooks + rev: v5.0.0 + hooks: + - id: trailing-whitespace + - id: end-of-file-fixer + - id: check-yaml + - id: check-toml + - id: check-merge-conflict + - id: debug-statements +``` + +--- + +## .semgrep.yml — local custom rules + +```yaml +# .semgrep.yml +# Local project-specific rules. Registry packs (p/python etc.) are passed +# as --config flags on the CLI, not listed here. +# +# Usage: +# semgrep --config p/python --config p/owasp-top-ten --config .semgrep.yml src/ + +rules: + - id: no-eval + pattern: eval(...) + message: > + eval() executes arbitrary code. Use ast.literal_eval() for safe data + parsing, or refactor to avoid dynamic evaluation entirely. 
+ languages: [python] + severity: ERROR + + - id: no-hardcoded-secrets-in-jwt + patterns: + - pattern: jwt.encode(..., "$SECRET", ...) + - pattern-not: jwt.encode(..., os.environ.get(...), ...) + message: Hardcoded JWT secret. Load from environment variable or secrets manager. + languages: [python] + severity: ERROR +``` + +--- + +## GitHub Actions workflow — full CI pipeline + +```yaml +# .github/workflows/ci.yml +name: CI + +on: + push: + branches: [main] + pull_request: + +jobs: + # ── Fast checks: format + lint + security ─────────────────────────────── + quality: + name: Code Quality + runs-on: ubuntu-latest + + steps: + - uses: actions/checkout@v4 + + - name: Set up Python + uses: actions/setup-python@v5 + with: + python-version: "3.11" + cache: "pip" + + - name: Install quality tools + run: pip install ruff bandit semgrep + + - name: Ruff — format check + run: ruff format --check . + + - name: Ruff — lint + run: ruff check . + + - name: Bandit — security lint + run: bandit -c pyproject.toml -r src/ -ll -ii + + - name: Cache semgrep rules + uses: actions/cache@v4 + with: + path: ~/.semgrep/cache + key: semgrep-${{ hashFiles('.semgrep.yml') }} + + - name: Semgrep — pattern scan + run: | + semgrep --config p/python \ + --config p/owasp-top-ten \ + --config .semgrep.yml \ + --severity ERROR \ + src/ + + # ── Type checking ──────────────────────────────────────────────────────── + typecheck: + name: Type Check + runs-on: ubuntu-latest + needs: quality # only run if quality checks pass + + steps: + - uses: actions/checkout@v4 + + - name: Set up Python + uses: actions/setup-python@v5 + with: + python-version: "3.11" + cache: "pip" + + - name: Install dependencies + run: | + pip install -r requirements.txt # must include basedpyright + + - name: basedpyright — type check + run: basedpyright + + # ── Tests ──────────────────────────────────────────────────────────────── + test: + name: Tests (Python ${{ matrix.python-version }}) + runs-on: ubuntu-latest + needs: 
[quality, typecheck] # only run if both upstream jobs pass + + strategy: + matrix: + python-version: ["3.11", "3.12"] + + steps: + - uses: actions/checkout@v4 + + - name: Set up Python ${{ matrix.python-version }} + uses: actions/setup-python@v5 + with: + python-version: ${{ matrix.python-version }} + cache: "pip" + + - name: Install dependencies + run: pip install -r requirements.txt + + - name: Run pytest with coverage + run: pytest --maxfail=1 --disable-warnings --cov=src --cov-report=xml + + - name: Upload coverage + uses: codecov/codecov-action@v4 + with: + files: coverage.xml +``` + +--- + +## Version update checklist + +When updating tool versions, change them in all three places: + +| Tool | pyproject.toml | .pre-commit-config.yaml | requirements.txt / pyproject deps | +|---|---|---|---| +| ruff | `target-version` (Python, not ruff) | `rev: v0.9.0` | `ruff>=0.9.0` | +| bandit | — | `rev: 1.8.3` | `bandit>=1.8.3` | +| semgrep | — | `rev: v1.60.0` | `semgrep>=1.60.0` | +| basedpyright | `pythonVersion` (Python, not tool) | (local hook — no rev) | `basedpyright>=1.x` | +| pre-commit-hooks | — | `rev: v5.0.0` | — | diff --git a/template/.claude/skills/linting/SKILL.md b/template/.claude/skills/linting/SKILL.md new file mode 100644 index 0000000..b798d68 --- /dev/null +++ b/template/.claude/skills/linting/SKILL.md @@ -0,0 +1,70 @@ +--- +name: linting +description: >- + Ruff lint and format guidance: configuration, error codes, auto-fix workflows, + CI ordering, and per-file-ignores. Use this skill when fixing lint violations, + configuring ruff rules, or understanding lint error codes. Trigger on mentions + of: ruff, lint, linting, format, formatter, auto-fix, lint errors, E/F/I/UP/B + codes, isort, or any request to fix or configure Python linting. +model: haiku +--- + +# Linting Skill + +Guidance for using **ruff** as the project's linter and formatter. Ruff replaces +black, isort, flake8, and pycodestyle in a single tool. 
+ +## Command dispatch + +| Command | What it does | +|---|---| +| `just fix` | Auto-fix safe violations | +| `just fmt` | Format all files | +| `just lint` | Check for violations (read-only) | + +## Workflow order + +Always run in this order to avoid churn: + +```bash +just fix # auto-fix first +just fmt # then format +just lint # then verify clean +``` + +## Active rule sets + +See `references/ruff.md` for the complete configuration and every rule set. + +## Common fix patterns + +| Code | Meaning | Fix | +|---|---|---| +| `E` | pycodestyle style error | Usually auto-fixable | +| `F` | pyflakes logic error | Review manually | +| `I` | isort import order | Auto-fixable | +| `UP` | pyupgrade modernisation | Auto-fixable | +| `B` | bugbear potential bugs | Review manually | +| `D` | pydocstyle docstring | Add/fix docstring | +| `T20` | print() in app code | Replace with structlog | +| `C90` | complexity > 10 | Extract functions | +| `PERF` | performance anti-pattern | Review suggestion | + +## Per-file ignores + +- `tests/**` — `ARG`, `T20` ignored (unused args OK, print OK) +- `scripts/**` — `T20` ignored (print OK in CLI scripts) +- Docstrings (`D` rules) are enforced in tests. 
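A sketch of how these ignores look in `pyproject.toml` (rule lists taken from the bullets above; adjust to the project's actual config):

```toml
[tool.ruff.lint.per-file-ignores]
# Unused arguments and print() are acceptable in tests.
"tests/**" = ["ARG", "T20"]
# print() is fine in CLI scripts.
"scripts/**" = ["T20"]
```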
+ +## Suppression rules + +- Never add `# noqa` without a specific code: `# noqa: E501` +- Always include a comment explaining why: `# noqa: B008 — FastAPI Depends()` +- Prefer fixing over suppressing + +## Quick reference: where to go deeper + +| Topic | Reference file | +|----------------------------|----------------------------------------------------| +| Full ruff configuration | [references/ruff.md](references/ruff.md) | +| Pre-commit integration | [references/pre-commit.md](references/pre-commit.md) | diff --git a/template/.claude/skills/linting/references/pre-commit.md b/template/.claude/skills/linting/references/pre-commit.md new file mode 100644 index 0000000..6eaee4f --- /dev/null +++ b/template/.claude/skills/linting/references/pre-commit.md @@ -0,0 +1,221 @@ +# pre-commit + +pre-commit is a framework for managing Git hooks. It runs configured tools automatically +before each commit, blocking the commit if any check fails. + +--- + +## What it does + +- Installs hooks into `.git/hooks/` on `pre-commit install` +- On `git commit`, runs each hook against staged files only (fast) +- In CI, run against all files with `pre-commit run --all-files` + +--- + +## Installation + +```bash +pip install pre-commit # or: uv add --dev pre-commit +pre-commit install # installs the git hook — run once per clone +``` + +Verify the hook was installed: +```bash +cat .git/hooks/pre-commit # should reference pre-commit +``` + +--- + +## Complete .pre-commit-config.yaml + +This is the canonical config for all tools in this project. See +`references/complete-configs.md` for a copy you can paste directly. + +```yaml +# Minimum pre-commit version required. +minimum_pre_commit_version: "3.0.0" + +# Default Python version used when a hook doesn't specify one. 
+default_language_version: + python: python3.11 + +repos: + # ── Ruff: lint + format ────────────────────────────────────────────────── + - repo: https://github.com/astral-sh/ruff-pre-commit + rev: v0.9.0 # pin to a specific tag + hooks: + - id: ruff + args: ["--fix"] # auto-fix safe issues on commit + - id: ruff-format # auto-format on commit + + # ── Bandit: security lint ───────────────────────────────────────────────── + - repo: https://github.com/PyCQA/bandit + rev: 1.8.3 + hooks: + - id: bandit + args: ["-c", "pyproject.toml"] + # Scope to src/ only — test code legitimately uses assert, subprocess, etc. + files: ^src/ + + # ── Semgrep: pattern-based security scan ───────────────────────────────── + - repo: https://github.com/semgrep/semgrep + rev: v1.60.0 # pin to a specific release + hooks: + - id: semgrep + args: ["--config", ".semgrep.yml", "--error"] + files: ^src/ + # pass_filenames: true is correct for per-file analysis. + # Cross-file taint rules require running semgrep outside pre-commit (in CI). + + # ── basedpyright: type checking ────────────────────────────────────────── + # Runs as a `local` hook so it uses the project's installed venv. + # Requires basedpyright to be installed: pip install basedpyright + - repo: local + hooks: + - id: basedpyright + name: basedpyright + entry: basedpyright + language: system # uses whatever basedpyright is on PATH + types: [python] + pass_filenames: false # basedpyright analyses the whole project graph + + # ── General hygiene ────────────────────────────────────────────────────── + - repo: https://github.com/pre-commit/pre-commit-hooks + rev: v5.0.0 + hooks: + - id: trailing-whitespace + - id: end-of-file-fixer + - id: check-yaml + - id: check-toml + - id: check-merge-conflict + - id: debug-statements # catches leftover breakpoint() / pdb.set_trace() +``` + +### Hook ordering rationale + +Hooks run in the order listed. 
Fast, auto-fixing hooks go first so the developer sees +clean output; slow analysis hooks (basedpyright) go last. + +``` +ruff (fix + format) → bandit → semgrep → basedpyright → hygiene +``` + +--- + +## Running pre-commit + +```bash +# Run all hooks against all files (use after install or config changes): +pre-commit run --all-files + +# Run a specific hook only: +pre-commit run ruff --all-files +pre-commit run bandit --all-files +pre-commit run semgrep --all-files +pre-commit run basedpyright --all-files + +# Run against staged files only (mirrors what git commit does): +pre-commit run + +# Run against a specific file: +pre-commit run --files src/mymodule.py +``` + +--- + +## Skipping hooks + +**Skip all hooks for one commit** (use very rarely — e.g. emergency hotfix): +```bash +git commit --no-verify -m "emergency: revert broken deploy" +``` + +**Skip a specific hook for one commit:** +```bash +SKIP=basedpyright git commit -m "wip: partially typed module" +SKIP=semgrep,bandit git commit -m "temp: adding fixture with insecure pattern" +``` + +**Exclude files permanently** in the hook config: +```yaml +- id: ruff + exclude: "^tests/fixtures/" # regex matched against the file path +``` + +--- + +## Updating hook versions + +Always pin `rev:` to a tagged release. Update intentionally: + +```bash +# Update all hooks to latest tagged release: +pre-commit autoupdate + +# Update a single repo: +pre-commit autoupdate --repo https://github.com/astral-sh/ruff-pre-commit +``` + +After updating, run `pre-commit run --all-files` to confirm nothing broke. + +--- + +## CI integration (GitHub Actions) + +Use the dedicated action — it caches hook environments automatically: + +```yaml +- name: Run pre-commit + uses: pre-commit/action@v3.0.1 +``` + +This caches `~/.cache/pre-commit` keyed on the hash of `.pre-commit-config.yaml`. 
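If you invoke `pre-commit run` directly instead of using the action, the equivalent manual cache step is a sketch like this (same cache path and key scheme the action uses):

```yaml
- name: Cache pre-commit environments
  uses: actions/cache@v4
  with:
    path: ~/.cache/pre-commit
    key: pre-commit-${{ hashFiles('.pre-commit-config.yaml') }}

- name: Run pre-commit
  run: pre-commit run --all-files
```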
+ +**Important for `language: system` hooks** (basedpyright): the hook runs whatever is on +`PATH`, so project dependencies must be installed *before* pre-commit runs: + +```yaml +- name: Install Python dependencies + run: pip install -r requirements.txt # or: uv sync — must include basedpyright + +- name: Run pre-commit + uses: pre-commit/action@v3.0.1 +``` + +If you run individual tool steps (ruff, basedpyright, etc.) as separate CI jobs, the +pre-commit job is optional but still useful as a final gate — it catches hooks that +aren't individually tested. + +--- + +## Adding a new hook + +1. Find the hook repo at [pre-commit.com/hooks](https://pre-commit.com/hooks.html) or + the tool's own docs. +2. Add a `- repo:` block to `.pre-commit-config.yaml` in the appropriate position + (fast/auto-fixing hooks first, slow analysis hooks last). +3. Pin `rev:` to a tagged release — never `HEAD` or a branch name. +4. Run `pre-commit run --all-files` to validate. +5. Add the tool to the table in `SKILL.md` and create `references/.md`. +6. Copy the updated full config to `references/complete-configs.md`. + +--- + +## Gotchas + +- **Staged-only mode can hide issues.** On commit, pre-commit checks staged files only. + Unstaged changes interacting with staged ones can produce a passing local commit that + fails CI (which runs `--all-files`). Always run `pre-commit run --all-files` before + pushing to a shared branch. +- **`language: system` hooks depend on PATH.** The basedpyright hook uses whatever + `basedpyright` is on `PATH`. If your venv isn't activated, it either uses a wrong + version or fails entirely. In CI, install dependencies before running pre-commit. +- **Cache in `~/.cache/pre-commit`.** If a hook behaves strangely after a version bump, + clear it: `pre-commit clean`. Then re-run `pre-commit install`. 
+- **`pass_filenames: false` is required for whole-project tools.** basedpyright resolves + the full import graph; passing individual filenames breaks cross-module type inference. + Semgrep's per-file mode (`pass_filenames: true`) is fine for single-file rules but + won't catch cross-file taint flows — run those in CI without pre-commit. +- **`rev` must be a tag.** pre-commit warns if you use a branch; tags guarantee + reproducibility across machines and CI runs. diff --git a/template/.claude/skills/linting/references/ruff.md b/template/.claude/skills/linting/references/ruff.md new file mode 100644 index 0000000..0723c73 --- /dev/null +++ b/template/.claude/skills/linting/references/ruff.md @@ -0,0 +1,203 @@ +# Ruff + +Ruff is a fast Python linter and formatter written in Rust. It replaces flake8, isort, +and black in a single tool. + +--- + +## What it does + +- **`ruff check`** — lints: enforces rules (unused imports, style, complexity, bugs, etc.) +- **`ruff format`** — formats: opinionated code formatter (Black-compatible output) + +Both read from `[tool.ruff]` in `pyproject.toml`. + +--- + +## Installation + +```bash +pip install ruff # or: uv add --dev ruff +``` + +Verify: +```bash +ruff --version +``` + +--- + +## pyproject.toml config (annotated) + +```toml +[tool.ruff] +# Target Python version — affects which syntax is valid and which rules apply. +# Keep in sync with the minimum Python version the project supports. +target-version = "py311" + +# Directories ruff will scan. Excludes generated, vendored, or build dirs. +src = ["src"] +exclude = [ + ".git", + ".venv", + "__pycache__", + "build", + "dist", +] + +[tool.ruff.lint] +# Rule sets to enable. Add a prefix letter to enable the entire ruleset. 
+# E/W = pycodestyle errors/warnings
+# F = pyflakes (undefined names, unused imports)
+# I = isort (import order) — replaces standalone isort
+# UP = pyupgrade (modernise syntax for target Python version)
+# B = flake8-bugbear (likely bugs and design issues)
+# SIM = flake8-simplify (suggest simpler expressions)
+# S = flake8-bandit (security — lightweight overlap with bandit)
+select = ["E", "W", "F", "I", "UP", "B", "SIM"]
+
+# Rules to ignore project-wide.
+# E501 = line too long — controlled by the formatter, not the linter.
+ignore = ["E501"]
+
+# Allow autofix for all enabled rules when running `ruff check --fix`.
+fixable = ["ALL"]
+unfixable = []
+
+# Per-file rule overrides. Use sparingly — prefer fixing the root cause.
+[tool.ruff.lint.per-file-ignores]
+# Test files: assert is valid (pytest), magic values are fine, fixtures look unused.
+# S101 requires the S ruleset to be in `select` above — add "S" if you want it;
+# PLR2004 likewise only fires if the PL ruleset is enabled.
+"tests/**/*.py" = ["PLR2004", "F401"]
+# __init__.py: re-exports don't need to be explicitly used.
+"**/__init__.py" = ["F401"]
+
+[tool.ruff.format]
+# Double quotes match Black's default.
+quote-style = "double"
+# Spaces, not tabs.
+indent-style = "space"
+# Preserve magic trailing commas (they affect how multi-line structures are formatted).
+skip-magic-trailing-comma = false
+# Normalise line endings to LF on all platforms.
+line-ending = "lf"
+# Note: line-length (default 88, same as Black) is a top-level [tool.ruff] key,
+# not a [tool.ruff.format] key — set it once under [tool.ruff] and both the
+# formatter and the E501 rule (if you enable it) read the same value.
+```
+
+---
+
+## Common rule codes and fixes
+
+| Code | Meaning | Fix |
+|---|---|---|
+| `F401` | Unused import | Remove import; `# noqa: F401` only for intentional re-exports |
+| `F811` | Redefinition of unused name | Remove or rename the duplicate |
+| `F841` | Local variable assigned but never used | Remove assignment or rename to `_` |
+| `E711` | `== None` instead of `is None` | Use `is None` / `is not None` |
+| `E712` | `== True` instead of `is True` | Use `is True` or just the boolean directly |
+| `I001` | Import block unsorted | Run `ruff check --fix` to auto-sort |
+| `UP006` | Use `list` instead of `List` (3.9+) | Update type annotation |
+| `UP007` | Use `X \| Y` instead of `Optional[X]` (3.10+) | Update union syntax |
+| `B006` | Mutable default argument | Use `None` default + guard in body |
+| `B007` | Loop variable unused | Rename to `_` |
+| `B008` | Function call as default argument | Move call inside the function body |
+| `SIM108` | Ternary can replace if/else | Simplify to `x = a if cond else b` |
+| `SIM117` | Nested `with` can be merged | Use `with a(), b():` |
+
+### Looking up an unfamiliar rule
+
+```bash
+ruff rule F401    # prints full explanation of the rule
+ruff rule --all   # lists every available rule with description
+```
+
+### Suppressing a rule inline
+
+```python
+import os  # noqa: F401   ← suppress one specific rule (preferred)
+import os  # noqa         ← suppress all rules on this line (avoid)
+```
+
+Prefer adding an entry to `per-file-ignores` in `pyproject.toml` over scattering
+`# noqa` across files — it keeps suppression decisions visible and reviewable.
+
+---
+
+## Running ruff
+
+```bash
+# Check (no changes, exits non-zero on violations):
+ruff check .
+
+# Check and auto-fix safe issues:
+ruff check --fix .
+
+# Show a summary of which rules fired most (useful for tuning config):
+ruff check --statistics .
+
+# Format (rewrites files in place):
+ruff format .
+ +# Format check — CI mode: no changes, exits non-zero if formatting needed: +ruff format --check . + +# Show exactly what the formatter would change without writing: +ruff format --diff . + +# Check a single file: +ruff check src/mymodule.py +``` + +--- + +## CI step (GitHub Actions) + +```yaml +- name: Install ruff + run: pip install ruff + +- name: Ruff — format check + run: ruff format --check . + +- name: Ruff — lint + run: ruff check . +``` + +Always run `ruff format --check` before `ruff check` — formatting failures surface +first without wasting time on lint output. + +With `cache: 'pip'` on `actions/setup-python`, repeated installs are fast. + +--- + +## Pre-commit hook entry + +```yaml +- repo: https://github.com/astral-sh/ruff-pre-commit + rev: v0.9.0 # pin to a specific release; update with: pre-commit autoupdate + hooks: + - id: ruff + args: ["--fix"] # auto-fix on commit so the developer sees clean diffs + - id: ruff-format # auto-format on commit +``` + +--- + +## Gotchas + +- **`ruff format` and `ruff check` are separate concerns.** A file can pass linting but + still need formatting, and vice versa. Always run both. +- **`--fix` in local dev, `--check` in CI.** Using `--fix` in CI silently rewrites files; + the pipeline appears to pass but the rewrite never gets committed. Use check mode in CI. +- **`target-version` drives UP rules.** `UP006` (drop `List`/`Dict`) only fires at + `py39+`. Keep `target-version` in sync with your actual minimum Python. +- **isort is built in via the `I` ruleset.** Do not also run standalone isort — they will + conflict on import ordering decisions. +- **`S` ruleset (flake8-bandit) overlaps with bandit.** If you add `"S"` to `select`, + expect duplicate findings. Either add it and accept overlap, or omit it and rely on + bandit for security rules. If you add `"S"`, also update `per-file-ignores` for tests + to include `"S101"` (assert statements). 
+- **`line-length` lives at the top level.** It is a `[tool.ruff]` key, not a
+  `[tool.ruff.format]` or `[tool.ruff.lint]` key — set it once under `[tool.ruff]`
+  and both the formatter and the `E501` rule (if you enable it) read the same value.
diff --git a/template/.claude/skills/prepare_pr/SKILL.md b/template/.claude/skills/prepare_pr/SKILL.md
index cba3108..38444e8 100644
--- a/template/.claude/skills/prepare_pr/SKILL.md
+++ b/template/.claude/skills/prepare_pr/SKILL.md
@@ -1,6 +1,6 @@
 ---
 name: pr-template
-description: >
+description: >-
   Enforce a strict PR (pull request) description template every time one is
   generated, written, or drafted. Use this skill whenever the user asks to:
   generate a PR, write a PR description, draft a pull request, create a PR for
   a branch or commit, summarize changes as a PR, or fill in a PR template.
@@ -83,6 +83,14 @@ After the code block:
 
 ---
 
+## Quick reference: where to go deeper
+
+| Topic | Reference file |
+|----------------------------------------------|------------------------------------------------------|
+| Section-by-section fill rules and fallbacks | [references/section-rules.md](references/section-rules.md) |
+
+---
+
 ## Fallback template skeleton
 
 Use this only if `.github/PULL_REQUEST_TEMPLATE.md` cannot be found.
diff --git a/template/.claude/skills/prepare_pr/references/section-rules.md b/template/.claude/skills/prepare_pr/references/section-rules.md
index 4adc133..9ce348c 100644
--- a/template/.claude/skills/prepare_pr/references/section-rules.md
+++ b/template/.claude/skills/prepare_pr/references/section-rules.md
@@ -1,4 +1,4 @@
-# Section Rules
+# Section rules
 
 This file contains exact fill instructions for every PR section. Work through
 sections **one at a time, in the order they appear in the template**.
diff --git a/template/.claude/skills/pytest/SKILL.md b/template/.claude/skills/pytest/SKILL.md index 790c1a2..7d88f6f 100644 --- a/template/.claude/skills/pytest/SKILL.md +++ b/template/.claude/skills/pytest/SKILL.md @@ -212,11 +212,20 @@ observable behaviour, not implementation details. ``` tests/ conftest.py # Root-level shared fixtures - test_core.py # Tests for src//core.py - test_utils.py # Tests for src//utils.py - / - conftest.py # Fixtures for this subdirectory - test_module_a.py # Tests for src///module_a.py + test_imports.py # Import smoke tests + unit/ + conftest.py # Fixtures for unit tests + test_core.py # Tests for src//core.py + test_cli.py # Tests for src//cli.py + common/ + conftest.py # Fixtures for common module tests + test_utils.py # Tests for src//common/utils.py + integration/ + conftest.py # Fixtures for integration tests + test_*.py + e2e/ + conftest.py # Fixtures for e2e tests + test_*.py ``` - File names: `test_.py` — mirrors the source module being tested. @@ -224,8 +233,8 @@ tests/ - Class names: `TestClassName` — group related tests, no `__init__`. - Fixture names: descriptive nouns (`db_connection`, `sample_user`, `tmp_config_file`). -Read [references/test-organization.md](references/test-organization.md) for large -project layouts, separating unit/integration tests, and discovery configuration. +Read [references/test-organization.md](references/test-organization.md) for the full +project layout (unit/integration/e2e subdirectories), naming rules, and discovery configuration. 
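The "mirrors the source module" rule is mechanical enough to express as a path mapping. A sketch, using the `myapp` layout from the reference docs and assuming every unit test lands under `tests/unit/`:

```python
from pathlib import Path

def expected_test_path(src_path: str, package: str = "myapp") -> Path:
    """Map a source module to its mirrored unit-test file.

    e.g. src/myapp/common/utils.py -> tests/unit/common/test_utils.py
    """
    rel = Path(src_path).relative_to(Path("src") / package)
    return Path("tests/unit") / rel.parent / f"test_{rel.name}"

print(expected_test_path("src/myapp/common/utils.py").as_posix())
# tests/unit/common/test_utils.py
print(expected_test_path("src/myapp/core.py").as_posix())
# tests/unit/test_core.py
```

A helper like this can back a CI check that every source module has its mirrored test file.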
--- diff --git a/template/.claude/skills/pytest/references/fixtures.md b/template/.claude/skills/pytest/references/fixtures.md index 8810119..623675c 100644 --- a/template/.claude/skills/pytest/references/fixtures.md +++ b/template/.claude/skills/pytest/references/fixtures.md @@ -158,10 +158,14 @@ tests/ conftest.py # Fixtures available to ALL tests unit/ conftest.py # Fixtures only for unit tests + common/ + conftest.py # Fixtures only for common module tests test_models.py integration/ conftest.py # Fixtures only for integration tests test_api.py + e2e/ + conftest.py # Fixtures only for e2e tests ``` Rules: diff --git a/template/.claude/skills/pytest/references/test-organization.md b/template/.claude/skills/pytest/references/test-organization.md index edc4296..898c205 100644 --- a/template/.claude/skills/pytest/references/test-organization.md +++ b/template/.claude/skills/pytest/references/test-organization.md @@ -16,8 +16,8 @@ grows. This document covers layout, naming, discovery, and configuration. ## Directory layout -Keep tests in a top-level `tests/` directory, separate from `src/`. Mirror the source -structure so every module has a clear corresponding test file. +Keep tests in a top-level `tests/` directory, separate from `src/`. Organise tests by type +into subdirectories: `unit/`, `integration/`, and `e2e/`. 
``` project/ @@ -25,51 +25,38 @@ project/ myapp/ __init__.py core.py - models.py - utils.py - auth/ + cli.py + common/ __init__.py - login.py - permissions.py + utils.py + decorators.py + logging_manager.py tests/ - conftest.py # Root fixtures shared by all tests - test_core.py # Tests for src/myapp/core.py - test_models.py # Tests for src/myapp/models.py - test_utils.py # Tests for src/myapp/utils.py - auth/ - conftest.py # Fixtures for auth tests - test_login.py # Tests for src/myapp/auth/login.py - test_permissions.py + conftest.py # Global fixtures shared by all tests + test_imports.py # Import smoke tests + unit/ + conftest.py # Fixtures for unit tests + test_core.py # Tests for src/myapp/core.py + test_cli.py # Tests for src/myapp/cli.py + common/ + conftest.py # Fixtures for common module tests + test_utils.py # Tests for src/myapp/common/utils.py + test_decorators.py + test_logging_manager.py + integration/ + conftest.py # Fixtures for integration tests + test_*.py + e2e/ + conftest.py # Fixtures for e2e tests + test_*.py ``` -This layout avoids import path issues and makes it obvious which tests cover which code. - -### For larger projects - -When a project has distinct test types with different requirements (speed, dependencies), -separate them: - -``` -tests/ - conftest.py - unit/ - conftest.py - test_core.py - test_models.py - integration/ - conftest.py - test_database.py - test_api.py - e2e/ - conftest.py - test_workflows.py -``` - -This allows running subsets easily: +This layout makes it easy to run subsets: ```bash pytest tests/unit/ # fast unit tests only pytest tests/integration/ # integration tests only +pytest tests/e2e/ # end-to-end tests only ``` ## File naming and discovery @@ -140,16 +127,15 @@ the implementation rather than the value the test receives. ## Separating test types -Use markers and/or directories to separate test types so you can run subsets. 
- -### By directory (recommended for large projects) +Tests are organised by type into subdirectories under `tests/`: ``` tests/unit/ → pytest tests/unit tests/integration/ → pytest tests/integration +tests/e2e/ → pytest tests/e2e ``` -### By marker (simpler, works for any project size) +Additionally, use markers for cross-cutting concerns and for running subsets by marker: ```python @pytest.mark.unit @@ -164,9 +150,8 @@ pytest -m unit # run only unit tests pytest -m "not integration" # skip integration tests ``` -Both approaches work. Directories are more visible; markers are more flexible. Many -projects use directories for the primary split and markers for cross-cutting concerns -like `@pytest.mark.slow`. +Set `pytestmark` at module level in every test file — typically matching the directory +the file lives in (e.g. `pytestmark = pytest.mark.unit` for files in `tests/unit/`). ## conftest.py hierarchy @@ -180,9 +165,13 @@ tests/ unit/ conftest.py # Available to tests/unit/ only → mock_db, isolated_config + common/ + conftest.py # Available to tests/unit/common/ only integration/ conftest.py # Available to tests/integration/ only → real_db_session, test_server + e2e/ + conftest.py # Available to tests/e2e/ only ``` Rules: diff --git a/template/.claude/skills/pytest/references/test-types.md b/template/.claude/skills/pytest/references/test-types.md index 144f7f2..2db8a33 100644 --- a/template/.claude/skills/pytest/references/test-types.md +++ b/template/.claude/skills/pytest/references/test-types.md @@ -69,11 +69,11 @@ def test_fetches_user_profile(mocker): assert profile.name == "Alice" ``` -**Organise** unit tests to mirror the source layout: +**Organise** unit tests under `tests/unit/`, mirroring the source layout: ``` -src/myapp/orders.py → tests/test_orders.py -src/myapp/auth/login.py → tests/auth/test_login.py +src/myapp/orders.py → tests/unit/test_orders.py +src/myapp/common/utils.py → tests/unit/common/test_utils.py ``` ## Integration tests @@ -118,8 +118,8 
@@ def test_exports_report_to_csv(tmp_path): assert len(lines) == 3 # header + 2 data rows ``` -**Separating from unit tests:** either use directory structure (`tests/unit/`, -`tests/integration/`) or markers (`@pytest.mark.integration`) so you can run them +**Separating from unit tests:** use directory structure (`tests/unit/`, +`tests/integration/`, `tests/e2e/`) and markers (`@pytest.mark.integration`) so you can run them separately. ## Functional and API tests diff --git a/template/.claude/skills/sdlc-workflow/SKILL.md b/template/.claude/skills/sdlc-workflow/SKILL.md new file mode 100644 index 0000000..989480c --- /dev/null +++ b/template/.claude/skills/sdlc-workflow/SKILL.md @@ -0,0 +1,294 @@ +--- +name: sdlc-workflow +description: >- + Master SDLC orchestrator that reads TASK.md and executes a complete development + cycle: TDD test design, TDD implementation, refactoring, code quality, security, + documentation, git commit, and pull request. Use this skill whenever TASK.md is + updated or the user says "run the pipeline", "implement this task", "start the + workflow", "SDLC", "implement from TASK.md", or any request to execute the full + development lifecycle from a task description. +--- + +# SDLC Workflow Skill + +This skill orchestrates the full software development lifecycle from a task description +in `TASK.md` through to a pull request. It delegates to sub-skills and uses Agent tool +calls with model overrides for mechanical stages. + +**Hard rules — never break these:** +- No implementation code before a failing test exists. +- No refactoring before GREEN is confirmed. +- No commit before `just ci` exits 0. +- No patching bugs without first writing a reproducing test. +- Every test must have a pytest marker. + +--- + +## Stage banner + +Display this banner at the top of every response. Use `✓` for completed, `●` for +current, `○` for upcoming. 
+ +``` +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +SDLC ○DESIGN ○RED ○GREEN ○REFACTOR ○QUALITY ○SECURE ○DOCS ○COMMIT ○PR +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +``` + +--- + +## Stage pipeline + +``` +Stage Name Model Execution Sub-skills +----- -------------------- ------ ---------------- ------------------------- + 0 DISCOVER Full Main model (reads YAML) + 0.5 PRE-FLIGHT Haiku Agent call (automated checks) + 1 RED Full Main model pytest + 2 GREEN Full Main model pytest + 2.5 CONTRACT (conditional) Haiku Agent call (API guard) + 3 REFACTOR Sonnet Agent call python-docstrings + 3.5 PERF (conditional) Haiku Agent call (benchmarks) + 4 CODE QUALITY Haiku Agent (parallel) linting, type-checking + 5 SECURITY Haiku Agent (parallel) security + 6 DOCUMENTATION Haiku Agent (parallel) python-docstrings, markdown + 7 COMMIT + CHANGELOG Haiku Agent (sequential) (inline guidance) + 8 PULL REQUEST Haiku Agent (sequential) prepare_pr +``` + +Conditional stages (2.5, 3.5) only activate when flagged in task YAML. + +--- + +## Stage 0 — DISCOVER: Read task YAML + +1. Read `tasks/TASK_ID.yaml`. Parse: task_id, type, status, requirement, + acceptance_criteria, constraints, files_affected, testing_strategy, and changelog entry. +2. Read `assets/task_template.yaml` for YAML schema reference. +3. Run `just preflight TASK_ID` to execute pre-flight checks. +4. This project uses `just test` for tests and `just ci` for full CI. +5. Explore the codebase: `pyproject.toml`, `conftest.py`, existing tests, target modules. +6. Create `task_channel/task_TASK_ID/` directory for persistent artifacts. +7. Output: requirement summary, acceptance criteria list, files that will change. +8. Get user approval before proceeding. + +--- + +## Stage 0.5 — PRE-FLIGHT: Automated checks + +1. Run `just preflight TASK_ID`. +2. If any check fails, stop the pipeline and report the failure. +3. If all checks pass, proceed to Stage 1. 
+ +Gate: all pre-flight checks pass. No user approval needed. + +--- + +## Stage 1 — RED: Write failing tests + +**Model:** Full (main model, interactive) + +1. Load the `pytest` skill (read `.claude/skills/pytest/SKILL.md`). +2. For fixture patterns, read `.claude/skills/pytest/references/fixtures.md`. + For mocking guidance, read `.claude/skills/pytest/references/mocking.md`. +3. For each acceptance criterion, draft the smallest test: + - Name: `test__when_` + - Structure: AAA (Arrange-Act-Assert) + - Every test file must set `pytestmark = pytest.mark.` at module level. +4. Show draft to user, wait for approval. +5. Write the test file(s). +6. Run `just test`. Classify the failure: + + | Failure type | Meaning | Action | + |---|---|---| + | `AssertionError` | Wrong behaviour | Ideal RED | + | `AttributeError` / `ImportError` | Missing module/function | Good RED | + | `SyntaxError` / `IndentationError` | Broken test | Fix test first | + | `TypeError` (wrong signature) | Signature mismatch | Fix test first | + | Unrelated exception | Regression | Investigate first | + +7. Confirm RED: *"RED confirmed — failing for the right reason."* + +Do not proceed until RED is confirmed for all acceptance criteria. + +--- + +## Stage 2 — GREEN: Write minimal implementation + +**Model:** Full (main model, interactive) + +1. For each failing test, implement *just enough* to pass. + - Include type annotations on all public functions. + - If logging is involved, follow the structlog guidelines in `CLAUDE.md` + (see "Structured logging" and "Mandatory `logging_manager` usage" sections). +2. Show draft, wait for approval. +3. Write implementation. +4. Run `just test`. All tests must pass. +5. Check coverage: `just coverage`. New code should be fully covered. + - If coverage below 85%, write targeted tests for the uncovered lines + visible in the `just coverage` output. +6. Confirm GREEN: *"GREEN confirmed — all tests pass."* + +Do not proceed until GREEN is confirmed. 
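Compressed into a single snippet for illustration, the RED → GREEN loop looks like this (a hypothetical `slugify` helper, not part of this template; a real test file would set `pytestmark = pytest.mark.unit` and live under `tests/unit/`):

```python
# RED: this test is written first and fails with NameError ("Good RED" —
# the implementation does not exist yet).
def test_slugify_joins_lowercased_words_when_given_spaced_title():
    # Arrange
    title = "Hello World Example"
    # Act
    result = slugify(title)
    # Assert
    assert result == "hello-world-example"

# GREEN: the minimal implementation that makes the test pass — nothing more.
def slugify(title: str) -> str:
    """Lowercase a title and join its words with hyphens."""
    return "-".join(title.lower().split())

test_slugify_joins_lowercased_words_when_given_spaced_title()
print("GREEN confirmed — all tests pass.")
```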
+ +--- + +## Stage 3 — REFACTOR: Improve code quality + +**Model:** Sonnet (Agent call) + +Spawn an Agent with: +``` +Load the python-docstrings skill (.claude/skills/python-docstrings/SKILL.md). + +Review these files: {implementation_files}, {test_files} + +Apply refactoring one group at a time: +- Naming and clarity +- Structure and duplication +- Docstrings (Google-style) +- Test improvements (parametrize, fixtures, naming) + +Run `just test` after every change. If tests go red, revert immediately. + +Report all changes applied. +``` + +Gate: all tests must stay green. No behaviour change. + +--- + +## Stages 4, 5, 6 — Parallel quality checks + +Launch 3 Agent calls simultaneously with `model: "haiku"`: + +### Stage 4 — Code Quality Agent + +``` +Load the linting skill (.claude/skills/linting/SKILL.md) and type-checking skill +(.claude/skills/type-checking/SKILL.md). + +Run: just fix && just fmt && just lint && just type + +Fix any remaining violations. Gate: both exit 0. + +Report: list of fixes applied. +``` + +### Stage 5 — Security Agent + +``` +Load the security skill (.claude/skills/security/SKILL.md). + +Run security scans if configured: +- bandit -c pyproject.toml -r src/ (if bandit is available) +- semgrep --config .semgrep.yml src/ (if .semgrep.yml exists) + +Review changed files for: hardcoded secrets, injection risks, unsafe patterns. +Fix findings or add justified suppressions with specific codes. + +Report: findings and fixes. +``` + +### Stage 6 — Documentation Agent + +``` +Load the python-docstrings skill (.claude/skills/python-docstrings/SKILL.md) +and markdown skill (.claude/skills/markdown/SKILL.md). + +Run: just docs-check + +Verify all new public symbols have Google-style docstrings. +Update any affected markdown documentation in docs/. + +Gate: just docs-check exits 0. + +Report: docstrings added/fixed. +``` + +Wait for all 3 agents to complete before proceeding. 
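The fan-out above can be sketched as a small dispatcher. The gate commands are placeholders for whatever each agent actually runs (`just lint`, bandit, `just docs-check`), and a missing executable is treated as a failed gate rather than a crash:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_gate(name: str, cmd: list[str]) -> tuple[str, int]:
    """Run one gate command; a missing executable counts as a failure (127)."""
    try:
        code = subprocess.run(cmd, capture_output=True, text=True).returncode
    except FileNotFoundError:
        code = 127
    return name, code

gates = {
    "quality": ["just", "lint"],
    "security": ["bandit", "-c", "pyproject.toml", "-r", "src/"],
    "docs": ["just", "docs-check"],
}

with ThreadPoolExecutor(max_workers=len(gates)) as pool:
    results = dict(pool.map(lambda item: run_gate(*item), gates.items()))

# All three gates must exit 0 before the pipeline proceeds to COMMIT.
print("proceed" if all(c == 0 for c in results.values()) else "stop and fix")
```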
+ +--- + +## Stage 7 — Git Commit + +**Model:** Haiku (Agent call, sequential after stages 4-6) + +``` +1. Run `just ci` to verify everything passes. +2. If CI fails, classify the failure: + - Test failure: read the error, fix the root cause + - Lint violation: run `just fix` then `just lint` + - Type error: fix the annotation or add a typed ignore with code + - Import error: check dependency or module path + - Coverage drop: add targeted tests + Fix in priority order. Max 3 iterations. +3. Stage changed files: `git add ` (never git add -A). +4. Compose commit message: + - Format: : + - Types: feat, fix, refactor, docs, test, chore, perf, ci, build + - Subject line <= 72 characters, imperative mood + - Body: explain WHY, list acceptance criteria covered + - Footer: Co-Authored-By: Claude Opus 4.6 +5. Run `git commit`. +6. Gate: commit succeeds (pre-commit hooks pass). + +Report: commit hash and message. +``` + +--- + +## Stage 8 — Pull Request + +**Model:** Haiku (Agent call, sequential after stage 7) + +``` +Load the prepare_pr skill (.claude/skills/prepare_pr/SKILL.md). + +1. Read .github/PULL_REQUEST_TEMPLATE.md (if exists). +2. Extract signals from commits: git log main..HEAD, git diff main...HEAD --stat. +3. Fill all PR template sections per the skill's rules. +4. Run: gh pr create --title "" --body "<body>" +5. Output the PR URL. + +Report: PR URL and title. +``` + +--- + +## Error handling + +| Stage | On failure | +|---|---| +| 1-2 (RED/GREEN) | Stop pipeline. Report to user. Needs human judgment. | +| 3 (REFACTOR) | Stop. Revert changes. Report what went wrong. | +| 4-6 (parallel) | Each retries up to 3 times independently. Others continue. | +| 7 (COMMIT) | CI fix loop, max 3 iterations. Then stop + report. | +| 8 (PR) | Report error. Provide PR body for manual creation. | + +--- + +## Multi-criterion features + +When TASK.md has several acceptance criteria: +1. Run RED -> GREEN for each criterion in order. +2. 
Do not enter REFACTOR until all criteria have passing tests.
+3. Extract shared fixtures to `conftest.py` during the second RED pass.
+
+---
+
+## Bug fix protocol
+
+1. Write a test that reliably triggers the bug. Confirm RED.
+2. Do not touch production code yet.
+3. Fix the bug with the minimal change (GREEN).
+4. Continue through REFACTOR and the remaining stages (QUALITY, SECURITY, DOCS,
+   COMMIT, PR) as normal.
+
+---
+
+## Quick reference: where to go deeper
+
+| Topic | Reference file |
+|---|---|
+| TASK.md format and validation | [references/task-template.md](references/task-template.md) |
+| Stage banner format | [references/stage-banner.md](references/stage-banner.md) |
diff --git a/template/.claude/skills/sdlc-workflow/references/stage-banner.md b/template/.claude/skills/sdlc-workflow/references/stage-banner.md
new file mode 100644
index 0000000..9f308de
--- /dev/null
+++ b/template/.claude/skills/sdlc-workflow/references/stage-banner.md
@@ -0,0 +1,37 @@
+# Stage banner format
+
+Display at the top of every response during the SDLC workflow.
+ +## Format + +``` +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +SDLC ○DESIGN ○RED ○GREEN ○REFACTOR ○QUALITY ○SECURE ○DOCS ○COMMIT ○PR +━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ +``` + +## Symbols + +| Symbol | Meaning | +|---|---| +| `✓` | Stage completed successfully | +| `●` | Stage currently active | +| `○` | Stage not yet started | +| `✗` | Stage failed | + +## Examples + +Mid-cycle (GREEN active): +``` +SDLC ✓DESIGN ✓RED ●GREEN ○REFACTOR ○QUALITY ○SECURE ○DOCS ○COMMIT ○PR +``` + +Parallel stages active: +``` +SDLC ✓DESIGN ✓RED ✓GREEN ✓REFACTOR ●QUALITY ●SECURE ●DOCS ○COMMIT ○PR +``` + +Pipeline complete: +``` +SDLC ✓DESIGN ✓RED ✓GREEN ✓REFACTOR ✓QUALITY ✓SECURE ✓DOCS ✓COMMIT ✓PR +``` diff --git a/template/.claude/skills/security/SKILL.md b/template/.claude/skills/security/SKILL.md new file mode 100644 index 0000000..de623d7 --- /dev/null +++ b/template/.claude/skills/security/SKILL.md @@ -0,0 +1,43 @@ +--- +name: security +description: >- + Bandit, semgrep, and Python security patterns. Covers hardcoded secrets, + injection risks, unsafe patterns, input validation, and cryptographic + security. Use this skill when scanning for vulnerabilities, understanding + security errors, or fixing security issues. Trigger on mentions of: security, + vulnerability, bandit, semgrep, secret, API key, injection, SQL injection, + command injection, or any request to audit code for security. +model: haiku +--- + +# Security Skill + +Guidance for security scanning using **bandit** and **semgrep**, plus Python-specific +security patterns. 
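One recurring Python-specific pattern — rejecting path traversal by resolving untrusted paths against a trusted base directory — can be sketched as follows (`safe_join` is a hypothetical helper, not part of this template; `is_relative_to` needs Python 3.9+):

```python
from pathlib import Path

def safe_join(base: str, user_path: str) -> Path:
    """Resolve user_path under base; refuse anything that escapes the base."""
    base_dir = Path(base).resolve()
    candidate = (base_dir / user_path).resolve()
    if not candidate.is_relative_to(base_dir):
        raise ValueError(f"path escapes base directory: {user_path}")
    return candidate

print(safe_join("/tmp", "reports/out.csv"))  # resolved path under the base
try:
    safe_join("/tmp", "../etc/passwd")       # traversal attempt
except ValueError as exc:
    print(f"rejected: {exc}")
```

Resolving *before* the containment check is what defeats `..` segments and symlink tricks — checking the raw string is not enough.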
+ +## Tool dispatch + +| Tool | Command | When to use | +|---|---|---| +| Bandit | (configured if available) | General Python security issues | +| Semgrep | (configured if `.semgrep.yml` exists) | Pattern-based vulnerability scan | + +## Manual security checklist + +When reviewing code, check for: + +| Category | What to check | +|---|---| +| Secrets | Hardcoded API keys, tokens, passwords, private keys in code or config | +| Input validation | All external inputs validated before use (CLI args, env vars, file paths, URLs) | +| Injection risks | SQL/command/code injection vectors; parameterized queries; no shell=True | +| Crypto | Use `secrets` module, never `random` for security-sensitive values | +| Path traversal | user_input validated against base directory; use `.resolve()` | +| Error handling | Error messages don't expose internal paths or config values | + +## Quick reference: where to go deeper + +| Topic | Reference file | +|------------------------------|----------------------------------------------------| +| Bandit usage and rules | [references/bandit.md](references/bandit.md) | +| Semgrep usage and patterns | [references/semgrep.md](references/semgrep.md) | diff --git a/template/.claude/skills/security/references/bandit.md b/template/.claude/skills/security/references/bandit.md new file mode 100644 index 0000000..a2df6a8 --- /dev/null +++ b/template/.claude/skills/security/references/bandit.md @@ -0,0 +1,171 @@ +# Bandit + +Bandit is a security linter for Python. It scans source code for common security issues +using a set of AST-based plugins, each targeting a specific vulnerability class. 
+ +--- + +## What it does + +- Statically analyses Python AST for security anti-patterns +- Reports each finding with a severity (LOW / MEDIUM / HIGH) and a confidence + (LOW / MEDIUM / HIGH) so you can filter by signal strength +- Does **not** execute code — pure static analysis + +--- + +## Installation + +```bash +pip install bandit # or: uv add --dev bandit +``` + +Verify: +```bash +bandit --version +``` + +--- + +## pyproject.toml config (annotated) + +Bandit reads a limited set of keys from `[tool.bandit]`. Note that severity/confidence +thresholds must be passed as **CLI flags** — they are not supported as pyproject.toml +keys. + +```toml +[tool.bandit] +# Directories or files to scan (passed to -r). +targets = ["src"] + +# Test IDs to skip project-wide. Use sparingly — prefer per-line nosec. +# B101 = assert statement (valid in tests; exclude tests dir instead of skipping globally) +# B104 = binding to 0.0.0.0 (acceptable in containerised services — document the reason) +skips = ["B101"] + +# Paths to exclude from scanning (regex matched against file path). 
+exclude_dirs = [".venv", "build", "dist", "tests"]
+```
+
+### Severity and confidence thresholds
+
+Thresholds are set via CLI flags, not pyproject.toml:
+
+| CLI flag | Meaning |
+|---|---|
+| `-l` | Report LOW+ severity (default: all) |
+| `-ll` | Report MEDIUM+ severity (recommended) |
+| `-lll` | Report HIGH severity only |
+| `-i` | Report LOW+ confidence |
+| `-ii` | Report MEDIUM+ confidence (recommended) |
+| `-iii` | Report HIGH confidence only |
+
+For CI, use `-ll -ii` (MEDIUM/MEDIUM) to filter noise without missing real issues:
+
+```bash
+bandit -c pyproject.toml -r src/ -ll -ii
+```
+
+---
+
+## Common issue codes and fixes
+
+| Code | Issue | Fix |
+|---|---|---|
+| `B101` | `assert` statement | Use explicit `if`/`raise` for runtime validation |
+| `B105` | Hardcoded password — string literal | Load from env var or secrets manager |
+| `B106` | Hardcoded password — function argument default | Use `None` default; load at runtime |
+| `B107` | Hardcoded password — function argument | Same as B106 |
+| `B108` | Probable insecure temp file | Use `tempfile.mkstemp()` or `tempfile.TemporaryFile()` |
+| `B110` | `try/except/pass` — silenced exception | Log or handle the exception explicitly |
+| `B201` | Flask debug mode in production | Never set `debug=True` outside local dev |
+| `B301` | `pickle` on untrusted data | Use `json` or `msgpack` for untrusted sources |
+| `B303` | MD5 / SHA1 for cryptography | Use `hashlib.sha256()` or better |
+| `B311` | `random` for security-sensitive use | Use the `secrets` module instead |
+| `B324` | Insecure hash function | Use SHA-256+ |
+| `B501` | SSL certificate verification disabled | Remove `verify=False` |
+| `B601` | `paramiko` call — possible shell injection | Sanitise any user input passed to remote commands |
+| `B602` | `subprocess` with `shell=True` | Pass a list of args; never interpolate user input |
+| `B605` | `os.system()` / process started with a shell | Use `subprocess.run([...], check=True)` |
+| `B608` | SQL string formatting — injection risk | Use parameterised queries |
+
+### Suppressing a finding inline
+
+```python
+result =
subprocess.run(cmd, shell=True) # nosec B602 +``` + +Always include the specific code in `# nosec`. A bare `# nosec` suppresses every finding +on the line and makes the reasoning invisible to reviewers. + +To suppress a finding project-wide, add it to `skips` in `pyproject.toml` with a comment +explaining why — e.g. `skips = ["B104"] # service runs in a container; 0.0.0.0 is correct`. + +--- + +## Running bandit + +```bash +# Basic scan using pyproject.toml config: +bandit -c pyproject.toml -r src/ + +# Recommended CI mode — MEDIUM severity and confidence minimum: +bandit -c pyproject.toml -r src/ -ll -ii + +# JSON output for downstream tooling or artefact storage: +bandit -c pyproject.toml -r src/ -f json -o bandit-report.json + +# Run only specific tests: +bandit -r src/ -t B301,B303 +``` + +Exit codes: `0` = no issues at or above threshold, `1` = issues found, `2` = bandit error. + +--- + +## CI step (GitHub Actions) + +```yaml +- name: Install bandit + run: pip install bandit + +- name: Security lint with bandit + run: bandit -c pyproject.toml -r src/ -ll -ii +``` + +See `SKILL.md` for the full CI ordering across all tools. + +--- + +## Pre-commit hook entry + +```yaml +- repo: https://github.com/PyCQA/bandit + rev: 1.8.3 # pin to a specific release; update with: pre-commit autoupdate + hooks: + - id: bandit + args: ["-c", "pyproject.toml", "-ll", "-ii"] + # Scope to src/ only — test code legitimately uses assert, subprocess, etc. + files: ^src/ +``` + +--- + +## Gotchas + +- **Exclude tests from scanning.** Test code legitimately uses `assert` (B101), raw + subprocess calls, and sometimes intentionally insecure patterns in fixtures. Add + `tests` to `exclude_dirs` in `[tool.bandit]` rather than globally skipping B101. +- **Severity thresholds are CLI-only.** You cannot set `severity = "MEDIUM"` in + `[tool.bandit]` — bandit ignores unknown keys. Pass `-ll -ii` on the command line + (or in the pre-commit `args`). 
+- **`# nosec` without a code is an anti-pattern.** Always write `# nosec B601` so + reviewers know exactly which finding is being suppressed and why. +- **Bandit flags patterns, not intent.** A `random` call used for a non-security shuffle + will still trigger B311. Add `# nosec B311` with a comment explaining the non-security + use rather than globally skipping B311. +- **Bandit is not a substitute for a full security audit.** It catches common + anti-patterns but misses logic-level vulnerabilities, dependency CVEs (use `pip-audit` + for those), and runtime issues. +- **Overlap with semgrep.** Both tools flag issues like `eval`, insecure hashes, and SQL + injection. The overlap is intentional — each catches slightly different forms of the + same pattern. Do not remove bandit in favour of semgrep. diff --git a/template/.claude/skills/security/references/semgrep.md b/template/.claude/skills/security/references/semgrep.md new file mode 100644 index 0000000..13cafed --- /dev/null +++ b/template/.claude/skills/security/references/semgrep.md @@ -0,0 +1,226 @@ +# Semgrep + +Semgrep is a fast, multi-language static analysis tool that matches AST-aware code +patterns. It complements bandit: bandit targets a fixed catalogue of Python security +rules, while semgrep lets you use curated registries and write project-specific custom +rules. + +--- + +## What it does + +- Matches structural code patterns without executing code +- Ships with curated rule registries (`p/python`, `p/owasp-top-ten`, `p/security-audit`) +- Supports custom rules written in YAML — the main value-add over bandit +- Reports findings with severity, rule ID, file, and line number + +--- + +## Installation + +```bash +pip install semgrep # or: uv add --dev semgrep +``` + +The PyPI package bundles the semgrep engine — no separate binary needed. + +Verify: +```bash +semgrep --version +``` + +--- + +## Configuration + +Semgrep does **not** read `pyproject.toml`. 
All configuration lives in `.semgrep.yml` +(or a `.semgrep/` directory) at the project root and is passed via `--config`. + +--- + +## .semgrep.yml (annotated) + +```yaml +# .semgrep.yml — rule configuration file at the project root. +# +# Registry packs (p/python etc.) cannot be embedded directly in this file — +# they must be passed via --config on the CLI or in pre-commit args. +# This file is for LOCAL custom rules only. +# +# To use registry packs alongside local rules, pass multiple --config flags: +# semgrep --config p/python --config p/owasp-top-ten --config .semgrep.yml src/ + +rules: + # ── Example custom rule ────────────────────────────────────────────────── + # Custom rules give semgrep its real advantage over bandit. + # Save additional rules in .semgrep/rules/<name>.yml and reference them + # with: semgrep --config .semgrep/rules/ src/ + + - id: no-hardcoded-jwt-secret + patterns: + - pattern: jwt.encode(..., "$SECRET", ...) + - pattern-not: jwt.encode(..., os.environ.get(...), ...) + message: > + Hardcoded JWT secret. Load the secret from an environment variable + or secrets manager instead. + languages: [python] + severity: ERROR + + - id: no-eval + pattern: eval(...) + message: > + eval() executes arbitrary code. Use ast.literal_eval() for safe + data parsing, or refactor to avoid dynamic evaluation. + languages: [python] + severity: ERROR + + - id: no-print-statements + pattern: print(...) + message: Use the logging module instead of print() in production code. + languages: [python] + severity: WARNING + # Only enforce this in src/, not tests — pass `files: ^src/` in pre-commit. 
+```
+
+### Structuring custom rules
+
+For more than a handful of rules, organise under `.semgrep/rules/`:
+
+```
+.semgrep/
+├── rules/
+│   ├── security.yml       # project-specific security rules
+│   ├── style.yml          # project-specific style rules
+│   └── tests/             # test fixtures for rules (semgrep --test)
+│       └── security_test.py
+```
+
+Reference the whole directory: `semgrep --config .semgrep/rules/ src/`
+
+---
+
+## Common registry rule IDs and fixes
+
+These are available via `--config p/python` or `--config p/security-audit`:
+
+| Rule ID | Issue | Fix |
+|---|---|---|
+| `python.lang.security.audit.exec-detected` | `exec()` call | Refactor to explicit function calls |
+| `python.lang.security.audit.eval-detected` | `eval()` call | Use `ast.literal_eval()` for data |
+| `python.lang.security.audit.dangerous-system-call` | `os.system()` | Use `subprocess.run(["cmd"], check=True)` |
+| `python.lang.security.audit.formatted-sql-query` | SQL via f-string / `%` | Use parameterised queries |
+| `python.lang.security.audit.jinja2.autoescape-disabled` | XSS via Jinja2 | Set `autoescape=True` |
+| `python.django.security.audit.raw-query` | Django raw SQL | Use the ORM or parameterised `raw()` |
+| `python.requests.security.no-auth-over-http` | Credentials over HTTP | Enforce HTTPS |
+| `python.lang.security.insecure-hash-use` | MD5 / SHA1 for security | Use SHA-256 or better |
+
+---
+
+## Suppressing a finding
+
+**Inline (single line):**
+```python
+result = eval(user_input)  # nosemgrep: python.lang.security.audit.eval-detected
+```
+
+**Whole file or directory:** semgrep has no file-level `nosemgrep` comment. Add the
+path to a `.semgrepignore` file at the project root (it uses `.gitignore`-style
+syntax), or exclude it with `paths.exclude` inside the rule definition.
+
+Always include the specific rule ID. A bare `# nosemgrep` suppresses all findings on
+the line and makes the intent invisible to reviewers.
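As a sketch of the fix the eval rules point to, `ast.literal_eval` parses literal data without executing code (the input shown is hypothetical):

```python
import ast

# eval() executes arbitrary code; ast.literal_eval() only accepts Python
# literals (strings, numbers, tuples, lists, dicts, sets, booleans, None).
user_input = "[1, 2, {'k': 'v'}]"  # hypothetical untrusted input
parsed = ast.literal_eval(user_input)
assert parsed == [1, 2, {"k": "v"}]

# Input containing code raises instead of executing:
rejected = False
try:
    ast.literal_eval("__import__('os').system('id')")
except (ValueError, SyntaxError):
    rejected = True
assert rejected
```

If the data is JSON rather than Python literals, `json.loads` is the safer and faster choice.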
+ +--- + +## Running semgrep + +```bash +# Run local rules in .semgrep.yml: +semgrep --config .semgrep.yml src/ + +# Run a registry pack (fetches rules from semgrep.dev on first run, then caches): +semgrep --config p/python src/ + +# Run multiple sources together: +semgrep --config p/python --config p/owasp-top-ten --config .semgrep.yml src/ + +# Restrict to ERROR severity only (suppress WARNING / INFO): +semgrep --config p/python --severity ERROR src/ + +# JSON output for CI artefacts or downstream tooling: +semgrep --config .semgrep.yml --json src/ -o semgrep-report.json + +# Test custom rules against fixtures in .semgrep/rules/tests/: +semgrep --test .semgrep/rules/ +``` + +Exit codes: `0` = no findings, `1` = findings found. + +--- + +## CI step (GitHub Actions) + +```yaml +- name: Install semgrep + run: pip install semgrep + +# Cache registry rules to avoid re-fetching on every run. +- name: Cache semgrep rules + uses: actions/cache@v4 + with: + path: ~/.semgrep/cache + key: semgrep-${{ hashFiles('.semgrep.yml') }} + +# Run registry packs + local rules together. +- name: Semgrep scan + run: | + semgrep --config p/python \ + --config p/owasp-top-ten \ + --config .semgrep.yml \ + --severity ERROR \ + src/ +``` + +See `SKILL.md` for the full CI ordering across all tools. + +--- + +## Pre-commit hook entry + +```yaml +- repo: https://github.com/semgrep/semgrep + rev: v1.60.0 # pin to a specific release; update with: pre-commit autoupdate + hooks: + - id: semgrep + # Registry packs can be passed here too. + args: ["--config", "p/python", "--config", ".semgrep.yml", "--severity", "ERROR"] + # Scope to src/ only. + files: ^src/ + # pass_filenames: true is correct for per-file pattern matching. + # Cross-file taint analysis is not supported in pre-commit mode. +``` + +--- + +## Gotchas + +- **Registry packs are CLI flags, not YAML rule entries.** You cannot write `- p/python` + inside a `rules:` block in `.semgrep.yml` — that syntax is invalid. 
Pass registries + with `--config p/python` on the command line or in the pre-commit `args`. +- **Registry rules require internet access on first run.** Rules are cached in + `~/.semgrep/cache`. In an air-gapped CI environment, vendor the rules locally by + downloading them and adding as local rule files. +- **`p/security-audit` is noisy.** It includes WARNING and INFO findings on top of + ERROR. Use `--severity ERROR` in CI until the baseline is clean, then broaden. +- **Semgrep is slower than bandit on first run** because it fetches and compiles registry + rules. Cache `~/.semgrep/cache` in CI to keep subsequent runs fast. +- **Custom rules are the real value-add.** Registry rules catch generic issues; custom + rules catch your project's specific anti-patterns. Even two or three custom rules + tailored to your stack are worth writing. +- **`--config auto` changes silently over time.** Auto mode infers rulesets from your + codebase and will shift as semgrep's auto-detection improves. Use explicit `--config` + flags in CI for reproducible results. +- **Overlap with bandit is intentional.** Both tools flag `eval`, insecure hashes, and + SQL injection but via different matching mechanisms. The overlap means issues are + caught by at least one tool even if the other misses a variant. diff --git a/template/.claude/skills/tdd-test-planner/SKILL.md b/template/.claude/skills/tdd-test-planner/SKILL.md index cf4a345..3c032fe 100644 --- a/template/.claude/skills/tdd-test-planner/SKILL.md +++ b/template/.claude/skills/tdd-test-planner/SKILL.md @@ -163,7 +163,7 @@ Produce the plan in this exact structure: `⚠ Assumption:` any ambiguity resolved with an assumption (omit if none). 
**### pytest mechanics** -- File: `tests/test_<module>.py` +- File: `tests/unit/test_<module>.py` (or `tests/integration/`, `tests/e2e/` as appropriate) - Fixtures: list; note which belong in `conftest.py` - Parametrize groups: which test groups use parametrize - Markers: unit / integration / regression / slow as applicable @@ -181,7 +181,7 @@ For each applicable category: **### Skeletons** -conftest.py skeleton (only if shared fixtures exist), then `tests/test_<module>.py`: +conftest.py skeleton (only if shared fixtures exist), then `tests/unit/test_<module>.py`: Every test function body uses Arrange / Act / Assert comments with `...` under each. Every function ends with `# [A1]` (or matching ID) in the signature comment. diff --git a/template/.claude/skills/tdd-test-planner/references/pytest-patterns.md b/template/.claude/skills/tdd-test-planner/references/pytest-patterns.md index 5a816ac..79a1eff 100644 --- a/template/.claude/skills/tdd-test-planner/references/pytest-patterns.md +++ b/template/.claude/skills/tdd-test-planner/references/pytest-patterns.md @@ -88,13 +88,19 @@ def test_parse_rejects_invalid_input(bad_input): # [B1] ``` tests/ -├── conftest.py ← project-wide fixtures (DB session, HTTP client) +├── conftest.py <- project-wide fixtures (DB session, HTTP client) ├── unit/ -│ ├── conftest.py ← unit-specific fixtures -│ └── test_transfer.py -└── integration/ - ├── conftest.py ← integration-specific fixtures (real DB, containers) - └── test_transfer_db.py +│ ├── conftest.py <- unit-specific fixtures +│ ├── test_transfer.py +│ └── common/ +│ ├── conftest.py <- common module-specific fixtures +│ └── test_utils.py +├── integration/ +│ ├── conftest.py <- integration-specific fixtures (real DB, containers) +│ └── test_transfer_db.py +└── e2e/ + ├── conftest.py <- e2e-specific fixtures + └── test_workflows.py ``` Project-wide conftest example: diff --git a/template/.claude/skills/test-quality-reviewer/SKILL.md 
b/template/.claude/skills/test-quality-reviewer/SKILL.md index d0d351e..df171d3 100644 --- a/template/.claude/skills/test-quality-reviewer/SKILL.md +++ b/template/.claude/skills/test-quality-reviewer/SKILL.md @@ -288,7 +288,7 @@ When source files are provided: 4. **Untested boundary conditions** — min/max, empty collections, zero, None 5. **Over-tested private internals** — `_method` tests are brittle; suggest testing via public API -**File naming:** `myapp/auth.py` → `tests/test_auth.py` (not `tests/auth_tests.py`). Flag mismatches. +**File naming:** `myapp/auth.py` → `tests/unit/test_auth.py` (not `tests/auth_tests.py`). Flag mismatches. --- diff --git a/template/.claude/skills/test-quality-reviewer/references/advanced-patterns.md b/template/.claude/skills/test-quality-reviewer/references/advanced-patterns.md index 9a18fda..674e7f6 100644 --- a/template/.claude/skills/test-quality-reviewer/references/advanced-patterns.md +++ b/template/.claude/skills/test-quality-reviewer/references/advanced-patterns.md @@ -207,15 +207,20 @@ mock.sned_email() # AttributeError: Mock object has no attribute 'sned_email' ``` project/ -├── conftest.py ← session-level fixtures (DB engine, test client) ├── tests/ │ ├── conftest.py ← fixtures shared across all test subdirectories │ ├── unit/ │ │ ├── conftest.py ← fixtures only for unit tests -│ │ └── test_auth.py -│ └── integration/ -│ ├── conftest.py ← fixtures only for integration tests -│ └── test_api.py +│ │ ├── test_auth.py +│ │ └── common/ +│ │ ├── conftest.py ← fixtures only for common module tests +│ │ └── test_utils.py +│ ├── integration/ +│ │ ├── conftest.py ← fixtures only for integration tests +│ │ └── test_api.py +│ └── e2e/ +│ ├── conftest.py ← fixtures only for e2e tests +│ └── test_workflows.py ``` **Rule of thumb:** diff --git a/template/.claude/skills/type-checking/SKILL.md b/template/.claude/skills/type-checking/SKILL.md new file mode 100644 index 0000000..ce1000f --- /dev/null +++ 
b/template/.claude/skills/type-checking/SKILL.md
@@ -0,0 +1,57 @@
+---
+name: type-checking
+description: >-
+  BasedPyright type checking: configuration, common errors, fix strategies, and
+  standard mode settings. Use this skill when debugging type errors, configuring
+  basedpyright, or understanding type annotations. Trigger on mentions of:
+  type checking, type annotation, basedpyright, typing errors, mypy, type error,
+  Protocol, Union, Generic, or any request to fix type issues in Python.
+model: haiku
+---
+
+# Type checking Skill
+
+Guidance for **basedpyright** in `standard` mode for consistent type checking across
+the project.
+
+## Command dispatch
+
+| Command | What it does |
+|---|---|
+| `just type` | Run basedpyright type check |
+| `just type --outputjson` | Machine-readable type errors |
+
+## Configuration
+
+See `pyproject.toml` under `[tool.basedpyright]`:
+- Mode: `standard`
+- Python version: 3.11
+- venv: `.venv`
+
+Never change the mode to `off`, `basic`, or `strict` without discussion.
+
+## Common errors and fixes
+
+See `references/basedpyright.md` for:
+- All error codes and what they mean
+- Step-by-step fix strategies
+- Type annotation patterns for common scenarios
+
+## Type annotation rules
+
+1. **All public functions** must have complete type annotations (parameters + return).
+2. **Private functions** (`_`-prefixed) may be annotated less strictly, but annotations are encouraged.
+3. Use modern syntax: `X | Y` instead of `Optional[X]` / `Union[X, Y]`.
+4. Use `Never` for impossible paths, `NoReturn` for non-returning functions.
+5. Use `Protocol` for structural typing instead of ABC when possible.
+
+## Suppression
+
+- Avoid bare `# type: ignore`; it silences every error on the line.
+- When suppressing, use `# pyright: ignore[ruleName]` with an explanation; this scopes the suppression to the named rule.
+ +## Quick reference: where to go deeper + +| Topic | Reference file | +|--------------------------|----------------------------------------------------------| +| Error codes and fixes | [references/basedpyright.md](references/basedpyright.md) | diff --git a/template/.claude/skills/type-checking/references/basedpyright.md b/template/.claude/skills/type-checking/references/basedpyright.md new file mode 100644 index 0000000..e8fe363 --- /dev/null +++ b/template/.claude/skills/type-checking/references/basedpyright.md @@ -0,0 +1,247 @@ +# basedpyright + +basedpyright is a community fork of pyright (Microsoft's static type checker for Python). +It adds stricter defaults, clearer error messages, and additional rules over vanilla pyright. + +--- + +## What it does + +- Performs static type analysis — no code is executed +- Checks annotations, inferred types, call signatures, narrowing, exhaustiveness, and more +- Reads `[tool.basedpyright]` from `pyproject.toml` + +--- + +## Installation + +```bash +pip install basedpyright # or: uv add --dev basedpyright +``` + +Verify: +```bash +basedpyright --version +``` + +--- + +## pyproject.toml config (annotated) + +```toml +[tool.basedpyright] +# Python version used for type checking (e.g. affects which stdlib types are available). +pythonVersion = "3.11" + +# First-party source directories. Only these paths are type-checked. +include = ["src"] + +# Paths excluded from analysis. +exclude = ["**/__pycache__", ".venv", "build", "dist"] + +# Virtual environment resolution. +# venvPath = the DIRECTORY that contains your venv folder (usually the project root). +# venv = the NAME of the venv folder inside venvPath. +# Together they tell basedpyright where to find installed third-party packages. +# Example: venvPath = "." and venv = ".venv" resolves to ./.venv +venvPath = "." 
+venv = ".venv" + +# ── Strictness level ──────────────────────────────────────────────────────── +# Options: "off" | "basic" | "standard" | "strict" | "all" +# Recommendation: start at "standard", move to "strict" once the codebase is annotated. +typeCheckingMode = "standard" + +# ── Individual rule overrides ──────────────────────────────────────────────── +# Each rule can be set to "none" | "information" | "warning" | "error" +# independently of typeCheckingMode. Uncomment to customise: + +# reportUnknownVariableType = "error" # flag x: Unknown +# reportUnknownMemberType = "error" # flag unknown attribute types +# reportMissingTypeArgument = "warning" # flag bare list, dict etc. +# reportUnusedImport = "warning" # overlaps with ruff F401 +# reportUninitializedInstanceVariable = "warning" +``` + +### Strictness levels explained + +| Level | What it checks | +|---|---| +| `off` | Type checking disabled | +| `basic` | Only obvious errors: undefined names, wrong argument counts | +| `standard` | Full pyright checks — good default for most projects | +| `strict` | All checks; full annotation coverage required | +| `all` | basedpyright-specific extras on top of `strict` (may have false positives) | + +**Recommended progression:** start at `standard`, fix all errors, then move to `strict`. +Use per-file overrides (below) to keep `strict` globally while granting exceptions for +tests or legacy modules. 
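To make the progression concrete, here is a hypothetical function at two stages. The unannotated form relies entirely on inference and is flagged under `strict` (e.g. missing or unknown parameter types), while the annotated form satisfies it:

```python
# Relies on inference; "strict" mode flags the missing annotations.
def scale(values, factor):
    return [v * factor for v in values]


# Fully annotated form required by "strict":
def scale_typed(values: list[float], factor: float) -> list[float]:
    return [v * factor for v in values]


assert scale([1, 2], 3) == [3, 6]
assert scale_typed([1.0, 2.0], 3.0) == [3.0, 6.0]
```

Both behave identically at runtime; the annotations only change what the checker can verify.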
+
+---
+
+## Per-file overrides
+
+Relax or tighten type checking for specific directories:
+
+```toml
+[tool.basedpyright]
+typeCheckingMode = "strict"
+
+[[tool.basedpyright.executionEnvironments]]
+root = "tests"
+typeCheckingMode = "basic"  # test fixtures are often untyped; relax here
+
+[[tool.basedpyright.executionEnvironments]]
+root = "scripts"
+typeCheckingMode = "standard"
+```
+
+For a single line, use an inline ignore (last resort — prefer fixing the root cause):
+
+```python
+result = some_untyped_library.call()  # pyright: ignore[reportUnknownMemberType]
+```
+
+Always include the specific rule name in `# pyright: ignore[...]`. A bare
+`# type: ignore` silences all errors on the line and makes the intent invisible to
+reviewers.
+
+---
+
+## Common error codes and fixes
+
+| Code | Meaning | Fix |
+|---|---|---|
+| `reportMissingImports` | Package not installed in the venv | Install the package; or check `venvPath`/`venv` config |
+| `reportMissingTypeStubs` | Package has no type information | Install `types-<pkg>` stub; see below |
+| `reportUnknownVariableType` | Type inferred as `Unknown` | Add an explicit annotation |
+| `reportUnknownMemberType` | Attribute type is `Unknown` | Annotate the class attribute |
+| `reportUnknownArgumentType` | Argument type cannot be inferred | Annotate the caller or the callee |
+| `reportReturnType` | Return value doesn't match declared return type | Fix annotation or return value |
+| `reportAttributeAccessIssue` | Attribute doesn't exist on the type | Check spelling; use `hasattr` guard |
+| `reportOperatorIssue` | Operator not defined for these types | Narrow the type before the operation |
+| `reportIndexIssue` | Index or key type is invalid | Ensure key type matches the container |
+| `reportCallIssue` | Object is not callable | Check the type; unwrap Optional before calling |
+
+### Handling third-party libraries without type stubs
+
+**Option 1 — Install community stubs** (preferred when stubs exist):
+
+```bash
+pip install types-requests types-PyYAML types-python-dateutil
+```
+
+**Option 2 — Suppress missing-stubs warnings project-wide** (when no stubs exist):
+
+```toml
+[tool.basedpyright]
+reportMissingTypeStubs = "none"  # or "warning" to see it without failing CI
+```
+
+**Option 3 — Suppress per import** (for one-off cases):
+
+```python
+import untyped_lib  # pyright: ignore[reportMissingTypeStubs]
+```
+
+**Option 4 — Generate a stub skeleton**:
+
+```bash
+basedpyright --createstub some_package  # writes a stub under the stubPath (default: typings/)
+```
+
+Commit the generated stubs to the repo; basedpyright picks up `typings/` automatically.
+To use a different directory, set `stubPath` in `[tool.basedpyright]`.
+
+---
+
+## Running basedpyright
+
+```bash
+# Check entire project (reads pyproject.toml automatically):
+basedpyright
+
+# Check a specific file:
+basedpyright src/mymodule.py
+
+# Verbose output — shows which config file was loaded, useful for debugging:
+basedpyright --verbose
+
+# JSON output — for tooling integration or counting errors:
+basedpyright --outputjson | python -c "import sys,json; d=json.load(sys.stdin); print(d['summary'])"
+
+# Count current errors (useful for tracking incremental adoption progress):
+basedpyright 2>&1 | grep " error" | wc -l
+```
+
+Exit codes: `0` = no errors, `1` = type errors found, `2` = fatal configuration error.
+
+---
+
+## Incremental adoption strategy
+
+For an existing codebase with many type errors, adopt in stages:
+
+1. **`typeCheckingMode = "basic"`** — fix the small set of obvious errors first.
+2. **`typeCheckingMode = "standard"`** — fix errors, use `executionEnvironments` to
+   relax specific directories (tests, scripts, legacy modules) temporarily.
+3. **`typeCheckingMode = "strict"`** — requires full annotation coverage across `src/`.
+   Tackle module by module. Add `# pyright: ignore[<rule>]` as a last resort.
+4. **`typeCheckingMode = "all"`** — evaluate locally; some rules may be too aggressive
+   for production CI.
Keep at `"strict"` in CI unless all `"all"` rules are clean. + +--- + +## CI step (GitHub Actions) + +```yaml +- name: Install dependencies + run: pip install -r requirements.txt # basedpyright must be in here, or: + # run: uv sync + +- name: Type check with basedpyright + run: basedpyright +``` + +basedpyright needs the project's dependencies installed so it can resolve third-party +imports. If it reports `reportMissingImports` on every import, the venv is not being +found — check that `venvPath` and `venv` in `pyproject.toml` match the actual venv path, +or that the CI runner has the packages on `PATH`. + +--- + +## Pre-commit hook entry + +Run as a `local` hook to use the project's installed venv: + +```yaml +- repo: local + hooks: + - id: basedpyright + name: basedpyright + entry: basedpyright + language: system # uses basedpyright from the active venv + types: [python] + pass_filenames: false # analyses the whole project, not individual files +``` + +Requires `basedpyright` to be installed in the active environment before committing. +In CI, install dependencies before running pre-commit (see `references/pre-commit.md`). + +--- + +## Gotchas + +- **`venvPath` vs `venv` — a common confusion.** `venvPath` is the *directory that + contains* the venv, and `venv` is the *name of the venv folder*. For a project at + `/my/project` with a venv at `/my/project/.venv`, set `venvPath = "."` and + `venv = ".venv"`. Setting `venvPath = ".venv"` is wrong. +- **`reportMissingImports` on all third-party imports** usually means the venv isn't + found. Double-check `venvPath`/`venv` config or confirm packages are installed. +- **basedpyright vs pyright.** A colleague using vanilla pyright will see nearly + identical errors. basedpyright adds rules like `reportUnreachable`. Both tools read + the same `[tool.basedpyright]` / `[tool.pyright]` config keys. 
+- **`pass_filenames: false` is required in pre-commit.** basedpyright resolves the full + import graph across all files; giving it individual filenames breaks cross-module + type inference. +- **`typeCheckingMode = "all"` may produce false positives.** Use `"strict"` in CI + and evaluate `"all"` locally before committing to it. From d3c7cf31a6904b74eac8f8cd60c2d29daba92bea Mon Sep 17 00:00:00 2001 From: Claude <claude@anthropic.com> Date: Thu, 16 Apr 2026 00:15:02 +0200 Subject: [PATCH 02/12] chore: add maintain-commands skill and update template/docs --- .claude/CLAUDE.md | 143 --- .claude/commands/ci-fix.md | 7 + .claude/commands/ci.md | 20 +- .claude/commands/coverage.md | 27 +- .claude/commands/dependency-check.md | 5 + .claude/commands/docs-check.md | 16 +- .claude/commands/generate.md | 11 - .claude/commands/release.md | 89 +- .claude/commands/review.md | 21 +- .claude/commands/standards.md | 25 +- .claude/commands/tdd-green.md | 5 + .claude/commands/tdd-red.md | 6 + .claude/commands/test.md | 9 +- .claude/commands/update-claude-md.md | 6 + .claude/commands/validate-release.md | 7 + .claude/hooks/post-bash-pr-created.sh | 8 +- .../hooks/post-bash-test-coverage-reminder.sh | 74 ++ .claude/hooks/post-edit-copier-migration.sh | 8 +- .claude/hooks/post-edit-jinja.sh | 8 +- .claude/hooks/post-edit-markdown.sh | 8 +- .claude/hooks/post-edit-python.sh | 8 +- .../hooks/post-edit-refactor-test-guard.sh | 7 +- .claude/hooks/post-edit-template-mirror.sh | 8 +- .claude/hooks/post-write-test-structure.sh | 60 ++ .claude/hooks/pre-bash-block-no-verify.sh | 8 +- .claude/hooks/pre-bash-branch-protection.sh | 62 ++ .claude/hooks/pre-bash-commit-quality.sh | 8 +- .claude/hooks/pre-bash-coverage-gate.sh | 11 +- .claude/hooks/pre-bash-git-push-reminder.sh | 8 +- .claude/hooks/pre-config-protection.sh | 16 +- .claude/hooks/pre-delete-protection.sh | 66 ++ .claude/hooks/pre-protect-readonly-files.sh | 17 +- .claude/hooks/pre-protect-uv-lock.sh | 8 +- 
.claude/hooks/pre-write-doc-file-warning.sh | 8 +- .claude/hooks/pre-write-jinja-syntax.sh | 24 +- .claude/hooks/pre-write-src-require-test.sh | 5 +- .claude/hooks/pre-write-src-test-reminder.sh | 2 +- {assets => .claude/hooks}/protected_files.csv | 0 .claude/hooks/stop-cost-tracker.sh | 2 +- .claude/hooks/stop-desktop-notify.sh | 10 +- .claude/hooks/stop-evaluate-session.sh | 2 +- .claude/hooks/stop-session-end.sh | 2 +- .claude/rules/README.md | 142 +-- .claude/rules/bash/coding-style.md | 148 +-- .claude/rules/bash/security.md | 101 +-- .claude/rules/common/code-review.md | 77 -- .claude/rules/common/coding-style.md | 72 +- .claude/rules/common/development-workflow.md | 85 -- .claude/rules/common/git-workflow.md | 86 +- .claude/rules/common/hooks.md | 13 +- .claude/rules/common/security.md | 67 +- .claude/rules/common/testing.md | 90 +- .claude/rules/copier/template-conventions.md | 188 +--- .claude/rules/jinja/coding-style.md | 107 --- .claude/rules/jinja/testing.md | 91 -- .claude/rules/markdown/conventions.md | 90 +- .claude/rules/python/coding-style.md | 155 +--- .claude/rules/python/hooks.md | 122 +-- .claude/rules/python/patterns.md | 137 --- .claude/rules/python/security.md | 109 --- .claude/rules/python/testing.md | 157 +--- .claude/rules/yaml/conventions.md | 84 +- .claude/settings.json | 2 + .claude/skills/bash-guide/SKILL.md | 208 +++++ .../skills/bash-guide/references/logging.md | 209 +++++ .../skills/bash-guide/references/patterns.md | 320 +++++++ .../skills/bash-guide/references/structure.md | 202 +++++ .claude/skills/config-management/SKILL.md | 93 ++ .../references/complete-configs.md | 257 ++++++ .claude/skills/cron-scheduling/SKILL.md | 235 +++++ .../references/environments.md | 560 ++++++++++++ .../references/managing-jobs.md | 449 +++++++++ .../cron-scheduling/references/monitoring.md | 472 ++++++++++ .../references/syntax-reference.md | 311 +++++++ .claude/skills/css-guide/SKILL.md | 265 ++++++ .../css-guide/references/accessibility.md | 160 
++++ .../css-guide/references/css-architecture.md | 261 ++++++ .../css-guide/references/layout-responsive.md | 154 ++++ .claude/skills/html-guide/SKILL.md | 294 ++++++ .../html-guide/references/accessibility.md | 223 +++++ .../references/semantic-elements.md | 291 ++++++ .claude/skills/jinja-guide/SKILL.md | 264 ++++++ .../jinja-guide/references/best-practices.md | 411 +++++++++ .../references/filters-reference.md | 358 ++++++++ .../jinja-guide/references/inheritance.md | 371 ++++++++ .../skills/jinja-guide/references/syntax.md | 223 +++++ .claude/skills/linting/SKILL.md | 70 ++ .../skills/linting/references/pre-commit.md | 221 +++++ .claude/skills/linting/references/ruff.md | 203 +++++ .../skills/maintain-commands}/SKILL.md | 84 +- .../maintain-commands}/claude-commands.skill | Bin .../references/command-patterns.md | 2 +- .../references/frontmatter-reference.md | 2 +- .../skills/maintain-hooks}/SKILL.md | 203 ++--- .../skills/maintain-hooks}/claude-hooks.skill | Bin .../maintain-hooks}/references/events.md | 2 +- .../maintain-hooks/references/patterns.md | 124 +++ .claude/skills/maintain-rules/SKILL.md | 331 +++++++ .../maintain-rules/references/categories.md | 353 ++++++++ .../maintain-rules/references/examples.md | 258 ++++++ .claude/skills/maintain-skills/SKILL.md | 311 +++++++ .../references/audit-checklist.md | 0 .../references/audit-examples.md | 0 .../references/fixing-skills.md | 149 +++ .../references/maintenance-log.md | 0 .claude/skills/markdown/SKILL.md | 179 ++++ .../references/anti-patterns-cheatsheet.md | 323 +++++++ .../markdown/references/code-and-links.md | 347 +++++++ .../markdown/references/document-structure.md | 274 ++++++ .../markdown/references/extended-syntax.md | 325 +++++++ .../markdown/references/file-management.md | 347 +++++++ .../markdown/references/formatting-syntax.md | 306 +++++++ .../markdown/references/tables-images-html.md | 277 ++++++ .claude/skills/prepare_pr/SKILL.md | 151 +++ .../prepare_pr/references/section-rules.md | 
201 ++++ .claude/skills/pytest/SKILL.md | 341 +++++++ .../skills/pytest/references/anti-patterns.md | 268 ++++++ .../skills/pytest/references/assertions.md | 243 +++++ .../pytest/references/ci-and-plugins.md | 250 +++++ .claude/skills/pytest/references/fixtures.md | 212 +++++ .claude/skills/pytest/references/mocking.md | 206 +++++ .../references/parametrize-and-markers.md | 211 +++++ .../pytest/references/test-organization.md | 213 +++++ .../skills/pytest/references/test-types.md | 205 +++++ .../skills/pytest/scripts/find_slow_tests.py | 167 ++++ .../skills/pytest/scripts/mark_slow_tests.py | 260 ++++++ .claude/skills/python-code-quality/SKILL.md | 113 +++ .../python-code-quality/references/bandit.md | 171 ++++ .../references/basedpyright.md | 247 +++++ .../references/complete-configs.md | 257 ++++++ .../references/pre-commit.md | 221 +++++ .../python-code-quality/references/ruff.md | 203 +++++ .../python-code-quality/references/semgrep.md | 226 +++++ .claude/skills/python-code-reviewer/SKILL.md | 204 +++++ .../references/checklist.md | 291 ++++++ .../references/output-format.md | 270 ++++++ .../references/python-patterns.md | 502 ++++++++++ .claude/skills/python-docstrings/SKILL.md | 95 ++ .../python-docstrings/references/auditing.md | 237 +++++ .../python-docstrings/references/classes.md | 283 ++++++ .../python-docstrings/references/examples.md | 398 ++++++++ .../python-docstrings/references/functions.md | 367 ++++++++ .../references/generators.md | 183 ++++ .../python-docstrings/references/modules.md | 200 ++++ .../python-docstrings/references/overrides.md | 188 ++++ .../references/properties.md | 163 ++++ .../python-docstrings/references/sections.md | 285 ++++++ .claude/skills/sdlc-workflow/SKILL.md | 309 +++++++ .../assets/templates}/task_template.yaml | 2 +- .../sdlc-workflow/references/stage-banner.md | 37 + .../sdlc-workflow/scripts}/preflight.sh | 3 +- .../sdlc-workflow/scripts}/validate_dor.py | 0 .claude/skills/security/SKILL.md | 43 + 
.claude/skills/security/references/bandit.md | 171 ++++ .claude/skills/security/references/semgrep.md | 226 +++++ .claude/skills/tdd-test-planner/SKILL.md | 4 +- .../references/pytest-patterns.md | 18 +- .claude/skills/test-quality-reviewer/SKILL.md | 2 +- .../references/advanced-patterns.md | 15 +- .claude/skills/type-checking/SKILL.md | 57 ++ .../type-checking/references/basedpyright.md | 247 +++++ .github/CODEOWNERS | 6 + .github/CODE_OF_CONDUCT.md | 87 ++ .github/ISSUE_TEMPLATE/bug_report.md.jinja | 87 ++ .github/ISSUE_TEMPLATE/config.yml.jinja | 5 + .../ISSUE_TEMPLATE/feature_request.md | 0 .github/renovate.json | 55 +- .github/workflows/file-freshness.yml | 2 + .github/workflows/release.yml | 17 +- .gitignore | 142 ++- .gitmessage | 46 +- {docs => assets}/root-template-sync-map.yaml | 74 +- cliff.toml | 120 +++ docs/commit-release-workflow.md | 429 +++++++++ docs/repo_instructions.md | 857 ++++++++++++++++++ docs/repo_management.md | 221 +++++ docs/root-template-sync-guide.md | 272 ++++++ docs/root-template-sync-policy.md | 2 +- env.example | 65 +- justfile | 96 +- pyproject.toml | 14 +- scripts/bump_version.py | 5 +- scripts/check_root_template_sync.py | 193 +++- scripts/pr_commit_policy.py | 107 ++- scripts/repo_file_freshness.py | 176 +++- scripts/sync_skip_if_exists.py | 79 +- template/.claude/commands/ci-fix.md | 7 + template/.claude/commands/ci.md | 5 + template/.claude/commands/coverage.md | 6 + template/.claude/commands/dependency-check.md | 5 + template/.claude/commands/docs-check.md | 6 + template/.claude/commands/generate.md | 6 + .../commands/guided-template-update.md | 7 + template/.claude/commands/release.md | 27 +- template/.claude/commands/review.md | 6 + template/.claude/commands/standards.md | 5 + template/.claude/commands/tdd-green.md | 5 + template/.claude/commands/tdd-red.md | 6 + template/.claude/commands/test.md | 5 + template/.claude/commands/update-claude-md.md | 6 + template/.claude/commands/validate-release.md | 7 + 
template/.claude/hooks/README.md | 2 +- .../hooks/post-bash-test-coverage-reminder.sh | 68 +- template/.claude/hooks/post-edit-markdown.sh | 8 +- template/.claude/hooks/post-edit-python.sh | 8 +- .../hooks/post-edit-refactor-test-guard.sh | 7 +- .../hooks/post-write-test-structure.sh | 36 +- .../.claude/hooks/pre-bash-block-no-verify.sh | 8 +- .../hooks/pre-bash-branch-protection.sh | 54 +- .../.claude/hooks/pre-bash-commit-quality.sh | 8 +- .../.claude/hooks/pre-bash-coverage-gate.sh | 11 +- .../hooks/pre-bash-git-push-reminder.sh | 8 +- .../.claude/hooks/pre-config-protection.sh | 16 +- .../.claude/hooks/pre-delete-protection.sh | 49 +- template/.claude/hooks/pre-protect-uv-lock.sh | 8 +- .../hooks/pre-write-src-require-test.sh | 5 +- .../hooks/pre-write-src-test-reminder.sh | 2 +- template/.claude/rules/README.md | 109 ++- template/.claude/rules/bash/coding-style.md | 92 +- template/.claude/rules/bash/security.md | 107 +-- template/.claude/rules/common/code-review.md | 74 -- template/.claude/rules/common/coding-style.md | 65 +- .../rules/common/development-workflow.md | 51 -- template/.claude/rules/common/git-workflow.md | 80 +- template/.claude/rules/common/hooks.md | 18 +- template/.claude/rules/common/security.md | 57 +- template/.claude/rules/common/testing.md | 72 +- .../rules/copier/template-conventions.md | 188 +--- template/.claude/rules/jinja/coding-style.md | 107 --- template/.claude/rules/jinja/testing.md | 91 -- .../.claude/rules/markdown/conventions.md | 79 +- template/.claude/rules/python/coding-style.md | 148 +-- .../rules/python/coding-style.md.jinja | 138 --- template/.claude/rules/python/hooks.md | 114 +-- template/.claude/rules/python/patterns.md | 164 ---- .../.claude/rules/python/patterns.md.jinja | 106 --- template/.claude/rules/python/security.md | 107 --- template/.claude/rules/python/testing.md | 154 +--- template/.claude/rules/yaml/conventions.md | 84 +- template/.claude/settings.json | 105 +++ .../templates/command-template.md | 159 
---- .../assets/templates/hook-template.py | 274 ------ .../assets/templates/hook-template.sh | 173 ---- .../assets/templates/settings-example.json | 211 ----- .../.claude/skills/sdlc-workflow/SKILL.md | 17 +- .../.claude/skills/skill-maintainer/SKILL.md | 368 -------- .../.github/PULL_REQUEST_TEMPLATE.md.jinja | 84 -- template/.github/renovate.json.jinja | 61 -- .../workflows/dependency-review.yml.jinja | 40 - template/.github/workflows/labeler.yml.jinja | 27 - .../.github/workflows/pr-policy.yml.jinja | 40 - template/.github/workflows/release.yml.jinja | 19 +- template/.gitignore.jinja | 229 ----- template/.gitmessage | 46 +- template/.pre-commit-config.yaml.jinja | 83 -- template/.secrets.baseline | 8 +- template/.vscode/extensions.json.jinja | 42 - template/.vscode/launch.json.jinja | 89 -- template/.vscode/settings.json.jinja | 94 -- .../docs/github-repository-settings.md.jinja | 183 ---- template/justfile.jinja | 85 +- template/scripts/pr_commit_policy.py.jinja | 201 ---- .../common/file_manager.py.jinja | 48 - template/tests/e2e/conftest.py.jinja | 1 - template/tests/integration/conftest.py.jinja | 1 - ...de_git_cliff %}cliff.toml{% endif %}.jinja | 119 ++- 266 files changed, 24808 insertions(+), 7184 deletions(-) delete mode 100644 .claude/CLAUDE.md delete mode 100644 .claude/commands/generate.md create mode 100755 .claude/hooks/post-bash-test-coverage-reminder.sh mode change 100644 => 100755 .claude/hooks/post-edit-refactor-test-guard.sh create mode 100755 .claude/hooks/post-write-test-structure.sh create mode 100755 .claude/hooks/pre-bash-branch-protection.sh mode change 100644 => 100755 .claude/hooks/pre-bash-coverage-gate.sh create mode 100755 .claude/hooks/pre-delete-protection.sh rename {assets => .claude/hooks}/protected_files.csv (100%) delete mode 100644 .claude/rules/common/code-review.md delete mode 100644 .claude/rules/common/development-workflow.md delete mode 100644 .claude/rules/jinja/coding-style.md delete mode 100644 
.claude/rules/jinja/testing.md delete mode 100644 .claude/rules/python/patterns.md delete mode 100644 .claude/rules/python/security.md create mode 100644 .claude/skills/bash-guide/SKILL.md create mode 100644 .claude/skills/bash-guide/references/logging.md create mode 100644 .claude/skills/bash-guide/references/patterns.md create mode 100644 .claude/skills/bash-guide/references/structure.md create mode 100644 .claude/skills/config-management/SKILL.md create mode 100644 .claude/skills/config-management/references/complete-configs.md create mode 100644 .claude/skills/cron-scheduling/SKILL.md create mode 100644 .claude/skills/cron-scheduling/references/environments.md create mode 100644 .claude/skills/cron-scheduling/references/managing-jobs.md create mode 100644 .claude/skills/cron-scheduling/references/monitoring.md create mode 100644 .claude/skills/cron-scheduling/references/syntax-reference.md create mode 100644 .claude/skills/css-guide/SKILL.md create mode 100644 .claude/skills/css-guide/references/accessibility.md create mode 100644 .claude/skills/css-guide/references/css-architecture.md create mode 100644 .claude/skills/css-guide/references/layout-responsive.md create mode 100644 .claude/skills/html-guide/SKILL.md create mode 100644 .claude/skills/html-guide/references/accessibility.md create mode 100644 .claude/skills/html-guide/references/semantic-elements.md create mode 100644 .claude/skills/jinja-guide/SKILL.md create mode 100644 .claude/skills/jinja-guide/references/best-practices.md create mode 100644 .claude/skills/jinja-guide/references/filters-reference.md create mode 100644 .claude/skills/jinja-guide/references/inheritance.md create mode 100644 .claude/skills/jinja-guide/references/syntax.md create mode 100644 .claude/skills/linting/SKILL.md create mode 100644 .claude/skills/linting/references/pre-commit.md create mode 100644 .claude/skills/linting/references/ruff.md rename {template/.claude/skills/claude_commands => 
.claude/skills/maintain-commands}/SKILL.md (67%) rename {template/.claude/skills/claude_commands => .claude/skills/maintain-commands}/claude-commands.skill (100%) rename {template/.claude/skills/claude_commands => .claude/skills/maintain-commands}/references/command-patterns.md (99%) rename {template/.claude/skills/claude_commands => .claude/skills/maintain-commands}/references/frontmatter-reference.md (99%) rename {template/.claude/skills/claude_hooks => .claude/skills/maintain-hooks}/SKILL.md (73%) rename {template/.claude/skills/claude_hooks => .claude/skills/maintain-hooks}/claude-hooks.skill (100%) rename {template/.claude/skills/claude_hooks => .claude/skills/maintain-hooks}/references/events.md (99%) create mode 100644 .claude/skills/maintain-hooks/references/patterns.md create mode 100644 .claude/skills/maintain-rules/SKILL.md create mode 100644 .claude/skills/maintain-rules/references/categories.md create mode 100644 .claude/skills/maintain-rules/references/examples.md create mode 100644 .claude/skills/maintain-skills/SKILL.md rename {template/.claude/skills/skill-maintainer => .claude/skills/maintain-skills}/references/audit-checklist.md (100%) rename {template/.claude/skills/skill-maintainer => .claude/skills/maintain-skills}/references/audit-examples.md (100%) create mode 100644 .claude/skills/maintain-skills/references/fixing-skills.md rename {template/.claude/skills/skill-maintainer => .claude/skills/maintain-skills}/references/maintenance-log.md (100%) create mode 100644 .claude/skills/markdown/SKILL.md create mode 100644 .claude/skills/markdown/references/anti-patterns-cheatsheet.md create mode 100644 .claude/skills/markdown/references/code-and-links.md create mode 100644 .claude/skills/markdown/references/document-structure.md create mode 100644 .claude/skills/markdown/references/extended-syntax.md create mode 100644 .claude/skills/markdown/references/file-management.md create mode 100644 .claude/skills/markdown/references/formatting-syntax.md 
create mode 100644 .claude/skills/markdown/references/tables-images-html.md create mode 100644 .claude/skills/prepare_pr/SKILL.md create mode 100644 .claude/skills/prepare_pr/references/section-rules.md create mode 100644 .claude/skills/pytest/SKILL.md create mode 100644 .claude/skills/pytest/references/anti-patterns.md create mode 100644 .claude/skills/pytest/references/assertions.md create mode 100644 .claude/skills/pytest/references/ci-and-plugins.md create mode 100644 .claude/skills/pytest/references/fixtures.md create mode 100644 .claude/skills/pytest/references/mocking.md create mode 100644 .claude/skills/pytest/references/parametrize-and-markers.md create mode 100644 .claude/skills/pytest/references/test-organization.md create mode 100644 .claude/skills/pytest/references/test-types.md create mode 100644 .claude/skills/pytest/scripts/find_slow_tests.py create mode 100644 .claude/skills/pytest/scripts/mark_slow_tests.py create mode 100644 .claude/skills/python-code-quality/SKILL.md create mode 100644 .claude/skills/python-code-quality/references/bandit.md create mode 100644 .claude/skills/python-code-quality/references/basedpyright.md create mode 100644 .claude/skills/python-code-quality/references/complete-configs.md create mode 100644 .claude/skills/python-code-quality/references/pre-commit.md create mode 100644 .claude/skills/python-code-quality/references/ruff.md create mode 100644 .claude/skills/python-code-quality/references/semgrep.md create mode 100644 .claude/skills/python-code-reviewer/SKILL.md create mode 100644 .claude/skills/python-code-reviewer/references/checklist.md create mode 100644 .claude/skills/python-code-reviewer/references/output-format.md create mode 100644 .claude/skills/python-code-reviewer/references/python-patterns.md create mode 100644 .claude/skills/python-docstrings/SKILL.md create mode 100644 .claude/skills/python-docstrings/references/auditing.md create mode 100644 .claude/skills/python-docstrings/references/classes.md create 
mode 100644 .claude/skills/python-docstrings/references/examples.md create mode 100644 .claude/skills/python-docstrings/references/functions.md create mode 100644 .claude/skills/python-docstrings/references/generators.md create mode 100644 .claude/skills/python-docstrings/references/modules.md create mode 100644 .claude/skills/python-docstrings/references/overrides.md create mode 100644 .claude/skills/python-docstrings/references/properties.md create mode 100644 .claude/skills/python-docstrings/references/sections.md create mode 100644 .claude/skills/sdlc-workflow/SKILL.md rename {assets => .claude/skills/sdlc-workflow/assets/templates}/task_template.yaml (97%) create mode 100644 .claude/skills/sdlc-workflow/references/stage-banner.md rename {scripts => .claude/skills/sdlc-workflow/scripts}/preflight.sh (94%) rename {scripts => .claude/skills/sdlc-workflow/scripts}/validate_dor.py (100%) create mode 100644 .claude/skills/security/SKILL.md create mode 100644 .claude/skills/security/references/bandit.md create mode 100644 .claude/skills/security/references/semgrep.md create mode 100644 .claude/skills/type-checking/SKILL.md create mode 100644 .claude/skills/type-checking/references/basedpyright.md create mode 100644 .github/CODEOWNERS create mode 100644 .github/CODE_OF_CONDUCT.md create mode 100644 .github/ISSUE_TEMPLATE/bug_report.md.jinja create mode 100644 .github/ISSUE_TEMPLATE/config.yml.jinja rename template/.github/ISSUE_TEMPLATE/feature_request.md.jinja => .github/ISSUE_TEMPLATE/feature_request.md (100%) rename {docs => assets}/root-template-sync-map.yaml (68%) create mode 100644 cliff.toml create mode 100644 docs/commit-release-workflow.md create mode 100644 docs/repo_instructions.md create mode 100644 docs/repo_management.md create mode 100644 docs/root-template-sync-guide.md mode change 100644 => 100755 template/.claude/hooks/post-bash-test-coverage-reminder.sh mode change 100644 => 100755 template/.claude/hooks/post-write-test-structure.sh mode change 
100644 => 100755 template/.claude/hooks/pre-bash-branch-protection.sh mode change 100644 => 100755 template/.claude/hooks/pre-delete-protection.sh delete mode 100644 template/.claude/rules/common/code-review.md delete mode 100644 template/.claude/rules/common/development-workflow.md delete mode 100644 template/.claude/rules/jinja/coding-style.md delete mode 100644 template/.claude/rules/jinja/testing.md delete mode 100644 template/.claude/rules/python/coding-style.md.jinja delete mode 100644 template/.claude/rules/python/patterns.md delete mode 100644 template/.claude/rules/python/patterns.md.jinja delete mode 100644 template/.claude/rules/python/security.md delete mode 100644 template/.claude/skills/claude_commands/templates/command-template.md delete mode 100644 template/.claude/skills/claude_hooks/assets/templates/hook-template.py delete mode 100644 template/.claude/skills/claude_hooks/assets/templates/hook-template.sh delete mode 100644 template/.claude/skills/claude_hooks/assets/templates/settings-example.json delete mode 100644 template/.claude/skills/skill-maintainer/SKILL.md delete mode 100644 template/.github/PULL_REQUEST_TEMPLATE.md.jinja delete mode 100644 template/.github/renovate.json.jinja delete mode 100644 template/.github/workflows/dependency-review.yml.jinja delete mode 100644 template/.github/workflows/labeler.yml.jinja delete mode 100644 template/.github/workflows/pr-policy.yml.jinja delete mode 100644 template/.gitignore.jinja delete mode 100644 template/.pre-commit-config.yaml.jinja delete mode 100644 template/.vscode/extensions.json.jinja delete mode 100644 template/.vscode/launch.json.jinja delete mode 100644 template/.vscode/settings.json.jinja delete mode 100644 template/docs/github-repository-settings.md.jinja delete mode 100644 template/scripts/pr_commit_policy.py.jinja delete mode 100644 template/src/{{ package_name }}/common/file_manager.py.jinja delete mode 100644 template/tests/e2e/conftest.py.jinja delete mode 100644 
template/tests/integration/conftest.py.jinja

diff --git a/.claude/CLAUDE.md b/.claude/CLAUDE.md
deleted file mode 100644
index daf5e37..0000000
--- a/.claude/CLAUDE.md
+++ /dev/null
@@ -1,143 +0,0 @@
-# .claude/ — Claude Code Configuration (Meta-Repo)
-
-This directory contains all Claude Code configuration for **this Copier template repository**
-(the meta-repo). It is active when you are developing the template itself.
-
-> [!IMPORTANT]
-> **Dual-hierarchy:** This repo has two parallel `.claude/` trees:
-> - `.claude/` ← active when **developing this template** (you are here)
-> - `template/.claude/` ← rendered into every **generated project**
->
-> When adding or modifying hooks, commands, or rules, decide which tree (or both) the
-> change belongs in. See `rules/README.md` for the split rules.
-
-## Directory layout
-
-```
-.claude/
-├── CLAUDE.md         ← this file
-├── settings.json     ← hook registrations and permission allow/deny lists
-├── hooks/            ← shell hook scripts
-│   ├── README.md     ← full hook developer guide (events, matchers, exit codes, templates)
-│   └── *.sh          ← individual hook scripts
-├── commands/         ← slash command prompt files
-│   └── *.md          ← one file per slash command
-├── skills/           ← Agent skills (TDD workflow, test planner, test quality reviewer, …)
-└── rules/            ← AI coding rules
-    ├── README.md     ← rule developer guide (structure, priority, dual-hierarchy)
-    ├── common/       ← language-agnostic rules
-    ├── python/       ← Python-specific rules
-    ├── jinja/        ← Jinja2 rules (meta-repo only)
-    ├── bash/         ← Bash rules
-    ├── markdown/     ← Markdown placement and authoring rules
-    ├── yaml/         ← YAML conventions for copier.yml + workflows (meta-repo only)
-    └── copier/       ← Copier template conventions (meta-repo only)
-        └── template-conventions.md
-```
-
-## `settings.json` — permissions and hooks
-
-The `settings.json` file:
-- **`permissions.allow`** — Bash commands Claude is allowed to run without user approval
-  (`just:*`, `uv:*`, `copier:*`, standard git read commands, `python3:*`).
-- **`permissions.deny`** — Bash commands that are always blocked (`git push`, `uv publish`,
-  `git commit --no-verify`, `git push --force`).
-- **`hooks`** — lifecycle hook registrations (see hooks/ section below).
-
-## hooks/
-
-Shell scripts that integrate with Claude Code's lifecycle events. See `hooks/README.md`
-for the full developer guide.
-
-### Hooks registered in this tree
-
-| Script | Event | Matcher | Purpose |
-|---|---|---|---|
-| `session-start-bootstrap.sh` | SessionStart | * | Toolchain status + previous session snapshot |
-| `pre-bash-block-no-verify.sh` | PreToolUse | Bash | Block `--no-verify` in git commands |
-| `pre-bash-git-push-reminder.sh` | PreToolUse | Bash | Warn to run `just review` before push |
-| `pre-bash-commit-quality.sh` | PreToolUse | Bash | Scan staged `.py` files for secrets/debug markers |
-| `pre-bash-coverage-gate.sh` | PreToolUse | Bash | Warn before `git commit` if coverage below threshold |
-| `pre-config-protection.sh` | PreToolUse | Write\|Edit\|MultiEdit | Block weakening ruff/basedpyright config edits |
-| `pre-protect-uv-lock.sh` | PreToolUse | Write\|Edit | Block direct edits to `uv.lock` |
-| `pre-write-src-require-test.sh` | PreToolUse | Write\|Edit | Block if the matching test file does not exist (strict TDD) |
-| `pre-write-src-test-reminder.sh` | PreToolUse | Write\|Edit | Warn if test file missing for new source module (optional alternative to strict TDD) |
-| `pre-write-doc-file-warning.sh` | PreToolUse | Write | Block `.md` files written outside `docs/` |
-| `pre-write-jinja-syntax.sh` | PreToolUse | Write | Validate Jinja2 syntax before writing `.jinja` files |
-| `pre-suggest-compact.sh` | PreToolUse | Edit\|Write | Suggest `/compact` every ~50 operations |
-| `pre-compact-save-state.sh` | PreCompact | * | Snapshot git state before context compaction |
-| `post-edit-python.sh` | PostToolUse | Edit\|Write | ruff + basedpyright after every `.py` edit |
-| `post-edit-jinja.sh` | PostToolUse | Edit\|Write | Jinja2 syntax check after every `.jinja` edit |
-| `post-edit-markdown.sh` | PostToolUse | Edit | Warn if existing `.md` edited outside `docs/` |
-| `post-edit-copier-migration.sh` | PostToolUse | Edit\|Write | Migration checklist after `copier.yml` edits |
-| `post-edit-template-mirror.sh` | PostToolUse | Edit\|Write | Remind to mirror `template/.claude/` ↔ root |
-| `post-edit-refactor-test-guard.sh` | PostToolUse | Edit\|Write | Remind to run tests after several `src/` or `scripts/` edits |
-| `post-bash-pr-created.sh` | PostToolUse | Bash | Log PR URL after `gh pr create` |
-| `stop-session-end.sh` | Stop | * | Persist session state JSON |
-| `stop-evaluate-session.sh` | Stop | * | Extract reusable patterns from transcript |
-| `stop-cost-tracker.sh` | Stop | * | Accumulate session token costs |
-| `stop-desktop-notify.sh` | Stop | * | macOS desktop notification on completion |
-
-### Key differences vs `template/.claude/hooks/`
-
-The template hooks are a **subset** — generated projects do not need:
-- `SessionStart` hooks (no session state tracking)
-- `pre-write-doc-file-warning.sh` (no doc-file restriction)
-- `pre-write-jinja-syntax.sh` (no Jinja files in generated projects)
-- `pre-suggest-compact.sh`
-- `pre-compact-save-state.sh`
-- `post-edit-jinja.sh`
-- `post-edit-copier-migration.sh`
-- `post-edit-template-mirror.sh`
-- `post-bash-pr-created.sh`
-- All `stop-*` hooks
-
-## commands/
-
-One Markdown file per slash command. Claude Code reads these files when you invoke
-`/command-name`. The filename (without `.md`) is the slash command name.
-
-| Command | Purpose |
-|---|---|
-| `/review` | Full pre-merge checklist (lint + types + docstrings + coverage + symbol scan) |
-| `/coverage` | Run coverage, identify gaps, write missing tests |
-| `/docs-check` | Audit and repair Google-style docstrings |
-| `/standards` | Consolidated pass/fail report — "ready to merge?" gate |
-| `/update-claude-md` | Sync CLAUDE.md against pyproject.toml + justfile to prevent drift |
-| `/generate` | Render the template into `/tmp/test-output` and inspect it |
-| `/release` | Orchestrate a new release: CI → version bump → tag → push |
-| `/validate-release` | Verify release prerequisites (clean tree, CI passing, correct tag) |
-| `/ci` | Run `just ci` and report results |
-| `/test` | Run `just test` and summarise failures |
-| `/dependency-check` | Validate `uv.lock` is committed, in sync, and not stale |
-| `/tdd-red` | Validate RED phase: confirm a test fails for the right reason |
-| `/tdd-green` | Validate GREEN phase: confirm the test passes with no regressions |
-| `/ci-fix` | Autonomous CI fixer: diagnose failures, apply fixes, re-run until green |
-
-## rules/
-
-Plain Markdown files documenting coding standards. See `rules/README.md` for the
-full guide on rule priority, the dual-hierarchy, and how to write new rules.
-
-### Rule directories in this tree (meta-repo)
-
-| Directory | Scope |
-|---|---|
-| `common/` | Language-agnostic: coding-style, git-workflow, testing, security, development-workflow, code-review |
-| `python/` | Python: coding-style, testing, patterns, security, hooks |
-| `jinja/` | Jinja2: coding-style, testing — **meta-repo only** |
-| `bash/` | Bash: coding-style, security |
-| `markdown/` | Markdown placement and authoring conventions |
-| `yaml/` | YAML conventions for `copier.yml` and GitHub Actions — **meta-repo only** |
-| `copier/` | Copier template conventions — **meta-repo only** |
-
-Rules in `jinja/`, `yaml/`, and `copier/` are **not** propagated to generated projects.
-
-## Adding new hooks, commands, or rules
-
-1. **Hook** — write the script in `hooks/`, register it in `settings.json`, test with a
-   sample JSON payload. Mirror to `template/.claude/hooks/` if relevant for generated projects.
-2. **Command** — create a `.md` file in `commands/`. Mirror as `.md.jinja` in
-   `template/.claude/commands/` if the command is useful in generated projects.
-3. **Rule** — create a `.md` file in the appropriate `rules/` subdirectory. Mirror to
-   `template/.claude/rules/` if the rule applies to generated projects too.

diff --git a/.claude/commands/ci-fix.md b/.claude/commands/ci-fix.md
index 2744369..64a22e0 100644
--- a/.claude/commands/ci-fix.md
+++ b/.claude/commands/ci-fix.md
@@ -1,3 +1,10 @@
+---
+description: Diagnose and fix all CI failures autonomously — format, lint, docstrings, types, tests, and coverage. Use when the user asks to "fix CI", "make CI green", or resolve pipeline failures.
+allowed-tools: Read Write Edit Grep Glob Bash(just *) Bash(uv *) Bash(git diff:*) Bash(git status:*)
+disable-model-invocation: true
+context: fork
+---
+
 Diagnose and fix all CI failures in this project. You are an autonomous CI-fixing agent.
 
 ## Step 1 — Run CI and capture output

diff --git a/.claude/commands/ci.md b/.claude/commands/ci.md
index c33d4c8..4586e95 100644
--- a/.claude/commands/ci.md
+++ b/.claude/commands/ci.md
@@ -1,21 +1,19 @@
-Run the full local CI pipeline for this Copier template repository and report results.
+---
+description: Run the full local CI pipeline (fix, fmt, lint, type, test) and report results. Use when the user asks to "run CI", "check everything", or verify the project before committing.
+allowed-tools: Bash(just *)
+---
+
+Run the full local CI pipeline for this repository and report results.
 
 Execute `just ci` which runs in this order:
 
 1. `just fix` — auto-fix ruff lint issues
 2. `just fmt` — ruff formatting
-3. `just check` — read-only mirror of GitHub Actions, which runs:
-   - `uv sync --frozen --extra dev`
-   - `just fmt-check` — verify formatting (read-only)
-   - `ruff check .` — lint
-   - `basedpyright` — type check
-   - `just docs-check` — docstring coverage (`--select D`)
-   - `just test-ci` — pytest with coverage XML output
-   - `pre-commit run --all-files --verbose`
-   - `just audit` — pip-audit dependency security scan
+3. `just lint` — ruff lint check
+4. `just type` — basedpyright type check
+5. `just test` — pytest
 
 After running, summarize:
 
 - Which steps passed and which failed
 - Any lint errors or type errors with file + line number
 - Any failing test names and their assertion messages
-- Any security vulnerabilities found by pip-audit
 - Suggested fixes for any failures

diff --git a/.claude/commands/coverage.md b/.claude/commands/coverage.md
index d5e6bbe..d0e322b 100644
--- a/.claude/commands/coverage.md
+++ b/.claude/commands/coverage.md
@@ -1,43 +1,50 @@
+---
+description: Analyse test coverage and write tests to fill any gaps below the 85% threshold. Use when the user asks to "check coverage", "improve coverage", or "find uncovered code".
+allowed-tools: Read Write Edit Grep Glob Bash(just *) Bash(uv *)
+disable-model-invocation: true
+---
+
 Analyse test coverage and write tests for any gaps found.
 
 ## Steps
 
 1. **Run coverage** — execute `just coverage` to get the full report with missing lines:
 
    ```
-   uv run --active pytest --cov --cov-report=term-missing --cov-report=xml
+   uv run --active pytest tests/ --cov=my_library --cov-report=term-missing
    ```
 
 2. **Parse results** — from the output identify:
-   - Overall coverage percentage
-   - Every module below **80 %** (list module name + actual percentage + missing line ranges)
+   - Overall coverage percentage (target: ≥ **85 %** per `[tool.coverage.report] fail_under`)
+   - Every module below 85 % (list module name + actual percentage + missing line ranges)
 
 3. **Gap analysis** — for each under-covered module:
-   - Open the source file
+   - Open the source file at `src/my_library/`
    - Read the lines flagged as uncovered
   - Identify what scenarios, branches, or edge cases those lines represent
 
 4. **Write missing tests** — for each gap:
-   - Locate the appropriate test file in `tests/`
+   - Locate the appropriate test file under `tests/`
   - Write one or more test functions that exercise the uncovered lines
   - Follow the existing test style (arrange / act / assert, parametrize over similar cases)
   - Ensure new tests pass: run `just test` after writing them
 
-5. **Re-run coverage** — run `just coverage` again and confirm overall coverage has improved.
+5. **Re-run coverage** — run `just coverage` again and confirm overall coverage has improved
+   and no module is below the 85 % threshold.
 
 ## Report format
 
 ```
-## Coverage Report
+## Coverage Report — my_library
 
-Overall: X% (target: ≥ 80%)
+Overall: X% (target: ≥ 85%)
 
 ### Modules below threshold
 
 | Module | Coverage | Missing lines |
 |---------------|----------|----------------------|
-| foo.bar | 62% | 45-52, 78, 91-95 |
+| my_library.core | 72% | 45-52, 78 |
 
 ### Tests written
-- tests/foo/test_bar.py: added test_edge_case_x, test_branch_y
+- tests/.../test_core.py: added test_edge_case_x, test_branch_y
 
 ### After fixes
 Overall: Y%

diff --git a/.claude/commands/dependency-check.md b/.claude/commands/dependency-check.md
index 551660c..02cba51 100644
--- a/.claude/commands/dependency-check.md
+++ b/.claude/commands/dependency-check.md
@@ -1,3 +1,8 @@
+---
+description: Validate that uv.lock is in sync with pyproject.toml and committed. Use when the user asks to "check dependencies", "verify the lockfile", or before releasing.
+allowed-tools: Read Bash(test *) Bash(git ls-files:*) Bash(git diff:*) Bash(stat *) Bash(date *) Bash(uv sync:*)
+---
+
 Validate that `uv.lock` is in sync with `pyproject.toml` and committed.
 This command ensures dependencies are reproducible and locked, catching silent drift that

diff --git a/.claude/commands/docs-check.md b/.claude/commands/docs-check.md
index 96dd8cf..7ce1a02 100644
--- a/.claude/commands/docs-check.md
+++ b/.claude/commands/docs-check.md
@@ -1,11 +1,17 @@
+---
+description: Audit and repair Google-style docstrings across all Python source files. Use when the user asks to "check docs", "fix docstrings", or "audit documentation".
+allowed-tools: Read Write Edit Grep Glob Bash(uv run:*)
+disable-model-invocation: true
+---
+
 Audit and repair documentation across all Python source files.
 
 ## Steps
 
-1. **Run ruff docstring check** — `uv run --active ruff check --select D .`
+1. **Run ruff docstring check** — `uv run --active ruff check --select D src/ tests/ scripts/`
    Report every violation with file, line, and rule code.
 
-2. **Deep symbol scan** — for every `.py` file (including `tests/` and `scripts/`; ruff `D` applies there too):
+2. **Deep symbol scan** — for every `.py` file under `src/my_library/`, `tests/`, and `scripts/`:
    - Read the file
    - Identify all public symbols: module-level functions, classes, methods not prefixed with `_`
    - For each symbol check:
@@ -16,7 +22,7 @@ Audit and repair documentation across all Python source files.
         - Names match the actual parameter names exactly
      d. **Returns section** — present when the return type is not `None`
      e. **Raises section** — present when the function raises documented exceptions
Do not invent descriptions — if the purpose is unclear, write a minimal stub with a `TODO`. -5. **Verify** — re-run `uv run --active ruff check --select D .` and confirm zero violations. +5. **Verify** — re-run `uv run --active ruff check --select D src/ tests/ scripts/` and confirm zero violations. ## Output format ``` -## Documentation Audit +## Documentation Audit — my_library ### Ruff violations: N found [list violations] diff --git a/.claude/commands/generate.md b/.claude/commands/generate.md deleted file mode 100644 index be82a4c..0000000 --- a/.claude/commands/generate.md +++ /dev/null @@ -1,11 +0,0 @@ -Generate a test project from this Copier template into a temporary directory and inspect it. - -Steps: -1. Run: `copier copy . /tmp/test-output --trust --defaults` -2. List the files and directories created under `/tmp/test-output` -3. Show the contents of the generated `pyproject.toml` -4. Show the contents of the generated `justfile` (if present) -5. Check whether the generated project has a valid Python package structure under `src/` -6. Clean up: `rm -rf /tmp/test-output` - -Report any errors that occurred during generation and what they mean in the context of the template source files. diff --git a/.claude/commands/release.md b/.claude/commands/release.md index cc32f0f..7fbba13 100644 --- a/.claude/commands/release.md +++ b/.claude/commands/release.md @@ -1,11 +1,18 @@ +--- +description: Orchestrate a new release — verify CI, bump version, tag, and push to origin. Use when the user asks to "release", "cut a release", "ship a version", or "tag and publish". +argument-hint: [patch|minor|major|X.Y.Z] +allowed-tools: Read Edit Bash(git *) Bash(just ci:*) Bash(uv version:*) Bash(grep *) +disable-model-invocation: true +--- + Orchestrate a new release: verify CI, bump version, tag, and push. -This command automates the release workflow for this Copier template repository. +This command automates the release workflow for My Library. 
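The patch/minor/major arithmetic used in the release steps below can be sketched as a small helper (illustrative only; this repo bumps the version by editing `pyproject.toml` directly, and the helper assumes plain `X.Y.Z` versions with no pre-release suffix):

```shell
# Illustrative semver bump helper; assumes a bash shell for the herestring.
bump_version() {
    local version="$1" part="$2"
    local major minor patch
    IFS=. read -r major minor patch <<<"$version"
    case "$part" in
        major) echo "$((major + 1)).0.0" ;;
        minor) echo "$major.$((minor + 1)).0" ;;
        patch) echo "$major.$minor.$((patch + 1))" ;;
        *) return 1 ;;
    esac
}
```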
## Prerequisites - All changes must be committed (no dirty working tree) -- You must have push access to the origin remote +- You must have push access to origin (https://github.com/yourusername/my-library) - The main/master branch must be up to date with origin ## Steps @@ -20,64 +27,82 @@ This command automates the release workflow for this Copier template repository. ```bash just ci ``` - If any step fails (lint, type, test, pre-commit), fix the issue and re-run. - Do not proceed until `just ci` passes completely. + This runs: fix → fmt → lint → type → docs-check → test → pre-commit. + If any step fails, fix the issue and re-run. Do not proceed until all steps pass. 3. **Determine the version bump** Ask the user: "What type of bump? (patch/minor/major) or specify explicit version (X.Y.Z)?" - - `patch` — bug fixes, no new features (0.0.2 → 0.0.3) - - `minor` — new features, backwards compatible (0.0.2 → 0.1.0) - - `major` — breaking changes (0.0.2 → 1.0.0) - - Explicit version — e.g., "0.1.0-rc1" for pre-releases (use with caution) + - `patch` — bug fixes, no new features (0.1.0 → 0.1.1) + - `minor` — new features, backwards compatible (0.1.0 → 0.2.0) + - `major` — breaking changes (0.1.0 → 1.0.0) -4. **Bump the version** using the version bump script +4. **Bump the version** in `pyproject.toml` + ```bash + # Using sed or a text editor: + # Change the version line in [project] section from X.Y.Z to the new version + ``` + Or use a bump tool if one is configured: ```bash - NEW_VERSION=$(python scripts/bump_version.py --bump patch) - # OR - NEW_VERSION=$(python scripts/bump_version.py --new-version X.Y.Z) + # Option: uv version --version X.Y.Z (if your project uses uv version) ``` - Output the new version to the user. 5. **Verify the version was bumped correctly** ```bash - grep version pyproject.toml | head -1 + grep "^version" pyproject.toml + ``` + Confirm the version matches your intended bump. + +6. 
**Create a commit** for the version bump + ```bash + git add pyproject.toml + git commit -m "chore(release): bump version to X.Y.Z" ``` - Confirm the version line matches the new version. -6. **Create a git tag** with the new version +7. **Create a git tag** with the new version ```bash - git tag v${NEW_VERSION} + git tag vX.Y.Z ``` + Tags should follow PEP 440 with a `v` prefix (e.g., `v0.2.0`, `v1.0.0-rc1`). -7. **Push the tag** to origin (this triggers release.yml) +8. **Push the commit and tag** to origin ```bash - git push origin v${NEW_VERSION} + git push origin main # or master, depending on your default branch + git push origin vX.Y.Z ``` -8. **Monitor the release workflow** - - The tag push triggers `.github/workflows/release.yml` - - Wait for the workflow to complete (check GitHub Actions) - - Confirm that a GitHub Release was created with release notes +9. **Create a GitHub Release** (if not automated) + - Go to: https://github.com/yourusername/my-library/releases + - Click "Draft a new release" + - Select the tag you just pushed + - Add release notes (summary of changes, new features, breaking changes, etc.) 
+ - Publish the release ## Output Report to the user: ``` -✓ Release v<VERSION> created successfully +✓ Release vX.Y.Z created successfully -Next steps: -- GitHub Release: https://github.com/[org]/python_project_template/releases/tag/v<VERSION> -- Monitor workflow: https://github.com/[org]/python_project_template/actions +Release page: https://github.com/yourusername/my-library/releases/tag/vX.Y.Z -To update existing generated projects to this new template version: - copier update --trust # in an existing generated-project directory +Next steps: +- Check that the tag pushed successfully +- Consider updating your CHANGELOG.md if not auto-generated +- Notify users of the new release ``` ## Rollback (if something goes wrong) -If you need to undo a release before the workflow completes: +If you need to undo a release before publishing: ```bash -git tag -d v<VERSION> # delete local tag -git push origin :v<VERSION> # delete remote tag +git reset --soft HEAD~1 # undo the version commit, keep changes staged +git tag -d vX.Y.Z # delete local tag # Fix the issue, then run release again ``` + +If already pushed to origin: +```bash +git tag -d vX.Y.Z # delete local tag +git push origin :vX.Y.Z # delete remote tag (colon syntax or git push --delete) +git push origin +<previous-commit>:main # force-push main back (use with caution!) +``` diff --git a/.claude/commands/review.md b/.claude/commands/review.md index 67d824b..3d7ad14 100644 --- a/.claude/commands/review.md +++ b/.claude/commands/review.md @@ -1,3 +1,9 @@ +--- +description: Perform a thorough pre-merge code review of recently modified Python files — lint, types, docstrings, test coverage. Use when the user asks to "review", "code review", or "check my changes" before merging. +allowed-tools: Read Grep Glob Bash(just *) Bash(git diff:*) Bash(git log:*) +context: fork +--- + Perform a thorough pre-merge code review of all recently modified Python files. 
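The manual public-symbol scan this review command describes can be sketched with Python's `ast` module (an illustrative fragment, not part of the command; the real review additionally checks docstring sections, annotations, and `type: ignore` comments):

```python
import ast


def public_symbols(source: str) -> list[str]:
    """Return names of top-level public functions and classes in source code."""
    tree = ast.parse(source)
    names = []
    for node in tree.body:
        # Public means: a function, async function, or class not prefixed with "_".
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            if not node.name.startswith("_"):
                names.append(node.name)
    return names
```

One would feed each file reported by `git diff --name-only` through this to build the list of symbols to verify.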
## Steps @@ -8,7 +14,13 @@ Perform a thorough pre-merge code review of all recently modified Python files. - `just type` — basedpyright type check; report all errors with file + line - `just docs-check` — ruff `--select D` docstring check -2. **Manual symbol scan** — for every `.py` file that was added or modified (use `git diff --name-only`): + For detailed guidance on each tool, load the relevant skill: + - Linting: `.claude/skills/linting/SKILL.md` + - Type checking: `.claude/skills/type-checking/SKILL.md` + - Docstrings: `.claude/skills/python-docstrings/SKILL.md` + +2. **Manual symbol scan** — for every `.py` file under `src/my_library/` that was + added or modified (use `git diff --name-only`): - Read the file - List every public function, class, and method (names not prefixed with `_`) - For each public symbol verify: @@ -18,15 +30,10 @@ Perform a thorough pre-merge code review of all recently modified Python files. - No bare `type: ignore` without an explanatory inline comment 3. **Test coverage sanity** — for each new function or class added, check that a corresponding - test exists in `tests/`. If any new public symbol lacks a test, list it explicitly. - -4. **Copier template integrity** (if any `.jinja` files changed) — verify that the Jinja2 hook - did not surface syntax errors. If it did, list the affected templates. + test exists under `tests/`. If any new public symbol lacks a test, list it explicitly. ## Output format -Produce a concise report: - ``` ## Code Review Report diff --git a/.claude/commands/standards.md b/.claude/commands/standards.md index ff93318..9ecd5c7 100644 --- a/.claude/commands/standards.md +++ b/.claude/commands/standards.md @@ -1,3 +1,8 @@ +--- +description: Run the complete standards enforcement suite (lint, types, docstrings, coverage) and produce a consolidated "ready to merge" report. Use when the user asks "am I ready to merge?", "check standards", or "run all checks". 
+allowed-tools: Read Grep Glob Bash(just *) Bash(git diff:*) +--- + Run the complete standards enforcement suite and produce a consolidated report. This is the "am I ready to merge?" command. It runs all checks and aggregates results. @@ -5,21 +10,18 @@ This is the "am I ready to merge?" command. It runs all checks and aggregates re ## Checks to run (execute concurrently where possible) 1. **Static analysis** — `just lint` + `just type` - - ruff: all configured rules (E, F, I, UP, B, SIM, C4, RUF, D, C90, PERF) + - ruff: all configured rules (E, F, I, UP, B, SIM, C4, RUF, D, TCH, PGH, PT, ARG, C90, PERF) - basedpyright: `standard` mode type checking 2. **Docstring coverage** — `just docs-check` - - All public symbols have Google-style docstrings + - All public symbols in `src/my_library/` have Google-style docstrings - All modules have module-level docstrings 3. **Test coverage** — `just coverage` - - Report overall percentage and flag any module below 80 % + - Target: ≥ 85 % (enforced by `[tool.coverage.report] fail_under = 85`) + - Report any module below threshold with its percentage -4. **Copier template integrity** (if any `.jinja` files are present) - - Confirm no unresolved Copier conflict markers (`<<<<<<`, `>>>>>>`) - - Confirm no stray `.rej` sidecar files - -5. **Definition-of-done checklist** — for every function or class added/modified: +4. **Definition-of-done checklist** — for every function or class added/modified: - [ ] Code passes lint + type check - [ ] Google-style docstring present - [ ] All parameters and return type annotated @@ -29,7 +31,7 @@ This is the "am I ready to merge?" command. It runs all checks and aggregates re ## Output format ``` -## Standards Report — <YYYY-MM-DD> +## Standards Report — My Library — <YYYY-MM-DD> ### ✓/✗ Static Analysis (ruff + basedpyright) [errors or "All clean"] @@ -38,12 +40,9 @@ This is the "am I ready to merge?" command. 
It runs all checks and aggregates re [violations or "All public symbols documented"] ### ✓/✗ Test Coverage -[modules below 80% or "All modules ≥ 80%"] +[modules below 85% or "All modules ≥ 85%"] Overall: X% -### ✓/✗ Template Integrity -[issues or "No conflicts or .rej files"] - ### Definition-of-Done Status [any unchecked items or "All items complete"] diff --git a/.claude/commands/tdd-green.md b/.claude/commands/tdd-green.md index fa7b0ca..0014a66 100644 --- a/.claude/commands/tdd-green.md +++ b/.claude/commands/tdd-green.md @@ -1,3 +1,8 @@ +--- +description: Validate the GREEN phase of a TDD cycle — confirm the target test now passes and no regressions were introduced. Use when the user asks to "check GREEN", "verify GREEN phase", or after implementing code to make a failing test pass. +allowed-tools: Read Bash(just test:*) Bash(uv run pytest:*) Bash(echo *) +--- + Validate the GREEN phase of a TDD cycle: confirm that the previously-failing test now PASSES and no regressions were introduced. Run the full test suite with `just test`. diff --git a/.claude/commands/tdd-red.md b/.claude/commands/tdd-red.md index 0b55517..81af293 100644 --- a/.claude/commands/tdd-red.md +++ b/.claude/commands/tdd-red.md @@ -1,3 +1,9 @@ +--- +description: Validate the RED phase of a TDD cycle — confirm a specific test fails for the right reason before implementation. Use when the user asks to "check RED", "verify RED phase", or after writing a new failing test. +argument-hint: [test-file-or-name] +allowed-tools: Read Bash(just test:*) Bash(uv run pytest:*) +--- + Validate the RED phase of a TDD cycle: confirm that a specific test FAILS for the right reason. Run the test suite with `just test` (or the specific test file if the user provides one). diff --git a/.claude/commands/test.md b/.claude/commands/test.md index 939d40a..fdd7f0d 100644 --- a/.claude/commands/test.md +++ b/.claude/commands/test.md @@ -1,6 +1,11 @@ -Run the pytest test suite for this Copier template repository. 
+--- +description: Run the pytest test suite and report results with failure details. Use when the user asks to "run tests", "run the test suite", or "check if tests pass". +allowed-tools: Bash(just test:*) Bash(uv run pytest:*) +--- + +Run the pytest test suite for this project. Execute `just test` and then: - Report the total number of tests, how many passed, failed, or were skipped - For each failing test: show the test name, the file and line, and the full assertion error -- If all tests pass, confirm clearly and suggest whether any new template changes need corresponding test coverage +- If all tests pass, confirm clearly and note whether any recent code changes need new or updated tests diff --git a/.claude/commands/update-claude-md.md b/.claude/commands/update-claude-md.md index d924fb9..e7777a3 100644 --- a/.claude/commands/update-claude-md.md +++ b/.claude/commands/update-claude-md.md @@ -1,3 +1,9 @@ +--- +description: Detect and fix drift between CLAUDE.md and project config files (pyproject.toml, justfile, copier.yml). Use when the user asks to "sync CLAUDE.md", "update CLAUDE.md", or fix stale project documentation. +allowed-tools: Read Edit Grep Bash(date *) +disable-model-invocation: true +--- + Detect and fix drift between CLAUDE.md and the actual project configuration files. CLAUDE.md is the project's living standards contract. It must stay in sync with diff --git a/.claude/commands/validate-release.md b/.claude/commands/validate-release.md index 699fb60..f177e93 100644 --- a/.claude/commands/validate-release.md +++ b/.claude/commands/validate-release.md @@ -1,3 +1,10 @@ +--- +description: Simulate a release by rendering and testing the template against all feature combinations before shipping. Use when the user asks to "validate a release", "test the template", or before running /release. 
+allowed-tools: Read Bash(git status:*) Bash(just ci:*) Bash(copier *) Bash(grep *) Bash(rm -rf /tmp/test-*) +disable-model-invocation: true +context: fork +--- + Simulate a release by rendering and testing the template with all feature combinations. This command validates that a release won't break existing or new users by testing the template diff --git a/.claude/hooks/post-bash-pr-created.sh b/.claude/hooks/post-bash-pr-created.sh index daf3495..d9a87bf 100755 --- a/.claude/hooks/post-bash-pr-created.sh +++ b/.claude/hooks/post-bash-pr-created.sh @@ -14,13 +14,13 @@ set -euo pipefail INPUT=$(cat) -OUTPUT=$(python3 - <<'PYEOF' -import json, sys +OUTPUT=$(CLAUDE_HOOK_INPUT="$INPUT" python3 - <<'PYEOF' +import json, os -data = json.loads(sys.stdin.read()) +data = json.loads(os.environ["CLAUDE_HOOK_INPUT"]) print(data.get("tool_output", {}).get("output", "")) PYEOF -<<<"$INPUT") || exit 0 +) || exit 0 # Detect a GitHub PR URL in the command output PR_URL=$(echo "$OUTPUT" \ diff --git a/.claude/hooks/post-bash-test-coverage-reminder.sh b/.claude/hooks/post-bash-test-coverage-reminder.sh new file mode 100755 index 0000000..1d57a90 --- /dev/null +++ b/.claude/hooks/post-bash-test-coverage-reminder.sh @@ -0,0 +1,74 @@ +#!/usr/bin/env bash +# Claude PostToolUse hook — Bash +# After pytest / just test runs, parse coverage output and surface modules +# below the 85% project threshold. +# +# Only fires on commands that look like test or coverage runs (pytest, just +# test, just coverage, just ci). Reads the tool's stdout from tool_response and +# greps the coverage table; any src/ module under 85% is listed as a reminder +# so the developer can add tests before committing. +# +# Reference : Custom — project-specific hook, not derived from ECC. 
+# Exits : 0 always (PostToolUse hooks cannot block) + +set -uo pipefail + +INPUT=$(cat) + +# Extract the bash command that was executed +TOOL_INPUT=$(CLAUDE_HOOK_INPUT="$INPUT" python3 - <<'PYEOF' +import json, os +data = json.loads(os.environ["CLAUDE_HOOK_INPUT"]) +print(data.get("tool_input", {}).get("command", "")) +PYEOF +) || { echo "$INPUT"; exit 0; } + +# Only trigger on pytest / just test / just coverage / just ci commands +case "$TOOL_INPUT" in + *pytest*|*"just test"*|*"just coverage"*|*"just ci"*) ;; + *) echo "$INPUT"; exit 0 ;; +esac + +# Extract the tool's stdout from tool_response +TOOL_OUTPUT=$(CLAUDE_HOOK_INPUT="$INPUT" python3 - <<'PYEOF' +import json, os +data = json.loads(os.environ["CLAUDE_HOOK_INPUT"]) +resp = data.get("tool_response") or data.get("tool_result") or {} +print(resp.get("stdout", "") or resp.get("output", "")) +PYEOF +) || { echo "$INPUT"; exit 0; } + +# Look for coverage output in the tool response +if echo "$TOOL_OUTPUT" | grep -q "TOTAL"; then + LOW_COVERAGE=$(CLAUDE_HOOK_INPUT="$TOOL_OUTPUT" python3 - <<'PYEOF' +import os +lines = os.environ["CLAUDE_HOOK_INPUT"].strip().split("\n") +low = [] +for line in lines: + parts = line.split() + if len(parts) >= 4 and parts[0].startswith("src/"): + # With show_missing enabled the last field is a list of line numbers, + # so locate the percentage field instead of assuming it is last. + pcts = [p for p in parts if p.endswith("%")] + if not pcts: + continue + try: + pct = int(pcts[-1].rstrip("%")) + if pct < 85: + low.append(f"{parts[0]}: {pct}%") + except ValueError: + pass +if low: + print("\n".join(low)) +PYEOF +) + + if [[ -n "$LOW_COVERAGE" ]]; then + echo "┌─ Coverage reminder" + echo "│" + echo "│ Modules below 85% threshold:" + echo "$LOW_COVERAGE" | while IFS= read -r line; do + echo "│ ⚠ $line" + done + echo "│" + echo "└─ Consider writing tests to close gaps before committing" + fi +fi + +echo "$INPUT" +exit 0 diff --git a/.claude/hooks/post-edit-copier-migration.sh b/.claude/hooks/post-edit-copier-migration.sh index 48ab720..9d7d262 100755 --- a/.claude/hooks/post-edit-copier-migration.sh +++ b/.claude/hooks/post-edit-copier-migration.sh @@ -21,13 +21,13 @@ set -euo
pipefail INPUT=$(cat) -FILE_PATH=$(python3 - <<'PYEOF' -import json, sys +FILE_PATH=$(CLAUDE_HOOK_INPUT="$INPUT" python3 - <<'PYEOF' +import json, os -data = json.loads(sys.stdin.read()) +data = json.loads(os.environ["CLAUDE_HOOK_INPUT"]) print(data.get("tool_input", {}).get("file_path", "")) PYEOF -<<<"$INPUT") || exit 0 +) || exit 0 BASENAME=$(basename "$FILE_PATH") if [[ "$BASENAME" != "copier.yml" ]]; then diff --git a/.claude/hooks/post-edit-jinja.sh b/.claude/hooks/post-edit-jinja.sh index d1eeb41..995cc49 100755 --- a/.claude/hooks/post-edit-jinja.sh +++ b/.claude/hooks/post-edit-jinja.sh @@ -19,13 +19,13 @@ set -euo pipefail INPUT=$(cat) -FILE_PATH=$(python3 - <<'PYEOF' -import json, sys +FILE_PATH=$(CLAUDE_HOOK_INPUT="$INPUT" python3 - <<'PYEOF' +import json, os -data = json.loads(sys.stdin.read()) +data = json.loads(os.environ["CLAUDE_HOOK_INPUT"]) print(data.get("tool_input", {}).get("file_path", "")) PYEOF -<<<"$INPUT") +) if [[ "$FILE_PATH" != *.jinja ]] || [[ -z "$FILE_PATH" ]] || [[ ! 
-f "$FILE_PATH" ]]; then exit 0 diff --git a/.claude/hooks/post-edit-markdown.sh b/.claude/hooks/post-edit-markdown.sh index dcfb124..bb52109 100755 --- a/.claude/hooks/post-edit-markdown.sh +++ b/.claude/hooks/post-edit-markdown.sh @@ -24,13 +24,13 @@ set -euo pipefail INPUT=$(cat) # Extract file_path from tool_input (works for both Edit and Write tools) -FILE_PATH=$(python3 - <<'PYEOF' -import json, sys +FILE_PATH=$(CLAUDE_HOOK_INPUT="$INPUT" python3 - <<'PYEOF' +import json, os -data = json.loads(sys.stdin.read()) +data = json.loads(os.environ["CLAUDE_HOOK_INPUT"]) print(data.get("tool_input", {}).get("file_path", "")) PYEOF -<<<"$INPUT") +) # Only process .md files that exist if [[ "$FILE_PATH" != *.md ]] || [[ -z "$FILE_PATH" ]]; then diff --git a/.claude/hooks/post-edit-python.sh b/.claude/hooks/post-edit-python.sh index c09ec0e..159d72a 100755 --- a/.claude/hooks/post-edit-python.sh +++ b/.claude/hooks/post-edit-python.sh @@ -18,13 +18,13 @@ set -euo pipefail INPUT=$(cat) # Extract file_path from tool_input (works for both Edit and Write tools) -FILE_PATH=$(python3 - <<'PYEOF' -import json, sys +FILE_PATH=$(CLAUDE_HOOK_INPUT="$INPUT" python3 - <<'PYEOF' +import json, os -data = json.loads(sys.stdin.read()) +data = json.loads(os.environ["CLAUDE_HOOK_INPUT"]) print(data.get("tool_input", {}).get("file_path", "")) PYEOF -<<<"$INPUT") +) # Only process .py files that actually exist if [[ "$FILE_PATH" != *.py ]] || [[ -z "$FILE_PATH" ]] || [[ ! -f "$FILE_PATH" ]]; then diff --git a/.claude/hooks/post-edit-refactor-test-guard.sh b/.claude/hooks/post-edit-refactor-test-guard.sh old mode 100644 new mode 100755 index 8bdbc2b..11c81e3 --- a/.claude/hooks/post-edit-refactor-test-guard.sh +++ b/.claude/hooks/post-edit-refactor-test-guard.sh @@ -7,16 +7,17 @@ # more than 3 source edits have occurred since the last test run, it surfaces a # reminder. This prevents silent regressions during the REFACTOR stage of TDD. 
# -# Exits: 0 always (PostToolUse hooks cannot block) +# Reference : Custom — project-specific hook, not derived from ECC. +# Exits : 0 always (PostToolUse hooks cannot block) set -euo pipefail INPUT=$(cat) -FILE_PATH=$(printf '%s' "$INPUT" | python3 -c ' +FILE_PATH=$(CLAUDE_HOOK_INPUT="$INPUT" python3 -c ' -import json, sys +import json, os -data = json.load(sys.stdin) +data = json.loads(os.environ["CLAUDE_HOOK_INPUT"]) print(data.get("tool_input", {}).get("file_path", "")) ') || { echo "$INPUT" diff --git a/.claude/hooks/post-edit-template-mirror.sh b/.claude/hooks/post-edit-template-mirror.sh index f7178d7..ee3fb0d 100755 --- a/.claude/hooks/post-edit-template-mirror.sh +++ b/.claude/hooks/post-edit-template-mirror.sh @@ -25,13 +25,13 @@ set -euo pipefail INPUT=$(cat) -FILE_PATH=$(python3 - <<'PYEOF' -import json, sys +FILE_PATH=$(CLAUDE_HOOK_INPUT="$INPUT" python3 - <<'PYEOF' +import json, os -data = json.loads(sys.stdin.read()) +data = json.loads(os.environ["CLAUDE_HOOK_INPUT"]) print(data.get("tool_input", {}).get("file_path", "")) PYEOF -<<<"$INPUT") || exit 0 +) || exit 0 # Only fire for template/.claude/ edits if ! echo "$FILE_PATH" | grep -qE '(^|/)template/\.claude/'; then diff --git a/.claude/hooks/post-write-test-structure.sh b/.claude/hooks/post-write-test-structure.sh new file mode 100755 index 0000000..7c0bdd2 --- /dev/null +++ b/.claude/hooks/post-write-test-structure.sh @@ -0,0 +1,60 @@ +#!/usr/bin/env bash +# Claude PostToolUse hook — Write +# After creating test_*.py files, check for proper test structure: +# test_ functions, no unittest.TestCase, and module-level pytest markers. +# +# Only inspects files under tests/ or matching test_*.py. Surfaces warnings to +# Claude so it can self-correct in the same turn, but never blocks (PostToolUse +# cannot block). The three checks enforce: +# 1. At least one test_ function is defined in the file. +# 2. No unittest.TestCase classes (pytest function-based tests are preferred). +# 3. A pytestmark = pytest.mark.<marker> or @pytest.mark.<marker> is present.
+# +# Reference : Custom — project-specific hook, not derived from ECC. +# Exits : 0 always (PostToolUse hooks cannot block) + +set -uo pipefail + +INPUT=$(cat) + +FILE_PATH=$(CLAUDE_HOOK_INPUT="$INPUT" python3 - <<'PYEOF' +import json, os +data = json.loads(os.environ["CLAUDE_HOOK_INPUT"]) +print(data.get("tool_input", {}).get("file_path", "")) +PYEOF +) || { echo "$INPUT"; exit 0; } + +# Only check test files that actually exist +case "$FILE_PATH" in + */test_*.py|*tests/*.py) ;; + *) echo "$INPUT"; exit 0 ;; +esac + +[[ -f "$FILE_PATH" ]] || { echo "$INPUT"; exit 0; } + +WARNINGS="" + +# Check for test_ functions +if ! grep -q "^def test_\|^ def test_\|^async def test_" "$FILE_PATH"; then + WARNINGS="${WARNINGS}\n│ ⚠ No test_ functions found — file may not contain any tests" +fi + +# Check for unittest.TestCase (discouraged) +if grep -q "unittest.TestCase\|class.*TestCase" "$FILE_PATH"; then + WARNINGS="${WARNINGS}\n│ ⚠ unittest.TestCase detected — prefer pytest function-based tests" +fi + +# Check for pytest markers +if ! grep -q "pytestmark\|@pytest.mark\." 
"$FILE_PATH"; then + WARNINGS="${WARNINGS}\n│ ⚠ No pytest markers — set pytestmark = pytest.mark.<marker> at module level" +fi + +if [[ -n "$WARNINGS" ]]; then + echo "┌─ Test structure check: $(basename "$FILE_PATH")" + echo -e "$WARNINGS" + echo "│" + echo "└─ Fix these issues to maintain test quality standards" +fi + +echo "$INPUT" +exit 0 diff --git a/.claude/hooks/pre-bash-block-no-verify.sh b/.claude/hooks/pre-bash-block-no-verify.sh index 459ec1c..c6b53b3 100755 --- a/.claude/hooks/pre-bash-block-no-verify.sh +++ b/.claude/hooks/pre-bash-block-no-verify.sh @@ -15,13 +15,13 @@ set -uo pipefail INPUT=$(cat) -COMMAND=$(python3 - <<'PYEOF' -import json, sys +COMMAND=$(CLAUDE_HOOK_INPUT="$INPUT" python3 - <<'PYEOF' +import json, os -data = json.loads(sys.stdin.read()) +data = json.loads(os.environ["CLAUDE_HOOK_INPUT"]) print(data.get("tool_input", {}).get("command", "")) PYEOF -<<<"$INPUT") || { echo "$INPUT"; exit 0; } +) || { echo "$INPUT"; exit 0; } if echo "$COMMAND" | grep -qE -- '--no-verify'; then echo "┌─ BLOCKED: --no-verify detected" >&2 diff --git a/.claude/hooks/pre-bash-branch-protection.sh b/.claude/hooks/pre-bash-branch-protection.sh new file mode 100755 index 0000000..b2cbede --- /dev/null +++ b/.claude/hooks/pre-bash-branch-protection.sh @@ -0,0 +1,62 @@ +#!/usr/bin/env bash +# Claude PreToolUse hook — Bash +# Block git push to main/master branches; feature branch pushes are allowed. +# +# Two guards are applied: +# 1. Explicit targets — `git push <remote> main|master` is blocked. +# 2. Implicit current-branch pushes — if HEAD is main/master and the command +# is a bare `git push`, `git push origin`, or `git push -u origin`, the +# push is blocked so the protected branch is never advanced directly. +# +# Developers should create a feature branch and open a pull request instead. +# +# Reference : Custom — project-specific hook, not derived from ECC. 
+# Exits : 0 = allow | 2 = block + +set -uo pipefail + +INPUT=$(cat) + +COMMAND=$(CLAUDE_HOOK_INPUT="$INPUT" python3 - <<'PYEOF' +import json, os +data = json.loads(os.environ["CLAUDE_HOOK_INPUT"]) +print(data.get("tool_input", {}).get("command", "")) +PYEOF +) || { echo "$INPUT"; exit 0; } + +# Only check git push commands +case "$COMMAND" in + *"git push"*) ;; + *) echo "$INPUT"; exit 0 ;; +esac + +# Explicit push to main/master ([[:space:]] rather than \s, for BSD grep portability) +if echo "$COMMAND" | grep -qE "git push[[:space:]]+[^[:space:]]+[[:space:]]+(main|master)([[:space:]]|$)"; then + echo "┌─ BLOCKED: direct push to main/master" >&2 + echo "│" >&2 + echo "│ Pushing directly to a protected branch is not allowed." >&2 + echo "│ Use a feature branch and open a pull request instead." >&2 + echo "│" >&2 + echo "│ git checkout -b feat/my-feature" >&2 + echo "│ git push -u origin feat/my-feature" >&2 + echo "└─" >&2 + exit 2 +fi + +# Implicit push from main/master +CURRENT_BRANCH=$(git branch --show-current 2>/dev/null || echo "") +if [[ "$CURRENT_BRANCH" == "main" || "$CURRENT_BRANCH" == "master" ]]; then + if echo "$COMMAND" | grep -qE "^git push[[:space:]]*$|^git push[[:space:]]+origin[[:space:]]*$|^git push[[:space:]]+-u[[:space:]]+origin[[:space:]]*$"; then + echo "┌─ BLOCKED: HEAD is on protected branch '$CURRENT_BRANCH'" >&2 + echo "│" >&2 + echo "│ A bare 'git push' would advance the protected branch directly."
>&2 + echo "│ Create a feature branch first:" >&2 + echo "│" >&2 + echo "│ git checkout -b feat/my-feature" >&2 + echo "│ git push -u origin feat/my-feature" >&2 + echo "└─" >&2 + exit 2 + fi +fi + +echo "$INPUT" diff --git a/.claude/hooks/pre-bash-commit-quality.sh b/.claude/hooks/pre-bash-commit-quality.sh index 1fc9183..8ef6ed5 100755 --- a/.claude/hooks/pre-bash-commit-quality.sh +++ b/.claude/hooks/pre-bash-commit-quality.sh @@ -22,13 +22,13 @@ set -uo pipefail INPUT=$(cat) -COMMAND=$(python3 - <<'PYEOF' -import json, sys +COMMAND=$(CLAUDE_HOOK_INPUT="$INPUT" python3 - <<'PYEOF' +import json, os -data = json.loads(sys.stdin.read()) +data = json.loads(os.environ["CLAUDE_HOOK_INPUT"]) print(data.get("tool_input", {}).get("command", "")) PYEOF -<<<"$INPUT") || { echo "$INPUT"; exit 0; } +) || { echo "$INPUT"; exit 0; } # Only fire for git commit commands if ! echo "$COMMAND" | grep -qE '^\s*git\s+commit\b'; then diff --git a/.claude/hooks/pre-bash-coverage-gate.sh b/.claude/hooks/pre-bash-coverage-gate.sh old mode 100644 new mode 100755 index 25be5f7..6b9bd56 --- a/.claude/hooks/pre-bash-coverage-gate.sh +++ b/.claude/hooks/pre-bash-coverage-gate.sh @@ -7,19 +7,20 @@ # hard gate — this hook gives earlier feedback so the developer can add missing # tests before committing. # -# Exits: 0 always (non-blocking; warnings on stderr only) +# Reference : Custom — project-specific hook, not derived from ECC. +# Exits : 0 always (non-blocking; warnings on stderr only) set -uo pipefail INPUT=$(cat) -COMMAND=$(python3 - <<'PYEOF' -import json, sys +COMMAND=$(CLAUDE_HOOK_INPUT="$INPUT" python3 - <<'PYEOF' +import json, os -data = json.loads(sys.stdin.read()) +data = json.loads(os.environ["CLAUDE_HOOK_INPUT"]) print(data.get("tool_input", {}).get("command", "")) PYEOF -<<<"$INPUT") || { echo "$INPUT"; exit 0; } +) || { echo "$INPUT"; exit 0; } # Only fire for git commit commands. if ! 
echo "$COMMAND" | grep -qE '^\s*git\s+commit\b'; then diff --git a/.claude/hooks/pre-bash-git-push-reminder.sh b/.claude/hooks/pre-bash-git-push-reminder.sh index 4796fe1..7b3ce20 100755 --- a/.claude/hooks/pre-bash-git-push-reminder.sh +++ b/.claude/hooks/pre-bash-git-push-reminder.sh @@ -14,13 +14,13 @@ set -uo pipefail INPUT=$(cat) -COMMAND=$(python3 - <<'PYEOF' -import json, sys +COMMAND=$(CLAUDE_HOOK_INPUT="$INPUT" python3 - <<'PYEOF' +import json, os -data = json.loads(sys.stdin.read()) +data = json.loads(os.environ["CLAUDE_HOOK_INPUT"]) print(data.get("tool_input", {}).get("command", "")) PYEOF -<<<"$INPUT") || { echo "$INPUT"; exit 0; } +) || { echo "$INPUT"; exit 0; } if echo "$COMMAND" | grep -qE '^\s*git\s+push\b'; then echo "┌─ Reminder: git push must be done manually" >&2 diff --git a/.claude/hooks/pre-config-protection.sh b/.claude/hooks/pre-config-protection.sh index 3e5d0bc..b223106 100755 --- a/.claude/hooks/pre-config-protection.sh +++ b/.claude/hooks/pre-config-protection.sh @@ -25,13 +25,13 @@ set -uo pipefail INPUT=$(cat) # Extract file path first (used in error message for the BLOCK case) -FILE_PATH=$(python3 - <<'PYEOF' -import json, sys +FILE_PATH=$(CLAUDE_HOOK_INPUT="$INPUT" python3 - <<'PYEOF' +import json, os -data = json.loads(sys.stdin.read()) +data = json.loads(os.environ["CLAUDE_HOOK_INPUT"]) print(data.get("tool_input", {}).get("file_path", "")) PYEOF -<<<"$INPUT") || { echo "$INPUT"; exit 0; } +) || { echo "$INPUT"; exit 0; } BASENAME=$(basename "$FILE_PATH") @@ -42,10 +42,10 @@ case "$BASENAME" in esac # Run protection analysis; outputs: OK | SKIP | BLOCK:<reason> | WARN:<reason> -CHECK=$(python3 - <<'PYEOF' -import json, re, sys +CHECK=$(CLAUDE_HOOK_INPUT="$INPUT" python3 - <<'PYEOF' +import json, os, re, sys -data = json.loads(sys.stdin.read()) +data = json.loads(os.environ["CLAUDE_HOOK_INPUT"]) new_content = ( data.get("tool_input", {}).get("new_string", "") or data.get("tool_input", {}).get("content", "") @@ -83,7 +83,7 @@ if 
basename in ("pyproject.toml", "pyproject.toml.jinja"): print("OK") PYEOF -<<<"$INPUT") || { echo "$INPUT"; exit 0; } +) || { echo "$INPUT"; exit 0; } case "$CHECK" in BLOCK:*) diff --git a/.claude/hooks/pre-delete-protection.sh b/.claude/hooks/pre-delete-protection.sh new file mode 100755 index 0000000..6389da0 --- /dev/null +++ b/.claude/hooks/pre-delete-protection.sh @@ -0,0 +1,66 @@ +#!/usr/bin/env bash +# Claude PreToolUse hook — Bash +# Block `rm` (and recursive deletes) of files critical to the project +# infrastructure, and block `rm -rf` of the .claude/ directory. +# +# Protected files include pyproject.toml, justfile, CLAUDE.md, +# .pre-commit-config.yaml, .copier-answers.yml, uv.lock, and +# .claude/settings.json. These files are essential for builds, quality gates, +# and Claude Code configuration; deleting them through Claude is never the +# right move. If a real removal is needed, the user should do it manually. +# +# Reference : Custom — project-specific hook, not derived from ECC. +# Exits : 0 = allow | 2 = block + +set -uo pipefail + +INPUT=$(cat) + +COMMAND=$(CLAUDE_HOOK_INPUT="$INPUT" python3 - <<'PYEOF' +import json, os +data = json.loads(os.environ["CLAUDE_HOOK_INPUT"]) +print(data.get("tool_input", {}).get("command", "")) +PYEOF +) || { echo "$INPUT"; exit 0; } + +# Only check rm commands +case "$COMMAND" in + *rm*) ;; + *) echo "$INPUT"; exit 0 ;; +esac + +# Protected files — never delete these via Claude +PROTECTED_FILES=( + "pyproject.toml" + "justfile" + "CLAUDE.md" + ".pre-commit-config.yaml" + ".copier-answers.yml" + "uv.lock" + ".claude/settings.json" +) + +for protected in "${PROTECTED_FILES[@]}"; do + if echo "$COMMAND" | grep -q -- "$protected"; then + echo "┌─ BLOCKED: delete protection" >&2 + echo "│" >&2 + echo "│ Cannot delete critical file: $protected" >&2 + echo "│ These files are essential to the project infrastructure." >&2 + echo "│" >&2 + echo "└─ If you must remove this file, do it manually outside Claude." 
>&2
+ exit 2
+ fi
+done
+
+# Block rm -rf on the .claude/ directory itself
+if echo "$COMMAND" | grep -qE "rm\s+(-[a-zA-Z]*r[a-zA-Z]*\s+|)\.claude(/|$|\s)"; then
+ echo "┌─ BLOCKED: delete protection" >&2
+ echo "│" >&2
+ echo "│ Cannot recursively delete the .claude/ directory." >&2
+ echo "│ It contains skills, hooks, and settings that drive Claude Code." >&2
+ echo "│" >&2
+ echo "└─ Delete specific files within .claude/ instead, one at a time." >&2
+ exit 2
+fi
+
+echo "$INPUT"
diff --git a/.claude/hooks/pre-protect-readonly-files.sh b/.claude/hooks/pre-protect-readonly-files.sh
index c6ff85a..98e025f 100755
--- a/.claude/hooks/pre-protect-readonly-files.sh
+++ b/.claude/hooks/pre-protect-readonly-files.sh
@@ -1,8 +1,8 @@
#!/usr/bin/env bash
# Claude PreToolUse hook — Write|Edit|MultiEdit
-# Block direct edits to files listed in assets/protected_files.csv.
+# Block direct edits to files listed in .claude/hooks/protected_files.csv.
#
-# CSV format (assets/protected_files.csv):
+# CSV format (.claude/hooks/protected_files.csv):
# Column 1 (filepath) — repo-relative path; hook matches on the basename
# Column 4 (ai_can_modify) — "never" triggers the block
# Column 12 (reason) — shown in the block message
@@ -10,11 +10,13 @@
# Rows starting with # are comments and are skipped.
# The header row (filepath,...) is also skipped.
#
-# To protect a new file: add a row to assets/protected_files.csv.
+# To protect a new file: add a row to .claude/hooks/protected_files.csv.
# To unprotect a file: remove its row from that CSV.
# Never edit this hook script to manage the list.
#
-# Exits: 0 = allow | 2 = block
+#
+# Reference : Custom — project-specific hook, not derived from ECC.
+# Exits : 0 = allow | 2 = block (protected file)

set -uo pipefail

@@ -22,14 +25,15 @@ INPUT=$(cat)

-# Parse file_path from the JSON payload; pipe INPUT so heredoc/stdin don't conflict
-FILE_PATH=$(printf '%s' "$INPUT" | python3 -c "
-import json, sys
-data = json.loads(sys.stdin.read())
-print(data.get('tool_input', {}).get('file_path', ''))
-") || { printf '%s' "$INPUT"; exit 0; }
+# Parse file_path from the JSON payload; pass INPUT via env var so the heredoc owns stdin
+FILE_PATH=$(CLAUDE_HOOK_INPUT="$INPUT" python3 - <<'PYEOF'
+import json, os
+data = json.loads(os.environ["CLAUDE_HOOK_INPUT"])
+print(data.get("tool_input", {}).get("file_path", ""))
+PYEOF
+) || { printf '%s' "$INPUT"; exit 0; }

BASENAME=$(basename "$FILE_PATH")
REPO_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
-CSV="${REPO_ROOT}/assets/protected_files.csv"
+CSV="${REPO_ROOT}/.claude/hooks/protected_files.csv"

if [[ ! -f "$CSV" ]]; then
# CSV missing — fail open so development is not blocked
@@ -77,7 +80,7 @@ if [[ "$RESULT" == BLOCK:* ]]; then
printf '│ and commit via the normal review process.\n' >&2
printf '│\n' >&2
printf '│ To change the protected-files list, edit (with human review):\n' >&2
- printf '│ assets/protected_files.csv\n' >&2
+ printf '│ .claude/hooks/protected_files.csv\n' >&2
printf '└─\n' >&2
exit 2
fi
diff --git a/.claude/hooks/pre-protect-uv-lock.sh b/.claude/hooks/pre-protect-uv-lock.sh
index 5d49f64..fddd948 100755
--- a/.claude/hooks/pre-protect-uv-lock.sh
+++ b/.claude/hooks/pre-protect-uv-lock.sh
@@ -21,13 +21,13 @@
set -uo pipefail

INPUT=$(cat)

-FILE_PATH=$(python3 - <<'PYEOF'
-import json, sys
+FILE_PATH=$(CLAUDE_HOOK_INPUT="$INPUT" python3 - <<'PYEOF'
+import json, os

-data = json.loads(sys.stdin.read())
+data = json.loads(os.environ["CLAUDE_HOOK_INPUT"])
print(data.get("tool_input", {}).get("file_path", ""))
PYEOF
-<<<"$INPUT") || { echo "$INPUT"; exit 0; }
+) || { echo "$INPUT"; exit 0; }

BASENAME=$(basename "$FILE_PATH")
diff --git a/.claude/hooks/pre-write-doc-file-warning.sh b/.claude/hooks/pre-write-doc-file-warning.sh
index d1d2d4d..997663b 100755
--- a/.claude/hooks/pre-write-doc-file-warning.sh
+++ b/.claude/hooks/pre-write-doc-file-warning.sh
@@ -22,13 +22,13 @@
set -uo pipefail

INPUT=$(cat)
-FILE_PATH=$(python3 - <<'PYEOF' -import json, sys +FILE_PATH=$(CLAUDE_HOOK_INPUT="$INPUT" python3 - <<'PYEOF' +import json, os -data = json.loads(sys.stdin.read()) +data = json.loads(os.environ["CLAUDE_HOOK_INPUT"]) print(data.get("tool_input", {}).get("file_path", "")) PYEOF -<<<"$INPUT") || { echo "$INPUT"; exit 0; } +) || { echo "$INPUT"; exit 0; } # Only process .md files if [[ "$FILE_PATH" != *.md ]] || [[ -z "$FILE_PATH" ]]; then diff --git a/.claude/hooks/pre-write-jinja-syntax.sh b/.claude/hooks/pre-write-jinja-syntax.sh index d2b62aa..b3ade48 100755 --- a/.claude/hooks/pre-write-jinja-syntax.sh +++ b/.claude/hooks/pre-write-jinja-syntax.sh @@ -23,13 +23,13 @@ set -uo pipefail INPUT=$(cat) -FILE_PATH=$(python3 - <<'PYEOF' -import json, sys +FILE_PATH=$(CLAUDE_HOOK_INPUT="$INPUT" python3 - <<'PYEOF' +import json, os -data = json.loads(sys.stdin.read()) +data = json.loads(os.environ["CLAUDE_HOOK_INPUT"]) print(data.get("tool_input", {}).get("file_path", "")) PYEOF -<<<"$INPUT") || { echo "$INPUT"; exit 0; } +) || { echo "$INPUT"; exit 0; } # Only process .jinja files if [[ "$FILE_PATH" != *.jinja ]] || [[ -z "$FILE_PATH" ]]; then @@ -38,13 +38,13 @@ if [[ "$FILE_PATH" != *.jinja ]] || [[ -z "$FILE_PATH" ]]; then fi # Extract the file content from tool_input.content -CONTENT=$(python3 - <<'PYEOF' -import json, sys +CONTENT=$(CLAUDE_HOOK_INPUT="$INPUT" python3 - <<'PYEOF' +import json, os -data = json.loads(sys.stdin.read()) +data = json.loads(os.environ["CLAUDE_HOOK_INPUT"]) print(data.get("tool_input", {}).get("content", "")) PYEOF -<<<"$INPUT") || { echo "$INPUT"; exit 0; } +) || { echo "$INPUT"; exit 0; } if [[ -z "$CONTENT" ]]; then echo "$INPUT" @@ -52,10 +52,10 @@ if [[ -z "$CONTENT" ]]; then fi # Validate the Jinja2 syntax with the same extensions Copier uses -RESULT=$(python3 - <<'PYEOF' -import sys +RESULT=$(JINJA_CONTENT="$CONTENT" python3 - <<'PYEOF' +import os, sys -content = sys.argv[1] if len(sys.argv) > 1 else "" +content = 
os.environ.get("JINJA_CONTENT", "") try: from jinja2 import Environment, TemplateSyntaxError @@ -76,7 +76,7 @@ except TemplateSyntaxError as exc: except ImportError as exc: print(f"SKIP:Jinja2 not importable: {exc}") PYEOF -"$CONTENT") || { echo "$INPUT"; exit 0; } +) || { echo "$INPUT"; exit 0; } case "$RESULT" in OK) diff --git a/.claude/hooks/pre-write-src-require-test.sh b/.claude/hooks/pre-write-src-require-test.sh index 3c3531d..29d5ade 100755 --- a/.claude/hooks/pre-write-src-require-test.sh +++ b/.claude/hooks/pre-write-src-require-test.sh @@ -10,7 +10,8 @@ # between src/ and the file), excluding __init__.py. Nested layouts such as # src/<pkg>/common/foo.py are skipped. # -# Exits: 0 = allow | 2 = block (test file missing) +# Reference : Custom — project-specific hook, not derived from ECC. +# Exits : 0 = allow | 2 = block (test file missing) set -uo pipefail @@ -19,7 +20,7 @@ INPUT=$(cat) FILE_PATH=$(printf '%s' "$INPUT" | python3 -c ' import json, sys -data = json.load(sys.stdin) +data = json.loads(sys.stdin.read()) print(data.get("tool_input", {}).get("file_path", "")) ') || { echo "$INPUT" diff --git a/.claude/hooks/pre-write-src-test-reminder.sh b/.claude/hooks/pre-write-src-test-reminder.sh index 81a044a..43f8f37 100755 --- a/.claude/hooks/pre-write-src-test-reminder.sh +++ b/.claude/hooks/pre-write-src-test-reminder.sh @@ -18,7 +18,7 @@ INPUT=$(cat) FILE_PATH=$(printf '%s' "$INPUT" | python3 -c ' import json, sys -data = json.load(sys.stdin) +data = json.loads(sys.stdin.read()) print(data.get("tool_input", {}).get("file_path", "")) ') || { echo "$INPUT" diff --git a/assets/protected_files.csv b/.claude/hooks/protected_files.csv similarity index 100% rename from assets/protected_files.csv rename to .claude/hooks/protected_files.csv diff --git a/.claude/hooks/stop-cost-tracker.sh b/.claude/hooks/stop-cost-tracker.sh index c798435..19e9737 100755 --- a/.claude/hooks/stop-cost-tracker.sh +++ b/.claude/hooks/stop-cost-tracker.sh @@ -25,7 +25,7 @@ import 
json, os, sys, datetime
from pathlib import Path

try:
- data = json.loads(sys.stdin.read())
+ data = json.loads(os.environ.get("CLAUDE_HOOK_INPUT", ""))
except (json.JSONDecodeError, ValueError):
sys.exit(0)

diff --git a/.claude/hooks/stop-desktop-notify.sh b/.claude/hooks/stop-desktop-notify.sh
index 0378dd3..b6db8be 100755
--- a/.claude/hooks/stop-desktop-notify.sh
+++ b/.claude/hooks/stop-desktop-notify.sh
@@ -30,20 +30,20 @@
PROJECT=$(basename "$PWD")
MESSAGE="Task complete in $PROJECT"

# Try to extract the last assistant message from the transcript for a richer notification
-TRANSCRIPT_PATH=$(python3 - <<'PYEOF'
-import json, sys
+TRANSCRIPT_PATH=$(CLAUDE_HOOK_INPUT="$INPUT" python3 - <<'PYEOF'
+import json, os

try:
- data = json.loads(sys.stdin.read())
+ data = json.loads(os.environ["CLAUDE_HOOK_INPUT"])
print(data.get("transcript_path", ""))
except Exception:
print("")
PYEOF
-<<<"$INPUT") || true
+) || true

if [[ -n "$TRANSCRIPT_PATH" ]] && [[ -f "$TRANSCRIPT_PATH" ]]; then
LAST_MSG=$(python3 - "$TRANSCRIPT_PATH" <<'PYEOF'
import json, sys
from pathlib import Path

transcript_file = sys.argv[1]
diff --git a/.claude/hooks/stop-evaluate-session.sh b/.claude/hooks/stop-evaluate-session.sh
index 1f75899..51df591 100755
--- a/.claude/hooks/stop-evaluate-session.sh
+++ b/.claude/hooks/stop-evaluate-session.sh
@@ -27,7 +27,7 @@
from pathlib import Path
from collections import Counter

try:
- data = json.loads(sys.stdin.read())
+ data = json.loads(os.environ.get("CLAUDE_HOOK_INPUT", ""))
except (json.JSONDecodeError, ValueError):
sys.exit(0)

diff --git a/.claude/hooks/stop-session-end.sh b/.claude/hooks/stop-session-end.sh
index 470c183..e529c42 100755
--- a/.claude/hooks/stop-session-end.sh
+++ b/.claude/hooks/stop-session-end.sh
@@ -29,7 +29,7 @@
import json, os, sys, datetime, subprocess
from pathlib import Path

try:
- data = json.loads(sys.stdin.read())
+ data = json.loads(os.environ.get("CLAUDE_HOOK_INPUT", ""))
except (json.JSONDecodeError, ValueError):
sys.exit(0)
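Every hook migrated above follows the same extraction contract: capture stdin once, hand the payload to Python through the `CLAUDE_HOOK_INPUT` environment variable, and fall back to pass-through on any failure. A standalone sketch of that pattern (the JSON payload here is a made-up stand-in, not a real Claude Code event):

```shell
#!/usr/bin/env bash
set -uo pipefail

# Stand-in for the JSON that Claude Code would deliver on stdin
INPUT='{"tool_input":{"command":"git push origin main"}}'

# Env var instead of a here-string: the `python3 -` heredoc already
# consumes stdin, so feeding the JSON via <<<"$INPUT" would conflict.
COMMAND=$(CLAUDE_HOOK_INPUT="$INPUT" python3 - <<'PYEOF'
import json, os

data = json.loads(os.environ["CLAUDE_HOOK_INPUT"])
print(data.get("tool_input", {}).get("command", ""))
PYEOF
) || { echo "$INPUT"; exit 0; }

echo "$COMMAND"
```

The `|| { echo "$INPUT"; exit 0; }` guard keeps a malformed payload from ever blocking a PreToolUse call, which is why these hooks fail open rather than crashing.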
diff --git a/.claude/rules/README.md b/.claude/rules/README.md index 82728a2..26594f3 100644 --- a/.claude/rules/README.md +++ b/.claude/rules/README.md @@ -1,108 +1,116 @@ # AI Rules — Developer Guide This directory contains the rules that inform any AI assistant (Claude Code, Cursor, etc.) -working on this codebase. Rules are plain Markdown files — no tool-specific frontmatter or -format is required. Any AI that can read context from a directory will benefit from them. +working in this project. Rules are plain Markdown files — readable by any tool without +conversion or special configuration. -## Structure +The format, scoping, and organisation conventions below are the canonical reference; they +match the `maintain-rules` skill. When adding or editing rules, follow this document. + +## Philosophy — rules vs skills + +Rules and skills serve strict, non-overlapping roles: + +- **Rules** (this directory) are **short, always-on, non-negotiable**. They hold hard + constraints ("never do X", "always use Y"), project invariants, and guardrails. Target + **5–7 lines per file**. If it's longer, it belongs in a skill. +- **Skills** (`.claude/skills/`) are **rich and invoked when relevant**. They hold + patterns, examples, step-by-step how-tos, templates, and edge cases. + +Before writing a new rule, check whether a skill already covers the content. If yes, +delete or shrink the rule — do not duplicate. 
-Rules are organised into a **common** layer plus **language/tool-specific** directories: +## Structure ``` .claude/rules/ ├── README.md ← you are here -├── common/ # Universal principles — apply to all code in this repo +├── common/ # Universal hard constraints — apply to all code │ ├── coding-style.md │ ├── git-workflow.md │ ├── testing.md │ ├── security.md -│ ├── development-workflow.md -│ └── code-review.md -├── python/ # Python-specific (extends common/) -│ ├── coding-style.md -│ ├── testing.md -│ ├── patterns.md -│ ├── security.md │ └── hooks.md -├── jinja/ # Jinja2 template-specific -│ ├── coding-style.md -│ └── testing.md +├── python/ # Python-specific (extends common/) ├── bash/ # Shell script-specific -│ ├── coding-style.md -│ └── security.md +├── yaml/ # YAML authoring conventions ├── markdown/ # Markdown authoring conventions -│ └── conventions.md -├── yaml/ # YAML file conventions (copier.yml, workflows, etc.) -│ └── conventions.md -└── copier/ # Copier template-specific rules (this repo only) - └── template-conventions.md +└── copier/ # Copier template-repo conventions ``` +Detailed how-to content that previously lived here has moved to skills: + +| Former rule | Now lives in | +|---|---| +| `common/development-workflow.md` | `skills/sdlc-workflow/`, `skills/tdd-workflow/` | +| `common/code-review.md` | `skills/python-code-reviewer/` | +| `python/security.md` | `skills/security/` | +| `python/patterns.md` | `skills/python-code-quality/` | +| `jinja/coding-style.md`, `jinja/testing.md` | `skills/jinja-guide/` | + ## Rule priority -When language-specific rules and common rules conflict, **language-specific rules take -precedence** (specific overrides general). This mirrors CSS specificity and `.gitignore` -precedence. +Language-specific rules override common rules where they conflict. -- `common/` defines universal defaults. -- Language directories (`python/`, `jinja/`, `bash/`, …) override those defaults where - language idioms differ. 
+## Standard rule format -## Dual-hierarchy reminder +Every rule file follows one of two shapes. -This Copier meta-repo has **two parallel rule trees**: +### Unconditional rule — `common/*.md` -``` -.claude/rules/ ← active when DEVELOPING this template repo -template/.claude/rules/ ← rendered into every GENERATED project +Loads on every session. No frontmatter. + +```markdown +# [Topic Name] + +- Concrete, verifiable instruction. +- Another specific instruction. ``` -When you add or modify a rule: -- Changes to `template/.claude/rules/` affect every project generated from this - template going forward. -- Changes to the root `.claude/rules/` affect only this meta-repo. -- Many rules belong in **both** trees (e.g. Python coding style, security). -- Copier-specific rules (`copier/`) belong only in the root tree. -- Jinja rules belong only in the root tree (generated projects do not contain Jinja files). +### Path-scoped rule — language/topic directories -## How to write a new rule +Loads only when Claude touches files matching the globs. Uses YAML frontmatter with a +`paths:` key. -1. **Choose the right directory** — `common/` for language-agnostic principles, - a language directory for language-specific ones. Create a new directory if a - language or domain does not exist yet. +```markdown +--- +paths: + - "**/*.py" + - "**/*.pyi" +--- -2. **File name** — use lowercase kebab-case matching the topic: `coding-style.md`, - `testing.md`, `patterns.md`, `security.md`, `hooks.md`, `performance.md`. +# [Topic Name] -3. **Opening line** — if the file extends a common counterpart, start with: - ``` - > This file extends [common/xxx.md](../common/xxx.md) with <Language> specific content. - ``` +- Rule 1 +- Rule 2 +``` -4. **Content guidelines**: - - State rules as actionable imperatives ("Always …", "Never …", "Prefer …"). - - Use concrete code examples (correct and incorrect) wherever possible. 
- - Keep each file under 150 lines; split into multiple files if a topic grows larger. - - Do not repeat content already covered in the common layer — cross-reference instead. - - Avoid tool-specific configuration syntax in rule prose; describe intent, not config. +Notes: +- `paths:` values are glob patterns, always double-quoted. +- Do **not** use the legacy `# applies-to:` comment syntax. Always use YAML frontmatter. -5. **File patterns annotation** (optional but helpful) — add a YAML comment block at the - top listing which glob patterns the rule applies to. AI tools that understand frontmatter - can use this; tools that do not will simply skip the comment: - ```yaml - # applies-to: **/*.py, **/*.pyi - ``` +## Authoring rules — checklist -6. **Mirror to `template/.claude/rules/`** if the rule is relevant to generated projects. +Before committing a rule, verify: -7. **Update this README** when adding a new language directory or a new top-level file. +1. **Location** — `common/` for language-agnostic, language directory otherwise. +2. **Filename** — lowercase kebab-case (`coding-style.md`, `testing.md`). +3. **Frontmatter** — present with `paths:` for path-scoped; absent for unconditional. +4. **Title** — single `#` heading matching the topic. +5. **Content** — hard constraints only; imperatives; concrete. +6. **Size** — **5–7 lines per file** (excluding frontmatter and title). Longer content goes in a skill. +7. **No duplication** — if a skill already covers it, remove the rule. +8. **README** — update this file when adding a new directory. 
## How AI tools consume these rules | Tool | Mechanism | |------|-----------| -| Claude Code | Reads `CLAUDE.md` (project root), then any file you reference or load via slash commands | +| Claude Code | Loads `CLAUDE.md` plus every `.claude/rules/**/*.md`; path-scoped files activate when matching files are touched | | Cursor | Reads `.cursor/rules/*.mdc`; symlink or copy relevant rules there if desired | -| Generic LLM | Pass rule file contents in system prompt or context window | +| Generic LLM | Pass rule file contents in the system prompt or context window | + +## Related skill -Because rules are plain Markdown, they are readable by any tool without conversion. +The `maintain-rules` skill (at `.claude/skills/maintain-rules/`) contains the full +reference for authoring, auditing, and organising these files. diff --git a/.claude/rules/bash/coding-style.md b/.claude/rules/bash/coding-style.md index 843c848..889dc9c 100644 --- a/.claude/rules/bash/coding-style.md +++ b/.claude/rules/bash/coding-style.md @@ -1,135 +1,13 @@ -# Bash / Shell Coding Style - -# applies-to: **/*.sh - -Shell scripts in this repository are hook scripts under `.claude/hooks/` and helper -scripts under `scripts/`. These rules apply to all `.sh` files. - -## Shebang and strict mode - -Every script must start with the appropriate shebang and enable strict mode: - -```bash -#!/usr/bin/env bash -# One-line description of what this script does. -``` - -For PostToolUse / Stop / SessionStart hooks (non-blocking): - -```bash -set -euo pipefail -``` - -For PreToolUse blocking hooks (must not exit non-zero accidentally): - -```bash -set -uo pipefail # intentionally NOT -e; we handle exit codes manually -``` - -`-e` (exit on error), `-u` (error on unset variable), `-o pipefail` (pipe failures propagate). -Document clearly when `-e` is omitted and why. 
- -## Variable quoting - -Always quote variable expansions unless you intentionally want word splitting: - -```bash -# Correct -file_path="$1" -if [[ -f "$file_path" ]]; then ... - -# Wrong — breaks on paths with spaces -if [[ -f $file_path ]]; then ... -``` - -Use `[[ ]]` (bash conditionals) instead of `[ ]` (POSIX test). `[[ ]]` handles -spaces in variables without quoting issues. - -## Reading stdin (hook scripts) - -Hook scripts receive JSON on stdin. Always capture it immediately and parse with Python: - -```bash -INPUT=$(cat) - -FILE_PATH=$(python3 - <<'PYEOF' -import json, sys -data = json.loads(sys.stdin.read()) -print(data.get("tool_input", {}).get("file_path", "")) -PYEOF -<<<"$INPUT") || { echo "$INPUT"; exit 0; } -``` - -The `|| { echo "$INPUT"; exit 0; }` guard ensures that a malformed JSON payload never -accidentally blocks a PreToolUse hook. - -## Output formatting - -Use box-drawing characters for structured output (consistent with all other hooks): - -```bash -echo "┌─ Hook name: $context" -echo "│" -echo "│ Informational content" -echo "└─ ✓ Done" # or └─ ✗ Fix before committing -``` - -- PostToolUse / Stop / SessionStart hooks: print to **stdout**. -- PreToolUse blocking messages: print to **stderr** (shown to the user on block). - -## Exit codes - -| Script type | Exit 0 | Exit 2 | -|-------------|--------|--------| -| PreToolUse | Allow tool to proceed | Block tool call | -| PostToolUse | Normal completion | Not meaningful — avoid | -| Stop / SessionStart | Normal completion | Not meaningful — avoid | - -Only PreToolUse hooks should ever exit 2. All other hooks must exit 0. 
- -When a PreToolUse hook allows the call to proceed, echo `$INPUT` back to stdout (required): - -```bash -echo "$INPUT" # pass-through: required for the tool call to proceed -exit 0 -``` - -## Naming and file organisation - -File naming convention: -``` -{event-prefix}-{matcher}-{purpose}.sh -``` - -| Prefix | Lifecycle event | -|--------|----------------| -| `pre-bash-` | PreToolUse on Bash | -| `pre-write-` | PreToolUse on Write | -| `pre-config-` | PreToolUse on Edit/Write/MultiEdit | -| `pre-protect-` | PreToolUse guard for a specific resource | -| `post-edit-` | PostToolUse on Edit or Write | -| `post-bash-` | PostToolUse on Bash | -| `session-` | SessionStart | -| `stop-` | Stop | - -## Functions - -Extract repeated logic into shell functions. Name functions in `snake_case`: - -```bash -check_python_file() { - local file_path="$1" - [[ "$file_path" == *.py ]] && [[ -f "$file_path" ]] -} -``` - -Keep functions short (≤ 30 lines). Scripts that grow beyond ~100 lines should be -split into multiple files or rewritten as Python. - -## Portability - -Hook scripts run on macOS and Linux. Avoid GNU-specific flags when a POSIX-compatible -alternative exists. Test both platforms if in doubt. - -Copier `_tasks` use `/bin/sh` (POSIX shell, not bash). Use `#!/bin/sh` and POSIX-only -syntax in tasks embedded in `copier.yml`. +--- +paths: + - "**/*.sh" + - "**/*.bash" +--- + +# Bash Coding Style + +- Start with `#!/usr/bin/env bash` and `set -euo pipefail` (omit `-e` for PreToolUse hooks). +- Always quote variable expansions: `"$var"`. Use `[[ ]]`, not `[ ]`. +- PreToolUse hooks: echo `$INPUT` and exit 0 to allow; exit 2 to block. +- Print to stdout for PostToolUse/Stop output; print to stderr for PreToolUse blocking messages. +- Hooks run on macOS and Linux — avoid GNU-specific flags. 
diff --git a/.claude/rules/bash/security.md b/.claude/rules/bash/security.md index 7bfee6d..e4f5051 100644 --- a/.claude/rules/bash/security.md +++ b/.claude/rules/bash/security.md @@ -1,92 +1,13 @@ -# Bash Security - -# applies-to: **/*.sh - -> This file extends [common/security.md](../common/security.md) with Bash-specific content. - -## Never use `eval` - -`eval` executes arbitrary strings as code. Any user-controlled input passed to `eval` -is a code injection vulnerability: - -```bash -# Wrong — if $user_input = "rm -rf /", this destroys the filesystem -eval "process_$user_input" - -# Correct — use a case statement or associative array -case "$action" in - start) start_service ;; - stop) stop_service ;; - *) echo "Unknown action: $action" >&2; exit 1 ;; -esac -``` - -## Never use `shell=True` equivalent - -Constructing commands via string interpolation and passing to a shell interpreter -enables injection: - -```bash -# Wrong — $filename could contain shell metacharacters -system("process $filename") - -# Correct — pass as separate argument -process_file "$filename" -``` - -When calling external programs, pass arguments as separate words, never concatenated -into a single string. - -## Validate and sanitise all inputs +--- +paths: + - "**/*.sh" + - "**/*.bash" +--- -Scripts that accept arguments or read from environment variables must validate them -before use: - -```bash -file_path="${1:?Usage: script.sh <file-path>}" # fail with message if empty - -# Reject paths containing traversal sequences -if [[ "$file_path" == *..* ]]; then - echo "Error: path traversal not allowed" >&2 - exit 1 -fi -``` - -## Secrets in environment variables - -- Do not echo or log environment variables that may contain secrets. -- Do not write secrets to temporary files unless the file is created with `mktemp` - and cleaned up in an `EXIT` trap. 
-- Check that required environment variables exist before using them: - -```bash -: "${API_KEY:?API_KEY environment variable is required}" -``` - -## Temporary file handling - -Use `mktemp` for temporary files and clean up with a trap: - -```bash -TMPFILE=$(mktemp) -trap 'rm -f "$TMPFILE"' EXIT - -# Use $TMPFILE safely -some_command > "$TMPFILE" -process_output "$TMPFILE" -``` - -Never use predictable filenames like `/tmp/output.txt` — they are vulnerable to -symlink attacks. - -## Subprocess calls in hook scripts - -Hook scripts in `.claude/hooks/` execute in the context of the developer's machine. -They should: -- Only call trusted binaries (`uv`, `git`, `python3`, `ruff`, `basedpyright`). -- Never download or execute code from the network. -- Avoid `curl | bash` patterns. -- Not modify files outside the project directory. +# Bash Security -The `pre-bash-block-no-verify.sh` hook blocks `git commit --no-verify` to ensure -pre-commit security gates cannot be bypassed. +- Never use `eval`; use `case` statements for dispatch. +- Pass arguments as separate words — never concatenate user input into a shell string. +- Validate all arguments and env vars before use; reject paths containing `..`. +- Use `mktemp` for temp files with `trap 'rm -f "$TMPFILE"' EXIT`. +- Hook scripts must only call trusted binaries (`uv`, `git`, `python3`, `ruff`); never `curl | bash`. diff --git a/.claude/rules/common/code-review.md b/.claude/rules/common/code-review.md deleted file mode 100644 index b0f2560..0000000 --- a/.claude/rules/common/code-review.md +++ /dev/null @@ -1,77 +0,0 @@ -# Code Review Standards - -## When to review - -Review your own code before every commit, and request a peer review before merging to -`main`. Self-review catches the majority of issues before CI runs. - -**Mandatory review triggers:** -- Any change to security-sensitive code: authentication, authorisation, secret handling, - user input processing. 
-- Architectural changes: new modules, changed public APIs, modified data models. -- Changes to CI/CD configuration or deployment scripts. -- Changes to `copier.yml` that affect generated projects. - -## Self-review checklist - -Before opening a PR, verify each item: - -**Correctness** -- [ ] Code does what the requirement says. -- [ ] Edge cases are handled (empty inputs, None/null, boundary values). -- [ ] Error paths are covered by tests. - -**Code quality** -- [ ] Functions are focused; no function exceeds 80 lines. -- [ ] No deep nesting (> 4 levels); use early returns instead. -- [ ] Variable and function names are clear and consistent with the codebase. -- [ ] No commented-out code left in. - -**Testing** -- [ ] Every new public symbol has at least one test. -- [ ] `just coverage` shows no module below threshold. -- [ ] Tests are isolated and do not depend on order. - -**Documentation** -- [ ] Every new public function/class/method has a Google-style docstring. -- [ ] `CLAUDE.md` is updated if project structure or commands changed. - -**Security** -- [ ] No hardcoded secrets. -- [ ] No new external dependencies without justification. -- [ ] User inputs are validated. - -**CI** -- [ ] `just ci` passes with zero errors locally before pushing. 
- -## Severity levels - -| Level | Meaning | Required action | -|-------|---------|-----------------| -| CRITICAL | Security vulnerability or data-loss risk | Block merge — must fix | -| HIGH | Bug or significant quality issue | Should fix before merge | -| MEDIUM | Maintainability or readability concern | Consider fixing | -| LOW | Style suggestion or minor improvement | Optional | - -## Common issues to catch - -| Issue | Example | Fix | -|-------|---------|-----| -| Large functions | > 80 lines | Extract helpers | -| Deep nesting | `if a: if b: if c:` | Early returns | -| Missing error handling | Bare `except:` | Handle specifically | -| Hardcoded magic values | `if status == 3:` | Named constant | -| Missing type annotations | `def foo(x):` | Add type hints | -| Missing docstring | No docstring on public function | Add Google-style docstring | -| Debug artefacts | `print("here")` | Remove or use logger | - -## Integration with automated checks - -The review checklist is enforced at multiple layers: - -- **PostToolUse hooks**: ruff + basedpyright fire after every `.py` edit in a Claude session. -- **Pre-commit hooks**: ruff, basedpyright, secret scan on every `git commit`. -- **CI**: full `just ci` run on every push and pull request. - -Fix violations at the earliest layer — it is cheaper to fix a ruff error immediately -after editing a file than to fix it after the CI pipeline fails. diff --git a/.claude/rules/common/coding-style.md b/.claude/rules/common/coding-style.md index a6ed19a..b584edb 100644 --- a/.claude/rules/common/coding-style.md +++ b/.claude/rules/common/coding-style.md @@ -1,64 +1,8 @@ -# Coding Style — Common Principles - -These principles apply to all languages in this repository. Language-specific directories -extend or override individual rules where language idioms differ. - -## Naming - -- Use descriptive, intention-revealing names. Abbreviations are acceptable only when - universally understood in the domain (e.g. `url`, `id`, `ctx`). 
-- Functions and variables: `snake_case` for Python/Bash, follow language convention otherwise. -- Constants: `UPPER_SNAKE_CASE`. -- Avoid single-letter names except for short loop counters and mathematical variables. - -## Function size and focus - -- Functions should do one thing. If you need "and" to describe what a function does, - split it. -- Target ≤ 40 lines per function body; hard limit 80 lines. Exceeding this is a signal - to extract helpers. -- McCabe cyclomatic complexity ≤ 10 (enforced by ruff `C90`). - -## File size and cohesion - -- Keep files cohesive — one module, one concern. -- Target ≤ 400 lines per file; treat 600 lines as a trigger to extract a submodule. - -## Immutability preference - -- Prefer immutable data structures and values. Mutate in place only when necessary for - correctness or performance. -- **Language note**: overridden in languages where mutation is idiomatic (e.g. Go pointer - receivers). - -## Error handling - -- Never silently swallow exceptions or errors. Either handle them explicitly or propagate. -- Log errors with sufficient context to diagnose the issue without a debugger. -- Do not use bare `except:` / catch-all blocks unless re-raising immediately. - -## No magic values - -- Replace bare literals (`0`, `""`, `"pending"`) with named constants or enums. -- Document the origin of non-obvious numeric thresholds in a comment. - -## No debug artefacts - -- Remove `print()`, `console.log()`, `debugger`, and temporary debug variables before - committing. Use the project's logging infrastructure instead. - -## Comments - -- Comments explain *why*, not *what*. Code should be self-documenting. -- Avoid comments that merely restate what the code does ("increment counter", "return result"). -- Use `TODO(username): description` for tracked work items; never leave bare `TODO` or - `FIXME` in committed code. - -## Line length - -- 100 characters (enforced by ruff formatter for Python). Wrap long expressions clearly. 
-
-## Imports
-
-- Group and sort imports: standard library → third-party → local. One blank line between groups.
-- Absolute imports preferred over relative imports except within the same package.
+# Coding Style
+
+- Line length: 100 characters (enforced by ruff formatter).
+- Functions: one thing only; target ≤ 40 lines, hard limit 80 lines; McCabe complexity ≤ 10.
+- Never silently swallow exceptions; no bare `except:` unless re-raising immediately.
+- No magic values — replace bare literals with named constants or enums.
+- No `print()` or debug variables in committed code; use structured logging.
+- Comments explain *why*, not *what*. Use `TODO(username): description` for tracked work.
diff --git a/.claude/rules/common/development-workflow.md b/.claude/rules/common/development-workflow.md
deleted file mode 100644
index 76e572a..0000000
--- a/.claude/rules/common/development-workflow.md
+++ /dev/null
@@ -1,85 +0,0 @@
-# Development Workflow
-
-> This file describes the full feature development pipeline. Git operations are
-> covered in [git-workflow.md](./git-workflow.md).
-
-## Before writing any code: research first
-
-1. **Search the existing codebase** for similar patterns before adding new ones.
-   Prefer consistency with existing code over novelty.
-2. **Check third-party libraries** (PyPI, crates.io, etc.) before writing utility code.
-   Prefer battle-tested libraries over hand-rolled solutions for non-trivial problems.
-3. **Read the relevant rule files** in `.claude/rules/` for the language or domain
-   you are working in.
-
-## Feature implementation workflow
-
-### 1. Understand the requirement
-
-- Read the issue, PR description, or task thoroughly before touching code.
-- Identify edge cases and failure modes upfront. Write them down as test cases.
-- For large changes, sketch the data flow and affected modules before coding.
-
-### 2. Write tests first (TDD)
-
-See [testing.md](./testing.md) for the full TDD workflow. Summary:
-
-- Write one failing test per requirement or edge case.
-- Run `just test` to confirm the test fails for the right reason.
-- Then write the implementation.
-
-### 3. Implement
-
-- Follow the coding style rules for the relevant language.
-- Keep the diff small and focused. One PR per logical change.
-- Run the language-specific PostToolUse hook output (ruff, basedpyright) after every
-  file edit and fix violations before moving on.
-
-### 4. Self-review before opening a PR
-
-Run each check individually before the full suite so failures are isolated:
-
-```bash
-just fix          # auto-fix ruff violations
-just fmt          # format with ruff formatter
-just lint         # ruff lint — must be clean
-just type         # basedpyright — must be clean
-just docs-check   # docstring completeness — must be clean
-just test         # all tests — must pass
-just coverage     # no module below coverage threshold
-```
-
-Or run the full pipeline:
-
-```bash
-just ci   # fix → fmt → lint → type → docs-check → test → precommit
-```
-
-### 5. Commit and open a PR
-
-See [git-workflow.md](./git-workflow.md) for commit message format and PR conventions.
-
-## Working in this repo vs generated projects
-
-| Context | You are working on… | Use `just` from… |
-|---------|---------------------|-----------------|
-| Root repo | The Copier template itself | `/workspace/` |
-| Generated project | A project produced by `copier copy` | The generated project root |
-
-Changes to `template/` affect all future generated projects. Test template changes by
-running `copier copy . /tmp/test-output --trust --defaults` and inspecting the output.
-Clean up with `rm -rf /tmp/test-output`.
-
-## Toolchain quick reference
-
-| Task | Command |
-|------|---------|
-| Run tests | `just test` |
-| Coverage report | `just coverage` |
-| Lint | `just lint` |
-| Format | `just fmt` |
-| Type check | `just type` |
-| Docstring check | `just docs-check` |
-| Full CI | `just ci` |
-| Diagnose environment | `just doctor` |
-| Generate test project | `copier copy . /tmp/test-output --trust --defaults` |
diff --git a/.claude/rules/common/git-workflow.md b/.claude/rules/common/git-workflow.md
index 3218b3f..2c03b20 100644
--- a/.claude/rules/common/git-workflow.md
+++ b/.claude/rules/common/git-workflow.md
@@ -1,84 +1,6 @@
 # Git Workflow
 
-## Commit message format
-
-```
-<type>: <short imperative description>
-
-<optional body — explain WHY, not what>
-
-<optional footer: Closes #123, Breaking change: …>
-```
-
-**Types**: `feat`, `fix`, `refactor`, `docs`, `test`, `chore`, `perf`, `ci`, `build`
-
-Rules:
-- Subject line ≤ 72 characters; use imperative mood ("Add feature", not "Added feature").
-- Body wraps at 80 characters.
-- Reference issues in the footer (`Closes #123`), not the subject.
-- One logical change per commit. Do not bundle unrelated fixes.
-
-## Branch naming
-
-```
-<type>/<short-description>          # e.g. feat/add-logging-manager
-<type>/<issue-number>-description   # e.g. fix/42-null-pointer
-```
-
-## What never goes in a commit
-
-- Hardcoded secrets, API keys, tokens, or passwords.
-- Generated artefacts that are reproducible from source (build output, `*.pyc`, `.venv/`).
-- Merge-conflict markers (`<<<<<<<`, `=======`, `>>>>>>>`).
-- `*.rej` files left by Copier update conflicts.
-- Debug statements (`print()`, `debugger`, `pdb.set_trace()`).
-
-The `pre-bash-commit-quality.sh` hook scans staged files for the above before every commit.
-
-## Protected operations
-
-These commands are **blocked** by pre-commit hooks and must not be run without explicit
-justification:
-- `git commit --no-verify` — bypasses quality gates.
-- `git push --force` — rewrites shared history.
-- `git push` directly to `main` — use pull requests.
-
-**Maintainers:** enforce PR-only `main` and squash merges in GitHub **Settings** / branch
-protection; see `docs/github-repository-settings.md` in this repository (single checklist).
-
-## TDD commit conventions
-
-When committing TDD work, structure commits to reflect the discipline:
-
-- Use `test:` type for RED commits (failing test added).
-- Use `feat:` or `fix:` type for GREEN commits (implementation that makes tests pass).
-- Use `refactor:` type for REFACTOR commits (no behaviour change).
-- Include test context in the commit body: which scenarios are covered, what edge
-  cases were tested, and why the implementation approach was chosen.
-
-Example:
-```
-feat: add discount calculation for premium users
-
-Implements tiered discount logic. TDD cycle covered:
-- Happy path: 10% discount for premium tier
-- Edge cases: zero-item cart, negative prices rejected
-- Boundary: exactly-at-threshold cart values
-
-Tests: test_calculate_discount_* in tests/test_pricing.py
-```
-
-## Pull request workflow
-
-1. Run `just review` (lint + types + docstrings + tests) before opening a PR.
-2. Use `git diff main...HEAD` to review all changes since branching.
-3. Write a PR description that explains the *why* behind the change, not just the *what*.
-4. Include a test plan: which scenarios were verified manually or with automated tests.
-5. All CI checks must be green before requesting review.
-6. Squash-merge feature branches; preserve merge commits for release branches.
-
-## Copier template repos — additional rules
-
-- Never edit `.copier-answers.yml` by hand — the update algorithm depends on it.
-- Resolve `*.rej` conflict files before committing; they indicate unreviewed merge conflicts.
-- Tag releases with PEP 440-compatible versions (`1.2.3`, `1.2.3a1`) for `copier update` to work.
+- Commit format: `<type>: <imperative description>` (subject ≤ 72 chars).
+  Types: `feat`, `fix`, `refactor`, `docs`, `test`, `chore`, `perf`, `ci`, `build`.
+- Never commit: hardcoded secrets, `*.rej` files, merge-conflict markers, debug statements.
+- Blocked operations: `git commit --no-verify`, `git push --force`, direct push to `main`.
+- Run `just review` before opening a PR; all CI checks must be green.
diff --git a/.claude/rules/common/hooks.md b/.claude/rules/common/hooks.md
index 8199545..282b335 100644
--- a/.claude/rules/common/hooks.md
+++ b/.claude/rules/common/hooks.md
@@ -1,11 +1,6 @@
-# Hooks (Common)
-
-This document defines shared conventions for Claude Code hooks across languages.
-Language-specific hook requirements live in their respective rule files (e.g. `python/hooks.md`).
-
-## General rules
+# Hooks
 
 - Hooks must be fast and side-effect free unless explicitly intended.
-- **PreToolUse** hooks may block actions (exit code `2`) and should avoid long-running commands.
-- **PostToolUse** hooks must not block (exit code is ignored); use them for reminders and checks.
-- Prefer printing concise, actionable guidance on stderr for warnings and blocks.
+- PreToolUse hooks may block (exit code `2`); echo `$INPUT` back to stdout to allow.
+- PostToolUse and Stop hooks must exit `0`; use them for reminders only, not blocking.
+- Document new hooks in `hooks/README.md` with lifecycle event, matcher, and blocking behaviour.
diff --git a/.claude/rules/common/security.md b/.claude/rules/common/security.md
index 2c3e043..e17a9d2 100644
--- a/.claude/rules/common/security.md
+++ b/.claude/rules/common/security.md
@@ -1,63 +1,6 @@
-# Security Guidelines
+# Security
 
-## Pre-commit security checklist
-
-Before every commit, verify:
-
-- [ ] No hardcoded secrets: API keys, passwords, tokens, private keys.
-- [ ] No credentials in comments or docstrings.
-- [ ] All user-supplied inputs are validated before use.
-- [ ] Parameterised queries used for all database operations (no string-formatted SQL).
-- [ ] File paths from user input are sanitised (no path traversal).
-- [ ] Error messages do not expose stack traces, internal paths, or configuration values
-  to end users.
-- [ ] New dependencies are from trusted sources and pinned to specific versions.
-
-The `pre-bash-commit-quality.sh` hook performs a basic automated scan, but it does not
-replace manual review.
-
-## Secret management
-
-- **Never** hardcode secrets in source files, configuration, or documentation.
-- Use environment variables; load them with a library (e.g. `python-dotenv`) rather
-  than direct `os.environ` reads scattered across code.
-- Validate that required secrets are present at application startup; fail fast with a
-  clear error rather than silently using a default.
-- Rotate any secret that may have been exposed. Treat exposure as certain if the secret
-  ever appeared in a commit, even briefly.
-
-```python
-# Correct: fail fast, clear message
-api_key = os.environ["OPENAI_API_KEY"]  # raises KeyError if missing
-
-# Wrong: silent fallback hides misconfiguration
-api_key = os.environ.get("OPENAI_API_KEY", "")
-```
-
-## Dependency security
-
-- Pin all dependency versions in `pyproject.toml` and commit `uv.lock`.
-- Review `dependabot` PRs promptly; do not let them accumulate.
-- Run `just security` (if present) or `uv run pip-audit` before major releases.
-
-## Security response protocol
-
-If a security issue is discovered:
-
-1. Stop work on the current feature immediately.
-2. Assess severity: can it expose user data, allow privilege escalation, or cause data loss?
-3. For **critical** issues: open a private GitHub security advisory, do not discuss in
-   public issues.
-4. Rotate any exposed secrets before fixing the code.
-5. Write a test that reproduces the vulnerability before patching.
-6. Review the entire codebase for similar patterns after fixing.
-
-## What is in scope for this repo
-
-This is a Copier template repository. Security considerations include:
-
-- The `_tasks` in `copier.yml` execute arbitrary shell commands on the user's machine.
-  Keep tasks minimal, idempotent, and auditable.
-- Generated projects inherit security configuration from the template; changes to
-  template security configuration affect all future generated projects.
-- Copier's `--trust` flag must be documented so users understand what they are consenting to.
+- Never hardcode secrets (API keys, passwords, tokens) in source, config, or docs.
+- Load secrets via `os.environ["KEY"]` (fail-fast); never `os.environ.get("KEY", "")`.
+- Validate all external inputs at the boundary; use parameterised queries for SQL.
+- Pin all dependency versions in `pyproject.toml`; commit `uv.lock`.
diff --git a/.claude/rules/common/testing.md b/.claude/rules/common/testing.md
index 2801702..7d2ff45 100644
--- a/.claude/rules/common/testing.md
+++ b/.claude/rules/common/testing.md
@@ -1,87 +1,7 @@
 # Testing Requirements
 
-## Minimum coverage: 80 % (85 % for generated projects)
-
-All new code requires tests. Coverage is measured at the module level; do not lower
-an existing module's coverage when making changes.
-
-## Test types required
-
-| Type | Scope | Framework |
-|------|-------|-----------|
-| Unit | Individual functions, pure logic | pytest |
-| Integration | Multi-module flows, I/O boundaries | pytest |
-| Template rendering | Copier copy/update output (this repo) | pytest + copier API |
-
-End-to-end tests are added for critical user-facing flows when the project ships a CLI
-or HTTP API.
-
-## Test-Driven Development workflow
-
-For new features and bug fixes, follow TDD:
-
-1. Write test — it must **fail** (RED). Use `/tdd-red` to validate.
-2. Write minimal implementation — test must **pass** (GREEN). Use `/tdd-green` to validate.
-3. Refactor for clarity and performance (REFACTOR). Tests must stay green after every change.
-4. Validate full CI pipeline (`just ci`). Use `/ci-fix` if anything fails.
-5. Verify coverage did not drop.
-
-Skipping the RED step (writing code before a failing test) is not TDD.
-
-### GREEN means minimal
-
-In the GREEN phase, write only enough code to make the failing test pass. Do not add error handling
-for untested paths, optimisations, or features beyond what the test requires. Those belong in the
-next RED cycle or in REFACTOR. Over-engineering in GREEN violates the TDD contract and introduces
-untested code.
-
-## AAA structure (Arrange-Act-Assert)
-
-```python
-def test_calculates_discount_for_premium_users():
-    # Arrange
-    user = User(tier="premium")
-    cart = Cart(items=[Item(price=100)])
-
-    # Act
-    total = calculate_total(user, cart)
-
-    # Assert
-    assert total == 90  # 10 % discount
-```
-
-## Naming conventions
-
-Use descriptive names that read as sentences:
-
-```
-test_returns_empty_list_when_no_results_match()
-test_raises_value_error_when_email_is_invalid()
-test_falls_back_to_default_when_config_is_missing()
-```
-
-Avoid: `test_1()`, `test_function()`, `test_ok()`.
-
-## Test isolation
-
-- Tests must not share mutable state. Each test starts from a known baseline.
-- Mock external I/O (network, filesystem, clocks) at the boundary, not deep in
-  the call stack.
-- Do not depend on test execution order. Tests must pass when run individually
-  and in any order.
-
-## What not to test
-
-- Implementation details that may change without affecting observable behaviour.
-- Third-party library internals.
-- Trivial getters/setters with no logic.
-
-## Running tests
-
-```bash
-just test            # all tests, quiet
-just test-parallel   # parallelised with pytest-xdist
-just coverage        # coverage report with missing lines
-```
-
-Always run `just ci` before opening a pull request.
+- Minimum coverage: 85% per module; never lower an existing module's coverage.
+- TDD workflow: write failing test (RED) → minimal implementation (GREEN) → refactor (REFACTOR).
+- GREEN means minimal — write only enough code to make the failing test pass.
+- Tests must not share mutable state; must pass when run individually or in any order.
+- Every new public symbol requires at least one test. Use `just test` / `just coverage`.
diff --git a/.claude/rules/copier/template-conventions.md b/.claude/rules/copier/template-conventions.md
index 8c40e38..eb128e6 100644
--- a/.claude/rules/copier/template-conventions.md
+++ b/.claude/rules/copier/template-conventions.md
@@ -1,183 +1,13 @@
-# Copier Template Conventions
-
-# applies-to: copier.yml, template/**
-
-These rules apply to `copier.yml` and all files under `template/`. They are
-specific to the template meta-repo and are **not** propagated to generated projects.
-
-## copier.yml structure
-
-Organise `copier.yml` into clearly labelled sections with separator comments:
-
-```yaml
-# --- Template metadata ---
-_min_copier_version: "9.11.2"
-_subdirectory: template
-
-# --- Jinja extensions ---
-_jinja_extensions:
-  - jinja2_time.TimeExtension
-
-# --- Questions / Prompts ---
-project_name:
-  type: str
-  help: Human-readable project name
-
-# --- Computed values ---
-current_year:
-  type: str
-  when: false
-  default: "{% now 'utc', '%Y' %}"
-
-# --- Post-generation tasks ---
-_tasks:
-  - command: uv lock
-```
-
-## Questions (prompts)
-
-- Every question must have a `help:` string that clearly explains what the value is used for.
-- Every question must have a sensible `default:` so users can accept defaults with `--defaults`.
-- Use `choices:` for questions with a fixed set of valid answers (license, Python version).
-- Use `validator:` with a Jinja expression for format validation (e.g. package name must be
-  a valid Python identifier).
-- Use `when: "{{ some_bool }}"` to conditionally show questions that are only relevant
-  when a related option is enabled.
-
-## Computed variables
-
-Computed variables (not prompted) must use `when: false`:
-
-```yaml
-current_year:
-  type: str
-  when: false
-  default: "{% now 'utc', '%Y' %}"
-```
-
-They are not stored in `.copier-answers.yml` and not shown to users. Use them for
-derived values (year, Python version matrix) to keep templates DRY.
-
-## Secrets and third-party tokens
-
-Do **not** add Copier prompts for API tokens or upload keys (Codecov, PyPI, npm,
-and so on). Those belong in the CI provider’s **encrypted secrets** (for example
-GitHub Actions **Settings → Secrets**) and in maintainer documentation
-(`README.md`, `docs/ci.md`), not in `.copier-answers.yml`.
-
-If you must accept a rare secret interactively, use `secret: true` with a safe
-`default` so it is **not** written to the answers file — and still prefer
-documenting the secret-in-CI workflow instead of prompting.
-
-## _skip_if_exists
-
-Files in `_skip_if_exists` are preserved on `copier update` — user edits are not
-overwritten. Add a file here when:
-- The user is expected to customise it significantly (`pyproject.toml`, `README.md`).
-- Overwriting it on update would destroy user work.
-
-Do **not** add files here unless there is a clear reason. Too many skipped files
-make updates less useful.
-
-## _tasks
-
-Post-generation tasks run after both `copier copy` and `copier update`. Design them
-to be **idempotent** (safe to run multiple times) and **fast** (do not download
-large artefacts unconditionally).
-
-Use `copier update --skip-tasks` to bypass tasks when only the template content needs
-to be refreshed.
-
-Tasks use `/bin/sh` (POSIX shell), not bash. Use POSIX-compatible syntax.
-
-## Template file conventions
-
-- Add `.jinja` suffix to every file that contains Jinja2 expressions.
-- Files without `.jinja` are copied verbatim (no Jinja processing).
-- Template file names may themselves contain Jinja expressions:
-  `src/{{ package_name }}/__init__.py.jinja` → `src/mylib/__init__.py`.
-- Keep Jinja expressions in file names simple (variable substitution only).
-
-## .copier-answers.yml
-
-- Never edit `.copier-answers.yml` by hand in generated projects.
-- The answers file is managed by Copier's update algorithm. Manual edits cause
-  unpredictable diffs on the next `copier update`.
-- The template file `{{_copier_conf.answers_file}}.jinja` generates the answers file.
-  Changes to its structure require careful migration testing.
-
-## Versioning and releases
-
-- Tag releases with **PEP 440** versions: `1.0.0`, `1.2.3`, `2.0.0a1`.
-- `copier update` uses these tags to select the appropriate template version.
-- Introduce `_migrations` in `copier.yml` when a new version renames or removes template
-  files, to guide users through the update.
-- See `src/{{ package_name }}/common/bump_version.py` and `.github/workflows/release.yml` for the release
-  automation workflow.
-
-## Skill descriptions (if adding skills to template)
-
-Every skill SKILL.md frontmatter must include a `description:` field with these constraints:
-
-- **Max 1024 characters** (hard limit) — skill descriptions are used in Claude's command
-  suggestions and UI; longer descriptions are truncated and unusable.
-- Use `>-` for multi-line descriptions (block scalar without trailing newline).
-- Lead with the skill's **primary purpose** (what it does).
-- Include **trigger keywords** if applicable (e.g., "use this when the user says...").
-- Mention **what the skill outputs** (e.g., "produces test skeletons with AAA structure").
-
-**Template:**
-
-```yaml
 ---
-name: skill-name
-description: >-
-  <Primary purpose in one sentence.>
-
-  <When to use it / trigger keywords.>
-
-  <What it produces or outputs.>
+paths:
+  - "copier.yml"
+  - "template/**"
+---
-```
-
-**Example (tdd-test-planner):**
-
-```yaml
-description: >-
-  Convert a requirement into a structured pytest test plan.
-  Use when the user says: "plan my tests", "write tests first", "TDD approach".
-  Produces categorised test cases (happy path, errors, boundaries, edges, integration)
-  plus pytest skeletons with AAA structure and fixture guidance.
-```
-
-**Measurement:** Count characters in the `description:` value (not including frontmatter).
-
----
-
-## Dual-hierarchy maintenance
-
-This repo has two parallel `.claude/` trees:
-
-```
-.claude/            ← used while developing the template
-template/.claude/   ← rendered into generated projects
-```
-
-When adding or modifying hooks, commands, or rules:
-- Decide whether the change applies to the template maintainer only, generated projects
-  only, or both.
-- If both: add to both trees. The `post-edit-template-mirror.sh` hook will remind you.
-- Rules specific to Copier/Jinja/YAML belong only in the root tree.
-- Rules for Python/Bash/Markdown typically belong in both trees.
-
-## Testing template changes
-
-Every change to `copier.yml` or a template file requires a test update in
-`tests/integration/test_template.py`. Run:
-
-```bash
-just test   # run all template tests
-copier copy . /tmp/test-output --trust --defaults --vcs-ref HEAD
-```
+# Copier Template Conventions
-
-Clean up: `rm -rf /tmp/test-output`
+
+- Never edit `.copier-answers.yml` by hand — managed by Copier's update algorithm.
+- Computed variables (not prompted) use `when: false`; not stored in answers or shown to users.
+- Do not prompt for API tokens or secrets; use CI encrypted secrets instead.
+- Dual-hierarchy: changes affecting both meta-repo and generated projects go in both `.claude/` and `template/.claude/`.
+- Every `copier.yml` or template file change requires a test update in `tests/integration/test_template.py`.
diff --git a/.claude/rules/jinja/coding-style.md b/.claude/rules/jinja/coding-style.md
deleted file mode 100644
index e500ab5..0000000
--- a/.claude/rules/jinja/coding-style.md
+++ /dev/null
@@ -1,107 +0,0 @@
-# Jinja2 Coding Style
-
-# applies-to: **/*.jinja
-
-Jinja2 templates are used in this repository to generate Python project files via
-Copier. These rules apply to every `.jinja` file under `template/`.
-
-## Enabled extensions
-
-The following Jinja2 extensions are active (configured in `copier.yml`):
-
-- `jinja2_time.TimeExtension` — provides `{% now 'utc', '%Y' %}` for date injection.
-- `jinja2.ext.do` — enables `{% do list.append(item) %}` for side-effect statements.
-- `jinja2.ext.loopcontrols` — enables `{% break %}` and `{% continue %}` in loops.
-
-Always use these extensions rather than working around them with complex filter chains.
-
-## Variable substitution
-
-Use `{{ variable_name }}` for all substitutions. Add a trailing space inside braces
-only for readability in complex expressions, not as a general rule:
-
-```jinja
-{{ project_name }}
-{{ author_name | lower | replace(" ", "-") }}
-{{ python_min_version }}
-```
-
-## Control structures
-
-Indent template logic blocks consistently with the surrounding file content.
-Use `{%- -%}` (dash-trimmed) tags to suppress blank lines produced by control blocks
-when the rendered output should be compact:
-
-```jinja
-{%- if include_docs %}
-mkdocs:
-  site_name: {{ project_name }}
-{%- endif %}
-```
-
-Use `{% if %}...{% elif %}...{% else %}...{% endif %}` for branching.
-Avoid deeply nested conditions; extract to a Jinja macro or simplify the data model.
-
-## Whitespace control
-
-- Use `{%- ... -%}` to strip leading/trailing whitespace around control blocks that
-  should not produce blank lines in the output.
-- Never strip whitespace blindly on every tag — it makes templates hard to read.
-- Test rendered output with `copier copy . /tmp/test-output --trust --defaults` and
-  inspect for spurious blank lines before committing.
-
-## Filters
-
-Prefer built-in Jinja2 filters over custom Python logic in templates:
-
-| Goal | Filter |
-|------|--------|
-| Lowercase | `\| lower` |
-| Replace characters | `\| replace("x", "y")` |
-| Default value | `\| default("fallback")` |
-| Join list | `\| join(", ")` |
-| Trim whitespace | `\| trim` |
-
-Avoid complex filter chains longer than 3 steps; compute the value in `copier.yml`
-as a computed variable instead.
-
-## Macros
-
-Define reusable template fragments as macros at the top of the file or in a dedicated
-`_macros.jinja` file (if the project grows to warrant it):
-
-```jinja
-{% macro license_header(year, author) %}
-# Copyright (c) {{ year }} {{ author }}. All rights reserved.
-{% endmacro %}
-```
-
-## File naming
-
-- Suffix: `.jinja` (e.g. `pyproject.toml.jinja`, `__init__.py.jinja`).
-- File names can themselves be Jinja expressions:
-  `src/{{ package_name }}/__init__.py.jinja` renders to `src/mylib/__init__.py`.
-- Keep file name expressions simple: variable substitution only, no filters.
-
-## Commenting
-
-Use Jinja comments (`{# comment #}`) for notes that should not appear in the rendered
-output. Use the target language's comment syntax for notes that should survive rendering:
-
-```jinja
-{# This block is only rendered when pandas support is requested #}
-{% if include_pandas_support %}
-pandas>=2.0
-{% endif %}
-
-# This Python comment will appear in the generated file
-import os
-```
-
-## Do not embed logic that belongs in copier.yml
-
-Templates should be presentation, not computation. Move conditional logic to:
-- `copier.yml` computed variables (`when: false`, `default: "{% if ... %}"`)
-- Copier's `_tasks` for post-generation side effects
-
-Deeply nested `{% if %}{% if %}{% if %}` blocks in a template are a signal to refactor.
diff --git a/.claude/rules/jinja/testing.md b/.claude/rules/jinja/testing.md
deleted file mode 100644
index 0f1feff..0000000
--- a/.claude/rules/jinja/testing.md
+++ /dev/null
@@ -1,91 +0,0 @@
-# Jinja2 Template Testing
-
-# applies-to: **/*.jinja
-
-> This file extends [common/testing.md](../common/testing.md) with Jinja2-specific content.
-
-## What to test
-
-Every Jinja2 template change requires a corresponding test in `tests/integration/test_template.py`.
-Tests render the template with `copier copy` and assert on the output.
-
-Scenarios to cover for each template file:
-
-1. **Default values** — render with `--defaults` and assert the file exists with
-   expected content.
-2. **Boolean feature flags** — render with each combination of `include_*` flags that
-   affects the template and assert the relevant sections are present or absent.
-3. **Variable substitution** — render with non-default values (e.g. a custom
-   `project_name`) and assert the value appears correctly in the output.
-4. **Whitespace correctness** — spot-check that blank lines are not spuriously added or
-   removed by whitespace-control tags.
-
-## Test utilities
-
-Tests use the `copier` Python API directly (not the CLI) for reliability:
-
-```python
-from copier import run_copy
-
-def test_renders_with_pandas(tmp_path):
-    run_copy(
-        src_path=str(ROOT),
-        dst_path=str(tmp_path),
-        data={"include_pandas_support": True},
-        defaults=True,
-        overwrite=True,
-        unsafe=True,
-        vcs_ref="HEAD",
-    )
-    pyproject = (tmp_path / "pyproject.toml").read_text()
-    assert "pandas" in pyproject
-```
-
-The `ROOT` constant points to the repository root. Use the `tmp_path` fixture for the
-destination directory so pytest cleans it up automatically.
-
-## Syntax validation
-
-The `post-edit-jinja.sh` PostToolUse hook validates Jinja2 syntax automatically after
-every `.jinja` file edit in a Claude Code session:
-
-```
-┌─ Jinja2 syntax check: template/pyproject.toml.jinja
-│ ✓ Jinja2 syntax OK
-└─ Done
-```
-
-If the hook reports a syntax error, fix it before running tests — `copier copy` will
-fail with a less helpful error message if template syntax is broken.
-
-## Manual rendering for inspection
-
-Render the full template to inspect a complex template change:
-
-```bash
-copier copy . /tmp/test-output --trust --defaults --vcs-ref HEAD
-```
-
-Inspect specific files:
-
-```bash
-cat /tmp/test-output/pyproject.toml
-cat /tmp/test-output/src/my_library/__init__.py
-```
-
-Clean up:
-
-```bash
-rm -rf /tmp/test-output
-```
-
-## Update testing
-
-Test `copier update` scenarios when changing `_skip_if_exists` or the `.copier-answers.yml`
-template. The `tests/integration/test_template.py` file includes update scenario tests; add new ones
-when you add new `_skip_if_exists` entries.
-
-## Coverage for template branches
-
-Aim to cover every `{% if %}` branch in every template file with at least one test.
-Untested branches can produce invalid code in generated projects.
diff --git a/.claude/rules/markdown/conventions.md b/.claude/rules/markdown/conventions.md
index eb3b25e..dcd4939 100644
--- a/.claude/rules/markdown/conventions.md
+++ b/.claude/rules/markdown/conventions.md
@@ -1,82 +1,12 @@
-# Markdown Conventions
-
-# applies-to: **/*.md
-
-## File placement rule
-
-Any Markdown file created as part of a workflow, analysis, or investigation output
-**must be placed inside the `docs/` folder**.
-
-**Allowed exceptions** (may be placed at the repository root or any location):
-- `README.md`
-- `CLAUDE.md`
-- `.claude/rules/**/*.md` (rules documentation)
-- `.claude/hooks/README.md` (hooks documentation)
-- `.github/**/*.md` (GitHub community files)
-
-Do **not** create free-standing files such as `ANALYSIS.md`, `NOTES.md`, or
-`LOGGING_ANALYSIS.md` at the repository root or inside `src/`, `tests/`, or `scripts/`.
-
-This rule is enforced by:
-- `pre-write-doc-file-warning.sh` (PreToolUse: blocks writing `.md` outside allowed locations)
-- `post-edit-markdown.sh` (PostToolUse: warns if an existing `.md` is edited in the wrong place)
-
-## Headings
-
-- Use ATX headings (`#`, `##`, `###`), not Setext underlines (`===`, `---`).
-- One top-level heading (`# Title`) per file.
-- Do not skip heading levels (e.g. `##` → `####` without a `###` in between).
-- Headings should be sentence-case (capitalise first word only) unless the subject
-  is a proper noun or acronym.
-
-## Line length and wrapping
-
-- Wrap prose at 100 characters for readability in editors and diffs.
-- Do not wrap code blocks, tables, or long URLs.
-
-## Code blocks
+---
+paths:
+  - "**/*.md"
+  - "**/*.mdx"
+---
 
-- Always specify the language for fenced code blocks:
-  ```
-  \```python
-  \```bash
-  \```yaml
-  \```
-- Use inline code (backticks) for: file names, directory names, command names,
-  variable names, and short code snippets within prose.
-
-## Lists
-
-- Use `-` for unordered lists (not `*` or `+`).
-- Use `1.` for all items in ordered lists (the renderer handles numbering).
-- Nest lists with 2-space or 4-space indentation consistently within a file.
-
-## Tables
-
-- Align columns with spaces for readability in source (optional but preferred).
-- Include a header row and a separator row.
-- Keep tables narrow enough to read without horizontal scrolling.
-
-## Links
-
-- Use reference-style links for URLs that appear more than once.
-- Use relative links for internal project files:
-  `[CLAUDE.md](../CLAUDE.md)` not `https://github.com/…/CLAUDE.md`.
-- Do not embed bare URLs in prose; always use `[descriptive text](url)`.
-
-## CLAUDE.md maintenance
-
-`CLAUDE.md` (root and `template/CLAUDE.md.jinja`) is the primary context document
-for AI assistants. Keep it up to date:
-
-- Run `/update-claude-md` slash command to detect drift against `pyproject.toml`
-  and the `justfile`.
-- Update `CLAUDE.md` when you add or change slash commands, hooks, tooling, or
-  project structure.
-- Do not duplicate content between `CLAUDE.md` and the rules files. Cross-reference
-  with a link instead.
-
-## Generated project documentation
+# Markdown Conventions
 
-Documentation for generated projects lives in `docs/` (MkDocs) and follows the same
-conventions. The `docs/index.md.jinja` template is the starting point.
+- Output files belong in `docs/`. Exceptions: `README.md`, `CLAUDE.md`, `.claude/**/*.md`, `.github/**/*.md`.
+- Never create free-standing files (e.g. `ANALYSIS.md`, `NOTES.md`) at the repo root, `src/`, `tests/`, or `scripts/`.
+- Use ATX headings (`#`), sentence-case, one `# Title` per file; `-` for unordered lists.
+- Wrap prose at 100 characters; always specify a language on fenced code blocks.
diff --git a/.claude/rules/python/coding-style.md b/.claude/rules/python/coding-style.md
index c7dba38..d9e8db0 100644
--- a/.claude/rules/python/coding-style.md
+++ b/.claude/rules/python/coding-style.md
@@ -1,145 +1,14 @@
-# Python Coding Style
-
-# applies-to: **/*.py, **/*.pyi
-
-> This file extends [common/coding-style.md](../common/coding-style.md) with Python-specific content.
-
-## Formatter and linter
-
-- **ruff** is the single tool for both formatting and linting. Do not use black, isort,
-  or flake8 alongside it — they will conflict.
-- Run `just fmt` to format, `just lint` to lint, `just fix` to auto-fix safe violations.
-- Active rule sets: `E`, `F`, `I`, `UP`, `B`, `SIM`, `C4`, `RUF`, `D`, `C90`, `PERF`. -- Line length: 100 characters. `E501` is disabled (formatter handles wrapping). - -## Type annotations - -All public functions and methods must have complete type annotations: - -```python -# Correct -def calculate_discount(price: float, tier: str) -> float: - ... - -# Wrong — missing annotations -def calculate_discount(price, tier): - ... -``` - -- Use `basedpyright` in `standard` mode for type checking (`just type`). -- Prefer `X | Y` union syntax (Python 3.10+) over `Optional[X]` / `Union[X, Y]`. -- Use `from __future__ import annotations` only when needed for forward references in - Python 3.11; prefer the native syntax. - -## Docstrings — Google style - -Every public function, class, and method must have a Google-style docstring: - -```python -def fetch_user(user_id: int, *, include_deleted: bool = False) -> User | None: - """Fetch a user by ID from the database. - - Args: - user_id: The primary key of the user to retrieve. - include_deleted: When True, soft-deleted users are also returned. - - Returns: - The matching User instance, or None if not found. - - Raises: - DatabaseError: If the database connection fails. - """ -``` - -- `Args`, `Returns`, `Raises`, `Yields`, `Note`, `Example` are the supported sections. -- Test files (`tests/**`) and scripts (`scripts/**`) are exempt from docstring requirements. 
- -## Naming - -| Symbol | Convention | Example | -|--------|-----------|---------| -| Module | `snake_case` | `file_manager.py` | -| Class | `PascalCase` | `LoggingManager` | -| Function / method | `snake_case` | `configure_logging()` | -| Variable | `snake_case` | `retry_count` | -| Constant | `UPPER_SNAKE_CASE` | `MAX_RETRIES` | -| Type alias | `PascalCase` | `UserId = int` | -| Private | leading `_` | `_internal_helper()` | -| Dunder | `__name__` | `__all__`, `__init__` | - -## Immutability +--- +paths: + - "**/*.py" + - "**/*.pyi" +--- -Prefer immutable data structures: - -```python -from dataclasses import dataclass -from typing import NamedTuple - -@dataclass(frozen=True) -class Config: - host: str - port: int - -class Point(NamedTuple): - x: float - y: float -``` - -## Error handling - -```python -# Correct: specific exception, meaningful message -try: - result = parse_config(path) -except FileNotFoundError: - raise ConfigError(f"Config file not found: {path}") from None -except (ValueError, KeyError) as exc: - raise ConfigError(f"Malformed config: {exc}") from exc - -# Wrong: silently swallowed -try: - result = parse_config(path) -except Exception: - result = None -``` - -## Logging - -Use **structlog** (configured via `common.logging_manager`). Never use `print()` or the -standard `logging.getLogger()` in application code. - -```python -import structlog -log = structlog.get_logger() - -log.info("user_created", user_id=user.id, email=user.email) -log.error("payment_failed", order_id=order_id, reason=str(exc), llm=True) -``` - -## Imports - -```python -# Correct order: stdlib → third-party → local -import os -from pathlib import Path - -import structlog - -from mypackage.common.utils import slugify -``` - -Avoid wildcard imports (`from module import *`) except in `__init__.py` for re-exporting. 
- -## Collections and comprehensions - -Prefer comprehensions over `map`/`filter` for readability: - -```python -# Preferred -active_users = [u for u in users if u.is_active] - -# Avoid unless the lambda is trivial -active_users = list(filter(lambda u: u.is_active, users)) -``` +# Python Coding Style -Use generator expressions when you only iterate once and do not need a list in memory. +- All public functions, methods, and classes require complete type annotations and a Google-style docstring. +- Use `structlog.get_logger()` for logging in `src/`; never `print()`, `logging.getLogger()`, or `logging.basicConfig()`. +- Call `configure_logging()` once at entry point; use public APIs from `my_library.common.logging_manager`. +- Prefer `X | Y` union syntax; basedpyright `standard` mode enforced — run `just type` after edits. +- Run `just fmt` and `just lint` after edits; active ruff rules include `D`, `C90`, `PERF`, `T20`. +- Outside `common/`, prefer imports from `my_library.common` over reimplementing file I/O, decorators, or utils. diff --git a/.claude/rules/python/hooks.md b/.claude/rules/python/hooks.md index 15170e5..ad04ac6 100644 --- a/.claude/rules/python/hooks.md +++ b/.claude/rules/python/hooks.md @@ -1,114 +1,12 @@ -# Python Hooks - -# applies-to: **/*.py, **/*.pyi - -> This file extends [common/hooks.md](../common/hooks.md) with Python-specific content. - -## PostToolUse hooks active for Python files - -These hooks fire automatically in a Claude Code session after any `.py` file is -edited or created. They provide immediate feedback so violations are fixed in the -same turn, not at CI time. - -| Hook script | Trigger | What it does | -|-------------|---------|--------------| -| `post-edit-python.sh` | Edit or Write on `*.py` | Runs `ruff check` + `basedpyright` on the saved file; prints violations to stdout | - -### What the hook checks - -1. 
**ruff check** — all active rule sets: `E`, `F`, `I`, `UP`, `B`, `SIM`, `C4`, `RUF`, - `D` (docstrings), `C90` (complexity), `PERF` (performance anti-patterns), - `T20` / `T201` (no `print()` in app code). -2. **basedpyright** — type correctness in `standard` mode. - -If either tool reports violations, the hook prints a structured block: - -``` -┌─ Standards check: src/mypackage/core.py -│ -│ ruff check -│ src/mypackage/core.py:42:5: D401 First line should be in imperative mood -│ -│ basedpyright -│ src/mypackage/core.py:17:12: error: Type "str | None" cannot be assigned to "str" -│ -└─ ✗ Violations found — fix before committing -``` - -Fix all reported violations before moving on to the next file. - -## Pre-commit hooks active for Python files - -The following hooks run on every `git commit` via pre-commit: - -| Hook | What it does | -|------|-------------| -| `ruff` | Lint + format check on staged `.py` files | -| `basedpyright` | Type check on the entire `src/` tree | -| `pre-bash-commit-quality.sh` | Scan staged `.py` files for hardcoded secrets and debug statements | - -The `pre-bash-block-no-verify.sh` hook blocks `git commit --no-verify`, ensuring -pre-commit hooks cannot be skipped. - -## Warning: `print()` in application code - -Using `print()` in `src/` is a violation (ruff `T201`). Use `structlog` instead: +--- +paths: + - "**/*.py" + - "**/*.pyi" +--- -```python -# Wrong -print(f"Processing order {order_id}") - -# Correct -log = structlog.get_logger() -log.info("processing_order", order_id=order_id) -``` - -`print()` is allowed in `scripts/` and in test files. - -## TDD enforcement hooks (PreToolUse) - -Register **at most one** of the source/test hooks below for `Write|Edit`. They use -the same scope; running both would warn and then block on the same condition. 
- -| Hook | Trigger | What it does | -|------|---------|----------------| -| `pre-write-src-require-test.sh` | `Write` or `Edit` on `src/<pkg>/<module>.py` | **Blocks** write if `tests/<pkg>/test_<module>.py` does not exist (strict TDD). **Registered by default** in `.claude/settings.json`. | -| `pre-write-src-test-reminder.sh` | Same | Warns only (non-blocking). Swap into `settings.json` **instead of** `pre-write-src-require-test.sh` if you want reminders without blocking. | -| `pre-bash-coverage-gate.sh` | `Bash` on `git commit` | Warns if test coverage is below 85% threshold | - -Both source/test hooks only check top-level package modules (`src/<pkg>/<name>.py`, -excluding `__init__.py`). Nested packages are skipped. - -### How to swap to the warn-only reminder - -The default strict hook (`pre-write-src-require-test.sh`) blocks any write to -`src/<pkg>/<module>.py` when the matching test file is missing. If you prefer a non-blocking -reminder, swap the registration in `.claude/settings.json`: - -1. Locate the `PreToolUse` entry whose `command` is - `bash .claude/hooks/pre-write-src-require-test.sh`. -2. Replace `pre-write-src-require-test.sh` with `pre-write-src-test-reminder.sh` in that - entry. -3. Register only one at a time. Registering both produces duplicate output on every write. - -## Refactor test guard (PostToolUse) - -| Hook | Trigger | What it does | -|------|---------|----------------| -| `post-edit-refactor-test-guard.sh` | `Edit` or `Write` on `src/**/*.py` | Reminds to run tests after every 3 source edits | - -Tracks edit count since last test run. Helps maintain GREEN during the REFACTOR phase. 
- -## Type-checking configuration - -basedpyright settings live in `pyproject.toml` under `[tool.basedpyright]`: - -```toml -[tool.basedpyright] -pythonVersion = "3.11" -typeCheckingMode = "standard" -reportMissingTypeStubs = false -``` +# Python Hooks -Do not weaken `typeCheckingMode` or add broad `# type: ignore` comments without -a specific error code and a comment explaining why it is necessary. +- `post-edit-python.sh` runs ruff + basedpyright after every `.py` edit — fix all violations before moving on. +- `pre-write-src-require-test.sh` blocks writing `src/<pkg>/<module>.py` if a matching test file is missing. +- `pre-bash-coverage-gate.sh` warns before `git commit` if coverage is below 85%. +- Do not weaken `typeCheckingMode` in `pyproject.toml`; never add broad `# type: ignore` without a specific error code. diff --git a/.claude/rules/python/patterns.md b/.claude/rules/python/patterns.md deleted file mode 100644 index a260b19..0000000 --- a/.claude/rules/python/patterns.md +++ /dev/null @@ -1,137 +0,0 @@ -# Python Patterns - -# applies-to: **/*.py, **/*.pyi - -> This file extends [common/patterns.md](../common/patterns.md) with Python-specific content. - -## Dataclasses as data transfer objects - -Use `@dataclass` (or `@dataclass(frozen=True)`) for plain data containers. -Use `NamedTuple` when immutability and tuple unpacking are both needed: - -```python -from dataclasses import dataclass, field - -@dataclass -class CreateOrderRequest: - user_id: int - items: list[str] = field(default_factory=list) - notes: str | None = None - -@dataclass(frozen=True) -class Money: - amount: float - currency: str = "USD" -``` - -## Protocol for duck typing - -Prefer `Protocol` over abstract base classes for interface definitions: - -```python -from typing import Protocol - -class Repository(Protocol): - def find_by_id(self, id: int) -> dict | None: ... - def save(self, entity: dict) -> dict: ... - def delete(self, id: int) -> None: ... 
-``` - -## Context managers for resource management - -Use context managers (`with` statement) for all resources that need cleanup: - -```python -# Files -with open(path, encoding="utf-8") as fh: - content = fh.read() - -# Custom resources: implement __enter__ / __exit__ or use @contextmanager -from contextlib import contextmanager - -@contextmanager -def managed_connection(url: str): - conn = connect(url) - try: - yield conn - finally: - conn.close() -``` - -## Generators for lazy evaluation - -Use generators instead of building full lists when iterating once: - -```python -def read_lines(path: Path) -> Generator[str, None, None]: - with open(path, encoding="utf-8") as fh: - yield from fh - -# Caller controls materialisation -lines = list(read_lines(log_path)) # full list when needed -first_error = next( - (l for l in read_lines(log_path) if "ERROR" in l), None -) -``` - -## Dependency injection over globals - -Pass dependencies as constructor arguments or function parameters. Avoid module-level -singletons that cannot be replaced in tests: - -```python -# Preferred -class OrderService: - def __init__(self, repo: Repository, logger: BoundLogger) -> None: - self._repo = repo - self._log = logger - -# Avoid -class OrderService: - _repo = GlobalRepository() # hard to test, hard to swap -``` - -## Configuration objects - -Centralise configuration in a typed dataclass or pydantic model loaded once at startup: - -```python -@dataclass(frozen=True) -class AppConfig: - database_url: str - log_level: str = "INFO" - max_retries: int = 3 - - @classmethod - def from_env(cls) -> "AppConfig": - return cls( - database_url=os.environ["DATABASE_URL"], - log_level=os.environ.get("LOG_LEVEL", "INFO"), - ) -``` - -## Exception hierarchy - -Define a project-level base exception and derive domain-specific exceptions from it: - -```python -class AppError(Exception): - """Base class for all application errors.""" - -class ConfigError(AppError): - """Raised when configuration is missing 
or invalid.""" - -class DatabaseError(AppError): - """Raised when a database operation fails.""" -``` - -Catch `AppError` at the top level; catch specific subclasses where recovery is possible. - -## `__all__` for public API - -Define `__all__` in every package `__init__.py` to make the public interface explicit: - -```python -# src/mypackage/__init__.py -__all__ = ["AppContext", "configure_logging", "AppError"] -``` diff --git a/.claude/rules/python/security.md b/.claude/rules/python/security.md deleted file mode 100644 index d0f7bd6..0000000 --- a/.claude/rules/python/security.md +++ /dev/null @@ -1,109 +0,0 @@ -# Python Security - -# applies-to: **/*.py, **/*.pyi - -> This file extends [common/security.md](../common/security.md) with Python-specific content. - -## Secret management - -Load secrets from environment variables; never hardcode them: - -```python -import os - -# Correct: fails immediately with a clear error if the secret is missing -api_key = os.environ["OPENAI_API_KEY"] - -# Wrong: silently falls back to an empty string, masking misconfiguration -api_key = os.environ.get("OPENAI_API_KEY", "") -``` - -For development, use `python-dotenv` to load a `.env` file that is listed in `.gitignore`: - -```python -from dotenv import load_dotenv -load_dotenv() # reads .env if present; silently skips if absent -api_key = os.environ["OPENAI_API_KEY"] -``` - -Never commit `.env` files. Add them to `.gitignore` and document required variables in -`.env.example` with placeholder values. - -## Input validation - -Validate all external inputs at the boundary before passing them into application logic: - -```python -# Correct: validate early, raise with context -def process_order(order_id: str) -> Order: - if not order_id.isalnum(): - raise ValueError(f"Invalid order ID format: {order_id!r}") - ... 
- -# Wrong: trusting input, using it unvalidated -def process_order(order_id: str) -> Order: - return db.query(f"SELECT * FROM orders WHERE id = '{order_id}'") -``` - -## SQL injection prevention - -Always use parameterised queries: - -```python -# Correct -cursor.execute("SELECT * FROM users WHERE email = %s", (email,)) - -# Wrong — SQL injection vulnerability -cursor.execute(f"SELECT * FROM users WHERE email = '{email}'") -``` - -## Path traversal prevention - -Sanitise file paths from user input: - -```python -from pathlib import Path - -base_dir = Path("/app/uploads").resolve() -user_path = (base_dir / user_input).resolve() - -if not str(user_path).startswith(str(base_dir)): - raise PermissionError("Path traversal detected") -``` - -## Static security analysis - -Run **bandit** before release to catch common Python security issues: - -```bash -uv run bandit -r src/ -ll # report issues at medium severity and above -``` - -Bandit is not a substitute for code review, but it catches common patterns like -hardcoded passwords, use of `shell=True` in subprocess calls, and insecure random. - -## Subprocess calls - -Avoid `shell=True` when calling external processes; it enables shell injection: - -```python -import subprocess - -# Correct: list form, no shell expansion -result = subprocess.run(["git", "status"], capture_output=True, check=True) - -# Wrong: shell=True + user-controlled input = injection -result = subprocess.run(f"git {user_cmd}", shell=True, check=True) -``` - -## Cryptography - -- Do not implement cryptographic primitives. Use `cryptography` or `hashlib` for - standard algorithms. -- Use `secrets` (stdlib) for generating tokens, not `random`. -- Use bcrypt or Argon2 for password hashing; never SHA-256 or MD5 for passwords. 
- -```python -import secrets -token = secrets.token_urlsafe(32) # cryptographically secure -``` diff --git a/.claude/rules/python/testing.md b/.claude/rules/python/testing.md index 176667d..f6fbabe 100644 --- a/.claude/rules/python/testing.md +++ b/.claude/rules/python/testing.md @@ -1,148 +1,13 @@ -# Python Testing - -# applies-to: **/*.py, **/*.pyi - -> This file extends [common/testing.md](../common/testing.md) with Python-specific content. - -## Framework: pytest - -Use **pytest** exclusively. Do not use `unittest.TestCase` for new tests. Mixing styles -in the same file is not allowed. - -## Directory layout - -``` -tests/ -├── conftest.py # shared fixtures -├── test_core.py # tests for src/<package>/core.py -├── <package>/ -│ ├── test_module_a.py -│ └── test_module_b.py -└── test_imports.py # smoke test: every public symbol is importable -``` - -### Meta-repo (this template repository) - -Generated projects follow the layout above. **This Copier meta-repository** has no `src/` tree; -Python under test lives in `scripts/`. Keep pytest modules organized as: - -``` -tests/ -├── conftest.py # top-level shared fixtures -├── unit/ -│ ├── conftest.py -│ └── test_<script>.py # mirrors scripts/<script>.py -├── integration/ -│ ├── conftest.py -│ └── test_template.py # Copier copy/update integration suite -└── e2e/ - └── conftest.py # placeholder for future e2e tests -``` - -**No `__init__.py` files** — the flat layout (`pythonpath = ["."]`) avoids package nesting. - -**No shared constants file** — define path constants directly in each test file that needs them: - -```python -from pathlib import Path - -REPO_ROOT = Path(__file__).resolve().parent.parent.parent -TEMPLATE_ROOT = REPO_ROOT / "template" -COPIER_YAML = REPO_ROOT / "copier.yml" -``` - -Do not flatten new tests into the top level of `tests/` unless they truly have no script or -integration home. - -Files must be named `test_<module>.py`. Test functions must start with `test_`. 
- -## Running tests - -```bash -just test # pytest -q -just test-parallel # pytest -q -n auto (pytest-xdist) -just coverage # pytest --cov=src --cov-report=term-missing -``` - -## Fixtures - -Define shared fixtures in `conftest.py`. Use function scope unless a fixture is -explicitly expensive and safe to share: +--- +paths: + - "**/*.py" + - "**/*.pyi" +--- -```python -# conftest.py -import pytest -from mypackage.core import AppContext - -@pytest.fixture() -def app_context() -> AppContext: - ctx = AppContext() - yield ctx - ctx.close() -``` - -Use `tmp_path` (built-in pytest fixture) for temporary file operations. Never use -`/tmp` directly in tests. - -## Parametrised tests - -Use `@pytest.mark.parametrize` instead of looping inside a test function: - -```python -@pytest.mark.parametrize(("input_val", "expected"), [ - ("hello world", "hello-world"), - (" leading", "leading"), - ("UPPER", "upper"), -]) -def test_slugify(input_val: str, expected: str) -> None: - assert slugify(input_val) == expected -``` - -## Marks for categorisation - -```python -@pytest.mark.unit -def test_pure_logic(): ... - -@pytest.mark.integration -def test_reads_from_disk(): ... - -@pytest.mark.slow -def test_full_pipeline(): ... -``` - -Run a subset: `uv run pytest -m unit`. - -## Mocking - -Use `unittest.mock` (or `pytest-mock`'s `mocker` fixture). Mock at the boundary — -mock the I/O call, not an internal helper: - -```python -def test_fetch_user_returns_none_when_not_found(mocker): - mocker.patch("mypackage.db.execute", return_value=[]) - result = fetch_user(user_id=99) - assert result is None -``` - -Do not patch implementation details that are not part of the public interface. - -## Coverage requirements - -- Overall threshold: ≥ 80 % (≥ 85 % for generated projects). -- New modules introduced in a PR must meet the threshold. -- Coverage is measured on `src/` only; test files are excluded from the report. - -Configuration lives in `pyproject.toml` under `[tool.coverage.*]`. 
- -## What to assert - -- Assert on **observable behaviour**, not implementation steps. -- Avoid asserting on log output or internal state unless that is the feature under test. -- Use `pytest.raises` for exception testing: +# Python Testing -```python -def test_raises_on_invalid_email(): - with pytest.raises(ValueError, match="invalid email"): - validate_email("not-an-email") -``` +- Use **pytest** exclusively; never `unittest.TestCase` for new tests. +- Every test file must set `pytestmark = pytest.mark.<marker>` at module level — `--strict-markers` is enabled; unmarked tests fail collection. +- Valid markers: `unit`, `integration`, `e2e`, `regression`, `slow`, `smoke` (defined in `pyproject.toml`). +- Test layout: `tests/{unit,integration,e2e}/test_<module>.py` mirroring `src/<pkg>/<module>.py`. +- Coverage threshold: ≥ 85% on `src/`; run `just coverage` to verify. diff --git a/.claude/rules/yaml/conventions.md b/.claude/rules/yaml/conventions.md index 0706c20..027af16 100644 --- a/.claude/rules/yaml/conventions.md +++ b/.claude/rules/yaml/conventions.md @@ -1,76 +1,12 @@ -# YAML Conventions - -# applies-to: **/*.yml, **/*.yaml - -YAML files in this repository include `copier.yml`, GitHub Actions workflows -(`.github/workflows/*.yml`), `mkdocs.yml`, and `.pre-commit-config.yaml`. - -## Formatting - -- Indentation: 2 spaces. Never use tabs. -- No trailing whitespace on any line. -- End each file with a single newline. -- Wrap long string values with block scalars (`|` or `>`) rather than quoted strings - when the value spans multiple lines. - -```yaml -# Preferred for multiline strings -description: >- - A long description that wraps cleanly - and is easy to read in source. - -# Avoid -description: "A long description that wraps cleanly and is easy to read in source." -``` - -## Quoting strings - -- Quote strings that contain YAML special characters: `:`, `{`, `}`, `[`, `]`, - `,`, `#`, `&`, `*`, `?`, `|`, `-`, `<`, `>`, `=`, `!`, `%`, `@`, `\`. 
-- Quote strings that could be misinterpreted as other types (`"true"`, `"1.0"`, - `"null"`). -- Do not quote simple alphanumeric strings unnecessarily. - -## Booleans and nulls +--- +paths: + - "**/*.yml" + - "**/*.yaml" +--- -Use YAML 1.2 style (as recognised by Copier and most modern parsers): -- Boolean: `true` / `false` (lowercase, unquoted). -- Null: `null` or `~`. -- Avoid the YAML 1.1 aliases (`yes`, `no`, `on`, `off`) — they are ambiguous. - -## Comments - -- Use `#` comments to explain non-obvious configuration choices. -- Separate logical sections with a blank line and a comment header: - -```yaml -# ------------------------------------------------------------------------- -# Post-generation tasks -# ------------------------------------------------------------------------- -_tasks: - - command: uv lock -``` - -## GitHub Actions specific - -- Pin third-party actions to a full commit SHA, not a floating tag: - ```yaml - uses: actions/checkout@v4 # acceptable if you review tag-SHA mapping - uses: actions/checkout@abc1234 # preferred for production workflows - ``` -- Use `env:` at the step or job level for environment variables; avoid top-level `env:` - unless the variable is used across all jobs. -- Name every step with a descriptive `name:` field. -- Prefer `actions/setup-python` with an explicit `python-version` matrix over hardcoded - versions. - -## copier.yml specific - -See [copier/template-conventions.md](../copier/template-conventions.md) for Copier-specific -YAML conventions. Rules here cover general YAML style; Copier semantics are covered there. - -## .pre-commit-config.yaml specific +# YAML Conventions -- Pin `rev:` to a specific version tag, not `HEAD` or `latest`. -- Group hooks by repository with a blank line between repos. -- List hooks in logical order: formatters before linters, linters before type checkers. +- Indentation: 2 spaces; never tabs. No trailing whitespace. End files with a newline. 
+- Booleans: `true`/`false` (YAML 1.2 lowercase); avoid `yes`/`no`/`on`/`off`. +- Quote strings containing YAML special characters; do not over-quote simple alphanumeric strings. +- GitHub Actions: pin third-party actions to a full commit SHA; name every step with `name:`. diff --git a/.claude/settings.json b/.claude/settings.json index f852a45..6b0526f 100644 --- a/.claude/settings.json +++ b/.claude/settings.json @@ -1,9 +1,11 @@ { + "plansDirectory": ".claude/plans/", "permissions": { "allow": [ "Bash(just:*)", "Bash(uv:*)", "Bash(copier:*)", + "Bash(gh:*)", "Bash(git status)", "Bash(git diff:*)", "Bash(git log:*)", diff --git a/.claude/skills/bash-guide/SKILL.md b/.claude/skills/bash-guide/SKILL.md new file mode 100644 index 0000000..78e287f --- /dev/null +++ b/.claude/skills/bash-guide/SKILL.md @@ -0,0 +1,208 @@ +--- +name: bash-guide +description: >- + Expert bash scripting skill for writing, maintaining, and updating + production-quality shell scripts. Use this skill whenever the user asks to + write a bash script, create a shell utility, automate a task with shell, + update or refactor an existing .sh file, add logging to a script, handle + errors in bash, set up script structure, or follow shell best practices. + Also triggers for: "write me a script", "shell automation", "bash function", + "cron job script", "deployment script", "wrapper script", or any task + producing a .sh or executable shell file. Covers Google Shell Style Guide + compliance, dual-mode logging (human vs LLM execution contexts), and + LLM-agent readability patterns. +--- + +# Bash Guide Skill + +Produce bash scripts that are correct (safe defaults, proper quoting, reliable +error handling), readable by humans (clear structure, comments, named variables), +readable by LLM agents (structured output, machine-parseable logs, deterministic +exit codes), and maintainable (modular, documented, easy to update). 
+ +Authority: [Google Shell Style Guide](https://google.github.io/styleguide/shellguide.html). + +## When to use bash (and when not to) + +Use bash when calling other utilities with minimal data transformation, writing +small glue scripts (under ~150 lines), or automating sequential shell operations. + +Switch to Python/Go/etc. when the script exceeds ~150 lines or has complex +control flow, heavy data manipulation is required, performance matters, or data +structures beyond arrays are needed. + +## Workflow + +### 1. Determine execution context + +Before writing, check (or ask) for these environment variables, which drive how +logging behaves: + +``` +EXECUTION_CONTEXT "llm" | "human" (default: "human") +HUMAN_LOG_LEVEL "debug"|"info"|"warn"|"error" (default: "info") +LLM_LOG_LEVEL "debug"|"info"|"warn"|"error" (default: "info") +``` + +### 2. Start from the skeleton + +For any new script, copy [templates/script-skeleton.sh](templates/script-skeleton.sh). +It includes `set -euo pipefail`, standard constants, a minimal dual-mode logging +library, `usage()`, `die()`, exit-code constants, and an arg-parsing `main()`. + +Every script follows this top-to-bottom layout: + +1. Shebang + `set -euo pipefail` +2. File header comment (name, description, usage, version) +3. Global constants (`readonly`) +4. Sourced libraries (if any) +5. Logging functions (or source shared lib) +6. Helper / utility functions +7. Core logic functions +8. `main()` function +9. `main "$@"` as the last non-comment line + +For the rationale, header format, and function-comment convention, see +[references/structure.md](references/structure.md). + +### 3. 
Pick the right logging mode + +**`EXECUTION_CONTEXT=human` (default)** — colorized, human-friendly messages to +STDERR with ISO-8601 timestamps: + +``` +[2026-04-15T10:23:01Z] [INFO ] Starting deploy.sh v2.1.0 +[2026-04-15T10:23:02Z] [WARN ] Config file not found — using defaults +``` + +**`EXECUTION_CONTEXT=llm`** — structured key=value lines to STDERR. No color +codes (they corrupt LLM parsing). One event per line. Machine-parseable: + +``` +[2026-04-15T10:23:01Z] LEVEL=INFO SCRIPT=deploy.sh MSG="Starting deploy.sh v2.1.0" +[2026-04-15T10:23:02Z] LEVEL=WARN SCRIPT=deploy.sh MSG="Config file not found" +``` + +Rules for LLM mode (critical — violations break downstream agents): + +- No ANSI escape codes +- All log output to STDERR; script data/payload output to STDOUT only +- Short, unambiguous `KEY=VALUE` fields +- No multi-line log messages +- Exit codes documented and deterministic + +For the full logging library (rotation, JSON mode, structured fields), see +[references/logging.md](references/logging.md). + +### 4. Follow Google naming & formatting + +| Element | Convention | Example | +|-----------------|-------------------------|--------------------------------| +| Functions | `snake_case` | `deploy_artifact()` | +| Local variables | `snake_case` | `local file_path` | +| Constants / env | `UPPER_SNAKE_CASE` | `readonly MAX_RETRIES=3` | +| Source files | `lowercase_with_unders` | `deploy_helpers.sh` | +| Indentation | 2 spaces, no tabs | — | +| Line length | ≤ 80 chars | (URLs/paths exempt) | +| Variable expand | `"${var}"` always | `"${array[@]}"` for arrays | +| Numeric compare | `(( ))` | `(( count > threshold ))` | + +Control-flow layout: `; then` and `; do` on the same line as `if`/`for`/`while`. + +### 5. 
Handle errors deterministically
+
+`set -euo pipefail` goes at the top of every script:
+
+- `-e` — exit on unhandled non-zero return
+- `-u` — error on unset variable references
+- `-o pipefail` — a pipeline fails if any segment fails
+
+Prefer inline checks for critical operations:
+
+```bash
+if ! cp "${src}" "${dst}"; then
+  die "Failed to copy ${src} to ${dst}"
+fi
+```
+
+For pipelines, consult `PIPESTATUS`. Capture it inside an `if` condition so that
+`set -e` with `pipefail` does not exit the script before you can inspect it:
+
+```bash
+if ! tar -cf - ./* | gzip > archive.tar.gz; then
+  pipe_status=("${PIPESTATUS[@]}")
+  die "Archive failed (tar=${pipe_status[0]}, gzip=${pipe_status[1]})"
+fi
+```
+
+Document exit codes as constants (`EXIT_OK`, `EXIT_USAGE`, `EXIT_IO`, …) so LLM
+callers can branch on them. For LLM-executable scripts, emit a final structured
+line containing `EXIT_CODE=<n> REASON="..."` before `exit`.
+
+For arg-parsing patterns (`getopts` vs. manual), `trap` cleanup, array idioms,
+heredocs, and retry loops, see [references/patterns.md](references/patterns.md).
+
+### 6. LLM-agent readability checklist
+
+When a script will be invoked or parsed by an LLM agent, verify:
+
+- [ ] `EXECUTION_CONTEXT=llm` is supported and tested
+- [ ] All log output goes to STDERR; payload data goes to STDOUT
+- [ ] Log lines are single-line, key=value structured in llm mode
+- [ ] Exit codes are documented and consistent
+- [ ] `--help` / `-h` outputs a clean block an agent can parse
+- [ ] Functions have header comments (Globals / Arguments / Outputs / Returns)
+- [ ] No interactive prompts (`read -p`) when `EXECUTION_CONTEXT=llm`
+- [ ] Script signals completion clearly (final log line or structured `EXIT_CODE`)
+
+## Maintaining existing scripts
+
+When asked to modify an existing script:
+
+1. **Read fully first** — understand existing style, logging approach, and
+   variable names before editing.
+2. **Preserve style** — if the script uses 4-space indent, keep it. Don't
+   impose new style on partial edits.
+3. 
**Locate insertion points** — add functions near existing ones of the same + type; never insert executable code between function definitions. +4. **Update the version string** if present in the header. +5. **Mark incomplete work** with `# TODO(agent): <reason>`. +6. **Mentally trace** the modified path with a sample input. + +For larger refactors, propose a plan (what will change, what stays, new deps or +env vars introduced) before executing. + +## Quick patterns cheat-sheet + +```bash +# Safe temp dir with cleanup +readonly TMP_DIR="$(mktemp -d)" +trap 'rm -rf "${TMP_DIR}"' EXIT + +# Read file into array safely (bash 4+) +readarray -t lines < <(grep "pattern" "${file}") + +# Check required env vars +for var in REQUIRED_VAR_1 REQUIRED_VAR_2; do + [[ -z "${!var:-}" ]] && die "Required env var not set: ${var}" +done + +# Retry loop +attempt=0 +until command_to_try || (( ++attempt >= MAX_RETRIES )); do + log_warn "Attempt ${attempt} failed, retrying..." + sleep $(( attempt * 2 )) +done +(( attempt >= MAX_RETRIES )) && die "Failed after ${MAX_RETRIES} attempts" +``` + +For the full pattern library, see [references/patterns.md](references/patterns.md). 
+ +## Quick reference: where to go deeper + +| Topic | Reference file | +|------------------------------------------------------|----------------------------------------------------------------| +| Script skeleton, headers, `main()` convention | [references/structure.md](references/structure.md) | +| Full dual-context logging library | [references/logging.md](references/logging.md) | +| Arg parsing, traps, arrays, retries, heredocs | [references/patterns.md](references/patterns.md) | +| Copy-paste starter script | [templates/script-skeleton.sh](templates/script-skeleton.sh) | diff --git a/.claude/skills/bash-guide/references/logging.md b/.claude/skills/bash-guide/references/logging.md new file mode 100644 index 0000000..f91047b --- /dev/null +++ b/.claude/skills/bash-guide/references/logging.md @@ -0,0 +1,209 @@ +# Logging library reference + +## Philosophy + +All scripts implement the **dual-context logging pattern**: + +| Context | Format | Audience | Destination | +|---|---|---|---| +| `human` | Colorized, timestamped, plain English | Developers/operators | STDERR | +| `llm` | Structured `KEY=VALUE`, no ANSI | LLM agents, CI parsers | STDERR | + +**Data output always goes to STDOUT. Log output always goes to STDERR.** + +--- + +## Full Logging Library (embed or source) + +```bash +#!/bin/bash +# lib/logging.sh — portable structured logging for human and LLM contexts +# Source this file; do not execute directly. 
+
+# Prevent double-sourcing. This guard must run before the readonly
+# assignments below, or a second `source` would abort on re-declaring
+# a readonly variable.
+[[ -n "${_LOGGING_LOADED:-}" ]] && return 0
+readonly _LOGGING_LOADED=1
+
+# ─── Config (read from environment, with defaults) ────────────────────────────
+readonly _LOG_EXECUTION_CONTEXT="${EXECUTION_CONTEXT:-human}"
+readonly _LOG_HUMAN_LEVEL="${HUMAN_LOG_LEVEL:-info}"
+readonly _LOG_LLM_LEVEL="${LLM_LOG_LEVEL:-info}"
+
+# ─── Level → numeric mapping ──────────────────────────────────────────────────
+_log_level_value() {
+  case "${1,,}" in  # lowercase comparison
+    debug) printf '%d' 0 ;;
+    info)  printf '%d' 1 ;;
+    warn)  printf '%d' 2 ;;
+    error) printf '%d' 3 ;;
+    *)     printf '%d' 1 ;;
+  esac
+}
+
+# ─── ANSI colors (human mode only) ───────────────────────────────────────────
+_LOG_COLOR_DEBUG='\033[0;36m'  # cyan
+_LOG_COLOR_INFO='\033[0;32m'   # green
+_LOG_COLOR_WARN='\033[0;33m'   # yellow
+_LOG_COLOR_ERROR='\033[0;31m'  # red
+_LOG_COLOR_RESET='\033[0m'
+
+# ─── Core log function ────────────────────────────────────────────────────────
+#######################################
+# Emit a log message at the given level.
+# Globals:
+#   _LOG_EXECUTION_CONTEXT
+#   _LOG_HUMAN_LEVEL
+#   _LOG_LLM_LEVEL
+#   SCRIPT_NAME (optional, falls back to basename $0)
+# Arguments:
+#   $1 - Log level: debug|info|warn|error
+#   $@ - Message
+# Outputs:
+#   Writes to STDERR only. No STDOUT. 
+####################################### +log() { + local level="${1,,}"; shift + local message="$*" + local script_name="${SCRIPT_NAME:-$(basename "$0")}" + local timestamp + timestamp="$(date -u +'%Y-%m-%dT%H:%M:%SZ')" + + # Determine active threshold + local active_threshold + if [[ "${_LOG_EXECUTION_CONTEXT}" == "llm" ]]; then + active_threshold="${_LOG_LLM_LEVEL}" + else + active_threshold="${_LOG_HUMAN_LEVEL}" + fi + + # Filter below threshold + local msg_val active_val + msg_val="$(_log_level_value "${level}")" + active_val="$(_log_level_value "${active_threshold}")" + (( msg_val < active_val )) && return 0 + + if [[ "${_LOG_EXECUTION_CONTEXT}" == "llm" ]]; then + # ── LLM mode: structured, parseable, no color ────────────────────────── + # Format: [ISO8601] LEVEL=X SCRIPT=Y MSG="Z" + # LLM agents can extract fields with: grep -oP 'LEVEL=\K\S+' + printf '[%s] LEVEL=%s SCRIPT=%s MSG="%s"\n' \ + "${timestamp}" \ + "${level^^}" \ + "${script_name}" \ + "${message}" \ + >&2 + else + # ── Human mode: colorized, labeled ───────────────────────────────────── + local color_var="_LOG_COLOR_${level^^}" + local color="${!color_var:-}" + printf "${color}[%s] [%-5s] %s${_LOG_COLOR_RESET}\n" \ + "${timestamp}" \ + "${level^^}" \ + "${message}" \ + >&2 + fi +} + +# ─── Convenience wrappers ───────────────────────────────────────────────────── +log_debug() { log "debug" "$@"; } +log_info() { log "info" "$@"; } +log_warn() { log "warn" "$@"; } +log_error() { log "error" "$@"; } + +# ─── Structured exit (always use for final exit) ────────────────────────────── +####################################### +# Log an error and exit with a code. 
+# Arguments: +# $1 - Exit code (integer) +# $@ - Error message +####################################### +die() { + local code="$1"; shift + local message="$*" + local script_name="${SCRIPT_NAME:-$(basename "$0")}" + + log_error "${message}" + + # In LLM mode, emit a final structured exit line so agents can detect it + if [[ "${_LOG_EXECUTION_CONTEXT}" == "llm" ]]; then + printf 'EXIT_CODE=%d SCRIPT=%s REASON="%s"\n' \ + "${code}" "${script_name}" "${message}" >&2 + fi + exit "${code}" +} +``` + +--- + +## Log Level Decision Guide + +| Use level | When | +|---|---| +| `debug` | Step-by-step internals, variable values, loop iterations. Not shown in default runs. | +| `info` | Normal progress milestones: "Started", "Processing X", "Done". | +| `warn` | Something unexpected but recoverable: missing optional file, using default value. | +| `error` | Fatal condition. Always followed by `die` or `exit`. | + +--- + +## LLM Log Parsing Patterns + +An LLM agent (or script) reading output in `llm` mode can extract fields: + +```bash +# Extract all warnings from a script run +script_output=$(EXECUTION_CONTEXT=llm ./deploy.sh 2>&1 1>/dev/null) +echo "${script_output}" | grep 'LEVEL=WARN' + +# Extract exit code from final line +exit_line=$(echo "${script_output}" | grep '^EXIT_CODE=') +exit_code=$(echo "${exit_line}" | grep -oP 'EXIT_CODE=\K[0-9]+') + +# Parse as key=value pairs (awk) +echo "${script_output}" | awk -F'[ =]' '/LEVEL=ERROR/ { print $0 }' +``` + +--- + +## Reducing Log Noise in LLM Mode + +When `EXECUTION_CONTEXT=llm` and `LLM_LOG_LEVEL=info` (the defaults), scripts should: + +1. **Suppress all debug output** (default threshold is `info`) +2. **Emit only meaningful milestones** at `info` +3. **Never emit progress bars, spinners, or interactive prompts** +4. **Suppress command output** with `>/dev/null` for noisy utilities: + ```bash + apt-get install -y curl >/dev/null 2>&1 \ + || die 3 "Failed to install curl" + ``` +5. 
**One log event per logical operation**, not per loop iteration (unless debug) + +--- + +## Minimal Inline Version (for short scripts) + +For scripts under ~80 lines where sourcing a library is overkill: + +```bash +# Compact inline logging +_LL="${LLM_LOG_LEVEL:-info}"; _HL="${HUMAN_LOG_LEVEL:-info}" +log() { + local lvl="$1"; shift + local ts; ts="$(date -u +'%Y-%m-%dT%H:%M:%SZ')" + local thr; [[ "${EXECUTION_CONTEXT:-human}" == "llm" ]] && thr="${_LL}" || thr="${_HL}" + local lv; case "${lvl}" in debug)lv=0;;info)lv=1;;warn)lv=2;;*)lv=3;;esac + local tv; case "${thr}" in debug)tv=0;;info)tv=1;;warn)tv=2;;*)tv=3;;esac + (( lv < tv )) && return 0 + if [[ "${EXECUTION_CONTEXT:-human}" == "llm" ]]; then + printf '[%s] LEVEL=%s MSG="%s"\n' "${ts}" "${lvl^^}" "$*" >&2 + else + printf '[%s] [%-5s] %s\n' "${ts}" "${lvl^^}" "$*" >&2 + fi +} +log_debug() { log debug "$@"; } +log_info() { log info "$@"; } +log_warn() { log warn "$@"; } +log_error() { log error "$@"; } +die() { log_error "$@"; exit 1; } +``` \ No newline at end of file diff --git a/.claude/skills/bash-guide/references/patterns.md b/.claude/skills/bash-guide/references/patterns.md new file mode 100644 index 0000000..e4b0a02 --- /dev/null +++ b/.claude/skills/bash-guide/references/patterns.md @@ -0,0 +1,320 @@ +# Bash patterns reference + +## Long-option Argument Parsing + +```bash +parse_args() { + while [[ $# -gt 0 ]]; do + case "$1" in + -h|--help) + usage + exit 0 + ;; + -v|--verbose) + export HUMAN_LOG_LEVEL="debug" + export LLM_LOG_LEVEL="debug" + shift + ;; + -n|--dry-run) + readonly DRY_RUN=true + shift + ;; + --output=*) + readonly OUTPUT_FILE="${1#--output=}" + shift + ;; + --output) + readonly OUTPUT_FILE="$2" + shift 2 + ;; + --) + shift + break + ;; + -*) + die 1 "Unknown option: $1" + ;; + *) + # First positional arg + readonly INPUT_FILE="$1" + shift + ;; + esac + done + + # Validate required arguments + [[ -z "${INPUT_FILE:-}" ]] && die 1 "INPUT_FILE is required. See --help." 
+} +``` + +--- + +## Safe Array Patterns + +```bash +# Declare explicitly +declare -a items=() + +# Append +items+=("new item with spaces") +items+=("another item") + +# Iterate safely (never unquoted) +for item in "${items[@]}"; do + process "${item}" +done + +# Pass to a command +run_cmd "${items[@]}" + +# Length +echo "Count: ${#items[@]}" + +# Slice +subset=("${items[@]:1:3}") # items[1], items[2], items[3] + +# Read file into array (bash 4+) +readarray -t lines < <(cat "${file}") + +# Join array into string +joined="$(IFS=','; echo "${items[*]}")" +``` + +--- + +## Robust File / Path Handling + +```bash +# Always quote paths +cp "${src_file}" "${dst_dir}/" + +# Check existence before use +[[ -f "${file}" ]] || die 3 "File not found: ${file}" +[[ -d "${dir}" ]] || die 3 "Directory not found: ${dir}" +[[ -r "${file}" ]] || die 3 "Cannot read: ${file}" +[[ -w "${dir}" ]] || die 3 "Cannot write to: ${dir}" + +# Wildcard expansion — use explicit paths to avoid -flag filenames +for f in ./*; do + process_file "${f}" +done + +# Temp directory with guaranteed cleanup +readonly TMP_DIR="$(mktemp -d)" +trap 'rm -rf "${TMP_DIR}"' EXIT + +# Resolve canonical path +real_path="$(realpath "${input_path}")" +``` + +--- + +## Heredoc Patterns + +```bash +# Basic heredoc +cat <<'EOF' +This text won't expand $VARIABLES. +EOF + +# Variable-expanding heredoc +cat <<EOF +Hostname: $(hostname) +User: ${USER} +EOF + +# Heredoc into a variable +read -r -d '' config_content <<'EOF' || true +key1=value1 +key2=value2 +EOF + +# Tab-indented heredoc (use real tabs) +if true; then + cat <<-EOF + This is indented with tabs. + Tabs are stripped by <<-. + EOF +fi +``` + +--- + +## Retry with Backoff + +```bash +####################################### +# Retry a command with exponential backoff. 
+# Globals: +# None +# Arguments: +# $1 - Max attempts (integer) +# $@ - Command and its arguments +# Returns: +# Exit code of the command, or 1 if all attempts fail +####################################### +retry() { + local max_attempts="$1"; shift + local attempt=1 + local wait_seconds=1 + + until "$@"; do + if (( attempt >= max_attempts )); then + log_error "Command failed after ${max_attempts} attempts: $*" + return 1 + fi + log_warn "Attempt ${attempt}/${max_attempts} failed. Retrying in ${wait_seconds}s..." + sleep "${wait_seconds}" + (( attempt++ )) + (( wait_seconds = wait_seconds * 2 )) + done + log_debug "Command succeeded on attempt ${attempt}: $*" +} + +# Usage: +retry 5 curl -sf "https://example.com/api" +``` + +--- + +## String Manipulation (prefer builtins over sed/awk) + +```bash +# Trim prefix +path="${full_path#/tmp/}" + +# Trim suffix +name="${filename%.sh}" + +# Replace first match +new="${string/old/new}" + +# Replace all matches +new="${string//old/new}" + +# Uppercase / lowercase (bash 4+) +upper="${var^^}" +lower="${var,,}" + +# Substring: ${var:start:length} +first4="${var:0:4}" + +# Length +len="${#var}" + +# Default value if unset or empty +val="${MY_VAR:-default}" + +# Error if unset +val="${REQUIRED_VAR:?'REQUIRED_VAR must be set'}" + +# Regex match (use BASH_REMATCH) +if [[ "${input}" =~ ^([0-9]{4})-([0-9]{2})-([0-9]{2})$ ]]; then + year="${BASH_REMATCH[1]}" + month="${BASH_REMATCH[2]}" + day="${BASH_REMATCH[3]}" +fi +``` + +--- + +## Process Substitution vs Pipes + +```bash +# WRONG: pipe creates subshell; $last_line stays 'NULL' outside +last_line='NULL' +your_cmd | while read -r line; do + last_line="${line}" +done +echo "${last_line}" # prints NULL + +# RIGHT: process substitution keeps vars in current shell +last_line='NULL' +while read -r line; do + last_line="${line}" +done < <(your_cmd) +echo "${last_line}" # prints last non-empty line + +# ALSO RIGHT: readarray (bash 4+) +readarray -t lines < <(your_cmd) 
+last_line="${lines[-1]}" +``` + +--- + +## Numeric Arithmetic + +```bash +# Use (( )) for integer math — never expr or $[ ] +(( total = count * price )) +(( remainder = total % 7 )) +(( i += 1 )) +(( i-- )) + +# Use $(( )) in string contexts +echo "Total: $(( count * price ))" + +# Comparison +if (( a > b )); then + log_info "a is larger" +fi + +# Avoid standalone (( )) with set -e when result might be 0 (falsy) +# This exits with set -e when i is 0: (( i++ )) +# Safe pattern: +(( i++ )) || true +# Or use: i=$(( i + 1 )) +``` + +--- + +## Checking External Commands Safely + +```bash +# Existence check +if ! command -v docker &>/dev/null; then + die 2 "docker is required but not installed" +fi + +# Version check +docker_version="$(docker --version | grep -oP '\d+\.\d+\.\d+')" +required="20.10.0" +# Compare versions as dot-separated integers +version_ge() { + local a b + IFS='.' read -ra a <<< "$1" + IFS='.' read -ra b <<< "$2" + for i in 0 1 2; do + (( ${a[i]:-0} > ${b[i]:-0} )) && return 0 + (( ${a[i]:-0} < ${b[i]:-0} )) && return 1 + done + return 0 +} +version_ge "${docker_version}" "${required}" \ + || die 2 "docker >= ${required} required (found ${docker_version})" +``` + +--- + +## Dry-Run Pattern + +```bash +####################################### +# Execute a command, or print it if DRY_RUN=true. 
+# Globals: +# DRY_RUN +# Arguments: +# Command and all its arguments +####################################### +run_or_dry() { + if [[ "${DRY_RUN:-false}" == "true" ]]; then + log_info "[DRY-RUN] would run: $*" + return 0 + fi + "$@" +} + +# Usage: +run_or_dry rm -rf "${old_dir}" +run_or_dry kubectl apply -f "${manifest}" +``` \ No newline at end of file diff --git a/.claude/skills/bash-guide/references/structure.md b/.claude/skills/bash-guide/references/structure.md new file mode 100644 index 0000000..042e6f8 --- /dev/null +++ b/.claude/skills/bash-guide/references/structure.md @@ -0,0 +1,202 @@ +# Script structure reference + +## Complete File Header Format + +```bash +#!/bin/bash +# +# SCRIPT NAME: <script_name>.sh +# DESCRIPTION: <One-sentence description of what this script does> +# USAGE: ./<script_name>.sh [OPTIONS] <REQUIRED_ARG> +# +# OPTIONS: +# -h, --help Show this help and exit +# -v, --verbose Enable debug-level logging +# -n, --dry-run Simulate actions without making changes +# +# ENVIRONMENT VARIABLES: +# EXECUTION_CONTEXT llm|human Controls log format (default: human) +# HUMAN_LOG_LEVEL debug|info|warn|error (default: info) +# LLM_LOG_LEVEL debug|info|warn|error (default: info) +# +# EXIT CODES: +# 0 Success +# 1 Usage / argument error +# 2 Missing dependency +# 3 I/O error (file not found, permission denied) +# 4 Runtime error +# +# EXAMPLES: +# ./<script_name>.sh --help +# EXECUTION_CONTEXT=llm ./<script_name>.sh process input.txt +# +# AUTHOR: <name or "agent"> +# VERSION: 1.0.0 +# UPDATED: YYYY-MM-DD +``` + +--- + +## Function Comment Format (required for all non-trivial functions) + +```bash +####################################### +# Brief description of what this does. 
+# Globals: +# VARIABLE_READ (reads but doesn't modify) +# VARIABLE_WRITE (modifies) +# Arguments: +# $1 - Description of first argument +# $2 - Description of second argument (optional) +# Outputs: +# Writes result to STDOUT +# Writes errors/warnings to STDERR via log_* +# Returns: +# 0 on success +# 1 if <condition> +# 3 if file not found +####################################### +my_function() { + local arg1="$1" + local arg2="${2:-default_value}" + ... +} +``` + +--- + +## Canonical File Layout (annotated) + +``` +Line 1 : #!/bin/bash +Lines 2-N : File header comment block + +N+1 : set -euo pipefail +N+2 : (blank line) + +SECTION 1 — Constants + All readonly globals + EXECUTION_CONTEXT, HUMAN_LOG_LEVEL, LLM_LOG_LEVEL + Exit code constants + +SECTION 2 — Sourced libraries + source "${SCRIPT_DIR}/lib/logging.sh" (if external) + source "${SCRIPT_DIR}/lib/utils.sh" + +SECTION 3 — Logging functions + _log_level_value() + log() + log_debug() log_info() log_warn() log_error() + +SECTION 4 — Utility / helper functions + usage() + die() + cleanup_and_exit() + check_dependencies() + +SECTION 5 — Core logic functions + (named clearly, one responsibility each) + +SECTION 6 — main() + Argument parsing (while/case) + Dependency checks + Core function calls + Exit + +Last line : main "$@" +``` + +--- + +## Argument Parsing with getopts (POSIX-style) + +Use for simple short options: + +```bash +parse_args() { + local OPTIND opt + while getopts ":hvn" opt; do + case "${opt}" in + h) usage; exit 0 ;; + v) export HUMAN_LOG_LEVEL="debug"; export LLM_LOG_LEVEL="debug" ;; + n) readonly DRY_RUN=true ;; + :) die "Option -${OPTARG} requires an argument" ;; + \?) die "Unknown option: -${OPTARG}" ;; + esac + done + shift $(( OPTIND - 1 )) + # Remaining positional args are in "$@" +} +``` + +For long options, use a while/case loop (see patterns.md). + +--- + +## Dependency Checking + +```bash +####################################### +# Verify required commands exist on PATH. 
+# Arguments: +# One or more command names +# Returns: +# 0 if all present; exits with code 2 if any missing +####################################### +check_dependencies() { + local -a missing=() + for cmd in "$@"; do + if ! command -v "${cmd}" &>/dev/null; then + missing+=("${cmd}") + fi + done + if (( ${#missing[@]} > 0 )); then + log_error "Missing required commands: ${missing[*]}" + exit 2 + fi + log_debug "Dependencies satisfied: $*" +} + +# Usage in main(): +check_dependencies curl jq git docker +``` + +--- + +## Trap / Cleanup Pattern + +```bash +# Declare cleanup function before setting trap +cleanup() { + local exit_code=$? + log_debug "Cleaning up (exit_code=${exit_code})" + # Remove temp files, release locks, etc. + rm -rf "${TMP_DIR:-}" + # Always exit with the original code + exit "${exit_code}" +} + +trap cleanup EXIT +trap 'log_error "Interrupted"; exit 130' INT TERM +``` + +--- + +## Modular / Library Scripts + +Library scripts (sourced, not executed): +- Extension: `.sh` +- Not executable (`chmod -x`) +- Begin with a guard: + +```bash +#!/bin/bash +# lib/utils.sh — utility functions for <project> +# Source this file; do not execute directly. + +# Guard against direct execution +if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then + echo "This file must be sourced, not executed." >&2 + exit 1 +fi +``` \ No newline at end of file diff --git a/.claude/skills/config-management/SKILL.md b/.claude/skills/config-management/SKILL.md new file mode 100644 index 0000000..d4560cb --- /dev/null +++ b/.claude/skills/config-management/SKILL.md @@ -0,0 +1,93 @@ +--- +name: config-management +description: >- + Guide for pyproject.toml, justfile, pre-commit, and CI pipeline configuration. + Use this skill when modifying project configuration files, adding dependencies, + configuring tools, or troubleshooting CI. 
Trigger on mentions of: pyproject.toml, + justfile, pre-commit, CI configuration, GitHub Actions, tool configuration, + dependency management, or any request to add/modify project config. +model: haiku +--- + +# Config Management Skill + +Guidance for managing project-level configuration across the toolchain. + +## Configuration files + +| File | Purpose | When to modify | +|---|---|---| +| `pyproject.toml` | Package metadata, dependencies, tool configs (ruff, basedpyright, pytest, coverage) | Adding deps, changing tool settings | +| `justfile` | Task runner recipes (test, lint, fmt, ci, etc.) | Adding new workflows, modifying commands | +| `.pre-commit-config.yaml` | Git pre-commit hooks (ruff, basedpyright) | Adding/updating hook repos | +| `.github/workflows/ci.yml` | GitHub Actions CI pipeline | Modifying CI steps, adding jobs | +| `.copier-answers.yml` | Copier template answers — **never edit manually** | Only via `copier update` | +| `uv.lock` | Dependency lockfile — **never edit manually** | Only via `uv lock` or `just update` | + +## pyproject.toml structure + +Key sections and their tool owners: + +| Section | Tool | +|---|---| +| `[project]` | Package metadata, version, dependencies | +| `[project.optional-dependencies]` | Extra dependency groups (dev, test, docs) | +| `[build-system]` | Build backend (hatchling) | +| `[tool.ruff]` | Ruff linter + formatter config | +| `[tool.ruff.lint]` | Rule selection, per-file-ignores | +| `[tool.basedpyright]` | Type checker mode, Python version | +| `[tool.pytest.ini_options]` | Pytest markers, flags, test paths | +| `[tool.coverage.*]` | Coverage thresholds, source paths | + +## justfile quick reference + +| Recipe | What it runs | +|---|---| +| `just test` | `pytest -q` | +| `just test-parallel` | `pytest -q -n auto` | +| `just coverage` | `pytest --cov=src --cov-report=term-missing` | +| `just lint` | `ruff check src/ tests/` | +| `just fmt` | `ruff format src/ tests/` | +| `just fix` | `ruff check --fix src/ 
tests/` | +| `just type` | `basedpyright` | +| `just docs-check` | `ruff check --select D src/` | +| `just ci` | Full pipeline: fix + fmt + lint + type + docs-check + test + precommit | +| `just review` | Pre-merge review: fix + lint + type + docs-check + test | + +## Dependency management + +```bash +# Add a runtime dependency +uv add <package> + +# Add a dev dependency +uv add --optional dev <package> + +# Sync from lockfile (no changes to lockfile) +just sync # or: uv sync --frozen --extra dev --extra test --extra docs + +# Update lockfile and resync +just update # or: uv lock && uv sync --extra dev --extra test --extra docs +``` + +## Pre-commit configuration + +Hooks run on `git commit`. Managed via `.pre-commit-config.yaml`: +- **ruff** — lint + format check on staged `.py` files +- **basedpyright** — type check on `src/` + +Run all hooks manually: `just precommit` + +## Rules for config changes + +1. **Never weaken** ruff or basedpyright settings — the `pre-config-protection.sh` hook blocks this. +2. **Never edit** `uv.lock` directly — the `pre-protect-uv-lock.sh` hook blocks this. +3. **Never edit** `.copier-answers.yml` manually — use `copier update`. +4. Keep `justfile` recipes simple — one concern per recipe. +5. Pin all dependency versions in `pyproject.toml`. + +## Quick reference: where to go deeper + +| Topic | Reference file | +|-----------------------------|------------------------------------------------------------------| +| Complete tool configs | [references/complete-configs.md](references/complete-configs.md) | diff --git a/.claude/skills/config-management/references/complete-configs.md b/.claude/skills/config-management/references/complete-configs.md new file mode 100644 index 0000000..bda78c2 --- /dev/null +++ b/.claude/skills/config-management/references/complete-configs.md @@ -0,0 +1,257 @@ +# Complete configurations + +Ready-to-use configuration files for all tools in this project. 
Copy these into
+your project and adjust versions, paths, and rule selections as needed.
+
+---
+
+## pyproject.toml — all tool sections
+
+```toml
+# ── Ruff ────────────────────────────────────────────────────────────────────
+[tool.ruff]
+target-version = "py311"
+line-length = 88
+src = ["src"]
+exclude = [".git", ".venv", "__pycache__", "build", "dist"]
+
+[tool.ruff.lint]
+select = ["E", "W", "F", "I", "UP", "B", "SIM"]
+ignore = ["E501"]
+fixable = ["ALL"]
+
+[tool.ruff.lint.per-file-ignores]
+"tests/**/*.py" = ["PLR2004", "F401"]
+"**/__init__.py" = ["F401"]
+
+[tool.ruff.format]
+# Note: line-length lives under [tool.ruff]; the format section has no
+# line-length key of its own.
+quote-style = "double"
+indent-style = "space"
+skip-magic-trailing-comma = false
+line-ending = "lf"
+
+# ── basedpyright ─────────────────────────────────────────────────────────────
+[tool.basedpyright]
+pythonVersion = "3.11"
+include = ["src"]
+exclude = ["**/__pycache__", ".venv", "build", "dist"]
+venvPath = "."
+venv = ".venv"
+typeCheckingMode = "standard"
+
+# ── Bandit ───────────────────────────────────────────────────────────────────
+[tool.bandit]
+targets = ["src"]
+skips = []  # add rule IDs here only with a comment explaining why
+exclude_dirs = [".venv", "build", "dist", "tests"]
+# Note: severity/confidence thresholds are CLI flags (-ll -ii), not pyproject keys. 
+``` + +--- + +## .pre-commit-config.yaml — all hooks + +```yaml +minimum_pre_commit_version: "3.0.0" + +default_language_version: + python: python3.11 + +repos: + # ── Ruff: lint + format ────────────────────────────────────────────────── + - repo: https://github.com/astral-sh/ruff-pre-commit + rev: v0.9.0 + hooks: + - id: ruff + args: ["--fix"] + - id: ruff-format + + # ── Bandit: security lint ───────────────────────────────────────────────── + - repo: https://github.com/PyCQA/bandit + rev: 1.8.3 + hooks: + - id: bandit + args: ["-c", "pyproject.toml", "-ll", "-ii"] + files: ^src/ + + # ── Semgrep: pattern-based security scan ───────────────────────────────── + - repo: https://github.com/semgrep/semgrep + rev: v1.60.0 + hooks: + - id: semgrep + args: ["--config", "p/python", "--config", ".semgrep.yml", "--severity", "ERROR"] + files: ^src/ + + # ── basedpyright: type checking ────────────────────────────────────────── + # Requires basedpyright to be installed in the active venv. + - repo: local + hooks: + - id: basedpyright + name: basedpyright + entry: basedpyright + language: system + types: [python] + pass_filenames: false + + # ── General hygiene ────────────────────────────────────────────────────── + - repo: https://github.com/pre-commit/pre-commit-hooks + rev: v5.0.0 + hooks: + - id: trailing-whitespace + - id: end-of-file-fixer + - id: check-yaml + - id: check-toml + - id: check-merge-conflict + - id: debug-statements +``` + +--- + +## .semgrep.yml — local custom rules + +```yaml +# .semgrep.yml +# Local project-specific rules. Registry packs (p/python etc.) are passed +# as --config flags on the CLI, not listed here. +# +# Usage: +# semgrep --config p/python --config p/owasp-top-ten --config .semgrep.yml src/ + +rules: + - id: no-eval + pattern: eval(...) + message: > + eval() executes arbitrary code. Use ast.literal_eval() for safe data + parsing, or refactor to avoid dynamic evaluation entirely. 
+ languages: [python] + severity: ERROR + + - id: no-hardcoded-secrets-in-jwt + patterns: + - pattern: jwt.encode(..., "$SECRET", ...) + - pattern-not: jwt.encode(..., os.environ.get(...), ...) + message: Hardcoded JWT secret. Load from environment variable or secrets manager. + languages: [python] + severity: ERROR +``` + +--- + +## GitHub Actions workflow — full CI pipeline + +```yaml +# .github/workflows/ci.yml +name: CI + +on: + push: + branches: [main] + pull_request: + +jobs: + # ── Fast checks: format + lint + security ─────────────────────────────── + quality: + name: Code Quality + runs-on: ubuntu-latest + + steps: + - uses: actions/checkout@v4 + + - name: Set up Python + uses: actions/setup-python@v5 + with: + python-version: "3.11" + cache: "pip" + + - name: Install quality tools + run: pip install ruff bandit semgrep + + - name: Ruff — format check + run: ruff format --check . + + - name: Ruff — lint + run: ruff check . + + - name: Bandit — security lint + run: bandit -c pyproject.toml -r src/ -ll -ii + + - name: Cache semgrep rules + uses: actions/cache@v4 + with: + path: ~/.semgrep/cache + key: semgrep-${{ hashFiles('.semgrep.yml') }} + + - name: Semgrep — pattern scan + run: | + semgrep --config p/python \ + --config p/owasp-top-ten \ + --config .semgrep.yml \ + --severity ERROR \ + src/ + + # ── Type checking ──────────────────────────────────────────────────────── + typecheck: + name: Type Check + runs-on: ubuntu-latest + needs: quality # only run if quality checks pass + + steps: + - uses: actions/checkout@v4 + + - name: Set up Python + uses: actions/setup-python@v5 + with: + python-version: "3.11" + cache: "pip" + + - name: Install dependencies + run: | + pip install -r requirements.txt # must include basedpyright + + - name: basedpyright — type check + run: basedpyright + + # ── Tests ──────────────────────────────────────────────────────────────── + test: + name: Tests (Python ${{ matrix.python-version }}) + runs-on: ubuntu-latest + needs: 
[quality, typecheck] # only run if both upstream jobs pass + + strategy: + matrix: + python-version: ["3.11", "3.12"] + + steps: + - uses: actions/checkout@v4 + + - name: Set up Python ${{ matrix.python-version }} + uses: actions/setup-python@v5 + with: + python-version: ${{ matrix.python-version }} + cache: "pip" + + - name: Install dependencies + run: pip install -r requirements.txt + + - name: Run pytest with coverage + run: pytest --maxfail=1 --disable-warnings --cov=src --cov-report=xml + + - name: Upload coverage + uses: codecov/codecov-action@v4 + with: + files: coverage.xml +``` + +--- + +## Version update checklist + +When updating tool versions, change them in all three places: + +| Tool | pyproject.toml | .pre-commit-config.yaml | requirements.txt / pyproject deps | +|---|---|---|---| +| ruff | `target-version` (Python, not ruff) | `rev: v0.9.0` | `ruff>=0.9.0` | +| bandit | — | `rev: 1.8.3` | `bandit>=1.8.3` | +| semgrep | — | `rev: v1.60.0` | `semgrep>=1.60.0` | +| basedpyright | `pythonVersion` (Python, not tool) | (local hook — no rev) | `basedpyright>=1.x` | +| pre-commit-hooks | — | `rev: v5.0.0` | — | diff --git a/.claude/skills/cron-scheduling/SKILL.md b/.claude/skills/cron-scheduling/SKILL.md new file mode 100644 index 0000000..5fad82c --- /dev/null +++ b/.claude/skills/cron-scheduling/SKILL.md @@ -0,0 +1,235 @@ +--- +name: cron-scheduling +description: >- + Complete cron job lifecycle skill: setup, write, update, manage, pause, stop, and monitor + scheduled tasks on Linux, macOS, Windows, Docker, Kubernetes, GitHub Actions, Node.js, and Python. 
+ Use this skill whenever a user asks about scheduling, automating recurring tasks, cron expressions, + crontab editing, job management, cron monitoring, task schedulers, or any variation of + "run this script automatically", "set up a scheduled job", "run every X minutes/hours/days", + "cronjob", "crontab", "stop a cron job", "why isn't my cron job running", or + "how do I see what cron jobs are running". Always reach for this skill even for + adjacent topics like launchd, Task Scheduler, GitHub Actions schedules, APScheduler, + node-cron, or Kubernetes CronJobs. +--- + +# Cron Scheduling Skill + +## Quick reference: where to go deeper + +| Topic | Reference file | +|-------|-----------------| +| Cron expression syntax and operators | [references/syntax-reference.md](references/syntax-reference.md) | +| Non-Linux environments and platforms | [references/environments.md](references/environments.md) | +| Updating, pausing, and managing jobs | [references/managing-jobs.md](references/managing-jobs.md) | +| Monitoring, logging, and troubleshooting | [references/monitoring.md](references/monitoring.md) | + +--- + +## Quick-decision tree — what does the user need? + +``` +User asks about cron… +│ +├── "What does */5 * * * * mean?" / "How do I write a schedule for…" +│ └── Read: references/syntax-reference.md +│ +├── "Set up / create a cron job" +│ └── Follow: §Setup workflow below — no extra file needed for Linux basics +│ If macOS / Windows / Docker / K8s / Node / Python → also read: references/environments.md +│ +├── "Update / change / edit a cron job" +│ └── Read: references/managing-jobs.md §Updating +│ +├── "Pause / disable / stop / delete a cron job" +│ └── Read: references/managing-jobs.md §Stopping & disabling +│ +├── "My cron job isn't running" / "How do I debug?" 
+│ └── Read: references/monitoring.md §Debugging +│ +├── "Monitor / see logs / alert on failures" +│ └── Read: references/monitoring.md +│ +└── Non-Linux environment question + └── Read: references/environments.md +``` + +--- + +## Core concepts (always available) + +### Cron syntax at a glance + +``` + ┌──────── minute 0–59 + │ ┌───── hour 0–23 + │ │ ┌── day-of-month 1–31 + │ │ │ ┌─ month 1–12 + │ │ │ │ ┌ day-of-week 0–7 (0 and 7 both = Sunday) + │ │ │ │ │ + * * * * * command +``` + +| Symbol | Meaning | Example | +|--------|---------|---------| +| `*` | Every value | `* * * * *` → every minute | +| `,` | List | `0,30 * * * *` → :00 and :30 | +| `-` | Range | `0 9-17 * * *` → every hour 9 AM–5 PM | +| `/` | Step | `*/15 * * * *` → every 15 min | +| `@daily` | Shorthand | = `0 0 * * *` | +| `@reboot` | At boot | runs once on startup | + +### The #1 rule: cron's environment is minimal + +Cron runs with a stripped-down `PATH=/usr/bin:/bin` — no `.bashrc`, no `.zshrc`, no exports. +**Always use absolute paths** for both the interpreter and the script. + +```bash +# Wrong — relies on your PATH +0 2 * * * python3 backup.py + +# Correct +0 2 * * * /usr/bin/python3 /opt/app/backup.py >> /var/log/backup.log 2>&1 +``` + +--- + +## Setup workflow (Linux/macOS cron) + +### Step 1 — Write and test the script + +```bash +#!/usr/bin/env bash +set -euo pipefail # Exit on error, undefined vars, pipe failures + +LOG=/var/log/myjob.log +log() { echo "[$(date -u +%Y-%m-%dT%H:%M:%SZ)] $*" >> "$LOG"; } + +log "INFO: Job started" +# ... your logic here ... +log "INFO: Job finished" +``` + +Test it in cron's exact environment before scheduling: + +```bash +env -i HOME=/home/youruser SHELL=/bin/bash PATH=/usr/bin:/bin \ + bash /opt/scripts/myjob.sh +``` + +If it fails here → fix it before adding to cron. 
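If the `env -i` rehearsal passes but the scheduled run still misbehaves, a throwaway diagnostic job can capture the environment cron actually provides. A sketch (the dump path and script location are illustrative):

```shell
#!/usr/bin/env bash
# Sketch: dump the environment as seen by the scheduler.
# Schedule once, inspect the dump, then remove the entry.
set -euo pipefail

DUMP="${TMPDIR:-/tmp}/cron-env-dump.txt"

{
  date -u +'%Y-%m-%dT%H:%M:%SZ'
  echo "PATH=${PATH}"
  echo "SHELL=${SHELL:-<unset>}"
  echo "HOME=${HOME:-<unset>}"
  env | sort
} > "${DUMP}"
```

Schedule it as `* * * * * /opt/scripts/cron-env.sh` (path illustrative), compare the dump against your interactive shell's `env` output, then delete the entry.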
+ +### Step 2 — Open the crontab + +```bash +crontab -e # Your own jobs +sudo crontab -u www-data -e # Another user's jobs (as root) +sudo nano /etc/cron.d/myapp # System-wide jobs (needs username field) +``` + +### Step 3 — Write a complete entry + +```bash +# ── Environment (set at top of crontab, once) ────────────────── +SHELL=/bin/bash +PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin +MAILTO="" # "" = suppress email; or admin@example.com + +# ── Job entries ──────────────────────────────────────────────── +# minute hour day month weekday command + +# Daily backup at 2:30 AM +30 2 * * * /bin/bash /opt/scripts/backup.sh >> /var/log/backup.log 2>&1 + +# Weekdays at 8 AM +0 8 * * 1-5 /usr/bin/python3 /opt/app/report.py >> /var/log/report.log 2>&1 + +# Every 5 minutes, safe against overlaps +*/5 * * * * /usr/bin/flock -n /tmp/poll.lock /opt/scripts/poll.sh >> /var/log/poll.log 2>&1 +``` + +> `>> /var/log/file.log 2>&1` captures both stdout and stderr. +> Omit it and output vanishes (or goes to root's mailbox). 
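+
+If you prefer syslog over flat log files, piping through `logger` (shipped with util-linux/bsdutils on most distros) is a common alternative to file redirection; the `backup` tag below is just an example name:
+
+```bash
+# -t sets a syslog tag so the output is easy to filter later
+0 2 * * * /opt/scripts/backup.sh 2>&1 | logger -t backup
+
+# Read it back with:
+#   journalctl -t backup --since today    (systemd systems)
+#   grep 'backup:' /var/log/syslog        (non-systemd systems)
+```
+
+One trade-off: the entry's exit status becomes `logger`'s, so this pattern combines poorly with `&&` chaining on the job's result.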
+
+### Step 4 — Verify
+
+```bash
+crontab -l                                    # List installed jobs
+systemctl status cron                         # Ubuntu/Debian
+systemctl status crond                        # CentOS/RHEL
+sudo tail -f /var/log/syslog | grep CRON      # Watch live execution
+sudo journalctl -u cron --since "1 hour ago"  # Recent activity
+```
+
+### Step 5 — System-wide jobs (`/etc/cron.d/`)
+
+Files in `/etc/cron.d/` need a username field and must be owned by root, mode 644:
+
+```bash
+# /etc/cron.d/myapp
+SHELL=/bin/bash
+PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
+
+# minute hour day month weekday USER command
+0 2 * * * root /opt/myapp/backup.sh >> /var/log/myapp.log 2>&1
+*/5 * * * * www-data /opt/myapp/worker.sh > /dev/null 2>&1
+```
+
+```bash
+sudo chmod 644 /etc/cron.d/myapp
+sudo chown root:root /etc/cron.d/myapp
+```
+
+Drop-in directories (no username field needed, just drop an executable file — but note that `run-parts` skips filenames containing dots, so omit the `.sh` extension):
+
+| Directory | Frequency |
+|-----------|-----------|
+| `/etc/cron.hourly/` | Every hour |
+| `/etc/cron.daily/` | Every day |
+| `/etc/cron.weekly/` | Every week |
+| `/etc/cron.monthly/` | Every month |
+
+---
+
+## Safe job template (copy-paste starter)
+
+```bash
+#!/usr/bin/env bash
+# ── Safe cron job template ─────────────────────────────────────────
+# Usage: schedule with flock + timeout + absolute paths, all on ONE
+# line (crontab does not support backslash line continuation):
+# */5 * * * * /usr/bin/flock -n /tmp/JOBNAME.lock /usr/bin/timeout 240 /opt/scripts/JOBNAME.sh >> /var/log/JOBNAME.log 2>&1
+
+set -euo pipefail
+IFS=$'\n\t'
+
+JOB_NAME="JOBNAME"
+LOG="/var/log/${JOB_NAME}.log"
+
+log() { echo "[$(date -u +%Y-%m-%dT%H:%M:%SZ)] [$JOB_NAME] $*" >> "$LOG"; }
+
+START=$(date +%s)
+log "INFO: Started"
+
+# ── Your logic here ────────────────────────────────────────────────
+
+log "INFO: Finished in $(( $(date +%s) - START ))s"
+```
+
+---
+
+## Quick reference — crontab commands
+
+```bash
+crontab -e   # Edit your crontab
+crontab -l   # List your crontab
+crontab -r   # Remove ALL your jobs (careful!)
+crontab -u USER -e # Edit another user's crontab (root only) +crontab -u USER -l # List another user's crontab + +# List ALL users' jobs on the system +for u in $(cut -f1 -d: /etc/passwd); do + jobs=$(crontab -l -u "$u" 2>/dev/null | grep -v '^#' | grep -v '^$') + [ -n "$jobs" ] && echo "=== $u ===" && echo "$jobs" +done +``` diff --git a/.claude/skills/cron-scheduling/references/environments.md b/.claude/skills/cron-scheduling/references/environments.md new file mode 100644 index 0000000..9736d82 --- /dev/null +++ b/.claude/skills/cron-scheduling/references/environments.md @@ -0,0 +1,560 @@ +# Cron environments reference + +_Read this file when: the user is on macOS, Windows, Docker, Kubernetes, GitHub Actions, Node.js, or Python — i.e., anything that isn't standard Linux cron._ + +--- + +## Table of contents + +1. [macOS — launchd](#1-macos--launchd) +2. [Windows — Task Scheduler & schtasks](#2-windows--task-scheduler) +3. [Docker containers](#3-docker-containers) +4. [Kubernetes CronJob](#4-kubernetes-cronjob) +5. [AWS EventBridge + Lambda](#5-aws-eventbridge--lambda) +6. [GitHub Actions scheduled workflows](#6-github-actions) +7. [Node.js — node-cron](#7-nodejs--node-cron) +8. [Python — APScheduler & schedule](#8-python) +9. [Environment comparison table](#9-comparison-table) + +--- + +## 1. macOS — launchd + +macOS marks cron as legacy. The native scheduler is **launchd**, using XML plist files. 
+
+### Plist file locations
+
+| Location | Purpose |
+|----------|---------|
+| `~/Library/LaunchAgents/` | Per-user jobs (run when user is logged in) |
+| `/Library/LaunchAgents/` | Per-user jobs (run for any user that logs in) |
+| `/Library/LaunchDaemons/` | System-wide jobs (run as root, no user session needed) |
+
+### Calendar-based job (equivalent to cron schedule)
+
+```xml
+<?xml version="1.0" encoding="UTF-8"?>
+<!-- ~/Library/LaunchAgents/com.myapp.backup.plist -->
+<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
+  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
+<plist version="1.0">
+<dict>
+  <key>Label</key>
+  <string>com.myapp.backup</string>
+
+  <key>ProgramArguments</key>
+  <array>
+    <string>/bin/bash</string>
+    <string>/Users/me/scripts/backup.sh</string>
+  </array>
+
+  <!-- Run daily at 2:00 AM -->
+  <key>StartCalendarInterval</key>
+  <dict>
+    <key>Hour</key><integer>2</integer>
+    <key>Minute</key><integer>0</integer>
+  </dict>
+
+  <!-- Don't also fire at load time. (Unlike cron, launchd runs a missed
+       StartCalendarInterval job once when the Mac wakes from sleep.) -->
+  <key>RunAtLoad</key><false/>
+
+  <key>StandardOutPath</key><string>/tmp/backup.log</string>
+  <key>StandardErrorPath</key><string>/tmp/backup.err</string>
+</dict>
+</plist>
+```
+
+### Interval-based job (run every N seconds)
+
+```xml
+<!-- Run every 5 minutes (300 seconds) -->
+<key>StartInterval</key>
+<integer>300</integer>
+```
+
+### StartCalendarInterval field keys
+
+| Key | Values |
+|-----|--------|
+| `Minute` | 0–59 |
+| `Hour` | 0–23 |
+| `Day` | 1–31 |
+| `Month` | 1–12 |
+| `Weekday` | 0–7 (0 and 7 = Sunday) |
+
+### Management commands
+
+```bash
+# Load (activate) — first time or after editing
+launchctl load ~/Library/LaunchAgents/com.myapp.backup.plist
+
+# Unload (deactivate)
+launchctl unload ~/Library/LaunchAgents/com.myapp.backup.plist
+
+# Run immediately (for testing)
+launchctl start com.myapp.backup
+
+# Stop a running job
+launchctl stop com.myapp.backup
+
+# Check status (last exit code is the 2nd column; 0 = OK, "-" in the PID column = not running)
+launchctl list | grep com.myapp + +# macOS 10.11+ alternative +launchctl enable gui/$(id -u)/com.myapp.backup +launchctl bootout gui/$(id -u)/com.myapp.backup +``` + +--- + +## 2. Windows — Task Scheduler + +### schtasks CLI (Command Prompt / PowerShell) + +```powershell +# Create — daily at 2 AM +schtasks /create /tn "MyApp Backup" /tr "C:\Scripts\backup.bat" /sc daily /st 02:00 /ru SYSTEM /f + +# Create — every 5 minutes +schtasks /create /tn "MyApp Poll" /tr "C:\Scripts\poll.bat" /sc minute /mo 5 /ru SYSTEM /f + +# Update — change schedule time +schtasks /change /tn "MyApp Backup" /st 03:00 + +# Enable / disable +schtasks /change /tn "MyApp Backup" /enable +schtasks /change /tn "MyApp Backup" /disable + +# Run immediately +schtasks /run /tn "MyApp Backup" + +# Delete +schtasks /delete /tn "MyApp Backup" /f + +# Query all tasks +schtasks /query /fo LIST /v + +# Query one task +schtasks /query /tn "MyApp Backup" /fo LIST /v +``` + +### PowerShell — richer task creation + +```powershell +# Create a repeating daily task +$action = New-ScheduledTaskAction -Execute "C:\Scripts\backup.bat" +$trigger = New-ScheduledTaskTrigger -Daily -At "2:00AM" +$settings = New-ScheduledTaskSettingsSet ` + -ExecutionTimeLimit (New-TimeSpan -Hours 2) ` + -RunOnlyIfNetworkAvailable ` + -StartWhenAvailable # Run missed tasks when machine wakes +$principal = New-ScheduledTaskPrincipal -UserId "SYSTEM" -RunLevel Highest + +Register-ScheduledTask ` + -TaskName "MyApp Backup" ` + -Action $action -Trigger $trigger -Settings $settings -Principal $principal + +# Update trigger +$task = Get-ScheduledTask -TaskName "MyApp Backup" +$task.Triggers[0].StartBoundary = "2024-01-01T03:00:00" +Set-ScheduledTask -InputObject $task + +# Remove +Unregister-ScheduledTask -TaskName "MyApp Backup" -Confirm:$false +``` + +--- + +## 3. 
Docker containers
+
+### Approach A — cron inside the container (simple)
+
+```dockerfile
+FROM ubuntu:22.04
+RUN apt-get update && apt-get install -y cron && rm -rf /var/lib/apt/lists/*
+
+COPY mycrontab /etc/cron.d/myapp
+RUN chmod 0644 /etc/cron.d/myapp
+RUN touch /var/log/cron.log
+
+# Print Docker env vars to /etc/environment so cron can read them
+CMD ["bash", "-c", "printenv > /etc/environment && cron -f"]
+```
+
+```bash
+# mycrontab — installed as /etc/cron.d/myapp, so each entry needs a username field
+SHELL=/bin/bash
+PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
+
+* * * * * root echo "tick" >> /var/log/cron.log 2>&1
+0 2 * * * root /app/scripts/backup.sh >> /var/log/backup.log 2>&1
+```
+
+### Approach B — host-level cron triggers container (recommended for production)
+
+Keep your host crontab clean and let it drive the container:
+
+```bash
+# Run a one-shot command inside a named running container
+0 2 * * * docker exec myapp_container bash -c '/app/scripts/backup.sh >> /app/logs/backup.log 2>&1'
+
+# Run a fresh disposable container
+0 2 * * * docker run --rm --env-file /etc/myapp.env myapp/backup:latest
+```
+
+### Docker Compose variant
+
+```bash
+0 2 * * * docker compose -f /opt/myapp/docker-compose.yml run --rm backup
+```
+
+---
+
+## 4. 
Kubernetes CronJob + +```yaml +apiVersion: batch/v1 +kind: CronJob +metadata: + name: db-backup + namespace: production +spec: + schedule: "30 2 * * *" # Standard cron syntax + timeZone: "America/New_York" # Kubernetes 1.27+ (requires TZ data) + concurrencyPolicy: Forbid # Forbid|Allow|Replace + successfulJobsHistoryLimit: 3 + failedJobsHistoryLimit: 1 + startingDeadlineSeconds: 300 # Skip if can't start within 5 min + jobTemplate: + spec: + backoffLimit: 2 # Retry failed pods up to 2 times + template: + spec: + restartPolicy: OnFailure # Never|OnFailure + containers: + - name: backup + image: myapp/backup:latest + command: ["/app/backup.sh"] + env: + - name: DB_URL + valueFrom: + secretKeyRef: + name: db-secret + key: url + resources: + requests: { memory: "256Mi", cpu: "100m" } + limits: { memory: "512Mi", cpu: "500m" } +``` + +```bash +# List all CronJobs +kubectl get cronjobs -n production + +# Check recent Job runs +kubectl get jobs -n production --sort-by=.metadata.creationTimestamp + +# Trigger a manual run +kubectl create job --from=cronjob/db-backup manual-backup-$(date +%s) -n production + +# Suspend a CronJob (stops new runs, doesn't kill current) +kubectl patch cronjob db-backup -p '{"spec":{"suspend":true}}' -n production + +# Resume +kubectl patch cronjob db-backup -p '{"spec":{"suspend":false}}' -n production + +# Delete +kubectl delete cronjob db-backup -n production +``` + +--- + +## 5. AWS EventBridge + Lambda + +### AWS cron syntax differences from Unix + +``` +cron(minute hour day-of-month month day-of-week year) +# ^^^^ +# Year field is REQUIRED in AWS (doesn't exist in Unix cron) +# Use ? in EITHER day-of-month OR day-of-week (not both, not neither) +``` + +```bash +# Daily at 2 AM UTC +cron(0 2 * * ? *) + +# Every weekday at 8 AM EST (UTC-5 = 13:00 UTC) +cron(0 13 ? 
* MON-FRI *) + +# Every 5 minutes +rate(5 minutes) + +# Every hour +rate(1 hour) +``` + +### AWS CLI + +```bash +# Create EventBridge rule +aws events put-rule \ + --name "daily-backup" \ + --schedule-expression "cron(0 2 * * ? *)" \ + --state ENABLED \ + --region us-east-1 + +# Add Lambda as target +aws events put-targets \ + --rule "daily-backup" \ + --targets "Id=1,Arn=arn:aws:lambda:us-east-1:123456789:function:BackupFn" + +# Disable / enable rule +aws events disable-rule --name "daily-backup" +aws events enable-rule --name "daily-backup" + +# Delete rule (must remove targets first) +aws events remove-targets --rule "daily-backup" --ids 1 +aws events delete-rule --name "daily-backup" +``` + +### Terraform + +```hcl +resource "aws_cloudwatch_event_rule" "daily_backup" { + name = "daily-backup" + schedule_expression = "cron(0 2 * * ? *)" + state = "ENABLED" +} + +resource "aws_cloudwatch_event_target" "backup_lambda" { + rule = aws_cloudwatch_event_rule.daily_backup.name + target_id = "BackupLambda" + arn = aws_lambda_function.backup.arn +} + +resource "aws_lambda_permission" "allow_eventbridge" { + statement_id = "AllowExecutionFromEventBridge" + action = "lambda:InvokeFunction" + function_name = aws_lambda_function.backup.function_name + principal = "events.amazonaws.com" + source_arn = aws_cloudwatch_event_rule.daily_backup.arn +} +``` + +--- + +## 6. 
GitHub Actions + +```yaml +# .github/workflows/scheduled.yml +name: Nightly Tasks + +on: + schedule: + # Standard Unix cron syntax (UTC only) + - cron: '0 2 * * *' # 2 AM UTC daily + - cron: '0 8 * * 1-5' # 8 AM UTC weekdays (second trigger) + workflow_dispatch: # Allow manual run from GitHub UI + +jobs: + backup: + runs-on: ubuntu-latest + timeout-minutes: 30 + steps: + - uses: actions/checkout@v4 + + - name: Run backup + env: + DB_URL: ${{ secrets.DATABASE_URL }} + AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }} + AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }} + run: ./scripts/backup.sh + + - name: Notify on failure + if: failure() + uses: actions/github-script@v7 + with: + script: | + github.rest.issues.create({ + owner: context.repo.owner, + repo: context.repo.repo, + title: 'Nightly backup failed', + body: `Run: ${context.serverUrl}/${context.repo.owner}/${context.repo.repo}/actions/runs/${context.runId}` + }) +``` + +**GitHub Actions cron caveats:** +- All times are UTC — there is no timezone support +- Minimum interval: 5 minutes +- Heavily loaded repos may have 15–60 min delays during peak hours +- Scheduled workflows are disabled after 60 days of repo inactivity +- Only runs on the default branch + +--- + +## 7. 
Node.js — node-cron + +```bash +npm install node-cron +``` + +```javascript +const cron = require('node-cron'); + +// Basic usage +cron.schedule('0 2 * * *', () => { + console.log('Running nightly backup…'); + runBackup(); +}); + +// With timezone and explicit start control +const task = cron.schedule('30 7 * * 1-5', async () => { + try { + await generateMorningReport(); + } catch (err) { + console.error('Report failed:', err.message); + await notifyOpsTeam(err); + } +}, { + scheduled: false, // Don't auto-start + timezone: 'America/New_York' +}); + +task.start(); // Start the task +task.stop(); // Pause (keeps registration) +task.destroy(); // Remove permanently + +// Validate an expression without scheduling +const valid = cron.validate('*/5 * * * *'); // true + +// Graceful shutdown +process.on('SIGTERM', () => { + task.stop(); + process.exit(0); +}); +``` + +--- + +## 8. Python + +### Option A — `schedule` (simple, synchronous) + +```bash +pip install schedule +``` + +```python +import schedule +import time + +def backup(): + print("Running backup…") + +def report(): + print("Sending weekly report…") + +# Human-readable scheduling DSL +schedule.every().day.at("02:00").do(backup) +schedule.every().monday.at("09:00").do(report) +schedule.every(5).minutes.do(lambda: print("Heartbeat")) +schedule.every().hour.at(":30").do(lambda: print("Half-past")) + +# Blocking run loop +while True: + schedule.run_pending() + time.sleep(30) # Check every 30 seconds +``` + +### Option B — APScheduler (production-grade, async-capable) + +```bash +pip install apscheduler +``` + +```python +from apscheduler.schedulers.background import BackgroundScheduler +from apscheduler.triggers.cron import CronTrigger +from apscheduler.triggers.interval import IntervalTrigger +from apscheduler.events import EVENT_JOB_ERROR, EVENT_JOB_EXECUTED +import atexit, logging + +logging.basicConfig(level=logging.INFO) +scheduler = BackgroundScheduler(timezone='UTC') + +# Event listener for monitoring +def 
job_listener(event): + if event.exception: + logging.error(f'Job {event.job_id} failed: {event.exception}') + else: + logging.info(f'Job {event.job_id} succeeded') + +scheduler.add_listener(job_listener, EVENT_JOB_EXECUTED | EVENT_JOB_ERROR) + +# Add jobs +scheduler.add_job( + func=run_backup, + trigger=CronTrigger(hour=2, minute=0, timezone='UTC'), + id='nightly_backup', + name='Nightly Database Backup', + max_instances=1, # Prevent overlap + coalesce=True, # If missed, run once (not multiple) + misfire_grace_time=300, # Allow up to 5-min late start + replace_existing=True +) + +scheduler.add_job( + func=generate_report, + trigger=CronTrigger(day_of_week='mon-fri', hour=8, minute=0), + id='morning_report' +) + +scheduler.add_job( + func=heartbeat, + trigger=IntervalTrigger(minutes=5), + id='heartbeat' +) + +scheduler.start() +atexit.register(scheduler.shutdown) # Clean shutdown on exit + +# Modify a running job +scheduler.reschedule_job('nightly_backup', trigger=CronTrigger(hour=3)) +scheduler.pause_job('heartbeat') +scheduler.resume_job('heartbeat') +scheduler.remove_job('morning_report') + +# List all jobs +for job in scheduler.get_jobs(): + print(f'{job.id}: next run = {job.next_run_time}') +``` + +### APScheduler with FastAPI / Flask + +```python +# FastAPI integration +from contextlib import asynccontextmanager +from fastapi import FastAPI + +@asynccontextmanager +async def lifespan(app: FastAPI): + scheduler.start() + yield + scheduler.shutdown() + +app = FastAPI(lifespan=lifespan) +``` + +--- + +## 9. 
Comparison table
+
+| Feature | Linux cron | macOS launchd | Windows Task Scheduler | Kubernetes CronJob | GitHub Actions | node-cron | APScheduler |
+|---------|-----------|--------------|----------------------|-------------------|---------------|-----------|-------------|
+| Syntax | 5-field cron | XML plist | GUI / schtasks CLI | 5-field cron | 5-field cron | 5-field cron | Python DSL |
+| Timezone support | System TZ | Plist key | GUI setting | `spec.timeZone` | UTC only | Option field | Per-job |
+| Catch-up on miss | No | Automatic after wake | `StartWhenAvailable` | `startingDeadline` | No | No | `coalesce` |
+| Overlap prevention | `flock` | One instance | Setting | `concurrencyPolicy` | Manual | Manual | `max_instances` |
+| Retry on failure | No | No | Limited | `backoffLimit` | No | Manual | Manual |
+| Secret injection | env vars | Keychain | Credential manager | K8s Secrets | GitHub Secrets | env vars | env vars |
+| Min interval | 1 min | 1 sec | 1 min | 1 min | 5 min | 1 min | 1 sec |
diff --git a/.claude/skills/cron-scheduling/references/managing-jobs.md b/.claude/skills/cron-scheduling/references/managing-jobs.md
new file mode 100644
index 0000000..7477986
--- /dev/null
+++ b/.claude/skills/cron-scheduling/references/managing-jobs.md
@@ -0,0 +1,449 @@
+# Cron job management reference
+
+_Read this file when: the user wants to update, edit, pause, resume, stop, delete, or manage the execution behaviour of existing cron jobs — including preventing overlaps, managing environment variables, running as specific users, or chaining jobs._
+
+---
+
+## Table of contents
+
+1. [Updating a cron job](#1-updating-a-cron-job)
+2. [Stopping and disabling jobs](#2-stopping-and-disabling-jobs)
+3. [Deleting jobs](#3-deleting-jobs)
+4. [Preventing overlapping runs](#4-preventing-overlapping-runs)
+5. [Managing environment variables](#5-managing-environment-variables)
+6. [Running as a specific user](#6-running-as-a-specific-user)
+7. 
[Job chaining and dependencies](#7-job-chaining-and-dependencies) +8. [Killing a running job](#8-killing-a-running-job) +9. [Stopping the cron daemon](#9-stopping-the-cron-daemon) + +--- + +## 1. Updating a cron job + +### Method 1 — Interactive edit (best for one-off changes) + +```bash +crontab -e +# Find the line, edit it, save and quit +# crontab reloads automatically on save — no daemon restart needed +``` + +### Method 2 — Programmatic sed update (best for scripts and CI/CD) + +```bash +# Change backup time from 2 AM to 3 AM +OLD='0 2 \* \* \* /bin/bash /opt/scripts/backup.sh' +NEW='0 3 * * * /bin/bash /opt/scripts/backup.sh' + +crontab -l | sed "s|$OLD|$NEW|" | crontab - + +# Verify +crontab -l +``` + +### Method 3 — Full file replacement (best for config management / Ansible / Terraform) + +```bash +# Write a complete new crontab from a managed file +cat > /tmp/managed-crontab << 'EOF' +SHELL=/bin/bash +PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin +MAILTO="" + +# v2.4.0 — updated 2024-01-20 +0 3 * * * /bin/bash /opt/scripts/backup.sh >> /var/log/backup.log 2>&1 +0 */4 * * * /usr/bin/python3 /opt/app/sync.py >> /var/log/sync.log 2>&1 +EOF + +crontab /tmp/managed-crontab +rm /tmp/managed-crontab +crontab -l # Verify +``` + +### Method 4 — Append a new job without touching existing ones + +```bash +# Safe append — preserves current crontab +(crontab -l 2>/dev/null; echo "*/10 * * * * /opt/scripts/newjob.sh >> /var/log/newjob.log 2>&1") | crontab - +``` + +### Updating `/etc/cron.d/` files + +Just overwrite the file — crond polls `/etc/cron.d/` and picks up changes automatically (no reload needed): + +```bash +sudo tee /etc/cron.d/myapp > /dev/null << 'EOF' +SHELL=/bin/bash +PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin + +# Updated schedule +0 3 * * * root /bin/bash /opt/scripts/backup.sh >> /var/log/backup.log 2>&1 +EOF + +sudo chmod 644 /etc/cron.d/myapp +sudo chown root:root /etc/cron.d/myapp +``` + +--- + +## 2. 
Stopping and disabling jobs
+
+### Temporarily disable — comment out
+
+```bash
+# Manual — open editor and add # at start of line
+crontab -e
+
+# Programmatic — tag with marker so you can re-enable later
+# (| is used as the sed delimiter because $TARGET contains slashes)
+TARGET="/opt/scripts/backup.sh"
+crontab -l | sed "\|$TARGET|s|^|#DISABLED |" | crontab -
+
+# Re-enable by removing the tag
+crontab -l | sed "s|^#DISABLED \(.*$TARGET.*\)|\1|" | crontab -
+```
+
+### Temporarily disable — use a flag file
+
+Add this guard at the top of your script. It gives you instant on/off without touching the crontab:
+
+```bash
+#!/usr/bin/env bash
+# Check for disable flag
+FLAG="/opt/scripts/.backup.disabled"
+if [ -f "$FLAG" ]; then
+  echo "[$(date)] Skipping: disabled by flag file $FLAG" >> /var/log/backup.log
+  exit 0
+fi
+# ... rest of script
+```
+
+```bash
+# Disable
+touch /opt/scripts/.backup.disabled
+
+# Re-enable
+rm /opt/scripts/.backup.disabled
+```
+
+### Temporarily disable — stop the cron daemon (disables ALL jobs)
+
+```bash
+sudo systemctl stop cron    # Ubuntu/Debian
+sudo systemctl stop crond   # CentOS/RHEL
+
+sudo systemctl start cron   # Resume all jobs
+```
+
+---
+
+## 3. Deleting jobs
+
+### Delete one specific job
+
+```bash
+# Manual
+crontab -e   # Delete the line, save
+
+# Programmatic — match on command string
+TARGET="/opt/scripts/backup.sh"
+crontab -l | grep -v "$TARGET" | crontab -
+
+# Verify it's gone
+crontab -l | grep "$TARGET"   # Should return nothing
+```
+
+### Delete ALL jobs for the current user
+
+```bash
+# Always back up first
+crontab -l > ~/crontab_backup_$(date +%Y%m%d_%H%M%S).txt
+
+# Then remove
+crontab -r
+```
+
+### Delete jobs for another user (as root)
+
+```bash
+crontab -r -u www-data
+```
+
+### Remove a `/etc/cron.d/` file
+
+```bash
+sudo rm /etc/cron.d/myapp
+```
+
+---
+
+## 4. Preventing overlapping runs
+
+When a job takes longer than its scheduled interval, multiple instances pile up. 
Choose the right prevention strategy: + +### Strategy A — `flock` (recommended for most jobs) + +`flock` is part of `util-linux` (pre-installed on all major Linux distros): + +```bash +# In crontab — non-blocking: skip if already running +*/5 * * * * /usr/bin/flock -n /tmp/myjob.lock /opt/scripts/myjob.sh >> /var/log/myjob.log 2>&1 +# │ └── lock file └── your command +# └── -n = non-blocking (exit code 1 if locked, don't queue) + +# Blocking: wait up to 30 seconds to acquire the lock, then skip +*/5 * * * * /usr/bin/flock -w 30 /tmp/myjob.lock /opt/scripts/myjob.sh >> /var/log/myjob.log 2>&1 +``` + +### Strategy B — PID file inside the script + +```bash +#!/usr/bin/env bash +PIDFILE="/var/run/myjob.pid" + +# Check if already running +if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then + echo "[$(date)] Already running (PID $(cat $PIDFILE)), exiting." >> /var/log/myjob.log + exit 0 +fi + +# Register PID and clean up on exit +echo $$ > "$PIDFILE" +trap 'rm -f "$PIDFILE"; exit' INT TERM EXIT + +# ... your job logic here ... + +rm -f "$PIDFILE" +``` + +### Strategy C — `timeout` (kill if job runs too long) + +```bash +# Kill the job if it hasn't finished within 4 minutes +*/5 * * * * /usr/bin/timeout 240 /opt/scripts/myjob.sh >> /var/log/myjob.log 2>&1 +# │ +# seconds (240 = 4 min) + +# With a signal (default TERM; use -k for KILL after grace period) +*/5 * * * * /usr/bin/timeout --kill-after=10 240 /opt/scripts/myjob.sh >> /var/log/myjob.log 2>&1 +``` + +### Strategy D — combine flock + timeout (most robust) + +```bash +*/5 * * * * /usr/bin/flock -n /tmp/myjob.lock /usr/bin/timeout 240 /opt/scripts/myjob.sh >> /var/log/myjob.log 2>&1 +``` + +--- + +## 5. Managing environment variables + +Cron runs with `PATH=/usr/bin:/bin` and no shell profile loaded. 
Three safe patterns: + +### Pattern 1 — Set variables in the crontab header + +```bash +# At the top of crontab, before any job entries +SHELL=/bin/bash +PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin +MAILTO="" +TZ=UTC +DB_HOST=localhost +API_ENV=production +``` + +Limitation: no `$(...)` substitution, no `source`, no arrays. Simple key=value only. + +### Pattern 2 — Source an env file inside the script + +```bash +#!/usr/bin/env bash +# Load application environment +set -a # Auto-export all vars +source /opt/app/.env # Contains: DB_URL=..., API_KEY=... +set +a + +# Now all .env variables are available +/opt/app/run.sh +``` + +### Pattern 3 — Wrapper script that sources env and execs + +```bash +#!/usr/bin/env bash +# /opt/scripts/with-env.sh — reusable env wrapper +set -a +source /opt/app/.env +set +a +exec "$@" +``` + +```bash +# In crontab — clean, reusable pattern +0 2 * * * /opt/scripts/with-env.sh /opt/app/backup.py >> /var/log/backup.log 2>&1 +0 8 * * 1-5 /opt/scripts/with-env.sh /opt/app/report.py >> /var/log/report.log 2>&1 +``` + +### Pattern 4 — Inline env vars per job + +```bash +# Set vars inline (simple cases) +0 2 * * * DB_HOST=localhost DB_USER=app /opt/scripts/backup.sh >> /var/log/backup.log 2>&1 +``` + +### Secrets management + +Never store secrets in plaintext in the crontab file. Use: + +```bash +# Read from a protected file (owned root, mode 600) +0 2 * * * DB_PASSWORD=$(cat /etc/myapp/db.secret) /opt/scripts/backup.sh + +# Source a protected env file +0 2 * * * source /etc/myapp/secrets.env && /opt/scripts/backup.sh + +# Use systemd credentials / vault agent / AWS SSM Parameter Store in the script +``` + +--- + +## 6. 
Running as a specific user + +### User-level crontab + +```bash +sudo crontab -u postgres -e # Edit postgres user's crontab +sudo crontab -u www-data -l # List www-data's crontab +``` + +### System crontab (`/etc/crontab` and `/etc/cron.d/`) + +These files have an extra `USERNAME` field between the schedule and the command: + +```bash +# /etc/cron.d/myapp +SHELL=/bin/bash +PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin + +# min hr dom mon dow USER command +0 2 * * * postgres pg_dump mydb | gzip > /backups/db.sql.gz +*/5 * * * * www-data php /var/www/html/artisan queue:work --once +0 3 1 * * root /usr/sbin/logrotate -f /etc/logrotate.conf +``` + +### `sudo` in a cron job (avoid if possible) + +```bash +# If a non-root job needs elevated access, prefer sudoers configuration +# /etc/sudoers.d/myapp-cron: +www-data ALL=(root) NOPASSWD: /opt/scripts/specific_privileged_script.sh + +# Then in the crontab for www-data: +*/5 * * * * sudo /opt/scripts/specific_privileged_script.sh +``` + +--- + +## 7. 
Job chaining and dependencies + +### Sequential — job B runs after job A finishes (in the same crontab entry) + +```bash +# Run B only if A succeeds (&&) +0 2 * * * /opt/scripts/prepare.sh && /opt/scripts/process.sh >> /var/log/etl.log 2>&1 + +# Run B regardless of A's exit code (;) +0 2 * * * /opt/scripts/cleanup.sh; /opt/scripts/report.sh >> /var/log/chain.log 2>&1 + +# Run B only if A fails (||) +0 2 * * * /opt/scripts/primary.sh || /opt/scripts/fallback.sh >> /var/log/chain.log 2>&1 +``` + +### Sequential — stagger with separate crontab entries + +Use time offsets when tasks share a resource (DB, API) and you want them spread out: + +```bash +0 2 * * * /opt/scripts/export_users.sh >> /var/log/export.log 2>&1 +20 2 * * * /opt/scripts/export_orders.sh >> /var/log/export.log 2>&1 +40 2 * * * /opt/scripts/export_products.sh >> /var/log/export.log 2>&1 +``` + +### Parallel — all start at the same time + +```bash +0 2 * * * /opt/scripts/job_a.sh >> /var/log/job_a.log 2>&1 +0 2 * * * /opt/scripts/job_b.sh >> /var/log/job_b.log 2>&1 +0 2 * * * /opt/scripts/job_c.sh >> /var/log/job_c.log 2>&1 +``` + +### Dependency via sentinel file (A must finish before B runs) + +```bash +#!/usr/bin/env bash +# job_b.sh — waits for job_a to complete +SENTINEL="/tmp/job_a.done" +TIMEOUT=1800 # 30 minutes + +start=$(date +%s) +while [ ! -f "$SENTINEL" ]; do + elapsed=$(( $(date +%s) - start )) + [ "$elapsed" -ge "$TIMEOUT" ] && { echo "Timed out waiting for job_a"; exit 1; } + sleep 10 +done + +rm -f "$SENTINEL" +# ... rest of job_b ... +``` + +```bash +# job_a.sh — create sentinel on success +# ... main logic ... +touch /tmp/job_a.done +``` + +--- + +## 8. 
Killing a running job
+
+```bash
+# Find the process
+pgrep -af "backup.sh"
+ps aux | grep backup.sh
+
+# Graceful kill (SIGTERM — lets script clean up)
+kill $(pgrep -f "backup.sh")
+
+# Force kill (SIGKILL — use if graceful kill doesn't work after ~10s)
+kill -9 $(pgrep -f "backup.sh")
+
+# Kill the job's child processes (-P matches by parent PID);
+# kill the parent as well to take down the whole tree
+pkill -TERM -P $(pgrep -f "backup.sh")   # Graceful
+pkill -KILL -P $(pgrep -f "backup.sh")   # Forceful
+
+# Kill by PID file
+kill $(cat /var/run/myjob.pid)
+
+# After killing a flock-managed job, remove stale lock file
+rm -f /tmp/myjob.lock
+```
+
+---
+
+## 9. Stopping the cron daemon
+
+```bash
+# Stop crond (all scheduled jobs pause immediately)
+sudo systemctl stop cron    # Ubuntu/Debian
+sudo systemctl stop crond   # CentOS/RHEL
+
+# Restart crond (pick up config changes, clear stuck state)
+sudo systemctl restart cron
+
+# Disable crond at boot (won't start on reboot)
+sudo systemctl disable cron
+sudo systemctl enable cron    # Re-enable
+
+# Status check
+sudo systemctl status cron
+sudo systemctl is-active cron   # Prints "active" or "inactive"
+```
diff --git a/.claude/skills/cron-scheduling/references/monitoring.md b/.claude/skills/cron-scheduling/references/monitoring.md
new file mode 100644
index 0000000..4d71834
--- /dev/null
+++ b/.claude/skills/cron-scheduling/references/monitoring.md
@@ -0,0 +1,472 @@
+# Cron monitoring and debugging reference
+
+_Read this file when: the user asks about logs, monitoring, alerting on cron failures, dead man's switches, Prometheus/Grafana integration, or debugging why a cron job isn't running._
+
+---
+
+## Table of contents
+
+1. [Reading cron logs](#1-reading-cron-logs)
+2. [Structured logging inside scripts](#2-structured-logging-inside-scripts)
+3. [Dead man's switch (healthcheck monitoring)](#3-dead-mans-switch)
+4. [Exit-code monitoring and alerting](#4-exit-code-monitoring-and-alerting)
+5. [Prometheus + Grafana integration](#5-prometheus--grafana-integration)
+6. 
[Debugging — systematic process](#6-debugging--systematic-process) +7. [Troubleshooting table](#7-troubleshooting-table) +8. [Simulating cron's environment locally](#8-simulating-crons-environment) + +--- + +## 1. Reading cron logs + +### System log (where crond writes job start/end events) + +```bash +# Ubuntu/Debian — systemd journal +sudo journalctl -u cron -f # Stream live +sudo journalctl -u cron --since "today" # Today only +sudo journalctl -u cron --since "2 hours ago" # Last 2 hours +sudo journalctl -u cron -p err # Errors only +sudo journalctl -u cron --since "today" | grep CMD # Show commands run + +# Ubuntu/Debian — syslog (older systems) +sudo tail -f /var/log/syslog | grep CRON +sudo grep CRON /var/log/syslog | tail -50 + +# CentOS/RHEL +sudo tail -f /var/log/cron +sudo grep "backup" /var/log/cron | tail -20 +``` + +### What a successful cron log entry looks like + +``` +Jan 15 02:00:01 myserver CRON[18432]: (ubuntu) CMD (/bin/bash /opt/scripts/backup.sh >> /var/log/backup.log 2>&1) +│ │ │ │ │ +timestamp hostname daemon[PID] user command that ran +``` + +### Count how many times a job ran + +```bash +# Count runs today +sudo journalctl -u cron --since "today" | grep "backup.sh" | wc -l + +# Show run times for the last 7 days +sudo journalctl -u cron --since "7 days ago" | grep "backup.sh" +``` + +### Cron job output logs (written by your script redirect) + +```bash +# View job output log +tail -f /var/log/backup.log +tail -100 /var/log/backup.log + +# Search for errors in job log +grep -i "error\|fail\|warn" /var/log/backup.log | tail -20 + +# Show entries with timestamps (if your script logs them) +grep "2024-01-15" /var/log/backup.log +``` + +--- + +## 2. 
Structured logging inside scripts + +Add this boilerplate to every cron script for consistent, parseable logs: + +### Bash — simple timestamped log function + +```bash +#!/usr/bin/env bash +set -euo pipefail + +LOG="/var/log/myjob.log" +JOB="myjob" + +log() { + local level=$1; shift + echo "[$(date -u +%Y-%m-%dT%H:%M:%SZ)] [$JOB] [$level] $*" >> "$LOG" +} + +START=$(date +%s) +log INFO "Job started (PID $$)" + +# ... your logic ... +EXIT_CODE=0 +some_command || EXIT_CODE=$? + +DURATION=$(( $(date +%s) - START )) +if [ "$EXIT_CODE" -eq 0 ]; then + log INFO "Job completed successfully in ${DURATION}s" +else + log ERROR "Job failed with exit code $EXIT_CODE after ${DURATION}s" +fi +exit "$EXIT_CODE" +``` + +### JSON structured logs (for log aggregators like Datadog, Loki, Splunk) + +```bash +log_json() { + local level=$1; shift + printf '{"ts":"%s","job":"%s","level":"%s","msg":"%s","pid":%d}\n' \ + "$(date -u +%Y-%m-%dT%H:%M:%SZ)" "$JOB" "$level" "$*" "$$" >> "$LOG" +} + +log_json INFO "Backup started" +log_json INFO "Written 1.24 GB to /backups/db_20240115.sql.gz" +log_json INFO "Completed in 142s" +``` + +### Log rotation (prevent logs from filling disk) + +```bash +# /etc/logrotate.d/cron-jobs +/var/log/backup.log +/var/log/report.log +/var/log/sync.log { + daily + rotate 30 + compress + delaycompress + missingok + notifempty + copytruncate # Safe for scripts still writing to the file +} +``` + +--- + +## 3. Dead man's switch + +Standard log monitoring tells you when something goes wrong. A dead man's switch alerts you when a job *stops running* — which is invisible in logs. 
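When a hosted service isn't an option, the same idea can be self-hosted with a heartbeat file: every successful run touches the file, and a second cron job alerts when the timestamp goes stale. A minimal sketch — the path, threshold, and alert command are illustrative:

```shell
# Each monitored job touches a heartbeat file on success, e.g.:
#   0 2 * * * /opt/scripts/backup.sh >> /var/log/backup.log 2>&1 && touch /var/tmp/backup.heartbeat

HEARTBEAT=/var/tmp/backup.heartbeat
MAX_AGE=90000   # seconds (~25 h of slack for a daily job)

# Seconds since the heartbeat file was last touched; missing file counts as "forever ago"
heartbeat_age() {
  now=$(date +%s)
  last=$(stat -c %Y "$1" 2>/dev/null || echo 0)   # GNU stat; BSD uses: stat -f %m
  echo $(( now - last ))
}

age=$(heartbeat_age "$HEARTBEAT")
if [ "$age" -gt "$MAX_AGE" ]; then
  echo "ALERT: no successful run recorded in ${age}s"   # swap echo for mail/Slack/etc.
else
  echo "OK: last success ${age}s ago"
fi
```

Run the checker itself from cron (hourly is plenty); it only needs read access to the heartbeat file.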
+ +### Healthchecks.io (free tier available) + +Two crontab caveats shape these entries: crontab has no line-continuation syntax, so each job must stay on one (long) line, and cron does not expand variables inside env-value lines, so the ping URL is set in full rather than built from a UUID variable. + +```bash +HC_URL="https://hc-ping.com/your-uuid-here" # UUID from the healthchecks.io dashboard + +# Pattern 1 — Ping start and success; ping /fail if start was sent but success wasn't +0 2 * * * /usr/bin/curl -fsS --retry 3 -o /dev/null "${HC_URL}/start" && /opt/scripts/backup.sh >> /var/log/backup.log 2>&1 && /usr/bin/curl -fsS --retry 3 -o /dev/null "${HC_URL}" || /usr/bin/curl -fsS --retry 3 -o /dev/null "${HC_URL}/fail" + +# Pattern 2 — Simple ping on success (no start signal) +0 2 * * * /opt/scripts/backup.sh >> /var/log/backup.log 2>&1 && /usr/bin/curl -fsS --retry 3 -o /dev/null "${HC_URL}" +``` + +### Cronitor + +```bash +CRONITOR_KEY="your-monitor-key" + +0 2 * * * /usr/bin/curl -sm 5 "https://cronitor.link/${CRONITOR_KEY}/run" && /opt/scripts/backup.sh >> /var/log/backup.log 2>&1 && /usr/bin/curl -sm 5 "https://cronitor.link/${CRONITOR_KEY}/complete" || /usr/bin/curl -sm 5 "https://cronitor.link/${CRONITOR_KEY}/fail" +``` + +### Wrapper script approach (DRY — reuse across all jobs) + +```bash +#!/usr/bin/env bash +# /opt/scripts/monitored-run.sh +# Usage: monitored-run.sh <healthcheck-uuid> <command> [args...] + +HC_UUID=$1; shift +HC_URL="https://hc-ping.com/${HC_UUID}" + +/usr/bin/curl -fsS --retry 3 -o /dev/null "${HC_URL}/start" 2>/dev/null || true + +START=$(date +%s) +"$@" +EXIT_CODE=$? +DURATION=$(( $(date +%s) - START )) + +if [ "$EXIT_CODE" -eq 0 ]; then + /usr/bin/curl -fsS --retry 3 -o /dev/null "${HC_URL}?duration=${DURATION}" 2>/dev/null || true +else + /usr/bin/curl -fsS --retry 3 -o /dev/null "${HC_URL}/fail" 2>/dev/null || true +fi + +exit "$EXIT_CODE" +``` + +```bash +# Clean crontab entries using the wrapper +0 2 * * * /opt/scripts/monitored-run.sh "UUID-1" /opt/scripts/backup.sh >> /var/log/backup.log 2>&1 +0 8 * * 1-5 /opt/scripts/monitored-run.sh "UUID-2" /opt/scripts/report.sh >> /var/log/report.log 2>&1 +``` + +--- + +## 4. 
Exit-code monitoring and alerting + +### Email on failure (simple — uses system mail) + +```bash +#!/usr/bin/env bash +JOB_NAME="db_backup" +ALERT_EMAIL="ops@example.com" + +run_job() { + /opt/scripts/backup.sh >> /var/log/backup.log 2>&1 +} + +# Capture the exit code directly — inside `if ! run_job`, $? would hold the +# negated status (0), not the job's real exit code +run_job +EXIT_CODE=$? +if [ "$EXIT_CODE" -ne 0 ]; then + { + echo "Subject: [CRON ALERT] $JOB_NAME failed on $(hostname)" + echo "To: $ALERT_EMAIL" + echo "" + echo "Job: $JOB_NAME" + echo "Host: $(hostname)" + echo "Time: $(date)" + echo "Exit code: $EXIT_CODE" + echo "" + echo "Last 20 lines of log:" + tail -20 /var/log/backup.log + } | sendmail "$ALERT_EMAIL" +fi +``` + +### Slack webhook on failure + +```bash +#!/usr/bin/env bash +SLACK_WEBHOOK="https://hooks.slack.com/services/YOUR/WEBHOOK/URL" +JOB_NAME="db_backup" + +/opt/scripts/backup.sh >> /var/log/backup.log 2>&1 +EXIT_CODE=$? + +if [ "$EXIT_CODE" -ne 0 ]; then + /usr/bin/curl -fsS -X POST -H 'Content-type: application/json' \ + --data "{\"text\":\":rotating_light: *Cron job failed*\n*Job:* \`${JOB_NAME}\`\n*Host:* \`$(hostname)\`\n*Time:* $(date -u)\n*Exit code:* ${EXIT_CODE}\"}" \ + "$SLACK_WEBHOOK" +fi +``` + +### Generic alert wrapper script + +```bash +#!/usr/bin/env bash +# /opt/scripts/alert-on-fail.sh — universal wrapper +# Usage: alert-on-fail.sh "Job Name" /path/to/command [args...] +JOB_NAME="$1"; shift + +"$@" +EXIT_CODE=$? + +if [ "$EXIT_CODE" -ne 0 ]; then + MSG="Cron job '$JOB_NAME' failed on $(hostname) at $(date). Exit code: $EXIT_CODE" + # Notify via your preferred channel: + echo "$MSG" | mail -s "Cron failure: $JOB_NAME" ops@example.com + # or: curl -fsS -X POST ... (Slack/PagerDuty/etc.) +fi + +exit "$EXIT_CODE" +``` + +```bash +# In crontab +0 2 * * * /opt/scripts/alert-on-fail.sh "DB Backup" /opt/scripts/backup.sh >> /var/log/backup.log 2>&1 +``` + +--- + +## 5. Prometheus + Grafana integration + +Push job metrics to a **Prometheus Pushgateway** after each run. 
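Before wiring curl into a crontab entry, it helps to eyeball the exact text-exposition payload the gateway will receive. A quick sketch (metric names match the push script below; `pushgateway:9091` is the conventional default address):

```shell
# Build the Prometheus text-exposition payload locally and print it for inspection.
JOB_NAME=db_backup
EXIT_CODE=0
DURATION=142

payload=$(printf '%s\n' \
  "cron_job_last_exit_code{job=\"$JOB_NAME\"} $EXIT_CODE" \
  "cron_job_last_duration_seconds{job=\"$JOB_NAME\"} $DURATION" \
  "cron_job_last_success_timestamp_seconds{job=\"$JOB_NAME\"} $(date +%s)")

echo "$payload"
# When it looks right, POST it:
#   echo "$payload" | curl -s --data-binary @- \
#     "http://pushgateway:9091/metrics/job/cron/instance/${JOB_NAME}"
```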
+ +### Push metrics script + +```bash +#!/usr/bin/env bash +# /opt/scripts/push-metrics.sh +# Usage: push-metrics.sh <job_name> <exit_code> <duration_seconds> + +JOB_NAME=$1 +EXIT_CODE=$2 +DURATION=$3 +PUSHGATEWAY="${PUSHGATEWAY_URL:-http://pushgateway:9091}" +TS=$(date +%s) + +cat << EOF | /usr/bin/curl -s --data-binary @- "${PUSHGATEWAY}/metrics/job/cron/instance/${JOB_NAME}" +# HELP cron_job_last_success_timestamp_seconds Unix timestamp of last successful run +# TYPE cron_job_last_success_timestamp_seconds gauge +$([ "$EXIT_CODE" -eq 0 ] && echo "cron_job_last_success_timestamp_seconds{job=\"$JOB_NAME\"} $TS") +# HELP cron_job_last_exit_code Exit code of the most recent run +# TYPE cron_job_last_exit_code gauge +cron_job_last_exit_code{job="$JOB_NAME"} $EXIT_CODE +# HELP cron_job_last_duration_seconds Duration in seconds of the most recent run +# TYPE cron_job_last_duration_seconds gauge +cron_job_last_duration_seconds{job="$JOB_NAME"} $DURATION +EOF +``` + +### Full wrapper with Prometheus push + +```bash +#!/usr/bin/env bash +# /opt/scripts/cron-wrapper.sh +# Usage: cron-wrapper.sh <job_name> <command> [args...] + +JOB_NAME=$1; shift +START=$(date +%s) + +"$@" +EXIT_CODE=$? 
+DURATION=$(( $(date +%s) - START )) + +/opt/scripts/push-metrics.sh "$JOB_NAME" "$EXIT_CODE" "$DURATION" + +exit "$EXIT_CODE" +``` + +```bash +# In crontab — clean one-liner +0 2 * * * /opt/scripts/cron-wrapper.sh "db_backup" /opt/scripts/backup.sh >> /var/log/backup.log 2>&1 +``` + +### Grafana dashboard queries (PromQL) + +```promql +# Jobs that haven't run successfully in the expected window +time() - cron_job_last_success_timestamp_seconds > 90000 # > 25 hours + +# Jobs currently failing +cron_job_last_exit_code != 0 + +# Duration trend (last 7 days) +cron_job_last_duration_seconds{job="db_backup"} + +# Alert rule — job failed +ALERT CronJobFailed + IF cron_job_last_exit_code != 0 + FOR 5m + LABELS { severity = "warning" } + ANNOTATIONS { + summary = "Cron job {{ $labels.job }} failed", + description = "Exit code: {{ $value }}" + } +``` + +--- + +## 6. Debugging — systematic process + +Work through these checks in order: + +``` +Job didn't run? +│ +├─ 1. Is crond running? +│ systemctl status cron +│ → If stopped: sudo systemctl start cron +│ +├─ 2. Is the schedule correct? +│ crontab -l (look for syntax errors) +│ Validate at https://crontab.guru +│ Check: is the system timezone what you expect? +│ echo $TZ; timedatectl +│ +├─ 3. Does the crontab file parse OK? +│ sudo systemctl status cron | grep error +│ sudo journalctl -u cron | grep error +│ +├─ 4. Did crond attempt to run it? +│ sudo journalctl -u cron --since "1 hour ago" | grep CMD +│ → If no entry: schedule isn't matching — re-check fields +│ +├─ 5. Can the script run in cron's environment? +│ env -i HOME=/home/USER SHELL=/bin/bash PATH=/usr/bin:/bin bash /path/to/script.sh +│ → If fails: fix PATH, absolute paths, env vars +│ +├─ 6. Does the script have execute permission? +│ ls -la /path/to/script.sh +│ → Fix: chmod +x /path/to/script.sh +│ +├─ 7. Can the cron user read the script and its dependencies? +│ sudo -u www-data bash /path/to/script.sh +│ → Fix: chown/chmod as needed +│ +└─ 8. 
Is output being captured? + Add >> /tmp/debug.log 2>&1 to capture errors + cat /tmp/debug.log +``` + +--- + +## 7. Troubleshooting table + +| Symptom | Likely cause | Fix | +|---------|-------------|-----| +| Job never runs | crond is stopped | `sudo systemctl start cron` | +| Job never runs | Wrong schedule | Validate at crontab.guru; check timezone | +| Job never runs | `/etc/cron.d/` file has wrong perms | `chmod 644 /etc/cron.d/myapp` | +| Job never runs | `/etc/cron.d/` file has a `.sh` extension | Rename to no extension | +| `command not found` | Relative path | Use absolute paths everywhere | +| Works manually, fails in cron | PATH mismatch | Set `PATH=` at top of crontab; use absolute paths | +| Works manually, fails in cron | Env var not set | Source `.env` file explicitly in script | +| No output, no errors | Output not redirected | Add `>> /var/log/job.log 2>&1` | +| `Permission denied` | Script not executable | `chmod +x /path/to/script.sh` | +| Job runs but has no effect | Missing secrets/credentials | Source env file; check secret manager | +| Multiple copies running | Job overlaps itself | Add `flock -n /tmp/job.lock` | +| Job runs once then stops | `crontab -r` was run | Recreate crontab | +| Runs at wrong time | System timezone unexpected | Check `timedatectl`; set `TZ=` in crontab | +| Job output triggers email | `MAILTO` not set | Add `MAILTO=""` to crontab header | +| Job runs as wrong user | Mixed up crontab sources | Check which user's crontab you're editing | +| Stale lock file after crash | flock lock not cleaned up | `rm -f /tmp/job.lock` | +| Job killed mid-run | OOM killer | Limit memory use (`ulimit -v` in the script, or systemd `MemoryMax=`); check `dmesg \| grep -i oom` | + +--- + +## 8. 
Simulating cron's environment + +The most reliable way to reproduce a cron failure locally: + +```bash +# Full simulation — strips your entire environment +env -i \ + HOME=/home/ubuntu \ + SHELL=/bin/bash \ + PATH=/usr/bin:/bin \ + USER=ubuntu \ + LOGNAME=ubuntu \ + /bin/bash /opt/scripts/myjob.sh + +# With additional vars your crontab defines +env -i \ + HOME=/home/ubuntu \ + SHELL=/bin/bash \ + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \ + MAILTO="" \ + TZ=UTC \ + /bin/bash /opt/scripts/myjob.sh + +# Run as the cron user (catches permission issues) +sudo -u www-data env -i \ + HOME=/var/www \ + SHELL=/bin/bash \ + PATH=/usr/bin:/bin \ + /bin/bash /opt/scripts/myjob.sh +``` + +### Quick diagnostic one-liner + +Add this temporarily to catch environment issues: + +```bash +# In crontab — dumps cron's environment to a file for inspection +* * * * * env > /tmp/cron-env.txt && echo "---" >> /tmp/cron-env.txt +``` + +Then inspect `/tmp/cron-env.txt` to see exactly what cron sees. diff --git a/.claude/skills/cron-scheduling/references/syntax-reference.md b/.claude/skills/cron-scheduling/references/syntax-reference.md new file mode 100644 index 0000000..039bf94 --- /dev/null +++ b/.claude/skills/cron-scheduling/references/syntax-reference.md @@ -0,0 +1,311 @@ +# Cron syntax reference + +_Read this file when: the user asks about cron expressions, field meanings, special characters, schedule examples, or wants help writing/understanding a specific schedule._ + +--- + +## Table of contents + +1. [Field reference](#1-field-reference) +2. [Special characters](#2-special-characters) +3. [Shorthand macros](#3-shorthand-macros) +4. [Expression examples library](#4-expression-examples-library) +5. [Expression builder — step-by-step](#5-expression-builder) +6. [Edge cases and gotchas](#6-edge-cases-and-gotchas) + +--- + +## 1. 
Field reference + +``` +position field valid range notes +──────── ───────────── ─────────── ────────────────────────────────────── +1 minute 0–59 +2 hour 0–23 +3 day-of-month 1–31 see gotcha §6.1 +4 month 1–12 or JAN FEB MAR APR MAY JUN + JUL AUG SEP OCT NOV DEC +5 day-of-week 0–7 0 and 7 both = Sunday + or SUN MON TUE WED THU FRI SAT +6+ command any string rest of the line, including arguments +``` + +--- + +## 2. Special characters + +### `*` — wildcard (every value) + +```bash +* * * * * # Every minute of every hour every day +0 * * * * # Top of every hour, every day +0 0 * * * # Midnight every day +``` + +### `,` — list (multiple specific values) + +```bash +0,30 * * * * # At :00 and :30 of every hour +0 8,12,17 * * * # At 8 AM, 12 PM, and 5 PM +0 0 1,15 * * # 1st and 15th of every month at midnight +0 0 * 1,4,7,10 * # Midnight on the 1st of Jan, Apr, Jul, Oct +``` + +### `-` — range (inclusive) + +```bash +0 9-17 * * * # Every hour from 9 AM to 5 PM (9,10,11,...,17) +0 0 * * 1-5 # Monday through Friday at midnight +0 9 * * 1-5 # 9 AM on weekdays only +30 8-18 * * 1-5 # :30 every hour during business hours Mon–Fri +``` + +### `/` — step (every N) + +```bash +*/5 * * * * # Every 5 minutes +*/15 * * * * # Every 15 minutes (0, 15, 30, 45) +0 */2 * * * # Every 2 hours at :00 (0, 2, 4, ..., 22) +0 */6 * * * # Every 6 hours (0, 6, 12, 18) +*/10 8-18 * * 1-5 # Every 10 min during business hours, weekdays +``` + +### Combining `,` `-` `/` + +```bash +0,30 9-17 * * 1-5 # :00 and :30 during business hours Mon–Fri +*/15 8,20 * * * # Every 15 min at 8 AM and 8 PM +0 6-22/2 * * * # Every 2 hours from 6 AM to 10 PM (6,8,10,...,22) +``` + +--- + +## 3. 
Shorthand macros + +These replace the five-field expression entirely: + +| Macro | Equivalent | Meaning | +|-------|-----------|---------| +| `@reboot` | _(none)_ | Once at system startup | +| `@yearly` / `@annually` | `0 0 1 1 *` | Jan 1st at midnight | +| `@monthly` | `0 0 1 * *` | 1st of every month at midnight | +| `@weekly` | `0 0 * * 0` | Every Sunday at midnight | +| `@daily` / `@midnight` | `0 0 * * *` | Every day at midnight | +| `@hourly` | `0 * * * *` | Every hour on the hour | + +```bash +@reboot /opt/scripts/startup.sh >> /var/log/startup.log 2>&1 +@daily /opt/scripts/cleanup.sh >> /var/log/cleanup.log 2>&1 +@weekly /opt/scripts/weekly_report.sh >> /var/log/weekly.log 2>&1 +``` + +--- + +## 4. Expression examples library + +### Time-based patterns + +```bash +# Every minute +* * * * * + +# Every N minutes +*/5 * * * * # Every 5 min +*/10 * * * * # Every 10 min +*/15 * * * * # Every 15 min +*/30 * * * * # Every 30 min (same as 0,30 * * * *) + +# Specific minute of every hour +0 * * * * # :00 — top of every hour +15 * * * * # :15 past every hour +45 * * * * # :45 past every hour + +# Every N hours +0 */2 * * * # Every 2 hours +0 */3 * * * # Every 3 hours +0 */4 * * * # Every 4 hours +0 */6 * * * # Every 6 hours +0 */12 * * * # Every 12 hours (midnight and noon) +``` + +### Daily patterns + +```bash +0 0 * * * # Midnight +0 1 * * * # 1 AM +30 2 * * * # 2:30 AM (good for overnight backups) +0 6 * * * # 6 AM +45 7 * * * # 7:45 AM (before 8 AM standup) +0 12 * * * # Noon +0 18 * * * # 6 PM +0 22 * * * # 10 PM +59 23 * * * # 11:59 PM +``` + +### Weekday patterns + +```bash +0 8 * * 1-5 # 8 AM Monday–Friday +0 8 * * 1 # 8 AM Monday only +0 8 * * 5 # 8 AM Friday only +0 8 * * 1,3,5 # 8 AM Mon, Wed, Fri +0 8 * * 2,4 # 8 AM Tue, Thu +0 0 * * 6,0 # Midnight on Sat and Sun (weekends) +0 0 * * 0 # Midnight every Sunday +*/5 9-17 * * 1-5 # Every 5 min during business hours +0,30 9-17 * * 1-5 # Every 30 min during business hours +``` + +### Monthly patterns + +```bash 
+0 2 1 * * # 1st of every month at 2 AM +0 2 15 * * # 15th of every month at 2 AM +0 2 1,15 * * # 1st and 15th at 2 AM +0 2 28-31 * * # Last few days of month (not always reliable — see §6.5) +0 0 1 */3 * # First day of every quarter (Jan, Apr, Jul, Oct) +0 0 1 1 * # January 1st at midnight (same as @yearly) +``` + +### Specific calendar patterns + +```bash +0 9 * * 1#1 # First Monday of the month — `#` is NOT standard cron (Quartz/AWS-style schedulers only; see §6.7) +0 2 * 1 * # Every day in January at 2 AM +0 2 * 12 * # Every day in December at 2 AM +0 2 25 12 * # December 25th at 2 AM +0 9 * 3-5 1-5 # 9 AM weekdays in March, April, May +``` + +--- + +## 5. Expression builder + +When a user describes a schedule in plain English, map it to a cron expression using this process: + +### Step 1 — Identify frequency type + +| User says | Start with | +|-----------|-----------| +| "every X minutes" | `*/X * * * *` | +| "every X hours" | `0 */X * * *` | +| "every day at TIME" | `MIN HOUR * * *` | +| "every weekday at TIME" | `MIN HOUR * * 1-5` | +| "every Monday at TIME" | `MIN HOUR * * 1` | +| "every week on DAY at TIME" | `MIN HOUR * * DOW` | +| "every month on the Nth" | `MIN HOUR N * *` | +| "every year on DATE" | `MIN HOUR DOM MON *` | +| "at startup / on boot" | `@reboot` | + +### Step 2 — Convert time to fields + +``` +"3:30 AM" → minute=30 hour=3 +"8:00 AM" → minute=0 hour=8 +"12:00 PM" → minute=0 hour=12 +"5:45 PM" → minute=45 hour=17 +"11:59 PM" → minute=59 hour=23 +"midnight" → minute=0 hour=0 +"noon" → minute=0 hour=12 +``` + +### Step 3 — Convert day names to numbers + +``` +Sunday=0, Monday=1, Tuesday=2, Wednesday=3, +Thursday=4, Friday=5, Saturday=6, Sunday=7 (also valid) +``` + +### Step 4 — Worked examples + +| Plain English | Expression | +|---------------|-----------| +| Every 5 minutes | `*/5 * * * *` | +| Every hour at :30 | `30 * * * *` | +| Daily at 2:30 AM | `30 2 * * *` | +| Weekdays at 7:45 AM | `45 7 * * 1-5` | +| Every Monday at 9 AM | `0 9 * * 1` | +| First of every month at midnight | `0 0 1 * 
*` | +| Every 15 min during business hours | `*/15 9-17 * * 1-5` | +| Twice a day at 8 AM and 8 PM | `0 8,20 * * *` | +| Every Sunday at 3:45 AM | `45 3 * * 0` | +| Every 6 hours starting midnight | `0 */6 * * *` | +| Every 30 min on weekends | `*/30 * * * 6,0` | +| Jan 1st at midnight | `0 0 1 1 *` | + +--- + +## 6. Edge cases and gotchas + +### 6.1 — Day-of-month AND day-of-week are ORed, not ANDed + +When you set both `day-of-month` and `day-of-week` to something other than `*`, cron runs if EITHER condition matches — not both: + +```bash +0 2 15 * 5 # Runs on the 15th of every month AND every Friday + # NOT "the 15th only if it's a Friday" + +# To run ONLY on Fridays that fall on the 15th, use a script-level check +# (note \% — crontab translates an unescaped % into a newline): +0 2 15 * * [ "$(date +\%u)" = "5" ] && /opt/scripts/myjob.sh +``` + +### 6.2 — Minimum interval is 1 minute + +Cron wakes up once per minute. For sub-minute scheduling, use a loop inside a per-minute job: + +```bash +# Run every 10 seconds (6 times per minute) +* * * * * for i in 1 2 3 4 5 6; do /opt/scripts/poll.sh & sleep 10; done +``` + +### 6.3 — Timezone is the system timezone + +Cron uses the system timezone (`/etc/localtime`). To force a specific timezone: + +```bash +# Set in the crontab file header (affects the jobs' environment; whether it +# also shifts the schedule depends on the cron implementation) +TZ=UTC +0 2 * * * /opt/scripts/backup.sh + +# Or set CRON_TZ, which cronie applies to the schedule of the entries below it +CRON_TZ=America/New_York +0 9 * * 1-5 /opt/scripts/report.sh +``` + +### 6.4 — `@reboot` timing + +`@reboot` runs after crond starts, which may be before your application, database, or network are ready. Add a sleep or use `systemd` service ordering for production use: + +```bash +@reboot sleep 30 && /opt/scripts/startup.sh >> /var/log/startup.log 2>&1 +``` + +### 6.5 — Last day of month + +There is no `L` in standard cron (that's Quartz/enterprise schedulers). 
Workaround: + +```bash +# Run on the last day of the month using a script check +# (\%d because crontab translates an unescaped % into a newline) +0 2 28-31 * * [ "$(date -d tomorrow +\%d)" = "01" ] && /opt/scripts/month_end.sh +``` + +### 6.6 — February and 31-day months + +`0 0 31 * *` runs only in months that have 31 days. If you want end-of-month reliably, use the technique in §6.5. + +### 6.7 — AWS / GCP / Quartz cron differences + +Standard Unix cron ≠ cloud schedulers: + +| Feature | Unix cron | AWS EventBridge | Quartz (Java) | +|---------|-----------|----------------|---------------| +| Fields | 5 | 6 (adds year) | 6–7 (adds seconds/year) | +| `?` in day fields | No | Required in one of dom/dow | Supported | +| `L` (last) | No | Yes | Yes | +| `W` (nearest weekday) | No | Yes | Yes | +| `#` (Nth weekday) | No | Yes | Yes | + +AWS example — every day at 2 AM UTC: +``` +cron(0 2 * * ? *) +# ^ year field required; ? required in dom or dow (not both) +``` diff --git a/.claude/skills/css-guide/SKILL.md b/.claude/skills/css-guide/SKILL.md new file mode 100644 index 0000000..24f3bcf --- /dev/null +++ b/.claude/skills/css-guide/SKILL.md @@ -0,0 +1,265 @@ +--- +name: css-guide +description: >- + Use this skill whenever writing, editing, updating, or reviewing CSS files. + Triggers on any request to create stylesheets, style components or pages, + refactor CSS, add responsive design, fix layout bugs, improve CSS + architecture, create CSS custom properties (variables), or update existing + .css / .scss / .sass files. Also triggers for phrases like "style this", + "write CSS", "make it responsive", "fix the layout", "update the styles", + "add a dark mode", "improve the CSS", or any request mentioning .css, .scss, + or .sass files. Apply this skill even for inline style blocks, + component-scoped CSS, and utility classes. +--- + +# CSS Guide Skill + +Write, maintain, and update CSS to professional standards. 
Priorities: +architecture that scales, design tokens over hard-coded values, accessible +interaction, and responsive layouts built mobile-first. The Google HTML/CSS +Style Guide is the formatting authority. + +## Pre-delivery checklist + +Before handing off any CSS, confirm: + +- [ ] Custom properties (`--var-name`) used for all repeated values +- [ ] No `!important` (except utility overrides with a clear comment) +- [ ] No ID selectors (`#selector`) — use classes +- [ ] No inline styles or `style=""` pushed from CSS +- [ ] Class names are semantic and hyphen-delimited (BEM preferred) +- [ ] `0` values have no units (`margin: 0`, not `0px`) +- [ ] Leading zeros on decimals (`0.8em`, not `.8em`) +- [ ] 3-char hex where possible (`#ebc`, not `#eebbcc`) +- [ ] Single quotes for strings +- [ ] 2-space indentation, semicolons after every declaration, blank line between rules +- [ ] `:focus-visible` styles defined (WCAG 2.2) +- [ ] Color contrast ≥ 4.5:1 for body text (AA) +- [ ] Media queries use `min-width` (mobile-first) + +## Workflow + +### 1. Start from a tokenized foundation + +All design decisions — color, type, spacing, radius, shadow, motion, z-index — +belong in CSS custom properties at `:root`. Never sprinkle hard-coded values +through components. + +For a production-ready token set to drop in, copy +[templates/css-tokens.css](templates/css-tokens.css). Pair it with +[templates/css-reset.css](templates/css-reset.css) for a modern reset and +[templates/css-patterns.css](templates/css-patterns.css) for common component +examples (buttons, cards, forms, nav, modals). + +Minimal token example: + +```css +:root { + --color-accent: #3b82f6; + --color-text: #111827; + --color-bg: #ffffff; + --space-4: 1rem; + --radius-md: 0.5rem; + --text-base: 1rem; + --transition-fast: 150ms ease; +} +``` + +Dark mode and high-contrast adjustments override tokens — the rest of the +stylesheet doesn't change. 
See +[references/accessibility.md](references/accessibility.md) for the patterns. + +### 2. Use BEM for naming + +`.block`, `.block__element`, `.block--modifier`, +`.block__element--modifier`. Lowercase, hyphenated, names reflect purpose not +appearance. + +```css +.card { … } /* Block */ +.card__header { … } /* Element */ +.card__title { … } +.card--featured { … } /* Modifier */ +.card__title--large { … } +``` + +Avoid presentational names (`.blue-button`, `.left-column`). Avoid +over-abbreviation (`.nav-lnk`). Target by class — not by tag, not by id. + +### 3. Follow the architecture that matches project size + +``` +Small: styles/main.css +Medium: styles/{base,layout,components,utilities}/_*.css + main.css imports +Large: ITCSS — 1-settings/ 2-tools/ 3-generic/ 4-elements/ + 5-objects/ 6-components/ 7-utilities/ +``` + +Modern projects benefit from cascade layers: + +```css +@layer reset, base, layout, components, utilities; +@layer components { .card { … } } +@layer utilities { .visually-hidden { … } } +``` + +Layers give you a deterministic cascade independent of source order or +specificity. For an ITCSS walk-through and layer migration tips, see +[references/css-architecture.md](references/css-architecture.md). + +### 4. Format consistently (Google style) + +```css +/* 2 spaces, space after ":", space before "{", blank line between rules */ +.rule-one { + color: var(--color-text); + margin: 0; + padding: var(--space-4); +} + +.rule-two { + background: var(--color-bg); +} + +/* Multiple selectors: one per line */ +h1, +h2, +h3 { font-weight: 700; } +``` + +Declaration order: alphabetical (Google default) or grouped (positioning → box +model → typography → visual → interaction). Pick one per codebase and stick +with it. + +Values: `0` without units, leading zero on decimals, 3-char hex when possible, +single quotes for strings (`font-family: 'Inter', sans-serif;`). + +### 5. 
Control specificity + +Specificity ranks low→high: type (0,0,1) < class/attr/pseudo (0,1,0) < id +(1,0,0) < inline (1,0,0,0) < `!important`. + +```css +/* AVOID: id selectors (hard to override) */ +#header { … } + +/* AVOID: over-qualifying with tag */ +ul.nav { … } +div.card { … } + +/* AVOID: deep descendant chains — brittle and slow */ +.page .sidebar .widget .widget__title a { … } + +/* PREFER: flat BEM */ +.widget__title-link { … } +``` + +`!important` is acceptable only in utility classes (`.visually-hidden`) where +overriding is the explicit intent. + +Style all interactive states together: `:hover`, `:focus-visible`, `:active`, +`:disabled`. Use `::before`/`::after` (double colon) for pseudo-elements. + +### 6. Layout with Grid and Flexbox + +```css +/* 2D — Grid */ +.card-grid { + display: grid; + grid-template-columns: repeat(auto-fill, minmax(280px, 1fr)); + gap: var(--space-6); +} + +/* 1D — Flexbox */ +.nav__list { + display: flex; + align-items: center; + gap: var(--space-4); +} + +/* Container */ +.container { + width: min(100% - 2rem, 1200px); + margin-inline: auto; +} +``` + +Write mobile-first with `min-width` queries, and use `clamp()` for fluid type +and spacing. Full patterns and container queries are in +[references/layout-responsive.md](references/layout-responsive.md). + +### 7. Ship accessible defaults + +These must be in every stylesheet: + +- `:focus-visible` ring (never just `outline: none`) +- `@media (prefers-reduced-motion: reduce)` safety net +- `.visually-hidden` / `.sr-only` utility +- Color tokens that meet 4.5:1 contrast for body text + +Complete patterns (skip links, dark mode, high-contrast boost) are in +[references/accessibility.md](references/accessibility.md). + +## Maintaining existing stylesheets + +### Before editing + +1. Identify the architecture — BEM? SMACSS? utility-first? no pattern? +2. Scan for existing custom properties — reuse them, don't redefine. +3. Find ALL rules for the component you're modifying. +4. 
Check specificity of existing selectors — match or beat appropriately. +5. Search for `!important` and understand why before adding more. + +### While editing + +- Match existing indentation (2-space, 4-space, or tab) and declaration order. +- Match selector and comment style. +- Edit only what's required — don't reformat unrelated rules. +- Add new rules AFTER existing rules for the same component. +- Never silently delete rules you don't understand — comment them out with + an explanation and ask the user. + +### After editing + +- Could this change affect other elements using the same class? +- Did you increase specificity in a way that blocks future overrides? +- Do breakpoint changes hold across the full responsive range? + +### Common update patterns + +```css +/* New modifier — added below existing rules */ +.btn--danger { background: var(--color-error); } +.btn--large { font-size: var(--text-lg); padding: var(--space-3) var(--space-6); } + +/* Add responsive behaviour at matching breakpoint */ +@media (min-width: 768px) { + .card-grid { grid-template-columns: repeat(2, 1fr); } +} + +/* Refactor a magic number into a token */ +:root { --header-height: 64px; } +.header { height: var(--header-height); } +.page-main { padding-top: var(--header-height); } +``` + +## Tooling + +- Validator: https://jigsaw.w3.org/css-validator/ +- Linter: **stylelint** with `stylelint-config-standard` +- Formatter: **Prettier** (`tabWidth: 2`, `singleQuote: true`, `printWidth: 100`) + +Recommended stylelint rules: `color-named: "never"`, `color-no-invalid-hex`, +`declaration-no-important`, a BEM-compatible `selector-class-pattern`, and +`no-duplicate-selectors`. 
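A starting `.stylelintrc.json` wiring those rules together might look like this (a sketch — the `selector-class-pattern` regex is one common BEM variant; adjust it to the project's naming):

```json
{
  "extends": "stylelint-config-standard",
  "rules": {
    "color-named": "never",
    "color-no-invalid-hex": true,
    "declaration-no-important": true,
    "no-duplicate-selectors": true,
    "selector-class-pattern": "^[a-z][a-z0-9]*(-[a-z0-9]+)*(__[a-z0-9]+(-[a-z0-9]+)*)?(--[a-z0-9]+(-[a-z0-9]+)*)?$"
  }
}
```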
+ +## Quick reference: where to go deeper + +| Topic | Reference file | +|----------------------------------------------------------|------------------------------------------------------------------------| +| ITCSS, BEM, SMACSS, and cascade-layer architecture | [references/css-architecture.md](references/css-architecture.md) | +| Focus, reduced motion, dark mode, sr-only, contrast | [references/accessibility.md](references/accessibility.md) | +| Grid, Flexbox, mobile-first, fluid type, container queries | [references/layout-responsive.md](references/layout-responsive.md) | +| Design token system — copy into projects | [templates/css-tokens.css](templates/css-tokens.css) | +| Modern CSS reset | [templates/css-reset.css](templates/css-reset.css) | +| Common component patterns (button, card, form, nav, modal) | [templates/css-patterns.css](templates/css-patterns.css) | diff --git a/.claude/skills/css-guide/references/accessibility.md b/.claude/skills/css-guide/references/accessibility.md new file mode 100644 index 0000000..711285f --- /dev/null +++ b/.claude/skills/css-guide/references/accessibility.md @@ -0,0 +1,160 @@ +# CSS accessibility reference + +Practical, WCAG 2.2-aligned CSS patterns for focus, motion, contrast, and +screen-reader-only content. Load this file whenever a CSS task involves +keyboard navigation, animations, dark mode, reduced motion, or hiding content +from sighted users only. + +## Focus indicators (WCAG 2.2) + +Never remove focus outlines without replacing them. WCAG 2.2 SC 2.4.11 +(Focus Not Obscured) and 2.4.13 (Focus Appearance) apply. 
+ +```css +/* Remove default only if replaced immediately */ +:focus { outline: none; } + +/* Visible, WCAG-compliant focus ring */ +:focus-visible { + outline: 3px solid var(--color-accent); + outline-offset: 3px; + border-radius: var(--radius-sm); +} + +/* High-contrast boost for users who request it */ +@media (prefers-contrast: more) { + :focus-visible { + outline-width: 4px; + outline-color: #000; + } +} +``` + +Prefer `:focus-visible` over `:focus` so the ring shows for keyboard users but +not for mouse clicks (where it looks noisy). + +## Reduced motion + +Respect `prefers-reduced-motion: reduce`. Wrap non-essential animations and +provide a static alternative. + +```css +/* Motion only when the user has not opted out */ +@media (prefers-reduced-motion: no-preference) { + .card { + transition: transform var(--transition-normal), + box-shadow var(--transition-normal); + } + .card:hover { + transform: translateY(-4px); + box-shadow: var(--shadow-lg); + } +} + +/* Global safety net — near-zero duration for anything animated */ +@media (prefers-reduced-motion: reduce) { + *, + *::before, + *::after { + animation-duration: 0.01ms !important; + animation-iteration-count: 1 !important; + transition-duration: 0.01ms !important; + scroll-behavior: auto !important; + } +} +``` + +## Dark mode + +Two patterns — pick one per project. + +```css +/* 1. System-driven */ +@media (prefers-color-scheme: dark) { + :root { + --color-text: var(--color-neutral-50); + --color-background: #0f172a; + --color-surface: #1e293b; + --color-border: #334155; + } +} + +/* 2. User-toggled — attribute on <html> or <body> */ +[data-theme="dark"] { + --color-text: var(--color-neutral-50); + --color-background: #0f172a; +} +``` + +Because colors are defined as custom properties, the rest of the stylesheet +doesn't need to change. + +## Visually hidden (screen-reader only) + +Use when content must be announced but not shown (e.g., labels for icon-only +buttons, skip-link targets). 
+ +```css +.visually-hidden, +.sr-only { + position: absolute !important; + width: 1px !important; + height: 1px !important; + padding: 0 !important; + margin: -1px !important; + overflow: hidden !important; + clip: rect(0, 0, 0, 0) !important; + white-space: nowrap !important; + border: 0 !important; +} + +/* Make it appear when focused via keyboard */ +.visually-hidden:focus, +.sr-only:focus { + position: static !important; + width: auto !important; + height: auto !important; + overflow: visible !important; + clip: auto !important; + white-space: normal !important; +} +``` + +## Skip link + +Allows keyboard users to bypass repeated navigation. + +```css +.skip-link { + position: absolute; + left: -9999px; + top: auto; + width: 1px; + height: 1px; + overflow: hidden; + z-index: var(--z-toast); +} + +.skip-link:focus { + left: var(--space-4); + top: var(--space-4); + width: auto; + height: auto; + overflow: visible; + padding: var(--space-2) var(--space-4); + background: var(--color-text); + color: var(--color-background); + border-radius: var(--radius-md); + font-weight: var(--font-weight-bold); +} +``` + +## Color contrast minimums + +- Normal text (< 18pt): **4.5:1** ratio (WCAG AA) +- Large text (≥ 18pt or ≥ 14pt bold): **3:1** ratio (WCAG AA) +- UI components (borders, icons): **3:1** ratio +- Verify with https://webaim.org/resources/contrastchecker/ + +When a color token fails the check, adjust the token value — never patch a +single rule. The whole token system flips with it. 
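These thresholds can be verified in code as well as in the WebAIM checker. Below is a minimal sketch of the WCAG 2.x relative-luminance and contrast-ratio math in plain JavaScript; the function names are illustrative, not from any library, and it assumes 6-digit hex tokens (real token systems may also need `rgb()`/`hsl()` parsing).

```javascript
// WCAG relative luminance for a 6-digit hex color, e.g. "#3b82f6".
function luminance(hex) {
  const [r, g, b] = [1, 3, 5].map((i) => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    // Linearize the sRGB channel per the WCAG formula.
    return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
  });
  return 0.2126 * r + 0.7152 * g + 0.0722 * b;
}

// Contrast ratio is (lighter + 0.05) / (darker + 0.05), always >= 1.
function contrastRatio(fg, bg) {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

console.log(contrastRatio("#000000", "#ffffff")); // 21, the maximum possible
console.log(contrastRatio("#777777", "#ffffff")); // ≈ 4.48, fails AA for normal text
```

Running a check like this over the token palette during a design-token review catches failing tokens before they ship; adjust the token itself, as noted above, rather than patching individual rules.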
diff --git a/.claude/skills/css-guide/references/css-architecture.md b/.claude/skills/css-guide/references/css-architecture.md
new file mode 100644
index 0000000..7c2a43d
--- /dev/null
+++ b/.claude/skills/css-guide/references/css-architecture.md
@@ -0,0 +1,261 @@
+# CSS architecture reference
+
+## Choosing a CSS Architecture
+
+| Factor | Utility-First | BEM | SMACSS | ITCSS |
+|--------|--------------|-----|--------|-------|
+| **Best for** | Rapid prototyping, Tailwind | Component libraries | Content sites | Large codebases |
+| **Naming** | Utility classes | `.block__el--mod` | `.l-layout .m-module` | Layered |
+| **Learning curve** | Low | Medium | Medium | High |
+| **Maintainability** | High | High | Medium | Very high |
+| **CSS bundle size** | Small (purged) | Grows with components | Medium | Medium |
+
+**Recommendation**: BEM naming + ITCSS layer organization covers 80% of project types.
+
+---
+
+## ITCSS — Inverted Triangle CSS
+
+Organizes CSS from least to most specific. Rules lower in the triangle affect fewer elements but with more specificity.
+
+```
+████████████████████████████████████
+Settings (widest reach)
+ ██████████████████████████████████
+ Tools
+  ████████████████████████████████
+  Generic
+   ██████████████████████████████
+   Elements
+    ████████████████████████████
+    Objects
+     ██████████████████████████
+     Components
+      ████████████████████████
+      Utilities (narrowest — highest specificity)
+```
+
+### Layer Descriptions
+
+**1. Settings** — Global variables, design tokens, configuration
+```css
+/* _settings.css */
+:root {
+  --color-primary: #3b82f6;
+  --font-body: system-ui, sans-serif;
+}
+```
+
+**2. Tools** — Mixins, functions (Sass only; skip for plain CSS)
+```scss
+// _tools.scss
+@mixin respond-to($breakpoint) { ... }
+@function rem($px) { ... }
+```
+
+**3. Generic** — Resets, normalize. Affects broad HTML elements.
+```css
+/* _generic.css */
+*, *::before, *::after { box-sizing: border-box; }
+body { margin: 0; }
+```
+
+**4.
Elements** — Unclassed HTML element defaults +```css +/* _elements.css */ +a { color: var(--color-primary); } +img { max-width: 100%; display: block; } +``` + +**5. Objects** — Class-based layout patterns. No cosmetics. +```css +/* _objects.css */ +.container { width: min(100%, 1200px); margin-inline: auto; } +.grid { display: grid; } +``` + +**6. Components** — Discrete UI components. The bulk of your CSS. +```css +/* _button.css */ +.btn { ... } +.btn--primary { ... } +``` + +**7. Utilities** — Single-purpose, often `!important`. Overrides. +```css +/* _utilities.css */ +.visually-hidden { position: absolute !important; ... } +.text-center { text-align: center !important; } +``` + +--- + +## BEM — Block Element Modifier + +### Full Rules + +**Blocks** are standalone components. They: +- Have no outside geometry (no `margin`, `position: absolute`) +- Can be nested or moved anywhere in the DOM +- Are named as single concepts: `.card`, `.nav`, `.form`, `.modal` + +**Elements** are parts that make sense only in context of their block: +- Named: `.block__element` +- Double underscore separator +- NEVER: `.block__elem1__elem2` (flatten the chain, use single level) + +```css +/* WRONG: chained elements */ +.card__body__title { } + +/* RIGHT: element references its block directly */ +.card__title { } +``` + +**Modifiers** are variants or states of a block or element: +- Named: `.block--modifier` or `.block__element--modifier` +- Double hyphen separator +- Never use alone — always combined with base class + +```html +<!-- WRONG --> +<div class="btn--primary"> + +<!-- RIGHT --> +<div class="btn btn--primary"> +``` + +### BEM + State Classes + +For JavaScript-toggled states, use a separate state class: +```css +.is-open { } +.is-active { } +.is-loading { } +.has-error { } +``` + +These are combined with BEM classes: +```html +<div class="dropdown is-open"> +``` + +### When BEM Gets Complex: Use Context Classes + +Instead of super-long BEM chains, use a context modifier on 
the parent: +```css +/* Context: inside a featured card, headings are larger */ +.card--featured .card__title { + font-size: var(--text-2xl); +} +``` + +--- + +## SMACSS — Scalable and Modular Architecture for CSS + +Five categories: + +1. **Base** — Element defaults (like ITCSS Elements) +2. **Layout** — Major page sections, prefixed `.l-` +3. **Module** — Reusable UI components (no prefix or `.m-`) +4. **State** — State rules, prefixed `.is-` +5. **Theme** — Theme variations + +```css +/* Layout */ +.l-header { } +.l-sidebar { } +.l-main { } + +/* Module */ +.nav { } +.nav-item { } + +/* State */ +.is-hidden { display: none; } +.is-active { } +``` + +--- + +## CSS Cascade Layers (Modern CSS — 2023+) + +Layers explicitly control the order of the cascade, eliminating specificity wars. + +```css +/* Declare layer order at the very top of your entry file */ +/* Earlier = lower priority; later = higher priority */ +@layer reset, base, layout, components, utilities; + +/* Styles outside any layer take highest priority */ + +@layer reset { + *, *::before, *::after { box-sizing: border-box; } +} + +@layer base { + :root { --color-text: #111; } + body { font-family: system-ui; } +} + +@layer components { + .btn { padding: 0.5rem 1rem; } + .btn--primary { background: var(--color-primary); } +} + +@layer utilities { + /* !important not needed — utilities layer wins via layer order */ + .hidden { display: none; } +} +``` + +**Browser support**: 93%+ (2024). Safe to use with fallback plan. + +--- + +## CSS Architecture Decision Checklist + +Before starting a project, decide: + +- [ ] Single file or multi-file? (multi-file for anything beyond a landing page) +- [ ] Will you use a preprocessor (Sass/SCSS)? Or plain CSS with custom properties? +- [ ] Will you use CSS Modules, CSS-in-JS, or global stylesheets? +- [ ] Will you use a utility framework (Tailwind) or write component CSS? +- [ ] What naming convention? (BEM recommended for teams) +- [ ] Will you use Cascade Layers? 
(recommended for new projects) +- [ ] What is the design token strategy? (always use custom properties) +- [ ] How will dark mode be handled? (`prefers-color-scheme` vs `.dark` class toggle) +- [ ] What are the breakpoints? (document in the tokens file) +- [ ] What linter config? (Stylelint with config-standard) + +--- + +## Handling Specificity Conflicts + +When you need to override a style and can't change the source: + +**Option 1: Increase specificity with a compound selector** (last resort) +```css +/* Original */ +.btn { color: red; } + +/* Override — add another class for context */ +.modal .btn { color: blue; } +``` + +**Option 2: Use Cascade Layers** (preferred in modern CSS) +```css +@layer base { .btn { color: red; } } +@layer components { .btn--modal { color: blue; } } +/* components layer wins regardless of specificity */ +``` + +**Option 3: CSS Custom Property override** +```css +/* Base component uses a variable */ +.btn { color: var(--btn-color, var(--color-primary)); } + +/* Override in specific context — no specificity issue */ +.modal { --btn-color: white; } +``` diff --git a/.claude/skills/css-guide/references/layout-responsive.md b/.claude/skills/css-guide/references/layout-responsive.md new file mode 100644 index 0000000..8b1989a --- /dev/null +++ b/.claude/skills/css-guide/references/layout-responsive.md @@ -0,0 +1,154 @@ +# Layout and responsive design reference + +Modern CSS layout patterns (Grid, Flexbox, container) and mobile-first +responsive techniques. Load this file when you're laying out pages, building +responsive components, or deciding between Grid and Flexbox. 
+ +## CSS Grid — use for 2D layout + +```css +/* Page layout: header / main / footer */ +.page-layout { + display: grid; + grid-template-columns: 1fr; + grid-template-rows: auto 1fr auto; + min-height: 100vh; +} + +/* Content column with flexible side margins */ +.content-grid { + display: grid; + grid-template-columns: 1fr min(65ch, 100%) 1fr; +} + +/* Card grid — responsive without media queries */ +.card-grid { + display: grid; + grid-template-columns: repeat(auto-fill, minmax(280px, 1fr)); + gap: var(--space-6); +} +``` + +`auto-fill` + `minmax()` eliminates most breakpoints for card layouts: tracks +grow/shrink automatically. + +## Flexbox — use for 1D layout + +```css +/* Navigation row */ +.nav__list { + display: flex; + align-items: center; + gap: var(--space-4); + list-style: none; + margin: 0; + padding: 0; +} + +/* Card that sticks its footer to the bottom */ +.card { + display: flex; + flex-direction: column; +} +.card__body { flex: 1; } + +/* Center anything in both axes */ +.centered { + display: grid; /* grid works better than flex for centering */ + place-items: center; +} +``` + +Rule of thumb: if you're laying out a row/column of items, reach for Flexbox. +If you're controlling rows AND columns, reach for Grid. + +## Container + +```css +.container { + width: min(100% - 2rem, 1200px); + margin-inline: auto; +} +``` + +`min()` gives you a fluid-then-capped container with one line. `margin-inline` +is the logical equivalent of `margin-left` and `margin-right` — it works in +right-to-left languages too. + +## Mobile-first media queries + +Always author for the smallest viewport first, then progressively enhance with +`min-width` queries. Avoid `max-width` except for rare override cases. 
+ +```css +.nav { + display: flex; + flex-direction: column; +} + +@media (min-width: 768px) { + .nav { flex-direction: row; } +} + +@media (min-width: 1024px) { + .nav { gap: var(--space-8); } +} +``` + +Standard breakpoints (match your design system): + +``` +sm 640px +md 768px +lg 1024px +xl 1280px +2xl 1536px +``` + +## Fluid typography and spacing + +`clamp(min, preferred, max)` scales a value between the viewport extremes. +Fewer breakpoints, smoother transitions. + +```css +h1 { + font-size: clamp(var(--text-2xl), 5vw, var(--text-4xl)); +} + +body { + font-size: clamp(var(--text-base), 1.2vw, var(--text-lg)); +} + +.section { + padding-block: clamp(var(--space-8), 10vw, var(--space-24)); +} +``` + +## Container queries (when supported) + +When a component needs to respond to its container rather than the viewport: + +```css +.card-parent { + container-type: inline-size; + container-name: card; +} + +@container card (min-width: 500px) { + .card { flex-direction: row; } +} +``` + +Reach for container queries in design-system components that will be dropped +into unknown layouts (sidebars, modals, embedded widgets). + +## Performance notes for layout + +- Avoid the universal selector in component rules: `* { box-sizing: border-box; }` + is fine in the reset, but `.card * { … }` invalidates more of the layout tree. +- Don't chain deep descendants (`.nav .list .item .link span`) — target classes + directly. It's faster and less coupled. +- Use `contain: layout paint` on isolated widgets (modals, cards with complex + internals) so style/layout work doesn't cascade to the whole page. +- `will-change: transform` should go on elements about to animate, and be + removed after. Leaving it on forever is a memory leak. 
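How the browser resolves `clamp()` at a given viewport can be modeled with ordinary min/max arithmetic. A hypothetical sketch in plain JavaScript, using the `h1` example above with assumed token values of 24px (`--text-2xl`) and 36px (`--text-4xl`):

```javascript
// Model how the browser resolves clamp(min, preferred, max).
// Hypothetical px values for illustration; real code uses CSS units, not JS.
function resolveClamp(min, preferred, max) {
  return Math.min(max, Math.max(min, preferred));
}

// clamp(24px, 5vw, 36px): 5vw is viewportWidth / 20.
const fluidSize = (viewportWidth) => resolveClamp(24, viewportWidth / 20, 36);

console.log(fluidSize(400));  // 24 — 5vw (20px) is below the floor
console.log(fluidSize(600));  // 30 — inside the fluid range
console.log(fluidSize(1000)); // 36 — capped at the ceiling
```

One caveat worth noting: a preferred term of pure `vw` does not respond to browser zoom, so in real stylesheets a mixed term such as `clamp(1.5rem, 1rem + 2.5vw, 2.25rem)` is generally safer for text sizing.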
diff --git a/.claude/skills/html-guide/SKILL.md b/.claude/skills/html-guide/SKILL.md new file mode 100644 index 0000000..40c5efd --- /dev/null +++ b/.claude/skills/html-guide/SKILL.md @@ -0,0 +1,294 @@ +--- +name: html-guide +description: >- + Use this skill whenever writing, editing, updating, or reviewing HTML files. + Triggers on any request to create HTML pages, templates, components, or + documents; fix or refactor existing HTML; audit HTML for accessibility or + quality; add semantic structure; or update markup in a codebase. Also + triggers for phrases like "write HTML", "build a page", "update the markup", + "fix the HTML", "make this accessible", or any request mentioning .html + files. Apply this skill even for partial HTML snippets, email templates, + and component markup — not just full pages. +--- + +# HTML Guide Skill + +Write, maintain, and update HTML to professional standards. Emphasis on +semantic structure, WCAG 2.2 accessibility, and clean formatting that reads +well for humans, screen readers, and downstream tooling. + +## Pre-delivery checklist + +Before handing off any HTML, confirm: + +- [ ] `<!doctype html>` is the very first line +- [ ] `<meta charset="utf-8">` is the first element in `<head>` +- [ ] `<meta name="viewport" content="width=device-width, initial-scale=1">` +- [ ] `<title>` is descriptive and unique +- [ ] `lang` attribute on `<html>` +- [ ] All images have meaningful `alt` text (or `alt=""` for decorative) +- [ ] Heading hierarchy is sequential (no skipping levels); one `<h1>` per page +- [ ] Landmarks used: `<header>`, `<nav>`, `<main>`, `<footer>` +- [ ] Every form input is associated with a `<label>` +- [ ] No inline `style=""` or `onclick=""` +- [ ] No `type` attributes on `<script>` or `<link rel="stylesheet">` +- [ ] Double quotes on all attribute values; lowercase tags and attribute names +- [ ] 2-space indentation, UTF-8, no BOM +- [ ] Validated at https://validator.w3.org/nu/ + +## Workflow + +### 1. 
Start from a valid document
+
+For any new page, copy [templates/html-template.html](templates/html-template.html).
+It contains the correct head order, landmark layout, and a skip-link stub.
+
+Minimal structure:
+
+```html
+<!doctype html>
+<html lang="en">
+  <head>
+    <meta charset="utf-8">
+    <meta name="viewport" content="width=device-width, initial-scale=1">
+    <meta name="description" content="Page description for SEO (150–160 chars)">
+    <title>Page Title – Site Name</title>
+  </head>
+
+  <body>
+    <a class="skip-link" href="#main">Skip to main content</a>
+
+    <header>
+      <nav>
+        <!-- primary site navigation -->
+      </nav>
+    </header>
+
+    <main id="main">
+      <h1>Page Heading</h1>
+      <!-- page content -->
+    </main>
+
+    <footer>
+      <!-- site footer -->
+    </footer>
+  </body>
+</html>
+```
+
+Head rules: `charset` FIRST, always include `viewport`, use the pattern
+`Page Name – Site Name` for titles, load CSS in `<head>` and scripts at the end
+of `<body>` (or with `defer`), use HTTPS for externals, omit `type="text/css"`
+and `type="text/javascript"` (HTML5 defaults apply).
+
+### 2. Reach for semantic elements first
+
+Use the element built for the job. Semantic HTML gives you keyboard support,
+screen reader hints, and document structure for free.
+
+| Element | Purpose |
+|-------------------------------|--------------------------------------------------|
+| `
` / `