diff --git a/.github/skills/openspec-workflows/SKILL.md b/.github/skills/openspec-workflows/SKILL.md new file mode 100644 index 00000000..fdbcaa56 --- /dev/null +++ b/.github/skills/openspec-workflows/SKILL.md @@ -0,0 +1,66 @@ +--- +name: openspec-workflows +description: Create OpenSpec changes from implementation plans, and validate existing changes before implementation. Use when the user wants to turn a plan document into an OpenSpec change proposal, or validate that a change is safe to implement (breaking changes, dependency analysis). +license: MIT +metadata: + author: openspec + version: "1.0" +--- + +Two workflows for managing OpenSpec changes at the proposal stage. + +**Input**: Optionally specify a workflow name (`create` or `validate`) and a target (plan path or change ID). If omitted, ask the user which workflow they need. + +## Workflow Selection + +Determine which workflow to run: + +| User Intent | Workflow | Reference | +|---|---|---| +| Turn a plan into an OpenSpec change | **Create Change from Plan** | `references/create-change-from-plan.md` | +| Validate a change before implementation | **Validate Change** | `references/validate-change.md` | + +If the user's intent is unclear, use **AskUserQuestion** to ask which workflow they need. + +## Create Change from Plan + +Turns an implementation plan document into a fully formed OpenSpec change with proposal, specs, design, and tasks — including GitHub issue creation for public repos. + +**When to use**: The user has a plan document (typically in `specfact-cli-internal/docs/internal/implementation/`) and wants to create an OpenSpec change from it. + +**Load** `references/create-change-from-plan.md` and follow the full workflow. + +**Key steps**: +1. Select and parse the plan document +2. Cross-reference against existing plans and validate targets +3. Resolve any issues interactively +4. Create the OpenSpec change via `opsx:ff` skill +5. 
Review and improve: enforce TDD-first, add git worktree tasks (worktree creation first, PR last, cleanup after merge), validate against `openspec/config.yaml` +6. Create GitHub issue (public repos only) + +## Validate Change + +Performs dry-run simulation to detect breaking changes, analyze dependencies, and verify format compliance before implementation begins. + +**When to use**: The user wants to validate that an existing change is safe to implement — check for breaking interface changes, missing dependency updates, and format compliance. + +**Load** `references/validate-change.md` and follow the full workflow. + +**Key steps**: +1. Select the change (by ID or interactive list) +2. Parse all change artifacts (proposal, tasks, design, spec deltas) +3. Simulate interface changes in a temporary workspace +4. Analyze dependencies and detect breaking changes +5. Present findings and get user decision if breaking changes found +6. Run `openspec validate --strict` +7. Create `CHANGE_VALIDATION.md` report + +## Guardrails + +- Read `openspec/config.yaml` for project context and rules +- Read `CLAUDE.md` for project conventions +- Never modify production code during validation — use temp workspaces +- Never proceed with ambiguities — ask for clarification +- Enforce TDD-first ordering in tasks (per config.yaml) +- Enforce git worktree workflow: worktree creation first task, PR creation last task, worktree cleanup after merge — never switch the primary checkout away from `dev` +- Only create GitHub issues in the target repository specified by the plan diff --git a/.github/skills/openspec-workflows/references/create-change-from-plan.md b/.github/skills/openspec-workflows/references/create-change-from-plan.md new file mode 100644 index 00000000..bb999477 --- /dev/null +++ b/.github/skills/openspec-workflows/references/create-change-from-plan.md @@ -0,0 +1,312 @@ +# Workflow: Create OpenSpec Change from Plan + +## Table of Contents + +- [Guardrails](#guardrails) +- [Step 1: 
Plan Selection](#step-1-plan-selection) +- [Step 2: Plan Review and Alignment](#step-2-plan-review-and-alignment) +- [Step 3: Integrity Re-Check](#step-3-integrity-re-check) +- [Step 4: OpenSpec Change Creation](#step-4-openspec-change-creation) +- [Step 5: Proposal Review and Improvement](#step-5-proposal-review-and-improvement) +- [Step 6: GitHub Issue Creation](#step-6-github-issue-creation) +- [Step 7: Create GitHub Issue via gh CLI](#step-7-create-github-issue-via-gh-cli) +- [Step 8: Completion](#step-8-completion) + +## Guardrails + +- Read `openspec/config.yaml` during the workflow (before or at Step 5) for project context and TDD/SDD rules. +- Favor straightforward, minimal implementations. Keep changes tightly scoped. +- Never proceed with ambiguities or conflicts — ask for clarification interactively. +- Do not write code during the proposal stage. Only create design documents (proposal.md, tasks.md, design.md, spec deltas). +- Always validate alignment against existing plans and implementation reality before proceeding. +- **CRITICAL**: Only create GitHub issues in the target repository specified by the plan. +- **CRITICAL Git Workflow (Worktree Policy)**: Use git worktrees for parallel development — never switch the primary checkout away from `dev`. Add a worktree creation task as the FIRST task, and PR creation as the LAST task. Never work on protected branches (`main`/`dev`) directly. Branch naming: `<branch-type>/<change-id>`. Worktree path: `../specfact-cli-worktrees/<change-id>/`. All subsequent tasks execute inside the worktree directory. +- **CRITICAL TDD**: Per config.yaml, test tasks MUST come before implementation tasks. + +## Step 1: Plan Selection + +**If plan path provided**: Resolve to absolute path, verify file exists. + +**If no plan path provided**: +1. Search for plans in: + - `specfact-cli-internal/docs/internal/brownfield-strategy/` (`*.md`) + - `specfact-cli-internal/docs/internal/implementation/` (`*.md`) + - `specfact-cli/docs/` (if accessible) +2. 
Display numbered list with file path, title (first heading), last modified date. +3. Prompt user to select. + +## Step 2: Plan Review and Alignment + +### 2.1: Read and Parse Plan + +1. Read plan file completely. +2. Extract: + - Title and purpose (first H1) + - **Target repository** (look for `**Repository**:` in header metadata, e.g. `` `nold-ai/specfact-cli` ``) + - Phases/tasks with descriptions + - Files to create/modify (note repository prefixes) + - Dependencies, success metrics, estimated effort +3. Identify referenced targets (files, directories, repositories). + +### 2.2: Cross-Reference Check + +1. Search `specfact-cli-internal/docs/internal/brownfield-strategy/` for overlapping plans. +2. Search `specfact-cli-internal/docs/internal/implementation/` for conflicting implementation plans. +3. Extract conflicting info, overlapping scope, dependency relationships, timeline conflicts. + +### 2.3: Target Validation + +For each target in the plan: +- **Files**: Check existence, readability, location, structure matches assumptions. +- **Directories**: Check existence, structure. +- **Repositories**: Verify in workspace, structure matches, access ok. +- **Code refs**: Verify functions/classes exist, structure matches. + +### 2.4: Alignment Analysis + +Check: +1. **Accuracy**: File paths correct? Repos referenced accurately? Commands valid? +2. **Correctness**: Technical details accurate? Implementation approaches align with codebase? +3. **Ambiguities**: Unclear requirements, vague acceptance criteria, missing context. +4. **Conflicts**: With other plans, overlapping scope, timeline/resource conflicts. +5. **Consistency**: With CLAUDE.md conventions, OpenSpec conventions, existing patterns. + +### 2.5: Issue Detection and Interactive Resolution + +**If issues found**: +1. Categorize: Critical (must resolve), Warning (should resolve), Info (non-blocking). +2. Present: `[CRITICAL/WARNING/INFO] <issue-type>: <description>` with context and suggested resolutions. +3. 
Resolve interactively: For critical issues, prompt for clarification. For warnings, ask resolve or skip. +4. Re-validate after resolution. Loop until all critical issues resolved. + +## Step 3: Integrity Re-Check + +1. Re-run all checks from Step 2 with updated understanding. +2. Verify user clarifications are consistent. +3. Check for new issues introduced by clarifications. +4. If misalignments remain, go back to Step 2.5. + +## Step 4: OpenSpec Change Creation + +### 4.1: Determine Change Name + +1. Extract from plan title, convert to kebab-case. +2. Ensure unique (check existing changes in `openspec/changes/`). + +### 4.2: Execute OPSX Fast-Forward + +Invoke the `opsx:ff` skill with the change name: +- Use the plan as source of requirements. +- Map plan phases/tasks to OpenSpec capabilities. +- The opsx:ff workflow creates: change directory, proposal.md, specs/, design.md, tasks.md. +- It reads `openspec/config.yaml` for project context and per-artifact rules. + +### 4.3: Extract Change ID + +1. Identify created change ID. +2. Verify change directory: `openspec/changes/<change-id>/`. +3. Verify artifacts created: proposal.md, tasks.md, specs/. + +## Step 5: Proposal Review and Improvement + +### 5.1: Review Against Config and Project Rules + +1. **Read `openspec/config.yaml`**: + - Project context: Tech stack, constraints, architecture patterns. + - Development discipline (SDD + TDD): (1) Specs first, (2) Tests second (expect failure), (3) Code last. + - Per-artifact rules: `rules.tasks` — TDD order, test-before-code. + +2. **Read and apply project rules** from CLAUDE.md: + - Contract-first development, testing requirements, code conventions. + +3. **Verify config.yaml rules applied**: + - Source Tracking section (if public-facing). + - GitHub issue creation task (if public repo). + - 2-hour maximum chunks. + - TDD: test tasks before implementation. 
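The test-before-implementation check above (and the verification step in 5.2.4) can be approximated mechanically. A minimal Python sketch; the keyword heuristics (`test`, `implement`) and the function name are illustrative assumptions, not part of the skill:

```python
import re

def tdd_order_ok(tasks_md: str) -> bool:
    """Heuristic check: in each '## N.' section of tasks.md that contains both
    test and implementation tasks, the first test task must precede the first
    implementation task."""
    sections = re.split(r"(?m)^## \d+\.", tasks_md)[1:]
    for section in sections:
        tasks = [ln for ln in section.splitlines() if ln.lstrip().startswith("- [ ]")]
        test_idx = next((i for i, ln in enumerate(tasks) if "test" in ln.lower()), None)
        impl_idx = next((i for i, ln in enumerate(tasks) if "implement" in ln.lower()), None)
        if test_idx is not None and impl_idx is not None and impl_idx < test_idx:
            return False
    return True
```

A reviewer (or the skill itself) would still confirm ordering by reading tasks.md; keyword matching only flags obvious violations.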
+ +### 5.2: Update Tasks with Quality Standards and Git Workflow + +#### 5.2.1: Determine Branch Type + +- `add-*`, `create-*`, `implement-*`, `enhance-*` -> `feature/` +- `fix-*`, `correct-*`, `repair-*` -> `bugfix/` +- `update-*`, `modify-*`, `refactor-*` -> `feature/` +- `hotfix-*`, `urgent-*` -> `hotfix/` +- Default: `feature/` + +Branch name: `<branch-type>/<change-id>`. Target: `dev`. + +#### 5.2.2: Add Git Worktree Creation Task (FIRST TASK) + +Add as first task in tasks.md: + +```markdown +## 1. Create git worktree for this change + +- [ ] 1.1 Fetch latest and create a worktree with a new branch from `origin/dev`. + - [ ] 1.1.1 `git fetch origin` + - [ ] 1.1.2 `git worktree add ../specfact-cli-worktrees/<change-id> -b <branch-type>/<change-id> origin/dev` + - [ ] 1.1.3 Change into the worktree: `cd ../specfact-cli-worktrees/<change-id>/` + - [ ] 1.1.4 Create a virtual environment: `python -m venv .venv && source .venv/bin/activate && pip install -e ".[dev]"` + - [ ] 1.1.5 `git branch --show-current` (verify correct branch) +``` + +**If a GitHub issue exists**, use `gh issue develop` to link the branch before creating the worktree: + +```markdown + - [ ] 1.1.2a `gh issue develop <issue-number> --repo <target-repo> --name <branch-type>/<change-id>` (creates remote branch linked to issue) + - [ ] 1.1.2b `git fetch origin && git worktree add ../specfact-cli-worktrees/<change-id> <branch-type>/<change-id>` +``` + +All remaining tasks in tasks.md MUST run inside the worktree directory, not the primary checkout. + +#### 5.2.3: Update Tasks with Quality Standards + +For each task, ensure: +- Testing requirements (unit, contract, integration, E2E). +- Code quality checks: `hatch run format`, `hatch run type-check`, `hatch run contract-test`. +- Validation: `openspec validate --strict`. + +#### 5.2.4: Enforce TDD-first in tasks.md + +1. **Add "TDD / SDD order (enforced)" section** at top of tasks.md (after title, before first numbered task): + - State: per config.yaml, tests before code for any behavior-changing task. + - Order: (1) Spec deltas, (2) Tests from scenarios (expect failure), (3) Code last. 
- "Do not implement production code until tests exist and have been run (expecting failure)." + - Separate with `---`. + +2. **Reorder each behavior-changing section**: Test tasks before implementation tasks. + +3. **Verify**: Scan tasks.md — any section with both test and implementation tasks must have tests first. + +#### 5.2.5: Add PR Creation Task (LAST TASK) + +Add as last task in tasks.md. Only create PR if target repo is public (specfact-cli, platform-frontend). + +Key steps (run from inside the worktree directory): +1. Prepare commit: `git add .`, commit with conventional message, push with `-u`: `git push -u origin <branch-type>/<change-id>`. +2. Create PR body from `.github/pull_request_template.md`: + - Use full repo path format for issue refs: `Fixes nold-ai/specfact-cli#<issue-number>` + - Include OpenSpec change ID in description. +3. Create PR: `gh pr create --repo <target-repo> --base dev --head <branch-type>/<change-id> --title "<type>: <description>" --body-file <body-file>` +4. Link to project (specfact-cli only): `gh project item-add 1 --owner nold-ai --url <PR_URL>` +5. Verify Development link on issue, project board. +6. Update project status to "In Progress" (if applicable). + +PR title format: `feat:` for feature/, `fix:` for bugfix/, etc. + +#### 5.2.6: Add Worktree Cleanup Task (AFTER MERGE) + +Add a note after the PR task for post-merge cleanup: + +```markdown +## Post-merge cleanup (after PR is merged) + +- [ ] Return to primary checkout: `cd .../specfact-cli` +- [ ] `git fetch origin` +- [ ] `git worktree remove ../specfact-cli-worktrees/<change-id>` +- [ ] `git branch -d <branch-type>/<change-id>` +- [ ] `git worktree prune` +- [ ] (Optional) `git push origin --delete <branch-type>/<change-id>` +``` + +### 5.3: Update Proposal with Quality Gates + +Update proposal.md with: quality standards section, git workflow requirements, acceptance criteria (branch created, tests pass, contracts validated, docs updated, PR created). + +### 5.4: Validate with OpenSpec + +1. Verify format: proposal.md has `# Change:` title, `## Why`, `## What Changes`, `## Impact`. Tasks.md uses `## 1.` numbered format. +2. 
Check status: `openspec status --change "<change-id>" --json`. +3. Run: `openspec validate --strict`. Fix and re-run until passing. + +### 5.5: Markdown Linting + +Run `markdownlint --config .markdownlint.json --fix` on all `.md` files in the change directory. Fix remaining issues manually. + +## Step 6: GitHub Issue Creation + +### 6.1: Determine Target Repository + +1. Extract target repo from plan header (`**Repository**:` field). +2. Decision: + - `specfact-cli` or `platform-frontend` (public) -> create issue, proceed to 6.2. + - `specfact-cli-internal` (internal) -> skip issue creation, go to Step 8. + - Not specified -> ask user. + +### 6.2: Sanitize Proposal Content + +For public issues: +- **Remove**: Competitive analysis, market positioning, internal strategy, effort estimates. +- **Preserve**: User-facing value, feature descriptions, acceptance criteria, API changes. + +Format per config.yaml: +- Title: `[Change] <title>` +- Labels: `enhancement`, `change-proposal` +- Body: `## Why`, `## What Changes`, `## Acceptance Criteria` +- Footer: `*OpenSpec Change Proposal: <change-id>*` + +Show sanitized content to user for approval before creating. + +## Step 7: Create GitHub Issue via gh CLI + +1. Write sanitized content to temp file. +2. Create issue: + +```bash +gh issue create \ + --repo <target-repo> \ + --title "[Change] <title>" \ + --body-file /tmp/github-issue-<change-id>.md \ + --label "enhancement" \ + --label "change-proposal" +``` + +3. For specfact-cli: link to project `gh project item-add 1 --owner nold-ai --url <ISSUE_URL>`. +4. Update `proposal.md` Source Tracking section: + +```markdown +## Source Tracking + +<!-- source_repo: <target-repo> --> +- **GitHub Issue**: #<number> +- **Issue URL**: <url> +- **Last Synced Status**: proposed +``` + +5. Cleanup temp file. 
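The Source Tracking update in step 4 above can be scripted so repeated syncs do not duplicate the section. A minimal sketch; the helper name and the replace-on-resync behavior are illustrative assumptions:

```python
from pathlib import Path

def update_source_tracking(proposal: Path, repo: str, number: int, url: str) -> None:
    """Append the Source Tracking block to proposal.md, replacing any
    previously written block so the section appears exactly once."""
    block = (
        "\n## Source Tracking\n\n"
        f"<!-- source_repo: {repo} -->\n"
        f"- **GitHub Issue**: #{number}\n"
        f"- **Issue URL**: {url}\n"
        "- **Last Synced Status**: proposed\n"
    )
    text = proposal.read_text()
    if "## Source Tracking" in text:
        # Drop the stale block (it is the last section by convention).
        text = text.split("## Source Tracking")[0].rstrip() + "\n"
    proposal.write_text(text + block)
```

This assumes Source Tracking is always the final section of proposal.md, which holds if only this helper writes it.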
+ +## Step 8: Completion + +Display summary: + +``` +Change ID: <change-id> +Location: openspec/changes/<change-id>/ + +Validation: + - OpenSpec validation passed + - Markdown linting passed + - Config.yaml rules applied (TDD-first enforced) + - Git workflow tasks added (branch + PR) + +GitHub Issue (if public): + - Issue #<number> created: <url> + - Source tracking updated + +Next Steps: + 1. Review: openspec/changes/<change-id>/proposal.md + 2. Review: openspec/changes/<change-id>/tasks.md + 3. Verify TDD order and git workflow in tasks + 4. Apply when ready: invoke opsx:apply skill +``` + +## Error Handling + +- **Plan not found**: Search and suggest alternatives. +- **Validation failures**: Present clearly, allow interactive resolution. +- **OpenSpec validation fails**: Fix and re-validate, don't proceed until passing. +- **gh CLI unavailable**: Inform user, provide manual creation instructions. +- **Issue creation fails**: Log error, allow retry, don't fail entire workflow. +- **Project linking fails**: Log warning, continue (non-critical). 
diff --git a/.github/skills/openspec-workflows/references/validate-change.md b/.github/skills/openspec-workflows/references/validate-change.md new file mode 100644 index 00000000..2ac055a8 --- /dev/null +++ b/.github/skills/openspec-workflows/references/validate-change.md @@ -0,0 +1,264 @@ +# Workflow: Validate OpenSpec Change + +## Table of Contents + +- [Guardrails](#guardrails) +- [Step 1: Change Selection](#step-1-change-selection) +- [Step 2: Read and Parse Change](#step-2-read-and-parse-change) +- [Step 3: Simulate Change Application](#step-3-simulate-change-application) +- [Step 4: Dependency Analysis](#step-4-dependency-analysis) +- [Step 5: Validation Report and Decision](#step-5-validation-report-and-decision) +- [Step 6: Create Validation Report](#step-6-create-validation-report) +- [Step 7: Completion](#step-7-completion) + +## Guardrails + +- Never modify the actual codebase during validation — only work in temp directories. +- Focus on interface/contract/parameter analysis, not implementation details. +- Identify breaking changes, not style or formatting issues. +- Always create CHANGE_VALIDATION.md for audit trail. +- Ask for user confirmation before extending change scope or rejecting proposals. + +## Step 1: Change Selection + +**If change ID provided**: Resolve to `openspec/changes/<change-id>/`, verify directory and proposal.md exist. + +**If no change ID provided**: +1. List active changes: `openspec list --json`. +2. Display numbered list with change ID, schema, status, brief description. +3. Prompt user to select. + +## Step 2: Read and Parse Change + +### 2.1: Check Status and Read Artifacts + +1. **Read `openspec/config.yaml`** for project context, constraints, and per-artifact rules. + +2. **Check change status**: `openspec status --change "<change-id>" --json` + - Verify artifacts exist and are complete (status: "done"). + +3. **Get artifact context**: `openspec instructions apply --change "<change-id>" --json` + +4. 
**Verify proposal.md format** (per config.yaml): + - Title: `# Change: [Brief description]` + - Required sections: `## Why`, `## What Changes`, `## Capabilities`, `## Impact` + - "What Changes": bullet list with NEW/EXTEND/MODIFY markers + - "Capabilities": each capability needs a spec file + - "Impact": Affected specs, Affected code, Integration points + +5. **Read proposal.md**: Extract summary, rationale, scope, capabilities, affected files. + +6. **Verify tasks.md format** (per config.yaml): + - Hierarchical numbered sections: `## 1.`, `## 2.` + - Tasks: `- [ ] 1.1 [Description]` + - Sub-tasks: `- [ ] 1.1.1 [Description]` + - Rules: 2-hour max chunks, contract tasks, test tasks, quality gates, git worktree workflow (worktree creation first, PR last, cleanup after merge) + +7. **Read tasks.md**: Extract tasks, files to create/modify/delete, task dependencies. Verify worktree creation first, PR creation last, worktree cleanup after merge. + +8. **Read design.md** (if exists): Architectural decisions, interface changes, contracts, migration plans. Verify bridge adapter docs, sequence diagrams for multi-repo. + +9. **Read spec deltas** (`specs/<capability>/spec.md`): ADDED/MODIFIED/REMOVED requirements, interface/parameter/contract changes, cross-refs. Verify Given/When/Then format. + +### 2.2: Identify Change Scope + +1. **Files to modify**: Extract from tasks.md and proposal.md. Categorize: code, tests, docs, config. +2. **Modules/Components**: Python modules, classes, functions, interfaces, contracts, APIs. Note public vs private. +3. **Dependencies**: From proposal "Dependencies" section and task dependencies. + +## Step 3: Simulate Change Application + +### 3.1: Create Temporary Workspace + +```bash +TEMP_WORKSPACE="/tmp/specfact-validation-<change-id>-$(date +%s)" +mkdir -p "$TEMP_WORKSPACE" +``` + +Copy relevant repository structure to temp workspace. + +### 3.2: Analyze Spec Deltas for Interface Changes + +For each spec delta: +1. 
Parse ADDED/MODIFIED/REMOVED requirements. +2. Extract interface changes: function signatures, class interfaces, `@icontract`/`@beartype` decorators, type hints, API endpoints. +3. Create interface scaffolds in temp workspace (stubs only, no implementation): + +```python +# OLD INTERFACE (from existing codebase) +def process_data(data: str, options: dict) -> dict: ... + +# NEW INTERFACE (from change proposal) +def process_data(data: str, options: dict, validate: bool = True) -> dict: ... +``` + +### 3.3: Map Tasks to File Modifications + +For each task, categorize modification type: +- **Interface change**: Function/class signature modification +- **Contract change**: `@icontract` decorator modification +- **Type change**: Type hint modification +- **New/Delete file**: Module/class/function added or removed +- **Documentation**: Non-breaking doc changes + +Create modification map: File path -> Modification type -> Interface changes. + +## Step 4: Dependency Analysis + +### 4.1: Find Dependent Code + +For each modified file/interface, search codebase: +- `from...import...<module>` — find imports +- `<function_name>(` or `<class_name>(` — find usages +- `@<decorator>` — find contract decorators + +Build dependency graph: Modified interface -> dependent files (direct, indirect, test). + +### 4.2: Analyze Breaking Changes + +Compare old vs new interface. 
Detect: +- **Parameter removal**: Required param removed +- **Parameter addition**: Required param added (no default) +- **Parameter type change**: Incompatible type +- **Return type change**: Incompatible return +- **Contract strengthening**: `@require` stricter, `@ensure` weaker +- **Method/class/module removal**: Public API removed + +For each dependent file, check if it would break: +- **Would break**: Incompatible usage detected +- **Would need update**: Compatible but may need adjustment +- **No impact**: Usage compatible + +### 4.3: Identify Required Updates + +Categorize: +- **Critical**: Must update or code breaks +- **Recommended**: Should update for consistency +- **Optional**: No update needed + +## Step 5: Validation Report and Decision + +### 5.1: Summary + +Count breaking changes, affected interfaces, dependent files. Assess impact: High/Medium/Low. + +### 5.2: Present Findings + +``` +Change Validation Report: <change-id> + +Breaking Changes Detected: <count> + - <interface 1>: <description> + +Dependent Files Affected: <count> + Critical (must update): <count> + Recommended: <count> + Optional: <count> + +Impact Assessment: <High/Medium/Low> +``` + +### 5.3: User Decision (if breaking changes) + +**Option A: Extend Scope** — Add tasks to update dependent files. May require major version. + +**Option B: Adjust Change** — Add default params, keep old interface (deprecation), use optional params. + +**Option C: Reject and Defer** — Update status to "deferred", document in CHANGE_VALIDATION.md. + +**No breaking changes**: Proceed to 5.4. + +### 5.4: OpenSpec Validation + +1. Check status: `openspec status --change "<change-id>" --json` +2. Run: `openspec validate <change-id> --strict` +3. Fix issues and re-run until passing. +4. If proposal was updated (scope extended/adjusted), re-validate. 
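The old-vs-new comparison in Step 4.2 can be sketched with the standard library. The helper below covers only one breaking-change class (a required parameter added without a default); its name is an assumption, and the stubs mirror the `process_data` example from Step 3.2:

```python
import inspect

def added_required_params(old, new):
    """Return parameters that are required in `new` but absent from `old` —
    one of the breaking-change classes detected in Step 4.2."""
    old_params = inspect.signature(old).parameters
    new_params = inspect.signature(new).parameters
    return [
        name for name, p in new_params.items()
        if name not in old_params
        and p.default is inspect.Parameter.empty
        and p.kind not in (inspect.Parameter.VAR_POSITIONAL,
                           inspect.Parameter.VAR_KEYWORD)
    ]

# Scaffolds in the temp workspace are stubs only, per Step 3.2.
def old_process(data: str, options: dict) -> dict: ...
def new_ok(data: str, options: dict, validate: bool = True) -> dict: ...
def new_breaking(data: str, options: dict, validate: bool) -> dict: ...
```

`new_ok` adds a defaulted parameter (non-breaking), while `new_breaking` adds a required one and would be flagged; the other classes (removals, type changes, contract strengthening) need analogous checks.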
+ +## Step 6: Create Validation Report + +Create `openspec/changes/<change-id>/CHANGE_VALIDATION.md`: + +```markdown +# Change Validation Report: <change-id> + +**Validation Date**: <timestamp> +**Change Proposal**: [proposal.md](./proposal.md) +**Validation Method**: Dry-run simulation in temporary workspace + +## Executive Summary + +- Breaking Changes: <count> detected / <count> resolved +- Dependent Files: <count> affected +- Impact Level: <High/Medium/Low> +- Validation Result: <Pass/Fail/Deferred> +- User Decision: <Extend Scope/Adjust Change/Reject/N/A> + +## Breaking Changes Detected + +### Interface: <name> +- **Type**: Parameter addition/removal/type change +- **Old Signature**: `<old>` +- **New Signature**: `<new>` +- **Dependent Files**: <file>: <impact> + +## Dependencies Affected + +### Critical Updates Required +- <file>: <reason> + +### Recommended Updates +- <file>: <reason> + +## Impact Assessment + +- **Code Impact**: <description> +- **Test Impact**: <description> +- **Documentation Impact**: <description> +- **Release Impact**: <Minor/Major/Patch> + +## Format Validation + +- **proposal.md Format**: <Pass/Fail> + - Title, sections, capabilities, impact per config.yaml +- **tasks.md Format**: <Pass/Fail> + - Headers, task format, config.yaml compliance (TDD, git workflow, quality gates) +- **specs Format**: <Pass/Fail> + - Given/When/Then format, references existing patterns +- **Config.yaml Compliance**: <Pass/Fail> + +## OpenSpec Validation + +- **Status**: <Pass/Fail> +- **Command**: `openspec validate <change-id> --strict` +- **Issues Found/Fixed**: <count> + +## Validation Artifacts + +- Temporary workspace: <path> +``` + +Update proposal status if deferred, scope extended, or adjusted. 
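The Executive Summary block above can be rendered from the collected findings. A small sketch; the function name and the findings keys are illustrative assumptions:

```python
def render_summary(change_id: str, findings: dict) -> str:
    """Render the Executive Summary header of CHANGE_VALIDATION.md."""
    return "\n".join([
        f"# Change Validation Report: {change_id}",
        "",
        "## Executive Summary",
        "",
        f"- Breaking Changes: {findings['breaking']} detected / {findings['resolved']} resolved",
        f"- Dependent Files: {findings['dependents']} affected",
        f"- Impact Level: {findings['impact']}",
        f"- Validation Result: {findings['result']}",
    ])
```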
+ +## Step 7: Completion + +``` +Change ID: <change-id> +Validation Report: openspec/changes/<change-id>/CHANGE_VALIDATION.md + +Findings: + - Breaking Changes: <count> + - Dependent Files: <count> + - Impact Level: <level> + - Validation Result: <result> + +Next Steps: + <based on decision — implement, re-validate, or defer> +``` + +## Error Handling + +- **Change not found**: Search and suggest alternatives. +- **Repo not accessible**: Inform user, provide manual validation instructions. +- **Breaking changes**: Present options clearly, don't proceed without user decision. +- **Dependency analysis fails**: Continue with partial analysis, note limitations. diff --git a/.gitignore b/.gitignore index 7366d840..16ff7b11 100644 --- a/.gitignore +++ b/.gitignore @@ -106,6 +106,8 @@ docs/internal/ .github/prompts/specfact.*.md .github/prompts/opsx-*.md +.github/skills/openspec-*/ +!.github/skills/openspec-workflows/ .claude/commands/opsx/ .claude/commands/specfact.*.md diff --git a/CHANGELOG.md b/CHANGELOG.md index c3ec321a..4fed2f9f 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -8,6 +8,20 @@ All notable changes to this project will be documented in this file. **Important:** Changes need to be documented below this block as this is the header section. Each section should be separated by a horizontal rule. Newer changelog entries need to be added on top of prior ones to keep the history chronological with most recent changes first. +--- + +## [0.39.0] - 2026-02-28 + +### Added + +- **Category group commands** (OpenSpec change `module-migration-01-categorize-and-group`): Category grouping mounts commands under `code`, `backlog`, `project`, `spec`, and `govern`. Use `specfact code analyze`, `specfact backlog --help`, etc. Flat shims (e.g. `specfact validate`) remain with deprecation notice in Copilot mode. Configurable via `category_grouping_enabled` (default true). 
+- **First-run module selection in `specfact init`**: `--profile solo-developer` and `--profile enterprise-full-stack`, plus `--install <bundles>` and interactive bundle selection on first run when no category bundle is installed. +- **Integration and E2E tests**: `tests/integration/test_category_group_routing.py` and `tests/e2e/test_first_run_init.py` for category routing and init profile flows. + +### Fixed + +- `test_module_grouping.py` now imports `group_modules_by_category` from `module_grouping` instead of `module_packages`, fixing collection errors in the full test suite. + --- ## [0.38.2] - 2026-02-27 diff --git a/README.md b/README.md index 399b781c..9cb4cd3d 100644 --- a/README.md +++ b/README.md @@ -37,6 +37,11 @@ pip install -U specfact-cli # Bootstrap module registry and local config (~/.specfact) specfact init +# First-run bundle selection (examples) +specfact init --profile solo-developer +specfact init --install backlog,codebase +specfact init --install all + # Configure IDE prompts/templates (interactive selector by default) specfact init ide specfact init ide --ide cursor @@ -47,11 +52,11 @@ specfact init ide --ide vscode ```bash # Analyze an existing codebase -specfact import from-code my-project --repo . +specfact project import from-code my-project --repo . 
# Validate external code without modifying source -specfact validate sidecar init my-project /path/to/repo -specfact validate sidecar run my-project /path/to/repo +specfact code validate sidecar init my-project /path/to/repo +specfact code validate sidecar run my-project /path/to/repo ``` ### Backlog Bridge (60 seconds) diff --git a/docs/_layouts/default.html b/docs/_layouts/default.html index 248e1e47..5ec59066 100644 --- a/docs/_layouts/default.html +++ b/docs/_layouts/default.html @@ -189,6 +189,7 @@ <h2 class="docs-sidebar-title"> <li><a href="{{ '/reference/projectbundle-schema/' | relative_url }}">ProjectBundle Schema</a></li> <li><a href="{{ '/reference/module-contracts/' | relative_url }}">Module Contracts</a></li> <li><a href="{{ '/reference/module-security/' | relative_url }}">Module Security</a></li> + <li><a href="{{ '/reference/module-categories/' | relative_url }}">Module Categories</a></li> <li><a href="{{ '/reference/bridge-registry/' | relative_url }}">Bridge Registry</a></li> <li><a href="{{ '/guides/integrations-overview/' | relative_url }}">Integrations Overview</a></li> </ul> diff --git a/docs/getting-started/README.md b/docs/getting-started/README.md index 8a5f65ea..ed352160 100644 --- a/docs/getting-started/README.md +++ b/docs/getting-started/README.md @@ -43,6 +43,13 @@ uvx specfact-cli@latest plan init my-project --interactive **Note**: Interactive AI Assistant mode provides better feature detection and semantic understanding, but requires `pip install specfact-cli` and IDE setup. CLI-only mode works immediately with `uvx` but may show 0 features for simple test cases. +First-run bundle selection examples: + +```bash +specfact init --profile solo-developer +specfact init --install backlog,codebase +``` + ### Modernizing Legacy Code? **New to brownfield modernization?** See our **[Brownfield Engineer Guide](../guides/brownfield-engineer.md)** for a complete walkthrough of modernizing legacy Python code with SpecFact CLI. 
diff --git a/docs/getting-started/first-steps.md b/docs/getting-started/first-steps.md index 6db88ce3..bd4a6a16 100644 --- a/docs/getting-started/first-steps.md +++ b/docs/getting-started/first-steps.md @@ -52,6 +52,9 @@ specfact init ide --ide cursor # - .specfact/templates/backlog/field_mappings/ with default ADO field mapping templates # - IDE-specific command files for your AI assistant (Cursor in this example) +# Optional first-run profile for bundle selection +specfact init --profile solo-developer + # Step 4: Use slash command in IDE chat /specfact.01-import legacy-api --repo . # Or let the AI assistant prompt you for bundle name @@ -92,10 +95,10 @@ specfact init ide --ide cursor ```bash # Review the extracted bundle using CLI commands -specfact plan review my-project +specfact project plan review my-project # Or get structured findings for analysis -specfact plan review my-project --list-findings --findings-format json +specfact project plan review my-project --list-findings --findings-format json ``` Review the auto-generated plan to understand what SpecFact discovered about your codebase. @@ -112,10 +115,10 @@ specfact sdd constitution bootstrap --repo . ```bash # First-time setup: Configure CrossHair for contract exploration -specfact repro setup +specfact code repro setup # Analyze and validate your codebase -specfact repro --verbose +specfact code repro --verbose ``` **What happens**: diff --git a/docs/index.md b/docs/index.md index 076c700d..da2f9c18 100644 --- a/docs/index.md +++ b/docs/index.md @@ -102,6 +102,26 @@ Why this matters: - Interfaces and contracts keep feature development isolated and safer to iterate. - Pending OpenSpec-driven module changes can land incrementally with lower migration risk. 
+### Category Command Groups and First-Run Selection + +SpecFact now groups feature commands by workflow domain: + +- `specfact project ...` +- `specfact backlog ...` +- `specfact code ...` +- `specfact spec ...` +- `specfact govern ...` + +On a fresh setup, `specfact init` supports first-run bundle selection: + +```bash +specfact init --profile solo-developer +specfact init --install backlog,codebase +specfact init --install all +``` + +See [Module Categories](reference/module-categories.md) for full mappings and profile presets. + **Module security and extensions:** - **[Using Module Security and Extensions](guides/using-module-security-and-extensions.md)** - How to use verified modules (arch-06) and schema extensions (arch-07) from the CLI and as a module author diff --git a/docs/reference/README.md b/docs/reference/README.md index 1369dcf9..2d3289d3 100644 --- a/docs/reference/README.md +++ b/docs/reference/README.md @@ -23,6 +23,7 @@ Complete technical reference for SpecFact CLI. - **[Directory Structure](directory-structure.md)** - Project structure and organization - **[Schema Versioning](schema-versioning.md)** - Bundle schema versions and backward compatibility (v1.0, v1.1) - **[Module Security](module-security.md)** - Marketplace/module integrity and publisher metadata +- **[Module Categories](module-categories.md)** - Category grouping model, canonical module assignments, bundles, and first-run profiles - **[Dependency resolution](dependency-resolution.md)** - How module/pip dependency resolution works and bypass options ## Quick Reference diff --git a/docs/reference/commands.md b/docs/reference/commands.md index 6a4305c1..facbca94 100644 --- a/docs/reference/commands.md +++ b/docs/reference/commands.md @@ -5089,6 +5089,8 @@ specfact init [OPTIONS] - `--repo PATH` - Repository path (default: current directory) - `--install-deps` - Install contract enhancement dependencies (prefer `specfact init ide --install-deps`) +- `--profile TEXT` - First-run bundle 
profile (`solo-developer`, `backlog-team`, `api-first-team`, `enterprise-full-stack`) +- `--install TEXT` - First-run bundle selection by aliases (`project`, `backlog`, `codebase|code`, `spec`, `govern`) or `all` **Examples:** @@ -5096,6 +5098,13 @@ specfact init [OPTIONS] # Bootstrap only (no IDE prompt/template copy) specfact init +# Bootstrap and install a profile preset (first run) +specfact init --profile solo-developer + +# Bootstrap and install explicit bundles (first run) +specfact init --install backlog,codebase +specfact init --install all + # Install dependencies during bootstrap specfact init --install-deps ``` @@ -5104,8 +5113,9 @@ specfact init --install-deps 1. Initializes/updates user-level registry state under `~/.specfact/registry/`. 2. Discovers installed modules and refreshes command help cache. -3. Prints a header note that module management moved to `specfact module`. -4. Reports IDE prompt status and points to `specfact init ide` for prompt/template setup. +3. On first run, supports interactive bundle selection (or non-interactive `--profile` / `--install`). +4. Prints a header note that module management moved to `specfact module`. +5. Reports IDE prompt status and points to `specfact init ide` for prompt/template setup. ### `module` - Module Lifecycle and Marketplace Management diff --git a/docs/reference/module-categories.md b/docs/reference/module-categories.md new file mode 100644 index 00000000..70ef04bd --- /dev/null +++ b/docs/reference/module-categories.md @@ -0,0 +1,87 @@ +--- +layout: default +title: Module Categories +nav_order: 35 +permalink: /reference/module-categories/ +--- + +# Module Categories + +SpecFact groups feature modules into workflow-oriented command families. 
+ +Core commands remain top-level: + +- `specfact init` +- `specfact auth` +- `specfact module` +- `specfact upgrade` + +Category command groups: + +- `specfact project ...` +- `specfact backlog ...` +- `specfact code ...` +- `specfact spec ...` +- `specfact govern ...` + +## Canonical Category Assignments + +| Module | Category | Bundle | Group Command | Sub-command | +|---|---|---|---|---| +| `init` | `core` | — | — | `init` | +| `auth` | `core` | — | — | `auth` | +| `module_registry` | `core` | — | — | `module` | +| `upgrade` | `core` | — | — | `upgrade` | +| `project` | `project` | `specfact-project` | `project` | `project` | +| `plan` | `project` | `specfact-project` | `project` | `plan` | +| `import_cmd` | `project` | `specfact-project` | `project` | `import` | +| `sync` | `project` | `specfact-project` | `project` | `sync` | +| `migrate` | `project` | `specfact-project` | `project` | `migrate` | +| `backlog` | `backlog` | `specfact-backlog` | `backlog` | `backlog` | +| `policy_engine` | `backlog` | `specfact-backlog` | `backlog` | `policy` | +| `analyze` | `codebase` | `specfact-codebase` | `code` | `analyze` | +| `drift` | `codebase` | `specfact-codebase` | `code` | `drift` | +| `validate` | `codebase` | `specfact-codebase` | `code` | `validate` | +| `repro` | `codebase` | `specfact-codebase` | `code` | `repro` | +| `contract` | `spec` | `specfact-spec` | `spec` | `contract` | +| `spec` | `spec` | `specfact-spec` | `spec` | `api` | +| `sdd` | `spec` | `specfact-spec` | `spec` | `sdd` | +| `generate` | `spec` | `specfact-spec` | `spec` | `generate` | +| `enforce` | `govern` | `specfact-govern` | `govern` | `enforce` | +| `patch_mode` | `govern` | `specfact-govern` | `govern` | `patch` | + +## Bundle Contents by Category + +- `specfact-project`: `project`, `plan`, `import`, `sync`, `migrate` +- `specfact-backlog`: `backlog`, `policy` +- `specfact-codebase`: `analyze`, `drift`, `validate`, `repro` +- `specfact-spec`: `contract`, `api`, `sdd`, `generate` +- 
`specfact-govern`: `enforce`, `patch` + +## First-Run Profiles + +`specfact init` supports profile presets and explicit bundle selection: + +- `solo-developer` -> `specfact-codebase` +- `backlog-team` -> `specfact-backlog`, `specfact-project`, `specfact-codebase` +- `api-first-team` -> `specfact-spec`, `specfact-codebase` +- `enterprise-full-stack` -> `specfact-project`, `specfact-backlog`, `specfact-codebase`, `specfact-spec`, `specfact-govern` + +Examples: + +```bash +specfact init --profile solo-developer +specfact init --install backlog,codebase +specfact init --install all +``` + +## Command Topology: Before and After + +Before: + +- Flat top-level command surface with many feature commands. + +After: + +- Core top-level commands plus grouped workflow families (`project`, `backlog`, `code`, `spec`, `govern`). +- Backward-compatibility flat shims remain available during migration. diff --git a/modules/backlog-core/module-package.yaml b/modules/backlog-core/module-package.yaml index 4caea7a8..cfe42d0d 100644 --- a/modules/backlog-core/module-package.yaml +++ b/modules/backlog-core/module-package.yaml @@ -1,7 +1,11 @@ name: backlog-core -version: 0.1.5 +version: 0.1.6 commands: - backlog +category: backlog +bundle: specfact-backlog +bundle_group_command: backlog +bundle_sub_command: core command_help: backlog: Backlog dependency analysis, delta workflows, and release readiness pip_dependencies: [] @@ -20,10 +24,10 @@ schema_extensions: publisher: name: nold-ai url: https://github.com/nold-ai/specfact-cli-modules - email: oss@nold.ai + email: hello@noldai.com integrity: - checksum: sha256:c6ae56b1e5f3cf4d4bc0d9d256f24e6377f08e4e82a1f8bead935c0e7cee7431 - signature: FpTzbqYcR+6jiRUXjqvzfmqoLGeam7lLyLLc/ZfT7AokzRPz4cl5F/KO0b3XZmXQfHWfT+GFTJi5T/POkobJCg== + checksum: sha256:786a67c54f70930208265217499634ccd5e04cb8404d00762bce2e01904c55e4 + signature: Q8CweUicTL/btp9p5QYTlBuXF3yoKvz9ZwaGK0yw3QSM72nni28ZBJ+FivGkmBfcH5zXWAGtASbqC4ry8m5DDQ== dependencies: [] description: 
Provide advanced backlog analysis and readiness capabilities. license: Apache-2.0 diff --git a/modules/bundle-mapper/module-package.yaml b/modules/bundle-mapper/module-package.yaml index 5e7b8380..2dd2e3b2 100644 --- a/modules/bundle-mapper/module-package.yaml +++ b/modules/bundle-mapper/module-package.yaml @@ -1,6 +1,7 @@ name: bundle-mapper -version: 0.1.2 +version: 0.1.3 commands: [] +category: core pip_dependencies: [] module_dependencies: [] core_compatibility: '>=0.28.0,<1.0.0' @@ -17,10 +18,10 @@ schema_extensions: publisher: name: nold-ai url: https://github.com/nold-ai/specfact-cli-modules - email: oss@nold.ai + email: hello@noldai.com integrity: - checksum: sha256:1012f453bc4ae83b22e2cfabce13e5e324d9b4cdf454ce0159b5c5e17dd36f77 - signature: LlPqbIH6uD70AInX28PpVurOEv+W/Ztarj5yQhZ3MkC3yORcQrh6ISvJsQeFHFiV1cmnYck7RfDipl4FJyzDAA== + checksum: sha256:359763f8589be35f00b53a996d76ccec32789508d0a2d7dae7e3cdb039a92fc3 + signature: OmAp12Rdk79IewQYiKRqvvAm8UgM6onL52Y2/ixSgX3X7onoc9FBKzBYuPmynEVgmJWAI2AX2gdujo/bKH5nAg== dependencies: [] description: Map backlog items to best-fit modules using scoring heuristics. 
license: Apache-2.0 diff --git a/openspec/CHANGE_ORDER.md b/openspec/CHANGE_ORDER.md index 01e1c62b..89f78c2e 100644 --- a/openspec/CHANGE_ORDER.md +++ b/openspec/CHANGE_ORDER.md @@ -74,14 +74,18 @@ These are derived extensions of the same 2026-02-15 plan and are required to ope |--------|-------|---------------|----------|------------| | marketplace | 01 | ✅ marketplace-01-central-module-registry (implemented 2026-02-22; archived) | [#214](https://github.com/nold-ai/specfact-cli/issues/214) | #208 | | marketplace | 02 | marketplace-02-advanced-marketplace-features | [#215](https://github.com/nold-ai/specfact-cli/issues/215) | #214 | +| marketplace | 03 | marketplace-03-publisher-identity | [#327](https://github.com/nold-ai/specfact-cli/issues/327) | #215 (marketplace-02) | +| marketplace | 04 | marketplace-04-revocation | [#328](https://github.com/nold-ai/specfact-cli/issues/328) | marketplace-03 | +| marketplace | 05 | marketplace-05-registry-federation | [#329](https://github.com/nold-ai/specfact-cli/issues/329) | marketplace-03 | ### Module migration (UX grouping and extraction) | Module | Order | Change folder | GitHub # | Blocked by | |--------|-------|---------------|----------|------------| -| module-migration | 01 | module-migration-01-categorize-and-group | TBD | #215 (marketplace-02) | +| module-migration | 01 | module-migration-01-categorize-and-group | [#315](https://github.com/nold-ai/specfact-cli/issues/315) | #215 ✅ (marketplace-02) | | module-migration | 02 | module-migration-02-bundle-extraction | TBD | module-migration-01 | | module-migration | 03 | module-migration-03-core-slimming | TBD | module-migration-02 | +| module-migration | 04 | module-migration-04-remove-flat-shims | TBD | module-migration-01 | ### Cross-cutting foundations (no hard dependencies — implement early) @@ -219,6 +223,9 @@ Set these in GitHub so issue dependencies are explicit. 
Optional dependencies ar | [#213](https://github.com/nold-ai/specfact-cli/issues/213) | arch-07 schema extensions | arch-04 ✅ (already implemented) | | [#214](https://github.com/nold-ai/specfact-cli/issues/214) | marketplace-01 registry | #208 | | [#215](https://github.com/nold-ai/specfact-cli/issues/215) | marketplace-02 advanced features | #214 | +| [#327](https://github.com/nold-ai/specfact-cli/issues/327) | marketplace-03 publisher identity | #215 | +| [#328](https://github.com/nold-ai/specfact-cli/issues/328) | marketplace-04 revocation | marketplace-03 (#327) | +| [#329](https://github.com/nold-ai/specfact-cli/issues/329) | marketplace-05 registry federation | marketplace-03 (#327) | | [#173](https://github.com/nold-ai/specfact-cli/issues/173) | backlog-core-02 interactive create | #116 | | [#220](https://github.com/nold-ai/specfact-cli/issues/220) | backlog-scrum-01 standup | #116 | | [#170](https://github.com/nold-ai/specfact-cli/issues/170) | backlog-scrum-02 sprint planning | #116 | @@ -317,8 +324,12 @@ Dependencies flow left-to-right; a wave may start once all its hard blockers are - marketplace-02 (needs marketplace-01) - backlog-scrum-01 ✅ (needs backlog-core-01; benefits from policy-engine-01 + patch-mode-01) - backlog-safe-02 (needs backlog-safe-01; integrates with scrum/kanban via bridge registry) - - module-migration-01-categorize-and-group (needs marketplace-02; adds category metadata + group commands) + - module-migration-01-categorize-and-group (marketplace-02 dependency resolved; adds category metadata + group commands) + - module-migration-04-remove-flat-shims (0.40.x; needs module-migration-01; removes flat shims, category-only CLI) - module-migration-02-bundle-extraction (needs module-migration-01; moves module source to bundle packages, publishes to marketplace registry) + - marketplace-03-publisher-identity (needs marketplace-02; can run parallel with module-migration-01/02/03) + - marketplace-04-revocation (needs marketplace-03; must land 
before external publisher onboarding) + - marketplace-05-registry-federation (needs marketplace-03) - **Wave 4 — Ceremony layer + module slimming** (needs Wave 3): - ceremony-cockpit-01 ✅ (probes installed backlog-* modules at runtime; no hard deps but best after Wave 3) diff --git a/openspec/changes/marketplace-03-publisher-identity/design.md b/openspec/changes/marketplace-03-publisher-identity/design.md new file mode 100644 index 00000000..57c9d209 --- /dev/null +++ b/openspec/changes/marketplace-03-publisher-identity/design.md @@ -0,0 +1,193 @@ +# Design: Publisher Identity and Module Trust Chain + +## Context + +marketplace-01 established Ed25519 publisher signing infrastructure. marketplace-02 adds multi-registry support and a trust level per registry. What is missing is a publisher identity layer: who is a publisher, what tier are they, and how does the CLI verify their identity at install time without a live accounts database. + +**Current State:** + +- `crypto_validator.py`: `validate_module()` has an `official` tier branch; publisher = `nold-ai` (string comparison) +- No structured publisher record; no publisher key lookup from a signed index +- No `community` or `verified` tier handling +- `custom_registries.py`: registry trust level exists (from marketplace-02) but not linked to publisher tier + +**Constraints:** + +- NOLD AI root key must be bundled at build time (offline-first; no runtime CA lookup) +- Trust index fetch must cache gracefully (7-day TTL fallback for CDN failures) +- Backward compatible: existing `official` tier install path must not be touched +- `specfact-cli-modules/` repo is separate — CLI reads from its deployed trust index URL, not from local files +- Must not introduce a server-side accounts database in Phase 1 + +## Goals / Non-Goals + +**Goals:** + +- Structured publisher identity with three tiers (official, verified, community) +- CLI verifies publisher attestation + registry endorsement at install time +- Trust tier displayed 
in search/info output +- Trust override flags with audit logging (offline-safe) + +**Non-Goals:** + +- Publisher self-registration UI (specfact.io-backend-phase1 — separate change, separate repo) +- Registry federation / external registry certificates (marketplace-05) +- Revocation infrastructure (marketplace-04) +- Paid module gating (marketplace-06, requires legal entity) + +## Architecture + +### Trust Layer (`src/specfact_cli/trust/`) + +Three modules with clear separation of concerns: + +```text +trust/ + key_store.py — NOLD AI root public key (Ed25519), bundled at build time + publisher_registry.py — fetch + cache publishers/index.json; verify NOLD AI signature + resolver.py — tier resolution order; install gate logic; audit logging +``` + +`resolver.py` calls `crypto_validator.py` for low-level Ed25519 operations — does NOT duplicate crypto. + +### Trust Resolution Sequence + +```text +specfact module install @mycompany/specfact-jira-sync + │ + ├─ trust/publisher_registry.py: fetch publishers/index.json (cache 7d) + ├─ key_store.py: verify NOLD AI signature over publishers/index.json + ├─ Resolve publisher_id from module's module-package.yaml publisher block + ├─ Fetch publisher record from index (publisher_id → tier + public_key) + ├─ crypto_validator.py: verify publisher Ed25519 signature on bundle + ├─ trust/resolver.py: resolve effective tier (publisher tier ∩ registry tier) + │ + ├─ official → install without prompt + ├─ verified → install without prompt + ├─ community → prompt unless --trust-community (log to audit) + └─ unregistered → block unless --trust-unregistered (log to audit) +``` + +### crypto_validator.py Extension Strategy + +The existing `official` branch is preserved unchanged. 
New branches added alongside: + +```python +# BEFORE (migration-02): single official check +if publisher == "nold-ai": + validate_official(bundle, signature) + +# AFTER (marketplace-03): tier dispatch, official path unchanged +match tier: + case "official": + validate_official(bundle, signature) # unchanged + case "verified": + validate_verified(bundle, publisher_record, signature) # new + case "community": + validate_community(bundle, publisher_record, signature) # new + case _: + raise UnregisteredPublisherError(publisher) +``` + +### Publisher Record Format + +`module-package.yaml` structured `publisher:` block (marketplace-03 format): + +```yaml +publisher: + publisher_id: pub_abc123 + handle: mycompany + tier: verified + public_key_fingerprint: sha256:abcdef... + publisher_signature: "<Ed25519 sig over name+version+sha256>" +``` + +CLI accepts both the legacy `publisher: nold-ai` string (migration-02) and the structured block during the transition window. Format detected at parse time. + +### Registry Endorsement Countersignature + +`registry/index.json` entry gains a `registry_signature` field (NOLD AI countersig over `name+version+publisher_id+checksum_sha256`). This is **distinct** from the publisher's `signature_ed25519`. Both coexist: + +```json +{ + "name": "specfact-jira-sync", + "version": "1.0.0", + "publisher_id": "pub_abc123", + "tier": "verified", + "checksum_sha256": "abcdef...", + "signature_ed25519": "<publisher sig — from migration-02>", + "registry_signature": "<NOLD AI countersig — new in marketplace-03>" +} +``` + +`scripts/publish-module.py` adds the countersig step after existing publisher signing. 
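The countersignature check can be sketched as follows — a minimal illustration assuming the `cryptography` library's Ed25519 API and plain UTF-8 concatenation for the canonical message; the helper names are illustrative, not the actual `crypto_validator.py` functions:

```python
import base64

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey


def canonical_endorsement_message(entry: dict) -> bytes:
    # Canonical concatenation covered by the NOLD AI countersig:
    # name + version + publisher_id + checksum_sha256
    return "".join(
        [entry["name"], entry["version"], entry["publisher_id"], entry["checksum_sha256"]]
    ).encode("utf-8")


def verify_registry_endorsement(entry: dict, root_key: Ed25519PublicKey) -> bool:
    """Return True iff registry_signature verifies against the bundled root key."""
    try:
        root_key.verify(
            base64.b64decode(entry["registry_signature"]),
            canonical_endorsement_message(entry),
        )
        return True
    except InvalidSignature:
        return False
```

Because the countersig is computed over the checksum as well, tampering with either the artifact hash or the publisher binding invalidates the endorsement without any network lookup.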
+ +## Decisions + +### Decision 1: Trust Index Caching Strategy + +**Options:** + +- A: Always fetch from CDN, fail hard if unavailable +- B: Cache with TTL, serve stale with warning +- C: Cache only, no online refresh + +Choice: B (7-day TTL cache, serve stale with staleness warning) + +**Rationale:** + +- Offline-first constraint: CLI must work without internet during runs +- 7-day staleness acceptable for publisher index (revocation handled by marketplace-04) +- Warning informs user when cache is stale without blocking install + +### Decision 2: Root Key Bundling + +**Options:** + +- A: Fetch root key from well-known URL at runtime +- B: Bundle root key in CLI package at build time +- C: Store in user config (~/.specfact/) + +Choice: B (bundled at build time) + +**Rationale:** + +- Offline-first: no network required for key verification +- Tamper-evident: key changes require a CLI release (auditable) +- Acceptable key rotation cadence: quarterly, requires CLI update + +### Decision 3: Audit Log Location + +**Choice**: `~/.specfact/module-audit.log` — append-only, human-readable, line-per-install + +Captures: timestamp, module, tier, action (installed/blocked/prompted/accepted), flag used. 
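A minimal sketch of the audit log append for Decision 3 — the captured fields come from the decision above, but the `key=value` line layout and helper name are assumptions, not the shipped format:

```python
from datetime import datetime, timezone
from pathlib import Path
from typing import Optional

# Default location per Decision 3; overridable for tests.
AUDIT_LOG = Path.home() / ".specfact" / "module-audit.log"


def append_audit_entry(
    module: str,
    tier: str,
    action: str,  # installed | blocked | prompted | accepted
    flag: Optional[str] = None,
    log_path: Path = AUDIT_LOG,
) -> str:
    """Append one human-readable line per install decision; returns the line."""
    timestamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
    line = f"{timestamp} module={module} tier={tier} action={action} flag={flag or '-'}"
    log_path.parent.mkdir(parents=True, exist_ok=True)
    # Append-only: open in "a" mode, never truncate or rewrite prior entries.
    with log_path.open("a", encoding="utf-8") as fh:
        fh.write(line + "\n")
    return line
```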
+ +## Sequence Diagrams + +### Install with Publisher Attestation + +```text +User CLI trust/ crypto_validator CDN + │ │ │ │ │ + │ install @org/mod │ │ │ │ + │──────────────────>│ │ │ │ + │ │ fetch publishers/ │ │ │ + │ │─────────────────> │ │ │ + │ │ │ GET /trust/publishers/index.json │ + │ │ │──────────────────────────────────>│ + │ │ │<──────────────────────────────────│ + │ │ │ verify NOLD AI sig │ + │ │ │────────────────────> │ + │ │ │<──────────────────── │ + │ │ resolve publisher │ │ │ + │ │<────────────────- │ │ │ + │ │ verify bundle sig │ │ │ + │ │────────────────────────────────────────> │ + │ │<──────────────────────────────────────── │ + │ │ resolve tier │ │ │ + │ │─────────────────> │ │ │ + │ │ tier=verified │ │ │ + │ │<─────────────────│ │ │ + │ install proceeds │ │ │ │ + │<──────────────────│ │ │ │ +``` diff --git a/openspec/changes/marketplace-03-publisher-identity/proposal.md b/openspec/changes/marketplace-03-publisher-identity/proposal.md new file mode 100644 index 00000000..02f9fcda --- /dev/null +++ b/openspec/changes/marketplace-03-publisher-identity/proposal.md @@ -0,0 +1,65 @@ +# Change: Publisher Identity and Module Trust Chain + +## Why + +marketplace-02 provides multi-registry support and dependency resolution, but modules carry no publisher attestation beyond a simple `publisher: nold-ai` string (introduced by module-migration-02). To enable a verified third-party module ecosystem, the CLI needs a CA-style publisher identity system: NOLD AI vouches for publisher identity and module integrity, but not for module content or behaviour. Publishers host their own artifacts; NOLD AI hosts only the trust index. + +Without publisher attestation and registry endorsement countersignatures, the CLI cannot distinguish between official, verified-community, and unregistered modules — making safe third-party module installation impossible. 
+ +## What Changes + +- **NEW**: `src/specfact_cli/trust/` — trust orchestration layer with three modules: + - `resolver.py` — trust tier resolution order (`official > verified > community > unregistered`) + - `publisher_registry.py` — fetch, cache, and verify `publishers/index.json` from trust index + - `key_store.py` — NOLD AI root public key bundle (Ed25519) embedded at CLI build time +- **MODIFY**: `src/specfact_cli/registry/crypto_validator.py` — extend `validate_module()` to add `verified` and `community` tier branches alongside the existing `official` branch from module-migration-02; add publisher record lookup from trust layer; do NOT replace the `official` path +- **MODIFY**: `src/specfact_cli/modules/module_registry/src/` — trust verification at install time (call trust layer before download); trust tier display in `specfact module search` and `specfact module info` output; `--trust-community` and `--trust-unregistered` flags with audit logging to `~/.specfact/module-audit.log` +- **MODIFY**: `scripts/publish-module.py` — add NOLD AI registry endorsement countersignature step after existing publisher signing step +- **NEW**: `scripts/sign-publishers.py` — signs `publishers/index.json` with NOLD AI root key (run by CI on merge to specfact-cli-modules) +- **NEW**: `docs/guides/publisher-trust.md` — user-facing guide on trust tiers, verification, install flags +- **MODIFY**: `docs/reference/module-commands.md` — document trust tier output, new flags +- **MODIFY**: `docs/_layouts/default.html` — add publisher-trust guide to sidebar navigation + +**Backward compatibility**: Fully additive. The existing `official` tier path in `crypto_validator.py` is preserved unchanged. The transition from `publisher: nold-ai` (string, migration-02) to the structured `publisher:` block is handled by a format-detection branch in the parser — the CLI accepts both during a transition window. 
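The format-detection branch described above could look roughly like this — a sketch only: the legacy-string shape follows the proposal's description, the structured-block fields follow the design doc's publisher record format, and the fallback tiering for unknown strings is an assumption:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class PublisherInfo:
    format: str  # "legacy" or "structured"
    handle: str
    tier: str
    publisher_id: Optional[str] = None


def parse_publisher_block(publisher) -> PublisherInfo:
    """Detect legacy string vs structured publisher block at parse time."""
    if isinstance(publisher, str):
        # migration-02 format: bare string, e.g. `publisher: nold-ai`.
        # Assumption: any non-official legacy string is treated as unregistered.
        tier = "official" if publisher == "nold-ai" else "unregistered"
        return PublisherInfo(format="legacy", handle=publisher, tier=tier)
    if isinstance(publisher, dict) and "publisher_id" in publisher:
        # marketplace-03 format: structured block with identity fields.
        return PublisherInfo(
            format="structured",
            handle=publisher["handle"],
            tier=publisher["tier"],
            publisher_id=publisher["publisher_id"],
        )
    raise ValueError(f"Unrecognised publisher block: {publisher!r}")
```

Both branches feed the same downstream trust resolution, so callers never need to know which format a manifest used during the transition window.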
+ +**Rollback plan**: Revert trust/ module import, restore pre-marketplace-03 `crypto_validator.py` — all existing install flows remain unchanged. + +## Capabilities + +### New Capabilities + +- `publisher-identity`: `publishers/index.json` schema definition, JSON Schema validation, structured publisher records (publisher_id, handle, tier, github_org, domain, public_key) +- `module-trust-chain`: structured `publisher:` block in `module-package.yaml` (Level 2 publisher attestation); `registry_signature` NOLD AI countersig in `registry/index.json` entries (Level 3 registry endorsement); NOLD AI root key bundle in CLI build +- `trust-resolution`: tier resolution order enforcement (`official > verified > community > unregistered`) at install time; `--trust-community` / `--trust-unregistered` flags; audit logging; tier badges in search/info output + +### Modified Capabilities + +- `module-security`: extend `validate_module()` in `crypto_validator.py` to add `verified` and `community` tier branches (spec delta only — extends existing capability from marketplace-01/migration-02) + +## Impact + +- **Affected code**: + - `src/specfact_cli/trust/` (new: resolver.py, publisher_registry.py, key_store.py) + - `src/specfact_cli/registry/crypto_validator.py` (modify: extend tier branches) + - `src/specfact_cli/modules/module_registry/src/` (modify: trust verification + display) + - `scripts/publish-module.py` (modify: add registry endorsement countersig step) + - `scripts/sign-publishers.py` (new: CI signing script) +- **Affected specs**: New specs for `publisher-identity`, `module-trust-chain`, `trust-resolution`; delta spec for `module-security` +- **Affected documentation**: + - `docs/guides/publisher-trust.md` (new) + - `docs/reference/module-commands.md` (update: flags, trust display) + - `docs/_layouts/default.html` (navigation update) +- **External dependencies**: None beyond existing `cryptography` library (already in requirements via arch-06) +- **Integration points**: 
Trust layer integrates with `crypto_validator.py` (signature checks), `module_installer.py` (pre-install gate), `custom_registries.py` (registry trust level) +- **Backward compatibility**: Fully additive; official tier path unchanged; dual-format publisher string handled +- **Hard blocker**: marketplace-02 (#215) must land first (provides `custom_registries.py` trust level infrastructure that `trust/resolver.py` extends) + +--- + +## Source Tracking + +<!-- source_repo: nold-ai/specfact-cli --> +- **GitHub Issue**: #327 +- **Issue URL**: <https://github.com/nold-ai/specfact-cli/issues/327> +- **Repository**: nold-ai/specfact-cli +- **Last Synced Status**: proposed diff --git a/openspec/changes/marketplace-03-publisher-identity/specs/module-security/spec.md b/openspec/changes/marketplace-03-publisher-identity/specs/module-security/spec.md new file mode 100644 index 00000000..350178df --- /dev/null +++ b/openspec/changes/marketplace-03-publisher-identity/specs/module-security/spec.md @@ -0,0 +1,47 @@ +# module-security Specification Delta + +## Purpose + +Delta spec extending the existing `module-security` capability (established by marketplace-01 and arch-06) to support `verified` and `community` tier branches in `validate_module()`. The `official` path defined in module-migration-02 is preserved and unchanged. + +## MODIFIED Requirements + +### Requirement: validate_module() dispatches by publisher tier + +`validate_module()` in `crypto_validator.py` SHALL dispatch validation logic based on publisher tier, adding `verified` and `community` branches without replacing the existing `official` branch. 
+ +#### Scenario: Official module validation (existing — unchanged) + +- **GIVEN** a module with `publisher: nold-ai` (legacy string) or `tier: official` +- **WHEN** `validate_module()` is called +- **THEN** SHALL execute the existing official validation path unchanged +- **AND** SHALL NOT call publisher registry lookup + +#### Scenario: Verified module validation (new) + +- **GIVEN** a module with `tier: verified` and a structured `publisher:` block +- **WHEN** `validate_module()` is called +- **THEN** SHALL call `trust/publisher_registry.resolve_publisher(publisher_id, index)` to fetch publisher record +- **AND** SHALL verify `publisher_signature` against the resolved public key +- **AND** SHALL verify `registry_signature` (NOLD AI countersig) against the bundled root key +- **AND** SHALL raise `PublisherSignatureMismatchError` if either check fails + +#### Scenario: Community module validation (new) + +- **GIVEN** a module with `tier: community` and a structured `publisher:` block +- **WHEN** `validate_module()` is called +- **THEN** SHALL call `trust/publisher_registry.resolve_publisher(publisher_id, index)` +- **AND** SHALL verify `publisher_signature` (publisher key from index) +- **AND** `registry_signature` check is optional for `community` tier (countersig may not be present) +- **AND** SHALL raise `PublisherSignatureMismatchError` if publisher signature fails + +#### Scenario: Unknown tier + +- **GIVEN** a module with an unrecognised `tier` value +- **WHEN** `validate_module()` is called +- **THEN** SHALL raise `UnknownTierError(tier)` with a clear message + +## Contract Requirements + +- `validate_module(module: ModuleManifest, tier: str, publisher_index: PublisherIndex | None) -> ValidationResult` — `@require` tier in `{"official", "verified", "community"}`; `@ensure` result.valid is bool; `@beartype` +- Existing contract on `official` path MUST remain satisfied — no regressions allowed diff --git 
a/openspec/changes/marketplace-03-publisher-identity/specs/module-trust-chain/spec.md b/openspec/changes/marketplace-03-publisher-identity/specs/module-trust-chain/spec.md new file mode 100644 index 00000000..b393f56d --- /dev/null +++ b/openspec/changes/marketplace-03-publisher-identity/specs/module-trust-chain/spec.md @@ -0,0 +1,83 @@ +# module-trust-chain Specification + +## Purpose + +Defines the three-level trust chain for published modules: package integrity (Level 1, existing), publisher attestation via structured `publisher:` block (Level 2, new), and NOLD AI registry endorsement countersignature (Level 3, new). + +## ADDED Requirements + +### Requirement: Verify publisher attestation (Level 2) + +The CLI SHALL verify the publisher's Ed25519 signature over the bundle at install time. + +#### Scenario: Valid publisher attestation + +- **GIVEN** a module bundle with a structured `publisher:` block containing a `publisher_signature` +- **AND** the publisher record is found in `publishers/index.json` +- **WHEN** CLI installs the module +- **THEN** CLI SHALL verify `publisher_signature` against the publisher's `public_key` from the index +- **AND** the signature covers `name + version + sha256` (canonical concatenation) +- **AND** SHALL proceed with install if signature is valid + +#### Scenario: Publisher signature mismatch + +- **GIVEN** a module bundle where `publisher_signature` does not verify against the publisher's public key +- **WHEN** CLI attempts install +- **THEN** CLI SHALL raise `PublisherSignatureMismatchError` and abort install +- **AND** SHALL NOT install the bundle under any flag combination + +#### Scenario: Missing publisher_signature in structured block + +- **GIVEN** a module with a structured `publisher:` block that lacks `publisher_signature` +- **WHEN** CLI attempts install +- **THEN** CLI SHALL treat the module as `unregistered` and apply unregistered install policy + +### Requirement: Verify NOLD AI registry endorsement 
countersignature (Level 3) + +The CLI SHALL verify the `registry_signature` (NOLD AI countersig) on each registry index entry before install. + +#### Scenario: Valid registry endorsement + +- **GIVEN** a registry `index.json` entry contains both `signature_ed25519` (publisher) and `registry_signature` (NOLD AI) +- **WHEN** CLI resolves a module from the registry +- **THEN** CLI SHALL verify `registry_signature` against the NOLD AI root public key +- **AND** the countersig covers `name + version + publisher_id + checksum_sha256` (canonical) +- **AND** SHALL proceed if valid + +#### Scenario: Missing registry_signature (pre-marketplace-03 entries) + +- **GIVEN** a registry entry that pre-dates marketplace-03 and has no `registry_signature` field +- **WHEN** CLI resolves the entry +- **THEN** CLI SHALL treat it as `official` tier if publisher is `nold-ai` (backward compatibility) +- **AND** SHALL surface `[WARN] No registry endorsement found; treating as official (legacy entry)` + +#### Scenario: registry_signature verification failure + +- **GIVEN** a registry entry where `registry_signature` does not verify against the NOLD AI root key +- **WHEN** CLI resolves the entry +- **THEN** CLI SHALL reject the entry and raise `RegistryEndorsementTamperError` +- **AND** SHALL NOT proceed with install + +### Requirement: NOLD AI root public key bundled at build time + +The CLI build process SHALL embed the NOLD AI Ed25519 root public key in `trust/key_store.py`. 
+ +#### Scenario: Root key loaded from bundle + +- **GIVEN** the CLI is installed offline (no network) +- **WHEN** CLI loads the trust layer +- **THEN** CLI SHALL successfully load the NOLD AI root public key from the bundled `key_store.py` +- **AND** SHALL NOT require any network call to load the root key + +#### Scenario: Overridable trust index URL + +- **GIVEN** user has set `trust_index_url: https://internal.corp/trust/` in `~/.specfact/config.yaml` +- **WHEN** CLI fetches the publisher index +- **THEN** CLI SHALL use the configured URL instead of the default `https://specfact.io/trust/` +- **AND** SHALL still verify the NOLD AI signature using the bundled root key + +## Contract Requirements + +- `key_store.get_root_public_key() -> Ed25519PublicKey` — `@ensure` returns non-None; no network call +- `crypto_validator.validate_registry_endorsement(entry: RegistryEntry, root_key: Ed25519PublicKey) -> bool` — `@require` entry has checksum_sha256; `@beartype` +- `crypto_validator.validate_publisher_attestation(bundle_sha256: str, publisher_record: PublisherRecord, publisher_signature: str) -> bool` — `@require` all inputs non-empty; `@beartype` diff --git a/openspec/changes/marketplace-03-publisher-identity/specs/publisher-identity/spec.md b/openspec/changes/marketplace-03-publisher-identity/specs/publisher-identity/spec.md new file mode 100644 index 00000000..38deb762 --- /dev/null +++ b/openspec/changes/marketplace-03-publisher-identity/specs/publisher-identity/spec.md @@ -0,0 +1,81 @@ +# publisher-identity Specification + +## Purpose + +Defines the structured publisher record format, `publishers/index.json` schema validation, and the CLI's ability to resolve publisher metadata at install time. + +## ADDED Requirements + +### Requirement: Resolve publisher record from trust index + +The CLI SHALL fetch and cache `publishers/index.json` from the configured trust index URL. 
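A sketch of the cache-freshness check behind these scenarios, assuming the index is cached as a JSON file whose mtime drives the 7-day TTL (the function name and return shape are illustrative, not the real `publisher_registry` API):

```python
from __future__ import annotations

import json
import time
from pathlib import Path

CACHE_TTL_SECONDS = 7 * 24 * 3600  # 7-day TTL per the spec


def load_cached_index(cache_path: Path, now: float | None = None) -> tuple[dict | None, bool]:
    """Return (index, is_stale); a missing cache yields (None, False)."""
    if not cache_path.exists():
        return None, False
    now = time.time() if now is None else now
    is_stale = (now - cache_path.stat().st_mtime) > CACHE_TTL_SECONDS
    return json.loads(cache_path.read_text(encoding="utf-8")), is_stale
```

The caller then applies policy: a fresh cache is served directly, a stale cache is served with the `[WARN]` from the stale-cache scenario when the CDN is unreachable.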
+ +#### Scenario: Fetch publishers index on first install + +- **GIVEN** the trust index URL is `https://specfact.io/trust/` +- **WHEN** user installs a module with a structured `publisher:` block +- **THEN** CLI SHALL fetch `publishers/index.json` from the trust index +- **AND** SHALL verify the NOLD AI signature over the index using the bundled root public key +- **AND** SHALL cache the index in `~/.specfact/cache/publishers-index.json` with a 7-day TTL + +#### Scenario: Serve from cache when CDN is unavailable + +- **GIVEN** a valid cached `publishers/index.json` exists (age < 7 days) +- **WHEN** the CDN is unreachable +- **THEN** CLI SHALL serve the cached index without error +- **AND** SHALL proceed with install using cached publisher data + +#### Scenario: Stale cache warning + +- **GIVEN** the cached `publishers/index.json` is older than 7 days +- **WHEN** the CDN is unreachable +- **THEN** CLI SHALL surface a `[WARN] Publisher index is stale (>7 days); verification may be outdated` warning +- **AND** SHALL proceed with install using stale cache rather than hard-failing + +#### Scenario: Signature verification failure on fetched index + +- **GIVEN** a fetched `publishers/index.json` fails NOLD AI signature verification +- **WHEN** user installs a module +- **THEN** CLI SHALL reject the index and raise `PublisherIndexTamperError` +- **AND** SHALL NOT fall back to the tampered index even if a valid cache exists + +### Requirement: Resolve publisher_id to public key + +The CLI SHALL look up a publisher's public key from the cached/fetched index. 
+ +#### Scenario: Publisher found in index + +- **GIVEN** `module-package.yaml` contains a structured `publisher:` block with `publisher_id: pub_abc123` +- **WHEN** CLI processes the publisher block +- **THEN** CLI SHALL resolve `publisher_id` to the publisher's `public_key` in the index +- **AND** SHALL use that key for Ed25519 signature verification + +#### Scenario: Publisher not found in index + +- **GIVEN** `module-package.yaml` contains a `publisher_id` not present in the index +- **WHEN** CLI processes the publisher block +- **THEN** CLI SHALL treat the module as `unregistered` +- **AND** SHALL apply unregistered install policy (block unless `--trust-unregistered`) + +### Requirement: Backward-compatible dual-format publisher field + +The CLI SHALL accept both the legacy `publisher: nold-ai` string format and the structured `publisher:` block. + +#### Scenario: Legacy string format + +- **GIVEN** `module-package.yaml` contains `publisher: nold-ai` (string, from module-migration-02) +- **WHEN** CLI processes the publisher field +- **THEN** CLI SHALL infer `tier: official` and proceed through the official validation path +- **AND** SHALL NOT raise an error or warning about the legacy format + +#### Scenario: Structured block format + +- **GIVEN** `module-package.yaml` contains a structured `publisher:` block +- **WHEN** CLI processes the publisher field +- **THEN** CLI SHALL resolve publisher_id from the trust index and perform full attestation verification + +## Contract Requirements + +- `publisher_registry.fetch_publisher_index(trust_index_url: str) -> PublisherIndex` — `@require` trust_index_url is a non-empty HTTPS URL; `@ensure` result.nold_ai_signature is verified +- `publisher_registry.resolve_publisher(publisher_id: str, index: PublisherIndex) -> PublisherRecord | None` — `@require` publisher_id is non-empty; `@ensure` None returned (not raised) when publisher not found +- `@beartype` on all public functions in `trust/publisher_registry.py` diff --git 
a/openspec/changes/marketplace-03-publisher-identity/specs/trust-resolution/spec.md b/openspec/changes/marketplace-03-publisher-identity/specs/trust-resolution/spec.md new file mode 100644 index 00000000..c5d76d36 --- /dev/null +++ b/openspec/changes/marketplace-03-publisher-identity/specs/trust-resolution/spec.md @@ -0,0 +1,101 @@ +# trust-resolution Specification + +## Purpose + +Defines the trust tier resolution order, install-time enforcement, override flags, audit logging, and display of trust tiers in module search and info output. + +## ADDED Requirements + +### Requirement: Enforce trust tier resolution order at install time + +The CLI SHALL resolve effective trust tier and enforce install policy based on `official > verified > community > unregistered`. + +#### Scenario: Install official module without prompt + +- **GIVEN** a module with effective tier `official` +- **WHEN** user runs `specfact module install @nold-ai/specfact-codebase` +- **THEN** CLI SHALL install without any prompt or warning +- **AND** SHALL log `[INFO] Installing @nold-ai/specfact-codebase (official)` + +#### Scenario: Install verified module without prompt + +- **GIVEN** a module with effective tier `verified` +- **WHEN** user runs `specfact module install @mycompany/specfact-jira-sync` +- **THEN** CLI SHALL install without any prompt or warning +- **AND** SHALL log `[INFO] Installing @mycompany/specfact-jira-sync (verified)` + +#### Scenario: Install community module with warning prompt + +- **GIVEN** a module with effective tier `community` +- **WHEN** user runs `specfact module install @devuser/specfact-lint-rules` +- **THEN** CLI SHALL display a `[WARN] @devuser/specfact-lint-rules is community-verified (publisher identity confirmed, content not reviewed by NOLD AI)` +- **AND** SHALL prompt: `Install anyway? 
[y/N]` +- **AND** SHALL abort on `N` or no input + +#### Scenario: Install community module with --trust-community flag + +- **GIVEN** a module with effective tier `community` +- **AND** user passes `--trust-community` flag +- **WHEN** user runs `specfact module install @devuser/specfact-lint-rules --trust-community` +- **THEN** CLI SHALL install without prompt +- **AND** SHALL append to `~/.specfact/module-audit.log`: `timestamp, @devuser/specfact-lint-rules, community, installed, --trust-community` + +#### Scenario: Block unregistered module + +- **GIVEN** a module with effective tier `unregistered` (publisher not in index) +- **WHEN** user runs `specfact module install some/unregistered-module` +- **THEN** CLI SHALL display `[ERROR] some/unregistered-module is not registered in the NOLD AI trust index. Use --trust-unregistered to override.` +- **AND** SHALL exit with non-zero status code + +#### Scenario: Install unregistered module with --trust-unregistered flag + +- **GIVEN** a module with effective tier `unregistered` +- **AND** user passes `--trust-unregistered` flag +- **WHEN** user runs `specfact module install some/unregistered-module --trust-unregistered` +- **THEN** CLI SHALL install with a prominent warning: `[WARN] Installing unregistered module — NOLD AI has not verified this publisher` +- **AND** SHALL append to `~/.specfact/module-audit.log`: `timestamp, some/unregistered-module, unregistered, installed, --trust-unregistered` + +### Requirement: Display trust tier in search output + +The CLI SHALL show tier badges in `specfact module search` results. 
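The install scenarios above reduce to a small decision table. A sketch assuming plain string tiers and boolean flags (the real `resolver` API takes the `InstallFlags` and `InstallDecision` models):

```python
TIER_RANK = {"official": 3, "verified": 2, "community": 1, "unregistered": 0}


def resolve_effective_tier(publisher_tier: str, registry_tier: str) -> str:
    """Effective tier is the lower-ranked of the two inputs."""
    return min(publisher_tier, registry_tier, key=TIER_RANK.__getitem__)


def enforce_install_policy(tier: str, trust_community: bool = False,
                           trust_unregistered: bool = False) -> str:
    """Return one of 'install', 'prompt', 'block' per the scenarios."""
    if tier in ("official", "verified"):
        return "install"
    if tier == "community":
        return "install" if trust_community else "prompt"
    if tier == "unregistered":
        return "install" if trust_unregistered else "block"
    raise ValueError(f"unknown tier: {tier!r}")
```

Keeping the tier ranking in one dict means the "minimum of the two tiers" rule and the policy dispatch cannot drift apart.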
+ +#### Scenario: Search results include tier badges + +- **GIVEN** multiple modules from different tiers +- **WHEN** user runs `specfact module search backlog` +- **THEN** each result line SHALL include a tier badge: + - `[official]` for NOLD AI official modules + - `[verified]` for domain-verified publisher modules + - `[community]` for GitHub-identity-only publisher modules + - `[unregistered]` for modules not in the trust index + +### Requirement: Display trust tier in info output + +The CLI SHALL show tier detail in `specfact module info <module>`. + +#### Scenario: Module info shows publisher tier detail + +- **GIVEN** a module `@mycompany/specfact-jira-sync` with tier `verified` +- **WHEN** user runs `specfact module info @mycompany/specfact-jira-sync` +- **THEN** CLI SHALL display: + - `Publisher: mycompany (verified ✅)` + - `Publisher ID: pub_abc123` + - `Trust: registry endorsed (NOLD AI countersig verified)` + +### Requirement: Audit log is append-only and human-readable + +The CLI SHALL maintain an append-only audit log of all module install decisions. 
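A sketch of the append-only writer matching the log format in the scenario that follows (the flat-argument signature is illustrative; the real API takes an `AuditEntry` model):

```python
from __future__ import annotations

from datetime import datetime, timezone
from pathlib import Path


def append_audit_log(log_path: Path, module_handle: str, tier: str,
                     action: str, flag: str | None = None) -> None:
    """Append one pipe-delimited line; opening in 'a' mode never truncates."""
    ts = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    line = f"{ts} | {module_handle} | {tier} | {action} | {flag or 'none'}\n"
    with log_path.open("a", encoding="utf-8") as fh:
        fh.write(line)
```

Append mode plus one line per decision keeps the log trivially greppable and makes accidental truncation impossible from this code path.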
+ +#### Scenario: Audit log entry format + +- **GIVEN** a module install (any tier) +- **WHEN** install completes or is overridden +- **THEN** CLI SHALL append one line to `~/.specfact/module-audit.log`: + - Format: `ISO8601_UTC | module_handle | tier | action | flag_used_or_none` + - Example: `2026-02-27T12:00:00Z | @devuser/specfact-lint-rules | community | installed | --trust-community` + +## Contract Requirements + +- `resolver.resolve_effective_tier(publisher_tier: str, registry_tier: str) -> str` — `@require` both inputs in allowed tier set; `@ensure` result is minimum of the two tiers (by rank order); `@beartype` +- `resolver.enforce_install_policy(module_handle: str, tier: str, flags: InstallFlags) -> InstallDecision` — `@require` tier in known set; `@ensure` result is one of `{install, prompt, block}`; `@beartype` +- `resolver.append_audit_log(entry: AuditEntry) -> None` — `@require` entry timestamp is UTC; `@beartype` diff --git a/openspec/changes/marketplace-03-publisher-identity/tasks.md b/openspec/changes/marketplace-03-publisher-identity/tasks.md new file mode 100644 index 00000000..6121cf27 --- /dev/null +++ b/openspec/changes/marketplace-03-publisher-identity/tasks.md @@ -0,0 +1,277 @@ +# Implementation Tasks: marketplace-03-publisher-identity + +## TDD / SDD Order (Enforced) + +Per config.yaml, tests MUST come before implementation for any behavior-changing task. Order: + +1. Spec deltas (already created in this change) +2. Tests from spec scenarios (expect failure — no implementation yet) +3. Code implementation (until tests pass and behavior satisfies spec) +4. Evidence recorded in `openspec/changes/marketplace-03-publisher-identity/TDD_EVIDENCE.md` + +Do not implement production code until tests exist and have been run (expecting failure). + +--- + +## 1. 
Create git worktree for this change + +- [ ] 1.1 Fetch latest and create a worktree with a new branch from `origin/dev` + - [ ] 1.1.1 `git fetch origin` + - [ ] 1.1.2 `gh issue develop 327 --repo nold-ai/specfact-cli --name feature/marketplace-03-publisher-identity` + - [ ] 1.1.3 `git fetch origin && git worktree add ../specfact-cli-worktrees/feature/marketplace-03-publisher-identity feature/marketplace-03-publisher-identity` + - [ ] 1.1.4 `cd ../specfact-cli-worktrees/feature/marketplace-03-publisher-identity` + - [ ] 1.1.5 `python -m venv .venv && source .venv/bin/activate && pip install -e ".[dev]"` + - [ ] 1.1.6 `git branch --show-current` (verify: `feature/marketplace-03-publisher-identity`) + +> All subsequent tasks run inside the worktree directory. +> **Hard blocker**: marketplace-02 (#215) must be implemented and merged before this branch is opened for code work. Branch can be created and spec/test work begun, but code changes to `custom_registries.py` integration points must wait for marketplace-02. + +--- + +## 2. Write spec deltas and review (SDD) + +- [ ] 2.1 Review spec files created in this change + - [ ] 2.1.1 Review `openspec/changes/marketplace-03-publisher-identity/specs/publisher-identity/spec.md` + - [ ] 2.1.2 Review `openspec/changes/marketplace-03-publisher-identity/specs/module-trust-chain/spec.md` + - [ ] 2.1.3 Review `openspec/changes/marketplace-03-publisher-identity/specs/trust-resolution/spec.md` + - [ ] 2.1.4 Review `openspec/changes/marketplace-03-publisher-identity/specs/module-security/spec.md` (delta) + - [ ] 2.1.5 Run `openspec validate marketplace-03-publisher-identity --strict` and fix any issues + - [ ] 2.1.6 Run `hatch run yaml-lint` to validate YAML/markdown + +--- + +## 3. 
Create Pydantic models for trust layer (TDD) + +- [ ] 3.1 Write tests for trust layer data models (expect failure) + - [ ] 3.1.1 Create `tests/unit/trust/test_models.py` + - [ ] 3.1.2 Test `PublisherRecord` Pydantic model: required fields, validation + - [ ] 3.1.3 Test `PublisherIndex` model: publishers list, schema_version, nold_ai_signature + - [ ] 3.1.4 Test `AuditEntry` model: timestamp UTC enforcement, field presence + - [ ] 3.1.5 Test `InstallFlags` model: trust_community, trust_unregistered booleans + - [ ] 3.1.6 Run tests — expect failures (models do not exist) + - [ ] 3.1.7 Record failing evidence in `TDD_EVIDENCE.md` + +- [ ] 3.2 Implement trust layer models + - [ ] 3.2.1 Create `src/specfact_cli/trust/__init__.py` + - [ ] 3.2.2 Create `src/specfact_cli/trust/models.py` with `PublisherRecord`, `PublisherIndex`, `AuditEntry`, `InstallFlags`, `InstallDecision`, `ValidationResult` + - [ ] 3.2.3 All models use `Pydantic BaseModel` with `Field(...)` and descriptions + - [ ] 3.2.4 Run tests — expect pass + - [ ] 3.2.5 Record passing evidence in `TDD_EVIDENCE.md` + +--- + +## 4. 
Implement key_store.py (TDD) + +- [ ] 4.1 Write tests for key_store (expect failure) + - [ ] 4.1.1 Create `tests/unit/trust/test_key_store.py` + - [ ] 4.1.2 Test `get_root_public_key()` returns Ed25519PublicKey (use test fixture key) + - [ ] 4.1.3 Test no network call is made during key load + - [ ] 4.1.4 Test key is loadable offline (no network mock needed) + - [ ] 4.1.5 Run tests — expect failures + - [ ] 4.1.6 Record failing evidence in `TDD_EVIDENCE.md` + +- [ ] 4.2 Implement key_store.py + - [ ] 4.2.1 Create `src/specfact_cli/trust/key_store.py` + - [ ] 4.2.2 Embed NOLD AI test root public key (base64 Ed25519) as module constant for test; production key injected at build time via `hatch build` hook or environment variable + - [ ] 4.2.3 Implement `get_root_public_key() -> Ed25519PublicKey` + - [ ] 4.2.4 Add `@beartype` and `@ensure` result is not None + - [ ] 4.2.5 Run tests — expect pass + - [ ] 4.2.6 Record passing evidence in `TDD_EVIDENCE.md` + +--- + +## 5. Implement publisher_registry.py (TDD) + +- [ ] 5.1 Write tests for publisher_registry (expect failure) + - [ ] 5.1.1 Create `tests/unit/trust/test_publisher_registry.py` + - [ ] 5.1.2 Test `fetch_publisher_index()` fetches and verifies signature (mock HTTP + crypto) + - [ ] 5.1.3 Test cache hit returns cached index without HTTP call + - [ ] 5.1.4 Test stale cache (>7 days) returns stale with warning when CDN offline + - [ ] 5.1.5 Test tampered index raises `PublisherIndexTamperError` + - [ ] 5.1.6 Test `resolve_publisher()` returns `PublisherRecord` when found + - [ ] 5.1.7 Test `resolve_publisher()` returns `None` when not found (no raise) + - [ ] 5.1.8 Run tests — expect failures + - [ ] 5.1.9 Record failing evidence in `TDD_EVIDENCE.md` + +- [ ] 5.2 Implement publisher_registry.py + - [ ] 5.2.1 Create `src/specfact_cli/trust/publisher_registry.py` + - [ ] 5.2.2 Implement `fetch_publisher_index(trust_index_url: str, cache_dir: Path) -> PublisherIndex` + - [ ] 5.2.3 Cache: write to 
`~/.specfact/cache/publishers-index.json` with mtime TTL check + - [ ] 5.2.4 Implement `resolve_publisher(publisher_id: str, index: PublisherIndex) -> PublisherRecord | None` + - [ ] 5.2.5 Add `@require`, `@ensure`, `@beartype` on all public functions + - [ ] 5.2.6 Run tests — expect pass + - [ ] 5.2.7 Record passing evidence in `TDD_EVIDENCE.md` + +--- + +## 6. Implement resolver.py (TDD) + +- [ ] 6.1 Write tests for trust resolver (expect failure) + - [ ] 6.1.1 Create `tests/unit/trust/test_resolver.py` + - [ ] 6.1.2 Test `resolve_effective_tier()`: official+verified=official, verified+community=community, community+unregistered=unregistered, etc. + - [ ] 6.1.3 Test `enforce_install_policy()`: official→install, verified→install, community(no flag)→prompt, community(--trust-community)→install, unregistered(no flag)→block, unregistered(--trust-unregistered)→install + - [ ] 6.1.4 Test `append_audit_log()` appends correct line format to log file + - [ ] 6.1.5 Run tests — expect failures + - [ ] 6.1.6 Record failing evidence in `TDD_EVIDENCE.md` + +- [ ] 6.2 Implement resolver.py + - [ ] 6.2.1 Create `src/specfact_cli/trust/resolver.py` + - [ ] 6.2.2 Implement `resolve_effective_tier(publisher_tier: str, registry_tier: str) -> str` (min by rank) + - [ ] 6.2.3 Implement `enforce_install_policy(module_handle: str, tier: str, flags: InstallFlags) -> InstallDecision` + - [ ] 6.2.4 Implement `append_audit_log(entry: AuditEntry) -> None` (append-only to `~/.specfact/module-audit.log`) + - [ ] 6.2.5 Add `@require`, `@ensure`, `@beartype` on all public functions + - [ ] 6.2.6 Run tests — expect pass + - [ ] 6.2.7 Record passing evidence in `TDD_EVIDENCE.md` + +--- + +## 7. 
Extend crypto_validator.py (TDD) + +- [ ] 7.1 Write tests for extended crypto_validator (expect failure) + - [ ] 7.1.1 Create or extend `tests/unit/registry/test_crypto_validator.py` + - [ ] 7.1.2 Test `validate_module()` official tier path is unchanged (regression test — must keep passing) + - [ ] 7.1.3 Test `validate_module()` verified tier: valid publisher sig → pass + - [ ] 7.1.4 Test `validate_module()` verified tier: invalid publisher sig → `PublisherSignatureMismatchError` + - [ ] 7.1.5 Test `validate_module()` community tier: valid publisher sig → pass (no countersig required) + - [ ] 7.1.6 Test `validate_registry_endorsement()`: valid NOLD AI countersig → True + - [ ] 7.1.7 Test `validate_registry_endorsement()`: tampered countersig → `RegistryEndorsementTamperError` + - [ ] 7.1.8 Test `validate_module()` unknown tier → `UnknownTierError` + - [ ] 7.1.9 Run tests — expect failures for new tests; official tests must continue to pass + - [ ] 7.1.10 Record failing evidence in `TDD_EVIDENCE.md` + +- [ ] 7.2 Extend crypto_validator.py + - [ ] 7.2.1 Add `verified` and `community` tier branches to `validate_module()` (match/case dispatch) + - [ ] 7.2.2 Add `validate_publisher_attestation(bundle_sha256: str, publisher_record: PublisherRecord, publisher_signature: str) -> bool` + - [ ] 7.2.3 Add `validate_registry_endorsement(entry: RegistryEntry, root_key: Ed25519PublicKey) -> bool` + - [ ] 7.2.4 Add `UnknownTierError`, `PublisherSignatureMismatchError`, `RegistryEndorsementTamperError` exceptions + - [ ] 7.2.5 Do NOT modify the `official` branch (verified by regression tests) + - [ ] 7.2.6 Add `@require`, `@ensure`, `@beartype` on new public functions + - [ ] 7.2.7 Run tests — all must pass including regression tests for official path + - [ ] 7.2.8 Record passing evidence in `TDD_EVIDENCE.md` + +--- + +## 8.
Integrate trust layer into module_registry install/search/info (TDD) + +- [ ] 8.1 Write integration tests for module_registry with trust (expect failure) + - [ ] 8.1.1 Create `tests/integration/test_module_trust_integration.py` + - [ ] 8.1.2 Test install official module: no prompt, no audit log entry + - [ ] 8.1.3 Test install verified module: no prompt, no audit log entry + - [ ] 8.1.4 Test install community module without flag: prompt shown, abort on N + - [ ] 8.1.5 Test install community module with `--trust-community`: install without prompt, audit log entry created + - [ ] 8.1.6 Test install unregistered module without flag: blocked with error + - [ ] 8.1.7 Test install unregistered module with `--trust-unregistered`: installs with warning, audit log entry + - [ ] 8.1.8 Test `specfact module search` output contains tier badges + - [ ] 8.1.9 Test `specfact module info` output contains publisher tier detail + - [ ] 8.1.10 Run tests — expect failures + - [ ] 8.1.11 Record failing evidence in `TDD_EVIDENCE.md` + +- [ ] 8.2 Integrate trust layer into module_registry + - [ ] 8.2.1 Modify `src/specfact_cli/modules/module_registry/src/` install command: call `resolver.enforce_install_policy()` before download + - [ ] 8.2.2 Add `--trust-community` and `--trust-unregistered` flags to install command + - [ ] 8.2.3 Modify `specfact module search` output: add tier badge column + - [ ] 8.2.4 Modify `specfact module info` output: add publisher tier detail block + - [ ] 8.2.5 Run integration tests — expect pass + - [ ] 8.2.6 Record passing evidence in `TDD_EVIDENCE.md` + +--- + +## 9. 
Extend scripts/publish-module.py with registry endorsement signing + +- [ ] 9.1 Write tests for publish-module registry endorsement step (expect failure) + - [ ] 9.1.1 Create or extend `tests/unit/scripts/test_publish_module.py` + - [ ] 9.1.2 Test that endorsement signing step is called after publisher signing + - [ ] 9.1.3 Test that `registry_signature` field is added to index entry + - [ ] 9.1.4 Run tests — expect failures + - [ ] 9.1.5 Record failing evidence in `TDD_EVIDENCE.md` + +- [ ] 9.2 Extend scripts/publish-module.py + - [ ] 9.2.1 Add NOLD AI countersig step after existing publisher signing + - [ ] 9.2.2 Sign over `name + version + publisher_id + checksum_sha256` (canonical JSON, sorted keys) + - [ ] 9.2.3 Write `registry_signature` field into the registry index entry + - [ ] 9.2.4 Add `scripts/sign-publishers.py` (signs `publishers/index.json` with NOLD AI key; run by CI) + - [ ] 9.2.5 Run tests — expect pass + - [ ] 9.2.6 Record passing evidence in `TDD_EVIDENCE.md` + +--- + +## 10. Module signing verification quality gate + +- [ ] 10.1 `hatch run ./scripts/verify-modules-signature.py --require-signature` +- [ ] 10.2 If verification fails after module changes, re-sign affected manifests: + - [ ] 10.2.1 `hatch run python scripts/sign-modules.py --key-file <private-key.pem> <module-package.yaml ...>` + - [ ] 10.2.2 Bump module version before re-signing (patch increment) + - [ ] 10.2.3 Re-run verification until fully green + +--- + +## 11. Quality gates + +- [ ] 11.1 `hatch run format` (ruff format + autofix) +- [ ] 11.2 `hatch run type-check` (basedpyright strict) +- [ ] 11.3 `hatch run lint` +- [ ] 11.4 `hatch run yaml-lint` +- [ ] 11.5 `hatch run contract-test` +- [ ] 11.6 `hatch test --cover -v` (full suite, all trust/ tests must pass) + +--- + +## 12. 
Documentation research and review + +- [ ] 12.1 Identify affected documentation: + - [ ] 12.1.1 `docs/guides/publisher-trust.md` (new — publisher tiers, install flags, trust index URL override) + - [ ] 12.1.2 `docs/reference/module-commands.md` (update: --trust-community, --trust-unregistered flags; tier badges in search/info) + - [ ] 12.1.3 `docs/_layouts/default.html` (add publisher-trust guide to sidebar navigation) + - [ ] 12.1.4 `README.md` (add brief mention of trust tier system in module ecosystem section) +- [ ] 12.2 Write/update each affected doc + - [ ] 12.2.1 Create `docs/guides/publisher-trust.md` with Jekyll front-matter (layout, title, permalink, description) + - [ ] 12.2.2 Update `docs/reference/module-commands.md` — add new flags, trust display examples + - [ ] 12.2.3 Update `docs/_layouts/default.html` — add publisher-trust to Guides sidebar section +- [ ] 12.3 Verify front-matter is correct on all new/edited pages + +--- + +## 13. Version and changelog + +- [ ] 13.1 Determine version bump: this is a feature branch → minor increment + - [ ] 13.1.1 Check current version in `pyproject.toml` + - [ ] 13.1.2 Propose increment (e.g. `0.38.2 → 0.39.0`) and confirm with user before applying +- [ ] 13.2 Sync version across `pyproject.toml`, `setup.py`, `src/specfact_cli/__init__.py` +- [ ] 13.3 Add `CHANGELOG.md` entry under new `[X.Y.Z] - YYYY-MM-DD` section: + - `Added: Publisher identity trust layer (trust/, publisher_registry, resolver, key_store)` + - `Added: Module trust chain verification (publisher attestation + registry endorsement countersig)` + - `Added: Trust tier display in module search and info output (--trust-community, --trust-unregistered flags)` + +--- + +## 14. 
GitHub issue creation + +- [x] 14.1 GitHub issue already created: [#327](https://github.com/nold-ai/specfact-cli/issues/327) +- [x] 14.2 Linked to project board +- [x] 14.3 `proposal.md` Source Tracking updated with issue #327 +- [x] 14.4 `CHANGE_ORDER.md` updated with marketplace-03 entry and GitHub issue #327 + +--- + +## 15. Create PR + +- [ ] 15.1 Commit all changes from inside the worktree: + - [ ] 15.1.1 `git add src/specfact_cli/trust/ src/specfact_cli/registry/crypto_validator.py src/specfact_cli/modules/module_registry/src/ scripts/ docs/ openspec/ pyproject.toml setup.py CHANGELOG.md` + - [ ] 15.1.2 `git commit -S -m "feat: publisher identity and module trust chain (marketplace-03)"` + - [ ] 15.1.3 `git push -u origin feature/marketplace-03-publisher-identity` +- [ ] 15.2 Create PR body from `.github/pull_request_template.md` + - Include: `Fixes nold-ai/specfact-cli#<issue-number>`, OpenSpec change ID: `marketplace-03-publisher-identity` +- [ ] 15.3 `gh pr create --repo nold-ai/specfact-cli --base dev --head feature/marketplace-03-publisher-identity --title "feat: publisher identity and module trust chain" --body-file /tmp/pr-marketplace-03.md` +- [ ] 15.4 `gh project item-add 1 --owner nold-ai --url <PR_URL>` +- [ ] 15.5 Verify Development link on issue; set project status to "In Progress" + +--- + +## Post-merge cleanup (after PR is merged) + +- [ ] Return to primary checkout: `cd /home/dom/git/nold-ai/specfact-cli` +- [ ] `git fetch origin` +- [ ] `git worktree remove ../specfact-cli-worktrees/feature/marketplace-03-publisher-identity` +- [ ] `git branch -d feature/marketplace-03-publisher-identity` +- [ ] `git worktree prune` +- [ ] (Optional) `git push origin --delete feature/marketplace-03-publisher-identity` diff --git a/openspec/changes/marketplace-04-revocation/design.md b/openspec/changes/marketplace-04-revocation/design.md new file mode 100644 index 00000000..9049e255 --- /dev/null +++ b/openspec/changes/marketplace-04-revocation/design.md @@ 
-0,0 +1,133 @@ +# Design: Publisher and Module Revocation Infrastructure + +## Context + +After marketplace-03 introduces publisher attestation, the first external publisher could theoretically be onboarded. However, without revocation infrastructure, a key compromise or policy violation has no response mechanism. This change closes that gap. + +**Current State:** + +- No revocation mechanism exists in any form +- Revoked publishers/modules cannot be prevented from installing +- No automated scan of bundle contents on publication + +**Constraints:** + +- Revocation check must be fast: single HTTP fetch + signature verify, cached (1h TTL for revocation vs 7d for publisher index) +- Grace window logic must be centralized in `trust/revocation.py` — no per-module special cases +- Hard blocks (security_incident, expiry after window) must not be bypassable by user-facing flags +- AST scan must run as part of `scripts/publish-module.py` invocation — no separate opt-in + +## Goals / Non-Goals + +**Goals:** + +- Publisher revocation: signed `publishers/revoked.json` fetched and enforced by CLI +- Module revocation: per-module revocation records enforced at install and on invocation +- Grace window enforcement by reason type (immediate / 30d / 14d / 7d) +- CI AST scan on bundle publication +- Policy document for users and publishers + +**Non-Goals:** + +- Automatic module uninstall on revocation (warn-only for installed modules; user controls uninstall) +- Real-time revocation push/webhooks (polling cache is sufficient for Phase 1) +- OCSP-style stapling (over-engineered for current scale) + +## Architecture + +### Revocation Checker (`src/specfact_cli/trust/revocation.py`) + +```python +# Key public API +def check_publisher_revocation(publisher_id: str, revoked: PublisherRevocationIndex) -> RevocationStatus +def check_module_revocation(module_name: str, version: str, revoked: ModuleRevocationIndex) -> RevocationStatus +def enforce_revocation_policy(status: RevocationStatus, 
flags: RevocationFlags) -> RevocationDecision +def fetch_revocation_indexes(trust_index_url: str, cache_dir: Path) -> tuple[PublisherRevocationIndex, ModuleRevocationIndex] +``` + +### Grace Window Policy + +Centralized constant mapping in `revocation.py`: + +```python +GRACE_WINDOWS: dict[str, GraceWindowPolicy] = { + "security_incident": GraceWindowPolicy(grace_days=0, install_action="hard_block", existing_action="warn"), + "policy_violation": GraceWindowPolicy(grace_days=30, install_action="warn", existing_action="warn", post_expiry="hard_block"), + "publisher_request": GraceWindowPolicy(grace_days=7, install_action="warn", existing_action="warn_soft"), + "api_incompatibility": GraceWindowPolicy(grace_days=14, install_action="warn_suggest_newer", existing_action="warn_suggest_newer"), +} +``` + +### Install Flow with Revocation + +```text +specfact module install @mycompany/specfact-jira-sync + │ + ├─ [marketplace-03] trust/publisher_registry: resolve publisher + ├─ trust/revocation: fetch revocation indexes (1h cache) + ├─ check_publisher_revocation(publisher_id) + │ ├─ NOT revoked → continue + │ └─ REVOKED: reason=security_incident → hard_block (no override) + │ reason=policy_violation, in window → warn + prompt + │ reason=policy_violation, past window → hard_block + ├─ check_module_revocation(name, version) + │ └─ similar grace window enforcement + └─ proceed with trust tier resolution (marketplace-03) +``` + +### Invocation Warning for Installed Revoked Modules + +On any `specfact module <command>` invocation, the module_registry checks the revocation index for all loaded modules and surfaces warnings: + +```text +⚠ WARNING: specfact-jira-sync@1.0.0 has been revoked (security_incident). + Reason: Remote code execution vulnerability — update or uninstall immediately. 
+ Run: specfact module update specfact-jira-sync + specfact module uninstall specfact-jira-sync +``` + +This check is non-blocking (warn-only) for installed modules to prevent breaking existing workflows. + +### CI AST Scan (`.github/workflows/scan-bundles.yml`) + +Triggered by: `push` events to `specfact-cli-modules` (or as pre-publication step in `publish-module.py`). + +Checks (all via stdlib `ast` module — no external dependencies): + +1. Obfuscated/minified code: detect single-character variable names at module level or `exec(base64.b64decode(...))` patterns +2. `subprocess.run(..., shell=True)` combined with external URL strings +3. Network calls on import: `socket`, `urllib`, `requests` calls at module top level (not inside functions) +4. Known-bad patterns: `eval(`, `exec(` applied to remote strings + +Failed scans create a GitHub issue in `nold-ai/specfact-cli-internal` (internal) and block the publication PR. + +## Decisions + +### Decision 1: Revocation Cache TTL + +**Choice**: 1 hour TTL (vs 7 days for publisher index) + +**Rationale:** + +- Revocation is time-sensitive (especially security_incident: 0-day grace) +- 1h allows near-real-time propagation via CDN with short TTL +- Offline: serve stale with warning (same pattern as publisher index) + +### Decision 2: Hard Block Bypassability + +**Choice**: `security_incident` revocations are NEVER bypassable by user flags. + +**Rationale:** + +- A security incident with 0-day grace means immediate hard block is the point — no flag override defeats this +- Other reason codes respect `--force` for emergency internal use (internal flag, not user-facing) + +### Decision 3: AST Scan Scope + +**Choice**: Stdlib `ast` only, no external analysis tools. 
+ +**Rationale:** + +- Offline-first: no network call during scan +- No additional CI dependencies +- Covers the highest-risk patterns (remote code exec, obfuscation) adequately for Phase 1 diff --git a/openspec/changes/marketplace-04-revocation/proposal.md b/openspec/changes/marketplace-04-revocation/proposal.md new file mode 100644 index 00000000..fd5fa02e --- /dev/null +++ b/openspec/changes/marketplace-04-revocation/proposal.md @@ -0,0 +1,56 @@ +# Change: Publisher and Module Revocation Infrastructure + +## Why + +marketplace-03 introduces publisher attestation and trust tiers but provides no mechanism to revoke a compromised publisher or a vulnerable module. Revocation infrastructure must exist **before any external publisher is onboarded** — otherwise a compromised key or malicious module cannot be removed from circulation once installed. + +The revocation system follows a CA model: NOLD AI signs revocation entries in `publishers/revoked.json` and per-module revocation records in `registry/modules/revoked.json`. The CLI checks these on every install and surfaces warnings for already-installed revoked modules. 
+ +## What Changes + +- **NEW**: `src/specfact_cli/trust/revocation.py` — revocation checker: fetch and cache `publishers/revoked.json` and module revocation records; enforce grace window policy by reason type +- **MODIFY**: `src/specfact_cli/modules/module_registry/src/` — pre-install revocation check; post-install revocation warning on `specfact module` invocation if installed module is revoked +- **NEW**: `.github/workflows/scan-bundles.yml` — CI AST scan for obfuscated code, shell=True subprocess calls, suspicious network-on-import patterns; blocks publication on failure +- **NEW**: `scripts/revoke-publisher.py` — signs revocation entry in `publishers/revoked.json` with NOLD AI key +- **NEW**: `scripts/revoke-module.py` — signs per-module revocation record with NOLD AI key +- **NEW**: `docs/trust/grace-window-policy.md` — user-facing policy document; ToS-linked +- **MODIFY**: `docs/reference/module-commands.md` — document revocation warning messages and grace window behaviour + +**Backward compatibility**: Additive. Non-revoked modules see no change in install behaviour. + +**Rollback plan**: Disable revocation check flag (`--skip-revocation-check`, internal-only CLI flag for emergency bypass); all install flows unchanged without the revocation pre-flight. 
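
The grace window enforcement named above reduces to a pure function of reason code and elapsed time. A sketch only; the action names and windows mirror the `GRACE_WINDOWS` table in this change's design, and the function name is hypothetical:

```python
from datetime import datetime, timedelta, timezone

# Grace windows and actions by reason code, mirroring the design's GRACE_WINDOWS table
GRACE_DAYS = {"policy_violation": 30, "publisher_request": 7, "api_incompatibility": 14}
IN_WINDOW_ACTION = {
    "policy_violation": "warn_prompt",
    "publisher_request": "warn_prompt",
    "api_incompatibility": "warn_suggest_newer",
}
POST_EXPIRY_ACTION = {
    "policy_violation": "hard_block",
    "publisher_request": "warn_prompt",  # soft block: still installable with confirmation
    "api_incompatibility": "warn_prompt",
}


def install_action(reason: str, revoked_at: datetime, now: datetime) -> str:
    """Map a revocation to an install-time action; unknown reasons get the most restrictive policy."""
    if reason not in GRACE_DAYS:
        return "hard_block"  # security_incident and any unknown reason: immediate hard block
    if now - revoked_at <= timedelta(days=GRACE_DAYS[reason]):
        return IN_WINDOW_ACTION[reason]
    return POST_EXPIRY_ACTION[reason]
```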
+ +## Capabilities + +### New Capabilities + +- `publisher-revocation`: fetch, cache, and verify `publishers/revoked.json`; enforce hard block or grace window per reason type; warn on installed modules from revoked publishers +- `module-revocation`: per-module revocation records in `registry/modules/revoked.json`; grace window enforcement by reason; prominent warning on invocation of already-installed revoked modules +- `grace-window-policy`: by-reason grace windows (immediate / 30d / 14d / 7d); hard block after expiry; CLI behaviour during window per reason code +- `automated-scan`: CI GitHub Actions workflow scanning published bundles for obfuscated code, shell=True subprocess + URL combos, network-on-import patterns, known-bad eval/exec patterns + +## Impact + +- **Affected code**: + - `src/specfact_cli/trust/revocation.py` (new: revocation checker + cache) + - `src/specfact_cli/modules/module_registry/src/` (modify: pre-install check + invocation warning) + - `.github/workflows/scan-bundles.yml` (new: AST scan CI) + - `scripts/revoke-publisher.py` (new: signing script) + - `scripts/revoke-module.py` (new: signing script) +- **Affected specs**: New specs for `publisher-revocation`, `module-revocation`, `grace-window-policy`, `automated-scan` +- **Affected documentation**: + - `docs/trust/grace-window-policy.md` (new) + - `docs/reference/module-commands.md` (update: revocation warning messages) + - `docs/_layouts/default.html` (navigation update: add trust/ section) +- **External dependencies**: `ast` (stdlib, already available); no new external libraries +- **Hard dependency**: marketplace-03 (`publisher-identity`, `trust-resolution`) must land first — revocation builds on the trust layer and publisher index + +--- + +## Source Tracking + +<!-- source_repo: nold-ai/specfact-cli --> +- **GitHub Issue**: #328 +- **Issue URL**: <https://github.com/nold-ai/specfact-cli/issues/328> +- **Repository**: nold-ai/specfact-cli +- **Last Synced Status**: proposed diff --git 
a/openspec/changes/marketplace-04-revocation/specs/automated-scan/spec.md b/openspec/changes/marketplace-04-revocation/specs/automated-scan/spec.md new file mode 100644 index 00000000..4e908fa3 --- /dev/null +++ b/openspec/changes/marketplace-04-revocation/specs/automated-scan/spec.md @@ -0,0 +1,68 @@ +# automated-scan Specification + +## Purpose + +Defines the CI AST-based scan of published module bundles for high-risk patterns. Runs as part of `scripts/publish-module.py` and as a standalone GitHub Actions workflow. + +## ADDED Requirements + +### Requirement: Block publication on obfuscated code detection + +The scan system SHALL detect and block publication of bundles containing obfuscated code patterns using stdlib `ast` analysis. + +#### Scenario: Obfuscated code detected + +- **GIVEN** a bundle contains `exec(base64.b64decode(...))` or single-char variable names at module level (≥ 80% of top-level assignments) +- **WHEN** `scripts/publish-module.py` is run +- **THEN** SHALL abort publication with `[ERROR] Obfuscated code pattern detected in <file>. Publication blocked.` +- **AND** SHALL create a GitHub issue in `nold-ai/specfact-cli-internal` with file path and line + +### Requirement: Block publication on shell=True + external URL pattern + +The scan system SHALL detect and block `subprocess.run(shell=True)` combined with external URLs in the same file. + +#### Scenario: subprocess with shell=True and external URL + +- **GIVEN** a bundle file contains `subprocess.run(..., shell=True)` or `subprocess.Popen(..., shell=True)` AND the same file contains a string literal matching an HTTP/HTTPS URL +- **WHEN** `scripts/publish-module.py` is run +- **THEN** SHALL abort publication with `[ERROR] Suspicious subprocess(shell=True) + URL pattern in <file>. 
Manual review required.`
+- **AND** SHALL create a security issue in `nold-ai/specfact-cli-internal`
+
+### Requirement: Warn on network calls at module import scope
+
+The scan system SHALL warn (not hard-block) on network API calls that appear at module top level rather than inside functions, and SHALL require explicit acknowledgement before publication proceeds.
+
+#### Scenario: Network call at top-level (not inside a function)
+
+- **GIVEN** a bundle file contains `socket.connect()`, `urllib.request.urlopen()`, or `requests.get()` at module top level (not inside `def`, `class`, `if __name__`, etc.)
+- **WHEN** `scripts/publish-module.py` is run
+- **THEN** SHALL warn: `[WARN] Network call at module import scope in <file>:<line>. Review before publication.`
+- **AND** SHALL NOT hard-block the finding itself (an import-time health check is legitimate in some contexts)
+- **AND** SHALL require the explicit `--allow-import-network` flag to proceed with publication (a soft block: the warning halts publication only while the flag is absent)
+
+### Requirement: Block on eval/exec applied to remote strings
+
+The scan system SHALL detect and block `eval()` or `exec()` applied to variables assigned from network responses.
+
+#### Scenario: eval or exec on remote string
+
+- **GIVEN** a bundle file contains `eval(<variable>)` or `exec(<variable>)` where `<variable>` is assigned from a network response (urllib, requests, socket read)
+- **WHEN** `scripts/publish-module.py` is run
+- **THEN** SHALL abort publication with `[ERROR] eval/exec on remote data in <file>. Publication blocked.`
+
+### Requirement: CI scan runs as GitHub Actions workflow
+
+The AST scan SHALL run automatically in CI on every bundle-modifying PR.
+ +#### Scenario: Scan on PR to specfact-cli-modules + +- **GIVEN** `.github/workflows/scan-bundles.yml` is configured +- **WHEN** a PR is opened against `specfact-cli-modules` that modifies any `*.py` file in a bundle +- **THEN** GitHub Actions SHALL run the AST scan +- **AND** SHALL fail the PR check if any hard-block pattern is detected +- **AND** SHALL annotate the PR with file path and line for each finding + +## Contract Requirements + +- `scan_bundle(bundle_path: Path) -> ScanReport` — `@require` bundle_path is an existing directory; `@ensure` result.findings is a list; `@beartype` +- `ScanReport.has_blocking_findings() -> bool` — `@ensure` result is True iff any finding has `severity == "block"`; `@beartype` diff --git a/openspec/changes/marketplace-04-revocation/specs/grace-window-policy/spec.md b/openspec/changes/marketplace-04-revocation/specs/grace-window-policy/spec.md new file mode 100644 index 00000000..bc2db887 --- /dev/null +++ b/openspec/changes/marketplace-04-revocation/specs/grace-window-policy/spec.md @@ -0,0 +1,59 @@ +# grace-window-policy Specification + +## Purpose + +Defines the by-reason grace window policy governing CLI behaviour during and after revocation grace windows. Centralised in `trust/revocation.py` — no per-module special cases. + +## ADDED Requirements + +### Requirement: Grace window policy is centralized and by-reason + +The CLI SHALL apply grace windows based only on revocation reason code. No per-publisher or per-module override is permitted in the CLI (NOLD AI may adjust reason in the signed record, which is the override mechanism). 
+ +#### Scenario: security_incident — immediate hard block + +- **GIVEN** a revocation entry with `reason: security_incident` +- **WHEN** CLI checks revocation at any point (install or invocation) +- **THEN** `grace_days = 0`: hard block on install, warn-only on existing invocation +- **AND** No flag combination can override the install block + +#### Scenario: policy_violation — 30-day warn window, then hard block + +- **GIVEN** a revocation entry with `reason: policy_violation` +- **WHEN** `now - revoked_at <= 30 days` +- **THEN** install shows warning + prompt; invocation shows warning +- **WHEN** `now - revoked_at > 30 days` +- **THEN** install is hard-blocked; invocation shows escalated warning + +#### Scenario: publisher_request — 7-day soft warn, then soft block + +- **GIVEN** a revocation entry with `reason: publisher_request` +- **WHEN** `now - revoked_at <= 7 days` +- **THEN** install shows warning + prompt; invocation shows informational notice +- **WHEN** `now - revoked_at > 7 days` +- **THEN** install shows warning + prompt (soft block — still installable with confirmation) + +#### Scenario: api_incompatibility — 14-day suggest-newer window + +- **GIVEN** a revocation entry with `reason: api_incompatibility` +- **WHEN** `now - revoked_at <= 14 days` +- **THEN** install shows: `[WARN] <module>@<version> has a known API incompatibility. Upgrade recommended.` + suggest newer version +- **AND** install is NOT blocked during the 14-day window +- **WHEN** `now - revoked_at > 14 days` +- **THEN** install is soft-blocked (warn + prompt) + +### Requirement: Unknown reason code falls back to most restrictive policy + +The CLI SHALL apply the most restrictive policy (security_incident equivalent) when an unrecognised revocation reason code is encountered. 
+ +#### Scenario: Unknown reason code + +- **GIVEN** a revocation entry with an unrecognised `reason` value +- **WHEN** CLI processes the entry +- **THEN** CLI SHALL apply `security_incident` policy (most restrictive) +- **AND** SHALL log `[WARN] Unknown revocation reason '<reason>'; applying most restrictive policy` + +## Contract Requirements + +- `GRACE_WINDOWS: dict[str, GraceWindowPolicy]` — module-level constant, not overridable at runtime +- `compute_grace_status(revoked_at: datetime, reason: str) -> GraceStatus` — `@require` revoked_at is UTC-aware; `@ensure` result.action in VALID_ACTIONS; `@beartype` diff --git a/openspec/changes/marketplace-04-revocation/specs/module-revocation/spec.md b/openspec/changes/marketplace-04-revocation/specs/module-revocation/spec.md new file mode 100644 index 00000000..13566d94 --- /dev/null +++ b/openspec/changes/marketplace-04-revocation/specs/module-revocation/spec.md @@ -0,0 +1,70 @@ +# module-revocation Specification + +## Purpose + +Defines per-module revocation records in `registry/modules/revoked.json`, CLI enforcement at install time, and the warning surfaced on invocation of already-installed revoked modules. + +## ADDED Requirements + +### Requirement: Block installation of revoked module versions + +The CLI SHALL enforce module-level revocation at install time, applying the same grace window policy as publisher revocation. + +#### Scenario: Module version revoked with security_incident — hard block + +- **GIVEN** `registry/modules/revoked.json` contains `name: specfact-jira-sync, version: 1.0.0, reason: security_incident` +- **WHEN** user installs `@mycompany/specfact-jira-sync@1.0.0` +- **THEN** CLI SHALL hard-block: `[ERROR] specfact-jira-sync@1.0.0 has been revoked (security_incident). 
Installation blocked.` +- **AND** SHALL suggest: `Run: specfact module install @mycompany/specfact-jira-sync (to install latest)` +- **AND** SHALL NOT allow any flag to override + +#### Scenario: Module version revoked with api_incompatibility — suggest newer version + +- **GIVEN** `name: specfact-jira-sync, version: 0.9.0, reason: api_incompatibility` in revocation index +- **WHEN** user installs or already has `specfact-jira-sync@0.9.0` +- **THEN** CLI SHALL warn: `[WARN] specfact-jira-sync@0.9.0 has a known API incompatibility. Upgrade recommended.` +- **AND** SHALL suggest: `Run: specfact module update specfact-jira-sync` +- **AND** install of 0.9.0 is blocked only after 14-day grace window expiry + +### Requirement: Warn on invocation of installed revoked module + +The CLI SHALL check revocation status of all loaded modules on `specfact module <command>` invocation. + +#### Scenario: Loaded module is revoked (security_incident) + +- **GIVEN** `specfact-jira-sync@1.0.0` is installed and subsequently revoked with `security_incident` +- **WHEN** user runs any `specfact` command that loads the module +- **THEN** CLI SHALL display prominently before any command output: + + ```text + ⚠ WARNING: specfact-jira-sync@1.0.0 has been revoked (security_incident). + Reason: Remote code execution vulnerability — update or uninstall immediately. + Run: specfact module update specfact-jira-sync + specfact module uninstall specfact-jira-sync + ``` + +- **AND** SHALL NOT block the command (warn-only for installed modules) + +#### Scenario: Loaded module is revoked (policy_violation, within grace window) + +- **GIVEN** `specfact-jira-sync@1.0.0` is installed and revoked with `policy_violation`, within 30d grace +- **WHEN** user runs any command loading the module +- **THEN** CLI SHALL display: `[WARN] specfact-jira-sync@1.0.0 has been revoked (policy_violation). 
Grace window: N days remaining.` +- **AND** SHALL continue command execution + +### Requirement: Configurable periodic re-check for installed modules + +The CLI SHALL support a configurable periodic revocation re-check for installed modules. + +#### Scenario: Periodic re-check (weekly default) + +- **GIVEN** `revocation_check_interval: 7d` in `~/.specfact/config.yaml` (default) +- **AND** the last revocation check was more than 7 days ago +- **WHEN** user runs any `specfact` command +- **THEN** CLI SHALL perform a background revocation re-check for all installed modules +- **AND** SHALL surface any newly-revoked module warnings + +## Contract Requirements + +- `check_module_revocation(module_name: str, version: str, index: ModuleRevocationIndex) -> RevocationStatus` — `@require` module_name and version are non-empty; `@ensure` result.is_revoked is bool; `@beartype` +- `enforce_revocation_policy(status: RevocationStatus, context: RevocationContext) -> RevocationDecision` — `@require` status.reason in KNOWN_REASONS; `@beartype` diff --git a/openspec/changes/marketplace-04-revocation/specs/publisher-revocation/spec.md b/openspec/changes/marketplace-04-revocation/specs/publisher-revocation/spec.md new file mode 100644 index 00000000..c5e30910 --- /dev/null +++ b/openspec/changes/marketplace-04-revocation/specs/publisher-revocation/spec.md @@ -0,0 +1,69 @@ +# publisher-revocation Specification + +## Purpose + +Defines the CLI's publisher revocation check: fetching, caching, and verifying `publishers/revoked.json`, and enforcing revocation policy at install time. + +## ADDED Requirements + +### Requirement: Fetch and cache publisher revocation index + +The CLI SHALL fetch `publishers/revoked.json` from the trust index and cache with 1-hour TTL. 
+ +#### Scenario: Fetch publisher revocation index on install + +- **WHEN** user installs any module +- **THEN** CLI SHALL fetch `{trust_index_url}/publishers/revoked.json` +- **AND** SHALL verify NOLD AI signature over the index +- **AND** SHALL cache the result in `~/.specfact/cache/publishers-revoked.json` with 1h TTL + +#### Scenario: Cache hit within TTL + +- **GIVEN** a valid cached `publishers-revoked.json` (age < 1h) +- **WHEN** user installs a module +- **THEN** CLI SHALL use the cached index without HTTP fetch + +#### Scenario: Stale revocation cache when offline + +- **GIVEN** cached `publishers-revoked.json` is older than 1h and CDN is unreachable +- **WHEN** user installs a module +- **THEN** CLI SHALL serve stale revocation index with `[WARN] Revocation index is stale; revocation check may be outdated` +- **AND** SHALL proceed with install + +### Requirement: Block installation from revoked publishers + +The CLI SHALL enforce revocation policy at install time, blocking or warning based on the revocation reason and grace window. + +#### Scenario: Publisher revoked with security_incident — hard block + +- **GIVEN** a publisher with `reason: security_incident` in `publishers/revoked.json` +- **WHEN** user installs any module from that publisher +- **THEN** CLI SHALL display `[ERROR] Publisher <handle> has been revoked (security_incident). Installation blocked.` +- **AND** SHALL exit with non-zero status +- **AND** SHALL NOT allow any flag to override this block + +#### Scenario: Publisher revoked with policy_violation — warn during grace window + +- **GIVEN** a publisher with `reason: policy_violation` and `revoked_at` within 30 days +- **WHEN** user installs a module from that publisher +- **THEN** CLI SHALL display `[WARN] Publisher <handle> is under revocation review (policy_violation). 
Grace window expires in N days.` +- **AND** SHALL prompt user to confirm install +- **AND** SHALL log to audit log on confirmation + +#### Scenario: Publisher revoked with policy_violation — hard block after grace window expiry + +- **GIVEN** a publisher with `reason: policy_violation` and `revoked_at` more than 30 days ago +- **WHEN** user installs a module from that publisher +- **THEN** CLI SHALL hard-block with `[ERROR] Publisher <handle> revocation grace window expired. Installation blocked.` + +#### Scenario: Publisher revoked with publisher_request — warn within 7d grace + +- **GIVEN** a publisher with `reason: publisher_request` within 7 days +- **WHEN** user installs a module from that publisher +- **THEN** CLI SHALL warn: `[WARN] Publisher <handle> has requested removal (publisher_request). Modules may be discontinued.` +- **AND** SHALL install with prompt confirmation + +## Contract Requirements + +- `check_publisher_revocation(publisher_id: str, index: PublisherRevocationIndex) -> RevocationStatus` — `@require` publisher_id is non-empty; `@ensure` result.is_revoked is bool; `@beartype` +- `fetch_revocation_indexes(trust_index_url: str, cache_dir: Path) -> tuple[PublisherRevocationIndex, ModuleRevocationIndex]` — `@require` trust_index_url is HTTPS; `@beartype` diff --git a/openspec/changes/marketplace-04-revocation/tasks.md b/openspec/changes/marketplace-04-revocation/tasks.md new file mode 100644 index 00000000..c5c9fe11 --- /dev/null +++ b/openspec/changes/marketplace-04-revocation/tasks.md @@ -0,0 +1,223 @@ +# Implementation Tasks: marketplace-04-revocation + +## TDD / SDD Order (Enforced) + +Per config.yaml, tests MUST come before implementation for any behavior-changing task. Order: + +1. Spec deltas (already created in this change) +2. Tests from spec scenarios (expect failure — no implementation yet) +3. Code implementation (until tests pass and behavior satisfies spec) +4. 
Evidence recorded in `openspec/changes/marketplace-04-revocation/TDD_EVIDENCE.md` + +Do not implement production code until tests exist and have been run (expecting failure). + +--- + +## 1. Create git worktree for this change + +- [ ] 1.1 Fetch latest and create a worktree with a new branch from `origin/dev` + - [ ] 1.1.1 `git fetch origin` + - [ ] 1.1.2 `gh issue develop 328 --repo nold-ai/specfact-cli --name feature/marketplace-04-revocation` + - [ ] 1.1.3 `git fetch origin && git worktree add ../specfact-cli-worktrees/feature/marketplace-04-revocation feature/marketplace-04-revocation` + - [ ] 1.1.4 `cd ../specfact-cli-worktrees/feature/marketplace-04-revocation` + - [ ] 1.1.5 `python -m venv .venv && source .venv/bin/activate && pip install -e ".[dev]"` + - [ ] 1.1.6 `git branch --show-current` (verify: `feature/marketplace-04-revocation`) + +> All subsequent tasks run inside the worktree directory. +> **Hard blocker**: marketplace-03 must be implemented and merged before code work begins. Trust layer (`trust/publisher_registry.py`, `trust/resolver.py`) is a required dependency. + +--- + +## 2. Review spec files (SDD) + +- [ ] 2.1 Review specs created in this change + - [ ] 2.1.1 `openspec/changes/marketplace-04-revocation/specs/publisher-revocation/spec.md` + - [ ] 2.1.2 `openspec/changes/marketplace-04-revocation/specs/module-revocation/spec.md` + - [ ] 2.1.3 `openspec/changes/marketplace-04-revocation/specs/grace-window-policy/spec.md` + - [ ] 2.1.4 `openspec/changes/marketplace-04-revocation/specs/automated-scan/spec.md` + - [ ] 2.1.5 `openspec validate marketplace-04-revocation --strict` + - [ ] 2.1.6 `hatch run yaml-lint` + +--- + +## 3. 
Create revocation data models (TDD) + +- [ ] 3.1 Write tests for revocation models (expect failure) + - [ ] 3.1.1 Create `tests/unit/trust/test_revocation_models.py` + - [ ] 3.1.2 Test `RevocationEntry`: required fields (publisher_id/module_name, reason, revoked_at, grace_window_days) + - [ ] 3.1.3 Test `PublisherRevocationIndex`: schema_version, nold_ai_signature, revocations list + - [ ] 3.1.4 Test `ModuleRevocationIndex`: same structure, module-scoped entries + - [ ] 3.1.5 Test `GraceWindowPolicy`: grace_days, install_action, existing_action + - [ ] 3.1.6 Test `RevocationStatus`: is_revoked, reason, grace_status, days_remaining + - [ ] 3.1.7 Run tests — expect failures + - [ ] 3.1.8 Record failing evidence in `TDD_EVIDENCE.md` + +- [ ] 3.2 Implement revocation models in `src/specfact_cli/trust/models.py` (extend) + - [ ] 3.2.1 Add `RevocationEntry`, `PublisherRevocationIndex`, `ModuleRevocationIndex`, `GraceWindowPolicy`, `GraceStatus`, `RevocationStatus`, `RevocationDecision`, `RevocationContext` to `trust/models.py` + - [ ] 3.2.2 All Pydantic BaseModel with Field(...) and descriptions + - [ ] 3.2.3 `revoked_at` must be UTC-aware datetime + - [ ] 3.2.4 Run tests — expect pass + - [ ] 3.2.5 Record passing evidence in `TDD_EVIDENCE.md` + +--- + +## 4. 
Implement revocation.py (TDD) + +- [ ] 4.1 Write tests for revocation checker (expect failure) + - [ ] 4.1.1 Create `tests/unit/trust/test_revocation.py` + - [ ] 4.1.2 Test `fetch_revocation_indexes()`: fetch + sig verify + cache (mock HTTP) + - [ ] 4.1.3 Test cache hit within 1h TTL — no HTTP call + - [ ] 4.1.4 Test stale cache when offline — serve with warning + - [ ] 4.1.5 Test `check_publisher_revocation()`: not-revoked → RevocationStatus(is_revoked=False) + - [ ] 4.1.6 Test `check_publisher_revocation()`: revoked, security_incident → RevocationStatus(is_revoked=True, grace_days=0) + - [ ] 4.1.7 Test `check_publisher_revocation()`: revoked, policy_violation, within 30d → grace status + - [ ] 4.1.8 Test `check_publisher_revocation()`: revoked, policy_violation, past 30d → expired + - [ ] 4.1.9 Test `check_module_revocation()`: same grace window logic for module entries + - [ ] 4.1.10 Test `compute_grace_status()`: all four reason codes + unknown reason → most restrictive + - [ ] 4.1.11 Test `enforce_revocation_policy()`: security_incident → hard_block; policy_violation in window → warn; etc. 
+ - [ ] 4.1.12 Run tests — expect failures + - [ ] 4.1.13 Record failing evidence in `TDD_EVIDENCE.md` + +- [ ] 4.2 Implement `src/specfact_cli/trust/revocation.py` + - [ ] 4.2.1 `GRACE_WINDOWS` constant (dict of reason → GraceWindowPolicy) + - [ ] 4.2.2 `fetch_revocation_indexes(trust_index_url, cache_dir) -> tuple[PublisherRevocationIndex, ModuleRevocationIndex]` + - [ ] 4.2.3 Cache in `~/.specfact/cache/publishers-revoked.json` and `registry-modules-revoked.json` (1h TTL) + - [ ] 4.2.4 `check_publisher_revocation(publisher_id, index) -> RevocationStatus` + - [ ] 4.2.5 `check_module_revocation(module_name, version, index) -> RevocationStatus` + - [ ] 4.2.6 `compute_grace_status(revoked_at, reason) -> GraceStatus` + - [ ] 4.2.7 `enforce_revocation_policy(status, context) -> RevocationDecision` + - [ ] 4.2.8 `@require`, `@ensure`, `@beartype` on all public functions + - [ ] 4.2.9 Run tests — expect pass + - [ ] 4.2.10 Record passing evidence in `TDD_EVIDENCE.md` + +--- + +## 5. Integrate revocation into module_registry install/invocation (TDD) + +- [ ] 5.1 Write integration tests (expect failure) + - [ ] 5.1.1 Create `tests/integration/test_revocation_integration.py` + - [ ] 5.1.2 Test install of security_incident-revoked publisher: hard block, no flag override + - [ ] 5.1.3 Test install of policy_violation-revoked publisher, in window: warn + prompt + - [ ] 5.1.4 Test install of policy_violation-revoked publisher, past window: hard block + - [ ] 5.1.5 Test install of publisher_request-revoked, in window: warn + prompt, succeeds + - [ ] 5.1.6 Test install of api_incompatibility-revoked module: warn, suggest newer, not blocked in window + - [ ] 5.1.7 Test invocation warning for installed security_incident-revoked module + - [ ] 5.1.8 Test weekly re-check triggered when last check > 7 days ago + - [ ] 5.1.9 Run tests — expect failures + - [ ] 5.1.10 Record failing evidence in `TDD_EVIDENCE.md` + +- [ ] 5.2 Integrate revocation into module_registry + - [ ] 5.2.1 Add 
revocation pre-flight to install command (after trust tier resolution, before download) + - [ ] 5.2.2 Add revocation warning check on module load (for already-installed modules) + - [ ] 5.2.3 Add `revocation_check_interval` config key (default: `7d`) to `~/.specfact/config.yaml` + - [ ] 5.2.4 Implement periodic re-check: track last check timestamp in `~/.specfact/cache/revocation-last-check` + - [ ] 5.2.5 Run tests — expect pass + - [ ] 5.2.6 Record passing evidence in `TDD_EVIDENCE.md` + +--- + +## 6. Implement AST scan (TDD) + +- [ ] 6.1 Write tests for bundle scanner (expect failure) + - [ ] 6.1.1 Create `tests/unit/scripts/test_bundle_scan.py` + - [ ] 6.1.2 Test `scan_bundle()` on clean bundle — empty findings + - [ ] 6.1.3 Test detection of `exec(base64.b64decode(...))` pattern + - [ ] 6.1.4 Test detection of `subprocess.run(shell=True)` + HTTP URL in same file + - [ ] 6.1.5 Test detection of network call at module top level + - [ ] 6.1.6 Test detection of `eval(<network-response-variable>)` pattern + - [ ] 6.1.7 Test `ScanReport.has_blocking_findings()` — True when block-severity finding present + - [ ] 6.1.8 Run tests — expect failures + - [ ] 6.1.9 Record failing evidence in `TDD_EVIDENCE.md` + +- [ ] 6.2 Implement AST scan + - [ ] 6.2.1 Create `src/specfact_cli/trust/bundle_scanner.py` with `scan_bundle(bundle_path) -> ScanReport` + - [ ] 6.2.2 Implement all four check patterns using stdlib `ast` module only + - [ ] 6.2.3 Integrate scan call into `scripts/publish-module.py` before publication step + - [ ] 6.2.4 Create `.github/workflows/scan-bundles.yml` (triggers on push/PR to specfact-cli-modules, *.py changes) + - [ ] 6.2.5 Add `@require`, `@ensure`, `@beartype` to `scan_bundle()` + - [ ] 6.2.6 Run tests — expect pass + - [ ] 6.2.7 Record passing evidence in `TDD_EVIDENCE.md` + +--- + +## 7. 
Create revocation signing scripts + +- [ ] 7.1 Create `scripts/revoke-publisher.py` + - [ ] 7.1.1 Signs revocation entry in `publishers/revoked.json` with NOLD AI key + - [ ] 7.1.2 Accepts: publisher_id, handle, reason (enum: security_incident/policy_violation/publisher_request/api_incompatibility), grace_window_days + - [ ] 7.1.3 Appends to `publishers/revoked.json` and re-signs the full index +- [ ] 7.2 Create `scripts/revoke-module.py` + - [ ] 7.2.1 Signs per-module revocation entry in `registry/modules/revoked.json` + - [ ] 7.2.2 Accepts: module_name, version, reason, grace_window_days + - [ ] 7.2.3 Appends and re-signs + +--- + +## 8. Module signing verification quality gate + +- [ ] 8.1 `hatch run ./scripts/verify-modules-signature.py --require-signature` +- [ ] 8.2 Re-sign and bump version if any module changed + +--- + +## 9. Quality gates + +- [ ] 9.1 `hatch run format` +- [ ] 9.2 `hatch run type-check` +- [ ] 9.3 `hatch run lint` +- [ ] 9.4 `hatch run yaml-lint` +- [ ] 9.5 `hatch run contract-test` +- [ ] 9.6 `hatch test --cover -v` + +--- + +## 10. Documentation research and review + +- [ ] 10.1 Identify and update affected documentation: + - [ ] 10.1.1 Create `docs/trust/grace-window-policy.md` (Jekyll front-matter required; ToS-linkable) + - [ ] 10.1.2 Update `docs/reference/module-commands.md`: revocation warning messages, `revocation_check_interval` config + - [ ] 10.1.3 Update `docs/_layouts/default.html`: add `Trust` section to sidebar with grace-window-policy link + - [ ] 10.1.4 Update `docs/guides/publisher-trust.md` (from marketplace-03): add note on revocation and grace windows + +--- + +## 11. 
Version and changelog + +- [ ] 11.1 Determine version bump: feature branch → minor increment (confirm with user) +- [ ] 11.2 Sync version across `pyproject.toml`, `setup.py`, `src/specfact_cli/__init__.py` +- [ ] 11.3 `CHANGELOG.md` entry: + - `Added: Publisher and module revocation infrastructure (trust/revocation.py)` + - `Added: Grace window policy enforcement by reason type (security_incident/policy_violation/publisher_request/api_incompatibility)` + - `Added: CI AST scan for bundle publication (.github/workflows/scan-bundles.yml)` + - `Added: docs/trust/grace-window-policy.md` + +--- + +## 12. GitHub issue creation + +- [x] 12.1 GitHub issue already created: [#328](https://github.com/nold-ai/specfact-cli/issues/328) +- [x] 12.2 Linked to project board +- [x] 12.3 `proposal.md` Source Tracking updated with issue #328 +- [x] 12.4 `CHANGE_ORDER.md` updated with marketplace-04 entry and GitHub issue #328 + +--- + +## 13. Create PR + +- [ ] 13.1 Commit from inside the worktree + - [ ] 13.1.1 `git add src/specfact_cli/trust/ src/specfact_cli/modules/module_registry/ scripts/ .github/workflows/ docs/ openspec/ pyproject.toml setup.py CHANGELOG.md` + - [ ] 13.1.2 `git commit -S -m "feat: publisher and module revocation infrastructure (marketplace-04)"` + - [ ] 13.1.3 `git push -u origin feature/marketplace-04-revocation` +- [ ] 13.2 `gh pr create --repo nold-ai/specfact-cli --base dev --head feature/marketplace-04-revocation --title "feat: publisher and module revocation infrastructure" --body-file /tmp/pr-marketplace-04.md` +- [ ] 13.3 `gh project item-add 1 --owner nold-ai --url <PR_URL>` + +--- + +## Post-merge cleanup (after PR is merged) + +- [ ] Return to primary checkout: `cd /home/dom/git/nold-ai/specfact-cli` +- [ ] `git fetch origin` +- [ ] `git worktree remove ../specfact-cli-worktrees/feature/marketplace-04-revocation` +- [ ] `git branch -d feature/marketplace-04-revocation` +- [ ] `git worktree prune` +- [ ] (Optional) `git push origin --delete 
feature/marketplace-04-revocation` diff --git a/openspec/changes/marketplace-05-registry-federation/design.md b/openspec/changes/marketplace-05-registry-federation/design.md new file mode 100644 index 00000000..c3fcf388 --- /dev/null +++ b/openspec/changes/marketplace-05-registry-federation/design.md @@ -0,0 +1,127 @@ +# Design: Registry Federation and Trust Certificate Verification + +## Context + +marketplace-02 added `custom_registries.py` with a trust level per registry (always / prompt / never). This trust level is user-assigned at `add-registry` time with no verification — any registry can be set to `always`. marketplace-05 replaces user-assigned trust with NOLD AI-certified trust tiers. + +**Current State (marketplace-02):** + +- `add_registry(url, trust)`: stores url + user-specified trust level +- No certificate check +- Trust level is a user assertion, not a verified claim + +**Target State (marketplace-05):** + +- `add_registry(url)`: fetches registry certificate from `{url}/.specfact/registry-cert.json` +- Verifies certificate against NOLD AI root key (from `trust/key_store.py`) +- Derives effective trust tier from certificate +- `--trust-local` bypasses certificate check for air-gapped use + +**Constraints:** + +- Offline-first: certificate must be cacheable; fetching may fail for air-gapped registries +- Backward compatible: existing registries without certificate → `community` tier (soft downgrade) +- Certificate expiry must be enforced (do not silently serve expired certs) + +## Goals / Non-Goals + +**Goals:** + +- Registry certificate schema + verification at add-registry and periodically +- Trust score propagation: effective tier = min(publisher_tier, registry_tier) +- `--trust-local` for air-gapped / enterprise-internal registries (no certificate required) +- `[local]` badge for local-trust modules in search output +- Certificate expiry enforcement with warning before expiry + +**Non-Goals:** + +- Registry certificate issuance (server-side; 
specfact.io-backend-phase2)
+- Registry revocation index (future extension)
+- Per-module trust score API (specfact.io-backend-phase2)
+
+## Architecture
+
+### Registry Certificate Schema
+
+Stored at `{registry_url}/.specfact/registry-cert.json`:
+
+```json
+{
+  "registry_id": "reg_xyz789",
+  "name": "Acme Internal Registry",
+  "url": "https://registry.acme.com/specfact",
+  "tier": "verified",
+  "certificate": "<Ed25519 cert signed by NOLD AI>",
+  "issued_at": "2026-02-27T00:00:00Z",
+  "expires_at": "2027-02-27T00:00:00Z"
+}
+```
+
+### `trust/registry_cert.py`
+
+```python
+# Key public API
+def fetch_registry_cert(registry_url: str) -> RegistryCert | None
+def verify_registry_cert(cert: RegistryCert, root_key: Ed25519PublicKey) -> bool
+def store_registry_cert(cert: RegistryCert, store_path: Path) -> None
+def load_registry_store(store_path: Path) -> list[RegistryStoreEntry]
+def get_effective_registry_tier(registry_url: str, store: list[RegistryStoreEntry]) -> str
+```
+
+### Trust Propagation
+
+`trust/resolver.py` updated to include registry tier in effective tier calculation. Rank values follow the trust-propagation spec, and the carve-outs match its scenarios: an `official` publisher outranks registry tier, and an `unregistered` registry always dominates:
+
+```python
+TIER_RANK = {"official": 4, "verified": 3, "community": 2, "local": 1, "unregistered": 0}
+
+def resolve_effective_tier(publisher_tier: str, registry_tier: str) -> str:
+    if registry_tier == "unregistered":
+        return "unregistered"  # unregistered registries block regardless of publisher
+    if publisher_tier == "official":
+        return "official"  # official publishers are not capped by registry tier
+    return min(publisher_tier, registry_tier, key=lambda t: TIER_RANK[t])
+```
+
+Note: `local` tier is below `community`. A `verified` publisher module served from a `local` registry is effective `local` — it cannot be promoted to `community` or `verified` without central registration.
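The propagation rule can be exercised end to end against the trust-propagation spec scenarios. This is an illustrative, dependency-free sketch; the shipped `trust/resolver.py` additionally applies `@beartype` and `@ensure` contracts, and exact rank values are an assumption (only the relative order matters for min-rank resolution):

```python
# Sketch of effective-tier resolution mirroring the trust-propagation spec.
TIER_RANK = {"official": 4, "verified": 3, "community": 2, "local": 1, "unregistered": 0}

def resolve_effective_tier(publisher_tier: str, registry_tier: str) -> str:
    # Per the spec scenario, an unregistered registry blocks regardless of publisher.
    if registry_tier == "unregistered":
        return "unregistered"
    # Per the spec scenario, an official publisher outranks the registry tier.
    if publisher_tier == "official":
        return "official"
    # Otherwise the lower-ranked tier wins: the registry tier caps the publisher tier.
    return min(publisher_tier, registry_tier, key=lambda t: TIER_RANK[t])

# Scenarios from the trust-propagation spec delta:
assert resolve_effective_tier("verified", "verified") == "verified"
assert resolve_effective_tier("official", "verified") == "official"
assert resolve_effective_tier("verified", "community") == "community"
assert resolve_effective_tier("verified", "local") == "local"
```

Because resolution is min-by-rank, a future tier only requires a new `TIER_RANK` entry; no per-scenario logic changes.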
+ +### add-registry Flow with Certificate + +```text +specfact module add-registry https://registry.acme.com/specfact + │ + ├─ Fetch {url}/.specfact/registry-cert.json + │ └─ If fetch fails: warn "No certificate found; treating as community tier" + ├─ trust/registry_cert.py: verify_registry_cert(cert, root_key) + │ └─ If verification fails: raise RegistryCertVerificationError + ├─ Check cert not expired: cert.expires_at > now + │ └─ If expired: warn + use community tier + ├─ Store cert in ~/.specfact/registries.json (effective_tier = cert.tier) + └─ "Registry added: Acme Internal Registry [verified]" +``` + +### --trust-local Flow + +```text +specfact module add-registry https://internal.corp/specfact --trust-local + │ + ├─ Skip certificate fetch + ├─ Store in ~/.specfact/registries.json (effective_tier = "local") + └─ "Registry added: internal.corp [local] — modules from this registry are not NOLD AI certified" +``` + +## Decisions + +### Decision 1: Backward Compatibility for Uncertified Registries + +**Choice**: Uncertified registries (no `/.specfact/registry-cert.json`) receive `community` tier with a warning at add-registry time. + +**Rationale**: Breaking existing custom registry users would be a poor upgrade experience. Community tier is the appropriate unverified-but-identity-confirmed tier. + +### Decision 2: Certificate Expiry Enforcement + +**Choice**: Warn 30 days before expiry; at expiry, downgrade to `community` tier with warning. No hard block. + +**Rationale**: Hard blocking at expiry would break workflows for registry operators who are slow to renew. Community-tier downgrade is reversible once cert is renewed and re-verified. + +### Decision 3: Local Trust Registries in Search Output + +**Choice**: `[local]` badge in search output, always (not `[community]` or above). Modules cannot be promoted beyond `local` without central registration. + +**Rationale**: Prevents local trust from being misread as a NOLD AI endorsement. 
Explicit `[local]` label is honest about the trust boundary. diff --git a/openspec/changes/marketplace-05-registry-federation/proposal.md b/openspec/changes/marketplace-05-registry-federation/proposal.md new file mode 100644 index 00000000..d9156378 --- /dev/null +++ b/openspec/changes/marketplace-05-registry-federation/proposal.md @@ -0,0 +1,52 @@ +# Change: Registry Federation and Trust Certificate Verification + +## Why + +marketplace-02 enables custom registries with a trust level, but any operator can claim any trust level. Without a certificate layer, the CLI cannot distinguish between a verified third-party registry (reviewed by NOLD AI) and an arbitrary self-hosted index. This change adds a CA-style registry certificate system: external registries obtain a signed certificate from NOLD AI, the CLI verifies it at add-registry time and on each fetch, and trust level is propagated as the minimum of the registry's tier and the publisher's tier. + +The `--trust-local` flag for air-gapped enterprise registries bypasses certificate verification and marks modules `[local]` — they cannot be promoted to community/verified trust without central registration. 
+
+## What Changes
+
+- **MODIFY**: `src/specfact_cli/registry/custom_registries.py` — extend `add_registry()` to fetch and verify the registry certificate from `{registry_url}/.specfact/registry-cert.json` against the NOLD AI root key (from `trust/key_store.py`); store effective trust tier in `~/.specfact/registries.json`; add `--trust-local` flag for air-gapped registries
+- **MODIFY**: `src/specfact_cli/trust/resolver.py` — integrate registry tier into effective tier calculation (min of publisher_tier and registry_tier)
+- **NEW**: `src/specfact_cli/trust/registry_cert.py` — registry certificate fetcher, verifier, and local registry store manager
+- **MODIFY**: `src/specfact_cli/modules/module_registry/src/` — add `[local]`, `[community]`, `[verified]`, `[official]` badges to search output, accounting for registry tier; extend search to query all registered + verified registries
+- **MODIFY**: `docs/guides/custom-registries.md` (extends marketplace-02's guide) — registry certificate setup, air-gapped usage, trust score propagation
+- **MODIFY**: `docs/_layouts/default.html` — update navigation if needed
+
+**Backward compatibility**: Existing custom registries added via marketplace-02 without a certificate are treated as `community` tier by default (soft downgrade with a warning at `add-registry` time). The new `--trust-local` flag allows air-gapped registries to operate without a central certificate.
+
+**Rollback plan**: Remove registry certificate verification; restore the flat custom registry trust level from marketplace-02.
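For illustration, a certified-registry entry in `~/.specfact/registries.json` might look like the following. Field names follow the `RegistryStoreEntry` shape described in the registry-certificates spec delta; the values and exact serialization are example assumptions, not the authoritative schema:

```json
{
  "registry_id": "reg_xyz789",
  "name": "Acme Internal Registry",
  "url": "https://registry.acme.com/specfact",
  "effective_tier": "verified",
  "trust_local": false,
  "cert_issued_at": "2026-02-27T00:00:00Z",
  "cert_expires_at": "2027-02-27T00:00:00Z",
  "added_at": "2026-03-01T12:00:00Z"
}
```

A `--trust-local` entry would instead carry `"effective_tier": "local"`, `"trust_local": true`, and no `cert_*` fields.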
+
+## Capabilities
+
+### New Capabilities
+
+- `registry-federation`: add-registry certificate verification against the NOLD AI root key; `~/.specfact/registries.json` local registry store with effective trust tier; `--trust-local` for air-gapped registries
+- `registry-certificates`: registry certificate schema; certificate fetch from `/.specfact/registry-cert.json`; local certificate store in `~/.specfact/registries.json`; certificate expiry enforcement
+- `trust-propagation`: effective trust = min(publisher_tier, registry_tier); `[local]`, `[community]`, `[verified]`, `[official]` badges propagated to all search output
+
+## Impact
+
+- **Affected code**:
+  - `src/specfact_cli/registry/custom_registries.py` (modify: certificate verification at add-registry)
+  - `src/specfact_cli/trust/resolver.py` (modify: registry tier integration into effective tier)
+  - `src/specfact_cli/trust/registry_cert.py` (new: certificate fetch + verify + store)
+  - `src/specfact_cli/modules/module_registry/src/` (modify: registry-tier-aware badges in search)
+- **Affected specs**: New specs for `registry-federation`, `registry-certificates`, `trust-propagation`
+- **Affected documentation**:
+  - `docs/guides/custom-registries.md` (update: certificate requirements, --trust-local, tier propagation)
+  - `docs/_layouts/default.html` (navigation update if needed)
+- **External dependencies**: None beyond the existing `cryptography` library (already in requirements)
+- **Hard dependency**: marketplace-03 (`trust/key_store.py`, `trust/resolver.py` base); marketplace-04 recommended but not hard-blocking
+
+---
+
+## Source Tracking
+
+<!-- source_repo: nold-ai/specfact-cli -->
+- **GitHub Issue**: #329
+- **Issue URL**: <https://github.com/nold-ai/specfact-cli/issues/329>
+- **Repository**: nold-ai/specfact-cli
+- **Last Synced Status**: proposed
diff --git a/openspec/changes/marketplace-05-registry-federation/specs/registry-certificates/spec.md
b/openspec/changes/marketplace-05-registry-federation/specs/registry-certificates/spec.md new file mode 100644 index 00000000..349773f9 --- /dev/null +++ b/openspec/changes/marketplace-05-registry-federation/specs/registry-certificates/spec.md @@ -0,0 +1,69 @@ +# registry-certificates Specification + +## Purpose + +Defines the registry certificate schema, the canonical location (`/.specfact/registry-cert.json`), and the CLI's certificate store (`~/.specfact/registries.json`). + +## ADDED Requirements + +### Requirement: Registry certificate schema + +The registry certificate served at `{registry_url}/.specfact/registry-cert.json` SHALL conform to the following schema. + +#### Scenario: Valid certificate structure + +- **GIVEN** a registry certificate JSON at `/.specfact/registry-cert.json` +- **WHEN** CLI fetches and parses it +- **THEN** it SHALL contain: + - `registry_id` (string, non-empty) + - `name` (string, human-readable registry name) + - `url` (string, HTTPS URL matching the registry URL) + - `tier` (string, one of: `official`, `verified`, `community`) + - `certificate` (string, base64-encoded Ed25519 signature by NOLD AI over canonical cert JSON) + - `issued_at` (string, ISO 8601 UTC) + - `expires_at` (string, ISO 8601 UTC, must be after `issued_at`) + +#### Scenario: Certificate URL field mismatch + +- **GIVEN** a certificate whose `url` field does not match the registry URL it was fetched from +- **WHEN** CLI verifies the certificate +- **THEN** CLI SHALL raise `RegistryCertUrlMismatchError`: `[ERROR] Certificate URL mismatch: cert says {cert.url}, fetched from {registry_url}` +- **AND** SHALL NOT store the registry + +### Requirement: CLI local registry store + +The CLI SHALL maintain `~/.specfact/registries.json` with all registered registries. 
+ +#### Scenario: Store structure after adding a certified registry + +- **GIVEN** a registry with a verified certificate is added +- **WHEN** CLI stores the registry +- **THEN** `~/.specfact/registries.json` SHALL contain an entry with: + - `registry_id`, `name`, `url`, `effective_tier` + - `trust_local: false` + - `cert_issued_at`, `cert_expires_at` + - `added_at` (timestamp of CLI add-registry call) + +#### Scenario: Store structure for --trust-local registry + +- **GIVEN** a registry added with `--trust-local` +- **WHEN** CLI stores the registry +- **THEN** `~/.specfact/registries.json` entry SHALL have: + - `effective_tier: local` + - `trust_local: true` + - No `cert_*` fields (no certificate was fetched) + +### Requirement: list-registries shows effective tier + +The `specfact module list-registries` command SHALL display effective tier and certificate metadata for each registered registry. + +#### Scenario: List shows all registries with tier and cert status + +- **WHEN** user runs `specfact module list-registries` +- **THEN** CLI SHALL display all stored registries with: name, url, effective_tier badge, cert expiry date (if applicable) + +## Contract Requirements + +- `RegistryCert` Pydantic model: all fields required, `url` validated as HTTPS, `expires_at` validated as after `issued_at` +- `RegistryStoreEntry` Pydantic model: includes both cert metadata and effective_tier +- `@beartype` on all public functions in `trust/registry_cert.py` diff --git a/openspec/changes/marketplace-05-registry-federation/specs/registry-federation/spec.md b/openspec/changes/marketplace-05-registry-federation/specs/registry-federation/spec.md new file mode 100644 index 00000000..99aa7c41 --- /dev/null +++ b/openspec/changes/marketplace-05-registry-federation/specs/registry-federation/spec.md @@ -0,0 +1,68 @@ +# registry-federation Specification + +## Purpose + +Defines the `specfact module add-registry` command's certificate-based verification flow, local registry store 
management, and the `--trust-local` flag for air-gapped registries.
+
+## ADDED Requirements
+
+### Requirement: Verify registry certificate at add-registry time
+
+The CLI SHALL fetch and cryptographically verify the NOLD AI-signed registry certificate before storing a new registry.
+
+#### Scenario: Add registry with valid certificate
+
+- **GIVEN** `https://registry.acme.com/specfact/.specfact/registry-cert.json` exists and has a valid NOLD AI signature
+- **WHEN** user runs `specfact module add-registry https://registry.acme.com/specfact`
+- **THEN** CLI SHALL fetch the certificate
+- **AND** SHALL verify the NOLD AI Ed25519 signature using the bundled root key
+- **AND** SHALL store the registry in `~/.specfact/registries.json` with `effective_tier: verified` (from cert)
+- **AND** SHALL display: `Registry added: Acme Internal Registry [verified]`
+
+#### Scenario: Registry has no certificate — community tier
+
+- **GIVEN** `{registry_url}/.specfact/registry-cert.json` returns 404
+- **WHEN** user runs `specfact module add-registry {registry_url}`
+- **THEN** CLI SHALL warn: `[WARN] No registry certificate found at {url}. Treating as community tier.`
+- **AND** SHALL store registry with `effective_tier: community`
+- **AND** SHALL NOT abort
+
+#### Scenario: Certificate verification fails
+
+- **GIVEN** the certificate JSON has an invalid or tampered NOLD AI signature
+- **WHEN** user runs `specfact module add-registry {registry_url}`
+- **THEN** CLI SHALL raise `RegistryCertVerificationError` with: `[ERROR] Registry certificate signature verification failed.
Registry not added.` +- **AND** SHALL NOT store the registry + +#### Scenario: Add registry with --trust-local (air-gapped) + +- **GIVEN** user passes `--trust-local` +- **WHEN** user runs `specfact module add-registry https://internal.corp/specfact --trust-local` +- **THEN** CLI SHALL skip certificate fetch entirely +- **AND** SHALL store registry with `effective_tier: local` +- **AND** SHALL display: `Registry added: internal.corp [local] — modules from this registry are not NOLD AI certified` + +### Requirement: Certificate expiry enforcement + +The CLI SHALL detect expired registry certificates and downgrade the registry to community tier, warning the operator to renew. + +#### Scenario: Registry certificate expires + +- **GIVEN** a stored registry certificate whose `expires_at` is in the past +- **WHEN** CLI fetches from that registry +- **THEN** CLI SHALL downgrade effective tier to `community` +- **AND** SHALL warn: `[WARN] Registry certificate for {name} has expired. Treating as community tier. Renew at specfact.io/registries/register.` + +#### Scenario: Certificate approaching expiry (30-day warning) + +- **GIVEN** a stored certificate whose `expires_at` is within 30 days +- **WHEN** CLI fetches from that registry +- **THEN** CLI SHALL warn: `[WARN] Registry certificate for {name} expires in N days. 
Renew at specfact.io/registries/register.`
+- **AND** SHALL continue with the certified tier
+
+## Contract Requirements
+
+- `fetch_registry_cert(registry_url: str) -> RegistryCert | None` — `@require` registry_url is HTTPS; `@beartype`
+- `verify_registry_cert(cert: RegistryCert, root_key: Ed25519PublicKey) -> bool` — `@require` cert is non-None; `@beartype`
+- `store_registry_cert(cert: RegistryCert, store_path: Path) -> None` — `@require` store_path.parent.exists(); `@beartype`
+- `get_effective_registry_tier(registry_url: str, store: list[RegistryStoreEntry]) -> str` — `@ensure` result in {"official", "verified", "community", "local"}; `@beartype`
diff --git a/openspec/changes/marketplace-05-registry-federation/specs/trust-propagation/spec.md b/openspec/changes/marketplace-05-registry-federation/specs/trust-propagation/spec.md
new file mode 100644
index 00000000..39ccfbd6
--- /dev/null
+++ b/openspec/changes/marketplace-05-registry-federation/specs/trust-propagation/spec.md
@@ -0,0 +1,77 @@
+# trust-propagation Specification
+
+## Purpose
+
+Defines how effective module trust tier is calculated as the minimum of publisher tier and registry tier, and how this propagates to search output badges and install policy.
+
+## ADDED Requirements
+
+### Requirement: Effective trust = min(publisher_tier, registry_tier)
+
+The CLI SHALL resolve effective trust tier as the minimum (by rank) of the publisher's tier and the registry's effective tier, with one exception: an `official` publisher retains `official` unless the registry is `unregistered`.
+ +#### Scenario: Verified publisher from verified registry + +- **GIVEN** publisher tier = `verified`, registry effective_tier = `verified` +- **WHEN** CLI resolves effective tier +- **THEN** effective_tier = `verified` + +#### Scenario: Official publisher from verified registry + +- **GIVEN** publisher tier = `official`, registry effective_tier = `verified` +- **WHEN** CLI resolves effective tier +- **THEN** effective_tier = `official` (official publisher outranks registry tier) + +#### Scenario: Verified publisher from community registry + +- **GIVEN** publisher tier = `verified`, registry effective_tier = `community` +- **WHEN** CLI resolves effective tier +- **THEN** effective_tier = `community` (registry tier caps the effective tier) + +#### Scenario: Verified publisher from local-trust registry + +- **GIVEN** publisher tier = `verified`, registry effective_tier = `local` +- **WHEN** CLI resolves effective tier +- **THEN** effective_tier = `local` +- **AND** module is shown as `[local]` in search output regardless of publisher tier + +#### Scenario: Any publisher from unregistered registry + +- **GIVEN** registry_tier = `unregistered` (registry added without any cert or trust-local flag) +- **WHEN** CLI resolves effective tier +- **THEN** effective_tier = `unregistered` +- **AND** install is blocked unless `--trust-unregistered` + +### Requirement: Trust tier badges in search output reflect effective tier + +Search output SHALL display the effective tier (min of publisher_tier and registry_tier), not the raw publisher tier. 
+ +#### Scenario: Search output shows effective tier badge + +- **GIVEN** a module from a `verified` publisher served by a `community` registry +- **WHEN** user runs `specfact module search <term>` +- **THEN** the module entry SHALL show `[community]` (effective tier, not publisher tier) +- **AND** the raw publisher tier SHALL be visible in `specfact module info` but not in search list view + +#### Scenario: Local-trust module badge in search + +- **GIVEN** a module from a `--trust-local` registry +- **WHEN** user runs `specfact module search <term>` +- **THEN** the module entry SHALL show `[local]` badge +- **AND** SHALL NOT show `[verified]` or `[community]` even if the publisher record is verified + +### Requirement: Install policy uses effective tier (not publisher tier) + +The install gate SHALL use effective_tier (publisher ∩ registry) for all policy decisions, not the raw publisher tier. + +#### Scenario: Community-effective module requires prompt even if publisher is verified + +- **GIVEN** effective_tier = `community` (verified publisher + community registry) +- **WHEN** user installs the module without `--trust-community` +- **THEN** CLI SHALL apply community install policy (warn + prompt) +- **AND** SHALL display: `[WARN] Registry for this module is community-certified. Publisher is verified, but registry is not. Install anyway? 
[y/N]` + +## Contract Requirements + +- `resolve_effective_tier(publisher_tier: str, registry_tier: str) -> str` — extended to include `local` in tier rank order; `@ensure` result in {"official", "verified", "community", "local", "unregistered"}; `@beartype` +- Tier rank order (for min resolution): `official(4) > verified(3) > community(2) > local(1) > unregistered(0)` diff --git a/openspec/changes/marketplace-05-registry-federation/tasks.md b/openspec/changes/marketplace-05-registry-federation/tasks.md new file mode 100644 index 00000000..06d0e224 --- /dev/null +++ b/openspec/changes/marketplace-05-registry-federation/tasks.md @@ -0,0 +1,215 @@ +# Implementation Tasks: marketplace-05-registry-federation + +## TDD / SDD Order (Enforced) + +Per config.yaml, tests MUST come before implementation for any behavior-changing task. Order: + +1. Spec deltas (already created in this change) +2. Tests from spec scenarios (expect failure — no implementation yet) +3. Code implementation (until tests pass and behavior satisfies spec) +4. Evidence recorded in `openspec/changes/marketplace-05-registry-federation/TDD_EVIDENCE.md` + +Do not implement production code until tests exist and have been run (expecting failure). + +--- + +## 1. 
Create git worktree for this change + +- [ ] 1.1 Fetch latest and create a worktree with a new branch from `origin/dev` + - [ ] 1.1.1 `git fetch origin` + - [ ] 1.1.2 `gh issue develop 329 --repo nold-ai/specfact-cli --name feature/marketplace-05-registry-federation` + - [ ] 1.1.3 `git fetch origin && git worktree add ../specfact-cli-worktrees/feature/marketplace-05-registry-federation feature/marketplace-05-registry-federation` + - [ ] 1.1.4 `cd ../specfact-cli-worktrees/feature/marketplace-05-registry-federation` + - [ ] 1.1.5 `python -m venv .venv && source .venv/bin/activate && pip install -e ".[dev]"` + - [ ] 1.1.6 `git branch --show-current` (verify: `feature/marketplace-05-registry-federation`) + +> All subsequent tasks run inside the worktree directory. +> **Hard blocker**: marketplace-03 must be implemented and merged (provides `trust/key_store.py`, `trust/resolver.py`). +> marketplace-04 is recommended but not hard-blocking. + +--- + +## 2. Review spec files (SDD) + +- [ ] 2.1 Review specs created in this change + - [ ] 2.1.1 `openspec/changes/marketplace-05-registry-federation/specs/registry-federation/spec.md` + - [ ] 2.1.2 `openspec/changes/marketplace-05-registry-federation/specs/registry-certificates/spec.md` + - [ ] 2.1.3 `openspec/changes/marketplace-05-registry-federation/specs/trust-propagation/spec.md` + - [ ] 2.1.4 `openspec validate marketplace-05-registry-federation --strict` + - [ ] 2.1.5 `hatch run yaml-lint` + +--- + +## 3. 
Create registry certificate data models (TDD) + +- [ ] 3.1 Write tests for registry cert models (expect failure) + - [ ] 3.1.1 Create `tests/unit/trust/test_registry_cert_models.py` + - [ ] 3.1.2 Test `RegistryCert`: all required fields, URL validated as HTTPS, expires_at > issued_at + - [ ] 3.1.3 Test `RegistryCert` with non-HTTPS URL → validation error + - [ ] 3.1.4 Test `RegistryCert` with expires_at before issued_at → validation error + - [ ] 3.1.5 Test `RegistryStoreEntry`: includes effective_tier, trust_local bool, cert metadata + - [ ] 3.1.6 Run tests — expect failures + - [ ] 3.1.7 Record failing evidence in `TDD_EVIDENCE.md` + +- [ ] 3.2 Implement models in `trust/models.py` (extend) + - [ ] 3.2.1 Add `RegistryCert`, `RegistryStoreEntry` Pydantic models to `trust/models.py` + - [ ] 3.2.2 Validators: `url` is HTTPS, `expires_at` > `issued_at`, `tier` in allowed set + - [ ] 3.2.3 Run tests — expect pass + - [ ] 3.2.4 Record passing evidence in `TDD_EVIDENCE.md` + +--- + +## 4. Implement trust/registry_cert.py (TDD) + +- [ ] 4.1 Write tests for registry_cert module (expect failure) + - [ ] 4.1.1 Create `tests/unit/trust/test_registry_cert.py` + - [ ] 4.1.2 Test `fetch_registry_cert()`: valid cert JSON at `/.specfact/registry-cert.json` → returns RegistryCert (mock HTTP) + - [ ] 4.1.3 Test `fetch_registry_cert()`: 404 → returns None (not raises) + - [ ] 4.1.4 Test `verify_registry_cert()`: valid sig → True + - [ ] 4.1.5 Test `verify_registry_cert()`: tampered sig → False (or raises RegistryCertVerificationError) + - [ ] 4.1.6 Test `verify_registry_cert()`: URL mismatch → raises RegistryCertUrlMismatchError + - [ ] 4.1.7 Test `store_registry_cert()`: writes to registries.json correctly + - [ ] 4.1.8 Test `get_effective_registry_tier()`: certified → tier from cert; uncertified → community; local → local + - [ ] 4.1.9 Test `get_effective_registry_tier()`: expired cert → community with warning + - [ ] 4.1.10 Run tests — expect failures + - [ ] 4.1.11 Record 
failing evidence in `TDD_EVIDENCE.md` + +- [ ] 4.2 Implement `src/specfact_cli/trust/registry_cert.py` + - [ ] 4.2.1 `fetch_registry_cert(registry_url: str) -> RegistryCert | None` + - [ ] 4.2.2 `verify_registry_cert(cert: RegistryCert, root_key: Ed25519PublicKey) -> bool` — URL mismatch raises RegistryCertUrlMismatchError + - [ ] 4.2.3 `store_registry_cert(cert: RegistryCert, store_path: Path) -> None` + - [ ] 4.2.4 `load_registry_store(store_path: Path) -> list[RegistryStoreEntry]` + - [ ] 4.2.5 `get_effective_registry_tier(registry_url: str, store: list[RegistryStoreEntry]) -> str` — handle expiry → community downgrade with warning + - [ ] 4.2.6 `@require`, `@ensure`, `@beartype` on all public functions + - [ ] 4.2.7 Run tests — expect pass + - [ ] 4.2.8 Record passing evidence in `TDD_EVIDENCE.md` + +--- + +## 5. Update trust/resolver.py for registry tier integration (TDD) + +- [ ] 5.1 Write tests for updated resolver (expect failure) + - [ ] 5.1.1 Extend `tests/unit/trust/test_resolver.py` + - [ ] 5.1.2 Test `resolve_effective_tier()`: all combinations from trust-propagation spec (official+verified=official, verified+community=community, verified+local=local, any+unregistered=unregistered) + - [ ] 5.1.3 Test tier rank order: official(4)>verified(3)>community(2)>local(1)>unregistered(0) + - [ ] 5.1.4 Run tests — expect failures for new cases (regression tests from marketplace-03 must still pass) + - [ ] 5.1.5 Record failing evidence in `TDD_EVIDENCE.md` + +- [ ] 5.2 Update `trust/resolver.py` + - [ ] 5.2.1 Add `local` to `TIER_RANK` constant (rank: 1, between community and unregistered) + - [ ] 5.2.2 Update `resolve_effective_tier()` to use full rank dict + - [ ] 5.2.3 Verify marketplace-03 regression tests still pass + - [ ] 5.2.4 Run all tests — expect pass + - [ ] 5.2.5 Record passing evidence in `TDD_EVIDENCE.md` + +--- + +## 6. 
Extend custom_registries.py with certificate verification (TDD) + +- [ ] 6.1 Write tests for extended add-registry (expect failure) + - [ ] 6.1.1 Extend `tests/unit/registry/test_custom_registries.py` + - [ ] 6.1.2 Test `add_registry()` with valid cert: stored with effective_tier from cert + - [ ] 6.1.3 Test `add_registry()` with 404 cert: stored as community with warning + - [ ] 6.1.4 Test `add_registry()` with invalid cert sig: raises RegistryCertVerificationError, not stored + - [ ] 6.1.5 Test `add_registry()` with URL mismatch: raises RegistryCertUrlMismatchError, not stored + - [ ] 6.1.6 Test `add_registry(--trust-local)`: stored with effective_tier=local, no cert fetch + - [ ] 6.1.7 Test `list_registries()`: output includes effective_tier badge and cert expiry + - [ ] 6.1.8 Run tests — expect failures + - [ ] 6.1.9 Record failing evidence in `TDD_EVIDENCE.md` + +- [ ] 6.2 Extend `custom_registries.py` + - [ ] 6.2.1 Modify `add_registry()` to call `trust/registry_cert.py: fetch_registry_cert()` → verify → store + - [ ] 6.2.2 Add `--trust-local` flag support: skip cert fetch, store as `local` tier + - [ ] 6.2.3 Update `list_registries()` output: add effective_tier badge column, cert expiry field + - [ ] 6.2.4 Run tests — expect pass + - [ ] 6.2.5 Record passing evidence in `TDD_EVIDENCE.md` + +--- + +## 7. 
Update module_registry search output for effective tier (TDD) + +- [ ] 7.1 Write tests for effective-tier badges in search output (expect failure) + - [ ] 7.1.1 Extend `tests/integration/test_module_trust_integration.py` + - [ ] 7.1.2 Test search result for verified-publisher + community-registry: badge = `[community]` + - [ ] 7.1.3 Test search result for any-publisher + local-registry: badge = `[local]` + - [ ] 7.1.4 Test install policy uses effective tier (verified+community → prompt if no --trust-community) + - [ ] 7.1.5 Run tests — expect failures + - [ ] 7.1.6 Record failing evidence in `TDD_EVIDENCE.md` + +- [ ] 7.2 Update module_registry search/install + - [ ] 7.2.1 Modify search output to use effective_tier (publisher_tier ∩ registry_tier) for badge + - [ ] 7.2.2 Modify install pre-flight to use effective_tier for policy resolution + - [ ] 7.2.3 Update `specfact module info` to show both publisher tier and registry tier separately + - [ ] 7.2.4 Run tests — expect pass + - [ ] 7.2.5 Record passing evidence in `TDD_EVIDENCE.md` + +--- + +## 8. Module signing verification quality gate + +- [ ] 8.1 `hatch run ./scripts/verify-modules-signature.py --require-signature` +- [ ] 8.2 Re-sign and bump version if any module changed + +--- + +## 9. Quality gates + +- [ ] 9.1 `hatch run format` +- [ ] 9.2 `hatch run type-check` +- [ ] 9.3 `hatch run lint` +- [ ] 9.4 `hatch run yaml-lint` +- [ ] 9.5 `hatch run contract-test` +- [ ] 9.6 `hatch test --cover -v` + +--- + +## 10. 
Documentation research and review + +- [ ] 10.1 Identify and update affected documentation: + - [ ] 10.1.1 Update `docs/guides/custom-registries.md` (from marketplace-02): add certificate requirements, --trust-local, tier propagation examples, cert expiry handling + - [ ] 10.1.2 Update `docs/guides/publisher-trust.md` (from marketplace-03): add registry federation section explaining how registry tier caps publisher tier + - [ ] 10.1.3 Update `docs/reference/module-commands.md`: document --trust-local flag, list-registries cert expiry column + - [ ] 10.1.4 Update `docs/_layouts/default.html` if new pages added + +--- + +## 11. Version and changelog + +- [ ] 11.1 Determine version bump: feature branch → minor increment (confirm with user) +- [ ] 11.2 Sync version across `pyproject.toml`, `setup.py`, `src/specfact_cli/__init__.py` +- [ ] 11.3 `CHANGELOG.md` entry: + - `Added: Registry federation with NOLD AI certificate verification (trust/registry_cert.py)` + - `Added: --trust-local flag for air-gapped enterprise registries` + - `Added: Trust score propagation — effective tier = min(publisher_tier, registry_tier)` + - `Added: [local], [community], [verified], [official] tier badges propagated through registry federation` + +--- + +## 12. GitHub issue creation + +- [x] 12.1 GitHub issue already created: [#329](https://github.com/nold-ai/specfact-cli/issues/329) +- [x] 12.2 Linked to project board +- [x] 12.3 `proposal.md` Source Tracking updated with issue #329 +- [x] 12.4 `CHANGE_ORDER.md` updated with marketplace-05 entry and GitHub issue #329 + +--- + +## 13. 
Create PR + +- [ ] 13.1 Commit from inside the worktree + - [ ] 13.1.1 `git add src/specfact_cli/trust/ src/specfact_cli/registry/custom_registries.py src/specfact_cli/modules/module_registry/ docs/ openspec/ pyproject.toml setup.py CHANGELOG.md` + - [ ] 13.1.2 `git commit -S -m "feat: registry federation and trust certificate verification (marketplace-05)"` + - [ ] 13.1.3 `git push -u origin feature/marketplace-05-registry-federation` +- [ ] 13.2 `gh pr create --repo nold-ai/specfact-cli --base dev --head feature/marketplace-05-registry-federation --title "feat: registry federation and trust certificate verification" --body-file /tmp/pr-marketplace-05.md` +- [ ] 13.3 `gh project item-add 1 --owner nold-ai --url <PR_URL>` + +--- + +## Post-merge cleanup (after PR is merged) + +- [ ] Return to primary checkout: `cd /home/dom/git/nold-ai/specfact-cli` +- [ ] `git fetch origin` +- [ ] `git worktree remove ../specfact-cli-worktrees/feature/marketplace-05-registry-federation` +- [ ] `git branch -d feature/marketplace-05-registry-federation` +- [ ] `git worktree prune` +- [ ] (Optional) `git push origin --delete feature/marketplace-05-registry-federation` diff --git a/openspec/changes/module-migration-01-categorize-and-group/CHANGE_VALIDATION.md b/openspec/changes/module-migration-01-categorize-and-group/CHANGE_VALIDATION.md new file mode 100644 index 00000000..58cdd540 --- /dev/null +++ b/openspec/changes/module-migration-01-categorize-and-group/CHANGE_VALIDATION.md @@ -0,0 +1,39 @@ +# Change Validation: module-migration-01-categorize-and-group + +- **Validated on (UTC):** 2026-02-28T01:02:00Z +- **Workflow:** /wf-validate-change (implementation update re-validation) +- **Strict command:** `openspec validate module-migration-01-categorize-and-group --strict` +- **Status command:** `openspec status --change "module-migration-01-categorize-and-group" --json` +- **Result:** PASS + +## Scope Summary + +- **Capabilities touched by this update:** `category-command-groups`, 
`first-run-selection` +- **Regression fixes validated:** + - grouped registration preserves duplicate-command extension merging (no loader overwrite) + - first-run detection treats workspace-local `project` source modules as installed +- **Code paths reviewed:** + - `src/specfact_cli/registry/module_packages.py` + - `src/specfact_cli/modules/init/src/first_run_selection.py` + - `tests/unit/specfact_cli/registry/test_module_packages.py` + - `tests/unit/modules/init/test_first_run_selection.py` + +## Breaking-Change Analysis + +- No public CLI command names or argument signatures were changed. +- Behavior is a compatibility restoration: + - grouped mode now matches prior extension semantics for duplicate command groups + - `specfact init` first-run suppression now correctly includes project-scoped installed bundles +- No downstream migration is required. + +## Dependency and Interface Impact + +- Registry impact is internal to loader composition for duplicate command names. +- Init impact is internal to module discovery source filtering. +- No additional OpenSpec change scope expansion was required. + +## Validation Outcome + +- OpenSpec strict validation passed for this change. +- `openspec status` reports required artifacts present and complete (`proposal`, `design`, `specs`, `tasks`). +- Note: local environment emitted non-blocking OpenSpec telemetry network errors while flushing analytics; validation result remained PASS. 
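
As a hedged illustration of the first-run rule this report validates, a minimal sketch follows. The bundle names come from this change; the `DiscoveredModule` shape and `source` values are assumptions for illustration, not the project's actual API:

```python
# Sketch (assumed types) of the first-run detection fix validated above:
# modules discovered with source "project" now count as installed.
from dataclasses import dataclass

CATEGORY_BUNDLES = {
    "specfact-project", "specfact-backlog", "specfact-codebase",
    "specfact-spec", "specfact-govern",
}


@dataclass
class DiscoveredModule:
    bundle: str
    source: str  # e.g. "user" or "project" (assumed values)


def is_first_run(modules: list[DiscoveredModule]) -> bool:
    """True only when no category bundle is installed from any counted source.

    Regression fix: include source == "project" so an already-initialized
    workspace with local .specfact/modules does not re-trigger first-run.
    """
    return not any(
        m.bundle in CATEGORY_BUNDLES
        for m in modules
        if m.source in {"user", "project"}  # fix: "project" now included
    )
```

Under this sketch, a workspace whose category bundles were discovered only with source `project` is treated as initialized, matching the suppression behavior described above.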
diff --git a/openspec/changes/module-migration-01-categorize-and-group/TDD_EVIDENCE.md b/openspec/changes/module-migration-01-categorize-and-group/TDD_EVIDENCE.md new file mode 100644 index 00000000..b487661d --- /dev/null +++ b/openspec/changes/module-migration-01-categorize-and-group/TDD_EVIDENCE.md @@ -0,0 +1,110 @@ +# TDD Evidence: module-migration-01-categorize-and-group + +## Phase 3 — First-run module selection in `specfact init` + +### 5.1 Failing tests (pre-implementation) + +Tests were written first in `tests/unit/modules/init/test_first_run_selection.py`. Initial run (before implementation) would fail on: + +- Profile resolution and install parsing (no `resolve_profile_bundles`, `resolve_install_bundles`, `is_first_run`, or `install_bundles_for_init`). +- CLI tests would fail due to missing `--profile`/`--install` and missing first_run_selection integration. + +(Exact failing run not captured; implementation followed immediately after test creation.) + +### 5.3 Passing tests (post-implementation) + +**Timestamp:** 2026-02-28 +**Command:** `hatch test -- tests/unit/modules/init/test_first_run_selection.py -v` +**Result:** 16 passed + +**Summary:** + +- `test_profile_solo_developer_resolves_to_specfact_codebase_only` — profile preset resolution. +- `test_profile_enterprise_full_stack_resolves_to_all_five_bundles` — enterprise preset. +- `test_profile_nonexistent_raises_with_valid_list` — invalid profile raises with valid list. +- `test_install_backlog_codebase_resolves_to_two_bundles` — `--install` parsing. +- `test_install_all_resolves_to_all_five_bundles` — `--install all`. +- `test_install_unknown_bundle_raises` — unknown bundle raises. +- `test_is_first_run_true_when_no_category_bundle_installed` — first-run detection (no category bundle). +- `test_is_first_run_false_when_category_bundle_installed` — first-run false when bundle present. +- `test_init_profile_solo_developer_calls_installer_with_specfact_codebase` — CLI `--profile solo-developer`. 
+- `test_init_profile_enterprise_full_stack_calls_installer_with_all_five` — CLI `--profile enterprise-full-stack`. +- `test_init_profile_nonexistent_exits_nonzero_and_lists_valid_profiles` — CLI invalid profile exits non-zero. +- `test_init_install_backlog_codebase_calls_installer_with_two_bundles` — CLI `--install backlog,codebase`. +- `test_init_install_all_calls_installer_with_five_bundles` — CLI `--install all`. +- `test_init_install_widgets_exits_nonzero` — CLI unknown bundle exits non-zero. +- `test_init_second_run_skips_first_run_flow` — second run does not call installer when no `--profile`/`--install`. +- `test_spec_bundle_install_includes_project_dep` — `install_bundles_for_init(["specfact-spec"])` installs project dep. + +Implementation: `src/specfact_cli/modules/init/src/first_run_selection.py` and `commands.py` (--profile, --install, first_run_selection integration). + +### Phase 3 follow-up (5.2.3, 5.2.7) + +**Interactive first-run UI (5.2.3):** +- `_interactive_first_run_bundle_selection()` in commands.py: welcome banner (Panel), questionary.select for profile or "Choose bundles manually", questionary.checkbox for manual bundle selection. When first run and interactive and no --profile/--install, init() calls it and installs selected bundles or shows tip if none. +- `BUNDLE_DISPLAY` and `PROFILE_DISPLAY_ORDER` in first_run_selection.py for UI labels. + +**Graceful degradation (5.2.7):** +- In `install_bundles_for_init`, each `install_bundled_module` call wrapped in try/except; on exception log warning "Dependency resolver may be unavailable" and re-raise so errors are surfaced. + +**Additional tests:** +- `test_init_first_run_interactive_with_selection_calls_installer`: first run + interactive, mock selection returns ["specfact-codebase"], assert install called. +- `test_init_first_run_interactive_no_selection_shows_tip`: first run + interactive, mock selection returns [], assert no install and "Tip" / "module install" in output. 
+ +**Run:** `hatch test -- tests/unit/modules/init/test_first_run_selection.py -v` — 18 passed. + +## Section 6 — Integration and E2E + +**Timestamp:** 2026-02-28 +**Commands:** `hatch test -- tests/integration/test_category_group_routing.py tests/e2e/test_first_run_init.py -v` +**Result:** 5 passed (3 integration + 2 e2e). + +**Integration:** `test_code_analyze_help_exits_zero`, `test_backlog_help_lists_subcommands`, `test_validate_shim_help_exits_zero`. +**E2E:** `test_init_profile_solo_developer_completes_in_temp_workspace`, `test_after_solo_developer_init_code_analyze_help_available` (install_bundles_for_init mocked). + +## Phase 4 — Regression fixes from review (grouped extension merge + project-scoped first-run) + +### 4.1 Failing tests (pre-implementation) + +**Timestamp:** 2026-02-28 01:00 UTC +**Command:** `hatch test -- tests/unit/specfact_cli/registry/test_module_packages.py::test_grouped_registration_merges_duplicate_command_extensions tests/unit/modules/init/test_first_run_selection.py::test_is_first_run_false_when_project_scoped_category_bundle_installed -v` +**Result:** 2 failed. + +**Failure summary:** + +- `test_grouped_registration_merges_duplicate_command_extensions` failed because grouped registration replaced the earlier `backlog` loader; observed commands were only `('ext_cmd',)` and `base_cmd` was missing. +- `test_is_first_run_false_when_project_scoped_category_bundle_installed` failed because `is_first_run()` ignored modules discovered with source `project`, returning `True` for an already-initialized workspace. + +### 4.2 Passing tests (post-implementation) + +**Timestamp:** 2026-02-28 01:01 UTC +**Command:** `hatch test -- tests/unit/specfact_cli/registry/test_module_packages.py::test_grouped_registration_merges_duplicate_command_extensions tests/unit/modules/init/test_first_run_selection.py::test_is_first_run_false_when_project_scoped_category_bundle_installed -v` +**Result:** 2 passed. 
+ +**Implementation summary:** + +- Updated `register_module_package_commands()` grouped path to merge duplicate command loaders via `_make_extending_loader` for module entries (and core root entries), instead of unconditional overwrite. +- Updated `is_first_run()` source filter to include `project` modules in first-run detection. + +## Phase 5 — Regression fix from PR 331 (trust failure should not block unaffected legacy module registration) + +### 5.1 Failing test (pre-implementation) + +**Timestamp:** 2026-02-28 21:07 local +**Command:** `hatch test -- tests/unit/specfact_cli/registry/test_module_packages.py::test_unaffected_modules_register_when_one_fails_trust -v` +**Result:** 1 failed. + +**Failure summary:** + +- In grouped mode, a module without `category` metadata was routed into grouped registration, so `good_cmd` was not mounted as flat top-level despite warning text indicating flat mounting. + +### 5.2 Passing tests (post-implementation) + +**Timestamp:** 2026-02-28 21:09 local +**Command:** `hatch test -- tests/unit/specfact_cli/registry/test_module_packages.py::test_unaffected_modules_register_when_one_fails_trust tests/unit/specfact_cli/registry/test_module_packages.py::test_grouped_registration_merges_duplicate_command_extensions tests/unit/registry/test_module_grouping.py::test_module_package_yaml_without_category_mounts_ungrouped_warning_logged -v` +**Result:** 3 passed. + +**Implementation summary:** + +- Updated `register_module_package_commands()` to use grouped registration only when `category_grouping_enabled` is true and module metadata declares `category`. +- Updated grouped-extension unit fixture metadata to include `category="backlog"` so the test reflects migration-era grouped manifests and remains aligned with category-driven grouping semantics. 
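
The duplicate-command merge semantics proven by the grouped-extension test can be sketched as follows. `_make_extending_loader` is named in the evidence above, but its signature and the surrounding registration helper here are assumptions for illustration only:

```python
# Sketch (assumed signatures) of merge-on-duplicate registration:
# an extension loader composes with the base loader instead of replacing it.
from typing import Callable

Loader = Callable[[], dict[str, object]]


def _make_extending_loader(base: Loader, extension: Loader) -> Loader:
    """Compose two loaders so extension subcommands overlay the base tree."""
    def merged() -> dict[str, object]:
        commands = dict(base())        # keep base subcommands (e.g. base_cmd)
        commands.update(extension())   # add extension subcommands (e.g. ext_cmd)
        return commands
    return merged


def register(loaders: dict[str, Loader], name: str, loader: Loader) -> None:
    # Regression fix: on a duplicate command name, merge rather than overwrite,
    # so the earlier loader's commands are never lost.
    if name in loaders:
        loaders[name] = _make_extending_loader(loaders[name], loader)
    else:
        loaders[name] = loader
```

This mirrors the failure mode described in 4.1: an unconditional overwrite leaves only `ext_cmd` visible, whereas the composed loader exposes both base and extension subcommands under the shared `backlog` name.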
diff --git a/openspec/changes/module-migration-01-categorize-and-group/proposal.md b/openspec/changes/module-migration-01-categorize-and-group/proposal.md index 9b9f1a9b..5555564d 100644 --- a/openspec/changes/module-migration-01-categorize-and-group/proposal.md +++ b/openspec/changes/module-migration-01-categorize-and-group/proposal.md @@ -63,5 +63,5 @@ This mirrors the VS Code model: ship a lean core, present workflow-domain groups - **GitHub Issue**: #315 - **Issue URL**: <https://github.com/nold-ai/specfact-cli/issues/315> - **Repository**: nold-ai/specfact-cli -- **Last Synced Status**: proposed +- **Last Synced Status**: open - **Sanitized**: false diff --git a/openspec/changes/module-migration-01-categorize-and-group/specs/category-command-groups/spec.md b/openspec/changes/module-migration-01-categorize-and-group/specs/category-command-groups/spec.md index 2f57e91f..f0e72b8b 100644 --- a/openspec/changes/module-migration-01-categorize-and-group/specs/category-command-groups/spec.md +++ b/openspec/changes/module-migration-01-categorize-and-group/specs/category-command-groups/spec.md @@ -26,6 +26,16 @@ Each category group SHALL expose its member modules as sub-commands, preserving - **THEN** the command SHALL execute identically to the original `specfact analyze contracts` - **AND** the exit code, output format, and side effects SHALL be identical +#### Scenario: Grouped registration preserves command extensions for duplicate command names + +- **GIVEN** `category_grouping_enabled` is `true` +- **AND** a base module provides command group `backlog` +- **AND** an extension module also declares command group `backlog` +- **WHEN** module package commands are registered +- **THEN** the registry SHALL merge extension subcommands into the existing `backlog` command tree +- **AND** SHALL NOT replace the existing loader with only the extension loader +- **AND** both base and extension subcommands SHALL remain accessible under `specfact backlog ...` + #### Scenario: 
Category group command is absent when bundle not installed - **GIVEN** the `govern` bundle is NOT installed diff --git a/openspec/changes/module-migration-01-categorize-and-group/specs/first-run-selection/spec.md b/openspec/changes/module-migration-01-categorize-and-group/specs/first-run-selection/spec.md index 0958013c..956ab8a8 100644 --- a/openspec/changes/module-migration-01-categorize-and-group/specs/first-run-selection/spec.md +++ b/openspec/changes/module-migration-01-categorize-and-group/specs/first-run-selection/spec.md @@ -49,6 +49,15 @@ On a fresh install where no bundles are installed, `specfact init` SHALL present - **THEN** the CLI SHALL NOT show the bundle selection UI - **AND** SHALL run the standard workspace re-initialisation flow +#### Scenario: Workspace-local project-scoped modules suppress first-run flow + +- **GIVEN** a repository already contains category bundle modules under workspace-local `.specfact/modules` +- **AND** those modules are discovered with source `project` +- **WHEN** the user runs `specfact init` +- **THEN** first-run detection SHALL treat the workspace as already initialized +- **AND** the CLI SHALL NOT show first-run bundle selection again +- **AND** SHALL run the standard workspace re-initialisation flow + ### Requirement: `specfact init --profile <name>` installs a named preset non-interactively The system SHALL accept a `--profile <name>` argument on `specfact init` and MUST install the canonical bundle set for that profile without prompting, whether in CI/CD mode or interactive mode. diff --git a/openspec/changes/module-migration-01-categorize-and-group/tasks.md b/openspec/changes/module-migration-01-categorize-and-group/tasks.md index 55d73b5d..327ff992 100644 --- a/openspec/changes/module-migration-01-categorize-and-group/tasks.md +++ b/openspec/changes/module-migration-01-categorize-and-group/tasks.md @@ -20,60 +20,43 @@ Do NOT implement production code for any behavior-changing step until failing-te ## 1. 
Create git worktree branch from dev -- [ ] 1.1 Fetch latest origin and create worktree with feature branch - - [ ] 1.1.1 `git fetch origin` - - [ ] 1.1.2 `git worktree add ../specfact-cli-worktrees/feature/module-migration-01-categorize-and-group -b feature/module-migration-01-categorize-and-group origin/dev` - - [ ] 1.1.3 `cd ../specfact-cli-worktrees/feature/module-migration-01-categorize-and-group` - - [ ] 1.1.4 `git branch --show-current` — verify output is `feature/module-migration-01-categorize-and-group` - - [ ] 1.1.5 `python -m venv .venv && source .venv/bin/activate && pip install -e ".[dev]"` - - [ ] 1.1.6 `hatch env create` - - [ ] 1.1.7 `hatch run smart-test-status` and `hatch run contract-test-status` — confirm baseline green +- [x] 1.1 Fetch latest origin and create worktree with feature branch + - [x] 1.1.1 `git fetch origin` + - [x] 1.1.2 `git worktree add ../specfact-cli-worktrees/feature/module-migration-01-categorize-and-group -b feature/module-migration-01-categorize-and-group origin/dev` + - [x] 1.1.3 `cd ../specfact-cli-worktrees/feature/module-migration-01-categorize-and-group` + - [x] 1.1.4 `git branch --show-current` — verify output is `feature/module-migration-01-categorize-and-group` + - [x] 1.1.5 `python -m venv .venv && source .venv/bin/activate && pip install -e ".[dev]"` + - [x] 1.1.6 `hatch env create` + - [x] 1.1.7 `hatch run smart-test-status` and `hatch run contract-test-status` — confirm baseline green ## 2. Create GitHub issue for change tracking -- [ ] 2.1 Create GitHub issue in nold-ai/specfact-cli - - [ ] 2.1.1 `gh issue create --repo nold-ai/specfact-cli --title "[Change] Module Grouping and Category Command Groups" --label "enhancement,change-proposal" --body "$(cat <<'EOF'` - - ```text - ## Why - - SpecFact CLI exposes 21 flat top-level commands, overwhelming new users. The marketplace foundation (marketplace-01, marketplace-02) now supports signed packages and bundle-level dependency resolution. 
This change introduces category grouping metadata, 5 umbrella group commands, and VS Code-style first-run bundle selection. - - ## What Changes - - - Add `category`, `bundle`, `bundle_group_command`, `bundle_sub_command` to all 21 `module-package.yaml` files - - Create `src/specfact_cli/groups/` with 5 category Typer apps - - Update `bootstrap.py` to mount category groups with compat shims - - Add `category_grouping_enabled` config flag (default `true`) - - Update `specfact init` with `--profile` and `--install` for first-run bundle selection - - *OpenSpec Change Proposal: module-migration-01-categorize-and-group* - ``` - - - [ ] 2.1.2 Capture issue number and URL from output - - [ ] 2.1.3 Update `openspec/changes/module-migration-01-categorize-and-group/proposal.md` Source Tracking section with issue number, URL, and status `open` +- [x] 2.1 Create GitHub issue in nold-ai/specfact-cli + - [x] 2.1.1 `gh issue create --repo nold-ai/specfact-cli ...` + - [x] 2.1.2 Capture issue number and URL from output + - [x] 2.1.3 Update `openspec/changes/module-migration-01-categorize-and-group/proposal.md` Source Tracking section with issue number, URL, and status `open` ## 3. 
Phase 1 — Add category metadata to all module-package.yaml files (TDD) ### 3.1 Write tests for manifest validation (expect failure) -- [ ] 3.1.1 Create `tests/unit/registry/test_module_grouping.py` -- [ ] 3.1.2 Test: `module-package.yaml` with `category: codebase` passes validation -- [ ] 3.1.3 Test: `module-package.yaml` with `category: unknown` raises `ModuleManifestError` -- [ ] 3.1.4 Test: `module-package.yaml` without `category` field mounts as ungrouped flat command (no error, warning logged) -- [ ] 3.1.5 Test: `bundle_group_command` mismatch vs canonical category raises `ModuleManifestError` -- [ ] 3.1.6 Test: core-category modules have no `bundle` or `bundle_group_command` -- [ ] 3.1.7 Test: `registry.group_modules_by_category()` returns correct grouping dict from a list of module manifests -- [ ] 3.1.8 Run tests: `hatch test -- tests/unit/registry/test_module_grouping.py -v` (expect failures — record in TDD_EVIDENCE.md) +- [x] 3.1.1 Create `tests/unit/registry/test_module_grouping.py` +- [x] 3.1.2 Test: `module-package.yaml` with `category: codebase` passes validation +- [x] 3.1.3 Test: `module-package.yaml` with `category: unknown` raises `ModuleManifestError` +- [x] 3.1.4 Test: `module-package.yaml` without `category` field mounts as ungrouped flat command (no error, warning logged) +- [x] 3.1.5 Test: `bundle_group_command` mismatch vs canonical category raises `ModuleManifestError` +- [x] 3.1.6 Test: core-category modules have no `bundle` or `bundle_group_command` +- [x] 3.1.7 Test: `registry.group_modules_by_category()` returns correct grouping dict from a list of module manifests +- [x] 3.1.8 Run tests: `hatch test -- tests/unit/registry/test_module_grouping.py -v` (expect failures — record in TDD_EVIDENCE.md) ### 3.2 Implement category field validation in registry -- [ ] 3.2.1 Add `category`, `bundle`, `bundle_group_command`, `bundle_sub_command` fields (Optional[str]) to `ModulePackage` Pydantic model in 
`src/specfact_cli/registry/module_packages.py` -- [ ] 3.2.2 Add validation: if `category` is set and not in `{"core","project","backlog","codebase","spec","govern"}` → raise `ModuleManifestError` -- [ ] 3.2.3 Add validation: if `category` != `"core"` and `bundle_group_command` does not match canonical mapping → raise `ModuleManifestError` -- [ ] 3.2.4 Add `group_modules_by_category()` function with `@require` and `@beartype` decorators -- [ ] 3.2.5 Add warning log when `category` field is absent -- [ ] 3.2.6 `hatch test -- tests/unit/registry/test_module_grouping.py -v` — verify tests pass +- [x] 3.2.1 Add `category`, `bundle`, `bundle_group_command`, `bundle_sub_command` fields (Optional[str]) to `ModulePackage` Pydantic model in `src/specfact_cli/registry/module_packages.py` +- [x] 3.2.2 Add validation: if `category` is set and not in `{"core","project","backlog","codebase","spec","govern"}` → raise `ModuleManifestError` +- [x] 3.2.3 Add validation: if `category` != `"core"` and `bundle_group_command` does not match canonical mapping → raise `ModuleManifestError` +- [x] 3.2.4 Add `group_modules_by_category()` function with `@require` and `@beartype` decorators +- [x] 3.2.5 Add warning log when `category` field is absent +- [x] 3.2.6 `hatch test -- tests/unit/registry/test_module_grouping.py -v` — verify tests pass ### 3.3 Add category metadata to all 21 module-package.yaml files @@ -81,257 +64,260 @@ Apply the canonical category assignments: **Core (no bundle fields):** -- [ ] 3.3.1 `modules/init/module-package.yaml` → `category: core`, `bundle_sub_command: init` -- [ ] 3.3.2 `modules/auth/module-package.yaml` → `category: core`, `bundle_sub_command: auth` -- [ ] 3.3.3 `modules/module_registry/module-package.yaml` → `category: core`, `bundle_sub_command: module` -- [ ] 3.3.4 `modules/upgrade/module-package.yaml` → `category: core`, `bundle_sub_command: upgrade` +- [x] 3.3.1 `modules/init/module-package.yaml` → `category: core`, `bundle_sub_command: init` +- [x] 
3.3.2 `modules/auth/module-package.yaml` → `category: core`, `bundle_sub_command: auth` +- [x] 3.3.3 `modules/module_registry/module-package.yaml` → `category: core`, `bundle_sub_command: module` +- [x] 3.3.4 `modules/upgrade/module-package.yaml` → `category: core`, `bundle_sub_command: upgrade` **Project bundle (`specfact-project`, group command `project`):** -- [ ] 3.3.5 `modules/project/module-package.yaml` → `category: project`, `bundle: specfact-project`, `bundle_group_command: project`, `bundle_sub_command: project` -- [ ] 3.3.6 `modules/plan/module-package.yaml` → `category: project`, `bundle: specfact-project`, `bundle_group_command: project`, `bundle_sub_command: plan` -- [ ] 3.3.7 `modules/import_cmd/module-package.yaml` → `category: project`, `bundle: specfact-project`, `bundle_group_command: project`, `bundle_sub_command: import` -- [ ] 3.3.8 `modules/sync/module-package.yaml` → `category: project`, `bundle: specfact-project`, `bundle_group_command: project`, `bundle_sub_command: sync` -- [ ] 3.3.9 `modules/migrate/module-package.yaml` → `category: project`, `bundle: specfact-project`, `bundle_group_command: project`, `bundle_sub_command: migrate` +- [x] 3.3.5 `modules/project/module-package.yaml` → `category: project`, `bundle: specfact-project`, `bundle_group_command: project`, `bundle_sub_command: project` +- [x] 3.3.6 `modules/plan/module-package.yaml` → `category: project`, `bundle: specfact-project`, `bundle_group_command: project`, `bundle_sub_command: plan` +- [x] 3.3.7 `modules/import_cmd/module-package.yaml` → `category: project`, `bundle: specfact-project`, `bundle_group_command: project`, `bundle_sub_command: import` +- [x] 3.3.8 `modules/sync/module-package.yaml` → `category: project`, `bundle: specfact-project`, `bundle_group_command: project`, `bundle_sub_command: sync` +- [x] 3.3.9 `modules/migrate/module-package.yaml` → `category: project`, `bundle: specfact-project`, `bundle_group_command: project`, `bundle_sub_command: migrate` 
**Backlog bundle (`specfact-backlog`, group command `backlog`):** -- [ ] 3.3.10 `modules/backlog/module-package.yaml` → `category: backlog`, `bundle: specfact-backlog`, `bundle_group_command: backlog`, `bundle_sub_command: backlog` -- [ ] 3.3.11 `modules/policy_engine/module-package.yaml` → `category: backlog`, `bundle: specfact-backlog`, `bundle_group_command: backlog`, `bundle_sub_command: policy` +- [x] 3.3.10 `modules/backlog/module-package.yaml` → `category: backlog`, `bundle: specfact-backlog`, `bundle_group_command: backlog`, `bundle_sub_command: backlog` +- [x] 3.3.11 `modules/policy_engine/module-package.yaml` → `category: backlog`, `bundle: specfact-backlog`, `bundle_group_command: backlog`, `bundle_sub_command: policy` **Codebase bundle (`specfact-codebase`, group command `code`):** -- [ ] 3.3.12 `modules/analyze/module-package.yaml` → `category: codebase`, `bundle: specfact-codebase`, `bundle_group_command: code`, `bundle_sub_command: analyze` -- [ ] 3.3.13 `modules/drift/module-package.yaml` → `category: codebase`, `bundle: specfact-codebase`, `bundle_group_command: code`, `bundle_sub_command: drift` -- [ ] 3.3.14 `modules/validate/module-package.yaml` → `category: codebase`, `bundle: specfact-codebase`, `bundle_group_command: code`, `bundle_sub_command: validate` -- [ ] 3.3.15 `modules/repro/module-package.yaml` → `category: codebase`, `bundle: specfact-codebase`, `bundle_group_command: code`, `bundle_sub_command: repro` +- [x] 3.3.12 `modules/analyze/module-package.yaml` → `category: codebase`, `bundle: specfact-codebase`, `bundle_group_command: code`, `bundle_sub_command: analyze` +- [x] 3.3.13 `modules/drift/module-package.yaml` → `category: codebase`, `bundle: specfact-codebase`, `bundle_group_command: code`, `bundle_sub_command: drift` +- [x] 3.3.14 `modules/validate/module-package.yaml` → `category: codebase`, `bundle: specfact-codebase`, `bundle_group_command: code`, `bundle_sub_command: validate` +- [x] 3.3.15 
`modules/repro/module-package.yaml` → `category: codebase`, `bundle: specfact-codebase`, `bundle_group_command: code`, `bundle_sub_command: repro` **Spec bundle (`specfact-spec`, group command `spec`):** -- [ ] 3.3.16 `modules/contract/module-package.yaml` → `category: spec`, `bundle: specfact-spec`, `bundle_group_command: spec`, `bundle_sub_command: contract` -- [ ] 3.3.17 `modules/spec/module-package.yaml` → `category: spec`, `bundle: specfact-spec`, `bundle_group_command: spec`, `bundle_sub_command: api` (collision avoidance) -- [ ] 3.3.18 `modules/sdd/module-package.yaml` → `category: spec`, `bundle: specfact-spec`, `bundle_group_command: spec`, `bundle_sub_command: sdd` -- [ ] 3.3.19 `modules/generate/module-package.yaml` → `category: spec`, `bundle: specfact-spec`, `bundle_group_command: spec`, `bundle_sub_command: generate` +- [x] 3.3.16 `modules/contract/module-package.yaml` → `category: spec`, `bundle: specfact-spec`, `bundle_group_command: spec`, `bundle_sub_command: contract` +- [x] 3.3.17 `modules/spec/module-package.yaml` → `category: spec`, `bundle: specfact-spec`, `bundle_group_command: spec`, `bundle_sub_command: api` (collision avoidance) +- [x] 3.3.18 `modules/sdd/module-package.yaml` → `category: spec`, `bundle: specfact-spec`, `bundle_group_command: spec`, `bundle_sub_command: sdd` +- [x] 3.3.19 `modules/generate/module-package.yaml` → `category: spec`, `bundle: specfact-spec`, `bundle_group_command: spec`, `bundle_sub_command: generate` **Govern bundle (`specfact-govern`, group command `govern`):** -- [ ] 3.3.20 `modules/enforce/module-package.yaml` → `category: govern`, `bundle: specfact-govern`, `bundle_group_command: govern`, `bundle_sub_command: enforce` -- [ ] 3.3.21 `modules/patch_mode/module-package.yaml` → `category: govern`, `bundle: specfact-govern`, `bundle_group_command: govern`, `bundle_sub_command: patch` +- [x] 3.3.20 `modules/enforce/module-package.yaml` → `category: govern`, `bundle: specfact-govern`, `bundle_group_command: 
govern`, `bundle_sub_command: enforce` +- [x] 3.3.21 `modules/patch_mode/module-package.yaml` → `category: govern`, `bundle: specfact-govern`, `bundle_group_command: govern`, `bundle_sub_command: patch` ### 3.4 Module signing gate (after all module-package.yaml edits) -- [ ] 3.4.1 `hatch run ./scripts/verify-modules-signature.py --require-signature` — expect failures (manifests changed, signatures stale) -- [ ] 3.4.2 Bump version field in each modified module-package.yaml (patch increment per module) -- [ ] 3.4.3 `hatch run python scripts/sign-modules.py --key-file <private-key.pem> src/specfact_cli/modules/*/module-package.yaml` -- [ ] 3.4.4 `hatch run ./scripts/verify-modules-signature.py --require-signature` — confirm fully green +- [x] 3.4.1 `hatch run ./scripts/verify-modules-signature.py --require-signature` — expect failures (manifests changed, signatures stale) +- [x] 3.4.2 Bump version field in each modified module-package.yaml (patch increment per module) +- [x] 3.4.3 `hatch run python scripts/sign-modules.py --key-file <private-key.pem> src/specfact_cli/modules/*/module-package.yaml` +- [x] 3.4.4 `hatch run ./scripts/verify-modules-signature.py --require-signature` — confirm fully green ## 4. 
Phase 2 — Category group commands (TDD) ### 4.1 Write tests for category group bootstrap (expect failure) -- [ ] 4.1.1 Create `tests/unit/registry/test_category_groups.py` -- [ ] 4.1.2 Test: with `category_grouping_enabled=True`, `bootstrap_cli()` registers `code`, `backlog`, `project`, `spec`, `govern` group commands -- [ ] 4.1.3 Test: with `category_grouping_enabled=False`, bootstrap registers flat module commands (no group commands) -- [ ] 4.1.4 Test: `specfact code analyze contracts` routes to the same handler as `specfact analyze contracts` -- [ ] 4.1.5 Test: `specfact govern --help` when govern bundle not installed produces install suggestion -- [ ] 4.1.6 Test: flat shim `specfact validate` emits deprecation warning in Copilot mode -- [ ] 4.1.7 Test: flat shim `specfact validate` is silent in CI/CD mode -- [ ] 4.1.8 Test: `specfact spec api validate` routes correctly (collision avoidance) -- [ ] 4.1.9 Create `tests/unit/groups/test_codebase_group.py` — test group app has expected sub-commands -- [ ] 4.1.10 Run tests: `hatch test -- tests/unit/registry/test_category_groups.py tests/unit/groups/ -v` (expect failures — record in TDD_EVIDENCE.md) +- [x] 4.1.1 Create `tests/unit/registry/test_category_groups.py` +- [x] 4.1.2 Test: with `category_grouping_enabled=True`, `bootstrap_cli()` registers `code`, `backlog`, `project`, `spec`, `govern` group commands +- [x] 4.1.3 Test: with `category_grouping_enabled=False`, bootstrap registers flat module commands (no group commands) +- [x] 4.1.4 Test: `specfact code analyze contracts` routes to the same handler as `specfact analyze contracts` +- [x] 4.1.5 Test: `specfact govern --help` when govern bundle not installed produces install suggestion +- [x] 4.1.6 Test: flat shim `specfact validate` emits deprecation warning in Copilot mode +- [x] 4.1.7 Test: flat shim `specfact validate` is silent in CI/CD mode +- [x] 4.1.8 Test: `specfact spec api validate` routes correctly (collision avoidance) +- [x] 4.1.9 Create 
`tests/unit/groups/test_codebase_group.py` — test group app has expected sub-commands +- [x] 4.1.10 Run tests: `hatch test -- tests/unit/registry/test_category_groups.py tests/unit/groups/ -v` (expect failures — record in TDD_EVIDENCE.md) ### 4.2 Create `src/specfact_cli/groups/` package -- [ ] 4.2.1 Create `src/specfact_cli/groups/__init__.py` -- [ ] 4.2.2 Create `src/specfact_cli/groups/project_group.py` +- [x] 4.2.1 Create `src/specfact_cli/groups/__init__.py` +- [x] 4.2.2 Create `src/specfact_cli/groups/project_group.py` - `app = typer.Typer(name="project", help="Project lifecycle commands.", no_args_is_help=True)` - Members: project, plan, import_cmd (as `import`), sync, migrate - `@require` and `@beartype` on `_register_members()` -- [ ] 4.2.3 Create `src/specfact_cli/groups/backlog_group.py` +- [x] 4.2.3 Create `src/specfact_cli/groups/backlog_group.py` - Members: backlog, policy_engine (as `policy`) -- [ ] 4.2.4 Create `src/specfact_cli/groups/codebase_group.py` +- [x] 4.2.4 Create `src/specfact_cli/groups/codebase_group.py` - Members: analyze, drift, validate, repro -- [ ] 4.2.5 Create `src/specfact_cli/groups/spec_group.py` +- [x] 4.2.5 Create `src/specfact_cli/groups/spec_group.py` - Members: contract, spec (as `api`), sdd, generate -- [ ] 4.2.6 Create `src/specfact_cli/groups/govern_group.py` +- [x] 4.2.6 Create `src/specfact_cli/groups/govern_group.py` - Members: enforce, patch_mode (as `patch`) -- [ ] 4.2.7 All group files must use `@icontract` and `@beartype` on all public functions +- [x] 4.2.7 All group files must use `@icontract` and `@beartype` on all public functions ### 4.3 Update `bootstrap.py` to mount category groups -- [ ] 4.3.1 Read `category_grouping_enabled` from config (default `True`) -- [ ] 4.3.2 If `True`: import and mount each group app via `app.add_typer()`; skip flat mounting for grouped modules -- [ ] 4.3.3 Always mount core modules (init, auth, module, upgrade) as flat top-level commands -- [ ] 4.3.4 Implement 
`_register_compat_shims(app)` for all 17 non-core modules: +- [x] 4.3.1 Read `category_grouping_enabled` from config (default `True`) +- [x] 4.3.2 If `True`: import and mount each group app via `app.add_typer()`; skip flat mounting for grouped modules +- [x] 4.3.3 Always mount core modules (init, auth, module, upgrade) as flat top-level commands +- [x] 4.3.4 Implement `_register_compat_shims(app)` for all 17 non-core modules: - Shim emits deprecation warning in Copilot mode, silent in CI/CD mode - Delegates to category group equivalent -- [ ] 4.3.5 Add `@require`, `@ensure`, and `@beartype` to all modified/new bootstrap functions +- [x] 4.3.5 Add `@require`, `@ensure`, and `@beartype` to all modified/new bootstrap functions ### 4.4 Update `cli.py` to register category groups -- [ ] 4.4.1 Confirm category group apps are registered via `bootstrap.py` (no direct `cli.py` changes expected; verify and update if needed) +- [x] 4.4.1 Confirm category group apps are registered via `bootstrap.py` (no direct `cli.py` changes expected; verify and update if needed) ### 4.5 Verify tests pass -- [ ] 4.5.1 `hatch test -- tests/unit/registry/test_category_groups.py tests/unit/groups/ -v` -- [ ] 4.5.2 Record passing-test results in TDD_EVIDENCE.md +- [x] 4.5.1 `hatch test -- tests/unit/registry/test_category_groups.py tests/unit/groups/ -v` +- [x] 4.5.2 Record passing-test results in TDD_EVIDENCE.md ## 5. 
Phase 3 — First-run module selection in `specfact init` (TDD) ### 5.1 Write tests for first-run selection (expect failure) -- [ ] 5.1.1 Create `tests/unit/modules/init/test_first_run_selection.py` -- [ ] 5.1.2 Test: `specfact init --profile solo-developer` installs only `specfact-codebase` (mock installer) -- [ ] 5.1.3 Test: `specfact init --profile enterprise-full-stack` installs all 5 bundles -- [ ] 5.1.4 Test: `specfact init --profile nonexistent` exits non-zero with error listing valid profiles -- [ ] 5.1.5 Test: `specfact init --install backlog,codebase` installs `specfact-backlog` and `specfact-codebase` -- [ ] 5.1.6 Test: `specfact init --install all` installs all 5 bundles -- [ ] 5.1.7 Test: `specfact init --install widgets` exits non-zero with unknown bundle error -- [ ] 5.1.8 Test: second run of init (bundles already installed) skips first-run selection flow -- [ ] 5.1.9 Test: `spec` bundle installation triggers automatic `project` bundle dep install (mock marketplace-02 dep resolver) -- [ ] 5.1.10 Run tests: `hatch test -- tests/unit/modules/init/test_first_run_selection.py -v` (expect failures — record in TDD_EVIDENCE.md) +- [x] 5.1.1 Create `tests/unit/modules/init/test_first_run_selection.py` +- [x] 5.1.2 Test: `specfact init --profile solo-developer` installs only `specfact-codebase` (mock installer) +- [x] 5.1.3 Test: `specfact init --profile enterprise-full-stack` installs all 5 bundles +- [x] 5.1.4 Test: `specfact init --profile nonexistent` exits non-zero with error listing valid profiles +- [x] 5.1.5 Test: `specfact init --install backlog,codebase` installs `specfact-backlog` and `specfact-codebase` +- [x] 5.1.6 Test: `specfact init --install all` installs all 5 bundles +- [x] 5.1.7 Test: `specfact init --install widgets` exits non-zero with unknown bundle error +- [x] 5.1.8 Test: second run of init (bundles already installed) skips first-run selection flow +- [x] 5.1.9 Test: `spec` bundle installation triggers automatic `project` bundle dep 
install (mock marketplace-02 dep resolver) +- [x] 5.1.10 Run tests: `hatch test -- tests/unit/modules/init/test_first_run_selection.py -v` (expect failures — record in TDD_EVIDENCE.md) ### 5.2 Implement first-run selection in `specfact init` -- [ ] 5.2.1 Add `--profile` and `--install` parameters to `specfact init` command in `src/specfact_cli/modules/init/src/commands.py` -- [ ] 5.2.2 Implement `is_first_run()` detection (no category bundle installed) -- [ ] 5.2.3 Implement Copilot-mode interactive bundle selection UI using `rich` (multi-select checkboxes) -- [ ] 5.2.4 Implement profile preset resolution: map profile name → bundle list -- [ ] 5.2.5 Implement `--install` flag parsing: comma-separated bundle names + `all` alias -- [ ] 5.2.6 Implement bundle installation by calling `module_installer.install_module()` for each selected bundle -- [ ] 5.2.7 Implement graceful degradation when marketplace-02 dep resolver unavailable (warn, skip dep resolution) -- [ ] 5.2.8 Add `@require`, `@ensure`, `@beartype` on all new public functions -- [ ] 5.2.9 `hatch test -- tests/unit/modules/init/test_first_run_selection.py -v` — verify tests pass +- [x] 5.2.1 Add `--profile` and `--install` parameters to `specfact init` command in `src/specfact_cli/modules/init/src/commands.py` +- [x] 5.2.2 Implement `is_first_run()` detection (no category bundle installed) +- [x] 5.2.3 Implement Copilot-mode interactive bundle selection UI using `rich` (multi-select checkboxes) +- [x] 5.2.4 Implement profile preset resolution: map profile name → bundle list +- [x] 5.2.5 Implement `--install` flag parsing: comma-separated bundle names + `all` alias +- [x] 5.2.6 Implement bundle installation by calling `module_installer.install_module()` for each selected bundle +- [x] 5.2.7 Implement graceful degradation when marketplace-02 dep resolver unavailable (warn, skip dep resolution) +- [x] 5.2.8 Add `@require`, `@ensure`, `@beartype` on all new public functions +- [x] 5.2.9 `hatch test -- 
tests/unit/modules/init/test_first_run_selection.py -v` — verify tests pass ### 5.3 Record passing-test evidence -- [ ] 5.3.1 Update TDD_EVIDENCE.md with passing-test run for first-run selection (timestamp, command, summary) +- [x] 5.3.1 Update TDD_EVIDENCE.md with passing-test run for first-run selection (timestamp, command, summary) ## 6. Integration and E2E tests -- [ ] 6.1 Create `tests/integration/test_category_group_routing.py` - - [ ] 6.1.1 Test: `specfact code analyze --help` returns non-zero-error-free output (CLI integration) - - [ ] 6.1.2 Test: `specfact backlog --help` lists backlog and policy sub-commands - - [ ] 6.1.3 Test: deprecated flat command `specfact validate --help` still returns help without error -- [ ] 6.2 Create `tests/e2e/test_first_run_init.py` - - [ ] 6.2.1 Test: `specfact init --profile solo-developer` in a temp workspace completes without error - - [ ] 6.2.2 Test: after `--profile solo-developer`, `specfact code analyze --help` is available -- [ ] 6.3 Run integration and E2E suites: `hatch test -- tests/integration/test_category_group_routing.py tests/e2e/test_first_run_init.py -v` +- [x] 6.1 Create `tests/integration/test_category_group_routing.py` + - [x] 6.1.1 Test: `specfact code analyze --help` returns non-zero-error-free output (CLI integration) + - [x] 6.1.2 Test: `specfact backlog --help` lists backlog and policy sub-commands + - [x] 6.1.3 Test: deprecated flat command `specfact validate --help` still returns help without error +- [x] 6.2 Create `tests/e2e/test_first_run_init.py` + - [x] 6.2.1 Test: `specfact init --profile solo-developer` in a temp workspace completes without error + - [x] 6.2.2 Test: after `--profile solo-developer`, `specfact code analyze --help` is available +- [x] 6.3 Run integration and E2E suites: `hatch test -- tests/integration/test_category_group_routing.py tests/e2e/test_first_run_init.py -v` ## 7. 
Quality gates -- [ ] 7.1 Format - - [ ] 7.1.1 `hatch run format` - - [ ] 7.1.2 Fix any formatting issues +- [x] 7.1 Format + - [x] 7.1.1 `hatch run format` + - [x] 7.1.2 Fix any formatting issues -- [ ] 7.2 Type checking - - [ ] 7.2.1 `hatch run type-check` - - [ ] 7.2.2 Fix any basedpyright strict errors +- [x] 7.2 Type checking + - [x] 7.2.1 `hatch run type-check` + - [x] 7.2.2 Fix any basedpyright strict errors -- [ ] 7.3 Full lint suite - - [ ] 7.3.1 `hatch run lint` - - [ ] 7.3.2 Fix any lint errors +- [x] 7.3 Full lint suite + - [x] 7.3.1 `hatch run lint` + - [x] 7.3.2 Fix any lint errors -- [ ] 7.4 YAML lint - - [ ] 7.4.1 `hatch run yaml-lint` - - [ ] 7.4.2 Fix any YAML formatting issues (includes module-package.yaml files) +- [x] 7.4 YAML lint + - [x] 7.4.1 `hatch run yaml-lint` + - [x] 7.4.2 Fix any YAML formatting issues (includes module-package.yaml files) -- [ ] 7.5 Contract-first testing - - [ ] 7.5.1 `hatch run contract-test` - - [ ] 7.5.2 Verify all contracts pass +- [x] 7.5 Contract-first testing + - [x] 7.5.1 `hatch run contract-test` + - [x] 7.5.2 Verify all contracts pass -- [ ] 7.6 Smart test suite - - [ ] 7.6.1 `hatch run smart-test` - - [ ] 7.6.2 Verify no regressions +- [x] 7.6 Smart test suite + - [x] 7.6.1 `hatch run smart-test` + - [x] 7.6.2 Verify no regressions -- [ ] 7.7 Module signing gate - - [ ] 7.7.1 `hatch run ./scripts/verify-modules-signature.py --require-signature` - - [ ] 7.7.2 If any modules fail (due to field additions in step 3): re-sign with `hatch run python scripts/sign-modules.py --key-file <private-key.pem> <module-package.yaml ...>` - - [ ] 7.7.3 Re-run verification until fully green +- [x] 7.7 Module signing gate + - [x] 7.7.1 `hatch run ./scripts/verify-modules-signature.py --require-signature` + - [x] 7.7.2 If any modules fail (due to field additions in step 3): re-sign with `hatch run python scripts/sign-modules.py --key-file <private-key.pem> <module-package.yaml ...>` + - [x] 7.7.3 Re-run verification until fully 
green ## 8. Documentation research and review -- [ ] 8.1 Identify affected documentation - - [ ] 8.1.1 Review `docs/guides/getting-started.md` — update install and first-run flow with bundle selection UX - - [ ] 8.1.2 Review `docs/reference/commands.md` — update command topology with before/after category group layout - - [ ] 8.1.3 Review `README.md` — update command listing to reflect category group commands and fresh-install view - - [ ] 8.1.4 Review `docs/index.md` — confirm landing page reflects simplified command surface +- [x] 8.1 Identify affected documentation + - [x] 8.1.1 Review `docs/getting-started/first-steps.md` — update install and first-run flow with bundle selection UX + - [x] 8.1.2 Review `docs/reference/commands.md` — update command topology with before/after category group layout + - [x] 8.1.3 Review `README.md` — update command listing to reflect category group commands and fresh-install view + - [x] 8.1.4 Review `docs/index.md` — confirm landing page reflects simplified command surface -- [ ] 8.2 Update `docs/guides/getting-started.md` - - [ ] 8.2.1 Verify Jekyll front-matter is preserved (title, layout, nav_order, permalink) - - [ ] 8.2.2 Add "First-run bundle selection" section with interactive UI screenshot/ASCII art - - [ ] 8.2.3 Add profile preset table with bundle contents - - [ ] 8.2.4 Add `specfact init --profile <name>` usage for CI/CD +- [x] 8.2 Update `docs/getting-started/first-steps.md` + - [x] 8.2.1 Verify Jekyll front-matter is preserved (title, layout, nav_order, permalink) + - [x] 8.2.2 Add "First-run bundle selection" guidance + - [x] 8.2.3 Add profile preset and bundle selection examples + - [x] 8.2.4 Add `specfact init --profile <name>` usage for CI/CD -- [ ] 8.3 Create `docs/reference/module-categories.md` (new page) - - [ ] 8.3.1 Add Jekyll front-matter: `layout: default`, `title: Module Categories`, `nav_order: <appropriate>`, `permalink: /reference/module-categories/` - - [ ] 8.3.2 Write canonical category assignment 
table (all 21 modules) - - [ ] 8.3.3 Write bundle contents section per category - - [ ] 8.3.4 Write profile presets section - - [ ] 8.3.5 Write before/after command topology section +- [x] 8.3 Create `docs/reference/module-categories.md` (new page) + - [x] 8.3.1 Add Jekyll front-matter: `layout: default`, `title: Module Categories`, `nav_order: <appropriate>`, `permalink: /reference/module-categories/` + - [x] 8.3.2 Write canonical category assignment table (all 21 modules) + - [x] 8.3.3 Write bundle contents section per category + - [x] 8.3.4 Write profile presets section + - [x] 8.3.5 Write before/after command topology section -- [ ] 8.4 Update `docs/_layouts/default.html` - - [ ] 8.4.1 Add "Module Categories" link to sidebar navigation under Reference section +- [x] 8.4 Update `docs/_layouts/default.html` + - [x] 8.4.1 Add "Module Categories" link to sidebar navigation under Reference section -- [ ] 8.5 Update `README.md` - - [ ] 8.5.1 Update command listing: show core commands + category group commands - - [ ] 8.5.2 Add brief mention of first-run bundle selection +- [x] 8.5 Update `README.md` + - [x] 8.5.1 Update command listing: show core commands + category group commands + - [x] 8.5.2 Add brief mention of first-run bundle selection -- [ ] 8.6 Verify docs build - - [ ] 8.6.1 Check all Markdown links resolve - - [ ] 8.6.2 Check front-matter is valid YAML +- [x] 8.6 Verify docs build + - [x] 8.6.1 Check all Markdown links resolve (changed docs paths validated) + - [x] 8.6.2 Check front-matter is valid YAML (`hatch run yaml-lint`) ## 9. 
Version and changelog -- [ ] 9.1 Determine version bump: **minor** (new feature: category groups, first-run selection; feature/* branch) - - [ ] 9.1.1 Confirm current version in `pyproject.toml` - - [ ] 9.1.2 Confirm bump is minor (e.g., `0.X.Y → 0.(X+1).0`) - - [ ] 9.1.3 Request explicit confirmation from user before applying bump - -- [ ] 9.2 Sync version across all files - - [ ] 9.2.1 `pyproject.toml` - - [ ] 9.2.2 `setup.py` - - [ ] 9.2.3 `src/__init__.py` (if present) - - [ ] 9.2.4 `src/specfact_cli/__init__.py` - - [ ] 9.2.5 Verify all four files show the same version - -- [ ] 9.3 Update `CHANGELOG.md` - - [ ] 9.3.1 Add new section `## [X.Y.Z] - 2026-MM-DD` - - [ ] 9.3.2 Add `### Added` subsection: +- [x] 9.1 Determine version bump: **minor** (new feature: category groups, first-run selection; feature/* branch) + - [x] 9.1.1 Confirm current version in `pyproject.toml` + - [x] 9.1.2 Confirm bump is minor (e.g., `0.X.Y → 0.(X+1).0`) + - [x] 9.1.3 Request explicit confirmation from user before applying bump + +- [x] 9.2 Sync version across all files + - [x] 9.2.1 `pyproject.toml` + - [x] 9.2.2 `setup.py` + - [x] 9.2.3 `src/__init__.py` (if present) + - [x] 9.2.4 `src/specfact_cli/__init__.py` + - [x] 9.2.5 Verify all four files show the same version + +- [x] 9.3 Update `CHANGELOG.md` + - [x] 9.3.1 Add new section `## [X.Y.Z] - 2026-MM-DD` + - [x] 9.3.2 Add `### Added` subsection: - Category group commands: `specfact project`, `specfact backlog`, `specfact code`, `specfact spec`, `specfact govern` - `module-grouping` metadata fields in `module-package.yaml` for all 21 modules - First-run interactive bundle selection in `specfact init` - `--profile` and `--install` flags for `specfact init` - 4 workflow profile presets: solo-developer, backlog-team, api-first-team, enterprise-full-stack - `category_grouping_enabled` config flag (default `true`) - - [ ] 9.3.3 Add `### Changed` subsection: + - [x] 9.3.3 Add `### Changed` subsection: - `specfact --help` now shows 
category group commands when bundles are installed - Bootstrap mounts category groups by default - - [ ] 9.3.4 Add `### Deprecated` subsection: + - [x] 9.3.4 Add `### Deprecated` subsection: - All 17 non-core flat top-level commands are deprecated in favor of category group equivalents (removal in next major version) - - [ ] 9.3.5 Reference GitHub issue number + - [x] 9.3.5 Reference GitHub issue number ## 10. Create PR to dev -- [ ] 10.1 Verify TDD_EVIDENCE.md is complete (failing-before and passing-after evidence for all behavior changes) +- [x] 10.1 Verify TDD_EVIDENCE.md is complete (failing-before and passing-after evidence for all behavior changes) -- [ ] 10.2 Prepare commit - - [ ] 10.2.1 `git add src/specfact_cli/groups/ src/specfact_cli/registry/ src/specfact_cli/modules/*/module-package.yaml src/specfact_cli/modules/init/src/ docs/ README.md CHANGELOG.md pyproject.toml setup.py src/specfact_cli/__init__.py openspec/changes/module-migration-01-categorize-and-group/` - - [ ] 10.2.2 `git commit -m "feat: add category group commands and first-run bundle selection (#<issue>)"` - - [ ] 10.2.3 (If GPG signing required) provide `git commit -S -m "..."` for user to run locally - - [ ] 10.2.4 `git push -u origin feature/module-migration-01-categorize-and-group` +- [x] 10.2 Prepare commit + - [x] 10.2.1 `git add src/specfact_cli/groups/ src/specfact_cli/registry/ src/specfact_cli/modules/*/module-package.yaml src/specfact_cli/modules/init/src/ docs/ README.md CHANGELOG.md pyproject.toml setup.py src/specfact_cli/__init__.py openspec/changes/module-migration-01-categorize-and-group/` + - [x] 10.2.2 `git commit -m "feat: add category group commands and first-run bundle selection (#<issue>)"` + - [x] 10.2.3 (If GPG signing required) provide `git commit -S -m "..."` for user to run locally + - [x] 10.2.4 `git push -u origin feature/module-migration-01-categorize-and-group` + - [x] 10.2.5 Note: direct documentation updates on `dev` branch are applied without creating a 
new PR-to-dev. -- [ ] 10.3 Create PR via gh CLI - - [ ] 10.3.1 `gh pr create --repo nold-ai/specfact-cli --base dev --head feature/module-migration-01-categorize-and-group --title "feat: Module Grouping and Category Command Groups (#<issue>)" --body "$(cat <<'EOF' ... EOF)"` +- [x] 10.3 Create PR via gh CLI + - [x] 10.3.1 `gh pr create --repo nold-ai/specfact-cli --base dev --head feature/module-migration-01-categorize-and-group --title "feat: Module Grouping and Category Command Groups (#<issue>)" --body "$(cat <<'EOF' ... EOF)"` - Body: Summary bullets (3 max), Test plan checklist, OpenSpec change ID, issue reference - - [ ] 10.3.2 Capture PR URL + - [x] 10.3.2 Capture PR URL + - [x] 10.3.3 Historical note: completed via merged PR #331. -- [ ] 10.4 Link PR to project board - - [ ] 10.4.1 `gh project item-add 1 --owner nold-ai --url <PR_URL>` +- [x] 10.4 Link PR to project board + - [x] 10.4.1 `gh project item-add 1 --owner nold-ai --url <PR_URL>` + - [x] 10.4.2 Historical note: completed as part of PR #331 flow. 
-- [ ] 10.5 Verify PR - - [ ] 10.5.1 Confirm base is `dev`, head is `feature/module-migration-01-categorize-and-group` - - [ ] 10.5.2 Confirm CI checks are running (tests.yml, specfact.yml) +- [x] 10.5 Verify PR + - [x] 10.5.1 Confirm base is `dev`, head is `feature/module-migration-01-categorize-and-group` + - [x] 10.5.2 Confirm CI checks are running (tests.yml, specfact.yml) --- diff --git a/openspec/changes/module-migration-04-remove-flat-shims/CHANGE_VALIDATION.md b/openspec/changes/module-migration-04-remove-flat-shims/CHANGE_VALIDATION.md new file mode 100644 index 00000000..41af7738 --- /dev/null +++ b/openspec/changes/module-migration-04-remove-flat-shims/CHANGE_VALIDATION.md @@ -0,0 +1,84 @@ +# Change Validation Report: module-migration-04-remove-flat-shims + +**Validation Date**: 2026-02-28T01:06:06+01:00 +**Change Proposal**: [proposal.md](./proposal.md) +**Validation Method**: Dry-run validation per /wf-validate-change workflow; OpenSpec validate --strict; dependency grep. + +## Executive Summary + +- **Breaking Changes**: 1 (intentional): removal of 17 flat CLI command names from root surface. +- **Dependent Files**: 4 affected (1 source, 3 test files). +- **Impact Level**: Medium (breaking UX; migration path documented). +- **Validation Result**: Pass +- **User Decision**: N/A (change is intentionally breaking; no scope extension requested). + +## Breaking Changes Detected + +### Interface: Root CLI command list + +- **Type**: Command removal (17 flat shim names no longer registered). +- **Old behaviour**: `specfact --help` listed core + category groups + 17 flat shims (e.g. `validate`, `analyze`, `plan`). `specfact validate ...` delegated to `specfact code validate ...` with optional deprecation message. +- **New behaviour**: `specfact --help` lists only core + category groups. `specfact validate` returns "No such command". +- **Breaking**: Yes (by design for 0.40.x). 
+- **Dependent files**: + - **tests/unit/registry/test_category_groups.py**: `test_flat_shim_validate_emits_deprecation_in_copilot_mode`, `test_flat_shim_validate_silent_in_cicd_mode` — must be removed or rewritten (assert flat command absent or error). + - **tests/integration/test_category_group_routing.py**: `test_validate_shim_help_exits_zero` — must be removed or changed to assert `specfact code validate --help` (or assert `specfact validate` fails). + - **tests/integration/commands/test_validate_sidecar.py**: Invokes `app` with `["validate", "sidecar", ...]` — should be updated to `["code", "validate", "sidecar", ...]` for 0.40.x. + +## Dependencies Affected + +### Critical updates required + +- **src/specfact_cli/registry/module_packages.py**: Remove `FLAT_TO_GROUP`, `_make_shim_loader()`, and the shim-registration loop in `_register_category_groups_and_shims()`; rename to `_register_category_groups()` and keep only group registration. + +### Recommended updates (tests) + +- **tests/unit/registry/test_category_groups.py**: Remove or rewrite tests that assert flat shim deprecation/silent behaviour; add/keep tests that root help contains only core + groups. +- **tests/integration/test_category_group_routing.py**: Remove `test_validate_shim_help_exits_zero` or replace with test that `specfact validate` fails and suggests `specfact code validate`. +- **tests/integration/commands/test_validate_sidecar.py**: Update invocations from `["validate", "sidecar", ...]` to `["code", "validate", "sidecar", ...]`. + +## Impact Assessment + +- **Code impact**: Single module (`module_packages.py`) reduced by removing shim layer; call sites of flat commands (scripts, docs) must migrate to category form. +- **Test impact**: 3 test files need updates; no new interfaces, only removal of shim behaviour. +- **Documentation impact**: commands.md, getting-started.md, README.md, CHANGELOG.md (0.40.0 BREAKING entry). +- **Release impact**: Minor version 0.40.0 (breaking CLI surface). 
+ +## User Decision + +**Decision**: Proceed with change as proposed (intentionally breaking). +**Rationale**: Migration path documented in proposal; 0.40.x scope agreed. +**Next steps**: Implement per tasks.md; create GitHub issue and link in proposal Source Tracking; run specfact sync bridge to sync issue. + +## Format Validation + +- **proposal.md format**: Pass + - Title format: Correct (`# Change: Remove Flat Shims — ...`) + - Required sections: All present (Why, What Changes, Capabilities, Impact) + - "What Changes" format: Correct (REMOVE/MODIFY/KEEP bullets) + - "Capabilities" section: Present + - "Impact" format: Correct + - Source Tracking section: Present (placeholders for GitHub issue) +- **tasks.md format**: Pass + - Section headers: Correct (`## 1. Branch and prep`, etc.) + - Task format: Correct (`- [ ] 1.1 ...`) + - Sub-task format: Correct + - Config compliance: Branch creation first (1.1), PR last (6.2); GitHub issue task (6.1). Optional: add worktree bootstrap pre-flight in 1.x if using worktree. +- **specs format**: Pass + - Delta headers: REMOVED Requirements, MODIFIED Requirements with Scenario blocks + - Parsed deltas: 2 (1 MODIFIED, 1 REMOVED) +- **design.md**: Not present (optional for this change). +- **Config.yaml compliance**: Pass. + +## OpenSpec Validation + +- **Status**: Pass +- **Validation command**: `openspec validate module-migration-04-remove-flat-shims --strict` +- **Issues found**: 0 (after adding spec delta under `specs/category-command-groups/spec.md`) +- **Issues fixed**: 1 (added spec delta so change has at least one delta with Scenario blocks) +- **Re-validated**: Yes + +## Validation Artifacts + +- Spec delta added: `openspec/changes/module-migration-04-remove-flat-shims/specs/category-command-groups/spec.md` +- Dependency search: `rg FLAT_TO_GROUP|_make_shim_loader|_register_category_groups_and_shims` and `rg validate.*--help|flat shim|deprecation` in tests. 
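The dependency search listed above uses regex alternation; run from a shell, the patterns must be quoted or `|` is parsed as a pipe. A minimal sketch, with `grep -E` as a portable stand-in for `rg` and a hypothetical two-line sample in place of the repo tree:

```shell
# Sketch of the Validation Artifacts dependency search. The pattern is
# quoted so '|' acts as regex alternation, not a shell pipe. The printf
# input is a made-up sample, not actual repo content.
printf '%s\n' 'FLAT_TO_GROUP = {...}' 'def _register_category_groups():' \
  | grep -E 'FLAT_TO_GROUP|_make_shim_loader|_register_category_groups_and_shims'
# → FLAT_TO_GROUP = {...}
```

Only the first sample line matches: the second lacks the `_and_shims` suffix required by the third alternative, which is exactly the distinction the post-removal codebase should show.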
diff --git a/openspec/changes/module-migration-04-remove-flat-shims/proposal.md b/openspec/changes/module-migration-04-remove-flat-shims/proposal.md new file mode 100644 index 00000000..d201a754 --- /dev/null +++ b/openspec/changes/module-migration-04-remove-flat-shims/proposal.md @@ -0,0 +1,39 @@ +# Change: Remove Flat Shims — Category-Only CLI (0.40.x) + +## Why + + +Module-migration-01 introduced category group commands (`code`, `backlog`, `project`, `spec`, `govern`) and backward-compatibility shims so existing flat commands (e.g. `specfact validate`, `specfact analyze`) still worked while emitting a deprecation notice. The proposal stated: "Shims are removed after one major version cycle." + +The 0.40.x series completes that migration: the top-level CLI surface should show only core commands (`init`, `auth`, `module`, `upgrade`) and the five category groups. Scripts and muscle memory that still invoke flat commands must switch to the category form (e.g. `specfact code validate`). This reduces noise in `specfact --help`, clarifies the canonical command topology, and avoids maintaining two code paths. + +## What Changes + + +- **REMOVE**: Registration of compat shims for all 17 non-core flat commands. No more top-level `analyze`, `drift`, `validate`, `repro`, `backlog`, `policy`, `project`, `plan`, `import`, `sync`, `migrate`, `contract`, `spec`, `sdd`, `generate`, `enforce`, `patch` at root. +- **MODIFY**: `_register_category_groups_and_shims()` in `module_packages.py` becomes category-group-only registration (no `FLAT_TO_GROUP` shim loop). Optionally rename to `_register_category_groups()`. +- **REMOVE**: `FLAT_TO_GROUP` and `_make_shim_loader()` (and any shim-specific tests that assert deprecation or shim delegation). +- **KEEP**: Core commands (`init`, `auth`, `module`, `upgrade`) and the five category groups with their sub-commands unchanged. 
`category_grouping_enabled` remains supported; when `false`, behavior can remain "flat" by mounting module commands directly (no groups, no shims). +- **MODIFY**: Docs and CHANGELOG to state the breaking change and migration path (flat → category). + +## Capabilities +### Modified Capabilities + +- `category-command-groups`: Sole top-level surface for non-core module commands. No flat shims; users must use `specfact code analyze`, `specfact backlog ceremony`, etc. +- `command-registry`: Bootstrap no longer registers shim loaders; only group typers and (when grouping disabled) direct module commands. + +### Removed Capabilities + +- Backward-compat shim layer (deprecation delegates) for the 17 flat command names. + + +--- + +## Source Tracking + +<!-- source_repo: nold-ai/specfact-cli --> +- **GitHub Issue**: #330 +- **Issue URL**: <https://github.com/nold-ai/specfact-cli/issues/330> +- **Repository**: nold-ai/specfact-cli +- **Last Synced Status**: proposed +- **Sanitized**: false \ No newline at end of file diff --git a/openspec/changes/module-migration-04-remove-flat-shims/specs/category-command-groups/spec.md b/openspec/changes/module-migration-04-remove-flat-shims/specs/category-command-groups/spec.md new file mode 100644 index 00000000..93ffc4ff --- /dev/null +++ b/openspec/changes/module-migration-04-remove-flat-shims/specs/category-command-groups/spec.md @@ -0,0 +1,38 @@ +# category-command-groups Specification (Delta: Remove Flat Shims) + +## Purpose + +This delta removes the backward-compat shim layer for flat commands. After this change, the root CLI SHALL list only core commands and the five category groups when `category_grouping_enabled` is true. + +## REMOVED Requirements + +### Requirement: Backward-compat shims preserve all existing flat top-level commands + +*(Removed in 0.40.x. 
Flat commands are no longer registered; users MUST use category form.)* + +#### Scenario: Root help lists only core and category groups + +- **GIVEN** `category_grouping_enabled` is `true` +- **WHEN** the user runs `specfact --help` +- **THEN** the output SHALL list only: core commands (`init`, `auth`, `module`, `upgrade`) and the five category groups (`code`, `backlog`, `project`, `spec`, `govern`) +- **AND** SHALL NOT list any of the 17 former flat shim commands (e.g. `analyze`, `validate`, `plan`, `sync`) + +#### Scenario: Flat command name returns error + +- **GIVEN** `category_grouping_enabled` is `true` +- **WHEN** the user runs `specfact validate --help` +- **THEN** the CLI SHALL respond with an error indicating the command is not found +- **AND** SHALL suggest using `specfact code validate` or list available commands + +## MODIFIED Requirements + +### Requirement: Bootstrap mounts category groups when grouping is enabled + +Bootstrap SHALL mount only category group apps (and core commands) when `category_grouping_enabled` is true. It SHALL NOT register any shim loaders for flat command names. + +#### Scenario: No shim registration at bootstrap + +- **GIVEN** `category_grouping_enabled` is `true` +- **WHEN** the CLI bootstrap runs +- **THEN** the registry SHALL contain entries only for core commands and the five category group names +- **AND** SHALL NOT contain entries for `analyze`, `drift`, `validate`, `repro`, `backlog`, `policy`, `project`, `plan`, `import`, `sync`, `migrate`, `contract`, `spec`, `sdd`, `generate`, `enforce`, `patch` as top-level commands diff --git a/openspec/changes/module-migration-04-remove-flat-shims/tasks.md b/openspec/changes/module-migration-04-remove-flat-shims/tasks.md new file mode 100644 index 00000000..0a044a2b --- /dev/null +++ b/openspec/changes/module-migration-04-remove-flat-shims/tasks.md @@ -0,0 +1,40 @@ +# Tasks: module-migration-04-remove-flat-shims + +TDD/SDD order enforced. Version series: **0.40.x**. + +## 1. 
Branch and prep
+
+- [ ] 1.1 Create feature branch from `dev`: `feature/module-migration-04-remove-flat-shims`
+- [ ] 1.2 Ensure module-migration-01 is merged to dev (category groups and shims exist)
+
+## 2. Spec and tests first
+
+- [ ] 2.1 Add spec delta under `specs/category-command-groups/`: when `category_grouping_enabled` is true, root CLI SHALL list only core commands (init, auth, module, upgrade) and the five category groups (code, backlog, project, spec, govern). No flat shim commands.
+- [ ] 2.2 Update or add tests that assert root help contains only core + groups when grouping enabled; remove or rewrite tests that assert flat shim deprecation or `specfact validate --help` success for shim.
+- [ ] 2.3 Run tests and capture **failing** result (shims still present) in `TDD_EVIDENCE.md`.
+
+## 3. Implementation
+
+- [ ] 3.1 In `module_packages.py`: remove the loop that registers shims from `FLAT_TO_GROUP`; keep only category group registration. Rename `_register_category_groups_and_shims` → `_register_category_groups` (or equivalent).
+- [ ] 3.2 Remove `FLAT_TO_GROUP` and `_make_shim_loader()` (and any code only used by shims).
+- [ ] 3.3 Run tests; capture **passing** result in `TDD_EVIDENCE.md`.
+
+## 4. Quality gates
+
+- [ ] 4.1 `hatch run format` and fix
+- [ ] 4.2 `hatch run type-check` and fix
+- [ ] 4.3 `hatch run lint` and fix
+- [ ] 4.4 `hatch run contract-test` and fix
+- [ ] 4.5 `hatch run smart-test` (or smart-test-full) and fix
+
+## 5. Documentation and release
+
+- [ ] 5.1 Update `docs/reference/commands.md`: command topology is category-only (no flat commands).
+- [ ] 5.2 Update `docs/guides/getting-started.md` and `README.md`: command list shows only core + categories; add migration note for users of flat commands.
+- [ ] 5.3 Bump version to **0.40.0** in `pyproject.toml`, `setup.py`, `src/__init__.py`, `src/specfact_cli/__init__.py`.
+- [ ] 5.4 Add CHANGELOG.md entry for 0.40.0: **BREAKING** — removed flat command shims; use `specfact <group> <sub>` (e.g. `specfact code validate`). + +## 6. PR + +- [ ] 6.1 Create GitHub issue for change (title: `[Change] Remove flat shims — category-only CLI (0.40.x)`); link in proposal Source Tracking. +- [ ] 6.2 Open PR to `dev`; reference this change and breaking-change migration path. diff --git a/pyproject.toml b/pyproject.toml index 5ab83626..e0ec179c 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -4,7 +4,7 @@ build-backend = "hatchling.build" [project] name = "specfact-cli" -version = "0.38.2" +version = "0.39.0" description = "The swiss knife CLI for agile DevOps teams. Keep backlog, specs, tests, and code in sync with validation and contract enforcement for new projects and long-lived codebases." readme = "README.md" requires-python = ">=3.11" diff --git a/setup.py b/setup.py index 40e4e0a8..e7a50edb 100644 --- a/setup.py +++ b/setup.py @@ -7,7 +7,7 @@ if __name__ == "__main__": _setup = setup( name="specfact-cli", - version="0.38.2", + version="0.39.0", description=( "The swiss knife CLI for agile DevOps teams. Keep backlog, specs, tests, and code in sync with " "validation and contract enforcement for new projects and long-lived codebases." 
diff --git a/src/__init__.py b/src/__init__.py index 7c5a48a3..71bf4be9 100644 --- a/src/__init__.py +++ b/src/__init__.py @@ -3,4 +3,4 @@ """ # Package version: keep in sync with pyproject.toml, setup.py, src/specfact_cli/__init__.py -__version__ = "0.38.2" +__version__ = "0.39.0" diff --git a/src/specfact_cli/__init__.py b/src/specfact_cli/__init__.py index c09f1d95..d6c68d00 100644 --- a/src/specfact_cli/__init__.py +++ b/src/specfact_cli/__init__.py @@ -8,6 +8,6 @@ - Supporting agile ceremonies and team workflows """ -__version__ = "0.38.2" +__version__ = "0.39.0" __all__ = ["__version__"] diff --git a/src/specfact_cli/cli.py b/src/specfact_cli/cli.py index fcf3df96..a366ca0b 100644 --- a/src/specfact_cli/cli.py +++ b/src/specfact_cli/cli.py @@ -451,13 +451,26 @@ def _make_lazy_typer(cmd_name: str, help_str: str) -> typer.Typer: def _get_command(typer_instance: typer.Typer) -> click.Command: - """Wrapper around typer.main.get_command that returns LazyDelegateGroup for our lazy typers.""" + """Wrapper around typer.main.get_command that returns LazyDelegateGroup for our lazy typers, + and applies flatten-same-name for Typers with _specfact_flatten_same_name. 
+ """ if getattr(typer_instance, "_specfact_lazy_delegate", False): cmd_name = getattr(typer_instance, "_specfact_lazy_cmd_name", "") help_str = getattr(typer_instance, "_specfact_lazy_help_str", "") return _build_lazy_delegate_group(cmd_name, help_str) assert _typer_get_command_original is not None - return _typer_get_command_original(typer_instance) + result = _typer_get_command_original(typer_instance) + flatten_name = getattr(typer_instance, "_specfact_flatten_same_name", None) + if isinstance(flatten_name, str) and isinstance(result, click.Group) and flatten_name in result.commands: + redundant = result.commands.pop(flatten_name) + if isinstance(redundant, click.Group): + for cmd_name, cmd in redundant.commands.items(): + result.add_command(cmd, name=cmd_name) + if result.commands: + for cname in sorted(result.commands.keys()): + cmd = result.commands.pop(cname) + result.add_command(cmd, name=cname) + return result def _get_group_from_info_wrapper( @@ -476,12 +489,23 @@ def _get_group_from_info_wrapper( help_str = getattr(typer_instance, "_specfact_lazy_help_str", "") return _build_lazy_delegate_group(cmd_name, help_str) assert _typer_get_group_from_info_original is not None - return _typer_get_group_from_info_original( + result = _typer_get_group_from_info_original( group_info, pretty_exceptions_short=pretty_exceptions_short, suggest_commands=suggest_commands, rich_markup_mode=rich_markup_mode, ) + flatten_name = getattr(typer_instance, "_specfact_flatten_same_name", None) if typer_instance else None + if isinstance(flatten_name, str) and flatten_name in result.commands: + redundant = result.commands.pop(flatten_name) + if isinstance(redundant, click.Group): + for cmd_name, cmd in redundant.commands.items(): + result.add_command(cmd, name=cmd_name) + if result.commands: + for name in sorted(result.commands.keys()): + cmd = result.commands.pop(name) + result.add_command(cmd, name=name) + return result # Original Typer build functions (set once by 
_patch_typer_build so re-import of cli doesn't overwrite with our wrapper). diff --git a/src/specfact_cli/groups/__init__.py b/src/specfact_cli/groups/__init__.py new file mode 100644 index 00000000..10efd395 --- /dev/null +++ b/src/specfact_cli/groups/__init__.py @@ -0,0 +1,18 @@ +"""Category group commands: project, backlog, code, spec, govern.""" + +from __future__ import annotations + +from specfact_cli.groups.backlog_group import app as backlog_app +from specfact_cli.groups.codebase_group import app as codebase_app +from specfact_cli.groups.govern_group import app as govern_app +from specfact_cli.groups.project_group import app as project_app +from specfact_cli.groups.spec_group import app as spec_app + + +__all__ = [ + "backlog_app", + "codebase_app", + "govern_app", + "project_app", + "spec_app", +] diff --git a/src/specfact_cli/groups/backlog_group.py b/src/specfact_cli/groups/backlog_group.py new file mode 100644 index 00000000..d18dc348 --- /dev/null +++ b/src/specfact_cli/groups/backlog_group.py @@ -0,0 +1,44 @@ +"""Backlog category group (backlog, policy).""" + +from __future__ import annotations + +import typer +from beartype import beartype +from icontract import ensure, require + +from specfact_cli.registry.registry import CommandRegistry + + +_MEMBERS = [ + ("backlog", "backlog"), + ("policy", "policy"), +] + + +@require(lambda app: app is not None) +@ensure(lambda result: result is None) +@beartype +def _register_members(app: typer.Typer) -> None: + """Register member module sub-apps (called when group is first used).""" + for display_name, cmd_name in _MEMBERS: + try: + member_app = CommandRegistry.get_module_typer(cmd_name) + if member_app is not None: + app.add_typer(member_app, name=display_name) + except ValueError: + pass + + +def build_app() -> typer.Typer: + """Build the backlog group Typer with members (lazy; registry must be populated).""" + app = typer.Typer( + name="backlog", + help="Backlog and policy commands.", + 
no_args_is_help=True, + ) + _register_members(app) + app._specfact_flatten_same_name = "backlog" + return app + + +app = build_app() diff --git a/src/specfact_cli/groups/codebase_group.py b/src/specfact_cli/groups/codebase_group.py new file mode 100644 index 00000000..1390c6ca --- /dev/null +++ b/src/specfact_cli/groups/codebase_group.py @@ -0,0 +1,40 @@ +"""Codebase quality category group (analyze, drift, validate, repro).""" + +from __future__ import annotations + +import typer +from beartype import beartype +from icontract import ensure, require + +from specfact_cli.registry.registry import CommandRegistry + + +_MEMBERS = ("analyze", "drift", "validate", "repro") + + +@require(lambda app: app is not None) +@ensure(lambda result: result is None) +@beartype +def _register_members(app: typer.Typer) -> None: + """Register member module sub-apps (called when group is first used).""" + for name in _MEMBERS: + try: + member_app = CommandRegistry.get_module_typer(name) + if member_app is not None: + app.add_typer(member_app, name=name) + except ValueError: + pass + + +def build_app() -> typer.Typer: + """Build the code group Typer with members (lazy; registry must be populated).""" + app = typer.Typer( + name="code", + help="Codebase quality commands: analyze, drift, validate, repro.", + no_args_is_help=True, + ) + _register_members(app) + return app + + +app = build_app() diff --git a/src/specfact_cli/groups/govern_group.py b/src/specfact_cli/groups/govern_group.py new file mode 100644 index 00000000..f66e520f --- /dev/null +++ b/src/specfact_cli/groups/govern_group.py @@ -0,0 +1,43 @@ +"""Governance category group (enforce, patch).""" + +from __future__ import annotations + +import typer +from beartype import beartype +from icontract import ensure, require + +from specfact_cli.registry.registry import CommandRegistry + + +_MEMBERS = [ + ("enforce", "enforce"), + ("patch", "patch"), +] + + +@require(lambda app: app is not None) +@ensure(lambda result: result is None) 
+@beartype +def _register_members(app: typer.Typer) -> None: + """Register member module sub-apps (called when group is first used).""" + for display_name, cmd_name in _MEMBERS: + try: + member_app = CommandRegistry.get_module_typer(cmd_name) + if member_app is not None: + app.add_typer(member_app, name=display_name) + except ValueError: + pass + + +def build_app() -> typer.Typer: + """Build the govern group Typer with members (lazy; registry must be populated).""" + app = typer.Typer( + name="govern", + help="Governance and quality gates: enforce, patch.", + no_args_is_help=True, + ) + _register_members(app) + return app + + +app = build_app() diff --git a/src/specfact_cli/groups/project_group.py b/src/specfact_cli/groups/project_group.py new file mode 100644 index 00000000..b5ce3464 --- /dev/null +++ b/src/specfact_cli/groups/project_group.py @@ -0,0 +1,47 @@ +"""Project lifecycle category group (project, plan, import, sync, migrate).""" + +from __future__ import annotations + +import typer +from beartype import beartype +from icontract import ensure, require + +from specfact_cli.registry.registry import CommandRegistry + + +_MEMBERS = [ + ("project", "project"), + ("plan", "plan"), + ("import", "import"), + ("sync", "sync"), + ("migrate", "migrate"), +] + + +@require(lambda app: app is not None) +@ensure(lambda result: result is None) +@beartype +def _register_members(app: typer.Typer) -> None: + """Register member module sub-apps (called when group is first used).""" + for display_name, cmd_name in _MEMBERS: + try: + member_app = CommandRegistry.get_module_typer(cmd_name) + if member_app is not None: + app.add_typer(member_app, name=display_name) + except ValueError: + pass + + +def build_app() -> typer.Typer: + """Build the project group Typer with members (lazy; registry must be populated).""" + app = typer.Typer( + name="project", + help="Project lifecycle commands.", + no_args_is_help=True, + ) + _register_members(app) + app._specfact_flatten_same_name = 
"project" + return app + + +app = build_app() diff --git a/src/specfact_cli/groups/spec_group.py b/src/specfact_cli/groups/spec_group.py new file mode 100644 index 00000000..585c6c52 --- /dev/null +++ b/src/specfact_cli/groups/spec_group.py @@ -0,0 +1,45 @@ +"""Spec category group (contract, api, sdd, generate) — spec module mounted as 'api' to avoid collision.""" + +from __future__ import annotations + +import typer +from beartype import beartype +from icontract import ensure, require + +from specfact_cli.registry.registry import CommandRegistry + + +_MEMBERS = [ + ("contract", "contract"), + ("api", "spec"), + ("sdd", "sdd"), + ("generate", "generate"), +] + + +@require(lambda app: app is not None) +@ensure(lambda result: result is None) +@beartype +def _register_members(app: typer.Typer) -> None: + """Register member module sub-apps (called when group is first used).""" + for display_name, cmd_name in _MEMBERS: + try: + member_app = CommandRegistry.get_module_typer(cmd_name) + if member_app is not None: + app.add_typer(member_app, name=display_name) + except ValueError: + pass + + +def build_app() -> typer.Typer: + """Build the spec group Typer with members (lazy; registry must be populated).""" + app = typer.Typer( + name="spec", + help="Spec and contract commands: contract, api, sdd, generate.", + no_args_is_help=True, + ) + _register_members(app) + return app + + +app = build_app() diff --git a/src/specfact_cli/models/module_package.py b/src/specfact_cli/models/module_package.py index 8ffb4af0..7395dd9b 100644 --- a/src/specfact_cli/models/module_package.py +++ b/src/specfact_cli/models/module_package.py @@ -153,6 +153,19 @@ class ModulePackageMetadata(BaseModel): description: str | None = Field(default=None, description="Module description for user-facing module details") license: str | None = Field(default=None, description="SPDX license identifier or license name") source: str = Field(default="builtin", description="Module source: builtin, project, user, 
marketplace, or custom") + category: str | None = Field( + default=None, + description="Workflow category: core, project, backlog, codebase, spec, or govern.", + ) + bundle: str | None = Field(default=None, description="Bundle id (e.g. specfact-codebase) for non-core modules.") + bundle_group_command: str | None = Field( + default=None, + description="Top-level group command for this category (e.g. code, backlog).", + ) + bundle_sub_command: str | None = Field( + default=None, + description="Sub-command name within the group (e.g. analyze, validate).", + ) @beartype @ensure(lambda result: isinstance(result, list), "Validated bridges must be returned as a list") diff --git a/src/specfact_cli/modules/analyze/module-package.yaml b/src/specfact_cli/modules/analyze/module-package.yaml index 896f7fff..08a573d0 100644 --- a/src/specfact_cli/modules/analyze/module-package.yaml +++ b/src/specfact_cli/modules/analyze/module-package.yaml @@ -1,7 +1,11 @@ name: analyze -version: 0.1.0 +version: 0.1.1 commands: - analyze +category: codebase +bundle: specfact-codebase +bundle_group_command: code +bundle_sub_command: analyze command_help: analyze: Analyze codebase for contract coverage and quality pip_dependencies: [] @@ -11,9 +15,9 @@ core_compatibility: '>=0.28.0,<1.0.0' publisher: name: nold-ai url: https://github.com/nold-ai/specfact-cli-modules - email: oss@nold.ai + email: hello@noldai.com description: Analyze codebase quality, contracts, and architecture signals. 
license: Apache-2.0 integrity: - checksum: sha256:49d908578ab91e142cff50a27d9b15fff3a30cf790597eecbea1910e38a754b6 - signature: CqsLSUUx3DYa0a8F56/OW4QFt6TDhx1OueAwI0tYC892S7RlvaF5JEwKcUXujKD2IQRoOKUQ7d0Gdqs8Kwh2Cw== + checksum: sha256:d57826fb72253cf65a191bace15cb1a6b7551e844b80a4bef94e9cf861727bde + signature: /9/vp39C0v8ywsHOY3hBMyxbSNqYf5nbz1Fa9gw0KmNKclBIhfYj/JZzi7R56iYZaU5w8YsjLEj4/IspV2JdCg== diff --git a/src/specfact_cli/modules/auth/module-package.yaml b/src/specfact_cli/modules/auth/module-package.yaml index c15b2f14..2100cc26 100644 --- a/src/specfact_cli/modules/auth/module-package.yaml +++ b/src/specfact_cli/modules/auth/module-package.yaml @@ -1,7 +1,9 @@ name: auth -version: 0.1.0 +version: 0.1.1 commands: - auth +category: core +bundle_sub_command: auth command_help: auth: Authenticate with DevOps providers (GitHub, Azure DevOps) pip_dependencies: [] @@ -11,9 +13,9 @@ core_compatibility: '>=0.28.0,<1.0.0' publisher: name: nold-ai url: https://github.com/nold-ai/specfact-cli-modules - email: oss@nold.ai + email: hello@noldai.com description: Authenticate SpecFact with supported DevOps providers. 
license: Apache-2.0 integrity: - checksum: sha256:561fc420c18f9702b1cbb396cb1c0443909646ad8857f508d532d648fe812a9d - signature: IUFyErHdSMMRtdGCUjDZhkU6hujDv1J5IHQXKYelK4RGqeekYUFer13IeG7S1xZZ5ckmvsgF1592UsTCSV2BAA== + checksum: sha256:358844d5b8d1b5ca829e62cd52d0719cc4cc347459bcedd350a0ddac0de5e387 + signature: a46QWufONaLsbIiUqvkEPJ92Fs4KgN301dfDvOrOg+c3SYki2aw1Ofu8YVDaB6ClsgVAtWwQz6P8kiqGUTX1AA== diff --git a/src/specfact_cli/modules/backlog/module-package.yaml b/src/specfact_cli/modules/backlog/module-package.yaml index 4334ee18..c39f1c49 100644 --- a/src/specfact_cli/modules/backlog/module-package.yaml +++ b/src/specfact_cli/modules/backlog/module-package.yaml @@ -1,7 +1,11 @@ name: backlog -version: 0.1.6 +version: 0.1.7 commands: - backlog +category: backlog +bundle: specfact-backlog +bundle_group_command: backlog +bundle_sub_command: backlog command_help: backlog: Backlog refinement and template management pip_dependencies: [] @@ -24,9 +28,9 @@ service_bridges: publisher: name: nold-ai url: https://github.com/nold-ai/specfact-cli-modules - email: oss@nold.ai + email: hello@noldai.com description: Manage backlog ceremonies, refinement, and dependency insights. 
license: Apache-2.0 integrity: - checksum: sha256:a3b033ef35a6a95e1c40ffe28e112cb1683af5051dd813038bacf1cd76bfd7ad - signature: gHQkRqNpRRpxwRmFiHSHaSpq8/rwKvv1v/4Wjt8pRl0Z2VFTVF1DStb2XwgZlE0Bpg77n++G5mIl7KkM7MyGBQ== + checksum: sha256:8e7c0b8636d5ef39ba3b3b1275d67f68bde017e1328efd38f091f97152256c7f + signature: RK6YZCqmWWfb8OWCsRX6Qic1jqiqGdaDrcJmOYLLI3epz48LWx7sx3ZcIHzYGNf8VLg1q0tAnpTfsxfC4nm7DQ== diff --git a/src/specfact_cli/modules/contract/module-package.yaml b/src/specfact_cli/modules/contract/module-package.yaml index de525a14..fb3fce8c 100644 --- a/src/specfact_cli/modules/contract/module-package.yaml +++ b/src/specfact_cli/modules/contract/module-package.yaml @@ -1,7 +1,11 @@ name: contract -version: 0.1.0 +version: 0.1.1 commands: - contract +category: spec +bundle: specfact-spec +bundle_group_command: spec +bundle_sub_command: contract command_help: contract: Manage OpenAPI contracts for project bundles pip_dependencies: [] @@ -11,9 +15,9 @@ core_compatibility: '>=0.28.0,<1.0.0' publisher: name: nold-ai url: https://github.com/nold-ai/specfact-cli-modules - email: oss@nold.ai + email: hello@noldai.com description: Validate and manage API contracts for project bundles. 
license: Apache-2.0 integrity: - checksum: sha256:ea7526559317a65684db0ecc8eaccd06a60dcb94361c95389e7a35cfd31279d3 - signature: iJC2irFaSiWa9fdFjJYGpHlsyWKRNpoFmQSB+PY0ORS6y+gVHAPNGJL7iP5TFYB3I83szCWfmF+2hLeTyc1XDg== + checksum: sha256:e36b4d6b91ec88ec7586265457440babcce2e0ea29db20f25307797c0ffb19c0 + signature: kPeqIYhcF4ri/0q+cKcrCVe4VUsEVT62GPL9uPTV2GJp58Rejkcq1rnaoO2zun0GRWzXI00DMutSCU85P+kECQ== diff --git a/src/specfact_cli/modules/drift/module-package.yaml b/src/specfact_cli/modules/drift/module-package.yaml index 57d282a5..d7a56025 100644 --- a/src/specfact_cli/modules/drift/module-package.yaml +++ b/src/specfact_cli/modules/drift/module-package.yaml @@ -1,7 +1,11 @@ name: drift -version: 0.1.0 +version: 0.1.1 commands: - drift +category: codebase +bundle: specfact-codebase +bundle_group_command: code +bundle_sub_command: drift command_help: drift: Detect drift between code and specifications pip_dependencies: [] @@ -11,9 +15,9 @@ core_compatibility: '>=0.28.0,<1.0.0' publisher: name: nold-ai url: https://github.com/nold-ai/specfact-cli-modules - email: oss@nold.ai + email: hello@noldai.com description: Detect and report drift between code, plans, and specs. 
license: Apache-2.0 integrity: - checksum: sha256:0bf406486ada20fa82273f62a46567a63231be00bca1aa0da335405081d868ee - signature: 22k+r94pdCPh7lP4UYZfvlNRlTQaSasXwzDJWE33I1Pzeq1hPRnyXBylx9x7IvdDqACLTnCIz5j6R8XKCu1WAQ== + checksum: sha256:3ba1feb48d85bb7e87b379ca630edcb2fabbeee998f63c4cbac46158d86c6667 + signature: gcukNmz2mJt+G4sztoWqsQ0DtaXRq+D+Lfitjy0QIvJZUvis4SNdSrBApBsoVB5F079NHpLJNjl24piejZRHBA== diff --git a/src/specfact_cli/modules/enforce/module-package.yaml b/src/specfact_cli/modules/enforce/module-package.yaml index 0777c1d3..af27e153 100644 --- a/src/specfact_cli/modules/enforce/module-package.yaml +++ b/src/specfact_cli/modules/enforce/module-package.yaml @@ -1,7 +1,11 @@ name: enforce -version: 0.1.0 +version: 0.1.1 commands: - enforce +category: govern +bundle: specfact-govern +bundle_group_command: govern +bundle_sub_command: enforce command_help: enforce: Configure quality gates pip_dependencies: [] @@ -12,9 +16,9 @@ core_compatibility: '>=0.28.0,<1.0.0' publisher: name: nold-ai url: https://github.com/nold-ai/specfact-cli-modules - email: oss@nold.ai + email: hello@noldai.com description: Apply governance policies and quality gates to bundles. 
license: Apache-2.0 integrity: - checksum: sha256:444896a5ac47c50cac848e649bb509893ba8c62b382100ccbe2b65660dca6587 - signature: Htf9gy0P5UmGmrDky3oLyl4GgVQoVJ6514f31O1v1Butyns4o49CF6mVZMqbOBh68ToloPUxG96E/1GEzM3QDg== + checksum: sha256:836e08acb3842480c909d95bba2dcfbb5914c33ceb64bd8b85e6e6a948c39ff3 + signature: gOIb0KCdrUwEOSNWEkMCFQ/cne9KG0zT0s09R4SzGKCKmIN2ZI1eCQ4Py+EOU5fPjszMN9R6NEuMmRXaZ+MpCA== diff --git a/src/specfact_cli/modules/generate/module-package.yaml b/src/specfact_cli/modules/generate/module-package.yaml index 8f9e54aa..41dd1a97 100644 --- a/src/specfact_cli/modules/generate/module-package.yaml +++ b/src/specfact_cli/modules/generate/module-package.yaml @@ -1,7 +1,11 @@ name: generate -version: 0.1.0 +version: 0.1.1 commands: - generate +category: spec +bundle: specfact-spec +bundle_group_command: spec +bundle_sub_command: generate command_help: generate: Generate artifacts from SDD and plans pip_dependencies: [] @@ -12,9 +16,9 @@ core_compatibility: '>=0.28.0,<1.0.0' publisher: name: nold-ai url: https://github.com/nold-ai/specfact-cli-modules - email: oss@nold.ai + email: hello@noldai.com description: Generate implementation artifacts from plans and SDD. 
license: Apache-2.0 integrity: - checksum: sha256:c1b01eea7f1de8e71fd61ac02caae27b10e133131683804356f01c9c6171a955 - signature: dKokcDKA0v/xJ4SDWDvFEtCdVFScgyygdoJZlD7LgSQeucpi8csKLMW97XOaOUZCtNUsVm194EWBnIzLf70+CA== + checksum: sha256:d0e6c3749216c231b48f01415a7ed84c5710b49f3826fbad4d74e399fc22f443 + signature: IvszOEUxuOeUTn/CFj7xda8oyWDoDl0uVq/LDsGrv7NoTXhb68xQ0L2XTLDKUcr4end9+6svbaj0v4+opUa5Bg== diff --git a/src/specfact_cli/modules/import_cmd/module-package.yaml b/src/specfact_cli/modules/import_cmd/module-package.yaml index c61999b8..0e08bc61 100644 --- a/src/specfact_cli/modules/import_cmd/module-package.yaml +++ b/src/specfact_cli/modules/import_cmd/module-package.yaml @@ -1,7 +1,11 @@ name: import_cmd -version: 0.1.0 +version: 0.1.1 commands: - import +category: project +bundle: specfact-project +bundle_group_command: project +bundle_sub_command: import command_help: import: Import codebases and external tool projects (e.g., Spec-Kit, OpenSpec, generic-markdown) pip_dependencies: [] @@ -11,9 +15,9 @@ core_compatibility: '>=0.28.0,<1.0.0' publisher: name: nold-ai url: https://github.com/nold-ai/specfact-cli-modules - email: oss@nold.ai + email: hello@noldai.com description: Import projects and requirements from code and external tools. 
license: Apache-2.0 integrity: - checksum: sha256:246f8402200e57947a50f61102ef8057dd75a6ad2768d359774600557daead8b - signature: sCgBiKEtN5r2br/6ZAYmII7XjjZ9Ru8WqqPfXXaE2shn0eQBiDxKKH/ATNBZphtYJbTfXEpQFZg3oVja4jITCg== + checksum: sha256:f1cdb18387d6e64bdbbc59eac070df7aa1e215f5684c82e3e5058e7f3bff2a78 + signature: DeuBD5usns6KCBFNYAim9gDaUAZVWW0jgDeWW1+EpbtsDskiKTTP7MTU5fh4U2N/JHsXFTXZVMh4VaQHOyXMCg== diff --git a/src/specfact_cli/modules/init/module-package.yaml b/src/specfact_cli/modules/init/module-package.yaml index f48c594b..f02d208a 100644 --- a/src/specfact_cli/modules/init/module-package.yaml +++ b/src/specfact_cli/modules/init/module-package.yaml @@ -1,7 +1,9 @@ name: init -version: 0.1.0 +version: 0.1.2 commands: - init +category: core +bundle_sub_command: init command_help: init: Bootstrap SpecFact and manage module lifecycle (use `init ide` for IDE setup) pip_dependencies: [] @@ -11,9 +13,9 @@ core_compatibility: '>=0.28.0,<1.0.0' publisher: name: nold-ai url: https://github.com/nold-ai/specfact-cli-modules - email: oss@nold.ai + email: hello@noldai.com description: Initialize SpecFact workspace and bootstrap local configuration. 
license: Apache-2.0 integrity: - checksum: sha256:ebd53092241acd0bf53db40a001529b98ee001244b04223cfe855d7fdcbc607d - signature: NrPV9fl6k7W47hi4hkbNhsS8EX0CfB8zlAFucesUwbMYHgGZl9TvfsBwObeHWh+R1eqskTUf+sxivl6lv/1mAg== + checksum: sha256:223ce09d4779d73a9c35a2ed3776330b1ef6318bc33145252bf1693bb9b71644 + signature: x97hJyltPjofAJeHkaWpXmf9TtgYsnI0+zk8RFx5mLqcFYQbJxtwECS7Xvld+RIHaBAKmOAQtImIWtl09sgtDQ== diff --git a/src/specfact_cli/modules/init/src/commands.py b/src/specfact_cli/modules/init/src/commands.py index b9b46f5a..3ab4ee81 100644 --- a/src/specfact_cli/modules/init/src/commands.py +++ b/src/specfact_cli/modules/init/src/commands.py @@ -18,7 +18,9 @@ from specfact_cli import __version__ from specfact_cli.contracts.module_interface import ModuleIOContract from specfact_cli.modules import module_io_shim +from specfact_cli.modules.init.src import first_run_selection from specfact_cli.registry.help_cache import run_discovery_and_write_cache +from specfact_cli.registry.module_installer import USER_MODULES_ROOT as INIT_USER_MODULES_ROOT from specfact_cli.registry.module_packages import get_discovered_modules_for_state from specfact_cli.registry.module_state import write_modules_state from specfact_cli.runtime import debug_log_operation, debug_print, is_debug_mode, is_non_interactive @@ -33,6 +35,10 @@ ) +install_bundles_for_init = first_run_selection.install_bundles_for_init +is_first_run = first_run_selection.is_first_run + + def _copy_backlog_field_mapping_templates(repo_path: Path, force: bool, console: Console) -> None: """ Copy backlog field mapping templates to .specfact/templates/backlog/field_mappings/. 
@@ -347,6 +353,63 @@ def _is_valid_repo_path(repo: Path) -> bool: return repo.exists() and repo.is_dir() +def _interactive_first_run_bundle_selection() -> list[str]: + """Show first-run welcome and bundle selection; return list of canonical bundle ids to install (or empty).""" + try: + import questionary # type: ignore[reportMissingImports] + except ImportError as e: + console.print( + "[red]Interactive bundle selection requires 'questionary'. Install with: pip install questionary[/red]" + ) + raise typer.Exit(1) from e + + console.print() + console.print( + Panel( + "[bold cyan]Welcome to SpecFact[/bold cyan]\n" + "Choose which workflow bundles to install. Core commands (init, auth, module, upgrade) are always available.", + border_style="cyan", + ) + ) + console.print("[dim]You can install more later with `specfact module install <bundle>`[/dim]") + console.print() + + profile_choices = [f"{label} [dim]({key})[/dim]" for key, label in first_run_selection.PROFILE_DISPLAY_ORDER] + profile_to_key = {f"{label} [dim]({key})[/dim]": key for key, label in first_run_selection.PROFILE_DISPLAY_ORDER} + profile_to_key["Choose bundles manually"] = "_manual_" + + choice = questionary.select( + "Select a profile or choose bundles manually:", + choices=[*profile_choices, "Choose bundles manually"], + style=_questionary_style(), + ).ask() + + if not choice: + return [] + + if choice in profile_to_key: + key = profile_to_key[choice] + if key == "_manual_": + bundle_choices = [ + f"{first_run_selection.BUNDLE_DISPLAY.get(bid, bid)} [dim]({bid})[/dim]" + for bid in first_run_selection.CANONICAL_BUNDLES + ] + selected = questionary.checkbox( + "Select bundles to install:", + choices=bundle_choices, + style=_questionary_style(), + ).ask() + if not selected: + return [] + return [bid for bid in first_run_selection.CANONICAL_BUNDLES if any(bid in s for s in selected)] + return list(first_run_selection.PROFILE_PRESETS.get(key, [])) + + for key, label in 
first_run_selection.PROFILE_DISPLAY_ORDER: + if choice.startswith(label) or f"({key})" in choice: + return list(first_run_selection.PROFILE_PRESETS.get(key, [])) + return [] + + @app.command("ide") @require(_is_valid_repo_path, "Repo path must exist and be directory") @ensure(lambda result: result is None, "Command should return None") @@ -435,6 +498,16 @@ def init( file_okay=False, dir_okay=True, ), + profile: str | None = typer.Option( + None, + "--profile", + help="First-run profile preset: solo-developer, backlog-team, api-first-team, enterprise-full-stack", + ), + install: str | None = typer.Option( + None, + "--install", + help="Comma-separated bundle names or 'all' to install without prompting", + ), install_deps: bool = typer.Option( False, "--install-deps", @@ -450,6 +523,42 @@ def init( return repo_path = repo.resolve() + + if profile is not None or install is not None: + try: + if profile is not None: + bundle_ids = first_run_selection.resolve_profile_bundles(profile) + else: + bundle_ids = first_run_selection.resolve_install_bundles(install or "") + if bundle_ids: + first_run_selection.install_bundles_for_init( + bundle_ids, + INIT_USER_MODULES_ROOT, + non_interactive=is_non_interactive(), + ) + except ValueError as e: + console.print(f"[red]Error:[/red] {e}") + raise typer.Exit(1) from e + elif is_first_run(user_root=INIT_USER_MODULES_ROOT) and not is_non_interactive(): + try: + bundle_ids = _interactive_first_run_bundle_selection() + if bundle_ids: + first_run_selection.install_bundles_for_init( + bundle_ids, + INIT_USER_MODULES_ROOT, + non_interactive=False, + ) + else: + console.print( + "[dim]Tip: Install bundles later with " + "`specfact module install <bundle>` or `specfact init --profile <name>`[/dim]" + ) + except typer.Exit: + raise + except ValueError as e: + console.print(f"[red]Error:[/red] {e}") + raise typer.Exit(1) from e + modules_list = get_discovered_modules_for_state(enable_ids=[], disable_ids=[]) if modules_list: 
write_modules_state(modules_list) diff --git a/src/specfact_cli/modules/init/src/first_run_selection.py b/src/specfact_cli/modules/init/src/first_run_selection.py new file mode 100644 index 00000000..653fe646 --- /dev/null +++ b/src/specfact_cli/modules/init/src/first_run_selection.py @@ -0,0 +1,196 @@ +"""First-run bundle selection: profiles, --install parsing, and installation (Phase 3).""" + +from __future__ import annotations + +from pathlib import Path + +from beartype import beartype +from icontract import ensure, require + +from specfact_cli.registry.module_discovery import USER_MODULES_ROOT +from specfact_cli.registry.module_grouping import VALID_CATEGORIES + + +PROFILE_PRESETS: dict[str, list[str]] = { + "solo-developer": ["specfact-codebase"], + "backlog-team": ["specfact-backlog", "specfact-project", "specfact-codebase"], + "api-first-team": ["specfact-spec", "specfact-codebase"], + "enterprise-full-stack": [ + "specfact-project", + "specfact-backlog", + "specfact-codebase", + "specfact-spec", + "specfact-govern", + ], +} + +CANONICAL_BUNDLES: tuple[str, ...] 
= ( + "specfact-project", + "specfact-backlog", + "specfact-codebase", + "specfact-spec", + "specfact-govern", +) + +BUNDLE_ALIAS_TO_CANONICAL: dict[str, str] = { + "project": "specfact-project", + "backlog": "specfact-backlog", + "codebase": "specfact-codebase", + "code": "specfact-codebase", + "spec": "specfact-spec", + "govern": "specfact-govern", +} + +BUNDLE_TO_MODULE_NAMES: dict[str, list[str]] = { + "specfact-project": ["project", "plan", "import_cmd", "sync", "migrate"], + "specfact-backlog": ["backlog", "policy_engine"], + "specfact-codebase": ["analyze", "drift", "validate", "repro"], + "specfact-spec": ["contract", "spec", "sdd", "generate"], + "specfact-govern": ["enforce", "patch_mode"], +} + +BUNDLE_DEPENDENCIES: dict[str, list[str]] = { + "specfact-spec": ["specfact-project"], +} + + +@require(lambda profile: isinstance(profile, str) and profile.strip() != "", "profile must be non-empty string") +@ensure(lambda result: isinstance(result, list), "result must be list of bundle ids") +@beartype +def resolve_profile_bundles(profile: str) -> list[str]: + """Resolve a profile name to the list of canonical bundle ids to install.""" + key = profile.strip().lower() + if key not in PROFILE_PRESETS: + valid = ", ".join(sorted(PROFILE_PRESETS)) + raise ValueError(f"Unknown profile {profile!r}. 
Valid profiles: {valid}") + return list(PROFILE_PRESETS[key]) + + +@require(lambda install_arg: isinstance(install_arg, str), "install_arg must be string") +@ensure(lambda result: isinstance(result, list), "result must be list of bundle ids") +@beartype +def resolve_install_bundles(install_arg: str) -> list[str]: + """Parse --install value (comma-separated or 'all') into canonical bundle ids.""" + raw = install_arg.strip() + if not raw: + return [] + if raw.lower() == "all": + return list(CANONICAL_BUNDLES) + seen: set[str] = set() + result: list[str] = [] + for part in raw.split(","): + alias = part.strip().lower() + if not alias: + continue + if alias in BUNDLE_ALIAS_TO_CANONICAL: + canonical = BUNDLE_ALIAS_TO_CANONICAL[alias] + if canonical not in seen: + seen.add(canonical) + result.append(canonical) + else: + valid = ", ".join([*sorted(BUNDLE_ALIAS_TO_CANONICAL), "all"]) + raise ValueError(f"Unknown bundle {part.strip()!r}. Valid bundle names: {valid}") + return result + + +@ensure(lambda result: isinstance(result, bool), "result must be bool") +@beartype +def is_first_run( + *, + user_root: Path | None = None, +) -> bool: + """Return True when no category bundle is installed (first run).""" + from specfact_cli.registry.module_discovery import discover_all_modules + + root = user_root or USER_MODULES_ROOT + discovered = discover_all_modules(user_root=root) + for entry in discovered: + if entry.source not in ("user", "marketplace", "project"): + continue + cat = entry.metadata.category + if cat is not None and cat != "core" and cat in VALID_CATEGORIES: + return False + return True + + +@require(lambda bundle_ids: isinstance(bundle_ids, list), "bundle_ids must be list") +@require( + lambda install_root: install_root is None or isinstance(install_root, Path), "install_root must be Path or None" +) +@beartype +def install_bundles_for_init( + bundle_ids: list[str], + install_root: Path | None = None, + *, + non_interactive: bool = False, + trust_non_official: bool 
= False, +) -> None: + """Install the given bundles (and their dependencies) via bundled module installer.""" + from specfact_cli.registry.module_installer import ( + USER_MODULES_ROOT as DEFAULT_ROOT, + install_bundled_module, + ) + + root = install_root or DEFAULT_ROOT + to_install: list[str] = [] + seen: set[str] = set() + + def _add_bundle(bid: str) -> None: + if bid in seen: + return + for dep in BUNDLE_DEPENDENCIES.get(bid, []): + _add_bundle(dep) + seen.add(bid) + to_install.append(bid) + + for bid in bundle_ids: + if bid not in CANONICAL_BUNDLES: + continue + _add_bundle(bid) + + for bid in to_install: + module_names = BUNDLE_TO_MODULE_NAMES.get(bid, []) + for module_name in module_names: + try: + install_bundled_module( + module_name, + root, + trust_non_official=trust_non_official, + non_interactive=non_interactive, + ) + except Exception as e: + from specfact_cli.common import get_bridge_logger + + logger = get_bridge_logger(__name__) + logger.warning( + "Bundle install failed for %s: %s. 
Dependency resolver may be unavailable.", + module_name, + e, + ) + raise + + +def get_valid_profile_names() -> list[str]: + """Return sorted list of valid profile names for error messages.""" + return sorted(PROFILE_PRESETS) + + +def get_valid_bundle_aliases() -> list[str]: + """Return sorted list of valid bundle aliases (including 'all').""" + return [*sorted(BUNDLE_ALIAS_TO_CANONICAL), "all"] + + +BUNDLE_DISPLAY: dict[str, str] = { + "specfact-project": "Project lifecycle (project, plan, import, sync, migrate)", + "specfact-backlog": "Backlog management (backlog, policy)", + "specfact-codebase": "Codebase quality (analyze, drift, validate, repro)", + "specfact-spec": "Spec & API (contract, spec, sdd, generate)", + "specfact-govern": "Governance (enforce, patch)", +} + +PROFILE_DISPLAY_ORDER: list[tuple[str, str]] = [ + ("solo-developer", "Solo developer"), + ("backlog-team", "Backlog team"), + ("api-first-team", "API-first team"), + ("enterprise-full-stack", "Enterprise full-stack"), +] diff --git a/src/specfact_cli/modules/migrate/module-package.yaml b/src/specfact_cli/modules/migrate/module-package.yaml index 4303eada..6f7d7739 100644 --- a/src/specfact_cli/modules/migrate/module-package.yaml +++ b/src/specfact_cli/modules/migrate/module-package.yaml @@ -1,7 +1,11 @@ name: migrate -version: 0.1.0 +version: 0.1.1 commands: - migrate +category: project +bundle: specfact-project +bundle_group_command: project +bundle_sub_command: migrate command_help: migrate: Migrate project bundles between formats pip_dependencies: [] @@ -11,9 +15,9 @@ core_compatibility: '>=0.28.0,<1.0.0' publisher: name: nold-ai url: https://github.com/nold-ai/specfact-cli-modules - email: oss@nold.ai + email: hello@noldai.com description: Migrate project bundles across supported structure versions. 
license: Apache-2.0 integrity: - checksum: sha256:0173eddfed089e7979d608f44d7e3e5505567d0e32b722a56d87a59ea0a5699f - signature: Hoo1BBhAN24s8YpEPsWEyMPv6eL6apeBiX8VCALst5b/77ZpOCCcsSSGFUrNQMWD/L7pxDphTzKhGPP8W1t7DQ== + checksum: sha256:72c3de7e4584f99942e74806aed866eaa8a6afe4c715abf4af0bc98ae20eed5a + signature: QYLY60r1M1hg7LuK//giQrurI3nlTCEgqsHdNyIdDOFCFARIC8Fu5lV83aidy5fP4+gs2e4gVWhuiaCUn3EzBg== diff --git a/src/specfact_cli/modules/module_registry/module-package.yaml b/src/specfact_cli/modules/module_registry/module-package.yaml index b0ef0c30..3ead78d4 100644 --- a/src/specfact_cli/modules/module_registry/module-package.yaml +++ b/src/specfact_cli/modules/module_registry/module-package.yaml @@ -1,7 +1,9 @@ name: module-registry -version: 0.1.5 +version: 0.1.6 commands: - module +category: core +bundle_sub_command: module command_help: module: Manage marketplace modules (install, uninstall, search, list, show, upgrade) pip_dependencies: [] @@ -11,9 +13,9 @@ core_compatibility: '>=0.28.0,<1.0.0' publisher: name: nold-ai url: https://github.com/nold-ai/specfact-cli-modules - email: oss@nold.ai + email: hello@noldai.com description: 'Manage modules: search, list, show, install, and upgrade.' 
license: Apache-2.0 integrity: - checksum: sha256:4837d40c55ebde6eba87b434c3ec3ae3d0d842eb6a6984d4212ffbc6fd26eac2 - signature: m2tJyNfaHOnil3dsT5NxUB93+4nnVJHBaF7QzQf/DC8F/LG7oJJMWHU063HY9x2/d9hFVXLwItf9TNgNjnirDQ== + checksum: sha256:e195013a5624d8c06079133b040841a4851016cbde48039ac1e399477762e4dc + signature: UBkZjFECBomxFC9FleLacUZPSJkadwDXni2D6amPMiNULk0KzdQjYi1FOafLoInML+F/nuY/9KbGBVp940tcCA== diff --git a/src/specfact_cli/modules/patch_mode/module-package.yaml b/src/specfact_cli/modules/patch_mode/module-package.yaml index 83a532f4..39191d9e 100644 --- a/src/specfact_cli/modules/patch_mode/module-package.yaml +++ b/src/specfact_cli/modules/patch_mode/module-package.yaml @@ -1,7 +1,11 @@ name: patch-mode -version: 0.1.0 +version: 0.1.1 commands: - patch +category: govern +bundle: specfact-govern +bundle_group_command: govern +bundle_sub_command: patch command_help: patch: Preview and apply patches (backlog body, OpenSpec, config); --apply local, --write upstream with confirmation. @@ -12,9 +16,9 @@ core_compatibility: '>=0.28.0,<1.0.0' publisher: name: nold-ai url: https://github.com/nold-ai/specfact-cli-modules - email: oss@nold.ai + email: hello@noldai.com description: Prepare, review, and apply structured repository patches safely. 
license: Apache-2.0 integrity: - checksum: sha256:9f6ceb4ea1a9539cd900d63065bcd36b8681d56d58dfca6835687ba5c58d5272 - signature: 2f8u+wSUKnC5KTIvHt/Qcor0r1J7Pv3FDhdts2OsIEHPCeXtIwoN2XU3CyRlpr+Zyg3+T++OO4Rv7akiWPK1Bw== + checksum: sha256:874ad2c164a73e030fb58764a3b969fea254a3f362b8f8e213aab365ddc00cc3 + signature: 9jrzryT8FGO61RnF1Z5IQVWoY0gR9MXnHXeod/xqblyiYd6osqOIivBbv642xvb6F1oLuG8VOxVNCwYYlAqbDw== diff --git a/src/specfact_cli/modules/plan/module-package.yaml b/src/specfact_cli/modules/plan/module-package.yaml index 250f870f..52c74580 100644 --- a/src/specfact_cli/modules/plan/module-package.yaml +++ b/src/specfact_cli/modules/plan/module-package.yaml @@ -1,7 +1,11 @@ name: plan -version: 0.1.0 +version: 0.1.1 commands: - plan +category: project +bundle: specfact-project +bundle_group_command: project +bundle_sub_command: plan command_help: plan: Manage development plans pip_dependencies: [] @@ -12,9 +16,9 @@ core_compatibility: '>=0.28.0,<1.0.0' publisher: name: nold-ai url: https://github.com/nold-ai/specfact-cli-modules - email: oss@nold.ai + email: hello@noldai.com description: Create and manage implementation plans for project execution. 
license: Apache-2.0 integrity: - checksum: sha256:f7d992a44b0bcee0a6cc3cb4857dbe9c57a3bbe314398893f1337c8c5e4070b6 - signature: MI5BELFxfgZNusPlP6lLKSEdRZR5MdjyOE+IsVutMJHWESmCoO9SmlzycZbHYKBdz9v2BI04kcXsy/AI4+fjDQ== + checksum: sha256:07b2007ef96eab67c49d6a94032011b464d25ac9e5f851dedebdc00523d1749c + signature: LAT1OpTH0p+/0KGx6hvv5CCQGAeLHjgj5VagXXOtJ7nHkqMoAvqGKJygkZDu6h7dpAEbHhotcPet0o9CMqgWDg== diff --git a/src/specfact_cli/modules/policy_engine/module-package.yaml b/src/specfact_cli/modules/policy_engine/module-package.yaml index e0548fb1..7c464464 100644 --- a/src/specfact_cli/modules/policy_engine/module-package.yaml +++ b/src/specfact_cli/modules/policy_engine/module-package.yaml @@ -1,7 +1,11 @@ name: policy-engine -version: 0.1.0 +version: 0.1.1 commands: - policy +category: backlog +bundle: specfact-backlog +bundle_group_command: backlog +bundle_sub_command: policy command_help: policy: Policy validation and suggestion workflows (DoR/DoD/Flow/PI) pip_dependencies: [] @@ -17,10 +21,10 @@ schema_extensions: publisher: name: nold-ai url: https://github.com/nold-ai/specfact-cli-modules - email: oss@nold.ai + email: hello@noldai.com integrity: - checksum: sha256:45d56fe74e32db9713d42ea622143da1c9b4403c7b22d148ada1fda0060226cf - signature: q0kWPGTaqZTTnFglm8OuHqJyngGLtXnAYeKJp69R/gzzX6QIVZ11bo6mtByG4NKX9KmjXKxOI3JVXWCu3sDOAw== + checksum: sha256:9220ad2598f2214092377baab52f8c91cdad1e642e60d6668ac6ba533cbb5153 + signature: tjShituw5CDCYu+s2qbRYFheH9X7tjtFDIG/+ba1gPhP2vXvjDNhNyqYXa4A9wTLbbGpXUMoZ5Iu/fkhn6rVCw== dependencies: [] description: Run policy evaluations with recommendation and compliance outputs. 
license: Apache-2.0 diff --git a/src/specfact_cli/modules/project/module-package.yaml b/src/specfact_cli/modules/project/module-package.yaml index 04b07d24..489a86d8 100644 --- a/src/specfact_cli/modules/project/module-package.yaml +++ b/src/specfact_cli/modules/project/module-package.yaml @@ -1,7 +1,11 @@ name: project -version: 0.1.0 +version: 0.1.1 commands: - project +category: project +bundle: specfact-project +bundle_group_command: project +bundle_sub_command: project command_help: project: Manage project bundles with persona workflows pip_dependencies: [] @@ -11,9 +15,9 @@ core_compatibility: '>=0.28.0,<1.0.0' publisher: name: nold-ai url: https://github.com/nold-ai/specfact-cli-modules - email: oss@nold.ai + email: hello@noldai.com description: Manage project bundles, contexts, and lifecycle workflows. license: Apache-2.0 integrity: - checksum: sha256:bb6a1c0004d242fa6829975d307470f6f9b895690d4052ae6a9d7a64ec9c7a25 - signature: HIX5WUIWEpjcIZ/lYD9bTk0HqmUXaJ68EZmiS3+IIjx3/GiQ0VcW+QtMxOpRZxYA/MnHZvIB5PdQGjOeahA/Bg== + checksum: sha256:78f91db47087a84f229c1c9f414652ff3e740c14ccf5768e3cc65e9e27987742 + signature: 9bbaYWz718cDw4x3P9BkJf3YN1IWQQ4e4UjM/4S+3k9D64js8CbUpDAXgvYfa5a7TsY8jf/yA2U3kxCWZ2/5BQ== diff --git a/src/specfact_cli/modules/repro/module-package.yaml b/src/specfact_cli/modules/repro/module-package.yaml index 438e0abb..d66fb84b 100644 --- a/src/specfact_cli/modules/repro/module-package.yaml +++ b/src/specfact_cli/modules/repro/module-package.yaml @@ -1,7 +1,11 @@ name: repro -version: 0.1.0 +version: 0.1.1 commands: - repro +category: codebase +bundle: specfact-codebase +bundle_group_command: code +bundle_sub_command: repro command_help: repro: Run validation suite pip_dependencies: [] @@ -11,9 +15,9 @@ core_compatibility: '>=0.28.0,<1.0.0' publisher: name: nold-ai url: https://github.com/nold-ai/specfact-cli-modules - email: oss@nold.ai + email: hello@noldai.com description: Run reproducible validation and diagnostics workflows end-to-end. 
license: Apache-2.0 integrity: - checksum: sha256:b7082bc1c0ed330a20b97ce52baf93f9686854babe28d412063e05821f3fbc62 - signature: pY7zG/pOURam2csn6HH92scnRY553QMhPnNPEkcfiau8L3pfIauaXtdgt8L27Cq3ARteT6hUK8xkUffQujYBDg== + checksum: sha256:24b812744e3839086fa72001b1a6d47298c9a2f853f9027ab30ced1dcbc238b4 + signature: g+1DnnYzrBt+J+J/tt5VY/0z49skGt5AGU70q9qL7l49sNCOpODiR7yP0e+p319C3lyI1us6OgXR029/qpzgCg== diff --git a/src/specfact_cli/modules/sdd/module-package.yaml b/src/specfact_cli/modules/sdd/module-package.yaml index 755f7e7d..5510196f 100644 --- a/src/specfact_cli/modules/sdd/module-package.yaml +++ b/src/specfact_cli/modules/sdd/module-package.yaml @@ -1,7 +1,11 @@ name: sdd -version: 0.1.0 +version: 0.1.1 commands: - sdd +category: spec +bundle: specfact-spec +bundle_group_command: spec +bundle_sub_command: sdd command_help: sdd: Manage SDD (Spec-Driven Development) manifests pip_dependencies: [] @@ -11,9 +15,9 @@ core_compatibility: '>=0.28.0,<1.0.0' publisher: name: nold-ai url: https://github.com/nold-ai/specfact-cli-modules - email: oss@nold.ai + email: hello@noldai.com description: Create and validate Spec-Driven Development manifests and mappings. 
license: Apache-2.0 integrity: - checksum: sha256:1d5e11925f8e578dc3186baad6cf6e6beed9af2824d21967ae56440d65f36222 - signature: rJtZAsUXpIeBkLx8Oe0LgMPKs4pSof52r0FhHAOCDaR/Y2559XAKOUcNYdBkyO1BwosCbZqvmEEf1gZzZKwLAQ== + checksum: sha256:12924835b01bab7f3c5d4edd57577b91437520040fa5fa9cd8f928bd2c46dfc7 + signature: jbaTUCE4DNwJBipXLLgybpP6MzyeLrkJPqdPu3K7sd7GgJYpHKxh722356GneZ7PgiMTfPiHogzh8915jKLGBg== diff --git a/src/specfact_cli/modules/spec/module-package.yaml b/src/specfact_cli/modules/spec/module-package.yaml index c4adc760..278e934f 100644 --- a/src/specfact_cli/modules/spec/module-package.yaml +++ b/src/specfact_cli/modules/spec/module-package.yaml @@ -1,7 +1,11 @@ name: spec -version: 0.1.0 +version: 0.1.1 commands: - spec +category: spec +bundle: specfact-spec +bundle_group_command: spec +bundle_sub_command: api command_help: spec: Specmatic integration for API contract testing pip_dependencies: [] @@ -11,9 +15,9 @@ core_compatibility: '>=0.28.0,<1.0.0' publisher: name: nold-ai url: https://github.com/nold-ai/specfact-cli-modules - email: oss@nold.ai + email: hello@noldai.com description: Integrate and run API specification and contract checks. 
license: Apache-2.0 integrity: - checksum: sha256:f14970c58bed5647cfcdc76933f4c7af22c186ef89f74d63bb97df3a5e4a09c4 - signature: /V0wm4tU6gKoXZ29AX9FiICdF0loq/N9OVvQyQ5ICtVarJFBQLUPeYZJup5/PJgIrhjZxDr6Ih+wLG6gZBtLAg== + checksum: sha256:9a9a1c5ba8bd8c8e9c6f4f7de2763b6afc908345488c1c97c67f4947bff7b904 + signature: mSzS1UmMwQKaf3Xv8hPlEA51+d65BppvKO+TJ7KH9UvPyftyKluNpspRXHk8Lz6sWBNHGRWEAbrHxewt5mT+DA== diff --git a/src/specfact_cli/modules/sync/module-package.yaml b/src/specfact_cli/modules/sync/module-package.yaml index 5675c012..dce9409f 100644 --- a/src/specfact_cli/modules/sync/module-package.yaml +++ b/src/specfact_cli/modules/sync/module-package.yaml @@ -1,7 +1,11 @@ name: sync -version: 0.1.0 +version: 0.1.1 commands: - sync +category: project +bundle: specfact-project +bundle_group_command: project +bundle_sub_command: sync command_help: sync: Synchronize external tool artifacts and repository changes (Spec-Kit, OpenSpec, GitHub, ADO, Linear, Jira, etc.) @@ -14,9 +18,9 @@ core_compatibility: '>=0.28.0,<1.0.0' publisher: name: nold-ai url: https://github.com/nold-ai/specfact-cli-modules - email: oss@nold.ai + email: hello@noldai.com description: Synchronize repository state with connected external systems. 
license: Apache-2.0 integrity: - checksum: sha256:05023a72241101dd19a9c402fcb4882e40a925d0958b95b4b13217032ad8e31b - signature: rovvEszsr1+/kq2yy9R1g01fjhlG38R2eIwg/aXZy789SKq9ttBkBqJ6d1U+ysXYzOUBqgc6WwcfCI1X2Il7Dg== + checksum: sha256:c690b401e5469f8bac7bf36d278014e6dd1132453424bd9728769579a31a474b + signature: QtPgmc9urSzIgqLKqXVLRUpTu32UZ0Lns57ynHLnnZHoOI/46AcIFJ8GrHjVSgMAlCjmxTqjihe6FbuxmpmyBw== diff --git a/src/specfact_cli/modules/upgrade/module-package.yaml b/src/specfact_cli/modules/upgrade/module-package.yaml index d0708644..7c8a8a99 100644 --- a/src/specfact_cli/modules/upgrade/module-package.yaml +++ b/src/specfact_cli/modules/upgrade/module-package.yaml @@ -1,7 +1,9 @@ name: upgrade -version: 0.1.0 +version: 0.1.1 commands: - upgrade +category: core +bundle_sub_command: upgrade command_help: upgrade: Check for and install SpecFact CLI updates pip_dependencies: [] @@ -11,9 +13,9 @@ core_compatibility: '>=0.28.0,<1.0.0' publisher: name: nold-ai url: https://github.com/nold-ai/specfact-cli-modules - email: oss@nold.ai + email: hello@noldai.com description: Check and apply SpecFact CLI version upgrades. 
license: Apache-2.0 integrity: - checksum: sha256:441c8d1d5bb5b57b809150e58911966cd1b2aec20ff88dba9985114a65a3aead - signature: mr1FGw1rrBbFEH812TGAxoykpSfP+VzyEMwW5Q5UGNzJgqXwXQxa5bOsVYHwTfToIttGGoFv1jDjJ4NE6b+EBg== + checksum: sha256:2ff659d146ad1ec80c56e40d79f5dbcc2c90cb5eb5ed3498f6f7690ec1171676 + signature: I/BlgrSwWzXUt+Ib7snF/ukmRjXuu6w3bDBVOadWEtcwWzmP8WiaIkK4WYNxMVIKuXNV7TYDhJo1KCuLxZNRBA== diff --git a/src/specfact_cli/modules/validate/module-package.yaml b/src/specfact_cli/modules/validate/module-package.yaml index 3a7fb67b..2eb9d8c6 100644 --- a/src/specfact_cli/modules/validate/module-package.yaml +++ b/src/specfact_cli/modules/validate/module-package.yaml @@ -1,7 +1,11 @@ name: validate -version: 0.1.0 +version: 0.1.1 commands: - validate +category: codebase +bundle: specfact-codebase +bundle_group_command: code +bundle_sub_command: validate command_help: validate: Validation commands including sidecar validation pip_dependencies: [] @@ -11,9 +15,9 @@ core_compatibility: '>=0.28.0,<1.0.0' publisher: name: nold-ai url: https://github.com/nold-ai/specfact-cli-modules - email: oss@nold.ai + email: hello@noldai.com description: Run schema, contract, and workflow validation suites. license: Apache-2.0 integrity: - checksum: sha256:01252349bfc86e36138b2acb4e82e60bcaaa84b0f60dc1bfcf4ca554a02bad67 - signature: rU1JJUw057QUVp6YaEEM0vcx+/hrciNsh2A3SlD4xhZwbPyzf9O+RvaAh99q/iAns9EzmsqMDW9IYafLXmEYDQ== + checksum: sha256:9b8ade0253df16ed367e0992b483738d3b4379e92d05ba97d9f5dd6f7fc51715 + signature: 3TD8nGRVXLDA7VgExKP/tK7H/gGCb7P7LuU1fQzwzsiuZAsEebIL2bSuZ54bD3vKwIvcMooVzyL/8a9w4cu+Cg== diff --git a/src/specfact_cli/registry/bootstrap.py b/src/specfact_cli/registry/bootstrap.py index a08d72a1..4450083d 100644 --- a/src/specfact_cli/registry/bootstrap.py +++ b/src/specfact_cli/registry/bootstrap.py @@ -4,13 +4,48 @@ Commands are discovered from configured module-package roots. Loaders import each package's src on first use and return its .app (Typer). 
cli.py must not import command modules at top level; it uses the registry. +When category_grouping_enabled is True, mounts category groups (code, backlog, project, spec, govern) +and compat shims for flat commands; otherwise mounts all modules flat. """ from __future__ import annotations +from pathlib import Path + +import yaml +from beartype import beartype + from specfact_cli.registry.module_packages import register_module_package_commands +_SPECFACT_CONFIG_PATH = Path.home() / ".specfact" / "config.yaml" + + +@beartype +def _get_category_grouping_enabled() -> bool: + """Read category_grouping_enabled from env then config file; default True.""" + env_val = __import__("os").environ.get("SPECFACT_CATEGORY_GROUPING_ENABLED", "").strip().lower() + if env_val in ("1", "true", "yes"): + return True + if env_val in ("0", "false", "no"): + return False + if not _SPECFACT_CONFIG_PATH.exists(): + return True + try: + raw = yaml.safe_load(_SPECFACT_CONFIG_PATH.read_text(encoding="utf-8")) + if isinstance(raw, dict) and "category_grouping_enabled" in raw: + val = raw["category_grouping_enabled"] + if isinstance(val, bool): + return val + if isinstance(val, str): + return val.strip().lower() in ("1", "true", "yes") + except Exception: + pass + return True + + +@beartype def register_builtin_commands() -> None: """Register all command groups from discovered module packages with CommandRegistry.""" - register_module_package_commands() + category_grouping_enabled = _get_category_grouping_enabled() + register_module_package_commands(category_grouping_enabled=category_grouping_enabled) diff --git a/src/specfact_cli/registry/module_grouping.py b/src/specfact_cli/registry/module_grouping.py new file mode 100644 index 00000000..deb4a9f0 --- /dev/null +++ b/src/specfact_cli/registry/module_grouping.py @@ -0,0 +1,60 @@ +"""Module category grouping constants and validation (module-grouping capability).""" + +from __future__ import annotations + +from beartype import beartype +from 
icontract import require + +from specfact_cli.models.module_package import ModulePackageMetadata + + +VALID_CATEGORIES = frozenset({"core", "project", "backlog", "codebase", "spec", "govern"}) +CATEGORY_TO_GROUP_COMMAND: dict[str, str] = { + "project": "project", + "backlog": "backlog", + "codebase": "code", + "spec": "spec", + "govern": "govern", +} + + +class ModuleManifestError(Exception): + """Raised when module-package.yaml category/bundle metadata is invalid.""" + + +@require(lambda manifests: isinstance(manifests, list), "manifests must be a list") +@beartype +def group_modules_by_category( + manifests: list[ModulePackageMetadata], +) -> dict[str, list[ModulePackageMetadata]]: + """Group module manifests by bundle_group_command; core and missing category are ungrouped.""" + result: dict[str, list[ModulePackageMetadata]] = {} + for meta in manifests: + if meta.category == "core" or meta.bundle_group_command is None: + continue + cmd = meta.bundle_group_command + result.setdefault(cmd, []).append(meta) + return result + + +@beartype +def validate_module_category_manifest(meta: ModulePackageMetadata) -> None: + """Validate category and bundle_group_command; raise ModuleManifestError if invalid.""" + if meta.category is None: + return + if meta.category not in VALID_CATEGORIES: + raise ModuleManifestError( + f"Module '{meta.name}': category must be one of {sorted(VALID_CATEGORIES)}, got {meta.category!r}" + ) + if meta.category == "core": + if meta.bundle is not None or meta.bundle_group_command is not None: + raise ModuleManifestError( + f"Module '{meta.name}': core category must not set bundle or bundle_group_command" + ) + return + expected = CATEGORY_TO_GROUP_COMMAND.get(meta.category) + if expected is not None and meta.bundle_group_command != expected: + raise ModuleManifestError( + f"Module '{meta.name}': bundle_group_command for category {meta.category!r} must be {expected!r}, " + f"got {meta.bundle_group_command!r}" + ) diff --git 
a/src/specfact_cli/registry/module_installer.py b/src/specfact_cli/registry/module_installer.py index 2fe74115..326e1e73 100644 --- a/src/specfact_cli/registry/module_installer.py +++ b/src/specfact_cli/registry/module_installer.py @@ -6,6 +6,7 @@ import os import re import shutil +import subprocess import sys import tarfile import tempfile @@ -196,6 +197,70 @@ def _is_hashable(path: Path) -> bool: return "\n".join(entries).encode("utf-8") +def _module_artifact_payload_signed(package_dir: Path) -> bytes: + """Build payload identical to scripts/sign-modules.py so verification matches after signing. + + Uses git ls-files when the module lives in a git repo (same file set and order as sign script); + otherwise falls back to rglob + same hashable/sort rules so checksums match for non-git use. + """ + if not package_dir.exists() or not package_dir.is_dir(): + raise ValueError(f"Module directory not found: {package_dir}") + module_dir_resolved = package_dir.resolve() + + def _is_hashable(path: Path) -> bool: + rel = path.resolve().relative_to(module_dir_resolved) + if any(part in _IGNORED_MODULE_DIR_NAMES for part in rel.parts): + return False + return path.suffix.lower() not in _IGNORED_MODULE_FILE_SUFFIXES + + files: list[Path] + try: + result = subprocess.run( + ["git", "rev-parse", "--show-toplevel"], + cwd=package_dir, + capture_output=True, + text=True, + check=False, + timeout=2, + ) + if result.returncode != 0 or not result.stdout.strip(): + raise FileNotFoundError("not in git repo") + git_root = Path(result.stdout.strip()).resolve() + rel_to_repo = module_dir_resolved.relative_to(git_root) + ls_result = subprocess.run( + ["git", "ls-files", "--", rel_to_repo.as_posix()], + cwd=git_root, + capture_output=True, + text=True, + check=False, + timeout=2, + ) + if ls_result.returncode != 0: + raise FileNotFoundError("git ls-files failed") + lines = [line.strip() for line in ls_result.stdout.splitlines() if line.strip()] + files = [git_root / line for line in lines] + 
files = [p for p in files if p.is_file() and _is_hashable(p)] + files.sort(key=lambda p: p.resolve().relative_to(module_dir_resolved).as_posix()) + except (FileNotFoundError, ValueError, subprocess.TimeoutExpired): + files = sorted( + (p for p in package_dir.rglob("*") if p.is_file() and _is_hashable(p)), + key=lambda p: p.resolve().relative_to(module_dir_resolved).as_posix(), + ) + + entries: list[str] = [] + for path in files: + rel = path.resolve().relative_to(module_dir_resolved).as_posix() + if rel in {"module-package.yaml", "metadata.yaml"}: + raw = yaml.safe_load(path.read_text(encoding="utf-8")) + if not isinstance(raw, dict): + raise ValueError(f"Invalid manifest YAML: {path}") + data = _canonical_manifest_payload(path) + else: + data = path.read_bytes() + entries.append(f"{rel}:{hashlib.sha256(data).hexdigest()}") + return "\n".join(entries).encode("utf-8") + + @beartype def _is_signature_backend_unavailable(error: ValueError) -> bool: """Return True when signature verification backend is unavailable in runtime.""" @@ -371,26 +436,32 @@ def verify_module_artifact( return False return True + verification_payload: bytes try: - legacy_payload = _module_artifact_payload(package_dir) - verify_checksum(legacy_payload, meta.integrity.checksum) - verification_payload = legacy_payload - except ValueError as exc: + signed_payload = _module_artifact_payload_signed(package_dir) + verify_checksum(signed_payload, meta.integrity.checksum) + verification_payload = signed_payload + except ValueError: try: - stable_payload = _module_artifact_payload_stable(package_dir) - verify_checksum(stable_payload, meta.integrity.checksum) - if _integrity_debug_details_enabled(): - logger.debug( - "Module %s: checksum matched with generated-file exclusions (cache/transient files ignored)", - meta.name, - ) - verification_payload = stable_payload - except ValueError: - if _integrity_debug_details_enabled(): - logger.warning("Module %s: Integrity check failed: %s", meta.name, exc) - 
else: - logger.debug("Module %s: Integrity check failed: %s", meta.name, exc) - return False + legacy_payload = _module_artifact_payload(package_dir) + verify_checksum(legacy_payload, meta.integrity.checksum) + verification_payload = legacy_payload + except ValueError as exc: + try: + stable_payload = _module_artifact_payload_stable(package_dir) + verify_checksum(stable_payload, meta.integrity.checksum) + if _integrity_debug_details_enabled(): + logger.debug( + "Module %s: checksum matched with generated-file exclusions (cache/transient files ignored)", + meta.name, + ) + verification_payload = stable_payload + except ValueError: + if _integrity_debug_details_enabled(): + logger.warning("Module %s: Integrity check failed: %s", meta.name, exc) + else: + logger.debug("Module %s: Integrity check failed: %s", meta.name, exc) + return False if meta.integrity.signature: key_material = _load_public_key_pem(public_key_pem) diff --git a/src/specfact_cli/registry/module_packages.py b/src/specfact_cli/registry/module_packages.py index c096227d..ab529878 100644 --- a/src/specfact_cli/registry/module_packages.py +++ b/src/specfact_cli/registry/module_packages.py @@ -36,6 +36,10 @@ from specfact_cli.registry.bridge_registry import BridgeRegistry, SchemaConverter from specfact_cli.registry.extension_registry import get_extension_registry from specfact_cli.registry.metadata import CommandMetadata +from specfact_cli.registry.module_grouping import ( + ModuleManifestError, + validate_module_category_manifest, +) from specfact_cli.registry.module_installer import verify_module_artifact from specfact_cli.registry.module_state import find_dependents, read_modules_state from specfact_cli.registry.registry import CommandRegistry @@ -44,6 +48,7 @@ # Display order for core modules (formerly built-in); others follow alphabetically. +CORE_NAMES = ("init", "auth", "module", "upgrade") CORE_MODULE_ORDER: tuple[str, ...] 
= ( "init", "auth", @@ -260,8 +265,22 @@ def discover_package_metadata(modules_root: Path, source: str = "builtin") -> li description=str(raw["description"]) if raw.get("description") else None, license=str(raw["license"]) if raw.get("license") else None, source=source, + category=str(raw["category"]) if raw.get("category") else None, + bundle=str(raw["bundle"]) if raw.get("bundle") else None, + bundle_group_command=str(raw["bundle_group_command"]) if raw.get("bundle_group_command") else None, + bundle_sub_command=str(raw["bundle_sub_command"]) if raw.get("bundle_sub_command") else None, ) + if meta.category is None: + logger = get_bridge_logger(__name__) + logger.warning( + "Module '%s' has no category field; mounting as flat top-level command.", + meta.name, + ) + else: + validate_module_category_manifest(meta) result.append((child, meta)) + except ModuleManifestError: + raise except Exception: continue return result @@ -756,10 +775,6 @@ def merge_module_state( enable_ids: list[str], disable_ids: list[str], ) -> dict[str, bool]: - """ - Merge discovered (id, version) with state; apply enable/disable overrides. - Returns dict module_id -> enabled (bool). - """ merged: dict[str, bool] = {} for mid, _version in discovered: if mid in state: @@ -773,16 +788,105 @@ def merge_module_state( return merged +# Flat command name -> (group_command, sub_command) for compat shims when category grouping is enabled. 
+FLAT_TO_GROUP: dict[str, tuple[str, str]] = { + "analyze": ("code", "analyze"), + "drift": ("code", "drift"), + "validate": ("code", "validate"), + "repro": ("code", "repro"), + "backlog": ("backlog", "backlog"), + "policy": ("backlog", "policy"), + "project": ("project", "project"), + "plan": ("project", "plan"), + "import": ("project", "import"), + "sync": ("project", "sync"), + "migrate": ("project", "migrate"), + "contract": ("spec", "contract"), + "spec": ("spec", "api"), + "sdd": ("spec", "sdd"), + "generate": ("spec", "generate"), + "enforce": ("govern", "enforce"), + "patch": ("govern", "patch"), +} + + +def _make_shim_loader( + flat_name: str, + group_name: str, + sub_name: str, + help_str: str, +) -> Any: + """Return a loader that returns the real module Typer so flat invocations like + 'specfact sync bridge' work (subcommands come from the real module). + """ + + def loader() -> Any: + return CommandRegistry.get_module_typer(flat_name) + + return loader + + +def _register_category_groups_and_shims() -> None: + """Register category group typers and compat shims in CommandRegistry._entries.""" + from specfact_cli.groups.backlog_group import build_app as build_backlog_app + from specfact_cli.groups.codebase_group import build_app as build_codebase_app + from specfact_cli.groups.govern_group import build_app as build_govern_app + from specfact_cli.groups.project_group import build_app as build_project_app + from specfact_cli.groups.spec_group import build_app as build_spec_app + + group_apps = [ + ("code", "Codebase quality commands: analyze, drift, validate, repro.", build_codebase_app), + ("backlog", "Backlog and policy commands.", build_backlog_app), + ("project", "Project lifecycle commands.", build_project_app), + ("spec", "Spec and contract commands: contract, api, sdd, generate.", build_spec_app), + ("govern", "Governance and quality gates: enforce, patch.", build_govern_app), + ] + for group_name, help_str, build_fn in group_apps: + + def 
_make_group_loader(fn: Any) -> Any: + def _group_loader(_fn: Any = fn) -> Any: + return _fn() + + return _group_loader + + loader = _make_group_loader(build_fn) + cmd_meta = CommandMetadata( + name=group_name, + help=help_str, + tier="community", + addon_id=None, + ) + CommandRegistry.register(group_name, loader, cmd_meta) + + for flat_name, (group_name, sub_name) in FLAT_TO_GROUP.items(): + if flat_name == group_name: + continue + meta = CommandRegistry.get_module_metadata(flat_name) + if meta is None: + continue + help_str = meta.help + shim_loader = _make_shim_loader(flat_name, group_name, sub_name, help_str) + cmd_meta = CommandMetadata( + name=flat_name, + help=help_str + " (deprecated; use specfact " + group_name + " " + sub_name + ")", + tier=meta.tier, + addon_id=meta.addon_id, + ) + CommandRegistry.register(flat_name, shim_loader, cmd_meta) + + def register_module_package_commands( enable_ids: list[str] | None = None, disable_ids: list[str] | None = None, allow_unsigned: bool | None = None, + category_grouping_enabled: bool = True, ) -> None: """ Discover module packages, merge with modules.json state, register only enabled packages' commands. Call after register_builtin_commands(). enable_ids/disable_ids from CLI (--enable-module/--disable-module). allow_unsigned: If True, allow modules without integrity metadata. Default from SPECFACT_ALLOW_UNSIGNED env. + category_grouping_enabled: If True, register category groups (code, backlog, project, spec, govern) and compat shims. 
""" enable_ids = enable_ids or [] disable_ids = disable_ids or [] @@ -907,6 +1011,58 @@ def register_module_package_commands( protocol_legacy += 1 for cmd_name in meta.commands: + if category_grouping_enabled and meta.category is not None: + help_str = (meta.command_help or {}).get(cmd_name) or f"Module package: {meta.name}" + extension_loader = _make_package_loader(package_dir, meta.name, cmd_name) + cmd_meta = CommandMetadata(name=cmd_name, help=help_str, tier=meta.tier, addon_id=meta.addon_id) + existing_module_entry = next( + (entry for entry in CommandRegistry._module_entries if entry.get("name") == cmd_name), + None, + ) + if existing_module_entry is not None: + base_loader = existing_module_entry.get("loader") + if base_loader is None: + logger.warning( + "Module %s attempted to extend command '%s' but module base loader was missing; skipping.", + meta.name, + cmd_name, + ) + else: + existing_module_entry["loader"] = _make_extending_loader( + base_loader, + extension_loader, + meta.name, + cmd_name, + ) + existing_module_entry["metadata"] = cmd_meta + CommandRegistry._module_typer_cache.pop(cmd_name, None) + else: + CommandRegistry.register_module(cmd_name, extension_loader, cmd_meta) + if cmd_name in CORE_NAMES: + existing_root_entry = next( + (entry for entry in CommandRegistry._entries if entry.get("name") == cmd_name), + None, + ) + if existing_root_entry is not None: + base_loader = existing_root_entry.get("loader") + if base_loader is None: + logger.warning( + "Module %s attempted to extend core command '%s' but base loader was missing; skipping.", + meta.name, + cmd_name, + ) + else: + existing_root_entry["loader"] = _make_extending_loader( + base_loader, + extension_loader, + meta.name, + cmd_name, + ) + existing_root_entry["metadata"] = cmd_meta + CommandRegistry._typer_cache.pop(cmd_name, None) + else: + CommandRegistry.register(cmd_name, extension_loader, cmd_meta) + continue existing_entry = next((entry for entry in CommandRegistry._entries if 
entry.get("name") == cmd_name), None) if existing_entry is not None: extension_loader = _make_package_loader(package_dir, meta.name, cmd_name) @@ -932,6 +1088,8 @@ def register_module_package_commands( loader = _make_package_loader(package_dir, meta.name, cmd_name) cmd_meta = CommandMetadata(name=cmd_name, help=help_str, tier=meta.tier, addon_id=meta.addon_id) CommandRegistry.register(cmd_name, loader, cmd_meta) + if category_grouping_enabled: + _register_category_groups_and_shims() discovered_count = protocol_full + protocol_partial + protocol_legacy if discovered_count and (protocol_partial > 0 or protocol_legacy > 0): print_warning( diff --git a/src/specfact_cli/registry/registry.py b/src/specfact_cli/registry/registry.py index 39efe29e..33ea7dc1 100644 --- a/src/specfact_cli/registry/registry.py +++ b/src/specfact_cli/registry/registry.py @@ -34,10 +34,14 @@ class CommandRegistry: Registry for CLI command groups (lazy load). Register by name with a loader and metadata; get_typer(name) invokes loader on first use. + When category grouping is enabled, _module_entries holds the 21 module loaders for group + sub-command resolution; _entries holds root-level commands (core + groups + shims). 
""" _entries: list[_Entry] = [] _typer_cache: dict[str, Any] = {} + _module_entries: list[_Entry] = [] + _module_typer_cache: dict[str, Any] = {} @classmethod def _ensure_bootstrapped(cls) -> None: @@ -64,6 +68,49 @@ def register(cls, name: str, loader: Loader, metadata: CommandMetadata) -> None: return cls._entries.append({"name": name, "loader": loader, "metadata": metadata}) + @classmethod + @beartype + @require(lambda name: isinstance(name, str) and len(name) > 0, "Name must be non-empty string") + @require(lambda metadata: isinstance(metadata, CommandMetadata), "Metadata must be CommandMetadata") + @ensure(lambda result: result is None, "Must return None") + def register_module(cls, name: str, loader: Loader, metadata: CommandMetadata) -> None: + """Register a module command (for group sub-command resolution). Does not invoke loader.""" + for e in cls._module_entries: + if e.get("name") == name: + e["loader"] = loader + e["metadata"] = metadata + cls._module_typer_cache.pop(name, None) + return + cls._module_entries.append({"name": name, "loader": loader, "metadata": metadata}) + + @classmethod + @beartype + @require(lambda name: isinstance(name, str) and len(name) > 0, "Name must be non-empty string") + def get_module_typer(cls, name: str) -> Any: + """Return Typer app for module name (from _module_entries); invoke loader on first use and cache.""" + cls._ensure_bootstrapped() + if name in cls._module_typer_cache: + return cls._module_typer_cache[name] + for e in cls._module_entries: + if e.get("name") == name: + loader = e.get("loader") + if loader is None: + raise ValueError(f"Module command '{name}' has no loader") + app = loader() + cls._module_typer_cache[name] = app + return app + registered = ", ".join(e.get("name", "") for e in cls._module_entries) + raise ValueError(f"Module command '{name}' not found. 
Registered modules: {registered or '(none)'}") + + @classmethod + def get_module_metadata(cls, name: str) -> CommandMetadata | None: + """Return metadata for module name without invoking loader.""" + cls._ensure_bootstrapped() + for e in cls._module_entries: + if e.get("name") == name: + return e.get("metadata") + return None + @classmethod @beartype @require(lambda name: isinstance(name, str) and len(name) > 0, "Name must be non-empty string") @@ -113,3 +160,5 @@ def _clear_for_testing(cls) -> None: """Reset registry state (for tests only).""" cls._entries = [] cls._typer_cache = {} + cls._module_entries = [] + cls._module_typer_cache = {} diff --git a/tests/e2e/test_first_run_init.py b/tests/e2e/test_first_run_init.py new file mode 100644 index 00000000..2c9583ce --- /dev/null +++ b/tests/e2e/test_first_run_init.py @@ -0,0 +1,57 @@ +"""E2E tests for first-run init and category group availability.""" + +from __future__ import annotations + +import os +from pathlib import Path +from unittest.mock import patch + +import pytest +from typer.testing import CliRunner + +from specfact_cli.cli import app + + +@pytest.fixture(autouse=True) +def _category_grouping_enabled() -> None: + """Ensure category grouping is enabled for E2E (default; set explicitly for isolation).""" + os.environ.setdefault("SPECFACT_CATEGORY_GROUPING_ENABLED", "true") + + +runner = CliRunner() + + +def test_init_profile_solo_developer_completes_in_temp_workspace(tmp_path: Path) -> None: + """specfact init --profile solo-developer in a temp workspace completes without error.""" + with patch( + "specfact_cli.modules.init.src.commands.install_bundles_for_init", + return_value=None, + ): + result = runner.invoke( + app, + ["init", "--repo", str(tmp_path), "--profile", "solo-developer"], + catch_exceptions=False, + ) + assert result.exit_code == 0, ( + f"Expected exit 0, got {result.exit_code}\nstdout: {result.stdout}\nstderr: {result.stderr}" + ) + + +def 
test_after_solo_developer_init_code_analyze_help_available(tmp_path: Path) -> None: + """After init --profile solo-developer, specfact code analyze --help is available.""" + with patch( + "specfact_cli.modules.init.src.commands.install_bundles_for_init", + return_value=None, + ): + init_result = runner.invoke( + app, + ["init", "--repo", str(tmp_path), "--profile", "solo-developer"], + catch_exceptions=False, + ) + assert init_result.exit_code == 0 + + result = runner.invoke(app, ["code", "analyze", "--help"]) + assert result.exit_code == 0, ( + f"Expected exit 0, got {result.exit_code}\nstdout: {result.stdout}\nstderr: {result.stderr}" + ) + assert "analyze" in (result.stdout or "").lower() or "usage" in (result.stdout or "").lower() diff --git a/tests/integration/backlog/test_custom_field_mapping.py b/tests/integration/backlog/test_custom_field_mapping.py index 6aa9bad1..b1c94ced 100644 --- a/tests/integration/backlog/test_custom_field_mapping.py +++ b/tests/integration/backlog/test_custom_field_mapping.py @@ -92,7 +92,8 @@ def test_custom_field_mapping_file_validation_file_not_found(self) -> None: ) # Should exit with error code (validation happens before adapter setup) assert result.exit_code != 0 - assert "not found" in result.stdout.lower() or "error" in result.stdout.lower() or "Error" in result.stdout + out = result.output or result.stdout or "" + assert "not found" in out.lower() or "error" in out.lower() or "Error" in out def test_custom_field_mapping_file_validation_invalid_format(self, invalid_mapping_file: Path) -> None: """Test that invalid custom field mapping file format is rejected.""" @@ -112,7 +113,8 @@ def test_custom_field_mapping_file_validation_invalid_format(self, invalid_mappi ], ) assert result.exit_code != 0 - assert "invalid" in result.stdout.lower() or "error" in result.stdout.lower() + out = result.output or result.stdout or "" + assert "invalid" in out.lower() or "error" in out.lower() def 
test_custom_field_mapping_environment_variable( self, custom_mapping_file: Path, monkeypatch: pytest.MonkeyPatch diff --git a/tests/integration/commands/test_spec_commands.py b/tests/integration/commands/test_spec_commands.py index 6b119d5f..652ccd4b 100644 --- a/tests/integration/commands/test_spec_commands.py +++ b/tests/integration/commands/test_spec_commands.py @@ -43,7 +43,7 @@ async def mock_validate_coro(*args, **kwargs): old_cwd = os.getcwd() try: os.chdir(tmp_path) - result = runner.invoke(app, ["spec", "validate", str(spec_path)]) + result = runner.invoke(app, ["spec", "api", "validate", str(spec_path)]) finally: os.chdir(old_cwd) @@ -62,7 +62,7 @@ def test_validate_command_specmatic_not_available(self, mock_check, tmp_path): old_cwd = os.getcwd() try: os.chdir(tmp_path) - result = runner.invoke(app, ["spec", "validate", str(spec_path)]) + result = runner.invoke(app, ["spec", "api", "validate", str(spec_path)]) finally: os.chdir(old_cwd) @@ -94,7 +94,7 @@ async def mock_validate_async(*args, **kwargs): old_cwd = os.getcwd() try: os.chdir(tmp_path) - result = runner.invoke(app, ["spec", "validate", str(spec_path)]) + result = runner.invoke(app, ["spec", "api", "validate", str(spec_path)]) finally: os.chdir(old_cwd) @@ -126,7 +126,7 @@ async def mock_compat_async(*args, **kwargs): old_cwd = os.getcwd() try: os.chdir(tmp_path) - result = runner.invoke(app, ["spec", "backward-compat", str(old_spec), str(new_spec)]) + result = runner.invoke(app, ["spec", "api", "backward-compat", str(old_spec), str(new_spec)]) finally: os.chdir(old_cwd) @@ -154,7 +154,7 @@ async def mock_compat_async(*args, **kwargs): old_cwd = os.getcwd() try: os.chdir(tmp_path) - result = runner.invoke(app, ["spec", "backward-compat", str(old_spec), str(new_spec)]) + result = runner.invoke(app, ["spec", "api", "backward-compat", str(old_spec), str(new_spec)]) finally: os.chdir(old_cwd) @@ -189,7 +189,7 @@ async def mock_generate_async(*args, **kwargs): output_dir.mkdir(parents=True, 
exist_ok=True) result = runner.invoke( app, - ["spec", "generate-tests", str(spec_path), "--output", str(output_dir)], + ["spec", "api", "generate-tests", str(spec_path), "--output", str(output_dir)], ) finally: os.chdir(old_cwd) @@ -225,7 +225,7 @@ def test_mock_command_success(self, mock_create, mock_check, tmp_path): # Use timeout to prevent hanging result = runner.invoke( app, - ["spec", "mock", "--spec", str(spec_path), "--port", "9000"], + ["spec", "api", "mock", "--spec", str(spec_path), "--port", "9000"], input="\n", # Send Enter to exit ) finally: diff --git a/tests/integration/test_category_group_routing.py b/tests/integration/test_category_group_routing.py new file mode 100644 index 00000000..28a4a497 --- /dev/null +++ b/tests/integration/test_category_group_routing.py @@ -0,0 +1,58 @@ +"""Integration tests for category group routing (code, backlog, validate shim).""" + +from __future__ import annotations + +import os +from collections.abc import Generator +from unittest.mock import patch + +import pytest +from typer.testing import CliRunner + +from specfact_cli.cli import app +from specfact_cli.registry import CommandRegistry +from specfact_cli.registry.bootstrap import register_builtin_commands + + +@pytest.fixture(autouse=True) +def _category_grouping_enabled() -> Generator[None, None, None]: + """Ensure category grouping is enabled and registry is fresh for routing tests.""" + with patch.dict(os.environ, {"SPECFACT_CATEGORY_GROUPING_ENABLED": "true"}, clear=False): + CommandRegistry._clear_for_testing() + register_builtin_commands() + yield + with patch.dict(os.environ, {"SPECFACT_CATEGORY_GROUPING_ENABLED": "true"}, clear=False): + CommandRegistry._clear_for_testing() + register_builtin_commands() + + +runner = CliRunner() + + +def test_code_analyze_help_exits_zero() -> None: + """specfact code analyze --help returns non-error exit (CLI integration).""" + result = runner.invoke(app, ["code", "analyze", "--help"]) + assert result.exit_code == 0, ( + 
f"Expected exit 0, got {result.exit_code}\nstdout: {result.stdout}\nstderr: {result.stderr}" + ) + assert "analyze" in (result.stdout or "").lower() or "usage" in (result.stdout or "").lower() + + +def test_backlog_help_lists_subcommands() -> None: + """specfact backlog --help lists backlog and policy sub-commands.""" + result = runner.invoke(app, ["backlog", "--help"]) + assert result.exit_code == 0, ( + f"Expected exit 0, got {result.exit_code}\nstdout: {result.stdout}\nstderr: {result.stderr}" + ) + out = (result.stdout or "").lower() + assert "backlog" in out + assert "policy" in out or "ceremony" in out + + +def test_validate_shim_help_exits_zero() -> None: + """Deprecated flat command specfact validate --help still returns help without error.""" + result = runner.invoke(app, ["validate", "--help"]) + assert result.exit_code == 0, ( + f"Expected exit 0, got {result.exit_code}\nstdout: {result.stdout}\nstderr: {result.stderr}" + ) + assert "validate" in (result.stdout or "").lower() or "usage" in (result.stdout or "").lower() diff --git a/tests/unit/commands/test_backlog_commands.py b/tests/unit/commands/test_backlog_commands.py index 201f3174..c56378ac 100644 --- a/tests/unit/commands/test_backlog_commands.py +++ b/tests/unit/commands/test_backlog_commands.py @@ -9,6 +9,7 @@ from pathlib import Path from unittest.mock import MagicMock, patch +import pytest import yaml from rich.panel import Panel from typer.testing import CliRunner @@ -38,6 +39,18 @@ runner = CliRunner() +@pytest.fixture(autouse=True) +def _bootstrap_registry_for_backlog_commands(): + """Ensure registry is bootstrapped so root 'backlog' resolves to the group with init-config, map-fields, etc.""" + from specfact_cli.registry.bootstrap import register_builtin_commands + from specfact_cli.registry.registry import CommandRegistry + + CommandRegistry._clear_for_testing() + register_builtin_commands() + yield + CommandRegistry._clear_for_testing() + + 
@patch("specfact_cli.modules.backlog.src.commands._resolve_standup_options") @patch("specfact_cli.modules.backlog.src.commands._fetch_backlog_items") def test_daily_issue_id_bypasses_implicit_default_state( @@ -376,7 +389,8 @@ def test_map_fields_requires_token(self) -> None: # Should fail with error about missing token assert result.exit_code != 0 - assert "token required" in result.stdout.lower() or "error" in result.stdout.lower() + out = result.output or result.stdout or "" + assert "token required" in out.lower() or "error" in out.lower() @patch("questionary.checkbox") @patch("specfact_cli.utils.auth_tokens.get_token") @@ -833,7 +847,7 @@ def test_backlog_init_config_does_not_overwrite_without_force(self, tmp_path) -> assert result.exit_code == 0 content = cfg_file.read_text(encoding="utf-8") assert "adapter: github" in content - assert "already exists" in result.stdout.lower() + assert "already exists" in (result.output or result.stdout or "").lower() class TestParseRefinedExportMarkdown: diff --git a/tests/unit/commands/test_backlog_daily.py b/tests/unit/commands/test_backlog_daily.py index c38145d4..419d5865 100644 --- a/tests/unit/commands/test_backlog_daily.py +++ b/tests/unit/commands/test_backlog_daily.py @@ -60,6 +60,18 @@ runner = CliRunner() +@pytest.fixture(autouse=True) +def _bootstrap_registry_for_backlog_daily(): + """Ensure registry is bootstrapped so root 'backlog' resolves to the group with 'daily'.""" + from specfact_cli.registry.bootstrap import register_builtin_commands + from specfact_cli.registry.registry import CommandRegistry + + CommandRegistry._clear_for_testing() + register_builtin_commands() + yield + CommandRegistry._clear_for_testing() + + def _strip_ansi(text: str) -> str: """Remove ANSI escape codes from CLI output.""" ansi_escape = re.compile(r"\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])") @@ -67,19 +79,35 @@ def _strip_ansi(text: str) -> str: def _get_daily_command_option_names() -> set[str]: - """Return all option names registered 
on `specfact backlog daily`.""" + """Return all option names registered on `specfact backlog daily` (from CLI help or command tree).""" root_cmd = typer.main.get_command(app) root_ctx = click.Context(root_cmd) backlog_cmd = root_cmd.get_command(root_ctx, "backlog") - assert backlog_cmd is not None + assert backlog_cmd is not None, "root should have 'backlog' command" backlog_ctx = click.Context(backlog_cmd) daily_cmd = backlog_cmd.get_command(backlog_ctx, "daily") - assert daily_cmd is not None - option_names: set[str] = set() - for param in daily_cmd.params: - if isinstance(param, click.Option): - option_names.update(param.opts) - option_names.update(param.secondary_opts) + if daily_cmd is not None: + option_names: set[str] = set() + for param in daily_cmd.params: + if isinstance(param, click.Option): + option_names.update(param.opts) + option_names.update(param.secondary_opts) + return option_names + result = runner.invoke(app, ["backlog", "daily", "--help"]) + if result.exit_code != 0: + return set() + out = result.output or result.stdout or "" + option_names = set() + for word in out.replace(",", " ").split(): + w = word.strip() + if w.startswith("--") and "=" not in w: + opt = w.lstrip("-").split("=")[0] + option_names.add("--" + opt) + if not option_names: + import re + + for m in re.finditer(r"--([a-z][a-z0-9-]*)", out): + option_names.add("--" + m.group(1)) return option_names diff --git a/tests/unit/groups/test_codebase_group.py b/tests/unit/groups/test_codebase_group.py new file mode 100644 index 00000000..abebd413 --- /dev/null +++ b/tests/unit/groups/test_codebase_group.py @@ -0,0 +1,35 @@ +"""Tests for codebase category group app (category-command-groups).""" + +from __future__ import annotations + +import os +from collections.abc import Generator +from unittest.mock import patch + +import pytest + +from specfact_cli.registry import CommandRegistry +from specfact_cli.registry.bootstrap import register_builtin_commands + + +@pytest.fixture(autouse=True) 
+def _clear_registry() -> Generator[None, None, None]: + CommandRegistry._clear_for_testing() + yield + CommandRegistry._clear_for_testing() + + +def test_codebase_group_has_expected_subcommands() -> None: + """Group app 'code' has expected sub-commands: analyze, drift, validate, repro.""" + with patch.dict(os.environ, {"SPECFACT_CATEGORY_GROUPING_ENABLED": "true"}, clear=False): + register_builtin_commands() + from typer.main import get_command + + from specfact_cli.registry.registry import CommandRegistry + + code_app = CommandRegistry.get_typer("code") + click_code = get_command(code_app) + assert hasattr(click_code, "commands") + code_subcommands = list(click_code.commands.keys()) + for expected in ("analyze", "drift", "validate", "repro"): + assert expected in code_subcommands, f"Expected sub-command {expected!r} in code group: {code_subcommands}" diff --git a/tests/unit/modules/init/test_first_run_selection.py b/tests/unit/modules/init/test_first_run_selection.py new file mode 100644 index 00000000..5326afa7 --- /dev/null +++ b/tests/unit/modules/init/test_first_run_selection.py @@ -0,0 +1,415 @@ +"""Tests for first-run bundle selection in specfact init (Phase 3).""" + +from __future__ import annotations + +from pathlib import Path +from unittest.mock import MagicMock, patch + +import pytest +from typer.testing import CliRunner + +from specfact_cli.modules.init.src import first_run_selection as frs +from specfact_cli.modules.init.src.commands import app + + +runner = CliRunner() + + +def _telemetry_track_context(): + return patch( + "specfact_cli.modules.init.src.commands.telemetry", + MagicMock( + track_command=MagicMock(return_value=MagicMock(__enter__=lambda s: None, __exit__=lambda s, *a: None)) + ), + ) + + +# --- Profile resolution --- + + +def test_profile_solo_developer_resolves_to_specfact_codebase_only() -> None: + bundles = frs.resolve_profile_bundles("solo-developer") + assert bundles == ["specfact-codebase"] + + +def 
test_profile_enterprise_full_stack_resolves_to_all_five_bundles() -> None: + bundles = frs.resolve_profile_bundles("enterprise-full-stack") + assert set(bundles) == { + "specfact-project", + "specfact-backlog", + "specfact-codebase", + "specfact-spec", + "specfact-govern", + } + assert len(bundles) == 5 + + +def test_profile_nonexistent_raises_with_valid_list() -> None: + with pytest.raises(ValueError) as exc_info: + frs.resolve_profile_bundles("nonexistent") + msg = str(exc_info.value).lower() + assert "nonexistent" in msg or "unknown" in msg or "invalid" in msg + assert "solo-developer" in msg or "valid" in msg + + +# --- --install parsing --- + + +def test_install_backlog_codebase_resolves_to_two_bundles() -> None: + bundles = frs.resolve_install_bundles("backlog,codebase") + assert set(bundles) == {"specfact-backlog", "specfact-codebase"} + assert len(bundles) == 2 + + +def test_install_all_resolves_to_all_five_bundles() -> None: + bundles = frs.resolve_install_bundles("all") + assert set(bundles) == { + "specfact-project", + "specfact-backlog", + "specfact-codebase", + "specfact-spec", + "specfact-govern", + } + assert len(bundles) == 5 + + +def test_install_unknown_bundle_raises() -> None: + with pytest.raises(ValueError) as exc_info: + frs.resolve_install_bundles("widgets") + msg = str(exc_info.value).lower() + assert "widgets" in msg or "unknown" in msg + assert "valid" in msg or "bundle" in msg + + +# --- is_first_run --- + + +def test_is_first_run_true_when_no_category_bundle_installed(monkeypatch: pytest.MonkeyPatch, tmp_path: Path) -> None: + def _discover(_builtin=None, user_root=None, **_kwargs): + from specfact_cli.models.module_package import ModulePackageMetadata + from specfact_cli.registry.module_discovery import DiscoveredModule + + meta_core = ModulePackageMetadata(name="init", version="0.1.0", commands=["init"], category="core") + return [DiscoveredModule(tmp_path / "init", meta_core, "builtin")] + + 
monkeypatch.setattr("specfact_cli.registry.module_discovery.discover_all_modules", _discover) + assert frs.is_first_run(user_root=tmp_path) is True + + +def test_is_first_run_false_when_category_bundle_installed(monkeypatch: pytest.MonkeyPatch, tmp_path: Path) -> None: + def _discover(_builtin=None, user_root=None, **_kwargs): + from specfact_cli.models.module_package import ModulePackageMetadata + from specfact_cli.registry.module_discovery import DiscoveredModule + + meta_core = ModulePackageMetadata(name="init", version="0.1.0", commands=["init"], category="core") + meta_code = ModulePackageMetadata( + name="analyze", version="0.1.0", commands=["analyze"], category="codebase", bundle="specfact-codebase" + ) + return [ + DiscoveredModule(tmp_path / "init", meta_core, "builtin"), + DiscoveredModule(tmp_path / "analyze", meta_code, "user"), + ] + + monkeypatch.setattr("specfact_cli.registry.module_discovery.discover_all_modules", _discover) + assert frs.is_first_run(user_root=tmp_path) is False + + +def test_is_first_run_false_when_project_scoped_category_bundle_installed( + monkeypatch: pytest.MonkeyPatch, tmp_path: Path +) -> None: + def _discover(_builtin=None, user_root=None, **_kwargs): + from specfact_cli.models.module_package import ModulePackageMetadata + from specfact_cli.registry.module_discovery import DiscoveredModule + + meta_project = ModulePackageMetadata( + name="analyze", version="0.1.0", commands=["analyze"], category="codebase", bundle="specfact-codebase" + ) + return [DiscoveredModule(tmp_path / "analyze", meta_project, "project")] + + monkeypatch.setattr("specfact_cli.registry.module_discovery.discover_all_modules", _discover) + assert frs.is_first_run(user_root=tmp_path) is False + + +# --- CLI: specfact init --profile (mock installer) --- + + +def test_init_profile_solo_developer_calls_installer_with_specfact_codebase( + monkeypatch: pytest.MonkeyPatch, tmp_path: Path +) -> None: + install_calls: list[list[str]] = [] + + def 
_fake_install_bundles(bundle_ids: list[str], install_root: Path, **kwargs: object) -> None: + install_calls.append(list(bundle_ids)) + + monkeypatch.setattr( + "specfact_cli.modules.init.src.first_run_selection.install_bundles_for_init", _fake_install_bundles + ) + monkeypatch.setattr("specfact_cli.modules.init.src.commands.is_first_run", lambda **_: True) + monkeypatch.setattr( + "specfact_cli.modules.init.src.commands.get_discovered_modules_for_state", + lambda **_: [{"id": "init", "enabled": True}], + ) + monkeypatch.setattr("specfact_cli.modules.init.src.commands.write_modules_state", lambda _: None) + monkeypatch.setattr("specfact_cli.modules.init.src.commands.run_discovery_and_write_cache", lambda _: None) + monkeypatch.setattr( + "specfact_cli.modules.init.src.commands.detect_env_manager", lambda _: MagicMock(manager=MagicMock()) + ) + with _telemetry_track_context(): + result = runner.invoke( + app, + ["--repo", str(tmp_path), "--profile", "solo-developer"], + catch_exceptions=False, + ) + assert result.exit_code == 0, result.output + assert len(install_calls) == 1 + assert install_calls[0] == ["specfact-codebase"] + + +def test_init_profile_enterprise_full_stack_calls_installer_with_all_five( + monkeypatch: pytest.MonkeyPatch, tmp_path: Path +) -> None: + install_calls: list[list[str]] = [] + + def _fake_install_bundles(bundle_ids: list[str], install_root: Path, **kwargs: object) -> None: + install_calls.append(list(bundle_ids)) + + monkeypatch.setattr( + "specfact_cli.modules.init.src.first_run_selection.install_bundles_for_init", _fake_install_bundles + ) + monkeypatch.setattr("specfact_cli.modules.init.src.commands.is_first_run", lambda **_: True) + monkeypatch.setattr( + "specfact_cli.modules.init.src.commands.get_discovered_modules_for_state", + lambda **_: [{"id": "init", "enabled": True}], + ) + monkeypatch.setattr("specfact_cli.modules.init.src.commands.write_modules_state", lambda _: None) + 
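

The `--profile` and `--install` tests exercise two resolvers whose contract is a lookup table plus a `ValueError` that names the invalid input and lists valid choices. A sketch of that contract — the bundle IDs mirror the sets asserted in these tests, while the table and function bodies are illustrative, not the real implementation:

```python
ALL_BUNDLES = [
    "specfact-project", "specfact-backlog", "specfact-codebase",
    "specfact-spec", "specfact-govern",
]
PROFILES = {
    "solo-developer": ["specfact-codebase"],
    "enterprise-full-stack": list(ALL_BUNDLES),
}


def resolve_profile_bundles(profile: str) -> list[str]:
    try:
        return list(PROFILES[profile])
    except KeyError:
        valid = ", ".join(sorted(PROFILES))
        raise ValueError(f"Unknown profile {profile!r}. Valid profiles: {valid}") from None


def resolve_install_bundles(spec: str) -> list[str]:
    # "--install all" expands to every bundle; otherwise accept short names.
    if spec == "all":
        return list(ALL_BUNDLES)
    bundles: list[str] = []
    for short in (s.strip() for s in spec.split(",")):
        bundle_id = f"specfact-{short}"
        if bundle_id not in ALL_BUNDLES:
            raise ValueError(f"Unknown bundle {short!r}. Valid bundles: {ALL_BUNDLES}")
        bundles.append(bundle_id)
    return bundles


print(resolve_install_bundles("backlog,codebase"))
# ['specfact-backlog', 'specfact-codebase']
```

The error messages are what the `exits_nonzero_and_lists_valid_profiles` / `install_widgets` tests probe for: the bad value plus a hint at valid options.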
monkeypatch.setattr("specfact_cli.modules.init.src.commands.run_discovery_and_write_cache", lambda _: None) + monkeypatch.setattr( + "specfact_cli.modules.init.src.commands.detect_env_manager", lambda _: MagicMock(manager=MagicMock()) + ) + with _telemetry_track_context(): + result = runner.invoke( + app, + ["--repo", str(tmp_path), "--profile", "enterprise-full-stack"], + catch_exceptions=False, + ) + assert result.exit_code == 0, result.output + assert len(install_calls) == 1 + assert set(install_calls[0]) == { + "specfact-project", + "specfact-backlog", + "specfact-codebase", + "specfact-spec", + "specfact-govern", + } + assert len(install_calls[0]) == 5 + + +def test_init_profile_nonexistent_exits_nonzero_and_lists_valid_profiles( + monkeypatch: pytest.MonkeyPatch, tmp_path: Path +) -> None: + monkeypatch.setattr("specfact_cli.modules.init.src.commands.get_discovered_modules_for_state", lambda **_: []) + monkeypatch.setattr("specfact_cli.modules.init.src.commands.write_modules_state", lambda _: None) + monkeypatch.setattr("specfact_cli.modules.init.src.commands.run_discovery_and_write_cache", lambda _: None) + with _telemetry_track_context(): + result = runner.invoke( + app, + ["--repo", str(tmp_path), "--profile", "nonexistent"], + catch_exceptions=False, + ) + assert result.exit_code != 0 + assert ( + "nonexistent" in result.output.lower() + or "invalid" in result.output.lower() + or "unknown" in result.output.lower() + ) + assert "solo-developer" in result.output or "valid" in result.output.lower() + + +def test_init_install_backlog_codebase_calls_installer_with_two_bundles( + monkeypatch: pytest.MonkeyPatch, tmp_path: Path +) -> None: + install_calls: list[list[str]] = [] + + def _fake_install_bundles(bundle_ids: list[str], install_root: Path, **kwargs: object) -> None: + install_calls.append(list(bundle_ids)) + + monkeypatch.setattr( + "specfact_cli.modules.init.src.first_run_selection.install_bundles_for_init", _fake_install_bundles + ) + 
monkeypatch.setattr("specfact_cli.modules.init.src.commands.is_first_run", lambda **_: True) + monkeypatch.setattr( + "specfact_cli.modules.init.src.commands.get_discovered_modules_for_state", + lambda **_: [{"id": "init", "enabled": True}], + ) + monkeypatch.setattr("specfact_cli.modules.init.src.commands.write_modules_state", lambda _: None) + monkeypatch.setattr("specfact_cli.modules.init.src.commands.run_discovery_and_write_cache", lambda _: None) + monkeypatch.setattr( + "specfact_cli.modules.init.src.commands.detect_env_manager", lambda _: MagicMock(manager=MagicMock()) + ) + with _telemetry_track_context(): + result = runner.invoke( + app, + ["--repo", str(tmp_path), "--install", "backlog,codebase"], + catch_exceptions=False, + ) + assert result.exit_code == 0, result.output + assert len(install_calls) == 1 + assert set(install_calls[0]) == {"specfact-backlog", "specfact-codebase"} + + +def test_init_install_all_calls_installer_with_five_bundles(monkeypatch: pytest.MonkeyPatch, tmp_path: Path) -> None: + install_calls: list[list[str]] = [] + + def _fake_install_bundles(bundle_ids: list[str], install_root: Path, **kwargs: object) -> None: + install_calls.append(list(bundle_ids)) + + monkeypatch.setattr( + "specfact_cli.modules.init.src.first_run_selection.install_bundles_for_init", _fake_install_bundles + ) + monkeypatch.setattr("specfact_cli.modules.init.src.commands.is_first_run", lambda **_: True) + monkeypatch.setattr( + "specfact_cli.modules.init.src.commands.get_discovered_modules_for_state", + lambda **_: [{"id": "init", "enabled": True}], + ) + monkeypatch.setattr("specfact_cli.modules.init.src.commands.write_modules_state", lambda _: None) + monkeypatch.setattr("specfact_cli.modules.init.src.commands.run_discovery_and_write_cache", lambda _: None) + monkeypatch.setattr( + "specfact_cli.modules.init.src.commands.detect_env_manager", lambda _: MagicMock(manager=MagicMock()) + ) + with _telemetry_track_context(): + result = runner.invoke( + app, + 
["--repo", str(tmp_path), "--install", "all"], + catch_exceptions=False, + ) + assert result.exit_code == 0, result.output + assert len(install_calls) == 1 + assert len(install_calls[0]) == 5 + assert set(install_calls[0]) == { + "specfact-project", + "specfact-backlog", + "specfact-codebase", + "specfact-spec", + "specfact-govern", + } + + +def test_init_install_widgets_exits_nonzero(tmp_path: Path) -> None: + result = runner.invoke( + app, + ["--repo", str(tmp_path), "--install", "widgets"], + catch_exceptions=False, + ) + assert result.exit_code != 0 + assert ( + "widgets" in result.output.lower() or "unknown" in result.output.lower() or "invalid" in result.output.lower() + ) + + +def test_init_second_run_skips_first_run_flow(monkeypatch: pytest.MonkeyPatch, tmp_path: Path) -> None: + install_calls: list[list[str]] = [] + + def _fake_install_bundles(bundle_ids: list[str], install_root: Path, **kwargs: object) -> None: + install_calls.append(list(bundle_ids)) + + monkeypatch.setattr( + "specfact_cli.modules.init.src.first_run_selection.install_bundles_for_init", _fake_install_bundles + ) + monkeypatch.setattr("specfact_cli.modules.init.src.commands.is_first_run", lambda **_: False) + modules_list = [{"id": "init", "enabled": True}, {"id": "analyze", "enabled": True}] + monkeypatch.setattr( + "specfact_cli.modules.init.src.commands.get_discovered_modules_for_state", + lambda **_: modules_list, + ) + monkeypatch.setattr("specfact_cli.modules.init.src.commands.write_modules_state", lambda _: None) + monkeypatch.setattr("specfact_cli.modules.init.src.commands.run_discovery_and_write_cache", lambda _: None) + monkeypatch.setattr( + "specfact_cli.modules.init.src.commands.detect_env_manager", lambda _: MagicMock(manager=MagicMock()) + ) + with _telemetry_track_context(): + result = runner.invoke( + app, + ["--repo", str(tmp_path)], + catch_exceptions=False, + ) + assert result.exit_code == 0, result.output + assert len(install_calls) == 0 + + +def 
test_init_first_run_interactive_with_selection_calls_installer( + monkeypatch: pytest.MonkeyPatch, tmp_path: Path +) -> None: + install_calls: list[list[str]] = [] + + def _fake_install(bundle_ids: list[str], install_root: Path, **kwargs: object) -> None: + install_calls.append(list(bundle_ids)) + + monkeypatch.setattr("specfact_cli.modules.init.src.first_run_selection.install_bundles_for_init", _fake_install) + monkeypatch.setattr("specfact_cli.modules.init.src.commands.is_first_run", lambda **_: True) + monkeypatch.setattr("specfact_cli.modules.init.src.commands.is_non_interactive", lambda: False) + monkeypatch.setattr( + "specfact_cli.modules.init.src.commands._interactive_first_run_bundle_selection", + lambda: ["specfact-codebase"], + ) + monkeypatch.setattr( + "specfact_cli.modules.init.src.commands.get_discovered_modules_for_state", + lambda **_: [{"id": "init", "enabled": True}], + ) + monkeypatch.setattr("specfact_cli.modules.init.src.commands.write_modules_state", lambda _: None) + monkeypatch.setattr("specfact_cli.modules.init.src.commands.run_discovery_and_write_cache", lambda _: None) + monkeypatch.setattr( + "specfact_cli.modules.init.src.commands.detect_env_manager", lambda _: MagicMock(manager=MagicMock()) + ) + with _telemetry_track_context(): + result = runner.invoke(app, ["--repo", str(tmp_path)], catch_exceptions=False) + assert result.exit_code == 0, result.output + assert len(install_calls) == 1 + assert install_calls[0] == ["specfact-codebase"] + + +def test_init_first_run_interactive_no_selection_shows_tip(monkeypatch: pytest.MonkeyPatch, tmp_path: Path) -> None: + install_calls: list[list[str]] = [] + + def _fake_install(bundle_ids: list[str], install_root: Path, **kwargs: object) -> None: + install_calls.append(list(bundle_ids)) + + monkeypatch.setattr("specfact_cli.modules.init.src.first_run_selection.install_bundles_for_init", _fake_install) + monkeypatch.setattr("specfact_cli.modules.init.src.commands.is_first_run", lambda **_: True) + 
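

Further down, `test_spec_bundle_install_includes_project_dep` asserts that installing `specfact-spec` also pulls in its `specfact-project` dependency via `BUNDLE_TO_MODULE_NAMES`. The dependency expansion that behavior implies can be sketched as a depth-first closure — the dependency table here is illustrative (only the spec→project edge is asserted by the test):

```python
# Illustrative dependency table: which bundles require which others.
BUNDLE_DEPS: dict[str, list[str]] = {"specfact-spec": ["specfact-project"]}


def expand_with_deps(bundle_ids: list[str]) -> list[str]:
    # Depth-first expansion so dependencies land before their dependents;
    # `seen` guards against duplicates and cycles.
    ordered: list[str] = []
    seen: set[str] = set()

    def visit(bundle_id: str) -> None:
        if bundle_id in seen:
            return
        seen.add(bundle_id)
        for dep in BUNDLE_DEPS.get(bundle_id, []):
            visit(dep)
        ordered.append(bundle_id)

    for bundle_id in bundle_ids:
        visit(bundle_id)
    return ordered


print(expand_with_deps(["specfact-spec"]))
# ['specfact-project', 'specfact-spec']
```

Installing the expanded list in order satisfies the test's two assertions: project-bundle modules are installed, and spec-bundle modules are installed.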
monkeypatch.setattr("specfact_cli.modules.init.src.commands.is_non_interactive", lambda: False)
+    monkeypatch.setattr(
+        "specfact_cli.modules.init.src.commands._interactive_first_run_bundle_selection",
+        list,
+    )
+    monkeypatch.setattr(
+        "specfact_cli.modules.init.src.commands.get_discovered_modules_for_state",
+        lambda **_: [{"id": "init", "enabled": True}],
+    )
+    monkeypatch.setattr("specfact_cli.modules.init.src.commands.write_modules_state", lambda _: None)
+    monkeypatch.setattr("specfact_cli.modules.init.src.commands.run_discovery_and_write_cache", lambda _: None)
+    monkeypatch.setattr(
+        "specfact_cli.modules.init.src.commands.detect_env_manager", lambda _: MagicMock(manager=MagicMock())
+    )
+    with _telemetry_track_context():
+        result = runner.invoke(app, ["--repo", str(tmp_path)], catch_exceptions=False)
+    assert result.exit_code == 0, result.output
+    assert len(install_calls) == 0
+    assert "module install" in result.output or "Tip" in result.output
+
+
+def test_spec_bundle_install_includes_project_dep(monkeypatch: pytest.MonkeyPatch, tmp_path: Path) -> None:
+    installed_modules: list[str] = []
+
+    def _record_install(module_name: str, target_root: Path, **kwargs: object) -> bool:
+        installed_modules.append(module_name)
+        return True
+
+    monkeypatch.setattr(
+        "specfact_cli.registry.module_installer.install_bundled_module",
+        _record_install,
+    )
+    frs.install_bundles_for_init(["specfact-spec"], install_root=tmp_path)
+    project_module_names = set(frs.BUNDLE_TO_MODULE_NAMES.get("specfact-project", []))
+    spec_module_names = set(frs.BUNDLE_TO_MODULE_NAMES.get("specfact-spec", []))
+    installed_set = set(installed_modules)
+    assert project_module_names & installed_set, "spec bundle must trigger project bundle dep install"
+    assert spec_module_names & installed_set, "spec bundle modules must be installed"
diff --git a/tests/unit/registry/test_category_groups.py b/tests/unit/registry/test_category_groups.py
new file mode 100644
index 00000000..5ff07071
--- /dev/null
+++ b/tests/unit/registry/test_category_groups.py
@@ -0,0 +1,136 @@
+"""Tests for category group bootstrap and routing (category-command-groups)."""
+
+from __future__ import annotations
+
+import os
+from collections.abc import Generator
+from pathlib import Path
+from unittest.mock import patch
+
+import pytest
+
+from specfact_cli.registry import CommandRegistry
+from specfact_cli.registry.bootstrap import register_builtin_commands
+
+
+@pytest.fixture(autouse=True)
+def _clear_registry() -> Generator[None, None, None]:
+    CommandRegistry._clear_for_testing()
+    yield
+    CommandRegistry._clear_for_testing()
+
+
+def test_bootstrap_with_category_grouping_enabled_registers_group_commands() -> None:
+    """With category_grouping_enabled=True, bootstrap registers code, backlog, project, spec, govern."""
+    with patch.dict(os.environ, {"SPECFACT_CATEGORY_GROUPING_ENABLED": "true"}, clear=False):
+        register_builtin_commands()
+        names = [name for name, _ in CommandRegistry.list_commands_for_help()]
+        for group in ("code", "backlog", "project", "spec", "govern"):
+            assert group in names, f"Expected group command {group!r} in {names}"
+
+
+def test_bootstrap_with_category_grouping_disabled_registers_flat_commands() -> None:
+    """With category_grouping_enabled=False, bootstrap registers flat module commands (no group commands)."""
+    with patch.dict(os.environ, {"SPECFACT_CATEGORY_GROUPING_ENABLED": "false"}, clear=False):
+        register_builtin_commands()
+        names = [name for name, _ in CommandRegistry.list_commands_for_help()]
+        assert "code" not in names, "Group 'code' should not appear when grouping disabled"
+        assert "govern" not in names, "Group 'govern' should not appear when grouping disabled"
+        assert "analyze" in names
+        assert "validate" in names
+
+
+def test_code_analyze_routes_same_as_flat_analyze(
+    tmp_path: Path,
+) -> None:
+    """specfact code analyze ... routes to the same handler as specfact analyze ... (integration via CLI)."""
+    with patch.dict(os.environ, {"SPECFACT_CATEGORY_GROUPING_ENABLED": "true"}, clear=False):
+        register_builtin_commands()
+        from typer.main import get_command
+
+        from specfact_cli.cli import app
+
+        root_cmd = get_command(app)
+        assert root_cmd is not None
+        assert hasattr(root_cmd, "commands") and "code" in root_cmd.commands
+        code_app = CommandRegistry.get_typer("code")
+        click_code = get_command(code_app)
+        if hasattr(click_code, "commands"):
+            assert "analyze" in click_code.commands
+
+
+def test_govern_help_when_not_installed_suggests_install(
+    tmp_path: Path,
+) -> None:
+    """specfact govern --help when govern bundle not installed produces install suggestion."""
+    with patch.dict(os.environ, {"SPECFACT_CATEGORY_GROUPING_ENABLED": "true"}, clear=False):
+        register_builtin_commands()
+        from click.testing import CliRunner
+        from typer.main import get_command
+
+        from specfact_cli.cli import app
+
+        runner = CliRunner()
+        root_cmd = get_command(app)
+        result = runner.invoke(root_cmd, ["govern", "--help"])
+        assert (
+            result.exit_code == 0 or "install" in (result.output or "").lower() or "govern" in (result.output or "").lower()
+        )
+
+
+def test_flat_shim_validate_emits_deprecation_in_copilot_mode(
+    tmp_path: Path,
+) -> None:
+    """Flat 'specfact validate' resolves to real validate module (no deprecation message since shim is real module)."""
+    with patch.dict(
+        os.environ,
+        {"SPECFACT_CATEGORY_GROUPING_ENABLED": "true", "SPECFACT_MODE": "copilot"},
+        clear=False,
+    ):
+        register_builtin_commands()
+        from click.testing import CliRunner
+        from typer.main import get_command
+
+        from specfact_cli.cli import app
+
+        runner = CliRunner()
+        root_cmd = get_command(app)
+        result = runner.invoke(root_cmd, ["validate", "--help"])
+        assert result.exit_code == 0
+        assert "validate" in (result.output or "").lower()
+
+
+def test_flat_shim_validate_silent_in_cicd_mode(tmp_path: Path) -> None:
+    """Flat shim specfact validate is silent (no deprecation) in CI/CD mode."""
+    with patch.dict(
+        os.environ,
+        {"SPECFACT_CATEGORY_GROUPING_ENABLED": "true", "SPECFACT_MODE": "cicd"},
+        clear=False,
+    ):
+        register_builtin_commands()
+        from click.testing import CliRunner
+        from typer.main import get_command
+
+        from specfact_cli.cli import app
+
+        runner = CliRunner()
+        root_cmd = get_command(app)
+        result = runner.invoke(root_cmd, ["validate", "--help"])
+        assert result.exit_code == 0
+
+
+def test_spec_api_validate_routes_correctly(tmp_path: Path) -> None:
+    """specfact spec api routes correctly (spec module mounted as api subcommand; collision avoidance)."""
+    with patch.dict(os.environ, {"SPECFACT_CATEGORY_GROUPING_ENABLED": "true"}, clear=False):
+        register_builtin_commands()
+        from click.testing import CliRunner
+        from typer.main import get_command
+
+        from specfact_cli.cli import app
+
+        root_cmd = get_command(app)
+        assert root_cmd is not None and hasattr(root_cmd, "commands") and "spec" in root_cmd.commands
+        runner = CliRunner()
+        result = runner.invoke(root_cmd, ["spec", "api", "--help"])
+        assert result.exit_code == 0, f"spec api --help failed: {result.output}"
+        assert "validate" in (result.output or "").lower() or "Specmatic" in (result.output or "")
diff --git a/tests/unit/registry/test_module_grouping.py b/tests/unit/registry/test_module_grouping.py
new file mode 100644
index 00000000..811d8a18
--- /dev/null
+++ b/tests/unit/registry/test_module_grouping.py
@@ -0,0 +1,136 @@
+"""Tests for module category metadata and group_modules_by_category (module-grouping)."""
+
+from __future__ import annotations
+
+from pathlib import Path
+
+import pytest
+
+from specfact_cli.models.module_package import ModulePackageMetadata
+from specfact_cli.registry.module_grouping import ModuleManifestError, group_modules_by_category
+from specfact_cli.registry.module_packages import discover_package_metadata
+
+
+def _write_manifest(
+    root: Path,
+    module_name: str,
+    *,
+    category: str | None = None,
+    bundle: str | None = None,
+    bundle_group_command: str | None = None,
+    bundle_sub_command: str | None = None,
+) -> None:
+    module_dir = root / module_name
+    module_dir.mkdir(parents=True, exist_ok=True)
+    lines = [
+        f"name: {module_name}",
+        "version: '0.1.0'",
+        f"commands: [{module_name}]",
+    ]
+    if category is not None:
+        lines.append(f"category: {category}")
+    if bundle is not None:
+        lines.append(f"bundle: {bundle}")
+    if bundle_group_command is not None:
+        lines.append(f"bundle_group_command: {bundle_group_command}")
+    if bundle_sub_command is not None:
+        lines.append(f"bundle_sub_command: {bundle_sub_command}")
+    (module_dir / "module-package.yaml").write_text("\n".join(lines) + "\n", encoding="utf-8")
+    (module_dir / "src").mkdir(parents=True, exist_ok=True)
+
+
+def test_module_package_yaml_with_category_codebase_passes_validation(tmp_path: Path) -> None:
+    """module-package.yaml with category: codebase passes validation."""
+    _write_manifest(
+        tmp_path,
+        "analyze",
+        category="codebase",
+        bundle="specfact-codebase",
+        bundle_group_command="code",
+        bundle_sub_command="analyze",
+    )
+    packages = discover_package_metadata(tmp_path, source="builtin")
+    assert len(packages) == 1
+    meta = packages[0][1]
+    assert meta.category == "codebase"
+    assert meta.bundle == "specfact-codebase"
+    assert meta.bundle_group_command == "code"
+    assert meta.bundle_sub_command == "analyze"
+
+
+def test_module_package_yaml_with_category_unknown_raises_module_manifest_error(
+    tmp_path: Path,
+) -> None:
+    """module-package.yaml with category: unknown raises ModuleManifestError."""
+    _write_manifest(tmp_path, "foo", category="unknown")
+    (tmp_path / "foo" / "src").mkdir(parents=True, exist_ok=True)
+    with pytest.raises(ModuleManifestError) as exc_info:
+        discover_package_metadata(tmp_path, source="builtin")
+    assert "unknown" in str(exc_info.value).lower() or "category" in str(exc_info.value).lower()
+
+
+def test_module_package_yaml_without_category_mounts_ungrouped_warning_logged(
+    tmp_path: Path,
+) -> None:
+    """module-package.yaml without category field mounts as ungrouped (no error; warning logged in production)."""
+    _write_manifest(tmp_path, "legacy_mod")
+    packages = discover_package_metadata(tmp_path, source="builtin")
+    assert len(packages) == 1
+    meta = packages[0][1]
+    assert meta.category is None
+    assert meta.bundle_group_command is None
+
+
+def test_bundle_group_command_mismatch_raises_module_manifest_error(tmp_path: Path) -> None:
+    """bundle_group_command mismatch vs canonical category raises ModuleManifestError."""
+    _write_manifest(
+        tmp_path,
+        "analyze",
+        category="codebase",
+        bundle="specfact-codebase",
+        bundle_group_command="wrong_group",
+        bundle_sub_command="analyze",
+    )
+    with pytest.raises(ModuleManifestError) as exc_info:
+        discover_package_metadata(tmp_path, source="builtin")
+    assert "bundle_group_command" in str(exc_info.value) or "code" in str(exc_info.value)
+
+
+def test_core_category_modules_have_no_bundle_or_bundle_group_command(tmp_path: Path) -> None:
+    """Core-category modules have no bundle or bundle_group_command."""
+    _write_manifest(
+        tmp_path,
+        "init",
+        category="core",
+        bundle_sub_command="init",
+    )
+    packages = discover_package_metadata(tmp_path, source="builtin")
+    assert len(packages) == 1
+    meta = packages[0][1]
+    assert meta.category == "core"
+    assert meta.bundle is None
+    assert meta.bundle_group_command is None
+
+
+def test_group_modules_by_category_returns_correct_grouping() -> None:
+    """group_modules_by_category() returns correct grouping dict from list of manifests."""
+    manifests = [
+        ModulePackageMetadata(
+            name="analyze", version="0.1.0", commands=["analyze"], category="codebase", bundle_group_command="code"
+        ),
+        ModulePackageMetadata(
+            name="validate", version="0.1.0", commands=["validate"], category="codebase", bundle_group_command="code"
+        ),
+        ModulePackageMetadata(
+            name="backlog", version="0.1.0", commands=["backlog"], category="backlog", bundle_group_command="backlog"
+        ),
+    ]
+    grouped = group_modules_by_category(manifests)
+    assert "code" in grouped
+    assert "backlog" in grouped
+    assert len(grouped["code"]) == 2
+    assert len(grouped["backlog"]) == 1
+    names_code = {m.name for m in grouped["code"]}
+    assert names_code == {"analyze", "validate"}
+    assert grouped["backlog"][0].name == "backlog"
diff --git a/tests/unit/registry/test_module_installer.py b/tests/unit/registry/test_module_installer.py
index 00a7f1a1..2a0a50f2 100644
--- a/tests/unit/registry/test_module_installer.py
+++ b/tests/unit/registry/test_module_installer.py
@@ -425,6 +425,11 @@ def test_verify_module_artifact_fallback_emits_debug_in_debug_mode(
     mock_logger = MagicMock()
     monkeypatch.setattr(module_installer, "get_bridge_logger", lambda _name: mock_logger)
     monkeypatch.setattr(module_installer, "is_debug_mode", lambda: True, raising=False)
+    monkeypatch.setattr(
+        module_installer,
+        "_module_artifact_payload_signed",
+        lambda _: (_ for _ in ()).throw(ValueError("force fallback")),
+    )
     assert module_installer.verify_module_artifact(module_dir, metadata, allow_unsigned=False) is True
     mock_logger.info.assert_not_called()
diff --git a/tests/unit/specfact_cli/registry/test_module_packages.py b/tests/unit/specfact_cli/registry/test_module_packages.py
index 2a690b25..faaed2e2 100644
--- a/tests/unit/specfact_cli/registry/test_module_packages.py
+++ b/tests/unit/specfact_cli/registry/test_module_packages.py
@@ -12,6 +12,7 @@ from pathlib import Path
 
 import pytest
+import typer
 
 from specfact_cli.models.module_package import (
     IntegrityInfo,
@@ -315,6 +316,57 @@ def verify_may_fail(_package_dir: Path, meta, allow_unsigned: bool = False):
     assert "bad_cmd" not in names
 
 
+def test_grouped_registration_merges_duplicate_command_extensions(
+    monkeypatch: pytest.MonkeyPatch, tmp_path: Path
+) -> None:
+    """Grouped mode should merge duplicate module command trees instead of replacing earlier loaders."""
+    from specfact_cli.registry import module_packages as mp
+
+    packages = [
+        (
+            tmp_path / "base_backlog",
+            ModulePackageMetadata(name="base_backlog", version="0.1.0", commands=["backlog"], category="backlog"),
+        ),
+        (
+            tmp_path / "ext_backlog",
+            ModulePackageMetadata(name="ext_backlog", version="0.1.0", commands=["backlog"], category="backlog"),
+        ),
+    ]
+    monkeypatch.setattr(mp, "discover_all_package_metadata", lambda: packages)
+    monkeypatch.setattr(mp, "verify_module_artifact", lambda _dir, _meta, allow_unsigned=False: True)
+    monkeypatch.setattr(mp, "read_modules_state", dict)
+    monkeypatch.setattr(mp, "_check_protocol_compliance_from_source", lambda *_args: [])
+
+    def _build_typer(subcommand_name: str) -> typer.Typer:
+        app = typer.Typer()
+
+        @app.command(name=subcommand_name)
+        def _cmd() -> None:
+            return None
+
+        return app
+
+    def _fake_loader(_package_dir: Path, package_name: str, _cmd_name: str):
+        return (
+            (lambda: _build_typer("base_cmd")) if package_name == "base_backlog" else (lambda: _build_typer("ext_cmd"))
+        )
+
+    monkeypatch.setattr(mp, "_make_package_loader", _fake_loader)
+
+    mp.register_module_package_commands(category_grouping_enabled=True)
+
+    backlog_app = CommandRegistry.get_module_typer("backlog")
+    command_names = tuple(
+        sorted(
+            command_info.name
+            for command_info in backlog_app.registered_commands
+            if getattr(command_info, "name", None) is not None
+        )
+    )
+    assert "base_cmd" in command_names
+    assert "ext_cmd" in command_names
+
+
 def test_integrity_failure_shows_user_friendly_risk_warning(monkeypatch, tmp_path: Path) -> None:
     """Integrity failure should emit concise risk guidance instead of raw checksum diagnostics."""
     from specfact_cli.registry import module_packages as mp