diff --git a/.codex/skills/openspec-workflows/SKILL.md b/.codex/skills/openspec-workflows/SKILL.md new file mode 100644 index 00000000..fdbcaa56 --- /dev/null +++ b/.codex/skills/openspec-workflows/SKILL.md @@ -0,0 +1,66 @@ +--- +name: openspec-workflows +description: Create OpenSpec changes from implementation plans, and validate existing changes before implementation. Use when the user wants to turn a plan document into an OpenSpec change proposal, or validate that a change is safe to implement (breaking changes, dependency analysis). +license: MIT +metadata: + author: openspec + version: "1.0" +--- + +Two workflows for managing OpenSpec changes at the proposal stage. + +**Input**: Optionally specify a workflow name (`create` or `validate`) and a target (plan path or change ID). If omitted, ask the user which workflow they need. + +## Workflow Selection + +Determine which workflow to run: + +| User Intent | Workflow | Reference | +|---|---|---| +| Turn a plan into an OpenSpec change | **Create Change from Plan** | `references/create-change-from-plan.md` | +| Validate a change before implementation | **Validate Change** | `references/validate-change.md` | + +If the user's intent is unclear, use **AskUserQuestion** to ask which workflow they need. + +## Create Change from Plan + +Turns an implementation plan document into a fully formed OpenSpec change with proposal, specs, design, and tasks — including GitHub issue creation for public repos. + +**When to use**: The user has a plan document (typically in `specfact-cli-internal/docs/internal/implementation/`) and wants to create an OpenSpec change from it. + +**Load** `references/create-change-from-plan.md` and follow the full workflow. + +**Key steps**: +1. Select and parse the plan document +2. Cross-reference against existing plans and validate targets +3. Resolve any issues interactively +4. Create the OpenSpec change via `opsx:ff` skill +5. Review and improve: enforce TDD-first, add git worktree tasks (worktree creation first, PR last, cleanup after merge), validate against `openspec/config.yaml` +6. Create GitHub issue (public repos only) + +## Validate Change + +Performs dry-run simulation to detect breaking changes, analyze dependencies, and verify format compliance before implementation begins. + +**When to use**: The user wants to validate that an existing change is safe to implement — check for breaking interface changes, missing dependency updates, and format compliance. + +**Load** `references/validate-change.md` and follow the full workflow. + +**Key steps**: +1. Select the change (by ID or interactive list) +2. Parse all change artifacts (proposal, tasks, design, spec deltas) +3. Simulate interface changes in a temporary workspace +4. Analyze dependencies and detect breaking changes +5. Present findings and get user decision if breaking changes found +6. Run `openspec validate --strict` +7. Create `CHANGE_VALIDATION.md` report + +## Guardrails + +- Read `openspec/config.yaml` for project context and rules +- Read `CLAUDE.md` for project conventions +- Never modify production code during validation — use temp workspaces +- Never proceed with ambiguities — ask for clarification +- Enforce TDD-first ordering in tasks (per config.yaml) +- Enforce git worktree workflow: worktree creation first task, PR creation last task, worktree cleanup after merge — never switch the primary checkout away from `dev` +- Only create GitHub issues in the target repository specified by the plan diff --git a/.codex/skills/openspec-workflows/references/create-change-from-plan.md b/.codex/skills/openspec-workflows/references/create-change-from-plan.md new file mode 100644 index 00000000..bb999477 --- /dev/null +++ b/.codex/skills/openspec-workflows/references/create-change-from-plan.md @@ -0,0 +1,312 @@ +# Workflow: Create OpenSpec Change from Plan + +## Table of Contents + +- [Guardrails](#guardrails) +- [Step 1: Plan Selection](#step-1-plan-selection) +- [Step 2: Plan Review and Alignment](#step-2-plan-review-and-alignment) +- [Step 3: Integrity Re-Check](#step-3-integrity-re-check) +- [Step 4: OpenSpec Change Creation](#step-4-openspec-change-creation) +- [Step 5: Proposal Review and Improvement](#step-5-proposal-review-and-improvement) +- [Step 6: GitHub Issue Creation](#step-6-github-issue-creation) +- [Step 7: Create GitHub Issue via gh CLI](#step-7-create-github-issue-via-gh-cli) +- [Step 8: Completion](#step-8-completion) + +## Guardrails + +- Read `openspec/config.yaml` during the workflow (before or at Step 5) for project context and TDD/SDD rules. +- Favor straightforward, minimal implementations. Keep changes tightly scoped. +- Never proceed with ambiguities or conflicts — ask for clarification interactively. +- Do not write code during the proposal stage. Only create design documents (proposal.md, tasks.md, design.md, spec deltas). +- Always validate alignment against existing plans and implementation reality before proceeding. +- **CRITICAL**: Only create GitHub issues in the target repository specified by the plan. +- **CRITICAL Git Workflow (Worktree Policy)**: Use git worktrees for parallel development — never switch the primary checkout away from `dev`. Add a worktree creation task as the FIRST task, and PR creation as the LAST task. Never work on protected branches (`main`/`dev`) directly. Branch naming: `/`. Worktree path: `../specfact-cli-worktrees//`. All subsequent tasks execute inside the worktree directory. +- **CRITICAL TDD**: Per config.yaml, test tasks MUST come before implementation tasks. + +## Step 1: Plan Selection + +**If plan path provided**: Resolve to absolute path, verify file exists. + +**If no plan path provided**: +1. Search for plans in: + - `specfact-cli-internal/docs/internal/brownfield-strategy/` (`*.md`) + - `specfact-cli-internal/docs/internal/implementation/` (`*.md`) + - `specfact-cli/docs/` (if accessible) +2. Display numbered list with file path, title (first heading), last modified date. +3. Prompt user to select. + +## Step 2: Plan Review and Alignment + +### 2.1: Read and Parse Plan + +1. Read plan file completely. +2. Extract: + - Title and purpose (first H1) + - **Target repository** (look for `**Repository**:` in header metadata, e.g. `` `nold-ai/specfact-cli` ``) + - Phases/tasks with descriptions + - Files to create/modify (note repository prefixes) + - Dependencies, success metrics, estimated effort +3. Identify referenced targets (files, directories, repositories). + +### 2.2: Cross-Reference Check + +1. Search `specfact-cli-internal/docs/internal/brownfield-strategy/` for overlapping plans. +2. Search `specfact-cli-internal/docs/internal/implementation/` for conflicting implementation plans. +3. Extract conflicting info, overlapping scope, dependency relationships, timeline conflicts. + +### 2.3: Target Validation + +For each target in the plan: +- **Files**: Check existence, readability, location, structure matches assumptions. +- **Directories**: Check existence, structure. +- **Repositories**: Verify in workspace, structure matches, access ok. +- **Code refs**: Verify functions/classes exist, structure matches. + +### 2.4: Alignment Analysis + +Check: +1. **Accuracy**: File paths correct? Repos referenced accurately? Commands valid? +2. **Correctness**: Technical details accurate? Implementation approaches align with codebase? +3. **Ambiguities**: Unclear requirements, vague acceptance criteria, missing context. +4. **Conflicts**: With other plans, overlapping scope, timeline/resource conflicts. +5. **Consistency**: With CLAUDE.md conventions, OpenSpec conventions, existing patterns. + +### 2.5: Issue Detection and Interactive Resolution + +**If issues found**: +1. Categorize: Critical (must resolve), Warning (should resolve), Info (non-blocking). +2. Present: `[CRITICAL/WARNING/INFO] : ` with context and suggested resolutions. +3. Resolve interactively: For critical issues, prompt for clarification. For warnings, ask resolve or skip. +4. Re-validate after resolution. Loop until all critical issues resolved. + +## Step 3: Integrity Re-Check + +1. Re-run all checks from Step 2 with updated understanding. +2. Verify user clarifications are consistent. +3. Check for new issues introduced by clarifications. +4. If misalignments remain, go back to Step 2.5. + +## Step 4: OpenSpec Change Creation + +### 4.1: Determine Change Name + +1. Extract from plan title, convert to kebab-case. +2. Ensure unique (check existing changes in `openspec/changes/`). + +### 4.2: Execute OPSX Fast-Forward + +Invoke the `opsx:ff` skill with the change name: +- Use the plan as source of requirements. +- Map plan phases/tasks to OpenSpec capabilities. +- The opsx:ff workflow creates: change directory, proposal.md, specs/, design.md, tasks.md. +- It reads `openspec/config.yaml` for project context and per-artifact rules. + +### 4.3: Extract Change ID + +1. Identify created change ID. +2. Verify change directory: `openspec/changes//`. +3. Verify artifacts created: proposal.md, tasks.md, specs/. + +## Step 5: Proposal Review and Improvement + +### 5.1: Review Against Config and Project Rules + +1. **Read `openspec/config.yaml`**: + - Project context: Tech stack, constraints, architecture patterns. + - Development discipline (SDD + TDD): (1) Specs first, (2) Tests second (expect failure), (3) Code last. + - Per-artifact rules: `rules.tasks` — TDD order, test-before-code. + +2. **Read and apply project rules** from CLAUDE.md: + - Contract-first development, testing requirements, code conventions. + +3. **Verify config.yaml rules applied**: + - Source Tracking section (if public-facing). + - GitHub issue creation task (if public repo). + - 2-hour maximum chunks. + - TDD: test tasks before implementation. + +### 5.2: Update Tasks with Quality Standards and Git Workflow + +#### 5.2.1: Determine Branch Type + +- `add-*`, `create-*`, `implement-*`, `enhance-*` -> `feature/` +- `fix-*`, `correct-*`, `repair-*` -> `bugfix/` +- `update-*`, `modify-*`, `refactor-*` -> `feature/` +- `hotfix-*`, `urgent-*` -> `hotfix/` +- Default: `feature/` + +Branch name: `/`. Target: `dev`. + +#### 5.2.2: Add Git Worktree Creation Task (FIRST TASK) + +Add as first task in tasks.md: + +```markdown +## 1. Create git worktree for this change + +- [ ] 1.1 Fetch latest and create a worktree with a new branch from `origin/dev`. + - [ ] 1.1.1 `git fetch origin` + - [ ] 1.1.2 `git worktree add ../specfact-cli-worktrees// -b / origin/dev` + - [ ] 1.1.3 Change into the worktree: `cd ../specfact-cli-worktrees//` + - [ ] 1.1.4 Create a virtual environment: `python -m venv .venv && source .venv/bin/activate && pip install -e ".[dev]"` + - [ ] 1.1.5 `git branch --show-current` (verify correct branch) +``` + +**If a GitHub issue exists**, use `gh issue develop` to link the branch before creating the worktree: + +```markdown + - [ ] 1.1.2a `gh issue develop --repo --name /` (creates remote branch linked to issue) + - [ ] 1.1.2b `git fetch origin && git worktree add ../specfact-cli-worktrees// /` +``` + +All remaining tasks in tasks.md MUST run inside the worktree directory, not the primary checkout. + +#### 5.2.3: Update Tasks with Quality Standards + +For each task, ensure: +- Testing requirements (unit, contract, integration, E2E). +- Code quality checks: `hatch run format`, `hatch run type-check`, `hatch run contract-test`. +- Validation: `openspec validate --strict`. + +#### 5.2.4: Enforce TDD-first in tasks.md + +1. **Add "TDD / SDD order (enforced)" section** at top of tasks.md (after title, before first numbered task): + - State: per config.yaml, tests before code for any behavior-changing task. + - Order: (1) Spec deltas, (2) Tests from scenarios (expect failure), (3) Code last. + - "Do not implement production code until tests exist and have been run (expecting failure)." + - Separate with `---`. + +2. **Reorder each behavior-changing section**: Test tasks before implementation tasks. + +3. **Verify**: Scan tasks.md — any section with both test and implementation tasks must have tests first. + +#### 5.2.5: Add PR Creation Task (LAST TASK) + +Add as last task in tasks.md. Only create PR if target repo is public (specfact-cli, platform-frontend). + +Key steps (run from inside the worktree directory): +1. Prepare commit: `git add .`, commit with conventional message, push with `-u`: `git push -u origin /`. +2. Create PR body from `.github/pull_request_template.md`: + - Use full repo path format for issue refs: `Fixes nold-ai/specfact-cli#` + - Include OpenSpec change ID in description. +3. Create PR: `gh pr create --repo --base dev --head --title ": " --body-file ` +4. Link to project (specfact-cli only): `gh project item-add 1 --owner nold-ai --url ` +5. Verify Development link on issue, project board. +6. Update project status to "In Progress" (if applicable). + +PR title format: `feat:` for feature/, `fix:` for bugfix/, etc. + +#### 5.2.6: Add Worktree Cleanup Task (AFTER MERGE) + +Add a note after the PR task for post-merge cleanup: + +```markdown +## Post-merge cleanup (after PR is merged) + +- [ ] Return to primary checkout: `cd .../specfact-cli` +- [ ] `git fetch origin` +- [ ] `git worktree remove ../specfact-cli-worktrees//` +- [ ] `git branch -d /` +- [ ] `git worktree prune` +- [ ] (Optional) `git push origin --delete /` +``` + +### 5.3: Update Proposal with Quality Gates + +Update proposal.md with: quality standards section, git workflow requirements, acceptance criteria (branch created, tests pass, contracts validated, docs updated, PR created). + +### 5.4: Validate with OpenSpec + +1. Verify format: proposal.md has `# Change:` title, `## Why`, `## What Changes`, `## Impact`. Tasks.md uses `## 1.` numbered format. +2. Check status: `openspec status --change "" --json`. +3. Run: `openspec validate --strict`. Fix and re-run until passing. + +### 5.5: Markdown Linting + +Run `markdownlint --config .markdownlint.json --fix` on all `.md` files in the change directory. Fix remaining issues manually. + +## Step 6: GitHub Issue Creation + +### 6.1: Determine Target Repository + +1. Extract target repo from plan header (`**Repository**:` field). +2. Decision: + - `specfact-cli` or `platform-frontend` (public) -> create issue, proceed to 6.2. + - `specfact-cli-internal` (internal) -> skip issue creation, go to Step 8. + - Not specified -> ask user. + +### 6.2: Sanitize Proposal Content + +For public issues: +- **Remove**: Competitive analysis, market positioning, internal strategy, effort estimates. +- **Preserve**: User-facing value, feature descriptions, acceptance criteria, API changes. + +Format per config.yaml: +- Title: `[Change] ` +- Labels: `enhancement`, `change-proposal` +- Body: `## Why`, `## What Changes`, `## Acceptance Criteria` +- Footer: `*OpenSpec Change Proposal: *` + +Show sanitized content to user for approval before creating. + +## Step 7: Create GitHub Issue via gh CLI + +1. Write sanitized content to temp file. +2. Create issue: + +```bash +gh issue create \ + --repo \ + --title "[Change] " \ + --body-file /tmp/github-issue-<change-id>.md \ + --label "enhancement" \ + --label "change-proposal" +``` + +3. For specfact-cli: link to project `gh project item-add 1 --owner nold-ai --url <ISSUE_URL>`. +4. Update `proposal.md` Source Tracking section: + +```markdown +## Source Tracking + +<!-- source_repo: <target-repo> --> +- **GitHub Issue**: #<number> +- **Issue URL**: <url> +- **Last Synced Status**: proposed +``` + +5. Cleanup temp file. + +## Step 8: Completion + +Display summary: + +``` +Change ID: <change-id> +Location: openspec/changes/<change-id>/ + +Validation: + - OpenSpec validation passed + - Markdown linting passed + - Config.yaml rules applied (TDD-first enforced) + - Git workflow tasks added (branch + PR) + +GitHub Issue (if public): + - Issue #<number> created: <url> + - Source tracking updated + +Next Steps: + 1. Review: openspec/changes/<change-id>/proposal.md + 2. Review: openspec/changes/<change-id>/tasks.md + 3. Verify TDD order and git workflow in tasks + 4. Apply when ready: invoke opsx:apply skill +``` + +## Error Handling + +- **Plan not found**: Search and suggest alternatives. +- **Validation failures**: Present clearly, allow interactive resolution. +- **OpenSpec validation fails**: Fix and re-validate, don't proceed until passing. +- **gh CLI unavailable**: Inform user, provide manual creation instructions. +- **Issue creation fails**: Log error, allow retry, don't fail entire workflow. +- **Project linking fails**: Log warning, continue (non-critical). diff --git a/.codex/skills/openspec-workflows/references/validate-change.md b/.codex/skills/openspec-workflows/references/validate-change.md new file mode 100644 index 00000000..2ac055a8 --- /dev/null +++ b/.codex/skills/openspec-workflows/references/validate-change.md @@ -0,0 +1,264 @@ +# Workflow: Validate OpenSpec Change + +## Table of Contents + +- [Guardrails](#guardrails) +- [Step 1: Change Selection](#step-1-change-selection) +- [Step 2: Read and Parse Change](#step-2-read-and-parse-change) +- [Step 3: Simulate Change Application](#step-3-simulate-change-application) +- [Step 4: Dependency Analysis](#step-4-dependency-analysis) +- [Step 5: Validation Report and Decision](#step-5-validation-report-and-decision) +- [Step 6: Create Validation Report](#step-6-create-validation-report) +- [Step 7: Completion](#step-7-completion) + +## Guardrails + +- Never modify the actual codebase during validation — only work in temp directories. +- Focus on interface/contract/parameter analysis, not implementation details. +- Identify breaking changes, not style or formatting issues. +- Always create CHANGE_VALIDATION.md for audit trail. +- Ask for user confirmation before extending change scope or rejecting proposals. + +## Step 1: Change Selection + +**If change ID provided**: Resolve to `openspec/changes/<change-id>/`, verify directory and proposal.md exist. + +**If no change ID provided**: +1. List active changes: `openspec list --json`. +2. Display numbered list with change ID, schema, status, brief description. +3. Prompt user to select. + +## Step 2: Read and Parse Change + +### 2.1: Check Status and Read Artifacts + +1. **Read `openspec/config.yaml`** for project context, constraints, and per-artifact rules. + +2. **Check change status**: `openspec status --change "<change-id>" --json` + - Verify artifacts exist and are complete (status: "done"). + +3. **Get artifact context**: `openspec instructions apply --change "<change-id>" --json` + +4. **Verify proposal.md format** (per config.yaml): + - Title: `# Change: [Brief description]` + - Required sections: `## Why`, `## What Changes`, `## Capabilities`, `## Impact` + - "What Changes": bullet list with NEW/EXTEND/MODIFY markers + - "Capabilities": each capability needs a spec file + - "Impact": Affected specs, Affected code, Integration points + +5. **Read proposal.md**: Extract summary, rationale, scope, capabilities, affected files. + +6. **Verify tasks.md format** (per config.yaml): + - Hierarchical numbered sections: `## 1.`, `## 2.` + - Tasks: `- [ ] 1.1 [Description]` + - Sub-tasks: `- [ ] 1.1.1 [Description]` + - Rules: 2-hour max chunks, contract tasks, test tasks, quality gates, git worktree workflow (worktree creation first, PR last, cleanup after merge) + +7. **Read tasks.md**: Extract tasks, files to create/modify/delete, task dependencies. Verify worktree creation first, PR creation last, worktree cleanup after merge. + +8. **Read design.md** (if exists): Architectural decisions, interface changes, contracts, migration plans. Verify bridge adapter docs, sequence diagrams for multi-repo. + +9. **Read spec deltas** (`specs/<capability>/spec.md`): ADDED/MODIFIED/REMOVED requirements, interface/parameter/contract changes, cross-refs. Verify Given/When/Then format. + +### 2.2: Identify Change Scope + +1. **Files to modify**: Extract from tasks.md and proposal.md. Categorize: code, tests, docs, config. +2. **Modules/Components**: Python modules, classes, functions, interfaces, contracts, APIs. Note public vs private. +3. **Dependencies**: From proposal "Dependencies" section and task dependencies. + +## Step 3: Simulate Change Application + +### 3.1: Create Temporary Workspace + +```bash +TEMP_WORKSPACE="/tmp/specfact-validation-<change-id>-$(date +%s)" +mkdir -p "$TEMP_WORKSPACE" +``` + +Copy relevant repository structure to temp workspace. + +### 3.2: Analyze Spec Deltas for Interface Changes + +For each spec delta: +1. Parse ADDED/MODIFIED/REMOVED requirements. +2. Extract interface changes: function signatures, class interfaces, `@icontract`/`@beartype` decorators, type hints, API endpoints. +3. Create interface scaffolds in temp workspace (stubs only, no implementation): + +```python +# OLD INTERFACE (from existing codebase) +def process_data(data: str, options: dict) -> dict: ... + +# NEW INTERFACE (from change proposal) +def process_data(data: str, options: dict, validate: bool = True) -> dict: ... +``` + +### 3.3: Map Tasks to File Modifications + +For each task, categorize modification type: +- **Interface change**: Function/class signature modification +- **Contract change**: `@icontract` decorator modification +- **Type change**: Type hint modification +- **New/Delete file**: Module/class/function added or removed +- **Documentation**: Non-breaking doc changes + +Create modification map: File path -> Modification type -> Interface changes. + +## Step 4: Dependency Analysis + +### 4.1: Find Dependent Code + +For each modified file/interface, search codebase: +- `from...import...<module>` — find imports +- `<function_name>(` or `<class_name>(` — find usages +- `@<decorator>` — find contract decorators + +Build dependency graph: Modified interface -> dependent files (direct, indirect, test). + +### 4.2: Analyze Breaking Changes + +Compare old vs new interface. Detect: +- **Parameter removal**: Required param removed +- **Parameter addition**: Required param added (no default) +- **Parameter type change**: Incompatible type +- **Return type change**: Incompatible return +- **Contract strengthening**: `@require` stricter, `@ensure` weaker +- **Method/class/module removal**: Public API removed + +For each dependent file, check if it would break: +- **Would break**: Incompatible usage detected +- **Would need update**: Compatible but may need adjustment +- **No impact**: Usage compatible + +### 4.3: Identify Required Updates + +Categorize: +- **Critical**: Must update or code breaks +- **Recommended**: Should update for consistency +- **Optional**: No update needed + +## Step 5: Validation Report and Decision + +### 5.1: Summary + +Count breaking changes, affected interfaces, dependent files. Assess impact: High/Medium/Low. + +### 5.2: Present Findings + +``` +Change Validation Report: <change-id> + +Breaking Changes Detected: <count> + - <interface 1>: <description> + +Dependent Files Affected: <count> + Critical (must update): <count> + Recommended: <count> + Optional: <count> + +Impact Assessment: <High/Medium/Low> +``` + +### 5.3: User Decision (if breaking changes) + +**Option A: Extend Scope** — Add tasks to update dependent files. May require major version. + +**Option B: Adjust Change** — Add default params, keep old interface (deprecation), use optional params. + +**Option C: Reject and Defer** — Update status to "deferred", document in CHANGE_VALIDATION.md. + +**No breaking changes**: Proceed to 5.4. + +### 5.4: OpenSpec Validation + +1. Check status: `openspec status --change "<change-id>" --json` +2. Run: `openspec validate <change-id> --strict` +3. Fix issues and re-run until passing. +4. If proposal was updated (scope extended/adjusted), re-validate. + +## Step 6: Create Validation Report + +Create `openspec/changes/<change-id>/CHANGE_VALIDATION.md`: + +```markdown +# Change Validation Report: <change-id> + +**Validation Date**: <timestamp> +**Change Proposal**: [proposal.md](./proposal.md) +**Validation Method**: Dry-run simulation in temporary workspace + +## Executive Summary + +- Breaking Changes: <count> detected / <count> resolved +- Dependent Files: <count> affected +- Impact Level: <High/Medium/Low> +- Validation Result: <Pass/Fail/Deferred> +- User Decision: <Extend Scope/Adjust Change/Reject/N/A> + +## Breaking Changes Detected + +### Interface: <name> +- **Type**: Parameter addition/removal/type change +- **Old Signature**: `<old>` +- **New Signature**: `<new>` +- **Dependent Files**: <file>: <impact> + +## Dependencies Affected + +### Critical Updates Required +- <file>: <reason> + +### Recommended Updates +- <file>: <reason> + +## Impact Assessment + +- **Code Impact**: <description> +- **Test Impact**: <description> +- **Documentation Impact**: <description> +- **Release Impact**: <Minor/Major/Patch> + +## Format Validation + +- **proposal.md Format**: <Pass/Fail> + - Title, sections, capabilities, impact per config.yaml +- **tasks.md Format**: <Pass/Fail> + - Headers, task format, config.yaml compliance (TDD, git workflow, quality gates) +- **specs Format**: <Pass/Fail> + - Given/When/Then format, references existing patterns +- **Config.yaml Compliance**: <Pass/Fail> + +## OpenSpec Validation + +- **Status**: <Pass/Fail> +- **Command**: `openspec validate <change-id> --strict` +- **Issues Found/Fixed**: <count> + +## Validation Artifacts + +- Temporary workspace: <path> +``` + +Update proposal status if deferred, scope extended, or adjusted. + +## Step 7: Completion + +``` +Change ID: <change-id> +Validation Report: openspec/changes/<change-id>/CHANGE_VALIDATION.md + +Findings: + - Breaking Changes: <count> + - Dependent Files: <count> + - Impact Level: <level> + - Validation Result: <result> + +Next Steps: + <based on decision — implement, re-validate, or defer> +``` + +## Error Handling + +- **Change not found**: Search and suggest alternatives. +- **Repo not accessible**: Inform user, provide manual validation instructions. +- **Breaking changes**: Present options clearly, don't proceed without user decision. +- **Dependency analysis fails**: Continue with partial analysis, note limitations. diff --git a/.gitignore b/.gitignore index 6165cc48..a4dca467 100644 --- a/.gitignore +++ b/.gitignore @@ -111,6 +111,12 @@ docs/internal/ .claude/skills/openspec-*/ !.claude/skills/openspec-workflows/ +.codex/skills/openspec-*/ +!.codex/skills/openspec-workflows/ + +.vibe/skills/openspec-*/ +!.vibe/skills/openspec-workflows/ + # Semgrep rules (generated from tools/semgrep/ - source rules are versioned) .semgrep/ diff --git a/.vibe/skills/openspec-workflows/SKILL.md b/.vibe/skills/openspec-workflows/SKILL.md new file mode 100644 index 00000000..fdbcaa56 --- /dev/null +++ b/.vibe/skills/openspec-workflows/SKILL.md @@ -0,0 +1,66 @@ +--- +name: openspec-workflows +description: Create OpenSpec changes from implementation plans, and validate existing changes before implementation. Use when the user wants to turn a plan document into an OpenSpec change proposal, or validate that a change is safe to implement (breaking changes, dependency analysis). +license: MIT +metadata: + author: openspec + version: "1.0" +--- + +Two workflows for managing OpenSpec changes at the proposal stage. + +**Input**: Optionally specify a workflow name (`create` or `validate`) and a target (plan path or change ID). If omitted, ask the user which workflow they need. + +## Workflow Selection + +Determine which workflow to run: + +| User Intent | Workflow | Reference | +|---|---|---| +| Turn a plan into an OpenSpec change | **Create Change from Plan** | `references/create-change-from-plan.md` | +| Validate a change before implementation | **Validate Change** | `references/validate-change.md` | + +If the user's intent is unclear, use **AskUserQuestion** to ask which workflow they need. + +## Create Change from Plan + +Turns an implementation plan document into a fully formed OpenSpec change with proposal, specs, design, and tasks — including GitHub issue creation for public repos. + +**When to use**: The user has a plan document (typically in `specfact-cli-internal/docs/internal/implementation/`) and wants to create an OpenSpec change from it. + +**Load** `references/create-change-from-plan.md` and follow the full workflow. + +**Key steps**: +1. Select and parse the plan document +2. Cross-reference against existing plans and validate targets +3. Resolve any issues interactively +4. Create the OpenSpec change via `opsx:ff` skill +5. Review and improve: enforce TDD-first, add git worktree tasks (worktree creation first, PR last, cleanup after merge), validate against `openspec/config.yaml` +6. Create GitHub issue (public repos only) + +## Validate Change + +Performs dry-run simulation to detect breaking changes, analyze dependencies, and verify format compliance before implementation begins. + +**When to use**: The user wants to validate that an existing change is safe to implement — check for breaking interface changes, missing dependency updates, and format compliance. + +**Load** `references/validate-change.md` and follow the full workflow. + +**Key steps**: +1. Select the change (by ID or interactive list) +2. Parse all change artifacts (proposal, tasks, design, spec deltas) +3. Simulate interface changes in a temporary workspace +4. Analyze dependencies and detect breaking changes +5. Present findings and get user decision if breaking changes found +6. Run `openspec validate <change-id> --strict` +7. Create `CHANGE_VALIDATION.md` report + +## Guardrails + +- Read `openspec/config.yaml` for project context and rules +- Read `CLAUDE.md` for project conventions +- Never modify production code during validation — use temp workspaces +- Never proceed with ambiguities — ask for clarification +- Enforce TDD-first ordering in tasks (per config.yaml) +- Enforce git worktree workflow: worktree creation first task, PR creation last task, worktree cleanup after merge — never switch the primary checkout away from `dev` +- Only create GitHub issues in the target repository specified by the plan diff --git a/.vibe/skills/openspec-workflows/references/create-change-from-plan.md b/.vibe/skills/openspec-workflows/references/create-change-from-plan.md new file mode 100644 index 00000000..bb999477 --- /dev/null +++ b/.vibe/skills/openspec-workflows/references/create-change-from-plan.md @@ -0,0 +1,312 @@ +# Workflow: Create OpenSpec Change from Plan + +## Table of Contents + +- [Guardrails](#guardrails) +- [Step 1: Plan Selection](#step-1-plan-selection) +- [Step 2: Plan Review and Alignment](#step-2-plan-review-and-alignment) +- [Step 3: Integrity Re-Check](#step-3-integrity-re-check) +- [Step 4: OpenSpec Change Creation](#step-4-openspec-change-creation) +- [Step 5: Proposal Review and Improvement](#step-5-proposal-review-and-improvement) +- [Step 6: GitHub Issue Creation](#step-6-github-issue-creation) +- [Step 7: Create GitHub Issue via gh CLI](#step-7-create-github-issue-via-gh-cli) +- [Step 8: Completion](#step-8-completion) + +## Guardrails + +- Read `openspec/config.yaml` during the workflow (before or at Step 5) for project context and TDD/SDD rules. +- Favor straightforward, minimal implementations. Keep changes tightly scoped. +- Never proceed with ambiguities or conflicts — ask for clarification interactively. +- Do not write code during the proposal stage. Only create design documents (proposal.md, tasks.md, design.md, spec deltas). +- Always validate alignment against existing plans and implementation reality before proceeding. +- **CRITICAL**: Only create GitHub issues in the target repository specified by the plan. +- **CRITICAL Git Workflow (Worktree Policy)**: Use git worktrees for parallel development — never switch the primary checkout away from `dev`. Add a worktree creation task as the FIRST task, and PR creation as the LAST task. Never work on protected branches (`main`/`dev`) directly. Branch naming: `<branch-type>/<change-id>`. Worktree path: `../specfact-cli-worktrees/<branch-type>/<change-id>`. All subsequent tasks execute inside the worktree directory. +- **CRITICAL TDD**: Per config.yaml, test tasks MUST come before implementation tasks. + +## Step 1: Plan Selection + +**If plan path provided**: Resolve to absolute path, verify file exists. + +**If no plan path provided**: +1. Search for plans in: + - `specfact-cli-internal/docs/internal/brownfield-strategy/` (`*.md`) + - `specfact-cli-internal/docs/internal/implementation/` (`*.md`) + - `specfact-cli/docs/` (if accessible) +2. Display numbered list with file path, title (first heading), last modified date. +3. Prompt user to select. + +## Step 2: Plan Review and Alignment + +### 2.1: Read and Parse Plan + +1. Read plan file completely. +2. Extract: + - Title and purpose (first H1) + - **Target repository** (look for `**Repository**:` in header metadata, e.g. `` `nold-ai/specfact-cli` ``) + - Phases/tasks with descriptions + - Files to create/modify (note repository prefixes) + - Dependencies, success metrics, estimated effort +3. Identify referenced targets (files, directories, repositories). + +### 2.2: Cross-Reference Check + +1. Search `specfact-cli-internal/docs/internal/brownfield-strategy/` for overlapping plans. +2. Search `specfact-cli-internal/docs/internal/implementation/` for conflicting implementation plans. +3. Extract conflicting info, overlapping scope, dependency relationships, timeline conflicts. + +### 2.3: Target Validation + +For each target in the plan: +- **Files**: Check existence, readability, location, structure matches assumptions. +- **Directories**: Check existence, structure. +- **Repositories**: Verify in workspace, structure matches, access ok. +- **Code refs**: Verify functions/classes exist, structure matches. + +### 2.4: Alignment Analysis + +Check: +1. **Accuracy**: File paths correct? Repos referenced accurately? Commands valid? +2. **Correctness**: Technical details accurate? Implementation approaches align with codebase? +3. **Ambiguities**: Unclear requirements, vague acceptance criteria, missing context. +4. **Conflicts**: With other plans, overlapping scope, timeline/resource conflicts. +5. **Consistency**: With CLAUDE.md conventions, OpenSpec conventions, existing patterns. + +### 2.5: Issue Detection and Interactive Resolution + +**If issues found**: +1. Categorize: Critical (must resolve), Warning (should resolve), Info (non-blocking). +2. Present: `[CRITICAL/WARNING/INFO] <category>: <description>` with context and suggested resolutions. +3. Resolve interactively: For critical issues, prompt for clarification. For warnings, ask resolve or skip. +4. Re-validate after resolution. Loop until all critical issues resolved. + +## Step 3: Integrity Re-Check + +1. Re-run all checks from Step 2 with updated understanding. +2. Verify user clarifications are consistent. +3. Check for new issues introduced by clarifications. +4. If misalignments remain, go back to Step 2.5. + +## Step 4: OpenSpec Change Creation + +### 4.1: Determine Change Name + +1. Extract from plan title, convert to kebab-case. +2. Ensure unique (check existing changes in `openspec/changes/`). + +### 4.2: Execute OPSX Fast-Forward + +Invoke the `opsx:ff` skill with the change name: +- Use the plan as source of requirements. +- Map plan phases/tasks to OpenSpec capabilities. +- The opsx:ff workflow creates: change directory, proposal.md, specs/, design.md, tasks.md. +- It reads `openspec/config.yaml` for project context and per-artifact rules. + +### 4.3: Extract Change ID + +1. Identify created change ID. +2. Verify change directory: `openspec/changes/<change-id>/`. +3. Verify artifacts created: proposal.md, tasks.md, specs/. + +## Step 5: Proposal Review and Improvement + +### 5.1: Review Against Config and Project Rules + +1. **Read `openspec/config.yaml`**: + - Project context: Tech stack, constraints, architecture patterns. + - Development discipline (SDD + TDD): (1) Specs first, (2) Tests second (expect failure), (3) Code last. + - Per-artifact rules: `rules.tasks` — TDD order, test-before-code. + +2. **Read and apply project rules** from CLAUDE.md: + - Contract-first development, testing requirements, code conventions. + +3. **Verify config.yaml rules applied**: + - Source Tracking section (if public-facing). + - GitHub issue creation task (if public repo). + - 2-hour maximum chunks. + - TDD: test tasks before implementation. + +### 5.2: Update Tasks with Quality Standards and Git Workflow + +#### 5.2.1: Determine Branch Type + +- `add-*`, `create-*`, `implement-*`, `enhance-*` -> `feature/` +- `fix-*`, `correct-*`, `repair-*` -> `bugfix/` +- `update-*`, `modify-*`, `refactor-*` -> `feature/` +- `hotfix-*`, `urgent-*` -> `hotfix/` +- Default: `feature/` + +Branch name: `<branch-type>/<change-id>`. Target: `dev`. + +#### 5.2.2: Add Git Worktree Creation Task (FIRST TASK) + +Add as first task in tasks.md: + +```markdown +## 1. Create git worktree for this change + +- [ ] 1.1 Fetch latest and create a worktree with a new branch from `origin/dev`. + - [ ] 1.1.1 `git fetch origin` + - [ ] 1.1.2 `git worktree add ../specfact-cli-worktrees/<branch-type>/<change-id> -b <branch-type>/<change-id> origin/dev` + - [ ] 1.1.3 Change into the worktree: `cd ../specfact-cli-worktrees/<branch-type>/<change-id>` + - [ ] 1.1.4 Create a virtual environment: `python -m venv .venv && source .venv/bin/activate && pip install -e ".[dev]"` + - [ ] 1.1.5 `git branch --show-current` (verify correct branch) +``` + +**If a GitHub issue exists**, use `gh issue develop` to link the branch before creating the worktree: + +```markdown + - [ ] 1.1.2a `gh issue develop <issue-number> --repo <target-repo> --name <branch-type>/<change-id>` (creates remote branch linked to issue) + - [ ] 1.1.2b `git fetch origin && git worktree add ../specfact-cli-worktrees/<branch-type>/<change-id> <branch-type>/<change-id>` +``` + +All remaining tasks in tasks.md MUST run inside the worktree directory, not the primary checkout. + +#### 5.2.3: Update Tasks with Quality Standards + +For each task, ensure: +- Testing requirements (unit, contract, integration, E2E). +- Code quality checks: `hatch run format`, `hatch run type-check`, `hatch run contract-test`. +- Validation: `openspec validate <change-id> --strict`. + +#### 5.2.4: Enforce TDD-first in tasks.md + +1. **Add "TDD / SDD order (enforced)" section** at top of tasks.md (after title, before first numbered task): + - State: per config.yaml, tests before code for any behavior-changing task. + - Order: (1) Spec deltas, (2) Tests from scenarios (expect failure), (3) Code last. + - "Do not implement production code until tests exist and have been run (expecting failure)." + - Separate with `---`. + +2. **Reorder each behavior-changing section**: Test tasks before implementation tasks. + +3. **Verify**: Scan tasks.md — any section with both test and implementation tasks must have tests first. + +#### 5.2.5: Add PR Creation Task (LAST TASK) + +Add as last task in tasks.md. Only create PR if target repo is public (specfact-cli, platform-frontend). + +Key steps (run from inside the worktree directory): +1. Prepare commit: `git add .`, commit with conventional message, push with `-u`: `git push -u origin <branch-type>/<change-id>`. +2. Create PR body from `.github/pull_request_template.md`: + - Use full repo path format for issue refs: `Fixes nold-ai/specfact-cli#<number>` + - Include OpenSpec change ID in description. +3. Create PR: `gh pr create --repo <target-repo> --base dev --head <branch> --title "<type>: <desc>" --body-file <body-file>` +4. Link to project (specfact-cli only): `gh project item-add 1 --owner nold-ai --url <PR_URL>` +5. Verify Development link on issue, project board. +6. Update project status to "In Progress" (if applicable). + +PR title format: `feat:` for feature/, `fix:` for bugfix/, etc. + +#### 5.2.6: Add Worktree Cleanup Task (AFTER MERGE) + +Add a note after the PR task for post-merge cleanup: + +```markdown +## Post-merge cleanup (after PR is merged) + +- [ ] Return to primary checkout: `cd .../specfact-cli` +- [ ] `git fetch origin` +- [ ] `git worktree remove ../specfact-cli-worktrees/<branch-type>/<change-id>` +- [ ] `git branch -d <branch-type>/<change-id>` +- [ ] `git worktree prune` +- [ ] (Optional) `git push origin --delete <branch-type>/<change-id>` +``` + +### 5.3: Update Proposal with Quality Gates + +Update proposal.md with: quality standards section, git workflow requirements, acceptance criteria (branch created, tests pass, contracts validated, docs updated, PR created). + +### 5.4: Validate with OpenSpec + +1. Verify format: proposal.md has `# Change:` title, `## Why`, `## What Changes`, `## Impact`. Tasks.md uses `## 1.` numbered format. +2. Check status: `openspec status --change "<change-id>" --json`. +3. Run: `openspec validate <change-id> --strict`. Fix and re-run until passing. + +### 5.5: Markdown Linting + +Run `markdownlint --config .markdownlint.json --fix` on all `.md` files in the change directory. Fix remaining issues manually. + +## Step 6: GitHub Issue Creation + +### 6.1: Determine Target Repository + +1. Extract target repo from plan header (`**Repository**:` field). +2. Decision: + - `specfact-cli` or `platform-frontend` (public) -> create issue, proceed to 6.2. + - `specfact-cli-internal` (internal) -> skip issue creation, go to Step 8. + - Not specified -> ask user. + +### 6.2: Sanitize Proposal Content + +For public issues: +- **Remove**: Competitive analysis, market positioning, internal strategy, effort estimates. +- **Preserve**: User-facing value, feature descriptions, acceptance criteria, API changes. + +Format per config.yaml: +- Title: `[Change] <Brief Description>` +- Labels: `enhancement`, `change-proposal` +- Body: `## Why`, `## What Changes`, `## Acceptance Criteria` +- Footer: `*OpenSpec Change Proposal: <change-id>*` + +Show sanitized content to user for approval before creating. + +## Step 7: Create GitHub Issue via gh CLI + +1. Write sanitized content to temp file. +2. Create issue: + +```bash +gh issue create \ + --repo <target-repo> \ + --title "[Change] <title>" \ + --body-file /tmp/github-issue-<change-id>.md \ + --label "enhancement" \ + --label "change-proposal" +``` + +3. For specfact-cli: link to project `gh project item-add 1 --owner nold-ai --url <ISSUE_URL>`. +4. Update `proposal.md` Source Tracking section: + +```markdown +## Source Tracking + +<!-- source_repo: <target-repo> --> +- **GitHub Issue**: #<number> +- **Issue URL**: <url> +- **Last Synced Status**: proposed +``` + +5. Cleanup temp file. + +## Step 8: Completion + +Display summary: + +``` +Change ID: <change-id> +Location: openspec/changes/<change-id>/ + +Validation: + - OpenSpec validation passed + - Markdown linting passed + - Config.yaml rules applied (TDD-first enforced) + - Git workflow tasks added (branch + PR) + +GitHub Issue (if public): + - Issue #<number> created: <url> + - Source tracking updated + +Next Steps: + 1. Review: openspec/changes/<change-id>/proposal.md + 2. Review: openspec/changes/<change-id>/tasks.md + 3. Verify TDD order and git workflow in tasks + 4. Apply when ready: invoke opsx:apply skill +``` + +## Error Handling + +- **Plan not found**: Search and suggest alternatives. +- **Validation failures**: Present clearly, allow interactive resolution. +- **OpenSpec validation fails**: Fix and re-validate, don't proceed until passing. +- **gh CLI unavailable**: Inform user, provide manual creation instructions. +- **Issue creation fails**: Log error, allow retry, don't fail entire workflow. +- **Project linking fails**: Log warning, continue (non-critical). diff --git a/.vibe/skills/openspec-workflows/references/validate-change.md b/.vibe/skills/openspec-workflows/references/validate-change.md new file mode 100644 index 00000000..2ac055a8 --- /dev/null +++ b/.vibe/skills/openspec-workflows/references/validate-change.md @@ -0,0 +1,264 @@ +# Workflow: Validate OpenSpec Change + +## Table of Contents + +- [Guardrails](#guardrails) +- [Step 1: Change Selection](#step-1-change-selection) +- [Step 2: Read and Parse Change](#step-2-read-and-parse-change) +- [Step 3: Simulate Change Application](#step-3-simulate-change-application) +- [Step 4: Dependency Analysis](#step-4-dependency-analysis) +- [Step 5: Validation Report and Decision](#step-5-validation-report-and-decision) +- [Step 6: Create Validation Report](#step-6-create-validation-report) +- [Step 7: Completion](#step-7-completion) + +## Guardrails + +- Never modify the actual codebase during validation — only work in temp directories. +- Focus on interface/contract/parameter analysis, not implementation details. +- Identify breaking changes, not style or formatting issues. +- Always create CHANGE_VALIDATION.md for audit trail. +- Ask for user confirmation before extending change scope or rejecting proposals. + +## Step 1: Change Selection + +**If change ID provided**: Resolve to `openspec/changes/<change-id>/`, verify directory and proposal.md exist. + +**If no change ID provided**: +1. List active changes: `openspec list --json`. +2. Display numbered list with change ID, schema, status, brief description. +3. Prompt user to select. + +## Step 2: Read and Parse Change + +### 2.1: Check Status and Read Artifacts + +1. **Read `openspec/config.yaml`** for project context, constraints, and per-artifact rules. + +2. **Check change status**: `openspec status --change "<change-id>" --json` + - Verify artifacts exist and are complete (status: "done"). + +3. **Get artifact context**: `openspec instructions apply --change "<change-id>" --json` + +4. **Verify proposal.md format** (per config.yaml): + - Title: `# Change: [Brief description]` + - Required sections: `## Why`, `## What Changes`, `## Capabilities`, `## Impact` + - "What Changes": bullet list with NEW/EXTEND/MODIFY markers + - "Capabilities": each capability needs a spec file + - "Impact": Affected specs, Affected code, Integration points + +5. **Read proposal.md**: Extract summary, rationale, scope, capabilities, affected files. + +6. **Verify tasks.md format** (per config.yaml): + - Hierarchical numbered sections: `## 1.`, `## 2.` + - Tasks: `- [ ] 1.1 [Description]` + - Sub-tasks: `- [ ] 1.1.1 [Description]` + - Rules: 2-hour max chunks, contract tasks, test tasks, quality gates, git worktree workflow (worktree creation first, PR last, cleanup after merge) + +7. **Read tasks.md**: Extract tasks, files to create/modify/delete, task dependencies. Verify worktree creation first, PR creation last, worktree cleanup after merge. + +8. **Read design.md** (if exists): Architectural decisions, interface changes, contracts, migration plans. Verify bridge adapter docs, sequence diagrams for multi-repo. + +9. **Read spec deltas** (`specs/<capability>/spec.md`): ADDED/MODIFIED/REMOVED requirements, interface/parameter/contract changes, cross-refs. Verify Given/When/Then format. + +### 2.2: Identify Change Scope + +1. **Files to modify**: Extract from tasks.md and proposal.md. Categorize: code, tests, docs, config. +2. **Modules/Components**: Python modules, classes, functions, interfaces, contracts, APIs. Note public vs private. +3. **Dependencies**: From proposal "Dependencies" section and task dependencies. + +## Step 3: Simulate Change Application + +### 3.1: Create Temporary Workspace + +```bash +TEMP_WORKSPACE="/tmp/specfact-validation-<change-id>-$(date +%s)" +mkdir -p "$TEMP_WORKSPACE" +``` + +Copy relevant repository structure to temp workspace. + +### 3.2: Analyze Spec Deltas for Interface Changes + +For each spec delta: +1. Parse ADDED/MODIFIED/REMOVED requirements. +2. Extract interface changes: function signatures, class interfaces, `@icontract`/`@beartype` decorators, type hints, API endpoints. +3. Create interface scaffolds in temp workspace (stubs only, no implementation): + +```python +# OLD INTERFACE (from existing codebase) +def process_data(data: str, options: dict) -> dict: ... + +# NEW INTERFACE (from change proposal) +def process_data(data: str, options: dict, validate: bool = True) -> dict: ... +``` + +### 3.3: Map Tasks to File Modifications + +For each task, categorize modification type: +- **Interface change**: Function/class signature modification +- **Contract change**: `@icontract` decorator modification +- **Type change**: Type hint modification +- **New/Delete file**: Module/class/function added or removed +- **Documentation**: Non-breaking doc changes + +Create modification map: File path -> Modification type -> Interface changes. + +## Step 4: Dependency Analysis + +### 4.1: Find Dependent Code + +For each modified file/interface, search codebase: +- `from...import...<module>` — find imports +- `<function_name>(` or `<class_name>(` — find usages +- `@<decorator>` — find contract decorators + +Build dependency graph: Modified interface -> dependent files (direct, indirect, test). + +### 4.2: Analyze Breaking Changes + +Compare old vs new interface. Detect: +- **Parameter removal**: Required param removed +- **Parameter addition**: Required param added (no default) +- **Parameter type change**: Incompatible type +- **Return type change**: Incompatible return +- **Contract strengthening**: `@require` stricter, `@ensure` weaker +- **Method/class/module removal**: Public API removed + +For each dependent file, check if it would break: +- **Would break**: Incompatible usage detected +- **Would need update**: Compatible but may need adjustment +- **No impact**: Usage compatible + +### 4.3: Identify Required Updates + +Categorize: +- **Critical**: Must update or code breaks +- **Recommended**: Should update for consistency +- **Optional**: No update needed + +## Step 5: Validation Report and Decision + +### 5.1: Summary + +Count breaking changes, affected interfaces, dependent files. Assess impact: High/Medium/Low. + +### 5.2: Present Findings + +``` +Change Validation Report: <change-id> + +Breaking Changes Detected: <count> + - <interface 1>: <description> + +Dependent Files Affected: <count> + Critical (must update): <count> + Recommended: <count> + Optional: <count> + +Impact Assessment: <High/Medium/Low> +``` + +### 5.3: User Decision (if breaking changes) + +**Option A: Extend Scope** — Add tasks to update dependent files. May require major version. + +**Option B: Adjust Change** — Add default params, keep old interface (deprecation), use optional params. + +**Option C: Reject and Defer** — Update status to "deferred", document in CHANGE_VALIDATION.md. + +**No breaking changes**: Proceed to 5.4. + +### 5.4: OpenSpec Validation + +1. Check status: `openspec status --change "<change-id>" --json` +2. Run: `openspec validate <change-id> --strict` +3. Fix issues and re-run until passing. +4. If proposal was updated (scope extended/adjusted), re-validate. + +## Step 6: Create Validation Report + +Create `openspec/changes/<change-id>/CHANGE_VALIDATION.md`: + +```markdown +# Change Validation Report: <change-id> + +**Validation Date**: <timestamp> +**Change Proposal**: [proposal.md](./proposal.md) +**Validation Method**: Dry-run simulation in temporary workspace + +## Executive Summary + +- Breaking Changes: <count> detected / <count> resolved +- Dependent Files: <count> affected +- Impact Level: <High/Medium/Low> +- Validation Result: <Pass/Fail/Deferred> +- User Decision: <Extend Scope/Adjust Change/Reject/N/A> + +## Breaking Changes Detected + +### Interface: <name> +- **Type**: Parameter addition/removal/type change +- **Old Signature**: `<old>` +- **New Signature**: `<new>` +- **Dependent Files**: <file>: <impact> + +## Dependencies Affected + +### Critical Updates Required +- <file>: <reason> + +### Recommended Updates +- <file>: <reason> + +## Impact Assessment + +- **Code Impact**: <description> +- **Test Impact**: <description> +- **Documentation Impact**: <description> +- **Release Impact**: <Minor/Major/Patch> + +## Format Validation + +- **proposal.md Format**: <Pass/Fail> + - Title, sections, capabilities, impact per config.yaml +- **tasks.md Format**: <Pass/Fail> + - Headers, task format, config.yaml compliance (TDD, git workflow, quality gates) +- **specs Format**: <Pass/Fail> + - Given/When/Then format, references existing patterns +- **Config.yaml Compliance**: <Pass/Fail> + +## OpenSpec Validation + +- **Status**: <Pass/Fail> +- **Command**: `openspec validate <change-id> --strict` +- **Issues Found/Fixed**: <count> + +## Validation Artifacts + +- Temporary workspace: <path> +``` + +Update proposal status if deferred, scope extended, or adjusted. + +## Step 7: Completion + +``` +Change ID: <change-id> +Validation Report: openspec/changes/<change-id>/CHANGE_VALIDATION.md + +Findings: + - Breaking Changes: <count> + - Dependent Files: <count> + - Impact Level: <level> + - Validation Result: <result> + +Next Steps: + <based on decision — implement, re-validate, or defer> +``` + +## Error Handling + +- **Change not found**: Search and suggest alternatives. +- **Repo not accessible**: Inform user, provide manual validation instructions. +- **Breaking changes**: Present options clearly, don't proceed without user decision. +- **Dependency analysis fails**: Continue with partial analysis, note limitations. diff --git a/CHANGELOG.md b/CHANGELOG.md index 58d2ed5e..77b411bc 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -7,7 +7,31 @@ All notable changes to this project will be documented in this file. **Important:** Changes need to be documented below this block as this is the header section. Each section should be separated by a horizontal rule. Newer changelog entries need to be added on top of prior ones to keep the history chronological with most recent changes first. + --- + +## [0.36.0] - 2026-02-21 + +### Added + +- Enhanced `specfact backlog add` interactive flow with multiline capture (`::END::` sentinel), acceptance criteria, priority, story points, parent selection, and description format selection (`markdown` or `classic`). +- New `specfact backlog init-config` command to scaffold `.specfact/backlog-config.yaml` with safe provider defaults. +- Expanded `specfact backlog map-fields` into a multi-provider setup flow (`ado`, `github`) with guided discovery/validation and canonical config persistence under `.specfact/backlog-config.yaml`. +- GitHub backlog create flow now supports native sub-issue parent linking and optional issue-type / ProjectV2 Type assignment using configured GraphQL metadata. +- Centralized retry support for backlog adapter write operations with duplicate-safe behavior for non-idempotent creates/comments. + +### Fixed + +- Azure DevOps interactive sprint/iteration selection now resolves context from `--project-id` so available iterations are discoverable during `backlog add`. +- Azure DevOps parent candidate discovery no longer hides valid parents via implicit current-iteration filtering in hierarchy selection flows. +- GitHub backlog field/type extraction now tolerates non-list labels and dict-shaped `issue_type` payloads (`name`/`title`) for more reliable type inference. + +### Changed + +- Backlog documentation now reflects the current `specfact backlog` command surface and updated `backlog add` behavior in both guide and command reference docs. + +--- + ## [0.35.0] - 2026-02-20 ### Added @@ -57,7 +81,26 @@ All notable changes to this project will be documented in this file. ### Added -- None yet. +- Architecture documentation remediation for OpenSpec change `arch-08-documentation-discrepancies-remediation`: + - New architecture implementation status page: `docs/architecture/implementation-status.md`. + - New ADR set with template and initial ADR: `docs/architecture/adr/`. + - New module development guide: `docs/guides/module-development.md`. + +### Changed + +- Reworked architecture references to align with implemented behavior: + - `docs/reference/architecture.md` + - `docs/architecture/README.md` + - `docs/architecture/component-graph.md` + - `docs/architecture/module-system.md` + - `docs/architecture/data-flow.md` + - `docs/architecture/state-machines.md` + - `docs/architecture/interface-contracts.md` +- Updated adapter development documentation and navigation links for discoverability: + - `docs/guides/adapter-development.md` + - `docs/_layouts/default.html` + - `docs/index.md` +- Simplified top-level `README.md` by removing deep architecture implementation details and linking technical readers to architecture docs. --- diff --git a/README.md b/README.md index e08682e5..49c4a937 100644 --- a/README.md +++ b/README.md @@ -154,51 +154,11 @@ Start with: - **Backlogs**: GitHub Issues, Azure DevOps, Jira, Linear - **Contracts**: Specmatic, OpenAPI -### Module Lifecycle Baseline +For technical architecture details (module lifecycle, registry internals, adapters, and implementation status), use: -SpecFact now has a lifecycle-managed module system: - -- `specfact init` is bootstrap-first: initializes local CLI state and reports prompt status. -- `specfact init ide` handles IDE prompt/template sync and IDE settings updates. -- `specfact module` is the canonical lifecycle surface: - - `specfact module install <namespace/name>` installs marketplace modules into `~/.specfact/marketplace-modules/`. - - `specfact module list [--source builtin|marketplace|custom]` shows multi-source discovery state. - - `specfact module enable <id>` / `specfact module disable <id> [--force]` manage enabled state. - - `specfact module uninstall <name>` and `specfact module upgrade <name>` manage marketplace lifecycle. -- `specfact init --list-modules`, `--enable-module`, and `--disable-module` remain supported as compatibility aliases during migration. -- Module lifecycle operations keep dependency-aware safety checks with `--force` cascading behavior. -- Module manifests support dependency and core-version compatibility enforcement at registration time. - -This lifecycle model is the baseline for future granular module updates and enhancements. Module installation from third-party or open-source community providers is planned, but not implemented yet. - -Contract-first module architecture highlights: - -- `ModuleIOContract` formalizes module IO operations (`import`, `export`, `sync`, `validate`) on `ProjectBundle`. -- Core-module isolation is enforced by static analysis (`core` never imports `specfact_cli.modules.*` directly). -- Registration tracks protocol operation coverage and schema compatibility metadata. -- Bridge registry support allows module manifests to declare `service_bridges` converters (for example ADO/Jira/Linear/GitHub) loaded at lifecycle startup without direct core-to-module imports. -- Protocol reporting classifies modules from effective runtime interfaces with a single aggregate summary (`Full/Partial/Legacy`). -- Module manifests support publisher and integrity metadata (arch-06) with optional checksum and signature verification at registration time. - -Why this matters: - -- Feature areas can evolve independently without repeatedly modifying core CLI wiring. -- Module teams can ship at different speeds while preserving stable core behavior. -- Clear IO contracts reduce coupling and make future migrations (e.g., new adapters/modules) lower risk. -- Core remains focused on lifecycle, registry, and validation orchestration rather than tool-specific command logic. - ---- - -## Developer Note: Command Layout - -- Primary command implementations live in `src/specfact_cli/modules/<module>/src/commands.py`. -- Legacy imports from `src/specfact_cli/commands/*.py` are compatibility shims and only guarantee `app` re-exports. -- Preferred imports for module code: - - `from specfact_cli.modules.<module>.src.commands import app` - - `from specfact_cli.modules.<module>.src.commands import <symbol>` -- Shim deprecation timeline: - - Legacy shim usage is deprecated for non-`app` symbols now. - - Shim removal is planned no earlier than `v0.30` (or the next major migration window). +- [Architecture Reference](docs/reference/architecture.md) +- [Architecture Docs Index](docs/architecture/README.md) +- [Architecture Implementation Status](docs/architecture/implementation-status.md) --- diff --git a/docs/_layouts/default.html b/docs/_layouts/default.html index 58ec6f32..ab1aa639 100644 --- a/docs/_layouts/default.html +++ b/docs/_layouts/default.html @@ -143,6 +143,8 @@ <h2 class="docs-sidebar-title"> <li><a href="{{ '/guides/agile-scrum-workflows/' | relative_url }}">Agile/Scrum Workflows</a></li> <li><a href="{{ '/guides/policy-engine-commands/' | relative_url }}">Policy Engine Commands</a></li> <li><a href="{{ '/guides/creating-custom-bridges/' | relative_url }}">Creating Custom Bridges</a></li> + <li><a href="{{ '/guides/module-development/' | relative_url }}">Module Development</a></li> + <li><a href="{{ '/guides/adapter-development/' | relative_url }}">Adapter Development</a></li> <li><a href="{{ '/guides/extending-projectbundle/' | relative_url }}">Extending ProjectBundle</a></li> <li><a href="{{ '/guides/installing-modules/' | relative_url }}">Installing Modules</a></li> <li><a href="{{ '/guides/module-marketplace/' | relative_url }}">Module Marketplace</a></li> @@ -176,6 +178,8 @@ <h2 class="docs-sidebar-title"> <li><a href="{{ '/reference/thorough-codebase-validation/' | relative_url }}">Thorough Codebase Validation</a></li> <li><a href="{{ '/reference/authentication/' | relative_url }}">Authentication</a></li> <li><a href="{{ '/architecture/' | relative_url }}">Architecture</a></li> + <li><a href="{{ '/architecture/implementation-status/' | relative_url }}">Architecture Implementation Status</a></li> + <li><a href="{{ '/architecture/adr/' | relative_url }}">Architecture ADRs</a></li> <li><a href="{{ '/modes/' | relative_url }}">Operational Modes</a></li> <li><a href="{{ '/directory-structure/' | relative_url }}">Directory Structure</a></li> <li><a href="{{ '/reference/projectbundle-schema/' | relative_url }}">ProjectBundle Schema</a></li> diff --git a/docs/architecture/README.md b/docs/architecture/README.md new file mode 100644 index 00000000..d3d5a39f --- /dev/null +++ b/docs/architecture/README.md @@ -0,0 +1,28 @@ +# SpecFact CLI Architecture Documentation + +Architecture documents in this folder describe the current implementation and clearly separate planned features. + +## Current Architecture View + +- Module-first command system is production-ready. +- Command loading is lazy via `CommandRegistry`. +- Bridge adapters integrate external systems through the `BridgeAdapter` contract. +- Contract-first validation remains the primary engineering model. + +## Architecture Documents + +- [Component Graph](component-graph.md) +- [Module System](module-system.md) +- [Workflow State Machines](state-machines.md) +- [Interface Contracts](interface-contracts.md) +- [Data Flow](data-flow.md) +- [Implementation Status](implementation-status.md) +- [Architecture Decision Records (ADR)](adr/README.md) +- [Discrepancies Report](discrepancies-report.md) + +## Related Reference + +- [Main Architecture Reference](../reference/architecture.md) +- [Bridge Registry](../reference/bridge-registry.md) +- [Module Development Guide](../guides/module-development.md) +- [Adapter Development Guide](../guides/adapter-development.md) diff --git a/docs/architecture/adr/0001-module-first-architecture.md b/docs/architecture/adr/0001-module-first-architecture.md new file mode 100644 index 00000000..22a2524e --- /dev/null +++ b/docs/architecture/adr/0001-module-first-architecture.md @@ -0,0 +1,32 @@ +--- +layout: default +title: ADR-0001 Module-First Architecture +permalink: /architecture/adr/0001-module-first-architecture/ +description: Decision record for lazy-loaded module-first CLI architecture. +--- + +# ADR-0001: Module-First Architecture + +- Status: Accepted +- Date: 2026-02-22 +- Deciders: SpecFact maintainers +- Related OpenSpec changes: `arch-01-cli-modular-command-registry`, `arch-02-module-package-separation` + +## Context + +Hard-wired command imports increased startup cost, complicated extension work, and made module lifecycle management difficult. + +## Decision + +Adopt a module-first architecture backed by `CommandRegistry`: + +- Register commands by metadata and lazy loader. +- Discover module packages via `module-package.yaml`. +- Load Typer apps on first command invocation and cache results. +- Keep compatibility shims during migration windows. + +## Consequences + +- Faster startup and improved command isolation. +- Clear extension surface for module authors. +- Additional manifest and lifecycle governance requirements. diff --git a/docs/architecture/adr/README.md b/docs/architecture/adr/README.md new file mode 100644 index 00000000..7cf023e6 --- /dev/null +++ b/docs/architecture/adr/README.md @@ -0,0 +1,21 @@ +--- +layout: default +title: Architecture ADRs +permalink: /architecture/adr/ +description: Architecture Decision Records for SpecFact CLI. +--- + +# Architecture Decision Records + +Use ADRs to document non-trivial architecture decisions. + +## How to add a new ADR + +1. Copy [template.md](template.md) +2. Create the next numbered file (`0002-...`, `0003-...`) +3. Set status (`Proposed`, `Accepted`, `Superseded`) +4. Link related OpenSpec change IDs + +## ADR index + +- [ADR-0001: Module-First Architecture](0001-module-first-architecture.md) diff --git a/docs/architecture/adr/template.md b/docs/architecture/adr/template.md new file mode 100644 index 00000000..8dbb870d --- /dev/null +++ b/docs/architecture/adr/template.md @@ -0,0 +1,25 @@ +--- +layout: default +title: ADR Template +permalink: /architecture/adr/template/ +description: Template for documenting architecture decisions. +--- + +# ADR-XXXX: <Decision Title> + +- Status: Proposed +- Date: YYYY-MM-DD +- Deciders: <team or names> +- Related OpenSpec changes: <change-id list> + +## Context + +Describe the problem and constraints. + +## Decision + +Describe the selected approach. + +## Consequences + +Describe positive and negative impacts, including migration or rollback considerations. diff --git a/docs/architecture/component-graph.md b/docs/architecture/component-graph.md new file mode 100644 index 00000000..9144958f --- /dev/null +++ b/docs/architecture/component-graph.md @@ -0,0 +1,29 @@ +# SpecFact CLI Component Graph + +## High-level components + +```mermaid +graph TD + CLI[CLI Entry] --> Registry[CommandRegistry] + Registry --> Discovery[Module Discovery] + Registry --> Cache[Lazy Load Cache] + Cache --> Commands[Module Commands] + + Commands --> Adapters[Bridge Adapters] + Commands --> Analysis[Analyzers] + Commands --> Validation[Validators] + Commands --> Models[ProjectBundle and Change Models] +``` + +## Adapter view + +```mermaid +graph TD + Sync[Sync Commands] --> AdapterRegistry[Adapter Registry] + AdapterRegistry --> OpenSpec[OpenSpec Adapter] + AdapterRegistry --> SpecKit[SpecKit Adapter] + AdapterRegistry --> GitHub[GitHub Adapter] + AdapterRegistry --> ADO[ADO Adapter] +``` + +Note: The graph uses concrete adapter implementations only. diff --git a/docs/architecture/data-flow.md b/docs/architecture/data-flow.md new file mode 100644 index 00000000..03cfadc5 --- /dev/null +++ b/docs/architecture/data-flow.md @@ -0,0 +1,36 @@ +# SpecFact CLI Data Flow Architecture + +## Command execution flow + +```mermaid +graph LR + Input[CLI Input] --> Parse[Typer Parse] + Parse --> Lookup[Registry Lookup] + Lookup --> Load[Lazy Module Load] + Load --> Execute[Command Execute] + Execute --> AdapterOrAnalysis[Adapter or Analysis Layer] + AdapterOrAnalysis --> Output[Console or File Output] +``` + +## Bridge sync flow + +```mermaid +sequenceDiagram + participant User + participant Command as sync bridge + participant Probe as Bridge Probe + participant Adapter as BridgeAdapter + participant Bundle as ProjectBundle + + User->>Command: run sync + Command->>Probe: select adapter via detect/capabilities + Probe->>Adapter: detect + get_capabilities + Adapter->>Bundle: import/export artifacts + Command-->>User: summary + status +``` + +## Error flow + +- Validate inputs with contracts. +- Convert operational failures into actionable command errors. +- Log adapter and lifecycle non-fatal issues with bridge logger. diff --git a/docs/architecture/discrepancies-report.md b/docs/architecture/discrepancies-report.md new file mode 100644 index 00000000..cde8cf31 --- /dev/null +++ b/docs/architecture/discrepancies-report.md @@ -0,0 +1,108 @@ +# SpecFact CLI Architecture Discrepancies Report + +## Executive Summary + +This report originally captured architecture documentation and spec/code discrepancies. +It is now updated after remediation work completed in OpenSpec change +`arch-08-documentation-discrepancies-remediation` (2026-02-22). + +- Scope of this report: architecture docs/spec/code alignment status +- Current status: most documentation discrepancies are resolved +- Remaining gaps: implementation gaps that are intentionally documented as planned/partial + +## Baseline and Sources + +- Remediation change: `openspec/changes/arch-08-documentation-discrepancies-remediation/` +- Verification checklist: `openspec/changes/arch-08-documentation-discrepancies-remediation/DOC_VERIFICATION_CHECKLIST.md` +- Architecture reference: `docs/reference/architecture.md` +- Architecture docs index: `docs/architecture/README.md` +- Implementation status: `docs/architecture/implementation-status.md` + +## Discrepancy Status Matrix + +Legend: + +- `Resolved`: discrepancy has been remediated and documented accurately +- `Partial`: discrepancy is reduced/clarified, but implementation is still incomplete +- `Open`: discrepancy remains and requires additional implementation/spec work + +1. Module system implementation vs docs: `Resolved` +2. Bridge adapter interface mismatch: `Resolved` +3. Operational modes docs gap: `Resolved` (documented current limits and planned behavior) +4. Architecture layer mismatch: `Resolved` +5. Command registry implementation detail gap: `Resolved` +6. Module package structure missing in docs: `Resolved` +7. Adapter capabilities docs gap: `Resolved` +8. Architecture derive command missing in implementation: `Open` (documented as planned) +9. Change tracking implementation partial vs spec breadth: `Partial` +10. Protocol FSM implementation minimal vs detailed spec: `Partial` +11. Diagram references to non-existent components: `Resolved` +12. Outdated performance metrics: `Resolved` (outdated hard claims removed/tempered) +13. Missing error handling documentation: `Resolved` +14. Missing architecture module vs architecture specs: `Open` +15. Incomplete bridge adapter implementations vs references: `Partial` +16. Protocol validation gaps: `Partial` +17. Terminology inconsistencies: `Resolved` +18. Version reference inconsistencies: `Resolved` +19. Feature maturity mismatch (experimental vs production-ready): `Resolved` +20. No ADR records: `Resolved` +21. Missing module development guide: `Resolved` +22. Missing adapter development guide: `Resolved` +23. Missing architecture commands: `Open` +24. Partial change tracking coverage: `Partial` +25. Incomplete protocol support: `Partial` + +## Resolved in arch-08 + +The following remediation outputs are now present: + +- Updated architecture reference with accurate module-first status and adapter contract details. +- Updated architecture pages (`module-system`, `component-graph`, `data-flow`, `state-machines`, `interface-contracts`). +- Added ADR structure and initial ADR: + - `docs/architecture/adr/README.md` + - `docs/architecture/adr/template.md` + - `docs/architecture/adr/0001-module-first-architecture.md` +- Added implementation status page for implemented vs planned capabilities. +- Added/updated development guides: + - `docs/guides/module-development.md` + - `docs/guides/adapter-development.md` +- Navigation and discoverability updates in: + - `docs/_layouts/default.html` + - `docs/index.md` + +## Remaining Gaps (Not Doc-Only) + +These items are not documentation mismatches anymore; they are implementation/spec-delivery work: + +1. `specfact architecture derive|validate|trace` command family is still planned. +2. Protocol FSM execution/validation engine is still partial. +3. Change tracking depth is not yet uniform across all adapters/providers. +4. Some architecture-level capabilities remain defined in OpenSpec before runtime delivery. + +## Follow-Up Plan + +### Phase A: Architecture command delivery + +- Implement `architecture` command group per `architecture-01-solution-layer`. +- Add contract/integration tests and usage docs once commands are live. + +### Phase B: Protocol runtime completion + +- Expand protocol FSM execution and guard validation support. +- Align docs/spec examples with actual runtime constraints. + +### Phase C: Change tracking normalization + +- Standardize change-tracking semantics across adapters. +- Document provider-level capability matrix explicitly. + +## Maintenance Rules + +1. Keep `docs/architecture/implementation-status.md` as the canonical implemented-vs-planned source. +2. When a planned capability ships, update both implementation status and this report in the same PR. +3. If a new discrepancy is found, add it with `Resolved`/`Partial`/`Open` status and an owner change reference. + +## Conclusion + +Documentation discrepancies targeted by `arch-08` are substantially remediated. +The remaining items are primarily implementation backlog items, now clearly documented as planned or partial rather than presented as current behavior. diff --git a/docs/architecture/implementation-status.md b/docs/architecture/implementation-status.md new file mode 100644 index 00000000..dee91fcf --- /dev/null +++ b/docs/architecture/implementation-status.md @@ -0,0 +1,36 @@ +--- +layout: default +title: Architecture Implementation Status +permalink: /architecture/implementation-status/ +description: Implemented vs planned architecture capabilities. +--- + +# Architecture Implementation Status + +This page tracks implemented vs planned architecture capabilities. + +## Implemented + +- Module-first lazy command registry (`CommandRegistry`, module package discovery). +- Bridge adapter contract and adapter registry usage in sync flows. +- `ProjectBundle` and change-tracking model support in core models. +- Mode detection and explicit mode selection. + +## Planned or Partial + +- `specfact architecture derive|validate|trace` command family is planned. + - Source: OpenSpec `architecture-01-solution-layer`. +- Protocol FSM runtime engine is partial (models/specs exist; full execution engine/guards are planned). + - Source: OpenSpec `architecture-01-solution-layer` and dependent validation changes. +- Change tracking is adapter-dependent and not uniformly complete across all backlog providers. + +## Known limitations + +- Documentation/specs may define forward-looking behavior before command implementation lands. +- Not all adapters implement the same depth for proposal/change-tracking persistence. + +## References + +- [Architecture Reference](../reference/architecture.md) +- [Architecture Docs Index](README.md) +- [Discrepancies Report](discrepancies-report.md) diff --git a/docs/architecture/interface-contracts.md b/docs/architecture/interface-contracts.md new file mode 100644 index 00000000..8c4f1e42 --- /dev/null +++ b/docs/architecture/interface-contracts.md @@ -0,0 +1,38 @@ +# SpecFact CLI Interface Contracts + +## Command Registry contract + +`src/specfact_cli/registry/registry.py` exposes a lazy command registry with: + +- `register(name, loader, metadata)` +- `get_typer(name)` +- `list_commands()` +- `list_commands_for_help()` +- `get_metadata(name)` + +## BridgeAdapter contract + +`src/specfact_cli/adapters/base.py` defines required abstract methods: + +- `detect(...)` +- `get_capabilities(...)` +- `import_artifact(...)` +- `export_artifact(...)` +- `generate_bridge_config(...)` +- `load_change_tracking(...)` +- `save_change_tracking(...)` +- `load_change_proposal(...)` +- `save_change_proposal(...)` + +All public methods are contract-checked (`@icontract`) and runtime type-checked (`@beartype`). + +## ToolCapabilities contract + +`ToolCapabilities` in `src/specfact_cli/models/capabilities.py` communicates adapter runtime support: + +- tool identity and layout +- detected specs path +- available sync modes +- external config/custom hook support flags + +Consumers use this metadata to select sync behavior and capability-safe operations. diff --git a/docs/architecture/module-system.md b/docs/architecture/module-system.md new file mode 100644 index 00000000..b8a5da49 --- /dev/null +++ b/docs/architecture/module-system.md @@ -0,0 +1,50 @@ +# SpecFact CLI Module System Architecture + +## Status + +The module system is production-ready and is the default command registration/runtime path. + +## Registry flow + +```mermaid +sequenceDiagram + participant CLI as cli.py + participant Bootstrap as registry/bootstrap.py + participant Discovery as registry/module_packages.py + participant Registry as registry/registry.py + + CLI->>Bootstrap: register_builtin_commands() + Bootstrap->>Discovery: discover module packages + Discovery->>Registry: register(name, loader, metadata) + Note over Registry: loader and metadata stored only + CLI->>Registry: get_typer(command) + Registry->>Registry: load once, cache Typer app +``` + +## Module package structure + +```text +src/specfact_cli/modules/<module-name>/ + module-package.yaml + src/ + __init__.py + app.py + commands.py +``` + +## Manifest fields + +- Required: `name`, `version`, `commands` +- Optional: `command_help`, `pip_dependencies`, `module_dependencies`, `core_compatibility`, `tier`, `addon_id` +- Optional extension fields: `service_bridges`, `schema_extensions`, `publisher`, `integrity` + +## Runtime behavior + +- Commands are discovered at startup, imported on demand. +- Metadata remains available for help output without importing every module. +- Module enable/disable state is controlled by module state and lifecycle flows. + +## Development guidance + +- [Module Development Guide](../guides/module-development.md) +- [Module Security](../reference/module-security.md) diff --git a/docs/architecture/state-machines.md b/docs/architecture/state-machines.md new file mode 100644 index 00000000..0a1b0962 --- /dev/null +++ b/docs/architecture/state-machines.md @@ -0,0 +1,28 @@ +# SpecFact CLI State Machine Logic + +## CLI command lifecycle + +```mermaid +stateDiagram-v2 + [*] --> Parse + Parse --> ResolveCommand + ResolveCommand --> LoadModule + LoadModule --> Execute + Execute --> Report + Report --> [*] +``` + +## Mode selection (implemented) + +```mermaid +stateDiagram-v2 + [*] --> ExplicitFlag + ExplicitFlag --> Final: --mode provided + ExplicitFlag --> Detect: no flag + Detect --> Final +``` + +## Protocol FSM status + +Protocol states and transitions are modeled in OpenSpec artifacts and data models. +Runtime FSM engine behavior is currently limited and tracked as planned work in `architecture-01-solution-layer`. diff --git a/docs/guides/adapter-development.md b/docs/guides/adapter-development.md index 4699bf75..5cb54f54 100644 --- a/docs/guides/adapter-development.md +++ b/docs/guides/adapter-development.md @@ -1,87 +1,48 @@ +--- +layout: default +title: Adapter Development Guide +permalink: /guides/adapter-development/ +description: Implement BridgeAdapter integrations for external tools. +--- + # Adapter Development Guide -This guide explains how to create new bridge adapters for SpecFact CLI using the adapter registry pattern. +This guide describes how to implement bridge adapters for external tools. -## Overview +## Core interface -SpecFact CLI uses a plugin-based adapter architecture that allows external tools (GitHub, Spec-Kit, Linear, Jira, etc.) to integrate seamlessly. All adapters implement the `BridgeAdapter` interface and are registered in the `AdapterRegistry` for automatic discovery and usage. +Base contract: `src/specfact_cli/adapters/base.py` -## Architecture +Required methods: -### Adapter Registry Pattern +- `detect(repo_path, bridge_config=None) -> bool` +- `get_capabilities(repo_path, bridge_config=None) -> ToolCapabilities` +- `import_artifact(artifact_key, artifact_path, project_bundle, bridge_config=None) -> None` +- `export_artifact(artifact_key, artifact_data, bridge_config=None) -> Path | dict` +- `generate_bridge_config(repo_path) -> BridgeConfig` +- `load_change_tracking(bundle_dir, bridge_config=None) -> ChangeTracking | None` +- `save_change_tracking(bundle_dir, change_tracking, bridge_config=None) -> None` +- `load_change_proposal(bundle_dir, change_name, bridge_config=None) -> ChangeProposal | None` +- `save_change_proposal(bundle_dir, proposal, bridge_config=None) -> None` -The adapter registry provides a centralized way to: +All methods should preserve runtime contracts (`@icontract`) and runtime type checks (`@beartype`). -- **Register adapters**: Auto-discover and register adapters at import time -- **Get adapters**: Retrieve adapters by name (e.g., `"speckit"`, `"github"`, `"openspec"`) -- **List adapters**: Enumerate all registered adapters -- **Check registration**: Verify if an adapter is registered +## ToolCapabilities model -### BridgeAdapter Interface +`ToolCapabilities` lives in `src/specfact_cli/models/capabilities.py` and communicates runtime support: -All adapters must implement the `BridgeAdapter` abstract base class, which defines the following methods: +- `tool`, `version`, `layout`, `specs_dir` +- `supported_sync_modes` +- `has_external_config`, `has_custom_hooks` -```python -class BridgeAdapter(ABC): - @abstractmethod - def detect(self, repo_path: Path, bridge_config: BridgeConfig | None = None) -> bool: - """Detect if this adapter applies to the repository.""" - - @abstractmethod - def get_capabilities(self, repo_path: Path, bridge_config: BridgeConfig | None = None) -> ToolCapabilities: - """Get tool capabilities for detected repository.""" - - @abstractmethod - def import_artifact(self, artifact_key: str, artifact_path: Path | dict[str, Any], project_bundle: Any, bridge_config: BridgeConfig | None = None) -> None: - """Import artifact from tool format to SpecFact.""" - - @abstractmethod - def export_artifact(self, artifact_key: str, artifact_data: Any, bridge_config: BridgeConfig | None = None) -> Path | dict[str, Any]: - """Export artifact from SpecFact to tool format.""" - - @abstractmethod - def generate_bridge_config(self, repo_path: Path) -> BridgeConfig: - """Generate bridge configuration for this adapter.""" - - @abstractmethod - def load_change_tracking(self, bundle_dir: Path, bridge_config: BridgeConfig | None = None) -> ChangeTracking | None: - """Load change tracking (adapter-specific storage location).""" - - @abstractmethod - def save_change_tracking(self, bundle_dir: Path, change_tracking: ChangeTracking, bridge_config: BridgeConfig | None = None) -> None: - """Save change tracking (adapter-specific storage location).""" - - @abstractmethod - def load_change_proposal(self, change_id: str, bridge_config: BridgeConfig | None = None) -> ChangeProposal | None: - """Load change proposal from adapter-specific location.""" - - @abstractmethod - def save_change_proposal(self, change_proposal: ChangeProposal, bridge_config: BridgeConfig | None = None) -> None: - """Save change proposal to adapter-specific location.""" -``` - -## Step-by-Step Guide +Sync selection and safe behavior depend on this model. -### Step 1: Create Adapter Module - -Create a new file `src/specfact_cli/adapters/<adapter_name>.py`: +## Minimal adapter skeleton ```python -""" -<Adapter Name> bridge adapter for <tool description>. - -This adapter implements the BridgeAdapter interface to sync <tool> artifacts -with SpecFact plan bundles and protocols. -""" - -from __future__ import annotations - from pathlib import Path from typing import Any -from beartype import beartype -from icontract import ensure, require - from specfact_cli.adapters.base import BridgeAdapter from specfact_cli.models.bridge import BridgeConfig from specfact_cli.models.capabilities import ToolCapabilities @@ -89,474 +50,48 @@ from specfact_cli.models.change import ChangeProposal, ChangeTracking class MyAdapter(BridgeAdapter): - """ - <Adapter Name> bridge adapter implementing BridgeAdapter interface. - - This adapter provides <sync direction> sync between <tool> artifacts - and SpecFact plan bundles/protocols. - """ - - @beartype - @ensure(lambda result: result is None, "Must return None") - def __init__(self) -> None: - """Initialize <Adapter Name> adapter.""" - pass - - # Implement all abstract methods... -``` - -### Step 2: Implement Required Methods - -#### 2.1 Implement `detect()` - -Detect if the repository uses your tool: - -```python -@beartype -@require(lambda repo_path: repo_path.exists(), "Repository path must exist") -@require(lambda repo_path: repo_path.is_dir(), "Repository path must be a directory") -@ensure(lambda result: isinstance(result, bool), "Must return bool") -def detect(self, repo_path: Path, bridge_config: BridgeConfig | None = None) -> bool: - """ - Detect if this is a <tool name> repository. - - Args: - repo_path: Path to repository root - bridge_config: Optional bridge configuration (for cross-repo detection) - - Returns: - True if <tool> structure detected, False otherwise - """ - # Check for cross-repo support - base_path = repo_path - if bridge_config and bridge_config.external_base_path: - base_path = bridge_config.external_base_path - - # Check for tool-specific structure - # Example: Check for .tool/ directory or tool-specific files - tool_dir = base_path / ".tool" - config_file = base_path / "tool.config" - - return (tool_dir.exists() and tool_dir.is_dir()) or config_file.exists() -``` - -#### 2.2 Implement `get_capabilities()` - -Return tool capabilities: - -```python -@beartype -@require(lambda repo_path: repo_path.exists(), "Repository path must exist") -@require(lambda repo_path: repo_path.is_dir(), "Repository path must be a directory") -@ensure(lambda result: isinstance(result, ToolCapabilities), "Must return ToolCapabilities") -def get_capabilities(self, repo_path: Path, bridge_config: BridgeConfig | None = None) -> ToolCapabilities: - """ - Get <tool name> adapter capabilities. - - Args: - repo_path: Path to repository root - bridge_config: Optional bridge configuration (for cross-repo detection) - - Returns: - ToolCapabilities instance for <tool> adapter - """ - from specfact_cli.models.capabilities import ToolCapabilities - - base_path = repo_path - if bridge_config and bridge_config.external_base_path: - base_path = bridge_config.external_base_path - - # Determine tool-specific capabilities - return ToolCapabilities( - tool="<tool-name>", - layout="<layout-type>", - specs_dir="<specs-directory>", - supported_sync_modes=["<sync-mode-1>", "<sync-mode-2>"], # e.g., ["bidirectional", "unidirectional"] - has_custom_hooks=False, # Set to True if tool has custom hooks/constitution - ) -``` - -#### 2.3 Implement `generate_bridge_config()` - -Generate bridge configuration: - -```python -@beartype -@require(lambda repo_path: repo_path.exists(), "Repository path must exist") -@require(lambda repo_path: repo_path.is_dir(), "Repository path must be a directory") -@ensure(lambda result: isinstance(result, BridgeConfig), "Must return BridgeConfig") -def generate_bridge_config(self, repo_path: Path) -> BridgeConfig: - """ - Generate bridge configuration for <tool name> adapter. - - Args: - repo_path: Path to repository root - - Returns: - BridgeConfig instance for <tool> adapter - """ - from specfact_cli.models.bridge import AdapterType, ArtifactMapping, BridgeConfig - - # Auto-detect layout and create appropriate config - # Use existing preset methods if available, or create custom config - return BridgeConfig( - adapter=AdapterType.<TOOL_NAME>, - artifacts={ - "specification": ArtifactMapping( - path_pattern="<path-pattern>", - format="<format>", - ), - # Add other artifact mappings... - }, - ) -``` - -#### 2.4 Implement `import_artifact()` - -Import artifacts from tool format: - -```python -@beartype -@require( - lambda artifact_key: isinstance(artifact_key, str) and len(artifact_key) > 0, "Artifact key must be non-empty" -) -@ensure(lambda result: result is None, "Must return None") -def import_artifact( - self, - artifact_key: str, - artifact_path: Path | dict[str, Any], - project_bundle: Any, # ProjectBundle - avoid circular import - bridge_config: BridgeConfig | None = None, -) -> None: - """ - Import artifact from <tool name> format to SpecFact. - - Args: - artifact_key: Artifact key (e.g., "specification", "plan", "tasks") - artifact_path: Path to artifact file or dict for API-based artifacts - project_bundle: Project bundle to update - bridge_config: Bridge configuration (may contain adapter-specific settings) - """ - # Parse tool-specific format and update project_bundle - # Store tool-specific paths in source_tracking.source_metadata - pass -``` - -#### 2.5 Implement `export_artifact()` - -Export artifacts to tool format: - -```python -@beartype -@require( - lambda artifact_key: isinstance(artifact_key, str) and len(artifact_key) > 0, "Artifact key must be non-empty" -) -@ensure(lambda result: isinstance(result, (Path, dict)), "Must return Path or dict") -def export_artifact( - self, - artifact_key: str, - artifact_data: Any, # Feature, ChangeProposal, etc. - avoid circular import - bridge_config: BridgeConfig | None = None, -) -> Path | dict[str, Any]: - """ - Export artifact from SpecFact to <tool name> format. - - Args: - artifact_key: Artifact key (e.g., "specification", "plan", "tasks") - artifact_data: Data to export (Feature, Plan, etc.) - bridge_config: Bridge configuration (may contain adapter-specific settings) - - Returns: - Path to exported file or dict with API response data - """ - # Convert SpecFact models to tool-specific format - # Write to file or send via API - # Return Path for file-based exports, dict for API-based exports - pass -``` - -#### 2.6 Implement Change Tracking Methods - -For adapters that support change tracking: - -```python -@beartype -@require(lambda bundle_dir: isinstance(bundle_dir, Path), "Bundle directory must be Path") -@require(lambda bundle_dir: bundle_dir.exists(), "Bundle directory must exist") -@ensure(lambda result: result is None or isinstance(result, ChangeTracking), "Must return ChangeTracking or None") -def load_change_tracking( - self, bundle_dir: Path, bridge_config: BridgeConfig | None = None -) -> ChangeTracking | None: - """Load change tracking from tool-specific location.""" - # Return None if tool doesn't support change tracking - return None - -@beartype -@require(lambda bundle_dir: isinstance(bundle_dir, Path), "Bundle directory must be Path") -@require(lambda bundle_dir: bundle_dir.exists(), "Bundle directory must exist") -@ensure(lambda result: result is None, "Must return None") -def save_change_tracking( - self, bundle_dir: Path, change_tracking: ChangeTracking, bridge_config: BridgeConfig | None = None -) -> None: - """Save change tracking to tool-specific location.""" - # Raise NotImplementedError if tool doesn't support change tracking - raise NotImplementedError("Change tracking not supported by this adapter") -``` - -#### 2.7 Implement Change Proposal Methods - -For adapters that support change proposals: - -```python -@beartype -@require(lambda change_id: isinstance(change_id, str) and len(change_id) > 0, "Change ID must be non-empty") -@ensure(lambda result: result is None or isinstance(result, ChangeProposal), "Must return ChangeProposal or None") -def load_change_proposal( - self, change_id: str, bridge_config: BridgeConfig | None = None -) -> ChangeProposal | None: - """Load change proposal from tool-specific location.""" - # Return None if tool doesn't support change proposals - return None - -@beartype -@require(lambda change_proposal: isinstance(change_proposal, ChangeProposal), "Must provide ChangeProposal") -@ensure(lambda result: result is None, "Must return None") -def save_change_proposal( - self, change_proposal: ChangeProposal, bridge_config: BridgeConfig | None = None -) -> None: - """Save change proposal to tool-specific location.""" - # Raise NotImplementedError if tool doesn't support change proposals - raise NotImplementedError("Change proposals not supported by this adapter") -``` - -### Step 3: Register Adapter - -Register your adapter in `src/specfact_cli/adapters/__init__.py`: - -```python -from specfact_cli.adapters.my_adapter import MyAdapter -from specfact_cli.adapters.registry import AdapterRegistry - -# Auto-register adapter -AdapterRegistry.register("my-adapter", MyAdapter) - -__all__ = [..., "MyAdapter"] -``` - -**Important**: Use the actual CLI tool name as the registry key (e.g., `"speckit"`, `"github"`, not `"spec-kit"` or `"git-hub"`). - -### Step 4: Add Contract Decorators - -All methods must have contract decorators: - -- `@beartype`: Runtime type checking -- `@require`: Preconditions (input validation) -- `@ensure`: Postconditions (output validation) - -Example: - -```python -@beartype -@require(lambda repo_path: repo_path.exists(), "Repository path must exist") -@require(lambda repo_path: repo_path.is_dir(), "Repository path must be a directory") -@ensure(lambda result: isinstance(result, bool), "Must return bool") -def detect(self, repo_path: Path, bridge_config: BridgeConfig | None = None) -> bool: - # Implementation... -``` - -### Step 5: Add Tests - -Create comprehensive tests in `tests/unit/adapters/test_my_adapter.py`: - -```python -"""Unit tests for MyAdapter.""" - -import pytest -from pathlib import Path - -from specfact_cli.adapters.my_adapter import MyAdapter -from specfact_cli.adapters.registry import AdapterRegistry -from specfact_cli.models.bridge import BridgeConfig - - -class TestMyAdapter: - """Test MyAdapter class.""" - - def test_detect(self, tmp_path: Path): - """Test detect() method.""" - adapter = MyAdapter() - # Create tool-specific structure - (tmp_path / ".tool").mkdir() - - assert adapter.detect(tmp_path) is True - - def test_get_capabilities(self, tmp_path: Path): - """Test get_capabilities() method.""" - adapter = MyAdapter() - capabilities = adapter.get_capabilities(tmp_path) - - assert capabilities.tool == "my-adapter" - assert "bidirectional" in capabilities.supported_sync_modes - - def test_adapter_registry_registration(self): - """Test adapter is registered in registry.""" - assert AdapterRegistry.is_registered("my-adapter") - adapter_class = AdapterRegistry.get_adapter("my-adapter") - assert adapter_class == MyAdapter -``` - -### Step 6: Update Documentation - -1. **Update `docs/reference/architecture.md`**: Add your adapter to the adapters section -2. **Update `README.md`**: Add your adapter to the supported tools list -3. **Update `CHANGELOG.md`**: Document the new adapter addition - -## Examples - -### SpecKitAdapter (Bidirectional Sync) - -The `SpecKitAdapter` is a complete example of a bidirectional sync adapter: - -- **Location**: `src/specfact_cli/adapters/speckit.py` -- **Registry key**: `"speckit"` -- **Features**: Bidirectional sync, classic/modern layout support, constitution management -- **Public helpers**: `discover_features()`, `detect_changes()`, `detect_conflicts()`, `export_bundle()` - -### GitHubAdapter (Export-Only) - -The `GitHubAdapter` is an example of an export-only adapter: - -- **Location**: `src/specfact_cli/adapters/github.py` -- **Registry key**: `"github"` -- **Features**: Export-only (OpenSpec → GitHub Issues), progress tracking, content sanitization - -### OpenSpecAdapter (Bidirectional Sync) - -The `OpenSpecAdapter` is an example of a bidirectional sync adapter with change tracking: - -- **Location**: `src/specfact_cli/adapters/openspec.py` -- **Registry key**: `"openspec"` -- **Features**: Bidirectional sync, change tracking, change proposals - -## Best Practices - -### 1. Use Adapter Registry Pattern - -**✅ DO:** - -```python -# In modules/sync/src/commands.py -adapter = AdapterRegistry.get_adapter(adapter_name) -if adapter: - adapter_instance = adapter() - if adapter_instance.detect(repo_path, bridge_config): - # Use adapter... -``` - -**❌ DON'T:** - -```python -# Hard-coded adapter checks -if adapter_name == "speckit": - adapter = SpecKitAdapter() -elif adapter_name == "github": - adapter = GitHubAdapter() -``` - -### 2. Support Cross-Repo Detection - -Always check `bridge_config.external_base_path` for cross-repository support: - -```python -base_path = repo_path -if bridge_config and bridge_config.external_base_path: - base_path = bridge_config.external_base_path - -# Use base_path for all file operations -tool_dir = base_path / ".tool" -``` - -### 3. Store Source Metadata + def detect(self, repo_path: Path, bridge_config: BridgeConfig | None = None) -> bool: + return (repo_path / ".mytool").exists() -When importing artifacts, store tool-specific paths in `source_tracking.source_metadata`: + def get_capabilities(self, repo_path: Path, bridge_config: BridgeConfig | None = None) -> ToolCapabilities: + return ToolCapabilities(tool="mytool", layout="classic", specs_dir="specs", supported_sync_modes=["read-only"]) -```python -if hasattr(project_bundle, "source_tracking") and project_bundle.source_tracking: - project_bundle.source_tracking.source_metadata = { - "tool": "my-adapter", - "original_path": str(artifact_path), - "tool_version": "1.0.0", - } -``` + def import_artifact(self, artifact_key: str, artifact_path: Path | dict[str, Any], project_bundle: Any, bridge_config: BridgeConfig | None = None) -> None: + ... -### 4. Handle Missing Artifacts Gracefully + def export_artifact(self, artifact_key: str, artifact_data: Any, bridge_config: BridgeConfig | None = None) -> Path | dict[str, Any]: + ... -Return appropriate error messages when artifacts are not found: + def generate_bridge_config(self, repo_path: Path) -> BridgeConfig: + ... -```python -if not artifact_path.exists(): - raise FileNotFoundError( - f"Artifact '{artifact_key}' not found at {artifact_path}. " - f"Expected location: {expected_path}" - ) -``` + def load_change_tracking(self, bundle_dir: Path, bridge_config: BridgeConfig | None = None) -> ChangeTracking | None: + ... -### 5. Use Contract Decorators + def save_change_tracking(self, bundle_dir: Path, change_tracking: ChangeTracking, bridge_config: BridgeConfig | None = None) -> None: + ... -Always add contract decorators for runtime validation: + def load_change_proposal(self, bundle_dir: Path, change_name: str, bridge_config: BridgeConfig | None = None) -> ChangeProposal | None: + ... -```python -@beartype -@require(lambda artifact_key: len(artifact_key) > 0, "Artifact key must be non-empty") -@ensure(lambda result: result is not None, "Must return non-None value") -def import_artifact(self, artifact_key: str, ...) -> None: - # Implementation... + def save_change_proposal(self, bundle_dir: Path, proposal: ChangeProposal, bridge_config: BridgeConfig | None = None) -> None: + ... ``` -## Testing - -### Unit Tests - -Create comprehensive unit tests covering: - -- Detection logic (same-repo and cross-repo) -- Capabilities retrieval -- Artifact import/export for all supported artifact types -- Error handling -- Adapter registry registration - -### Integration Tests - -Create integration tests covering: - -- Full sync workflows -- Bidirectional sync (if supported) -- Cross-repo scenarios -- Error recovery - -## Troubleshooting - -### Adapter Not Detected - -- Check `detect()` method logic -- Verify tool-specific structure exists -- Check `bridge_config.external_base_path` for cross-repo scenarios - -### Import/Export Failures +## Code references -- Verify artifact paths are resolved correctly -- Check `bridge_config.external_base_path` for cross-repo scenarios -- Ensure artifact format matches tool expectations +- Base interface: `src/specfact_cli/adapters/base.py` +- Capabilities model: `src/specfact_cli/models/capabilities.py` +- Adapter examples: `src/specfact_cli/adapters/openspec.py`, `src/specfact_cli/adapters/speckit.py` -### Registry Registration Issues +## Error handling and logging -- Verify adapter is imported in `adapters/__init__.py` -- Check registry key matches actual tool name -- Ensure `AdapterRegistry.register()` is called at module import time +- Raise explicit exceptions for invalid artifact keys or unsupported operations. +- Use bridge logger patterns in command/service layers for non-fatal adapter issues. +- Keep adapter behavior deterministic and avoid silent data mutation. -## Related Documentation +## Related docs -- **[Architecture Documentation](../reference/architecture.md)**: Adapter architecture overview -- **[Architecture Documentation](../reference/architecture.md)**: Adapter architecture and BridgeConfig/ToolCapabilities models -- **[SpecKitAdapter Example](../../src/specfact_cli/adapters/speckit.py)**: Complete bidirectional sync example -- **[GitHubAdapter Example](../../src/specfact_cli/adapters/github.py)**: Export-only adapter example +- [Architecture Reference](../reference/architecture.md) +- [Bridge Registry](../reference/bridge-registry.md) +- [Creating Custom Bridges](creating-custom-bridges.md) diff --git a/docs/guides/agile-scrum-workflows.md b/docs/guides/agile-scrum-workflows.md index 8863786b..a01f2b62 100644 --- a/docs/guides/agile-scrum-workflows.md +++ b/docs/guides/agile-scrum-workflows.md @@ -10,6 +10,62 @@ This guide explains how to use SpecFact CLI for agile/scrum workflows, including Preferred command paths are `specfact backlog ceremony standup ...` and `specfact backlog ceremony refinement ...`. Legacy `backlog daily`/`backlog refine` remain available for compatibility. +Backlog module command surface: + +- `specfact backlog add` +- `specfact backlog analyze-deps` +- `specfact backlog trace-impact` +- `specfact backlog verify-readiness` +- `specfact backlog diff` +- `specfact backlog sync` +- `specfact backlog promote` +- `specfact backlog generate-release-notes` +- `specfact backlog delta status|impact|cost-estimate|rollback-analysis` + +## Backlog Issue Creation (`backlog add`) + +Use `specfact backlog add` to create a backlog item with optional parent hierarchy validation and DoR checks. + +```bash +# Non-interactive creation +specfact backlog add \ + --adapter github \ + --project-id nold-ai/specfact-cli \ + --template github_projects \ + --type story \ + --parent FEAT-123 \ + --title "Implement X" \ + --body "Acceptance criteria: ..." \ + --non-interactive + +# Enforce Definition of Ready from .specfact/dor.yaml before create +specfact backlog add \ + --adapter github \ + --project-id nold-ai/specfact-cli \ + --type story \ + --title "Implement X" \ + --body "Acceptance criteria: ..." \ + --check-dor \ + --repo-path . + +# Interactive ADO flow with sprint/iteration selection and story-quality fields +specfact backlog add \ + --adapter ado \ + --project-id "dominikusnold/Specfact CLI" +``` + +Key behavior: + +- validates parent exists in current backlog graph before creating +- validates child-parent type compatibility using `creation_hierarchy` from config/template +- supports interactive prompts when required fields are missing (unless `--non-interactive`) +- prompts for ADO sprint/iteration selection and resolves available iterations from `--project-id` context +- supports multiline body and acceptance criteria capture (default sentinel `::END::`) +- captures priority and story points for story-like items +- supports description rendering mode (`markdown` or `classic`) +- auto-selects template by adapter when omitted (`ado_scrum` for ADO, `github_projects` for GitHub) +- creates via adapter protocol (`github` or `ado`) and prints created `id`, `key`, and `url` + ## Overview SpecFact CLI supports real-world agile/scrum practices through: diff --git a/docs/guides/module-development.md b/docs/guides/module-development.md new file mode 100644 index 00000000..24b15340 --- /dev/null +++ b/docs/guides/module-development.md @@ -0,0 +1,75 @@ +--- +layout: default +title: Module Development Guide +permalink: /guides/module-development/ +description: How to build and package SpecFact CLI modules. +--- + +# Module Development Guide + +This guide defines the required structure and contracts for authoring SpecFact modules. + +## Required structure + +```text +src/specfact_cli/modules/<module-name>/ + module-package.yaml + src/ + __init__.py + app.py + commands.py +``` + +For workspace-level modules, keep the same structure under the configured modules root. + +## `module-package.yaml` schema + +Required fields: + +- `name`: module identifier +- `version`: semantic version string +- `commands`: top-level command names provided by this module + +Common optional fields: + +- `command_help` +- `pip_dependencies` +- `module_dependencies` +- `core_compatibility` +- `tier` +- `addon_id` + +Extension/security fields: + +- `schema_extensions` +- `service_bridges` +- `publisher` +- `integrity` + +## Command code expectations + +- `src/app.py` exposes the Typer `app` used by registry loaders. +- `src/commands.py` holds command handlers and options. +- Public APIs should use contract-first decorators: + - `@icontract` (`@require`, `@ensure`) + - `@beartype` + +## Naming and design conventions + +- File/module names: `snake_case` +- Classes: `PascalCase` +- Keep command implementations scoped to module boundaries. +- Use `get_bridge_logger` for production command logging paths. + +## Integration checklist + +1. Add `module-package.yaml`. +2. Implement `src/app.py` and `src/commands.py`. +3. Ensure loader/import path works with registry discovery. +4. Run format/type-check/lint/contract checks. + +## Related docs + +- [Architecture Reference](../reference/architecture.md) +- [Module System Architecture](../architecture/module-system.md) +- [Adapter Development Guide](adapter-development.md) diff --git a/docs/index.md b/docs/index.md index fc163a94..a76ddde9 100644 --- a/docs/index.md +++ b/docs/index.md @@ -84,6 +84,13 @@ Why this matters: - **[Extending ProjectBundle](guides/extending-projectbundle.md)** - Declare and use namespaced extension fields on Feature/ProjectBundle - **[Module Security](reference/module-security.md)** - Publisher, integrity (checksum/signature), and versioned dependencies +### For Technical Readers + +- **[Architecture Reference](reference/architecture.md)** - Current architecture model and interfaces +- **[Architecture Docs Index](architecture/README.md)** - Component graph, module system, data flow, state machines +- **[Architecture Implementation Status](architecture/implementation-status.md)** - Implemented vs planned features +- **[Architecture ADRs](architecture/adr/README.md)** - Decision records and template + ## Module Marketplace @@ -171,6 +178,9 @@ specfact sync bridge --adapter ado --mode export-only \ - **[Command Reference](reference/commands.md)** - Complete command documentation - **[Authentication](reference/authentication.md)** - Device code auth flows and token storage - **[Architecture](reference/architecture.md)** - Technical design and principles +- **[Architecture Docs Index](architecture/README.md)** - Deep-dive architecture documentation +- **[Architecture Implementation Status](architecture/implementation-status.md)** - Current vs planned architecture scope +- **[Architecture ADRs](architecture/adr/README.md)** - Architecture decision records - **[Bridge Registry](reference/bridge-registry.md)** 🆕 - Module-declared bridge converters and lifecycle registration - **[Operational Modes](reference/modes.md)** - CI/CD vs CoPilot modes - **[Directory Structure](reference/directory-structure.md)** - Project structure diff --git a/docs/reference/architecture.md b/docs/reference/architecture.md index b18bb64b..82588c86 100644 --- a/docs/reference/architecture.md +++ b/docs/reference/architecture.md @@ -6,1113 +6,160 @@ permalink: /architecture/ # Architecture -Technical architecture and design principles of SpecFact CLI. +SpecFact CLI is a contract-first Python CLI with a production-ready module registry and bridge-based integration layer. -## Quick Overview +## Current Architecture Status -**For Users**: SpecFact CLI is a **brownfield-first tool** that reverse engineers legacy Python code into documented specs, then enforces them as runtime contracts. It works in two modes: **CI/CD mode** (fast, automated) and **CoPilot mode** (interactive, AI-enhanced). **Primary use case**: Analyze existing codebases. **Secondary use case**: Add enforcement to Spec-Kit projects. +- Module system is **production-ready** (introduced in `v0.27`) and is the default command-loading path. +- Architecture commands such as `specfact architecture derive|validate|trace` are **planned** and tracked in OpenSpec change `architecture-01-solution-layer`. +- Protocol FSM modeling exists in data models; a full runtime FSM engine is still planned. -**For Contributors**: SpecFact CLI implements a contract-driven development framework through three layers: Specification (plans and protocols), Contract (runtime validation), and Enforcement (quality gates). The architecture supports dual-mode operation (CI/CD and CoPilot) with agent-based routing for complex operations. +## Layer Model ---- - -## Overview - -SpecFact CLI implements a **contract-driven development** framework through three core layers: - -1. **Specification Layer** - Plan bundles and protocol definitions -2. **Contract Layer** - Runtime contracts, static checks, and property tests -3. **Enforcement Layer** - No-escape gates with budgets and staged enforcement - -### Related Documentation - -- [Getting Started](../getting-started/README.md) - Installation and first steps -- [Use Cases](../guides/use-cases.md) - Real-world scenarios -- [Workflows](../guides/workflows.md) - Common daily workflows -- [Commands](commands.md) - Complete command reference -- [Bridge Registry](bridge-registry.md) - Module-declared converter registration -- [Creating Custom Bridges](../guides/creating-custom-bridges.md) - Custom converter patterns - -## Bridge Registry Integration - -`arch-05-bridge-registry` introduces module-declared service converters into lifecycle registration. - -- Modules declare `service_bridges` in `module-package.yaml`. -- Lifecycle loads converter classes by dotted path and registers them in `BridgeRegistry`. -- Invalid bridge declarations are non-fatal and skipped with warnings. -- Protocol compliance reporting uses effective runtime interface detection and logs one aggregate summary line. - -## Schema Extension System - -`arch-07-schema-extension-system` lets modules extend `Feature` and `ProjectBundle` with namespaced custom fields without changing core models. - -- **Extensions field**: `Feature` and `ProjectBundle` have an `extensions: dict[str, Any]` field (default empty dict). Keys use the form `module_name.field` (e.g. `backlog.ado_work_item_id`). -- **Accessors**: `get_extension(module_name, field, default=None)` and `set_extension(module_name, field, value)` enforce namespace format and type safety via contracts. -- **Manifest**: Optional `schema_extensions` in `module-package.yaml` declare target model, field name, type hint, and description. Lifecycle loads these and registers them in a global extension registry. -- **Collision detection**: If two modules declare the same (target, field), the second registration is rejected and an error is logged; module command registration continues. -- See [Extending ProjectBundle](/guides/extending-projectbundle/) for usage and best practices. - -## Module System Foundation - -SpecFact is transitioning from hard-wired command wiring to a module-first architecture. - -### Design Intent - -- Core runtime should stay stable and minimal: lifecycle, registry, contracts, validation orchestration. -- Feature behavior should live in modules with explicit interfaces. -- Legacy command paths remain as compatibility shims during migration. - -### Command Implementation Layout - -- Primary command implementations: `src/specfact_cli/modules/<module>/src/commands.py` -- Legacy compatibility shims: `src/specfact_cli/commands/*.py` (only `app` re-export is guaranteed) -- Preferred imports: - - `from specfact_cli.modules.<module>.src.commands import app` - - `from specfact_cli.modules.<module>.src.commands import <symbol>` - -### Engineering Benefits - -- Independent module delivery cadence without repeated core rewiring. -- Lower coupling between features and CLI runtime. -- Easier interface-based testing and safer incremental migrations. -- Better path for pending OpenSpec-driven module evolution. - -## Module Marketplace - -SpecFact supports marketplace-driven module distribution with deterministic multi-location discovery. - -### Discovery Pattern - -Module discovery scans in strict priority order: - -1. Built-in modules (`site-packages/specfact_cli/modules/`) -2. Marketplace modules (`~/.specfact/marketplace-modules/`) -3. Custom modules (`~/.specfact/custom-modules/`) - -When duplicate module names exist, the higher-priority source wins and shadowed modules are ignored. - -### Registry Client Architecture - -The registry client fetches `index.json` from the central module repository and resolves: - -- module metadata (`id`, `namespace`, `latest_version`, compatibility) -- download URL -- checksum for integrity validation - -Install and search commands degrade gracefully in offline mode. - -### Install Sequence - -```mermaid -sequenceDiagram - participant User - participant CLI as specfact module install - participant Registry as Marketplace Registry - participant Local as ~/.specfact/marketplace-modules - - User->>CLI: install specfact/backlog - CLI->>Registry: fetch index.json - Registry-->>CLI: module metadata + checksum - CLI->>Registry: download tarball - Registry-->>CLI: module archive - CLI->>CLI: verify checksum + compatibility - CLI->>Local: extract and register module - CLI-->>User: install success -``` - -## Operational Modes - -SpecFact CLI supports two operational modes for different use cases: - -### Mode 1: CI/CD Automation (Default) - -**Best for:** - -- Clean-code repositories -- Self-explaining codebases -- Lower complexity projects -- Automated CI/CD pipelines - -**Characteristics:** - -- Fast, deterministic execution (< 10s typical) -- No AI copilot dependency -- Direct command execution -- Structured JSON/Markdown output -- **Enhanced Analysis**: AST + Semgrep hybrid pattern detection (API endpoints, models, CRUD, code quality) -- **Optimized Bundle Size**: 81% reduction (18MB → 3.4MB, 5.3x smaller) via test pattern extraction to OpenAPI contracts -- **Interruptible**: All parallel operations support Ctrl+C for immediate cancellation - -**Usage:** - -```bash -# Auto-detected (default) -specfact import from-code my-project --repo . - -# Explicit CI/CD mode -specfact --mode cicd import from-code my-project --repo . -``` - -### Mode 2: CoPilot-Enabled - -**Best for:** - -- Brownfield repositories -- High complexity codebases -- Mixed code quality -- Interactive development with AI assistants - -**Characteristics:** - -- Enhanced prompts for better analysis -- IDE integration via prompt templates (slash commands) -- Agent mode routing for complex operations -- Interactive assistance - -**Usage:** - -```bash -# Auto-detected (if CoPilot available) -specfact import from-code my-project --repo . - -# Explicit CoPilot mode -specfact --mode copilot import from-code my-project --repo . - -# IDE integration (slash commands) -# First, initialize: specfact init ide --ide cursor -# Then use in IDE chat: -/specfact.01-import legacy-api --repo . --confidence 0.7 -/specfact.02-plan init legacy-api -/specfact.06-sync --adapter speckit --repo . --bidirectional -``` - -### Mode Detection - -Mode is automatically detected based on: - -1. **Explicit `--mode` flag** (highest priority) -2. **CoPilot API availability** (environment/IDE detection) -3. **IDE integration** (VS Code/Cursor with CoPilot enabled) -4. **Default to CI/CD mode** (fallback) - ---- - -## Agent Modes - -Agent modes provide enhanced prompts and routing for CoPilot-enabled operations: - -### Available Agent Modes - -- **`analyze` agent mode**: Brownfield analysis with code understanding -- **`plan` agent mode**: Plan management with business logic understanding -- **`sync` agent mode**: Bidirectional sync with conflict resolution - -### Agent Mode Routing - -Each command uses specialized agent mode routing: - -```python -# Analyze agent mode -/specfact.01-import legacy-api --repo . --confidence 0.7 -# → Enhanced prompts for code understanding -# → Context injection (current file, selection, workspace) -# → Interactive assistance for complex codebases - -# Plan agent mode -/specfact.02-plan init legacy-api -# → Guided wizard mode -# → Natural language prompts -# → Context-aware feature extraction - -# Sync agent mode -/specfact.06-sync --adapter speckit --repo . --bidirectional -# → Automatic source detection via bridge adapter -# → Conflict resolution assistance -# → Change explanation and preview -``` - ---- - -## Sync Operation - -SpecFact CLI supports bidirectional synchronization for consistent change management: - -### Bridge-Based Sync (Adapter-Agnostic) - -Bidirectional synchronization between external tools (e.g., Spec-Kit, OpenSpec) and SpecFact via configurable bridge: - -```bash -# Spec-Kit bidirectional sync -specfact sync bridge --adapter speckit --bundle <bundle-name> --repo . --bidirectional - -# OpenSpec read-only sync (Phase 1) -specfact sync bridge --adapter openspec --mode read-only --bundle <bundle-name> --repo . - -# OpenSpec cross-repository sync -specfact sync bridge --adapter openspec --mode read-only --bundle <bundle-name> --repo . --external-base-path ../specfact-cli-internal - -# Continuous watch mode -specfact sync bridge --adapter speckit --bundle <bundle-name> --repo . --bidirectional --watch --interval 5 -``` - -**What it syncs:** +SpecFact is organized as three core quality layers plus supporting implementation layers: -- `specs/[###-feature-name]/spec.md`, `plan.md`, `tasks.md` ↔ `.specfact/projects/<bundle-name>/` aspect files -- `.specify/memory/constitution.md` ↔ SpecFact business context -- `specs/[###-feature-name]/research.md`, `data-model.md`, `quickstart.md` ↔ SpecFact supporting artifacts -- `specs/[###-feature-name]/contracts/*.yaml` ↔ SpecFact protocol definitions -- Automatic conflict resolution with priority rules +1. Specification layer: `ProjectBundle` models and protocol specs. +2. Contract layer: runtime contracts (`@icontract`, `@beartype`) plus type checks. +3. Enforcement layer: CLI/workflow quality gates and validation orchestration. +4. Adapter layer: external integration via `BridgeAdapter` implementations. +5. Analysis layer: analyzers and validators for repository and backlog intelligence. +6. Module layer: command modules discovered and loaded through the registry. -**Bridge Architecture**: The sync layer uses a configurable bridge (`.specfact/config/bridge.yaml`) that maps SpecFact logical concepts to physical tool artifacts, making it adapter-agnostic and extensible for future tool integrations (OpenSpec, Linear, Jira, Notion, etc.). The architecture uses a plugin-based adapter registry pattern - all adapters are registered in `AdapterRegistry` and accessed via `AdapterRegistry.get_adapter()`, eliminating hard-coded adapter checks in core components like `BridgeProbe` and `BridgeSync`. +## Command Registry and Module System -### Repository Sync +The command runtime uses lazy loading through `CommandRegistry` in `src/specfact_cli/registry/registry.py`: -Sync code changes to SpecFact artifacts: +- Registration stores command metadata and loader callables. +- `get_typer()` loads module Typer apps on first use and caches them. +- Help/list operations use metadata without importing every module. +- `register_builtin_commands()` wires module packages from discovery. -```bash -# One-time sync -specfact sync repository --repo . --target .specfact +Module discovery and registration logic lives in `src/specfact_cli/registry/module_packages.py` and reads module manifests from `module-package.yaml`. -# Continuous watch mode -specfact sync repository --repo . --watch --interval 5 -``` - -**What it tracks:** - -- Code changes → Plan artifact updates -- Deviations from manual plans -- Feature/story extraction from code - -## Contract Layers - -```mermaid -graph TD - A[Specification] --> B[Runtime Contracts] - B --> C[Static Checks] - B --> D[Property Tests] - B --> E[Runtime Sentinels] - C --> F[No-Escape Gate] - D --> F - E --> F - F --> G[PR Approved/Blocked] -``` - -### 1. Specification Layer +### Module package structure -**Project Bundle** (`.specfact/projects/<bundle-name>/` - modular structure with multiple aspect files): +Canonical package structure used by current modules: -```yaml -version: "1.0" -idea: - title: "SpecFact CLI Tool" - narrative: "Enable contract-driven development" -product: - themes: - - "Developer Experience" - releases: - - name: "v0.1" - objectives: ["Import", "Analyze", "Enforce"] -features: - - key: FEATURE-001 - title: "Spec-Kit Import" - outcomes: - - "Zero manual conversion" - stories: - - key: STORY-001 - title: "Parse Spec-Kit artifacts" - acceptance: - - "Schema validation passes" +```text +src/specfact_cli/modules/<module-name>/ + module-package.yaml + src/ + __init__.py + app.py + commands.py ``` -**Protocol** (`.specfact/protocols/workflow.protocol.yaml`): - -```yaml -states: - - INIT - - PLAN - - REQUIREMENTS - - ARCHITECTURE - - CODE - - REVIEW - - DEPLOY -start: INIT -transitions: - - from_state: INIT - on_event: start_planning - to_state: PLAN - - from_state: PLAN - on_event: approve_plan - to_state: REQUIREMENTS - guard: plan_quality_gate_passes -``` - -### 2. Contract Layer - -## Contract-First Module Development - -SpecFact module development follows a contract-first pattern: +For externalized module development (workspace-level modules), the same manifest and source conventions apply. -- `ModuleIOContract` formalizes module IO on top of `ProjectBundle`. -- `ValidationReport` standardizes module validation output. -- Registration validates supported protocol operations and declared schema compatibility. +### `module-package.yaml` fields -### Core-Module Isolation Principle +Common manifest fields: -Core runtime paths (`cli.py`, `registry/`, `models/`, `utils/`, `contracts/`) must not import from -`specfact_cli.modules.*` directly. - -- Core invokes module capabilities through `CommandRegistry`. -- Modules are discovered and loaded lazily. -- Static isolation tests enforce this boundary in CI. +- Required: `name`, `version`, `commands` +- Common optional: `command_help`, `pip_dependencies`, `module_dependencies`, `core_compatibility`, `tier`, `addon_id` +- Extension/security optional: `schema_extensions`, `service_bridges`, `publisher`, `integrity` See also: - +- [Module Development Guide](../guides/module-development.md) - [Module Contracts](module-contracts.md) -- [ProjectBundle Schema](projectbundle-schema.md) - -#### Runtime Contracts (icontract) - -```python -from icontract import require, ensure -from beartype import beartype - -@require(lambda plan: plan.version == "1.0") -@ensure(lambda result: len(result.features) > 0) -@beartype -def validate_plan(plan: PlanBundle) -> ValidationResult: - """Validate plan bundle against contracts.""" - return ValidationResult(valid=True) -``` - -#### Static Checks (Semgrep) - -```yaml -# .semgrep/async-anti-patterns.yaml -rules: - - id: async-without-await - pattern: | - async def $FUNC(...): - ... - pattern-not: | - async def $FUNC(...): - ... - await ... - message: "Async function without await" - severity: ERROR -``` - -#### Property Tests (Hypothesis) - -```python -from hypothesis import given -from hypothesis.strategies import text - -@given(text()) -def test_plan_key_format(feature_key: str): - """All feature keys must match FEATURE-\d+ format.""" - if feature_key.startswith("FEATURE-"): - assert feature_key[8:].isdigit() -``` - -#### Runtime Sentinels +- [Module Security](module-security.md) -```python -import asyncio -from typing import Optional - -class EventLoopMonitor: - """Monitor event loop health.""" - - def __init__(self, lag_threshold_ms: float = 100.0): - self.lag_threshold_ms = lag_threshold_ms - - async def check_lag(self) -> Optional[float]: - """Return lag in ms if above threshold.""" - start = asyncio.get_event_loop().time() - await asyncio.sleep(0) - lag_ms = (asyncio.get_event_loop().time() - start) * 1000 - return lag_ms if lag_ms > self.lag_threshold_ms else None -``` - -### 3. Enforcement Layer - -#### No-Escape Gate - -```yaml -# .github/workflows/specfact-gate.yml -name: No-Escape Gate -on: [pull_request] -jobs: - validate: - runs-on: ubuntu-latest - steps: - - uses: actions/checkout@v4 - - name: SpecFact Validation - run: | - specfact repro --budget 120 --verbose - if [ $? -ne 0 ]; then - echo "::error::Contract violations detected" - exit 1 - fi -``` - -#### Staged Enforcement - -| Stage | Description | Violations | -|-------|-------------|------------| -| **Shadow** | Log only, never block | All logged, none block | -| **Warn** | Warn on medium+, block high | HIGH blocks, MEDIUM warns | -| **Block** | Block all medium+ | MEDIUM+ blocks | - -#### Budget-Based Execution - -```python -from typing import Optional -import time - -class BudgetedValidator: - """Validator with time budget.""" - - def __init__(self, budget_seconds: int = 120): - self.budget_seconds = budget_seconds - self.start_time: Optional[float] = None - - def start(self): - """Start budget timer.""" - self.start_time = time.time() - - def check_budget(self) -> bool: - """Return True if budget exceeded.""" - if self.start_time is None: - return False - elapsed = time.time() - self.start_time - return elapsed > self.budget_seconds -``` - -## Data Models - -### PlanBundle - -```python -from pydantic import BaseModel, Field -from typing import List - -class Idea(BaseModel): - """High-level idea.""" - title: str - narrative: str - -class Story(BaseModel): - """User story.""" - key: str = Field(pattern=r"^STORY-\d+$") - title: str - acceptance: List[str] - -class Feature(BaseModel): - """Feature with stories.""" - key: str = Field(pattern=r"^FEATURE-\d+$") - title: str - outcomes: List[str] - stories: List[Story] - -class PlanBundle(BaseModel): - """Complete plan bundle.""" - version: str = "1.0" - idea: Idea - features: List[Feature] -``` - -### ProtocolSpec - -```python -from pydantic import BaseModel -from typing import List, Optional - -class Transition(BaseModel): - """State machine transition.""" - from_state: str - on_event: str - to_state: str - guard: Optional[str] = None - -class ProtocolSpec(BaseModel): - """FSM protocol specification.""" - states: List[str] - start: str - transitions: List[Transition] -``` - -### Deviation - -```python -from enum import Enum -from pydantic import BaseModel - -class DeviationSeverity(str, Enum): - """Severity levels.""" - LOW = "LOW" - MEDIUM = "MEDIUM" - HIGH = "HIGH" - CRITICAL = "CRITICAL" - -class Deviation(BaseModel): - """Detected deviation.""" - type: str - severity: DeviationSeverity - description: str - location: str - suggestion: Optional[str] = None -``` - -### Change Tracking Models (v1.1 Schema) - -**Introduced in v0.21.1**: Tool-agnostic change tracking models for delta spec tracking and change proposals. These models support OpenSpec and other tools (Linear, Jira, etc.) that track changes to specifications. - -```python -from enum import Enum -from pydantic import BaseModel -from typing import Optional, Dict, List, Any - -class ChangeType(str, Enum): - """Change type for delta specs (tool-agnostic).""" - ADDED = "added" - MODIFIED = "modified" - REMOVED = "removed" - -class FeatureDelta(BaseModel): - """Delta tracking for a feature change (tool-agnostic).""" - feature_key: str - change_type: ChangeType - original_feature: Optional[Feature] = None # For MODIFIED/REMOVED - proposed_feature: Optional[Feature] = None # For ADDED/MODIFIED - change_rationale: Optional[str] = None - change_date: Optional[str] = None # ISO timestamp - validation_status: Optional[str] = None # pending, passed, failed - validation_results: Optional[Dict[str, Any]] = None - source_tracking: Optional[SourceTracking] = None # Tool-specific metadata - -class ChangeProposal(BaseModel): - """Change proposal (tool-agnostic, used by OpenSpec and other tools).""" - name: str # Change identifier (e.g., 'add-user-feedback') - title: str - description: str # What: Description of the change - rationale: str # Why: Rationale and business value - timeline: Optional[str] = None # When: Timeline and dependencies - owner: Optional[str] = None # Who: Owner and stakeholders - stakeholders: List[str] = [] - dependencies: List[str] = [] - status: str = "proposed" # proposed, in-progress, applied, archived - created_at: str # ISO timestamp - applied_at: Optional[str] = None - archived_at: Optional[str] = None - source_tracking: Optional[SourceTracking] = None # Tool-specific metadata - -class ChangeTracking(BaseModel): - """Change tracking for a bundle (tool-agnostic capability).""" - proposals: Dict[str, ChangeProposal] = {} # change_name -> ChangeProposal - feature_deltas: Dict[str, List[FeatureDelta]] = {} # change_name -> [FeatureDelta] - -class ChangeArchive(BaseModel): - """Archive entry for completed changes (tool-agnostic).""" - change_name: str - applied_at: str # ISO timestamp - applied_by: Optional[str] = None - pr_number: Optional[str] = None - commit_hash: Optional[str] = None - feature_deltas: List[FeatureDelta] = [] - validation_results: Optional[Dict[str, Any]] = None - source_tracking: Optional[SourceTracking] = None # Tool-specific metadata -``` - -**Key Design Principles**: - -- **Tool-Agnostic**: All tool-specific metadata stored in `source_tracking`, not in core models -- **Cross-Repository Support**: Adapters can load change tracking from external repositories -- **Backward Compatible**: All fields optional - v1.0 bundles work without modification -- **Validation Integration**: Change proposals can include SpecFact validation results - -**Schema Versioning**: - -- **v1.0**: Original bundle format (no change tracking) -- **v1.1**: Extended with optional `change_tracking` and `change_archive` fields -- **Automatic Detection**: Bundle loader checks schema version and conditionally loads change tracking via adapters - -## Modules Design - -**Introduced in v0.27**: The CLI uses a **modular command registry** so that command groups are discovered from **module packages** and loaded lazily. This keeps startup fast and allows optional modules to be enabled or disabled per user. - -### Command registry - -- **CommandRegistry** (`src/specfact_cli/registry/registry.py`): Registers command groups by name with a **loader** (callable that returns a Typer app) and **metadata** (help text, tier, addon_id). The loader is invoked only when that command is requested (e.g. `specfact sync …`), so help and completion can run without loading every module. -- **Bootstrap** (`registry/bootstrap.py`): On startup, `register_builtin_commands()` calls `register_module_package_commands()`, which discovers module packages and registers only **enabled** modules’ commands. - -### Module packages - -- **Location**: `src/specfact_cli/modules/<name>/` (e.g. `sync`, `plan`, `init`). -- **Manifest**: Each package has a `module-package.yaml` with: - - `name`, `version`, `commands` (list of command names the package provides) - - optional `command_help` (name → short help for root `specfact --help`) - - optional `pip_dependencies`, `module_dependencies`, `core_compatibility`, `tier` (e.g. community/enterprise), `addon_id` -- **Entry point**: Each package has `src/app.py` that exposes a Typer `app` by importing from module-local `src/commands.py`. - -### Legacy shim policy and timeline - -- Legacy files under `src/specfact_cli/commands/*.py` are compatibility shims. -- Supported legacy surface: `from specfact_cli.commands.<name> import app`. -- Preferred replacement imports: - - `from specfact_cli.modules.<module>.src.commands import app` - - `from specfact_cli.modules.<module>.src.commands import <symbol>` -- Deprecation timeline: non-`app` legacy shim usage is deprecated now; shim removal is planned no earlier than `v0.30` (or next major migration window). - -### Module state (user-level) - -- **File**: `~/.specfact/registry/modules.json` (created when you run `specfact init`). -- **Content**: List of `{ "id", "version", "enabled" }` per module. Only modules with `enabled: true` have their commands registered. -- **CLI**: - - Canonical lifecycle surface: `specfact module` (`install`, `list`, `uninstall`, `upgrade`). - - Compatibility aliases: `specfact init --list-modules`, `--enable-module`, `--disable-module` remain supported during migration. - - In interactive terminals, bare init compatibility flags still open an interactive selector. - - In non-interactive mode, explicit module ids are required. - - Safe dependency guards block invalid enable/disable actions unless `--force` is used. - - With `--force`, enable cascades to required dependencies and disable cascades to enabled dependents. - -### Lifecycle notes and roadmap - -- `specfact init` is bootstrap-focused; lifecycle UX is canonical in `specfact module` with init aliases preserved for compatibility. -- `specfact init ide` is responsible for IDE prompt/template setup. -- This lifecycle architecture is the baseline for future granular module updates and enhancements. -- Third-party/community module installation is planned as a next step, but not implemented yet. - -### Registry package layout - -- **registry/registry.py** – CommandRegistry (lazy loaders, metadata, list_commands, get_typer). -- **registry/module_packages.py** – Discovery of packages under `modules/`, parsing of `module-package.yaml`, building loaders, and registration with CommandRegistry; respects `modules.json` and `--enable-module` / `--disable-module`. -- **registry/module_state.py** – Read/write `~/.specfact/registry/modules.json`. -- **registry/metadata.py** – CommandMetadata (name, help, tier, addon_id). -- **registry/bootstrap.py** – Single entry point that registers all built-in commands via module discovery. -- **registry/help_cache.py** – Registry directory and optional `commands.json` cache for fast root help. - -## Module Structure - -```bash -src/specfact_cli/ -├── cli.py # Main CLI entry point (uses CommandRegistry; no top-level command imports) -├── registry/ # Command registry and module discovery (v0.27+) -│ ├── registry.py # CommandRegistry: lazy-loaded Typer apps by name -│ ├── bootstrap.py # Registers commands from module packages -│ ├── module_packages.py # Discover modules, parse module-package.yaml, register loaders -│ ├── module_state.py # Read/write ~/.specfact/registry/modules.json -│ ├── metadata.py # CommandMetadata for help/tier/addon -│ └── help_cache.py # Registry dir and commands.json cache -├── modules/ # Module packages (each provides one or more CLI commands) -│ ├── init/ # e.g. init -│ │ ├── module-package.yaml -│ │ └── src/app.py -│ ├── sync/ -│ │ ├── module-package.yaml -│ │ └── src/app.py -│ └── ... # plan, analyze, enforce, repro, etc. -├── commands/ # Legacy app-only compatibility shims -│ ├── import_cmd.py # -> modules/import_cmd/src/commands.py -│ ├── analyze.py # -> modules/analyze/src/commands.py -│ ├── plan.py # -> modules/plan/src/commands.py -│ ├── enforce.py # -> modules/enforce/src/commands.py -│ └── ... # auth, backlog, contract, drift, etc. -├── modes/ # Operational mode management -│ ├── detector.py # Mode detection logic -│ └── router.py # Command routing -├── utils/ # Utilities -│ └── ide_setup.py # IDE integration (template copying) -├── agents/ # Agent mode implementations -│ ├── base.py # Agent mode base class -│ ├── analyze_agent.py # Analyze agent mode -│ ├── plan_agent.py # Plan agent mode -│ └── sync_agent.py # Sync agent mode -├── adapters/ # Bridge adapter implementations -│ ├── base.py # BridgeAdapter base interface -│ ├── registry.py # AdapterRegistry for plugin-based architecture -│ ├── openspec.py # OpenSpec adapter (read-only sync) -│ └── speckit.py # Spec-Kit adapter (bidirectional sync) -├── sync/ # Sync operation modules -│ ├── bridge_sync.py # Bridge-based bidirectional sync (adapter-agnostic) -│ ├── bridge_probe.py # Bridge detection and auto-generation -│ ├── bridge_watch.py # Bridge-based watch mode -│ ├── repository_sync.py # Repository sync -│ └── watcher.py # Watch mode for continuous sync -├── models/ # Pydantic data models -│ ├── plan.py # Plan bundle models (legacy compatibility) -│ ├── project.py # Project bundle models (modular structure) -│ ├── change.py # Change tracking models (v1.1 schema) -│ ├── bridge.py # Bridge configuration models -│ ├── protocol.py # Protocol FSM models -│ └── deviation.py # Deviation models -├── validators/ # Schema validators -│ ├── schema.py # Schema validation -│ ├── contract.py # Contract validation -│ └── fsm.py # FSM validation -├── generators/ # Code generators -│ ├── protocol.py # Protocol generator -│ ├── plan.py # Plan generator -│ └── report.py # Report generator -├── utils/ # CLI utilities -│ ├── console.py # Rich console output -│ ├── git.py # Git operations -│ └── yaml_utils.py # YAML helpers -├── analyzers/ # Code analysis engines -│ ├── code_analyzer.py # AST+Semgrep hybrid analysis -│ ├── graph_analyzer.py # Dependency graph analysis -│ └── relationship_mapper.py # Relationship extraction -└── common/ # Shared utilities - ├── logger_setup.py # Logging infrastructure - ├── logging_utils.py # Logging helpers - ├── text_utils.py # Text utilities - └── utils.py # File/JSON utilities -``` - -## Analysis Components - -### AST+Semgrep Hybrid Analysis - -The `CodeAnalyzer` uses a hybrid approach combining AST parsing with Semgrep pattern detection: - -**AST Analysis** (Core): - -- Structural code analysis (classes, methods, imports) -- Type hint extraction -- Parallelized processing (2-4x speedup) -- Interruptible with Ctrl+C (graceful cancellation) - -**Recent Improvements** (2025-11-30): - -- ✅ **Bundle Size Optimization**: 81% reduction (18MB → 3.4MB, 5.3x smaller) via test pattern extraction to OpenAPI contracts -- ✅ **Acceptance Criteria Limiting**: 1-3 high-level items per story (detailed examples in contract files) -- ✅ **KeyboardInterrupt Handling**: All parallel operations support immediate cancellation -- ✅ **Semgrep Detection Fix**: Increased timeout from 1s to 5s for reliable detection -- Async pattern detection -- Theme detection from imports - -**Semgrep Pattern Detection** (Enhancement): - -- **API Endpoint Detection**: FastAPI, Flask, Express, Gin routes -- **Database Model Detection**: SQLAlchemy, Django, Pydantic, TortoiseORM, Peewee -- **CRUD Operation Detection**: Function naming patterns (create_*, get_*, update_*, delete_*) -- **Authentication Patterns**: Auth decorators, permission checks -- **Code Quality Assessment**: Anti-patterns, code smells, security vulnerabilities -- **Framework Patterns**: Async/await, context managers, type hints, configuration - -**Plugin Status**: The import command displays plugin status (AST Analysis, Semgrep Pattern Detection, Dependency Graph Analysis) showing which tools are enabled and used. - -**Benefits**: - -- Framework-aware feature detection -- Enhanced confidence scores (AST + Semgrep evidence) -- Code quality maturity assessment -- Multi-language ready (TypeScript, JavaScript, Go patterns available) - -## Testing Strategy - -### Contract-First Testing - -SpecFact CLI uses **contracts as specifications**: - -1. **Runtime Contracts** - `@icontract` decorators on public APIs -2. **Type Validation** - `@beartype` for runtime type checking -3. **Contract Exploration** - CrossHair to discover counterexamples -4. **Scenario Tests** - Focus on business workflows - -### Test Pyramid - -```ascii - /\ - / \ E2E Tests (Scenario) - /____\ - / \ Integration Tests (Contract) - /________\ - / \ Unit Tests (Property) - /____________\ -``` - -### Running Tests - -```bash -# Contract validation -hatch run contract-test-contracts - -# Contract exploration (CrossHair) -hatch run contract-test-exploration - -# Scenario tests -hatch run contract-test-scenarios - -# E2E tests -hatch run contract-test-e2e - -# Full test suite -hatch run contract-test-full -``` - -## Bridge Adapter Interface - -**Introduced in v0.21.1**: The `BridgeAdapter` interface has been extended with change tracking methods to support OpenSpec and other tools that track specification changes. - -### Core Interface Methods - -All adapters must implement these base methods: - -```python -from abc import ABC, abstractmethod -from pathlib import Path -from specfact_cli.models.bridge import BridgeConfig -from specfact_cli.models.change import ChangeProposal, ChangeTracking +## Operational Modes -class BridgeAdapter(ABC): - @abstractmethod - def detect(self, repo_path: Path, bridge_config: BridgeConfig | None = None) -> bool: - """Detect if adapter applies to repository.""" +Mode detection currently exists via `src/specfact_cli/modes/detector.py` and CLI flags. - @abstractmethod - def import_artifact(self, artifact_key: str, artifact_path: Path | dict, project_bundle: Any, bridge_config: BridgeConfig | None = None) -> None: - """Import artifact from tool format to SpecFact.""" +Detection order: - @abstractmethod - def export_artifact(self, artifact_key: str, artifact_data: Any, bridge_config: BridgeConfig | None = None) -> Path | dict: - """Export artifact from SpecFact to tool format.""" +1. Explicit `--mode` +2. Environment/context-based detection +3. Fallback to `cicd` - @abstractmethod - def generate_bridge_config(self, repo_path: Path) -> BridgeConfig: - """Generate bridge configuration for adapter.""" - - @abstractmethod - def get_capabilities(self, repo_path: Path, bridge_config: BridgeConfig | None = None) -> ToolCapabilities: - """Get adapter capabilities (sync modes, layout, etc.).""" -``` +Current implementation note: -### Change Tracking Methods (v0.21.1+) +- Mode **detection is implemented**. +- Some advanced mode-specific behavior remains roadmap/planned and is tracked in OpenSpec. +- See [Implementation Status](../architecture/implementation-status.md) for implemented vs planned details. -**Introduced in v0.21.1**: Adapters that support change tracking must implement these additional methods: +## Adapter Architecture -```python -@abstractmethod -def load_change_tracking( - self, bundle_dir: Path, bridge_config: BridgeConfig | None = None -) -> ChangeTracking | None: - """ - Load change tracking from adapter-specific storage location. - - Args: - bundle_dir: Path to bundle directory (.specfact/projects/<bundle-name>/) - bridge_config: Bridge configuration (may contain external_base_path for cross-repo) - - Returns: - ChangeTracking instance or None if not available - """ +Bridge integration is defined by `BridgeAdapter` in `src/specfact_cli/adapters/base.py`. -@abstractmethod -def save_change_tracking( - self, bundle_dir: Path, change_tracking: ChangeTracking, bridge_config: BridgeConfig | None = None -) -> None: - """ - Save change tracking to adapter-specific storage location. - - Args: - bundle_dir: Path to bundle directory - change_tracking: ChangeTracking instance to save - bridge_config: Bridge configuration (may contain external_base_path for cross-repo) - """ +### Required adapter interface -@abstractmethod -def load_change_proposal( - self, bundle_dir: Path, change_name: str, bridge_config: BridgeConfig | None = None -) -> ChangeProposal | None: - """ - Load change proposal from adapter-specific storage location. - - Args: - bundle_dir: Path to bundle directory - change_name: Change identifier (e.g., 'add-user-feedback') - bridge_config: Bridge configuration (may contain external_base_path for cross-repo) - - Returns: - ChangeProposal instance or None if not found - """ +All adapters implement: -@abstractmethod -def save_change_proposal( - self, bundle_dir: Path, proposal: ChangeProposal, bridge_config: BridgeConfig | None = None -) -> None: - """ - Save change proposal to adapter-specific storage location. - - Args: - bundle_dir: Path to bundle directory - proposal: ChangeProposal instance to save - bridge_config: Bridge configuration (may contain external_base_path for cross-repo) - """ -``` +- `detect(repo_path, bridge_config=None) -> bool` +- `get_capabilities(repo_path, bridge_config=None) -> ToolCapabilities` +- `import_artifact(artifact_key, artifact_path, project_bundle, bridge_config=None) -> None` +- `export_artifact(artifact_key, artifact_data, bridge_config=None) -> Path | dict` +- `generate_bridge_config(repo_path) -> BridgeConfig` +- `load_change_tracking(bundle_dir, bridge_config=None) -> ChangeTracking | None` +- `save_change_tracking(bundle_dir, change_tracking, bridge_config=None) -> None` +- `load_change_proposal(bundle_dir, change_name, bridge_config=None) -> ChangeProposal | None` +- `save_change_proposal(bundle_dir, proposal, bridge_config=None) -> None` -### Cross-Repository Support +### Tool capabilities and adapter selection -Adapters must support loading change tracking from external repositories: +`ToolCapabilities` is defined in `src/specfact_cli/models/capabilities.py` and includes: -- **`external_base_path`**: If `bridge_config.external_base_path` is set, adapters should load change tracking from that location instead of `bundle_dir` -- **Tool-Specific Storage**: Each adapter determines where change tracking is stored (e.g., OpenSpec uses `openspec/changes/`, Linear uses API) -- **Source Tracking**: Tool-specific metadata (issue IDs, file paths, etc.) stored in `source_tracking` field +- `tool`, `version`, `layout`, `specs_dir` +- `supported_sync_modes` +- `has_external_config`, `has_custom_hooks` -### Implementation Examples +`BridgeProbe`/sync flows use detection and capabilities to select adapters and choose sync behavior safely. -**OpenSpec Adapter** (v0.21.1+): +See also: +- [Adapter Development Guide](../guides/adapter-development.md) +- [Bridge Registry](bridge-registry.md) -The OpenSpec adapter provides read-only sync (Phase 1) for importing OpenSpec specifications and change tracking: +## Change Tracking and Protocol Scope -```python -class OpenSpecAdapter(BridgeAdapter): - def detect(self, repo_path: Path, bridge_config: BridgeConfig | None = None) -> bool: - # Detects openspec/config.yaml (OPSX), openspec/project.md (legacy), or openspec/specs/ - base_path = bridge_config.external_base_path if bridge_config and bridge_config.external_base_path else repo_path - openspec = base_path / "openspec" - return (openspec / "config.yaml").exists() or (openspec / "project.md").exists() or (openspec / "specs").exists() - - def get_capabilities(self, repo_path: Path, bridge_config: BridgeConfig | None = None) -> ToolCapabilities: - # Returns OpenSpec-specific capabilities - return ToolCapabilities(tool="openspec", layout="openspec", specs_dir="openspec/specs") - - def load_change_tracking(self, bundle_dir: Path, bridge_config: BridgeConfig | None = None) -> ChangeTracking | None: - # Load from openspec/changes/ directory - base_path = bridge_config.external_base_path if bridge_config and bridge_config.external_base_path else bundle_dir.parent.parent.parent - changes_dir = base_path / "openspec" / "changes" - # Parse change proposals and feature deltas - return ChangeTracking(...) - - def import_artifact(self, artifact_key: str, artifact_path: Path, project_bundle: Any, bridge_config: BridgeConfig | None = None) -> None: - # Supports: specification, project_context, change_proposal, change_spec_delta - # Parses OpenSpec markdown and updates project bundle - pass -``` +- Change tracking models exist and are used by adapter flows. +- Support is adapter-dependent and not uniformly complete across all external systems. +- Protocol specs exist as models/spec artifacts; full FSM execution and guard engine behavior is not yet fully implemented. -**Key Features:** -- **Read-only sync (Phase 1)**: Import only, export methods raise `NotImplementedError` -- **Cross-repository support**: Uses `external_base_path` for OpenSpec in different repositories -- **Change tracking**: Loads change proposals and feature deltas from `openspec/changes/` -- **Source tracking**: Stores OpenSpec paths in `source_tracking.source_metadata` +Status and roadmap references: -**SpecKit Adapter** (v0.22.0+): +- [Implementation Status](../architecture/implementation-status.md) +- OpenSpec change `architecture-01-solution-layer` -The SpecKit adapter provides full bidirectional sync for Spec-Kit markdown artifacts: +## Error Handling Conventions -```python -class SpecKitAdapter(BridgeAdapter): - def detect(self, repo_path: Path, bridge_config: BridgeConfig | None = None) -> bool: - # Detects .specify/ directory or specs/ directory (classic/modern layouts) - base_path = bridge_config.external_base_path if bridge_config and bridge_config.external_base_path else repo_path - return (base_path / ".specify").exists() or (base_path / "specs").exists() or (base_path / "docs" / "specs").exists() - - def get_capabilities(self, repo_path: Path, bridge_config: BridgeConfig | None = None) -> ToolCapabilities: - # Returns Spec-Kit-specific capabilities (bidirectional sync supported) - return ToolCapabilities( - tool="speckit", - layout="classic" or "modern", - specs_dir="specs" or "docs/specs", - supported_sync_modes=["bidirectional", "unidirectional"] - ) - - def import_artifact(self, artifact_key: str, artifact_path: Path, project_bundle: Any, bridge_config: BridgeConfig | None = None) -> None: - # Supports: specification, plan, tasks, constitution - # Parses Spec-Kit markdown and updates project bundle - pass - - def export_artifact(self, artifact_key: str, artifact_data: Any, bridge_config: BridgeConfig | None = None) -> Path: - # Supports: specification, plan, tasks, constitution - # Exports SpecFact models to Spec-Kit markdown format - pass -``` +Architecture-level error handling conventions: -**Key Features:** -- **Bidirectional sync**: Full import and export support for Spec-Kit artifacts -- **Classic and modern layouts**: Supports both `specs/` (classic) and `docs/specs/` (modern) directory structures -- **Public helper methods**: `discover_features()`, `detect_changes()`, `detect_conflicts()`, `export_bundle()` for advanced operations -- **Contract-first**: All methods have `@beartype`, `@require`, and `@ensure` decorators for runtime validation -- **Adapter registry**: Registered in `AdapterRegistry` for plugin-based architecture +- Raise specific exceptions (`ValueError`, adapter-specific runtime errors) with actionable context. +- Validate public API inputs using `@icontract` and `@beartype`. +- Use bridge logger via `specfact_cli.common.get_bridge_logger` for non-fatal lifecycle and adapter failures. +- Keep module lifecycle resilient: malformed optional metadata is skipped with warnings instead of crashing the CLI. +- In command paths, prefer structured and user-oriented error output over raw trace text. -**GitHub Adapter** (export-only): +## Component Overview -```python -class GitHubAdapter(BridgeAdapter): - def load_change_tracking(self, bundle_dir: Path, bridge_config: BridgeConfig | None = None) -> ChangeTracking | None: - # GitHub adapter is export-only (OpenSpec → GitHub Issues) - return None - - def save_change_tracking(self, bundle_dir: Path, change_tracking: ChangeTracking, bridge_config: BridgeConfig | None = None) -> None: - # Export change proposals to GitHub Issues - pass - - def export_artifact(self, artifact_key: str, artifact_data: Any, bridge_config: BridgeConfig | None = None) -> dict: - # Supports artifact keys: change_proposal, change_status, change_proposal_update, code_change_progress - if artifact_key == "code_change_progress": - # Add progress comment to existing GitHub issue based on code changes - return self._add_progress_comment(artifact_data, ...) +```mermaid +graph TD + CLI[CLI Entry] --> Registry[CommandRegistry] + Registry --> Discovery[Module Package Discovery] + Registry --> LazyLoad[Lazy Module Loading] + LazyLoad --> Modules[Module Commands] + Modules --> Adapters[Bridge Adapters] + Modules --> Analysis[Analyzers and Validators] + Modules --> Models[ProjectBundle and Change Models] ``` -### Schema Version Handling - -- **v1.0 Bundles**: `load_change_tracking()` returns `None` (backward compatible) -- **v1.1 Bundles**: Bundle loader calls `load_change_tracking()` via adapter if schema version is 1.1+ -- **Automatic Detection**: `ProjectBundle.load_from_directory()` checks schema version before loading change tracking - -## Dependencies - -### Core - -- **typer** - CLI framework -- **pydantic** - Data validation -- **rich** - Terminal output -- **networkx** - Graph analysis -- **ruamel.yaml** - YAML processing - -### Validation - -- **icontract** - Runtime contracts -- **beartype** - Type checking -- **crosshair-tool** - Contract exploration -- **hypothesis** - Property-based testing +## Terminology -### Development +- `ProjectBundle`: canonical persisted bundle under `.specfact/projects/<bundle>/`. +- `PlanBundle`: legacy conceptual model references in older docs/code paths. -- **hatch** - Build and environment management -- **basedpyright** - Type checking -- **ruff** - Linting -- **pytest** - Test runner +Use `ProjectBundle` for current architecture descriptions unless explicitly discussing legacy compatibility. -See [pyproject.toml](../../pyproject.toml) for complete dependency list. +## Architecture Decisions -## Design Principles +- ADR index: [Architecture ADRs](../architecture/adr/README.md) +- Initial ADR: [ADR-0001 Module-First Architecture](../architecture/adr/0001-module-first-architecture.md) -1. **Contract-Driven** - Contracts are specifications -2. **Evidence-Based** - Claims require reproducible evidence -3. **Offline-First** - No SaaS required for core functionality -4. **Progressive Enhancement** - Shadow → Warn → Block -5. **Fast Feedback** - < 90s CI overhead -6. **Escape Hatches** - Override mechanisms for emergencies -7. **Quality-First** - TDD with quality gates from day 1 -8. **Dual-Mode Operation** - CI/CD automation or CoPilot-enabled assistance -9. **Bidirectional Sync** - Consistent change management across tools - -## Performance Characteristics - -| Operation | Typical Time | Budget | -|-----------|--------------|--------| -| Plan validation | < 1s | 5s | -| Contract exploration | 10-30s | 60s | -| Full repro suite | 60-90s | 120s | -| Brownfield analysis | 2-5 min | 300s | - -## Security Considerations - -1. **No external dependencies** for core validation -2. **Secure defaults** - Shadow mode by default -3. **No data exfiltration** - Works offline -4. **Contract provenance** - SHA256 hashes in reports -5. **Reproducible builds** - Deterministic outputs - ---- +## Related Docs -See [Commands](commands.md) for command reference and [Technical Deep Dives](../technical/README.md) for testing procedures. +- [Architecture Docs Index](../architecture/README.md) +- [Implementation Status](../architecture/implementation-status.md) +- [Directory Structure](directory-structure.md) diff --git a/docs/reference/commands.md b/docs/reference/commands.md index 8a4c008f..91327636 100644 --- a/docs/reference/commands.md +++ b/docs/reference/commands.md @@ -3797,14 +3797,14 @@ specfact sync repository --repo . --watch --interval 2 --confidence 0.7 ### `backlog` - Backlog Refinement and Template Management -Backlog refinement commands for AI-assisted template-driven refinement of DevOps backlog items. +Backlog refinement and dependency commands grouped under the `specfact backlog` command family. **Command Topology (recommended):** - `specfact backlog ceremony standup ...` - `specfact backlog ceremony refinement ...` - `specfact backlog delta status|impact|cost-estimate|rollback-analysis ...` -- `specfact backlog analyze-deps|trace-impact|sync|verify-readiness|diff|promote|generate-release-notes ...` +- `specfact backlog add|analyze-deps|trace-impact|sync|verify-readiness|diff|promote|generate-release-notes ...` Compatibility commands `specfact backlog daily` and `specfact backlog refine` remain available, but ceremony entrypoints are preferred for discoverability. @@ -3855,6 +3855,33 @@ specfact backlog delta cost-estimate --project-id 1 --adapter github specfact backlog delta rollback-analysis --project-id 1 --adapter github ``` +#### `backlog add` + +Create a backlog item with optional parent hierarchy validation and DoR checks. + +```bash +specfact backlog add --project-id <id> [OPTIONS] +``` + +**Common options:** + +- `--adapter ADAPTER` - Backlog adapter id (default: `github`) +- `--template TEMPLATE` - Mapping template (default is adapter-aware: `github_projects` for GitHub, `ado_scrum` for ADO) +- `--type TYPE` - Child type to create (for example `story`, `task`, `feature`) +- `--parent REF` - Optional parent reference (id/key/title); validated against graph +- `--title TEXT` - Issue title +- `--body TEXT` - Issue description/body +- `--acceptance-criteria TEXT` - Acceptance criteria content (also supported via interactive multiline input) +- `--priority TEXT` - Optional priority value (for example `1`, `high`, `P1`) +- `--story-points VALUE` - Optional story points (integer or float) +- `--sprint TEXT` - Optional sprint/iteration path assignment +- `--body-end-marker TEXT` - Sentinel marker for multiline input (default: `::END::`) +- `--description-format TEXT` - Description rendering mode (`markdown` or `classic`) +- `--non-interactive` - Fail fast on missing required inputs instead of prompting +- `--check-dor` - Validate draft against `.specfact/dor.yaml` before create +- `--repo-path PATH` - Repository path used to load DoR configuration (default `.`) +- `--custom-config PATH` - Optional config containing `creation_hierarchy` + #### `backlog analyze-deps` Build and analyze backlog dependency graph for a provider project. @@ -3868,7 +3895,7 @@ specfact backlog analyze-deps --project-id <id> [OPTIONS] **Migration note:** `specfact module` is the canonical lifecycle command group. Init lifecycle flags remain supported as compatibility aliases. - `--adapter ADAPTER` - Backlog adapter id (default: `github`) -- `--template TEMPLATE` - Mapping template (default: `github_projects`) +- `--template TEMPLATE` - Mapping template (default is adapter-aware: `github_projects` for GitHub, `ado_scrum` for ADO) - `--custom-config PATH` - Optional custom mapping YAML - `--output PATH` - Optional markdown summary output - `--json-export PATH` - Optional graph JSON export @@ -3886,7 +3913,7 @@ specfact backlog trace-impact <item-id> --project-id <id> [OPTIONS] **Migration note:** `specfact module` is the canonical lifecycle command group. Init lifecycle flags remain supported as compatibility aliases. - `--adapter ADAPTER` - Backlog adapter id (default: `github`) -- `--template TEMPLATE` - Mapping template (default: `github_projects`) +- `--template TEMPLATE` - Mapping template (default is adapter-aware: `github_projects` for GitHub, `ado_scrum` for ADO) - `--custom-config PATH` - Optional custom mapping YAML #### `backlog verify-readiness` @@ -3902,7 +3929,7 @@ specfact backlog verify-readiness --project-id <id> [OPTIONS] **Migration note:** `specfact module` is the canonical lifecycle command group. Init lifecycle flags remain supported as compatibility aliases. - `--adapter ADAPTER` - Backlog adapter id (default: `github`) -- `--template TEMPLATE` - Mapping template (default: `github_projects`) +- `--template TEMPLATE` - Mapping template (default is adapter-aware: `github_projects` for GitHub, `ado_scrum` for ADO) - `--target-items CSV` - Optional comma-separated subset of item IDs #### `backlog diff` diff --git a/modules/backlog-core/src/backlog_core/adapters/backlog_protocol.py b/modules/backlog-core/src/backlog_core/adapters/backlog_protocol.py index 556d4f37..0b321515 100644 --- a/modules/backlog-core/src/backlog_core/adapters/backlog_protocol.py +++ b/modules/backlog-core/src/backlog_core/adapters/backlog_protocol.py @@ -26,6 +26,13 @@ def fetch_all_issues(self, project_id: str, filters: dict[str, Any] | None = Non def fetch_relationships(self, project_id: str) -> list[dict[str, Any]]: """Fetch all issue/work-item relationships for a project.""" + @beartype + @require(lambda project_id: project_id.strip() != "", "project_id must be non-empty") + @require(lambda payload: isinstance(payload, dict), "payload must be dict") + @ensure(lambda result: isinstance(result, dict), "create_issue must return dict") + def create_issue(self, project_id: str, payload: dict[str, Any]) -> dict[str, Any]: + """Create a provider issue/work item and return id/key/url metadata.""" + @beartype @require(lambda adapter: adapter is not None, "adapter must be provided") @@ -35,7 +42,7 @@ def require_backlog_graph_protocol(adapter: Any) -> BacklogGraphProtocol: if not isinstance(adapter, BacklogGraphProtocol): msg = ( f"Adapter '{type(adapter).__name__}' does not support BacklogGraphProtocol. " - "Expected methods: fetch_all_issues(project_id, filters), fetch_relationships(project_id)." + "Expected methods: fetch_all_issues(project_id, filters), fetch_relationships(project_id), create_issue(project_id, payload)." ) raise TypeError(msg) return adapter diff --git a/modules/backlog-core/src/backlog_core/commands/__init__.py b/modules/backlog-core/src/backlog_core/commands/__init__.py index 8b57b490..054d093f 100644 --- a/modules/backlog-core/src/backlog_core/commands/__init__.py +++ b/modules/backlog-core/src/backlog_core/commands/__init__.py @@ -3,6 +3,7 @@ from specfact_cli.contracts.module_interface import ModuleIOContract from specfact_cli.modules import module_io_shim +from .add import add from .analyze_deps import analyze_deps, trace_impact from .diff import diff from .promote import promote @@ -20,6 +21,7 @@ __all__ = [ "BacklogGraphToPlanBundle", + "add", "analyze_deps", "commands_interface", "compute_delta", diff --git a/modules/backlog-core/src/backlog_core/commands/add.py b/modules/backlog-core/src/backlog_core/commands/add.py new file mode 100644 index 00000000..0cf71ee3 --- /dev/null +++ b/modules/backlog-core/src/backlog_core/commands/add.py @@ -0,0 +1,655 @@ +"""Backlog add command.""" + +from __future__ import annotations + +from pathlib import Path +from typing import Annotated, Any + +import requests +import typer +import yaml +from beartype import beartype +from icontract import require + +from backlog_core.adapters.backlog_protocol import require_backlog_graph_protocol +from backlog_core.graph.builder import BacklogGraphBuilder +from backlog_core.graph.config_schema import load_backlog_config_from_backlog_file, load_backlog_config_from_spec +from specfact_cli.adapters.registry import AdapterRegistry +from specfact_cli.models.dor_config import DefinitionOfReady +from specfact_cli.utils.prompts import print_error, print_info, print_success, print_warning, prompt_text + + +DEFAULT_CREATION_HIERARCHY: dict[str, list[str]] = { + "epic": [], + "feature": ["epic"], + "story": ["feature", "epic"], + "task": ["story", "feature"], + "bug": ["story", "feature", "epic"], + "spike": ["feature", "epic"], + "custom": ["epic", "feature", "story"], +} + +STORY_LIKE_TYPES: set[str] = {"story", "feature", "task", "bug"} + +DEFAULT_CUSTOM_MAPPING_FILES: dict[str, Path] = { + "ado": Path(".specfact/templates/backlog/field_mappings/ado_custom.yaml"), + "github": Path(".specfact/templates/backlog/field_mappings/github_custom.yaml"), +} + + +@beartype +def _prompt_multiline_text(field_label: str, end_marker: str) -> str: + """Read multiline text until a sentinel marker line is entered.""" + marker = end_marker.strip() or "::END::" + print_info(f"{field_label} (multiline). End input with '{marker}' on a new line.") + lines: list[str] = [] + while True: + try: + line = input() + except EOFError: + break + if line.strip() == marker: + break + lines.append(line) + return "\n".join(lines).strip() + + +@beartype +def _select_with_fallback(message: str, choices: list[str], default: str | None = None) -> str: + """Use questionary select when available, otherwise plain text prompt.""" + normalized = [choice for choice in choices if choice] + if not normalized: + return (default or "").strip() + + try: + import questionary # type: ignore[reportMissingImports] + + selected = questionary.select(message, choices=normalized, default=default).ask() + if isinstance(selected, str) and selected.strip(): + return selected.strip() + except Exception: + # If questionary is unavailable or fails, continue with plain-text prompt fallback. + pass + + print_info(f"{message}: {', '.join(normalized)}") + fallback_default = default if default in normalized else normalized[0] + return prompt_text(message, default=fallback_default) + + +@beartype +def _interactive_sprint_selection(adapter_name: str, adapter_instance: Any, project_id: str) -> str | None: + """Prompt for sprint/iteration selection (provider-aware).""" + adapter_lower = adapter_name.strip().lower() + + if adapter_lower != "ado": + raw = prompt_text("Sprint/iteration (optional)", default="", required=False).strip() + return raw or None + + current_iteration: str | None = None + list_iterations: list[str] = [] + + restore_org = getattr(adapter_instance, "org", None) + restore_project = getattr(adapter_instance, "project", None) + resolver = getattr(adapter_instance, "_resolve_graph_project_context", None) + if callable(resolver): + try: + resolved_org, resolved_project = resolver(project_id) + if hasattr(adapter_instance, "org"): + adapter_instance.org = resolved_org + if hasattr(adapter_instance, "project"): + adapter_instance.project = resolved_project + except Exception: + # Best-effort org/project resolution only; keep existing context on any failure. + pass + + get_current = getattr(adapter_instance, "_get_current_iteration", None) + if callable(get_current): + try: + resolved = get_current() + if isinstance(resolved, str) and resolved.strip(): + current_iteration = resolved.strip() + except Exception: + current_iteration = None + + get_list = getattr(adapter_instance, "_list_available_iterations", None) + if callable(get_list): + try: + candidates = get_list() + if isinstance(candidates, list): + list_iterations = [str(item).strip() for item in candidates if str(item).strip()] + except Exception: + list_iterations = [] + + if hasattr(adapter_instance, "org"): + adapter_instance.org = restore_org + if hasattr(adapter_instance, "project"): + adapter_instance.project = restore_project + + options = ["(skip sprint/iteration)"] + if current_iteration: + options.append(f"current: {current_iteration}") + options.extend([iteration for iteration in list_iterations if iteration != current_iteration]) + options.append("manual entry") + + default = f"current: {current_iteration}" if current_iteration else "manual entry" + selected = _select_with_fallback("Select sprint/iteration", options, default=default) + + if selected == "(skip sprint/iteration)": + return None + if selected.startswith("current: "): + return selected.removeprefix("current: ").strip() or None + if selected == "manual entry": + manual = prompt_text("Enter sprint/iteration path", default="", required=False).strip() + return manual or None + return selected.strip() or None + + +@beartype +@require(lambda value: isinstance(value, str), "value must be a string") +def _normalize_type(value: str) -> str: + return value.strip().lower().replace("_", " ").replace("-", " ") + + +@beartype +def _resolve_default_template(adapter_name: str, template: str | None) -> str: + if template and template.strip(): + return template.strip() + if adapter_name.strip().lower() == "ado": + return "ado_scrum" + return "github_projects" + + +@beartype +def _extract_item_type(item: Any) -> str: + """Best-effort normalized item type from graph item and raw payload.""" + value = getattr(item, "type", None) + enum_value = getattr(value, "value", None) + if isinstance(enum_value, str) and enum_value.strip(): + return _normalize_type(enum_value) + if isinstance(value, str) and value.strip(): + normalized = _normalize_type(value) + if normalized.startswith("itemtype."): + normalized = normalized.split(".", 1)[1] + if normalized: + return normalized + + inferred = getattr(item, "inferred_type", None) + inferred_value = getattr(inferred, "value", None) + if isinstance(inferred_value, str) and inferred_value.strip(): + return _normalize_type(inferred_value) + + raw_data = getattr(item, "raw_data", {}) + if isinstance(raw_data, dict): + fields = raw_data.get("fields") if isinstance(raw_data.get("fields"), dict) else {} + candidates = [ + raw_data.get("type"), + raw_data.get("work_item_type"), + fields.get("System.WorkItemType") if isinstance(fields, dict) else None, + raw_data.get("issue_type"), + ] + for candidate in candidates: + if isinstance(candidate, str) and candidate.strip(): + normalized = _normalize_type(candidate) + aliases = { + "user story": "story", + "product backlog item": "story", + "pb i": "story", + } + return aliases.get(normalized, normalized) + + return "custom" + + +@beartype +def _load_custom_config(custom_config: Path | None) -> dict[str, Any]: + if custom_config is None: + return {} + if not custom_config.exists(): + raise ValueError(f"Custom config file not found: {custom_config}") + loaded = yaml.safe_load(custom_config.read_text(encoding="utf-8")) + return loaded if isinstance(loaded, dict) else {} + + +@beartype +def _resolve_custom_config_path(adapter_name: str, custom_config: Path | None) -> Path | None: + """Resolve custom mapping file path with adapter-specific default fallback.""" + if custom_config is not None: + return custom_config + candidate = DEFAULT_CUSTOM_MAPPING_FILES.get(adapter_name.strip().lower()) + if candidate is not None and candidate.exists(): + return candidate + return None + + +@beartype +def _load_template_config(template: str) -> dict[str, Any]: + module_root = Path(__file__).resolve().parents[1] + template_file = module_root / "resources" / "backlog-templates" / f"{template}.yaml" + shared_template_file = ( + Path(__file__).resolve().parents[5] + / "src" + / "specfact_cli" + / "resources" + / "backlog-templates" + / f"{template}.yaml" + ) + + for candidate in (template_file, shared_template_file): + if candidate.exists(): + loaded = yaml.safe_load(candidate.read_text(encoding="utf-8")) + if isinstance(loaded, dict): + return loaded + return {} + + +@beartype +def _derive_creation_hierarchy(template_payload: dict[str, Any], custom_config: dict[str, Any]) -> dict[str, list[str]]: + custom_hierarchy = custom_config.get("creation_hierarchy") + if isinstance(custom_hierarchy, dict): + return { + _normalize_type(str(child)): [_normalize_type(str(parent)) for parent in parents] + for child, parents in custom_hierarchy.items() + if isinstance(parents, list) + } + + template_hierarchy = template_payload.get("creation_hierarchy") + if isinstance(template_hierarchy, dict): + return { + _normalize_type(str(child)): [_normalize_type(str(parent)) for parent in parents] + for child, parents in template_hierarchy.items() + if isinstance(parents, list) + } + + return DEFAULT_CREATION_HIERARCHY + + +@beartype +def _resolve_provider_fields_for_create( + adapter_name: str, + template_payload: dict[str, Any], + custom_config: dict[str, Any], + repo_path: Path, +) -> dict[str, Any] | None: + """Resolve provider-specific create payload fields from template/custom config.""" + if adapter_name.strip().lower() != "github": + return None + + def _extract_github_project_v2(source: dict[str, Any]) -> dict[str, Any]: + provider_fields = source.get("provider_fields") + if isinstance(provider_fields, dict): + candidate = provider_fields.get("github_project_v2") + if isinstance(candidate, dict): + return dict(candidate) + fallback = source.get("github_project_v2") + if isinstance(fallback, dict): + return dict(fallback) + return {} + + def _extract_github_issue_types(source: dict[str, Any]) -> dict[str, Any]: + provider_fields = source.get("provider_fields") + if isinstance(provider_fields, dict): + candidate = provider_fields.get("github_issue_types") + if isinstance(candidate, dict): + return dict(candidate) + fallback = source.get("github_issue_types") + if isinstance(fallback, dict): + return dict(fallback) + return {} + + spec_settings: dict[str, Any] = {} + backlog_cfg = load_backlog_config_from_backlog_file(repo_path / ".specfact" / "backlog-config.yaml") + spec_config = backlog_cfg or load_backlog_config_from_spec(repo_path / ".specfact" / "spec.yaml") + if spec_config is not None: + github_provider = spec_config.providers.get("github") + if github_provider is not None and isinstance(github_provider.settings, dict): + spec_settings = dict(github_provider.settings) + + template_cfg = _extract_github_project_v2(template_payload) + spec_cfg = _extract_github_project_v2(spec_settings) + custom_cfg = _extract_github_project_v2(custom_config) + + template_issue_types = _extract_github_issue_types(template_payload) + spec_issue_types = _extract_github_issue_types(spec_settings) + custom_issue_types = _extract_github_issue_types(custom_config) + + result: dict[str, Any] = {} + + if template_cfg or spec_cfg or custom_cfg: + template_option_ids = template_cfg.get("type_option_ids") + spec_option_ids = spec_cfg.get("type_option_ids") + custom_option_ids = custom_cfg.get("type_option_ids") + merged_option_ids: dict[str, Any] = {} + if isinstance(template_option_ids, dict): + merged_option_ids.update(template_option_ids) + if isinstance(spec_option_ids, dict): + merged_option_ids.update(spec_option_ids) + if isinstance(custom_option_ids, dict): + merged_option_ids.update(custom_option_ids) + + merged_cfg = {**template_cfg, **spec_cfg, **custom_cfg} + if merged_option_ids: + merged_cfg["type_option_ids"] = merged_option_ids + if merged_cfg: + result["github_project_v2"] = merged_cfg + + if template_issue_types or spec_issue_types or custom_issue_types: + template_type_ids = template_issue_types.get("type_ids") + spec_type_ids = spec_issue_types.get("type_ids") + custom_type_ids = custom_issue_types.get("type_ids") + merged_type_ids: dict[str, Any] = {} + if isinstance(template_type_ids, dict): + merged_type_ids.update(template_type_ids) + if isinstance(spec_type_ids, dict): + merged_type_ids.update(spec_type_ids) + if isinstance(custom_type_ids, dict): + merged_type_ids.update(custom_type_ids) + + issue_type_cfg = {**template_issue_types, **spec_issue_types, **custom_issue_types} + if merged_type_ids: + issue_type_cfg["type_ids"] = merged_type_ids + if issue_type_cfg: + result["github_issue_types"] = issue_type_cfg + + return result or None + + +@beartype +def _has_github_repo_issue_type_mapping(provider_fields: dict[str, Any] | None, issue_type: str) -> bool: + """Return True when repository GitHub issue-type mapping metadata is available.""" + if not isinstance(provider_fields, dict): + return False + issue_cfg = provider_fields.get("github_issue_types") + if not isinstance(issue_cfg, dict): + return False + type_ids = issue_cfg.get("type_ids") + if not isinstance(type_ids, dict): + return False + mapped = str(type_ids.get(issue_type) or type_ids.get(issue_type.lower()) or "").strip() + return bool(mapped) + + +@beartype +def _resolve_parent_id(parent_ref: str, graph_items: dict[str, Any]) -> tuple[str | None, str | None]: + normalized_ref = parent_ref.strip().lower() + + for item_id, item in graph_items.items(): + key = str(getattr(item, "key", "") or "").lower() + title = str(getattr(item, "title", "") or "").lower() + if normalized_ref in {item_id.lower(), key, title}: + return item_id, _extract_item_type(item) + + return None, None + + +@beartype +def _validate_parent(child_type: str, parent_type: str, hierarchy: dict[str, list[str]]) -> bool: + allowed = hierarchy.get(child_type, []) + if not allowed: + return True + return parent_type in allowed + + +@beartype +def _choose_parent_interactively( + issue_type: str, + graph_items: dict[str, Any], + hierarchy: dict[str, list[str]], +) -> str | None: + """Interactively choose parent from existing hierarchy-compatible items.""" + add_parent_choice = _select_with_fallback("Add parent issue?", ["yes", "no"], default="yes") + if add_parent_choice.strip().lower() != "yes": + return None + + allowed = set(hierarchy.get(issue_type, [])) + all_candidates: list[tuple[str, str]] = [] + candidates: list[tuple[str, str]] = [] + for item_id, item in graph_items.items(): + parent_type = _extract_item_type(item) + key = str(getattr(item, "key", item_id) or item_id) + title = str(getattr(item, "title", "") or "") + label = f"{key} | {title} | type={parent_type}" if title else f"{key} | type={parent_type}" + all_candidates.append((label, item_id)) + if allowed and parent_type not in allowed: + continue + candidates.append((label, item_id)) + + if not candidates: + if all_candidates: + print_warning( + "No hierarchy-compatible parent candidates found from inferred types. " + "Showing all issues so you can choose a parent manually." + ) + candidates = all_candidates + else: + print_warning("No hierarchy-compatible parent candidates found. Continuing without parent.") + return None + + options = ["(no parent)"] + [label for label, _ in candidates] + default_option = options[1] if len(options) > 1 else options[0] + selected = _select_with_fallback("Select parent issue", options, default=default_option) + if selected == "(no parent)": + return None + + mapping = dict(candidates) + return mapping.get(selected) + + +@beartype +def _parse_story_points(raw_value: str | None) -> int | float | None: + if raw_value is None: + return None + stripped = raw_value.strip() + if not stripped: + return None + try: + if "." in stripped: + return float(stripped) + return int(stripped) + except ValueError: + print_warning(f"Invalid story points '{raw_value}', keeping as text") + return None + + +@beartype +def add( + project_id: Annotated[str, typer.Option("--project-id", help="Backlog project identifier")], + adapter: Annotated[str, typer.Option("--adapter", help="Adapter to use")] = "github", + template: Annotated[str | None, typer.Option("--template", help="Template name for mapping")] = None, + issue_type: Annotated[str | None, typer.Option("--type", help="Issue type (story/task/feature/...)")] = None, + parent: Annotated[str | None, typer.Option("--parent", help="Parent issue id/key/title")] = None, + title: Annotated[str | None, typer.Option("--title", help="Issue title")] = None, + body: Annotated[str | None, typer.Option("--body", help="Issue body/description")] = None, + acceptance_criteria: Annotated[ + str | None, + typer.Option("--acceptance-criteria", help="Acceptance criteria text (recommended for story-like items)"), + ] = None, + priority: Annotated[str | None, typer.Option("--priority", help="Priority value (for example 1, high, P1)")] = None, + story_points: Annotated[ + str | None, typer.Option("--story-points", help="Story points value (integer/float)") + ] = None, + sprint: Annotated[str | None, typer.Option("--sprint", help="Sprint/iteration assignment")] = None, + body_end_marker: Annotated[ + str, + typer.Option("--body-end-marker", help="End marker for interactive multiline input"), + ] = "::END::", + description_format: Annotated[ + str, + typer.Option("--description-format", help="Description format: markdown or classic"), + ] = "markdown", + non_interactive: Annotated[bool, typer.Option("--non-interactive", help="Disable prompts")] = False, + check_dor: Annotated[ + bool, typer.Option("--check-dor", help="Validate Definition of Ready before creation") + ] = False, + repo_path: Annotated[Path, typer.Option("--repo-path", help="Repository path for DoR config")] = Path("."), + custom_config: Annotated[ + Path | None, typer.Option("--custom-config", help="Path to custom hierarchy/config YAML") + ] = None, +) -> None: + """Create a backlog item with optional parent hierarchy validation and DoR checks.""" + adapter_instance = AdapterRegistry.get_adapter(adapter) + interactive_mode = not non_interactive + + if non_interactive: + missing = [ + name for name, value in {"type": issue_type, "title": title}.items() if not (value and value.strip()) + ] + if missing: + print_error(f"{', '.join(missing)} required in --non-interactive mode") + raise typer.Exit(code=1) + else: + issue_type_choices = sorted(set(DEFAULT_CREATION_HIERARCHY.keys())) + if not issue_type: + issue_type = _select_with_fallback("Select issue type", issue_type_choices, default="story") + if not title: + title = prompt_text("Issue title") + if body is None: + body = _prompt_multiline_text("Issue body", body_end_marker) + if sprint is None: + sprint = _interactive_sprint_selection(adapter, adapter_instance, project_id) + description_format = _select_with_fallback( + "Select description format", + ["markdown", "classic"], + default=description_format or "markdown", + ).lower() + + normalized_issue_type = _normalize_type(issue_type or "") + if normalized_issue_type in STORY_LIKE_TYPES and acceptance_criteria is None: + capture_ac = _select_with_fallback("Add acceptance criteria?", ["yes", "no"], default="yes") + if capture_ac.strip().lower() == "yes": + acceptance_criteria = _prompt_multiline_text("Acceptance criteria", body_end_marker) + + if priority is None: + priority_raw = prompt_text("Priority (optional)", default="", required=False).strip() + priority = priority_raw or None + + if story_points is None and normalized_issue_type in STORY_LIKE_TYPES: + story_points = prompt_text("Story points (optional)", default="", required=False).strip() or None + + assert issue_type is not None + assert title is not None + issue_type = _normalize_type(issue_type) + title = title.strip() + body = (body or "").strip() + acceptance_criteria = (acceptance_criteria or "").strip() or None + priority = (priority or "").strip() or None + + description_format = (description_format or "markdown").strip().lower() + if description_format not in {"markdown", "classic"}: + print_error("description-format must be one of: markdown, classic") + raise typer.Exit(code=1) + + parsed_story_points = _parse_story_points(story_points) + + graph_adapter = require_backlog_graph_protocol(adapter_instance) + + template = _resolve_default_template(adapter, template) + print_info("Input captured. Preparing backlog context and validations before create...") + + resolved_custom_config = _resolve_custom_config_path(adapter, custom_config) + custom = _load_custom_config(resolved_custom_config) + template_payload = _load_template_config(template) + + fetch_filters = dict(custom.get("filters") or {}) + if adapter.strip().lower() == "ado": + fetch_filters.setdefault("use_current_iteration_default", False) + items = graph_adapter.fetch_all_issues(project_id, filters=fetch_filters) + relationships = graph_adapter.fetch_relationships(project_id) + + graph = ( + BacklogGraphBuilder( + provider=adapter, + template_name=template, + custom_config={**custom, "project_key": project_id}, + ) + .add_items(items) + .add_dependencies(relationships) + .build() + ) + + hierarchy = _derive_creation_hierarchy(template_payload, custom) + + parent_id: str | None = None + if parent: + parent_id, parent_type = _resolve_parent_id(parent, graph.items) + if not parent_id or not parent_type: + print_error(f"Parent '{parent}' not found") + raise typer.Exit(code=1) + if not _validate_parent(issue_type, parent_type, hierarchy): + allowed = hierarchy.get(issue_type, []) + print_error( + f"Type '{issue_type}' is not allowed under parent type '{parent_type}'. " + f"Allowed parent types: {', '.join(allowed) if allowed else '(any)'}" + ) + raise typer.Exit(code=1) + elif interactive_mode: + parent_id = _choose_parent_interactively(issue_type, graph.items, hierarchy) + + payload: dict[str, Any] = { + "type": issue_type, + "title": title, + "description": body, + "description_format": description_format, + } + if acceptance_criteria: + payload["acceptance_criteria"] = acceptance_criteria + if priority: + payload["priority"] = priority + if parsed_story_points is not None: + payload["story_points"] = parsed_story_points + if parent_id: + payload["parent_id"] = parent_id + if sprint: + payload["sprint"] = sprint + + provider_fields = _resolve_provider_fields_for_create(adapter, template_payload, custom, repo_path) + if provider_fields: + payload["provider_fields"] = provider_fields + + if adapter.strip().lower() == "github" and not _has_github_repo_issue_type_mapping(provider_fields, issue_type): + print_warning( + "GitHub repository issue-type mapping is not configured for this issue type; " + "issue type may fall back to labels/body only. Configure " + "backlog_config.providers.github.settings.github_issue_types.type_ids " + "(ProjectV2 mapping is optional) to enable automatic issue Type updates." + ) + + if check_dor: + dor_config = DefinitionOfReady.load_from_repo(repo_path) + if dor_config: + draft = { + "id": "DRAFT", + "title": title, + "body_markdown": body, + "description": body, + "type": issue_type, + "provider_fields": { + "acceptance_criteria": acceptance_criteria, + "priority": priority, + "story_points": parsed_story_points, + }, + } + dor_errors = dor_config.validate_item(draft) + if dor_errors: + print_warning("Definition of Ready (DoR) issues detected:") + for err in dor_errors: + print_warning(err) + raise typer.Exit(code=1) + print_info("Definition of Ready (DoR) satisfied") + + create_context = f"adapter={adapter}, format={description_format}" + if sprint: + create_context += f", sprint={sprint}" + if parent_id: + create_context += ", parent=selected" + print_info(f"Creating backlog item now ({create_context})...") + + try: + created = graph_adapter.create_issue(project_id, payload) + except (requests.Timeout, requests.ConnectionError) as error: + print_warning("Create request failed after send; item may already exist remotely.") + print_warning("Verify backlog for the title/key before retrying to avoid duplicates.") + raise typer.Exit(code=1) from error + print_success("Issue created successfully") + print_info(f"id: {created.get('id', '')}") + print_info(f"key: {created.get('key', '')}") + print_info(f"url: {created.get('url', '')}") diff --git a/modules/backlog-core/src/backlog_core/graph/builder.py b/modules/backlog-core/src/backlog_core/graph/builder.py index dadc8c54..92737c40 100644 --- a/modules/backlog-core/src/backlog_core/graph/builder.py +++ b/modules/backlog-core/src/backlog_core/graph/builder.py @@ -25,6 +25,10 @@ class BacklogConfigModel(BaseModel): description="Raw relationship type -> normalized dependency mapping", ) status_mapping: dict[str, str] = Field(default_factory=dict, description="Raw status -> normalized status mapping") + creation_hierarchy: dict[str, list[str]] = Field( + default_factory=dict, + description="Allowed parent types per child type", + ) @beartype @@ -105,6 +109,7 @@ def _flatten_config_payload(self, config_payload: dict[str, Any]) -> dict[str, A "type_mapping": dependency_data.get("type_mapping", {}), "dependency_rules": dependency_data.get("dependency_rules", {}), "status_mapping": dependency_data.get("status_mapping", {}), + "creation_hierarchy": dependency_data.get("creation_hierarchy", {}), "providers": {name: provider.model_dump() for name, provider in schema.providers.items()}, } return BacklogConfigModel.model_validate(config_payload).model_dump() @@ -117,7 +122,7 @@ def _merge_config(self, base: dict[str, Any], override: dict[str, Any]) -> dict[ value = override.get(key) if value is not None: merged[key] = value - for key in ("type_mapping", "dependency_rules", "status_mapping", "providers"): + for key in ("type_mapping", "dependency_rules", "status_mapping", "creation_hierarchy", "providers"): merged[key] = {**merged.get(key, {}), **override.get(key, {})} return merged diff --git a/modules/backlog-core/src/backlog_core/graph/config_schema.py b/modules/backlog-core/src/backlog_core/graph/config_schema.py index 40ab36d5..50a12ea5 100644 --- a/modules/backlog-core/src/backlog_core/graph/config_schema.py +++ b/modules/backlog-core/src/backlog_core/graph/config_schema.py @@ -16,6 +16,10 @@ class DependencyConfig(BaseModel): type_mapping: dict[str, str] = Field(default_factory=dict, description="Raw type -> normalized type mapping") dependency_rules: dict[str, str] = Field(default_factory=dict, description="Raw relation -> normalized mapping") status_mapping: dict[str, str] = Field(default_factory=dict, description="Raw status -> normalized status mapping") + creation_hierarchy: dict[str, list[str]] = Field( + default_factory=dict, + description="Allowed parent types per child type", + ) class ProviderConfig(BaseModel): @@ -42,12 +46,12 @@ class BacklogConfigSchema(BaseModel): devops_stages: dict[str, DevOpsStageConfig] = Field(default_factory=dict) -def load_backlog_config_from_spec(spec_path: Path) -> BacklogConfigSchema | None: - """Load backlog config from `.specfact/spec.yaml` if present and valid.""" - if not spec_path.exists(): +def _load_backlog_config_from_yaml(path: Path) -> BacklogConfigSchema | None: + """Load and validate backlog config payload from a YAML file path.""" + if not path.exists(): return None - loaded = yaml.safe_load(spec_path.read_text(encoding="utf-8")) + loaded = yaml.safe_load(path.read_text(encoding="utf-8")) if not isinstance(loaded, dict): return None @@ -61,3 +65,13 @@ def load_backlog_config_from_spec(spec_path: Path) -> BacklogConfigSchema | None payload["devops_stages"] = devops_stages return BacklogConfigSchema.model_validate(payload) + + +def load_backlog_config_from_spec(spec_path: Path) -> BacklogConfigSchema | None: + """Load backlog config from `.specfact/spec.yaml` if present and valid.""" + return _load_backlog_config_from_yaml(spec_path) + + +def load_backlog_config_from_backlog_file(config_path: Path) -> BacklogConfigSchema | None: + """Load backlog config from `.specfact/backlog-config.yaml` if present and valid.""" + return _load_backlog_config_from_yaml(config_path) diff --git a/modules/backlog-core/src/backlog_core/main.py b/modules/backlog-core/src/backlog_core/main.py index 45166885..a7c3f2a1 100644 --- a/modules/backlog-core/src/backlog_core/main.py +++ b/modules/backlog-core/src/backlog_core/main.py @@ -7,6 +7,7 @@ from typer.core import TyperGroup from backlog_core.commands import ( + add, analyze_deps, diff, generate_release_notes, @@ -25,13 +26,14 @@ class _BacklogCoreCommandGroup(TyperGroup): # Command groups first for discoverability. "delta": 10, # High-impact flow commands next. - "sync": 20, - "verify-readiness": 30, - "analyze-deps": 40, - "diff": 50, - "promote": 60, - "generate-release-notes": 70, - "trace-impact": 80, + "add": 20, + "sync": 30, + "verify-readiness": 40, + "analyze-deps": 50, + "diff": 60, + "promote": 70, + "generate-release-notes": 80, + "trace-impact": 90, } def list_commands(self, ctx: click.Context) -> list[str]: @@ -44,6 +46,7 @@ def list_commands(self, ctx: click.Context) -> list[str]: help="Backlog dependency analysis and sync", cls=_BacklogCoreCommandGroup, ) +backlog_app.command("add")(add) backlog_app.command("analyze-deps")(analyze_deps) backlog_app.command("trace-impact")(trace_impact) backlog_app.command("sync")(sync) diff --git a/modules/backlog-core/src/backlog_core/resources/backlog-templates/github_custom.yaml b/modules/backlog-core/src/backlog_core/resources/backlog-templates/github_custom.yaml new file mode 100644 index 00000000..a8c99127 --- /dev/null +++ b/modules/backlog-core/src/backlog_core/resources/backlog-templates/github_custom.yaml @@ -0,0 +1,22 @@ +type_mapping: + epic: epic + feature: feature + story: story + task: task + bug: bug +creation_hierarchy: + epic: [] + feature: [epic] + story: [feature, epic] + task: [story, feature] + bug: [story, feature, epic] +dependency_rules: + blocks: blocks + blocked_by: blocks + relates: relates_to +status_mapping: + open: todo + closed: done + todo: todo + in progress: in_progress + done: done diff --git a/modules/backlog-core/tests/unit/test_adapter_create_issue.py b/modules/backlog-core/tests/unit/test_adapter_create_issue.py new file mode 100644 index 00000000..bf048d6e --- /dev/null +++ b/modules/backlog-core/tests/unit/test_adapter_create_issue.py @@ -0,0 +1,334 @@ +"""Unit tests for backlog adapter create_issue contract.""" + +from __future__ import annotations + +import sys +from pathlib import Path + + +# ruff: noqa: E402 + + +REPO_ROOT = Path(__file__).resolve().parents[4] +sys.path.insert(0, str(REPO_ROOT / "modules" / "backlog-core" / "src")) +sys.path.insert(0, str(REPO_ROOT / "src")) + +from specfact_cli.adapters.ado import AdoAdapter +from specfact_cli.adapters.github import GitHubAdapter + + +class _DummyResponse: + def __init__(self, payload: dict) -> None: + self._payload = payload + self.status_code = 201 + self.ok = True + self.text = "" + + def raise_for_status(self) -> None: + return None + + def json(self) -> dict: + return self._payload + + +def test_github_create_issue_maps_payload_and_returns_shape(monkeypatch) -> None: + """GitHub create_issue sends issue payload and normalizes response fields.""" + adapter = GitHubAdapter(repo_owner="nold-ai", repo_name="specfact-cli", api_token="token", use_gh_cli=False) + + captured: dict = {} + + def _fake_post(url: str, json: dict, headers: dict, timeout: int): + captured["url"] = url + captured["json"] = json + captured["headers"] = headers + captured["timeout"] = timeout + return _DummyResponse({"id": 77, "number": 42, "html_url": "https://github.com/nold-ai/specfact-cli/issues/42"}) + + import specfact_cli.adapters.github as github_module + + monkeypatch.setattr(github_module.requests, "post", _fake_post) + + retry_call: dict[str, object] = {} + + def _capture_retry(request_callable, **kwargs): + retry_call.update(kwargs) + return request_callable() + + monkeypatch.setattr(adapter, "_request_with_retry", _capture_retry) + + result = adapter.create_issue( + "nold-ai/specfact-cli", + { + "type": "story", + "title": "Implement X", + "description": "Acceptance criteria: ...", + "acceptance_criteria": "Given/When/Then", + "priority": "high", + "story_points": 5, + "parent_id": "100", + }, + ) + + assert retry_call.get("retry_on_ambiguous_transport") is False + assert captured["url"].endswith("/repos/nold-ai/specfact-cli/issues") + assert captured["json"]["title"] == "Implement X" + labels = [label.lower() for label in captured["json"]["labels"]] + assert "story" in labels + assert "priority:high" in labels + assert "story-points:5" in labels + assert "acceptance criteria" in captured["json"]["body"].lower() + assert result == {"id": "42", "key": "42", "url": "https://github.com/nold-ai/specfact-cli/issues/42"} + + +def test_ado_create_issue_maps_payload_and_parent_relation(monkeypatch) -> None: + """ADO create_issue sends JSON patch and includes parent relation when provided.""" + adapter = AdoAdapter(org="nold-ai", project="specfact-cli", api_token="token") + + captured: dict = {} + + def _fake_patch(url: str, json: list, headers: dict, timeout: int): + captured["url"] = url + captured["json"] = json + captured["headers"] = headers + captured["timeout"] = timeout + return _DummyResponse( + { + "id": 901, + "url": "https://dev.azure.com/nold-ai/specfact-cli/_apis/wit/workItems/901", + "_links": { + "html": {"href": "https://dev.azure.com/nold-ai/specfact-cli/_workitems/edit/901"}, + }, + } + ) + + import specfact_cli.adapters.ado as ado_module + + monkeypatch.setattr(ado_module.requests, "patch", _fake_patch) + + retry_call: dict[str, object] = {} + + def _capture_retry(request_callable, **kwargs): + retry_call.update(kwargs) + return request_callable() + + monkeypatch.setattr(adapter, "_request_with_retry", _capture_retry) + + result = adapter.create_issue( + "nold-ai/specfact-cli", + { + "type": "story", + "title": "Implement X", + "description": "Acceptance criteria: ...", + "acceptance_criteria": "Given/When/Then", + "priority": 1, + "story_points": 8, + "sprint": "Project\\Release 1\\Sprint 3", + "parent_id": "123", + "description_format": "classic", + }, + ) + + assert retry_call.get("retry_on_ambiguous_transport") is False + assert "/_apis/wit/workitems/$" in captured["url"] + assert any(op.get("path") == "/fields/System.Title" and op.get("value") == "Implement X" for op in captured["json"]) + assert any(op.get("path") == "/relations/-" for op in captured["json"]) + assert any( + op.get("path") == "/multilineFieldsFormat/System.Description" and op.get("value") == "Html" + for op in captured["json"] + ) + assert any(op.get("path") == "/fields/Microsoft.VSTS.Common.AcceptanceCriteria" for op in captured["json"]) + assert any( + op.get("path") == "/fields/Microsoft.VSTS.Common.Priority" and op.get("value") == 1 for op in captured["json"] + ) + assert any( + op.get("path") == "/fields/Microsoft.VSTS.Scheduling.StoryPoints" and op.get("value") == 8 + for op in captured["json"] + ) + assert any( + op.get("path") == "/fields/System.IterationPath" and op.get("value") == "Project\\Release 1\\Sprint 3" + for op in captured["json"] + ) + assert result == { + "id": "901", + "key": "901", + "url": "https://dev.azure.com/nold-ai/specfact-cli/_workitems/edit/901", + } + + +def test_github_create_issue_sets_projects_type_field_when_configured(monkeypatch) -> None: + """GitHub create_issue can set ProjectV2 Type field when config is provided.""" + adapter = GitHubAdapter(repo_owner="nold-ai", repo_name="specfact-cli", api_token="token", use_gh_cli=False) + + calls: list[tuple[str, dict]] = [] + + def _fake_post(url: str, json: dict, headers: dict, timeout: int): + _ = headers, timeout + calls.append((url, json)) + if url.endswith("/issues"): + return _DummyResponse( + { + "id": 88, + "number": 55, + "node_id": "ISSUE_NODE_55", + "html_url": "https://github.com/nold-ai/specfact-cli/issues/55", + } + ) + if url.endswith("/graphql"): + query = str(json.get("query") or "") + if "addProjectV2ItemById" in query: + return _DummyResponse({"data": {"addProjectV2ItemById": {"item": {"id": "PVT_ITEM_1"}}}}) + if "updateProjectV2ItemFieldValue" in query: + return _DummyResponse( + {"data": {"updateProjectV2ItemFieldValue": {"projectV2Item": {"id": "PVT_ITEM_1"}}}} + ) + return _DummyResponse({"data": {}}) + raise AssertionError(f"Unexpected URL: {url}") + + import specfact_cli.adapters.github as github_module + + monkeypatch.setattr(github_module.requests, "post", _fake_post) + + result = adapter.create_issue( + "nold-ai/specfact-cli", + { + "type": "story", + "title": "Implement projects type", + "description": "Body", + "provider_fields": { + "github_project_v2": { + "project_id": "PVT_PROJECT_1", + "type_field_id": "PVT_FIELD_TYPE", + "type_option_ids": { + "story": "PVT_OPTION_STORY", + }, + } + }, + }, + ) + + graphql_calls = [entry for entry in calls if entry[0].endswith("/graphql")] + assert len(graphql_calls) == 2 + + add_variables = graphql_calls[0][1]["variables"] + assert add_variables == {"projectId": "PVT_PROJECT_1", "contentId": "ISSUE_NODE_55"} + + set_variables = graphql_calls[1][1]["variables"] + assert set_variables["projectId"] == "PVT_PROJECT_1" + assert set_variables["itemId"] == "PVT_ITEM_1" + assert set_variables["fieldId"] == "PVT_FIELD_TYPE" + assert set_variables["optionId"] == "PVT_OPTION_STORY" + + assert result == {"id": "55", "key": "55", "url": "https://github.com/nold-ai/specfact-cli/issues/55"} + + +def test_github_create_issue_sets_repository_issue_type_when_configured(monkeypatch) -> None: + """GitHub create_issue sets repository issue Type when mapping is configured.""" + adapter = GitHubAdapter(repo_owner="nold-ai", repo_name="specfact-cli", api_token="token", use_gh_cli=False) + + calls: list[tuple[str, dict]] = [] + + def _fake_post(url: str, json: dict, headers: dict, timeout: int): + _ = headers, timeout + calls.append((url, json)) + if url.endswith("/issues"): + return _DummyResponse( + { + "id": 188, + "number": 77, + "node_id": "ISSUE_NODE_77", + "html_url": "https://github.com/nold-ai/specfact-cli/issues/77", + } + ) + if url.endswith("/graphql"): + query = str(json.get("query") or "") + if "updateIssue(input:" in query: + return _DummyResponse({"data": {"updateIssue": {"issue": {"id": "ISSUE_NODE_77"}}}}) + return _DummyResponse({"data": {}}) + raise AssertionError(f"Unexpected URL: {url}") + + import specfact_cli.adapters.github as github_module + + monkeypatch.setattr(github_module.requests, "post", _fake_post) + + result = adapter.create_issue( + "nold-ai/specfact-cli", + { + "type": "task", + "title": "Apply issue type", + "description": "Body", + "provider_fields": { + "github_issue_types": { + "type_ids": { + "task": "IT_kwDODWwjB84Brk47", + } + } + }, + }, + ) + + graphql_calls = [entry for entry in calls if entry[0].endswith("/graphql")] + assert len(graphql_calls) == 1 + variables = graphql_calls[0][1]["variables"] + assert variables == {"issueId": "ISSUE_NODE_77", "issueTypeId": "IT_kwDODWwjB84Brk47"} + assert result == {"id": "77", "key": "77", "url": "https://github.com/nold-ai/specfact-cli/issues/77"} + + +def test_github_create_issue_links_native_parent_subissue(monkeypatch) -> None: + """GitHub create_issue links parent relationship via native sidebar sub-issue mutation.""" + adapter = GitHubAdapter(repo_owner="nold-ai", repo_name="specfact-cli", api_token="token", use_gh_cli=False) + + calls: list[tuple[str, dict]] = [] + + def _fake_post(url: str, json: dict, headers: dict, timeout: int): + _ = headers, timeout + calls.append((url, json)) + if url.endswith("/issues"): + return _DummyResponse( + { + "id": 288, + "number": 99, + "node_id": "ISSUE_NODE_99", + "html_url": "https://github.com/nold-ai/specfact-cli/issues/99", + } + ) + if url.endswith("/graphql"): + query = str(json.get("query") or "") + if "repository(owner:$owner, name:$repo)" in query and "issue(number:$number)" in query: + return _DummyResponse({"data": {"repository": {"issue": {"id": "ISSUE_NODE_PARENT_11"}}}}) + if "addSubIssue(input:" in query: + return _DummyResponse( + { + "data": { + "addSubIssue": { + "issue": {"id": "ISSUE_NODE_PARENT_11"}, + "subIssue": {"id": "ISSUE_NODE_99"}, + } + } + } + ) + return _DummyResponse({"data": {}}) + raise AssertionError(f"Unexpected URL: {url}") + + import specfact_cli.adapters.github as github_module + + monkeypatch.setattr(github_module.requests, "post", _fake_post) + + result = adapter.create_issue( + "nold-ai/specfact-cli", + { + "type": "task", + "title": "Link native parent", + "description": "Body", + "parent_id": "11", + }, + ) + + graphql_calls = [entry for entry in calls if entry[0].endswith("/graphql")] + assert len(graphql_calls) == 2 + + lookup_variables = graphql_calls[0][1]["variables"] + assert lookup_variables == {"owner": "nold-ai", "repo": "specfact-cli", "number": 11} + + link_variables = graphql_calls[1][1]["variables"] + assert link_variables == {"parentIssueId": "ISSUE_NODE_PARENT_11", "subIssueId": "ISSUE_NODE_99"} + assert result == {"id": "99", "key": "99", "url": "https://github.com/nold-ai/specfact-cli/issues/99"} diff --git a/modules/backlog-core/tests/unit/test_add_command.py b/modules/backlog-core/tests/unit/test_add_command.py new file mode 100644 index 00000000..fc7e660b --- /dev/null +++ b/modules/backlog-core/tests/unit/test_add_command.py @@ -0,0 +1,800 @@ +"""Unit tests for backlog add interactive issue creation command.""" + +from __future__ import annotations + +import sys +from pathlib import Path + +from typer.testing import CliRunner + + +# ruff: noqa: E402 + + +REPO_ROOT = Path(__file__).resolve().parents[4] +sys.path.insert(0, str(REPO_ROOT / "modules" / "backlog-core" / "src")) +sys.path.insert(0, str(REPO_ROOT / "src")) + +from backlog_core.main import backlog_app + + +runner = CliRunner() + + +class _FakeAdapter: + def __init__(self, items: list[dict], relationships: list[dict], created: list[dict]) -> None: + self._items = items + self._relationships = relationships + self.created = created + + def fetch_all_issues(self, project_id: str, filters: dict | None = None) -> list[dict]: + _ = project_id, filters + return self._items + + def fetch_relationships(self, project_id: str) -> list[dict]: + _ = project_id + return self._relationships + + def create_issue(self, project_id: str, payload: dict) -> dict: + _ = project_id + self.created.append(payload) + return {"id": "123", "key": "123", "url": "https://example.test/issues/123"} + + +def test_backlog_add_non_interactive_requires_type_and_title(monkeypatch) -> None: + """Non-interactive add fails when required options are missing.""" + from specfact_cli.adapters.registry import AdapterRegistry + + monkeypatch.setattr(AdapterRegistry, "get_adapter", lambda _adapter: _FakeAdapter([], [], [])) + + result = runner.invoke( + backlog_app, + [ + "add", + "--project-id", + "nold-ai/specfact-cli", + "--adapter", + "github", + "--non-interactive", + ], + ) + + assert result.exit_code == 1 + assert "required in --non-interactive mode" in result.stdout + + +def test_backlog_add_validates_missing_parent(monkeypatch) -> None: + """Add fails when provided parent key/id cannot be resolved.""" + from specfact_cli.adapters.registry import AdapterRegistry + + adapter = _FakeAdapter(items=[], relationships=[], created=[]) + monkeypatch.setattr(AdapterRegistry, "get_adapter", lambda _adapter: adapter) + + result = runner.invoke( + backlog_app, + [ + "add", + "--project-id", + "nold-ai/specfact-cli", + "--adapter", + "github", + "--type", + "story", + "--parent", + "FEAT-123", + "--title", + "Implement X", + "--non-interactive", + ], + ) + + assert result.exit_code == 1 + assert "Parent 'FEAT-123' not found" in result.stdout + + +def test_backlog_add_uses_default_hierarchy_when_no_github_custom_mapping_file(monkeypatch, tmp_path: Path) -> None: + """Add falls back to default hierarchy when github_custom mapping file is absent.""" + from specfact_cli.adapters.registry import AdapterRegistry + + items = [ + { + "id": "42", + "key": "STORY-1", + "title": "Story Parent", + "type": "story", + "status": "todo", + } + ] + created_payloads: list[dict] = [] + adapter = _FakeAdapter(items=items, relationships=[], created=created_payloads) + monkeypatch.setattr(AdapterRegistry, "get_adapter", lambda _adapter: adapter) + monkeypatch.chdir(tmp_path) + + result = runner.invoke( + backlog_app, + [ + "add", + "--project-id", + "nold-ai/specfact-cli", + "--adapter", + "github", + "--type", + "task", + "--parent", + "STORY-1", + "--title", + "Implement X", + "--body", + "Body", + "--non-interactive", + ], + ) + + assert result.exit_code == 0 + assert created_payloads + + +def test_backlog_add_auto_applies_github_custom_mapping_file(monkeypatch, tmp_path: Path) -> None: + """Add automatically loads .specfact github_custom mapping file when present.""" + from specfact_cli.adapters.registry import AdapterRegistry + + custom_mapping_file = tmp_path / ".specfact" / "templates" / "backlog" / "field_mappings" / "github_custom.yaml" + custom_mapping_file.parent.mkdir(parents=True, exist_ok=True) + custom_mapping_file.write_text( + """ +creation_hierarchy: + task: [epic] +""".strip(), + encoding="utf-8", + ) + + items = [ + { + "id": "42", + "key": "STORY-1", + "title": "Story Parent", + "type": "story", + "status": "todo", + } + ] + created_payloads: list[dict] = [] + adapter = _FakeAdapter(items=items, relationships=[], created=created_payloads) + monkeypatch.setattr(AdapterRegistry, "get_adapter", lambda _adapter: adapter) + monkeypatch.chdir(tmp_path) + + result = runner.invoke( + backlog_app, + [ + "add", + "--project-id", + "nold-ai/specfact-cli", + "--adapter", + "github", + "--type", + "task", + "--parent", + "STORY-1", + "--title", + "Implement X", + "--body", + "Body", + "--non-interactive", + ], + ) + + assert result.exit_code == 1 + assert "Type 'task' is not allowed under parent type 'story'" in result.stdout + + +def test_backlog_add_honors_creation_hierarchy_from_custom_config(monkeypatch, tmp_path: Path) -> None: + """Add validates child->parent relationship using explicit hierarchy config.""" + from specfact_cli.adapters.registry import AdapterRegistry + + config_file = tmp_path / "custom.yaml" + config_file.write_text( + """ +creation_hierarchy: + story: [feature] +""".strip(), + encoding="utf-8", + ) + + items = [ + { + "id": "42", + "key": "FEAT-123", + "title": "Parent", + "type": "feature", + "status": "todo", + } + ] + created_payloads: list[dict] = [] + adapter = _FakeAdapter(items=items, relationships=[], created=created_payloads) + monkeypatch.setattr(AdapterRegistry, "get_adapter", lambda _adapter: adapter) + + result = runner.invoke( + backlog_app, + [ + "add", + "--project-id", + "nold-ai/specfact-cli", + "--adapter", + "github", + "--template", + "github_projects", + "--custom-config", + str(config_file), + "--type", + "story", + "--parent", + "FEAT-123", + "--title", + "Implement X", + "--body", + "Acceptance criteria: ...", + "--non-interactive", + ], + ) + + assert result.exit_code == 0 + assert created_payloads and created_payloads[0]["parent_id"] == "42" + + +def test_backlog_add_check_dor_blocks_invalid_draft(monkeypatch, tmp_path: Path) -> None: + """Add fails DoR check when configured required fields are missing.""" + from specfact_cli.adapters.registry import AdapterRegistry + + dor_dir = tmp_path / ".specfact" + dor_dir.mkdir(parents=True, exist_ok=True) + (dor_dir / "dor.yaml").write_text( + """ +rules: + acceptance_criteria: true +""".strip(), + encoding="utf-8", + ) + + adapter = _FakeAdapter(items=[], relationships=[], created=[]) + monkeypatch.setattr(AdapterRegistry, "get_adapter", lambda _adapter: adapter) + + result = runner.invoke( + backlog_app, + [ + "add", + "--project-id", + "nold-ai/specfact-cli", + "--adapter", + "github", + "--type", + "story", + "--title", + "Implement X", + "--body", + "No explicit section", + "--non-interactive", + "--check-dor", + "--repo-path", + str(tmp_path), + ], + ) + + assert result.exit_code == 1 + assert "Definition of Ready" in result.stdout + + +class _FakeAdoAdapter(_FakeAdapter): + def _get_current_iteration(self) -> str | None: + return "Project\\Sprint 42" + + def _list_available_iterations(self) -> list[str]: + return ["Project\\Sprint 41", "Project\\Sprint 42"] + + +def test_backlog_add_interactive_multiline_body_uses_end_marker(monkeypatch) -> None: + """Interactive add supports multiline body input terminated by marker.""" + from specfact_cli.adapters.registry import AdapterRegistry + + created_payloads: list[dict] = [] + adapter = _FakeAdapter(items=[], relationships=[], created=created_payloads) + monkeypatch.setattr(AdapterRegistry, "get_adapter", lambda _adapter: adapter) + + import importlib + + add_module = importlib.import_module("backlog_core.commands.add") + + def _select(message: str, _choices: list[str], default: str | None = None) -> str: + lowered = message.lower() + if "issue type" in lowered: + return "story" + if "description format" in lowered: + return "markdown" + if "acceptance criteria" in lowered: + return "no" + if "add parent issue" in lowered: + return "no" + return default or "markdown" + + monkeypatch.setattr(add_module, "_select_with_fallback", _select) + + result = runner.invoke( + backlog_app, + [ + "add", + "--project-id", + "nold-ai/specfact-cli", + "--adapter", + "github", + "--body-end-marker", + "::END::", + ], + input="Interactive title\nline one\nline two\n::END::\n\n\n\n", + ) + + assert result.exit_code == 0 + assert created_payloads + assert created_payloads[0]["description"] == "line one\nline two" + + +def test_backlog_add_interactive_ado_selects_current_iteration(monkeypatch) -> None: + """Interactive add can set sprint from current ADO iteration selection.""" + import importlib + + add_module = importlib.import_module("backlog_core.commands.add") + + from specfact_cli.adapters.registry import AdapterRegistry + + created_payloads: list[dict] = [] + adapter = _FakeAdoAdapter(items=[], relationships=[], created=created_payloads) + monkeypatch.setattr(AdapterRegistry, "get_adapter", lambda _adapter: adapter) + + def _select(message: str, _choices: list[str], default: str | None = None) -> str: + lowered = message.lower() + if "issue type" in lowered: + return "story" + if "sprint/iteration" in lowered: + return "current: Project\\Sprint 42" + if "description format" in lowered: + return "markdown" + if "acceptance criteria" in lowered: + return "no" + if "add parent issue" in lowered: + return "no" + return "markdown" + + monkeypatch.setattr(add_module, "_select_with_fallback", _select) + + result = runner.invoke( + backlog_app, + [ + "add", + "--project-id", + "dominikusnold/Specfact CLI", + "--adapter", + "ado", + ], + input="ADO story\nbody line\n::END::\n\n\n", + ) + + assert result.exit_code == 0 + assert created_payloads + assert created_payloads[0]["sprint"] == "Project\\Sprint 42" + + +def test_backlog_add_interactive_collects_story_fields_and_parent(monkeypatch) -> None: + """Interactive story flow captures AC/priority/story points and selected parent.""" + import importlib + + add_module = importlib.import_module("backlog_core.commands.add") + + from specfact_cli.adapters.registry import AdapterRegistry + + items = [ + { + "id": "42", + "key": "FEAT-123", + "title": "Parent feature", + "type": "feature", + "status": "todo", + } + ] + created_payloads: list[dict] = [] + adapter = _FakeAdapter(items=items, relationships=[], created=created_payloads) + monkeypatch.setattr(AdapterRegistry, "get_adapter", lambda _adapter: adapter) + + def _select(message: str, _choices: list[str], default: str | None = None) -> str: + lowered = message.lower() + if "issue type" in lowered: + return "story" + if "description format" in lowered: + return "markdown" + if "acceptance criteria" in lowered: + return "yes" + if "add parent issue" in lowered: + return "yes" + if "select parent issue" in lowered: + return "FEAT-123 | Parent feature | type=feature" + return "markdown" + + monkeypatch.setattr(add_module, "_select_with_fallback", _select) + + result = runner.invoke( + backlog_app, + [ + "add", + "--project-id", + "nold-ai/specfact-cli", + "--adapter", + "github", + ], + input="Story title\nbody line\n::END::\n\nac line\n::END::\nhigh\n5\n", + ) + + assert result.exit_code == 0 + assert created_payloads + payload = created_payloads[0] + assert payload["acceptance_criteria"] == "ac line" + assert payload["priority"] == "high" + assert payload["story_points"] == 5 + assert payload["parent_id"] == "42" + + +def test_backlog_add_interactive_parent_selection_falls_back_to_all_candidates(monkeypatch) -> None: + """Interactive parent picker falls back to all candidates when type inference yields no matches.""" + import importlib + + add_module = importlib.import_module("backlog_core.commands.add") + + from specfact_cli.adapters.registry import AdapterRegistry + + items = [ + { + "id": "42", + "key": "STORY-1", + "title": "Parent", + "type": "custom", + "status": "todo", + } + ] + created_payloads: list[dict] = [] + adapter = _FakeAdapter(items=items, relationships=[], created=created_payloads) + monkeypatch.setattr(AdapterRegistry, "get_adapter", lambda _adapter: adapter) + + def _select(message: str, _choices: list[str], default: str | None = None) -> str: + lowered = message.lower() + if "issue type" in lowered: + return "task" + if "description format" in lowered: + return "markdown" + if "acceptance criteria" in lowered: + return "no" + if "add parent issue" in lowered: + return "yes" + if "select parent issue" in lowered: + return "STORY-1 | Parent | type=custom" + return default or "markdown" + + monkeypatch.setattr(add_module, "_select_with_fallback", _select) + + result = runner.invoke( + backlog_app, + [ + "add", + "--project-id", + "nold-ai/specfact-cli", + "--adapter", + "github", + ], + input="Task title\nBody line\n::END::\n\n\n\n", + ) + + assert result.exit_code == 0 + assert "No hierarchy-compatible parent candidates found from inferred types." in result.stdout + assert created_payloads + assert created_payloads[0]["parent_id"] == "42" + + +def test_backlog_add_ado_default_template_enables_epic_parent_candidates(monkeypatch) -> None: + """ADO add without explicit template should still resolve epic parent candidate for feature.""" + import importlib + + add_module = importlib.import_module("backlog_core.commands.add") + + from specfact_cli.adapters.registry import AdapterRegistry + + items = [ + { + "id": "900", + "key": "EPIC-900", + "title": "Platform Epic", + "work_item_type": "Epic", + "status": "New", + } + ] + created_payloads: list[dict] = [] + adapter = _FakeAdoAdapter(items=items, relationships=[], created=created_payloads) + monkeypatch.setattr(AdapterRegistry, "get_adapter", lambda _adapter: adapter) + + def _select(message: str, _choices: list[str], default: str | None = None) -> str: + lowered = message.lower() + if "issue type" in lowered: + return "feature" + if "sprint/iteration" in lowered: + return "(skip sprint/iteration)" + if "description format" in lowered: + return "markdown" + if "acceptance criteria" in lowered: + return "no" + if "add parent issue" in lowered: + return "yes" + if "select parent issue" in lowered: + return "EPIC-900 | Platform Epic | type=epic" + return default or "markdown" + + monkeypatch.setattr(add_module, "_select_with_fallback", _select) + + result = runner.invoke( + backlog_app, + [ + "add", + "--project-id", + "dominikusnold/Specfact CLI", + "--adapter", + "ado", + ], + input="Feature title\nFeature body\n::END::\n\n\n", + ) + + assert result.exit_code == 0 + assert created_payloads + assert created_payloads[0].get("parent_id") == "900" + + +def test_backlog_add_warns_on_ambiguous_create_failure(monkeypatch) -> None: + """CLI warns user when duplicate-safe create fails with ambiguous transport error.""" + import requests + + from specfact_cli.adapters.registry import AdapterRegistry + + class _TimeoutAdapter(_FakeAdapter): + def create_issue(self, project_id: str, payload: dict) -> dict: # type: ignore[override] + _ = project_id, payload + raise requests.Timeout("network timeout") + + adapter = _TimeoutAdapter(items=[], relationships=[], created=[]) + monkeypatch.setattr(AdapterRegistry, "get_adapter", lambda _adapter: adapter) + + result = runner.invoke( + backlog_app, + [ + "add", + "--project-id", + "nold-ai/specfact-cli", + "--adapter", + "github", + "--type", + "story", + "--title", + "Implement X", + "--non-interactive", + ], + ) + + assert result.exit_code == 1 + assert "may already exist remotely" in result.stdout + assert "before retrying to avoid duplicates" in result.stdout + + +def test_backlog_add_interactive_ado_sprint_lookup_uses_project_context(monkeypatch) -> None: + """ADO sprint lookup uses project_id-resolved org/project context before selection.""" + import importlib + + add_module = importlib.import_module("backlog_core.commands.add") + + from specfact_cli.adapters.registry import AdapterRegistry + + class _ContextAdoAdapter(_FakeAdapter): + def __init__(self) -> None: + super().__init__(items=[], relationships=[], created=[]) + self.org = None + self.project = None + + def _resolve_graph_project_context(self, project_id: str) -> tuple[str, str]: + assert project_id == "dominikusnold/Specfact CLI" + return "dominikusnold", "Specfact CLI" + + def _get_current_iteration(self) -> str | None: + if self.org == "dominikusnold" and self.project == "Specfact CLI": + return r"Specfact CLI\2026\Sprint 01" + return None + + def _list_available_iterations(self) -> list[str]: + if self.org == "dominikusnold" and self.project == "Specfact CLI": + return [r"Specfact CLI\2026\Sprint 01", r"Specfact CLI\2026\Sprint 02"] + return [] + + adapter = _ContextAdoAdapter() + monkeypatch.setattr(AdapterRegistry, "get_adapter", lambda _adapter: adapter) + + def _select(message: str, _choices: list[str], default: str | None = None) -> str: + lowered = message.lower() + if "issue type" in lowered: + return "story" + if "sprint/iteration" in lowered: + return r"current: Specfact CLI\2026\Sprint 01" + if "description format" in lowered: + return "markdown" + if "acceptance criteria" in lowered: + return "no" + if "add parent issue" in lowered: + return "no" + return default or "markdown" + + monkeypatch.setattr(add_module, "_select_with_fallback", _select) + + result = runner.invoke( + backlog_app, + [ + "add", + "--project-id", + "dominikusnold/Specfact CLI", + "--adapter", + "ado", + ], + input="Story title\nBody\n::END::\n\n\n", + ) + + assert result.exit_code == 0 + assert adapter.created + assert adapter.created[0].get("sprint") == r"Specfact CLI\2026\Sprint 01" + + +def test_backlog_add_forwards_github_project_v2_provider_fields(monkeypatch, tmp_path: Path) -> None: + """backlog add forwards GitHub ProjectV2 config from custom config into create payload.""" + from specfact_cli.adapters.registry import AdapterRegistry + + config_file = tmp_path / "custom.yaml" + config_file.write_text( + """ +provider_fields: + github_project_v2: + project_id: PVT_PROJECT_1 + type_field_id: PVT_FIELD_TYPE + type_option_ids: + story: PVT_OPTION_STORY +""".strip(), + encoding="utf-8", + ) + + created_payloads: list[dict] = [] + adapter = _FakeAdapter(items=[], relationships=[], created=created_payloads) + monkeypatch.setattr(AdapterRegistry, "get_adapter", lambda _adapter: adapter) + + result = runner.invoke( + backlog_app, + [ + "add", + "--project-id", + "nold-ai/specfact-cli", + "--adapter", + "github", + "--template", + "github_projects", + "--custom-config", + str(config_file), + "--type", + "story", + "--title", + "Implement X", + "--body", + "Acceptance criteria", + "--non-interactive", + ], + ) + + assert result.exit_code == 0 + assert created_payloads + provider_fields = created_payloads[0].get("provider_fields") + assert isinstance(provider_fields, dict) + github_project_v2 = provider_fields.get("github_project_v2") + assert isinstance(github_project_v2, dict) + assert github_project_v2.get("project_id") == "PVT_PROJECT_1" + assert github_project_v2.get("type_field_id") == "PVT_FIELD_TYPE" + assert github_project_v2.get("type_option_ids", {}).get("story") == "PVT_OPTION_STORY" + + +def test_backlog_add_forwards_github_project_v2_from_backlog_config(monkeypatch, tmp_path: Path) -> None: + """backlog add loads GitHub ProjectV2 config from .specfact/backlog-config.yaml provider settings.""" + from specfact_cli.adapters.registry import AdapterRegistry + + spec_dir = tmp_path / ".specfact" + spec_dir.mkdir(parents=True, exist_ok=True) + (spec_dir / "backlog-config.yaml").write_text( + """ +backlog_config: + providers: + github: + adapter: github + project_id: nold-ai/specfact-demo-repo + settings: + provider_fields: + github_project_v2: + project_id: PVT_PROJECT_SPEC + type_field_id: PVT_FIELD_TYPE_SPEC + type_option_ids: + task: PVT_OPTION_TASK_SPEC + github_issue_types: + type_ids: + task: IT_TASK_SPEC +""".strip(), + encoding="utf-8", + ) + + created_payloads: list[dict] = [] + adapter = _FakeAdapter(items=[], relationships=[], created=created_payloads) + monkeypatch.setattr(AdapterRegistry, "get_adapter", lambda _adapter: adapter) + + result = runner.invoke( + backlog_app, + [ + "add", + "--project-id", + "nold-ai/specfact-demo-repo", + "--adapter", + "github", + "--template", + "github_projects", + "--type", + "task", + "--title", + "Implement task", + "--body", + "Body", + "--non-interactive", + "--repo-path", + str(tmp_path), + ], + ) + + assert result.exit_code == 0 + assert created_payloads + provider_fields = created_payloads[0].get("provider_fields") + assert isinstance(provider_fields, dict) + github_project_v2 = provider_fields.get("github_project_v2") + assert isinstance(github_project_v2, dict) + assert github_project_v2.get("project_id") == "PVT_PROJECT_SPEC" + assert github_project_v2.get("type_field_id") == "PVT_FIELD_TYPE_SPEC" + assert github_project_v2.get("type_option_ids", {}).get("task") == "PVT_OPTION_TASK_SPEC" + github_issue_types = provider_fields.get("github_issue_types") + assert isinstance(github_issue_types, dict) + assert github_issue_types.get("type_ids", {}).get("task") == "IT_TASK_SPEC" + assert "repository issue-type mapping is not configured" not in result.stdout + + +def test_backlog_add_warns_when_github_issue_type_mapping_missing(monkeypatch) -> None: + """backlog add warns when repository issue-type mapping is unavailable for selected type.""" + from specfact_cli.adapters.registry import AdapterRegistry + + created_payloads: list[dict] = [] + adapter = _FakeAdapter(items=[], relationships=[], created=created_payloads) + monkeypatch.setattr(AdapterRegistry, "get_adapter", lambda _adapter: adapter) + + result = runner.invoke( + backlog_app, + [ + "add", + "--project-id", + "nold-ai/specfact-demo-repo", + "--adapter", + "github", + "--type", + "spike", + "--title", + "Sample task", + "--body", + "Body", + "--non-interactive", + ], + ) + + assert result.exit_code == 0 + assert "repository issue-type mapping is not configured" in result.stdout diff --git a/modules/backlog-core/tests/unit/test_backlog_protocol.py b/modules/backlog-core/tests/unit/test_backlog_protocol.py index 7fe06733..bd5db3c7 100644 --- a/modules/backlog-core/tests/unit/test_backlog_protocol.py +++ b/modules/backlog-core/tests/unit/test_backlog_protocol.py @@ -32,6 +32,10 @@ def fetch_relationships(self, project_id: str) -> list[dict]: _ = project_id return [] + def create_issue(self, project_id: str, payload: dict) -> dict: + _ = project_id, payload + return {"id": "1", "key": "1", "url": "https://example.test/1"} + class _InvalidAdapter: def fetch_all_issues(self, project_id: str, filters: dict | None = None) -> list[dict]: diff --git a/modules/bundle-mapper/module-package.yaml b/modules/bundle-mapper/module-package.yaml index e0c8f2c0..bb5b4886 100644 --- a/modules/bundle-mapper/module-package.yaml +++ b/modules/bundle-mapper/module-package.yaml @@ -1,9 +1,9 @@ name: bundle-mapper -version: 0.1.0 +version: "0.1.0" commands: [] pip_dependencies: [] module_dependencies: [] -core_compatibility: '>=0.28.0,<1.0.0' +core_compatibility: ">=0.28.0,<1.0.0" tier: community schema_extensions: project_bundle: {} diff --git a/modules/bundle-mapper/src/bundle_mapper/__init__.py b/modules/bundle-mapper/src/bundle_mapper/__init__.py index d23cba4a..94555d77 100644 --- a/modules/bundle-mapper/src/bundle_mapper/__init__.py +++ b/modules/bundle-mapper/src/bundle_mapper/__init__.py @@ -1,7 +1,8 @@ """Bundle mapper module: confidence-based spec-to-bundle assignment with interactive review.""" -from bundle_mapper.mapper.engine import BundleMapper -from bundle_mapper.models.bundle_mapping import BundleMapping +from .commands import commands_interface +from .mapper.engine import BundleMapper +from .models.bundle_mapping import BundleMapping -__all__ = ["BundleMapper", "BundleMapping"] +__all__ = ["BundleMapper", "BundleMapping", "commands_interface"] diff --git a/modules/bundle-mapper/src/bundle_mapper/commands/__init__.py b/modules/bundle-mapper/src/bundle_mapper/commands/__init__.py index 191c5148..90943ae6 100644 --- a/modules/bundle-mapper/src/bundle_mapper/commands/__init__.py +++ b/modules/bundle-mapper/src/bundle_mapper/commands/__init__.py @@ -1 +1,21 @@ -"""Command hooks for backlog refine/import --auto-bundle (used when module is loaded).""" +"""Command hooks and ModuleIOContract exports for bundle-mapper.""" + +from specfact_cli.contracts.module_interface import ModuleIOContract +from specfact_cli.modules import module_io_shim + + +_MODULE_IO_CONTRACT = ModuleIOContract +import_to_bundle = module_io_shim.import_to_bundle +export_from_bundle = module_io_shim.export_from_bundle +sync_with_bundle = module_io_shim.sync_with_bundle +validate_bundle = module_io_shim.validate_bundle +commands_interface = module_io_shim + +__all__ = [ + "_MODULE_IO_CONTRACT", + "commands_interface", + "export_from_bundle", + "import_to_bundle", + "sync_with_bundle", + "validate_bundle", +] diff --git a/modules/bundle-mapper/src/bundle_mapper/mapper/engine.py b/modules/bundle-mapper/src/bundle_mapper/mapper/engine.py index 67592cbf..95c78b36 100644 --- a/modules/bundle-mapper/src/bundle_mapper/mapper/engine.py +++ b/modules/bundle-mapper/src/bundle_mapper/mapper/engine.py @@ -97,6 +97,8 @@ def _score_historical_mapping(self, item: BacklogItem) -> tuple[str | None, floa continue counts = entry.get("counts", {}) for bid, cnt in counts.items(): + if self._available_bundle_ids and bid not in self._available_bundle_ids: + continue if cnt > best_count: best_count = cnt best_bundle = bid @@ -182,10 +184,12 @@ def compute_mapping(self, item: BacklogItem) -> BundleMapping: if content_list: best_content = content_list[0] contrib = WEIGHT_CONTENT * best_content[1] - weighted += contrib if primary_bundle_id is None: + weighted += contrib primary_bundle_id = best_content[0] reasons.append(self._explain_score(best_content[0], best_content[1], "content_similarity")) + elif best_content[0] == primary_bundle_id: + weighted += contrib confidence = min(1.0, weighted) candidates: list[tuple[str, float]] = [] diff --git a/modules/bundle-mapper/src/bundle_mapper/mapper/history.py b/modules/bundle-mapper/src/bundle_mapper/mapper/history.py index 9a3bb1a0..bb33eb32 100644 --- a/modules/bundle-mapper/src/bundle_mapper/mapper/history.py +++ b/modules/bundle-mapper/src/bundle_mapper/mapper/history.py @@ -7,6 +7,7 @@ import re from pathlib import Path from typing import Any, Protocol, runtime_checkable +from urllib.parse import quote, unquote import yaml from beartype import beartype @@ -58,25 +59,58 @@ def matches(self, item: _ItemLike) -> bool: def item_key(item: _ItemLike) -> str: """Build a stable key for history lookup (area, assignee, tags).""" - area = (item.area or "").strip() - assignee = (item.assignees[0] if item.assignees else "").strip() - tags_str = "|".join(sorted(t.strip() for t in item.tags if t)) - return f"area={area}|assignee={assignee}|tags={tags_str}" + area = quote((item.area or "").strip(), safe="") + assignee = quote((item.assignees[0] if item.assignees else "").strip(), safe="") + # Use comma-separated, URL-encoded tag values to avoid delimiter collisions. + tags = [quote(t.strip(), safe="") for t in sorted(t.strip() for t in item.tags if t)] + tags_str = ",".join(tags) + return f"area={area};assignee={assignee};tags={tags_str}" def item_keys_similar(key_a: str, key_b: str) -> bool: """Return True if keys share at least 2 of 3 non-empty components (area, assignee, tags). Empty fields are ignored to avoid matching unrelated items.""" - def parts(k: str) -> tuple[str, str, str]: - d: dict[str, str] = {} - for seg in k.split("|"): + def _parse_key(k: str) -> tuple[str, str, str]: + # Preferred modern format: area=...;assignee=...;tags=a,b + if ";" in k: + d: dict[str, str] = {} + for seg in k.split(";"): + if "=" in seg: + name, val = seg.split("=", 1) + d[name.strip()] = val.strip() + area = unquote(d.get("area", "")) + assignee = unquote(d.get("assignee", "")) + tags_raw = d.get("tags", "") + tags = [unquote(t) for t in tags_raw.split(",") if t] + return (area, assignee, ",".join(tags)) + + # Legacy format: area=...|assignee=...|tags=a|b + d_legacy: dict[str, str] = {} + segments = k.split("|") + idx = 0 + while idx < len(segments): + seg = segments[idx] if "=" in seg: name, val = seg.split("=", 1) - d[name.strip()] = val.strip() - return (d.get("area", ""), d.get("assignee", ""), d.get("tags", "")) - - a1, a2, a3 = parts(key_a) - b1, b2, b3 = parts(key_b) + name = name.strip() + val = val.strip() + if name == "tags": + tag_parts = [val] if val else [] + j = idx + 1 + while j < len(segments) and "=" not in segments[j]: + if segments[j]: + tag_parts.append(segments[j].strip()) + j += 1 + d_legacy["tags"] = ",".join(tag_parts) + idx = j + continue + d_legacy[name] = val + idx += 1 + + return (d_legacy.get("area", ""), d_legacy.get("assignee", ""), d_legacy.get("tags", "")) + + a1, a2, a3 = _parse_key(key_a) + b1, b2, b3 = _parse_key(key_b) matches = 0 if a1 and b1 and a1 == b1: matches += 1 @@ -129,10 +163,17 @@ def load_bundle_mapping_config(config_path: Path | None = None) -> dict[str, Any with open(config_path, encoding="utf-8") as f: data = yaml.safe_load(f) or {} bm = (data.get("backlog") or {}).get("bundle_mapping") or {} + + def _safe_float(value: Any, default: float) -> float: + try: + return float(value) + except (TypeError, ValueError): + return default + return { "rules": bm.get("rules", []), "history": bm.get("history", {}), "explicit_label_prefix": bm.get("explicit_label_prefix", DEFAULT_LABEL_PREFIX), - "auto_assign_threshold": float(bm.get("auto_assign_threshold", DEFAULT_AUTO_ASSIGN_THRESHOLD)), - "confirm_threshold": float(bm.get("confirm_threshold", DEFAULT_CONFIRM_THRESHOLD)), + "auto_assign_threshold": _safe_float(bm.get("auto_assign_threshold"), DEFAULT_AUTO_ASSIGN_THRESHOLD), + "confirm_threshold": _safe_float(bm.get("confirm_threshold"), DEFAULT_CONFIRM_THRESHOLD), } diff --git a/modules/bundle-mapper/src/bundle_mapper/ui/interactive.py b/modules/bundle-mapper/src/bundle_mapper/ui/interactive.py index 97a92998..01df76ec 100644 --- a/modules/bundle-mapper/src/bundle_mapper/ui/interactive.py +++ b/modules/bundle-mapper/src/bundle_mapper/ui/interactive.py @@ -82,8 +82,8 @@ def ask_bundle_mapping( if 1 <= i <= len(available_bundles): return available_bundles[i - 1] except ValueError: - console.print("[red]Invalid selection. Skipping bundle selection.[/red]") - return None + pass + return None if choice.isdigit() and candidates: i = int(choice) if 1 <= i <= len(candidates): diff --git a/modules/bundle-mapper/tests/unit/test_bundle_mapper_engine.py b/modules/bundle-mapper/tests/unit/test_bundle_mapper_engine.py index dfad84e0..2bf430cc 100644 --- a/modules/bundle-mapper/tests/unit/test_bundle_mapper_engine.py +++ b/modules/bundle-mapper/tests/unit/test_bundle_mapper_engine.py @@ -2,6 +2,9 @@ from __future__ import annotations +from pathlib import Path + +import yaml from bundle_mapper.mapper.engine import BundleMapper from specfact_cli.models.backlog_item import BacklogItem @@ -65,3 +68,50 @@ def test_weighted_calculation_explicit_dominates() -> None: m = mapper.compute_mapping(item) assert m.primary_bundle_id == "backend" assert m.confidence >= 0.8 + + +def test_historical_mapping_ignores_stale_bundle_ids(tmp_path: Path) -> None: + config_path = tmp_path / "config.yaml" + key = "area=backend;assignee=alice;tags=bug,login" + config_path.write_text( + yaml.safe_dump( + { + "backlog": { + "bundle_mapping": { + "history": { + key: { + "counts": { + "removed-bundle": 50, + "backend-services": 2, + } + } + } + } + } + }, + sort_keys=False, + ), + encoding="utf-8", + ) + + mapper = BundleMapper(available_bundle_ids=["backend-services"], config_path=config_path) + item = _item(assignees=["alice"], area="backend", tags=["bug", "login"]) + mapping = mapper.compute_mapping(item) + + assert mapping.primary_bundle_id == "backend-services" + + +def test_conflicting_content_signal_does_not_increase_primary_confidence() -> None: + mapper = BundleMapper( + available_bundle_ids=["alpha", "beta"], + bundle_spec_keywords={"beta": {"beta"}}, + ) + item = _item( + tags=["bundle:alpha"], + title="beta", + ) + + mapping = mapper.compute_mapping(item) + + assert mapping.primary_bundle_id == "alpha" + assert mapping.confidence == 0.8 diff --git a/modules/bundle-mapper/tests/unit/test_mapping_history.py b/modules/bundle-mapper/tests/unit/test_mapping_history.py index 089e1015..291c65b7 100644 --- a/modules/bundle-mapper/tests/unit/test_mapping_history.py +++ b/modules/bundle-mapper/tests/unit/test_mapping_history.py @@ -69,3 +69,29 @@ def test_save_user_confirmed_mapping_increments_history() -> None: break else: pytest.fail("Expected backend-services in history counts") + + +def test_item_key_similarity_does_not_false_match_tag_lists() -> None: + k1 = item_key(_item(assignees=["alice"], area="api", tags=["a", "b"])) + k2 = item_key(_item(assignees=["alice"], area="web", tags=["a"])) + + assert item_keys_similar(k1, k2) is False + + +def test_load_bundle_mapping_config_malformed_thresholds_use_defaults(tmp_path: Path) -> None: + config_path = tmp_path / "config.yaml" + config_path.write_text( + """ +backlog: + bundle_mapping: + auto_assign_threshold: high + confirm_threshold: medium +""".strip() + + "\n", + encoding="utf-8", + ) + + cfg = load_bundle_mapping_config(config_path=config_path) + + assert cfg["auto_assign_threshold"] == 0.8 + assert cfg["confirm_threshold"] == 0.5 diff --git a/openspec/CHANGE_ORDER.md b/openspec/CHANGE_ORDER.md index cade5eb3..189d5dab 100644 --- a/openspec/CHANGE_ORDER.md +++ b/openspec/CHANGE_ORDER.md @@ -30,11 +30,13 @@ Changes are grouped by **module** and prefixed with **`<module>-NN-`** so implem | policy-engine-01-unified-framework | implemented 2026-02-17 (archived) | | patch-mode-01-preview-apply | implemented 2026-02-18 (archived) | | validation-01-deep-validation | implemented 2026-02-18 (archived) | -| bundle-mapper-01-mapping-strategy | implemented 2026-02-18 (archived) | +| bundle-mapper-01-mapping-strategy | implemented 2026-02-22 (archived) | | backlog-core-01-dependency-analysis-commands | implemented 2026-02-18 (archived) | +| backlog-core-02-interactive-issue-creation | implemented 2026-02-22 (archived) | | ceremony-cockpit-01-ceremony-aliases | implemented 2026-02-18 (archived) | | workflow-01-git-worktree-management | implemented 2026-02-18 (archived) | | verification-01-wave1-delta-closure | implemented 2026-02-18 (archived) | +| marketplace-01-central-module-registry | implemented 2026-02-22 (archived) | ### Pending @@ -58,12 +60,13 @@ These are derived extensions of the same 2026-02-15 plan and are required to ope | Module | Order | Change folder | GitHub # | Blocked by | |--------|-------|---------------|----------|------------| | — | — | arch-06, arch-07 implemented 2026-02-16 (see Implemented above) | — | — | +| arch | 08 | arch-08-documentation-discrepancies-remediation | [#291](https://github.com/nold-ai/specfact-cli/issues/291) | — | ### Marketplace (module distribution) | Module | Order | Change folder | GitHub # | Blocked by | |--------|-------|---------------|----------|------------| -| marketplace | 01 | marketplace-01-central-module-registry | [#214](https://github.com/nold-ai/specfact-cli/issues/214) | #208 | +| marketplace | 01 | marketplace-01-central-module-registry (implemented 2026-02-22; archived) | [#214](https://github.com/nold-ai/specfact-cli/issues/214) | #208 | | marketplace | 02 | marketplace-02-advanced-marketplace-features | [#215](https://github.com/nold-ai/specfact-cli/issues/215) | #214 | ### Cross-cutting foundations (no hard dependencies — implement early) @@ -73,7 +76,7 @@ These are derived extensions of the same 2026-02-15 plan and are required to ope | policy-engine | 01 | policy-engine-01-unified-framework (implemented 2026-02-17; archived) | [#176](https://github.com/nold-ai/specfact-cli/issues/176) | — | | patch-mode | 01 | patch-mode-01-preview-apply (implemented 2026-02-18; archived) | [#177](https://github.com/nold-ai/specfact-cli/issues/177) | — | | validation | 01 | validation-01-deep-validation (implemented 2026-02-18; archived) | [#163](https://github.com/nold-ai/specfact-cli/issues/163) | — | -| bundle-mapper | 01 | bundle-mapper-01-mapping-strategy (implemented 2026-02-18; archived) | [#121](https://github.com/nold-ai/specfact-cli/issues/121) | — | +| bundle-mapper | 01 | bundle-mapper-01-mapping-strategy (implemented 2026-02-22; archived) | [#121](https://github.com/nold-ai/specfact-cli/issues/121) | — | | verification | 01 | verification-01-wave1-delta-closure (implemented 2026-02-18; archived) | [#276](https://github.com/nold-ai/specfact-cli/issues/276) | #177 ✅, #163 ✅, #116 ✅, #121 ✅ | ### CI/CD (workflow and artifacts) @@ -93,7 +96,7 @@ These are derived extensions of the same 2026-02-15 plan and are required to ope | Module | Order | Change folder | GitHub # | Blocked by | |--------|-------|---------------|----------|------------| | backlog-core | 01 | backlog-core-01-dependency-analysis-commands ✅ (implemented 2026-02-18; archived) | [#116](https://github.com/nold-ai/specfact-cli/issues/116) | — | -| backlog-core | 02 | backlog-core-02-interactive-issue-creation | [#173](https://github.com/nold-ai/specfact-cli/issues/173) | #116 (optional: #176, #177) | +| backlog-core | 02 | backlog-core-02-interactive-issue-creation (implemented 2026-02-22; archived) | [#173](https://github.com/nold-ai/specfact-cli/issues/173) | #116 (optional: #176, #177) | ### backlog-scrum diff --git a/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/CHANGE_VALIDATION.md b/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/CHANGE_VALIDATION.md new file mode 100644 index 00000000..b398404c --- /dev/null +++ b/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/CHANGE_VALIDATION.md @@ -0,0 +1,21 @@ +# Change Validation: arch-08-documentation-discrepancies-remediation + +**Validated**: 2026-02-22 + +## OpenSpec validation + +- **Command**: `openspec validate arch-08-documentation-discrepancies-remediation --strict` +- **Result**: Passed + +## Quality gates run for this change + +- `hatch run format` with temporary hatch/virtualenv cache overrides: **Passed** +- `hatch run type-check` with temporary hatch/virtualenv cache overrides: **Passed** (0 errors; existing repository warnings reported) +- `hatch run yaml-lint` with temporary hatch/virtualenv cache overrides: **Passed** + +## Summary + +- Source tracking synced to GitHub issue `#291`. +- Architecture/reference docs updated to align with current implementation. +- Added ADR template + ADR-0001, module development guide, adapter development guide updates, and implementation status page. +- Navigation updated in `docs/_layouts/default.html` and architecture references added to `docs/index.md`. diff --git a/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/DOC_VERIFICATION_CHECKLIST.md b/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/DOC_VERIFICATION_CHECKLIST.md new file mode 100644 index 00000000..47fa9734 --- /dev/null +++ b/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/DOC_VERIFICATION_CHECKLIST.md @@ -0,0 +1,30 @@ +# Documentation Verification Checklist + +This checklist maps required discrepancy-report items to concrete documentation updates for `arch-08-documentation-discrepancies-remediation`. + +## Required discrepancy items (pre-implementation mapping) + +- [x] Item 1 (module system status): `docs/reference/architecture.md` module system section + `docs/architecture/module-system.md` overview. +- [x] Item 2 (BridgeAdapter interface mismatch): `docs/reference/architecture.md` adapter interface section + `docs/guides/adapter-development.md` BridgeAdapter methods. +- [x] Item 3 (operational modes gap): `docs/reference/architecture.md` operational modes section + `docs/architecture/implementation-status.md` planned vs implemented. +- [x] Item 4 (layer mismatch): `docs/reference/architecture.md` architecture layer overview + `docs/architecture/README.md` overview. +- [x] Item 5 (registry implementation details): `docs/reference/architecture.md` CommandRegistry details + `docs/architecture/module-system.md` discovery/load flow. +- [x] Item 6 (module package structure): `docs/guides/module-development.md` + cross-link in `docs/architecture/module-system.md`. +- [x] Item 7 (ToolCapabilities): `docs/guides/adapter-development.md` + `docs/reference/architecture.md` adapter selection section. +- [x] Item 11 (non-existent diagram components): `docs/architecture/component-graph.md` and related architecture diagrams. +- [x] Item 12 (outdated performance metrics): `docs/reference/architecture.md` performance claims revised/removed. +- [x] Item 13 (missing error handling docs): `docs/reference/architecture.md` error handling conventions section. +- [x] Item 17 (terminology inconsistency): standardize to `ProjectBundle` and `PlanBundle` where referencing models and files. +- [x] Item 18 (version references): normalize version references to current state or remove unstable version claims. +- [x] Item 19 (feature maturity mismatch): remove "transitioning/experimental" phrasing for module system. +- [x] Item 20 (no ADRs): add `docs/architecture/adr/README.md`, `docs/architecture/adr/template.md`, and first ADR. +- [x] Item 21 (missing module development guide): add `docs/guides/module-development.md` and nav link. +- [x] Item 22 (missing adapter development guide): update `docs/guides/adapter-development.md` and nav/discoverability links. + +## Post-edit verification + +- [x] No remaining "transitioning" references for module system in architecture docs. +- [x] No architecture diagrams claim non-existent "DevOps Adapters" component. +- [x] `BridgeAdapter` docs include `detect`, `import_artifact`, `export_artifact`, `load_change_tracking`, `save_change_tracking`. +- [x] Implementation status page links planned features to OpenSpec changes (including `architecture-01-solution-layer`). +- [x] Navigation includes ADR, module-development guide, adapter-development guide, and implementation status page. diff --git a/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/proposal.md b/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/proposal.md new file mode 100644 index 00000000..7277fbc4 --- /dev/null +++ b/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/proposal.md @@ -0,0 +1,53 @@ +# Change: Architecture Documentation Discrepancies Remediation + +## Why + +The architecture discrepancies report (`docs/architecture/discrepancies-report.md`) identified 25 conflicts between documentation, codebase, and OpenSpec: module system described as "transitioning" while code is production-ready, incomplete BridgeAdapter and layer documentation, missing development guides, and spec–code gaps (e.g. architecture commands specified but not yet implemented). These cause confusion for contributors and understate feature readiness. Remediating docs and aligning specs with current capabilities will restore consistency and improve developer experience without changing runtime behavior. + +## What Changes + +- **UPDATE** `docs/reference/architecture.md` and `docs/architecture/*`: Reflect module system as production-ready since v0.27; document full BridgeAdapter interface (including `load_change_tracking` / `save_change_tracking`); describe actual layer structure (Adapter, Analysis, Module layers in addition to Specification/Contract/Enforcement); clarify operational modes (current implementation vs planned); add CommandRegistry implementation details and required module package structure. +- **UPDATE** Documentation: Add ToolCapabilities model and adapter selection; document error handling patterns and conventions; update or remove outdated performance metrics; fix terminology (Project Bundle / Plan Bundle); standardize version references; remove or correct Mermaid diagrams that reference non-existent components (e.g. DevOps Adapters). +- **NEW** ADR template and initial ADRs: Create `docs/architecture/adr/` with template and at least one ADR for a major decision (e.g. module-first architecture). +- **NEW** Module development guide: `docs/guides/module-development.md` (or equivalent) with required structure, `module-package.yaml` schema, naming conventions, and patterns. +- **NEW** Adapter development guide: Extend or add `docs/guides/adapter-development.md` (or integrate into existing creating-custom-bridges) with BridgeAdapter interface, ToolCapabilities, and examples. +- **ALIGN** Specs with current state: Where specs describe not-yet-implemented behavior (e.g. architecture derive/validate/trace, protocol FSM engine), ensure docs clearly state "planned" or "specified in OpenSpec change architecture-01" and document current limitations (change tracking, protocol validation) in a single place (e.g. docs/architecture/implementation-status.md or equivalent). + +No new application code or CLI behavior; documentation and spec-doc alignment only. + +## Capabilities + +- **documentation-alignment**: Architecture and reference docs accurately reflect current implementation (module system status, BridgeAdapter, layers, modes, CommandRegistry, module structure, ToolCapabilities, error handling, performance, terminology, versions, diagrams). +- **adr-template**: ADR template and initial ADRs for major architectural decisions, discoverable from docs. +- **module-development-guide**: Single guide describing how to develop and package new modules (structure, manifest, commands, contracts). +- **adapter-development-guide**: Guide for implementing adapters (BridgeAdapter interface, change tracking, ToolCapabilities, examples). +- **implementation-status-docs**: Documented current limitations and spec–code alignment (what is implemented vs planned), with pointers to relevant OpenSpec changes. + +## Impact + +- **Affected documentation**: + - `docs/reference/architecture.md` + - `docs/architecture/` (README, module-system, component-graph, data-flow, state-machines, interface-contracts, discrepancies-report.md) + - New: `docs/architecture/adr/` (template + initial ADR(s)), `docs/architecture/implementation-status.md` (or equivalent) + - `docs/guides/` (module-development, adapter-development or extended creating-custom-bridges) + - `docs/_layouts/default.html` (navigation for new pages) + - `README.md` / `docs/index.md` if terminology or version references are updated +- **Affected specs**: Optional deltas in existing changes (e.g. architecture-01) to add "current implementation status" notes; no new runtime contracts. +- **Backward compatibility**: N/A (documentation only). +- **Rollback plan**: Revert documentation commits. + +**Documentation impact (per config.yaml):** All changes are documentation-only and improve accuracy and discoverability at https://docs.specfact.io. New pages will have correct Jekyll front-matter and be linked from `docs/_layouts/default.html`. + +## Clarifications for implementation + +- **Operational modes**: Remediation will document current state (detector exists; mode-specific behavior as planned) rather than implement new mode logic. +- **Architecture commands**: Will not implement `specfact architecture derive|validate|trace` in this change; docs will state they are specified in `architecture-01-solution-layer` and not yet implemented. +- **Protocol FSM / change tracking**: Will document current limitations and point to relevant OpenSpec changes; no FSM engine or full change-tracking implementation in this change. + +## Source Tracking + +<!-- source_repo: nold-ai/specfact-cli --> +- **GitHub Issue**: #291 +- **Issue URL**: https://github.com/nold-ai/specfact-cli/issues/291 +- **Repository**: nold-ai/specfact-cli +- **Last Synced Status**: synced-2026-02-22 diff --git a/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/specs/adapter-development-guide/spec.md b/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/specs/adapter-development-guide/spec.md new file mode 100644 index 00000000..2b1b4f33 --- /dev/null +++ b/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/specs/adapter-development-guide/spec.md @@ -0,0 +1,45 @@ +# adapter-development-guide Specification + +Adapter developers have a guide that describes the full BridgeAdapter interface, ToolCapabilities, and how to implement or extend adapters. + +## ADDED Requirements + +### Requirement: Full BridgeAdapter interface documented + +The adapter development guide (or extended creating-custom-bridges) SHALL document the full BridgeAdapter interface: detect, import_artifact, export_artifact, load_change_tracking, save_change_tracking (or equivalent), with contracts and usage notes. + +#### Scenario: Developer implements adapter +- **GIVEN** the adapter development guide (or extended creating-custom-bridges) +- **WHEN** a developer implements an adapter +- **THEN** the full BridgeAdapter interface is documented +- **AND** contracts and usage notes are provided + +### Requirement: ToolCapabilities and adapter selection documented + +The ToolCapabilities model and its role in adapter selection (e.g. sync modes) SHALL be documented, with reference to code (e.g. models/bridge.py) if needed. + +#### Scenario: Developer declares or uses capabilities +- **GIVEN** the adapter documentation +- **WHEN** a developer needs to declare or use adapter capabilities +- **THEN** ToolCapabilities model is documented +- **AND** its role in adapter selection is explained + +### Requirement: Examples or code references provided + +The adapter guide SHALL provide at least one code reference or minimal example (e.g. base adapter, existing OpenSpec/SpecKit adapter) so that implementation is clear. + +#### Scenario: Developer follows adapter guide +- **GIVEN** the adapter guide +- **WHEN** a developer follows the guide +- **THEN** at least one code reference or minimal example is provided +- **AND** implementation path is clear + +### Requirement: Adapter guide discoverable + +The adapter development content SHALL be reachable from the docs navigation and from bridge/architecture documentation. + +#### Scenario: User looks for adapter development +- **GIVEN** the published docs +- **WHEN** a user looks for adapter or bridge development +- **THEN** the adapter development content is reachable from the docs navigation +- **AND** from bridge/architecture documentation diff --git a/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/specs/adr-template/spec.md b/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/specs/adr-template/spec.md new file mode 100644 index 00000000..d44fd6a7 --- /dev/null +++ b/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/specs/adr-template/spec.md @@ -0,0 +1,35 @@ +# adr-template Specification + +Architecture Decision Records (ADRs) are available so that major architectural decisions are recorded and discoverable. + +## ADDED Requirements + +### Requirement: ADR template exists + +The docs SHALL provide an ADR template with at least: title, status, context, decision, consequences. + +#### Scenario: Maintainer records new decision +- **GIVEN** the docs repository +- **WHEN** a maintainer wants to record a new architectural decision +- **THEN** an ADR template exists (e.g. in docs/architecture/adr/template.md) +- **AND** the template includes title, status, context, decision, consequences + +### Requirement: At least one ADR present + +The ADR directory SHALL contain at least one ADR (e.g. for module-first architecture) following the template. + +#### Scenario: Reader opens architecture docs +- **GIVEN** the ADR directory +- **WHEN** a reader opens the architecture documentation +- **THEN** at least one ADR is present following the template +- **AND** it documents a major architectural decision + +### Requirement: ADRs discoverable from docs + +ADRs SHALL be linked from docs/architecture/README.md or docs/reference/architecture.md so they can be found without searching the repo. + +#### Scenario: User navigates architecture docs +- **GIVEN** the docs site (e.g. docs.specfact.io) +- **WHEN** a user navigates architecture or reference docs +- **THEN** ADRs are linked +- **AND** discoverable from the menu or architecture index diff --git a/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/specs/documentation-alignment/spec.md b/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/specs/documentation-alignment/spec.md new file mode 100644 index 00000000..2341db4e --- /dev/null +++ b/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/specs/documentation-alignment/spec.md @@ -0,0 +1,94 @@ +# documentation-alignment Specification + +Documentation accurately reflects current implementation so that contributors and users are not misled. + +## ADDED Requirements + +### Requirement: Module system status in docs + +The published architecture documentation SHALL state that the module system is production-ready (e.g. since v0.27) and SHALL NOT describe it as "transitioning" or "experimental." + +#### Scenario: Reader checks module system status +- **GIVEN** the published architecture documentation (e.g. docs/reference/architecture.md, docs/architecture/module-system.md) +- **WHEN** a reader looks for the current state of the module system +- **THEN** the docs state production-ready status +- **AND** do not use "transitioning" or "experimental" for the module system + +### Requirement: BridgeAdapter interface documented + +The adapter documentation SHALL include the full BridgeAdapter interface: detect, import_artifact, export_artifact, load_change_tracking, save_change_tracking (or equivalent), with current behavior and contracts. + +#### Scenario: Developer implements BridgeAdapter +- **GIVEN** the adapter documentation +- **WHEN** a developer implements or extends a BridgeAdapter +- **THEN** the documented interface includes all methods above +- **AND** contracts and usage are described + +### Requirement: Architecture layers match codebase + +The architecture overview SHALL describe the actual layers (Specification, Contract, Enforcement, and where relevant Adapter, Analysis, Module layers) so they match the codebase structure. + +#### Scenario: Reader learns layer structure +- **GIVEN** the architecture overview +- **WHEN** a reader learns the high-level layer structure +- **THEN** the docs describe actual layers present in code +- **AND** do not omit Adapter, Analysis, or Module layers where they exist + +### Requirement: Operational modes clarity + +The documentation for CI/CD and CoPilot modes SHALL clarify current mode detection and any limitations (e.g. mode-specific behavior as planned), and SHALL NOT imply full mode implementations that do not exist. + +#### Scenario: Reader checks mode implementation +- **GIVEN** the documentation for CI/CD and CoPilot modes +- **WHEN** a reader checks what is implemented +- **THEN** current detector behavior is stated +- **AND** planned vs implemented behavior is distinguished + +### Requirement: CommandRegistry and module structure documented + +The architecture or module docs SHALL describe lazy loading, metadata caching, and the required module package structure (e.g. module-package.yaml, src/<name>/main.py) and naming conventions. + +#### Scenario: Developer needs registry or module layout details +- **GIVEN** the architecture or module docs +- **WHEN** a developer needs implementation details for the command registry or module layout +- **THEN** lazy loading and metadata caching are described +- **AND** required module package structure and naming are documented + +### Requirement: ToolCapabilities and error handling documented + +The ToolCapabilities model and adapter selection SHALL be documented; error handling patterns (custom exceptions, logging) SHALL be described in reference or adapter documentation. + +#### Scenario: Developer looks for capabilities or error handling +- **GIVEN** the reference or adapter documentation +- **WHEN** a developer looks for adapter capabilities or error handling +- **THEN** ToolCapabilities and adapter selection are documented +- **AND** error handling patterns are described + +### Requirement: Terminology and version consistency + +The documentation set SHALL use consistent terminology (e.g. Project Bundle, Plan Bundle) and SHALL standardize or remove version references that cause confusion. + +#### Scenario: Same concept referenced across docs +- **GIVEN** the documentation set +- **WHEN** the same concept is referenced +- **THEN** terminology is consistent +- **AND** version references are standardized or removed where confusing + +### Requirement: Diagrams reference only existing or planned components + +Any Mermaid or component diagram in the docs SHALL show only components that exist in the codebase or are clearly marked as planned; non-existent components (e.g. unimplemented "DevOps Adapters") SHALL be removed or relabeled. + +#### Scenario: Reader interprets diagram +- **GIVEN** any Mermaid or component diagram in the docs +- **WHEN** a reader interprets the diagram +- **THEN** only existing or clearly planned components are shown +- **AND** non-existent components are removed or relabeled + +### Requirement: Performance metrics current or removed + +Any stated performance or timing in the docs SHALL reflect current benchmarks or SHALL be removed if outdated. + +#### Scenario: Docs publish performance claims +- **GIVEN** any stated performance or timing (e.g. "typical execution: < 10s") +- **WHEN** the docs are published +- **THEN** metrics reflect current benchmarks or are removed if outdated diff --git a/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/specs/implementation-status-docs/spec.md b/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/specs/implementation-status-docs/spec.md new file mode 100644 index 00000000..f8550572 --- /dev/null +++ b/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/specs/implementation-status-docs/spec.md @@ -0,0 +1,45 @@ +# implementation-status-docs Specification + +A single, maintained place describes what is implemented versus planned and points to OpenSpec changes for planned features. + +## ADDED Requirements + +### Requirement: Implemented vs planned clearly stated + +The implementation status documentation SHALL clearly mark each feature (e.g. architecture commands, protocol FSM, change tracking) as implemented or planned, with brief notes on scope where relevant. + +#### Scenario: Reader checks feature status +- **GIVEN** the implementation status documentation (e.g. docs/architecture/implementation-status.md) +- **WHEN** a reader checks the status of a feature +- **THEN** each feature is clearly marked as implemented or planned +- **AND** scope notes are provided (e.g. change tracking: models exist, limited adapter support) + +### Requirement: Pointers to OpenSpec for planned features + +For planned or partially implemented features, the implementation status doc SHALL link or reference the relevant OpenSpec change (e.g. architecture-01-solution-layer for architecture derive/validate/trace). + +#### Scenario: Reader finds spec for planned feature +- **GIVEN** a planned or partially implemented feature +- **WHEN** the implementation status doc describes it +- **THEN** it links or references the relevant OpenSpec change +- **AND** readers can find the spec and roadmap + +### Requirement: Current limitations documented + +Current limitations for change tracking and protocol/FSM behavior SHALL be stated (e.g. no FSM engine, partial adapter support for change tracking) so that expectations match reality. + +#### Scenario: Reader checks limitations +- **GIVEN** change tracking and protocol/FSM behavior +- **WHEN** a user or contributor reads the implementation status +- **THEN** current limitations are stated +- **AND** expectations align with implementation + +### Requirement: Implementation status discoverable + +The implementation status page SHALL be linked from the architecture README or reference architecture page so it can be found without searching. + +#### Scenario: User navigates architecture docs +- **GIVEN** the docs site +- **WHEN** a user navigates architecture docs +- **THEN** the implementation status page is linked +- **AND** discoverable from the architecture index or README diff --git a/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/specs/module-development-guide/spec.md b/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/specs/module-development-guide/spec.md new file mode 100644 index 00000000..27647e8a --- /dev/null +++ b/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/specs/module-development-guide/spec.md @@ -0,0 +1,35 @@ +# module-development-guide Specification + +A single, discoverable guide explains how to develop and package new modules so that contributors can extend the CLI consistently. + +## ADDED Requirements + +### Requirement: Required module structure documented + +The module development guide SHALL describe the required directory structure (e.g. modules/<name>/, module-package.yaml, src/<name>/__init__.py, main.py, commands) and file roles. + +#### Scenario: Developer creates new module +- **GIVEN** the module development guide +- **WHEN** a developer creates a new module +- **THEN** the guide describes the required directory structure +- **AND** file roles are explained + +### Requirement: Manifest and contract requirements documented + +The module development guide SHALL document the module-package.yaml schema (name, version, commands, dependencies, schema_extensions, service_bridges) and SHALL mention contract requirements (@icontract, @beartype) for public APIs. + +#### Scenario: Developer configures module +- **GIVEN** the module development guide +- **WHEN** a developer configures a module +- **THEN** the guide documents the module-package.yaml schema +- **AND** contract requirements for public APIs are mentioned + +### Requirement: Module guide discoverable + +The module development guide SHALL be reachable from the docs navigation (e.g. Guides or Reference) and from the architecture or module system documentation. + +#### Scenario: User looks for module development +- **GIVEN** the published docs (e.g. docs.specfact.io) +- **WHEN** a user looks for how to develop modules +- **THEN** the guide is reachable from the docs navigation +- **AND** from the architecture or module system documentation diff --git a/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/tasks.md b/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/tasks.md new file mode 100644 index 00000000..c6e54597 --- /dev/null +++ b/openspec/changes/archive/2026-02-22-arch-08-documentation-discrepancies-remediation/tasks.md @@ -0,0 +1,118 @@ +# Implementation Tasks: arch-08-documentation-discrepancies-remediation + +## TDD / SDD Order (Enforced) + +Per config.yaml: spec deltas first, then verification criteria (for doc-only change: checklist and link/consistency checks), then documentation implementation. No production code; documentation updates are the "implementation." + +--- + +## 1. Create git branch from dev + +- [x] 1.1 Create feature branch from dev + - [x] 1.1.1 `git checkout dev && git pull origin dev` + - [x] 1.1.2 `git checkout -b feature/arch-08-documentation-discrepancies-remediation` + - [x] 1.1.3 `git branch --show-current` + +## 2. Spec deltas (documentation capabilities) + +- [x] 2.1 Add/update spec for documentation-alignment + - [x] 2.1.1 Ensure `specs/documentation-alignment/spec.md` exists with scenarios: module system described as production-ready; BridgeAdapter interface complete; layers and modes accurate; CommandRegistry and module structure documented; ToolCapabilities and error handling documented; terminology and version refs consistent; diagrams only reference existing components. +- [x] 2.2 Add spec for adr-template + - [x] 2.2.1 Ensure `specs/adr-template/spec.md` exists with scenarios: ADR template available; at least one ADR present; linked from architecture docs. +- [x] 2.3 Add spec for module-development-guide + - [x] 2.3.1 Ensure `specs/module-development-guide/spec.md` exists with scenarios: required module structure and manifest documented; naming and contracts mentioned; discoverable from docs nav. +- [x] 2.4 Add spec for adapter-development-guide + - [x] 2.4.1 Ensure `specs/adapter-development-guide/spec.md` exists with scenarios: BridgeAdapter full interface and ToolCapabilities documented; examples or link to code; discoverable from docs nav. +- [x] 2.5 Add spec for implementation-status-docs + - [x] 2.5.1 Ensure `specs/implementation-status-docs/spec.md` exists with scenarios: single place describes implemented vs planned; pointers to OpenSpec changes for planned features; change tracking and protocol FSM limitations stated. + +## 3. Verification criteria (pre-implementation) + +- [x] 3.1 Define documentation verification checklist + - [x] 3.1.1 List required updates from discrepancies report (items 1–7, 11–13, 17–19, 20–22) and map to files/sections. + - [x] 3.1.2 Document checklist in change folder (e.g. `DOC_VERIFICATION_CHECKLIST.md`) for use after edits. + +## 4. Update architecture and reference documentation + +- [x] 4.1 Update `docs/reference/architecture.md` + - [x] 4.1.1 Replace "transitioning" with production-ready module system (since v0.27). + - [x] 4.1.2 Add or expand BridgeAdapter interface: `detect`, `import_artifact`, `export_artifact`, `load_change_tracking`, `save_change_tracking`. + - [x] 4.1.3 Describe actual layers: Specification, Contract, Enforcement, plus Adapter, Analysis, Module layers where applicable. + - [x] 4.1.4 Clarify operational modes: current detector behavior; mode-specific behavior as planned; link to implementation-status if created. + - [x] 4.1.5 Add CommandRegistry implementation details (lazy loading, metadata caching) and reference to module package structure. + - [x] 4.1.6 Add section or link to ToolCapabilities and adapter selection. + - [x] 4.1.7 Add error handling patterns / conventions (custom exceptions, logging). + - [x] 4.1.8 Update or remove outdated performance metrics; use current benchmarks or remove specific numbers. + - [x] 4.1.9 Standardize terminology (Project Bundle, Plan Bundle) and version references. +- [x] 4.2 Update `docs/architecture/` assets + - [x] 4.2.1 Fix component-graph and other diagrams: remove or relabel non-existent components (e.g. DevOps Adapters). + - [x] 4.2.2 Align module-system.md, data-flow.md, state-machines.md, interface-contracts.md with code (registry, adapters, protocol models). + - [x] 4.2.3 Ensure README and discrepancies-report cross-references remain correct after edits. + +## 5. Create ADR template and initial ADR(s) + +- [x] 5.1 Create ADR template + - [x] 5.1.1 Create `docs/architecture/adr/` directory. + - [x] 5.1.2 Add `template.md` (title, status, context, decision, consequences). + - [x] 5.1.3 Add `README.md` in adr/ explaining how to create ADRs. +- [x] 5.2 Add at least one ADR + - [x] 5.2.1 Add ADR for module-first architecture (e.g. `0001-module-first-architecture.md`) using template. + - [x] 5.2.2 Link ADR from `docs/architecture/README.md` and/or `docs/reference/architecture.md`. + +## 6. Create module development guide + +- [x] 6.1 Create `docs/guides/module-development.md` (or equivalent path) + - [x] 6.1.1 Document required directory structure (`modules/<name>/`, `module-package.yaml`, `src/<name>/`, `main.py`, etc.). + - [x] 6.1.2 Document `module-package.yaml` schema (name, version, commands, dependencies, schema_extensions, service_bridges). + - [x] 6.1.3 Document naming conventions and contract requirements (@icontract, @beartype). + - [x] 6.1.4 Add link from architecture docs and from docs nav. +- [x] 6.2 Add Jekyll front-matter and navigation + - [x] 6.2.1 Set layout, title, permalink, description. + - [x] 6.2.2 Update `docs/_layouts/default.html` so the guide appears in the menu. + +## 7. Create or extend adapter development guide + +- [x] 7.1 Create or update adapter guide + - [x] 7.1.1 Add or extend `docs/guides/adapter-development.md` (or integrate into `creating-custom-bridges.md`): full BridgeAdapter interface, change tracking methods, ToolCapabilities model and adapter selection. + - [x] 7.1.2 Include code references or minimal examples (e.g. `src/specfact_cli/adapters/base.py`, `models/bridge.py`). +- [x] 7.2 Add Jekyll front-matter and navigation + - [x] 7.2.1 Set layout, title, permalink, description for new page. + - [x] 7.2.2 Update `docs/_layouts/default.html` if new page added. + +## 8. Add implementation status documentation + +- [x] 8.1 Create `docs/architecture/implementation-status.md` (or equivalent) + - [x] 8.1.1 List what is implemented vs planned (architecture commands, protocol FSM, change tracking scope, adapters: OpenSpec/SpecKit vs GitHub/ADO). + - [x] 8.1.2 Point to OpenSpec changes (e.g. architecture-01-solution-layer) for planned features. + - [x] 8.1.3 Link from architecture README and reference architecture page. +- [x] 8.2 Add Jekyll front-matter and navigation + - [x] 8.2.1 Set layout, title, permalink, description. + - [x] 8.2.2 Update `docs/_layouts/default.html` if needed. + +## 9. Documentation verification and quality gates + +- [x] 9.1 Run documentation verification + - [x] 9.1.1 Complete `DOC_VERIFICATION_CHECKLIST.md` against updated docs. + - [x] 9.1.2 Run link check (e.g. `hatch run yaml-lint` or project link-check script if any). + - [x] 9.1.3 Confirm no remaining "transitioning" / "experimental" for module system in architecture docs; confirm BridgeAdapter and layers are accurate. +- [x] 9.2 Format and lint + - [x] 9.2.1 `hatch run format` + - [x] 9.2.2 `hatch run type-check` (no code change; verify no regressions) + - [x] 9.2.3 `hatch run lint` (or equivalent) + +## 10. Version and changelog (if applicable) + +- [x] 10.1 Only if project policy requires a patch bump for doc-only release + - [x] 10.1.1 Bump patch in `pyproject.toml`, `setup.py`, `src/specfact_cli/__init__.py`. + - [x] 10.1.2 Add CHANGELOG.md entry under new version: Documentation – architecture discrepancies remediation. + +## 11. GitHub issue and PR + +- [x] 11.1 Create GitHub issue (public repo) + - [x] 11.1.1 Title: `[Change] Architecture documentation discrepancies remediation` + - [x] 11.1.2 Labels: `enhancement`, `change-proposal` + - [x] 11.1.3 Body: Why and What Changes from proposal; footer: `*OpenSpec Change Proposal: arch-08-documentation-discrepancies-remediation*` + - [x] 11.1.4 Update proposal.md Source Tracking with issue number, URL, status. +- [x] 11.2 Create PR + - [x] 11.2.1 Push branch and open PR to `dev`. + - [x] 11.2.2 PR description references this change and discrepancies report. diff --git a/openspec/changes/archive/2026-02-22-backlog-core-02-interactive-issue-creation/CHANGE_VALIDATION.md b/openspec/changes/archive/2026-02-22-backlog-core-02-interactive-issue-creation/CHANGE_VALIDATION.md new file mode 100644 index 00000000..d32a69e7 --- /dev/null +++ b/openspec/changes/archive/2026-02-22-backlog-core-02-interactive-issue-creation/CHANGE_VALIDATION.md @@ -0,0 +1,91 @@ +# Change Validation Report: backlog-core-02-interactive-issue-creation + +**Validation Date**: 2026-02-21 01:57:48 +0100 +**Change Proposal**: [proposal.md](./proposal.md) +**Validation Method**: Dry-run simulation in temporary workspace + dependency scan + +## Executive Summary + +- Breaking Changes: 0 detected / 0 resolved +- Dependent Files: 6 affected +- Impact Level: Medium +- Validation Result: Pass +- User Decision: Proceed with implementation in current scope + +## Breaking Changes Detected + +No breaking API/interface changes were detected from the proposed delta: +- `load_backlog_config_from_backlog_file()` is additive. +- Existing `load_backlog_config_from_spec()` remains available for compatibility fallback. +- `backlog map-fields` CLI enhancements are backward compatible for existing ADO usage. + +## Dependencies Affected + +### Critical Updates Required +- None + +### Recommended Updates +- `modules/backlog-core/src/backlog_core/graph/builder.py`: consider reading `.specfact/backlog-config.yaml` first in a follow-up for full consistency. +- docs pages referencing `backlog map-fields` options should include provider-based flow. + +### Directly Scanned Dependencies +- `modules/backlog-core/src/backlog_core/commands/add.py` +- `modules/backlog-core/src/backlog_core/graph/builder.py` +- `modules/backlog-core/tests/unit/test_schema_extensions.py` +- `modules/backlog-core/tests/unit/test_add_command.py` +- `tests/unit/commands/test_backlog_commands.py` +- `src/specfact_cli/modules/backlog/src/commands.py` + +## Impact Assessment + +- **Code Impact**: New backlog config scaffold command and provider-aware map-fields persistence. +- **Test Impact**: New tests required for init-config and github map-fields persistence; existing map-fields tests retained. +- **Documentation Impact**: map-fields and backlog config docs should mention `.specfact/backlog-config.yaml`. +- **Release Impact**: Minor (feature enhancement, backward compatible) + +## User Decision + +**Decision**: Implement now +**Rationale**: Align backlog provider configuration under dedicated `.specfact/backlog-config.yaml` and keep module metadata in sync with marketplace updates. +**Next Steps**: +1. Implement `specfact backlog init-config` scaffold. +2. Extend `specfact backlog map-fields` for provider selection and provider-specific persistence. +3. Run quality gates (format/type/contract) and targeted tests for modified test modules. + +## Format Validation + +- **proposal.md Format**: Pass + - Title format: Correct + - Required sections: All present (`Why`, `What Changes`, `Capabilities`, `Impact`) + - "What Changes" format: Correct + - "Capabilities" section: Present + - "Impact" format: Correct + - Source Tracking section: Present +- **tasks.md Format**: Pass + - Section headers: Correct + - Task format: Correct + - Sub-task format: Correct + - Config.yaml compliance: Pass (worktree + testing + quality gate tasks present) +- **specs Format**: Pass + - Given/When/Then format: Verified + - References existing patterns: Verified +- **design.md Format**: Pass + - Bridge adapter integration: Documented + - Sequence diagrams: Not required for this delta +- **Format Issues Found**: 0 +- **Format Issues Fixed**: 0 +- **Config.yaml Compliance**: Pass + +## OpenSpec Validation + +- **Status**: Pass +- **Validation Command**: `openspec validate backlog-core-02-interactive-issue-creation --strict` +- **Issues Found**: 0 +- **Issues Fixed**: 0 +- **Re-validated**: Yes + +## Validation Artifacts + +- Temporary workspace: `/tmp/specfact-validation-backlog-core-02-1771635189/repo` +- Interface scaffolds: analyzed in-place via additive function diff (`config_schema.py`, `commands.py`, `add.py`) +- Dependency graph: generated from `rg` dependency scans across `src/`, `modules/`, and `tests/` diff --git a/openspec/changes/archive/2026-02-22-backlog-core-02-interactive-issue-creation/TDD_EVIDENCE.md b/openspec/changes/archive/2026-02-22-backlog-core-02-interactive-issue-creation/TDD_EVIDENCE.md new file mode 100644 index 00000000..ab2d05ee --- /dev/null +++ b/openspec/changes/archive/2026-02-22-backlog-core-02-interactive-issue-creation/TDD_EVIDENCE.md @@ -0,0 +1,54 @@ +# TDD Evidence: backlog-core-02-interactive-issue-creation + +## Failing-before Implementation + +- Timestamp: 2026-02-20 23:06:46 +0100 +- Command: + +```bash +hatch test --cover -v modules/backlog-core/tests/unit/test_backlog_protocol.py modules/backlog-core/tests/unit/test_adapter_create_issue.py modules/backlog-core/tests/unit/test_add_command.py +``` + +- Result: **FAILED** (expected at this stage) +- Failure summary: + - `GitHubAdapter` missing `create_issue(...)` + - `AdoAdapter` missing `create_issue(...)` + - `specfact backlog add` command not registered/implemented yet (`SystemExit(2)` in command tests) + +## Passing-after Implementation + +- Timestamp: 2026-02-20 23:11:39 +0100 +- Command: + +```bash +hatch test -v modules/backlog-core/tests/unit/test_backlog_protocol.py modules/backlog-core/tests/unit/test_adapter_create_issue.py modules/backlog-core/tests/unit/test_add_command.py +``` + +- Result: **PASSED** (11 passed) + +## Regression Fix Round: Sprint Persistence and Canonical GitHub Created ID + +### Failing-before Implementation + +- Timestamp: 2026-02-22 23:16:44 +0100 +- Command: + +```bash +hatch test -v modules/backlog-core/tests/unit/test_adapter_create_issue.py +``` + +- Result: **FAILED** (expected at this stage) +- Failure summary: + - GitHub `create_issue` returned internal DB id in `id` instead of canonical issue number (`id != key`). + - ADO `create_issue` did not map payload `sprint` to `/fields/System.IterationPath`. + +### Passing-after Implementation + +- Timestamp: 2026-02-22 23:17:07 +0100 +- Command: + +```bash +hatch test -v modules/backlog-core/tests/unit/test_adapter_create_issue.py +``` + +- Result: **PASSED** (5 passed) diff --git a/openspec/changes/backlog-core-02-interactive-issue-creation/design.md b/openspec/changes/archive/2026-02-22-backlog-core-02-interactive-issue-creation/design.md similarity index 100% rename from openspec/changes/backlog-core-02-interactive-issue-creation/design.md rename to openspec/changes/archive/2026-02-22-backlog-core-02-interactive-issue-creation/design.md diff --git a/openspec/changes/archive/2026-02-22-backlog-core-02-interactive-issue-creation/proposal.md b/openspec/changes/archive/2026-02-22-backlog-core-02-interactive-issue-creation/proposal.md new file mode 100644 index 00000000..01589730 --- /dev/null +++ b/openspec/changes/archive/2026-02-22-backlog-core-02-interactive-issue-creation/proposal.md @@ -0,0 +1,60 @@ +# Change: Backlog Core — Interactive Issue Creation + +## Why + +After backlog-core-01, teams can analyze dependencies but still create new work items manually in GitHub/ADO. That causes hierarchy drift (wrong parent types), missing readiness fields, and inconsistent sprint/iteration assignment. + +This change adds `specfact backlog add` as a guided creation workflow in the `backlog` command group, with provider-aware interactive UX and contract-safe adapter writes. + +## What Changes + +- **NEW**: `specfact backlog add` in `modules/backlog-core/src/backlog_core/commands/add.py` for interactive and non-interactive issue/work-item creation. +- **EXTEND**: Backlog adapter protocol with `create_issue(project_id: str, payload: dict) -> dict` and concrete implementations in GitHub and ADO adapters. +- **EXTEND**: GitHub parent assignment uses native issue relationship metadata (sidebar parent/sub-issue) via GraphQL sub-issue linking, not only body text conventions. +- **NEW**: Configurable creation hierarchy (`creation_hierarchy`) from template/config for parent-type validation (for example epic -> feature -> story -> task). +- **NEW**: Interactive creation UX for required fields including type/title/body, parent selection, sprint/iteration selection, and immediate create-progress feedback. +- **NEW**: Multiline body entry with non-markdown sentinel (default `::END::`) and configurable marker. +- **NEW**: Provider-agnostic draft fields for story-quality capture where applicable: acceptance criteria, priority, story points. +- **NEW**: Description format selection (`markdown` or `classic`) with provider mapping (`ADO multiline format` handling). +- **EXTEND**: GitHub custom mapping parity with ADO behavior: when `.specfact/templates/backlog/field_mappings/github_custom.yaml` exists and `--custom-config` is omitted, `backlog add` auto-loads it; otherwise it falls back to default mappings. +- **EXTEND**: Parent selection behavior: + - ADO: hierarchy-aware parent candidates filtered by allowed parent types. + - GitHub: select from available issues and normalized type mapping (including custom/epic labels when configured). +- **EXTEND**: `specfact backlog map-fields` to support a multi-provider field mapping workflow (ADO + GitHub), including auth checks, provider field discovery, mapping verification, and config persistence in `.specfact/backlog-config.yaml`. For GitHub, issue-type source-of-truth is repository issue types (`repository.issueTypes`), while ProjectV2 Type option mapping is optional enrichment when a suitable Type-like single-select field exists. + +## Capabilities + +- **backlog-core** (extended): `backlog add` interactive creation flow with hierarchy validation, readiness checks, and adapter-backed create operations. +- **backlog** (extended): Provider-aware `backlog init-config` scaffolding and `backlog map-fields` setup for mapping backlog fields across supported adapters. +- **backlog** (extended): Minimal default backlog-config scaffolding (without empty GitHub ProjectV2 placeholders); persist ProjectV2 mapping only when explicitly configured/discovered. + +## Impact + +- **Affected specs**: `openspec/changes/backlog-core-02-interactive-issue-creation/specs/backlog-add/spec.md` +- **Affected code**: + - `modules/backlog-core/src/backlog_core/commands/add.py` + - `modules/backlog-core/src/backlog_core/adapters/backlog_protocol.py` + - `modules/backlog-core/src/backlog_core/graph/config_schema.py` + - `modules/backlog-core/src/backlog_core/graph/builder.py` + - `src/specfact_cli/adapters/backlog_base.py` + - `src/specfact_cli/adapters/github.py` + - `src/specfact_cli/adapters/ado.py` + - `src/specfact_cli/modules/backlog/src/commands.py` +- **Affected tests**: + - `modules/backlog-core/tests/unit/test_add_command.py` + - `modules/backlog-core/tests/unit/test_adapter_create_issue.py` + - `modules/backlog-core/tests/unit/test_backlog_protocol.py` +- **Documentation impact**: + - `docs/guides/agile-scrum-workflows.md` + - `docs/reference/commands.md` + +--- + +## Source Tracking + +<!-- source_repo: nold-ai/specfact-cli --> +- **GitHub Issue**: #173 +- **Issue URL**: <https://github.com/nold-ai/specfact-cli/issues/173> +- **Repository**: nold-ai/specfact-cli +- **Last Synced Status**: proposed +- **Sanitized**: false diff --git a/openspec/changes/archive/2026-02-22-backlog-core-02-interactive-issue-creation/specs/backlog-add/spec.md b/openspec/changes/archive/2026-02-22-backlog-core-02-interactive-issue-creation/specs/backlog-add/spec.md new file mode 100644 index 00000000..ac099b09 --- /dev/null +++ b/openspec/changes/archive/2026-02-22-backlog-core-02-interactive-issue-creation/specs/backlog-add/spec.md @@ -0,0 +1,392 @@ +# Backlog Add (Interactive Issue Creation) + +## ADDED Requirements + +### Requirement: Backlog adapter create method + +The system SHALL extend backlog adapters with a create method that accepts a unified payload and returns the created item (id, key, url). + +**Rationale**: Creation is currently out-of-band (user creates in GitHub/ADO UI). CLI-driven creation with consistent payload shape allows draft → validate → create flow. + +#### Scenario: Create issue via GitHub adapter + +**Given**: A GitHub adapter is configured and project_id (owner/repo) is set + +**When**: The user or add command calls `create_issue(project_id, payload)` with payload containing type, title, description, and optional parent_id + +**Then**: The adapter maps the unified payload to GitHub Issues API (e.g. POST /repos/{owner}/{repo}/issues) and creates the issue + +**And**: The method returns a dict with id, key (or number), and url of the created issue + +**Acceptance Criteria**: + +- Payload is provider-agnostic (type, title, description, parent_id, optional fields) +- Adapter performs provider-specific mapping (e.g. GitHub labels for type, body for description) +- Failure (auth, validation) is reported; no silent swallow +- Returned created-item identity uses canonical GitHub issue number for both `id` and `key` so follow-up parent/reference inputs resolve consistently. + +#### Scenario: Create work item via ADO adapter + +**Given**: An ADO adapter is configured and project_id is set + +**When**: The user or add command calls `create_issue(project_id, payload)` with payload containing type, title, description, and optional parent_id + +**Then**: The adapter maps the unified payload to ADO Create Work Item API and creates the work item + +**And**: The method returns a dict with id, key, and url of the created work item + +**Acceptance Criteria**: + +- ADO work item type is derived from unified type via template type_mapping +- Parent link is created when parent_id is present and adapter supports it +- When payload includes `sprint`, adapter maps it to `System.IterationPath` in create patch payload. + +### Requirement: Backlog add command + +The system SHALL provide a `specfact backlog add` command that supports interactive creation of backlog issues with type selection, optional parent, title/body, validation (parent exists, allowed type, optional DoR), and create via adapter. + +**Rationale**: Teams need a single flow to add well-scoped, hierarchy-aligned issues from CLI or slash prompt. + +#### Scenario: Add story with parent + +**Given**: A backlog graph or project is loaded (e.g. from fetch_all_issues and fetch_relationships or existing graph) + +**And**: Template or backlog_config defines allowed types and creation hierarchy (e.g. Story may have parent Feature or Epic) + +**When**: The user runs `specfact backlog add --type story --parent FEAT-123 --title "Implement X" --body "As a user..."` (or equivalent interactive prompts) + +**Then**: The system validates that parent FEAT-123 exists in the graph and that Story is allowed under that parent type + +**And**: If validation passes, the system builds a unified payload and calls the adapter's create_issue(project_id, payload) + +**And**: The CLI outputs the created issue id, key, and url + +**Acceptance Criteria**: + +- Validation fails clearly when parent does not exist or type is not allowed +- Optional --check-dor runs DoR rules (from backlog-refinement / .specfact/dor.yaml) on the draft and warns or fails when not met + +#### Scenario: Add issue with custom hierarchy + +**Given**: backlog_config (or template) defines creation_hierarchy with custom allowed parent types per child type (e.g. Spike may have parent Epic or Feature) + +**When**: The user runs `specfact backlog add --type spike --parent EPIC-1 --title "Spike: evaluate Y"` + +**Then**: The system loads creation hierarchy from config and validates that Spike is allowed under Epic + +**And**: If allowed, the system creates the issue and optionally links parent + +**Acceptance Criteria**: + +- Hierarchy rules are read from template or backlog_config; no hardcoded hierarchy +- Multiple levels (epic, feature, story, task, bug, spike, custom) are supported + +#### Scenario: Non-interactive (scripted) add + +**Given**: All required options are provided on the command line (e.g. --type, --title, --non-interactive) + +**When**: The user runs `specfact backlog add --type story --title "T" --body "B" --non-interactive` + +**Then**: The system does not prompt for missing fields; it uses provided values or fails with clear error for missing required fields + +**And**: Validation (parent if provided, DoR if --check-dor) runs before create + +**Acceptance Criteria**: + +- Required fields are documented (e.g. type, title; body may be optional per provider) +- Missing required fields in non-interactive mode result in clear error exit + +### Requirement: Creation hierarchy configuration + +The system SHALL support configurable creation hierarchy (allowed parent types per child type) via template or backlog_config so that Scrum, SAFe, Kanban, and custom hierarchies work without code changes. + +**Rationale**: Different frameworks and orgs use different trees (e.g. Story under Feature vs Story under Epic); configuration avoids hardcoding. + +#### Scenario: Default hierarchy from template + +**Given**: A template (e.g. ado_scrum) is selected and does not define creation_hierarchy + +**When**: The add command needs to validate parent type for a new item + +**Then**: The system derives allowed parent types from existing type_mapping and dependency_rules (e.g. PARENT_CHILD) where possible + +**And**: If derivation is not possible, a conservative default (e.g. any type or no parent) is used and documented + +#### Scenario: Custom hierarchy in backlog_config + +**Given**: ProjectBundle.metadata.backlog_config (or .specfact/backlog-config.yaml) contains creation_hierarchy with entries such as story: [feature, epic], task: [story] + +**When**: The user adds an item with --type story --parent FEAT-1 + +**Then**: The system validates that "feature" is in the allowed parent types for "story" and that FEAT-1 exists and has type Feature + +**And**: Validation fails clearly if parent type is not allowed + +**Acceptance Criteria**: + +- creation_hierarchy is optional; when absent, default or derived rules apply +- Validation uses both existence of parent in graph and allowed type from hierarchy + +### Requirement: Optional sprint assignment and linking via fuzzy match (E5) + +The system SHALL support optional `--sprint <sprint-id>` so the created issue can be assigned to a sprint when the adapter and provider support it. When linking to existing issues (e.g. parent, blocks), the system SHALL support fuzzy match with user confirmation; no silent or automatic link creation. + +**Rationale**: 2026-01-30 plan value chain; E5—bundle mapping and future linking. + +#### Scenario: Add issue with optional sprint assignment + +**Given**: Adapter and provider support sprint assignment (e.g. GitHub Projects, ADO iteration) + +**When**: The user runs `specfact backlog add --type story --title "T" --sprint Sprint-1` + +**Then**: The system includes sprint assignment in the payload when creating the issue (when supported) + +**And**: When provider does not support sprint, the option is ignored or a clear message is shown + +**Acceptance Criteria**: + +- `--sprint` is optional; payload includes sprint when adapter supports it; no failure when unsupported. + +#### Scenario: Link to existing issue via fuzzy match + +**Given**: User specifies a parent or "blocks" target by partial key or title + +**When**: The system finds one or more candidate issues (fuzzy match) + +**Then**: The system presents candidates and requires user confirmation before creating the link + +**And**: No link is created without explicit user confirmation + +**Acceptance Criteria**: + +- Fuzzy match is used for discovery only; linking requires user confirmation; no silent writes. + +### Requirement: Interactive drafting fields and format selection + +The system SHALL collect story-quality drafting fields during interactive creation where applicable and map them into provider payloads before create. + +#### Scenario: Collect multiline body with non-conflicting sentinel + +**Given**: User runs interactive `specfact backlog add` without `--body` + +**When**: The command prompts for multiline body input + +**Then**: The command accepts multiline text until sentinel marker is entered (default `::END::`) + +**And**: The command shows immediate progress feedback that input capture is complete and creation preparation has started + +#### Scenario: Collect acceptance criteria, priority, and story points for story-like types + +**Given**: User selects a story-like type (story/task/feature where supported) + +**When**: The command asks for quality fields + +**Then**: Acceptance criteria is collected via multiline input + +**And**: Priority and story points are collected (interactive prompts or explicit options) + +**And**: Collected values are included in the create payload where provider supports them + +#### Scenario: Select description format before create + +**Given**: Interactive creation mode + +**When**: The user is prompted for description format (`markdown` or `classic`) + +**Then**: Selected format is included in the payload + +**And**: Provider mapping respects format (for ADO: multiline field format set according to selected mode) + +### Requirement: Interactive sprint/iteration and parent selection + +The system SHALL prompt for sprint/iteration and parent assignment in interactive mode and validate both against provider and hierarchy constraints. + +#### Scenario: Interactive sprint/iteration selection for ADO + +**Given**: ADO adapter can resolve current and available iterations + +**When**: User runs interactive add without `--sprint` + +**Then**: The command shows selectable iteration options (including current and skip) + +**And**: Selected iteration is included in payload + +#### Scenario: Interactive parent selection using hierarchy constraints + +**Given**: Graph and creation hierarchy are loaded + +**When**: User opts to set a parent interactively + +**Then**: Candidate parents are filtered to allowed parent types for selected child type + +**And**: User selects parent from existing items + +**And**: Selected parent id is written as `parent_id` in payload + +#### Scenario: GitHub parent selection reflects mapped type consistency + +**Given**: GitHub issues use label/type mapping and may include custom hierarchy labels (e.g. epic) + +**When**: Parent candidates are presented + +**Then**: Candidate type resolution uses current template type mapping / normalized graph type + +**And**: Parent compatibility follows creation hierarchy rules with no hardcoded provider-only assumptions + + +### Requirement: Centralized retry policy for backlog adapter write operations + +The system SHALL apply a shared retry policy for transient failures in backlog adapter create operations so command behavior is consistent across providers. + +#### Scenario: Retry transient create failure and succeed + +**Given**: A backlog adapter create call receives a transient failure (for example timeout, connection error, HTTP 429, or HTTP 5xx) + +**When**: The command executes `create_issue` + +**Then**: The adapter uses centralized retry logic with bounded attempts and backoff + +**And**: If a later attempt succeeds, the command returns success with created item metadata + +#### Scenario: Non-transient create failure does not retry + +**Given**: A backlog adapter create call fails with non-transient error (for example HTTP 400/401/403/404) + +**When**: The command executes `create_issue` + +**Then**: The adapter does not retry unnecessarily + +**And**: The failure is surfaced immediately to the caller with context + + +#### Scenario: Non-idempotent create avoids ambiguous automatic retry + +**Given**: A create operation is non-idempotent and the transport fails ambiguously (for example timeout/connection drop after request may have reached provider) + +**When**: The adapter executes create via shared retry core logic + +**Then**: The adapter does not automatically replay the create request in that ambiguous state + +**And**: The error is surfaced so caller can verify provider state and retry intentionally + +### Requirement: Adapter-aware default template selection for parent hierarchy + +The system SHALL default template selection by adapter when user does not explicitly pass `--template` so hierarchy/type mapping remains provider-consistent. + +#### Scenario: ADO backlog add defaults to ado_scrum mapping + +**Given**: User runs `specfact backlog add --adapter ado` without `--template` + +**When**: The command builds graph and parent candidates + +**Then**: It uses ADO-compatible template mapping (default `ado_scrum`) + +**And**: Epic/feature/story hierarchy candidates are resolved consistently for parent selection + + +### Requirement: Shared retry policy applied consistently across adapter write operations + +The system SHALL apply centralized retry policy to backlog adapter write operations beyond create, with operation-specific ambiguity safety. + +#### Scenario: Non-idempotent write uses duplicate-safe mode + +**Given**: Adapter operation is non-idempotent (for example comment creation) + +**When**: Shared retry helper is used + +**Then**: Ambiguous transport replay is disabled to avoid duplicate side effects + +#### Scenario: Idempotent update uses bounded transient retry + +**Given**: Adapter operation is idempotent (for example status/body patch) + +**When**: Shared retry helper is used + +**Then**: Transient HTTP failures are retried with bounded backoff + +**And**: Non-transient failures are surfaced immediately + + +### Requirement: Parent candidate discovery must not exclude valid hierarchy parents by implicit sprint defaults + +The system SHALL avoid implicit current-iteration filtering when loading parent candidates for interactive parent selection. + +#### Scenario: ADO parent candidate fetch includes epics without sprint assignment + +**Given**: User creates a feature and opts to choose parent interactively in ADO + +**When**: Parent candidates are loaded for hierarchy filtering + +**Then**: Parent discovery does not implicitly limit candidates to current iteration + +**And**: Epics/features outside current iteration remain selectable when hierarchy allows + +### Requirement: User warning on duplicate-safe ambiguous create failure + +The system SHALL display a user-facing warning when non-idempotent create fails due to ambiguous transport errors while duplicate-safe retry mode is active. + +#### Scenario: Timeout/connection drop on duplicate-safe create + +**Given**: Create uses duplicate-safe mode (no ambiguous replay) + +**When**: Create fails with timeout/connection error + +**Then**: CLI warns the user that the item may have been created remotely + +**And**: CLI advises verifying backlog before retrying manually + + +#### Scenario: ADO sprint selection resolves iterations using project_id context + +**Given**: User runs `backlog add` with `--adapter ado --project-id <org>/<project>` and adapter defaults do not already include org/project + +**When**: Interactive sprint/iteration selection is shown + +**Then**: The command resolves ADO org/project context from project_id for iteration API calls + +**And**: Available iterations are listed for selection when accessible + + +#### Scenario: GitHub backlog add forwards Projects Type field configuration + +**Given**: `backlog add` runs with GitHub adapter and template/custom config contains GitHub Projects v2 type field mapping metadata + +**When**: The command builds create payload for `create_issue` + +**Then**: It forwards provider field metadata in payload (for example `provider_fields.github_project_v2`) so the adapter can set the Projects `Type` field in addition to labels + + +#### Scenario: GitHub ProjectV2 Type mapping can come from repo backlog provider settings + +**Given**: `.specfact/backlog-config.yaml` defines `backlog_config.providers.github.settings.provider_fields.github_project_v2` + +**When**: `backlog add` runs with GitHub adapter and no explicit `--custom-config` + +**Then**: The command forwards that provider field configuration in create payload so adapter ProjectV2 Type mapping can run + + +#### Scenario: GitHub add warns when ProjectV2 Type mapping config is absent + +**Given**: User runs `backlog add` with GitHub adapter and no ProjectV2 Type mapping metadata is available + +**When**: The command prepares create payload + +**Then**: The command prints a warning that GitHub ProjectV2 Type field will not be set automatically and labels/body fallback is used + + +#### Scenario: GitHub custom mapping file auto-applies when present + +**Given**: `--adapter github` and no `--custom-config` flag is provided +**And**: `.specfact/templates/backlog/field_mappings/github_custom.yaml` exists +**When**: The user runs `specfact backlog add` +**Then**: The command loads `github_custom.yaml` as custom mapping/hierarchy overrides +**And**: Parent validation and candidate filtering use those overrides +**And**: If the file does not exist, the command falls back to default `github_projects` mapping behavior. + + +#### Scenario: GitHub parent is linked using native sub-issue relationship + +**Given**: A GitHub parent issue is selected during `backlog add` +**When**: The issue is created +**Then**: The adapter links parent/child using GitHub native issue relationship (`addSubIssue`) so the right-sidebar parent relation is populated +**And**: Body text markers are secondary compatibility metadata, not the primary relationship mechanism. diff --git a/openspec/changes/archive/2026-02-22-backlog-core-02-interactive-issue-creation/specs/backlog-map-fields/spec.md b/openspec/changes/archive/2026-02-22-backlog-core-02-interactive-issue-creation/specs/backlog-map-fields/spec.md new file mode 100644 index 00000000..1a6a996d --- /dev/null +++ b/openspec/changes/archive/2026-02-22-backlog-core-02-interactive-issue-creation/specs/backlog-map-fields/spec.md @@ -0,0 +1,113 @@ +# Backlog Map Fields (Multi-Provider Mapping Setup) + +## ADDED Requirements + +### Requirement: Backlog-scoped config scaffolding command + +The system SHALL provide a backlog-scoped scaffolding command `specfact backlog init-config` that creates `.specfact/backlog-config.yaml` with safe defaults. + +**Rationale**: Backlog mapping config should live under backlog command ownership and not require manual file creation. + +#### Scenario: Initialize backlog config defaults + +**Given**: User runs `specfact backlog init-config` in a repository + +**When**: `.specfact/backlog-config.yaml` does not exist + +**Then**: The command creates `.specfact/backlog-config.yaml` with minimal provider settings defaults + +**And**: GitHub defaults do not include empty ProjectV2 id/option placeholders; ProjectV2 mapping is written only when configured + +**And**: The command prints next steps for `specfact backlog map-fields` + +#### Scenario: Initialize backlog config without overwrite + +**Given**: `.specfact/backlog-config.yaml` already exists + +**When**: User runs `specfact backlog init-config` without force option + +**Then**: The command does not overwrite existing config and reports how to proceed safely + +### Requirement: Multi-provider map-fields setup workflow + +The system SHALL provide a provider-aware `specfact backlog map-fields` workflow that supports configuring mapping metadata for one or more backlog adapters in a single guided run. + +**Rationale**: Users currently need different setup paths per provider and manual config edits for some providers. A unified setup flow prevents missing mappings and hidden fallback behavior. + +#### Scenario: Select providers and run setup sequentially + +**Given**: User runs `specfact backlog map-fields` + +**When**: User selects one or more providers to configure (for example `ado` and `github`) + +**Then**: The command executes setup for each selected provider in sequence + +**And**: The command prints per-provider success/failure status with actionable next steps + +### Requirement: Provider auth and field discovery checks + +The system SHALL verify auth context and discover provider fields/metadata before accepting mappings. + +#### Scenario: ADO mapping setup with API discovery + +**Given**: ADO provider is selected + +**When**: The command validates auth and loads ADO work item fields from API + +**Then**: The user maps required canonical fields to available ADO fields + +**And**: The command validates mapped field ids before saving + +#### Scenario: GitHub ProjectV2 Type mapping setup with API discovery + +**Given**: GitHub provider is selected + +**When**: The command validates auth and loads ProjectV2 metadata (project, Type field, options) + +**Then**: The user maps canonical issue types (`epic`, `feature`, `story`, `task`, `bug`) to ProjectV2 Type options + +**And**: The command validates selected option IDs before saving + +#### Scenario: GitHub issue types are sourced from repository metadata + +**Given**: GitHub provider is selected + +**When**: The command loads repository issue types via GitHub GraphQL (`repository.issueTypes`) + +**Then**: Canonical type mapping is derived from repository issue type names/ids (for example `epic`, `feature`, `story`, `task`, `bug`) + +**And**: This source is preferred over ProjectV2 `Status` options for issue-type identity + +#### Scenario: ProjectV2 type-option mapping is optional when Type field is absent + +**Given**: GitHub ProjectV2 has no Type-like single-select field (for example only `Status`) + +**When**: The user runs `specfact backlog map-fields` for GitHub + +**Then**: The command persists repository issue-type mappings and warns that ProjectV2 Type option mapping is skipped + +**And**: The command does not fail solely because ProjectV2 Type options are unavailable + +### Requirement: Canonical config persistence and verification + +The system SHALL persist provider mapping outputs into canonical backlog config and verify integrity post-write. + +#### Scenario: Persist provider mappings into .specfact/backlog-config.yaml + +**Given**: User completes mapping flow for one or more providers + +**When**: The command writes configuration + +**Then**: Mappings are stored under `backlog_config.providers.<provider>.settings` in `.specfact/backlog-config.yaml` + +**And**: Existing unrelated config keys are preserved + +#### Scenario: Post-write verification and summary + +**Given**: Mapping write completes + +**When**: Verification runs + +**Then**: The command confirms required keys are present and prints a concise summary of configured providers + +**And**: If verification fails, the command reports the failing keys and exits non-zero diff --git a/openspec/changes/backlog-core-02-interactive-issue-creation/tasks.md b/openspec/changes/archive/2026-02-22-backlog-core-02-interactive-issue-creation/tasks.md similarity index 65% rename from openspec/changes/backlog-core-02-interactive-issue-creation/tasks.md rename to openspec/changes/archive/2026-02-22-backlog-core-02-interactive-issue-creation/tasks.md index 2238207a..5db1f2d6 100644 --- a/openspec/changes/backlog-core-02-interactive-issue-creation/tasks.md +++ b/openspec/changes/archive/2026-02-22-backlog-core-02-interactive-issue-creation/tasks.md @@ -14,67 +14,67 @@ Do not implement production code for new behavior until the corresponding tests ## 1. Create git worktree branch from dev -- [ ] 1.1 Ensure primary checkout is on dev and up to date: `git checkout dev && git pull origin dev` -- [ ] 1.2 Create dedicated worktree branch (preferred): `scripts/worktree.sh create feature/backlog-core-02-interactive-issue-creation`; if issue exists, link branch to issue with `gh issue develop 173 --repo nold-ai/specfact-cli --name feature/backlog-core-02-interactive-issue-creation` -- [ ] 1.3 Or create worktree branch without issue link: `scripts/worktree.sh create feature/backlog-core-02-interactive-issue-creation` (if no issue yet) -- [ ] 1.4 Verify branch in worktree: `git worktree list` includes the branch path; then run `git branch --show-current` inside that worktree. +- [x] 1.1 Ensure primary checkout is on dev and up to date: `git checkout dev && git pull origin dev` +- [x] 1.2 Create dedicated worktree branch (preferred): `scripts/worktree.sh create feature/backlog-core-02-interactive-issue-creation`; if issue exists, link branch to issue with `gh issue develop 173 --repo nold-ai/specfact-cli --name feature/backlog-core-02-interactive-issue-creation` +- [x] 1.3 Or create worktree branch without issue link: `scripts/worktree.sh create feature/backlog-core-02-interactive-issue-creation` (if no issue yet) +- [x] 1.4 Verify branch in worktree: `git worktree list` includes the branch path; then run `git branch --show-current` inside that worktree. ## 2. Create GitHub issue in nold-ai/specfact-cli (mandatory) -- [ ] 2.1 If issue not yet created: create issue in nold-ai/specfact-cli: `gh issue create --repo nold-ai/specfact-cli --title "[Change] Add backlog add (interactive issue creation)" --body-file <path> --label "enhancement" --label "change-proposal"`. If issue already exists (e.g. #173), skip and ensure proposal.md Source Tracking is up to date. -- [ ] 2.2 Use body from proposal (Why, What Changes, Acceptance Criteria); add footer `*OpenSpec Change Proposal: add-backlog-add-interactive-issue-creation*` -- [ ] 2.3 Update `proposal.md` Source Tracking section with issue number, issue URL, repository nold-ai/specfact-cli, Last Synced Status: proposed -- [ ] 2.4 Link issue to project (optional): `gh project item-add 1 --owner nold-ai --url <issue-url>` (requires `gh auth refresh -s project` if needed) +- [x] 2.1 If issue not yet created: create issue in nold-ai/specfact-cli: `gh issue create --repo nold-ai/specfact-cli --title "[Change] Add backlog add (interactive issue creation)" --body-file <path> --label "enhancement" --label "change-proposal"`. If issue already exists (e.g. #173), skip and ensure proposal.md Source Tracking is up to date. +- [x] 2.2 Use body from proposal (Why, What Changes, Acceptance Criteria); add footer `*OpenSpec Change Proposal: add-backlog-add-interactive-issue-creation*` +- [x] 2.3 Update `proposal.md` Source Tracking section with issue number, issue URL, repository nold-ai/specfact-cli, Last Synced Status: proposed +- [x] 2.4 Link issue to project (optional): `gh project item-add 1 --owner nold-ai --url <issue-url>` (requires `gh auth refresh -s project` if needed) ## 3. Verify spec deltas (SDD: specs first) -- [ ] 3.1 Confirm `specs/backlog-add/spec.md` exists and is complete (ADDED requirements, Given/When/Then for create_issue, add command, creation hierarchy). -- [ ] 3.2 Map scenarios to implementation: create via GitHub/ADO, add command with parent validation, custom hierarchy from config, non-interactive mode. +- [x] 3.1 Confirm `specs/backlog-add/spec.md` exists and is complete (ADDED requirements, Given/When/Then for create_issue, add command, creation hierarchy). +- [x] 3.2 Map scenarios to implementation: create via GitHub/ADO, add command with parent validation, custom hierarchy from config, non-interactive mode. ## 4. Tests first (TDD: write tests from spec scenarios; expect failure) -- [ ] 4.1 Write unit tests for adapter create_issue: mock GitHub/ADO API; assert payload mapping and return shape (id, key, url). -- [ ] 4.2 Write unit or integration tests from `specs/backlog-add/spec.md` scenarios: add with parent validation, hierarchy from config, non-interactive add, DoR check when --check-dor. -- [ ] 4.3 Run tests: `hatch run smart-test-unit` (or equivalent); **expect failure** (no implementation yet). -- [ ] 4.4 Document which scenarios are covered by which test modules. +- [x] 4.1 Write unit tests for adapter create_issue: mock GitHub/ADO API; assert payload mapping and return shape (id, key, url). +- [x] 4.2 Write unit or integration tests from `specs/backlog-add/spec.md` scenarios: add with parent validation, hierarchy from config, non-interactive add, DoR check when --check-dor. +- [x] 4.3 Run tests: `hatch run smart-test-unit` (or equivalent); **expect failure** (no implementation yet). +- [x] 4.4 Document which scenarios are covered by which test modules. ## 5. Extend BacklogAdapterMixin with create_issue (TDD: code until tests pass) -- [ ] 5.1 Add abstract method `create_issue(project_id: str, payload: dict) -> dict` to `BacklogAdapterMixin` in `src/specfact_cli/adapters/backlog_base.py` with @abstractmethod, @beartype, and @icontract. -- [ ] 5.2 Implement `create_issue` in GitHub adapter: map payload to GitHub Issues API (POST /repos/{owner}/{repo}/issues); return dict with id, key (number), url. -- [ ] 5.3 Implement `create_issue` in ADO adapter: map payload to ADO Create Work Item API; set parent relation when parent_id present; return dict with id, key, url. -- [ ] 5.4 Run adapter create tests; **expect pass**; fix until tests pass. +- [x] 5.1 Add abstract method `create_issue(project_id: str, payload: dict) -> dict` to `BacklogAdapterMixin` in `src/specfact_cli/adapters/backlog_base.py` with @abstractmethod, @beartype, and @icontract. +- [x] 5.2 Implement `create_issue` in GitHub adapter: map payload to GitHub Issues API (POST /repos/{owner}/{repo}/issues); return dict with id, key (number), url. +- [x] 5.3 Implement `create_issue` in ADO adapter: map payload to ADO Create Work Item API; set parent relation when parent_id present; return dict with id, key, url. +- [x] 5.4 Run adapter create tests; **expect pass**; fix until tests pass. ## 6. Implement creation hierarchy and add command (TDD: code until tests pass) -- [ ] 6.1 Define optional creation_hierarchy in template or backlog_config schema (child type → list of allowed parent types); implement loader (from ProjectBundle.metadata.backlog_config or .specfact/spec.yaml). -- [ ] 6.2 Implement add command: options --adapter, --project-id, --template, --type, --parent, --title, --body, --non-interactive, --check-dor; interactive prompts when key args missing (unless --non-interactive). -- [ ] 6.3 Implement flow: load graph (fetch_all_issues + fetch_relationships or BacklogGraphBuilder when available); resolve type and parent; validate parent exists and allowed type from creation_hierarchy; optional DoR check (reuse backlog refine DoR); build payload; call adapter.create_issue; output id, key, url. -- [ ] 6.4 Register `specfact backlog add` in backlog command group (same place as refine, analyze-deps). -- [ ] 6.5 Run add-command tests; **expect pass**; fix until tests pass. +- [x] 6.1 Define optional creation_hierarchy in template or backlog_config schema (child type → list of allowed parent types); implement loader (from ProjectBundle.metadata.backlog_config or .specfact/spec.yaml). +- [x] 6.2 Implement add command: options --adapter, --project-id, --template, --type, --parent, --title, --body, --non-interactive, --check-dor; interactive prompts when key args missing (unless --non-interactive). +- [x] 6.3 Implement flow: load graph (fetch_all_issues + fetch_relationships or BacklogGraphBuilder when available); resolve type and parent; validate parent exists and allowed type from creation_hierarchy; optional DoR check (reuse backlog refine DoR); build payload; call adapter.create_issue; output id, key, url. +- [x] 6.4 Register `specfact backlog add` in backlog command group (same place as refine, analyze-deps). +- [x] 6.5 Run add-command tests; **expect pass**; fix until tests pass. ## 7. Quality gates -- [ ] 7.1 Run format and type-check: `hatch run format`, `hatch run type-check`. -- [ ] 7.2 Run contract test: `hatch run contract-test`. -- [ ] 7.3 Run full test suite: `hatch run smart-test` (or `hatch run smart-test-full`). -- [ ] 7.4 Ensure all new public APIs have @icontract and @beartype where applicable. +- [x] 7.1 Run format and type-check: `hatch run format`, `hatch run type-check`. +- [x] 7.2 Run contract test: `hatch run contract-test`. +- [x] 7.3 Run full test suite: `hatch run smart-test` (or `hatch run smart-test-full`). +- [x] 7.4 Ensure all new public APIs have @icontract and @beartype where applicable. ## 8. Documentation research and review -- [ ] 8.1 Identify affected documentation: docs/guides/agile-scrum-workflows.md, backlog-refinement or backlog guide. -- [ ] 8.2 Update agile-scrum-workflows (or backlog guide): add section for backlog add (`specfact backlog add`), interactive creation, DoR, slash prompt usage. -- [ ] 8.3 If adding a new doc page: set front-matter (layout, title, permalink, description) and update docs/_layouts/default.html sidebar if needed. +- [x] 8.1 Identify affected documentation: docs/guides/agile-scrum-workflows.md, backlog-refinement or backlog guide. +- [x] 8.2 Update agile-scrum-workflows (or backlog guide): add section for backlog add (`specfact backlog add`), interactive creation, DoR, slash prompt usage. +- [x] 8.3 If adding a new doc page: set front-matter (layout, title, permalink, description) and update docs/_layouts/default.html sidebar if needed. ## 9. Version and changelog (patch bump; required before PR) -- [ ] 9.1 Bump **patch** version in `pyproject.toml` (e.g. X.Y.Z → X.Y.(Z+1)). -- [ ] 9.2 Sync version in `setup.py`, `src/__init__.py`, `src/specfact_cli/__init__.py` to match pyproject.toml. -- [ ] 9.3 Add CHANGELOG.md entry under new [X.Y.Z] - YYYY-MM-DD section: **Added** – Backlog add (interactive issue creation): `specfact backlog add` with type/parent selection, DoR validation, and create via adapter. +- [x] 9.1 Bump **patch** version in `pyproject.toml` (e.g. X.Y.Z → X.Y.(Z+1)). +- [x] 9.2 Sync version in `setup.py`, `src/__init__.py`, `src/specfact_cli/__init__.py` to match pyproject.toml. +- [x] 9.3 Add CHANGELOG.md entry under new [X.Y.Z] - YYYY-MM-DD section: **Added** – Backlog add (interactive issue creation): `specfact backlog add` with type/parent selection, DoR validation, and create via adapter. ## 10. Create Pull Request to dev -- [ ] 10.1 Ensure all changes are committed: `git add .` and `git commit -m "feat(backlog): add backlog add for interactive issue creation"` -- [ ] 10.2 Push to remote: `git push origin feature/backlog-core-02-interactive-issue-creation` -- [ ] 10.3 Create PR: `gh pr create --repo nold-ai/specfact-cli --base dev --head feature/backlog-core-02-interactive-issue-creation --title "feat(backlog): add backlog add for interactive issue creation" --body-file <path>` (use repo PR template; add OpenSpec change ID `backlog-core-02-interactive-issue-creation` and summary; reference GitHub issue with `Fixes nold-ai/specfact-cli#173`). -- [ ] 10.4 Verify PR and branch are linked to issue in Development section. +- [x] 10.1 Ensure all changes are committed: `git add .` and `git commit -m "feat(backlog): add backlog add for interactive issue creation"` +- [x] 10.2 Push to remote: `git push origin feature/backlog-core-02-interactive-issue-creation` +- [x] 10.3 Create PR: `gh pr create --repo nold-ai/specfact-cli --base dev --head feature/backlog-core-02-interactive-issue-creation --title "feat(backlog): add backlog add for interactive issue creation" --body-file <path>` (use repo PR template; add OpenSpec change ID `backlog-core-02-interactive-issue-creation` and summary; reference GitHub issue with `Fixes nold-ai/specfact-cli#173`). +- [x] 10.4 Verify PR and branch are linked to issue in Development section. diff --git a/openspec/changes/bundle-mapper-01-mapping-strategy/CHANGE_VALIDATION.md b/openspec/changes/archive/2026-02-22-bundle-mapper-01-mapping-strategy/CHANGE_VALIDATION.md similarity index 89% rename from openspec/changes/bundle-mapper-01-mapping-strategy/CHANGE_VALIDATION.md rename to openspec/changes/archive/2026-02-22-bundle-mapper-01-mapping-strategy/CHANGE_VALIDATION.md index c7980830..46ccf56b 100644 --- a/openspec/changes/bundle-mapper-01-mapping-strategy/CHANGE_VALIDATION.md +++ b/openspec/changes/archive/2026-02-22-bundle-mapper-01-mapping-strategy/CHANGE_VALIDATION.md @@ -107,3 +107,10 @@ This change was re-validated after renaming and updating to align with the modul - All old change ID references updated to new module-scoped naming **Result**: Pass — format compliant, module architecture aligned, no breaking changes introduced. + +## Remediation Re-Validation (2026-02-22) + +- Scope: review defect remediation for stale historical bundle IDs, history key encoding ambiguity, conflicting content contribution, and malformed threshold parsing. +- Validation command: `openspec validate bundle-mapper-01-mapping-strategy --strict` +- Result: `Change 'bundle-mapper-01-mapping-strategy' is valid` +- Notes: telemetry flush warnings were emitted due restricted network (`edge.openspec.dev`) but validation completed successfully with exit code 0. diff --git a/openspec/changes/archive/2026-02-22-bundle-mapper-01-mapping-strategy/TDD_EVIDENCE.md b/openspec/changes/archive/2026-02-22-bundle-mapper-01-mapping-strategy/TDD_EVIDENCE.md new file mode 100644 index 00000000..dfa51106 --- /dev/null +++ b/openspec/changes/archive/2026-02-22-bundle-mapper-01-mapping-strategy/TDD_EVIDENCE.md @@ -0,0 +1,43 @@ +# TDD Evidence: bundle-mapper-01-mapping-strategy + +## Review findings intake (2026-02-22) + +- Historical scorer may choose stale bundle IDs not present in current `available_bundle_ids`. +- History key format is ambiguous because `|` is used for both field and tag separators. +- Content signal can boost confidence even when it points to a different bundle than the selected primary bundle. +- Threshold parsing crashes on malformed user config values instead of falling back to defaults. + +## Pre-implementation (failing run) + +- **Command**: `hatch run pytest modules/bundle-mapper/tests/ -v --no-cov` +- **Timestamp**: 2026-02-18 (session) +- **Result**: Collection errors — `ModuleNotFoundError: No module named 'bundle_mapper'` (resolved by adding `conftest.py` with `sys.path.insert` for module `src`). Then `BeartypeDecorHintPep3119Exception` for `_ItemLike` Protocol (resolved by `@runtime_checkable`). + +## Post-implementation (passing run) + +- **Command**: `hatch run pytest modules/bundle-mapper/tests/ -v --no-cov` +- **Timestamp**: 2026-02-18 +- **Result**: 11 passed in 0.71s +- **Tests**: test_bundle_mapping_model (3), test_bundle_mapper_engine (5), test_mapping_history (3) + +## Pre-implementation (review-defect regression tests) + +- **Command**: `hatch run pytest modules/bundle-mapper/tests/unit/test_bundle_mapper_engine.py modules/bundle-mapper/tests/unit/test_mapping_history.py -q` +- **Timestamp**: 2026-02-22 +- **Result**: 4 failed, 9 passed +- **Failure summary**: + - `test_historical_mapping_ignores_stale_bundle_ids`: primary mapping was `None`/invalid due to stale history IDs + - `test_conflicting_content_signal_does_not_increase_primary_confidence`: confidence was `0.85` instead of `0.80` + - `test_item_key_similarity_does_not_false_match_tag_lists`: returned false-positive similarity (`True`) + - `test_load_bundle_mapping_config_malformed_thresholds_use_defaults`: `ValueError` raised for non-numeric thresholds + +## Post-implementation (review-defect regression tests) + +- **Command**: `hatch run pytest modules/bundle-mapper/tests/unit/test_bundle_mapper_engine.py modules/bundle-mapper/tests/unit/test_mapping_history.py -q` +- **Timestamp**: 2026-02-22 +- **Result**: 13 passed in 0.75s +- **Tests**: + - stale historical bundle IDs are ignored during scoring + - unambiguous history key serialization preserves tag semantics + - conflicting content signal does not boost different primary bundle confidence + - malformed thresholds fall back to defaults diff --git a/openspec/changes/bundle-mapper-01-mapping-strategy/proposal.md b/openspec/changes/archive/2026-02-22-bundle-mapper-01-mapping-strategy/proposal.md similarity index 100% rename from openspec/changes/bundle-mapper-01-mapping-strategy/proposal.md rename to openspec/changes/archive/2026-02-22-bundle-mapper-01-mapping-strategy/proposal.md diff --git a/openspec/changes/bundle-mapper-01-mapping-strategy/specs/bundle-mapping/spec.md b/openspec/changes/archive/2026-02-22-bundle-mapper-01-mapping-strategy/specs/bundle-mapping/spec.md similarity index 80% rename from openspec/changes/bundle-mapper-01-mapping-strategy/specs/bundle-mapping/spec.md rename to openspec/changes/archive/2026-02-22-bundle-mapper-01-mapping-strategy/specs/bundle-mapping/spec.md index bdc04a04..9bfc935f 100644 --- a/openspec/changes/bundle-mapper-01-mapping-strategy/specs/bundle-mapping/spec.md +++ b/openspec/changes/archive/2026-02-22-bundle-mapper-01-mapping-strategy/specs/bundle-mapping/spec.md @@ -29,25 +29,6 @@ The system SHALL provide a `BundleMapper` that computes mapping from backlog ite - **WHEN** no signals match any bundle - **THEN** the system returns mapping with primary_bundle_id=None and confidence=0.0 -### Requirement: Confidence-Based Routing - -The system SHALL route bundle mappings based on confidence thresholds: auto-assign (>=0.8), prompt user (0.5-0.8), require explicit selection (<0.5). - -#### Scenario: Auto-assign high confidence - -- **WHEN** mapping confidence >= 0.8 -- **THEN** the system automatically assigns to bundle (unless user declines) - -#### Scenario: Prompt for medium confidence - -- **WHEN** mapping confidence 0.5-0.8 -- **THEN** the system prompts user with suggested bundle and rationale, allowing selection from candidates - -#### Scenario: Require explicit selection for low confidence - -- **WHEN** mapping confidence < 0.5 -- **THEN** the system requires user to explicitly select a bundle (no silent assignment) - ### Requirement: Mapping History Persistence The system SHALL persist mapping rules learned from user confirmations. @@ -62,11 +43,24 @@ The system SHALL persist mapping rules learned from user confirmations. - **WHEN** a new item matches historical pattern (same assignee, area, tags) - **THEN** the system uses historical mapping frequency to boost confidence score +#### Scenario: Historical mapping ignores stale bundle ids + +- **GIVEN** history contains bundle ids that are no longer present in available bundles +- **WHEN** historical scoring is computed +- **THEN** stale bundle ids are ignored +- **AND** returned historical bundle ids are always members of current available bundles + #### Scenario: Mapping rules from config - **WHEN** config file contains mapping rules (e.g., "assignee=alice → backend-services") - **THEN** the system applies these rules before computing other signals +#### Scenario: History key encoding is unambiguous + +- **WHEN** item keys are serialized for history matching +- **THEN** field delimiters and tag-value delimiters do not collide +- **AND** round-trip parsing preserves all tag values without truncation + ### Requirement: Interactive Mapping UI The system SHALL provide an interactive prompt for bundle selection with confidence visualization and candidate options. diff --git a/openspec/changes/bundle-mapper-01-mapping-strategy/specs/confidence-scoring/spec.md b/openspec/changes/archive/2026-02-22-bundle-mapper-01-mapping-strategy/specs/confidence-scoring/spec.md similarity index 85% rename from openspec/changes/bundle-mapper-01-mapping-strategy/specs/confidence-scoring/spec.md rename to openspec/changes/archive/2026-02-22-bundle-mapper-01-mapping-strategy/specs/confidence-scoring/spec.md index a6f0d975..ff011847 100644 --- a/openspec/changes/bundle-mapper-01-mapping-strategy/specs/confidence-scoring/spec.md +++ b/openspec/changes/archive/2026-02-22-bundle-mapper-01-mapping-strategy/specs/confidence-scoring/spec.md @@ -62,6 +62,13 @@ The system SHALL score content similarity between item text and existing specs i - **WHEN** item text has no keywords in common with bundle specs - **THEN** the system assigns score 0.0 for that bundle +#### Scenario: Conflicting content signal does not increase confidence + +- **GIVEN** explicit or historical scoring selected a primary bundle +- **AND** top content similarity points to a different bundle +- **WHEN** final confidence is calculated +- **THEN** the content contribution is not added to the selected primary bundle confidence + #### Scenario: Tokenization for matching - **WHEN** content similarity is computed @@ -90,3 +97,9 @@ The system SHALL use configurable confidence thresholds for routing decisions. - **WHEN** user configures custom thresholds in `.specfact/config.yaml` - **THEN** the system uses custom thresholds instead of defaults + +#### Scenario: Malformed thresholds fall back to defaults + +- **WHEN** config contains non-numeric threshold values +- **THEN** mapper initialization does not fail +- **AND** default threshold values are used diff --git a/openspec/changes/archive/2026-02-22-bundle-mapper-01-mapping-strategy/tasks.md b/openspec/changes/archive/2026-02-22-bundle-mapper-01-mapping-strategy/tasks.md new file mode 100644 index 00000000..6b50ed6a --- /dev/null +++ b/openspec/changes/archive/2026-02-22-bundle-mapper-01-mapping-strategy/tasks.md @@ -0,0 +1,163 @@ +## 1. Git Workflow + +- [x] 1.1 Create git worktree branch `feature/add-bundle-mapping-strategy` from `dev` branch + - [x] 1.1.1 Ensure primary checkout is on dev and up to date: `git checkout dev && git pull origin dev` + - [x] 1.1.2 Create branch: `scripts/worktree.sh create feature/add-bundle-mapping-strategy` + - [x] 1.1.3 Verify branch in worktree: `git worktree list` includes the branch path; then run `git branch --show-current` inside that worktree. + +## 2. BundleMapping Model + +- [x] 2.1 Create `src/specfact_cli/models/bundle_mapping.py` + - [x] 2.1.1 Define `BundleMapping` dataclass with fields: primary_bundle_id, confidence, candidates, explained_reasoning + - [x] 2.1.2 Add `@beartype` decorator for runtime type checking + - [x] 2.1.3 Add `@icontract` decorators with `@require`/`@ensure` contracts + +## 3. BundleMapper Engine + +- [x] 3.1 Create `modules/bundle-mapper/src/bundle_mapper/mapper/engine.py` + - [x] 3.1.1 Implement `BundleMapper` class with `compute_mapping(item: BacklogItem) -> BundleMapping` + - [x] 3.1.2 Implement `_score_explicit_mapping()` for explicit label signals (bundle:xyz tags) + - [x] 3.1.3 Implement `_score_historical_mapping()` for historical pattern signals + - [x] 3.1.4 Implement `_score_content_similarity()` for content-based signals (keyword matching) + - [x] 3.1.5 Implement weighted confidence calculation (0.8 × explicit + 0.15 × historical + 0.05 × content) + - [x] 3.1.6 Implement `_item_key()` for creating metadata keys for history matching + - [x] 3.1.7 Implement `_item_keys_similar()` for comparing metadata keys + - [x] 3.1.8 Implement `_explain_score()` for human-readable explanations + - [x] 3.1.9 Implement `_build_explanation()` for detailed mapping rationale + - [x] 3.1.10 Add `@beartype` decorator for runtime type checking + - [x] 3.1.11 Add `@icontract` decorators with `@require`/`@ensure` contracts + +## 4. Mapping History Persistence + +- [x] 4.1 Extend `.specfact/config.yaml` structure + - [x] 4.1.1 Add `backlog.bundle_mapping.rules` section for persistent mapping rules + - [x] 4.1.2 Add `backlog.bundle_mapping.history` section for auto-populated historical mappings + - [x] 4.1.3 Add `backlog.bundle_mapping.explicit_label_prefix` config (default: "bundle:") + - [x] 4.1.4 Add `backlog.bundle_mapping.auto_assign_threshold` config (default: 0.8) + - [x] 4.1.5 Add `backlog.bundle_mapping.confirm_threshold` config (default: 0.5) + +## 5. Mapping Rule Model + +- [x] 5.1 Create `MappingRule` Pydantic model + - [x] 5.1.1 Define fields: pattern, bundle_id, action, confidence + - [x] 5.1.2 Implement `matches(item: BacklogItem) -> bool` method + - [x] 5.1.3 Support pattern matching: tag=~regex, assignee=exact, area=exact + - [x] 5.1.4 Add `@beartype` decorator for runtime type checking + +## 6. Mapping History Functions + +- [x] 6.1 Implement `save_user_confirmed_mapping()` function + - [x] 6.1.1 Create item_key from item metadata + - [x] 6.1.2 Increment mapping count in history + - [x] 6.1.3 Save to config file + - [x] 6.1.4 Add `@beartype` decorator for runtime type checking + +## 7. Interactive Mapping UI + +- [x] 7.1 Implement `ask_bundle_mapping()` function in `src/specfact_cli/cli/backlog_commands.py` + - [x] 7.1.1 Display confidence level (✓ high, ? medium, ! low) + - [x] 7.1.2 Show suggested bundle with reasoning + - [x] 7.1.3 Display alternative candidates with scores + - [x] 7.1.4 Provide options: accept, select from candidates, show all bundles, skip + - [x] 7.1.5 Handle user selection and return bundle_id + - [x] 7.1.6 Add `@beartype` decorator for runtime type checking + +## 8. CLI Integration: --auto-bundle Flag + +- [x] 8.1 Extend `backlog refine` command + - [x] 8.1.1 Add `--auto-bundle` flag option + - [x] 8.1.2 Add `--auto-accept-bundle` flag option + - [x] 8.1.3 Integrate bundle mapping into refinement workflow + - [x] 8.1.4 Auto-assign if confidence >= 0.8 + - [x] 8.1.5 Prompt user if confidence 0.5-0.8 + - [x] 8.1.6 Require explicit selection if confidence < 0.5 + +- [x] 8.2 Extend `backlog import` command + - [x] 8.2.1 Add `--auto-bundle` flag option + - [x] 8.2.2 Add `--auto-accept-bundle` flag option + - [x] 8.2.3 Integrate bundle mapping into import workflow + - [x] 8.2.4 Use mapping if `--bundle` not specified + +## 9. Source Tracking Extension + +- [x] 9.1 Extend `src/specfact_cli/models/source_tracking.py` + - [x] 9.1.1 Add `bundle_id` field (Optional[str]) + - [x] 9.1.2 Add `mapping_confidence` field (Optional[float]) + - [x] 9.1.3 Add `mapping_method` field (Optional[str]) - "explicit_label", "historical", "content_similarity", "user_confirmed" + - [x] 9.1.4 Add `mapping_timestamp` field (Optional[datetime]) + - [x] 9.1.5 Ensure backward compatibility (all fields optional) + +## 10. OpenSpec Generation Integration + +- [x] 10.1 Extend `_write_openspec_change_from_proposal()` function + - [x] 10.1.1 Add `mapping: Optional[BundleMapping]` parameter + - [x] 10.1.2 Update source_tracking with mapping metadata + - [x] 10.1.3 Include mapping information in proposal.md source tracking section + - [x] 10.1.4 Ensure backward compatibility (parameter optional) + +## 11. Code Quality and Contract Validation + +- [x] 11.1 Apply code formatting + - [x] 11.1.1 Run `hatch run format` to apply black and isort + - [x] 11.1.2 Verify all files are properly formatted +- [x] 11.2 Run linting checks + - [x] 11.2.1 Run `hatch run lint` to check for linting errors + - [x] 11.2.2 Fix all pylint, ruff, and other linter errors +- [x] 11.3 Run type checking + - [x] 11.3.1 Run `hatch run type-check` to verify type annotations + - [x] 11.3.2 Fix all basedpyright type errors +- [x] 11.4 Verify contract decorators + - [x] 11.4.1 Ensure all new public functions have `@beartype` decorators + - [x] 11.4.2 Ensure all new public functions have `@icontract` decorators with appropriate `@require`/`@ensure` + +## 12. Testing and Validation + +- [x] 12.1 Add new tests + - [x] 12.1.1 Add unit tests for BundleMapper (9+ tests: 3 signals × 3 confidence levels) + - [x] 12.1.2 Add unit tests for explicit mapping signal (3+ tests) + - [x] 12.1.3 Add unit tests for historical mapping signal (3+ tests) + - [x] 12.1.4 Add unit tests for content similarity signal (3+ tests) + - [x] 12.1.5 Add unit tests for confidence scoring (5+ tests) + - [x] 12.1.6 Add unit tests for mapping history persistence (5+ tests) + - [x] 12.1.7 Add unit tests for interactive UI (5+ tests: user selections) + - [x] 12.1.8 Add integration tests: end-to-end mapping workflow (5+ tests) +- [x] 12.2 Update existing tests + - [x] 12.2.1 Update source_tracking tests to include new mapping fields + - [x] 12.2.2 Update OpenSpec generation tests to handle mapping parameter +- [x] 12.3 Run full test suite of modified tests only + - [x] 12.3.1 Run `hatch run smart-test` to execute only the tests that are relevant to the changes + - [x] 12.3.2 Verify all modified tests pass (unit, integration, E2E) +- [x] 12.4 Final validation + - [x] 12.4.1 Run `hatch run format` one final time + - [x] 12.4.2 Run `hatch run lint` one final time + - [x] 12.4.3 Run `hatch run type-check` one final time + - [x] 12.4.4 Run `hatch test --cover -v` one final time + - [x] 12.4.5 Verify no errors remain (formatting, linting, type-checking, tests) + +## 12R. Review Defect Remediation (2026-02-22) + +- [x] 12R.1 Add regression tests first (must fail before implementation) + - [x] 12R.1.1 Historical scoring ignores stale bundle IDs not present in available bundles + - [x] 12R.1.2 History key encoding is unambiguous and does not lose tag values + - [x] 12R.1.3 Conflicting content signal does not boost confidence for another primary bundle + - [x] 12R.1.4 Malformed threshold config values fall back to defaults without crashing +- [x] 12R.2 Record failing run in `TDD_EVIDENCE.md` with command, timestamp, and failure summary +- [x] 12R.3 Implement production fixes in mapper/history modules +- [x] 12R.4 Re-run regression tests and record passing run in `TDD_EVIDENCE.md` + +## 13. OpenSpec Validation + +- [x] 13.1 Validate change proposal + - [x] 13.1.1 Run `openspec validate add-bundle-mapping-strategy --strict` + - [x] 13.1.2 Fix any validation errors + - [x] 13.1.3 Re-run validation until passing + +## 14. Pull Request Creation + +- [x] 14.1 Prepare changes for commit + - [x] 14.1.1 Ensure all changes are committed: `git add .` + - [x] 14.1.2 Commit with conventional message: `git commit -m "feat: add bundle mapping strategy with confidence scoring"` + - [x] 14.1.3 Push to remote: `git push origin feature/add-bundle-mapping-strategy` +- [x] 14.2 Create Pull Request + - [x] 14.2.1 Create PR in specfact-cli repository + - [x] 14.2.2 Changes are ready for review in the branch diff --git a/openspec/changes/marketplace-01-central-module-registry/.openspec.yaml b/openspec/changes/archive/2026-02-22-marketplace-01-central-module-registry/.openspec.yaml similarity index 100% rename from openspec/changes/marketplace-01-central-module-registry/.openspec.yaml rename to openspec/changes/archive/2026-02-22-marketplace-01-central-module-registry/.openspec.yaml diff --git a/openspec/changes/marketplace-01-central-module-registry/CHANGE_VALIDATION.md b/openspec/changes/archive/2026-02-22-marketplace-01-central-module-registry/CHANGE_VALIDATION.md similarity index 100% rename from openspec/changes/marketplace-01-central-module-registry/CHANGE_VALIDATION.md rename to openspec/changes/archive/2026-02-22-marketplace-01-central-module-registry/CHANGE_VALIDATION.md diff --git a/openspec/changes/marketplace-01-central-module-registry/TDD_EVIDENCE.md b/openspec/changes/archive/2026-02-22-marketplace-01-central-module-registry/TDD_EVIDENCE.md similarity index 100% rename from openspec/changes/marketplace-01-central-module-registry/TDD_EVIDENCE.md rename to openspec/changes/archive/2026-02-22-marketplace-01-central-module-registry/TDD_EVIDENCE.md diff --git a/openspec/changes/marketplace-01-central-module-registry/design.md b/openspec/changes/archive/2026-02-22-marketplace-01-central-module-registry/design.md similarity index 100% rename from openspec/changes/marketplace-01-central-module-registry/design.md rename to openspec/changes/archive/2026-02-22-marketplace-01-central-module-registry/design.md diff --git a/openspec/changes/marketplace-01-central-module-registry/proposal.md b/openspec/changes/archive/2026-02-22-marketplace-01-central-module-registry/proposal.md similarity index 100% rename from openspec/changes/marketplace-01-central-module-registry/proposal.md rename to openspec/changes/archive/2026-02-22-marketplace-01-central-module-registry/proposal.md diff --git a/openspec/changes/marketplace-01-central-module-registry/specs/module-installation/spec.md b/openspec/changes/archive/2026-02-22-marketplace-01-central-module-registry/specs/module-installation/spec.md similarity index 100% rename from openspec/changes/marketplace-01-central-module-registry/specs/module-installation/spec.md rename to openspec/changes/archive/2026-02-22-marketplace-01-central-module-registry/specs/module-installation/spec.md diff --git a/openspec/changes/marketplace-01-central-module-registry/specs/module-lifecycle-management/spec.md b/openspec/changes/archive/2026-02-22-marketplace-01-central-module-registry/specs/module-lifecycle-management/spec.md similarity index 100% rename from openspec/changes/marketplace-01-central-module-registry/specs/module-lifecycle-management/spec.md rename to openspec/changes/archive/2026-02-22-marketplace-01-central-module-registry/specs/module-lifecycle-management/spec.md diff --git a/openspec/changes/marketplace-01-central-module-registry/specs/module-marketplace-registry/spec.md b/openspec/changes/archive/2026-02-22-marketplace-01-central-module-registry/specs/module-marketplace-registry/spec.md similarity index 100% rename from openspec/changes/marketplace-01-central-module-registry/specs/module-marketplace-registry/spec.md rename to openspec/changes/archive/2026-02-22-marketplace-01-central-module-registry/specs/module-marketplace-registry/spec.md diff --git a/openspec/changes/marketplace-01-central-module-registry/specs/module-packages/spec.md b/openspec/changes/archive/2026-02-22-marketplace-01-central-module-registry/specs/module-packages/spec.md similarity index 100% rename from openspec/changes/marketplace-01-central-module-registry/specs/module-packages/spec.md rename to openspec/changes/archive/2026-02-22-marketplace-01-central-module-registry/specs/module-packages/spec.md diff --git a/openspec/changes/marketplace-01-central-module-registry/specs/multi-location-discovery/spec.md b/openspec/changes/archive/2026-02-22-marketplace-01-central-module-registry/specs/multi-location-discovery/spec.md similarity index 100% rename from openspec/changes/marketplace-01-central-module-registry/specs/multi-location-discovery/spec.md rename to openspec/changes/archive/2026-02-22-marketplace-01-central-module-registry/specs/multi-location-discovery/spec.md diff --git a/openspec/changes/marketplace-01-central-module-registry/tasks.md b/openspec/changes/archive/2026-02-22-marketplace-01-central-module-registry/tasks.md similarity index 100% rename from openspec/changes/marketplace-01-central-module-registry/tasks.md rename to openspec/changes/archive/2026-02-22-marketplace-01-central-module-registry/tasks.md diff --git a/openspec/changes/archive/add-aisp-formal-clarification/ADOPTION_ASSESSMENT.md b/openspec/changes/archive/add-aisp-formal-clarification/ADOPTION_ASSESSMENT.md deleted file mode 100644 index 99ec7b8f..00000000 --- a/openspec/changes/archive/add-aisp-formal-clarification/ADOPTION_ASSESSMENT.md +++ /dev/null @@ -1,337 +0,0 @@ -# AISP Adoption Assessment: Should OpenSpec Use AISP? - -**Date:** 2026-01-15 -**Question:** Is AISP a legitimate specification protocol worth adopting, or is it "AI slop" / unproven experiment? - -## Executive Summary - -**Verdict: ⚠️ NOT RECOMMENDED for OpenSpec's primary use case** - -AISP is **not "AI slop"** — it has legitimate mathematical foundations and well-defined structure. However, it's **not suitable for OpenSpec's LLM-focused workflow** due to: - -1. **Reduced efficiency** (3-5x slower LLM processing) -2. **Unproven claims** (many assertions lack empirical validation) -3. **Missing tooling** (parser/validator not yet available) -4. **Better alternatives exist** (well-structured markdown achieves similar goals) - -**Recommendation:** Do NOT adopt AISP as primary format. Consider it as optional formalization layer for critical invariants only. - ---- - -## Is AISP "AI Slop"? - -### ❌ NO — It Has Legitimate Foundations - -**Evidence of Legitimacy:** - -1. **Mathematical Foundations:** - - ✅ Category Theory (functors, natural transformations, monads) — Real mathematics - - ✅ Natural Deduction (inference rules) — Standard formal logic - - ✅ Dependent Type Theory — Established type system - - ✅ Proof-carrying structure — Well-defined concept - -2. **Well-Defined Structure:** - - ✅ Grammar formally specified - - ✅ Type system defined - - ✅ Validation mechanisms specified - - ✅ Deterministic parsing defined - -3. **Academic Context:** - - Harvard capstone project (legitimate research) - - MIT license (open source) - - Published specification - -**Verdict:** AISP is **NOT "AI slop"** — it's a legitimate formal specification language with real mathematical foundations. - ---- - -## Is AISP an Unproven Experiment? - -### ⚠️ PARTIALLY — Many Claims Lack Empirical Validation - -**Unproven Claims:** - -1. **"Reduces AI decision points from 40-65% to <2%"** - - ❌ No empirical evidence provided - - ❌ "Decision points" not clearly defined - - ❌ Symbol interpretation adds new decision points - -2. **"Telephone game math" (10-step pipeline: 0.84% → 81.7% success)** - - ❌ No empirical data provided - - ❌ Based on theoretical calculations - - ❌ Not validated in real-world testing - -3. **"+22% SWE benchmark improvement"** - - ⚠️ Context missing (older version, no details) - - ⚠️ May not apply to AISP 5.1 Platinum - - ⚠️ No independent replication - -4. **"LLMs understand natively"** - - ⚠️ True that LLMs can parse it - - ❌ False that it's "native" (requires symbol lookup) - - ❌ Processing is slower than natural language - -**Proven Claims:** - -1. **Tic-Tac-Toe test: 6 ambiguities → 0** - - ✅ Likely true (formal notation reduces semantic ambiguity) - - ⚠️ But doesn't account for symbol interpretation overhead - -2. **Mathematical foundations** - - ✅ Category Theory is real - - ✅ Natural Deduction is standard - - ✅ Proof-carrying structure is well-defined - -**Verdict:** AISP is **PARTIALLY unproven** — mathematical foundations are real, but many performance/effectiveness claims lack empirical validation. - ---- - -## Should OpenSpec Adopt AISP? - -### ❌ NOT RECOMMENDED for Primary Use Case - -**Analysis Based on OpenSpec's Needs:** - -### 1. **LLM Optimization** (OpenSpec's Primary Goal) - -**AISP Performance:** - -- ❌ 3-5x slower processing than markdown -- ❌ Symbol lookup overhead (512 symbols) -- ❌ Poor scanability (dense notation) -- ❌ Higher effective token cost (reference dependency) - -**OpenSpec's Current Approach:** - -- ✅ Well-structured markdown with clear requirements -- ✅ Scenarios with WHEN/THEN format -- ✅ Immediate LLM comprehension -- ✅ High efficiency - -**Verdict:** ❌ AISP is **worse** for LLM consumption than current markdown approach. - -### 2. **Ambiguity Reduction** (OpenSpec's Goal) - -**AISP Approach:** - -- ✅ Low semantic ambiguity (`Ambig(D) < 0.02` for parsing) -- ⚠️ But symbol interpretation ambiguity not measured -- ⚠️ Requires parser tooling (not yet available) - -**OpenSpec's Current Approach:** - -- ✅ Clear requirement format ("SHALL", "MUST") -- ✅ Structured scenarios (WHEN/THEN) -- ✅ Can achieve very low ambiguity without symbol overhead - -**Verdict:** ⚠️ AISP may reduce semantic ambiguity, but OpenSpec's markdown can achieve similar results more efficiently. - -### 3. **Validation** (OpenSpec's Need) - -**AISP Approach:** - -- ✅ Validation mechanisms defined -- ⚠️ Parser/validator tooling planned Q1 2026 (not yet available) -- ⚠️ Currently no automatic enforcement - -**OpenSpec's Current Approach:** - -- ✅ `openspec validate` command exists -- ✅ Validation rules defined -- ✅ Working implementation - -**Verdict:** ⚠️ AISP validation is **theoretical** (defined but not implemented), while OpenSpec validation is **practical** (working now). - -### 4. **Maintainability** (OpenSpec's Need) - -**AISP Approach:** - -- ❌ Dense notation (hard to read) -- ❌ Requires 512-symbol glossary -- ❌ Poor human readability -- ❌ Steep learning curve - -**OpenSpec's Current Approach:** - -- ✅ Natural language (readable) -- ✅ Clear structure -- ✅ Easy to understand -- ✅ Low learning curve - -**Verdict:** ❌ AISP is **worse** for maintainability than current markdown approach. - ---- - -## When Would AISP Make Sense? - -### ✅ POTENTIAL USE CASES (Not OpenSpec's Primary Need) - -1. **Formal Verification:** - - Mathematical proofs required - - Type-theoretic guarantees needed - - Automated theorem proving - -2. **Multi-Agent Coordination:** - - Zero-tolerance for interpretation variance - - Deterministic parsing critical - - Proof-carrying code required - -3. **Academic Research:** - - Exploring formal specification languages - - Testing ambiguity reduction theories - - Category Theory applications - -4. **Critical Safety Systems:** - - Life-critical systems - - Mathematical guarantees required - - Formal verification mandatory - -**Verdict:** AISP might make sense for formal verification or critical systems, but **not for OpenSpec's LLM-focused specification workflow**. - ---- - -## Comparison: AISP vs. OpenSpec's Current Approach - -| Criterion | AISP | OpenSpec Markdown | Winner | -|-----------|------|------------------|--------| -| **LLM Processing Speed** | 3-5x slower | Fast | ✅ Markdown | -| **Human Readability** | Poor (dense) | Good (clear) | ✅ Markdown | -| **Ambiguity Reduction** | Low semantic | Low (with structure) | ⚠️ Tie | -| **Validation** | Theoretical | Practical | ✅ Markdown | -| **Maintainability** | Low | High | ✅ Markdown | -| **Learning Curve** | Steep | Gentle | ✅ Markdown | -| **Tooling** | Planned Q1 2026 | Available now | ✅ Markdown | -| **Formal Guarantees** | High | Low | ✅ AISP | -| **Mathematical Precision** | High | Medium | ✅ AISP | - -**Overall:** OpenSpec's markdown approach wins 7/9 criteria. - ---- - -## Risks of Adopting AISP - -### 1. **Efficiency Loss** - -- 3-5x slower LLM processing -- Higher token costs -- Reduced productivity - -### 2. **Maintainability Issues** - -- Harder for humans to read/edit -- Steeper learning curve -- Higher cognitive load - -### 3. **Tooling Dependency** - -- Parser/validator not yet available -- Uncertain release timeline -- Risk of delays - -### 4. **Unproven Benefits** - -- Many claims lack empirical validation -- May not deliver promised benefits -- Symbol interpretation overhead may offset gains - -### 5. **Over-Engineering** - -- Complexity exceeds needs -- Better alternatives exist -- Premature optimization - ---- - -## Alternative: Hybrid Approach - -**If formal precision is needed for specific use cases:** - -### Option 1: Optional AISP Formalization - -- Keep markdown as primary format -- Add optional AISP sections for critical invariants -- Example: - - ```markdown - ### Requirement: Backlog Adapter Extensibility - - **Natural Language:** - All backlog adapters SHALL follow the extensibility pattern. - - **Formal Property (Optional AISP):** - ```aisp - ∀adapter:BacklogAdapter→extensible_pattern(adapter) - ``` - - ``` - -### Option 2: AISP for Critical Paths Only - -- Use AISP only for safety-critical requirements -- Use markdown for everything else -- Reduces complexity while maintaining precision where needed - -### Option 3: Wait for Tooling - -- Monitor AISP parser/validator development -- Re-evaluate after Q1 2026 tooling release -- Test empirically before adoption - ---- - -## Final Recommendation - -### ❌ DO NOT ADOPT AISP as Primary Format - -**Reasons:** - -1. **Worse for LLM consumption** (primary OpenSpec use case) -2. **Unproven benefits** (many claims lack validation) -3. **Missing tooling** (parser/validator not available) -4. **Better alternatives exist** (well-structured markdown) -5. **Over-engineering** (complexity exceeds needs) - -### ✅ CONSIDER Optional Hybrid Approach - -**If formal precision is needed:** - -1. Keep markdown as primary format -2. Add optional AISP sections for critical invariants -3. Wait for tooling release (Q1 2026) before broader adoption -4. Test empirically before committing - -### ✅ MONITOR Development - -**Track:** - -- Parser/validator release (Q1 2026) -- Empirical validation of claims -- Real-world usage examples -- Tooling maturity - -**Re-evaluate after:** - -- Tooling is released and tested -- Empirical evidence validates claims -- Clear benefits demonstrated - ---- - -## Conclusion - -**AISP is NOT "AI slop"** — it has legitimate mathematical foundations and well-defined structure. However, it's **NOT suitable for OpenSpec's primary use case** (LLM-focused specification workflow). - -**Key Findings:** - -1. ✅ **Legitimate:** Mathematical foundations are real -2. ⚠️ **Unproven:** Many performance claims lack validation -3. ❌ **Inefficient:** Worse for LLM consumption than markdown -4. ⚠️ **Incomplete:** Tooling not yet available -5. ❌ **Over-engineered:** Complexity exceeds needs - -**Recommendation:** **Do NOT adopt AISP as primary format.** Consider optional hybrid approach for critical invariants only, and monitor development for future re-evaluation. - ---- - -**Rulesets Applied:** None (assessment task) -**AI Provider & Model:** Claude Sonnet 4.5 (claude-sonnet-4-20250514) diff --git a/openspec/changes/archive/add-aisp-formal-clarification/CHANGE_VALIDATION.md b/openspec/changes/archive/add-aisp-formal-clarification/CHANGE_VALIDATION.md deleted file mode 100644 index d9611708..00000000 --- a/openspec/changes/archive/add-aisp-formal-clarification/CHANGE_VALIDATION.md +++ /dev/null @@ -1,218 +0,0 @@ -# Change Validation Report: add-aisp-formal-clarification - -**Validation Date**: 2026-01-14 17:05:53 +0100 -**Change Proposal**: [proposal.md](./proposal.md) -**Validation Method**: Dry-run simulation in temporary workspace - ---- - -## Executive Summary - -- **Breaking Changes**: 0 detected / 0 resolved -- **Dependent Files**: 3 affected (all compatible, no updates required) -- **Impact Level**: Low (additive changes, no interface modifications) -- **Validation Result**: ✅ Pass -- **User Decision**: Proceed with implementation - ---- - -## Format Validation - -### proposal.md Format: ✅ Pass - -- **Title format**: ✅ Correct (`# Change: Add AISP Formal Clarification to Spec-Kit and OpenSpec Workflows`) -- **Required sections**: ✅ All present (Why, What Changes, Impact) -- **"What Changes" format**: ✅ Correct (uses NEW/EXTEND/MODIFY markers) -- **"Impact" format**: ✅ Correct (lists Affected specs, Affected code, Integration points) - -### tasks.md Format: ✅ Pass - -- **Section headers**: ✅ Correct (uses hierarchical numbered format: `## 1.`, `## 2.`, etc.) -- **Task format**: ✅ Correct (uses `- [ ] 1.1 [Description]` format) -- **Sub-task format**: ✅ Correct (uses `- [ ] 1.1.1 [Description]` with indentation) - -### Format Issues Found: 0 - -### Format Issues Fixed: 0 - ---- - -## AISP Consistency Check - -- **Consistency Status**: ✅ All consistent -- **AISP Artifacts Checked**: 5 - - proposal.md ↔ proposal.aisp.md: ✅ consistent - - tasks.md ↔ tasks.aisp.md: ✅ consistent - - specs/bridge-adapter/spec.md ↔ spec.aisp.md: ✅ consistent - - specs/cli-output/spec.md ↔ spec.aisp.md: ✅ consistent - - specs/data-models/spec.md ↔ spec.aisp.md: ✅ consistent -- **Inconsistencies Detected**: 0 -- **AISP Updates Performed**: 0 -- **Ambiguities Detected**: 0 -- **Clarifications Applied**: 0 -- **User Feedback Required**: No -- **All Clarifications Resolved**: Yes - -### AISP Structure Validation - -All AISP artifacts have valid AISP 5.1 structure: - -- ✅ Valid header: `𝔸5.1.complete@2026-01-14` -- ✅ Valid context: `γ≔...` -- ✅ Valid references: `ρ≔⟨...⟩` -- ✅ All required blocks present: `⟦Ω⟧`, `⟦Σ⟧`, `⟦Γ⟧`, `⟦Λ⟧`, `⟦Χ⟧`, `⟦Ε⟧` -- ✅ Evidence blocks with `Ambig < 0.02`: - - proposal.aisp.md: `δ≜0.85`, `τ≜◊⁺⁺`, `⊢Ambig<0.02` - - tasks.aisp.md: `δ≜0.88`, `τ≜◊⁺⁺`, `⊢Ambig<0.02` - - specs/bridge-adapter/spec.aisp.md: `δ≜0.82`, `τ≜◊⁺⁺`, `⊢Ambig<0.02` - - specs/cli-output/spec.aisp.md: `δ≜0.84`, `τ≜◊⁺⁺`, `⊢Ambig<0.02` - - specs/data-models/spec.aisp.md: `δ≜0.86`, `τ≜◊⁺⁺`, `⊢Ambig<0.02` - -### Ambiguity Check - -- ✅ No vague terms detected in markdown files -- ✅ All AISP files provide formal clarification with `Ambig < 0.02` -- ✅ All decision points encoded in AISP formal notation -- ✅ All invariants clearly defined in AISP blocks - ---- - -## Breaking Changes Detected - -### Analysis Result: ✅ No Breaking Changes - -**Interface Analysis:** - -1. **New files to be created:** - - `src/specfact_cli/parsers/aisp.py` - New file, no breaking changes - - `src/specfact_cli/models/aisp.py` - New file, no breaking changes - - `src/specfact_cli/validators/aisp_schema.py` - New file, no breaking changes - - `src/specfact_cli/commands/clarify.py` - New file, no breaking changes - -2. **Existing files to be extended:** - - `src/specfact_cli/adapters/openspec.py` - Add new methods for AISP generation - - **Breaking**: ❌ No - Adding new methods is non-breaking - - **Impact**: Additive change - new functionality available - - `src/specfact_cli/adapters/speckit.py` - Add new methods for AISP generation - - **Breaking**: ❌ No - Adding new methods is non-breaking - - **Impact**: Additive change - new functionality available - - `src/specfact_cli/commands/validate.py` - Add `--aisp` and `--aisp --against-code` flags - - **Breaking**: ❌ No - Optional flags, backward compatible - - **Impact**: Additive change - new functionality, existing behavior preserved - - `src/specfact_cli/utils/bundle_loader.py` - Add AISP storage functions - - **Breaking**: ❌ No - Adding new functions is non-breaking - - **Impact**: Additive change - new functionality available - -3. **Adapter interface:** - - `BridgeAdapter` interface remains unchanged - - New methods added to adapters don't affect existing interface - - All existing adapter methods continue to work as before - ---- - -## Dependencies Affected - -### Files That Use OpenSpecAdapter - -1. **src/specfact_cli/adapters/**init**.py** - - **Usage**: Imports and registers OpenSpecAdapter - - **Impact**: ✅ No impact - Registration unchanged - - **Update Required**: ❌ No - -2. **src/specfact_cli/sync/bridge_sync.py** (if exists) - - **Usage**: Uses OpenSpecAdapter via BridgeAdapter interface - - **Impact**: ✅ No impact - Interface unchanged, new methods optional - - **Update Required**: ❌ No - -### Files That Use SpecKitAdapter - -1. **src/specfact_cli/adapters/**init**.py** - - **Usage**: Imports and registers SpecKitAdapter - - **Impact**: ✅ No impact - Registration unchanged - - **Update Required**: ❌ No - -### Files That Use validate Command - -1. **CLI entry point** (if exists) - - **Usage**: Registers validate command - - **Impact**: ✅ No impact - Command registration unchanged, new flags optional - - **Update Required**: ❌ No - -### Summary - -- **Critical Updates Required**: 0 -- **Recommended Updates**: 0 -- **Optional Updates**: 0 -- **No Impact**: All existing code compatible - ---- - -## Impact Assessment - -- **Code Impact**: Low - Additive changes only, no modifications to existing interfaces -- **Test Impact**: Medium - New tests required for AISP functionality, existing tests unaffected -- **Documentation Impact**: Medium - New documentation for AISP integration required -- **Release Impact**: Minor - New feature addition, backward compatible - ---- - -## User Decision - -**Decision**: Proceed with implementation - -**Rationale**: - -- No breaking changes detected -- All changes are additive (new files, new methods, optional flags) -- AISP consistency check passed - all AISP artifacts are valid and consistent -- No ambiguities detected - all specifications are clear -- OpenSpec validation passed - -**Next Steps**: - -1. Review validation report -2. Proceed with implementation: `/openspec-apply add-aisp-formal-clarification` -3. Follow tasks.md implementation checklist -4. Use AISP formalized versions (`.aisp.md` files) for implementation guidance - ---- - -## OpenSpec Validation - -- **Status**: ✅ Pass -- **Validation Command**: `openspec validate add-aisp-formal-clarification --strict` -- **Issues Found**: 0 -- **Issues Fixed**: 0 -- **Re-validated**: No (proposal unchanged) - ---- - -## Validation Artifacts - -- **Temporary workspace**: Not created (no code simulation needed - additive changes only) -- **Interface scaffolds**: Not needed (no interface changes) -- **Dependency graph**: Analyzed via codebase search -- **AISP consistency report**: Generated and validated - ---- - -## Additional Notes - -### AISP Integration Benefits - -- **Mathematical Precision**: All AISP artifacts have `Ambig < 0.02`, ensuring precise AI LLM interpretation -- **Formal Clarification**: Decision trees, invariants, and error handling encoded in formal notation -- **Tool-Agnostic**: AISP stored internally in project bundles, independent of SDD tool formats -- **Developer-Friendly**: Developers work with natural language specs, AI LLM consumes AISP - -### Implementation Readiness - -- ✅ All AISP artifacts validated and consistent -- ✅ No breaking changes detected -- ✅ All dependencies compatible -- ✅ OpenSpec validation passed -- ✅ Ready for implementation - ---- - -**Validation Complete**: Change is safe to implement. All checks passed. diff --git a/openspec/changes/archive/add-aisp-formal-clarification/CLAIM_ANALYSIS.md b/openspec/changes/archive/add-aisp-formal-clarification/CLAIM_ANALYSIS.md deleted file mode 100644 index 46467f8c..00000000 --- a/openspec/changes/archive/add-aisp-formal-clarification/CLAIM_ANALYSIS.md +++ /dev/null @@ -1,585 +0,0 @@ -# AISP Claim Analysis: When Is This True? - -**Date:** 2026-01-15 -**Last Updated:** 2026-01-15 (Added implementation status analysis) -**Analyzing Claim:** -> "AISP is a self-validating, proof-carrying protocol designed for high-density, low-ambiguity AI-to-AI communication. It utilizes Category Theory and Natural Deduction to ensure `Ambig(D) < 0.02`, creating a zero-trust architecture for autonomous agent swarms." - -## Implementation Status Context - -**Critical Finding:** The AISP specification defines mechanisms and structures, but many require tooling/implementation that is **planned but not yet complete**: - -- **Parser & Validator:** Planned for Q1 2026 (per GitHub roadmap) -- **Automatic Validation:** Specified in design but requires parser/validator tooling -- **Symbol Interpretation:** Mechanisms defined but tooling needed - -This analysis evaluates claims both: - -1. **By Design** (what the spec defines) -2. **In Practice** (what currently exists vs. what's planned) - -## Implementation Status Context - -**Critical Finding:** The AISP specification defines mechanisms and structures, but many require tooling/implementation that is **planned but not yet complete**: - -- **Parser & Validator:** Planned for Q1 2026 (per GitHub roadmap) -- **Automatic Validation:** Specified in design but requires parser/validator tooling -- **Symbol Interpretation:** Mechanisms defined but tooling needed - -This analysis evaluates claims both: - -1. **By Design** (what the spec defines) -2. **In Practice** (what currently exists vs. what's planned) - -**Key Evidence:** - -- AISP Reference line 25: `ρ≔⟨glossary,types,rules,functions,errors,proofs,parser,agent⟩` — Parser is part of spec -- AISP Reference line 445: `⊢deterministic:∀D:∃!AST.parse(D)→AST` — Deterministic parsing is design goal -- AISP Reference line 440: `drift_detected⇒reparse(original); ambiguity_detected⇒reject∧clarify` — Automatic rejection defined -- GitHub Repository (aisp-open-core): Parser & Validator Release planned Q1 2026 - -## Claim Breakdown - -The claim contains 6 distinct assertions: - -1. **Self-validating** -2. **Proof-carrying** -3. **High-density, low-ambiguity AI-to-AI communication** -4. **Utilizes Category Theory and Natural Deduction** -5. **Ensures `Ambig(D) < 0.02`** -6. **Creates zero-trust architecture for autonomous agent swarms** - ---- - -## 1. "Self-validating" - -### What This Means - -A protocol that automatically validates itself without external tools or manual checks. - -### Evidence from AISP Reference - -**✅ Validation Function Exists:** - -```aisp -validate:𝕊→𝕄 𝕍; validate≜⌈⌉∘δ∘Γ?∘∂ -Γ?:𝔻oc→Option⟨Proof⟩; Γ?≜λd.search(Γ,wf(d),k_max) -``` - -**✅ Error Handling for Ambiguity:** - -```aisp -ε_ambig≜⟨Ambig(D)≥0.02,reject∧⊥⟩ -``` - -**✅ Well-Formedness Checks:** - -```aisp -𝔻oc≜Σ(b⃗:Vec n 𝔅)(π:Γ⊢wf(b⃗)) -``` - -### When Is This True? - -**✅ TRUE** — **If validation is automatically applied:** - -- Documents include well-formedness proofs (`π:Γ⊢wf(b⃗)`) -- Validation function exists (`validate`) -- Error handling rejects invalid documents (`ε_ambig`) - -**❌ FALSE** — **If validation requires manual invocation:** - -- No evidence of automatic validation on document creation -- Validation appears to be a function that must be called -- No parser/validator tool shown to automatically check documents - -### Implementation Status - -**From AISP Reference:** - -- Line 25: `ρ≔⟨glossary,types,rules,functions,errors,proofs,parser,agent⟩` — Parser is part of the spec -- Line 445: `⊢deterministic:∀D:∃!AST.parse(D)→AST` — Deterministic parsing is a design goal -- Line 440: `drift_detected⇒reparse(original); ambiguity_detected⇒reject∧clarify` — Automatic rejection mechanisms defined - -**From GitHub Repository (aisp-open-core):** - -- **Parser & Validator Release:** 📅 Planned for Q1 2026 -- **Current Status:** Specification complete, tooling in development - -### Verdict: **✅ TRUE BY DESIGN, ⚠️ CONDITIONAL IN PRACTICE** - -**By Design (Specification):** - -- ✅ Self-validating structure exists (proofs, validation functions) -- ✅ Automatic enforcement mechanisms defined (`ambiguity_detected⇒reject`) -- ✅ Deterministic parsing specified (`⊢deterministic:∀D:∃!AST.parse(D)→AST`) - -**In Practice (Current Implementation):** - -- ⚠️ Parser/validator tooling planned but not yet released (Q1 2026) -- ⚠️ Automatic validation requires tooling that's in development -- ⚠️ Currently depends on manual validation or LLM-based parsing - -**Conclusion:** The claim is **TRUE by design** (specification defines automatic validation), but **CONDITIONAL in practice** (requires parser/validator tooling that's planned but not yet complete). - ---- - -## 2. "Proof-carrying" - -### What This Means - -Documents carry their own proofs of correctness/well-formedness. - -### Evidence from AISP Reference - -**✅ Document Structure Includes Proofs:** - -```aisp -𝔻oc≜Σ(b⃗:Vec n 𝔅)(π:Γ⊢wf(b⃗)) -``` - -Translation: Document = (content blocks, proof of well-formedness) - -**✅ Proof Search Function:** - -```aisp -Γ?:𝔻oc→Option⟨Proof⟩; Γ?≜λd.search(Γ,wf(d),k_max) -``` - -**✅ Evidence Block Required:** - -```aisp -Doc≜𝔸≫CTX?≫REF?≫⟦Ω⟧≫⟦Σ⟧≫⟦Γ⟧≫⟦Λ⟧≫⟦Χ⟧?≫⟦Ε⟧ -``` - -The `⟦Ε⟧` (Evidence) block is required and contains proofs. - -**✅ Theorems Section:** - -```aisp -⟦Θ:Proofs⟧{ - ∴∀L:Signal(L)≡L - π:V_H⊕V_L⊕V_S preserves;direct sum lossless∎ - ... -} -``` - -### When Is This True? - -**✅ TRUE** — **Always, by design:** - -- Document structure requires proof (`π:Γ⊢wf(b⃗)`) -- Evidence block (`⟦Ε⟧`) is required in document structure -- Proofs are embedded in documents, not external - -### Verdict: **✅ TRUE** - -AISP documents are designed to carry proofs. This is a structural property of the format. - ---- - -## 3. "High-density, low-ambiguity AI-to-AI communication" - -### What This Means - -- **High-density:** Packing maximum information into minimal space -- **Low-ambiguity:** Minimal interpretation variance - -### Evidence from AISP Reference - -**✅ High-Density:** - -- 512 symbols across 8 categories -- Dense notation: `∀adapter:BacklogAdapter→category(adapter)≡BacklogAdapters∧extensible_pattern(adapter)` -- Single lines contain multiple concepts - -**✅ Low-Ambiguity Claim:** - -```aisp -∀D∈AISP:Ambig(D)<0.02 -Ambig≜λD.1-|Parse_u(D)|/|Parse_t(D)| -``` - -### When Is This True? - -**✅ High-Density: TRUE** - -- AISP is extremely dense (symbols pack more information than words) -- Single expressions convey complex relationships - -**⚠️ Low-Ambiguity: PARTIALLY TRUE** - -- **Semantic ambiguity:** Likely low (<2% for semantic meaning) -- **Symbol interpretation ambiguity:** Mechanisms defined but effectiveness unclear - -**From AISP Reference:** - -- Line 436: `∀s∈Σ_512:Mean(s)≡Mean_0(s)` — Symbol meanings are fixed (anti-drift) -- Line 440: `drift_detected⇒reparse(original); ambiguity_detected⇒reject∧clarify` — Ambiguity detection and rejection defined -- Line 445: `⊢deterministic:∀D:∃!AST.parse(D)→AST` — Deterministic parsing ensures single interpretation - -**Symbol Interpretation Handling:** - -- **By Design:** Symbols have fixed meanings (`Mean(s)≡Mean_0(s)`), deterministic parsing ensures single AST -- **In Practice:** Requires parser implementation that enforces deterministic parsing -- **Gap:** `Ambig(D)` formula measures parsing ambiguity, not symbol lookup overhead (different concern) - -### Verdict: **✅ TRUE for density, ⚠️ PARTIALLY TRUE for ambiguity** - -- High-density: ✅ Confirmed -- Low-ambiguity: ⚠️ **TRUE BY DESIGN** (deterministic parsing, fixed symbol meanings), but **CONDITIONAL IN PRACTICE** (requires parser implementation) -- **Note:** Symbol lookup overhead (efficiency) is separate from ambiguity (interpretation variance) - ---- - -## 4. "Utilizes Category Theory and Natural Deduction" - -### What This Means - -The protocol uses mathematical foundations from: - -- **Category Theory:** Functors, natural transformations, adjunctions, monads -- **Natural Deduction:** Formal inference rules - -### Evidence from AISP Reference - -**✅ Category Theory Section:** - -```aisp -⟦ℭ:Categories⟧{ - 𝐁𝐥𝐤≜⟨Ob≜𝔅,Hom≜λAB.A→B,∘,id⟩ - 𝐕𝐚𝐥≜⟨Ob≜𝕍,Hom≜λVW.V⊑W,∘,id⟩ - ... - ;; Functors - 𝔽:𝐁𝐥𝐤⇒𝐕𝐚𝐥; 𝔽.ob≜λb.validate(b); ... - ;; Natural Transformations - η:∂⟹𝔽; ... - ;; Adjunctions - ε⊣ρ:𝐄𝐫𝐫⇄𝐃𝐨𝐜; ... - ;; Monads - 𝕄_val≜ρ∘ε; ... -} -``` - -**✅ Natural Deduction Section:** - -```aisp -⟦Γ:Inference⟧{ - ───────────── [ax-header] - d↓₁≡𝔸 ⊢ wf₁(d) - - wf₁(d) wf₂(d) - ─────────────── [∧I-wf] - ⊢ wf(d) - ... -} -``` - -**✅ Natural Deduction Notation:** - -- Uses `⊢` (proves) symbol -- Inference rules in standard ND format -- Proof trees implied - -### When Is This True? - -**✅ TRUE** — **Always, by design:** - -- Category Theory: Explicitly defined (functors, natural transformations, adjunctions, monads) -- Natural Deduction: Inference rules follow ND format -- Both are structural elements of the specification - -### Verdict: **✅ TRUE** - -AISP explicitly uses both Category Theory and Natural Deduction as foundational elements. - ---- - -## 5. "Ensures `Ambig(D) < 0.02`" - -### What This Means - -The protocol guarantees that ambiguity is less than 2% for all documents. - -### Evidence from AISP Reference - -**✅ Ambiguity Definition:** - -```aisp -Ambig≜λD.1-|Parse_u(D)|/|Parse_t(D)| -``` - -**✅ Requirement Stated:** - -```aisp -∀D∈AISP:Ambig(D)<0.02 -``` - -**✅ Error Handling:** - -```aisp -ε_ambig≜⟨Ambig(D)≥0.02,reject∧⊥⟩ -``` - -### When Is This True? - -**⚠️ PARTIALLY TRUE** — **Depends on enforcement:** - -**✅ TRUE if:** - -- All AISP documents are validated before acceptance -- Parser/validator automatically rejects documents with `Ambig(D) ≥ 0.02` -- Tooling enforces the constraint - -**❌ FALSE if:** - -- Documents can be created without validation -- No automatic enforcement mechanism -- Constraint is aspirational, not enforced - -**⚠️ CAVEAT:** - -- Formula measures **parsing ambiguity** (unique parses vs. total parses) -- Does NOT measure **symbol interpretation ambiguity** -- A document could have `Ambig(D) < 0.02` for parsing but high ambiguity for symbol interpretation - -### Implementation Status - -**From AISP Reference:** - -- Line 32: `∀D∈AISP:Ambig(D)<0.02` — Requirement stated -- Line 221: `ε_ambig≜⟨Ambig(D)≥0.02,reject∧⊥⟩` — Error handling defined -- Line 440: `ambiguity_detected⇒reject∧clarify` — Automatic rejection mechanism -- Line 445: `⊢deterministic:∀D:∃!AST.parse(D)→AST` — Deterministic parsing ensures single parse - -**From GitHub Repository:** - -- Parser/validator tooling planned for Q1 2026 -- Will enforce `Ambig(D) < 0.02` constraint - -### Verdict: **✅ TRUE BY DESIGN, ⚠️ CONDITIONAL IN PRACTICE** - -**By Design (Specification):** - -- ✅ Requirement stated (`∀D∈AISP:Ambig(D)<0.02`) -- ✅ Automatic rejection defined (`ε_ambig`, `ambiguity_detected⇒reject`) -- ✅ Deterministic parsing ensures single parse (reduces parsing ambiguity) - -**In Practice (Current Implementation):** - -- ⚠️ Parser/validator tooling planned but not yet released -- ⚠️ Currently no automatic enforcement (documents can exist without validation) -- ⚠️ Constraint is aspirational until tooling is released - -**Scope Clarification:** - -- Formula measures **parsing ambiguity** (unique parses vs. total parses) -- Does NOT measure **symbol lookup overhead** (efficiency concern, not ambiguity) -- Deterministic parsing (`⊢deterministic`) addresses parsing ambiguity, not lookup efficiency - -**Conclusion:** The claim is **TRUE BY DESIGN** (specification defines enforcement mechanisms), but **CONDITIONAL IN PRACTICE** (requires parser/validator tooling that's planned but not yet complete). - ---- - -## 6. "Creates zero-trust architecture for autonomous agent swarms" - -### What This Means - -A security architecture where: - -- No agent trusts another by default -- All interactions are verified -- Autonomous agents can coordinate without central authority - -### Evidence from AISP Reference - -**❌ NO EXPLICIT ZERO-TRUST MECHANISMS:** - -- No mention of "zero-trust" beyond the abstract -- No authentication/authorization mechanisms -- No trust verification protocols - -**✅ INTEGRITY CHECKS (Related but not zero-trust):** - -```aisp -;; Immutability Physics -∀p:∂𝒩(p)⇒∂ℋ.id(p) -∀p:ℋ.id(p)≡SHA256(𝒩(p)) - -∴∀p:tamper(𝒩)⇒SHA256(𝒩)≠ℋ.id⇒¬reach(p) -π:CAS addressing;content-hash mismatch blocks∎ -``` - -**✅ BINDING FUNCTION (Agent compatibility, not trust):** - -```aisp -Δ⊗λ≜λ(A,B).case[ - Logic(A)∩Logic(B)⇒⊥ → 0, - Sock(A)∩Sock(B)≡∅ → 1, - Type(A)≠Type(B) → 2, - Post(A)⊆Pre(B) → 3 -] -``` - -### When Is This True? - -**❌ FALSE** — **No zero-trust mechanisms:** - -- **Zero-trust requires:** - - Identity verification - - Least-privilege access - - Continuous verification - - Explicit trust boundaries - -- **AISP provides:** - - Content integrity (SHA256 hashing) - - Agent compatibility checking (binding function) - - Proof-carrying structure - -- **Gap:** Integrity checks ≠ zero-trust architecture - - SHA256 ensures content hasn't changed, not that agent is trusted - - Binding function checks compatibility, not trustworthiness - - No authentication, authorization, or trust verification - -**⚠️ POSSIBLY TRUE IF:** - -- Zero-trust is interpreted as "no implicit trust in content" (integrity checks) -- But this is a weak interpretation — zero-trust typically means "verify everything, trust nothing" - -### Implementation Status - -**From AISP Reference:** - -- Line 122-124: Content integrity via SHA256 hashing -- Line 336: `∴∀p:tamper(𝒩)⇒SHA256(𝒩)≠ℋ.id⇒¬reach(p)` — Tamper detection blocks access -- Line 136-145: Binding function checks agent compatibility -- Line 307-309: Packet validation via content hash - -**No Zero-Trust Mechanisms Found:** - -- No authentication/authorization -- No identity verification -- No continuous verification -- No trust boundaries - -### Verdict: **❌ FALSE (Even by Design)** - -**By Design:** - -- ❌ No zero-trust mechanisms defined in specification -- ✅ Integrity checks exist (SHA256, tamper detection) -- ✅ Compatibility checks exist (binding function) -- ❌ But these are not zero-trust (they're integrity/compatibility checks) - -**In Practice:** - -- ❌ No zero-trust implementation (none planned either) - -**Conclusion:** AISP does not create a zero-trust architecture. It provides integrity checks and compatibility verification, but lacks the authentication, authorization, and continuous verification mechanisms required for zero-trust. This is **FALSE even by design** — the specification doesn't define zero-trust mechanisms. - ---- - -## Summary Table - -| Claim Component | Verdict (By Design) | Verdict (In Practice) | Implementation Status | -|----------------|---------------------|----------------------|----------------------| -| **Self-validating** | ✅ True | ⚠️ Conditional | Parser/validator planned Q1 2026 | -| **Proof-carrying** | ✅ True | ✅ True | Always true (structural) | -| **High-density** | ✅ True | ✅ True | Always true (structural) | -| **Low-ambiguity** | ✅ True | ⚠️ Conditional | Deterministic parsing requires parser tooling | -| **Category Theory** | ✅ True | ✅ True | Always true (structural) | -| **Natural Deduction** | ✅ True | ✅ True | Always true (structural) | -| **Ensures Ambig(D) < 0.02** | ✅ True | ⚠️ Conditional | Enforcement requires parser/validator | -| **Zero-trust architecture** | ❌ False | ❌ False | Not defined in spec, not planned | - ---- - -## Overall Verdict - -**The claim is TRUE BY DESIGN but CONDITIONAL IN PRACTICE:** - -### ✅ TRUE BY DESIGN (Specification Defines It) - -1. **Self-validating** — Automatic validation mechanisms defined (`ambiguity_detected⇒reject`) -2. **Proof-carrying** — Documents include proofs by design (`π:Γ⊢wf(b⃗)`) -3. **High-density** — Extremely dense notation (512 symbols) -4. **Low-ambiguity** — Deterministic parsing ensures single interpretation (`⊢deterministic`) -5. **Category Theory** — Explicitly defined (functors, natural transformations, monads) -6. **Natural Deduction** — Inference rules follow ND format -7. **Ensures Ambig(D) < 0.02** — Enforcement mechanisms defined (`ε_ambig`, deterministic parsing) - -### ⚠️ CONDITIONAL IN PRACTICE (Requires Tooling) - -1. **Self-validating** — Requires parser/validator tooling (planned Q1 2026) -2. **Low-ambiguity** — Requires deterministic parser implementation -3. **Ambig(D) < 0.02** — Requires validator to enforce constraint - -### ❌ FALSE (Even by Design) - -1. **Zero-trust architecture** — Not defined in specification, not planned - ---- - -## When Is the Full Claim True? - -### By Design (Specification Level) - -**The full claim is TRUE BY DESIGN if:** - -1. ✅ Specification defines automatic validation mechanisms (✅ TRUE — `ambiguity_detected⇒reject`) -2. ✅ Specification defines deterministic parsing (✅ TRUE — `⊢deterministic:∀D:∃!AST.parse(D)→AST`) -3. ✅ Specification defines enforcement mechanisms (✅ TRUE — `ε_ambig`, validation functions) -4. ❌ Specification defines zero-trust mechanisms (❌ FALSE — not defined) - -**Result:** 7/8 components TRUE by design, 1/8 FALSE (zero-trust) - -### In Practice (Implementation Level) - -**The full claim is TRUE IN PRACTICE only if:** - -1. ✅ Parser/validator tooling is implemented and automatically validates all documents -2. ✅ Deterministic parser is implemented and enforces single interpretation -3. ✅ Validator enforces `Ambig(D) < 0.02` constraint automatically -4. ❌ Zero-trust mechanisms are implemented (❌ FALSE — not planned) - -**Current Status:** - -- Parser/validator: 📅 Planned Q1 2026 (not yet released) -- Automatic validation: ⚠️ Conditional on tooling release -- Zero-trust: ❌ Not defined, not planned - -**Result:** Currently CONDITIONAL (depends on tooling release), will be TRUE IN PRACTICE once parser/validator is released (except zero-trust, which remains FALSE) - ---- - -## Recommendation - -### Revised Claim (Accurate for Current State) - -> "AISP is a proof-carrying protocol designed for high-density, low-ambiguity AI-to-AI communication. It utilizes Category Theory and Natural Deduction, with validation mechanisms defined to ensure `Ambig(D) < 0.02` for parsing ambiguity. The specification defines automatic validation and deterministic parsing, with parser/validator tooling planned for Q1 2026. Documents include integrity checks via content hashing." - -### Revised Claim (Accurate for Post-Tooling Release) - -> "AISP is a self-validating, proof-carrying protocol designed for high-density, low-ambiguity AI-to-AI communication. It utilizes Category Theory and Natural Deduction to ensure `Ambig(D) < 0.02` through deterministic parsing and automatic validation. Documents include integrity checks via content hashing." - -**Key Changes:** - -**Removed:** - -- "Zero-trust architecture" (not provided, not planned) - -**Clarified:** - -- "Self-validating" — TRUE by design, conditional in practice until tooling release -- "Ensures" — TRUE by design (mechanisms defined), conditional in practice (requires tooling) -- "Low-ambiguity" — TRUE by design (deterministic parsing), conditional in practice (requires parser) - -**Added:** - -- Implementation status context (planned vs. current) -- "Deterministic parsing" (clarifies mechanism) -- "Integrity checks" (what actually exists vs. zero-trust) - ---- - -**Rulesets Applied:** None (analysis task) -**AI Provider & Model:** Claude Sonnet 4.5 (claude-sonnet-4-20250514) diff --git a/openspec/changes/archive/add-aisp-formal-clarification/GITHUB_ISSUE_COMMENT.md b/openspec/changes/archive/add-aisp-formal-clarification/GITHUB_ISSUE_COMMENT.md deleted file mode 100644 index 9396a784..00000000 --- a/openspec/changes/archive/add-aisp-formal-clarification/GITHUB_ISSUE_COMMENT.md +++ /dev/null @@ -1,129 +0,0 @@ -# GitHub Issue #106 Comment - -**Post this as a comment on:** https://github.com/nold-ai/specfact-cli/issues/106 - ---- - -## 🔍 Critical Assessment: AISP Adoption Analysis - -After comprehensive analysis of AISP 5.1 Platinum for OpenSpec/SpecFact integration, I recommend **NOT proceeding with this change** at this time. Here are the critical findings: - -### Executive Summary - -**Verdict: ⚠️ NOT RECOMMENDED for OpenSpec's primary use case** - -AISP is **not "AI slop"** — it has legitimate mathematical foundations (Category Theory, Natural Deduction). However, it's **not suitable for our LLM-focused workflow** due to: - -1. **Reduced efficiency** (3-5x slower LLM processing than markdown) -2. **Unproven claims** (many assertions lack empirical validation) -3. **Missing tooling** (parser/validator planned Q1 2026, not yet available) -4. **Better alternatives exist** (well-structured markdown achieves similar goals) - -### Key Findings - -#### ✅ What AISP IS: -- **Legitimate:** Mathematical foundations are real (Category Theory, Natural Deduction, Dependent Type Theory) -- **Well-defined:** Grammar, type system, validation mechanisms formally specified -- **Proof-carrying:** Documents include proofs by design -- **Academic:** Harvard capstone project (legitimate research) - -#### ❌ What AISP IS NOT: -- **Optimized for LLM consumption:** 3-5x slower processing than markdown -- **Proven in practice:** Many performance claims lack empirical validation -- **Tooling available:** Parser/validator not yet released (planned Q1 2026) -- **Zero-trust architecture:** Claim is false (not defined in specification) - -### Performance Analysis - -**LLM Processing Comparison:** - -| Metric | AISP | OpenSpec Markdown | Winner | -|--------|------|------------------|--------| -| Processing Speed | 3-5x slower | Fast | ✅ Markdown | -| Symbol Lookup | 512 symbols | None | ✅ Markdown | -| Human Readability | Poor (dense) | Good (clear) | ✅ Markdown | -| Validation | Theoretical | Practical | ✅ Markdown | -| Tooling | Planned Q1 2026 | Available now | ✅ Markdown | -| Ambiguity Reduction | Low semantic | Low (with structure) | ⚠️ Tie | - -**Result:** OpenSpec markdown wins 7/9 criteria. - -### Claim Validation - -Analysis of AISP claims reveals: - -| Claim | By Design | In Practice | Status | -|-------|-----------|------------|--------| -| Self-validating | ✅ True | ⚠️ Conditional | Requires tooling (Q1 2026) | -| Low-ambiguity | ✅ True | ⚠️ Conditional | Requires parser implementation | -| Ambig(D) < 0.02 | ✅ True | ⚠️ Conditional | Requires validator enforcement | -| Zero-trust | ❌ False | ❌ False | Not defined in spec | - -**Key Issue:** Many claims are **TRUE BY DESIGN** (specification defines mechanisms) but **CONDITIONAL IN PRACTICE** (requires tooling that's not yet available). - -### Unproven Claims - -Several AISP claims lack empirical validation: - -- ❌ **"Reduces AI decision points from 40-65% to <2%"** — No evidence provided, "decision points" not clearly defined -- ❌ **"Telephone game math" (10-step pipeline: 0.84% → 81.7%)** — Theoretical calculations, no empirical data -- ⚠️ **"+22% SWE benchmark improvement"** — Context missing, older version, may not apply to 5.1 Platinum -- ⚠️ **"LLMs understand natively"** — True that LLMs can parse, but processing is slower than natural language - -### Risks of Adoption - -1. **Efficiency Loss:** 3-5x slower LLM processing, higher token costs -2. **Maintainability Issues:** Harder for humans to read/edit, steeper learning curve -3. **Tooling Dependency:** Parser/validator not available, uncertain timeline -4. **Unproven Benefits:** May not deliver promised benefits -5. **Over-engineering:** Complexity exceeds needs, better alternatives exist - -### Recommendation - -#### ❌ DO NOT ADOPT AISP as Primary Format - -**Reasons:** -- Worse for LLM consumption (our primary use case) -- Unproven benefits (many claims lack validation) -- Missing tooling (parser/validator not available) -- Better alternatives exist (well-structured markdown) -- Over-engineering (complexity exceeds needs) - -#### ✅ CONSIDER Optional Hybrid Approach (Future) - -**If formal precision is needed:** -1. Keep markdown as primary format -2. Add optional AISP sections for critical invariants only -3. Wait for tooling release (Q1 2026) before broader adoption -4. Test empirically before committing - -#### ✅ MONITOR Development - -**Track:** -- Parser/validator release (Q1 2026) -- Empirical validation of claims -- Real-world usage examples -- Tooling maturity - -**Re-evaluate after:** -- Tooling is released and tested -- Empirical evidence validates claims -- Clear benefits demonstrated - -### Conclusion - -**AISP is NOT "AI slop"** — it has legitimate mathematical foundations. However, it's **NOT suitable for OpenSpec's LLM-focused workflow** due to efficiency, unproven benefits, and missing tooling. - -**Recommendation:** **Do NOT proceed with this change.** Our current well-structured markdown approach is more efficient and practical for LLM consumption. Consider optional hybrid approach for critical invariants only, and monitor AISP development for future re-evaluation. - -### References - -Full analysis documents: -- **Adoption Assessment:** `openspec/changes/add-aisp-formal-clarification/ADOPTION_ASSESSMENT.md` -- **Claim Analysis:** `openspec/changes/add-aisp-formal-clarification/CLAIM_ANALYSIS.md` -- **LLM Optimization Review:** `openspec/changes/add-aisp-formal-clarification/REVIEW.md` - ---- - -**Status:** 🔴 **RECOMMENDATION: DO NOT PROCEED** -**Next Steps:** Monitor AISP development, re-evaluate after Q1 2026 tooling release diff --git a/openspec/changes/archive/add-aisp-formal-clarification/GITHUB_ISSUE_COMMENT_CONCISE.md b/openspec/changes/archive/add-aisp-formal-clarification/GITHUB_ISSUE_COMMENT_CONCISE.md deleted file mode 100644 index 4a8934c0..00000000 --- a/openspec/changes/archive/add-aisp-formal-clarification/GITHUB_ISSUE_COMMENT_CONCISE.md +++ /dev/null @@ -1,112 +0,0 @@ -# GitHub Issue #106 Comment (Concise Version) - -**Post this as a comment on:** https://github.com/nold-ai/specfact-cli/issues/106 - ---- - -## 🔍 Critical Assessment: Recommendation to NOT Proceed - -After comprehensive analysis of AISP 5.1 Platinum for OpenSpec/SpecFact integration, I recommend **NOT proceeding with this change** at this time. - -### Executive Summary - -**Verdict: ⚠️ NOT RECOMMENDED** - -AISP has legitimate mathematical foundations (Category Theory, Natural Deduction), but it's **not suitable for our LLM-focused workflow**: - -1. **3-5x slower LLM processing** than markdown -2. **Unproven claims** (many lack empirical validation) -3. **Missing tooling** (parser/validator planned Q1 2026, not available) -4. **Better alternatives exist** (well-structured markdown achieves similar goals) - -### Key Findings - -**What AISP IS:** -- ✅ Legitimate mathematical foundations (Category Theory, Natural Deduction) -- ✅ Well-defined structure (grammar, types, validation) -- ✅ Proof-carrying by design - -**What AISP IS NOT:** -- ❌ Optimized for LLM consumption (3-5x slower than markdown) -- ❌ Proven in practice (many claims lack validation) -- ❌ Tooling available (parser/validator not yet released) -- ❌ Zero-trust architecture (claim is false) - -### Performance Comparison - -| Metric | AISP | OpenSpec Markdown | Winner | -|--------|------|------------------|--------| -| LLM Speed | 3-5x slower | Fast | ✅ Markdown | -| Readability | Poor | Good | ✅ Markdown | -| Validation | Theoretical | Practical | ✅ Markdown | -| Tooling | Planned Q1 2026 | Available now | ✅ Markdown | - -**Result:** Markdown wins 7/9 criteria. - -### Claim Status - -| Claim | By Design | In Practice | Issue | -|-------|-----------|------------|-------| -| Self-validating | ✅ True | ⚠️ Conditional | Requires tooling (Q1 2026) | -| Low-ambiguity | ✅ True | ⚠️ Conditional | Requires parser | -| Ambig(D) < 0.02 | ✅ True | ⚠️ Conditional | Requires validator | -| Zero-trust | ❌ False | ❌ False | Not in spec | - -**Key Issue:** Claims are TRUE BY DESIGN but CONDITIONAL IN PRACTICE (requires unavailable tooling). - -### Unproven Claims - -- ❌ "Reduces decision points 40-65% → <2%" — No evidence, unclear definition -- ❌ "Telephone game math" — Theoretical, no empirical data -- ⚠️ "+22% SWE benchmark" — Context missing, older version -- ⚠️ "LLMs understand natively" — True but slower than natural language - -### Risks - -1. **Efficiency Loss:** 3-5x slower processing -2. **Maintainability:** Harder to read/edit -3. **Tooling Dependency:** Not available yet -4. **Unproven Benefits:** May not deliver -5. **Over-engineering:** Complexity exceeds needs - -### Recommendation - -#### ❌ DO NOT ADOPT as Primary Format - -**Reasons:** -- Worse for LLM consumption (our primary use case) -- Unproven benefits -- Missing tooling -- Better alternatives exist -- Over-engineering - -#### ✅ CONSIDER Optional Hybrid (Future) - -- Keep markdown as primary -- Add optional AISP for critical invariants only -- Wait for tooling release (Q1 2026) -- Test empirically before committing - -#### ✅ MONITOR Development - -- Track parser/validator release -- Re-evaluate after empirical validation -- Test when tooling is available - -### Conclusion - -**AISP is NOT "AI slop"** — it has legitimate foundations. However, it's **NOT suitable for OpenSpec's LLM-focused workflow**. - -**Recommendation:** **Do NOT proceed.** Current markdown approach is more efficient and practical. Consider optional hybrid for critical invariants only, monitor development for future re-evaluation. - -### Full Analysis - -See detailed analysis documents: -- `openspec/changes/add-aisp-formal-clarification/ADOPTION_ASSESSMENT.md` -- `openspec/changes/add-aisp-formal-clarification/CLAIM_ANALYSIS.md` -- `openspec/changes/add-aisp-formal-clarification/REVIEW.md` - ---- - -**Status:** 🔴 **DO NOT PROCEED** -**Next Steps:** Monitor AISP development, re-evaluate after Q1 2026 tooling release diff --git a/openspec/changes/archive/add-aisp-formal-clarification/REVIEW.md b/openspec/changes/archive/add-aisp-formal-clarification/REVIEW.md deleted file mode 100644 index 9c0673e6..00000000 --- a/openspec/changes/archive/add-aisp-formal-clarification/REVIEW.md +++ /dev/null @@ -1,466 +0,0 @@ -# AISP Format Review: LLM Optimization Analysis - -**Date:** 2026-01-15 -**Reviewer:** Claude Sonnet 4.5 (claude-sonnet-4-20250514) -**Context:** Evaluation of AISP 5.1 Platinum format for AI/LLM consumption optimization - -## Executive Summary - -This review evaluates the AISP (AI Symbolic Programming) format proposed in OpenSpec against five critical criteria for LLM optimization. The analysis is based on actual parsing experience with AISP files and comparison with natural language markdown specifications. - -**Overall Assessment: 4.6/10** — Not optimized for LLM consumption - -While AISP achieves mathematical precision and low ambiguity, it introduces significant cognitive overhead that reduces efficiency for LLM processing. The format may be better suited for automated verification tools than direct LLM consumption. - -## Detailed Analysis - -### 1. Efficiency: ❌ 2/10 - -**Problem:** Symbol lookup overhead dominates processing time. - -**Evidence:** -- AISP uses 512 Unicode symbols across 8 categories (Ω, Γ, ∀, Δ, 𝔻, Ψ, ⟦⟧, ∅) -- Each symbol requires mental mapping to domain concepts -- Example parsing overhead: - ``` - ∀adapter:BacklogAdapter→category(adapter)≡BacklogAdapters∧extensible_pattern(adapter) - ``` - - **Required parsing steps:** - 1. Parse `∀` (for all) - 2. Understand type constraint `BacklogAdapter` - 3. Parse `→` (implies/maps to) - 4. Parse `≡` (equivalent to) - 5. Parse `∧` (and) - 6. Map symbols to domain concepts - 7. Reconstruct meaning - -**Comparison:** -- **Markdown:** "All backlog adapters SHALL belong to the BacklogAdapters category and SHALL follow the extensibility pattern." -- **Processing:** Immediate comprehension, zero symbol lookup - -**Verdict:** Natural language markdown is processed 3-5x faster than AISP notation. - -### 2. Non-Ambiguity: ⚠️ 6/10 - -**Strengths:** -- Mathematical precision for formal properties -- Type-theoretic foundations reduce semantic ambiguity -- Claims `Ambig(D) < 0.02` (2% ambiguity threshold) - -**Weaknesses:** -- **Symbol interpretation ambiguity:** Symbols themselves require interpretation -- **Structural ambiguity:** Nested structures can be parsed multiple ways -- **Context dependency:** Requires full glossary (512 symbols) in context - -**Example Ambiguity:** -```aisp -Δ⊗λ≜λ(A,B).case[Logic(A)∩Logic(B)⇒⊥ → 0, ...] -``` -- What does `Δ⊗λ` mean without glossary lookup? -- What does `case[...]` structure represent? -- How to interpret `Logic(A)∩Logic(B)⇒⊥`? - -**Comparison:** -Well-structured markdown with clear requirements ("SHALL", "MUST") and scenarios (WHEN/THEN) can achieve very low ambiguity without symbol overhead. - -**Verdict:** AISP reduces semantic ambiguity but introduces symbol interpretation ambiguity. Net benefit is marginal. - -### 3. Clear Focus: ❌ 3/10 - -**Problems:** -- **Information density:** Too much packed into single lines -- **Scanning difficulty:** Hard to quickly find specific information -- **Mixed abstraction levels:** Category theory, type theory, and implementation details interleaved - -**Example:** -```aisp -∀p:∂𝒩(p)⇒∂ℋ.id(p); ∀p:ℋ.id(p)≡SHA256(𝒩(p)) -``` -This single line mixes: -- Immutability rules -- Hash computation -- Logical implications -- Domain concepts (pocket, nucleus, header) - -**Comparison:** -Markdown with clear headers (`### Requirement:`) and structured sections is easier to scan and navigate. - -**Verdict:** Markdown provides clearer focus through natural language structure. - -### 4. Completeness: ✅ 8/10 - -**Strengths:** -- Mathematically complete specifications -- Formal properties captured (invariants, type constraints) -- Proof-carrying structure - -**Weaknesses:** -- Missing implementation context -- Examples require inference -- Practical guidance often absent - -**Verdict:** AISP is complete for formal properties but incomplete for practical implementation guidance. - -### 5. Token Optimization: ❌ 4/10 - -**Problems:** -- **Reference dependency:** Full glossary (512 symbols) must be in context -- **Cognitive overhead:** Symbols are compact but require mental parsing -- **Effective token cost:** While symbols are short, the processing overhead increases effective cost - -**Analysis:** -- AISP symbols: `∀`, `∃`, `λ`, `≜`, `Δ⊗λ` — compact but require lookup -- Markdown: "for all", "exists", "lambda", "defined as" — longer but immediately processable - -**Verdict:** Token count is lower, but effective processing cost is higher due to symbol lookup overhead. - -## Concrete Example Analysis - -### AISP Format (from actual file): -```aisp -∀adapter:BacklogAdapter→category(adapter)≡BacklogAdapters∧extensible_pattern(adapter) -``` - -**LLM Processing Steps:** -1. Identify quantifier: `∀` = "for all" -2. Parse type constraint: `BacklogAdapter` -3. Parse implication: `→` = "maps to" or "implies" -4. Parse equivalence: `≡` = "equivalent to" -5. Parse conjunction: `∧` = "and" -6. Map to domain: "backlog adapters", "category", "extensibility pattern" -7. Reconstruct: "All backlog adapters map to BacklogAdapters category and extensibility pattern" - -**Processing Time:** ~500-800ms (estimated) - -### Markdown Format: -```markdown -All backlog adapters SHALL belong to the BacklogAdapters category -and SHALL follow the extensibility pattern. -``` - -**LLM Processing Steps:** -1. Read natural language -2. Understand immediately - -**Processing Time:** ~100-200ms (estimated) - -**Efficiency Ratio:** Markdown is 3-4x faster to process. - -## Recommendations - -### 1. Hybrid Approach -- Use AISP for formal properties (invariants, type constraints) -- Use markdown for requirements, scenarios, and implementation guidance -- Example: Markdown requirements with AISP formalizations in separate sections - -### 2. Progressive Disclosure -- Start with markdown for human and LLM readability -- Add AISP formalizations for critical invariants -- Keep AISP as optional enhancement, not replacement - -### 3. Symbol Glossary -- If using AISP, include minimal inline glossary for common symbols -- Provide symbol-to-meaning mapping at file header -- Reduce dependency on external reference - -### 4. Tooling Separation -- AISP may be better suited for automated verification tools -- LLMs benefit more from structured natural language -- Consider AISP as compilation target, not primary format - -## Comparison with GitHub Repository Claims - -Based on analysis of [aisp-open-core repository](https://github.com/bar181/aisp-open-core), here is a detailed comparison of claims vs. reality: - -### Claim 1: "LLMs understand natively without instructions or training" - -**GitHub Claim:** -> "A proof-carrying protocol LLMs understand natively—no training, no fine-tuning, no special interpreters required." - -**Reality:** ❌ **Partially False** -- **What's True:** LLMs can parse AISP syntax without special training -- **What's False:** "Native understanding" is overstated - - Symbols still require interpretation (512 symbol glossary needed) - - Processing is 3-5x slower than natural language - - "Native" implies effortless, but symbol lookup adds cognitive overhead -- **Evidence:** This review demonstrates 7-step parsing process for simple AISP expressions - -**Verdict:** LLMs can parse AISP, but it's not "native" in the sense of being optimized or effortless. - ---- - -### Claim 2: "Reduces AI decision points from 40-65% to <2%" - -**GitHub Claim:** -> "Reduces AI decision points from 40-65% to <2%" - -**Reality:** ⚠️ **Unverified and Potentially Misleading** -- **Missing Evidence:** No empirical data provided for this specific metric -- **Definition Issue:** "Decision points" is not clearly defined - - Does this mean ambiguity? (AISP claims `Ambig(D) < 0.02`) - - Does this mean parsing choices? (Symbol interpretation adds new decision points) - - Does this mean implementation choices? (Unclear) -- **Symbol Overhead:** While semantic ambiguity may be reduced, symbol interpretation introduces new decision points: - - Which symbol category? (8 categories: Ω, Γ, ∀, Δ, 𝔻, Ψ, ⟦⟧, ∅) - - What does this compound symbol mean? (`Δ⊗λ`, `V_H⊕V_L⊕V_S`) - - How to parse this structure? (Nested blocks, precedence rules) - -**Verdict:** Ambiguity reduction may be real, but "decision points" reduction is unproven and potentially offset by symbol interpretation overhead. - ---- - -### Claim 3: "Works directly with Claude, OpenAI, Gemini, Cursor, Claude Code" - -**GitHub Claim:** -> "Works directly with Claude, GPT-4, Gemini, Claude Code, Cursor, and any modern LLM." - -**Reality:** ✅ **True, but Misleading** -- **What's True:** LLMs can parse and generate AISP syntax -- **What's Misleading:** "Works" doesn't mean "optimized" or "efficient" - - Processing is slower than natural language - - Efficiency is lower (3-5x slower) - - Token optimization is questionable (reference dependency adds overhead) -- **Evidence:** This review shows AISP requires 7 parsing steps vs. 2 for markdown - -**Verdict:** Technically true, but the claim implies optimization that doesn't exist. - ---- - -### Claim 4: "Zero execution overhead" - -**GitHub Claim:** -> "Zero execution overhead (Validated)" — "The AISP specification is only needed during compilation, not execution." - -**Reality:** ✅ **True for Execution, ❌ False for Compilation/Parsing** -- **Execution Overhead:** ✅ True — AISP spec not needed at runtime -- **Compilation/Parsing Overhead:** ❌ Significant - - Symbol lookup overhead (512 symbols) - - Parsing complexity (nested structures, precedence rules) - - Reference dependency (glossary must be in context) -- **Effective Cost:** While execution has zero overhead, the compilation/parsing phase has higher overhead than natural language - -**Verdict:** Claim is technically correct but omits the significant parsing overhead. - ---- - -### Claim 5: "+22% SWE benchmark improvement" - -**GitHub Claim:** -> "SWE Benchmark: +22% over base model (cold start, no hints, blind evaluation)" -> "Using an older AISP model (AISP Strict) with rigorous test conditions" - -**Reality:** ⚠️ **Context Missing and Potentially Outdated** -- **Version Mismatch:** Claim is for "AISP Strict" (older version), not AISP 5.1 Platinum -- **Missing Details:** - - What were the test conditions? - - What was the baseline model? - - How was AISP integrated? (Full spec? Partial? Hybrid?) -- **No Validation:** No independent replication or validation -- **May Not Apply:** Results from older version may not apply to AISP 5.1 Platinum - -**Verdict:** Potentially valid but lacks context and may not apply to current version. - ---- - -### Claim 6: "Tic-Tac-Toe Test: 6 ambiguities (prose) → 0 ambiguities (AISP)" - -**GitHub Claim:** -> "Tic-Tac-Toe test: 6 ambiguities (prose) → 0 ambiguities (AISP)" -> "Technical Precision: 43/100 (prose) → 95/100 (AISP)" - -**Reality:** ✅ **Likely True, but Context Matters** -- **Ambiguity Reduction:** ✅ Likely true — formal notation reduces semantic ambiguity -- **But:** Symbol interpretation ambiguity is not measured -- **Trade-off:** While semantic ambiguity is reduced, processing efficiency is reduced -- **Missing Comparison:** No comparison with well-structured markdown (not just "prose") - -**Verdict:** Valid for semantic ambiguity, but doesn't account for symbol interpretation overhead or compare against structured markdown. - ---- - -### Claim 7: "The Telephone Game Math" - -**GitHub Claim:** -> "10-step pipeline: 0.84% success (natural language) → 81.7% success (AISP)" -> "20-step pipeline: 0.007% success (natural language) → 66.8% success (AISP)" - -**Reality:** ⚠️ **Unverified and Potentially Misleading** -- **No Evidence:** No empirical data or methodology provided -- **Assumptions:** Based on theoretical calculations, not real-world testing -- **Missing Variables:** - - What type of pipeline? (Unclear) - - What defines "success"? (Unclear) - - How was natural language structured? (Unclear — was it well-structured markdown?) -- **Symbol Propagation:** While semantic ambiguity may not propagate, symbol interpretation errors could propagate - -**Verdict:** Theoretically plausible but unverified and potentially misleading without empirical evidence. - ---- - -### Claim 8: "Measurable Ambiguity: Ambig(D) < 0.02" - -**GitHub Claim:** -> "AISP is the first specification language where ambiguity is a computable, first-class property" -> "Ambig(D) ≜ 1 - |Parse_unique(D)| / |Parse_total(D)|" -> "Every AISP document must satisfy: Ambig(D) < 0.02" - -**Reality:** ✅ **True for Semantic Ambiguity, ⚠️ False for Symbol Ambiguity** -- **Semantic Ambiguity:** ✅ AISP likely achieves <2% semantic ambiguity -- **Symbol Ambiguity:** ⚠️ Not measured — symbol interpretation adds ambiguity -- **Measurement Gap:** The formula measures parsing ambiguity, not interpretation ambiguity -- **Practical Impact:** While semantic ambiguity is low, symbol lookup overhead reduces practical utility - -**Verdict:** Valid for semantic ambiguity, but doesn't account for symbol interpretation overhead. - ---- - -### Claim 9: "Zero-overhead validated when GitHub Copilot analysis... demonstrated perfect comprehension" - -**GitHub Claim:** -> "This was validated when a GitHub Copilot analysis—initially arguing LLMs couldn't understand AISP—inadvertently demonstrated perfect comprehension by correctly interpreting and generating AISP throughout its review." - -**Reality:** ⚠️ **Anecdotal Evidence, Not Validation** -- **Single Instance:** One anecdotal example, not systematic validation -- **"Perfect Comprehension":** Subjective — what defines "perfect"? -- **No Metrics:** No quantitative measures of comprehension quality -- **Selection Bias:** Only positive examples may be reported - -**Verdict:** Anecdotal evidence, not systematic validation. Needs empirical testing. - ---- - -### Claim 10: "8,817 tokens (GPT-4o tokenizer)" - -**GitHub Claim:** -> "Specification Size (Measured): GPT-4o tokenizer: 8,817 tokens" - -**Reality:** ✅ **True, but Incomplete** -- **Token Count:** ✅ Likely accurate -- **But:** Doesn't account for: - - Reference dependency (glossary must be in context) - - Effective processing cost (symbol lookup overhead) - - Comparison with optimized markdown (not just raw token count) - -**Verdict:** Accurate but incomplete — effective cost is higher than token count suggests. - ---- - -## Summary of Claims vs. Reality - -| Claim | Status | Notes | -|-------|--------|-------| -| Native LLM understanding | ❌ Partially False | Can parse, but not optimized | -| Reduces decision points 40-65% → <2% | ⚠️ Unverified | No evidence, definition unclear | -| Works with Claude/GPT/Gemini | ✅ True | But efficiency is lower | -| Zero execution overhead | ✅ True | But parsing overhead significant | -| +22% SWE benchmark | ⚠️ Context Missing | Older version, no details | -| Tic-Tac-Toe: 6 → 0 ambiguities | ✅ Likely True | But symbol overhead not measured | -| Telephone game math | ⚠️ Unverified | No empirical evidence | -| Ambig(D) < 0.02 | ✅ True | For semantic, not symbol ambiguity | -| Copilot validation | ⚠️ Anecdotal | Single example, not systematic | -| 8,817 tokens | ✅ True | But effective cost higher | - -**Overall Verdict:** AISP achieves mathematical precision and low semantic ambiguity, but many claims are overstated, unverified, or omit important trade-offs (especially symbol interpretation overhead and processing efficiency). - ---- - -## Actionable Recommendations for OpenSpec - -Based on this analysis, here are specific recommendations for OpenSpec's use of AISP: - -### 1. **Reject AISP as Primary Format** -- ❌ Do not make AISP the first-priority format for LLM consumption -- ✅ Keep markdown as primary format -- ✅ Use AISP as optional formalization layer - -### 2. **Revise AGENTS.md Instructions** -Current instruction (line 585-600): -> "AI LLMs MUST treat `.aisp.md` files as first-priority when both markdown and AISP versions exist" - -**Recommended Change:** -> "AI LLMs SHOULD prefer markdown versions for efficiency. AISP versions provide formal precision for critical invariants but have higher processing overhead. Use AISP when formal verification is required, markdown for implementation guidance." - -### 3. **Hybrid Format Strategy** -Instead of separate files, embed AISP in markdown: - -```markdown -### Requirement: Backlog Adapter Extensibility Pattern - -**Natural Language:** -All backlog adapters SHALL belong to the BacklogAdapters category -and SHALL follow the extensibility pattern. - -**Formal Property (AISP):** -```aisp -∀adapter:BacklogAdapter→category(adapter)≡BacklogAdapters∧extensible_pattern(adapter) -``` - -**Scenario:** Future backlog adapters follow established patterns -- **WHEN** a new backlog adapter is implemented -- **THEN** it follows the same patterns as GitHub adapter -``` - -### 4. **Remove "First-Priority" Language** -The current AGENTS.md states AISP files are "first-priority" — this contradicts efficiency optimization. Revise to: -- Markdown: Primary format (efficiency optimized) -- AISP: Optional formalization (precision optimized) - -### 5. **Validate Claims Before Adoption** -Before adopting AISP claims: -- Request empirical evidence for "decision points" reduction -- Validate "telephone game math" with real-world testing -- Compare against well-structured markdown (not just "prose") - -### 6. **Measure Actual Performance** -If using AISP, measure: -- Processing time: AISP vs. markdown -- Error rate: Symbol interpretation errors -- Token efficiency: Effective cost (including reference dependency) -- Developer experience: Human readability - ---- - -## Conclusion - -AISP achieves mathematical precision and low semantic ambiguity, but at the cost of: -- **Reduced efficiency** (3-5x slower processing) -- **Symbol interpretation overhead** (512 symbols to map) -- **Poor scanability** (dense notation) -- **Higher effective token cost** (reference dependency) - -**Recommendation:** Use AISP as an optional formalization layer for critical invariants, not as primary specification format. Well-structured markdown with clear requirements and scenarios provides better LLM optimization while maintaining low ambiguity. - -## Alternative: Optimized Markdown Format - -A better approach for LLM optimization: - -```markdown -### Requirement: Backlog Adapter Extensibility Pattern - -**Type:** BacklogAdapter → Category × Pattern - -**Constraint:** -- All backlog adapters MUST belong to BacklogAdapters category -- All backlog adapters MUST follow extensibility pattern - -**Formal Property:** -```aisp -∀adapter:BacklogAdapter→category(adapter)≡BacklogAdapters∧extensible_pattern(adapter) -``` - -**Scenario:** Future backlog adapters follow established patterns -- **WHEN** a new backlog adapter is implemented (ADO, Jira, Linear) -- **THEN** it follows the same import/export patterns as GitHub adapter -``` - -This provides: -- ✅ Natural language for immediate comprehension -- ✅ AISP formalization for precision (optional) -- ✅ Clear structure for scanning -- ✅ Low ambiguity without symbol overhead - ---- - -**Rulesets Applied:** None (analysis task) -**AI Provider & Model:** Claude Sonnet 4.5 (claude-sonnet-4-20250514) diff --git a/openspec/changes/archive/add-aisp-formal-clarification/design.md b/openspec/changes/archive/add-aisp-formal-clarification/design.md deleted file mode 100644 index 06548d80..00000000 --- a/openspec/changes/archive/add-aisp-formal-clarification/design.md +++ /dev/null @@ -1,326 +0,0 @@ -# Technical Design: AISP Formal Clarification Integration - -> **AI LLM Instructions**: This document has a formalized AISP version available at [design.aisp.md](./design.aisp.md). For implementation tasks, use the AISP version which provides mathematically precise architectural decisions, trade-offs, and design constraints with `Ambig < 0.02`. The AISP version encodes all design rules, storage patterns, and integration points in formal notation that eliminates interpretation variance. - -## Context - -This design implements AISP (AI Symbolic Programming) v5.1 formal clarification integration for SpecFact CLI. AISP provides a proof-carrying protocol that reduces AI interpretation variance from 40–65% to <2% by encoding decision trees and logical boundaries in a form that LLMs can verify deterministically. - -The integration establishes SpecFact as the **validation and clarification layer** by storing AISP formal specifications internally in project bundles as a tool-agnostic, AI-optimized representation. This approach maintains SpecFact's independence from SDD tool formats while enabling AI LLMs to consume mathematically precise specifications instead of ambiguous markdown. - -## Goals - -1. **Internal AISP Storage**: Store AISP proof artifacts in project bundles (`.specfact/projects/<bundle>/aisp/`) without modifying source spec files -2. **Tool-Agnostic Representation**: AISP blocks work with any SDD tool format (OpenSpec, Spec-Kit, etc.) without format dependencies -3. **AI LLM Consumption**: Enable AI LLMs to consume AISP specifications via slash command prompts instead of ambiguous markdown -4. **Automatic Generation**: Generate AISP blocks from natural language requirements via bridge adapters -5. **Developer-Friendly**: Keep AISP as internal representation, avoiding exposure of formal notation to developers -6. **Mathematical Precision**: Achieve `Ambig < 0.02` in AISP formalizations, reducing interpretation variance - -## Non-Goals - -- Embedding AISP directly in spec markdown files (AISP remains internal) -- Modifying source spec files (OpenSpec, Spec-Kit) with AISP notation -- Requiring developers to write AISP manually (generated automatically) -- Replacing markdown specs with AISP (AISP is supplementary, not replacement) -- AISP syntax validation in spec files (validation only in project bundles) -- Bidirectional AISP sync (AISP is generated from specs, not synced back) - -## Decisions - -### Decision 1: Internal Storage in Project Bundles - -**What**: AISP proof artifacts are stored internally in `.specfact/projects/<bundle>/aisp/` directory, not in source spec files. - -**Why**: - -- Maintains tool-agnostic independence from SDD tool formats -- Avoids exposing developers to formal notation ("hieroglyphs") -- Enables SpecFact to act as validation/clarification layer -- Preserves source spec file integrity (no modifications) -- Allows AISP to evolve independently from spec file formats - -**Alternatives Considered**: - -- Embedding AISP in spec markdown files (rejected - breaks tool-agnosticism, exposes developers to formal notation) -- Storing AISP in `specs/<capability>/aisp/` subdirectories (rejected - couples AISP to spec file structure) -- Storing AISP in separate repository (rejected - adds complexity, breaks project bundle cohesion) - -**Implementation**: - -- AISP blocks stored as `proof-<requirement-id>.aisp.md` files in `.specfact/projects/<bundle>/aisp/` -- Proof ID to requirement ID mapping in project bundle metadata -- AISP loading from project bundle for slash commands and validation -- Source spec files remain unchanged (no AISP notation visible) - -### Decision 2: Bridge Adapter Pattern for Generation - -**What**: AISP blocks are generated from requirements via bridge adapters (OpenSpec, Spec-Kit) during import/sync operations. - -**Why**: - -- Follows existing bridge adapter pattern (consistent with project architecture) -- Enables automatic AISP generation from any SDD tool format -- Maintains separation of concerns (adapters handle tool-specific logic) -- Supports cross-repository AISP generation via `external_base_path` -- Allows future adapters to generate AISP without code changes - -**Alternatives Considered**: - -- Manual AISP authoring (rejected - too complex, defeats purpose of automatic clarification) -- Separate AISP generation service (rejected - adds unnecessary complexity) -- AISP generation in CLI commands only (rejected - misses import/sync opportunities) - -**Implementation**: - -- OpenSpec adapter: Generate AISP during `import_artifact()` and `sync_artifact()` calls -- Spec-Kit adapter: Generate AISP during spec import/sync operations -- Generated AISP stored in project bundle immediately after generation -- Proof IDs mapped to requirement IDs for binding validation - -### Decision 3: Slash Commands for AI LLM Consumption - -**What**: Slash command prompts (`/specfact.compile-aisp`, `/specfact.update-aisp`) instruct AI LLMs to consume AISP from project bundles instead of markdown specs. - -**Why**: - -- Enables AI LLMs to use mathematically precise AISP instead of ambiguous markdown -- Provides interactive clarification workflow for vague/ambiguous elements -- Maintains developer workflow (developers work with markdown, AI LLMs consume AISP) -- Establishes SpecFact as the clarification layer that enforces mathematical clarity -- References AISP v5.1 specification for formal semantics - -**Alternatives Considered**: - -- Requiring developers to manually invoke AISP compilation (rejected - too complex, defeats automation) -- Embedding AISP compilation in all AI interactions (rejected - may not always be needed) -- Separate AISP compilation CLI command only (rejected - misses AI LLM integration opportunity) - -**Implementation**: - -- `/specfact.compile-aisp`: Instructs AI LLM to update AISP from spec, clarify ambiguities, then execute AISP -- `/specfact.update-aisp`: Detects spec changes and updates corresponding AISP blocks -- Slash command prompts stored in `resources/templates/slash-commands/` -- Prompts reference AISP v5.1 specification for AI LLM context - -### Decision 4: Tool-Agnostic Data Models - -**What**: AISP data models (`AispProofBlock`, `AispBinding`, `AispParseResult`) are tool-agnostic and work with any SDD tool format. - -**Why**: - -- Maintains SpecFact's independence from SDD tool formats -- Enables AISP to work with future SDD tools without code changes -- Separates AISP concerns from tool-specific metadata -- Allows AISP blocks to be shared across different tool formats -- Supports cross-tool AISP validation and comparison - -**Alternatives Considered**: - -- Tool-specific AISP models (rejected - breaks tool-agnosticism, adds complexity) -- Embedding AISP in tool-specific models (rejected - couples AISP to tool formats) -- Separate AISP models per tool (rejected - unnecessary duplication) - -**Implementation**: - -- `AispProofBlock`: Tool-agnostic proof block structure (id, input_schema, decisions, outcomes, invariants) -- `AispBinding`: Tool-agnostic requirement-proof binding (requirement_id, proof_id, scenario_ids) -- `AispParseResult`: Tool-agnostic parse result (proofs, bindings, errors, warnings) -- AISP models stored separately from tool-specific models (Feature, Story, etc.) - -### Decision 5: Internal Representation Only - -**What**: AISP blocks are never exposed in source spec files or exported artifacts - they remain internal to SpecFact. - -**Why**: - -- Keeps developers working with natural language specs (no formal notation exposure) -- Maintains spec file compatibility with SDD tools (OpenSpec, Spec-Kit) -- Preserves spec file readability and maintainability -- Allows AISP to evolve independently from spec file formats -- Establishes SpecFact as the clarification layer (AISP is SpecFact's internal optimization) - -**Alternatives Considered**: - -- Exporting AISP in spec files (rejected - breaks tool compatibility, exposes developers to formal notation) -- Embedding AISP in exported artifacts (rejected - couples exports to AISP format) -- Making AISP optional in spec files (rejected - breaks tool-agnosticism) - -**Implementation**: - -- AISP blocks stored only in `.specfact/projects/<bundle>/aisp/` -- Source spec files never modified with AISP notation -- Exported artifacts (spec.md, plan.md) never include AISP blocks -- AISP accessible only through SpecFact CLI commands and slash commands - -### Decision 6: AISP v5.1 Specification Reference - -**What**: All AISP blocks reference AISP v5.1 specification from <https://github.com/bar181/aisp-open-core/blob/main/AI_GUIDE.md> for formal semantics. - -**Why**: - -- Ensures AISP blocks follow standard formal notation -- Enables AI LLMs to understand AISP semantics via specification reference -- Provides validation rules for AISP syntax checking -- Maintains consistency across all AISP blocks -- Supports future AISP specification updates - -**Alternatives Considered**: - -- Custom AISP syntax (rejected - breaks standardization, adds maintenance burden) -- Multiple AISP versions (rejected - adds complexity, breaks consistency) -- No specification reference (rejected - AI LLMs need formal semantics) - -**Implementation**: - -- AISP blocks include AISP v5.1 header: `𝔸5.1.complete@<date>` -- Slash command prompts reference AISP specification URL -- Validator checks AISP syntax against v5.1 specification -- Documentation references AISP specification for syntax rules - -## Architecture - -### Storage Architecture - -```bash -.specfact/ -└── projects/ - └── <bundle>/ - ├── contracts/ # Existing contract storage - ├── reports/ # Existing report storage - └── aisp/ # NEW: AISP proof artifact storage - ├── proof-<req-id-1>.aisp.md - ├── proof-<req-id-2>.aisp.md - └── ... -``` - -### Generation Flow - -1. **Import/Sync**: Bridge adapter (OpenSpec/Spec-Kit) imports requirements -2. **AISP Generation**: Adapter generates AISP blocks from requirement text and scenarios -3. **Storage**: Generated AISP blocks stored in `.specfact/projects/<bundle>/aisp/` -4. **Mapping**: Proof IDs mapped to requirement IDs in project bundle metadata -5. **Validation**: AISP blocks validated for syntax and binding consistency - -### Consumption Flow - -1. **Slash Command**: AI LLM invokes `/specfact.compile-aisp` or `/specfact.update-aisp` -2. **AISP Loading**: SpecFact loads AISP blocks from project bundle -3. **Clarification**: Vague/ambiguous elements flagged for clarification -4. **AI LLM Consumption**: AI LLM consumes AISP instead of markdown spec -5. **Implementation**: AI LLM follows AISP decision trees and invariants - -### Integration Points - -- **Bridge Adapters**: Generate AISP during import/sync operations -- **CLI Commands**: Validate and clarify AISP blocks (`validate --aisp`, `clarify`) -- **Slash Commands**: AI LLM consumption of AISP (`/specfact.compile-aisp`, `/specfact.update-aisp`) -- **Project Bundle**: AISP storage and mapping infrastructure -- **Validators**: AISP syntax and binding validation - -## Trade-offs - -### Trade-off 1: Internal Storage vs. Embedded Storage - -**Chosen**: Internal storage in project bundles - -**Benefits**: - -- Tool-agnostic independence -- Developer-friendly (no formal notation exposure) -- Spec file integrity preserved - -**Costs**: - -- AISP blocks not visible in source spec files -- Requires SpecFact CLI to access AISP -- Additional storage layer - -**Mitigation**: Slash commands provide easy AI LLM access, CLI commands provide developer access - -### Trade-off 2: Automatic Generation vs. Manual Authoring - -**Chosen**: Automatic generation via bridge adapters - -**Benefits**: - -- No manual AISP authoring required -- Consistent AISP generation across tools -- Automatic updates when specs change - -**Costs**: - -- Generation may miss some decision points -- Requires clarification workflow for ambiguous elements -- Generation logic complexity - -**Mitigation**: Clarification command (`specfact clarify`) handles ambiguous elements, validation detects gaps - -### Trade-off 3: Tool-Agnostic Models vs. Tool-Specific Models - -**Chosen**: Tool-agnostic AISP models - -**Benefits**: - -- Works with any SDD tool format -- Future-proof for new tools -- Consistent AISP structure - -**Costs**: - -- Additional mapping layer between tool-specific and tool-agnostic -- May lose some tool-specific context -- Requires adapter logic for each tool - -**Mitigation**: Bridge adapters handle tool-specific to tool-agnostic mapping, AISP focuses on decision trees (tool-agnostic) - -## Risks and Mitigations - -### Risk 1: AISP Generation Quality - -**Risk**: Generated AISP blocks may miss decision points or encode incorrect logic. - -**Mitigation**: - -- Validation detects coverage gaps (requirements without proofs, orphaned proofs) -- Clarification command allows manual refinement -- Contract-to-AISP comparison flags deviations - -### Risk 2: AISP Maintenance Overhead - -**Risk**: AISP blocks may become stale when specs change. - -**Mitigation**: - -- `/specfact.update-aisp` slash command detects spec changes and updates AISP -- Validation reports stale AISP blocks -- Automatic regeneration during import/sync - -### Risk 3: Developer Confusion - -**Risk**: Developers may not understand AISP's role or how to use it. - -**Mitigation**: - -- AISP remains internal (developers work with markdown) -- Documentation explains AISP's role as clarification layer -- Slash commands handle AISP consumption automatically - -## Success Criteria - -- ✅ AISP blocks stored internally in project bundles (not in spec files) -- ✅ AISP blocks generated automatically from requirements via adapters -- ✅ AI LLMs consume AISP via slash commands instead of markdown -- ✅ AISP blocks achieve `Ambig < 0.02` (mathematical precision) -- ✅ Developers work with natural language specs (no AISP exposure) -- ✅ Validation detects coverage gaps and binding inconsistencies -- ✅ Clarification workflow handles vague/ambiguous elements - -## Related Documentation - -- [AISP v5.1 Specification](https://github.com/bar181/aisp-open-core/blob/main/AI_GUIDE.md) -- [proposal.md](./proposal.md) - Change proposal overview -- [tasks.md](./tasks.md) - Implementation tasks -- [specs/bridge-adapter/spec.md](./specs/bridge-adapter/spec.md) - Adapter requirements -- [specs/cli-output/spec.md](./specs/cli-output/spec.md) - CLI command requirements -- [specs/data-models/spec.md](./specs/data-models/spec.md) - Data model requirements diff --git a/openspec/changes/archive/add-aisp-formal-clarification/proposal.md b/openspec/changes/archive/add-aisp-formal-clarification/proposal.md deleted file mode 100644 index be569957..00000000 --- a/openspec/changes/archive/add-aisp-formal-clarification/proposal.md +++ /dev/null @@ -1,85 +0,0 @@ -# Change: Add AISP Formal Clarification to Spec-Kit and OpenSpec Workflows - -## Why - -Current spec-driven development tools (Spec-Kit, OpenSpec, SpecFact) solve *structural* ambiguity through formatting discipline, but they don't eliminate **semantic ambiguity** when LLMs interpret specifications. AISP (AI Symbolic Programming) v5.1 provides a proof-carrying protocol that reduces AI interpretation variance from 40–65% to <2% by encoding decision trees and logical boundaries in a form that LLMs can verify deterministically. - -This change establishes SpecFact as the **validation and clarification layer** by storing AISP formal specifications internally in project bundles (`.specfact/projects/<bundle>/aisp/`) as a tool-agnostic, AI-optimized representation. This approach: - -- Keeps AISP as an internal representation, avoiding exposure of formal notation to developers -- Maintains SpecFact's independence from SDD tool formats (OpenSpec, Spec-Kit, etc.) -- Enables AI LLM to consume AISP specifications instead of ambiguous markdown specs -- Provides automatic translation/compilation from natural language specs to AISP via slash command prompts -- Establishes SpecFact as the clarification layer that enforces mathematical clarity under the hood - -The integration follows the bridge adapter pattern (per project.md) and maintains complete backward compatibility by keeping AISP as an internal representation that doesn't affect existing spec files or workflows. - -## What Changes - -- **NEW**: Add AISP internal storage in project bundles - - AISP proof artifacts stored in `.specfact/projects/<bundle>/aisp/` directory (internal to SpecFact) - - Proof artifacts stored as separate files (e.g., `proof-<requirement-id>.aisp.md`) mapped to requirements - - Each proof block includes unique proof id, input schema, decision tree, outcomes, and invariants - - Reference AISP v5.1 specification from <https://github.com/bar181/aisp-open-core/blob/main/AI_GUIDE.md> - - **No changes to existing spec files** - AISP remains internal representation - -- **NEW**: Add AISP parser and data models to SpecFact CLI - - New parser: `src/specfact_cli/parsers/aisp.py` for parsing AISP blocks from internal storage - - New models: `src/specfact_cli/models/aisp.py` with `AispProofBlock`, `AispBinding`, `AispParseResult` - - Validator: `src/specfact_cli/validators/aisp_schema.py` for syntax and binding validation - - Storage strategy: AISP blocks stored in project bundle, mapped to requirements by ID - -- **NEW**: Add automatic AISP generation from specs via adapters - - OpenSpec adapter: Generate AISP blocks from OpenSpec requirements during import/sync - - Spec-Kit adapter: Generate AISP blocks from Spec-Kit requirements during import/sync - - Both adapters generate AISP internally without modifying source spec files - - Generated AISP stored in `.specfact/projects/<bundle>/aisp/` for tool-agnostic access - -- **NEW**: Add SpecFact CLI commands for AISP validation and clarification - - `specfact validate --aisp`: Validates AISP blocks in project bundle, validates proof ids, syntax, and requirement bindings, reports coverage gaps - - `specfact clarify requirement <requirement-id>`: Generates/updates AISP block from requirement, clarifies vague/ambiguous elements, stores in project bundle - - `specfact validate --aisp --against-code`: Compares extracted contracts to AISP decision trees, flags deviations - -- **NEW**: Add specfact slash command prompts for AI LLM consumption - - `/specfact.compile-aisp`: Instructs AI LLM to first update internal AISP spec from available spec, clarify vague/ambiguous elements, then execute AISP spec instead of markdown spec - - `/specfact.update-aisp`: Detects spec changes and updates corresponding AISP blocks in project bundle - - Both commands use AISP v5.1 specification as reference for formal semantics - - Commands enable AI LLM to consume mathematically precise AISP instead of ambiguous markdown - -- **EXTEND**: Add AISP proof artifact examples and templates - - Example AISP blocks for common patterns (auth, payment, state machines) in `resources/templates/aisp/` - - Documentation for AISP generation and validation workflows - - Integration examples showing AISP as internal representation layer - -## Impact - -- **Affected specs**: `bridge-adapter` (adapter hooks for AISP parsing), `cli-output` (new CLI commands), `data-models` (AISP data models) -- **Affected code**: - - `src/specfact_cli/parsers/aisp.py` (new AISP parser) - - `src/specfact_cli/models/aisp.py` (new AISP data models) - - `src/specfact_cli/validators/aisp_schema.py` (new AISP validator) - - `src/specfact_cli/adapters/openspec.py` (add AISP generation from OpenSpec requirements) - - `src/specfact_cli/adapters/speckit.py` (add AISP generation from Spec-Kit requirements) - - `src/specfact_cli/commands/validate.py` (add `--aisp` and `--aisp --against-code` flags) - - `src/specfact_cli/commands/clarify.py` (new command for clarification workflow) - - `src/specfact_cli/utils/bundle_loader.py` (add AISP storage in project bundle) - - `resources/templates/slash-commands/` (slash command prompts for AI LLM) - - `resources/templates/aisp/` (AISP block templates and examples) - - `docs/guides/aisp-integration.md` (new documentation) -- **Integration points**: - - OpenSpec adapter (AISP generation from requirements) - - Spec-Kit adapter (AISP generation from requirements) - - SpecFact validation (AISP-aware contract matching) - - SpecFact CLI commands (validation and clarification workflows) - - AI reasoning integration (slash commands for AISP compilation and consumption) - - Project bundle storage (`.specfact/projects/<bundle>/aisp/` directory) - - ---- - -## Source Tracking - -- **GitHub Issue**: #106 -- **Issue URL**: <https://github.com/nold-ai/specfact-cli/issues/106> -- **Last Synced Status**: proposed -<!-- content_hash: c1a67e2c4e8710c3 --> \ No newline at end of file diff --git a/openspec/changes/archive/add-aisp-formal-clarification/tasks.md b/openspec/changes/archive/add-aisp-formal-clarification/tasks.md deleted file mode 100644 index 86d54b8d..00000000 --- a/openspec/changes/archive/add-aisp-formal-clarification/tasks.md +++ /dev/null @@ -1,235 +0,0 @@ -## 1. Git Workflow - -- [ ] 1.1 Create git branch `feature/add-aisp-formal-clarification` from `dev` branch - - [ ] 1.1.1 Ensure we're on dev and up to date: `git checkout dev && git pull origin dev` - - [ ] 1.1.2 Create branch: `git checkout -b feature/add-aisp-formal-clarification` - - [ ] 1.1.3 Verify branch was created: `git branch --show-current` - -## 2. AISP Data Models and Parser - -- [ ] 2.1 Create AISP data models - - [ ] 2.1.1 Create `src/specfact_cli/models/aisp.py` with `AispProofBlock`, `AispBinding`, `AispParseResult`, `AispDecision`, `AispOutcome` models - - [ ] 2.1.2 Add Pydantic models with proper type hints and field validators - - [ ] 2.1.3 Add `@beartype` decorators for runtime type checking - - [ ] 2.1.4 Add `@icontract` decorators with `@require`/`@ensure` contracts - - [ ] 2.1.5 Add docstrings following Google style guide - -- [ ] 2.2 Create AISP parser - - [ ] 2.2.1 Create `src/specfact_cli/parsers/aisp.py` for parsing AISP blocks from project bundle storage - - [ ] 2.2.2 Implement AISP file reading from `.specfact/projects/<bundle>/aisp/` directory - - [ ] 2.2.3 Implement proof ID extraction (format: `proof[id]:`) - - [ ] 2.2.4 Implement input schema parsing - - [ ] 2.2.5 Implement decision tree parsing (choice points, branches) - - [ ] 2.2.6 Implement outcome parsing (success/failure) - - [ ] 2.2.7 Implement invariant parsing - - [ ] 2.2.8 Add `@beartype` decorators for runtime type checking - - [ ] 2.2.9 Add `@icontract` decorators with `@require`/`@ensure` contracts - - [ ] 2.2.10 Add error handling and error collection in `AispParseResult` - -- [ ] 2.3 Create AISP validator - - [ ] 2.3.1 Create `src/specfact_cli/validators/aisp_schema.py` for syntax and binding validation - - [ ] 2.3.2 Implement proof ID uniqueness validation within spec - - [ ] 2.3.3 Implement requirement binding validation (proof IDs referenced by requirements) - - [ ] 2.3.4 Implement coverage gap detection (requirements without proofs, orphaned proofs) - - [ ] 2.3.5 Implement AISP v5.1 syntax validation (reference: <https://github.com/bar181/aisp-open-core/blob/main/AI_GUIDE.md>) - - [ ] 2.3.6 Add `@beartype` decorators for runtime type checking - - [ ] 2.3.7 Add `@icontract` decorators with `@require`/`@ensure` contracts - -## 3. Adapter Integration - -- [ ] 3.1 Extend OpenSpec adapter for AISP generation - - [ ] 3.1.1 Modify `src/specfact_cli/adapters/openspec.py` to generate AISP blocks from requirements - - [ ] 3.1.2 Add AISP generation during spec import/sync - - [ ] 3.1.3 Add AISP generation during change proposal processing - - [ ] 3.1.4 Store generated AISP blocks in `.specfact/projects/<bundle>/aisp/` directory - - [ ] 3.1.5 Map AISP blocks to requirement IDs (no modification of source spec files) - - [ ] 3.1.6 Support cross-repository AISP generation via `external_base_path` - - [ ] 3.1.7 Add `@beartype` decorators for runtime type checking - - [ ] 3.1.8 Add `@icontract` decorators with `@require`/`@ensure` contracts - -- [ ] 3.2 Extend Spec-Kit adapter for AISP generation - - [ ] 3.2.1 Modify `src/specfact_cli/adapters/speckit.py` to generate AISP blocks from spec.md requirements - - [ ] 3.2.2 Add AISP generation from plan.md requirements - - [ ] 3.2.3 Store generated AISP blocks in project bundle (not in exported spec.md) - - [ ] 3.2.4 Maintain proof IDs and bindings in project bundle - - [ ] 3.2.5 Ensure source spec files remain unchanged (no AISP notation) - - [ ] 3.2.6 Add `@beartype` decorators for runtime type checking - - [ ] 3.2.7 Add `@icontract` decorators with `@require`/`@ensure` contracts - -## 4. CLI Commands - -- [ ] 4.1 Extend validate command with AISP support - - [ ] 4.1.1 Modify `src/specfact_cli/commands/validate.py` to add `--aisp` flag - - [ ] 4.1.2 Implement AISP block loading from project bundle when `--aisp` flag is used - - [ ] 4.1.3 Add `--aisp --against-code` flag for contract matching - - [ ] 4.1.4 Implement contract-to-AISP comparison logic - - [ ] 4.1.5 Add deviation reporting (extra branches, missing invariants, different outcomes) - - [ ] 4.1.6 Integrate AISP validation reports into existing validate output - - [ ] 4.1.7 Add `@beartype` decorators for runtime type checking - - [ ] 4.1.8 Add `@icontract` decorators with `@require`/`@ensure` contracts - -- [ ] 4.2 Create clarify command - - [ ] 4.2.1 Create `src/specfact_cli/commands/clarify.py` for clarification workflow - - [ ] 4.2.2 Implement `specfact clarify requirement <requirement-id>` command - - [ ] 4.2.3 Generate structured prompt based on requirement content - - [ ] 4.2.4 Create YAML response template for AISP block structure - - [ ] 4.2.5 Generate/update AISP block and store in `.specfact/projects/<bundle>/aisp/` - - [ ] 4.2.6 Clarify vague/ambiguous elements in requirement text - - [ ] 4.2.7 Add `@beartype` decorators for runtime type checking - - [ ] 4.2.8 Add `@icontract` decorators with `@require`/`@ensure` contracts - -- [ ] 4.3 Add slash command prompts for AISP compilation and AI LLM consumption - - [ ] 4.3.1 Create `/specfact.compile-aisp` slash command prompt template - - [ ] 4.3.1.1 Instruct AI LLM to update internal AISP spec from available spec - - [ ] 4.3.1.2 Instruct AI LLM to clarify vague/ambiguous elements - - [ ] 4.3.1.3 Instruct AI LLM to execute AISP spec instead of markdown spec - - [ ] 4.3.2 Create `/specfact.update-aisp` slash command prompt template - - [ ] 4.3.2.1 Detect spec changes and update corresponding AISP blocks - - [ ] 4.3.2.2 Flag vague/ambiguous elements for clarification - - [ ] 4.3.3 Reference AISP v5.1 specification in prompt templates - - [ ] 4.3.4 Implement AISP loading from project bundle in slash commands - - [ ] 4.3.5 Store prompt templates in `resources/templates/slash-commands/` - - [ ] 4.3.6 Document slash command usage in CLI documentation - -## 5. AISP Proof Artifact Storage in Project Bundles - -- [ ] 5.1 Implement proof artifact storage in project bundles - - [ ] 5.1.1 Create `.specfact/projects/<bundle>/aisp/` directory structure support - - [ ] 5.1.2 Implement proof artifact file storage (e.g., `proof-<requirement-id>.aisp.md`) - - [ ] 5.1.3 Implement proof ID to requirement ID mapping in project bundle metadata - - [ ] 5.1.4 Ensure storage does not conflict with existing project bundle structure - - [ ] 5.1.5 Add AISP storage to `src/specfact_cli/utils/bundle_loader.py` - -- [ ] 5.2 Implement AISP as internal representation - - [ ] 5.2.1 Ensure AISP blocks are not visible in source spec files - - [ ] 5.2.2 Ensure AISP blocks are accessible only through SpecFact CLI - - [ ] 5.2.3 Implement AISP loading from project bundle for slash commands - - [ ] 5.2.4 Ensure developers work with natural language specs (no AISP exposure) - -## 6. Templates and Examples - -- [ ] 6.1 Create AISP block templates - - [ ] 6.1.1 Create `resources/templates/aisp/` directory - - [ ] 6.1.2 Add template for authentication pattern - - [ ] 6.1.3 Add template for payment processing pattern - - [ ] 6.1.4 Add template for state machine pattern - - [ ] 6.1.5 Add template for generic decision tree pattern - -- [ ] 6.2 Create integration examples - - [ ] 6.2.1 Create example OpenSpec spec with embedded AISP blocks - - [ ] 6.2.2 Create example Spec-Kit spec with AISP blocks - - [ ] 6.2.3 Create example showing AISP block in change proposal - - [ ] 6.2.4 Store examples in `docs/examples/aisp-integration/` - -## 7. Documentation - -- [ ] 7.1 Create AISP integration guide - - [ ] 7.1.1 Create `docs/guides/aisp-integration.md` - - [ ] 7.1.2 Document AISP block syntax and structure - - [ ] 7.1.3 Document when to use AISP blocks (heuristics) - - [ ] 7.1.4 Document authoring guidelines - - [ ] 7.1.5 Document integration with OpenSpec and Spec-Kit workflows - -- [ ] 7.2 Update existing documentation - - [ ] 7.2.1 Update OpenSpec adapter documentation with AISP support - - [ ] 7.2.2 Update Spec-Kit adapter documentation with AISP support - - [ ] 7.2.3 Update validate command documentation with `--aisp` flags - - [ ] 7.2.4 Add clarify command documentation - - [ ] 7.2.5 Add slash command documentation for AISP conversion - -## 8. Code Quality and Contract Validation - -- [ ] 8.1 Apply code formatting - - [ ] 8.1.1 Run `hatch run format` to apply black and isort - - [ ] 8.1.2 Verify all files are properly formatted - -- [ ] 8.2 Run linting checks - - [ ] 8.2.1 Run `hatch run lint` to check for linting errors - - [ ] 8.2.2 Fix all pylint, ruff, and other linter errors - -- [ ] 8.3 Run type checking - - [ ] 8.3.1 Run `hatch run type-check` to verify type annotations - - [ ] 8.3.2 Fix all basedpyright type errors - -- [ ] 8.4 Verify contract decorators - - [ ] 8.4.1 Ensure all new public functions have `@beartype` decorators - - [ ] 8.4.2 Ensure all new public functions have `@icontract` decorators with appropriate `@require`/`@ensure` - -## 9. Testing and Validation - -- [ ] 9.1 Add unit tests for AISP parser - - [ ] 9.1.1 Create `tests/unit/parsers/test_aisp.py` - - [ ] 9.1.2 Test fenced code block detection - - [ ] 9.1.3 Test proof ID extraction - - [ ] 9.1.4 Test input schema parsing - - [ ] 9.1.5 Test decision tree parsing - - [ ] 9.1.6 Test outcome parsing - - [ ] 9.1.7 Test invariant parsing - - [ ] 9.1.8 Test error handling - -- [ ] 9.2 Add unit tests for AISP validator - - [ ] 9.2.1 Create `tests/unit/validators/test_aisp_schema.py` - - [ ] 9.2.2 Test proof ID uniqueness validation - - [ ] 9.2.3 Test requirement binding validation - - [ ] 9.2.4 Test coverage gap detection - - [ ] 9.2.5 Test AISP v5.1 syntax validation - -- [ ] 9.3 Add unit tests for AISP data models - - [ ] 9.3.1 Create `tests/unit/models/test_aisp.py` - - [ ] 9.3.2 Test `AispProofBlock` model creation and validation - - [ ] 9.3.3 Test `AispBinding` model creation and validation - - [ ] 9.3.4 Test `AispParseResult` model creation and validation - - [ ] 9.3.5 Test `AispDecision` and `AispOutcome` models - -- [ ] 9.4 Add integration tests for adapter AISP support - - [ ] 9.4.1 Create `tests/integration/adapters/test_openspec_aisp.py` - - [ ] 9.4.2 Test OpenSpec adapter AISP block detection - - [ ] 9.4.3 Test OpenSpec adapter AISP block parsing - - [ ] 9.4.4 Test cross-repository AISP block support - - [ ] 9.4.5 Create `tests/integration/adapters/test_speckit_aisp.py` - - [ ] 9.4.6 Test Spec-Kit adapter AISP block reading - - [ ] 9.4.7 Test Spec-Kit adapter AISP block preservation on export - -- [ ] 9.5 Add integration tests for CLI commands - - [ ] 9.5.1 Create `tests/integration/commands/test_validate_aisp.py` - - [ ] 9.5.2 Test `specfact validate --aisp` command - - [ ] 9.5.3 Test `specfact validate --aisp --against-code` command - - [ ] 9.5.4 Create `tests/integration/commands/test_clarify.py` - - [ ] 9.5.5 Test `specfact clarify requirement <requirement-id>` command - -- [ ] 9.6 Run full test suite - - [ ] 9.6.1 Run `hatch run smart-test` to execute tests for modified files - - [ ] 9.6.2 Verify all modified tests pass (unit, integration) - -- [ ] 9.7 Final validation - - [ ] 9.7.1 Run `hatch run format` one final time - - [ ] 9.7.2 Run `hatch run lint` one final time - - [ ] 9.7.3 Run `hatch run type-check` one final time - - [ ] 9.7.4 Run `hatch run contract-test` for contract validation - - [ ] 9.7.5 Run `hatch test --cover -v` one final time - - [ ] 9.7.6 Verify no errors remain (formatting, linting, type-checking, tests) - -## 10. OpenSpec Validation - -- [ ] 10.1 Validate OpenSpec change proposal - - [ ] 10.1.1 Run `openspec validate add-aisp-formal-clarification --strict` - - [ ] 10.1.2 Fix any validation errors - - [ ] 10.1.3 Re-run validation until passing - -- [ ] 10.2 Markdown linting - - [ ] 10.2.1 Run markdownlint on all markdown files in change directory - - [ ] 10.2.2 Fix any linting errors - - [ ] 10.2.3 Verify all markdown files pass linting - -## 11. Pull Request Creation - -- [ ] 11.1 Prepare changes for commit - - [ ] 11.1.1 Ensure all changes are committed: `git add .` - - [ ] 11.1.2 Commit with conventional message: `git commit -m "feat: add AISP formal clarification to Spec-Kit and OpenSpec workflows"` - - [ ] 11.1.3 Push to remote: `git push origin feature/add-aisp-formal-clarification` - -- [ ] 11.2 Create Pull Request - - [ ] 11.2.1 Create PR from `feature/add-aisp-formal-clarification` to `dev` branch - - [ ] 11.2.2 Use PR template with proper description - - [ ] 11.2.3 Link to OpenSpec change proposal - - [ ] 11.2.4 Verify PR is ready for review diff --git a/openspec/changes/backlog-core-02-interactive-issue-creation/CHANGE_VALIDATION.md b/openspec/changes/backlog-core-02-interactive-issue-creation/CHANGE_VALIDATION.md deleted file mode 100644 index 57590e37..00000000 --- a/openspec/changes/backlog-core-02-interactive-issue-creation/CHANGE_VALIDATION.md +++ /dev/null @@ -1,92 +0,0 @@ -# Change Validation Report: backlog-core-02-interactive-issue-creation - -**Validation Date**: 2026-01-31T00:32:54+01:00 -**Change Proposal**: [proposal.md](./proposal.md) -**Validation Method**: Dry-run simulation and format/OpenSpec compliance check - -## Executive Summary - -- **Breaking Changes**: 1 interface extension (new abstract method on BacklogAdapterMixin); all concrete backlog adapters must implement it. -- **Dependent Files**: 2 affected (GitHubAdapter, AdoAdapter); no existing callers of create_issue. -- **Impact Level**: Low -- **Validation Result**: Pass -- **User Decision**: N/A (no breaking-change options required) - -## Breaking Changes Detected - -### Interface: BacklogAdapterMixin.create_issue - -- **Type**: New abstract method -- **Old Signature**: (none; method does not exist) -- **New Signature**: `create_issue(project_id: str, payload: dict) -> dict` -- **Breaking**: Yes for implementors (any class inheriting BacklogAdapterMixin must implement the new method) -- **Dependent Files**: - - `src/specfact_cli/adapters/github.py`: Must implement create_issue - - `src/specfact_cli/adapters/ado.py`: Must implement create_issue - -**Mitigation**: Change scope already includes implementing create_issue in both GitHub and ADO adapters; no external dependents of BacklogAdapterMixin exist outside this repo. No scope extension needed. - -## Dependencies Affected - -### Critical Updates Required - -- `src/specfact_cli/adapters/github.py`: Implement create_issue (in scope) -- `src/specfact_cli/adapters/ado.py`: Implement create_issue (in scope) - -### Recommended Updates - -- None - -## Impact Assessment - -- **Code Impact**: Additive; new command and adapter method. Existing refine/sync/analyze-deps unchanged. -- **Test Impact**: New tests for create_issue and add command (TDD in tasks). -- **Documentation Impact**: docs/guides/agile-scrum-workflows.md, backlog guide for backlog add workflow. -- **Release Impact**: Minor (new feature). - -## Dependency on add-backlog-dependency-analysis-and-commands - -- **Note**: The plan states this change "Depends on" add-backlog-dependency-analysis-and-commands (BacklogGraphBuilder, fetch_all_issues, fetch_relationships). If that change is not yet merged, implementation can use minimal graph usage (e.g. fetch_backlog_item to validate parent exists) as stated in proposal Impact. No ambiguity; design and tasks already allow fallback. - -## Format Validation - -- **proposal.md Format**: Pass - - Title format: Correct (# Change: ...) - - Required sections: All present (Why, What Changes, Capabilities, Impact) - - "What Changes" format: Correct (NEW/EXTEND bullets) - - "Capabilities" section: Present (backlog-add) - - "Impact" format: Correct - - Source Tracking section: Present (GitHub #173) -- **tasks.md Format**: Pass - - Section headers: Hierarchical numbered (## 1. ... ## 10.) - - Task format: - [ ] N.N Description - - Sub-task format: Indented - [ ] N.N.N - - Config.yaml compliance: Pass (TDD section, branch first, PR last, version/changelog task, GitHub issue task) -- **specs/backlog-add/spec.md Format**: Pass (ADDED requirements, Given/When/Then) -- **design.md Format**: Pass (bridge adapter, sequence, contract, fallback) -- **Config.yaml Compliance**: Pass - -## OpenSpec Validation - -- **Status**: Pass -- **Validation Command**: `openspec validate add-backlog-add-interactive-issue-creation --strict` -- **Issues Found**: 0 -- **Issues Fixed**: 0 - -## Validation Artifacts - -- No temporary workspace used (validation was format and dependency analysis only). -- Breaking change is in-scope (adapter implementations are part of the change). - -## Module Architecture Alignment (Re-validated 2026-02-10) - -This change was re-validated after renaming and updating to align with the modular architecture (arch-01 through arch-07): - -- Module package structure updated to `modules/{name}/module-package.yaml` pattern -- CLI command registration moved from `cli.py` to `module-package.yaml` declarations -- Core model modifications replaced with arch-07 schema extensions where applicable -- Adapter protocol extensions use arch-05 bridge registry (no direct mixin modification) -- Publisher and integrity metadata added for arch-06 marketplace readiness -- All old change ID references updated to new module-scoped naming - -**Result**: Pass — format compliant, module architecture aligned, no breaking changes introduced. diff --git a/openspec/changes/backlog-core-02-interactive-issue-creation/proposal.md b/openspec/changes/backlog-core-02-interactive-issue-creation/proposal.md deleted file mode 100644 index 6d7cce6a..00000000 --- a/openspec/changes/backlog-core-02-interactive-issue-creation/proposal.md +++ /dev/null @@ -1,64 +0,0 @@ -# Change: Backlog Core — Interactive Issue Creation - -## Why - - -After implementing backlog adapters and dependency analysis (backlog-core-01), teams can analyze and sync backlog items but cannot create new issues from the CLI with proper scoping, hierarchy alignment, and Definition of Ready (DoR) checks. Without a dedicated add flow, users create issues manually in GitHub/ADO and risk orphaned or misaligned items. Adding `specfact backlog add` enables interactive creation with AI copilot assistance: draft → review → enhance → validate (graph, DoR) → create, so new issues fit the existing backlog structure and value chain. - -This change extends the **`backlog-core` module** (backlog-core-01) with the `backlog add` command. - -## Module Package Structure - -This change adds to the existing `modules/backlog-core/` module: - -``` -modules/backlog-core/ - module-package.yaml # updated: add 'backlog add' to commands list - src/backlog_core/ - commands/ - add.py # specfact backlog add (interactive issue creation) - adapters/ - backlog_protocol.py # extended: add create_issue() to BacklogGraphProtocol -``` - -**`module-package.yaml` update:** Add `backlog add` to commands list. No new module; this is a capability increment to backlog-core. - -## Module Package Structure - -This change adds to the existing `modules/backlog-core/` module: - -``` -modules/backlog-core/ - module-package.yaml # updated: add 'backlog add' to commands list - src/backlog_core/ - commands/ - add.py # specfact backlog add (interactive issue creation) - adapters/ - backlog_protocol.py # extended: add create_issue() to BacklogGraphProtocol -``` - -**`module-package.yaml` update:** Add `backlog add` to commands list. No new module; this is a capability increment to backlog-core. - -## What Changes - - -- **NEW**: Add CLI command `specfact backlog add` in `modules/backlog-core/src/backlog_core/commands/add.py` for interactive creation of backlog issues (epic, feature, story, task, bug, spike) with optional parent, title, body, DoR validation, and optional `--sprint` to assign new issue to sprint (when provider supports it). -- **NEW**: Support multiple backlog levels (epic, feature, story, task, bug, spike, custom) with configurable creation hierarchy (allowed parent types per child type) via template or backlog_config; default derived from existing type_mapping and dependency_rules in `ado_scrum.yaml` / `github_projects.yaml` templates. -- **EXTEND** (arch-05 bridge registry): Extend `BacklogGraphProtocol` in `modules/backlog-core/src/backlog_core/adapters/backlog_protocol.py` with `create_issue(project_id: str, payload: dict) -> dict` returning created item (id, key, url). Adapter modules (github-adapter, ado-adapter) implement this method and register updated protocol conformance via bridge registry. -- **NEW**: Add flow: load graph (BacklogGraphBuilder, fetch_all_issues, fetch_relationships from backlog-core-01), resolve type and parent from template/hierarchy, validate parent exists and allowed type, optional DoR check (**use policy-engine-01 when available**), map draft to provider payload, call adapter `create_issue`, output created id/key/url. -- **EXTEND** (E5): Provide draft patch preview before create (integrate with patch-mode-01 when available) so user can review proposed issue body/fields before creating. -- **EXTEND** (E5): When linking to existing issues (e.g. parent, blocks), support fuzzy match + user confirmation; no silent link (aligns with bundle-mapper-01). -- **EXTEND**: Template or backlog_config with optional `creation_hierarchy` (allowed parent types per child type) so Scrum/SAFe/Kanban and custom hierarchies work without code changes. - -## Capabilities -- **backlog-core** (extended): `backlog add` — interactive creation of backlog issues with type/parent selection, draft validation (graph and DoR), and create via adapter protocol; multi-level support with configurable hierarchy. - ---- - -## Source Tracking - -<!-- source_repo: nold-ai/specfact-cli --> -- **GitHub Issue**: #173 -- **Issue URL**: <https://github.com/nold-ai/specfact-cli/issues/173> -- **Last Synced Status**: proposed -- **Sanitized**: false diff --git a/openspec/changes/bundle-mapper-01-mapping-strategy/TDD_EVIDENCE.md b/openspec/changes/bundle-mapper-01-mapping-strategy/TDD_EVIDENCE.md deleted file mode 100644 index c05002d9..00000000 --- a/openspec/changes/bundle-mapper-01-mapping-strategy/TDD_EVIDENCE.md +++ /dev/null @@ -1,14 +0,0 @@ -# TDD Evidence: bundle-mapper-01-mapping-strategy - -## Pre-implementation (failing run) - -- **Command**: `hatch run pytest modules/bundle-mapper/tests/ -v --no-cov` -- **Timestamp**: 2026-02-18 (session) -- **Result**: Collection errors — `ModuleNotFoundError: No module named 'bundle_mapper'` (resolved by adding `conftest.py` with `sys.path.insert` for module `src`). Then `BeartypeDecorHintPep3119Exception` for `_ItemLike` Protocol (resolved by `@runtime_checkable`). - -## Post-implementation (passing run) - -- **Command**: `hatch run pytest modules/bundle-mapper/tests/ -v --no-cov` -- **Timestamp**: 2026-02-18 -- **Result**: 11 passed in 0.71s -- **Tests**: test_bundle_mapping_model (3), test_bundle_mapper_engine (5), test_mapping_history (3) diff --git a/openspec/changes/bundle-mapper-01-mapping-strategy/tasks.md b/openspec/changes/bundle-mapper-01-mapping-strategy/tasks.md deleted file mode 100644 index 9c33127e..00000000 --- a/openspec/changes/bundle-mapper-01-mapping-strategy/tasks.md +++ /dev/null @@ -1,152 +0,0 @@ -## 1. Git Workflow - -- [ ] 1.1 Create git worktree branch `feature/add-bundle-mapping-strategy` from `dev` branch - - [ ] 1.1.1 Ensure primary checkout is on dev and up to date: `git checkout dev && git pull origin dev` - - [ ] 1.1.2 Create branch: `scripts/worktree.sh create feature/add-bundle-mapping-strategy` - - [ ] 1.1.3 Verify branch in worktree: `git worktree list` includes the branch path; then run `git branch --show-current` inside that worktree. - -## 2. BundleMapping Model - -- [ ] 2.1 Create `src/specfact_cli/models/bundle_mapping.py` - - [ ] 2.1.1 Define `BundleMapping` dataclass with fields: primary_bundle_id, confidence, candidates, explained_reasoning - - [ ] 2.1.2 Add `@beartype` decorator for runtime type checking - - [ ] 2.1.3 Add `@icontract` decorators with `@require`/`@ensure` contracts - -## 3. BundleMapper Engine - -- [ ] 3.1 Create `modules/bundle-mapper/src/bundle_mapper/mapper/engine.py` - - [ ] 3.1.1 Implement `BundleMapper` class with `compute_mapping(item: BacklogItem) -> BundleMapping` - - [ ] 3.1.2 Implement `_score_explicit_mapping()` for explicit label signals (bundle:xyz tags) - - [ ] 3.1.3 Implement `_score_historical_mapping()` for historical pattern signals - - [ ] 3.1.4 Implement `_score_content_similarity()` for content-based signals (keyword matching) - - [ ] 3.1.5 Implement weighted confidence calculation (0.8 × explicit + 0.15 × historical + 0.05 × content) - - [ ] 3.1.6 Implement `_item_key()` for creating metadata keys for history matching - - [ ] 3.1.7 Implement `_item_keys_similar()` for comparing metadata keys - - [ ] 3.1.8 Implement `_explain_score()` for human-readable explanations - - [ ] 3.1.9 Implement `_build_explanation()` for detailed mapping rationale - - [ ] 3.1.10 Add `@beartype` decorator for runtime type checking - - [ ] 3.1.11 Add `@icontract` decorators with `@require`/`@ensure` contracts - -## 4. Mapping History Persistence - -- [ ] 4.1 Extend `.specfact/config.yaml` structure - - [ ] 4.1.1 Add `backlog.bundle_mapping.rules` section for persistent mapping rules - - [ ] 4.1.2 Add `backlog.bundle_mapping.history` section for auto-populated historical mappings - - [ ] 4.1.3 Add `backlog.bundle_mapping.explicit_label_prefix` config (default: "bundle:") - - [ ] 4.1.4 Add `backlog.bundle_mapping.auto_assign_threshold` config (default: 0.8) - - [ ] 4.1.5 Add `backlog.bundle_mapping.confirm_threshold` config (default: 0.5) - -## 5. Mapping Rule Model - -- [ ] 5.1 Create `MappingRule` Pydantic model - - [ ] 5.1.1 Define fields: pattern, bundle_id, action, confidence - - [ ] 5.1.2 Implement `matches(item: BacklogItem) -> bool` method - - [ ] 5.1.3 Support pattern matching: tag=~regex, assignee=exact, area=exact - - [ ] 5.1.4 Add `@beartype` decorator for runtime type checking - -## 6. Mapping History Functions - -- [ ] 6.1 Implement `save_user_confirmed_mapping()` function - - [ ] 6.1.1 Create item_key from item metadata - - [ ] 6.1.2 Increment mapping count in history - - [ ] 6.1.3 Save to config file - - [ ] 6.1.4 Add `@beartype` decorator for runtime type checking - -## 7. Interactive Mapping UI - -- [ ] 7.1 Implement `ask_bundle_mapping()` function in `src/specfact_cli/cli/backlog_commands.py` - - [ ] 7.1.1 Display confidence level (✓ high, ? medium, ! low) - - [ ] 7.1.2 Show suggested bundle with reasoning - - [ ] 7.1.3 Display alternative candidates with scores - - [ ] 7.1.4 Provide options: accept, select from candidates, show all bundles, skip - - [ ] 7.1.5 Handle user selection and return bundle_id - - [ ] 7.1.6 Add `@beartype` decorator for runtime type checking - -## 8. CLI Integration: --auto-bundle Flag - -- [ ] 8.1 Extend `backlog refine` command - - [ ] 8.1.1 Add `--auto-bundle` flag option - - [ ] 8.1.2 Add `--auto-accept-bundle` flag option - - [ ] 8.1.3 Integrate bundle mapping into refinement workflow - - [ ] 8.1.4 Auto-assign if confidence >= 0.8 - - [ ] 8.1.5 Prompt user if confidence 0.5-0.8 - - [ ] 8.1.6 Require explicit selection if confidence < 0.5 - -- [ ] 8.2 Extend `backlog import` command - - [ ] 8.2.1 Add `--auto-bundle` flag option - - [ ] 8.2.2 Add `--auto-accept-bundle` flag option - - [ ] 8.2.3 Integrate bundle mapping into import workflow - - [ ] 8.2.4 Use mapping if `--bundle` not specified - -## 9. Source Tracking Extension - -- [ ] 9.1 Extend `src/specfact_cli/models/source_tracking.py` - - [ ] 9.1.1 Add `bundle_id` field (Optional[str]) - - [ ] 9.1.2 Add `mapping_confidence` field (Optional[float]) - - [ ] 9.1.3 Add `mapping_method` field (Optional[str]) - "explicit_label", "historical", "content_similarity", "user_confirmed" - - [ ] 9.1.4 Add `mapping_timestamp` field (Optional[datetime]) - - [ ] 9.1.5 Ensure backward compatibility (all fields optional) - -## 10. OpenSpec Generation Integration - -- [ ] 10.1 Extend `_write_openspec_change_from_proposal()` function - - [ ] 10.1.1 Add `mapping: Optional[BundleMapping]` parameter - - [ ] 10.1.2 Update source_tracking with mapping metadata - - [ ] 10.1.3 Include mapping information in proposal.md source tracking section - - [ ] 10.1.4 Ensure backward compatibility (parameter optional) - -## 11. Code Quality and Contract Validation - -- [ ] 11.1 Apply code formatting - - [ ] 11.1.1 Run `hatch run format` to apply black and isort - - [ ] 11.1.2 Verify all files are properly formatted -- [ ] 11.2 Run linting checks - - [ ] 11.2.1 Run `hatch run lint` to check for linting errors - - [ ] 11.2.2 Fix all pylint, ruff, and other linter errors -- [ ] 11.3 Run type checking - - [ ] 11.3.1 Run `hatch run type-check` to verify type annotations - - [ ] 11.3.2 Fix all basedpyright type errors -- [ ] 11.4 Verify contract decorators - - [ ] 11.4.1 Ensure all new public functions have `@beartype` decorators - - [ ] 11.4.2 Ensure all new public functions have `@icontract` decorators with appropriate `@require`/`@ensure` - -## 12. Testing and Validation - -- [ ] 12.1 Add new tests - - [ ] 12.1.1 Add unit tests for BundleMapper (9+ tests: 3 signals × 3 confidence levels) - - [ ] 12.1.2 Add unit tests for explicit mapping signal (3+ tests) - - [ ] 12.1.3 Add unit tests for historical mapping signal (3+ tests) - - [ ] 12.1.4 Add unit tests for content similarity signal (3+ tests) - - [ ] 12.1.5 Add unit tests for confidence scoring (5+ tests) - - [ ] 12.1.6 Add unit tests for mapping history persistence (5+ tests) - - [ ] 12.1.7 Add unit tests for interactive UI (5+ tests: user selections) - - [ ] 12.1.8 Add integration tests: end-to-end mapping workflow (5+ tests) -- [ ] 12.2 Update existing tests - - [ ] 12.2.1 Update source_tracking tests to include new mapping fields - - [ ] 12.2.2 Update OpenSpec generation tests to handle mapping parameter -- [ ] 12.3 Run full test suite of modified tests only - - [ ] 12.3.1 Run `hatch run smart-test` to execute only the tests that are relevant to the changes - - [ ] 12.3.2 Verify all modified tests pass (unit, integration, E2E) -- [ ] 12.4 Final validation - - [ ] 12.4.1 Run `hatch run format` one final time - - [ ] 12.4.2 Run `hatch run lint` one final time - - [ ] 12.4.3 Run `hatch run type-check` one final time - - [ ] 12.4.4 Run `hatch test --cover -v` one final time - - [ ] 12.4.5 Verify no errors remain (formatting, linting, type-checking, tests) - -## 13. OpenSpec Validation - -- [ ] 13.1 Validate change proposal - - [ ] 13.1.1 Run `openspec validate add-bundle-mapping-strategy --strict` - - [ ] 13.1.2 Fix any validation errors - - [ ] 13.1.3 Re-run validation until passing - -## 14. Pull Request Creation - -- [ ] 14.1 Prepare changes for commit - - [ ] 14.1.1 Ensure all changes are committed: `git add .` - - [ ] 14.1.2 Commit with conventional message: `git commit -m "feat: add bundle mapping strategy with confidence scoring"` - - [ ] 14.1.3 Push to remote: `git push origin feature/add-bundle-mapping-strategy` -- [ ] 14.2 Create Pull Request - - [ ] 14.2.1 Create PR in specfact-cli repository - - [ ] 14.2.2 Changes are ready for review in the branch diff --git a/openspec/specs/adapter-development-guide/spec.md b/openspec/specs/adapter-development-guide/spec.md new file mode 100644 index 00000000..51a65ee6 --- /dev/null +++ b/openspec/specs/adapter-development-guide/spec.md @@ -0,0 +1,45 @@ +# adapter-development-guide Specification + +## Purpose +TBD - created by archiving change arch-08-documentation-discrepancies-remediation. Update Purpose after archive. +## Requirements +### Requirement: Full BridgeAdapter interface documented + +The adapter development guide (or extended creating-custom-bridges) SHALL document the full BridgeAdapter interface: detect, import_artifact, export_artifact, load_change_tracking, save_change_tracking (or equivalent), with contracts and usage notes. + +#### Scenario: Developer implements adapter +- **GIVEN** the adapter development guide (or extended creating-custom-bridges) +- **WHEN** a developer implements an adapter +- **THEN** the full BridgeAdapter interface is documented +- **AND** contracts and usage notes are provided + +### Requirement: ToolCapabilities and adapter selection documented + +The ToolCapabilities model and its role in adapter selection (e.g. sync modes) SHALL be documented, with reference to code (e.g. models/bridge.py) if needed. + +#### Scenario: Developer declares or uses capabilities +- **GIVEN** the adapter documentation +- **WHEN** a developer needs to declare or use adapter capabilities +- **THEN** ToolCapabilities model is documented +- **AND** its role in adapter selection is explained + +### Requirement: Examples or code references provided + +The adapter guide SHALL provide at least one code reference or minimal example (e.g. base adapter, existing OpenSpec/SpecKit adapter) so that implementation is clear. + +#### Scenario: Developer follows adapter guide +- **GIVEN** the adapter guide +- **WHEN** a developer follows the guide +- **THEN** at least one code reference or minimal example is provided +- **AND** implementation path is clear + +### Requirement: Adapter guide discoverable + +The adapter development content SHALL be reachable from the docs navigation and from bridge/architecture documentation. + +#### Scenario: User looks for adapter development +- **GIVEN** the published docs +- **WHEN** a user looks for adapter or bridge development +- **THEN** the adapter development content is reachable from the docs navigation +- **AND** from bridge/architecture documentation + diff --git a/openspec/specs/adr-template/spec.md b/openspec/specs/adr-template/spec.md new file mode 100644 index 00000000..a03aec9b --- /dev/null +++ b/openspec/specs/adr-template/spec.md @@ -0,0 +1,35 @@ +# adr-template Specification + +## Purpose +TBD - created by archiving change arch-08-documentation-discrepancies-remediation. Update Purpose after archive. +## Requirements +### Requirement: ADR template exists + +The docs SHALL provide an ADR template with at least: title, status, context, decision, consequences. + +#### Scenario: Maintainer records new decision +- **GIVEN** the docs repository +- **WHEN** a maintainer wants to record a new architectural decision +- **THEN** an ADR template exists (e.g. in docs/architecture/adr/template.md) +- **AND** the template includes title, status, context, decision, consequences + +### Requirement: At least one ADR present + +The ADR directory SHALL contain at least one ADR (e.g. for module-first architecture) following the template. + +#### Scenario: Reader opens architecture docs +- **GIVEN** the ADR directory +- **WHEN** a reader opens the architecture documentation +- **THEN** at least one ADR is present following the template +- **AND** it documents a major architectural decision + +### Requirement: ADRs discoverable from docs + +ADRs SHALL be linked from docs/architecture/README.md or docs/reference/architecture.md so they can be found without searching the repo. + +#### Scenario: User navigates architecture docs +- **GIVEN** the docs site (e.g. docs.specfact.io) +- **WHEN** a user navigates architecture or reference docs +- **THEN** ADRs are linked +- **AND** discoverable from the menu or architecture index + diff --git a/openspec/changes/backlog-core-02-interactive-issue-creation/specs/backlog-add/spec.md b/openspec/specs/backlog-add/spec.md similarity index 97% rename from openspec/changes/backlog-core-02-interactive-issue-creation/specs/backlog-add/spec.md rename to openspec/specs/backlog-add/spec.md index 4246fd9e..6ea776e1 100644 --- a/openspec/changes/backlog-core-02-interactive-issue-creation/specs/backlog-add/spec.md +++ b/openspec/specs/backlog-add/spec.md @@ -1,7 +1,8 @@ -# Backlog Add (Interactive Issue Creation) - -## ADDED Requirements +# backlog-add Specification +## Purpose +TBD - created by archiving change backlog-core-02-interactive-issue-creation. Update Purpose after archive. +## Requirements ### Requirement: Backlog adapter create method The system SHALL extend backlog adapters with a create method that accepts a unified payload and returns the created item (id, key, url). @@ -158,3 +159,4 @@ The system SHALL support optional `--sprint <sprint-id>` so the created issue ca **Acceptance Criteria**: - Fuzzy match is used for discovery only; linking requires user confirmation; no silent writes. + diff --git a/openspec/specs/bundle-mapping/spec.md b/openspec/specs/bundle-mapping/spec.md index 8829b74f..e3d8e093 100644 --- a/openspec/specs/bundle-mapping/spec.md +++ b/openspec/specs/bundle-mapping/spec.md @@ -15,3 +15,93 @@ The system SHALL route bundle mappings based on confidence thresholds: auto-assi - **AND** confidence routing behavior is enforced (auto/prompt/explicit selection) instead of placeholder or no-op import messaging - **AND** resulting mapping decision is persisted via configured mapping history/rules storage. +### Requirement: Bundle Mapping Engine + +The system SHALL provide a `BundleMapper` that computes mapping from backlog items to OpenSpec bundles with confidence scoring. + +#### Scenario: Compute mapping with explicit label + +- **WHEN** a backlog item has tag "bundle:backend-services" +- **THEN** the system returns mapping with bundle_id="backend-services" and confidence >= 0.8 + +#### Scenario: Compute mapping with historical pattern + +- **WHEN** similar items (same assignee, area, tags) were previously mapped to a bundle +- **THEN** the system returns mapping with that bundle_id and confidence based on historical frequency + +#### Scenario: Compute mapping with content similarity + +- **WHEN** item title/body contains keywords matching existing specs in a bundle +- **THEN** the system returns mapping with that bundle_id and confidence based on keyword overlap + +#### Scenario: Weighted confidence calculation + +- **WHEN** multiple signals contribute to mapping +- **THEN** the system calculates final confidence as: 0.8 × explicit + 0.15 × historical + 0.05 × content + +#### Scenario: No mapping found + +- **WHEN** no signals match any bundle +- **THEN** the system returns mapping with primary_bundle_id=None and confidence=0.0 + +### Requirement: Mapping History Persistence + +The system SHALL persist mapping rules learned from user confirmations. + +#### Scenario: Save user-confirmed mapping + +- **WHEN** a user confirms a bundle mapping +- **THEN** the system saves the mapping pattern to config history for future use + +#### Scenario: Historical mapping lookup + +- **WHEN** a new item matches historical pattern (same assignee, area, tags) +- **THEN** the system uses historical mapping frequency to boost confidence score + +#### Scenario: Historical mapping ignores stale bundle ids + +- **GIVEN** history contains bundle ids that are no longer present in available bundles +- **WHEN** historical scoring is computed +- **THEN** stale bundle ids are ignored +- **AND** returned historical bundle ids are always members of current available bundles + +#### Scenario: Mapping rules from config + +- **WHEN** config file contains mapping rules (e.g., "assignee=alice → backend-services") +- **THEN** the system applies these rules before computing other signals + +#### Scenario: History key encoding is unambiguous + +- **WHEN** item keys are serialized for history matching +- **THEN** field delimiters and tag-value delimiters do not collide +- **AND** round-trip parsing preserves all tag values without truncation + +### Requirement: Interactive Mapping UI + +The system SHALL provide an interactive prompt for bundle selection with confidence visualization and candidate options. + +#### Scenario: Display high confidence suggestion + +- **WHEN** mapping confidence >= 0.8 +- **THEN** the system displays "✓ HIGH CONFIDENCE" with suggested bundle and reason + +#### Scenario: Display medium confidence suggestion + +- **WHEN** mapping confidence 0.5-0.8 +- **THEN** the system displays "? MEDIUM CONFIDENCE" with suggested bundle and alternative candidates + +#### Scenario: Display low confidence warning + +- **WHEN** mapping confidence < 0.5 +- **THEN** the system displays "! LOW CONFIDENCE" and requires explicit bundle selection + +#### Scenario: Show all available bundles + +- **WHEN** user selects "S" option +- **THEN** the system displays all available bundles with descriptions + +#### Scenario: Skip item + +- **WHEN** user selects "Q" option +- **THEN** the system skips the item without mapping + diff --git a/openspec/specs/confidence-scoring/spec.md b/openspec/specs/confidence-scoring/spec.md new file mode 100644 index 00000000..ea75de32 --- /dev/null +++ b/openspec/specs/confidence-scoring/spec.md @@ -0,0 +1,109 @@ +# confidence-scoring Specification + +## Purpose +TBD - created by archiving change bundle-mapper-01-mapping-strategy. Update Purpose after archive. +## Requirements +### Requirement: Explicit Label Signal + +The system SHALL score explicit bundle labels (e.g., "bundle:xyz", "project:abc") with highest priority and 100% confidence when bundle exists. + +#### Scenario: Explicit label with valid bundle + +- **WHEN** item has tag "bundle:backend-services" and bundle exists +- **THEN** the system assigns score 1.0 (100% confidence) to that bundle + +#### Scenario: Explicit label with invalid bundle + +- **WHEN** item has tag "bundle:nonexistent" and bundle doesn't exist +- **THEN** the system ignores the label and uses other signals + +#### Scenario: Multiple explicit labels + +- **WHEN** item has multiple bundle labels +- **THEN** the system uses the first matching label + +### Requirement: Historical Mapping Signal + +The system SHALL score historical mappings based on frequency of similar items mapped to the same bundle. + +#### Scenario: Strong historical pattern + +- **WHEN** 10+ similar items (same assignee, area, tags) were mapped to "backend-services" +- **THEN** the system assigns high confidence (normalized count / 10, capped at 1.0) + +#### Scenario: Weak historical pattern + +- **WHEN** 1-2 similar items were mapped to a bundle +- **THEN** the system assigns low confidence (count / 10) + +#### Scenario: No historical pattern + +- **WHEN** no similar items exist in history +- **THEN** the system returns None for historical signal + +#### Scenario: Item key similarity matching + +- **WHEN** item keys share at least 2 of 3 components (area, assignee, tags) +- **THEN** the system considers them similar for historical lookup + +### Requirement: Content Similarity Signal + +The system SHALL score content similarity between item text and existing specs in bundles using keyword matching. + +#### Scenario: High keyword overlap + +- **WHEN** item title/body shares many keywords with specs in a bundle +- **THEN** the system assigns high similarity score (Jaccard similarity) + +#### Scenario: Low keyword overlap + +- **WHEN** item title/body shares few keywords with specs in a bundle +- **THEN** the system assigns low similarity score or ignores bundle + +#### Scenario: No keyword overlap + +- **WHEN** item text has no keywords in common with bundle specs +- **THEN** the system assigns score 0.0 for that bundle + +#### Scenario: Conflicting content signal does not increase confidence + +- **GIVEN** explicit or historical scoring selected a primary bundle +- **AND** top content similarity points to a different bundle +- **WHEN** final confidence is calculated +- **THEN** the content contribution is not added to the selected primary bundle confidence + +#### Scenario: Tokenization for matching + +- **WHEN** content similarity is computed +- **THEN** the system tokenizes text (lowercase, split by non-alphanumeric) for comparison + +### Requirement: Confidence Thresholds + +The system SHALL use configurable confidence thresholds for routing decisions. + +#### Scenario: Auto-assign threshold + +- **WHEN** confidence >= auto_assign_threshold (default 0.8) +- **THEN** the system auto-assigns to bundle (with optional user confirmation) + +#### Scenario: Confirm threshold + +- **WHEN** confidence >= confirm_threshold (default 0.5) and < auto_assign_threshold +- **THEN** the system prompts user for confirmation + +#### Scenario: Reject threshold + +- **WHEN** confidence < confirm_threshold (default 0.5) +- **THEN** the system requires explicit bundle selection + +#### Scenario: Configurable thresholds + +- **WHEN** user configures custom thresholds in `.specfact/config.yaml` +- **THEN** the system uses custom thresholds instead of defaults + +#### Scenario: Malformed thresholds fall back to defaults + +- **WHEN** config contains non-numeric threshold values +- **THEN** mapper initialization does not fail +- **AND** default threshold values are used + diff --git a/openspec/specs/documentation-alignment/spec.md b/openspec/specs/documentation-alignment/spec.md new file mode 100644 index 00000000..72bc0e4a --- /dev/null +++ b/openspec/specs/documentation-alignment/spec.md @@ -0,0 +1,94 @@ +# documentation-alignment Specification + +## Purpose +TBD - created by archiving change arch-08-documentation-discrepancies-remediation. Update Purpose after archive. +## Requirements +### Requirement: Module system status in docs + +The published architecture documentation SHALL state that the module system is production-ready (e.g. since v0.27) and SHALL NOT describe it as "transitioning" or "experimental." + +#### Scenario: Reader checks module system status +- **GIVEN** the published architecture documentation (e.g. docs/reference/architecture.md, docs/architecture/module-system.md) +- **WHEN** a reader looks for the current state of the module system +- **THEN** the docs state production-ready status +- **AND** do not use "transitioning" or "experimental" for the module system + +### Requirement: BridgeAdapter interface documented + +The adapter documentation SHALL include the full BridgeAdapter interface: detect, import_artifact, export_artifact, load_change_tracking, save_change_tracking (or equivalent), with current behavior and contracts. + +#### Scenario: Developer implements BridgeAdapter +- **GIVEN** the adapter documentation +- **WHEN** a developer implements or extends a BridgeAdapter +- **THEN** the documented interface includes all methods above +- **AND** contracts and usage are described + +### Requirement: Architecture layers match codebase + +The architecture overview SHALL describe the actual layers (Specification, Contract, Enforcement, and where relevant Adapter, Analysis, Module layers) so they match the codebase structure. + +#### Scenario: Reader learns layer structure +- **GIVEN** the architecture overview +- **WHEN** a reader learns the high-level layer structure +- **THEN** the docs describe actual layers present in code +- **AND** do not omit Adapter, Analysis, or Module layers where they exist + +### Requirement: Operational modes clarity + +The documentation for CI/CD and CoPilot modes SHALL clarify current mode detection and any limitations (e.g. mode-specific behavior as planned), and SHALL NOT imply full mode implementations that do not exist. + +#### Scenario: Reader checks mode implementation +- **GIVEN** the documentation for CI/CD and CoPilot modes +- **WHEN** a reader checks what is implemented +- **THEN** current detector behavior is stated +- **AND** planned vs implemented behavior is distinguished + +### Requirement: CommandRegistry and module structure documented + +The architecture or module docs SHALL describe lazy loading, metadata caching, and the required module package structure (e.g. module-package.yaml, src/<name>/main.py) and naming conventions. + +#### Scenario: Developer needs registry or module layout details +- **GIVEN** the architecture or module docs +- **WHEN** a developer needs implementation details for the command registry or module layout +- **THEN** lazy loading and metadata caching are described +- **AND** required module package structure and naming are documented + +### Requirement: ToolCapabilities and error handling documented + +The ToolCapabilities model and adapter selection SHALL be documented; error handling patterns (custom exceptions, logging) SHALL be described in reference or adapter documentation. + +#### Scenario: Developer looks for capabilities or error handling +- **GIVEN** the reference or adapter documentation +- **WHEN** a developer looks for adapter capabilities or error handling +- **THEN** ToolCapabilities and adapter selection are documented +- **AND** error handling patterns are described + +### Requirement: Terminology and version consistency + +The documentation set SHALL use consistent terminology (e.g. Project Bundle, Plan Bundle) and SHALL standardize or remove version references that cause confusion. + +#### Scenario: Same concept referenced across docs +- **GIVEN** the documentation set +- **WHEN** the same concept is referenced +- **THEN** terminology is consistent +- **AND** version references are standardized or removed where confusing + +### Requirement: Diagrams reference only existing or planned components + +Any Mermaid or component diagram in the docs SHALL show only components that exist in the codebase or are clearly marked as planned; non-existent components (e.g. unimplemented "DevOps Adapters") SHALL be removed or relabeled. + +#### Scenario: Reader interprets diagram +- **GIVEN** any Mermaid or component diagram in the docs +- **WHEN** a reader interprets the diagram +- **THEN** only existing or clearly planned components are shown +- **AND** non-existent components are removed or relabeled + +### Requirement: Performance metrics current or removed + +Any stated performance or timing in the docs SHALL reflect current benchmarks or SHALL be removed if outdated. + +#### Scenario: Docs publish performance claims +- **GIVEN** any stated performance or timing (e.g. "typical execution: < 10s") +- **WHEN** the docs are published +- **THEN** metrics reflect current benchmarks or are removed if outdated + diff --git a/openspec/specs/implementation-status-docs/spec.md b/openspec/specs/implementation-status-docs/spec.md new file mode 100644 index 00000000..81e990b7 --- /dev/null +++ b/openspec/specs/implementation-status-docs/spec.md @@ -0,0 +1,45 @@ +# implementation-status-docs Specification + +## Purpose +TBD - created by archiving change arch-08-documentation-discrepancies-remediation. Update Purpose after archive. +## Requirements +### Requirement: Implemented vs planned clearly stated + +The implementation status documentation SHALL clearly mark each feature (e.g. architecture commands, protocol FSM, change tracking) as implemented or planned, with brief notes on scope where relevant. + +#### Scenario: Reader checks feature status +- **GIVEN** the implementation status documentation (e.g. docs/architecture/implementation-status.md) +- **WHEN** a reader checks the status of a feature +- **THEN** each feature is clearly marked as implemented or planned +- **AND** scope notes are provided (e.g. change tracking: models exist, limited adapter support) + +### Requirement: Pointers to OpenSpec for planned features + +For planned or partially implemented features, the implementation status doc SHALL link or reference the relevant OpenSpec change (e.g. architecture-01-solution-layer for architecture derive/validate/trace). + +#### Scenario: Reader finds spec for planned feature +- **GIVEN** a planned or partially implemented feature +- **WHEN** the implementation status doc describes it +- **THEN** it links or references the relevant OpenSpec change +- **AND** readers can find the spec and roadmap + +### Requirement: Current limitations documented + +Current limitations for change tracking and protocol/FSM behavior SHALL be stated (e.g. no FSM engine, partial adapter support for change tracking) so that expectations match reality. + +#### Scenario: Reader checks limitations +- **GIVEN** change tracking and protocol/FSM behavior +- **WHEN** a user or contributor reads the implementation status +- **THEN** current limitations are stated +- **AND** expectations align with implementation + +### Requirement: Implementation status discoverable + +The implementation status page SHALL be linked from the architecture README or reference architecture page so it can be found without searching. + +#### Scenario: User navigates architecture docs +- **GIVEN** the docs site +- **WHEN** a user navigates architecture docs +- **THEN** the implementation status page is linked +- **AND** discoverable from the architecture index or README + diff --git a/openspec/specs/module-development-guide/spec.md b/openspec/specs/module-development-guide/spec.md new file mode 100644 index 00000000..4f9e3a60 --- /dev/null +++ b/openspec/specs/module-development-guide/spec.md @@ -0,0 +1,35 @@ +# module-development-guide Specification + +## Purpose +TBD - created by archiving change arch-08-documentation-discrepancies-remediation. Update Purpose after archive. +## Requirements +### Requirement: Required module structure documented + +The module development guide SHALL describe the required directory structure (e.g. modules/<name>/, module-package.yaml, src/<name>/__init__.py, main.py, commands) and file roles. + +#### Scenario: Developer creates new module +- **GIVEN** the module development guide +- **WHEN** a developer creates a new module +- **THEN** the guide describes the required directory structure +- **AND** file roles are explained + +### Requirement: Manifest and contract requirements documented + +The module development guide SHALL document the module-package.yaml schema (name, version, commands, dependencies, schema_extensions, service_bridges) and SHALL mention contract requirements (@icontract, @beartype) for public APIs. + +#### Scenario: Developer configures module +- **GIVEN** the module development guide +- **WHEN** a developer configures a module +- **THEN** the guide documents the module-package.yaml schema +- **AND** contract requirements for public APIs are mentioned + +### Requirement: Module guide discoverable + +The module development guide SHALL be reachable from the docs navigation (e.g. Guides or Reference) and from the architecture or module system documentation. + +#### Scenario: User looks for module development +- **GIVEN** the published docs (e.g. docs.specfact.io) +- **WHEN** a user looks for how to develop modules +- **THEN** the guide is reachable from the docs navigation +- **AND** from the architecture or module system documentation + diff --git a/openspec/specs/module-installation/spec.md b/openspec/specs/module-installation/spec.md new file mode 100644 index 00000000..a7189afb --- /dev/null +++ b/openspec/specs/module-installation/spec.md @@ -0,0 +1,92 @@ +# module-installation Specification + +## Purpose +TBD - created by archiving change marketplace-01-central-module-registry. Update Purpose after archive. +## Requirements +### Requirement: Install command downloads and installs modules + +The system SHALL provide `specfact module install <module-id>` command that downloads, verifies, and installs modules from the registry. + +#### Scenario: Install module from marketplace +- **WHEN** user runs `specfact module install specfact/backlog` +- **THEN** system SHALL fetch registry index +- **AND** SHALL download module tarball +- **AND** SHALL verify checksum +- **AND** SHALL extract to ~/.specfact/marketplace-modules/backlog/ +- **AND** SHALL register module +- **AND** SHALL display success message + +#### Scenario: Install specific version +- **WHEN** user runs `specfact module install specfact/backlog --version 0.29.0` +- **THEN** system SHALL install specified version +- **AND** SHALL verify core_compatibility with current CLI version + +#### Scenario: Install module already installed +- **WHEN** user installs module that is already installed +- **THEN** system SHALL display message "Module already installed (version X)" +- **AND** SHALL suggest using upgrade command + +### Requirement: Uninstall command removes marketplace modules + +The system SHALL provide `specfact module uninstall <module-name>` command that removes modules from marketplace path. + +#### Scenario: Uninstall marketplace module +- **WHEN** user runs `specfact module uninstall backlog` +- **THEN** system SHALL check if module is from marketplace +- **AND** SHALL remove ~/.specfact/marketplace-modules/backlog/ directory +- **AND** SHALL remove module from registry +- **AND** SHALL display success message + +#### Scenario: Attempt to uninstall built-in module +- **WHEN** user attempts to uninstall built-in module +- **THEN** system SHALL display error "Cannot uninstall built-in module" +- **AND** SHALL NOT modify module + +### Requirement: Search command finds modules in registry + +The system SHALL provide `specfact module search <query>` command that searches registry index by name, description, or tags. + +#### Scenario: Search modules by keyword +- **WHEN** user runs `specfact module search backlog` +- **THEN** system SHALL fetch registry index +- **AND** SHALL filter modules matching query in name, description, or tags +- **AND** SHALL display results with module ID, description, latest version + +### Requirement: List command shows installed modules + +The system SHALL provide `specfact module list` command that displays modules from all sources with source indicators. + +#### Scenario: List all modules +- **WHEN** user runs `specfact module list` +- **THEN** system SHALL show modules from built-in, marketplace, and custom paths +- **AND** SHALL indicate source (built-in/marketplace/custom) for each module + +#### Scenario: List marketplace modules only +- **WHEN** user runs `specfact module list --source marketplace` +- **THEN** system SHALL show only marketplace-installed modules + +### Requirement: Upgrade command updates installed modules + +The system SHALL provide `specfact module upgrade <module-name>` command that upgrades marketplace modules to latest version. + +#### Scenario: Upgrade marketplace module +- **WHEN** user runs `specfact module upgrade backlog` +- **THEN** system SHALL fetch registry index +- **AND** SHALL check if newer version available +- **AND** SHALL download and install newer version +- **AND** SHALL remove old version after successful install + +#### Scenario: Upgrade reinstalls when module already exists +- **WHEN** user runs `specfact module upgrade backlog` and backlog is already installed +- **THEN** system SHALL replace existing installed files with the upgraded package +- **AND** SHALL NOT no-op due to existing install marker files + +### Requirement: Installation extraction is path-safe + +The system SHALL reject archive members that escape the intended extraction root. + +#### Scenario: Installer blocks path traversal entries +- **WHEN** a downloaded marketplace tarball contains absolute paths or `..` traversal +- **THEN** install SHALL fail before extraction +- **AND** SHALL raise a validation error indicating unsafe archive content + diff --git a/openspec/specs/module-lifecycle-management/spec.md b/openspec/specs/module-lifecycle-management/spec.md index d1b709c1..e8df38a5 100644 --- a/openspec/specs/module-lifecycle-management/spec.md +++ b/openspec/specs/module-lifecycle-management/spec.md @@ -350,3 +350,36 @@ The system SHALL extend module registration to load schema_extensions from manif - **AND** SHALL skip that extension - **AND** SHALL NOT fail entire module registration +### Requirement: Registration handles modules from multiple sources + +The system SHALL extend registration to handle modules from built-in, marketplace, and custom sources with appropriate lifecycle rules. + +#### Scenario: Marketplace modules can be uninstalled +- **WHEN** module from marketplace is registered +- **THEN** system SHALL mark it as uninstallable +- **AND** SHALL allow removal via uninstall command + +#### Scenario: Built-in modules cannot be uninstalled +- **WHEN** module from built-in source is registered +- **THEN** system SHALL mark it as non-uninstallable +- **AND** SHALL prevent removal via uninstall command + +#### Scenario: Registration validates namespace for marketplace modules +- **WHEN** marketplace module is registered +- **THEN** system SHALL validate id uses "namespace/name" format +- **AND** SHALL log warning if flat name used + +### Requirement: Lifecycle command harmonization remains backward compatible + +The system SHALL keep existing init-based lifecycle flags functional while introducing `specfact module` as the canonical lifecycle command surface. + +#### Scenario: init lifecycle flags remain functional +- **WHEN** user runs `specfact init --list-modules` or `--enable-module/--disable-module` +- **THEN** system SHALL preserve current lifecycle behavior and state updates +- **AND** SHALL provide deprecation guidance toward `specfact module` commands + +#### Scenario: module command is canonical lifecycle surface +- **WHEN** user runs `specfact module list` or lifecycle operations +- **THEN** system SHALL provide equivalent lifecycle management capabilities +- **AND** documentation SHALL reference `specfact module` as primary UX + diff --git a/openspec/specs/module-marketplace-registry/spec.md b/openspec/specs/module-marketplace-registry/spec.md new file mode 100644 index 00000000..23c87504 --- /dev/null +++ b/openspec/specs/module-marketplace-registry/spec.md @@ -0,0 +1,82 @@ +# module-marketplace-registry Specification + +## Purpose +TBD - created by archiving change marketplace-01-central-module-registry. Update Purpose after archive. +## Requirements +### Requirement: Registry index schema with module metadata + +The system SHALL define an index.json schema for the central registry containing module metadata including ID, namespace, version, download URLs, and checksums. + +#### Scenario: Index includes module with full metadata +- **WHEN** registry index.json is parsed +- **THEN** it SHALL include schema_version field +- **AND** SHALL include modules array with module entries +- **AND** each module SHALL have: id, namespace, name, description, latest_version, core_compatibility, download_url, checksum_sha256 + +#### Scenario: Module ID uses namespace format +- **WHEN** module is listed in registry +- **THEN** id SHALL use format "namespace/name" (e.g., "specfact/backlog") +- **AND** namespace SHALL match separate namespace field + +#### Scenario: Core compatibility uses PEP 440 specifier +- **WHEN** module declares core_compatibility +- **THEN** it SHALL use PEP 440 specifier format (e.g., ">=0.28.0,<1.0.0") +- **AND** SHALL be validated during module installation + +### Requirement: Registry client fetches index from GitHub + +The system SHALL implement a registry client that fetches index.json from the GitHub repository. + +#### Scenario: Client fetches registry index +- **WHEN** client calls fetch_registry_index() +- **THEN** it SHALL request index.json from GitHub raw content URL +- **AND** SHALL parse JSON response +- **AND** SHALL return dict with schema_version and modules + +#### Scenario: Network unavailable during fetch +- **WHEN** client attempts to fetch index but network is unavailable +- **THEN** it SHALL log warning "Registry unavailable, using offline mode" +- **AND** SHALL NOT raise exception +- **AND** SHALL return None or empty index + +#### Scenario: Invalid JSON in registry index +- **WHEN** registry index contains invalid JSON +- **THEN** client SHALL log error with parse details +- **AND** SHALL raise ValueError with message "Invalid registry index format" + +### Requirement: Module download with checksum verification + +The system SHALL download module tarballs from registry URLs and verify checksums before extraction. + +#### Scenario: Download module tarball +- **WHEN** download_module() is called with module_id and version +- **THEN** system SHALL look up module in registry index +- **AND** SHALL download tarball from download_url to temp directory +- **AND** SHALL verify checksum matches checksum_sha256 from index + +#### Scenario: Checksum mismatch detected +- **WHEN** downloaded tarball checksum does not match index +- **THEN** system SHALL delete downloaded file +- **AND** SHALL raise SecurityError with message "Checksum mismatch for module X" +- **AND** SHALL NOT proceed with installation + +#### Scenario: Module not found in registry +- **WHEN** download_module() is called with non-existent module_id +- **THEN** system SHALL raise ValueError with message "Module 'X' not found in registry" +- **AND** SHALL suggest using `specfact module search` to find available modules + +### Requirement: Offline-first registry access + +The system SHALL support offline operation with graceful degradation when registry is unavailable. + +#### Scenario: Registry fetch fails gracefully +- **WHEN** registry fetch fails due to network issues +- **THEN** system SHALL log warning +- **AND** SHALL continue with built-in modules only +- **AND** SHALL NOT block CLI functionality + +#### Scenario: Install command fails offline +- **WHEN** user runs install command but registry unavailable +- **THEN** system SHALL display error "Cannot install from marketplace (offline)" +- **AND** SHALL suggest installing from local tarball (future feature) + diff --git a/openspec/specs/module-packages/spec.md b/openspec/specs/module-packages/spec.md index bc3787d6..5a530832 100644 --- a/openspec/specs/module-packages/spec.md +++ b/openspec/specs/module-packages/spec.md @@ -213,3 +213,17 @@ The system SHALL extend `ModulePackageMetadata` to include optional `schema_exte - **THEN** module SHALL load successfully - **AND** no extensions registered for that module +### Requirement: Module discovery supports multiple source locations + +The system SHALL extend module discovery to scan built-in, marketplace, and custom paths with source tracking. + +#### Scenario: Discovery function returns source information +- **WHEN** discover_package_metadata() finds a module +- **THEN** it SHALL include source field in metadata +- **AND** source SHALL be "builtin", "marketplace", or "custom" + +#### Scenario: Registry stores module source +- **WHEN** module is registered +- **THEN** registry SHALL persist source information +- **AND** SHALL be queryable via module list command + diff --git a/openspec/specs/multi-location-discovery/spec.md b/openspec/specs/multi-location-discovery/spec.md new file mode 100644 index 00000000..4e11a27e --- /dev/null +++ b/openspec/specs/multi-location-discovery/spec.md @@ -0,0 +1,52 @@ +# multi-location-discovery Specification + +## Purpose +TBD - created by archiving change marketplace-01-central-module-registry. Update Purpose after archive. +## Requirements +### Requirement: Discover modules from multiple paths + +The system SHALL discover modules from built-in, marketplace, and custom paths in priority order. + +#### Scenario: Discovery scans all three locations +- **WHEN** module discovery runs +- **THEN** system SHALL scan {site-packages}/specfact_cli/modules/ +- **AND** SHALL scan ~/.specfact/marketplace-modules/ if exists +- **AND** SHALL scan ~/.specfact/custom-modules/ if exists + +#### Scenario: Built-in modules take priority +- **WHEN** module "backlog" exists in both built-in and marketplace +- **THEN** system SHALL use built-in version +- **AND** SHALL log warning about shadowed marketplace module + +#### Scenario: Marketplace modules discovered when no built-in +- **WHEN** module exists in marketplace but not built-in +- **THEN** system SHALL discover and register marketplace module + +### Requirement: Source tracking for discovered modules + +The system SHALL track the source (built-in/marketplace/custom) for each discovered module. + +#### Scenario: Module metadata includes source +- **WHEN** module is discovered +- **THEN** system SHALL record source in module metadata +- **AND** source SHALL be one of: "builtin", "marketplace", "custom" + +#### Scenario: List command shows module source +- **WHEN** user runs `specfact module list` +- **THEN** each module SHALL display source indicator +- **AND** built-in modules SHALL be marked as "[built-in]" + +### Requirement: Graceful handling of missing paths + +The system SHALL handle missing marketplace or custom paths without errors. + +#### Scenario: Marketplace path does not exist +- **WHEN** ~/.specfact/marketplace-modules/ does not exist +- **THEN** discovery SHALL continue with built-in modules only +- **AND** SHALL NOT log warning (normal state) + +#### Scenario: Custom path does not exist +- **WHEN** ~/.specfact/custom-modules/ does not exist +- **THEN** discovery SHALL continue normally +- **AND** SHALL NOT raise exception + diff --git a/pyproject.toml b/pyproject.toml index 12452a7e..efcad9b2 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -4,7 +4,7 @@ build-backend = "hatchling.build" [project] name = "specfact-cli" -version = "0.35.0" +version = "0.36.0" description = "The swiss knife CLI for agile DevOps teams. Keep backlog, specs, tests, and code in sync with validation and contract enforcement for new projects and long-lived codebases." readme = "README.md" requires-python = ">=3.11" diff --git a/setup.py b/setup.py index 94c550cc..62a8c181 100644 --- a/setup.py +++ b/setup.py @@ -7,7 +7,7 @@ if __name__ == "__main__": _setup = setup( name="specfact-cli", - version="0.35.0", + version="0.36.0", description=( "The swiss knife CLI for agile DevOps teams. Keep backlog, specs, tests, and code in sync with " "validation and contract enforcement for new projects and long-lived codebases." diff --git a/src/__init__.py b/src/__init__.py index d82b7ce8..ed7b37cf 100644 --- a/src/__init__.py +++ b/src/__init__.py @@ -3,4 +3,4 @@ """ # Package version: keep in sync with pyproject.toml, setup.py, src/specfact_cli/__init__.py -__version__ = "0.35.0" +__version__ = "0.36.0" diff --git a/src/specfact_cli/__init__.py b/src/specfact_cli/__init__.py index b44ef2f6..ec9195f7 100644 --- a/src/specfact_cli/__init__.py +++ b/src/specfact_cli/__init__.py @@ -8,6 +8,6 @@ - Supporting agile ceremonies and team workflows """ -__version__ = "0.35.0" +__version__ = "0.36.0" __all__ = ["__version__"] diff --git a/src/specfact_cli/adapters/ado.py b/src/specfact_cli/adapters/ado.py index d4550ddf..ab30030d 100644 --- a/src/specfact_cli/adapters/ado.py +++ b/src/specfact_cli/adapters/ado.py @@ -1657,7 +1657,10 @@ def _create_work_item_from_proposal( ] try: - response = requests.patch(url, json=patch_document, headers=headers, timeout=30) + response = self._request_with_retry( + lambda: requests.patch(url, json=patch_document, headers=headers, timeout=30), + retry_on_ambiguous_transport=False, + ) if is_debug_mode(): debug_log_operation( "ado_patch", @@ -1800,8 +1803,9 @@ def _update_work_item_status( patch_document = [{"op": "replace", "path": "/fields/System.State", "value": ado_state}] try: - response = requests.patch(url, json=patch_document, headers=headers, timeout=30) - response.raise_for_status() + response = self._request_with_retry( + lambda: requests.patch(url, json=patch_document, headers=headers, timeout=30) + ) work_item_data = response.json() work_item_url = work_item_data.get("_links", {}).get("html", {}).get("href", "") @@ -1933,8 +1937,9 @@ def _update_work_item_body( ] try: - response = requests.patch(url, json=patch_document, headers=headers, timeout=30) - response.raise_for_status() + response = self._request_with_retry( + lambda: requests.patch(url, json=patch_document, headers=headers, timeout=30) + ) work_item_data = response.json() work_item_url = work_item_data.get("_links", {}).get("html", {}).get("href", "") @@ -2040,8 +2045,9 @@ def sync_status_to_ado( patch_document = [{"op": "replace", "path": "/fields/System.State", "value": ado_state}] try: - response = requests.patch(url, json=patch_document, headers=headers, timeout=30) - response.raise_for_status() + response = self._request_with_retry( + lambda: requests.patch(url, json=patch_document, headers=headers, timeout=30) + ) work_item_data = response.json() work_item_url = work_item_data.get("_links", {}).get("html", {}).get("href", "") @@ -2442,8 +2448,10 @@ def _add_work_item_comment( comment_body = {"text": comment_text} try: - response = requests.post(url, json=comment_body, headers=headers, timeout=30) - response.raise_for_status() + response = self._request_with_retry( + lambda: requests.post(url, json=comment_body, headers=headers, timeout=30), + retry_on_ambiguous_transport=False, + ) comment_data = response.json() comment_id = comment_data.get("id") @@ -2690,6 +2698,7 @@ def _resolve_sprint_filter( self, sprint_filter: str | None, items: list[BacklogItem], + apply_current_when_missing: bool = True, ) -> tuple[str | None, list[BacklogItem]]: """ Resolve sprint filter with path matching and ambiguity detection. @@ -2705,6 +2714,8 @@ def _resolve_sprint_filter( ValueError: If ambiguous sprint name match is detected """ if not sprint_filter: + if not apply_current_when_missing: + return None, items # No sprint filter - try to get current iteration current_iteration = self._get_current_iteration() if current_iteration: @@ -2859,13 +2870,16 @@ def fetch_backlog_items(self, filters: BacklogFilters) -> list[BacklogItem]: # Sprint will be resolved post-fetch to handle ambiguity pass else: - # No sprint/iteration - try current iteration - current_iteration = self._get_current_iteration() - if current_iteration: - resolved_iteration = current_iteration - conditions.append(f"[System.IterationPath] = '{resolved_iteration}'") - else: - console.print("[yellow]⚠ No current iteration found and no sprint/iteration filter provided[/yellow]") + # No sprint/iteration - optionally use current iteration default + if getattr(filters, "use_current_iteration_default", True): + current_iteration = self._get_current_iteration() + if current_iteration: + resolved_iteration = current_iteration + conditions.append(f"[System.IterationPath] = '{resolved_iteration}'") + else: + console.print( + "[yellow]⚠ No current iteration found and no sprint/iteration filter provided[/yellow]" + ) if conditions: wiql_parts.append("AND " + " AND ".join(conditions)) @@ -3113,7 +3127,11 @@ def fetch_backlog_items(self, filters: BacklogFilters) -> list[BacklogItem]: # Sprint filtering with path matching and ambiguity detection if filters.sprint: try: - _, filtered_items = self._resolve_sprint_filter(filters.sprint, filtered_items) + _, filtered_items = self._resolve_sprint_filter( + filters.sprint, + filtered_items, + apply_current_when_missing=getattr(filters, "use_current_iteration_default", True), + ) except ValueError as e: # Ambiguous sprint match - raise with clear error message console.print(f"[red]Error:[/red] {e}") @@ -3137,6 +3155,115 @@ def fetch_backlog_items(self, filters: BacklogFilters) -> list[BacklogItem]: return filtered_items + @beartype + @require( + lambda project_id: isinstance(project_id, str) and len(project_id.strip()) > 0, "project_id must be non-empty" + ) + @require(lambda payload: isinstance(payload, dict), "payload must be dict") + @ensure(lambda result: isinstance(result, dict), "Must return dict") + def create_issue(self, project_id: str, payload: dict[str, Any]) -> dict[str, Any]: + """Create an Azure DevOps work item from provider-agnostic backlog payload.""" + org, project = self._resolve_graph_project_context(project_id) + if not self.api_token: + raise ValueError("Azure DevOps API token is required") + + title = str(payload.get("title") or "").strip() + if not title: + raise ValueError("payload.title is required") + + raw_type = str(payload.get("type") or "task").strip().lower() + type_mapping = { + "epic": "Epic", + "feature": "Feature", + "story": "User Story", + "user story": "User Story", + "task": "Task", + "bug": "Bug", + "spike": "Task", + } + work_item_type = type_mapping.get(raw_type, "Task") + + description = str(payload.get("description") or payload.get("body") or "").strip() + description_format = str(payload.get("description_format") or "markdown").strip().lower() + field_rendering_format = "Markdown" if description_format != "classic" else "Html" + patch_document: list[dict[str, Any]] = [ + {"op": "add", "path": "/fields/System.Title", "value": title}, + {"op": "add", "path": "/fields/System.Description", "value": description}, + {"op": "add", "path": "/multilineFieldsFormat/System.Description", "value": field_rendering_format}, + ] + + acceptance_criteria = str(payload.get("acceptance_criteria") or "").strip() + if acceptance_criteria: + patch_document.append( + { + "op": "add", + "path": "/fields/Microsoft.VSTS.Common.AcceptanceCriteria", + "value": acceptance_criteria, + } + ) + + priority = payload.get("priority") + if priority not in (None, ""): + patch_document.append( + { + "op": "add", + "path": "/fields/Microsoft.VSTS.Common.Priority", + "value": priority, + } + ) + + story_points = payload.get("story_points") + if story_points is not None: + patch_document.append( + { + "op": "add", + "path": "/fields/Microsoft.VSTS.Scheduling.StoryPoints", + "value": story_points, + } + ) + + sprint = str(payload.get("sprint") or "").strip() + if sprint: + patch_document.append( + { + "op": "add", + "path": "/fields/System.IterationPath", + "value": sprint, + } + ) + + parent_id = str(payload.get("parent_id") or "").strip() + if parent_id: + parent_url = f"{self.base_url}/{org}/{project}/_apis/wit/workItems/{parent_id}" + patch_document.append( + { + "op": "add", + "path": "/relations/-", + "value": {"rel": "System.LinkTypes.Hierarchy-Reverse", "url": parent_url}, + } + ) + + url = f"{self.base_url}/{org}/{project}/_apis/wit/workitems/${work_item_type}?api-version=7.1" + headers = { + "Content-Type": "application/json-patch+json", + **self._auth_headers(), + } + response = self._request_with_retry( + lambda: requests.patch(url, json=patch_document, headers=headers, timeout=30), + retry_on_ambiguous_transport=False, + ) + created = response.json() + + created_id = str(created.get("id") or "") + html_url = str(created.get("_links", {}).get("html", {}).get("href") or "") + fallback_url = str(created.get("url") or "") + + return { + "id": created_id, + "key": created_id, + "url": html_url or fallback_url, + } + @beartype @require(lambda project_id: isinstance(project_id, str) and len(project_id) > 0, "project_id must be non-empty") @ensure(lambda result: isinstance(result, list), "Must return list") @@ -3429,8 +3556,9 @@ def update_backlog_item(self, item: BacklogItem, update_fields: list[str] | None # Update work item try: - response = requests.patch(url, headers=headers, json=operations, timeout=30) - response.raise_for_status() + response = self._request_with_retry( + lambda: requests.patch(url, headers=headers, json=operations, timeout=30) + ) except requests.HTTPError as e: user_msg = _log_ado_patch_failure(e.response, operations, url) e.ado_user_message = user_msg diff --git a/src/specfact_cli/adapters/backlog_base.py b/src/specfact_cli/adapters/backlog_base.py index 111fb073..25b94275 100644 --- a/src/specfact_cli/adapters/backlog_base.py +++ b/src/specfact_cli/adapters/backlog_base.py @@ -11,10 +11,12 @@ from __future__ import annotations +import time from abc import ABC, abstractmethod from datetime import UTC, datetime from typing import Any +import requests from beartype import beartype from icontract import ensure, require @@ -35,6 +37,10 @@ class BacklogAdapterMixin(ABC): and implement the abstract methods to provide tool-specific implementations. """ + RETRYABLE_HTTP_STATUSES: tuple[int, ...] = (429, 500, 502, 503, 504) + RETRY_DEFAULT_ATTEMPTS: int = 3 + RETRY_BACKOFF_SECONDS: float = 0.5 + @abstractmethod @beartype @require(lambda status: isinstance(status, str) and len(status) > 0, "Status must be non-empty string") @@ -140,6 +146,71 @@ def map_backlog_state_between_adapters( return target_state + @beartype + @require(lambda attempts: attempts is None or attempts > 0, "attempts must be > 0 when provided") + @require( + lambda backoff_seconds: backoff_seconds is None or backoff_seconds >= 0, + "backoff_seconds must be >= 0 when provided", + ) + @require( + lambda retry_on_ambiguous_transport: isinstance(retry_on_ambiguous_transport, bool), "retry flag must be bool" + ) + @ensure(lambda result: hasattr(result, "raise_for_status"), "Result must support raise_for_status") + def _request_with_retry( + self, + request_callable: Any, + *, + attempts: int | None = None, + backoff_seconds: float | None = None, + retry_on_ambiguous_transport: bool = True, + ) -> Any: + """Execute HTTP request with central retry policy for transient failures. + + For non-idempotent writes, callers can disable transport-error replay by passing + retry_on_ambiguous_transport=False to avoid accidental duplicate side effects. + """ + max_attempts = attempts or self.RETRY_DEFAULT_ATTEMPTS + delay = backoff_seconds if backoff_seconds is not None else self.RETRY_BACKOFF_SECONDS + + last_error: Exception | None = None + for attempt in range(1, max_attempts + 1): + try: + response = request_callable() + status_code = int(getattr(response, "status_code", 0) or 0) + if status_code in self.RETRYABLE_HTTP_STATUSES and attempt < max_attempts: + time.sleep(delay * (2 ** (attempt - 1))) + continue + response.raise_for_status() + return response + except requests.HTTPError as error: + status_code = int(getattr(error.response, "status_code", 0) or 0) + is_transient = status_code in self.RETRYABLE_HTTP_STATUSES + last_error = error + if is_transient and attempt < max_attempts: + time.sleep(delay * (2 ** (attempt - 1))) + continue + raise + except (requests.Timeout, requests.ConnectionError) as error: + last_error = error + if retry_on_ambiguous_transport and attempt < max_attempts: + time.sleep(delay * (2 ** (attempt - 1))) + continue + raise + + if last_error is not None: + raise last_error + raise RuntimeError("Retry logic failed without response or error") + + @abstractmethod + @beartype + @require( + lambda project_id: isinstance(project_id, str) and len(project_id.strip()) > 0, "Project ID must be non-empty" + ) + @require(lambda payload: isinstance(payload, dict), "Payload must be dict") + @ensure(lambda result: isinstance(result, dict), "Must return created issue metadata dict") + def create_issue(self, project_id: str, payload: dict[str, Any]) -> dict[str, Any]: + """Create backlog issue/work item from provider-agnostic payload.""" + @abstractmethod @beartype @require(lambda item_data: isinstance(item_data, dict), "Item data must be dict") diff --git a/src/specfact_cli/adapters/github.py b/src/specfact_cli/adapters/github.py index 8296a86b..0d74b7c7 100644 --- a/src/specfact_cli/adapters/github.py +++ b/src/specfact_cli/adapters/github.py @@ -1161,8 +1161,10 @@ def _create_issue_from_proposal( payload["state_reason"] = state_reason try: - response = requests.post(url, json=payload, headers=headers, timeout=30) - response.raise_for_status() + response = self._request_with_retry( + lambda: requests.post(url, json=payload, headers=headers, timeout=30), + retry_on_ambiguous_transport=False, + ) issue_data = response.json() # If issue was created as closed, add a comment explaining why @@ -1281,8 +1283,7 @@ def _update_issue_status( payload["state_reason"] = state_reason try: - response = requests.patch(url, json=payload, headers=headers, timeout=30) - response.raise_for_status() + response = self._request_with_retry(lambda: requests.patch(url, json=payload, headers=headers, timeout=30)) issue_data = response.json() # Add comment explaining status change @@ -1346,8 +1347,10 @@ def _add_issue_comment(self, repo_owner: str, repo_name: str, issue_number: int, payload = {"body": comment} try: - response = requests.post(url, json=payload, headers=headers, timeout=30) - response.raise_for_status() + self._request_with_retry( + lambda: requests.post(url, json=payload, headers=headers, timeout=30), + retry_on_ambiguous_transport=False, + ) except requests.RequestException as e: # Log but don't fail - comment is non-critical console.print(f"[yellow]⚠[/yellow] Failed to add comment to issue #{issue_number}: {e}") @@ -1509,8 +1512,7 @@ def _update_issue_body( payload["state_reason"] = state_reason try: - response = requests.patch(url, json=payload, headers=headers, timeout=30) - response.raise_for_status() + response = self._request_with_retry(lambda: requests.patch(url, json=payload, headers=headers, timeout=30)) issue_data = response.json() # Add comment if issue was closed due to status change, or if already closed with applied status @@ -1684,8 +1686,7 @@ def sync_status_to_github( patch_url = f"{self.base_url}/repos/{repo_owner}/{repo_name}/issues/{issue_number}" patch_payload = {"labels": all_labels} - patch_response = requests.patch(patch_url, json=patch_payload, headers=headers, timeout=30) - patch_response.raise_for_status() + self._request_with_retry(lambda: requests.patch(patch_url, json=patch_payload, headers=headers, timeout=30)) return { "issue_number": current_issue.get("number", issue_number), # Use API response number (int) @@ -2639,6 +2640,250 @@ def fetch_backlog_items(self, filters: BacklogFilters) -> list[BacklogItem]: return filtered_items + @beartype + def _github_graphql(self, query: str, variables: dict[str, Any]) -> dict[str, Any]: + """Execute GitHub GraphQL request and return `data` payload.""" + headers = { + "Authorization": f"token {self.api_token}", + "Accept": "application/vnd.github+json", + } + response = self._request_with_retry( + lambda: requests.post( + f"{self.base_url}/graphql", + json={"query": query, "variables": variables}, + headers=headers, + timeout=30, + ) + ) + payload = response.json() + if not isinstance(payload, dict): + raise ValueError("GitHub GraphQL response must be an object") + errors = payload.get("errors") + if isinstance(errors, list) and errors: + raise ValueError(f"GitHub GraphQL errors: {errors}") + data = payload.get("data") + return data if isinstance(data, dict) else {} + + @beartype + def _try_set_github_issue_type( + self, + issue_node_id: str, + issue_type: str, + provider_fields: dict[str, Any] | None, + ) -> None: + """Best-effort GitHub issue type update using repository issue-type ids.""" + if not issue_node_id or not isinstance(provider_fields, dict): + return + + issue_cfg = provider_fields.get("github_issue_types") + if not isinstance(issue_cfg, dict): + return + type_ids = issue_cfg.get("type_ids") + if not isinstance(type_ids, dict): + return + + issue_type_id = str(type_ids.get(issue_type) or type_ids.get(issue_type.lower()) or "").strip() + if not issue_type_id: + return + + mutation = ( + "mutation($issueId: ID!, $issueTypeId: ID!) { " + "updateIssue(input: {id: $issueId, issueTypeId: $issueTypeId}) { issue { id } } " + "}" + ) + try: + self._github_graphql( + mutation, + {"issueId": issue_node_id, "issueTypeId": issue_type_id}, + ) + except (requests.RequestException, ValueError) as error: + console.print(f"[yellow]⚠[/yellow] Could not set GitHub issue Type automatically: {error}") + + @beartype + def _try_link_github_sub_issue( + self, + owner: str, + repo: str, + parent_ref: Any, + sub_issue_node_id: str, + ) -> None: + """Best-effort native GitHub parent/sub-issue link using sidebar relationship.""" + if not sub_issue_node_id: + return + + parent_raw = str(parent_ref or "").strip() + if not parent_raw: + return + + parent_number_text = parent_raw.removeprefix("#") + if not parent_number_text.isdigit(): + return + parent_number = int(parent_number_text) + + parent_query = ( + "query($owner:String!, $repo:String!, $number:Int!) { " + "repository(owner:$owner, name:$repo) { issue(number:$number) { id } } " + "}" + ) + link_mutation = ( + "mutation($parentIssueId:ID!, $subIssueId:ID!) { " + "addSubIssue(input:{ issueId:$parentIssueId, subIssueId:$subIssueId, replaceParent:true }) { " + "issue { id } subIssue { id } " + "} " + "}" + ) + + try: + parent_data = self._github_graphql( + parent_query, + {"owner": owner, "repo": repo, "number": parent_number}, + ) + repository = parent_data.get("repository") if isinstance(parent_data, dict) else None + issue = repository.get("issue") if isinstance(repository, dict) else None + parent_issue_id = str(issue.get("id") or "").strip() if isinstance(issue, dict) else "" + if not parent_issue_id: + return + self._github_graphql( + link_mutation, + {"parentIssueId": parent_issue_id, "subIssueId": sub_issue_node_id}, + ) + except (requests.RequestException, ValueError) as error: + console.print(f"[yellow]⚠[/yellow] Could not create native GitHub parent/sub-issue link: {error}") + + def _try_set_github_project_type_field( + self, + issue_node_id: str, + issue_type: str, + provider_fields: dict[str, Any] | None, + ) -> None: + """Best-effort GitHub Projects v2 Type field update for created issues.""" + if not issue_node_id or not isinstance(provider_fields, dict): + return + + project_cfg = provider_fields.get("github_project_v2") + if not isinstance(project_cfg, dict): + return + + project_id = str(project_cfg.get("project_id") or "").strip() + type_field_id = str(project_cfg.get("type_field_id") or "").strip() + option_map = project_cfg.get("type_option_ids") + if not isinstance(option_map, dict): + return + + option_id = str(option_map.get(issue_type) or option_map.get(issue_type.lower()) or "").strip() + if not project_id or not type_field_id or not option_id: + return + + add_item_mutation = ( + "mutation($projectId: ID!, $contentId: ID!) { " + "addProjectV2ItemById(input: {projectId: $projectId, contentId: $contentId}) { item { id } }" + " }" + ) + set_type_mutation = ( + "mutation($projectId: ID!, $itemId: ID!, $fieldId: ID!, $optionId: String!) { " + "updateProjectV2ItemFieldValue(input: {" + "projectId: $projectId, itemId: $itemId, fieldId: $fieldId, " + "value: { singleSelectOptionId: $optionId }" + "}) { projectV2Item { id } }" + " }" + ) + + try: + add_data = self._github_graphql( + add_item_mutation, + {"projectId": project_id, "contentId": issue_node_id}, + ) + add_result = add_data.get("addProjectV2ItemById") if isinstance(add_data, dict) else None + item = add_result.get("item") if isinstance(add_result, dict) else None + item_id = str(item.get("id") or "").strip() if isinstance(item, dict) else "" + if not item_id: + return + self._github_graphql( + set_type_mutation, + { + "projectId": project_id, + "itemId": item_id, + "fieldId": type_field_id, + "optionId": option_id, + }, + ) + except (requests.RequestException, ValueError) as error: + console.print(f"[yellow]⚠[/yellow] Could not set GitHub Projects Type field automatically: {error}") + + @beartype + @require( + lambda project_id: isinstance(project_id, str) and len(project_id.strip()) > 0, "project_id must be non-empty" + ) + @require(lambda payload: isinstance(payload, dict), "payload must be dict") + @ensure(lambda result: isinstance(result, dict), "Must return dict") + def create_issue(self, project_id: str, payload: dict[str, Any]) -> dict[str, Any]: + """Create a GitHub issue from provider-agnostic backlog payload.""" + owner, repo = project_id.split("/", 1) if "/" in project_id else (self.repo_owner, self.repo_name) + if not owner or not repo: + raise ValueError( + "GitHub project_id must be '<owner>/<repo>' or adapter must be configured with repo_owner/repo_name" + ) + if not self.api_token: + raise ValueError("GitHub API token required to create issues") + + title = str(payload.get("title") or "").strip() + if not title: + raise ValueError("payload.title is required") + + issue_type = str(payload.get("type") or "task").strip().lower() + description_format = str(payload.get("description_format") or "markdown").strip().lower() + body = str(payload.get("description") or payload.get("body") or "").strip() + + acceptance_criteria = str(payload.get("acceptance_criteria") or "").strip() + if acceptance_criteria: + if description_format == "classic": + body = f"{body}\n\nAcceptance Criteria:\n{acceptance_criteria}".strip() + else: + body = f"{body}\n\n## Acceptance Criteria\n{acceptance_criteria}".strip() + + parent_id = payload.get("parent_id") + if parent_id: + parent_line = f"Parent: #{parent_id}" + body = f"{body}\n\n{parent_line}".strip() if body else parent_line + + labels = [issue_type] if issue_type else [] + priority = str(payload.get("priority") or "").strip() + if priority: + labels.append(f"priority:{priority.lower()}") + story_points = payload.get("story_points") + if story_points is not None: + labels.append(f"story-points:{story_points}") + url = f"{self.base_url}/repos/{owner}/{repo}/issues" + headers = { + "Authorization": f"token {self.api_token}", + "Accept": "application/vnd.github.v3+json", + } + response = self._request_with_retry( + lambda: requests.post( + url, + json={"title": title, "body": body, "labels": labels}, + headers=headers, + timeout=30, + ), + retry_on_ambiguous_transport=False, + ) + created = response.json() + issue_node_id = str(created.get("node_id") or "").strip() + if parent_id: + self._try_link_github_sub_issue(owner, repo, parent_id, issue_node_id) + + provider_fields = payload.get("provider_fields") + if isinstance(provider_fields, dict): + self._try_set_github_issue_type(issue_node_id, issue_type, provider_fields) + self._try_set_github_project_type_field(issue_node_id, issue_type, provider_fields) + + canonical_issue_number = str(created.get("number") or created.get("id") or "") + return { + "id": canonical_issue_number, + "key": canonical_issue_number, + "url": str(created.get("html_url") or created.get("url") or ""), + } + @beartype @require(lambda project_id: isinstance(project_id, str) and len(project_id) > 0, "project_id must be non-empty") @ensure(lambda result: isinstance(result, list), "Must return list") @@ -2770,6 +3015,13 @@ def _normalize(raw_value: str) -> str | None: mapped = _normalize(value) if mapped: return mapped + if isinstance(value, dict): + for candidate_key in ("name", "title"): + candidate_value = value.get(candidate_key) + if isinstance(candidate_value, str): + mapped = _normalize(candidate_value) + if mapped: + return mapped tags = issue_payload.get("tags") if isinstance(tags, list): @@ -2962,8 +3214,7 @@ def update_backlog_item(self, item: BacklogItem, update_fields: list[str] | None payload["state"] = item.state # Update issue - response = requests.patch(url, headers=headers, json=payload, timeout=30) - response.raise_for_status() + response = self._request_with_retry(lambda: requests.patch(url, headers=headers, json=payload, timeout=30)) updated_issue = response.json() # Convert back to BacklogItem diff --git a/src/specfact_cli/backlog/filters.py b/src/specfact_cli/backlog/filters.py index 0420f4ba..41411421 100644 --- a/src/specfact_cli/backlog/filters.py +++ b/src/specfact_cli/backlog/filters.py @@ -42,6 +42,8 @@ class BacklogFilters: """Filter by release identifier.""" limit: int | None = None """Maximum number of items to fetch (applied after filtering).""" + use_current_iteration_default: bool = True + """When sprint is omitted, whether provider may auto-resolve current iteration.""" @staticmethod def normalize_filter_value(value: str | None) -> str | None: diff --git a/src/specfact_cli/backlog/mappers/github_mapper.py b/src/specfact_cli/backlog/mappers/github_mapper.py index 7b09645d..b29ff360 100644 --- a/src/specfact_cli/backlog/mappers/github_mapper.py +++ b/src/specfact_cli/backlog/mappers/github_mapper.py @@ -43,7 +43,8 @@ def extract_fields(self, item_data: dict[str, Any]) -> dict[str, Any]: Dict mapping canonical field names to extracted values """ body = item_data.get("body", "") or "" - labels = item_data.get("labels", []) + labels_raw = item_data.get("labels", []) + labels = labels_raw if isinstance(labels_raw, list) else [] label_names = [label.get("name", "") if isinstance(label, dict) else str(label) for label in labels if label] fields: dict[str, Any] = {} @@ -211,23 +212,44 @@ def _extract_numeric_field(self, body: str, field_name: str) -> int | None: Returns: Numeric value or None if not found """ - # Pattern 1: ## Field Name\n\n<number> - section_pattern = rf"^##+\s+{re.escape(field_name)}\s*$\n\s*(\d+)" - match = re.search(section_pattern, body, re.MULTILINE) - if match: - try: - return int(match.group(1)) - except (ValueError, IndexError): - pass - - # Pattern 2: **Field Name:** <number> - inline_pattern = rf"\*\*{re.escape(field_name)}:\*\*\s*(\d+)" - match = re.search(inline_pattern, body, re.IGNORECASE) - if match: - try: - return int(match.group(1)) - except (ValueError, IndexError): - pass + normalized_field = field_name.strip().lower() + if not normalized_field: + return None + + lines = body.splitlines() + + # Pattern 1: markdown section heading followed by a numeric line. + for idx, raw_line in enumerate(lines): + line = raw_line.strip() + if not line.startswith("##"): + continue + heading = line.lstrip("#").strip().lower() + if heading != normalized_field: + continue + for next_line in lines[idx + 1 :]: + candidate = next_line.strip() + if not candidate: + continue + match = re.match(r"^(\d+)", candidate) + if match: + try: + return int(match.group(1)) + except (ValueError, IndexError): + return None + break + + # Pattern 2: inline markdown label, e.g. **Field Name:** 8 + inline_prefix = f"**{normalized_field}:**" + for raw_line in lines: + line = raw_line.strip() + if line.lower().startswith(inline_prefix): + remainder = line[len(inline_prefix) :].strip() + match = re.match(r"^(\d+)", remainder) + if match: + try: + return int(match.group(1)) + except (ValueError, IndexError): + return None return None @@ -268,7 +290,12 @@ def _extract_work_item_type(self, label_names: list[str], item_data: dict[str, A # Check issue type metadata if available issue_type = item_data.get("issue_type") or item_data.get("type") - if issue_type: - return str(issue_type) + if isinstance(issue_type, str) and issue_type.strip(): + return issue_type.strip() + if isinstance(issue_type, dict): + for key in ("name", "title"): + candidate = issue_type.get(key) + if isinstance(candidate, str) and candidate.strip(): + return candidate.strip() return None diff --git a/src/specfact_cli/modules/analyze/module-package.yaml b/src/specfact_cli/modules/analyze/module-package.yaml index b9b23efb..7bd869eb 100644 --- a/src/specfact_cli/modules/analyze/module-package.yaml +++ b/src/specfact_cli/modules/analyze/module-package.yaml @@ -1,5 +1,5 @@ name: analyze -version: 0.35.0 +version: 0.36.0 commands: - analyze command_help: diff --git a/src/specfact_cli/modules/auth/module-package.yaml b/src/specfact_cli/modules/auth/module-package.yaml index 161447b1..495ed7ae 100644 --- a/src/specfact_cli/modules/auth/module-package.yaml +++ b/src/specfact_cli/modules/auth/module-package.yaml @@ -1,5 +1,5 @@ name: auth -version: 0.35.0 +version: 0.36.0 commands: - auth command_help: diff --git a/src/specfact_cli/modules/auth/src/commands.py b/src/specfact_cli/modules/auth/src/commands.py index 763894c3..9b8fa6f9 100644 --- a/src/specfact_cli/modules/auth/src/commands.py +++ b/src/specfact_cli/modules/auth/src/commands.py @@ -42,7 +42,7 @@ AZURE_DEVOPS_SCOPES = [AZURE_DEVOPS_RESOURCE] DEFAULT_GITHUB_BASE_URL = "https://github.com" DEFAULT_GITHUB_API_URL = "https://api.github.com" -DEFAULT_GITHUB_SCOPES = "repo" +DEFAULT_GITHUB_SCOPES = "repo read:project project" DEFAULT_GITHUB_CLIENT_ID = "Ov23lizkVHsbEIjZKvRD" @@ -589,7 +589,7 @@ def auth_github( scopes: str = typer.Option( DEFAULT_GITHUB_SCOPES, "--scopes", - help="OAuth scopes (comma or space separated)", + help="OAuth scopes (comma or space separated). Default: repo,read:project,project", hidden=True, ), ) -> None: diff --git a/src/specfact_cli/modules/backlog/module-package.yaml b/src/specfact_cli/modules/backlog/module-package.yaml index 4b35d540..229c2fc1 100644 --- a/src/specfact_cli/modules/backlog/module-package.yaml +++ b/src/specfact_cli/modules/backlog/module-package.yaml @@ -1,5 +1,5 @@ name: backlog -version: 0.35.0 +version: 0.36.0 commands: - backlog command_help: diff --git a/src/specfact_cli/modules/backlog/src/commands.py b/src/specfact_cli/modules/backlog/src/commands.py index d03aea1b..28494b46 100644 --- a/src/specfact_cli/modules/backlog/src/commands.py +++ b/src/specfact_cli/modules/backlog/src/commands.py @@ -69,6 +69,7 @@ class _BacklogCommandGroup(TyperGroup): # Compatibility / lower-frequency commands later. "refine": 100, "daily": 110, + "init-config": 118, "map-fields": 120, } @@ -476,6 +477,85 @@ def _load_backlog_config() -> dict[str, Any]: return config +@beartype +def _load_backlog_module_config_file() -> tuple[dict[str, Any], Path]: + """Load canonical backlog module config from `.specfact/backlog-config.yaml`.""" + config_dir = os.environ.get("SPECFACT_CONFIG_DIR") + search_paths: list[Path] = [] + if config_dir: + search_paths.append(Path(config_dir)) + search_paths.append(Path.cwd() / ".specfact") + + for base in search_paths: + path = base / "backlog-config.yaml" + if path.is_file(): + try: + data = yaml.safe_load(path.read_text(encoding="utf-8")) or {} + if isinstance(data, dict): + return data, path + except Exception as exc: + debug_log_operation("config_load", str(path), "error", error=repr(exc)) + return {}, path + + default_path = search_paths[-1] / "backlog-config.yaml" + return {}, default_path + + +@beartype +def _save_backlog_module_config_file(config: dict[str, Any], path: Path) -> None: + """Persist canonical backlog module config to `.specfact/backlog-config.yaml`.""" + path.parent.mkdir(parents=True, exist_ok=True) + path.write_text(yaml.dump(config, sort_keys=False), encoding="utf-8") + + +@beartype +def _upsert_backlog_provider_settings( + provider: str, + settings_update: dict[str, Any], + *, + project_id: str | None = None, + adapter: str | None = None, +) -> Path: + """Merge provider settings into `.specfact/backlog-config.yaml` and save.""" + cfg, path = _load_backlog_module_config_file() + backlog_config = cfg.get("backlog_config") + if not isinstance(backlog_config, dict): + backlog_config = {} + providers = backlog_config.get("providers") + if not isinstance(providers, dict): + providers = {} + + provider_cfg = providers.get(provider) + if not isinstance(provider_cfg, dict): + provider_cfg = {} + + if adapter: + provider_cfg["adapter"] = adapter + if project_id: + provider_cfg["project_id"] = project_id + + settings = provider_cfg.get("settings") + if not isinstance(settings, dict): + settings = {} + + def _deep_merge(dst: dict[str, Any], src: dict[str, Any]) -> dict[str, Any]: + for key, value in src.items(): + if isinstance(value, dict) and isinstance(dst.get(key), dict): + _deep_merge(dst[key], value) + else: + dst[key] = value + return dst + + _deep_merge(settings, settings_update) + provider_cfg["settings"] = settings + providers[provider] = provider_cfg + backlog_config["providers"] = providers + cfg["backlog_config"] = backlog_config + + _save_backlog_module_config_file(cfg, path) + return path + + @beartype def _resolve_standup_options( cli_state: str | None, @@ -3767,23 +3847,75 @@ def _on_write_comment_progress(index: int, total: int, item: BacklogItem) -> Non raise typer.Exit(1) from e +@app.command("init-config") +@beartype +def init_config( + force: bool = typer.Option(False, "--force", help="Overwrite existing .specfact/backlog-config.yaml"), +) -> None: + """Scaffold `.specfact/backlog-config.yaml` with default backlog provider config structure.""" + cfg, path = _load_backlog_module_config_file() + if path.exists() and not force: + console.print(f"[yellow]⚠[/yellow] Config already exists: {path}") + console.print("[dim]Use --force to overwrite or run `specfact backlog map-fields` to update mappings.[/dim]") + return + + default_config: dict[str, Any] = { + "backlog_config": { + "providers": { + "github": { + "adapter": "github", + "project_id": "", + "settings": { + "github_issue_types": { + "type_ids": {}, + } + }, + }, + "ado": { + "adapter": "ado", + "project_id": "", + "settings": { + "field_mapping_file": ".specfact/templates/backlog/field_mappings/ado_custom.yaml", + }, + }, + } + } + } + + if cfg and not force: + # unreachable due earlier return, keep for safety + default_config = cfg + + _save_backlog_module_config_file(default_config if force or not cfg else cfg, path) + console.print(f"[green]✓[/green] Backlog config initialized: {path}") + console.print("[dim]Next: run `specfact backlog map-fields` to configure provider mappings.[/dim]") + + @app.command("map-fields") -@require( - lambda ado_org, ado_project: ( - isinstance(ado_org, str) and len(ado_org) > 0 and isinstance(ado_project, str) and len(ado_project) > 0 - ), - "ADO org and project must be non-empty strings", -) @beartype def map_fields( - ado_org: str = typer.Option(..., "--ado-org", help="Azure DevOps organization (required)"), - ado_project: str = typer.Option(..., "--ado-project", help="Azure DevOps project (required)"), + ado_org: str | None = typer.Option(None, "--ado-org", help="Azure DevOps organization"), + ado_project: str | None = typer.Option(None, "--ado-project", help="Azure DevOps project"), ado_token: str | None = typer.Option( None, "--ado-token", help="Azure DevOps PAT (optional, uses AZURE_DEVOPS_TOKEN env var if not provided)" ), ado_base_url: str | None = typer.Option( None, "--ado-base-url", help="Azure DevOps base URL (defaults to https://dev.azure.com)" ), + provider: list[str] = typer.Option( + [], "--provider", help="Provider(s) to configure: ado, github (repeatable)", show_default=False + ), + github_project_id: str | None = typer.Option(None, "--github-project-id", help="GitHub owner/repo context"), + github_project_v2_id: str | None = typer.Option(None, "--github-project-v2-id", help="GitHub ProjectV2 node ID"), + github_type_field_id: str | None = typer.Option( + None, "--github-type-field-id", help="GitHub ProjectV2 Type field ID" + ), + github_type_option: list[str] = typer.Option( + [], + "--github-type-option", + help="Type mapping entry '<type>=<option-id>' (repeatable, e.g. --github-type-option task=OPT123)", + show_default=False, + ), reset: bool = typer.Option( False, "--reset", help="Reset custom field mapping to defaults (deletes ado_custom.yaml)" ), @@ -3808,6 +3940,547 @@ def map_fields( from specfact_cli.backlog.mappers.template_config import FieldMappingConfig from specfact_cli.utils.auth_tokens import get_token + def _normalize_provider_selection(raw: Any) -> list[str]: + alias_map = { + "ado": "ado", + "azure devops": "ado", + "azure dev ops": "ado", + "azure dev-ops": "ado", + "azure_devops": "ado", + "azure_dev-ops": "ado", + "github": "github", + } + + def _normalize_item(item: Any) -> str | None: + candidate: Any = item + if isinstance(item, dict) and "value" in item: + candidate = item.get("value") + elif hasattr(item, "value"): + candidate = item.value + + text_item = str(candidate or "").strip().lower() + if not text_item: + return None + if text_item in {"done", "finish", "finished"}: + return None + + cleaned = text_item.replace("(", " ").replace(")", " ").replace("-", " ").replace("_", " ") + cleaned = " ".join(cleaned.split()) + + mapped = alias_map.get(text_item) or alias_map.get(cleaned) + if mapped: + return mapped + + # Last-resort parser for stringified choice objects containing value='ado' / value='github'. + if "value='ado'" in text_item or 'value="ado"' in text_item: + return "ado" + if "value='github'" in text_item or 'value="github"' in text_item: + return "github" + + return None + + normalized: list[str] = [] + if isinstance(raw, list): + for item in raw: + mapped = _normalize_item(item) + if mapped and mapped not in normalized: + normalized.append(mapped) + return normalized + + if isinstance(raw, str): + for part in raw.replace(";", ",").split(","): + mapped = _normalize_item(part) + if mapped and mapped not in normalized: + normalized.append(mapped) + return normalized + + mapped = _normalize_item(raw) + return [mapped] if mapped else [] + + selected_providers = _normalize_provider_selection(provider) + if not selected_providers: + # Preserve historical behavior for existing explicit provider options. + if ado_org or ado_project or ado_token: + selected_providers = ["ado"] + elif github_project_id or github_project_v2_id or github_type_field_id or github_type_option: + selected_providers = ["github"] + else: + try: + import questionary # type: ignore[reportMissingImports] + + picked = questionary.checkbox( + "Select providers to configure", + choices=[ + questionary.Choice(title="Azure DevOps", value="ado"), + questionary.Choice(title="GitHub", value="github"), + ], + ).ask() + selected_providers = _normalize_provider_selection(picked) + if not selected_providers: + console.print("[yellow]⚠[/yellow] No providers selected. Aborting.") + raise typer.Exit(1) + except typer.Exit: + raise + except Exception: + selected_raw = typer.prompt("Providers to configure (comma-separated: ado,github)", default="") + selected_providers = _normalize_provider_selection(selected_raw) + + if not selected_providers: + console.print("[red]Error:[/red] Please select at least one provider (ado or github).") + raise typer.Exit(1) + + if any(item not in {"ado", "github"} for item in selected_providers): + console.print("[red]Error:[/red] --provider supports only: ado, github") + raise typer.Exit(1) + + def _persist_github_custom_mapping_file(repo_issue_types: dict[str, str]) -> Path: + """Create or update github_custom.yaml with inferred type/hierarchy mappings.""" + mapping_file = Path.cwd() / ".specfact" / "templates" / "backlog" / "field_mappings" / "github_custom.yaml" + mapping_file.parent.mkdir(parents=True, exist_ok=True) + + default_payload: dict[str, Any] = { + "type_mapping": { + "epic": "epic", + "feature": "feature", + "story": "story", + "task": "task", + "bug": "bug", + "spike": "spike", + }, + "creation_hierarchy": { + "epic": [], + "feature": ["epic"], + "story": ["feature", "epic"], + "task": ["story", "feature"], + "bug": ["story", "feature", "epic"], + "spike": ["feature", "epic"], + "custom": ["epic", "feature", "story"], + }, + "dependency_rules": { + "blocks": "blocks", + "blocked_by": "blocks", + "relates": "relates_to", + }, + "status_mapping": { + "open": "todo", + "closed": "done", + "todo": "todo", + "in progress": "in_progress", + "done": "done", + }, + } + + existing_payload: dict[str, Any] = {} + if mapping_file.exists(): + try: + loaded = yaml.safe_load(mapping_file.read_text(encoding="utf-8")) or {} + if isinstance(loaded, dict): + existing_payload = loaded + except Exception: + existing_payload = {} + + def _deep_merge(dst: dict[str, Any], src: dict[str, Any]) -> dict[str, Any]: + for key, value in src.items(): + if isinstance(value, dict) and isinstance(dst.get(key), dict): + _deep_merge(dst[key], value) + else: + dst[key] = value + return dst + + final_payload = _deep_merge(dict(default_payload), existing_payload) + + alias_to_canonical = { + "epic": "epic", + "feature": "feature", + "story": "story", + "user story": "story", + "task": "task", + "bug": "bug", + "spike": "spike", + "initiative": "epic", + "requirement": "feature", + } + discovered_map: dict[str, str] = {} + existing_type_mapping = final_payload.get("type_mapping") + if isinstance(existing_type_mapping, dict): + for key, value in existing_type_mapping.items(): + discovered_map[str(key)] = str(value) + for raw_type_name in repo_issue_types: + normalized = str(raw_type_name).strip().lower().replace("_", " ").replace("-", " ") + canonical = alias_to_canonical.get(normalized, "custom") + discovered_map.setdefault(normalized, canonical) + final_payload["type_mapping"] = discovered_map + + mapping_file.write_text(yaml.dump(final_payload, sort_keys=False), encoding="utf-8") + return mapping_file + + def _run_github_mapping_setup() -> None: + token = os.environ.get("GITHUB_TOKEN") + if not token: + stored = get_token("github", allow_expired=False) + token = stored.get("access_token") if isinstance(stored, dict) else None + if not token: + console.print("[red]Error:[/red] GitHub token required for github mapping setup") + console.print("[yellow]Use:[/yellow] specfact auth github or set GITHUB_TOKEN") + raise typer.Exit(1) + + def _github_graphql(query: str, variables: dict[str, Any]) -> dict[str, Any]: + response = requests.post( + "https://api.github.com/graphql", + headers={ + "Authorization": f"Bearer {token}", + "Accept": "application/vnd.github+json", + }, + json={"query": query, "variables": variables}, + timeout=30, + ) + response.raise_for_status() + payload = response.json() + if not isinstance(payload, dict): + raise ValueError("Unexpected GitHub GraphQL response payload") + errors = payload.get("errors") + if isinstance(errors, list) and errors: + messages = [str(err.get("message")) for err in errors if isinstance(err, dict) and err.get("message")] + combined = "; ".join(messages) + lower_combined = combined.lower() + if "required scopes" in lower_combined and "read:project" in lower_combined: + raise ValueError( + "GitHub token is missing Projects scopes. Re-authenticate with: " + "specfact auth github --scopes repo,read:project,project" + ) + raise ValueError(combined or "GitHub GraphQL returned errors") + data = payload.get("data") + return data if isinstance(data, dict) else {} + + project_context = (github_project_id or "").strip() or typer.prompt( + "GitHub project context (owner/repo)", default="" + ).strip() + if "/" not in project_context: + console.print("[red]Error:[/red] GitHub project context must be in owner/repo format") + raise typer.Exit(1) + owner, repo_name = project_context.split("/", 1) + owner = owner.strip() + repo_name = repo_name.strip() + console.print( + f"[dim]Hint:[/dim] Open https://github.com/{owner}/{repo_name}/projects and use the project number shown there, " + "or paste a ProjectV2 node ID (PVT_xxx)." + ) + + project_ref = (github_project_v2_id or "").strip() or typer.prompt( + "GitHub ProjectV2 (number like 1, or node ID like PVT_xxx)", default="" + ).strip() + + issue_types_query = ( + "query($owner:String!, $repo:String!){ " + "repository(owner:$owner, name:$repo){ issueTypes(first:50){ nodes{ id name } } } " + "}" + ) + repo_issue_types: dict[str, str] = {} + try: + issue_types_data = _github_graphql(issue_types_query, {"owner": owner, "repo": repo_name}) + repository = ( + issue_types_data.get("repository") if isinstance(issue_types_data.get("repository"), dict) else None + ) + issue_types = repository.get("issueTypes") if isinstance(repository, dict) else None + nodes = issue_types.get("nodes") if isinstance(issue_types, dict) else None + if isinstance(nodes, list): + for node in nodes: + if not isinstance(node, dict): + continue + type_name = str(node.get("name") or "").strip().lower() + type_id = str(node.get("id") or "").strip() + if type_name and type_id: + repo_issue_types[type_name] = type_id + except (requests.RequestException, ValueError): + # Keep flow resilient; ProjectV2 mapping can still be configured without repository issue type ids. + repo_issue_types = {} + + if repo_issue_types: + discovered = ", ".join(sorted(repo_issue_types.keys())) + console.print(f"[cyan]Discovered repository issue types:[/cyan] {discovered}") + + cli_option_map: dict[str, str] = {} + for entry in github_type_option: + raw = entry.strip() + if "=" not in raw: + console.print(f"[yellow]⚠[/yellow] Skipping invalid --github-type-option '{raw}'") + continue + key, value = raw.split("=", 1) + key = key.strip().lower() + value = value.strip() + if key and value: + cli_option_map[key] = value + + # Fast-path for fully specified non-interactive invocations. + if project_ref and (github_type_field_id or "").strip() and cli_option_map: + github_custom_mapping_file = _persist_github_custom_mapping_file(repo_issue_types) + config_path = _upsert_backlog_provider_settings( + "github", + { + "field_mapping_file": ".specfact/templates/backlog/field_mappings/github_custom.yaml", + "provider_fields": { + "github_project_v2": { + "project_id": project_ref, + "type_field_id": str(github_type_field_id).strip(), + "type_option_ids": cli_option_map, + } + }, + "github_issue_types": {"type_ids": repo_issue_types}, + }, + project_id=project_context, + adapter="github", + ) + console.print(f"[green]✓[/green] GitHub ProjectV2 Type mapping saved to {config_path}") + console.print(f"[green]Custom mapping:[/green] {github_custom_mapping_file}") + return + + project_id = "" + project_title = "" + fields_nodes: list[dict[str, Any]] = [] + + def _extract_project(node: dict[str, Any] | None) -> tuple[str, str, list[dict[str, Any]]]: + if not isinstance(node, dict): + return "", "", [] + pid = str(node.get("id") or "").strip() + title = str(node.get("title") or "").strip() + fields = node.get("fields") + nodes = fields.get("nodes") if isinstance(fields, dict) else None + valid_nodes = [item for item in nodes if isinstance(item, dict)] if isinstance(nodes, list) else [] + return pid, title, valid_nodes + + try: + if project_ref.isdigit(): + org_query = ( + "query($login:String!, $number:Int!) { " + "organization(login:$login) { projectV2(number:$number) { id title fields(first:100) { nodes { " + "__typename ... on ProjectV2Field { id name } " + "... on ProjectV2SingleSelectField { id name options { id name } } " + "... on ProjectV2IterationField { id name } " + "} } } } " + "}" + ) + user_query = ( + "query($login:String!, $number:Int!) { " + "user(login:$login) { projectV2(number:$number) { id title fields(first:100) { nodes { " + "__typename ... on ProjectV2Field { id name } " + "... on ProjectV2SingleSelectField { id name options { id name } } " + "... on ProjectV2IterationField { id name } " + "} } } } " + "}" + ) + + number = int(project_ref) + org_error: str | None = None + user_error: str | None = None + + try: + org_data = _github_graphql(org_query, {"login": owner, "number": number}) + org_node = org_data.get("organization") if isinstance(org_data.get("organization"), dict) else None + project_node = org_node.get("projectV2") if isinstance(org_node, dict) else None + project_id, project_title, fields_nodes = _extract_project( + project_node if isinstance(project_node, dict) else None + ) + except ValueError as error: + org_error = str(error) + + if not project_id: + try: + user_data = _github_graphql(user_query, {"login": owner, "number": number}) + user_node = user_data.get("user") if isinstance(user_data.get("user"), dict) else None + project_node = user_node.get("projectV2") if isinstance(user_node, dict) else None + project_id, project_title, fields_nodes = _extract_project( + project_node if isinstance(project_node, dict) else None + ) + except ValueError as error: + user_error = str(error) + + if not project_id and (org_error or user_error): + detail = "; ".join(part for part in [org_error, user_error] if part) + raise ValueError(detail) + else: + project_id = project_ref + query = ( + "query($projectId:ID!) { " + "node(id:$projectId) { " + "... on ProjectV2 { id title fields(first:100) { nodes { " + "__typename ... on ProjectV2Field { id name } " + "... on ProjectV2SingleSelectField { id name options { id name } } " + "... on ProjectV2IterationField { id name } " + "} } } " + "} " + "}" + ) + data = _github_graphql(query, {"projectId": project_id}) + node = data.get("node") if isinstance(data.get("node"), dict) else None + project_id, project_title, fields_nodes = _extract_project(node) + except (requests.RequestException, ValueError) as error: + message = str(error) + console.print(f"[red]Error:[/red] Could not discover GitHub ProjectV2 metadata: {message}") + if "required scopes" in message.lower() or "read:project" in message.lower(): + console.print( + "[yellow]Hint:[/yellow] Run `specfact auth github --scopes repo,read:project,project` " + "or provide `GITHUB_TOKEN` with those scopes." + ) + else: + console.print( + f"[yellow]Hint:[/yellow] Verify the project exists under " + f"https://github.com/{owner}/{repo_name}/projects and that the number/ID is correct." + ) + raise typer.Exit(1) from error + + if not project_id: + console.print( + "[red]Error:[/red] Could not resolve GitHub ProjectV2. Check owner/repo and project number or ID." + ) + raise typer.Exit(1) + + type_field_id = (github_type_field_id or "").strip() + selected_type_field: dict[str, Any] | None = None + single_select_fields = [ + field + for field in fields_nodes + if isinstance(field.get("options"), list) and str(field.get("id") or "").strip() + ] + + expected_type_names = {"epic", "feature", "story", "task", "bug"} + + def _field_options(field: dict[str, Any]) -> set[str]: + raw = field.get("options") + if not isinstance(raw, list): + return set() + return { + str(opt.get("name") or "").strip().lower() + for opt in raw + if isinstance(opt, dict) and str(opt.get("name") or "").strip() + } + + if type_field_id: + selected_type_field = next( + (field for field in single_select_fields if str(field.get("id") or "").strip() == type_field_id), + None, + ) + else: + # Prefer explicit Type-like field names first. + selected_type_field = next( + ( + field + for field in single_select_fields + if str(field.get("name") or "").strip().lower() + in {"type", "issue type", "item type", "work item type"} + ), + None, + ) + # Otherwise pick a field whose options look like backlog item types (epic/feature/story/task/bug). + if selected_type_field is None: + selected_type_field = next( + ( + field + for field in single_select_fields + if len(_field_options(field).intersection(expected_type_names)) >= 2 + ), + None, + ) + + if selected_type_field is None and single_select_fields: + console.print("[cyan]Discovered project single-select fields:[/cyan]") + for field in single_select_fields: + field_name = str(field.get("name") or "") + options_preview = sorted(_field_options(field)) + preview = ", ".join(options_preview[:8]) + suffix = "..." if len(options_preview) > 8 else "" + console.print(f" - {field_name} (id={field.get('id')}) | options: {preview}{suffix}") + # Simplified flow: do not force manual field picking here. + # Repository issue types are source-of-truth; ProjectV2 mapping is optional enrichment. + + if selected_type_field is None: + console.print( + "[yellow]⚠[/yellow] No ProjectV2 Type-like single-select field found. " + "Skipping ProjectV2 type-option mapping for now." + ) + + type_field_id = ( + str(selected_type_field.get("id") or "").strip() if isinstance(selected_type_field, dict) else "" + ) + options_raw = selected_type_field.get("options") if isinstance(selected_type_field, dict) else None + options = [item for item in options_raw if isinstance(item, dict)] if isinstance(options_raw, list) else [] + + option_map: dict[str, str] = dict(cli_option_map) + + option_name_to_id = { + str(opt.get("name") or "").strip().lower(): str(opt.get("id") or "").strip() + for opt in options + if str(opt.get("name") or "").strip() and str(opt.get("id") or "").strip() + } + + if not option_map and option_name_to_id: + for issue_type in ["epic", "feature", "story", "task", "bug"]: + if issue_type in option_name_to_id: + option_map[issue_type] = option_name_to_id[issue_type] + + if not option_map and option_name_to_id: + available_names = ", ".join(sorted(option_name_to_id.keys())) + console.print(f"[cyan]Available Type options:[/cyan] {available_names}") + for issue_type in ["epic", "feature", "story", "task", "bug"]: + option_name = ( + typer.prompt( + f"Type option name for '{issue_type}' (optional)", + default=issue_type if issue_type in option_name_to_id else "", + ) + .strip() + .lower() + ) + if option_name and option_name in option_name_to_id: + option_map[issue_type] = option_name_to_id[option_name] + + issue_type_id_map = { + issue_type: repo_issue_types.get(issue_type, "") + for issue_type in ["epic", "feature", "story", "task", "bug"] + if repo_issue_types.get(issue_type) + } + + settings_update: dict[str, Any] = {} + if issue_type_id_map: + settings_update["github_issue_types"] = {"type_ids": issue_type_id_map} + + if type_field_id and option_map: + settings_update["provider_fields"] = { + "github_project_v2": { + "project_id": project_id, + "type_field_id": type_field_id, + "type_option_ids": option_map, + } + } + elif type_field_id and not option_map: + console.print( + "[yellow]⚠[/yellow] ProjectV2 Type field found, but no matching type options were configured. " + "Repository issue-type ids were still saved." + ) + + if not settings_update: + console.print( + "[red]Error:[/red] Could not resolve GitHub type mappings from repository issue types or ProjectV2 options." + ) + raise typer.Exit(1) + + github_custom_mapping_file = _persist_github_custom_mapping_file(repo_issue_types) + settings_update["field_mapping_file"] = ".specfact/templates/backlog/field_mappings/github_custom.yaml" + + config_path = _upsert_backlog_provider_settings( + "github", + settings_update, + project_id=project_context, + adapter="github", + ) + + project_label = project_title or project_id + console.print(f"[green]✓[/green] GitHub mapping saved to {config_path}") + console.print(f"[green]Custom mapping:[/green] {github_custom_mapping_file}") + if type_field_id: + field_name = str(selected_type_field.get("name") or "") if isinstance(selected_type_field, dict) else "" + console.print(f"[dim]Project: {project_label} | Type field: {field_name}[/dim]") + else: + console.print("[dim]ProjectV2 Type field mapping skipped; repository issue types were captured.[/dim]") + def _find_potential_match(canonical_field: str, available_fields: list[dict[str, Any]]) -> str | None: """ Find a potential ADO field match for a canonical field using regex/fuzzy matching. @@ -3869,6 +4542,10 @@ def _find_potential_match(canonical_field: str, available_fields: list[dict[str, return None + if "ado" not in selected_providers and "github" in selected_providers: + _run_github_mapping_setup() + return + # Resolve token (explicit > env var > stored token) api_token: str | None = None auth_scheme = "basic" @@ -3900,6 +4577,14 @@ def _find_potential_match(canonical_field: str, available_fields: list[dict[str, console.print(" 3. Use: specfact auth azure-devops") raise typer.Exit(1) + if not ado_org: + ado_org = typer.prompt("Azure DevOps organization", default="").strip() or None + if not ado_project: + ado_project = typer.prompt("Azure DevOps project", default="").strip() or None + if not ado_org or not ado_project: + console.print("[red]Error:[/red] Azure DevOps organization and project are required when configuring ado") + raise typer.Exit(1) + # Build base URL base_url = (ado_base_url or "https://dev.azure.com").rstrip("/") @@ -4172,5 +4857,20 @@ def _find_potential_match(canonical_field: str, available_fields: list[dict[str, console.print() console.print(Panel("[bold green]✓ Mapping saved successfully[/bold green]", border_style="green")) console.print(f"[green]Location:[/green] {custom_mapping_file}") + + provider_cfg_path = _upsert_backlog_provider_settings( + "ado", + { + "field_mapping_file": ".specfact/templates/backlog/field_mappings/ado_custom.yaml", + "ado_org": ado_org, + "ado_project": ado_project, + }, + project_id=f"{ado_org}/{ado_project}" if ado_org and ado_project else None, + adapter="ado", + ) + console.print(f"[green]Provider config:[/green] {provider_cfg_path}") console.print() console.print("[dim]You can now use this mapping with specfact backlog refine.[/dim]") + + if "github" in selected_providers: + _run_github_mapping_setup() diff --git a/src/specfact_cli/modules/contract/module-package.yaml b/src/specfact_cli/modules/contract/module-package.yaml index e23f2e5a..a40f8c52 100644 --- a/src/specfact_cli/modules/contract/module-package.yaml +++ b/src/specfact_cli/modules/contract/module-package.yaml @@ -1,5 +1,5 @@ name: contract -version: 0.35.0 +version: 0.36.0 commands: - contract command_help: diff --git a/src/specfact_cli/modules/drift/module-package.yaml b/src/specfact_cli/modules/drift/module-package.yaml index 6ec43c49..9c295903 100644 --- a/src/specfact_cli/modules/drift/module-package.yaml +++ b/src/specfact_cli/modules/drift/module-package.yaml @@ -1,5 +1,5 @@ name: drift -version: 0.35.0 +version: 0.36.0 commands: - drift command_help: diff --git a/src/specfact_cli/modules/enforce/module-package.yaml b/src/specfact_cli/modules/enforce/module-package.yaml index 341f58bf..7ace3b1d 100644 --- a/src/specfact_cli/modules/enforce/module-package.yaml +++ b/src/specfact_cli/modules/enforce/module-package.yaml @@ -1,5 +1,5 @@ name: enforce -version: 0.35.0 +version: 0.36.0 commands: - enforce command_help: diff --git a/src/specfact_cli/modules/generate/module-package.yaml b/src/specfact_cli/modules/generate/module-package.yaml index 2ccbab5e..749d0a87 100644 --- a/src/specfact_cli/modules/generate/module-package.yaml +++ b/src/specfact_cli/modules/generate/module-package.yaml @@ -1,5 +1,5 @@ name: generate -version: 0.35.0 +version: 0.36.0 commands: - generate command_help: diff --git a/src/specfact_cli/modules/import_cmd/module-package.yaml b/src/specfact_cli/modules/import_cmd/module-package.yaml index 7a383556..5e44c0f2 100644 --- a/src/specfact_cli/modules/import_cmd/module-package.yaml +++ b/src/specfact_cli/modules/import_cmd/module-package.yaml @@ -1,5 +1,5 @@ name: import_cmd -version: 0.35.0 +version: 0.36.0 commands: - import command_help: diff --git a/src/specfact_cli/modules/init/module-package.yaml b/src/specfact_cli/modules/init/module-package.yaml index b7174644..6d80b88d 100644 --- a/src/specfact_cli/modules/init/module-package.yaml +++ b/src/specfact_cli/modules/init/module-package.yaml @@ -1,5 +1,5 @@ name: init -version: 0.35.0 +version: 0.36.0 commands: - init command_help: diff --git a/src/specfact_cli/modules/migrate/module-package.yaml b/src/specfact_cli/modules/migrate/module-package.yaml index bebac551..fb4f5425 100644 --- a/src/specfact_cli/modules/migrate/module-package.yaml +++ b/src/specfact_cli/modules/migrate/module-package.yaml @@ -1,5 +1,5 @@ name: migrate -version: 0.35.0 +version: 0.36.0 commands: - migrate command_help: diff --git a/src/specfact_cli/modules/module_registry/module-package.yaml b/src/specfact_cli/modules/module_registry/module-package.yaml index 96f2c0a6..bf08f24d 100644 --- a/src/specfact_cli/modules/module_registry/module-package.yaml +++ b/src/specfact_cli/modules/module_registry/module-package.yaml @@ -1,5 +1,5 @@ name: module-registry -version: 0.35.0 +version: 0.36.0 commands: - module command_help: diff --git a/src/specfact_cli/modules/patch_mode/module-package.yaml b/src/specfact_cli/modules/patch_mode/module-package.yaml index d044a3ec..6357e43c 100644 --- a/src/specfact_cli/modules/patch_mode/module-package.yaml +++ b/src/specfact_cli/modules/patch_mode/module-package.yaml @@ -1,5 +1,5 @@ name: patch-mode -version: 0.35.0 +version: 0.36.0 commands: - patch command_help: diff --git a/src/specfact_cli/modules/plan/module-package.yaml b/src/specfact_cli/modules/plan/module-package.yaml index c75def19..1a4989f4 100644 --- a/src/specfact_cli/modules/plan/module-package.yaml +++ b/src/specfact_cli/modules/plan/module-package.yaml @@ -1,5 +1,5 @@ name: plan -version: 0.35.0 +version: 0.36.0 commands: - plan command_help: diff --git a/src/specfact_cli/modules/policy_engine/module-package.yaml b/src/specfact_cli/modules/policy_engine/module-package.yaml index 8f933789..8b9de5c0 100644 --- a/src/specfact_cli/modules/policy_engine/module-package.yaml +++ b/src/specfact_cli/modules/policy_engine/module-package.yaml @@ -1,5 +1,5 @@ name: policy-engine -version: 0.35.0 +version: 0.36.0 commands: - policy command_help: diff --git a/src/specfact_cli/modules/project/module-package.yaml b/src/specfact_cli/modules/project/module-package.yaml index 7d21c66c..d1f6abea 100644 --- a/src/specfact_cli/modules/project/module-package.yaml +++ b/src/specfact_cli/modules/project/module-package.yaml @@ -1,5 +1,5 @@ name: project -version: 0.35.0 +version: 0.36.0 commands: - project command_help: diff --git a/src/specfact_cli/modules/repro/module-package.yaml b/src/specfact_cli/modules/repro/module-package.yaml index 21d18be5..00ad5965 100644 --- a/src/specfact_cli/modules/repro/module-package.yaml +++ b/src/specfact_cli/modules/repro/module-package.yaml @@ -1,5 +1,5 @@ name: repro -version: 0.35.0 +version: 0.36.0 commands: - repro command_help: diff --git a/src/specfact_cli/modules/sdd/module-package.yaml b/src/specfact_cli/modules/sdd/module-package.yaml index f4b27b73..1b0ef7bd 100644 --- a/src/specfact_cli/modules/sdd/module-package.yaml +++ b/src/specfact_cli/modules/sdd/module-package.yaml @@ -1,5 +1,5 @@ name: sdd -version: 0.35.0 +version: 0.36.0 commands: - sdd command_help: diff --git a/src/specfact_cli/modules/spec/module-package.yaml b/src/specfact_cli/modules/spec/module-package.yaml index 2e7b0a30..86e6393d 100644 --- a/src/specfact_cli/modules/spec/module-package.yaml +++ b/src/specfact_cli/modules/spec/module-package.yaml @@ -1,5 +1,5 @@ name: spec -version: 0.35.0 +version: 0.36.0 commands: - spec command_help: diff --git a/src/specfact_cli/modules/sync/module-package.yaml b/src/specfact_cli/modules/sync/module-package.yaml index 7d730aa7..ae0c4ea5 100644 --- a/src/specfact_cli/modules/sync/module-package.yaml +++ b/src/specfact_cli/modules/sync/module-package.yaml @@ -1,5 +1,5 @@ name: sync -version: 0.35.0 +version: 0.36.0 commands: - sync command_help: diff --git a/src/specfact_cli/modules/upgrade/module-package.yaml b/src/specfact_cli/modules/upgrade/module-package.yaml index 0b695869..263e8f8c 100644 --- a/src/specfact_cli/modules/upgrade/module-package.yaml +++ b/src/specfact_cli/modules/upgrade/module-package.yaml @@ -1,5 +1,5 @@ name: upgrade -version: 0.35.0 +version: 0.36.0 commands: - upgrade command_help: diff --git a/src/specfact_cli/modules/validate/module-package.yaml b/src/specfact_cli/modules/validate/module-package.yaml index 8bcd85e4..add1002c 100644 --- a/src/specfact_cli/modules/validate/module-package.yaml +++ b/src/specfact_cli/modules/validate/module-package.yaml @@ -1,5 +1,5 @@ name: validate -version: 0.35.0 +version: 0.36.0 commands: - validate command_help: diff --git a/src/specfact_cli/registry/module_packages.py b/src/specfact_cli/registry/module_packages.py index f151c182..6fdd87c9 100644 --- a/src/specfact_cli/registry/module_packages.py +++ b/src/specfact_cli/registry/module_packages.py @@ -905,7 +905,8 @@ def register_module_package_commands( cmd_name, ) CommandRegistry._typer_cache.pop(cmd_name, None) - logger.debug("Module %s extended command group '%s'.", meta.name, cmd_name) + if is_debug_mode(): + logger.debug("Module %s extended command group '%s'.", meta.name, cmd_name) continue help_str = (meta.command_help or {}).get(cmd_name) or f"Module package: {meta.name}" loader = _make_package_loader(package_dir, meta.name, cmd_name) diff --git a/tests/integration/backlog/test_additional_commands_e2e.py b/tests/integration/backlog/test_additional_commands_e2e.py index 016049be..a10f366c 100644 --- a/tests/integration/backlog/test_additional_commands_e2e.py +++ b/tests/integration/backlog/test_additional_commands_e2e.py @@ -33,6 +33,10 @@ def fetch_relationships(self, project_id: str) -> list[dict[str, Any]]: _ = project_id return [{"source_id": "1", "target_id": "2", "type": "blocks"}] + def create_issue(self, project_id: str, payload: dict[str, Any]) -> dict[str, Any]: + _ = project_id, payload + return {"id": "3", "key": "TASK-3", "url": "https://example.test/issues/3"} + def _write_baseline(path: Path) -> None: path.parent.mkdir(parents=True, exist_ok=True) diff --git a/tests/integration/backlog/test_ado_e2e.py b/tests/integration/backlog/test_ado_e2e.py index fef97696..af8e67e8 100644 --- a/tests/integration/backlog/test_ado_e2e.py +++ b/tests/integration/backlog/test_ado_e2e.py @@ -38,6 +38,10 @@ def fetch_relationships(self, project_id: str) -> list[dict[str, Any]]: _ = project_id return [{"source_id": "100", "target_id": "101", "type": "blocks"}] + def create_issue(self, project_id: str, payload: dict[str, Any]) -> dict[str, Any]: + _ = project_id, payload + return {"id": "102", "key": "ADO-102", "url": "https://example.test/workitems/102"} + def test_backlog_trace_impact_ado_flow(monkeypatch) -> None: runner = CliRunner() diff --git a/tests/integration/backlog/test_delta_e2e.py b/tests/integration/backlog/test_delta_e2e.py index 21d1cbe5..149b3b43 100644 --- a/tests/integration/backlog/test_delta_e2e.py +++ b/tests/integration/backlog/test_delta_e2e.py @@ -33,6 +33,10 @@ def fetch_relationships(self, project_id: str) -> list[dict[str, Any]]: _ = project_id return [{"source_id": "1", "target_id": "2", "type": "blocks"}] + def create_issue(self, project_id: str, payload: dict[str, Any]) -> dict[str, Any]: + _ = project_id, payload + return {"id": "3", "key": "#3", "url": "https://example.test/issues/3"} + def _write_baseline(path: Path) -> None: path.parent.mkdir(parents=True, exist_ok=True) diff --git a/tests/integration/backlog/test_github_e2e.py b/tests/integration/backlog/test_github_e2e.py index 97c16860..d7baa121 100644 --- a/tests/integration/backlog/test_github_e2e.py +++ b/tests/integration/backlog/test_github_e2e.py @@ -32,6 +32,10 @@ def fetch_relationships(self, project_id: str) -> list[dict[str, Any]]: _ = project_id return [{"source_id": "1", "target_id": "2", "type": "blocks"}] + def create_issue(self, project_id: str, payload: dict[str, Any]) -> dict[str, Any]: + _ = project_id, payload + return {"id": "3", "key": "#3", "url": "https://example.test/issues/3"} + def test_backlog_analyze_deps_github_flow(tmp_path: Path, monkeypatch) -> None: runner = CliRunner() diff --git a/tests/integration/backlog/test_sync_e2e.py b/tests/integration/backlog/test_sync_e2e.py index 6542f57e..db4183ce 100644 --- a/tests/integration/backlog/test_sync_e2e.py +++ b/tests/integration/backlog/test_sync_e2e.py @@ -33,6 +33,10 @@ def fetch_relationships(self, project_id: str) -> list[dict[str, Any]]: _ = project_id return [{"source_id": "1", "target_id": "2", "type": "blocks"}] + def create_issue(self, project_id: str, payload: dict[str, Any]) -> dict[str, Any]: + _ = project_id, payload + return {"id": "3", "key": "#3", "url": "https://example.test/issues/3"} + def test_backlog_sync_generates_plan_and_updates_baseline(tmp_path: Path, monkeypatch) -> None: runner = CliRunner() diff --git a/tests/integration/backlog/test_verify_readiness_e2e.py b/tests/integration/backlog/test_verify_readiness_e2e.py index de31f30b..f2373ad7 100644 --- a/tests/integration/backlog/test_verify_readiness_e2e.py +++ b/tests/integration/backlog/test_verify_readiness_e2e.py @@ -36,6 +36,10 @@ def fetch_relationships(self, project_id: str) -> list[dict[str, Any]]: {"source_id": "101", "target_id": "102", "type": "relates_to"}, ] + def create_issue(self, project_id: str, payload: dict[str, Any]) -> dict[str, Any]: + _ = project_id, payload + return {"id": "103", "key": "A-103", "url": "https://example.test/workitems/103"} + def test_verify_readiness_returns_ready_exit_code(monkeypatch) -> None: runner = CliRunner() diff --git a/tests/unit/adapters/test_ado.py b/tests/unit/adapters/test_ado.py index 8292f31a..cdf8679c 100644 --- a/tests/unit/adapters/test_ado.py +++ b/tests/unit/adapters/test_ado.py @@ -245,11 +245,20 @@ def test_update_work_item_status( @beartype @patch("specfact_cli.adapters.ado.requests.patch") @patch("specfact_cli.adapters.ado.requests.get") - def test_missing_api_token(self, mock_get: MagicMock, mock_patch: MagicMock, bridge_config: BridgeConfig) -> None: + @patch("specfact_cli.adapters.ado.get_token") + def test_missing_api_token( + self, + mock_get_token: MagicMock, + mock_get: MagicMock, + mock_patch: MagicMock, + bridge_config: BridgeConfig, + ) -> None: """Test error when API token is missing.""" # Clear environment variable BEFORE creating adapter old_token = os.environ.pop("AZURE_DEVOPS_TOKEN", None) try: + # Ensure adapter cannot resolve token from persisted auth cache. + mock_get_token.return_value = None adapter = AdoAdapter(org="test-org", project="test-project", api_token=None) # Mock process template API call (called by _get_work_item_type) diff --git a/tests/unit/backlog/test_field_mappers.py b/tests/unit/backlog/test_field_mappers.py index cac2c0b4..814461fb 100644 --- a/tests/unit/backlog/test_field_mappers.py +++ b/tests/unit/backlog/test_field_mappers.py @@ -192,6 +192,17 @@ def test_extract_work_item_type_from_prefixed_label(self) -> None: fields = mapper.extract_fields(item_data) assert fields["work_item_type"] == "Story" + def test_extract_work_item_type_from_native_type_object(self) -> None: + """GitHub mapper resolves work item type from native issue type metadata.""" + mapper = GitHubFieldMapper() + item_data = { + "body": "test", + "labels": [], + "type": {"name": "Feature"}, + } + fields = mapper.extract_fields(item_data) + assert fields["work_item_type"] == "Feature" + class TestAdoFieldMapper: """Tests for AdoFieldMapper with default mappings.""" diff --git a/tests/unit/commands/test_backlog_commands.py b/tests/unit/commands/test_backlog_commands.py index 61c49246..c9dc574d 100644 --- a/tests/unit/commands/test_backlog_commands.py +++ b/tests/unit/commands/test_backlog_commands.py @@ -6,8 +6,10 @@ from __future__ import annotations +from pathlib import Path from unittest.mock import MagicMock, patch +import yaml from rich.panel import Panel from typer.testing import CliRunner @@ -238,6 +240,134 @@ def test_map_fields_requires_token(self) -> None: assert result.exit_code != 0 assert "token required" in result.stdout.lower() or "error" in result.stdout.lower() + @patch("questionary.checkbox") + @patch("specfact_cli.utils.auth_tokens.get_token") + def test_map_fields_provider_picker_accepts_choice_objects( + self, + mock_get_token: MagicMock, + mock_checkbox: MagicMock, + tmp_path, + ) -> None: + """Provider picker should accept questionary Choice-like objects with `.value`.""" + + class _ChoiceLike: + def __init__(self, value: str) -> None: + self.value = value + + mock_checkbox.return_value.ask.return_value = [_ChoiceLike("github")] + mock_get_token.return_value = {"access_token": "gho_test", "token_type": "bearer"} + + import os + + cwd = Path.cwd() + try: + os.chdir(tmp_path) + result = runner.invoke( + app, + [ + "backlog", + "map-fields", + "--github-project-id", + "nold-ai/specfact-demo-repo", + "--github-project-v2-id", + "PVT_project_id", + "--github-type-field-id", + "PVT_type_field", + "--github-type-option", + "task=OPT_TASK", + ], + ) + finally: + os.chdir(cwd) + + assert result.exit_code == 0 + assert "No providers selected" not in result.stdout + + @patch("specfact_cli.utils.auth_tokens.get_token") + def test_map_fields_github_provider_persists_backlog_config(self, mock_get_token: MagicMock, tmp_path) -> None: + """Test GitHub provider mapping persistence into .specfact/backlog-config.yaml.""" + mock_get_token.return_value = {"access_token": "gho_test", "token_type": "bearer"} + import os + + cwd = Path.cwd() + try: + os.chdir(tmp_path) + result = runner.invoke( + app, + [ + "backlog", + "map-fields", + "--provider", + "github", + "--github-project-id", + "nold-ai/specfact-demo-repo", + "--github-project-v2-id", + "PVT_project_id", + "--github-type-field-id", + "PVT_type_field", + "--github-type-option", + "task=OPT_TASK", + ], + ) + finally: + os.chdir(cwd) + + assert result.exit_code == 0 + cfg_file = tmp_path / ".specfact" / "backlog-config.yaml" + assert cfg_file.exists() + loaded = yaml.safe_load(cfg_file.read_text(encoding="utf-8")) + github_settings = loaded["backlog_config"]["providers"]["github"]["settings"] + mapping = github_settings["provider_fields"]["github_project_v2"] + assert mapping["project_id"] == "PVT_project_id" + assert mapping["type_field_id"] == "PVT_type_field" + assert mapping["type_option_ids"]["task"] == "OPT_TASK" + assert github_settings["field_mapping_file"] == ".specfact/templates/backlog/field_mappings/github_custom.yaml" + github_custom = tmp_path / ".specfact" / "templates" / "backlog" / "field_mappings" / "github_custom.yaml" + assert github_custom.exists() + github_custom_payload = yaml.safe_load(github_custom.read_text(encoding="utf-8")) + assert github_custom_payload["type_mapping"]["task"] == "task" + + def test_backlog_init_config_scaffolds_default_file(self, tmp_path) -> None: + """Test backlog init-config creates default backlog-config scaffold.""" + import os + + cwd = Path.cwd() + try: + os.chdir(tmp_path) + result = runner.invoke(app, ["backlog", "init-config"]) + finally: + os.chdir(cwd) + + assert result.exit_code == 0 + cfg_file = tmp_path / ".specfact" / "backlog-config.yaml" + assert cfg_file.exists() + loaded = yaml.safe_load(cfg_file.read_text(encoding="utf-8")) + assert "backlog_config" in loaded + assert "providers" in loaded["backlog_config"] + assert "github" in loaded["backlog_config"]["providers"] + assert "ado" in loaded["backlog_config"]["providers"] + + def test_backlog_init_config_does_not_overwrite_without_force(self, tmp_path) -> None: + """Test backlog init-config respects no-overwrite behavior by default.""" + import os + + cfg_dir = tmp_path / ".specfact" + cfg_dir.mkdir(parents=True, exist_ok=True) + cfg_file = cfg_dir / "backlog-config.yaml" + cfg_file.write_text("backlog_config:\n providers:\n github:\n adapter: github\n", encoding="utf-8") + + cwd = Path.cwd() + try: + os.chdir(tmp_path) + result = runner.invoke(app, ["backlog", "init-config"]) + finally: + os.chdir(cwd) + + assert result.exit_code == 0 + content = cfg_file.read_text(encoding="utf-8") + assert "adapter: github" in content + assert "already exists" in result.stdout.lower() + class TestParseRefinedExportMarkdown: """Tests for _parse_refined_export_markdown (refine --import-from-tmp parser).""" diff --git a/tests/unit/integrations/test_specmatic.py b/tests/unit/integrations/test_specmatic.py index 7f38a48b..efa51ab7 100644 --- a/tests/unit/integrations/test_specmatic.py +++ b/tests/unit/integrations/test_specmatic.py @@ -1,9 +1,8 @@ """Unit tests for Specmatic integration.""" +import asyncio from unittest.mock import MagicMock, patch -import pytest - from specfact_cli.integrations.specmatic import ( SpecValidationResult, check_backward_compatibility, @@ -128,10 +127,9 @@ def test_to_json(self): class TestValidateSpecWithSpecmatic: """Test suite for validate_spec_with_specmatic function.""" - @pytest.mark.asyncio @patch("specfact_cli.integrations.specmatic._get_specmatic_command") @patch("specfact_cli.integrations.specmatic.asyncio.to_thread") - async def test_validate_success(self, mock_to_thread, mock_get_cmd, tmp_path): + def test_validate_success(self, mock_to_thread, mock_get_cmd, tmp_path): """Test successful validation.""" # Mock specmatic command mock_get_cmd.return_value = ["specmatic"] @@ -143,33 +141,31 @@ async def test_validate_success(self, mock_to_thread, mock_get_cmd, tmp_path): spec_path = tmp_path / "openapi.yaml" spec_path.write_text("openapi: 3.0.0\n") - result = await validate_spec_with_specmatic(spec_path) + result = asyncio.run(validate_spec_with_specmatic(spec_path)) assert result.is_valid is True assert result.schema_valid is True assert result.examples_valid is True assert mock_to_thread.call_count == 2 # Schema validation + examples - @pytest.mark.asyncio @patch("specfact_cli.integrations.specmatic._get_specmatic_command") - async def test_validate_specmatic_not_available(self, mock_get_cmd, tmp_path): + def test_validate_specmatic_not_available(self, mock_get_cmd, tmp_path): """Test when Specmatic is not available.""" mock_get_cmd.return_value = None spec_path = tmp_path / "openapi.yaml" spec_path.write_text("openapi: 3.0.0\n") - result = await validate_spec_with_specmatic(spec_path) + result = asyncio.run(validate_spec_with_specmatic(spec_path)) assert result.is_valid is False assert result.schema_valid is False assert result.examples_valid is False assert "Specmatic" in result.errors[0] and "not available" in result.errors[0] - @pytest.mark.asyncio @patch("specfact_cli.integrations.specmatic._get_specmatic_command") @patch("specfact_cli.integrations.specmatic.asyncio.to_thread") - async def test_validate_with_previous_version(self, mock_to_thread, mock_get_cmd, tmp_path): + def test_validate_with_previous_version(self, mock_to_thread, mock_get_cmd, tmp_path): """Test validation with previous version for backward compatibility.""" mock_get_cmd.return_value = ["specmatic"] # Mock successful subprocess runs @@ -183,7 +179,7 @@ async def test_validate_with_previous_version(self, mock_to_thread, mock_get_cmd previous_path = tmp_path / "openapi.v1.yaml" previous_path.write_text("openapi: 3.0.0\n") - result = await validate_spec_with_specmatic(spec_path, previous_path) + result = asyncio.run(validate_spec_with_specmatic(spec_path, previous_path)) assert result.is_valid is True assert result.backward_compatible is True @@ -193,10 +189,9 @@ async def test_validate_with_previous_version(self, mock_to_thread, mock_get_cmd class TestCheckBackwardCompatibility: """Test suite for check_backward_compatibility function.""" - @pytest.mark.asyncio @patch("specfact_cli.integrations.specmatic._get_specmatic_command") @patch("specfact_cli.integrations.specmatic.asyncio.to_thread") - async def test_backward_compatible(self, mock_to_thread, mock_get_cmd, tmp_path): + def test_backward_compatible(self, mock_to_thread, mock_get_cmd, tmp_path): """Test when specs are backward compatible.""" mock_get_cmd.return_value = ["specmatic"] # Mock successful backward compatibility check @@ -208,15 +203,14 @@ async def test_backward_compatible(self, mock_to_thread, mock_get_cmd, tmp_path) new_spec = tmp_path / "new.yaml" new_spec.write_text("openapi: 3.0.0\n") - is_compatible, breaking_changes = await check_backward_compatibility(old_spec, new_spec) + is_compatible, breaking_changes = asyncio.run(check_backward_compatibility(old_spec, new_spec)) assert is_compatible is True assert breaking_changes == [] - @pytest.mark.asyncio @patch("specfact_cli.integrations.specmatic._get_specmatic_command") @patch("specfact_cli.integrations.specmatic.asyncio.to_thread") - async def test_backward_incompatible(self, mock_to_thread, mock_get_cmd, tmp_path): + def test_backward_incompatible(self, mock_to_thread, mock_get_cmd, tmp_path): """Test when specs are not backward compatible.""" mock_get_cmd.return_value = ["specmatic"] # Mock failed backward compatibility check with breaking changes in output @@ -232,7 +226,7 @@ async def test_backward_incompatible(self, mock_to_thread, mock_get_cmd, tmp_pat new_spec = tmp_path / "new.yaml" new_spec.write_text("openapi: 3.0.0\n") - is_compatible, breaking_changes = await check_backward_compatibility(old_spec, new_spec) + is_compatible, breaking_changes = asyncio.run(check_backward_compatibility(old_spec, new_spec)) assert is_compatible is False assert len(breaking_changes) > 0 @@ -242,10 +236,9 @@ async def test_backward_incompatible(self, mock_to_thread, mock_get_cmd, tmp_pat class TestGenerateSpecmaticTests: """Test suite for generate_specmatic_tests function.""" - @pytest.mark.asyncio @patch("specfact_cli.integrations.specmatic._get_specmatic_command") @patch("specfact_cli.integrations.specmatic.asyncio.to_thread") - async def test_generate_tests_success(self, mock_to_thread, mock_get_cmd, tmp_path): + def test_generate_tests_success(self, mock_to_thread, mock_get_cmd, tmp_path): """Test successful test generation.""" mock_get_cmd.return_value = ["specmatic"] mock_result = MagicMock(returncode=0, stderr="") @@ -255,7 +248,7 @@ async def test_generate_tests_success(self, mock_to_thread, mock_get_cmd, tmp_pa spec_path.write_text("openapi: 3.0.0\n") output_dir = tmp_path / "tests" - output = await generate_specmatic_tests(spec_path, output_dir) + output = asyncio.run(generate_specmatic_tests(spec_path, output_dir)) assert output == output_dir mock_to_thread.assert_called_once() @@ -264,12 +257,11 @@ async def test_generate_tests_success(self, mock_to_thread, mock_get_cmd, tmp_pa class TestCreateMockServer: """Test suite for create_mock_server function.""" - @pytest.mark.asyncio @patch("builtins.__import__") @patch("specfact_cli.integrations.specmatic._get_specmatic_command") @patch("specfact_cli.integrations.specmatic.asyncio.to_thread") @patch("specfact_cli.integrations.specmatic.asyncio.sleep") - async def test_create_mock_server(self, mock_sleep, mock_to_thread, mock_get_cmd, mock_import, tmp_path): + def test_create_mock_server(self, mock_sleep, mock_to_thread, mock_get_cmd, mock_import, tmp_path): """Test mock server creation.""" import socket as real_socket @@ -307,7 +299,7 @@ def import_side_effect(name, *args, **kwargs): spec_path = tmp_path / "openapi.yaml" spec_path.write_text("openapi: 3.0.0\n") - mock_server = await create_mock_server(spec_path, port=9000, strict_mode=True) + mock_server = asyncio.run(create_mock_server(spec_path, port=9000, strict_mode=True)) assert mock_server.port == 9000 assert mock_server.spec_path == spec_path diff --git a/tests/unit/specfact_cli/adapters/test_adapter_retry_policy_usage.py b/tests/unit/specfact_cli/adapters/test_adapter_retry_policy_usage.py new file mode 100644 index 00000000..7dd2853b --- /dev/null +++ b/tests/unit/specfact_cli/adapters/test_adapter_retry_policy_usage.py @@ -0,0 +1,96 @@ +"""Tests for shared retry policy usage across adapter write operations.""" + +from __future__ import annotations + +from specfact_cli.adapters.ado import AdoAdapter +from specfact_cli.adapters.github import GitHubAdapter + + +class _Resp: + def __init__(self, payload: dict) -> None: + self._payload = payload + + def raise_for_status(self) -> None: + return None + + def json(self) -> dict: + return self._payload + + +def test_github_add_issue_comment_uses_duplicate_safe_retry(monkeypatch) -> None: + adapter = GitHubAdapter(repo_owner="nold-ai", repo_name="specfact-cli", api_token="token", use_gh_cli=False) + + captured: dict[str, object] = {} + + def _capture_retry(_request_callable, **kwargs): + captured.update(kwargs) + return _Resp({}) + + monkeypatch.setattr(adapter, "_request_with_retry", _capture_retry) + + adapter._add_issue_comment("nold-ai", "specfact-cli", 42, "hello") + + assert captured.get("retry_on_ambiguous_transport") is False + + +def test_github_update_issue_status_uses_default_retry_mode(monkeypatch) -> None: + adapter = GitHubAdapter(repo_owner="nold-ai", repo_name="specfact-cli", api_token="token", use_gh_cli=False) + + captured: dict[str, object] = {} + + def _capture_retry(_request_callable, **kwargs): + captured.update(kwargs) + return _Resp({"number": 42, "html_url": "https://example.test/42", "state": "open"}) + + monkeypatch.setattr(adapter, "_request_with_retry", _capture_retry) + monkeypatch.setattr(adapter, "_get_status_comment", lambda *_args, **_kwargs: "") + + proposal_data = { + "status": "in-progress", + "title": "Change title", + "source_tracking": {"source_id": 42}, + } + + result = adapter._update_issue_status(proposal_data, "nold-ai", "specfact-cli") + + assert result["issue_number"] == 42 + assert "retry_on_ambiguous_transport" not in captured + + +def test_ado_add_work_item_comment_uses_duplicate_safe_retry(monkeypatch) -> None: + adapter = AdoAdapter(org="nold-ai", project="specfact-cli", api_token="token") + + captured: dict[str, object] = {} + + def _capture_retry(_request_callable, **kwargs): + captured.update(kwargs) + return _Resp({"id": 7}) + + monkeypatch.setattr(adapter, "_request_with_retry", _capture_retry) + + result = adapter._add_work_item_comment("nold-ai", "specfact-cli", 101, "comment") + + assert result["comment_id"] == 7 + assert captured.get("retry_on_ambiguous_transport") is False + + +def test_ado_update_work_item_status_uses_default_retry_mode(monkeypatch) -> None: + adapter = AdoAdapter(org="nold-ai", project="specfact-cli", api_token="token") + + captured: dict[str, object] = {} + + def _capture_retry(_request_callable, **kwargs): + captured.update(kwargs) + return _Resp({"_links": {"html": {"href": "https://example.test/workitem/101"}}}) + + monkeypatch.setattr(adapter, "_request_with_retry", _capture_retry) + + proposal_data = { + "status": "in-progress", + "source_tracking": {"source_id": 101}, + } + + result = adapter._update_work_item_status(proposal_data, "nold-ai", "specfact-cli") + + assert result["work_item_id"] == 101 + assert "retry_on_ambiguous_transport" not in captured diff --git a/tests/unit/specfact_cli/adapters/test_ado_parent_candidate_filtering.py b/tests/unit/specfact_cli/adapters/test_ado_parent_candidate_filtering.py new file mode 100644 index 00000000..bb557945 --- /dev/null +++ b/tests/unit/specfact_cli/adapters/test_ado_parent_candidate_filtering.py @@ -0,0 +1,76 @@ +"""Regression tests for ADO parent-candidate filtering behavior.""" + +from __future__ import annotations + +from specfact_cli.adapters.ado import AdoAdapter +from specfact_cli.models.backlog_item import BacklogItem + + +def _item(item_id: str, iteration: str | None) -> BacklogItem: + return BacklogItem( + id=item_id, + provider="ado", + url=f"https://example.test/{item_id}", + title=f"Item {item_id}", + state="open", + iteration=iteration, + ) + + +def test_resolve_sprint_filter_skips_implicit_current_iteration_when_disabled(monkeypatch) -> None: + adapter = AdoAdapter(org="nold-ai", project="specfact-cli", api_token="token") + + monkeypatch.setattr(adapter, "_get_current_iteration", lambda: "Project\\Sprint 42") + + items = [_item("1", None), _item("2", "Project\\Sprint 41")] + + resolved, filtered = adapter._resolve_sprint_filter(None, items, apply_current_when_missing=False) + + assert resolved is None + assert [item.id for item in filtered] == ["1", "2"] + + +def test_resolve_sprint_filter_uses_current_iteration_by_default(monkeypatch) -> None: + adapter = AdoAdapter(org="nold-ai", project="specfact-cli", api_token="token") + + monkeypatch.setattr(adapter, "_get_current_iteration", lambda: "Project\\Sprint 42") + + items = [_item("1", None), _item("2", "Project\\Sprint 42"), _item("3", "Project\\Sprint 41")] + + resolved, filtered = adapter._resolve_sprint_filter(None, items, apply_current_when_missing=True) + + assert resolved == "Project\\Sprint 42" + assert [item.id for item in filtered] == ["2"] + + +def test_fetch_backlog_items_wiql_omits_iteration_when_current_default_disabled(monkeypatch) -> None: + import specfact_cli.adapters.ado as ado_module + from specfact_cli.backlog.filters import BacklogFilters + + adapter = AdoAdapter(org="nold-ai", project="specfact-cli", api_token="token") + + captured_query: dict[str, str] = {} + + class _Resp: + status_code = 200 + ok = True + text = "" + + def raise_for_status(self) -> None: + return None + + def json(self) -> dict: + return {"workItems": []} + + def _fake_post(url: str, headers: dict, json: dict, timeout: int): + _ = url, headers, timeout + captured_query["query"] = json.get("query", "") + return _Resp() + + monkeypatch.setattr(adapter, "_get_current_iteration", lambda: r"Project\Sprint 42") + monkeypatch.setattr(ado_module.requests, "post", _fake_post) + + filters = BacklogFilters(use_current_iteration_default=False) + _ = adapter.fetch_backlog_items(filters) + + assert "System.IterationPath" not in captured_query.get("query", "") diff --git a/tests/unit/specfact_cli/adapters/test_backlog_retry.py b/tests/unit/specfact_cli/adapters/test_backlog_retry.py new file mode 100644 index 00000000..ebb7020c --- /dev/null +++ b/tests/unit/specfact_cli/adapters/test_backlog_retry.py @@ -0,0 +1,115 @@ +"""Unit tests for centralized backlog adapter retry behavior.""" + +from __future__ import annotations + +import requests + +from specfact_cli.adapters.backlog_base import BacklogAdapterMixin + + +class _DummyRetryAdapter(BacklogAdapterMixin): + def map_backlog_status_to_openspec(self, status: str) -> str: + return status + + def map_openspec_status_to_backlog(self, status: str) -> str | list[str]: + return status + + def create_issue(self, project_id: str, payload: dict[str, object]) -> dict[str, object]: + _ = project_id, payload + return {} + + def extract_change_proposal_data(self, item_data: dict[str, object]) -> dict[str, object]: + _ = item_data + return {} + + +class _Response: + def __init__(self, status_code: int, payload: dict | None = None) -> None: + self.status_code = status_code + self._payload = payload or {} + + def raise_for_status(self) -> None: + if self.status_code >= 400: + error = requests.HTTPError(f"HTTP {self.status_code}") + error.response = self + raise error + + def json(self) -> dict: + return self._payload + + +def test_request_with_retry_retries_transient_status_then_succeeds(monkeypatch) -> None: + adapter = _DummyRetryAdapter() + monkeypatch.setattr("specfact_cli.adapters.backlog_base.time.sleep", lambda _seconds: None) + + calls = {"count": 0} + + def _request() -> _Response: + calls["count"] += 1 + if calls["count"] < 3: + return _Response(503) + return _Response(200, {"ok": True}) + + response = adapter._request_with_retry(_request) + + assert response.status_code == 200 + assert calls["count"] == 3 + + +def test_request_with_retry_does_not_retry_non_transient_http_error(monkeypatch) -> None: + adapter = _DummyRetryAdapter() + monkeypatch.setattr("specfact_cli.adapters.backlog_base.time.sleep", lambda _seconds: None) + + calls = {"count": 0} + + def _request() -> _Response: + calls["count"] += 1 + return _Response(400) + + try: + adapter._request_with_retry(_request) + except requests.HTTPError as error: + assert error.response is not None + assert error.response.status_code == 400 + else: + raise AssertionError("Expected HTTPError") + + assert calls["count"] == 1 + + +def test_request_with_retry_retries_connection_error_then_succeeds(monkeypatch) -> None: + adapter = _DummyRetryAdapter() + monkeypatch.setattr("specfact_cli.adapters.backlog_base.time.sleep", lambda _seconds: None) + + calls = {"count": 0} + + def _request() -> _Response: + calls["count"] += 1 + if calls["count"] < 2: + raise requests.ConnectionError("network") + return _Response(200) + + response = adapter._request_with_retry(_request) + + assert response.status_code == 200 + assert calls["count"] == 2 + + +def test_request_with_retry_does_not_retry_transport_when_ambiguous_disabled(monkeypatch) -> None: + adapter = _DummyRetryAdapter() + monkeypatch.setattr("specfact_cli.adapters.backlog_base.time.sleep", lambda _seconds: None) + + calls = {"count": 0} + + def _request() -> _Response: + calls["count"] += 1 + raise requests.Timeout("timeout") + + try: + adapter._request_with_retry(_request, retry_on_ambiguous_transport=False) + except requests.Timeout: + pass + else: + raise AssertionError("Expected Timeout") + + assert calls["count"] == 1