diff --git a/CHANGELOG.md b/CHANGELOG.md index 47cbedfa..c2421fcc 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -9,16 +9,36 @@ All notable changes to this project will be documented in this file. --- -## [Unreleased] +## [0.43.1] - Unreleased + +### Changed + +- **Packaging:** Workflow slash-command prompts (`specfact.*.md`) are no longer duplicated in the core wheel; canonical copies live in **specfact-cli-modules** bundle packages under each bundle’s `resources/prompts/`. Install bundles (or use a dev repo checkout with `resources/prompts/`) for `specfact init ide` prompt export. +- IDE template drift checks on startup resolve source templates via the same installed-module discovery path as `specfact init ide`, not a single core `resources/prompts` directory inside the package. + +--- + +## [0.43.0] - 2026-03-28 ### Added +- Spec-Kit v0.4.x adapter alignment: extension catalog detection (`scan_extensions`), preset scanning (`scan_presets`), hook event detection (`scan_hook_events`), and 3-tier version detection (CLI → heuristic → None). +- `ToolCapabilities` model expanded with `extensions`, `extension_commands`, `presets`, `hook_events`, and `detected_version_source` fields for v0.4.x metadata. +- BridgeConfig presets (`preset_speckit_classic`, `preset_speckit_specify`, `preset_speckit_modern`) now map all 7 Spec-Kit slash commands: `/speckit.specify`, `/speckit.plan`, `/speckit.tasks`, `/speckit.implement`, `/speckit.constitution`, `/speckit.clarify`, `/speckit.analyze`. +- 44 new unit/integration tests covering extension catalogs, version detection, preset scanning, hook events, and full `get_capabilities()` flow. - CI: `scripts/check-docs-commands.py` and `scripts/check-cross-site-links.py` with `hatch run docs-validate` (command examples vs CLI; modules URLs warn-only when live site lags); workflow runs validation plus `tests/unit/docs/`. 
- Documentation: `docs/reference/documentation-url-contract.md` and navigation links describing how core and modules published URLs relate; OpenSpec spec updates for cross-site linking expectations. - Documentation: converted 20 module-owned guide and tutorial pages under `docs/` to thin handoff summaries with canonical links to `modules.specfact.io`; added `docs/reference/core-to-modules-handoff-urls.md` mapping core permalinks to modules URLs. +### Changed + +- `SpecKitAdapter.get_capabilities()` refactored with helper methods (`_detect_layout`, `_detect_version`, `_extract_extension_fields`) to reduce cyclomatic complexity. +- Logging in `speckit.py` and `speckit_scanner.py` switched from `logging.getLogger` to `get_bridge_logger` per production command path convention. + +--- + ## [0.42.6] - 2026-03-26 ### Fixed diff --git a/docs/guides/adapter-development.md b/docs/guides/adapter-development.md index bf628730..38ae3f52 100644 --- a/docs/guides/adapter-development.md +++ b/docs/guides/adapter-development.md @@ -34,6 +34,10 @@ All methods should preserve runtime contracts (`@icontract`) and runtime type ch - `tool`, `version`, `layout`, `specs_dir` - `supported_sync_modes` - `has_external_config`, `has_custom_hooks` +- `extensions`, `extension_commands` — detected tool extensions and their provided commands +- `presets` — active preset names (e.g., from `presets/` directory) +- `hook_events` — detected hook event types (e.g., `before_task`, `after_task`) +- `detected_version_source` — how version was detected: `"cli"`, `"heuristic"`, or `None` Sync selection and safe behavior depend on this model. diff --git a/docs/guides/ai-ide-workflow.md b/docs/guides/ai-ide-workflow.md index 66b22531..a46023ba 100644 --- a/docs/guides/ai-ide-workflow.md +++ b/docs/guides/ai-ide-workflow.md @@ -47,7 +47,7 @@ specfact init ide --ide cursor --install-deps **What it does**: 1. Detects your IDE (or uses `--ide` flag) -2. 
Copies prompt templates from `resources/prompts/` to IDE-specific location +2. Copies prompt templates from installed bundle modules (or an optional dev checkout under `resources/prompts/`) to the IDE-specific location 3. Creates/updates IDE settings if needed 4. Makes slash commands available in your IDE 5. Optionally installs required packages (`beartype`, `icontract`, `crosshair-tool`, `pytest`) diff --git a/docs/guides/integrations-overview.md b/docs/guides/integrations-overview.md index 646dc43c..b36d6084 100644 --- a/docs/guides/integrations-overview.md +++ b/docs/guides/integrations-overview.md @@ -36,10 +36,12 @@ SpecFact CLI integrations fall into four main categories: **What it provides**: -- ✅ Interactive slash commands (`/speckit.specify`, `/speckit.plan`) with AI assistance +- ✅ Interactive slash commands (`/speckit.specify`, `/speckit.plan`, `/speckit.tasks`, `/speckit.implement`, `/speckit.constitution`, `/speckit.clarify`, `/speckit.analyze`) with AI assistance - ✅ Rapid prototyping workflow: spec → plan → tasks → code - ✅ Constitution and planning for new features - ✅ IDE integration with CoPilot chat +- ✅ Extension ecosystem — 46+ community extensions with pluggable presets +- ✅ Version-aware detection — SpecFact auto-detects Spec-Kit version, extensions, presets, and hook events **When to use**: diff --git a/docs/guides/speckit-comparison.md b/docs/guides/speckit-comparison.md index 52ea1551..f9396026 100644 --- a/docs/guides/speckit-comparison.md +++ b/docs/guides/speckit-comparison.md @@ -36,6 +36,8 @@ permalink: /guides/speckit-comparison/ | **GitHub integration** | ✅ Native slash commands | ✅ GitHub Actions + CLI | Spec-Kit for native integration | | **Learning curve** | ✅ Low (markdown + slash commands) | ⚠️ Medium (decorators + contracts) | Spec-Kit for ease of use | | **High-risk brownfield** | ⚠️ Good documentation | ✅ Formal verification | **SpecFact for high-risk** | +| **Extension awareness** | ✅ 46+ community extensions | ✅ 
Auto-detects extensions, presets, hooks | SpecFact bridges extension metadata | +| **Version detection** | N/A | ✅ CLI + heuristic detection (v0.4.x) | SpecFact adapts to detected version | | **Free tier** | ✅ Open-source | ✅ Apache 2.0 | Both free | --- diff --git a/docs/guides/speckit-journey.md b/docs/guides/speckit-journey.md index 9208a6ec..b6dc562b 100644 --- a/docs/guides/speckit-journey.md +++ b/docs/guides/speckit-journey.md @@ -18,7 +18,7 @@ permalink: /guides/speckit-journey/ Spec-Kit is **excellent** for: -- ✅ **Interactive Specification** - Slash commands (`/speckit.specify`, `/speckit.plan`) with AI assistance +- ✅ **Interactive Specification** - Slash commands (`/speckit.specify`, `/speckit.plan`, `/speckit.tasks`, `/speckit.implement`, `/speckit.constitution`, `/speckit.clarify`, `/speckit.analyze`) with AI assistance - ✅ **Rapid Prototyping** - Quick spec → plan → tasks → code workflow for **NEW features** - ✅ **Learning & Exploration** - Great for understanding state machines, contracts, requirements - ✅ **IDE Integration** - CoPilot chat makes it accessible to less technical developers diff --git a/openspec/CHANGE_ORDER.md b/openspec/CHANGE_ORDER.md index 615446a5..768c7f1d 100644 --- a/openspec/CHANGE_ORDER.md +++ b/openspec/CHANGE_ORDER.md @@ -275,6 +275,21 @@ Cross-repo dependency: `docs-07-core-handoff-conversion` depends on `specfact-cl |--------|-------|---------------|----------|------------| | openspec | 01 | openspec-01-intent-trace | [#350](https://github.com/nold-ai/specfact-cli/issues/350) | #238 (requirements-01); #239 (requirements-02) | +### Spec-Kit v0.4.x alignment (spec-kit integration review, 2026-03-27) + +Spec-Kit has evolved to v0.4.3 with 46 extensions, pluggable presets, 7+ slash commands. These changes update the adapter interface and add change proposal bridging. 
+ +| Module | Order | Change folder | GitHub # | Blocked by | +|--------|-------|---------------|----------|------------| +| speckit | 02 | speckit-02-v04-adapter-alignment | [#453](https://github.com/nold-ai/specfact-cli/issues/453) | — | +| speckit | 03 | speckit-03-change-proposal-bridge *(specfact-cli-modules)* | [modules#116](https://github.com/nold-ai/specfact-cli-modules/issues/116) | speckit-02 (#453) | + +**Cross-repo note**: speckit-03 lives in `nold-ai/specfact-cli-modules` but depends on speckit-02 in this repo (ToolCapabilities extension fields). + +**Updated proposals** (spec-kit interop sections added 2026-03-27): +- `sync-01-unified-kernel`: Added spec-kit extension interop — sync kernel detects external sync actors from spec-kit reconcile/sync/iterate extensions +- `requirements-03-backlog-sync`: Added spec-kit backlog extension awareness — prevents duplicate issue creation when spec-kit Jira/ADO/Linear extensions have already created issues + ### CLI end-user validation (validation gap plan, 2026-02-19) | Module | Order | Change folder | GitHub # | Blocked by | @@ -360,6 +375,8 @@ Set these in GitHub so issue dependencies are explicit. 
Optional dependencies ar | [#350](https://github.com/nold-ai/specfact-cli/issues/350) | openspec-01 intent trace | #238, #239 | | [#254](https://github.com/nold-ai/specfact-cli/issues/254) | integration-01 cross-change contracts | #237, #239, #240, #241, #246 | | [#255](https://github.com/nold-ai/specfact-cli/issues/255) | dogfooding-01 full-chain e2e proof | #239, #240, #241, #242, #247 | +| [#453](https://github.com/nold-ai/specfact-cli/issues/453) | speckit-02 v0.4.x adapter alignment | — | +| [modules#116](https://github.com/nold-ai/specfact-cli-modules/issues/116) | speckit-03 change proposal bridge *(modules repo)* | speckit-02 (#453) | | TBD | doc-frontmatter-schema | — | | TBD | ci-docs-sync-check | doc-frontmatter-schema | diff --git a/openspec/changes/packaging-02-cross-platform-runtime-and-module-resources/TDD_EVIDENCE.md b/openspec/changes/packaging-02-cross-platform-runtime-and-module-resources/TDD_EVIDENCE.md index f54e68a9..5f3373ba 100644 --- a/openspec/changes/packaging-02-cross-platform-runtime-and-module-resources/TDD_EVIDENCE.md +++ b/openspec/changes/packaging-02-cross-platform-runtime-and-module-resources/TDD_EVIDENCE.md @@ -37,3 +37,33 @@ HATCH_DATA_DIR=/tmp/hatch-data HATCH_CACHE_DIR=/tmp/hatch-cache VIRTUALENV_OVERR - Result: passed. - Summary: `specfact code review run` completed with no findings on the shipped production files. + +## Task 3.5 — Remove bundle workflow prompts from core wheel (2026-03-28) + +- Change: drop `resources/prompts` from `[tool.hatch.build.targets.wheel.force-include]`, delete repo-root `resources/prompts/`, align startup drift checks and init template resolution with `discover_prompt_template_files`, bump **0.43.1**. + +### Pre-implementation failing run (Task 3.5) + +- Timestamp: 2026-03-28T00:18:00+01:00 (local) +- Command: + +```bash +cd /home/dom/git/nold-ai/specfact-cli-worktrees/chore/packaging-02-finish-core-prompt-cleanup +hatch run smart-test-full +``` + +- Result: failed. 
+- Failure summary: exit code 1 — tests and/or checks failed after removing `resources/prompts` from the wheel and repo without updating startup checks, init template resolution, and tests (expected until implementation was completed). + +### Post-change verification (Task 3.5) + +```bash +cd /home/dom/git/nold-ai/specfact-cli-worktrees/chore/packaging-02-finish-core-prompt-cleanup +hatch env create +hatch run format && hatch run type-check && hatch run contract-test +hatch run smart-test-full +``` + +- Timestamp: 2026-03-28T00:22:00+01:00 (local) +- Command: `hatch run smart-test-full` (from worktree `chore/packaging-02-finish-core-prompt-cleanup`) +- Result: passed (exit 0). diff --git a/openspec/changes/packaging-02-cross-platform-runtime-and-module-resources/proposal.md b/openspec/changes/packaging-02-cross-platform-runtime-and-module-resources/proposal.md index 93fa5298..b1325d57 100644 --- a/openspec/changes/packaging-02-cross-platform-runtime-and-module-resources/proposal.md +++ b/openspec/changes/packaging-02-cross-platform-runtime-and-module-resources/proposal.md @@ -36,5 +36,5 @@ None. - **GitHub Issue**: #441 - **Issue URL**: -- **Last Synced Status**: proposed +- **Last Synced Status**: implementation-complete (task 3.5 core prompt removal; pending archive) - **Sanitized**: false diff --git a/openspec/changes/packaging-02-cross-platform-runtime-and-module-resources/tasks.md b/openspec/changes/packaging-02-cross-platform-runtime-and-module-resources/tasks.md index 5631fdcb..717634ed 100644 --- a/openspec/changes/packaging-02-cross-platform-runtime-and-module-resources/tasks.md +++ b/openspec/changes/packaging-02-cross-platform-runtime-and-module-resources/tasks.md @@ -18,7 +18,7 @@ - [x] 3.2 Replace brittle path-injection behavior with installation-scoped runtime/module resolution and explicit compatibility diagnostics. 
- [x] 3.3 Refactor `specfact init ide` to build a prompt catalog from installed module resource locations rather than `specfact_cli/resources/prompts`. - [x] 3.4 Refactor core init/install resource copying to resolve module-owned templates, starting with backlog field mapping templates, from installed bundle packages. -- [ ] 3.5 Remove or relocate bundle-owned prompt/resources from core packaging so ownership matches installed modules. +- [x] 3.5 Remove or relocate bundle-owned prompt/resources from core packaging so ownership matches installed modules. ## 4. Validation And Documentation diff --git a/openspec/changes/requirements-03-backlog-sync/proposal.md b/openspec/changes/requirements-03-backlog-sync/proposal.md index c3768699..48db41e9 100644 --- a/openspec/changes/requirements-03-backlog-sync/proposal.md +++ b/openspec/changes/requirements-03-backlog-sync/proposal.md @@ -19,6 +19,7 @@ When backlog items change, requirements aren't updated. When requirements change - **NEW**: Backlog adapter extension: adapters provide `extract_requirements_fields()` and `update_requirements_fields()` methods for bidirectional sync - **EXTEND**: Requirements module (requirements-02) extended with sync commands - **DESIGN DECISION**: v1 starts with pull-first (backlog → requirements) as primary direction; push (requirements → backlog) is preview-only and requires explicit `--write` confirmation via patch-mode +- **EXTEND**: Spec-Kit backlog extension awareness — before creating issues during push (requirements → backlog), the sync SHALL query `ToolCapabilities.extension_commands` (from speckit-02) to detect active spec-kit backlog extensions (Jira, ADO, Linear, GitHub Projects, Trello). When a spec-kit backlog extension is active, the sync SHALL scan spec-kit feature `tasks.md` files for existing issue references (e.g., `PROJ-123`, `AB#456`) and import them as pre-existing mappings. 
Issue creation is skipped for tasks that already have spec-kit extension mappings, preventing duplicate issues. This detection is implemented in `speckit-03-change-proposal-bridge` (specfact-cli-modules) and consumed here via the adapter interface. ## Capabilities ### New Capabilities @@ -27,8 +28,8 @@ When backlog items change, requirements aren't updated. When requirements change ### Modified Capabilities -- `backlog-adapter`: Extended with requirements field extraction and update methods for bidirectional sync -- `requirements-module`: Extended with sync and drift commands +- `backlog-adapter`: Extended with requirements field extraction and update methods for bidirectional sync; extended with spec-kit backlog extension issue mapping import +- `requirements-module`: Extended with sync and drift commands; extended with spec-kit duplicate issue prevention --- diff --git a/openspec/changes/speckit-02-v04-adapter-alignment/.openspec.yaml b/openspec/changes/speckit-02-v04-adapter-alignment/.openspec.yaml new file mode 100644 index 00000000..a61e7c11 --- /dev/null +++ b/openspec/changes/speckit-02-v04-adapter-alignment/.openspec.yaml @@ -0,0 +1,2 @@ +schema: spec-driven +created: 2026-03-27 diff --git a/openspec/changes/speckit-02-v04-adapter-alignment/CHANGE_VALIDATION.md b/openspec/changes/speckit-02-v04-adapter-alignment/CHANGE_VALIDATION.md new file mode 100644 index 00000000..285aad29 --- /dev/null +++ b/openspec/changes/speckit-02-v04-adapter-alignment/CHANGE_VALIDATION.md @@ -0,0 +1,82 @@ +# Change Validation Report: speckit-02-v04-adapter-alignment + +**Validation Date**: 2026-03-27 +**Change Proposal**: [proposal.md](./proposal.md) +**Validation Method**: Dry-run simulation — interface analysis, dependency graph, format compliance + +## Executive Summary + +- Breaking Changes: 0 detected +- Dependent Files: 8 (6 in specfact-cli, 2 in specfact-cli-modules) +- Impact Level: Low +- Validation Result: Pass +- User Decision: N/A + +## Breaking Changes Detected +
+None. All changes are additive: + +- `ToolCapabilities` extended with 5 optional fields (all default to `None`) — existing constructors unaffected +- `BridgeConfig.preset_speckit_*()` methods expanded with additional command mappings — existing commands preserved, new ones added +- `SpecKitScanner` receives new methods (`scan_extensions`, `scan_presets`, `scan_hook_events`) — no existing methods modified +- `SpecKitAdapter.get_capabilities()` enhanced but returns same type with same existing fields + +## Dependencies Affected + +### No Critical Updates Required + +All dependent files continue working without modification: + +| File | Impact | +|---|---| +| `src/specfact_cli/adapters/base.py` | No impact — imports ToolCapabilities, doesn't access new fields | +| `src/specfact_cli/adapters/ado.py` | No impact — constructs ToolCapabilities with existing fields only | +| `src/specfact_cli/adapters/github.py` | No impact — same as ado | +| `src/specfact_cli/adapters/openspec.py` | No impact — same as ado | +| `src/specfact_cli/sync/bridge_probe.py` | No impact — consumes ToolCapabilities, new fields optional | +| `src/specfact_cli/sync/__init__.py` | No impact — re-export only | +| `specfact-cli-modules/.../bridge_probe.py` | No impact — consumes ToolCapabilities | +| `specfact-cli-modules/.../test_bridge_probe.py` | No impact — tests existing behavior | + +### Recommended Updates (downstream consumers, not required for this change) + +- `sync-01-unified-kernel`: Should consume `extension_commands` for external sync actor detection (proposal updated) +- `requirements-03-backlog-sync`: Should consume `extension_commands` for backlog extension awareness (proposal updated) + +## Impact Assessment + +- **Code Impact**: 4 files modified (speckit.py, capabilities.py, bridge.py, speckit_scanner.py), all additive +- **Test Impact**: Existing tests unaffected; new tests required for new methods and fields +- **Documentation Impact**: 2 docs updated (speckit-comparison.md, 
speckit-journey.md) +- **Release Impact**: Minor (new capabilities, no breaking changes) + +## Format Validation + +- **proposal.md Format**: Pass + - Has Why, What Changes, Capabilities (2 new + 2 modified), Impact sections + - Capabilities correctly map to spec files +- **tasks.md Format**: Pass + - 9 numbered groups with checkbox tasks + - Includes contract tasks (9.1), test tasks throughout, quality gates (9.2) + - TDD evidence task (9.3) + - Missing: explicit git worktree creation/PR tasks (acceptable — this is a core-lib change, not a module) +- **specs Format**: Pass + - 4 spec files: speckit-extension-catalog, speckit-version-detection, bridge-adapter, bridge-registry + - All use Given/When/Then format with `####` scenario headers + - ADDED and MODIFIED markers used correctly +- **design.md Format**: Pass + - Context, Goals/Non-Goals, 6 Decisions with rationale and alternatives, Risks/Trade-offs, Open Questions +- **Config.yaml Compliance**: Pass + +## OpenSpec Validation + +- **Status**: Pass +- **Command**: `openspec validate speckit-02-v04-adapter-alignment --strict` +- **Issues Found/Fixed**: 0 + +## Cross-Change Conflict Analysis + +- **No conflicts** with other pending changes in specfact-cli +- **Enables** speckit-03-change-proposal-bridge (specfact-cli-modules) — provides ToolCapabilities.extension_commands +- **Enhances** sync-01-unified-kernel — provides detect_external_sync_actors() input data +- **Enhances** requirements-03-backlog-sync — provides backlog extension detection input diff --git a/openspec/changes/speckit-02-v04-adapter-alignment/TDD_EVIDENCE.md b/openspec/changes/speckit-02-v04-adapter-alignment/TDD_EVIDENCE.md new file mode 100644 index 00000000..359e5862 --- /dev/null +++ b/openspec/changes/speckit-02-v04-adapter-alignment/TDD_EVIDENCE.md @@ -0,0 +1,44 @@ +# TDD Evidence: speckit-02-v04-adapter-alignment + +## Implementation Order + +Production code for tasks 1.1–7.2 was written first (ToolCapabilities fields, scanner methods, 
version detection, bridge presets, get_capabilities integration). Tests were then written targeting all new behavior. + +## Post-Implementation Test Run (passing) + +**Timestamp:** 2026-03-27T23:13:50Z + +**Command:** `hatch test -- tests/unit/models/test_capabilities.py tests/unit/models/test_bridge.py tests/unit/importers/test_speckit_scanner.py tests/unit/adapters/test_speckit.py -v` + +**Result:** 110 passed in 4.90s + +New tests added: +- `TestToolCapabilitiesV04Fields` — 8 tests (backward compat, all new fields) +- `TestScanExtensions` — 7 tests (catalog parsing, ignore, malformed JSON, merge) +- `TestScanPresets` — 4 tests (JSON, directory, malformed fallback) +- `TestScanHookEvents` — 4 tests (pattern detection, sorting, edge cases) +- `TestVersionDetection` — 8 tests (heuristics, CLI mock, priority) +- `TestGetCapabilitiesV04` — 7 tests (extensions, presets, hooks, version, cross-repo, legacy) +- `TestBridgeConfigPresets` — 4 new parametrized tests (7-command set validation) + +## Full Suite Run (passing) + +**Timestamp:** 2026-03-27T23:13:50Z + +**Command:** `hatch test --cover -v` + +**Result:** 2248 passed, 9 skipped in 171.02s + +## Quality Gates + +| Gate | Result | Timestamp | +|------|--------|-----------| +| `hatch run format` | All checks passed | 2026-03-27T23:12Z | +| `hatch run type-check` | 0 errors, 1437 warnings | 2026-03-27T23:12Z | +| `hatch run contract-test` | Passed (cached) | 2026-03-27T23:12Z | +| `hatch test --cover -v` | 2248 passed, 0 failed | 2026-03-27T23:13Z | +| `specfact code review run` | PASS, Score 120, 0 findings | 2026-03-27T23:13Z | + +## Note + +Production code was written before tests in this change (not strict TDD red-green-refactor). The OpenSpec change was created with specs and design first, followed by implementation and then test coverage. All new public methods have `@beartype` and `@icontract` contracts as primary validation, with unit tests as secondary coverage per project conventions. 
diff --git a/openspec/changes/speckit-02-v04-adapter-alignment/design.md b/openspec/changes/speckit-02-v04-adapter-alignment/design.md new file mode 100644 index 00000000..61d85b9d --- /dev/null +++ b/openspec/changes/speckit-02-v04-adapter-alignment/design.md @@ -0,0 +1,123 @@ +## Context + +The `SpecKitAdapter` (in `src/specfact_cli/adapters/speckit.py`) was built when spec-kit had a simple layout: `specs/` or `.specify/specs/` directories containing `spec.md`, `plan.md`, `tasks.md`, and an optional `.specify/memory/constitution.md`. The adapter detects these directories, delegates parsing to `SpecKitScanner`, conversion to `SpecKitConverter`, and exposes `ToolCapabilities` with `version=None` and two sync modes. + +Spec-Kit v0.4.3 now has: +- 7+ slash commands (was 4) +- 46 community extensions with their own commands, loaded from `extensions/catalog.community.json` +- A pluggable preset system in `presets/` with catalog resolution (v0.3.0+) +- Hook events (before/after task completion) wired into templates +- `specify status --json` and `specify doctor` for version/health reporting +- Auto-registered AI skills with native fallback +- `.extensionignore` for extension exclusion + +The adapter, scanner, capabilities model, and bridge config presets all need updates to model this expanded surface area. 
+ +## Goals / Non-Goals + +**Goals:** +- Detect and model spec-kit extensions installed in a target repository +- Parse extension catalogs to expose extension-provided commands to SpecFact sync +- Detect spec-kit version via CLI probe or directory heuristics +- Detect active preset configuration and adjust artifact mappings +- Expand `ToolCapabilities` with extension, preset, and hook metadata +- Expand `BridgeConfig` command mappings for all 7 spec-kit slash commands +- Maintain backward compatibility with repos using older spec-kit versions (pre-0.3.0) + +**Non-Goals:** +- Executing spec-kit extensions from SpecFact (we detect and model, not invoke) +- Managing spec-kit presets (read-only detection) +- Replacing spec-kit's own sync/reconcile extensions (we coordinate, not compete) +- Adding spec-kit CLI as a hard dependency + +## Decisions + +### D1: Extension catalog detection via filesystem only + +Parse `extensions/catalog.community.json` and `extensions/catalog.core.json` as JSON files from the repo. Do not invoke `specify` CLI commands to list extensions. + +**Rationale**: Offline-first constraint. The CLI may not be installed. Extension catalogs are static JSON files committed to the repo. This also avoids subprocess overhead during detection. + +**Alternative considered**: Invoking `specify status --json` to get active extensions. Rejected because it requires the CLI to be installed and doesn't work for cross-repo detection where only the filesystem is available. + +### D2: Version detection with graceful degradation + +Three-tier version detection strategy: +1. **CLI probe** (best): Run `specify --version` if CLI is on PATH — returns exact version +2. **Directory heuristics** (good): `presets/` dir → `>=0.3.0`; `extensions/` dir → `>=0.2.0`; `.specify/` dir only → `>=0.1.0` +3. **Unknown** (fallback): `version=None` — same as today, no features gated + +**Rationale**: CLI probe is most accurate but optional. 
Heuristics cover cross-repo and offline scenarios. The fallback preserves backward compatibility. + +**Alternative considered**: Parsing a version file inside `.specify/`. Rejected because spec-kit does not write a version marker file. + +### D3: ToolCapabilities extension via optional typed fields + +Add optional fields to the existing `ToolCapabilities` dataclass rather than creating a subclass: + +```python +@dataclass +class ToolCapabilities: + tool: str + version: str | None = None + layout: str = "classic" + specs_dir: str = "specs" + has_external_config: bool = False + has_custom_hooks: bool = False + supported_sync_modes: list[str] | None = None + # New fields (v0.4.x alignment) + extensions: list[str] | None = None # Detected extension names + extension_commands: dict[str, list[str]] | None = None # Extension → commands mapping + presets: list[str] | None = None # Active preset names + hook_events: list[str] | None = None # Detected hook event types + detected_version_source: str | None = None # "cli", "heuristic", or None +``` + +**Rationale**: Single dataclass keeps the adapter interface simple. Optional fields with `None` defaults mean no breaking changes for other adapters. The `detected_version_source` field lets downstream code know how reliable the version info is. + +**Alternative considered**: SpecKit-specific subclass `SpecKitCapabilities(ToolCapabilities)`. Rejected because it forces adapter-specific type checks in generic sync code. + +### D4: BridgeConfig presets expanded incrementally + +Add the 5 missing command mappings to all 3 existing presets (`classic`, `specify`, `modern`). Each preset gets the same command set; only artifact paths differ. 
+ +```python +commands = { + "specify": CommandMapping(trigger="/speckit.specify", input_ref="specification"), + "plan": CommandMapping(trigger="/speckit.plan", input_ref="specification", output_ref="plan"), + "tasks": CommandMapping(trigger="/speckit.tasks", input_ref="plan", output_ref="tasks"), + "implement": CommandMapping(trigger="/speckit.implement", input_ref="tasks"), + "constitution": CommandMapping(trigger="/speckit.constitution", output_ref="constitution"), + "clarify": CommandMapping(trigger="/speckit.clarify", input_ref="specification"), + "analyze": CommandMapping(trigger="/speckit.analyze", input_ref="specification"), +} +``` + +**Rationale**: All presets share the same slash commands; only directory layouts differ. Adding commands to existing presets is additive and non-breaking. + +### D5: Extension-provided commands stored separately from core commands + +Extension commands (e.g., `/speckit.reconcile.run`, `/speckit.sync.detect`) are stored in `ToolCapabilities.extension_commands` rather than mixed into `BridgeConfig.commands`. This separation lets sync code distinguish between spec-kit core commands and extension commands. + +**Rationale**: Extension commands are optional and vary per installation. Mixing them into BridgeConfig presets would require dynamic preset generation. Keeping them in capabilities allows the sync kernel to query "does this repo have the reconcile extension?" without modifying bridge config. + +### D6: Scanner detects new directories without requiring spec-kit CLI + +`SpecKitScanner` adds detection for: +- `extensions/` directory → extension catalog files +- `presets/` directory → preset catalog files +- `.extensionignore` → extension exclusion rules + +All detection is filesystem-based. The scanner returns structured metadata that `SpecKitAdapter.get_capabilities()` consumes. 
+ +## Risks / Trade-offs + +- **[Spec-kit schema instability]** Extension catalog JSON format may change between spec-kit versions → **Mitigation**: Parse defensively with fallback to empty list. Log warnings for unrecognized fields. Pin to known schema keys (`name`, `commands`, `version`). +- **[CLI probe latency]** Running `specify --version` adds subprocess overhead to detection → **Mitigation**: CLI probe is opt-in, triggered only when `ToolCapabilities.version` is explicitly requested or when version-gated features are needed. Cached per session. +- **[Cross-repo extension detection]** Extensions may be installed in a different location than specs → **Mitigation**: Always look for `extensions/` relative to the same base path as `.specify/`. Cross-repo configs pass `external_base_path` which is used consistently. +- **[Backward compatibility]** Repos with old spec-kit (pre-0.2.0) have no `extensions/` or `presets/` → **Mitigation**: All new fields default to `None`. Detection logic is additive — old repos work exactly as before. + +## Open Questions + +- Should we cache extension catalog parsing results across multiple adapter calls in the same CLI session? (Likely yes, but deferred to implementation.) +- Should `ToolCapabilities.extensions` include disabled extensions (from `.extensionignore`)? (Proposed: no — only report active extensions.) diff --git a/openspec/changes/speckit-02-v04-adapter-alignment/proposal.md b/openspec/changes/speckit-02-v04-adapter-alignment/proposal.md new file mode 100644 index 00000000..812832c7 --- /dev/null +++ b/openspec/changes/speckit-02-v04-adapter-alignment/proposal.md @@ -0,0 +1,45 @@ +## Why + +GitHub Spec-Kit has advanced from an early-stage CLI to v0.4.3 with 46 community extensions, a pluggable preset system, 7+ slash commands, hook events, and auto-registered AI skills. 
Our SpecKitAdapter was built against the initial spec-kit layout (3 static presets, 2 command triggers, no version detection) and no longer models the tool's actual capabilities. This creates silent drift: SpecFact cannot detect extensions that own sync/reconcile workflows, misses new slash commands, cannot gate features by spec-kit version, and ignores the preset catalog. Users report difficulty using spec-kit as a first-class sync candidate alongside OpenSpec. + +## What Changes + +- **Expand `CommandMapping` in `BridgeConfig` presets**: Add triggers for `/speckit.constitution`, `/speckit.clarify`, `/speckit.analyze`, `/speckit.tasks`, `/speckit.implement` (currently only `/speckit.specify` and `/speckit.plan` are mapped) +- **Add extension catalog awareness to `SpecKitAdapter`**: Detect `extensions/` directory, parse `catalog.community.json` and `catalog.core.json`, model extension-provided commands (reconcile, sync, iterate, verify, retrospective, checkpoint, archive) +- **Implement version detection in `ToolCapabilities`**: Detect spec-kit version from `specify --version` or `specify status --json` when CLI is available; fall back to heuristic detection from directory structure features (e.g., `presets/` presence implies >= 0.3.0) +- **Add preset system detection**: Scan `presets/` directory, parse pluggable preset catalogs, detect active preset configuration, adjust artifact mappings based on active preset +- **Model hook events**: Detect before/after hook event wiring in templates; expose hook metadata in `ToolCapabilities` for downstream sync coordination +- **Add `.extensionignore` support**: Respect extension exclusion rules when scanning +- **Update `SpecKitScanner`**: Detect new directory entries (`extensions/`, `presets/`, `.extensionignore`) and parse extension metadata +- **Expand `ToolCapabilities` dataclass**: Add fields for `extensions`, `extension_commands`, `presets`, `hook_events`, `detected_version_source` + +## Capabilities + +### New Capabilities + +- 
`speckit-extension-catalog`: Detection, parsing, and modeling of spec-kit extension catalogs (community and core) and their provided commands +- `speckit-version-detection`: Version detection strategies for spec-kit installations (CLI probe, directory heuristics, preset presence) + +### Modified Capabilities + +- `bridge-adapter`: Expanded SpecKitAdapter with extension awareness, preset detection, hook modeling, and version-gated feature flags +- `bridge-registry`: ToolCapabilities extended with extension/preset/hook metadata fields + +## Impact + +- **Code**: `src/specfact_cli/adapters/speckit.py`, `src/specfact_cli/models/capabilities.py`, `src/specfact_cli/models/bridge.py`, `src/specfact_cli/importers/speckit_scanner.py` +- **Tests**: `tests/unit/adapters/test_speckit.py`, `tests/unit/importers/test_speckit_scanner.py`, `tests/integration/importers/test_speckit_format_compatibility.py` +- **Docs**: `docs/guides/speckit-comparison.md` (update feature matrix), `docs/guides/speckit-journey.md` (update workflow steps) +- **Dependencies**: No new external dependencies. Spec-Kit CLI is optional (version detection degrades gracefully) +- **Downstream**: `sync-01-unified-kernel` can use extension metadata to coordinate with spec-kit's own sync/reconcile. `requirements-03-backlog-sync` can detect spec-kit backlog extensions to avoid duplicate issue creation. 
+ +--- + +## Source Tracking + + +- **GitHub Issue**: #453 +- **Issue URL**: +- **Parent Feature**: #369 (Sync Engine) +- **Last Synced Status**: proposed +- **Sanitized**: false diff --git a/openspec/changes/speckit-02-v04-adapter-alignment/specs/bridge-adapter/spec.md b/openspec/changes/speckit-02-v04-adapter-alignment/specs/bridge-adapter/spec.md new file mode 100644 index 00000000..ee6f6cdd --- /dev/null +++ b/openspec/changes/speckit-02-v04-adapter-alignment/specs/bridge-adapter/spec.md @@ -0,0 +1,60 @@ +## MODIFIED Requirements + +### Requirement: SpecKitAdapter get_capabilities returns tool metadata + +The system SHALL return comprehensive tool capabilities including extension metadata, preset information, and hook events when detecting a spec-kit installation. + +#### Scenario: Get capabilities for spec-kit v0.4.x repository + +- **GIVEN** a repository with `.specify/` directory, `extensions/catalog.community.json`, and `presets/` directory +- **WHEN** `SpecKitAdapter.get_capabilities(repo_path)` is called +- **THEN** returns `ToolCapabilities` with: + - `tool` equals `"speckit"` + - `version` populated from CLI or heuristic detection + - `layout` equals `"modern"` + - `extensions` contains list of detected extension names + - `extension_commands` contains dict mapping extension names to their commands + - `presets` contains list of detected preset names + - `hook_events` contains list of detected hook event types (e.g., `["before_task", "after_task"]`) + - `detected_version_source` equals `"cli"` or `"heuristic"` + - `supported_sync_modes` includes `"bidirectional"` and `"unidirectional"` + +#### Scenario: Get capabilities for legacy spec-kit repository + +- **GIVEN** a repository with only `specs/` directory at root (no `.specify/`, no `extensions/`, no `presets/`) +- **WHEN** `SpecKitAdapter.get_capabilities(repo_path)` is called +- **THEN** returns `ToolCapabilities` with: + - `tool` equals `"speckit"` + - `version` equals `None` + - `layout` equals 
`"classic"` + - `extensions` equals `None` + - `extension_commands` equals `None` + - `presets` equals `None` + - `hook_events` equals `None` + - `detected_version_source` equals `None` +- **AND** behavior is identical to the pre-change adapter + +#### Scenario: Get capabilities with cross-repo bridge config + +- **GIVEN** a bridge config with `external_base_path` pointing to a spec-kit repository +- **WHEN** `SpecKitAdapter.get_capabilities(repo_path, bridge_config)` is called +- **THEN** extension and preset detection uses the `external_base_path` as base +- **AND** CLI version detection is skipped for cross-repo scenarios (filesystem-only) + +## MODIFIED Requirements: Repository Detection + +### Requirement: SpecKitAdapter detect identifies spec-kit repositories + +The system SHALL detect spec-kit repositories including those with the new extension and preset directories. + +#### Scenario: Detect spec-kit repository with extensions directory + +- **GIVEN** a repository with `.specify/specs/` and `extensions/` directories +- **WHEN** `SpecKitAdapter.detect(repo_path)` is called +- **THEN** returns `True` + +#### Scenario: Detect spec-kit repository with presets directory + +- **GIVEN** a repository with `.specify/specs/` and `presets/` directories +- **WHEN** `SpecKitAdapter.detect(repo_path)` is called +- **THEN** returns `True` diff --git a/openspec/changes/speckit-02-v04-adapter-alignment/specs/bridge-registry/spec.md b/openspec/changes/speckit-02-v04-adapter-alignment/specs/bridge-registry/spec.md new file mode 100644 index 00000000..86b31fc8 --- /dev/null +++ b/openspec/changes/speckit-02-v04-adapter-alignment/specs/bridge-registry/spec.md @@ -0,0 +1,52 @@ +## MODIFIED Requirements + +### Requirement: ToolCapabilities supports extension and preset metadata + +The `ToolCapabilities` dataclass SHALL include optional fields for extensions, extension commands, presets, hook events, and version detection source. 
+ +#### Scenario: ToolCapabilities with extension metadata + +- **GIVEN** a `ToolCapabilities` instance created for a spec-kit v0.4.x repository +- **WHEN** the instance is constructed with extension data +- **THEN** `extensions` is a `list[str]` of extension names (e.g., `["reconcile", "sync", "verify"]`) +- **AND** `extension_commands` is a `dict[str, list[str]]` mapping extension names to command lists +- **AND** `presets` is a `list[str]` of active preset names +- **AND** `hook_events` is a `list[str]` of detected hook event types +- **AND** `detected_version_source` is a `str` with value `"cli"` or `"heuristic"` + +#### Scenario: ToolCapabilities backward compatibility + +- **GIVEN** a `ToolCapabilities` instance created without the new optional fields +- **WHEN** the instance is constructed with only the existing fields (`tool`, `version`, `layout`, `specs_dir`, `has_external_config`, `has_custom_hooks`, `supported_sync_modes`) +- **THEN** `extensions` defaults to `None` +- **AND** `extension_commands` defaults to `None` +- **AND** `presets` defaults to `None` +- **AND** `hook_events` defaults to `None` +- **AND** `detected_version_source` defaults to `None` +- **AND** all existing adapter code continues to work without modification + +### Requirement: BridgeConfig spec-kit presets include all slash commands + +The `BridgeConfig` spec-kit presets SHALL map all 7 core spec-kit slash commands. 
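A minimal sketch of the shared 7-command map follows. The trigger strings are from this spec; the plain-dict shape (rather than the real `CommandMapping` model) and the `input_ref`/`output_ref` path values are illustrative placeholders:

```python
# The 7 core slash commands every spec-kit preset maps.
SPECKIT_COMMANDS = ("specify", "plan", "tasks", "implement", "constitution", "clarify", "analyze")


def build_speckit_command_map(specs_prefix: str = "specs/") -> dict[str, dict[str, str]]:
    """Build trigger/ref entries for one preset; refs are placeholder paths."""
    return {
        name: {
            "trigger": f"/speckit.{name}",
            "input_ref": f"{specs_prefix}{name}.md",
            "output_ref": f"{specs_prefix}{name}.out.md",
        }
        for name in SPECKIT_COMMANDS
    }


classic_commands = build_speckit_command_map()
modern_commands = build_speckit_command_map(specs_prefix="docs/specs/")
```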
+ +#### Scenario: Classic preset includes full command set + +- **GIVEN** `BridgeConfig.preset_speckit_classic()` is called +- **WHEN** the preset is constructed +- **THEN** `commands` dict contains entries for: `"specify"`, `"plan"`, `"tasks"`, `"implement"`, `"constitution"`, `"clarify"`, `"analyze"` +- **AND** each entry has a `trigger` matching the corresponding `/speckit.*` slash command +- **AND** each entry has appropriate `input_ref` and `output_ref` fields + +#### Scenario: Specify preset includes full command set + +- **GIVEN** `BridgeConfig.preset_speckit_specify()` is called +- **WHEN** the preset is constructed +- **THEN** `commands` dict contains the same 7 entries as the classic preset +- **AND** artifact path patterns use `.specify/specs/` prefix + +#### Scenario: Modern preset includes full command set + +- **GIVEN** `BridgeConfig.preset_speckit_modern()` is called +- **WHEN** the preset is constructed +- **THEN** `commands` dict contains the same 7 entries as the classic preset +- **AND** artifact path patterns use `docs/specs/` prefix diff --git a/openspec/changes/speckit-02-v04-adapter-alignment/specs/speckit-extension-catalog/spec.md b/openspec/changes/speckit-02-v04-adapter-alignment/specs/speckit-extension-catalog/spec.md new file mode 100644 index 00000000..e7f72c4e --- /dev/null +++ b/openspec/changes/speckit-02-v04-adapter-alignment/specs/speckit-extension-catalog/spec.md @@ -0,0 +1,68 @@ +## ADDED Requirements + +### Requirement: Extension catalog detection + +The system SHALL detect spec-kit extension catalogs in a target repository by scanning for `extensions/catalog.community.json` and `extensions/catalog.core.json` files relative to the repository root or `.specify/` directory. 
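The detection and fallback behavior described by the scenarios below can be sketched as follows. The assumption that each catalog file holds a top-level JSON array of `{name, commands, version}` objects is mine; the real catalog schema may differ:

```python
import json
import logging
from pathlib import Path

logger = logging.getLogger(__name__)

CATALOG_FILES = ("catalog.core.json", "catalog.community.json")


def scan_extensions(base_path: Path) -> list[dict]:
    """Parse core and community catalogs; malformed JSON logs a warning and is skipped."""
    ext_dir = base_path / "extensions"
    if not ext_dir.is_dir():
        return []  # no extensions/ directory: empty result, no error
    ignored: set[str] = set()
    ignore_file = base_path / ".extensionignore"
    if ignore_file.is_file():
        # One extension name per line; blank lines ignored.
        ignored = {line.strip() for line in ignore_file.read_text().splitlines() if line.strip()}
    results: list[dict] = []
    for catalog_name in CATALOG_FILES:
        catalog = ext_dir / catalog_name
        if not catalog.is_file():
            continue
        try:
            entries = json.loads(catalog.read_text())
        except json.JSONDecodeError:
            logger.warning("Malformed extension catalog: %s", catalog)
            continue  # this catalog contributes nothing; no exception raised
        if not isinstance(entries, list):
            logger.warning("Unexpected catalog shape: %s", catalog)
            continue
        results.extend(entry for entry in entries if entry.get("name") not in ignored)
    return results
```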
+ +#### Scenario: Detect community extension catalog + +- **GIVEN** a repository with `.specify/` directory and `extensions/catalog.community.json` +- **WHEN** `SpecKitScanner.scan_extensions()` is called +- **THEN** the scanner returns a list of extension metadata objects parsed from the catalog JSON +- **AND** each extension object contains at minimum `name`, `commands`, and `version` fields + +#### Scenario: Detect core extension catalog + +- **GIVEN** a repository with `extensions/catalog.core.json` +- **WHEN** `SpecKitScanner.scan_extensions()` is called +- **THEN** the scanner parses both core and community catalogs +- **AND** core extensions are included alongside community extensions in the result + +#### Scenario: No extension catalog present + +- **GIVEN** a repository with `.specify/` directory but no `extensions/` directory +- **WHEN** `SpecKitScanner.scan_extensions()` is called +- **THEN** the scanner returns an empty list +- **AND** no error is raised + +#### Scenario: Malformed extension catalog + +- **GIVEN** a repository with `extensions/catalog.community.json` containing invalid JSON +- **WHEN** `SpecKitScanner.scan_extensions()` is called +- **THEN** the scanner logs a warning +- **AND** returns an empty list for that catalog +- **AND** does not raise an exception + +### Requirement: Extension command extraction + +The system SHALL extract slash commands provided by each detected extension, making them available in `ToolCapabilities.extension_commands`. 
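A sketch of the extraction step over already-parsed catalog entries (the helper name is hypothetical; only the resulting field shapes are from this spec):

```python
def extract_extension_commands(extensions: list[dict]) -> tuple[list[str], dict[str, list[str]]]:
    """Derive ToolCapabilities.extensions and .extension_commands from parsed catalog entries."""
    names = [ext["name"] for ext in extensions]
    commands = {ext["name"]: list(ext.get("commands", [])) for ext in extensions}
    return names, commands


names, commands = extract_extension_commands([
    {"name": "reconcile", "commands": ["/speckit.reconcile.run"]},
    {"name": "checkpoint", "commands": []},  # no commands: still listed, with an empty list
])
```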
+ +#### Scenario: Extract commands from extension metadata + +- **GIVEN** a parsed extension catalog with entries containing `commands` arrays +- **WHEN** `SpecKitAdapter.get_capabilities()` processes extension metadata +- **THEN** `ToolCapabilities.extension_commands` contains a dict mapping extension name to its command list +- **AND** each command is a string (e.g., `"/speckit.reconcile.run"`, `"/speckit.sync.detect"`) + +#### Scenario: Extension with no commands + +- **GIVEN** a parsed extension catalog with an entry that has an empty `commands` array +- **WHEN** extension commands are extracted +- **THEN** that extension is included in `ToolCapabilities.extensions` but has an empty command list in `extension_commands` + +### Requirement: Extension ignore support + +The system SHALL respect `.extensionignore` files when reporting active extensions. + +#### Scenario: Extension excluded by extensionignore + +- **GIVEN** a repository with `extensions/catalog.community.json` containing extension "verify" +- **AND** a `.extensionignore` file containing the line "verify" +- **WHEN** `SpecKitScanner.scan_extensions()` is called +- **THEN** the "verify" extension is excluded from the returned list + +#### Scenario: No extensionignore file + +- **GIVEN** a repository with extensions but no `.extensionignore` file +- **WHEN** `SpecKitScanner.scan_extensions()` is called +- **THEN** all extensions from the catalogs are included in the result diff --git a/openspec/changes/speckit-02-v04-adapter-alignment/specs/speckit-version-detection/spec.md b/openspec/changes/speckit-02-v04-adapter-alignment/specs/speckit-version-detection/spec.md new file mode 100644 index 00000000..57358735 --- /dev/null +++ b/openspec/changes/speckit-02-v04-adapter-alignment/specs/speckit-version-detection/spec.md @@ -0,0 +1,79 @@ +## ADDED Requirements + +### Requirement: CLI-based version detection + +The system SHALL attempt to detect the installed spec-kit version by invoking the `specify` CLI 
when available on PATH. + +#### Scenario: CLI available and returns version + +- **GIVEN** the `specify` CLI is installed and available on the system PATH +- **WHEN** `SpecKitAdapter._detect_version_from_cli(repo_path)` is called +- **THEN** the method runs `specify --version` as a subprocess +- **AND** parses the version string from stdout +- **AND** returns the version string (e.g., `"0.4.3"`) +- **AND** sets `ToolCapabilities.detected_version_source` to `"cli"` + +#### Scenario: CLI not available + +- **GIVEN** the `specify` CLI is not installed or not on PATH +- **WHEN** `SpecKitAdapter._detect_version_from_cli(repo_path)` is called +- **THEN** the method returns `None` +- **AND** does not raise an exception +- **AND** the detection falls through to heuristic detection + +#### Scenario: CLI invocation times out + +- **GIVEN** the `specify` CLI is on PATH but hangs or takes longer than 5 seconds +- **WHEN** `SpecKitAdapter._detect_version_from_cli(repo_path)` is called +- **THEN** the subprocess is terminated after the timeout +- **AND** the method returns `None` +- **AND** logs a debug-level warning + +### Requirement: Heuristic version detection + +The system SHALL estimate the spec-kit version from directory structure when CLI detection is unavailable. 
+ +#### Scenario: Presets directory implies version >= 0.3.0 + +- **GIVEN** a repository with `.specify/` and a `presets/` directory +- **AND** CLI-based version detection returned `None` +- **WHEN** `SpecKitAdapter._detect_version_from_heuristics(repo_path)` is called +- **THEN** the method returns `">=0.3.0"` +- **AND** sets `ToolCapabilities.detected_version_source` to `"heuristic"` + +#### Scenario: Extensions directory implies version >= 0.2.0 + +- **GIVEN** a repository with `.specify/` and `extensions/` directory but no `presets/` directory +- **AND** CLI-based version detection returned `None` +- **WHEN** `SpecKitAdapter._detect_version_from_heuristics(repo_path)` is called +- **THEN** the method returns `">=0.2.0"` +- **AND** sets `ToolCapabilities.detected_version_source` to `"heuristic"` + +#### Scenario: Only specify directory implies version >= 0.1.0 + +- **GIVEN** a repository with `.specify/` directory but no `extensions/` or `presets/` +- **AND** CLI-based version detection returned `None` +- **WHEN** `SpecKitAdapter._detect_version_from_heuristics(repo_path)` is called +- **THEN** the method returns `">=0.1.0"` +- **AND** sets `ToolCapabilities.detected_version_source` to `"heuristic"` + +#### Scenario: No version detectable + +- **GIVEN** a repository with only `specs/` at root (classic layout, no `.specify/`) +- **AND** CLI-based version detection returned `None` +- **WHEN** `SpecKitAdapter._detect_version_from_heuristics(repo_path)` is called +- **THEN** the method returns `None` +- **AND** `ToolCapabilities.detected_version_source` remains `None` + +### Requirement: Version detection integration in get_capabilities + +The system SHALL integrate version detection into the existing `SpecKitAdapter.get_capabilities()` flow, trying CLI first then heuristics. 
+ +#### Scenario: Full version detection flow + +- **GIVEN** a spec-kit repository +- **WHEN** `SpecKitAdapter.get_capabilities(repo_path)` is called +- **THEN** the adapter tries CLI detection first +- **AND** if CLI returns `None`, falls back to heuristic detection +- **AND** populates `ToolCapabilities.version` with the result +- **AND** populates `ToolCapabilities.detected_version_source` with the detection method used diff --git a/openspec/changes/speckit-02-v04-adapter-alignment/tasks.md b/openspec/changes/speckit-02-v04-adapter-alignment/tasks.md new file mode 100644 index 00000000..4316302d --- /dev/null +++ b/openspec/changes/speckit-02-v04-adapter-alignment/tasks.md @@ -0,0 +1,55 @@ +## 1. Expand ToolCapabilities dataclass + +- [x] 1.1 Add optional fields to `ToolCapabilities` in `src/specfact_cli/models/capabilities.py`: `extensions: list[str] | None`, `extension_commands: dict[str, list[str]] | None`, `presets: list[str] | None`, `hook_events: list[str] | None`, `detected_version_source: str | None` +- [x] 1.2 Add unit tests for `ToolCapabilities` construction with new fields and verify backward compatibility (all new fields default to `None`) +- [x] 1.3 Add `@beartype` and `@ensure` contracts on any new methods that consume the expanded fields + +## 2. Extension catalog detection in SpecKitScanner + +- [x] 2.1 Add `scan_extensions(self) -> list[dict]` method to `SpecKitScanner` in `src/specfact_cli/importers/speckit_scanner.py` — parse `extensions/catalog.community.json` and `extensions/catalog.core.json` +- [x] 2.2 Add `.extensionignore` parsing — read ignore file and filter excluded extensions from scan results +- [x] 2.3 Add defensive JSON parsing with warning logging for malformed catalogs +- [x] 2.4 Add unit tests for `scan_extensions()`: catalog present, no catalog, malformed JSON, extensionignore filtering + +## 3. 
Version detection in SpecKitAdapter + +- [x] 3.1 Add `_detect_version_from_cli(repo_path: Path) -> str | None` method to `SpecKitAdapter` — run `specify --version` with 5-second timeout, parse version string +- [x] 3.2 Add `_detect_version_from_heuristics(repo_path: Path) -> str | None` method — check for `presets/` (>=0.3.0), `extensions/` (>=0.2.0), `.specify/` (>=0.1.0) +- [x] 3.3 Integrate version detection into `get_capabilities()`: try CLI first, fall back to heuristics, populate `version` and `detected_version_source` +- [x] 3.4 Add unit tests for both detection methods and the integration flow (CLI available, CLI missing, heuristic fallback, timeout) + +## 4. Preset detection in SpecKitScanner + +- [x] 4.1 Add `scan_presets(self) -> list[str]` method to `SpecKitScanner` — scan `presets/` directory for preset catalog files +- [x] 4.2 Add unit tests for preset detection: presets present, no presets directory + +## 5. Hook event detection + +- [x] 5.1 Add `scan_hook_events(self) -> list[str]` method to `SpecKitScanner` — detect before/after hook wiring in `.specify/prompts/` template files +- [x] 5.2 Add unit tests for hook event detection + +## 6. Expand BridgeConfig command mappings + +- [x] 6.1 Update `preset_speckit_classic()` in `src/specfact_cli/models/bridge.py` to include all 7 slash commands: specify, plan, tasks, implement, constitution, clarify, analyze +- [x] 6.2 Update `preset_speckit_specify()` with the same 7 command mappings +- [x] 6.3 Update `preset_speckit_modern()` with the same 7 command mappings +- [x] 6.4 Add unit tests verifying all 3 presets contain the full 7-command set + +## 7. 
Integrate extensions/presets/hooks into SpecKitAdapter.get_capabilities() + +- [x] 7.1 Update `get_capabilities()` in `src/specfact_cli/adapters/speckit.py` to call scanner methods and populate new `ToolCapabilities` fields +- [x] 7.2 Ensure cross-repo scenarios (`external_base_path`) use filesystem-only detection (skip CLI probe) +- [x] 7.3 Add integration tests for full `get_capabilities()` flow with v0.4.x repo structure +- [x] 7.4 Add integration test for legacy repo structure (verify backward compat — all new fields are `None`) + +## 8. Documentation updates + +- [x] 8.1 Update `docs/guides/speckit-comparison.md` feature matrix with new detection capabilities +- [x] 8.2 Update `docs/guides/speckit-journey.md` workflow steps to reference extension and preset awareness +- [x] 8.3 Review and update any adapter reference docs that mention spec-kit capabilities + +## 9. Contract and quality gates + +- [x] 9.1 Ensure all new public methods have `@icontract` (`@require`/`@ensure`) and `@beartype` decorators +- [x] 9.2 Run `hatch run format && hatch run type-check && hatch run contract-test && hatch test --cover -v` +- [x] 9.3 Record TDD evidence in `TDD_EVIDENCE.md` diff --git a/openspec/changes/sync-01-unified-kernel/proposal.md b/openspec/changes/sync-01-unified-kernel/proposal.md index 2c9e95f3..4e1c1040 100644 --- a/openspec/changes/sync-01-unified-kernel/proposal.md +++ b/openspec/changes/sync-01-unified-kernel/proposal.md @@ -93,6 +93,7 @@ modules/sync-kernel/ - **NEW**: `SyncProviderProtocol` — adapters (backlog, requirements, architecture) implement this protocol to participate in sync sessions - **NEW**: CLI commands: `specfact sync --preview` (dry-run patch), `specfact sync --apply` (execute patches), `specfact sync resolve --session ` (resolve pending conflicts), `specfact sync status` (show active sessions) - **EXTEND**: Existing sync module behavior preserved — the kernel wraps existing adapter-specific sync calls with session management and conflict 
detection +- **EXTEND**: Spec-Kit extension interop — the sync kernel SHALL detect when spec-kit's own sync/reconcile/iterate extensions have modified artifacts (via `ToolCapabilities.extension_commands` from speckit-02), and coordinate to avoid conflicting writes. When a spec-kit extension has performed a reconcile, the kernel SHALL treat the reconciled artifact as the authoritative remote state rather than computing its own diff against a stale base. The `SyncProviderProtocol` SHALL include an optional `detect_external_sync_actors()` method that adapters can implement to report which external tools are performing their own sync operations on the same artifacts. ## Capabilities ### New Capabilities @@ -102,6 +103,7 @@ modules/sync-kernel/ ### Modified Capabilities - `devops-sync`: Existing sync behavior wrapped with kernel session management; no breaking changes to current sync commands +- `bridge-adapter`: SyncProviderProtocol integration — SpecKitAdapter implements `detect_external_sync_actors()` to report spec-kit reconcile/sync/iterate extensions as external sync actors --- diff --git a/pyproject.toml b/pyproject.toml index 3427bbd8..4cb884b4 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -4,7 +4,7 @@ build-backend = "hatchling.build" [project] name = "specfact-cli" -version = "0.42.6" +version = "0.43.1" description = "The swiss knife CLI for agile DevOps teams. Keep backlog, specs, tests, and code in sync with validation and contract enforcement for new projects and long-lived codebases." 
readme = "README.md" requires-python = ">=3.11" @@ -391,7 +391,6 @@ only-include = [ sources = ["src"] [tool.hatch.build.targets.wheel.force-include] -"resources/prompts" = "specfact_cli/resources/prompts" "resources/templates" = "specfact_cli/resources/templates" "resources/schemas" = "specfact_cli/resources/schemas" "resources/mappings" = "specfact_cli/resources/mappings" diff --git a/resources/prompts/shared/cli-enforcement.md b/resources/prompts/shared/cli-enforcement.md deleted file mode 100644 index b8aab9aa..00000000 --- a/resources/prompts/shared/cli-enforcement.md +++ /dev/null @@ -1,119 +0,0 @@ -# CLI Usage Enforcement Rules - -## Core Principle - -**ALWAYS use SpecFact CLI commands. Never create artifacts directly.** - -## CLI vs LLM Capabilities - -### CLI-Only Operations (CI/CD Mode - No LLM Required) - -The CLI can perform these operations **without LLM**: - -- ✅ Tool execution (ruff, pylint, basedpyright, mypy, semgrep, specmatic) -- ✅ Bundle management (create, load, save, validate structure) -- ✅ Metadata management (timestamps, hashes, telemetry) -- ✅ Planning operations (init, add-feature, add-story, update-idea, update-feature) -- ✅ AST/Semgrep-based analysis (code structure, patterns, relationships) -- ✅ Specmatic validation (OpenAPI/AsyncAPI contract validation) -- ✅ Format validation (YAML/JSON schema compliance) -- ✅ Source tracking and drift detection - -**CRITICAL LIMITATIONS**: - -- ❌ **CANNOT generate code** - No LLM available in CLI-only mode -- ❌ **CANNOT do reasoning** - No semantic understanding without LLM - -### LLM-Required Operations (AI IDE Mode - Via Slash Prompts) - -These operations **require LLM** and are only available via AI IDE slash prompts: - -- ✅ Code generation (requires LLM reasoning) -- ✅ Code enhancement (contracts, refactoring, improvements) -- ✅ Semantic understanding (business logic, context, priorities) -- ✅ Plan enrichment (missing features, confidence adjustments, business context) -- ✅ Code reasoning (why 
decisions were made, trade-offs, constraints) - -**Access**: Only available via AI IDE slash prompts (Cursor, CoPilot, etc.) -**Pattern**: Slash prompt → LLM generates → CLI validates → Apply if valid - -## LLM Grounding Rules - -- Treat CLI artifacts as the source of truth for keys, structure, and metadata. -- Scan the codebase only when asked to infer missing behavior/context or explain deviations; respect `--entry-point` scope when provided. -- Use codebase findings to propose updates via CLI (enrichment report, plan update commands), never to rewrite artifacts directly. - -## Rules - -1. **Execute CLI First**: Always run CLI commands before any analysis -2. **Use CLI for Writes**: All write operations must go through CLI -3. **Read for Display Only**: Use file reading tools for display/analysis only -4. **Never Modify .specfact/**: Do not create/modify files in `.specfact/` directly -5. **Never Bypass Validation**: CLI ensures schema compliance and metadata -6. **Code Generation Requires LLM**: Code generation is only possible via AI IDE slash prompts, not CLI-only - -## Standard Validation Loop Pattern (For LLM-Generated Code) - -When generating or enhancing code via LLM, **ALWAYS** follow this pattern: - -```text -1. CLI Prompt Generation (Required) - ↓ - CLI generates structured prompt → saved to .specfact/prompts/ - (e.g., `generate contracts-prompt`, future: `generate code-prompt`) - -2. LLM Execution (Required - AI IDE Only) - ↓ - LLM reads prompt → generates enhanced code → writes to TEMPORARY file - (NEVER writes directly to original artifacts) - Pattern: `enhanced_.py` or `generated_.py` - -3. 
CLI Validation Loop (Required, up to N retries) - ↓ - CLI validates temp file with all relevant tools: - - Syntax validation (py_compile) - - File size check (must be >= original) - - AST structure comparison (preserve functions/classes) - - Contract imports verification - - Code quality checks (ruff, pylint, basedpyright, mypy) - - Test execution (contract-test, pytest) - ↓ - If validation fails: - - CLI provides detailed error feedback - - LLM fixes issues in temp file - - Re-validate (max 3 attempts) - ↓ - If validation succeeds: - - CLI applies changes to original file - - CLI removes temporary file - - CLI updates metadata/telemetry -``` - -**This pattern must be used for**: - -- ✅ Contract enhancement (`generate contracts-prompt` / `contracts-apply`) - Already implemented -- ⏳ Code generation (future: `generate code-prompt` / `code-apply`) - Needs implementation -- ⏳ Plan enrichment (future: `plan enrich-prompt` / `enrich-apply`) - Needs implementation -- ⏳ Any LLM-enhanced artifact modification - Needs implementation - -## What Happens If You Don't Follow - -- ❌ Artifacts may not match CLI schema versions -- ❌ Missing metadata and telemetry -- ❌ Format inconsistencies -- ❌ Validation failures -- ❌ Works only in Copilot mode, fails in CI/CD -- ❌ Code generation attempts in CLI-only mode will fail (no LLM available) - -## Available CLI Commands - -- `specfact plan init ` - Initialize project bundle -- `specfact plan select ` - Set active plan (used as default for other commands) -- `specfact code import [] --repo ` - Import from codebase (uses active plan if bundle not specified) -- `specfact plan review []` - Review plan (uses active plan if bundle not specified) -- `specfact plan harden []` - Create SDD manifest (uses active plan if bundle not specified) -- `specfact enforce sdd []` - Validate SDD (uses active plan if bundle not specified) -- `specfact sync bridge --adapter --repo ` - Sync with external tools -- See [Command 
Reference](../../docs/reference/commands.md) for full list - -**Note**: Most commands now support active plan fallback. If `--bundle` is not specified, commands automatically use the active plan set via `plan select`. This improves workflow efficiency in AI IDE environments. diff --git a/resources/prompts/specfact.01-import.md b/resources/prompts/specfact.01-import.md deleted file mode 100644 index 388f628f..00000000 --- a/resources/prompts/specfact.01-import.md +++ /dev/null @@ -1,263 +0,0 @@ ---- -description: Import codebase → plan bundle. CLI extracts routes/schemas/relationships. LLM enriches with context. ---- - -# SpecFact Import Command - -## User Input - -```text -$ARGUMENTS -``` - -You **MUST** consider the user input before proceeding (if not empty). - -## Purpose - -Import codebase → plan bundle. CLI extracts routes/schemas/relationships/contracts. LLM enriches context/"why"/completeness. - -## Parameters - -**Target/Input**: `--bundle NAME` (optional, defaults to active plan), `--repo PATH`, `--entry-point PATH`, `--enrichment PATH` -**Output/Results**: `--report PATH` -**Behavior/Options**: `--shadow-only`, `--enrich-for-speckit/--no-enrich-for-speckit` (default: enabled, uses PlanEnricher for consistent enrichment) -**Advanced/Configuration**: `--confidence FLOAT` (0.0-1.0), `--key-format FORMAT` (classname|sequential) - -## Workflow - -1. **Execute CLI**: `specfact [GLOBAL OPTIONS] import from-code [] --repo [options]` - - CLI extracts: routes (FastAPI/Flask/Django), schemas (Pydantic), relationships, contracts (OpenAPI scaffolds), source tracking - - Uses active plan if bundle not specified - - Note: `--no-interactive` is a global option and must appear before the subcommand (e.g., `specfact --no-interactive import from-code ...`). 
- - **Auto-enrichment enabled by default**: Automatically enhances vague acceptance criteria, incomplete requirements, and generic tasks using PlanEnricher (same logic as `plan review --auto-enrich`) - - Use `--no-enrich-for-speckit` to disable auto-enrichment - - **Contract extraction**: OpenAPI contracts are extracted automatically **only** for features with `source_tracking.implementation_files` and detectable API endpoints (FastAPI/Flask patterns). For enrichment-added features or Django apps, use `specfact contract init` after enrichment (see Phase 4) - -2. **LLM Enrichment** (Copilot-only, before applying `--enrichment`): - - Read CLI artifacts: `.specfact/projects//enrichment_context.md`, feature YAMLs, contract scaffolds, and brownfield reports - - Scan the codebase within `--entry-point` (and adjacent modules) to identify missing features, dependencies, and behavior; do **not** rely solely on AST-derived YAML - - Compare code findings vs CLI artifacts, then add missing features/stories, reasoning, and acceptance criteria (each added feature must include at least one story) - - Save the enrichment report to `.specfact/projects//reports/enrichment/-.enrichment.md` (bundle-specific, Phase 8.5) - - **CRITICAL**: Follow the exact enrichment report format (see "Enrichment Report Format" section below) to ensure successful parsing - -3. **Present**: Bundle location, report path, summary (features/stories/contracts/relationships) - -## CLI Enforcement - -**CRITICAL**: Always use SpecFact CLI commands. See [CLI Enforcement Rules](./shared/cli-enforcement.md) for details. 
- -**Rules:** - -- Execute CLI first - never create artifacts directly -- Use the global `--no-interactive` flag in CI/CD environments (must appear before the subcommand) -- Never modify `.specfact/` directly -- Use CLI output as grounding for validation -- Code generation requires LLM (only via AI IDE slash prompts, not CLI-only) - -## Dual-Stack Workflow (Copilot Mode) - -When in copilot mode, follow this three-phase workflow: - -### Phase 1: CLI Grounding (REQUIRED) - -```bash -# Execute CLI to get structured output -specfact --no-interactive import from-code [] --repo -``` - -**Capture**: - -- CLI-generated artifacts (plan bundles, reports) -- Metadata (timestamps, confidence scores) -- Telemetry (execution time, file counts) - -### Phase 2: LLM Enrichment (OPTIONAL, Copilot Only) - -**Purpose**: Add semantic understanding to CLI output - -**What to do**: - -- Read CLI-generated artifacts (use file reading tools for display only) -- Scan the codebase within `--entry-point` for missing features/behavior and compare against CLI artifacts -- Identify missing features/stories and add reasoning/acceptance criteria (no direct edits to `.specfact/`) -- Suggest confidence adjustments and extract business context -- **CRITICAL**: Generate enrichment report in the exact format specified below (see "Enrichment Report Format" section) - -**What NOT to do**: - -- ❌ Create YAML/JSON artifacts directly -- ❌ Modify CLI artifacts directly (use CLI commands to update) -- ❌ Bypass CLI validation -- ❌ Write to `.specfact/` folder directly (always use CLI) -- ❌ Use direct file manipulation tools for writing (use CLI commands) -- ❌ Deviate from the enrichment report format (will cause parsing failures) - -**Output**: Generate enrichment report (Markdown) saved to `.specfact/projects//reports/enrichment/` (bundle-specific, Phase 8.5) - -**Enrichment Report Format** (REQUIRED for successful parsing): - -The enrichment parser expects a specific Markdown format. 
Follow this structure exactly:
-
-```markdown
-# [Bundle Name] Enrichment Report
-
-**Date**: YYYY-MM-DDTHH:MM:SS
-**Bundle**: <bundle-name>
-
----
-
-## Missing Features
-
-1. **Feature Title** (Key: FEATURE-XXX)
-   - Confidence: 0.85
-   - Outcomes: outcome1, outcome2, outcome3
-   - Stories:
-     1. Story title here
-        - Acceptance: criterion1, criterion2, criterion3
-     2. Another story title
-        - Acceptance: criterion1, criterion2
-
-2. **Another Feature** (Key: FEATURE-YYY)
-   - Confidence: 0.80
-   - Outcomes: outcome1, outcome2
-   - Stories:
-     1. Story title
-        - Acceptance: criterion1, criterion2, criterion3
-
-## Confidence Adjustments
-
-- FEATURE-EXISTING-KEY: 0.90 (reason: improved understanding after code review)
-
-## Business Context
-
-- Priority: High priority feature for core functionality
-- Constraint: Must support both REST and GraphQL APIs
-- Risk: Potential performance issues with large datasets
-```
-
-**Format Requirements**:
-
-1. **Section Header**: Must use `## Missing Features` (case-insensitive, but prefer this exact format)
-2. **Feature Format**:
-   - Numbered list: `1. **Feature Title** (Key: FEATURE-XXX)`
-   - **Bold title** is required (use `**Title**`)
-   - **Key in parentheses**: `(Key: FEATURE-XXX)` - must be uppercase, alphanumeric with hyphens/underscores
-   - Fields on separate lines with `-` prefix:
-     - `- Confidence: 0.85` (float between 0.0-1.0)
-     - `- Outcomes: comma-separated or line-separated list`
-     - `- Stories:` (required - each feature must have at least one story)
-3. **Stories Format**:
-   - Numbered list under `Stories:` section: `1. Story title`
-   - **Indentation**: Stories must be indented (2-4 spaces) under the feature
-   - **Acceptance Criteria**: `- Acceptance: criterion1, criterion2, criterion3`
-     - Can be comma-separated on one line
-     - Or multi-line (each criterion on new line)
-     - Must start with `- Acceptance:`
-4.
**Optional Sections**:
-   - `## Confidence Adjustments`: List existing features with confidence updates
-   - `## Business Context`: Priorities, constraints, risks (bullet points)
-5. **File Naming**: `<bundle>-<timestamp>.enrichment.md` (e.g., `djangogoat-2025-12-23T23-50-00.enrichment.md`)
-
-**Example** (working format):
-
-```markdown
-## Missing Features
-
-1. **User Authentication** (Key: FEATURE-USER-AUTHENTICATION)
-   - Confidence: 0.85
-   - Outcomes: User registration, login, profile management
-   - Stories:
-     1. User can sign up for new account
-        - Acceptance: sign_up view processes POST requests, creates User automatically, user is logged in after signup, redirects to profile page
-     2. User can log in with credentials
-        - Acceptance: log_in view authenticates username/password, on success user is logged in and redirected, on failure error message is displayed
-```
-
-**Common Mistakes to Avoid**:
-
-- ❌ Missing `(Key: FEATURE-XXX)` - parser needs this to identify features
-- ❌ Missing `Stories:` section - every feature must have at least one story
-- ❌ Stories not indented - parser expects indented numbered lists
-- ❌ Missing `- Acceptance:` prefix - acceptance criteria won't be parsed
-- ❌ Using bullet points (`-`) instead of numbers (`1.`) for stories
-- ❌ Feature title not in bold (`**Title**`) - parser may not extract title correctly
-
-### Phase 3: CLI Artifact Creation (REQUIRED)
-
-```bash
-# Use enrichment to update plan via CLI
-specfact --no-interactive import from-code [<bundle-name>] --repo <repo-path> --enrichment <enrichment-report.md>
-```
-
-**Result**: Final artifacts are CLI-generated with validated enrichments
-
-**Note**: If code generation is needed, use the validation loop pattern (see [CLI Enforcement Rules](./shared/cli-enforcement.md#standard-validation-loop-pattern-for-llm-generated-code))
-
-### Phase 4: OpenAPI Contract Generation (REQUIRED for Sidecar Validation)
-
-**When contracts are generated automatically:**
-
-The `import from-code` command attempts to extract OpenAPI contracts
automatically, but **only if**:
-
-1. Features have `source_tracking.implementation_files` (AST-detected features)
-2. The OpenAPI extractor finds API endpoints (FastAPI/Flask patterns like `@app.get`, `@router.post`, `@app.route`)
-
-**When contracts are NOT generated:**
-
-Contracts are **NOT** generated automatically when:
-
-- Features were added via enrichment (no `source_tracking.implementation_files`)
-- Django applications (Django `path()` patterns are not detected by the extractor)
-- Features without API endpoints (models, utilities, middleware, etc.)
-- Framework SDKs or libraries without web endpoints
-
-**How to generate contracts manually:**
-
-For features that need OpenAPI contracts (e.g., for sidecar validation with CrossHair), use:
-
-```bash
-# Generate contract for a single feature
-specfact --no-interactive contract init --bundle <bundle-name> --feature <feature-key> --repo <repo-path>
-
-# Example: Generate contracts for all enrichment-added features
-specfact --no-interactive contract init --bundle djangogoat-validation --feature FEATURE-USER-AUTHENTICATION --repo .
-specfact --no-interactive contract init --bundle djangogoat-validation --feature FEATURE-NOTES-MANAGEMENT --repo .
-# ... repeat for each feature that needs a contract
-```
-
-**When to apply contract generation:**
-
-- **After Phase 3** (enrichment applied): Check which features have contracts in `.specfact/projects/<bundle-name>/contracts/`
-- **Before sidecar validation**: All features that will be analyzed by CrossHair/Specmatic need OpenAPI contracts
-- **For Django apps**: Always generate contracts manually after enrichment, as Django URL patterns are not auto-detected
-
-**Verification:**
-
-```bash
-# Check which features have contracts
-ls .specfact/projects/<bundle-name>/contracts/*.yaml
-
-# Compare with total features
-ls .specfact/projects/<bundle-name>/features/*.yaml
-```
-
-If the contract count is less than the feature count, generate missing contracts using `contract init`.
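-
-**Optional gap check** (illustrative sketch): the two listings above can be compared automatically. This assumes feature and contract YAML files share the same `FEATURE-XXX` file stem - verify against your bundle layout before relying on it:
-
-```bash
-# Print feature stems that have no matching contract file (hypothetical layout)
-bundle=".specfact/projects/<bundle-name>"
-comm -23 \
-  <(ls "$bundle/features" | sed 's/\.yaml$//' | sort) \
-  <(ls "$bundle/contracts" | sed 's/\.yaml$//' | sort)
-```
-
-Each printed stem is a feature that still needs `specfact --no-interactive contract init`.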
- -## Expected Output - -**Success**: Bundle location, report path, summary (features/stories/contracts/relationships) -**Error**: Missing bundle name or bundle already exists - -## Common Patterns - -```bash -/specfact.01-import --repo . # Uses active plan, auto-enrichment enabled by default -/specfact.01-import --bundle legacy-api --repo . # Auto-enrichment enabled -/specfact.01-import --repo . --no-enrich-for-speckit # Disable auto-enrichment -/specfact.01-import --repo . --entry-point src/auth/ -/specfact.01-import --repo . --enrichment report.md -``` - -## Context - -{ARGS} diff --git a/resources/prompts/specfact.02-plan.md b/resources/prompts/specfact.02-plan.md deleted file mode 100644 index 66c7c010..00000000 --- a/resources/prompts/specfact.02-plan.md +++ /dev/null @@ -1,177 +0,0 @@ ---- -description: Manage project bundles - create, add features/stories, and update plan metadata. ---- - -# SpecFact Plan Management Command - -## User Input - -```text -$ARGUMENTS -``` - -You **MUST** consider the user input before proceeding (if not empty). - -## Purpose - -Manage project bundles: initialize, add features/stories, update metadata (idea/features/stories). - -**When to use:** Creating bundles, adding features/stories, updating metadata. - -**Quick:** `/specfact.02-plan init legacy-api` or `/specfact.02-plan add-feature --key FEATURE-001 --title "User Auth"` - -## Parameters - -### Target/Input - -- `--bundle NAME` - Project bundle name (optional, defaults to active plan set via `plan select`) -- `--key KEY` - Feature/story key (e.g., FEATURE-001, STORY-001) -- `--feature KEY` - Parent feature key (for story operations) - -### Output/Results - -- (No output-specific parameters for plan management) - -### Behavior/Options - -- `--interactive/--no-interactive` - Interactive mode. Default: True (interactive) -- `--scaffold/--no-scaffold` - Create directory structure. 
Default: True (scaffold enabled)
-
-### Advanced/Configuration
-
-- `--title TEXT` - Feature/story title
-- `--outcomes TEXT` - Expected outcomes (comma-separated)
-- `--acceptance TEXT` - Acceptance criteria (comma-separated)
-- `--constraints TEXT` - Constraints (comma-separated)
-- `--confidence FLOAT` - Confidence score (0.0-1.0)
-- `--draft/--no-draft` - Mark as draft
-
-## Workflow
-
-### Step 1: Parse Arguments
-
-- Determine operation: `init`, `add-feature`, `add-story`, `update-idea`, `update-feature`, `update-story`
-- Extract parameters (bundle name defaults to active plan if not specified, keys, etc.)
-
-### Step 2: Execute CLI
-
-```bash
-specfact plan init <bundle-name> [--interactive/--no-interactive] [--scaffold/--no-scaffold]
-specfact plan add-feature [--bundle <name>] --key <key> --title <title> [--outcomes <outcomes>] [--acceptance <acceptance>]
-specfact plan add-story [--bundle <name>] --feature <feature-key> --key <story-key> --title <title> [--acceptance <acceptance>]
-specfact plan update-idea [--bundle <name>] [--title <title>] [--narrative <narrative>] [--target-users <users>] [--value-hypothesis <hypothesis>] [--constraints <constraints>]
-specfact plan update-feature [--bundle <name>] --key <key> [--title <title>] [--outcomes <outcomes>] [--acceptance <acceptance>] [--constraints <constraints>] [--confidence <score>] [--draft/--no-draft]
-specfact plan update-story [--bundle <name>] --feature <feature-key> --key <story-key> [--title <title>] [--acceptance <acceptance>] [--story-points <points>] [--value-points <points>] [--confidence <score>] [--draft/--no-draft]
-# --bundle defaults to active plan if not specified
-```
-
-### Step 3: Present Results
-
-- Display bundle location
-- Show created/updated features/stories
-- Present summary of changes
-
-## CLI Enforcement
-
-**CRITICAL**: Always use SpecFact CLI commands. See [CLI Enforcement Rules](./shared/cli-enforcement.md) for details.
- -**Rules:** - -- Execute CLI first - never create artifacts directly -- Use `--no-interactive` flag in CI/CD environments -- Never modify `.specfact/` directly -- Use CLI output as grounding for validation -- Code generation requires LLM (only via AI IDE slash prompts, not CLI-only) - -## Dual-Stack Workflow (Copilot Mode) - -When in copilot mode, follow this three-phase workflow: - -### Phase 1: CLI Grounding (REQUIRED) - -```bash -# Execute CLI to get structured output -specfact plan <operation> [--bundle <name>] [options] --no-interactive -``` - -**Capture**: - -- CLI-generated artifacts (plan bundles, features, stories) -- Metadata (timestamps, confidence scores) -- Telemetry (execution time, file counts) - -### Phase 2: LLM Enrichment (OPTIONAL, Copilot Only) - -**Purpose**: Add semantic understanding to CLI output - -**What to do**: - -- Read CLI-generated artifacts (use file reading tools for display only) -- Use CLI artifacts as the source of truth for keys/structure/metadata -- Scan codebase only if asked to align the plan with implementation or to add missing features -- When scanning, compare findings against CLI artifacts and propose updates via CLI commands -- Identify missing features/stories -- Suggest confidence adjustments -- Extract business context - -**What NOT to do**: - -- ❌ Create YAML/JSON artifacts directly -- ❌ Modify CLI artifacts directly (use CLI commands to update) -- ❌ Bypass CLI validation -- ❌ Write to `.specfact/` folder directly (always use CLI) -- ❌ Use direct file manipulation tools for writing (use CLI commands) - -**Output**: Generate enrichment report (Markdown) or use `--batch-updates` JSON/YAML file - -### Phase 3: CLI Artifact Creation (REQUIRED) - -```bash -# Use enrichment to update plan via CLI -specfact plan update-feature [--bundle <name>] --key <key> [options] --no-interactive -# Or use batch updates: -specfact plan update-feature [--bundle <name>] --batch-updates <updates.json> --no-interactive -``` - -**Result**: 
Final artifacts are CLI-generated with validated enrichments - -**Note**: If code generation is needed, use the validation loop pattern (see [CLI Enforcement Rules](./shared/cli-enforcement.md#standard-validation-loop-pattern-for-llm-generated-code)) - -## Expected Output - -## Success (Init) - -```text -✓ Project bundle created: .specfact/projects/legacy-api/ -✓ Bundle initialized with scaffold structure -``` - -## Success (Add Feature) - -```text -✓ Feature 'FEATURE-001' added successfully -Feature: User Authentication -Outcomes: Secure login, Session management -``` - -## Error (Missing Bundle) - -```text -✗ Project bundle name is required (or set active plan with 'plan select') -Usage: specfact plan <operation> [--bundle <name>] [options] -``` - -## Common Patterns - -```bash -/specfact.02-plan init legacy-api -/specfact.02-plan add-feature --key FEATURE-001 --title "User Auth" --outcomes "Secure login" --acceptance "Users can log in" -/specfact.02-plan add-story --feature FEATURE-001 --key STORY-001 --title "Login API" --acceptance "API returns JWT" -/specfact.02-plan update-feature --key FEATURE-001 --title "Updated Title" --confidence 0.9 -/specfact.02-plan update-idea --target-users "Developers, DevOps" --value-hypothesis "Reduce technical debt" -# --bundle defaults to active plan if not specified -``` - -## Context - -{ARGS} diff --git a/resources/prompts/specfact.03-review.md b/resources/prompts/specfact.03-review.md deleted file mode 100644 index a66a6fed..00000000 --- a/resources/prompts/specfact.03-review.md +++ /dev/null @@ -1,714 +0,0 @@ ---- -description: Review project bundle to identify ambiguities, resolve gaps, and prepare for promotion. ---- - -# SpecFact Review Command - -## User Input - -```text -$ARGUMENTS -``` - -You **MUST** consider the user input before proceeding (if not empty). - -## Purpose - -Review project bundle to identify/resolve ambiguities and missing information. Asks targeted questions for promotion readiness. 
- -**When to use:** After import/creation, before promotion, when clarification needed. - -**Quick:** `/specfact.03-review` (uses active plan) or `/specfact.03-review legacy-api` - -## Interactive Question Presentation - -**CRITICAL**: When presenting questions interactively, **ALWAYS** generate and display multiple answer options in a table format. This makes it easier for users to select appropriate answers. - -### Answer Options Format - -For each question, generate 3-5 reasonable answer options based on: - -- **Code analysis**: Review existing patterns, similar features, error handling approaches -- **Domain knowledge**: Best practices, common scenarios, industry standards -- **Business context**: Product requirements, user needs, feature relationships - -**Present options in a numbered table with recommended answer:** - -```text -Question 1/5 -Category: Interaction & UX Flow -Q: What error/empty states should be handled for story STORY-XXX? - -Current Plan Settings: -Story STORY-XXX Acceptance: [current acceptance criteria] - -Answer Options: -┌─────┬─────────────────────────────────────────────────────────────────┐ -│ No. 
│ Option │ -├─────┼─────────────────────────────────────────────────────────────────┤ -│ 1 │ Error handling: Invalid input produces clear error messages │ -│ │ Empty states: Missing data shows "No data available" message │ -│ │ Validation: Required fields validated before processing │ -│ │ ⭐ Recommended (based on code analysis) │ -├─────┼─────────────────────────────────────────────────────────────────┤ -│ 2 │ Error handling: Network failures retry with exponential backoff │ -│ │ Empty states: Show empty state UI with helpful guidance │ -│ │ Validation: Schema-based validation with clear error messages │ -├─────┼─────────────────────────────────────────────────────────────────┤ -│ 3 │ Error handling: Errors logged to stderr with exit codes (CLI) │ -│ │ Empty states: Sensible defaults when data is missing │ -│ │ Validation: Covered in OpenAPI contract files │ -├─────┼─────────────────────────────────────────────────────────────────┤ -│ 4 │ Not applicable - error handling covered in contract files │ -├─────┼─────────────────────────────────────────────────────────────────┤ -│ 5 │ [Custom answer - type your own] │ -└─────┴─────────────────────────────────────────────────────────────────┘ - -Your answer (1-5, or type custom answer): [1] ⭐ Recommended -``` - -**CRITICAL**: Always provide a **recommended answer** (marked with ⭐) based on: - -- Code analysis (what the actual implementation does) -- Best practices (industry standards, common patterns) -- Domain knowledge (what makes sense for this feature) - -The recommendation helps less-experienced users make informed decisions. 
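-
-For example, if the user picks option 1 above, record the full option text as the answer, not the option number. A minimal sketch of the resulting `question_id -> answer` mapping (the ID `Q001` is illustrative):
-
-```json
-{
-  "Q001": "Error handling: invalid input produces clear error messages; empty states show a 'No data available' message; required fields are validated before processing"
-}
-```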
-
-### Guidelines for Answer Options
-
-- **Option 1-3**: Specific, actionable options based on code analysis and domain knowledge
-- **Option 4**: "Not applicable" or "Covered elsewhere" when appropriate
-- **Option 5**: Always include "[Custom answer - type your own]" as the last option
-- **Base options on research**: Review codebase, similar features, existing patterns
-- **Make options specific**: Avoid generic responses - be concrete and actionable
-- **Use numbered selection**: Allow users to select by number (1-5) or letter (A-E)
-- **⭐ Always provide a recommended answer**: Mark one option as recommended (⭐) based on:
- - Code analysis (what the actual implementation does or should do)
- - Best practices (industry standards, common patterns)
- - Domain knowledge (what makes sense for this specific feature)
- - The recommendation helps less-experienced users make informed decisions
-
-## Parameters
-
-### Target/Input
-
-- `bundle NAME` (optional argument) - Project bundle name (e.g., legacy-api, auth-module). Default: active plan (set via `plan select`)
-- `--category CATEGORY` - Focus on specific taxonomy category. Default: None (all categories)
-
-### Output/Results
-
-- `--list-questions` - Output questions in JSON format. Default: False
-- `--output-questions PATH` - Save questions directly to file (JSON format). Use with `--list-questions` to save instead of stdout. Default: None
-- `--list-findings` - Output all findings in structured format. Default: False
-- `--output-findings PATH` - Save findings directly to file (JSON/YAML format). Use with `--list-findings` to save instead of stdout. Default: None
-- `--findings-format FORMAT` - Output format: json, yaml, or table. Default: json for non-interactive, table for interactive
-
-### Behavior/Options
-
-- `--no-interactive` - Non-interactive mode (for CI/CD). Default: False (interactive mode)
-- `--answers PATH` - Path to a JSON file with question_id -> answer mappings (e.g., `/tmp/answers.json`).
Default: None -- `--auto-enrich` - Automatically enrich vague acceptance criteria using PlanEnricher (same enrichment logic as `import from-code`). Default: False (opt-in for review, but import has auto-enrichment enabled by default) - -**Important**: `--auto-enrich` will **NOT** resolve partial findings such as: - -- Missing error handling specifications ("Interaction & UX Flow" category) -- Vague acceptance criteria requiring domain knowledge ("Completion Signals" category) -- Business context questions requiring human judgment - -For these cases, use the **export-to-file → LLM reasoning → import-from-file** workflow (see Step 4). - -### Advanced/Configuration - -- `--max-questions INT` - Maximum questions per session. Default: 5 (range: 1-10) - - **Important**: This limits the number of questions asked per review session, not the total number of available questions. If there are more questions than the limit, you may need to run the review multiple times to answer all questions. Each session will ask different questions (avoiding duplicates from previous sessions). - -## Workflow - -### Step 1: Parse Arguments - -- Extract bundle name (defaults to active plan if not specified) -- Extract optional parameters (max-questions, category, etc.) - -### Step 2: Execute CLI to Export Questions - -**CRITICAL**: Always use `/tmp/` for temporary artifacts to avoid polluting the codebase. Never create temporary files in the project root. - -**CRITICAL**: Question IDs are generated per run and can change if you re-run review. -**Do not** re-run `plan review` between exporting questions and applying answers. Always answer using the exact exported questions file for that session. - -**Note**: The `--max-questions` parameter (default: 5) limits the number of questions per session, not the total number of available questions. If there are more questions available, you may need to run the review multiple times to answer all questions. 
Each session will ask different questions (avoiding duplicates from previous sessions). - -**Export questions to file for LLM reasoning:** - -```bash -# Export questions to file (REQUIRED for LLM enrichment workflow) -# Use /tmp/ to avoid polluting the codebase -specfact plan review [<bundle-name>] --list-questions --output-questions /tmp/questions.json --no-interactive -# Uses active plan if bundle not specified -``` - -**Optional: Get findings for comprehensive analysis:** - -```bash -# Get findings (saves to stdout - can redirect to /tmp/) -# Use /tmp/ to avoid polluting the codebase -# Option 1: Redirect output (includes CLI banner - not recommended) -specfact plan review [<bundle-name>] --list-findings --findings-format json --no-interactive > /tmp/findings.json - -# Option 2: Save directly to file (recommended - clean JSON only) -specfact plan review [<bundle-name>] --list-findings --output-findings /tmp/findings.json --no-interactive -``` - -**Note**: The `--output-questions` option saves questions directly to a file, avoiding the need for complex JSON parsing. The ambiguity scanner now recognizes the simplified format (e.g., "Must verify X works correctly (see contract examples)") as valid and will not flag it as vague. - -**Important**: Always use `/tmp/` for temporary files (`questions.json`, `findings.json`, etc.) to keep the project root clean and avoid accidental commits of temporary artifacts. - -### Step 3: LLM Reasoning and Answer Generation - -**CRITICAL**: For partial findings (missing error handling, vague acceptance criteria, business context), `--auto-enrich` will **NOT** resolve them. You must use LLM reasoning. - -**CRITICAL WORKFLOW**: Present questions with answer options **IN THE CHAT**, wait for user selection, then add selected answers to file. - -**Workflow:** - -1. 
**Read the exported questions file** (`/tmp/questions.json`): - - - Review all questions in the file - - Identify which questions require code/feature analysis - - Determine which questions need domain knowledge or business context - -2. **Research codebase and features** (as needed): - - - For error handling questions: Check existing error handling patterns in the codebase - - For acceptance criteria questions: Review related features and stories - - For business context questions: Review `idea.yaml`, `product.yaml`, and related documentation - -3. **Present questions with answer options IN THE CHAT** (REQUIRED): - - **DO NOT add answers to the file yet!** Present each question with answer options in the chat conversation and wait for user selection. - - For each question: - - - **Generate 3-5 reasonable answer options** based on: - - Code analysis (existing patterns, similar features) - - Domain knowledge (best practices, common scenarios) - - Business context (product requirements, user needs) - - **Present options in a table format** in the chat with numbered choices: - - ```text - Question 1/5 - Category: Interaction & UX Flow - Q: What error/empty states should be handled for story STORY-XXX? - - Current Plan Settings: - Story STORY-XXX Acceptance: [current acceptance criteria] - - Answer Options: - ┌─────┬─────────────────────────────────────────────────────────────────┐ - │ No. 
│ Option │ - ├─────┼─────────────────────────────────────────────────────────────────┤ - │ 1 │ Error handling: Invalid input produces clear error messages │ - │ │ Empty states: Missing data shows "No data available" message │ - │ │ Validation: Required fields validated before processing │ - │ │ ⭐ Recommended (based on code analysis) │ - ├─────┼─────────────────────────────────────────────────────────────────┤ - │ 2 │ Error handling: Network failures retry with exponential backoff │ - │ │ Empty states: Show empty state UI with helpful guidance │ - │ │ Validation: Schema-based validation with clear error messages │ - ├─────┼─────────────────────────────────────────────────────────────────┤ - │ 3 │ Error handling: Errors logged to stderr with exit codes (CLI) │ - │ │ Empty states: Sensible defaults when data is missing │ - │ │ Validation: Covered in OpenAPI contract files │ - ├─────┼─────────────────────────────────────────────────────────────────┤ - │ 4 │ Not applicable - error handling covered in contract files │ - ├─────┼─────────────────────────────────────────────────────────────────┤ - │ 5 │ [Custom answer - type your own] │ - └─────┴─────────────────────────────────────────────────────────────────┘ - - Your answer (1-5, or type custom answer): [1] ⭐ Recommended - ``` - - - **Wait for user to select an answer** (number 1-5, letter A-E, or custom text) - - **Option 5 (or last option)** should always be "[Custom answer - type your own]" to allow free-form input - - **Base options on code analysis** - review similar features, existing error handling patterns, and domain knowledge - - **Make options specific and actionable** - not generic responses - - **⭐ Always provide a recommended answer** - mark one option as recommended (⭐) based on code analysis, best practices, and domain knowledge. This helps less-experienced users make informed decisions. - - **Present one question at a time** and wait for user selection before moving to the next - -4. 
**After user has selected all answers**: - - - **THEN** export the selected answers to a separate file `/tmp/answers.json` - - Map user selections to the actual answer text (if user selected option 1, use the text from option 1) - - If user selected a custom answer, use that text directly - - **Export format**: Create a JSON object with `question_id -> answer` mappings - - **DO NOT** add answers to the file until user has selected all answers - - **CRITICAL**: Export answers to `/tmp/answers.json` (not `/tmp/questions.json`) for CLI import - -**Example `/tmp/questions.json` structure:** - -```json -{ - "questions": [ - { - "id": "Q001", - "category": "Interaction & UX Flow", - "question": "What error/empty states should be handled for story STORY-XXX?", - "related_sections": ["features.FEATURE-XXX.stories.STORY-XXX.acceptance"] - } - ], - "total": 5 -} -``` - -**Example `/tmp/answers.json` structure (exported after user selections):** - -```json -{ - "Q001": "Error handling should include: network failures (retry with exponential backoff), invalid input (clear validation messages), empty results (show 'No data available' message), timeout errors (show progress indicator and allow cancellation). Based on analysis of similar features in the codebase.", - "Q002": "Answer for question 2 based on code review..." -} -``` - -**CRITICAL**: Export answers to `/tmp/answers.json` (separate file), not to `/tmp/questions.json`. The CLI expects a file path for `--answers`, not a JSON string extracted from the questions file. - -### Step 4: Apply Enrichment via CLI - -**REQUIRED workflow for partial findings:** - -1. **Export questions to file** (already done in Step 2): - - ```bash - # Use /tmp/ to avoid polluting the codebase - specfact plan review [<bundle-name>] --list-questions --output-questions /tmp/questions.json --no-interactive - ``` - -2. 
**LLM reasoning and user selection** (Step 3):
-
- - LLM presents questions with answer options **IN THE CHAT**
- - User selects answers (1-5, A-E, or custom text)
- - **After user has selected all answers**, LLM exports the selected answers to `/tmp/answers.json`
-
-3. **Import answers via CLI** (after user selections are complete):
-
- ```bash
- # Import answers from exported file
- # Use /tmp/ to avoid polluting the codebase
- specfact plan review [<bundle-name>] --answers /tmp/answers.json --no-interactive
- ```
-
-**CRITICAL**:
-
-- Do NOT add answers to the file until the user has selected all answers
-- Present questions in chat, wait for selections
-- Export answers to `/tmp/answers.json` (separate file, not `/tmp/questions.json`)
-- Import via CLI using the file path: `--answers /tmp/answers.json`
-
-**Alternative approaches** (for non-partial findings only):
-
-#### Option B: Update idea fields directly via CLI
-
-Use `plan update-idea` to update idea fields from enrichment recommendations:
-
-```bash
-specfact plan update-idea [--bundle <bundle-name>] --value-hypothesis "..." --narrative "..." --target-users "..."
-```
-
-#### Option C: Apply enrichment via import (only if bundle needs regeneration)
-
-```bash
-specfact code import [<bundle-name>] --repo . --enrichment enrichment-report.md
-```
-
-**Note:**
-
-- **For partial findings**: Always use Option A (export → LLM reasoning → import)
-- **For business context only**: Option B (update-idea) may be sufficient
-- **For bundle regeneration**: Only use Option C if you need to regenerate the bundle
-- **CRITICAL**: Never manually edit `.specfact/` files directly - always use CLI commands
- - This includes `idea.yaml`, `product.yaml`, feature files, story files, etc.
- - Even if a file doesn't exist yet, use CLI commands to create it (e.g., `plan update-idea` will create `idea.yaml` if needed)
- - Direct file modification bypasses validation and can cause inconsistencies
-
-### Step 5: Present Results
-
-- Display Q&A, sections touched, coverage summary (initial/updated)
-- Note: Clarifications don't affect hash (stable across review sessions)
-- If enrichment report was created, summarize what was addressed
-
-## CLI Enforcement
-
-**CRITICAL**: Always use SpecFact CLI commands. See [CLI Enforcement Rules](./shared/cli-enforcement.md) for details.
-
-**Rules:**
-
-- Execute CLI first - never create artifacts directly
-- Use `--no-interactive` flag in CI/CD environments
-- **NEVER modify `.specfact/` files directly** - always use CLI commands
- - ❌ **DO NOT** edit `idea.yaml`, `product.yaml`, feature files, or any other artifacts directly
- - ❌ **DO NOT** create new artifact files manually (even if they don't exist yet)
- - ✅ **DO** use CLI commands: `plan update-idea`, `plan update-feature`, `plan update-story`, etc.
- - ✅ **DO** use CLI commands to create new artifacts: `plan init`, `plan add-feature`, etc.
-- Use CLI output as grounding for validation
-- Code generation requires LLM (only via AI IDE slash prompts, not CLI-only)
-
-**Important**: If an artifact file doesn't exist yet, use the appropriate CLI command to create it.
Never create or modify `.specfact/` files manually, as this bypasses validation and can cause inconsistencies.
-
-## Dual-Stack Workflow (Copilot Mode)
-
-When in copilot mode, follow this three-phase workflow:
-
-### Phase 1: CLI Grounding (REQUIRED)
-
-```bash
-# Option 1: Save findings directly to file (recommended - clean JSON only)
-specfact plan review [<bundle-name>] --list-findings --output-findings /tmp/findings.json --no-interactive
-
-# Option 2: Get questions and save directly to /tmp/ (recommended - avoids JSON parsing)
-specfact plan review [<bundle-name>] --list-questions --output-questions /tmp/questions.json --no-interactive
-```
-
-**Capture**:
-
-- CLI-generated findings (ambiguities, missing information)
-- Questions saved directly to file (no complex parsing needed)
-- Structured JSON/YAML output for bulk processing
-- Metadata (timestamps, confidence scores)
-
-**Note**: Use `--output-questions` to save questions directly to a file. This avoids the need for complex on-the-fly Python code to extract JSON from CLI output.
-
-**CRITICAL**: Always use `/tmp/` for temporary artifacts (`questions.json`, `findings.json`, etc.) to avoid polluting the codebase and prevent accidental commits of temporary files.
-
-### Phase 2: LLM Enrichment (REQUIRED for Partial Findings)
-
-**Purpose**: Add semantic understanding and domain knowledge to CLI findings
-
-**CRITICAL**: `--auto-enrich` will **NOT** resolve partial findings. LLM reasoning is **REQUIRED** for:
-
-- Missing error handling specifications ("Interaction & UX Flow" category)
-- Vague acceptance criteria requiring domain knowledge ("Completion Signals" category)
-- Business context questions requiring human judgment
-
-**What to do**:
-
-0.
**Grounding rule**: - - Treat CLI-exported questions as the source of truth; consult codebase/docs only to answer them (do not invent new artifacts) - - **Feature/Story Completeness note**: Answers here are clarifications only. They do **NOT** create stories. - For missing stories, use `specfact plan add-story` (or `plan update-story --batch-updates` if stories already exist). - -1. **Read exported questions file** (`/tmp/questions.json`): - - Review all questions and their categories - - Identify questions requiring code/feature analysis - - Determine questions needing domain knowledge - -2. **Research codebase**: - - For error handling: Analyze existing error handling patterns - - For acceptance criteria: Review related features and stories - - For business context: Review `idea.yaml`, `product.yaml`, documentation - -3. **Present questions with answer options IN THE CHAT** (REQUIRED): - - **DO NOT add answers to the file yet!** Present each question with answer options in the chat conversation. - - **For each question:** - - - Generate 3-5 reasonable options based on code analysis and domain knowledge - - Present in a numbered table (1-5) or lettered table (A-E) **IN THE CHAT** - - Include a "[Custom answer]" option as the last choice - - Make options specific and actionable, not generic - - **Wait for user to select an answer** before moving to the next question - - **Example format (present in chat):** - - ```text - Question 1/5 - Category: Interaction & UX Flow - Q: What error/empty states should be handled for story STORY-XXX? - - Answer Options: - ┌─────┬─────────────────────────────────────────────────────────────┐ - │ No. 
│ Option │ - ├─────┼─────────────────────────────────────────────────────────────┤ - │ 1 │ [Option based on code analysis - specific and actionable] │ - │ │ ⭐ Recommended (based on code analysis) │ - │ 2 │ [Option based on best practices - domain knowledge] │ - │ 3 │ [Option based on similar features - pattern matching] │ - │ 4 │ [Not applicable / covered elsewhere] │ - │ 5 │ [Custom answer - type your own] │ - └─────┴─────────────────────────────────────────────────────────────┘ - - Your answer (1-5, or type custom answer): [1] ⭐ Recommended - ``` - -4. **After user has selected all answers**: - - - **THEN** add the selected answers to `/tmp/questions.json` in the `answers` object - - Map user selections (1-5) to the actual answer text from the options - - If user selected a custom answer, use that text directly - - **DO NOT** add answers to the file until user has selected all answers - -**What NOT to do**: - -- ❌ Use `--auto-enrich` expecting it to resolve partial findings -- ❌ Create YAML/JSON artifacts directly (even if they don't exist yet) -- ❌ Modify CLI artifacts directly (use CLI commands to update) -- ❌ Edit `idea.yaml`, `product.yaml`, feature files, or story files manually -- ❌ Create new artifact files manually - use CLI commands instead -- ❌ Bypass CLI validation -- ❌ Write to `.specfact/` folder directly (always use CLI) -- ❌ Create temporary files in project root (always use `/tmp/`) - -**Output**: Updated `/tmp/questions.json` file with `answers` object populated - -### Phase 3: CLI Artifact Creation (REQUIRED) - -**For partial findings (REQUIRED workflow):** - -```bash -# Import answers from /tmp/questions.json file -# Use /tmp/ to avoid polluting the codebase -specfact plan review [<bundle-name>] --answers "$(jq -c '.answers' /tmp/questions.json)" --no-interactive -``` - -**For non-partial findings only:** - -```bash -# Use auto-enrich for simple vague criteria (not partial findings) -specfact plan review [<bundle-name>] --auto-enrich 
--no-interactive - -# Or use batch updates for feature updates -specfact plan update-feature [--bundle <name>] --batch-updates <updates.json> --no-interactive -``` - -**Result**: Final artifacts are CLI-generated with validated enrichments - -**Note**: If code generation is needed, use the validation loop pattern (see [CLI Enforcement Rules](./shared/cli-enforcement.md#standard-validation-loop-pattern-for-llm-generated-code)) - -## Expected Output - -### Success - -```text -✓ Review complete: 5 question(s) answered - -Project Bundle: legacy-api -Questions Asked: 5 - -Sections Touched: - • idea.narrative - • features[FEATURE-001].acceptance - • features[FEATURE-002].outcomes - -Coverage Summary: - ✅ Functional Scope: clear - ✅ Technical Constraints: clear - ⚠️ Business Context: partial -``` - -### Error (Missing Bundle) - -```text -✗ Project bundle 'legacy-api' not found -Create one with: specfact plan init legacy-api -``` - -## Common Patterns - -```bash -# Get findings first -/specfact.03-review --list-findings # List all findings -/specfact.03-review --list-findings --findings-format json # JSON format for enrichment -/specfact.03-review --list-findings --output-findings /tmp/findings.json # Save findings to file (clean JSON) - -# Interactive review -/specfact.03-review # Uses active plan (default: 5 questions per session) -/specfact.03-review legacy-api # Specific bundle -/specfact.03-review --max-questions 3 # Limit questions per session (may need multiple runs) -/specfact.03-review --category "Functional Scope" # Focus category -/specfact.03-review --max-questions 10 # Ask more questions per session (up to 10) - -# Non-interactive with answers -/specfact.03-review --answers '{"Q001": "answer"}' # Provide answers directly -/specfact.03-review --list-questions # Output questions as JSON to stdout -/specfact.03-review --list-questions --output-questions /tmp/questions.json # Save questions to /tmp/ - -# Auto-enrichment (NOTE: Will NOT resolve partial findings - 
use export/LLM/import workflow instead) -/specfact.03-review --auto-enrich # Auto-enrich simple vague criteria only - -# Recommended workflow for partial findings (use /tmp/ to avoid polluting codebase) -/specfact.03-review --list-questions --output-questions /tmp/questions.json # Export questions (default: 5 per session) -# [LLM reasoning: present questions in chat, wait for user selections, then export answers] -/specfact.03-review --answers /tmp/answers.json # Import answers from file -# [Repeat if more questions available - each session asks different questions] -/specfact.03-review --list-questions --output-questions /tmp/questions.json # Export next batch -/specfact.03-review --answers /tmp/answers.json # Import next batch -``` - -## Enrichment Workflow - -**CRITICAL**: `--auto-enrich` will **NOT** resolve partial findings such as: - -- Missing error handling specifications ("Interaction & UX Flow" category) -- Vague acceptance criteria requiring domain knowledge ("Completion Signals" category) -- Business context questions requiring human judgment - -**For partial findings, use this REQUIRED workflow:** - -1. **Export questions to file** (use `/tmp/` to avoid polluting codebase): - - ```bash - specfact plan review [<bundle-name>] --list-questions --output-questions /tmp/questions.json --no-interactive - ``` - -2. **Get findings** (optional, for comprehensive analysis - use `/tmp/`): - - ```bash - specfact plan review [<bundle-name>] --list-findings --output-findings /tmp/findings.json --no-interactive - ``` - -3. **LLM reasoning and user selection** (REQUIRED for partial findings): - - **CRITICAL**: Present questions with answer options **IN THE CHAT**, wait for user selections, then add selected answers to file. 
- - - Read `/tmp/questions.json` file - - Research codebase for error handling patterns, feature relationships, domain knowledge - - **Present each question with answer options IN THE CHAT** (see Phase 2, step 3 for the table format) - - **Wait for user to select answers** (1-5, A-E, or custom text) - - **After user has selected all answers**, export selected answers to `/tmp/answers.json` (separate file) - - Map user selections to actual answer text (if user selected option 1, use the text from option 1) - - **Export format**: Create a JSON object with `question_id -> answer` mappings - - **DO NOT** export answers to file until user has selected all answers - - **CRITICAL**: Export to `/tmp/answers.json` (not `/tmp/questions.json`) for CLI import - -4. **Import answers via CLI** (after user selections are complete): - - ```bash - # Import answers from exported file - specfact plan review [<bundle-name>] --answers /tmp/answers.json --no-interactive - ``` - - **CRITICAL**: Use the file path `/tmp/answers.json` (not a JSON string extracted from `/tmp/questions.json`) - -5. **Verify**: Run `plan review` again to confirm improvements - - **Important**: The `--max-questions` parameter (default: 5) limits questions per session, not the total available. If there are more questions, repeat the workflow (Steps 1-4) until all are answered. Each session asks different questions, avoiding duplicates from previous sessions.
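The `/tmp/answers.json` file exported in step 3 is a flat JSON object mapping question IDs to answer text. A minimal sketch of the expected shape (the question IDs and answer text below are placeholders, not real CLI output):

```shell
# Hypothetical /tmp/answers.json: keys are question IDs from /tmp/questions.json,
# values are the answer text the user selected in the chat.
cat > /tmp/answers.json <<'EOF'
{
  "Q001": "Show an empty-state message with a retry action",
  "Q002": "Validation errors are listed inline per field"
}
EOF
cat /tmp/answers.json
```

Once written, this file is exactly what `--answers /tmp/answers.json` consumes in step 4.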
- -**For non-partial findings only:** - -- **During import**: Auto-enrichment happens automatically (enabled by default) -- **After import**: Use `specfact plan review --auto-enrich` for simple vague criteria -- **Note**: The scanner now recognizes simplified format (e.g., "Must verify X works correctly (see contract examples)") as valid - -**Alternative approaches** (for business context only): - -- Use `plan update-idea` to update idea fields directly -- If bundle needs regeneration, use `import from-code --enrichment` - -**Note on OpenAPI Contracts:** - -After applying enrichment or review updates, check if features need OpenAPI contracts for sidecar validation: - -- Features added via enrichment typically don't have contracts (no `source_tracking`) -- Django applications require manual contract generation (Django URL patterns not auto-detected) -- Use `specfact contract init --bundle <bundle> --feature <FEATURE_KEY>` to generate contracts for features that need them - -**Enrichment Report Format** (for `import from-code --enrichment`): - -When generating enrichment reports for use with `import from-code --enrichment`, follow this exact format: - -```markdown -# [Bundle Name] Enrichment Report - -**Date**: YYYY-MM-DDTHH:MM:SS -**Bundle**: <bundle-name> - ---- - -## Missing Features - -1. **Feature Title** (Key: FEATURE-XXX) - - Confidence: 0.85 - - Outcomes: outcome1, outcome2, outcome3 - - Stories: - 1. Story title here - - Acceptance: criterion1, criterion2, criterion3 - 2. Another story title - - Acceptance: criterion1, criterion2 - -2. **Another Feature** (Key: FEATURE-YYY) - - Confidence: 0.80 - - Outcomes: outcome1, outcome2 - - Stories: - 1. 
Story title - - Acceptance: criterion1, criterion2, criterion3 - -## Confidence Adjustments - -- FEATURE-EXISTING-KEY: 0.90 (reason: improved understanding after code review) - -## Business Context - -- Priority: High priority feature for core functionality -- Constraint: Must support both REST and GraphQL APIs -- Risk: Potential performance issues with large datasets -``` - -**Format Requirements**: - -1. **Section Header**: Must use `## Missing Features` (case-insensitive, but prefer this exact format) -2. **Feature Format**: - - Numbered list: `1. **Feature Title** (Key: FEATURE-XXX)` - - **Bold title** is required (use `**Title**`) - - **Key in parentheses**: `(Key: FEATURE-XXX)` - must be uppercase, alphanumeric with hyphens/underscores - - Fields on separate lines with `-` prefix: - - `- Confidence: 0.85` (float between 0.0-1.0) - - `- Outcomes: comma-separated or line-separated list` - - `- Stories:` (required - each feature must have at least one story) -3. **Stories Format**: - - Numbered list under `Stories:` section: `1. Story title` - - **Indentation**: Stories must be indented (2-4 spaces) under the feature - - **Acceptance Criteria**: `- Acceptance: criterion1, criterion2, criterion3` - - Can be comma-separated on one line - - Or multi-line (each criterion on new line) - - Must start with `- Acceptance:` -4. **Optional Sections**: - - `## Confidence Adjustments`: List existing features with confidence updates - - `## Business Context`: Priorities, constraints, risks (bullet points) -5. **File Naming**: `<bundle-name>-<timestamp>.enrichment.md` (e.g., `djangogoat-2025-12-23T23-50-00.enrichment.md`) - -**Example** (working format): - -```markdown -## Missing Features - -1. **User Authentication** (Key: FEATURE-USER-AUTHENTICATION) - - Confidence: 0.85 - - Outcomes: User registration, login, profile management - - Stories: - 1. 
User can sign up for new account - - Acceptance: sign_up view processes POST requests, creates User automatically, user is logged in after signup, redirects to profile page - 2. User can log in with credentials - - Acceptance: log_in view authenticates username/password, on success user is logged in and redirected, on failure error message is displayed -``` - -**Common Mistakes to Avoid**: - -- ❌ Missing `(Key: FEATURE-XXX)` - parser needs this to identify features -- ❌ Missing `Stories:` section - every feature must have at least one story -- ❌ Stories not indented - parser expects indented numbered lists -- ❌ Missing `- Acceptance:` prefix - acceptance criteria won't be parsed -- ❌ Using bullet points (`-`) instead of numbers (`1.`) for stories -- ❌ Feature title not in bold (`**Title**`) - parser may not extract title correctly - -## Context - -{ARGS} diff --git a/resources/prompts/specfact.04-sdd.md b/resources/prompts/specfact.04-sdd.md deleted file mode 100644 index 6e406999..00000000 --- a/resources/prompts/specfact.04-sdd.md +++ /dev/null @@ -1,160 +0,0 @@ ---- -description: Create or update SDD manifest (hard spec) from project bundle with WHY/WHAT/HOW extraction. ---- - -# SpecFact SDD Creation Command - -## User Input - -```text -$ARGUMENTS -``` - -You **MUST** consider the user input before proceeding (if not empty). - -## Purpose - -Create/update SDD manifest from project bundle. Captures WHY (intent/constraints), WHAT (capabilities/acceptance), HOW (architecture/invariants/contracts). - -**When to use:** After plan review, before promotion, when plan changes. - -**Quick:** `/specfact.04-sdd` (uses active plan) or `/specfact.04-sdd legacy-api` - -## Parameters - -### Target/Input - -- `bundle NAME` (optional argument) - Project bundle name (e.g., legacy-api, auth-module). Default: active plan (set via `plan select`) -- `--sdd PATH` - Output SDD manifest path. 
Default: bundle-specific .specfact/projects/<bundle-name>/sdd.<format> (Phase 8.5) - -### Output/Results - -- `--output-format FORMAT` - SDD manifest format (yaml or json). Default: global --output-format (yaml) - -### Behavior/Options - -- `--interactive/--no-interactive` - Interactive mode with prompts. Default: True (interactive, auto-detect) - -## Workflow - -### Step 1: Parse Arguments - -- Extract bundle name (defaults to active plan if not specified) -- Extract optional parameters (sdd path, output format, etc.) - -### Step 2: Execute CLI - -```bash -specfact plan harden [<bundle-name>] [--sdd <path>] [--output-format <format>] -# Uses active plan if bundle not specified -``` - -### Step 3: Present Results - -- Display SDD location, WHY/WHAT/HOW summary, coverage metrics -- Hash excludes clarifications (stable across review sessions) - -## CLI Enforcement - -**CRITICAL**: Always use SpecFact CLI commands. See [CLI Enforcement Rules](./shared/cli-enforcement.md) for details. - -**Rules:** - -- Execute CLI first - never create artifacts directly -- Use `--no-interactive` flag in CI/CD environments -- Never modify `.specfact/` directly -- Use CLI output as grounding for validation -- Code generation requires LLM (only via AI IDE slash prompts, not CLI-only) - -## Dual-Stack Workflow (Copilot Mode) - -When in copilot mode, follow this three-phase workflow: - -### Phase 1: CLI Grounding (REQUIRED) - -```bash -# Execute CLI to get structured output -specfact plan harden [<bundle-name>] [--sdd <path>] --no-interactive -``` - -**Capture**: - -- CLI-generated SDD manifest -- Metadata (hash, coverage metrics) -- Telemetry (execution time, file counts) - -### Phase 2: LLM Enrichment (OPTIONAL, Copilot Only) - -**Purpose**: Add semantic understanding to SDD content - -**What to do**: - -- Read CLI-generated SDD (use file reading tools for display only) -- Treat CLI SDD as the source of truth; scan codebase only to enrich WHY/WHAT/HOW context -- Research codebase for 
additional context -- Suggest improvements to WHY/WHAT/HOW sections - -**What NOT to do**: - -- ❌ Create YAML/JSON artifacts directly -- ❌ Modify CLI artifacts directly (use CLI commands to update) -- ❌ Bypass CLI validation -- ❌ Write to `.specfact/` folder directly (always use CLI) - -**Output**: Generate enrichment report (Markdown) with suggestions - -### Phase 3: CLI Artifact Creation (REQUIRED) - -```bash -# Use enrichment to update plan via CLI, then regenerate SDD -specfact plan update-idea [--bundle <name>] [options] --no-interactive -specfact plan harden [<bundle-name>] --no-interactive -``` - -**Result**: Final SDD is CLI-generated with validated enrichments - -**Note**: If code generation is needed, use the validation loop pattern (see [CLI Enforcement Rules](./shared/cli-enforcement.md#standard-validation-loop-pattern-for-llm-generated-code)) - -## Expected Output - -### Success - -```text -✓ SDD manifest created: .specfact/projects/legacy-api/sdd.yaml - -SDD Manifest Summary: -Project Bundle: .specfact/projects/legacy-api/ -Bundle Hash: abc123def456... -SDD Path: .specfact/projects/legacy-api/sdd.yaml - -WHY (Intent): - Build secure authentication system -Constraints: 2 - -WHAT (Capabilities): 12 - -HOW (Architecture): - Microservices architecture with JWT tokens... 
-Invariants: 8 -Contracts: 15 -``` - -### Error (Missing Bundle) - -```text -✗ Project bundle 'legacy-api' not found -Create one with: specfact plan init legacy-api -``` - -## Common Patterns - -```bash -/specfact.04-sdd # Uses active plan -/specfact.04-sdd legacy-api # Specific bundle -/specfact.04-sdd --output-format json # JSON format -/specfact.04-sdd --sdd .specfact/projects/custom-bundle/sdd.yaml -``` - -## Context - -{ARGS} diff --git a/resources/prompts/specfact.05-enforce.md b/resources/prompts/specfact.05-enforce.md deleted file mode 100644 index 0d0c227b..00000000 --- a/resources/prompts/specfact.05-enforce.md +++ /dev/null @@ -1,166 +0,0 @@ ---- -description: Validate SDD manifest against project bundle and contracts, check coverage thresholds. ---- - -# SpecFact SDD Enforcement Command - -## User Input - -```text -$ARGUMENTS -``` - -You **MUST** consider the user input before proceeding (if not empty). - -## Purpose - -Validate SDD manifest against project bundle and contracts. Checks hash matching, coverage thresholds, and contract density. - -**When to use:** After creating/updating SDD, before promotion, in CI/CD pipelines. - -**Quick:** `/specfact.05-enforce` (uses active plan) or `/specfact.05-enforce legacy-api` - -## Parameters - -### Target/Input - -- `bundle NAME` (optional argument) - Project bundle name (e.g., legacy-api, auth-module). Default: active plan (set via `plan select`) -- `--sdd PATH` - Path to SDD manifest. Default: bundle-specific .specfact/projects/<bundle-name>/sdd.<format> (Phase 8.5), with fallback to legacy .specfact/sdd/<bundle-name>.<format> - -### Output/Results - -- `--output-format FORMAT` - Output format (yaml, json, markdown). Default: yaml -- `--out PATH` - Output file path. Default: bundle-specific .specfact/projects/<bundle-name>/reports/enforcement/report-<timestamp>.<format> (Phase 8.5) - -### Behavior/Options - -- `--no-interactive` - Non-interactive mode (for CI/CD). 
Default: False (interactive mode) - -## Workflow - -### Step 1: Parse Arguments - -- Extract bundle name (defaults to active plan if not specified) -- Extract optional parameters (sdd path, output format, etc.) - -### Step 2: Execute CLI - -```bash -specfact enforce sdd [<bundle-name>] [--sdd <path>] [--output-format <format>] [--out <path>] -# Uses active plan if bundle not specified -``` - -### Step 3: Present Results - -- Display validation summary (passed/failed) -- Show deviation counts by severity -- Present coverage metrics vs thresholds -- Indicate hash match status -- Provide fix hints for failures - -## CLI Enforcement - -**CRITICAL**: Always use SpecFact CLI commands. See [CLI Enforcement Rules](./shared/cli-enforcement.md) for details. - -**Rules:** - -- Execute CLI first - never create artifacts directly -- Use `--no-interactive` flag in CI/CD environments -- Never modify `.specfact/` directly -- Use CLI output as grounding for validation -- Code generation requires LLM (only via AI IDE slash prompts, not CLI-only) - -## Dual-Stack Workflow (Copilot Mode) - -When in copilot mode, follow this three-phase workflow: - -### Phase 1: CLI Grounding (REQUIRED) - -```bash -# Execute CLI to get structured output -specfact enforce sdd [<bundle-name>] [--sdd <path>] --no-interactive -``` - -**Capture**: - -- CLI-generated validation report -- Deviation counts and severity -- Coverage metrics vs thresholds - -### Phase 2: LLM Enrichment (OPTIONAL, Copilot Only) - -**Purpose**: Add semantic understanding to validation results - -**What to do**: - -- Read CLI-generated validation report (use file reading tools for display only) -- Treat the CLI report as the source of truth; scan codebase only to explain deviations or propose fixes -- Research codebase for context on deviations -- Suggest fixes for validation failures - -**What NOT to do**: - -- ❌ Create YAML/JSON artifacts directly -- ❌ Modify CLI artifacts directly (use CLI commands to update) -- ❌ Bypass CLI 
validation -- ❌ Write to `.specfact/` folder directly (always use CLI) - -**Output**: Generate fix suggestions report (Markdown) - -### Phase 3: CLI Artifact Creation (REQUIRED) - -```bash -# Apply fixes via CLI commands, then re-validate -specfact plan update-feature [--bundle <name>] [options] --no-interactive -specfact enforce sdd [<bundle-name>] --no-interactive -``` - -**Result**: Final artifacts are CLI-generated with validated fixes - -**Note**: If code generation is needed, use the validation loop pattern (see [CLI Enforcement Rules](./shared/cli-enforcement.md#standard-validation-loop-pattern-for-llm-generated-code)) - -## Expected Output - -### Success - -```text -✓ SDD validation passed - -Validation Summary -Total deviations: 0 - High: 0 - Medium: 0 - Low: 0 - -Report saved to: .specfact/projects/<bundle-name>/reports/enforcement/report-2025-11-26T10-30-00.yaml -``` - -### Failure (Hash Mismatch) - -```text -✗ SDD validation failed - -Issues Found: - -1. Hash Mismatch (HIGH) - The project bundle has been modified since the SDD manifest was created. - SDD hash: abc123def456... - Bundle hash: xyz789ghi012... - - Hash changes when modifying features, stories, or product/idea/business sections. - Note: Clarifications don't affect hash (review metadata). Hash stable across review sessions. - Fix: Run `specfact plan harden <bundle-name>` to update SDD manifest. -``` - -## Common Patterns - -```bash -/specfact.05-enforce # Uses active plan -/specfact.05-enforce legacy-api # Specific bundle -/specfact.05-enforce --output-format json --out report.json -/specfact.05-enforce --no-interactive # CI/CD mode -``` - -## Context - -{ARGS} diff --git a/resources/prompts/specfact.06-sync.md b/resources/prompts/specfact.06-sync.md deleted file mode 100644 index 4902781e..00000000 --- a/resources/prompts/specfact.06-sync.md +++ /dev/null @@ -1,202 +0,0 @@ ---- -description: Sync changes between external tool artifacts and SpecFact using bridge architecture. 
---- - -# SpecFact Sync Command - -## User Input - -```text -$ARGUMENTS -``` - -You **MUST** consider the user input before proceeding (if not empty). - -## Purpose - -Synchronize artifacts from external tools (Spec-Kit, Linear, Jira) with SpecFact project bundles using bridge mappings. Supports bidirectional sync. - -**When to use:** Syncing with Spec-Kit, integrating external tools, maintaining consistency. - -**Quick:** `/specfact.06-sync --adapter speckit --repo . --bidirectional` or `/specfact.06-sync --bundle legacy-api --watch` - -## Parameters - -### Target/Input - -- `--repo PATH` - Path to repository. Default: current directory (.) -- `--bundle NAME` - Project bundle name for SpecFact → tool conversion. Default: auto-detect - -### Behavior/Options - -- `--bidirectional` - Enable bidirectional sync (tool ↔ SpecFact). Default: False -- `--overwrite` - Overwrite existing tool artifacts. Default: False -- `--watch` - Watch mode for continuous sync. Default: False -- `--ensure-compliance` - Validate and auto-enrich for tool compliance. Default: False - -### Advanced/Configuration - -- `--adapter TYPE` - Adapter type (speckit, generic-markdown, openspec, github, ado). Default: auto-detect -- `--interval SECONDS` - Watch interval in seconds. 
Default: 5 (range: 1+) - -**GitHub Adapter Options (for backlog sync):** - -- `--repo-owner OWNER` - GitHub repository owner (required for GitHub backlog sync) -- `--repo-name NAME` - GitHub repository name (required for GitHub backlog sync) -- `--github-token TOKEN` - GitHub API token (optional, uses GITHUB_TOKEN env var or gh CLI if not provided) -- `--use-gh-cli/--no-gh-cli` - Use GitHub CLI (`gh auth token`) to get token automatically (default: True) - -**Azure DevOps Adapter Options (for backlog sync):** - -- `--ado-org ORG` - Azure DevOps organization (required for ADO backlog sync) -- `--ado-project PROJECT` - Azure DevOps project (required for ADO backlog sync) -- `--ado-base-url URL` - Azure DevOps base URL (optional, defaults to <https://dev.azure.com>). Use for Azure DevOps Server (on-prem) -- `--ado-token TOKEN` - Azure DevOps PAT (optional, uses AZURE_DEVOPS_TOKEN env var if not provided) -- `--ado-work-item-type TYPE` - Azure DevOps work item type (optional, derived from process template if not provided) - -## Workflow - -### Step 1: Parse Arguments - -- Extract repository path (default: current directory) -- Extract adapter type (default: auto-detect) -- Extract sync options (bidirectional, overwrite, watch, etc.) 
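For the GitHub and Azure DevOps adapters, credentials can also be supplied through the environment variables named in the option descriptions above. A minimal setup sketch (the token values are placeholders, not real secrets):

```shell
# GITHUB_TOKEN is read when --github-token is not passed;
# with --use-gh-cli (the default) `gh auth token` is also tried.
export GITHUB_TOKEN="ghp_placeholder_token"

# AZURE_DEVOPS_TOKEN is read when --ado-token is not passed.
export AZURE_DEVOPS_TOKEN="placeholder_ado_pat"

echo "GitHub token set: ${GITHUB_TOKEN:+yes}"
echo "ADO token set: ${AZURE_DEVOPS_TOKEN:+yes}"
```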
- -### Step 2: Execute CLI - -```bash -# Spec-Kit adapter (default) -specfact sync bridge --adapter speckit --repo <path> [--bidirectional] [--bundle <name>] [--overwrite] [--watch] [--interval <seconds>] - -# GitHub adapter (for backlog sync) -specfact sync bridge --adapter github --repo <path> --repo-owner <owner> --repo-name <name> [--bidirectional] [--bundle <name>] [--github-token <token>] [--use-gh-cli] - -# Azure DevOps adapter (for backlog sync) -specfact sync bridge --adapter ado --repo <path> --ado-org <org> --ado-project <project> [--bidirectional] [--bundle <name>] [--ado-token <token>] [--ado-base-url <url>] - -# --bundle defaults to active plan if not specified -``` - -### Step 3: Present Results - -- Display sync direction and adapter used -- Show artifacts synchronized -- Present conflict resolution (if any) -- Indicate watch status (if enabled) - -## CLI Enforcement - -**CRITICAL**: Always use SpecFact CLI commands. See [CLI Enforcement Rules](./shared/cli-enforcement.md) for details. 
- -**Rules:** - -- Execute CLI first - never create artifacts directly -- Use `--no-interactive` flag in CI/CD environments -- Never modify `.specfact/` or `.specify/` directly -- Use CLI output as grounding for validation -- Code generation requires LLM (only via AI IDE slash prompts, not CLI-only) - -## Dual-Stack Workflow (Copilot Mode) - -When in copilot mode, follow this three-phase workflow: - -### Phase 1: CLI Grounding (REQUIRED) - -```bash -# Execute CLI to get structured output -specfact sync bridge --adapter <adapter> --repo <path> [options] --no-interactive -``` - -**Capture**: - -- CLI-generated sync results -- Artifacts synchronized -- Conflict resolution status - -### Phase 2: LLM Enrichment (OPTIONAL, Copilot Only) - -**Purpose**: Add semantic understanding to sync results - -**What to do**: - -- Read CLI-generated sync results (use file reading tools for display only) -- Treat CLI sync output as the source of truth; scan codebase only to explain conflicts -- Research codebase for context on conflicts -- Suggest resolution strategies - -**What NOT to do**: - -- ❌ Create YAML/JSON artifacts directly -- ❌ Modify CLI artifacts directly (use CLI commands to update) -- ❌ Bypass CLI validation -- ❌ Write to `.specfact/` or `.specify/` folders directly (always use CLI) - -**Output**: Generate conflict resolution suggestions (Markdown) - -### Phase 3: CLI Artifact Creation (REQUIRED) - -```bash -# Apply resolutions via CLI commands, then re-sync -specfact plan update-feature [--bundle <name>] [options] --no-interactive -specfact sync bridge --adapter <adapter> --repo <path> --no-interactive -``` - -**Result**: Final artifacts are CLI-generated with validated resolutions - -**Note**: If code generation is needed, use the validation loop pattern (see [CLI Enforcement Rules](./shared/cli-enforcement.md#standard-validation-loop-pattern-for-llm-generated-code)) - -## Expected Output - -### Success - -```text -✓ Sync complete: Spec-Kit ↔ SpecFact (bidirectional) 
- -Adapter: speckit -Repository: /path/to/repo - -Artifacts Synchronized: - - Spec-Kit → SpecFact: 12 features, 45 stories - - SpecFact → Spec-Kit: 3 new features, 8 updated stories - -Conflicts Resolved: 2 -``` - -### Error (Missing Adapter) - -```text -✗ Unsupported adapter: invalid-adapter -Supported adapters: speckit, generic-markdown, openspec, github, ado -``` - -### Error (Missing Required Parameters) - -```text -✗ GitHub adapter requires both --repo-owner and --repo-name options -Example: specfact sync bridge --adapter github --repo-owner 'nold-ai' --repo-name 'specfact-cli' --bidirectional -``` - -```text -✗ Azure DevOps adapter requires both --ado-org and --ado-project options -Example: specfact sync bridge --adapter ado --ado-org 'my-org' --ado-project 'my-project' --bidirectional -``` - -## Common Patterns - -```bash -# Spec-Kit adapter -/specfact.06-sync --adapter speckit --repo . --bidirectional -/specfact.06-sync --adapter speckit --repo . --bundle legacy-api -/specfact.06-sync --adapter speckit --repo . --watch --interval 5 -/specfact.06-sync --repo . --bidirectional # Auto-detect adapter - -# GitHub adapter (backlog sync) -/specfact.06-sync --adapter github --repo . --repo-owner nold-ai --repo-name specfact-cli --bidirectional - -# Azure DevOps adapter (backlog sync) -/specfact.06-sync --adapter ado --repo . --ado-org my-org --ado-project my-project --bidirectional -``` - -## Context - -{ARGS} diff --git a/resources/prompts/specfact.07-contracts.md b/resources/prompts/specfact.07-contracts.md deleted file mode 100644 index 0511859a..00000000 --- a/resources/prompts/specfact.07-contracts.md +++ /dev/null @@ -1,364 +0,0 @@ ---- -description: Analyze contract coverage, generate enhancement prompts, and apply contracts sequentially with careful review. ---- - -# SpecFact Contract Enhancement Workflow - -## User Input - -```text -$ARGUMENTS -``` - -You **MUST** consider the user input before proceeding (if not empty). 
- -## Purpose - -Complete contract enhancement workflow: analyze coverage → generate prompts → apply contracts sequentially with careful review. - -**When to use:** After codebase analysis, when adding contracts to existing code, improving contract coverage. - -**Quick:** `/specfact.07-contracts` (uses active plan) or `/specfact.07-contracts legacy-api` - -## Parameters - -### Target/Input - -- `bundle NAME` (optional argument) - Project bundle name (e.g., legacy-api, auth-module). Default: active plan (set via `plan select`) -- `--repo PATH` - Repository path. Default: current directory (.) -- `--apply CONTRACTS` - Contract types to apply: 'all-contracts', 'beartype', 'icontract', 'crosshair', or comma-separated list. Default: 'all-contracts' -- `--min-priority PRIORITY` - Minimum priority for files to process: 'high', 'medium', 'low'. Default: 'low' (process all files missing contracts) - -### Behavior/Options - -- `--no-interactive` - Non-interactive mode (for CI/CD). Default: False (interactive mode with careful review) -- `--auto-apply` - Automatically apply contracts after validation (skips confirmation). Default: False (requires confirmation) -- `--batch-size INT` - Number of files to process before pausing for review. 
Default: 1 (one file at a time for careful review) - -## Workflow - -### Step 1: Analyze Contract Coverage - -**First, identify files missing contracts:** - -```bash -specfact analyze contracts --repo <repo-path> --bundle <bundle-name> -# Uses active plan if bundle not specified -``` - -**Parse the output to identify:** - -- Files missing beartype (marked with ✗) -- Files missing icontract (marked with ✗) -- Files missing crosshair (marked with ✗ or dim ✗) -- Files that need attention (prioritized in the table) - -**Extract file list:** - -- Focus on files marked with ✗ for beartype or icontract -- Crosshair is optional (marked with dim ✗), but can be included if user requests -- Filter out pure data model files (they use Pydantic validation) - -**Present summary:** - -- Total files analyzed -- Files missing contracts (by type) -- Files recommended for enhancement - -### Step 2: Generate Enhancement Prompts - -**For each file missing contracts, generate a prompt:** - -```bash -specfact generate contracts-prompt <file-path> --apply <contract-types> --bundle <bundle-name> -``` - -**Important:** - -- Generate prompts for ALL files missing contracts (or based on --min-priority) -- Prompts are saved to `.specfact/projects/<bundle-name>/prompts/enhance-<filename>-<contracts>.md` -- If no bundle, prompts saved to `.specfact/prompts/` -- Each prompt file contains instructions for the AI IDE to enhance the file - -**Present prompt generation summary:** - -- Number of prompts generated -- Location of prompt files -- List of files ready for enhancement - -### Step 3: User Review and Selection - -**Present files for user selection:** - -```text -Files ready for contract enhancement: -1. src/auth/login.py (missing: beartype, icontract) -2. src/api/users.py (missing: beartype, icontract, crosshair) -3. src/utils/helpers.py (missing: beartype) -... 
- -Select files to enhance (comma-separated numbers, 'all', or 'skip'): -``` - -**Wait for user input:** - -- If user selects specific files, process only those -- If user selects 'all', process all files sequentially -- If user selects 'skip', move to next step or exit - -**In non-interactive mode:** - -- Process all files automatically (or based on --min-priority) -- Still process sequentially (one at a time) for careful validation - -### Step 4: Apply Contracts Sequentially - -**For each selected file, apply contracts one at a time:** - -**4.1: Read the prompt file:** - -```bash -# Prompt file location: .specfact/projects/<bundle-name>/prompts/enhance-<filename>-<contracts>.md -# Or: .specfact/prompts/enhance-<filename>-<contracts>.md -``` - -**4.2: Enhance the code using AI IDE:** - -- Read the original file -- Apply contracts according to the prompt instructions -- Write enhanced code to temporary file: `enhanced_<filename>.py` -- **DO NOT modify the original file directly** - -**4.3: Validate enhanced code:** - -```bash -specfact generate contracts-apply enhanced_<filename>.py --original <original-file-path> -``` - -**Validation includes:** - -- File size check -- Syntax validation -- AST structure comparison -- Contract imports verification -- Code quality checks (ruff, pylint, basedpyright, mypy if available) -- Test execution (scoped to relevant test files) - -**4.4: Handle validation results:** - -**If validation fails:** - -- Review error messages -- Fix issues in enhanced code -- Re-validate (up to 3 attempts) -- If still failing after 3 attempts, skip this file and continue to next - -**If validation succeeds:** - -- Show diff preview (what will change) -- If `--auto-apply` is False, ask for confirmation: - - ```text - Validation passed. Apply changes to <original-file>? 
(y/n): - ``` - -- If confirmed (or `--auto-apply` is True), apply changes automatically -- If not confirmed, skip this file and continue to next - -#### 4.5: Pause for review (if --batch-size > 1) - -After processing `--batch-size` files, pause and show summary: - -```text -Processed 3/10 files: -✓ src/auth/login.py - Contracts applied successfully -✓ src/api/users.py - Contracts applied successfully -⏭ src/utils/helpers.py - Skipped (user declined) - -Continue with next batch? (y/n): -``` - -### Step 5: Final Summary - -**After all files processed, show final summary:** - -```text -Contract Enhancement Complete - -Summary: -- Files analyzed: 25 -- Files processed: 18 -- Files enhanced: 15 -- Files skipped: 3 -- Files failed: 0 - -Enhanced files: -✓ src/auth/login.py (beartype, icontract) -✓ src/api/users.py (beartype, icontract, crosshair) -... - -Next steps: -1. Verify contract coverage: specfact analyze contracts --bundle <bundle-name> -2. Run full test suite: pytest (or your project's test command) -3. Review changes: git diff -4. Commit enhanced code -``` - -## CLI Enforcement - -**CRITICAL**: Always use SpecFact CLI commands. See [CLI Enforcement Rules](./shared/cli-enforcement.md) for details. 
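
The per-file retry discipline in Step 4.4 boils down to a small bounded loop. A schematic sketch — `validate` and `fix` here are stand-ins for the CLI validation call (`specfact generate contracts-apply …`) and the AI IDE fix pass, not real SpecFact APIs:

```python
from collections.abc import Callable

MAX_ATTEMPTS = 3  # matches the "up to 3 attempts" rule in Step 4.4


def validate_with_retries(
    validate: Callable[[], bool],
    fix: Callable[[], None],
    max_attempts: int = MAX_ATTEMPTS,
) -> bool:
    """Return True if validation passes within the attempt budget, else False (file gets skipped)."""
    for attempt in range(1, max_attempts + 1):
        if validate():
            return True
        if attempt < max_attempts:
            fix()  # LLM fixes the temporary file, then we re-validate
    return False
```

A file that still fails after the third attempt is skipped and the workflow continues with the next file, as described above.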
- -**Rules:** - -- Execute CLI commands in sequence (analyze → generate → apply) -- Never modify `.specfact/` directly -- Always validate before applying changes -- Process files sequentially for careful review -- Use `--no-interactive` only in CI/CD environments -- Use CLI output as grounding for all operations -- Code generation requires LLM (only via AI IDE slash prompts, not CLI-only) - -## Dual-Stack Workflow (Copilot Mode) - -This command **already implements** the standard validation loop pattern (see [CLI Enforcement Rules](./shared/cli-enforcement.md#standard-validation-loop-pattern-for-llm-generated-code)): - -### Phase 1: CLI Prompt Generation (REQUIRED) - -```bash -# CLI generates structured prompt -specfact generate contracts-prompt <file-path> --apply <contract-types> --bundle <bundle-name> -``` - -**Result**: Prompt saved to `.specfact/projects/<bundle-name>/prompts/enhance-<filename>-<contracts>.md` - -### Phase 2: LLM Execution (REQUIRED - AI IDE Only) - -- LLM reads prompt → generates enhanced code → writes to TEMPORARY file (`enhanced_<filename>.py`) -- **NEVER writes directly to original artifacts** - -### Phase 3: CLI Validation Loop (REQUIRED, up to 3 retries) - -```bash -# CLI validates temp file with all relevant tools -specfact generate contracts-apply enhanced_<filename>.py --original <original-file> -``` - -**Validation includes**: - -- Syntax validation (py_compile) -- File size check (must be >= original) -- AST structure comparison (preserve functions/classes) -- Contract imports verification -- Code quality checks (ruff, pylint, basedpyright, mypy) -- Test execution (contract-test, pytest) - -**If validation fails**: CLI provides detailed error feedback → LLM fixes → Re-validate (max 3 attempts) - -**If validation succeeds**: CLI applies changes to original file → CLI removes temporary file → CLI updates metadata/telemetry - -**This is the standard pattern for all LLM-generated code** - see [CLI Enforcement 
Rules](./shared/cli-enforcement.md#standard-validation-loop-pattern-for-llm-generated-code) for details. - -## Expected Output - -### Step 1: Analysis Results - -```text -Contract Coverage Analysis: legacy-api -Repository: /path/to/repo - -┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━━┳━━━━━━━━━━┓ -┃ File ┃ beartype ┃ icontract ┃ crosshair ┃ Coverage ┃ -┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━━╇━━━━━━━━━━┩ -│ src/auth/login.py │ ✗ │ ✗ │ ✗ │ 0% │ -│ src/api/users.py │ ✗ │ ✗ │ ✗ │ 0% │ -... - -Summary: - Files analyzed: 25 - Files with beartype: 7 (28.0%) - Files with icontract: 7 (28.0%) - Files with crosshair: 2 (8.0%) - -Found 18 files missing contracts. -``` - -### Step 2: Prompt Generation - -```text -Generating enhancement prompts... - -✓ Generated prompt for: src/auth/login.py - Location: .specfact/projects/legacy-api/prompts/enhance-login.py-all-contracts.md - -✓ Generated prompt for: src/api/users.py - Location: .specfact/projects/legacy-api/prompts/enhance-users.py-all-contracts.md - -... - -✓ Generated 18 prompts successfully -``` - -### Step 3: User Selection - -```text -Files ready for contract enhancement: -1. src/auth/login.py (missing: beartype, icontract, crosshair) -2. src/api/users.py (missing: beartype, icontract, crosshair) -3. src/utils/helpers.py (missing: beartype) -... - -Select files to enhance (comma-separated numbers, 'all', or 'skip'): all -``` - -### Step 4: Sequential Application - -```text -Processing file 1/18: src/auth/login.py - -[Reading prompt file...] -[Enhancing code with AI IDE...] -[Writing enhanced code to: enhanced_login.py] - -Validating enhanced code... 
-✓ File size check: passed -✓ Syntax validation: passed -✓ AST structure: passed (15 definitions preserved) -✓ Contract imports: verified -✓ Code quality checks: passed (ruff, pylint) -✓ Tests: 12/12 passed - -Diff preview: -+ from beartype import beartype -+ from icontract import require, ensure -... - -Apply changes to src/auth/login.py? (y/n): y -✓ Contracts applied successfully - -[Pausing for review... Press Enter to continue to next file] -``` - -## Common Patterns - -```bash -/specfact.07-contracts # Uses active plan, all-contracts, interactive -/specfact.07-contracts legacy-api # Specific bundle -/specfact.07-contracts --apply beartype,icontract # Specific contract types -/specfact.07-contracts --min-priority high # Only high-priority files -/specfact.07-contracts --batch-size 3 # Process 3 files before pausing -/specfact.07-contracts --auto-apply # Auto-apply after validation (no confirmation) -/specfact.07-contracts --no-interactive # CI/CD mode (still sequential for safety) -``` - -## Important Notes - -1. **Sequential Processing**: Files are processed one at a time (or in small batches) to allow careful review -2. **Validation Required**: All enhanced code must pass validation before applying -3. **User Control**: User can skip files, pause between files, or stop the process -4. **Data Model Files**: Pure Pydantic/dataclass files are automatically excluded (they use Pydantic validation) -5. **Prompt Location**: Prompts are saved to bundle-specific directories when bundle is provided -6. **Temporary Files**: Enhanced code is written to temporary files (`enhanced_<filename>.py`) for validation before applying - -## Context - -{ARGS} diff --git a/resources/prompts/specfact.compare.md b/resources/prompts/specfact.compare.md deleted file mode 100644 index 637c1987..00000000 --- a/resources/prompts/specfact.compare.md +++ /dev/null @@ -1,159 +0,0 @@ ---- -description: Compare manual and auto-derived plans to detect code vs plan drift and deviations. 
---- - -# SpecFact Compare Command - -## User Input - -```text -$ARGUMENTS -``` - -You **MUST** consider the user input before proceeding (if not empty). - -## Purpose - -Compare two project bundles (or legacy plan bundles) to detect deviations, mismatches, and missing features. Identifies code vs plan drift. - -**When to use:** After import to compare with manual plan, detecting spec/implementation drift, validating completeness. - -**Quick:** `/specfact.compare --bundle legacy-api` or `/specfact.compare --code-vs-plan` - -## Parameters - -### Target/Input - -- `--bundle NAME` - Project bundle name. If specified, compares bundles instead of legacy plan files. Default: None -- `--manual PATH` - Manual plan bundle path. Default: active plan in .specfact/plans. Ignored if --bundle specified -- `--auto PATH` - Auto-derived plan bundle path. Default: latest in .specfact/plans/. Ignored if --bundle specified - -### Output/Results - -- `--output-format FORMAT` - Output format (markdown, json, yaml). Default: markdown -- `--out PATH` - Output file path. Default: bundle-specific .specfact/projects/<bundle-name>/reports/comparison/report-<timestamp>.md (Phase 8.5), or global .specfact/reports/comparison/ if no bundle context - -### Behavior/Options - -- `--code-vs-plan` - Alias for comparing code-derived plan vs manual plan. 
Default: False - -## Workflow - -### Step 1: Parse Arguments - -- Extract comparison targets (bundle, manual plan, auto plan) -- Determine comparison mode (bundle vs bundle, or legacy plan files) - -### Step 2: Execute CLI - -```bash -specfact plan compare [--bundle <bundle-name>] [--manual <path>] [--auto <path>] [--code-vs-plan] [--output-format <format>] [--out <path>] -# --bundle defaults to active plan if not specified -``` - -### Step 3: Present Results - -- Display deviation summary (by type and severity) -- Show missing features in each plan -- Present drift analysis -- Indicate comparison report location - -## CLI Enforcement - -**CRITICAL**: Always use SpecFact CLI commands. See [CLI Enforcement Rules](./shared/cli-enforcement.md) for details. - -**Rules:** - -- Execute CLI first - never create artifacts directly -- Use `--no-interactive` flag in CI/CD environments -- Never modify `.specfact/` directly -- Use CLI output as grounding for validation -- Code generation requires LLM (only via AI IDE slash prompts, not CLI-only) - -## Dual-Stack Workflow (Copilot Mode) - -When in copilot mode, follow this three-phase workflow: - -### Phase 1: CLI Grounding (REQUIRED) - -```bash -# Execute CLI to get structured output -specfact plan compare [--bundle <name>] [options] --no-interactive -``` - -**Capture**: - -- CLI-generated comparison report -- Deviation counts and severity -- Missing features analysis - -### Phase 2: LLM Enrichment (OPTIONAL, Copilot Only) - -**Purpose**: Add semantic understanding to comparison results - -**What to do**: - -- Read CLI-generated comparison report (use file reading tools for display only) -- Treat the comparison report as the source of truth; scan codebase only to explain or confirm deviations -- Research codebase for context on deviations -- Suggest fixes for missing features or mismatches - -**What NOT to do**: - -- ❌ Create YAML/JSON artifacts directly -- ❌ Modify CLI artifacts directly (use CLI commands to update) -- ❌ 
Bypass CLI validation -- ❌ Write to `.specfact/` folder directly (always use CLI) - -**Output**: Generate fix suggestions report (Markdown) - -### Phase 3: CLI Artifact Creation (REQUIRED) - -```bash -# Apply fixes via CLI commands, then re-compare -specfact plan update-feature [--bundle <name>] [options] --no-interactive -specfact plan compare [--bundle <name>] --no-interactive -``` - -**Result**: Final artifacts are CLI-generated with validated fixes - -**Note**: If code generation is needed, use the validation loop pattern (see [CLI Enforcement Rules](./shared/cli-enforcement.md#standard-validation-loop-pattern-for-llm-generated-code)) - -## Expected Output - -### Success - -```text -✓ Comparison complete - -Comparison Report: .specfact/projects/<bundle-name>/reports/comparison/report-2025-11-26T10-30-00.md - -Deviations Summary: - Total: 5 - High: 1 (Missing Feature) - Medium: 3 (Feature Mismatch) - Low: 1 (Story Difference) - -Missing in Manual Plan: 2 features -Missing in Auto Plan: 1 feature -``` - -### Error (Missing Plans) - -```text -✗ Default manual plan not found: .specfact/plans/main.bundle.yaml -Create one with: specfact plan init --interactive -``` - -## Common Patterns - -```bash -/specfact.compare --bundle legacy-api -/specfact.compare --code-vs-plan -/specfact.compare --manual <path> --auto <path> -/specfact.compare --code-vs-plan --output-format json -``` - -## Context - -{ARGS} diff --git a/resources/prompts/specfact.validate.md b/resources/prompts/specfact.validate.md deleted file mode 100644 index 2548a8ee..00000000 --- a/resources/prompts/specfact.validate.md +++ /dev/null @@ -1,166 +0,0 @@ ---- -description: Run full validation suite for reproducibility and contract compliance. ---- - -# SpecFact Validate Command - -## User Input - -```text -$ARGUMENTS -``` - -You **MUST** consider the user input before proceeding (if not empty). - -## Purpose - -Run full validation suite for reproducibility and contract compliance. 
Executes linting, type checking, contract exploration, and tests. - -**When to use:** Before committing, in CI/CD pipelines, validating contract compliance. - -**Quick:** `/specfact.validate --repo .` or `/specfact.validate --verbose --budget 120` - -## Parameters - -### Target/Input - -- `--repo PATH` - Path to repository. Default: current directory (.) - -### Output/Results - -- `--out PATH` - Output report path. Default: bundle-specific .specfact/projects/<bundle-name>/reports/enforcement/report-<timestamp>.yaml (Phase 8.5), or global .specfact/reports/enforcement/ if no bundle context - -### Behavior/Options - -- `--verbose` - Verbose output. Default: False -- `--fail-fast` - Stop on first failure. Default: False -- `--fix` - Apply auto-fixes where available. Default: False - -### Advanced/Configuration - -- `--budget SECONDS` - Time budget in seconds. Default: 120 (must be > 0) - -## Workflow - -### Step 1: Parse Arguments - -- Extract repository path (default: current directory) -- Extract validation options (verbose, fail-fast, fix, budget) - -### Step 2: Execute CLI - -```bash -specfact repro --repo <path> [--verbose] [--fail-fast] [--fix] [--budget <seconds>] [--out <path>] -``` - -### Step 3: Present Results - -- Display validation summary table -- Show check results (pass/fail/timeout) -- Present report location -- Indicate exit code - -## CLI Enforcement - -**CRITICAL**: Always use SpecFact CLI commands. See [CLI Enforcement Rules](./shared/cli-enforcement.md) for details. 
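
One way to picture the `--budget` option described above (an illustrative model only, not the actual `specfact repro` implementation): checks run in order, and any check that would start after the budget is exhausted is recorded as a timeout rather than executed.

```python
import time
from collections.abc import Callable


def run_with_budget(checks: dict[str, Callable[[], bool]], budget_seconds: float) -> dict[str, str]:
    """Run named checks in order; checks reached after the deadline are marked 'timeout'."""
    results: dict[str, str] = {}
    deadline = time.monotonic() + budget_seconds
    for name, check in checks.items():
        if time.monotonic() >= deadline:
            results[name] = "timeout"
            continue
        results[name] = "pass" if check() else "fail"
    return results
```

With the default budget of 120 seconds, fast suites complete normally; slow suites surface which checks never ran instead of hanging.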
- -**Rules:** - -- Execute CLI first - never create artifacts directly -- Use `--no-interactive` flag in CI/CD environments -- Never modify `.specfact/` directly -- Use CLI output as grounding for validation results -- Code generation requires LLM (only via AI IDE slash prompts, not CLI-only) - -## Dual-Stack Workflow (Copilot Mode) - -When in copilot mode, follow this three-phase workflow: - -### Phase 1: CLI Grounding (REQUIRED) - -```bash -# Execute CLI to get structured output -specfact repro --repo <path> [options] --no-interactive -``` - -**Capture**: - -- CLI-generated validation report -- Check results (pass/fail/timeout) -- Exit code - -### Phase 2: LLM Enrichment (OPTIONAL, Copilot Only) - -**Purpose**: Add semantic understanding to validation results - -**What to do**: - -- Read CLI-generated validation report (use file reading tools for display only) -- Treat the validation report as the source of truth; scan codebase only to explain failures -- Research codebase for context on failures -- Suggest fixes for validation failures - -**What NOT to do**: - -- ❌ Create YAML/JSON artifacts directly -- ❌ Modify CLI artifacts directly (use CLI commands to update) -- ❌ Bypass CLI validation -- ❌ Write to `.specfact/` folder directly (always use CLI) - -**Output**: Generate fix suggestions report (Markdown) - -### Phase 3: CLI Artifact Creation (REQUIRED) - -```bash -# Apply fixes via CLI commands, then re-validate -specfact plan update-feature [--bundle <name>] [options] --no-interactive -specfact repro --repo <path> --no-interactive -``` - -**Result**: Final artifacts are CLI-generated with validated fixes - -**Note**: If code generation is needed, use the validation loop pattern (see [CLI Enforcement Rules](./shared/cli-enforcement.md#standard-validation-loop-pattern-for-llm-generated-code)) - -## Expected Output - -### Success - -```text -✓ All validations passed! 
- -Check Summary: - Lint (ruff) ✓ Passed - Async Patterns ✓ Passed - Type Checking ✓ Passed - Contract Exploration ✓ Passed - Property Tests ✓ Passed - Smoke Tests ✓ Passed - -Report saved to: .specfact/projects/<bundle-name>/reports/enforcement/report-2025-11-26T10-30-00.yaml -``` - -### Failure - -```text -✗ Some validations failed - -Check Summary: - Lint (ruff) ✓ Passed - Async Patterns ✗ Failed (2 issues) - Type Checking ✓ Passed - ... -``` - -## Common Patterns - -```bash -/specfact.validate --repo . -/specfact.validate --verbose -/specfact.validate --fix -/specfact.validate --fail-fast -/specfact.validate --budget 300 -``` - -## Context - -{ARGS} diff --git a/setup.py b/setup.py index b9cb503c..f5df4c72 100644 --- a/setup.py +++ b/setup.py @@ -7,7 +7,7 @@ if __name__ == "__main__": _setup = setup( name="specfact-cli", - version="0.42.6", + version="0.43.1", description=( "The swiss knife CLI for agile DevOps teams. Keep backlog, specs, tests, and code in sync with " "validation and contract enforcement for new projects and long-lived codebases." 
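
The tightened `rich` pin above confines the dependency to the 13.5.x patch line (at least 13.5.2, strictly below 13.6.0). For simple `X.Y.Z` versions the constraint reduces to a tuple comparison — a hedged sketch of the semantics, not how pip actually evaluates specifiers:

```python
def satisfies_rich_pin(version: str) -> bool:
    """Check a plain X.Y.Z version string against '>=13.5.2,<13.6.0' (illustrative only)."""
    parts = tuple(int(p) for p in version.split("."))
    return (13, 5, 2) <= parts < (13, 6, 0)
```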
@@ -23,7 +23,7 @@ "cryptography>=43.0.0", "cffi>=1.17.1", "typer>=0.15.0", - "rich>=14.0.0", + "rich>=13.5.2,<13.6.0", "jinja2>=3.1.0", "networkx>=3.2", "gitpython>=3.1.0", diff --git a/src/__init__.py b/src/__init__.py index 123f4b38..d44458c3 100644 --- a/src/__init__.py +++ b/src/__init__.py @@ -3,4 +3,4 @@ """ # Package version: keep in sync with pyproject.toml, setup.py, src/specfact_cli/__init__.py -__version__ = "0.42.6" +__version__ = "0.43.1" diff --git a/src/specfact_cli/__init__.py b/src/specfact_cli/__init__.py index 9d7aa16c..ef990799 100644 --- a/src/specfact_cli/__init__.py +++ b/src/specfact_cli/__init__.py @@ -42,6 +42,6 @@ def _bootstrap_bundle_paths() -> None: _bootstrap_bundle_paths() -__version__ = "0.42.6" +__version__ = "0.43.1" __all__ = ["__version__"] diff --git a/src/specfact_cli/adapters/speckit.py b/src/specfact_cli/adapters/speckit.py index fbb862fa..2564aecb 100644 --- a/src/specfact_cli/adapters/speckit.py +++ b/src/specfact_cli/adapters/speckit.py @@ -9,6 +9,8 @@ import hashlib import re +import shutil +import subprocess from pathlib import Path from typing import Any @@ -16,6 +18,7 @@ from icontract import ensure, require from specfact_cli.adapters.base import BridgeAdapter +from specfact_cli.common import get_bridge_logger from specfact_cli.importers.speckit_converter import SpecKitConverter from specfact_cli.importers.speckit_scanner import SpecKitScanner from specfact_cli.models.bridge import BridgeConfig @@ -35,6 +38,9 @@ ) +logger = get_bridge_logger(__name__) + + class SpecKitAdapter(BridgeAdapter): """ Spec-Kit bridge adapter implementing BridgeAdapter interface. 
@@ -83,6 +89,73 @@ def detect(self, repo_path: Path, bridge_config: BridgeConfig | None = None) -> or (docs_specs_dir.exists() and docs_specs_dir.is_dir()) ) + @beartype + @ensure(lambda result: result is None or isinstance(result, str), "Must return str or None") + def _detect_version_from_cli(self, repo_path: Path) -> str | None: + """Attempt to detect spec-kit version by running the specify CLI.""" + if not shutil.which("specify"): + return None + try: + result = subprocess.run( + ["specify", "--version"], + capture_output=True, + text=True, + timeout=5, + cwd=str(repo_path), + ) + if result.returncode == 0 and result.stdout.strip(): + version_match = re.search(r"(\d+\.\d+\.\d+)", result.stdout) + if version_match: + return version_match.group(1) + except subprocess.TimeoutExpired: + logger.debug("specify --version timed out after 5 seconds") + except OSError as exc: + logger.debug("specify --version failed due to OSError: %s", exc) + return None + + @beartype + @ensure(lambda result: result is None or isinstance(result, str), "Must return str or None") + def _detect_version_from_heuristics(self, repo_path: Path) -> str | None: + """Estimate spec-kit version from directory structure presence.""" + if (repo_path / "presets").is_dir(): + return ">=0.3.0" + if (repo_path / "extensions").is_dir(): + return ">=0.2.0" + if (repo_path / ".specify").is_dir(): + return ">=0.1.0" + return None + + @staticmethod + def _detect_layout(base_path: Path) -> tuple[str, str]: + """Determine spec-kit layout type and specs directory from repo structure.""" + if (base_path / "docs" / "specs").exists(): + return "modern", "docs/specs" + if (base_path / ".specify").exists(): + return "modern", "specs" + return "classic", "specs" + + def _detect_version(self, base_path: Path, *, skip_cli: bool) -> tuple[str | None, str | None]: + """Detect spec-kit version, returning (version, source).""" + if not skip_cli: + version = self._detect_version_from_cli(base_path) + if version: + return 
version, "cli" + version = self._detect_version_from_heuristics(base_path) + if version: + return version, "heuristic" + return None, None + + @staticmethod + def _extract_extension_fields( + ext_list: list[dict[str, Any]], + ) -> tuple[list[str] | None, dict[str, list[str]] | None]: + """Extract extension names and command maps from scanner output.""" + if not ext_list: + return None, None + names = [e["name"] for e in ext_list] + commands = {e["name"]: e.get("commands", []) for e in ext_list} + return names, commands + @beartype @require(require_repo_path_exists, "Repository path must exist") @require(require_repo_path_is_dir, "Repository path must be a directory") @@ -98,37 +171,30 @@ def get_capabilities(self, repo_path: Path, bridge_config: BridgeConfig | None = Returns: ToolCapabilities instance for Spec-Kit adapter """ - base_path = repo_path - if bridge_config and bridge_config.external_base_path: - base_path = bridge_config.external_base_path - - # Determine layout (classic vs modern) - specify_dir = base_path / ".specify" - docs_specs_dir = base_path / "docs" / "specs" - - if docs_specs_dir.exists(): - layout = "modern" - specs_dir_path = "docs/specs" - elif specify_dir.exists(): - layout = "modern" - specs_dir_path = "specs" - else: - layout = "classic" - specs_dir_path = "specs" + is_cross_repo = bridge_config is not None and bridge_config.external_base_path is not None + base_path: Path = bridge_config.external_base_path if is_cross_repo else repo_path # type: ignore[assignment] - # Check for constitution file (set has_custom_hooks flag) + layout, specs_dir_path = self._detect_layout(base_path) scanner = SpecKitScanner(base_path) has_constitution, _ = scanner.has_constitution() - has_custom_hooks = has_constitution + version, detected_version_source = self._detect_version(base_path, skip_cli=is_cross_repo) + extensions, extension_commands = self._extract_extension_fields(scanner.scan_extensions()) + preset_names = scanner.scan_presets() + hook_list = 
scanner.scan_hook_events() return ToolCapabilities( tool="speckit", - version=None, # Spec-Kit version not tracked in files + version=version, layout=layout, specs_dir=specs_dir_path, - has_external_config=bridge_config is not None and bridge_config.external_base_path is not None, - has_custom_hooks=has_custom_hooks, - supported_sync_modes=["bidirectional", "unidirectional"], # Spec-Kit supports bidirectional sync + has_external_config=is_cross_repo, + has_custom_hooks=has_constitution, + supported_sync_modes=["bidirectional", "unidirectional"], + extensions=extensions, + extension_commands=extension_commands, + presets=preset_names or None, + hook_events=hook_list or None, + detected_version_source=detected_version_source, ) @beartype diff --git a/src/specfact_cli/importers/speckit_scanner.py b/src/specfact_cli/importers/speckit_scanner.py index 5a1cb477..72871d3e 100644 --- a/src/specfact_cli/importers/speckit_scanner.py +++ b/src/specfact_cli/importers/speckit_scanner.py @@ -11,14 +11,20 @@ from __future__ import annotations +import json import re from contextlib import suppress from pathlib import Path -from typing import Any +from typing import Any, cast from beartype import beartype from icontract import ensure, require +from specfact_cli.common import get_bridge_logger + + +logger = get_bridge_logger(__name__) + def _spec_file_is_markdown(spec_file: Path) -> bool: return spec_file.suffix == ".md" @@ -687,6 +693,122 @@ def _parse_constitution_file(self, constitution_file: Path, memory_data: dict[st self._parse_constitution_principles(content, memory_data) self._parse_constitution_governance_constraints(content, memory_data) + def _load_extensionignore(self) -> set[str]: + """Load extension names to ignore from .extensionignore file.""" + extensionignore = self.repo_path / ".extensionignore" + if not extensionignore.exists(): + return set() + try: + content = extensionignore.read_text(encoding="utf-8") + return {line.strip() for line in content.splitlines() if 
line.strip() and not line.startswith("#")} + except OSError: + logger.debug("Failed to read .extensionignore, proceeding without ignore rules") + return set() + + def _parse_catalog_file(self, catalog_path: Path, ignored: set[str], extensions: list[dict[str, Any]]) -> None: + """Parse a single extension catalog JSON file and append results.""" + if not catalog_path.exists(): + return + try: + raw = json.loads(catalog_path.read_text(encoding="utf-8")) + parsed: Any = raw + items: list[Any] = ( + parsed if isinstance(parsed, list) else cast("dict[str, Any]", parsed).get("extensions", []) + ) + for item in items: + if not isinstance(item, dict): + continue + item_dict = cast("dict[str, Any]", item) + name: str = str(item_dict.get("name", "")) + if name and name not in ignored: + commands: list[str] = list(item_dict.get("commands") or []) + ext_version = item_dict.get("version") + extensions.append( + {"name": name, "commands": commands, "version": str(ext_version) if ext_version else None} + ) + except (json.JSONDecodeError, OSError) as exc: + logger.warning("Malformed extension catalog %s: %s", catalog_path, exc) + + @beartype + @ensure(lambda result: isinstance(result, list), "Must return list") + def scan_extensions(self) -> list[dict[str, Any]]: + """ + Scan for spec-kit extension catalogs and return parsed extension metadata. + + Parses extensions/catalog.community.json and extensions/catalog.core.json, + filtering out extensions listed in .extensionignore. + + Returns: + List of extension metadata dicts with at minimum 'name' and 'commands' keys. 
+ """ + extensions_dir = self.repo_path / "extensions" + if not extensions_dir.exists() or not extensions_dir.is_dir(): + return [] + + ignored = self._load_extensionignore() + extensions: list[dict[str, Any]] = [] + for catalog_name in ("catalog.core.json", "catalog.community.json"): + self._parse_catalog_file(extensions_dir / catalog_name, ignored, extensions) + + return extensions + + @beartype + @ensure(lambda result: isinstance(result, list), "Must return list") + def scan_presets(self) -> list[str]: + """ + Scan for spec-kit preset catalogs in the presets/ directory. + + Returns: + List of detected preset names. + """ + presets_dir = self.repo_path / "presets" + if not presets_dir.exists() or not presets_dir.is_dir(): + return [] + + preset_names: list[str] = [] + for item in presets_dir.iterdir(): + if item.is_file() and item.suffix == ".json": + try: + data: Any = json.loads(item.read_text(encoding="utf-8")) + name = ( + str(cast("dict[str, Any]", data).get("name", item.stem)) + if isinstance(data, dict) + else item.stem + ) + preset_names.append(name) + except (json.JSONDecodeError, OSError): + preset_names.append(item.stem) + elif item.is_dir(): + preset_names.append(item.name) + return preset_names + + @beartype + @ensure(lambda result: isinstance(result, list), "Must return list") + def scan_hook_events(self) -> list[str]: + """ + Detect before/after hook event wiring in .specify/prompts/ template files. + + Returns: + List of detected hook event types (e.g., ["before_task", "after_task"]). 
+ """ + prompts_dir = self.repo_path / ".specify" / "prompts" + if not prompts_dir.exists() or not prompts_dir.is_dir(): + return [] + + hook_events: set[str] = set() + hook_pattern = re.compile(r"(before|after)[_-](task|plan|specify|implement|constitution)", re.IGNORECASE) + + for template_file in prompts_dir.glob("*.md"): + try: + content = template_file.read_text(encoding="utf-8") + for match in hook_pattern.finditer(content): + event = f"{match.group(1).lower()}_{match.group(2).lower()}" + hook_events.add(event) + except OSError: + continue + + return sorted(hook_events) + @ensure(lambda result: isinstance(result, dict), "Must return dict") def parse_memory_files(self, memory_dir: Path) -> dict[str, Any]: """ diff --git a/src/specfact_cli/models/bridge.py b/src/specfact_cli/models/bridge.py index 72374599..b3896d29 100644 --- a/src/specfact_cli/models/bridge.py +++ b/src/specfact_cli/models/bridge.py @@ -255,7 +255,7 @@ def preset_speckit_classic(cls) -> BridgeConfig: } commands = { - "analyze": CommandMapping( + "specify": CommandMapping( trigger="/speckit.specify", input_ref="specification", ), @@ -264,6 +264,27 @@ def preset_speckit_classic(cls) -> BridgeConfig: input_ref="specification", output_ref="plan", ), + "tasks": CommandMapping( + trigger="/speckit.tasks", + input_ref="plan", + output_ref="tasks", + ), + "implement": CommandMapping( + trigger="/speckit.implement", + input_ref="tasks", + ), + "constitution": CommandMapping( + trigger="/speckit.constitution", + input_ref="constitution", + ), + "clarify": CommandMapping( + trigger="/speckit.clarify", + input_ref="specification", + ), + "analyze": CommandMapping( + trigger="/speckit.analyze", + input_ref="specification", + ), } templates = TemplateMapping( @@ -319,7 +340,7 @@ def preset_speckit_specify(cls) -> BridgeConfig: } commands = { - "analyze": CommandMapping( + "specify": CommandMapping( trigger="/speckit.specify", input_ref="specification", ), @@ -328,6 +349,27 @@ def preset_speckit_specify(cls) 
-> BridgeConfig: input_ref="specification", output_ref="plan", ), + "tasks": CommandMapping( + trigger="/speckit.tasks", + input_ref="plan", + output_ref="tasks", + ), + "implement": CommandMapping( + trigger="/speckit.implement", + input_ref="tasks", + ), + "constitution": CommandMapping( + trigger="/speckit.constitution", + input_ref="constitution", + ), + "clarify": CommandMapping( + trigger="/speckit.clarify", + input_ref="specification", + ), + "analyze": CommandMapping( + trigger="/speckit.analyze", + input_ref="specification", + ), } templates = TemplateMapping( @@ -380,7 +422,7 @@ def preset_speckit_modern(cls) -> BridgeConfig: } commands = { - "analyze": CommandMapping( + "specify": CommandMapping( trigger="/speckit.specify", input_ref="specification", ), @@ -389,6 +431,27 @@ def preset_speckit_modern(cls) -> BridgeConfig: input_ref="specification", output_ref="plan", ), + "tasks": CommandMapping( + trigger="/speckit.tasks", + input_ref="plan", + output_ref="tasks", + ), + "implement": CommandMapping( + trigger="/speckit.implement", + input_ref="tasks", + ), + "constitution": CommandMapping( + trigger="/speckit.constitution", + input_ref="constitution", + ), + "clarify": CommandMapping( + trigger="/speckit.clarify", + input_ref="specification", + ), + "analyze": CommandMapping( + trigger="/speckit.analyze", + input_ref="specification", + ), } templates = TemplateMapping( diff --git a/src/specfact_cli/models/capabilities.py b/src/specfact_cli/models/capabilities.py index 26b7475d..9b7bce7d 100644 --- a/src/specfact_cli/models/capabilities.py +++ b/src/specfact_cli/models/capabilities.py @@ -18,3 +18,9 @@ class ToolCapabilities: supported_sync_modes: list[str] | None = ( None # Supported sync modes (e.g., ["bidirectional", "unidirectional", "read-only", "export-only"]) ) + # Spec-Kit v0.4.x alignment fields + extensions: list[str] | None = None # Detected extension names (e.g., ["reconcile", "sync", "verify"]) + extension_commands: dict[str, list[str]] | 
None = None # Extension name → provided commands + presets: list[str] | None = None # Active preset names + hook_events: list[str] | None = None # Detected hook event types (e.g., ["before_task", "after_task"]) + detected_version_source: str | None = None # How version was detected: "cli", "heuristic", or None diff --git a/src/specfact_cli/modules/init/module-package.yaml b/src/specfact_cli/modules/init/module-package.yaml index 558eab8f..966023a6 100644 --- a/src/specfact_cli/modules/init/module-package.yaml +++ b/src/specfact_cli/modules/init/module-package.yaml @@ -1,5 +1,5 @@ name: init -version: 0.1.18 +version: 0.1.19 commands: - init category: core @@ -17,5 +17,5 @@ publisher: description: Initialize SpecFact workspace and bootstrap local configuration. license: Apache-2.0 integrity: - checksum: sha256:218801ddd11b02e90e386a3019685add0d14a9a09d246ef958c05f53c9b46a72 - signature: o9QdwF5+ASt8dJy5D38PgMy7pysqZUJqOaDHHjcRPXoI1dGpeSyOKjJOOtboqoH9qN2a+4nhZX6S5NGp8hlPCA== + checksum: sha256:a0ca0fb136f278a11a113be78047c3c7037de9a393c27f8e677a26c4ab2ba659 + signature: r3czsyG/tinaxMTd/lNkhjXQqRUU0ecLywXt1iDWPO0QuExVM+msp5tuAud03QPnuCsMXNdyBWWSDBovPeY+Aw== diff --git a/src/specfact_cli/modules/init/src/commands.py b/src/specfact_cli/modules/init/src/commands.py index 814f2ce4..6ac7f90a 100644 --- a/src/specfact_cli/modules/init/src/commands.py +++ b/src/specfact_cli/modules/init/src/commands.py @@ -38,7 +38,6 @@ discover_prompt_sources_catalog, discover_prompt_template_files, expected_ide_prompt_export_paths, - find_package_resources_path, load_ide_prompt_export_source_ids, write_ide_prompt_export_state, ) @@ -310,8 +309,8 @@ def _select_module_ids_interactive(action: str, modules_list: list[dict[str, Any def _resolve_templates_dir(repo_path: Path) -> Path | None: - """Resolve templates directory from repo checkout or installed package.""" - prompt_files = discover_prompt_template_files(repo_path, include_package_fallback=False) + """Resolve a representative 
templates directory from installed modules or a dev repo checkout.""" + prompt_files = discover_prompt_template_files(repo_path, include_package_fallback=True) if prompt_files: return prompt_files[0].parent @@ -319,7 +318,7 @@ def _resolve_templates_dir(repo_path: Path) -> Path | None: if dev_templates_dir.exists(): return dev_templates_dir - return find_package_resources_path("specfact_cli", "resources/prompts") + return None def _audit_prompt_installation(repo_path: Path) -> None: diff --git a/src/specfact_cli/utils/ide_setup.py b/src/specfact_cli/utils/ide_setup.py index 60497394..0bcfed7f 100644 --- a/src/specfact_cli/utils/ide_setup.py +++ b/src/specfact_cli/utils/ide_setup.py @@ -259,8 +259,10 @@ def discover_prompt_sources_catalog( """ Build prompt templates grouped by owning source: ``core`` or a module id (``module-package.yaml`` name). - Core templates come from the repo checkout or the installed ``specfact_cli`` package. Module templates - are discovered from effective module roots (builtin, project, user, marketplace, custom). + Core templates may come from a repo checkout under ``resources/prompts`` or the installed package when + present; workflow prompts are normally provided by bundle modules under ``resources/prompts`` at the + module root. Module templates are discovered from effective module roots (builtin, project, user, + marketplace, custom). When a module ships a template with the same source filename as core (e.g. ``specfact.01-import.md``), the module copy wins: core does not list that basename so exports stay single-sourced. 
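The "module copy wins" precedence in the docstring above can be sketched as a basename-keyed merge. This is a minimal illustration only — `merge_prompt_sources`, `core_files`, and `module_files` are hypothetical names, not the real `discover_prompt_sources_catalog` helpers:

```python
from pathlib import Path


def merge_prompt_sources(core_files: list[Path], module_files: list[Path]) -> dict[str, Path]:
    """Merge prompt templates by basename; module copies shadow core copies."""
    merged: dict[str, Path] = {p.name: p for p in core_files}
    # A module template with the same source filename overrides the core one,
    # so each basename is exported from exactly one source.
    merged.update({p.name: p for p in module_files})
    return merged


# Example: the module's copy of specfact.01-import.md wins over core's.
core = [Path("resources/prompts/specfact.01-import.md")]
mods = [Path("modules/init/resources/prompts/specfact.01-import.md")]
result = merge_prompt_sources(core, mods)
```

Because the merge is keyed on `Path.name`, exports stay single-sourced regardless of how many roots contribute templates.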
diff --git a/src/specfact_cli/utils/startup_checks.py b/src/specfact_cli/utils/startup_checks.py index dfe8546c..c90869ff 100644 --- a/src/specfact_cli/utils/startup_checks.py +++ b/src/specfact_cli/utils/startup_checks.py @@ -24,7 +24,7 @@ from specfact_cli import __version__ from specfact_cli.registry.module_installer import get_outdated_or_missing_bundled_modules from specfact_cli.utils.contract_predicates import file_path_exists, optional_repo_path_exists -from specfact_cli.utils.ide_setup import IDE_CONFIG, detect_ide, find_package_resources_path +from specfact_cli.utils.ide_setup import IDE_CONFIG, detect_ide, discover_prompt_template_files from specfact_cli.utils.metadata import ( get_last_checked_version, get_last_module_freshness_check_timestamp, @@ -49,6 +49,7 @@ class TemplateCheckResult(NamedTuple): missing_templates: list[str] outdated_templates: list[str] ide_dir: Path | None + sources_available: bool = True class VersionCheckResult(NamedTuple): @@ -92,17 +93,10 @@ def calculate_file_hash(file_path: Path) -> str: return sha256_hash.hexdigest() -def _resolve_templates_dir(repo_path: Path) -> Path | None: - templates_dir = find_package_resources_path("specfact_cli", "resources/prompts") - if templates_dir is not None: - return templates_dir - repo_root = repo_path - while repo_root.parent != repo_root: - dev_templates = repo_root / "resources" / "prompts" - if dev_templates.exists(): - return dev_templates - repo_root = repo_root.parent - return None +def _template_sources_by_basename(repo_path: Path) -> dict[str, Path]: + """Map specfact*.md basename -> path for drift checks (installed modules and optional dev repo).""" + files = discover_prompt_template_files(repo_path, include_package_fallback=True) + return {p.name: p for p in files} def _expected_ide_template_filenames(format_type: str) -> list[str]: @@ -135,20 +129,20 @@ def _find_ide_exported_prompt_file(ide_dir: Path, basename: str) -> Path | None: def _scan_ide_template_drift( ide_dir: Path, - 
templates_dir: Path, + source_by_basename: dict[str, Path], expected_files: list[str], ) -> tuple[list[str], list[str]]: missing_templates: list[str] = [] outdated_templates: list[str] = [] for expected_file in expected_files: - ide_file = _find_ide_exported_prompt_file(ide_dir, expected_file) source_template_name = expected_file.replace(".prompt.md", ".md").replace(".toml", ".md") - source_file = templates_dir / source_template_name + source_file = source_by_basename.get(source_template_name) + if source_file is None or not source_file.exists(): + continue + ide_file = _find_ide_exported_prompt_file(ide_dir, expected_file) if ide_file is None: missing_templates.append(expected_file) continue - if not source_file.exists(): - continue with contextlib.suppress(Exception): source_mtime = source_file.stat().st_mtime ide_mtime = ide_file.stat().st_mtime @@ -167,7 +161,9 @@ def check_ide_templates(repo_path: Path | None = None) -> TemplateCheckResult | repo_path: Repository path (default: current directory) Returns: - TemplateCheckResult if IDE detected and templates found, None otherwise + ``TemplateCheckResult`` when an IDE export directory exists (``sources_available`` is False + when no prompt templates are discoverable). ``None`` when IDE detection fails or the IDE + folder is missing. 
""" if repo_path is None: repo_path = Path.cwd() @@ -188,13 +184,20 @@ def check_ide_templates(repo_path: Path | None = None) -> TemplateCheckResult | if not ide_dir.exists(): return None - templates_dir = _resolve_templates_dir(repo_path) - if templates_dir is None: - return None + source_by_basename = _template_sources_by_basename(repo_path) + if not source_by_basename: + return TemplateCheckResult( + ide=detected_ide, + templates_outdated=False, + missing_templates=[], + outdated_templates=[], + ide_dir=ide_dir, + sources_available=False, + ) format_type = str(config["format"]) expected_files = _expected_ide_template_filenames(format_type) - missing_templates, outdated_templates = _scan_ide_template_drift(ide_dir, templates_dir, expected_files) + missing_templates, outdated_templates = _scan_ide_template_drift(ide_dir, source_by_basename, expected_files) templates_outdated = len(outdated_templates) > 0 or len(missing_templates) > 0 @@ -203,7 +206,8 @@ def check_ide_templates(repo_path: Path | None = None) -> TemplateCheckResult | templates_outdated=templates_outdated, missing_templates=missing_templates, outdated_templates=outdated_templates, - ide_dir=ide_dir if ide_dir.exists() else None, + ide_dir=ide_dir, + sources_available=True, ) @@ -390,13 +394,19 @@ def _startup_progress_task(progress: Progress, show_progress: bool, label: str): return progress.add_task(label, total=None) if show_progress else None -def _run_startup_templates_segment(progress: Progress, repo_path: Path, show_progress: bool) -> None: +def _run_startup_templates_segment(progress: Progress, repo_path: Path, show_progress: bool) -> bool: + """Return True when installable prompt sources existed so drift could be evaluated.""" task = _startup_progress_task(progress, show_progress, "[cyan]Checking IDE templates...[/cyan]") template_result = check_ide_templates(repo_path) if task: progress.update(task, description="[green]✓[/green] Checked IDE templates") - if template_result and 
template_result.templates_outdated: + if template_result is None: + return False + if not template_result.sources_available: + return False + if template_result.templates_outdated: _print_template_outdated_panel(template_result) + return True def _run_startup_version_segment(progress: Progress, show_progress: bool) -> None: @@ -422,7 +432,9 @@ def _run_startup_progress_block( should_check_templates: bool, should_check_version: bool, should_check_modules: bool, -) -> None: +) -> bool | None: + """Return whether template drift had sources (None if the template segment did not run).""" + template_sources_available: bool | None = None with Progress( SpinnerColumn(), TextColumn("[progress.description]{task.description}"), @@ -430,22 +442,24 @@ def _run_startup_progress_block( transient=True, ) as progress: if should_check_templates: - _run_startup_templates_segment(progress, repo_path, show_progress) + template_sources_available = _run_startup_templates_segment(progress, repo_path, show_progress) if should_check_version: _run_startup_version_segment(progress, show_progress) if should_check_modules: _run_startup_modules_segment(progress, repo_path, show_progress) + return template_sources_available def _flush_startup_metadata( should_check_templates: bool, should_check_version: bool, should_check_modules: bool, + template_sources_available: bool | None = None, ) -> None: from datetime import datetime metadata_updates: dict[str, Any] = {} - if should_check_templates or should_check_version: + if should_check_templates and template_sources_available is True: metadata_updates["last_checked_version"] = __version__ if should_check_version: metadata_updates["last_version_check_timestamp"] = datetime.now(UTC).isoformat() @@ -493,11 +507,18 @@ def print_startup_checks( last_module_freshness_check_timestamp = get_last_module_freshness_check_timestamp() should_check_modules = should_check_templates or is_version_check_needed(last_module_freshness_check_timestamp) - 
_run_startup_progress_block( - repo_path, - show_progress, + template_sources_available: bool | None = None + if should_check_templates or should_check_version or should_check_modules: + template_sources_available = _run_startup_progress_block( + repo_path, + show_progress, + should_check_templates, + should_check_version, + should_check_modules, + ) + _flush_startup_metadata( should_check_templates, should_check_version, should_check_modules, + template_sources_available, ) - _flush_startup_metadata(should_check_templates, should_check_version, should_check_modules) diff --git a/tests/conftest.py b/tests/conftest.py index bc5008a3..44b4ee65 100644 --- a/tests/conftest.py +++ b/tests/conftest.py @@ -104,7 +104,6 @@ def _resolve_modules_repo_root() -> Path: "tests/unit/commands/test_backlog_daily.py", "tests/unit/commands/test_project_cmd.py", # Legacy topology and extracted-module path assumptions retired from core. - "tests/unit/prompts/test_prompt_validation.py", "tests/unit/specfact_cli/test_module_migration_compatibility.py", ) diff --git a/tests/integration/utils/test_startup_checks_integration.py b/tests/integration/utils/test_startup_checks_integration.py index a63d22b8..c11471dc 100644 --- a/tests/integration/utils/test_startup_checks_integration.py +++ b/tests/integration/utils/test_startup_checks_integration.py @@ -7,7 +7,7 @@ import pytest -from specfact_cli.utils.startup_checks import print_startup_checks +from specfact_cli.utils.startup_checks import VersionCheckResult, print_startup_checks class TestStartupChecksIntegration: @@ -119,13 +119,34 @@ def test_startup_checks_real_template_check(self, tmp_path: Path): templates_dir.mkdir(parents=True) (templates_dir / "specfact.01-import.md").write_text("# Import") + def _fake_discover(_repo_path, include_package_fallback=True): + if not include_package_fallback: + return [] + return sorted(templates_dir.glob("specfact*.md")) + with ( + patch("specfact_cli.utils.startup_checks.get_last_checked_version", 
return_value=None), + patch("specfact_cli.utils.startup_checks.get_last_version_check_timestamp", return_value=None), + patch("specfact_cli.utils.startup_checks.update_metadata"), + patch( + "specfact_cli.utils.startup_checks.check_pypi_version", + return_value=VersionCheckResult( + current_version="1.0.0", + latest_version="1.0.0", + update_available=False, + update_type=None, + error=None, + ), + ), patch("specfact_cli.utils.startup_checks.detect_ide", return_value="cursor"), patch( "specfact_cli.utils.startup_checks.IDE_CONFIG", {"cursor": {"folder": ".cursor/commands", "format": "md"}}, ), - patch("specfact_cli.utils.startup_checks.find_package_resources_path", return_value=templates_dir), + patch( + "specfact_cli.utils.startup_checks.discover_prompt_template_files", + side_effect=_fake_discover, + ) as mock_discover, patch( "specfact_cli.utils.ide_setup.SPECFACT_COMMANDS", ["specfact.01-import"], @@ -135,3 +156,4 @@ def test_startup_checks_real_template_check(self, tmp_path: Path): # Function should complete without error assert result is None + mock_discover.assert_called_with(tmp_path, include_package_fallback=True) diff --git a/tests/unit/adapters/test_speckit.py b/tests/unit/adapters/test_speckit.py index 3f17b84c..d29f51e8 100644 --- a/tests/unit/adapters/test_speckit.py +++ b/tests/unit/adapters/test_speckit.py @@ -2,7 +2,11 @@ from __future__ import annotations +import json +import subprocess from pathlib import Path +from types import SimpleNamespace +from unittest.mock import patch import pytest @@ -448,3 +452,188 @@ def test_export_bundle(self, speckit_adapter: SpecKitAdapter, speckit_repo_class assert isinstance(count, int) assert count >= 0 + + +class TestVersionDetection: + """Tests for spec-kit version detection methods.""" + + def test_detect_version_from_heuristics_presets(self, tmp_path: Path) -> None: + """Detects >=0.3.0 when presets/ directory exists.""" + (tmp_path / "presets").mkdir() + adapter = SpecKitAdapter() + assert 
adapter._detect_version_from_heuristics(tmp_path) == ">=0.3.0" + + def test_detect_version_from_heuristics_extensions(self, tmp_path: Path) -> None: + """Detects >=0.2.0 when extensions/ directory exists (no presets).""" + (tmp_path / "extensions").mkdir() + adapter = SpecKitAdapter() + assert adapter._detect_version_from_heuristics(tmp_path) == ">=0.2.0" + + def test_detect_version_from_heuristics_specify(self, tmp_path: Path) -> None: + """Detects >=0.1.0 when .specify/ directory exists (no extensions/presets).""" + (tmp_path / ".specify").mkdir() + adapter = SpecKitAdapter() + assert adapter._detect_version_from_heuristics(tmp_path) == ">=0.1.0" + + def test_detect_version_from_heuristics_none(self, tmp_path: Path) -> None: + """Returns None when no spec-kit directories exist.""" + adapter = SpecKitAdapter() + assert adapter._detect_version_from_heuristics(tmp_path) is None + + def test_detect_version_from_heuristics_priority(self, tmp_path: Path) -> None: + """presets/ takes priority over extensions/ over .specify/.""" + (tmp_path / ".specify").mkdir() + (tmp_path / "extensions").mkdir() + (tmp_path / "presets").mkdir() + adapter = SpecKitAdapter() + assert adapter._detect_version_from_heuristics(tmp_path) == ">=0.3.0" + + def test_detect_version_from_cli_no_binary(self, tmp_path: Path) -> None: + """Returns None when specify binary is not on PATH.""" + adapter = SpecKitAdapter() + with patch("specfact_cli.adapters.speckit.shutil.which", return_value=None): + assert adapter._detect_version_from_cli(tmp_path) is None + + def test_detect_version_from_cli_success(self, tmp_path: Path) -> None: + """Parses version from successful specify --version output.""" + adapter = SpecKitAdapter() + mock_result = SimpleNamespace(returncode=0, stdout="specify v0.4.3\n") + with ( + patch("specfact_cli.adapters.speckit.shutil.which", return_value="/usr/bin/specify"), + patch("specfact_cli.adapters.speckit.subprocess.run", return_value=mock_result), + ): + assert
adapter._detect_version_from_cli(tmp_path) == "0.4.3" + + def test_detect_version_from_cli_bad_output(self, tmp_path: Path) -> None: + """Returns None when specify --version output has no version pattern.""" + adapter = SpecKitAdapter() + mock_result = SimpleNamespace(returncode=0, stdout="unknown\n") + with ( + patch("specfact_cli.adapters.speckit.shutil.which", return_value="/usr/bin/specify"), + patch("specfact_cli.adapters.speckit.subprocess.run", return_value=mock_result), + ): + assert adapter._detect_version_from_cli(tmp_path) is None + + def test_detect_version_from_cli_timeout(self, tmp_path: Path) -> None: + """Returns None when specify --version times out.""" + adapter = SpecKitAdapter() + with ( + patch("specfact_cli.adapters.speckit.shutil.which", return_value="/usr/bin/specify"), + patch( + "specfact_cli.adapters.speckit.subprocess.run", + side_effect=subprocess.TimeoutExpired(cmd="specify", timeout=5), + ), + ): + assert adapter._detect_version_from_cli(tmp_path) is None + + def test_detect_version_from_cli_oserror(self, tmp_path: Path) -> None: + """Returns None when specify --version raises OSError.""" + adapter = SpecKitAdapter() + with ( + patch("specfact_cli.adapters.speckit.shutil.which", return_value="/usr/bin/specify"), + patch("specfact_cli.adapters.speckit.subprocess.run", side_effect=OSError("no such file")), + ): + assert adapter._detect_version_from_cli(tmp_path) is None + + +class TestGetCapabilitiesV04: + """Integration tests for get_capabilities() with v0.4.x repo structures.""" + + @pytest.fixture + def v04_repo(self, tmp_path: Path) -> Path: + """Create a v0.4.x Spec-Kit repo with extensions, presets, and hooks.""" + # Spec-Kit directories + (tmp_path / ".specify" / "memory").mkdir(parents=True) + (tmp_path / ".specify" / "memory" / "constitution.md").write_text("# Constitution\n") + (tmp_path / "specs" / "001-auth").mkdir(parents=True) + (tmp_path / "specs" / "001-auth" / "spec.md").write_text("# Auth\n") + + # Extensions + ext_dir 
= tmp_path / "extensions" + ext_dir.mkdir() + catalog = [{"name": "reconcile", "commands": ["reconcile"]}, {"name": "verify", "commands": ["verify"]}] + (ext_dir / "catalog.community.json").write_text(json.dumps(catalog)) + + # Presets + presets_dir = tmp_path / "presets" + presets_dir.mkdir() + (presets_dir / "minimal.json").write_text(json.dumps({"name": "minimal"})) + + # Hook templates + prompts_dir = tmp_path / ".specify" / "prompts" + prompts_dir.mkdir(parents=True) + (prompts_dir / "tasks.md").write_text("Run before_task validation.\nafter_task cleanup.\n") + + return tmp_path + + def test_extensions_populated(self, v04_repo: Path) -> None: + """Extensions are detected and populated in capabilities.""" + adapter = SpecKitAdapter() + caps = adapter.get_capabilities(v04_repo) + + assert caps.extensions is not None + assert "reconcile" in caps.extensions + assert "verify" in caps.extensions + + def test_extension_commands_populated(self, v04_repo: Path) -> None: + """Extension commands dict is populated.""" + adapter = SpecKitAdapter() + caps = adapter.get_capabilities(v04_repo) + + assert caps.extension_commands is not None + assert caps.extension_commands["reconcile"] == ["reconcile"] + + def test_presets_populated(self, v04_repo: Path) -> None: + """Presets are detected and populated.""" + adapter = SpecKitAdapter() + caps = adapter.get_capabilities(v04_repo) + + assert caps.presets is not None + assert "minimal" in caps.presets + + def test_hook_events_populated(self, v04_repo: Path) -> None: + """Hook events are detected from prompt templates.""" + adapter = SpecKitAdapter() + caps = adapter.get_capabilities(v04_repo) + + assert caps.hook_events is not None + assert "before_task" in caps.hook_events + assert "after_task" in caps.hook_events + + def test_version_heuristic_with_presets(self, v04_repo: Path) -> None: + """Version is detected via heuristics when CLI not available.""" + adapter = SpecKitAdapter() + with 
patch("specfact_cli.adapters.speckit.shutil.which", return_value=None): + caps = adapter.get_capabilities(v04_repo) + + assert caps.version == ">=0.3.0" + assert caps.detected_version_source == "heuristic" + + def test_cross_repo_skips_cli_probe(self, tmp_path: Path) -> None: + """Cross-repo scenarios skip CLI version detection.""" + external = tmp_path / "external" + (external / "specs" / "001").mkdir(parents=True) + (external / "specs" / "001" / "spec.md").write_text("# Spec\n") + + bridge_config = BridgeConfig.preset_speckit_classic() + bridge_config.external_base_path = external + + adapter = SpecKitAdapter() + with patch.object(adapter, "_detect_version_from_cli") as mock_cli: + caps = adapter.get_capabilities(tmp_path, bridge_config) + mock_cli.assert_not_called() + + assert caps.has_external_config is True + + def test_legacy_repo_new_fields_none(self, tmp_path: Path) -> None: + """Legacy repo (no extensions/presets/hooks) has all new fields as None.""" + (tmp_path / "specs" / "001").mkdir(parents=True) + (tmp_path / "specs" / "001" / "spec.md").write_text("# Spec\n") + + adapter = SpecKitAdapter() + caps = adapter.get_capabilities(tmp_path) + + assert caps.extensions is None + assert caps.extension_commands is None + assert caps.presets is None + assert caps.hook_events is None diff --git a/tests/unit/importers/test_speckit_scanner.py b/tests/unit/importers/test_speckit_scanner.py index 290ac760..f802ef94 100644 --- a/tests/unit/importers/test_speckit_scanner.py +++ b/tests/unit/importers/test_speckit_scanner.py @@ -7,6 +7,7 @@ from __future__ import annotations +import json from pathlib import Path from specfact_cli.importers.speckit_scanner import SpecKitScanner @@ -128,3 +129,177 @@ def test_parse_memory_files_with_constitution(self, tmp_path: Path) -> None: assert memory_data["constitution"] is not None assert memory_data["version"] == "1.0.0" assert len(memory_data["principles"]) >= 1 + + +class TestScanExtensions: + """Tests for scan_extensions() — 
v0.4.x extension catalog detection.""" + + def test_no_extensions_dir(self, tmp_path: Path) -> None: + """Returns empty list when extensions/ does not exist.""" + scanner = SpecKitScanner(tmp_path) + assert scanner.scan_extensions() == [] + + def test_empty_extensions_dir(self, tmp_path: Path) -> None: + """Returns empty list when extensions/ exists but has no catalogs.""" + (tmp_path / "extensions").mkdir() + scanner = SpecKitScanner(tmp_path) + assert scanner.scan_extensions() == [] + + def test_community_catalog(self, tmp_path: Path) -> None: + """Parses catalog.community.json and returns extension metadata.""" + ext_dir = tmp_path / "extensions" + ext_dir.mkdir() + catalog = [ + {"name": "reconcile", "commands": ["reconcile", "diff"], "version": "1.0.0"}, + {"name": "sync", "commands": ["push", "pull"]}, + ] + (ext_dir / "catalog.community.json").write_text(json.dumps(catalog)) + + scanner = SpecKitScanner(tmp_path) + result = scanner.scan_extensions() + + assert len(result) == 2 + assert result[0]["name"] == "reconcile" + assert result[0]["commands"] == ["reconcile", "diff"] + assert result[1]["name"] == "sync" + + def test_catalog_with_extensions_key(self, tmp_path: Path) -> None: + """Parses catalog where extensions are under an 'extensions' key.""" + ext_dir = tmp_path / "extensions" + ext_dir.mkdir() + catalog = {"extensions": [{"name": "verify", "commands": ["verify"]}]} + (ext_dir / "catalog.core.json").write_text(json.dumps(catalog)) + + scanner = SpecKitScanner(tmp_path) + result = scanner.scan_extensions() + + assert len(result) == 1 + assert result[0]["name"] == "verify" + + def test_extensionignore_filtering(self, tmp_path: Path) -> None: + """Extensions listed in .extensionignore are excluded.""" + ext_dir = tmp_path / "extensions" + ext_dir.mkdir() + catalog = [ + {"name": "reconcile", "commands": []}, + {"name": "deprecated-ext", "commands": []}, + ] + (ext_dir / "catalog.community.json").write_text(json.dumps(catalog)) + (tmp_path / 
".extensionignore").write_text("deprecated-ext\n# comment line\n") + + scanner = SpecKitScanner(tmp_path) + result = scanner.scan_extensions() + + assert len(result) == 1 + assert result[0]["name"] == "reconcile" + + def test_malformed_json_catalog(self, tmp_path: Path) -> None: + """Malformed JSON catalog is skipped with warning, not crash.""" + ext_dir = tmp_path / "extensions" + ext_dir.mkdir() + (ext_dir / "catalog.community.json").write_text("{bad json") + + scanner = SpecKitScanner(tmp_path) + result = scanner.scan_extensions() + assert result == [] + + def test_both_catalogs_merged(self, tmp_path: Path) -> None: + """Extensions from both core and community catalogs are merged.""" + ext_dir = tmp_path / "extensions" + ext_dir.mkdir() + (ext_dir / "catalog.core.json").write_text(json.dumps([{"name": "core-ext", "commands": []}])) + (ext_dir / "catalog.community.json").write_text(json.dumps([{"name": "comm-ext", "commands": []}])) + + scanner = SpecKitScanner(tmp_path) + result = scanner.scan_extensions() + + names = [e["name"] for e in result] + assert "core-ext" in names + assert "comm-ext" in names + + +class TestScanPresets: + """Tests for scan_presets() — v0.4.x preset catalog detection.""" + + def test_no_presets_dir(self, tmp_path: Path) -> None: + """Returns empty list when presets/ does not exist.""" + scanner = SpecKitScanner(tmp_path) + assert scanner.scan_presets() == [] + + def test_json_presets(self, tmp_path: Path) -> None: + """Detects preset names from JSON files.""" + presets_dir = tmp_path / "presets" + presets_dir.mkdir() + (presets_dir / "minimal.json").write_text(json.dumps({"name": "minimal"})) + (presets_dir / "full.json").write_text(json.dumps({"name": "full-stack"})) + + scanner = SpecKitScanner(tmp_path) + result = scanner.scan_presets() + + assert "minimal" in result + assert "full-stack" in result + + def test_directory_presets(self, tmp_path: Path) -> None: + """Detects preset names from subdirectories.""" + presets_dir = tmp_path 
/ "presets"
+        presets_dir.mkdir()
+        (presets_dir / "my-preset").mkdir()
+
+        scanner = SpecKitScanner(tmp_path)
+        result = scanner.scan_presets()
+
+        assert "my-preset" in result
+
+    def test_malformed_json_uses_stem(self, tmp_path: Path) -> None:
+        """Falls back to filename stem when JSON is malformed."""
+        presets_dir = tmp_path / "presets"
+        presets_dir.mkdir()
+        (presets_dir / "broken.json").write_text("{bad")
+
+        scanner = SpecKitScanner(tmp_path)
+        result = scanner.scan_presets()
+
+        assert "broken" in result
+
+
+class TestScanHookEvents:
+    """Tests for scan_hook_events() — v0.4.x hook event detection."""
+
+    def test_no_prompts_dir(self, tmp_path: Path) -> None:
+        """Returns empty list when .specify/prompts/ does not exist."""
+        scanner = SpecKitScanner(tmp_path)
+        assert scanner.scan_hook_events() == []
+
+    def test_detects_hook_patterns(self, tmp_path: Path) -> None:
+        """Detects before/after hook patterns in prompt templates."""
+        prompts_dir = tmp_path / ".specify" / "prompts"
+        prompts_dir.mkdir(parents=True)
+        (prompts_dir / "tasks.md").write_text("Run before_task validation.\nThen after_task cleanup.\n")
+        (prompts_dir / "plan.md").write_text("Execute before_plan checks.\n")
+
+        scanner = SpecKitScanner(tmp_path)
+        result = scanner.scan_hook_events()
+
+        assert "before_task" in result
+        assert "after_task" in result
+        assert "before_plan" in result
+
+    def test_no_hook_patterns(self, tmp_path: Path) -> None:
+        """Returns empty list when no hook patterns found in templates."""
+        prompts_dir = tmp_path / ".specify" / "prompts"
+        prompts_dir.mkdir(parents=True)
+        (prompts_dir / "tasks.md").write_text("Normal content without hooks.\n")
+
+        scanner = SpecKitScanner(tmp_path)
+        assert scanner.scan_hook_events() == []
+
+    def test_results_are_sorted(self, tmp_path: Path) -> None:
+        """Hook events are returned in sorted order."""
+        prompts_dir = tmp_path / ".specify" / "prompts"
+        prompts_dir.mkdir(parents=True)
+        (prompts_dir / "all.md").write_text("after_task before_task after_plan before_plan")
+
+        scanner = SpecKitScanner(tmp_path)
+        result = scanner.scan_hook_events()
+
+        assert result == sorted(result)
diff --git a/tests/unit/models/test_bridge.py b/tests/unit/models/test_bridge.py
index 768b73eb..600435a5 100644
--- a/tests/unit/models/test_bridge.py
+++ b/tests/unit/models/test_bridge.py
@@ -328,10 +328,22 @@ def test_preset_speckit_classic(self):
         assert "plan" in config.artifacts
         assert "tasks" in config.artifacts
         assert "contracts" in config.artifacts
-        assert len(config.commands) == 2
+        assert len(config.commands) == 7
         assert config.templates is not None
         assert config.templates.root_dir == ".specify/prompts"

+    def test_preset_speckit_specify(self):
+        """Test Spec-Kit specify (canonical) preset."""
+        config = BridgeConfig.preset_speckit_specify()
+        assert config.adapter == AdapterType.SPECKIT
+        assert "specification" in config.artifacts
+        assert config.artifacts["specification"].path_pattern == ".specify/specs/{feature_id}/spec.md"
+        assert "plan" in config.artifacts
+        assert "tasks" in config.artifacts
+        assert "contracts" in config.artifacts
+        assert len(config.commands) == 7
+        assert config.templates is not None
+
     def test_preset_speckit_modern(self):
         """Test Spec-Kit modern preset."""
         config = BridgeConfig.preset_speckit_modern()
@@ -341,7 +353,7 @@ def test_preset_speckit_modern(self):
         assert "plan" in config.artifacts
         assert "tasks" in config.artifacts
         assert "contracts" in config.artifacts
-        assert len(config.commands) == 2
+        assert len(config.commands) == 7
         assert config.templates is not None

     def test_preset_generic_markdown(self):
@@ -400,3 +412,35 @@ def test_preset_openspec_resolve_path_external_base(self, tmp_path):
         context = {"feature_id": "001-auth"}
         resolved = config.resolve_path("specification", context, base_path=tmp_path)
         assert resolved == external_path / "openspec" / "specs" / "001-auth" / "spec.md"
+
+    @pytest.mark.parametrize(
+        "preset_method",
+        ["preset_speckit_classic", "preset_speckit_specify", "preset_speckit_modern"],
+    )
+    def test_speckit_presets_have_all_7_commands(self, preset_method):
+        """Test that all Spec-Kit presets contain the full 7-command set."""
+        config = getattr(BridgeConfig, preset_method)()
+        expected_commands = {"specify", "plan", "tasks", "implement", "constitution", "clarify", "analyze"}
+        assert set(config.commands.keys()) == expected_commands
+
+    @pytest.mark.parametrize(
+        "preset_method",
+        ["preset_speckit_classic", "preset_speckit_specify", "preset_speckit_modern"],
+    )
+    def test_speckit_presets_command_triggers(self, preset_method):
+        """Test that all Spec-Kit preset command triggers match /speckit.* pattern."""
+        config = getattr(BridgeConfig, preset_method)()
+        for key, cmd in config.commands.items():
+            assert cmd.trigger.startswith("/speckit."), f"Command '{key}' trigger should start with /speckit."
+
+    @pytest.mark.parametrize(
+        "preset_method",
+        ["preset_speckit_classic", "preset_speckit_specify", "preset_speckit_modern"],
+    )
+    def test_speckit_presets_command_refs(self, preset_method):
+        """Test that Spec-Kit preset commands have correct input/output refs."""
+        config = getattr(BridgeConfig, preset_method)()
+        assert config.commands["plan"].output_ref == "plan"
+        assert config.commands["tasks"].output_ref == "tasks"
+        assert config.commands["specify"].input_ref == "specification"
+        assert config.commands["implement"].input_ref == "tasks"
diff --git a/tests/unit/models/test_capabilities.py b/tests/unit/models/test_capabilities.py
new file mode 100644
index 00000000..fac1d1ad
--- /dev/null
+++ b/tests/unit/models/test_capabilities.py
@@ -0,0 +1,74 @@
+"""Unit tests for ToolCapabilities model — v0.4.x alignment fields."""
+
+from specfact_cli.models.capabilities import ToolCapabilities
+
+
+class TestToolCapabilitiesV04Fields:
+    """Test v0.4.x alignment fields on ToolCapabilities."""
+
+    def test_default_new_fields_are_none(self) -> None:
+        """All v0.4.x fields default to None for backward compatibility."""
+        caps = ToolCapabilities(tool="speckit")
+        assert caps.extensions is None
+        assert caps.extension_commands is None
+        assert caps.presets is None
+        assert caps.hook_events is None
+        assert caps.detected_version_source is None
+
+    def test_construct_with_extensions(self) -> None:
+        """Extensions list is stored correctly."""
+        caps = ToolCapabilities(tool="speckit", extensions=["reconcile", "sync"])
+        assert caps.extensions == ["reconcile", "sync"]
+
+    def test_construct_with_extension_commands(self) -> None:
+        """Extension commands dict is stored correctly."""
+        cmds = {"reconcile": ["reconcile", "diff"], "sync": ["push", "pull"]}
+        caps = ToolCapabilities(tool="speckit", extension_commands=cmds)
+        assert caps.extension_commands == cmds
+
+    def test_construct_with_presets(self) -> None:
+        """Presets list is stored correctly."""
+        caps = ToolCapabilities(tool="speckit", presets=["minimal", "full"])
+        assert caps.presets == ["minimal", "full"]
+
+    def test_construct_with_hook_events(self) -> None:
+        """Hook events list is stored correctly."""
+        caps = ToolCapabilities(tool="speckit", hook_events=["before_task", "after_task"])
+        assert caps.hook_events == ["before_task", "after_task"]
+
+    def test_construct_with_detected_version_source(self) -> None:
+        """Detected version source is stored correctly."""
+        caps = ToolCapabilities(tool="speckit", version="0.4.3", detected_version_source="cli")
+        assert caps.detected_version_source == "cli"
+        assert caps.version == "0.4.3"
+
+    def test_construct_with_all_v04_fields(self) -> None:
+        """All v0.4.x fields can be set together."""
+        caps = ToolCapabilities(
+            tool="speckit",
+            version="0.4.3",
+            layout="modern",
+            extensions=["reconcile"],
+            extension_commands={"reconcile": ["reconcile"]},
+            presets=["minimal"],
+            hook_events=["before_task"],
+            detected_version_source="cli",
+        )
+        assert caps.extensions == ["reconcile"]
+        assert caps.presets == ["minimal"]
+        assert caps.hook_events == ["before_task"]
+        assert caps.detected_version_source == "cli"
+
+    def test_backward_compat_no_new_fields(self) -> None:
+        """Pre-v0.4.x construction still works without new fields."""
+        caps = ToolCapabilities(
+            tool="speckit",
+            version=None,
+            layout="classic",
+            specs_dir="specs",
+            has_external_config=False,
+            has_custom_hooks=False,
+            supported_sync_modes=["bidirectional"],
+        )
+        assert caps.tool == "speckit"
+        assert caps.extensions is None
diff --git a/tests/unit/prompts/test_prompt_validation.py b/tests/unit/prompts/test_prompt_validation.py
index a95e36e1..0bfbb46c 100644
--- a/tests/unit/prompts/test_prompt_validation.py
+++ b/tests/unit/prompts/test_prompt_validation.py
@@ -155,18 +155,65 @@ def test_validate_dual_stack_workflow(self, tmp_path: Path):
         validator = PromptValidator(prompt_file)
         assert validator.validate_dual_stack_workflow() is True

-    def test_validate_all_prompts(self):
-        """Test validating all prompts in resources/prompts."""
-        # Path from tests/unit/prompts/test_prompt_validation.py to resources/prompts
-        # tests/unit/prompts -> tests/unit -> tests -> root -> resources/prompts
-        prompts_dir = Path(__file__).parent.parent.parent.parent / "resources" / "prompts"
-        # Prompts directory should exist in the repository
-        assert prompts_dir.exists(), f"Prompts directory not found at {prompts_dir}"
+    def test_validate_all_prompts(self, tmp_path: Path):
+        """``validate_all_prompts`` runs over a directory of ``specfact.*.md`` templates."""
+        prompts_dir = tmp_path / "prompts"
+        prompts_dir.mkdir()
+        valid_content = """---
+description: Test prompt
+---
+
+# Test Prompt
+
+## User Input
+
+```text
+$ARGUMENTS
+```
+
+You **MUST** consider the user input before proceeding (if not empty).
+
+## Purpose
+
+Test purpose.
+
+## Parameters
+
+Test parameters.
+
+## Workflow
+
+Test workflow.
+
+## CLI Enforcement
+
+**ALWAYS execute CLI first**. Never modify `.specfact/` directly. Use CLI output as grounding.
+
+## Expected Output
+
+Test expected output.
+
+## Common Patterns
+
+Test common patterns.
+
+## Context
+
+Test context.
+"""
+        # Stem must not be in DUAL_STACK_COMMANDS / CLI_COMMANDS so structure-only template passes validate_all.
+        (prompts_dir / "specfact.99-good.md").write_text(valid_content, encoding="utf-8")
+        (prompts_dir / "specfact.98-invalid.md").write_text("# Broken\n\n## Goal\n\n", encoding="utf-8")

         results = validate_all_prompts(prompts_dir)
-        assert len(results) > 0
+        assert len(results) == 2
+
+        by_name = {r["prompt"]: r for r in results}
+        assert by_name["specfact.99-good"]["passed"] is True
+        assert by_name["specfact.99-good"]["errors"] == []
+        assert by_name["specfact.98-invalid"]["passed"] is False
+        assert len(by_name["specfact.98-invalid"]["errors"]) >= 1

-        # All prompts should pass basic validation
         for result in results:
             assert "prompt" in result
             assert "errors" in result
diff --git a/tests/unit/specfact_cli/registry/test_init_module_lifecycle_ux.py b/tests/unit/specfact_cli/registry/test_init_module_lifecycle_ux.py
index 1d4e4992..ab37ab17 100644
--- a/tests/unit/specfact_cli/registry/test_init_module_lifecycle_ux.py
+++ b/tests/unit/specfact_cli/registry/test_init_module_lifecycle_ux.py
@@ -122,18 +122,14 @@ def _fail_copy(*args, **kwargs):
     assert calls[0][:4] == ["pip", "install", "-U", "beartype>=0.22.4"]


-def test_resolve_templates_dir_uses_package_fallback_when_repo_templates_missing(tmp_path: Path, monkeypatch) -> None:
-    """Template resolution should fallback to package resource lookup for installed runtime parity."""
-    fallback_templates = tmp_path / "installed" / "resources" / "prompts"
-    fallback_templates.mkdir(parents=True)
-    monkeypatch.setattr(init_commands, "find_package_resources_path", lambda *_args: fallback_templates)
-    monkeypatch.setattr("importlib.resources.files", lambda *_args: (_ for _ in ()).throw(RuntimeError("boom")))
+def test_resolve_templates_dir_none_when_no_discoverable_prompts(tmp_path: Path, monkeypatch) -> None:
+    """Workflow prompts ship in bundles; without modules or dev repo prompts, resolution is None."""
     monkeypatch.setattr(
         init_commands,
         "discover_prompt_template_files",
-        lambda repo_path, include_package_fallback=False: [],
+        lambda repo_path, include_package_fallback=True: [],
     )
     resolved = init_commands._resolve_templates_dir(tmp_path)
-    assert resolved == fallback_templates
+    assert resolved is None
diff --git a/tests/unit/utils/test_startup_checks.py b/tests/unit/utils/test_startup_checks.py
index e05c1fd0..f31e4e7c 100644
--- a/tests/unit/utils/test_startup_checks.py
+++ b/tests/unit/utils/test_startup_checks.py
@@ -4,7 +4,7 @@
 import sys
 import time

-from datetime import UTC
+from datetime import UTC, datetime
 from pathlib import Path
 from unittest.mock import MagicMock, Mock, patch

@@ -95,10 +95,11 @@ def test_check_ide_templates_no_templates_dir(self, monkeypatch, tmp_path: Path)
                 "specfact_cli.utils.startup_checks.IDE_CONFIG",
                 {"cursor": {"folder": ".cursor/commands", "format": "md"}},
             ),
-            patch("specfact_cli.utils.startup_checks.find_package_resources_path", return_value=None),
+            patch("specfact_cli.utils.startup_checks.discover_prompt_template_files", return_value=[]),
         ):
             result = check_ide_templates(tmp_path)
-        assert result is None
+        assert result is not None
+        assert result.sources_available is False

     def test_check_ide_templates_missing_templates(self, monkeypatch, tmp_path: Path):
         """Test when templates are missing."""
@@ -110,13 +111,16 @@ def test_check_ide_templates_missing_templates(self, monkeypatch, tmp_path: Path
         # Create a source template
         (templates_dir / "specfact.01-import.md").write_text("# Import command")

+        def _fake_discover(_repo_path: Path, include_package_fallback: bool = True) -> list[Path]:
+            return sorted(templates_dir.glob("specfact*.md"))
+
         with (
             patch("specfact_cli.utils.startup_checks.detect_ide", return_value="cursor"),
             patch(
                 "specfact_cli.utils.startup_checks.IDE_CONFIG",
                 {"cursor": {"folder": ".cursor/commands", "format": "md"}},
             ),
-            patch("specfact_cli.utils.startup_checks.find_package_resources_path", return_value=templates_dir),
+            patch("specfact_cli.utils.startup_checks.discover_prompt_template_files", side_effect=_fake_discover),
             patch(
                 "specfact_cli.utils.ide_setup.SPECFACT_COMMANDS",
                 ["specfact.01-import"],
@@ -150,13 +154,16 @@ def test_check_ide_templates_outdated_templates(self, monkeypatch, tmp_path: Pat
         time.sleep(1.1)
         source_file.touch()

+        def _fake_discover(_repo_path: Path, include_package_fallback: bool = True) -> list[Path]:
+            return sorted(templates_dir.glob("specfact*.md"))
+
         with (
             patch("specfact_cli.utils.startup_checks.detect_ide", return_value="cursor"),
             patch(
                 "specfact_cli.utils.startup_checks.IDE_CONFIG",
                 {"cursor": {"folder": ".cursor/commands", "format": "md"}},
             ),
-            patch("specfact_cli.utils.startup_checks.find_package_resources_path", return_value=templates_dir),
+            patch("specfact_cli.utils.startup_checks.discover_prompt_template_files", side_effect=_fake_discover),
             patch(
                 "specfact_cli.utils.ide_setup.SPECFACT_COMMANDS",
                 ["specfact.01-import"],
@@ -187,13 +194,16 @@ def test_check_ide_templates_up_to_date(self, monkeypatch, tmp_path: Path):
         ide_file.write_text("# Import command")
         ide_file.touch()

+        def _fake_discover(_repo_path: Path, include_package_fallback: bool = True) -> list[Path]:
+            return sorted(templates_dir.glob("specfact*.md"))
+
         with (
             patch("specfact_cli.utils.startup_checks.detect_ide", return_value="cursor"),
             patch(
                 "specfact_cli.utils.startup_checks.IDE_CONFIG",
                 {"cursor": {"folder": ".cursor/commands", "format": "md"}},
             ),
-            patch("specfact_cli.utils.startup_checks.find_package_resources_path", return_value=templates_dir),
+            patch("specfact_cli.utils.startup_checks.discover_prompt_template_files", side_effect=_fake_discover),
             patch(
                 "specfact_cli.utils.ide_setup.SPECFACT_COMMANDS",
                 ["specfact.01-import"],
@@ -216,13 +226,16 @@ def test_check_ide_templates_different_formats(self, monkeypatch, tmp_path: Path
         templates_dir.mkdir(parents=True)
         (templates_dir / "specfact.01-import.md").write_text("# Import")

+        def _fake_discover(_repo_path: Path, include_package_fallback: bool = True) -> list[Path]:
+            return sorted(templates_dir.glob("specfact*.md"))
+
         with (
             patch("specfact_cli.utils.startup_checks.detect_ide", return_value="gemini"),
             patch(
                 "specfact_cli.utils.startup_checks.IDE_CONFIG",
                 {"gemini": {"folder": ".gemini/commands", "format": "toml"}},
             ),
-            patch("specfact_cli.utils.startup_checks.find_package_resources_path", return_value=templates_dir),
+            patch("specfact_cli.utils.startup_checks.discover_prompt_template_files", side_effect=_fake_discover),
             patch(
                 "specfact_cli.utils.ide_setup.SPECFACT_COMMANDS",
                 ["specfact.01-import"],
@@ -451,6 +464,9 @@ def test_check_pypi_version_timeout(self, mock_get: MagicMock):
 class TestPrintStartupChecks:
     """Test startup checks printing."""

+    @patch("specfact_cli.utils.startup_checks.get_last_module_freshness_check_timestamp")
+    @patch("specfact_cli.utils.startup_checks.get_last_version_check_timestamp")
+    @patch("specfact_cli.utils.startup_checks.get_last_checked_version")
     @patch("specfact_cli.utils.startup_checks.check_ide_templates")
     @patch("specfact_cli.utils.startup_checks.check_pypi_version")
     @patch("specfact_cli.utils.startup_checks.console")
@@ -461,8 +477,16 @@ def test_print_startup_checks_no_issues(
         self,
         mock_console: MagicMock,
         mock_version: MagicMock,
         mock_templates: MagicMock,
+        mock_last_checked: MagicMock,
+        _mock_version_ts: MagicMock,
+        _mock_module_ts: MagicMock,
     ):
         """Test when no issues are found."""
+        from specfact_cli import __version__
+
+        mock_last_checked.return_value = __version__
+        _mock_version_ts.return_value = datetime.now(UTC).isoformat()
+        _mock_module_ts.return_value = datetime.now(UTC).isoformat()
         mock_templates.return_value = None
         mock_version.return_value = VersionCheckResult(
             current_version="1.0.0",
@@ -610,13 +634,27 @@ def test_print_startup_checks_version_update_minor(
                 return
         pytest.fail("Minor version update message not found in console.print calls")

+    @patch("specfact_cli.utils.startup_checks.get_last_module_freshness_check_timestamp")
+    @patch("specfact_cli.utils.startup_checks.get_last_version_check_timestamp")
+    @patch("specfact_cli.utils.startup_checks.get_last_checked_version")
     @patch("specfact_cli.utils.startup_checks.check_ide_templates")
     @patch("specfact_cli.utils.startup_checks.check_pypi_version")
     @patch("specfact_cli.utils.startup_checks.console")
     def test_print_startup_checks_version_update_no_type(
-        self, mock_console: MagicMock, mock_version: MagicMock, mock_templates: MagicMock
+        self,
+        mock_console: MagicMock,
+        mock_version: MagicMock,
+        mock_templates: MagicMock,
+        mock_last_checked: MagicMock,
+        _mock_version_ts: MagicMock,
+        _mock_module_ts: MagicMock,
     ):
         """Test that update without type is not printed."""
+        from specfact_cli import __version__
+
+        mock_last_checked.return_value = __version__
+        _mock_version_ts.return_value = None
+        _mock_module_ts.return_value = datetime.now(UTC).isoformat()
         mock_templates.return_value = None
         mock_version.return_value = VersionCheckResult(
             current_version="1.0.0",
@@ -829,8 +867,15 @@ def test_metadata_updated_after_checks(
         mock_home.mkdir()
         monkeypatch.setattr(Path, "home", lambda: mock_home)

-        # No metadata exists (first run)
-        mock_check_templates.return_value = None
+        # No metadata exists (first run); template check runs with sources so watermark can advance.
+        mock_check_templates.return_value = TemplateCheckResult(
+            ide="cursor",
+            templates_outdated=False,
+            missing_templates=[],
+            outdated_templates=[],
+            ide_dir=tmp_path / ".cursor" / "commands",
+            sources_available=True,
+        )
         mock_check_version.return_value = VersionCheckResult(
             current_version="1.0.0",
             latest_version="1.0.0",