diff --git a/.cursor/commands/wf-create-change-from-plan.md b/.cursor/commands/wf-create-change-from-plan.md index 71de8bb0..cfaee254 100644 --- a/.cursor/commands/wf-create-change-from-plan.md +++ b/.cursor/commands/wf-create-change-from-plan.md @@ -16,6 +16,7 @@ Create an OpenSpec change proposal from a plan document (e.g., documentation imp **Guardrails** +- **Read `openspec/config.yaml`** during the workflow (before or at Step 5) to get project context and the TDD/SDD rules; use them when updating tasks.md so that tests-before-code is enforced. - Favor straightforward, minimal implementations first and add complexity only when it is requested or clearly required. - Keep changes tightly scoped to the requested outcome. - Never proceed with ambiguities or conflicts - always ask for clarification interactively. @@ -24,6 +25,7 @@ Create an OpenSpec change proposal from a plan document (e.g., documentation imp - **CRITICAL**: Only create GitHub issues in the target repository specified by the plan. Never create issues in a different repository than the plan's target. - For public-facing changes, always sanitize content before creating GitHub issues. - **CRITICAL Git Workflow**: Always add tasks to create a git branch (feature/bugfix/hotfix based on change-id) BEFORE any code modifications, and create a Pull Request to `dev` branch AFTER all tasks are complete. Never work directly on protected branches (main/dev). Branch naming: `/`. +- **CRITICAL TDD**: Per config.yaml, test tasks MUST come before implementation tasks. Write tests from spec scenarios first; run tests and expect failure; then implement until tests pass. **Workflow Steps** @@ -248,9 +250,9 @@ Execute the `/opsx:ff` command to create all artifacts at once: - **proposal.md**: Must include Why, What Changes, Capabilities, Impact sections. Capabilities section is critical - each capability needs a spec file. - **specs//spec.md**: Use Given/When/Then format for scenarios. 
Reference existing patterns in openspec/specs/. - **design.md**: Document bridge adapter integration, sequence diagrams for multi-repo flows, contract enforcement strategy. - - **tasks.md**: Break into 2-hour maximum chunks. Include contract decorator tasks, test tasks, quality gate tasks, git workflow tasks (branch creation first, PR creation last). + - **tasks.md**: Break into 2-hour maximum chunks. **Per config.yaml:** Test tasks MUST come before implementation tasks (TDD). Include contract decorator tasks, test tasks, quality gate tasks, git workflow tasks (branch creation first, PR creation last). Step 5.2.4 will add a TDD order section and reorder tasks so tests-before-code is explicit. -5. **Note**: After OPSX completes, Step 5 will add git workflow tasks (branch creation and PR creation) and quality standards if not already included. +5. **Note**: After OPSX completes, Step 5 will read config.yaml, add git workflow tasks (branch creation and PR creation), **enforce TDD-first in tasks.md** (Step 5.2.4), and add quality standards if not already included. **4.3: Extract Change ID** @@ -265,10 +267,12 @@ Execute the `/opsx:ff` command to create all artifacts at once: **5.1: Review Against Project Rules and Config** -1. **Read openspec/config.yaml:** - - Project context (tech stack, constraints, architecture patterns) - - Per-artifact rules (proposal, specs, design, tasks) - - Verify artifacts follow config.yaml rules +1. **Required: Read `openspec/config.yaml`** (in the specfact-cli repo: `openspec/config.yaml`): + - **Project context**: Tech stack, constraints, architecture patterns. + - **Development discipline (SDD + TDD)** in context: (1) Specs first, (2) Tests second—write unit/integration tests from spec scenarios; run tests and **expect failure**, (3) Code last—implement until tests pass. 
+ - **Per-artifact rules**: `rules.tasks` in config.yaml—Enforce SDD+TDD order: (1) Branch creation, (2) Spec deltas, (3) Write tests from spec scenarios; run tests and expect failure (no implementation yet), (4) Implement code until tests pass, (5) Quality gates, (6) Documentation, (7) PR creation. Also: "Test tasks MUST come before implementation tasks: write tests derived from specs first, then implement. Do not implement before tests exist for the changed behavior." + - Use this context for Step 5.2.4 (TDD enforcement in tasks.md). + - Verify artifacts follow config.yaml rules. 2. **Read and apply rules from `specfact-cli/.cursor/rules/`:** - **spec-fact-cli-rules.mdc**: Problem analysis, centralize logic, testing requirements, contract-first approach @@ -281,6 +285,7 @@ Execute the `/opsx:ff` command to create all artifacts at once: - Proposal includes Source Tracking section (if public-facing change) - Tasks include GitHub issue creation task (if public-facing change in public repo) - Tasks follow 2-hour maximum chunk rule + - **Tasks enforce TDD: test tasks before implementation tasks** (see Step 5.2.4) - All artifacts reference existing architecture patterns where applicable **5.2: Update Tasks with Quality Standards and Git Workflow** @@ -349,7 +354,24 @@ For each task in `tasks.md` (after branch creation task), ensure it includes: - Prerequisite changes - External dependencies -**5.2.4: Add Pull Request Creation Task (LAST TASK)** +**5.2.4: Enforce TDD-first in tasks.md (use config.yaml)** + +**Required:** Use the Development discipline and `rules.tasks` from `openspec/config.yaml` (read in Step 5.1). Ensure tasks.md enforces tests before code. + +1. **Add a "TDD / SDD order (enforced)" section** at the top of `tasks.md` (after the title, before the first numbered task section, e.g. before `## 1. Create git branch`): + - State that per `openspec/config.yaml`, **tests before code** apply to any task that adds or changes behavior. 
+ - List the order: (1) Spec deltas define behavior (Given/When/Then), (2) **Tests second**—write unit/integration tests from those scenarios; run tests and **expect failure** (no implementation yet), (3) **Code last**—implement until tests pass and behavior satisfies the spec. + - Add: "Do not implement production code for new behavior until the corresponding tests exist and have been run (expecting failure)." + - Use a horizontal rule `---` to separate this block from the numbered tasks. + +2. **For each task section that adds or changes behavior** (e.g. a section that has both "add tests" and "implement" subtasks): + - **Reorder** so that "write tests from spec scenarios" (and "run tests; expect failure") appears **before** any "implement" or "add code" tasks for that behavior. + - If the current order is "implement 3.1, 3.2, 3.3, then add tests 3.4", rewrite to: "**Tests first:** 3.1 Write tests from change spec scenarios (e.g. `changes/.../specs//spec.md`); run tests; **expect failure**. 3.2–3.N Implement (add options, helper, etc.). 3.N+1 Run tests again; **expect pass**; then quality gates." + - Add a short **TDD for this section** reminder in the section heading or first bullet where applicable (e.g. "TDD: tests first, then code"). + +3. **Verify:** Scan tasks.md for any block that has both test tasks and implementation tasks; ensure test tasks come first. Config.yaml: "Test tasks MUST come before implementation tasks." 
+ +**5.2.5: Add Pull Request Creation Task (LAST TASK)** **Add as the LAST task in `tasks.md` (after all implementation tasks are complete):** @@ -924,9 +946,10 @@ Location: openspec/changes// Validation: ✓ OpenSpec validation passed ✓ Markdown linting passed (auto-fixed where possible) - ✓ Project rules applied + ✓ Project rules applied (config.yaml read; TDD-first enforced in tasks.md) ✓ Quality standards integrated ✓ Git workflow tasks added (branch creation + PR creation) + ✓ TDD order section and test-before-code task order applied GitHub Issue (if target repository supports issues): ✓ Issue # created in : @@ -937,7 +960,9 @@ GitHub Issue (if target repository supports issues): Next Steps: 1. Review proposal: openspec/changes//proposal.md 2. Review tasks: openspec/changes//tasks.md - 3. Verify git workflow tasks are included: + 3. Verify TDD and git workflow are reflected: + - tasks.md has "TDD / SDD order (enforced)" section at top + - For behavior changes: test tasks before implementation tasks - First task: Create branch `/` - Last task: Create PR to `dev` branch 4. 
Apply change when ready: /opsx:apply (or /openspec-apply for legacy) diff --git a/.github/workflows/pr-orchestrator.yml b/.github/workflows/pr-orchestrator.yml index 93e32080..2931f81a 100644 --- a/.github/workflows/pr-orchestrator.yml +++ b/.github/workflows/pr-orchestrator.yml @@ -6,20 +6,52 @@ on: pull_request: branches: [main, dev] paths-ignore: + - "**/*.md" + - "**/*.mdc" - "docs/**" - - "**.md" - - "**.mdc" push: branches: [main, dev] paths-ignore: + - "**/*.md" + - "**/*.mdc" - "docs/**" - - "**.md" - - "**.mdc" workflow_dispatch: +concurrency: + group: ${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }} + cancel-in-progress: true + jobs: + changes: + name: Detect code changes + runs-on: ubuntu-latest + outputs: + code_changed: ${{ steps.out.outputs.code_changed }} + steps: + - uses: actions/checkout@v4 + with: + fetch-depth: 0 + - uses: dorny/paths-filter@v3 + id: filter + with: + filters: | + code: + - '**' + - '!**/*.md' + - '!**/*.mdc' + - '!docs/**' + - id: out + run: | + if [ "${{ github.event_name }}" = "workflow_dispatch" ]; then + echo "code_changed=true" >> "$GITHUB_OUTPUT" + else + echo "code_changed=${{ steps.filter.outputs.code }}" >> "$GITHUB_OUTPUT" + fi + tests: name: Tests (Python 3.12) + needs: [changes] + if: needs.changes.outputs.code_changed == 'true' outputs: run_unit_coverage: ${{ steps.detect-unit.outputs.run_unit_coverage }} permissions: diff --git a/AGENTS.md b/AGENTS.md index ce9f1e6d..e9a6e96e 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -43,6 +43,7 @@ - Format only: `hatch run format` - Type check: `hatch run type-check` (basedpyright) - Dev shell: `hatch shell` +- **Faster startup**: Use `specfact --skip-checks ` to skip template and version checks (useful in CI or when security scanning causes delay). 
## Coding Style & Naming Conventions diff --git a/CHANGELOG.md b/CHANGELOG.md index de9ac1a7..b58d950c 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -9,6 +9,25 @@ All notable changes to this project will be documented in this file. --- +## [0.26.15] - 2026-01-30 + +### Added (0.26.15) + +- **Backlog refine: ignore-refined and single-item by ID** (OpenSpec change `improve-backlog-refine-and-cli-startup`, fixes [#166](https://github.com/nold-ai/specfact-cli/issues/166)) + - **`--ignore-refined` / `--no-ignore-refined`**: Default on; when set, only items that need refinement are shown (limit applies to unrefined items). Use `--no-ignore-refined` to include already-refined items. + - **`--id `**: Refine only the backlog item with the given issue or work item ID; exits with error if not found. + - **Helper**: `_item_needs_refinement(item)` in `backlog_commands.py` to decide if an item needs refinement (missing sections or low confidence). + - **Fetch behavior**: When both `--ignore-refined` and `--limit` are set, fetches more candidates (e.g. limit × 5) then filters and slices so limit applies to items needing refinement. + - **Docs**: `docs/guides/backlog-refinement.md` documents `--ignore-refined`, `--no-ignore-refined`, and `--id`; AGENTS.md documents `--skip-checks` for faster startup. + - **Prompt**: `resources/prompts/specfact.backlog-refine.md` adds "Interactive refinement (Copilot mode)" with loop: present story → list ambiguities → ask clarification → re-refine until user approves → then mark done and next story. + - **Startup**: Comment in `cli.py` confirms version line is printed before startup checks. + +### Changed (0.26.15) + +- **Version**: Bumped to 0.26.15; synced in `pyproject.toml`, `setup.py`, `src/__init__.py`, `src/specfact_cli/__init__.py`. 
+ +--- + ## [0.26.14] - 2026-01-29 ### Fixed (0.26.14) diff --git a/docs/README.md b/docs/README.md index 772861ca..b0f1c93d 100644 --- a/docs/README.md +++ b/docs/README.md @@ -20,7 +20,8 @@ SpecFact isn't just a technical tool—it's designed for **real-world agile/scru 👉 **[Agile/Scrum Workflows Guide](guides/agile-scrum-workflows.md)** ⭐ **START HERE** - Complete guide to persona-based team collaboration 👉 **[DevOps Backlog Integration](guides/devops-adapter-integration.md)** 🆕 **NEW FEATURE** - Integrate SpecFact into agile DevOps workflows -👉 **[Backlog Refinement Guide](guides/backlog-refinement.md)** 🆕 **NEW FEATURE** - AI-assisted template-driven refinement for standardizing work items +👉 **[Backlog Refinement Guide](guides/backlog-refinement.md)** 🆕 **NEW FEATURE** - AI-assisted template-driven refinement for standardizing work items +👉 **[Tutorial: Backlog Refine with AI IDE](getting-started/tutorial-backlog-refine-ai-ide.md)** 🆕 - End-to-end for agile DevOps: slash prompt, story quality, underspecification, DoR, custom templates --- diff --git a/docs/_layouts/default.html b/docs/_layouts/default.html index 6d14e8b3..691780d5 100644 --- a/docs/_layouts/default.html +++ b/docs/_layouts/default.html @@ -133,6 +133,7 @@
Guides (nav markup truncated in this diff; the hunk adds one entry to this section)
diff --git a/docs/getting-started/README.md b/docs/getting-started/README.md index 9e3ae1dd..34597183 100644 --- a/docs/getting-started/README.md +++ b/docs/getting-started/README.md @@ -46,6 +46,7 @@ uvx specfact-cli@latest plan init my-project --interactive - 📖 **[Tutorial: Using SpecFact with OpenSpec or Spec-Kit](tutorial-openspec-speckit.md)** ⭐ **NEW** - Complete beginner-friendly tutorial - 📖 **[DevOps Backlog Integration](../guides/devops-adapter-integration.md)** 🆕 **NEW FEATURE** - Integrate SpecFact into agile DevOps workflows - 📖 **[Backlog Refinement](../guides/backlog-refinement.md)** 🆕 **NEW FEATURE** - AI-assisted template-driven refinement for standardizing work items +- 📖 **[Tutorial: Backlog Refine with AI IDE](tutorial-backlog-refine-ai-ide.md)** 🆕 - End-to-end for agile DevOps teams: slash prompt, story quality, underspecification, splitting, DoR, custom templates - 📖 **[Use Cases](../guides/use-cases.md)** - See real-world examples - 📖 **[Command Reference](../reference/commands.md)** - Learn all available commands diff --git a/docs/getting-started/tutorial-backlog-refine-ai-ide.md b/docs/getting-started/tutorial-backlog-refine-ai-ide.md new file mode 100644 index 00000000..ee57f451 --- /dev/null +++ b/docs/getting-started/tutorial-backlog-refine-ai-ide.md @@ -0,0 +1,165 @@ +--- +layout: default +title: Tutorial - Backlog Refine with Your AI IDE +description: Integrate SpecFact CLI backlog refinement with your AI IDE. Improve story quality, underspec/overspec, split stories, fix ambiguities, respect DoR, and use custom template mapping. +permalink: /getting-started/tutorial-backlog-refine-ai-ide/ +--- + +# Tutorial: Backlog Refine with Your AI IDE (Agile DevOps Teams) + +This tutorial walks agile DevOps teams through integrating SpecFact CLI backlog refinement with their AI IDE (Cursor, VS Code + Copilot, Claude Code, etc.) using the interactive slash prompt. 
You will improve backlog story quality, make informed decisions about underspecification, split stories when too big, fix ambiguities, respect Definition of Ready (DoR), and optionally use custom template mapping for advanced teams. + +**Time**: ~20–30 minutes +**Outcome**: End-to-end flow from raw backlog items to template-compliant, DoR-ready stories via your AI IDE. + +--- + +## What You'll Learn + +- Run `specfact backlog refine` and use the **slash prompt** in your AI IDE for interactive refinement +- Use the **interactive feedback loop**: present story → assess specification level (under-/over-/fit) → list ambiguities → ask clarification → re-refine until approved +- Improve story quality: identify **underspecified** (missing AC, vague scope), **overspecified** (too many sub-steps, implementation detail), or **fit-for-scope** stories +- Decide when to **split** stories that are too big +- Respect **Definition of Ready (DoR)** once defined in your team +- For advanced teams: point to **custom template mapping** (e.g. ADO custom fields) when required + +--- + +## Prerequisites + +- SpecFact CLI installed (`uvx specfact-cli@latest` or `pip install specfact-cli`) +- Access to a backlog (GitHub repo or Azure DevOps project) +- AI IDE with slash commands (Cursor, VS Code + Copilot, etc.) 
+- Optional: `specfact init --ide cursor` (or your IDE) so the backlog-refine slash command is available + +--- + +## Step 1: Run Backlog Refine and Get Items + +From your repo root (or where your backlog lives): + +```bash +# GitHub: fetch open items that need refinement (default: ignore already-refined) +specfact backlog refine github --repo-owner OWNER --repo-name REPO --search "is:open label:feature" --limit 5 --preview + +# Or export to a temp file for your AI IDE to process (recommended for interactive loop) +specfact backlog refine github --repo-owner OWNER --repo-name REPO --export-to-tmp --search "is:open label:feature" --limit 5 +``` + +- Use `--ignore-refined` (default) so `--limit` applies to items that **need** refinement +- Use `--id ISSUE_ID` to refine a **single** item by ID +- Use `--check-dor` when your team has a DoR config in `.specfact/dor.yaml` + +--- + +## Step 2: Invoke the Slash Prompt in Your AI IDE + +In Cursor, VS Code, or your IDE: + +1. Open the **slash command** for backlog refinement (e.g. `/specfact.backlog-refine` or the equivalent in your IDE). +2. Pass the same arguments you would use in the CLI, for example: + - `/specfact.backlog-refine --adapter github --repo-owner OWNER --repo-name NAME --labels feature --limit 5` + +The AI will use the **SpecFact Backlog Refinement** prompt, which includes: + +- Template-driven refinement (user story, defect, spike, enabler) +- **Interactive refinement (Copilot mode)**: present story → list ambiguities → ask clarification → re-refine until you approve +- **Specification level**: for each story, the AI assesses whether it is **under-specified**, **over-specified**, or **fit for scope and intent**, with evidence (missing AC, vague scope, too many sub-steps, etc.) + +--- + +## Step 3: Use the Interactive Feedback Loop + +For each story, the AI should: + +1. **Present** the refined story (Title, Body, Acceptance Criteria, Metrics) in a clear, scannable format. +2. 
**Assess specification level**: + - **Under-specified**: Missing acceptance criteria, vague scope, unclear “so that” or user value. List what’s missing. + - **Over-specified**: Too much implementation detail, too many sub-steps for one story, or solution prescribed instead of outcome. Suggest what to trim or move. + - **Fit for scope and intent**: Clear persona, capability, benefit, and testable AC; appropriate size. State briefly why it’s ready. +3. **List ambiguities** or open questions (e.g. conflicting assumptions, unclear priority). +4. **Ask** you (PO/stakeholder): “Do you want any changes? Any ambiguities to resolve? Should this story be split?” +5. **Re-refine** if you give feedback, then repeat from “Present” until you **explicitly approve** (e.g. “looks good”, “approved”). +6. Only after approval: mark the story done and move to the next. Do **not** update the backlog item until that story is approved. + +This loop ensures the DevOps team sees **underspecification** (and over-specification) explicitly and can improve story quality and respect DoR before committing to the backlog. + +--- + +## Step 4: Respect Definition of Ready (DoR) + +If your team uses DoR: + +1. Create or edit `.specfact/dor.yaml` in the repo (e.g. require story_points, priority, business_value, acceptance_criteria). +2. Run refine with `--check-dor`: + + ```bash + specfact backlog refine github --repo-owner OWNER --repo-name REPO --check-dor --labels feature + ``` + +3. In the interactive loop, treat DoR as part of “fit for scope”: if the refined story doesn’t meet DoR (e.g. missing AC or story points), the AI should flag it as under-specified or not ready and suggest what to add. + +--- + +## Step 5: When to Split a Story + +During the loop, if the AI or you identify that a story is **too big** (e.g. 
multiple capabilities, many sub-steps, or clearly two user outcomes): + +- The AI should state: “This story may be too large; consider splitting by [capability / user outcome / step].” +- You decide: either split into two (or more) stories and refine each separately, or keep as one and trim scope. Only after that decision should the story be marked approved and written back. + +--- + +## Step 6: Write Back (When Ready) + +When you’re satisfied with the refined content: + +```bash +# If you used --export-to-tmp, save the refined file as ...-refined.md, then: +specfact backlog refine github --repo-owner OWNER --repo-name REPO --import-from-tmp --write + +# Or run refine interactively with --write (use with care; confirm each item) +specfact backlog refine github --repo-owner OWNER --repo-name REPO --write --labels feature --limit 3 +``` + +Use `--preview` (default) until you’re confident; use `--write` only when you want to update the remote backlog. + +--- + +## Step 7: Advanced Teams — Custom Template Mapping + +If your team uses **custom fields** (e.g. Azure DevOps custom process templates): + +1. **ADO**: Add a custom field mapping file and point the CLI to it: + + ```bash + specfact backlog refine ado --ado-org ORG --ado-project PROJECT \ + --custom-field-mapping .specfact/templates/backlog/field_mappings/ado_custom.yaml \ + --state Active + ``` + +2. See **[Template Customization](../guides/template-customization.md)** and **[Custom Field Mapping](../guides/custom-field-mapping.md)** for defining templates and mapping ADO fields. +3. The same **interactive loop and specification-level assessment** (under-/over-/fit) apply; the AI should use your template’s required sections when assessing “fit for scope”. + +--- + +## Summary + +| Goal | How | +|-----------------------------|-----| +| Improve story quality | Use the interactive loop; fix under-/over-specification and ambiguities before approving. 
| +| Know if a story is under/over/fit | AI assesses each story and lists evidence; you decide to add detail, split, or accept. | +| Split stories that are too big | AI suggests splitting when appropriate; you refine each new story separately. | +| Respect DoR | Use `--check-dor` and treat DoR as part of “fit for scope” in the loop. | +| Custom templates / mapping | Use `--custom-field-mapping` (ADO) and custom templates; see Template Customization and Custom Field Mapping guides. | + +--- + +## Related Documentation + +- **[Backlog Refinement Guide](../guides/backlog-refinement.md)** — Full reference: templates, options, export/import, DoR +- **[Story scope and specification level](../guides/backlog-refinement.md#story-scope-and-specification-level)** — Underspecification, over-specification, fit-for-scope +- **[Definition of Ready (DoR)](../guides/backlog-refinement.md#step-45-definition-of-ready-dor-validation-optional)** — DoR configuration and validation +- **[Template Customization](../guides/template-customization.md)** — Custom templates for advanced teams +- **[Custom Field Mapping](../guides/custom-field-mapping.md)** — ADO custom field mapping +- **[IDE Integration](../guides/ide-integration.md)** — Set up slash commands in Cursor, VS Code, etc. 
diff --git a/docs/guides/ai-ide-workflow.md b/docs/guides/ai-ide-workflow.md index c1891083..7376d8ff 100644 --- a/docs/guides/ai-ide-workflow.md +++ b/docs/guides/ai-ide-workflow.md @@ -76,6 +76,9 @@ Once initialized, the following slash commands are available in your IDE: |---------------|---------|------------------------| | `/specfact.compare` | Compare plans | `specfact plan compare` | | `/specfact.validate` | Validation suite | `specfact repro` | +| `/specfact.backlog-refine` | Backlog refinement (AI IDE interactive loop) | `specfact backlog refine github \| ado` | + +For an end-to-end tutorial on backlog refine with your AI IDE (story quality, underspecification, DoR, custom templates), see **[Tutorial: Backlog Refine with AI IDE](../getting-started/tutorial-backlog-refine-ai-ide.md)**. **Related**: [IDE Integration - Available Slash Commands](ide-integration.md#available-slash-commands) diff --git a/docs/guides/backlog-refinement.md b/docs/guides/backlog-refinement.md index 3384a4ad..2582369c 100644 --- a/docs/guides/backlog-refinement.md +++ b/docs/guides/backlog-refinement.md @@ -11,6 +11,8 @@ permalink: /guides/backlog-refinement/ This guide explains how to use SpecFact CLI's backlog refinement feature to standardize work items from GitHub Issues, Azure DevOps, and other backlog tools into corporate templates (user stories, defects, spikes, enablers). +**Tutorial**: For an end-to-end walkthrough with your AI IDE (Cursor, VS Code, etc.)—including interactive slash prompt, story quality, underspecification, splitting, and DoR—see **[Tutorial: Backlog Refine with AI IDE](../getting-started/tutorial-backlog-refine-ai-ide.md)**. + ## Overview **Why This Matters**: DevOps teams often create backlog items with informal, unstructured descriptions. Template-driven refinement helps enforce corporate standards while maintaining lossless synchronization with your backlog tools. @@ -172,6 +174,16 @@ specfact backlog refine github --search "is:open" # 4. 
Ask for confirmation before applying ``` +### Story scope and specification level + +During interactive refinement (e.g. when using the slash prompt in your AI IDE), the team should assess each story’s **specification level** so you can improve quality and respect Definition of Ready: + +- **Under-specified**: Missing acceptance criteria, vague scope, unclear “so that” or user value. The AI should list what’s missing (e.g. “No AC”, “Scope could mean X or Y”) so the team can add detail before approving. +- **Over-specified**: Too much implementation detail, too many sub-steps for one story, or solution prescribed instead of outcome. The AI should suggest what to trim or move so the story stays fit for one sprint or one outcome. +- **Fit for scope and intent**: Clear persona, capability, benefit, and testable AC; appropriate size. The AI should state briefly why it’s ready (and, if you use DoR, that DoR is satisfied). + +Include this assessment in the **interactive feedback loop**: present story → assess under-/over-/fit → list ambiguities → ask clarification → re-refine until the PO/stakeholder approves. That way the DevOps team learns whether a story is under-/over-specified or actually fits its scope and intent before updating the backlog. 
+ ### Step 4: Preview and Apply Refinement Once validated, the refinement can be previewed or applied: @@ -413,6 +425,8 @@ specfact backlog refine [OPTIONS] - `--search`, `-s` - Search query to filter backlog items - `--template`, `-t` - Target template ID (default: auto-detect) +- `--ignore-refined` / `--no-ignore-refined` - When using `--limit N`, apply limit to items that need refinement (default: ignore already-refined items so you see N items that actually need work) +- `--id` - Refine only the backlog item with the given issue or work item ID - `--auto-accept-high-confidence` - Auto-accept refinements with confidence >= 0.85 - `--bundle`, `-b` - OpenSpec bundle path to import refined items - `--auto-bundle` - Auto-import refined items to OpenSpec bundle diff --git a/docs/index.md b/docs/index.md index a271f172..e9c6befc 100644 --- a/docs/index.md +++ b/docs/index.md @@ -21,8 +21,9 @@ SpecFact CLI helps you modernize legacy codebases by automatically extracting sp 1. **[Installation](getting-started/installation.md)** - Get started in 60 seconds 2. **[First Steps](getting-started/first-steps.md)** - Run your first command -3. **[Modernizing Legacy Code](guides/brownfield-engineer.md)** ⭐ **PRIMARY** - Brownfield-first guide -4. **[The Brownfield Journey](guides/brownfield-journey.md)** ⭐ - Complete modernization workflow +3. **[Tutorial: Backlog Refine with AI IDE](getting-started/tutorial-backlog-refine-ai-ide.md)** - Integrate backlog refinement with your AI IDE (agile DevOps) +4. **[Modernizing Legacy Code](guides/brownfield-engineer.md)** ⭐ **PRIMARY** - Brownfield-first guide +5. **[The Brownfield Journey](guides/brownfield-journey.md)** ⭐ - Complete modernization workflow ### Using GitHub Spec-Kit? 
@@ -59,6 +60,7 @@ SpecFact CLI helps you modernize legacy codebases by automatically extracting sp - **Status Synchronization**: Keep OpenSpec and backlog status in sync - **[Backlog Refinement Guide](guides/backlog-refinement.md)** 🆕 **NEW** - AI-assisted template-driven refinement for standardizing work items + - **[Tutorial: Backlog Refine with AI IDE](getting-started/tutorial-backlog-refine-ai-ide.md)** - End-to-end tutorial for agile DevOps teams (slash prompt, DoR, split stories, underspec/overspec) - **Template Detection**: Automatically detect which template matches a backlog item with priority-based resolution - **AI-Assisted Refinement**: Generate prompts for IDE AI copilots to refine unstructured input - **Confidence Scoring**: Validate refined content and provide confidence scores diff --git a/openspec/changes/implement-backlog-refine-import-from-tmp/proposal.md b/openspec/changes/archive/2026-01-30-implement-backlog-refine-import-from-tmp/proposal.md similarity index 100% rename from openspec/changes/implement-backlog-refine-import-from-tmp/proposal.md rename to openspec/changes/archive/2026-01-30-implement-backlog-refine-import-from-tmp/proposal.md diff --git a/openspec/changes/implement-backlog-refine-import-from-tmp/specs/backlog-refinement/spec.md b/openspec/changes/archive/2026-01-30-implement-backlog-refine-import-from-tmp/specs/backlog-refinement/spec.md similarity index 100% rename from openspec/changes/implement-backlog-refine-import-from-tmp/specs/backlog-refinement/spec.md rename to openspec/changes/archive/2026-01-30-implement-backlog-refine-import-from-tmp/specs/backlog-refinement/spec.md diff --git a/openspec/changes/archive/2026-01-30-implement-backlog-refine-import-from-tmp/tasks.md b/openspec/changes/archive/2026-01-30-implement-backlog-refine-import-from-tmp/tasks.md new file mode 100644 index 00000000..8a818e7e --- /dev/null +++ b/openspec/changes/archive/2026-01-30-implement-backlog-refine-import-from-tmp/tasks.md @@ -0,0 +1,38 @@ 
+# Tasks: Implement backlog refine --import-from-tmp + +## 1. Create git branch + +- [x] 1.1.1 Ensure we're on dev and up to date: `git checkout dev && git pull origin dev` +- [x] 1.1.2 Create branch: `git checkout -b feature/implement-backlog-refine-import-from-tmp` +- [x] 1.1.3 Verify branch: `git branch --show-current` + +## 2. Parser for refined export format + +- [x] 2.1.1 Add function to parse refined markdown (e.g. `_parse_refined_export_markdown(content: str) -> dict[str, dict]` returning id → {body_markdown, acceptance_criteria, title?, ...}) in `backlog_commands.py` or new module `src/specfact_cli/backlog/refine_export_parser.py` +- [x] 2.1.2 Split content by `## Item` or `---` to get per-item blocks +- [x] 2.1.3 From each block extract **ID** (required), **Body** (from ```markdown ... ```), **Acceptance Criteria** (optional), optionally **title** and metrics +- [x] 2.1.4 Add unit tests for parser (export-format sample, multiple items, missing optional fields) +- [x] 2.1.5 Run `hatch run format` and `hatch run type-check` + +## 3. Import branch in backlog refine command + +- [x] 3.1.1 In the `if import_from_tmp:` block, after file-exists check: read file content, call parser, build map id → parsed fields +- [x] 3.1.2 For each item in `items`, if item.id in map: set item.body_markdown, item.acceptance_criteria (and optionally title/metrics) from parsed fields +- [x] 3.1.3 If `--write` is not set: print preview ("Would update N items") and return +- [x] 3.1.4 If `--write` is set: get adapter via _build_adapter_kwargs and adapter_registry.get_adapter; for each updated item call adapter.update_backlog_item(item, update_fields=[...]) with same update_fields logic as interactive refine +- [x] 3.1.5 Print success summary (e.g. "Updated N backlog items") +- [x] 3.1.6 Remove "Import functionality pending implementation" message and TODO +- [x] 3.1.7 Run `hatch run format` and `hatch run type-check` + +## 4. 
Tests and quality + +- [x] 4.1.1 Add or extend test for refine --import-from-tmp (unit: parser; integration or unit with mock: import flow with --tmp-file and --write) +- [x] 4.1.2 Run `hatch run contract-test` (or `hatch run smart-test`) +- [x] 4.1.3 Run `hatch run lint` +- [x] 4.1.4 Run `openspec validate implement-backlog-refine-import-from-tmp --strict` + +## 5. Documentation and PR + +- [x] 5.1.1 Update CHANGELOG.md with fix entry +- [x] 5.1.2 Ensure help text for --import-from-tmp and --tmp-file is accurate +- [x] 5.1.3 Create Pull Request from feature/implement-backlog-refine-import-from-tmp to dev diff --git a/openspec/changes/archive/2026-01-30-improve-backlog-refine-and-cli-startup/CHANGE_VALIDATION.md b/openspec/changes/archive/2026-01-30-improve-backlog-refine-and-cli-startup/CHANGE_VALIDATION.md new file mode 100644 index 00000000..e86c4166 --- /dev/null +++ b/openspec/changes/archive/2026-01-30-improve-backlog-refine-and-cli-startup/CHANGE_VALIDATION.md @@ -0,0 +1,56 @@ +# Change Validation Report: improve-backlog-refine-and-cli-startup + +**Validation Date**: 2026-01-30 +**Change Proposal**: [proposal.md](./proposal.md) +**Validation Method**: Dry-run simulation and OpenSpec strict validation + +## Executive Summary + +- **Breaking Changes**: 0 detected +- **Dependent Files**: Limited to backlog_commands.py, cli.py, startup_checks.py, prompt file +- **Impact Level**: Low +- **Validation Result**: Pass +- **User Decision**: N/A (no breaking changes) + +## Breaking Changes Detected + +None. New options (`--ignore-refined`, `--no-ignore-refined`, `--id`) are additive; default `--ignore-refined` improves behavior without breaking existing scripts (scripts can use `--no-ignore-refined` to preserve previous behavior). 
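The backward-compatible flag semantics described above can be sketched with paired boolean options. The real command uses Typer (see the tasks), so this stdlib `argparse` version is only an illustration of the defaults; the option names match the proposal, everything else here is assumed:

```python
import argparse

# Illustrative only: the CLI itself is built on Typer, not argparse.
parser = argparse.ArgumentParser(prog="specfact-backlog-refine")
parser.add_argument(
    "--ignore-refined",
    action=argparse.BooleanOptionalAction,  # generates --ignore-refined / --no-ignore-refined
    default=True,  # new default: --limit counts only items that still need refinement
)
parser.add_argument("--id", dest="issue_id", default=None)  # focused single-item refinement
parser.add_argument("--limit", type=int, default=10)

# Default run: ignore-refined is on without being passed explicitly.
print(parser.parse_args([]).ignore_refined)                       # True
# Previous behavior is one flag away, so existing scripts keep working.
print(parser.parse_args(["--no-ignore-refined"]).ignore_refined)  # False
print(parser.parse_args(["--id", "123"]).issue_id)                # 123
```

Because the old behavior remains reachable via `--no-ignore-refined`, the default change is additive rather than breaking.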
+ +## Dependencies Affected + +- **Critical**: None +- **Recommended**: Tests for new helper and options +- **Optional**: Docs for `--skip-checks`, `--ignore-refined`, `--id` + +## Impact Assessment + +- **Code Impact**: backlog_commands.py (filter logic, new options), cli.py (startup order), startup_checks.py (optional timeout), prompt file (new section) +- **Test Impact**: New unit/integration tests for ignore-refined and --id +- **Documentation Impact**: AGENTS.md or docs; backlog refine reference +- **Release Impact**: Patch (additive, backward compatible) + +## Format Validation + +- **proposal.md Format**: Pass (Why, What Changes, Capabilities, Impact present) +- **tasks.md Format**: Pass (hierarchical tasks, branch creation first, PR last) +- **specs Format**: Pass (ADDED Requirements with #### Scenario blocks) +- **design.md Format**: Pass +- **Config.yaml Compliance**: Pass + +## OpenSpec Validation + +- **Status**: Pass +- **Validation Command**: `openspec validate improve-backlog-refine-and-cli-startup --strict` +- **Issues Found**: 0 (after fixing delta header to ## ADDED Requirements) +- **Re-validated**: Yes + +## Validation Artifacts + +- Plan: `specfact-cli-internal/docs/internal/implementation/2026-01-30-backlog-refine-and-cli-improvements-plan.md` +- Change: `openspec/changes/improve-backlog-refine-and-cli-startup/` + +## Next Steps + +1. Review proposal and tasks +2. Apply change when ready: `/opsx:apply improve-backlog-refine-and-cli-startup` (or legacy `/openspec-apply`) +3. 
Create GitHub issue in nold-ai/specfact-cli for tracking (optional; update proposal Source Tracking when created) diff --git a/openspec/changes/archive/2026-01-30-improve-backlog-refine-and-cli-startup/design.md b/openspec/changes/archive/2026-01-30-improve-backlog-refine-and-cli-startup/design.md new file mode 100644 index 00000000..fd704f58 --- /dev/null +++ b/openspec/changes/archive/2026-01-30-improve-backlog-refine-and-cli-startup/design.md @@ -0,0 +1,21 @@ +# Design: Improve Backlog Refine and CLI Startup + +## Startup: First Output Before Checks + +- **Current**: The callback prints the version line, then runs `print_startup_checks()`. If checks are slow (e.g. under xagt), the version line may appear but the console can feel blocked until checks finish. +- **Change**: Ensure `print_version_line()` (or the welcome line) is the first console output; then run startup checks. No bridge adapter or multi-repo flow. Optional: add a short timeout (e.g. 3s) to `check_pypi_version()` in `startup_checks.py` so slow networks do not block startup. + +## Backlog Refine: Ignore-Refined and --id + +- **Ignore-refined**: Extract the "needs refinement" logic into a helper (e.g. `_item_needs_refinement(item, detector, registry, template_id, ...)`). After `_fetch_backlog_items`, if `ignore_refined`: filter `items` to those where the helper returns True; then if `limit` is set, `items = items[:limit]`. When both `ignore_refined` and `limit` are set, the fetch may need a larger cap (e.g. `limit * 5`) so enough non-refined candidates exist; alternatively, fetch without a limit, then filter and slice. +- **--id**: After fetch (and after the ignore-refined filter, if applied), if `issue_id` is set: `items = [i for i in items if str(i.id) == str(issue_id)]`. If the result is empty, print an error and exit. No adapter API change; post-filter only. + +## Prompt: Interactive Refinement (Copilot) + +- **Scope**: Prompt file only (`resources/prompts/specfact.backlog-refine.md`). No CLI code change for this part.
+- **Content**: New section "Interactive refinement (Copilot mode)" that instructs the AI to: (1) present each refined story in a clear format; (2) list ambiguities; (3) ask PO/stakeholders for clarification; (4) re-refine with feedback and repeat until approved; (5) only then mark story done and proceed; (6) use readable formatting (tables, panels, headings) for an enjoyable refinement session. + +## Contract / Testing + +- New helper `_item_needs_refinement` (or equivalent) should be covered by unit tests. +- Integration or e2e: refine with `--ignore-refined --limit N` and with `--id N`; assert expected filtering and exit behavior. diff --git a/openspec/changes/archive/2026-01-30-improve-backlog-refine-and-cli-startup/proposal.md b/openspec/changes/archive/2026-01-30-improve-backlog-refine-and-cli-startup/proposal.md new file mode 100644 index 00000000..3139aebf --- /dev/null +++ b/openspec/changes/archive/2026-01-30-improve-backlog-refine-and-cli-startup/proposal.md @@ -0,0 +1,47 @@ +# Change: Improve Backlog Refine and CLI Startup + +## Why + +1. **Startup delay**: CLI can take 5–10s before first output (e.g. under xagt). Users need version line before any checks. +2. **Backlog refine --limit**: With `--limit N`, if the first N items are already refined, the user gets N skips every run. We need `--ignore-refined` (default) so limit applies to items that need refinement. +3. **Focused refinement**: No way to refine one story by ID. Add `--id ISSUE_ID`. +4. **Copilot loop**: Prompt should instruct AI to present story → ambiguities → clarify → re-refine until approved → update (interactive, stakeholder-friendly). + +## What Changes + +- **MODIFY** `src/specfact_cli/cli.py` – Ensure first output (version line) before startup checks; optional PyPI timeout. +- **MODIFY** `src/specfact_cli/commands/backlog_commands.py` – Add `--ignore-refined`/`--no-ignore-refined`, `--id`; filter logic so limit applies to items needing refinement when ignore-refined. 
+- **MODIFY** `src/specfact_cli/utils/startup_checks.py` – Optional timeout for `check_pypi_version()`. +- **MODIFY** `resources/prompts/specfact.backlog-refine.md` – Add interactive refinement (Copilot) section: present story → ambiguities → ask clarification → re-refine until approved → then update. +- **MODIFY** `openspec/specs/backlog-refinement/spec.md` – Add scenarios for ignore-refined and --id. +- **NEW** Tests for ignore-refined and --id behavior. +- **MODIFY** Docs/AGENTS.md – Document `--skip-checks` for startup. + +## Capabilities + +- **backlog-refinement**: Ignore already-refined by default; add --id for single-item refinement; prompt interactive loop (prompt-level). +- **cli-performance**: First output before startup checks; optional PyPI timeout (existing cli-performance spec). + +## Impact + +**Affected Specs**: backlog-refinement, cli-performance (reference only) + +**Affected Code**: + +- `cli.py`, `startup_checks.py`, `backlog_commands.py`, `specfact.backlog-refine.md` + +**Integration Points**: + +- Backlog adapters (unchanged; post-filter by id and needs-refinement). +- Copilot/Cursor prompt consumers (improved prompt text). + +**Breaking Changes**: None. New options; default `--ignore-refined` changes which items count toward limit (behavioral improvement). + +**Documentation**: AGENTS.md or docs for `--skip-checks`; backlog refine docs for `--ignore-refined`, `--no-ignore-refined`, `--id`. 
+ +## Source Tracking + +- **GitHub Issue**: #166 +- **Issue URL**: +- **Repository**: nold-ai/specfact-cli +- **Last Synced Status**: proposed diff --git a/openspec/changes/archive/2026-01-30-improve-backlog-refine-and-cli-startup/specs/backlog-refinement/spec.md b/openspec/changes/archive/2026-01-30-improve-backlog-refine-and-cli-startup/specs/backlog-refinement/spec.md new file mode 100644 index 00000000..f175fe51 --- /dev/null +++ b/openspec/changes/archive/2026-01-30-improve-backlog-refine-and-cli-startup/specs/backlog-refinement/spec.md @@ -0,0 +1,39 @@ +# backlog-refinement (delta) + +## ADDED Requirements + +### Requirement: Ignore Already-Refined Items by Default + +The system SHALL support `--ignore-refined` (default) and `--no-ignore-refined` so that when `--limit N` is used, the limit applies to items that need refinement (already-refined items are excluded from the batch by default). + +#### Scenario: Limit applies to items needing refinement when ignore-refined + +- **GIVEN** the user runs `specfact backlog refine --limit 3` (default `--ignore-refined`) +- **AND** the adapter returns at least 5 items, of which the first 3 are already refined (checkboxes + all required sections or high confidence with no missing fields) +- **WHEN** the command processes items +- **THEN** the system filters out already-refined items, then takes the first 3 that need refinement +- **AND** the user sees up to 3 items that actually require refinement (no loop of the same 3 refined items) + +#### Scenario: No-ignore-refined preserves previous behavior + +- **GIVEN** the user runs `specfact backlog refine --limit 3 --no-ignore-refined` +- **WHEN** the command processes items +- **THEN** the system takes the first 3 items from the fetch and processes them in order +- **AND** already-refined items are skipped in the loop (current behavior) + +### Requirement: Focused Refinement by Issue ID + +The system SHALL support `--id ISSUE_ID` to refine only the backlog item with the 
given issue or work item ID. + +#### Scenario: Refine single item by ID + +- **GIVEN** the user runs `specfact backlog refine --id 123` (with required adapter options) +- **WHEN** the adapter returns items including item with id 123 +- **THEN** the system filters to only the item with id 123 and refines only that item +- **AND** other items are not processed + +#### Scenario: ID not found + +- **GIVEN** the user runs `specfact backlog refine --id 999` (with required adapter options) +- **WHEN** no item with id 999 is in the fetched set +- **THEN** the system prints a clear error (e.g. "No backlog item with id 999 found") and exits with non-zero status diff --git a/openspec/changes/archive/2026-01-30-improve-backlog-refine-and-cli-startup/tasks.md b/openspec/changes/archive/2026-01-30-improve-backlog-refine-and-cli-startup/tasks.md new file mode 100644 index 00000000..849a65a3 --- /dev/null +++ b/openspec/changes/archive/2026-01-30-improve-backlog-refine-and-cli-startup/tasks.md @@ -0,0 +1,56 @@ +# Tasks: Improve Backlog Refine and CLI Startup + +## TDD / SDD order (enforced) + +Per `openspec/config.yaml`: **tests before code**. For any task that adds or changes behavior: + +1. **Spec deltas** define behavior (Given/When/Then) — already in `changes/.../specs/backlog-refinement/spec.md`. +2. **Tests second** — write unit/integration tests from those scenarios; run tests and **expect failure** (no implementation yet). +3. **Code last** — implement until tests pass and behavior satisfies the spec. + +Do not implement production code for new behavior until the corresponding tests exist and have been run (expecting failure). + +--- + +## 1. Create git branch + +- [x] 1.1.1 Ensure we're on dev and up to date: `git checkout dev && git pull origin dev` +- [x] 1.1.2 Create branch: `git checkout -b feature/improve-backlog-refine-and-cli-startup` +- [x] 1.1.3 Verify branch: `git branch --show-current` + +## 2. 
Startup: first output before checks + +- [x] 2.1 Verify in `cli.py` that version line (or welcome) is printed before `print_startup_checks()`; add comment if order is correct +- [x] 2.2 Optional: add timeout (e.g. 3s) to `check_pypi_version()` in `startup_checks.py` +- [x] 2.3 Document `--skip-checks` in AGENTS.md or docs for faster startup in CI/slow environments +- [x] 2.4 Run `hatch run format` and `hatch run type-check`; run contract and unit tests for touched files + +## 3. Backlog refine: ignore-refined and --id (TDD: tests first, then code) + +**TDD for this section:** Write tests from spec scenarios below first; run tests and expect failure; then implement until tests pass. + +- [x] 3.1 **Tests first:** From `changes/.../specs/backlog-refinement/spec.md` scenarios (Limit applies when ignore-refined; No-ignore-refined preserves behavior; Refine single item by ID; ID not found), add unit/integration tests in `tests/unit/commands/test_backlog_commands.py` (e.g. for `_item_needs_refinement` and refine filtering). Run tests: `hatch run smart-test-unit` or target file — **expect failure** (no implementation yet). +- [x] 3.2 Add `ignore_refined: bool = typer.Option(True, "--ignore-refined/--no-ignore-refined", ...)` and `issue_id: str | None = typer.Option(None, "--id", ...)` to refine command in `backlog_commands.py` +- [x] 3.3 Extract "already refined" logic into helper (e.g. `_item_needs_refinement(...)`) returning True if item needs refinement +- [x] 3.4 After fetch: if `ignore_refined`, filter items to those needing refinement; if `limit` set, slice to `items[:limit]`; when both set, consider fetching with larger limit (e.g. limit * 5) or no limit then filter+slice +- [x] 3.5 After fetch (and ignore-refined filter): if `issue_id` set, filter to `[i for i in items if str(i.id) == str(issue_id)]`; if empty, print error and exit +- [x] 3.6 Run tests again; **expect pass**. 
Then run `hatch run format`, `hatch run type-check`, `hatch run contract-test`, `hatch run smart-test` + +## 4. Prompt: interactive refinement section + +- [x] 4.1 Edit `resources/prompts/specfact.backlog-refine.md`: add section "Interactive refinement (Copilot mode)" with loop: present story → list ambiguities → ask clarification → re-refine until user approves → then mark done and next story; add formatting guidance for readability +- [x] 4.2 Ensure prompt states backlog item is updated only after user approval for that story + +## 5. Docs and release + +Specs are updated only when the change is **archived** (not during apply). Do not add tasks to merge spec delta into main spec during implementation. + +- [x] 5.1 Update backlog refine docs (if any) for `--ignore-refined`, `--no-ignore-refined`, `--id` +- [x] 5.2 Update patch version and sync across files (`pyproject.toml`, `setup.py`, `__init__.py`) +- [x] 5.3 Update `CHANGELOG.md` with the new version number and the changes made in this change + +## 6. 
Validation and PR + +- [x] 6.1 Run `openspec validate improve-backlog-refine-and-cli-startup --strict` +- [x] 6.2 Run `hatch run format`, `hatch run type-check`, `hatch run contract-test`, `hatch run smart-test` +- [x] 6.3 Create Pull Request from `feature/improve-backlog-refine-and-cli-startup` to `dev` with conventional message and description referencing this change (use `.github/pull_request_template.md` for the PR body) diff --git a/openspec/changes/daily-standup-progress-support/CHANGE_VALIDATION.md b/openspec/changes/daily-standup-progress-support/CHANGE_VALIDATION.md new file mode 100644 index 00000000..d1bc416e --- /dev/null +++ b/openspec/changes/daily-standup-progress-support/CHANGE_VALIDATION.md @@ -0,0 +1,74 @@ +# Change Validation Report: daily-standup-progress-support + +**Validation Date**: 2026-01-30 +**Change Proposal**: [proposal.md](./proposal.md) +**Validation Method**: Dry-run and format/config compliance check + +## Executive Summary + +- **Breaking Changes**: 0 detected +- **Dependent Files**: Additive only (new `specfact backlog daily` subcommand; existing `backlog` Typer group in `backlog_commands.py` will gain a new callback; `sync bridge --add-progress-comment` pattern can be extended for standup comment) +- **Impact Level**: Low +- **Validation Result**: Pass +- **User Decision**: N/A (no breaking changes) +- **Command placement**: Standup/progress is under backlog command group (`specfact backlog daily`); no top-level scrum/standup command (per harmonization) + +## Breaking Changes Detected + +None. Change is additive: new standup view and optional post standup comment; existing sync/bridge behavior unchanged. + +## Dependencies Affected + +- **Critical**: None +- **Recommended**: Reuse or align with existing `specfact sync bridge --add-progress-comment` and progress-comment logic in sync.py when implementing post standup comment. 
+- **Optional**: None + +## Impact Assessment + +- **Code Impact**: New or extended command (standup view); optional adapter extension for posting comment (e.g. GitHub issue comment). +- **Test Impact**: New tests from spec scenarios (standup view, assignee filter, post comment with mock, adapter without comment support). +- **Documentation Impact**: agile-scrum-workflows.md, devops-adapter-integration.md. +- **Release Impact**: Patch (additive feature). + +## Format Validation + +- **proposal.md Format**: Pass + - Title format: Correct (`# Change: Daily standup and progress support`) + - Required sections: All present (Why, What Changes, Capabilities, Impact) + - "What Changes" format: Correct (bullet list with NEW/EXTEND) + - "Capabilities" section: Present (daily-standup) + - "Impact" format: Correct + - Source Tracking section: Present (placeholder for issue number) +- **tasks.md Format**: Pass + - Section headers: Hierarchical numbered format + - Task format: `- [ ] N.N [Description]` + - Sub-task format: Indented `- [ ] N.N.N` + - Config.yaml compliance: Pass + - TDD order section at top; tests before implementation (Section 4 before Section 5) + - Branch creation first (Section 1); PR creation last (Section 9) + - GitHub issue creation task (Section 2) for nold-ai/specfact-cli + - Version and changelog task (Section 8) before PR; patch bump and CHANGELOG sync + - Quality gates, documentation tasks present +- **specs Format**: Pass (Given/When/Then in specs/daily-standup/spec.md) +- **design.md Format**: Pass (bridge adapter integration, sequence, fallback documented) +- **Config.yaml Compliance**: Pass + +## OpenSpec Validation + +- **Status**: Pass +- **Validation Command**: `openspec validate daily-standup-progress-support --strict` +- **Issues Found**: 0 +- **Issues Fixed**: 0 + +## Recommended Improvements Applied + +1. **GitHub issue mandatory**: Task 2 explicitly creates issue in nold-ai/specfact-cli and updates proposal Source Tracking. +2. 
**Patch version and changelog**: Task 8 bumps patch version, syncs pyproject.toml/setup.py/src __init__.py, and adds CHANGELOG.md entry. Optional: Task 8.3 CHANGELOG line could mention `specfact backlog daily` for discoverability. +3. **TDD order**: TDD/SDD section at top of tasks.md; Section 4 (tests first, expect failure) before Section 5 (implement until tests pass). +4. **Integration note**: Design and validation note that existing `--add-progress-comment` in sync bridge can be aligned or extended for standup comment format to avoid duplication. +5. **Backlog harmonization**: All agile/standup behavior is under `specfact backlog daily`; no top-level `specfact standup` or scrum command. + +## Validation Artifacts + +- No temporary workspace used (dry-run analysis only). +- Change directory: `openspec/changes/daily-standup-progress-support/` diff --git a/openspec/changes/daily-standup-progress-support/design.md b/openspec/changes/daily-standup-progress-support/design.md new file mode 100644 index 00000000..ee54b289 --- /dev/null +++ b/openspec/changes/daily-standup-progress-support/design.md @@ -0,0 +1,32 @@ +# Design: Daily standup and progress support + +## Bridge adapter integration + +- **Standup view**: Reads from existing OpenSpec change proposals and/or backlog adapter data (same sources as `specfact sync bridge` and backlog commands). No new adapter contract; reuse existing list/fetch APIs. +- **Post standup comment**: When user opts in, use existing GitHub (and optionally ADO) adapter to add a comment to the linked issue. GitHub: use Issues API `POST /repos/{owner}/{repo}/issues/{issue_number}/comments` with standup body. ADO: use Work Item Comments API if available; otherwise document as future extension. +- **Adapter capability**: Adapters that support "post comment" expose a method (e.g. `post_comment(issue_id, body)`) or equivalent; standup command calls it when adapter supports it and user opts in. 
Read-only or unsupported adapters return a clear "not supported" so the CLI does not attempt to post. + +## Sequence (post standup comment) + +``` +User → specfact backlog daily --post --standup-text "Yesterday: X. Today: Y. Blockers: Z" + → CLI resolves change proposal → Source Tracking → issue number + repo + → CLI gets adapter for repo (e.g. GitHub) + → If adapter supports post_comment: adapter.post_comment(issue_number, formatted_body) + → GitHub API: POST .../issues/{n}/comments + → CLI reports success or failure +``` + +## Contract enforcement + +- New public functions (e.g. standup view builder, comment poster) shall have @icontract and @beartype. +- The adapter interface extension (`post_comment`) is optional, with a default of "not supported", to keep backward compatibility. + +## Fallback / offline + +- Standup view is read-only from local/cached data; no network required for view-only use. +- Posting a comment requires network and auth; failures (rate limit, auth) are reported; no silent swallow. + +## Alignment with existing sync bridge + +- Existing `specfact sync bridge --add-progress-comment` and `--track-code-changes` already add progress comments to GitHub issues. The standup comment can reuse or extend that path (e.g. the standup format as a variant of the progress comment) to avoid duplicate comment-posting logic. Standup/progress is exposed under the backlog command group as `specfact backlog daily` (no top-level standup or scrum command). diff --git a/openspec/changes/daily-standup-progress-support/proposal.md b/openspec/changes/daily-standup-progress-support/proposal.md new file mode 100644 index 00000000..318dabb9 --- /dev/null +++ b/openspec/changes/daily-standup-progress-support/proposal.md @@ -0,0 +1,31 @@ +# Change: Daily standup and progress support + +## Why + +Bridge comments and sync already support exporting/updating change proposals and issues.
For daily standup there is no structured "standup" view that aggregates my items, recent activity, and blockers; progress/standup notes are not first-class artifacts (e.g. in yesterday/today/blockers format) that could be pushed to issue comments. Teams duplicate standup info across tools; SpecFact can surface progress from OpenSpec/bridge and optionally publish to GitHub/ADO so standup updates are visible where the team works. + +## What Changes + +- **NEW**: Add a lightweight standup/progress view under the backlog command group: list my change proposals or backlog items (by assignee or filter), with last-updated and status; optional one-line summary for yesterday/today/blockers from the proposal or linked issue body. Expose as `specfact backlog daily` (no top-level `specfact standup`). +- **NEW**: Optional mode to post the standup summary as a comment on linked issues via `specfact backlog daily` (e.g. `--post`) or reuse of `specfact sync bridge --add-progress-comment` with a standup format (e.g. GitHub issue comment). +- **EXTEND**: Bridge/adapters: support posting a comment to the linked issue when the adapter supports it (e.g. GitHub). +- **EXTEND**: Documentation (agile-scrum-workflows, devops-adapter-integration) for daily standup with SpecFact. + +## Capabilities + +- **daily-standup**: Standup view (list my/filtered items with status and last activity; optional standup summary lines) and optional posting of a standup comment to the linked issue via adapter. + +## Impact + +- **Affected specs**: New `openspec/changes/daily-standup-progress-support/specs/daily-standup/spec.md` (Given/When/Then for standup view and comment). +- **Affected code**: `src/specfact_cli/commands/` (extend the backlog command group with a `backlog daily` subcommand for the standup view and optional comment post); bridge/adapters extended to post a comment when supported (e.g. GitHub). +- **Affected documentation**: docs/guides/agile-scrum-workflows.md, docs/guides/devops-adapter-integration.md for the daily standup workflow.
+- **Integration points**: Existing `specfact sync bridge`, GitHub/ADO adapters; OpenSpec change proposals and backlog items. +- **Backward compatibility**: Additive only; existing sync/bridge behavior unchanged unless user opts into standup view or comment post. + +## Source Tracking + +- **GitHub Issue**: #168 +- **Issue URL**: +- **Repository**: nold-ai/specfact-cli +- **Last Synced Status**: proposed diff --git a/openspec/changes/daily-standup-progress-support/specs/daily-standup/spec.md b/openspec/changes/daily-standup-progress-support/specs/daily-standup/spec.md new file mode 100644 index 00000000..232b9420 --- /dev/null +++ b/openspec/changes/daily-standup-progress-support/specs/daily-standup/spec.md @@ -0,0 +1,71 @@ +# Daily Standup + +## ADDED Requirements + +### Requirement: Standup view + +The system SHALL provide a standup or progress view that lists change proposals or backlog items (by assignee or filter) with last-updated and status, and optional one-line summary for yesterday/today/blockers. + +**Rationale**: Teams need a single place to see "my items" and recent activity for daily standup without duplicating data in multiple tools. + +#### Scenario: List my items with status and last activity + +**Given**: A user has change proposals or backlog items assigned to them (or a filter is applied) + +**When**: The user runs the standup view (e.g. `specfact backlog daily` or equivalent under the backlog command group) + +**Then**: The system lists items (change proposal id or backlog item id, title, status, last-updated) for the user or filter + +**And**: Optional standup summary lines (yesterday/today/blockers) are shown when available from proposal or linked issue body + +**Acceptance Criteria**: + +- Output is readable (e.g. 
table or structured list) +- Last-updated is displayed per item +- Optional standup fields (yesterday, today, blockers) shown when present in source data + +#### Scenario: Standup view with assignee filter + +**Given**: A repo with multiple change proposals or backlog items and assignee metadata + +**When**: The user runs standup view with assignee filter (e.g. `--assignee me` or current user) + +**Then**: Only items matching the assignee are listed + +**And**: If no assignee filter is applied, all items (or default scope) are listed per command contract + +### Requirement: Post standup comment to linked issue + +The system SHALL support posting a standup summary as a comment on the linked issue (e.g. GitHub issue comment) when the user opts in and the adapter supports it. + +**Rationale**: Standup updates should be visible in the DevOps backend (GitHub, ADO) so the team sees progress where they work. + +#### Scenario: Post standup comment via GitHub adapter + +**Given**: A change proposal with Source Tracking linking to a GitHub issue (e.g. nold-ai/specfact-cli#N) + +**And**: The user has provided standup text (yesterday/today/blockers format) and opts to post (e.g. `specfact backlog daily --post` or equivalent) + +**When**: The user runs `specfact backlog daily --post` (or equivalent) and GitHub adapter is configured + +**Then**: The system adds a comment to the linked GitHub issue with the standup text (format: Yesterday / Today / Blockers or team-defined format) + +**And**: The comment is clearly identifiable (e.g. "Standup YYYY-MM-DD" or configurable prefix) + +**Acceptance Criteria**: + +- Comment is posted only when user opts in and adapter supports comments +- Format is configurable or follows a simple standard (yesterday, today, blockers) +- Failure to post (e.g. auth, rate limit) is reported clearly; no silent swallow + +#### Scenario: Post standup when adapter does not support comments + +**Given**: An adapter that does not support posting comments (e.g. 
read-only or no comment API) + +**When**: The user runs `specfact backlog daily --post` (or equivalent) + +**Then**: The system reports that posting is not supported for this adapter and does not attempt to post + +**Acceptance Criteria**: + +- Clear message; no crash or undefined behavior diff --git a/openspec/changes/daily-standup-progress-support/tasks.md b/openspec/changes/daily-standup-progress-support/tasks.md new file mode 100644 index 00000000..bddc59ab --- /dev/null +++ b/openspec/changes/daily-standup-progress-support/tasks.md @@ -0,0 +1,74 @@ +# Tasks: Daily standup and progress support + +## TDD / SDD order (enforced) + +Per `openspec/config.yaml`, **tests before code** apply to any task that adds or changes behavior. + +1. **Spec deltas** define behavior (Given/When/Then) in `openspec/changes/daily-standup-progress-support/specs/daily-standup/spec.md`. +2. **Tests second**: Write unit/integration tests from those scenarios; run tests and **expect failure** (no implementation yet). +3. **Code last**: Implement until tests pass and behavior satisfies the spec. + +Do not implement production code for new behavior until the corresponding tests exist and have been run (expecting failure). + +--- + +## 1. Create git branch from dev + +- [ ] 1.1 Ensure we're on dev and up to date: `git checkout dev && git pull origin dev` +- [ ] 1.2 Create branch with Development link to issue (if exists): `gh issue develop --repo nold-ai/specfact-cli --name feature/daily-standup-progress-support --checkout` +- [ ] 1.3 Or create branch without issue link: `git checkout -b feature/daily-standup-progress-support` (if no issue yet) +- [ ] 1.4 Verify branch was created: `git branch --show-current` + +## 2. 
Create GitHub issue in nold-ai/specfact-cli (mandatory) + +- [ ] 2.1 Create issue in nold-ai/specfact-cli: `gh issue create --repo nold-ai/specfact-cli --title "[Change] Daily standup and progress support" --body-file <file> --label "enhancement" --label "change-proposal"` +- [ ] 2.2 Use body from proposal (Why, What Changes, Acceptance Criteria); add footer `*OpenSpec Change Proposal: daily-standup-progress-support*` +- [ ] 2.3 Update `proposal.md` Source Tracking section with issue number, issue URL, repository nold-ai/specfact-cli, Last Synced Status: proposed +- [ ] 2.4 Link issue to project (optional): `gh project item-add 1 --owner nold-ai --url <issue-url>` (requires `gh auth refresh -s project` if needed) + +## 3. Verify spec deltas (SDD: specs first) + +- [ ] 3.1 Confirm `specs/daily-standup/spec.md` exists and is complete (ADDED requirements, Given/When/Then scenarios for standup view and post standup comment). + +- [ ] 3.2 Map scenarios to implementation: list my items with status/last activity, assignee filter, post standup comment via GitHub, adapter does not support comments. + +## 4. Tests first (TDD: write tests from spec scenarios; expect failure) + +- [ ] 4.1 Write unit or integration tests from `specs/daily-standup/spec.md` scenarios: standup view lists items with status and last-updated; optional standup summary lines; assignee filter; post standup comment (mock adapter); adapter without comment support reports clearly. +- [ ] 4.2 Run tests: `hatch run smart-test-unit` (or equivalent); **expect failure** (no implementation yet). +- [ ] 4.3 Document which scenarios are covered by which test modules. + +## 5. Implement standup view and optional comment (TDD: code until tests pass) + +- [ ] 5.1 Implement standup view: query change proposals and/or backlog items by assignee or filter; display item id, title, status, last-updated; optional standup fields (yesterday/today/blockers) when present in source.
+- [ ] 5.2 Expose via `specfact backlog daily` (backlog command group); keep scope minimal (read-only view from existing data). Do not add a top-level `specfact standup` command. +- [ ] 5.3 Optional: implement post standup comment: when user opts in and adapter supports comments (e.g. GitHub), add comment to linked issue with standup text (format: Yesterday / Today / Blockers); use existing GitHub adapter comment API if available. +- [ ] 5.4 When adapter does not support comments, report clearly; do not attempt to post. +- [ ] 5.5 Add or extend bridge/adapters to support posting comment (e.g. GitHub issue comment); ensure @icontract and @beartype on new public APIs. +- [ ] 5.6 Run tests again; **expect pass**; fix until all tests pass. + +## 6. Quality gates + +- [ ] 6.1 Run format and type-check: `hatch run format`, `hatch run type-check`. +- [ ] 6.2 Run contract test: `hatch run contract-test`. +- [ ] 6.3 Run full test suite: `hatch run smart-test` (or `hatch run smart-test-full`). +- [ ] 6.4 Ensure any new or modified public APIs have @icontract and @beartype where applicable. + +## 7. Documentation research and review + +- [ ] 7.1 Identify affected documentation: docs/guides/agile-scrum-workflows.md, docs/guides/devops-adapter-integration.md. +- [ ] 7.2 Update agile-scrum-workflows.md: add section or subsection for daily standup with SpecFact (`specfact backlog daily` view, optional post standup comment to linked issue). +- [ ] 7.3 Update devops-adapter-integration.md: document standup comment posting when using GitHub/ADO adapter (if implemented). +- [ ] 7.4 If adding a new doc page: set front-matter (layout, title, permalink, description) and update docs/_layouts/default.html sidebar if needed. + +## 8. Version and changelog (patch bump; required before PR) + +- [ ] 8.1 Bump **patch** version in `pyproject.toml` (e.g. X.Y.Z → X.Y.(Z+1)). +- [ ] 8.2 Sync version in `setup.py`, `src/__init__.py`, `src/specfact_cli/__init__.py` to match pyproject.toml. 
+- [ ] 8.3 Add CHANGELOG.md entry under new [X.Y.Z] - YYYY-MM-DD section: **Added** – Daily standup and progress support (standup view, optional post standup comment to linked issue). + +## 9. Create Pull Request to dev + +- [ ] 9.1 Ensure all changes are committed: `git add .` and `git commit -m "feat(backlog): add backlog daily for standup view and optional comment post"` +- [ ] 9.2 Push to remote: `git push origin feature/daily-standup-progress-support` +- [ ] 9.3 Create PR: `gh pr create --repo nold-ai/specfact-cli --base dev --head feature/daily-standup-progress-support --title "feat(backlog): add backlog daily for standup view and optional comment post" --body-file ` (use repo PR template; add OpenSpec change ID `daily-standup-progress-support` and summary; reference GitHub issue with `Fixes nold-ai/specfact-cli#`). +- [ ] 9.4 Verify PR and branch are linked to issue in Development section. diff --git a/openspec/changes/definition-of-done-support/CHANGE_VALIDATION.md b/openspec/changes/definition-of-done-support/CHANGE_VALIDATION.md new file mode 100644 index 00000000..5e9b6d6f --- /dev/null +++ b/openspec/changes/definition-of-done-support/CHANGE_VALIDATION.md @@ -0,0 +1,74 @@ +# Change Validation Report: definition-of-done-support + +**Validation Date**: 2026-01-30 +**Change Proposal**: [proposal.md](./proposal.md) +**Validation Method**: Dry-run and format/config compliance check + +## Executive Summary + +- **Breaking Changes**: 0 detected +- **Dependent Files**: Additive only (new DoD config, validator, optional hook into backlog list/refine/export; existing BacklogItem and backlog_commands unchanged except optional DoD path) +- **Impact Level**: Low +- **Validation Result**: Pass +- **User Decision**: N/A (no breaking changes) +- **Command placement**: DoD under backlog command group (`specfact backlog list --dod`, `specfact backlog dod`, etc.); no top-level DoD/scrum command (per harmonization) + +## Breaking Changes Detected + +None. 
Change is additive: new DoD config schema, loader, validator; optional DoD check for done items in backlog output/export; existing behavior unchanged unless user opts in. + +## Dependencies Affected + +- **Critical**: None +- **Recommended**: Reuse DoR patterns (config load, provider-agnostic rules) where applicable; BacklogItem state field used for "Done" filtering. +- **Optional**: None + +## Impact Assessment + +- **Code Impact**: New DoD config and validator; optional integration in backlog_commands.py (list/refine/export or new `backlog dod` subcommand). +- **Test Impact**: New tests from spec scenarios (config load, validation for done items, status in output). +- **Documentation Impact**: agile-scrum-workflows.md, backlog-refinement.md. +- **Release Impact**: Patch (additive feature). + +## Format Validation + +- **proposal.md Format**: Pass + - Title format: Correct (`# Change: Definition of Done (DoD) support`) + - Required sections: All present (Why, What Changes, Capabilities, Impact) + - "What Changes" format: Correct (bullet list with NEW/EXTEND) + - "Capabilities" section: Present (definition-of-done) + - "Impact" format: Correct + - Source Tracking section: Present (GitHub Issue #169, Issue URL, Repository, Last Synced Status) +- **tasks.md Format**: Pass + - Section headers: Hierarchical numbered format + - Task format: `- [ ] N.N [Description]` + - Sub-task format: Indented `- [ ] N.N.N` + - Config.yaml compliance: Pass + - TDD order section at top; tests before implementation (Section 4 before Section 5) + - Branch creation first (Section 1); PR creation last (Section 9) + - GitHub issue creation task (Section 2) for nold-ai/specfact-cli + - Version and changelog task (Section 8) before PR; patch bump and CHANGELOG sync + - Quality gates, documentation tasks present +- **specs Format**: Pass (Given/When/Then in specs/definition-of-done/spec.md) +- **design.md Format**: Pass (DoD config/validator, sequence, contract enforcement, fallback 
documented) +- **Config.yaml Compliance**: Pass + +## OpenSpec Validation + +- **Status**: Pass +- **Validation Command**: `openspec validate definition-of-done-support --strict` +- **Issues Found**: 0 +- **Issues Fixed**: 0 +- **Re-validated**: 2026-01-30 (status: all artifacts done; schema: spec-driven) + +## Recommended Improvements Applied + +1. **GitHub issue mandatory**: Task 2 creates issue in nold-ai/specfact-cli; proposal Source Tracking updated with #169. +2. **Patch version and changelog**: Task 8 bumps patch version, syncs pyproject.toml/setup.py/src __init__.py, and adds CHANGELOG.md entry. +3. **TDD order**: TDD/SDD section at top of tasks.md; Section 4 (tests first, expect failure) before Section 5 (implement until tests pass). +4. **Backlog harmonization**: DoD integrated under backlog group (list/refine/dod); no top-level DoD command. + +## Validation Artifacts + +- No temporary workspace used (dry-run analysis only). +- Change directory: `openspec/changes/definition-of-done-support/` diff --git a/openspec/changes/definition-of-done-support/design.md b/openspec/changes/definition-of-done-support/design.md new file mode 100644 index 00000000..2d16a0ac --- /dev/null +++ b/openspec/changes/definition-of-done-support/design.md @@ -0,0 +1,27 @@ +# Design: Definition of Done (DoD) support + +## DoD config and validator + +- **Config**: DoD config schema (e.g. YAML) with checklist items (e.g. tests_pass, docs_updated, code_reviewed). Store in `.specfact/dod.yaml` or under project templates. Reuse patterns from DoR (framework-specific rules, provider-agnostic where possible). +- **Validator**: Function that takes a BacklogItem (state=Done) and DoD config, runs the checklist (e.g. by checking item fields or linked artifacts), returns pass/fail and list of failed criteria. New public APIs shall have @icontract and @beartype. 
+- **Integration**: Hook into backlog list/refine/export; when items are in Done state and DoD is enabled, run validator and attach DoD status to output/export. Expose optionally via `specfact backlog list`, `specfact backlog refine`, or a dedicated `specfact backlog dod` / `specfact backlog validate` subcommand under the backlog group (no top-level DoD command). + +## Sequence (DoD validation for done items) + +``` +User → specfact backlog list --dod (or specfact backlog dod) + → CLI loads .specfact/dod.yaml (if present) + → CLI fetches backlog items (existing adapter) + → For each item in Done state: run DoD validator(item, config) + → Attach DoD status (pass/fail, failed criteria) to item + → Output/export includes DoD status per done item +``` + +## Contract enforcement + +- DoD config loader and validator shall have @icontract and @beartype. +- Backlog command integration: optional flag (e.g. `--dod`) to enable DoD; default off for backward compatibility. + +## Fallback / offline + +- DoD config is read from local project; no network required for config load. Validation may require item data already fetched by backlog adapter (offline if cache present). diff --git a/openspec/changes/definition-of-done-support/proposal.md b/openspec/changes/definition-of-done-support/proposal.md new file mode 100644 index 00000000..0176c996 --- /dev/null +++ b/openspec/changes/definition-of-done-support/proposal.md @@ -0,0 +1,31 @@ +# Change: Definition of Done (DoD) support + +## Why + +SpecFact CLI has Definition of Ready (DoR) for backlog refinement (readiness for sprint planning). Teams also need Definition of Done (DoD) to ensure items moved to "Done" meet completion criteria. DoD is not modeled or validated today; there is no way to define team DoD rules (e.g. checklist: tests pass, docs updated, code reviewed) and run them against items in Done state. + +## What Changes + +- **NEW**: Model DoD as a checklist or rule set (similar in spirit to DoR but for completion). 
Store DoD config per project (e.g. `.specfact/dod.yaml` or under templates). +- **NEW**: When listing or exporting backlog items in "Done" (or equivalent) state, optionally run DoD validation and attach DoD status (pass/fail + which criteria failed). +- **EXTEND**: Integrate into the **backlog command group** (e.g. `specfact backlog list`, `specfact backlog refine`, or a dedicated `specfact backlog dod` / `specfact backlog validate` subcommand): for items in done state, show DoD status in output and export. Do not add a top-level scrum/DoD command. +- **EXTEND**: Documentation (agile-scrum-workflows, backlog-refinement) for DoD workflow. + +## Capabilities + +- **definition-of-done**: DoD config load, DoD validation for done items, DoD status in CLI/export when enabled. + +## Impact + +- **Affected specs**: New `openspec/changes/definition-of-done-support/specs/definition-of-done/spec.md` (Given/When/Then for DoD config, validation, status output). +- **Affected code**: `src/specfact_cli/` (DoD config and validator); `src/specfact_cli/commands/backlog_commands.py` (optional DoD check for done items under backlog group). +- **Affected documentation** (): docs/guides/agile-scrum-workflows.md, docs/guides/backlog-refinement.md for DoD. +- **Integration points**: Existing backlog list/refine/export; BacklogItem state=Done; DoR patterns for reuse. +- **Backward compatibility**: Additive only; existing backlog behavior unchanged unless user opts into DoD validation. 
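To make the checklist idea concrete, a minimal `.specfact/dod.yaml` could look like the sketch below. This is illustrative only — the actual schema (field names, per-criterion options) is defined during implementation.

```yaml
# Illustrative sketch — the final schema is decided during implementation.
# Each entry names a completion criterion checked for items in Done state.
checklist:
  - tests_pass
  - docs_updated
  - code_reviewed
```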
+ +## Source Tracking + +- **GitHub Issue**: #169 +- **Issue URL**: +- **Repository**: nold-ai/specfact-cli +- **Last Synced Status**: proposed diff --git a/openspec/changes/definition-of-done-support/specs/definition-of-done/spec.md b/openspec/changes/definition-of-done-support/specs/definition-of-done/spec.md new file mode 100644 index 00000000..a196681b --- /dev/null +++ b/openspec/changes/definition-of-done-support/specs/definition-of-done/spec.md @@ -0,0 +1,71 @@ +# Definition of Done (DoD) + +## ADDED Requirements + +### Requirement: DoD configuration + +The system SHALL support loading a DoD configuration (checklist or rule set) from project config (e.g. `.specfact/dod.yaml` or under templates), similar in spirit to DoR. + +**Rationale**: Teams need to define completion criteria (e.g. tests pass, docs updated, code reviewed) per project. + +#### Scenario: Load DoD config from project + +**Given**: A project with a DoD config file (e.g. `.specfact/dod.yaml`) defining a checklist (e.g. tests_pass, docs_updated, code_reviewed) + +**When**: The user runs a backlog command that uses DoD (e.g. `specfact backlog list` or `specfact backlog dod` with DoD enabled) + +**Then**: The system loads the DoD config and uses it for validation + +**And**: If no DoD config exists, DoD validation is skipped or a clear message is shown + +**Acceptance Criteria**: + +- DoD config schema is documented; loader does not crash on missing or invalid config (report clearly). + +### Requirement: DoD validation for done items + +The system SHALL run DoD validation against backlog items in "Done" (or equivalent) state when the user opts in and config is present. + +**Rationale**: Only items in done state are checked against completion criteria. + +#### Scenario: DoD validation for done item + +**Given**: A backlog item in Done state and a loaded DoD config with criteria (e.g. 
tests_pass, docs_updated)
+
+**When**: The user runs backlog list/export or `specfact backlog dod` with DoD enabled
+
+**Then**: The system runs the DoD checklist against the item and produces a result (pass/fail + which criteria failed)
+
+**Acceptance Criteria**:
+
+- Result is deterministic; failed criteria are listed; errors are not silently swallowed.
+
+#### Scenario: DoD not run for non-done items
+
+**Given**: A backlog item not in Done state (e.g. In Progress)
+
+**When**: The user runs a command with DoD validation enabled
+
+**Then**: DoD validation is not applied to that item (or item is skipped for DoD)
+
+**Acceptance Criteria**:
+
+- Only items in Done (or equivalent) state are validated against DoD.
+
+### Requirement: DoD status in output and export
+
+The system SHALL display or export DoD status (pass/fail, criteria) for done items when DoD validation is enabled.
+
+**Rationale**: Teams need to see which done items meet DoD and which do not.
+
+#### Scenario: DoD status in CLI output
+
+**Given**: Backlog items in Done state and DoD validation has been run
+
+**When**: The user runs `specfact backlog list` (or equivalent) with DoD enabled
+
+**Then**: The output includes DoD status per done item (e.g. pass/fail, list of failed criteria)
+
+**Acceptance Criteria**:
+
+- Output is readable (e.g. column or section per item); export format includes DoD status when applicable.
diff --git a/openspec/changes/definition-of-done-support/tasks.md b/openspec/changes/definition-of-done-support/tasks.md
new file mode 100644
index 00000000..d0903043
--- /dev/null
+++ b/openspec/changes/definition-of-done-support/tasks.md
@@ -0,0 +1,73 @@
+# Tasks: Definition of Done (DoD) support
+
+## TDD / SDD order (enforced)
+
+Per `openspec/config.yaml`, **tests before code** applies to any task that adds or changes behavior.
+
+1. **Spec deltas** define behavior (Given/When/Then) in `openspec/changes/definition-of-done-support/specs/definition-of-done/spec.md`.
+2.
**Tests second**: Write unit/integration tests from those scenarios; run tests and **expect failure** (no implementation yet). +3. **Code last**: Implement until tests pass and behavior satisfies the spec. + +Do not implement production code for new behavior until the corresponding tests exist and have been run (expecting failure). + +--- + +## 1. Create git branch from dev + +- [ ] 1.1 Ensure we're on dev and up to date: `git checkout dev && git pull origin dev` +- [ ] 1.2 Create branch with Development link to issue (if exists): `gh issue develop --repo nold-ai/specfact-cli --name feature/definition-of-done-support --checkout` +- [ ] 1.3 Or create branch without issue link: `git checkout -b feature/definition-of-done-support` (if no issue yet) +- [ ] 1.4 Verify branch was created: `git branch --show-current` + +## 2. Create GitHub issue in nold-ai/specfact-cli (mandatory) + +- [ ] 2.1 Create issue in nold-ai/specfact-cli: `gh issue create --repo nold-ai/specfact-cli --title "[Change] Definition of Done (DoD) support" --body-file --label "enhancement" --label "change-proposal"` +- [ ] 2.2 Use body from proposal (Why, What Changes, Acceptance Criteria); add footer `*OpenSpec Change Proposal: definition-of-done-support*` +- [ ] 2.3 Update `proposal.md` Source Tracking section with issue number, issue URL, repository nold-ai/specfact-cli, Last Synced Status: proposed +- [ ] 2.4 Link issue to project (optional): `gh project item-add 1 --owner nold-ai --url ` (requires `gh auth refresh -s project` if needed) + +## 3. Verify spec deltas (SDD: specs first) + +- [ ] 3.1 Confirm `specs/definition-of-done/spec.md` exists and is complete (ADDED requirements, Given/When/Then for DoD config load, validation for done items, status in output). +- [ ] 3.2 Map scenarios to implementation: load DoD config, validate done items only, DoD status in CLI/export. + +## 4. 
Tests first (TDD: write tests from spec scenarios; expect failure) + +- [ ] 4.1 Write unit or integration tests from `specs/definition-of-done/spec.md` scenarios: DoD config load (present/missing); DoD validation for done item (pass/fail, failed criteria); non-done items skipped; DoD status in output. +- [ ] 4.2 Run tests: `hatch run smart-test-unit` (or equivalent); **expect failure** (no implementation yet). +- [ ] 4.3 Document which scenarios are covered by which test modules. + +## 5. Implement DoD (TDD: code until tests pass) + +- [ ] 5.1 Define DoD config schema and loader (e.g. `.specfact/dod.yaml`); load from project; handle missing/invalid config without crash. +- [ ] 5.2 Implement DoD validator: takes BacklogItem (state=Done) and config, runs checklist, returns pass/fail and failed criteria; ensure @icontract and @beartype on new public APIs. +- [ ] 5.3 Hook into backlog list/refine/export: when items in Done state and DoD enabled (e.g. `--dod`), run validator and attach DoD status. Expose under backlog group (e.g. `specfact backlog list --dod` or `specfact backlog dod`); do not add top-level DoD command. +- [ ] 5.4 Include DoD status in CLI output and export for done items when enabled. +- [ ] 5.5 Run tests again; **expect pass**; fix until all tests pass. + +## 6. Quality gates + +- [ ] 6.1 Run format and type-check: `hatch run format`, `hatch run type-check`. +- [ ] 6.2 Run contract test: `hatch run contract-test`. +- [ ] 6.3 Run full test suite: `hatch run smart-test` (or `hatch run smart-test-full`). +- [ ] 6.4 Ensure any new or modified public APIs have @icontract and @beartype where applicable. + +## 7. Documentation research and review + +- [ ] 7.1 Identify affected documentation: docs/guides/agile-scrum-workflows.md, docs/guides/backlog-refinement.md. +- [ ] 7.2 Update agile-scrum-workflows.md: add section or subsection for DoD with SpecFact (config, validation for done items, status in output). 
+- [ ] 7.3 Update backlog-refinement.md: document DoD support and how it complements DoR. +- [ ] 7.4 If adding a new doc page: set front-matter (layout, title, permalink, description) and update docs/_layouts/default.html sidebar if needed. + +## 8. Version and changelog (patch bump; required before PR) + +- [ ] 8.1 Bump **patch** version in `pyproject.toml` (e.g. X.Y.Z → X.Y.(Z+1)). +- [ ] 8.2 Sync version in `setup.py`, `src/__init__.py`, `src/specfact_cli/__init__.py` to match pyproject.toml. +- [ ] 8.3 Add CHANGELOG.md entry under new [X.Y.Z] - YYYY-MM-DD section: **Added** – Definition of Done (DoD) support (config, validation for done items, status in backlog output/export). + +## 9. Create Pull Request to dev + +- [ ] 9.1 Ensure all changes are committed: `git add .` and `git commit -m "feat(backlog): add Definition of Done (DoD) support for done items"` +- [ ] 9.2 Push to remote: `git push origin feature/definition-of-done-support` +- [ ] 9.3 Create PR: `gh pr create --repo nold-ai/specfact-cli --base dev --head feature/definition-of-done-support --title "feat(backlog): add Definition of Done (DoD) support for done items" --body-file ` (use repo PR template; add OpenSpec change ID `definition-of-done-support` and summary; reference GitHub issue with `Fixes nold-ai/specfact-cli#`). +- [ ] 9.4 Verify PR and branch are linked to issue in Development section. diff --git a/openspec/changes/implement-backlog-refine-import-from-tmp/tasks.md b/openspec/changes/implement-backlog-refine-import-from-tmp/tasks.md deleted file mode 100644 index 10e310fa..00000000 --- a/openspec/changes/implement-backlog-refine-import-from-tmp/tasks.md +++ /dev/null @@ -1,38 +0,0 @@ -# Tasks: Implement backlog refine --import-from-tmp - -## 1. 
Create git branch - -- [ ] 1.1.1 Ensure we're on dev and up to date: `git checkout dev && git pull origin dev` -- [ ] 1.1.2 Create branch: `git checkout -b feature/implement-backlog-refine-import-from-tmp` -- [ ] 1.1.3 Verify branch: `git branch --show-current` - -## 2. Parser for refined export format - -- [ ] 2.1.1 Add function to parse refined markdown (e.g. `_parse_refined_export_markdown(content: str) -> dict[str, dict]` returning id → {body_markdown, acceptance_criteria, title?, ...}) in `backlog_commands.py` or new module `src/specfact_cli/backlog/refine_export_parser.py` -- [ ] 2.1.2 Split content by `## Item` or `---` to get per-item blocks -- [ ] 2.1.3 From each block extract **ID** (required), **Body** (from ```markdown ... ```), **Acceptance Criteria** (optional), optionally **title** and metrics -- [ ] 2.1.4 Add unit tests for parser (export-format sample, multiple items, missing optional fields) -- [ ] 2.1.5 Run `hatch run format` and `hatch run type-check` - -## 3. Import branch in backlog refine command - -- [ ] 3.1.1 In the `if import_from_tmp:` block, after file-exists check: read file content, call parser, build map id → parsed fields -- [ ] 3.1.2 For each item in `items`, if item.id in map: set item.body_markdown, item.acceptance_criteria (and optionally title/metrics) from parsed fields -- [ ] 3.1.3 If `--write` is not set: print preview ("Would update N items") and return -- [ ] 3.1.4 If `--write` is set: get adapter via _build_adapter_kwargs and adapter_registry.get_adapter; for each updated item call adapter.update_backlog_item(item, update_fields=[...]) with same update_fields logic as interactive refine -- [ ] 3.1.5 Print success summary (e.g. "Updated N backlog items") -- [ ] 3.1.6 Remove "Import functionality pending implementation" message and TODO -- [ ] 3.1.7 Run `hatch run format` and `hatch run type-check` - -## 4. 
Tests and quality - -- [ ] 4.1.1 Add or extend test for refine --import-from-tmp (unit: parser; integration or unit with mock: import flow with --tmp-file and --write) -- [ ] 4.1.2 Run `hatch run contract-test` (or `hatch run smart-test`) -- [ ] 4.1.3 Run `hatch run lint` -- [ ] 4.1.4 Run `openspec validate implement-backlog-refine-import-from-tmp --strict` - -## 5. Documentation and PR - -- [ ] 5.1.1 Update CHANGELOG.md with fix entry -- [ ] 5.1.2 Ensure help text for --import-from-tmp and --tmp-file is accurate -- [ ] 5.1.3 Create Pull Request from feature/implement-backlog-refine-import-from-tmp to dev diff --git a/openspec/changes/sprint-planning-capacity-commitment-support/CHANGE_VALIDATION.md b/openspec/changes/sprint-planning-capacity-commitment-support/CHANGE_VALIDATION.md new file mode 100644 index 00000000..95ca49fa --- /dev/null +++ b/openspec/changes/sprint-planning-capacity-commitment-support/CHANGE_VALIDATION.md @@ -0,0 +1,73 @@ +# Change Validation Report: sprint-planning-capacity-commitment-support + +**Validation Date**: 2026-01-30 +**Change Proposal**: [proposal.md](./proposal.md) +**Validation Method**: Dry-run and format/config compliance check + +## Executive Summary + +- **Breaking Changes**: 0 detected +- **Dependent Files**: Additive only (new `specfact backlog sprint-summary` subcommand; existing `backlog` Typer group in `backlog_commands.py` will gain a new callback; capacity config and commitment aggregation are new modules) +- **Impact Level**: Low +- **Validation Result**: Pass +- **User Decision**: N/A (no breaking changes) +- **Command placement**: Sprint summary is under backlog command group (`specfact backlog sprint-summary`); no top-level `specfact sprint` command (per plan) + +## Breaking Changes Detected + +None. Change is additive: new sprint capacity config, commitment sum, sprint-summary output; existing backlog behavior unchanged. 
+ +## Dependencies Affected + +- **Critical**: None +- **Recommended**: Reuse BacklogItem.sprint and BacklogItem.story_points from existing models; capacity loader pattern similar to DoR/DoD config loaders. +- **Optional**: None + +## Impact Assessment + +- **Code Impact**: New subcommand (sprint-summary); new or extended config loader (sprint_capacity.yaml); commitment aggregation from backlog items. +- **Test Impact**: New tests from spec scenarios (capacity config load, commitment sum, over/under output, sprint-summary CLI). +- **Documentation Impact**: agile-scrum-workflows.md, backlog-refinement.md for sprint planning. +- **Release Impact**: Patch (additive feature). + +## Format Validation + +- **proposal.md Format**: Pass + - Title format: Correct (`# Change: Sprint planning (capacity and commitment) support`) + - Required sections: All present (Why, What Changes, Capabilities, Impact) + - "What Changes" format: Correct (bullet list with NEW/EXTEND) + - "Capabilities" section: Present (sprint-planning) + - "Impact" format: Correct + - Source Tracking section: Present (GitHub Issue #170, URL, repository) +- **tasks.md Format**: Pass + - Section headers: Hierarchical numbered format + - Task format: `- [ ] N.N [Description]` + - Sub-task format: Indented `- [ ] N.N.N` + - Config.yaml compliance: Pass + - TDD order section at top; tests before implementation (Section 4 before Section 5) + - Branch creation first (Section 1); PR creation last (Section 9) + - GitHub issue creation task (Section 2) for nold-ai/specfact-cli + - Version and changelog task (Section 8) before PR; patch bump and CHANGELOG sync + - Quality gates, documentation tasks present +- **specs Format**: Pass (Given/When/Then in specs/sprint-planning/spec.md) +- **design.md Format**: Pass (sequence, contract enforcement, fallback documented) +- **Config.yaml Compliance**: Pass + +## OpenSpec Validation + +- **Status**: Pass +- **Validation Command**: `openspec validate 
sprint-planning-capacity-commitment-support --strict` +- **Issues Found**: 0 +- **Issues Fixed**: 0 + +## Recommended Improvements Applied + +1. **GitHub issue mandatory**: Issue #170 created in nold-ai/specfact-cli; proposal Source Tracking updated. +2. **Patch version and changelog**: Task 8 bumps patch version, syncs pyproject.toml/setup.py/src __init__.py, and adds CHANGELOG.md entry. +3. **TDD order**: TDD/SDD section at top of tasks.md; Section 4 (tests first, expect failure) before Section 5 (implement until tests pass). +4. **Backlog harmonization**: Sprint planning is under `specfact backlog sprint-summary`; no top-level `specfact sprint` command. + +## Validation Artifacts + +- No temporary workspace used (dry-run analysis only). +- Change directory: `openspec/changes/sprint-planning-capacity-commitment-support/` diff --git a/openspec/changes/sprint-planning-capacity-commitment-support/design.md b/openspec/changes/sprint-planning-capacity-commitment-support/design.md new file mode 100644 index 00000000..a68b4448 --- /dev/null +++ b/openspec/changes/sprint-planning-capacity-commitment-support/design.md @@ -0,0 +1,28 @@ +# Design: Sprint planning (capacity and commitment) support + +## Capacity config and commitment aggregation + +- **Config**: Sprint capacity config schema (e.g. YAML) with sprint identifier → capacity (story points). Store in `.specfact/sprint_capacity.yaml` or similar. Load from project; handle missing/invalid config without crash. +- **Commitment**: Sum BacklogItem.story_points for items where BacklogItem.sprint matches the requested sprint. Commitment is adapter-agnostic (derived from existing BacklogItem data). +- **Integration**: New subcommand under backlog group: `specfact backlog sprint-summary` (optional `--sprint `). Output: sprint id, committed points, capacity (if configured), gap (over/under). No top-level `specfact sprint` command. 
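The commitment aggregation described above is small enough to sketch. The `BacklogItem` stand-in and the `committed_points` name below are illustrative, not the actual implementation; the real model lives in the existing codebase, and new public APIs would carry @icontract and @beartype per the contract-enforcement section.

```python
from __future__ import annotations

from dataclasses import dataclass


# Illustrative stand-in for the existing BacklogItem model; only the two
# fields the aggregation needs (sprint, story_points) are shown here.
@dataclass
class BacklogItem:
    id: str
    sprint: str | None = None
    story_points: int | None = None


def committed_points(items: list[BacklogItem], sprint: str) -> int:
    """Sum story points for items assigned to the given sprint.

    Items without story_points count as 0 (one of the documented-behavior
    options in the spec; excluding them entirely would also satisfy it).
    """
    return sum(item.story_points or 0 for item in items if item.sprint == sprint)


items = [
    BacklogItem("A", sprint="sprint_1", story_points=13),
    BacklogItem("B", sprint="sprint_1", story_points=8),
    BacklogItem("C", sprint="sprint_1", story_points=5),
    BacklogItem("D", sprint="sprint_2", story_points=3),
]
print(committed_points(items, "sprint_1"))  # 26, matching the spec example
```

With a capacity config loaded, the gap is then simply `capacity - committed` for sprints that have a capacity entry.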
+ +## Sequence (sprint summary) + +``` +User → specfact backlog sprint-summary [--sprint ] + → CLI loads .specfact/sprint_capacity.yaml (if present) + → CLI fetches backlog items (existing adapter) or uses cached list + → Filter items by sprint (if --sprint given) + → Sum story_points per sprint + → For each sprint: compare sum to capacity (if config present) + → Output: sprint, committed, capacity, gap (over/under) +``` + +## Contract enforcement + +- Capacity loader and commitment aggregator shall have @icontract and @beartype where they are public APIs. +- Backlog command: additive only; default behavior unchanged. + +## Fallback / offline + +- Capacity config is read from local project; no network required for config load. Commitment uses backlog item data (from adapter/cache); offline if data already available. diff --git a/openspec/changes/sprint-planning-capacity-commitment-support/proposal.md b/openspec/changes/sprint-planning-capacity-commitment-support/proposal.md new file mode 100644 index 00000000..4f608dc2 --- /dev/null +++ b/openspec/changes/sprint-planning-capacity-commitment-support/proposal.md @@ -0,0 +1,31 @@ +# Change: Sprint planning (capacity and commitment) support + +## Why + +SpecFact CLI supports sprint/release assignment and story points at the backlog-item level (e.g. BacklogItem fields, DoR), but there is no first-class support for sprint capacity (e.g. available story points per sprint), commitment vs capacity comparison (over/under committed), or a CLI/export view that shows sprint-level summary. Teams must manually sum story points and compare to capacity outside SpecFact. + +## What Changes + +- **NEW**: Introduce a lightweight notion of "sprint capacity" (e.g. config or optional file per project: capacity in story points per sprint identifier). +- **NEW**: When exporting or listing backlog items filtered by sprint, compute total story points for that sprint and compare to capacity (if configured). 
+- **NEW**: Add optional output (CLI and/or export) that shows: sprint id, total committed points, capacity, difference (over/under). Expose under the **backlog command group** as `specfact backlog sprint-summary` (or similar subcommand); do not add a top-level `specfact sprint` command. +- **EXTEND**: Documentation (agile-scrum-workflows, backlog-refinement) for sprint planning support. + +## Capabilities + +- **sprint-planning**: Capacity config load, commitment sum by sprint, over/under commitment comparison, sprint-summary CLI/export output. + +## Impact + +- **Affected specs**: New `openspec/changes/sprint-planning-capacity-commitment-support/specs/sprint-planning/spec.md` (Given/When/Then for capacity config, commitment sum, over/under output). +- **Affected code**: `src/specfact_cli/commands/backlog_commands.py` (sprint-summary subcommand or extend existing); `src/specfact_cli/` (models or config for sprint capacity, commitment aggregation). +- **Affected documentation** (): docs/guides/agile-scrum-workflows.md, docs/guides/backlog-refinement.md for sprint planning. +- **Integration points**: Existing backlog list/export; BacklogItem.sprint + story_points; adapter-agnostic (capacity from `.specfact/sprint_capacity.yaml` or similar). +- **Backward compatibility**: Additive only; existing backlog behavior unchanged unless user uses sprint-summary or config. 
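As an illustration, the capacity file could be as simple as the flat sprint-to-points mapping used in the spec examples (hypothetical schema; the final format is decided during implementation):

```yaml
# .specfact/sprint_capacity.yaml — illustrative sketch, not the final schema.
# Maps sprint identifier to available story points.
sprint_1: 40
sprint_2: 38
```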
+ +## Source Tracking + +- **GitHub Issue**: #170 +- **Issue URL**: +- **Repository**: nold-ai/specfact-cli +- **Last Synced Status**: proposed diff --git a/openspec/changes/sprint-planning-capacity-commitment-support/specs/sprint-planning/spec.md b/openspec/changes/sprint-planning-capacity-commitment-support/specs/sprint-planning/spec.md new file mode 100644 index 00000000..f2e95586 --- /dev/null +++ b/openspec/changes/sprint-planning-capacity-commitment-support/specs/sprint-planning/spec.md @@ -0,0 +1,77 @@ +# Sprint planning (capacity and commitment) + +## ADDED Requirements + +### Requirement: Sprint capacity configuration + +The system SHALL support loading sprint capacity configuration from project config (e.g. `.specfact/sprint_capacity.yaml`) mapping sprint identifiers to available story points per sprint. + +**Rationale**: Teams need to define capacity (e.g. velocity or available points) per sprint to compare with commitment. + +#### Scenario: Load sprint capacity config from project + +**Given**: A project with a sprint capacity config file (e.g. `.specfact/sprint_capacity.yaml`) defining capacity per sprint (e.g. sprint_1: 40, sprint_2: 38) + +**When**: The user runs a backlog command that uses sprint summary (e.g. `specfact backlog sprint-summary` or equivalent) + +**Then**: The system loads the capacity config and uses it for comparison + +**And**: If no capacity config exists, the system shows committed points only or a clear message that capacity is not configured + +**Acceptance Criteria**: + +- Capacity config schema is documented; loader does not crash on missing or invalid config (report clearly). + +### Requirement: Commitment sum by sprint + +The system SHALL compute total committed story points per sprint from backlog items assigned to that sprint (BacklogItem.sprint + story_points). + +**Rationale**: Commitment is derived from items in the sprint; no manual sum required. 
+ +#### Scenario: Sum committed points for a sprint + +**Given**: Backlog items with sprint assignment and story_points (e.g. items A, B, C in sprint_1 with points 13, 8, 5) + +**When**: The user requests sprint summary for that sprint (e.g. `specfact backlog sprint-summary --sprint sprint_1`) + +**Then**: The system sums story_points for all items in that sprint and reports total committed points (e.g. 26) + +**Acceptance Criteria**: + +- Sum is deterministic; items without story_points are treated as 0 or excluded per documented behavior. + +### Requirement: Over/under commitment output + +The system SHALL compare total committed points to capacity (when configured) and report over-commitment (committed > capacity) or under-commitment/slack (committed < capacity). + +**Rationale**: Teams need to see at a glance whether the sprint is over or under committed. + +#### Scenario: Sprint summary with capacity comparison + +**Given**: Capacity for sprint_1 is 40 points and committed points from backlog items are 26 + +**When**: The user runs `specfact backlog sprint-summary` for that sprint + +**Then**: The output shows sprint id, total committed points, capacity, and difference (e.g. sprint_1, committed: 26, capacity: 40, gap: -14 or "under by 14") + +**Acceptance Criteria**: + +- Output is readable (CLI and/or export); when capacity is not configured, show committed only; over-commitment shows positive gap or "over by X". + +### Requirement: Sprint summary under backlog group + +The system SHALL expose sprint summary under the backlog command group (e.g. `specfact backlog sprint-summary`); there SHALL be no top-level `specfact sprint` command. + +**Rationale**: Align with other scrum/backlog features under `specfact backlog`. 
+ +#### Scenario: Invoke sprint summary via backlog + +**Given**: SpecFact CLI is installed and project has backlog and optional capacity config + +**When**: The user runs `specfact backlog sprint-summary` (with optional `--sprint `) + +**Then**: The command runs and outputs sprint-level summary (committed, capacity if configured, gap) + +**Acceptance Criteria**: + +- Command is discoverable under `specfact backlog --help`; behavior matches spec scenarios above. diff --git a/openspec/changes/sprint-planning-capacity-commitment-support/tasks.md b/openspec/changes/sprint-planning-capacity-commitment-support/tasks.md new file mode 100644 index 00000000..18656214 --- /dev/null +++ b/openspec/changes/sprint-planning-capacity-commitment-support/tasks.md @@ -0,0 +1,73 @@ +# Tasks: Sprint planning (capacity and commitment) support + +## TDD / SDD order (enforced) + +Per `openspec/config.yaml`, **tests before code** applies to any task that adds or changes behavior. + +1. **Spec deltas** define behavior (Given/When/Then) in `openspec/changes/sprint-planning-capacity-commitment-support/specs/sprint-planning/spec.md`. +2. **Tests second**: Write unit/integration tests from those scenarios; run tests and **expect failure** (no implementation yet). +3. **Code last**: Implement until tests pass and behavior satisfies the spec. +
Do not implement production code for new behavior until the corresponding tests exist and have been run (expecting failure). + +--- + +## 1. Create git branch from dev + +- [ ] 1.1 Ensure we're on dev and up to date: `git checkout dev && git pull origin dev` +- [ ] 1.2 Create branch with Development link to issue (if exists): `gh issue develop 170 --repo nold-ai/specfact-cli --name feature/sprint-planning-capacity-commitment-support --checkout` +- [ ] 1.3 Or create branch without issue link: `git checkout -b feature/sprint-planning-capacity-commitment-support` (if no issue yet) +- [ ] 1.4 Verify branch was created: `git branch --show-current` + +## 2.
Create GitHub issue in nold-ai/specfact-cli (mandatory) + +- [ ] 2.1 Create issue in nold-ai/specfact-cli: `gh issue create --repo nold-ai/specfact-cli --title "[Change] Sprint planning (capacity and commitment) support" --body-file --label "enhancement" --label "change-proposal"` +- [ ] 2.2 Use body from proposal (Why, What Changes, Acceptance Criteria); add footer `*OpenSpec Change Proposal: sprint-planning-capacity-commitment-support*` +- [ ] 2.3 Update `proposal.md` Source Tracking section with issue number, issue URL, repository nold-ai/specfact-cli, Last Synced Status: proposed +- [ ] 2.4 Link issue to project (optional): `gh project item-add 1 --owner nold-ai --url ` (requires `gh auth refresh -s project` if needed) + +## 3. Verify spec deltas (SDD: specs first) + +- [ ] 3.1 Confirm `specs/sprint-planning/spec.md` exists and is complete (ADDED requirements, Given/When/Then for capacity config, commitment sum, over/under output). +- [ ] 3.2 Map scenarios to implementation: load capacity config, sum story_points by sprint, compare to capacity, output sprint-summary. + +## 4. Tests first (TDD: write tests from spec scenarios; expect failure) + +- [ ] 4.1 Write unit or integration tests from `specs/sprint-planning/spec.md` scenarios: capacity config load (present/missing); commitment sum per sprint; over/under comparison; sprint-summary output format. +- [ ] 4.2 Run tests: `hatch run smart-test-unit` (or equivalent); **expect failure** (no implementation yet). +- [ ] 4.3 Document which scenarios are covered by which test modules. + +## 5. Implement sprint planning (TDD: code until tests pass) + +- [ ] 5.1 Define sprint capacity config schema and loader (e.g. `.specfact/sprint_capacity.yaml`); load from project; handle missing/invalid config without crash. +- [ ] 5.2 Implement commitment aggregation: sum BacklogItem.story_points by BacklogItem.sprint; ensure @icontract and @beartype on new public APIs. 
+- [ ] 5.3 Add `specfact backlog sprint-summary` subcommand (optional `--sprint `): output sprint id, committed points, capacity (if configured), gap (over/under). Do not add top-level `specfact sprint` command. +- [ ] 5.4 Include sprint-summary in CLI output and optionally in export when applicable. +- [ ] 5.5 Run tests again; **expect pass**; fix until all tests pass. + +## 6. Quality gates + +- [ ] 6.1 Run format and type-check: `hatch run format`, `hatch run type-check`. +- [ ] 6.2 Run contract test: `hatch run contract-test`. +- [ ] 6.3 Run full test suite: `hatch run smart-test` (or `hatch run smart-test-full`). +- [ ] 6.4 Ensure any new or modified public APIs have @icontract and @beartype where applicable. + +## 7. Documentation research and review + +- [ ] 7.1 Identify affected documentation: docs/guides/agile-scrum-workflows.md, docs/guides/backlog-refinement.md. +- [ ] 7.2 Update agile-scrum-workflows.md: add section or subsection for sprint planning with SpecFact (capacity config, commitment vs capacity, sprint-summary). +- [ ] 7.3 Update backlog-refinement.md: document sprint-summary and capacity/commitment workflow. +- [ ] 7.4 If adding a new doc page: set front-matter (layout, title, permalink, description) and update docs/_layouts/default.html sidebar if needed. + +## 8. Version and changelog (patch bump; required before PR) + +- [ ] 8.1 Bump **patch** version in `pyproject.toml` (e.g. X.Y.Z → X.Y.(Z+1)). +- [ ] 8.2 Sync version in `setup.py`, `src/__init__.py`, `src/specfact_cli/__init__.py` to match pyproject.toml. +- [ ] 8.3 Add CHANGELOG.md entry under new [X.Y.Z] - YYYY-MM-DD section: **Added** – Sprint planning (capacity and commitment) support: `specfact backlog sprint-summary`, capacity config, commitment vs capacity comparison. + +## 9. 
Create Pull Request to dev + +- [ ] 9.1 Ensure all changes are committed: `git add .` and `git commit -m "feat(backlog): add sprint planning capacity and commitment support"` +- [ ] 9.2 Push to remote: `git push origin feature/sprint-planning-capacity-commitment-support` +- [ ] 9.3 Create PR: `gh pr create --repo nold-ai/specfact-cli --base dev --head feature/sprint-planning-capacity-commitment-support --title "feat(backlog): add sprint planning capacity and commitment support" --body-file ` (use repo PR template; add OpenSpec change ID `sprint-planning-capacity-commitment-support` and summary; reference GitHub issue with `Fixes nold-ai/specfact-cli#170`). +- [ ] 9.4 Verify PR and branch are linked to issue in Development section. diff --git a/openspec/changes/story-complexity-splitting-hints-support/CHANGE_VALIDATION.md b/openspec/changes/story-complexity-splitting-hints-support/CHANGE_VALIDATION.md new file mode 100644 index 00000000..83a93d5d --- /dev/null +++ b/openspec/changes/story-complexity-splitting-hints-support/CHANGE_VALIDATION.md @@ -0,0 +1,74 @@ +# Change Validation Report: story-complexity-splitting-hints-support + +**Validation Date**: 2026-01-30 +**Change Proposal**: [proposal.md](./proposal.md) +**Validation Method**: Dry-run and format/config compliance check + +## Executive Summary + +- **Breaking Changes**: 0 detected +- **Dependent Files**: Additive only (extend `specfact backlog refine` with complexity/splitting; new or extended module for complexity score and splitting suggestion; existing backlog_commands.py and refinement flow) +- **Impact Level**: Low +- **Validation Result**: Pass +- **User Decision**: N/A (no breaking changes) +- **Command placement**: Complexity and splitting integrated into `specfact backlog refine` only; no top-level scrum/refine command (per plan) + +## Breaking Changes Detected + +None. 
Change is additive: complexity score, needs_splitting predicate, splitting suggestion in refinement output and export-to-tmp; existing refine behavior unchanged for non-complex items. + +## Dependencies Affected + +- **Critical**: None +- **Recommended**: Reuse BacklogItem (story_points, business_value, acceptance_criteria) from existing models; align with existing refinement result type and export-to-tmp format. +- **Optional**: None + +## Impact Assessment + +- **Code Impact**: New or extended module (complexity score, needs_splitting, splitting suggestion); integration into backlog refine output and export-to-tmp. +- **Test Impact**: New tests from spec scenarios (complexity score, needs_splitting, splitting suggestion, refinement output for complex items). +- **Documentation Impact**: backlog-refinement.md for complexity and splitting hints. +- **Release Impact**: Patch (additive feature). + +## Format Validation + +- **proposal.md Format**: Pass + - Title format: Correct (`# Change: Story complexity and splitting hints support`) + - Required sections: All present (Why, What Changes, Capabilities, Impact) + - "What Changes" format: Correct (bullet list with NEW/EXTEND) + - "Capabilities" section: Present (story-complexity) + - "Impact" format: Correct + - Source Tracking section: Present (GitHub Issue #171, URL, repository) +- **tasks.md Format**: Pass + - Section headers: Hierarchical numbered format + - Task format: `- [ ] N.N [Description]` + - Sub-task format: Indented `- [ ] N.N.N` + - Config.yaml compliance: Pass + - TDD order section at top; tests before implementation (Section 4 before Section 5) + - Branch creation first (Section 1); PR creation last (Section 9) + - GitHub issue creation task (Section 2) for nold-ai/specfact-cli + - Version and changelog task (Section 8) before PR; patch bump and CHANGELOG sync + - Quality gates, documentation tasks present +- **specs Format**: Pass (Given/When/Then in specs/story-complexity/spec.md) +- **design.md 
Format**: Pass (sequence, contract enforcement, fallback documented) +- **Config.yaml Compliance**: Pass + +## OpenSpec Validation + +- **Status**: Pass +- **Validation Command**: `openspec validate story-complexity-splitting-hints-support --strict` +- **Issues Found**: 0 +- **Issues Fixed**: 0 + +## Recommended Improvements Applied + +1. **GitHub issue mandatory**: Issue #171 created in nold-ai/specfact-cli; proposal Source Tracking updated. +2. **Patch version and changelog**: Task 8 bumps patch version, syncs pyproject.toml/setup.py/src __init__.py, and adds CHANGELOG entry. +3. **TDD order**: TDD/SDD section at top of tasks.md; Section 4 (tests first, expect failure) before Section 5 (implement until tests pass). +4. **Backlog harmonization**: Complexity and splitting integrated into `specfact backlog refine` only; no top-level scrum/refine command. +5. **Spec alignment**: Spec delta references main `openspec/specs/backlog-refinement/spec.md` Story Complexity Analysis; scenarios restate requirements for this change scope. + +## Validation Artifacts + +- No temporary workspace used (dry-run analysis only). +- Change directory: `openspec/changes/story-complexity-splitting-hints-support/` diff --git a/openspec/changes/story-complexity-splitting-hints-support/design.md b/openspec/changes/story-complexity-splitting-hints-support/design.md new file mode 100644 index 00000000..384e831a --- /dev/null +++ b/openspec/changes/story-complexity-splitting-hints-support/design.md @@ -0,0 +1,33 @@ +# Design: Story complexity and splitting hints support + +## Complexity score and needs_splitting + +- **Complexity score**: Function of story_points and business_value (e.g. weighted combination or simple threshold on story_points). Configurable threshold (default 13) for "needs splitting"; stories above threshold are flagged. +- **needs_splitting(item, threshold)**: Predicate: True when item.story_points > threshold (or when complexity score exceeds threshold if score is used). 
Handle missing story_points (e.g. treat as 0 or skip). New public APIs shall have @icontract and @beartype. +- **Configuration**: Threshold may be read from project config (e.g. `.specfact/` or refinement config); default 13 if not set. + +## Splitting suggestion + +- **Input**: BacklogItem (story_points, business_value, acceptance_criteria); optional AI hint for split boundaries. +- **Output**: Rationale (text) + recommended split points (e.g. list of boundaries derived from acceptance_criteria or heuristic). Provider-agnostic; use BacklogItem fields; optional AI for finer boundaries. +- **Integration**: When refinement completes for an item, if needs_splitting(item, threshold), append "Story splitting suggestion" block to refinement result; include in CLI output and in export-to-tmp format. + +## Sequence (refine with splitting suggestion) + +``` +User → specfact backlog refine [args] + → Refinement runs (existing flow) + → For each refined item: compute complexity / needs_splitting(threshold) + → If needs_splitting: generate splitting suggestion (rationale + split points) + → Append splitting suggestion to item output and export-to-tmp + → Emit refinement output (CLI and/or export) +``` + +## Contract enforcement + +- Complexity score and needs_splitting shall have @icontract and @beartype where they are public APIs. +- Splitting suggestion generator: same; handle missing fields without crash. + +## Fallback / offline + +- No network required for complexity or threshold; optional AI hint for split boundaries may require network if used; design for offline-first (heuristic split points from acceptance_criteria when AI unavailable). 
diff --git a/openspec/changes/story-complexity-splitting-hints-support/proposal.md b/openspec/changes/story-complexity-splitting-hints-support/proposal.md new file mode 100644 index 00000000..8488794c --- /dev/null +++ b/openspec/changes/story-complexity-splitting-hints-support/proposal.md @@ -0,0 +1,31 @@ +# Change: Story complexity and splitting hints support + +## Why + +The backlog-refinement spec (openspec/specs/backlog-refinement/spec.md) includes "Story Complexity Analysis" and related scenarios (complexity score, multi-sprint detection, splitting suggestion in refinement output), but this behavior is not implemented. Teams need complexity scores considering story points and business value, flagging of stories > 13 points for potential splitting, suggestions to split into multiple stories under the same feature with rationale, and splitting suggestion included in refinement output when a story is complex. Without this, refinement sessions do not surface size/scope risks and teams may commit to oversized stories. + +## What Changes + +- **NEW**: Implement complexity calculation (story_points, business_value) and a configurable threshold (e.g. 13 points) for "needs splitting" flag. +- **NEW**: Add splitting detection that suggests split points and rationale (e.g. by acceptance criteria or logical boundaries). +- **EXTEND**: Integrate into **backlog refine** flow (`specfact backlog refine`): when refinement completes for a complex story, include a "Story splitting suggestion" block in the output (and in export-to-tmp format) with recommended split points and rationale. All agile/backlog features stay under the backlog command group; no top-level scrum/refine command. +- **EXTEND**: Documentation (backlog-refinement guide, reference) for complexity and splitting hints. 
+ +## Capabilities + +- **story-complexity**: Complexity score (story_points, business_value), needs_splitting predicate (configurable threshold), splitting suggestion (rationale + split points), integration into refinement output/export. + +## Impact + +- **Affected specs**: New `openspec/changes/story-complexity-splitting-hints-support/specs/story-complexity/spec.md` (Given/When/Then for complexity score, needs splitting, splitting suggestion in refinement output); references main `openspec/specs/backlog-refinement/spec.md` Story Complexity Analysis. +- **Affected code**: `src/specfact_cli/commands/backlog_commands.py` (integrate complexity/splitting into refine); `src/specfact_cli/` (new or existing module for complexity score, splitting suggestion). +- **Affected documentation** (): docs/guides/backlog-refinement.md for complexity and splitting hints. +- **Integration points**: Existing `specfact backlog refine`; BacklogItem (story_points, business_value, acceptance_criteria); optional AI hint for split boundaries; provider-agnostic. +- **Backward compatibility**: Additive only; refinement output gains optional splitting suggestion section for complex stories; threshold configurable. + +## Source Tracking + +- **GitHub Issue**: #171 +- **Issue URL**: +- **Repository**: nold-ai/specfact-cli +- **Last Synced Status**: proposed diff --git a/openspec/changes/story-complexity-splitting-hints-support/specs/story-complexity/spec.md b/openspec/changes/story-complexity-splitting-hints-support/specs/story-complexity/spec.md new file mode 100644 index 00000000..43225ce3 --- /dev/null +++ b/openspec/changes/story-complexity-splitting-hints-support/specs/story-complexity/spec.md @@ -0,0 +1,95 @@ +# Story complexity and splitting hints + +## ADDED Requirements + +This change implements the Story Complexity Analysis requirements from `openspec/specs/backlog-refinement/spec.md`; scenarios below restate and scope them for this change. 
+ +### Requirement: Complexity score and needs-splitting flag + +The system SHALL calculate a complexity score from story_points and business_value and SHALL flag stories above a configurable threshold (e.g. 13 points) as needing potential splitting. + +**Rationale**: Teams need to identify oversized stories before commitment. + +#### Scenario: Story points complexity calculation + +**Given**: A backlog item with `story_points = 13` and `business_value = 8` + +**When**: Complexity score is calculated + +**Then**: The score considers both story points and business value + +**And**: Stories > 13 points (or above configured threshold) are flagged for potential splitting + +**Acceptance Criteria**: + +- Threshold is configurable (e.g. via config or default 13); needs_splitting(item, threshold) is deterministic. + +#### Scenario: Needs splitting predicate + +**Given**: A backlog item with story_points = 21 and threshold = 13 + +**When**: needs_splitting(item, threshold) is evaluated + +**Then**: The result is True (item is flagged for splitting) + +**Acceptance Criteria**: + +- Items at or below threshold return False; items above return True; missing story_points handled per documented behavior. + +### Requirement: Splitting suggestion (rationale and split points) + +The system SHALL suggest splitting into multiple stories under the same feature and provide rationale and recommended split points (e.g. by acceptance criteria or logical boundaries). + +**Rationale**: Teams need actionable guidance on how to split complex stories. + +#### Scenario: Splitting suggestion generation + +**Given**: A backlog item flagged for splitting (e.g. story_points > 13) with acceptance_criteria or logical boundaries + +**When**: Story splitting detection is performed + +**Then**: The system suggests splitting into multiple stories under the same feature + +**And**: The suggestion includes rationale and recommended split points (e.g. 
derived from acceptance criteria or optional AI hint) + +**Acceptance Criteria**: + +- Suggestion is deterministic or explicitly best-effort; rationale and split points are present in output when available. + +### Requirement: Splitting suggestion in refinement output + +The system SHALL include a story splitting suggestion in refinement output (CLI and export-to-tmp) when the refined item is complex (above threshold). + +**Rationale**: Refinement sessions must surface size/scope risks in the same flow. + +#### Scenario: Story splitting suggestion in refinement output + +**Given**: A backlog item refinement session with a complex story (e.g. story_points > 13) + +**When**: Refinement completes and output (or export-to-tmp) is emitted + +**Then**: The output includes a "Story splitting suggestion" section for that item + +**And**: The suggestion includes recommended split points and rationale + +**Acceptance Criteria**: + +- Section appears only for items above threshold; non-complex items do not show splitting suggestion; export-to-tmp format includes suggestion when applicable. + +### Requirement: Integration under backlog refine only + +The system SHALL integrate complexity and splitting into `specfact backlog refine` only; there SHALL be no top-level scrum/refine command. + +**Rationale**: Align with backlog command group; no new top-level commands. + +#### Scenario: Invoke via backlog refine + +**Given**: SpecFact CLI is installed and backlog refine is used + +**When**: The user runs `specfact backlog refine` (with item(s) that may be complex) + +**Then**: Refinement output (and export-to-tmp when used) includes complexity/splitting suggestion for complex items + +**Acceptance Criteria**: + +- Behavior is discoverable as part of existing `specfact backlog refine`; no new top-level commands. 
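As a rough sketch of the output rule above (the splitting-suggestion block appears only for items over the threshold), with hypothetical names that are not part of the CLI:

```python
def render_refined_item(title: str, story_points: int, criteria: list[str], threshold: int = 13) -> str:
    """Render one refined item; append a splitting suggestion only when over threshold."""
    lines = [f"## {title}", f"Story points: {story_points}"]
    if story_points > threshold:
        lines.append("### Story splitting suggestion")
        lines.append(f"Rationale: {story_points} points exceeds threshold {threshold}.")
        lines += [f"- Split at: {c}" for c in criteria]
    return "\n".join(lines)


complex_out = render_refined_item("Checkout flow", 21, ["Guest checkout", "Saved cards"])
simple_out = render_refined_item("Fix typo", 2, ["Typo gone"])
assert "Story splitting suggestion" in complex_out
assert "Story splitting suggestion" not in simple_out
```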
diff --git a/openspec/changes/story-complexity-splitting-hints-support/tasks.md b/openspec/changes/story-complexity-splitting-hints-support/tasks.md new file mode 100644 index 00000000..06eb360a --- /dev/null +++ b/openspec/changes/story-complexity-splitting-hints-support/tasks.md @@ -0,0 +1,71 @@ +# Tasks: Story complexity and splitting hints support + +## TDD / SDD order (enforced) + +Per `openspec/config.yaml`, **tests before code** applies to any task that adds or changes behavior. + +1. **Spec deltas** define behavior (Given/When/Then) in `openspec/changes/story-complexity-splitting-hints-support/specs/story-complexity/spec.md`. +2. **Tests second**: Write unit/integration tests from those scenarios; run tests and **expect failure** (no implementation yet). +3. **Code last**: Implement until tests pass and behavior satisfies the spec. + +Do not implement production code for new behavior until the corresponding tests exist and have been run (expecting failure). + +--- + +## 1. Create git branch from dev + +- [ ] 1.1 Ensure we're on dev and up to date: `git checkout dev && git pull origin dev` +- [ ] 1.2 Create branch with Development link to issue (if exists): `gh issue develop 171 --repo nold-ai/specfact-cli --name feature/story-complexity-splitting-hints-support --checkout` +- [ ] 1.3 Or create branch without issue link: `git checkout -b feature/story-complexity-splitting-hints-support` (if no issue yet) +- [ ] 1.4 Verify branch was created: `git branch --show-current` + +## 2.
Create GitHub issue in nold-ai/specfact-cli (mandatory) + +- [ ] 2.1 Create issue in nold-ai/specfact-cli: `gh issue create --repo nold-ai/specfact-cli --title "[Change] Story complexity and splitting hints support" --body-file --label "enhancement" --label "change-proposal"` +- [ ] 2.2 Use body from proposal (Why, What Changes, Acceptance Criteria); add footer `*OpenSpec Change Proposal: story-complexity-splitting-hints-support*` +- [ ] 2.3 Update `proposal.md` Source Tracking section with issue number, issue URL, repository nold-ai/specfact-cli, Last Synced Status: proposed +- [ ] 2.4 Link issue to project (optional): `gh project item-add 1 --owner nold-ai --url ` (requires `gh auth refresh -s project` if needed) + +## 3. Verify spec deltas (SDD: specs first) + +- [ ] 3.1 Confirm `specs/story-complexity/spec.md` exists and is complete (ADDED requirements, Given/When/Then for complexity score, needs_splitting, splitting suggestion in refinement output). +- [ ] 3.2 Map scenarios to implementation: complexity score, needs_splitting(threshold), splitting suggestion generator, integration into backlog refine output and export-to-tmp. + +## 4. Tests first (TDD: write tests from spec scenarios; expect failure) + +- [ ] 4.1 Write unit or integration tests from `specs/story-complexity/spec.md` scenarios: complexity score (story_points, business_value); needs_splitting predicate (above/below threshold); splitting suggestion (rationale + split points); refinement output includes suggestion for complex items only. +- [ ] 4.2 Run tests: `hatch run smart-test-unit` (or equivalent); **expect failure** (no implementation yet). +- [ ] 4.3 Document which scenarios are covered by which test modules. + +## 5. Implement complexity and splitting (TDD: code until tests pass) + +- [ ] 5.1 Add helper(s) for complexity score and needs_splitting(item, threshold); configurable threshold (default 13); ensure @icontract and @beartype on new public APIs. 
+- [ ] 5.2 Add splitting suggestion logic (rationale + optional split points from acceptance_criteria or heuristic); integrate into refinement result type/output. +- [ ] 5.3 In `specfact backlog refine`, when emitting refined item output (and export-to-tmp), append "Story splitting suggestion" section for items above threshold; no top-level scrum/refine command. +- [ ] 5.4 Run tests again; **expect pass**; fix until all tests pass. + +## 6. Quality gates + +- [ ] 6.1 Run format and type-check: `hatch run format`, `hatch run type-check`. +- [ ] 6.2 Run contract test: `hatch run contract-test`. +- [ ] 6.3 Run full test suite: `hatch run smart-test` (or `hatch run smart-test-full`). +- [ ] 6.4 Ensure any new or modified public APIs have @icontract and @beartype where applicable. + +## 7. Documentation research and review + +- [ ] 7.1 Identify affected documentation: docs/guides/backlog-refinement.md, docs/reference as needed. +- [ ] 7.2 Update backlog-refinement.md: document complexity score, needs-splitting threshold, and splitting hints in refinement output. +- [ ] 7.3 If adding a new doc page: set front-matter (layout, title, permalink, description) and update docs/_layouts/default.html sidebar if needed. + +## 8. Version and changelog (patch bump; required before PR) + +- [ ] 8.1 Bump **patch** version in `pyproject.toml` (e.g. X.Y.Z → X.Y.(Z+1)). +- [ ] 8.2 Sync version in `setup.py`, `src/__init__.py`, `src/specfact_cli/__init__.py` to match pyproject.toml. +- [ ] 8.3 Add CHANGELOG.md entry under new [X.Y.Z] - YYYY-MM-DD section: **Added** – Story complexity and splitting hints in `specfact backlog refine` (complexity score, needs-splitting flag, splitting suggestion in output/export). + +## 9. 
Create Pull Request to dev + +- [ ] 9.1 Ensure all changes are committed: `git add .` and `git commit -m "feat(backlog): add story complexity and splitting hints to refine"` +- [ ] 9.2 Push to remote: `git push origin feature/story-complexity-splitting-hints-support` +- [ ] 9.3 Create PR: `gh pr create --repo nold-ai/specfact-cli --base dev --head feature/story-complexity-splitting-hints-support --title "feat(backlog): add story complexity and splitting hints to refine" --body-file ` (use repo PR template; add OpenSpec change ID `story-complexity-splitting-hints-support` and summary; reference GitHub issue with `Fixes nold-ai/specfact-cli#171`). +- [ ] 9.4 Verify PR and branch are linked to issue in Development section. diff --git a/openspec/specs/backlog-refinement/spec.md b/openspec/specs/backlog-refinement/spec.md index 41d05ebd..32051569 100644 --- a/openspec/specs/backlog-refinement/spec.md +++ b/openspec/specs/backlog-refinement/spec.md @@ -360,3 +360,61 @@ The system SHALL provide progress feedback during initialization of the `specfac - **AND** the progress should use Rich Progress with time elapsed column - **AND** this provides user feedback during 5-10 second initialization delay (especially important in corporate environments with security scans/firewalls) +### Requirement: Import refined content from temporary file + +The system SHALL support importing refined backlog content from a temporary markdown file (same format as export) when `specfact backlog refine --import-from-tmp` is used, matching items by ID and updating remote backlog via the adapter when `--write` is set. 
+ +#### Scenario: Import refined content from temporary file + +- **GIVEN** a markdown file in the same format as the export from `specfact backlog refine --export-to-tmp` (header, then per-item blocks with `## Item N:`, **ID**, **Body** in ```markdown ...```, **Acceptance Criteria**) +- **AND** the user runs `specfact backlog refine --import-from-tmp --tmp-file ` with the same adapter and filters as used for export (so the same set of items is fetched) +- **WHEN** the import file exists and is readable +- **THEN** the system parses the file and matches each block to a fetched item by **ID** +- **AND** for each matched item the system updates `body_markdown` and `acceptance_criteria` (and optionally title/metrics) from the parsed block +- **AND** if `--write` is not set, the system prints a preview (e.g. "Would update N items") and does not call the adapter +- **AND** if `--write` is set, the system calls `adapter.update_backlog_item(item, update_fields=[...])` for each updated item and prints a success summary (e.g. "Updated N backlog items") +- **AND** the system does not show "Import functionality pending implementation" + +#### Scenario: Import file not found + +- **GIVEN** the user runs `specfact backlog refine --import-from-tmp` (or with `--tmp-file `) +- **WHEN** the resolved import file does not exist +- **THEN** the system prints an error with the expected path and suggests using `--tmp-file` to specify the path +- **AND** the command exits with non-zero status + +### Requirement: Ignore Already-Refined Items by Default + +The system SHALL support `--ignore-refined` (default) and `--no-ignore-refined` so that when `--limit N` is used, the limit applies to items that need refinement (already-refined items are excluded from the batch by default). 
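The `--ignore-refined` / `--limit` interaction specified above can be sketched as follows; `select_batch` and `is_refined` are hypothetical stand-ins for the real filtering and the "checkboxes + required sections" refined check:

```python
def select_batch(items, limit, ignore_refined=True, is_refined=lambda i: i["refined"]):
    """Apply the limit after filtering out already-refined items (default behavior)."""
    if ignore_refined:
        items = [i for i in items if not is_refined(i)]
    return items[:limit]


items = [{"id": n, "refined": n <= 3} for n in range(1, 6)]  # first 3 already refined
assert [i["id"] for i in select_batch(items, 3)] == [4, 5]
assert [i["id"] for i in select_batch(items, 3, ignore_refined=False)] == [1, 2, 3]
```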
+ +#### Scenario: Limit applies to items needing refinement when ignore-refined + +- **GIVEN** the user runs `specfact backlog refine --limit 3` (default `--ignore-refined`) +- **AND** the adapter returns at least 5 items, of which the first 3 are already refined (checkboxes + all required sections or high confidence with no missing fields) +- **WHEN** the command processes items +- **THEN** the system filters out already-refined items, then takes the first 3 that need refinement +- **AND** the user sees up to 3 items that actually require refinement (no loop of the same 3 refined items) + +#### Scenario: No-ignore-refined preserves previous behavior + +- **GIVEN** the user runs `specfact backlog refine --limit 3 --no-ignore-refined` +- **WHEN** the command processes items +- **THEN** the system takes the first 3 items from the fetch and processes them in order +- **AND** already-refined items are skipped in the loop (current behavior) + +### Requirement: Focused Refinement by Issue ID + +The system SHALL support `--id ISSUE_ID` to refine only the backlog item with the given issue or work item ID. + +#### Scenario: Refine single item by ID + +- **GIVEN** the user runs `specfact backlog refine --id 123` (with required adapter options) +- **WHEN** the adapter returns items including item with id 123 +- **THEN** the system filters to only the item with id 123 and refines only that item +- **AND** other items are not processed + +#### Scenario: ID not found + +- **GIVEN** the user runs `specfact backlog refine --id 999` (with required adapter options) +- **WHEN** no item with id 999 is in the fetched set +- **THEN** the system prints a clear error (e.g. 
"No backlog item with id 999 found") and exits with non-zero status + diff --git a/pyproject.toml b/pyproject.toml index e7fc64b2..37c8f005 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -4,7 +4,7 @@ build-backend = "hatchling.build" [project] name = "specfact-cli" -version = "0.26.14" +version = "0.26.15" description = "Brownfield-first CLI: Reverse engineer legacy Python → specs → enforced contracts. Automate legacy code documentation and prevent modernization regressions." readme = "README.md" requires-python = ">=3.11" diff --git a/resources/prompts/specfact.backlog-refine.md b/resources/prompts/specfact.backlog-refine.md index 5fcc757c..50dd7636 100644 --- a/resources/prompts/specfact.backlog-refine.md +++ b/resources/prompts/specfact.backlog-refine.md @@ -69,6 +69,8 @@ Refine backlog items from DevOps tools (GitHub Issues, Azure DevOps, etc.) into - Ambiguous name-only matches will prompt for explicit iteration path - `--release RELEASE` - Filter by release identifier (case-insensitive) - `--limit N` - Maximum number of items to process in this refinement session (caps batch size) +- `--ignore-refined` / `--no-ignore-refined` - When set (default), exclude already-refined items so `--limit` applies to items that need refinement. Use `--no-ignore-refined` to process the first N items in order. +- `--id ISSUE_ID` - Refine only this backlog item (issue or work item ID). Other items are ignored. - `--persona PERSONA` - Filter templates by persona (product-owner, architect, developer) - `--framework FRAMEWORK` - Filter templates by framework (agile, scrum, safe, kanban) @@ -162,6 +164,30 @@ specfact backlog refine $ADAPTER \ - Use `:quit` or `:abort` to cancel the entire session gracefully - Session cancellation shows summary and exits without error +### Interactive refinement (Copilot mode) + +When refining backlog items in Copilot mode (e.g. 
after export to tmp or during a refinement session), follow this **per-story loop** so the PO and stakeholders can review and approve before any update: + +1. **For each story** (one at a time): + - **Present** the refined story in a clear, readable format: + - Use headings for Title, Body, Acceptance Criteria, Metrics. + - Use tables or panels for structured data so it is easy to scan. + - **Assess specification level** so the DevOps team knows if the story is ready, under-specified, or over-specified: + - **Under-specified**: Missing acceptance criteria, vague scope, unclear "so that" or user value. List evidence (e.g. "No AC", "Scope could mean X or Y"). Suggest what to add. + - **Over-specified**: Too much implementation detail, too many sub-steps for one story, or solution prescribed instead of outcome. List evidence and suggest what to trim or split. + - **Fit for scope and intent**: Clear persona, capability, benefit, and testable AC; appropriate size. State briefly why it is ready (and, if DoR is in use, that DoR is satisfied). + - **List ambiguities** or open questions (e.g. unclear scope, missing acceptance criteria, conflicting assumptions). + - **Ask** the PO and other stakeholders for clarification: "Please review the refined story above. Do you want any changes? Any ambiguities to resolve? Should this story be split?" + - **If the user provides feedback**: Re-refine the story incorporating the feedback, then repeat from "Present" for this story. + - **Only when the user explicitly approves** (e.g. "looks good", "approved", "no changes"): Mark this story as done and move to the **next** story. + - **Do not update** the backlog item (or write to the refined file as final) until the user has approved this story. + +2. **Formatting**: + - Use clear headings, bullet lists, and optional tables/panels so refinement sessions are easy to follow and enjoyable. + - Keep each story’s block self-contained so stakeholders can focus on one item at a time. + +3. 
**Rule**: The backlog item (or exported block) must only be updated/finalized **after** the user has approved the refined content for that story. Then proceed to the next story with the same process. + ### Step 3: Present Results Display refinement results: diff --git a/setup.py b/setup.py index e03e30d1..a3e04d7f 100644 --- a/setup.py +++ b/setup.py @@ -7,7 +7,7 @@ if __name__ == "__main__": _setup = setup( name="specfact-cli", - version="0.26.14", + version="0.26.15", description="SpecFact CLI - Spec -> Contract -> Sentinel tool for contract-driven development", packages=find_packages(where="src"), package_dir={"": "src"}, diff --git a/src/__init__.py b/src/__init__.py index aa8d6a59..5d4202ed 100644 --- a/src/__init__.py +++ b/src/__init__.py @@ -3,4 +3,4 @@ """ # Package version: keep in sync with pyproject.toml, setup.py, src/specfact_cli/__init__.py -__version__ = "0.26.14" +__version__ = "0.26.15" diff --git a/src/specfact_cli/__init__.py b/src/specfact_cli/__init__.py index 3bc2eb64..0c49b21b 100644 --- a/src/specfact_cli/__init__.py +++ b/src/specfact_cli/__init__.py @@ -9,6 +9,6 @@ - Validating reproducibility """ -__version__ = "0.26.14" +__version__ = "0.26.15" __all__ = ["__version__"] diff --git a/src/specfact_cli/cli.py b/src/specfact_cli/cli.py index c0cd7f0d..b89c5bf0 100644 --- a/src/specfact_cli/cli.py +++ b/src/specfact_cli/cli.py @@ -454,6 +454,7 @@ def cli_main() -> None: console.print() # Empty line after banner elif not is_help_or_version and not is_test_mode: # Show simple version line like other CLIs (skip for help/version commands and in test mode) + # Printed before startup checks so users see output immediately (important with slow checks e.g. 
xagt) print_version_line() # Run startup checks (template validation and version check) diff --git a/src/specfact_cli/commands/backlog_commands.py b/src/specfact_cli/commands/backlog_commands.py index f288b36f..2228a9fd 100644 --- a/src/specfact_cli/commands/backlog_commands.py +++ b/src/specfact_cli/commands/backlog_commands.py @@ -296,6 +296,44 @@ def _parse_refined_export_markdown(content: str) -> dict[str, dict[str, Any]]: return result +@beartype +def _item_needs_refinement( + item: BacklogItem, + detector: TemplateDetector, + registry: TemplateRegistry, + template_id: str | None, + normalized_adapter: str | None, + normalized_framework: str | None, + normalized_persona: str | None, +) -> bool: + """ + Return True if the item needs refinement (should be processed); False if already refined (skip). + + Mirrors the "already refined" skip logic used in the refine loop: checkboxes + all required + sections, or high confidence with no missing fields. + """ + detection_result = detector.detect_template( + item, + provider=normalized_adapter, + framework=normalized_framework, + persona=normalized_persona, + ) + if detection_result.template_id: + target = registry.get_template(detection_result.template_id) if detection_result.template_id else None + if target and target.required_sections: + has_checkboxes = bool( + re.search(r"^[\s]*- \[[ x]\]", item.body_markdown or "", re.MULTILINE | re.IGNORECASE) + ) + all_present = all( + bool(re.search(rf"^#+\s+{re.escape(s)}\s*$", item.body_markdown or "", re.MULTILINE | re.IGNORECASE)) + for s in target.required_sections + ) + if has_checkboxes and all_present and not detection_result.missing_fields: + return False + already_refined = template_id is None and detection_result.confidence >= 0.8 and not detection_result.missing_fields + return not already_refined + + def _fetch_backlog_items( adapter_name: str, search_query: str | None = None, @@ -424,6 +462,16 @@ def refine( "--limit", help="Maximum number of items to process in 
this refinement session. Use to cap batch size and avoid processing too many items at once.", ), + ignore_refined: bool = typer.Option( + True, + "--ignore-refined/--no-ignore-refined", + help="When set (default), exclude already-refined items from the batch so --limit applies to items that need refinement. Use --no-ignore-refined to process the first N items in order (already-refined skipped in loop).", + ), + issue_id: str | None = typer.Option( + None, + "--id", + help="Refine only this backlog item (issue or work item ID). Other items are ignored.", + ), template_id: str | None = typer.Option(None, "--template", "-t", help="Target template ID (default: auto-detect)"), auto_accept_high_confidence: bool = typer.Option( False, "--auto-accept-high-confidence", help="Auto-accept refinements with confidence >= 0.85" @@ -651,6 +699,10 @@ def refine( init_progress.update(validate_task, description="[green]✓[/green] Configuration validated") # Fetch backlog items with filters + # When ignore_refined and limit are set, fetch more candidates so we have enough after filtering + fetch_limit: int | None = limit + if ignore_refined and limit is not None and limit > 0: + fetch_limit = limit * 5 with Progress( SpinnerColumn(), TextColumn("[progress.description]{task.description}"), @@ -668,7 +720,7 @@ def refine( iteration=iteration, sprint=sprint, release=release, - limit=limit, + limit=fetch_limit, repo_owner=repo_owner, repo_name=repo_name, github_token=github_token, @@ -706,6 +758,34 @@ def refine( console.print("[yellow]No backlog items found.[/yellow]") return + # Filter by issue ID when --id is set + if issue_id is not None: + items = [i for i in items if str(i.id) == str(issue_id)] + if not items: + console.print( + f"[bold red]✗[/bold red] No backlog item with id {issue_id!r} found. " + "Check filters and adapter configuration." 
+ ) + raise typer.Exit(1) + + # When ignore_refined (default), keep only items that need refinement; then apply limit + if ignore_refined: + items = [ + i + for i in items + if _item_needs_refinement( + i, detector, registry, template_id, normalized_adapter, normalized_framework, normalized_persona + ) + ] + if limit is not None and len(items) > limit: + items = items[:limit] + if ignore_refined and (limit is not None or issue_id is not None): + console.print( + f"[dim]Filtered to {len(items)} item(s) needing refinement" + + (f" (limit {limit})" if limit is not None else "") + + "[/dim]" + ) + # Validate export/import flags if export_to_tmp and import_from_tmp: console.print("[bold red]✗[/bold red] --export-to-tmp and --import-from-tmp are mutually exclusive") @@ -843,8 +923,8 @@ def refine( console.print(f"[green]✓ Updated {len(updated_items)} backlog item(s)[/green]") return - # Apply limit if specified - if limit and len(items) > limit: + # Apply limit if specified (when not ignore_refined; when ignore_refined we already filtered and sliced) + if not ignore_refined and limit is not None and len(items) > limit: items = items[:limit] console.print(f"[yellow]Limited to {limit} items (found {len(items)} total)[/yellow]") else: diff --git a/tests/unit/commands/test_backlog_commands.py b/tests/unit/commands/test_backlog_commands.py index 6cfd5319..0d99f9b7 100644 --- a/tests/unit/commands/test_backlog_commands.py +++ b/tests/unit/commands/test_backlog_commands.py @@ -10,9 +10,14 @@ from typer.testing import CliRunner +from specfact_cli.backlog.template_detector import TemplateDetector from specfact_cli.cli import app -from specfact_cli.commands.backlog_commands import _parse_refined_export_markdown +from specfact_cli.commands.backlog_commands import ( + _item_needs_refinement, + _parse_refined_export_markdown, +) from specfact_cli.models.backlog_item import BacklogItem +from specfact_cli.templates.registry import BacklogTemplate, TemplateRegistry runner = CliRunner() 
@@ -329,3 +334,54 @@ def foo(): assert "return 42" in body assert "```" in body assert "Then we see the error." in body + + +class TestItemNeedsRefinement: + """Tests for _item_needs_refinement helper.""" + + def test_needs_refinement_when_missing_sections(self) -> None: + """Item needs refinement when required sections are missing.""" + registry = TemplateRegistry() + registry.register_template( + BacklogTemplate( + template_id="user-story", + name="User Story", + description="", + required_sections=["As a", "I want", "Acceptance Criteria"], + ) + ) + detector = TemplateDetector(registry) + item = BacklogItem( + id="1", + provider="github", + url="https://github.com/org/repo/issues/1", + title="Story", + body_markdown="As a user I want...", + state="open", + assignees=[], + ) + assert _item_needs_refinement(item, detector, registry, None, "github", None, None) is True + + def test_does_not_need_refinement_when_high_confidence_no_missing(self) -> None: + """Item does not need refinement when confidence >= 0.8 and no missing fields.""" + registry = TemplateRegistry() + registry.register_template( + BacklogTemplate( + template_id="user-story", + name="User Story", + description="", + required_sections=["Acceptance Criteria"], + ) + ) + detector = TemplateDetector(registry) + item = BacklogItem( + id="2", + provider="github", + url="https://github.com/org/repo/issues/2", + title="Story", + body_markdown="As a user I want X.\n\n## Acceptance Criteria\n- [ ] Done", + state="open", + assignees=[], + ) + result = _item_needs_refinement(item, detector, registry, None, "github", None, None) + assert result is False
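The "already refined" heuristic that `_item_needs_refinement` mirrors (task checkboxes present plus every required section heading found) can be sketched standalone. The two regexes follow the diff above; the story bodies and `required_sections` values below are hypothetical example data, not items from the repo:

```python
import re


def looks_refined(body_markdown: str, required_sections: list[str]) -> bool:
    """Heuristic from the refine loop: checkboxes + all required sections present."""
    # Any markdown task checkbox (checked or unchecked) at the start of a line
    has_checkboxes = bool(
        re.search(r"^[\s]*- \[[ x]\]", body_markdown or "", re.MULTILINE | re.IGNORECASE)
    )
    # Every required section must appear as its own markdown heading
    all_present = all(
        bool(re.search(rf"^#+\s+{re.escape(s)}\s*$", body_markdown or "", re.MULTILINE | re.IGNORECASE))
        for s in required_sections
    )
    return has_checkboxes and all_present


refined_body = "As a user I want X.\n\n## Acceptance Criteria\n- [x] Done"
unrefined_body = "As a user I want..."
```

A body that has the `## Acceptance Criteria` heading and at least one `- [ ]`/`- [x]` checkbox passes; a bare one-liner does not, so it stays in the batch when `--ignore-refined` is in effect.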