Merged

26 commits
700c360
perf: optimize startup performance with metadata tracking and update …
djm81 Jan 27, 2026
4b167dd
Merge branch 'main' into dev
djm81 Jan 27, 2026
e4782ea
fix: add missing ADO field mappings and assignee display (#145)
djm81 Jan 27, 2026
a2f6ac7
Merge branch 'main' into dev
djm81 Jan 27, 2026
c74a773
fix: mitigate code scanning vulnerabilities (#148)
djm81 Jan 27, 2026
af030dc
fix: detect GitHub remotes using ssh:// and git:// URLs
djm81 Jan 27, 2026
db827a0
chore: bump version to 0.26.9 and update changelog
djm81 Jan 27, 2026
1ade334
Merge branch 'main' into dev
djm81 Jan 27, 2026
5c1cb41
fix: compare GitHub SSH hostnames case-insensitively
djm81 Jan 27, 2026
68cc345
Merge branch 'main' into dev
djm81 Jan 27, 2026
dfeb7ca
Add openspec and workflow commands for transparency
djm81 Jan 27, 2026
9e1f22d
Add specs from openspec
djm81 Jan 27, 2026
115e402
Remove aisp change which wasn't implemented
djm81 Jan 27, 2026
2675361
Fix openspec gitignore pattern
djm81 Jan 27, 2026
573fb7b
Update gitignore
djm81 Jan 27, 2026
907501e
Update contribution standards to use openspec for SDD
djm81 Jan 27, 2026
568000c
Merge branch 'main' into dev
djm81 Jan 27, 2026
fe082f6
Migrate to new opsx openspec commands
djm81 Jan 27, 2026
036afbe
Migrate workflow and openspec config
djm81 Jan 28, 2026
5a1493f
fix: bump version to 0.26.10 for PyPI publish
djm81 Jan 28, 2026
da606a1
Update version and changelog
djm81 Jan 28, 2026
608f317
Add canonical user-friendly workitem url for ado workitems
djm81 Jan 28, 2026
719256c
Update to support OSPX
djm81 Jan 28, 2026
1f94d7c
Merge branch 'main' into dev
djm81 Jan 28, 2026
bbf730a
feat(backlog): implement refine --import-from-tmp and fix type-check …
djm81 Jan 28, 2026
080743a
Merge branch 'main' into dev
djm81 Jan 28, 2026
12 changes: 12 additions & 0 deletions CHANGELOG.md
Original file line number Diff line number Diff line change
@@ -9,6 +9,18 @@ All notable changes to this project will be documented in this file.

---

## [0.26.11] - 2026-01-27

### Fixed (0.26.11)

- **Backlog refine --import-from-tmp**: Implemented import path so refined content from a temporary file is applied to backlog items
- **Parser**: Added `_parse_refined_export_markdown()` to parse the same markdown format produced by `--export-to-tmp` (## Item blocks, **ID**, **Body** in ```markdown ... ```, **Acceptance Criteria**, optional **Metrics**)
- **Import flow**: When `--import-from-tmp` (and optionally `--tmp-file`) is used, the CLI reads the file, matches blocks to fetched items by ID, and updates `body_markdown`, `acceptance_criteria`, and optionally title/metrics; without `--write` it shows "Would update N item(s)", and with `--write` it calls `adapter.update_backlog_item()` for each item and prints a success summary
- **Removed**: "Import functionality pending implementation" message and TODO
- **Tests**: Unit tests for the parser (single item, acceptance criteria and metrics, header-only, blocks without ID)
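The fence-aware **Body** extraction mentioned above can be sketched as follows. This is an illustrative standalone version, not the code merged in this PR; the helper name is hypothetical. The idea is that a bare closing fence ends the innermost open block, so a nested code fence inside the body does not prematurely terminate the outer body fence:

```python
def extract_fenced_body(lines: list[str]) -> str:
    """Collect lines until the outer fence closes, tracking nested fences by depth."""
    body: list[str] = []
    depth = 1  # depth 1 = inside the outer markdown body fence
    for line in lines:
        stripped = line.rstrip()
        if stripped == "```":
            # A bare fence closes the innermost open block
            if depth == 1:
                break  # outer fence closed: body is complete
            depth -= 1
            body.append(line)
        elif stripped.startswith("```"):
            # An opening fence with an info string (e.g. a python block) nests one level
            depth += 1
            body.append(line)
        else:
            body.append(line)
    return "\n".join(body).strip()
```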

---

## [0.26.10] - 2026-01-27

### Added (0.26.10)
@@ -0,0 +1,29 @@
# Change: Implement backlog refine --import-from-tmp

## Why

The `specfact backlog refine` command supports `--export-to-tmp` to export items to a markdown file for copilot processing and documents `--import-from-tmp` / `--tmp-file` to re-import refined content. When users run with `--import-from-tmp`, the CLI only checks that the file exists and then prints "Import functionality pending implementation" and exits. This leaves the export/import workflow unusable and contradicts the documented behavior. Implementing the import path completes the round-trip: export → edit with copilot → import with `--write`, so teams can refine backlog items in bulk via their IDE without interactive prompts.

## What Changes

- **NEW**: Parser for the refined export markdown format (same structure as export: `## Item N:`, **ID**, **Body** in ```markdown ... ```, **Acceptance Criteria**, optional title/metrics). The parser returns a mapping of parsed blocks keyed by item ID for matching against fetched items.
- **NEW**: Import branch in `specfact backlog refine`: when `--import-from-tmp` is set and the file exists, read and parse the file, match parsed blocks to currently fetched items by ID, update each matched `BacklogItem`'s `body_markdown` and `acceptance_criteria` (and optionally title/metrics), then call `adapter.update_backlog_item(item, update_fields=[...])` when `--write` is set. Without `--write`, show a preview (e.g. "Would update N items") and do not call the adapter.
- **EXTEND**: Reuse existing refine flow: same adapter/fetch as export so `items` is in scope; reuse `_build_adapter_kwargs` and `adapter_registry.get_adapter` for write-back; reuse the same `update_fields` logic as interactive refine (title, body_markdown, acceptance_criteria, story_points, business_value, priority).
- **NOTE**: Default import path remains `...-refined.md`; users are expected to pass `--tmp-file` to point to the file they edited (same path as export or a copy). No change to export path or naming.
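As a rough illustration of the round-trip format, the sketch below parses a hypothetical refined-export sample into an id-keyed mapping. The sample content and the helper name `parse_blocks` are assumptions for illustration; the real parser in this PR handles additional fields and fence nesting:

```python
import re

FENCE = "`" * 3  # triple backtick, built programmatically to keep this example nest-safe

# Hypothetical sample in the refined-export format described above
sample = (
    "# SpecFact Backlog Export\n\n"
    "## Item 1: Improve login flow\n"
    "**ID**: AB-101\n"
    f"**Body**:\n{FENCE}markdown\n"
    "As a user, I want faster login.\n"
    f"{FENCE}\n"
    "**Acceptance Criteria**:\n"
    "- Login completes in under 2 seconds\n"
)

def parse_blocks(content: str) -> dict[str, dict[str, str]]:
    """Split on '## Item N:' headings and pull out ID, Body, and Acceptance Criteria."""
    result: dict[str, dict[str, str]] = {}
    for block in re.split(r"\n## Item \d+:", content):
        id_match = re.search(r"\*\*ID\*\*:\s*(.+)", block)
        if not id_match:
            continue  # skip the file header and malformed blocks
        body_match = re.search(
            rf"\*\*Body\*\*:\s*\n{FENCE}markdown\n(.*?)\n{FENCE}", block, re.DOTALL
        )
        ac_match = re.search(
            r"\*\*Acceptance Criteria\*\*:\s*\n(.*?)(?=\n\*\*|\Z)", block, re.DOTALL
        )
        result[id_match.group(1).strip()] = {
            "body_markdown": body_match.group(1).strip() if body_match else "",
            "acceptance_criteria": ac_match.group(1).strip() if ac_match else "",
        }
    return result

parsed = parse_blocks(sample)
```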

## Capabilities

- **backlog-refinement**: ADDED requirement for import-from-tmp (parse refined export format, match by ID, update items via adapter with --write).

## Impact

- **Affected specs**: backlog-refinement (ADDED scenario for import-from-tmp).
- **Affected code**: `src/specfact_cli/commands/backlog_commands.py` (import branch implementation); optionally `src/specfact_cli/backlog/refine_export_parser.py` (parser helper).
- **Integration points**: BacklogAdapter.update_backlog_item (existing); _fetch_backlog_items, _build_adapter_kwargs (existing).

## Source Tracking

- **GitHub Issue**: #155
- **Issue URL**: <https://github.com/nold-ai/specfact-cli/issues/155>
- **Repository**: nold-ai/specfact-cli
- **Last Synced Status**: implemented
@@ -0,0 +1,25 @@
# backlog-refinement (delta)

## ADDED Requirements

### Requirement: Import refined content from temporary file

The system SHALL support importing refined backlog content from a temporary markdown file (same format as export) when `specfact backlog refine --import-from-tmp` is used, matching items by ID and updating remote backlog via the adapter when `--write` is set.

#### Scenario: Import refined content from temporary file

- **GIVEN** a markdown file in the same format as the export from `specfact backlog refine --export-to-tmp` (header, then per-item blocks with `## Item N:`, **ID**, **Body** in ```markdown ... ```, **Acceptance Criteria**)
- **AND** the user runs `specfact backlog refine --import-from-tmp --tmp-file <path>` with the same adapter and filters as used for export (so the same set of items is fetched)
- **WHEN** the import file exists and is readable
- **THEN** the system parses the file and matches each block to a fetched item by **ID**
- **AND** for each matched item the system updates `body_markdown` and `acceptance_criteria` (and optionally title/metrics) from the parsed block
- **AND** if `--write` is not set, the system prints a preview (e.g. "Would update N items") and does not call the adapter
- **AND** if `--write` is set, the system calls `adapter.update_backlog_item(item, update_fields=[...])` for each updated item and prints a success summary (e.g. "Updated N backlog items")
- **AND** the system does not show "Import functionality pending implementation"

#### Scenario: Import file not found

- **GIVEN** the user runs `specfact backlog refine --import-from-tmp` (or with `--tmp-file <path>`)
- **WHEN** the resolved import file does not exist
- **THEN** the system prints an error with the expected path and suggests using `--tmp-file` to specify the path
- **AND** the command exits with non-zero status
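The preview-versus-write behavior in the scenarios above can be sketched as follows. This is illustrative only: `update_fn` stands in for the adapter call, and the dict-based items are a simplification of the real `BacklogItem` model:

```python
def apply_refined(items: list[dict], parsed_by_id: dict[str, dict], write: bool, update_fn) -> str:
    """Match parsed blocks to fetched items by ID; preview unless --write is set."""
    updated = [item for item in items if item["id"] in parsed_by_id]
    for item in updated:
        item.update(parsed_by_id[item["id"]])  # apply refined fields in place
    if not write:
        # Dry run: report what would change without touching the remote backlog
        return f"Would update {len(updated)} item(s)"
    for item in updated:
        update_fn(item)  # stand-in for adapter.update_backlog_item(...)
    return f"Updated {len(updated)} backlog item(s)"
```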
38 changes: 38 additions & 0 deletions openspec/changes/implement-backlog-refine-import-from-tmp/tasks.md
@@ -0,0 +1,38 @@
# Tasks: Implement backlog refine --import-from-tmp

## 1. Create git branch

- [ ] 1.1.1 Ensure we're on dev and up to date: `git checkout dev && git pull origin dev`
- [ ] 1.1.2 Create branch: `git checkout -b feature/implement-backlog-refine-import-from-tmp`
- [ ] 1.1.3 Verify branch: `git branch --show-current`

## 2. Parser for refined export format

- [ ] 2.1.1 Add function to parse refined markdown (e.g. `_parse_refined_export_markdown(content: str) -> dict[str, dict]` returning id → {body_markdown, acceptance_criteria, title?, ...}) in `backlog_commands.py` or new module `src/specfact_cli/backlog/refine_export_parser.py`
- [ ] 2.1.2 Split content by `## Item` or `---` to get per-item blocks
- [ ] 2.1.3 From each block extract **ID** (required), **Body** (from ```markdown ... ```), **Acceptance Criteria** (optional), optionally **title** and metrics
- [ ] 2.1.4 Add unit tests for parser (export-format sample, multiple items, missing optional fields)
- [ ] 2.1.5 Run `hatch run format` and `hatch run type-check`

## 3. Import branch in backlog refine command

- [ ] 3.1.1 In the `if import_from_tmp:` block, after file-exists check: read file content, call parser, build map id → parsed fields
- [ ] 3.1.2 For each item in `items`, if item.id in map: set item.body_markdown, item.acceptance_criteria (and optionally title/metrics) from parsed fields
- [ ] 3.1.3 If `--write` is not set: print preview ("Would update N items") and return
- [ ] 3.1.4 If `--write` is set: get adapter via _build_adapter_kwargs and adapter_registry.get_adapter; for each updated item call adapter.update_backlog_item(item, update_fields=[...]) with same update_fields logic as interactive refine
- [ ] 3.1.5 Print success summary (e.g. "Updated N backlog items")
- [ ] 3.1.6 Remove "Import functionality pending implementation" message and TODO
- [ ] 3.1.7 Run `hatch run format` and `hatch run type-check`

## 4. Tests and quality

- [ ] 4.1.1 Add or extend test for refine --import-from-tmp (unit: parser; integration or unit with mock: import flow with --tmp-file and --write)
- [ ] 4.1.2 Run `hatch run contract-test` (or `hatch run smart-test`)
- [ ] 4.1.3 Run `hatch run lint`
- [ ] 4.1.4 Run `openspec validate implement-backlog-refine-import-from-tmp --strict`

## 5. Documentation and PR

- [ ] 5.1.1 Update CHANGELOG.md with fix entry
- [ ] 5.1.2 Ensure help text for --import-from-tmp and --tmp-file is accurate
- [ ] 5.1.3 Create Pull Request from feature/implement-backlog-refine-import-from-tmp to dev
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -4,7 +4,7 @@ build-backend = "hatchling.build"

[project]
name = "specfact-cli"
version = "0.26.10"
version = "0.26.11"
description = "Brownfield-first CLI: Reverse engineer legacy Python → specs → enforced contracts. Automate legacy code documentation and prevent modernization regressions."
readme = "README.md"
requires-python = ">=3.11"
2 changes: 1 addition & 1 deletion setup.py
@@ -7,7 +7,7 @@
if __name__ == "__main__":
_setup = setup(
name="specfact-cli",
version="0.26.10",
version="0.26.11",
description="SpecFact CLI - Spec -> Contract -> Sentinel tool for contract-driven development",
packages=find_packages(where="src"),
package_dir={"": "src"},
2 changes: 1 addition & 1 deletion src/__init__.py
@@ -3,4 +3,4 @@
"""

# Package version: keep in sync with pyproject.toml, setup.py, src/specfact_cli/__init__.py
__version__ = "0.26.10"
__version__ = "0.26.11"
2 changes: 1 addition & 1 deletion src/specfact_cli/__init__.py
@@ -9,6 +9,6 @@
- Validating reproducibility
"""

__version__ = "0.26.10"
__version__ = "0.26.11"

__all__ = ["__version__"]
156 changes: 152 additions & 4 deletions src/specfact_cli/commands/backlog_commands.py
@@ -14,6 +14,7 @@
from __future__ import annotations

import os
import re
import sys
from datetime import datetime
from pathlib import Path
@@ -202,6 +203,97 @@ def _build_adapter_kwargs(
return kwargs


def _extract_body_from_block(block: str) -> str:
"""
Extract **Body** content from a refined export block, handling nested fenced code.

The body is wrapped in ```markdown ... ```. If the body itself contains fenced
code blocks (e.g. ```python ... ```), the closing fence is matched by tracking
depth: a line that is exactly ``` closes the current fence (body or inner).
"""
start_marker = "**Body**:"
fence_open = "```markdown"
if start_marker not in block or fence_open not in block:
return ""
idx = block.find(start_marker)
rest = block[idx + len(start_marker) :].lstrip()
if not rest.startswith("```"):
return ""
if not rest.startswith(fence_open + "\n") and not rest.startswith(fence_open + "\r\n"):
return ""
after_open = rest[len(fence_open) :].lstrip("\n\r")
if not after_open:
return ""
lines = after_open.split("\n")
body_lines: list[str] = []
depth = 1
for line in lines:
stripped = line.rstrip()
if stripped == "```":
if depth == 1:
break
depth -= 1
body_lines.append(line)
elif stripped.startswith("```") and stripped != "```":
depth += 1
body_lines.append(line)
else:
body_lines.append(line)
return "\n".join(body_lines).strip()


def _parse_refined_export_markdown(content: str) -> dict[str, dict[str, Any]]:
"""
Parse refined export markdown (same format as --export-to-tmp) into id -> fields.

Splits by ## Item blocks, extracts **ID**, **Body** (from ```markdown ... ```),
**Acceptance Criteria**, and optionally title and **Metrics** (story_points,
business_value, priority). Body extraction is fence-aware so bodies containing
nested code blocks are parsed correctly. Returns a dict mapping item id to
parsed fields (body_markdown, acceptance_criteria, title?, story_points?,
business_value?, priority?).
"""
result: dict[str, dict[str, Any]] = {}
blocks = re.split(r"\n## Item \d+:", content)
for block in blocks:
block = block.strip()
if not block or block.startswith("# SpecFact") or "**ID**:" not in block:
continue
id_match = re.search(r"\*\*ID\*\*:\s*(.+?)(?:\n|$)", block)
if not id_match:
continue
item_id = id_match.group(1).strip()
fields: dict[str, Any] = {}

fields["body_markdown"] = _extract_body_from_block(block)

ac_match = re.search(r"\*\*Acceptance Criteria\*\*:\s*\n(.*?)(?=\n\*\*|\n---|\Z)", block, re.DOTALL)
if ac_match:
fields["acceptance_criteria"] = ac_match.group(1).strip() or None
else:
fields["acceptance_criteria"] = None

first_line = block.split("\n")[0].strip() if block else ""
if first_line and not first_line.startswith("**"):
fields["title"] = first_line

if "Story Points:" in block:
sp_match = re.search(r"Story Points:\s*(\d+)", block)
if sp_match:
fields["story_points"] = int(sp_match.group(1))
if "Business Value:" in block:
bv_match = re.search(r"Business Value:\s*(\d+)", block)
if bv_match:
fields["business_value"] = int(bv_match.group(1))
if "Priority:" in block:
pri_match = re.search(r"Priority:\s*(\d+)", block)
if pri_match:
fields["priority"] = int(pri_match.group(1))

result[item_id] = fields
return result


def _fetch_backlog_items(
adapter_name: str,
search_query: str | None = None,
@@ -680,9 +772,65 @@ def refine(
raise typer.Exit(1)

console.print(f"[bold cyan]Importing refined content from: {import_file}[/bold cyan]")
# TODO: Implement import logic to parse refined content and apply to items
console.print("[yellow]⚠ Import functionality pending implementation[/yellow]")
console.print("[dim]For now, use interactive refinement with --write flag[/dim]")
raw = import_file.read_text(encoding="utf-8")
parsed_by_id = _parse_refined_export_markdown(raw)
if not parsed_by_id:
console.print(
"[yellow]No valid item blocks found in import file (expected ## Item N: and **ID**:)[/yellow]"
)
raise typer.Exit(1)

updated_items: list[BacklogItem] = []
for item in items:
if item.id not in parsed_by_id:
continue
data = parsed_by_id[item.id]
item.body_markdown = data.get("body_markdown", item.body_markdown or "")
if "acceptance_criteria" in data:
item.acceptance_criteria = data["acceptance_criteria"]
if data.get("title"):
item.title = data["title"]
if "story_points" in data:
item.story_points = data["story_points"]
if "business_value" in data:
item.business_value = data["business_value"]
if "priority" in data:
item.priority = data["priority"]
updated_items.append(item)

if not write:
console.print(f"[green]Would update {len(updated_items)} item(s)[/green]")
console.print("[dim]Run with --write to apply changes to the backlog[/dim]")
return

writeback_kwargs = _build_adapter_kwargs(
adapter,
repo_owner=repo_owner,
repo_name=repo_name,
github_token=github_token,
ado_org=ado_org,
ado_project=ado_project,
ado_team=ado_team,
ado_token=ado_token,
)
adapter_instance = adapter_registry.get_adapter(adapter, **writeback_kwargs)
if not isinstance(adapter_instance, BacklogAdapter):
console.print("[bold red]✗[/bold red] Adapter does not support backlog updates")
raise typer.Exit(1)

for item in updated_items:
update_fields_list = ["title", "body_markdown"]
if item.acceptance_criteria:
update_fields_list.append("acceptance_criteria")
if item.story_points is not None:
update_fields_list.append("story_points")
if item.business_value is not None:
update_fields_list.append("business_value")
if item.priority is not None:
update_fields_list.append("priority")
adapter_instance.update_backlog_item(item, update_fields=update_fields_list)
console.print(f"[green]✓ Updated backlog item: {item.url}[/green]")
console.print(f"[green]✓ Updated {len(updated_items)} backlog item(s)[/green]")
return

# Apply limit if specified
@@ -1231,7 +1379,7 @@ def map_fields(
import re
import sys

import questionary
import questionary # type: ignore[reportMissingImports]
import requests

from specfact_cli.backlog.mappers.template_config import FieldMappingConfig