diff --git a/CLAUDE.md b/CLAUDE.md index 9cfa12b..41f1125 100644 --- a/CLAUDE.md +++ b/CLAUDE.md @@ -46,6 +46,8 @@ Commands with Teams Variant ship as `{name}.md` (parallel subagents) and `{name} **Claude Code Flags**: Typed registry (`src/cli/utils/flags.ts`) for managing Claude Code feature flags (env vars and top-level settings). Pure functions `applyFlags`/`stripFlags`/`getDefaultFlags` follow the `applyTeamsConfig`/`stripTeamsConfig` pattern. Initial flags: `tool-search`, `lsp`, `clear-context-on-plan` (default ON), `brief`, `disable-1m-context` (default OFF). Manageable via `devflow flags --enable/--disable/--status/--list`. Stored in manifest `features.flags: string[]`. +**Feature Knowledge Bases**: Per-feature `.features/` directory containing KNOWLEDGE.md files that capture area-specific patterns, conventions, architecture, and gotchas. KBs are created as side-effects of planning (plan:orch Phase 12), loaded automatically across all workflows via the `FEATURE_KNOWLEDGE` variable (companion to `KNOWLEDGE_CONTEXT`), and use staleness detection via git log against `referencedFiles`. Index at `.features/index.json` (object keyed by slug). Managed via `devflow kb list|create|check|refresh|remove`. Knowledge agent (sonnet) structures exploration outputs into KNOWLEDGE.md. `apply-feature-kb` skill provides the consumption algorithm for agents. `.features/.kb.lock` is gitignored (transient lock directory for concurrent index writes, added automatically by `devflow init`). `devflow kb list` — List all feature KBs with staleness status. `devflow kb create <slug>` — Create a new KB via claude -p exploration. `devflow kb check` — Check all KBs for staleness. `devflow kb refresh [slug]` — Refresh stale KB(s). `devflow kb remove <slug>` — Remove a KB and its index entry. Note: debug:orch keeps FEATURE_KNOWLEDGE orchestrator-local (investigation workers examine code without pre-loaded context). Toggleable via `devflow kb --enable/--disable/--status` or `devflow init --kb/--no-kb`.
SessionEnd hook auto-refreshes stale KBs (throttled to once per 2 hours, max 3 per run). `.features/.disabled` sentinel gates Phase 12 generation and refresh hook. + **Two-Mode Init**: `devflow init` offers Recommended (sensible defaults, quick setup) or Advanced (full interactive flow) after plugin selection. `--recommended` / `--advanced` CLI flags for non-interactive use. Recommended applies: ambient ON, memory ON, learn ON, HUD ON, teams OFF, default-ON flags, .claudeignore ON, auto-install safe-delete if trash CLI detected, user-mode security deny list. **Migrations**: Run-once migrations execute automatically on `devflow init`, tracked at `~/.devflow/migrations.json` (scope-independent; single file regardless of user-scope vs local-scope installs). Registry: append an entry to `MIGRATIONS` in `src/cli/utils/migrations.ts`. Scopes: `global` (runs once per machine, no project context) vs `per-project` (sweeps all discovered Claude-enabled projects in parallel). Failures are non-fatal — migrations retry on next init. Currently registered per-project migrations include `purge-legacy-knowledge-v2` (removes 4 hardcoded pre-v2 ADR/PF IDs and orphan `PROJECT-PATTERNS.md`) and `purge-legacy-knowledge-v3` (v3: sweeps all remaining pre-v2 seeded entries using the `- **Source**: self-learning:` format discriminator — any ADR/PF section lacking this marker is removed; entries the user edited to include the marker survive). **D37 edge case**: a project cloned *after* migrations have run won't be swept (the marker is global, not per-project). Recovery: `rm ~/.devflow/migrations.json` forces a re-sweep on next `devflow init`. 
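The migrations paragraph above specifies the contract (a registry in `src/cli/utils/migrations.ts`, run-once tracking in `~/.devflow/migrations.json`, `global` vs `per-project` scopes, non-fatal failures) but not the entry shape. A minimal sketch of what an appended entry and its run-once gate might look like — every field name here is an assumption, not the actual `MIGRATIONS` interface:

```javascript
// Hypothetical migration entry; field names are illustrative assumptions,
// not the real src/cli/utils/migrations.ts types.
const migration = {
  id: "purge-legacy-knowledge-v3", // recorded in ~/.devflow/migrations.json once run
  scope: "per-project",            // "global" would run once per machine instead
  run: async (projectDir) => {
    // Sweep pre-v2 seeded ADR/PF entries: sections lacking the
    // "- **Source**: self-learning:" marker are removed. Failures are
    // non-fatal; the migration retries on the next `devflow init`.
  },
};

// Run-once gate: only ids not yet recorded in migrations.json are pending.
function pendingMigrations(registry, completedIds) {
  return registry.filter((m) => !completedIds.includes(m.id));
}
```

Deleting `~/.devflow/migrations.json` empties `completedIds`, which is why that file's removal forces a full re-sweep on the next init.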
@@ -54,15 +56,16 @@ Commands with Teams Variant ship as `{name}.md` (parallel subagents) and `{name} ``` devflow/ -├── shared/skills/ # 41 skills (single source of truth) -├── shared/agents/ # 12 shared agents (single source of truth) +├── shared/skills/ # 44 skills (single source of truth) +├── shared/agents/ # 13 shared agents (single source of truth) ├── plugins/devflow-*/ # 17 plugins (8 core + 9 optional language/ecosystem) ├── docs/reference/ # Detailed reference documentation ├── scripts/ # Helper scripts (statusline, docs-helpers) -│ └── hooks/ # Working Memory + ambient + learning hooks (prompt-capture-memory, stop-update-memory, background-memory-update, session-start-memory, session-start-classification, pre-compact-memory, preamble, session-end-learning, stop-update-learning [deprecated], background-learning, get-mtime) -├── src/cli/ # TypeScript CLI (init, list, uninstall, ambient, learn, flags) +│ └── hooks/ # Working Memory + ambient + learning hooks (prompt-capture-memory, stop-update-memory, background-memory-update, session-start-memory, session-start-classification, pre-compact-memory, preamble, session-end-learning, stop-update-learning [deprecated], background-learning, get-mtime, session-end-kb-refresh, background-kb-refresh) +├── src/cli/ # TypeScript CLI (init, list, uninstall, ambient, learn, flags, kb) ├── .claude-plugin/ # Marketplace registry ├── .docs/ # Project docs (reviews, design) — per-project +├── .features/ # Per-feature knowledge bases (committed to git) └── .memory/ # Working memory files — per-project ``` @@ -150,7 +153,7 @@ Working memory files live in a dedicated `.memory/` directory: - `/self-review` — Simplifier then Scrutinizer (sequential); consumes knowledge via index + on-demand Read via `devflow:apply-knowledge` - `/audit-claude` — CLAUDE.md audit (optional plugin) -**Shared agents** (12): git, synthesizer, skimmer, simplifier, coder, reviewer, resolver, evaluator, tester, scrutinizer, validator, designer +**Shared 
agents** (13): git, synthesizer, skimmer, simplifier, coder, reviewer, resolver, evaluator, tester, scrutinizer, validator, designer, knowledge **Plugin-specific agents** (1): claude-md-auditor diff --git a/docs/reference/file-organization.md b/docs/reference/file-organization.md index 85da5b4..bd6637d 100644 --- a/docs/reference/file-organization.md +++ b/docs/reference/file-organization.md @@ -9,13 +9,13 @@ devflow/ ├── .claude-plugin/ # Marketplace registry (repo root) │ └── marketplace.json ├── shared/ -│ ├── skills/ # SINGLE SOURCE OF TRUTH (41 skills) +│ ├── skills/ # SINGLE SOURCE OF TRUTH (44 skills) │ │ ├── git/ │ │ │ ├── SKILL.md │ │ │ └── references/ │ │ ├── software-design/ │ │ └── ... -│ └── agents/ # SINGLE SOURCE OF TRUTH (12 shared agents) +│ └── agents/ # SINGLE SOURCE OF TRUTH (13 shared agents) │ ├── git.md │ ├── synthesizer.md │ ├── coder.md @@ -42,7 +42,7 @@ devflow/ │ ├── build-hud.js # Copies dist/hud/ → scripts/hud/ │ ├── hud.sh # Thin wrapper: exec node hud/index.js │ ├── hud/ # GENERATED — compiled HUD module (gitignored) -│ └── hooks/ # Working Memory + ambient + learning hooks +│ └── hooks/ # Working Memory + ambient + learning + KB hooks │ ├── stop-update-memory # Stop hook: writes WORKING-MEMORY.md │ ├── session-start-memory # SessionStart hook: injects memory + git state │ ├── pre-compact-memory # PreCompact hook: saves git state backup @@ -52,9 +52,16 @@ devflow/ │ ├── session-end-learning # SessionEnd hook: batched learning trigger │ ├── stop-update-learning # Stop hook: deprecated stub (upgrade via devflow learn) │ ├── background-learning # Background: pattern detection via Sonnet +│ ├── session-end-kb-refresh # SessionEnd hook: stale KB detection + background spawn +│ ├── background-kb-refresh # Background: KB refresher via Sonnet │ ├── get-mtime # Shared helper: portable mtime (BSD/GNU stat) │ ├── json-helper.cjs # Node.js jq-equivalent operations -│ └── json-parse # Shell wrapper: jq with node fallback +│ ├── json-parse # Shell 
wrapper: jq with node fallback +│ └── lib/ # Node.js helper modules +│ ├── feature-kb.cjs # Feature KB index operations (CRUD, staleness) +│ ├── knowledge-context.cjs # Knowledge context index builder +│ ├── staleness.cjs # Code reference staleness checker +│ └── transcript-filter.cjs # Transcript channel extractor └── src/ └── cli/ ├── commands/ @@ -138,7 +145,7 @@ Skills and agents are **not duplicated** in git. Instead: ### Shared vs Plugin-Specific Agents -- **Shared** (12): `git`, `synthesizer`, `skimmer`, `simplifier`, `coder`, `reviewer`, `resolver`, `evaluator`, `tester`, `scrutinizer`, `validator`, `designer` +- **Shared** (13): `git`, `synthesizer`, `skimmer`, `simplifier`, `coder`, `reviewer`, `resolver`, `evaluator`, `tester`, `scrutinizer`, `validator`, `designer`, `knowledge` - **Plugin-specific** (1): `claude-md-auditor` — committed directly in its plugin ## Settings Override @@ -147,7 +154,7 @@ Skills and agents are **not duplicated** in git. Instead: Included settings: - `statusLine` - Configurable HUD with presets (replaces legacy statusline.sh) -- `hooks` - Working Memory hooks (UserPromptSubmit, Stop, SessionStart, PreCompact) + Learning Stop hook +- `hooks` - Working Memory hooks (UserPromptSubmit, Stop, SessionStart, PreCompact) + Learning SessionEnd hook + KB SessionEnd hook - `env.ENABLE_TOOL_SEARCH` - Deferred MCP tool loading (~85% token savings) - `env.ENABLE_LSP_TOOL` - Language Server Protocol support - `env.CLAUDE_CODE_EXPERIMENTAL_AGENT_TEAMS` - Agent Teams for peer-to-peer collaboration @@ -158,7 +165,9 @@ Included settings: Four hooks in `scripts/hooks/` provide automatic session continuity. Toggleable via `devflow memory --enable/--disable/--status` or `devflow init --memory/--no-memory`. -A fifth hook (`session-end-learning`) provides self-learning. Toggleable via `devflow learn --enable/--disable/--status` or `devflow init --learn/--no-learn`: +A fifth hook (`session-end-kb-refresh`) provides feature KB maintenance. 
Toggleable via `devflow kb --enable/--disable/--status` or `devflow init --kb/--no-kb`. + +A sixth hook (`session-end-learning`) provides self-learning. Toggleable via `devflow learn --enable/--disable/--status` or `devflow init --learn/--no-learn`: | Hook | Event | File | Purpose | |------|-------|------|---------| @@ -167,6 +176,8 @@ A fifth hook (`session-end-learning`) provides self-learning. Toggleable via `de | `background-memory-update` | (background) | `.memory/WORKING-MEMORY.md` | Queue-based updater spawned by stop-update-memory. Reads queued turns + git state, writes WORKING-MEMORY.md via `claude -p --model haiku`. | | `session-start-memory` | SessionStart | reads WORKING-MEMORY.md | Injects previous memory + git state as `additionalContext`. Warns if >1h stale. Injects pre-compact snapshot when compaction occurred mid-session. | | `pre-compact-memory` | PreCompact | `.memory/backup.json` | Saves git state + WORKING-MEMORY.md snapshot. Bootstraps minimal WORKING-MEMORY.md if none exists. | +| `session-end-kb-refresh` | SessionEnd | `.features/index.json` | Checks for stale feature KBs. Throttled (<2h). Spawns background-kb-refresh. | +| `background-kb-refresh` | (background) | `.features/{slug}/KNOWLEDGE.md` | KB refresher. Up to 3 stale KBs via `claude -p --model sonnet`. | **Flow**: User sends prompt → UserPromptSubmit hook (prompt-capture-memory) appends user turn to `.memory/.pending-turns.jsonl`. Session ends → Stop hook appends assistant turn to queue, checks throttle (skips if <2min fresh), spawns background updater → background updater reads queued turns + git state → fresh `claude -p --model haiku` writes WORKING-MEMORY.md. On `/clear` or new session → SessionStart injects memory as `additionalContext` (system context, not user-visible) with staleness warning if >1h old. 
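The `session-end-kb-refresh` row above describes two gates: a 2-hour throttle and a cap of 3 stale KBs per run. A pure-function sketch of that decision, with illustrative names rather than the actual hook internals:

```javascript
// Illustrative sketch of the session-end-kb-refresh gating logic
// (2-hour throttle, at most 3 KBs per run); not the actual hook code.
const TWO_HOURS_MS = 2 * 60 * 60 * 1000;
const MAX_KBS_PER_RUN = 3;

function planRefresh(lastRunMs, nowMs, staleSlugs) {
  if (nowMs - lastRunMs < TWO_HOURS_MS) return []; // throttled: refreshed recently
  return staleSlugs.slice(0, MAX_KBS_PER_RUN);     // cap the background work
}
```

A non-empty result would correspond to spawning `background-kb-refresh` for the selected slugs; an empty one means the SessionEnd hook exits without spawning anything.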
@@ -180,8 +191,8 @@ Knowledge files in `.memory/knowledge/` capture decisions and pitfalls that agen | File | Format | Source | Purpose | |------|--------|--------|---------| -| `decisions.md` | ADR-NNN (sequential) | `/implement` Phase 11.5 | Architectural decisions — why choices were made | -| `pitfalls.md` | PF-NNN (sequential) | `/code-review`, `/debug`, `/resolve` | Known gotchas, fragile areas, past bugs | +| `decisions.md` | ADR-NNN (sequential) | `background-learning` | Architectural decisions — why choices were made | +| `pitfalls.md` | PF-NNN (sequential) | `background-learning` | Known gotchas, fragile areas, past bugs | Each file has a `` comment on line 1. SessionStart injects TL;DR headers only (~30-50 tokens). Agents read full files when relevant to their work. Cap: 50 entries per file. diff --git a/plugins/devflow-ambient/.claude-plugin/plugin.json b/plugins/devflow-ambient/.claude-plugin/plugin.json index acd4cdf..dabdbfb 100644 --- a/plugins/devflow-ambient/.claude-plugin/plugin.json +++ b/plugins/devflow-ambient/.claude-plugin/plugin.json @@ -27,7 +27,8 @@ "git", "synthesizer", "resolver", - "designer" + "designer", + "knowledge" ], "skills": [ "router", @@ -53,6 +54,8 @@ "qa", "worktree-support", "gap-analysis", - "design-review" + "design-review", + "feature-kb", + "apply-feature-kb" ] } diff --git a/plugins/devflow-ambient/agents/knowledge.md b/plugins/devflow-ambient/agents/knowledge.md new file mode 100644 index 0000000..a0567e9 --- /dev/null +++ b/plugins/devflow-ambient/agents/knowledge.md @@ -0,0 +1,58 @@ +--- +name: Knowledge +description: Structures codebase exploration into a feature knowledge base +model: sonnet +skills: + - devflow:feature-kb + - devflow:apply-feature-kb + - devflow:apply-knowledge + - devflow:worktree-support +tools: + - Read + - Grep + - Glob + - Write +--- + +# Knowledge Agent + +## Input Context + +- **FEATURE_SLUG** (required): Kebab-case identifier for the feature area (e.g., `cli-commands`) +- **FEATURE_NAME** 
(required): Human-readable name (e.g., "CLI Command System") +- **EXPLORATION_OUTPUTS** (required): Combined findings from Skimmer + Explore agents +- **DIRECTORIES** (required): Directory prefixes defining the feature area scope +- **KNOWLEDGE_CONTEXT** (optional): Existing ADR/PF index for cross-referencing +- **EXISTING_KB** (optional): Current KNOWLEDGE.md content when refreshing a stale KB +- **CHANGED_FILES** (optional): Files that changed since last KB update (for refresh) +- **WORKTREE_PATH** (optional): Worktree root for path resolution + +## Responsibilities + +1. **Resolve worktree path**: Use `devflow:worktree-support` to determine the working directory +2. **Orient on feature area**: Read EXPLORATION_OUTPUTS to understand the feature's architecture, patterns, and boundaries +3. **Follow the feature-kb skill**: Execute the 4-phase process (Scan → Extract → Distill → Forge) from `devflow:feature-kb` +4. **Cross-reference knowledge**: If KNOWLEDGE_CONTEXT is provided, reference relevant ADR/PF entries in the KB's "Related" section +5. **Handle refresh**: If EXISTING_KB is provided, update stale sections based on CHANGED_FILES while preserving any manually added content (user edits). Don't regenerate from scratch. +6. **Write KNOWLEDGE.md**: Write to `.features/{FEATURE_SLUG}/KNOWLEDGE.md` (create directory if needed) +7. **Write sidecar**: Write sidecar JSON file (`.create-result.json` or `.refresh-result.json`) with `referencedFiles` and `description` so the host process can update the index +8. 
**Report**: Output what was created/updated + +## Output + +``` +KB_STATUS: created | refreshed +KB_PATH: .features/{slug}/KNOWLEDGE.md +KB_SLUG: {slug} +KB_NAME: {name} +SECTIONS: [list of sections written] +REFERENCED_FILES: [files selected for staleness tracking] +CROSS_REFERENCES: [ADR/PF entries referenced, if any] +``` + +## Boundaries + +- **Only writes to `.features/` directory** — never modify source code +- **Never delete existing KBs** — only create new or refresh existing +- **500-line cap** — if KB exceeds 500 lines, split into focused sub-KBs (each gets own index entry) +- **No push, no external API calls** — local filesystem operations only diff --git a/plugins/devflow-code-review/.claude-plugin/plugin.json b/plugins/devflow-code-review/.claude-plugin/plugin.json index f5e2c71..282bf24 100644 --- a/plugins/devflow-code-review/.claude-plugin/plugin.json +++ b/plugins/devflow-code-review/.claude-plugin/plugin.json @@ -33,6 +33,7 @@ "review-methodology", "security", "testing", - "worktree-support" + "worktree-support", + "apply-feature-kb" ] } diff --git a/plugins/devflow-code-review/commands/code-review-teams.md b/plugins/devflow-code-review/commands/code-review-teams.md index 0a0eea3..31a73c2 100644 --- a/plugins/devflow-code-review/commands/code-review-teams.md +++ b/plugins/devflow-code-review/commands/code-review-teams.md @@ -95,7 +95,7 @@ Per worktree, detect file types in diff using `DIFF_RANGE` to determine conditio ### Phase 1b: Load Knowledge Index -**Produces:** KNOWLEDGE_CONTEXT +**Produces:** KNOWLEDGE_CONTEXT, FEATURE_KNOWLEDGE Load the knowledge index for the current worktree before spawning the review team: @@ -105,6 +105,14 @@ KNOWLEDGE_CONTEXT=$(node scripts/hooks/lib/knowledge-context.cjs index "{worktre This produces a compact index of active ADR/PF entries. Pass `KNOWLEDGE_CONTEXT` to each reviewer teammate prompt. Reviewers use `devflow:apply-knowledge` to Read full entry bodies on demand. +**Load Feature Knowledge:** +1. 
Read `.features/index.json` if it exists +2. Based on changed files from Phase 1 analysis, identify relevant KBs (match file paths against KB `directories` and `referencedFiles`) +3. For each match: check staleness via `node scripts/hooks/lib/feature-kb.cjs stale "{worktree}" {slug}`, read `.features/{slug}/KNOWLEDGE.md` +4. Set `FEATURE_KNOWLEDGE` (or `(none)` if no KBs exist or none are relevant) + +Pass `FEATURE_KNOWLEDGE` to each reviewer teammate alongside `KNOWLEDGE_CONTEXT`. + ### Phase 2: Spawn Review Team **Produces:** REVIEWER_OUTPUTS @@ -143,16 +151,18 @@ Spawn review teammates. For each teammate, compose a self-contained prompt using You are reviewing PR #{pr_number} on branch {branch} (base: {base_branch}). WORKTREE_PATH: {worktree_path} (omit if cwd) KNOWLEDGE_CONTEXT: {knowledge_context} + FEATURE_KNOWLEDGE: {feature_knowledge} 1. Read your skill(s): `Read {SKILL_PATHS}` 2. Read review methodology: `Read ~/.claude/skills/devflow:review-methodology/SKILL.md` 3. Follow devflow:apply-knowledge to scan KNOWLEDGE_CONTEXT index and Read full ADR/PF bodies on demand. Skip if (none). - 4. Get the diff: `git -C {WORKTREE_PATH} diff {DIFF_RANGE}` - 5. Apply the 6-step review process from devflow:review-methodology - 6. Focus: {FOCUS} - 7. Classify each finding: 🔴 BLOCKING / ⚠️ SHOULD-FIX / ℹ️ PRE-EXISTING - 8. Include file:line references for every finding - 9. Write your report: `Write to {worktree_path}/.docs/reviews/{branch_slug}/{timestamp}/{REPORT_NAME}.md` - 10. Report completion: SendMessage(type: "message", recipient: "team-lead", summary: "{SUMMARY}") + 4. Follow devflow:apply-feature-kb for FEATURE_KNOWLEDGE — feature-specific patterns and anti-patterns inform findings. Skip if (none). + 5. Get the diff: `git -C {WORKTREE_PATH} diff {DIFF_RANGE}` + 6. Apply the 6-step review process from devflow:review-methodology + 7. Focus: {FOCUS} + 8. Classify each finding: 🔴 BLOCKING / ⚠️ SHOULD-FIX / ℹ️ PRE-EXISTING + 9. 
Include file:line references for every finding + 10. Write your report: `Write to {worktree_path}/.docs/reviews/{branch_slug}/{timestamp}/{REPORT_NAME}.md` + 11. Report completion: SendMessage(type: "message", recipient: "team-lead", summary: "{SUMMARY}") **Core reviewers (always spawn):** diff --git a/plugins/devflow-code-review/commands/code-review.md b/plugins/devflow-code-review/commands/code-review.md index 9b6efff..6dde5e1 100644 --- a/plugins/devflow-code-review/commands/code-review.md +++ b/plugins/devflow-code-review/commands/code-review.md @@ -102,7 +102,7 @@ Per worktree, detect file types in diff using `DIFF_RANGE` to determine conditio ### Phase 1b: Load Knowledge Index -**Produces:** KNOWLEDGE_CONTEXT +**Produces:** KNOWLEDGE_CONTEXT, FEATURE_KNOWLEDGE While file analysis runs (or just before spawning reviewers), load the knowledge index for the current worktree: @@ -112,6 +112,14 @@ KNOWLEDGE_CONTEXT=$(node scripts/hooks/lib/knowledge-context.cjs index "{worktre This produces a compact index of active ADR/PF entries. Pass `KNOWLEDGE_CONTEXT` to all Reviewer agents. Reviewers use `devflow:apply-knowledge` to Read full entry bodies on demand. +**Load Feature Knowledge:** +1. Read `.features/index.json` if it exists +2. Based on changed files from Phase 1 analysis, identify relevant KBs (match file paths against KB `directories` and `referencedFiles`) +3. For each match: check staleness via `node scripts/hooks/lib/feature-kb.cjs stale "{worktree}" {slug}`, read `.features/{slug}/KNOWLEDGE.md` +4. Set `FEATURE_KNOWLEDGE` (or `(none)` if no KBs exist or none are relevant) + +Pass `FEATURE_KNOWLEDGE` to all Reviewer agents alongside `KNOWLEDGE_CONTEXT`. 
+ ### Phase 2: Run Reviews (Parallel) **Produces:** REVIEWER_OUTPUTS @@ -149,7 +157,9 @@ PR: #{pr_number}, Base: {base_branch} WORKTREE_PATH: {worktree_path} (omit if cwd) DIFF_COMMAND: git -C {WORKTREE_PATH} diff {DIFF_RANGE} (omit -C flag if no WORKTREE_PATH) KNOWLEDGE_CONTEXT: {knowledge_context} +FEATURE_KNOWLEDGE: {feature_knowledge} Follow devflow:apply-knowledge to scan the index and Read full ADR/PF bodies on demand. +Follow devflow:apply-feature-kb for FEATURE_KNOWLEDGE — feature-specific patterns and anti-patterns inform findings. IMPORTANT: Write report to {worktree_path}/.docs/reviews/{branch-slug}/{timestamp}/{focus}.md using Write tool" ``` diff --git a/plugins/devflow-core-skills/.claude-plugin/plugin.json b/plugins/devflow-core-skills/.claude-plugin/plugin.json index 1e1a8cf..ba25180 100644 --- a/plugins/devflow-core-skills/.claude-plugin/plugin.json +++ b/plugins/devflow-core-skills/.claude-plugin/plugin.json @@ -19,6 +19,7 @@ "agents": [], "skills": [ "apply-knowledge", + "apply-feature-kb", "software-design", "docs-framework", "git", diff --git a/plugins/devflow-debug/.claude-plugin/plugin.json b/plugins/devflow-debug/.claude-plugin/plugin.json index 3daf04b..1737a67 100644 --- a/plugins/devflow-debug/.claude-plugin/plugin.json +++ b/plugins/devflow-debug/.claude-plugin/plugin.json @@ -21,6 +21,7 @@ "skills": [ "agent-teams", "git", - "worktree-support" + "worktree-support", + "apply-feature-kb" ] } diff --git a/plugins/devflow-debug/commands/debug-teams.md b/plugins/devflow-debug/commands/debug-teams.md index 961271d..a0cf56b 100644 --- a/plugins/devflow-debug/commands/debug-teams.md +++ b/plugins/devflow-debug/commands/debug-teams.md @@ -25,7 +25,7 @@ Investigate bugs by spawning a team of agents, each pursuing a different hypothe ### Phase 1: Load Knowledge Index (Orchestrator-Local) -**Produces:** KNOWLEDGE_CONTEXT +**Produces:** KNOWLEDGE_CONTEXT, FEATURE_KNOWLEDGE Before hypothesizing, load the knowledge index: @@ -35,6 +35,12 @@ 
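The four "Load Feature Knowledge" steps in the hunks above reduce to a path match against each KB's `directories` and `referencedFiles`. A minimal sketch, assuming the index shape the diff describes (an object keyed by slug); this is not the actual `feature-kb.cjs` implementation:

```javascript
// Sketch of KB relevance matching from .features/index.json.
// The per-slug shape (directories prefixes + referencedFiles list) follows
// the diff's description; assume this, not the real helper's internals.
function relevantKBs(index, changedFiles) {
  return Object.entries(index)
    .filter(([, kb]) =>
      changedFiles.some(
        (f) =>
          (kb.directories || []).some((dir) => f.startsWith(dir)) ||
          (kb.referencedFiles || []).includes(f)
      )
    )
    .map(([slug]) => slug); // slugs whose KNOWLEDGE.md should be read
}
```

Each matched slug is then staleness-checked and its `.features/{slug}/KNOWLEDGE.md` read into `FEATURE_KNOWLEDGE`; no matches yields the `(none)` sentinel.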
KNOWLEDGE_CONTEXT=$(node scripts/hooks/lib/knowledge-context.cjs index "{worktre The orchestrator uses `KNOWLEDGE_CONTEXT` locally when generating hypotheses (Phase 2) — prior pitfalls and decisions can suggest specific root causes to investigate. Follow `devflow:apply-knowledge` to Read full entry bodies on demand. **Do NOT pass `KNOWLEDGE_CONTEXT` to investigator teammates** — knowledge context stays in the orchestrator; teammates examine code directly. +**Load Feature Knowledge:** +1. Read `.features/index.json` if it exists +2. Based on the bug description, identify relevant KBs +3. For each match: check staleness via `node scripts/hooks/lib/feature-kb.cjs stale "{worktree}" {slug}`, read `.features/{slug}/KNOWLEDGE.md` +4. Use `FEATURE_KNOWLEDGE` **locally only** for hypothesis generation — feature-specific gotchas and anti-patterns suggest root causes. **Do NOT pass to investigator teammates.** + ### Phase 2: Context Gathering **Produces:** HYPOTHESES, BUG_CONTEXT diff --git a/plugins/devflow-debug/commands/debug.md b/plugins/devflow-debug/commands/debug.md index 12ec57e..9cf8c36 100644 --- a/plugins/devflow-debug/commands/debug.md +++ b/plugins/devflow-debug/commands/debug.md @@ -33,7 +33,7 @@ Investigate bugs by spawning parallel agents, each pursuing a different hypothes ### Phase 1: Load Knowledge Index (Orchestrator-Local) -**Produces:** KNOWLEDGE_CONTEXT +**Produces:** KNOWLEDGE_CONTEXT, FEATURE_KNOWLEDGE Before hypothesizing, load the knowledge index: @@ -43,6 +43,12 @@ KNOWLEDGE_CONTEXT=$(node scripts/hooks/lib/knowledge-context.cjs index "{worktre The orchestrator uses `KNOWLEDGE_CONTEXT` locally when generating hypotheses (Phase 2) — prior pitfalls and decisions can suggest specific root causes to investigate. Follow `devflow:apply-knowledge` to Read full entry bodies on demand. **Do NOT pass `KNOWLEDGE_CONTEXT` to Explore investigators** — knowledge context stays in the orchestrator; investigators examine code directly. 
+**Load Feature Knowledge:** +1. Read `.features/index.json` if it exists +2. Based on the bug description, identify relevant KBs +3. For each match: check staleness via `node scripts/hooks/lib/feature-kb.cjs stale "{worktree}" {slug}`, read `.features/{slug}/KNOWLEDGE.md` +4. Use `FEATURE_KNOWLEDGE` **locally only** for hypothesis generation — feature-specific gotchas and anti-patterns suggest root causes. **Do NOT pass to Explore investigators.** + ### Phase 2: Context Gathering **Produces:** HYPOTHESES, BUG_CONTEXT diff --git a/plugins/devflow-implement/.claude-plugin/plugin.json b/plugins/devflow-implement/.claude-plugin/plugin.json index 2b1d465..0f1badb 100644 --- a/plugins/devflow-implement/.claude-plugin/plugin.json +++ b/plugins/devflow-implement/.claude-plugin/plugin.json @@ -30,6 +30,7 @@ "patterns", "qa", "quality-gates", - "worktree-support" + "worktree-support", + "apply-feature-kb" ] } diff --git a/plugins/devflow-implement/commands/implement-teams.md b/plugins/devflow-implement/commands/implement-teams.md index 915e3f2..b5da1fd 100644 --- a/plugins/devflow-implement/commands/implement-teams.md +++ b/plugins/devflow-implement/commands/implement-teams.md @@ -29,7 +29,7 @@ Orchestrate a single task through implementation by spawning specialized agent t ### Phase 1: Setup -**Produces:** TASK_ID, BASE_BRANCH, EXECUTION_PLAN +**Produces:** TASK_ID, BASE_BRANCH, EXECUTION_PLAN, FEATURE_KNOWLEDGE Record the current branch name as `BASE_BRANCH` - this will be the PR target. @@ -60,6 +60,12 @@ Return the branch setup summary." 5. Use extracted content as EXECUTION_PLAN for the Coder phase (replaces exploration/planning output) 6. Captured values override defaults from Git agent where present +**Load Feature Knowledge:** +1. Read `.features/index.json` if it exists +2. Based on task description and file targets, identify relevant KBs +3. 
For each match: check staleness via `node scripts/hooks/lib/feature-kb.cjs stale "{worktree}" {slug}`, read `.features/{slug}/KNOWLEDGE.md` +4. Set `FEATURE_KNOWLEDGE` (or `(none)` if no KBs exist or none are relevant) + ### Phase 2: Implement **Produces:** CODER_OUTPUT, FILES_CHANGED @@ -89,7 +95,8 @@ BASE_BRANCH: {base branch} EXECUTION_PLAN: {full plan from setup context} PATTERNS: {patterns from plan document or empty} CREATE_PR: true -DOMAIN: {detected domain or 'fullstack'}" +DOMAIN: {detected domain or 'fullstack'} +FEATURE_KNOWLEDGE: {feature_knowledge}" ``` --- @@ -108,6 +115,7 @@ EXECUTION_PLAN: {phase 1 steps} PATTERNS: {patterns from plan document or empty} CREATE_PR: false DOMAIN: {phase 1 domain, e.g., 'backend'} +FEATURE_KNOWLEDGE: {feature_knowledge} HANDOFF_REQUIRED: true" ``` @@ -123,6 +131,7 @@ CREATE_PR: {true if last phase, false otherwise} DOMAIN: {phase N domain, e.g., 'frontend'} PRIOR_PHASE_SUMMARY: {summary from previous Coder} FILES_FROM_PRIOR_PHASE: {list of files created} +FEATURE_KNOWLEDGE: {feature_knowledge} HANDOFF_REQUIRED: {true if not last phase}" ``` @@ -142,7 +151,8 @@ BASE_BRANCH: {base branch} EXECUTION_PLAN: {subtask 1 steps} PATTERNS: {patterns} CREATE_PR: false -DOMAIN: {subtask 1 domain}" +DOMAIN: {subtask 1 domain} +FEATURE_KNOWLEDGE: {feature_knowledge}" Agent(subagent_type="Coder"): # Coder 2 (same message) "TASK_ID: {task-id}-part2 @@ -151,7 +161,8 @@ BASE_BRANCH: {base branch} EXECUTION_PLAN: {subtask 2 steps} PATTERNS: {patterns} CREATE_PR: false -DOMAIN: {subtask 2 domain}" +DOMAIN: {subtask 2 domain} +FEATURE_KNOWLEDGE: {feature_knowledge}" ``` **Independence criteria** (all must be true for PARALLEL_CODERS): @@ -219,6 +230,7 @@ After Simplifier completes, spawn Scrutinizer as final quality gate: Agent(subagent_type="Scrutinizer"): "TASK_DESCRIPTION: {task description} FILES_CHANGED: {list of files from Coder output} +FEATURE_KNOWLEDGE: {feature_knowledge} Evaluate 9 pillars, fix P0/P1 issues, report status" ``` 
@@ -391,6 +403,11 @@ Design and execute scenario-based acceptance tests. Report PASS or FAIL with evi **Requires:** VALIDATION_RESULT, ALIGNMENT_RESULT, QA_RESULT, PR_URL +After quality gates pass, check for overlapping feature KBs whose `referencedFiles` intersect the changed files: +```bash +node scripts/hooks/lib/feature-kb.cjs find-overlapping "{worktree}" {files_changed...} +``` + Display completion summary with phase status, PR info, and next steps. When you apply a decision from `.memory/knowledge/decisions.md` or avoid a pitfall from `.memory/knowledge/pitfalls.md`, cite the entry ID in your final summary (e.g., 'applying ADR-003' or 'per PF-002') so usage can be tracked for capacity reviews. diff --git a/shared/agents/designer.md b/shared/agents/designer.md index 87bdd70..93e8814 100644 --- a/shared/agents/designer.md +++ b/shared/agents/designer.md @@ -7,6 +7,7 @@ skills: - devflow:apply-knowledge - devflow:gap-analysis - devflow:design-review + - devflow:apply-feature-kb --- # Designer Agent @@ -23,6 +24,7 @@ The orchestrator provides: **Worktree Support**: If `WORKTREE_PATH` is provided, follow the `devflow:worktree-support` skill for path resolution. If omitted, use cwd. - **KNOWLEDGE_CONTEXT** (optional): Compact index of active ADR/PF entries for this worktree (generated by `knowledge-context.cjs index`). `(none)` when absent. Use `devflow:apply-knowledge` to Read full bodies on demand. +- **FEATURE_KNOWLEDGE** (optional): Pre-computed feature area context for pattern-aware gap analysis. Incorporate feature area patterns and architecture into gap analysis — design additions that fit existing structure. Follow `devflow:apply-feature-kb`. 
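The `find-overlapping` check added in the completion hunk above intersects each KB's `referencedFiles` with the task's changed files. An illustrative sketch of that intersection, not the `feature-kb.cjs` source:

```javascript
// Sketch of the find-overlapping intersection: report KBs whose
// referencedFiles include any file changed by this task, so the
// completion summary can flag KBs that may now be stale.
// Illustrative only; not the actual feature-kb.cjs helper.
function findOverlapping(index, filesChanged) {
  const changed = new Set(filesChanged);
  return Object.entries(index)
    .filter(([, kb]) => (kb.referencedFiles || []).some((f) => changed.has(f)))
    .map(([slug]) => slug);
}
```

In the Phase-12/SessionEnd flow described earlier, these overlapping slugs are candidates for the background refresh rather than an immediate rewrite.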
## Apply Knowledge diff --git a/shared/agents/evaluator.md b/shared/agents/evaluator.md index f448d14..5dc5f83 100644 --- a/shared/agents/evaluator.md +++ b/shared/agents/evaluator.md @@ -5,6 +5,7 @@ model: opus skills: - devflow:software-design - devflow:worktree-support + - devflow:apply-feature-kb --- # Evaluator Agent @@ -21,6 +22,10 @@ You receive from orchestrator: **Worktree Support**: If `WORKTREE_PATH` is provided, follow the `devflow:worktree-support` skill for path resolution. If omitted, use cwd. +- **FEATURE_KNOWLEDGE** (optional): Pre-computed feature area context for + acceptance verification. Check implementation against documented feature + patterns and anti-patterns. Follow `devflow:apply-feature-kb`. + ## Responsibilities 1. **Understand intent**: Read ORIGINAL_REQUEST and EXECUTION_PLAN to understand what was requested @@ -35,7 +40,7 @@ You receive from orchestrator: | Wired | Connected to running app | Route registered, imported, reachable | Flag anything at "Exists" without reaching "Wired" as `incomplete`. -5. **Check completeness**: Verify all plan steps implemented, all acceptance criteria met +5. **Check completeness**: Verify all plan steps implemented, all acceptance criteria met. If FEATURE_KNOWLEDGE is provided, verify implementation follows documented patterns and avoids documented anti-patterns for the feature area 6. **Check scope**: Identify out-of-scope additions not justified by design improvements 7. 
**Report misalignments**: Document issues with sufficient detail for Coder to fix diff --git a/shared/agents/knowledge.md b/shared/agents/knowledge.md new file mode 100644 index 0000000..a0567e9 --- /dev/null +++ b/shared/agents/knowledge.md @@ -0,0 +1,58 @@ +--- +name: Knowledge +description: Structures codebase exploration into a feature knowledge base +model: sonnet +skills: + - devflow:feature-kb + - devflow:apply-feature-kb + - devflow:apply-knowledge + - devflow:worktree-support +tools: + - Read + - Grep + - Glob + - Write +--- + +# Knowledge Agent + +## Input Context + +- **FEATURE_SLUG** (required): Kebab-case identifier for the feature area (e.g., `cli-commands`) +- **FEATURE_NAME** (required): Human-readable name (e.g., "CLI Command System") +- **EXPLORATION_OUTPUTS** (required): Combined findings from Skimmer + Explore agents +- **DIRECTORIES** (required): Directory prefixes defining the feature area scope +- **KNOWLEDGE_CONTEXT** (optional): Existing ADR/PF index for cross-referencing +- **EXISTING_KB** (optional): Current KNOWLEDGE.md content when refreshing a stale KB +- **CHANGED_FILES** (optional): Files that changed since last KB update (for refresh) +- **WORKTREE_PATH** (optional): Worktree root for path resolution + +## Responsibilities + +1. **Resolve worktree path**: Use `devflow:worktree-support` to determine the working directory +2. **Orient on feature area**: Read EXPLORATION_OUTPUTS to understand the feature's architecture, patterns, and boundaries +3. **Follow the feature-kb skill**: Execute the 4-phase process (Scan → Extract → Distill → Forge) from `devflow:feature-kb` +4. **Cross-reference knowledge**: If KNOWLEDGE_CONTEXT is provided, reference relevant ADR/PF entries in the KB's "Related" section +5. **Handle refresh**: If EXISTING_KB is provided, update stale sections based on CHANGED_FILES while preserving any manually added content (user edits). Don't regenerate from scratch. +6. 
**Write KNOWLEDGE.md**: Write to `.features/{FEATURE_SLUG}/KNOWLEDGE.md` (create directory if needed) +7. **Write sidecar**: Write sidecar JSON file (`.create-result.json` or `.refresh-result.json`) with `referencedFiles` and `description` so the host process can update the index +8. **Report**: Output what was created/updated + +## Output + +``` +KB_STATUS: created | refreshed +KB_PATH: .features/{slug}/KNOWLEDGE.md +KB_SLUG: {slug} +KB_NAME: {name} +SECTIONS: [list of sections written] +REFERENCED_FILES: [files selected for staleness tracking] +CROSS_REFERENCES: [ADR/PF entries referenced, if any] +``` + +## Boundaries + +- **Only writes to `.features/` directory** — never modify source code +- **Never delete existing KBs** — only create new or refresh existing +- **500-line cap** — if KB exceeds 500 lines, split into focused sub-KBs (each gets own index entry) +- **No push, no external API calls** — local filesystem operations only diff --git a/shared/agents/resolver.md b/shared/agents/resolver.md index 7848aae..e5bc02c 100644 --- a/shared/agents/resolver.md +++ b/shared/agents/resolver.md @@ -9,6 +9,7 @@ skills: - devflow:test-driven-development - devflow:worktree-support - devflow:apply-knowledge + - devflow:apply-feature-kb --- # Resolver Agent @@ -22,6 +23,7 @@ You receive from orchestrator: - **BRANCH**: Current branch slug - **BATCH_ID**: Identifier for this batch of issues - **KNOWLEDGE_CONTEXT** (optional): Compact index of active ADR/PF entries for this worktree (generated by `knowledge-context.cjs index`). `(none)` when absent. Use `devflow:apply-knowledge` to Read full bodies on demand. +- **FEATURE_KNOWLEDGE** (optional): Pre-computed feature area context for convention-aware fixes. Feature patterns help determine if a fix follows area conventions. Follow `devflow:apply-feature-kb`. **Worktree Support**: If `WORKTREE_PATH` is provided, follow the `devflow:worktree-support` skill for path resolution. If omitted, use cwd. 
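The sidecar handshake in the Knowledge agent above (Responsibility 7) can be illustrated with a minimal sketch. Only `referencedFiles` and `description` are documented fields; the slug, paths, and values below are illustrative assumptions:

```javascript
// Hypothetical sidecar payload the Knowledge agent writes to
// .features/{slug}/.create-result.json (or .refresh-result.json on refresh).
// Only `referencedFiles` and `description` are documented; example values
// are invented for illustration.
const sidecar = {
  referencedFiles: ['src/payments/stripe.ts', 'src/payments/webhooks.ts'],
  description: 'Use when modifying payment capture or refund flows',
};

// Serialized form, as the host process would read it back.
const payload = JSON.stringify(sidecar, null, 2);
```

The host process reads this file after the agent exits and uses the fields to update `.features/index.json`, keeping index writes out of the agent itself.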
diff --git a/shared/agents/reviewer.md b/shared/agents/reviewer.md index 487e56e..4294217 100644 --- a/shared/agents/reviewer.md +++ b/shared/agents/reviewer.md @@ -6,6 +6,7 @@ skills: - devflow:review-methodology - devflow:worktree-support - devflow:apply-knowledge + - devflow:apply-feature-kb --- # Reviewer Agent @@ -20,6 +21,7 @@ The orchestrator provides: - **Output path**: Where to save findings (e.g., `.docs/reviews/{branch}/{timestamp}/{focus}.md`) - **DIFF_COMMAND** (optional): Specific diff command to use (e.g., `git diff {sha}...HEAD` for incremental reviews). If not provided, default to `git diff {base_branch}...HEAD`. - **KNOWLEDGE_CONTEXT** (optional): Compact index of active ADR/PF entries for this worktree (generated by `knowledge-context.cjs index`). `(none)` when absent. Use `devflow:apply-knowledge` to Read full bodies on demand. +- **FEATURE_KNOWLEDGE** (optional): Pre-computed feature area context for pattern-aware review. Feature-specific anti-patterns and gotchas inform findings — flag deviations from documented patterns. Follow `devflow:apply-feature-kb`. **Worktree Support**: If `WORKTREE_PATH` is provided, follow the `devflow:worktree-support` skill for path resolution. If omitted, use cwd. diff --git a/shared/agents/scrutinizer.md b/shared/agents/scrutinizer.md index c2e09ac..1e5ffcc 100644 --- a/shared/agents/scrutinizer.md +++ b/shared/agents/scrutinizer.md @@ -7,6 +7,7 @@ skills: - devflow:software-design - devflow:worktree-support - devflow:apply-knowledge + - devflow:apply-feature-kb --- # Scrutinizer Agent @@ -19,6 +20,7 @@ You receive from orchestrator: - **TASK_DESCRIPTION**: What was implemented - **FILES_CHANGED**: List of modified files from Coder output - **KNOWLEDGE_CONTEXT** (optional): Compact index of active ADR/PF entries for this worktree (generated by `knowledge-context.cjs index`). `(none)` when absent. Use `devflow:apply-knowledge` to Read full bodies on demand. 
+- **FEATURE_KNOWLEDGE** (optional): Pre-computed feature area context for pattern compliance checking. Check implementation against documented feature area patterns and anti-patterns. Follow `devflow:apply-feature-kb`. **Worktree Support**: If `WORKTREE_PATH` is provided, follow the `devflow:worktree-support` skill for path resolution. If omitted, use cwd. diff --git a/shared/skills/apply-feature-kb/SKILL.md b/shared/skills/apply-feature-kb/SKILL.md new file mode 100644 index 0000000..00db6bf --- /dev/null +++ b/shared/skills/apply-feature-kb/SKILL.md @@ -0,0 +1,67 @@ +--- +name: apply-feature-kb +description: Consumption algorithm for FEATURE_KNOWLEDGE variable — pre-computed feature context +allowed-tools: Read +--- + +# Apply Feature Knowledge + +## Iron Law + +> **Pre-computed context, not a cage. Verify against current code when assumptions seem outdated.** +> +> A feature KB captures patterns AS THEY WERE when last updated. Code evolves. +> Use the KB as a starting point, not gospel truth. + +--- + +## 3-Step Algorithm + +### Step 1: Read the KB + +When `FEATURE_KNOWLEDGE` is provided and is not `(none)`: + +1. Read each KB section (separated by `--- Feature KB: {slug} ---` headers) +2. Absorb: architecture, data flow, key patterns, anti-patterns, gotchas +3. Note integration points that relate to your current task + +### Step 2: Apply to Current Task + +1. **Patterns as defaults**: Follow documented patterns unless you have a specific reason not to +2. **Anti-patterns as warnings**: Check your work against documented anti-patterns +3. **Gotchas as checklists**: Verify each gotcha doesn't apply to your changes +4. **Integration points**: Ensure your changes respect documented boundaries +5. 
**Key files**: Use as starting points for exploration + +### Step 3: Supplement as Needed + +The KB may not cover everything: +- If the KB doesn't address your specific area, explore further +- If the KB seems outdated (marked `[STALE]`), verify against current code +- If you discover new patterns, note them — they may become KB updates + +--- + +## Skip Guard + +When `FEATURE_KNOWLEDGE` is `(none)`, empty, or not provided — skip this skill entirely. +Do not mention feature knowledge or its absence in your output. + +## Staleness Handling + +KBs marked with `[STALE — referenced files changed since last update. Verify against current code.]`: +- Treat as **lower-confidence** context +- Verify key assertions against current code before relying on them +- Don't assume anti-patterns or gotchas are still valid +- Still use as a starting point — stale context is better than no context + +## Concatenation Format + +Multiple KBs are concatenated with slug headers: +``` +--- Feature KB: payments --- +[full KNOWLEDGE.md content] + +--- Feature KB: auth --- +[full KNOWLEDGE.md content] +``` diff --git a/shared/skills/debug:orch/SKILL.md b/shared/skills/debug:orch/SKILL.md index 97a398a..e8d0288 100644 --- a/shared/skills/debug:orch/SKILL.md +++ b/shared/skills/debug:orch/SKILL.md @@ -24,9 +24,9 @@ This is a lightweight variant of `/debug` for ambient ORCHESTRATED mode. Exclude If the orchestrator receives a `WORKTREE_PATH` context (e.g., from multi-worktree workflows), pass it through to all spawned agents. Each agent's "Worktree Support" section handles path resolution. 
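The Concatenation Format and Staleness Handling rules in the `apply-feature-kb` skill above can be sketched as one small helper. This is a minimal sketch: `buildFeatureKnowledge` is a hypothetical name, since orchestrators assemble this string inline rather than calling a library function:

```javascript
// Prefix prepended to stale KB content, per the Staleness Handling rules.
const STALE_PREFIX =
  '[STALE — referenced files changed since last update. Verify against current code.]';

// kbs: list of { slug, content, stale } entries the orchestrator has already
// selected and read. Returns the FEATURE_KNOWLEDGE string, or the skip-guard
// value '(none)' when nothing is relevant.
function buildFeatureKnowledge(kbs) {
  if (!kbs || kbs.length === 0) return '(none)';
  return kbs
    .map(({ slug, content, stale }) => {
      const body = stale ? `${STALE_PREFIX}\n${content}` : content;
      return `--- Feature KB: ${slug} ---\n${body}`;
    })
    .join('\n\n');
}
```

Returning `(none)` for the empty case matches the skip guard: consumers can test for that sentinel and skip the skill entirely.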
-## Phase 0: Load Knowledge Index (Orchestrator-Local) +## Phase 1: Load Knowledge Index (Orchestrator-Local) -**Produces:** KNOWLEDGE_CONTEXT +**Produces:** KNOWLEDGE_CONTEXT, FEATURE_KNOWLEDGE Before hypothesizing, load the knowledge index: @@ -36,7 +36,14 @@ KNOWLEDGE_CONTEXT=$(node scripts/hooks/lib/knowledge-context.cjs index "{worktre The orchestrator uses `KNOWLEDGE_CONTEXT` locally when generating hypotheses (Phase 2) — prior pitfalls and decisions can suggest specific root causes to investigate. Follow `devflow:apply-knowledge` to Read full entry bodies on demand. **Do NOT pass `KNOWLEDGE_CONTEXT` to Explore sub-agents** — knowledge context stays in the orchestrator, not in the investigation workers. -## Phase 1: Hypothesize +Also load feature knowledge: +1. Read `.features/index.json` if it exists +2. Based on the bug description, identify relevant KBs +3. Read matching KB files, check staleness via `node scripts/hooks/lib/feature-kb.cjs stale "{worktree}" {slug}` +4. Use `FEATURE_KNOWLEDGE` **locally** for hypothesis generation — feature-specific gotchas and anti-patterns suggest root causes +5. **Do NOT pass to Explore sub-agents** (same asymmetric pattern as KNOWLEDGE_CONTEXT) + +## Phase 2: Hypothesize **Produces:** HYPOTHESES **Requires:** KNOWLEDGE_CONTEXT @@ -49,7 +56,7 @@ Analyze the bug description, error messages, and conversation context. Generate If fewer than 3 hypotheses are possible, proceed with 2.
-## Phase 2: Investigate (Parallel) +## Phase 3: Investigate (Parallel) **Produces:** INVESTIGATION_RESULTS **Requires:** HYPOTHESES @@ -60,7 +67,7 @@ Spawn one `Agent(subagent_type="Explore")` per hypothesis **in a single message* - Must provide file:line references for all evidence - Returns verdict: **CONFIRMED** | **DISPROVED** | **PARTIAL** (some evidence supports, some contradicts) -## Phase 3: Converge +## Phase 4: Converge **Produces:** CONVERGENCE_DECISION **Requires:** INVESTIGATION_RESULTS @@ -71,7 +78,7 @@ Evaluate investigation results: - **Multiple PARTIAL**: Look for a unifying root cause that explains all partial evidence - **All DISPROVED**: Report honestly — "No root cause identified from initial hypotheses." Generate 2-3 second-round hypotheses if conversation context suggests avenues not yet explored. -## Phase 4: Report +## Phase 5: Report **Produces:** ROOT_CAUSE_REPORT **Requires:** CONVERGENCE_DECISION, INVESTIGATION_RESULTS @@ -83,7 +90,7 @@ Present root cause analysis: - **Root cause**: Clear statement of what's wrong and why - **Recommended fix**: Specific changes with file references -## Phase 5: Offer Fix +## Phase 6: Offer Fix **Requires:** ROOT_CAUSE_REPORT @@ -101,11 +108,11 @@ Ask user via AskUserQuestion: "Want me to implement this fix?" 
Before reporting results, verify every phase was announced: -- [ ] Phase 0: Load Knowledge Index → KNOWLEDGE_CONTEXT captured -- [ ] Phase 1: Hypothesize → HYPOTHESES captured (3-5 distinct) -- [ ] Phase 2: Investigate → INVESTIGATION_RESULTS captured per hypothesis -- [ ] Phase 3: Converge → CONVERGENCE_DECISION captured -- [ ] Phase 4: Report → ROOT_CAUSE_REPORT presented -- [ ] Phase 5: Offer Fix → User asked, response handled +- [ ] Phase 1: Load Knowledge Index → KNOWLEDGE_CONTEXT captured, FEATURE_KNOWLEDGE loaded (orchestrator-local only, or skipped if `.features/` absent) +- [ ] Phase 2: Hypothesize → HYPOTHESES captured (3-5 distinct) +- [ ] Phase 3: Investigate → INVESTIGATION_RESULTS captured per hypothesis +- [ ] Phase 4: Converge → CONVERGENCE_DECISION captured +- [ ] Phase 5: Report → ROOT_CAUSE_REPORT presented +- [ ] Phase 6: Offer Fix → User asked, response handled If any phase is unchecked, execute it before proceeding. diff --git a/shared/skills/explore:orch/SKILL.md b/shared/skills/explore:orch/SKILL.md index 3610ad7..47073c6 100644 --- a/shared/skills/explore:orch/SKILL.md +++ b/shared/skills/explore:orch/SKILL.md @@ -22,13 +22,39 @@ Agent pipeline for EXPLORE intent in ambient GUIDED and ORCHESTRATED modes. Code For GUIDED depth, the main session performs exploration directly: -1. **Spawn Skimmer** — `Agent(subagent_type="Skimmer")` targeting the area of interest. Use orientation output to ground exploration in real file structures and patterns. -2. **Trace** — Using Skimmer findings, trace the flow or analyze the subsystem directly in main session. Follow call chains, read key files, map integration points. -3. **Present** — Deliver structured findings using the Output format below. Use AskUserQuestion to offer drill-down into specific areas. +1. **Load Knowledge** — Run `node scripts/hooks/lib/knowledge-context.cjs index "{worktree}"` for KNOWLEDGE_CONTEXT. Read `.features/index.json` if it exists. 
Based on the exploration question, identify relevant KBs and read them. Use both locally to frame exploration. Set `FEATURE_KNOWLEDGE = (none)` if none are relevant. +2. **Spawn Skimmer** — `Agent(subagent_type="Skimmer")` targeting the area of interest. Use orientation output to ground exploration in real file structures and patterns. +3. **Trace** — Using Skimmer findings + `FEATURE_KNOWLEDGE`, trace the flow or analyze the subsystem directly in main session. Follow call chains, read key files, map integration points. +4. **Present** — Deliver structured findings using the Output format below. Use AskUserQuestion to offer drill-down into specific areas. ## ORCHESTRATED Pipeline -### Phase 1: Orient +### Phase 1: Load Knowledge (Orchestrator-Local) + +**Produces:** KNOWLEDGE_CONTEXT, FEATURE_KNOWLEDGE + +Before exploring, load the knowledge index: + +```bash +KNOWLEDGE_CONTEXT=$(node scripts/hooks/lib/knowledge-context.cjs index "{worktree}") +``` + +The orchestrator uses `KNOWLEDGE_CONTEXT` locally when framing exploration — prior +decisions and pitfalls suggest specific areas to investigate. Follow +`devflow:apply-knowledge` to Read full entry bodies on demand. **Do NOT pass +`KNOWLEDGE_CONTEXT` to Explore sub-agents** — knowledge context stays in the +orchestrator, not in the investigation workers. + +Also load feature knowledge: +1. Read `.features/index.json` if it exists. If not, set `FEATURE_KNOWLEDGE = (none)`. +2. Identify relevant KBs (match task intent against KB descriptions and directories). +3. For each match: check staleness via `node scripts/hooks/lib/feature-kb.cjs stale "{worktree}" {slug}`, read `.features/{slug}/KNOWLEDGE.md`. +4. Use `FEATURE_KNOWLEDGE` **locally** for exploration framing — feature-specific patterns and integration points guide where to focus. +5. **Do NOT pass to Explore sub-agents** (same asymmetric pattern as KNOWLEDGE_CONTEXT). 
+ +**Explore agent framing**: "The KB is a baseline — your job is to VALIDATE, EXTEND, and CORRECT it, not repeat it. Focus on areas the KB doesn't cover and things that may have changed." + +### Phase 2: Orient **Produces:** ORIENT_OUTPUT @@ -38,7 +64,7 @@ Spawn `Agent(subagent_type="Skimmer")` to get codebase overview relevant to the - Entry points and key abstractions - Related patterns and conventions -### Phase 2: Explore +### Phase 3: Explore **Produces:** EXPLORE_OUTPUT **Requires:** ORIENT_OUTPUT @@ -51,7 +77,7 @@ Based on Skimmer findings, spawn 2-3 `Agent(subagent_type="Explore")` agents **i Adjust explorer focus based on the specific exploration question. -### Phase 3: Synthesize +### Phase 4: Synthesize **Produces:** MERGED_FINDINGS **Requires:** EXPLORE_OUTPUT @@ -62,7 +88,7 @@ Spawn `Agent(subagent_type="Synthesizer")` in `exploration` mode with combined f - Resolve any contradictions between explorer findings - Organize into the Output format below -### Phase 4: Present +### Phase 5: Present **Requires:** MERGED_FINDINGS @@ -93,9 +119,10 @@ Structured exploration findings with concrete code references: Before presenting findings, verify every phase was announced: -- [ ] Phase 1: Orient → ORIENT_OUTPUT captured -- [ ] Phase 2: Explore → EXPLORE_OUTPUT captured -- [ ] Phase 3: Synthesize → MERGED_FINDINGS captured -- [ ] Phase 4: Present → Findings delivered with file:line references +- [ ] Phase 1: Load Knowledge (Orchestrator-Local) → KNOWLEDGE_CONTEXT and FEATURE_KNOWLEDGE captured (orchestrator-local, not passed to workers) +- [ ] Phase 2: Orient → ORIENT_OUTPUT captured +- [ ] Phase 3: Explore → EXPLORE_OUTPUT captured +- [ ] Phase 4: Synthesize → MERGED_FINDINGS captured +- [ ] Phase 5: Present → Findings delivered with file:line references If any phase is unchecked, execute it before proceeding. 
diff --git a/shared/skills/feature-kb/SKILL.md b/shared/skills/feature-kb/SKILL.md new file mode 100644 index 0000000..9233479 --- /dev/null +++ b/shared/skills/feature-kb/SKILL.md @@ -0,0 +1,117 @@ +--- +name: feature-kb +description: Structures codebase exploration into a feature knowledge base +trigger: agent-loaded +allowed-tools: + - Read + - Grep + - Glob + - Write +--- + +# Feature Knowledge Base Creation + +## Iron Law + +> **Knowledge that can't be derived from reading one file — capture cross-cutting understanding.** +> +> A KB exists to save the NEXT agent from rediscovering patterns that span multiple files, +> modules, or layers. If it's obvious from a single file read, don't capture it. + +--- + +## Four-Phase Process + +### Phase 1: Scan + +Identify the feature area boundaries: +- Directory prefixes (e.g., `src/payments/`, `src/auth/`) +- Key entry points and exports +- Configuration and wiring files +- Test directories + +### Phase 2: Extract + +For each key file, extract: +- **Architecture**: Module boundaries, dependency graph, data flow +- **Conventions**: Naming patterns, file organization, API style +- **Component Patterns**: Reusable structures, composition patterns +- **Domain Knowledge**: Business rules, invariants, edge cases +- **Integration Points**: How this area connects to other areas + +### Phase 3: Distill + +Compress findings into actionable knowledge: +- Remove obvious/derivable information +- Highlight non-obvious patterns and gotchas +- Cross-reference with ADR/PF entries where relevant +- Identify anti-patterns specific to this area + +### Phase 4: Forge + +Write the KNOWLEDGE.md file with this structure: + +```markdown +--- +feature: {slug} +name: {human-readable name} +directories: [{dir prefixes}] +referencedFiles: [{key files for staleness tracking}] +created: {ISO date} +updated: {ISO date} +--- + +# {Feature Area Name} + +## Overview +[2-3 sentence summary of what this area does and why it exists] + +## Architecture 
+[Module boundaries, key abstractions, dependency direction] + +## Data Flow +[How data moves through this area — inputs, transformations, outputs] + +## Key Patterns +[Patterns unique to this area that agents should follow] + +## Integration Points +[How this area connects to other areas of the codebase] + +## Anti-Patterns +[Things that look right but are wrong in this specific area] + +## Gotchas +[Non-obvious behaviors, edge cases, things that break silently] + +## Key Files +[Most important files with one-line descriptions] + +## Related +[Links to ADR/PF entries, other KBs, external docs] +``` + +--- + +## Quality Self-Checks + +| Red Flag | Fix | +|----------|-----| +| KB > 500 lines | Split into focused sub-areas | +| Restates what's obvious from one file | Remove — KB is for cross-cutting knowledge | +| No anti-patterns section | Every area has gotchas — dig deeper | +| No integration points | How does this connect to rest of system? | +| Broad directories (e.g., `src/`) | Focus on specific subdirectories | +| No referenced files for staleness | Pick 5-10 key files that signal changes | + +## Integration + +After writing KNOWLEDGE.md, update the index: +```bash +node scripts/hooks/lib/feature-kb.cjs update-index "{worktree}" \ + --slug="{slug}" --name="{name}" \ + --directories='["{dir1}", "{dir2}"]' \ + --referencedFiles='["{file1}", "{file2}"]' \ + --description="Use when {trigger description}" \ + --createdBy="{source}" +``` diff --git a/shared/skills/implement:orch/SKILL.md b/shared/skills/implement:orch/SKILL.md index 3204b0c..412ab30 100644 --- a/shared/skills/implement:orch/SKILL.md +++ b/shared/skills/implement:orch/SKILL.md @@ -8,7 +8,7 @@ user-invocable: false Agent pipeline for IMPLEMENT intent in ambient ORCHESTRATED mode. Pre-flight checks, plan synthesis, Coder execution, and quality gates. -This is a lightweight variant of `/implement` for ambient ORCHESTRATED mode. 
Excluded: strategy selection (single/sequential/parallel Coders), retry loops, PR creation, knowledge loading. +This is a lightweight variant of `/implement` for ambient ORCHESTRATED mode. Excluded: strategy selection (single/sequential/parallel Coders), retry loops, PR creation. ## Iron Law @@ -28,10 +28,10 @@ Before starting the full pipeline, check for re-validation context: If this condition is true → execute **Re-validation Path**: 1. **Branch safety check**: If current branch is protected (main, master, etc.), execute Phase 1 first to create/switch to a work branch. If already on a work branch, skip Phase 1. -2. Skip Phases 2-3 (no Coder needed) -3. Run Phase 4 (FILES_CHANGED Detection) using the existing branch -4. Run Phase 5 (Quality Gates) on detected changes -5. Proceed to Phase 6 (Completion) +2. Skip Phases 3-4 (no Coder needed) +3. Run Phase 5 (FILES_CHANGED Detection) using the existing branch +4. Run Phase 6 (Quality Gates) on detected changes +5. Proceed to Phase 7 (Completion) If not → proceed with the full pipeline below. @@ -56,7 +56,23 @@ Return the branch setup summary." Capture `branch name` and `BASE_BRANCH` from Git agent output for use throughout the pipeline. -## Phase 2: Plan Synthesis +## Phase 2: Load Knowledge + +**Produces:** KNOWLEDGE_CONTEXT, FEATURE_KNOWLEDGE + +Load the knowledge index: +```bash +KNOWLEDGE_CONTEXT=$(node scripts/hooks/lib/knowledge-context.cjs index "{worktree}") +``` +Pass `KNOWLEDGE_CONTEXT` to Coder (Phase 4) and Scrutinizer (Phase 6). + +1. Check if `.features/index.json` exists. If not, set `FEATURE_KNOWLEDGE = (none)` and skip. +2. Read `.features/index.json`. +3. Based on the EXECUTION_PLAN file targets and task description, identify relevant KBs. +4. For each relevant KB: check staleness via `node scripts/hooks/lib/feature-kb.cjs stale "{worktree}" {slug}`, read `.features/{slug}/KNOWLEDGE.md`, mark stale if needed. +5. Concatenate as `FEATURE_KNOWLEDGE` (or `(none)` if no matches). 
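Steps 1-5 of the Load Knowledge phase above can be sketched as follows. The index shape mirrors the fields passed to `feature-kb.cjs update-index`, but exact field names in the real `.features/index.json` are an assumption, and matching against the task description itself remains LLM judgment rather than code:

```javascript
// Assumed shape of .features/index.json: an object keyed by slug, echoing
// the flags given to `feature-kb.cjs update-index`. Entries are invented
// examples.
const index = {
  payments: {
    name: 'Payments',
    directories: ['src/payments/'],
    referencedFiles: ['src/payments/stripe.ts'],
    description: 'Use when touching payment flows',
  },
  auth: {
    name: 'Auth',
    directories: ['src/auth/'],
    referencedFiles: ['src/auth/session.ts'],
    description: 'Use when touching authentication',
  },
};

// The mechanical half of relevance matching: a KB is relevant when any
// planned file target falls under one of its directory prefixes.
function selectRelevantKbs(index, fileTargets) {
  return Object.keys(index).filter((slug) =>
    index[slug].directories.some((dir) =>
      fileTargets.some((f) => f.startsWith(dir))
    )
  );
}
```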
+ +## Phase 3: Plan Synthesis **Produces:** EXECUTION_PLAN **Requires:** FEATURE_BRANCH @@ -72,7 +88,7 @@ Format as structured markdown with: Goal, Steps, Files, Constraints, Decisions. If the orchestrator receives a `WORKTREE_PATH` context (e.g., from multi-worktree workflows), pass it through to all spawned agents. Each agent's "Worktree Support" section handles path resolution. -## Phase 3: Coder Execution +## Phase 4: Coder Execution **Produces:** CODER_COMMITS, PRE_CODER_SHA **Requires:** EXECUTION_PLAN, FEATURE_BRANCH @@ -83,10 +99,12 @@ Spawn `Agent(subagent_type="Coder")` with input variables: - **TASK_ID**: Generated from timestamp (e.g., `task-2026-03-19_1430`) - **TASK_DESCRIPTION**: From conversation context - **BASE_BRANCH**: Current branch (or newly created branch from Phase 1) -- **EXECUTION_PLAN**: From Phase 2 +- **EXECUTION_PLAN**: From Phase 3 - **PATTERNS**: Codebase patterns from conversation context - **CREATE_PR**: `false` (commit only, no push) - **DOMAIN**: Inferred from files in scope (`backend`, `frontend`, `tests`, `fullstack`) +- **FEATURE_KNOWLEDGE**: From Phase 2 (or `(none)`) +- **KNOWLEDGE_CONTEXT**: From Phase 2 (or `(none)`) **Execution strategy**: Single sequential Coder by default. Parallel Coders only when tasks are self-contained — zero shared contracts, no integration points, different files/modules with no imports between them. @@ -96,7 +114,7 @@ If Coder returns **BLOCKED**, halt the pipeline and report to user. **Handoff artifact** (when HANDOFF_REQUIRED=true): After Coder completes, write the phase summary to `.docs/handoff.md` using the Write tool. The next Coder reads this on startup (see Coder agent Responsibility 1). This survives context compaction — unlike PRIOR_PHASE_SUMMARY which is context-mediated. 
-## Phase 4: FILES_CHANGED Detection +## Phase 5: FILES_CHANGED Detection **Produces:** FILES_CHANGED **Requires:** PRE_CODER_SHA @@ -109,7 +127,7 @@ git diff --name-only {starting_sha}...HEAD Pass FILES_CHANGED to all quality gate agents. -## Phase 5: Quality Gates +## Phase 6: Quality Gates **Produces:** GATE_RESULTS **Requires:** FILES_CHANGED, CODER_COMMITS @@ -118,19 +136,25 @@ Run sequentially — each gate must pass before the next: 1. `Agent(subagent_type="Validator")` (build + typecheck + lint + tests) — retry up to 2× on failure (Coder fixes between retries) 2. `Agent(subagent_type="Simplifier")` — code clarity and maintainability pass on FILES_CHANGED -3. `Agent(subagent_type="Scrutinizer")` — 9-pillar quality evaluation on FILES_CHANGED +3. `Agent(subagent_type="Scrutinizer")` — 9-pillar quality evaluation on FILES_CHANGED, with `KNOWLEDGE_CONTEXT` and `FEATURE_KNOWLEDGE` from Phase 2 4. `Agent(subagent_type="Validator")` (re-validate after Simplifier/Scrutinizer changes) -5. `Agent(subagent_type="Evaluator")` — verify implementation matches original request — retry up to 2× if misalignment found +5. `Agent(subagent_type="Evaluator")` — verify implementation matches original request, with `FEATURE_KNOWLEDGE` from Phase 2 — retry up to 2× if misalignment found 6. `Agent(subagent_type="Tester")` — scenario-based acceptance testing from user's perspective — retry up to 2× if QA fails If any gate exhausts retries, halt pipeline and report what passed and what failed. -## Phase 6: Completion +## Phase 7: Completion **Requires:** GATE_RESULTS, FILES_CHANGED, CODER_COMMITS Cleanup: delete `.docs/handoff.md` if it exists (no longer needed after pipeline completes). +After quality gates pass, check for overlapping KBs whose `referencedFiles` intersect FILES_CHANGED: +```bash +node scripts/hooks/lib/feature-kb.cjs find-overlapping "{worktree}" {files_changed...} +``` +This signals staleness for the next plan cycle. 
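A minimal sketch of the overlap rule that `find-overlapping` applies: report the slugs of KBs whose `referencedFiles` intersect FILES_CHANGED. The real implementation lives in `scripts/hooks/lib/feature-kb.cjs`; the function name and index shape here are assumptions:

```javascript
// index: object keyed by slug, each entry carrying a referencedFiles list.
// filesChanged: the FILES_CHANGED list from the diff-detection phase.
// Returns slugs of KBs touched by this change set, signaling staleness
// for the next plan cycle.
function findOverlappingKbs(index, filesChanged) {
  const changed = new Set(filesChanged);
  return Object.keys(index).filter((slug) =>
    index[slug].referencedFiles.some((f) => changed.has(f))
  );
}
```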
+ Report results: - Commits created (from Coder) - Files changed @@ -149,10 +173,11 @@ Report results: Before reporting results, verify every phase was announced: - [ ] Phase 1: Pre-flight → BASE_BRANCH, FEATURE_BRANCH captured -- [ ] Phase 2: Plan Synthesis → EXECUTION_PLAN captured -- [ ] Phase 3: Coder Execution → CODER_COMMITS, PRE_CODER_SHA captured -- [ ] Phase 4: FILES_CHANGED Detection → FILES_CHANGED captured -- [ ] Phase 5: Quality Gates → GATE_RESULTS captured (per gate: pass/fail) -- [ ] Phase 6: Completion → Results reported +- [ ] Phase 2: Load Knowledge → KNOWLEDGE_CONTEXT and FEATURE_KNOWLEDGE captured (or skipped) +- [ ] Phase 3: Plan Synthesis → EXECUTION_PLAN captured +- [ ] Phase 4: Coder Execution → CODER_COMMITS, PRE_CODER_SHA captured +- [ ] Phase 5: FILES_CHANGED Detection → FILES_CHANGED captured +- [ ] Phase 6: Quality Gates → GATE_RESULTS captured (per gate: pass/fail) +- [ ] Phase 7: Completion → Results reported, overlapping KBs checked If any phase is unchecked, execute it before proceeding. diff --git a/shared/skills/pipeline:orch/SKILL.md b/shared/skills/pipeline:orch/SKILL.md index c668bc6..9beb27c 100644 --- a/shared/skills/pipeline:orch/SKILL.md +++ b/shared/skills/pipeline:orch/SKILL.md @@ -17,6 +17,10 @@ Meta-orchestrator chaining implement → review → resolve with status reportin --- +## Feature Knowledge + +`FEATURE_KNOWLEDGE` loading is handled by each sub-orchestrator (implement:orch Phase 2, review:orch Phase 3, resolve:orch Phase 2). Pipeline:orch does NOT load KBs itself — it delegates to the inner skills which handle loading, staleness checks, and agent distribution independently. 
+ ## Cost Communication Classification statement must warn about scope: @@ -28,7 +32,7 @@ Classification statement must warn about scope: **Produces:** IMPLEMENT_RESULT -Load `devflow:implement:orch` via the Skill tool, then execute its full pipeline (Phases 1-6: pre-flight → plan synthesis → Coder → FILES_CHANGED detection → quality gates → completion). The quality gates are non-negotiable: Validator → Simplifier → Scrutinizer → re-Validate → Evaluator → Tester. +Load `devflow:implement:orch` via the Skill tool, then execute its full pipeline (Phases 1-7: pre-flight → feature knowledge → plan synthesis → Coder → FILES_CHANGED detection → quality gates → completion). The quality gates are non-negotiable: Validator → Simplifier → Scrutinizer → re-Validate → Evaluator → Tester. If implementation returns **BLOCKED**: halt entire pipeline, report blocker. @@ -49,7 +53,7 @@ Auto-proceed to Phase 3. **Produces:** REVIEW_RESULT **Requires:** IMPLEMENT_RESULT -Load `devflow:review:orch` via the Skill tool, then execute its full pipeline (Phases 1-6: pre-flight → incremental detection → file analysis → parallel reviewers (7 core + conditional) → synthesis → finalize). All 7 core reviewers (security, architecture, performance, complexity, consistency, testing, regression) are mandatory. +Load `devflow:review:orch` via the Skill tool, then execute its full pipeline (Phases 1-7: pre-flight → incremental detection → knowledge index → file analysis → parallel reviewers (7 core + conditional) → synthesis → finalize). All 7 core reviewers (security, architecture, performance, complexity, consistency, testing, regression) are mandatory. Report review results (merge recommendation, issue counts). 
@@ -71,7 +75,7 @@ If **no blocking issues**: **Produces:** RESOLVE_RESULT **Requires:** RESOLVE_DECISION, REVIEW_RESULT -Load `devflow:resolve:orch` via the Skill tool, then execute its full pipeline (Phases 1-6: target review directory → parse issues → analyze & batch → parallel resolvers → collect & simplify → report). +Load `devflow:resolve:orch` via the Skill tool, then execute its full pipeline (Phases 1-7: target review directory → project knowledge → parse issues → analyze & batch → parallel resolvers → collect & simplify → report). ## Phase 6: Summary diff --git a/shared/skills/plan:orch/SKILL.md b/shared/skills/plan:orch/SKILL.md index 4b8b8ce..b554afc 100644 --- a/shared/skills/plan:orch/SKILL.md +++ b/shared/skills/plan:orch/SKILL.md @@ -24,10 +24,11 @@ This is a focused variant of the `/plan` command pipeline for ambient ORCHESTRAT For GUIDED depth, the main session performs planning directly: -0. **Discover** — If the planning question is open-ended, ask clarifying questions via AskUserQuestion and present 2-3 approaches with tradeoffs before orienting. Skip if the user's prompt is already specific. If the user says "skip" or "just proceed": skip remaining questions, present inferred scope for confirmation. -1. **Spawn Skimmer** — `Agent(subagent_type="Skimmer")` targeting the area of interest. Use orientation output to ground design decisions in real file structures and patterns. -2. **Design** — Using Skimmer findings + loaded pattern/design skills, design the approach directly in main session. Apply `devflow:design-review` skill inline to check the plan for anti-patterns before presenting. -3. **Present** — Deliver structured plan using the Output format below. Use AskUserQuestion for ambiguous design choices. +1. **Discover** — If the planning question is open-ended, ask clarifying questions via AskUserQuestion and present 2-3 approaches with tradeoffs before orienting. Skip if the user's prompt is already specific. 
If the user says "skip" or "just proceed": skip remaining questions, present inferred scope for confirmation. +2. **Load Knowledge** — Load `KNOWLEDGE_CONTEXT` via `node scripts/hooks/lib/knowledge-context.cjs index "{worktree}"`. Read `.features/index.json` if it exists; based on the task, identify relevant KBs, read them, and use as context for direct planning. Set `FEATURE_KNOWLEDGE = (none)` if no KBs exist or none are relevant. +3. **Spawn Skimmer** — `Agent(subagent_type="Skimmer")` targeting the area of interest. Use orientation output to ground design decisions in real file structures and patterns. +4. **Design** — Using Skimmer findings + loaded pattern/design skills + `KNOWLEDGE_CONTEXT` + `FEATURE_KNOWLEDGE`, design the approach directly in main session. Apply `devflow:design-review` skill inline to check the plan for anti-patterns before presenting. +5. **Present** — Deliver structured plan using the Output format below. Use AskUserQuestion for ambiguous design choices. ## Worktree Support @@ -44,18 +45,18 @@ Before starting the full pipeline, check for prior planning context: **Override**: If the user explicitly requests a fresh plan ("start from scratch", "ignore the old plan", "new approach"), execute the full pipeline regardless of prior artifacts. -If EITHER condition is true (and no override) → execute **Refinement Path** instead of Phases 0-8: +If EITHER condition is true (and no override) → execute **Refinement Path** instead of Phases 1-12: 1. Read the existing plan (disk artifact or conversation context) -2. Run Phase 2 (Explore) targeting only areas affected by the new request +2. Run Phase 5 (Explore) targeting only areas affected by the new request 3. Update the plan with changes, preserving unchanged sections -4. Run Phase 6 (Design Review Lite) on updated sections only +4. Run Phase 9 (Design Review Lite) on updated sections only 5. Present the delta (what changed and why) -6. 
Proceed to Phase 8 (Persist) if updated plan is substantial +6. Proceed to Phase 11 (Persist) if updated plan is substantial If NEITHER condition is met → proceed with the full pipeline below. -## Phase 0: Load Knowledge Index +## Phase 1: Load Knowledge Index **Produces:** KNOWLEDGE_CONTEXT @@ -67,7 +68,30 @@ KNOWLEDGE_CONTEXT=$(node scripts/hooks/lib/knowledge-context.cjs index "{worktre This produces a compact index of active ADR/PF entries. Pass `KNOWLEDGE_CONTEXT` to Explorer and Designer agents — prior decisions constrain design, known pitfalls inform gap analysis. Agents use `devflow:apply-knowledge` to Read full entry bodies on demand. -## Phase 0.5: Requirements Discovery +## Phase 2: Load Feature Knowledge + +**Produces:** FEATURE_KNOWLEDGE + +1. Check if `.features/index.json` exists (Read tool). If not, set `FEATURE_KNOWLEDGE = (none)` and skip. +2. Read `.features/index.json` to see available feature KBs. +3. Based on the current task description, identify which KBs are relevant (LLM judgment — match task intent against each KB's `description` and `directories` fields). +4. For each relevant KB: + a. Run `node scripts/hooks/lib/feature-kb.cjs stale "{worktree}" {slug}` to check staleness + b. Read `.features/{slug}/KNOWLEDGE.md` + c. If stale, prefix content with `[STALE — referenced files changed since last update. Verify against current code.]` +5. Concatenate all relevant KB content as `FEATURE_KNOWLEDGE`: + ``` + --- Feature KB: {slug1} --- + [content] + + --- Feature KB: {slug2} --- + [content] + ``` +6. Pass `FEATURE_KNOWLEDGE` to downstream agents alongside `KNOWLEDGE_CONTEXT`. + +If no KBs exist or none are relevant, set `FEATURE_KNOWLEDGE = (none)`. 
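The assembly in steps 4–5 of Phase 2 can be sketched as a pure helper. This is a minimal TypeScript sketch, assuming relevance and staleness have already been determined; the `LoadedKb` shape and `assembleFeatureKnowledge` name are illustrative, not part of the codebase:

```typescript
// Illustrative only; not part of the devflow codebase.
interface LoadedKb {
  slug: string;    // directory name under .features/
  content: string; // body of .features/{slug}/KNOWLEDGE.md
  stale: boolean;  // result of `feature-kb.cjs stale`
}

// Exact prefix quoted from step 4c above.
const STALE_PREFIX =
  '[STALE — referenced files changed since last update. Verify against current code.]\n';

function assembleFeatureKnowledge(kbs: LoadedKb[]): string {
  if (kbs.length === 0) return '(none)';
  return kbs
    .map((kb) => `--- Feature KB: ${kb.slug} ---\n${kb.stale ? STALE_PREFIX : ''}${kb.content}`)
    .join('\n\n');
}
```

An empty list yields `(none)`, matching the fallbacks in steps 1 and 6.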
+ +## Phase 3: Requirements Discovery **Produces:** CONSTRAINED_PROBLEM @@ -78,36 +102,36 @@ Before committing to an approach, surface ambiguity through focused Socratic que - Invoked from within another pipeline (pipeline:orch, implement:orch) - Single clear approach exists with no meaningful alternatives -**Skip examples** (proceed directly to Phase 1): +**Skip examples** (proceed directly to Phase 4): - "Add retry with exponential backoff to HttpClient in src/http.ts, max 3 retries, configurable timeout" — specific files, clear behavior, defined parameters - "Implement the design from .docs/design/caching.md" — pre-existing specification -**Discover examples** (run Phase 0.5): +**Discover examples** (run Phase 3): - "Add a caching layer" — open-ended, multiple valid approaches - "Improve the auth flow" — vague scope, unclear what aspects need improvement - "Design a notification system" — system-level, many architectural choices **Process:** -1. **Assess** — Does the request have meaningful ambiguity or multiple valid approaches? If not, skip to Phase 1. +1. **Assess** — Does the request have meaningful ambiguity or multiple valid approaches? If not, skip to Phase 4. 2. **Question** — Ask clarifying questions via AskUserQuestion. Prefer multiple choice (2-4 options) when tradeoffs exist. 3. **Propose approaches** — Present 2-3 options with explicit tradeoffs: - Lead with your recommended approach and why - Each option: 2-3 sentences + key tradeoff (complexity, performance, maintenance) - Final option: "Other — describe your preferred approach" -4. **Confirm** — Get user's choice, then proceed to Phase 1 with a constrained problem. +4. **Confirm** — Get user's choice, then proceed to Phase 4 with a constrained problem. -If the user says "skip", "just proceed", or signals impatience — skip remaining questions, present your inferred understanding (problem, scope, recommended approach) in one message for confirmation, then proceed to Phase 1 after confirmation. 
This matches /plan Gate 0 behavior. +If the user says "skip", "just proceed", or signals impatience — skip remaining questions, present your inferred understanding (problem, scope, recommended approach) in one message for confirmation, then proceed to Phase 4 after confirmation. This matches /plan Gate 0 behavior. **Question design:** - Ask about constraints and goals, not implementation details - Surface hidden assumptions ("Does this need to handle concurrent writes?") - Reveal scope boundaries ("Just the API layer, or the UI as well?") -## Phase 1: Orient +## Phase 4: Orient **Produces:** ORIENT_OUTPUT -**Requires:** CONSTRAINED_PROBLEM (or original prompt if Phase 0.5 skipped) +**Requires:** CONSTRAINED_PROBLEM (or original prompt if Phase 3 skipped) Spawn `Agent(subagent_type="Skimmer")` to get codebase overview relevant to the planning question: @@ -116,10 +140,10 @@ Spawn `Agent(subagent_type="Skimmer")` to get codebase overview relevant to the - Test patterns and coverage approach - Related prior implementations (similar features, analogous patterns) -## Phase 2: Explore +## Phase 5: Explore **Produces:** EXPLORE_OUTPUT -**Requires:** ORIENT_OUTPUT, KNOWLEDGE_CONTEXT +**Requires:** ORIENT_OUTPUT, KNOWLEDGE_CONTEXT, FEATURE_KNOWLEDGE Based on Skimmer findings, spawn 2-3 `Agent(subagent_type="Explore")` agents **in a single message** (parallel execution): @@ -127,14 +151,14 @@ Based on Skimmer findings, spawn 2-3 `Agent(subagent_type="Explore")` agents **i - **Pattern explorer**: Find existing implementations of similar features to follow as templates - **Constraint explorer**: Identify constraints — test infrastructure, build system, CI requirements, deployment concerns -Each Explore agent receives `KNOWLEDGE_CONTEXT` (from Phase 0) and the instruction: "follow `devflow:apply-knowledge` for KNOWLEDGE_CONTEXT". 
+Each Explore agent receives `KNOWLEDGE_CONTEXT` (from Phase 1), `FEATURE_KNOWLEDGE` (from Phase 2), and the instructions: "follow `devflow:apply-knowledge` for KNOWLEDGE_CONTEXT" and "The FEATURE_KNOWLEDGE is a baseline — your job is to VALIDATE, EXTEND, and CORRECT it, not repeat it. Focus exploration on areas the KB doesn't cover and changes since it was last updated." Adjust explorer focus based on the specific planning question. -## Phase 3: Gap Analysis Lite +## Phase 6: Gap Analysis Lite **Produces:** GAP_OUTPUT -**Requires:** EXPLORE_OUTPUT, ORIENT_OUTPUT, KNOWLEDGE_CONTEXT +**Requires:** EXPLORE_OUTPUT, ORIENT_OUTPUT, KNOWLEDGE_CONTEXT, FEATURE_KNOWLEDGE Spawn 2 `Agent(subagent_type="Designer")` agents **in a single message** (parallel execution): @@ -143,10 +167,11 @@ Agent(subagent_type="Designer"): "Mode: gap-analysis Focus: completeness KNOWLEDGE_CONTEXT: {knowledge_context} +FEATURE_KNOWLEDGE: {feature_knowledge} Artifacts: Planning question: {user's intent} - Exploration findings: {Phase 2 outputs} - Codebase context: {Phase 1 output} + Exploration findings: {Phase 5 outputs} + Codebase context: {Phase 4 output} Identify missing requirements, undefined error states, vague acceptance criteria. Follow devflow:apply-knowledge for KNOWLEDGE_CONTEXT." @@ -154,15 +179,16 @@ Agent(subagent_type="Designer"): "Mode: gap-analysis Focus: architecture KNOWLEDGE_CONTEXT: {knowledge_context} +FEATURE_KNOWLEDGE: {feature_knowledge} Artifacts: Planning question: {user's intent} - Exploration findings: {Phase 2 outputs} - Codebase context: {Phase 1 output} + Exploration findings: {Phase 5 outputs} + Codebase context: {Phase 4 output} Identify pattern violations, missing integration points, layering issues. Follow devflow:apply-knowledge for KNOWLEDGE_CONTEXT." 
``` -## Phase 4: Synthesize +## Phase 7: Synthesize **Produces:** SYNTHESIS_OUTPUT **Requires:** GAP_OUTPUT, EXPLORE_OUTPUT @@ -172,24 +198,24 @@ Spawn `Agent(subagent_type="Synthesizer")` combining gap analysis and explore ou ``` Agent(subagent_type="Synthesizer"): "Mode: design -Designer outputs: {Phase 3 designer outputs} +Designer outputs: {Phase 6 designer outputs} Combine gap findings with exploration context into blocking vs. should-address categorization." ``` -## Phase 5: Plan +## Phase 8: Plan **Produces:** PLAN_OUTPUT -**Requires:** ORIENT_OUTPUT, EXPLORE_OUTPUT, SYNTHESIS_OUTPUT, KNOWLEDGE_CONTEXT +**Requires:** ORIENT_OUTPUT, EXPLORE_OUTPUT, SYNTHESIS_OUTPUT, KNOWLEDGE_CONTEXT, FEATURE_KNOWLEDGE -Spawn `Agent(subagent_type="Plan")` with all findings: +Spawn `Agent(subagent_type="Plan")` with all findings, including `FEATURE_KNOWLEDGE`: - Design implementation approach with file-level specificity -- Reference existing patterns discovered in Phases 1-2 +- Reference existing patterns discovered in Phases 4-5 - Include: architecture decisions, file changes, new files needed, test strategy -- Integrate gap mitigations from Phase 4 into the relevant steps +- Integrate gap mitigations from Phase 7 into the relevant steps - Flag areas where existing patterns conflict with the proposed approach -## Phase 6: Design Review Lite +## Phase 9: Design Review Lite **Produces:** REVIEW_NOTES **Requires:** PLAN_OUTPUT @@ -205,19 +231,19 @@ Main session reviews the plan inline using the loaded `devflow:design-review` sk Note findings directly in the plan presentation. This is inline review — no agent spawn needed. 
-## Phase 7: Present +## Phase 10: Present **Requires:** PLAN_OUTPUT, SYNTHESIS_OUTPUT, REVIEW_NOTES Present plan to user with: - Implementation approach (file-level) -- Gap analysis findings (from Phase 4 synthesis) -- Design review notes (from Phase 6 inline check) +- Gap analysis findings (from Phase 7 synthesis) +- Design review notes (from Phase 9 inline check) - Risk areas Use AskUserQuestion for any ambiguous design choices that need user input before proceeding to IMPLEMENT. -## Phase 8: Persist +## Phase 11: Persist **Requires:** PLAN_OUTPUT @@ -227,6 +253,32 @@ If the plan is substantial (>10 implementation steps or HIGH/CRITICAL context ri Otherwise: plan stays in conversation context, ready for IMPLEMENT to consume directly. +## Phase 12: Feature KB Generation (Conditional) + +If `.features/.disabled` exists, skip KB generation entirely — the KB feature is disabled. + +If Phases 4-5 explored a feature area that does NOT have a matching KB: + +1. Identify the feature area slug and name from the explored directories +2. Spawn Agent(subagent_type="Knowledge"): + ``` + "FEATURE_SLUG: {slug} + FEATURE_NAME: {name} + EXPLORATION_OUTPUTS: {combined Phase 4 + Phase 5 outputs} + DIRECTORIES: {directory prefixes explored} + KNOWLEDGE_CONTEXT: {from Phase 1}" + ``` +3. Report: "Created feature KB: {slug}" + +Skip if all explored areas already have matching KBs. + +If a stale KB was detected in Phase 2, also refresh it here — spawn Knowledge agent with `EXISTING_KB` content + `CHANGED_FILES` from staleness check. + +**Failure handling**: Knowledge agent failure is **non-blocking**. If it crashes, log the failure and complete the plan workflow normally. 
+ +**Produces:** `.features/{slug}/KNOWLEDGE.md`, updated `.features/index.json` +**Requires:** Phase 4-5 exploration outputs + --- ## Output @@ -247,15 +299,17 @@ Structured plan ready to feed into IMPLEMENT/ORCHESTRATED if user proceeds: Before presenting output, verify every phase was announced: -- [ ] Phase 0: Load Knowledge Index → KNOWLEDGE_CONTEXT captured -- [ ] Phase 0.5: Requirements Discovery → CONSTRAINED_PROBLEM captured (or skipped with stated reason) -- [ ] Phase 1: Orient → ORIENT_OUTPUT captured -- [ ] Phase 2: Explore → EXPLORE_OUTPUT captured -- [ ] Phase 3: Gap Analysis Lite → GAP_OUTPUT captured -- [ ] Phase 4: Synthesize → SYNTHESIS_OUTPUT captured -- [ ] Phase 5: Plan → PLAN_OUTPUT captured -- [ ] Phase 6: Design Review Lite → REVIEW_NOTES captured -- [ ] Phase 7: Present → Output delivered to user -- [ ] Phase 8: Persist → Artifact written (or skipped with stated reason) +- [ ] Phase 1: Load Knowledge Index → KNOWLEDGE_CONTEXT captured +- [ ] Phase 2: Load Feature Knowledge → FEATURE_KNOWLEDGE captured (or skipped if `.features/` absent) +- [ ] Phase 3: Requirements Discovery → CONSTRAINED_PROBLEM captured (or skipped with stated reason) +- [ ] Phase 4: Orient → ORIENT_OUTPUT captured +- [ ] Phase 5: Explore → EXPLORE_OUTPUT captured +- [ ] Phase 6: Gap Analysis Lite → GAP_OUTPUT captured +- [ ] Phase 7: Synthesize → SYNTHESIS_OUTPUT captured +- [ ] Phase 8: Plan → PLAN_OUTPUT captured +- [ ] Phase 9: Design Review Lite → REVIEW_NOTES captured +- [ ] Phase 10: Present → Output delivered to user +- [ ] Phase 11: Persist → Artifact written (or skipped with stated reason) +- [ ] Phase 12: Feature KB Generation → Knowledge agent spawned for new feature areas (or skipped if KB exists) If any phase is unchecked, execute it before proceeding. 
diff --git a/shared/skills/quality-gates/SKILL.md b/shared/skills/quality-gates/SKILL.md index af76ec4..abd83f8 100644 --- a/shared/skills/quality-gates/SKILL.md +++ b/shared/skills/quality-gates/SKILL.md @@ -31,6 +31,8 @@ Based on [Google Engineering Practices](https://google.github.io/eng-practices/r ### P0 - Design Does the implementation fit the architecture? Follows existing patterns, respects layer boundaries, dependencies injected. +If `FEATURE_KNOWLEDGE` is provided, verify implementation respects the feature area's documented architecture and anti-patterns. Flag deviations as P0-Design issues when the documented pattern is clearly intentional. + ### P0 - Functionality Does the code work? Happy path, edge cases (null, empty, boundary), no race conditions. diff --git a/shared/skills/resolve:orch/SKILL.md b/shared/skills/resolve:orch/SKILL.md index 7515649..b2a5b29 100644 --- a/shared/skills/resolve:orch/SKILL.md +++ b/shared/skills/resolve:orch/SKILL.md @@ -32,17 +32,23 @@ If no unresolved review found: halt with "No unresolved review found. Run a revi Extract branch slug from the directory path. - -## Phase 1.5: Load Project Knowledge +## Phase 2: Load Project Knowledge -**Produces:** KNOWLEDGE_CONTEXT +**Produces:** KNOWLEDGE_CONTEXT, FEATURE_KNOWLEDGE **Requires:** REVIEW_DIR Run `node scripts/hooks/lib/knowledge-context.cjs index "{worktree}"` to produce a compact index of active ADR/PF entries from `decisions.md` and `pitfalls.md`, with Deprecated/Superseded entries already stripped. Falls back to `(none)` when both files are absent or all entries are filtered. Pass `KNOWLEDGE_CONTEXT` to every Resolver agent in Phase 4. Resolver agents use `devflow:apply-knowledge` to Read full entry bodies on demand — no fan-out of the full corpus. -## Phase 2: Parse Issues +Also load feature knowledge: +1. Read `.features/index.json` if it exists +2. Based on file paths from review report issue entries, identify relevant KBs +3. 
Read matching `.features/{slug}/KNOWLEDGE.md` files, check staleness via `node scripts/hooks/lib/feature-kb.cjs stale "{worktree}" {slug}` +4. Concatenate as `FEATURE_KNOWLEDGE` (or `(none)`) + +## Phase 3: Parse Issues **Produces:** ISSUES **Requires:** REVIEW_DIR @@ -55,7 +61,7 @@ For each issue, extract: id (generated), file, line, severity, category (blockin If no actionable issues found: "Review is clean — no issues to resolve." → stop. -## Phase 3: Analyze & Batch +## Phase 4: Analyze & Batch **Produces:** BATCHES **Requires:** ISSUES @@ -67,10 +73,10 @@ Group issues by file/function for efficient resolution: Determine execution: batches with no shared files can run in parallel. -## Phase 4: Resolve (Parallel) +## Phase 5: Resolve (Parallel) **Produces:** RESOLUTION_RESULTS -**Requires:** BATCHES, KNOWLEDGE_CONTEXT, BRANCH_SLUG +**Requires:** BATCHES, KNOWLEDGE_CONTEXT, FEATURE_KNOWLEDGE, BRANCH_SLUG Spawn `Agent(subagent_type="Resolver")` agents — one per batch, parallel where possible. @@ -78,14 +84,15 @@ Each receives: - **ISSUES**: Array of issues in the batch - **BRANCH**: Branch slug - **BATCH_ID**: Identifier for this batch -- **KNOWLEDGE_CONTEXT**: Knowledge index from Phase 1.5 (or `(none)`). Resolvers follow `devflow:apply-knowledge` to Read full ADR/PF bodies on demand. +- **KNOWLEDGE_CONTEXT**: Knowledge index from Phase 2 (or `(none)`). Resolvers follow `devflow:apply-knowledge` to Read full ADR/PF bodies on demand. +- **FEATURE_KNOWLEDGE**: Feature area context from Phase 2 (or `(none)`). Follow `devflow:apply-feature-kb` for consumption algorithm. 
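The grouping rule from Phase 4 ("batches with no shared files can run in parallel") can be sketched as two helpers. A hedged TypeScript sketch: the `Issue` and `Batch` shapes are hypothetical, since the real batch format is internal to the orchestrator:

```typescript
// Hypothetical shapes for illustration; not the orchestrator's actual types.
interface Issue { id: string; file: string; }
interface Batch { id: string; files: string[]; issues: Issue[]; }

// Group issues by file so each Resolver works on a coherent slice.
function batchByFile(issues: Issue[]): Batch[] {
  const byFile = new Map<string, Issue[]>();
  for (const issue of issues) {
    const group = byFile.get(issue.file) ?? [];
    group.push(issue);
    byFile.set(issue.file, group);
  }
  return [...byFile.entries()].map(([file, group], i) => ({
    id: `batch-${i + 1}`,
    files: [file],
    issues: group,
  }));
}

// Greedy wave assignment: a batch joins the first wave whose members
// touch none of its files; each wave can run its batches in parallel.
function parallelWaves(batches: Batch[]): Batch[][] {
  const waves: Array<{ used: Set<string>; batches: Batch[] }> = [];
  for (const batch of batches) {
    const wave = waves.find((w) => batch.files.every((f) => !w.used.has(f)));
    if (wave) {
      batch.files.forEach((f) => wave.used.add(f));
      wave.batches.push(batch);
    } else {
      waves.push({ used: new Set(batch.files), batches: [batch] });
    }
  }
  return waves.map((w) => w.batches);
}
```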
Resolvers follow a 3-tier risk approach: - **Standard fixes**: Applied directly - **Careful fixes** (public API, shared state, >3 files): Systematic refactoring — understand context, plan, test, implement, verify - **Architectural overhaul**: Defer to tech debt (LAST RESORT — avoided at almost all costs, only when complete system redesign required) -## Phase 5: Collect & Simplify +## Phase 6: Collect & Simplify **Produces:** SIMPLIFICATION_RESULTS **Requires:** RESOLUTION_RESULTS @@ -95,7 +102,7 @@ Aggregate results from all Resolver agents: Spawn `Agent(subagent_type="Simplifier")` on all files modified by Resolvers. -## Phase 6: Report +## Phase 7: Report **Requires:** RESOLUTION_RESULTS, SIMPLIFICATION_RESULTS, REVIEW_DIR @@ -122,11 +129,11 @@ Report to user: Before reporting results, verify every phase was announced: - [ ] Phase 1: Target Review Directory → REVIEW_DIR captured -- [ ] Phase 1.5: Load Project Knowledge → KNOWLEDGE_CONTEXT captured -- [ ] Phase 2: Parse Issues → ISSUES captured (or stopped: no actionable issues) -- [ ] Phase 3: Analyze & Batch → BATCHES captured -- [ ] Phase 4: Resolve → RESOLUTION_RESULTS captured per batch -- [ ] Phase 5: Collect & Simplify → SIMPLIFICATION_RESULTS captured -- [ ] Phase 6: Report → resolution-summary.md written +- [ ] Phase 2: Load Project Knowledge → KNOWLEDGE_CONTEXT captured, FEATURE_KNOWLEDGE loaded (or skipped if `.features/` absent) +- [ ] Phase 3: Parse Issues → ISSUES captured (or stopped: no actionable issues) +- [ ] Phase 4: Analyze & Batch → BATCHES captured +- [ ] Phase 5: Resolve → RESOLUTION_RESULTS captured per batch +- [ ] Phase 6: Collect & Simplify → SIMPLIFICATION_RESULTS captured +- [ ] Phase 7: Report → resolution-summary.md written If any phase is unchecked, execute it before proceeding. 
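The 3-tier risk triage above can be approximated mechanically. A sketch under stated assumptions: the `ResolutionPlan` shape and `classifyFix` helper are hypothetical; only the thresholds (public API, shared state, more than 3 files) come from the skill text:

```typescript
// Hypothetical input shape; only the criteria mirror the skill text.
interface ResolutionPlan {
  touchesPublicApi: boolean;
  touchesSharedState: boolean;
  filesAffected: number;
  requiresSystemRedesign: boolean;
}

type RiskTier = 'standard' | 'careful' | 'defer-to-tech-debt';

function classifyFix(plan: ResolutionPlan): RiskTier {
  // LAST RESORT: only when a complete system redesign would be required.
  if (plan.requiresSystemRedesign) return 'defer-to-tech-debt';
  // Careful fixes: public API, shared state, or >3 files.
  if (plan.touchesPublicApi || plan.touchesSharedState || plan.filesAffected > 3) {
    return 'careful';
  }
  return 'standard';
}
```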
diff --git a/shared/skills/review:orch/SKILL.md b/shared/skills/review:orch/SKILL.md
index 3577e02..11278bf 100644
--- a/shared/skills/review:orch/SKILL.md
+++ b/shared/skills/review:orch/SKILL.md
@@ -44,9 +44,9 @@ Check `.docs/reviews/{branch_slug}/.last-review-head`:
 Generate timestamp: `YYYY-MM-DD_HHMM`
 Create directory: `mkdir -p .docs/reviews/{branch_slug}/{timestamp}`
 
-## Phase 2b: Load Knowledge Index
+## Phase 3: Load Knowledge Index
 
-**Produces:** KNOWLEDGE_CONTEXT
+**Produces:** KNOWLEDGE_CONTEXT, FEATURE_KNOWLEDGE
 **Requires:** REVIEW_DIR
 
 After incremental detection, load the knowledge index:
@@ -57,7 +57,13 @@ KNOWLEDGE_CONTEXT=$(node scripts/hooks/lib/knowledge-context.cjs index "{worktre
 
 This produces a compact index of active ADR/PF entries. Pass `KNOWLEDGE_CONTEXT` to all Reviewer agents. Reviewers use `devflow:apply-knowledge` to Read full entry bodies on demand.
 
-## Phase 3: File Analysis
+Also load feature knowledge:
+1. Read `.features/index.json` if it exists
+2. Based on changed files in `DIFF_RANGE` (from Phase 2), identify relevant KBs (match file paths against KB `directories` and `referencedFiles`)
+3. For each match: check staleness via `node scripts/hooks/lib/feature-kb.cjs stale "{worktree}" {slug}`, read `.features/{slug}/KNOWLEDGE.md`
+4.
Concatenate as `FEATURE_KNOWLEDGE` (or `(none)`) + +## Phase 4: File Analysis **Produces:** REVIEWER_LIST **Requires:** DIFF_RANGE @@ -80,17 +86,17 @@ Detect conditional reviewers from file types: | `package.json`, lock files | dependencies | | `*.md`, doc files | documentation | -## Phase 4: Reviews (Parallel) +## Phase 5: Reviews (Parallel) **Produces:** REVIEWER_OUTPUTS -**Requires:** DIFF_RANGE, REVIEW_DIR, TIMESTAMP, KNOWLEDGE_CONTEXT, REVIEWER_LIST +**Requires:** DIFF_RANGE, REVIEW_DIR, TIMESTAMP, KNOWLEDGE_CONTEXT, FEATURE_KNOWLEDGE, REVIEWER_LIST Spawn all reviewers in a single message (parallel execution): **7 core reviewers** (always): - security, architecture, performance, complexity, consistency, testing, regression -**Conditional reviewers** (from Phase 3 file analysis): +**Conditional reviewers** (from Phase 4 file analysis): - typescript, react, database, dependencies, documentation, go, java, python, rust, accessibility, ui-design Each reviewer receives: @@ -98,9 +104,10 @@ Each reviewer receives: - **Branch context**: branch → base_branch - **Output path**: `.docs/reviews/{branch_slug}/{timestamp}/{focus}.md` - **DIFF_COMMAND**: `git diff {DIFF_RANGE}` (incremental or full) -- **KNOWLEDGE_CONTEXT**: compact index from Phase 2b (or `(none)` when absent) — follow `devflow:apply-knowledge` to Read full ADR/PF bodies on demand +- **KNOWLEDGE_CONTEXT**: compact index from Phase 3 (or `(none)` when absent) — follow `devflow:apply-knowledge` to Read full ADR/PF bodies on demand +- **FEATURE_KNOWLEDGE**: feature area context from Phase 3 (or `(none)`) — follow `devflow:apply-feature-kb` for consumption algorithm -## Phase 5: Synthesis (Parallel) +## Phase 6: Synthesis (Parallel) **Requires:** REVIEWER_OUTPUTS, REVIEW_DIR, PR_INFO @@ -109,7 +116,7 @@ After all reviewers complete, spawn in parallel: 1. `Agent(subagent_type="Git")` with action `comment-pr` — post review summary as PR comment (deduplicate: check existing comments first) 2. 
`Agent(subagent_type="Synthesizer")` in review mode — reads all `{focus}.md` files from disk, writes `review-summary.md` -## Phase 6: Finalize +## Phase 7: Finalize **Requires:** BRANCH_INFO, REVIEW_DIR @@ -134,10 +141,10 @@ Before reporting results, verify every phase was announced: - [ ] Phase 1: Pre-flight → BRANCH_INFO, PR_INFO captured - [ ] Phase 2: Incremental Detection → DIFF_RANGE, REVIEW_DIR, TIMESTAMP captured -- [ ] Phase 2b: Load Knowledge Index → KNOWLEDGE_CONTEXT captured -- [ ] Phase 3: File Analysis → REVIEWER_LIST captured -- [ ] Phase 4: Reviews → REVIEWER_OUTPUTS written to disk -- [ ] Phase 5: Synthesis → review-summary.md written -- [ ] Phase 6: Finalize → .last-review-head updated, results reported +- [ ] Phase 3: Load Knowledge Index → KNOWLEDGE_CONTEXT captured, FEATURE_KNOWLEDGE loaded (or skipped if `.features/` absent) +- [ ] Phase 4: File Analysis → REVIEWER_LIST captured +- [ ] Phase 5: Reviews → REVIEWER_OUTPUTS written to disk +- [ ] Phase 6: Synthesis → review-summary.md written +- [ ] Phase 7: Finalize → .last-review-head updated, results reported If any phase is unchecked, execute it before proceeding. 
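The KB matching step in review:orch Phase 3 (file paths against KB `directories` and `referencedFiles`) can be sketched as a prefix-and-membership check. A minimal TypeScript sketch: `KbEntry` mirrors the index fields named above, while the `matchKbs` helper itself is hypothetical:

```typescript
// Field names follow the .features/index.json entries described above;
// the matching helper itself is illustrative.
interface KbEntry {
  slug: string;
  directories: string[];
  referencedFiles?: string[];
}

function matchKbs(changedFiles: string[], kbs: KbEntry[]): string[] {
  return kbs
    .filter((kb) =>
      changedFiles.some(
        (file) =>
          kb.directories.some((dir) =>
            file.startsWith(dir.endsWith('/') ? dir : dir + '/'),
          ) || (kb.referencedFiles ?? []).includes(file),
      ),
    )
    .map((kb) => kb.slug);
}
```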
diff --git a/src/cli/cli.ts b/src/cli/cli.ts index a704d40..7ec1596 100644 --- a/src/cli/cli.ts +++ b/src/cli/cli.ts @@ -13,6 +13,7 @@ import { skillsCommand } from './commands/skills.js'; import { hudCommand } from './commands/hud.js'; import { learnCommand } from './commands/learn.js'; import { flagsCommand } from './commands/flags.js'; +import { kbCommand } from './commands/kb.js'; const __filename = fileURLToPath(import.meta.url); const __dirname = dirname(__filename); @@ -41,6 +42,7 @@ program.addCommand(skillsCommand); program.addCommand(hudCommand); program.addCommand(learnCommand); program.addCommand(flagsCommand); +program.addCommand(kbCommand); // Handle no command program.action(() => { diff --git a/src/cli/commands/init.ts b/src/cli/commands/init.ts index e98e3de..0874a28 100644 --- a/src/cli/commands/init.ts +++ b/src/cli/commands/init.ts @@ -27,6 +27,7 @@ import { addAmbientHook, removeAmbientHook } from './ambient.js'; import { addMemoryHooks, removeMemoryHooks } from './memory.js'; import { addLearningHook, removeLearningHook } from './learn.js'; import { addHudStatusLine, removeHudStatusLine } from './hud.js'; +import { addKbHook, removeKbHook } from './kb.js'; import { loadConfig as loadHudConfig, saveConfig as saveHudConfig } from '../hud/config.js'; import { readManifest, writeManifest, resolvePluginList, detectUpgrade } from '../utils/manifest.js'; import { getDefaultFlags, applyFlags, stripFlags, FLAG_REGISTRY } from '../utils/flags.js'; @@ -129,6 +130,7 @@ interface InitOptions { memory?: boolean; learn?: boolean; hud?: boolean; + kb?: boolean; hudOnly?: boolean; recommended?: boolean; advanced?: boolean; @@ -149,6 +151,8 @@ export const initCommand = new Command('init') .option('--no-learn', 'Disable self-learning') .option('--hud', 'Enable HUD (git info, context usage, session stats)') .option('--no-hud', 'Disable HUD status line') + .option('--kb', 'Enable feature knowledge bases') + .option('--no-kb', 'Disable feature knowledge bases') 
.option('--hud-only', 'Install only the HUD (no plugins, hooks, or extras)') .option('--recommended', 'Apply recommended defaults after plugin selection (skip advanced prompts)') .option('--advanced', 'Show all configuration prompts') @@ -258,7 +262,7 @@ export const initCommand = new Command('init') version, plugins: [], scope, - features: { teams: false, ambient: false, memory: false, hud: true, learn: false, flags: [] }, + features: { teams: false, ambient: false, memory: false, hud: true, learn: false, kb: false, flags: [] }, installedAt: now, updatedAt: now, }); @@ -368,6 +372,7 @@ export const initCommand = new Command('init') let memoryEnabled = true; let learnEnabled = true; let hudEnabled = true; + let kbEnabled = true; let enabledFlags = getDefaultFlags(); let claudeignoreEnabled = !!earlyGitRoot; let discoveredProjects: string[] = []; @@ -395,6 +400,7 @@ export const initCommand = new Command('init') if (options.memory !== undefined) memoryEnabled = options.memory; if (options.learn !== undefined) learnEnabled = options.learn; if (options.hud !== undefined) hudEnabled = options.hud; + if (options.kb !== undefined) kbEnabled = options.kb; // Compute safe-delete block synchronously so we know whether to fetch installed version if (profilePath && safeDeleteAvailable) { @@ -427,6 +433,7 @@ export const initCommand = new Command('init') `Working memory: ${memoryEnabled ? 'enabled' : 'disabled'}`, `Self-learning: ${learnEnabled ? 'enabled' : 'disabled'}`, `HUD: ${hudEnabled ? 'enabled' : 'disabled'}`, + `Feature KBs: ${kbEnabled ? 'enabled' : 'disabled'}`, `Agent Teams: ${teamsEnabled ? 'enabled' : 'disabled'}`, `Claude Code flags: ${defaultFlagCount} enabled`, `${claudeignoreEnabled ? 
'.claudeignore: created' : ''}`, @@ -554,6 +561,26 @@ export const initCommand = new Command('init') hudEnabled = hudChoice; } + if (options.kb !== undefined) { + kbEnabled = options.kb; + } else { + p.note( + 'Per-feature knowledge bases capture cross-cutting patterns,\n' + + 'conventions, and gotchas. Auto-refreshed when files change.\n' + + 'Consumes a background agent session on staleness detection.', + 'Feature Knowledge Bases', + ); + const kbChoice = await p.confirm({ + message: 'Enable feature knowledge bases? (Recommended)', + initialValue: true, + }); + if (p.isCancel(kbChoice)) { + p.cancel('Installation cancelled.'); + process.exit(0); + } + kbEnabled = kbChoice; + } + // Claude Code flags multiselect (advanced only) if (process.stdin.isTTY) { const flagChoices = FLAG_REGISTRY.map(f => ({ @@ -925,6 +952,10 @@ export const initCommand = new Command('init') ? addHudStatusLine(content, devflowDir) : removeHudStatusLine(content); + // KB hook — remove-then-add for upgrade safety + const cleanedForKb = removeKbHook(content); + content = kbEnabled ? 
addKbHook(cleanedForKb, devflowDir) : cleanedForKb; + // Claude Code flags — strip all managed keys, then re-apply selected flags content = stripFlags(content); content = applyFlags(content, enabledFlags); @@ -945,6 +976,33 @@ export const initCommand = new Command('init') await migrateMemoryFiles(verbose); } + // Create .features/ directory with empty index (feature knowledge bases) + // .features/ is committed to the project repo (not scope-dependent) + if (gitRoot && kbEnabled) { + const featuresDir = path.join(gitRoot, '.features'); + await fs.mkdir(featuresDir, { recursive: true }); + const featuresIndexPath = path.join(featuresDir, 'index.json'); + try { + await fs.access(featuresIndexPath); + } catch { + await fs.writeFile(featuresIndexPath, JSON.stringify({ version: 1, features: {} }, null, 2) + '\n'); + if (verbose) { + p.log.success('.features/index.json created'); + } + } + } + + // Manage .disabled sentinel based on kbEnabled state + if (gitRoot) { + const disabledPath = path.join(gitRoot, '.features', '.disabled'); + if (kbEnabled) { + try { await fs.unlink(disabledPath); } catch { /* doesn't exist */ } + } else { + await fs.mkdir(path.join(gitRoot, '.features'), { recursive: true }); + await fs.writeFile(disabledPath, '', 'utf-8'); + } + } + // Configure HUD const existingHud = loadHudConfig(); saveHudConfig({ enabled: hudEnabled, detail: existingHud.detail }); @@ -1066,7 +1124,7 @@ export const initCommand = new Command('init') version, plugins: resolvePluginList(installedPluginNames, existingManifest, !!options.plugin), scope, - features: { teams: teamsEnabled, ambient: ambientEnabled, memory: memoryEnabled, learn: learnEnabled, hud: hudEnabled, flags: enabledFlags }, + features: { teams: teamsEnabled, ambient: ambientEnabled, memory: memoryEnabled, learn: learnEnabled, hud: hudEnabled, kb: kbEnabled, flags: enabledFlags }, installedAt: existingManifest?.installedAt ?? 
now,
       updatedAt: now,
     };
diff --git a/src/cli/commands/kb.ts b/src/cli/commands/kb.ts
new file mode 100644
index 0000000..eb84231
--- /dev/null
+++ b/src/cli/commands/kb.ts
@@ -0,0 +1,607 @@
+import { Command } from 'commander';
+import { promises as fs } from 'fs';
+import * as path from 'path';
+import { execFileSync } from 'child_process';
+import * as p from '@clack/prompts';
+import color from 'picocolors';
+import { createRequire } from 'module';
+import { fileURLToPath } from 'url';
+import { isClaudeCliAvailable } from '../utils/cli.js';
+import { getGitRoot } from '../utils/git.js';
+import type { HookMatcher, Settings } from '../utils/hooks.js';
+import { getClaudeDirectory, getDevFlowDirectory } from '../utils/paths.js';
+import { readManifest, writeManifest } from '../utils/manifest.js';
+
+const __filename = fileURLToPath(import.meta.url);
+const __dirname = path.dirname(__filename);
+
+/** @internal */
+const _require = createRequire(import.meta.url);
+
+export interface SidecarData {
+  referencedFiles?: string[];
+  description?: string;
+}
+
+export async function readSidecar(sidecarPath: string): Promise<SidecarData> {
+  let raw: unknown;
+  try {
+    raw = JSON.parse(await fs.readFile(sidecarPath, 'utf8'));
+  } catch {
+    return {};
+  }
+  if (typeof raw !== 'object' || raw === null) return {};
+  const data = raw as Record<string, unknown>;
+  const result: SidecarData = {};
+  if (Array.isArray(data.referencedFiles)) {
+    result.referencedFiles = data.referencedFiles.filter(
+      (f): f is string => typeof f === 'string'
+    );
+  }
+  if (typeof data.description === 'string') {
+    result.description = data.description;
+  }
+  return result;
+}
+
+interface FeatureKbModule {
+  listKBs: (worktreePath: string) => Array<{ slug: string; name: string; directories: string[]; lastUpdated: string; referencedFiles?: string[]; description?: string; createdBy?: string }>;
+  checkAllStaleness: (worktreePath: string) => Record<string, { stale: boolean; changedFiles: string[] }>;
+  checkStaleness: (worktreePath: string, slug: string) => { stale: boolean;
changedFiles: string[] };
+  findOverlapping: (worktreePath: string, changedFiles: string[]) => string[];
+  removeEntry: (worktreePath: string, slug: string) => void;
+  validateSlug: (slug: string) => void;
+  updateIndex: (worktreePath: string, entry: { slug: string; name: string; description?: string; directories: string[]; referencedFiles: string[]; createdBy?: string }, lockTimeoutMs?: number) => void;
+}
+
+// dist/cli/commands/kb.js → ../../.. → project root (where scripts/ lives)
+const featureKb: FeatureKbModule = _require(
+  path.join(__dirname, '..', '..', '..', 'scripts', 'hooks', 'lib', 'feature-kb.cjs')
+);
+
+/** Tools passed to `claude -p` when spawning the Knowledge agent. */
+const KB_AGENT_TOOLS = 'Read,Grep,Glob,Write';
+
+/**
+ * Validate a KB slug and exit with an error message if invalid.
+ * Centralises the repeated try/catch pattern across create/refresh/remove.
+ */
+function exitOnInvalidSlug(slug: string): void {
+  try {
+    featureKb.validateSlug(slug);
+  } catch (err) {
+    p.log.error(err instanceof Error ? err.message : String(err));
+    process.exit(1);
+  }
+}
+
+/**
+ * Get the git root for the current directory, or cwd if not in a git repo.
+ */
+async function getWorktreePath(): Promise<string> {
+  return (await getGitRoot()) ?? process.cwd();
+}
+
+const KB_HOOK_MARKER = 'session-end-kb-refresh';
+
+/**
+ * Add the KB SessionEnd hook to settings JSON.
+ * Idempotent — returns unchanged JSON if hook already exists.
+ */ +export function addKbHook(settingsJson: string, devflowDir: string): string { + if (hasKbHook(settingsJson)) { + return settingsJson; + } + + const settings: Settings = JSON.parse(settingsJson); + + if (!settings.hooks) { + settings.hooks = {}; + } + + const hookCommand = `${path.join(devflowDir, 'scripts', 'hooks', 'run-hook')} session-end-kb-refresh`; + + const newEntry: HookMatcher = { + hooks: [ + { + type: 'command', + command: hookCommand, + timeout: 10, + }, + ], + }; + + if (!settings.hooks.SessionEnd) { + settings.hooks.SessionEnd = []; + } + + settings.hooks.SessionEnd.push(newEntry); + + return JSON.stringify(settings, null, 2) + '\n'; +} + +/** + * Remove the KB hook from settings JSON. + * Idempotent — returns unchanged JSON if hook not present. + * Preserves other hooks. Cleans empty arrays/objects. + */ +export function removeKbHook(settingsJson: string): string { + const settings: Settings = JSON.parse(settingsJson); + let changed = false; + + const matchers = settings.hooks?.SessionEnd; + if (matchers) { + const filtered = matchers.filter( + (m) => !m.hooks.some((h) => h.command.includes(KB_HOOK_MARKER)), + ); + if (filtered.length < matchers.length) changed = true; + if (filtered.length === 0) { + delete settings.hooks!.SessionEnd; + } else { + settings.hooks!.SessionEnd = filtered; + } + } + + if (!changed) { + return settingsJson; + } + + if (settings.hooks && Object.keys(settings.hooks).length === 0) { + delete settings.hooks; + } + + return JSON.stringify(settings, null, 2) + '\n'; +} + +/** + * Check if the KB hook is registered in settings JSON or parsed Settings object. + */ +export function hasKbHook(input: string | Settings): boolean { + const settings: Settings = typeof input === 'string' ? JSON.parse(input) : input; + return settings.hooks?.SessionEnd?.some((matcher) => + matcher.hooks.some((h) => h.command.includes(KB_HOOK_MARKER)), + ) ?? 
false; +} + +export const kbCommand = new Command('kb') + .description('Manage per-feature knowledge bases') + .option('--enable', 'Enable per-feature knowledge bases') + .option('--disable', 'Disable per-feature knowledge bases') + .option('--status', 'Show KB feature status') + .action(async (options: { enable?: boolean; disable?: boolean; status?: boolean }) => { + if (!options.enable && !options.disable && !options.status) return; + + const worktreePath = await getWorktreePath(); + const claudeDir = getClaudeDirectory(); + const devflowDir = getDevFlowDirectory(); + const settingsPath = path.join(claudeDir, 'settings.json'); + + if (options.enable) { + p.intro(color.cyan('Enable Feature Knowledge Bases')); + + // Create .features/index.json if missing + const featuresDir = path.join(worktreePath, '.features'); + await fs.mkdir(featuresDir, { recursive: true }); + const indexPath = path.join(featuresDir, 'index.json'); + try { + await fs.access(indexPath); + } catch { + await fs.writeFile(indexPath, JSON.stringify({ version: 1, features: {} }, null, 2) + '\n'); + } + + // Remove .disabled sentinel + try { await fs.unlink(path.join(featuresDir, '.disabled')); } catch { /* doesn't exist */ } + + // Add SessionEnd hook + try { + const content = await fs.readFile(settingsPath, 'utf-8'); + const updated = addKbHook(content, devflowDir); + if (updated !== content) { + await fs.writeFile(settingsPath, updated, 'utf-8'); + } + } catch { /* settings.json may not exist */ } + + // Update manifest + const manifest = await readManifest(devflowDir); + if (manifest) { + manifest.features.kb = true; + manifest.updatedAt = new Date().toISOString(); + await writeManifest(devflowDir, manifest); + } + + p.log.success('Feature knowledge bases enabled'); + p.outro(`SessionEnd hook installed. 
Run ${color.cyan('devflow kb create ')} to create a KB.`); + + } else if (options.disable) { + p.intro(color.cyan('Disable Feature Knowledge Bases')); + + // Create .disabled sentinel + const featuresDir = path.join(worktreePath, '.features'); + await fs.mkdir(featuresDir, { recursive: true }); + await fs.writeFile(path.join(featuresDir, '.disabled'), '', 'utf-8'); + + // Remove SessionEnd hook + try { + const content = await fs.readFile(settingsPath, 'utf-8'); + const updated = removeKbHook(content); + if (updated !== content) { + await fs.writeFile(settingsPath, updated, 'utf-8'); + } + } catch { /* settings.json may not exist */ } + + // Update manifest + const manifest = await readManifest(devflowDir); + if (manifest) { + manifest.features.kb = false; + manifest.updatedAt = new Date().toISOString(); + await writeManifest(devflowDir, manifest); + } + + p.log.success('Feature knowledge bases disabled'); + p.log.info('Existing KBs preserved. Manual commands (create/refresh) still work.'); + p.outro(''); + + } else { + // options.status + p.intro(color.cyan('Feature KB Status')); + + // Check hook + let hookPresent = false; + try { + const content = await fs.readFile(settingsPath, 'utf-8'); + hookPresent = hasKbHook(content); + } catch { /* settings.json may not exist */ } + + // Check sentinel + let disabled = false; + try { + await fs.access(path.join(worktreePath, '.features', '.disabled')); + disabled = true; + } catch { /* not disabled */ } + + // Count KBs + const kbs = featureKb.listKBs(worktreePath); + + const enabled = hookPresent && !disabled; + p.log.info(`Status: ${enabled ? color.green('enabled') : color.yellow('disabled')}`); + p.log.info(`Hook: ${hookPresent ? 
color.green('installed') : color.dim('not installed')}`); + p.log.info(`KBs: ${kbs.length}`); + if (disabled) { + p.log.info(`Sentinel: ${color.yellow('.features/.disabled present')}`); + } + p.outro(''); + } + }); + +// --------------------------------------------------------------------------- +// devflow kb list +// --------------------------------------------------------------------------- + +kbCommand + .command('list') + .description('List all feature KBs with staleness status') + .action(async () => { + p.intro(color.cyan('Feature Knowledge Bases')); + + const worktreePath = await getWorktreePath(); + const kbs = featureKb.listKBs(worktreePath); + const staleness = featureKb.checkAllStaleness(worktreePath); + + if (kbs.length === 0) { + p.log.info( + 'No feature KBs found. KBs are created automatically during planning, or manually via ' + + color.cyan('devflow kb create ') + '.' + ); + p.outro(''); + return; + } + + p.log.info(`Found ${kbs.length} feature KB${kbs.length === 1 ? '' : 's'} in ${color.dim(worktreePath)}`); + console.log(''); + + for (const kb of kbs) { + const staleInfo = staleness[kb.slug]; + const isStale = staleInfo?.stale ?? false; + const statusBadge = isStale ? color.yellow('[STALE]') : color.green('[current]'); + + console.log(` ${color.bold(kb.name)} ${statusBadge}`); + console.log(` slug: ${color.dim(kb.slug)}`); + console.log(` updated: ${color.dim(kb.lastUpdated)}`); + console.log(` dirs: ${color.dim(kb.directories.join(', '))}`); + if (isStale && staleInfo.changedFiles.length > 0) { + const shown = staleInfo.changedFiles.slice(0, 3).join(', '); + const overflow = staleInfo.changedFiles.length > 3 ? 
` +${staleInfo.changedFiles.length - 3} more` : ''; + console.log(` changed: ${color.yellow(shown)}${overflow}`); + } + console.log(''); + } + + p.outro(`Run ${color.cyan('devflow kb check')} to see staleness details`); + }); + +// --------------------------------------------------------------------------- +// devflow kb check +// --------------------------------------------------------------------------- + +kbCommand + .command('check') + .description('Check all KBs for staleness') + .action(async () => { + p.intro(color.cyan('KB Staleness Check')); + + const worktreePath = await getWorktreePath(); + const kbs = featureKb.listKBs(worktreePath); + const staleness = featureKb.checkAllStaleness(worktreePath); + + if (kbs.length === 0) { + p.log.info('No feature KBs found.'); + p.outro(''); + return; + } + + let staleCount = 0; + + for (const kb of kbs) { + const staleInfo = staleness[kb.slug]; + const isStale = staleInfo?.stale ?? false; + if (isStale) { + staleCount++; + p.log.warn(`${kb.name} (${kb.slug}) is stale`); + for (const f of staleInfo.changedFiles.slice(0, 5)) { + console.log(` ${color.yellow('•')} ${f}`); + } + if (staleInfo.changedFiles.length > 5) { + console.log(` ${color.yellow('•')} ...and ${staleInfo.changedFiles.length - 5} more`); + } + } else { + p.log.success(`${kb.name} (${kb.slug}) is current`); + } + } + + if (staleCount > 0) { + p.outro(`${staleCount} KB${staleCount === 1 ? '' : 's'} need refresh. 
Run: ${color.cyan('devflow kb refresh')}`); + } else { + p.outro('All KBs are current'); + } + }); + +// --------------------------------------------------------------------------- +// devflow kb create +// --------------------------------------------------------------------------- + +kbCommand + .command('create ') + .description('Create a new KB via claude -p exploration') + .action(async (slug: string) => { + exitOnInvalidSlug(slug); + p.intro(color.cyan(`Create Feature KB: ${slug}`)); + + if (!isClaudeCliAvailable()) { + p.log.error('claude CLI not found on PATH. Install Claude Code first.'); + process.exit(1); + } + + const worktreePath = await getWorktreePath(); + + const name = await p.text({ + message: 'Feature name (human-readable)', + placeholder: 'e.g., CLI Command System', + validate: (v) => (v.trim().length < 3 ? 'Name must be at least 3 characters' : undefined), + }); + if (p.isCancel(name)) { p.cancel('Cancelled.'); return; } + + const directoriesRaw = await p.text({ + message: 'Directories (comma-separated, e.g., src/cli/commands/,src/cli/utils/)', + placeholder: 'src/feature/', + validate: (v) => (v.trim().length === 0 ? 'Enter at least one directory' : undefined), + }); + if (p.isCancel(directoriesRaw)) { p.cancel('Cancelled.'); return; } + + const directories = (directoriesRaw as string).split(',').map((d) => d.trim()).filter(Boolean); + const dirList = directories.map((d) => `"${d}"`).join(', '); + + const s = p.spinner(); + s.start('Creating KB...'); + + const sidecarPath = path.join(worktreePath, '.features', slug, '.create-result.json'); + try { await fs.unlink(sidecarPath); } catch { /* doesn't exist */ } + + const prompt = [ + `You are the Knowledge agent. Create a feature knowledge base for the following area:`, + ``, + `FEATURE_SLUG: ${slug}`, + `FEATURE_NAME: ${name as string}`, + `DIRECTORIES: [${dirList}]`, + `WORKTREE_PATH: ${worktreePath}`, + ``, + `Follow the devflow:feature-kb skill's 4-phase process:`, + `1. 
Scan the directories to identify key files and entry points`, + `2. Extract architecture, conventions, patterns, integration points`, + `3. Distill into actionable cross-cutting knowledge`, + `4. Write .features/${slug}/KNOWLEDGE.md with all required sections`, + ``, + `After writing KNOWLEDGE.md, write .features/${slug}/.create-result.json with:`, + `{`, + ` "referencedFiles": [<5-10 key files from the explored directories for staleness tracking>],`, + ` "description": "<one-line description>"`, + `}`, + ``, + `Create the directory if needed. Report KB_STATUS when done.`, + ].join('\n'); + + try { + execFileSync('claude', [ + '-p', prompt, + '--model', 'sonnet', + '--allowedTools', KB_AGENT_TOOLS, + '--dangerously-skip-permissions', + ], { + cwd: worktreePath, + stdio: 'pipe', + encoding: 'utf8', + }); + + const sidecar = await readSidecar(sidecarPath); + + featureKb.updateIndex(worktreePath, { + slug, + name: name as string, + directories, + referencedFiles: sidecar.referencedFiles ?? [], + description: sidecar.description, + createdBy: 'devflow-kb', + }); + + try { await fs.unlink(sidecarPath); } catch { /* already cleaned */ } + + s.stop('KB created successfully'); + p.log.success(`KB written to .features/${slug}/KNOWLEDGE.md`); + } catch (err) { + try { await fs.unlink(sidecarPath); } catch { /* cleanup */ } + s.stop('KB creation failed'); + p.log.error(`claude exited with error: ${err instanceof Error ? err.message : String(err)}`); + process.exit(1); + } + + p.outro(`Run ${color.cyan(`devflow kb list`)} to see all KBs`); + }); + +// --------------------------------------------------------------------------- +// devflow kb refresh [slug] +// --------------------------------------------------------------------------- + +kbCommand + .command('refresh [slug]') + .description('Refresh stale KB(s). Omit slug to refresh all stale KBs.') + .action(async (slug?: string) => { + p.intro(color.cyan(slug ? 
`Refresh KB: ${slug}` : 'Refresh Stale KBs')); + + if (slug) exitOnInvalidSlug(slug); + + if (!isClaudeCliAvailable()) { + p.log.error('claude CLI not found on PATH. Install Claude Code first.'); + process.exit(1); + } + + const worktreePath = await getWorktreePath(); + + // Determine which slugs to refresh + let slugsToRefresh: string[]; + let stalenessMap: Record<string, { stale: boolean; changedFiles: string[] }> | undefined; + if (slug) { + slugsToRefresh = [slug]; + } else { + stalenessMap = featureKb.checkAllStaleness(worktreePath); + slugsToRefresh = Object.entries(stalenessMap) + .filter(([, info]) => info.stale) + .map(([s]) => s); + } + + if (slugsToRefresh.length === 0) { + p.log.success('No stale KBs found — everything is current.'); + p.outro(''); + return; + } + + p.log.info(`Refreshing ${slugsToRefresh.length} KB${slugsToRefresh.length === 1 ? '' : 's'}: ${slugsToRefresh.join(', ')}`); + + const kbs = featureKb.listKBs(worktreePath); + + for (const kbSlug of slugsToRefresh) { + const s = p.spinner(); + s.start(`Refreshing ${kbSlug}...`); + + const staleInfo = stalenessMap?.[kbSlug] ?? featureKb.checkStaleness(worktreePath, kbSlug); + const kbEntry = kbs.find((k: { slug: string }) => k.slug === kbSlug); + const featureName = kbEntry?.name ?? kbSlug; + const kbDirectories = kbEntry?.directories ?? 
[]; + + const sidecarPath = path.join(worktreePath, '.features', kbSlug, '.refresh-result.json'); + try { await fs.unlink(sidecarPath); } catch { /* doesn't exist */ } + + const prompt = [ + `You are the Knowledge agent refreshing a stale feature knowledge base.`, + ``, + `FEATURE_SLUG: ${kbSlug}`, + `FEATURE_NAME: ${featureName}`, + `DIRECTORIES: ${JSON.stringify(kbDirectories)}`, + `WORKTREE_PATH: ${worktreePath}`, + `CHANGED_FILES: ${JSON.stringify(staleInfo.changedFiles)}`, + ``, + `Instructions:`, + `- Read .features/${kbSlug}/KNOWLEDGE.md to see the existing KB content`, + `- Read the CHANGED_FILES to understand what changed`, + `- Update the stale sections based on changes`, + `- Preserve any manually added content`, + `- Do not regenerate from scratch`, + `- Write the updated KB to .features/${kbSlug}/KNOWLEDGE.md`, + `- Write .features/${kbSlug}/.refresh-result.json with: {"referencedFiles": [<5-10 key files from explored directories for staleness tracking>]}`, + ].join('\n'); + + try { + execFileSync('claude', [ + '-p', prompt, + '--model', 'sonnet', + '--allowedTools', KB_AGENT_TOOLS, + '--dangerously-skip-permissions', + ], { + cwd: worktreePath, + stdio: 'pipe', + encoding: 'utf8', + }); + + const sidecar = await readSidecar(sidecarPath); + + featureKb.updateIndex(worktreePath, { + slug: kbSlug, + name: featureName, + directories: kbDirectories, + referencedFiles: sidecar.referencedFiles ?? kbEntry?.referencedFiles ?? [], + createdBy: 'devflow-kb', + }); + + try { await fs.unlink(sidecarPath); } catch { /* already cleaned */ } + + s.stop(`${kbSlug} refreshed`); + } catch (err) { + try { await fs.unlink(sidecarPath); } catch { /* cleanup */ } + s.stop(`${kbSlug} refresh failed`); + p.log.error(`Error: ${err instanceof Error ? 
err.message : String(err)}`); + } + } + + p.outro('Refresh complete'); + }); + +// --------------------------------------------------------------------------- +// devflow kb remove +// --------------------------------------------------------------------------- + +kbCommand + .command('remove ') + .description('Remove a KB and its index entry') + .action(async (slug: string) => { + exitOnInvalidSlug(slug); + p.intro(color.cyan(`Remove KB: ${slug}`)); + + const confirmed = await p.confirm({ + message: `Remove KB '${slug}' and its KNOWLEDGE.md? This cannot be undone.`, + initialValue: false, + }); + if (p.isCancel(confirmed) || !confirmed) { + p.cancel('Removal cancelled.'); + return; + } + + const worktreePath = await getWorktreePath(); + + try { + featureKb.removeEntry(worktreePath, slug); + p.log.success(`KB '${slug}' removed`); + } catch (err) { + p.log.error(`Failed to remove KB: ${err instanceof Error ? err.message : String(err)}`); + process.exit(1); + } + + p.outro('Done'); + }); diff --git a/src/cli/commands/uninstall.ts b/src/cli/commands/uninstall.ts index 29e0845..2c4f1cf 100644 --- a/src/cli/commands/uninstall.ts +++ b/src/cli/commands/uninstall.ts @@ -13,6 +13,7 @@ import { removeAmbientHook } from './ambient.js'; import { removeMemoryHooks } from './memory.js'; import { removeLearningHook } from './learn.js'; import { removeHudStatusLine } from './hud.js'; +import { removeKbHook } from './kb.js'; import { listShadowed } from './skills.js'; import { detectShell, getProfilePath } from '../utils/safe-delete.js'; import { isAlreadyInstalled, removeFromProfile } from '../utils/safe-delete-install.js'; @@ -400,6 +401,7 @@ export const uninstallCommand = new Command('uninstall') settingsContent = removeMemoryHooks(settingsContent); settingsContent = removeLearningHook(settingsContent); settingsContent = removeHudStatusLine(settingsContent); + settingsContent = removeKbHook(settingsContent); settingsContent = stripFlags(settingsContent); if (settingsContent !== 
originalContent) { diff --git a/src/cli/plugins.ts b/src/cli/plugins.ts index 7be2872..999c711 100644 --- a/src/cli/plugins.ts +++ b/src/cli/plugins.ts @@ -47,55 +47,55 @@ export const DEVFLOW_PLUGINS: PluginDefinition[] = [ description: 'Auto-activating quality enforcement skills - foundation layer for all Devflow plugins', commands: [], agents: [], - skills: ['apply-knowledge', 'software-design', 'docs-framework', 'git', 'boundary-validation', 'research', 'test-driven-development', 'testing'], + skills: ['apply-knowledge', 'apply-feature-kb', 'software-design', 'docs-framework', 'git', 'boundary-validation', 'research', 'test-driven-development', 'testing'], }, { name: 'devflow-plan', description: 'Unified design planning with gap analysis and design review', commands: ['/plan'], - agents: ['git', 'skimmer', 'synthesizer', 'designer'], - skills: ['agent-teams', 'gap-analysis', 'design-review', 'patterns', 'worktree-support'], + agents: ['git', 'skimmer', 'synthesizer', 'designer', 'knowledge'], + skills: ['agent-teams', 'gap-analysis', 'design-review', 'patterns', 'worktree-support', 'feature-kb', 'apply-feature-kb'], }, { name: 'devflow-implement', description: 'Complete task implementation workflow - accepts plan documents, issues, or task descriptions', commands: ['/implement'], agents: ['git', 'coder', 'simplifier', 'scrutinizer', 'evaluator', 'tester', 'validator'], - skills: ['agent-teams', 'patterns', 'qa', 'quality-gates', 'worktree-support'], + skills: ['agent-teams', 'patterns', 'qa', 'quality-gates', 'worktree-support', 'apply-feature-kb'], }, { name: 'devflow-code-review', description: 'Comprehensive code review with parallel specialized agents', commands: ['/code-review'], agents: ['git', 'reviewer', 'synthesizer'], - skills: ['agent-teams', 'architecture', 'complexity', 'consistency', 'database', 'dependencies', 'documentation', 'performance', 'regression', 'review-methodology', 'security', 'testing', 'worktree-support'], + skills: ['agent-teams', 
'architecture', 'complexity', 'consistency', 'database', 'dependencies', 'documentation', 'performance', 'regression', 'review-methodology', 'security', 'testing', 'worktree-support', 'apply-feature-kb'], }, { name: 'devflow-resolve', description: 'Process and fix code review issues with risk assessment', commands: ['/resolve'], agents: ['git', 'resolver', 'simplifier'], - skills: ['agent-teams', 'patterns', 'security', 'worktree-support'], + skills: ['agent-teams', 'patterns', 'security', 'worktree-support', 'apply-feature-kb'], }, { name: 'devflow-debug', description: 'Debugging workflows with competing hypothesis investigation using agent teams', commands: ['/debug'], agents: ['git', 'synthesizer'], - skills: ['agent-teams', 'git', 'worktree-support'], + skills: ['agent-teams', 'git', 'worktree-support', 'apply-feature-kb'], }, { name: 'devflow-self-review', description: 'Self-review workflow: Simplifier + Scrutinizer for code quality', commands: ['/self-review'], agents: ['simplifier', 'scrutinizer', 'validator'], - skills: ['quality-gates', 'software-design', 'worktree-support'], + skills: ['quality-gates', 'software-design', 'worktree-support', 'apply-feature-kb'], }, { name: 'devflow-ambient', description: 'Ambient mode — intent classification with proportional agent orchestration', commands: ['/ambient'], - agents: ['coder', 'validator', 'simplifier', 'scrutinizer', 'evaluator', 'tester', 'skimmer', 'reviewer', 'git', 'synthesizer', 'resolver', 'designer'], + agents: ['coder', 'validator', 'simplifier', 'scrutinizer', 'evaluator', 'tester', 'skimmer', 'reviewer', 'git', 'synthesizer', 'resolver', 'designer', 'knowledge'], skills: [ 'router', 'implement:orch', @@ -121,6 +121,8 @@ export const DEVFLOW_PLUGINS: PluginDefinition[] = [ 'worktree-support', 'gap-analysis', 'design-review', + 'feature-kb', + 'apply-feature-kb', ], }, { @@ -393,6 +395,9 @@ export const LEGACY_SKILL_NAMES: string[] = [ 'design-review', // v2.x knowledge index pattern: new shared 
skill bare name for pre-namespace installs 'apply-knowledge', + // v2.x feature knowledge bases: new skills bare names for pre-namespace installs + 'feature-kb', + 'apply-feature-kb', ]; /** diff --git a/src/cli/utils/manifest.ts b/src/cli/utils/manifest.ts index 112b720..e0e5d55 100644 --- a/src/cli/utils/manifest.ts +++ b/src/cli/utils/manifest.ts @@ -15,6 +15,7 @@ export interface ManifestData { memory: boolean; learn: boolean; hud: boolean; + kb: boolean; flags: string[]; }; installedAt: string; @@ -55,6 +56,7 @@ export async function readManifest(devflowDir: string): Promise<ManifestData | null> { try { const gitignorePath = path.join(gitRoot, '.gitignore'); - const entriesToAdd = ['.claude/', '.devflow/', '.memory/', '.docs/']; + const entriesToAdd = ['.claude/', '.devflow/', '.memory/', '.docs/', '.features/.kb.lock', '.features/.disabled', '.features/.kb-last-refresh', '.features/.kb-refresh.lock']; let gitignoreContent = ''; try { diff --git a/tests/feature-kb/apply-feature-kb-skill.test.ts b/tests/feature-kb/apply-feature-kb-skill.test.ts new file mode 100644 index 0000000..8bc2f27 --- /dev/null +++ b/tests/feature-kb/apply-feature-kb-skill.test.ts @@ -0,0 +1,41 @@ +import { describe, it, expect } from 'vitest'; +import { readFileSync } from 'fs'; +import * as path from 'path'; + +const ROOT = path.resolve(import.meta.dirname, '../..'); + +describe('feature-kb skill', () => { + const content = readFileSync(path.join(ROOT, 'shared/skills/feature-kb/SKILL.md'), 'utf8'); + + it('has iron law', () => { expect(content).toContain('## Iron Law'); }); + it('has 4-phase process', () => { + expect(content).toContain('### Phase 1: Scan'); + expect(content).toContain('### Phase 2: Extract'); + expect(content).toContain('### Phase 3: Distill'); + expect(content).toContain('### Phase 4: Forge'); + }); + it('has quality self-checks', () => { expect(content).toContain('## Quality Self-Checks'); }); + it('has KB format template with required sections', () => { + expect(content).toContain('## Overview'); + expect(content).toContain('## Architecture'); + expect(content).toContain('## Data Flow'); + expect(content).toContain('## Key Patterns'); + expect(content).toContain('## Anti-Patterns'); + expect(content).toContain('## Gotchas'); + expect(content).toContain('## Key Files'); + }); +}); + +describe('apply-feature-kb skill', () => { + const content = readFileSync(path.join(ROOT, 'shared/skills/apply-feature-kb/SKILL.md'), 'utf8'); + + it('has iron law', () => { expect(content).toContain('## Iron Law'); }); + it('has 3-step algorithm', () => { + expect(content).toContain('### Step 1: Read the KB'); + expect(content).toContain('### Step 2: Apply to Current Task'); + expect(content).toContain('### Step 3: Supplement as Needed'); + }); + it('has skip guard', () => { expect(content).toContain('## Skip Guard'); }); + it('has staleness handling', () => { expect(content).toContain('## Staleness Handling'); }); + it('references (none) skip', () => { expect(content).toContain('(none)'); }); +}); diff --git a/tests/feature-kb/feature-kb.test.ts b/tests/feature-kb/feature-kb.test.ts new file mode 100644 index 0000000..07f94fc --- /dev/null +++ b/tests/feature-kb/feature-kb.test.ts @@ -0,0 +1,659 @@ +import { describe, it, expect, afterAll, afterEach } from 'vitest'; +import * as path from 'path'; +import * as os from 'os'; +import { createRequire } from 'module'; +import { writeFileSync, mkdirSync, readFileSync, existsSync, rmSync, rmdirSync } from 'fs'; +import { promises as fsPromises } from 'fs'; +import { execSync, execFileSync } from 'child_process'; +import { + SAMPLE_INDEX, + SAMPLE_KB_CONTENT, + makeTmpFeatureWorktree, + cleanupTmpFeatureWorktrees, +} from './fixtures'; + +afterAll(() => cleanupTmpFeatureWorktrees()); + +const ROOT = path.resolve(import.meta.dirname, '../..'); +const require = createRequire(import.meta.url); + +const { + loadIndex, + loadKBContent, + checkStaleness, + checkAllStaleness, + updateIndex, + findOverlapping, + removeEntry, + 
listKBs, + validateSlug, +} = require(path.join(ROOT, 'scripts/hooks/lib/feature-kb.cjs')) as { + loadIndex: (worktreePath: string) => { version: number; features: Record<string, unknown> } | null; + loadKBContent: (worktreePath: string, slug: string) => string | null; + checkStaleness: (worktreePath: string, slug: string) => { stale: boolean; changedFiles: string[] }; + checkAllStaleness: (worktreePath: string) => Record<string, { stale: boolean; changedFiles: string[] }>; + updateIndex: (worktreePath: string, entry: Record<string, unknown>, lockTimeoutMs?: number) => void; + findOverlapping: (worktreePath: string, changedFiles: string[]) => string[]; + removeEntry: (worktreePath: string, slug: string, lockTimeoutMs?: number) => void; + listKBs: (worktreePath: string) => Array<{ slug: string } & Record<string, unknown>>; + validateSlug: (slug: string) => void; +}; + +// --------------------------------------------------------------------------- +// loadIndex +// --------------------------------------------------------------------------- + +describe('loadIndex', () => { + it('returns parsed object for valid JSON', () => { + const tmp = makeTmpFeatureWorktree(SAMPLE_INDEX); + const result = loadIndex(tmp); + expect(result).not.toBeNull(); + expect(result!.version).toBe(1); + expect(result!.features['cli-commands']).toBeDefined(); + }); + + it('returns null for missing directory', () => { + const tmp = makeTmpFeatureWorktree(); // no index written + rmSync(path.join(tmp, '.features'), { recursive: true, force: true }); + expect(loadIndex(tmp)).toBeNull(); + }); + + it('returns null for invalid JSON', () => { + const tmp = makeTmpFeatureWorktree(); + writeFileSync(path.join(tmp, '.features', 'index.json'), 'not-json'); + expect(loadIndex(tmp)).toBeNull(); + }); +}); + +// --------------------------------------------------------------------------- +// loadKBContent +// --------------------------------------------------------------------------- + +describe('loadKBContent', () => { + it('returns content string when KNOWLEDGE.md exists', () => { + const tmp = 
makeTmpFeatureWorktree(SAMPLE_INDEX, { 'cli-commands': SAMPLE_KB_CONTENT }); + const content = loadKBContent(tmp, 'cli-commands'); + expect(content).not.toBeNull(); + expect(content).toContain('# CLI Command System'); + }); + + it('returns null for missing KB', () => { + const tmp = makeTmpFeatureWorktree(SAMPLE_INDEX); + expect(loadKBContent(tmp, 'cli-commands')).toBeNull(); + }); +}); + +// --------------------------------------------------------------------------- +// checkStaleness +// --------------------------------------------------------------------------- + +describe('checkStaleness', () => { + it('returns stale: false when entry is not found in index', () => { + const tmp = makeTmpFeatureWorktree({ version: 1, features: {} }); + const result = checkStaleness(tmp, 'nonexistent'); + expect(result.stale).toBe(false); + expect(result.changedFiles).toEqual([]); + }); + + it('returns stale: false for non-git repos', () => { + // tmp dir has no git init, so it is a non-git directory + const tmp = makeTmpFeatureWorktree(SAMPLE_INDEX); + const result = checkStaleness(tmp, 'cli-commands'); + expect(result.stale).toBe(false); + expect(result.changedFiles).toEqual([]); + }); +}); + +// --------------------------------------------------------------------------- +// checkStaleness (positive — git repo) +// --------------------------------------------------------------------------- + +// T2: Positive staleness detection in a real git repo +describe('checkStaleness (positive — git repo)', () => { + it('detects stale KB when referenced file changed after lastUpdated', () => { + const tmp = makeTmpFeatureWorktree(); + // Remove auto-created .features dir — we'll set it up after git init + rmSync(path.join(tmp, '.features'), { recursive: true, force: true }); + + // Initialize git repo with initial commit + execSync('git init', { cwd: tmp, stdio: 'pipe' }); + execSync('git config user.email "test@test.com"', { cwd: tmp, stdio: 'pipe' }); + execSync('git config user.name 
"Test"', { cwd: tmp, stdio: 'pipe' }); + + // Create a tracked file and commit it + const srcDir = path.join(tmp, 'src', 'cli'); + mkdirSync(srcDir, { recursive: true }); + writeFileSync(path.join(srcDir, 'cli.ts'), 'export const v = 1;'); + execSync('git add .', { cwd: tmp, stdio: 'pipe' }); + execSync('git commit -m "initial"', { cwd: tmp, stdio: 'pipe' }); + + // Set lastUpdated to just before now + const lastUpdated = new Date(Date.now() - 5000).toISOString(); + + // Create the index with a KB that references src/cli/cli.ts + const featuresDir = path.join(tmp, '.features'); + mkdirSync(featuresDir, { recursive: true }); + const index = { + version: 1, + features: { + 'my-feature': { + name: 'My Feature', + description: '', + directories: ['src/cli/'], + referencedFiles: ['src/cli/cli.ts'], + lastUpdated, + createdBy: 'test', + }, + }, + }; + writeFileSync(path.join(featuresDir, 'index.json'), JSON.stringify(index, null, 2)); + + // Modify the file and commit + writeFileSync(path.join(srcDir, 'cli.ts'), 'export const v = 2;'); + execSync('git add .', { cwd: tmp, stdio: 'pipe' }); + execSync('git commit -m "update cli.ts"', { cwd: tmp, stdio: 'pipe' }); + + const result = checkStaleness(tmp, 'my-feature'); + expect(result.stale).toBe(true); + expect(result.changedFiles).toContain('src/cli/cli.ts'); + }); +}); + +// --------------------------------------------------------------------------- +// updateIndex +// --------------------------------------------------------------------------- + +describe('updateIndex', () => { + it('creates a new entry in an empty index', () => { + const tmp = makeTmpFeatureWorktree({ version: 1, features: {} }); + updateIndex(tmp, { + slug: 'payments', + name: 'Payment Processing', + directories: ['src/payments/'], + referencedFiles: ['src/payments/checkout.ts'], + createdBy: 'test', + }); + const index = loadIndex(tmp); + expect(index).not.toBeNull(); + expect(index!.features['payments']).toBeDefined(); + const entry = 
index!.features['payments'] as Record<string, unknown>; + expect(entry.name).toBe('Payment Processing'); + }); + + it('upserts an existing entry, preserving createdBy', () => { + const tmp = makeTmpFeatureWorktree(SAMPLE_INDEX); + updateIndex(tmp, { + slug: 'cli-commands', + name: 'CLI Command System Updated', + directories: ['src/cli/'], + referencedFiles: ['src/cli/cli.ts'], + }); + const index = loadIndex(tmp); + expect(index).not.toBeNull(); + const entry = index!.features['cli-commands'] as Record<string, unknown>; + expect(entry.name).toBe('CLI Command System Updated'); + // createdBy should be preserved from original + expect(entry.createdBy).toBe('plan:orch'); + }); + + it('sets lastUpdated to a current ISO timestamp', () => { + const before = new Date().toISOString(); + const tmp = makeTmpFeatureWorktree({ version: 1, features: {} }); + updateIndex(tmp, { + slug: 'test-slug', + name: 'Test', + directories: [], + referencedFiles: [], + }); + const after = new Date().toISOString(); + const index = loadIndex(tmp); + expect(index).not.toBeNull(); + const entry = index!.features['test-slug'] as Record<string, unknown>; + const updated = entry.lastUpdated as string; + expect(updated >= before).toBe(true); + expect(updated <= after).toBe(true); + }); + + // T1: Lock failure + it('throws when lock cannot be acquired within timeout', () => { + const tmp = makeTmpFeatureWorktree({ version: 1, features: {} }); + const lockPath = path.join(tmp, '.features', '.kb.lock'); + // Pre-create lock directory to simulate a held lock + mkdirSync(lockPath); + + expect(() => updateIndex(tmp, { + slug: 'test-lock', + name: 'Test', + directories: [], + referencedFiles: [], + }, 200)).toThrow(/lock/i); + + // Lock dir should still exist (not cleaned up by our failed attempt) + expect(existsSync(lockPath)).toBe(true); + // Clean up + rmdirSync(lockPath); + }); + + // T4: Creates missing .features/ directory + it('creates .features/ directory if missing', () => { + const tmp = makeTmpFeatureWorktree(); + // Remove the .features dir + 
rmSync(path.join(tmp, '.features'), { recursive: true, force: true }); + expect(existsSync(path.join(tmp, '.features'))).toBe(false); + + updateIndex(tmp, { + slug: 'new-feature', + name: 'New Feature', + directories: ['src/new/'], + referencedFiles: ['src/new/index.ts'], + }); + + expect(existsSync(path.join(tmp, '.features'))).toBe(true); + const index = loadIndex(tmp); + expect(index).not.toBeNull(); + expect(index!.features['new-feature']).toBeDefined(); + }); +}); + +// --------------------------------------------------------------------------- +// removeEntry +// --------------------------------------------------------------------------- + +describe('removeEntry', () => { + it('removes entry from index and deletes its directory', () => { + const tmp = makeTmpFeatureWorktree(SAMPLE_INDEX, { 'cli-commands': SAMPLE_KB_CONTENT }); + const kbDir = path.join(tmp, '.features', 'cli-commands'); + expect(existsSync(kbDir)).toBe(true); + + removeEntry(tmp, 'cli-commands'); + + const index = loadIndex(tmp); + expect(index).not.toBeNull(); + expect(index!.features['cli-commands']).toBeUndefined(); + expect(existsSync(kbDir)).toBe(false); + }); + + it('is a no-op for a non-existent slug', () => { + const tmp = makeTmpFeatureWorktree(SAMPLE_INDEX); + // Should not throw + expect(() => removeEntry(tmp, 'nonexistent')).not.toThrow(); + // Original entry should still exist + const index = loadIndex(tmp); + expect(index).not.toBeNull(); + expect(index!.features['cli-commands']).toBeDefined(); + }); + + // T5: No-op when .features/ directory is missing + it('is a no-op when .features/ directory does not exist', () => { + const tmp = makeTmpFeatureWorktree(); + rmSync(path.join(tmp, '.features'), { recursive: true, force: true }); + expect(existsSync(path.join(tmp, '.features'))).toBe(false); + + // Should not throw + expect(() => removeEntry(tmp, 'nonexistent')).not.toThrow(); + }); + + it('preserves corrupt index.json on remove instead of overwriting', () => { + const tmp = 
makeTmpFeatureWorktree(); + writeFileSync(path.join(tmp, '.features', 'index.json'), 'not-valid-json'); + removeEntry(tmp, 'nonexistent'); + const raw = readFileSync(path.join(tmp, '.features', 'index.json'), 'utf8'); + expect(raw).toBe('not-valid-json'); + }); +}); + +// --------------------------------------------------------------------------- +// findOverlapping +// --------------------------------------------------------------------------- + +describe('findOverlapping', () => { + it('identifies KBs whose referencedFiles overlap with changed files', () => { + const tmp = makeTmpFeatureWorktree(SAMPLE_INDEX); + const overlapping = findOverlapping(tmp, ['src/cli/cli.ts', 'some/other/file.ts']); + expect(overlapping).toContain('cli-commands'); + }); + + it('returns empty array when no overlap', () => { + const tmp = makeTmpFeatureWorktree(SAMPLE_INDEX); + const overlapping = findOverlapping(tmp, ['src/payments/checkout.ts', 'src/unrelated.ts']); + expect(overlapping).toEqual([]); + }); + + it('returns empty array for missing index', () => { + const tmp = makeTmpFeatureWorktree(); + const overlapping = findOverlapping(tmp, ['src/cli/cli.ts']); + expect(overlapping).toEqual([]); + }); + + it('does not match on common prefix without directory boundary', () => { + const tmp = makeTmpFeatureWorktree(SAMPLE_INDEX); + // 'src/cli' should NOT match 'src/clitools/foo.ts' (no dir boundary) + const overlapping = findOverlapping(tmp, ['src/clitools/foo.ts']); + expect(overlapping).not.toContain('cli-commands'); + }); + + // T3: Directory boundary matching + // referencedFiles uses no trailing slash so the startsWith(ref + '/') logic + // in findOverlapping correctly matches nested files while rejecting + // files that merely share a prefix (e.g. src/client vs src/cli). 
+ it('matches files under a referenced directory prefix', () => { + const index = { + version: 1, + features: { + 'cli-feature': { + name: 'CLI', + description: '', + directories: ['src/cli/'], + referencedFiles: ['src/cli'], + lastUpdated: new Date().toISOString(), + createdBy: 'test', + }, + }, + }; + const tmp = makeTmpFeatureWorktree(index); + + // File under the directory prefix — should match (src/cli is a prefix of src/cli/deep/file.ts) + expect(findOverlapping(tmp, ['src/cli/deep/file.ts'])).toContain('cli-feature'); + + // File NOT under the directory but sharing prefix — should NOT match + // (src/cli is NOT a prefix of src/client.ts since there's no / after cli) + expect(findOverlapping(tmp, ['src/client.ts'])).toEqual([]); + }); +}); + +// --------------------------------------------------------------------------- +// listKBs +// --------------------------------------------------------------------------- + +describe('listKBs', () => { + it('returns all entries with their slugs', () => { + const tmp = makeTmpFeatureWorktree(SAMPLE_INDEX); + const entries = listKBs(tmp); + expect(entries).toHaveLength(1); + expect(entries[0].slug).toBe('cli-commands'); + expect(entries[0].name).toBe('CLI Command System'); + }); + + it('returns empty array for missing index', () => { + const tmp = makeTmpFeatureWorktree(); + expect(listKBs(tmp)).toEqual([]); + }); +}); + +// --------------------------------------------------------------------------- +// checkAllStaleness +// --------------------------------------------------------------------------- + +describe('checkAllStaleness', () => { + it('returns empty object for missing index', () => { + const tmp = makeTmpFeatureWorktree(); + expect(checkAllStaleness(tmp)).toEqual({}); + }); + + it('returns an entry per slug', () => { + const tmp = makeTmpFeatureWorktree(SAMPLE_INDEX); + const result = checkAllStaleness(tmp); + expect(result['cli-commands']).toBeDefined(); + expect(result['cli-commands']).toHaveProperty('stale'); 
+ expect(result['cli-commands']).toHaveProperty('changedFiles'); + }); +}); + +// --------------------------------------------------------------------------- +// validateSlug +// --------------------------------------------------------------------------- + +describe('validateSlug', () => { + it('accepts valid kebab-case slugs', () => { + expect(() => validateSlug('cli-commands')).not.toThrow(); + expect(() => validateSlug('payments')).not.toThrow(); + expect(() => validateSlug('my-feature-123')).not.toThrow(); + expect(() => validateSlug('a')).not.toThrow(); + }); + + it('rejects path traversal attempts', () => { + expect(() => validateSlug('../etc')).toThrow(/must not contain/); + expect(() => validateSlug('../../dangerous')).toThrow(/must not contain/); + expect(() => validateSlug('foo/../bar')).toThrow(/must not contain/); + }); + + it('rejects slugs with slashes', () => { + expect(() => validateSlug('foo/bar')).toThrow(/must not contain/); + expect(() => validateSlug('foo\\bar')).toThrow(/must not contain/); + }); + + it('rejects slugs starting with a dot', () => { + expect(() => validateSlug('.hidden')).toThrow(/must not start with/); + }); + + it('rejects non-kebab-case slugs', () => { + expect(() => validateSlug('MyFeature')).toThrow(/kebab-case/); + expect(() => validateSlug('my_feature')).toThrow(/kebab-case/); + expect(() => validateSlug('MY-FEATURE')).toThrow(/kebab-case/); + expect(() => validateSlug('')).toThrow(/non-empty/); + }); + + it('rejects empty and non-string values', () => { + expect(() => validateSlug('')).toThrow(); + // @ts-expect-error testing runtime behavior + expect(() => validateSlug(null)).toThrow(); + // @ts-expect-error testing runtime behavior + expect(() => validateSlug(undefined)).toThrow(); + }); +}); + +// --------------------------------------------------------------------------- +// CLI: stale-slugs subcommand +// --------------------------------------------------------------------------- + +const FEATURE_KB_CJS = 
path.join(ROOT, 'scripts/hooks/lib/feature-kb.cjs'); + +describe('CLI stale-slugs', () => { + it('outputs nothing for non-stale index (non-git repo)', () => { + const tmp = makeTmpFeatureWorktree(SAMPLE_INDEX); + // Non-git repo → checkAllStaleness returns stale: false for everything + const output = execFileSync('node', [FEATURE_KB_CJS, 'stale-slugs', tmp], { encoding: 'utf8' }); + expect(output.trim()).toBe(''); + }); + + it('outputs stale slugs one per line for a git repo with changes', () => { + const tmp = makeTmpFeatureWorktree(); + // Remove auto-created .features dir — we'll set it up after git init + rmSync(path.join(tmp, '.features'), { recursive: true, force: true }); + + execSync('git init', { cwd: tmp, stdio: 'pipe' }); + execSync('git config user.email "test@test.com"', { cwd: tmp, stdio: 'pipe' }); + execSync('git config user.name "Test"', { cwd: tmp, stdio: 'pipe' }); + + const srcDir = path.join(tmp, 'src', 'cli'); + mkdirSync(srcDir, { recursive: true }); + writeFileSync(path.join(srcDir, 'cli.ts'), 'export const v = 1;'); + execSync('git add .', { cwd: tmp, stdio: 'pipe' }); + execSync('git commit -m "initial"', { cwd: tmp, stdio: 'pipe' }); + + const lastUpdated = new Date(Date.now() - 5000).toISOString(); + const featuresDir = path.join(tmp, '.features'); + mkdirSync(featuresDir, { recursive: true }); + writeFileSync(path.join(featuresDir, 'index.json'), JSON.stringify({ + version: 1, + features: { + 'stale-feature': { + name: 'Stale Feature', + description: '', + directories: ['src/cli/'], + referencedFiles: ['src/cli/cli.ts'], + lastUpdated, + createdBy: 'test', + }, + }, + }, null, 2)); + + // Modify the referenced file and commit after lastUpdated + writeFileSync(path.join(srcDir, 'cli.ts'), 'export const v = 2;'); + execSync('git add .', { cwd: tmp, stdio: 'pipe' }); + execSync('git commit -m "update cli.ts"', { cwd: tmp, stdio: 'pipe' }); + + const output = execFileSync('node', [FEATURE_KB_CJS, 'stale-slugs', tmp], { encoding: 'utf8' }); 
+ expect(output.trim().split('\n')).toContain('stale-feature'); + }); + + it('exits non-zero and prints usage when worktree argument is missing', () => { + expect(() => + execFileSync('node', [FEATURE_KB_CJS, 'stale-slugs'], { encoding: 'utf8', stdio: 'pipe' }) + ).toThrow(expect.objectContaining({ status: 1 })); + }); +}); + +// --------------------------------------------------------------------------- +// CLI: refresh-context subcommand +// --------------------------------------------------------------------------- + +describe('CLI refresh-context', () => { + it('outputs tab-separated metadata for an existing KB entry', () => { + const tmp = makeTmpFeatureWorktree(SAMPLE_INDEX); + const output = execFileSync('node', [FEATURE_KB_CJS, 'refresh-context', tmp, 'cli-commands'], { encoding: 'utf8' }); + const parts = output.trim().split('\t'); + expect(parts).toHaveLength(3); + expect(parts[0]).toBe('CLI Command System'); // name + expect(JSON.parse(parts[1])).toEqual(['src/cli/commands/', 'src/cli/utils/']); // directories JSON + expect(() => JSON.parse(parts[2])).not.toThrow(); // changed files JSON + }); + + it('exits non-zero when slug is missing', () => { + const tmp = makeTmpFeatureWorktree(SAMPLE_INDEX); + expect(() => + execFileSync('node', [FEATURE_KB_CJS, 'refresh-context', tmp], { encoding: 'utf8', stdio: 'pipe' }) + ).toThrow(expect.objectContaining({ status: 1 })); + }); + + it('exits non-zero when slug is not found in index', () => { + const tmp = makeTmpFeatureWorktree(SAMPLE_INDEX); + expect(() => + execFileSync('node', [FEATURE_KB_CJS, 'refresh-context', tmp, 'nonexistent'], { encoding: 'utf8', stdio: 'pipe' }) + ).toThrow(expect.objectContaining({ status: 1 })); + }); + + it('exits non-zero for invalid slug (path traversal)', () => { + const tmp = makeTmpFeatureWorktree(SAMPLE_INDEX); + expect(() => + execFileSync('node', [FEATURE_KB_CJS, 'refresh-context', tmp, '../etc'], { encoding: 'utf8', stdio: 'pipe' }) + ).toThrow(expect.objectContaining({ 
status: 1 })); + }); +}); + +// --------------------------------------------------------------------------- +// CLI stale-slugs: empty index +// --------------------------------------------------------------------------- + +describe('CLI stale-slugs (empty index)', () => { + it('outputs nothing for empty index', () => { + const tmp = makeTmpFeatureWorktree({ version: 1, features: {} }); + const output = execFileSync('node', [FEATURE_KB_CJS, 'stale-slugs', tmp], { encoding: 'utf8' }); + expect(output.trim()).toBe(''); + }); +}); + +// --------------------------------------------------------------------------- +// json-helper.cjs read-sidecar +// --------------------------------------------------------------------------- + +const JSON_HELPER_CJS = path.join(ROOT, 'scripts/hooks/json-helper.cjs'); + +describe('json-helper read-sidecar', () => { + it('returns parsed JSON array for valid sidecar with array field', () => { + const sidecar = path.join(os.tmpdir(), `test-sidecar-${Date.now()}.json`); + writeFileSync(sidecar, JSON.stringify({ referencedFiles: ['src/a.ts', 'src/b.ts'] })); + try { + const output = execFileSync('node', [JSON_HELPER_CJS, 'read-sidecar', sidecar, 'referencedFiles'], { encoding: 'utf8' }); + expect(JSON.parse(output.trim())).toEqual(['src/a.ts', 'src/b.ts']); + } finally { + try { rmSync(sidecar); } catch { /* best-effort */ } + } + }); + + it('returns [] for missing file', () => { + const output = execFileSync('node', [JSON_HELPER_CJS, 'read-sidecar', '/nonexistent/path/file.json', 'referencedFiles'], { encoding: 'utf8' }); + expect(output.trim()).toBe('[]'); + }); + + it('returns [] for malformed JSON', () => { + const sidecar = path.join(os.tmpdir(), `test-sidecar-bad-${Date.now()}.json`); + writeFileSync(sidecar, 'not-json'); + try { + const output = execFileSync('node', [JSON_HELPER_CJS, 'read-sidecar', sidecar, 'referencedFiles'], { encoding: 'utf8' }); + expect(output.trim()).toBe('[]'); + } finally { + try { rmSync(sidecar); } catch { /* 
best-effort */ } + } + }); + + it('returns [] when field value is not an array', () => { + const sidecar = path.join(os.tmpdir(), `test-sidecar-noarray-${Date.now()}.json`); + writeFileSync(sidecar, JSON.stringify({ referencedFiles: 'not-an-array' })); + try { + const output = execFileSync('node', [JSON_HELPER_CJS, 'read-sidecar', sidecar, 'referencedFiles'], { encoding: 'utf8' }); + expect(output.trim()).toBe('[]'); + } finally { + try { rmSync(sidecar); } catch { /* best-effort */ } + } + }); + + it('returns [] when args are missing', () => { + const output = execFileSync('node', [JSON_HELPER_CJS, 'read-sidecar'], { encoding: 'utf8' }); + expect(output.trim()).toBe('[]'); + }); +}); + +// --------------------------------------------------------------------------- +// readSidecar helper (TypeScript) +// --------------------------------------------------------------------------- + +import { readSidecar } from '../../src/cli/commands/kb.js'; + +describe('readSidecar', () => { + const tmpFiles: string[] = []; + + afterEach(() => { + for (const f of tmpFiles) { + try { rmSync(f); } catch { /* best-effort */ } + } + tmpFiles.length = 0; + }); + + function writeTmp(content: string): string { + const f = path.join(os.tmpdir(), `test-read-sidecar-${Date.now()}-${Math.random().toString(36).slice(2)}.json`); + writeFileSync(f, content); + tmpFiles.push(f); + return f; + } + + it('returns referencedFiles and description for valid sidecar', async () => { + const f = writeTmp(JSON.stringify({ referencedFiles: ['src/a.ts', 'src/b.ts'], description: 'Use when X' })); + const result = await readSidecar(f); + expect(result.referencedFiles).toEqual(['src/a.ts', 'src/b.ts']); + expect(result.description).toBe('Use when X'); + }); + + it('returns {} for missing file', async () => { + const result = await readSidecar('/nonexistent/path/file.json'); + expect(result).toEqual({}); + }); + + it('returns {} for invalid JSON', async () => { + const f = writeTmp('not-valid-json'); + const 
result = await readSidecar(f); + expect(result).toEqual({}); + }); + + it('omits referencedFiles when value is a string not array', async () => { + const f = writeTmp(JSON.stringify({ referencedFiles: 'should-be-array' })); + const result = await readSidecar(f); + expect(result.referencedFiles).toBeUndefined(); + }); + + it('filters mixed-type referencedFiles array to strings only', async () => { + const f = writeTmp(JSON.stringify({ referencedFiles: ['src/a.ts', 42, null, 'src/b.ts'] })); + const result = await readSidecar(f); + expect(result.referencedFiles).toEqual(['src/a.ts', 'src/b.ts']); + }); +}); diff --git a/tests/feature-kb/fixtures.ts b/tests/feature-kb/fixtures.ts new file mode 100644 index 0000000..576075c --- /dev/null +++ b/tests/feature-kb/fixtures.ts @@ -0,0 +1,98 @@ +// Shared test fixtures for feature-kb module tests. + +import { mkdtempSync, writeFileSync, mkdirSync, rmSync } from 'fs'; +import * as path from 'path'; +import * as os from 'os'; + +export const SAMPLE_INDEX = { + version: 1, + features: { + 'cli-commands': { + name: 'CLI Command System', + description: 'Use when adding CLI subcommands, modifying plugin registration, or changing the init flow.', + directories: ['src/cli/commands/', 'src/cli/utils/'], + referencedFiles: ['src/cli/cli.ts', 'src/cli/plugins.ts'], + lastUpdated: '2026-04-20T14:30:00Z', + createdBy: 'plan:orch', + }, + }, +}; + +export const SAMPLE_KB_CONTENT = `--- +feature: cli-commands +name: CLI Command System +directories: + - src/cli/commands/ + - src/cli/utils/ +referencedFiles: + - src/cli/cli.ts + - src/cli/plugins.ts +created: 2026-04-20T14:30:00Z +updated: 2026-04-20T14:30:00Z +--- + +# CLI Command System + +## Overview +Commander.js-based CLI with @clack/prompts for interactive UX. + +## Architecture +Each command is a separate file in src/cli/commands/ exporting a Command instance. 
+ +## Key Patterns +- Commander.js option chain +- @clack/prompts for TUI dialogs + +## Anti-Patterns +- Don't use inquirer (project uses @clack/prompts) + +## Gotchas +- Always register new commands in cli.ts + +## Key Files +- src/cli/cli.ts — command registration +- src/cli/plugins.ts — plugin registry +`; + +const createdTmpDirs: string[] = []; + +/** + * Create a temporary worktree directory with optional .features/ index and KB files. + * Returns the absolute path to the tmpdir root. + * Directories are tracked — call `cleanupTmpFeatureWorktrees()` in afterAll. + */ +export function makeTmpFeatureWorktree( + indexContent?: object, + kbs?: Record, +): string { + const tmp = mkdtempSync(path.join(os.tmpdir(), 'feature-kb-test-')); + createdTmpDirs.push(tmp); + + const featuresDir = path.join(tmp, '.features'); + mkdirSync(featuresDir, { recursive: true }); + + if (indexContent) { + writeFileSync(path.join(featuresDir, 'index.json'), JSON.stringify(indexContent, null, 2)); + } + + if (kbs) { + for (const [slug, content] of Object.entries(kbs)) { + const kbDir = path.join(featuresDir, slug); + mkdirSync(kbDir, { recursive: true }); + writeFileSync(path.join(kbDir, 'KNOWLEDGE.md'), content); + } + } + + return tmp; +} + +/** + * Remove all temporary worktree directories created by `makeTmpFeatureWorktree`. + * Call in `afterAll(() => cleanupTmpFeatureWorktrees())`. 
+ */ +export function cleanupTmpFeatureWorktrees(): void { + for (const dir of createdTmpDirs) { + try { rmSync(dir, { recursive: true, force: true }); } catch { /* best-effort */ } + } + createdTmpDirs.length = 0; +} diff --git a/tests/feature-kb/kb-command.test.ts b/tests/feature-kb/kb-command.test.ts new file mode 100644 index 0000000..bffa501 --- /dev/null +++ b/tests/feature-kb/kb-command.test.ts @@ -0,0 +1,82 @@ +import { describe, it, expect, afterAll } from 'vitest'; +import { execSync } from 'child_process'; +import * as path from 'path'; +import { readFileSync, rmSync } from 'fs'; +import { makeTmpFeatureWorktree, cleanupTmpFeatureWorktrees, SAMPLE_INDEX } from './fixtures'; + +afterAll(() => cleanupTmpFeatureWorktrees()); + +const ROOT = path.resolve(import.meta.dirname, '../..'); +const CJS_PATH = path.join(ROOT, 'scripts/hooks/lib/feature-kb.cjs'); + +describe('feature-kb.cjs CLI', () => { + it('list shows entries', () => { + const tmp = makeTmpFeatureWorktree(SAMPLE_INDEX); + const result = execSync(`node ${CJS_PATH} list ${tmp}`, { encoding: 'utf8' }); + const entries = JSON.parse(result); + expect(entries).toHaveLength(1); + expect(entries[0].slug).toBe('cli-commands'); + expect(entries[0].name).toBe('CLI Command System'); + }); + + it('list returns empty array for missing index', () => { + const tmp = makeTmpFeatureWorktree(); + // Remove the index file so index is missing + try { rmSync(path.join(tmp, '.features', 'index.json')); } catch { /* ignore */ } + const result = execSync(`node ${CJS_PATH} list ${tmp}`, { encoding: 'utf8' }); + expect(JSON.parse(result)).toEqual([]); + }); + + it('stale returns staleness for slug', () => { + const tmp = makeTmpFeatureWorktree(SAMPLE_INDEX); + const result = execSync(`node ${CJS_PATH} stale ${tmp} cli-commands`, { encoding: 'utf8' }); + const parsed = JSON.parse(result); + expect(parsed).toHaveProperty('stale'); + expect(parsed).toHaveProperty('changedFiles'); + }); + + it('exits 1 with usage error on no 
args', () => { + expect(() => execSync(`node ${CJS_PATH}`, { encoding: 'utf8', stdio: 'pipe' })).toThrow(); + }); + + it('update-index creates entry', () => { + const tmp = makeTmpFeatureWorktree({ version: 1, features: {} }); + execSync( + `node ${CJS_PATH} update-index ${tmp} --slug=payments --name="Payment Processing" --directories='["src/payments/"]' --referencedFiles='["src/payments/checkout.ts"]'`, + { encoding: 'utf8' } + ); + const index = JSON.parse(readFileSync(path.join(tmp, '.features', 'index.json'), 'utf8')); + expect(index.features.payments).toBeDefined(); + expect(index.features.payments.name).toBe('Payment Processing'); + }); + + it('remove deletes entry and directory', () => { + const tmp = makeTmpFeatureWorktree(SAMPLE_INDEX, { 'cli-commands': '# Test KB' }); + execSync(`node ${CJS_PATH} remove ${tmp} cli-commands`, { encoding: 'utf8' }); + const index = JSON.parse(readFileSync(path.join(tmp, '.features', 'index.json'), 'utf8')); + expect(index.features['cli-commands']).toBeUndefined(); + }); + + // T6: Unknown command and invalid worktree + it('exits 1 for unknown subcommand', () => { + expect(() => execSync(`node ${CJS_PATH} unknown-command /tmp`, { encoding: 'utf8', stdio: 'pipe' })).toThrow(); + }); + + it('exits 1 for invalid worktree path', () => { + expect(() => execSync(`node ${CJS_PATH} list /nonexistent/path/that/does/not/exist`, { encoding: 'utf8', stdio: 'pipe' })).toThrow(); + }); + + it('find-overlapping returns overlapping slugs', () => { + const tmp = makeTmpFeatureWorktree(SAMPLE_INDEX); + const result = execSync(`node ${CJS_PATH} find-overlapping ${tmp} src/cli/cli.ts`, { encoding: 'utf8' }); + const slugs = JSON.parse(result); + expect(slugs).toContain('cli-commands'); + }); + + it('find-overlapping returns empty for non-overlapping files', () => { + const tmp = makeTmpFeatureWorktree(SAMPLE_INDEX); + const result = execSync(`node ${CJS_PATH} find-overlapping ${tmp} src/payments/checkout.ts`, { encoding: 'utf8' }); + 
expect(JSON.parse(result)).toEqual([]); + }); + +}); diff --git a/tests/feature-kb/knowledge-agent.test.ts b/tests/feature-kb/knowledge-agent.test.ts new file mode 100644 index 0000000..1233d1c --- /dev/null +++ b/tests/feature-kb/knowledge-agent.test.ts @@ -0,0 +1,30 @@ +import { describe, it, expect } from 'vitest'; +import { readFileSync } from 'fs'; +import * as path from 'path'; + +const ROOT = path.resolve(import.meta.dirname, '../..'); + +describe('knowledge agent', () => { + const content = readFileSync(path.join(ROOT, 'shared/agents/knowledge.md'), 'utf8'); + + it('has correct model', () => { expect(content).toContain('model: sonnet'); }); + it('has feature-kb skill', () => { expect(content).toContain('devflow:feature-kb'); }); + it('has worktree-support skill', () => { expect(content).toContain('devflow:worktree-support'); }); + it('has required tools', () => { + expect(content).toContain('Read'); + expect(content).toContain('Grep'); + expect(content).toContain('Glob'); + expect(content).toContain('Write'); + }); + it('documents input contract', () => { + expect(content).toContain('FEATURE_SLUG'); + expect(content).toContain('FEATURE_NAME'); + expect(content).toContain('EXPLORATION_OUTPUTS'); + expect(content).toContain('DIRECTORIES'); + expect(content).toContain('KNOWLEDGE_CONTEXT'); + }); + it('constrains writes to .features/', () => { + expect(content).toContain('.features/'); + expect(content).toContain('Boundaries'); + }); +}); diff --git a/tests/kb.test.ts b/tests/kb.test.ts new file mode 100644 index 0000000..5537ca4 --- /dev/null +++ b/tests/kb.test.ts @@ -0,0 +1,140 @@ +import { describe, it, expect } from 'vitest'; +import { addKbHook, removeKbHook, hasKbHook } from '../src/cli/commands/kb.js'; + +const DEVFLOW_DIR = '/home/user/.devflow'; + +describe('addKbHook', () => { + it('adds hook to empty settings', () => { + const result = addKbHook('{}', DEVFLOW_DIR); + const parsed = JSON.parse(result); + 
expect(parsed.hooks?.SessionEnd).toHaveLength(1); + expect(parsed.hooks.SessionEnd[0].hooks[0].command).toContain('session-end-kb-refresh'); + }); + + it('adds hook with correct run-hook path', () => { + const result = addKbHook('{}', '/custom/.devflow'); + const parsed = JSON.parse(result); + expect(parsed.hooks.SessionEnd[0].hooks[0].command).toContain('/custom/.devflow/scripts/hooks/run-hook session-end-kb-refresh'); + }); + + it('is idempotent — does not add duplicate', () => { + const first = addKbHook('{}', DEVFLOW_DIR); + const second = addKbHook(first, DEVFLOW_DIR); + const parsed = JSON.parse(second); + const kbHooks = parsed.hooks.SessionEnd.filter( + (m: { hooks: Array<{ command: string }> }) => + m.hooks.some((h) => h.command.includes('session-end-kb-refresh')) + ); + expect(kbHooks).toHaveLength(1); + }); + + it('adds alongside existing SessionEnd hooks', () => { + const input = JSON.stringify({ + hooks: { + SessionEnd: [ + { hooks: [{ type: 'command', command: '/path/to/other-hook', timeout: 10 }] }, + ], + }, + }); + const result = addKbHook(input, DEVFLOW_DIR); + const parsed = JSON.parse(result); + expect(parsed.hooks.SessionEnd).toHaveLength(2); + }); + + it('preserves other settings', () => { + const input = JSON.stringify({ theme: 'dark', model: 'claude-sonnet' }); + const result = addKbHook(input, DEVFLOW_DIR); + const parsed = JSON.parse(result); + expect(parsed.theme).toBe('dark'); + expect(parsed.model).toBe('claude-sonnet'); + }); + + it('hook entry has correct timeout', () => { + const result = addKbHook('{}', DEVFLOW_DIR); + const parsed = JSON.parse(result); + expect(parsed.hooks.SessionEnd[0].hooks[0].timeout).toBe(10); + expect(parsed.hooks.SessionEnd[0].hooks[0].type).toBe('command'); + }); +}); + +describe('removeKbHook', () => { + it('removes KB hook from SessionEnd', () => { + const withHook = addKbHook('{}', DEVFLOW_DIR); + const result = removeKbHook(withHook); + const parsed = JSON.parse(result); + 
expect(parsed.hooks).toBeUndefined(); + }); + + it('preserves other SessionEnd hooks', () => { + const input = JSON.stringify({ + hooks: { + SessionEnd: [ + { hooks: [{ type: 'command', command: '/path/to/other-hook', timeout: 10 }] }, + { hooks: [{ type: 'command', command: '/devflow/scripts/hooks/run-hook session-end-kb-refresh', timeout: 10 }] }, + ], + }, + }); + const result = removeKbHook(input); + const parsed = JSON.parse(result); + expect(parsed.hooks.SessionEnd).toHaveLength(1); + expect(parsed.hooks.SessionEnd[0].hooks[0].command).toContain('other-hook'); + }); + + it('cleans empty hooks object when last hook removed', () => { + const withHook = addKbHook('{}', DEVFLOW_DIR); + const result = removeKbHook(withHook); + const parsed = JSON.parse(result); + expect(parsed.hooks).toBeUndefined(); + }); + + it('is idempotent — removing absent hook returns same JSON', () => { + const input = JSON.stringify({ theme: 'dark' }); + const result = removeKbHook(input); + expect(JSON.parse(result)).toEqual(JSON.parse(input)); + }); + + it('preserves other hook event types', () => { + const input = JSON.stringify({ + hooks: { + UserPromptSubmit: [{ hooks: [{ type: 'command', command: '/path/preamble', timeout: 5 }] }], + SessionEnd: [{ hooks: [{ type: 'command', command: '/devflow/scripts/hooks/run-hook session-end-kb-refresh', timeout: 10 }] }], + }, + }); + const result = removeKbHook(input); + const parsed = JSON.parse(result); + expect(parsed.hooks.UserPromptSubmit).toHaveLength(1); + expect(parsed.hooks.SessionEnd).toBeUndefined(); + }); +}); + +describe('hasKbHook', () => { + it('returns true when hook present on SessionEnd', () => { + const withHook = addKbHook('{}', DEVFLOW_DIR); + expect(hasKbHook(withHook)).toBe(true); + }); + + it('returns false when hook absent', () => { + expect(hasKbHook('{}')).toBe(false); + }); + + it('returns false when only other SessionEnd hooks exist', () => { + const input = JSON.stringify({ + hooks: { + SessionEnd: [ + { hooks: [{ 
type: 'command', command: '/path/to/session-end-learning', timeout: 10 }] },
+        ],
+      },
+    });
+    expect(hasKbHook(input)).toBe(false);
+  });
+
+  it('accepts parsed Settings object', () => {
+    const withHook = addKbHook('{}', DEVFLOW_DIR);
+    const parsed = JSON.parse(withHook);
+    expect(hasKbHook(parsed)).toBe(true);
+  });
+
+  it('returns false for empty hooks object', () => {
+    expect(hasKbHook(JSON.stringify({ hooks: {} }))).toBe(false);
+  });
+});
diff --git a/tests/knowledge/command-adoption.test.ts b/tests/knowledge/command-adoption.test.ts
index 48296a4..36d75a4 100644
--- a/tests/knowledge/command-adoption.test.ts
+++ b/tests/knowledge/command-adoption.test.ts
@@ -58,10 +58,10 @@ describe('debug:orch — knowledge is orchestrator-local, not fanned to Explore
   it('debug:orch Explore spawn blocks do NOT pass KNOWLEDGE_CONTEXT to sub-agents', () => {
     const content = loadFile('shared/skills/debug:orch/SKILL.md')
-    // Find the Phase 2 Investigate section (Explore spawns)
-    const phase2Section = extractSection(content, 'Phase 2: Investigate', '## Phase 3')
+    // Find the Phase 3 Investigate section (Explore spawns)
+    const phase3Section = extractSection(content, 'Phase 3: Investigate', '## Phase 4')
     // KNOWLEDGE_CONTEXT should NOT appear in Explore spawn block parameters
-    expect(phase2Section).not.toContain('KNOWLEDGE_CONTEXT')
+    expect(phase3Section).not.toContain('KNOWLEDGE_CONTEXT')
   })
 })
@@ -234,8 +234,8 @@ describe('plan:orch — knowledge loading phase', () => {
   it('Explore spawn blocks receive KNOWLEDGE_CONTEXT', () => {
     const content = loadFile('shared/skills/plan:orch/SKILL.md')
     // The Explore phase section should mention KNOWLEDGE_CONTEXT
-    const phase2 = extractSection(content, 'Phase 2: Explore', '## Phase 3')
-    expect(phase2).toContain('KNOWLEDGE_CONTEXT')
+    const phase5 = extractSection(content, 'Phase 5: Explore', '## Phase 6')
+    expect(phase5).toContain('KNOWLEDGE_CONTEXT')
   })
 })
@@ -249,9 +249,9 @@ describe('review:orch — knowledge loading phase', () => {
     expect(content).toMatch(/[Ll]oad.*[Kk]nowledge|[Kk]nowledge.*[Ll]oad/i)
   })
-  it('Phase 4 Reviews section receives KNOWLEDGE_CONTEXT', () => {
+  it('Phase 5 Reviews section receives KNOWLEDGE_CONTEXT', () => {
     const content = loadFile('shared/skills/review:orch/SKILL.md')
-    const phase4 = extractSection(content, 'Phase 4: Reviews', '## Phase 5')
-    expect(phase4).toContain('KNOWLEDGE_CONTEXT')
+    const phase5 = extractSection(content, 'Phase 5: Reviews', '## Phase 6')
+    expect(phase5).toContain('KNOWLEDGE_CONTEXT')
   })
 })
diff --git a/tests/manifest.test.ts b/tests/manifest.test.ts
index 8c315bb..7e6ffda 100644
--- a/tests/manifest.test.ts
+++ b/tests/manifest.test.ts
@@ -77,7 +77,7 @@ describe('readManifest', () => {
     version: '1.4.0',
     plugins: ['devflow-core-skills', 'devflow-implement'],
     scope: 'user',
-    features: { teams: false, ambient: true, memory: true, learn: false, hud: false, flags: [] },
+    features: { teams: false, ambient: true, memory: true, learn: false, hud: false, kb: false, flags: [] },
     installedAt: '2026-03-01T00:00:00.000Z',
     updatedAt: '2026-03-13T00:00:00.000Z',
   };
@@ -100,8 +100,24 @@ describe('readManifest', () => {
     expect(result).not.toBeNull();
     expect(result!.features.hud).toBe(false);
     expect(result!.features.learn).toBe(false);
+    expect(result!.features.kb).toBe(false);
     expect(result!.features.flags).toEqual([]);
   });
+
+  it('normalizes old manifest without kb to default false', async () => {
+    const oldData = {
+      version: '1.4.0',
+      plugins: ['devflow-core-skills'],
+      scope: 'user',
+      features: { teams: false, ambient: true, memory: true, learn: true, hud: true, flags: [] },
+      installedAt: '2026-03-01T00:00:00.000Z',
+      updatedAt: '2026-03-13T00:00:00.000Z',
+    };
+    await fs.writeFile(path.join(tmpDir, 'manifest.json'), JSON.stringify(oldData), 'utf-8');
+    const result = await readManifest(tmpDir);
+    expect(result).not.toBeNull();
+    expect(result!.features.kb).toBe(false);
+  });
 });

 describe('writeManifest', () => {
@@ -120,7 +136,7 @@ describe('writeManifest', () => {
     version: '1.4.0',
     plugins: ['devflow-core-skills'],
     scope: 'user',
-    features: { teams: false, ambient: true, memory: true, learn: false, hud: false, flags: [] },
+    features: { teams: false, ambient: true, memory: true, learn: false, hud: false, kb: false, flags: [] },
     installedAt: '2026-03-13T00:00:00.000Z',
     updatedAt: '2026-03-13T00:00:00.000Z',
   };
@@ -134,7 +150,7 @@ describe('writeManifest', () => {
     version: '1.0.0',
     plugins: ['devflow-core-skills'],
     scope: 'user',
-    features: { teams: false, ambient: false, memory: false, learn: false, hud: false, flags: [] },
+    features: { teams: false, ambient: false, memory: false, learn: false, hud: false, kb: false, flags: [] },
     installedAt: '2026-01-01T00:00:00.000Z',
     updatedAt: '2026-01-01T00:00:00.000Z',
   };
@@ -153,7 +169,7 @@ describe('writeManifest', () => {
     version: '1.4.0',
     plugins: [],
     scope: 'local',
-    features: { teams: false, ambient: false, memory: false, learn: false, hud: false, flags: [] },
+    features: { teams: false, ambient: false, memory: false, learn: false, hud: false, kb: false, flags: [] },
     installedAt: '2026-03-13T00:00:00.000Z',
     updatedAt: '2026-03-13T00:00:00.000Z',
   };
@@ -297,7 +313,7 @@ describe('resolvePluginList', () => {
     version: '1.0.0',
     plugins: ['devflow-core-skills', 'devflow-implement'],
     scope: 'user',
-    features: { teams: false, ambient: true, memory: true, learn: false, hud: false, flags: [] },
+    features: { teams: false, ambient: true, memory: true, learn: false, hud: false, kb: false, flags: [] },
     installedAt: '2026-01-01T00:00:00.000Z',
     updatedAt: '2026-01-01T00:00:00.000Z',
   };
diff --git a/tests/resolve/knowledge-citation.test.ts b/tests/resolve/knowledge-citation.test.ts
index 553f40d..203d480 100644
--- a/tests/resolve/knowledge-citation.test.ts
+++ b/tests/resolve/knowledge-citation.test.ts
@@ -160,18 +160,18 @@ describe('resolve-teams.md — teams variant parity', () => {
 describe('resolve:orch SKILL.md — ambient mode parity', () => {
   const content = loadFile('shared/skills/resolve:orch/SKILL.md');

-  it('contains Phase 1.5: Load Project Knowledge between Phase 1 and Phase 2', () => {
-    expect(content).toMatch(/Phase 1\.5.*Load Project Knowledge/i);
+  it('contains Phase 2: Load Project Knowledge between Phase 1 and Phase 3', () => {
+    expect(content).toMatch(/Phase 2.*Load Project Knowledge/i);
   });

-  it('Phase 4 spawn block includes KNOWLEDGE_CONTEXT', () => {
-    const phase4Section = extractSection(content, '## Phase 4', '## Phase 5');
-    expect(phase4Section).toContain('KNOWLEDGE_CONTEXT');
+  it('Phase 5 spawn block includes KNOWLEDGE_CONTEXT', () => {
+    const phase5Section = extractSection(content, '## Phase 5', '## Phase 6');
+    expect(phase5Section).toContain('KNOWLEDGE_CONTEXT');
   });

-  it('Phase 6 (Report) mentions Knowledge Citations (D-B)', () => {
-    const phase6Section = extractSection(content, '## Phase 6', '## Error Handling');
-    expect(phase6Section).toContain('Knowledge Citations');
+  it('Phase 7 (Report) mentions Knowledge Citations (D-B)', () => {
+    const phase7Section = extractSection(content, '## Phase 7', '## Error Handling');
+    expect(phase7Section).toContain('Knowledge Citations');
   });
 });
diff --git a/tests/shell-hooks.test.ts b/tests/shell-hooks.test.ts
index 0a351ba..5040825 100644
--- a/tests/shell-hooks.test.ts
+++ b/tests/shell-hooks.test.ts
@@ -20,6 +20,8 @@ const HOOK_SCRIPTS = [
   'preamble',
   'json-parse',
   'get-mtime',
+  'session-end-kb-refresh',
+  'background-kb-refresh',
 ];

 describe('shell hook syntax checks', () => {
@@ -1506,3 +1508,81 @@ describe('get-mtime behavioral', () => {
     }
   });
 });
+
+describe('session-end-kb-refresh guard clauses', () => {
+  const KB_HOOK = path.join(HOOKS_DIR, 'session-end-kb-refresh');
+
+  let tmpDir: string;
+
+  beforeEach(() => {
+    tmpDir = fs.mkdtempSync(path.join(os.tmpdir(), 'devflow-kb-hook-test-'));
+  });
+
+  afterEach(() => {
+    fs.rmSync(tmpDir, { recursive: true, force: true });
+  });
+
+  it('exits cleanly when DEVFLOW_BG_KB_REFRESH=1', () => {
+    expect(() => {
+      execSync(`DEVFLOW_BG_KB_REFRESH=1 bash "${KB_HOOK}"`, { stdio: 'ignore' });
+    }).not.toThrow();
+  });
+
+  it('exits cleanly when DEVFLOW_BG_UPDATER=1', () => {
+    expect(() => {
+      execSync(`DEVFLOW_BG_UPDATER=1 bash "${KB_HOOK}"`, { stdio: 'ignore' });
+    }).not.toThrow();
+  });
+
+  it('exits cleanly when no .features/index.json exists', () => {
+    const input = JSON.stringify({ cwd: tmpDir, session_id: 'test-kb-001' });
+    expect(() => {
+      execSync(`bash "${KB_HOOK}"`, { input, stdio: ['pipe', 'pipe', 'pipe'] });
+    }).not.toThrow();
+  });
+
+  it('exits cleanly when .features/.disabled sentinel exists', () => {
+    const featuresDir = path.join(tmpDir, '.features');
+    fs.mkdirSync(featuresDir, { recursive: true });
+    fs.writeFileSync(path.join(featuresDir, 'index.json'), JSON.stringify({ version: 1, features: {} }));
+    fs.writeFileSync(path.join(featuresDir, '.disabled'), '');
+
+    const input = JSON.stringify({ cwd: tmpDir, session_id: 'test-kb-002' });
+    expect(() => {
+      execSync(`bash "${KB_HOOK}"`, { input, stdio: ['pipe', 'pipe', 'pipe'] });
+    }).not.toThrow();
+  });
+
+  it('exits cleanly when .kb-last-refresh is recent (throttled)', () => {
+    const featuresDir = path.join(tmpDir, '.features');
+    fs.mkdirSync(featuresDir, { recursive: true });
+    fs.writeFileSync(path.join(featuresDir, 'index.json'), JSON.stringify({ version: 1, features: {} }));
+    fs.writeFileSync(path.join(featuresDir, '.kb-last-refresh'), String(Math.floor(Date.now() / 1000)));
+
+    const input = JSON.stringify({ cwd: tmpDir, session_id: 'test-kb-003' });
+    expect(() => {
+      execSync(`bash "${KB_HOOK}"`, { input, stdio: ['pipe', 'pipe', 'pipe'] });
+    }).not.toThrow();
+  });
+
+  it('exits cleanly when no stale KBs are found', () => {
+    // Non-git tmpDir → checkAllStaleness returns stale:false
+    const featuresDir = path.join(tmpDir, '.features');
+    fs.mkdirSync(featuresDir, { recursive: true });
+    fs.writeFileSync(path.join(featuresDir, 'index.json'), JSON.stringify({
+      version: 1,
+      features: {
+        'test-feature': {
+          name: 'Test', description: '', directories: ['src/'],
+          referencedFiles: ['src/index.ts'],
+          lastUpdated: new Date().toISOString(), createdBy: 'test',
+        },
+      },
+    }));
+
+    const input = JSON.stringify({ cwd: tmpDir, session_id: 'test-kb-004' });
+    expect(() => {
+      execSync(`bash "${KB_HOOK}"`, { input, stdio: ['pipe', 'pipe', 'pipe'] });
+    }).not.toThrow();
+  });
+});
diff --git a/tests/skill-references.test.ts b/tests/skill-references.test.ts
index b5428eb..640648d 100644
--- a/tests/skill-references.test.ts
+++ b/tests/skill-references.test.ts
@@ -828,6 +828,7 @@ describe('Completeness: reviewer.md Focus Areas vs code-review plugin', () => {
   'knowledge-persistence',
   'review-methodology',
   'worktree-support',
+  'apply-feature-kb', // consumption meta-skill, not a review focus
 ]);

 const reviewerContent = readFileSync(