From 18f968a2109c12dca6891028d8882597eb7d7130 Mon Sep 17 00:00:00 2001 From: Brian Madison Date: Sun, 29 Mar 2026 12:10:10 -0500 Subject: [PATCH 1/6] fix: restructure ideation into 7 phases with capability review and I/O specs Addresses issues found in test run: - Module identity (name, code, description) locked down in phase 1 before skill names are written, preventing expensive find-and-replace later - Writing discipline: raw ideas in phases 1-2, structured sections from phase 3+ - Orchestrator pattern presented as valid multi-agent option - Output check: verify every agent produces tangible output - Single-sidecar with daily/curated memory as recommended pattern - Cross-agent interaction patterns explicitly prompted - Config section must be filled even if empty ("no config needed") - Skill briefs now self-contained with persona, inputs/outputs, memory contract - New mandatory phase 6: capability review with user before finalizing - HTML report output suggested where appropriate - Plan template updated with matching structure --- .../assets/module-plan-template.md | 67 ++++++--- .../references/ideate-module.md | 131 ++++++++++++++---- 2 files changed, 154 insertions(+), 44 deletions(-) diff --git a/skills/bmad-module-builder/assets/module-plan-template.md b/skills/bmad-module-builder/assets/module-plan-template.md index 330eda9..8dcc5ce 100644 --- a/skills/bmad-module-builder/assets/module-plan-template.md +++ b/skills/bmad-module-builder/assets/module-plan-template.md @@ -3,6 +3,7 @@ title: 'Module Plan' status: 'ideation' module_name: '' module_code: '' +module_description: '' architecture: '' standalone: true expands_module: '' @@ -18,45 +19,76 @@ updated: '' -## Architecture Decision +## Architecture - + + + -## User Experience +### Memory Architecture - + + + + +### Memory Contract + + + + + + + +### Cross-Agent Patterns + + + + ## Skills - + + ### {skill-name} **Type:** {agent | workflow} -**Purpose:** + +**Persona:** + +**Core Outcome:** + +**The 
Non-Negotiable:** **Capabilities:** -| Display Name | Menu Code | Description | Action | Args | Phase | After | Before | Required | Output Location | Outputs | -| ------------ | --------- | ----------- | ------ | ---- | ----- | ----- | ------ | -------- | --------------- | ------- | -| | | | | | | | | | | | +| Capability | Outcome | Inputs | Outputs | +| ---------- | ------- | ------ | ------- | +| | | | | -**Design Notes:** + -## Memory Architecture +**Memory:** - - - +**Init Responsibility:** + +**Activation Modes:** + +**Tool Dependencies:** + +**Design Notes:** + +--- ## Configuration + + + | Variable | Prompt | Default | Result Template | User Setting | | -------- | ------ | ------- | --------------- | ------------ | | | | | | | - - ## External Dependencies @@ -84,10 +116,11 @@ updated: '' ## Ideas Captured + ## Build Roadmap - + **Next steps:** diff --git a/skills/bmad-module-builder/references/ideate-module.md b/skills/bmad-module-builder/references/ideate-module.md index d96e386..0bffeb4 100644 --- a/skills/bmad-module-builder/references/ideate-module.md +++ b/skills/bmad-module-builder/references/ideate-module.md @@ -30,11 +30,15 @@ Weave these into conversation naturally. Never name them or make the user feel l ## Process -### 1. Open the Session +This is a phased process. Each phase has a clear purpose and should not be skipped, even if the user is eager to move ahead. The phases prevent critical details from being missed and avoid expensive rewrites later. + +**Writing discipline:** During phases 1-2, write only to the **Ideas Captured** section — raw, generous, unstructured. Do not write structured Architecture or Skills sections yet. Starting at phase 3, begin writing structured sections. This avoids rewriting the entire document when the architecture shifts. + +### Phase 1: Vision and Module Identity Initialize the plan document immediately using `./assets/module-plan-template.md`. Write it to `{bmad_builder_reports}` with a descriptive filename. 
Set `created` and `updated` timestamps. This document is your cache — update it progressively as the conversation unfolds so work survives context compaction. -Start by understanding the spark. Let the user talk freely — this is where the richest context comes from: +**First: capture the spark.** Let the user talk freely — this is where the richest context comes from: - What's the idea? What problem space or domain? - Who would use this and what would they get from it? @@ -42,7 +46,17 @@ Start by understanding the spark. Let the user talk freely — this is where the Don't rush to structure. Just listen, ask follow-ups, and capture. -### 2. Explore Creatively +**Then: lock down module identity.** Before any skill names are written, nail these down — they affect every name and path in the document: + +- **Module name** — Human-friendly display name (e.g., "Content Creators' Creativity Suite") +- **Module code** — 2-4 letter abbreviation (e.g., "cs3"). All skill names and sidecar paths derive from this. Changing it later means a find-and-replace across the entire plan. +- **Description** — One-line summary of what the module does + +Write these to the plan document frontmatter immediately. All subsequent skill names use `bmad-{modulecode}-{skillname}`. + +- **Standalone or expansion?** If expansion: which module does it extend? How do the new capabilities relate? Even expansion modules should provide value independently — the parent module being absent shouldn't break this one. + +### Phase 2: Creative Exploration This is the heart of the session — spend real time here. Use the brainstorming toolkit to help the user explore: @@ -57,9 +71,9 @@ Update the **Ideas Captured** section of the plan document as ideas emerge. Capt Energy check: if the conversation plateaus, try a perspective shift or reverse brainstorming to open a new vein. -### 3. 
Shape the Architecture +### Phase 3: Architecture -When exploration feels genuinely complete (not just "we have enough"), shift to architecture. +When exploration feels genuinely complete (not just "we have enough"), shift to architecture. This is where structured writing begins. **Guide toward agent-with-capabilities when appropriate.** Many users default to thinking they need multiple specialized agents. But a well-designed single agent with rich internal capabilities and routing: @@ -81,45 +95,108 @@ However, **multiple agents make sense when:** - The workflow requires sequential phases with fundamentally different processes - No persistent persona or memory is needed between invocations +**The orchestrator pattern** is another option to present: a master agent that the user primarily talks to, which coordinates the domain agents. Think of it like a ship's commander — communications generally flow through them, but the user can still talk directly to a specialist when they want to go deep. This adds complexity but can provide a more cohesive experience for users who want a single conversational partner. Let the user decide if this fits their vision. + +**Output check for multi-agent:** When defining agents, verify that each one produces tangible output. If an agent's primary role is planning or coordinating (not producing), that's usually a sign those capabilities should be distributed into the domain agents as native capabilities, with shared memory handling cross-domain coordination. The exception is an explicit orchestrator agent the user wants as a conversational hub. + Even with multiple agents, each should be self-contained with its own capabilities. Duplicating some common functionality across agents is fine — it keeps each agent coherent and independently useful. This is the user's decision, but guide them toward self-sufficiency per agent. Present the trade-offs. Let the user decide. Document the reasoning either way — future-them will want to know why. 
**Memory architecture for multi-agent modules.** If the module has multiple agents, explore how memory should work. Every agent has its own sidecar (personal memory at `{project-root}/_bmad/memory/{skillName}-sidecar/`), but modules may also benefit from shared memory: -| Pattern | When It Fits | Example | -| ------------------------------------ | ------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| **Personal sidecars only** | Agents have distinct domains with little overlap | A module with a code reviewer and a test writer — each tracks different things | -| **Personal + shared module sidecar** | Agents have their own context but also learn shared things about the user | A social creative module — podcast, video, and blog experts each remember their domain specifics but share knowledge about the user's style, catchphrases, and content preferences | -| **Shared sidecar only** | All agents serve the same domain and context | Probably a sign this should be a single agent | +| Pattern | When It Fits | Example | +| ------------------------------------------------------------------ | ----------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------- | +| **Personal sidecars only** | Agents have distinct domains with little overlap | A module with a code reviewer and a test writer — each tracks different things | +| **Personal + shared module sidecar** | Agents have their own context but also learn shared things about the user | Agents each remember domain specifics but share knowledge about the user's style and preferences | +| **Single module sidecar (recommended for tightly coupled agents)** | All agents benefit 
from full visibility into everything the suite has learned | A creative suite where every agent needs the user's voice, brand, and content history. Daily capture + periodic curation keeps it organized | + +The **single sidecar with daily/curated memory** model works well for tightly coupled multi-agent modules: -With shared memory, each agent writes to both its personal sidecar and a module-level sidecar (e.g., `{project-root}/_bmad/memory/{moduleCode}-shared/`) when it learns something relevant to the whole module. Shared content might include: user style preferences, project assets, recurring themes, content history, or any cross-cutting context. +- **Daily files** (`daily/YYYY-MM-DD.md`) — every session, the active agent appends timestamped entries tagged by agent name. Raw, chronological, append-only. +- **Curated files** (organized by topic) — distilled knowledge that agents load on activation. Updated through inline curation (obvious updates go straight to the file) and periodic deep curation. +- **Index** (`index.md`) — orientation document every agent reads first. Summarizes what curated files exist, when each was last updated, and recent activity. Agents selectively load only what's relevant. If the memory architecture points entirely toward shared memory with no personal differentiation, gently surface whether a single agent with multiple capabilities might be the better design. -### 4. Define Module Context +**Cross-agent interaction patterns.** If the module has multiple agents, explicitly define how they hand off work: -- **Standalone or expansion?** If expansion: which module does it extend? How do the new capabilities relate? Even expansion modules should provide value independently — the parent module being absent shouldn't break this one. -- **Custom configuration?** Does the module need to ask users questions during setup? What variables would skills use? 
Important guidance to capture: skills should always have sensible fallbacks if config hasn't been set, or ask at runtime for specific values they need. -- **External dependencies?** Do any planned skills rely on externally installed CLI tools or MCP servers? If so, the setup skill may need to check for these, guide the user through installation, or configure connection details. Capture what's needed and why. -- **UI or visualization?** Could the module benefit from a user interface? This could be a shared progress dashboard, per-skill visualizations, an interactive view showing how skills relate and flow together, or even a cohesive module-level dashboard. Some modules might warrant a bespoke web app. Not every module needs this, but it's worth exploring — users often don't think of it until prompted. -- **Setup skill extensions?** Beyond config collection, does the setup process need to do anything special? Install a web app, scaffold project directories, configure external services, generate starter files? The setup skill is extensible — it can do more than just write config. +- Is the user the router (brings output from one agent to another)? +- Are there service-layer relationships (e.g., a visual agent other agents can describe needs for)? +- Does an orchestrator agent coordinate? +- How does shared memory enable cross-domain awareness (e.g., blog agent sees a podcast was recorded)? -### 5. Define Each Skill +Document these patterns — they're critical for builders to understand. -For each planned skill (whether agent or workflow), work through: +### Phase 4: Module Context and Configuration -- **Name** — following `bmad-{modulecode}-{skillname}` convention -- **Purpose** — the core outcome in one sentence -- **Capabilities** — each distinct action or mode. 
These become rows in the help CSV: display name, menu code, description, action name, args, phase, ordering (before/after), required flag, output location, outputs -- **Relationships** — how skills relate to each other. Does one need to run before another? Are there cross-skill dependencies? -- **Design notes** — non-obvious considerations the skill builders should know +**Custom configuration.** Does the module need to ask users questions during setup? For each potential config variable, capture: key name, prompt, default, result template, and whether it's a user setting. + +**Even if there are no config variables, explicitly state this in the plan** — "This module requires no custom configuration beyond core BMad settings." Don't leave the section blank or the builder won't know if it was considered. + +Skills should always have sensible fallbacks if config hasn't been set, or ask at runtime for specific values they need. + +**External dependencies.** Do any planned skills rely on externally installed CLI tools or MCP servers? If so, the setup skill may need to check for these, guide the user through installation, or configure connection details. Capture what's needed and why. -Update the **Skills** section of the plan document with structured entries for each. +**UI or visualization.** Could the module benefit from a user interface? This could be a shared progress dashboard, per-skill visualizations, an interactive view showing how skills relate and flow together, or even a cohesive module-level dashboard. Some modules might warrant a bespoke web app. Not every module needs this, but it's worth exploring — users often don't think of it until prompted. -### 6. Finalize the Plan +**Setup skill extensions.** Beyond config collection, does the setup process need to do anything special? Install a web app, scaffold project directories, configure external services, generate starter files? The setup skill is extensible — it can do more than just write config. 
-Complete all sections of the plan document. Review with the user — walk through the plan and confirm it captures their vision. Update `status` to "complete" in the frontmatter. +### Phase 5: Define Skills and Capabilities + +For each planned skill (whether agent or workflow), build a **self-contained brief** that could be handed directly to the Agent Builder or Workflow Builder without any conversation context. Each brief should include: + +**For agents:** + +- **Name** — following `bmad-{modulecode}-{skillname}` convention +- **Persona** — who is this agent? Communication style, expertise, personality +- **Core outcome** — what does success look like? +- **The non-negotiable** — the one thing this agent must get right +- **Capabilities** — each distinct action or mode, described as outcomes (not procedures). For each capability, define at minimum: + - What it does (outcome-driven description) + - **Inputs** — what does the user provide? (topic, transcript, existing content, etc.) + - **Outputs** — what does the agent produce? (draft, plan, report, code, etc.) Call out when an output would be a good candidate for an **HTML report** (validation runs, analysis results, quality checks, comparison reports) +- **Memory** — what files does it read on activation? What does it write to? What's in the daily log? +- **Init responsibility** — what happens on first run? +- **Activation modes** — interactive, headless, or both? +- **Tool dependencies** — external tools with technical specifics (what the agent outputs, how it's invoked) +- **Design notes** — non-obvious considerations, the "why" behind decisions +- **Relationships** — ordering (before/after), cross-agent handoff patterns + +**For workflows:** + +- **Name**, **Purpose**, **Capabilities** with inputs/outputs, **Design notes**, **Relationships** + +### Phase 6: Capability Review + +**Do not skip this phase.** Present the complete capability list for each skill back to the user for review. 
For each skill: + +- Walk through the capabilities — are they complete? Missing anything? +- Are any capabilities too granular and should be consolidated? +- Are any too broad and should be split? +- Do the inputs and outputs make sense? +- Are there capabilities that would benefit from producing structured output (HTML reports, dashboards, exportable artifacts)? +- For multi-skill modules: are there capability overlaps between skills that should be resolved? + +Offer to go deeper on any specific capability the user wants to explore further. Some capabilities may need more detailed planning — sub-steps, edge cases, format specifications. The user decides the depth. + +Iterate until the user confirms the capability list is right. Update the plan document with any changes. + +### Phase 7: Finalize the Plan + +Complete all sections of the plan document. Do a final pass to ensure: + +- **Module identity** (name, code, description) is in the frontmatter +- **Architecture** section documents the decision and rationale +- **Memory architecture** is explicit (which pattern, what files, what's shared) +- **Cross-agent patterns** are documented (if multi-agent) +- **Configuration** section is filled in — even if empty, state it explicitly +- **Every skill brief** is self-contained enough for a builder agent with zero context +- **Inputs and outputs** are defined for each capability +- **Build roadmap** has a recommended order with rationale +- **Ideas Captured** preserves raw brainstorming ideas that didn't make it into the structured plan + +Update `status` to "complete" in the frontmatter. **Close with next steps:** From 2472afd7da135c1850906086d8da773330355c06 Mon Sep 17 00:00:00 2001 From: Brian Madison Date: Sun, 29 Mar 2026 13:21:50 -0500 Subject: [PATCH 2/6] fix: address 5 quality analysis opportunities for bmad-module-builder MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit 1. 
Parallel/subagent file-reading (CM, VM, IM) - CM and VM: parallel batch reads for ≤4 skills, subagent delegation for 5+ returning compact JSON instead of bloating parent context - CM: selective plan doc reading (structured sections only) - IM: cp command for template init instead of reading into context 2. Completion markers and session-resume protocol - VM: explicit "Validation complete" signal with follow-up guidance - IM: "Session complete" marker at Phase 7 end - IM: Session Resume section for re-entry from existing plan documents - IM: mandatory soft gate at Phase 2→3 transition 3. Headless output contracts and automation interfaces - CM: specified temp file paths at {bmad_builder_reports}, defined required vs inferrable inputs, structured JSON output contract with inferred object for surfacing wrong inferences - VM: added --headless mode with structured JSON for CI gating - SKILL.md: updated args to reflect VM headless support 4. Active handoff and workflow continuity - IM Phase 7: offers to invoke builder for first skill in roadmap - CM: plan doc intake reframed as recommended path with auto-extraction - VM: optional durable findings file-write after presenting results - CM, VM: config headers now include output format 5. 
IM writing discipline reinforcement - Phase 1: writes "Not ready — complete in Phase 3+" placeholder in all structured sections on template init - Phase 2: explicitly restricts writes to Ideas Captured only --- skills/bmad-module-builder/SKILL.md | 4 +- .../references/create-module.md | 42 +++++++++++++++---- .../references/ideate-module.md | 23 +++++++--- .../references/validate-module.md | 24 ++++++++++- 4 files changed, 76 insertions(+), 17 deletions(-) diff --git a/skills/bmad-module-builder/SKILL.md b/skills/bmad-module-builder/SKILL.md index a7711ac..4ab324f 100644 --- a/skills/bmad-module-builder/SKILL.md +++ b/skills/bmad-module-builder/SKILL.md @@ -11,9 +11,9 @@ This skill helps you bring BMad modules to life — from the first spark of an i - **Ideate Module (IM)** — A creative brainstorming session that helps you imagine what your module could be, decide on the right architecture (agent vs. workflow vs. both), and produce a detailed plan document. The plan then guides you through building each piece with the Agent Builder and Workflow Builder. - **Create Module (CM)** — Takes an existing folder of built skills and scaffolds the setup infrastructure (module.yaml, module-help.csv, setup skill) that makes it a proper installable BMad module. Supports `--headless` / `-H`. -- **Validate Module (VM)** — Checks that a module's setup skill is complete and correct — every skill has its capabilities registered, entries are accurate and well-crafted, and structural integrity is sound. +- **Validate Module (VM)** — Checks that a module's setup skill is complete and correct — every skill has its capabilities registered, entries are accurate and well-crafted, and structural integrity is sound. Supports `--headless` / `-H`. -**Args:** Accepts `--headless` / `-H` for CM path only, an initial description for IM, or a path to a skills folder for CM/VM. 
+**Args:** Accepts `--headless` / `-H` for CM and VM paths, an initial description for IM, or a path to a skills folder for CM/VM. ## On Activation diff --git a/skills/bmad-module-builder/references/create-module.md b/skills/bmad-module-builder/references/create-module.md index 7cde83c..7146c12 100644 --- a/skills/bmad-module-builder/references/create-module.md +++ b/skills/bmad-module-builder/references/create-module.md @@ -1,6 +1,6 @@ # Create Module -**Language:** Use `{communication_language}` for all output. +**Language:** Use `{communication_language}` for all output. **Output format:** `{document_output_language}` for generated files unless overridden by context. ## Your Role @@ -10,9 +10,11 @@ You are a module packaging specialist. The user has built their skills — your ### 1. Discover the Skills -Ask the user for the folder path containing their built skills. Also ask: do they have a plan document from an Ideate Module (IM) session? If so, read it — it provides valuable context for ordering, relationships, and design intent. +Ask the user for the folder path containing their built skills. Also ask: do they have a plan document from an Ideate Module (IM) session? If they do, this is the recommended path — a plan document lets you auto-extract module identity, capability ordering, config variables, and design rationale, dramatically improving the quality of the scaffolded module. Read it first, focusing on the structured sections (frontmatter, Skills, Configuration, Build Roadmap) — skip Ideas Captured and other freeform sections that don't inform scaffolding. -**Read every SKILL.md in the folder thoroughly.** Understand each skill's: +**Read every SKILL.md in the folder.** For 4 or fewer skills, read all SKILL.md files in a single parallel batch (one message, multiple Read calls). For 5+ skills, spawn parallel subagents — one per skill — each returning compact JSON: `{ name, description, capabilities: [{ name, args, outputs }], dependencies }`. 
This keeps the parent context lean while still giving you a full view of the ecosystem. + +For each skill, understand: - Name, purpose, and capabilities - Arguments and interaction model @@ -91,15 +93,15 @@ Iterate until the user confirms everything is correct. ### 7. Scaffold -Write the confirmed module.yaml and module-help.csv content to temporary files. Run the scaffold script: +Write the confirmed module.yaml and module-help.csv content to temporary files at `{bmad_builder_reports}/{module-code}-temp-module.yaml` and `{bmad_builder_reports}/{module-code}-temp-help.csv`. Run the scaffold script: ```bash python3 ./scripts/scaffold-setup-skill.py \ --target-dir "{skills-folder}" \ --module-code "{code}" \ --module-name "{name}" \ - --module-yaml "{temp-yaml-path}" \ - --module-csv "{temp-csv-path}" + --module-yaml "{bmad_builder_reports}/{module-code}-temp-module.yaml" \ + --module-csv "{bmad_builder_reports}/{module-code}-temp-help.csv" ``` This creates `bmad-{code}-setup/` in the user's skills folder containing: @@ -124,4 +126,30 @@ When `--headless` is set, the skill requires either: - A **plan document path** — extract all module identity, capabilities, and config from it - A **skills folder path** — read skills and infer sensible defaults for module identity -In headless mode: skip interactive questions, scaffold immediately, present a summary of what was created at the end. If critical information is missing and cannot be inferred (like module code), exit with an error explaining what's needed. 
+**Required inputs** (must be provided or extractable — exit with error if missing): + +- Module code (cannot be safely inferred) +- Skills folder path + +**Inferrable inputs** (will use defaults if not provided — flag as inferred in output): + +- Module name (inferred from folder name or skill themes) +- Description (synthesized from skills) +- Version (defaults to 1.0.0) +- Capability ordering (inferred from skill dependencies) + +In headless mode: skip interactive questions, scaffold immediately, and return structured JSON: + +```json +{ + "status": "success|error", + "module_code": "...", + "setup_skill": "bmad-{code}-setup", + "location": "/path/to/bmad-{code}-setup/", + "files_created": ["SKILL.md", "scripts/...", "assets/module.yaml", "assets/module-help.csv"], + "inferred": { "module_name": "...", "description": "..." }, + "warnings": [] +} +``` + +The `inferred` object lists every value that was not explicitly provided, so the caller can spot wrong inferences. If critical information is missing and cannot be inferred, return `{ "status": "error", "message": "..." }`. diff --git a/skills/bmad-module-builder/references/ideate-module.md b/skills/bmad-module-builder/references/ideate-module.md index 0bffeb4..060fbb5 100644 --- a/skills/bmad-module-builder/references/ideate-module.md +++ b/skills/bmad-module-builder/references/ideate-module.md @@ -6,6 +6,10 @@ You are a creative collaborator and module architect — part brainstorming partner, part technical advisor. Your job is to help the user discover and articulate their vision for a BMad module. The user is the creative force. You draw out their ideas, build on them, and help them see possibilities they haven't considered yet. When the session is over, they should feel like every great idea was theirs. +## Session Resume + +On activation, check `{bmad_builder_reports}` for an existing plan document matching the user's intent. 
If one exists with `status: ideation` or `status: in-progress`, load it and orient from its current state: identify which phase was last completed based on which sections have content, briefly summarize where things stand, and ask the user where they'd like to pick up. This prevents re-deriving state from conversation history after context compaction or a new session. + ## Facilitation Principles These are non-negotiable — they define the experience: @@ -36,7 +40,7 @@ This is a phased process. Each phase has a clear purpose and should not be skipp ### Phase 1: Vision and Module Identity -Initialize the plan document immediately using `./assets/module-plan-template.md`. Write it to `{bmad_builder_reports}` with a descriptive filename. Set `created` and `updated` timestamps. This document is your cache — update it progressively as the conversation unfolds so work survives context compaction. +Initialize the plan document by copying `./assets/module-plan-template.md` to `{bmad_builder_reports}` with a descriptive filename — use a `cp` command rather than reading the template into context. Set `created` and `updated` timestamps. Then immediately write "Not ready — complete in Phase 3+" as placeholder text in all structured sections (Architecture, Memory Architecture, Memory Contract, Cross-Agent Patterns, Skills, Configuration, External Dependencies, UI and Visualization, Setup Extensions, Integration, Creative Use Cases, Build Roadmap). This makes the writing discipline constraint visible in the document itself — only Ideas Captured and frontmatter should be written during Phases 1-2. This document is your cache — update it progressively as the conversation unfolds so work survives context compaction. **First: capture the spark.** Let the user talk freely — this is where the richest context comes from: @@ -67,13 +71,15 @@ This is the heart of the session — spend real time here. Use the brainstorming - How might different capabilities work together in unexpected ways? 
- What exists today that's close but not quite right? -Update the **Ideas Captured** section of the plan document as ideas emerge. Capture raw ideas generously — even ones that seem tangential. They're context for later. +Update **only the Ideas Captured section** of the plan document as ideas emerge — do not write to structured sections yet. Capture raw ideas generously — even ones that seem tangential. They're context for later. Energy check: if the conversation plateaus, try a perspective shift or reverse brainstorming to open a new vein. ### Phase 3: Architecture -When exploration feels genuinely complete (not just "we have enough"), shift to architecture. This is where structured writing begins. +Before shifting to architecture, use a mandatory soft gate: "Anything else to capture before we shift to architecture? Once we start structuring, we'll still be creative — but this is the best moment to get any remaining raw ideas down." Only proceed when the user confirms. + +This is where structured writing begins. **Guide toward agent-with-capabilities when appropriate.** Many users default to thinking they need multiple specialized agents. But a well-designed single agent with rich internal capabilities and routing: @@ -198,8 +204,13 @@ Complete all sections of the plan document. Do a final pass to ensure: Update `status` to "complete" in the frontmatter. -**Close with next steps:** +**Close with next steps and active handoff:** -- "Build each skill using **Build an Agent (BA)** or **Build a Workflow (BW)** — share this plan document as context so the builder understands the bigger picture." +Point to the plan document location. Then, using the Build Roadmap's recommended order, identify the first skill to build and offer to start immediately: + +- "Your plan is complete at `{path}`. The build roadmap suggests starting with **{first-skill-name}** — shall I invoke **Build an Agent (BA)** or **Build a Workflow (BW)** now to start building it? 
I'll pass the plan document as context so the builder understands the bigger picture." - "When all skills are built, return to **Create Module (CM)** to scaffold the module infrastructure." -- Point them to the plan document location so they can reference it. + +This is the moment of highest user energy — leverage it. If they decline, that's fine — they have the plan document and can return anytime. + +**Session complete.** The IM session ends here. Do not continue unless the user asks a follow-up question. diff --git a/skills/bmad-module-builder/references/validate-module.md b/skills/bmad-module-builder/references/validate-module.md index 44022a3..32c7826 100644 --- a/skills/bmad-module-builder/references/validate-module.md +++ b/skills/bmad-module-builder/references/validate-module.md @@ -1,6 +1,6 @@ # Validate Module -**Language:** Use `{communication_language}` for all output. +**Language:** Use `{communication_language}` for all output. **Output format:** `{document_output_language}` for generated reports unless overridden by context. ## Your Role @@ -26,7 +26,7 @@ If the script cannot execute, perform equivalent checks by reading the files dir ### 3. Quality Assessment -This is where LLM judgment matters. Read every SKILL.md in the module thoroughly, then review each CSV entry against what you learned: +This is where LLM judgment matters. For 4 or fewer skills, read all SKILL.md files in a single parallel batch (one message, multiple Read calls). For 5+ skills, spawn parallel subagents — one per skill — each returning structured findings: `{ name, capabilities_found: [...], quality_notes: [...], issues: [...] }`. Then review each CSV entry against what you learned: **Completeness** — Does every distinct capability of every skill have its own CSV row? A skill with multiple modes or actions should have multiple entries. Look for capabilities described in SKILL.md overviews that aren't registered. 
@@ -52,3 +52,23 @@ Combine script findings and quality assessment into a clear report: - **Overall assessment** — is this module ready for use, or does it need fixes? For each finding, explain what's wrong and suggest the fix. Be direct — the user should be able to act on every item without further clarification. + +After presenting the report, offer to save findings to a durable file: "Save validation report to `{bmad_builder_reports}/module-validation-{module-code}-{date}.md`?" This gives the user a reference they can share, track as a checklist, and review in future sessions. + +**Completion:** After presenting results, explicitly state: "Validation complete." If findings exist, offer to walk through fixes. If the module passes cleanly, confirm it's ready for use. Do not continue the conversation beyond what the user requests — the session is done once results are delivered and any follow-up questions are answered. + +## Headless Mode + +When `--headless` is set, run the full validation (script + quality assessment) without user interaction and return structured JSON: + +```json +{ + "status": "pass|fail", + "module_code": "...", + "structural_issues": [{ "severity": "...", "message": "...", "file": "..." }], + "quality_findings": [{ "severity": "...", "skill": "...", "message": "...", "suggestion": "..." }], + "summary": "Module is ready for use.|Module has N issues requiring attention." +} +``` + +This enables CI pipelines to gate on module quality before release. 
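For instance, a CI job could gate on that JSON with a short check. This is a sketch, not part of the skill itself: the field names follow the schema above, but the `gate` helper, the inline sample string, and the log format are illustrative assumptions.

```python
import json

def gate(report_json: str) -> int:
    """Return a CI exit code from a headless validation report (sketch)."""
    result = json.loads(report_json)
    if result["status"] == "pass":
        return 0
    # Surface every finding so the CI log is actionable before failing the job.
    for issue in result.get("structural_issues", []):
        print(f"[{issue['severity']}] {issue.get('file', '?')}: {issue['message']}")
    for finding in result.get("quality_findings", []):
        print(f"[{finding['severity']}] {finding.get('skill', '?')}: {finding['message']}")
    return 1

passing = '{"status": "pass", "summary": "Module is ready for use."}'
print(gate(passing))  # → 0
```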
From 7ab3431041eb2867d303775e47314e8385096e51 Mon Sep 17 00:00:00 2001 From: Brian Madison Date: Sun, 29 Mar 2026 13:49:08 -0500 Subject: [PATCH 3/6] feat(workflow-builder): add --convert flag for one-command skill conversion MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit Add a new Convert (CW) capability to the Workflow Builder that takes any existing skill — bloated, poorly structured, or simply non-BMad-compliant — and produces a clean, outcome-driven equivalent with a visual before/after HTML comparison report. New files: - references/convert-process.md: 3-step conversion workflow (capture original, delegate to build-process for headless rebuild, generate comparison report with categorized changes) - scripts/generate-convert-report.py: Self-contained HTML report generator with dark/light mode, hero reduction banner, metrics table with visual bars, expandable cut/retained categories, and verdict - scripts/tests/test_generate_convert_report.py: 11 unit tests covering measurement, reduction calculation, report assembly, HTML generation, XSS escaping, and end-to-end pipeline Modified files: - SKILL.md: Added --convert arg, Convert section, routing table entry, updated frontmatter description with convert trigger phrase - module-help.csv (both installed and source copies): Added CW row - docs/reference/builder-commands.md: Added Convert to capabilities overview, new Convert (CW) section with usage/process/report docs, comparison table vs other modes, new trigger phrase row --- docs/reference/builder-commands.md | 54 ++- .../bmad-builder-setup/assets/module-help.csv | 1 + skills/bmad-workflow-builder/SKILL.md | 25 +- .../references/convert-process.md | 106 +++++ .../scripts/generate-convert-report.py | 406 ++++++++++++++++++ .../tests/test_generate_convert_report.py | 243 +++++++++++ skills/module-help.csv | 1 + 7 files changed, 824 insertions(+), 12 deletions(-) create mode 100644 
skills/bmad-workflow-builder/references/convert-process.md create mode 100644 skills/bmad-workflow-builder/scripts/generate-convert-report.py create mode 100644 skills/bmad-workflow-builder/scripts/tests/test_generate_convert_report.py diff --git a/docs/reference/builder-commands.md b/docs/reference/builder-commands.md index 6889271..2099dda 100644 --- a/docs/reference/builder-commands.md +++ b/docs/reference/builder-commands.md @@ -7,10 +7,11 @@ Reference for the three core BMad Builder skills — the Agent Builder (`bmad-ag ## Capabilities Overview -| Capability | Menu Code | Agent Builder | Workflow Builder | -| -------------------- | --------- | ------------------------------------- | ------------------------------------------------------ | -| **Build Process** | BP | Build, edit, convert, or fix agents | Build, edit, convert, or fix workflows and utilities | -| **Quality Optimize** | QO | Validate and optimize existing agents | Validate and optimize existing workflows and utilities | +| Capability | Menu Code | Agent Builder | Workflow Builder | +| -------------------- | --------- | ------------------------------------- | ----------------------------------------------------------------------------------- | +| **Build Process** | BP | Build, edit, convert, or fix agents | Build, edit, convert, or fix workflows and utilities | +| **Quality Optimize** | QO | Validate and optimize existing agents | Validate and optimize existing workflows and utilities | +| **Convert** | CW | — | Convert any skill to BMad-compliant, outcome-driven equivalent with comparison report | Both capabilities support autonomous/headless mode via `--headless` / `-H` flags. @@ -179,6 +180,50 @@ Not every suggestion should be applied. 
The optimizer communicates these decisions:

- **Prefer scripting** for deterministic operations; **prefer prompting** for creative or judgment-based tasks
- **Reject changes** that flatten personality unless a neutral tone is explicitly wanted

+## Convert (CW)
+
+One-command conversion of any existing skill into a BMad-compliant, outcome-driven equivalent. This is the fastest path for taking a non-conformant skill — whether it's bloated, poorly structured, or just doesn't follow BMad best practices — and producing a clean version that does. Unlike the Build Process's edit/rebuild modes, `--convert` always runs headless and produces a visual comparison report.
+
+### Usage
+
+```
+--convert <path-or-url> [-H]
+```
+
+The `--convert` flag implies headless mode. Accepts either a local skill path or a URL.
+
+### Process
+
+| Step | What Happens |
+| ---- | ------------ |
+| **1. Capture** | Fetch or read the original skill, save a copy for comparison |
+| **2. Rebuild** | Full headless rebuild from intent — extract what the skill achieves, apply BMad outcome-driven best practices |
+| **3. Report** | Measure both versions, categorize what changed and why, generate an interactive HTML comparison report |
+
+### Comparison Report
+
+The HTML report includes:
+
+| Section | Content |
+| ------- | ------- |
+| **Hero banner** | Overall token reduction percentage |
+| **Metrics table** | Lines, words, characters, sections, files, estimated tokens — with visual bars |
+| **What changed** | Categorized differences — bloat removal, structural reorganization, best-practice alignment — with severity and examples |
+| **What survived** | Content that earns its place — instructions the LLM wouldn't follow correctly without being told |
+| **Verdict** | One-sentence summary of the conversion |
+
+Reports are saved to `{bmad_builder_reports}/convert-{skill-name}/`.
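The reduction math behind the hero banner and metrics table can be sketched with the same heuristics the report generator script uses (roughly 1.3 tokens per word, with a guard against empty originals). The sample strings below are made up for illustration:

```python
def estimate_tokens(text: str) -> int:
    # Same heuristic as the report generator: roughly 1.3 tokens per word.
    words = sum(len(line.split()) for line in text.splitlines())
    return int(words * 1.3)

def reduction(original: int, rebuilt: int) -> str:
    # Hero-banner percentage; an original of 0 would otherwise divide by zero.
    return f"{round((1 - rebuilt / original) * 100)}%" if original > 0 else "N/A"

original_md = "# Skill\n" + "word " * 1000
rebuilt_md = "# Skill\n" + "word " * 300
print(reduction(estimate_tokens(original_md), estimate_tokens(rebuilt_md)))  # → 70%
```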
+
+### When to Use Convert vs Build Process
+
+| Scenario | Use |
+| -------- | --- |
+| You have any non-BMad-compliant skill and want it converted fast | `--convert` |
+| You have a bloated skill and want a lean replacement with a comparison report | `--convert` |
+| You want to interactively discuss what to change | Build Process (Edit mode) |
+| You want to rethink a skill from scratch with full discovery | Build Process (Rebuild mode) |
+| You want a detailed quality analysis without rebuilding | Quality Optimize |
+
## Module Builder

The Module Builder (`bmad-module-builder`) handles module-level planning, scaffolding, and validation. It operates at a higher level than the Agent and Workflow Builders — it orchestrates what those builders produce into a cohesive, installable module.
@@ -275,6 +320,7 @@ Verifies that a module's setup skill is complete and accurate. Combines a determ
| Edit | "edit/modify/update a workflow/skill" | Workflow | `prompts/build-process.md` |
| Convert | "convert this to a BMad agent" | Agent | `prompts/build-process.md` |
| Convert | "convert this to a BMad skill" | Workflow | `prompts/build-process.md` |
+| Convert | `--convert <path-or-url>` | Workflow | `./references/convert-process.md` |
| Optimize | "quality check/validate/optimize/review agent" | Agent | `prompts/quality-optimizer.md` |
| Optimize | "quality check/validate/optimize/review workflow/skill" | Workflow | `prompts/quality-optimizer.md` |
| Ideate | "ideate module/plan a module/brainstorm a module" | Module | `./references/ideate-module.md` |
diff --git a/skills/bmad-builder-setup/assets/module-help.csv b/skills/bmad-builder-setup/assets/module-help.csv
index 9687012..6a364aa 100644
--- a/skills/bmad-builder-setup/assets/module-help.csv
+++ b/skills/bmad-builder-setup/assets/module-help.csv
@@ -4,6 +4,7 @@ BMad Builder,bmad-agent-builder,Build an Agent,BA,"Create, edit, convert, or fix
BMad Builder,bmad-agent-builder,Optimize an Agent,OA,Validate and optimize an existing agent
skill. Produces a quality report.,quality-optimizer,[-H] [path],anytime,bmad-agent-builder:build-process,,false,bmad_builder_reports,quality report
BMad Builder,bmad-workflow-builder,Build a Workflow,BW,"Create, edit, convert, or fix a workflow or utility skill.",build-process,"[-H] [description | path]",anytime,,bmad-workflow-builder:quality-optimizer,false,output_folder,workflow skill
BMad Builder,bmad-workflow-builder,Optimize a Workflow,OW,Validate and optimize an existing workflow or utility skill. Produces a quality report.,quality-optimizer,[-H] [path],anytime,bmad-workflow-builder:build-process,,false,bmad_builder_reports,quality report
+BMad Builder,bmad-workflow-builder,Convert a Skill,CW,"Convert any skill to BMad-compliant, outcome-driven equivalent with before/after HTML comparison report.",convert-process,"--convert <path-or-url> [-H]",anytime,,,false,bmad_builder_reports,converted skill + comparison report
BMad Builder,bmad-module-builder,Ideate Module,IM,"Brainstorm and plan a BMad module — explore ideas, decide architecture, and produce a build plan.",ideate-module,,anytime,,bmad-module-builder:create-module,false,bmad_builder_reports,module plan
BMad Builder,bmad-module-builder,Create Module,CM,"Scaffold a setup skill into an existing folder of built skills, making it an installable BMad module.",create-module,"[-H] [path]",anytime,bmad-module-builder:ideate-module,,false,bmad_builder_output_folder,setup skill
BMad Builder,bmad-module-builder,Validate Module,VM,"Check that a module's setup skill is complete, accurate, and all capabilities are properly registered.",validate-module,[path],anytime,bmad-module-builder:create-module,,false,bmad_builder_reports,validation report
\ No newline at end of file
diff --git a/skills/bmad-workflow-builder/SKILL.md b/skills/bmad-workflow-builder/SKILL.md
index 2da1f70..f49faa6 100644
--- a/skills/bmad-workflow-builder/SKILL.md
+++ b/skills/bmad-workflow-builder/SKILL.md
@@ -1,6 +1,6 @@
---
name: bmad-workflow-builder
-description: Builds workflows and skills through conversational discovery and analyzes existing ones. Use when the user requests to "build a workflow", "modify a workflow", "quality check workflow", or "analyze skill".
+description: Builds, converts, and analyzes workflows and skills. Use when the user requests to "build a workflow", "modify a workflow", "quality check workflow", "analyze skill", or "convert a skill".
---

# Workflow & Skill Builder

This skill helps you build AI workflows and skills that are **outcome-driven**

Act as an architect guide — walk users through conversational discovery to understand their vision, then craft skill structures that trust the executing LLM's judgment. The best skill is the one where every instruction carries its weight and nothing tells the LLM how to do what it already knows.

-**Args:** Accepts `--headless` / `-H` for non-interactive execution, an initial description for create, or a path to an existing skill with keywords like analyze, edit, or rebuild.
+**Args:** Accepts `--headless` / `-H` for non-interactive execution, `--convert <path-or-url>` to convert an existing skill into a lean equivalent with a before/after HTML comparison report, an initial description for create, or a path to an existing skill with keywords like analyze, edit, or rebuild.

**Your output:** A skill structure ready to integrate into a module or use standalone — from simple composable utilities to complex multi-stage workflows.

@@ -40,16 +40,25 @@ Comprehensive quality analysis toward outcome-driven design. Analyzes existing s

Load `quality-analysis.md` to begin.

+## Convert
+
+One-command conversion of any existing skill into a BMad-compliant, outcome-driven equivalent.
Whether the input is bloated, poorly structured, or just doesn't follow BMad best practices, this path reads or fetches the original, rebuilds from intent (always headless), and generates an HTML comparison report showing the before/after — metrics, what changed and why, what survived and why it earned its place. + +`--convert` implies headless mode. Accepts a local path or URL. The original skill provides all context needed — no interactive discovery. + +Load `./references/convert-process.md` to begin. + --- ## Skill Intent Routing Reference -| Intent | Trigger Phrases | Route | -| --------------------------- | ----------------------------------------------------- | ---------------------------------------- | -| **Build new** | "build/create/design a workflow/skill/tool" | Load `build-process.md` | -| **Existing skill provided** | Path to existing skill, or "convert/edit/fix/analyze" | Ask the 3-way question below, then route | -| **Quality analyze** | "quality check", "validate", "review workflow/skill" | Load `quality-analysis.md` | -| **Unclear** | — | Present options and ask | +| Intent | Trigger Phrases | Route | +| --------------------------- | ----------------------------------------------------- | ------------------------------------------------ | +| **Build new** | "build/create/design a workflow/skill/tool" | Load `build-process.md` | +| **Convert** | `--convert path-or-url` | Load `./references/convert-process.md` | +| **Existing skill provided** | Path to existing skill, or "edit/fix/analyze" | Ask the 3-way question below, then route | +| **Quality analyze** | "quality check", "validate", "review workflow/skill" | Load `quality-analysis.md` | +| **Unclear** | — | Present options and ask | ### When given an existing skill, ask: diff --git a/skills/bmad-workflow-builder/references/convert-process.md b/skills/bmad-workflow-builder/references/convert-process.md new file mode 100644 index 0000000..db73d19 --- /dev/null +++ 
b/skills/bmad-workflow-builder/references/convert-process.md @@ -0,0 +1,106 @@ +--- +name: convert-process +description: Automated skill conversion workflow. Analyzes an existing skill, rebuilds it outcome-driven, and generates a before/after HTML comparison report. +--- + +**Language:** Use `{communication_language}` for all output. + +# Convert Process + +Convert any existing skill into a BMad-compliant, outcome-driven equivalent. Whether the input is bloated, poorly structured, or simply non-conformant, this process extracts intent, rebuilds following BMad best practices, and produces a before/after comparison report. + +This process is always headless — no interactive questions. The original skill provides all the context needed. + +## Step 1: Capture the Original + +1. **Fetch/read the original skill.** If a URL was provided, fetch the raw content. If a local path, read all files in the skill directory. + +2. **Save the original.** Write the complete original content to `{bmad_builder_reports}/convert-{skill-name}/original/SKILL.md` (and any additional files if the original is a multi-file skill). This preserved copy is needed for the comparison script. + +3. **Note the source** (URL or path) for the report metadata. + +## Step 2: Rebuild from Intent + +Load and follow `build-process.md` with these parameters pre-set: + +- **Intent:** Rebuild — rethink from core outcomes, the original is reference material only +- **Headless mode:** Active — skip all interactive questions, use sensible defaults +- **Discovery questions:** Answer them yourself by analyzing the original skill's intent +- **Classification:** Determine from the original's structure and purpose +- **Requirements:** Derive from the original, applying aggressive pruning + +**Critical:** Do not inherit the original's verbosity, structure, or mechanical procedures. Extract *what it achieves*, then build the leanest skill that delivers the same outcome. 
+ +When the build process reaches Phase 6 (Summary), skip the quality analysis offer and continue to Step 3 below. + +## Step 3: Generate Comparison Report + +After the rebuilt skill is complete: + +1. **Create the analysis file.** Write `{bmad_builder_reports}/convert-{skill-name}/convert-analysis.json`: + +```json +{ + "skill_name": "{skill-name}", + "original_source": "{url-or-path-provided-by-user}", + "cuts": [ + { + "category": "Category Name", + "description": "Why this content was cut", + "examples": ["Specific example 1", "Specific example 2"], + "severity": "high|medium|low" + } + ], + "retained": [ + { + "category": "Category Name", + "description": "Why this content was kept — what behavioral impact it has" + } + ], + "verdict": "One sharp sentence summarizing the conversion" +} +``` + +### Categorizing Changes + +Not every conversion is about bloat — some skills are well-intentioned but non-conformant. Categorize what changed and why, drawing from these common patterns: + +**Content removal** (when applicable): + +| Category | Signal | +|----------|--------| +| **Training Data Redundancy** | Facts, biographies, domain knowledge the LLM already has | +| **Prescriptive Procedures** | Step-by-step instructions for things the LLM reasons through naturally | +| **Mechanical Frameworks** | Scoring rubrics, decision matrices, evaluation checklists for subjective judgment | +| **Generic Boilerplate** | "Best Practices", "Common Pitfalls", "When to Use/Not Use" filler | +| **Template Bloat** | Response format templates, greeting scripts, output structure prescriptions | +| **Redundant Examples** | Examples that repeat what the instructions already say | +| **Per-Platform Duplication** | Separate instructions per platform when one adaptive instruction works | + +**Structural changes** (conformance to BMad best practices): + +| Category | Signal | +|----------|--------| +| **Progressive Disclosure** | Monolithic content split into SKILL.md routing + references | 
+| **Outcome-Driven Rewrite** | Prescriptive instructions reframed as outcomes | +| **Frontmatter/Description** | Added or fixed BMad-compliant frontmatter and trigger phrases | +| **Path Convention Fixes** | Corrected file references to use `./references/`, `{project-root}/_bmad/`, etc. | + +Severity: **high** = significant impact on quality or compliance, **medium** = notable improvement, **low** = minor or stylistic. + +### Categorizing Retained Content + +Focus on what the LLM *wouldn't* do correctly without being told. The retained categories should explain why each piece earns its place. + +2. **Generate the HTML report:** + +```bash +python3 ./scripts/generate-convert-report.py \ + "{bmad_builder_reports}/convert-{skill-name}/original" \ + "{rebuilt-skill-path}" \ + "{bmad_builder_reports}/convert-{skill-name}/convert-analysis.json" \ + -o "{bmad_builder_reports}/convert-{skill-name}/convert-report.html" \ + --open +``` + +3. **Present the summary** — key metrics, reduction percentages, report file location. The HTML report opens automatically. diff --git a/skills/bmad-workflow-builder/scripts/generate-convert-report.py b/skills/bmad-workflow-builder/scripts/generate-convert-report.py new file mode 100644 index 0000000..f85f306 --- /dev/null +++ b/skills/bmad-workflow-builder/scripts/generate-convert-report.py @@ -0,0 +1,406 @@ +#!/usr/bin/env python3 +# /// script +# requires-python = ">=3.9" +# /// +""" +Generate an interactive HTML skill conversion comparison report. + +Measures original and rebuilt skill directories, combines with LLM-generated +analysis (cuts, retained content, verdict), and renders a self-contained +HTML report showing the stark before/after comparison. 
+
+Usage:
+    python3 generate-convert-report.py <original> <rebuilt> <analysis.json> [-o output.html] [--open]
+"""
+
+from __future__ import annotations
+
+import argparse
+import html as html_lib
+import json
+import platform
+import subprocess
+import sys
+from datetime import datetime, timezone
+from pathlib import Path
+
+
+def measure_skill(skill_path: Path) -> dict:
+    """Measure a skill directory or single file for lines, words, chars, sections, files."""
+    total_lines = 0
+    total_words = 0
+    total_chars = 0
+    total_sections = 0
+    md_file_count = 0
+    non_md_file_count = 0
+
+    if skill_path.is_file():
+        md_files = [skill_path]
+    else:
+        md_files = sorted(skill_path.rglob('*.md'))
+
+    for f in md_files:
+        content = f.read_text(encoding='utf-8')
+        lines = content.splitlines()
+        total_lines += len(lines)
+        total_words += sum(len(line.split()) for line in lines)
+        total_chars += len(content)
+        total_sections += sum(1 for line in lines if line.startswith('## '))
+        md_file_count += 1
+
+    if skill_path.is_dir():
+        for f in skill_path.rglob('*'):
+            if f.is_file() and f.suffix != '.md':
+                non_md_file_count += 1
+
+    return {
+        'lines': total_lines,
+        'words': total_words,
+        'chars': total_chars,
+        'sections': total_sections,
+        'files': md_file_count + non_md_file_count,
+        'estimated_tokens': int(total_words * 1.3),
+    }
+
+
+def calculate_reductions(original: dict, rebuilt: dict) -> dict:
+    """Calculate percentage reductions for each metric."""
+    reductions = {}
+    for key in ('lines', 'words', 'chars', 'sections', 'estimated_tokens'):
+        orig_val = original.get(key, 0)
+        new_val = rebuilt.get(key, 0)
+        if orig_val > 0:
+            reductions[key] = f'{round((1 - new_val / orig_val) * 100)}%'
+        else:
+            reductions[key] = 'N/A'
+    return reductions
+
+
+def build_report_data(original_metrics: dict, rebuilt_metrics: dict,
+                      analysis: dict, reductions: dict) -> dict:
+    """Assemble the full report data structure."""
+    return {
+        'meta': {
+            'skill_name': analysis.get('skill_name', 'Unknown'),
+            'original_source':
analysis.get('original_source', ''), + 'timestamp': datetime.now(timezone.utc).isoformat(), + }, + 'metrics': { + 'original': original_metrics, + 'rebuilt': rebuilt_metrics, + }, + 'reductions': reductions, + 'cuts': analysis.get('cuts', []), + 'retained': analysis.get('retained', []), + 'verdict': analysis.get('verdict', ''), + } + + +# ── HTML Template ────────────────────────────────────────────────────────────── + +HTML_TEMPLATE = r""" + + + + +BMad Method · Skill Conversion: SKILL_NAME + + + + +
BMad Method
+

Skill Conversion:

+
+ +
+ + + + + + + + + + + + +
MetricOriginalRebuiltReductionComparison
+ +
+
+
+ + + + + +""" + + +def generate_html(report_data: dict) -> str: + """Inject report data into the HTML template.""" + data_json = json.dumps(report_data, indent=None, ensure_ascii=False) + data_tag = f'' + html = HTML_TEMPLATE.replace( + '', 'original_source': '', 'timestamp': ''}, + 'metrics': {'original': {}, 'rebuilt': {}}, + 'reductions': {}, + 'cuts': [], + 'retained': [], + 'verdict': '', + } + html = generate_html(report_data) + # The skill name in the JSON should be escaped by json.dumps + assert '