Merged
71 changes: 71 additions & 0 deletions .claude/hooks/improve-prompt.py
@@ -0,0 +1,71 @@
#!/usr/bin/env python3
"""
Claude Code Prompt Improver Hook
Intercepts user prompts and evaluates if they need enrichment before execution.
Uses main session context for intelligent, non-pedantic evaluation.
"""
import json
import sys

# Load input from stdin
try:
    input_data = json.load(sys.stdin)
except json.JSONDecodeError as e:
    print(f"Error: Invalid JSON input: {e}", file=sys.stderr)
    sys.exit(1)

prompt = input_data.get("prompt", "")

# Escape quotes in prompt for safe embedding in wrapped instructions
escaped_prompt = prompt.replace("\\", "\\\\").replace('"', '\\"')

def output_json(text):
    """Output text in UserPromptSubmit JSON format"""
    output = {
        "hookSpecificOutput": {
            "hookEventName": "UserPromptSubmit",
            "additionalContext": text
        }
    }
    print(json.dumps(output))

# Check for bypass conditions
# 1. Explicit bypass with * prefix
# 2. Slash commands (built-in or custom)
# 3. Memorize feature (# prefix)
if prompt.startswith("*") or prompt.startswith("/") or prompt.startswith("#"):
    # User bypassed improvement - don't add evaluation wrapper
    # Exit without output so hook doesn't modify the prompt
    sys.exit(0)

# Build the improvement wrapper
wrapped_prompt = f"""PROMPT EVALUATION

Original user request: "{escaped_prompt}"

EVALUATE: Is this prompt clear enough to execute, or does it need enrichment?

PROCEED IMMEDIATELY if:
- Detailed/specific OR you have sufficient context OR can infer intent

ONLY ASK if genuinely vague (e.g., "fix the bug" with no context):
- CRITICAL (NON-NEGOTIABLE) RULES:
- Trust user intent by default. Check conversation history before doing research.
- Do not rely on base knowledge.
- Never skip Phase 1. Research before asking.
- Don't announce evaluation - just proceed or ask.

- PHASE 1 - RESEARCH (DO NOT SKIP):
1. Preface with brief note: "Prompt Improver Hook is seeking clarification because [specific reason: ambiguous scope/missing context/unclear requirements/etc]"
2. Create research plan with TodoWrite: Ask yourself "What do I need to research to clarify this vague request?" Research WHAT NEEDS CLARIFICATION, not just the project. Use available tools: Task/Explore for codebase, WebSearch for online research (current info, common approaches, best practices, typical architectures), Read/Grep as needed
3. Execute research
4. Use research findings (not your training) to formulate grounded questions with specific options
5. Mark completed

- PHASE 2 - ASK (ONLY AFTER PHASE 1):
1. Use AskUserQuestion tool with max 1-6 questions offering specific options from your research
2. Use the answers to execute the original user request
"""

output_json(wrapped_prompt)
sys.exit(0)
13 changes: 12 additions & 1 deletion .claude/hooks/settings.hooks.json
@@ -4,7 +4,18 @@
   "hooks": {
     "UserPromptSubmit": [
       {
-        "description": "Auto-inject playbook bullets and suggest appropriate MAP workflows",
+        "description": "Step 1: Disambiguate vague prompts",
+        "hooks": [
+          {
+            "type": "command",
+            "command": "python3 .claude/hooks/improve-prompt.py",
+            "timeout": 5,
+            "description": "Wraps vague prompts with evaluation instructions for clarification"
+          }
+        ]
+      },
+      {
+        "description": "Step 2: Inject playbook patterns and suggest workflows",
         "hooks": [
           {
             "type": "command",
112 changes: 112 additions & 0 deletions docs/USAGE.md
@@ -30,6 +30,11 @@ Complete usage examples, best practices, and optimization strategies for the MAP
- [Cost Savings](#cost-savings)
- [How It Works](#how-it-works)
- [Cost Comparison Example](#cost-comparison-example)
- [Hooks System](#-hooks-system)
- [Prompt Clarification](#prompt-clarification-prompt-improver-hook)
- [Multi-Hook Processing](#multi-hook-processing)
- [Disabling Prompt-Improver](#disabling-prompt-improver)
- [Other Active Hooks](#other-active-hooks)
- [Additional Resources](#additional-resources)

---
Expand Down Expand Up @@ -1882,3 +1887,110 @@ See `.claude/skills/README.md` for:
- Best practices and examples

---

## 🔌 Hooks System

MAP Framework uses Claude Code hooks to enhance your workflow experience.

### Prompt Clarification (Prompt-Improver Hook)

**Enabled by default** - Automatically disambiguates vague prompts before execution.

**What it does:**
1. **Evaluates prompt clarity** using conversation history
2. **For vague prompts** (e.g., "fix the bug"):
- Creates research plan (TodoWrite)
- Gathers context from codebase, docs, web
- Asks 1-6 grounded questions with specific options
3. **For clear prompts**: Proceeds immediately

**Example flow:**
```
User: "fix the error"

MAP: [Prompt Improver Hook seeking clarification]
[Research: Found 3 recent errors in logs]

Which error needs fixing?
○ TypeError in src/components/Map.tsx (recent change)
○ API timeout in src/services/osmService.ts
○ Other (paste error message)

User: [Selects option]

MAP: [Proceeds with full context + playbook patterns]
```

**Bypass options:**
- `* your prompt` - Prefix with `*` to skip evaluation for a single prompt
- `/command` - Slash commands bypass automatically
- `# memorize` - The memorize feature (`#` prefix) bypasses automatically
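The bypass and wrapping behavior can be sketched as a simplified mirror of the hook's decision logic (illustrative only; the real hook also embeds the full evaluation instructions):

```python
BYPASS_PREFIXES = ("*", "/", "#")  # explicit skip, slash commands, memorize

def process_prompt(prompt):
    """Simplified mirror of improve-prompt.py: bypass or wrap."""
    if prompt.startswith(BYPASS_PREFIXES):
        # The hook exits with no output; the prompt passes through unchanged.
        return None
    # Escape backslashes first, then quotes, for safe embedding in the wrapper.
    escaped = prompt.replace("\\", "\\\\").replace('"', '\\"')
    return {
        "hookSpecificOutput": {
            "hookEventName": "UserPromptSubmit",
            "additionalContext": f'Original user request: "{escaped}"',
        }
    }
```

For example, `process_prompt("* quick fix")` returns `None` (no modification), while `process_prompt("fix the bug")` returns the evaluation wrapper as `additionalContext`.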

**Token overhead:**
- ~300 tokens per wrapped prompt
- Questions are asked only when genuinely needed
- Better first-try outcomes offset the per-prompt overhead

**Design philosophy:**
- **Rarely intervene** - Most prompts pass through
- **Trust user intent** - Research before asking
- **Transparent** - Evaluation visible in conversation
- **Max 1-6 questions** - Focused clarification

### Multi-Hook Processing

MAP uses **multiple UserPromptSubmit hooks** that run in parallel:

1. **Prompt-Improver** – Disambiguates vague prompts (wraps prompt with evaluation instructions)
2. **Playbook Injection** – Adds relevant patterns and suggests workflows and skills

> **Note:** Claude Code executes all matching hooks in parallel. Each hook's `additionalContext` output is concatenated and added to the prompt. The order is not guaranteed, but both enhancements are applied.

> **Implementation detail:** Workflow and skill suggestions are handled within the Playbook Injection hook (`.claude/hooks/user-prompt-submit.sh`), not as separate hooks.

**Benefits:**
- Both hooks enhance the prompt with different types of context
- Prompt-Improver adds evaluation wrapper, Playbook adds patterns/workflows/skills
- Modular design (hooks can be disabled independently)
- Parallel execution (efficient)
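As a rough mental model (not Claude Code's actual implementation), the merge of parallel hook outputs behaves like this:

```python
def merge_hook_outputs(hook_outputs):
    """Collect each hook's additionalContext and concatenate them
    (order not guaranteed) into the context added to the prompt.
    Illustrative sketch only."""
    contexts = [
        out["hookSpecificOutput"]["additionalContext"]
        for out in hook_outputs
        if out is not None  # hooks that exit silently contribute nothing
    ]
    return "\n\n".join(contexts)
```

A bypassed Prompt-Improver (which emits no output) simply contributes nothing, while the Playbook hook's patterns are still applied.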

### Disabling Prompt-Improver

If you prefer direct execution without clarification:

**Option 1: Use bypass prefix**
```bash
* implement user authentication # Skips improvement
```

**Option 2: Remove from settings.hooks.json**
Delete the Prompt-Improver entry from the `UserPromptSubmit` array (JSON does not allow comments), leaving only the playbook hook:

```json
{
  "hooks": {
    "UserPromptSubmit": [
      {
        "description": "Step 2: Inject playbook patterns and suggest workflows",
        "hooks": [
          {
            "type": "command",
            "command": ".claude/hooks/user-prompt-submit.sh"
          }
        ]
      }
    ]
  }
}
```

### Other Active Hooks

MAP Framework includes additional hooks:

- **SessionStart** - Auto-injects checkpoint after compaction (see [Compaction Resilience](#-compaction-resilience))
- **PreToolUse** - Validates agent templates before modifications
- **Stop** - Quality gates after code modifications

See `.claude/hooks/README.md` for implementation details.

---
71 changes: 71 additions & 0 deletions src/mapify_cli/templates/hooks/improve-prompt.py
@@ -0,0 +1,71 @@
#!/usr/bin/env python3
"""
Claude Code Prompt Improver Hook
Intercepts user prompts and evaluates if they need enrichment before execution.
Uses main session context for intelligent, non-pedantic evaluation.
"""
import json
import sys

# Load input from stdin
try:
    input_data = json.load(sys.stdin)
except json.JSONDecodeError as e:
    print(f"Error: Invalid JSON input: {e}", file=sys.stderr)
    sys.exit(1)

prompt = input_data.get("prompt", "")

# Escape quotes in prompt for safe embedding in wrapped instructions
escaped_prompt = prompt.replace("\\", "\\\\").replace('"', '\\"')

def output_json(text):
    """Output text in UserPromptSubmit JSON format"""
    output = {
        "hookSpecificOutput": {
            "hookEventName": "UserPromptSubmit",
            "additionalContext": text
        }
    }
    print(json.dumps(output))

# Check for bypass conditions
# 1. Explicit bypass with * prefix
# 2. Slash commands (built-in or custom)
# 3. Memorize feature (# prefix)
if prompt.startswith("*") or prompt.startswith("/") or prompt.startswith("#"):
    # User bypassed improvement - don't add evaluation wrapper
    # Exit without output so hook doesn't modify the prompt
    sys.exit(0)

# Build the improvement wrapper
wrapped_prompt = f"""PROMPT EVALUATION

Original user request: "{escaped_prompt}"

EVALUATE: Is this prompt clear enough to execute, or does it need enrichment?

PROCEED IMMEDIATELY if:
- Detailed/specific OR you have sufficient context OR can infer intent

ONLY ASK if genuinely vague (e.g., "fix the bug" with no context):
- CRITICAL (NON-NEGOTIABLE) RULES:
- Trust user intent by default. Check conversation history before doing research.
- Do not rely on base knowledge.
- Never skip Phase 1. Research before asking.
- Don't announce evaluation - just proceed or ask.

- PHASE 1 - RESEARCH (DO NOT SKIP):
1. Preface with brief note: "Prompt Improver Hook is seeking clarification because [specific reason: ambiguous scope/missing context/unclear requirements/etc]"
2. Create research plan with TodoWrite: Ask yourself "What do I need to research to clarify this vague request?" Research WHAT NEEDS CLARIFICATION, not just the project. Use available tools: Task/Explore for codebase, WebSearch for online research (current info, common approaches, best practices, typical architectures), Read/Grep as needed
3. Execute research
4. Use research findings (not your training) to formulate grounded questions with specific options
5. Mark completed

- PHASE 2 - ASK (ONLY AFTER PHASE 1):
1. Use AskUserQuestion tool with max 1-6 questions offering specific options from your research
2. Use the answers to execute the original user request
"""

output_json(wrapped_prompt)
sys.exit(0)
13 changes: 12 additions & 1 deletion src/mapify_cli/templates/hooks/settings.hooks.json
@@ -4,7 +4,18 @@
   "hooks": {
     "UserPromptSubmit": [
       {
-        "description": "Auto-inject playbook bullets and suggest appropriate MAP workflows",
+        "description": "Step 1: Disambiguate vague prompts",
+        "hooks": [
+          {
+            "type": "command",
+            "command": "python3 .claude/hooks/improve-prompt.py",
+            "timeout": 5,
+            "description": "Wraps vague prompts with evaluation instructions for clarification"
+          }
+        ]
+      },
+      {
+        "description": "Step 2: Inject playbook patterns and suggest workflows",
         "hooks": [
           {
             "type": "command",