Declarative AI pipelines for the command line. Define LLM workflows in YAML, run them anywhere, version control everything.
```bash
# Pipe code through multiple AI agents
cat main.go | comanda process code-review.yaml

# Compare how different models solve a problem
comanda process model-comparison.yaml

# Run Claude Code, Codex, and Gemini CLI in parallel
echo "Design a REST API" | comanda process multi-agent/architecture.yaml
```

For AI-powered development workflows:
- Run Claude Code, OpenAI Codex, and Gemini CLI side-by-side
- Chain multiple agents for code review → test generation → documentation
- Get diverse perspectives on architecture decisions
For reproducible AI pipelines:
- YAML workflows you can version control and share
- Same workflow runs locally, in CI, or on a server
- Switch providers without changing your pipeline
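As a minimal sketch of that last point, using only the step syntax shown later in this README: switching providers means changing the `model` field and nothing else (the step name and input file here are illustrative).

```yaml
# Same step, different provider: only the model field changes.
summarize:
  input: report.txt
  model: gpt-4o                        # OpenAI
  # model: claude-3-5-sonnet-latest    # Anthropic — uncomment to switch
  action: "Summarize the key findings"
  output: STDOUT
```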
For command-line power users:
- Pipes, redirects, scripts—works like `grep` or `jq`
- Process files, URLs, databases, screenshots
- Batch operations with wildcards and parallel execution
```bash
# macOS
brew install kris-hansen/comanda/comanda

# Or via Go
go install github.com/kris-hansen/comanda@latest

# Or download from GitHub Releases
```

```bash
comanda configure
# Select providers (OpenAI, Anthropic, Google, Ollama, Claude Code, etc.)
# Enter API keys where needed
```

Create `hello.yaml`:
```yaml
generate:
  input: NA
  model: gpt-4o
  action: Write a haiku about programming
  output: STDOUT
```

```bash
comanda process hello.yaml
```

Run multiple agentic coding tools in parallel and synthesize their outputs:
```yaml
parallel-process:
  claude-analysis:
    input: STDIN
    model: claude-code
    action: "Analyze architecture and trade-offs"
    output: $CLAUDE_RESULT
  gemini-analysis:
    input: STDIN
    model: gemini-cli
    action: "Identify patterns and best practices"
    output: $GEMINI_RESULT
  codex-analysis:
    input: STDIN
    model: openai-codex
    action: "Focus on implementation structure"
    output: $CODEX_RESULT

synthesize:
  input: |
    Claude: $CLAUDE_RESULT
    Gemini: $GEMINI_RESULT
    Codex: $CODEX_RESULT
  model: claude-code
  action: "Combine into unified recommendation"
  output: STDOUT
```

```bash
echo "Design a real-time collaborative editor" | comanda process architecture.yaml
```

| Agent | Model Names | Best For |
|---|---|---|
| Claude Code | `claude-code`, `claude-code-opus`, `claude-code-sonnet` | Deep reasoning, synthesis |
| Gemini CLI | `gemini-cli`, `gemini-cli-pro`, `gemini-cli-flash` | Broad knowledge, patterns |
| OpenAI Codex | `openai-codex`, `openai-codex-o3` | Implementation, code structure |
No API keys needed for these—they use their own CLI authentication.
```yaml
review:
  input: "src/*.go"
  model: claude-code
  action: "Review for bugs, security issues, and improvements"
  output: review.md
```

```bash
comanda process analyze.yaml < quarterly_data.csv
```

```yaml
parallel-process:
  gpt4:
    input: NA
    model: gpt-4o
    action: "Write a function to parse JSON"
    output: gpt4-solution.py
  claude:
    input: NA
    model: claude-3-5-sonnet-latest
    action: "Write a function to parse JSON"
    output: claude-solution.py

compare:
  input: [gpt4-solution.py, claude-solution.py]
  model: gpt-4o-mini
  action: "Compare these implementations"
  output: STDOUT
```

Turn any workflow into an HTTP API:
```bash
comanda server

curl -X POST "http://localhost:8080/process?filename=review.yaml" \
  -d '{"input": "code to review"}'
```

| Feature | Description |
|---|---|
| Multi-provider | OpenAI, Anthropic, Google, X.AI, Ollama, vLLM, Claude Code, Codex, Gemini CLI |
| Parallel processing | Run independent steps concurrently |
| Tool execution | Run shell commands (ls, jq, grep, custom CLIs) within workflows |
| File operations | Read/write files, wildcards, batch processing |
| Vision support | Analyze images and screenshots |
| Web scraping | Fetch and process URLs |
| Database I/O | Read from and write to PostgreSQL |
| Chunking | Auto-split large files for processing |
| Memory | Persistent context via COMANDA.md |
| Branching | Conditional workflows with defer: |
| Visualization | ASCII workflow charts with comanda chart |
See the structure of any workflow at a glance:

```bash
comanda chart workflow.yaml
```

```
+================================================+
| WORKFLOW: examples/parallel-data-processing... |
+================================================+
+------------------------------------------------+
| INPUT: examples/test.csv                       |
+------------------------------------------------+
                        |
                        v
+================================================+
| PARALLEL: parallel-process (3 steps)           |
+------------------------------------------------+
 +----------------------------------------------+
 | [OK] analyze_csv                             |
 | Model: gpt-4o-mini                           |
 | Action: Analyze this CSV data and            |
 +----------------------------------------------+
  ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 +----------------------------------------------+
 | [OK] extract_entities                        |
 | Model: gpt-4o-mini                           |
 | Action: Extract all named entities           |
 +----------------------------------------------+
+================================================+
                        |
                        v
+------------------------------------------------+
| [OK] consolidate_results                       |
| Model: gpt-4o                                  |
| Action: Create a comprehensive data analysis   |
+------------------------------------------------+
                        |
                        v
+------------------------------------------------+
| OUTPUT: comprehensive-report.txt               |
+------------------------------------------------+
+================================================+
| STATISTICS                                     |
|------------------------------------------------|
| Steps: 4 total, 3 parallel                     |
| Valid: 4/4                                     |
+================================================+
```
- Examples — Sample workflows for common tasks
- Multi-Agent Workflows — Claude Code + Codex + Gemini CLI patterns
- Claude Code Examples — Agentic coding workflows
- Tool Use Guide — Execute shell commands in workflows
- Server API — HTTP endpoints reference
- Configuration Guide — Adding models and providers
```bash
brew install kris-hansen/comanda/comanda
```

```bash
go install github.com/kris-hansen/comanda@latest
```

Download from GitHub Releases for Windows, macOS, and Linux.
```bash
git clone https://github.com/kris-hansen/comanda.git
cd comanda && go build
```

```bash
comanda configure
```

Interactive prompts for:
- Provider selection (OpenAI, Anthropic, Google, X.AI, Ollama, vLLM)
- API keys
- Model names and capabilities
These use their own authentication—no comanda configuration needed:
```bash
# Claude Code
claude --version  # Verify installed

# Gemini CLI
gemini --version

# OpenAI Codex
codex --version
```

Configuration is stored in `.env` (current directory) or a custom path:

```bash
export COMANDA_ENV=/path/to/.env
```

Protect your API keys:
```bash
comanda configure --encrypt
```

```bash
make deps   # Install dependencies
make build  # Build binary
make test   # Run tests
make lint   # Run linter
make dev    # Full dev cycle
```

See CONTRIBUTING.md for contribution guidelines.
MIT — see LICENSE
- OpenAI, Anthropic, and Google for their APIs
- Ollama and vLLM for local model support
- The Go community
