Privacy-first, local-execution coding agent
```bash
# 1. Install Ollama from https://ollama.com
# 2. Pull the model
ollama pull qwen2.5-coder:7b
# 3. Create custom Aztec model
ollama create aztec -f Modelfile
# 4. Run Aztec
./aztec.exe -m aztec
```
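Step 3 expects a `Modelfile` in the working directory. The sketch below shows the kind of file that command consumes; the syntax is standard Ollama, but the parameter value and system prompt are illustrative assumptions, not Aztec's shipped configuration:

```
# Illustrative Modelfile sketch -- not Aztec's shipped configuration
FROM qwen2.5-coder:7b

# Assumed value: lower temperature for more deterministic code edits
PARAMETER temperature 0.2

# Assumed prompt: replace with your own agent instructions
SYSTEM """You are Aztec, a coding agent. Make small, verifiable changes."""
```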
Aztec is not tied to Ollama; the `--provider` flag selects the backend:

```bash
# Ollama (default)
ollama pull qwen2.5-coder:7b
./aztec.exe -m qwen2.5-coder:7b

# OpenAI
./aztec.exe --provider openai -m gpt-4o

# Universal (any OpenAI-compatible API)
./aztec.exe --provider universal -m google/gemini-2.0-flash
```
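Hosted providers need credentials. How Aztec picks up the key is not shown above; the snippet below assumes it reads the conventional `OPENAI_API_KEY` environment variable, which you should verify against your build:

```bash
# Assumption: Aztec reads the standard OPENAI_API_KEY variable
export OPENAI_API_KEY="sk-..."   # placeholder; substitute your real key
./aztec.exe --provider openai -m gpt-4o
```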
Hardware requirements:

| Component | Minimum | Recommended |
|---|---|---|
| CPU | Core i5 8th gen | Core i7 / Ryzen 7 |
| RAM | 8GB | 16GB+ |
| Storage | 10GB free | SSD |
| GPU | Not required | NVIDIA for speed |
For Ollama + Qwen2.5-Coder-7B:
- Model size: 4.3GB
- RAM usage: ~6-8GB while running
- Speed: ~5-10 tokens/second on Core i5
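To check these numbers on your own machine, Ollama's CLI reports both the on-disk size and the memory footprint of a loaded model:

```bash
ollama list   # on-disk size of each pulled model
ollama ps     # memory usage of currently loaded models
```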
Aztec executes real tools to get work done:
| Tool | Description |
|---|---|
| `TERMINAL` | Run shell commands (mkdir, npm install, go build, etc.) |
| `WRITE_FILE` | Create new files with complete code |
| `READ_FILE` | Read existing files |
| `EDIT` | Modify existing files (search/replace) |
| `LIST_FILES` | Explore directory structure |
| `GLOB` | Find files by pattern |
| `GREP` | Search file contents |
| `TODO_WRITE` | Track progress on complex tasks |
| `DONE` | Signal task completion |
| `GIT_*` | Git operations (diff, status, commit, branch) |
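The file-discovery tools behave much like their shell counterparts, which is a useful mental model. Conceptually (this is an analogy, not Aztec's implementation):

```bash
# GLOB ~ find files by pattern
find . -name "*.go"

# GREP ~ search file contents
grep -rn "TODO" src/
```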
```
aztec [OPTIONS]

OPTIONS:
    -w, --workspace <DIR>    Set workspace directory
    -m, --model <NAME>       Set model name
    -p, --provider <TYPE>    Set provider (ollama, openai, vllm, lmstudio, universal)
        --approval <MODE>    Set approval mode (confirm, auto, yolo)
    -v, --verbose            Enable verbose output
        --no-color           Disable colored output
    -h, --help               Print help
        --version            Print version

SESSION COMMANDS:
    --list-sessions          List saved sessions
    --resume <id>            Resume a session
    --fork <id>              Fork a session
    --export <id>            Export session to markdown
```
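Putting the flags together, a typical local session might look like this (the workspace path is illustrative):

```bash
# Work inside ./myproject with the local Ollama model, confirming each tool call
./aztec.exe -w ./myproject -m qwen2.5-coder:7b --approval confirm

# Later: list saved sessions and resume one by id
./aztec.exe --list-sessions
./aztec.exe --resume <id>
```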