Autonomous task runtime with agentic interaction.
Unlike chat-based agent frameworks that respond to one message at a time, Clawy Agent runs an agentic loop — it plans, executes tools, evaluates results, and iterates until the task is complete. You can observe, intervene, and guide at any point.
Think Claude Code, but open-source, multi-provider, and programmable.
- Autonomous task execution — Agent runs a persistent loop: plan → execute → evaluate → iterate until done
- Agentic interaction — Observe progress, intervene mid-turn, guide direction. Agent can ask questions back
- Programmable LLM hooks — Insert LLM-judged checkpoints anywhere in the turn lifecycle for deterministic control
- Multi-provider — Anthropic Claude, OpenAI GPT, Google Gemini natively supported
- 27+ built-in tools — Bash, FileRead/Write/Edit, Glob, Grep, SpawnAgent, Cron, and more
- Multi-channel — Telegram, Discord, HTTP API out of the box
- Built-in memory — Hipocampus 5-level compaction for persistent cross-session context
- Coding discipline — Optional TDD and git commit enforcement
- Child agents — Spawn sub-agents for parallel task execution
```bash
git clone https://github.com/ClawyPro/clawy-agent.git
cd clawy-agent
npm install
npx tsx src/cli/index.ts init
npx tsx src/cli/index.ts start
```

From source:

```bash
git clone https://github.com/ClawyPro/clawy-agent.git
cd clawy-agent
npm install
```

Then run commands with `npx tsx src/cli/index.ts <command>`.

Global install:

```bash
npm install -g clawy-agent
clawy-agent <command>
```

- `npx tsx src/cli/index.ts start`: terminal conversation mode, like Claude Code.
- `npx tsx src/cli/index.ts serve --port 8080`: starts the agent as an HTTP API server. If Telegram or Discord tokens are configured, the agent automatically connects to those channels and responds to messages.
```typescript
import { Agent } from 'clawy-agent'

const agent = new Agent({
  botId: 'my-agent',
  userId: 'local',
  workspaceRoot: './workspace',
  model: 'claude-sonnet-4-6',
  gatewayToken: process.env.ANTHROPIC_API_KEY!,
  apiProxyUrl: 'https://api.anthropic.com',
})

await agent.start()
```

Run `npx tsx src/cli/index.ts init` to generate `clawy-agent.yaml` interactively, or create it manually:
```yaml
llm:
  provider: anthropic # anthropic, openai, or google
  model: claude-sonnet-4-6
  apiKey: ${ANTHROPIC_API_KEY}

channels:
  telegram:
    token: ${TELEGRAM_BOT_TOKEN}
  discord:
    token: ${DISCORD_BOT_TOKEN}

hooks:
  builtin:
    factGrounding: true
    preRefusalVerifier: true
    workspaceAwareness: true
    sessionResume: true
    discipline: false

memory:
  enabled: true
  compaction: true

workspace: ./workspace

identity:
  name: "My Agent"
  instructions: "You are a helpful coding assistant."
```

- Create a bot via @BotFather on Telegram
- Copy the bot token
- Add it to your config:
  ```yaml
  channels:
    telegram:
      token: ${TELEGRAM_BOT_TOKEN}
  ```
- Set the env var and start:
  ```bash
  export TELEGRAM_BOT_TOKEN=123456:ABC-DEF...
  export ANTHROPIC_API_KEY=sk-ant-...
  npx tsx src/cli/index.ts serve
  ```

The agent will automatically start long-polling Telegram for messages and reply in the chat. Typing indicators, reply-to context, and the /reset command are supported out of the box.
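The `${TELEGRAM_BOT_TOKEN}` placeholder in the config is resolved from the environment. A minimal sketch of that interpolation, assuming the common behavior that each `${NAME}` is replaced with the matching environment variable (this is an assumption, not taken from the source):

```typescript
// Assumed behavior: replace each ${NAME} in a config value with env[NAME].
// Unset variables resolve to the empty string in this sketch.
function interpolate(
  value: string,
  env: Record<string, string | undefined>,
): string {
  return value.replace(/\$\{(\w+)\}/g, (_, name: string) => env[name] ?? '')
}

// interpolate('${TELEGRAM_BOT_TOKEN}', process.env) resolves the token at startup.
```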
- Create an application at the Discord Developer Portal
- Create a bot under the application and copy the token
- Invite the bot to your server with the `bot` and `applications.commands` scopes
- Add to your config:
  ```yaml
  channels:
    discord:
      token: ${DISCORD_BOT_TOKEN}
  ```
- Start the agent; it connects to the Discord gateway automatically. The bot responds to @mentions.
Switch providers by changing `llm.provider` and `llm.apiKey`:

```yaml
# Anthropic Claude
llm:
  provider: anthropic
  model: claude-sonnet-4-6
  apiKey: ${ANTHROPIC_API_KEY}
```

```yaml
# OpenAI GPT
llm:
  provider: openai
  model: gpt-5.4
  apiKey: ${OPENAI_API_KEY}
```

```yaml
# Google Gemini
llm:
  provider: google
  model: gemini-2.5-flash
  apiKey: ${GOOGLE_API_KEY}
```

All providers support streaming, tool use, and the full agentic loop. The provider layer handles format conversion automatically.
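The format-conversion idea can be sketched in TypeScript. This is illustrative only, not the library's real interface: the interface name, method, and field shapes below are assumptions, and the native payloads are simplified versions of each vendor's request format.

```typescript
// A common request shape the agent core builds, regardless of provider.
interface CommonRequest {
  model: string
  prompt: string
}

// Each provider adapter converts the common shape to its native wire format.
interface Provider {
  name: string
  toNative(req: CommonRequest): Record<string, unknown>
}

const anthropic: Provider = {
  name: 'anthropic',
  // Anthropic-style messages payload (simplified).
  toNative: (req) => ({
    model: req.model,
    max_tokens: 1024,
    messages: [{ role: 'user', content: req.prompt }],
  }),
}

const openai: Provider = {
  name: 'openai',
  // OpenAI-style payload (simplified).
  toNative: (req) => ({
    model: req.model,
    messages: [{ role: 'user', content: req.prompt }],
  }),
}

// The agent core stays provider-agnostic: it only ever builds a CommonRequest.
const payload = anthropic.toNative({ model: 'claude-sonnet-4-6', prompt: 'hi' })
```

Because the core only depends on the `Provider` interface, adding a new vendor is a matter of writing one adapter.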
Hooks are the core differentiator: insert LLM-judged checkpoints anywhere in the turn lifecycle.
```
User message
  → beforeTurnStart
  → [agentic loop]
      → beforeLLMCall    ← Context augmentation
      → LLM streaming
      → afterLLMCall     ← Response analysis
      → beforeToolUse    ← Tool permit/deny
      → Tool execution
      → [loop continues...]
  → beforeCommit         ← Quality verification
  → afterTurnEnd         ← Memory save, cleanup
```
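As an illustration of the `beforeToolUse` checkpoint, here is a self-contained sketch of a `dangerousPatterns`-style guard. The type names and hook signature are assumptions for the sketch, not the library's actual API:

```typescript
// Hypothetical shapes for a beforeToolUse checkpoint.
type ToolCall = { tool: string; input: string }
type HookResult = { allow: boolean; reason?: string }

// Deny Bash calls whose input matches a known-dangerous pattern;
// permit everything else.
function beforeToolUse(call: ToolCall): HookResult {
  const dangerous = [/rm\s+-rf\s+\//, /mkfs/, /dd\s+if=.*of=\/dev\//]
  if (call.tool === 'Bash' && dangerous.some((re) => re.test(call.input))) {
    return { allow: false, reason: 'blocked by dangerousPatterns' }
  }
  return { allow: true }
}
```

A real hook would sit at the `beforeToolUse` point in the diagram above, with the turn runner executing the tool only when `allow` is true.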
| Hook | Default | Purpose |
|---|---|---|
| `factGrounding` | on | Hallucination prevention |
| `preRefusalVerifier` | on | Prevents unnecessary refusals |
| `workspaceAwareness` | on | Auto-injects filesystem context |
| `sessionResume` | on | Seeds context on session resume |
| `discipline` | off | TDD/git commit enforcement |
| `dangerousPatterns` | on | Blocks dangerous operations |
```
Agent (singleton)
├── Session (per conversation)
│   ├── Turn (atomic agentic loop)
│   │   ├── LLM call → Tool dispatch → Evaluate → Repeat
│   │   └── Hook checkpoints at each lifecycle point
│   ├── Transcript (persistent history)
│   └── Context (layered: identity + rules + memory + tools)
├── Tool Registry (27+ built-in)
├── Hook Registry (built-in + custom)
├── Channel Adapters (Telegram, Discord)
├── Cron Scheduler
├── Memory (Hipocampus compaction)
└── SpawnAgent (child agent execution)
```
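The Turn's plan → execute → evaluate → iterate cycle can be sketched as follows. This is illustrative only; the function names and shapes are not the library's internals:

```typescript
// One step of model output: either a tool request or a final answer.
type Step = { tool?: string; input?: string; final?: string }

// Minimal agentic loop: call the model, dispatch any requested tool,
// feed the result back, and stop when the model emits a final answer.
function runTurn(
  llm: (history: string[]) => Step,
  tools: Record<string, (input: string) => string>,
  userMessage: string,
  maxIterations = 10,
): string {
  const history = [userMessage]
  for (let i = 0; i < maxIterations; i++) {
    const step = llm(history)
    if (step.final !== undefined) return step.final // evaluate: task done
    const output = tools[step.tool!](step.input!)   // execute the tool
    history.push(`${step.tool} -> ${output}`)       // iterate with the result
  }
  return 'max iterations reached'
}
```

The real Turn adds hook checkpoints around each arrow in this loop, which is what makes the lifecycle programmable.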
- Node.js 22+
- An LLM API key (Anthropic, OpenAI, or Google)
See CONTRIBUTING.md.
Apache 2.0 — see LICENSE.