Parent: #204 | Phase 2: Multi-Agent Orchestration
Revised: Polecats now run as separate Kilo CLI processes in the shared Town Container instead of separate cloud-agent-next sessions. Git worktrees provide filesystem isolation.
Goal
Support N concurrent polecats working on different beads in the same rig.
Changes
- `sling` tRPC mutation supports creating multiple beads + agents
- Rig DO manages agent name allocation (sequential names: Toast, Maple, Birch, etc.)
- Each polecat gets its own git worktree and branch: `polecat/<name>/<bead-id-prefix>`
- All polecats run as separate Kilo CLI processes inside the same Town Container
- Dashboard shows all active agents with their streams
- Rig DO enforces single-writer per agent (no two processes for the same agent)
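The single-writer rule can be sketched as a small in-memory registry inside the Rig DO. This is a minimal illustration, assuming the DO tracks one live process ID per agent name; the class and method names are hypothetical, not the actual Rig DO interface.

```typescript
// Illustrative single-writer registry: at most one live process may hold
// an agent name at a time. (Hypothetical names, not the real Rig DO API.)
class AgentRegistry {
  private writers = new Map<string, string>(); // agent name -> process id

  // Returns true if this process now owns the agent; false if a different
  // process already holds it.
  claim(agent: string, processId: string): boolean {
    const current = this.writers.get(agent);
    if (current !== undefined && current !== processId) return false;
    this.writers.set(agent, processId);
    return true;
  }

  // Only the current owner may release the name.
  release(agent: string, processId: string): void {
    if (this.writers.get(agent) === processId) this.writers.delete(agent);
  }
}
```

Because a Durable Object processes requests one at a time, a plain `Map` is enough here; no extra locking is needed to make `claim` race-free.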
Agent Name Allocation
The Rig DO maintains a name pool and assigns names sequentially to new polecats. Names are recycled when agents complete work and are deregistered.
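A minimal sketch of that name pool, assuming a fixed list of names held by the Rig DO (only Toast, Maple, and Birch appear in this issue; the pool contents and method names are illustrative):

```typescript
// Hypothetical name pool: sequential assignment, recycled on deregistration.
class NamePool {
  private pool: string[];
  private inUse = new Set<string>();

  constructor(names: string[]) {
    this.pool = [...names];
  }

  // Assign the first free name in sequence; throws if the pool is exhausted.
  allocate(): string {
    const name = this.pool.find((n) => !this.inUse.has(n));
    if (!name) throw new Error("name pool exhausted");
    this.inUse.add(name);
    return name;
  }

  // Recycle a name once its polecat completes work and is deregistered.
  release(name: string): void {
    this.inUse.delete(name);
  }
}
```

Because allocation scans the pool in order, a recycled name is handed out again before any later unused name, keeping the sequence stable.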
Branch Naming
polecat/toast/abc123 # polecat "Toast" working on bead abc123...
polecat/maple/def456 # polecat "Maple" working on bead def456...
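A helper for this scheme might look like the following. The 6-character bead-ID prefix is an assumption inferred from the `abc123` / `def456` examples above, and the function name is hypothetical:

```typescript
// Build a polecat branch name: polecat/<name>/<bead-id-prefix>.
// The 6-char prefix length is assumed from the examples in this issue.
function polecatBranch(agentName: string, beadId: string): string {
  const prefix = beadId.slice(0, 6);
  return `polecat/${agentName.toLowerCase()}/${prefix}`;
}
```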
Container Impact
The shared container model makes this natural — adding a polecat is just spawning another Kilo CLI process, not provisioning another container. Each polecat gets:
- Its own git worktree (isolated filesystem)
- Its own Kilo CLI process
- Its own JWT for DO auth
- Its own heartbeat reporting
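Inside the container, adding one polecat then reduces to a worktree plus a process. A self-contained shell sketch (the `kilo` invocation and its flags are hypothetical; the branch name reuses the Toast example above):

```shell
# Demo in a throwaway repo: create the worktree and branch the way the Rig
# would for a new polecat.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com commit -q --allow-empty -m "init"

# Each polecat gets its own worktree on its own branch
git worktree add -q "$repo-toast" -b polecat/toast/abc123

# The polecat's Kilo CLI process would then run inside that worktree, e.g.:
# (cd "$repo-toast" && kilo run --agent toast &)   # hypothetical command

git -C "$repo-toast" rev-parse --abbrev-ref HEAD
```

The worktree gives each polecat an isolated checkout and index while sharing the object database, so N polecats do not multiply the repository's disk footprint.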
Resource contention is managed by the container's vCPU/memory limits. Polecats are mostly I/O-bound (waiting on LLM responses), so CPU sharing works well.
Dependencies
Acceptance Criteria
- `sling` mutation updated to handle multiple concurrent assignments