Parent
Part of #204 (Phase 4: Hardening)
Purpose
Periodic sync update tracking how Gastown Local has evolved since we last analyzed it. This is not a commitment to implement any of these features — it's a reference point for understanding where the ecosystem is heading so we can make informed decisions about what to adopt, adapt, or skip. We'll perform more of these sync updates in the future.
Versions covered: v0.7.0 through v0.12.1 (6 releases, ~45k lines added, 440 files changed)
Must-Evaluate for Cloud
ACP (Agent Communication Protocol)
New internal/acp/ package. A JSON-RPC 2.0 proxy that sits between UI clients and AI agent processes. Manages session lifecycle, handshake state, keepalive heartbeats, startup prompt injection, and "propulsion mode" (suppresses output during autonomous grinding so the UI isn't flooded). Includes a Propeller component that watches for queued nudges and injects them into agent sessions in real time via filesystem watchers.
Cloud relevance: Our WebSocket agent gateway needs the same abstractions — session lifecycle management, busy/idle tracking, output suppression during autonomous work, and real-time nudge injection. The JSON-RPC protocol could inform our container ↔ worker communication contract.
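A minimal sketch of how the propulsion-mode idea could look on our side: a session wrapper that forwards agent output to the UI as JSON-RPC 2.0 notifications but drops it while autonomous work is in progress. All names and the notification method here are assumptions, not the actual internal/acp/ API.

```go
package acp

import (
	"encoding/json"
	"io"
	"sync/atomic"
)

// notification is a minimal JSON-RPC 2.0 notification envelope.
type notification struct {
	JSONRPC string `json:"jsonrpc"` // always "2.0"
	Method  string `json:"method"`
	Params  any    `json:"params,omitempty"`
}

// Session proxies one agent process to one UI client.
type Session struct {
	ui         io.Writer   // UI client connection (WebSocket, pipe, ...)
	propulsion atomic.Bool // true while the agent grinds autonomously
}

// SetPropulsion toggles output suppression during autonomous work.
func (s *Session) SetPropulsion(on bool) { s.propulsion.Store(on) }

// EmitOutput forwards agent output unless propulsion mode is suppressing it.
func (s *Session) EmitOutput(text string) error {
	if s.propulsion.Load() {
		return nil // keep the UI quiet during autonomous grinding
	}
	msg, err := json.Marshal(notification{
		JSONRPC: "2.0",
		Method:  "session/output",
		Params:  map[string]string{"text": text},
	})
	if err != nil {
		return err
	}
	_, err = s.ui.Write(append(msg, '\n'))
	return err
}
```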
Agent Provider Abstraction
New internal/agent/provider/ package. Defines a standard ACPProvider interface: Initialize, ListTools, CallTool, CreateMessage, GetStatus, plus lifecycle callbacks. MCP-compatible content blocks (text, tool_use, tool_result, image). Includes a LocalProvider implementation and message translation utilities. Protocol version "2024-11-05" aligns with the emerging MCP standard.
Cloud relevance: This is the abstraction layer for talking to any AI agent regardless of runtime. The cloud version currently only supports the Kilo SDK. If we want to support multiple AI backends (Claude direct, GPT, Gemini, etc.), we'd need an equivalent provider interface. The MCP alignment means we'd be compatible with the emerging standard.
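For reference, a hedged sketch of what an equivalent cloud-side provider interface might look like. The method names mirror the ones listed above, but the signatures, the content-block fields, and the Tool type are assumptions rather than the actual internal/agent/provider/ definitions.

```go
package provider

import "context"

// ContentBlock follows the MCP-style content block shape
// (text, tool_use, tool_result, image).
type ContentBlock struct {
	Type     string `json:"type"`
	Text     string `json:"text,omitempty"`
	ToolName string `json:"tool_name,omitempty"`
	Data     []byte `json:"data,omitempty"`
}

// Message is a role-tagged list of content blocks.
type Message struct {
	Role    string         `json:"role"`
	Content []ContentBlock `json:"content"`
}

// Tool describes a callable tool advertised by the provider.
type Tool struct {
	Name        string
	Description string
	InputSchema map[string]any
}

// ACPProvider abstracts an AI agent backend regardless of runtime,
// so Claude direct, GPT, Gemini, etc. can sit behind the same interface.
type ACPProvider interface {
	Initialize(ctx context.Context, protocolVersion string) error
	ListTools(ctx context.Context) ([]Tool, error)
	CallTool(ctx context.Context, name string, args map[string]any) (*ContentBlock, error)
	CreateMessage(ctx context.Context, history []Message) (*Message, error)
	GetStatus(ctx context.Context) (string, error)
}
```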
Mountain-Eater (Autonomous Epic Execution)
New internal/cmd/mountain.go, internal/witness/mountain.go, and docs/design/convoy/mountain-eater.md. This is the biggest new feature. A four-layer autonomous system for grinding through epics:
- Layer 0 — ConvoyManager: Mechanical bead feeding (already existed)
- Layer 1 — Witness failure tracking: After 3 polecat failures on a mountain-labeled issue, it's auto-skipped and the next bead is dispatched
- Layer 2 — Deacon Dog audit: Periodically checks whether the mountain is progressing; spawns investigation Dogs for stalls
- Layer 3 — Mayor notification: Human escalation when automated recovery fails
A "mountain" is just a convoy with a mountain label — no new entity types. gt mountain stages, labels, and launches in one command with status/pause/resume/cancel subcommands. Key design insight: "no agent holds the thread, the beads ARE the state."
Cloud relevance: This is the killer UX feature — "launch an epic, go away, come back to it done." Our alarm-driven architecture already supports this conceptually. The four-layer resilience model (mechanical feeding → failure tracking → stall investigation → human escalation) maps directly to our witnessPatrol → deaconPatrol → triage agent → Mayor escalation stack. The main gap is the gt mountain CLI equivalent — a Mayor tool or UI action that stages a convoy from an epic and launches it with mountain-level resilience.
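A compact sketch of the Layer 1 idea (failure tracking with auto-skip after three polecat failures, so the convoy can keep feeding beads). The names and the threshold constant are illustrative, not the actual internal/witness/mountain.go implementation.

```go
package witness

// maxFailures is the auto-skip threshold described in the design doc.
const maxFailures = 3

// MountainTracker counts polecat failures per bead on a mountain-labeled convoy.
type MountainTracker struct {
	failures map[string]int // bead ID -> failure count
}

func NewMountainTracker() *MountainTracker {
	return &MountainTracker{failures: make(map[string]int)}
}

// RecordFailure increments the count for a bead and reports whether it should
// be skipped so the next bead can be dispatched.
func (t *MountainTracker) RecordFailure(beadID string) (skip bool) {
	t.failures[beadID]++
	return t.failures[beadID] >= maxFailures
}
```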
Convoy Multi-Store
New internal/convoy/multi_store.go. Cross-database issue resolution for convoys in multi-rig setups. When convoys live in the HQ store but track issues across rig-specific Dolt databases, standard SQL JOINs fail. A StoreResolver routes issue lookups to the correct store based on ID prefix.
Cloud relevance: Cloud Gastown is inherently multi-store (TownDO SQLite is per-town). Cross-town or cross-rig convoy tracking needs the same prefix-based routing pattern.
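The prefix-based routing itself is small; a sketch follows, assuming issue IDs carry a store prefix before a dash (e.g. "hq-123"). The StoreResolver name comes from the release notes, but this signature and lookup logic are assumptions.

```go
package convoy

import (
	"fmt"
	"strings"
)

// IssueStore is whatever interface a single backing database satisfies.
type IssueStore interface {
	GetIssue(id string) (any, error)
}

// StoreResolver routes issue lookups to the store that owns the ID prefix.
type StoreResolver struct {
	stores map[string]IssueStore // prefix -> store
}

// Resolve picks the store for an issue ID such as "hq-123" or "rigA-42".
func (r *StoreResolver) Resolve(issueID string) (IssueStore, error) {
	prefix, _, ok := strings.Cut(issueID, "-")
	if !ok {
		return nil, fmt.Errorf("issue ID %q has no store prefix", issueID)
	}
	store, found := r.stores[prefix]
	if !found {
		return nil, fmt.Errorf("no store registered for prefix %q", prefix)
	}
	return store, nil
}
```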
Swarm Removed — Replaced by Mountain-Eater
The entire internal/swarm/ package and gt swarm command were deleted in v0.11.0. The swarm concept (persistent molecule coordinated by a dedicated agent) was replaced by Mountain-Eater. The design doc explicitly states: Mountain-Eater achieves the same goals through patrol-driven grinding instead of coordinator-driven grinding, eliminating the "hysteresis problem" (agents lose coordination context at compaction).
Cloud relevance: Validates our alarm-driven, stateless architecture. If we were considering a coordinator pattern, this is evidence against it.
Generic Hook Installer
New internal/hooks/installer.go. Replaces per-agent hook installer packages (claude/, gemini/, cursor/, etc.) with a single template-based installer. Embedded Go templates for Claude, Codex, Copilot, Cursor, Gemini, and OpenCode. Role-aware templates (autonomous vs interactive variants), {{GT_BIN}} substitution, and auto-upgrade detection for stale hooks.
Cloud relevance: The cloud needs to install hooks into agent runtimes in the container. The template-based, multi-agent approach is the right pattern. Currently we only support the Kilo SDK — this signals the ecosystem is moving toward multi-provider support.
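A sketch of the template-based pattern: one embedded template per agent, rendered with the gt binary path and a role flag. The file layout, template data shape, and output naming here are assumptions; the real installer's embedding and substitution details may differ.

```go
package hooks

import (
	"embed"
	"fmt"
	"os"
	"path/filepath"
	"text/template"
)

//go:embed templates/*.tmpl
var templatesFS embed.FS

// hookData is the data passed to every hook template.
type hookData struct {
	GTBin      string // path substituted for the gt binary
	Autonomous bool   // role-aware: autonomous vs interactive variant
}

// Install renders the template for one agent (e.g. "claude", "gemini")
// into the target hooks directory.
func Install(agent, destDir, gtBin string, autonomous bool) error {
	tmpl, err := template.ParseFS(templatesFS, "templates/"+agent+".tmpl")
	if err != nil {
		return fmt.Errorf("no template for agent %q: %w", agent, err)
	}
	out, err := os.Create(filepath.Join(destDir, agent+"-hook"))
	if err != nil {
		return err
	}
	defer out.Close()
	return tmpl.Execute(out, hookData{GTBin: gtBin, Autonomous: autonomous})
}
```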
Worth Tracking
Nudge Poller
New internal/nudge/poller.go. Background process polling the nudge queue for non-Claude agents (Gemini, Codex, Cursor) that lack turn-boundary drain hooks. Includes a filesystem Watcher using fsnotify for event-driven nudge delivery (used by ACP Propeller).
Cloud relevance: We'd use WebSocket push rather than polling, but the concept matters — agents without built-in drain hooks need external nudge delivery. Relevant to #1032 (Cloud-Native Nudge System).
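For the event-driven half of this, a minimal fsnotify-based sketch: watch the queue directory and hand each newly created nudge file to a handler. The directory layout and handler signature are assumptions; only the fsnotify usage reflects the library's real API.

```go
package nudge

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

// Watch delivers each newly queued nudge file to handle as soon as it appears.
func Watch(queueDir string, handle func(path string)) error {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	defer w.Close()
	if err := w.Add(queueDir); err != nil {
		return err
	}
	for {
		select {
		case ev, ok := <-w.Events:
			if !ok {
				return nil
			}
			if ev.Op&fsnotify.Create == fsnotify.Create {
				handle(ev.Name) // a new nudge was queued
			}
		case err, ok := <-w.Errors:
			if !ok {
				return nil
			}
			log.Printf("nudge watcher: %v", err)
		}
	}
}
```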
Crew Specialization Design
New docs/gas-city/crew-specialization-design.md. Capability-based dispatch where agents advertise what they handle (with examples and anti-examples), accumulate evidence through completions and bounces, and routing improves over time. "Cellular model" — agents as recursive mini-towns that delegate. Two-sided knowledge (symptom + resolution pairs). Tiered trust (speculative → tentative → operational → proven).
Cloud relevance: Directly informs the agent personas idea in #447. The "claims + evidence" model and explore/exploit tradeoff for routing are the right design for intelligent cloud dispatch.
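To make the claims-plus-evidence model concrete, an illustrative data shape with a naive explore/exploit routing score. All field names, tiers, and the scoring rule are assumptions for discussion, not the design doc's actual schema.

```go
package crew

// Claim is a capability an agent advertises, with examples and anti-examples,
// plus the evidence accumulated against it.
type Claim struct {
	Capability   string
	Examples     []string
	AntiExamples []string
	Trust        TrustTier
	Completions  int // evidence from completed work
	Bounces      int // evidence from bounced work
}

// TrustTier follows the speculative -> tentative -> operational -> proven ladder.
type TrustTier int

const (
	Speculative TrustTier = iota
	Tentative
	Operational
	Proven
)

// Score is a naive routing score: well-proven claims with a good completion
// record rank higher, while unexplored claims keep a floor so they still get
// occasional work (the explore side of the tradeoff).
func (c Claim) Score() float64 {
	total := c.Completions + c.Bounces
	if total == 0 {
		return 0.5 // unexplored: give it a chance
	}
	return float64(c.Completions)/float64(total) + 0.1*float64(c.Trust)
}
```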
Daemon Pressure Monitoring
New internal/daemon/pressure.go with platform-specific files (Darwin, Linux, Windows). Pre-spawn resource checks: CPU pressure (load average per core), memory pressure (available GB), session concurrency cap. Infrastructure agents (deacon, witness, mayor) are exempt. Only polecats, refineries, and dogs are gated.
Cloud relevance: Cloud handles this at the infrastructure level (container resource limits). But the concept of gating agent spawns based on system pressure translates to queue backpressure — don't dispatch more agents than the container can handle.
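A sketch of what the same gating concept looks like as a pre-spawn check. The thresholds, role names, and exemption list are illustrative, and the platform-specific sampling (sysctl, /proc) is elided entirely.

```go
package daemon

// Pressure is a point-in-time sample of system load.
type Pressure struct {
	LoadPerCore  float64 // 1-minute load average divided by core count
	AvailableGB  float64 // available memory in GB
	LiveSessions int     // currently running agent sessions
}

// Infrastructure agents are never gated.
var exempt = map[string]bool{"deacon": true, "witness": true, "mayor": true}

// AllowSpawn reports whether a new agent of the given role may start.
func AllowSpawn(role string, p Pressure) bool {
	if exempt[role] {
		return true
	}
	const (
		maxLoadPerCore = 0.9
		minAvailableGB = 1.0
		maxSessions    = 8
	)
	return p.LoadPerCore < maxLoadPerCore &&
		p.AvailableGB > minAvailableGB &&
		p.LiveSessions < maxSessions
}
```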
Wasteland Spider + Trust
New internal/wasteland/spider.go and trust.go. Spider: fraud detection for the Wasteland reputation system — detects collusion, rubber-stamping, confidence inflation, and self-loops. Trust: tier escalation engine (Drifter → Registered → Contributor → War Chief) with requirements on completions, stamps, distinct validators, quality, and time-in-tier.
Cloud relevance: Only relevant when we implement Wasteland integration (#1040). The trust tier system is a good pattern for any reputation system.
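A small sketch of the tier-escalation check as a generic reputation pattern. The tier names come from the release notes; the requirement fields and the promotion rule are assumptions.

```go
package wasteland

import "time"

// Tier follows the Drifter -> Registered -> Contributor -> War Chief ladder.
type Tier int

const (
	Drifter Tier = iota
	Registered
	Contributor
	WarChief
)

// Requirements to escalate out of the current tier (illustrative fields).
type Requirements struct {
	Completions        int
	Stamps             int
	DistinctValidators int
	MinQuality         float64
	TimeInTier         time.Duration
}

// Record is an agent's standing in its current tier.
type Record struct {
	Tier               Tier
	Completions        int
	Stamps             int
	DistinctValidators int
	Quality            float64
	EnteredTier        time.Time
}

// Eligible reports whether the record meets every requirement for promotion.
func Eligible(r Record, req Requirements) bool {
	return r.Completions >= req.Completions &&
		r.Stamps >= req.Stamps &&
		r.DistinctValidators >= req.DistinctValidators &&
		r.Quality >= req.MinQuality &&
		time.Since(r.EnteredTier) >= req.TimeInTier
}
```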
Channel Events
New internal/channelevents/ package. File-based event emission for named channels. Events are JSON files written to ~/gt/events/<channel>/ and consumed by await-event subscribers (e.g., refinery watching for MERGE_READY). Atomic sequence counters for unique filenames.
Cloud relevance: The channel-based event subscription pattern maps to cloud pub/sub. We'd use WebSocket event streams or a message broker instead of filesystem events.
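A sketch of the emit side of this pattern: one JSON file per event under the channel directory, named from an atomic sequence counter. The path layout matches the description above; everything else (payload shape, permissions, a process-local counter) is assumed for illustration.

```go
package channelevents

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"sync/atomic"
)

// seq provides unique, monotonically increasing filenames within this process.
var seq atomic.Uint64

// Emit writes one JSON event into ~/gt/events/<channel>/ for await-event
// subscribers (e.g. a refinery waiting for MERGE_READY) to pick up.
func Emit(channel, eventType string, payload map[string]any) error {
	home, err := os.UserHomeDir()
	if err != nil {
		return err
	}
	dir := filepath.Join(home, "gt", "events", channel)
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	data, err := json.Marshal(map[string]any{"type": eventType, "payload": payload})
	if err != nil {
		return err
	}
	name := filepath.Join(dir, fmt.Sprintf("%020d.json", seq.Add(1)))
	return os.WriteFile(name, data, 0o644)
}
```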
Not Applicable to Cloud
| Feature | Why not applicable |
|---------|--------------------|
| Wasteland Schema Evolution (wl_schema_evolution.go) | Dolt-specific semver migration. Cloud uses standard DB migrations. |
| Plugin Sync (plugin/sync.go) | Local file sync with SHA-256 hash detection. Cloud uses CI/CD pipelines. |
| gt assign command | Trivial bead create + hook. Already covered by Mayor tools in cloud. |
| Platform-specific pressure checks (Darwin sysctl, Linux /proc) | Cloud containers have their own resource management. |
Related Cloud Issues
| Cloud issue | Gastown Local feature it relates to |
|-------------|-------------------------------------|
| #1032 — Cloud-Native Nudge System | Nudge Poller, ACP Propeller |
| #1040 — Integrating with the Wasteland | Wasteland Spider, Trust, Schema Evolution |
| #447 — Agent Personas (Future Ideas) | Crew Specialization Design |
| #442 — Witness & Deacon Orchestration | Mountain-Eater four-layer model |
| #1006 — Staged Convoys | Mountain-Eater gt mountain staging flow |