feat(agents): agent opt-in enforcement + unified run timeout system#909
Closed
zbigniewsobiecki wants to merge 6 commits into dev from
Conversation
The new `isAgentEnabledForProject()` guard requires an explicit `agent_configs` row before any trigger can fire. Integration tests that called `handle()` and expected a non-null result were failing because no agent config was seeded.

- Seed `agent_configs` + `agent_trigger_configs` rows in tests that expect triggers to fire (trigger-registry, pm-provider-switching, github-personas)
- Export `clearAgentEnabledCache()` from agentConfigsRepository for test isolation (the 5s TTL cache was causing false negatives when tests ran within the TTL window of a prior test that had no config)
- Call `clearAgentEnabledCache()` in `truncateAll()` so every `beforeEach` starts with a clean cache alongside a clean DB

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- Add `getDefaultTaskPrompt(agentType)` to `src/agents/prompts/index.ts`. Reads the factory-default task prompt directly from the YAML definition without requiring `initPrompts()`. Returns null for unknown agent types.
- Wire it into the `agentConfigs.getPrompts` tRPC endpoint as a fourth prompt layer (`defaultTaskPrompt`), completing the inheritance chain: project override → global override → default system (disk template) → default task (YAML definition).
- Update `agent-prompt-overrides.tsx` to use `defaultTaskPrompt` as the final fallback when initialising the task prompt editor and as the target of the "Load default" button.
- Add `startWatchdog(project.watchdogTimeoutMs)` to `triggerManualRun` so manual runs respect the per-project timeout the same way webhook-triggered runs do.
- Fix unit tests: add `getDefaultTaskPrompt` to the `prompts/index.js` mock in agentConfigs.test.ts; mock `lifecycle.js` in manual-runner.test.ts to prevent the watchdog timer from calling process.exit during tests.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
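The four-layer inheritance chain amounts to a first-non-null fallback. A minimal sketch, assuming a flat layers object (the interface and function name here are illustrative, not the PR's actual types):

```typescript
// Resolution order from the commit message:
// project override → global override → default system → default task (YAML).
interface PromptLayers {
  projectOverride: string | null;
  globalOverride: string | null;
  defaultSystemPrompt: string | null;
  defaultTaskPrompt: string | null; // the new fourth layer
}

function resolveTaskPrompt(layers: PromptLayers): string | null {
  // Nullish coalescing walks the chain; the first non-null layer wins.
  return (
    layers.projectOverride ??
    layers.globalOverride ??
    layers.defaultSystemPrompt ??
    layers.defaultTaskPrompt
  );
}
```

This is also why `getDefaultTaskPrompt()` returning null for unknown agent types is safe: the chain simply resolves to null and the editor falls back to empty.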
Replaces two independent, uncoordinated timeout mechanisms with a single coherent flow where `watchdogTimeoutMs` is the source of truth.

## Problem

Two timeouts existed with no knowledge of each other:

1. In-container watchdog (`startWatchdog(project.watchdogTimeoutMs)`) — per-project, updates DB to `timed_out` then exits.
2. Router-level kill (`setTimeout → killWorker`) — global env var, killed the Docker container with no DB update.

This caused three bugs:

- If `WORKER_TIMEOUT_MS` < `watchdogTimeoutMs`, the router killed the container before the watchdog could set the correct DB status.
- GitHub-triggered runs (no `workItemId`) were never marked in the DB after a router kill — they stayed `running` forever.
- Orphaned containers (after router restart) were stopped, but their DB runs were never updated.

## Solution

**Per-project timeout in `spawnWorker`**: the router now reads `watchdogTimeoutMs` from project config and uses it plus a 2-minute buffer (`ROUTER_KILL_BUFFER_MS`) for the container kill timer, so the watchdog always fires first and the router is purely a backstop.

**DB update on router kill (`killWorker`)**: after stopping the container, marks the run `timed_out` via `failOrphanedRun` (workItemId path) or `failOrphanedRunFallback` (GitHub PR runs without a workItemId). The call to `cleanupWorker` no longer passes an exit code, so it skips its own DB write, eliminating the race that could set the wrong status (`failed` instead of `timed_out`).

**Fallback for GitHub PR runs (`failOrphanedRunFallback`)**: new repository function that finds the most recent running run by `projectId + agentType + startedAt ≥ containerStart` and marks it, guarded by an optimistic `WHERE status='running'` check so it is always safe to call even if the watchdog already acted.

**DB update in `cleanupWorker`**: extended to also handle the workItemId-absent case via `failOrphanedRunFallback`, covering crashes of GitHub PR runs that the watchdog didn't catch.
**`cascade.agent.type` container label**: added at spawn time so orphan cleanup can pass `agentType` to `failOrphanedRunFallback`, avoiding matching the wrong run when multiple agent types run concurrently.

**`durationMs` on orphaned runs**: all three fail paths now compute and persist the elapsed duration, so dashboard users see the actual run time instead of null.

**Fixed BullMQ `lockDuration`**: replaced `workerTimeoutMs + 60s` with a fixed 8-hour constant (`BULLMQ_LOCK_DURATION_MS`) — `guardedSpawn` resolves immediately after container start, so the lock is held for seconds, and tying it to `workerTimeoutMs` risked lock expiry for long-running project configs.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
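Two pieces of this design can be sketched in isolation: the backstop delay arithmetic, and the fallback run matcher. Everything below is illustrative — the constant name matches the commit message, but the `Run` shape and `findOrphanCandidate` helper are assumptions standing in for the real repository query.

```typescript
// The router's kill timer is always the per-project watchdog timeout plus a
// fixed buffer, so the in-container watchdog fires first and the router kill
// is purely a backstop.
const ROUTER_KILL_BUFFER_MS = 2 * 60 * 1000;

function routerKillDelayMs(watchdogTimeoutMs: number): number {
  return watchdogTimeoutMs + ROUTER_KILL_BUFFER_MS;
}

interface Run {
  id: string;
  projectId: string;
  agentType: string;
  startedAt: number; // epoch ms
  status: "running" | "timed_out" | "failed" | "completed";
}

// In-memory analogue of failOrphanedRunFallback's matching logic: most
// recent run that is still 'running' for this project + agent type and
// started at or after the container start. Filtering on status mirrors the
// optimistic WHERE status='running' guard — if the watchdog already marked
// the run, nothing matches and the call is a no-op.
function findOrphanCandidate(
  runs: Run[],
  projectId: string,
  agentType: string,
  containerStart: number,
): Run | null {
  return (
    runs
      .filter(
        (r) =>
          r.status === "running" &&
          r.projectId === projectId &&
          r.agentType === agentType &&
          r.startedAt >= containerStart,
      )
      .sort((a, b) => b.startedAt - a.startedAt)[0] ?? null
  );
}
```

The `agentType` filter is where the `cascade.agent.type` label pays off: without it, a fallback triggered for a crashed PM container could mark a concurrently running dev run instead.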
Closing — recreating PR from a clean branch against dev to avoid duplicate commit history from PR #897
Summary
Two related improvements that complete the agent opt-in enforcement feature and fix a class of DB consistency bugs in the router's timeout handling.
Agent opt-in enforcement (completing existing branch work)
- Triggers now fire only when an explicit `agent_configs` row exists — no more "everything enabled by default"
- `getDefaultTaskPrompt()` reads the factory-default task prompt from YAML definitions and surfaces it through the `getPrompts` API as a fourth inheritance layer (`defaultTaskPrompt`)
- `triggerManualRun` now calls `startWatchdog(project.watchdogTimeoutMs)` so manual runs are subject to the same per-project timeout as webhook-triggered runs

Unified agent run timeout system
Replaces two independent, uncoordinated timeout mechanisms with a single coherent flow where `watchdogTimeoutMs` is the source of truth.

Before: two timeouts with no knowledge of each other:
- In-container watchdog (`startWatchdog`) — per-project, updates DB to `timed_out` then exits
- Router-level kill (`setTimeout → killWorker`) — global env var, killed the Docker container with no DB update

This meant GitHub PR runs (no `workItemId`) stayed `running` in the DB forever after a router kill, and orphaned containers (after router restart) were stopped but their runs were never updated.

After — one coherent flow:
- `spawnWorker` reads `watchdogTimeoutMs` from project config and sets its kill timer to `watchdogTimeoutMs + 2 min` (the `ROUTER_KILL_BUFFER_MS` backstop), so the watchdog always fires first
- `killWorker` now marks the run `timed_out` in the DB after stopping the container, covering both `workItemId` runs and GitHub PR runs (via new `failOrphanedRunFallback`)
- `cleanupWorker` is called without an exit code from `killWorker`, preventing it from also firing a DB update with the wrong `failed` status
- `cascade.agent.type` label added to containers so orphan cleanup can narrow its fallback query to the correct agent type
- Orphaned runs now get `durationMs` so dashboard users see actual run time instead of null
- BullMQ `lockDuration` changed from `workerTimeoutMs + 60s` to a fixed 8-hour constant to prevent lock expiry on long-running projects

Test plan
- `npm run typecheck` — clean
- `npm run lint` — clean
- `npm test` — 5411/5411 tests passing (all previously-failing agentConfigs and manual-runner tests now fixed)
- New unit tests covering `failOrphanedRunFallback` in active-workers.test.ts, container-manager.test.ts, and orphan-cleanup.test.ts
- `toHaveBeenCalledTimes(1)` assertions on killWorker DB calls to lock out the double-update regression
- Verified the `cascade.agent.type` label passthrough in orphan cleanup
- Verified a run exceeding `watchdogTimeoutMs` ends with `timed_out` status in dashboard
- Verified a router kill at `+2 min` shows `timed_out` (not `running`) in dashboard

🤖 Generated with Claude Code