fix(linear): create issues directly in backlog state via stateId#1137

Merged
zbigniewsobiecki merged 7 commits into dev from fix/linear-createworkitem-backlog-state
Apr 17, 2026

Conversation

@zbigniewsobiecki (Member)

Summary

  • Pass stateId to linearClient.createIssue() so new work items are created directly in the backlog state, instead of the team's default state ("Ideas")
  • Remove the fragile post-creation moveWorkItem transition — 13 lines of error-swallowing code that silently failed, leaving issues in the wrong state
  • Fix promptContext.ts to provide linearConfig.teamId (not the backlog status UUID) as backlogListId for Linear — the splitting agent was hitting "Entity not found: Team" errors and burning ~15 LLM iterations per run recovering from them

Root cause

Two bugs working together:

  1. The Linear adapter's createWorkItem() called createIssue() without stateId, then attempted a separate moveWorkItem transition that caught and swallowed errors
  2. The prompt context provided the backlog status UUID as backlogListId, but CreateWorkItem uses it as containerId which Linear interprets as teamId
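The first fix can be sketched as follows. This is a minimal illustration of the described behavior, not the actual adapter code: the config shape, the `LinearClientLike` interface, and the function signature are assumptions standing in for the real `src/pm/linear/adapter.ts` wiring.

```typescript
// Assumed config shape: workflow state UUIDs keyed by logical status name.
interface LinearConfig {
  teamId: string;
  statuses?: { backlog?: string };
}

interface CreateIssueInput {
  teamId: string;
  title: string;
  stateId?: string;
}

// Hypothetical stand-in for linearClient.
interface LinearClientLike {
  createIssue(input: CreateIssueInput): Promise<{ id: string }>;
}

async function createWorkItem(
  client: LinearClientLike,
  config: LinearConfig,
  title: string,
): Promise<{ id: string }> {
  // Pass stateId at creation time so the issue lands directly in the
  // backlog state; omit it when no backlog state is configured, letting
  // Linear fall back to the team's default state.
  return client.createIssue({
    teamId: config.teamId,
    title,
    ...(config.statuses?.backlog ? { stateId: config.statuses.backlog } : {}),
  });
  // No post-creation moveWorkItem transition — placement is atomic, so
  // there is no error-swallowing fallback to silently fail.
}
```

Making placement part of the create call removes the window in which the issue sits in the wrong state, which is what the deleted 13-line transition was papering over.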

Files changed

File                                            Change
src/pm/linear/adapter.ts                        Pass stateId on create, remove post-creation transition
src/agents/shared/promptContext.ts              Linear backlogListId = teamId
tests/unit/pm/linear/adapter.test.ts            Updated + new tests for stateId behavior
tests/unit/agents/shared/promptContext.test.ts  Updated tests for Linear prompt context

Test plan

  • New test: createWorkItem passes stateId when statuses.backlog is configured
  • New test: createWorkItem omits stateId when statuses.backlog is not configured
  • New test: createWorkItem does not call updateIssueState after creation
  • Updated test: backlogListId resolves to teamId for Linear projects
  • Updated test: backlogListId still resolves to teamId when statuses are empty
  • Full unit suite: 7861 tests pass
  • Typecheck clean
  • Lint clean
  • Manual: trigger a splitting run on a Linear project, verify stories land in "Backlog"

🤖 Generated with Claude Code

zbigniewsobiecki and others added 7 commits April 17, 2026 06:47
Spec 007 addresses the silent review drop observed on MNG-122/PR-572:
the work-item lock's total-concurrency cap (MAX_WORK_ITEM_CONCURRENCY=2)
blocked review dispatch while other agents were enqueued for the same
work item. Three intersecting fixes specified: per-agent-type locking,
lock release timing, and a post-completion review hook.

Two plans:
- 007/1 (lock-infra): remove MAX_WORK_ITEM_CONCURRENCY total cap,
  keep per-type MAX_SAME_TYPE_PER_WORK_ITEM=1, enrich lock-skip log.
- 007/2 (post-completion-review): deterministic review dispatch from
  the implementation pipeline after success + green CI.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…serialization

Removes the MAX_WORK_ITEM_CONCURRENCY total cap from isWorkItemLocked.
The total cap falsely serialized unrelated agent types: the review for
MNG-122/PR-572 was silently dropped because 2 agents (implementation +
backlog-manager) were already enqueued for the same work item, hitting
the total limit of 2.

Now only MAX_SAME_TYPE_PER_WORK_ITEM = 1 is enforced. Different agent
types can run concurrently on the same work item (e.g. review starts
while implementation's container is still cleaning up). Same-type
duplicate prevention is preserved.

Changes:
- src/router/work-item-lock.ts — deleted MAX_WORK_ITEM_CONCURRENCY
  constant and the total-count checks (in-memory + DB). Simplified
  getInMemoryCounts → getInMemorySameTypeCount (no longer iterates all
  keys). Removed the dbTotal query (saves one DB round-trip per lock
  check). Deleted the unused keyPrefix helper.
- src/router/webhook-processor.ts — enriched the lock-skip log with
  source (adapter type) and renamed agentType → blockedAgentType for
  clarity.
- CLAUDE.md — added per-agent-type lock semantics note under "Agent
  triggers".

Tests: updated 6 existing tests + added 2 new cross-type concurrency
tests. All 7852 unit tests pass. Lint + typecheck clean.
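The resulting lock semantics can be sketched like this. The names `isWorkItemLocked` and `MAX_SAME_TYPE_PER_WORK_ITEM` follow the commit message; the in-memory map is a simplification of the real store (the actual implementation also consults the DB), so treat this as an illustration of the per-type-only check, not the shipped code.

```typescript
const MAX_SAME_TYPE_PER_WORK_ITEM = 1;

// key: `${workItemId}:${agentType}` → count of enqueued/running agents
const inMemoryCounts = new Map<string, number>();

function isWorkItemLocked(workItemId: string, agentType: string): boolean {
  // Only same-type duplicates are blocked. With the total cap removed,
  // different agent types may run concurrently on the same work item
  // (e.g. review starts while implementation's container cleans up).
  const sameType = inMemoryCounts.get(`${workItemId}:${agentType}`) ?? 0;
  return sameType >= MAX_SAME_TYPE_PER_WORK_ITEM;
}

function acquire(workItemId: string, agentType: string): boolean {
  if (isWorkItemLocked(workItemId, agentType)) return false;
  const key = `${workItemId}:${agentType}`;
  inMemoryCounts.set(key, (inMemoryCounts.get(key) ?? 0) + 1);
  return true;
}
```

Under the old total cap of 2, the MNG-122 review would have been rejected here because implementation + backlog-manager already held the lock; under the per-type check it acquires immediately.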

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…fter implementation

When an implementation agent succeeds with a PR, the execution pipeline
now checks CI status and fires the review agent before the container
exits. This guarantees review dispatch within seconds of implementation
completion, regardless of GitHub webhook timing.

Uses the same recursive `runAgentExecutionPipeline` pattern as the
splitting → backlog-manager chain. The review runs in the same container,
same credential scope. Uses `claimReviewDispatch` with the same dedup key
format as the `check-suite-success` trigger, so the two paths cannot
double-enqueue.

The hook is best-effort: GitHub API failures, Redis errors, or any
exception is caught, logged as warn, and does NOT break the
implementation pipeline.

New function `tryDispatchPostCompletionReview` in agent-execution.ts:
1. Extracts prNumber from agentResult.prUrl
2. Fetches PR details (headSha, headRef) from GitHub
3. Checks CI status via getCheckSuiteStatus — if not allPassing, returns
   (check-suite-success webhook will handle it when CI finishes)
4. Claims the dedup key via claimReviewDispatch — if already claimed,
   returns (review was already dispatched by the webhook path)
5. Builds a review TriggerResult and calls runAgentExecutionPipeline
   recursively (same pattern as splitting → backlog-manager)
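The five steps above can be sketched as a single guarded function. All collaborator interfaces here (`GitHubLike`, `DedupLike`, the `dispatchReview` callback) are hypothetical stand-ins for the real agent-execution wiring, and the dedup key format is illustrative — the point is the ordering of the guards and the best-effort catch.

```typescript
interface AgentResult { agentType: string; success: boolean; prUrl?: string }
interface GitHubLike { allChecksPassing(prNumber: number): Promise<boolean> }
interface DedupLike { claimReviewDispatch(key: string): Promise<boolean> }

async function tryDispatchPostCompletionReview(
  result: AgentResult,
  github: GitHubLike,
  dedup: DedupLike,
  dispatchReview: (prNumber: number) => Promise<void>,
): Promise<void> {
  try {
    if (result.agentType !== "implementation" || !result.success) return;
    // 1. Extract the PR number from the result's prUrl.
    const prNumber = Number(result.prUrl?.match(/\/pull\/(\d+)/)?.[1]);
    if (!prNumber) return; // no PR to review
    // 2–3. If CI is not green yet, bail out — the check-suite-success
    // webhook path will dispatch the review when CI finishes.
    if (!(await github.allChecksPassing(prNumber))) return;
    // 4. Claim the shared dedup key; if the webhook path already
    // claimed it, the review is in flight and we must not double-enqueue.
    if (!(await dedup.claimReviewDispatch(`review:${prNumber}`))) return;
    // 5. Hand off to the review pipeline.
    await dispatchReview(prNumber);
  } catch (err) {
    // Best-effort: never break the implementation pipeline.
    console.warn("post-completion review dispatch failed", err);
  }
}
```

Because both the webhook trigger and this hook race through the same claim, whichever path wins the dedup key dispatches exactly one review.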

Tests: +7 new tests covering: fires review on success + green CI, skips
for non-implementation, skips on failure, skips when no prUrl, skips when
CI not green, skips when already dispatched, swallows errors gracefully.
All 7859 unit tests pass. Lint + typecheck clean.

CLAUDE.md updated with post-completion review dispatch documentation.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Linear's createIssue API supports stateId, but the adapter was creating
issues in the team's default state ("Ideas") then attempting a separate
moveWorkItem transition — which silently failed. Additionally, the
splitting agent's prompt context provided the backlog status UUID as
containerId, causing "Entity not found: Team" errors and wasting ~15
LLM iterations per run.

- Pass stateId to linearClient.createIssue() for atomic backlog placement
- Remove fragile post-creation moveWorkItem transition (13 LOC of
  error-swallowing code)
- Fix promptContext to provide teamId (not status UUID) as backlogListId
  for Linear, matching what CreateWorkItem expects as containerId
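The promptContext side of the fix reduces to choosing the right identifier. A minimal sketch, assuming a config shape with `teamId` and per-status UUIDs (the real `promptContext.ts` builds a larger context object):

```typescript
interface LinearConfig {
  teamId: string;
  statuses?: { backlog?: string }; // workflow state UUIDs
}

function backlogListIdFor(config: LinearConfig): string {
  // CreateWorkItem passes backlogListId through as containerId, which
  // Linear resolves as a team. Before the fix this returned
  // config.statuses?.backlog — a workflow state UUID — which Linear
  // rejected with "Entity not found: Team".
  return config.teamId;
}
```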

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
zbigniewsobiecki merged commit 4109c23 into dev Apr 17, 2026
8 checks passed
zbigniewsobiecki deleted the fix/linear-createworkitem-backlog-state branch April 17, 2026 13:06

codecov Bot commented Apr 17, 2026

Codecov Report

❌ Patch coverage is 98.97959% with 1 line in your changes missing coverage. Please review.

Files with missing lines Patch % Lines
src/triggers/shared/agent-execution.ts 98.71% 1 Missing ⚠️


zbigniewsobiecki added a commit that referenced this pull request Apr 17, 2026
addChecklistItem() created sub-issues via linearClient.createIssue()
without stateId, so they landed in the team's default state ("Ideas")
instead of "Backlog". Same bug as createWorkItem() (fixed in #1137),
different code path.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>