Job-level concurrency group ignores workflow inputs #20187

@JanKrivanek

Description

Job-level concurrency group ignores workflow inputs, causing fan-out cancellations

Summary

When a workflow is designed to run multiple concurrent instances with different inputs (fan-out pattern), the compiler-generated job-level concurrency group causes all but 2 of N dispatched runs to be cancelled. The workflow-level concurrency.group correctly includes ${{ inputs.* }} discriminators, but the agent and output job-level groups use a static pattern (gh-aw-copilot-${{ github.workflow }}) that is identical across all concurrent runs.

Reproduction

1. Define a worker workflow with input-discriminated concurrency

# worker.md frontmatter
on:
  workflow_dispatch:
    inputs:
      finding_id:
        description: "Unique ID of the item to process"
        required: true

concurrency:
  group: gh-aw-${{ github.workflow }}-${{ inputs.finding_id }}

2. Dispatch N instances from an orchestrator

The orchestrator dispatches multiple instances of the worker, each with a different finding_id:

dispatch-workflow: worker, inputs: { finding_id: "aaa" }
dispatch-workflow: worker, inputs: { finding_id: "bbb" }
dispatch-workflow: worker, inputs: { finding_id: "ccc" }
dispatch-workflow: worker, inputs: { finding_id: "ddd" }

3. Observe: only 2 out of N runs complete

The compiled .lock.yml produces:

# Workflow-level — unique per finding ✅
concurrency:
  group: gh-aw-worker-${{ inputs.finding_id }}

jobs:
  activation:
    # No shared concurrency — runs fine ✅

  agent:
    concurrency:
      group: "gh-aw-copilot-${{ github.workflow }}"  # ← static, shared across ALL runs ❌

  output:
    concurrency:
      group: "gh-aw-copilot-${{ github.workflow }}"  # ← same static group ❌

All N runs start their activation job successfully. When the agent jobs begin, they all compete for the same concurrency group. GitHub Actions allows one running + one pending job per group. Each new pending job cancels the previous pending one. Result: only the first (already running) and last (final pending) runs survive.
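The first-plus-last pattern falls directly out of those queueing rules. A minimal model of them (an illustration of the documented one-running-plus-one-pending behavior, not GitHub's actual scheduler code) shows that with N jobs contending for one static group, exactly the first and last survive:

```python
# Toy model of a GitHub Actions concurrency group: at most one job runs and
# one job is pending per group; a newly queued job cancels the previously
# pending one. Jobs arrive while the first is still running.
def simulate(arrivals):
    running, pending, cancelled = None, None, []
    for job in arrivals:
        if running is None:
            running = job                 # first job starts immediately
        elif pending is None:
            pending = job                 # second job waits
        else:
            cancelled.append(pending)     # each newcomer evicts the pending job
            pending = job
    survivors = [j for j in (running, pending) if j is not None]
    return survivors, cancelled

# Ten agent jobs (runs #21..#30) all hit the static group at once:
survivors, cancelled = simulate([f"#{n}" for n in range(21, 31)])
print(survivors)   # ['#21', '#30'] — first + last
print(cancelled)   # ['#22', '#23', ..., '#29']
```

This reproduces the survivor pattern in the table below for every batch size.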

Real-world evidence

This is happening in dotnet/skills with the DevOps health monitoring workflows:

Consistent pattern across every batch:

| Date  | Dispatched   | Completed    | Cancelled   | Surviving runs |
|-------|--------------|--------------|-------------|----------------|
| Mar 9 | 10 (#21–#30) | 2 (#21, #30) | 8 (#22–#29) | First + last   |
| Mar 8 | 3 (#18–#20)  | 2 (#18, #20) | 1 (#19)     | First + last   |
| Mar 7 | 4 (#14–#17)  | 2 (#14, #17) | 2 (#15–#16) | First + last   |
| Mar 4 | 4 (#9–#12)   | 2 (#9, #12)  | 2 (#10–#11) | First + last   |
| Mar 3 | 4 (#4–#8)    | 2 (#4, #8)   | 3 (#5–#7)   | First + last   |

The cancelled runs all die within ~45–60s (during the activation→agent handoff), while successful runs take 5–10 minutes.

Expected behavior

The job-level concurrency group should incorporate the same discriminator as the workflow-level group, so that concurrent runs with different inputs don't interfere. For example:

agent:
  concurrency:
    group: "gh-aw-copilot-${{ github.workflow }}-${{ inputs.finding_id }}"

Suggested fix

Allow the workflow author to specify a concurrency discriminator in the frontmatter that flows into the compiler-generated job-level concurrency groups. For example:

concurrency:
  group: gh-aw-${{ github.workflow }}-${{ inputs.finding_id }}

The compiler could extract the discriminator expression (${{ inputs.finding_id }}) and append it to the agent/output job-level groups automatically, or provide a dedicated frontmatter field:

concurrency:
  group: gh-aw-${{ github.workflow }}-${{ inputs.finding_id }}
  job-discriminator: ${{ inputs.finding_id }}  # propagated to agent/output jobs
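The extraction step could be as simple as scanning the workflow-level group for `${{ inputs.* }}` expressions and appending them to the static job-level base. A sketch of that idea (hypothetical helper, not gh-aw's actual compiler code; the `job_group` name and regex are assumptions for illustration):

```python
import re

# Matches expressions like ${{ inputs.finding_id }} in a concurrency group.
INPUT_EXPR = re.compile(r"\$\{\{\s*inputs\.[A-Za-z_]\w*\s*\}\}")

def job_group(workflow_group: str,
              base: str = "gh-aw-copilot-${{ github.workflow }}") -> str:
    """Append any input discriminators found in the workflow-level
    concurrency group to the compiler-generated job-level group."""
    discriminators = INPUT_EXPR.findall(workflow_group)
    return "-".join([base, *discriminators]) if discriminators else base

print(job_group("gh-aw-${{ github.workflow }}-${{ inputs.finding_id }}"))
# gh-aw-copilot-${{ github.workflow }}-${{ inputs.finding_id }}
```

With no discriminator present, the current static group would be emitted unchanged, so existing workflows keep their behavior.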

Current workaround

We limited dispatch-workflow.max to 2 in the orchestrator, since only 2 runs survive anyway. This wastes no Actions minutes but limits investigation throughput to 2 findings per health check run.

Environment

  • gh-aw version: v0.45.4
  • Workflow trigger: workflow_dispatch with inputs
