67 changes: 67 additions & 0 deletions .github/aw/create-agentic-workflow.md
@@ -136,6 +136,16 @@ Agentic workflows execute as **a single GitHub Actions job** with the AI agent r
- Example: "Run migrations, rollback if deployment fails"
- **Alternative**: Use traditional GitHub Actions with conditional steps and job failure handling

### Steering Pattern

For complex scenarios, use a hybrid model: **agentic control-plane decisions** with a **deterministic workflow backbone**:

- Agentic layer steers: selects next wave, decides retry scope, and dispatches downstream workflows
- Deterministic jobs execute: approvals, fan-out, builds/tests, and auditable state transitions
- Use workflow outputs/artifacts/tracker IDs for state handoff between runs

This keeps the AI in the decision-making loop while preserving reliability for long-running orchestration.
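As one possible shape for that handoff (the input names below are illustrative conventions, not a gh-aw contract), the agentic layer can dispatch a deterministic workflow with its decision encoded as plain inputs:

```yaml
# Sketch of a deterministic consumer of an agentic steering decision.
# "tracker_id" and "next_wave" are assumed conventions, not built-ins.
on:
  workflow_dispatch:
    inputs:
      tracker_id:
        description: Correlates this run with the campaign state
        required: true
      next_wave:
        description: Wave selected by the agentic control plane
        required: true
```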

### How to Handle These Requests

When a user requests capabilities beyond agentic workflows:
@@ -272,6 +282,63 @@ These resources contain workflow patterns, best practices, safe outputs, and per
- **Feature synchronization**: Main repo propagates changes to sub-repos via PRs
- **Organization-wide coordination**: Single workflow creates issues across multiple repos

**CentralRepoOps (private control-plane variant):**
- Use when the user wants **one private repository** to coordinate rollout across many repositories
- Keywords: "single private repo", "control repo", "rollout to 100s of repos", "fleet", "governance"
- Prefer **agentic-first explicit notation** in examples and generated markdown:
- Show prompt body first, but keep tunable frontmatter defaults visible in the same file
- Avoid imports for commonly tuned controls (concurrency, rate-limit, tracker-id, safe-outputs)
- Omit `default:` fields when using standard defaults; apply defaults via fallback expressions where values are consumed
- Preferred example style:

```aw
---
on:
  workflow_dispatch:
    inputs:
      objective: { required: true }
      rollout_profile: { required: false, default: "standard" }
concurrency:
  group: "centralrepoops-${{ github.workflow }}-${{ inputs.rollout_profile }}"
  cancel-in-progress: false
rate-limit:
  max: 2
  window: 60
---

# <workflow>

## Objective
{{inputs.objective}}

## Instructions
Focus on the rollout objective and produce the smallest safe change.
```

- Expansion rule:
- Start with explicit tunable defaults in one file
- Extract only truly fixed internals if the user asks for further simplification
- Always include a short "Tuning" section in generated workflow text:
- Call out exactly which knobs users are expected to change
- Mark deterministic/runtime wiring as "usually keep as-is"
- Prefer profile-based scaffolding for this variant:
- **Lean profile (default):** compact frontmatter + instruction-first prompt
- **Advanced profile (only when requested):** campaign tracking and wave-promotion controls
- Then customize only objective/targets/caps instead of writing from scratch
- Selection rule (minimize choices):
1. Single-repo demo → simple profile
2. Multi-repo control plane (most cases) → CentralRepoOps lean profile
3. Campaign + promotion steering required → CentralRepoOps advanced profile
- Prefer markdown-first authoring (`.github/workflows/<id>.md`) with deterministic jobs + prompt instructions
- Prefer built-in frontmatter controls first: `manual-approval`, `concurrency`, `rate-limit`, `tracker-id`
- Recommended model:
1. agentic control-plane decisions for steering (next wave, retry scope, mutation intent)
2. deterministic workflow backbone (fan-out, shard/build validation, approvals, auditable state transitions)
- For campaign-style rollouts, include explicit control-plane metadata as workflow inputs (for example: `campaign_id`, `campaign_goal`) and propagate them into summary/dispatch outputs
- For wave steering, include promotion controls in workflow inputs (for example: `promote_to_next_wave`, `next_rollout_profile`) and publish structured payload via `safe-outputs.dispatch-workflow`
- Prefer a **single starter workflow** for initial adoption, then split into retry/replay workflows as scale grows
- Reference docs: https://github.github.com/gh-aw/patterns/centralrepoops/
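A hedged sketch tying these knobs together (the `campaign_id`, `campaign_goal`, `promote_to_next_wave`, and `next_rollout_profile` inputs come from the bullets above; the `tracker-id` wiring is illustrative, not confirmed gh-aw syntax):

```aw
---
on:
  workflow_dispatch:
    inputs:
      campaign_id: { required: true }
      campaign_goal: { required: true }
      promote_to_next_wave: { required: false, default: "false" }
      next_rollout_profile: { required: false }
tracker-id: "centralrepoops-${{ inputs.campaign_id }}"
---

# Campaign wave controller

## Objective
Evaluate whether campaign "{{inputs.campaign_id}}" met its goal; if
{{inputs.promote_to_next_wave}} is true, publish a structured dispatch
payload targeting {{inputs.next_rollout_profile}}.
```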

**Architectural Constraints:**
- ✅ **CAN**: Create issues/PRs/comments in external repos using `target-repo`
- ✅ **CAN**: Read from external repos using GitHub toolsets (repos, issues, actions)
198 changes: 198 additions & 0 deletions .github/workflows/reusable-central-deterministic-gate.yml
@@ -0,0 +1,198 @@
name: Reusable Central Deterministic Gate

on:
  workflow_call:
    inputs:
      rollout_profile:
**Comment on lines +3 to +6** — Copilot AI (Feb 19, 2026):

This reusable workflow uses `secrets.REPO_TOKEN` for cross-repo checkouts, but the secret is not declared under `on.workflow_call.secrets`. Declare the expected secret(s) (e.g., `REPO_TOKEN`) so callers can pass them (including via `secrets: inherit`) and the workflow fails fast with a clear contract.
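A minimal sketch of the declaration the comment asks for, using standard GitHub Actions `workflow_call` syntax:

```yaml
on:
  workflow_call:
    # Declaring the secret makes the contract explicit; callers can pass it
    # individually or via `secrets: inherit`.
    secrets:
      REPO_TOKEN:
        description: Token with read access to the target repositories
        required: true
```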
        description: Rollout profile (pilot, standard, broad)
        required: false
        default: standard
        type: string
      targets_pilot_json:
        description: JSON array of owner/repo targets for pilot profile
        required: false
        default: "[]"
        type: string
      targets_standard_json:
        description: JSON array of owner/repo targets for standard profile
        required: false
        default: "[]"
        type: string
      targets_broad_json:
        description: JSON array of owner/repo targets for broad profile
        required: false
        default: "[]"
        type: string
      target_ref:
        description: Default target git ref for deterministic checkout/build
        required: false
        default: main
        type: string
      shard_count:
        description: Number of rollout shards
        required: false
        default: "1"
        type: string
      shard_index:
        description: Zero-based shard index to execute
        required: false
        default: "0"
        type: string
    outputs:
      primary_target_repo:
        description: Primary repository used by the agentic wrapper checkout step
        value: ${{ jobs.select_shard.outputs.primary_target_repo }}
      target_ref:
        description: Target git ref applied to deterministic checkout/build
        value: ${{ jobs.resolve_policy.outputs.target_ref }}
**Comment** — Copilot AI (Feb 19, 2026):

`selected_targets_json` is produced by `jobs.select_shard` but is not exposed as a top-level `workflow_call` output. Without surfacing it, callers can't pass the selected target list into summary/replay/reporting jobs. Consider adding a `selected_targets_json` output (value from `jobs.select_shard.outputs.selected_targets_json`) to the `on.workflow_call.outputs` section.

Suggested change:

```yaml
        value: ${{ jobs.resolve_policy.outputs.target_ref }}
      selected_targets_json:
        description: JSON array of selected owner/repo targets for this shard
        value: ${{ jobs.select_shard.outputs.selected_targets_json }}
```

permissions:
  contents: read

jobs:
  resolve_policy:
    runs-on: ubuntu-latest
    outputs:
      targets_json: ${{ steps.resolve.outputs.targets_json }}
      target_ref: ${{ steps.resolve.outputs.target_ref }}
      shard_count: ${{ steps.resolve.outputs.shard_count }}
      shard_index: ${{ steps.resolve.outputs.shard_index }}
    steps:
      - id: resolve
        uses: actions/github-script@v7
        env:
          ROLLOUT_PROFILE: ${{ inputs.rollout_profile }}
          TARGETS_PILOT_JSON: ${{ inputs.targets_pilot_json }}
          TARGETS_STANDARD_JSON: ${{ inputs.targets_standard_json }}
          TARGETS_BROAD_JSON: ${{ inputs.targets_broad_json }}
          TARGET_REF: ${{ inputs.target_ref }}
          SHARD_COUNT: ${{ inputs.shard_count }}
          SHARD_INDEX: ${{ inputs.shard_index }}
          CURRENT_REPOSITORY: ${{ github.repository }}
        with:
          script: |
            const profile = (process.env.ROLLOUT_PROFILE || 'standard').toLowerCase();
            const targetsByProfile = {
              pilot: process.env.TARGETS_PILOT_JSON || '[]',
              standard: process.env.TARGETS_STANDARD_JSON || '[]',
              broad: process.env.TARGETS_BROAD_JSON || '[]',
            };

            if (!Object.prototype.hasOwnProperty.call(targetsByProfile, profile)) {
              core.setFailed(`Invalid rollout_profile '${profile}'. Expected one of: pilot, standard, broad`);
              return;
            }

            let targets = [];
            try {
              const parsed = JSON.parse(targetsByProfile[profile]);
              if (!Array.isArray(parsed) || !parsed.every((repo) => typeof repo === 'string' && repo.includes('/'))) {
                core.setFailed(`targets_${profile}_json must be a JSON array of owner/repo strings`);
                return;
              }
              targets = parsed;
**Comment on lines +87 to +93** — Copilot AI (Feb 19, 2026):

Target repo validation is very loose (`repo.includes('/')`), which allows values that aren't valid `owner/repo` (extra slashes, whitespace, etc.) to pass the gate and then fail later during checkout. Tighten validation by trimming and requiring exactly one slash and non-empty owner/repo parts (or use a stricter regex) so invalid targets fail with a clear policy error.

Suggested change:

```js
            const isValidRepo = (value) => {
              if (typeof value !== 'string') return false;
              const trimmed = value.trim();
              if (!trimmed) return false;
              const parts = trimmed.split('/');
              if (parts.length !== 2) return false;
              const [owner, repo] = parts;
              if (!owner || !repo) return false;
              // Disallow whitespace within owner or repo segments
              if (/\s/.test(owner) || /\s/.test(repo)) return false;
              return true;
            };
            try {
              const parsed = JSON.parse(targetsByProfile[profile]);
              if (!Array.isArray(parsed) || !parsed.every(isValidRepo)) {
                core.setFailed(`targets_${profile}_json must be a JSON array of owner/repo strings`);
                return;
              }
              targets = parsed.map((repo) => repo.trim());
```
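The stricter check suggested above can be exercised standalone; this sketch mirrors the suggestion's rules (trim, exactly one slash, non-empty halves, no inner whitespace):

```javascript
// Standalone version of the stricter owner/repo check suggested above.
const isValidRepo = (value) => {
  if (typeof value !== 'string') return false;
  const trimmed = value.trim();
  if (!trimmed) return false;
  const parts = trimmed.split('/');
  if (parts.length !== 2) return false; // exactly one slash
  const [owner, repo] = parts;
  if (!owner || !repo) return false; // both halves non-empty
  // Reject whitespace inside either segment.
  return !/\s/.test(owner) && !/\s/.test(repo);
};
```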
            } catch (error) {
              core.setFailed(`Failed to parse targets_${profile}_json: ${error.message}`);
              return;
            }

            if (targets.length === 0) {
              targets = [process.env.CURRENT_REPOSITORY];
              core.info(`No configured targets for '${profile}'. Falling back to current repository: ${targets[0]}`);
            }

            const targetRef = String(process.env.TARGET_REF || 'main').trim() || 'main';
            const shardCountRaw = String(process.env.SHARD_COUNT || '1').trim();
            const shardIndexRaw = String(process.env.SHARD_INDEX || '0').trim();
            const shardCount = Number.parseInt(shardCountRaw, 10);
            const shardIndex = Number.parseInt(shardIndexRaw, 10);

            if (!Number.isInteger(shardCount) || shardCount <= 0) {
              core.setFailed(`shard_count must be a positive integer, got '${shardCountRaw}'`);
              return;
            }

            if (!Number.isInteger(shardIndex) || shardIndex < 0) {
              core.setFailed(`shard_index must be a non-negative integer, got '${shardIndexRaw}'`);
              return;
            }

            core.setOutput('targets_json', JSON.stringify(targets));
            core.setOutput('target_ref', targetRef);
            core.setOutput('shard_count', String(shardCount));
            core.setOutput('shard_index', String(shardIndex));

  select_shard:
    needs: [resolve_policy]
    runs-on: ubuntu-latest
    outputs:
      selected_targets_json: ${{ steps.select.outputs.selected_targets_json }}
      primary_target_repo: ${{ steps.select.outputs.primary_target_repo }}
    steps:
      - id: select
        uses: actions/github-script@v7
        env:
          TARGETS_JSON: ${{ needs.resolve_policy.outputs.targets_json }}
          SHARD_COUNT: ${{ needs.resolve_policy.outputs.shard_count }}
          SHARD_INDEX: ${{ needs.resolve_policy.outputs.shard_index }}
        with:
          script: |
            const targets = JSON.parse(process.env.TARGETS_JSON || '[]');
            const shardCount = Number.parseInt(process.env.SHARD_COUNT || '1', 10);
            const shardIndex = Number.parseInt(process.env.SHARD_INDEX || '0', 10);

            if (!Array.isArray(targets) || targets.length === 0) {
              core.setFailed('No target repositories found after policy resolution');
              return;
            }

            if (shardIndex >= shardCount) {
              core.setFailed(`shard_index (${shardIndex}) must be less than shard_count (${shardCount})`);
              return;
            }

            const shardSize = Math.ceil(targets.length / shardCount);
            const start = shardIndex * shardSize;
            const end = start + shardSize;
            const selected = targets.slice(start, end);

            if (selected.length === 0) {
              core.setFailed(`Selected shard ${shardIndex} is empty for ${targets.length} targets and ${shardCount} shards`);
              return;
            }

            core.setOutput('selected_targets_json', JSON.stringify(selected));
            core.setOutput('primary_target_repo', selected[0]);
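The contiguous sharding math in the select step above can be checked in isolation (the workflow additionally fails the run when a shard comes back empty):

```javascript
// Same contiguous slicing as the select step above: shard i gets the
// i-th ceil(n / shardCount)-sized chunk of the target list.
const selectShard = (targets, shardCount, shardIndex) => {
  const shardSize = Math.ceil(targets.length / shardCount);
  const start = shardIndex * shardSize;
  return targets.slice(start, start + shardSize);
};
```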

  deterministic_build:
    needs: [resolve_policy, select_shard]
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        repo: ${{ fromJson(needs.select_shard.outputs.selected_targets_json) }}
    steps:
      - name: Checkout selected repository
        uses: actions/checkout@v5
        with:
          repository: ${{ matrix.repo }}
          ref: ${{ needs.resolve_policy.outputs.target_ref }}
          token: ${{ secrets.REPO_TOKEN }}

      - name: Deterministic validation
        shell: bash
        run: |
          set -euo pipefail

          echo "Validating repository: ${{ matrix.repo }}"
          git rev-parse HEAD

          if [[ -f package.json ]]; then
            npm ci
            npm run --if-present build
**Comment on lines +191 to +192** — Copilot AI (Feb 19, 2026):

The deterministic Node build path runs `npm ci` whenever `package.json` exists. `npm ci` fails if no lockfile is present (common in repos using yarn/pnpm or no lock committed), which can cause false-negative gate failures. Consider checking for `package-lock.json`/`npm-shrinkwrap.json` before `npm ci`, and falling back to `npm install` or skipping the JS build when no lockfile is available.

Suggested change:

```bash
            if [[ -f package-lock.json || -f npm-shrinkwrap.json ]]; then
              npm ci
              npm run --if-present build
            else
              echo "package.json found, but no npm lockfile (package-lock.json or npm-shrinkwrap.json) present; skipping Node deterministic build."
            fi
```
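The lockfile-aware branch in the suggestion can be factored into a small testable function (the `detect_node_strategy` name is ours, not part of the workflow):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Decide how to handle a checked-out repo's Node build, mirroring the
# lockfile-aware logic suggested above. Prints one of: ci, skip, none.
detect_node_strategy() {
  local dir="$1"
  if [[ ! -f "$dir/package.json" ]]; then
    echo "none"
  elif [[ -f "$dir/package-lock.json" || -f "$dir/npm-shrinkwrap.json" ]]; then
    echo "ci"
  else
    echo "skip"
  fi
}
```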
          elif [[ -f go.mod ]]; then
            go build ./...
          else
            echo "No default build strategy detected; checkout validation completed."
          fi
105 changes: 105 additions & 0 deletions .github/workflows/reusable-central-replay-failed-shards.yml
@@ -0,0 +1,105 @@
name: Reusable Central Replay Failed Shards

on:
  workflow_call:
    inputs:
      failed_repos_json:
**Comment on lines +3 to +6** — Copilot AI (Feb 19, 2026):

This reusable workflow uses `secrets.REPO_TOKEN` for cross-repo checkout, but the secret is not declared under `on.workflow_call.secrets`. Declare `REPO_TOKEN` (and any others) in `workflow_call` so callers can pass it and the secret contract is explicit.
        description: JSON array of owner/repo values that failed in a previous run
        required: false
        default: "[]"
        type: string
      target_ref:
        description: Git ref used for replay checkout/build validation
        required: false
        default: main
        type: string
      max_replays:
        description: Maximum number of failed repositories to replay in one run
        required: false
        default: "10"
        type: string
    outputs:
      replay_targets_json:
        description: JSON array of replay targets selected for this run
        value: ${{ jobs.select_replays.outputs.replay_targets_json }}
      replay_count:
        description: Number of replay targets selected
        value: ${{ jobs.select_replays.outputs.replay_count }}

permissions:
  contents: read

jobs:
  select_replays:
    runs-on: ubuntu-latest
    outputs:
      replay_targets_json: ${{ steps.select.outputs.replay_targets_json }}
      replay_count: ${{ steps.select.outputs.replay_count }}
    steps:
      - id: select
        uses: actions/github-script@v7
        env:
          FAILED_REPOS_JSON: ${{ inputs.failed_repos_json }}
          MAX_REPLAYS: ${{ inputs.max_replays }}
        with:
          script: |
            const raw = process.env.FAILED_REPOS_JSON || '[]';
            const maxReplaysRaw = String(process.env.MAX_REPLAYS || '10').trim();
            const maxReplays = Number.parseInt(maxReplaysRaw, 10);

            if (!Number.isInteger(maxReplays) || maxReplays <= 0) {
              core.setFailed(`max_replays must be a positive integer, got '${maxReplaysRaw}'`);
              return;
            }

            let repos;
            try {
              repos = JSON.parse(raw);
            } catch (error) {
              core.setFailed(`failed_repos_json is not valid JSON: ${error.message}`);
              return;
            }

            if (!Array.isArray(repos)) {
              core.setFailed('failed_repos_json must be a JSON array');
              return;
            }

            const cleaned = [...new Set(repos.filter((repo) => typeof repo === 'string' && repo.includes('/')))];
**Comment** — Copilot AI (Feb 19, 2026):

Replay target filtering only checks `repo.includes('/')`, so invalid values (extra slashes, whitespace, etc.) can slip through and then fail during checkout. Tighten validation (trim + exactly one slash + non-empty owner/repo parts) so the workflow fails early with a clearer error.

Suggested change:

```js
            const normalizeRepo = (value) => {
              if (typeof value !== 'string') return null;
              const trimmed = value.trim();
              if (!trimmed) return null;
              const parts = trimmed.split('/');
              if (parts.length !== 2) return null;
              const [owner, repo] = parts;
              if (!owner || !repo) return null;
              return `${owner}/${repo}`;
            };
            const cleaned = [
              ...new Set(
                repos
                  .map(normalizeRepo)
                  .filter((repo) => repo !== null)
              ),
            ];
```
            const selected = cleaned.slice(0, maxReplays);

            core.setOutput('replay_targets_json', JSON.stringify(selected));
            core.setOutput('replay_count', String(selected.length));

  deterministic_replay:
    needs: [select_replays]
    if: ${{ needs.select_replays.outputs.replay_count != '0' }}
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        repo: ${{ fromJson(needs.select_replays.outputs.replay_targets_json) }}
    steps:
      - name: Checkout replay target repository
        uses: actions/checkout@v5
        with:
          repository: ${{ matrix.repo }}
          ref: ${{ inputs.target_ref }}
          token: ${{ secrets.REPO_TOKEN }}

      - name: Deterministic replay validation
        shell: bash
        run: |
          set -euo pipefail

          echo "Replaying failed target: ${{ matrix.repo }}"
          git rev-parse HEAD

          if [[ -f package.json ]]; then
            npm ci
**Comment** — Copilot AI (Feb 19, 2026):

The replay validation runs `npm ci` based solely on `package.json`. `npm ci` requires a lockfile and will fail for repos without `package-lock.json`/`npm-shrinkwrap.json` (or using pnpm/yarn), which can make replays fail even when the repo is healthy. Consider conditioning `npm ci` on a lockfile and/or providing a safer fallback.

Suggested change:

```bash
          if [[ -f package-lock.json || -f npm-shrinkwrap.json ]]; then
            npm ci
          else
            echo "No npm lockfile detected; using 'npm install' instead of 'npm ci'."
            npm install
          fi
```
            npm run --if-present build
          elif [[ -f go.mod ]]; then
            go build ./...
          else
            echo "No default build strategy detected; replay checkout validation completed."
          fi