From e9575dc006c7d4961e48300159af978398e78354 Mon Sep 17 00:00:00 2001
From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com>
Date: Thu, 16 Apr 2026 04:33:59 +0000
Subject: [PATCH 01/10] Initial plan

From 71c1438af450d8a1064b4fbff3bcbf410a9456a2 Mon Sep 17 00:00:00 2001
From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com>
Date: Thu, 16 Apr 2026 04:40:54 +0000
Subject: [PATCH 02/10] feat: add JSON/YAML config file loading and schema docs

Agent-Logs-Url: https://github.com/github/gh-aw-firewall/sessions/dcd77d8b-19a4-4eab-9b64-5772d37fda34
---
 README.md                   |   2 +
 docs/awf-config-spec.md     | 116 +++++++++++
 docs/awf-config.schema.json | 151 ++++++++++++++
 src/cli.ts                  |  23 +++
 src/config-file.test.ts     | 118 +++++++++++
 src/config-file.ts          | 382 ++++++++++++++++++++++++++++++++++++
 6 files changed, 792 insertions(+)
 create mode 100644 docs/awf-config-spec.md
 create mode 100644 docs/awf-config.schema.json
 create mode 100644 src/config-file.test.ts
 create mode 100644 src/config-file.ts

diff --git a/README.md b/README.md
index fd54482d..2bab9d7f 100644
--- a/README.md
+++ b/README.md
@@ -34,6 +34,8 @@ The `--` separator divides firewall options from the command to run.
 - [Quick start](docs/quickstart.md) — install, verify, and run your first command
 - [Usage guide](docs/usage.md) — CLI flags, domain allowlists, examples
+- [AWF config schema](docs/awf-config.schema.json) — machine-readable JSON Schema for JSON/YAML configs
+- [AWF config spec](docs/awf-config-spec.md) — normative processing and precedence rules for tooling/compiler integration
 - [Enterprise configuration](docs/enterprise-configuration.md) — GitHub Enterprise Cloud and Server setup
 - [Chroot mode](docs/chroot-mode.md) — use host binaries with network isolation
 - [API proxy sidecar](docs/api-proxy-sidecar.md) — secure credential management for LLM APIs
diff --git a/docs/awf-config-spec.md b/docs/awf-config-spec.md
new file mode 100644
index 00000000..869b7fef
--- /dev/null
+++ b/docs/awf-config-spec.md
@@ -0,0 +1,116 @@
+# AWF Configuration Specification (W3C-style)
+
+## Status of This Document
+
+This document defines the canonical configuration model for AWF (`awf`) and is intended for:
+
+- `awf` CLI runtime loading (`--config`)
+- tooling that compiles workflows to AWF invocations (including `gh-aw`)
+- IDE/static validation via JSON Schema
+
+The machine-readable schema is published at:
+
+- `docs/awf-config.schema.json`
+
+## 1. Conformance
+
+The keywords **MUST**, **MUST NOT**, **SHOULD**, and **MAY** are to be interpreted as described in RFC 2119.
+
+An AWF config document is conforming when:
+
+1. It is valid JSON or YAML.
+2. Its data model satisfies `docs/awf-config.schema.json`.
+3. Unknown properties are not present (closed-world schema).
+
+## 2. Processing Model
+
+1. The user invokes `awf --config <path> -- <command>`.
+2. If `<path>` is `-`, AWF reads configuration bytes from stdin.
+3. If `<path>` ends with `.json`, AWF parses it as JSON.
+4. If `<path>` ends with `.yaml` or `.yml`, AWF parses it as YAML.
+5. Otherwise, AWF attempts a JSON parse first, then a YAML parse.
+6. AWF validates the parsed document and fails fast on validation errors.
+7. AWF maps config fields to CLI option semantics.
+8. **CLI options MUST take precedence over config file values.**
+
+## 3. Precedence Rules
+
+The effective configuration is resolved in order of increasing precedence:
+
+1. AWF internal defaults
+2. Config file (`--config`)
+3. Explicit CLI flags
+
+This precedence model allows reusable checked-in configs with environment-specific CLI overrides.
+
+## 4. Data Model
+
+The root object MAY contain:
+
+- `$schema`
+- `network`
+- `apiProxy`
+- `security`
+- `container`
+- `environment`
+- `logging`
+- `rateLimiting`
+
+Section semantics and constraints are defined by `docs/awf-config.schema.json`.
+
+## 5. CLI Mapping (Normative)
+
+Tools generating AWF invocations (such as `gh-aw`) SHOULD use this mapping:
+
+- `network.allowDomains[]` → `--allow-domains <csv>`
+- `network.blockDomains[]` → `--block-domains <csv>`
+- `network.dnsServers[]` → `--dns-servers <csv>`
+- `network.upstreamProxy` → `--upstream-proxy`
+- `apiProxy.enabled` → `--enable-api-proxy`
+- `apiProxy.targets.<provider>.host` → `--<provider>-api-target`
+- `apiProxy.targets.openai.basePath` → `--openai-api-base-path`
+- `apiProxy.targets.anthropic.basePath` → `--anthropic-api-base-path`
+- `apiProxy.targets.gemini.basePath` → `--gemini-api-base-path`
+- `security.sslBump` → `--ssl-bump`
+- `security.enableDlp` → `--enable-dlp`
+- `security.enableHostAccess` → `--enable-host-access`
+- `security.allowHostPorts` → `--allow-host-ports`
+- `security.allowHostServicePorts` → `--allow-host-service-ports`
+- `security.difcProxy.host` → `--difc-proxy-host`
+- `security.difcProxy.caCert` → `--difc-proxy-ca-cert`
+- `container.memoryLimit` → `--memory-limit`
+- `container.agentTimeout` → `--agent-timeout`
+- `container.enableDind` → `--enable-dind`
+- `container.workDir` → `--work-dir`
+- `container.containerWorkDir` → `--container-workdir`
+- `container.imageRegistry` → `--image-registry`
+- `container.imageTag` → `--image-tag`
+- `container.skipPull` → `--skip-pull`
+- `container.buildLocal` → `--build-local`
+- `container.agentImage` → `--agent-image`
+- `container.tty` → `--tty`
+- `container.dockerHost` → `--docker-host`
+- `environment.envFile` → `--env-file`
+- `environment.envAll` → `--env-all`
+- `environment.excludeEnv[]` → repeated `--exclude-env`
+- `logging.logLevel` → `--log-level`
+- `logging.diagnosticLogs` → `--diagnostic-logs`
+- `logging.auditDir` → `--audit-dir`
+- `logging.proxyLogsDir` → `--proxy-logs-dir`
+- `logging.sessionStateDir` → `--session-state-dir`
+- `rateLimiting.enabled: false` → `--no-rate-limit`
+- `rateLimiting.requestsPerMinute` → `--rate-limit-rpm`
+- `rateLimiting.requestsPerHour` → `--rate-limit-rph`
+- `rateLimiting.bytesPerMinute` → `--rate-limit-bytes-pm`
+
+## 6. Stdin Mode
+
+AWF MUST support `--config -` for programmatic/pipeline scenarios.
+
+## 7. Error Reporting
+
+On parse or validation failure, AWF MUST:
+
+1. exit non-zero
+2. print an error describing location and reason
+3. avoid partial execution
diff --git a/docs/awf-config.schema.json b/docs/awf-config.schema.json
new file mode 100644
index 00000000..89005b0c
--- /dev/null
+++ b/docs/awf-config.schema.json
@@ -0,0 +1,151 @@
+{
+  "$schema": "https://json-schema.org/draft/2020-12/schema",
+  "$id": "https://raw.githubusercontent.com/github/gh-aw-firewall/main/docs/awf-config.schema.json",
+  "title": "AWF Configuration",
+  "description": "JSON/YAML configuration for awf CLI. 
CLI flags override config file values.", + "type": "object", + "additionalProperties": false, + "properties": { + "$schema": { + "type": "string" + }, + "network": { + "type": "object", + "additionalProperties": false, + "properties": { + "allowDomains": { + "type": "array", + "items": { "type": "string" } + }, + "blockDomains": { + "type": "array", + "items": { "type": "string" } + }, + "dnsServers": { + "type": "array", + "items": { "type": "string" } + }, + "upstreamProxy": { + "type": "string" + } + } + }, + "apiProxy": { + "type": "object", + "additionalProperties": false, + "properties": { + "enabled": { "type": "boolean" }, + "targets": { + "type": "object", + "additionalProperties": false, + "properties": { + "openai": { "$ref": "#/$defs/providerTarget" }, + "anthropic": { "$ref": "#/$defs/providerTarget" }, + "copilot": { "$ref": "#/$defs/providerHostOnlyTarget" }, + "gemini": { "$ref": "#/$defs/providerTarget" } + } + } + } + }, + "security": { + "type": "object", + "additionalProperties": false, + "properties": { + "sslBump": { "type": "boolean" }, + "enableDlp": { "type": "boolean" }, + "enableHostAccess": { "type": "boolean" }, + "allowHostPorts": { + "oneOf": [ + { "type": "string" }, + { "type": "array", "items": { "type": "string" } } + ] + }, + "allowHostServicePorts": { + "oneOf": [ + { "type": "string" }, + { "type": "array", "items": { "type": "string" } } + ] + }, + "difcProxy": { + "type": "object", + "additionalProperties": false, + "properties": { + "host": { "type": "string" }, + "caCert": { "type": "string" } + } + } + } + }, + "container": { + "type": "object", + "additionalProperties": false, + "properties": { + "memoryLimit": { "type": "string" }, + "agentTimeout": { "type": "integer", "minimum": 1 }, + "enableDind": { "type": "boolean" }, + "workDir": { "type": "string" }, + "containerWorkDir": { "type": "string" }, + "imageRegistry": { "type": "string" }, + "imageTag": { "type": "string" }, + "skipPull": { "type": "boolean" }, + 
"buildLocal": { "type": "boolean" }, + "agentImage": { "type": "string" }, + "tty": { "type": "boolean" }, + "dockerHost": { "type": "string" } + } + }, + "environment": { + "type": "object", + "additionalProperties": false, + "properties": { + "envFile": { "type": "string" }, + "envAll": { "type": "boolean" }, + "excludeEnv": { + "type": "array", + "items": { "type": "string" } + } + } + }, + "logging": { + "type": "object", + "additionalProperties": false, + "properties": { + "logLevel": { + "type": "string", + "enum": ["debug", "info", "warn", "error"] + }, + "diagnosticLogs": { "type": "boolean" }, + "auditDir": { "type": "string" }, + "proxyLogsDir": { "type": "string" }, + "sessionStateDir": { "type": "string" } + } + }, + "rateLimiting": { + "type": "object", + "additionalProperties": false, + "properties": { + "enabled": { "type": "boolean" }, + "requestsPerMinute": { "type": "integer", "minimum": 1 }, + "requestsPerHour": { "type": "integer", "minimum": 1 }, + "bytesPerMinute": { "type": "integer", "minimum": 1 } + } + } + }, + "$defs": { + "providerTarget": { + "type": "object", + "additionalProperties": false, + "properties": { + "host": { "type": "string" }, + "basePath": { "type": "string" } + } + }, + "providerHostOnlyTarget": { + "type": "object", + "additionalProperties": false, + "properties": { + "host": { "type": "string" } + } + } + } +} diff --git a/src/cli.ts b/src/cli.ts index f8b0141f..2a359454 100644 --- a/src/cli.ts +++ b/src/cli.ts @@ -29,6 +29,7 @@ import { validateDomainOrPattern, SQUID_DANGEROUS_CHARS } from './domain-pattern import { loadAndMergeDomains } from './rules'; import { detectHostDnsServers } from './dns-resolver'; import { detectUpstreamProxy, parseProxyUrl, parseNoProxy } from './upstream-proxy'; +import { loadAwfFileConfig, mapAwfFileConfigToCliOptions, applyConfigOptionsWithCliPrecedence } from './config-file'; import { OutputFormat } from './types'; import { version } from '../package.json'; @@ -1233,6 +1234,7 @@ export 
const program = new Command();
 // Option group markers used by the custom help formatter to insert section headers.
 // Each key is the long flag name of the first option in a group.
 const optionGroupHeaders: Record<string, string> = {
+  'config': 'Configuration:',
   'allow-domains': 'Domain Filtering:',
   'build-local': 'Image Management:',
   'env': 'Container Configuration:',
@@ -1298,6 +1300,11 @@ program
     }
   })
+  .option(
+    '--config <path>',
+    'Path to AWF JSON/YAML config file (use "-" to read from stdin)'
+  )
+
   // -- Domain Filtering --
   .option(
     '-d, --allow-domains <domains>',
@@ -1608,6 +1615,22 @@ program
   // - The $$$$ escaping pattern requires literal $ preservation
   //
   const agentCommand = args.length === 1 ? args[0] : joinShellArgs(args);
+
+  if (options.config) {
+    try {
+      const fileConfig = loadAwfFileConfig(options.config);
+      const fileDerivedOptions = mapAwfFileConfigToCliOptions(fileConfig);
+      applyConfigOptionsWithCliPrecedence(
+        options as Record<string, unknown>,
+        fileDerivedOptions,
+        (optionName: string) => program.getOptionValueSource(optionName) === 'cli'
+      );
+    } catch (error) {
+      console.error(`Error loading --config: ${error instanceof Error ?
error.message : String(error)}`); + process.exit(1); + } + } + // Parse and validate options const logLevel = options.logLevel as LogLevel; if (!['debug', 'info', 'warn', 'error'].includes(logLevel)) { diff --git a/src/config-file.test.ts b/src/config-file.test.ts new file mode 100644 index 00000000..502abfac --- /dev/null +++ b/src/config-file.test.ts @@ -0,0 +1,118 @@ +import * as fs from 'fs'; +import * as os from 'os'; +import * as path from 'path'; +import { + applyConfigOptionsWithCliPrecedence, + loadAwfFileConfig, + mapAwfFileConfigToCliOptions, + validateAwfFileConfig, +} from './config-file'; + +describe('config-file', () => { + describe('validateAwfFileConfig', () => { + it('accepts valid nested config sections', () => { + const errors = validateAwfFileConfig({ + network: { allowDomains: ['github.com'] }, + apiProxy: { enabled: true, targets: { openai: { host: 'api.openai.com' } } }, + container: { agentTimeout: 30 }, + }); + + expect(errors).toEqual([]); + }); + + it('reports unknown keys and invalid value types', () => { + const errors = validateAwfFileConfig({ + network: { allowDomains: 'github.com' }, + unknown: true, + }); + + expect(errors).toContain('config.unknown is not supported'); + expect(errors).toContain('config.network.allowDomains must be an array of strings'); + }); + + it('rejects unsupported copilot basePath', () => { + const errors = validateAwfFileConfig({ + apiProxy: { targets: { copilot: { host: 'api.githubcopilot.com', basePath: '/v1' } } }, + }); + + expect(errors).toContain('config.apiProxy.targets.copilot.basePath is not supported'); + }); + }); + + describe('loadAwfFileConfig', () => { + let testDir: string; + + beforeEach(() => { + testDir = fs.mkdtempSync(path.join(os.tmpdir(), 'awf-config-test-')); + }); + + afterEach(() => { + if (fs.existsSync(testDir)) { + fs.rmSync(testDir, { recursive: true, force: true }); + } + }); + + it('loads JSON config files', () => { + const filePath = path.join(testDir, 'awf.json'); + 
fs.writeFileSync(filePath, JSON.stringify({ logging: { logLevel: 'debug' } })); + + const result = loadAwfFileConfig(filePath); + + expect(result.logging?.logLevel).toBe('debug'); + }); + + it('loads YAML config files', () => { + const filePath = path.join(testDir, 'awf.yaml'); + fs.writeFileSync(filePath, 'network:\n allowDomains:\n - github.com\n'); + + const result = loadAwfFileConfig(filePath); + + expect(result.network?.allowDomains).toEqual(['github.com']); + }); + + it('loads config from stdin when path is "-"', () => { + const result = loadAwfFileConfig('-', () => '{"network":{"allowDomains":["github.com"]}}'); + + expect(result.network?.allowDomains).toEqual(['github.com']); + }); + + it('throws helpful validation errors', () => { + const filePath = path.join(testDir, 'awf.json'); + fs.writeFileSync(filePath, JSON.stringify({ container: { agentTimeout: -1 } })); + + expect(() => loadAwfFileConfig(filePath)).toThrow('config.container.agentTimeout must be a positive integer'); + }); + }); + + describe('mapAwfFileConfigToCliOptions', () => { + it('maps nested config values to CLI option names', () => { + const result = mapAwfFileConfigToCliOptions({ + network: { allowDomains: ['github.com', 'api.github.com'], dnsServers: ['1.1.1.1', '1.0.0.1'] }, + apiProxy: { enabled: true, targets: { anthropic: { host: 'api.anthropic.com', basePath: '/anthropic' } } }, + container: { agentTimeout: 15, containerWorkDir: '/workspace' }, + rateLimiting: { enabled: false, requestsPerMinute: 60 }, + }); + + expect(result.allowDomains).toBe('github.com,api.github.com'); + expect(result.dnsServers).toBe('1.1.1.1,1.0.0.1'); + expect(result.enableApiProxy).toBe(true); + expect(result.anthropicApiTarget).toBe('api.anthropic.com'); + expect(result.anthropicApiBasePath).toBe('/anthropic'); + expect(result.agentTimeout).toBe('15'); + expect(result.containerWorkdir).toBe('/workspace'); + expect(result.rateLimit).toBe(false); + expect(result.rateLimitRpm).toBe('60'); + }); + }); + + 
describe('applyConfigOptionsWithCliPrecedence', () => {
+    it('does not overwrite explicitly provided CLI options', () => {
+      const options: Record<string, unknown> = { logLevel: 'warn', memoryLimit: '4g' };
+      const configOptions: Record<string, unknown> = { logLevel: 'debug', memoryLimit: '8g', imageTag: 'latest' };
+
+      applyConfigOptionsWithCliPrecedence(options, configOptions, (name) => name === 'logLevel');
+
+      expect(options).toEqual({ logLevel: 'warn', memoryLimit: '8g', imageTag: 'latest' });
+    });
+  });
+});
diff --git a/src/config-file.ts b/src/config-file.ts
new file mode 100644
index 00000000..040510e3
--- /dev/null
+++ b/src/config-file.ts
@@ -0,0 +1,382 @@
+import * as fs from 'fs';
+import * as path from 'path';
+import * as yaml from 'js-yaml';
+
+export interface AwfFileConfig {
+  $schema?: string;
+  network?: {
+    allowDomains?: string[];
+    blockDomains?: string[];
+    dnsServers?: string[];
+    upstreamProxy?: string;
+  };
+  apiProxy?: {
+    enabled?: boolean;
+    targets?: {
+      openai?: { host?: string; basePath?: string };
+      anthropic?: { host?: string; basePath?: string };
+      copilot?: { host?: string; basePath?: string };
+      gemini?: { host?: string; basePath?: string };
+    };
+  };
+  security?: {
+    sslBump?: boolean;
+    enableDlp?: boolean;
+    enableHostAccess?: boolean;
+    allowHostPorts?: string[] | string;
+    allowHostServicePorts?: string[] | string;
+    difcProxy?: {
+      host?: string;
+      caCert?: string;
+    };
+  };
+  container?: {
+    memoryLimit?: string;
+    agentTimeout?: number;
+    enableDind?: boolean;
+    workDir?: string;
+    containerWorkDir?: string;
+    imageRegistry?: string;
+    imageTag?: string;
+    skipPull?: boolean;
+    buildLocal?: boolean;
+    agentImage?: string;
+    tty?: boolean;
+    dockerHost?: string;
+  };
+  environment?: {
+    envFile?: string;
+    envAll?: boolean;
+    excludeEnv?: string[];
+  };
+  logging?: {
+    logLevel?: 'debug' | 'info' | 'warn' | 'error';
+    diagnosticLogs?: boolean;
+    auditDir?: string;
+    proxyLogsDir?: string;
+    sessionStateDir?: string;
+  };
+  rateLimiting?: {
+    enabled?: boolean;
+    requestsPerMinute?: number;
+    requestsPerHour?: number;
+    bytesPerMinute?: number;
+  };
+}
+
+function isRecord(value: unknown): value is Record<string, unknown> {
+  return typeof value === 'object' && value !== null && !Array.isArray(value);
+}
+
+function validateKnownKeys(
+  value: Record<string, unknown>,
+  keys: string[],
+  location: string,
+  errors: string[]
+): void {
+  const allowed = new Set(keys);
+  for (const key of Object.keys(value)) {
+    if (!allowed.has(key)) {
+      errors.push(`${location}.${key} is not supported`);
+    }
+  }
+}
+
+function validateStringArray(value: unknown, location: string, errors: string[]): void {
+  if (!Array.isArray(value) || value.some(item => typeof item !== 'string')) {
+    errors.push(`${location} must be an array of strings`);
+  }
+}
+
+function validateStringOrStringArray(value: unknown, location: string, errors: string[]): void {
+  const isValid = typeof value === 'string' || (Array.isArray(value) && value.every(item => typeof item === 'string'));
+  if (!isValid) {
+    errors.push(`${location} must be a string or array of strings`);
+  }
+}
+
+function validateProviderTarget(value: unknown, location: string, errors: string[], allowBasePath = true): void {
+  if (!isRecord(value)) {
+    errors.push(`${location} must be an object`);
+    return;
+  }
+  validateKnownKeys(value, allowBasePath ?
['host', 'basePath'] : ['host'], location, errors); + if (value.host !== undefined && typeof value.host !== 'string') { + errors.push(`${location}.host must be a string`); + } + if (allowBasePath && value.basePath !== undefined && typeof value.basePath !== 'string') { + errors.push(`${location}.basePath must be a string`); + } +} + +export function validateAwfFileConfig(config: unknown): string[] { + const errors: string[] = []; + + if (!isRecord(config)) { + return ['config root must be an object']; + } + + validateKnownKeys( + config, + ['$schema', 'network', 'apiProxy', 'security', 'container', 'environment', 'logging', 'rateLimiting'], + 'config', + errors + ); + + if (config.$schema !== undefined && typeof config.$schema !== 'string') { + errors.push('config.$schema must be a string'); + } + + if (config.network !== undefined) { + if (!isRecord(config.network)) { + errors.push('config.network must be an object'); + } else { + validateKnownKeys(config.network, ['allowDomains', 'blockDomains', 'dnsServers', 'upstreamProxy'], 'config.network', errors); + if (config.network.allowDomains !== undefined) validateStringArray(config.network.allowDomains, 'config.network.allowDomains', errors); + if (config.network.blockDomains !== undefined) validateStringArray(config.network.blockDomains, 'config.network.blockDomains', errors); + if (config.network.dnsServers !== undefined) validateStringArray(config.network.dnsServers, 'config.network.dnsServers', errors); + if (config.network.upstreamProxy !== undefined && typeof config.network.upstreamProxy !== 'string') { + errors.push('config.network.upstreamProxy must be a string'); + } + } + } + + if (config.apiProxy !== undefined) { + if (!isRecord(config.apiProxy)) { + errors.push('config.apiProxy must be an object'); + } else { + validateKnownKeys(config.apiProxy, ['enabled', 'targets'], 'config.apiProxy', errors); + if (config.apiProxy.enabled !== undefined && typeof config.apiProxy.enabled !== 'boolean') { + 
errors.push('config.apiProxy.enabled must be a boolean'); + } + if (config.apiProxy.targets !== undefined) { + if (!isRecord(config.apiProxy.targets)) { + errors.push('config.apiProxy.targets must be an object'); + } else { + validateKnownKeys(config.apiProxy.targets, ['openai', 'anthropic', 'copilot', 'gemini'], 'config.apiProxy.targets', errors); + if (config.apiProxy.targets.openai !== undefined) validateProviderTarget(config.apiProxy.targets.openai, 'config.apiProxy.targets.openai', errors); + if (config.apiProxy.targets.anthropic !== undefined) validateProviderTarget(config.apiProxy.targets.anthropic, 'config.apiProxy.targets.anthropic', errors); + if (config.apiProxy.targets.copilot !== undefined) validateProviderTarget(config.apiProxy.targets.copilot, 'config.apiProxy.targets.copilot', errors, false); + if (config.apiProxy.targets.gemini !== undefined) validateProviderTarget(config.apiProxy.targets.gemini, 'config.apiProxy.targets.gemini', errors); + } + } + } + } + + if (config.security !== undefined) { + if (!isRecord(config.security)) { + errors.push('config.security must be an object'); + } else { + validateKnownKeys( + config.security, + ['sslBump', 'enableDlp', 'enableHostAccess', 'allowHostPorts', 'allowHostServicePorts', 'difcProxy'], + 'config.security', + errors + ); + if (config.security.sslBump !== undefined && typeof config.security.sslBump !== 'boolean') errors.push('config.security.sslBump must be a boolean'); + if (config.security.enableDlp !== undefined && typeof config.security.enableDlp !== 'boolean') errors.push('config.security.enableDlp must be a boolean'); + if (config.security.enableHostAccess !== undefined && typeof config.security.enableHostAccess !== 'boolean') errors.push('config.security.enableHostAccess must be a boolean'); + if (config.security.allowHostPorts !== undefined) validateStringOrStringArray(config.security.allowHostPorts, 'config.security.allowHostPorts', errors); + if (config.security.allowHostServicePorts !== 
undefined) validateStringOrStringArray(config.security.allowHostServicePorts, 'config.security.allowHostServicePorts', errors); + if (config.security.difcProxy !== undefined) { + if (!isRecord(config.security.difcProxy)) { + errors.push('config.security.difcProxy must be an object'); + } else { + validateKnownKeys(config.security.difcProxy, ['host', 'caCert'], 'config.security.difcProxy', errors); + if (config.security.difcProxy.host !== undefined && typeof config.security.difcProxy.host !== 'string') errors.push('config.security.difcProxy.host must be a string'); + if (config.security.difcProxy.caCert !== undefined && typeof config.security.difcProxy.caCert !== 'string') errors.push('config.security.difcProxy.caCert must be a string'); + } + } + } + } + + if (config.container !== undefined) { + if (!isRecord(config.container)) { + errors.push('config.container must be an object'); + } else { + validateKnownKeys( + config.container, + ['memoryLimit', 'agentTimeout', 'enableDind', 'workDir', 'containerWorkDir', 'imageRegistry', 'imageTag', 'skipPull', 'buildLocal', 'agentImage', 'tty', 'dockerHost'], + 'config.container', + errors + ); + if (config.container.memoryLimit !== undefined && typeof config.container.memoryLimit !== 'string') errors.push('config.container.memoryLimit must be a string'); + if (config.container.agentTimeout !== undefined && (typeof config.container.agentTimeout !== 'number' || !Number.isInteger(config.container.agentTimeout) || config.container.agentTimeout <= 0)) { + errors.push('config.container.agentTimeout must be a positive integer'); + } + if (config.container.enableDind !== undefined && typeof config.container.enableDind !== 'boolean') errors.push('config.container.enableDind must be a boolean'); + if (config.container.workDir !== undefined && typeof config.container.workDir !== 'string') errors.push('config.container.workDir must be a string'); + if (config.container.containerWorkDir !== undefined && typeof 
config.container.containerWorkDir !== 'string') errors.push('config.container.containerWorkDir must be a string'); + if (config.container.imageRegistry !== undefined && typeof config.container.imageRegistry !== 'string') errors.push('config.container.imageRegistry must be a string'); + if (config.container.imageTag !== undefined && typeof config.container.imageTag !== 'string') errors.push('config.container.imageTag must be a string'); + if (config.container.skipPull !== undefined && typeof config.container.skipPull !== 'boolean') errors.push('config.container.skipPull must be a boolean'); + if (config.container.buildLocal !== undefined && typeof config.container.buildLocal !== 'boolean') errors.push('config.container.buildLocal must be a boolean'); + if (config.container.agentImage !== undefined && typeof config.container.agentImage !== 'string') errors.push('config.container.agentImage must be a string'); + if (config.container.tty !== undefined && typeof config.container.tty !== 'boolean') errors.push('config.container.tty must be a boolean'); + if (config.container.dockerHost !== undefined && typeof config.container.dockerHost !== 'string') errors.push('config.container.dockerHost must be a string'); + } + } + + if (config.environment !== undefined) { + if (!isRecord(config.environment)) { + errors.push('config.environment must be an object'); + } else { + validateKnownKeys(config.environment, ['envFile', 'envAll', 'excludeEnv'], 'config.environment', errors); + if (config.environment.envFile !== undefined && typeof config.environment.envFile !== 'string') errors.push('config.environment.envFile must be a string'); + if (config.environment.envAll !== undefined && typeof config.environment.envAll !== 'boolean') errors.push('config.environment.envAll must be a boolean'); + if (config.environment.excludeEnv !== undefined) validateStringArray(config.environment.excludeEnv, 'config.environment.excludeEnv', errors); + } + } + + if (config.logging !== undefined) { + 
if (!isRecord(config.logging)) { + errors.push('config.logging must be an object'); + } else { + validateKnownKeys(config.logging, ['logLevel', 'diagnosticLogs', 'auditDir', 'proxyLogsDir', 'sessionStateDir'], 'config.logging', errors); + if (config.logging.logLevel !== undefined && (typeof config.logging.logLevel !== 'string' || !['debug', 'info', 'warn', 'error'].includes(config.logging.logLevel))) { + errors.push('config.logging.logLevel must be one of: debug, info, warn, error'); + } + if (config.logging.diagnosticLogs !== undefined && typeof config.logging.diagnosticLogs !== 'boolean') errors.push('config.logging.diagnosticLogs must be a boolean'); + if (config.logging.auditDir !== undefined && typeof config.logging.auditDir !== 'string') errors.push('config.logging.auditDir must be a string'); + if (config.logging.proxyLogsDir !== undefined && typeof config.logging.proxyLogsDir !== 'string') errors.push('config.logging.proxyLogsDir must be a string'); + if (config.logging.sessionStateDir !== undefined && typeof config.logging.sessionStateDir !== 'string') errors.push('config.logging.sessionStateDir must be a string'); + } + } + + if (config.rateLimiting !== undefined) { + if (!isRecord(config.rateLimiting)) { + errors.push('config.rateLimiting must be an object'); + } else { + validateKnownKeys(config.rateLimiting, ['enabled', 'requestsPerMinute', 'requestsPerHour', 'bytesPerMinute'], 'config.rateLimiting', errors); + if (config.rateLimiting.enabled !== undefined && typeof config.rateLimiting.enabled !== 'boolean') errors.push('config.rateLimiting.enabled must be a boolean'); + if (config.rateLimiting.requestsPerMinute !== undefined && (typeof config.rateLimiting.requestsPerMinute !== 'number' || !Number.isInteger(config.rateLimiting.requestsPerMinute) || config.rateLimiting.requestsPerMinute <= 0)) { + errors.push('config.rateLimiting.requestsPerMinute must be a positive integer'); + } + if (config.rateLimiting.requestsPerHour !== undefined && (typeof 
config.rateLimiting.requestsPerHour !== 'number' || !Number.isInteger(config.rateLimiting.requestsPerHour) || config.rateLimiting.requestsPerHour <= 0)) { + errors.push('config.rateLimiting.requestsPerHour must be a positive integer'); + } + if (config.rateLimiting.bytesPerMinute !== undefined && (typeof config.rateLimiting.bytesPerMinute !== 'number' || !Number.isInteger(config.rateLimiting.bytesPerMinute) || config.rateLimiting.bytesPerMinute <= 0)) { + errors.push('config.rateLimiting.bytesPerMinute must be a positive integer'); + } + } + } + + return errors; +} + +export function loadAwfFileConfig(configPath: string, readStdin: () => string = () => fs.readFileSync(0, 'utf8')): AwfFileConfig { + let rawContent: string; + let sourceLabel = configPath; + + if (configPath === '-') { + rawContent = readStdin(); + sourceLabel = 'stdin'; + } else { + const resolvedPath = path.resolve(process.cwd(), configPath); + rawContent = fs.readFileSync(resolvedPath, 'utf8'); + sourceLabel = resolvedPath; + } + + let parsed: unknown; + const isJson = configPath.endsWith('.json'); + const isYaml = configPath.endsWith('.yaml') || configPath.endsWith('.yml'); + + try { + if (isJson) { + parsed = JSON.parse(rawContent); + } else if (isYaml) { + parsed = yaml.load(rawContent); + } else { + try { + parsed = JSON.parse(rawContent); + } catch { + parsed = yaml.load(rawContent); + } + } + } catch (error) { + throw new Error(`Failed to parse AWF config from ${sourceLabel}: ${error instanceof Error ? 
error.message : String(error)}`); + } + + const errors = validateAwfFileConfig(parsed); + if (errors.length > 0) { + throw new Error(`Invalid AWF config at ${sourceLabel}:\n- ${errors.join('\n- ')}`); + } + + return parsed as AwfFileConfig; +} + +function joinComma(value: string[] | undefined): string | undefined { + if (!value || value.length === 0) return undefined; + return value.join(','); +} + +function joinPorts(value: string[] | string | undefined): string | undefined { + if (value === undefined) return undefined; + return Array.isArray(value) ? value.join(',') : value; +} + +export function mapAwfFileConfigToCliOptions(config: AwfFileConfig): Record { + return { + allowDomains: joinComma(config.network?.allowDomains), + blockDomains: joinComma(config.network?.blockDomains), + dnsServers: joinComma(config.network?.dnsServers), + upstreamProxy: config.network?.upstreamProxy, + + enableApiProxy: config.apiProxy?.enabled, + openaiApiTarget: config.apiProxy?.targets?.openai?.host, + openaiApiBasePath: config.apiProxy?.targets?.openai?.basePath, + anthropicApiTarget: config.apiProxy?.targets?.anthropic?.host, + anthropicApiBasePath: config.apiProxy?.targets?.anthropic?.basePath, + copilotApiTarget: config.apiProxy?.targets?.copilot?.host, + geminiApiTarget: config.apiProxy?.targets?.gemini?.host, + geminiApiBasePath: config.apiProxy?.targets?.gemini?.basePath, + + sslBump: config.security?.sslBump, + enableDlp: config.security?.enableDlp, + enableHostAccess: config.security?.enableHostAccess, + allowHostPorts: joinPorts(config.security?.allowHostPorts), + allowHostServicePorts: joinPorts(config.security?.allowHostServicePorts), + difcProxyHost: config.security?.difcProxy?.host, + difcProxyCaCert: config.security?.difcProxy?.caCert, + + memoryLimit: config.container?.memoryLimit, + agentTimeout: config.container?.agentTimeout !== undefined ? 
String(config.container.agentTimeout) : undefined, + enableDind: config.container?.enableDind, + workDir: config.container?.workDir, + containerWorkdir: config.container?.containerWorkDir, + imageRegistry: config.container?.imageRegistry, + imageTag: config.container?.imageTag, + skipPull: config.container?.skipPull, + buildLocal: config.container?.buildLocal, + agentImage: config.container?.agentImage, + tty: config.container?.tty, + dockerHost: config.container?.dockerHost, + + envFile: config.environment?.envFile, + envAll: config.environment?.envAll, + excludeEnv: config.environment?.excludeEnv, + + logLevel: config.logging?.logLevel, + diagnosticLogs: config.logging?.diagnosticLogs, + auditDir: config.logging?.auditDir, + proxyLogsDir: config.logging?.proxyLogsDir, + sessionStateDir: config.logging?.sessionStateDir, + + rateLimit: config.rateLimiting?.enabled === false ? false : undefined, + rateLimitRpm: config.rateLimiting?.requestsPerMinute !== undefined ? String(config.rateLimiting.requestsPerMinute) : undefined, + rateLimitRph: config.rateLimiting?.requestsPerHour !== undefined ? String(config.rateLimiting.requestsPerHour) : undefined, + rateLimitBytesPm: config.rateLimiting?.bytesPerMinute !== undefined ? 
String(config.rateLimiting.bytesPerMinute) : undefined, + }; +} + +export function applyConfigOptionsWithCliPrecedence( + options: Record, + configOptions: Record, + isCliProvided: (optionName: string) => boolean +): void { + for (const [key, value] of Object.entries(configOptions)) { + if (value === undefined) continue; + if (isCliProvided(key)) continue; + options[key] = value; + } +} From 78a93ef1d5db68180402eea9c6a45e4caa86afdd Mon Sep 17 00:00:00 2001 From: "copilot-swe-agent[bot]" <198982749+Copilot@users.noreply.github.com> Date: Thu, 16 Apr 2026 04:42:58 +0000 Subject: [PATCH 03/10] refactor: tighten config validation helpers and precedence docs Agent-Logs-Url: https://github.com/github/gh-aw-firewall/sessions/dcd77d8b-19a4-4eab-9b64-5772d37fda34 --- src/cli.ts | 2 ++ src/config-file.test.ts | 2 +- src/config-file.ts | 29 ++++++++++++++++++++--------- 3 files changed, 23 insertions(+), 10 deletions(-) diff --git a/src/cli.ts b/src/cli.ts index 2a359454..afb3767e 100644 --- a/src/cli.ts +++ b/src/cli.ts @@ -1623,6 +1623,8 @@ program applyConfigOptionsWithCliPrecedence( options as Record, fileDerivedOptions, + // Commander marks explicit user flags with source "cli". + // We only apply config values when a flag was not explicitly provided. 
(optionName: string) => program.getOptionValueSource(optionName) === 'cli' ); } catch (error) { diff --git a/src/config-file.test.ts b/src/config-file.test.ts index 502abfac..e88e43d3 100644 --- a/src/config-file.test.ts +++ b/src/config-file.test.ts @@ -43,7 +43,7 @@ describe('config-file', () => { let testDir: string; beforeEach(() => { - testDir = fs.mkdtempSync(path.join(os.tmpdir(), 'awf-config-test-')); + testDir = fs.mkdtempSync(path.join(os.tmpdir(), 'awf-config-file-test-')); }); afterEach(() => { diff --git a/src/config-file.ts b/src/config-file.ts index 040510e3..8b7a6190 100644 --- a/src/config-file.ts +++ b/src/config-file.ts @@ -109,6 +109,10 @@ function validateProviderTarget(value: unknown, location: string, errors: string } } +function isPositiveInteger(value: unknown): value is number { + return typeof value === 'number' && Number.isInteger(value) && value > 0; +} + export function validateAwfFileConfig(config: unknown): string[] { const errors: string[] = []; @@ -201,7 +205,7 @@ export function validateAwfFileConfig(config: unknown): string[] { errors ); if (config.container.memoryLimit !== undefined && typeof config.container.memoryLimit !== 'string') errors.push('config.container.memoryLimit must be a string'); - if (config.container.agentTimeout !== undefined && (typeof config.container.agentTimeout !== 'number' || !Number.isInteger(config.container.agentTimeout) || config.container.agentTimeout <= 0)) { + if (config.container.agentTimeout !== undefined && !isPositiveInteger(config.container.agentTimeout)) { errors.push('config.container.agentTimeout must be a positive integer'); } if (config.container.enableDind !== undefined && typeof config.container.enableDind !== 'boolean') errors.push('config.container.enableDind must be a boolean'); @@ -249,13 +253,13 @@ export function validateAwfFileConfig(config: unknown): string[] { } else { validateKnownKeys(config.rateLimiting, ['enabled', 'requestsPerMinute', 'requestsPerHour', 'bytesPerMinute'], 
'config.rateLimiting', errors); if (config.rateLimiting.enabled !== undefined && typeof config.rateLimiting.enabled !== 'boolean') errors.push('config.rateLimiting.enabled must be a boolean'); - if (config.rateLimiting.requestsPerMinute !== undefined && (typeof config.rateLimiting.requestsPerMinute !== 'number' || !Number.isInteger(config.rateLimiting.requestsPerMinute) || config.rateLimiting.requestsPerMinute <= 0)) { + if (config.rateLimiting.requestsPerMinute !== undefined && !isPositiveInteger(config.rateLimiting.requestsPerMinute)) { errors.push('config.rateLimiting.requestsPerMinute must be a positive integer'); } - if (config.rateLimiting.requestsPerHour !== undefined && (typeof config.rateLimiting.requestsPerHour !== 'number' || !Number.isInteger(config.rateLimiting.requestsPerHour) || config.rateLimiting.requestsPerHour <= 0)) { + if (config.rateLimiting.requestsPerHour !== undefined && !isPositiveInteger(config.rateLimiting.requestsPerHour)) { errors.push('config.rateLimiting.requestsPerHour must be a positive integer'); } - if (config.rateLimiting.bytesPerMinute !== undefined && (typeof config.rateLimiting.bytesPerMinute !== 'number' || !Number.isInteger(config.rateLimiting.bytesPerMinute) || config.rateLimiting.bytesPerMinute <= 0)) { + if (config.rateLimiting.bytesPerMinute !== undefined && !isPositiveInteger(config.rateLimiting.bytesPerMinute)) { errors.push('config.rateLimiting.bytesPerMinute must be a positive integer'); } } @@ -264,7 +268,9 @@ export function validateAwfFileConfig(config: unknown): string[] { return errors; } -export function loadAwfFileConfig(configPath: string, readStdin: () => string = () => fs.readFileSync(0, 'utf8')): AwfFileConfig { +const readStdinSync = (): string => fs.readFileSync(0, 'utf8'); + +export function loadAwfFileConfig(configPath: string, readStdin: () => string = readStdinSync): AwfFileConfig { let rawContent: string; let sourceLabel = configPath; @@ -287,6 +293,7 @@ export function 
loadAwfFileConfig(configPath: string, readStdin: () => string = } else if (isYaml) { parsed = yaml.load(rawContent); } else { + // For stdin/extensionless input, prefer JSON first (strict) then YAML. try { parsed = JSON.parse(rawContent); } catch { @@ -315,6 +322,10 @@ function joinPorts(value: string[] | string | undefined): string | undefined { return Array.isArray(value) ? value.join(',') : value; } +function toStringIfDefined(value: number | undefined): string | undefined { + return value !== undefined ? String(value) : undefined; +} + export function mapAwfFileConfigToCliOptions(config: AwfFileConfig): Record { return { allowDomains: joinComma(config.network?.allowDomains), @@ -340,7 +351,7 @@ export function mapAwfFileConfigToCliOptions(config: AwfFileConfig): Record Date: Thu, 16 Apr 2026 04:44:23 +0000 Subject: [PATCH 04/10] docs: clarify config parsing and RFC wording Agent-Logs-Url: https://github.com/github/gh-aw-firewall/sessions/dcd77d8b-19a4-4eab-9b64-5772d37fda34 --- docs/awf-config-spec.md | 2 +- src/config-file.ts | 13 +++++++++++-- 2 files changed, 12 insertions(+), 3 deletions(-) diff --git a/docs/awf-config-spec.md b/docs/awf-config-spec.md index 869b7fef..a67cba58 100644 --- a/docs/awf-config-spec.md +++ b/docs/awf-config-spec.md @@ -14,7 +14,7 @@ The machine-readable schema is published at: ## 1. Conformance -Keywords **MUST**, **MUST NOT**, **SHOULD**, and **MAY** are to be interpreted as described in RFC 2119. +The normative keywords in this document are to be interpreted as described in RFC 2119. 
An AWF config document is conforming when: diff --git a/src/config-file.ts b/src/config-file.ts index 8b7a6190..f91f9ec8 100644 --- a/src/config-file.ts +++ b/src/config-file.ts @@ -268,7 +268,7 @@ export function validateAwfFileConfig(config: unknown): string[] { return errors; } -const readStdinSync = (): string => fs.readFileSync(0, 'utf8'); +const readStdinSync = (): string => fs.readFileSync(process.stdin.fd, 'utf8'); export function loadAwfFileConfig(configPath: string, readStdin: () => string = readStdinSync): AwfFileConfig { let rawContent: string; @@ -286,14 +286,22 @@ export function loadAwfFileConfig(configPath: string, readStdin: () => string = let parsed: unknown; const isJson = configPath.endsWith('.json'); const isYaml = configPath.endsWith('.yaml') || configPath.endsWith('.yml'); + const isStdin = configPath === '-'; try { if (isJson) { parsed = JSON.parse(rawContent); } else if (isYaml) { parsed = yaml.load(rawContent); + } else if (isStdin) { + // stdin intentionally supports both formats; prefer strict JSON parse first. + try { + parsed = JSON.parse(rawContent); + } catch { + parsed = yaml.load(rawContent); + } } else { - // For stdin/extensionless input, prefer JSON first (strict) then YAML. + // For extensionless paths, prefer JSON first (strict) then YAML. 
try { parsed = JSON.parse(rawContent); } catch { @@ -373,6 +381,7 @@ export function mapAwfFileConfigToCliOptions(config: AwfFileConfig): Record Date: Thu, 16 Apr 2026 04:46:00 +0000 Subject: [PATCH 05/10] refactor: clarify in-place config option merge behavior Agent-Logs-Url: https://github.com/github/gh-aw-firewall/sessions/dcd77d8b-19a4-4eab-9b64-5772d37fda34 --- src/cli.ts | 4 ++-- src/config-file.test.ts | 6 +++--- src/config-file.ts | 3 ++- 3 files changed, 7 insertions(+), 6 deletions(-) diff --git a/src/cli.ts b/src/cli.ts index afb3767e..314b934a 100644 --- a/src/cli.ts +++ b/src/cli.ts @@ -29,7 +29,7 @@ import { validateDomainOrPattern, SQUID_DANGEROUS_CHARS } from './domain-pattern import { loadAndMergeDomains } from './rules'; import { detectHostDnsServers } from './dns-resolver'; import { detectUpstreamProxy, parseProxyUrl, parseNoProxy } from './upstream-proxy'; -import { loadAwfFileConfig, mapAwfFileConfigToCliOptions, applyConfigOptionsWithCliPrecedence } from './config-file'; +import { loadAwfFileConfig, mapAwfFileConfigToCliOptions, applyConfigOptionsInPlaceWithCliPrecedence } from './config-file'; import { OutputFormat } from './types'; import { version } from '../package.json'; @@ -1620,7 +1620,7 @@ program try { const fileConfig = loadAwfFileConfig(options.config); const fileDerivedOptions = mapAwfFileConfigToCliOptions(fileConfig); - applyConfigOptionsWithCliPrecedence( + applyConfigOptionsInPlaceWithCliPrecedence( options as Record, fileDerivedOptions, // Commander marks explicit user flags with source "cli". 
diff --git a/src/config-file.test.ts b/src/config-file.test.ts index e88e43d3..46ef3694 100644 --- a/src/config-file.test.ts +++ b/src/config-file.test.ts @@ -2,7 +2,7 @@ import * as fs from 'fs'; import * as os from 'os'; import * as path from 'path'; import { - applyConfigOptionsWithCliPrecedence, + applyConfigOptionsInPlaceWithCliPrecedence, loadAwfFileConfig, mapAwfFileConfigToCliOptions, validateAwfFileConfig, @@ -105,12 +105,12 @@ describe('config-file', () => { }); }); - describe('applyConfigOptionsWithCliPrecedence', () => { + describe('applyConfigOptionsInPlaceWithCliPrecedence', () => { it('does not overwrite explicitly provided CLI options', () => { const options: Record = { logLevel: 'warn', memoryLimit: '4g' }; const configOptions: Record = { logLevel: 'debug', memoryLimit: '8g', imageTag: 'latest' }; - applyConfigOptionsWithCliPrecedence(options, configOptions, (name) => name === 'logLevel'); + applyConfigOptionsInPlaceWithCliPrecedence(options, configOptions, (name) => name === 'logLevel'); expect(options).toEqual({ logLevel: 'warn', memoryLimit: '8g', imageTag: 'latest' }); }); diff --git a/src/config-file.ts b/src/config-file.ts index f91f9ec8..8270ed71 100644 --- a/src/config-file.ts +++ b/src/config-file.ts @@ -321,6 +321,7 @@ export function loadAwfFileConfig(configPath: string, readStdin: () => string = } function joinComma(value: string[] | undefined): string | undefined { + // Empty arrays intentionally map to undefined so they don't override defaults with "". 
if (!value || value.length === 0) return undefined; return value.join(','); } @@ -389,7 +390,7 @@ export function mapAwfFileConfigToCliOptions(config: AwfFileConfig): Record, configOptions: Record, isCliProvided: (optionName: string) => boolean From dd2d1c78ba3f98e3c30edc28631ddd27224b04c1 Mon Sep 17 00:00:00 2001 From: Landon Cox Date: Thu, 16 Apr 2026 08:17:55 -0700 Subject: [PATCH 06/10] Potential fix for pull request finding 'CodeQL / Useless assignment to local variable' Co-authored-by: Copilot Autofix powered by AI <62310815+github-advanced-security[bot]@users.noreply.github.com> --- src/config-file.ts | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/src/config-file.ts b/src/config-file.ts index 8270ed71..7f65cd37 100644 --- a/src/config-file.ts +++ b/src/config-file.ts @@ -272,7 +272,7 @@ const readStdinSync = (): string => fs.readFileSync(process.stdin.fd, 'utf8'); export function loadAwfFileConfig(configPath: string, readStdin: () => string = readStdinSync): AwfFileConfig { let rawContent: string; - let sourceLabel = configPath; + let sourceLabel: string; if (configPath === '-') { rawContent = readStdin(); From b053221849b705cb24adba34b6dc385f97fc9681 Mon Sep 17 00:00:00 2001 From: Landon Cox Date: Thu, 16 Apr 2026 08:26:11 -0700 Subject: [PATCH 07/10] fix: retry apt-get update on transient mirror failures in Dockerfiles The initial apt-get update can fail with hash mismatches when Ubuntu mirrors are mid-sync. The existing retry logic only covered apt-get install failures, not apt-get update failures. This adds a retry with cache clear for the initial apt-get update in both agent and squid Dockerfiles. 
Fixes: squid-proxy build failure (exit code 100) in --build-local CI Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- containers/agent/Dockerfile | 9 +++++---- containers/squid/Dockerfile | 4 ++-- 2 files changed, 7 insertions(+), 6 deletions(-) diff --git a/containers/agent/Dockerfile b/containers/agent/Dockerfile index f3e09c51..0472423c 100644 --- a/containers/agent/Dockerfile +++ b/containers/agent/Dockerfile @@ -11,10 +11,10 @@ FROM ${BASE_IMAGE} # Install required packages and Node.js 22 # Note: Some packages may already exist in runner-like base images, apt handles this gracefully -# Retry logic handles transient 404s when Ubuntu archive supersedes package versions mid-build +# Retry logic handles transient mirror hash-mismatches and 404s during apt-get update/install RUN set -eux; \ PKGS="iptables curl ca-certificates git gh gnupg dnsutils net-tools netcat-openbsd gosu libcap2-bin"; \ - apt-get update && \ + ( apt-get update || (sleep 5 && rm -rf /var/lib/apt/lists/* && apt-get update) ) && \ ( apt-get install -y --no-install-recommends $PKGS || \ (echo "apt-get install failed, retrying with fresh package index..." && \ rm -rf /var/lib/apt/lists/* && \ @@ -40,7 +40,7 @@ RUN set -eux; \ # See: https://github.com/actions/runner-images/blob/main/images/ubuntu/Ubuntu2204-Readme.md RUN set -eux; \ PARITY_PKGS="libgdiplus libev-dev libssl-dev php-intl php-gd"; \ - apt-get update && \ + ( apt-get update || (sleep 5 && rm -rf /var/lib/apt/lists/* && apt-get update) ) && \ ( apt-get install -y --no-install-recommends $PARITY_PKGS || \ (echo "apt-get install failed, retrying with fresh package index..." 
&& \ rm -rf /var/lib/apt/lists/* && \ @@ -51,7 +51,8 @@ RUN set -eux; \ # Upgrade all packages to pick up security patches # Addresses CVE-2023-44487 (HTTP/2 Rapid Reset) and other known vulnerabilities # Retry logic handles transient mirror sync failures during apt-get update -RUN apt-get update && apt-get upgrade -y && rm -rf /var/lib/apt/lists/* || \ +RUN ( apt-get update || (sleep 5 && rm -rf /var/lib/apt/lists/* && apt-get update) ) && \ + apt-get upgrade -y && rm -rf /var/lib/apt/lists/* || \ (echo "apt-get upgrade failed, retrying with fresh package index..." && \ rm -rf /var/lib/apt/lists/* && \ apt-get update && apt-get upgrade -y && rm -rf /var/lib/apt/lists/*) diff --git a/containers/squid/Dockerfile b/containers/squid/Dockerfile index 96251cf5..640979a7 100644 --- a/containers/squid/Dockerfile +++ b/containers/squid/Dockerfile @@ -1,10 +1,10 @@ FROM ubuntu/squid:latest # Install additional tools for debugging, healthcheck, and SSL Bump -# Retry logic handles transient 404s when Ubuntu archive supersedes package versions mid-build +# Retry logic handles transient mirror hash-mismatches and 404s during apt-get update/install RUN set -eux; \ PKGS="curl dnsutils net-tools netcat-openbsd openssl squid-openssl"; \ - apt-get update && \ + ( apt-get update || (sleep 5 && rm -rf /var/lib/apt/lists/* && apt-get update) ) && \ apt-get install -y --only-upgrade gpgv && \ ( apt-get install -y --no-install-recommends $PKGS || \ (rm -rf /var/lib/apt/lists/* && apt-get update && \ From 323d39708758b43fd473968bb7699e8236dcf2b4 Mon Sep 17 00:00:00 2001 From: Landon Cox Date: Thu, 16 Apr 2026 08:36:37 -0700 Subject: [PATCH 08/10] fix: set COPILOT_MODEL fallback to claude-sonnet-4.5 for BYOK mode The byok-copilot feature flag generates an empty COPILOT_MODEL fallback, but BYOK providers require an explicit model. This patches the lock file with claude-sonnet-4.5 as the default. 
Workaround for: github/gh-aw#26565 Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- .github/workflows/smoke-copilot.lock.yml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.github/workflows/smoke-copilot.lock.yml b/.github/workflows/smoke-copilot.lock.yml index 72ee22dd..804c1891 100644 --- a/.github/workflows/smoke-copilot.lock.yml +++ b/.github/workflows/smoke-copilot.lock.yml @@ -699,7 +699,7 @@ jobs: env: COPILOT_AGENT_RUNNER_TYPE: STANDALONE COPILOT_GITHUB_TOKEN: ${{ secrets.COPILOT_GITHUB_TOKEN }} - COPILOT_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || '' }} + COPILOT_MODEL: ${{ vars.GH_AW_MODEL_AGENT_COPILOT || 'claude-sonnet-4.5' }} GH_AW_MCP_CONFIG: /home/runner/.copilot/mcp-config.json GH_AW_PHASE: agent GH_AW_PROMPT: /tmp/gh-aw/aw-prompts/prompt.txt From 59815968f7b6020a0119d7e93cdd90003607dbe5 Mon Sep 17 00:00:00 2001 From: Landon Cox Date: Thu, 16 Apr 2026 10:19:21 -0700 Subject: [PATCH 09/10] fix: use retry loop with backoff for apt-get update in Dockerfiles Replace single-retry apt-get update with a 3-attempt retry loop using linear backoff (10s, 20s, 30s). The single retry was insufficient when Ubuntu mirrors are in prolonged sync states (observed in CI where mirror hash mismatches persisted for several minutes). The apt_update_retry function clears the apt cache before each attempt, ensuring a clean state. Applied to all apt-get update calls in both agent and squid Dockerfiles, including the install-retry fallback paths.
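The retry-with-backoff pattern inlined into each RUN layer can be exercised outside Docker with a standalone sketch. Here `retry_with_backoff` and the deliberately-failing test command are illustrative stand-ins (the real loop wraps `rm -rf /var/lib/apt/lists/* && apt-get update`), and the delays are shortened from the patch's 10s/20s/30s so the sketch runs quickly:

```shell
#!/bin/sh
# Sketch of the retry pattern the patch inlines into each Dockerfile RUN layer.
# "cmd" stands in for `rm -rf /var/lib/apt/lists/* && apt-get update`; delays
# are 1s/2s/3s here instead of the patch's 10s/20s/30s.
retry_with_backoff() {
  cmd="$1"
  for i in 1 2 3; do
    if sh -c "$cmd"; then
      return 0
    fi
    echo "attempt $i/3 failed, retrying in ${i}s..." >&2
    sleep "$i"
  done
  return 1
}

# Exercise the retry path with a command that fails twice, then succeeds:
# each attempt appends a line, and the command only passes once 3 lines exist.
state=$(mktemp)
if retry_with_backoff "echo x >> '$state'; [ \$(wc -l < '$state') -ge 3 ]"; then
  attempts=$(wc -l < "$state" | tr -d ' ')
  echo "recovered after $attempts attempts"
fi
rm -f "$state"
```

Like the patched Dockerfiles, the loop returns success as soon as one attempt passes and gives up with a non-zero status after the third failure.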
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- containers/agent/Dockerfile | 35 +++++++++++++++++++++++++---------- containers/squid/Dockerfile | 12 +++++++++--- 2 files changed, 34 insertions(+), 13 deletions(-) diff --git a/containers/agent/Dockerfile b/containers/agent/Dockerfile index 0472423c..9841ad3a 100644 --- a/containers/agent/Dockerfile +++ b/containers/agent/Dockerfile @@ -11,14 +11,19 @@ FROM ${BASE_IMAGE} # Install required packages and Node.js 22 # Note: Some packages may already exist in runner-like base images, apt handles this gracefully -# Retry logic handles transient mirror hash-mismatches and 404s during apt-get update/install +# apt_update_retry: retries up to 3 times with backoff to survive prolonged mirror syncs RUN set -eux; \ + apt_update_retry() { \ + local i; for i in 1 2 3; do \ + rm -rf /var/lib/apt/lists/* && apt-get update && return 0; \ + echo "apt-get update attempt $i/3 failed, retrying in $((i*10))s..." >&2; sleep $((i*10)); \ + done; return 1; \ + }; \ PKGS="iptables curl ca-certificates git gh gnupg dnsutils net-tools netcat-openbsd gosu libcap2-bin"; \ - ( apt-get update || (sleep 5 && rm -rf /var/lib/apt/lists/* && apt-get update) ) && \ + apt_update_retry && \ ( apt-get install -y --no-install-recommends $PKGS || \ (echo "apt-get install failed, retrying with fresh package index..." && \ - rm -rf /var/lib/apt/lists/* && \ - apt-get update && \ + apt_update_retry && \ apt-get install -y --no-install-recommends $PKGS) ) && \ # Prefer system binaries over runner toolcache (e.g., act images) for Node checks. 
export PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:$PATH" && \ @@ -39,23 +44,33 @@ RUN set -eux; \ # These packages are commonly needed by workflows and avoid agents spending time installing them manually # See: https://github.com/actions/runner-images/blob/main/images/ubuntu/Ubuntu2204-Readme.md RUN set -eux; \ + apt_update_retry() { \ + local i; for i in 1 2 3; do \ + rm -rf /var/lib/apt/lists/* && apt-get update && return 0; \ + echo "apt-get update attempt $i/3 failed, retrying in $((i*10))s..." >&2; sleep $((i*10)); \ + done; return 1; \ + }; \ PARITY_PKGS="libgdiplus libev-dev libssl-dev php-intl php-gd"; \ - ( apt-get update || (sleep 5 && rm -rf /var/lib/apt/lists/* && apt-get update) ) && \ + apt_update_retry && \ ( apt-get install -y --no-install-recommends $PARITY_PKGS || \ (echo "apt-get install failed, retrying with fresh package index..." && \ - rm -rf /var/lib/apt/lists/* && \ - apt-get update && \ + apt_update_retry && \ apt-get install -y --no-install-recommends $PARITY_PKGS) ) && \ rm -rf /var/lib/apt/lists/* # Upgrade all packages to pick up security patches # Addresses CVE-2023-44487 (HTTP/2 Rapid Reset) and other known vulnerabilities # Retry logic handles transient mirror sync failures during apt-get update -RUN ( apt-get update || (sleep 5 && rm -rf /var/lib/apt/lists/* && apt-get update) ) && \ +RUN apt_update_retry() { \ + local i; for i in 1 2 3; do \ + rm -rf /var/lib/apt/lists/* && apt-get update && return 0; \ + echo "apt-get update attempt $i/3 failed, retrying in $((i*10))s..." >&2; sleep $((i*10)); \ + done; return 1; \ + }; \ + apt_update_retry && \ apt-get upgrade -y && rm -rf /var/lib/apt/lists/* || \ (echo "apt-get upgrade failed, retrying with fresh package index..." 
&& \ - rm -rf /var/lib/apt/lists/* && \ - apt-get update && apt-get upgrade -y && rm -rf /var/lib/apt/lists/*) + apt_update_retry && apt-get upgrade -y && rm -rf /var/lib/apt/lists/*) # Create non-root user with UID/GID matching host user # This allows the user command to run with appropriate permissions diff --git a/containers/squid/Dockerfile b/containers/squid/Dockerfile index 640979a7..b2f9828b 100644 --- a/containers/squid/Dockerfile +++ b/containers/squid/Dockerfile @@ -1,13 +1,19 @@ FROM ubuntu/squid:latest # Install additional tools for debugging, healthcheck, and SSL Bump -# Retry logic handles transient mirror hash-mismatches and 404s during apt-get update/install +# apt_update_retry: retries up to 3 times with backoff to survive prolonged mirror syncs RUN set -eux; \ + apt_update_retry() { \ + local i; for i in 1 2 3; do \ + rm -rf /var/lib/apt/lists/* && apt-get update && return 0; \ + echo "apt-get update attempt $i/3 failed, retrying in $((i*10))s..." >&2; sleep $((i*10)); \ + done; return 1; \ + }; \ PKGS="curl dnsutils net-tools netcat-openbsd openssl squid-openssl"; \ - ( apt-get update || (sleep 5 && rm -rf /var/lib/apt/lists/* && apt-get update) ) && \ + apt_update_retry && \ apt-get install -y --only-upgrade gpgv && \ ( apt-get install -y --no-install-recommends $PKGS || \ - (rm -rf /var/lib/apt/lists/* && apt-get update && \ + (apt_update_retry && \ apt-get install -y --no-install-recommends $PKGS) ) && \ rm -rf /var/lib/apt/lists/* From 30eb608df2426f9f10f8144c41ef97a6c6630b3d Mon Sep 17 00:00:00 2001 From: Landon Cox Date: Thu, 16 Apr 2026 10:33:17 -0700 Subject: [PATCH 10/10] fix: use Azure apt mirrors in Dockerfiles for CI reliability GitHub Actions runners are Azure-hosted, so azure.archive.ubuntu.com is geographically closer and more reliable than archive.ubuntu.com. This reduces Hash Sum mismatch failures during Ubuntu mirror syncs. 
Handles both traditional sources.list (jammy/22.04) and DEB822 format (noble/24.04+) used by ubuntu/squid:latest. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com> --- containers/agent/Dockerfile | 14 ++++++++++++++ containers/squid/Dockerfile | 14 ++++++++++++++ 2 files changed, 28 insertions(+) diff --git a/containers/agent/Dockerfile b/containers/agent/Dockerfile index 9841ad3a..70425684 100644 --- a/containers/agent/Dockerfile +++ b/containers/agent/Dockerfile @@ -9,6 +9,20 @@ ARG BASE_IMAGE=ubuntu:22.04 FROM ${BASE_IMAGE} +# Switch to Azure apt mirror for faster, more reliable package fetches in CI +# GitHub Actions runners are Azure-hosted; azure.archive.ubuntu.com is geographically closer +# Handles both traditional sources.list (jammy) and DEB822 format (noble+) +RUN if [ -f /etc/apt/sources.list ]; then \ + sed -i 's|http://archive.ubuntu.com|http://azure.archive.ubuntu.com|g' /etc/apt/sources.list; \ + sed -i 's|http://security.ubuntu.com|http://azure.archive.ubuntu.com|g' /etc/apt/sources.list; \ + fi && \ + if [ -d /etc/apt/sources.list.d ]; then \ + find /etc/apt/sources.list.d -name '*.sources' -exec \ + sed -i 's|http://archive.ubuntu.com|http://azure.archive.ubuntu.com|g' {} + 2>/dev/null || true; \ + find /etc/apt/sources.list.d -name '*.sources' -exec \ + sed -i 's|http://security.ubuntu.com|http://azure.archive.ubuntu.com|g' {} + 2>/dev/null || true; \ + fi + # Install required packages and Node.js 22 # Note: Some packages may already exist in runner-like base images, apt handles this gracefully # apt_update_retry: retries up to 3 times with backoff to survive prolonged mirror syncs diff --git a/containers/squid/Dockerfile b/containers/squid/Dockerfile index b2f9828b..cbd80e5f 100644 --- a/containers/squid/Dockerfile +++ b/containers/squid/Dockerfile @@ -1,5 +1,19 @@ FROM ubuntu/squid:latest +# Switch to Azure apt mirror for faster, more reliable package fetches in CI +# GitHub Actions runners are Azure-hosted; 
azure.archive.ubuntu.com is geographically closer +# Handles both traditional sources.list (jammy) and DEB822 format (noble+) +RUN if [ -f /etc/apt/sources.list ]; then \ + sed -i 's|http://archive.ubuntu.com|http://azure.archive.ubuntu.com|g' /etc/apt/sources.list; \ + sed -i 's|http://security.ubuntu.com|http://azure.archive.ubuntu.com|g' /etc/apt/sources.list; \ + fi && \ + if [ -d /etc/apt/sources.list.d ]; then \ + find /etc/apt/sources.list.d -name '*.sources' -exec \ + sed -i 's|http://archive.ubuntu.com|http://azure.archive.ubuntu.com|g' {} + 2>/dev/null || true; \ + find /etc/apt/sources.list.d -name '*.sources' -exec \ + sed -i 's|http://security.ubuntu.com|http://azure.archive.ubuntu.com|g' {} + 2>/dev/null || true; \ + fi + # Install additional tools for debugging, healthcheck, and SSL Bump # apt_update_retry: retries up to 3 times with backoff to survive prolonged mirror syncs RUN set -eux; \