Problem
Workflows that fetch secrets from external managers (Conjur, HashiCorp Vault) using dedicated GitHub Actions cannot use strict mode. These actions require authentication credentials passed via with:, but strict mode classifies any secrets.* expression in with: as unsafe and blocks compilation.
```yaml
steps:
  - uses: my-org/secrets-action@v2
    with:
      username: ${{ secrets.VAULT_USERNAME }} # blocked by strict mode
      password: ${{ secrets.VAULT_PASSWORD }} # blocked by strict mode
      secret_map: ${{ inputs.secret_map }}
```
Why step-level env: (ADR-0002) doesn't fully solve this
ADR-0002 (#25779) allows secrets.* in step-level env: bindings, which is the correct pattern for run: steps. However, uses: action steps receive their configuration via with:, not env:. While the INPUT_* env var passthrough exists at the runner level, it's an undocumented implementation detail — not a supported pattern for passing action inputs.
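To sketch the contrast under the ADR-0002 semantics described above: a run: step can receive the secret through a step-level env: binding, but an equivalent uses: step has no supported channel other than with: (the fetch-secrets.sh script name is illustrative):

```yaml
steps:
  # Allowed under ADR-0002: the secret flows through step-level env:
  # into a run: script (script name is hypothetical)
  - run: ./fetch-secrets.sh
    env:
      VAULT_PASSWORD: ${{ secrets.VAULT_PASSWORD }}

  # No equivalent for an action step: inputs must go through with:,
  # which strict mode blocks
  - uses: my-org/secrets-action@v2
    with:
      password: ${{ secrets.VAULT_PASSWORD }} # blocked by strict mode
```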
Why a separate custom job doesn't solve this
Moving secret-fetching to a separate jobs: entry and passing values via job outputs fails at runtime. GitHub Actions masks values registered via core.setSecret() across job output boundaries, so the downstream agent job receives *** instead of actual values.
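The failing split-job pattern looks roughly like this (job, step, and output names are illustrative):

```yaml
jobs:
  fetch-secrets:
    runs-on: ubuntu-latest
    outputs:
      # anything registered via core.setSecret() is masked here
      api_token: ${{ steps.fetch.outputs.api_token }}
    steps:
      - id: fetch
        uses: my-org/secrets-action@v2
        with:
          password: ${{ secrets.VAULT_PASSWORD }}

  agent:
    needs: fetch-secrets
    runs-on: ubuntu-latest
    steps:
      # the downstream job sees *** instead of the real value
      - run: ./use-token.sh "${{ needs.fetch-secrets.outputs.api_token }}"
```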
Current workaround
Setting strict: false disables all strict-mode protections just to bypass this single restriction. This forces workflows to give up write-permission checks, network validation, SHA-pinning enforcement, and other protections that strict mode provides.
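In frontmatter terms, the only current escape hatch is the global toggle; only the strict key itself is taken from the source, and the surrounding frontmatter is elided:

```yaml
# workflow frontmatter: disables ALL strict-mode protections
# (write-permission checks, network validation, SHA-pinning enforcement)
# just to pass two credentials to one action step
strict: false
```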
Proposed Solution
Extend the secret classification to treat secrets.* in step-level with: as safe (or provide a per-step opt-in), based on the following:
- with: inputs are passed to external actions, not interpolated into shell scripts. The action handles the value internally, not the agent.
- The GitHub Actions runner already masks with: values derived from secrets, providing the same protection as env: bindings.
- Custom steps: run outside the agent sandbox (per the frontmatter docs), so secret values in with: are not leaked to the AI model.
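A per-step opt-in could look something like this; the allow-secrets key is purely illustrative, not an existing option:

```yaml
steps:
  - uses: my-org/secrets-action@v2
    allow-secrets: true # hypothetical per-step opt-in; key name illustrative
    with:
      username: ${{ secrets.VAULT_USERNAME }}
      password: ${{ secrets.VAULT_PASSWORD }}
```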
Context
This is a common pattern for enterprise teams that use centralized secret managers — the workaround of disabling strict mode entirely is not an acceptable long-term solution.