Merged
9 changes: 9 additions & 0 deletions docs/src/content/docs/index.mdx
@@ -59,6 +59,15 @@ Developed by GitHub Next and Microsoft Research, workflows run with added guardr

Workflows run with read-only permissions by default. Write operations require explicit approval through sanitized [safe outputs](/gh-aw/reference/glossary/#safe-outputs) (pre-approved GitHub operations), with sandboxed execution, tool allowlisting, and network isolation ensuring AI agents operate within controlled boundaries.

Every workflow runs through a three-stage security pipeline before any write operation can occur:

```mermaid
flowchart LR
Agent["🤖 Agent"] --> Detection["🔍 Detection"] --> SafeOutputs["✅ Safe Outputs"]
Comment on lines +62 to +66
Copilot AI Mar 25, 2026


The text/diagram here describes a “three-stage security pipeline” as Agent → Detection → Safe Outputs, but elsewhere the docs define the sequential security pipeline as the post-agent jobs (Detection → Safe Outputs → Conclusion) and note that the detection job runs when safe outputs are configured (e.g., docs/src/content/docs/reference/compilation-process.md:166 and docs/src/content/docs/reference/glossary.md:90). To avoid an inaccurate mental model, please reword this to scope it to workflows with write operations/safe-outputs enabled and align the stages with the established terminology (or include the conclusion stage if you want a 3-stage pipeline).

Suggested change

````diff
-Every workflow runs through a three-stage security pipeline before any write operation can occur:
-
-```mermaid
-flowchart LR
-Agent["🤖 Agent"] --> Detection["🔍 Detection"] --> SafeOutputs["✅ Safe Outputs"]
+For workflows that perform write operations via safe outputs, proposed changes go through a three-stage post-agent security pipeline before any write operation can occur:
+
+```mermaid
+flowchart LR
+Detection["🔍 Detection"] --> SafeOutputs["✅ Safe Outputs"] --> Conclusion["🧩 Conclusion"]
````

```

See the [Security Architecture](/gh-aw/introduction/architecture/) for a full breakdown of the layered defense-in-depth model.
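As an illustration of the guardrails described above (read-only defaults, pre-approved safe outputs, network isolation), a workflow's frontmatter might look roughly like this. This is a sketch only: the field names (`safe-outputs`, `create-issue`, `network.allowed`) are assumptions drawn from the gh-aw documentation this PR touches, not verified against the current schema.

```yaml
# Illustrative sketch; field names are assumed, not schema-verified.
on:
  schedule:
    - cron: "0 9 * * 1-5"    # weekday mornings
permissions:
  contents: read             # the agent job itself stays read-only
safe-outputs:
  create-issue:              # the only pre-approved write operation
    max: 1
network:
  allowed:
    - api.github.com         # all other egress is blocked
```

Under this model the agent never holds write credentials; any `create-issue` request it emits is sanitized and applied by the post-agent pipeline instead.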

## Example: Daily Issues Report

Here's a simple workflow that runs daily to create an upbeat status report: