This software is provided as-is under the MIT License. AI agents launched by crew operate with
full access to your filesystem and can create, modify, or delete files without confirmation. The
authors are not responsible for any damage, data loss, or unintended changes caused by agent
actions. Use in trusted environments only and always review agent prompts and configuration before
running.
crew orchestrates AI CLI agents that execute with full access to your codebase and system. Understanding the trust boundaries is critical:
- `crew.yaml` config: Defines agent commands and environment variables. Anyone who can edit this file controls what processes crew launches. Treat it like a shell script.
- Prompt files: Fed directly to AI agents. Prompt injection in these files can cause agents to execute arbitrary actions via their CLI tools.
- AI agent output: Agents may create, modify, or delete files. Their actions are only bounded by the permissions of the CLI tool they run through.
- User-supplied agent names: Validated against `[A-Za-z0-9_-]` (max 32 chars) to prevent injection into file paths and `yq` queries.
- File paths: Validated to reject `..` traversal, absolute paths, and null bytes.
- Intervals: Validated as positive integers with upper bounds. (A sketch of these checks follows this list.)
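A rough sketch of what these checks could look like in bash — illustrative only; the function names and the interval bound are assumptions, not crew's actual code:

```bash
# Hypothetical versions of the validation rules listed above.
validate_agent_name() {
    # Only [A-Za-z0-9_-], between 1 and 32 characters.
    [[ "$1" =~ ^[A-Za-z0-9_-]{1,32}$ ]]
}

validate_path() {
    # Reject absolute paths and ".." traversal. (Shell arguments cannot
    # carry NUL bytes, so that check belongs wherever raw input is read.)
    [[ "$1" != /* && "$1" != *..* ]]
}

validate_interval() {
    # Positive integer with an upper bound (3600 is an arbitrary example);
    # 10# forces decimal so leading zeros aren't read as octal.
    [[ "$1" =~ ^[0-9]+$ ]] && (( 10#$1 >= 1 && 10#$1 <= 3600 ))
}

# Usage: validate_agent_name "$name" || { echo "invalid agent name" >&2; exit 1; }
```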
As of v0.1.0, crew does not use `eval` to execute agent commands. Commands from `crew.yaml` are parsed into arrays and executed directly, which prevents shell injection through config values.
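To make the difference concrete, here is an illustration; the command and the hostile value are invented for the example:

```bash
# A hostile value that might arrive via a config file.
model='sonnet; rm -rf "$HOME"'

# Unsafe: eval re-parses the string, so the ';' starts a second command.
#   eval "claude --model $model"     # would also run: rm -rf "$HOME"

# Safe: an argv array is handed to the program verbatim, never re-parsed.
argv=(claude --model "$model")
"${argv[@]}"                         # claude sees one literal argument
```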
If you need complex commands (pipes, redirects, shell operators), use a wrapper script instead of inline shell in the config.
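For instance, a pipeline can live in a small script that `crew.yaml` points at. The paths, the `claude -p` invocation, and the `command:` key below are illustrative assumptions, not crew's documented schema:

```bash
#!/usr/bin/env bash
# .crew/scripts/run-reviewer.sh (hypothetical path) -- keeps shell plumbing
# out of crew.yaml, whose command stays a plain argv:
#   command: .crew/scripts/run-reviewer.sh
set -euo pipefail

# Pipes and redirects are fine here; the config never re-parses them.
mkdir -p .crew/logs
claude -p "$(cat .crew/prompts/reviewer.md)" 2>&1 | tee -a .crew/logs/reviewer.log
```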
Never put API keys in `crew.yaml` if the file is committed to version control. Set API keys in your shell environment instead:

```bash
export ANTHROPIC_API_KEY="sk-..."
```

The `env` field in `crew.yaml` is intended for non-secret configuration like `ANTHROPIC_BASE_URL` and `ANTHROPIC_MODEL`. If you must use `env` for keys in a local-only config, ensure `.crew/` is in your `.gitignore`.
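A one-line guard keeps such a local-only config out of version control:

```bash
# Add .crew/ to .gitignore unless an identical line already exists.
grep -qxF '.crew/' .gitignore 2>/dev/null || echo '.crew/' >> .gitignore
```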
AI agents are susceptible to prompt injection. Be aware that:
- Any file an agent reads could contain adversarial instructions
- Agent prompts in `.crew/prompts/` or `.design/prompts/` define agent behavior
- In cross-review mode, agents read each other's output, creating a potential injection chain
Mitigations:
- Review agent prompts before use
- Monitor agent logs (`crew logs <AGENT>`; see the example after this list)
- Use `--dangerously-skip-permissions` only when you understand the implications
- Run agents in isolated environments when possible
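A quick way to spot-check what an agent has been doing — the agent name and grep patterns here are illustrative:

```bash
# Scan a named agent's log for actions that deserve a closer look.
crew logs reviewer | grep -nEi 'rm -|curl |chmod |sudo ' || echo "no matches"
```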
PID files in `.crew/run/` use flock-based locking to prevent race conditions. On systems without `flock`, crew falls back to best-effort (unlocked) operation. This is acceptable for a local development tool but should not be relied upon in multi-user environments.
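The pattern is essentially the classic flock(1) idiom; the sketch below is illustrative rather than crew's actual implementation:

```bash
# Take an exclusive, non-blocking lock before touching the PID file.
pidfile=".crew/run/agent.pid"
exec 9>"${pidfile}.lock"             # fd 9 holds the lock for this process
if flock -n 9; then
    echo "$$" > "$pidfile"           # safe: we own the lock
else
    echo "crew: $pidfile is held by another process" >&2
    exit 1
fi
# On systems without flock(1), the fallback is an unlocked write --
# acceptable on a single-user dev machine, racy in shared environments.
```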
If you discover a security vulnerability, please report it responsibly:
- Do NOT open a public issue
- Email the maintainers or use GitHub's private vulnerability reporting
- Include steps to reproduce and potential impact
- Allow reasonable time for a fix before disclosure
- Run `crew` only in project directories you trust
- Review `.crew/crew.yaml` before running `crew start`
- Keep AI CLI tools updated to their latest versions
- Monitor agent logs for unexpected behavior
- Use `crew stop` to cleanly shut down all agents
- Add `.crew/` and `.design/` to `.gitignore` if they contain sensitive config