AI-powered security testing framework for Python functions. Uses LLMs to analyze code, discover vulnerabilities, and generate targeted test cases.
Install from PyPI:

```bash
pip install llm-fuzz
```

Or install from source:

```bash
git clone https://github.com/yourusername/llm-fuzz.git
cd llm-fuzz
pip install -e .
```

LLM Fuzz uses smolagents, which supports multiple LLM providers through LiteLLM, so you need an API key for your chosen provider. Supported providers include:
- OpenAI (GPT-4, GPT-3.5, etc.)
- Anthropic (Claude)
- Google (Gemini)
- Many others via LiteLLM
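The provider is picked by the LiteLLM-style model string you pass to `FuzzConfig` (shown in full in the quick start below). As a rough sketch, assuming the remaining `FuzzConfig` fields fall back to their defaults, switching providers is just a matter of changing that string; the model IDs here mirror the ones used elsewhere in this README and may differ from what your account has access to:

```python
from llm_fuzz import FuzzConfig

# Illustrative only: LiteLLM infers the provider from the model string.
openai_cfg = FuzzConfig(model="gpt-4")
anthropic_cfg = FuzzConfig(model="claude-3-opus")
google_cfg = FuzzConfig(model="gemini/gemini-2.5-flash")
```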
Before using llm-fuzz, export your API key:

```bash
# For OpenAI
export OPENAI_API_KEY="your-key-here"

# For Anthropic (Claude)
export ANTHROPIC_API_KEY="your-key-here"

# For Google (Gemini)
export GEMINI_API_KEY="your-key-here"
```

Quick start:

```python
from llm_fuzz import llm_fuzz, FuzzConfig
# Your function to test
def divide(x, y):
    return x / y

# Wrapper for testing: the fuzzer calls this with a dict of generated inputs
def divide_wrapper(input_data):
    x = input_data.get("x", 1)
    y = input_data.get("y", 1)
    return divide(x, y)

# Configure and run the fuzzer
config = FuzzConfig(
    model="gemini/gemini-2.5-flash",  # or "gpt-4", "claude-3-opus", etc.
    max_vulnerabilities=3,
    tests_per_vulnerability=2,
)

report = llm_fuzz(
    target_function=divide,
    test_function=divide_wrapper,
    config=config,
    fuzz_params=["x", "y"],
)

print(f"Tests: {report.passed_count}/{report.total_count} passed")
report.save_to_file("fuzz_report.json")
```
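The report object exposes at least the pass/fail counters used above, which is enough to gate a CI job on the fuzzer's findings. A minimal sketch, assuming only the `passed_count` and `total_count` fields shown in the example:

```python
import sys

# Fail the build if the fuzzer produced any failing test case.
failed = report.total_count - report.passed_count
if failed > 0:
    print(f"{failed} fuzz test(s) failed; see fuzz_report.json for details")
    sys.exit(1)
```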
model="gemini/gemini-2.5-flash", # LLM model to use
max_vulnerabilities=5, # Number of vulnerabilities to discover
tests_per_vulnerability=3, # Test cases per vulnerability
max_discovery_steps=15, # Max steps for discovery
temperature=0.1, # LLM temperature
verbose=False # Enable verbose logging
)See the examples/ directory:
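For quick local runs you can trade coverage for cost by shrinking the discovery budget. A sketch using only the options listed above; the particular values are arbitrary, and the cost estimate in the comment is an inference from the parameter names rather than documented behaviour:

```python
from llm_fuzz import FuzzConfig

# Small budget for a fast smoke run: fewer vulnerabilities, fewer tests,
# fewer discovery steps. Roughly max_vulnerabilities * tests_per_vulnerability
# test cases should be generated, so this stays at ~2 cases.
smoke_config = FuzzConfig(
    model="gemini/gemini-2.5-flash",
    max_vulnerabilities=1,
    tests_per_vulnerability=2,
    max_discovery_steps=5,
    verbose=True,  # log more detail while iterating locally
)
```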
See the examples/ directory:

- simple_division.py - Basic fuzzing example
Run the example:

```bash
export GEMINI_API_KEY="your-key"
python examples/simple_division.py
```

Run the test suite:

```bash
pytest
```