Curated prompt injection payloads and automated testing for LLM applications
NullSec PromptInject is a curated library of prompt injection payloads and an automated tester for LLM-powered applications. It targets system prompt extraction, instruction hijacking, context manipulation, and output steering across chatbots, RAG pipelines, AI agents, and function-calling systems.

| Feature | Description |
|---|---|
| Payload Library | 500+ categorised prompt injection payloads |
| System Prompt Extraction | Techniques to leak hidden system instructions |
| Instruction Override | Payloads that hijack model behaviour |
| Context Manipulation | Indirect injection via RAG document poisoning |
| Function Call Abuse | Exploit tool-use / function-calling APIs |
| Multi-Language | Payloads in EN, ZH, JA, DE, FR, ES, AR |
| Auto-Tester | Batch-test payloads against target endpoints |

| Category | Count | Targets |
|---|---|---|
| System Prompt Extraction | 80+ | Chatbots, assistants |
| Instruction Override | 90+ | Any LLM app |
| Jailbreak Chains | 60+ | Safety-aligned models |
| Indirect Injection | 50+ | RAG, email agents |
| Function Call Abuse | 40+ | Tool-use agents |
| Output Steering | 45+ | Content generators |
| Encoding Bypass | 35+ | Input filters |
| Multi-turn Escalation | 30+ | Conversation systems |
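A categorised library like the one above lends itself to a simple filterable catalog. The sketch below is illustrative only: the `Payload` schema, IDs, and category slugs are assumptions, not the tool's real data model, and payload text is deliberately redacted.

```python
from dataclasses import dataclass

@dataclass
class Payload:
    """One catalogued payload (illustrative schema, not the tool's real format)."""
    id: str
    category: str
    language: str = "en"
    template: str = ""

# Tiny illustrative catalog; real payload strings are omitted on purpose.
CATALOG = [
    Payload(id="spe-001", category="system-prompt-extraction",
            template="<redacted extraction probe>"),
    Payload(id="ovr-001", category="instruction-override",
            template="<redacted override probe>"),
    Payload(id="spe-002", category="system-prompt-extraction", language="de",
            template="<redacted extraction probe>"),
]

def by_category(catalog: list[Payload], category: str) -> list[Payload]:
    """Return all payloads in the given category."""
    return [p for p in catalog if p.category == category]

print(len(by_category(CATALOG, "system-prompt-extraction")))  # → 2
```

Keeping categories as plain string slugs makes it cheap to add new classes (e.g. per-language variants) without schema changes.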
```bash
# Test all payloads against a target endpoint
nullsec-promptinject test --target http://chatbot.example.com/api --category all

# Extract system prompt
nullsec-promptinject extract --target http://chatbot.example.com/api --techniques top20

# Test RAG indirect injection
nullsec-promptinject indirect --target http://rag.example.com/query --inject-doc malicious.txt

# List available payload categories
nullsec-promptinject list --categories
```

| Project | Description |
|---|---|
| nullsec-llmred | LLM red-teaming framework |
| nullsec-adversarial | Adversarial ML attack toolkit |
| nullsec-modelaudit | ML model security auditing |
| nullsec-datapoisoning | Training data poisoning detection |
| nullsec-linux | Security Linux distro (140+ tools) |
For authorized security testing only. Never use prompt injection against systems without explicit written permission.
MIT License • @bad-antics
Part of the NullSec AI/ML Security Suite