# 💉 NullSec PromptInject

**Prompt Injection Payload Library & Tester**


Curated prompt injection payloads and automated testing for LLM applications.


## 🎯 Overview

NullSec PromptInject is a curated library of prompt injection payloads and an automated tester for LLM-powered applications. It targets system prompt extraction, instruction hijacking, context manipulation, and output steering across chatbots, RAG pipelines, AI agents, and function-calling systems.

## ⚡ Features

| Feature | Description |
|---------|-------------|
| Payload Library | 500+ categorised prompt injection payloads |
| System Prompt Extraction | Techniques to leak hidden system instructions |
| Instruction Override | Payloads that hijack model behaviour |
| Context Manipulation | Indirect injection via RAG document poisoning |
| Function Call Abuse | Exploits for tool-use / function-calling APIs |
| Multi-Language | Payloads in EN, ZH, JA, DE, FR, ES, AR |
| Auto-Tester | Batch-test payloads against target endpoints |
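To illustrate the Encoding Bypass feature above, here is a minimal sketch of how a payload might be wrapped in a common filter-evasion encoding. The `encode_payload` helper is hypothetical, written for illustration only, and is not the library's actual API:

```python
import base64
import codecs

# Hypothetical helper illustrating the "Encoding Bypass" idea: wrap a payload
# in an encoding that naive keyword filters will not match, while a separate
# part of the prompt instructs the model to decode and follow it.
def encode_payload(payload: str, scheme: str = "base64") -> str:
    if scheme == "base64":
        return base64.b64encode(payload.encode("utf-8")).decode("ascii")
    if scheme == "rot13":
        return codecs.encode(payload, "rot13")
    if scheme == "hex":
        return payload.encode("utf-8").hex()
    raise ValueError(f"unknown scheme: {scheme}")
```

A robust input filter therefore needs to normalise or decode inputs before matching, not just scan the raw text for banned phrases.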

## 📋 Payload Categories

| Category | Count | Targets |
|----------|-------|---------|
| System Prompt Extraction | 80+ | Chatbots, assistants |
| Instruction Override | 90+ | Any LLM app |
| Jailbreak Chains | 60+ | Safety-aligned models |
| Indirect Injection | 50+ | RAG, email agents |
| Function Call Abuse | 40+ | Tool-use agents |
| Output Steering | 45+ | Content generators |
| Encoding Bypass | 35+ | Input filters |
| Multi-turn Escalation | 30+ | Conversation systems |

## 🚀 Quick Start

```bash
# Test all payloads against a target endpoint
nullsec-promptinject test --target http://chatbot.example.com/api --category all

# Extract system prompt
nullsec-promptinject extract --target http://chatbot.example.com/api --techniques top20

# Test RAG indirect injection
nullsec-promptinject indirect --target http://rag.example.com/query --inject-doc malicious.txt

# List available payload categories
nullsec-promptinject list --categories
```
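Conceptually, a batch-test command like `test --category all` boils down to a loop: send each payload to the target, run a success check on the response, and record the result. The sketch below assumes injected `send` and `detect` callables (so it runs without a live endpoint); it is a structural assumption, not the tool's real internals:

```python
from typing import Callable, Iterable

# Sketch of a batch-testing loop: the transport (`send`) and the success
# check (`detect`) are supplied by the caller, keeping the loop itself
# free of network code and easy to test offline.
def run_batch(
    payloads: Iterable[str],
    send: Callable[[str], str],
    detect: Callable[[str], bool],
) -> list[dict]:
    results = []
    for payload in payloads:
        response = send(payload)
        results.append(
            {"payload": payload, "response": response, "hit": detect(response)}
        )
    return results
```

In practice `send` would wrap an HTTP POST to the target API, and `detect` would be a category-specific check such as the leak heuristic for system-prompt extraction.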

## 🔗 Related Projects

| Project | Description |
|---------|-------------|
| nullsec-llmred | LLM red-teaming framework |
| nullsec-adversarial | Adversarial ML attack toolkit |
| nullsec-modelaudit | ML model security auditing |
| nullsec-datapoisoning | Training data poisoning detection |
| nullsec-linux | Security Linux distro (140+ tools) |

## ⚠️ Legal

For authorised security testing only. Never use prompt injection against systems without explicit written permission.

## 📜 License

MIT License - @bad-antics

