Runtime-secured AI tooling framework for production-grade LLM applications, protecting against prompt injection, jailbreaks, and adversarial attacks.
-
Updated
Apr 2, 2026 - Python
Runtime-secured AI tooling framework for production-grade LLM applications, protecting against prompt injection, jailbreaks, and adversarial attacks.
🛡️ Secure your LLM applications with PromptShields, a framework designed for real-time protection against prompt injection and data leaks.
Add a description, image, and links to the hacking-tools-ai topic page so that developers can more easily learn about it.
To associate your repository with the hacking-tools-ai topic, visit your repo's landing page and select "manage topics."