| Version | Supported |
|---|---|
| 0.4.x | ✅ |
| < 0.4 | ❌ |
We take security seriously. If you discover a security vulnerability in LLMKube, please report it responsibly.
- Do NOT open a public GitHub issue for security vulnerabilities
- Email security concerns to contact@defilan.com, or use GitHub's private vulnerability reporting to open a security advisory
Please include:
- Description of the vulnerability
- Steps to reproduce
- Potential impact
- Suggested fix (if any)
You can expect the following response timeline:
- Acknowledgment: Within 48 hours
- Initial Assessment: Within 7 days
- Resolution Target: Within 30 days for critical issues
This security policy applies to:
- LLMKube controller
- CLI (`llmkube`)
- Helm charts
- Container images published to GHCR
Out of scope:
- Third-party dependencies (report to upstream)
- LLM model vulnerabilities (report to model providers)
- Self-hosted llama.cpp issues (report to the llama.cpp project)
When deploying LLMKube:
- Use RBAC: Restrict who can create InferenceService resources
- Network Policies: Isolate inference pods from sensitive workloads
- Resource Limits: Always set CPU/memory limits to prevent DoS
- Image Verification: Use image digests in production Helm values
- Air-gapped Models: Pre-download models for sensitive environments
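The network-isolation and resource-limit recommendations above can be sketched as Kubernetes manifests. This is a minimal illustration, not LLMKube's actual configuration: the `app: llmkube-inference` label, the `llmkube` namespace, and the image reference are all assumed placeholders.

```yaml
# Hypothetical NetworkPolicy isolating inference pods from other workloads.
# The pod label `app: llmkube-inference` is an assumed example label.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-inference
  namespace: llmkube
spec:
  podSelector:
    matchLabels:
      app: llmkube-inference
  policyTypes:
    - Ingress
  ingress:
    - from:
        # Only allow traffic from namespaces labeled as gateways
        - namespaceSelector:
            matchLabels:
              role: gateway
---
# Always bound CPU/memory on inference containers to limit DoS impact,
# and pin images by digest in production (digest shown is a placeholder).
apiVersion: v1
kind: Pod
metadata:
  name: inference-example
  namespace: llmkube
spec:
  containers:
    - name: llm
      image: ghcr.io/example/llmkube@sha256:...
      resources:
        limits:
          cpu: "4"
          memory: 8Gi
```

Combined with RBAC rules that restrict who may create InferenceService resources, these settings keep a compromised or misbehaving inference pod from reaching sensitive workloads or exhausting node resources.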