| Version | Supported |
|---|---|
| 2.2.x | Yes |
| 2.1.x | Yes |
| 2.0.x | No |
If you discover a security vulnerability in the ai-stack Helm chart, please report it responsibly.
- Preferred: Use GitHub Security Advisories to report privately
- Alternative: Email r.mednitzer@outlook.com with the subject `[ai-stack] Security vulnerability report`

Include:
- Description of the vulnerability
- Steps to reproduce
- Affected component(s) and version(s)
- Potential impact assessment
- Suggested fix (if any)
| Step | Timeline |
|---|---|
| Acknowledgement of report | Within 48 hours |
| Initial triage and severity assessment | Within 5 business days |
| Fix development and testing | Depends on severity (see below) |
| Coordinated disclosure | After fix is available |
| Severity | Fix Target | Disclosure |
|---|---|---|
| Critical (active exploitation, data breach risk) | 48 hours | After fix deployed |
| High (exploitable with moderate effort) | 7 days | After fix released |
| Medium (limited impact or difficult to exploit) | 30 days | After fix released |
| Low (informational, hardening) | 90 days | With next scheduled release |
This policy covers:
- The ai-stack Helm chart (templates, values, helpers)
- CI/CD pipeline configuration (`.github/workflows/`)
- Documentation that could lead to insecure configurations
This policy does not cover vulnerabilities in upstream container images (Open WebUI, Ollama, Qdrant, etc.). Report those to their respective projects. However, if an upstream vulnerability creates risk in the ai-stack deployment context, we welcome reports so we can issue guidance or workarounds.
Per CRA Art. 13(8) and industry best practice:
- We will work with reporters to understand and reproduce the vulnerability
- We will develop and test a fix
- We will coordinate disclosure timing with the reporter
- We will credit the reporter (unless they prefer anonymity)
- We will not take legal action against good-faith security researchers
There is currently no bug bounty program. We gratefully acknowledge all responsible disclosures in our release notes (with permission).
The ai-stack implements the following security controls by default:
- Pod Security Admission: Restricted profile (`runAsNonRoot`, `drop: ALL`, `seccompProfile: RuntimeDefault`)
- Network isolation: Default-deny NetworkPolicy with per-component allowlists
- Secret management: Auto-generated 64-byte keys; external secret manager support
- Service account isolation: Per-component service accounts with `automountServiceAccountToken: false`
- Read-only filesystem: Enforced where possible (Qdrant, Valkey, Tika, SearXNG, OTel)
- Supply chain security: CycloneDX SBOM, Syft deep SBOMs, CVE scanning (Grype), Dependabot for GitHub Actions; container images tracked manually
- PII redaction: OTel Collector strips email, SSN, and credit card patterns
- Telemetry opt-out: `DO_NOT_TRACK=true`, `ANONYMIZED_TELEMETRY=false`
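
These defaults might be toggled from a values override, roughly along the following lines. The key names here are illustrative assumptions, not the chart's actual schema; consult the chart's `values.yaml` for the real keys:

```yaml
# Hypothetical values.yaml excerpt -- key names are assumptions for
# illustration only.
networkPolicy:
  enabled: true                 # default-deny plus per-component allowlists
serviceAccount:
  automountServiceAccountToken: false
securityContext:                # Pod Security "restricted" profile settings
  runAsNonRoot: true
  seccompProfile:
    type: RuntimeDefault
  capabilities:
    drop: ["ALL"]
openwebui:
  extraEnv:                     # telemetry opt-out for the UI component
    - name: DO_NOT_TRACK
      value: "true"
    - name: ANONYMIZED_TELEMETRY
      value: "false"
```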
For details, see ENTERPRISE_EVALUATION.md and LICENSE_COMPLIANCE.md.
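
As a rough illustration, the PII redaction described above could be expressed with the OpenTelemetry Collector's `redaction` processor (from opentelemetry-collector-contrib). This is a sketch only; the regex patterns and pipeline wiring are assumptions, and the chart's actual collector configuration may differ:

```yaml
# Sketch of a redaction processor config; patterns are illustrative
# and intentionally simple, not production-grade PII detection.
processors:
  redaction:
    allow_all_keys: true
    blocked_values:
      - '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}'   # email address
      - '\b\d{3}-\d{2}-\d{4}\b'                            # US SSN
      - '\b(?:\d[ -]*?){13,16}\b'                          # credit card number
service:
  pipelines:
    traces:
      processors: [redaction]
```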