## Supported Versions

| Version | Supported |
|---------|-----------|
| 0.1.x   | ✅        |
## Reporting a Vulnerability

If you discover a security vulnerability in llm-patch, please do not open a public issue. Instead, report it privately through one of these channels:

- **Email**: send details to the maintainers via the contact information in the repository.
- **GitHub Security Advisories**: use GitHub's private vulnerability reporting to submit a confidential report.
When reporting, please include:

- A description of the vulnerability
- Steps to reproduce
- The potential impact
- A suggested fix (if any)
You can expect:

- Acknowledgment: within 48 hours
- Initial assessment: within 1 week
- Fix or mitigation: depends on severity; typically within 2 weeks for critical issues
## Scope

This policy covers the llm-patch Python library itself. Security issues in dependencies (PyTorch, transformers, PEFT, etc.) should be reported to those projects directly.
## Security Considerations

- **Model weights**: Generated LoRA adapters contain numerical weight matrices, not executable code. However, always load adapters from trusted sources.
- **safetensors format**: llm-patch uses the `safetensors` format specifically because it prevents arbitrary code execution during deserialization (unlike pickle-based formats).
- **File watching**: The `watchdog`-based file watcher only reads files from directories you explicitly configure. It does not execute file contents.
- **No network access**: llm-patch does not make network requests. All processing is local.
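To illustrate why the safetensors point holds, here is a minimal, stdlib-only sketch of reading a `.safetensors` header: the format is an 8-byte little-endian length followed by a JSON metadata header and raw tensor bytes, so parsing it never executes code (unlike unpickling). The tensor name `lora_A` below is illustrative, not a real llm-patch identifier; in practice you would use the `safetensors` library itself.

```python
import json
import struct

def read_safetensors_header(path):
    """Read the JSON metadata header of a .safetensors file.

    Layout: 8-byte little-endian header length, JSON header,
    then raw tensor bytes. Pure data -- no code execution.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(header_len))
    return header

# Demo: build a tiny in-spec file and inspect it without loading tensors.
meta = {"lora_A": {"dtype": "F32", "shape": [1], "data_offsets": [0, 4]}}
blob = json.dumps(meta).encode()
with open("demo.safetensors", "wb") as f:
    # Header length, JSON header, then one float32 value (1.0).
    f.write(struct.pack("<Q", len(blob)) + blob + b"\x00\x00\x80\x3f")

print(sorted(read_safetensors_header("demo.safetensors")))  # → ['lora_A']
```

Contrast this with pickle-based checkpoints, where merely loading the file can run arbitrary bytecode embedded by an attacker.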