Model Lifecycle Manager - Systemd-Based LLM Service Management
Part of: Kernel-Level AI Enhancements (Tier 1 - User-Space)
Description
Manage LLM models as first-class system services using systemd. This brings "systemctl for AI models" to Cortex Linux.
Effort: 2-3 weeks | Bounty: $150
The Solution
cortex model register llama-70b --path meta-llama/Llama-2-70b-hf --backend vllm --gpus 0,1
cortex model start llama-70b
cortex model status
cortex model enable llama-70b   # auto-start on boot
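For illustration, the unit that cortex model register might generate for the example above could look like the following. This is a sketch only: the unit name, ExecStart invocation, and the specific directive values are assumptions, not the shipped implementation.

```ini
# /etc/systemd/system/cortex-model-llama-70b.service (hypothetical)
[Unit]
Description=Cortex LLM model: llama-70b (vLLM backend)
After=network-online.target

[Service]
Type=simple
ExecStart=/usr/bin/vllm serve meta-llama/Llama-2-70b-hf --tensor-parallel-size 2
Environment=CUDA_VISIBLE_DEVICES=0,1
Restart=on-failure
RestartSec=5

# Resource limits and security hardening (see Features below)
MemoryMax=96G
TasksMax=512
NoNewPrivileges=true
ProtectSystem=strict

[Install]
WantedBy=multi-user.target
```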
Features
- Systemd service generation for any LLM backend
- Multi-backend support (vLLM, llama.cpp, Ollama, TGI)
- Automatic GPU memory configuration
- Health check monitoring with auto-restart (sketched after this list)
- Resource limits via systemd (CPU, memory, PIDs)
- Security hardening (NoNewPrivileges, ProtectSystem)
- SQLite database for configuration persistence
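As a rough sketch of the health-check idea in Python: poll the backend's HTTP health endpoint and restart the model's unit after consecutive failures. The endpoint URL, intervals, and function names are assumptions; the real model_lifecycle.py may differ.

```python
import subprocess
import time
import urllib.request

def check_health(url: str, timeout: float = 5.0) -> bool:
    """Probe the backend's health endpoint; any HTTP or network error counts as unhealthy."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def monitor(unit: str, url: str, interval: float = 30.0, max_failures: int = 3) -> None:
    """Restart the model's systemd unit after max_failures consecutive failed probes."""
    failures = 0
    while True:
        if check_health(url):
            failures = 0
        else:
            failures += 1
            if failures >= max_failures:
                subprocess.run(["systemctl", "restart", unit], check=False)
                failures = 0
        time.sleep(interval)

# e.g. monitor("cortex-model-llama-70b.service", "http://127.0.0.1:8000/health")
```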
Acceptance Criteria
- cortex model register creates valid systemd service
- cortex model start/stop works reliably
- Health checks trigger auto-restart on failure
- Unit tests pass with >80% coverage
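One way the first criterion might be exercised in a test (hypothetical helper name and signature; the actual test_model_lifecycle.py may be structured differently):

```python
# Hypothetical sketch; generate_unit() and its parameters are assumptions.
from model_lifecycle import generate_unit

def test_register_emits_valid_unit():
    unit_text = generate_unit(
        name="llama-70b",
        model_path="meta-llama/Llama-2-70b-hf",
        backend="vllm",
        gpus="0,1",
    )
    # A valid unit needs the three standard sections and a restart policy.
    for section in ("[Unit]", "[Service]", "[Install]"):
        assert section in unit_text
    assert "Restart=" in unit_text
```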
Files
Complete implementation available:
- model_lifecycle.py (~800 lines)
- test_model_lifecycle.py (~600 lines)
- README_MODEL_LIFECYCLE.md
Priority
High - Core infrastructure for AI-native OS vision