[Kernel Feature] Model Lifecycle Manager - Systemd-Based LLM Service Management #220

@mikejmorgan-ai


Part of: Kernel-Level AI Enhancements (Tier 1 - User-Space)

Description

Manage LLM models as first-class system services using systemd. This brings "systemctl for AI models" to Cortex Linux.

Effort: 2-3 weeks | Bounty: $150

The Solution

cortex model register llama-70b --path meta-llama/Llama-2-70b-hf --backend vllm --gpus 0,1
cortex model start llama-70b
cortex model status
cortex model enable llama-70b  # auto-start on boot
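
Under the hood, register would render a systemd unit and install it under /etc/systemd/system. A minimal sketch of that step in Python; ModelConfig, render_unit, and the unit layout are illustrative assumptions, not the actual model_lifecycle.py API:

from dataclasses import dataclass

@dataclass
class ModelConfig:
    name: str        # e.g. "llama-70b"
    exec_start: str  # full backend launch command for ExecStart
    gpus: str        # CUDA_VISIBLE_DEVICES value, e.g. "0,1"

def render_unit(cfg: ModelConfig) -> str:
    """Render a hardened systemd service unit for one model."""
    return f"""\
[Unit]
Description=Cortex model service: {cfg.name}
After=network-online.target

[Service]
Environment=CUDA_VISIBLE_DEVICES={cfg.gpus}
ExecStart={cfg.exec_start}
# restart on failure; pairs with external health checks
Restart=on-failure
RestartSec=5
# resource limits (values would come from the model's config)
MemoryMax=64G
TasksMax=4096
# security hardening
NoNewPrivileges=true
ProtectSystem=strict

[Install]
WantedBy=multi-user.target
"""

# e.g. written to /etc/systemd/system/cortex-model-llama-70b.service,
# followed by `systemctl daemon-reload` before start/enable.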

Features

  • Systemd service generation for any LLM backend
  • Multi-backend support (vLLM, llama.cpp, Ollama, TGI)
  • Automatic GPU memory configuration
  • Health check monitoring with auto-restart
  • Resource limits via systemd (CPU, memory, PIDs)
  • Security hardening (NoNewPrivileges, ProtectSystem)
  • SQLite database for configuration persistence
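
A minimal sketch of the SQLite persistence layer behind register; the schema and function names are assumptions for illustration:

import sqlite3

def init_db(path: str = "cortex-models.db") -> sqlite3.Connection:
    """Open (or create) the model registry database."""
    conn = sqlite3.connect(path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS models (
            name    TEXT PRIMARY KEY,  -- e.g. llama-70b
            path    TEXT NOT NULL,     -- local path or HF repo id
            backend TEXT NOT NULL,     -- vllm | llama.cpp | ollama | tgi
            gpus    TEXT NOT NULL      -- e.g. "0,1"
        )
    """)
    return conn

def register(conn: sqlite3.Connection, name: str, path: str,
             backend: str, gpus: str) -> None:
    """Persist one model's configuration (idempotent)."""
    conn.execute("INSERT OR REPLACE INTO models VALUES (?, ?, ?, ?)",
                 (name, path, backend, gpus))
    conn.commit()

# register(init_db(), "llama-70b", "meta-llama/Llama-2-70b-hf", "vllm", "0,1")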

Acceptance Criteria

  • cortex model register creates valid systemd service
  • cortex model start/stop works reliably
  • Health checks trigger auto-restart on failure (see the sketch after this list)
  • Unit tests pass with >80% coverage
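
A minimal sketch of a health-check watcher that restarts the unit after repeated failed probes; the endpoint, unit name, and thresholds are assumptions (e.g. vLLM's API server exposes a /health route):

import subprocess
import time
import urllib.request

def healthy(url: str, timeout: float = 5.0) -> bool:
    """True if the backend answers its health endpoint with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, HTTP error, ...
        return False

def watch(unit: str, url: str, interval: float = 30.0) -> None:
    """Restart the unit after three consecutive failed probes."""
    failures = 0
    while True:
        failures = 0 if healthy(url) else failures + 1
        if failures >= 3:  # tolerate transient blips
            subprocess.run(["systemctl", "restart", unit], check=False)
            failures = 0
        time.sleep(interval)

# watch("cortex-model-llama-70b.service", "http://127.0.0.1:8000/health")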

Files

Complete implementation available:

  • model_lifecycle.py (~800 lines)
  • test_model_lifecycle.py (~600 lines)
  • README_MODEL_LIFECYCLE.md

Priority

High - core infrastructure for the AI-native OS vision
