A FastAPI service for face detection and embedding extraction using InsightFace with ONNXRuntime (CPU or GPU).
- REST API (`/analyze`) → upload an image, get bounding boxes, detection scores, embeddings.
- Status endpoint (`/status`) → shows backend (CPU/GPU), model bundle, and providers.
- Configurable via environment variables.
- Runs manually with `uvicorn` or as a systemd service on Ubuntu.
```
face-service/
├── app.py                 # FastAPI app
├── requirements-cpu.txt   # Dependencies for CPU build
├── requirements-gpu.txt   # Dependencies for GPU build
├── face.env.example       # Example environment config
├── face-service@.service  # Systemd unit template
└── README.md              # This file
```
You can clone this repo anywhere (e.g., /home/<user>/Develop/face-service or /opt/face-service):
```bash
git clone https://github.com/goruck/face-service.git
cd face-service
```

Create a Python 3.9+ virtual environment:

```bash
python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip
```

Install dependencies (choose either the CPU or GPU build):

```bash
# CPU build
pip install -r requirements-cpu.txt

# GPU build (CUDA)
pip install -r requirements-gpu.txt

# Development tools (optional, for linting)
pip install -r requirements-dev.txt
```

Environment is controlled by variables in an `.env` file.
An example env file is included: face.env.example.
Copy it to /etc/face-service/ and edit as needed:
```bash
sudo mkdir -p /etc/face-service
sudo cp face.env.example /etc/face-service/face.env
sudo nano /etc/face-service/face.env
```

Key values to set:
- `APP_DIR` → path where you cloned this repo
- `VENV_DIR` → path to the venv inside this repo (usually `<APP_DIR>/.venv`)
- `PORT` → service port (e.g. 8000; use 8001 for another instance)
- `USE_CPU`, `GPU_ID`, `FACE_MODEL` → control the runtime
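A plausible `/etc/face-service/face.env` built from the variables above (the values are illustrative, not the shipped defaults — check `face.env.example`; `buffalo_l` is a common InsightFace bundle name):

```
APP_DIR=/home/<user>/Develop/face-service
VENV_DIR=/home/<user>/Develop/face-service/.venv
PORT=8000
USE_CPU=true
GPU_ID=0
FACE_MODEL=buffalo_l
```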
From inside the project directory:
```bash
source .venv/bin/activate
python app.py
```

Or run with uvicorn (recommended):

```bash
source .venv/bin/activate
uvicorn app:app --host 0.0.0.0 --port 8000
```

This repo includes a systemd unit template: `face-service@.service`.
```bash
sudo cp face-service@.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable face-service@lindo
sudo systemctl start face-service@lindo
```

Replace `lindo` with your Linux username.
The service will read /etc/face-service/face.env for settings.
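For reference, a unit template of this kind typically looks like the sketch below. This is an illustration of the pattern only (the `%i` specifier becomes the username after the `@`); the authoritative version is `face-service@.service` in the repo:

```ini
[Unit]
Description=Face detection/embedding service for user %i
After=network.target

[Service]
User=%i
EnvironmentFile=/etc/face-service/face.env
# Use a shell so APP_DIR/VENV_DIR/PORT from the env file are expanded
ExecStart=/bin/bash -c 'cd "$APP_DIR" && exec "$VENV_DIR/bin/uvicorn" app:app --host 0.0.0.0 --port "$PORT"'
Restart=on-failure

[Install]
WantedBy=multi-user.target
```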
```bash
systemctl status face-service@lindo
journalctl -u face-service@lindo -f
```

Check service status:

```bash
curl http://127.0.0.1:8000/status
```

Analyze an image:

```bash
curl -F "file=@/path/to/image.jpg" http://127.0.0.1:8000/analyze
```

This repo ships with a GitHub Actions workflow at `.github/workflows/ci.yml`:
- `cpu-smoke`: installs CPU requirements, lints (`ruff`), launches the app with `USE_CPU=true`, and curls `/status`.
- `gpu-install`: verifies `requirements-gpu.txt` is installable on a standard runner (no GPU runtime test).
If your app file/module isn’t `app.py` with `app = FastAPI()`, edit the CI step:

```bash
uvicorn <your_module>:app --host 127.0.0.1 --port 8000
```
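The embeddings returned by `/analyze` can be compared directly for face matching. Below is a minimal sketch, assuming the response is JSON with a `faces` list whose items carry an `embedding` vector — the field names here are assumptions, so adjust them to the actual response shape:

```python
import json
import math


def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (pure Python)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)


# Hypothetical /analyze responses for two images (shape is an assumption).
resp_a = json.loads('{"faces": [{"score": 0.91, "embedding": [0.1, 0.3, 0.9]}]}')
resp_b = json.loads('{"faces": [{"score": 0.88, "embedding": [0.2, 0.25, 0.95]}]}')

sim = cosine_similarity(resp_a["faces"][0]["embedding"],
                        resp_b["faces"][0]["embedding"])
print(f"similarity: {sim:.3f}")  # values near 1.0 suggest the same person
```

A common pattern is to threshold this score (the right cutoff depends on the model bundle and should be tuned on your own data).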