
Face Service (FastAPI + InsightFace + ONNXRuntime)

A FastAPI service for face detection and embedding extraction using InsightFace with ONNXRuntime (CPU or GPU).

Features

  • REST API (/analyze) → upload an image, get bounding boxes, detection scores, embeddings.
  • Status endpoint (/status) → shows backend (CPU/GPU), model bundle, and providers.
  • Configurable via environment variables.
  • Runs manually with uvicorn or as a systemd service on Ubuntu.

Project structure

face-service/
├── app.py                   # FastAPI app
├── requirements-cpu.txt     # Dependencies for CPU build
├── requirements-gpu.txt     # Dependencies for GPU build
├── requirements-dev.txt     # Development tools (linting)
├── face.env.example         # Example environment config
├── face-service@.service    # Systemd unit template
└── README.md                # This file

Installation

1. Clone the project

You can clone this repo anywhere (e.g., /home/<user>/Develop/face-service or /opt/face-service):

git clone https://github.com/goruck/face-service.git
cd face-service

⚠️ Important: If you move the repo later, update the paths in your .env file (see Configuration).

2. Set up environment

Create a Python 3.9+ virtual environment:

python3 -m venv .venv
source .venv/bin/activate
pip install --upgrade pip

Install dependencies (choose either CPU or GPU build):

CPU build

pip install -r requirements-cpu.txt

GPU build (CUDA)

pip install -r requirements-gpu.txt

Development tools (optional, for linting)

pip install -r requirements-dev.txt

3. Configuration

The service is configured through environment variables defined in a .env file.

An example file is included: face.env.example.

Copy it to /etc/face-service/ and edit as needed:

sudo mkdir -p /etc/face-service
sudo cp face.env.example /etc/face-service/face.env
sudo nano /etc/face-service/face.env

Key values to set:

  • APP_DIR → path where you cloned this repo
  • VENV_DIR → path to the venv inside this repo (usually <APP_DIR>/.venv)
  • PORT → service port (e.g. 8000; use 8001 for a second instance)
  • USE_CPU, GPU_ID, FACE_MODEL → control runtime
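As a sketch, a filled-in face.env might look like the following. All values below are illustrative assumptions (including the model name, which is InsightFace's common buffalo_l bundle); adjust them to your install:

```shell
# Path where you cloned this repo (assumed here: /opt/face-service)
APP_DIR=/opt/face-service
# Virtual environment inside the repo
VENV_DIR=/opt/face-service/.venv
# Port this instance listens on
PORT=8000
# Runtime controls
USE_CPU=true
GPU_ID=0
FACE_MODEL=buffalo_l
```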

Usage

1. Run manually

From inside the project directory:

source .venv/bin/activate
python app.py

Or run with uvicorn (recommended):

source .venv/bin/activate
uvicorn app:app --host 0.0.0.0 --port 8000

2. Run as a systemd service (Ubuntu)

This repo includes a systemd unit template: face-service@.service.

2.1 Copy to systemd

sudo cp face-service@.service /etc/systemd/system/

2.2 Reload and enable

sudo systemctl daemon-reload
sudo systemctl enable face-service@lindo
sudo systemctl start face-service@lindo

Replace lindo with your Linux username.

The service will read /etc/face-service/face.env for settings.
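For orientation, a template unit of this kind typically resembles the sketch below. This is illustrative only (the paths and ExecStart line are assumptions); use the face-service@.service file shipped with the repo:

```ini
[Unit]
Description=Face Service (instance %i)
After=network.target

[Service]
# %i is the instance name after the @ (your username, e.g. face-service@lindo)
User=%i
EnvironmentFile=/etc/face-service/face.env
# Assumed clone location; systemd cannot expand EnvironmentFile
# variables directly in ExecStart, hence the shell wrapper.
WorkingDirectory=/opt/face-service
ExecStart=/bin/sh -c 'exec "$VENV_DIR/bin/uvicorn" app:app --host 0.0.0.0 --port "$PORT"'
Restart=on-failure

[Install]
WantedBy=multi-user.target
```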

2.3 Check logs

systemctl status face-service@lindo
journalctl -u face-service@lindo -f

3. API

Check service status

curl http://127.0.0.1:8000/status

Analyze an image

curl -F "file=@/path/to/image.jpg" http://127.0.0.1:8000/analyze
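For programmatic use, a minimal stdlib-only Python client might look like the sketch below. The response schema is an assumption here: the field names faces, bbox, and det_score are illustrative, so adapt summarize_faces to whatever your /analyze endpoint actually returns.

```python
import json
import mimetypes
import urllib.request
import uuid

BASE_URL = "http://127.0.0.1:8000"  # adjust host/port to your instance


def analyze(image_path: str) -> dict:
    """POST an image to /analyze as multipart/form-data and return parsed JSON."""
    boundary = uuid.uuid4().hex
    ctype = mimetypes.guess_type(image_path)[0] or "application/octet-stream"
    with open(image_path, "rb") as f:
        data = f.read()
    body = (
        f"--{boundary}\r\n"
        f'Content-Disposition: form-data; name="file"; filename="{image_path}"\r\n'
        f"Content-Type: {ctype}\r\n\r\n"
    ).encode() + data + f"\r\n--{boundary}--\r\n".encode()
    req = urllib.request.Request(
        f"{BASE_URL}/analyze",
        data=body,
        headers={"Content-Type": f"multipart/form-data; boundary={boundary}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


def summarize_faces(payload: dict) -> list:
    """Extract (bounding box, detection score) pairs from a response payload.

    Assumes a schema like {"faces": [{"bbox": [...], "det_score": ...}]};
    change the keys to match your /analyze output.
    """
    return [(face["bbox"], face["det_score"]) for face in payload.get("faces", [])]
```

Calling analyze("/path/to/image.jpg") then summarize_faces(...) prints one (box, score) pair per detected face once the service is running.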

CI

This repo ships with a GitHub Actions workflow at .github/workflows/ci.yml:

  • cpu-smoke: installs CPU requirements, lints (ruff), launches the app with USE_CPU=true, and curls /status.
  • gpu-install: verifies requirements-gpu.txt is installable on a standard runner (no GPU runtime test).

If your app file/module isn’t app.py with app = FastAPI(), edit the CI step:

uvicorn <your_module>:app --host 127.0.0.1 --port 8000
