A simple image annotation application for SAM, written in Python/Qt/PySide6, for learning purposes. Samnotator currently works with Promptable Visual Segmentation: points and bounding boxes.
First, install the dependencies; we use uv. Then, get a model (see below).
uv sync
source .venv/bin/activate
PYTHONPATH=src python -m samnotator.main --path test/objects.jpg # Optional path
Use the file menu to open a file or a directory. Add instances with the right panel, and add annotations by clicking on the image with a selected instance.
Select a kind of model (image/video) and an implementation (for now, only SAM3 wrappers, one for each kind). Load the model and launch the inference. Video mode assumes that all the loaded frames form a video.
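The wrapper idea can be sketched as a minimal interface. These class and method names are illustrative, not Samnotator's actual API:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field

@dataclass
class Prompts:
    # (x, y, is_positive) point prompts
    points: list[tuple[float, float, bool]] = field(default_factory=list)
    # (x0, y0, x1, y1) bounding-box prompts
    boxes: list[tuple[float, float, float, float]] = field(default_factory=list)

class SegmenterWrapper(ABC):
    """Common interface an image or a video segmenter wrapper could expose."""

    @abstractmethod
    def load(self, model_dir: str) -> None:
        """Load weights from a local directory (e.g. ./models/sam3)."""

    @abstractmethod
    def predict(self, frames: list, prompts: Prompts) -> list:
        """Return one mask per frame; a video wrapper treats `frames` as a clip."""

class DummyImageSegmenter(SegmenterWrapper):
    """Stand-in backend: yields one empty 'mask' per input frame."""

    def load(self, model_dir: str) -> None:
        self.model_dir = model_dir

    def predict(self, frames: list, prompts: Prompts) -> list:
        return [[] for _ in frames]
```

Both the image and the video implementations can then be swapped behind the same interface, which is what lets the UI offer a model-kind choice independently of the backend.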
- click left/right: positive/negative point
- click & drag left: bounding box (right click gives a negative bounding box, not used for now)
- click left on item: select/move
- After selection, click on empty scene: deselect
- left/right arrow: change frames
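The click-to-prompt mapping above can be sketched without Qt. The `to_prompt` helper and its button-name strings are hypothetical, for illustration only:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PointPrompt:
    x: float
    y: float
    positive: bool  # left click = positive, right click = negative

@dataclass(frozen=True)
class BoxPrompt:
    x0: float
    y0: float
    x1: float
    y1: float
    positive: bool  # right-drag gives a negative box, not used for now

def to_prompt(button: str, press: tuple, release: tuple):
    """Translate a press/release pair into a SAM-style prompt."""
    dragged = press != release
    positive = button == "left"
    if dragged:
        # Normalize the box so (x0, y0) is the top-left corner.
        (x0, y0), (x1, y1) = press, release
        return BoxPrompt(min(x0, x1), min(y0, y1),
                         max(x0, x1), max(y0, y1), positive)
    return PointPrompt(*press, positive)
```

In the real application the same decision would be made from Qt mouse events (press/move/release), but the prompt data that reaches the model is this simple.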
Models must be downloaded separately. For now, only SAM3 is implemented.
Check https://huggingface.co/facebook/sam3. Note that as of December 2025, the model is gated and requires access to be granted.
uv run huggingface-cli login
uv run huggingface-cli download facebook/sam3 --local-dir ./models/sam3
# Use `--local-dir-use-symlinks False` for a self-contained project:
# uv run huggingface-cli download facebook/sam3 --local-dir ./models/sam3 --local-dir-use-symlinks False
On WSL (Windows Subsystem for Linux), you may encounter graphics bugs. If so, try exporting:
export QT_QPA_PLATFORM=xcb