Pretends to be qBittorrent or SABnzbd so your *arr apps can use TorBox as a download backend.
When Sonarr or Radarr sends a torrent or NZB, TorBoxarr accepts it through the standard qBittorrent/SABnzbd API, submits it to TorBox, polls until the remote download finishes, pulls the files down locally, and places them where the *arr app expects to find them.
If TorBoxarr has been useful to you and you want to support the project, you can use my TorBox referral when you subscribe:
- Referral code: `605c7a7b-6913-4ec3-868c-061aa6694f43`
- Referral link: torbox.app/subscription?referral=605c7a7b-6913-4ec3-868c-061aa6694f43
TorBoxarr is a single Go binary that runs an HTTP server and a set of background workers. The server handles two API surfaces:
- /api/v2/... implements enough of the qBittorrent Web API for Sonarr/Radarr torrent integration (login, add, info, delete, categories, transfer info)
- /api and /sabnzbd/api implement enough of the SABnzbd API for usenet integration (addfile, addurl, queue, history, delete)
Behind the API, a worker pipeline processes each download through a series of states:
```
accepted -> submit_pending -> remote_queued/remote_active -> local_download_pending
  -> local_downloading -> local_verify -> completed -> remove_pending -> removed
```
Each stage has its own worker loop:
- Submitter sends the torrent/NZB to TorBox's API. Retries with exponential backoff on transient failures.
- Poller checks TorBox for download progress. Handles the queued-to-active transition, including recovery if the queue ID disappears before an active ID appears.
- Downloader fetches completed files from TorBox to local staging. Supports resumable HTTP range downloads and checkpoints progress to SQLite periodically.
- Finalizer verifies all parts are on disk, then moves the staging directory to the completed path.
- Remover cleans up local files when the *arr app sends a delete request.
- Pruner periodically deletes old removed job records and expired qBit session tokens.
State is stored in a local SQLite database. The schema is managed with goose migrations embedded in the binary.
- Go 1.26+ (for building from source)
- A TorBox account with an API token
- Sonarr, Radarr, or a similar *arr application
TorBoxarr reads its config from environment variables. Copy the example file and fill in your credentials:
```
cp .env.example .env
```

Required variables:
| Variable | What it is |
|---|---|
| `TORBOXARR_TORBOX_API_TOKEN` | Your TorBox API token |
| `TORBOXARR_QBIT_PASSWORD` | Password for the fake qBittorrent login (you pick this, then use it in Sonarr/Radarr) |
| `TORBOXARR_SAB_API_KEY` | API key for the fake SABnzbd interface (same idea, you pick it) |
Optional overrides:
| Variable | Default | What it does |
|---|---|---|
| `TORBOXARR_SERVER_BASE_URL` | `http://localhost:8085` | Base URL the server reports to clients |
| `TORBOXARR_DATA_ROOT` | `/data` | Root directory for staging, completed files, and payloads |
| `TORBOXARR_DATABASE_PATH` | `/config/torboxarr.db` | SQLite database path; the container stores state under `/config` |
| `TORBOXARR_LOG_LEVEL` | `INFO` | Log verbosity: `DEBUG`, `INFO`, `WARN`, or `ERROR` |
| `TORBOXARR_SAB_NZB_KEY` | falls back to `TORBOXARR_SAB_API_KEY` | Explicit key for the SABnzbd-compatible endpoint; omit it to reuse the SAB API key |
Docker-specific runtime variables used by the bundled compose file. Set these to the same UID/GID that Sonarr and Radarr use on the host, so TorBoxarr can write to the same download and category folders:
| Variable | Default | What it does |
|---|---|---|
| `PUID` | none | UID the container drops to before starting TorBoxarr; required for the bundled Docker setup |
| `PGID` | none | GID the container drops to before starting TorBoxarr; required for the bundled Docker setup |
Set up the *arr download clients after TorBoxarr is running:
For torrent downloads, add a qBittorrent download client in Sonarr/Radarr:
- Host: wherever TorBoxarr is running
- Port: `8085`
- Username: `admin`
- Password: whatever you set in `TORBOXARR_QBIT_PASSWORD`
qBittorrent categories map to subfolders under `TORBOXARR_DATA_ROOT/completed`. If you want a new category for Sonarr or Radarr, create it through the qBittorrent API first, or create the folder manually under the completed root, before using it.
For usenet, add a SABnzbd download client:
- Host/port: same as above
- API key: whatever you set in `TORBOXARR_SAB_API_KEY`
The normal Sonarr/Radarr SABnzbd client flow will only use categories that already exist. If you need a new category, create the folder first under TORBOXARR_DATA_ROOT/completed/<category> or create the category through qBittorrent so TorBoxarr can discover it.
If you use a SAB category that points to a folder, make sure TorBoxarr runs with the same UID/GID as Sonarr and Radarr. Otherwise the container may not be able to create or move files into that folder.
```
docker compose -f deploy/docker-compose.yml up -d
```

The compose file publishes `8085:8085` on the host, bind-mounts `../data` into `/data` for downloads and payloads, creates a named Docker volume for `/config`, stores the SQLite database at `/config/torboxarr.db`, and reads `../.env` for configuration.
On startup the container runs as root briefly, recursively assigns /config to PUID:PGID, then drops privileges and starts TorBoxarr as that user. This makes named-volume /config setups work without manual chown.
The automatic ownership repair only applies to /config. ../data is left untouched, so you should still make sure your downloads/media path is writable by the same UID/GID if you expect shared write access with Sonarr, Radarr, or other tools.
If you choose Docker's user: setting instead, the container starts directly as that user and skips the built-in ownership repair. In that mode /config must already be writable or the container will exit early with a clear startup error.
```
go build -o bin/torboxarr ./cmd/torboxarr
./bin/torboxarr
```

Direct binary runs default to `/config/torboxarr.db`. If you want a different path for local development, set `TORBOXARR_DATABASE_PATH`.
The binary runs database migrations automatically on startup, so you don't need to run goose separately unless you want to manage migrations by hand:
```
goose -dir internal/store/migrations sqlite3 /config/torboxarr.db up
```

Other development commands:

```
gofmt -w ./cmd ./internal
go test -count=1 ./...
go test -coverprofile=coverage.out ./...
go tool cover -func=coverage.out
go build -o bin/torboxarr ./cmd/torboxarr
```

Project layout:

```
cmd/torboxarr/   Entry point, wiring, server startup
internal/
  api/           HTTP handlers for qBit and SABnzbd APIs
  auth/          Session management (qBit cookies, SAB API keys)
  compat/        Translates internal job state to qBit/SAB response formats
  config/        Environment variable parsing and validation
  files/         Filesystem layout, range downloader, staging/completed paths
  store/         SQLite database, job and transfer part persistence, migrations
  torbox/        TorBox API client with rate limiting
  worker/        Background worker loops (submit, poll, download, finalize, remove, prune)
deploy/
  Dockerfile          Multi-stage build plus runtime entrypoint for PUID/PGID startup
  docker-compose.yml  Compose setup with bind-mounted ../data and named-volume /config
```