A small YouTube-style demo: signup, login, upload a video, transcode it to multi-bitrate HLS in the background, and watch it from a public feed with a quality selector.
The point of this project is the architecture — direct-to-S3 uploads, async transcoding via a queue, and HLS playback against a private bucket — not a polished product.
| Layer | Tech |
|---|---|
| Backend | Go 1.25 (net/http, pgx/pgxpool, sqlc-generated queries, golang-migrate) |
| Worker | Go + ffmpeg / ffprobe (separate binary, same module) |
| Frontend | Next.js (App Router), React 18, hls.js |
| Database | Postgres 16 |
| Object storage | MinIO (S3-compatible, private bucket) |
| Queue | RabbitMQ |
| Auth | JWT (HS256) in an HttpOnly; Secure; SameSite=Lax cookie, 24h |
| Streaming | HLS — H.264 in MPEG-TS segments, played via hls.js |
```
┌──────────────────────────────────────────────────────┐
│                       Browser                         │
└───────┬────────────────────────────────┬─────────────┘
        │                                │
        │ 1. POST /api/videos            │ 2. PUT raw → S3
        │ 3. POST .../complete           │ 6. GET master.m3u8 →
        │                                │        rendition.m3u8 →
        ▼                                │        .ts segments
┌─────────────────┐                      ▼
│  Go API (8080)  │               ┌───────────┐
│                 │◄───────────────┤   MinIO   │  ← presigned PUT/GET
│  publishes job  │               │  (9000)   │
└────────┬────────┘               └─────┬─────┘
         │                              ▲
         ▼                              │ 5. uploads thumb,
   ┌──────────┐                         │    rendition .m3u8 + .ts,
   │ RabbitMQ │                         │    master.m3u8
   └────┬─────┘                         │
        │                               │
        ▼                               │
┌──────────────────┐                    │
│    Go Worker     │────────────────────┘
│     (ffmpeg)     │
│ 4. download raw, │
│    transcode,    │
│    upload HLS    │
└──────────────────┘
```
- Client requests an upload intent — the server inserts a `Video` row with `status=uploading` and returns a presigned PUT URL (sketched below).
- Client uploads the raw file directly to MinIO.
- Client calls `complete`. The server flips status to `processing` and publishes a transcode job to RabbitMQ.
- Worker pulls the job, downloads the raw file, runs `ffprobe` for source dimensions, and generates a JPEG thumbnail.
- For each rendition (360p / 720p / 1080p, skipping any taller than the source) it runs `ffmpeg` to produce an HLS playlist + `.ts` segments, uploads them to S3, and upserts a `VideoAsset` row (see the ffmpeg sketch below).
- Builds `master.m3u8`, uploads it, and marks the video `ready`.
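A minimal sketch of the intent step (the first bullet above), assuming the minio-go v7 client that `internal/common/s3` presumably wraps; the function shape, 15-minute expiry, and error handling are illustrative, not the repo's actual code:

```go
package video

import (
	"context"
	"fmt"
	"time"

	"github.com/minio/minio-go/v7"
)

// createUploadIntent sketches step 1: presign a PUT for the raw upload.
// In the real flow the Video row (status=uploading) is inserted first,
// and the key follows the raw/{video_id}/original.{ext} layout.
func createUploadIntent(ctx context.Context, mc *minio.Client, videoID, ext string) (string, error) {
	key := fmt.Sprintf("raw/%s/original.%s", videoID, ext)
	// Expiry is a guess; the browser PUTs the file straight at this URL.
	u, err := mc.PresignedPutObject(ctx, "onetube", key, 15*time.Minute)
	if err != nil {
		return "", err
	}
	return u.String(), nil
}
```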
The worker is idempotent — re-running a job overwrites the same S3 keys and upserts the same asset rows.
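The per-rendition `ffmpeg` invocation might look roughly like the sketch below. The bitrates, segment length, and function shape are assumptions, not the repo's exact settings; `-y` plus fixed output names is one way the overwrite-on-rerun behavior falls out:

```go
package transcode

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// transcodeRendition sketches one rendition pass: raw file in,
// playlist.m3u8 + seg_NNN.ts out, matching the S3 key layout.
func transcodeRendition(ctx context.Context, rawPath, outDir string, height int, vBitrate string) error {
	if err := os.MkdirAll(outDir, 0o755); err != nil {
		return err
	}
	cmd := exec.CommandContext(ctx, "ffmpeg",
		"-y", // overwrite outputs, so reprocessing a job is safe
		"-i", rawPath,
		"-vf", fmt.Sprintf("scale=-2:%d", height), // scale to target height, keep width even
		"-c:v", "libx264", "-b:v", vBitrate,
		"-c:a", "aac", "-b:a", "128k",
		"-hls_time", "6", // ~6s MPEG-TS segments (illustrative)
		"-hls_playlist_type", "vod",
		"-hls_segment_filename", filepath.Join(outDir, "seg_%03d.ts"),
		filepath.Join(outDir, "playlist.m3u8"),
	)
	cmd.Stderr = os.Stderr // surface ffmpeg errors in worker logs
	return cmd.Run()
}
```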
The bucket is private. To make HLS work without leaking object access, the API serves the playlist files itself and signs every segment URL on the fly:
- `GET /api/videos/{id}` returns `master_playlist_url` pointing at the API.
- hls.js fetches `GET /api/videos/{id}/hls/master.m3u8` — the server reads it from S3 and returns the body unchanged. Variant entries stay relative.
- hls.js resolves variants to `GET /api/videos/{id}/hls/{quality}/playlist.m3u8` — the server reads it from S3 and rewrites every `.ts` line into a presigned MinIO URL (4h TTL).
- hls.js fetches the `.ts` segments directly from MinIO with valid signatures.
The API never touches video bytes — only tiny text manifests pass through it.
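A sketch of that rewrite step, assuming the minio-go v7 client; the bucket name and key layout follow this README, while the handler shape and error handling are illustrative:

```go
package video

import (
	"bufio"
	"context"
	"fmt"
	"io"
	"strings"
	"time"

	"github.com/minio/minio-go/v7"
)

// rewriteRenditionPlaylist streams a rendition playlist from S3 to w,
// replacing each .ts media line with a presigned GET URL (4h TTL).
func rewriteRenditionPlaylist(ctx context.Context, mc *minio.Client, videoID, quality string, w io.Writer) error {
	key := fmt.Sprintf("hls/%s/%s/playlist.m3u8", videoID, quality)
	obj, err := mc.GetObject(ctx, "onetube", key, minio.GetObjectOptions{})
	if err != nil {
		return err
	}
	defer obj.Close()

	sc := bufio.NewScanner(obj)
	for sc.Scan() {
		line := sc.Text()
		if strings.HasSuffix(line, ".ts") { // media lines only; #EXT tags pass through untouched
			segKey := fmt.Sprintf("hls/%s/%s/%s", videoID, quality, line)
			u, err := mc.PresignedGetObject(ctx, "onetube", segKey, 4*time.Hour, nil)
			if err != nil {
				return err
			}
			line = u.String()
		}
		if _, err := fmt.Fprintln(w, line); err != nil {
			return err
		}
	}
	return sc.Err()
}
```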
```
.
├── client/                   # Next.js app
│   ├── app/                  # App Router pages: /, /login, /signup, /upload, /watch/[id]
│   └── features/             # Server actions, components, types per feature
├── server/
│   ├── cmd/
│   │   ├── api/              # HTTP API entrypoint
│   │   └── worker/           # Transcode worker entrypoint
│   ├── internal/
│   │   ├── auth/             # JWT middleware, cookies, password hashing
│   │   ├── common/
│   │   │   ├── gen/          # sqlc-generated queries
│   │   │   ├── queue/        # RabbitMQ publisher / consumer
│   │   │   └── s3/           # MinIO client + key helpers + presigning
│   │   ├── config/
│   │   ├── transcode/        # ffmpeg pipeline, master playlist builder
│   │   ├── user/             # signup, login, profile
│   │   └── video/            # upload intent, watch, feed, playlists
│   ├── migrations/           # golang-migrate SQL files
│   └── queries/              # sqlc input
├── docs/plans/               # Design notes per feature
├── docker-compose.yaml       # Postgres, MinIO, RabbitMQ
└── README.md
```
`users` — id, email (unique), password_hash (argon2id), username (unique), display_name, avatar_key, created_at

`videos` — id, uploader_id, title, description, duration_seconds, thumbnail_key, visibility (public/unlisted/private), status (uploading/processing/ready/failed), view_count, created_at, published_at, deleted_at

`video_assets` — id, video_id, quality, codec, container, storage_key, file_size_bytes, bitrate, width, height
Indexes:
- `videos (uploader_id)`
- `videos (status, visibility, published_at DESC)` — feed
- `video_assets (video_id)`
- `users (email)` unique, `users (username)` unique
```
raw/{video_id}/original.{ext}            # lifecycle-deleted 7d after transcode
hls/{video_id}/master.m3u8
hls/{video_id}/{quality}/playlist.m3u8
hls/{video_id}/{quality}/seg_NNN.ts
thumbnails/{video_id}/default.jpg
avatars/{user_id}/avatar.jpg
```
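The key helpers in `internal/common/s3` presumably mirror this layout one-to-one. A hypothetical version (names and shapes are mine, not the repo's):

```go
package s3

import "fmt"

// Key builders mirroring the bucket layout above.
func RawKey(videoID, ext string) string { return fmt.Sprintf("raw/%s/original.%s", videoID, ext) }
func MasterKey(videoID string) string   { return fmt.Sprintf("hls/%s/master.m3u8", videoID) }
func PlaylistKey(videoID, quality string) string {
	return fmt.Sprintf("hls/%s/%s/playlist.m3u8", videoID, quality)
}
func SegmentKey(videoID, quality string, n int) string {
	return fmt.Sprintf("hls/%s/%s/seg_%03d.ts", videoID, quality, n)
}
func ThumbnailKey(videoID string) string { return fmt.Sprintf("thumbnails/%s/default.jpg", videoID) }
```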
All routes live under `/api`. Auth = the `onetube_session` JWT cookie.
| Method | Path | Auth | Purpose |
|---|---|---|---|
| POST | `/auth/signup` | — | Create user, set cookie |
| POST | `/auth/login` | — | Verify credentials, set cookie |
| POST | `/auth/logout` | — | Clear cookie |
| GET | `/auth/me` | ✓ | Return current user |
| POST | `/videos` | ✓ | Create upload intent (returns presigned PUT URL) |
| POST | `/videos/{id}/complete` | ✓ | Mark uploaded → enqueue transcode |
| GET | `/videos/{id}` | — | Watch info (or `{status: "processing"}`) |
| GET | `/videos/{id}/hls/master.m3u8` | — | Master playlist |
| GET | `/videos/{id}/hls/{quality}/playlist.m3u8` | — | Rendition playlist with presigned segments |
| POST | `/videos/{id}/view` | — | Increment view count |
| GET | `/videos/feed` | — | Public feed, cursor-paginated |
| GET | `/videos/mine` | ✓ | Caller's uploads |
| DELETE | `/videos/{id}` | ✓ | Soft delete (ownership-checked) |
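Routes marked ✓ sit behind a small JWT middleware. A sketch, assuming `github.com/golang-jwt/jwt/v5` — this README only pins HS256 and the cookie name; the repo's actual library and claims handling may differ:

```go
package auth

import (
	"net/http"

	"github.com/golang-jwt/jwt/v5"
)

// requireAuth gates a handler behind the onetube_session cookie.
func requireAuth(secret []byte, next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		c, err := r.Cookie("onetube_session")
		if err != nil {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		tok, err := jwt.Parse(c.Value, func(t *jwt.Token) (any, error) {
			return secret, nil // HS256 shared secret (JWT_SECRET)
		}, jwt.WithValidMethods([]string{"HS256"}))
		if err != nil || !tok.Valid {
			http.Error(w, "unauthorized", http.StatusUnauthorized)
			return
		}
		next.ServeHTTP(w, r) // claims extraction / context injection omitted
	})
}
```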
- Go 1.25+
- Node 20+
- Docker
- `ffmpeg` / `ffprobe` on PATH (the worker shells out to them)
- `migrate` CLI (`brew install golang-migrate` or equivalent) and `sqlc` if regenerating queries
```
cp .env.example .env   # fill in DB_*, JWT_SECRET, etc.
docker compose up -d
```

That gives you Postgres on :5433, MinIO on :9000 (console :9001, login minioadmin / minioadmin), RabbitMQ on :5672 (UI :15672).
Create the bucket once (MinIO console → Buckets → Create → name `onetube`, keep it private).
```
cd server
migrate -path migrations -database "$DATABASE_URL" up
```

Then start the API:

```
cd server
go run ./cmd/api
```

It listens on :8080. Required env:
| Var | Example |
|---|---|
| `DATABASE_URL` | `postgres://onetube:onetube@localhost:5433/onetube?sslmode=disable` |
| `JWT_SECRET` | any 32+ char string |
| `S3_ENDPOINT` | `localhost:9000` |
| `S3_PUBLIC_ENDPOINT` | `localhost:9000` (what the browser uses; differs in prod) |
| `S3_ACCESS_KEY` / `S3_SECRET_KEY` | `minioadmin` / `minioadmin` |
| `S3_BUCKET` | `onetube` |
| `S3_USE_SSL` | `false` |
| `API_PUBLIC_BASE_URL` | `http://localhost:8080` (used to build master-playlist URLs) |
| `RABBIT_URL` | `amqp://guest:guest@localhost:5672/` |
| `TRANSCODE_QUEUE` | `transcode.jobs` |
| `PORT` | `8080` |
In another terminal, with the same env:
```
cd server
go run ./cmd/worker
```

Then the frontend:

```
cd client
npm install
npm run dev
```

Open http://localhost:3000.
These are non-negotiable for any change in this repo:
- The API never touches video bytes. Uploads go client → S3, segments go S3 → client, both via presigned URLs. Only tiny text manifests pass through the API.
- Ownership checked on every mutation: `WHERE id = ? AND uploader_id = ?`. No role-based auth.
- Cursor pagination on `(published_at, id)` or `(created_at, id)`. Never `OFFSET`. (See the feed-query sketch after this list.)
- JOIN to avoid N+1. The feed query joins `videos` with `users` for the uploader name.
- The worker is idempotent. Reprocessing a job must be safe.
- Soft delete videos via `deleted_at`. A separate cleanup job removes S3 objects later.
- `view_count` is denormalized on the video row; incremented on view.
- JWT lives in an HttpOnly cookie. Never in localStorage.
- `context.Context` is the first arg on any Go function doing I/O.
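A sketch of the feed query those rules imply. The SQL shape is the point; exact columns, ID types, and the Go wrapper are illustrative (the repo generates its queries with sqlc):

```go
package video

import (
	"context"
	"time"

	"github.com/jackc/pgx/v5"
	"github.com/jackc/pgx/v5/pgxpool"
)

// Keyset pagination: rows strictly before the cursor, ordered to match
// the (status, visibility, published_at DESC) index. The first page
// would drop the cursor predicate (or pass a max cursor).
const feedPageSQL = `
SELECT v.id, v.title, v.published_at, u.username
FROM videos v
JOIN users u ON u.id = v.uploader_id      -- uploader name in one query, no N+1
WHERE v.status = 'ready'
  AND v.visibility = 'public'
  AND v.deleted_at IS NULL
  AND (v.published_at, v.id) < ($1, $2)   -- row-value cursor, never OFFSET
ORDER BY v.published_at DESC, v.id DESC
LIMIT $3`

func feedPage(ctx context.Context, pool *pgxpool.Pool, cursorAt time.Time, cursorID string, limit int32) (pgx.Rows, error) {
	return pool.Query(ctx, feedPageSQL, cursorAt, cursorID, limit)
}
```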
This is a demo, not production. In particular:
- No rate limiting, captcha, or email verification.
- `unlisted` and `private` visibility are stored but not enforced on the playlist endpoints — any URL with a valid `{id}` can fetch the master playlist.
- View counts are incremented on first play with no deduplication.
- MinIO CORS for `Range` headers may need configuring if you proxy MinIO behind a non-default origin.
- No tests yet.