An AI agent that analyzes a YouTube homepage screenshot and returns the trending topic, why it’s trending, who’s winning, how to post, and 5 copyable short-form hooks. Stateless and MCP-based; built for creators and portfolios.
| Path | Role |
|---|---|
| app/ | Application code: API, analysis pipeline, MCP server, static UI, system prompt. Also app/requirements.txt, app/pyproject.toml, app/.env.example. |
| deploy/ | Deploy scripts and config: Dockerfile, docker-compose.yml, .dockerignore, k8s/, vm/. |
| readme-assets/ | Images, demo video (ai-yt-trend-analyser.mp4), and assets used by this README. |
| README.md | This file: steps to run locally and deploy. |
| .gitignore | Repo root only; .env and copied .dockerignore are ignored. |
Steps summary:
1. Set up env: `cp app/.env.example .env` and set `OPENAI_API_KEY`.
2. Install deps: `pip install -r app/requirements.txt`.
3. Run app: Web UI + API → `python -m app.api`; MCP server → `python -m app.server`.
4. Deploy: Docker (copy `deploy/.dockerignore` to the repo root first, then `docker compose -f deploy/docker-compose.yml up`); K8s → deploy/k8s/README.md; VM → deploy/vm/README.md.
Input (image placeholder): Screenshot of the YouTube homepage (recommended feed) — video grid with thumbnails, titles, channel names, view counts. Users drop this image in the upload area and click Analyze.
Input: YouTube homepage screenshot
Output (insight placeholder): The analyzer returns one structured result: topic (e.g. Business & Economics), trend strength (EARLY | HEATING_UP | SATURATED), why trending, who’s winning, how to post, and 5 copyable hooks with Copy buttons. Shown in the “Insight” section below the upload area.
To display the screenshots above, add input-screenshot.png (YouTube homepage) and output-insight.png (analyzer result) under readme-assets/. See readme-assets/README.md.
Demo video: (animated GIF so it shows in README preview and on GitHub)
```
┌─────────────────────────────────────────────────────────────────────────────┐
│ TrendSignal │
├─────────────────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────┐ ┌───────────────────────────────────────────────┐ │
│ │ Browser │ │ analysis.py (core) │ │
│ │ (upload UI) │ │ ┌─────────┐ ┌─────────┐ ┌─────────┐ │ │
│ └──────┬───────┘ │ │ Vision │→ │ Topics │→ │Strength │→ ... │ │
│ │ │ │ extract │ │ detect │ │estimate │ │ │
│ │ POST │ └────┬────┘ └────┬────┘ └────┬────┘ │ │
│ ▼ │ │ │ │ │ │
│ ┌──────────────┐ │ └────────────┴────────────┘ │ │
│ │ api.py │─────┼───────────────────────────────────────────────┤ │
│ │ (FastAPI) │ │ run_full_pipeline(image) → JSON insight │ │
│ └──────┬───────┘ └───────────────────────────────────────────────┘ │
│ │ ▲ │
│ │ :8001 │ same core │
│ │ │ │
│ ┌──────┴───────┐ ┌─────────┴────────────────────────────────────┐ │
│ │ server.py │ │ MCP tools (vision_extract, trend_detect, │ │
│ │ (MCP) │─────│ trend_estimate_strength, creator_advice) │ │
│ └──────────────┘ └──────────────────────────────────────────────┘ │
│ │ :8000/mcp │
│ ▼ │
│ Cursor / MCP client (orchestrator calls tools in order) │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
│
▼
OpenAI (vision + chat)
```
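The vision-extract step in the diagram sends the screenshot to GPT-4o as a base64 data URL inside a chat message. A minimal sketch of building such a request payload — the helper name and prompt text are illustrative, but the message layout follows the OpenAI chat-completions vision convention:

```python
import base64

def build_vision_payload(image_bytes: bytes, model: str = "gpt-4o") -> dict:
    """Build a chat-completions payload asking the vision model to list
    the videos visible in a YouTube homepage screenshot."""
    data_url = "data:image/png;base64," + base64.b64encode(image_bytes).decode()
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "List each video: title, creator, views, "
                             "hours_since_posted, emotional_tone. Reply as JSON."},
                    # The screenshot travels inline as a data URL.
                    {"type": "image_url", "image_url": {"url": data_url}},
                ],
            }
        ],
    }

payload = build_vision_payload(b"\x89PNG...")
# e.g. client.chat.completions.create(**payload) with an OpenAI client
```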
Flow
- Input: Screenshot (image file or base64).
- Vision extract: GPT-4o Vision reads the screenshot → list of videos (title, creator, views, hours_since_posted, emotional_tone).
- Topic detection: Chat groups videos into dominant topics (topic_name, video_count).
- Strength estimate: Chat estimates trend stage: EARLY | HEATING_UP | SATURATED.
- Creator advice: Chat returns why_trending, who_is_winning, posting_advice, and 5 hooks.
- Output: a single JSON object: `topic`, `trend_strength`, `why_trending`, `who_is_winning`, `how_to_post`, `hooks`.
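The steps above compose into one pipeline. Here is a runnable sketch with each model call replaced by a stub, so the composition and the output shape are visible offline — all function names and stub values are illustrative, the real implementation lives in app/analysis.py:

```python
# Stubbed versions of the four pipeline steps; the real ones call OpenAI.
def vision_extract(image: bytes) -> list[dict]:
    return [{"title": "AI beats the market", "creator": "FinChannel",
             "views": 1_200_000, "hours_since_posted": 6, "emotional_tone": "urgent"}]

def detect_topics(videos: list[dict]) -> list[dict]:
    return [{"topic_name": "Business & Economics", "video_count": len(videos)}]

def estimate_strength(topic: dict, videos: list[dict]) -> str:
    return "HEATING_UP"  # one of EARLY | HEATING_UP | SATURATED

def creator_advice(topic: dict, strength: str) -> dict:
    return {"why_trending": "...", "who_is_winning": "...",
            "posting_advice": "...", "hooks": ["h1", "h2", "h3", "h4", "h5"]}

def run_full_pipeline(image: bytes) -> dict:
    """Chain the four steps and flatten the result into one insight JSON."""
    videos = vision_extract(image)
    top_topic = detect_topics(videos)[0]           # dominant topic only
    strength = estimate_strength(top_topic, videos)
    advice = creator_advice(top_topic, strength)
    return {"topic": top_topic["topic_name"], "trend_strength": strength,
            "why_trending": advice["why_trending"],
            "who_is_winning": advice["who_is_winning"],
            "how_to_post": advice["posting_advice"], "hooks": advice["hooks"]}

insight = run_full_pipeline(b"...")  # → dict with the six fields listed above
```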
Components
| Layer | Role |
|---|---|
| app/analysis.py | Core pipeline and helpers; used by both API and MCP. |
| app/api.py | Web: upload UI + POST /analyze → runs full pipeline. |
| app/server.py | MCP: exposes the same 4 steps as callable tools. |
| OpenAI | Vision (screenshot → videos) and chat (topics, strength, advice). |
```shell
cd TrendSignal
cp app/.env.example .env
# Set OPENAI_API_KEY in .env
python3 -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate
pip install -r app/requirements.txt
```

(Optional: if you have uv, run `uv sync` from the `app/` directory.)

Web UI + API:

```shell
python -m app.api
```

Open http://localhost:8001 → drop a YouTube homepage screenshot → Analyze → get topic, strength, why trending, who’s winning, how to post, and copyable hooks.

MCP server:

```shell
python -m app.server
```

MCP endpoint: http://localhost:8000/mcp (streamable HTTP). Add this URL in Cursor (Settings → MCP) and use the system prompt in `app/SYSTEM_PROMPT.md` so the AI calls the tools in order.
- Create `.env` in the repo root (e.g. `cp app/.env.example .env`) with `OPENAI_API_KEY=sk-your-key`.
- From the repo root, copy the deploy ignore file so Docker uses it, then build and run:

  ```shell
  cp deploy/.dockerignore .
  docker build -f deploy/Dockerfile -t trend-signal .
  docker run -p 8001:8001 --env-file .env trend-signal
  ```

  Or with Compose (copy `deploy/.dockerignore` to the repo root first so Docker uses it):

  ```shell
  cp deploy/.dockerignore .
  docker compose -f deploy/docker-compose.yml up --build
  ```

- Open http://localhost:8001.
- Create the secret (do not commit the key):

  ```shell
  kubectl create secret generic trend-signal-secret \
    --from-literal=OPENAI_API_KEY=sk-your-key
  ```

- Push the image to your registry and set `image` in `deploy/k8s/deployment.yaml`.
- Deploy:

  ```shell
  kubectl apply -f deploy/k8s/deployment.yaml
  kubectl apply -f deploy/k8s/service.yaml
  ```

- Access: `kubectl port-forward svc/trend-signal 8001:80` or use a LoadBalancer/Ingress. See deploy/k8s/README.md for Ingress and registry details.
- Docker on VM: Copy the project to the VM, create `.env` (e.g. `cp app/.env.example .env`), then from the repo root:

  ```shell
  cp deploy/.dockerignore .
  docker build -f deploy/Dockerfile -t trend-signal .
  docker run -d -p 8001:8001 --env-file .env --restart unless-stopped trend-signal
  ```

- Python on VM: Install Python 3.11+, create a venv, `pip install -r app/requirements.txt`, create `.env` (e.g. from `app/.env.example`), then `python -m app.api`.
- Full steps (systemd, firewall): deploy/vm/README.md.
- Start the MCP server (from the repo root, venv activated):

  ```shell
  cd TrendSignal
  source .venv/bin/activate
  python -m app.server
  ```

- Add the server in Cursor: Settings → MCP → Streamable HTTP → `http://localhost:8000/mcp`.
- Use in chat: Paste/attach a YouTube homepage screenshot and ask for trend insight and hooks; optionally paste `app/SYSTEM_PROMPT.md` for the exact flow and response format.
| Tool | Purpose |
|---|---|
| `vision_extract_youtube_homepage` | Screenshot (base64/data URL) → video metadata list. |
| `trend_detect_topics` | Video list → dominant topics (`topic_name`, `video_count`). |
| `trend_estimate_strength` | Topic + videos → `EARLY` \| `HEATING_UP` \| `SATURATED`. |
| `creator_advice_generator` | Topic + strength → `why_trending`, `who_is_winning`, `posting_advice`, 5 hooks. |
Call order: vision_extract → trend_detect → trend_estimate (for top topic) → creator_advice.
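That call order can also be enforced client-side. A hypothetical orchestrator over a tool registry — the tool names match the table above, but the registry, stubs, and dominant-topic heuristic are illustrative, not the MCP client's actual API:

```python
def orchestrate(tools: dict, screenshot_b64: str) -> dict:
    """Call the four MCP tools in the documented order."""
    videos = tools["vision_extract_youtube_homepage"](screenshot_b64)
    topics = tools["trend_detect_topics"](videos)
    top = max(topics, key=lambda t: t["video_count"])   # dominant topic
    strength = tools["trend_estimate_strength"](top, videos)
    return tools["creator_advice_generator"](top, strength)

# Stub registry standing in for real MCP tool calls:
stub_tools = {
    "vision_extract_youtube_homepage": lambda img: [{"title": "t", "creator": "c"}],
    "trend_detect_topics": lambda vids: [{"topic_name": "AI", "video_count": 7},
                                         {"topic_name": "Fitness", "video_count": 2}],
    "trend_estimate_strength": lambda topic, vids: "EARLY",
    "creator_advice_generator": lambda topic, s: {
        "topic": topic["topic_name"], "trend_strength": s,
        "hooks": ["1", "2", "3", "4", "5"]},
}
result = orchestrate(stub_tools, "iVBOR...")
```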
- `GET /` — Upload UI (HTML).
- `POST /analyze` — Body: multipart form, `file` = image. Response: JSON with `topic`, `trend_strength`, `why_trending`, `who_is_winning`, `how_to_post`, `hooks` (5 strings).
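For scripting against the analyze endpoint without extra dependencies, the multipart body can be assembled with the standard library. A sketch — only the `file` field name and the `/analyze` path come from the API above; the helper and boundary handling are illustrative:

```python
import io
import json
import urllib.request
import uuid

def build_multipart(field: str, filename: str, data: bytes) -> tuple[bytes, str]:
    """Encode a single file as a multipart/form-data body."""
    boundary = uuid.uuid4().hex
    body = io.BytesIO()
    body.write(f"--{boundary}\r\n".encode())
    body.write(f'Content-Disposition: form-data; name="{field}"; '
               f'filename="{filename}"\r\n'.encode())
    body.write(b"Content-Type: image/png\r\n\r\n")
    body.write(data)
    body.write(f"\r\n--{boundary}--\r\n".encode())
    return body.getvalue(), f"multipart/form-data; boundary={boundary}"

body, content_type = build_multipart("file", "homepage.png", b"\x89PNG...")
req = urllib.request.Request("http://localhost:8001/analyze", data=body,
                             headers={"Content-Type": content_type})
# insight = json.loads(urllib.request.urlopen(req).read())  # needs the API running
```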
| Path | Role |
|---|---|
| app/ | Application code. |
| app/analysis.py | Core: helpers, vision extract, topic detection, strength estimate, creator advice, full pipeline. |
| app/api.py | FastAPI: /, /favicon.ico, POST /analyze; serves app/static/. |
| app/server.py | MCP server: 4 tools wrapping analysis; streamable HTTP on :8000. |
| app/static/index.html | Upload UI: drag-drop, Analyze, copy hooks. |
| app/SYSTEM_PROMPT.md | System prompt for LLM when using MCP tools. |
| deploy/Dockerfile | Image for the API (uvicorn on :8001). Build from repo root: docker build -f deploy/Dockerfile . |
| deploy/docker-compose.yml | Local run: build + run API with .env. From repo root: docker compose -f deploy/docker-compose.yml up. |
| deploy/k8s/ | Kubernetes: deployment, service, secret template, optional ingress. See deploy/k8s/README.md. |
| deploy/vm/ | VM run: Docker or Python, optional systemd. See deploy/vm/README.md. |
| readme-assets/ | Images for this README (e.g. input-screenshot.png, output-insight.png). |
| app/requirements.txt | Python dependencies. |
| app/pyproject.toml | Project config (uv/pip). |
| app/.env.example | Template for OPENAI_API_KEY. |
| deploy/.dockerignore | Copy to repo root as .dockerignore before Docker build (root copy is in .gitignore). |
| .gitignore | Repo root only; ignores .env, /.dockerignore, .venv/, etc. |
Possible extensions to make TrendSignal more data-driven and topic-specific:
- YouTube Data API — Use the YouTube Data API v3 to pull real search/trends data (e.g. `search.list`, `videos.list` by region/category). Replace or augment the vision-only heuristics with actual view counts, upload dates, and engagement so trend strength and “who’s winning” are based on real metrics.
- Trending topics without screenshot — Support a text-only mode: the user asks “what’s trending?” or “trending in tech” and the agent answers using the YouTube Data API (e.g. trending videos by region/category), so they get trending topics and suggestions without uploading a screenshot.
- Topic-specific suggestions — Let the user request a topic (e.g. “AI”, “fitness”, “cooking”). Use the API (or vision on a topic-filtered feed) to return suggestions specific to that topic: trending angles, top creators in that niche, and 5 hooks tailored to the requested topic instead of only what’s visible in a single screenshot.
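If the YouTube Data API extension is pursued, the trending feed is available from the real `videos.list` endpoint with `chart=mostPopular`. A sketch of building that request URL (the key placeholder is yours to supply; nothing is fetched here):

```python
from urllib.parse import urlencode

def trending_url(api_key: str, region: str = "US", category_id: str = "") -> str:
    """Build a YouTube Data API v3 videos.list URL for the trending chart."""
    params = {"part": "snippet,statistics", "chart": "mostPopular",
              "regionCode": region, "maxResults": 25, "key": api_key}
    if category_id:
        params["videoCategoryId"] = category_id  # e.g. "28" for Science & Tech
    return "https://www.googleapis.com/youtube/v3/videos?" + urlencode(params)

url = trending_url("YOUR_API_KEY", region="US")
# Fetch the URL with urllib/requests, then feed snippet.title and
# statistics.viewCount into topic detection instead of vision-only extraction.
```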
- `OPENAI_VISION_MODEL` — Vision model (default: `gpt-4o`).
- `OPENAI_CHAT_MODEL` — Chat model for trend/advice (default: `gpt-4o`).
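Both variables fall back to `gpt-4o` when unset. A minimal sketch of how such defaults are typically read at startup (variable names match the settings above; the snippet is illustrative, not the app’s actual code):

```python
import os

# Fall back to gpt-4o when the environment variables are not set.
VISION_MODEL = os.environ.get("OPENAI_VISION_MODEL", "gpt-4o")
CHAT_MODEL = os.environ.get("OPENAI_CHAT_MODEL", "gpt-4o")
```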

