FusionAI combines Wikipedia, DuckDuckGo web search, and Anthropic Claude into a single research assistant that synthesizes multi-source answers in seconds. Ask anything — it finds, verifies, and explains it in plain language with cited sources.
- Conversational interface with full session history
- Smart intent detection — skips search for greetings, only fetches when needed
- Parallel source fetching (Wikipedia + DuckDuckGo simultaneously)
- Upload documents (PDF, Markdown, TXT) for document-backed Q&A
- Fully animated UI with Framer Motion
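The intent-detection and parallel-fetch behavior above can be sketched roughly as follows. This is an illustration only, not the project's actual code (which lives in `services/sources.py`); the greeting list and the two fetch functions are hypothetical stand-ins:

```python
import asyncio

# Hypothetical small-talk list; the real intent detector is more involved.
SMALL_TALK = {"hi", "hello", "hey", "thanks", "bye"}

def needs_search(query: str) -> bool:
    # Skip source lookup entirely for greetings and small talk.
    return query.strip().lower() not in SMALL_TALK

async def fetch_wikipedia(query: str) -> list[str]:
    # Stand-in for the real Wikipedia lookup.
    return [f"wikipedia:{query}"]

async def fetch_duckduckgo(query: str) -> list[str]:
    # Stand-in for the real DuckDuckGo search.
    return [f"duckduckgo:{query}"]

async def gather_sources(query: str) -> list[str]:
    if not needs_search(query):
        return []
    # Fetch both sources concurrently rather than one after the other.
    wiki, web = await asyncio.gather(fetch_wikipedia(query), fetch_duckduckgo(query))
    return wiki + web
```

A greeting such as `asyncio.run(gather_sources("hello"))` returns an empty list, while a real question fans out to both sources at once.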
| Layer | Technology |
|---|---|
| Frontend | React 18 · Vite 5 · Tailwind CSS v4 · Framer Motion |
| Backend | Python 3.12 · FastAPI · LangChain · Anthropic Claude |
| AI Model | claude-sonnet-4-6 via LangChain `prompt \| llm` chain |
| Database | PostgreSQL (Railway) · SQLite fallback for local dev |
| ORM | SQLAlchemy 2.x · Alembic migrations · psycopg2-binary |
| Cache | Redis (optional) · in-memory fallback |
| Hosting | Railway (backend) · Vercel (frontend) |
```bash
cd backend
python -m venv venv
venv\Scripts\activate          # Windows
# source venv/bin/activate     # Mac/Linux
pip install -r requirements.txt
copy .env.example .env         # fill in your values
python app.py
```

The backend runs on http://localhost:5001.

Verify your setup:

```bash
python scripts/doctor.py
```

For local dev, leave `DATABASE_URL` unset to use SQLite with `AUTO_CREATE_TABLES=true`.
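The SQLite fallback can be expressed as a one-line resolver. This is a minimal sketch, assuming an unset `DATABASE_URL` means local dev; the SQLite filename here is hypothetical, not the project's actual default:

```python
import os

def resolve_database_url() -> str:
    # Use Postgres when DATABASE_URL is set (production); otherwise fall
    # back to a local SQLite file for development.
    # "fusionai_dev.db" is an illustrative name, not the project's real one.
    return os.getenv("DATABASE_URL") or "sqlite:///./fusionai_dev.db"
```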
```bash
cd frontend
npm install
npm run dev
```

The frontend runs on http://localhost:3000.

The Vite dev server proxies `/api/*` to `localhost:5001` automatically, so no `VITE_API_URL` is needed locally.
Backend (`backend/.env`):

```
# Required
ANTHROPIC_API_KEY=your_key_here
DATABASE_URL=postgresql://user:password@host:5432/dbname
ENVIRONMENT=production
FRONTEND_ORIGINS=https://www.fusionai.studio,https://your-app.vercel.app

# AI
ANTHROPIC_MODEL=claude-sonnet-4-6
MAX_TOKENS=2000

# Database
AUTO_CREATE_TABLES=false
RUN_MIGRATIONS_ON_START=false   # migrations run via Procfile before uvicorn

# Sources
SOURCE_LOOKUP_ENABLED=true
WIKIPEDIA_RESULTS=1
WEB_SEARCH_RESULTS=3

# Cache
LOCAL_CACHE_ENABLED=true
CACHE_TTL_SECONDS=1800
```

Frontend (Vercel):

```
VITE_API_URL=https://your-railway-backend.up.railway.app
VITE_WORKSPACE_ID=web-client
```

- Create a Railway project → add a PostgreSQL service
- Add the backend as a new service → set Root Directory to `backend`
- Railway auto-detects Python via `runtime.txt` and uses the `Procfile`
- Set environment variables in the Variables tab
- Set `DATABASE_URL` to `${{Postgres.DATABASE_URL}}` (a Railway reference variable)
The Procfile runs Alembic migrations before starting the server:
```
web: alembic upgrade head && uvicorn app:app --host 0.0.0.0 --port $PORT
```
`/api/ready` is a readiness endpoint; it returns `503` if the database is down or the config is invalid.
- Import the repo in Vercel → set Root Directory to `frontend` and the framework to Vite
- Add env vars: `VITE_API_URL` and `VITE_WORKSPACE_ID`
- Deploy. `VITE_*` vars are baked into the bundle at build time, so redeploy after any change
Migrations live in `backend/migrations/` and are managed by Alembic.

```bash
# Apply all pending migrations
cd backend
alembic upgrade head

# Generate a new migration after changing models
alembic revision --autogenerate -m "describe your change"
```

In production, the Procfile runs `alembic upgrade head` automatically before the app starts.
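Migrations can also be triggered from Python, as the project's `operations.py` service does. Whether that service calls the Alembic API or the CLI is not shown here; one common approach is simply shelling out, sketched below with the command builder separated so it can be inspected:

```python
import subprocess

def alembic_upgrade_command(revision: str = "head") -> list[str]:
    # Builds the same command the Procfile runs.
    return ["alembic", "upgrade", revision]

def run_migrations(backend_dir: str = "backend") -> None:
    # Requires Alembic installed and an alembic.ini present in backend_dir.
    subprocess.run(alembic_upgrade_command(), check=True, cwd=backend_dir)
```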
| Method | Endpoint | Description |
|---|---|---|
| GET | `/api/health` | Health check + system status |
| GET | `/api/ready` | Readiness check (used by Railway) |
| POST | `/api/research` | Submit a research query |
| POST | `/api/chat` | Send a follow-up chat message |
| POST | `/api/sessions` | Create a new session |
| GET | `/api/sessions` | List all sessions |
| GET | `/api/sessions/{id}` | Get a session with its messages |
| DELETE | `/api/sessions/{id}` | Delete a session |
| GET | `/api/sessions/{id}/results` | Get research results for a session |
| POST | `/api/sessions/{id}/documents` | Add a document to a session |
| POST | `/api/sessions/{id}/documents/upload` | Upload a file (PDF/MD/TXT) |
| GET | `/api/documents/{id}` | Get a document |
| DELETE | `/api/documents/{id}` | Delete a document |
| GET | `/api/results/{id}` | Get a research result |
| GET | `/api/insights` | Usage analytics |
Research request body:

```json
{ "query": "what is quantum computing", "session_id": "optional-existing-id" }
```

All routes accept an optional workspace header for data isolation:

```
x-fusion-workspace-id: your-workspace-id
```
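Putting the pieces together, a research request carrying the workspace header can be built with the standard library alone. A sketch only; the base URL and workspace id are example values:

```python
import json
import urllib.request

API_BASE = "http://localhost:5001"  # swap for your deployed backend URL

payload = json.dumps({"query": "what is quantum computing"}).encode("utf-8")
request = urllib.request.Request(
    f"{API_BASE}/api/research",
    data=payload,
    headers={
        "Content-Type": "application/json",
        "x-fusion-workspace-id": "web-client",  # optional data-isolation header
    },
    method="POST",
)

# Uncomment once the backend is running:
# with urllib.request.urlopen(request) as response:
#     print(json.load(response))
```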
```
FusionAI/
├── backend/
│   ├── app.py            # FastAPI app + lifespan startup
│   ├── config.py         # Settings dataclass + diagnostics
│   ├── database.py       # SQLAlchemy engine + session factory
│   ├── models.py         # ORM models (Session, Message, Result, Source, Document)
│   ├── schemas.py        # Pydantic request/response schemas
│   ├── services/
│   │   ├── ai.py         # LangChain chain + Claude integration (cached)
│   │   ├── sources.py    # Wikipedia + DuckDuckGo + intent detection
│   │   ├── research.py   # Research orchestration + session handling
│   │   ├── sessions.py   # Session CRUD + message history
│   │   ├── cache.py      # Redis + in-memory cache layer
│   │   ├── documents.py  # Document storage + retrieval
│   │   ├── uploads.py    # PDF/Markdown/TXT parsing
│   │   ├── insights.py   # Usage analytics aggregation
│   │   └── operations.py # Alembic migration runner
│   ├── migrations/       # Alembic version files
│   ├── scripts/
│   │   └── doctor.py     # Config health checker CLI
│   ├── tests/            # pytest suite (SQLite, no API key needed)
│   ├── requirements.txt
│   ├── Procfile          # Railway: alembic upgrade head && uvicorn
│   ├── runtime.txt       # python-3.12.8
│   └── alembic.ini
└── frontend/
    ├── src/
    │   ├── App.jsx       # Entire app — landing page + research UI
    │   ├── App.css       # Global styles + Tailwind config
    │   └── main.jsx      # React entry point
    ├── package.json
    └── vite.config.js
```
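The cache layer (`services/cache.py`) pairs Redis with an in-memory fallback. The fallback side of that pattern can be sketched as a small TTL dictionary; an illustration only, with hypothetical names, not the project's real implementation:

```python
import time

class InMemoryCache:
    """Minimal TTL cache used when Redis is not configured (sketch)."""

    def __init__(self, ttl_seconds: int = 1800):
        self.ttl_seconds = ttl_seconds
        self._store: dict[str, tuple[object, float]] = {}

    def set(self, key: str, value: object) -> None:
        # Record the value alongside its expiry time.
        self._store[key] = (value, time.monotonic() + self.ttl_seconds)

    def get(self, key: str):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:
            # Entry expired: evict lazily on read.
            del self._store[key]
            return None
        return value
```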
```bash
cd backend
python -m pytest tests
```

Tests use SQLite and disable live source lookup, so no Anthropic key or network access is required.
Bao Tran — George Mason University