Realtime voice assistant starter that pairs a Pipecat backend (Python) with a Next.js UI deployed on Vercel. Models are switchable per session (OpenAI, Gemini, Claude) and long‑term memory is handled by Mem0 so context survives across model hops.
## Web (Next.js)

- `cd web && npm install`
- Set `NEXT_PUBLIC_BOT_URL=http://localhost:7860` in `web/.env.local`
- `npm run dev` and open `http://localhost:3000`
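For local development, `web/.env.local` needs only the bot URL from the step above (any additional variables would be assumptions beyond what this guide requires):

```
# web/.env.local — points the UI at the locally running bot
NEXT_PUBLIC_BOT_URL=http://localhost:7860
```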
## Bot (Pipecat + Mem0)

- `python -m venv .venv && source .venv/bin/activate` (on Windows: `.venv\Scripts\activate`)
- `pip install -r bot/requirements.txt`
- Export the required API keys (see `docs/CONFIG.md`). If self-hosting Mem0, set `MEM0_BASE_URL=https://mem0.yourdomain.com`.
- Run the server: `python bot/main.py` (exposes WebRTC endpoints for voice and `/chat` for text on port `7860`)
Once both are up, connect from the UI, choose a provider/model, and start talking. Memory is keyed by your `userId` (stored locally in the browser), so you can swap models without losing context.
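The reason context survives model hops is that memory is addressed by user, not by model. A minimal in-memory sketch of that idea (a stand-in for Mem0, not the actual integration):

```python
# Toy illustration of "memory keyed by userId": the store is addressed
# by user, not by model, so swapping providers keeps the same context.
# The real backend delegates this to Mem0; this class is a stand-in.
from collections import defaultdict


class UserMemory:
    def __init__(self) -> None:
        self._store: dict[str, list[str]] = defaultdict(list)

    def add(self, user_id: str, fact: str) -> None:
        self._store[user_id].append(fact)

    def recall(self, user_id: str) -> list[str]:
        return list(self._store[user_id])


memory = UserMemory()
memory.add("user-123", "prefers metric units")  # saved during an OpenAI session
# ...the session switches provider to Claude...
context = memory.recall("user-123")             # same memories, new model
```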
- `/web` — Next.js App Router UI with Pipecat client, provider/model selector, transcript, and connection controls.
- `/bot` — Python backend with Pipecat pipeline, Mem0 integration, model router, and runner entry point.
- `/docs` — Architecture, config, memory, and deployment notes.
- `npm --prefix web run dev` — start the UI locally
- `npm --prefix web run lint` — lint the UI
- `python bot/main.py -t webrtc` — run the Pipecat runner using the Small WebRTC transport
- `pytest bot/tests` — quick Python tests for the Mem0 wrapper
More detail in `docs/DEPLOYMENT.md` and `docs/ARCHITECTURE.md`.