# Chatto

*Breaking language barriers, one chat at a time.*
Chatto is a modern, AI-powered communication platform that enables seamless, multilingual conversations between hosts and guests — powered by Cerebras, Redis, and FastAPI.
It also remembers personal context through the Memories feature and offers an AI-driven Slate workspace for real-time translation and interaction.
- Real-time host ↔ guest conversations via WebSocket.
- Instant message translation between languages.
- Lightweight and mobile-first UI built with Tailwind.
- Save short facts or “memories” about yourself.
- The AI uses these to personalize translations and responses.
- Full CRUD support with smooth animations and Sonner toasts.
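The memories themselves are just short keyed facts with create/read/update/delete operations. A minimal sketch of that data shape, using an in-memory class as an illustrative stand-in for Chatto's actual Redis/database layer (all names here are hypothetical):

```python
import uuid


class MemoryStore:
    """Illustrative in-memory stand-in for Chatto's Redis/DB-backed memories."""

    def __init__(self) -> None:
        self._memories: dict[str, str] = {}  # memory id -> short fact about the user

    def create(self, fact: str) -> str:
        """Store a new memory and return its generated id."""
        memory_id = uuid.uuid4().hex
        self._memories[memory_id] = fact
        return memory_id

    def read_all(self) -> list[str]:
        """Return every stored fact, e.g. to build the AI's context."""
        return list(self._memories.values())

    def update(self, memory_id: str, fact: str) -> None:
        """Replace an existing memory's text."""
        if memory_id not in self._memories:
            raise KeyError(memory_id)
        self._memories[memory_id] = fact

    def delete(self, memory_id: str) -> None:
        """Remove a memory; deleting an unknown id is a no-op."""
        self._memories.pop(memory_id, None)


store = MemoryStore()
memory_id = store.create("Prefers formal Japanese honorifics")
store.update(memory_id, "Prefers casual Japanese")
print(store.read_all())  # -> ['Prefers casual Japanese']
```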
- Express your thoughts in your desired language, powered by Cerebras' Llama model and Chatto's memories.
- Supports custom translation and semantic assistance.
- Guests can join via QR code or link (`/s/[sessionId]`) with no login required.
- Secure session management with Redis.
- Smooth conversation flow with real-time WebSocket messaging.
## Frontend

- Framework: Next.js 14 (App Router)
- Language: TypeScript
- Styling: Tailwind CSS + ShadCN UI
- Auth: Clerk
- Deployment: Vercel
## Backend

- Framework: FastAPI (Python 3.11)
- Cache / Queue: Redis
- LLM Provider: Cerebras API
- WebSocket: native `fastapi.websockets`
- Deployment: Render (Dockerized)
## How It Works

Chatto uses Retrieval-Augmented Generation (RAG) to make conversations more personalized and contextual:
1. **Memory Storage:** User memories are stored in Redis and the database for fast retrieval.
2. **Vectorization:** Each memory is embedded into a vector space using a semantic embedding model.
3. **Retrieval:** When a chat prompt arrives, relevant memories are retrieved by cosine similarity to enrich the AI's context.
4. **Generation:** The final response is generated through Cerebras' LLM API, blending the live input and retrieved memories for context-aware translation.
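The retrieval step boils down to ranking stored memory vectors by cosine similarity against the embedded prompt. A minimal sketch, using hand-made toy vectors in place of the real embedding model's output:

```python
import math


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors; 0.0 if either is zero."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0


def retrieve(prompt_vec: list[float], memories: list[dict], top_k: int = 2) -> list[str]:
    """Return the top_k memory texts most similar to the prompt embedding."""
    ranked = sorted(
        memories,
        key=lambda m: cosine_similarity(prompt_vec, m["vec"]),
        reverse=True,
    )
    return [m["text"] for m in ranked[:top_k]]


# Toy 3-d embeddings; in Chatto these come from the semantic embedding model.
memories = [
    {"text": "Speaks Spanish at home", "vec": [0.9, 0.1, 0.0]},
    {"text": "Allergic to peanuts", "vec": [0.0, 0.2, 0.9]},
    {"text": "Prefers formal tone", "vec": [0.7, 0.6, 0.1]},
]
print(retrieve([1.0, 0.2, 0.0], memories))
# -> ['Speaks Spanish at home', 'Prefers formal tone']
```

The retrieved texts are then prepended to the LLM prompt so the generation step can blend them with the live message.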
**Backend (Docker):**

```bash
cd docker
docker-compose up --build
```

**Frontend:**

```bash
cd chattoz-ui
npm install
npm run build
npm start
```