AI-powered Formula 1 telemetry analysis platform with multimodal interaction
Features • Architecture • Documentation • Getting Started • Roadmap
F1 Telemetry Manager analyzes Formula 1 telemetry data through an interface combining Streamlit for visualization and FastAPI for data processing. The platform provides real-time charts, AI-powered analysis via LM Studio, and data export capabilities for motorsport analysis.
- Telemetry Visualization: Speed, throttle, brake, RPM, gear, DRS, and G-force charts with lap-by-lap analysis
- AI Assistant: Chat interface powered by LM Studio for contextual F1 telemetry questions
- Intelligent Query Routing: Automatic classification of queries into 5 specialized handlers (basic, technical, comparison, report, download)
- Voice Interaction: Speech-to-text (Whisper) and text-to-speech (pyttsx3) for hands-free queries
- Performance Comparison: Side-by-side driver analysis with delta time calculations
- Circuit Analysis: Microsector-level performance visualization showing dominant driver per track segment
- Data Export: CSV/JSON download with filtering and preview
- Report Generation: Markdown format conversation summaries with context metadata
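The circuit analysis feature above picks a dominant driver per track microsector. A minimal sketch of that selection, assuming per-driver segment times are already available (the function name and data shape here are illustrative, not the app's actual API):

```python
# Hypothetical sketch: pick the dominant driver per microsector by
# comparing each driver's time through that segment (lower is faster).
def dominant_per_microsector(segment_times: dict[str, list[float]]) -> list[str]:
    """segment_times maps driver code -> per-microsector times in seconds.

    Returns the fastest driver's code for each microsector.
    """
    drivers = list(segment_times)
    n_segments = len(segment_times[drivers[0]])
    return [
        min(drivers, key=lambda d: segment_times[d][i])
        for i in range(n_segments)
    ]

times = {
    "HAM": [5.21, 7.80, 6.02],
    "VER": [5.18, 7.92, 5.99],
}
print(dominant_per_microsector(times))  # -> ['VER', 'HAM', 'VER']
```

The per-segment winners can then be mapped to colors to render the color-coded track segments described below.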
- 8 Visualization Types: Speed, lap times, throttle, brake pressure, RPM, gear shifts, DRS usage, and delta time
- DRS Visualization: Dedicated graphs showing DRS activation zones with speed overlay
- Circuit Domination: Color-coded track segments indicating which driver led each microsector
- Interactive Charts: Plotly-based visualizations with zoom, pan, and data point inspection
- Session Support: Practice (FP1/FP2/FP3), Qualifying, Sprint, and Race sessions from 2018-present
- Lap Selection Interface: Improved UI for selecting specific laps or fastest laps
- Tyre Compound Legends: Visual indicators showing tire types used in each stint
- Smart History Compression: Automatic conversation summarization after 5 interactions (LLM-powered)
- Multimodal Vision: Send telemetry charts directly to vision models (Qwen3-VL-4B-Instruct)
- Auto-send from Dashboard: Click the send-to-chat button on any chart to analyze it in chat automatically
- Infinite Timeout: Vision models process without time limits for complex image analysis
- Automatic Retry: Falls back to text-only if vision model fails
- Context Awareness: Automatically includes session metadata (year, GP, drivers) in prompts
- Query Routing: Specialized handlers for basic questions, technical analysis, comparisons, reports, and downloads
- Streaming Responses: Real-time response generation for better UX
- Chat Management: Multiple conversation threads with persistent storage
- Optimized Image Format: Charts converted to 768×480 JPEG at 85% quality for best performance
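The chart-to-JPEG conversion mentioned above could be done with Pillow along these lines. The 768×480 size and 85% quality match the README; the function name and the choice of Pillow are assumptions for illustration:

```python
# Sketch: downscale a rendered chart and re-encode it as JPEG before
# sending it to the vision model. Names here are illustrative.
import io
from PIL import Image

def chart_to_jpeg(png_bytes: bytes, size=(768, 480), quality=85) -> bytes:
    """Downscale a chart image and re-encode it as JPEG."""
    img = Image.open(io.BytesIO(png_bytes)).convert("RGB")  # JPEG has no alpha
    img = img.resize(size, Image.LANCZOS)
    out = io.BytesIO()
    img.save(out, format="JPEG", quality=quality)
    return out.getvalue()
```

Downscaling keeps the payload small, which matters when the image is base64-encoded into an OpenAI-compatible chat request.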
- Whisper Medium Model: Enhanced speech recognition (upgraded from the small model)
- Speech-to-Text: OpenAI Whisper for accurate audio transcription
- Text-to-Speech: pyttsx3 for offline audio synthesis
- Full Voice Flow: Single-endpoint STT → LLM → TTS pipeline
- Voice Orb Visualization: Audio-reactive orb with Iridescence shader for real-time feedback
- Voice Chat Reports: Export voice conversation transcripts with timestamps
- Voice Models: Configurable system voices (Windows SAPI, macOS NSSpeechSynthesizer, Linux eSpeak)
- Audio Formats: Supports WAV, MP3, WebM, OGG, M4A input
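The single-endpoint voice flow above chains three stages. A minimal sketch of that orchestration, with the stages passed in as callables so the pipeline stays framework-free (in the real app they would wrap Whisper, LM Studio, and pyttsx3; all names here are illustrative assumptions):

```python
# Sketch of the STT -> LLM -> TTS voice turn as a pure function.
from typing import Callable

def voice_chat(
    audio: bytes,
    stt: Callable[[bytes], str],
    llm: Callable[[str], str],
    tts: Callable[[str], bytes],
) -> tuple[str, str, bytes]:
    """Run one voice turn; returns (transcript, reply_text, reply_audio)."""
    transcript = stt(audio)
    reply = llm(transcript)
    return transcript, reply, tts(reply)

# Example with stand-in stages:
t, r, a = voice_chat(b"...", lambda b: "hi", str.upper, str.encode)
print(t, r, a)  # -> hi HI b'HI'
```

Keeping the stages injectable also makes the endpoint easy to test without loading Whisper or a TTS engine.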
- 2-Driver Analysis: Fastest lap comparison with synchronized telemetry data
- Delta Visualization: Time gap between drivers at each track point
- Microsector Analysis: Sector-by-sector performance breakdown
- Synchronized Data: Interpolated telemetry aligned to common distance points
- Time Format Improvements: Better readability for lap times and delta calculations
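The synchronization step above can be sketched with NumPy: each driver's cumulative lap time is interpolated onto a shared distance grid, and the delta is simply the difference. The array contents below are illustrative; the real app derives them from FastF1 telemetry:

```python
# Sketch: align two drivers' telemetry on a common distance axis and
# compute the time gap at each point. Names are illustrative.
import numpy as np

def delta_time(dist_a, time_a, dist_b, time_b, n_points=200):
    """Delta (driver A minus driver B, seconds) over a shared distance grid."""
    grid = np.linspace(max(dist_a[0], dist_b[0]),
                       min(dist_a[-1], dist_b[-1]), n_points)
    t_a = np.interp(grid, dist_a, time_a)  # cumulative time of A at each point
    t_b = np.interp(grid, dist_b, time_b)
    return grid, t_a - t_b
```

A positive delta means driver A is behind at that point on track; plotting it against the grid gives the delta visualization described above.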
- CSV Format: Raw telemetry data with column headers
- JSON Format: Structured data for API integration
- Report Storage: Session-based report management with timestamps
- Exported Reports Section: View and manage previously saved conversation reports
- Context Metadata: Exports include GP, year, session, and driver information
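A JSON export that bundles telemetry rows with the context metadata listed above (GP, year, session, drivers) could look like this; keys and structure are assumptions for illustration, not the app's actual schema:

```python
# Sketch of a JSON export wrapping telemetry rows with session context.
import json

def export_json(rows: list[dict], context: dict) -> str:
    """Serialize telemetry rows plus context metadata for download."""
    return json.dumps({"context": context, "telemetry": rows}, indent=2)

payload = export_json(
    [{"lap": 1, "speed_kmh": 312}],
    {"gp": "Monza", "year": 2024, "session": "Q", "drivers": ["LEC", "SAI"]},
)
```

Embedding the context in the file keeps an export self-describing, so a downloaded dataset can be traced back to its session later.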
The system uses a layered architecture with feature-based organization:
┌─────────────────────────────────────────┐
│              USER BROWSER               │
└───────────────┬─────────────────────────┘
                │
                ▼
┌─────────────────────────────────────────┐
│           STREAMLIT FRONTEND            │
│          (Presentation Layer)           │
└───────────────┬─────────────────────────┘
                │ HTTP Requests
                ▼
┌─────────────────────────────────────────┐
│             FASTAPI BACKEND             │
│   (API + Service + Repository Layers)   │
└────────┬──────────────────┬─────────────┘
         │                  │
         ▼                  ▼
  ┌──────────────┐   ┌──────────────────┐
  │   SUPABASE   │   │  EXTERNAL APIs   │
  │ (PostgreSQL) │   │  • FastF1        │
  │              │   │  • LM Studio     │
  └──────────────┘   └──────────────────┘
Frontend:
- Streamlit 1.31+ (UI framework)
- Plotly 5.18+ (interactive charts)
- Pandas, NumPy (data processing)
- httpx (HTTP client)
- audio-recorder-streamlit (voice input)
Backend:
- FastAPI 0.109+ (REST API)
- Pydantic 2.5+ (data validation)
- python-jose (JWT tokens)
- passlib + bcrypt (password hashing)
- FastF1 3.4.0 (F1 telemetry source)
- Supabase 2.10.0 (PostgreSQL database)
AI/ML:
- LM Studio (local LLM via OpenAI-compatible API)
- OpenAI Whisper 20231117 (speech-to-text, medium model)
- pyttsx3 (text-to-speech, offline)
| Document | Description | Link |
|---|---|---|
| Architecture | System design, patterns, and technical decisions | ARCHITECTURE.md |
| Roadmap | Product roadmap, timeline, and feature plan | ROADMAP.md |
| Changelog | Version history and notable changes | CHANGELOG.md |
| Issue Templates | Bug reports, feature requests, and task templates | ISSUE_TEMPLATES.md |
| Query Router | Intelligent query routing system guide | QUERY_ROUTER_GUIDE.md |
| Voice Chat | Voice interaction implementation details | VOICE_CHAT_IMPLEMENTATION_PLAN.md |
- Circuit Analysis: CIRCUIT_ANALYSIS_IMPLEMENTATION_PLAN.md
- Circuit Comparison: CIRCUIT_COMPARISON_IMPLEMENTATION_PLAN.md
- Chat System: CHAT_IMPLEMENTATION_PLAN.md
- Multimodal Support: MULTIMODAL_IMPLEMENTATION.md
- Query Routing: QUERY_ROUTING_IMPLEMENTATION.md
System Flow Diagram
Complete user flow showing authentication, dashboard navigation, telemetry analysis, AI interaction, and admin capabilities
- Docker & Docker Compose
- LM Studio (running on http://localhost:1234 with a loaded model)
- Supabase account
- Python 3.10+ (for manual installation)
# Clone the repository
git clone https://github.com/VforVitorio/F1_Telemetry_Manager.git
cd F1_Telemetry_Manager
# Set up environment variables
cp .env.example .env
# Edit .env with your Supabase credentials:
# SUPABASE_URL, SUPABASE_KEY, SECRET_KEY, BACKEND_URL
# Start all services
docker-compose up -d
# View logs
docker-compose logs -f
# Stop services
docker-compose down

Access points:
- Frontend: http://localhost:8501
- Backend API: http://localhost:8000
- API Documentation: http://localhost:8000/docs
# Install frontend dependencies
cd frontend
pip install -r requirements.txt
# Install backend dependencies
cd ../backend
pip install -r requirements.txt
# Set up environment variables
cp .env.example .env
# Edit .env with your credentials

Running manually:
Terminal 1 - Backend:
uvicorn backend.main:app --reload --port 8000

Terminal 2 - Frontend:
streamlit run frontend/app/main.py

Terminal 3 - LM Studio:
# Start LM Studio on http://localhost:1234
# Load a model (e.g., llama3.2-vision or qwen2-vl)
# Enable local server in LM Studio settings

Generate a JWT secret key:

python backend/utils/generate_secret.py

Copy the output to your .env file under SECRET_KEY.
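If the helper script is unavailable, an equivalent random key can be produced with the standard library (the script's exact output format is an assumption; any sufficiently long random string works as an HS256 signing key):

```python
# Generate a 256-bit random key suitable for SECRET_KEY.
import secrets

print(secrets.token_hex(32))  # 64 hex characters
```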
Required variables in .env:
# Backend API
BACKEND_URL=http://localhost:8000
# Supabase
SUPABASE_URL=<your-supabase-project-url>
SUPABASE_KEY=<your-supabase-anon-key>
# JWT Security
SECRET_KEY=<generated-secret-key>
ALGORITHM=HS256
ACCESS_TOKEN_EXPIRE_MINUTES=30

Edit backend/core/voice_config.py to configure voice services:
WHISPER_MODEL = "medium" # Options: tiny, base, small, medium, large
WHISPER_LANGUAGE = "en" # or None for auto-detect
TTS_RATE = 175 # Speech rate (words per minute)
TTS_VOLUME = 0.9       # Volume (0.0 to 1.0)

The system automatically classifies user queries and routes them to specialized handlers:
- BASIC_QUERY: Simple F1 concepts (e.g., "What is DRS?")
- TECHNICAL_QUERY: Advanced telemetry analysis (e.g., "Show throttle data for lap 15")
- COMPARISON_QUERY: Multi-driver comparisons (e.g., "Compare Hamilton vs Verstappen")
- REPORT_REQUEST: Conversation summarization (e.g., "Generate a report")
- DOWNLOAD_REQUEST: Data export (e.g., "Download as CSV")
- LLM-Based Classification: Uses LM Studio with low temperature (0.1) for consistent routing
- Rule-Based Fallback: Keyword matching when LM Studio is unavailable
- Context Injection: Automatically includes session metadata in technical/comparison queries
- Handler Specialization: Each handler has a tailored system prompt for optimal responses
See QUERY_ROUTER_GUIDE.md for detailed examples.
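The rule-based fallback described above can be sketched as simple keyword matching. The category names follow the list above; the actual keywords and precedence in the app may differ:

```python
# Sketch of the keyword fallback router used when the LLM classifier
# is unreachable. Rules are checked in order; first match wins.
ROUTES = {
    "DOWNLOAD_REQUEST": ("download", "csv", "json", "export"),
    "REPORT_REQUEST": ("report", "summary", "summarize"),
    "COMPARISON_QUERY": ("compare", " vs ", "versus"),
    "TECHNICAL_QUERY": ("throttle", "brake", "telemetry", "lap", "rpm"),
}

def classify(query: str) -> str:
    q = query.lower()
    for category, keywords in ROUTES.items():
        if any(k in q for k in keywords):
            return category
    return "BASIC_QUERY"  # default handler for simple F1 questions

print(classify("Compare Hamilton vs Verstappen"))  # -> COMPARISON_QUERY
print(classify("What is DRS?"))                    # -> BASIC_QUERY
```

Checking download/report intents before technical keywords avoids misrouting queries like "download the throttle data as CSV" to the technical handler.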
- POST /signup - Register a new user
- POST /signin - Log in a user
- GET /me - Get the current user
- POST /signout - Log out
- GET /gps - Available GPs for a year
- GET /sessions - Sessions for a GP
- GET /drivers - Drivers in a session
- GET /lap-times - Lap times for drivers
- GET /lap-telemetry - Telemetry for a lap
- GET /data - Aggregated telemetry
- GET / - Microsector performance data
- GET /compare - Compare two drivers' fastest laps
- POST /message - Send a message (non-streaming)
- POST /stream - Stream a message response
- POST /query - Process a query with intelligent routing
- GET /health - Check LM Studio health
- GET /models - Get available models
- POST /transcribe - Speech-to-text
- POST /synthesize - Text-to-speech
- POST /voice-chat - Full voice interaction (STT → LLM → TTS)
- GET /health - Voice services health
- GET /voices - Available TTS voices
Contributions are welcome. Please read our contribution guidelines and submit pull requests for improvements.
Use our Issue Templates for:
- Bug reports
- Feature requests
- Data issues
- Tasks/TODOs
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Copyright 2025 F1 Telemetry Manager Contributors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
- FastF1 for F1 telemetry data access
- Streamlit for the frontend framework
- FastAPI for the backend API framework
- Supabase for database infrastructure
- LM Studio for local LLM inference
Report Bug • Request Feature • Documentation