# Text-to-SQL Analytics Dashboard

An AI-powered internal analytics dashboard where non-technical users can type natural language questions and receive SQL-generated data visualizations in response.

Next.js · FastAPI · LangChain · Ollama · SQLite · Tailwind CSS · Recharts · React · TypeScript


## Features

- **Natural Language → SQL** — Ask questions in plain English; the LLM handles the SQL.
- **Auto Chart Detection** — Responses automatically suggest the best visualization (bar, line, or table).
- **Dynamic Visualizations** — Bar charts, line charts, and data tables rendered with Recharts.
- **Structured Responses** — Every answer includes the generated SQL, a human-readable explanation, and raw data.
- **Local & Private** — Runs entirely on your machine via Ollama; no data leaves your network.
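One way to picture the auto chart detection: inspect the shape of the result rows and pick a chart accordingly. The repository's actual logic is not documented here, so the function below is an illustrative sketch; `suggest_chart_type` and the column-name keywords are assumptions.

```python
# Illustrative heuristic for auto chart detection; the backend's real
# logic may differ. All names here are assumptions for the sketch.

def suggest_chart_type(rows: list[dict]) -> str:
    """Suggest "bar", "line", or "table" for a list of result rows."""
    if not rows:
        return "table"
    columns = list(rows[0].keys())
    if len(columns) == 2:
        label, value = columns
        # A date/month-like label column suggests a time series.
        if any(k in label.lower() for k in ("date", "month", "year", "day")):
            return "line"
        # A categorical label plus a numeric value suggests a bar chart.
        if all(isinstance(r[value], (int, float)) for r in rows):
            return "bar"
    # Anything wider or non-numeric falls back to a plain table.
    return "table"
```

For example, `suggest_chart_type([{"region": "West", "total": 10}])` yields `"bar"`, while a `month`/`sales` pair yields `"line"`.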

## Architecture

```text
User Question
     ↓
Next.js Frontend (React 19 + Tailwind v4)
     ↓  POST /api/ask
FastAPI Backend
     ↓
LangChain SQL Agent → Ollama (Llama 3)
     ↓
SQLite (sales_data.db)
     ↓
Structured JSON Response (SQL + Data + Explanation + Chart Type)
     ↓
Dynamic Chart (Bar / Line / Table) via Recharts
```

## Prerequisites

| Requirement | Version | Link |
|---|---|---|
| Node.js | 18+ | https://nodejs.org/ |
| Python | 3.10+ | https://python.org/ |
| Ollama | Latest | https://ollama.ai/ |
| Llama 3 | | `ollama pull llama3` |

## Quick Start

### 1. Start Ollama

```bash
ollama serve
# In another terminal:
ollama pull llama3
```

### 2. Backend Setup

```bash
cd backend
pip install -r requirements.txt
python seed_db.py                          # Seed SQLite with sample sales data
uvicorn app.main:app --reload --port 8000  # Start API server → http://localhost:8000
```
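To give a sense of what the seeding step does, here is a minimal sketch of a seed script. The actual schema in `seed_db.py` is not documented in this README, so the `sales` table and its columns below are assumptions for illustration only.

```python
# Illustrative sketch of a seed script like seed_db.py; the repository's
# real schema may differ. Table and column names are assumptions.
import sqlite3

def seed(path: str = "sales_data.db") -> int:
    """Create a sales table, insert sample rows, and return the row count."""
    conn = sqlite3.connect(path)
    conn.execute(
        """CREATE TABLE IF NOT EXISTS sales (
               id INTEGER PRIMARY KEY,
               product_name TEXT,
               region TEXT,
               sale_date TEXT,
               quantity INTEGER,
               revenue REAL
           )"""
    )
    rows = [
        ("Widget A", "West", "2024-01-15", 120, 2400.0),
        ("Widget B", "East", "2024-01-20", 80, 2000.0),
    ]
    conn.executemany(
        "INSERT INTO sales (product_name, region, sale_date, quantity, revenue) "
        "VALUES (?, ?, ?, ?, ?)",
        rows,
    )
    conn.commit()
    count = conn.execute("SELECT COUNT(*) FROM sales").fetchone()[0]
    conn.close()
    return count
```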

### 3. Frontend Setup

```bash
cd frontend
npm install
npm run dev    # Start dev server → http://localhost:3000
```

### 4. Use It

Open http://localhost:3000 and try questions like:

- "What were the top 5 selling products?"
- "Show total sales by region"
- "What is the monthly sales trend?"

## ⚙️ Environment Variables

All settings live in backend/app/config.py and can be overridden via environment variables:

| Variable | Default | Description |
|---|---|---|
| `OLLAMA_BASE_URL` | `http://localhost:11434` | Base URL of the Ollama server |
| `OLLAMA_MODEL` | `llama3` | Ollama model to use for SQL generation |
| `LLM_REQUEST_TIMEOUT` | `300` | Max seconds to wait for an LLM response |
| `AGENT_MAX_ITERATIONS` | `8` | Max LLM call iterations to prevent infinite loops |

Example:

```bash
OLLAMA_MODEL=llama3.1 LLM_REQUEST_TIMEOUT=600 uvicorn app.main:app --reload --port 8000
```
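A minimal sketch of how `backend/app/config.py` might read these overrides. The variable names and defaults come from the table above; the plain-`os.getenv` structure is an assumption — the real file may use a different mechanism (e.g. pydantic-settings).

```python
# Sketch of env-var-driven settings; the real config.py may differ.
import os

OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
OLLAMA_MODEL = os.getenv("OLLAMA_MODEL", "llama3")
LLM_REQUEST_TIMEOUT = int(os.getenv("LLM_REQUEST_TIMEOUT", "300"))
AGENT_MAX_ITERATIONS = int(os.getenv("AGENT_MAX_ITERATIONS", "8"))
```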

## 📡 API Reference

### GET /api/health

Simple liveness probe.

Response:

```json
{ "status": "ok" }
```

### POST /api/ask

Convert a natural language question to SQL, execute it, and return structured results.

Request Body:

```json
{
  "question": "What were the top selling products last month?"
}
```

| Field | Type | Required | Constraints |
|---|---|---|---|
| `question` | string | Yes | 1–1000 characters |

Success Response 200:

```json
{
  "sql_query": "SELECT product_name, SUM(quantity) ...",
  "explanation": "This query returns the top selling products by total quantity sold.",
  "data": [
    { "product_name": "Widget A", "total_quantity": 1500 }
  ],
  "chart_type": "bar"
}
```

| Field | Type | Description |
|---|---|---|
| `sql_query` | string | The generated and executed SQL query |
| `explanation` | string | Human-readable explanation of the results |
| `data` | array | Query result rows as a list of objects |
| `chart_type` | string | Suggested chart: `"bar"`, `"line"`, or `"table"` |

Error Response 400 / 500 / 504:

```json
{
  "error": "A user-friendly error message.",
  "detail": "Optional technical detail for debugging."
}
```
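The endpoint above can be called with nothing but the standard library. The sketch below uses the URL from Quick Start; `ask` only works with the backend running, while `parse_answer` just validates the documented response shape and works offline.

```python
# Minimal client for POST /api/ask. URL and field names come from this
# README; the error handling is a sketch, not the project's own client.
import json
import urllib.request

API_URL = "http://localhost:8000/api/ask"  # backend from Quick Start

def ask(question: str, timeout: float = 300.0) -> dict:
    """Send a natural-language question to the backend and return its JSON."""
    body = json.dumps({"question": question}).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return json.load(resp)

def parse_answer(payload: dict) -> tuple[str, list[dict], str]:
    """Pull out the fields documented above; raise if the backend errored."""
    if "error" in payload:
        raise RuntimeError(payload["error"])
    return payload["sql_query"], payload["data"], payload["chart_type"]
```

Usage with the server up: `sql, rows, chart = parse_answer(ask("Show total sales by region"))`.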

## 📁 Project Structure

```text
TypeQL/
├── backend/
│   ├── app/
│   │   ├── __init__.py    # Package init
│   │   ├── main.py        # FastAPI app, CORS, /api/ask + /api/health endpoints
│   │   ├── agent.py       # LangChain SQL Agent (ChatOllama + ReAct)
│   │   ├── database.py    # SQLAlchemy engine setup
│   │   ├── config.py      # Central settings (env-var overrides)
│   │   └── models.py      # Pydantic request/response schemas
│   ├── seed_db.py         # Seeds SQLite with sample sales data
│   ├── sales_data.db      # SQLite database (generated by seed_db.py)
│   └── requirements.txt   # Python dependencies
│
├── frontend/
│   ├── src/
│   │   ├── app/           # Next.js App Router pages
│   │   ├── components/    # React UI components
│   │   ├── lib/           # API client + chart utilities
│   │   └── types/         # TypeScript interfaces
│   ├── package.json       # Node.js dependencies
│   └── tsconfig.json      # TypeScript configuration
│
└── README.md
```

## 🔧 Troubleshooting

| Problem | Solution |
|---|---|
| Connection refused on Ollama | Make sure `ollama serve` is running before starting the backend. |
| LLM timeout / slow responses | Increase `LLM_REQUEST_TIMEOUT` (default 300 s); local LLMs are slow, and each question makes 3–5 LLM calls. |
| `model not found` error | Run `ollama pull llama3` to download the model first. |
| CORS errors in browser console | The frontend must be running on `localhost:3000`; check `CORS_ORIGINS` in `config.py`. |
| `sales_data.db` missing / empty results | Run `python seed_db.py` from the `backend/` directory to regenerate the database. |
| Agent stuck in a loop | Lower `AGENT_MAX_ITERATIONS` (default 8) to force earlier termination. |

## 🛠️ Tech Stack

| Component | Technology |
|---|---|
| Frontend | Next.js 16, React 19, TypeScript |
| Styling | Tailwind CSS v4 |
| Charts | Recharts 3 |
| Backend | FastAPI, Python 3.10+ |
| AI/LLM | LangChain + Ollama (Llama 3) |
| Database | SQLite + SQLAlchemy 2 |
| Validation | Pydantic v2 |
