DevLok is an AI-powered developer tool designed to help you deeply understand your codebase. It combines a modern interface with a powerful local Python backend capable of leveraging Large Language Models (LLMs) to analyze, explain, and visualize your project's architecture and logic.
- Deep Codebase Understanding: Uses Tree-sitter and LangChain to parse and semantically index your entire project.
- AI-Powered Explanations: Ask questions about your code, data flow, or architecture and get context-aware answers from local LLMs (via Ollama).
- Interactive Visualizations: Visualize dependencies, call graphs, and module structures to grasp complex systems quickly.
- Local-First Privacy: All analysis and AI processing happens locally on your machine, ensuring your code never leaves your environment.
**Frontend**

- Framework: Electron with React & TypeScript
- Visualization: React Flow / D3.js (Planned)
- Styling: Tailwind CSS
- State Management: Zustand
- Code Viewer: Monaco Editor (Read-only mode)
**Backend**

- Framework: FastAPI
- Language: Python 3.12+
- AI/ML: LangChain, Ollama, ChromaDB
- Analysis: Tree-sitter for parsing
Before getting started, ensure you have the following installed:
- Node.js: v18 or later (Download)
- pnpm: Package manager for Node.js (`npm install -g pnpm`)
- Python: 3.12 (not 3.13+) (Download)
- Poetry: Python dependency manager (`pip install poetry` or see docs)
- Ollama: For running local LLMs (Download)
- Make: Build automation tool (usually pre-installed on macOS/Linux)
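The Python pin is the easiest prerequisite to get wrong. A small shell check like this (a hypothetical helper, not part of DevLok) can gate the version before you install:

```shell
# Accept only Python 3.12.x, per the prerequisite above (hypothetical helper)
check_python_version() {
  case "$1" in
    3.12|3.12.*) echo "ok" ;;
    *)           echo "unsupported: need 3.12.x, got $1" ;;
  esac
}

# Feed it the interpreter's reported version, e.g.:
# check_python_version "$(python3 -c 'import platform; print(platform.python_version())')"
check_python_version "3.12.4"   # -> ok
check_python_version "3.13.0"   # -> unsupported: need 3.12.x, got 3.13.0
```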
The easiest way to get started is using the provided Makefile:
```bash
# Clone the repository
git clone https://github.com/yourusername/devlok.git
cd devlok

# Install all dependencies (backend + frontend)
make install

# Start both backend and frontend in development mode
make dev
```

That's it! The application should now be running.
If you prefer to set up manually or want more control:
```bash
git clone https://github.com/yourusername/devlok.git
cd devlok
```

Navigate to the backend directory and install dependencies:

```bash
cd apps/backend
poetry install
```

Start the backend server:

```bash
poetry run dev
```

The backend API will be available at http://localhost:4000
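To confirm the backend is actually up, you can probe it from another terminal. This sketch assumes FastAPI's default interactive docs route (`/docs`) is enabled, which may differ in DevLok:

```shell
# Probe the DevLok backend; the /docs route is an assumption (FastAPI's default)
check_backend() {
  url="${1:-http://localhost:4000}"
  if curl -fsS "$url/docs" >/dev/null 2>&1; then
    echo "backend up at $url"
  else
    echo "backend not reachable at $url"
  fi
}

check_backend
```

If the server is running but this reports "not reachable", check whether the docs route has been disabled or the port was changed.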
In a new terminal, navigate to the client directory and install dependencies:
```bash
cd apps/client
pnpm install
```

Start the Electron application:

```bash
pnpm dev
```

The Electron app will launch automatically.
Make sure Ollama is running with a model installed:
```bash
# Pull a recommended model (e.g., llama2)
ollama pull llama2

# Or use a code-focused model for better results on source code
ollama pull codellama
```

Here are all available commands in the Makefile:
- `make install` - Install all dependencies (backend + frontend)
- `make install-backend` - Install only backend dependencies
- `make install-frontend` - Install only frontend dependencies
- `make setup-ollama` - Pull the recommended Ollama model (llama2)
- `make dev` - Start both backend and frontend in development mode
- `make dev-backend` - Start only the backend server
- `make dev-frontend` - Start only the Electron app
- `make dev-parallel` - Start both services in parallel (requires `parallel` or `concurrently`)
- `make build` - Build the Electron application
- `make build-mac` - Build for macOS
- `make build-win` - Build for Windows
- `make build-linux` - Build for Linux
- `make lint` - Run the linter on frontend code
- `make format` - Format frontend code with Prettier
- `make typecheck` - Run TypeScript type checking
- `make test-backend` - Run backend tests
- `make clean` - Remove all dependencies and build artifacts
- `make clean-backend` - Clean only backend dependencies
- `make clean-frontend` - Clean only frontend dependencies
- `make help` - Display all available commands
```bash
# Terminal 1: Start backend
make dev-backend

# Terminal 2: Start frontend
make dev-frontend
```

Or use a single command:

```bash
make dev
```

- Frontend changes: Hot reload is enabled; changes are reflected automatically
- Backend changes: The server auto-reloads on file changes
```bash
# Build for your current platform
make build

# Or build for a specific platform
make build-mac    # macOS
make build-win    # Windows
make build-linux  # Linux
```

If you see an error like "Electron failed to install correctly", run:

```bash
make fix-electron
```

This happens because of pnpm's build-script restrictions. The `fix-electron` command manually runs the installation scripts that were blocked.
Make sure you've run:
```bash
make install-backend
```

And verify Ollama is running:

```bash
ollama list
```

If port 4000 or 5173 is already in use, stop the conflicting service or modify the port in the respective configuration files:

- Backend: `apps/backend/main.py`
- Frontend: `apps/client/electron.vite.config.ts`
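If you'd rather find a free port than stop the conflicting service, a helper along these lines (hypothetical, not part of DevLok; relies on bash's `/dev/tcp` special files) can scan upward from the default:

```shell
# Print the first TCP port at or above $1 with no local listener (hypothetical helper)
find_free_port() {
  port="$1"
  # bash's /dev/tcp open succeeds only if something is listening on the port
  while (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; do
    port=$((port + 1))
  done
  echo "$port"
}

find_free_port 4000
```

Whatever port this prints still has to be written into the configuration files above by hand.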
```
devlok/
├── apps/
│   ├── client/    # UI for visualization and chat
│   └── backend/   # Analysis engine and AI server
├── docs/          # Documentation
└── ...
```
See CONTRIBUTING.md for guidelines.