Eulix is a local-first code intelligence system designed to provide deep reasoning over massive repositories without compromising privacy or speed.
It orchestrates a high-performance pipeline of Go and Rust binaries to bridge the gap between raw static analysis and LLM-powered insight. By combining a multi-layer retrieval strategy with rigorous anti-hallucination prompting, Eulix ensures that answers about your codebase are grounded in your actual source code.
Eulix operates as three focused binaries that work in concert:
| Component | Language | Role |
|---|---|---|
| `eulix` | Go | Orchestrator — manages the CLI, config, and the RAG pipeline |
| `eulix_parser` | Rust | Static Analyzer — extracts symbols, call graphs, and complexity |
| `eulix_embed` | Rust | Embedder — runs transformer models via ONNX with GPU acceleration (supports both ROCm and CUDA) |
Unlike simple RAG tools, Eulix uses a tiered retrieval pipeline to find the most relevant context before hitting an LLM:
- Symbol Lookup: Precision matching for functions, classes, and variables.
- Keyword Search: Traditional lexical matching for specific terms.
- Semantic Vector Search: Deep contextual matching using local embeddings.
- Call-Graph Expansion: Traverses relationships to pull in relevant upstream/downstream logic.
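To illustrate how the tiers could compose, here is a minimal Go sketch. All type and function names (`Chunk`, `retrieve`, `cosine`) are hypothetical, not Eulix's actual API: each tier runs only when the previous one found nothing, and the call-graph tier (omitted here) would then expand the winners.

```go
package main

import (
	"fmt"
	"math"
	"strings"
)

// Chunk is a hypothetical indexed unit of code.
type Chunk struct {
	Symbol string
	Text   string
	Vec    []float64
}

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float64) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += a[i] * b[i]
		na += a[i] * a[i]
		nb += b[i] * b[i]
	}
	if na == 0 || nb == 0 {
		return 0
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

// retrieve falls through the tiers: exact symbol match first, then a
// keyword (lexical) match, then semantic vector search.
func retrieve(query string, qVec []float64, index []Chunk) []Chunk {
	// Tier 1: symbol lookup — precision match on identifiers.
	for _, c := range index {
		if c.Symbol == query {
			return []Chunk{c}
		}
	}
	// Tier 2: keyword search — a naive substring scan stands in for a
	// real lexical index.
	var hits []Chunk
	for _, c := range index {
		if strings.Contains(strings.ToLower(c.Text), strings.ToLower(query)) {
			hits = append(hits, c)
		}
	}
	if len(hits) > 0 {
		return hits
	}
	// Tier 3: semantic search — rank by cosine similarity to the
	// query embedding and keep the best match.
	best, bestScore := -1, -1.0
	for i, c := range index {
		if s := cosine(qVec, c.Vec); s > bestScore {
			best, bestScore = i, s
		}
	}
	if best >= 0 {
		return []Chunk{index[best]}
	}
	return nil
}

func main() {
	index := []Chunk{
		{Symbol: "ParseFile", Text: "parses a source file", Vec: []float64{1, 0}},
		{Symbol: "EmbedKB", Text: "generates embeddings", Vec: []float64{0, 1}},
	}
	// Exact symbol hit resolves at tier 1.
	fmt.Println(retrieve("ParseFile", nil, index)[0].Symbol)
	// No symbol or keyword hit: falls through to vector search.
	fmt.Println(retrieve("no-such-term", []float64{0.1, 0.9}, index)[0].Symbol)
}
```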
Eulix is built with an anti-hallucination discipline. Our prompts are architected to force the model to cite its sources and strictly adhere to the provided context, minimizing "invented" logic or APIs.
- Symbol Indexing — Comprehensive mapping of functions, classes, and source locations.
- Advanced Call Graphs — Maps incoming and outgoing relationships across the entire project.
- Knowledge Base — Captures control-flow structures, error handling blocks, and cyclomatic complexity.
- Local-First — All parsing and embedding happens on your machine. No code leaves your infrastructure.
- High Performance — Rust-powered backend capable of parsing millions of lines in seconds.
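The call-graph features above boil down to a bounded graph traversal. Here is a minimal Go sketch of depth-limited downstream expansion; the graph shape and function names are hypothetical and do not reflect eulix_parser's real output format:

```go
package main

import "fmt"

// expand walks the call graph breadth-first from a seed symbol,
// pulling in callees (downstream) up to a depth limit. Upstream
// expansion would walk a caller map the same way.
func expand(graph map[string][]string, seed string, depth int) []string {
	seen := map[string]bool{seed: true}
	frontier := []string{seed}
	order := []string{seed}
	for d := 0; d < depth; d++ {
		var next []string
		for _, fn := range frontier {
			for _, callee := range graph[fn] {
				if !seen[callee] {
					seen[callee] = true
					next = append(next, callee)
					order = append(order, callee)
				}
			}
		}
		frontier = next
	}
	return order
}

func main() {
	graph := map[string][]string{
		"main":     {"loadConfig", "run"},
		"run":      {"retrieve"},
		"retrieve": {"cosine"},
	}
	// Depth 2 reaches retrieve but stops before cosine.
	fmt.Println(expand(graph, "main", 2))
}
```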
Python · Go · C
Note: Rust, TypeScript, and C++ will be supported soon.
Prerequisites: Go 1.22+, Rust stable, a Hugging Face account (for model downloads)
```sh
git clone https://github.com/nurysso/eulix
cd eulix && make install
```

Or, if you want to build and test each binary individually:

```sh
# Build the CLI
go build -o eulix ./cmd/eulix

# Build the parser
cd eulix-parser && cargo build --release

# Build the embedder (from the project root)
cd eulix-embed && cargo build --release --features rocm
# use `cuda` instead of `rocm` if you have an NVIDIA GPU
```

Try `make help` for other useful commands during building or installing.

```sh
cd my-project
eulix init
eulix analyze
```

`eulix analyze` triggers the parser and embedding pipeline, generating a `.eulix` folder that serves as the knowledge base for the LLM.

```sh
eulix chat
```

Opens an interactive session to query your codebase using the multi-layer retrieval pipeline.
The main entry point for orchestration.
- `init`: Initialize eulix in the current directory
- `analyze`: Analyze the codebase and generate the knowledge base
- `chat`: Start the interactive chat interface
- `cache`: Manage cache entries
- `config`: Manage eulix configuration
- `history`: View query history interactively
- `version`: Display the versions of eulix, eulix_parser, and eulix_embed
- `glados`: Check the knowledge base and embeddings size for errors
- `aspirine`: Try to repair embeddings.bin and the knowledge base (intended for testing only)
Fast static analysis tool.
- `-r, --root`: Project root directory
- `-v, --ver`: Parser version
- `-o, --output`: Output file for knowledge base [default: knowledge_base.json]
- `-t, --threads`: Number of threads for parallel parsing [default: 4]
- `-v, --verbose`: Verbose output
- `-l, --languages`: Languages to parse (comma-separated, or "all") [default: all]
- `--no-analyze`: Skip the analysis phase (faster; only parse files)
- `--euignore`: Path to a custom .euignore file (defaults to /.euignore)
- `-h, --help`: Print help
- `-V, --version`: Print version
Vector generation via ONNX. Supports sentence-transformers/all-MiniLM-L6-v2, BAAI/bge-small-en-v1.5, BAAI/bge-base-en-v1.5, and more. Native CUDA/ROCm support for high-throughput embedding.
```sh
eulix_embed [COMMAND] [OPTIONS]
```

COMMANDS:
- `embed`: Generate embeddings for the knowledge base (default)
- `query`: Generate an embedding for a query string
EMBED OPTIONS:
- `-k, --kb-path`: Path to the knowledge base JSON file
- `-o, --output`: Output directory for embeddings
- `-m, --model`: HuggingFace model name or local path
QUERY OPTIONS:
- `-q, --query`: Query text to embed
- `-m, --model`: HuggingFace model name or local path
- `-f, --format`: Output format: json (default) or binary
- `-h, --help`: Show this help message
- `-v, --version`: Show version
Benchmarks will be added in docs soon.
Contributions are welcome. Please open an issue before submitting a pull request for significant changes.
