This VL-JEPA implementation takes direct inspiration from the original VL-JEPA paper
Updated Jan 18, 2026 - Python
Semantic caching service using Redis vector search and EmbeddingGemma (via Ollama) for multilingual LLM query caching. Supports Matryoshka dimensions (768/512/256/128) for flexible quality vs storage trade-offs.
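The Matryoshka trade-off mentioned above works because Matryoshka-trained models such as EmbeddingGemma concentrate the most informative components in the leading dimensions, so a prefix slice plus L2 re-normalization yields a smaller but still usable embedding. A minimal sketch (the Ollama call is omitted; the random vector below is a stand-in for a real 768-dim EmbeddingGemma output):

```python
import numpy as np

def truncate_matryoshka(embedding, dim):
    """Truncate a Matryoshka embedding to `dim` dims and L2 re-normalize.

    The leading dimensions carry the most information, so the prefix
    slice remains usable for similarity search at lower storage cost.
    """
    v = np.asarray(embedding, dtype=np.float32)[:dim]
    return v / np.linalg.norm(v)

# Stand-in for a 768-dim embedding fetched via Ollama (call omitted here).
full = np.random.default_rng(0).standard_normal(768)

for d in (768, 512, 256, 128):
    small = truncate_matryoshka(full, d)
    print(d, small.shape[0])  # each slice is unit-norm after re-normalization
```

Smaller dimensions shrink the Redis index (and speed up vector search) at the cost of some retrieval quality, which is the quality-vs-storage trade-off the description refers to.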
A Retrieval-Augmented Generation service that crawls websites, indexes content into a vector database, and answers questions with explicit source citations. Designed for correctness, safety, and observability within practical engineering constraints.
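The citation step in a pipeline like this amounts to keeping each indexed chunk paired with its source URL, so retrieval returns evidence and provenance together. A toy sketch with an in-memory index and illustrative 3-dim vectors (a real service would use crawler output and a vector database):

```python
import math

# Toy in-memory index: (source_url, chunk_text, embedding) triples.
# The 3-dim vectors are illustrative stand-ins for real embeddings.
INDEX = [
    ("https://example.com/a", "Redis supports vector search.", [0.9, 0.1, 0.0]),
    ("https://example.com/b", "Ollama serves local models.",   [0.1, 0.9, 0.0]),
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve_with_citations(query_vec, k=1):
    """Rank chunks by cosine similarity and return (text, source) pairs,
    so every generated answer can cite where its evidence came from."""
    ranked = sorted(INDEX, key=lambda row: cosine(query_vec, row[2]), reverse=True)
    return [(text, url) for url, text, _ in ranked[:k]]

print(retrieve_with_citations([1.0, 0.0, 0.0]))
# → [('Redis supports vector search.', 'https://example.com/a')]
```

Carrying the source URL through retrieval is what makes the "explicit source citations" guarantee cheap to enforce at answer time.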