A semantic search app for your images and PDFs that runs completely on your phone. No cloud, no internet needed.
- You pick images or PDFs from your phone
- The app runs an on-device AI vision model over each image to generate a description of its contents
- These descriptions are converted into searchable vectors and stored locally
- You can then search using natural language like "beach sunset" or "invoice from January" and it finds the matching files
Everything happens on-device. Your files never leave your phone.
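The search step above boils down to embedding the query text and ranking the stored description vectors by similarity. A minimal self-contained sketch of that ranking (the `Entry` type and `search` helper are illustrative names, not the app's actual API; in the app the vectors come from the embedding model, here they are plain arrays):

```typescript
// Illustrative index entry: a file path plus its description's embedding.
type Entry = { path: string; vector: number[] };

// Cosine similarity between two vectors of equal length.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb) || 1);
}

// Return the top-k entries most similar to the query embedding.
function search(query: number[], index: Entry[], k = 5): Entry[] {
  return [...index]
    .sort((x, y) => cosine(query, y.vector) - cosine(query, x.vector))
    .slice(0, k);
}
```

In the app this ranking is delegated to sqlite-vec rather than done in JavaScript, but the semantics are the same: nearest vectors first.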
| Model | Purpose | Size |
|---|---|---|
| LFM2 VL 450M | Generates captions for images (default) | 420 MB |
| LFM2 VL 1.6B | Higher-quality captions at the cost of more RAM | 1440 MB |
| Nomic Embed v2 | Converts text to vectors for search | ~300 MB |
Models are provided by the Cactus SDK and downloaded on first use.
- React Native + Expo
- Cactus SDK for on-device AI inference
- SQLite with sqlite-vec for vector storage and search
- TypeScript
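A rough sketch of how the sqlite-vec storage might look (the table name and the 768-dimension embedding size are assumptions, not taken from the app's actual schema):

```sql
-- Hypothetical vec0 virtual table holding one embedding per indexed file.
CREATE VIRTUAL TABLE vec_captions USING vec0(embedding float[768]);

-- Insert an embedding keyed by the file's rowid in a regular files table.
INSERT INTO vec_captions(rowid, embedding) VALUES (?, ?);

-- K-nearest-neighbor search: smaller distance = closer match.
SELECT rowid, distance
FROM vec_captions
WHERE embedding MATCH ?
ORDER BY distance
LIMIT 5;
```

sqlite-vec handles the nearest-neighbor ranking inside SQLite, so the app never has to load all vectors into JavaScript to run a search.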
```sh
bun install
cd android && ./gradlew assembleRelease -PreactNativeArchitectures=arm64-v8a
```

The APK will be at `android/app/build/outputs/apk/release/app-release.apk`.
Built for the Mobile AI Hackathon (Cactus x Nothing x Hugging Face) - Track 1: Memory Master