# Helios

A powerful, privacy-focused desktop AI assistant built with Electron, React, and TypeScript. Helios provides seamless integration with local LLMs via Ollama, featuring advanced RAG capabilities, thinking transparency, and intelligent document processing.
## Features

- **Privacy-First**: All processing happens locally with Ollama
- **Advanced RAG**: Multi-directory document indexing with configurable sensitivity
- **Thinking Transparency**: Real-time display of the model's reasoning process
- **Document Writing Mode**: Amazon-style document creation with structured output
- **Beautiful UI**: Custom maroon-purple theme with responsive design
- **Model Configuration**: Fine-tune temperature, context length, and other parameters
- **Auto-Sync**: Persistent settings and automatic document indexing
- **Conversation Persistence**: Chat history saved automatically between sessions
- **AI-Powered Chat Naming**: Intelligent titles generated for every conversation
- **File Explorer Integration**: Click RAG sources to open files in the system explorer
- **Chat Management**: Individual conversation deletion and bulk clear options
- **Multi-Format File Support**: Drag & drop support for images, documents, and text files
## Prerequisites

- Node.js (v18 or higher)
- Ollama (download from [ollama.ai](https://ollama.ai))
## Installation

1. **Clone the repository**

   ```bash
   git clone <repository-url>
   cd helios
   ```

2. **Install dependencies**

   ```bash
   npm install
   ```

3. **Start Ollama**

   ```bash
   # Install and start the Ollama service
   ollama serve
   ```

4. **Run Helios**

   ```bash
   # Development mode with hot reload
   npm run dev

   # Or build and run production
   npm run build
   npm start
   ```
## Quick Start

- **Automatic Model Installation**: Helios will automatically install `qwen3:4b` as the default model
- **Configure RAG (Optional)**: Go to Settings → RAG to add up to 3 document directories
- **Adjust Model Parameters**: Fine-tune temperature, context length, and other settings
- **Start Chatting**: Begin conversations with your local AI assistant
## Usage

### Basic Chat

- Type messages in the input field and press Enter
- Toggle "Show thinking" to see the model's reasoning process
- Attach files by dragging them into the chat area
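Thinking models (such as the qwen3 family listed under Supported Models) typically wrap their reasoning in `<think>...</think>` tags ahead of the final answer. A minimal sketch of how a UI could separate the two for the "Show thinking" toggle, assuming that tag format; Helios's actual parser may differ:

```typescript
// Split a model response into its hidden reasoning and the visible answer.
// Assumes reasoning is wrapped in <think>...</think>, as qwen3-style
// thinking models commonly emit it (an assumption, not Helios's real code).
export function splitThinking(raw: string): { thinking: string; answer: string } {
  const match = raw.match(/<think>([\s\S]*?)<\/think>/);
  if (!match) {
    return { thinking: "", answer: raw.trim() };
  }
  const thinking = (match[1] ?? "").trim();
  const answer = raw.replace(match[0], "").trim();
  return { thinking, answer };
}
```

With this split, the reasoning can be rendered in a collapsible panel while only the answer enters the main transcript.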
### Chat Management

- **Chat Titles**: Every conversation gets an AI-generated title that updates with each message
- **Conversation Management**: Click the "×" button to delete individual chats
- **Persistent History**: All conversations are automatically saved and restored between sessions
### Using RAG

- **Add Directories**: Settings → RAG → Add Directory (up to 3)
- **Adjust Sensitivity**: Use the slider to control document relevance (10-100%)
- **Enable RAG**: Check the RAG box in the chat interface
- **Ask Questions**: Helios will automatically search your documents for relevant context
- **Source Navigation**:
  - Click the explorer button to open files in the system file explorer/Finder
  - Click the view button to view file content within Helios
  - Historical RAG sources remain visible even when RAG is disabled
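The sensitivity slider can be read as a minimum relevance threshold applied to scored document chunks before they are handed to the model. A hedged sketch, assuming each chunk carries a similarity score in [0, 1]; Helios's real scoring pipeline may differ:

```typescript
// A retrieved document chunk with a similarity score in [0, 1].
interface ScoredChunk {
  file: string;
  text: string;
  score: number;
}

// Map the 10-100% sensitivity slider onto a minimum score and keep only
// chunks above it, best matches first. Higher sensitivity means stricter
// filtering, which is why large collections benefit from 80-100%.
export function filterBySensitivity(
  chunks: ScoredChunk[],
  sensitivityPct: number
): ScoredChunk[] {
  const threshold = Math.min(Math.max(sensitivityPct, 10), 100) / 100;
  return chunks
    .filter((c) => c.score >= threshold)
    .sort((a, b) => b.score - a.score);
}
```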
### Document Writing Mode

- **Enable Document Writing Mode**: Check "Document mode (Amazon-style)"
- **Start New Document**: Enter a title when prompted
- **Build Sections**: Each Q&A pair becomes a document section
- **Export**: Copy the complete markdown document to clipboard
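The steps above amount to concatenating titled Q&A sections into a single markdown file. An illustrative sketch of such an export, where the question becomes a section heading and the answer its body (the exact layout Helios emits is an assumption):

```typescript
// One document section built from a Q&A pair.
interface Section {
  heading: string; // the question
  body: string;    // the model's answer
}

// Assemble the full markdown document for clipboard export.
export function buildDocument(title: string, sections: Section[]): string {
  const parts = [`# ${title}`];
  for (const s of sections) {
    parts.push(`## ${s.heading}`, s.body.trim());
  }
  return parts.join("\n\n") + "\n";
}
```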
## Configuration

### Model Parameters

- **Temperature**: Controls creativity (0 = focused, 1 = creative)
- **Context Length**: Maximum number of tokens the model can process
- **Top P/K**: Advanced sampling parameters for response generation
- **Repeat Penalty**: Reduces repetitive text generation

### Feature Toggles

- **Show Thinking**: Display the model's reasoning process (preserved in chat history)
- **Document Writing Mode**: Amazon-style structured document creation
- **Clear All Chats**: Bulk delete all conversations (Settings → Chat Features)
### File Attachments

- **Drag & Drop**: Supports images, text files, documents, PDFs, Excel, PowerPoint
- **Vision Models**: Automatic image processing with compatible models (llava, minicpm, etc.)
- **File Viewer**: Built-in content viewer with "Add to Chat" functionality
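A dropped file is accepted or rejected by its extension and, for images, a 10MB size cap (see the format lists under Supported File Formats). An illustrative validator reflecting those rules, not Helios's actual code:

```typescript
// Extension sets mirroring the documented attachment rules.
const IMAGE_EXTS = new Set(["png", "jpg", "jpeg", "gif", "webp"]);
const TEXT_EXTS = new Set(["txt", "md", "json", "csv"]);
const DOC_EXTS = new Set(["pdf", "docx", "xlsx", "pptx", "eml", "msg"]);
const MAX_IMAGE_BYTES = 10 * 1024 * 1024; // images are capped at 10 MB

// Decide whether a dropped file can be attached to the chat.
export function canAttach(name: string, sizeBytes: number): boolean {
  const ext = name.split(".").pop()?.toLowerCase() ?? "";
  if (IMAGE_EXTS.has(ext)) return sizeBytes <= MAX_IMAGE_BYTES;
  return TEXT_EXTS.has(ext) || DOC_EXTS.has(ext);
}
```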
## Project Structure

```
helios/
├── src/                 # React frontend
│   ├── App-full.tsx     # Main application component
│   └── components/      # UI components
├── electron/            # Electron main process
│   ├── main.ts          # Core application logic
│   └── preload.js       # IPC bridge
├── docs/                # Documentation
└── dist/                # Built application
```
## Development Scripts

```bash
# Development
npm run dev           # Start with hot reload
npm run dev:renderer  # Frontend only (for UI development)

# Building
npm run build         # Full build + package
npm run package       # Package for distribution
npm run make          # Create installers

# Testing & Quality
npm test              # Run test suite
npm run test:watch    # Watch mode testing
npm run lint          # ESLint code checking
npm run typecheck     # TypeScript type checking
```

## Tech Stack

- Frontend: React 19, TypeScript, Tailwind CSS
- Backend: Electron, Node.js
- AI Integration: Ollama API
- Build Tools: Vite, Electron Forge
- Testing: Jest, React Testing Library
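Ollama exposes a local HTTP API; a generation request is a POST to `/api/generate` with the model name, prompt, and sampling options. A sketch of building that request body, mapping Helios's setting names onto Ollama's documented option keys (`num_ctx`, `top_p`, `top_k`, `repeat_penalty`); the wrapper itself is illustrative, not Helios's actual integration code:

```typescript
// Helios's user-facing parameter names (see Model Parameter Reference).
interface ModelParams {
  temperature: number;
  contextLength: number;
  topP: number;
  topK: number;
  repeatPenalty: number;
}

// Build the JSON body for Ollama's /api/generate endpoint.
export function buildGenerateBody(model: string, prompt: string, p: ModelParams) {
  return {
    model,
    prompt,
    stream: true, // stream tokens so the UI can render them incrementally
    options: {
      temperature: p.temperature,
      num_ctx: p.contextLength,
      top_p: p.topP,
      top_k: p.topK,
      repeat_penalty: p.repeatPenalty,
    },
  };
}

// Usage against a local Ollama at its default port:
// await fetch("http://localhost:11434/api/generate", {
//   method: "POST",
//   body: JSON.stringify(buildGenerateBody("qwen3:4b", "Hello", params)),
// });
```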
## System Requirements

- OS: Windows 10+, macOS 10.15+, or Linux (Ubuntu 18.04+)
- RAM: 8GB minimum, 16GB recommended
- Storage: 2GB for application + model storage
- Network: Internet connection for initial model download
## Supported Models

Helios supports any Ollama-compatible model.

**Recommended Models:**

- `qwen3:4b` (default; balanced performance)
- `llama3.2:8b` (Meta's latest)
- `deepseek-coder:6.7b` (code-focused)
- `llava:7b` (vision capabilities)

**Thinking Models** (show reasoning): `qwen3:*`, `deepseek-*`, `r1-*`
## Data Storage

Settings (`helios-settings.json`):

- macOS: `~/Library/Application Support/helios/helios-settings.json`
- Windows: `%APPDATA%/helios/helios-settings.json`
- Linux: `~/.config/helios/helios-settings.json`

Conversation history (`helios-conversations.json`):

- macOS: `~/Library/Application Support/helios/helios-conversations.json`
- Windows: `%APPDATA%/helios/helios-conversations.json`
- Linux: `~/.config/helios/helios-conversations.json`
All data is stored locally and never transmitted to external servers.
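For illustration, the per-platform locations above can be expressed as one helper. In the real app, Electron's `app.getPath("userData")` resolves this directory; the sketch below only reproduces the documented patterns:

```typescript
// Return the documented location of a Helios data file on each platform.
// Purely illustrative: the app itself would call Electron's
// app.getPath("userData") rather than hard-coding these patterns.
export function dataFilePath(platform: string, home: string, file: string): string {
  switch (platform) {
    case "darwin": // macOS
      return `${home}/Library/Application Support/helios/${file}`;
    case "win32": // Windows (left as the %APPDATA% pattern shown above)
      return `%APPDATA%/helios/${file}`;
    default: // Linux and other Unix-likes
      return `${home}/.config/helios/${file}`;
  }
}
```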
## Supported File Formats

**RAG Indexing:**

- Text: `.md`, `.txt`
- Data: `.json`, `.csv`
- Future: `.pdf`, `.docx` (planned)

**Chat Attachments:**

- Images: `.png`, `.jpg`, `.jpeg`, `.gif`, `.webp` (up to 10MB)
- Text: `.txt`, `.md`, `.json`, `.csv`
- Documents: `.pdf`, `.docx`, `.xlsx`, `.pptx` (placeholder processing)
- Email: `.eml`, `.msg` (placeholder processing)
## Model Parameter Reference

```jsonc
{
  "temperature": 0.7,      // 0-1, creativity level
  "contextLength": 40000,  // 2048-1M, max tokens
  "topP": 0.9,             // 0.1-1.0, nucleus sampling
  "topK": 40,              // 1-100, top-k sampling
  "repeatPenalty": 1.1     // 0.5-2.0, repetition control
}
```

## Troubleshooting

**Ollama Connection Failed**
```bash
# Check if Ollama is running
ollama list

# Start the Ollama service
ollama serve
```

**Model Download Stuck**
```bash
# Manually install the default model
ollama pull qwen3:4b
```

**RAG Indexing Failed**
- Ensure document directories exist and are readable
- Check file permissions
- Verify supported file formats (.md, .txt, .json, .csv)
**Settings Not Persisting**
- Check write permissions to settings directory
- Restart application after major version updates
**Chat History Lost**
- Verify conversation file exists in userData directory
- Check file permissions for `helios-conversations.json`
- Conversations auto-save after each message
**RAG Sources Not Clickable**
- Ensure files still exist at original locations
- Check directory permissions for file explorer access
- Use the in-app view button if the explorer button fails to open the file
## Performance Tips

- Large Document Collections: Use higher RAG sensitivity (80-100%)
- Slow Responses: Reduce context length or use smaller models
- Memory Issues: Use "Clear All Chats" to free memory, then restart the application
- Chat Title Generation: Happens automatically in the background without affecting performance
- File Attachments: Keep image files under 10MB for optimal processing
## Contributing

We welcome contributions! Please see TECH.md for technical details and development guidelines.

1. Fork the repository
2. Create a feature branch: `git checkout -b feature/amazing-feature`
3. Make your changes and test thoroughly
4. Commit with clear messages: `git commit -m 'Add amazing feature'`
5. Push and create a Pull Request
## License

This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments

- Ollama Team - for the excellent local LLM runtime
- React & Electron - for the powerful development frameworks
- Open Source Community - for the amazing tools and libraries
## Need Help?

- Read TECH.md for technical details
- Report issues on GitHub
- Join our community discussions

**Helios - Your Privacy-Focused AI Companion**
