A fully private, 100% offline AI chat assistant that runs on your local machine using Ollama, with a choice of Streamlit or Flask interfaces.
Chat with your PDF, TXT, and Markdown files safely and locally.
- 📴 100% Offline – No data leaves your computer
- 📄 RAG (Retrieval Augmented Generation) – Chat with your documents
- ⚡ Fast & Efficient – Uses the `llama3.2:1b` model
- 🧠 Persistent Memory – Local storage via ChromaDB
- 🖱️ Multiple Interfaces – Streamlit (web) & Flask Cyberpunk (modern UI)
- 🐧 Cross-Platform – Works on Windows, macOS, and Linux
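The retrieval half of the RAG feature can be sketched with a toy retriever. Here a bag-of-words cosine similarity stands in for the real `all-minilm` embeddings the app stores in ChromaDB; the function names and scoring are illustrative only, not the app's actual code:

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; the real app embeds text with a
    # model such as all-minilm via Ollama instead.
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # Rank document chunks by similarity to the query -- the "R" in RAG.
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

chunks = [
    "Ollama runs language models locally.",
    "Streamlit builds the web interface.",
    "ChromaDB stores document vectors on disk.",
]
print(retrieve("where are vectors stored?", chunks, k=1))
```

In the real pipeline the top-ranked chunks are passed to the language model alongside the user's question.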
- Python 3.8+
- Ollama (ollama.com)
- Git (to clone the repository)
- Clone the Repository
  ```
  git clone https://github.com/code-glitchers/IncognitoAI.git
  cd IncognitoAI
  ```
- Run Setup (choose your platform)
  - Windows: Double-click `START_PRIVATEAI.bat`
  - Linux/macOS: `cd linux && chmod +x setup.sh && ./setup.sh`
- Start Ollama (in a separate terminal)
  ```
  ollama serve
  ```
- Launch the App
  - Streamlit Version: `streamlit run app.py`
  - Flask Cyberpunk Version (Linux): `cd linux && chmod +x bot.sh && ./bot.sh`
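Before launching either UI, it can help to confirm that `ollama serve` is actually reachable. A minimal health check, assuming Ollama's default port 11434 and its documented `/api/tags` endpoint (which lists locally installed models):

```python
import json
import urllib.error
import urllib.request

def ollama_is_running(base_url="http://localhost:11434"):
    """Return True if an Ollama server answers at base_url."""
    try:
        with urllib.request.urlopen(base_url + "/api/tags", timeout=3) as resp:
            json.load(resp)  # a healthy server returns the local model list
        return True
    except (urllib.error.URLError, ValueError, OSError):
        return False

if __name__ == "__main__":
    if not ollama_is_running():
        print("Ollama not reachable -- run `ollama serve` first.")
```

This avoids the confusing connection errors the chat UI would otherwise surface when the Ollama daemon isn't up.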
- Windows: Run the one-click installer: `START_PRIVATEAI.bat`
- Linux/macOS: Follow the Linux/macOS Installation & Setup Guide
- Open `http://localhost:8501`
- Upload PDF, TXT, or Markdown files
- Toggle RAG mode to search your documents or ask general questions
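When RAG mode is on, the retrieved chunks have to be folded into the prompt before it reaches the model. A plausible minimal prompt builder (the template wording is an assumption for illustration, not the app's actual prompt):

```python
def build_rag_prompt(question, context_chunks):
    # Place the retrieved chunks ahead of the question so the model
    # answers from the documents rather than from its own weights.
    context = "\n\n".join(context_chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = build_rag_prompt(
    "What port does Streamlit use?",
    ["Streamlit serves the app on port 8501."],
)
print(prompt)
```

With RAG toggled off, the question would be sent to the model as-is, with no context block.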
- Open `http://localhost:5000`
- Dark-themed neon aesthetic
- Real-time streaming responses
- Toggle between RAG and general chat modes
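Real-time streaming works because Ollama's generate endpoint emits one JSON object per line, each carrying a `response` fragment and a final `done` flag. Assembling those chunks client-side can be sketched as (the helper name is illustrative):

```python
import json

def collect_stream(ndjson_lines):
    # Concatenate the `response` fragments from Ollama-style NDJSON
    # chunks, stopping at the chunk flagged done.
    parts = []
    for line in ndjson_lines:
        chunk = json.loads(line)
        parts.append(chunk.get("response", ""))
        if chunk.get("done"):
            break
    return "".join(parts)

# Simulated stream, as the Flask UI would receive it chunk by chunk:
stream = [
    '{"response": "Hel", "done": false}',
    '{"response": "lo!", "done": true}',
]
print(collect_stream(stream))  # -> Hello!
```

The Flask UI forwards each fragment to the browser as it arrives instead of waiting for the joined string, which is what makes the response appear to type itself out.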
```
IncognitoAI/
├── app.py               # Streamlit application
├── requirements.txt     # Python dependencies
├── START_PRIVATEAI.bat  # Windows launcher
├── linux/
│   ├── bot.py           # Flask Cyberpunk app
│   ├── setup.sh         # Linux setup script
│   ├── start.sh         # Start Streamlit (Linux)
│   ├── bot.sh           # Start Flask app (Linux)
│   ├── templates/       # Flask HTML templates
│   ├── static/          # CSS and JavaScript
│   └── README.md        # Linux-specific guide
└── .chroma_db/          # Local vector database (auto-created)
```
- LLM: `llama3.2:1b` – Fast, efficient language model
- Embeddings: `all-minilm:latest` – Fast embedding model for RAG
Models are automatically downloaded on first run via Ollama.
We welcome contributions! Feel free to:
- Report bugs and issues
- Suggest new features
- Submit pull requests
- Improve documentation
MIT License - feel free to modify and distribute!
For questions or support, open an issue on GitHub.
