
🛡️ IncognitoAI – Local Offline RAG Assistant

IncognitoAI Logo

A fully private, 100% offline AI chat assistant that runs on your local machine, powered by Ollama with a Streamlit or Flask front end.
Chat with your PDF, TXT, and Markdown files safely and locally.



✨ Features

  • 📴 100% Offline – No data leaves your computer
  • 📄 RAG (Retrieval Augmented Generation) – Chat with your documents
  • ⚡ Fast & Efficient – Uses the lightweight llama3.2:1b model
  • 🧠 Persistent Memory – Local storage via ChromaDB
  • 🖱️ Multiple Interfaces – Streamlit (web) & Flask Cyberpunk (modern UI)
  • 🐧 Cross-Platform – Works on Windows, macOS, and Linux
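
The RAG feature above can be sketched in miniature. The toy code below is a dependency-free illustration only: it chunks a document and retrieves the most relevant chunk by cosine similarity over bag-of-words counts, standing in for the real pipeline (which uses Ollama embeddings and ChromaDB):

```python
from collections import Counter
import math

def chunk_text(text, size=200):
    """Split a document into fixed-size word chunks (simplified splitting)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text):
    """Toy bag-of-words 'embedding' standing in for real model vectors."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks):
    """Return the chunk most similar to the query -- the 'R' in RAG."""
    q = embed(query)
    return max(chunks, key=lambda c: cosine(q, embed(c)))

doc = ("Ollama runs models locally. Streamlit provides "
       "the web UI. ChromaDB stores vectors on disk.")
chunks = chunk_text(doc, size=6)
print(retrieve("where are vectors stored", chunks))
```

The retrieved chunk is then prepended to the prompt before it is sent to the model, which is what lets the assistant answer from your documents.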

🚀 Quick Start

Prerequisites

  • Python 3.8+
  • Ollama (ollama.com)
  • Git (to clone the repository)

Installation Steps

  1. Clone the Repository

    git clone https://github.com/code-glitchers/IncognitoAI.git
    cd IncognitoAI
  2. Run Setup (Choose your platform)

    • Windows: Double-click START_PRIVATEAI.bat
    • Linux/macOS: cd linux && chmod +x setup.sh && ./setup.sh
  3. Start Ollama (in a separate terminal)

    ollama serve
  4. Launch the App

    • Streamlit Version:
      streamlit run app.py
    • Flask Cyberpunk Version (Linux):
      cd linux && chmod +x bot.sh && ./bot.sh
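
Before launching either app, it helps to confirm the Ollama server from step 3 is actually reachable. A small hypothetical helper (not part of the repo) that probes Ollama's default port 11434:

```python
import urllib.request
import urllib.error

def ollama_running(host="http://localhost:11434", timeout=2):
    """Return True if an Ollama server answers on its default port.
    Ollama replies to GET / with a plain 200 status message."""
    try:
        with urllib.request.urlopen(host, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if ollama_running():
    print("Ollama is up - launch the app")
else:
    print("Start it first with: ollama serve")
```

If this prints the second message, run `ollama serve` in a separate terminal and try again.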

🖥️ Platform-Specific Setup

🪟 Windows

Run the one-click installer:

START_PRIVATEAI.bat

🐧 Linux / macOS

Follow the Linux/macOS Installation & Setup Guide in linux/README.md


📚 Usage

Streamlit Interface (app.py)

  • Open http://localhost:8501
  • Upload PDF, TXT, or Markdown files
  • Toggle RAG mode to search documents or ask general questions

Flask Cyberpunk Interface (bot.py - Linux)

  • Open http://localhost:5000
  • Dark-themed neon aesthetic
  • Real-time streaming responses
  • Toggle between RAG and general chat modes
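
The real-time streaming mentioned above follows the usual generator pattern: token chunks are yielded to the browser as they arrive instead of waiting for the full reply. A dependency-free sketch of that pattern (the actual bot.py wiring may differ; Ollama's streaming responses arrive as one JSON object per line):

```python
import json

def fake_model_tokens(prompt):
    """Stand-in for a streaming model backend that emits
    newline-delimited JSON events, one per token chunk."""
    for word in ("Running", "fully", "offline."):
        yield json.dumps({"response": word + " ", "done": False})
    yield json.dumps({"done": True})

def stream_reply(prompt):
    """Generator a Flask view could wrap in Response(...) so the
    browser receives each chunk as soon as it is produced."""
    for line in fake_model_tokens(prompt):
        event = json.loads(line)
        if event.get("done"):
            break
        yield event["response"]

print("".join(stream_reply("hello")))
```

In the Flask app, the same generator shape is what makes the UI feel instantaneous: the page renders tokens as they stream in rather than after the whole answer is generated.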

📦 Project Structure

IncognitoAI/
├── app.py                    # Streamlit application
├── requirements.txt          # Python dependencies
├── START_PRIVATEAI.bat       # Windows launcher
├── linux/
│   ├── bot.py               # Flask Cyberpunk app
│   ├── setup.sh             # Linux setup script
│   ├── start.sh             # Start Streamlit (Linux)
│   ├── bot.sh               # Start Flask app (Linux)
│   ├── templates/           # Flask HTML templates
│   ├── static/              # CSS and JavaScript
│   └── README.md            # Linux-specific guide
└── .chroma_db/              # Local vector database (auto-created)

🛠️ Models Used

  • LLM: llama3.2:1b - Fast, efficient language model
  • Embeddings: all-minilm:latest - Fast embedding model for RAG

Models are automatically downloaded on first run via Ollama.
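
Ollama serves these models over a local REST API (default http://localhost:11434). As a hedged sketch of how the app could compose a generate request to llama3.2:1b, with retrieved chunks folded into the prompt (the helper name and prompt template here are illustrative, not taken from the repo; no network call is made):

```python
import json

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama generate endpoint

def build_request(prompt, context_chunks=(), model="llama3.2:1b"):
    """Compose a JSON body for Ollama's /api/generate. Retrieved
    chunks are prepended as context -- the 'AG' half of RAG."""
    context = "\n".join(context_chunks)
    full_prompt = (f"Context:\n{context}\n\nQuestion: {prompt}"
                   if context else prompt)
    return json.dumps({"model": model, "prompt": full_prompt, "stream": True})

payload = build_request("Where is my data stored?",
                        ["ChromaDB keeps vectors in .chroma_db/"])
print(payload)
```

Because everything targets localhost, the prompt, the documents, and the model's reply never leave the machine.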


🤝 Contributing

We welcome contributions! Feel free to:

  • Report bugs and issues
  • Suggest new features
  • Submit pull requests
  • Improve documentation

👥 Contributors and Developers

  • haybnzz
  • VaradScript


📝 License

MIT License - feel free to modify and distribute!

For questions or support, open an issue on GitHub.
