LocalMedAI is a cutting-edge medical AI assistant that runs completely locally on your machine. Built with privacy-first principles, it processes medical images and symptom descriptions using local LLMs (Llama2/Mistral via Ollama) without sending any data to external servers.
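All inference goes through Ollama's HTTP API on localhost. As a rough illustration (not the project's actual client code), a non-streaming request to Ollama's standard `/api/generate` endpoint can be built like this — the endpoint and payload shape are Ollama's documented API; the prompt is made up:

```python
import json

def build_ollama_request(prompt: str, model: str = "llama2:7b") -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint (non-streaming)."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

body = build_ollama_request("List three common causes of a persistent cough.")

# With Ollama running, the call stays on localhost -- nothing leaves the machine:
# import urllib.request
# req = urllib.request.Request("http://localhost:11434/api/generate", data=body,
#                              headers={"Content-Type": "application/json"})
# print(json.loads(urllib.request.urlopen(req).read())["response"])
```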
**Backend**
- FastAPI - Modern, fast Python web framework
- Ollama - Local LLM deployment (Llama2-7B/Mistral-7B)
- OpenCV - Medical image processing
- PIL/Pillow - Image handling
- Pydantic - Data validation

**Frontend**
- React 18 with TypeScript
- Tailwind CSS - Modern, responsive design
- Vite - Fast build tool
- Axios - HTTP client
- React Query - Server-state management and data fetching

**Infrastructure**
- Docker - Containerization
- Docker Compose - Multi-container setup
- Docker and Docker Compose (for containerized setup)
- OR Python 3.11+ and Node.js 16+ (for local development)
- 8GB+ RAM (for local LLM)
- Modern web browser
```bash
# Clone the repository
git clone https://github.com/yourusername/LocalMedAI.git
cd LocalMedAI

# Run automated setup
chmod +x setup.sh && ./setup.sh
```

```bash
# Clone and start services
git clone https://github.com/yourusername/LocalMedAI.git
cd LocalMedAI
docker-compose up -d

# Access the application
# Frontend: http://localhost:3000
# Backend API: http://localhost:8000
# API Docs: http://localhost:8000/docs
```

```bash
# Clone repository
git clone https://github.com/yourusername/LocalMedAI.git
cd LocalMedAI

# Setup local development environment
chmod +x dev-setup.sh && ./dev-setup.sh

# Start services manually
./start-ollama.sh    # Terminal 1
./start-backend.sh   # Terminal 2
./start-frontend.sh  # Terminal 3
```

For local development, the project uses Python virtual environments:
```bash
# Setup virtual environment
cd backend
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt

# Activate when working
source venv/bin/activate

# Deactivate when done
deactivate
```

```bash
# Setup Node environment
cd frontend
npm install

# Development commands
npm run dev    # Start development server
npm run build  # Build for production
npm run lint   # Run linting
```

```bash
./setup.sh   # Complete automated setup
./start.sh   # Start all services
./stop.sh    # Stop all services
./status.sh  # Check service status
```

```bash
./dev-setup.sh       # Setup local development environment
./start-backend.sh   # Start backend with virtual environment
./start-frontend.sh  # Start frontend development server
./start-ollama.sh    # Start Ollama service
```

```bash
# Docker
docker-compose up -d            # Start all services
docker-compose down             # Stop all services
docker-compose logs -f          # View logs
docker-compose restart backend  # Restart specific service

# Backend (with virtual environment)
cd backend && source venv/bin/activate
uvicorn app.main:app --reload   # Start backend

# Frontend
cd frontend
npm run dev    # Start development server
npm run build  # Build for production

# Ollama
ollama serve           # Start Ollama service
ollama pull llama2:7b  # Download model
ollama list            # List available models
```

- Upload medical images (dermatology, X-rays, etc.)
- AI analyzes the image for potential conditions
- Receive suggestions with confidence scores
- Describe your symptoms in natural language
- AI processes the description
- Get potential diagnosis suggestions
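However the analysis route shapes its reply, the suggestions are ultimately ranked by confidence before display. A minimal sketch of that post-processing, assuming a response shape of `{"suggestions": [{"condition": ..., "confidence": ...}]}` (the actual schema may differ):

```python
import json

def top_suggestions(response_body: bytes, k: int = 3) -> list:
    """Return the k highest-confidence suggestions from an analysis response."""
    data = json.loads(response_body)
    return sorted(data["suggestions"], key=lambda s: s["confidence"], reverse=True)[:k]

# Hand-written sample response, mimicking the assumed schema:
sample = json.dumps({"suggestions": [
    {"condition": "bronchitis", "confidence": 0.42},
    {"condition": "common cold", "confidence": 0.77},
]}).encode()

top = top_suggestions(sample)
# top[0]["condition"] == "common cold"
```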
- ✅ No data leaves your machine
- ✅ No external API calls
- ✅ Complete local processing
- ✅ Optional local storage only
```
LocalMedAI/
├── backend/                # FastAPI backend
│   ├── app/
│   │   ├── __init__.py
│   │   ├── main.py         # FastAPI application
│   │   ├── models/         # Pydantic models
│   │   ├── services/       # Business logic
│   │   ├── utils/          # Utility functions
│   │   └── config.py       # Configuration
│   ├── requirements.txt
│   └── Dockerfile
├── frontend/               # React frontend
│   ├── src/
│   │   ├── components/     # React components
│   │   ├── pages/          # Page components
│   │   ├── services/       # API services
│   │   ├── types/          # TypeScript types
│   │   └── utils/          # Utility functions
│   ├── package.json
│   └── Dockerfile
├── docker-compose.yml      # Multi-container setup
├── .gitignore
└── README.md
```
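The `backend/app/models/` package holds the Pydantic schemas. As an illustration of the kind of schema that lives there (field names are hypothetical, not the project's actual models):

```python
from typing import List
from pydantic import BaseModel

class Suggestion(BaseModel):
    """One candidate condition with the model's confidence."""
    condition: str
    confidence: float  # 0.0 - 1.0, as surfaced in the UI

class AnalysisResponse(BaseModel):
    suggestions: List[Suggestion]
    disclaimer: str = "Not a substitute for professional medical advice."

resp = AnalysisResponse(
    suggestions=[Suggestion(condition="contact dermatitis", confidence=0.61)]
)
```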
```bash
# Backend
OLLAMA_BASE_URL=http://localhost:11434
MODEL_NAME=llama2:7b
UPLOAD_DIR=./uploads
MAX_FILE_SIZE=10485760  # 10MB

# Frontend
VITE_API_URL=http://localhost:8000
```
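In a layout like this, `backend/app/config.py` would typically read these variables with environment fallbacks. A minimal stdlib sketch of that pattern (defaults mirror the values listed above; the loading mechanism is an assumption):

```python
import os

# Defaults mirror the documented values; the environment overrides them.
OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://localhost:11434")
MODEL_NAME = os.getenv("MODEL_NAME", "llama2:7b")
UPLOAD_DIR = os.getenv("UPLOAD_DIR", "./uploads")
MAX_FILE_SIZE = int(os.getenv("MAX_FILE_SIZE", "10485760"))  # 10 MB
```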
- **Skin Condition Analysis**
  - Upload dermatology images
  - Get AI-powered condition suggestions
- **Symptom Description**
  - "I have a persistent cough and chest pain"
  - Receive potential diagnosis suggestions
- **Medical History Analysis**
  - Input patient information
  - Get AI insights and recommendations
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
**Important:** This application is for educational and demonstration purposes only. It is not intended to replace professional medical advice, diagnosis, or treatment. Always consult with qualified healthcare professionals for medical concerns.