Train it once. Style it forever. Prompt, mint, and create with your own visual signature.
A full-stack web application that lets users:
- Generate images from text prompts using Stable Diffusion
- Upload reference images to train custom LoRA adapters
- Apply trained LoRAs to future image generations
- View and manage image outputs and LoRA files
Everyone hosts their image generation applications on Hugging Face, and I understand why: the on-demand GPU power is hard to beat. But I wanted to focus on a more complete software build. I don't know much about Gradio yet, though it seems like a framework for quickly getting something on screen. Leaning on my Blazor Framework experience, I thought building a more polished front end would be cool. Enjoy local image generation with NO TOKEN COST :)
Welcome screen with model status
First-time setup wizard with model selection
Settings page for model management
Image gallery with generated outputs
| Layer | Technology |
|---|---|
| Frontend | Blazor Server (C#) |
| Backend | ASP.NET Core Minimal APIs |
| AI Engine | Python (FastAPI) |
| Models | SDXL Base, SDXL Turbo, Z-Image Turbo |
| LoRA | PEFT / Kohya Trainer |
| Format | .safetensors |
| Storage | Local File System |
- Setup: First-time users are guided through a setup wizard to select and download a model
- Generate: User enters a prompt → Python backend generates an image using the selected model
- Train: User uploads 1-5 reference images → a LoRA model is trained and saved as .safetensors
- Apply: User selects trained LoRA(s) → generates stylized images with the custom style
- Manage: All LoRAs and images are organized per user for easy access
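The Generate step above amounts to a single JSON POST against the Blazor API. A minimal sketch of assembling that request body; the field names and shapes here are illustrative assumptions, not the project's actual contract:

```python
import json

def build_generate_request(user_id, prompt, loras=None):
    """Assemble a hypothetical JSON body for POST /api/generate.

    `loras` is an optional list of (name, strength) pairs; all field
    names are assumptions made for illustration only.
    """
    return {
        "userId": user_id,
        "prompt": prompt,
        "loras": [{"name": n, "strength": s} for n, s in (loras or [])],
    }

body = build_generate_request("alice", "a neon city at night", [("my_style", 0.8)])
print(json.dumps(body))
```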
| Model | Speed | Quality | Min VRAM | LoRA Support |
|---|---|---|---|---|
| SDXL Base 1.0 | Medium (30 steps) | High | 8GB | Yes |
| SDXL Turbo | Fast (4 steps) | Good | 8GB | Yes |
| Z-Image Turbo | Fast (8 steps) | Excellent | 16GB | No |
Models are downloaded on-demand through the setup wizard or settings page. Each model is ~7-12GB.
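The setup wizard's color-coded compatibility indicators can be sketched as a simple threshold check against the Min VRAM column above; the 25% headroom cutoff here is an assumption, not the project's actual rule:

```python
def vram_indicator(available_gb, min_gb):
    """Map detected VRAM against a model's stated minimum.

    green  = comfortable headroom (assumed 25% above the minimum)
    yellow = meets the minimum, but may OOM under load
    red    = below the minimum
    """
    if available_gb >= min_gb * 1.25:
        return "green"
    if available_gb >= min_gb:
        return "yellow"
    return "red"

# A 10GB card (e.g. RTX 3080) against the table above:
print(vram_indicator(10, 8))   # SDXL Base / SDXL Turbo
print(vram_indicator(10, 16))  # Z-Image Turbo
```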
```
LoraMint/
├── src/
│   ├── LoraMint.Web/                  # Blazor Server application
│   │   ├── BackgroundServices/        # Hosted services
│   │   │   └── PythonBackendHostedService.cs
│   │   ├── Components/                # Blazor components
│   │   │   ├── Layout/                # MainLayout, NavMenu
│   │   │   └── Shared/                # Reusable components (ModelComparisonTable)
│   │   ├── Models/                    # Data models (ModelConfig, GenerateRequest, etc.)
│   │   ├── Pages/                     # Razor pages
│   │   │   ├── Setup.razor            # First-time setup wizard
│   │   │   ├── Settings.razor         # Model management settings
│   │   │   ├── Generate.razor         # Image generation
│   │   │   ├── TrainLora.razor        # LoRA training
│   │   │   ├── MyImages.razor         # Image gallery
│   │   │   └── MyLoras.razor          # LoRA library
│   │   ├── Services/                  # C# services
│   │   │   ├── PythonBackendService.cs
│   │   │   ├── FileStorageService.cs
│   │   │   └── ModelConfigurationService.cs
│   │   ├── wwwroot/css/               # Cyberpunk terminal theme
│   │   └── Program.cs                 # Minimal API endpoints
│   │
│   └── python-backend/                # Python FastAPI backend
│       ├── models/                    # Pydantic models
│       ├── services/                  # AI services
│       │   ├── image_generator.py     # SD image generation
│       │   ├── model_manager.py       # Model downloading/loading
│       │   └── training/              # LoRA training modules
│       ├── utils/                     # Utilities
│       └── main.py                    # FastAPI application
│
├── data/                              # Storage (gitignored)
│   ├── models/                        # Downloaded AI models (~7-12GB each)
│   ├── loras/                         # User LoRA models
│   ├── outputs/                       # Generated images
│   └── model-settings.json            # Model preferences
│
├── QUICKSTART.md                      # Quick start guide
├── PROJECT_INSTRUCTIONS.md            # Detailed specifications
├── FUTURE_FEATURES.md                 # Roadmap and planned features
├── docker-compose.yml                 # Docker orchestration
└── README.md                          # This file
```
- .NET 8.0 SDK
- Python 3.10+
- CUDA-capable GPU (recommended)
- 16GB+ RAM
The Blazor application automatically sets up and starts the Python backend for you!
Linux/macOS:
```bash
./start.sh
```
Windows:
```bash
start.bat
```
That's it! The application will:
- Stop any existing LoraMint instances (automatic cleanup)
- Create Python virtual environment (if needed)
- Install dependencies (if needed)
- Start the Python backend
- Start the Blazor web application
Access the app at https://localhost:5001
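Before routing requests, an auto-start supervisor typically polls the backend's health endpoint until it comes up. A sketch of that wait loop, with the HTTP probe injected as a callable so the logic stands alone; this is illustrative, not the project's PythonBackendHostedService:

```python
import time

def wait_for_backend(probe, timeout_s=60.0, interval_s=0.5):
    """Poll probe() until it returns True or the timeout elapses.

    `probe` stands in for an HTTP GET against the backend's /health
    endpoint; injecting it keeps the loop testable offline.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if probe():
            return True
        time.sleep(interval_s)
    return False
```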
See QUICKSTART.md for detailed first-run instructions
If you prefer manual control:
Linux/macOS:
```bash
./setup-python.sh
```
Windows:
```bash
setup-python.bat
```
Or manually:
```bash
cd src/python-backend
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
```
Option A: Use the startup script (auto-starts the Python backend)
```bash
./start.sh  # or start.bat on Windows
```
Option B: Start manually in separate terminals
Terminal 1 - Python Backend:
```bash
cd src/python-backend
source venv/bin/activate  # On Windows: venv\Scripts\activate
python main.py
```
Terminal 2 - Blazor Web:
```bash
cd src/LoraMint.Web
dotnet run
```
Open your browser and navigate to:
- Web UI: https://localhost:5001
- Python API Docs: http://localhost:8000/docs
```bash
# Build and run all services
docker-compose up --build

# Run in detached mode
docker-compose up -d

# View logs
docker-compose logs -f

# Stop services
docker-compose down
```
Services will be available at:
- Web UI: http://localhost:5001
- Python API: http://localhost:8000
- `POST /api/generate` - Generate an image from a prompt
- `POST /api/train-lora` - Train a new LoRA model
- `GET /api/loras/{userId}` - List user's LoRA models
- `GET /api/images/{userId}` - List user's generated images
- `POST /generate` - Generate image using Stable Diffusion
- `POST /train-lora` - Train LoRA model
- `GET /loras/{user_id}` - Get user's LoRAs
- `GET /images/{user_id}` - Get user's images
- `GET /health` - Health check and GPU status
- `GET /models` - List available models with download status
- `POST /models/{id}/download` - Download a model (SSE progress stream)
- `POST /models/{id}/load` - Load model into GPU memory
- `POST /models/unload` - Unload current model
- `GET /models/current` - Get currently loaded model
- `GET /system/gpu` - Get GPU information (VRAM, CUDA version)
For detailed API documentation, visit http://localhost:8000/docs after starting the Python backend.
- First-time setup wizard with GPU detection and VRAM display
- Model comparison table with VRAM requirements and compatibility
- Color-coded compatibility indicators (green/yellow/red based on available VRAM)
- On-demand model downloading with SSE progress streaming
- Dynamic "Powered by [Model Name]" label in the UI
- Settings page for switching models without restart
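On the client side, consuming the download's SSE progress stream is mostly line parsing. A minimal sketch, assuming each event is a single `data:` line carrying JSON; the real payload shape for `POST /models/{id}/download` may carry more fields:

```python
import json

def parse_sse_events(raw):
    """Extract JSON payloads from a Server-Sent Events stream.

    Assumes each event is one `data:` line of JSON, e.g.
    `data: {"percent": 42}` -- an assumption for illustration.
    """
    events = []
    for line in raw.splitlines():
        line = line.strip()
        if line.startswith("data:"):
            events.append(json.loads(line[len("data:"):].strip()))
    return events

stream = 'data: {"percent": 10}\n\ndata: {"percent": 100}\n\n'
print(parse_sse_events(stream))  # [{'percent': 10}, {'percent': 100}]
```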
- Dark theme with gradient mesh background
- Purple/pink/orange gradient accents with cyan highlights
- Terminal-style typography (JetBrains Mono)
- Animated loading states with pulsing dots and spinning rings
- Glowing UI elements and scanline effects
- Terminal prompt prefix (`>_`) on headings
- Text-to-image using SDXL Base, SDXL Turbo, or Z-Image Turbo
- Optional LoRA model application with adjustable strength
- Multiple LoRAs can be combined
- Real-time generation feedback with animated progress
- Step counter with digital font display (Orbitron)
- Per-user image storage and organization
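Conceptually, applying multiple LoRAs at adjustable strengths is a weighted sum of each adapter's weight delta on top of the base weights. A toy sketch with flat lists standing in for tensors; real pipelines apply low-rank updates per layer rather than dense deltas:

```python
def apply_loras(base, loras):
    """Add each LoRA's weight delta, scaled by its strength, to the base.

    `base` is a flat list of floats standing in for a weight tensor;
    `loras` is a list of (delta, strength) pairs. Illustrative only.
    """
    out = list(base)
    for delta, strength in loras:
        for i, d in enumerate(delta):
            out[i] += strength * d
    return out

# Two LoRAs combined: each delta contributes in proportion to its strength.
styled = apply_loras([1.0, 2.0], [([0.5, -0.5], 0.8), ([1.0, 1.0], 0.2)])
# roughly [1.6, 1.8]
```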
- Upload 1-5 reference images for training
- DreamBooth-style PEFT training with prior preservation
- Configurable training parameters (epochs, learning rate, LoRA rank)
- Fast mode option (~40% faster training)
- Real-time progress streaming with phase indicators
- Automatic trigger word generation (e.g., `sks_<name>`)
- Class image caching for faster retries
- Automatic .safetensors format output
- Per-user LoRA storage
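Trigger word generation can be sketched as slugifying the LoRA name under the `sks_` prefix mentioned above; the exact sanitization rule here is an assumption:

```python
import re

def make_trigger_word(name):
    """Derive an `sks_<name>` trigger token from a user-supplied LoRA name.

    Lowercasing and collapsing non-alphanumerics to underscores is an
    assumed sanitization rule, not necessarily the project's.
    """
    slug = re.sub(r"[^a-z0-9]+", "_", name.lower()).strip("_")
    return f"sks_{slug}"

print(make_trigger_word("My Cat Photos!"))  # sks_my_cat_photos
```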
- Browse all generated images per user
- View generation metadata
- Filter and organize by date
- Download images
- List all trained models per user
- View file information and creation dates
- Quick access for use in generation
- Delete unwanted models
```json
{
  "PythonBackend": {
    "BaseUrl": "http://localhost:8000",
    "Path": "../python-backend",
    "AutoStart": true,
    "AutoInstallDependencies": true
  },
  "Storage": {
    "LorasPath": "../../data/loras",
    "OutputsPath": "../../data/outputs"
  }
}
```
Configuration Options:
- `AutoStart`: Automatically start the Python backend (default: `true`)
- `AutoInstallDependencies`: Auto-install Python packages (default: `true`)
- `Path`: Path to the Python backend directory
- `BaseUrl`: Python backend API URL
- Model selection (SDXL Base, SDXL Turbo, etc.)
- Generation parameters
- Storage paths
- GPU settings
Core Features
- Blazor Server UI with all pages
- ASP.NET Core Minimal APIs
- FastAPI backend structure
- File storage system with per-user organization
- Docker support
Image Generation
- Image generation pipeline with real-time SSE progress streaming
- Animated loading states (pulsing dots, spinning rings, step counter)
- Per-user image management and gallery
LoRA Training
- Real LoRA training using DreamBooth-style PEFT (see Known Issues)
- Training UI with progress streaming and configurable settings
- Fast mode for quicker training (~40% faster)
- Automatic trigger word generation
- Class image caching for faster retries
Model Management
- Multi-model selection (SDXL Base, SDXL Turbo, Z-Image Turbo)
- Setup wizard for first-time users with GPU detection
- Model comparison table with VRAM compatibility indicators
- Settings page for model management
- Network-friendly model downloads (single-threaded to prevent saturation)
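A single-threaded, chunked download with progress reporting avoids the connection saturation that parallel range requests can cause. A sketch using file-like objects so it runs without a network; the real downloader presumably streams over HTTP:

```python
import io

def download_chunked(src, dst, total_bytes, on_progress, chunk_size=1 << 20):
    """Copy src to dst one chunk at a time over a single stream,
    reporting integer percent complete after each chunk.

    Single-threaded on purpose: parallel range requests can saturate
    a home connection while a ~10GB model downloads.
    """
    done = 0
    while True:
        chunk = src.read(chunk_size)
        if not chunk:
            break
        dst.write(chunk)
        done += len(chunk)
        on_progress(round(100 * done / total_bytes))
    return done

# Simulate a 3KB "model" in memory.
percents = []
n = download_chunked(io.BytesIO(b"x" * 3000), io.BytesIO(), 3000,
                     percents.append, chunk_size=1000)
```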
Developer Experience
- Automatic Python backend startup
- One-command setup and launch (`start.sh` / `start.bat`)
- Enhanced startup feedback with progress indicators
- Cross-platform startup scripts (Windows & Linux/macOS)
- Automated dependency installation and validation
UI/UX
- Cyberpunk terminal dark theme with gradient accents
- Terminal-style typography (JetBrains Mono)
- Glowing UI elements and scanline effects
- LoRA training memory optimization for 10GB GPUs
- User authentication
- Image metadata persistence
- LoRA stacking UI with sliders
- Azure Blob Storage support
- Batch generation
- LoRA marketplace
- Additional model support (SDXL Lightning, Playground v2.5, etc.)
- Custom model import (safetensors, ckpt)
The current DreamBooth-style LoRA training implementation may experience out-of-memory (OOM) issues on GPUs with 10GB VRAM (e.g., RTX 3080). This occurs because:
- Class image generation loads a full SDXL pipeline (~8GB)
- Training loads UNet, VAE, and text encoders
- CUDA memory fragmentation prevents efficient reuse
Current mitigations implemented:
- Text encoders run on CPU (FP32) instead of GPU
- Aggressive GPU memory cleanup between phases
- Gradient checkpointing enabled
- 8-bit Adam optimizer
- Mixed precision (FP16) training
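The "aggressive GPU memory cleanup between phases" mitigation can be sketched as below. The torch import is guarded so the snippet runs without a GPU, and this is illustrative rather than the project's exact code:

```python
import gc

try:
    import torch
except ImportError:  # lets the sketch run on machines without PyTorch
    torch = None

def free_gpu_between_phases():
    """Reclaim GPU memory after the caller drops its big references.

    Intended to run between class-image generation (full SDXL pipeline,
    ~8GB) and training (UNet, VAE, text encoders), where fragmentation
    otherwise prevents the second phase from fitting.
    """
    gc.collect()  # break reference cycles still holding tensors alive
    if torch is not None and torch.cuda.is_available():
        torch.cuda.empty_cache()  # return cached allocator blocks to the driver
        torch.cuda.ipc_collect()  # release unused inter-process memory handles

# usage: set `pipeline = None` first, then call free_gpu_between_phases()
```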
Workarounds for users:
- Use GPUs with 12GB+ VRAM for reliable training
- Close other GPU-intensive applications during training
- If training fails, restart the app and try again (class images are cached)
Future fixes planned:
- Sequential model loading with offloading
- Lower resolution option for class image generation
- Memory-efficient attention (xFormers) when available
- Quick Start Guide - Get up and running fast
- Project Instructions - Detailed specifications
- Future Features - Roadmap and planned features
- Python Backend README - Python setup guide
- Blazor Web README - .NET setup guide
- Fork the repository
- Create a feature branch
- Commit your changes
- Push to the branch
- Create a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.



