N01D
[ PENTEST & DEV CONTAINERS | bad-antics ]
v2.0 – AI · Music · Art
Pre-configured Docker containers for security research, AI/ML, image generation, music creation, and development.
- ollama (port 11434) – Local LLM engine, runs all text models
- webui / Open WebUI (port 3080) – ChatGPT-style web interface for all models
- agent-zero (port 3100) – Autonomous AI agent with tool use + code execution
- comfyui (port 8188) – Image/logo/art generation (Stable Diffusion, SDXL, Flux)
- musicgen (port 7860) – AI music generation (Meta AudioCraft)
- pentest – Kali Linux with nmap, metasploit, burp, sqlmap
- ctf – CTF tools with pwntools, gdb, radare2, ghidra
- proxy (port 8080/8081) – mitmproxy traffic interception
- vpn (port 51820/udp) – WireGuard VPN gateway
- dev (port 3000/8000) – Python, Node, Go, Rust dev environment
- julia – Julia data science + security research
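If you forget a service name, compose can list everything defined in the file. A quick sketch, run from the n01d-docker directory:

```shell
# Print the service names defined in docker-compose.yml, one per line.
docker compose config --services

# Or show only the ones that are currently running:
docker compose ps --services --filter "status=running"
```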
After starting Ollama, run the model pull script to download everything:
docker exec n01d-ollama bash /scripts/pull-models.sh

- dolphin-mistral:7b (4.1 GB) – Eric Hartford's Dolphin, no alignment filters
- dolphin-llama3:8b (4.7 GB) – Dolphin Llama 3, fully unlocked
- dolphin-mixtral:8x7b (26 GB) – Most capable uncensored MoE model
- wizard-vicuna-uncensored:13b (7.4 GB) – Classic uncensored model
- llama2-uncensored:7b (3.8 GB) – No-guardrails Llama 2
- nous-hermes2:10.7b (6.1 GB) – Powerful uncensored reasoning
- codellama:13b (7.4 GB) – Exploit dev, shellcode, reverse engineering
- deepseek-coder-v2:16b (8.9 GB) – Advanced code/vuln analysis
- phind-codellama:34b (19 GB) – Complex exploit generation
- deepseek-r1:8b (4.9 GB) – Chain-of-thought reasoning
- deepseek-r1:14b (9.0 GB) – Deep reasoning
- qwen2.5:7b (4.7 GB) – Strong multilingual reasoning
- command-r:35b (20 GB) – Advanced RAG + reasoning
- codellama:7b (3.8 GB) – Fast code generation
- starcoder2:7b (4.0 GB) – Multi-language code
- codegemma:7b (5.0 GB) – Google's code model
- qwen2.5-coder:7b (4.7 GB) – Qwen code specialist
- nomic-embed-text (274 MB) – Embeddings for Agent Zero + RAG
- all-minilm (45 MB) – Fast semantic search
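For reference, a minimal sketch of what a pull script like scripts/pull-models.sh might contain; the real script ships with the repo, and the model subset and error handling here are illustrative:

```shell
#!/usr/bin/env bash
# Illustrative sketch: pull each model in turn, warning on failure
# instead of aborting, so one bad download does not stop the rest.
set -u

MODELS="
dolphin-mistral:7b
dolphin-llama3:8b
codellama:13b
deepseek-r1:8b
nomic-embed-text
all-minilm
"

for model in $MODELS; do
  echo "Pulling $model ..."
  ollama pull "$model" || echo "WARN: failed to pull $model, continuing"
done
```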
cd n01d-docker
# Copy environment config
cp .env.example .env
# Build all containers
docker compose build
# Start everything (or pick what you need)
docker compose up -d
# Pull all AI models (takes a while on first run)
docker exec n01d-ollama bash /scripts/pull-models.sh
# Open your browser
# Open WebUI -> http://localhost:3080
# ComfyUI -> http://localhost:8188
# MusicGen -> http://localhost:7860
# Agent Zero -> http://localhost:3100

# AI only
docker compose up -d n01d-ollama n01d-webui n01d-agent-zero
# Creative only
docker compose up -d n01d-comfyui n01d-musicgen
# Pentest only
docker compose up -d n01d-pentest n01d-proxy n01d-vpn
# Shell into pentest container
docker exec -it n01d-pentest /bin/bash

Every service binds to 0.0.0.0, so it is accessible on your LAN.
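If you would rather not expose a service on the LAN, Docker Compose can publish a port on loopback only. A hedged fragment for docker-compose.yml (service name and port taken from the list above; the exact keys in this repo's file may differ):

```yaml
services:
  n01d-webui:
    ports:
      - "127.0.0.1:3080:3080"   # reachable from this machine only
```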
ipconfig | Select-String "IPv4"
# Look for something like 192.168.1.100 or 10.0.0.50

Replace HOST_IP with the IP you found above. From any computer, phone, or tablet on the same network:
- Open WebUI → http://HOST_IP:3080 – Chat with any AI model
- Agent Zero → http://HOST_IP:3100 – Autonomous AI agent
- ComfyUI → http://HOST_IP:8188 – Generate images, logos, art
- MusicGen → http://HOST_IP:7860 – Generate music from text
- Ollama API → http://HOST_IP:11434 – Raw LLM API endpoint
- mitmproxy → http://HOST_IP:8081 – Web traffic inspection UI
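From a client device you can confirm a port is actually reachable before blaming the service itself; a quick sketch using netcat (substitute your server's address for HOST_IP):

```shell
# Test TCP reachability of the Open WebUI port from another machine.
# -z: connect without sending data, -v: report the result, -w 3: 3 s timeout
nc -zv -w 3 HOST_IP 3080
```

If this fails but the container is up, the usual culprit is the host firewall (see the PowerShell rules below).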
# Run as Administrator β opens all N01D ports
$ports = @(3080, 3100, 7860, 8080, 8081, 8188, 11434, 51820)
foreach ($port in $ports) {
New-NetFirewallRule -DisplayName "N01D Docker - Port $port" `
-Direction Inbound -Protocol TCP -LocalPort $port `
-Action Allow -Profile Private
}
# UDP for WireGuard
New-NetFirewallRule -DisplayName "N01D Docker - WireGuard UDP" `
-Direction Inbound -Protocol UDP -LocalPort 51820 `
-Action Allow -Profile Private
Write-Host "All N01D ports opened in Windows Firewall"

Any app that supports Ollama (like other Open WebUI instances, Continue.dev, etc.) can point to your server:
Ollama URL: http://HOST_IP:11434
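To confirm the endpoint is reachable from a client, Ollama's REST API answers on /api/tags with the list of installed models, and /api/generate runs a one-off prompt with no UI at all (replace HOST_IP as above; the example assumes dolphin-mistral:7b has been pulled):

```shell
# List the models installed on the remote Ollama instance.
curl -s http://HOST_IP:11434/api/tags

# One-off generation straight against the API:
curl -s http://HOST_IP:11434/api/generate -d '{
  "model": "dolphin-mistral:7b",
  "prompt": "Summarize what sqlmap does in one sentence.",
  "stream": false
}'
```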
n01d-docker/
  docker-compose.yml   – All services defined here
  .env.example         – Configuration template
  scripts/
    pull-models.sh     – Downloads all Ollama models
  containers/
    pentest/           – Kali security tools
    dev/               – Multi-language dev env
    julia/             – Julia data science
    ctf/               – CTF challenge tools
    proxy/             – mitmproxy
    vpn/               – WireGuard
    agent-zero/        – Autonomous AI agent
    musicgen/          – AI music generation
  config/
    mitmproxy/
    wireguard/
  data/                – Persistent data
    ollama/            – Downloaded models
    open-webui/        – Chat history
    agent-zero/        – Agent workdir
    comfyui/           – SD models + outputs
    musicgen/          – Generated music
Copy .env.example to .env and customize ports, models, etc. Each container has its own Dockerfile in containers/. All persistent data lives in data/.
If you have an NVIDIA GPU with enough VRAM, uncomment the deploy sections in docker-compose.yml for:
- Ollama – runs LLMs on GPU (much faster inference)
- ComfyUI – runs Stable Diffusion on GPU (required for decent speed)
- MusicGen – runs AudioCraft on GPU (faster generation)
You will also need NVIDIA Container Toolkit installed: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html
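Before uncommenting the deploy sections, it is worth confirming Docker can actually see the GPU. One common check (the CUDA image tag here is illustrative; any recent nvidia/cuda base image works):

```shell
# Should print the same device table as running nvidia-smi on the host.
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi
```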
# Check what is running
docker compose ps
# View logs
docker compose logs -f n01d-ollama
docker compose logs -f n01d-webui
# List downloaded models
docker exec n01d-ollama ollama list
# Quick-test a model
docker exec -it n01d-ollama ollama run dolphin-mistral:7b
# Pull a single model
docker exec n01d-ollama ollama pull deepseek-r1:14b
# Stop everything
docker compose down
# Stop + remove volumes (clean slate)
docker compose down -v

GitHub: https://github.com/bad-antics
NullSec: https://github.com/bad-antics/nullsec
Made by bad-antics β https://github.com/bad-antics