N01D Docker

 ███╗   ██╗ ██████╗  ██╗██████╗     ██████╗  ██████╗  ██████╗██╗  ██╗███████╗██████╗ 
 ████╗  ██║██╔═══██╗███║██╔══██╗    ██╔══██╗██╔═══██╗██╔════╝██║ ██╔╝██╔════╝██╔══██╗
 ██╔██╗ ██║██║   ██║╚██║██║  ██║    ██║  ██║██║   ██║██║     █████╔╝ █████╗  ██████╔╝
 ██║╚██╗██║██║   ██║ ██║██║  ██║    ██║  ██║██║   ██║██║     ██╔═██╗ ██╔══╝  ██╔══██╗
 ██║ ╚████║╚██████╔╝ ██║██████╔╝    ██████╔╝╚██████╔╝╚██████╗██║  ██╗███████╗██║  ██║
 ╚═╝  ╚═══╝ ╚═════╝  ╚═╝╚═════╝     ╚═════╝  ╚═════╝  ╚═════╝╚═╝  ╚═╝╚══════╝╚═╝  ╚═╝
                    [ PENTEST & DEV CONTAINERS | bad-antics ]
                              v2.0 — AI · Music · Art

Pre-configured Docker containers for security research, AI/ML, image generation, music creation, and development.


Containers

AI / LLM Services

  • ollama (port 11434) — Local LLM engine, runs all text models
  • webui / Open WebUI (port 3080) — ChatGPT-style web interface for all models
  • agent-zero (port 3100) — Autonomous AI agent with tool use + code execution

Creative / Generative

  • comfyui (port 8188) — Image/logo/art generation (Stable Diffusion, SDXL, Flux)
  • musicgen (port 7860) — AI music generation (Meta AudioCraft)

Security / Pentest

  • pentest — Kali Linux with nmap, metasploit, burp, sqlmap
  • ctf — CTF tools with pwntools, gdb, radare2, ghidra
  • proxy (ports 8080/8081) — mitmproxy traffic interception
  • vpn (port 51820/udp) — WireGuard VPN gateway

Development

  • dev (ports 3000/8000) — Python, Node, Go, Rust dev environment
  • julia — Julia data science + security research

AI Models Included

After starting Ollama, run the model pull script to download everything:

docker exec n01d-ollama bash /scripts/pull-models.sh

Uncensored / Fully Unlocked

  • dolphin-mistral:7b (4.1 GB) — Eric Hartford's Dolphin, no alignment filters
  • dolphin-llama3:8b (4.7 GB) — Dolphin Llama 3, fully unlocked
  • dolphin-mixtral:8x7b (26 GB) — Most capable uncensored MoE model
  • wizard-vicuna-uncensored:13b (7.4 GB) — Classic uncensored model
  • llama2-uncensored:7b (3.8 GB) — No guardrails Llama 2
  • nous-hermes2:10.7b (6.1 GB) — Powerful uncensored reasoning

Pentesting / Security

  • codellama:13b (7.4 GB) — Exploit dev, shellcode, reverse engineering
  • deepseek-coder-v2:16b (8.9 GB) — Advanced code/vuln analysis
  • phind-codellama:34b (19 GB) — Complex exploit generation

Reasoning

  • deepseek-r1:8b (4.9 GB) — Chain-of-thought reasoning
  • deepseek-r1:14b (9.0 GB) — Larger variant for deeper reasoning
  • qwen2.5:7b (4.7 GB) — Strong multilingual reasoning
  • command-r:35b (20 GB) — Advanced RAG + reasoning

Code Generation

  • codellama:7b (3.8 GB) — Fast code generation
  • starcoder2:7b (4.0 GB) — Multi-language code
  • codegemma:7b (5.0 GB) — Google's code model
  • qwen2.5-coder:7b (4.7 GB) — Qwen code specialist

Utility

  • nomic-embed-text (274 MB) — Embeddings for Agent Zero + RAG
  • all-minilm (45 MB) — Fast semantic search
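The embedding models can be exercised directly through Ollama's REST API. A quick smoke test, assuming the stack is running on localhost and nomic-embed-text has already been pulled, looks like:

```shell
# Request an embedding from nomic-embed-text via Ollama's /api/embeddings route.
# Prints a fallback message if the Ollama container is not reachable.
curl -s --max-time 5 http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "hello world"}' \
  || echo "ollama is not reachable on localhost:11434"
```

A successful response is a JSON object with an "embedding" array of floats.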

Quick Start

cd n01d-docker

# Copy environment config
cp .env.example .env

# Build all containers
docker compose build

# Start everything (or pick what you need)
docker compose up -d

# Pull all AI models (takes a while on first run)
docker exec n01d-ollama bash /scripts/pull-models.sh

# Open your browser
#   Open WebUI  -> http://localhost:3080
#   ComfyUI     -> http://localhost:8188
#   MusicGen    -> http://localhost:7860
#   Agent Zero  -> http://localhost:3100
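After docker compose up -d, one quick sanity check is to hit the Ollama API directly; its /api/tags route returns the models currently installed (this assumes the default port mapping above):

```shell
# List installed models via the Ollama REST API; prints a fallback
# message if the container is not up yet.
curl -s --max-time 5 http://localhost:11434/api/tags \
  || echo "ollama is not reachable yet - check 'docker compose ps'"
```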

Start Individual Service Groups

# AI only
docker compose up -d n01d-ollama n01d-webui n01d-agent-zero

# Creative only
docker compose up -d n01d-comfyui n01d-musicgen

# Pentest only
docker compose up -d n01d-pentest n01d-proxy n01d-vpn

# Shell into pentest container
docker exec -it n01d-pentest /bin/bash

Accessing From Other Machines on Your Network

Every service binds to 0.0.0.0 so it is accessible on your LAN.

Step 1 β€” Find Your Host Machine IP

# Windows (PowerShell)
ipconfig | Select-String "IPv4"
# Look for something like 192.168.1.100 or 10.0.0.50
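On a Linux or macOS host, the equivalent lookup would be something like:

```shell
# Linux: print this host's IPv4 addresses (space-separated)
hostname -I 2>/dev/null \
  || ipconfig getifaddr en0   # macOS: IPv4 of the primary interface
```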

Step 2 β€” Access Services from Any Device

Replace HOST_IP with the IP you found above. From any computer, phone, or tablet on the same network:

  • Open WebUI — http://HOST_IP:3080 — Chat with any AI model
  • Agent Zero — http://HOST_IP:3100 — Autonomous AI agent
  • ComfyUI — http://HOST_IP:8188 — Generate images, logos, art
  • MusicGen — http://HOST_IP:7860 — Generate music from text
  • Ollama API — http://HOST_IP:11434 — Raw LLM API endpoint
  • mitmproxy — http://HOST_IP:8081 — Web traffic inspection UI
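The Ollama endpoint takes plain JSON over HTTP, so any device with curl can prompt a model directly. In the sketch below, HOST_IP is a placeholder for the address found in Step 1, and the model name is just one of the models listed earlier:

```shell
# HOST_IP is a placeholder - substitute your host's LAN IP from Step 1.
HOST_IP=${HOST_IP:-192.168.1.100}
PAYLOAD='{"model": "dolphin-mistral:7b", "prompt": "Say hello in one sentence.", "stream": false}'
# Fire the request (requires the n01d-ollama container to be running):
curl -s --max-time 5 "http://$HOST_IP:11434/api/generate" -d "$PAYLOAD" \
  || echo "could not reach $HOST_IP:11434"
```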

Step 3 β€” Windows Firewall Rules (if needed)

# Run as Administrator — opens all N01D ports
$ports = @(3080, 3100, 7860, 8080, 8081, 8188, 11434, 51820)
foreach ($port in $ports) {
    New-NetFirewallRule -DisplayName "N01D Docker - Port $port" `
        -Direction Inbound -Protocol TCP -LocalPort $port `
        -Action Allow -Profile Private
}
# UDP for WireGuard
New-NetFirewallRule -DisplayName "N01D Docker - WireGuard UDP" `
    -Direction Inbound -Protocol UDP -LocalPort 51820 `
    -Action Allow -Profile Private

Write-Host "All N01D ports opened in Windows Firewall"

Step 4 β€” Using Ollama from Remote Apps

Any app that supports Ollama (like other Open WebUI instances, Continue.dev, etc.) can point to your server:

Ollama URL: http://HOST_IP:11434
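Apps that speak the Ollama API typically use its chat route under the hood; the raw request shape, with HOST_IP again standing in for your server's address, is:

```shell
# HOST_IP is a placeholder LAN address - substitute your own.
HOST_IP=${HOST_IP:-192.168.1.100}
curl -s --max-time 5 "http://$HOST_IP:11434/api/chat" -d '{
  "model": "dolphin-mistral:7b",
  "messages": [{"role": "user", "content": "hello"}],
  "stream": false
}' || echo "could not reach $HOST_IP:11434"
```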

Structure

n01d-docker/
  docker-compose.yml          — All services defined here
  .env.example                — Configuration template
  scripts/
    pull-models.sh            — Downloads all Ollama models
  containers/
    pentest/                  — Kali security tools
    dev/                      — Multi-language dev env
    julia/                    — Julia data science
    ctf/                      — CTF challenge tools
    proxy/                    — mitmproxy
    vpn/                      — WireGuard
    agent-zero/               — Autonomous AI agent
    musicgen/                 — AI music generation
  config/
    mitmproxy/
    wireguard/
  data/                       — Persistent data
    ollama/                   — Downloaded models
    open-webui/               — Chat history
    agent-zero/               — Agent workdir
    comfyui/                  — SD models + outputs
    musicgen/                 — Generated music

Configuration

Copy .env.example to .env and customize ports, models, etc. Each container has its own Dockerfile in containers/. All persistent data lives in data/.
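A minimal .env might look like the sketch below; the variable names here are illustrative guesses, so check .env.example for the keys this stack actually reads:

```shell
# Illustrative only - the real keys live in .env.example
WEBUI_PORT=3080
OLLAMA_PORT=11434
COMFYUI_PORT=8188
MUSICGEN_PORT=7860
```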

GPU Support

If you have an NVIDIA GPU with enough VRAM, uncomment the deploy sections in docker-compose.yml for:

  • Ollama — runs LLMs on GPU (much faster inference)
  • ComfyUI — runs Stable Diffusion on GPU (required for decent speed)
  • MusicGen — runs AudioCraft on GPU (faster generation)

You will also need NVIDIA Container Toolkit installed: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html
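For reference, the standard Compose syntax for handing an NVIDIA GPU to a service looks like the fragment below. The commented-out sections in this repo's docker-compose.yml may differ in detail, so treat this as a sketch rather than a drop-in:

```yaml
services:
  ollama:
    # ...image, ports, and volumes as already defined...
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia   # requires NVIDIA Container Toolkit on the host
              count: all       # or a number, e.g. 1
              capabilities: [gpu]
```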


Useful Commands

# Check what is running
docker compose ps

# View logs
docker compose logs -f n01d-ollama
docker compose logs -f n01d-webui

# List downloaded models
docker exec n01d-ollama ollama list

# Quick-test a model
docker exec -it n01d-ollama ollama run dolphin-mistral:7b

# Pull a single model
docker exec n01d-ollama ollama pull deepseek-r1:14b

# Stop everything
docker compose down

# Stop + remove volumes (clean slate)
docker compose down -v

GitHub: https://github.com/bad-antics
NullSec: https://github.com/bad-antics/nullsec

Made by bad-antics — https://github.com/bad-antics
