Internal infrastructure for on-premises AI deployments.
Tunnel gets you in. Conduit connects everything inside. Automatic DNS, internal TLS, service routing, and hardware monitoring. Eight commands. Structured audit logging. Zero internet dependency.
You deploy AI services on-premises. Each service needs a hostname, a TLS certificate, and health monitoring. Without automation, you configure dnsmasq by hand, generate certificates manually, write Caddy routes one at a time, and SSH into servers to check GPU utilization. Scale to five services and the maintenance burden is already unsustainable. Scale to twenty and something will break silently.
QP Conduit eliminates this with one-command service registration: DNS, TLS, and routing in a single operation, with continuous health monitoring and a cryptographic audit trail.
```
     OUTSIDE                BOUNDARY                       INSIDE
┌────────────┐      ┌────────────────┐      ┌─────────────────────────────┐
│   Remote   │      │   QP Tunnel    │      │         QP Conduit          │
│   Users    │─────▶│  (WireGuard)   │─────▶│ DNS:     grafana.internal   │
└────────────┘      └────────────────┘      │ TLS:     auto-cert via CA   │
                        Firewall            │ Route:   reverse proxy      │
                                            │ Monitor: GPU/CPU/disk       │
                                            │ Health:  container checks   │
                                            └─────────────────────────────┘
```
One command, full stack. Register a service and Conduit creates the DNS entry, generates a TLS certificate, configures the reverse proxy route, and starts health checks, all in a single operation. Done.
Internal TLS everywhere. Caddy's built-in CA generates certificates automatically for every registered service. No manual cert management. No expiry surprises. No external certificate authority.
Automatic service discovery. Services register with human-readable names. grafana.internal resolves to the right container. hub.local routes to the Hub. No IP addresses to remember.
Hardware monitoring. GPU utilization, CPU load, memory pressure, disk usage, container health. Monitor local and remote servers on the LAN via SSH. One dashboard for your entire deployment.
Cryptographic audit trail. Every registration, deregistration, certificate rotation, and health state change logged as structured JSON. Optional Capsule Protocol integration seals each entry with SHA3-256 + Ed25519 for tamper evidence.
Air-gap compatible. Internal CA, local DNS, no external dependencies. Works in classified environments, air-gapped clinics, and disconnected field deployments.
Pairs with QP Tunnel. Tunnel handles the boundary (VPN access from outside). Conduit handles the interior (DNS, TLS, routing, monitoring). Together they form a complete networking layer for on-premises AI.
```bash
# 1. Initialize Conduit on your network
./conduit-setup.sh

# 2. Register your first service
./conduit-register.sh --name grafana --host 10.0.1.50:3000

# 3. Verify it works
./conduit-status.sh
# grafana.internal → 10.0.1.50:3000  [healthy]  TLS ✓  DNS ✓
```

After setup, grafana.internal resolves via DNS, serves over HTTPS with an auto-generated certificate, and reports health status continuously.
```bash
# Register more services
./conduit-register.sh --name hub --host 10.0.1.10:4200
./conduit-register.sh --name api --host 10.0.1.10:8000
./conduit-register.sh --name ollama --host 10.0.1.20:11434

# Check everything
./conduit-status.sh
# hub.local        → 10.0.1.10:4200   [healthy]  TLS ✓  DNS ✓
# api.local        → 10.0.1.10:8000   [healthy]  TLS ✓  DNS ✓
# ollama.internal  → 10.0.1.20:11434  [healthy]  TLS ✓  DNS ✓
# grafana.internal → 10.0.1.50:3000   [healthy]  TLS ✓  DNS ✓
```

| Command | Description |
|---|---|
| `conduit-setup.sh` | Initialize Conduit (install dnsmasq, configure Caddy, generate internal CA) |
| `conduit-register.sh --name <n> --host <ip:port>` | Register a service: DNS + TLS + routing in one step |
| `conduit-deregister.sh --name <n>` | Remove a service (DNS, route, and cert cleanup) |
| `conduit-status.sh` | Show all registered services with health, TLS, and DNS status |
| `conduit-monitor.sh` | Show server hardware stats (GPU, CPU, memory, disk) |
| `conduit-certs.sh` | List, rotate, or inspect TLS certificates |
| `conduit-dns.sh` | List or flush DNS entries |
| `conduit-logs.sh` | Aggregate and stream service logs |
```
┌──────────────────────────────────────────────────────────────────────┐
│                              QP Conduit                              │
│                                                                      │
│  ┌──────────┐  ┌───────────────┐  ┌───────────────────────────────┐  │
│  │ dnsmasq  │  │     Caddy     │  │        Monitor Daemon         │  │
│  │          │  │               │  │                               │  │
│  │ DNS      │  │ Internal CA   │  │ GPU (nvidia-smi)              │  │
│  │ resolver │  │ TLS certs     │  │ CPU / Memory / Disk           │  │
│  │          │  │ Reverse proxy │  │ Container health              │  │
│  │          │  │ Health checks │  │ Remote servers (SSH)          │  │
│  └────┬─────┘  └───────┬───────┘  └───────────────┬───────────────┘  │
│       │                │                          │                  │
│       └────────────────┴───────────┬──────────────┘                  │
│                                    │                                 │
│                            ┌───────┴──────┐                          │
│                            │   Registry   │  services.json           │
│                            │   + Audit    │  audit.log               │
│                            └──────────────┘  capsules.db (optional)  │
└──────────────────────────────────────────────────────────────────────┘
           │                        │                       │
      ┌────┴────┐              ┌────┴────┐            ┌─────┴─────┐
      │   Hub   │              │  Core   │            │  Ollama   │
      │  :4200  │              │  :8000  │            │  :11434   │
      └─────────┘              └─────────┘            └───────────┘
       hub.local                api.local             ollama.internal
```
dnsmasq resolves internal hostnames to service addresses. All DNS queries for registered services return the correct IP without any external lookup.
Caddy serves three roles: internal certificate authority, TLS termination, and reverse proxy. When a service registers, Caddy generates a certificate from its internal CA, configures a route, and starts health checking the upstream.
Monitor Daemon polls hardware metrics (GPU utilization via nvidia-smi, CPU/memory/disk via standard tools) and container health (via Docker socket). For remote servers on the LAN, it connects over SSH.
Registry is the single source of truth: a JSON file listing all registered services with their hostnames, upstreams, health status, and certificate metadata. The audit log records every mutation.
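Since jq is already a required dependency, the registry can be inspected directly. A minimal sketch, assuming a hypothetical schema for illustration (the actual `services.json` format may differ):

```shell
# Hypothetical registry contents -- this schema is an illustrative
# assumption, not the documented format.
cat > /tmp/services.json <<'EOF'
{
  "services": [
    {"name": "grafana", "hostname": "grafana.internal",
     "upstream": "10.0.1.50:3000", "health": "healthy", "tls": true}
  ]
}
EOF

# One line per service: hostname -> upstream
jq -r '.services[] | "\(.hostname) -> \(.upstream)"' /tmp/services.json
```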
Registration is atomic. One command creates the DNS entry, generates a TLS certificate, and configures the reverse proxy route:
```bash
./conduit-register.sh --name grafana --host 10.0.1.50:3000
```

What happens:

- Adds `grafana.internal → 10.0.1.50` to dnsmasq configuration
- Reloads dnsmasq to activate the DNS entry
- Adds a reverse proxy route in Caddy (`grafana.internal → 10.0.1.50:3000`)
- Caddy's internal CA auto-generates a TLS certificate for `grafana.internal`
- Registers a health check against the upstream
- Writes the service to `services.json`
- Creates a Capsule audit record

Deregistration reverses all steps cleanly:

```bash
./conduit-deregister.sh --name grafana
```

Every registered service gets HTTPS automatically. No manual certificate management.
```
┌───────────────────────────────────────────────────────────┐
│                     Caddy Internal CA                     │
│                                                           │
│  Root CA:      Ed25519 (generated at conduit-setup)       │
│  Per-service:  auto-generated, auto-renewed               │
│  Trust:        distribute root cert to clients once       │
│                                                           │
│  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐     │
│  │  hub.local   │  │  api.local   │  │   grafana    │     │
│  │   TLS cert   │  │   TLS cert   │  │  .internal   │     │
│  │    (auto)    │  │    (auto)    │  │   TLS cert   │     │
│  └──────────────┘  └──────────────┘  └──────────────┘     │
└───────────────────────────────────────────────────────────┘
```
Trust distribution: After setup, install the root CA certificate on client machines. Conduit outputs trust commands for macOS, Linux, and Windows. Install once, trust all services forever.
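As a sketch of what those per-platform trust commands look like: the root-certificate path below follows Caddy's default data directory and is an assumption (Conduit may relocate it under `CONDUIT_CONFIG_DIR`), and `print_trust_cmd` is an illustrative helper, not part of the CLI:

```shell
# Sketch: emit the trust-install command for a given platform.
print_trust_cmd() {
  # Assumed cert location (Caddy's default local CA path)
  local cert="$HOME/.local/share/caddy/pki/authorities/local/root.crt"
  case "$1" in
    Darwin)
      # macOS: add to the system keychain as a trusted root
      echo "sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain $cert"
      ;;
    Linux)
      # Debian/Ubuntu: drop into the CA directory and rebuild the store
      echo "sudo cp $cert /usr/local/share/ca-certificates/conduit-root.crt && sudo update-ca-certificates"
      ;;
    *)
      echo "unsupported platform: see Caddy's local CA documentation"
      ;;
  esac
}

print_trust_cmd "$(uname -s)"
```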
Certificate rotation: Caddy handles renewal automatically. For manual inspection or forced rotation:
```bash
./conduit-certs.sh                     # List all certificates with expiry dates
./conduit-certs.sh --rotate grafana    # Force certificate rotation
./conduit-certs.sh --inspect grafana   # Show full certificate details
```

```bash
./conduit-monitor.sh
```

```
SERVER: 10.0.1.20 (gpu-server)
  GPU 0: NVIDIA H200 | Util: 87% | Mem: 72.1/141.1 GB | Temp: 62°C
  GPU 1: NVIDIA H200 | Util: 43% | Mem: 31.4/141.1 GB | Temp: 58°C
  CPU: 24/48 cores | Load: 12.3
  Memory: 189.2 / 256.0 GB (74%)
  Disk: 1.2 / 3.8 TB (32%)

SERVER: 10.0.1.10 (app-server)
  CPU: 8/16 cores | Load: 2.1
  Memory: 12.4 / 32.0 GB (39%)
  Disk: 45.2 / 500.0 GB (9%)
```
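A minimal sketch of how raw GPU metrics could be turned into readout lines like the above. The sample CSV stands in for a live `nvidia-smi --query-gpu=... --format=csv,noheader,nounits` call; the awk formatting is illustrative, not the monitor daemon's actual code:

```shell
# Sample metrics in place of a live nvidia-smi query:
# index, utilization.gpu, memory.used, memory.total, temperature.gpu
sample="0, 87, 72100, 141100, 62
1, 43, 31400, 141100, 58"

# Format each row into one dashboard line (memory scaled to GB for display)
formatted=$(echo "$sample" | awk -F', ' '{
  printf "GPU %s: Util: %s%% | Mem: %.1f/%.1f GB | Temp: %s C\n",
         $1, $2, $3/1000, $4/1000, $5
}')
echo "$formatted"
```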
Conduit connects to the Docker socket for real-time container inspection:

```bash
./conduit-monitor.sh --containers
```

```
CONTAINER     STATUS    CPU     MEM       UPTIME
qp-hub        running   2.3%    384 MB    4d 12h
qp-core       running   8.7%    1.2 GB    4d 12h
qp-postgres   running   1.1%    256 MB    4d 12h
qp-redis      running   0.2%    48 MB     4d 12h
qp-ollama     running   45.2%   68.3 GB   4d 12h
qp-caddy      running   0.4%    32 MB     4d 12h
```

Monitor servers across your LAN via SSH. Configure targets in `.env.conduit`:

```bash
CONDUIT_REMOTE_SERVERS="10.0.1.20:gpu-server,10.0.1.30:inference-node"
```

Every operation writes a structured JSON entry to `audit.log`:
```json
{
  "timestamp": "2026-04-04T10:15:00Z",
  "action": "service_register",
  "status": "success",
  "message": "Registered grafana.internal → 10.0.1.50:3000",
  "user": "operator",
  "details": {"name": "grafana", "host": "10.0.1.50:3000", "tls": true, "dns": true}
}
```

Logged actions: `conduit_setup`, `service_register`, `service_deregister`, `cert_rotate`, `dns_flush`, `health_change`, `monitor_alert`, and all error traps.
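Because entries are line-delimited JSON, the log can be sliced with jq. A sketch, assuming one JSON object per line in the entry format shown above:

```shell
# Build a small sample log in the documented entry format
cat > /tmp/audit.log <<'EOF'
{"timestamp":"2026-04-04T10:15:00Z","action":"service_register","status":"success"}
{"timestamp":"2026-04-04T11:02:00Z","action":"cert_rotate","status":"success"}
{"timestamp":"2026-04-04T11:30:00Z","action":"health_change","status":"error"}
EOF

# Failed operations only
jq -c 'select(.status != "success")' /tmp/audit.log

# Event counts per action
jq -s 'group_by(.action) | map({(.[0].action): length}) | add' /tmp/audit.log
```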
When qp-capsule is installed, audit events are sealed as tamper-evident Capsules using SHA3-256 + Ed25519 signatures. This provides cryptographic proof that records have not been modified after creation.
```bash
pip install qp-capsule                # Or: auto-installs on first use
qp-capsule verify --db capsules.db    # Verify chain integrity
```

The JSON audit log is the fast local index. Capsules are the cryptographic source of truth. Golden test vectors for the audit format are in `conformance/`.
Conduit includes a browser-based admin UI for managing your entire on-premises infrastructure visually.
```bash
make dev   # Start in Docker (http://localhost:9999)
make ui    # UI dev mode with hot reload (http://localhost:5173)
```

```
┌────────────────────────────────────────────────────────────────────────┐
│ QP Conduit            DNS ✓   Caddy ✓   4/4 up   3 certs valid         │
├──────────┬─────────────────────────────────────────────────────────────┤
│ Overview │                                                             │
│ ──────── │  ┌───────────────┐  ┌───────────────┐  ┌───────────────┐    │
│ Dashbd   │  │ Hub         ✓ │  │ Core API    ✓ │  │ Grafana     ✓ │    │
│ Svc      │  │ hub.local     │  │ api.local     │  │ grafana       │    │
│ DNS      │  │ :4200   TLS ✓ │  │ :8000   TLS ✓ │  │ .internal     │    │
│ TLS      │  │ 12ms  healthy │  │ 8ms   healthy │  │ 15ms  healthy │    │
│ ──────── │  └───────────────┘  └───────────────┘  └───────────────┘    │
│ Server   │                                                             │
│ Route    │  GPU Server (10.0.1.20)                                     │
│          │  GPU 0: H200  87%  ████████░░  72/141 GB  62°C              │
│          │  GPU 1: H200  43%  ████░░░░░░  31/141 GB  58°C              │
│          │  CPU: 24/48    Mem: 189/256 GB    Disk: 1.2/3.8 TB          │
└──────────┴─────────────────────────────────────────────────────────────┘
```
URL routing. Each view has a dedicated URL (/, /services, /dns, /tls, /servers, /routing). Deep links, bookmarks, and browser back/forward all work.
Blank slate. First-time users see an interactive topology visualization with animated data packets, capability cards, and step-by-step getting started guidance. It disappears automatically when you register your first service.
Six views. Dashboard (health overview), Services (register/manage), DNS (entries + resolver), TLS (certificates + CA), Servers (GPU/CPU/memory), Routing (proxy routes). Each view has a rich empty state with feature descriptions and CLI commands.
Tech. React 19, TypeScript, Vite 6, TailwindCSS 4 (OKLCH perceptual color system), Zustand, TanStack Query. Node 24 + Python 3.14 in Docker. 225 tests, 97% coverage.
Keyboard-first. 1-6 switches views, / focuses search, Esc dismisses panels.
See docs/admin-ui.md for the full dashboard reference.
| Layer | Mechanism |
|---|---|
| TLS | Internal CA (Ed25519) with auto-generated per-service certificates |
| DNS | Local dnsmasq, no external queries, no DNS-over-HTTPS dependency |
| Routing | Caddy reverse proxy with upstream health checks |
| File protection | umask 077 on all keys and CA material (owner-only, mode 600) |
| Input validation | Strict [a-zA-Z0-9_-] regex on service names (prevents injection) |
| No eval | Zero use of eval in the entire codebase |
| Audit trail | Every operation logged with timestamp, user, and result |
| Tamper evidence | Optional Capsule Protocol sealing (SHA3-256 + Ed25519) |
| Isolation | Services are independently routed; one failure does not cascade |
| Certificate rotation | Automatic renewal; manual rotation available on demand |
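The input-validation row above can be sketched in bash. The function name is illustrative, not the library's actual API:

```shell
# Accept only [a-zA-Z0-9_-]; anything else could leak shell or config
# syntax into generated dnsmasq/Caddy files.
validate_service_name() {
  [[ "$1" =~ ^[a-zA-Z0-9_-]+$ ]]
}

validate_service_name grafana && echo "grafana: ok"
validate_service_name 'x;rm -rf /' || echo "rejected"
```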
Conduit's internal TLS, DNS isolation, and audit logging contribute to controls across five regulatory frameworks. Each mapping documents which controls Conduit satisfies and which require complementary application-level controls.
| Framework | Controls | Focus |
|---|---|---|
| HIPAA | 164.312(e)(1), 164.312(a)(1) | Transmission security, access control, audit |
| CMMC 2.0 | SC.L2-3.13.8, AU.L2-3.3.x | Network architecture, encrypted sessions, logging |
| FedRAMP | SC-8, SC-12, AU-2/3 | Transmission confidentiality, key management |
| SOC 2 | CC6.1, CC6.6, CC7.x | Logical access, network security, monitoring |
| ISO 27001 | A.8.20, A.8.21, A.8.24 | Network security, web filtering, cryptography |
Copy .env.conduit.example to .env.conduit and customize:
| Variable | Default | Description |
|---|---|---|
| `CONDUIT_APP_NAME` | `qp-conduit` | Config directory, log tags |
| `CONDUIT_DOMAIN` | `internal` | Default domain suffix for services |
| `CONDUIT_DNS_PORT` | `53` | dnsmasq listen port |
| `CONDUIT_DNS_UPSTREAM` | `127.0.0.1` | Upstream DNS for non-internal queries |
| `CONDUIT_CADDY_ADMIN` | `localhost:2019` | Caddy admin API address |
| `CONDUIT_CADDY_HTTPS_PORT` | `443` | HTTPS listen port |
| `CONDUIT_HEALTH_INTERVAL` | `30` | Health check interval in seconds |
| `CONDUIT_REMOTE_SERVERS` | (none) | Comma-separated ip:label pairs for remote monitoring |
| `CONDUIT_CONFIG_DIR` | `~/.config/qp-conduit` | State directory (registry, certs, audit) |
| `CONDUIT_DOCKER_SOCKET` | `/var/run/docker.sock` | Docker socket path for container monitoring |
All values are overridable via environment variables or .env.conduit.
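A hypothetical `.env.conduit` for a two-server deployment; the variable names come from the table above, the values are illustrative:

```shell
# .env.conduit (example values only)
CONDUIT_DOMAIN=internal
CONDUIT_DNS_UPSTREAM=127.0.0.1
CONDUIT_HEALTH_INTERVAL=15
CONDUIT_REMOTE_SERVERS="10.0.1.20:gpu-server,10.0.1.30:inference-node"
CONDUIT_CONFIG_DIR="$HOME/.config/qp-conduit"
```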
Required:
| Dependency | Purpose |
|---|---|
| `bash` 4.0+ | Shell runtime |
| `jq` | JSON processing for service registry |
| `caddy` 2.10+ | Internal CA, TLS termination, reverse proxy |
| `dnsmasq` | Local DNS resolution for internal hostnames |
Optional:
| Dependency | Purpose |
|---|---|
| `docker` | Container inspection and health monitoring |
| `nvidia-smi` | GPU utilization monitoring |
| `ssh` | Remote server monitoring across LAN |
| `qp-capsule` | Tamper-evident audit sealing (auto-installs via pip) |
| Document | Audience | Description |
|---|---|---|
| Why Conduit | Decision-Makers | The case for on-premises infrastructure mesh |
| Guide | Operators | End-to-end walkthrough |
| Architecture | Developers, Auditors | Component model and data flow |
| Admin UI | Developers | Dashboard: routing, blank slate, design system, testing |
| API Reference | Developers | REST endpoints served by server.py |
| Commands | Operators | Reference for all 8 CLI scripts |
| Security Evaluation | CISOs | Threat model and cryptographic guarantees |
| Network Guide | Network Engineers | DNS, TLS trust, air-gap configuration |
| Development | Contributors | Prerequisites, testing, code style |
| Deployment | DevOps | Docker, air-gap, multi-server |
| Compliance | Regulators, GRC | HIPAA, CMMC, FedRAMP, SOC 2, ISO 27001 |
| Guide | Use Case |
|---|---|
| Home Lab with GPU | Multi-GPU server with Ollama and Grafana |
| Healthcare Clinic | Air-gapped clinic with EHR and AI diagnostics |
| Defense Installation | Classified environment, no internet, full audit |
```
.
├── conduit-*.sh             # 8 commands (setup, register, deregister, status, monitor, certs, dns, logs)
├── conduit-preflight.sh     # Pre-flight setup (sourced by all scripts)
├── lib/
│   ├── common.sh            # Logging, validation, config defaults
│   ├── registry.sh          # Service registry CRUD (JSON/jq)
│   ├── audit.sh             # Structured audit logging + Capsule sealing
│   ├── dns.sh               # dnsmasq configuration and management
│   ├── tls.sh               # Caddy CA and certificate operations
│   └── routing.sh           # Reverse proxy route management
├── ui/                      # Admin dashboard (React 19 + TypeScript + Vite 6)
│   ├── vitest.config.ts     # Test configuration (happy-dom, 225 tests)
│   └── src/
│       ├── components/views/    # 6 views + blank slate + per-view empty states
│       ├── components/layout/   # AppShell, Sidebar, StatusBar
│       ├── components/shared/   # HealthDot, CopyButton, SlideOver, Toast, ViewBlankSlate
│       ├── api/                 # Typed API client modules
│       ├── stores/              # Zustand state (URL-synced routing)
│       └── lib/                 # Types, formatters, OKLCH theme
├── templates/
│   └── Caddyfile.service.tpl    # Per-service Caddy configuration template
├── conformance/             # Audit log golden test vectors
├── completions/             # Bash and Zsh tab-completion scripts
├── tests/                   # Unit, integration, and smoke tests (bats-core)
├── docs/                    # Architecture, security, compliance, guides
├── examples/                # Deployment walkthroughs
├── .env.conduit.example     # Configuration template
├── Makefile                 # All operations as Make targets
└── VERSION                  # 0.2.0
```
QP Conduit is the internal infrastructure layer. It works alongside:
| Component | Role | Repository |
|---|---|---|
| QP Conduit | DNS, TLS, routing, monitoring (you are here) | quantumpipes/conduit |
| QP Tunnel | WireGuard VPN boundary layer | quantumpipes/tunnel |
| QP Capsule | Cryptographic audit trail (SHA3-256 + Ed25519) | quantumpipes/capsule |
| qp-vault | Governed knowledge store with content addressing | quantumpipes/vault |
Tunnel handles the perimeter. Conduit handles the interior. Capsule provides tamper evidence. Vault stores knowledge.
See CONTRIBUTING.md. Issues and pull requests welcome.
Apache License 2.0 with additional patent grant. You can use all patented innovations freely for any purpose, including commercial use.
Internal DNS. Automatic TLS. Service routing. Hardware monitoring. Full audit trail.
Documentation Β· Examples Β· Conformance Β· Security Policy Β· Patent Grant
Copyright 2026 Quantum Pipes Technologies, LLC