A minimal TCP stream proxy that tunnels arbitrary TCP connections over NATS. It consists of:
- A server that accepts proxy requests via NATS and connects to the target TCP service.
- A client that listens on a local TCP port and forwards each incoming connection through NATS to the server, which then talks to the target.
This is useful when direct TCP connectivity to a service is not possible, but NATS connectivity is available.
Experimental: suitable for demos and experiments. Security, authentication, and advanced features are out of scope for now.
- Control plane:
  - The client requests a new proxy connection by sending a NATS request to subject `proxy.request` with metadata `remote_host` and `remote_port`.
  - The server replies with a generated connection ID.
- Data plane:
  - Client -> Server bytes are published to `p.data.to_server.{connectionID}`.
  - Server -> Client bytes are published to `p.data.to_client.{connectionID}`.
- The server maintains a TCP connection to the remote service and relays bytes between it and the client over NATS.
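For illustration, a client-side exchange over this protocol might look like the sketch below. The subject names come from the list above; the JSON request fields, the shape of the reply, and the NATS URL are assumptions, and real data payloads are wrapped in the `transport.Message` envelope described next.

```go
// Illustrative only: opening a proxied connection from the client side.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect("nats://127.0.0.1:4222")
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Control plane: ask the server to dial the target and return a connection ID.
	req, _ := json.Marshal(map[string]string{
		"remote_host": "127.0.0.1", // assumed JSON keys for the documented metadata
		"remote_port": "6379",
	})
	resp, err := nc.Request("proxy.request", req, 2*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	connID := string(resp.Data) // assumption: the reply body carries the generated connection ID

	// Data plane: per-connection subjects in each direction.
	toServer := fmt.Sprintf("p.data.to_server.%s", connID)
	toClient := fmt.Sprintf("p.data.to_client.%s", connID)

	// In the real client, bytes read from the local TCP socket are forwarded on
	// toServer, and a subscription on toClient writes server bytes back to the socket.
	if _, err := nc.Subscribe(toClient, func(m *nats.Msg) {
		log.Printf("got %d bytes back from the server", len(m.Data))
	}); err != nil {
		log.Fatal(err)
	}
	log.Printf("forwarding local bytes to %s", toServer)
	_ = nc.Flush()
}
```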
Internally, messages are serialized as JSON using a simple `transport.Message` structure.
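The exact fields of `transport.Message` are not documented here; a plausible envelope, with field names that are purely illustrative, could look like:

```go
// Package transport (sketch): a minimal JSON envelope for proxied bytes.
// Field names are illustrative assumptions, not the project's actual schema.
package transport

// Message ties a chunk of proxied bytes to one proxied TCP connection.
type Message struct {
	ConnectionID string `json:"connection_id"` // which proxied connection this chunk belongs to
	Data         []byte `json:"data"`          // raw TCP payload; encoding/json base64-encodes []byte
}
```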
- `cmd/server`: NATS proxy server
- `cmd/client`: NATS proxy client
- `internal/...`: domain, use cases, NATS transport, and in-memory repository
- `integration_tests`: docker-compose setup and a Redis-based end-to-end test
- Go 1.24+
- A running NATS server (e.g., `nats:2.9-alpine`)
- Docker and Docker Compose (for integration tests or containerized runs)
Build both binaries locally:

```bash
go build -o bin/nats-proxy-server ./cmd/server
go build -o bin/nats-proxy-client ./cmd/client
```

Or run directly:

```bash
go run ./cmd/server --help
go run ./cmd/client --help
```

- Start NATS:

  ```bash
  docker run --rm -p 4222:4222 --name nats nats:2.9-alpine
  ```

- Start Redis locally for the demo:

  ```bash
  docker run --rm -p 6379:6379 --name redis redis:7-alpine
  ```

- Start the proxy server (in another terminal):

  ```bash
  bin/nats-proxy-server --nats-url nats://127.0.0.1:4222 --log-level info
  ```

- Start the proxy client to expose a local port that forwards to Redis via NATS:

  ```bash
  bin/nats-proxy-client \
    --nats-url nats://127.0.0.1:4222 \
    --listen-addr 127.0.0.1:6380 \
    --remote-addr 127.0.0.1:6379 \
    --proxy-addr localhost:8081  # currently informational/reserved
  ```

- Test with redis-cli:

  ```bash
  redis-cli -h 127.0.0.1 -p 6380 PING
  redis-cli -h 127.0.0.1 -p 6380 SET key value
  redis-cli -h 127.0.0.1 -p 6380 GET key
  ```

If everything is wired up, you should see PONG and value replies proxied over NATS.
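Because the tunnel is transparent TCP, anything that can dial the listen address works. As a quick sanity check in Go (assuming the quick-start addresses above):

```go
// Quick sanity check against the proxied Redis port from the quick start.
package main

import (
	"bufio"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "127.0.0.1:6380", 2*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Send a raw RESP inline PING and read the single-line reply.
	if _, err := fmt.Fprint(conn, "PING\r\n"); err != nil {
		log.Fatal(err)
	}
	reply, err := bufio.NewReader(conn).ReadString('\n')
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(reply) // expected: +PONG
}
```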
Both binaries use Cobra + Viper; you can configure them via flags, environment variables, or config files. A minimal Viper wiring sketch follows the configuration reference below.
Server:

- Flags:
  - `--nats-url` (default `nats://localhost:4222`)
  - `--listen-addr` (default `:8080`; currently not used by the server)
  - `--log-level` (`debug`|`info`|`warn`|`error`; default `info`)
  - `--config` (path to a config file; default search: `./` then `$HOME`, file name `.nats-proxy-server.*`)
- Environment:
  - `NATS_URL` maps to `nats.url`
  - `LOG_LEVEL` maps to `log.level`
- Example YAML (e.g., `.nats-proxy-server.yaml`):

  ```yaml
  nats:
    url: nats://localhost:4222
  log:
    level: info
  ```

Client:

- Flags:
  - `--nats-url` (default `nats://localhost:4222`)
  - `--listen-addr` (default `0.0.0.0:8082`): local TCP listen address
  - `--remote-addr` (default `redis:6379`): target service address the server will dial
  - `--proxy-addr` (default `proxy-server:8081`): currently informational/reserved in the NATS transport
  - `--log-level` (`debug`|`info`|`warn`|`error`; default `info`)
  - `--config` (path to a config file; default search: `./` then `$HOME`, file name `.nats-proxy-client.*`)
- Environment:
  - `NATS_URL` → `nats.url`
  - `LISTEN_ADDR` → `client.listen_addr`
  - `REMOTE_ADDR` → `client.remote_addr`
  - `PROXY_ADDR` → `client.proxy_addr`
  - `LOG_LEVEL` → `log.level`
- Example YAML (e.g., `.nats-proxy-client.yaml`):

  ```yaml
  nats:
    url: nats://localhost:4222
  client:
    listen_addr: 0.0.0.0:8082
    remote_addr: redis:6379
    proxy_addr: proxy-server:8081
  log:
    level: info
  ```
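For reference, the flag/env/file layering described above is the usual Cobra + Viper pattern. The sketch below shows one way the server side could be wired; the key names follow the mappings above, everything else is illustrative rather than the project's actual code.

```go
// Sketch of Cobra + Viper wiring for the server configuration; illustrative only.
package main

import (
	"fmt"
	"log"

	"github.com/spf13/cobra"
	"github.com/spf13/viper"
)

func main() {
	cmd := &cobra.Command{
		Use: "nats-proxy-server",
		RunE: func(cmd *cobra.Command, args []string) error {
			fmt.Println("connecting to", viper.GetString("nats.url"))
			return nil
		},
	}

	cmd.Flags().String("nats-url", "nats://localhost:4222", "NATS server URL")
	cmd.Flags().String("log-level", "info", "debug|info|warn|error")

	// Flags bind to the dotted config keys used in the YAML examples.
	_ = viper.BindPFlag("nats.url", cmd.Flags().Lookup("nats-url"))
	_ = viper.BindPFlag("log.level", cmd.Flags().Lookup("log-level"))

	// Env vars map to the same keys (NATS_URL -> nats.url, LOG_LEVEL -> log.level).
	_ = viper.BindEnv("nats.url", "NATS_URL")
	_ = viper.BindEnv("log.level", "LOG_LEVEL")

	// Optional config file: .nats-proxy-server.yaml in ./ or $HOME.
	viper.SetConfigName(".nats-proxy-server")
	viper.AddConfigPath(".")
	viper.AddConfigPath("$HOME")
	_ = viper.ReadInConfig() // ignore "file not found" in this sketch

	if err := cmd.Execute(); err != nil {
		log.Fatal(err)
	}
}
```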
- Server-only image (root Dockerfile):

  ```bash
  docker build -t tcp-sproxy-server -f Dockerfile .
  # Then run with a NATS_URL env variable
  # NOTE: the Dockerfile exposes 8080 and defines a healthcheck path; the server does not expose an HTTP endpoint.
  docker run --rm --network host -e NATS_URL=nats://127.0.0.1:4222 tcp-sproxy-server
  ```

- Dev/integration image (contains both server and client):

  ```bash
  docker build -t tcp-sproxy-bundle -f integration_tests/build/Dockerfile .
  ```

- Unit tests:

  ```bash
  go test ./...
  ```

- Integration test (Redis through the proxy using Docker Compose):

  ```bash
  # Using the Makefile
  make test-integration

  # Or manually
  docker-compose -f integration_tests/docker-compose.yml up -d --build
  # the test-runner service will execute the integration tests
  # when finished
  docker-compose -f integration_tests/docker-compose.yml down
  ```

Note: integration tests require the following ports to be available on the host:

- `4222` - NATS messaging
- `6379` - Redis database
- `8080`, `8081` - Proxy server
- `8082` - Proxy client

If these ports are already in use, you'll need to either stop the conflicting services or modify the port mappings in `integration_tests/docker-compose.yml`.
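The Redis end-to-end check in `integration_tests` is not reproduced here; a simplified sketch of that kind of test, assuming the go-redis client and the proxy client's `8082` listen port from the compose setup, might look like:

```go
// Sketch of a Redis round-trip through the proxy; the real integration test may differ.
package integration_test

import (
	"context"
	"testing"
	"time"

	"github.com/redis/go-redis/v9"
)

func TestRedisThroughProxy(t *testing.T) {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Point the Redis client at the proxy client's listen address, not at Redis itself.
	rdb := redis.NewClient(&redis.Options{Addr: "127.0.0.1:8082"})
	defer rdb.Close()

	if err := rdb.Set(ctx, "key", "value", 0).Err(); err != nil {
		t.Fatalf("SET through proxy failed: %v", err)
	}
	got, err := rdb.Get(ctx, "key").Result()
	if err != nil || got != "value" {
		t.Fatalf("GET through proxy: got %q, err %v", got, err)
	}
}
```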
This project uses GitHub Actions for automated testing:
- Workflow: `.github/workflows/test.yml`
  - Triggers: pull requests and pushes to `main`/`master` branches
  - Features:
    - Runs on Go 1.24
    - Executes `go test` with race detection
    - Generates code coverage reports
    - Uploads coverage to Codecov
    - Caches Go modules for faster builds
- Workflow: `.github/workflows/integration-test.yml`
  - Triggers: manual dispatch, a daily schedule (2 AM UTC), and pushes to `main`/`master`
  - Features:
    - Runs Docker Compose-based integration tests via `make test-integration`
    - Automatically spins up Redis, NATS, proxy server, and proxy client containers
    - Executes end-to-end Redis proxy tests
    - No manual service setup required; all dependencies are managed by Docker Compose
Both workflows ensure code quality and functionality across different scenarios.
Both components use logrus. Levels: `debug`, `info`, `warn`, `error` (set via `--log-level` or `LOG_LEVEL`).
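A typical way to honor that flag with logrus (a sketch, not the project's exact code):

```go
// Sketch: mapping the --log-level / LOG_LEVEL value onto logrus.
package main

import (
	log "github.com/sirupsen/logrus"
)

func main() {
	configureLogging("debug")
	log.Debug("debug logging enabled")
}

// configureLogging parses the textual level and falls back to info on an unknown value.
func configureLogging(level string) {
	lvl, err := log.ParseLevel(level) // accepts "debug", "info", "warn", "error", ...
	if err != nil {
		lvl = log.InfoLevel
	}
	log.SetLevel(lvl)
}
```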
- No authentication, rate limiting, or encryption is provided by this project. Use NATS security features and network controls as appropriate.
- The `--proxy-addr` flag on the client is currently reserved/informational in the NATS transport implementation.
- The server does not currently expose an HTTP endpoint (despite the Dockerfile healthcheck example).
This project is licensed under the MIT License - see the LICENSE file for details.