A wrapper for Docker Compose that adds support for:
- Rolling updates
- Environment-specific configurations
- Template-based configuration
- Values management
```bash
go install github.com/your-server-support/docker-compose-wrapper/cmd/compose-wrapper@latest
```

The wrapper provides a command-line interface similar to Docker Compose, with additional features:
```bash
# Start services
dcw up

# Start services with a specific environment
dcw up -e prod

# Start a specific service
dcw up web

# Start a specific service with an environment
dcw up -e prod web

# Rolling update for a service
dcw rolling-update web

# Rolling update with a specific environment
dcw rolling-update -e prod web

# Rolling update with custom configuration
dcw rolling-update --replicas 3 --retry-count 10 --retry-interval 30 web
```

Create environment-specific configuration files in the environments directory:
```yaml
# environments/prod.yaml
global:
  projectName: myapp
  environment: production
services:
  web:
    image: myapp/web:latest
    replicas: 3
    rollingUpdate:
      replicas: 3
      retryCount: 10
      retryInterval: 30
```

Use Go templates in your configuration files:
```yaml
# docker-compose.yaml
services:
  web:
    image: {{ .Values.services.web.image }}
    ports:
      - "{{ .Values.services.web.port }}:80"
```

Values can be defined in multiple places with the following precedence (highest to lowest):
- Command-line arguments
- Environment-specific configuration
- Default values
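The precedence above can be sketched as a simple overlay merge: sources are applied from lowest to highest priority, so later sources win. This is a minimal illustration; `mergeValues` is a hypothetical helper, not the wrapper's actual code.

```go
package main

import "fmt"

// mergeValues overlays each source in turn; later sources win.
// Call with sources ordered lowest to highest precedence.
func mergeValues(sources ...map[string]any) map[string]any {
	out := map[string]any{}
	for _, src := range sources {
		for k, v := range src {
			out[k] = v
		}
	}
	return out
}

func main() {
	defaults := map[string]any{"replicas": 1, "image": "myapp/web:dev"}
	envConfig := map[string]any{"replicas": 3, "image": "myapp/web:latest"}
	cliArgs := map[string]any{"replicas": 5}

	merged := mergeValues(defaults, envConfig, cliArgs)
	fmt.Println(merged["replicas"], merged["image"]) // 5 myapp/web:latest
}
```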
Example values file:
```yaml
# values.yaml
global:
  projectName: myapp
  environment: development
services:
  web:
    image: myapp/web:dev
    port: 8080
    replicas: 1
    rollingUpdate:
      replicas: 2
      retryCount: 5
      retryInterval: 10
```

The rolling update feature ensures zero-downtime deployments by:
- Scaling up the service to double the desired replicas
- Waiting for new containers to start
- Removing old containers
- Scaling back down to the desired number of replicas
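The steps above could be sketched as the sequence of commands the wrapper issues. This is illustrative only: `rollingUpdatePlan` and the placeholder for old container IDs are assumptions, not the wrapper's real implementation.

```go
package main

import "fmt"

// rollingUpdatePlan sketches the docker compose commands issued for a
// zero-downtime update of one service (an assumption about the flow,
// not the wrapper's actual code).
func rollingUpdatePlan(service string, replicas int) []string {
	return []string{
		// 1. Scale up to double the desired replicas without recreating existing containers.
		fmt.Sprintf("docker compose up -d --scale %s=%d --no-recreate %s", service, replicas*2, service),
		// 2. (wait for the new containers to report as running)
		// 3. Remove the old containers; the wrapper identifies them by exact name.
		fmt.Sprintf("docker rm -f <old-%s-containers>", service),
		// 4. Scale back down to the desired replica count.
		fmt.Sprintf("docker compose up -d --scale %s=%d --no-recreate %s", service, replicas, service),
	}
}

func main() {
	for _, cmd := range rollingUpdatePlan("web", 3) {
		fmt.Println(cmd)
	}
}
```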
Configuration options:
- `replicas`: Number of replicas to maintain
- `retryCount`: Number of attempts to wait for new containers
- `retryInterval`: Time between retry attempts in seconds
```bash
go build -o dcw cmd/compose-wrapper/main.go
go test ./...
```

MIT
- Template-based Docker Compose configuration using Go templates
- Versioned releases: Each configuration generation is saved as a new version in `dist/`
- Automatic config hashing: Output directory includes a hash of the config for traceability
- Configurable release retention: Control how many releases to keep (default: 20)
- Rollback: Instantly roll back to any previous release, or the previous one by default
- Releases listing: See all available releases and their timestamps
- Lint: Validate all generated Docker Compose files using `docker compose config`
- Values file management with override and priority support
- Dependency management between services (charts)
- Automated Docker network management
- Transparent Docker Compose command passing
- Configuration validation (lint)
- Pre and post hooks for running commands or containers
- Rolling updates with zero-downtime deployment support
- Service-specific configuration for replicas and update strategies
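The versioned releases and config hashing listed above might be derived roughly like this. It is a sketch: `releaseDir`, the SHA-256 algorithm, and the 8-character hash length are assumptions, not confirmed details of the wrapper.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// releaseDir sketches how a release directory name such as "v1-a3f9c2d8"
// could be built: a version counter plus a short hash of the merged config.
func releaseDir(version int, mergedConfig []byte) string {
	sum := sha256.Sum256(mergedConfig)
	return fmt.Sprintf("v%d-%s", version, hex.EncodeToString(sum[:])[:8])
}

func main() {
	// The same config always yields the same hash, so unchanged
	// configs can be detected and the latest release reused.
	fmt.Println(releaseDir(1, []byte("services:\n  web:\n    image: myapp/web:latest\n")))
}
```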
/chart-example
|-- Chart.yaml # Main chart description and dependencies
|-- values.yaml # Main values file
|-- /templates # Main chart templates (Go templates)
| |-- docker-compose.yml.tmpl
|-- /charts # Child charts directory
| |-- /database
| | |-- Chart.yaml
| | |-- values.yaml
| | |-- templates/
| | | |-- docker-compose.yml.tmpl
| |-- /cache
| |-- Chart.yaml
| |-- values.yaml
| |-- templates/
| |-- docker-compose.yml.tmpl
|-- /dist # All generated releases
| |-- v1-<hash>/
| | |-- values.yaml # The merged config for this release
| | |-- docker/
| | | |-- docker-compose.yml
| | | |-- database/
| | | | |-- docker-compose.yml
| | | |-- cache/
| | | | |-- docker-compose.yml
| |-- v2-<hash>/
| | |-- ...
All template files follow the naming convention:
<filename>.<extension>.tmpl
For example:
- `docker-compose.yml.tmpl`
- `config.json.tmpl`
- `nginx.conf.tmpl`
This convention makes it clear which files are templates and what their final output format will be.
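Deriving the output filename from a template name is then a simple suffix strip. `outputName` is a hypothetical helper for illustration, not the wrapper's actual function.

```go
package main

import (
	"fmt"
	"strings"
)

// outputName derives a template's rendered filename by stripping the
// trailing ".tmpl" suffix, per the <filename>.<extension>.tmpl convention.
func outputName(templateName string) string {
	return strings.TrimSuffix(templateName, ".tmpl")
}

func main() {
	fmt.Println(outputName("docker-compose.yml.tmpl")) // docker-compose.yml
	fmt.Println(outputName("nginx.conf.tmpl"))         // nginx.conf
}
```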
- `--set` and `--set-file` (highest priority)
- Additional values files (`-f`)
- Main chart values (`values.yaml`)
- Child chart values (lowest, used as base)

Note: The flags `--set`, `--set-file`, `--set-string`, `-f`, and `--values` are only interpreted by the wrapper for value merging and are not passed to Docker Compose itself.
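For illustration, folding a single `--set` expression into a nested values map could look like this. It is a sketch: the wrapper's real parser presumably also handles types, lists, and escaping.

```go
package main

import (
	"fmt"
	"strings"
)

// applySet folds an expression like "services.web.replicas=3" into a
// nested values map, creating intermediate maps as needed.
func applySet(values map[string]any, expr string) {
	key, val, found := strings.Cut(expr, "=")
	if !found {
		return
	}
	parts := strings.Split(key, ".")
	node := values
	for _, p := range parts[:len(parts)-1] {
		child, ok := node[p].(map[string]any)
		if !ok {
			child = map[string]any{}
			node[p] = child
		}
		node = child
	}
	node[parts[len(parts)-1]] = val
}

func main() {
	values := map[string]any{}
	applySet(values, "services.web.replicas=3")
	fmt.Println(values) // map[services:map[web:map[replicas:3]]]
}
```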
Generates a new release if the config changes, or reuses the latest if not. Runs Docker Compose with the generated files.
```bash
dcw up -d
```
Note: Any value-related flags (`--set`, `--set-file`, `--set-string`, `-f`, `--values`) are handled by the wrapper and will not be forwarded to Docker Compose.
Validates all generated Docker Compose files using `docker compose config`.
```bash
dcw lint
```
Lists all available releases with their timestamps.
```bash
dcw releases
```
Creates a new release from a previous one and runs Docker Compose from it. Supports rolling updates if configured in the target release.
- Roll back to the previous release:

  ```bash
  dcw rollback up -d
  ```

- Roll back to a specific release:

  ```bash
  dcw rollback v3-abcdef12 up -d
  ```
When rolling back, the wrapper will:
- Create a new release from the selected version
- Preserve all configuration including rolling update settings
- Apply rolling updates if enabled in the target release's configuration
- Use the same zero-downtime update process as regular deployments
After each command, you will see a summary:
```text
+++++++++++++++++++++++++++++++++++++++
Release: v5-abcdef12
Status: SUCCESS
+++++++++++++++++++++++++++++++++++++++
```
or, if there was an error:
```text
+++++++++++++++++++++++++++++++++++++++
Release: v5-abcdef12
Status: FAIL!!!!
+++++++++++++++++++++++++++++++++++++++
```
When rolling back, you will see:
```text
New state version v6-12345678 created from release v5-abcdef12
```
- Uses Go's `text/template` syntax.
- Supports all Go template features: `{{ .key }}`, `{{ if ... }}`, `{{ range ... }}`.
```yaml
version: '3.9'
services:
  web:
    image: {{ .image.repository }}:{{ .image.tag }}
    ports:
      - "{{ .appPort }}:8080"
    environment:
      - ENVIRONMENT={{ .global.environment }}
      - DB_HOST=database
      - REDIS_HOST=cache
{{- if .global.network.alias }}
    networks:
      - {{ .global.network.alias }}
{{- end }}

{{- if .global.network.alias }}
networks:
  {{ .global.network.alias }}:
    driver: {{ .global.network.driver }}
    name: {{ .global.network.name }}
{{- end }}
```
- The wrapper parses and merges values from all supported sources (`--set`, `--set-file`, `--set-string`, `-f`, `--values`, chart defaults).
- These flags are not passed to Docker Compose. Only arguments relevant to Docker Compose are forwarded.
- This ensures Docker Compose receives only valid arguments, while the wrapper manages all configuration logic.
The wrapper uses Docker Compose's ability to work with multiple compose files through the COMPOSE_FILE environment variable. For example:
```bash
COMPOSE_FILE=docker-compose.yml:cache/docker-compose.yml:database/docker-compose.yml
```
This allows:
- Each service to have its own compose file
- Services to be organized in subdirectories
- Easy addition of new services without modifying existing files
- Clear separation of concerns between different services
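Building that COMPOSE_FILE value from a release directory could be sketched as follows. `composeFileEnv` is a hypothetical helper; the `:` separator is the Linux/macOS default for Docker Compose's COMPOSE_PATH_SEPARATOR (Windows uses `;`).

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// composeFileEnv assembles the COMPOSE_FILE value from a release's
// generated main compose file plus one per-service file in subdirectories.
func composeFileEnv(releaseDir string, services []string) string {
	files := []string{filepath.Join(releaseDir, "docker-compose.yml")}
	for _, svc := range services {
		files = append(files, filepath.Join(releaseDir, svc, "docker-compose.yml"))
	}
	return "COMPOSE_FILE=" + strings.Join(files, ":")
}

func main() {
	fmt.Println(composeFileEnv("dist/v1-a3f9c2d8/docker", []string{"cache", "database"}))
}
```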
MIT
Charts can depend on other charts from Git repositories or Helm-like repositories. Dependencies are declared in the Chart.yaml file:
```yaml
dependencies:
  - name: database
    repository: https://github.com/your-org/database-chart.git
    version: main
  - name: cache
    repository: https://charts.your-org.com
    version: 1.2.3
  - name: web2
    path: ./charts/web2
```
Git Repository

- URL format: `https://github.com/org/repo.git` or `git@github.com:org/repo.git`
- Version can be a branch name, tag, or commit hash
- The repository must contain a valid chart structure

Helm-like Repository

- URL format: `https://charts.your-org.com`
- Version must be a semantic version (e.g., `1.2.3`)
- Repository must provide an `index.yaml` file
To update dependencies:

```bash
dcw dependency update
```

To list current dependencies:

```bash
dcw dependency list
```

Dependencies are stored in the `charts/` directory and are automatically downloaded when needed.
Hooks allow you to run commands or containers before or after Docker Compose operations. They are defined in Chart.yaml:
```yaml
hooks:
  - name: wait-for-db
    type: pre
    command: ["./scripts/wait-for-db.sh"]
  - name: backup
    type: post
    container:
      image: backup-tool:latest
      command: ["backup", "--target", "database"]
```

- Pre-hooks: Run before Docker Compose operations
- Post-hooks: Run after Docker Compose operations
- Command Hooks: Run shell commands

  ```yaml
  hooks:
    - name: setup
      type: pre
      command: ["./scripts/setup.sh"]
  ```

- Container Hooks: Run containers

  ```yaml
  hooks:
    - name: backup
      type: post
      container:
        image: backup-tool:latest
        command: ["backup"]
        env:
          BACKUP_PATH: "/data"
  ```

- Wait for Services: Hooks can wait for services to be ready

  ```yaml
  hooks:
    - name: wait-for-db
      type: pre
      waitFor: ["database"]
      timeout: "30s"
  ```

- Environment Variables: Pass environment variables to hooks

  ```yaml
  hooks:
    - name: setup
      type: pre
      command: ["./scripts/setup.sh"]
      env:
        DB_HOST: "database"
        DB_PORT: "5432"
  ```

- Network Access: Container hooks can access the same network as your services

  ```yaml
  hooks:
    - name: backup
      type: post
      container:
        image: backup-tool:latest
        network: "appnet"
  ```
The Chart.yaml file supports the following configuration options:
```yaml
name: example
version: 1.0.0
maxReleases: 10 # Optional: Maximum number of releases to keep (default: 20)
dependencies:
  - name: database
    repository: https://github.com/your-org/database-chart.git
    version: main
  - name: local-service
    path: ./local-charts/service # Path to local chart
hooks:
  - name: init-db
    type: pre
    container:
      image: postgres:14
      command: ["psql"]
      args: ["-h", "database", "-U", "postgres", "-f", "/docker-entrypoint-initdb.d/init.sql"]
      env:
        PGPASSWORD: "postgres"
      network: "my-network"
    waitFor:
      - database
    timeout: "30s"
```

- `name`: Chart name
- `version`: Chart version
- `maxReleases`: Maximum number of releases to keep (default: 20)
- `dependencies`: List of chart dependencies
  - `name`: Dependency name
  - `repository`: Git repository URL or Helm repository (optional for local charts)
  - `version`: Git branch/tag or Helm chart version (optional for local charts)
  - `path`: Path to local chart directory (relative to Chart.yaml)
- `hooks`: List of pre and post hooks
This project uses Go's log/slog for structured logging. You can control the verbosity of logs using the LOG_LEVEL environment variable:
- `debug`: show debug, info, warning, and error messages
- `info`: show info, warning, and error messages (default)
- `warn`: show warning and error messages
- `error`: show only error messages
Example usage:
```bash
LOG_LEVEL=debug ./compose-wrapper up
```

All debug and info messages (such as dependency updates, hook execution, and internal steps) are logged via slog. Only the release status and summary are printed to the console via fmt for clear user feedback.
The wrapper supports zero-downtime rolling updates for services. This is configured in the values.yaml file:
```yaml
# For the main service
appName: "web" # Must match the service name in docker-compose.yml
rolling-update: true
replicas: 2

# For other services
web2:
  rolling-update: true
  replicas: 3
```

- The service is scaled up to double the desired replicas
- New containers are started with the updated configuration
- Old containers are gracefully terminated (SIGTERM)
- The service is scaled back to the desired number of replicas
The wrapper uses exact container name matching to prevent unintended container updates. This means:
- Services with similar names (e.g., "web" and "web2") are updated independently
- Each service's containers are identified by their exact name
- No risk of accidentally updating containers from other services
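Exact-name matching can be illustrated with an anchored pattern over Compose v2-style container names (`<project>-<service>-<index>`). The naming scheme and `containersFor` helper are assumptions for illustration, not the wrapper's actual code.

```go
package main

import (
	"fmt"
	"regexp"
)

// containersFor returns only the containers whose names match
// <project>-<service>-<index> exactly. Anchoring the pattern keeps
// "web" from also matching "web2", which a prefix match would not.
func containersFor(project, service string, names []string) []string {
	re := regexp.MustCompile("^" + regexp.QuoteMeta(project+"-"+service) + "-\\d+$")
	var out []string
	for _, n := range names {
		if re.MatchString(n) {
			out = append(out, n)
		}
	}
	return out
}

func main() {
	names := []string{"myapp-web-1", "myapp-web-2", "myapp-web2-1"}
	fmt.Println(containersFor("myapp", "web", names)) // [myapp-web-1 myapp-web-2]
}
```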
You can configure rolling updates at two levels:
- Root Level (applies to the main service):

  ```yaml
  appName: "web"
  rolling-update: true
  replicas: 2
  ```

- Service Level (applies to specific services):

  ```yaml
  web2:
    rolling-update: true
    replicas: 3
  ```
The `appName` field in the root configuration determines which service is considered the main service; this service uses the root-level rolling update configuration. The value of `appName` must match the service name in the root chart's `docker-compose.yml.tmpl`.
Rolling updates can be configured at both global and service levels:
```yaml
# Global configuration (applies to main service)
rolling-update: true
replicas: 2

# Service-specific configuration
web2:
  rolling-update: true
  replicas: 1
```

1. Pre-update Check:
   - Verifies current replica count
   - Scales to configured replica count if needed

2. Update Process:
   - Scales up to double the configured replicas
   - Waits for new containers to start (configurable retries)
   - Gracefully terminates old containers
   - Scales back to original replica count

3. Configuration Options:

   ```go
   RollingUpdateRetryCount    = 5 // Number of retries to wait for new containers
   RollingUpdateRetryInterval = 5 // Seconds to wait between retries
   ```