# kubernify

Verify Kubernetes deployments match a version manifest with deep stability auditing. Checks convergence, revision consistency, and pod health.

## Features

- **Manifest-driven verification** - Provide a JSON manifest of expected versions; kubernify verifies the cluster matches
- **Deep stability auditing** - Goes beyond version checks: convergence, revision consistency, pod health, DaemonSet scheduling, Job completion
- **Retry-until-converged loop** - Waits for rollouts to complete rather than just snapshot-checking
- **Repository-relative image parsing** - Flexible component name extraction from any image registry format
- **Comprehensive workload support** - Deployments, StatefulSets, DaemonSets, Jobs, and CronJobs
- **Zero-replica awareness** - Verifies version from the PodSpec even when HPA/KEDA has scaled to zero
- **Structured JSON reports** - Machine-readable output for CI/CD pipeline integration
## Installation

```shell
pip install kubernify
```

Or with pipx for isolated CLI usage:

```shell
pipx install kubernify
```

Or with uv:

```shell
uv add kubernify
```

## Quick Start

```shell
# Verify backend and frontend match expected versions in the "production" namespace
kubernify \
  --context my-cluster-context \
  --anchor my-app \
  --namespace production \
  --manifest '{"backend": "v1.2.3", "frontend": "v1.2.4"}'
```

kubernify will connect to the cluster, discover all matching workloads, verify their image versions against the manifest, run stability audits, and exit with code 0 (pass), 1 (fail), or 2 (timeout).
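In CI, the manifest string is usually generated rather than hand-written. A minimal sketch of producing it with the standard library (the component names and how the versions are obtained are assumptions; adapt to your build pipeline):

```python
import json

# Hypothetical example: versions collected earlier in the pipeline
versions = {"backend": "v1.2.3", "frontend": "v1.2.4"}

# kubernify expects the manifest as a single JSON string
manifest_arg = json.dumps(versions)
print(manifest_arg)  # pass to: kubernify --manifest "$MANIFEST"
```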
## CLI Reference

```shell
kubernify [OPTIONS]
```

| Argument | Description | Default |
|---|---|---|
| `--context` | Kubeconfig context name. Mutually exclusive with `--gke-project`. | From kubeconfig |
| `--gke-project` | GCP project ID for GKE context resolution. Mutually exclusive with `--context`. | |
| `--anchor` | (required) Image path anchor for component name extraction. See How Image Anchor Works below. | |
| `--manifest` | (required) JSON version manifest, e.g. `'{"backend": "v1.2.3"}'`. | |
| `--namespace` | Kubernetes namespace to verify. | From kubeconfig context |
| `--required-workloads` | Comma-separated workload name patterns that must exist. | |
| `--skip-containers` | Comma-separated container name patterns to skip during verification. | |
| `--min-uptime` | Minimum pod uptime in seconds for stability checks. | `0` |
| `--restart-threshold` | Maximum acceptable container restart count. Use `0` to forbid any restarts, or `-1` to skip the restart check entirely. | `3` |
| `--timeout` | Global timeout in seconds for the verification loop. | `300` |
| `--allow-zero-replicas` | Allow workloads with zero replicas to pass verification. | `false` |
| `--dry-run` | Snapshot check without waiting for convergence. | `false` |
| `--include-statefulsets` | Include StatefulSets in workload discovery. | `false` |
| `--include-daemonsets` | Include DaemonSets in workload discovery. | `false` |
| `--include-jobs` | Include Jobs and CronJobs in workload discovery. | `false` |
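`--required-workloads` and `--skip-containers` take comma-separated name patterns. Assuming shell-style glob semantics (an assumption for illustration, not confirmed by the option descriptions), the matching can be pictured as:

```python
from fnmatch import fnmatch


def matches_any(name: str, patterns: str) -> bool:
    """Check a workload/container name against a comma-separated pattern list.
    Glob semantics are assumed here purely for illustration."""
    return any(fnmatch(name, p.strip()) for p in patterns.split(","))


print(matches_any("istio-proxy", "istio-*, envoy"))  # True
print(matches_any("backend", "istio-*, envoy"))      # False
```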
## Examples

Verify against a kubeconfig context:

```shell
kubernify \
  --context my-cluster-context \
  --anchor my-app \
  --namespace production \
  --manifest '{"backend": "v1.2.3", "frontend": "v1.2.4"}'
```

Resolve the context automatically from a GKE project:

```shell
kubernify \
  --gke-project my-gke-project-123456 \
  --anchor my-app \
  --namespace production \
  --manifest '{"backend": "v1.2.3", "frontend": "v1.2.4"}'
```

Run inside the cluster:

```shell
# No --context needed; auto-detects in-cluster config and namespace
kubernify \
  --anchor my-app \
  --manifest '{"backend": "v1.2.3", "frontend": "v1.2.4"}'
```

Full run with all workload types and stricter stability checks:

```shell
kubernify \
  --context my-cluster-context \
  --anchor my-app \
  --namespace production \
  --manifest '{"backend": "v1.2.3", "frontend": "v1.2.4", "worker": "v1.2.3"}' \
  --required-workloads "backend, frontend, worker" \
  --skip-containers "istio-proxy, envoy, fluent-bit" \
  --include-statefulsets \
  --include-daemonsets \
  --include-jobs \
  --min-uptime 120 \
  --restart-threshold 5 \
  --timeout 600 \
  --allow-zero-replicas
```

Snapshot check without waiting for convergence:

```shell
kubernify \
  --context my-cluster-context \
  --anchor my-app \
  --manifest '{"backend": "v1.2.3"}' \
  --dry-run
```

### GitHub Actions

```yaml
jobs:
  verify-deployment:
    runs-on: ubuntu-latest
    steps:
      - name: Set up kubeconfig
        run: |
          echo "${{ secrets.KUBECONFIG }}" > /tmp/kubeconfig
          # "export" only affects the current step; persist via GITHUB_ENV
          echo "KUBECONFIG=/tmp/kubeconfig" >> "$GITHUB_ENV"
      - name: Install kubernify
        run: pip install kubernify
      - name: Verify deployment
        run: |
          kubernify \
            --context ${{ secrets.KUBE_CONTEXT }} \
            --anchor my-app \
            --manifest '${{ steps.build.outputs.manifest }}' \
            --timeout 600 \
            --min-uptime 60
```

## Library Usage

kubernify can be used as a Python library for custom verification workflows:
```python
from kubernify.kubernetes_controller import KubernetesController
from kubernify.workload_discovery import WorkloadDiscovery
from kubernify.cli import construct_component_map, verify_versions

controller = KubernetesController(context="my-cluster")
discovery = WorkloadDiscovery(k8s_controller=controller)
workloads, _ = discovery.discover_cluster_state(namespace="production")

component_map = construct_component_map(
    workloads=workloads,
    manifest={"backend": "v1.2.3"},
    repository_anchor="my-app",
)

results = verify_versions(manifest={"backend": "v1.2.3"}, component_map=component_map)
if results.errors:
    print(f"Verification failed: {results.errors}")
```

## How Image Anchor Works

kubernify uses a repository-relative anchor to extract component names from container image paths. The `--anchor` argument specifies the path segment after which the component name is derived.
```text
Image: registry.example.com/my-org-foo/my-app-bar/backend:v1.2.3-x
       └──── registry ────┘ └─ org ──┘ └─anchor─┘ └comp─┘ └─tag──┘
```
More examples:
| Image | `--anchor` | Extracted Component |
|---|---|---|
| `registry.example.com/my-org/my-app/backend:v1.2.3` | `my-app` | `backend` |
| `registry.example.com/my-org/my-app/api/server:v2.0.0` | `my-app` | `api/server` |
| `gcr.io/my-project/my-app/worker:v1.0.0` | `my-app` | `worker` |
The extracted component name is then matched against the keys in your `--manifest` JSON to verify the correct version is deployed.
## Exit Codes

| Code | Meaning | Description |
|---|---|---|
| `0` | PASS | All workloads match the manifest and pass stability audits |
| `1` | FAIL | One or more workloads have version mismatches or stability issues |
| `2` | TIMEOUT | Verification did not converge within the `--timeout` window |
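In a custom deploy script, the exit code can be translated back into a status label. A minimal sketch (`run_verification` is a hypothetical wrapper, not part of kubernify):

```python
import subprocess

# Status labels per the exit code table
EXIT_STATUS = {0: "PASS", 1: "FAIL", 2: "TIMEOUT"}


def run_verification(args: list[str]) -> str:
    """Invoke the kubernify CLI and map its exit code to a status label."""
    proc = subprocess.run(["kubernify", *args])
    return EXIT_STATUS.get(proc.returncode, "UNKNOWN")
```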
## Requirements

- Python >= 3.10

If using `--gke-project` for automatic GKE context resolution:

- Install the Google Cloud SDK
- Install the GKE auth plugin:

  ```shell
  gcloud components install gke-gcloud-auth-plugin
  ```

- Authenticate:

  ```shell
  gcloud auth login
  gcloud container clusters get-credentials CLUSTER_NAME --project PROJECT_ID
  ```
## RBAC

kubernify requires read-only access to workloads and pods. Apply the following RBAC configuration:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: kubernify-reader
  namespace: <namespace>
rules:
  - apiGroups: ["apps"]
    resources: ["deployments", "statefulsets", "daemonsets", "replicasets"]
    verbs: ["get", "list"]
  - apiGroups: ["batch"]
    resources: ["jobs", "cronjobs"]
    verbs: ["get", "list"]
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernify-reader-binding
  namespace: <namespace>
subjects:
  - kind: ServiceAccount
    name: kubernify
    namespace: <namespace>
roleRef:
  kind: Role
  name: kubernify-reader
  apiGroup: rbac.authorization.k8s.io
```

## Contributing

Contributions are welcome! Please see CONTRIBUTING.md for development setup, coding standards, and the PR process.

## License

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.