Expose node labels to Kubernetes pods
This project supports both the amd64 and arm64 architectures. Multi-architecture Docker images are built with the provided build script and Docker Buildx. You will need:
- Docker with Buildx support (Docker Desktop includes Buildx by default)
- For pushing images: credentials for Docker Hub or another container registry
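If you're not sure whether your Docker installation includes Buildx, a quick check is:
docker buildx version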
Build multi-architecture images for local testing:
chmod +x build.sh
./build.sh
This will build images for both linux/amd64 and linux/arm64 platforms.
To build and push multi-architecture images to a registry:
PUSH=true IMAGE_NAME=your-registry/kube-node-labels VERSION=1.2.0 ./build.sh
The build script supports the following environment variables:
- IMAGE_NAME: Docker image name (default: scottcrossen/kube-node-labels)
- VERSION: Image version/tag (default: latest)
- PLATFORMS: Comma-separated list of platforms (default: linux/amd64,linux/arm64)
- PUSH: Set to true to push to the registry (default: false)
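For reference, the script behaves like a thin wrapper around docker buildx build driven by these variables. A minimal sketch of such a wrapper (an illustration of the idea, not the actual contents of build.sh) could look like this:
#!/bin/sh
# Sketch only: defaults mirror the variables documented above.
IMAGE_NAME="${IMAGE_NAME:-scottcrossen/kube-node-labels}"
VERSION="${VERSION:-latest}"
PLATFORMS="${PLATFORMS:-linux/amd64,linux/arm64}"
PUSH="${PUSH:-false}"

# Only pass --push when explicitly requested; otherwise the result stays in the Buildx cache.
EXTRA_ARGS=""
if [ "$PUSH" = "true" ]; then
  EXTRA_ARGS="--push"
fi

docker buildx build \
  --platform "$PLATFORMS" \
  --tag "$IMAGE_NAME:$VERSION" \
  $EXTRA_ARGS \
  .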
Example with custom platforms:
PLATFORMS=linux/amd64,linux/arm64,linux/arm/v7 ./build.sh
You can also use Docker Buildx directly:
# Create and use a builder instance
docker buildx create --name multiarch --use
docker buildx inspect --bootstrap
# Build and push
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag scottcrossen/kube-node-labels:latest \
  --push \
  .
First, apply cluster permissions so that a pod running under our service account can read node labels.
Note that this needs to be a ClusterRole as opposed to a Role, since nodes are cluster-scoped resources and cannot be granted through a namespaced Role.
$ cat << EOF | kubectl apply -f -
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: print-region
rules:
- apiGroups: [""]
  resources: ["nodes"]
  verbs: ["get"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: print-region
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: print-region
subjects:
- kind: ServiceAccount
  name: print-region
  namespace: default
---
kind: ServiceAccount
apiVersion: v1
metadata:
  name: print-region
  namespace: default
EOF
# Response:
clusterrole.rbac.authorization.k8s.io/print-region created
clusterrolebinding.rbac.authorization.k8s.io/print-region created
serviceaccount/print-region created
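Optionally, verify the binding with kubectl's built-in access check before running anything; this should report yes for the new service account:
$ kubectl auth can-i get nodes --as=system:serviceaccount:default:print-region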
Next, add a Job whose pod prints the Region, Zone, and Hostname of the node it is running on:
$ cat << EOF | kubectl apply -f -
---
apiVersion: batch/v1
kind: Job
metadata:
  name: print-region
  namespace: default
spec:
  ttlSecondsAfterFinished: 60
  template:
    metadata:
      labels:
        app: print-region
    spec:
      restartPolicy: Never
      serviceAccountName: print-region
      containers:
      - name: main
        image: alpine
        command:
        - /bin/sh
        args:
        - -c
        - >-
          echo "Hostname: '\$(/node-data/label.sh kubernetes.io/hostname)'" && \
          echo "Hostname With Default: '\$(/node-data/label.sh kubernetes.io/hostname N/A)'" && \
          echo "Nonexistent: '\$(/node-data/label.sh nonexistent)'" && \
          echo "Nonexistent With Default: '\$(/node-data/label.sh nonexistent N/A)'" && \
          echo "Topology: '\$(/node-data/topology.sh)'" && \
          echo "Topology With Default: '\$(/node-data/topology.sh N/A)'"
        volumeMounts:
        - name: node-data
          mountPath: /node-data
      initContainers:
      - name: init
        image: scottcrossen/kube-node-labels:1.1.0
        imagePullPolicy: IfNotPresent
        env:
        - name: NODE
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: OUTPUT_DIR
          value: /output
        volumeMounts:
        - name: node-data
          mountPath: /output
      volumes:
      - name: node-data
        emptyDir: {}
EOF
# Response:
job.batch/print-region created
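The job's pod can take a few seconds to be scheduled and pulled, so following the logs immediately may fail. If you want to block until the job has completed, a plain kubectl wait does the trick:
$ kubectl -n default wait --for=condition=complete job/print-region --timeout=120s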
Now print the logs of the job's pod to show that this works.
Note that minikube doesn't set region/zone labels by default, so those values fall back to the provided defaults. Typical cloud setups such as GKE and EKS populate the topology.kubernetes.io/region and topology.kubernetes.io/zone labels on their nodes.
$ kubectl -n default logs -f jobs/print-region
# Response:
Hostname: 'minikube'
Hostname With Default: 'minikube'
Nonexistent: ''
Nonexistent With Default: 'N/A'
Topology: 'host=minikube'
Topology With Default: 'region=N/A,zone=N/A,host=minikube'
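When you're finished, the job itself is garbage-collected about 60 seconds after completion thanks to ttlSecondsAfterFinished; the RBAC objects and the service account can be removed manually:
$ kubectl -n default delete serviceaccount/print-region clusterrolebinding/print-region clusterrole/print-region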