Hi!
I have a problem with jibri-pod-controller when running on a Google Cloud Kubernetes cluster (GKE).
On the first try, the pods in the deployment end up in CrashLoopBackOff without any logs accessible via kubectl logs. The only information I had was:
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 139
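For reference, exit code 139 is 128 + 11, i.e. the process was killed by SIGSEGV. These are the commands I used to pull out what little state Kubernetes keeps (the pod name is from my deployment; substitute your own):

# Empty, because the process crashes before it logs anything:
kubectl logs jibri-pod-controller-5c644f77f6-kkgqw --previous
# The Waiting/Terminated details above come from:
kubectl describe pod jibri-pod-controller-5c644f77f6-kkgqw
# Or just the last termination record:
kubectl get pod jibri-pod-controller-5c644f77f6-kkgqw \
  -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'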
So I changed the entrypoint of the Docker image to sleep infinity, ran the deployment, entered the pod's shell, and ran the binary manually, ending up with:
/ $ /usr/local/bin/jibri-pod-controller
Segmentation fault
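For the record: I rebuilt the image with the entrypoint set to sleep infinity, but patching the command on the deployment should be an equivalent, quicker way to reproduce this (deployment and pod names are from my setup; the pod name changes after the patch):

kubectl patch deployment jibri-pod-controller --type='json' \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/command", "value": ["sleep", "infinity"]}]'
# Once the pod is Running instead of crash-looping, open a shell in it:
kubectl exec -it jibri-pod-controller-5c644f77f6-kkgqw -- sh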
I tried different namespaces (default and my own custom one) without any luck.
The environment variables seem to be passed correctly, since I can see them in the pod's description as well as from inside the shell:
Environment:
RUST_LOG: info
PORT: 8080
JIBRI_HEALTH_PORT: 2222
JIBRI_BUSY_LABELS: app=jibri,state=busy
SWEEP_INTERVAL: 300
POD_NAME: jibri-pod-controller-5c644f77f6-kkgqw (v1:metadata.name)
NAMESPACE: default (v1:metadata.namespace)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rb6xb (ro)
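The (v1:metadata.name) and (v1:metadata.namespace) entries are downward API fieldRefs, so those are resolved at runtime. Double-checking from inside the pod's shell:

/ $ env | grep -E 'RUST_LOG|PORT|JIBRI|SWEEP_INTERVAL|POD_NAME|NAMESPACE'
# all of the variables above show up with the expected values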
I also tried running this image in local Docker, and there it outputs its error logs correctly (it simply has no access to the k8s API).
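For anyone who wants to compare, the local run was essentially this (the image tag is a placeholder for whatever you build locally):

docker run --rm \
  -e RUST_LOG=info \
  -e PORT=8080 \
  -e JIBRI_HEALTH_PORT=2222 \
  -e JIBRI_BUSY_LABELS='app=jibri,state=busy' \
  -e SWEEP_INTERVAL=300 \
  -e POD_NAME=local-test \
  -e NAMESPACE=default \
  jibri-pod-controller:local
# Outside the cluster it starts and logs an error about the missing
# Kubernetes API access instead of segfaulting.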