After using imageproxy for 2 weeks, it looks like it restarts a lot because of OOMKilled (out-of-memory) errors.
Here's the number of restarts:
kubectl get pods -l app=imageproxy
NAME                          READY     STATUS    RESTARTS   AGE
imageproxy-3940674779-nm521   1/1       Running   54         12d
imageproxy-3940674779-swbgj   1/1       Running   57         12d
Here's the description of one pod:
Name:           imageproxy-3940674779-swbgj
Namespace:      default
Node:           NODE_HOST/XXXXX
Start Time:     Wed, 07 Jun 2017 18:21:42 XXXX
Labels:         app=imageproxy
                pod-template-hash=XXXX
Status:         Running
IP:             XXXXX
Controllers:    ReplicaSet/imageproxy-3940674779
Containers:
  imageproxy:
    Container ID:   XXXX
    Image:          imageproxy
    Image ID:       XXXX
    Port:           80/TCP
    Args:
      -addr
      0.0.0.0:80
      -cache
      /tmp/imageproxycache
    State:          Running
      Started:      Tue, 20 Jun 2017 08:14:33 XXXX
    Last State:     Terminated
      Reason:       OOMKilled
      Exit Code:    137
      Started:      Mon, 19 Jun 2017 17:17:43 XXXX
      Finished:     Tue, 20 Jun 2017 08:14:32 XXX
    Ready:          True
    Restart Count:  57
    Limits:
      cpu:      2
      memory:   1Gi
    Requests:
      cpu:      200m
      memory:   512Mi
    Liveness:       http-get http://:80/health-check delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:80/health-check delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:    <none>
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     <none>
Events:          <none>
The logs only go back to the latest restart, and there is no hint as to why so much memory is being used.
The upper memory limit (from the config) is 1Gi. Probably a memory leak, right? Any idea how to diagnose this?
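For reference, here is a minimal sketch of what I could run to watch the pod's memory before the next kill; this assumes kubectl top is backed by Heapster/metrics-server in this cluster and that the killed container's logs are still retained:

kubectl logs imageproxy-3940674779-swbgj --previous        # logs from the OOMKilled container, if retained
while true; do kubectl top pod -l app=imageproxy; sleep 30; done   # sample memory usage over time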