Add logs for container in bad status #4534
Closed
Labels

- area/API: API objects and controllers
- kind/feature: Well-understood/specified features, ready for coding.
- lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
In what area(s)?
Describe the feature
Add logs to indicate that the user-container enters, or recovers from, a bad status.
If a request causes the container to crash and restart, the request returns a 502, but there are no logs, events, or other ways to know what happened.
For example, follow the steps below:

1. Run `watch kubectl get pods` to keep the pod status refreshed.
2. Send requests with a large `bloat` query (`?bloat=1000`) multiple times: `curl -H "Host: autoscale-go.default.example.com" "http://<Istio-gateway IP>?bloat=500"`
3. The pod shows `OOMKilled` for a few seconds and the `user-container` gets restarted.

There is a `Last State` in the output of `kubectl get pod -oyaml` whose started time is when the first `OOMKilled` event happened and whose finished time is when the last `OOMKilled` event finished (during that period the container is ready most of the time). There are no other logs from Kubernetes or Knative components indicating that an `OOMKilled` happened. So unless the pod is kept alive and the operator checks the correct pod, there is no way to know what caused the 502 response.
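To illustrate the kind of logging this feature request asks for, here is a minimal sketch in Go. It is not Knative's implementation: the types below are hypothetical local stand-ins for the relevant fields of a pod's `containerStatuses` (the real ones live in `k8s.io/api/core/v1`), and the function simply turns an `OOMKilled` last state into a log line an operator could find after the pod is gone.

```go
package main

import "fmt"

// Terminated mirrors the lastState.terminated fields referenced above
// (hypothetical local type, not the real k8s.io/api struct).
type Terminated struct {
	Reason     string
	StartedAt  string
	FinishedAt string
}

// ContainerStatus is a minimal stand-in for a pod's containerStatuses entry.
type ContainerStatus struct {
	Name         string
	RestartCount int
	LastState    *Terminated // nil when the container never terminated
}

// oomKilledMessages builds one log line per container whose last
// termination reason was OOMKilled.
func oomKilledMessages(pod string, statuses []ContainerStatus) []string {
	var msgs []string
	for _, s := range statuses {
		if s.LastState != nil && s.LastState.Reason == "OOMKilled" {
			msgs = append(msgs, fmt.Sprintf(
				"pod %s: container %q was OOMKilled (restarts=%d, started=%s, finished=%s)",
				pod, s.Name, s.RestartCount, s.LastState.StartedAt, s.LastState.FinishedAt))
		}
	}
	return msgs
}

func main() {
	// Example statuses resembling the repro above: user-container was
	// OOMKilled and restarted, queue-proxy stayed healthy.
	statuses := []ContainerStatus{
		{Name: "user-container", RestartCount: 3,
			LastState: &Terminated{Reason: "OOMKilled",
				StartedAt: "2019-06-27T09:00:00Z", FinishedAt: "2019-06-27T09:05:00Z"}},
		{Name: "queue-proxy", RestartCount: 0},
	}
	for _, m := range oomKilledMessages("autoscale-go-xyz", statuses) {
		fmt.Println(m)
	}
}
```

For a one-off manual check, the same field can be read with `kubectl get pod <pod> -o jsonpath='{.status.containerStatuses[*].lastState.terminated.reason}'`, but that only works while the pod still exists, which is exactly the limitation this issue describes.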