
Conversation

@dirrao (Contributor) commented Feb 24, 2024

What happened
When a worker pod's init/base containers are in a pending state due to fatal container state reasons, the tasks eventually fail and the pods are deleted. Currently, the executor has to wait until worker_pods_pending_timeout elapses, even though the worker pods will never recover.

What do you think should happen instead
When a worker pod's init/base containers are in a pending state due to fatal container state reasons, the pod will not recover, so it makes no sense to wait until worker_pods_pending_timeout. Instead, mark the task as failed and delete the worker pod immediately.
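
Not part of the PR itself, but to make the proposal concrete: a minimal sketch of such a fast-fail check, using the official `kubernetes` Python client. The function name and the set of fatal reasons below are illustrative assumptions, not the provider's actual implementation or defaults.

```python
# Illustrative sketch only -- the names and the reason list are assumptions,
# not the provider's actual implementation.
from kubernetes.client import V1Pod

# Waiting-state reasons after which a pending pod is assumed unrecoverable.
FATAL_CONTAINER_STATE_REASONS = {
    "CreateContainerConfigError",
    "CreateContainerError",
    "ErrImagePull",
    "ImageInspectError",
    "InvalidImageName",
}


def has_fatal_container_state(pod: V1Pod) -> bool:
    """Return True if any init/base container is waiting for a fatal reason."""
    statuses = (pod.status.init_container_statuses or []) + (
        pod.status.container_statuses or []
    )
    for status in statuses:
        waiting = status.state.waiting if status.state else None
        if waiting and waiting.reason in FATAL_CONTAINER_STATE_REASONS:
            return True
    return False
```

With a check like this in the executor's pending-pod handling, a task whose pod hits one of these reasons can be failed and its pod deleted immediately, rather than waiting out worker_pods_pending_timeout.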

@boring-cyborg bot added the area:providers and provider:cncf-kubernetes (Kubernetes (k8s) provider related issues) labels Feb 24, 2024
@dirrao closed this Feb 24, 2024
@dirrao reopened this Feb 24, 2024
@dirrao requested a review from potiuk February 26, 2024 05:45
@potiuk (Member) left a comment


LGTM. Small nit.

@potiuk (Member) commented Feb 27, 2024

But I would also love to get someone else to take a look.

@hussein-awala (Member) left a comment


This is a nice improvement. I was worried about the sub-reason for each reason (e.g. "pull QPS exceeded", which I'm not sure is exactly the same across all K8s versions and distributions), but since the user has the option to update the reasons list, they can fix it without upgrading the provider version if we detect another similar case or any bug.

LGTM
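
Not from the PR itself: a minimal sketch of the user-tunable reasons list the review above alludes to, assuming Airflow's standard conf.get. The config section and option names here are hypothetical, not the provider's actual option names.

```python
# Hypothetical config wiring -- the section/key names are assumptions,
# not the provider's actual option names.
from airflow.configuration import conf

DEFAULT_FATAL_REASONS = (
    "CreateContainerConfigError,CreateContainerError,ErrImagePull,"
    "ImageInspectError,InvalidImageName"
)

# Users can extend or trim the list (e.g. for a distribution-specific reason
# such as "pull QPS exceeded") without upgrading the provider.
fatal_reasons = {
    reason.strip()
    for reason in conf.get(
        "kubernetes_executor",                               # assumed section
        "worker_pod_pending_fatal_container_state_reasons",  # assumed key
        fallback=DEFAULT_FATAL_REASONS,
    ).split(",")
    if reason.strip()
}
```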

