Fix Kubernetes executor set wrong task status #31274
Conversation
LGTM. You can test this on a deployment. Please report back when tested, as I would like this to be part of 2.6.2, which we will cut tomorrow.
In the case of multiple schedulers with many tasks running, if the schedulers restart and try to adopt pods, they can in some cases set the wrong task status. In this PR, I'm changing some checks so that when the pod status is non-terminal, the task status is set to Failed only if the pod event type is DELETED and POD_EXECUTOR_DONE_KEY is in the pod's labels.
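The guard described above can be sketched as a small predicate. This is a hedged illustration, not the actual Airflow source: the function name `should_fail_non_terminal_pod` and the label value are hypothetical, while `DELETED` event types and the `POD_EXECUTOR_DONE_KEY` label mirror the Kubernetes watch API and the executor behavior described in this PR.

```python
# Hedged sketch of the check described in the PR, not the real Airflow code.
POD_EXECUTOR_DONE_KEY = "airflow_executor_done"  # assumed label key for illustration


def should_fail_non_terminal_pod(event_type: str, pod_labels: dict) -> bool:
    """Fail the task for a non-terminal pod only when the pod was genuinely
    deleted after the executor marked it done, so that an adoption race
    during a scheduler restart does not fail a still-running task."""
    return event_type == "DELETED" and POD_EXECUTOR_DONE_KEY in pod_labels
```

With this predicate, a pod event of type MODIFIED (e.g. one seen during pod adoption) never flips a live task to Failed; only a true deletion of an executor-completed pod does.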
Tested it.
* Fix Kubernetes executor set wrong task status: in the case of multiple schedulers with many tasks running, if the schedulers restart and try to adopt pods, they can in some cases set the wrong task status. Change some checks so that when the pod status is non-terminal, the task status is set to Failed only if the pod event type is DELETED and POD_EXECUTOR_DONE_KEY is in the pod's labels.
* cleanup

(cherry picked from commit dfbf529)
This PR fixes issue #31198.
In the case of multiple schedulers with many tasks running,
if the schedulers restart in quick succession, they can in some cases set the wrong task status.
In this PR, I'm changing some checks so that when the pod status is non-terminal,
the task status is set to Failed only if deletion_timestamp (metadata.deletion_timestamp) is set on the pod. The API server sets this field when it receives the deletion request: https://kubernetes.io/docs/reference/kubernetes-api/common-definitions/object-meta/#ObjectMeta
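The revised check can be sketched as follows. This is a hedged illustration, not the actual Airflow source: the function name is hypothetical, but `metadata.deletion_timestamp` is the real `V1Pod` field from the Kubernetes API that the server populates once a delete request has been accepted.

```python
# Hedged sketch of the revised check from this PR, not the real Airflow code.
# Works with any pod-like object exposing `metadata.deletion_timestamp`,
# such as a kubernetes.client.V1Pod.


def should_fail_non_terminal_pod(pod) -> bool:
    """Fail the task for a non-terminal pod only if the pod is actually
    being deleted, i.e. the API server has set metadata.deletion_timestamp.
    A pod merely being adopted by a restarted scheduler has no
    deletion_timestamp, so its task keeps its correct status."""
    return pod.metadata.deletion_timestamp is not None
```

Compared with the earlier event-type check, this relies on server-side state rather than the watch event stream, so it is robust to missed or replayed events during a scheduler restart.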
Example screenshot before changes

Example screenshot after changes

Read the Pull Request Guidelines for more information.
In case of fundamental code changes, an Airflow Improvement Proposal (AIP) is needed.
In case of a new dependency, check compliance with the ASF 3rd Party License Policy.
In case of backwards incompatible changes please leave a note in a newsfragment file, named
{pr_number}.significant.rst or {issue_number}.significant.rst, in the newsfragments directory.