Isolate celery tests to separate container #50952
Conversation
The celery tests hang intermittently from time to time, and it's rather difficult to pinpoint the root cause. This PR attempts to isolate the tests and make them fail faster when the problem happens. Currently, after some recent refactoring, none of the tests usually runs longer than 18-19 minutes, so we can set much lower timeouts for the test job - a 30-minute "soft" timeout (SIGTERM sent to stop the container and dump logs) and 35 minutes for a "hard" failure of the GitHub Action. If we see that we are still hanging despite the isolation, we can later introduce more debug logging for **just** the celery container run.
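For illustration, here is a minimal sketch of how such a soft/hard timeout split can be wired up. The `celery-tests` compose service and the exact commands are assumptions for the example, not the actual Airflow workflow; the 35-minute "hard" limit would be the GitHub Actions job-level `timeout-minutes`.

```bash
#!/usr/bin/env bash
set -euo pipefail

# "Soft" timeout: after 30 minutes GNU `timeout` sends SIGTERM so the test
# container can stop gracefully; we then dump its logs and fail the step.
# A job-level `timeout-minutes: 35` in the workflow acts as the "hard" limit.
if ! timeout --signal=TERM 30m docker compose run --rm celery-tests; then
  echo "Celery tests failed or exceeded the 30-minute soft timeout; dumping logs"
  docker compose logs celery-tests || true
  exit 1
fi
```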
amoghrajesh
left a comment
Yeah, let's isolate and try to fix it.
gopidesupavan
left a comment
cool, hope to see celery happy now :)
I am really hopeful here. I've looked through about 40 failed scheduled runs and I never saw the celery issue when running the "lowest deps" tests - it seems to only fail when we run the regular tests together with other providers, so it looks like it is really a side-effect of some other tests causing it. Hopefully, when we run it in isolation, the problem will be gone (but it might be that the side effect will affect other tests... who knows).
Ok. So far, so good - the first pass was full-green, which is a good sign.
Backport failed to create: v3-0-test. View the failure log for run details.
You can attempt to backport this manually by running: `cherry_picker fb8c877 v3-0-test`. This should apply the commit to the v3-0-test branch and leave the commit in a conflict state, marking the changes that need to be resolved. After you have resolved the conflicts, you can continue the backport process by running: `cherry_picker --continue`.
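For reference, the manual backport boils down to roughly this sequence (the commit sha and branch are taken from the bot message above; conflict resolution itself is manual):

```bash
# Apply the commit to the v3-0-test branch; if it does not apply cleanly,
# cherry_picker stops and leaves the conflicts for you to resolve.
cherry_picker fb8c877 v3-0-test

# ...resolve the conflicts and `git add` the resolved files...

# Resume and finish the backport once the conflicts are resolved.
cherry_picker --continue
```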
Read the Pull Request Guidelines for more information.
In case of fundamental code changes, an Airflow Improvement Proposal (AIP) is needed.
In case of a new dependency, check compliance with the ASF 3rd Party License Policy.
In case of backwards incompatible changes please leave a note in a newsfragment file, named `{pr_number}.significant.rst` or `{issue_number}.significant.rst`, in airflow-core/newsfragments.