Revert 1.25 again #27499
Conversation
/payload 4.12 nightly blocking
@stbenjam: trigger 6 job(s) of type blocking for the nightly release of OCP 4.12
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/199ab2d0-5629-11ed-82ea-6b3cd6c180ec-0
/hold
soltysh left a comment
/approve
/payload 4.12 ci blocking
@kikisdeliveryservice: trigger 4 job(s) of type blocking for the ci release of OCP 4.12
See details on https://pr-payload-tests.ci.openshift.org/runs/ci/9397cb70-564e-11ed-8e04-43fd59826c90-0
In the runs that completed, there were no cases of the ~5 hour runs, so I'm going to remove the hold. Many of the aggregated pods got deleted as usual; I reported it to Test Platform again.
/hold cancel
Force-pushed from aaf3633 to 1ec0eb3
New changes are detected. LGTM label has been removed.
I've had to revert #27491 as well... it merged while this was pending. @ingvagabund Do you have anything else pending? Please don't merge anything else until we get this revert in, thanks.
We're returning to a previously known state, so after discussing live we feel this is low risk. Going to override CI so we can get the revert in to allow branching.
/skip
/lgtm
@stbenjam: Overrode contexts on behalf of stbenjam: ci/prow/e2e-agnostic-ovn-cmd, ci/prow/e2e-aws-csi, ci/prow/e2e-aws-image-registry, ci/prow/e2e-aws-ovn-cgroupsv2, ci/prow/e2e-aws-ovn-fips, ci/prow/e2e-aws-ovn-serial, ci/prow/e2e-aws-ovn-single-node, ci/prow/e2e-aws-ovn-single-node-serial, ci/prow/e2e-aws-ovn-single-node-upgrade, ci/prow/e2e-gcp-builds, ci/prow/e2e-gcp-csi, ci/prow/e2e-gcp-image-ecosystem, ci/prow/e2e-gcp-ovn, ci/prow/e2e-gcp-ovn-rt-upgrade, ci/prow/e2e-gcp-ovn-upgrade, ci/prow/e2e-metal-ipi-ovn-ipv6, ci/prow/e2e-metal-ipi-sdn, ci/prow/e2e-openstack-ovn
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: bparees, DennisPeriquet, soltysh, stbenjam
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
@stbenjam: all tests passed! Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
This reverts #27491, #27490 and #27498; tracked by TRT-643.
Per OpenShift policy, we are reverting this breaking change to get CI and/or nightly payloads flowing again. Note we had to revert two PRs together, as #27498 depends on #27490.
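For reference, a revert like this can be reproduced locally. The sketch below is only an illustration, assuming each reverted PR landed as a single merge commit on master; the SHAs are placeholders, not the actual merge commits:

```
# Revert the dependent PR first (#27498 depends on #27490), then its base,
# then the later PR (#27491). The <sha-...> values are hypothetical placeholders.
git revert -m 1 <sha-27498>
git revert -m 1 <sha-27490>
git revert -m 1 <sha-27491>
```

Reverting in reverse dependency order keeps each revert commit applying cleanly; -m 1 tells git which parent of the merge commit to revert against.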
We have noticed that tests sometimes take 2x-4x longer to run, and this is causing random payload rejections. We are ending up in a situation where around 30% of runs exceed the timeout for the test step. We do not know why. Spot-checking tests during office hours, we noticed tests that typically take 5-6 seconds instead taking 1-2 minutes.
This change was previously reverted; we tested the CI-blocking jobs twice and merged it again. Immediately after merging, we saw this problem again.
To get this change back in later, we're going to unrevert it and run payload jobs against it for further debugging.