
React to degraded condition change and bump API so upgrade can handle Force#22644

Merged
smarterclayton merged 3 commits into openshift:master from smarterclayton:bump_upgrade
Apr 24, 2019

Conversation

@smarterclayton (Contributor) commented Apr 24, 2019

This gates signing our payloads, which is a GA blocker (we need a way to bypass upgrades in the CLI first).

@openshift-ci-robot openshift-ci-robot added the size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. label Apr 24, 2019
@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 24, 2019
@smarterclayton force-pushed the bump_upgrade branch 2 times, most recently from 9fbb882 to 5d748da on April 24, 2019 03:59
@soltysh (Contributor) left a comment
lgtm

Comment thread on glide.yaml:

- package: k8s.io/kube-openapi
  repo: https://github.com/openshift/kube-openapi.git
  version: origin-4.0-kubernetes-master-d7c86cd # bumped to match the k8s version we've had, plus additional changes for the OpenAPI CRD (we should pick that back up around k8s 1.14)
- package: k8s.io/klog
👍

@soltysh (Contributor) commented Apr 24, 2019

/retest

@soltysh (Contributor) left a comment

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Apr 24, 2019
@openshift-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: smarterclayton, soltysh

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@smarterclayton (Contributor, Author) commented

/retest

@smarterclayton smarterclayton merged commit 9c0f52a into openshift:master Apr 24, 2019
wking added a commit to wking/oc that referenced this pull request Aug 17, 2021
cae0b5e (React to degraded condition change, 2019-04-23,
openshift/origin#22644) moved this code from Failing to Degraded,
likely inspired by [1].  But Degraded is only used in ClusterOperator.
ClusterVersion kept using Failing, as seen in [2].  This commit
returns us to watching for Failing (the condition the CVO has been
setting the whole time), and informing the caller for any non-happy
statuses (or the lack of a Failing condition at all).

[1]: openshift/api#287
[2]: openshift/cluster-version-operator#191
wking added a commit to wking/oc that referenced this pull request Aug 17, 2021
cae0b5e (React to degraded condition change, 2019-04-23,
openshift/origin#22644) moved this code from Failing to Degraded,
likely inspired by [1].  But Degraded is only used in ClusterOperator.
ClusterVersion kept using Failing, as seen in [2].  This commit
returns us to watching for Failing (the condition the CVO has been
setting the whole time), and informing the caller for any non-happy
statuses (or the lack of a Failing condition at all).

Even though the issue causing `Failing=True` may block the current
update from progressing, it should not block admins from requesting a
new update target.  For some bugs, retargeting is the recommended way
to resolve the issue that is currently sticking the update [3].

[1]: openshift/api#287
[2]: openshift/cluster-version-operator#191
[3]: https://bugzilla.redhat.com/show_bug.cgi?id=1988576#c30
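The condition-watching behavior the commit message describes can be sketched as follows. This is a minimal illustration, not the real oc implementation: the `Condition` type and `failingMessage` helper are hypothetical stand-ins for the openshift/api client types, kept only to show the logic of reporting Failing=True, Failing=Unknown, or a missing Failing condition on ClusterVersion.

```go
package main

import "fmt"

// Condition is a hypothetical stand-in for a ClusterVersion status
// condition; real clusters use the openshift/api types.
type Condition struct {
	Type   string
	Status string // "True", "False", or "Unknown"
}

// failingMessage returns a non-empty message whenever the caller should
// be informed: Failing=True, Failing=Unknown, or no Failing condition
// reported at all. Only Failing=False is treated as happy.
func failingMessage(conds []Condition) string {
	for _, c := range conds {
		if c.Type == "Failing" {
			if c.Status != "False" {
				return fmt.Sprintf("cluster version is failing (status %s)", c.Status)
			}
			return "" // Failing=False: nothing to report
		}
	}
	return "no Failing condition reported on ClusterVersion"
}

func main() {
	conds := []Condition{
		{Type: "Available", Status: "True"},
		{Type: "Failing", Status: "True"},
	}
	if msg := failingMessage(conds); msg != "" {
		fmt.Println(msg)
	}
}
```

Note that, per the message above, a non-empty result here should warn the admin but not block requesting a new update target, since retargeting is sometimes the recommended fix for a stuck update.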