Change the ClusterStatusConditionType "Failing" to "Degraded"#287

Merged
openshift-merge-robot merged 2 commits into openshift:master from eparis:fail_to_degrade
Apr 17, 2019
Conversation

@eparis
Member

@eparis eparis commented Apr 16, 2019

No description provided.

eparis added 2 commits April 16, 2019 19:40
This is used to indicate what we used to call 'Failing'. It was very odd
to see that your service was available, but also failing; it is perfectly
normal, though, for your service to be available and degraded.
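The distinction in the commit message can be sketched with a minimal, self-contained stand-in for the openshift/api condition types. The type and field names below mirror `ClusterOperatorStatusCondition` but are redeclared here as assumptions, and `hasCondition` is a hypothetical helper, so the example compiles without the real package:

```go
package main

import "fmt"

// Minimal stand-ins for the openshift/api types this PR touches; the names
// mirror ClusterOperatorStatusCondition but are redeclared locally (an
// assumption) so the sketch runs without importing openshift/api.
type ClusterStatusConditionType string

const (
	OperatorAvailable ClusterStatusConditionType = "Available"
	OperatorDegraded  ClusterStatusConditionType = "Degraded" // replaces the old "Failing"
)

type ClusterOperatorStatusCondition struct {
	Type   ClusterStatusConditionType
	Status string // "True", "False", or "Unknown"
}

// hasCondition reports whether the given condition type is set to "True".
func hasCondition(conds []ClusterOperatorStatusCondition, t ClusterStatusConditionType) bool {
	for _, c := range conds {
		if c.Type == t && c.Status == "True" {
			return true
		}
	}
	return false
}

func main() {
	// After the rename, both of these can sensibly be true at once:
	// the service is up, but running below its desired state.
	conds := []ClusterOperatorStatusCondition{
		{Type: OperatorAvailable, Status: "True"},
		{Type: OperatorDegraded, Status: "True"},
	}
	fmt.Println("available:", hasCondition(conds, OperatorAvailable))
	fmt.Println("degraded:", hasCondition(conds, OperatorDegraded))
}
```

With "Failing" the same pair of conditions read as a contradiction; "Degraded" makes the combination self-explanatory.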
@openshift-ci-robot openshift-ci-robot added size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. approved Indicates a PR has been approved by an approver from all required OWNERS files. labels Apr 16, 2019
@eparis eparis changed the title Change the ClusterStatusConditionType "Failing" to "Degraded" [wip] Change the ClusterStatusConditionType "Failing" to "Degraded" Apr 17, 2019
@openshift-ci-robot openshift-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Apr 17, 2019
@eparis eparis changed the title [wip] Change the ClusterStatusConditionType "Failing" to "Degraded" Change the ClusterStatusConditionType "Failing" to "Degraded" Apr 17, 2019
@openshift-ci-robot openshift-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Apr 17, 2019
@eparis
Member Author

eparis commented Apr 17, 2019

@deads2k

@deads2k
Contributor

deads2k commented Apr 17, 2019

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Apr 17, 2019
@openshift-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: deads2k, eparis

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-merge-robot openshift-merge-robot merged commit 7924f91 into openshift:master Apr 17, 2019
wking added a commit to wking/cluster-image-registry-operator that referenced this pull request Apr 23, 2019
Catching up with openshift/api@8e476cb732 (Create a new
ClusterStatusCondition Degraded, 2019-04-16, openshift/api#287) and
openshift/api@a9fb3b1629 (Remove ClusterStatusConditionType Failing,
2019-04-16, openshift/api#287).
wking added a commit to wking/oc that referenced this pull request Aug 17, 2021
cae0b5e (React to degraded condition change, 2019-04-23,
openshift/origin#22644) moved this code from Failing to Degraded,
likely inspired by [1].  But Degraded is only used in ClusterOperator.
ClusterVersion kept using Failing, as seen in [2].  This commit
returns us to watching for Failing (the condition the CVO has been
setting the whole time), and informing the caller for any non-happy
statuses (or the lack of a Failing condition at all).

[1]: openshift/api#287
[2]: openshift/cluster-version-operator#191
wking added a commit to wking/oc that referenced this pull request Aug 17, 2021
cae0b5e (React to degraded condition change, 2019-04-23,
openshift/origin#22644) moved this code from Failing to Degraded,
likely inspired by [1].  But Degraded is only used in ClusterOperator.
ClusterVersion kept using Failing, as seen in [2].  This commit
returns us to watching for Failing (the condition the CVO has been
setting the whole time), and informing the caller for any non-happy
statuses (or the lack of a Failing condition at all).

Even though the issue causing `Failing=True` may block the current
update from progressing, it should not block admins from requesting a
new update target.  For some bugs, retargeting is the recommended way
to resolve the issue that is currently sticking the update [3].

[1]: openshift/api#287
[2]: openshift/cluster-version-operator#191
[3]: https://bugzilla.redhat.com/show_bug.cgi?id=1988576#c30
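The check described in that commit message can be sketched as follows. The `Condition` type and `failingMessage` helper are hypothetical stand-ins for the real openshift/api ClusterVersion types and the oc code, redeclared here so the sketch runs standalone:

```go
package main

import "fmt"

// Hypothetical minimal condition type; the real code uses the openshift/api
// ClusterVersion status types, redeclared here so this compiles standalone.
type Condition struct {
	Type    string
	Status  string // "True", "False", or "Unknown"
	Message string
}

// failingMessage mirrors the behavior described above: report a warning for
// any non-happy Failing status, and also when the condition is missing.
func failingMessage(conditions []Condition) (string, bool) {
	for _, c := range conditions {
		if c.Type == "Failing" {
			if c.Status == "False" {
				return "", false // explicitly not failing: nothing to report
			}
			return fmt.Sprintf("Failing=%s: %s", c.Status, c.Message), true
		}
	}
	// The CVO has been setting Failing the whole time, so its absence
	// is itself worth surfacing to the caller.
	return "no Failing condition found", true
}

func main() {
	msg, unhappy := failingMessage([]Condition{
		{Type: "Failing", Status: "True", Message: "could not apply the payload"},
	})
	fmt.Println(unhappy, msg)
}
```

Note the design choice the commit message calls out: a warning is informational only, so an unhappy Failing condition never blocks the admin from requesting a new update target.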
wking added further commits to wking/oc with the same message that referenced this pull request on Nov 16, 2021; Feb 9, 2022; Oct 19, 2022; and Nov 15–17, 2022.

Labels

approved Indicates a PR has been approved by an approver from all required OWNERS files. lgtm Indicates that a PR is ready to be merged. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files.

4 participants