conditions: Use a consistent constant for the Failing condition#191

Merged
openshift-merge-robot merged 1 commit into openshift:master from smarterclayton:failing
May 20, 2019

Conversation

@smarterclayton
Contributor

This needs to be moved back into openshift/api since it is now part
of our public API, but for now ensure it is consistently used.

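The change boils down to defining the condition type once and referring to it everywhere, instead of scattering `"Failing"` string literals. A minimal self-contained sketch of that pattern, with locally defined stand-ins for the `ClusterStatusConditionType` and condition struct from openshift/api (the real definitions live in that repository; names here are illustrative):

```go
package main

import "fmt"

// ClusterStatusConditionType is a local stand-in for the type in
// github.com/openshift/api/config/v1, so this sketch is self-contained.
type ClusterStatusConditionType string

// ClusterStatusConditionTypeFailing is the single shared constant for
// the Failing condition, replacing ad-hoc string literals.
const ClusterStatusConditionTypeFailing ClusterStatusConditionType = "Failing"

// ClusterOperatorStatusCondition is a simplified stand-in for the
// condition struct in openshift/api.
type ClusterOperatorStatusCondition struct {
	Type    ClusterStatusConditionType
	Status  string // "True", "False", or "Unknown"
	Message string
}

// findCondition looks a condition up by type, comparing against the
// shared constant rather than a raw string.
func findCondition(conds []ClusterOperatorStatusCondition, t ClusterStatusConditionType) *ClusterOperatorStatusCondition {
	for i := range conds {
		if conds[i].Type == t {
			return &conds[i]
		}
	}
	return nil
}

func main() {
	conds := []ClusterOperatorStatusCondition{
		{Type: ClusterStatusConditionTypeFailing, Status: "True", Message: "could not apply manifest"},
	}
	if c := findCondition(conds, ClusterStatusConditionTypeFailing); c != nil && c.Status == "True" {
		fmt.Printf("Failing=%s: %s\n", c.Status, c.Message)
	}
}
```

With one exported constant, every writer and reader of the condition agrees on the spelling, which is exactly the consistency this PR enforces ahead of moving the constant back into openshift/api.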
@openshift-ci-robot openshift-ci-robot added approved Indicates a PR has been approved by an approver from all required OWNERS files. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels May 19, 2019
@abhinavdahiya
Contributor

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label May 20, 2019
@openshift-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: abhinavdahiya, smarterclayton

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [abhinavdahiya,smarterclayton]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-merge-robot openshift-merge-robot merged commit 2b1eb96 into openshift:master May 20, 2019
@wking
Member

wking commented May 21, 2019

I'm still not clear on why the cluster-version operator needs a Failing condition with different semantics than the second-level operators' Degraded. All the guidance from the Degraded docs seems to apply here too.

@smarterclayton
Contributor Author

/cherry-pick release-4.1

Prerequisite for #197

wking added a commit to wking/oc that referenced this pull request Aug 17, 2021
cae0b5e (React to degraded condition change, 2019-04-23,
openshift/origin#22644) moved this code from Failing to Degraded,
likely inspired by [1].  But Degraded is only used in ClusterOperator.
ClusterVersion kept using Failing, as seen in [2].  This commit
returns us to watching for Failing (the condition the CVO has been
setting the whole time), and informing the caller for any non-happy
statuses (or the lack of a Failing condition at all).

[1]: openshift/api#287
[2]: openshift/cluster-version-operator#191
wking added a commit to wking/oc that referenced this pull request Aug 17, 2021
cae0b5e (React to degraded condition change, 2019-04-23,
openshift/origin#22644) moved this code from Failing to Degraded,
likely inspired by [1].  But Degraded is only used in ClusterOperator.
ClusterVersion kept using Failing, as seen in [2].  This commit
returns us to watching for Failing (the condition the CVO has been
setting the whole time), and informing the caller for any non-happy
statuses (or the lack of a Failing condition at all).

Even though the issue causing `Failing=True` may block the current
update from progressing, it should not block admins from requesting a
new update target.  For some bugs, retargeting is the recommended way
to resolve the issue that is currently sticking the update [3].

[1]: openshift/api#287
[2]: openshift/cluster-version-operator#191
[3]: https://bugzilla.redhat.com/show_bug.cgi?id=1988576#c30
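The behavior that commit message describes — treat the ClusterVersion as unhappy when Failing is anything other than explicitly False, including when the condition is missing entirely — can be sketched roughly as follows. The types and function names are local stand-ins for illustration, not the actual oc code:

```go
package main

import "fmt"

// Condition is a simplified stand-in for a ClusterVersion status condition.
type Condition struct {
	Type    string
	Status  string // "True", "False", or "Unknown"
	Message string
}

// failingProblem returns a non-empty description when the Failing
// condition is anything other than explicitly False, including when
// the condition is absent from the status entirely.
func failingProblem(conds []Condition) string {
	for _, c := range conds {
		if c.Type == "Failing" {
			if c.Status == "False" {
				return "" // happy: explicitly not failing
			}
			return fmt.Sprintf("Failing=%s: %s", c.Status, c.Message)
		}
	}
	return "no Failing condition found"
}

func main() {
	// Explicit Failing=False is the only happy case.
	fmt.Println(failingProblem([]Condition{{Type: "Failing", Status: "False"}}) == "") // prints true
	// A missing condition is also reported to the caller.
	fmt.Println(failingProblem(nil)) // prints "no Failing condition found"
}
```

Note that this only reports the problem to the caller; per the commit message, a `Failing=True` condition should not prevent an admin from requesting a new update target.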
wking added a commit to wking/oc that referenced this pull request Nov 16, 2021
wking added a commit to wking/oc that referenced this pull request Feb 9, 2022
wking added a commit to wking/oc that referenced this pull request Feb 9, 2022
wking added a commit to wking/oc that referenced this pull request Oct 19, 2022
wking added a commit to wking/oc that referenced this pull request Nov 15, 2022
wking added a commit to wking/oc that referenced this pull request Nov 15, 2022
wking added a commit to wking/oc that referenced this pull request Nov 15, 2022
wking added a commit to wking/oc that referenced this pull request Nov 15, 2022
wking added a commit to wking/oc that referenced this pull request Nov 15, 2022
wking added a commit to wking/oc that referenced this pull request Nov 16, 2022
wking added a commit to wking/oc that referenced this pull request Nov 17, 2022
wking added a commit to wking/oc that referenced this pull request Nov 17, 2022
wking added a commit to wking/oc that referenced this pull request Nov 17, 2022
wking added a commit to wking/cluster-version-operator that referenced this pull request Feb 23, 2023
The outgoing text goes way back to the local Failing type in
7f5b7f4 (conditions: Use a consistent constant for the Failing
condition, 2019-05-19, openshift#191).  But ClusterVersion doesn't include
Degraded, and ClusterOperators don't set Failing, so we don't need a
relative-seriousness ranking.  In practice, a Degraded=True
ClusterOperator is one of several issues that could lead to a
Failing=True ClusterVersion, and when that's the only issue going on,
they clearly have the same severity.  When an Available=False
ClusterOperator feeds a Failing=True ClusterVersion, that would be
worse than a Degraded=True Available=True ClusterOperator.  And there
may also be issues like the CVO failing to reconcile a peripheral
change like an alert rule where ClusterVersion is Failing=True despite
the issue being less severe than many Degraded=True ClusterOperator
situations.