pkg/cvo/status: Failing is not more serious than Degraded #905
wking wants to merge 1 commit into openshift:master
Conversation
The outgoing text goes way back to the local Failing type in 7f5b7f4 (conditions: Use a consistent constant for the Failing condition, 2019-05-19, openshift#191). But ClusterVersion doesn't include Degraded, and ClusterOperators don't set Failing, so we don't need a relative-seriousness ranking. In practice, a Degraded=True ClusterOperator is one of several issues that could lead to a Failing=True ClusterVersion, and when that's the only issue in play, the two clearly have the same severity. When an Available=False ClusterOperator feeds a Failing=True ClusterVersion, that is worse than a Degraded=True, Available=True ClusterOperator. And there may also be issues like the CVO failing to reconcile a peripheral change such as an alert rule, where ClusterVersion is Failing=True despite the issue being less severe than many Degraded=True ClusterOperator situations.
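To make those relationships concrete, here is a minimal sketch, assuming the openshift/api config types; the `failingFromOperators` helper and its `Reason` string are hypothetical and not the CVO's actual sync logic. Both an Available=False and a Degraded=True operator surface as the same Failing=True signal, which is why a fixed severity ranking doesn't hold:

```go
package status

import (
	"fmt"

	configv1 "github.com/openshift/api/config/v1"
)

// failingFromOperators is a hypothetical helper that derives a Failing
// condition for ClusterVersion from the ClusterOperators' conditions.
// An unavailable operator and a degraded operator both collapse into the
// same Failing=True condition, so Failing's severity depends on its cause.
func failingFromOperators(operators []configv1.ClusterOperator) configv1.ClusterOperatorStatusCondition {
	for _, op := range operators {
		for _, cond := range op.Status.Conditions {
			unavailable := cond.Type == configv1.OperatorAvailable && cond.Status == configv1.ConditionFalse
			degraded := cond.Type == configv1.OperatorDegraded && cond.Status == configv1.ConditionTrue
			if unavailable || degraded {
				return configv1.ClusterOperatorStatusCondition{
					Type:    "Failing", // ClusterStatusFailing
					Status:  configv1.ConditionTrue,
					Reason:  "ClusterOperatorNotHealthy", // hypothetical reason
					Message: fmt.Sprintf("Cluster operator %s is %s=%s", op.Name, cond.Type, cond.Status),
				}
			}
		}
	}
	return configv1.ClusterOperatorStatusCondition{Type: "Failing", Status: configv1.ConditionFalse}
}
```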
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: wking. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
```diff
 // ClusterStatusFailing is set on the ClusterVersion status when a cluster
-// cannot reach the desired state. It is considered more serious than Degraded
-// and indicates the cluster is not healthy.
+// cannot reach the desired state. It indicates the cluster is not healthy.
```
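Applied, the declaration would read roughly as follows; this is a sketch reconstructed from the diff hunks above, and the surrounding const block in pkg/cvo/status.go may differ:

```go
package cvo

import configv1 "github.com/openshift/api/config/v1"

// ClusterStatusFailing is set on the ClusterVersion status when a cluster
// cannot reach the desired state. It indicates the cluster is not healthy.
const ClusterStatusFailing = configv1.ClusterStatusConditionType("Failing")
```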
Why can't we say that it probably means one or more operators are in a degraded state, rather than saying the cluster is not healthy?
Because there could be other reasons besides unavailable/degraded operators for Failing=True. Although without #867 in place, it's hard to get Telemetry stats on how frequent the various modes are.
Though this is not customer-facing documentation, it still makes me nervous to say the cluster is not healthy while also saying we don't consider this more serious than an operator's Degraded condition.
Failing may be more serious than Degraded (e.g. it may be because a ClusterOperator is Available=False). Failing may be less serious than Degraded (e.g. we may be having trouble rolling out a peripheral alert rule). I'm not saying Failing is not serious. I'm just dropping the apples-to-oranges Degraded comparison.
Even so, it seems Failing is more serious, given that it indicates a cluster cannot reach its desired state, is unhealthy, and requires an administrator to intervene. Degraded's consequences may vary with the specific cluster operator; Degraded is only an indication that something may need investigation and adjustment. As long as the operator is Available, the Degraded condition does not cause user workload failure or application downtime. Failing does sound scarier in the documentation, I am not going to lie. And a failing cluster sounds more serious than a degraded operator.
I agree with Trevor's statement:
dropping the apples-to-oranges Degraded comparison
The comparison seems to depend on the specific reasons for the conditions, and since we can't tell which reasons are more frequent (https://github.com/openshift/cluster-version-operator/pull/905/files#r1116077435) we can drop the comparison. I would simply say Failing is for reporting one group of things and Degraded is for reporting another group of things; both can be reported for more or less serious issues, and thus the comparison can be dropped.
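For illustration, a minimal sketch of consuming the two conditions as independent signals rather than ranking them; the `describe` helper is hypothetical, assuming only the openshift/api config types:

```go
package status

import configv1 "github.com/openshift/api/config/v1"

// describe is a hypothetical helper that reads Failing and Degraded as
// independent signals: Failing reports that the CVO cannot reach the
// desired state, while Degraded reports that an operator needs attention.
func describe(cv *configv1.ClusterVersion, co *configv1.ClusterOperator) []string {
	var notes []string
	for _, c := range cv.Status.Conditions {
		if c.Type == "Failing" && c.Status == configv1.ConditionTrue {
			notes = append(notes, "ClusterVersion cannot reach the desired state: "+c.Message)
		}
	}
	for _, c := range co.Status.Conditions {
		if c.Type == configv1.OperatorDegraded && c.Status == configv1.ConditionTrue {
			notes = append(notes, co.Name+" may need investigation: "+c.Message)
		}
	}
	return notes
}
```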
It's also worth pointing out that if we end up modifying the comment, we can also modify the comment in the openshift/oc repository (https://github.com/openshift/oc/blob/master/pkg/cli/admin/upgrade/upgrade.go#L32-L35).
@wking: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all failed tests required for merge:

Full PR test history. Your PR dashboard.

Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close.

/lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle rotten. If this issue is safe to close now please do so with /close.

/lifecycle rotten
/uncc
Rotten issues close after 30d of inactivity. Reopen the issue by commenting /reopen.

/close
@openshift-bot: Closed this PR.

Details: In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.