I'm unsure whether this is causing any performance issues (it doesn't appear to, from what I can tell), but the machine-config-operator ClusterOperator seems to cycle through statuses while the cluster is relatively idle (i.e. I am not performing any updates, applying MachineConfigs, etc.). I'm not sure why the status keeps changing, but it makes for a confusing user experience. See the following.
Running oc get clusteroperator -n machine-config-operator yields varying statuses within very short periods of time. For example:
machine-config-operator 3.11.0-494-g18b8dc50-dirty True False False 7s
machine-config-operator 3.11.0-494-g18b8dc50-dirty False True False 1s
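To pin down exactly when these flips happen, one option is to timestamp each line of a watch. This is just a sketch, assuming a POSIX shell; it relies only on oc get -w streaming updates as they occur:

$ oc get clusteroperator machine-config-operator -w --no-headers | while read -r line; do echo "$(date -u +%FT%TZ) $line"; done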
In another example, watching the resource shows the conditions: section changing: the top two entries alternate between Available=False / Progressing=True and Available=True / Progressing=False, while the bottom (Failing) entry never changes:
$ oc get clusteroperator machine-config-operator -o yaml -w
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  creationTimestamp: 2019-01-23T18:14:33Z
  generation: 1
  name: machine-config-operator
  resourceVersion: "919947"
  selfLink: /apis/config.openshift.io/v1/clusteroperators/machine-config-operator
  uid: bbc5a98d-1f3a-11e9-a4a4-02204cc1daaa
spec: {}
status:
  conditions:
  - lastTransitionTime: 2019-01-24T18:39:17Z
    status: "False"
    type: Available
  - lastTransitionTime: 2019-01-24T18:39:17Z
    message: Progressing towards 3.11.0-494-g18b8dc50-dirty
    status: "True"
    type: Progressing
  - lastTransitionTime: 2019-01-24T02:24:05Z
    status: "False"
    type: Failing
  extension:
    master: all 3 nodes are at latest configuration master-8b907698492f2829b35f6f826511dfa6
    worker: all 3 nodes are at latest configuration worker-58b30499568a7a2588b05ed01bba91a7
  version: 3.11.0-494-g18b8dc50-dirty
apiVersion: config.openshift.io/v1
kind: ClusterOperator
metadata:
  creationTimestamp: 2019-01-23T18:14:33Z
  generation: 1
  name: machine-config-operator
  resourceVersion: "919948"
  selfLink: /apis/config.openshift.io/v1/clusteroperators/machine-config-operator
  uid: bbc5a98d-1f3a-11e9-a4a4-02204cc1daaa
spec: {}
status:
  conditions:
  - lastTransitionTime: 2019-01-24T18:39:22Z
    status: "True"
    type: Available
  - lastTransitionTime: 2019-01-24T18:39:22Z
    status: "False"
    type: Progressing
  - lastTransitionTime: 2019-01-24T02:24:05Z
    status: "False"
    type: Failing
  extension:
    master: all 3 nodes are at latest configuration master-8b907698492f2829b35f6f826511dfa6
    worker: all 3 nodes are at latest configuration worker-58b30499568a7a2588b05ed01bba91a7
  version: 3.11.0-494-g18b8dc50-dirty
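For a more compact view of just the conditions, a jsonpath query over the same watch should print one line per update (a sketch; the field names come from the YAML above, but exact watch/jsonpath output formatting may vary by oc version):

$ oc get clusteroperator machine-config-operator -w -o jsonpath='{range .status.conditions[*]}{.type}={.status} {end}{"\n"}'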
I'm also not seeing anything in oc logs -f for the machine-config-operator at the times indicated in the statuses. Does anyone have an idea why this is happening? Why do the top two conditions alternate when, as far as I can tell, nothing is happening in the MCO logs?
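For reference, the command I'm tailing with is roughly the following (adjust the namespace/deployment name to your cluster; --timestamps makes it easier to line log entries up with the lastTransitionTime values above):

$ oc logs -f -n machine-config-operator deployment/machine-config-operator --timestamps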