
Conversation


@wking wking commented Apr 24, 2020

To make it easier to debug precondition failures like [1].

[1]: https://bugzilla.redhat.com/show_bug.cgi?id=1827166
@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 24, 2020
@wking wking changed the title pkg/cvo/sync_worker: Log precondition handling Bug 1827166: pkg/cvo/sync_worker: Log precondition handling Apr 24, 2020
@openshift-ci-robot openshift-ci-robot added the bugzilla/severity-high Referenced Bugzilla bug's severity is high for the branch this PR is targeting. label Apr 24, 2020
@openshift-ci-robot

@wking: This pull request references Bugzilla bug 1827166, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.5.0) matches configured target release for branch (4.5.0)
  • bug is in the state NEW, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST)

In response to this:

Bug 1827166: pkg/cvo/sync_worker: Log precondition handling

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci-robot openshift-ci-robot added the bugzilla/valid-bug Indicates that a referenced Bugzilla bug is valid for the branch this PR is targeting. label Apr 24, 2020
@wking commented Apr 24, 2020

4.4 backport created manually in #361.

…heck

So we can understand why this passes or fails more easily.
@wking commented Apr 24, 2020

e2e-aws-upgrade:

level=info msg="Cluster operator authentication Progressing is True with _WellKnownNotReady: Progressing: got '404 Not Found' status while trying to GET the OAuth well-known https://10.0.128.43:6443/.well-known/oauth-authorization-server endpoint data"
level=info msg="Cluster operator authentication Available is False with : "
level=info msg="Cluster operator cloud-credential Progressing is True with Reconciling: 3 of 4 credentials requests provisioned, 0 reporting errors."
level=info msg="Cluster operator console Progressing is True with SyncLoopRefresh_InProgress: SyncLoopRefreshProgressing: Working toward version 0.0.1-2020-04-24-202827"
level=info msg="Cluster operator console Available is False with Deployment_InsufficientReplicas: DeploymentAvailable: 0 pods available for console deployment"
level=error msg="Cluster operator etcd Degraded is True with NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-143-80.us-west-2.compute.internal\" not ready since 2020-04-24 20:48:57 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)"
level=info msg="Cluster operator insights Disabled is False with : "
level=error msg="Cluster operator kube-apiserver Degraded is True with NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-143-80.us-west-2.compute.internal\" not ready since 2020-04-24 20:48:57 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)"
level=info msg="Cluster operator kube-apiserver Progressing is True with NodeInstaller: NodeInstallerProgressing: 3 nodes are at revision 2; 0 nodes have achieved new revision 6"
level=error msg="Cluster operator kube-controller-manager Degraded is True with NodeController_MasterNodesReady::NodeInstaller_InstallerPodFailed: NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-143-80.us-west-2.compute.internal\" not ready since 2020-04-24 20:48:57 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)\nNodeInstallerDegraded: 1 nodes are failing on revision 5:\nNodeInstallerDegraded: static pod of revision 5 has been installed, but is not ready while new revision 6 is pending"
level=info msg="Cluster operator kube-controller-manager Progressing is True with NodeInstaller: NodeInstallerProgressing: 1 nodes are at revision 0; 2 nodes are at revision 4; 0 nodes have achieved new revision 7"
level=error msg="Cluster operator kube-scheduler Degraded is True with NodeController_MasterNodesReady: NodeControllerDegraded: The master nodes not ready: node \"ip-10-0-143-80.us-west-2.compute.internal\" not ready since 2020-04-24 20:48:57 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)"
level=info msg="Cluster operator kube-scheduler Progressing is True with NodeInstaller: NodeInstallerProgressing: 2 nodes are at revision 5; 1 nodes are at revision 6"
level=error msg="Cluster operator machine-config Degraded is True with RequiredPoolsFailed: Failed to resync 0.0.1-2020-04-24-202827 because: timed out waiting for the condition during syncRequiredMachineConfigPools: error pool master is not ready, retrying. Status: (pool degraded: false total: 3, ready 2, updated: 3, unavailable: 1)"
level=info msg="Cluster operator machine-config Available is False with : Cluster not available for 0.0.1-2020-04-24-202827"
level=info msg="Cluster operator monitoring Available is False with : "
level=info msg="Cluster operator monitoring Progressing is True with RollOutInProgress: Rolling out the stack."
level=error msg="Cluster operator monitoring Degraded is True with UpdatingnodeExporterFailed: Failed to rollout the stack. Error: running task Updating node-exporter failed: reconciling node-exporter DaemonSet failed: updating DaemonSet object failed: waiting for DaemonSetRollout of node-exporter: daemonset node-exporter is not ready. status: (desired: 6, updated: 6, ready: 5, unavailable: 1)"
level=error msg="Cluster operator network Degraded is True with RolloutHung: DaemonSet \"openshift-multus/multus\" rollout is not making progress - last change 2020-04-24T20:53:49Z\nDaemonSet \"openshift-sdn/ovs\" rollout is not making progress - last change 2020-04-24T20:53:41Z\nDaemonSet \"openshift-sdn/sdn\" rollout is not making progress - last change 2020-04-24T20:53:27Z"
level=info msg="Cluster operator network Progressing is True with Deploying: DaemonSet \"openshift-multus/multus\" is not available (awaiting 1 nodes)\nDaemonSet \"openshift-sdn/ovs\" is not available (awaiting 1 nodes)\nDaemonSet \"openshift-sdn/sdn\" is not available (awaiting 1 nodes)"
level=error msg="Cluster operator openshift-apiserver Degraded is True with APIServerDeployment_UnavailablePod: APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver"
level=info msg="Cluster operator openshift-controller-manager Progressing is True with _DesiredStateNotYetAchieved: Progressing: daemonset/controller-manager: updated number scheduled is 2, desired number scheduled is 3"
level=fatal msg="failed to initialize the cluster: Cluster operator machine-config is reporting a failure: Failed to resync 0.0.1-2020-04-24-202827 because: timed out waiting for the condition during syncRequiredMachineConfigPools: error pool master is not ready, retrying. Status: (pool degraded: false total: 3, ready 2, updated: 3, unavailable: 1)" 

Oh, man.

/retest

// need to make sure the payload is only set when the preconditions have been successful
if !info.Local {
	if len(w.preconditions) == 0 {
		klog.V(4).Info("No preconditions configured.")

@LalatenduMohanty LalatenduMohanty Apr 25, 2020

Nitpick: "No preconditions found" sounds better?
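The change under review amounts to logging each branch of the precondition handling: skip for local payloads, note when nothing is configured, and report each check's outcome. A minimal, self-contained Go sketch of that pattern (the `Precondition` interface, `okCheck`, and `describePreconditions` below are illustrative stand-ins, not the operator's real types from pkg/payload/precondition, and plain strings stand in for klog output):

```go
package main

import "fmt"

// Precondition is a hypothetical stand-in for the cluster-version
// operator's precondition check; names here are illustrative only.
type Precondition interface {
	Name() string
	Run() error
}

// okCheck is a trivial precondition that always passes.
type okCheck struct{ name string }

func (p okCheck) Name() string { return p.name }
func (p okCheck) Run() error   { return nil }

// describePreconditions returns one log-style message per handling
// decision: skipped for local payloads, a note when no checks are
// configured, and a pass/fail line per check.
func describePreconditions(local bool, checks []Precondition) []string {
	if local {
		return []string{"Skipping preconditions for a local payload."}
	}
	if len(checks) == 0 {
		return []string{"No preconditions configured."}
	}
	var msgs []string
	for _, c := range checks {
		if err := c.Run(); err != nil {
			msgs = append(msgs, fmt.Sprintf("Precondition %q failed: %v", c.Name(), err))
			continue
		}
		msgs = append(msgs, fmt.Sprintf("Precondition %q passed.", c.Name()))
	}
	return msgs
}

func main() {
	for _, m := range describePreconditions(false, []Precondition{okCheck{name: "ClusterVersionUpgradeable"}}) {
		fmt.Println(m)
	}
}
```

Logging every branch, including the empty-list case, is what makes failures like rhbz#1827166 diagnosable from the logs alone: silence no longer means "passed or never ran".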

@LalatenduMohanty LalatenduMohanty left a comment

/lgtm

@openshift-ci-robot

@LalatenduMohanty: changing LGTM is restricted to collaborators


In response to this:

/lgtm


@openshift-ci-robot

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: LalatenduMohanty, wking

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@wking wking added the lgtm Indicates that a PR is ready to be merged. label Apr 25, 2020
@wking commented Apr 25, 2020

Setting lgtm myself while we work out @LalatenduMohanty's creds.

@openshift-merge-robot openshift-merge-robot merged commit e3c7dd1 into openshift:master Apr 25, 2020
@openshift-ci-robot

@wking: All pull requests linked via external trackers have merged: openshift/cluster-version-operator#360. Bugzilla bug 1827166 has been moved to the MODIFIED state.


In response to this:

Bug 1827166: pkg/cvo/sync_worker: Log precondition handling


@wking wking deleted the precondition-logging branch April 26, 2020 19:31