[WIP] Add ConsolePodsMustSettle() helper to test/e2e #357
Conversation
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: benjaminapetersen

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
The possibility that sibling PRs such as: could break our merge queue or make it to production without an automated verification that our pods will continue to function concerns me a bit.
jhadvig left a comment:
I know it's WIP, just adding a couple of nits.
test/e2e/framework/console.go
Outdated
```go
count++
t.Log(fmt.Sprintf("running %d time(s)", count))
err, ok := ConsolePodsRunning(clientset)
t.Logf(fmt.Sprintf("is ok: %t, why? %s", ok, err))
```
Suggested change:

```diff
- t.Logf(fmt.Sprintf("is ok: %t, why? %s", ok, err))
+ t.Log(fmt.Sprintf("is ok: %t, why? %s", ok, err))
```
I would only suggest printing if an error occurs; I don't think we need output when ok is true.
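A minimal sketch of the loop body with that suggestion applied, assuming ConsolePodsRunning keeps the (error, bool) return shown in the diff above; the logIfNotSettled helper name and the kubernetes.Interface parameter type are hypothetical, not from the PR:

```go
package framework

import (
	"testing"

	"k8s.io/client-go/kubernetes"
)

// logIfNotSettled is a hypothetical wrapper around the PR's
// ConsolePodsRunning: it only produces test output when the console
// pods are not yet OK, per the review suggestion above.
func logIfNotSettled(t *testing.T, clientset kubernetes.Interface, attempt int) bool {
	err, ok := ConsolePodsRunning(clientset)
	if !ok {
		// Passing the format string straight to t.Logf also avoids the
		// redundant fmt.Sprintf-inside-t.Logf on the original line.
		t.Logf("attempt %d: not ok, why? %v", attempt, err)
	}
	return ok
}
```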
test/e2e/framework/console.go
Outdated
```go
if deployment.Status.Replicas > 2 {
	// todo: may or may not be a bad thing, we need to investigate further
	// this may simply indicate churn, new pods are rolling out, old pods being terminated
	// do we need to consider this?
```
Since deployment.status.replicas targets "non-terminated pods", we should be OK.
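For what it's worth, Status.Replicas counts all non-terminated pods across the deployment's ReplicaSets, so during a rolling update with surge both old and new pods can be live at once and the count can briefly exceed the desired number. A rough sketch of a "settled" predicate along those lines; the field combination here is an assumption for illustration, not the PR's actual check:

```go
package framework

import appsv1 "k8s.io/api/apps/v1"

// deploymentSettled is a hypothetical predicate: the deployment has
// converged when the non-terminated pod count (Status.Replicas), the
// up-to-date count, and the available count all equal the desired
// replica count. During a rolling update with surge, Status.Replicas
// can briefly exceed the desired count (the "churn" discussed above).
func deploymentSettled(d *appsv1.Deployment) bool {
	desired := int32(1) // Kubernetes defaults Spec.Replicas to 1 when unset
	if d.Spec.Replicas != nil {
		desired = *d.Spec.Replicas
	}
	return d.Status.Replicas == desired &&
		d.Status.UpdatedReplicas == desired &&
		d.Status.AvailableReplicas == desired
}
```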
@benjaminapetersen: The following tests failed, say /retest to rerun all failed tests:

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close.

/lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle rotten. If this issue is safe to close now please do so with /close.

/lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue by commenting /reopen.

/close
@openshift-bot: Closed this PR.

Details: In response to this:

> We should be testing that the console pods also settle down into a happy state in our e2e tests.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
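Since the PR was closed as a WIP, the finished helper never landed above. For reference, a minimal sketch of what a ConsolePodsMustSettle helper along these lines could look like, assuming ConsolePodsRunning keeps the (error, bool) return shown in the diffs; the poll interval and timeout are placeholder values, not the PR's actual choices:

```go
package framework

import (
	"testing"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// ConsolePodsMustSettle polls until the console pods report a steady
// running state, failing the test if they never settle within the
// timeout. Sketch only: interval and timeout are placeholders.
func ConsolePodsMustSettle(t *testing.T, clientset kubernetes.Interface) {
	attempt := 0
	pollErr := wait.Poll(10*time.Second, 5*time.Minute, func() (bool, error) {
		attempt++
		err, ok := ConsolePodsRunning(clientset)
		if !ok {
			// Per the review above, only log when the pods are not yet OK.
			t.Logf("attempt %d: console pods not settled yet, why? %v", attempt, err)
		}
		return ok, nil
	})
	if pollErr != nil {
		t.Fatalf("console pods did not settle: %v", pollErr)
	}
}
```

A helper like this would give sibling PRs the automated verification the author was asking for: any change that leaves the console deployment churning would fail e2e instead of reaching the merge queue unnoticed.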