
"Bug 1823950: [baremetal] Don't failover api vip if loadbalanced endpoint is responding"#1893

Merged
openshift-merge-robot merged 3 commits into openshift:master from cybertron:keepalived-lb
Aug 25, 2020

Conversation

@cybertron
Member

We're having some issues in ci with api flakiness that appear to be
related to VIP failover. In order to reduce the need to do failovers
during normal circumstances, we want to leave the VIP alone as long
as the loadbalanced api endpoint on the node is responding. Currently
if the local api service on a node stops responding it will trigger
a failover. This is unnecessary since in most cases haproxy will
continue to distribute the traffic while the local api restarts.

In order to get this behavior, the chk_ocp vrrp_script is split in
two. The reason for this is that we still want to handle the case
where all haproxy instances in the cluster go down, but at least
one api service is still functional. One check looks at the haproxy
endpoint only. This one has a higher weight since it's the preferred
situation. The other check looks for either the haproxy endpoint or
the local endpoint. This means that if haproxy is up then both
checks will succeed and the node will have maximum priority regardless
of the state of its local api. However, if haproxy goes down but
the local api is still working then at least the priority will be
higher than the minimum because the local api is responding.

It was also necessary to add a check that the haproxy firewall rule
is in place. It does us no good to have the loadbalancer working if
traffic isn't being routed to it.
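The priority scheme described above can be sketched in keepalived configuration terms. This is an illustrative fragment only, not the actual machine-config-operator template: the script paths, port, and weight values are assumptions chosen to show the mechanism (a succeeding vrrp_script adds its weight to the instance priority).

```conf
# Hypothetical sketch of the split checks; paths, ports, and weights
# are invented for illustration, not taken from the real MCO template.

vrrp_script chk_ocp_lb {
    # Succeeds only when the local haproxy load-balanced endpoint responds.
    script "/usr/bin/curl -o /dev/null -kLfs https://localhost:9445/readyz"
    interval 1
    weight 50   # the preferred situation, so it carries the larger weight
}

vrrp_script chk_ocp_both {
    # Succeeds when either haproxy or the local api endpoint responds, so a
    # node whose haproxy is down but whose local api is alive still ranks
    # above a node where both are down.
    script "/etc/keepalived/chk_ocp_both.sh"
    interval 1
    weight 20
}

vrrp_script chk_ocp_firewall {
    # Confirms the firewall rule redirecting VIP traffic to haproxy exists;
    # a working loadbalancer is useless if traffic never reaches it.
    script "/etc/keepalived/chk_firewall.sh"
    interval 1
    weight 10
}

vrrp_instance API {
    track_script {
        chk_ocp_lb
        chk_ocp_both
        chk_ocp_firewall
    }
}
```

Under this sketch, a node with all checks passing sits at maximum priority regardless of its local api; losing only haproxy costs the larger weight but still leaves the node above one where both endpoints are dead.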

- What I did

- How to verify it

- Description for the changelog

@cybertron
Member Author

/hold

Depends on openshift/baremetal-runtimecfg#70

@openshift-ci-robot openshift-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jul 2, 2020
@bcrochet
Member

bcrochet commented Jul 2, 2020

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Jul 2, 2020
Contributor


what checks this endpoint?

Member Author


The liveness probe for haproxy.

@celebdor
Contributor

celebdor commented Jul 6, 2020

/retest

@kikisdeliveryservice kikisdeliveryservice changed the title Don't failover api vip if loadbalanced endpoint is responding [baremetal] Don't failover api vip if loadbalanced endpoint is responding Jul 7, 2020
@kikisdeliveryservice
Contributor

/assign @celebdor

@cybertron
Member Author

This is also going to need a rebase to pull in the changes in #1909

I may retarget this for 4.5 since #1909 means anything after that won't backport cleanly (unless we end up needing to backport that too, which is possible).

In any case, it will need to be reworked.

@openshift-ci-robot openshift-ci-robot added needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. and removed lgtm Indicates that a PR is ready to be merged. labels Jul 9, 2020
@openshift-ci-robot openshift-ci-robot removed the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Jul 14, 2020
Comment thread pkg/operator/bootstrap.go Outdated
@celebdor
Contributor

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Jul 21, 2020
@celebdor
Contributor

/assign @runcom

@kikisdeliveryservice
Contributor

do we expect e2e-metal-ipi to pass with this PR?

just gonna hit
/retest

Contributor


Why do we need this init container?

I think that this change will break the unicast keepalived deployment.
In unicast mode the keepalived conf should be rendered in a synchronized manner (we use etcd for that purpose), otherwise, each master will render first a cfg file with empty unicast_peers section, so we'll end up with multiple nodes holding the VIP.

Member Author


This was added to template the check scripts. The monitor only regenerates the keepalived.conf file, but to make the check ports dynamic I had to template the scripts as well. I'll have to look at whether we can put the scripts in a separate location so we can template only them. Looks like I need to rebase this anyway so I'll look at it then.
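As a rough illustration of templating only the check scripts, the init container could render something like the following. The template field names and endpoint are invented for this sketch; the keys baremetal-runtimecfg actually exposes in the node config may differ:

```sh
#!/bin/bash
# chk_ocp_lb.sh.tmpl (hypothetical): rendered once by the init container,
# with {{ .LBConfig.LbPort }} filled in from the node config populated by
# baremetal-runtimecfg. keepalived treats exit status 0 as check success.
curl -o /dev/null -kLfs "https://localhost:{{ .LBConfig.LbPort }}/readyz"
```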

Member Author


I tried the current version of this with unicast enabled and it worked correctly as far as I can tell. It's only templating the scripts in the init container, so it shouldn't affect the unicast configuration.

@cybertron
Member Author

do we expect e2e-metal-ipi to pass with this PR?

Not until openshift/baremetal-runtimecfg#70 merges and we have a new image with it. e2e-metal-ipi is just broken right now too, but that's unrelated. :-/

Rather than hardcode the ports for keepalived healthchecks, get them
from the node config populated by baremetal-runtimecfg.

For unicast support we need to not render the keepalived.conf until
the unicast mechanism is ready. However, we do need to render the
check scripts so they can pick up the correct ports from
baremetal-runtimecfg. This moves the scripts to a subdirectory of
the keepalived static-pod-resources so they can be processed
separately.
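The resulting layout might look roughly like this; the paths are assumptions inferred from the description above, not taken from the repository:

```
/etc/kubernetes/static-pod-resources/keepalived/
├── keepalived.conf.tmpl       # rendered only once the unicast peer list is known
└── scripts/
    ├── chk_ocp_lb.sh.tmpl     # rendered immediately by the init container,
    └── chk_ocp_both.sh.tmpl   # ports filled in from the node config
```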
@cybertron
Member Author

/retest
/hold cancel

I think all the dependencies have merged now. Let's see what ci says.

@openshift-ci-robot openshift-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Aug 17, 2020
@celebdor
Contributor

/retitle "Bug 1823950: [baremetal] Don't failover api vip if loadbalanced endpoint is responding"

@openshift-ci-robot openshift-ci-robot changed the title [baremetal] Don't failover api vip if loadbalanced endpoint is responding "Bug 1823950: [baremetal] Don't failover api vip if loadbalanced endpoint is responding" Aug 21, 2020
@openshift-ci-robot
Contributor

@cybertron: This pull request references Bugzilla bug 1823950, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.6.0) matches configured target release for branch (4.6.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST)
Details

In response to this:

"Bug 1823950: [baremetal] Don't failover api vip if loadbalanced endpoint is responding"

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci-robot openshift-ci-robot added bugzilla/severity-low Referenced Bugzilla bug's severity is low for the branch this PR is targeting. bugzilla/valid-bug Indicates that a referenced Bugzilla bug is valid for the branch this PR is targeting. labels Aug 21, 2020
Member

@bcrochet bcrochet left a comment


/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Aug 21, 2020
@cybertron
Member Author

/retest

I don't know if those jobs are supposed to be passing, but let's give them one more try.

@openshift-ci-robot
Contributor

@cybertron: The following tests failed, say /retest to rerun all failed tests:

  • ci/prow/e2e-aws-proxy (commit 71afaa748662fbfea24659edfc9e151f70b0686f): /test e2e-aws-proxy
  • ci/prow/e2e-aws-scaleup-rhel7 (commit 14bc327): /test e2e-aws-scaleup-rhel7
  • ci/prow/okd-e2e-aws (commit 14bc327): /test okd-e2e-aws

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@cybertron
Member Author

Of the two failing jobs, one doesn't appear to be running on this repo anymore and the other is consistently red.

@kikisdeliveryservice
Contributor

/skip

@openshift-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: bcrochet, celebdor, cybertron, kikisdeliveryservice

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Details Needs approval from an approver in each of these files:
  • OWNERS [kikisdeliveryservice]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Aug 25, 2020
@openshift-merge-robot openshift-merge-robot merged commit fa7a977 into openshift:master Aug 25, 2020
@openshift-ci-robot
Contributor

@cybertron: All pull requests linked via external trackers have merged:

Bugzilla bug 1823950 has been moved to the MODIFIED state.

Details

In response to this:

"Bug 1823950: [baremetal] Don't failover api vip if loadbalanced endpoint is responding"

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

mandre added a commit to mandre/machine-config-operator that referenced this pull request Sep 11, 2020
This ports openshift#1893 to OpenStack platform.
jcpowermac added a commit to jcpowermac/machine-config-operator that referenced this pull request Sep 22, 2020
Ports openshift#1893 to the vSphere platform.

vSphere CI is experiencing the issues described in the above
PR.
mandre added a commit to mandre/machine-config-operator that referenced this pull request Sep 23, 2020
This ports openshift#1893 to OpenStack platform.
vrutkovs pushed a commit to vrutkovs/machine-config-operator that referenced this pull request Oct 13, 2020
This ports openshift#1893 to OpenStack platform.