Bug 1823950: [baremetal] Don't failover api vip if loadbalanced endpoint is responding #1893
Conversation
/hold Depends on openshift/baremetal-runtimecfg#70
/lgtm |
what checks this endpoint?
The liveness probe for haproxy.
/retest
/assign @celebdor
/lgtm |
/assign @runcom
do we expect e2e-metal-ipi to pass with this PR? just gonna hit
Why do we need this init container?
I think this change will break the unicast keepalived deployment.
In unicast mode the keepalived conf should be rendered in a synchronized manner (we use etcd for that purpose); otherwise each master will first render a cfg file with an empty unicast_peers section, and we'll end up with multiple nodes holding the VIP.
This was added to template the check scripts. The monitor only regenerates the keepalived.conf file, but to make the check ports dynamic I had to template the scripts as well. I'll have to look at whether we can put the scripts in a separate location so we can template only them. Looks like I need to rebase this anyway so I'll look at it then.
I tried the current version of this with unicast enabled and it worked correctly as far as I can tell. It's only templating the scripts in the init container, so it shouldn't affect the unicast configuration.
Not until openshift/baremetal-runtimecfg#70 merges and we have a new image with it. e2e-metal-ipi is just broken right now too, but that's unrelated. :-/
We're having some issues in CI with API flakiness that appear to be related to VIP failover. To reduce the need for failovers under normal circumstances, we want to leave the VIP alone as long as the loadbalanced API endpoint on the node is responding. Currently, if the local API service on a node stops responding, it triggers a failover. This is unnecessary, since in most cases haproxy will continue to distribute the traffic while the local API restarts.

To get this behavior, the chk_ocp vrrp_script is split in two, because we still want to handle the case where every haproxy instance in the cluster goes down but at least one API service is still functional. One check looks at the haproxy endpoint only; it carries a higher weight since that is the preferred situation. The other check looks for either the haproxy endpoint or the local endpoint. If haproxy is up, both checks succeed and the node has maximum priority regardless of the state of its local API. If haproxy goes down but the local API still works, the node's priority stays above the minimum because the local API is responding.

It was also necessary to add a check that the haproxy firewall rule is in place: it does us no good to have the loadbalancer working if traffic isn't being routed to it.
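Taken together, the two-check arrangement could look something like the keepalived fragment below. This is a sketch only: the script names, intervals, weights, interface, and addresses are illustrative assumptions, not the values from this PR.

```
# Sketch only: names, weights, and addresses are hypothetical.
vrrp_script chk_ocp_lb {
    script "/etc/keepalived/chk_ocp_lb.sh"   # haproxy-loadbalanced endpoint only
    interval 1
    weight 50                                # preferred: loadbalancer is healthy
}

vrrp_script chk_ocp_any {
    script "/etc/keepalived/chk_ocp_any.sh"  # haproxy endpoint OR local api
    interval 1
    weight 20                                # lower weight: fallback situation
}

vrrp_instance api_vip {
    interface ens3
    virtual_router_id 51
    priority 40
    track_script {
        chk_ocp_lb
        chk_ocp_any
    }
    virtual_ipaddress {
        192.0.2.5
    }
}
```

With these illustrative weights, a node whose haproxy answers scores 40+50+20=110, a node with only a working local API scores 40+20=60, and a node with neither stays at 40, which matches the priority ordering described above.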
Rather than hardcode the ports for keepalived healthchecks, get them from the node config populated by baremetal-runtimecfg.
For unicast support we need to not render the keepalived.conf until the unicast mechanism is ready. However, we do need to render the check scripts so they can pick up the correct ports from baremetal-runtimecfg. This moves the scripts to a subdirectory of the keepalived static-pod-resources so they can be processed separately.
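As a rough illustration of what the templating step in the init container does, the sketch below renders two check scripts with the ports substituted in. The paths, script names, port numbers, and the /readyz endpoint are assumptions made for the example, not taken from the actual templates.

```shell
#!/bin/bash
# Hypothetical sketch: render keepalived check scripts with ports that would
# come from the baremetal-runtimecfg node config. All names and ports are
# illustrative, not the real ones.
API_PORT=6443   # local kube-apiserver (assumed)
LB_PORT=9445    # haproxy-loadbalanced API endpoint (assumed)
OUT=$(mktemp -d)

# Check 1: succeed only when the loadbalanced endpoint answers.
cat > "$OUT/chk_ocp_lb.sh" <<EOF
#!/bin/bash
exec curl -o /dev/null -kLsf "https://localhost:${LB_PORT}/readyz"
EOF

# Check 2: succeed when either the loadbalanced or the local API answers.
cat > "$OUT/chk_ocp_any.sh" <<EOF
#!/bin/bash
curl -o /dev/null -kLsf "https://localhost:${LB_PORT}/readyz" ||
curl -o /dev/null -kLsf "https://localhost:${API_PORT}/readyz"
EOF

chmod +x "$OUT"/chk_ocp_*.sh
echo "rendered scripts in $OUT"
```

Because the ports are baked in at render time, the scripts themselves stay trivial, which is why only the script templates (and not keepalived.conf) need to be processed before the unicast machinery is ready.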
/retest

I think all the dependencies have merged now. Let's see what CI says.
/retitle "Bug 1823950: [baremetal] Don't failover api vip if loadbalanced endpoint is responding"
@cybertron: This pull request references Bugzilla bug 1823950, which is valid. The bug has been moved to the POST state. The bug has been updated to refer to the pull request using the external bug tracker. 3 validation(s) were run on this bug.

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/retest

I don't know if those jobs are supposed to be passing, but let's give them one more try.
@cybertron: The following tests failed, say /retest to rerun all failed tests:

Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Of the two failing jobs, one doesn't appear to be running on this repo anymore and the other is consistently red.

/skip
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: bcrochet, celebdor, cybertron, kikisdeliveryservice

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
@cybertron: All pull requests linked via external trackers have merged: Bugzilla bug 1823950 has been moved to the MODIFIED state.

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Ports openshift#1893 to the OpenStack platform.
Ports openshift#1893 to the vSphere platform. vSphere CI is experiencing the issues described in the above PR.