Bug 1881147: OpenStack: Don't failover api vip if loadbalanced endpoint is responding#2110
When using CNV or other operators that modify how the node is connected to the network, we may end up in the case where the configured VRRP interface no longer has an address in the network that it is configured to hold virtual IPs in. This patch takes a page from what we do for HAProxy and adds a monitor side car container that checks keepalived and reloads it when necessary. This ports openshift#1124 to OpenStack platform, alongside with fixes from openshift#1508 and openshift#1604.
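The core check the monitor sidecar performs can be sketched roughly as follows. This is a hypothetical illustration, not the actual implementation: the function name, interface name, and subnet prefix are all assumptions, and the real sidecar would run `ip -o -4 addr show` live and reload keepalived when the check fails.

```shell
#!/bin/bash
# Hypothetical sketch of the keepalived monitor sidecar logic.
# Return 0 if the given interface holds an IPv4 address inside the VIP
# subnet. Reads `ip -o -4 addr show` output on stdin so the logic can be
# exercised offline.
iface_in_vip_subnet() {
  local iface="$1" subnet_prefix="$2"
  grep -q "^[0-9]*: ${iface} .*inet ${subnet_prefix}" -
}

# Example check against captured output (interface and subnet are made up).
# The real sidecar would reload keepalived here instead of just printing.
sample='2: ens3    inet 192.168.111.20/24 brd 192.168.111.255 scope global ens3'
if echo "$sample" | iface_in_vip_subnet ens3 "192.168.111."; then
  echo "interface ok"
else
  echo "reload keepalived"
fi
```

When the interface has lost its address in the VIP subnet, the check fails and the sidecar can reload keepalived so it stops advertising a VIP it can no longer hold.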
@mandre: This pull request references Bugzilla bug 1881147, which is invalid.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

/test e2e-openstack
We're having some issues in CI with API flakiness that appear to be related to VIP failover. To reduce the need for failovers under normal circumstances, we want to leave the VIP alone as long as the loadbalanced API endpoint on the node is responding. Currently, if the local API service on a node stops responding, it triggers a failover. This is unnecessary, since in most cases haproxy will continue to distribute the traffic while the local API restarts.

To get this behavior, the chk_ocp vrrp_script is split in two, because we still want to handle the case where all haproxy instances in the cluster go down but at least one API service is still functional. The first check looks at the haproxy endpoint only; it has a higher weight since that is the preferred situation. The second check looks for either the haproxy endpoint or the local endpoint. This means that if haproxy is up, both checks succeed and the node has maximum priority regardless of the state of its local API. If haproxy goes down but the local API is still working, the priority stays above the minimum because the local API is responding.

It was also necessary to add a check that the haproxy firewall rule is in place: it does us no good to have the loadbalancer working if traffic isn't being routed to it.

This ports openshift#1893 to OpenStack platform.
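As a rough illustration of the two-check split, the keepalived configuration might look like the fragment below. The script paths, ports, weights, and instance name are assumptions for illustration; the actual rendered template differs.

```
# Illustrative keepalived.conf fragment (paths, ports, and weights assumed).
vrrp_script chk_ocp_lb {
    # Succeeds only if the haproxy-loadbalanced endpoint responds
    # (and, per the change above, the haproxy firewall rule is in place).
    script "/etc/keepalived/chk_ocp_lb.sh"
    interval 2
    fall 2
    rise 2
    weight 20    # preferred situation: haproxy endpoint responding
}
vrrp_script chk_ocp_any {
    # Succeeds if either the haproxy endpoint OR the local api responds.
    script "/etc/keepalived/chk_ocp_any.sh"
    interval 2
    fall 2
    rise 2
    weight 10
}
vrrp_instance API {
    ...
    track_script {
        chk_ocp_lb
        chk_ocp_any
    }
}
```

With both checks passing the node gets the full weight bonus; with only the local API up it keeps the smaller bonus, so a node whose haproxy is healthy always wins the VRRP election.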
Partial port of openshift#1768 to OpenStack platform
Since we moved to keepalived healthchecking against haproxy, we want haproxy to handle most failures so the VIP doesn't have to move. Previously, however, haproxy took longer to recognize an outage on a node than keepalived did, so the VIP moved before haproxy removed the failing backend. This change sets the haproxy check interval to 1 second, so it should notice outages in 2 seconds or less (it has a fall value of 2). The keepalived interval is changed to 2 seconds, which means it detects failures in 2 to 4 seconds (also with a fall value of 2). Haproxy should therefore deal with API outages before keepalived does, allowing the VIP to stay on the same node. This ports openshift#2075 to OpenStack platform.
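In configuration terms, the timing described above corresponds to something like the fragment below. The backend name, server name, and address are hypothetical; only the interval and fall values come from the description.

```
# haproxy: inter 1s, fall 2 → failing backend marked down within ~2s
backend masters
    server master-0 192.168.111.20:6443 check inter 1s fall 2 rise 3

# keepalived: interval 2, fall 2 → script failure detected in 2–4s,
# so haproxy reacts first and the VIP stays put
vrrp_script chk_ocp_lb {
    interval 2
    fall 2
    rise 2
}
```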
/bugzilla refresh
@mandre: This pull request references Bugzilla bug 1881147, which is valid. The bug has been moved to the POST state and has been updated to refer to the pull request using the external bug tracker. 6 validations were run on this bug.
/lgtm

[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: Fedosin, mandre.
/retest Please review the full test history for this PR and help us cut down flakes.
@mandre: All pull requests linked via external trackers have merged: Bugzilla bug 1881147 has been moved to the MODIFIED state.
Manual cherry-pick of #2077 to release-4.5.