Bug 1823950: Reverse haproxy and keepalived check timings #2075
Conversation
/retest
@cybertron: This pull request references Bugzilla bug 1823950, which is valid. The bug has been updated to refer to the pull request using the external bug tracker. 3 validations were run on this bug.
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/retest
Hmm, I should change this timeout too.
Since we moved keepalived to health-check against haproxy, we want haproxy to handle most failures so the VIP doesn't have to move. However, haproxy previously took longer than keepalived to recognize an outage on a node, which resulted in the VIP moving before haproxy removed the failing backend. This change makes the haproxy check interval 1 second, so it should notice outages in 2 seconds or less (because it has a fall value of 2). The keepalived interval is changed to 2 seconds, which means it will detect failures in 2 to 4 seconds (also with a fall value of 2). As a result, haproxy should deal with API outages before keepalived does, allowing the VIP to stay on the same node.
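The detection windows quoted above follow from simple interval arithmetic. A small sketch of that reasoning (the helper function and its name are mine, not part of the PR):

```python
def detection_window(interval_s, fall):
    """Best/worst-case time in seconds for a periodic health check with
    the given probe interval to mark a target down after `fall`
    consecutive failed probes."""
    best = (fall - 1) * interval_s   # outage begins just before a probe fires
    worst = fall * interval_s        # outage begins just after a probe fires
    return best, worst

# haproxy: 1s interval, fall 2 -> marked down in 1 to 2 seconds
print(detection_window(1, 2))  # (1, 2)
# keepalived: 2s interval, fall 2 -> marked down in 2 to 4 seconds
print(detection_window(2, 2))  # (2, 4)
```

Since haproxy's worst case (2 s) is no larger than keepalived's best case (2 s), haproxy should win the race and drop the failing backend before keepalived decides to move the VIP.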
Force-pushed 119aacb to 0f7517f
/approve
@cybertron Don't we need this fix for 4.6? I can see that the PR was labeled with 4.7.
Yes. I think the label got added because I pushed it without a bug reference initially. @kikisdeliveryservice Are you okay with dropping the 4.7 label? This is needed to complete a 4.6 bug fix.
/test e2e-metal-ipi
go for it! 😄 (there originally wasn't a bz attached to this)
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: bcrochet, cybertron, kikisdeliveryservice. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/retest Please review the full test history for this PR and help us cut down flakes.
@cybertron: The following test failed, say /retest to rerun all failed tests:
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/retest Please review the full test history for this PR and help us cut down flakes.
@cybertron: All pull requests linked via external trackers have merged:
Bugzilla bug 1823950 has been moved to the MODIFIED state. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Since we moved keepalived to health-check against haproxy, we want haproxy to handle most failures so the VIP doesn't have to move. However, haproxy previously took longer than keepalived to recognize an outage on a node, which resulted in the VIP moving before haproxy removed the failing backend. This change makes the haproxy check interval 1 second, so it should notice outages in 2 seconds or less (because it has a fall value of 2). The keepalived interval is changed to 2 seconds, which means it will detect failures in 2 to 4 seconds (also with a fall value of 2). As a result, haproxy should deal with API outages before keepalived does, allowing the VIP to stay on the same node. This ports openshift#2075 to the OpenStack platform.
Since we moved to keepalived healthchecking against haproxy, we want
haproxy to handle most failures so the VIP doesn't have to move.
However, previously the time it took for haproxy to recognize an
outage on a node was longer than it was for keepalived, which resulted
in the VIP moving before haproxy removed the failing backend.
This change makes the haproxy interval 1 second, so it should notice
outages in 2 seconds or less (because it has a fall value of 2).
The keepalived interval is changed to 2, which means it will detect
failures in 2 to 4 seconds (also a fall value of 2). This means
haproxy should deal with API outages before keepalived does and
allow the VIP to stay on the same node.
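Roughly, the settings in question would look like this in the rendered haproxy and keepalived configs. This is a sketch of the two timing knobs only: the backend name, server address, port, rise values, and check-script path are illustrative, not taken from this PR's templates.

```
# haproxy.cfg (sketch): probe each backend every 1s; mark it down
# after 2 consecutive failed probes (inter 1s, fall 2)
backend masters
    server master-0 192.0.2.10:6443 check inter 1s fall 2 rise 3

# keepalived.conf (sketch): run the haproxy check every 2s; fail the
# VIP over after 2 consecutive failures (interval 2, fall 2)
vrrp_script chk_haproxy {
    script "/etc/keepalived/check_haproxy.sh"
    interval 2
    fall 2
    rise 2
}
```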