load balancer health check for kube-apiserver#3537
openshift-merge-robot merged 2 commits into openshift:master
Conversation
@abhinavdahiya let me know if this is the right place for this doc, otherwise I will move it. /assign @abhinavdahiya
nit: space in front of "reports"
everywhere in the doc: load balancers
be clear: not configurable by the user, but by the devs
be clear that this is an example. P2 could be right at T+0s, depending on the alignment of the probe request interval.
I made it clear that this is a worst-case scenario, used to calculate the "at most 30s" bound.
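To make that worst-case arithmetic concrete, here is a minimal sketch; the interval and threshold values below are illustrative assumptions, not the installer's authoritative settings:

```python
# Worst-case time for a load balancer to pull an unready kube-apiserver
# backend out of rotation. Values are illustrative assumptions only.
probe_interval_s = 10    # assumed: seconds between health check probes
unhealthy_threshold = 3  # assumed: consecutive failures before removal

# Worst case: /readyz flips to failing just after a probe (P1) succeeded,
# so the first failing probe (P2) lands almost a full interval later and
# removal happens after `unhealthy_threshold` consecutive failures.
worst_case_s = unhealthy_threshold * probe_interval_s
print(f"backend removed at most T+{worst_case_s}s")  # -> T+30s

# Best case: /readyz flips right before a probe fires, so P2 lands at
# ~T+0s (the alignment case called out above).
best_case_s = (unhealthy_threshold - 1) * probe_interval_s
print(f"backend removed as early as ~T+{best_case_s}s")  # -> T+20s
```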
do we know from aws/gcp docs that this is really the case?
This is true for AWS; I copied them verbatim from the AWS doc.
link the docs, to make this easy to check? Seems like it's the classic-LB docs, but the installer uses network load balancers (classic LBs are aws_elb).
I could not find a doc that describes this for network load balancers exclusively. I think the health check mechanics should be the same for classic, application, and network LBs. Maybe we can ask our AWS account rep.
On the other hand, what we stipulate above must hold true for all health checks universally. Otherwise, if we allow one interval to bleed into the next, we don't have a deterministic "at most".
Force-pushed from 28371c9 to b8d4bb5.
nit: ok -> 200 OK? I expect LBs to care about HTTP status codes and not about the response body. And your ok is likely shorthand for the 200 status, but I think explicitly saying "200" (and possibly even "HTTP status 200 OK") would make it harder to misunderstand.
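A minimal sketch of that distinction, written as a hand-rolled probe; the endpoint URL and TLS handling are examples, not the installer's actual health check:

```python
# Sketch: decide health from the HTTP status code (200 OK), never from the
# response body text. URL and TLS settings below are illustrative only.
import ssl
import urllib.error
import urllib.request

def backend_is_ready(url: str = "https://10.0.0.1:6443/readyz",
                     timeout: float = 5.0) -> bool:
    # Health checkers typically skip certificate verification, as assumed here.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with urllib.request.urlopen(url, timeout=timeout, context=ctx) as resp:
            return resp.status == 200  # status code, not body, is the signal
    except (urllib.error.URLError, OSError):
        return False  # refused or timed-out connections count as unhealthy
```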
Thanks for the detailed doc @tkashem!
I think the next step will be to make this doc discoverable by linking this doc from the code that defines these healthchecks in data/data/{aws,gcp}.
@abhinavdahiya I linked the doc.
Elsewhere in the doc you have:
In future we will reduce shutdown-delay-duration to 30s.
I'd rather make this portion of the doc robust to that sort of pivot by using T+shutdown-delay-duration here.
As far as the user/dev is concerned, they should treat shutdown-delay-duration as 30s for the purpose of designing health check probes. So I changed it to T+30s.
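As a sketch of that timeline (offsets assume shutdown-delay-duration=30s per the discussion above; the event descriptions are paraphrased, not authoritative):

```python
# Illustrative graceful-shutdown timeline for kube-apiserver, assuming
# shutdown-delay-duration=30s as settled on above.
shutdown_delay_duration_s = 30  # value of --shutdown-delay-duration (assumed)

timeline = [
    (0, "SIGTERM received; /readyz begins reporting failure"),
    (shutdown_delay_duration_s,
     "load balancers must have stopped sending new traffic by now; "
     "the apiserver stops accepting new connections"),
]
for offset, event in timeline:
    print(f"T+{offset}s: {event}")
```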
Does this 60s also have a config variable name that we can use to guard against future default changes?
This is hardcoded in kube-apiserver.
lgtm
Per [0], the /readyz endpoint is how the api communicates that it is gracefully shutting down. Once /readyz starts to report failure, we want to stop sending traffic to that backend. If we wait for /healthz, it may be too late, because once /healthz starts failing the api is already not accepting connections.

[0]: openshift/installer#3537
Per [0], the /readyz endpoint is how the api communicates that it is gracefully shutting down. Once /readyz starts to report failure, we want to stop sending traffic to that backend. If we wait for /healthz, it may be too late, because once /healthz starts failing the api is already not accepting connections.

I also moved the liveness probe for haproxy itself to use a /readyz endpoint for consistency. This isn't strictly necessary, but I think it will be less confusing if there aren't multiple health check endpoints in the config.

[0]: openshift/installer#3537
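A small sketch of why probing /readyz (rather than /healthz) gives the load balancer time to drain traffic; the state transitions below paraphrase the commit message and assume shutdown-delay-duration=30s:

```python
# During graceful shutdown, /readyz fails immediately while /healthz keeps
# returning 200 until the listener actually closes. Offsets are illustrative.
def endpoint_states(t_s: int, shutdown_delay_s: int = 30):
    """Return (healthz_ok, readyz_ok) t_s seconds after SIGTERM."""
    readyz_ok = False                    # fails as soon as shutdown begins
    healthz_ok = t_s < shutdown_delay_s  # passes while connections drain
    return healthz_ok, readyz_ok

for t in (0, 15, 30):
    healthz_ok, readyz_ok = endpoint_states(t)
    print(f"T+{t:>2}s  /healthz={'200' if healthz_ok else 'FAIL'}"
          f"  /readyz={'200' if readyz_ok else 'FAIL'}")

# A probe on /readyz removes the backend with the drain window to spare;
# a probe on /healthz only reacts once connections are already refused.
```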
Force-pushed from b8d4bb5 to dcd415c.
this is probably not correct comment syntax in python
oops, my bad. fixed.
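For reference (the original typo isn't shown in this thread), valid Python comment syntax looks like:

```python
# Python line comments start with '#'
print("ok")  # inline comments use '#' as well
```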
/test e2e-gcp-upi
Force-pushed from 49cb2af to 3bc71bb.
@tkashem: The following test failed, say /retest to rerun all failed tests:
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/approve
Adding valid bug since this is a docs update.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: abhinavdahiya
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Details: Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/retest Please review the full test history for this PR and help us cut down flakes.
Per [0], the /readyz endpoint is how the api communicates that it is gracefully shutting down. Once /readyz starts to report failure, we want to stop sending traffic to that backend. If we wait for /healthz, it may be too late, because once /healthz starts failing the api is already not accepting connections.

I also moved the liveness probe for haproxy itself to use a /readyz endpoint for consistency. This isn't strictly necessary, but I think it will be less confusing if there aren't multiple health check endpoints in the config.

[0]: openshift/installer#3537
(cherry picked from commit 022933c)