http: adding 100-Continue support to Envoy #2497
alyssawilk merged 10 commits into envoyproxy:master from
Conversation
Signed-off-by: Alyssa Wilk <alyssar@chromium.org>
@mattklein123 I think I have the critical tests working (integration + HTTP connection manager resumption), so while I should add 1-2 more for coverage completeness, do you mind taking a pass at this? I definitely wanted your take because, even avoiding duplicate encode/decodeHeaders calls, this was more complicated than I'd initially expected! Sorry for the size - I couldn't figure out a way to split it up, but I'm open to ideas.
Force-pushed from b256dd9 to c02d392
mattklein123 left a comment
This is neat. Generally looks great to me. Some random comments. Great work!
| * Encode 100-Continue headers.
| * @param headers supplies the 100-Continue header map to encode.
| */
| virtual void encode100ContinueHeaders(const HeaderMap& headers) PURE;
General design question: do you think it's worthwhile passing the HeaderMap around everywhere? Is anyone ever going to look at it or do anything with it? I'm wondering if we could just make this void, and then have the header map that gets encoded be effectively static on either side? Thoughts?
I don't ever expect to do anything with it. I couldn't find anything in the spec about 100-Continue responses being limited to the status line, and IMO if they're allowed to carry other headers we're better off passing the map around than changing the API in a year when someone wants to add trace headers or some such.
| void continueDecoding() override { NOT_IMPLEMENTED; }
| void addDecodedData(Buffer::Instance&, bool) override { NOT_IMPLEMENTED; }
| const Buffer::Instance* decodingBuffer() override { return buffered_body_.get(); }
| // The async client won't pause if sending an Expect: 100-Continue so simply
Is it worth asserting somewhere that no one actually sends expect continue on async client? Might be a nice sanity check somewhere.
I can imagine use cases where we might legitimately get one (forwarding logging requests with original headers) so I think swallowing is the right thing to do.
| if (CodeUtility::is1xx(response_code)) {
|   connection_manager_.stats_.named_.downstream_rq_1xx_.inc();
|   connection_manager_.listener_stats_.downstream_rq_1xx_.inc();
| }
nit: else if (or change the following to match)
this one still needs fixing.
LGTM other than this one. Can we clean this up and make it consistent one way or the other?
Oh!
I kept putting these lines together and not noticing that fix_format undid my change, because I didn't change it to "else if". "But I swear I fixed this! Twice!"
Actually fixed :-)
| request_headers_->Expect()->value() == Headers::get().ExpectValues._100Continue.c_str()) {
|   // Note in the case Envoy is handling 100-Continue complexity, it skips the filter chain
|   // and sends the 100-Continue directly to the encoder.
|   continue_headers_ =
create a static continue header to use?
| // Note in the case Envoy is handling 100-Continue complexity, it skips the filter chain
| // and sends the 100-Continue directly to the encoder.
| continue_headers_ =
|     HeaderMapPtr{new HeaderMapImpl{{Headers::get().Status, std::to_string(100)}}};
nit: instead of 100 using the constant in codes.h would be better.
| // responds with 1xx. In the future, if needed, we can properly handle 1xx in higher layer
| // code, or just eat it.
| if (stream->headers_->Status() && stream->headers_->Status()->value() == "100") {
|   throw CodecProtocolException("Unexpected 100-continue in header continuation");
Can this happen? I think nghttp2 blocks this?
| stream->decoder_->decodeHeaders(std::move(stream->headers_), stream->remote_end_stream_);
| if (stream->headers_->Status() && stream->headers_->Status()->value() == "100") {
|   if (stream->remote_end_stream_) {
|     throw CodecProtocolException("Unexpected end stream with 100-Continue headers.");
nghttp2 likely checks this. Can this happen?
| downstream_response_started_ = true;
| // We will double count response codes for 100-Continue.
| upstream_request_->upstream_host_->outlierDetector().putHttpResponseCode(100);
nit: use constant in codes.h, also can you verify the outlier detector code isn't going to do something stupid with a 100 status code and maybe add some tests there? (Not even sure if we should send 100 to the outlier detector, I guess that is worth discussing).
Now that I think of it, given 100-Continue is timing based (clients can choose to post data right away, and then servers needn't send it) and likely rare, I'm inclined to not send it. I don't think it's worth an extra interface, and it breaks the 'calling things twice' policy.
| downstream_response_started_ = true;
| // We will double count response codes for 100-Continue.
| upstream_request_->upstream_host_->outlierDetector().putHttpResponseCode(100);
| // Don't send retries after 100-Continue has been sent on.
Technically, I think we could still retry by resending Expect: 100-continue again and eating the next 100-Continue, but I agree this is overcomplicated and not necessary. I might expand this comment a bit though about that potential and why we are disabling retry, with an optional TODO to implement retry if you feel that would ever happen.
I'd argue that's not the right behavior: technically someone could have different backends with different behaviors, and the next one would not accept. I mean, it's not the end of the world, but it is the start of a response. Will definitely comment.
| bool generate_request_id_;
| Http::DateProvider& date_provider_;
| Http::ConnectionManagerListenerStats listener_stats_;
| bool proxy_100_continue_;
nit: const (see if other members can be constified)
Hahaha. I added the config after I did my const check. Will double-check the rest just in case.
mattklein123 left a comment
LGTM, a few more comments. There are also still some nits from my previous review that need action, but now I'm thinking you just didn't get around to them yet, so I stopped commenting on them.
| runtime_(runtime), local_info_(local_info), cluster_manager_(cluster_manager),
| listener_stats_(config_.listenerStats()) {}
|
| const std::unique_ptr<const Http::HeaderMap> ConnectionManagerImpl::CONTINUE_HEADER{
Technically I think this breaks the non-POD rule (though I don't care). Might want to expose via private function and use CONSTRUCT_ON_FIRST_USE.
Yeah, I was mentally claiming it was just moving an existing non-POD header map, but I should really clean as I move it.
| if (CodeUtility::is1xx(response_code)) {
|   connection_manager_.stats_.named_.downstream_rq_1xx_.inc();
|   connection_manager_.listener_stats_.downstream_rq_1xx_.inc();
| }
this one still needs fixing.
| }
|
| // Strip the T-E headers etc. Defer other header additions as well as drain-close logic to the
| // continuation headers.
Do we care that this function will get called twice in the proxy 100 continue case? Now it doesn't matter but might it matter in the future? Might be worth putting some comments in the code to watch out for this.
| Buffer::WatermarkBufferPtr buffered_response_data_;
| HeaderMapPtr response_trailers_{};
| HeaderMapPtr request_headers_;
| HeaderMapPtr request_headers_{};
Please merge master following #2495.
@alyssawilk FYI I think this will break envoy-filter-example. Can you fix the filter there when this merges? (Or maybe you already did that and I'm forgetting.)
Doesn't one of our builds test that envoy-filter-example works? I remember CI catching a failure when I was refactoring the integration tests.
I can never remember which sequence breaks things. Ah OK, I thought this change affected all filters, but I guess not.
Yeah, asan and tsan.
Deploying at Lyft right now and seeing crashes. In the words of @mattklein123

Argh! Sorry :-(
This reverts commit 235d477. Signed-off-by: Matt Klein <mklein@lyft.com>
…proxy#2497) * initial raw bytes Signed-off-by: Kuat Yessenov <kuat@google.com> * add benchmark Signed-off-by: Kuat Yessenov <kuat@google.com> * better benchmark Signed-off-by: Kuat Yessenov <kuat@google.com> * better benchmark Signed-off-by: Kuat Yessenov <kuat@google.com>
By default Envoy will still strip Expect: 100-Continue and send the 100 Continue response itself, but each HttpConnectionManager configured with proxy_100_continue will instead pass the header through and proxy the 100 Continue from upstream.
Risk Level: Medium-High (http connection manager refactor)
Testing: thorough unit tests and integration tests
Docs Changes: envoyproxy/data-plane-api#455
Fixes #2325
API Changes: envoyproxy/data-plane-api#433