Fix Transfer-Encoding header not removed for buffered requests #1401
Conversation
When the client sends a chunked request and the request body is larger than client_body_buffer_size, nginx writes the body to a temporary file. Since no "Content-Length" header is set on the request, we have to set "Content-Length" manually from the file size.
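Reconstructed from the diff fragments in this PR, the seek-based size check looks roughly like this (a sketch only; the actual patch may differ in details):

```lua
-- Sketch of the PR's approach: measure the buffered body's temporary file
-- by seeking to its end (reconstructed from the diff; details may differ).
local function fsize(path)
  local file, err = io.open(path, "rb")
  if not file then
    return nil, err
  end
  local current = file:seek()    -- get current position
  local size = file:seek("end")  -- offset of end-of-file == size in bytes
  file:seek("set", current)      -- restore position
  file:close()
  return size
end
```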
kevprice83 left a comment:
Some big changes are required here; see the individual comments. We should also make clear in the PR that this is only a short-term fix for the fact that we don't yet support chunked requests.
Unit tests and a CHANGELOG entry will also be required.
Move the file reading and size calculation into the file.lua script and keep the header handling in the http_proxy.lua script.
If you need any help with this let us know and we can try to sync up with you.
    end

    function fsize (path)
      local file, err = open(path)
This is blocking I/O; I don't think we want to do this on every request, as it will really hurt performance.
      return nil, err
    end

    local current = file:seek() -- get current position
This seems unnecessary, as we already have a file_reader function which is implemented with Lua coroutines and will be much safer, in my opinion.
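For context, a coroutine-based reader of the kind this comment refers to might look like the following (a hypothetical sketch; APICast's real file_reader may differ in names and chunk size):

```lua
-- Hypothetical sketch of a coroutine-based file reader, similar in spirit
-- to the file_reader mentioned above (names and chunk size are assumptions).
local CHUNK_SIZE = 4096

local function file_reader(path)
  return coroutine.wrap(function()
    local file = assert(io.open(path, "rb"))
    repeat
      -- read the file in fixed-size chunks instead of all at once
      local chunk = file:read(CHUNK_SIZE)
      if chunk then coroutine.yield(chunk) end
    until not chunk
    file:close()
  end)
end

-- Usage: iterate the chunks without loading the whole body into memory:
-- for chunk in file_reader("/tmp/body") do ... end
```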
    if temp_file_path then
      -- Set Content-Length only if none exist
      if not request.headers["Content-Length"] then
This check should not be needed. The RFC states that only one of Transfer-Encoding and Content-Length should be sent, and for chunked requests the latter should be omitted or ignored, in which case we must always set it.
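A minimal sketch of the unconditional replacement this comment asks for (the helper name and flat header-table shape are assumptions, not APICast's actual API):

```lua
-- Hypothetical helper: once the body is fully buffered, always swap the
-- chunked marker for an explicit Content-Length (names are illustrative).
local function replace_chunked_headers(headers, body_size)
  -- Per the RFC, Content-Length is ignored when Transfer-Encoding is
  -- present, so drop the latter and set the former unconditionally.
  headers["Transfer-Encoding"] = nil
  headers["Content-Length"] = tostring(body_size)
  return headers
end
```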
    if temp_file_path then
      -- Set Content-Length only if none exist
      if not request.headers["Content-Length"] then
        local size, err = fsize(temp_file_path)
Better to move this logic to the file_reader function, where the file is safely read in chunks. I think what we can do here is push each chunk into a table, iterate over the chunks to get their sizes, sum them, and set that as the Content-Length value. I've not thought too much about the implementation yet, but given that each chunk should be of a known size, we might not even need to push each chunk into a table: simply count the chunks, multiply the chunk size by the number of chunks minus one, and then we only ever need to measure the last chunk. Maybe @eguzki can comment on this proposal.
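The chunk-summing idea could be sketched as follows (purely illustrative; the function name and reader shape are assumptions):

```lua
-- Illustrative sketch of the proposal above: sum chunk sizes while the body
-- streams through a reader iterator, instead of stat-ing the temp file.
local function body_size_from_chunks(reader)
  local total = 0
  for chunk in reader do
    total = total + #chunk  -- #chunk is the chunk's length in bytes
  end
  return total
end
```

With fixed-size chunks this reduces to (number of chunks - 1) * chunk_size plus the size of the last chunk, as the comment suggests.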
    end

    -- The whole request is buffered in memory, so remove the Transfer-Encoding: chunked
    if request.headers["Transfer-Encoding"] == "chunked" then
We should make it very clear here that this is a short-term fix, and remove this logic once we finally add support for chunked requests.
|
@eguzki let's try to discuss this soon. I have documented a patch here: https://access.redhat.com/solutions/7013421 and will share a gist or draft PR as a reference, but ultimately my feeling is that this is not something we want to add to the product. I think it would be much better to solve this by adding support for chunked requests; otherwise this patch is something of an anti-pattern, in my opinion, and customers won't realise their requests are being modified.
|
@tkan145 did you review my comments yet? Did you try to implement that? In the end I actually provided a bug fix for the customer who reported the issue; it's in the original support case. I still think the correct fix we would want to ship is full support for chunked requests, without touching any request headers. @eguzki would you agree?
|
@kevprice83 yes, I have updated code for this PR, and I also have a patch to support native chunked requests, but I haven't fully tested it. Let me know what you guys want to do and I'll send another PR.
|
@tkan145 that should be submitted as a new PR, in my opinion, and this one closed. EDIT: I don't know what the new patch is intending to solve because, as I mentioned, this is not a bug, and we should only be implementing support for chunked requests, nothing else. Feel free to submit the patched files as a learning exercise if you wish. As I said above, the patch is already documented and working, and it has been provided to the customer.
|
Closing this, as #1400 has a better implementation.
What
By default, APICast buffers the entire request before sending it upstream. However, if proxies are involved and the request is sent with a "Transfer-Encoding: chunked" header, APICast will forward the full buffered request body together with the "Transfer-Encoding: chunked" header, which confuses the upstream server.
This PR fixes https://issues.redhat.com/browse/THREESCALE-9542 by removing the "Transfer-Encoding: chunked" header.
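The effect on the forwarded headers can be illustrated with a small sketch (the header-table shape is an assumption; APICast's internal request representation may differ):

```lua
-- Before the fix: the fully buffered request still carries the marker.
local headers = { ["Transfer-Encoding"] = "chunked", ["Host"] = "example.com" }

-- The fix: the body is no longer chunked once buffered, so drop the header.
headers["Transfer-Encoding"] = nil

assert(headers["Transfer-Encoding"] == nil)  -- upstream sees a plain request
```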