Needs work: tolerate chunked responses without a DOS vulnerability#8
roscoemcinerney wants to merge 2 commits into master
Conversation
…ansfers. Not ready for merge yet
We have code for that :)
FYI: the same code is available in Sticking with
Good idea :)
…ird failure modes around files without extension on source URL
Reiterating the code comments:
https://logo.clearbit.com/https%3A/duckduckgo.com/ which is a small and very normal PNG but can't be served.

1: This code will block until the entire file - which could be gigabytes, or even endless for a malicious server - has been transferred, rather than aborting the transfer when the size limit is passed. So we need to do something a little fiddlier than just using FileUtils.copy on the connection's InputStream.
2: This code will allow repeated requests for the same file, transferring up to the limit and then returning a 400 error each time. My instinct is that when a cache request is made for a too-large file, we should copy a placeholder image into the destination path with readable text saying "File too large to cache" - so subsequent requests become a cache hit, and (probably more importantly) legitimate users immediately know what the problem is.
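For point 1, a minimal sketch of the "fiddlier" copy: read in chunks and abort as soon as the running total passes the limit, instead of draining the whole stream first. The class and exception names here (`BoundedCopy`, `FileTooLargeException`, `copyWithLimit`) are hypothetical, not from this codebase.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;

public class BoundedCopy {
    // Hypothetical exception type for this sketch.
    static class FileTooLargeException extends IOException {
        FileTooLargeException(long limit) {
            super("Transfer exceeded limit of " + limit + " bytes");
        }
    }

    /** Copies in chunks, aborting as soon as the running total passes maxBytes. */
    static long copyWithLimit(InputStream in, OutputStream out, long maxBytes)
            throws IOException {
        byte[] buf = new byte[8192];
        long total = 0;
        int n;
        while ((n = in.read(buf)) != -1) {
            total += n;
            if (total > maxBytes) {
                // Stop mid-transfer: a malicious server can no longer hold us
                // on the line for gigabytes (or forever).
                throw new FileTooLargeException(maxBytes);
            }
            out.write(buf, 0, n);
        }
        return total;
    }

    public static void main(String[] args) throws IOException {
        // A stream under the limit copies normally...
        byte[] small = new byte[1000];
        ByteArrayOutputStream dest = new ByteArrayOutputStream();
        long copied = copyWithLimit(new ByteArrayInputStream(small), dest, 4096);
        System.out.println("copied=" + copied);

        // ...while an oversized one is cut off partway through.
        byte[] big = new byte[10000];
        try {
            copyWithLimit(new ByteArrayInputStream(big),
                          new ByteArrayOutputStream(), 4096);
        } catch (FileTooLargeException e) {
            System.out.println("aborted: " + e.getMessage());
        }
    }
}
```

Because the loop checks the total on every read, the abort happens within one buffer's worth of the limit, regardless of whether the server sends a Content-Length or streams chunked.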
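For point 2, one way to make the too-large case a cache hit: write a placeholder into the destination path the first time the limit trips. This sketch writes plain text for brevity; the real fix would copy a pre-rendered placeholder image, and the names (`PlaceholderCache`, `cachePlaceholder`) are assumptions.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class PlaceholderCache {
    // Hypothetical placeholder body; a real service would use an image
    // containing the readable text "File too large to cache".
    static final byte[] PLACEHOLDER =
            "File too large to cache".getBytes(StandardCharsets.UTF_8);

    /** Writes the placeholder into the cache path so later requests hit the cache. */
    static void cachePlaceholder(Path destination) throws IOException {
        Files.createDirectories(destination.getParent());
        Files.write(destination, PLACEHOLDER);
    }

    public static void main(String[] args) throws IOException {
        // Simulate the cache directory with a temp dir.
        Path dest = Files.createTempDirectory("cache").resolve("logos/example.png");
        cachePlaceholder(dest);
        System.out.println("cached=" + Files.exists(dest));
        System.out.println("bytes=" + Files.size(dest));
    }
}
```

After this runs once for a too-large file, subsequent requests find the placeholder at the destination path and are served as ordinary cache hits, and the user sees why the real image is missing instead of a bare 400.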
We're not in a hurry to tie off this branch (it's due to a problem found in Aidan's testing, not live) and the changes above aren't hard to implement at all - but I'm raising this PR as a sanity check for when I come back to it.