KAFKA-7549: Old ProduceRequest with zstd compression does not return error to client#5925
Conversation
…oduceRequest now validates the ProduceRequest instance with ProduceRequest#validateRecords.
3f8b40c to 958a409
Here is the update.
@dongjinleekr Thanks, I think the fix looks good. I am wondering if it is possible to create a test case in …
@hachikuji No problem. I will complete it by this weekend.
…ate invalid ProduceRequest. (for testing only)
Here is the update, rebased on the latest trunk. In this update, I added a method to bypass the validation logic for unit testing (commit c23db83). I know this approach is a little dangerous, but I could not find a good alternative.
hachikuji left a comment:
Thanks, left a minor suggestion. Otherwise looks good.
```java
    return build(version, true);
}

public ProduceRequest build(short version, boolean validate) {
```
Perhaps this can be private and we can expose a buildUnsafe method that sets validate to false. Then we will be less tempted to accidentally use the API.
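The suggested pattern could look roughly like this. This is a hedged sketch with simplified stand-in types — the `usesZstd` field, the version-7 cutoff, and the stub `ProduceRequest` are illustrative assumptions, not the exact Kafka code:

```java
// Sketch of the suggested pattern, with simplified stand-in types (not the
// exact Kafka code): the flagged build is private, and the only way to skip
// validation is the loudly named buildUnsafe.
class ProduceRequest {
    final short version;
    ProduceRequest(short version) { this.version = version; }

    static class Builder {
        private final boolean usesZstd; // assumption: compression is known at build time

        Builder(boolean usesZstd) { this.usesZstd = usesZstd; }

        // Normal entry point: always validates.
        ProduceRequest build(short version) {
            return build(version, true);
        }

        // Test-only escape hatch; the name makes the danger explicit.
        ProduceRequest buildUnsafe(short version) {
            return build(version, false);
        }

        // Private, so the boolean flag cannot be misused by callers.
        private ProduceRequest build(short version, boolean validate) {
            if (validate && usesZstd && version < 7)
                throw new IllegalArgumentException("zstd requires Produce API version >= 7");
            return new ProduceRequest(version);
        }
    }
}
```

The point of the naming is that a caller reaching for buildUnsafe has to acknowledge what they are skipping, while the dangerous boolean overload stays out of reach.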
hachikuji left a comment:
LGTM, thanks for the patch. @dongjinleekr Note that I went ahead and pushed the minor suggestion that I had.
@hachikuji I greatly appreciate your help. 👍
…error to client (#5925)

Older versions of the Produce API should return an error if zstd is used. This validation existed, but it was done during request parsing, which means that instead of returning an error code, the broker disconnected. This patch fixes the issue by moving the validation outside of the parsing logic. It also fixes several other record validations which had the same problem.

Reviewers: Jason Gustafson <jason@confluent.io>
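The core idea of the commit message — validate after parsing, not during — can be sketched like this. The types and the version-7 cutoff here are hypothetical simplifications, not the actual broker classes:

```java
// Simplified sketch of the fix's idea (hypothetical types, not Kafka's actual
// classes): parsing never rejects the request, so the broker can run validation
// afterward and answer with an error code instead of dropping the connection.
import java.util.Optional;

class ParsedProduce {
    final short apiVersion;
    final String compression; // e.g. "zstd"

    // Parsing only records what was sent; it does not throw for unsupported features.
    ParsedProduce(short apiVersion, String compression) {
        this.apiVersion = apiVersion;
        this.compression = compression;
    }

    // Validation moved out of parsing: returns an error name instead of throwing.
    Optional<String> validate() {
        if ("zstd".equals(compression) && apiVersion < 7)
            return Optional.of("UNSUPPORTED_COMPRESSION_TYPE");
        return Optional.empty();
    }
}
```

Because parsing always succeeds, the response path stays alive and the violation surfaces as a normal error code rather than a disconnect.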
As of the current version (2.1.0), zstd-related validations are located in the following spots:

ProduceRequest

- MemoryRecordsBuilder: can't create MemoryRecords with magic < 2 (IllegalArgumentException).
- ProduceRequest.Builder: can't create a ProduceRequest with an API version below 7 (InvalidRecordException).

FetchRequest

- KafkaApis#handleFetchRequest: returns a FetchResponse with Errors#UNSUPPORTED_COMPRESSION_TYPE if a FetchRequest with API version < 10 is delivered to a zstd-compressed topic.
- LazyDownConversionRecords#makeNext → RecordsUtil#downConvert: throws UnsupportedCompressionTypeException.

Etc

- AbstractLegacyRecordBatch.DeepRecordsIterator: a boilerplate validation for legacy record batches.

In short, there is no broker-side validation for a ProduceRequest with zstd-compressed records; this PR closes that gap. There is a reason why this validation can't live in another class, e.g., LogValidator: it can't see the API version of the ProduceRequest. The only method that can check both the CompressionType and the API version is KafkaApis#handleProduceRequest, which is why the validation is placed there.

Committer Checklist (excluded from commit message)
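Tying the description above together, here is a hedged handler-level sketch (hypothetical names, not KafkaApis itself) of the key point: only a method that sees both the request's API version and the records' compression type can map a violation to a per-partition error code instead of a disconnect:

```java
// Hypothetical handler-level sketch (not KafkaApis itself): because the handler
// sees both the request's API version and each partition's compression type, it
// can answer with UNSUPPORTED_COMPRESSION_TYPE per partition instead of
// disconnecting the client.
import java.util.HashMap;
import java.util.Map;

class ProduceHandlerSketch {
    static Map<String, String> handle(short apiVersion, Map<String, String> compressionByPartition) {
        Map<String, String> errors = new HashMap<>();
        for (Map.Entry<String, String> e : compressionByPartition.entrySet()) {
            boolean unsupported = "zstd".equals(e.getValue()) && apiVersion < 7;
            errors.put(e.getKey(), unsupported ? "UNSUPPORTED_COMPRESSION_TYPE" : "NONE");
        }
        return errors;
    }
}
```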