KAFKA-8460: reduce the record size and increase the delay time #9775
showuon wants to merge 3 commits into apache:trunk from
Conversation
@chia7712 , could you help review this small PR? Thanks.
// we produce 10 records for each topic partition. There are 3 topics, and 30 partitions each topic,
// so total producerRecords size should be 10 * 3 * 30 = 900
val producerRecords = partitions.flatMap(sendRecords(producer, numRecords = 10, _))
val consumerRecords = consumeRecords(consumer, producerRecords.size, waitTimeMs = 90 * 1000)
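The arithmetic in that comment can be checked with a minimal sketch. Here `sendRecords` is a hypothetical stand-in for the test helper of the same name (the real one takes a producer and a topic partition); it just returns the records produced to a single partition, so `flatMap` concatenates them:

```scala
// Hypothetical stand-in for the test's sendRecords helper: returns the
// records produced to one partition, here modeled as plain Ints.
def sendRecords(numRecords: Int, partition: Int): Seq[Int] =
  (0 until numRecords).map(i => partition * 1000 + i)

val partitions = 0 until (3 * 30) // 3 topics x 30 partitions per topic
val producerRecords = partitions.flatMap(sendRecords(10, _))
assert(producerRecords.size == 900) // 10 records x 90 partitions
```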
Personally, 90 seconds is too long for a test case. Couldn't reducing the record size resolve this issue?
Agreed! I increased it to 90 secs just in case. I think reducing the record size is good enough.
Reverted back to 60 secs now.
maxPollRecords: Int = Int.MaxValue): ArrayBuffer[ConsumerRecord[K, V]] = {
maxPollRecords: Int = Int.MaxValue,
waitTimeMs: Int = 60000): ArrayBuffer[ConsumerRecord[K, V]] = {
val records = new ArrayBuffer[ConsumerRecord[K, V]]
Could you add the initial size? It collects all returned records, so the default size is too small for this case.
Nice catch! Updated. Thanks.
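For readers unfamiliar with the suggestion: `ArrayBuffer` has a constructor that takes an initial capacity, so pre-sizing it to the expected record count avoids repeated internal re-allocations while hundreds of records are appended. A minimal sketch (using `String` elements instead of `ConsumerRecord`, and an assumed `numRecords` value):

```scala
import scala.collection.mutable.ArrayBuffer

val numRecords = 900 // assumed expected record count for illustration
// Pre-size the buffer so it doesn't grow repeatedly while appending.
val records = new ArrayBuffer[String](numRecords)
(0 until numRecords).foreach(i => records += s"record-$i")
assert(records.size == numRecords)
```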
@chia7712 , I was too naive; I saw only 7xx records consumed in a recent build: https://ci-builds.apache.org/job/Kafka/job/kafka-trunk-jdk15/353/testReport/junit/kafka.api/PlaintextConsumerTest/testLowMaxFetchSizeForRequestAndPartition/ I don't know how slow the system can be, so I reduced it to 5 records per partition; the total will be 450 records. FYI.
@chia7712 , I found the test failed in my PR tests: it only consumed 325 records within 60 seconds!! So slow! Do you think I should reduce the record count further?
Monitoring recent test failures, I think reducing the records to 450 should be good enough. What do you think? @chia7712 https://ci-builds.apache.org/job/Kafka/job/kafka-trunk-jdk11/342/testReport/junit/kafka.api/PlaintextConsumerTest/testLowMaxFetchSizeForRequestAndPartition/
@showuon Is there a potential bug that can slow down the consumer in this test case, or is this caused by a busy Jenkins?
Looking into this flaky test, the error messages are:
https://ci-builds.apache.org/job/Kafka/job/kafka-trunk-jdk8/303/testReport/kafka.api/PlaintextConsumerTest/testLowMaxFetchSizeForRequestAndPartition/
https://ci-builds.apache.org/job/Kafka/job/kafka-trunk-jdk8/305/testReport/kafka.api/PlaintextConsumerTest/testLowMaxFetchSizeForRequestAndPartition/
https://ci-builds.apache.org/job/Kafka/job/kafka-trunk-jdk8/305/testReport/junit/kafka.api/PlaintextConsumerTest/testLowMaxFetchSizeForRequestAndPartition/
We can see the number of consumed records is not fixed and is close to 1350. After checking the test, I found it is expected to be slow, because it verifies that we can consume from all partitions when fetch.max.bytes and max.partition.fetch.bytes are low. So I think the test has no bug; it just needs more time. What I did:
Hope this makes the test more reliable!
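The poll-until-enough pattern behind `consumeRecords` and its new `waitTimeMs` parameter can be sketched as follows. This is not the actual Kafka test code; the `poll` callback stands in for `consumer.poll`, and the names mirror the diff above:

```scala
import scala.collection.mutable.ArrayBuffer

// Sketch of the consumeRecords helper: keep polling until numRecords have
// arrived or waitTimeMs elapses, then fail the test if we came up short.
def consumeRecords[T](poll: () => Seq[T], numRecords: Int, waitTimeMs: Long): ArrayBuffer[T] = {
  val records = new ArrayBuffer[T](numRecords) // pre-sized, per the review comment
  val deadline = System.currentTimeMillis() + waitTimeMs
  while (records.size < numRecords && System.currentTimeMillis() < deadline)
    records ++= poll()
  assert(records.size >= numRecords,
    s"Timed out before consuming $numRecords records (got ${records.size})")
  records
}
```

With this shape, lowering the produced record count (900 to 450) shrinks the work that must finish inside `waitTimeMs`, which is exactly the lever this PR pulls.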
Committer Checklist (excluded from commit message)