KAFKA-12455: OffsetValidationTest.test_broker_rolling_bounce fail: Raft #10322

Merged: ijuma merged 3 commits into apache:trunk on Mar 16, 2021
Conversation
Contributor (Author): This patch needs to be cherry-picked to 2.8.
Contributor (Author): As per an offline conversation, since 2 brokers is a supported cluster size, we would prefer that this system test keep 2 brokers instead of bumping it to 3 -- we have lots of tests that run with 3 brokers already. So I will change the test to use the …
ijuma reviewed on Mar 16, 2021.
Member ijuma approved these changes on Mar 16, 2021 and left a comment:

LGTM, thanks. Can we please update the PR description to summarize things first and include the details later?
Contributor (Author): @ijuma Thanks for the suggestions -- all set, I think.
ijuma pushed a commit that referenced this pull request on Mar 16, 2021:

KAFKA-12455: Fix OffsetValidationTest.test_broker_rolling_bounce failure with Raft (#10322)

This test was failing when used with a Raft-based metadata quorum but succeeding with a ZooKeeper-based quorum. This patch increases the consumers' session timeouts to 30 seconds, which fixes the Raft case and also eliminates flakiness that has historically existed in the ZooKeeper case. This patch also fixes a minor logging bug in RaftReplicaManager.endMetadataChangeDeferral() that was discovered during the debugging of this issue, and it adds an extra logging statement in RaftReplicaManager.handleMetadataRecords() when a single metadata batch is applied, mirroring the logging statement that occurs when deferred metadata changes are applied.

In the Raft system test case, the consumer was sometimes receiving a METADATA response with just 1 alive broker, and then when that broker rolled, the consumer wouldn't know about any alive nodes. It would have to wait until the broker returned before it could reconnect, and by that time the group coordinator on the second broker would have timed out the client and initiated a group rebalance. The test explicitly checks that no rebalances occur, so the test would fail.

It turns out that the reason the ZooKeeper configuration wasn't seeing rebalances was plain luck. The brokers' metadata caches in the ZooKeeper configuration show 1 alive broker even more frequently than the Raft configuration does. If we tweak the metadata.max.age.ms value on the consumers, we can easily get the ZooKeeper test to fail; in fact, this system test has historically been flaky for the ZooKeeper configuration.

We can get the test to pass by setting session.timeout.ms=30000 (which is longer than the roll time of any broker), or we can increase the broker count so that the client never sees a METADATA response with just a single alive broker and therefore never loses contact with the cluster for an extended period of time. We have plenty of system tests with 3+ brokers, so we choose to keep this test with 2 brokers and increase the session timeout.

Reviewers: Ismael Juma <ismael@juma.me.uk>
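The timing argument in the commit message above can be illustrated with a small model. This is a hypothetical sketch, not code from the Kafka codebase or the system test: it simply captures the claim that a rebalance occurs whenever the consumer's blackout (the roll time of its only known broker) exceeds the group session timeout.

```python
# Hypothetical sketch of the timing argument from the commit message above.
# Neither the function nor the numbers come from the Kafka source; the only
# value taken from the PR is the patched session timeout of 30 seconds.

def rebalance_expected(broker_roll_seconds: float,
                       session_timeout_seconds: float) -> bool:
    """If the consumer's only known broker is down for the whole roll,
    the coordinator evicts the consumer once the session timeout elapses."""
    return broker_roll_seconds > session_timeout_seconds

# Assume a broker roll takes ~20s (illustrative number, not from the source).
roll_seconds = 20.0

print(rebalance_expected(roll_seconds, 10.0))  # shorter timeout: rebalance, test fails
print(rebalance_expected(roll_seconds, 30.0))  # patched 30s timeout: no rebalance
```

With the patched 30-second timeout longer than any single broker roll, the consumer survives the blackout and the test's "no rebalances" assertion holds.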
ijuma added a commit to confluentinc/kafka that referenced this pull request on Mar 17, 2021:

Conflicts:
* Jenkinsfile: `install` -> `publishToMavenLocal`, drop ARM build and other changes that don't make sense for Confluent's version of `Jenkinsfile`.
* build.gradle: keep Confluent changes for automatically skipping signing for specific version patterns (upstream only does it if the version ends with `SNAPSHOT`).

Commits:
* apache-github/trunk: (59 commits)
  * MINOR: Remove redundant allows in import-control.xml (apache#10339)
  * MINOR: remove some specifying types in tool command (apache#10329)
  * KAFKA-12455: Fix OffsetValidationTest.test_broker_rolling_bounce failure with Raft (apache#10322)
  * MINOR: Add toString to various Kafka Metrics classes (apache#10330)
  * KAFKA-12330; FetchSessionCache may cause starvation for partitions when FetchResponse is full (apache#10318)
  * KAFKA-12427: Don't update connection idle time for muted connections (apache#10267)
  * MINOR; Various code cleanups (apache#10319)
  * HOTFIX: timeout issue in removeStreamThread() (apache#10321)
  * revert stream logging level back to ERROR (apache#10320)
  * KAFKA-12352: Make sure all rejoin group and reset state has a reason (apache#10232)
  * KAFKA-10348: Share client channel between forwarding and auto creation manager (apache#10135)
  * MINOR: Update year in NOTICE (apache#10308)
  * KAFKA-12398: Fix flaky test `ConsumerBounceTest.testClose` (apache#10243)
  * MINOR: Remove redundant inheritance from FilteringJmxReporter#onMetricRemoved (apache#10303)
  * KAFKA-12462: proceed with task revocation in case of thread in PENDING_SHUTDOWN (apache#10311)
  * KAFKA-12460; Do not allow raft truncation below high watermark (apache#10310)
  * MINOR: Log project, gradle, java and scala versions at the start of the build (apache#10307)
  * KAFKA-10357: Add missing repartition topic validation (apache#10305)
  * MINOR: Improve error message in MirrorConnectorsIntegrationBaseTest (apache#10268)
  * MINOR: Add missing unit tests for Mirror Connect (apache#10192)
  * ...
OffsetValidationTest.test_broker_rolling_bounce was failing when used with a Raft-based metadata quorum but succeeding with a ZooKeeper-based quorum. This patch increases the consumers' session timeouts to 30 seconds, which fixes the Raft case and also eliminates flakiness that has historically existed in the ZooKeeper case. This patch also fixes a minor logging bug in RaftReplicaManager.endMetadataChangeDeferral() that was discovered during the debugging of this issue, and it adds an extra logging statement in RaftReplicaManager.handleMetadataRecords() when a single metadata batch is applied, mirroring the logging statement that occurs when deferred metadata changes are applied.

In the Raft system test case, the consumer was sometimes receiving a METADATA response with just 1 alive broker, and then when that broker rolled, the consumer wouldn't know about any alive nodes. It would have to wait until the broker returned before it could reconnect, and by that time the group coordinator on the second broker would have timed out the client and initiated a group rebalance. The test explicitly checks that no rebalances occur, so the test would fail.

It turns out that the reason the ZooKeeper configuration wasn't seeing rebalances was plain luck. The brokers' metadata caches in the ZooKeeper configuration show 1 alive broker even more frequently than the Raft configuration does. If we tweak the metadata.max.age.ms value on the consumers, we can easily get the ZooKeeper test to fail; in fact, this system test has historically been flaky for the ZooKeeper configuration.

We can get the test to pass by setting session.timeout.ms=30000 (which is longer than the roll time of any broker), or we can increase the broker count so that the client never sees a METADATA response with just a single alive broker and therefore never loses contact with the cluster for an extended period of time. We have plenty of system tests with 3+ brokers, so we choose to keep this test with 2 brokers and increase the session timeout.
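As a concrete illustration of the chosen fix, the override could look roughly like this. The property keys `session.timeout.ms` and `metadata.max.age.ms` are real Kafka consumer configuration names, but the dict wrapper is purely illustrative; how the system test actually wires the override into the consumer service is not shown in this PR excerpt.

```python
# Illustrative consumer config overrides matching the fix described above.
# The keys are standard Kafka consumer config names; the structure is a
# sketch, not the actual system-test plumbing.

consumer_overrides = {
    # Longer than the roll time of any broker, so the group coordinator
    # does not evict the consumer while its only known broker is bouncing.
    "session.timeout.ms": 30000,
    # metadata.max.age.ms is deliberately left at its default: per the
    # analysis above, lowering it makes the ZooKeeper variant flaky too.
}

print(consumer_overrides["session.timeout.ms"])  # prints 30000
```

Raising only the session timeout is the minimal change that keeps the 2-broker topology, which the reviewers preferred over adding a third broker.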
Committer Checklist (excluded from commit message)