MINOR: Next round of fetcher thread consolidation#5587

Merged
hachikuji merged 7 commits into apache:trunk from
hachikuji:more-fetcher-thread-consolidation
Aug 31, 2018

Conversation

@hachikuji
Contributor

Pull the epoch request build logic up to AbstractFetcherThread. Also get rid of the FetchRequest indirection.
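The consolidation described here can be sketched as lifting shared request-building logic into the abstract base class so subclasses supply only their transport-specific pieces. The names below (`buildEpochRequest`, `latestEpoch`, the simplified types) are illustrative, not Kafka's exact internal API:

```scala
// Hypothetical sketch of lifting shared logic into AbstractFetcherThread.
// Names and types are illustrative, not Kafka's actual internals.
abstract class AbstractFetcherThread(val name: String) {
  // Subclasses supply only the source-specific lookup.
  protected def latestEpoch(partition: String): Option[Int]

  // Common epoch-request build logic, previously duplicated per subclass,
  // now lives once in the base class.
  def buildEpochRequest(partitions: Seq[String]): Map[String, Int] =
    partitions.flatMap(tp => latestEpoch(tp).map(tp -> _)).toMap
}

class ReplicaFetcherThread extends AbstractFetcherThread("replica-fetcher") {
  private val epochs = Map("topic-0" -> 5, "topic-1" -> 3)
  override protected def latestEpoch(partition: String): Option[Int] =
    epochs.get(partition)
}
```

Partitions with no known epoch simply drop out of the request map, which is one plausible reason the `FetchRequest` indirection is no longer needed.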

Committer Checklist (excluded from commit message)

  • Verify design and implementation
  • Verify test coverage and CI build status
  • Verify documentation (including upgrade notes)

Member


Am I missing something or did we drop the usage of sessionPartitions? Was that not needed?

Contributor Author


Nope, you're right. I misunderstood the usage.

@hachikuji
Contributor Author

retest this please

@rajinisivaram
Contributor

@hachikuji I tried running a build locally, but see a lot of integration test failures. Do they work for you?

@hachikuji
Contributor Author

@rajinisivaram Sorry about that. I must have broken something in the recent push. I'll take a look.

@hachikuji
Contributor Author

@rajinisivaram I pushed a couple of fixes. Tests are passing locally.

@hachikuji force-pushed the more-fetcher-thread-consolidation branch from 5597101 to 27c4309 on August 31, 2018 00:12
@rajinisivaram
Contributor

retest this please

Contributor

@rajinisivaram left a comment


@hachikuji Thanks for the PR. Left a minor question (just for my understanding). LGTM.


private def processFetchRequest(fetchRequest: REQ) {
private def maybeTruncate(fetchedEpochs: Map[TopicPartition, EpochEndOffset]): ResultWithPartitions[Map[TopicPartition, OffsetTruncationState]] = {
val fetchOffsets = scala.collection.mutable.HashMap.empty[TopicPartition, OffsetTruncationState]
Contributor


nit: we don't need the `scala.collection.` prefix here, do we?
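For context on the nit: with the conventional `mutable` import, the fully qualified `scala.collection.mutable.HashMap` prefix is unnecessary while still keeping the mutability visible at the call site. A minimal sketch with placeholder types (`String`/`Long` standing in for `TopicPartition`/`OffsetTruncationState`):

```scala
// Conventional style: import the mutable package once, then qualify with
// `mutable.` so the mutability stays visible without the full prefix.
import scala.collection.mutable

val fetchOffsets = mutable.HashMap.empty[String, Long]
fetchOffsets += ("topic-0" -> 42L)
```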

.forReplica(fetchRequestVersion, replicaId, maxWait, minBytes, fetchData.toSend)
.setMaxBytes(maxBytes)
.toForget(fetchData.toForget)
if (fetchMetadataSupported) {
Contributor


Is this check not required? I wasn't sure why the check was there before.

Contributor Author


I couldn't figure out why we needed it either. If the fetch version doesn't support it, then the metadata will be ignored. It was odd to check for metadata support, but not the toForget partitions.
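The argument above — that metadata set on a request version without fetch-session support is simply ignored, so the guard is redundant — can be sketched with an illustrative builder. This is not Kafka's actual `FetchRequest.Builder`; the version cutoff (fetch sessions arrived with fetch v7) is the only detail taken from the protocol:

```scala
// Illustrative builder, not Kafka's FetchRequest.Builder. Session metadata
// set on a pre-sessions version is silently dropped, so callers can set it
// unconditionally.
case class FetchMetadata(sessionId: Int, epoch: Int)

class FetchRequestBuilder(version: Short) {
  private var metadata: Option[FetchMetadata] = None

  def setMetadata(m: FetchMetadata): this.type = {
    // Fetch sessions exist only from fetch request v7 onward.
    if (version >= 7) metadata = Some(m)
    this
  }

  def effectiveMetadata: Option[FetchMetadata] = metadata
}
```

Under this reading, the `if (fetchMetadataSupported)` check at the call site duplicates a decision the serialization layer already makes.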

Contributor

@rajinisivaram left a comment


@hachikuji Thanks for the explanation, LGTM

@hachikuji
Contributor Author

@rajinisivaram Thanks for the review. The test failure is unrelated; I submitted a separate fix for it in #5595. I will go ahead and merge.

@hachikuji hachikuji merged commit f6890e7 into apache:trunk Aug 31, 2018
@ijuma
Member

ijuma commented Sep 1, 2018

Nice clean-up! One remaining item that didn't look nice was the Option.get calls after getPartition. I submitted a tiny clean-up for that:

https://github.com/apache/kafka/pull/5598/files
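The clean-up ijuma describes — removing `Option.get` calls after a lookup — can be sketched as follows. The names here (`endOffsetOrThrow`, the `partitions` map) are hypothetical, standing in for the `getPartition` pattern in the PR:

```scala
// Hypothetical sketch of the Option.get clean-up. A lookup followed by .get
// throws a bare NoSuchElementException if the entry vanished; handling the
// missing case explicitly gives a meaningful error instead.
val partitions = Map("topic-0" -> 100L)

// Before: partitions.get(tp).get  (opaque failure when tp is absent)
// After: the absent case is explicit and carries context.
def endOffsetOrThrow(tp: String): Long =
  partitions.getOrElse(tp,
    throw new IllegalStateException(s"Partition $tp is no longer tracked"))
```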

Contributor

@junrao left a comment


@hachikuji : Sorry for the late review. Added one more comment below.

currentPartitionFetchState.isReadyForFetch) {
// In this case, we only want to process the fetch response if the partition state is ready for fetch and
// the current offset is the same as the offset requested.
val fetchOffset = fetchStates(topicPartition).fetchOffset
Contributor


There is actually a reason why we fetch the partition state directly from partitionStates here again under the partitionMapLock. partitionStates can be modified after we release the lock in doWork() (e.g., due to a leadership change). By the time we get to processFetchRequest(), a replica may have become the leader and therefore have been removed from partitionStates. In that case, we don't want to append the pending fetched data to the log, since this partition could already have appended new data from the producer. That could cause the leader epoch to be out of order in the log.
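The pattern junrao describes — re-reading the partition state under the lock before appending, so a stale in-flight response is dropped when the partition was removed or its offset moved — can be sketched like this. The names (`processFetchResponse`, the simplified `String`/`Long` state) are illustrative, not Kafka's exact API:

```scala
import java.util.concurrent.locks.ReentrantLock
import scala.collection.mutable

// Illustrative sketch: re-check partitionStates under partitionMapLock
// before appending, because the map can change between building the fetch
// and processing the response (e.g. a leadership change removes the entry).
val partitionMapLock = new ReentrantLock()
val partitionStates = mutable.Map("topic-0" -> 10L) // partition -> fetch offset
val log = mutable.ArrayBuffer.empty[String]

def processFetchResponse(tp: String, requestedOffset: Long, data: String): Boolean = {
  partitionMapLock.lock()
  try {
    partitionStates.get(tp) match {
      // Append only if the partition is still tracked AND the tracked offset
      // still matches what the in-flight request asked for.
      case Some(offset) if offset == requestedOffset =>
        log += data
        true
      case _ =>
        false // partition removed or offset moved: drop the stale response
    }
  } finally partitionMapLock.unlock()
}
```

Without the offset equality check, data fetched for an old offset could be appended after the local replica became leader and took producer writes, reordering leader epochs in the log.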

Contributor


On another look, the change is fine. We still check partitionStates in processFetchRequest().

pengxiaolong pushed a commit to pengxiaolong/kafka that referenced this pull request Jun 14, 2019
Pull the epoch request build logic up to `AbstractFetcherThread`. Also get rid of the `FetchRequest` indirection.

Reviewers: Ismael Juma <ismael@juma.me.uk>, Rajini Sivaram <rajinisivaram@googlemail.com>
