
MINOR: Log4j Improvements on Fetcher #8629

Merged
guozhangwang merged 3 commits into apache:trunk from guozhangwang:KMinor-fix-consumer-fetcher
May 7, 2020
Conversation

@guozhangwang
Contributor

@guozhangwang guozhangwang commented May 7, 2020

Committer Checklist (excluded from commit message)

  • Verify design and implementation
  • Verify test coverage and CI build status
  • Verify documentation (including upgrade notes)

@guozhangwang guozhangwang changed the title from "MINOR: Update nodesWithPendingFetchRequests in Fetcher before sending request" to "MINOR: Log4j Improvements on Fetcher" on May 7, 2020
RequestFuture<ClientResponse> future = client.send(fetchTarget, request);
// We add the node to the set of nodes with pending fetch requests before adding the
// listener because the future may have been fulfilled on another thread (e.g. during a
// listenerbecause the future may have been fulfilled on another thread (e.g. during a
Contributor


I assume this was unintentional.
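The code comment in the diff above documents an ordering constraint: the node must be added to the set of pending fetch requests before the listener is attached, because a future that was already fulfilled on another thread may invoke the listener synchronously. A minimal sketch of why that ordering matters (the `FetchFuture` and `pendingNodes` names here are illustrative stand-ins, not Kafka's actual types):

```java
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

public class ListenerOrdering {
    // Hypothetical stand-in for RequestFuture: it may already be completed
    // by the time a listener is attached, in which case the listener fires
    // synchronously inside addListener().
    static class FetchFuture {
        private final boolean alreadyCompleted;
        FetchFuture(boolean alreadyCompleted) { this.alreadyCompleted = alreadyCompleted; }
        void addListener(Runnable listener) {
            if (alreadyCompleted) listener.run(); // synchronous invocation
        }
    }

    public static void main(String[] args) {
        Set<String> pendingNodes = ConcurrentHashMap.newKeySet();
        String node = "broker-1";
        FetchFuture future = new FetchFuture(true); // fulfilled on another thread already

        // Register the node BEFORE attaching the listener ...
        pendingNodes.add(node);
        // ... so a synchronously invoked listener can safely deregister it.
        future.addListener(() -> pendingNodes.remove(node));

        // Had the listener been attached first, remove() would have run before
        // add(), leaving a stale entry in pendingNodes.
        System.out.println(pendingNodes.isEmpty()); // prints "true"
    }
}
```

With the order reversed, the synchronous callback would remove the node before it was ever added, leaving the node stuck in the pending set.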

List<ConsumerRecord<K, V>> partRecords = completedFetch.fetchRecords(maxRecords);

log.trace("Returning fetched records {} at offset {} for assigned partition {}",
partRecords, position, completedFetch.partition);
Contributor


I feel that logging all of the records, even at TRACE level, will be too much. For example, our system tests often have TRACE enabled, and huge single-line log messages are difficult to consume both visually and in systems like Elastic.

Contributor Author


Makes sense; it was primarily for my local debugging of an integration test that only sends a couple of records. I will print only the num.records instead.
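A sketch of the agreed change, logging the record count rather than the records themselves (`TraceLogDemo.format` below is a simplified stand-in for SLF4J's `{}` substitution, not Kafka code, and the sample values are invented for illustration):

```java
import java.util.List;

public class TraceLogDemo {
    // Simplified stand-in for SLF4J-style "{}" parameter substitution.
    static String format(String template, Object... args) {
        for (Object arg : args)
            template = template.replaceFirst("\\{\\}", String.valueOf(arg));
        return template;
    }

    public static void main(String[] args) {
        List<String> partRecords = List.of("r1", "r2", "r3");
        long position = 42L;
        String partition = "topic-0";

        // Before: dumps every record into one unbounded single-line message.
        String verbose = format(
                "Returning fetched records {} at offset {} for assigned partition {}",
                partRecords, position, partition);

        // After: logs only the number of records, keeping TRACE lines bounded.
        String concise = format(
                "Returning {} fetched records at offset {} for assigned partition {}",
                partRecords.size(), position, partition);

        System.out.println(verbose);
        // prints "Returning 3 fetched records at offset 42 for assigned partition topic-0"
        System.out.println(concise);
    }
}
```

The message size of the concise form no longer scales with `max.poll.records`, which is what made the original line risky under TRACE.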

@guozhangwang
Contributor Author

guozhangwang commented May 7, 2020

@mjsax @hachikuji My previous investigation of this issue led me to the broker-side DefaultRecordBatch#setLastOffset, which could sometimes write garbage into the byte buffer (sometimes deterministic but incorrect values, sometimes totally random values). After 36+ hours of chasing that lead, I concluded that it is not really a bug in our code but more likely a rabbit hole created by my IDEA setup. For example, last night after rebasing I managed to run the test 500+ times without failure, and it kept succeeding as I peeled off each of the changes I had made; yet this morning, on top of trunk (#8400), it failed again after 170+ runs. Given all of these observations, I have to write off my past two days and conclude that this is likely not related to any issue on the broker or consumer side.

Contributor

@hachikuji hachikuji left a comment


LGTM

@guozhangwang guozhangwang merged commit 71397ad into apache:trunk May 7, 2020
@guozhangwang guozhangwang deleted the KMinor-fix-consumer-fetcher branch May 7, 2020 23:30
Kvicii pushed a commit to Kvicii/kafka that referenced this pull request May 8, 2020
* 'trunk' of github.com:apache/kafka:
  KAFKA-9290: Update IQ related JavaDocs (apache#8114)
  KAFKA-9928: Fix flaky GlobalKTableEOSIntegrationTest (apache#8600)
  KAFKA-6145: Set HighAvailabilityTaskAssignor as default in streams_upgrade_test.py (apache#8613)
  KAFKA-9667: Connect JSON serde strip trailing zeros (apache#8230)
  MINOR: Log4j Improvements on Fetcher (apache#8629)
jwijgerd pushed a commit to buxapp/kafka that referenced this pull request May 14, 2020
Reviewers: Jason Gustafson <jason@confluent.io>