
MINOR: revert change in log level for skipped record #8079

Closed
vvcephei wants to merge 1 commit into apache:trunk from vvcephei:minor-fix-skipped-record-log

Conversation

@vvcephei
Contributor

It looks like this was a mistake during #6521.

Specifically, while addressing code review comments
to change other logs from debug to warning, this
one seems to have been included by accident.

Committer Checklist (excluded from commit message)

  • Verify design and implementation
  • Verify test coverage and CI build status
  • Verify documentation (including upgrade notes)

@vvcephei
Contributor Author

Ping @ableegoldman or @cadonna for a review

vvcephei referenced this pull request Feb 10, 2020
Due to KAFKA-8159, Streams will throw an unchecked exception when a caching layer or in-memory underlying store is queried over a range of keys from negative to positive. We should add a check for this, log it, and return an empty iterator (as the RocksDB stores happen to do) rather than crash.

Reviewers: Bruno Cadonna <bruno@confluent.io> Bill Bejeck <bbejeck@gmail.com>
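The referenced fix can be illustrated with a minimal, self-contained sketch. The class and method names below (`RangeQuerySketch`, `fetchRange`) are hypothetical placeholders, not the actual Streams store API; the point is only the guard described above: an inverted key range yields a logged warning and an empty iterator instead of an exception.

```java
import java.util.Collections;
import java.util.Iterator;

public class RangeQuerySketch {
    // Hypothetical sketch of the guard described in the commit message:
    // if the serialized "from" key sorts after the "to" key, log a warning
    // and return an empty iterator instead of throwing.
    static Iterator<byte[]> fetchRange(byte[] from, byte[] to) {
        if (compareUnsigned(from, to) > 0) {
            System.out.println("WARN Returning empty iterator for invalid key range");
            return Collections.emptyIterator();
        }
        // A real store would scan the underlying map or RocksDB here;
        // this sketch has no backing data, so it also returns empty.
        return Collections.emptyIterator();
    }

    // Lexicographic comparison of byte arrays, treating bytes as unsigned,
    // mirroring how serialized keys are ordered in the byte stores.
    static int compareUnsigned(byte[] a, byte[] b) {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int c = Integer.compare(a[i] & 0xFF, b[i] & 0xFF);
            if (c != 0) return c;
        }
        return Integer.compare(a.length, b.length);
    }
}
```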
@ableegoldman
Member

I replied on the user mailing list thread about my hesitation to "fix" this, but if we do ultimately decide to go back to debug, note that we should also fix this in InMemorySessionStore and AbstractRocksDBSegmentedBytesStores

@vvcephei
Contributor Author

After discussing this with @cadonna, he filled me in that this change was actually intentional as part of KIP-444. Closing this PR.

@vvcephei vvcephei closed this Feb 10, 2020
@vvcephei vvcephei deleted the minor-fix-skipped-record-log branch February 10, 2020 19:29
@IndeedSi

@vvcephei @ableegoldman Hi, I'm seeing a large number of these WARN logs every time our instances restart. I think it's the same case as described in the mailing list. These logs are causing high disk and CPU consumption.

Is there any followup after that mail thread? Thanks!

@vvcephei
Contributor Author

Hi @IndeedSi ,

No, as far as I know Jiri's solution was to increase the join window size. Judging from the investigation, it seemed like there really were out-of-order records in the repartition topic that needed to be buffered so they could be processed instead of dropped (as opposed to some kind of superfluous logging as we initially thought).

Does your investigation point to the same cause? If so, then really the only thing that can be done is to increase the join window (i.e., buffer more records) so that you aren't dropping any data.

I hope this helps,
-John

@IndeedSi

@vvcephei Thanks, John.
I think you are right. We did end up with a larger window size (for other reasons), and this log no longer appears during restarts.

