
KAFKA-9080: Addresses MessageFormatChangeTest.testCompatibilty with version 0.9.0.1#7628

Closed
tuvtran wants to merge 3 commits into apache:trunk from tuvtran:KAFKA-9080

Conversation

Contributor

@tuvtran tuvtran commented Oct 31, 2019

#7167 added a check for non-incremental offsets in `assignOffsetsNonCompressed`, which is not applicable to message formats V0 and V1. Therefore, this PR adds a condition that disables the check when the record version precedes V2.
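A minimal sketch of the idea, in Python rather than the actual Scala in `LogValidator` (names and structure here are simplified assumptions, not the real implementation):

```python
RECORD_VERSION_V2 = 2

def assign_offsets_non_compressed(records, initial_offset, record_version):
    """Assign consecutive offsets to a batch of records.

    For V2+ only, first verify that the batch's existing offsets are
    continuous and incremental (the check added in #7167); V0/V1
    batches do not carry that guarantee, so the check is skipped.
    """
    base = records[0]["offset"] if records else None
    offset = initial_offset
    for i, record in enumerate(records):
        if record_version >= RECORD_VERSION_V2 and record["offset"] != base + i:
            raise ValueError("batch offsets are not continuous and incremental")
        record["offset"] = offset
        offset += 1
    return offset  # offset of the next record to be appended
```

Under this model, a batch with gaps in its original offsets is accepted for V0/V1 (as before #7167) but rejected for V2.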

Committer Checklist (excluded from commit message)

  • Verify design and implementation
  • Verify test coverage and CI build status
  • Verify documentation (including upgrade notes)

Contributor Author

tuvtran commented Oct 31, 2019

All MessageFormatChangeTest tests passed.

@tuvtran tuvtran marked this pull request as ready for review October 31, 2019 23:03
Contributor Author

tuvtran commented Oct 31, 2019

@omkreddy @guozhangwang ready for reviews

@guozhangwang
Contributor

@tuvtran I took a look at the PR, but I still cannot understand why continuous offsets no longer hold for V0/V1: with V0/V1, the inner records use the base offset plus a relative offset, which should also be continuous, right?

Contributor

junrao commented Nov 1, 2019

@tuvtran : Looking at the logic in DeepRecordsIterator, it seems there is a difference between the V0 and V1 message formats. In V1, the wrapper message has the absolute base offset and each inner message has a relative offset. In V0, each inner message has an absolute offset and the offset in the wrapper message is ignored. So we need to distinguish between these two cases during offset validation.
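The distinction can be sketched as a tiny Python model (a simplification of what `DeepRecordsIterator` does, not the actual code):

```python
def absolute_inner_offset(wrapper_base_offset, inner_offset, magic):
    """Resolve an inner message's absolute offset for compressed batches.

    magic 0: the inner message already stores an absolute offset, and
             the wrapper's offset is ignored.
    magic 1: the inner message stores an offset relative to the
             wrapper's absolute base offset.
    """
    if magic == 0:
        return inner_offset
    return wrapper_base_offset + inner_offset
```

So for a V1 wrapper at base offset 100, an inner relative offset of 3 resolves to 103, while a V0 inner message carrying offset 103 resolves to 103 regardless of the wrapper.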

Contributor

omkreddy commented Nov 1, 2019

Can we revert the check added in `assignOffsetsNonCompressed`, so that we keep the existing behavior in 2.4? We can add the complete fix in the next release.

Contributor Author

tuvtran commented Nov 1, 2019

Yeah I can create a separate JIRA for V0/V1 offset validation.

Contributor Author

tuvtran commented Nov 1, 2019

retest this please

@guozhangwang
Contributor

> @tuvtran : Looking at the logic in DeepRecordsIterator, it seems there is a difference between the V0 and V1 message formats. In V1, the wrapper message has the absolute base offset and each inner message has a relative offset. In V0, each inner message has an absolute offset and the offset in the wrapper message is ignored. So we need to distinguish between these two cases during offset validation.

What confuses me here is that, in the V0, V1, or V2 case, the inner message offset (whether it is inferred as base + relative, or encoded as absolute) should always be continuous, right? @junrao @tuvtran

Contributor

junrao commented Nov 1, 2019

@guozhangwang : It seems the issue is that `expectedInnerOffset` always starts at offset 0. This will fail the check `if (record.offset != expectedOffset)` for V0 messages, since each inner offset is the absolute offset.
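That failure mode can be illustrated with a small sketch (hypothetical Python mirroring the described check, not the real code): the expected inner offsets count up from 0, which matches V1's relative offsets but not V0's absolute ones.

```python
def inner_offsets_pass_check(inner_offsets):
    """Model of the check: expectedInnerOffset starts at 0 and is
    incremented per record, so it only matches relative offsets."""
    return all(actual == expected
               for expected, actual in enumerate(inner_offsets))

# V1: relative inner offsets start at 0, so the check passes.
# V0: absolute inner offsets (e.g. starting at 57) fail the same check,
#     even though they are still continuous.
```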

Contributor

@omkreddy omkreddy left a comment


@tuvtran Thanks for the PR. LGTM. Please raise JIRA to handle V0/V1 offset validation.

@omkreddy omkreddy closed this in f5a4519 Nov 2, 2019
omkreddy pushed a commit that referenced this pull request Nov 2, 2019
…batch does have continuous incremental offsets

#7167 added a check for non-incremental offsets in `assignOffsetsNonCompressed`, which is not applicable for message format V0 and V1. Therefore, I added a condition to disable the check if the record version precedes V2.

Author: Tu Tran <tu@confluent.io>

Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>

Closes #7628 from tuvtran/KAFKA-9080
Member

ijuma commented Nov 2, 2019

Can we make sure to include a unit test that verifies this fix? Generally, we should aim to do that before merging.

@guozhangwang
Contributor

> @guozhangwang : It seems the issue is that `expectedInnerOffset` always starts at offset 0. This will fail the check `if (record.offset != expectedOffset)` for V0 messages, since each inner offset is the absolute offset.

That makes sense, thanks!

@tuvtran tuvtran deleted the KAFKA-9080 branch November 4, 2019 22:10
KetkiT pushed a commit to KetkiT/kafka that referenced this pull request Nov 5, 2019
…batch does have continuous incremental offsets

apache#7167 added a check for non-incremental offsets in `assignOffsetsNonCompressed`, which is not applicable for message format V0 and V1. Therefore, I added a condition to disable the check if the record version precedes V2.

Author: Tu Tran <tu@confluent.io>

Reviewers: Manikumar Reddy <manikumar.reddy@gmail.com>

Closes apache#7628 from tuvtran/KAFKA-9080
