KAFKA-17393: Remove log.message.format.version/message.format.version (KIP-724) #18267
ijuma merged 8 commits into apache:trunk from
Conversation
force-pushed from b542a90 to a673575
ijuma
left a comment
Thanks for the PR. It's looking pretty good. I left a few comments.
// Validate the configurations.
val configNamesToExclude = excludedConfigs(topic, topicConfig)
val props = new Properties()
topicConfig.asScala.foreachEntry { (key, value) =>
@@ -436,7 +375,7 @@ private Optional<Compression> getCompression() {
    }

    public RecordVersion recordVersion() {
Shall we remove this method? It doesn't make sense now that we don't have a log config for record version.
</li>
<li>The <code>org.apache.kafka.clients.producer.internals.DefaultPartitioner</code> and <code>org.apache.kafka.clients.producer.UniformStickyPartitioner</code> class was removed.
</li>
<li>The <code>log.message.format.version</code> and <code>message.format.version</code> were removed.
We should add the word configs before were removed.
}

@nowarn("cat=deprecation")
def setIbpAndMessageFormatVersions(config: Properties, version: MetadataVersion): Unit = {
@@ -113,9 +106,7 @@ class ConsumerWithLegacyMessageFormatIntegrationTest extends AbstractConsumerTes
prop.setProperty(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "false")
Shall we remove this property and comment? Also, the topic name should be updated.
Signed-off-by: PoAn Yang <payang@apache.org>
force-pushed from 9bb186f to 045dd80
ijuma
left a comment
Thanks for the updates, just a couple more comments.
MemoryRecordsBuilder builder = new MemoryRecordsBuilder(
    buffer,
    magic,
    RecordVersion.V2.value,
Perhaps use RecordBatch.CURRENT_MAGIC_VALUE here and other similar places.
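The suggestion above favors the named "current" constant over a hardcoded version. A minimal sketch of the idea, using stand-in constants that mirror Kafka's `RecordBatch` (which defines `MAGIC_VALUE_V0/V1/V2` and a `CURRENT_MAGIC_VALUE` alias, currently 2) rather than the real class:

```java
// Illustration only: stand-in constants mirroring Kafka's RecordBatch, not the real class.
public class MagicValueExample {
    static final byte MAGIC_VALUE_V2 = 2;
    // Alias for whatever the newest batch format is; currently v2.
    static final byte CURRENT_MAGIC_VALUE = MAGIC_VALUE_V2;

    public static void main(String[] args) {
        // Using the "current" alias means call sites follow automatically
        // if the current format ever changes, instead of pinning v2 forever.
        byte magic = CURRENT_MAGIC_VALUE;
        System.out.println(magic);
    }
}
```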
def getLogConfig(topicPartition: TopicPartition): Option[LogConfig] = localLog(topicPartition).map(_.config)

- def getMagic(topicPartition: TopicPartition): Option[Byte] = getLogConfig(topicPartition).map(_.recordVersion.value)
+ def getMagic(topicPartition: TopicPartition): Option[Byte] = getLogConfig(topicPartition).map(_ => RecordVersion.V2.value)
Thanks for the review and suggestion. I tried to remove ReplicaManager#getMagic and it looks like some related logic on this path can be removed too. Can I also remove it? Thanks.
Perhaps we can leave the changes to the records for a separate PR. One thing we have to be careful about is that old record formats may exist on disk - so the functionality to handle that needs to remain.
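The caution above is that batches already on disk may still be v0/v1, so read paths must keep branching on the batch's magic byte even though new writes are always v2. A hedged sketch of that dispatch; the method below is illustrative, not Kafka's actual code:

```java
// Illustrative only: shows why magic-byte handling must survive the config removal.
public class MagicDispatch {
    static String describeBatch(byte magic) {
        switch (magic) {
            case 0:
            case 1:
                // Legacy formats can still appear in existing segment files.
                return "legacy batch: may need up-conversion to v2";
            case 2:
                return "current batch: readable as-is";
            default:
                throw new IllegalArgumentException("unknown magic: " + magic);
        }
    }

    public static void main(String[] args) {
        System.out.println(describeBatch((byte) 0));
        System.out.println(describeBatch((byte) 2));
    }
}
```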
Signed-off-by: PoAn Yang <payang@apache.org>
force-pushed from 8ff57db to 53cf9f0
Signed-off-by: PoAn Yang <payang@apache.org>
force-pushed from 53cf9f0 to 443e802
val offsetKey = GroupMetadataManager.readMessageKey(message.key).asInstanceOf[OffsetKey]
assertEquals(groupId, offsetKey.key.group)
assertEquals("foo", offsetKey.key.topicPartition.topic)
}
I added back this code and made the test pass by returning the right partition from replica manager.
@Test
- def shouldRespondWithUnsupportedMessageFormatForBadPartitionAndNoErrorsForGoodPartition(): Unit = {
+ def shouldRespondWithNoErrorsForGoodPartition(): Unit = {
    val tp1 = new TopicPartition("t", 0)
I think we can delete this test. shouldRespondWithUnknownTopicOrPartitionForBadPartitionAndNoErrorsForGoodPartition seems to cover this path already.
I left a couple of comments above and pushed a commit with a few improvements. Outside of the two comments, I think we're good.
groupMetadataManager.cleanupGroupMetadata()

verify(partition).appendRecordsToLeader(any[MemoryRecords],
Also added this back (and fixed the test).
(removedOffsets, group.is(Dead), group.generationId)
}

val offsetsPartition = partitionFor(groupId)
The indenting of this existing code was wrong, fixed it as part of this change.
@FrankYang0529 If the tests pass, I'll go ahead and merge. Please take a look when you have a chance and let me know if you see any issues with my updates.
Title changed: "log.message.format.version and message.format.version" → "log.message.format.version and message.format.version (KIP-724)"
Hi @ijuma, thanks for reviewing and updating the PR. I think the change is good. 👍
… (KIP-724) (#18267) Based on [KIP-724](https://cwiki.apache.org/confluence/display/KAFKA/KIP-724%3A+Drop+support+for+message+formats+v0+and+v1), the `log.message.format.version` and `message.format.version` configs can be removed in 4.0. These configs have effectively been a no-op with inter-broker protocol version 3.0 or higher since Apache Kafka 3.0, so the impact should be minimal. Reviewers: Ismael Juma <ismael@juma.me.uk>
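The "no-op since 3.0" point can be sketched as a simple version gate: with inter-broker protocol (IBP) 3.0 or higher the broker writes the v2 record format regardless of the configured message format. The method name and version representation below are assumptions for illustration, not Kafka's actual code:

```java
// Hypothetical sketch of why message.format.version became a no-op with IBP >= 3.0.
public class FormatVersionGate {
    static byte effectiveRecordVersion(int ibpMajor, byte configuredVersion) {
        if (ibpMajor >= 3) {
            return 2; // v2 is used regardless of the configured format
        }
        return configuredVersion; // only legacy IBPs honored the config
    }

    public static void main(String[] args) {
        System.out.println(effectiveRecordVersion(3, (byte) 1)); // config ignored
        System.out.println(effectiveRecordVersion(2, (byte) 1)); // config honored
    }
}
```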
@@ -223,11 +222,6 @@ public boolean renameDir(String name) {
public void updateConfig(LogConfig newConfig) {
    LogConfig oldConfig = config;
We don't need this local variable now - maybe it needs a minor PR to clean up :)
…sion v2 (KIP-724) (#18321) Convert v0/v1 record batches to v2 during compaction even if said record batches would be written with no change otherwise. A few important details:
1. A v0 compressed record batch with multiple records is converted into a single v2 record batch.
2. V0 uncompressed records are converted into single-record v2 record batches.
3. V0 records are converted to v2 records with timestampType set to `CreateTime` and the timestamp set to `-1`.
4. The `KAFKA-4298` workaround is no longer needed since the conversion to v2 fixes the issue too.
5. Removed a log warning applicable to consumers older than 0.10.1 - they are no longer supported.
6. Added back the ability to append records with v0/v1 (for testing only).
7. The creation of the leader epoch cache is no longer optional since the record version config is effectively always v2.
Add integration tests; these tests existed before #18267 - restored, modified and extended them. Reviewers: Jun Rao <jun@confluent.io>
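Detail 3 above can be sketched briefly: a v0 record carries no timestamp, so when it is rewritten as v2 during compaction the new record uses `CreateTime` with the sentinel timestamp `-1` ("no timestamp"). The types below are stand-ins for illustration, not Kafka's actual classes:

```java
// Hedged sketch of the v0 -> v2 timestamp rule; stand-in types only.
public class LegacyTimestampConversion {
    enum TimestampType { CREATE_TIME, LOG_APPEND_TIME }

    static final long NO_TIMESTAMP = -1L;

    // Returns the (type, value) pair a converted v0 record would carry.
    static Object[] convertedV0Timestamp() {
        // v0 records have no timestamp field, so the converted v2 record
        // is marked CreateTime with the "no timestamp" sentinel.
        return new Object[] { TimestampType.CREATE_TIME, NO_TIMESTAMP };
    }

    public static void main(String[] args) {
        Object[] ts = convertedV0Timestamp();
        System.out.println(ts[0] + " " + ts[1]);
    }
}
```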
In PR #18267, we removed the old message format cases in ConsumerWithLegacyMessageFormatIntegrationTest. Although the test cases still pass, they no longer fulfill their original purpose: since 4.0 we can't produce the old message format, so the cases were changed to append old-format records via ReplicaManager directly. Reviewers: Chia-Ping Tsai <chia7712@gmail.com>
Based on KIP-724, the `log.message.format.version` and `message.format.version` configs can be removed in 4.0. These configs have effectively been a no-op with inter-broker protocol version 3.0 or higher since Apache Kafka 3.0, so the impact should be minimal.
Committer Checklist (excluded from commit message)