KAFKA-4340: follow up patch for KAFKA-4340 #2544

Closed
becketqin wants to merge 3 commits into apache:trunk from becketqin:KAFKA-4340_follow_up

Conversation

@becketqin
Contributor

No description provided.

@becketqin becketqin force-pushed the KAFKA-4340_follow_up branch from 03a3bee to fe9c0f0 Compare February 13, 2017 10:26
@asfbot

asfbot commented Feb 13, 2017

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/1659/
Test PASSed (JDK 8 and Scala 2.11).

@asfbot

asfbot commented Feb 13, 2017

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk7-scala2.10/1656/
Test PASSed (JDK 7 and Scala 2.10).

@asfbot

asfbot commented Feb 13, 2017

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/1656/
Test PASSed (JDK 8 and Scala 2.12).

@becketqin
Contributor Author

cc @ijuma

@becketqin becketqin changed the title KAFKA-4340: update the upgrade.html KAFKA-4340: follow up patch for KAFKA-4340 Feb 15, 2017
@asfbot

asfbot commented Feb 15, 2017

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/1701/
Test PASSed (JDK 8 and Scala 2.11).

@asfbot

asfbot commented Feb 15, 2017

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk7-scala2.10/1698/
Test PASSed (JDK 7 and Scala 2.10).

Contributor

@junrao junrao left a comment


@becketqin : Thanks for the patch. LGTM. Just a couple of minor comments.

Comment thread docs/upgrade.html
<li>Several new fields including "security.protocol", "connections.max.idle.ms", "retry.backoff.ms", "reconnect.backoff.ms" and "request.timeout.ms" were added to
StreamsConfig class. Users should pay attention to the default values and set these if needed. For more details please refer to <a href="#streamsconfigs">3.5 Kafka Streams Configs</a>.</li>
<li>The <code>offsets.topic.replication.factor</code> broker config is now enforced upon auto topic creation. Internal auto topic creation will fail with a GROUP_COORDINATOR_NOT_AVAILABLE error until the cluster size meets this replication factor requirement.</li>
<li>The default value of <code>message.timestamp.difference.max.ms</code> has been changed to the same as <code>retention.ms</code></li>
Contributor


It seems that we need to add that to a new section for upgrade_1030_notable?

recoveryPoint = 0L,
time.scheduler,
time)
log.append(MemoryRecords.withRecords(Record.create(-1L, "key".getBytes, "value".getBytes)))
Contributor


Instead of -1, could we just reference Record.NO_TIMESTAMP?
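The point of the suggestion is to replace the magic number with the named constant so the intent is documented at the call site. A minimal illustrative sketch (the `NO_TIMESTAMP` constant here mirrors Kafka's `Record.NO_TIMESTAMP`, which is `-1L`; the real constant lives in `org.apache.kafka.common.record.Record`):

```java
// Illustrative only: mirrors Kafka's Record.NO_TIMESTAMP constant.
public class NoTimestampExample {
    public static final long NO_TIMESTAMP = -1L;

    public static void main(String[] args) {
        // Before: Record.create(-1L, "key".getBytes, "value".getBytes)
        // After:  Record.create(Record.NO_TIMESTAMP, "key".getBytes, "value".getBytes)
        // The named constant makes it clear the record intentionally has no timestamp.
        System.out.println(NO_TIMESTAMP);
    }
}
```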

@asfbot

asfbot commented Feb 16, 2017

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/1698/
Test PASSed (JDK 8 and Scala 2.12).

@junrao
Contributor

junrao commented Feb 16, 2017

Also, @ijuma brought up a good point. Right now, in KafkaConfig, if retention time is set, but MessageTimestampDifferenceMaxMs is not, MessageTimestampDifferenceMaxMs defaults to the default retention time. It seems that MessageTimestampDifferenceMaxMs should default to the customized retention time instead.
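The defaulting rule being proposed can be sketched as follows. This is a hypothetical illustration of the behavior described above, not the actual KafkaConfig implementation; the method and constant names are invented for the example:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the defaulting rule discussed above.
public class TimestampDiffDefault {
    static final long DEFAULT_RETENTION_MS = 7L * 24 * 60 * 60 * 1000; // 7 days

    static long effectiveMaxTimestampDiffMs(Map<String, Long> cfg) {
        // Resolve retention first, honoring any user override...
        long retentionMs = cfg.getOrDefault("log.retention.ms", DEFAULT_RETENTION_MS);
        // ...then default the timestamp-difference bound to the *configured*
        // retention time, rather than the static default retention time.
        return cfg.getOrDefault("message.timestamp.difference.max.ms", retentionMs);
    }

    public static void main(String[] args) {
        Map<String, Long> cfg = new HashMap<>();
        cfg.put("log.retention.ms", 60_000L);
        System.out.println(effectiveMaxTimestampDiffMs(cfg)); // 60000
    }
}
```

With this rule, a user who shortens retention gets a matching timestamp-difference bound automatically, while an explicit `message.timestamp.difference.max.ms` still wins.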

@cmccabe
Contributor

cmccabe commented Feb 16, 2017

Thanks for the patch, @becketqin. It looks good.

@ijuma, @junrao, do you think we'll be able to fix this soon to get the tests passing again, or should we revert KAFKA-4340 so that we have more time to think about it and get it right? Right now trunk is broken.

@becketqin
Contributor Author

@cmccabe I'm working on the patch and will submit an update today. I am fine with reverting the patch as it is not that critical to change the default configuration value.

Member

@ijuma ijuma left a comment


Thanks for the update, LGTM.

@asfbot

asfbot commented Feb 17, 2017

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/1733/
Test PASSed (JDK 8 and Scala 2.11).

@asfbot

asfbot commented Feb 17, 2017

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk7-scala2.10/1730/
Test PASSed (JDK 7 and Scala 2.10).

@ijuma
Member

ijuma commented Feb 17, 2017

Fixed the upgrade notes, tweaked the documentation of the config to mention the new default and merged to trunk.

@asfgit asfgit closed this in 1f2ee5f Feb 17, 2017
@asfbot

asfbot commented Feb 17, 2017

Refer to this link for build results (access rights to CI server needed):
https://builds.apache.org/job/kafka-pr-jdk8-scala2.12/1730/
Test FAILed (JDK 8 and Scala 2.12).

hachikuji pushed a commit to confluentinc/kafka that referenced this pull request Feb 23, 2017
…fault log.retention.ms

Author: Jiangjie Qin <becket.qin@gmail.com>

Reviewers: Jun Rao <junrao@gmail.com>, Ismael Juma <ismael@juma.me.uk>

Closes apache#2544 from becketqin/KAFKA-4340_follow_up

5 participants