MINOR: lower Metadata info log to debug for topic ID change #12378

Closed
nicolasguyomar wants to merge 1 commit into apache:trunk from nicolasguyomar:patch-6


Conversation

@nicolasguyomar
Contributor

Hi team,

Minor suggestion to lower this new log to DEBUG: we used not to have any INFO log in this class, and it now looks as if something changed when you start a client, while it is just the first metadata response that triggers the log. The change from null to an actual topic ID reads as somewhat worrisome, as if something was wrong before (because null) and is only now fine.

Let me know what you think

Thank you


@jolshan
Member

jolshan commented Jul 16, 2022

something has changed when you start a client while it's just the first metadata response result which is triggering the log with a somehow worrisome change from null to an actual topicID, as if something was not OK before, because null, and now it's good

Can you clarify this point? Are you seeing this log on startup? This message should only be logged when the ID was not null.

@nicolasguyomar
Contributor Author

Hello @jolshan

Yes, you will see the log on any consumer/producer startup.

@jolshan
Member

jolshan commented Jul 18, 2022

I'm not sure I follow why this happens though -- can you share an example of one of the logs?

@nicolasguyomar
Contributor Author

Hello @jolshan, this is the output of a Confluent Bundle 7.2.0 console producer, producing a "test" message to topic "nicolas".

You'll see that upon receiving the first Metadata response, we log the message that I would like to "hide", but you're probably right that we should not log it at all:

```
[2022-07-19 15:35:41,214] INFO [Producer clientId=console-producer] Resetting the last seen epoch of partition nicolas-0 to 0 since the associated topicId changed from null to 7OdYyViuRY2XmggNWuP_Vg (org.apache.kafka.clients.Metadata)
```

```
./kafka-console-producer --bootstrap-server=localhost:9092 --topic nicolas
[2022-07-19 15:35:30,945] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$)
[2022-07-19 15:35:31,012] INFO ProducerConfig values: 
	xxxx
 (org.apache.kafka.clients.producer.ProducerConfig)
[2022-07-19 15:35:31,037] INFO [Producer clientId=console-producer] Instantiated an idempotent producer. (org.apache.kafka.clients.producer.KafkaProducer)
[2022-07-19 15:35:31,129] INFO Successfully logged in. (org.apache.kafka.common.security.authenticator.AbstractLogin)
[2022-07-19 15:35:31,227] WARN The configuration 'sasl.mechanisms' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2022-07-19 15:35:31,227] WARN The configuration 'schema.registry.url' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2022-07-19 15:35:31,227] WARN The configuration 'schema.registry.basic.auth.credentials.source' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2022-07-19 15:35:31,227] WARN The configuration 'sasl.username' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2022-07-19 15:35:31,227] WARN The configuration 'schema.registry.basic.auth.user.info' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2022-07-19 15:35:31,227] WARN The configuration 'sasl.password' was supplied but isn't a known config. (org.apache.kafka.clients.producer.ProducerConfig)
[2022-07-19 15:35:31,230] INFO Kafka version: 7.2.0-ce (org.apache.kafka.common.utils.AppInfoParser)
[2022-07-19 15:35:31,230] INFO Kafka commitId: 510078ca78367bd2542eaca9d26ebd22c5ca95cc (org.apache.kafka.common.utils.AppInfoParser)
[2022-07-19 15:35:31,230] INFO Kafka startTimeMs: 1658237731227 (org.apache.kafka.common.utils.AppInfoParser)
>[2022-07-19 15:35:32,290] INFO [Producer clientId=console-producer] Cluster ID: lkc-xrrwg (org.apache.kafka.clients.Metadata)
[2022-07-19 15:35:32,293] INFO [Producer clientId=console-producer] ProducerId set to 11385814 with epoch 0 (org.apache.kafka.clients.producer.internals.TransactionManager)
test
[2022-07-19 15:35:41,214] INFO [Producer clientId=console-producer] Resetting the last seen epoch of partition nicolas-0 to 0 since the associated topicId changed from null to 7OdYyViuRY2XmggNWuP_Vg (org.apache.kafka.clients.Metadata)
>^C[2022-07-19 15:35:55,532] INFO [Producer clientId=console-producer] Closing the Kafka producer with timeoutMillis = 9223372036854775807 ms. (org.apache.kafka.clients.producer.KafkaProducer)
[2022-07-19 15:35:55,557] INFO Metrics scheduler closed (org.apache.kafka.common.metrics.Metrics)
[2022-07-19 15:35:55,557] INFO Closing reporter org.apache.kafka.common.metrics.JmxReporter (org.apache.kafka.common.metrics.Metrics)
[2022-07-19 15:35:55,557] INFO Metrics reporters closed (org.apache.kafka.common.metrics.Metrics)
[2022-07-19 15:35:55,558] INFO App info kafka.producer for console-producer unregistered (org.apache.kafka.common.utils.AppInfoParser)
```


@jolshan
Member

jolshan commented Jul 20, 2022

Ah, looks like the issue is that the old ID is null (not the new ID).

As described by this comment:

```java
// If the new topic ID is valid and different from the last seen topic ID, update the metadata.
// Between the time that a topic is deleted and re-created, the client may lose track of the
// corresponding topicId (i.e. `oldTopicId` will be null). In this case, when we discover the new
// topicId, we allow the corresponding leader epoch to override the last seen value.
```

So based on this comment, we do want to override the epoch. I'm just curious why this one extra log message was causing issues. I suppose one thing we could do is check whether this is the first metadata response, or whether the epochs are equal, but I'm not sure how much that helps.
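The behaviour described in that comment can be sketched roughly as follows. This is a simplified, hypothetical reconstruction for illustration only, not the actual `org.apache.kafka.clients.Metadata` source: the class name `EpochTracker`, its fields, and the use of `String` in place of `Uuid` are all assumptions.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of the epoch-tracking behaviour discussed above.
// Topic IDs are plain Strings here instead of Kafka's Uuid type.
public class EpochTracker {
    private final Map<String, Integer> lastSeenEpochs = new HashMap<>();
    private final Map<String, String> topicIds = new HashMap<>();

    // Returns true when the last seen epoch was (re)set.
    public boolean update(String partition, String newTopicId, int newEpoch) {
        String oldTopicId = topicIds.get(partition);
        Integer oldEpoch = lastSeenEpochs.get(partition);
        boolean topicIdChanged = newTopicId != null && !newTopicId.equals(oldTopicId);
        if (topicIdChanged) {
            // The topic may have been deleted and re-created (oldTopicId is null
            // on a fresh client), so the new epoch overrides the last seen value.
            // This is the branch that emits the INFO log under discussion.
            topicIds.put(partition, newTopicId);
            lastSeenEpochs.put(partition, newEpoch);
            System.out.printf("Resetting the last seen epoch of partition %s to %d "
                    + "since the associated topicId changed from %s to %s%n",
                partition, newEpoch, oldTopicId, newTopicId);
            return true;
        }
        // Same topic ID: only move the epoch forward, never backward.
        if (oldEpoch == null || newEpoch >= oldEpoch) {
            lastSeenEpochs.put(partition, newEpoch);
            return true;
        }
        return false;
    }
}
```

On a fresh client the very first metadata response always takes the `topicIdChanged` branch (null to some ID), which is why the log appears on every startup even though nothing is actually wrong.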

fabien-chebel added a commit to fabien-chebel/kafka that referenced this pull request Aug 19, 2022
Context
-------

Since updating to kafka client 3.0.1, we noticed quite a lot of new info-level logs generated by `org.apache.kafka.clients.Metadata`.

The logs look like this:
```
[Producer clientId=producer-1] Resetting the last seen epoch of partition user-3 to 2044 since the associated topicId changed from null to O5w7zNcCTpKdylvkOUMesQ
```

They appear at seemingly random times, while keeping the same last seen epoch value.

Proposed fix
------------

As suggested in apache#12378, I am updating the code condition to avoid generating a log when the last seen epoch reset operation is a no-op.
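The proposed fix amounts to guarding the log statement so that a no-op reset stays silent. A minimal sketch of the idea, assuming the old epoch is available at the call site; the class and method names here are illustrative, not the actual Kafka code:

```java
// Only log when the reset actually changes the stored epoch; a "reset"
// that re-records the same value (as in the first metadata response on
// a fresh client) is a no-op and produces no log line.
public final class EpochResetLogger {
    public static String maybeLog(String partition, Integer oldEpoch, int newEpoch,
                                  String oldTopicId, String newTopicId) {
        if (oldEpoch != null && oldEpoch == newEpoch) {
            return null; // no-op reset: suppress the message
        }
        return String.format(
            "Resetting the last seen epoch of partition %s to %d since the "
                + "associated topicId changed from %s to %s",
            partition, newEpoch, oldTopicId, newTopicId);
    }
}
```

With this guard, the startup case (old epoch unknown) still logs once, while the repeated resets with an unchanged epoch described above go quiet.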
@github-actions

This PR is being marked as stale since it has not had any activity in 90 days. If you
would like to keep this PR alive, please leave a comment asking for a review. If the PR has
merge conflicts, update it with the latest from the base branch.

If you are having difficulty finding a reviewer, please reach out on the [mailing list](https://kafka.apache.org/contact).

If this PR is no longer valid or desired, please feel free to close it. If no activity occurs in the next 30 days, it will be automatically closed.

@github-actions github-actions Bot added the stale Stale PRs label Dec 27, 2024