
KAFKA-10090 Misleading warnings: The configuration was supplied but i…#8826

Merged
chia7712 merged 7 commits into apache:trunk from chia7712:KAFKA-10090
Dec 3, 2020
Conversation

@chia7712 (Member) commented Jun 7, 2020

https://issues.apache.org/jira/browse/KAFKA-10090

Committer Checklist (excluded from commit message)

  • Verify design and implementation
  • Verify test coverage and CI build status
  • Verify documentation (including upgrade notes)

@abbccdda left a comment:

One meta comment: in the ticket we stated that we want to fix the warning for used instead of unknown. Does it make sense to just change the logging? And in terms of misleading, does this log confuse the user by any chance?

(Outdated review thread on clients/src/main/java/org/apache/kafka/common/network/ChannelBuilders.java)
@junrao (Contributor) commented Nov 4, 2020

@chia7712 : Thanks for the PR.

I ran the following command with the PR:

```
bin/kafka-topics.sh --bootstrap-server localhost:9092 --command-config kafka.properties --create --topic test1
```

kafka.properties:

```
security.protocol=SSL
ssl.protocol=TLS
```

I still saw the WARN:

```
[2020-11-04 13:32:16,059] WARN The configuration 'ssl.protocol' was supplied but isn't a known config. (org.apache.kafka.clients.admin.AdminClientConfig)
```
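Editor's note: the WARN above comes from Kafka's unused-config tracking, in which reads of config keys are recorded and any supplied-but-never-read key is reported. The sketch below is a simplified, hypothetical stand-in for that mechanism (not Kafka's actual classes), just to illustrate why an SSL config that is never read triggers the message:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class UnusedConfigDemo {
    // Simplified stand-in for Kafka's RecordingMap: every get() marks the key as used.
    static class RecordingMap<K, V> extends HashMap<K, V> {
        final Set<K> used = new HashSet<>();

        RecordingMap(Map<K, V> originals) {
            super(originals);
        }

        @Override
        @SuppressWarnings("unchecked")
        public V get(Object key) {
            used.add((K) key);
            return super.get(key);
        }

        // Keys that were supplied but never read; these trigger the WARN.
        Set<K> unused() {
            Set<K> keys = new HashSet<>(keySet());
            keys.removeAll(used);
            return keys;
        }
    }

    public static void main(String[] args) {
        Map<String, Object> originals = new HashMap<>();
        originals.put("security.protocol", "SSL");
        originals.put("ssl.protocol", "TLS");

        RecordingMap<String, Object> configs = new RecordingMap<>(originals);
        configs.get("security.protocol"); // only this key is ever read

        // "ssl.protocol" was supplied but never read, hence the misleading warning.
        System.out.println(configs.unused()); // prints [ssl.protocol]
    }
}
```

The bug discussed in this PR is, roughly, that some code paths never routed the reads through the recording layer, so legitimately consumed keys still showed up as "unused".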

@chia7712 (Member, Author) commented Nov 5, 2020

@junrao Thanks for your response. Fixed.

@junrao (Contributor) commented Nov 6, 2020

@chia7712 : Thanks for the updated PR. Did that fix the issue? I still saw the same WARN when running kafka-topics.sh.

@chia7712 (Member, Author) commented Nov 6, 2020

> Did that fix the issue? I still saw the same WARN when running kafka-topics.sh.

my bad :(

@junrao left a comment:

@chia7712 : Thanks for the updated PR. It does seem to fix the WARN in the admin tool.

I tried running console-producer with and without this PR. It doesn't seem to WARN about any unused SSL configs in either test. Do you know why?

@junrao (Contributor) commented on the diff:

Does this cover the case when listenerName is not null? I guess that can only happen on the server side, and since we don't log unused configs on the server, maybe this is ok for now?

@chia7712 (Member, Author) replied:

```java
if (listenerName == null)
    parsedConfigs = (Map<String, Object>) config.values();
else
    parsedConfigs = config.valuesWithPrefixOverride(listenerName.configPrefix());
```

The method `config.valuesWithPrefixOverride` also returns a `RecordingMap`, so it is ok.
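For readers unfamiliar with listener prefixes: a listener-prefixed key such as `listener.name.internal.ssl.protocol` overrides the base key `ssl.protocol` for that listener. The following is a simplified, hypothetical sketch of the override semantics (Kafka's real `valuesWithPrefixOverride` also handles type parsing and usage recording):

```java
import java.util.HashMap;
import java.util.Map;

public class PrefixOverrideDemo {
    // Illustrative only: strip the listener prefix from prefixed keys so they
    // override the corresponding base keys; keep base keys that have no override.
    static Map<String, Object> valuesWithPrefixOverride(Map<String, Object> values, String prefix) {
        Map<String, Object> result = new HashMap<>();
        for (Map.Entry<String, Object> e : values.entrySet()) {
            if (e.getKey().startsWith(prefix))
                result.put(e.getKey().substring(prefix.length()), e.getValue()); // prefixed wins
            else
                result.putIfAbsent(e.getKey(), e.getValue()); // base key only if not overridden
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> values = new HashMap<>();
        values.put("ssl.protocol", "TLS");
        values.put("listener.name.internal.ssl.protocol", "TLSv1.3");
        // Effective configs for the "internal" listener: ssl.protocol resolves to TLSv1.3.
        System.out.println(valuesWithPrefixOverride(values, "listener.name.internal."));
    }
}
```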

(Outdated review threads on clients/src/main/java/org/apache/kafka/common/network/ChannelBuilders.java and clients/src/main/java/org/apache/kafka/common/config/AbstractConfig.java)
@chia7712 (Member, Author) replied:

> I tried running console-producer with/without this PR. It doesn't seem to WARN any unused SSL configs in either test. Do you know why?

@junrao this is the root cause.

@junrao (Contributor) commented on the diff:

Hmm, why is this necessary since we reset used to empty in the next line?

@chia7712 (Member, Author) replied:

ConfigDef#parse (https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/config/ConfigDef.java#L468) reads every defined element from the input map, so all of those gets are recorded. To avoid recording them, we pass a copy instead of the RecordingMap.
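Editor's note: the copy trick described above works because `HashMap`'s copy constructor iterates `entrySet()` rather than calling `get()`, so nothing is "recorded" on the source map. A small self-contained demonstration (illustrative only; the counting map below stands in for Kafka's RecordingMap):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.atomic.AtomicInteger;

public class CopyVsGetDemo {
    public static void main(String[] args) {
        final AtomicInteger gets = new AtomicInteger();

        // A map that counts get() calls, standing in for Kafka's RecordingMap.
        Map<String, Object> recording = new HashMap<String, Object>(Map.of("ssl.protocol", "TLS")) {
            @Override
            public Object get(Object key) {
                gets.incrementAndGet();
                return super.get(key);
            }
        };

        // Copy-constructing iterates entrySet(), so no get() calls are recorded...
        Map<String, Object> copy = new HashMap<>(recording);

        // ...and parse-style lookups against the copy never touch the original map.
        copy.get("ssl.protocol");

        System.out.println("gets on recording map: " + gets.get()); // prints: gets on recording map: 0
    }
}
```

This is why handing ConfigDef#parse a plain copy leaves the used/unused bookkeeping of the original RecordingMap intact.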

@junrao (Contributor) commented on the diff:

Ok. I guess the issue is in the following, where we pass in a RecordingMap to construct ProducerConfig.

https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java#L384

However, that code seems no longer necessary since we are now setting clientId in ProducerConfig.postProcessParsedConfig(). Could we just avoid constructing ProducerConfig there?

@chia7712 (Member, Author) replied:

> However, that code seems no longer necessary since we are now setting clientId in ProducerConfig.postProcessParsedConfig(). Could we just avoid constructing ProducerConfig there?

I don't think so. The configs passed to a configurable object are the originals, so the generated "client.id" is not included. However, your feedback made me realize that we don't need to create a new ProducerConfig. Instead, we can use overrideConfig to set the generated client.id on those configurable objects. Will update it later.

@junrao (Contributor) commented on the diff:

Hmm, I am still a bit confused. My understanding is that with the latest change, ProducerConfig will only be instantiated once and thus the passed in originals will never be a RecordingMap. But it seems this is still needed? Could you explain a bit more why this is the case?

@chia7712 (Member, Author) replied:

> But it seems this is still needed?

It is not necessary with the latest change. I kept it as a complete solution (in case someone passes a RecordingMap in the future). However, I'm going to remove it to make this PR simpler.

@chia7712 (Member, Author) commented:

```
org.apache.kafka.streams.integration.StreamTableJoinTopologyOptimizationIntegrationTest.shouldDoStreamTableJoinWithDifferentNumberOfPartitions
```

Unrelated error; rebasing to trigger QA.

@junrao left a comment:

@chia7712 : Thanks for the updated PR. A few more comments below.

(Outdated review thread on clients/src/main/java/org/apache/kafka/common/network/ChannelBuilders.java)
@junrao (Contributor) commented on the diff:

When will the input configs not be a RecordingMap?

@chia7712 (Member, Author) replied:

Currently, the non-RecordingMap cases happen only in tests. However, it seems to me we don't have a good definition of Configurable#configure, so it is hard to say what we should pass to it: an immutable map, a mutable map, and a RecordingMap are all alternatives. I want to keep the flexibility, but I'm ok with rewriting the related tests to make sure everything passed is a RecordingMap.

@junrao (Contributor) commented on the diff:

`consumer` is unused.

@chia7712 (Member, Author) replied:

It tests that the specified config is recorded when constructing KafkaConsumer.

@junrao (Contributor) commented on the diff:

Should we use the private static constructor in this class? Ditto below.

@chia7712 (Member, Author) replied:

copy that

@junrao (Contributor) commented on the diff:

`producer` is unused.

@chia7712 (Member, Author) replied:

It tests that the specified config is recorded when constructing KafkaProducer.

@junrao left a comment:

@chia7712 : Thanks for the updated PR. One more comment below.


(Outdated review threads on clients/src/main/java/org/apache/kafka/clients/producer/KafkaProducer.java and clients/src/test/java/org/apache/kafka/common/network/ChannelBuildersTest.java)
@junrao (Contributor) commented on the diff:

Do we need to instantiate again?

@chia7712 (Member, Author) replied:

We need a new RecordingMap to test a different key without the listener prefix. Otherwise, the key may have already been marked used by the previous test.

@junrao (Contributor) commented on the diff:

Is this test necessary? Do we still have a case where we pass in a RecordingMap to ProducerConfig?

@chia7712 (Member, Author) replied:

I'm going to remove this test: `assertTrue(new ProducerConfig(config.originals(), false).unused().co...`

@junrao left a comment:

@chia7712 : Thanks for the latest PR. LGTM. I triggered a run of system tests and will share the results later. If there are no issues with the system tests, you can merge the PR.

@junrao (Contributor) commented Dec 2, 2020

@chia7712 (Member, Author) commented Dec 2, 2020

> There are 6 failures in the system test run. Are they related to this PR?

They are unrelated to this PR and I have opened a PR to fix them (#9673)

@junrao left a comment:

@chia7712 : Thanks for the investigation. LGTM

@chia7712 (Member, Author) commented Dec 2, 2020

Rebasing to trigger QA again.

@chia7712 chia7712 merged commit e63f591 into apache:trunk Dec 3, 2020
ijuma added a commit to ijuma/kafka that referenced this pull request Dec 3, 2020 (branch …t-for-generated-requests):

```
* apache-github/trunk:
  MINOR: Fix flaky test shouldQueryOnlyActivePartitionStoresByDefault (apache#9681)
  KAFKA-10799 AlterIsr utilizes ReplicaManager ISR metrics (apache#9677)
  MINOR: Fix KTable-KTable foreign-key join example (apache#9683)
  KAFKA-10473: Add docs on partition size-on-disk, and other log-related metrics (apache#9276)
  KAFKA-10739; Replace EpochEndOffset with automated protocol (apache#9630)
  KAFKA-10460: ReplicaListValidator format checking is incomplete (apache#9326)
  KAFKA-10554; Perform follower truncation based on diverging epochs in Fetch response (apache#9382)
  MINOR: Align the UID inside/outside container (apache#9652)
  KAFKA-10794 Replica leader election is too slow in the case of too many partitions (apache#9675)
  KAFKA-10090 Misleading warnings: The configuration was supplied but i… (apache#8826)
```

clients/src/main/java/org/apache/kafka/common/requests/OffsetsForLeaderEpochResponse.java
clients/src/test/java/org/apache/kafka/clients/consumer/internals/FetcherTest.java
core/src/test/scala/unit/kafka/server/epoch/util/ReplicaFetcherMockBlockingSend.scala
@chia7712 chia7712 deleted the KAFKA-10090 branch March 25, 2024 15:22