This repository was archived by the owner on Jan 24, 2024. It is now read-only.

Conversation

@eolivelli
Contributor

@eolivelli eolivelli commented Feb 9, 2022

This patch upgrades the Kafka client dependency to 2.1.1.
The reason is that the Kafka 2.0.0 dependency has some reported CVEs, and we cannot ship this project to users if it contains CVEs.

The downside is that we are dropping compatibility with 0.9 and 0.10 clients, because version 0 of ListOffsets is no longer supported.

@CLAassistant

CLAassistant commented Feb 9, 2022

CLA assistant check
All committers have signed the CLA.

@eolivelli
Contributor Author

I only disabled the 0.9 and 0.10 tests; I will follow up by deleting the Maven modules that create the shaded clients for tests.

@BewareMyPower
Collaborator

I will follow up by deleting the Maven modules that create the shaded clients for tests

Why can't we support the ListOffsets v0 request after the upgrade? It's better not to drop support for older clients.

@Demogorgon314
Member

I think one of the reasons is that the `ListOffsetRequest.offsetData()` method has been removed.

@eolivelli
Contributor Author

I think one of the reasons is that the `ListOffsetRequest.offsetData()` method has been removed.

Yes, this is the problem: we are using the Kafka client jars to encode/decode the requests and responses, so we can no longer parse the v0 ListOffsetRequest.

We would have to write our own version of the request/response parser.

I think those clients are very old and people could upgrade to more recent versions (at least 1.0+)
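For illustration, here is a minimal Python sketch of what a hand-rolled decoder for the ListOffsets v0 request body could look like. The field layout follows the published Kafka protocol guide (`replica_id`, then per-topic arrays of `partition`, `timestamp`, `max_num_offsets`); the function names are illustrative, not KoP code:

```python
import struct

def _string(buf, off):
    # Kafka protocol STRING: int16 length followed by UTF-8 bytes
    (n,) = struct.unpack_from(">h", buf, off)
    off += 2
    return buf[off:off + n].decode("utf-8"), off + n

def parse_list_offsets_v0_body(buf):
    """Parse a ListOffsets v0 request body by hand (sketch).

    v0 layout: replica_id(int32), [topic(string),
    [partition(int32), timestamp(int64), max_num_offsets(int32)]]
    """
    off = 0
    (replica_id,) = struct.unpack_from(">i", buf, off); off += 4
    (num_topics,) = struct.unpack_from(">i", buf, off); off += 4
    topics = []
    for _ in range(num_topics):
        topic, off = _string(buf, off)
        (num_parts,) = struct.unpack_from(">i", buf, off); off += 4
        parts = []
        for _ in range(num_parts):
            partition, timestamp, max_offsets = struct.unpack_from(">iqi", buf, off)
            off += 16
            parts.append((partition, timestamp, max_offsets))
        topics.append((topic, parts))
    return replica_id, topics

# Build a tiny v0 body by hand and parse it back
body = struct.pack(">ii", -1, 1)              # replica_id=-1, 1 topic
body += struct.pack(">h", 4) + b"test"        # topic name "test"
body += struct.pack(">i", 1)                  # 1 partition
body += struct.pack(">iqi", 0, -1, 1)         # partition 0, timestamp -1, max 1
print(parse_list_offsets_v0_body(body))       # → (-1, [('test', [(0, -1, 1)])])
```

This is the kind of parser KoP would have to maintain itself once the shipped `kafka-clients` jar stops understanding v0.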

@BewareMyPower
Collaborator

I think those clients are very old and people could upgrade to more recent versions (at least 1.0+)

AFAIK, they are still widely used and it's not easy to upgrade the client, especially for applications that are not written in Java but use the old protocol. For example, #539 was opened not long ago. Another example: TEG at Tencent has adopted KoP for some time, and they still have some users on the old Kafka protocol.

The most important reason is that we've already supported 0.9 and 0.10. If we dropped the support, it would be a regression.

We would have to write our own version of the request/response parser.

It's okay. My suggestion is that we merge this PR to master after the tests pass, but it can only be merged to existing branches like branch-2.10.0 after the ListOffsetRequest v0 request is supported. That must be done before Pulsar 2.11 is released.

In addition, please fix the Codacy static code analysis issues.

Also, I found that IdempotentProducerTest.testIdempotentProducer still fails even though this PR's branch contains the fix from #1055. PTAL @Demogorgon314

@eolivelli
Contributor Author

I will fix the PR and get CI passing.
It is okay for me to have this only in the master branch.

Thanks!

@eolivelli
Contributor Author

I am rebasing this patch on top of the current master.

@BewareMyPower
Collaborator

Hi, I've opened PR #1107, which has a lot of changes to the metadata request handler. So there is no hurry with this PR, because new conflicts might appear after #1107 is merged.

Besides, you can also review that PR when you are free.

@eolivelli eolivelli requested a review from a team as a code owner May 13, 2022 07:11
@eolivelli
Contributor Author

@BewareMyPower @Demogorgon314 I have rebased this patch.
I believe it is time to move forward with this upgrade for the upcoming major release.

I have been running this code in production for a while without any particular issues.

@BewareMyPower
Collaborator

We can ignore the Codacy check for now; it's a pre-existing error.

private CompletableFuture<Pair<Errors, Long>> fetchOffset(String topicName, ListOffsetRequest.PartitionData pd) {

But we need to fix the incompatibility introduced by the constructor of OAuthBearerClientInitialResponse:

  1. It has had an exception in its signature since Kafka 2.1, so we need to handle the SaslException.
  2. It accepted a String before Kafka 2.1, while it accepts a byte array since Kafka 2.1.

@eolivelli
Contributor Author

@BewareMyPower thanks for your suggestion; the patch was based on an old version of KoP without the changes around OAuth.
I will update it as soon as possible.

@eolivelli
Contributor Author

It looks like our implementation is not compatible with Kafka's requirements.

We are passing this as the content of the response:
eyJhbGciOiJSUzI1NiIsImtpZCI6InB1YmxpYzpoeWRyYS5qd3QuYWNjZXNzLXRva2VuIiwidHlwIjoiSldUIn0.eyJhdWQiOlsiaHR0cDovL2V4YW1wbGUuY29tL2FwaS92Mi8iXSwiY2xpZW50X2lkIjoic2ltcGxlX2NsaWVudF9pZCIsImV4cCI6MTY1MjQ0MTMzOSwiZXh0Ijp7fSwiaWF0IjoxNjUyNDM3NzM5LCJpc3MiOiJodHRwOi8vMTI3LjAuMC4xOjQ0NDQvIiwianRpIjoiNzEwYmFmYmYtYThkYy00ZmUxLTg1ZTItYjJiNDU2ZTMyZjAyIiwibmJmIjoxNjUyNDM3NzM5LCJzY3AiOltdLCJzdWIiOiJzaW1wbGVfY2xpZW50X2lkIn0.LpwLlkvhnb_OxQTN7jJbhhcIYQvPo5TxYUqJ8uMixarX3_ayDc77ji5bT9r_p5zt_3NG69sZTlYf0MbkfEvYHP-KbDH6mcQdFa88X46TmGRIALAKnjlB5o7eW_69jAZI98uNsKsvh6NuwAmho2opdbDgS945qbZBpJ0VN2QYxp212DkFFMKqSuU861cwOykWYEX0gHElK_KBpcqJ091phjhArE8vUro7T_AlSGYBhFe4BpdT1tNmBZ5vwQuk7Vpznkskc_CVQvE7mwFzJNj_o144ScgZNa5OZhC0CcK4vJF1cupJFrgKXd8pW0frvZrjBH8gGnyRlhbEy06VTirbDA

logs:

12:29:00.748 [nioEventLoopGroup-66-10:io.streamnative.pulsar.handlers.kop.coordinator.transaction.TransactionMarkerChannelHandler@268] DEBUG io.streamnative.pulsar.handlers.kop.coordinator.transaction.TransactionMarkerChannelHandler - SASL Handshake completed with success
12:29:00.748 [nioEventLoopGroup-66-10:io.streamnative.pulsar.handlers.kop.coordinator.transaction.TransactionMarkerChannelHandler@329] INFO io.streamnative.pulsar.handlers.kop.coordinator.transaction.TransactionMarkerChannelHandler - OAUTHBEARER_MECHANISM eyJhbGciOiJSUzI1NiIsImtpZCI6InB1YmxpYzpoeWRyYS5qd3QuYWNjZXNzLXRva2VuIiwidHlwIjoiSldUIn0.eyJhdWQiOlsiaHR0cDovL2V4YW1wbGUuY29tL2FwaS92Mi8iXSwiY2xpZW50X2lkIjoic2ltcGxlX2NsaWVudF9pZCIsImV4cCI6MTY1MjQ0MTMzOSwiZXh0Ijp7fSwiaWF0IjoxNjUyNDM3NzM5LCJpc3MiOiJodHRwOi8vMTI3LjAuMC4xOjQ0NDQvIiwianRpIjoiNzEwYmFmYmYtYThkYy00ZmUxLTg1ZTItYjJiNDU2ZTMyZjAyIiwibmJmIjoxNjUyNDM3NzM5LCJzY3AiOltdLCJzdWIiOiJzaW1wbGVfY2xpZW50X2lkIn0.LpwLlkvhnb_OxQTN7jJbhhcIYQvPo5TxYUqJ8uMixarX3_ayDc77ji5bT9r_p5zt_3NG69sZTlYf0MbkfEvYHP-KbDH6mcQdFa88X46TmGRIALAKnjlB5o7eW_69jAZI98uNsKsvh6NuwAmho2opdbDgS945qbZBpJ0VN2QYxp212DkFFMKqSuU861cwOykWYEX0gHElK_KBpcqJ091phjhArE8vUro7T_AlSGYBhFe4BpdT1tNmBZ5vwQuk7Vpznkskc_CVQvE7mwFzJNj_o144ScgZNa5OZhC0CcK4vJF1cupJFrgKXd8pW0frvZrjBH8gGnyRlhbEy06VTirbDA

12:29:00.748 [nioEventLoopGroup-66-10:io.streamnative.pulsar.handlers.kop.coordinator.transaction.TransactionMarkerChannelHandler@352] ERROR io.streamnative.pulsar.handlers.kop.coordinator.transaction.TransactionMarkerChannelHandler - Transaction marker channel handler authentication failed.
javax.security.sasl.SaslException: Invalid OAUTHBEARER client first message
at org.apache.kafka.common.security.oauthbearer.internals.OAuthBearerClientInitialResponse.<init>(OAuthBearerClientInitialResponse.java:53) ~[kafka-clients-2.1.1.jar:?]
at io.streamnative.pulsar.handlers.kop.coordinator.transaction.TransactionMarkerChannelHandler.authenticateInternal(TransactionMarkerChannelHandler.java:330) ~[classes/:?]
at io.streamnative.pulsar.handlers.kop.coordinator.transaction.TransactionMarkerChannelHandler.authenticate(TransactionMarkerChannelHandler.java:294) ~[classes/:?]
at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072) [?:?]
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) [?:?]
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073) [?:?]
at io.streamnative.pulsar.handlers.kop.coordinator.transaction.TransactionMarkerChannelHandler.channelRead(TransactionMarkerChannelHandler.java:194) [classes/:?]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.74.Final.jar:4.1.74.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.74.Final.jar:4.1.74.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.74.Final.jar:4.1.74.Final]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:327) [netty-codec-4.1.74.Final.jar:4.1.74.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:299) [netty-codec-4.1.74.Final.jar:4.1.74.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.74.Final.jar:4.1.74.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.74.Final.jar:4.1.74.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) [netty-transport-4.1.74.Final.jar:4.1.74.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) [netty-transport-4.1.74.Final.jar:4.1.74.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) [netty-transport-4.1.74.Final.jar:4.1.74.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) [netty-transport-4.1.74.Final.jar:4.1.74.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) [netty-transport-4.1.74.Final.jar:4.1.74.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166) [netty-transport-4.1.74.Final.jar:4.1.74.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722) [netty-transport-4.1.74.Final.jar:4.1.74.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658) [netty-transport-4.1.74.Final.jar:4.1.74.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584) [netty-transport-4.1.74.Final.jar:4.1.74.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496) [netty-transport-4.1.74.Final.jar:4.1.74.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:986) [netty-common-4.1.74.Final.jar:4.1.74.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.74.Final.jar:4.1.74.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty-common-4.1.74.Final.jar:4.1.74.Final]
at java.lang.Thread.run(Thread.java:829) [?:?]
12:29:00.748 [nioEventLoopGroup-66-10:io.streamnative.pulsar.handlers.kop.coordinator.transaction.TransactionMarkerChannelHandler@91] ERROR io.streamnative.pulsar.handlers.kop.coordinator.transaction.TransactionMarkerChannelHandler - Cannot send a WriteTxnMarkersRequest request
java.util.concurrent.CompletionException: javax.security.sasl.SaslException: Invalid OAUTHBEARER client first message
at java.util.concurrent.CompletableFuture.encodeThrowable(CompletableFuture.java:331) ~[?:?]
at java.util.concurrent.CompletableFuture.uniApplyNow(CompletableFuture.java:670) ~[?:?]
at java.util.concurrent.CompletableFuture.uniApplyStage(CompletableFuture.java:658) ~[?:?]
at java.util.concurrent.CompletableFuture.thenApply(CompletableFuture.java:2094) ~[?:?]
at io.streamnative.pulsar.handlers.kop.coordinator.transaction.TransactionMarkerChannelHandler.authenticateInternal(TransactionMarkerChannelHandler.java:356) ~[classes/:?]
at io.streamnative.pulsar.handlers.kop.coordinator.transaction.TransactionMarkerChannelHandler.authenticate(TransactionMarkerChannelHandler.java:294) ~[classes/:?]
at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:1072) ~[?:?]
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:506) [?:?]
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2073) [?:?]
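The `Invalid OAUTHBEARER client first message` error above is what the Kafka class throws when it receives a bare token: RFC 7628 requires the client first message to wrap the token in a GS2 header and `\x01` separators. A minimal Python sketch of that framing (hypothetical helper names, not KoP code):

```python
import re

KVSEP = "\x01"

def build_client_first_message(token, authzid=""):
    # RFC 7628 OAUTHBEARER initial response:
    # gs2 header, then \x01-separated key/value pairs, then a trailing \x01
    gs2 = "n," + ("a=" + authzid if authzid else "") + ","
    return gs2 + KVSEP + "auth=Bearer " + token + KVSEP + KVSEP

def parse_client_first_message(msg):
    # Loosely mirrors the validation OAuthBearerClientInitialResponse performs
    m = re.match(r"n,(a=[^,]*)?,\x01auth=Bearer (.+)\x01\x01$", msg)
    if m is None:
        raise ValueError("Invalid OAUTHBEARER client first message")
    return m.group(2)

msg = build_client_first_message("my.jwt.token")
assert parse_client_first_message(msg) == "my.jwt.token"

# Sending the bare token (as the handler did above) fails validation:
try:
    parse_client_first_message("my.jwt.token")
except ValueError as e:
    print(e)  # → Invalid OAUTHBEARER client first message
```

So the raw JWT in the log was missing the `n,,\x01auth=Bearer ...\x01\x01` envelope, which is exactly what the constructor rejects.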

@eolivelli
Contributor Author

No, sorry: I used the wrong constructor.
Fixing it now.

@eolivelli
Contributor Author

The test is passing locally now.

@eolivelli
Contributor Author

@BewareMyPower PTAL when you can.
I will follow up with the deletion of the now-unused modules.

@BewareMyPower BewareMyPower merged commit ae29ee3 into streamnative:master May 16, 2022
BewareMyPower pushed a commit that referenced this pull request May 30, 2022
Fixes: #1293

### Motivation

#1060 updates the `kafka-clients` dependency to 2.1.1, but it drops support for Kafka clients 0.9 and 0.10 because the `ListOffsetRequest.offsetData()` method was removed.

### Modifications

Add the old version of `ListOffsetRequest` to be compatible with Kafka clients 0.9 and 0.10.
BewareMyPower pushed a commit that referenced this pull request Jun 29, 2022

This patch upgrades the Kafka client dependency to 2.1.1.
The reason is that the Kafka 2.0.0 dependency has some reported CVEs, and we cannot ship this project to users if it contains CVEs.

The downside is that we are dropping compatibility with 0.9 and 0.10 clients, because version 0 of ListOffsets is no longer supported.

(cherry picked from commit ae29ee3)
BewareMyPower pushed a commit that referenced this pull request Jun 29, 2022
Fixes: #1293

### Motivation

#1060 updates the `kafka-clients` dependency to 2.1.1, but it drops support for Kafka clients 0.9 and 0.10 because the `ListOffsetRequest.offsetData()` method was removed.

### Modifications

Add the old version of `ListOffsetRequest` to be compatible with Kafka clients 0.9 and 0.10.

(cherry picked from commit 723d3be)
BewareMyPower added a commit that referenced this pull request Jun 29, 2022
michaeljmarshall pushed a commit to michaeljmarshall/kop that referenced this pull request Dec 13, 2022
Fixes: streamnative#1293

### Motivation

streamnative#1060 updates the `kafka-clients` dependency to 2.1.1, but it drops support for Kafka clients 0.9 and 0.10 because the `ListOffsetRequest.offsetData()` method was removed.

### Modifications

Add the old version of `ListOffsetRequest` to be compatible with Kafka clients 0.9 and 0.10.

(cherry picked from commit 723d3be)