Improve Key_Shared subscription message dispatching performance. #6647
Conversation
* Improve Key_Shared subscription message dispatching performance.
* Fix unit tests.
* Remove system.out.println

(cherry picked from commit c555eb7)
Hi @codelipenghui. I'm failing to see a practical difference in the code here compared to the previous implementation. I think the only difference is when the …

The difference is that before, the key was hashed twice; this change removes one hash calculation in the selector. Another difference is that I used to group the messages by message key, but when users use random keys the dispatcher can't benefit from that grouping, so the new way groups by consumer index. I have analyzed it with YourKit: the managed ledger worker thread shows high CPU usage under workloads with random keys and non-batched messages.
I think it's actually the reverse: the % operation is done for each message instead of for each "key".

I think this only matters if … I believe a better approach here, instead of exposing the index internals, would be to just group by consumer instead of by key or index. Basically, construct …

Totally agree.

If I understand correctly, this also needs to find a consumer for a key, right? Can we group the messages by the consumer without getting a consumer from the Map?

Yes, instead of asking "what is the index for this key?" we ask "what is the consumer for this key?". Grouping by consumer will lead to the smallest possible cardinality.

I don't understand the question here.

Sorry, I'm lost here. How can we map a key to a consumer? As far as I know, we currently map them via a hash range or a hash ring, so we need to get the consumer for a key from the hash range or the hash ring. The main purpose of this change is to reduce the number of get-consumer lookups against the hash range or hash ring; too many of those lookups result in high CPU usage on the thread. I don't know how to map a key to a consumer without getting a consumer from a hash range or a hash ring. If I missed something, please point it out, thanks.
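To make the "group by consumer" idea concrete, here is a minimal, hypothetical sketch. `Consumer`, `Entry`, and `Selector` are stand-ins for the broker-side types, not the actual Pulsar classes: the selector is consulted once per entry, and the resulting map has at most one bucket per connected consumer, regardless of how many distinct keys appear in the batch.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Illustrative sketch only: Consumer, Entry, and Selector are stand-ins
// for the broker-side types, not the actual Pulsar classes.
public class GroupByConsumerSketch {

    interface Selector {
        Consumer select(byte[] stickyKey); // could be backed by a hash range or a hash ring
    }

    static class Consumer {
        final String name;
        Consumer(String name) { this.name = name; }
    }

    static class Entry {
        final byte[] stickyKey;
        Entry(byte[] stickyKey) { this.stickyKey = stickyKey; }
    }

    // One selector lookup per entry, then a single dispatch per consumer.
    // The cardinality of the resulting map is bounded by the number of
    // connected consumers, not by the number of distinct keys in the batch.
    static Map<Consumer, List<Entry>> groupByConsumer(List<Entry> entries, Selector selector) {
        Map<Consumer, List<Entry>> grouped = new HashMap<>();
        for (Entry entry : entries) {
            Consumer consumer = selector.select(entry.stickyKey);
            grouped.computeIfAbsent(consumer, c -> new ArrayList<>()).add(entry);
        }
        return grouped;
    }
}
```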
### Motivation
The current implementation of KeyShared subscriptions uses a mechanism to divide the hash space across the available consumers. This is based on dividing the currently assigned hash ranges when a new consumer joins or leaves.
There are a few problems with the current approach:
1. When adding a new consumer, the biggest range is split to make space for the new consumer. That means that when adding 3 consumers, 1 of them will "own" a hash range that is double the size of the other 2 consumers' ranges and will consequently receive twice the traffic. This is not terrible, but not ideal either.
2. When removing consumers, the range for the removed consumer is always assigned to the next consumer. The resulting hash distribution really depends on the sequence in which the consumers are removed. If one is unlucky, the traffic will be very heavily skewed, with situations where 1 consumer gets >90% of the traffic (a rough sketch of this behaviour follows the example below).
This is an example of removing consumers in sequence, with the size of their respective hash ranges attached:
```
Removed consumer from rangeMap: {c1=8192, c10=4096, c3=4096, c4=8192, c5=4096, c6=8192, c7=16384, c8=8192, c9=4096}
Removed consumer from rangeMap: {c1=8192, c10=4096, c4=8192, c5=4096, c6=12288, c7=16384, c8=8192, c9=4096}
Removed consumer from rangeMap: {c1=8192, c10=4096, c5=4096, c6=12288, c7=16384, c8=16384, c9=4096}
Removed consumer from rangeMap: {c1=8192, c10=8192, c6=12288, c7=16384, c8=16384, c9=4096}
Removed consumer from rangeMap: {c1=24576, c10=8192, c6=12288, c7=16384, c9=4096}
Removed consumer from rangeMap: {c1=24576, c10=8192, c7=28672, c9=4096}
Removed consumer from rangeMap: {c1=53248, c10=8192, c9=4096}
```
As you can see, `c1` will take most of the traffic.
Most likely it will not be able to process all the messages and the backlog builds up.
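As a rough, hypothetical illustration of the removal behaviour described above (not the actual Pulsar code), the following sketch merges each removed consumer's slice into the consumer owning the next boundary and shows how the final ownership depends entirely on the removal order:

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Rough illustration only: each consumer owns a contiguous slice of the hash
// space, keyed by the slice's upper bound. When a consumer is removed, its
// slice is absorbed by the consumer owning the next boundary, so the final
// distribution depends entirely on the removal order.
public class RangeRemovalSkew {
    public static void main(String[] args) {
        int hashSpace = 65536;
        List<String> consumers = List.of("c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8");

        // Start from an even split: upper bound of range -> consumer.
        TreeMap<Integer, String> rangeMap = new TreeMap<>();
        int slice = hashSpace / consumers.size();
        for (int i = 0; i < consumers.size(); i++) {
            rangeMap.put((i + 1) * slice, consumers.get(i));
        }

        // Remove c2..c7 one by one: every removed slice ends up with c8.
        for (String removed : List.of("c2", "c3", "c4", "c5", "c6", "c7")) {
            rangeMap.values().remove(removed);
            System.out.println("Removed " + removed + " -> " + rangeSizes(rangeMap));
        }
    }

    // Convert the boundary map into per-consumer range sizes for readability.
    private static TreeMap<String, Integer> rangeSizes(TreeMap<Integer, String> rangeMap) {
        TreeMap<String, Integer> sizes = new TreeMap<>();
        int previous = 0;
        for (Map.Entry<Integer, String> e : rangeMap.entrySet()) {
            sizes.merge(e.getValue(), e.getKey() - previous, Integer::sum);
            previous = e.getKey();
        }
        return sizes;
    }
}
```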
### Modifications
* No functional difference from the user's perspective.
* Use a consistent hashing mechanism to assign keys to consumers (see the sketch after this list). This will ensure an even distribution without the degradation seen in the corner cases above.
* The number of points in the ring is configurable, default=100.
* Refactored the current unit tests. The tests currently duplicate the logic of the implementation and check that a message is placed in the bucket of one consumer. Of course that works, since the same code is executed on both sides. Instead, the tests should focus on the contract of the feature: messages should arrive in order, and there should be a "decent" sharing of load across consumers.
* @codelipenghui I've removed the `selectByIndex()`. In my opinion there's absolutely no difference in efficiency/performance as I've also explained on #6647 (comment). I'm happy to discuss more about it.
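A minimal sketch of what a consistent-hash-ring selector along these lines could look like, assuming a configurable number of points per consumer (default 100 in this proposal). Class and method names, and the hash function, are illustrative placeholders, not the actual Pulsar implementation:

```java
import java.nio.charset.StandardCharsets;
import java.util.Map;
import java.util.TreeMap;

// Illustrative sketch only; names and the hash function are placeholders,
// not the actual Pulsar implementation.
public class ConsistentHashRingSelector {

    private final TreeMap<Integer, String> ring = new TreeMap<>();
    private final int pointsPerConsumer; // configurable, e.g. 100

    public ConsistentHashRingSelector(int pointsPerConsumer) {
        this.pointsPerConsumer = pointsPerConsumer;
    }

    public void addConsumer(String consumer) {
        for (int i = 0; i < pointsPerConsumer; i++) {
            ring.put(hash(consumer + "_" + i), consumer);
        }
    }

    public void removeConsumer(String consumer) {
        for (int i = 0; i < pointsPerConsumer; i++) {
            // Two-argument remove avoids deleting a colliding point that
            // happens to be owned by another consumer.
            ring.remove(hash(consumer + "_" + i), consumer);
        }
    }

    // Walk clockwise on the ring: first point >= hash(key), wrapping around.
    public String select(String key) {
        if (ring.isEmpty()) {
            return null;
        }
        Map.Entry<Integer, String> point = ring.ceilingEntry(hash(key));
        return point != null ? point.getValue() : ring.firstEntry().getValue();
    }

    // Any stable, well-distributed hash works for the sketch.
    private static int hash(String s) {
        int h = 0;
        for (byte b : s.getBytes(StandardCharsets.UTF_8)) {
            h = 31 * h + (b & 0xff);
        }
        return h & 0x7fffffff; // keep it non-negative to simplify ring ordering
    }
}
```

Because each consumer contributes many points spread around the ring, adding or removing a consumer only reassigns the keys adjacent to that consumer's points, and the remaining load stays roughly even regardless of the join/leave order.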
### Motivation
Improve Key_Shared subscription message dispatching performance.
### Modifications
### Verifying this change
Performance improves.
On my laptop, publishing messages at a rate of 150k/s with 1500 consumers, the consumption rate can't catch up with the publish rate.
After this change, even 3000 consumers can catch up with the publish rate.
### Does this pull request potentially affect one of the following parts:
If `yes` was chosen, please highlight the changes: Documentation