
Conversation

@merlimat (Contributor)

Motivation

Existing tools like pulsar-admin and pulsar-client should use less memory so that they start up faster.

@merlimat merlimat added the type/enhancement label May 24, 2017
@merlimat merlimat added this to the 1.18 milestone May 24, 2017
@merlimat merlimat self-assigned this May 24, 2017
@jai1 (Contributor) commented May 24, 2017

👍

@merlimat merlimat merged commit 6f08d3b into apache:master May 26, 2017
hangc0276 added a commit to hangc0276/pulsar that referenced this pull request May 26, 2021
Fix apache#431 

### Motivation
When multiple consumers poll the same topic for the same position at the same time, they may trigger a deadlock on the `ConcurrentLongHashMap` backing `KafkaTopicConsumerManager#consumers`, blocking all subsequent read/write operations related to that topic.

### Deadlock Analysis
Deadlock stack trace:
```
"bookkeeper-ml-workers-OrderedExecutor-0-0" apache#141 prio=5 os_prio=0 tid=0x00007f4866042000 nid=0x1013 waiting on condition [0x00007f47c9ad6000]
   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for  <0x00000000865b0b58> (a org.apache.pulsar.common.util.collections.ConcurrentLongHashMap$Section)
        at java.util.concurrent.locks.StampedLock.acquireWrite(StampedLock.java:1119)
        at java.util.concurrent.locks.StampedLock.writeLock(StampedLock.java:354)
        at org.apache.pulsar.common.util.collections.ConcurrentLongHashMap$Section.put(ConcurrentLongHashMap.java:272)
        at org.apache.pulsar.common.util.collections.ConcurrentLongHashMap.putIfAbsent(ConcurrentLongHashMap.java:129)
        at io.streamnative.pulsar.handlers.kop.KafkaTopicConsumerManager.add(KafkaTopicConsumerManager.java:258)
        at io.streamnative.pulsar.handlers.kop.MessageFetchContext.lambda$null$10(MessageFetchContext.java:342)
        at io.streamnative.pulsar.handlers.kop.MessageFetchContext$$Lambda$1021/1882277105.accept(Unknown Source)
        at java.util.concurrent.CompletableFuture.uniAccept(CompletableFuture.java:656)
        at java.util.concurrent.CompletableFuture.uniAcceptStage(CompletableFuture.java:669)
        at java.util.concurrent.CompletableFuture.thenAccept(CompletableFuture.java:1997)
        at io.streamnative.pulsar.handlers.kop.MessageFetchContext.lambda$null$12(MessageFetchContext.java:337)
        at io.streamnative.pulsar.handlers.kop.MessageFetchContext$$Lambda$1020/843113083.accept(Unknown Source)
        at java.util.concurrent.ConcurrentHashMap$EntrySetView.forEach(ConcurrentHashMap.java:4795)
        at io.streamnative.pulsar.handlers.kop.MessageFetchContext.lambda$readMessagesInternal$13(MessageFetchContext.java:328)
        at io.streamnative.pulsar.handlers.kop.MessageFetchContext$$Lambda$1015/1232352552.accept(Unknown Source)
        at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:760)
        at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:736)
        at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
        at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1962)
        at io.streamnative.pulsar.handlers.kop.MessageFetchContext$2.readEntriesComplete(MessageFetchContext.java:479)
        at org.apache.bookkeeper.mledger.impl.OpReadEntry.lambda$checkReadCompletion$2(OpReadEntry.java:156)
        at org.apache.bookkeeper.mledger.impl.OpReadEntry$$Lambda$845/165410888.run(Unknown Source)
        at org.apache.bookkeeper.mledger.util.SafeRun$1.safeRun(SafeRun.java:36)
        at org.apache.bookkeeper.common.util.SafeRunnable.run(SafeRunnable.java:36)
        at org.apache.bookkeeper.common.util.OrderedExecutor$TimedRunnable.run(OrderedExecutor.java:204)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
        at java.lang.Thread.run(Thread.java:748)
```

The following diagram illustrates how the deadlock happens.
![image](https://user-images.githubusercontent.com/5436568/114263154-8407e500-9a16-11eb-8b7c-d51b9b068ccd.png)

When consumer C1 sends a poll request with `offset = 500`, KOP dispatches thread A to fetch data from BookKeeper; consumer C2 then also sends a poll request with `offset = 500`, and KOP dispatches thread B to fetch data.

At timestamp `t0`, thread A finishes fetching data and runs its callback; in the callback, KOP **occupies thread C** to fetch data from BookKeeper.

At timestamp `t1`, thread B calls `consumers.computeIfAbsent` with `key = 500` to get the related cursor. In the `ConcurrentLongHashMap` implementation, `computeIfAbsent` **holds the section write lock for `key = 500`** while running the computation.

At timestamp `t2`, thread C finishes fetching data and, in its callback, calls `consumers.putIfAbsent` with `key = 500`. It tries to **acquire the section write lock for `key = 500`**, but that lock has been held by thread B since `t1`, so thread C blocks waiting for the write lock.

At timestamp `t3`, thread B calls `getPositionForOffset` inside the `computeIfAbsent` computation. `getPositionForOffset` needs thread C to read an entry from BookKeeper, but thread C has been occupied since `t0` and is now itself blocked on the write lock. Thread B therefore waits for thread C while still holding the section write lock for `key = 500`, and the deadlock is complete.
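
To make the cycle concrete, here is a minimal, self-contained Java sketch (an illustration, not the actual KoP code): a `StampedLock` stands in for the map's per-key section lock, and a single-threaded executor stands in for the ordered thread C. Thread B holds the write lock while waiting for work that can only run on thread C, while thread C is blocked trying to acquire that same write lock.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.StampedLock;

public class SectionLockDeadlockSketch {
    public static void main(String[] args) throws Exception {
        StampedLock sectionLock = new StampedLock();                    // section lock for key = 500
        ExecutorService threadC = Executors.newSingleThreadExecutor();  // stands in for ordered thread C

        CountDownLatch lockHeldByB = new CountDownLatch(1);

        // t0/t2: thread C is busy with a "read completed" callback that ends in putIfAbsent,
        // i.e. it needs the section write lock for key = 500.
        threadC.submit(() -> {
            try {
                lockHeldByB.await();
            } catch (InterruptedException e) {
                return;
            }
            long stamp = sectionLock.writeLock();   // blocks: thread B already holds the lock
            sectionLock.unlockWrite(stamp);
        });

        // t1/t3: thread B runs computeIfAbsent(key = 500): it takes the section write lock and,
        // still holding it, waits for a position lookup that must run on thread C.
        Thread threadB = new Thread(() -> {
            long stamp = sectionLock.writeLock();
            lockHeldByB.countDown();
            try {
                threadC.submit(() -> "position for offset 500")
                       .get(5, TimeUnit.SECONDS);   // never runs: thread C is stuck on the lock
            } catch (Exception e) {
                System.out.println("thread B timed out waiting for thread C: " + e);
            } finally {
                sectionLock.unlockWrite(stamp);
            }
        });
        threadB.start();
        threadB.join();
        threadC.shutdownNow();
    }
}
```

The parked thread in this sketch corresponds to the `bookkeeper-ml-workers` thread in the stack trace above, blocked in `StampedLock.acquireWrite` from `ConcurrentLongHashMap$Section.put`; the 5-second timeout exists only so the sketch terminates instead of hanging forever.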

### Modification
1. Move `getPositionForOffset` out of the section write lock of the concurrent map, as shown in the sketch after this list.
2. Add a test that simulates multiple consumers polling the same topic for the same offset at the same time.
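
A minimal sketch of the resulting pattern, assuming a simplified registry (`getOrCreateCursor`, `lookupPositionAsync`, and the `Cursor` type are illustrative stand-ins, not the exact KoP methods): register a placeholder future with a cheap `putIfAbsent`, then resolve the offset-to-position lookup with no map lock held.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

class ConsumerRegistrySketch {
    // minimal stand-in for KoP's cursor type
    static class Cursor {
        final long position;
        Cursor(long position) { this.position = position; }
    }

    private final ConcurrentMap<Long, CompletableFuture<Cursor>> consumers = new ConcurrentHashMap<>();

    CompletableFuture<Cursor> getOrCreateCursor(long offset) {
        CompletableFuture<Cursor> future = new CompletableFuture<>();
        CompletableFuture<Cursor> existing = consumers.putIfAbsent(offset, future);
        if (existing != null) {
            return existing;                 // another caller is already resolving this offset
        }
        // The expensive offset-to-position lookup runs with no map lock held, so a
        // callback calling putIfAbsent on the same key can never block behind it.
        lookupPositionAsync(offset).whenComplete((position, err) -> {
            if (err != null) {
                consumers.remove(offset, future);
                future.completeExceptionally(err);
            } else {
                future.complete(new Cursor(position));
            }
        });
        return future;
    }

    // placeholder for the real getPositionForOffset call into BookKeeper
    private CompletableFuture<Long> lookupPositionAsync(long offset) {
        return CompletableFuture.supplyAsync(() -> offset);
    }
}
```

Because nothing blocking runs while a map lock is held, a callback that calls `putIfAbsent` on the same key can no longer be wedged behind the position lookup.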
