[SignalR] Avoid blocking common InvokeAsync usage#42796
Conversation
```csharp
finally
{
    // Re-acquire the SemaphoreSlim, this is because when the hub method completes it will call release
    await _parallelInvokes.WaitAsync(CancellationToken.None);
}
```
Just an idea:
Must `_parallelInvokes` be a `SemaphoreSlim`, or can other synchronization types be used?
E.g. with a bounded channel (of `maxInvokeLimit`) the reader/writer pair can be used for synchronization too. If the "wait" succeeds synchronously, the advantage is that it's a `ValueTask`, saving the allocation for the `Task`.
(Of course, if it's mostly truly async, the benefit is much lower.)
PS: in the past I ran some benchmarks (in my projects) with such an approach, and the channels variant looked better than `SemaphoreSlim` (mostly due to `Task` → `ValueTask`).
Channels would completely break the concept I'm trying to implement here, so while it might be better from a Task alloc perspective, it is much worse from a behavior perspective 😞
If we decide to go a different direction, then I think the Channel is worth revisiting to see if we can replace SemaphoreSlim.
Maybe we talked past each other.
I meant something like this
```csharp
using System.Threading.Channels;

ChannelBasedSemaphore semaphore = new(1);

ValueTask task0 = semaphore.WaitAsync();
ValueTask task1 = semaphore.WaitAsync();

Console.WriteLine(task0.IsCompleted); // true
Console.WriteLine(task1.IsCompleted); // false

semaphore.Release();
await Task.Yield();

Console.WriteLine(task1.IsCompleted); // true

internal class ChannelBasedSemaphore
{
    private readonly Channel<int> _channel;

    public ChannelBasedSemaphore(int initialCount)
    {
        _channel = Channel.CreateBounded<int>(initialCount);
        for (int i = 0; i < initialCount; ++i)
        {
            _channel.Writer.TryWrite(42); // any dummy value will do
        }
    }

    public void Release() => _channel.Writer.TryWrite(42);

    public async ValueTask WaitAsync(CancellationToken cancellationToken = default)
    {
        _ = await _channel.Reader.ReadAsync(cancellationToken);
    }
}
```

as a (kind of drop-in) replacement for `SemaphoreSlim`.
(Note: only hacked together, no perf optimization, etc.)
Keep the channel, wrap it in a struct or class that hides the channel type and mimics the semaphore, with a comment that we're saving allocations 😄

Do we need a limit here?
With a limit, do we throw when another …?

Let's keep it as is, no limit.
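Putting those decisions together (a sealed wrapper that hides the channel type, mimics the semaphore API, and is unbounded since we keep no limit), a sketch could look like the following. The member names here (`TryAcquire`, `WaitAsync`, `Release`) are hypothetical illustrations, not necessarily the API the PR ships:

```csharp
using System;
using System.Threading;
using System.Threading.Channels;
using System.Threading.Tasks;

var semaphore = new ChannelBasedSemaphore(1);
Console.WriteLine(semaphore.TryAcquire()); // True: permit consumed
Console.WriteLine(semaphore.TryAcquire()); // False: permit is out
semaphore.Release();
Console.WriteLine(semaphore.TryAcquire()); // True again

// Hypothetical sealed wrapper: hides Channel<T>, mimics a semaphore,
// unbounded per the "no limit" decision above.
internal sealed class ChannelBasedSemaphore
{
    private readonly Channel<int> _channel = Channel.CreateUnbounded<int>();

    public ChannelBasedSemaphore(int initialCount)
    {
        for (var i = 0; i < initialCount; i++)
        {
            _channel.Writer.TryWrite(1);
        }
    }

    // Synchronous, non-blocking acquire: succeeds only when a permit is queued.
    public bool TryAcquire() => _channel.Reader.TryRead(out _);

    // Completes synchronously when a permit is free, so the common path avoids
    // the Task allocation SemaphoreSlim.WaitAsync would incur.
    public ValueTask WaitAsync(CancellationToken cancellationToken = default)
    {
        if (_channel.Reader.TryRead(out _))
        {
            return ValueTask.CompletedTask;
        }
        return WaitSlowAsync(cancellationToken);
    }

    private async ValueTask WaitSlowAsync(CancellationToken cancellationToken)
    {
        _ = await _channel.Reader.ReadAsync(cancellationToken);
    }

    // Never blocks and never loses a permit, since the channel is unbounded.
    public void Release() => _channel.Writer.TryWrite(1);
}
```

The fast path returns `ValueTask.CompletedTask` without awaiting, which is where the allocation saving over `SemaphoreSlim` comes from when the wait succeeds synchronously.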
```csharp
{
    if (channelSemaphore.AttemptWait())
    {
        _ = RunTask(callback, channelSemaphore, state);
```
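For reference, the body of `AttemptWait` isn't shown in the diff; on a channel-backed semaphore it would plausibly be a synchronous `TryRead`, i.e. a non-blocking acquire that only succeeds when a permit is already queued. A hypothetical sketch (a bare channel stands in for the semaphore's internal one, this is not the PR's actual code):

```csharp
using System;
using System.Threading.Channels;

// Stand-in for the semaphore's internal channel, pre-loaded with one permit.
var permits = Channel.CreateBounded<int>(1);
permits.Writer.TryWrite(1);

Console.WriteLine(AttemptWait(permits)); // True: permit consumed
Console.WriteLine(AttemptWait(permits)); // False: nothing queued, so the caller does not block

// Hypothetical AttemptWait: a synchronous, non-blocking acquire via TryRead.
static bool AttemptWait(Channel<int> permits) => permits.Reader.TryRead(out _);
```

This matches the pattern above: when `AttemptWait` returns false, the caller can fall back to an async path instead of blocking the receive loop.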
We should be delaying shutdown if the callbacks are still running. Is that happening? I have a feeling that ChannelBasedSemaphore will need to implement IAsyncDisposable or have some way to wait for all capacity to return before moving on.
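One way to sketch that "wait for all capacity to return" idea (hypothetical, assuming shutdown can drain the semaphore's channel): shutdown re-acquires every permit, so it can only complete once every in-flight callback has called `Release`.

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

const int maxInvokes = 3;

// Stand-in for the semaphore's internal channel, filled to capacity.
var permits = Channel.CreateBounded<int>(maxInvokes);
for (var i = 0; i < maxInvokes; i++) permits.Writer.TryWrite(1);

// Two callbacks "running": each holds a permit.
permits.Reader.TryRead(out _);
permits.Reader.TryRead(out _);

var shutdown = DrainAsync(permits, maxInvokes);
Console.WriteLine(shutdown.IsCompleted); // False: two permits are still out

permits.Writer.TryWrite(1); // first callback finishes (Release)
permits.Writer.TryWrite(1); // second callback finishes (Release)
await shutdown;
Console.WriteLine("all capacity returned");

// Reads back every permit; completes only when none are outstanding.
static async Task DrainAsync(Channel<int> permits, int count)
{
    for (var i = 0; i < count; i++)
    {
        _ = await permits.Reader.ReadAsync();
    }
}
```

An `IAsyncDisposable` implementation on the semaphore could wrap exactly this drain loop in `DisposeAsync`.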
halter73 left a comment:
I like it!
I assume that hub methods that don't block the receive loop, like streaming and client-invoking methods, cannot delay shutdown. I don't think this is the most critical thing, since streaming already behaves this way, but we should probably file a follow-up issue.
Right, so today if you enable parallel invokes we don't block the receive loop (unless you're at max invokes + 1), and consequently do delay some of the shutdown logic. Tomorrow (after this change), you won't block the receive loop for non-parallel invokes unless you have a pending invoke. So it's a slight change, but we always do part of the shutdown logic regardless of the receive loop state; this is mostly delaying calling …

Approved for RC1; this fixes a potentially bad issue in a new-to-7 feature.
Had to rebase due to conflicts with another change in test files.
* [SignalR] Avoid blocking common InvokeAsync usage
* channel
* fixup test
* fb
* sealed
* crazy

Proposal to fix #41997
Pros:
Cons:
* `IHubContext` injected and used in Hubs for `InvokeAsync` can still block
* Still need to define `MaximumParallelInvocationsPerClient > 1`
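For context on that last point, `MaximumParallelInvocationsPerClient` is a `HubOptions` setting. A minimal sketch of enabling it in a typical minimal-hosting app (hub name and route here are placeholders):

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.SignalR;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

// Allow more than one hub method to run concurrently per client (default is 1).
builder.Services.AddSignalR(options =>
{
    options.MaximumParallelInvocationsPerClient = 2;
});

var app = builder.Build();
app.MapHub<ChatHub>("/chat"); // ChatHub is a placeholder hub for illustration
app.Run();

internal sealed class ChatHub : Hub
{
}
```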