Description
The DP scheduler is introducing unacceptable latency to the stream. When used with Google AEC (see PR #8571), the capture pipeline is delaying delivery of output data by at least 10ms.
The DP component accumulates input and runs its analysis in 10ms blocks, but it is fed by a capture pipeline running at a 1ms period. There is a warning condition in the new code[1] that detects when process() would need to run the AEC analysis twice in one call (which it can't, because it only has output buffer space for one block's result) due to too much data waiting in the input buffers (both its own and the upstream sources'). See here:
```c
comp_dbg(mod->dev, "AEC sink backed up!");
```
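For illustration, the condition that warning guards can be sketched as a simple threshold check. This is a hedged sketch only: `sink_backed_up()` and its parameters are hypothetical names, not the actual SOF implementation.

```c
#include <stdbool.h>

/*
 * Hypothetical sketch of the condition behind the warning above:
 * process() has output space for one analysis block, so if two or
 * more full blocks are already queued on the input side, a single
 * call cannot catch up. Names are illustrative, not SOF code.
 */
static bool sink_backed_up(int queued_ms, int block_ms)
{
	/* A second full block is waiting, but only one can be drained. */
	return queued_ms >= 2 * block_ms;
}
```

With the 10ms block size described above, `sink_backed_up(20, 10)` is true, which matches the steady state observed in practice.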
This is a "comp_dbg()" currently because if I enable it process() ends up spamming the log at 100 Hz with the warning: at steady state there are always 20ms+ worth of data to analyze in process.
Discussion in #8621 implies that this is the DP queueing code deliberately rate-limiting the output, which I'm having a hard time understanding: simply by virtue of running slower, the component already provides plenty of output buffering.
Basically: we can't be doing this. The requirement has to be that as soon as output data is available from the component, it gets flushed downstream to the host ALSA stream as fast as possible. Right now AEC sees much worse latency under DP than it does under a synchronous 10ms pipeline.
[1] You can also see the effect in the unadulterated mtl-007-drop-stable branch by, e.g., logging the fill status of the input buffers in process().