ASoC: SOF: ipc4/Intel: Place constraint on the PERIOD_TIME based on the DSP host buffer size (which is provided in ms) for playback #4858
```diff
@@ -263,6 +263,18 @@ int hda_dsp_pcm_open(struct snd_sof_dev *sdev,
 	snd_pcm_hw_constraint_mask64(substream->runtime, SNDRV_PCM_HW_PARAM_FORMAT,
 				     SNDRV_PCM_FMTBIT_S16 | SNDRV_PCM_FMTBIT_S32);
 
+	/*
+	 * Set a constraint on the period time so that it cannot be smaller
+	 * than the DSP host buffer size. The host buffer size is the maximum
+	 * DMA burst size that can happen, on stream start for example.
+	 * The period size must not be smaller than this.
+	 */
+	if (spcm->stream[substream->stream].dsp_buffer_time_ms)
+		snd_pcm_hw_constraint_minmax(substream->runtime,
+					     SNDRV_PCM_HW_PARAM_PERIOD_TIME,
+					     spcm->stream[substream->stream].dsp_buffer_time_ms * USEC_PER_MSEC,
+					     UINT_MAX);
+
 	/* binding pcm substream to hda stream */
 	substream->runtime->private_data = &dsp_stream->hstream;
 	return 0;
```
Collaborator:

This is not a completely simple statement to explain (why the period size cannot be smaller). Purely by ALSA semantics, smaller period sizes can be allowed; the period size is just a reporting interval. So I think this patch is more about encoding a practical lower limit, since many existing applications are not prepared to handle the behaviour that results if the DMA burst size is longer than the period size. But yeah, just nitpicking on why we have the "must" here. For the purposes of this patch this is OK: there is no functional need to allow such small sizes, so why not just set the constraint.
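To make the effect concrete, here is a minimal userspace sketch, assuming alsa-lib; the device name "hw:0,0" and the 1 ms request are illustrative, not from the patch. With the new constraint in place, hw_params refinement rounds a too-small period time request up to the driver's minimum instead of handing the application a period shorter than the DMA burst.

```c
#include <stdio.h>
#include <alsa/asoundlib.h>

int main(void)
{
	snd_pcm_t *pcm;
	snd_pcm_hw_params_t *params;
	unsigned int period_time = 1000;	/* ask for a 1 ms period (in us) */
	int dir = 0;

	if (snd_pcm_open(&pcm, "hw:0,0", SND_PCM_STREAM_PLAYBACK, 0) < 0)
		return 1;

	snd_pcm_hw_params_alloca(&params);
	snd_pcm_hw_params_any(pcm, params);

	/*
	 * Refinement happens against the constraints installed at pcm open
	 * time, so the granted value cannot go below dsp_buffer_time_ms.
	 */
	snd_pcm_hw_params_set_period_time_near(pcm, params, &period_time, &dir);
	printf("granted period time: %u us\n", period_time);

	snd_pcm_close(pcm);
	return 0;
}
```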
Collaborator:

I was wondering if this could cause unexpected issues with large values (e.g. a 100 ms buffer) for apps that want a specific number of periods. But yeah, in all the cases I thought through it's better to fail early. On the other hand, I can see a lot more confusion with apps that set the period size too low and expect period wakeups to come at a steady rate.

Collaborator (Author):

Right, that is true. On the other hand, those apps will have other problems anyway if they want to use small periods while the DMA burst is of a different magnitude...
Collaborator:

@ujfalusi I understand that if you don't set this constraint you will see an underrun immediately when the period size is smaller than the burst size. But for my education, when is the period time set? I am confused about how setting this constraint during pcm open helps.
Collaborator (Author):

I will revisit this, as I think the PERIOD_TIME min is not the right one to constrain. I think we will place the constraint on the min BUFFER_TIME with the dsp_buffer_time_ms; that is the absolute minimum buffer size applications can have without spinning on it during the first burst. Hrm, but that will not work with DeepBuffer if the app takes the min buffer size and uses 2 periods, will it? It is on the edge imho.
Collaborator (Author):

The constraint on BUFFER_TIME still leaves the period size unconstrained. So the constraint must be on the period size, or we should make the buffer size at least twice the host DMA buffer and leave the period alone? In the end both result in the same buffer_size_min limit; I wonder which one would be better and more correct for applications.
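For comparison, a sketch of the alternative weighed in this thread: constrain the minimum BUFFER_TIME to twice the host DMA buffer time and leave the period size alone. This is a hypothetical variant for discussion, not what the patch implements; it assumes the same hda_dsp_pcm_open() context as the diff above.

```c
/*
 * Hypothetical alternative (not in this patch): require the ALSA buffer
 * to cover at least two host DMA bursts, leaving the period size free.
 * For a two-period configuration this ends up at the same buffer_size_min
 * limit as the PERIOD_TIME constraint.
 */
if (spcm->stream[substream->stream].dsp_buffer_time_ms)
	snd_pcm_hw_constraint_minmax(substream->runtime,
				     SNDRV_PCM_HW_PARAM_BUFFER_TIME,
				     spcm->stream[substream->stream].dsp_buffer_time_ms *
				     2 * USEC_PER_MSEC,
				     UINT_MAX);
```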
Member:

"When we start a playback the whole host DMA buffer is filled regardless of its size." Isn't this the problem statement?
```diff
@@ -412,8 +412,9 @@ static int sof_ipc4_widget_setup_pcm(struct snd_sof_widget *swidget)
 	struct sof_ipc4_available_audio_format *available_fmt;
 	struct snd_soc_component *scomp = swidget->scomp;
 	struct sof_ipc4_copier *ipc4_copier;
+	struct snd_sof_pcm *spcm;
 	int node_type = 0;
-	int ret;
+	int ret, dir;
 
 	ipc4_copier = kzalloc(sizeof(*ipc4_copier), GFP_KERNEL);
 	if (!ipc4_copier)
```
```diff
@@ -447,6 +448,19 @@ static int sof_ipc4_widget_setup_pcm(struct snd_sof_widget *swidget)
 	}
 	dev_dbg(scomp->dev, "host copier '%s' node_type %u\n", swidget->widget->name, node_type);
 
+	spcm = snd_sof_find_spcm_comp(scomp, swidget->comp_id, &dir);
+	if (spcm && dir == SNDRV_PCM_STREAM_PLAYBACK) {
+		struct snd_sof_pcm_stream *sps = &spcm->stream[dir];
+
+		sof_update_ipc_object(scomp, &sps->dsp_buffer_time_ms,
+				      SOF_COPIER_DEEP_BUFFER_TOKENS,
+				      swidget->tuples,
+				      swidget->num_tuples, sizeof(u32), 1);
+		/* Set default DMA buffer size if it is not specified in topology */
+		if (!sps->dsp_buffer_time_ms)
+			sps->dsp_buffer_time_ms = SOF_IPC4_MIN_DMA_BUFFER_SIZE;
+	}
+
 skip_gtw_cfg:
 	ipc4_copier->gtw_attr = kzalloc(sizeof(*ipc4_copier->gtw_attr), GFP_KERNEL);
 	if (!ipc4_copier->gtw_attr) {
```
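As background on the sof_update_ipc_object() call above: it scans the widget's topology tuples for a token and copies the matching value into the destination field, here the playback stream's dsp_buffer_time_ms. The following is a simplified, self-contained sketch of that lookup; the struct layout and function name are illustrative, not the kernel's actual definitions.

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative tuple layout; the kernel's struct snd_sof_tuple differs. */
struct tuple {
	uint32_t token;
	uint32_t value;
};

/*
 * Copy the value of the first tuple matching 'token' into *dst.
 * Returns 0 on success, -1 if the topology did not set the token,
 * in which case the caller falls back to a default (as the diff does
 * with SOF_IPC4_MIN_DMA_BUFFER_SIZE).
 */
static int update_object_u32(uint32_t *dst, uint32_t token,
			     const struct tuple *tuples, size_t num)
{
	for (size_t i = 0; i < num; i++) {
		if (tuples[i].token == token) {
			*dst = tuples[i].value;
			return 0;
		}
	}
	return -1;
}
```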
Collaborator:

Just a nit: can we not squash all these three patches into one?

Collaborator (Author):

I don't like patches touching core and platform code at the same time when it can be avoided.