
Conversation

@lyakh (Collaborator) commented Jul 28, 2021

With connected pipelines, if an XRUN occurs on one of them, all of them should be stopped.

@lyakh lyakh marked this pull request as ready for review July 28, 2021 09:02
@lyakh lyakh mentioned this pull request Jul 28, 2021
mrajwa previously requested changes Jul 28, 2021
src/audio/host.c Outdated
Contributor:

Hmm, I'd rather return "ret", TBH; this way we are consistent, always returning the same variable and not magic numbers.

Collaborator Author:

I usually approach this firstly from the PoV of minimalism, secondly of locality. You don't do

int f(void)
{
	int ret = 0;
	return ret;
}

you just do

int f(void)
{
	return 0;
}

I think this is similar. Locality: when I look at that line, I immediately know what happens there. With the previous version I have to check 9 lines up to see where ret comes from and whether it was modified in between. So, sorry, no, I prefer this version.

Contributor:

Both approaches can be argued either way... I think we should create a macro or enum that maps 0 to some meaningful success constant, like ADSP_SUCCESS in cAVS. Anyway, if you feel that returning 0 instead of a defined ret is more appropriate, then OK; this is not critically important, TBH.
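
A minimal sketch of what such a mapping could look like (SOF_SUCCESS is a hypothetical name used here for illustration, not an existing SOF define):

/* Hypothetical success constant, in the spirit of cAVS' ADSP_SUCCESS */
#define SOF_SUCCESS 0

int f(void)
{
	return SOF_SUCCESS;	/* reads as "success", not a magic number */
}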

Contributor:

magic number

Collaborator Author:

Any non-zero value should do; the value isn't used anyway. I can add a comment.
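
Something like this sketch, assuming a hypothetical pipeline pointer p (the ->xrun_bytes field name comes from this thread):

/* ->xrun_bytes is only ever compared against 0, so any non-zero
 * value works as an "XRUN happened" flag here.
 */
p->xrun_bytes = 1;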

Contributor:

I see, but can't we just assign the exact xrun_bytes value here? Even though we currently treat it as a boolean value, at some point we may need the exact value; this variable was defined as a 32-bit one, which suggests to me that was the original intention.

Contributor:

There should be a comment noting that the fall-through is deliberate.

Collaborator Author:

No, in such trivial cases, where several case statements follow each other with no code between them, no COMPILER_FALLTHROUGH is needed; see line 304 below.
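
For illustration only, not the code under review (the do_*() helpers are hypothetical; COMP_TRIGGER_* and COMPILER_FALLTHROUGH are SOF's):

static int trigger(int cmd)
{
	int ret = 0;

	switch (cmd) {
	case COMP_TRIGGER_STOP:		/* adjacent labels with no code */
	case COMP_TRIGGER_XRUN:		/* between them need no marker */
		ret = do_stop();
		break;
	case COMP_TRIGGER_PRE_START:
		do_prepare();		/* statements execute first... */
		COMPILER_FALLTHROUGH;	/* ...so the marker is required */
	case COMP_TRIGGER_START:
		ret = do_start();
		break;
	default:
		ret = -EINVAL;
		break;
	}

	return ret;
}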

Contributor:

The error message is not logical; "No active pipeline found to continue pipeline processing", maybe?

Collaborator Author:

Well, firstly, "this error should never happen" (TM): if anyone ever sees it, there's a problem. We could put an assert() there, but I don't think we do that for this kind of error. And no, it isn't processing, it's rather linking. I'll adjust.

@lyakh (Collaborator Author) commented Jul 28, 2021

Re-tested with the Zephyr xrun recipe and found that a change from my previous PR #4471 conflicted with this one, so I had to account for it. Now actually tested and working again. Before this PR an XRUN looked like this:

78965599 dw_dma_get_data_size(): xrun detected
78966958 dai_report_xrun(): underrun due to no data available
78968638 comp_underrun(): dev->comp.id = 26, source->avail = 768, copy_bytes = 0
78970933 pipeline_trigger_run(): ret = -22, host->comp.id = 24, cmd = 8
78973461 pipeline_xrun(): Pipelines notification about XRUN failed, ret = -22
78975623 perf comp_copy peak plat 10048 cpu 10050
78977209 pipeline_copy(): ret = -61, start->comp.id = 26, dir = 1
78979110 pipeline_task(): xrun recovery failed! pipeline is stopped.
78980952 task complete 0x9e0563e8 0x1fffa360U
78982507 num_tasks 7 total_num_tasks 7

Now it looks like this:

6768997 dw_dma_get_data_size(): xrun detected
6770339 dai_report_xrun(): underrun due to no data available
6771992 comp_underrun(): dev->comp.id = 26, source->avail = 768, copy_bytes = 0
6774244 pipe trigger cmd 8
6775569 perf comp_copy peak plat 6621 cpu 6621
6777100 pipeline_copy(): ret = -61, start->comp.id = 26, dir = 1
6778977 pipeline_task(): xrun recovery failed! pipeline is stopped.
6781251 task complete 0x9e056648 0x1fffa360U
6782827 num_tasks 8 total_num_tasks 8
6805856 task complete 0x9e0565a0 0x1fffa360U
6807443 num_tasks 7 total_num_tasks 7

Note how, in the new version, two tasks are completed as a result of the XRUN.

@lyakh lyakh force-pushed the xrun branch 2 times, most recently from db48ccd to 703d967 Compare July 28, 2021 12:59
@lyakh lyakh requested a review from mrajwa July 28, 2021 13:01
@lyakh lyakh force-pushed the xrun branch 3 times, most recently from 0c99624 to f1e7af6 Compare July 29, 2021 07:14
@lyakh (Collaborator Author) commented Jul 30, 2021

@mrajwa could you re-review, please?

@mrajwa (Contributor) commented Jul 30, 2021

@lyakh done; in general it is fine for me, only a small concern about xrun_bytes being stored as a boolean value instead of the exact byte count.

@lyakh (Collaborator Author) commented Jul 30, 2021

> @lyakh done; in general it is fine for me, only a small concern about xrun_bytes being stored as a boolean value instead of the exact byte count.

@mrajwa well, if that's a problem, I could try to copy the same xrun_bytes value from the original pipeline to all connected ones. It can be done by scanning the list twice, which shouldn't be a performance issue because the list usually consists of 1 or 2 members. But it's kind of ugly. @lgirdwood what do you think?
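
If we went that way, a generic two-pass sketch could look like this (plain C illustration, not SOF's actual pipeline list API; the struct and helpers are hypothetical):

struct pipe {
	struct pipe *next;		/* connected pipelines, usually 1-2 */
	unsigned int xrun_bytes;
};

/* Pass 1: find the pipeline that actually reported the XRUN */
static unsigned int find_xrun_bytes(struct pipe *head)
{
	struct pipe *p;

	for (p = head; p; p = p->next)
		if (p->xrun_bytes)
			return p->xrun_bytes;

	return 0;
}

/* Pass 2: propagate the exact byte count to all connected pipelines */
static void propagate_xrun_bytes(struct pipe *head)
{
	unsigned int bytes = find_xrun_bytes(head);
	struct pipe *p;

	for (p = head; p; p = p->next)
		p->xrun_bytes = bytes;
}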

@lgirdwood (Member):

> @lyakh done; in general it is fine for me, only a small concern about xrun_bytes being stored as a boolean value instead of the exact byte count.

> @mrajwa well, if that's a problem, I could try to copy the same xrun_bytes value from the original pipeline to all connected ones. It can be done by scanning the list twice, which shouldn't be a performance issue because the list usually consists of 1 or 2 members. But it's kind of ugly. @lgirdwood what do you think?

To be honest, I don't think the host really uses the xrun size within the individual pipelines; it uses the host DMA position to determine where to restart the stream. So it is probably useful for debug, but not useful for the host.

@lyakh (Collaborator Author) commented Aug 3, 2021

> @lyakh done; in general it is fine for me, only a small concern about xrun_bytes being stored as a boolean value instead of the exact byte count.

> @mrajwa well, if that's a problem, I could try to copy the same xrun_bytes value from the original pipeline to all connected ones. It can be done by scanning the list twice, which shouldn't be a performance issue because the list usually consists of 1 or 2 members. But it's kind of ugly. @lgirdwood what do you think?

> To be honest, I don't think the host really uses the xrun size within the individual pipelines; it uses the host DMA position to determine where to restart the stream. So it is probably useful for debug, but not useful for the host.

@lgirdwood @mrajwa I've just grepped the sources and I only see ->xrun_bytes being used as a flag, compared to 0, never as an actual value. So, unless I'm missing something, setting it to 1 seems safe to me.

@lgirdwood (Member):

> @lgirdwood @mrajwa I've just grepped the sources and I only see ->xrun_bytes being used as a flag, compared to 0, never as an actual value. So, unless I'm missing something, setting it to 1 seems safe to me.

Can you check IPC4 messages for any fields relating to glitches or xruns? It may be that IPC4 uses a byte or frame value in reporting stream errors, and if so we need to keep it.

@lyakh (Collaborator Author) commented Sep 6, 2021

SOFCI TEST

@lyakh (Collaborator Author) commented Sep 6, 2021

Tests have been hanging since last week; kicking them again. Also re-tested locally, it still does the right thing.

Fix indentation in edf_schedule.c

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
Remove redundant initialisation, re-use the same return statement.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
When an xrun occurs we currently stop the affected pipelines, which
also leads to stopping the pipeline task. However, the stream can be
using several pipelines, all of which have to be stopped. If not
stopped, tasks belonging to those pipelines will continue to be
scheduled. This patch collects all connected pipelines and stops them
all.

Signed-off-by: Guennadi Liakhovetski <guennadi.liakhovetski@linux.intel.com>
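
In spirit, the change amounts to something like this sketch (hypothetical types and helper, not the actual SOF implementation):

struct pipe {
	unsigned int xrun_bytes;	/* non-zero acts as an XRUN flag */
};

int pipe_trigger_stop(struct pipe *p);	/* hypothetical stop helper */

/* On an XRUN, stop every pipeline connected to the stream so their
 * tasks stop being scheduled, not just the pipeline that reported it.
 */
static int stream_xrun_stop(struct pipe *pipes[], int count)
{
	int i, err, ret = 0;

	for (i = 0; i < count; i++) {
		pipes[i]->xrun_bytes = 1;	/* flag; exact count unused */

		err = pipe_trigger_stop(pipes[i]);
		if (err < 0 && !ret)
			ret = err;		/* keep the first failure */
	}

	return ret;
}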
@lgirdwood (Member):

CI is showing one CML DUT not running, but the other CML DUT is all green.

@lgirdwood lgirdwood merged commit ac5ba21 into thesofproject:main Sep 7, 2021
@lyakh lyakh deleted the xrun branch September 7, 2021 12:52