WIP: Testbench Support For Crossover Component #3139
Conversation
This commit extracts the biquad processing from iir_df2t(). It allows the biquad processing code to be reused independently of the equalizer implementation. Components such as crossover use biquads for processing input, but could not do so before because iir_df2t() was specific to single-output audio processing. The motivation behind this change is to reuse the coefficients of the LR4 biquads in the crossover more freely. An LR4 filter is made of two biquads in series with the same coefficients, so to save memory we can store one copy of each coefficient set and pass it to the biquad processing function. Signed-off-by: Sebastiano Carlucci <scarlucci@google.com>
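As a rough illustration of the reuse this enables, the sketch below runs one LR4 section as two cascaded biquads sharing a single coefficient set; all names here (biquad_coeffs, biquad_process_sample, lr4_process_sample) are placeholders for discussion, not the actual SOF API.

/* Minimal sketch, not the SOF implementation: an LR4 section is two
 * identical biquads in series, so one copy of the coefficients can be
 * shared while each stage keeps its own state. */
struct biquad_coeffs { float b0, b1, b2, a1, a2; };
struct biquad_state { float z1, z2; };

/* Direct form II transposed biquad: one sample in, one sample out. */
static float biquad_process_sample(const struct biquad_coeffs *c,
				   struct biquad_state *s, float x)
{
	float y = c->b0 * x + s->z1;

	s->z1 = c->b1 * x - c->a1 * y + s->z2;
	s->z2 = c->b2 * x - c->a2 * y;
	return y;
}

static float lr4_process_sample(const struct biquad_coeffs *c,
				struct biquad_state state[2], float x)
{
	/* same coefficients, two independent state blocks */
	return biquad_process_sample(c, &state[1],
				     biquad_process_sample(c, &state[0], x));
}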
This commit adds Crossover to the list of SOF components. A crossover filter can be used to split an input into different frequency bands. The number of outputs should be set statically in the topology; the user then uses the control bytes to route the frequency bands to different outputs (similar to the demux component). This commit adds support for the following formats: S16_LE, S24_LE, S32_LE. Signed-off-by: Sebastiano Carlucci <scarlucci@google.com>
This commit adds the topology files for the crossover component. The control bytes are generated by the tools in tune/crossover. Signed-off-by: Sebastiano Carlucci <scarlucci@google.com>
This commit adds the tools to generate the control bytes for the crossover component. To generate the control bytes, run the example_crossover.m script. The parameters of the crossover component are: number of outputs, sink assignments (routing crossover outputs to different pipelines), and frequency cutoffs. To tweak the parameters, modify the values in example_crossover.m and run it. Refer to sof/src/include/user/crossover.h for more information on how the crossover config is structured and how sink assignments are done. Signed-off-by: Sebastiano Carlucci <scarlucci@google.com>
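As the commit says, sof/src/include/user/crossover.h is the authoritative description of the blob; purely for orientation, such a config conceptually carries an output count, a per-output sink assignment, and the filter coefficients, roughly along these lines (field names below are invented for illustration, not taken from the header).

/* Illustrative shape only -- see sof/src/include/user/crossover.h for the
 * real layout; these names and sizes are assumptions. */
#include <stdint.h>

struct crossover_config_sketch {
	uint32_t size;       /* total size of the blob in bytes */
	uint32_t num_sinks;  /* number of crossover outputs */
	int32_t assign[4];   /* sink/pipeline assignment for each output */
	int32_t coef[];      /* packed LR4 biquad coefficients */
};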
This commit includes the whole change from Seb's PR: Testbench support for Crossover https://github.com/sebcarlucci/sof/pull/2/commits Signed-off-by: Pin-chih Lin <johnylin@google.com>
Signed-off-by: Pin-chih Lin <johnylin@google.com>
Signed-off-by: Pin-chih Lin <johnylin@google.com>
Accept user input as "-o output_file1,output_file2,..." to assign multiple output files. Signed-off-by: Pin-chih Lin <johnylin@google.com>
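A minimal sketch of how such a comma-separated list can be split, standalone and not the actual testbench parser (parse_output_files and its parameters are placeholders):

/* Sketch only: split "-o output1.raw,output2.raw" into separate names.
 * Not the real testbench code; names are illustrative. */
#include <string.h>

static int parse_output_files(char *arg, char *output_files[], int max_files)
{
	int num_outputs = 0;
	char *tok = strtok(arg, ",");

	while (tok && num_outputs < max_files) {
		output_files[num_outputs++] = tok;
		tok = strtok(NULL, ",");
	}
	return num_outputs;
}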
I referred to sof-icl-rt711-rt1308-rt715-hdmi.m4, which utilizes sof/pipe-volume-demux-playback. In this case, pipeline n+1 uses sof/pipe-dai-endpoint. Compiling the test topology produces no errors. According to the output log from running the testbench, topology parsing finishes and the route chain looks like what we imagined. Output log: https://paste.googleplex.com/5981280506216448 However, the testbench hangs after the pipe is triggered. I suspect it may be caused by some part of the testbench not accounting for multiple sinks; still under investigation. Signed-off-by: Pin-chih Lin <johnylin@google.com>
Testbench output log: https://paste.googleplex.com/5753664419397632 Signed-off-by: Pin-chih Lin <johnylin@google.com>
src/audio/crossover/crossover.c
Outdated
buffer_unlock(sinks[i], flags);
}

printf("crossover_copy: frames to be proc=%u\n", frames);
https://paste.googleplex.com/5753664419397632
From the output log, "frames" becomes 0 because the free frame count of sink[1] is still 0 after the first crossover_copy() call, whereas it should have returned to 96 as sink[0]'s did.
"frames"=0 means no new data will be processed by cd->crossover_process() (line #699) during this call, and that causes a deadlock.
| for (i = 0; i < frames; i++) { | ||
| x = audio_stream_read_frag_s16(source_stream, idx); | ||
| cd->crossover_split(*x << 16, out, state); | ||
| for (ch = 0; ch < nch; ch++) { |
The diff has become messy now...
What I've done here is restore the crossover_generic.c code as implemented in #2802
(recovering it from Seb's modification that rerouted all the outputs to the channels of one buffer).
Is this just for sanity checking?
Seb's testbench hack rerouted all outputs to the first sink stream buffer only.
In the current state, we want each output to go to the corresponding sink stream buffer (2 sinks in my experiment).
# connect pipelines together
SectionGraph."pipe-sof-second-pipe" {
	index "0"
	index "2"
I found that this index determines the order of the routing dapm(PIPELINE_SOURCE_2, PIPELINE_CROSSOVER_1) within the whole m4 file.
My understanding is that index=2 makes this routing be handled right after the graph of the second pipeline; is that right?
I have never used the index field, @lgirdwood can you explain how it is used?
SSP_CLOCK(fsync, 48000, codec_slave),
SSP_TDM(2, TEST_SSP_PHY_BITS, 3, 3),
SSP_CONFIG_DATA(TEST_DAI_TYPE, 2,
	TEST_SSP_DATA_BITS, TEST_SSP_MCLK_ID)))
From my experiment, this is crucial because we cannot use the same port and link name for two playback DAIs. So I put a hack here to configure SSP2 as well, since the testbench uses SSP5 by default (TEST_DAI_PORT=5).
	exit(EXIT_FAILURE);
}
printf("Test Pipeline:\n");
printf("%s\n", pipeline);
Check the debug message from parse_topology here.
According to the output, the pipeline graph is kind of weird... I'm not sure the message producer is correct?
Test Pipeline:
PCM0P->BUF1.0->CROSSOVER1.0->BUF1.1->SSP5.OUTBUF2.0->CROSSOVER1.0->BUF2.0
The first half looks good. SSP5.OUT is the end of the first pipeline and BUF2.0 is the beginning of the next pipeline (with CROSSOVER1.0 feeding BUF2.0). But why is it feeding into a crossover? And why is it feeding back into the same buffer (BUF2.0)? That is what is weird.
| printf("debug: sched pcm_dev: type=%u, core=%u, id=%u\n", pcm_dev->type, pcm_dev->core, pcm_dev->id); | ||
| p = pcm_dev->cd->pipeline; | ||
| ipc_pipe = &p->ipc_pipe; | ||
| printf("debug: ipc_pipe: comp_id=%u, pipe_id=%u, sched_id=%u\n", ipc_pipe->comp_id, ipc_pipe->pipeline_id, ipc_pipe->sched_id); |
Just like you mentioned, the working pipeline id is 1 and sched_id is 0, so I'm not sure whether buffers belonging to the second pipeline can be correctly processed.
https://paste.googleplex.com/5753664419397632
According to the output log, after the first crossover_copy both the sink[0] (1st pipe) and sink[1] (2nd pipe) streams had 96 frames filled. However, at the beginning of the next crossover_copy call, the sink[0] buffer is empty because its data has been consumed, while the sink[1] buffer is still full (its data has not been consumed).
That sounds right, so we need to schedule the second pipeline so the DAI will drain the second sink.
if (output_file_index == 0) {
	tp->fw_id = comp_id;
}
output_file_index++;
Load filewrite with a different output file name for each DAI.
Argument: -o output1.raw,output2.raw
# 1000, 1, 0, SCHEDULE_TIME_DOMAIN_TIMER,
# PIPELINE_PLAYBACK_SCHED_COMP_1)
# 1000, 0, 0, SCHEDULE_TIME_DOMAIN_TIMER)
DAI_ADD_SCHED(sof/pipe-dai-sched-playback.m4,
I'm still not sure whether we should use DAI_ADD or DAI_ADD_SCHED for the second pipeline in this case.
I think this adds the ability to tie the scheduling of the pipelines together. It might be a good way to look up the id of the additional pipeline and schedule it.
Testbench output log: https://paste.googleplex.com/6407279421161472 Signed-off-by: Pin-chih Lin <johnylin@google.com>
pipe_cl_dbg("pipeline_comp_trigger(), current->comp.id = %u, dir = %u",
	    dev_comp_id(current), dir);

printf("is_single_ppl=%d, is_same_sched=%d\n", is_single_ppl, is_same_sched);
I found that in pipeline.c, pipeline_comp_trigger() takes "is_same_sched" into account, and then pipeline_comp_trigger_sched_comp() lists the current pipeline into ctx. So using DAI_ADD_SCHED should be correct.
According to the output log:
https://paste.googleplex.com/6407279421161472
pipeline_comp_trigger(), current->comp.id = 0, dir = 0
is_single_ppl=1, is_same_sched=1
pipeline_comp_trigger(), current->comp.id = 1, dir = 0
is_single_ppl=1, is_same_sched=1
pipeline_comp_trigger(), current->comp.id = 7, dir = 0
is_single_ppl=0, is_same_sched=1
pipeline_comp_trigger(), current->comp.id = 4, dir = 0
is_single_ppl=1, is_same_sched=1
comp.id 7 is the second DAI; in theory it should be listed.
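My reading of that gating, written out as a tiny helper for discussion only (this is an interpretation, not a quote of pipeline.c, and the real code may differ):

/* Interpretation only, not the pipeline.c source: a component is visited
 * by the trigger walk when it is on the triggering pipeline or shares its
 * scheduling component; with is_single_ppl=0 and is_same_sched=1, comp.id 7
 * (the second DAI) should therefore still be listed. */
#include <stdbool.h>

static bool trigger_visits_comp(bool is_single_ppl, bool is_same_sched)
{
	return is_single_ppl || is_same_sched;
}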
list_for_item(tlist, &ctx->pipelines) {
	p = container_of(tlist, struct pipeline, list);

	printf("pipeline_schedule_triggered: pipeline number %d\n", i);
pipeline_schedule_triggered() is the first function to call pipeline_schedule_copy(p, 0). Reading the code, this is a for-loop iterating over all pipelines in the list, and after adding the debug code I found there are 2 pipeline iterations (so both pipelines should have called pipeline_schedule_copy(p, 0)).
Output log:
https://paste.googleplex.com/6407279421161472
pipeline_schedule_triggered: pipeline number 1
unknown pipeline_schedule_copy
pipeline_schedule_triggered: pipeline number 2
unknown pipeline_schedule_copy
		dapm(PIPELINE_SOURCE_2, PIPELINE_CROSSOVER_1)
	]
}
The topology debug message looks correct after I moved SectionGraph here.
https://paste.googleplex.com/6407279421161472
Test Pipeline:
PCM0P->BUF1.0->CROSSOVER1.0->BUF1.1->SSP5.OUTCROSSOVER1.0->BUF2.0->SSP2.OUT
But in fact this is a bug in tplg_parser.c, which assumes the routings in the topology file are already sorted by pipeline.
tools/testbench/testbench.c
Outdated
while (frcd->fs.reached_eof == 0)
	pipeline_schedule_copy(p, 0);
// should we add pipeline_schedule_copy to the second pipeline?
After learning that pipeline_schedule_copy() is called for more than one pipeline (as explained in pipeline.c), I started to doubt this code: here we only call pipeline_schedule_copy() for the first pipeline; should we also add the second one?
HOORAY. It works after I added the following code:
struct pipeline *p2;

pcm_dev = ipc_get_comp_by_id(sof.ipc, 7); // SSP2.OUT
p2 = pcm_dev->cd->pipeline;
while (frcd->fs.reached_eof == 0) {
	pipeline_schedule_copy(p, 0);
	pipeline_schedule_copy(p2, 0);
}
I have checked the output waveforms and they are as expected.
However, this code is fishy; we need to clean it up.
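One possible way to tidy this up, sketched under the assumption that the parser could record the DAI component ids while loading the topology (dai_comp_ids, num_dais and MAX_OUTPUT_PIPELINES are hypothetical additions, not existing testbench fields):

/* Sketch only: schedule every output pipeline instead of hardcoding
 * comp id 7. The dai_comp_ids/num_dais fields are assumed additions. */
struct pipeline *out_pipes[MAX_OUTPUT_PIPELINES];
int i, n = 0;

for (i = 0; i < tp->num_dais; i++) {
	pcm_dev = ipc_get_comp_by_id(sof.ipc, tp->dai_comp_ids[i]);
	if (pcm_dev)
		out_pipes[n++] = pcm_dev->cd->pipeline;
}

while (frcd->fs.reached_eof == 0)
	for (i = 0; i < n; i++)
		pipeline_schedule_copy(out_pipes[i], 0);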
Signed-off-by: Pin-chih Lin <johnylin@google.com>
@singalsu any comments here?
@lgirdwood this is blocked by #3210 and will be rebased on that once it has landed. Also, we have some more work to do here to fix up the biquad selection algorithm.