ll_schedule: add a DSP load tracker to ll_tasks_execute #5407
Conversation
@jsarha any update?
}

/* perf measurement window size 2^x */
#define CHECKS_WINDOW_SIZE 10
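(Editor's illustration, not code from the PR: the point of a 2^x window size is that the running average needs no division on the DSP, only a right shift by the exponent. The function name below is hypothetical.)

#include <stdint.h>

/* Sketch: CHECKS_WINDOW_SIZE = 10 means the window covers 1 << 10 = 1024
 * samples, so the mean of the accumulated cycle count is a plain shift.
 */
#define CHECKS_WINDOW_SIZE 10

static uint32_t window_avg(uint64_t cycles_sum)
{
	return (uint32_t)(cycles_sum >> CHECKS_WINDOW_SIZE);
}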
It looks to me like a debug feature. Logging once per second doesn't look very heavy, but I still suggest putting it behind a Kconfig option.
@jsarha can we give it the same Kconfig option as the equivalent feature in the Zephyr LL scheduler, i.e. the same Kconfig symbol should enable both?
Can we also make the output trace message match the Zephyr version (same units and strings), so that it's easy for developers to compare performance between xtos and Zephyr when optimising? Thanks.
I have a commit in my own repo adding a Kconfig option under the Debug menu, but is that what you wanted? Or should it be without a menu presence, and maybe in the schedule directory together with the changes?
We need to meet very aggressive power limitations and count every single cycle, so if the performance log is the only purpose of calculating CPU load, it should be off when there is no need for it (e.g. in production). Unless there are other purposes?
There is now a Kconfig option; are you happy with it? Can I resolve this conversation?
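(Editor's illustration, not the exact code in the PR: the usual pattern for making such a feature zero-cost when its Kconfig switch is off. The symbol CONFIG_LL_DSP_LOAD_TRACKER and the function name are hypothetical placeholders.)

#include <stdint.h>

#ifdef CONFIG_LL_DSP_LOAD_TRACKER
void ll_load_track(uint32_t cycles);			/* real tracker, defined elsewhere */
#else
static inline void ll_load_track(uint32_t cycles) {}	/* compiles away when disabled */
#endif

With the stub variant the call sites stay unconditional and the compiler removes them entirely in production builds, which addresses the cycle-counting concern above.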
I think this now does more or less the same thing as the Zephyr side implementation. Is there anything more needed here before I mark this ready for review? Maybe squash everything into a single commit?
This is how the output on the xtos side looks now:
@jsarha if this aligns the output between xtos and Zephyr then we should use this as a starting point and add future performance logging in tandem for both in the same PRs. @marcinszkudlinski @mwasko is this plan OK for you?
if (load->cycles_peak < diff)
	load->cycles_peak = diff;
if (task->cycles_max < diff)
	task->cycles_max = diff;
Why max instead of peak? 'Max' suggests a range; here we clearly want to track the 'peak' value within a given timespan.
*_peak was my name for the variable before the per-task implementation. With the per-task implementation I decided to reuse the same struct task members as the Zephyr side implementation does; those struct task members just happen to have different names.
mwasko
left a comment
Please check the comment.
@lgirdwood, @jsarha the LL average and peak performance logging is SOF audio specific, since we need to track MCPS consumption within a sys tick processing period that Zephyr is not aware of. We can of course use Zephyr thread statistics to obtain the data in each sys tick, but the logging and tracking of the sys tick average/peak will most likely remain a part of SOF itself.
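(Editor's side note, purely illustrative: if Zephyr thread statistics were used as the data source, the per-tick cycle counts could be derived roughly as below, with the averaging and peak tracking staying in SOF as described above. This assumes Zephyr's thread runtime statistics are enabled in Kconfig; it is not code from this PR.)

#include <zephyr/kernel.h>

/* Sketch: sample the current thread's accumulated execution cycles once per
 * sys tick and diff against the previous sample to get that tick's cost.
 */
static uint64_t prev_cycles;

static uint64_t tick_cycles_used(void)
{
	k_thread_runtime_stats_t stats;
	uint64_t used;

	k_thread_runtime_stats_get(k_current_get(), &stats);
	used = stats.execution_cycles - prev_cycles;
	prev_cycles = stats.execution_cycles;

	return used;
}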
Maybe for simplicity I'll squash all changes into one commit, with the Kconfig menu option under the Debug menu. The evolution process is not that relevant, but it can still be found here: https://github.com/jsarha/sof/tree/xtos-dsp-load-evolution
lgirdwood
left a comment
I'm happy as long as we can compare data between xtos and Zephyr.
Calculates the average and records the maximum execution cycles taken by each execution round, and prints the results out once every 1024 rounds, i.e. roughly once per second with a typical 1 ms tick configuration. The output mimics the Zephyr side scheduler output. There is a Kconfig menu option under the Debug menu to disable this feature; currently the default is on.
Signed-off-by: Jyri Sarha <jyri.sarha@intel.com>
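(Editor's sketch of the behaviour this commit message describes: accumulate per-round cycles, track the maximum, and report once every 1024 rounds. The struct, the names and the reporting hook are placeholders, not the exact code in the PR.)

#include <stdint.h>

#define LL_WINDOW_BITS	10	/* report every 1 << 10 = 1024 rounds */

struct ll_load_stats {		/* hypothetical bookkeeping state */
	uint64_t cycles_sum;
	uint32_t cycles_max;
	uint32_t rounds;
};

void report_ll_load(uint32_t avg, uint32_t max);	/* placeholder trace hook */

static void ll_load_update(struct ll_load_stats *s, uint32_t cycles)
{
	/* accumulate this round and remember the worst round in the window */
	s->cycles_sum += cycles;
	if (s->cycles_max < cycles)
		s->cycles_max = cycles;

	/* once per window, emit the average and maximum, then restart */
	if (++s->rounds == (1u << LL_WINDOW_BITS)) {
		report_ll_load((uint32_t)(s->cycles_sum >> LL_WINDOW_BITS),
			       s->cycles_max);
		s->cycles_sum = 0;
		s->cycles_max = 0;
		s->rounds = 0;
	}
}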
@mwasko it has been a while and I no longer remember what this comment was referring to.
lgirdwood
left a comment
@mwasko I'm good for this, to help with current issues. We can always build on the data output and align as we go forward.
Add a lightweight load calculation to ll_tasks_execute() and print out
the average every 2^10 runs (roughly once per second on most systems).
Example log output:
--cut--
[ 6950848.421715] ( 809129.187500) c0 ll-schedule ./schedule/ll_schedule.c:139 INFO ll avg 19720/38400
[ 8868838.449667] ( 679100.000000) c0 ll-schedule ./schedule/ll_schedule.c:139 INFO ll avg 19658/38400
[ 10802857.643650] ( 565147.187500) c0 ll-schedule ./schedule/ll_schedule.c:139 INFO ll avg 19822/38400
--cut--
Signed-off-by: Kai Vehmanen <kai.vehmanen@linux.intel.com>
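(Editor's note on reading the log lines above, assuming the second number is the cycle budget of one tick: an average of 19720 cycles out of 38400 is roughly 51% DSP load per tick. A trivial, self-contained way to turn the pair into a percentage:)

#include <stdio.h>

int main(void)
{
	unsigned int avg = 19720, budget = 38400;	/* values from the example log */

	/* integer percentage of the tick budget used on average (~51%) */
	printf("ll load: %u%%\n", avg * 100 / budget);
	return 0;
}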