Conversation

@Flamefire
Contributor

This uses numpy to print the most important information about the benchmark run to stdout:

  • script invoked
  • scorep parameters
  • summary of timings (number of repetitions, min, max, mean and median of times)

I moved the `-m scorep` out to `scorep_settings` so it is shown in the script summary. This also simplifies the code: only one parameter has to be passed to enable it (`-m scorep` plus the settings) instead of a bool and a potentially empty list that is ignored when the bool is false.
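A minimal sketch of what such a summary printer could look like. The function and argument names (`format_summary`, `timings`) are my assumptions for illustration, not the PR's actual code; it only mirrors the output format shown below.

```python
import numpy as np

def format_summary(script, scorep_settings, timings):
    """Return the summary lines for one benchmark configuration.

    `timings` maps a repetition count to the list of measured times
    (one entry per run of the benchmark).
    """
    lines = ["#########", "%s: %s" % (script, scorep_settings), "#########"]
    # Field width taken from the widest repetition count so the rows
    # are right-aligned in one column.
    width = max(len(str(reps)) for reps in timings)
    for reps, times in sorted(timings.items()):
        t = np.asarray(times)
        lines.append("%*d: Range=%.4f-%.4f Mean=%.4f Median=%.4f"
                     % (width, reps, t.min(), t.max(), t.mean(), np.median(t)))
    return lines

print("\n".join(format_summary(
    "bm_simplefunc.py", ["-m", "scorep", "--instrumenter-type=dummy"],
    {100000: [0.1, 0.2], 200000: [0.3, 0.4]})))
```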

Example output (part) looks like this:

#########
bm_simplefunc.py: ['-m', 'scorep', '--instrumenter-type=dummy']
#########
100000: Range=0.0000-0.0000 Mean=0.0000 Median=0.0000
200000: Range=0.0000-0.0000 Mean=0.0000 Median=0.0000
300000: Range=0.0000-0.0000 Mean=0.0000 Median=0.0000
400000: Range=0.0000-0.0000 Mean=0.0000 Median=0.0000
500000: Range=0.0000-0.0000 Mean=0.0000 Median=0.0000
#########
bm_simplefunc.py: []
#########
100000: Range=0.0000-0.0000 Mean=0.0000 Median=0.0000
200000: Range=0.0000-0.0000 Mean=0.0000 Median=0.0000
300000: Range=0.0000-0.0000 Mean=0.0000 Median=0.0000
400000: Range=0.0000-0.0000 Mean=0.0000 Median=0.0000
500000: Range=0.0000-0.0000 Mean=0.0000 Median=0.0000

(The zeros are there because I commented out the actual invocation.)
As you can see, the output is directly usable for further analysis, or at least gives a quick answer on how fast it is. This works well with #101, which allows selectively running one or more test configurations.

The width of the repetitions (the first number) is automatically determined from the maximum value, so all rows are right-aligned for easier comparison. Example:

#########
bm_simplefunc.py: ['-m', 'scorep', '--instrumenter-type=dummy']
#########
 100000: Range=0.0000-0.0000 Mean=0.0000 Median=0.0000
 200000: Range=0.0000-0.0000 Mean=0.0000 Median=0.0000
 300000: Range=0.0000-0.0000 Mean=0.0000 Median=0.0000
 400000: Range=0.0000-0.0000 Mean=0.0000 Median=0.0000
5000000: Range=0.0000-0.0000 Mean=0.0000 Median=0.0000
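The alignment above can be reproduced with Python's variable-width format specifier; a small sketch (not the PR's code), assuming the repetition counts are known up front:

```python
# Derive the field width from the longest repetition count, then use the
# "*" width specifier so every number is right-aligned in the same column.
reps = [100000, 200000, 300000, 400000, 5000000]
width = len(str(max(reps)))
for r in reps:
    print("%*d:" % (width, r))
```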

@Flamefire
Contributor Author

CI failure is unrelated. Fix in #103

@AndreasGocht
Collaborator

@rschoene Please have a look. I am ok with this.

@Flamefire
Contributor Author

Rebased to include the CI fix.

Can this be included soon? The change is basically cosmetic only (i.e. the created results.pkl doesn't change) and, together with #101, is very useful for quickly evaluating the performance impact of a change (e.g. the CString change or the new maps from #105).

Member

@rschoene left a comment


Looks good to me

@AndreasGocht AndreasGocht merged commit 6fa1cfc into score-p:master Oct 5, 2020
@Flamefire Flamefire deleted the benchmark_report branch October 5, 2020 13:53