
Conversation

@Flamefire
Contributor

I found myself modifying the benchmark script multiple times to run only a single benchmark or instrumenter, or to use a different number of repetitions.

To make this easier I added a CLI that allows changing the behavior via parameters. Without any parameters it behaves exactly as before, but now you can also do `./benchmark.py --test bm_baseline.py --instrumenter cTrace cProfile --loop-count 1e4 2e4 5e6`

I think it is simple, self-descriptive, and flexible. I didn't include an option to vary the loop count per benchmark, to keep it simple; since you can already select a single benchmark, I don't think it is required. By default it still uses the pre-defined iteration counts per benchmark.
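The options described above could be set up with `argparse` along these lines. This is a hypothetical sketch, not the actual PR code: the option names come from the example invocation, while the `nargs`, types, and `None`-means-default behavior are assumptions.

```python
import argparse


def parse_args(argv=None):
    """Sketch of a CLI matching the options in the example invocation.

    All defaults are None, meaning "behave exactly as before":
    run all benchmarks, all instrumenters, pre-defined loop counts.
    """
    parser = argparse.ArgumentParser(description="Run the benchmarks")
    parser.add_argument("--test", nargs="+", default=None,
                        help="Benchmark script(s) to run (default: all)")
    parser.add_argument("--instrumenter", nargs="+", default=None,
                        help="Instrumenter(s) to use (default: all)")
    # float accepts scientific notation such as 1e4
    parser.add_argument("--loop-count", nargs="+", type=float, default=None,
                        help="Loop counts (default: per-benchmark presets)")
    return parser.parse_args(argv)


if __name__ == "__main__":
    # Reproduce the example from the PR description
    args = parse_args(["--test", "bm_baseline.py",
                       "--instrumenter", "cTrace", "cProfile",
                       "--loop-count", "1e4", "2e4", "5e6"])
    print(args.test, args.instrumenter, args.loop_count)
```

With no arguments, `parse_args([])` leaves every option at `None`, which preserves the old behavior.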

@AndreasGocht
Collaborator

@rschoene Please have a look. I am ok with this.

Member

@rschoene rschoene left a comment

That thing. Rest is okay with me

@AndreasGocht AndreasGocht merged commit c84f7b6 into score-p:master Oct 5, 2020
@Flamefire Flamefire deleted the benchmark_args branch October 5, 2020 13:53