Conversation
@jamesp thanks so much for taking this on. Makes me happy to see benchmarking moving forward. Some comments for you, and some stuff that won't fit as code annotations:
- Also need to remove the `tests/benchmarking` directory. The single benchmark in there is part of our offline suite, although we will need to sort out file loading before it comes into this setup.
- Could we add `.gitignore` and `pyproject.toml` exclusions for `benchmarks/.asv`? I've been burned before!
* [ ] What do you think of making a Nox session for this? Too early? I would want to eventually - it avoids everyone having to know the correct `asv run ...` incantation if they want to run benchmarks manually.
- As discussed offline, it would be great if we could soon replicate the ASV environment plugin introduced in SciTools/iris-esmf-regrid#76, to truly defer environment management to a single common place. Something for me to capture in an issue once merged.
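A Nox session along those lines could be as small as the sketch below. This is speculative: the session name, the `asv continuous` flags, and the `upstream/main` ref are all my assumptions rather than anything decided in this PR, and the `@nox.session` decorator is left as a comment so the sketch stands alone without nox installed.

```python
# In a real noxfile.py this function would be decorated with @nox.session;
# the decorator is omitted here so the sketch has no nox dependency.
def benchmarks(session) -> None:
    """Run the ASV suite the same way for everyone (flags illustrative)."""
    session.install("asv")
    # Benchmark the target branch and HEAD, failing on regressions
    # beyond a 10% slowdown.
    session.run("asv", "continuous", "--factor", "1.1", "upstream/main", "HEAD")
```

With this in place, `nox -s benchmarks` would replace the hand-typed incantation.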
Did *not* add existing benchmarks that depend on external data files
lower the log warning level in environment updates

Co-authored-by: Martin Yeo <40734014+trexfeathers@users.noreply.github.com>
Ok I think the checkout logic is way too naive; it will fail with branches and other forks. I'll review it and come up with something better.
Ok @trexfeathers, I think this might be ready for final review.
I just took a look at Clues in the
Should be resolved
Thanks for your persistence @jamesp
🚀 Pull Request
Description
An MVP for CI performance testing PRs.
Things in this PR:
- `benchmarks/benchmarks/*` These are directly lifted from the offline `metrics` repository. No edits made.
- `benchmarks/conda_lock_plugin.py` This is a low-touch change to `asv.plugins.conda.Conda` to use a lockfile as the environment source. The trickiness here is that asv doesn't really have great support for per-commit environment changes: it wants to make one environment and then run a whole load of commits in this same environment. So this environment plugin moves the setting of environment packages to the build stage of testing a specific commit. This may have a poor interaction with parallel processing of environments.
- `.github/workflows/benchmark.yml` This workflow is configured to run on pull requests. It runs asv on the head of the pull request and the head of the target branch. The results are compared and output as an artifact from the run, i.e. you can download the `.asv/results` folder and interrogate it in more detail if needed. If the comparison finds a performance degradation that hits ASV's criteria (currently the default criteria), the run is marked as failed and will show up in the pull request checks section as a failed check. See jamesp#9 for an example of this. If there are no changes above the threshold, the check is a success.
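The lockfile plugin's core move, deferring package installation to the per-commit build stage, can be sketched roughly as below. This is a minimal illustration and not the plugin's actual code: the function name is hypothetical, and it assumes the lockfile is a conda "explicit" spec that `conda install --file` can consume.

```python
from pathlib import Path

def build_stage_sync_command(lockfile: Path, env_prefix: str) -> list[str]:
    """Command a per-commit build step could run so the single shared
    conda environment matches the lockfile checked out at that commit.

    Illustrative only: the real plugin subclasses asv.plugins.conda.Conda.
    """
    # conda can consume an "explicit" lockfile directly via --file, so the
    # environment can be re-synced before building each commit rather than
    # being created once and reused unchanged across all commits.
    return [
        "conda", "install", "--yes",
        "--prefix", env_prefix,
        "--file", str(lockfile),
    ]
```

Running something like this at build time, rather than once at environment creation, is what lets each commit be benchmarked against its own pinned dependencies, at the cost of the parallel-environment caveat noted above.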
Consult Iris pull request check list