Asv benchmarks #2297
NinadBhat wants to merge 21 commits into MDAnalysis:develop from NinadBhat:asv_benchmarks
Conversation
@orbeckst, @richardjgowers, @jbarnoud, @micaela-matta Could you kindly check whether I am adding the benchmarks correctly? I will keep adding benchmarks to this PR as …
Codecov Report
@@            Coverage Diff            @@
##           develop     #2297    +/-  ##
===========================================
- Coverage     89.7%    89.63%   -0.08%
===========================================
  Files          173       173
  Lines        21499     21513      +14
  Branches      2801      2803       +2
===========================================
- Hits         19286     19283       -3
- Misses        1616      1629      +13
- Partials       597       601       +4
Continue to review the full report at Codecov.
orbeckst
left a comment
This looks like the right approach. asv does not do fancy fixtures like pytest, so writing out the different benchmarks the way you did seems a sane approach.
Did you try running asv locally with your tests?
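For reference, a minimal sketch of the benchmark pattern asv discovers (the class and method names here are hypothetical, not from this PR): asv collects plain classes, calls setup() before each benchmark, and times any method whose name starts with time_.

```python
# Hypothetical, self-contained sketch of the asv benchmark pattern.
# asv imports the module, calls setup() before each benchmark method,
# and times the body of every method named time_*.

class TimeListOps:
    """Example benchmark class (names are illustrative only)."""

    def setup(self):
        # Build the data the benchmarks operate on; setup time is
        # excluded from the reported timings.
        self.data = list(range(10000))

    def time_copy(self):
        # asv times this method body.
        list(self.data)

    def time_reverse(self):
        list(reversed(self.data))
```

Running `asv run` against a benchmark directory containing such classes is enough to check the benchmarks locally.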
Do you want this to be merged now, or do you want to keep adding to it? My suggestion would be to add benchmarks in the PRs that implement the corresponding features, so that ASV does not choke on benchmarks that rely on features that aren't merged yet. Can you add a checklist of the PRs that this PR depends on?
I wanted to keep adding to it
At this point, this PR is not dependent on any other PRs. I plan to add further benchmarks once #2299, #2296, and #2293 get merged; then this will be ready to merge.
On Jul 12, 2019, at 12:37, Ninad ***@***.***> wrote:
I wanted to keep adding to it
Ok. Let me know when you need a final review.
benchmarks/benchmarks/ag_methods.py
Outdated
    """Benchmark center_of_mass calculation with
    unwrap active.
    """
    self.ag_unwrap.center_of_mass(unwrap=True, compound='residues')
These look cool, but some are only going to work from 0.20.0 onwards. Is there a way to skip benchmarks if they're not valid? (@tylerjereddy ) Some sort of @requires('0.20.0') decorator?
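No such decorator ships with asv; the following is only a hedged sketch of what the suggested @requires idea could look like. It takes the installed version as an argument (rather than importing MDAnalysis) purely so the sketch is self-contained, and it compares versions as integer tuples rather than strings.

```python
# Hypothetical @requires decorator (not part of asv or MDAnalysis).
# If the installed version is too old, it replaces the class's setup()
# with one that raises NotImplementedError, which asv treats as "skip".

def requires(min_version, installed_version):
    """Skip a benchmark class when installed_version < min_version.

    Both arguments are plain dotted version strings such as '0.20.0'.
    """
    def parse(v):
        # '0.19.2' -> (0, 19, 2); numeric tuple comparison is safe,
        # unlike lexicographic string comparison.
        return tuple(int(part) for part in v.split('.'))

    def decorator(cls):
        if parse(installed_version) < parse(min_version):
            def setup(self):
                # asv marks a benchmark as skipped when setup() raises
                # NotImplementedError.
                raise NotImplementedError
            cls.setup = setup
        return cls
    return decorator
```

A real implementation would likely read MDAnalysis.__version__ itself instead of taking it as a parameter.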
It is certainly common for imports to be wrapped in try/except blocks so that benchmarks whose imports are unavailable get skipped. Not sure how easy it would be to cook that up for this case, though.
Perhaps more promising is that if setup raises a NotImplementedError, the benchmark is marked as skipped.
richardjgowers
left a comment
Looks like we can make setup raise a NotImplementedError to skip these benchmarks for earlier versions which don't support the kwarg
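Inlined directly in a benchmark class, the skip-via-setup pattern could look like this sketch (class name and version constants are hypothetical; the version tuples stand in for the installed library version):

```python
# Sketch of skipping a benchmark via setup(): asv records a benchmark
# whose setup() raises NotImplementedError as skipped, not failed, so
# version-gated benchmarks stay green on older releases.

CURRENT_VERSION = (0, 19, 2)   # stand-in for the installed version
REQUIRED_VERSION = (0, 20, 0)  # the kwarg under benchmark needs >= 0.20.0

class UnwrapBench:
    def setup(self):
        if CURRENT_VERSION < REQUIRED_VERSION:
            # Tell asv to skip this benchmark on older versions.
            raise NotImplementedError
        self.data = list(range(1000))

    def time_sum(self):
        sum(self.data)
```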
benchmarks/benchmarks/ag_methods.py
Outdated
    self.u_unwrap = mda.Universe(TRZ_psf, TRZ)
    self.ag_unwrap = self.u_unwrap.residues[0:3]

    if MDAnalysis.__version__ < '0.20':
Let's not rely on simple string ordering; follow https://stackoverflow.com/questions/11887762/how-do-i-compare-version-numbers-in-python:

    from packaging import version
    if version.parse(MDAnalysis.__version__) < version.parse("0.20.0"):
        ...

(In our setup.py we use LooseVersion(), but it looks as if this is pretty much deprecated. If the above does not work, maybe look at the other answer https://stackoverflow.com/a/21065570 using from pkg_resources import parse_version, which is guaranteed to be available.)
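To illustrate why plain string comparison is unsafe, here is a small self-contained example; packaging.version.parse is the robust choice in practice, and the tuple-based parse below is only a minimal stand-in so the sketch needs no third-party imports (it ignores pre-release tags, which packaging handles).

```python
# Lexicographic string comparison breaks as soon as a version
# component reaches two digits: '9' sorts after '2', so the string
# '0.9.0' compares *greater* than '0.20.0'.

def parse(v):
    # Minimal stand-in for packaging.version.parse:
    # '0.20.0' -> (0, 20, 0)
    return tuple(int(p) for p in v.split('.'))

print('0.9.0' < '0.20.0')                 # False: wrong, string ordering
print(parse('0.9.0') < parse('0.20.0'))   # True: correct, 9 < 20
```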
orbeckst
left a comment
@NinadBhat I am approving the PR because this looks sensible and you addressed my comments (and I think you also addressed @richardjgowers's comment).
What other PRs are blocking this one, or is it ready to go (and won't break our ASV setup)?