Asv benchmarks #2297

Closed

NinadBhat wants to merge 21 commits into MDAnalysis:develop from NinadBhat:asv_benchmarks

Conversation

@NinadBhat
Contributor

Fixes #

Changes made in this Pull Request:

  • Add benchmarks for unwrap

PR Checklist

  • Tests?
  • Docs?
  • CHANGELOG updated?
  • Issue raised/referenced?

@NinadBhat
Contributor Author

@orbeckst, @richardjgowers, @jbarnoud, @micaela-matta Can you kindly check whether I am adding the benchmarks correctly? I will keep adding benchmarks to this PR as unwrap is added to different methods.

@codecov

codecov bot commented Jul 10, 2019

Codecov Report

Merging #2297 into develop will decrease coverage by 0.07%.
The diff coverage is n/a.

Impacted file tree graph

@@             Coverage Diff             @@
##           develop    #2297      +/-   ##
===========================================
- Coverage     89.7%   89.63%   -0.08%     
===========================================
  Files          173      173              
  Lines        21499    21513      +14     
  Branches      2801     2803       +2     
===========================================
- Hits         19286    19283       -3     
- Misses        1616     1629      +13     
- Partials       597      601       +4
Impacted Files Coverage Δ
package/MDAnalysis/analysis/base.py 93.57% <0%> (-3.67%) ⬇️
package/MDAnalysis/coordinates/TRZ.py 81.74% <0%> (-1.53%) ⬇️
package/MDAnalysis/lib/formats/libdcd.pyx 78.31% <0%> (-0.98%) ⬇️
package/MDAnalysis/analysis/lineardensity.py 62.38% <0%> (-0.92%) ⬇️
package/MDAnalysis/lib/util.py 87.59% <0%> (-0.89%) ⬇️
package/MDAnalysis/analysis/gnm.py 95.38% <0%> (-0.77%) ⬇️
package/MDAnalysis/core/topologyobjects.py 98.02% <0%> (-0.4%) ⬇️
package/MDAnalysis/analysis/waterdynamics.py 82.46% <0%> (-0.22%) ⬇️
package/MDAnalysis/core/topologyattrs.py 97.8% <0%> (ø) ⬆️
package/MDAnalysis/transformations/__init__.py 100% <0%> (ø) ⬆️
... and 4 more

Continue to review full report at Codecov.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update b1c68e4...0186a08. Read the comment docs.

Member

@orbeckst orbeckst left a comment


This looks like the right approach. asv does not do fancy fixtures like pytest so writing out the different benchmarks like you did seems to be a sane approach.

Did you try running asv locally with your tests?
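As a rough illustration of the layout asv expects (no MDAnalysis here; the workload is a stand-in so the sketch stays self-contained and runnable), benchmarks are plain classes with a setup() method and one time_* method per case, since asv has no pytest-style fixtures:

```python
class UnwrapBench:
    """Benchmark-style class as asv discovers it: setup() runs
    before timing, and each time_* method is timed separately."""

    def setup(self):
        # In the real PR this would build an MDAnalysis Universe and
        # select residues; a plain list stands in for the AtomGroup here.
        self.data = list(range(1000))

    def time_center_of_mass_unwrap(self):
        # Stand-in workload for ag.center_of_mass(unwrap=True, ...)
        return sum(self.data) / len(self.data)
```

In a real asv suite the class would live under benchmarks/ and be picked up automatically by name; no registration call is needed.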

@orbeckst
Member

Do you want this to be merged or do you want to keep adding to it?

My suggestion would be to add benchmarks in the PRs that implement the feature so that ASV does not choke on tests that rely on features that aren't merged yet.

Can you add a check list of the PRs that this PR depends on?

@NinadBhat
Contributor Author

Do you want this to be merged or do you want to keep adding to it?

I wanted to keep adding to it.

My suggestion would be to add benchmarks in the PRs that implement the feature so that ASV does not choke on tests that rely on features that aren't merged yet.
Can you add a checklist of the PRs that this PR depends on?

At this point, this PR is not dependent on any other PRs. I plan to add other benchmarks to this PR once #2299, #2296, #2293 get merged. Then this will be ready to merge.

@orbeckst
Member

orbeckst commented Jul 12, 2019 via email

"""Benchmark center_of_mass calculation with
unwrap active.
"""
self.ag_unwrap.center_of_mass(unwrap=True, compound='residues')
Member


These look cool, but some are only going to work from 0.20.0 onwards. Is there a way to skip benchmarks if they're not valid? (@tylerjereddy) Some sort of @requires('0.20.0') decorator?

Member


It is certainly common for imports to be wrapped in try: except: pass to skip the benchmarks that don't have the imports.

Not sure how easy it would be to cook that up for this case though.

Perhaps more promising is that if setup raises a NotImplementedError, the benchmark is marked as skipped.
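Both skipping patterns mentioned above can be combined in one small sketch (the optional module name below is made up purely to illustrate a missing import):

```python
class GuardedBench:
    """Benchmark-style class whose setup() skips itself when a
    required feature is missing: asv treats NotImplementedError
    raised in setup() as "skip this benchmark"."""

    def setup(self):
        try:
            # Hypothetical optional dependency, not a real module;
            # the import failing stands in for a missing feature.
            import not_a_real_optional_module  # noqa: F401
        except ImportError:
            raise NotImplementedError("feature unavailable; benchmark skipped")

    def time_feature(self):
        pass  # would exercise the guarded feature
```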

Member

@richardjgowers richardjgowers left a comment


Looks like we can make setup raise a NotImplementedError to skip these benchmarks for earlier versions which don't support the kwarg.

@NinadBhat
Contributor Author

@orbeckst can you have a look at the line and let me know if my implementation is right?

self.u_unwrap = mda.Universe(TRZ_psf, TRZ)
self.ag_unwrap = self.u_unwrap.residues[0:3]

if MDAnalysis.__version__ < '0.20':
Member


Let's not rely on simple ordering, follow https://stackoverflow.com/questions/11887762/how-do-i-compare-version-numbers-in-python

from packaging import version
if version.parse(MDAnalysis.__version__) < version.parse("0.20.0"):
    ...

(In our setup.py we use LooseVersion() but it looks as if this is pretty much deprecated. If the above does not work, maybe look at the other answer https://stackoverflow.com/a/21065570 using from pkg_resources import parse_version, which is guaranteed to exist.)
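The pitfall behind this suggestion, and the fix, can be shown in a short runnable sketch (the setup() gate below is hypothetical and assumes MDAnalysis uses PEP 440 version strings):

```python
from packaging import version

# Plain string comparison orders versions lexically and gets this wrong:
# "0.9.0" sorts *after* "0.20.0" because "9" > "2" character by character.
lexical = "0.9.0" < "0.20.0"                                  # wrong answer
semantic = version.parse("0.9.0") < version.parse("0.20.0")   # correct

class VersionGatedBench:
    def setup(self):
        """Hypothetical asv setup() gate: skip on MDAnalysis < 0.20.0."""
        import MDAnalysis
        if version.parse(MDAnalysis.__version__) < version.parse("0.20.0"):
            # asv marks the benchmark as skipped when setup raises this
            raise NotImplementedError("unwrap kwarg needs MDAnalysis >= 0.20.0")
```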

Member

@orbeckst orbeckst left a comment


@NinadBhat I am approving the PR because this looks sensible and you addressed my comments (and I think you also addressed @richardjgowers comment).

What other PRs are blocking this one or is it ready to go (and won't break our ASV set up)?

@IAlibay IAlibay added the close? Evaluate if issue/PR is stale and can be closed. label May 18, 2021
This pull request was closed.

Labels

close? Evaluate if issue/PR is stale and can be closed. performance

5 participants