
Feature/neb workflow and FIRE logic modified #173

Closed
mstapelberg wants to merge 14 commits into TorchSim:optimizer_ase_test from mstapelberg:feature/neb-workflow

Conversation

@mstapelberg
Contributor

Summary

This PR implements a NEB workflow (torch_sim.workflows.neb) and makes several changes to the FIRE optimizer (torch_sim.optimizers.fire). The changes closely emulate ASE's results and implement batching for fast NEB simulations. This is meant to be added to pull request #146, based on @janosh's suggestion.

Key changes include:

  • Introduced torch_sim.workflows.neb.NEB class:
    • Enables Nudged Elastic Band calculations within torch-sim.
    • Supports batched optimization of intermediate images for potential performance gains.
    • Integrates with existing torch-sim optimizers (fire, gd, frechet_cell_fire).
    • Implements Climbing Image NEB (use_climbing_image=True).
    • Uses the improved tangent estimate from Henkelman & Jónsson (2000).
    • Includes optional trajectory writing via TorchSimTrajectory.
  • Corrected FIRE Optimizer Logic (torch_sim.optimizers.fire):
    • Aligned the fire_step logic more closely with standard FIRE implementations (such as ASE's) in its power calculation (F·V), adaptive timestep/alpha updates, and velocity mixing/resetting, resolving previous discrepancies.
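For illustration, the corrected step logic can be sketched roughly as below. This is a minimal NumPy sketch of the standard FIRE scheme (power test, conditional velocity mixing, adaptive dt/alpha); the function and parameter names are hypothetical, not the actual torch_sim.optimizers.fire API, and masses are taken as 1.

```python
import numpy as np

def fire_step(v, f, dt, alpha, n_pos, *, dt_max=1.0, f_inc=1.1, f_dec=0.5,
              alpha_start=0.1, f_alpha=0.99, n_min=5):
    """One FIRE update in the standard (ASE-like) form. Illustrative only."""
    power = float(np.dot(f.ravel(), v.ravel()))  # P = F · V
    if power > 0:
        # downhill: mix the velocity toward the force direction
        v = (1 - alpha) * v + alpha * np.linalg.norm(v) * f / np.linalg.norm(f)
        if n_pos > n_min:  # accelerate only after n_min consecutive good steps
            dt = min(dt * f_inc, dt_max)
            alpha *= f_alpha
        n_pos += 1
    else:
        # uphill: reset velocity, shrink the timestep, restore strong mixing
        v = np.zeros_like(v)
        dt *= f_dec
        alpha = alpha_start
        n_pos = 0
    v = v + dt * f   # velocity update (no explicit Verlet half-steps)
    dr = dt * v      # proposed displacement (before any maxstep clamp)
    return v, dr, dt, alpha, n_pos
```

Note the order: the power is computed from the forces and the current velocities before any mixing, which is one of the discrepancies this PR resolves.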

Checklist

Before a pull request can be merged, the following items must be checked:

  • Doc strings have been added in the Google docstring format.
  • Run ruff on your code.
  • Tests have been added for any new functionality or bug fixes.
  • All linting and tests pass.

Adds the NEB workflow feature and addresses issues identified while debugging the NEB workflow, related to #146.

janosh and others added 14 commits April 17, 2025 19:21
* fix virial calculations in optimizers and integrators

* refactor cell_forces in optimizers.py

* clarify test_state_round_trip not testing round trip for masses with pymatgen and phonopy
* update mace checkpoint URLs

* tweak float formatting in prints

* add torch_sim/typing.py

* docs CI remove invalid workflow_run context access

* fix broken import

* rename MemoryScaler -> MemoryScaling

* move BravaisType enum to typing.py

* move StateLike union to typing.py

* use simpler torch.as_tensor instead of if/else
* pad max memory estimation in autobatcher by 0.9

* prevent mace logic from running and ignore env files

* comment out note in PR template

* multiply by max memory scaler and don't use *=

* lint

* *= -> more explicit
* throw error if autobatcher type is wrong

* make InFlight.load_stes return max memory scaler

* refactor _get_first_batch to save memory estimation evaluation

* fix testing

* lint
* removed unused today variable in `torch_sim/__init__.py`

* add docs/_static/draw_pkg_treemap.py script to draw treemap of the torch_sim package structure, show HTML in docs/reference/index.rst

* try iframe to render ../_static/torch-sim-pkg-treemap.html

* run python docs/_static/draw_pkg_treemap.py in docs.yml

* draw_pkg_treemap.py use uv run to resolve script deps
* fix broken image rendering as raw markdown in InFlightAutoBatcher doc string

* bump mace-torch to v0.3.12 in pyproject.toml and example scripts
enable markdownlint pre-commit hook and fix errors
Calculate power before velocity updates.

Velocity mixing is applied conditionally, if P > 0; otherwise velocities are reset.

A maxstep parameter (default 0.2) is added; it uses global scaling based on the norm of the entire displacement vector, as in ASE.

Removed explicit velocity-Verlet half-steps from fire_step to mimic ASE.
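The global maxstep scaling described above can be sketched as follows (a hypothetical helper name, not the actual torch-sim code): a single factor rescales the whole displacement vector when its norm exceeds maxstep, preserving its direction.

```python
import numpy as np

def clamp_step(dr, maxstep=0.2):
    """Rescale the whole displacement if its total norm exceeds maxstep.

    Global scaling as in ASE: one factor for all atoms at once,
    rather than clamping each atomic displacement independently.
    """
    norm = np.linalg.norm(dr)
    if norm > maxstep:
        dr = dr * (maxstep / norm)
    return dr
```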

feat(neb): added NEB implementation with improved tangent based on ASE

NEB includes interpolation (optionally using the minimum-image convention), an optional climbing image, and batched processing.
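The improved tangent of Henkelman &amp; Jónsson (2000) referenced above can be sketched per image as below. This is a serial, single-image illustration (the torch-sim implementation is batched); the function name and signature are hypothetical.

```python
import numpy as np

def improved_tangent(r_prev, r_i, r_next, e_prev, e_i, e_next):
    """Improved tangent estimate (Henkelman & Jonsson, 2000). Illustrative sketch."""
    t_plus = r_next - r_i   # vector to the higher-index neighbor
    t_minus = r_i - r_prev  # vector from the lower-index neighbor
    if e_next > e_i > e_prev:
        tau = t_plus                    # energy rising: use forward segment
    elif e_next < e_i < e_prev:
        tau = t_minus                   # energy falling: use backward segment
    else:
        # at an extremum: weight the two segments by the adjacent energy gaps
        d_max = max(abs(e_next - e_i), abs(e_prev - e_i))
        d_min = min(abs(e_next - e_i), abs(e_prev - e_i))
        if e_next > e_prev:
            tau = t_plus * d_max + t_minus * d_min
        else:
            tau = t_plus * d_min + t_minus * d_max
    return tau / np.linalg.norm(tau)
```

Switching between the two segments (rather than always averaging them) avoids the kinks that the original bisection tangent produces near saddle points.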
