diff --git a/README.md b/README.md
index 1fe0c16ed..b633e084c 100644
--- a/README.md
+++ b/README.md
@@ -92,14 +92,15 @@ print(relaxed_state.energy)
 
 TorchSim achieves up to 100x speedup compared to ASE with popular MLIPs.
 
-![Speedup comparison](/docs/_static/speedup_plot.svg)
+![Speedup comparison](/docs/_static/speedup_plot.svg)
 
 This figure compares the time per atom of ASE and `torch_sim`. Time per atom is
 defined as the total time / the number of atoms. While ASE can only run a single
 system of `n_atoms` (on the $x$ axis), `torch_sim` can run as many systems as
 will fit in memory. On an H100 80 GB card,
-the max atoms that could fit in memory was ~8,000 for [GemNet](https://github.com/FAIR-Chem/fairchem), ~10,000 for [MACE](https://github.com/ACEsuit/mace), and ~2,500
-for [SevenNet](https://github.com/MDIL-SNU/SevenNet). This metric describes model performance by capturing speed and memory
-usage simultaneously.
+the max atoms that could fit in memory was ~8,000 for [EGIP](https://github.com/FAIR-Chem/fairchem),
+~10,000 for [MACE-MPA-0](https://github.com/ACEsuit/mace), ~22,000 for [Mattersim V1 1M](https://github.com/microsoft/mattersim),
+~2,500 for [SevenNet](https://github.com/MDIL-SNU/SevenNet), and ~9,000 for [PET-MAD](https://github.com/lab-cosmo/pet-mad).
+This metric describes model performance by capturing speed and memory usage simultaneously.
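The time-per-atom metric in the README text above can be sketched in a few lines of plain Python. This is a minimal illustration of the arithmetic only: the timings below are hypothetical, and no torch_sim or ASE API is used or assumed.

```python
# Hypothetical timings (not measured): illustrates the time-per-atom metric.
# time_per_atom = total_time / n_atoms; the speedup is ASE's time per atom
# divided by torch_sim's.


def time_per_atom(total_time_s: float, n_atoms: int) -> float:
    """Seconds spent per atom for one run."""
    return total_time_s / n_atoms


def speedup(ase_time_s: float, ts_time_s: float, n_atoms: int, n_batched: int) -> float:
    """ASE runs one system of n_atoms serially; torch_sim runs n_batched such
    systems in one batch, so its effective per-atom cost is lower."""
    ase_tpa = time_per_atom(ase_time_s, n_atoms)
    ts_tpa = time_per_atom(ts_time_s, n_atoms * n_batched)
    return ase_tpa / ts_tpa


# e.g. ASE: 10 s for one 1,000-atom system; torch_sim: 10 s for 50 such systems
print(speedup(10.0, 10.0, 1_000, 50))  # → 50.0
```

This is why batching dominates the metric: the per-system wall-clock time need not shrink at all for the per-atom time, and hence the speedup factor, to grow with the number of batched systems.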
## Installation

diff --git a/docs/_static/speedup_plot.svg b/docs/_static/speedup_plot.svg
old mode 100755
new mode 100644
index c6e2de2e7..e0eca6b4d
--- a/docs/_static/speedup_plot.svg
+++ b/docs/_static/speedup_plot.svg
@@ -1 +1,180 @@
[SVG markup omitted: regenerated speedup plot. Title: "Speedup: Batched Integration with TorchSim vs Serial Integration with ASE"; x axis: "Number of Atoms in Single System"; y axis: "Speedup Factor"; old series labels: GemNet, MACE, SevenNet.]