Deprecate ModelOutput.quantity#199

Merged
Luthaf merged 2 commits into metatensor:main from HaoZeke:morfixes
Apr 22, 2026

Conversation

@HaoZeke
Member

@HaoZeke HaoZeke commented Apr 3, 2026

The `quantity` field is no longer required for unit conversion, since the unit parser determines dimensions from the unit expression itself.

Changes:

  • Add deprecated documentation to C++ header
  • Add TORCH_WARN in C++ setter when quantity is non-empty
  • Add DeprecationWarning in Python when quantity validation runs
  • Update documentation with deprecation notice
  • Add deprecation entries to CHANGELOG
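The Python side of the changes above can be sketched as a property setter that warns when a non-empty `quantity` is assigned. This is a minimal illustrative sketch, not the actual metatomic implementation; the class body, warning text, and field set are assumptions:

```python
import warnings


class ModelOutput:
    """Minimal sketch of the deprecation pattern: assigning a non-empty
    `quantity` emits a DeprecationWarning, while the default empty string
    stays silent. Only illustrative fields are included here."""

    def __init__(self, quantity="", unit="", sample_kind="system"):
        self.unit = unit
        self.sample_kind = sample_kind
        # goes through the property setter below, triggering the warning
        self.quantity = quantity

    @property
    def quantity(self):
        return self._quantity

    @quantity.setter
    def quantity(self, value):
        if value != "":
            warnings.warn(
                "ModelOutput.quantity is deprecated and no longer used for "
                "unit conversion; dimensions are determined from the unit "
                "expression itself",
                DeprecationWarning,
                stacklevel=2,
            )
        self._quantity = value
```

The C++ setter described above would follow the same shape, with `TORCH_WARN` guarding on a non-empty string.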

Contributor (creator of pull-request) checklist

  • Tests updated (for new features and bugfixes)?
  • Documentation updated (for new features)?
  • Issue referenced (for PRs that solve an issue)?

Reviewer checklist

  • CHANGELOG updated with public API or any other important changes?

@HaoZeke HaoZeke force-pushed the morfixes branch 3 times, most recently from 7b73a2a to 409b32c on April 5, 2026 at 00:50
@HaoZeke HaoZeke requested a review from Luthaf April 5, 2026 01:06
Comment threads:

  • metatomic-torch/include/metatomic/torch/model.hpp
  • metatomic-torch/src/model.cpp (outdated)
  • python/metatomic_ase/tests/conftest.py (outdated)
  • python/metatomic_torch/metatomic/torch/model.py (outdated)
  • python/metatomic_torch/tests/conftest.py (outdated)
  • python/metatomic_torchsim/tests/conftest.py (outdated)
  • python/metatomic_ase/tests/calculator.py (outdated)
@HaoZeke HaoZeke requested a review from Luthaf April 16, 2026 12:06
HaoZeke added a commit that referenced this pull request Apr 17, 2026
…nd migrate call sites

Pol's fca4e05 kept the existing `TORCH_WARN_DEPRECATION` in
`set_per_atom` / `get_per_atom` intentionally: the new `sample_kind`
API is preferred and `per_atom` is the deprecated shim. This commit
mirrors the conftest/filterwarnings pattern the morfixes branch uses
for `ModelOutput.quantity`, applied only to `per_atom` (no overlap
with #199 -- this PR does not depend on morfixes).

Tests / packaging
- `tests/conftest.py` in all three subpackages: regex-strip the C++
  `TORCH_WARN_DEPRECATION` line for `per_atom` from stderr in the
  autouse `fail_test_with_output` fixture. The regex tolerates both
  Torch 2.3 (`[W model.cpp:N]`) and Torch 2.11+
  (`[W<MMDD> HH:MM:SS.fractZ model.cpp:N]`) prefixes. Any other
  unexpected stderr still fails the test.
- `pyproject.toml` in all three subpackages: ignore the matching
  Python `DeprecationWarning` so tests exercising the field directly
  do not blow up under `-W error`.
- `tests/outputs.py`: drop the two `assert message in captured.err`
  sections from `test_sample_kind`. On Torch 2.3 the C++ warning
  lands on stderr directly and the conftest regex scrubs it; on
  Torch 2.11 `PyErr_WarnEx` + the filterwarnings ignore leaves
  stderr empty. Asserting a specific stderr shape mid-test was
  torch-version-brittle. The `pytest.warns(match=...)` guards remain,
  so the shim is still verified. Drop the now-unused `capfd`
  parameter.

Internal call site migration
- `metatomic_torch/metatomic/torch/heat_flux.py`: the four
  `ModelOutput(..., per_atom=...)` call sites for masses/velocities/
  energies/heat_flux output all use `sample_kind="atom"/"system"`
  now, so the wrapper does not emit warnings from its own
  construction path.
- `metatomic_torch/metatomic/torch/model.py`: the dedup check in
  `_get_requested_inputs` compares `sample_kind` instead of reading
  the deprecated `per_atom` property.
- `metatomic_torch/tests/heat_flux.py`, `metatomic_ase/tests/heat_flux.py`,
  `metatomic_ase/tests/calculator.py::test_inputs_different_units`:
  migrate all `ModelOutput(..., per_atom=...)` to `sample_kind`.
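The stderr-scrubbing described in the commit message above could look roughly like the following. This is a hypothetical condensation: the regex and function name are assumptions, written only to show a pattern that tolerates both the Torch 2.3 prefix (`[W model.cpp:N]`) and the Torch 2.11+ prefix (`[W<MMDD> HH:MM:SS.fractZ model.cpp:N]`) while leaving any other output untouched:

```python
import re

# Matches the C++ TORCH_WARN_DEPRECATION line for `per_atom` with either
# Torch warning prefix; anything else on stderr is left in place so the
# autouse fixture can still fail the test on unexpected output.
PER_ATOM_WARNING = re.compile(
    r"^\[W(?:\d{4} \d{2}:\d{2}:\d{2}\.\d+Z)? model\.cpp:\d+\] "
    r"Warning: .*per_atom.*deprecated.*$\n?",
    re.MULTILINE,
)


def scrub_per_atom_warning(stderr: str) -> str:
    """Remove only the known per_atom deprecation line; keep the rest."""
    return PER_ATOM_WARNING.sub("", stderr)
```

A fixture would apply this to captured stderr and assert the remainder is empty.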
HaoZeke added a commit that referenced this pull request Apr 17, 2026
My earlier switch to `HaoZeke/lj-test@f56f6b4` broke every test that
computes an energy via the reference LJ model: the fork drops
`quantity="energy"` from every `ModelOutput(...)` (forward-looking for
the morfixes #199 `quantity` deprecation), which causes
`AtomisticModel.forward()` to hit the early `continue` in

    if declared.quantity == "" or requested.quantity == "":
        continue

and skip the `unit_conversion_factor(declared.unit, requested.unit)`
step that the engine needs when asking for energy in `"ev"` while the
model declares `"eV"`. Downstream ASE tests see ~100x-wrong energies.

`metatensor/lj-test@eea52c6` is the existing upstream sample_kind
branch commit: it already migrates every `ModelOutput(per_atom=...)`
to `ModelOutput(sample_kind=...)` and every `outputs[...].per_atom`
read to `outputs[...].sample_kind == "atom"` so the runtime
deprecation path is not hit, but it keeps `quantity=...` intact so
unit conversion still runs. That is the right pin for this PR's
scope (per_atom only, no quantity overlap).

Both `install_lj_tests` and the inline `pip install` in the
`torch-tests` env are bumped to `eea52c6`. Bump again once that
branch lands on lj-test main and gets a stable SHA.
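The failure mode described in the commit message above can be condensed into a small sketch. The names `Output`, `convert_energy`, and the conversion table are hypothetical; only the early-`continue` condition is taken from the quoted code:

```python
from dataclasses import dataclass


@dataclass
class Output:
    quantity: str
    unit: str


# Hypothetical minimal conversion table (factors to eV); the real unit
# parser derives the dimension from the unit expression itself.
_TO_EV = {"eV": 1.0, "meV": 1e-3}


def unit_conversion_factor(from_unit: str, to_unit: str) -> float:
    return _TO_EV[from_unit] / _TO_EV[to_unit]


def convert_energy(declared: Output, requested: Output, value: float) -> float:
    # The early-continue condition from AtomisticModel.forward(), condensed:
    # if either side drops `quantity`, conversion is silently skipped.
    if declared.quantity == "" or requested.quantity == "":
        return value
    return value * unit_conversion_factor(declared.unit, requested.unit)
```

With `quantity="energy"` on both sides the factor is applied; drop it from either side and the raw value passes through unscaled, which is the wrong-energies symptom the pin fixes.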
@Luthaf Luthaf force-pushed the morfixes branch 2 times, most recently from 585ddc9 to b7e4d2a on April 22, 2026 at 11:34
@Luthaf Luthaf changed the title deprecate(ModelOutput.quantity): mark quantity field as deprecated Deprecate ModelOutput.quantity Apr 22, 2026
This function allows engines to extract the expected kind of unit for a given standard output/input.
@Luthaf Luthaf force-pushed the morfixes branch 2 times, most recently from 53a3299 to e5dfa9e on April 22, 2026 at 12:25
It is no longer needed for unit conversion, and many users confuse the quantity (the physical dimension) with the output name.

Co-Authored-By: Guillaume Fraux <guillaume.fraux@epfl.ch>
@Luthaf Luthaf merged commit ccfa009 into metatensor:main Apr 22, 2026
10 checks passed