feat(pt_expt): add dipole, polar, dos, property and dp-zbl models with cross-backend consistency tests#5260

Merged
wanghan-iapcm merged 52 commits into deepmodeling:master from wanghan-iapcm:feat-other-full-model
Mar 1, 2026
Conversation

@wanghan-iapcm
Collaborator

@wanghan-iapcm wanghan-iapcm commented Feb 24, 2026

Summary

  • Add 5 new pt_expt model types: DipoleModel, PolarModel, DOSModel, PropertyModel, and DPZBLModel, completing pt_expt's model coverage to parity with pt
  • Refactor dpmodel base architecture so that pt_expt models inherit directly from dpmodel via make_model(), removing the intermediate pt_expt atomic model layer
  • Consolidate scattered model-level methods (get_out_bias, set_out_bias, get_observed_type_list, compute_or_load_stat) into shared dpmodel base classes
  • Move compute_fitting_input_stat for set-by-statistic mode from model-level change_out_bias to training-level model_change_out_bias (pt and pd backends), keeping the change_out_bias logic focused on bias only (copied from chore(pt): mv the input stat update to model_change_out_bias #5266)
  • Fix array-api-compat violations in general_fitting.change_type_map (replace bare np.zeros/np.ones/np.concatenate with xp equivalents on the proper device)
  • Fix dpmodel change_type_map not forwarding model_with_new_type_stat through the call chain
  • Add comprehensive cross-backend consistency tests for all model types (dp vs pt vs pt_expt), covering: model output, serialization round-trip, change_out_bias, change_type_map, compute_or_load_stat, model API methods.
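The array-api-compat fix mentioned above can be sketched as follows. This is an illustrative example with a hypothetical function name and shapes, not the actual general_fitting code: the point is that operations go through the array's own namespace, so the same code works for numpy arrays and torch tensors.

```python
import numpy as np

try:
    from array_api_compat import array_namespace, device
except ImportError:  # keep this sketch runnable without array-api-compat
    def array_namespace(*arrays):
        return np

    def device(x):
        return None


def extend_type_params(params, n_new_types):
    """Pad a per-type parameter array with zeros for newly added types."""
    # Resolve the array's own namespace instead of calling bare
    # np.zeros/np.concatenate, which fail when params is a torch tensor.
    xp = array_namespace(params)
    dev = device(params)
    pad = xp.zeros(
        (n_new_types,) + tuple(params.shape[1:]),
        dtype=params.dtype,
        **({} if dev is None else {"device": dev}),
    )
    concat = getattr(xp, "concat", xp.concatenate)
    return concat([params, pad], axis=0)


extended = extend_type_params(np.ones((2, 3)), 2)
print(extended.shape)  # (4, 3): the two new rows are zeros
```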

Changes

New pt_expt models

  • deepmd/pt_expt/model/dipole_model.py
  • deepmd/pt_expt/model/polar_model.py
  • deepmd/pt_expt/model/dos_model.py
  • deepmd/pt_expt/model/property_model.py
  • deepmd/pt_expt/model/dp_zbl_model.py

Architecture refactoring

  • Remove deepmd/pt_expt/atomic_model/ layer — models now wrap dpmodel atomic models directly
  • Clean up BaseModel: remove concrete methods/data, add plugin registry
  • Refactor make_model so backends (dp, pt_expt) inherit shared model logic from dpmodel
  • Consolidate get_out_bias/set_out_bias into base_atomic_model.py
  • Add get_observed_type_list to abstract API and implement in dpmodel, pt, pd
  • Move fitting input stat update to model_change_out_bias in pt/pd training code (chore(pt): mv the input stat update to model_change_out_bias #5266)
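The make_model() refactor listed above can be pictured with a minimal sketch. The names here are hypothetical and the real factory in dpmodel is far richer; the idea is only that shared model-level logic is generated once, and each backend inherits it instead of re-implementing it.

```python
# Minimal sketch of a make_model-style class factory (hypothetical names).
def make_model(atomic_model_cls):
    class CM:
        """Common model wrapping an atomic model; logic defined once."""

        def __init__(self, *args, **kwargs):
            self.atomic_model = atomic_model_cls(*args, **kwargs)

        # shared accessors consolidated at the base level
        def get_out_bias(self):
            return self.atomic_model.get_out_bias()

        def set_out_bias(self, out_bias):
            self.atomic_model.set_out_bias(out_bias)

    return CM


class ToyAtomicModel:
    def __init__(self):
        self._bias = 0.0

    def get_out_bias(self):
        return self._bias

    def set_out_bias(self, b):
        self._bias = b


ToyModel = make_model(ToyAtomicModel)  # a backend would subclass this
m = ToyModel()
m.set_out_bias(1.5)
print(m.get_out_bias())  # 1.5
```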

Bug fixes

  • general_fitting.change_type_map: use array-api-compat ops instead of bare numpy, which broke pt_expt
  • make_model.change_type_map: properly forward model_with_new_type_stat to atomic model
  • stat.py: fix in-place mutation issue

Tests

  • New cross-backend consistency tests: test_dipole.py, test_polar.py, test_dos.py, test_property.py, and test_zbl_ener.py (~1400 lines each)
  • Expanded test_ener.py with pt_expt and full model API coverage
  • New pt_expt unit tests: test_dipole_model.py, test_polar_model.py, test_dos_model.py, test_property_model.py, test_dp_zbl_model.py
  • Added test_get_model_def_script, test_get_min_nbor_dist, test_set_case_embd across all 6 model test files
  • Moved atomic model output stat tests from pt_expt to dpmodel
  • Added model_change_out_bias tests in pt/pd training tests (chore(pt): mv the input stat update to model_change_out_bias #5266)

Summary by CodeRabbit

  • New Features

    • File-backed compute/load for descriptor and fitting statistics; new compute-or-load stat APIs, get/set output-bias, and observed-type discovery.
    • Exportable/traceable lower-level inference paths for dipole, dos, polar, property, zbl, and energy models.
  • Refactor

    • Model factory and generated models support extensible base-class composition and unified fitting backend wiring.
  • Tests

    • Large expansion of cross-backend (DP/PT/PT-EXPT) parity, statistics, bias, and exportability tests.

Han Wang added 14 commits February 22, 2026 16:10
…om the base models of the corresponding backend
  Add TestEnerComputeOrLoadStat to the consistency test framework, comparing
  dp, pt, and pt_expt backends after compute_or_load_stat. Tests cover
  descriptor stats, fparam/aparam fitting stats, output bias, and forward
  consistency, parameterized over exclusion types and fparam source
  (default injection vs explicit data). Both compute and load-from-file
  paths are tested.

  Three dpmodel bugs found and fixed:
  - repflows.py: compute_input_stats now respects set_stddev_constant,
    matching the pt backend behavior
  - stat.py: compute_output_stats_global now applies atom_exclude_types
    mask to natoms before computing output bias
  - general_fitting.py: compute_input_stats now supports save/load of
    fparam/aparam stats via stat_file_path, matching the pt backend
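The stat.py fix in that list is easy to get wrong, so here is a simplified sketch (hypothetical signature and names, not the actual compute_output_stats_global) of why masking natoms matters when solving for the per-type output bias:

```python
import numpy as np


def compute_output_bias(natoms, outputs, atom_exclude_types=()):
    """Least-squares per-type output bias from per-frame totals.

    natoms: (nframes, ntypes) atom counts; outputs: (nframes,) totals.
    """
    natoms = natoms.astype(np.float64).copy()
    # The fix: excluded types must not absorb any of the bias, so their
    # atom counts are zeroed before solving natoms @ bias ~= outputs.
    if atom_exclude_types:
        natoms[:, list(atom_exclude_types)] = 0.0
    bias, *_ = np.linalg.lstsq(natoms, outputs, rcond=None)
    return bias


counts = np.array([[2.0, 1.0], [1.0, 2.0]])
totals = np.array([4.0, 5.0])  # consistent with per-type bias (1.0, 2.0)
print(compute_output_bias(counts, totals))                           # ~[1. 2.]
print(compute_output_bias(counts, totals, atom_exclude_types=(1,)))  # bias[1] == 0
```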
@wanghan-iapcm wanghan-iapcm marked this pull request as draft February 24, 2026 01:26
@dosubot dosubot bot added the new feature label Feb 24, 2026
Contributor

@github-advanced-security github-advanced-security bot left a comment

CodeQL found more than 20 potential problems in the proposed changes. Check the Files changed tab for more details.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: b2028a80f7

@codecov

codecov bot commented Feb 24, 2026

Codecov Report

❌ Patch coverage is 96.18474% with 19 lines in your changes missing coverage. Please review.
✅ Project coverage is 82.12%. Comparing base (622179e) to head (162c5a1).
⚠️ Report is 2 commits behind head on master.

Files with missing lines Patch % Lines
deepmd/pd/model/model/make_model.py 12.50% 7 Missing ⚠️
...eepmd/dpmodel/atomic_model/pairtab_atomic_model.py 83.33% 2 Missing ⚠️
deepmd/dpmodel/descriptor/repflows.py 50.00% 2 Missing ⚠️
deepmd/dpmodel/model/base_model.py 66.66% 2 Missing ⚠️
deepmd/dpmodel/atomic_model/dp_atomic_model.py 95.83% 1 Missing ⚠️
deepmd/jax/model/hlo.py 50.00% 1 Missing ⚠️
deepmd/pd/model/model/frozen.py 50.00% 1 Missing ⚠️
deepmd/pt/model/model/frozen.py 50.00% 1 Missing ⚠️
deepmd/pt_expt/model/dipole_model.py 98.36% 1 Missing ⚠️
deepmd/pt_expt/model/dp_zbl_model.py 98.36% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #5260      +/-   ##
==========================================
+ Coverage   81.95%   82.12%   +0.17%     
==========================================
  Files         750      753       +3     
  Lines       75444    75826     +382     
  Branches     3649     3648       -1     
==========================================
+ Hits        61828    62274     +446     
+ Misses      12448    12383      -65     
- Partials     1168     1169       +1     


Han Wang added 5 commits February 24, 2026 12:12
  Move get_observed_type_list from a PT-only method to a backend-independent
  abstract API on BaseBaseModel, with a concrete implementation in dpmodel's
  make_model CM using array_api_compat for torch compatibility. Add a
  cross-backend consistency test that verifies dp, pt, and pt_expt return
  identical results when only a subset of types is observed.
…bare np ops

  dpmodel's model-level change_type_map was not forwarding
  model_with_new_type_stat to the atomic model, so fine-tuning with new
  atom types would silently lose reference statistics. Align with the pt
  backend by unwrapping .atomic_model and passing it through.

  Also fix array API violations in fitting change_type_map methods:
  np.zeros/np.ones/np.concatenate fail when arrays are torch tensors
  (pt_expt backend). Replace with xp.zeros/xp.ones/xp.concat using
  proper array namespace and device.

  Add cross-backend test (test_change_type_map_extend_stat) that
  exercises the model-level change_type_map with
  model_with_new_type_stat across dp, pt, and pt_expt.
  Add get_out_bias() and set_out_bias() methods to dpmodel's
  base_atomic_model, and update make_model to call them instead of
  accessing the attribute directly. For PT, add get_out_bias() to
  base_atomic_model and remove the redundant implementations from
  dp_atomic_model, pairtab_atomic_model, and linear_atomic_model.
…et-by-statistic

  The PT backend calls atomic_model.compute_fitting_input_stat(merged) in
  change_out_bias when mode is set-by-statistic, but dpmodel/pt_expt did
  not. This meant fparam/aparam statistics (avg, inv_std) were never updated
  during bias adjustment in these backends.

  Add compute_fitting_input_stat to dpmodel's DPAtomicModel and call it
  from make_model.change_out_bias. Enhance test_change_out_bias with
  fparam/aparam data, pt_expt coverage, and verification that fitting input
  stats are updated after set-by-statistic but unchanged after
  change-by-statistic.
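The behavior described in that commit can be sketched with toy classes (hypothetical, not the dpmodel API): only set-by-statistic refreshes the fitting-net input statistics, while change-by-statistic leaves them untouched.

```python
import numpy as np


class ToyAtomicModel:
    """Toy stand-in for an atomic model holding fparam statistics."""

    def __init__(self):
        self.fparam_avg = None
        self.fparam_inv_std = None

    def compute_fitting_input_stat(self, merged):
        fparam = np.concatenate([d["fparam"] for d in merged], axis=0)
        self.fparam_avg = fparam.mean(axis=0)
        self.fparam_inv_std = 1.0 / (fparam.std(axis=0) + 1e-12)


def change_out_bias(atomic_model, merged, bias_adjust_mode):
    if bias_adjust_mode == "set-by-statistic":
        # the fix: also refresh fparam/aparam input stats in this mode
        atomic_model.compute_fitting_input_stat(merged)
    # ...the bias recomputation itself is omitted from this sketch


merged = [{"fparam": np.array([[1.0], [3.0]])}]
m = ToyAtomicModel()
change_out_bias(m, merged, "change-by-statistic")
print(m.fparam_avg)  # None: stats untouched
change_out_bias(m, merged, "set-by-statistic")
print(m.fparam_avg)  # [2.]
```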

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 0ec574876a


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@source/tests/consistent/model/test_dipole.py`:
- Around line 1248-1256: The comparator loop that iterates over keys in d1
currently skips keys missing from d2; change it to fail fast by asserting
presence instead of continuing: when iterating the top-level keys (variables d1,
d2 and path) assert that each key from d1 is present in d2 (e.g., raise an
AssertionError or use assert) and likewise inside the "@variables" special-case
loop assert that each vk in v1 exists in v2 instead of continuing; update the
blocks referencing d1, d2, path, and the "@variables" branch so key-set
mismatches immediately cause test failures.

In `@source/tests/consistent/model/test_dos.py`:
- Around line 1237-1245: The loop in _compare_variables_recursive silently
continues when a key present in d1 (or v1) is missing in d2, making comparisons
non-strict; change the behavior so missing keys raise/assert test failures
instead of continuing: in the outer loop over d1 keys (inside
_compare_variables_recursive) replace the early "continue" for key not in d2
with a failure/assertion that includes child_path and the missing key, and
similarly in the nested loop over v1 keys (the "@variables" branch) replace the
"continue" for vk not in v2 with a failure/assertion reporting the missing
variable key and its path. Ensure messages reference child_path and vk so test
output identifies the mismatch.

In `@source/tests/consistent/model/test_polar.py`:
- Around line 1242-1250: In _compare_variables_recursive, stop skipping missing
keys and instead assert that the key sets match before comparing values: for the
outer loop over d1/d2 (vars d1, d2, key, child_path, v1, v2) replace the "if key
not in d2: continue" with an assertion that key in d2 (or assert set(d1.keys())
== set(d2.keys())) so missing keys fail the test; likewise, inside the
"@variables" branch (vk, v1, v2) assert vk exists in v2 (or assert
set(v1.keys()) == set(v2.keys())) before value comparisons so absent keys do not
get silently ignored.

In `@source/tests/consistent/model/test_property.py`:
- Around line 1234-1242: In _compare_variables_recursive, do not silently skip
keys that are missing between v1 and v2; replace the current "if vk not in v2:
continue" behavior with an explicit assertion or test that raises/fails when
keys differ so the test enforces exact key parity for "@variables" entries;
locate the block inside _compare_variables_recursive where key == "@variables"
and the nested loop iterates over vk, and change it to compare the sets of keys
(v1.keys() vs v2.keys()) or assert presence of each vk in v2 with a failure
message that includes child_path and the missing key to surface missing
serialized stats/variables between backends.
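The strict comparator these four comments converge on could look like the sketch below. Variable names follow the review comments; the rtol/atol defaults are placeholders, not the tests' actual tolerances.

```python
import numpy as np


def compare_variables_recursive(d1, d2, path="", rtol=1e-10, atol=1e-10):
    """Recursively compare serialized model dicts, failing on missing keys."""
    missing, extra = set(d1) - set(d2), set(d2) - set(d1)
    assert not missing and not extra, (
        f"key mismatch at {path or '/'}: missing={missing}, extra={extra}"
    )
    for key, v1 in d1.items():
        v2 = d2[key]
        child_path = f"{path}/{key}"
        if key == "@variables":
            # fail fast instead of silently skipping absent variable keys
            assert set(v1) == set(v2), f"variable keys differ at {child_path}"
            for vk in v1:
                np.testing.assert_allclose(
                    np.asarray(v1[vk]), np.asarray(v2[vk]),
                    rtol=rtol, atol=atol, err_msg=f"{child_path}/{vk}",
                )
        elif isinstance(v1, dict):
            compare_variables_recursive(v1, v2, child_path, rtol, atol)


d = {"@variables": {"w": [1.0, 2.0]}, "nested": {"@variables": {"b": [0.5]}}}
compare_variables_recursive(d, d)  # identical dicts pass
try:
    compare_variables_recursive(d, {"@variables": {"w": [1.0, 2.0]}})
except AssertionError as e:
    print("caught:", e)  # missing 'nested' now fails fast
```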

ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 0ec5748 and 00f83cc.

📒 Files selected for processing (4)
  • source/tests/consistent/model/test_dipole.py
  • source/tests/consistent/model/test_dos.py
  • source/tests/consistent/model/test_polar.py
  • source/tests/consistent/model/test_property.py

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (5)
source/tests/consistent/model/test_property.py (2)

738-780: Test logic for change_out_bias is correct; both locals are used.

The previous review flagged dp_bias_init (Line 743) and dp_bias_before (Line 764) as unused, but in the current code they are consumed by the assertFalse(np.allclose(...)) checks on Lines 759 and 778 respectively. No issue here.


1212-1237: _compare_variables_recursive silently skips mismatched keys between dicts.

Lines 1217-1218 and 1223-1224 both continue when a key is absent from the other dict, meaning extra or missing keys in either direction are never flagged. This could mask genuine serialization divergences between backends.

source/tests/consistent/model/test_dos.py (1)

1215-1241: _compare_variables_recursive silently skips missing keys, weakening parity enforcement.

Keys present in d1 but absent from d2 are silently skipped, so serialization regressions (missing variables, structural drift) will not be caught.

source/tests/consistent/model/test_dipole.py (1)

1226-1251: _compare_variables_recursive is duplicated from the other two files and still silently skips missing keys.

See the consolidated comments on test_dos.py lines 1215–1241 for the fix.

source/tests/consistent/model/test_polar.py (1)

1220-1245: _compare_variables_recursive is duplicated from test_dos.py and still silently skips missing keys.

Both concerns are tracked across all three files — see comment on test_dos.py lines 1215–1241 for the consolidation suggestion and the fix for key-set strictness.

🧹 Nitpick comments (10)
source/tests/consistent/model/test_property.py (2)

1395-1424: Inconsistent do_atomic_virial between _eval_dp and _eval_pt/_eval_pt_expt.

_eval_dp omits do_atomic_virial while _eval_pt and _eval_pt_expt pass do_atomic_virial=True. For a property model this is unlikely to affect the compared keys ("foo", "atom_foo"), but the asymmetry is surprising. If it's intentional (e.g., the dpmodel __call__ doesn't accept the kwarg), a brief comment would help.


359-391: Consider adding pt_expt assertions in test_get_out_bias and test_set_out_bias.

These tests validate dp vs pt but skip pt_expt. Since self.pt_expt_model is built in setUp and other tests (e.g., test_translated_output_def, test_get_model_def_script) already cover pt_expt, the omission here may leave a gap in bias-accessor parity testing for the pt_expt backend.

source/tests/consistent/model/test_ener.py (2)

1055-1056: pe_merged = dp_merged is a shallow alias — fragile if exclusion types are later added.

Currently safe because this model config has no atom_exclude_types, but if someone extends this test, dp_model.change_out_bias mutating natoms in-place would corrupt the data seen by pt_expt_model.change_out_bias. Consider using a separate construction or a copy.deepcopy for defense.

Suggested defensive fix
-        # pt_expt stat data (numpy, same as dp)
-        pe_merged = dp_merged
+        # pt_expt stat data (numpy, same structure as dp — use deepcopy for safety)
+        import copy
+        pe_merged = copy.deepcopy(dp_merged)

1517-1542: Mixed None case in _compare_variables_recursive would produce a confusing error.

Lines 1532-1533 handle the case where both values are None, but if only one is None, np.testing.assert_allclose(array, None) will raise a TypeError rather than a clear assertion message. Consider adding an explicit check.

Suggested improvement
                 a1 = np.asarray(v1[vk]) if v1[vk] is not None else None
                 a2 = np.asarray(v2[vk]) if v2[vk] is not None else None
                 if a1 is None and a2 is None:
                     continue
+                assert a1 is not None and a2 is not None, (
+                    f"None mismatch at {child_path}/{vk}: "
+                    f"a1 is {'None' if a1 is None else 'not None'}, "
+                    f"a2 is {'None' if a2 is None else 'not None'}"
+                )
                 np.testing.assert_allclose(
source/tests/consistent/model/test_dos.py (2)

516-519: Redundant local model_args import — already imported at module level (Line 59).

🧹 Proposed fix
-        from deepmd.utils.argcheck import (
-            model_args,
-        )
-
         # Build a model with dim_case_embd > 0
         data = model_args().normalize_value(

352-625: pt_expt_model is under-exercised across most TestDOSModelAPIs test methods.

The class initializes self.pt_expt_model in setUp, but the majority of individual test methods (e.g., test_get_descriptor, test_get_fitting_net, test_set_out_bias, test_model_output_def, test_model_output_type, test_do_grad_r, test_do_grad_c, test_get_rcut, test_get_type_map, etc.) only compare dp_model vs pt_model. Only test_get_model_def_script, test_get_min_nbor_dist, and test_set_case_embd include pt_expt assertions.

Consider extending each getter/scalar test to also assert the pt_expt variant, or add a comment explaining the intentional omission.

source/tests/consistent/model/test_dipole.py (2)

533-536: Redundant local model_args import — already imported at module level (Line 59).

🧹 Proposed fix
-        from deepmd.utils.argcheck import (
-            model_args,
-        )
-
         # Build a model with dim_case_embd > 0
         data = model_args().normalize_value(

358-632: pt_expt_model is under-exercised in most TestDipoleModelAPIs test methods — same coverage gap as the DOS and Polar API test classes.

source/tests/consistent/model/test_polar.py (2)

527-530: Redundant local model_args import — already imported at module level (Line 59).

🧹 Proposed fix
-        from deepmd.utils.argcheck import (
-            model_args,
-        )
-
         # Build a model with dim_case_embd > 0
         data = model_args().normalize_value(

352-626: pt_expt_model is under-exercised in most TestPolarModelAPIs test methods — same coverage gap as TestDOSModelAPIs. Most getter/boolean tests only compare dp_model vs pt_model, omitting pt_expt_model assertions.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@source/tests/consistent/model/test_polar.py` around lines 352 - 626, Many
tests only compare dp_model vs pt_model and omit pt_expt_model; update each API
test (e.g., translated_output_def(), get_descriptor(), get_fitting_net(),
get_out_bias()/set_out_bias(), model_output_def(), model_output_type(),
do_grad_r(), do_grad_c(), get_rcut(), get_type_map(), get_sel(), get_nsel(),
get_nnei(), mixed_types(), has_message_passing(), need_sorted_nlist_for_lower(),
get_dim_fparam(), get_dim_aparam(), get_sel_type(), is_aparam_nall(),
get_model_def_script(), get_min_nbor_dist()) to also assert the same properties
against pt_expt_model (keys, shapes, numerical equality or booleans as
appropriate) by calling the same methods on pt_expt_model and adding the same
equality/shape/allclose checks used for dp_model vs pt_model; ensure
set_out_bias() and get_out_bias() tests also call pt_expt_model where relevant
and mirror the numpy/torch conversion checks.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@source/tests/consistent/model/test_dos.py`:
- Around line 1215-1241: Move the duplicated helper _compare_variables_recursive
into a single shared module (e.g. create source/tests/consistent/model/common.py
or source/tests/consistent/model/test_helpers.py), keep the exact function
signature and ensure it imports numpy as np; then remove the copies from
test_dos.py, test_polar.py and test_dipole.py and replace them with a single
import such as "from .common import _compare_variables_recursive" (or
appropriate relative path) in each test file so all three tests reuse the shared
implementation.

In `@source/tests/consistent/model/test_ener.py`:
- Around line 1831-1840: In test_load_stat_from_file, guard against in-place
mutation by wrapping the sampled inputs in deepcopy before passing to
compute_or_load_stat: replace direct uses of self.np_sampled / self.pt_sampled
when calling dp_model.compute_or_load_stat, pt_model.compute_or_load_stat and
pt_expt_model.compute_or_load_stat with deepcopy(self.np_sampled) or
deepcopy(self.pt_sampled) as appropriate (same pattern used in
test_compute_stat) so that compute_or_load_stat cannot mutate the original
self.* sample objects when atom_exclude_types is non-empty.
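Why the `deepcopy` wrapper matters can be shown with a stub; `compute_or_load_stat` below only mimics the in-place mutation the comment warns about and is not the real method.

```python
from copy import deepcopy

import numpy as np


def compute_or_load_stat(sample_func):
    # Stub: mutates the sampled frames in place, as can happen when
    # atom_exclude_types masking rewrites the arrays.
    sampled = sample_func()
    for frame in sampled:
        frame["energy"] *= 0.0
    return sampled


np_sampled = [{"energy": np.ones(3)}]

# Wrapping in deepcopy keeps the caller's data intact for later backend calls.
compute_or_load_stat(lambda: deepcopy(np_sampled))
assert np.allclose(np_sampled[0]["energy"], 1.0)
```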

---

Duplicate comments:
In `@source/tests/consistent/model/test_dipole.py`:
- Around line 1226-1251: The _compare_variables_recursive function currently
silently skips keys missing in d2; update it to assert when expected keys are
absent instead of continuing silently: in the top-level for loop replace the "if
key not in d2: continue" with an assertion (e.g., raise AssertionError or use a
test assert) that includes the missing key and current path, and likewise inside
the "@variables" block replace "if vk not in v2: continue" with an assertion
that vk exists in v2 (including child_path/vk in the message); keep the existing
numeric comparisons and recursive call behavior otherwise so mismatches still
surface.

In `@source/tests/consistent/model/test_dos.py`:
- Around line 1215-1241: The helper _compare_variables_recursive currently
ignores keys present in d1 but missing from d2 which hides regressions; change
the logic to assert/fail when a key from d1 is not found in d2 (use child_path
to give context), and similarly assert when a variable key vk under an
"@variables" dict is missing in v2; update the two places that now "continue"
(the top-level if key not in d2 and the inner if vk not in v2) to raise an
AssertionError or call pytest.fail with a descriptive message like "@variables
missing at {child_path}/{vk}" so missing keys fail the test rather than being
silently skipped.

In `@source/tests/consistent/model/test_polar.py`:
- Around line 1220-1245: The helper _compare_variables_recursive is duplicated
and currently silently skips missing keys; extract this function into a shared
test utility (e.g., tests.utils) and update its logic to enforce strict key-set
equality: when iterating keys in d1 and d2, compute the union and assert (or
raise AssertionError) on any key present in one dict but not the other
(including inside "@variables"), then compare corresponding values as before
(using np.asarray and np.testing.assert_allclose); update callers in test_polar
(and test_dos/test_* duplicates) to import the consolidated function.

In `@source/tests/consistent/model/test_property.py`:
- Around line 738-780: This is a duplicate/false-positive review: the local
variables dp_bias_init and dp_bias_before are used in assertions, so mark the
comment as resolved and do not change the test; leave the change_out_bias calls
and the subsequent np.allclose assertions (and assertFalse checks referencing
dp_bias_init and dp_bias_before) intact.
- Around line 1212-1237: The helper _compare_variables_recursive currently
ignores missing keys by using continue when a key in d1 is not in d2 (and
likewise inside the `@variables` loop), which hides asymmetric differences; update
_compare_variables_recursive to explicitly check key set equality (or assert
presence) before descending: when iterating over d1 keys, assert the key exists
in d2 (include child_path in the assertion message) or compute missing = set(d1)
- set(d2) and extra = set(d2) - set(d1) and raise/assert with clear messages;
similarly inside the "@variables" branch for vk ensure vk exists in both dicts
(or compare sets) so missing/extra variable names cause test failures rather
than being silently skipped.

---

Nitpick comments:
In `@source/tests/consistent/model/test_dipole.py`:
- Around line 533-536: Remove the redundant local import of model_args in
test_dipole.py (the local "from deepmd.utils.argcheck import model_args" shown
in the diff) since model_args is already imported at module scope; delete that
local import block to avoid shadowing/redundant imports and rely on the existing
module-level model_args symbol.
- Around line 358-632: Many tests only compare dp_model vs pt_model and omit
pt_expt_model; update each API test (e.g., test_translated_output_def,
test_get_descriptor, test_get_fitting_net, test_get_out_bias, test_set_out_bias,
test_model_output_def, test_model_output_type, test_do_grad_r, test_do_grad_c,
test_get_rcut, test_get_type_map, test_get_sel, test_get_nsel, test_get_nnei,
test_mixed_types, test_has_message_passing, test_need_sorted_nlist_for_lower,
test_get_dim_fparam, test_get_dim_aparam, test_get_sel_type,
test_is_aparam_nall) to also check pt_expt_model for parity with dp_model by
calling the same methods (e.g., translated_output_def(), get_descriptor(),
get_fitting_net(), get_out_bias(), model_output_def(), model_output_type(),
do_grad_r("dipole"), do_grad_c("dipole"), get_rcut(), get_type_map(), get_sel(),
get_nsel(), get_nnei(), mixed_types(), has_message_passing(),
need_sorted_nlist_for_lower(), get_dim_fparam(), get_dim_aparam(),
get_sel_type(), is_aparam_nall()) and asserting equality of keys, shapes,
numerical arrays (use torch_to_numpy/pt_expt_numpy_to_torch conversions as
needed) or non-None where appropriate so pt_expt_model is exercised exactly like
pt_model in each test.

In `@source/tests/consistent/model/test_dos.py`:
- Around line 516-519: The local import of model_args inside the test at
test_dos.py is redundant because model_args is already imported at module level;
remove the inner import statement that references model_args so the test uses
the module-level symbol (ensure no local shadowing or redefinition occurs and
run tests to confirm no other references rely on the local import).
- Around line 352-625: Many tests in TestDOSModelAPIs compare dp_model vs
pt_model but omit pt_expt_model; update the getter/scalar tests (e.g.,
test_get_descriptor, test_get_fitting_net, test_set_out_bias,
test_model_output_def, test_model_output_type, test_do_grad_r, test_do_grad_c,
test_get_rcut, test_get_type_map, test_get_sel, test_get_nsel, test_get_nnei,
test_mixed_types, test_has_message_passing, test_need_sorted_nlist_for_lower,
test_get_dim_fparam, test_get_dim_aparam, test_get_sel_type,
test_is_aparam_nall, test_atomic_output_def, etc.) to also assert equality (or
appropriate numeric closeness) with self.pt_expt_model, using the same helper
conversions (torch_to_numpy/numpy_to_torch/to_numpy_array) and the same
tolerances, or alternatively add a short comment above each test explaining why
pt_expt_model is intentionally excluded; locate assertions around self.dp_model
and self.pt_model in each test and replicate/adjust them to include
self.pt_expt_model (e.g., compare dp_val vs pt_expt_val or call
self.pt_expt_model.method() and assert equality).

In `@source/tests/consistent/model/test_ener.py`:
- Around line 1055-1056: pe_merged is currently a shallow alias to dp_merged
which is fragile if exclusion types are added because in-place mutations (e.g.
dp_model.change_out_bias) can corrupt the pt_expt data; fix by constructing an
independent copy for pe_merged instead of assignment — use a deep copy of
dp_merged (or rebuild pe_merged from the same source) so later calls like
dp_model.change_out_bias or pt_expt_model.change_out_bias no longer share the
same underlying arrays (ensure natoms and any mutable arrays are duplicated).
- Around line 1517-1542: The helper _compare_variables_recursive currently calls
np.testing.assert_allclose(a1, a2) even when one of a1 or a2 is None which
raises a TypeError; before calling np.testing.assert_allclose in the
"@variables" vk loop, add an explicit check for the mixed-None case (if (a1 is
None) != (a2 is None)) and raise a clear AssertionError (or use np.testing.fail)
with a message like "@variables mismatch at {child_path}/{vk}: one value is None
and the other is not"; only call np.testing.assert_allclose when both a1 and a2
are not None, passing through rtol and atol as before.
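The mixed-None guard described here can be sketched as follows; `compare_entry` and the path argument are illustrative names, used because (per the comment above) `np.testing.assert_allclose` raises a `TypeError` rather than a clear failure when handed `None`.

```python
import numpy as np


def compare_entry(a1, a2, path, rtol=1e-10, atol=1e-10):
    # Fail loudly when exactly one side is None instead of letting
    # assert_allclose raise an opaque TypeError.
    if (a1 is None) != (a2 is None):
        raise AssertionError(
            f"@variables mismatch at {path}: one value is None and the other is not"
        )
    if a1 is not None:
        np.testing.assert_allclose(a1, a2, rtol=rtol, atol=atol)


compare_entry(np.ones(2), np.ones(2), "fitting/bias")  # both present: compared
compare_entry(None, None, "fitting/bias")  # both None: skipped
try:
    compare_entry(None, np.ones(2), "fitting/bias")
except AssertionError as err:
    assert "one value is None" in str(err)
```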

In `@source/tests/consistent/model/test_property.py`:
- Around line 1395-1424: The three helpers _eval_dp, _eval_pt and _eval_pt_expt
are inconsistent: _eval_pt and _eval_pt_expt pass do_atomic_virial=True while
_eval_dp does not; either modify the dp_model invocation in _eval_dp to pass
do_atomic_virial=True (if dp_model.__call__/dp_model accepts that kwarg) or, if
dp_model does not accept that kwarg, add a concise comment above _eval_dp
explaining why do_atomic_virial is intentionally omitted (and mention
dp_model.__call__ inability), so the asymmetry is explicit and reviewers aren’t
surprised.
- Around line 359-391: Add parallel assertions for the pt_expt backend in both
test_get_out_bias and test_set_out_bias: call self.pt_expt_model.get_out_bias,
convert to numpy via torch_to_numpy (or to_numpy_array if needed), and
assert_allclose against dp_bias/expected new_bias with the same rtol/atol; in
test_set_out_bias also call
self.pt_expt_model.set_out_bias(numpy_to_torch(new_bias)) before asserting.
Reference the existing test functions test_get_out_bias and test_set_out_bias
and the methods get_out_bias and set_out_bias on self.pt_expt_model to locate
where to add these checks.

ℹ️ Review info

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 00f83cc and 35d4cbe.

📒 Files selected for processing (5)
  • source/tests/consistent/model/test_dipole.py
  • source/tests/consistent/model/test_dos.py
  • source/tests/consistent/model/test_ener.py
  • source/tests/consistent/model/test_polar.py
  • source/tests/consistent/model/test_property.py

@coderabbitai coderabbitai bot left a comment

♻️ Duplicate comments (2)
source/tests/consistent/model/common.py (1)

156-165: ⚠️ Potential issue | 🟠 Major

Make compare_variables_recursive strict on missing keys.

At Line 158 and Line 164, continue hides structural drift between backends. These parity tests should fail when expected keys are missing.

Suggested fix
 def compare_variables_recursive(
     d1: dict, d2: dict, path: str = "", rtol: float = 1e-10, atol: float = 1e-10
 ) -> None:
     """Recursively compare ``@variables`` sections in two serialized dicts."""
+    missing_in_d2 = set(d1) - set(d2)
+    missing_in_d1 = set(d2) - set(d1)
+    assert not missing_in_d2 and not missing_in_d1, (
+        f"Key mismatch at {path or '<root>'}: "
+        f"missing_in_d2={sorted(missing_in_d2)}, missing_in_d1={sorted(missing_in_d1)}"
+    )
     for key in d1:
-        if key not in d2:
-            continue
         child_path = f"{path}/{key}" if path else key
         v1, v2 = d1[key], d2[key]
         if key == "@variables" and isinstance(v1, dict) and isinstance(v2, dict):
+            miss_v2 = set(v1) - set(v2)
+            miss_v1 = set(v2) - set(v1)
+            assert not miss_v2 and not miss_v1, (
+                f"@variables key mismatch at {child_path}: "
+                f"missing_in_d2={sorted(miss_v2)}, missing_in_d1={sorted(miss_v1)}"
+            )
             for vk in v1:
-                if vk not in v2:
-                    continue
                 a1 = np.asarray(v1[vk]) if v1[vk] is not None else None
                 a2 = np.asarray(v2[vk]) if v2[vk] is not None else None
                 if a1 is None and a2 is None:
                     continue
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@source/tests/consistent/model/common.py` around lines 156 - 165, The loops in
compare_variables_recursive currently use "continue" when keys are missing (for
key in d1: if key not in d2: continue and for vk in v1: if vk not in v2:
continue), which hides structural mismatches; change both checks to treat
missing keys as a mismatch (e.g., return False or record the difference) so the
parity test fails when expected keys are absent; update the logic around
compare_variables_recursive, d1, d2, v1, v2, and the "@variables" vk loop to
explicitly handle and surface missing keys rather than skipping them.
source/tests/consistent/model/test_ener.py (1)

1804-1813: ⚠️ Potential issue | 🟠 Major

test_load_stat_from_file should also isolate sampled inputs per backend call.

Line 1724 documents in-place mutation risk, and test_compute_stat already protects against it. The load-from-file setup at Lines 1804-1813 should apply the same protection to avoid backend-order coupling.

Suggested fix
+            from copy import (
+                deepcopy,
+            )
+
             # 1. Compute stats and save to file
             self.dp_model.compute_or_load_stat(
-                lambda: self.np_sampled, stat_file_path=DPPath(dp_h5, "a")
+                lambda: deepcopy(self.np_sampled), stat_file_path=DPPath(dp_h5, "a")
             )
             self.pt_model.compute_or_load_stat(
-                lambda: self.pt_sampled, stat_file_path=DPPath(pt_h5, "a")
+                lambda: deepcopy(self.pt_sampled), stat_file_path=DPPath(pt_h5, "a")
             )
             self.pt_expt_model.compute_or_load_stat(
-                lambda: self.np_sampled, stat_file_path=DPPath(pe_h5, "a")
+                lambda: deepcopy(self.np_sampled), stat_file_path=DPPath(pe_h5, "a")
             )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@source/tests/consistent/model/test_ener.py` around lines 1804 - 1813, The
load-from-file setup for compute_or_load_stat is passing shared references
(self.np_sampled / self.pt_sampled) which can be mutated in-place and cause
backend-order coupling; update each lambda passed to
DPModel.compute_or_load_stat / PTModel.compute_or_load_stat /
PTExptModel.compute_or_load_stat to return a defensive copy (e.g., replace
lambda: self.np_sampled with lambda: self.np_sampled.copy() and replace lambda:
self.pt_sampled with lambda: self.pt_sampled.clone() or an appropriate tensor
copy) so compute_or_load_stat receives isolated sampled inputs for each backend
call.
🧹 Nitpick comments (3)
source/tests/consistent/model/test_dipole.py (2)

370-378: Consider adding pt_expt coverage to get_descriptor and get_fitting_net tests.

These tests only verify dp and pt backends. If pt_expt exposes get_descriptor() and get_fitting_net() methods, consider adding assertions for consistency.

Suggested addition
     def test_get_descriptor(self) -> None:
         """get_descriptor should return a non-None object on both backends."""
         self.assertIsNotNone(self.dp_model.get_descriptor())
         self.assertIsNotNone(self.pt_model.get_descriptor())
+        self.assertIsNotNone(self.pt_expt_model.get_descriptor())

     def test_get_fitting_net(self) -> None:
         """get_fitting_net should return a non-None object on both backends."""
         self.assertIsNotNone(self.dp_model.get_fitting_net())
         self.assertIsNotNone(self.pt_model.get_fitting_net())
+        self.assertIsNotNone(self.pt_expt_model.get_fitting_net())
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@source/tests/consistent/model/test_dipole.py` around lines 370 - 378, Tests
test_get_descriptor and test_get_fitting_net only assert on self.dp_model and
self.pt_model; add assertions for the experimental PyTorch backend (likely named
self.pt_expt) to ensure it also returns non-None for get_descriptor() and
get_fitting_net(). Update test_get_descriptor to include
self.assertIsNotNone(self.pt_expt.get_descriptor()) and update
test_get_fitting_net to include
self.assertIsNotNone(self.pt_expt.get_fitting_net()), keeping the same
docstrings and style as the existing assertions.

1148-1150: Minor: Consider extracting the error message to a constant or using a custom exception.

This is a Ruff TRY003 warning about long exception messages. For test code where descriptive messages are helpful for debugging failures, this pattern is acceptable. The warning is minor and won't fail CI.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@source/tests/consistent/model/test_dipole.py` around lines 1148 - 1150, The
inline long exception message in the test's raise_error() function should be
extracted to a named constant or replaced by a small custom exception to silence
the Ruff TRY003 warning; update the test by replacing the literal string "Should
not recompute — should load from file" with a constant like RECOMPUTE_ERROR_MSG
(or raise a custom TestRecomputeError) and reference that constant/exception
inside raise_error() so the test remains descriptive but avoids the long inline
literal.
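The constant-extraction option reads roughly like this; `RECOMPUTE_ERROR_MSG` is an illustrative name, and the literal is the one quoted in the comment.

```python
# Hoisting the long literal out of the raise site is what the Ruff TRY003
# rule asks for; the message itself is unchanged.
RECOMPUTE_ERROR_MSG = "Should not recompute — should load from file"


def raise_error():
    raise RuntimeError(RECOMPUTE_ERROR_MSG)


try:
    raise_error()
except RuntimeError as err:
    assert str(err) == RECOMPUTE_ERROR_MSG
```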
source/tests/consistent/model/test_property.py (1)

1156-1210: Consider clarifying the test expectation in the docstring.

The test creates data where all atoms are type 0, yet the assertion at line 1210 expects ["O", "H"] (both types) to be observed. While the comment at lines 1208-1209 explains this is due to stats_distinguish_types=False, this behavior may surprise future maintainers. Consider expanding the docstring to explicitly mention that PropertyModel uses stats_distinguish_types=False.

📝 Suggested docstring enhancement
     def test_get_observed_type_list(self) -> None:
         """get_observed_type_list should be consistent across dp, pt, pt_expt.

         Uses mock data containing only type 0 ("O") so that type 1 ("H") is
-        unobserved and should be absent from the returned list.
+        unobserved. However, PropertyModel uses stats_distinguish_types=False,
+        so all types receive the same (non-zero) bias and both appear observed.
         """
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@source/tests/consistent/model/test_property.py` around lines 1156 - 1210, The
test's docstring is unclear about why both types ("O","H") are expected even
though input data contains only type 0; update the docstring of
test_get_observed_type_list to explicitly state that PropertyModel (used by
self.dp_model/self.pt_model/self.pt_expt_model) sets
stats_distinguish_types=False so biases are shared across types and unobserved
types will be reported as observed; mention the relevant symbols PropertyModel
and stats_distinguish_types and that this is why the final assertion expects
["O","H"].

ℹ️ Review info


📥 Commits

Reviewing files that changed from the base of the PR and between 35d4cbe and 4540382.

📒 Files selected for processing (7)
  • source/tests/consistent/model/common.py
  • source/tests/consistent/model/test_dipole.py
  • source/tests/consistent/model/test_dos.py
  • source/tests/consistent/model/test_ener.py
  • source/tests/consistent/model/test_polar.py
  • source/tests/consistent/model/test_property.py
  • source/tests/consistent/model/test_zbl_ener.py

# Conflicts:
#	deepmd/dpmodel/atomic_model/polar_atomic_model.py
@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
deepmd/pd/train/training.py (1)

1412-1418: The compute_input_stats method exists and the code pattern matches the pt backend implementation.

The null safety suggestion is good defensive programming, but note that deepmd/pt/train/training.py (line 1822) uses the identical pattern without a null check:

_model.get_fitting_net().compute_input_stats(_sample_func)

Both backends return self.atomic_model.fitting_net directly, and for models that pass the isinstance(_model, DPModelCommon) check, the fitting network should always be initialized from the model configuration. If null safety is desired, add the guard to both backends so they stay consistent.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/pd/train/training.py` around lines 1412 - 1418, The current guard
around compute_input_stats introduces an inconsistency with the PT backend;
update this file to match deepmd/pt by removing the extra null check and
directly call _model.get_fitting_net().compute_input_stats(_sample_func) inside
the existing isinstance(_model, DPModelCommon) and _bias_adjust_mode ==
"set-by-statistic" condition so DPModelCommon, get_fitting_net, and
compute_input_stats follow the same pattern as the PT backend.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@deepmd/pt/train/training.py`:
- Around line 1817-1822: The current call assumes a fitting network exists: when
_model is a DPModelCommon and _bias_adjust_mode == "set-by-statistic" you should
first retrieve fitting_net = _model.get_fitting_net() and only call
fitting_net.compute_input_stats(_sample_func) if fitting_net is not None; update
the block around DPModelCommon/_model.get_fitting_net() to guard against a None
return (skip or handle accordingly) to match the defensive checks used elsewhere
(e.g., ener_model.py).
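The guarded pattern this comment asks for can be sketched with stubs; `_Model` and `_FittingNet` below stand in for `DPModelCommon` and the real fitting network, so only the `None` check is the point.

```python
class _FittingNet:
    def __init__(self):
        self.called = False

    def compute_input_stats(self, sample_func):
        self.called = True


class _Model:
    def __init__(self, fitting_net):
        self._fitting_net = fitting_net

    def get_fitting_net(self):
        return self._fitting_net


def maybe_compute_input_stats(model, sample_func):
    fitting_net = model.get_fitting_net()
    if fitting_net is not None:  # skip models without a fitting network
        fitting_net.compute_input_stats(sample_func)


net = _FittingNet()
maybe_compute_input_stats(_Model(net), lambda: [])
assert net.called
maybe_compute_input_stats(_Model(None), lambda: [])  # no AttributeError
```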


ℹ️ Review info


📥 Commits

Reviewing files that changed from the base of the PR and between 4540382 and aa2643e.

📒 Files selected for processing (3)
  • deepmd/dpmodel/atomic_model/polar_atomic_model.py
  • deepmd/pd/train/training.py
  • deepmd/pt/train/training.py
🚧 Files skipped from review as they are similar to previous changes (1)
  • deepmd/dpmodel/atomic_model/polar_atomic_model.py

  Remove unused  parameter from PairTabAtomicModel and
  redundant  assignments already handled by BaseAtomicModel.__init__.
@wanghan-iapcm wanghan-iapcm added this pull request to the merge queue Feb 26, 2026
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to a conflict with the base branch Feb 26, 2026
# Conflicts:
#	deepmd/dpmodel/utils/stat.py
@wanghan-iapcm wanghan-iapcm added this pull request to the merge queue Feb 27, 2026
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Feb 27, 2026
@wanghan-iapcm wanghan-iapcm enabled auto-merge March 1, 2026 05:58
@wanghan-iapcm wanghan-iapcm added this pull request to the merge queue Mar 1, 2026
Merged via the queue into deepmodeling:master with commit 314d03a Mar 1, 2026
70 checks passed
@wanghan-iapcm wanghan-iapcm deleted the feat-other-full-model branch March 1, 2026 10:09

3 participants