feat(pt_expt): add linear energy model#5317

Open
wanghan-iapcm wants to merge 9 commits into deepmodeling:master from wanghan-iapcm:feat-pt-expt-linear

Conversation


@wanghan-iapcm wanghan-iapcm commented Mar 16, 2026

This is a breaking change: the model data version of LinearEnergyAtomicModel is bumped from 2 to 3 due to the bug fixes.

Summary

  • Add LinearEnergyModel to pt_expt backend, enabling linear combination of multiple sub-models
  • Add get_linear_model factory in pt_expt for constructing from config dicts
  • Fix bugs in dpmodel/pt shared code:
    • get_linear_model (pt) not propagating type_map to sub-models
    • LinearEnergyAtomicModel (dpmodel) missing weights parameter, causing deserialization failure
    • _compute_weight calling array_api_compat.array_namespace() with Python list and using numpy dtype with torch

Test plan

  • Cross-backend consistency test (source/tests/consistent/model/test_linear_ener.py) — pt vs pt_expt, with parameterized exclude types
  • Existing dpmodel/pt linear model tests still pass
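
A cross-backend test of this kind usually boils down to building the same model config in both backends and comparing the outputs numerically. A minimal sketch with hypothetical callables standing in for the pt and pt_expt models (not the actual test code):

```python
import numpy as np

def check_consistency(build_ref, build_test, coord, atype, rtol=1e-10):
    """Compare predictions from two backend builds of the same model.

    build_ref / build_test are hypothetical callables standing in for
    evaluating the pt and pt_expt models built from one shared config.
    """
    np.testing.assert_allclose(build_ref(coord, atype),
                               build_test(coord, atype), rtol=rtol)

# toy usage: two "backends" that agree exactly
model_a = lambda c, t: c.sum(axis=-1) * 0.5
model_b = lambda c, t: 0.5 * c.sum(axis=-1)
coord = np.random.default_rng(0).random((4, 3))
check_consistency(model_a, model_b, coord, np.zeros(4, dtype=int))
```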

Summary by CodeRabbit

  • New Features

    • Added LinearEnergyModel and a "linear_ener" fitting path to assemble multiple sub-models.
    • Configurable weighting when combining sub-model energies: "mean", "sum", or custom vector; weights are validated and persisted.
  • Behavior

    • Sub-model type mappings are propagated when omitted.
    • Model serialization updated to include weight settings and support the new version.
  • Tests

    • Extensive tests across backends verifying weighting, outputs, and selector update behavior.
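
The three weighting modes reduce to a simple per-model scalar rule. A hedged sketch of the semantics (illustrative only; not the project's actual `_compute_weight` implementation):

```python
import numpy as np

def compute_weights(nmodels, weights):
    """Return one scalar weight per sub-model.

    "mean" -> equal weights summing to 1; "sum" -> all ones;
    a list  -> used as given (length must match nmodels).
    """
    if weights == "mean":
        return np.full(nmodels, 1.0 / nmodels)
    if weights == "sum":
        return np.ones(nmodels)
    w = np.asarray(weights, dtype=float)
    if w.shape != (nmodels,):
        raise ValueError(f"'weights' must have length {nmodels}")
    return w

energies = np.array([1.0, 3.0])  # per-sub-model energies
assert np.isclose(compute_weights(2, "mean") @ energies, 2.0)
assert np.isclose(compute_weights(2, "sum") @ energies, 4.0)
assert np.isclose(compute_weights(2, [0.25, 0.75]) @ energies, 2.5)
```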

Han Wang added 3 commits March 16, 2026 00:27
Wrap dpmodel's LinearEnergyAtomicModel via make_model, following the
same pattern as DPZBLModel. Supports linear combination of multiple
atomic models with configurable weights.
- Fix get_linear_model not propagating type_map from root to sub-models
- Fix array_api_compat.array_namespace called with Python list argument
- Fix xp.ones using numpy dtype (GLOBAL_NP_FLOAT_PRECISION) with torch
- Add cross-backend consistency test for linear_ener model
- Add get_linear_model to pt_expt get_model.py for constructing
  LinearEnergyModel from config dict
- Fix dpmodel LinearEnergyAtomicModel missing weights parameter
  (was passed through **kwargs to BaseAtomicModel which rejected it)
- Update consistency test to use pt_expt's get_linear_model directly
Comment on lines +41 to +49
def forward(
self,
coord: torch.Tensor,
atype: torch.Tensor,
box: torch.Tensor | None = None,
fparam: torch.Tensor | None = None,
aparam: torch.Tensor | None = None,
do_atomic_virial: bool = False,
) -> dict[str, torch.Tensor]:

Check warning

Code scanning / CodeQL

Signature mismatch in overriding method Warning

This method requires at least 3 positional arguments, whereas overridden
Identity.forward
requires 2.
This method requires at least 3 positional arguments, whereas overridden
test_torch_module_respects_explicit_forward.MockModule.forward
requires 2.
Comment on lines +71 to +80
def forward_lower(
self,
extended_coord: torch.Tensor,
extended_atype: torch.Tensor,
nlist: torch.Tensor,
mapping: torch.Tensor | None = None,
fparam: torch.Tensor | None = None,
aparam: torch.Tensor | None = None,
do_atomic_virial: bool = False,
) -> dict[str, torch.Tensor]:

Check warning

Code scanning / CodeQL

Signature mismatch in overriding method Warning

This method requires at least 4 positional arguments, whereas overridden
test_torch_module_respects_explicit_forward_lower.MockModule.forward_lower
requires 3.
This call correctly calls the base method, but does not match the signature of the overriding method.
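
The mismatch CodeQL flags here can be reproduced in isolation: an override that adds a required positional parameter is no longer substitutable for the base class. A minimal, self-contained illustration with toy classes (unrelated to the actual DeePMD modules):

```python
class Base:
    def forward(self, coord):              # one required argument
        return coord

class Child(Base):
    # CodeQL-style mismatch: the override requires an extra positional
    # argument, so callers written against Base break on Child instances.
    def forward(self, coord, atype):
        return coord + atype

def run(model, coord):
    return model.forward(coord)            # fine for Base ...

assert run(Base(), 1.0) == 1.0
try:
    run(Child(), 1.0)                      # ... TypeError for Child
except TypeError:
    pass
```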

coderabbitai bot commented Mar 16, 2026

📝 Walkthrough

Adds a configurable weighted linear energy model: exposes LinearEnergyModel (PT-Expt), extends LinearEnergyAtomicModel with a public weights parameter/serialization (v3), propagates type_map to sub-models, and adds multi-backend and unit tests validating mean/sum/custom weighting and selector updates.

Changes

Cohort / File(s) Summary
Core DP atomic model
deepmd/dpmodel/atomic_model/linear_atomic_model.py
Add weights parameter (accepts "mean", "sum", or list[float]) with validation and self.weights; update _compute_weight to use self.weights; bump serialized "@Version" to 3 and include "weights"; deserialization accepts v3 and defaults missing weights to "mean".
PT atomic model compatibility
deepmd/pt/model/atomic_model/linear_atomic_model.py
Align serialize/deserialize to version 3 and default missing weights to "mean" for backwards compatibility.
PT-Expt model implementation
deepmd/pt_expt/model/dp_linear_model.py
New LinearEnergyModel (registered "linear_ener") implementing forward/forward_lower, FX-exportable forward_lower, translated output defs, and update_sel delegating to sub-models while tracking minimum neighbor distance.
Model factory / integration
deepmd/pt/model/model/__init__.py, deepmd/pt_expt/model/get_model.py
Add get_linear_model to construct LinearEnergyModel from config; get_model routes "linear_ener" to it; propagate parent type_map into sub-model params when missing.
API export
deepmd/pt_expt/model/__init__.py
Export LinearEnergyModel by adding it to module imports and __all__.
Tests — integration & multi-backend
source/tests/consistent/model/test_linear_ener.py
Add comprehensive multi-backend tests exercising DP, PT, and PT-Expt linear energy behavior across parameterized configs.
Tests — unit (DP atomic)
source/tests/common/dpmodel/test_linear_atomic_model.py
Add TestLinearWeights asserting mean, sum, and custom weight vectors combine sub-model energies correctly.
Tests — PT-Expt model
source/tests/pt_expt/model/test_linear_ener_model.py
Add tests verifying outputs are exact weighted sums (mean, sum, custom) of sub-model outputs and that update_sel writes updated selectors back into returned config.
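
The version bump in the table above follows a common backward-compatible pattern: v3 payloads carry an explicit weights field, and the deserializer defaults it to "mean" for older v2 data. A hedged sketch of that pattern (field handling assumed from the summary, not the actual implementation):

```python
def serialize(weights):
    # new payloads are written at version 3 with an explicit weights field
    return {"@class": "Model", "@version": 3, "weights": weights}

def deserialize(data):
    version = data.get("@version", 1)
    if version > 3:
        raise ValueError(f"unsupported model version {version}")
    # version-2 data predates the weights field: default to "mean"
    return data.get("weights", "mean")

assert deserialize({"@version": 2}) == "mean"            # old payload
assert deserialize(serialize([0.3, 0.7])) == [0.3, 0.7]  # v3 round-trip
```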

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant LinearEnergyModel
    participant SubModels as Sub-Models
    participant Weighter as WeightComputation
    participant Serializer

    User->>LinearEnergyModel: forward(... / forward_lower(...))
    LinearEnergyModel->>SubModels: call_common(...) per sub-model
    SubModels-->>LinearEnergyModel: atom_energy, forces, virials
    LinearEnergyModel->>Weighter: _compute_weight(nmodels, extended_coord, self.weights)
    Weighter-->>LinearEnergyModel: weight tensors
    LinearEnergyModel->>LinearEnergyModel: apply weights -> atom_energy, reduce -> energy
    LinearEnergyModel->>Serializer: serialize()/deserialize() (includes "weights", v3)
    LinearEnergyModel-->>User: atom_energy, energy, (force), (virial)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • njzjz
  • iProzd
  • anyangml
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Docstring Coverage ⚠️ Warning — docstring coverage is 22.22%, below the required threshold of 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Title check ✅ Passed — the title 'feat(pt_expt): add linear energy model' clearly and accurately summarizes the main change: adding a new LinearEnergyModel feature to the pt_expt backend. It is concise, specific, and directly reflects the primary objective of the pull request.
Description check ✅ Passed — check skipped; CodeRabbit's high-level summary is enabled.



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (4)
deepmd/pt_expt/model/get_model.py (1)

141-152: Consider adding validation for required pairtab parameters.

If tab_file, rcut, or sel keys are missing from a pairtab sub-model config, this will raise a generic KeyError. Consider providing a more helpful error message, similar to the assertion for type checking.

💡 Suggested validation
         else:
             assert (
                 "type" in sub_model_params and sub_model_params["type"] == "pairtab"
             ), "Sub-models in LinearEnergyModel must be a DPModel or a PairTable Model"
+            for key in ("tab_file", "rcut", "sel"):
+                if key not in sub_model_params:
+                    raise ValueError(
+                        f"PairTabAtomicModel requires '{key}' in sub_model_params"
+                    )
             list_of_models.append(
                 PairTabAtomicModel(
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/pt_expt/model/get_model.py` around lines 141 - 152, Add explicit
validation for the required pairtab keys before constructing PairTabAtomicModel:
check that sub_model_params contains "tab_file", "rcut", and "sel" (in addition
to the existing "type" check) and raise a clear assertion or ValueError if any
are missing, referencing sub_model_params and the parent model context (e.g.,
model_params["type_map"]) so the error message identifies the offending
sub-model; then proceed to append PairTabAtomicModel using
sub_model_params["tab_file"], sub_model_params["rcut"], and
sub_model_params["sel"] as currently done.
deepmd/pt_expt/model/dp_linear_model.py (1)

61-66: Inconsistent key access patterns between forward and forward_lower.

forward() uses direct dictionary access (model_ret["energy_derv_r"]), while forward_lower() uses .get() method. If energy_derv_r key is missing, forward() will raise KeyError before the is not None check executes.

Consider using .get() consistently for defensive access, matching the pattern in forward_lower() (lines 93-100) and the sibling EnergyModel in ener_model.py.

♻️ Proposed fix for consistency
-        if self.do_grad_r("energy") and model_ret["energy_derv_r"] is not None:
+        if self.do_grad_r("energy") and model_ret.get("energy_derv_r") is not None:
             model_predict["force"] = model_ret["energy_derv_r"].squeeze(-2)
-        if self.do_grad_c("energy") and model_ret["energy_derv_c_redu"] is not None:
+        if self.do_grad_c("energy") and model_ret.get("energy_derv_c_redu") is not None:
             model_predict["virial"] = model_ret["energy_derv_c_redu"].squeeze(-2)
-            if do_atomic_virial and model_ret["energy_derv_c"] is not None:
+            if do_atomic_virial and model_ret.get("energy_derv_c") is not None:
                 model_predict["atom_virial"] = model_ret["energy_derv_c"].squeeze(-2)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/pt_expt/model/dp_linear_model.py` around lines 61 - 66, The forward
method currently accesses model_ret with direct indexing (e.g.
model_ret["energy_derv_r"], model_ret["energy_derv_c_redu"],
model_ret["energy_derv_c"]) which can raise KeyError if keys are absent; change
these to defensive lookups using model_ret.get("energy_derv_r"),
model_ret.get("energy_derv_c_redu"), and model_ret.get("energy_derv_c") and keep
the existing is not None checks and logic that writes into model_predict (also
keeping do_grad_r/do_grad_c and do_atomic_virial conditions unchanged) so
behavior matches forward_lower and EnergyModel.
deepmd/dpmodel/atomic_model/linear_atomic_model.py (1)

70-70: The weights parameter is added but not used in _compute_weight.

The weights parameter is correctly added to restore deserialization compatibility, but note that _compute_weight (lines 390-405) doesn't use self.weights – it always returns equal weights (1/nmodels). If this is intentional (expecting subclasses like DPZBLLinearEnergyAtomicModel to override), consider documenting this behavior or storing self.weights for potential future use.

💡 Consider storing weights for subclass use
     def __init__(
         self,
         models: list[BaseAtomicModel],
         type_map: list[str],
         weights: str | list[float] | None = "mean",
         **kwargs: Any,
     ) -> None:
         super().__init__(type_map, **kwargs)
         super().init_out_stat()
+        self.weights = weights
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/dpmodel/atomic_model/linear_atomic_model.py` at line 70, The
constructor added the weights parameter but never stores or uses it; update the
class initializer to assign self.weights = weights and modify _compute_weight to
use self.weights: if self.weights == "mean" return equal weights, if it's a
sequence validate length==nmodels, normalize to sum 1 and return that array, and
raise a clear ValueError for invalid types; this ensures subclasses like
DPZBLLinearEnergyAtomicModel can rely on self.weights and that _compute_weight
actually respects provided weights.
source/tests/consistent/model/test_linear_ener.py (1)

159-159: Minor: ntypes doesn't match type_map length.

self.ntypes = 2 but type_map in data has 3 types ["O", "H", "B"]. While this works because only types 0 and 1 are used in the test data, it could cause confusion. Consider aligning ntypes with the actual type count or adding a comment explaining the discrepancy.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@source/tests/consistent/model/test_linear_ener.py` at line 159, The test sets
self.ntypes = 2 while the constructed data's type_map contains three entries
["O","H","B"], which is misleading; update the test to make ntypes match the
number of types in data (set self.ntypes = 3) or if only two types are
intentionally used, add a clarifying comment next to self.ntypes explaining that
type "B" is unused in this test, and ensure references to type_map and any
usages of data in test_linear_ener (e.g., variables reading type_map) remain
consistent with the chosen approach.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@deepmd/pt_expt/model/dp_linear_model.py`:
- Around line 186-192: The loop updates a sub_model via DPModelCommon.update_sel
but never writes it back into local_jdata_cpy, so local_jdata_cpy["models"]
still holds the original; after calling DPModelCommon.update_sel(...) inside the
for-loop (where symbols are local_jdata_cpy, sub_model,
DPModelCommon.update_sel, and models), assign the returned sub_model back into
local_jdata_cpy["models"][idx] and keep the existing min_nbor_dist update logic
so the modified model is persisted in local_jdata_cpy.



📥 Commits

Reviewing files that changed from the base of the PR and between 09345bf and 64fcc3b.

📒 Files selected for processing (6)
  • deepmd/dpmodel/atomic_model/linear_atomic_model.py
  • deepmd/pt/model/model/__init__.py
  • deepmd/pt_expt/model/__init__.py
  • deepmd/pt_expt/model/dp_linear_model.py
  • deepmd/pt_expt/model/get_model.py
  • source/tests/consistent/model/test_linear_ener.py


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 64fcc3b490



codecov bot commented Mar 16, 2026

Codecov Report

❌ Patch coverage is 67.21311% with 40 lines in your changes missing coverage. Please review.
✅ Project coverage is 82.29%. Comparing base (09345bf) to head (afcb868).
⚠️ Report is 1 commit behind head on master.

Files with missing lines Patch % Lines
deepmd/pt_expt/model/dp_linear_model.py 58.10% 31 Missing ⚠️
deepmd/dpmodel/atomic_model/linear_atomic_model.py 80.00% 4 Missing ⚠️
deepmd/pt_expt/model/get_model.py 81.81% 4 Missing ⚠️
...eepmd/pt/model/atomic_model/linear_atomic_model.py 66.66% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #5317      +/-   ##
==========================================
- Coverage   82.29%   82.29%   -0.01%     
==========================================
  Files         775      776       +1     
  Lines       77627    77743     +116     
  Branches     3676     3675       -1     
==========================================
+ Hits        63887    63982      +95     
- Misses      12566    12587      +21     
  Partials     1174     1174              


Han Wang added 2 commits March 17, 2026 11:31
Store self.weights in __init__, use it in _compute_weight to support
"sum", "mean", and custom list weights. Persist weights in serialize.
Previously the weights parameter was accepted but silently ignored,
always using equal mean weights.

Add unit tests at both atomic model level (dpmodel) and model level
(pt_expt) verifying energy, force, and virial match expected weighted
combinations of individual sub-model outputs.
DPModelCommon.update_sel returns an updated copy of the sub-model
config, but the result was assigned to a local variable and never
written back to local_jdata_cpy["models"]. Also read from the copy
instead of the original. This caused sel updates to be silently
dropped for linear models.
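
The fix described in this commit message amounts to assigning the updated copy back by index. A minimal sketch, with a hypothetical `update_one_sel` standing in for `DPModelCommon.update_sel`:

```python
def update_one_sel(sub_model):
    """Hypothetical stand-in for DPModelCommon.update_sel: returns an
    updated copy of the sub-model config plus a min neighbor distance."""
    updated = dict(sub_model)
    updated["sel"] = [s + 10 for s in sub_model["sel"]]
    return updated, 0.9

def update_sel(jdata):
    # shallow-copy the config so the caller's dict is left untouched
    jdata_cpy = {**jdata, "models": list(jdata["models"])}
    min_nbor_dist = None
    for idx, sub_model in enumerate(jdata_cpy["models"]):
        updated, dist = update_one_sel(sub_model)
        jdata_cpy["models"][idx] = updated  # the write-back the bug dropped
        if dist is not None and (min_nbor_dist is None or dist < min_nbor_dist):
            min_nbor_dist = dist
    return jdata_cpy, min_nbor_dist

cfg = {"models": [{"sel": [30]}, {"sel": [40]}]}
out, min_dist = update_sel(cfg)
assert out["models"][0]["sel"] == [40]   # updated copy persisted
assert cfg["models"][0]["sel"] == [30]   # original config untouched
```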
@wanghan-iapcm wanghan-iapcm requested a review from njzjz March 17, 2026 04:01

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (2)
deepmd/dpmodel/atomic_model/linear_atomic_model.py (1)

70-70: Inconsistent validation: assert vs. ValueError.

The validation uses assert for string and list validation (lines 102, 104) but ValueError for the fallback case. Assertions can be disabled with python -O, making the validation ineffective in optimized mode. For production code, use explicit ValueError consistently.

♻️ Proposed fix
         if isinstance(weights, str):
-            assert weights in ["sum", "mean"]
+            if weights not in ["sum", "mean"]:
+                raise ValueError(
+                    f"'weights' must be 'sum' or 'mean' when provided as a string, got '{weights}'."
+                )
         elif isinstance(weights, list):
-            assert len(weights) == len(models)
+            if len(weights) != len(models):
+                raise ValueError(
+                    f"'weights' list length ({len(weights)}) must match number of models ({len(models)})."
+                )
         else:
             raise ValueError(
                 f"'weights' must be a string ('sum' or 'mean') or a list of float of length {len(models)}."
             )

Also applies to: 101-109

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/dpmodel/atomic_model/linear_atomic_model.py` at line 70, The
constructor/validator for the weights parameter in class LinearAtomicModel
currently uses assert for checking string and list cases while raising
ValueError for the fallback case; replace those assert statements in the
__init__ (or any helper like _validate_weights) with explicit ValueError
exceptions so validation cannot be bypassed with python -O, i.e., check that
weights is either "mean" or a list/sequence of floats and raise ValueError with
a clear message for invalid types/values and for invalid lengths or contents.
deepmd/pt_expt/model/dp_linear_model.py (1)

59-66: Inconsistent dictionary key access pattern.

In forward (lines 59-66), you use direct access model_ret["energy_derv_r"] after checking the condition, but in forward_lower (lines 93-100), you use .get() pattern: model_ret.get("energy_derv_r"). This inconsistency could mask bugs if keys are missing unexpectedly.

Looking at the PT version in the context snippet (deepmd/pt/model/model/dp_linear_model.py:60-91), it uses direct bracket access which would raise KeyError on missing keys—a fail-fast approach that helps identify issues early.

Consider using consistent direct access in both methods, since the condition check already verifies the key should exist.

♻️ Proposed fix for forward_lower
-        if self.do_grad_r("energy") and model_ret.get("energy_derv_r") is not None:
+        if self.do_grad_r("energy") and model_ret["energy_derv_r"] is not None:
             model_predict["extended_force"] = model_ret["energy_derv_r"].squeeze(-2)
-        if self.do_grad_c("energy") and model_ret.get("energy_derv_c_redu") is not None:
+        if self.do_grad_c("energy") and model_ret["energy_derv_c_redu"] is not None:
             model_predict["virial"] = model_ret["energy_derv_c_redu"].squeeze(-2)
-            if do_atomic_virial and model_ret.get("energy_derv_c") is not None:
+            if do_atomic_virial and model_ret["energy_derv_c"] is not None:
                 model_predict["extended_virial"] = model_ret["energy_derv_c"].squeeze(
                     -2
                 )

Also applies to: 91-100

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/pt_expt/model/dp_linear_model.py` around lines 59 - 66, The
forward_lower method uses model_ret.get(...) inconsistently with forward's
direct bracket access; change forward_lower to access keys directly (e.g.,
model_ret["energy_derv_r"], model_ret["energy_derv_c_redu"],
model_ret["energy_derv_c"]) after the existing do_grad_r("energy") /
do_grad_c("energy") checks so failures surface as KeyError like in forward, and
ensure the same squeeze(-2) assignments to model_predict["force"],
model_predict["virial"], and model_predict["atom_virial"] mirror the logic in
forward.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@source/tests/pt_expt/model/test_linear_ener_model.py`:
- Around line 177-181: The test calls LinearEnergyModel.update_sel and assigns
its two-return values to result and min_dist, but min_dist is unused and flagged
by Ruff RUF059; change the assignment to capture the second return as an
intentionally ignored name (e.g., _min_dist or _) so the linter knows it's
unused. Edit the test line that currently reads "result, min_dist =
LinearEnergyModel.update_sel(...)" to use "result, _min_dist =
LinearEnergyModel.update_sel(...)" (or "result, _ = ...") and run tests/linter
to confirm the warning is resolved.



📥 Commits

Reviewing files that changed from the base of the PR and between 64fcc3b and 5decc37.

📒 Files selected for processing (4)
  • deepmd/dpmodel/atomic_model/linear_atomic_model.py
  • deepmd/pt_expt/model/dp_linear_model.py
  • source/tests/common/dpmodel/test_linear_atomic_model.py
  • source/tests/pt_expt/model/test_linear_ener_model.py

Han Wang added 3 commits March 17, 2026 22:39
…sion

Add weights to PT backend serialize, bump @Version to 3 for both
dpmodel and PT backends, and default to "mean" when deserializing
version 2 data that lacks the weights field.
DPModelCommon.update_sel returns an updated copy of the sub-model
config, but the result was assigned to a local variable and never
written back to local_jdata_cpy["models"]. Also read from the copy
instead of the original. This caused sel updates to be silently
dropped for linear models.

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
deepmd/dpmodel/atomic_model/linear_atomic_model.py (1)

70-109: ⚠️ Potential issue | 🟠 Major

Replace assert-based validation with explicit exceptions and fix ruff violations before committing.

The weights validation at lines 101–104 uses assert statements, which can be disabled with optimized Python (python -O). Additionally, the type annotation allows None (line 70) but the code doesn't handle it explicitly—it falls through to the else block and raises ValueError.

The file also has 2 ruff violations (RUF059: unused variables at line 251) that must be fixed per the coding guideline: **/*.py: Always run ruff check . and ruff format . before committing changes or CI will fail.

Replace the assertions with explicit validation and handle the None case:

Proposed fix
         if isinstance(weights, str):
-            assert weights in ["sum", "mean"]
+            if weights not in ("sum", "mean"):
+                raise ValueError("`weights` must be 'sum' or 'mean' when provided as a string.")
         elif isinstance(weights, list):
-            assert len(weights) == len(models)
+            if len(weights) != len(models):
+                raise ValueError(
+                    f"'weights' must be a string ('sum' or 'mean') or a numeric list of length {len(models)}."
+                )
         else:
             raise ValueError(
                 f"'weights' must be a string ('sum' or 'mean') or a list of float of length {len(models)}."
             )

Also run ruff check . --fix and ruff format . on the repository to resolve the RUF059 violations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/dpmodel/atomic_model/linear_atomic_model.py` around lines 70 - 109,
The constructor in LinearAtomicModel uses assert for validating the weights
parameter and allows weights: None but doesn't handle it; replace the assert
checks in the weights handling block (the code that checks isinstance(weights,
str) / list) with explicit ValueError/TypeError raises and add explicit handling
for weights is None (e.g., set a sensible default or raise a clear exception),
ensuring the error messages reference the parameter and expected values; after
updating, run ruff check . --fix and ruff format . to resolve the reported
RUF059 unused-variable violations (and fix or remove the unused variable(s)
around the previously reported location) before committing.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@deepmd/dpmodel/atomic_model/linear_atomic_model.py`:
- Around line 333-337: The unpacking of nlist.shape assigns nframes, nloc, nnei
but nloc and nnei are unused; update the unpacking (where nlist.shape is used,
e.g., in the function/method that currently does "nframes, nloc, nnei =
nlist.shape") to use _nloc and _nnei (nframes, _nloc, _nnei = nlist.shape) to
satisfy ruff's unused-variable rule, then run ruff check . && ruff format . to
ensure no other style violations remain before merging.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: 567d1b56-19c2-4431-a47b-ed3351555713

📥 Commits

Reviewing files that changed from the base of the PR and between 5decc37 and 740eb0c.

📒 Files selected for processing (2)
  • deepmd/dpmodel/atomic_model/linear_atomic_model.py
  • deepmd/pt/model/atomic_model/linear_atomic_model.py

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
deepmd/dpmodel/atomic_model/linear_atomic_model.py (1)

101-109: Avoid assert for input validation.

Using assert for validating user input is problematic because assertions can be disabled with Python's -O flag, which would silently accept invalid weights values. Use explicit if/raise instead.

Proposed fix
         if isinstance(weights, str):
-            assert weights in ["sum", "mean"]
+            if weights not in ["sum", "mean"]:
+                raise ValueError(
+                    f"'weights' must be 'sum' or 'mean' when provided as a string, got '{weights}'."
+                )
         elif isinstance(weights, list):
-            assert len(weights) == len(models)
+            if len(weights) != len(models):
+                raise ValueError(
+                    f"'weights' list length ({len(weights)}) must match number of models ({len(models)})."
+                )
         else:
             raise ValueError(
                 f"'weights' must be a string ('sum' or 'mean') or a list of float of length {len(models)}."
             )
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@deepmd/dpmodel/atomic_model/linear_atomic_model.py` around lines 101 - 109,
Replace the two assert statements used to validate the constructor argument
`weights` with explicit checks that raise ValueError: in the class/constructor
in linear_atomic_model.py (around the logic that assigns self.weights) change
the branch that currently uses `assert weights in ["sum", "mean"]` to an if that
raises a ValueError with a clear message when weights is a str but not "sum" or
"mean", and change the branch that currently uses `assert len(weights) ==
len(models)` to an if that raises a ValueError when weights is a list whose
length does not match len(models); leave the final else to raise the same
ValueError for invalid types and then assign self.weights = weights.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@deepmd/dpmodel/atomic_model/linear_atomic_model.py`:
- Line 70: The type hint for the weights parameter in LinearAtomicModel
(weights: str | list[float] | None = "mean") conflicts with the validation that
rejects None; update the constructor/validation to treat None as the default
"mean" before running existing checks (i.e., if weights is None: weights =
"mean"), then proceed with current validation logic for weights, ensuring all
references use the normalized value; alternatively remove None from the type
annotation if you prefer not to accept None.


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: CHILL

Plan: Pro

Run ID: e33356a5-0051-4bf6-8c94-cee96fde3bd8

📥 Commits

Reviewing files that changed from the base of the PR and between 740eb0c and afcb868.

📒 Files selected for processing (1)
  • deepmd/dpmodel/atomic_model/linear_atomic_model.py

self,
models: list[BaseAtomicModel],
type_map: list[str],
weights: str | list[float] | None = "mean",

⚠️ Potential issue | 🟡 Minor

Type hint allows None but validation rejects it.

The type annotation str | list[float] | None suggests None is a valid input, but the validation logic (lines 101-108) will raise ValueError if None is passed. Either remove None from the type hint or handle it explicitly in validation (e.g., by treating None as "mean").

Proposed fix
-        weights: str | list[float] | None = "mean",
+        weights: str | list[float] = "mean",
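Putting the two suggestions together (normalize None to "mean" first, then validate with explicit raises instead of assert), a minimal standalone sketch — `normalize_weights` is a hypothetical helper for illustration, not code from the PR:

```python
def normalize_weights(weights, n_models):
    # Treat None as the default "mean", per the review comment above,
    # so the type hint and the validation no longer conflict.
    if weights is None:
        weights = "mean"
    if isinstance(weights, str):
        if weights not in ("sum", "mean"):
            raise ValueError(
                f"'weights' must be 'sum' or 'mean' when given as a string, "
                f"got '{weights}'."
            )
    elif isinstance(weights, list):
        if len(weights) != n_models:
            raise ValueError(
                f"'weights' list length ({len(weights)}) must match the "
                f"number of models ({n_models})."
            )
    else:
        raise ValueError(
            f"'weights' must be a string ('sum' or 'mean') or a list of "
            f"float of length {n_models}."
        )
    return weights
```

Unlike `assert`, these `ValueError` raises survive Python's `-O` flag, so invalid `weights` values are always rejected.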
