feat(pt_expt): add linear energy model #5317

wanghan-iapcm wants to merge 9 commits into deepmodeling:master
Conversation
Wrap dpmodel's LinearEnergyAtomicModel via make_model, following the same pattern as DPZBLModel. Supports linear combination of multiple atomic models with configurable weights.
- Fix get_linear_model not propagating type_map from root to sub-models
- Fix array_api_compat.array_namespace called with a Python list argument
- Fix xp.ones using a numpy dtype (GLOBAL_NP_FLOAT_PRECISION) with torch
- Add cross-backend consistency test for the linear_ener model
- Add get_linear_model to pt_expt get_model.py for constructing LinearEnergyModel from a config dict
- Fix dpmodel LinearEnergyAtomicModel missing the weights parameter (it was passed through **kwargs to BaseAtomicModel, which rejected it)
- Update the consistency test to use pt_expt's get_linear_model directly
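The two array-API fixes listed above can be sketched as follows, using numpy as a stand-in for the namespace object that array_api_compat would return; the variable names are illustrative, not the repository's actual code:

```python
import numpy as np

coord_list = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]]

# Fix 1: array_namespace() expects array objects, not plain Python lists,
# so the list must be converted to an array before namespace dispatch.
coord = np.asarray(coord_list)
xp = np  # stand-in for array_api_compat.array_namespace(coord)

# Fix 2: take the dtype from the active namespace's arrays rather than a
# fixed numpy precision constant, so the same code also works under torch.
nlist_mask = xp.ones(coord.shape[:1], dtype=coord.dtype)
```

Under torch, passing a numpy dtype object to `xp.ones` would fail, which is why the dtype is derived from an existing array instead.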
```python
def forward(
    self,
    coord: torch.Tensor,
    atype: torch.Tensor,
    box: torch.Tensor | None = None,
    fparam: torch.Tensor | None = None,
    aparam: torch.Tensor | None = None,
    do_atomic_virial: bool = False,
) -> dict[str, torch.Tensor]:
```

⚠️ Code scanning / CodeQL: Signature mismatch in overriding method (Warning)
```python
def forward_lower(
    self,
    extended_coord: torch.Tensor,
    extended_atype: torch.Tensor,
    nlist: torch.Tensor,
    mapping: torch.Tensor | None = None,
    fparam: torch.Tensor | None = None,
    aparam: torch.Tensor | None = None,
    do_atomic_virial: bool = False,
) -> dict[str, torch.Tensor]:
```

⚠️ Code scanning / CodeQL: Signature mismatch in overriding method (Warning)
📝 Walkthrough

Adds a configurable weighted linear energy model: exposes LinearEnergyModel (PT-Expt) and extends LinearEnergyAtomicModel with a public `weights` parameter.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant LinearEnergyModel
    participant SubModels as Sub-Models
    participant Weighter as WeightComputation
    participant Serializer
    User->>LinearEnergyModel: forward(...) / forward_lower(...)
    LinearEnergyModel->>SubModels: call_common(...) per sub-model
    SubModels-->>LinearEnergyModel: atom_energy, forces, virials
    LinearEnergyModel->>Weighter: _compute_weight(nmodels, extended_coord, self.weights)
    Weighter-->>LinearEnergyModel: weight tensors
    LinearEnergyModel->>LinearEnergyModel: apply weights -> atom_energy, reduce -> energy
    LinearEnergyModel->>Serializer: serialize()/deserialize() (includes "weights", v3)
    LinearEnergyModel-->>User: atom_energy, energy, (force), (virial)
```
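The "apply weights, reduce" step in the diagram can be sketched as follows; the shapes and function name are assumptions for illustration, not the repository's actual implementation:

```python
import numpy as np

def weighted_energy(atom_energies, weights):
    # atom_energies: list of (nframes, nloc) arrays, one per sub-model
    # weights: list of floats, one per sub-model
    stacked = np.stack(atom_energies)          # (nmodels, nframes, nloc)
    w = np.asarray(weights)[:, None, None]     # broadcast over frames and atoms
    atom_energy = (w * stacked).sum(axis=0)    # weighted per-atom energy
    energy = atom_energy.sum(axis=-1)          # reduce over atoms
    return atom_energy, energy
```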
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 1
🧹 Nitpick comments (4)
deepmd/pt_expt/model/get_model.py (1)
141-152: Consider adding validation for required pairtab parameters.

If the tab_file, rcut, or sel keys are missing from a pairtab sub-model config, this will raise a generic KeyError. Consider providing a more helpful error message, similar to the assertion used for type checking.

💡 Suggested validation
```diff
 else:
     assert (
         "type" in sub_model_params and sub_model_params["type"] == "pairtab"
     ), "Sub-models in LinearEnergyModel must be a DPModel or a PairTable Model"
+    for key in ("tab_file", "rcut", "sel"):
+        if key not in sub_model_params:
+            raise ValueError(
+                f"PairTabAtomicModel requires '{key}' in sub_model_params"
+            )
     list_of_models.append(
         PairTabAtomicModel(
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deepmd/pt_expt/model/get_model.py` around lines 141-152: Add explicit validation for the required pairtab keys before constructing PairTabAtomicModel: check that sub_model_params contains "tab_file", "rcut", and "sel" (in addition to the existing "type" check) and raise a clear assertion or ValueError if any are missing, referencing sub_model_params and the parent model context (e.g., model_params["type_map"]) so the error message identifies the offending sub-model; then proceed to append PairTabAtomicModel using sub_model_params["tab_file"], sub_model_params["rcut"], and sub_model_params["sel"] as currently done.

deepmd/pt_expt/model/dp_linear_model.py (1)
61-66: Inconsistent key access patterns between forward and forward_lower.

forward() uses direct dictionary access (model_ret["energy_derv_r"]), while forward_lower() uses the .get() method. If the energy_derv_r key is missing, forward() will raise KeyError before the is not None check executes.

Consider using .get() consistently for defensive access, matching the pattern in forward_lower() (lines 93-100) and the sibling EnergyModel in ener_model.py.

♻️ Proposed fix for consistency
```diff
-        if self.do_grad_r("energy") and model_ret["energy_derv_r"] is not None:
+        if self.do_grad_r("energy") and model_ret.get("energy_derv_r") is not None:
             model_predict["force"] = model_ret["energy_derv_r"].squeeze(-2)
-        if self.do_grad_c("energy") and model_ret["energy_derv_c_redu"] is not None:
+        if self.do_grad_c("energy") and model_ret.get("energy_derv_c_redu") is not None:
             model_predict["virial"] = model_ret["energy_derv_c_redu"].squeeze(-2)
-        if do_atomic_virial and model_ret["energy_derv_c"] is not None:
+        if do_atomic_virial and model_ret.get("energy_derv_c") is not None:
             model_predict["atom_virial"] = model_ret["energy_derv_c"].squeeze(-2)
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deepmd/pt_expt/model/dp_linear_model.py` around lines 61-66: The forward method currently accesses model_ret with direct indexing (e.g. model_ret["energy_derv_r"], model_ret["energy_derv_c_redu"], model_ret["energy_derv_c"]) which can raise KeyError if keys are absent; change these to defensive lookups using model_ret.get("energy_derv_r"), model_ret.get("energy_derv_c_redu"), and model_ret.get("energy_derv_c") and keep the existing is not None checks and logic that writes into model_predict (also keeping do_grad_r/do_grad_c and do_atomic_virial conditions unchanged) so behavior matches forward_lower and EnergyModel.

deepmd/dpmodel/atomic_model/linear_atomic_model.py (1)
70: The weights parameter is added but not used in _compute_weight.

The weights parameter is correctly added to restore deserialization compatibility, but note that _compute_weight (lines 390-405) doesn't use self.weights: it always returns equal weights (1/nmodels). If this is intentional (expecting subclasses like DPZBLLinearEnergyAtomicModel to override), consider documenting this behavior or storing self.weights for potential future use.

💡 Consider storing weights for subclass use
```diff
 def __init__(
     self,
     models: list[BaseAtomicModel],
     type_map: list[str],
     weights: str | list[float] | None = "mean",
     **kwargs: Any,
 ) -> None:
     super().__init__(type_map, **kwargs)
     super().init_out_stat()
+    self.weights = weights
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deepmd/dpmodel/atomic_model/linear_atomic_model.py` at line 70: The constructor added the weights parameter but never stores or uses it; update the class initializer to assign self.weights = weights and modify _compute_weight to use self.weights: if self.weights == "mean" return equal weights, if it's a sequence validate length==nmodels, normalize to sum 1 and return that array, and raise a clear ValueError for invalid types; this ensures subclasses like DPZBLLinearEnergyAtomicModel can rely on self.weights and that _compute_weight actually respects provided weights.

source/tests/consistent/model/test_linear_ener.py (1)
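A minimal sketch of a weights-aware computation with the semantics discussed in this review thread (mean, sum, explicit list, and None treated as the default); this is an assumed illustration, not the repository's actual _compute_weight:

```python
def compute_weight(weights, nmodels):
    # "mean" -> equal weights summing to 1; "sum" -> all ones;
    # a list -> used as given (length must match); None -> default "mean".
    if weights is None:
        weights = "mean"
    if isinstance(weights, str):
        if weights == "mean":
            return [1.0 / nmodels] * nmodels
        if weights == "sum":
            return [1.0] * nmodels
        raise ValueError(f"'weights' must be 'sum' or 'mean', got {weights!r}")
    if isinstance(weights, list):
        if len(weights) != nmodels:
            raise ValueError(
                f"'weights' list length ({len(weights)}) must match nmodels ({nmodels})"
            )
        return [float(w) for w in weights]
    raise ValueError("'weights' must be a string or a list of floats")
```

Note the explicit ValueError branches rather than assert, matching the validation concern raised later in this review.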
159: Minor: ntypes doesn't match type_map length.

self.ntypes = 2 but type_map in data has 3 types ["O", "H", "B"]. While this works because only types 0 and 1 are used in the test data, it could cause confusion. Consider aligning ntypes with the actual type count or adding a comment explaining the discrepancy.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@source/tests/consistent/model/test_linear_ener.py` at line 159, The test sets self.ntypes = 2 while the constructed data's type_map contains three entries ["O","H","B"], which is misleading; update the test to make ntypes match the number of types in data (set self.ntypes = 3) or if only two types are intentionally used, add a clarifying comment next to self.ntypes explaining that type "B" is unused in this test, and ensure references to type_map and any usages of data in test_linear_ener (e.g., variables reading type_map) remain consistent with the chosen approach.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@deepmd/pt_expt/model/dp_linear_model.py`:
- Around line 186-192: The loop updates a sub_model via DPModelCommon.update_sel
but never writes it back into local_jdata_cpy, so local_jdata_cpy["models"]
still holds the original; after calling DPModelCommon.update_sel(...) inside the
for-loop (where symbols are local_jdata_cpy, sub_model,
DPModelCommon.update_sel, and models), assign the returned sub_model back into
local_jdata_cpy["models"][idx] and keep the existing min_nbor_dist update logic
so the modified model is persisted in local_jdata_cpy.
---
Nitpick comments:
In `@deepmd/dpmodel/atomic_model/linear_atomic_model.py`:
- Line 70: The constructor added the weights parameter but never stores or uses
it; update the class initializer to assign self.weights = weights and modify
_compute_weight to use self.weights: if self.weights == "mean" return equal
weights, if it's a sequence validate length==nmodels, normalize to sum 1 and
return that array, and raise a clear ValueError for invalid types; this ensures
subclasses like DPZBLLinearEnergyAtomicModel can rely on self.weights and that
_compute_weight actually respects provided weights.
In `@deepmd/pt_expt/model/dp_linear_model.py`:
- Around line 61-66: The forward method currently accesses model_ret with direct
indexing (e.g. model_ret["energy_derv_r"], model_ret["energy_derv_c_redu"],
model_ret["energy_derv_c"]) which can raise KeyError if keys are absent; change
these to defensive lookups using model_ret.get("energy_derv_r"),
model_ret.get("energy_derv_c_redu"), and model_ret.get("energy_derv_c") and keep
the existing is not None checks and logic that writes into model_predict (also
keeping do_grad_r/do_grad_c and do_atomic_virial conditions unchanged) so
behavior matches forward_lower and EnergyModel.
In `@deepmd/pt_expt/model/get_model.py`:
- Around line 141-152: Add explicit validation for the required pairtab keys
before constructing PairTabAtomicModel: check that sub_model_params contains
"tab_file", "rcut", and "sel" (in addition to the existing "type" check) and
raise a clear assertion or ValueError if any are missing, referencing
sub_model_params and the parent model context (e.g., model_params["type_map"])
so the error message identifies the offending sub-model; then proceed to append
PairTabAtomicModel using sub_model_params["tab_file"], sub_model_params["rcut"],
and sub_model_params["sel"] as currently done.
In `@source/tests/consistent/model/test_linear_ener.py`:
- Line 159: The test sets self.ntypes = 2 while the constructed data's type_map
contains three entries ["O","H","B"], which is misleading; update the test to
make ntypes match the number of types in data (set self.ntypes = 3) or if only
two types are intentionally used, add a clarifying comment next to self.ntypes
explaining that type "B" is unused in this test, and ensure references to
type_map and any usages of data in test_linear_ener (e.g., variables reading
type_map) remain consistent with the chosen approach.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 2951d777-03d1-4455-9c6b-4e938aeeddc2
📒 Files selected for processing (6)
- deepmd/dpmodel/atomic_model/linear_atomic_model.py
- deepmd/pt/model/model/__init__.py
- deepmd/pt_expt/model/__init__.py
- deepmd/pt_expt/model/dp_linear_model.py
- deepmd/pt_expt/model/get_model.py
- source/tests/consistent/model/test_linear_ener.py
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 64fcc3b490
Codecov Report

❌ Patch coverage is

Additional details and impacted files:

```diff
@@            Coverage Diff             @@
##           master    #5317      +/-   ##
==========================================
- Coverage   82.29%   82.29%    -0.01%
==========================================
  Files         775      776        +1
  Lines       77627    77743      +116
  Branches     3676     3675        -1
==========================================
+ Hits        63887    63982       +95
- Misses      12566    12587       +21
  Partials     1174     1174
```

☔ View full report in Codecov by Sentry.
Store self.weights in __init__, use it in _compute_weight to support "sum", "mean", and custom list weights. Persist weights in serialize. Previously the weights parameter was accepted but silently ignored, always using equal mean weights. Add unit tests at both atomic model level (dpmodel) and model level (pt_expt) verifying energy, force, and virial match expected weighted combinations of individual sub-model outputs.
DPModelCommon.update_sel returns an updated copy of the sub-model config, but the result was assigned to a local variable and never written back to local_jdata_cpy["models"]. Also read from the copy instead of the original. This caused sel updates to be silently dropped for linear models.
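The write-back fix described in this commit message can be sketched as follows; the helper names are hypothetical, with update_sub standing in for DPModelCommon.update_sel:

```python
import copy

def update_sel_linear(jdata, update_sub):
    # update_sub(sub_cfg) -> (updated_cfg, min_nbor_dist_or_None)
    jdata_cpy = copy.deepcopy(jdata)
    min_nbor_dist = None
    for idx, sub_model in enumerate(jdata_cpy["models"]):
        sub_updated, dist = update_sub(sub_model)
        # The bug: this assignment was missing, so updated sel values
        # were computed and then silently dropped.
        jdata_cpy["models"][idx] = sub_updated
        if dist is not None and (min_nbor_dist is None or dist < min_nbor_dist):
            min_nbor_dist = dist
    return jdata_cpy, min_nbor_dist
```

Iterating over the copy (jdata_cpy["models"]) rather than the original also matches the "read from the copy" part of the fix.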
Actionable comments posted: 1
🧹 Nitpick comments (2)
deepmd/dpmodel/atomic_model/linear_atomic_model.py (1)
70: Inconsistent validation: assert vs. ValueError.

The validation uses assert for string and list validation (lines 102, 104) but ValueError for the fallback case. Assertions can be disabled with python -O, making the validation ineffective in optimized mode. For production code, use explicit ValueError consistently.

♻️ Proposed fix
```diff
 if isinstance(weights, str):
-    assert weights in ["sum", "mean"]
+    if weights not in ["sum", "mean"]:
+        raise ValueError(
+            f"'weights' must be 'sum' or 'mean' when provided as a string, got '{weights}'."
+        )
 elif isinstance(weights, list):
-    assert len(weights) == len(models)
+    if len(weights) != len(models):
+        raise ValueError(
+            f"'weights' list length ({len(weights)}) must match number of models ({len(models)})."
+        )
 else:
     raise ValueError(
         f"'weights' must be a string ('sum' or 'mean') or a list of float of length {len(models)}."
     )
```

Also applies to: 101-109
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deepmd/dpmodel/atomic_model/linear_atomic_model.py` at line 70: The constructor/validator for the weights parameter in class LinearAtomicModel currently uses assert for checking string and list cases while raising ValueError for the fallback case; replace those assert statements in the __init__ (or any helper like _validate_weights) with explicit ValueError exceptions so validation cannot be bypassed with python -O, i.e., check that weights is either "mean" or a list/sequence of floats and raise ValueError with a clear message for invalid types/values and for invalid lengths or contents.

deepmd/pt_expt/model/dp_linear_model.py (1)
59-66: Inconsistent dictionary key access pattern.

In forward (lines 59-66), you use direct access model_ret["energy_derv_r"] after checking the condition, but in forward_lower (lines 93-100), you use the .get() pattern: model_ret.get("energy_derv_r"). This inconsistency could mask bugs if keys are missing unexpectedly.

Looking at the PT version in the context snippet (deepmd/pt/model/model/dp_linear_model.py:60-91), it uses direct bracket access, which would raise KeyError on missing keys, a fail-fast approach that helps identify issues early.

Consider using consistent direct access in both methods, since the condition check already verifies the key should exist.
♻️ Proposed fix for forward_lower
```diff
-        if self.do_grad_r("energy") and model_ret.get("energy_derv_r") is not None:
+        if self.do_grad_r("energy") and model_ret["energy_derv_r"] is not None:
             model_predict["extended_force"] = model_ret["energy_derv_r"].squeeze(-2)
-        if self.do_grad_c("energy") and model_ret.get("energy_derv_c_redu") is not None:
+        if self.do_grad_c("energy") and model_ret["energy_derv_c_redu"] is not None:
             model_predict["virial"] = model_ret["energy_derv_c_redu"].squeeze(-2)
-        if do_atomic_virial and model_ret.get("energy_derv_c") is not None:
+        if do_atomic_virial and model_ret["energy_derv_c"] is not None:
             model_predict["extended_virial"] = model_ret["energy_derv_c"].squeeze(
                 -2
             )
```

Also applies to: 91-100
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deepmd/pt_expt/model/dp_linear_model.py` around lines 59 - 66, The forward_lower method uses model_ret.get(...) inconsistently with forward's direct bracket access; change forward_lower to access keys directly (e.g., model_ret["energy_derv_r"], model_ret["energy_derv_c_redu"], model_ret["energy_derv_c"]) after the existing do_grad_r("energy") / do_grad_c("energy") checks so failures surface as KeyError like in forward, and ensure the same squeeze(-2) assignments to model_predict["force"], model_predict["virial"], and model_predict["atom_virial"] mirror the logic in forward.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@source/tests/pt_expt/model/test_linear_ener_model.py`:
- Around line 177-181: The test calls LinearEnergyModel.update_sel and assigns
its two-return values to result and min_dist, but min_dist is unused and flagged
by Ruff RUF059; change the assignment to capture the second return as an
intentionally ignored name (e.g., _min_dist or _) so the linter knows it's
unused. Edit the test line that currently reads "result, min_dist =
LinearEnergyModel.update_sel(...)" to use "result, _min_dist =
LinearEnergyModel.update_sel(...)" (or "result, _ = ...") and run tests/linter
to confirm the warning is resolved.
---
Nitpick comments:
In `@deepmd/dpmodel/atomic_model/linear_atomic_model.py`:
- Line 70: The constructor/validator for the weights parameter in class
LinearAtomicModel currently uses assert for checking string and list cases while
raising ValueError for the fallback case; replace those assert statements in the
__init__ (or any helper like _validate_weights) with explicit ValueError
exceptions so validation cannot be bypassed with python -O, i.e., check that
weights is either "mean" or a list/sequence of floats and raise ValueError with
a clear message for invalid types/values and for invalid lengths or contents.
In `@deepmd/pt_expt/model/dp_linear_model.py`:
- Around line 59-66: The forward_lower method uses model_ret.get(...)
inconsistently with forward's direct bracket access; change forward_lower to
access keys directly (e.g., model_ret["energy_derv_r"],
model_ret["energy_derv_c_redu"], model_ret["energy_derv_c"]) after the existing
do_grad_r("energy") / do_grad_c("energy") checks so failures surface as KeyError
like in forward, and ensure the same squeeze(-2) assignments to
model_predict["force"], model_predict["virial"], and
model_predict["atom_virial"] mirror the logic in forward.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 88eafc5a-948c-427c-a54e-cbae9f99b446
📒 Files selected for processing (4)
- deepmd/dpmodel/atomic_model/linear_atomic_model.py
- deepmd/pt_expt/model/dp_linear_model.py
- source/tests/common/dpmodel/test_linear_atomic_model.py
- source/tests/pt_expt/model/test_linear_ener_model.py
…sion Add weights to PT backend serialize, bump @Version to 3 for both dpmodel and PT backends, and default to "mean" when deserializing version 2 data that lacks the weights field.
DPModelCommon.update_sel returns an updated copy of the sub-model config, but the result was assigned to a local variable and never written back to local_jdata_cpy["models"]. Also read from the copy instead of the original. This caused sel updates to be silently dropped for linear models.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
deepmd/dpmodel/atomic_model/linear_atomic_model.py (1)
70-109: ⚠️ Potential issue | 🟠 Major

Replace assert-based validation with explicit exceptions and fix ruff violations before committing.

The weights validation at lines 101-104 uses assert statements, which can be disabled with optimized Python (python -O). Additionally, the type annotation allows None (line 70) but the code doesn't handle it explicitly: it falls through to the else block and raises ValueError.

The file also has 2 ruff violations (RUF059: unused variables at line 251) that must be fixed per the coding guideline:

**/*.py: Always run ruff check . and ruff format . before committing changes or CI will fail.

Replace the assertions with explicit validation and handle the None case:

Proposed fix
```diff
 if isinstance(weights, str):
-    assert weights in ["sum", "mean"]
+    if weights not in ("sum", "mean"):
+        raise ValueError("`weights` must be 'sum' or 'mean' when provided as a string.")
 elif isinstance(weights, list):
-    assert len(weights) == len(models)
+    if len(weights) != len(models):
+        raise ValueError(
+            f"'weights' must be a string ('sum' or 'mean') or a numeric list of length {len(models)}."
+        )
 else:
     raise ValueError(
         f"'weights' must be a string ('sum' or 'mean') or a list of float of length {len(models)}."
     )
```
ruff check . --fixandruff format .on the repository to resolve the RUF059 violations.🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deepmd/dpmodel/atomic_model/linear_atomic_model.py` around lines 70 - 109, The constructor in LinearAtomicModel uses assert for validating the weights parameter and allows weights: None but doesn't handle it; replace the assert checks in the weights handling block (the code that checks isinstance(weights, str) / list) with explicit ValueError/TypeError raises and add explicit handling for weights is None (e.g., set a sensible default or raise a clear exception), ensuring the error messages reference the parameter and expected values; after updating, run ruff check . --fix and ruff format . to resolve the reported RUF059 unused-variable violations (and fix or remove the unused variable(s) around the previously reported location) before committing.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@deepmd/dpmodel/atomic_model/linear_atomic_model.py`:
- Around line 333-337: The unpacking of nlist.shape assigns nframes, nloc, nnei
but nloc and nnei are unused; update the unpacking (where nlist.shape is used,
e.g., in the function/method that currently does "nframes, nloc, nnei =
nlist.shape") to use _nloc and _nnei (nframes, _nloc, _nnei = nlist.shape) to
satisfy ruff's unused-variable rule, then run ruff check . && ruff format . to
ensure no other style violations remain before merging.
---
Outside diff comments:
In `@deepmd/dpmodel/atomic_model/linear_atomic_model.py`:
- Around line 70-109: The constructor in LinearAtomicModel uses assert for
validating the weights parameter and allows weights: None but doesn't handle it;
replace the assert checks in the weights handling block (the code that checks
isinstance(weights, str) / list) with explicit ValueError/TypeError raises and
add explicit handling for weights is None (e.g., set a sensible default or raise
a clear exception), ensuring the error messages reference the parameter and
expected values; after updating, run ruff check . --fix and ruff format . to
resolve the reported RUF059 unused-variable violations (and fix or remove the
unused variable(s) around the previously reported location) before committing.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 567d1b56-19c2-4431-a47b-ed3351555713
📒 Files selected for processing (2)
- deepmd/dpmodel/atomic_model/linear_atomic_model.py
- deepmd/pt/model/atomic_model/linear_atomic_model.py
Actionable comments posted: 1
🧹 Nitpick comments (1)
deepmd/dpmodel/atomic_model/linear_atomic_model.py (1)
101-109: Avoid assert for input validation.

Using assert for validating user input is problematic because assertions can be disabled with Python's -O flag, which would silently accept invalid weights values. Use explicit if/raise instead.

Proposed fix
```diff
 if isinstance(weights, str):
-    assert weights in ["sum", "mean"]
+    if weights not in ["sum", "mean"]:
+        raise ValueError(
+            f"'weights' must be 'sum' or 'mean' when provided as a string, got '{weights}'."
+        )
 elif isinstance(weights, list):
-    assert len(weights) == len(models)
+    if len(weights) != len(models):
+        raise ValueError(
+            f"'weights' list length ({len(weights)}) must match number of models ({len(models)})."
+        )
 else:
     raise ValueError(
         f"'weights' must be a string ('sum' or 'mean') or a list of float of length {len(models)}."
     )
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@deepmd/dpmodel/atomic_model/linear_atomic_model.py` around lines 101 - 109, Replace the two assert statements used to validate the constructor argument `weights` with explicit checks that raise ValueError: in the class/constructor in linear_atomic_model.py (around the logic that assigns self.weights) change the branch that currently uses `assert weights in ["sum", "mean"]` to an if that raises a ValueError with a clear message when weights is a str but not "sum" or "mean", and change the branch that currently uses `assert len(weights) == len(models)` to an if that raises a ValueError when weights is a list whose length does not match len(models); leave the final else to raise the same ValueError for invalid types and then assign self.weights = weights.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@deepmd/dpmodel/atomic_model/linear_atomic_model.py`:
- Line 70: The type hint for the weights parameter in LinearAtomicModel
(weights: str | list[float] | None = "mean") conflicts with the validation that
rejects None; update the constructor/validation to treat None as the default
"mean" before running existing checks (i.e., if weights is None: weights =
"mean"), then proceed with current validation logic for weights, ensuring all
references use the normalized value; alternatively remove None from the type
annotation if you prefer not to accept None.
---
Nitpick comments:
In `@deepmd/dpmodel/atomic_model/linear_atomic_model.py`:
- Around line 101-109: Replace the two assert statements used to validate the
constructor argument `weights` with explicit checks that raise ValueError: in
the class/constructor in linear_atomic_model.py (around the logic that assigns
self.weights) change the branch that currently uses `assert weights in ["sum",
"mean"]` to an if that raises a ValueError with a clear message when weights is
a str but not "sum" or "mean", and change the branch that currently uses `assert
len(weights) == len(models)` to an if that raises a ValueError when weights is a
list whose length does not match len(models); leave the final else to raise the
same ValueError for invalid types and then assign self.weights = weights.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: e33356a5-0051-4bf6-8c94-cee96fde3bd8
📒 Files selected for processing (1)
deepmd/dpmodel/atomic_model/linear_atomic_model.py
```python
    self,
    models: list[BaseAtomicModel],
    type_map: list[str],
    weights: str | list[float] | None = "mean",
```
Type hint allows None but validation rejects it.
The type annotation str | list[float] | None suggests None is a valid input, but the validation logic (lines 101-108) will raise ValueError if None is passed. Either remove None from the type hint or handle it explicitly in validation (e.g., by treating None as "mean").
Proposed fix
```diff
-    weights: str | list[float] | None = "mean",
+    weights: str | list[float] = "mean",
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
    weights: str | list[float] = "mean",
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@deepmd/dpmodel/atomic_model/linear_atomic_model.py` at line 70, The type hint
for the weights parameter in LinearAtomicModel (weights: str | list[float] |
None = "mean") conflicts with the validation that rejects None; update the
constructor/validation to treat None as the default "mean" before running
existing checks (i.e., if weights is None: weights = "mean"), then proceed with
current validation logic for weights, ensuring all references use the normalized
value; alternatively remove None from the type annotation if you prefer not to
accept None.
This is a breaking change: bump the model data version of LinearEnergyAtomicModel from 2 to 3 due to the bug fixes.

Summary

- Add LinearEnergyModel to the pt_expt backend, enabling linear combination of multiple sub-models
- Add a get_linear_model factory in pt_expt for constructing the model from config dicts
- Fix get_linear_model (pt) not propagating type_map to sub-models
- Fix LinearEnergyAtomicModel (dpmodel) missing the weights parameter, causing deserialization failure
- Fix _compute_weight calling array_api_compat.array_namespace() with a Python list and using a numpy dtype with torch

Test plan
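For illustration, a training-config fragment for such a linear model might look like the sketch below; key names follow the PR description, but the exact schema is an assumption:

```python
# Hypothetical config sketch for a linear energy model combining a standard
# DP sub-model with a pair-table sub-model; "weights" may be "mean", "sum",
# or an explicit list of per-model coefficients.
model_params = {
    "type": "linear_ener",
    "type_map": ["O", "H"],
    "weights": [0.3, 0.7],
    "models": [
        {"type": "standard", "descriptor": ..., "fitting_net": ...},
        {"type": "pairtab", "tab_file": "tab.txt", "rcut": 6.0, "sel": 128},
    ],
}
```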
- Cross-backend consistency test (source/tests/consistent/model/test_linear_ener.py): pt vs pt_expt, with parameterized exclude types