Add interface for SphinxDFT code #413
Conversation
Walkthrough

This pull request introduces a new SphinxDFT integration. It adds configuration files for SphinxDFT conda environments, updates dependency versions, and includes an optional dependency in the project configuration. The changes incorporate a new GitHub Actions job for unit testing and expand the calculators module with functions for running static calculations using SphinxDFT, including input preparation and output parsing. Additionally, several tests have been added to validate the new functionality.

Changes
Sequence Diagram(s)

SphinxDFT Calculator Flow

```mermaid
sequenceDiagram
    participant U as User/Workflow
    participant ES as evaluate_with_sphinx
    participant CS as calc_static_with_sphinxdft
    participant WI as _write_input
    participant EX as Executable Function
    participant OP as OutputParser
    U->>ES: Call evaluate_with_sphinx(structure, tasks, ...)
    ES->>CS: Invoke calc_static_with_sphinxdft(...)
    CS->>WI: Write input.sx file from structure & parameters
    WI-->>CS: Confirm input written
    CS->>EX: Execute SphinxDFT calculation
    EX-->>CS: Return raw output data
    CS->>OP: Parse output data (energy, forces, volume)
    OP-->>CS: Return parsed results
    CS-->>ES: Return results dictionary
```
GitHub Actions Pipeline Flow

```mermaid
sequenceDiagram
    participant B as Black Job
    participant U as unittest_sphinxdft Job
    participant A as Autobot Job
    B->>U: On successful completion, trigger unittests
    U->>U: Checkout repository, merge config, setup Mambaforge, run unit tests
    U-->>A: Signal completion (dependency fulfilled)
```
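In plain Python, the calculator flow in the first diagram looks roughly like this. The function names mirror the diagram, but the bodies are simplified stand-ins, not the actual atomistics implementation (which invokes the SPHInX binary and parses its real output files):

```python
import os
import tempfile

# Hypothetical stand-ins mirroring the sequence diagram; the real functions
# live in atomistics.calculators.sphinxdft.
def _write_input(structure, working_directory):
    # Prepare a minimal input.sx file for the structure.
    with open(os.path.join(working_directory, "input.sx"), "w") as f:
        f.write(f"structure {structure}\n")

def calc_static_with_sphinxdft(structure, working_directory, executable_function):
    _write_input(structure, working_directory)            # write input.sx
    executable_function(working_directory)                # run the (stubbed) code
    # In the real code an OutputParser reads energy.dat / relaxHist.sx here.
    return {"energy": -1.0, "forces": [[0.0, 0.0, 0.0]]}  # dummy parsed results

def evaluate_with_sphinx(structure, tasks, working_directory, executable_function):
    if "calc_energy" in tasks or "calc_forces" in tasks:
        return calc_static_with_sphinxdft(structure, working_directory, executable_function)
    raise ValueError(f"Unsupported tasks: {tasks}")

with tempfile.TemporaryDirectory() as tmp:
    result = evaluate_with_sphinx("Al", ["calc_energy"], tmp, lambda d: None)
print(sorted(result))  # → ['energy', 'forces']
```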
Actionable comments posted: 3
🧹 Nitpick comments (3)
.ci_support/environment-sphinx.yml (1)
6-6: Add a newline at the end of the file.

This file would benefit from a trailing newline to avoid potential linting or formatting issues.

```diff
 channels:
 - conda-forge
 dependencies:
 - sphinx_parser =0.0.1
 - sphinxdft =3.1
 - sphinxdft-data =0.0.1
+
```

🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 6-6: no new line character at the end of file
(new-line-at-end-of-file)
.github/workflows/pipeline.yml (1)
265-289: Rename the step label for clarity.

Using "Merge Notebook environment" might be confusing, as this step is configuring Sphinx for testing. Consider naming it "Merge Sphinx environment" or something similar to avoid confusion.

```diff
-      - name: Merge Notebook environment
+      - name: Merge Sphinx environment
        run: |
          cp .ci_support/environment.yml environment.yml
          tail --lines=+4 .ci_support/environment-sphinx.yml >> environment.yml
          echo -e "channels:\n  - conda-forge\n" > .condarc
```

tests/test_evcurve_sphinx.py (1)

1-66: Consider cleaning up generated files.

This test is well-structured. However, it might leave behind files (e.g., relaxHist.sx, energy.dat). You could implement a tearDown() method to remove any generated artifacts after the test completes, keeping the workspace clean.
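A minimal sketch of the suggested cleanup. The file names come from the comment above; the assumption that the artifacts land in the current working directory is hypothetical:

```python
import os
import unittest

class TestEvCurveSphinx(unittest.TestCase):
    def setUp(self):
        # Simulate the artifacts a SphinxDFT run would leave behind.
        for name in ("relaxHist.sx", "energy.dat"):
            with open(name, "w") as f:
                f.write("dummy\n")

    def tearDown(self):
        # Remove generated artifacts so the workspace stays clean.
        for name in ("relaxHist.sx", "energy.dat"):
            if os.path.exists(name):
                os.remove(name)

    def test_artifacts_exist_during_test(self):
        self.assertTrue(os.path.exists("energy.dat"))

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestEvCurveSphinx)
)
print(result.wasSuccessful())  # → True
```

After the run, tearDown() has removed both files, so the workspace is left clean.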
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- .ci_support/environment-sphinx.yml (1 hunks)
- .github/workflows/pipeline.yml (1 hunks)
- atomistics/calculators/sphinxdft.py (1 hunks)
- tests/test_evcurve_sphinx.py (1 hunks)
🧰 Additional context used
🪛 Ruff (0.8.2)
atomistics/calculators/sphinxdft.py
20-20: Do not use mutable data structures for argument defaults
Replace with None; initialize within function
(B006)
21-21: Do not use mutable data structures for argument defaults
Replace with None; initialize within function
(B006)
90-90: Do not use mutable data structures for argument defaults
Replace with None; initialize within function
(B006)
91-91: Do not use mutable data structures for argument defaults
Replace with None; initialize within function
(B006)
121-121: Do not use mutable data structures for argument defaults
Replace with None; initialize within function
(B006)
122-122: Do not use mutable data structures for argument defaults
Replace with None; initialize within function
(B006)
🪛 YAMLlint (1.35.1)
.ci_support/environment-sphinx.yml
[error] 6-6: no new line character at the end of file
(new-line-at-end-of-file)
⏰ Context from checks skipped due to timeout of 90000ms (12)
- GitHub Check: unittest_matrix (ubuntu-latest, 3.10)
- GitHub Check: unittest_matrix (ubuntu-latest, 3.11)
- GitHub Check: unittest_matrix (windows-latest, 3.12)
- GitHub Check: unittest_matrix (ubuntu-latest, 3.12)
- GitHub Check: unittest_qe
- GitHub Check: unittest_old
- GitHub Check: unittest_matrix (macos-latest, 3.12)
- GitHub Check: unittest_matgl
- GitHub Check: unittest_mace
- GitHub Check: unittest_gpaw
- GitHub Check: notebooks
- GitHub Check: coverage
🔇 Additional comments (2)
.ci_support/environment-sphinx.yml (1)
1-5: All dependencies look good.

atomistics/calculators/sphinxdft.py (1)

1-19: Overall implementation approved.

The rest of the code looks clean and well-structured. Nice job leveraging a parser class for output handling, ensuring modularity and maintainability.
Also applies to: 22-89, 92-120, 123-137
```python
    kpoint_coords: list[float, float, float] = [0.5, 0.5, 0.5],
    kpoint_folding: list[int, int, int] = [4, 4, 4],
```
🛠️ Refactor suggestion
Avoid mutable default arguments in evaluate_with_lammpsfile.
As with the other functions, avoid using lists as default parameter values.
```diff
 def evaluate_with_lammpsfile(
     structure: Atoms,
     tasks: list,
     working_directory: str,
     executable_function: callable,
     maxSteps: int = 100,
     eCut: float = 25,
-    kpoint_coords: list[float, float, float] = [0.5, 0.5, 0.5],
-    kpoint_folding: list[int, int, int] = [4, 4, 4],
+    kpoint_coords: list[float, float, float] | None = None,
+    kpoint_folding: list[int, int, int] | None = None,
 ) -> dict:
+    if kpoint_coords is None:
+        kpoint_coords = [0.5, 0.5, 0.5]
+    if kpoint_folding is None:
+        kpoint_folding = [4, 4, 4]
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def evaluate_with_lammpsfile(
    structure: Atoms,
    tasks: list,
    working_directory: str,
    executable_function: callable,
    maxSteps: int = 100,
    eCut: float = 25,
    kpoint_coords: list[float, float, float] | None = None,
    kpoint_folding: list[int, int, int] | None = None,
) -> dict:
    if kpoint_coords is None:
        kpoint_coords = [0.5, 0.5, 0.5]
    if kpoint_folding is None:
        kpoint_folding = [4, 4, 4]
    # ... rest of the function body ...
```
🧰 Tools
🪛 Ruff (0.8.2)
121-121: Do not use mutable data structures for argument defaults
Replace with None; initialize within function
(B006)
122-122: Do not use mutable data structures for argument defaults
Replace with None; initialize within function
(B006)
Codecov Report

Attention: Patch coverage is
Additional details and impacted files

```
@@            Coverage Diff             @@
##             main     #413      +/-   ##
==========================================
+ Coverage   81.52%   81.89%   +0.36%
==========================================
  Files          40       41       +1
  Lines        2382     2447      +65
==========================================
+ Hits         1942     2004      +62
- Misses        440      443       +3
```

☔ View full report in Codecov by Sentry.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
# Conflicts: # .github/workflows/pipeline.yml
Actionable comments posted: 0
🔭 Outside diff range comments (1)
atomistics/calculators/sphinxdft.py (1)
135-163: 🛠️ Refactor suggestion

Add negative test coverage for unimplemented tasks.

Line 162 raises ValueError for unsupported tasks, but codecov reports no coverage. Consider adding a test that calls evaluate_with_sphinx with an unknown task to cover this error path.

🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 162-162: atomistics/calculators/sphinxdft.py#L162
Added line #L162 was not covered by tests
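One way to cover that error path, sketched with a local stand-in so the snippet runs without SPHInX installed; in the real test you would import evaluate_with_sphinx from atomistics.calculators instead:

```python
import unittest

# Stand-in with the same error contract the comment describes: unknown
# tasks raise ValueError (hypothetical, mirrors line 162 of sphinxdft.py).
def evaluate_with_sphinx_stub(structure, tasks, **kwargs):
    if "calc_energy" not in tasks and "calc_forces" not in tasks:
        raise ValueError(f"The SphinxDFT calculator does not implement: {tasks}")
    return {}

class TestUnsupportedTask(unittest.TestCase):
    def test_unknown_task_raises(self):
        with self.assertRaises(ValueError):
            evaluate_with_sphinx_stub(structure=None, tasks=["calc_phonons"])

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestUnsupportedTask)
)
print(result.wasSuccessful())  # → True
```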
🧹 Nitpick comments (7)
.ci_support/environment-sphinx.yml (1)
1-5: New environment file for SphinxDFT integration

The creation of a dedicated Sphinx environment file with dependencies on sphinxdft and sphinxdft-data supports the new SphinxDFT interface. Add a newline at the end of the file to satisfy YAML best practices and fix the static analysis warning.

```diff
 - sphinxdft =3.1
 - sphinxdft-data =0.0.1
+
```

🧰 Tools
🪛 YAMLlint (1.35.1)
[error] 5-5: no new line character at the end of file
(new-line-at-end-of-file)
.github/workflows/pipeline.yml (1)
265-289: Consider enabling coverage measurements in the sphinxdft test job.

The newly introduced unittest_sphinxdft job runs Sphinx-based tests but doesn't currently upload coverage reports. If coverage metrics are desired, consider adding a step to upload coverage to Codecov (similar to other jobs) to maintain uniform visibility of test coverage across all calculators.

tests/test_evcurve_sphinxdft.py (2)
32-38: Potential improvement for debugging.

Printing command output is helpful for debugging in CI logs. If runtime and stability are a concern, consider logging selectively, or capturing logs only on failure via additional test logic.
45-67: Comprehensive test coverage for energy-volume curve.
- The test effectively constructs the workflow, generates structures, and validates the final fit dictionary.
- If additional edge cases are pertinent (e.g., unusual volumes or different elements), consider parametrizing tests for broader coverage.
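Parametrizing with unittest needs no extra dependencies: subTest() reports one labelled result per parameter. A minimal sketch (the base volume and strain values here are illustrative, not taken from the test):

```python
import unittest

class TestStrains(unittest.TestCase):
    def test_volumes_scale_with_strain(self):
        base_volume = 66.43  # hypothetical equilibrium volume in Angstrom^3
        for strain in (0.95, 1.0, 1.05):
            # Each subTest is reported separately on failure.
            with self.subTest(strain=strain):
                volume = base_volume * strain
                self.assertGreater(volume, 0.0)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(TestStrains)
)
print(result.wasSuccessful())  # → True
```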
tests/test_sphinx_parser.py (2)
8-14: Unused imports from atomistics.calculators.sphinxdft.

Ruff flags HARTREE_OVER_BOHR_TO_EV_OVER_ANGSTROM and BOHR_TO_ANGSTROM as unused. Consider removing them to keep imports minimal and clean.

```diff
- from atomistics.calculators.sphinxdft import OutputParser, _write_input, calc_static_with_sphinxdft, evaluate_with_sphinx, HARTREE_TO_EV, HARTREE_OVER_BOHR_TO_EV_OVER_ANGSTROM, BOHR_TO_ANGSTROM
+ from atomistics.calculators.sphinxdft import OutputParser, _write_input, calc_static_with_sphinxdft, evaluate_with_sphinx, HARTREE_TO_EV
```

🧰 Tools
🪛 Ruff (0.8.2)
9-9:
atomistics.calculators.sphinxdft.HARTREE_OVER_BOHR_TO_EV_OVER_ANGSTROMimported but unused; consider usingimportlib.util.find_specto test for availability(F401)
9-9:
atomistics.calculators.sphinxdft.BOHR_TO_ANGSTROMimported but unused; consider usingimportlib.util.find_specto test for availability(F401)
37-66: Minor code style adjustments in input checks.

- Removing the redundant "r" mode in open(file, "r") achieves the same behavior with fewer parameters.
- Using clearer variable names instead of l in the list comprehension can improve readability.

```diff
-with open(file, "r") as f:
+with open(file) as f:

-    " ".join([l for l in lines if ...
+    " ".join([line for line in lines if ...
```

🧰 Tools
🪛 Ruff (0.8.2)
50-50: Unnecessary open mode parameters
Remove open mode parameters
(UP015)
55-55: Ambiguous variable name: l
(E741)

59-59: Ambiguous variable name: l
(E741)

63-63: Ambiguous variable name: l
(E741)
atomistics/calculators/sphinxdft.py (1)
93-100: Dead code or insufficient coverage for _get_output.

This helper function is not called elsewhere in the file. If it's intended for future usage, consider adding tests to improve coverage. Otherwise, it could be removed to keep the codebase clean.
🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 94-96: atomistics/calculators/sphinxdft.py#L94-L96
Added lines #L94 - L96 were not covered by tests
[warning] 99-99: atomistics/calculators/sphinxdft.py#L99
Added line #L99 was not covered by tests
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
tests/static/sphinxdft/energy.dat is excluded by !**/*.dat
📒 Files selected for processing (10)
- .ci_support/environment-old.yml (2 hunks)
- .ci_support/environment-sphinx.yml (1 hunks)
- .ci_support/environment.yml (1 hunks)
- .github/workflows/pipeline.yml (2 hunks)
- atomistics/calculators/__init__.py (1 hunks)
- atomistics/calculators/sphinxdft.py (1 hunks)
- pyproject.toml (1 hunks)
- tests/static/sphinxdft/relaxHist.sx (1 hunks)
- tests/test_evcurve_sphinxdft.py (1 hunks)
- tests/test_sphinx_parser.py (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- tests/static/sphinxdft/relaxHist.sx
🧰 Additional context used
🪛 Ruff (0.8.2)
tests/test_evcurve_sphinxdft.py
12-15: Use ternary operator skip_sphinx_test = False if shutil.which(sphinx_command) is not None else True instead of if-else-block
Replace if-else-block with skip_sphinx_test = False if shutil.which(sphinx_command) is not None else True
(SIM108)
tests/test_sphinx_parser.py
9-9: atomistics.calculators.sphinxdft.HARTREE_OVER_BOHR_TO_EV_OVER_ANGSTROM imported but unused; consider using importlib.util.find_spec to test for availability
(F401)
9-9: atomistics.calculators.sphinxdft.BOHR_TO_ANGSTROM imported but unused; consider using importlib.util.find_spec to test for availability
(F401)
50-50: Unnecessary open mode parameters
Remove open mode parameters
(UP015)
55-55: Ambiguous variable name: l
(E741)
59-59: Ambiguous variable name: l
(E741)
63-63: Ambiguous variable name: l
(E741)
🪛 GitHub Check: codecov/patch
atomistics/calculators/__init__.py
[warning] 89-90: atomistics/calculators/init.py#L89-L90
Added lines #L89 - L90 were not covered by tests
atomistics/calculators/sphinxdft.py
[warning] 94-96: atomistics/calculators/sphinxdft.py#L94-L96
Added lines #L94 - L96 were not covered by tests
[warning] 99-99: atomistics/calculators/sphinxdft.py#L99
Added line #L99 was not covered by tests
[warning] 162-162: atomistics/calculators/sphinxdft.py#L162
Added line #L162 was not covered by tests
🪛 YAMLlint (1.35.1)
.ci_support/environment-sphinx.yml
[error] 5-5: no new line character at the end of file
(new-line-at-end-of-file)
⏰ Context from checks skipped due to timeout of 90000ms (14)
- GitHub Check: unittest_old
- GitHub Check: unittest_sphinxdft
- GitHub Check: unittest_grace
- GitHub Check: unittest_siesta
- GitHub Check: unittest_qe
- GitHub Check: unittest_matgl
- GitHub Check: unittest_matrix (ubuntu-latest, 3.10)
- GitHub Check: unittest_mace
- GitHub Check: unittest_matrix (ubuntu-latest, 3.11)
- GitHub Check: unittest_gpaw
- GitHub Check: unittest_matrix (windows-latest, 3.12)
- GitHub Check: unittest_matrix (ubuntu-latest, 3.12)
- GitHub Check: unittest_matrix (macos-latest, 3.12)
- GitHub Check: notebooks
🔇 Additional comments (20)
.ci_support/environment.yml (1)
16-16: New dependency added for SphinxDFT integration

The addition of sphinx_parser =0.0.1 is consistent with the PR objectives to add an interface for SphinxDFT code. This dependency will enable the new functionality for parsing Sphinx outputs.

pyproject.toml (1)
67-69: New optional dependency added for SphinxDFT support

The addition of the sphinxdft optional dependency with sphinx-parser==0.0.1 aligns with the PR objectives to add a SphinxDFT interface. Making it an optional dependency is the correct approach as it allows users to install only what they need.

.ci_support/environment-old.yml (1)
4-4: Updated dependencies and added SphinxDFT support to legacy environment

The changes include updating ASE to 3.23.0, numpy to 1.26.4, and adding the new sphinx_parser dependency. These updates ensure consistent support for the SphinxDFT interface across both legacy and current environments.
Also applies to: 12-12, 23-23
atomistics/calculators/__init__.py (1)
81-90: Details
❓ Verification inconclusive
Added SphinxDFT calculator to the public API
The addition of evaluate_with_sphinx to the module's public API through the __all__ list is correctly implemented with a try-except block to handle potential import errors.

The static analysis indicates that lines 89-90 (the except block) are not covered by tests. Consider adding a test that mocks an ImportError for the sphinxdft module to improve coverage.
🏁 Script executed:

```bash
#!/bin/bash
# Check if there are tests that verify the import error handling
rg -A 5 "test.*import.*error.*sphinx" tests/
```

Length of output: 46
Please add a test to verify the ImportError handling in the SphinxDFT calculator import.
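One standard way to force that ImportError is the sys.modules trick: setting an entry to None makes any later import of that name fail. The module name below is a placeholder, not the real atomistics module, so the sketch runs anywhere:

```python
import importlib
import sys

# Setting sys.modules[name] = None makes importlib raise ImportError for
# that name. "sphinxdft_placeholder" is a hypothetical stand-in for
# atomistics.calculators.sphinxdft.
failing = "sphinxdft_placeholder"
sys.modules[failing] = None
try:
    importlib.import_module(failing)
    reached_fallback = False
except ImportError:
    reached_fallback = True  # mirrors the except branch in __init__.py
finally:
    del sys.modules[failing]

print(reached_fallback)  # → True
```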
The try-except block in atomistics/calculators/__init__.py is correctly implemented. However, our static analysis did not locate any tests that simulate an ImportError for the sphinxdft module to ensure that lines 89–90 are executed when needed. Please add a test (e.g., by monkey patching or using unittest.mock) to force an ImportError for atomistics.calculators.sphinxdft and verify that the except block is triggered appropriately.

- File: atomistics/calculators/__init__.py (Lines 81–90)

🧰 Tools
🪛 GitHub Check: codecov/patch
[warning] 89-90: atomistics/calculators/init.py#L89-L90
Added lines #L89 - L90 were not covered by tests

.github/workflows/pipeline.yml (1)
449-449: Job dependency updated successfully.

Adding unittest_sphinxdft to the dependencies ensures the workflow only proceeds once Sphinx tests pass. This is consistent with the existing pattern for system-specific jobs.

tests/test_evcurve_sphinxdft.py (4)
1-4: No immediate issues with the imports.

These imports are standard for filesystem access (os, shutil, subprocess) and do not exhibit any flags.
8-16: Shell command usage in subprocess.check_output.

Using shell=True can pose a security risk if untrusted inputs are passed. While this is a test context, consider clarifying or ensuring that sphinx_command and the working directory input is controlled, reducing any potential vulnerability.

🧰 Tools
🪛 Ruff (0.8.2)
12-15: Use ternary operator skip_sphinx_test = False if shutil.which(sphinx_command) is not None else True instead of if-else-block

Replace if-else-block with skip_sphinx_test = False if shutil.which(sphinx_command) is not None else True

(SIM108)
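Ruff's literal ternary can be simplified further: shutil.which returns None when the executable is not on PATH, so a direct comparison suffices. The command name below is a deliberately nonexistent placeholder:

```python
import shutil

# Hypothetical command name guaranteed not to exist, for demonstration only;
# the real test probes for the SPHInX executable.
sphinx_command = "sx_nonexistent_binary_for_demo"

# shutil.which(...) is None exactly when the binary is missing.
skip_sphinx_test = shutil.which(sphinx_command) is None
print(skip_sphinx_test)  # → True
```

This single expression replaces both the if-else block and the ternary, and reads closer to the intent.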
18-30: Solid approach for validating fit dictionary.

This specialized check ensures the resulting fit parameters fall within expected thresholds. The printing of the dictionary on failure aids debugging. The approach is concise and effective.

40-43: Skip tests if Sphinx is unavailable.

This skip condition at the class level is consistent with typical patterns. No issues here.
tests/test_sphinx_parser.py (6)
1-4: Review necessary imports.

os, unittest, shutil, and numpy are all used here. It looks good and relevant for file manipulation and numeric checks.
16-18: Good approach to mocking the executable logic.

Defining executable_function as a stub that returns a simple string is helpful for testing. If needed, consider verifying that the function is called with the expected working directory or capturing additional arguments for more precise testing.
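unittest.mock.Mock gives that call verification for free: it records arguments and exposes assert helpers. A minimal sketch, with run_calculation as a hypothetical stand-in for the calculator's call site:

```python
from unittest.mock import Mock

# Mock both returns a canned value and records how it was called.
executable_function = Mock(return_value="done")

def run_calculation(working_directory, executable_function):
    # Hypothetical stand-in for the calculator invoking SPHInX.
    return executable_function(working_directory)

output = run_calculation("/tmp/sphinx_run", executable_function)

# Verify the stub was invoked exactly once with the expected directory.
executable_function.assert_called_once_with("/tmp/sphinx_run")
print(output)  # → done
```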
25-28: Test directory path appears correct.

Pointing to a static directory for test data is a reliable approach. Ensure any required files exist in that location and that the path is valid on all targeted CI platforms.
29-36: Forces array check is thorough.

This test precisely checks the energy, volume, and zero forces. The NotImplementedError for the stress property is also well validated. Great coverage here.
67-84: Sensible check on numeric results from calc_static_with_sphinxdft.

Verifying energy and forces matches the known data. The test is thorough for typical usage. If additional parameter combinations are important, consider adding parameterized tests.
85-100: Clear demonstration of evaluate_with_sphinx.

The test flows well, confirming that both energy and forces are produced as expected. Consider verifying peak memory usage or CPU usage only if performance testing is needed.
atomistics/calculators/sphinxdft.py (5)
1-11: Initial imports and constants look correct.

Constants for unit conversions (Bohr, Hartree) are accurately pulled from scipy.constants. The references to sphinx_parser modules appear consistent with the rest of the Sphinx integration.
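The conversion factors can be obtained directly from scipy.constants. The constant names below match those used elsewhere in this review; the derivation shown is one plausible way to build them, not necessarily the module's exact code:

```python
from scipy import constants

# CODATA values via scipy: Hartree -> eV, and Bohr radius in Angstrom.
HARTREE_TO_EV = constants.physical_constants["Hartree energy in eV"][0]
BOHR_TO_ANGSTROM = constants.physical_constants["Bohr radius"][0] * 1e10
HARTREE_OVER_BOHR_TO_EV_OVER_ANGSTROM = HARTREE_TO_EV / BOHR_TO_ANGSTROM

print(round(HARTREE_TO_EV, 3))     # → 27.211
print(round(BOHR_TO_ANGSTROM, 4))  # → 0.5292
```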
23-30: Avoiding mutable default arguments successfully.

Using None for kpoint_coords and kpoint_folding is a solid practice. This avoids shared references between function calls. Good compliance with best practices.
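Why B006 matters in the first place: a mutable default is created once at function definition and shared across calls, so state leaks between invocations. A minimal self-contained demonstration:

```python
# Buggy: the default list is created once and mutated on every call.
def buggy(values=[]):
    values.append(1)
    return values

# Safe: a fresh list is created inside the function when needed.
def safe(values=None):
    if values is None:
        values = []
    values.append(1)
    return values

first = list(buggy())   # copy, since buggy() returns the shared list
second = list(buggy())
print(first, second)            # → [1] [1, 1]
print(safe(), safe(), sep=" ")  # → [1] [1]
```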
31-63: Sphinx input creation is well structured.

- _write_input elegantly assembles the Sphinx input blocks.
- The with open(..., "w") usage is appropriate for file writing.
65-91: OutputParser structures results logically.
get_energy, get_forces, get_volume, and get_stress form a clear separation of concerns. Stress raising NotImplementedError is consistent with partial implementation. No major concerns here.
102-133: Comprehensive static calculation flow in calc_static_with_sphinxdft.

- The function _write_input is used properly, followed by the call to executable_function.
- Results are parsed via OutputParser and returned.
- Code structure is straightforward and maintainable.
Actionable comments posted: 2
🔭 Outside diff range comments (1)
atomistics/calculators/sphinxdft.py (1)
93-154: 🛠️ Refactor suggestion

Reduce code duplication by defining constants for default values.

The default values for kpoint_coords and kpoint_folding are duplicated across three functions. Consider defining module-level constants:

```python
# Near the top of the file with other constants
DEFAULT_KPOINT_COORDS = [0.5, 0.5, 0.5]
DEFAULT_KPOINT_FOLDING = [3, 3, 3]

# Then in each function:
def _write_input(
    # ...
    kpoint_coords: Optional[list[float, float, float]] = None,
    kpoint_folding: Optional[list[int, int, int]] = None,
):
    if kpoint_coords is None:
        kpoint_coords = DEFAULT_KPOINT_COORDS
    if kpoint_folding is None:
        kpoint_folding = DEFAULT_KPOINT_FOLDING
    # ...
```

Also make sure the default value for eCut is consistent (either 25 or 25.0) across all functions.
🧹 Nitpick comments (3)
atomistics/calculators/sphinxdft.py (3)
28-29: Improve type annotations for list parameters.

The current type annotations list[float, float, float] and list[int, int, int] don't correctly express "a list of 3 floats" or "a list of 3 integers"; list only accepts a single element-type parameter. Consider using tuple[float, float, float] and tuple[int, int, int] for fixed-length sequences, or list[float] and list[int] for lists of floats and integers.

Also applies to: 99-100, 134-135
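A small runnable sketch of the corrected annotations, combined with the None-default pattern recommended above; set_kpoint is a hypothetical illustration, not a function from the module:

```python
from typing import Optional

def set_kpoint(
    # tuple[float, float, float] correctly denotes a fixed-length 3-vector.
    kpoint_coords: Optional[tuple[float, float, float]] = None,
    kpoint_folding: Optional[tuple[int, int, int]] = None,
) -> dict:
    if kpoint_coords is None:
        kpoint_coords = (0.5, 0.5, 0.5)
    if kpoint_folding is None:
        kpoint_folding = (3, 3, 3)
    return {"coords": list(kpoint_coords), "folding": list(kpoint_folding)}

print(set_kpoint())  # → {'coords': [0.5, 0.5, 0.5], 'folding': [3, 3, 3]}
```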
23-63: Add docstrings to improve function documentation.

The _write_input function lacks a docstring explaining its purpose, parameters, and constraints. Adding a comprehensive docstring would make the code more maintainable and easier to understand. Example:

```python
def _write_input(
    structure: Atoms,
    working_directory: str,
    maxSteps: int = 100,
    eCut: float = 25.0,
    kpoint_coords: Optional[list[float, float, float]] = None,
    kpoint_folding: Optional[list[int, int, int]] = None,
):
    """
    Create and write SphinxDFT input file (input.sx) based on given parameters.

    Parameters:
        structure: Atomic structure to calculate
        working_directory: Directory where input file will be written
        maxSteps: Maximum number of SCF steps
        eCut: Energy cutoff in eV
        kpoint_coords: Coordinates of the k-point (default: [0.5, 0.5, 0.5])
        kpoint_folding: K-point folding/mesh (default: [3, 3, 3])
    """
```
152-153: Improve error message in ValueError.

The error message would be clearer if formatted like this:

```python
else:
    raise ValueError(
        f"The SphinxDFT calculator does not implement the following tasks: {tasks}"
    )
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- atomistics/calculators/sphinxdft.py (1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms (13)
- GitHub Check: unittest_old
- GitHub Check: unittest_matgl
- GitHub Check: unittest_mace
- GitHub Check: unittest_grace
- GitHub Check: unittest_matrix (ubuntu-latest, 3.10)
- GitHub Check: unittest_matrix (ubuntu-latest, 3.11)
- GitHub Check: unittest_matrix (windows-latest, 3.12)
- GitHub Check: unittest_matrix (ubuntu-latest, 3.12)
- GitHub Check: unittest_gpaw
- GitHub Check: unittest_matrix (macos-latest, 3.12)
- GitHub Check: coverage
- GitHub Check: notebooks
- GitHub Check: minimal
```python
class OutputParser:
    def __init__(self, working_directory: str, structure: Atoms):
        self._working_directory = working_directory
        self._structure = structure

    def get_energy(self):
        return (
            collect_energy_dat(os.path.join(self._working_directory, "energy.dat"))[
                "scf_energy_int"
            ][-1][-1]
            * HARTREE_TO_EV
        )

    def get_forces(self):
        return (
            collect_eval_forces(os.path.join(self._working_directory, "relaxHist.sx"))[
                "forces"
            ][-1][id_spx_to_ase(self._structure)]
            * HARTREE_OVER_BOHR_TO_EV_OVER_ANGSTROM
        )

    def get_volume(self):
        return self._structure.get_volume()

    def get_stress(self):
        raise NotImplementedError()
```
🛠️ Refactor suggestion
Add docstrings and improve error handling in the OutputParser class.
The OutputParser class and its methods lack docstrings explaining their purpose. Additionally, there's no error handling for cases where output files don't exist or have unexpected formats.
Consider adding:
- Class and method docstrings
- Try-except blocks around file operations:
```python
def get_energy(self):
    """Extract total energy from energy.dat file."""
    energy_file = os.path.join(self._working_directory, "energy.dat")
    try:
        return (
            collect_energy_dat(energy_file)["scf_energy_int"][-1][-1]
            * HARTREE_TO_EV
        )
    except (FileNotFoundError, KeyError, IndexError) as e:
        raise RuntimeError(f"Failed to parse energy from {energy_file}: {e}") from e
```

Also, improve the NotImplementedError message in get_stress():
```python
def get_stress(self):
    """Get stress tensor (not yet implemented)."""
    raise NotImplementedError(
        "Stress calculation is not yet implemented in SphinxDFT interface"
    )
```

```python
    executable_function(working_directory)
    output_obj = OutputParser(working_directory=working_directory, structure=structure)
```
🛠️ Refactor suggestion
Add error handling around executable_function call.
The function calls executable_function without any error handling. If the external SphinxDFT process fails, this could lead to unclear error messages.
Consider adding a try-except block:
```python
try:
    executable_function(working_directory)
except Exception as e:
    raise RuntimeError(f"SphinxDFT calculation failed: {e}") from e
```
Summary by CodeRabbit
New Features
Chores
Tests