From abbfe4458e527daacf44f9c84020eadde9a0cc52 Mon Sep 17 00:00:00 2001 From: Kyle Mandli Date: Mon, 16 Mar 2026 11:48:02 -0400 Subject: [PATCH 01/12] Add new testing stub --- doc/testing.rst | 28 +++++++++++++++++++++++++++- 1 file changed, 27 insertions(+), 1 deletion(-) diff --git a/doc/testing.rst b/doc/testing.rst index 3f847e1..55fdce3 100644 --- a/doc/testing.rst +++ b/doc/testing.rst @@ -5,6 +5,33 @@ Testing your installation =================================================================== +Fortran Regression Tests +------------------------- +The Fortran code in Clawpack has a suite of regression tests that can be run to +check that the code is working properly. In each of the Fortran packages there +are a series of regression tests alongside some of the examples as well as some +tests for Python functionality. All these tests can be run by going to the base +directory of the corresponding package and running:: + + pytest + +The most useful option for debugging a failing test is to use:: + + pytest --basetemp=./test_output + +which will save the output from the teset into the directory `test_output`. The +package `pytetst` also has a number of additional debugging options that you can +use. See the `pytest documentation `_ for more +details. + +Adding Regression Tests +----------------------- + +:TODO: add instructions for adding regression tests here. + +Old Testing +=========== + PyClaw ------ If you downloaded Clawpack manually, you can test your :ref:`pyclaw` @@ -46,4 +73,3 @@ There are similar `tests` subdirectories of `$CLAW/amrclaw` and More extensive tests can be performed by running all of the examples in the `examples` directory and comparing the resulting plots against those archived in the :ref:`galleries`. See also :ref:`regression`. 
- From 3d638d4a02d971c4cb0c61031ea249f557c525f1 Mon Sep 17 00:00:00 2001 From: Kyle Mandli Date: Mon, 16 Mar 2026 12:10:55 -0400 Subject: [PATCH 02/12] Flesh out new testing with pytest and add a new test example --- doc/testing.rst | 114 ++++++++++++++++++++++++++++++++++++++++++++++-- 1 file changed, 111 insertions(+), 3 deletions(-) diff --git a/doc/testing.rst b/doc/testing.rst index 55fdce3..bdbffe3 100644 --- a/doc/testing.rst +++ b/doc/testing.rst @@ -5,8 +5,21 @@ Testing your installation =================================================================== +PyClaw Tests +------------ + +You can exercise all the tests in PyClaw by running the following command from +the base of the `pyclaw directory`: + +.. code-block:: console + + cd $CLAW/pyclaw + pytest + + Fortran Regression Tests ------------------------- + The Fortran code in Clawpack has a suite of regression tests that can be run to check that the code is working properly. In each of the Fortran packages there are a series of regression tests along side some of the examples as well as some @@ -27,10 +40,105 @@ details. Adding Regression Tests ----------------------- -:TODO: add instructions for adding regression tests here. +If you want to add a new regression test using the new `pytest` framework, you can follow along with this example for the acoustics_1d_example1 test. If something more complicated is needed, take a look at the other tests available in the packages, or reach out to the developers for help. + +Adding a Test for `acoustics_1d_example1` +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +1. Create a new file in the `examples/acoustics_1d_example1` directory called `test_acoustics_1d_example1.py` by: + +.. code-block:: console + + touch examples/acoustics_1d_example1/test_acoustics_1d_example1.py + +and place the following content in it: + +.. 
code-block:: python + :linenos: + + #!/usr/bin/env python + + from pathlib import Path + import pytest + + import clawpack.classic.test as test + + + def test_acoustics_1d_example1(tmp_path: Path, save: bool): + runner = test.ClassicTestRunner( + tmp_path=tmp_path, + test_path=Path(__file__).parent, + ) + + # Set data using default setrun.py file in local directory. If you want + # to override this then hand it another setrun.py + runner.set_data() + + runner.rundata.clawdata.num_output_times = 2 + runner.rundata.clawdata.tfinal = 1.0 + runner.rundata.clawdata.output_t0 = False + + runner.write_data() + + # Build xclaw and execute code + runner.executable_name = "xclaw" + runner.build_executable() + runner.run_code() + + # Check t=0.5 and t=1.0, we are looking at both the pressure and velocity + # in this test so need to specify those indices + runner.check_frame(1, indices=(0, 1), save=save) + runner.check_frame(2, indices=(0, 1), save=save) + + if __name__=="__main__": + pytest.main([__file__]) + +This file is executable from the command line. The middle section modifies what is in the local `setrun.py` file to make the test small and deterministic. The final section runs the test when the file is executed from the command line. You can run this test with: + +.. code-block:: console + + python test_acoustics_1d_example1.py + +or with: + +.. code-block:: console + + pytest test_acoustics_1d_example1.py + + +2. We now need to generate the expected results for this test. To do this, run the test with the `--save` option: + +.. code-block:: console + + pytest test_acoustics_1d_example1.py --save + +This will run the test and save the results in a directory called `regression_data` in the same directory as the test. This file contains the expected results for the test, which will be used to compare against future runs of the test. 
Note that if you would like to see the full output of the test, you can add `--basetemp=./test_output` to the command above, which will save the output from the test into the directory `test_output`. + + +3. Now you can run the test without the `--save` option to check that it is working properly. If the test passes, you should see output similar to this: + +.. code-block:: console + + ============================= test session starts ============================== + platform darwin -- Python 3.8.10, pytest-7.1.2, pluggy-1.0.0 + rootdir: /path/to/clawpack/classic/examples/acoustics_1d_example1 + collected 1 item + + test_acoustics_1d_example1.py . [100%] + + ============================== 1 passed in 5.00s =============================== + +To complete the test you will want to add the test script `test_acoustics_1d_example1.py` and the regression data to the repository. + +============== +Legacy Testing +============== + +Tests via `nose` are no longer supported, but if you have an older version of +Clawpack installed and `nosetests` available, you can still run the old tests. +These are not as comprehensive as the new `pytest` tests, but they can be useful +for checking that your installation is working properly. 
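The test function above takes a `save: bool` argument, and step 2 passes `--save` on the pytest command line. In pytest those two are connected through a fixture; one plausible `conftest.py` wiring — an assumption about the setup, not necessarily what clawutil actually ships — looks like this:

```python
# conftest.py -- hypothetical wiring for the `save` fixture; the real
# clawutil/classic configuration may differ.
import pytest


def pytest_addoption(parser):
    # Register --save so `pytest --save` regenerates regression baselines.
    parser.addoption("--save", action="store_true", default=False,
                     help="save regression data instead of comparing against it")


@pytest.fixture
def save(request):
    # Hand the flag to tests as the plain boolean `save` argument.
    return request.config.getoption("--save")
```

With this in place, pytest injects `save=False` by default and `save=True` when `--save` is given, so the same test body both checks and regenerates baselines.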
-Old Testing -=========== PyClaw ------ From 3f2a0273a723f39ad949e65e4fedd9b746ab40c1 Mon Sep 17 00:00:00 2001 From: Kyle Mandli Date: Mon, 16 Mar 2026 12:26:11 -0400 Subject: [PATCH 03/12] Add testing refactor doc --- doc/testing_refactor.rst | 98 ++++++++++++++++++++++++++++++++++++++++ 1 file changed, 98 insertions(+) create mode 100644 doc/testing_refactor.rst diff --git a/doc/testing_refactor.rst b/doc/testing_refactor.rst new file mode 100644 index 0000000..a159399 --- /dev/null +++ b/doc/testing_refactor.rst @@ -0,0 +1,98 @@ +========================= +Clawpack Testing Refactor +========================= + +Overview +-------- + +Clawpack is moving to a pytest-based testing model built around example-local regression tests and shared test infrastructure in clawutil. + +This refactor is motivated by the need to: + - simplify test authoring + - reduce custom test scaffolding + - better match pytest conventions + - improve CI integration + - support incremental migration from the legacy regression framework + +Current reference implementations include: + - https://github.com/clawpack/clawutil/issues/187 + - https://github.com/clawpack/classic/issues/96 + - https://github.com/clawpack/amrclaw/issues/310 + +Design decisions +---------------- + +1. **Pytest is the system-wide test runner** - All new tests should be written + for pytest. +2. **Example-based regression tests are the primary solver test model** - For + solver-heavy code, the canonical test is a small example that: + - writes input data + - builds using the example Makefile + - runs in a temporary directory + - compares output to saved regression data +3. **Shared testing infrastructure lives in clawutil** - Common runner logic and + helpers should be centralized rather than duplicated across repositories. +4. **Tests should use the real build workflow** - Tests should exercise the same + example Makefile workflow that users rely on. +5. 
**Fresh builds should be explicit** - Tests should request a fresh build + through the runner or build target, rather than relying on import-time + cleanup or hidden state mutation. +6. **Legacy test infrastructure is transitional** - Existing legacy tests may + remain temporarily, but new tests should follow the pytest model and old + tests should be migrated over time. + +Test layout +----------- + +A typical migrated example should contain:: + + example_name/ + Makefile + setrun.py + test_example_name.py + regression_data/ + frame0001.txt + frame0002.txt + +Typical test workflow +--------------------- + +A typical example test: +1. creates or modifies rundata +2. writes data files +3. builds the executable +4. runs in tmp_path +5. compares selected frames or diagnostics + +Regression data policy +---------------------- + +Regression data should be: + - small + - reviewable in a PR + - deterministic + - specific to the example + +Use `--save` to regenerate baselines intentionally. + +CI policy +--------- + +CI should: + - run pytest directly + - store test artifacts in a predictable directory + - prefer fast, stable examples in PR checks + - allow broader coverage in scheduled or extended workflows + +Migration guidance +------------------ + +When migrating an old test: + - prefer example-local placement + - move shared behavior into clawutil + - remove hidden setup side effects + - keep the test close to the user-facing workflow + +Reference example +----------------- +`$CLAW/classic/examples/acoustics_1d_heterogeneous/test_acoustics_1d_heterogeneous.py` is intended to serve as an example. 
From 48fcdcec34ae8be8a0386f3e156951b908a7eb76 Mon Sep 17 00:00:00 2001 From: Kyle Mandli Date: Mon, 16 Mar 2026 12:27:08 -0400 Subject: [PATCH 04/12] Bump CC-BY date --- doc/conf.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/doc/conf.py b/doc/conf.py index 4629497..ba3a364 100644 --- a/doc/conf.py +++ b/doc/conf.py @@ -75,7 +75,7 @@ # General information about the project. project = u'Clawpack' -copyright = u'CC-BY 2024, The Clawpack Development Team' +copyright = u'CC-BY 2026, The Clawpack Development Team' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the From e302938080eead643438382f27446158cf32e45d Mon Sep 17 00:00:00 2001 From: Kyle Mandli Date: Mon, 16 Mar 2026 13:54:16 -0400 Subject: [PATCH 05/12] Correct argument naming error --- doc/testing.rst | 19 ++++++++----------- doc/testing_refactor.rst | 3 ++- 2 files changed, 10 insertions(+), 12 deletions(-) diff --git a/doc/testing.rst b/doc/testing.rst index bdbffe3..5593e9d 100644 --- a/doc/testing.rst +++ b/doc/testing.rst @@ -24,11 +24,15 @@ The Fortran code in Clawpack has a suite of regression tests that can be run to check that the code is working properly. In each of the Fortran packages there are a series of regression tests alongside some of the examples as well as some tests for Python functionality. All these tests can be run by going to the base -directory of the corresponding package and running:: +directory of the corresponding package and running: + +.. code-block:: console pytest -The most useful option for debugging a failing test is to use:: +The most useful option for debugging a failing test is to use: + +.. 
code-block:: console pytest --basetemp=./test_output @@ -65,13 +69,9 @@ and place the following content in it: def test_acoustics_1d_example1(tmp_path: Path, save: bool): - runner = test.ClassicTestRunner( - tmp_path=tmp_path, - test_path=Path(__file__).parent, - ) + runner = test.ClassicTestRunner(tmp_path, + test_path=Path(__file__).parent) - # Set data using default setrun.py file in local directory. If you want - # to override this then hand it another setrun.py runner.set_data() runner.rundata.clawdata.num_output_times = 2 @@ -80,13 +80,10 @@ and place the following content in it: runner.write_data() - # Build xclaw and execute code runner.executable_name = "xclaw" runner.build_executable() runner.run_code() - # Check t=0.5 and t=1.0, we are looking at both the pressure and velocity - # in this test so need to specify those indices runner.check_frame(1, indices=(0, 1), save=save) runner.check_frame(2, indices=(0, 1), save=save) diff --git a/doc/testing_refactor.rst b/doc/testing_refactor.rst index a159399..486690f 100644 --- a/doc/testing_refactor.rst +++ b/doc/testing_refactor.rst @@ -95,4 +95,5 @@ When migrating an old test: Reference example ----------------- -`$CLAW/classic/examples/acoustics_1d_heterogeneous/test_acoustics_1d_heterogeneous.py` is intended to serve as an example. +`$CLAW/classic/examples/acoustics_1d_heterogeneous/test_acoustics_1d_heterogeneous.py` +is intended to serve as an example setup. 
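This patch settles on passing `tmp_path` positionally and `test_path` by keyword. One way a runner's signature could enforce that calling convention — a sketch of the apparent intent, not the actual clawutil definition — is a keyword-only parameter:

```python
from pathlib import Path


class ClassicTestRunner:
    """Hypothetical signature sketch only -- not the real clawutil class.

    `tmp_path` is accepted positionally; `test_path` is keyword-only, so the
    corrected call style `ClassicTestRunner(tmp_path, test_path=...)` is the
    only way to supply the second argument.
    """

    def __init__(self, tmp_path, *, test_path):
        self.tmp_path = Path(tmp_path)
        self.test_path = Path(test_path)
```

The `*` in the parameter list makes a second positional argument a `TypeError`, which turns the argument-naming mistake this patch fixes into an immediate, explicit failure.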
From 34f618287569172de61c2e7a8075ce9b1eda14fa Mon Sep 17 00:00:00 2001 From: Kyle Mandli Date: Tue, 17 Mar 2026 11:33:32 -0400 Subject: [PATCH 06/12] Add mention of compiler flag issues --- doc/testing_refactor.rst | 10 ++++++++++ 1 file changed, 10 insertions(+) diff --git a/doc/testing_refactor.rst b/doc/testing_refactor.rst index 486690f..06220a6 100644 --- a/doc/testing_refactor.rst +++ b/doc/testing_refactor.rst @@ -84,6 +84,16 @@ CI should: - prefer fast, stable examples in PR checks - allow broader coverage in scheduled or extended workflows +Compiler Flags and Numerical Reproducibility +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ + +Regression tests are sensitive to floating-point roundoff and compiler +optimizations. To ensure stable and reproducible results across platforms, +CI uses conservative optimization flags (e.g., `-O1`). + +Higher optimization levels may produce small numerical differences and are +not currently used for regression validation. + Migration guidance ------------------ From 0232cd95123cb428f429c968ab6da38b0cbd51b6 Mon Sep 17 00:00:00 2001 From: Kyle Mandli Date: Tue, 17 Mar 2026 12:32:48 -0400 Subject: [PATCH 07/12] Add instructions for using alternative setrun and plotting test output --- doc/testing.rst | 23 ++++++++++++++++++++++- 1 file changed, 22 insertions(+), 1 deletion(-) diff --git a/doc/testing.rst b/doc/testing.rst index 5593e9d..25d7ee8 100644 --- a/doc/testing.rst +++ b/doc/testing.rst @@ -41,10 +41,31 @@ package `pytetst` also has a number of additional debugging options that you can use. See the `pytest documentation `_ for more details. +If you would like to use a different default `setrun.py` file for testing you +can modify the test script to use a different `setrun.py` file. + +If you would like to plot the output of a test, you can use the same plotting +tools that are used for the examples. You can find the output of the test in +the `test_output` directory if you used the `--basetemp` option above. 
You can +then use the plotting tools to plot the output from the test. For example: + +.. code-block:: console + + cd $CLAW/classic/examples/acoustics_1d_example1 + pytest --basetemp=./test_output . + python plotclaw.py test_output/test_acoustics_1d_example1/ ./_plots ./setplot.py + +which will run the test and save the output into a subdirectory of +`test_output`. The plotting command will then plot the output from the +appropriate subdirectory specified. + Adding Regression Tests ----------------------- -If you want to add a new regression test using the new `pytest` framework, you can follow along with this example for the acoustics_1d_example1 test. If something more complicated is needed, take a look at the other tests available in the packages, or reach out to the developers for help. +If you want to add a new regression test using the new `pytest` framework, you +can follow along with this example for the acoustics_1d_example1 test. If +something more complicated is needed, take a look at the other tests available +in the packages, or reach out to the developers for help. Adding a Test for `acoustics_1d_example1` ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ From 511c069a3a11998cc8f20088ce8be6c619aa5d68 Mon Sep 17 00:00:00 2001 From: Kyle Mandli Date: Tue, 17 Mar 2026 12:49:51 -0400 Subject: [PATCH 08/12] Add some hints for testing --- doc/testing.rst | 26 +++++++++++++++++--------- 1 file changed, 17 insertions(+), 9 deletions(-) diff --git a/doc/testing.rst b/doc/testing.rst index 25d7ee8..c634896 100644 --- a/doc/testing.rst +++ b/doc/testing.rst @@ -41,13 +41,23 @@ package `pytetst` also has a number of additional debugging options that you can use. See the `pytest documentation `_ for more details. -If you would like to use a different default `setrun.py` file for testing you -can modify the test script to use a different `setrun.py` file. +Hints +^^^^^ +- Often times the output from a failing test will overwhelm the console output. 
In this case, you can use the following to pipe the output into the file `log.txt` and look at it directly: -If you would like to plot the output of a test, you can use the same plotting -tools that are used for the examples. You can find the output of the test in -the `test_output` directory if you used the `--basetemp` option above. You can -then use the plotting tools to plot the output from the test. For example: +.. code-block:: console + + pytest --basetemp=./test_output > log.txt 2>&1 + +- If you would like to use a different default `setrun.py` file for testing you + can modify the test script to use a different `setrun.py` file. +- If you would like to plot the output of a test, you can use the same plotting + tools that are used for the examples. You can find the output of the test in + the `test_output` directory if you used the `--basetemp` option above. You + can then use the plotting tools to plot the output from the test. For + example this code will run the test and save the output into a subdirectory + of `test_output`. The plotting command will then plot the output from the + appropriate subdirectory specified: .. code-block:: console @@ -55,9 +65,7 @@ then use the plotting tools to plot the output from the test. For example: pytest --basetemp=./test_output . python plotclaw.py test_output/test_acoustics_1d_example1/ ./_plots ./setplot.py -which will run the test and save the output into a subdirectory of -`test_output`. The plotting command will then plot the output from the -appropriate subdirectory specified. 
+ + Adding Regression Tests ----------------------- From ae9e0dc4d331d01b2937eaba4a4fc7caa9620c41 Mon Sep 17 00:00:00 2001 From: Kyle Mandli Date: Wed, 18 Mar 2026 10:29:34 -0400 Subject: [PATCH 09/12] Add comment regarding local test data --- doc/testing_refactor.rst | 6 ++++++ 1 file changed, 6 insertions(+) diff --git a/doc/testing_refactor.rst b/doc/testing_refactor.rst index 06220a6..e66e685 100644 --- a/doc/testing_refactor.rst +++ b/doc/testing_refactor.rst @@ -84,6 +84,12 @@ CI should: - prefer fast, stable examples in PR checks - allow broader coverage in scheduled or extended workflows +Data Included in the Repository for CI +^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ +Example regression tests should avoid external downloads when possible. Small, +stable input files should be checked into the repository. Download and +conversion logic should be tested separately in focused utility tests. + Compiler Flags and Numerical Reproducibility ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ From 348993389804023cfb9f2113177b6af5bba12bf0 Mon Sep 17 00:00:00 2001 From: Kyle Mandli Date: Fri, 27 Mar 2026 09:32:32 -0400 Subject: [PATCH 10/12] Add new flags for NetCDF support --- doc/fortran_compilers.rst | 23 ++++++++++++++--------- 1 file changed, 14 insertions(+), 9 deletions(-) diff --git a/doc/fortran_compilers.rst b/doc/fortran_compilers.rst index de6215c..da69faa 100644 --- a/doc/fortran_compilers.rst +++ b/doc/fortran_compilers.rst @@ -77,6 +77,20 @@ and some testing abilities. The `PPFLAGS` environment variable is meant to provide further control of the pre-processor. +.. _fortran_NETCDF: + +Compiling with NetCDF Support +----------------------------- + +For NetCDF we provide convenience flags for compiling with the NetCDF library:: + + FFLAGS = -DNETCDF $(NETCDF_FFLAGS) + LFLAGS = $(NETCDF_LFLAGS) + +These flags are determined using the utilities `nf-config` and `pkg-config`. If +these are not available, the older `NETCDF4_DIR` approach is used and is still +supported. + + .. 
_fortran_gfortran: gfortran compiler @@ -102,15 +116,6 @@ gfortran compiler **Note:** Versions of gfortran before 4.6 are known to have OpenMP bugs. -* For using NetCDF:: - - FFLAGS = -DNETCDF -lnetcdf -I$(NETCDF4_DIR)/include - LFLAGS = -lnetcdf - - The `FFLAGS` can also be put into `PPFLAGS`. Note that the variable - `NETCDF4_DIR` should be defined in the environment. - - .. _fortran_intel: Intel fortran compiler From 47c8361d204044e1ba85b8bbab0b8973497edba2 Mon Sep 17 00:00:00 2001 From: Kyle Mandli Date: Tue, 31 Mar 2026 10:19:23 -0400 Subject: [PATCH 11/12] Add some more testing hints --- doc/testing.rst | 16 +++++++++++++++- 1 file changed, 15 insertions(+), 1 deletion(-) diff --git a/doc/testing.rst b/doc/testing.rst index c634896..988a315 100644 --- a/doc/testing.rst +++ b/doc/testing.rst @@ -43,7 +43,9 @@ details. Hints ^^^^^ -- Often times the output from a failing test will overwhelm the console output. In this case, you can use the following to pipe the output into the file `log.txt` and look at it directly: +- Often times the output from a failing test will overwhelm the console output. + In this case, you can use the following to pipe the output into the file + `log.txt` and look at it directly: .. code-block:: console @@ -65,7 +67,19 @@ Hints pytest --basetemp=./test_output . python plotclaw.py test_output/test_acoustics_1d_example1/ ./_plots ./setplot.py +- If you would like to plot output from a test that the output was saved for, + e.g. with `--basetemp=./test_output`, you can use the same plotting commands + to plot the output from the test. For example this code will plot the output + from the test `test_acoustics_1d_example1`: +.. code-block:: console + + python plotclaw.py test_output/test_acoustics_1d_example1#/ ./_plots ./setplot.py + +Note that the `#` in the command above is used to specify the subdirectory of +`test_output` that contains the output from the test. 
You can use this same +command to plot the output from any test that you have saved the output for. +The script `plotclaw.py` is in VisClaw. Adding Regression Tests ----------------------- From 01c243867fd1ade2ae39306d30339368a4c5ee78 Mon Sep 17 00:00:00 2001 From: Randy LeVeque Date: Fri, 3 Apr 2026 17:17:57 -0700 Subject: [PATCH 12/12] add some cross-references --- doc/contents.rst | 1 + doc/testing.rst | 36 ++++++++++++++++++++++-------------- doc/testing_refactor.rst | 5 +++++ 3 files changed, 28 insertions(+), 14 deletions(-) diff --git a/doc/contents.rst b/doc/contents.rst index 8bae78c..c96c511 100644 --- a/doc/contents.rst +++ b/doc/contents.rst @@ -63,6 +63,7 @@ Examples and Applications fvmbook contribute_apps testing + testing_refactor sphinxdoc .. _contents_fortcodes: diff --git a/doc/testing.rst b/doc/testing.rst index 988a315..1892ead 100644 --- a/doc/testing.rst +++ b/doc/testing.rst @@ -5,10 +5,16 @@ Testing your installation =================================================================== +Clawpack has switched from using `nose` tests to +`pytest `_. + +See :ref:`testing_refactor` for more information about the switch, +and :ref:`legacy_testing` for some notes on using `nose`. + PyClaw Tests ------------ -You can exercise all the tests in PyClaw by running the following command from +You can exercise all the tests in PyClaw by running the following command from the base of the `pyclaw directory`: .. code-block:: console @@ -33,11 +39,11 @@ directory of the corresponding pacakge and running: The most useful option for debugging a failing test is to use: .. code-block:: console - + pytest --basetemp=./test_output -which will save the output from the teset into the directory `test_output`. The -package `pytetst` also has a number of additional debugging options that you can +which will save the output from the tests into the directory `test_output`. The +package `pytest` also has a number of additional debugging options that you can use. 
See the `pytest documentation `_ for more details. @@ -52,10 +58,10 @@ Hints pytest --basetemp=./test_output > log.txt 2>&1 - If you would like to use a different default `setrun.py` file for testing you - can modify the test script to use a different `setrun.py` file. + can modify the test script to use a different `setrun.py` file. - If you would like to plot the output of a test, you can use the same plotting tools that are used for the examples. You can find the output of the test in - the `test_output` directory if you used the `--basetemp` option above. You + the `test_output` directory if you used the `\--basetemp` option above. You can then use the plotting tools to plot the output from the test. For example this code will run the test and save the output into a subdirectory of `test_output`. The plotting command will then plot the output from the @@ -68,7 +74,7 @@ Hints python plotclaw.py test_output/test_acoustics_1d_example1/ ./_plots ./setplot.py - If you would like to plot output from a test that the output was saved for, - e.g. with `--basetemp=./test_output`, you can use the same plotting commands + e.g. with `\--basetemp=./test_output`, you can use the same plotting commands to plot the output from the test. For example this code will plot the output from the test `test_acoustics_1d_example1`: @@ -78,7 +84,7 @@ Hints Note that the `#` in the command above is used to specify the subdirectory of `test_output` that contains the output from the test. You can use this same -command to plot the output from any test that you have saved the output for. +command to plot the output from any test that you have saved the output for. The script `plotclaw.py` is in VisClaw. Adding Regression Tests @@ -146,16 +152,16 @@ or with: pytest test_acoustics_1d_example1.py -2. We now need to generate the expected results for this test. To do this, run the test with the `--save` option: +2. We now need to generate the expected results for this test. 
To do this, run the test with the `\--save` option: .. code-block:: console pytest test_acoustics_1d_example1.py --save -This will run the test and save the results in a directory called `regression_data` in the same directory as the test. This file contains the expected results for the test, which will be used to compare against future runs of the test. Note that if you would like to see the full output of the test, you can add `--basetemp=./test_output` to the command above, which will save the output from the test into the directory `test_output`. +This will run the test and save the results in a directory called `regression_data` in the same directory as the test. This file contains the expected results for the test, which will be used to compare against future runs of the test. Note that if you would like to see the full output of the test, you can add `\--basetemp=./test_output` to the command above, which will save the output from the test into the directory `test_output`. -3. Now you can run the test without the `--save` option to check that it is working properly. If the test passes, you should see output similar to this: +3. Now you can run the test without the `\--save` option to check that it is working properly. If the test passes, you should see output similar to this: .. code-block:: console @@ -170,9 +176,11 @@ This will run the test and save the results in a directory called `regression_da To complete the test you will want to add the test script `test_acoustics_1d_example1.py` add the regression data to the repository. -============== +.. _legacy_testing: + Legacy Testing -============== +------------------------- + Tests via `nose` are no longer supported, but if you have an older version of Clawpack installed and `nostests` available, you can still run the old tests. @@ -202,7 +210,7 @@ As a first test of the Fortran code, try the following:: This will run several tests and compare a few numbers from the solution with -archived results. 
The tests should run in a few seconds and +archived results. The tests should run in a few seconds and you should see output similar to this:: runTest (tests.acoustics_1d_heterogeneous.regression_tests.Acoustics1DHeterogeneousTest) ... ok diff --git a/doc/testing_refactor.rst b/doc/testing_refactor.rst index e66e685..aa543e2 100644 --- a/doc/testing_refactor.rst +++ b/doc/testing_refactor.rst @@ -1,7 +1,12 @@ +.. _testing_refactor: + ========================= Clawpack Testing Refactor ========================= +.. seealso:: + - :ref:`testing` + Overview --------