Tests fail when running the full testing suite #365
Conversation
…"easy" fix regarding the test that makes the .ipynb notebook fail: I included running `muse --model default` as a necessary step to complete setting up MUSE_OS. Given that, after patching the setup, 304 tests pass while 10 are skipped, 1 is an expected fail and 4 warnings are raised, it is probably better to sort these out in separate issues to separate responsibilities and make them easier to solve.
…which is true only if one runs `muse --model default` (or any other model that creates MCACapacity.csv) first and only then runs the test notebooks. I hard-coded this explicit file dependency, which may not be the most general solution; however, given the explicit dependency on MCACapacity.csv in one of the test notebooks, which is itself hard-coded, I think this is a straightforward approach.
Codecov Report: All modified and coverable lines are covered by tests ✅
Additional details and impacted files
@@ Coverage Diff @@
## develop #365 +/- ##
===========================================
+ Coverage 71.36% 71.39% +0.03%
===========================================
Files 44 44
Lines 5915 5915
Branches 1162 1162
===========================================
+ Hits 4221 4223 +2
+ Misses 1371 1370 -1
+ Partials 323 322 -1
☔ View full report in Codecov by Sentry.
dalonsoa
left a comment
I've re-run the failed test and now things work as planned. It is just a flaky test.
The changes look good and the explanations are clearer now, so all good.
For future PRs, remember to 1) provide a description of what the PR is about, 2) indicate which issue it is closing, and 3) make sure the branch is up to date.
docs/installation/developers.rst
Outdated
# 1- Create a virtual environment
# 2- Activate that virtual environment
# 3- Install MUSE in editable mode with: python -m pip install -e .[dev,doc]
# 4- Invoke `muse --model default`
So this is the key thing to enable all tests to pass when the full suite is run at once, right?
I don't think so, as this will generate a results folder in the current working directory, whereas we specifically need it in the docs folder
In the docs workflow a link is made between both, but it is true I'm not entirely sure that's a solution in this case.
Yes. For the tests to pass without the failure, one has to run `muse --model default`.
Independently, there is still one "expected fail", some skipped tests and four warnings, though I opened separate issues for those.
In the docs workflow a link is made between both, but it is true I'm not entirely sure that's a solution in this case.
Yeah, that was a hack on my part to avoid having to generate results twice.
I think it's a good idea to tell users that they need to run this command in order for tests to pass, but I think we should put that in the "Running Tests" section, not here.
tests/test_docs_notebooks.py
Outdated
def available_notebooks() -> list[Path]:
    """Locate the available notebooks in the docs."""
    if not Path("Results/MCACapacity.csv").exists():
        return []
Is this saying that it will run no notebook tests if that path doesn't exist? I don't think we want that
Sorry, I missed this file. Yes, we do not want this. All tests should run.
dalonsoa
left a comment
All tests should run and not be skipped if a file is missing. If it is missing, we need to figure out why and put it in there.
Well, for "Results/MCACapacity.csv" to be visible, one has to create it first, and the simplest way of creating it is by running a model, say the default model via `muse --model default`, as @dalonsoa pointed out. If my suggestion is not the one to go for, I can think of a couple of alternatives.
I think the main issue here is that it makes no sense to have a notebook in the docs requiring data in the root directory. There's an ongoing discussion about how to avoid committing data to the repository, but that's a longer-term discussion and we do not have an answer for it yet.
And remember to update the branch and, ideally, add a PR description - not essential, but it serves as a reference for the future.
Did you want me to review this @HarmonicReflux?
Yes, please take a look, so we are all on the same page.
alexdewar
left a comment
As the others have said, we don't want to disable tests when the files aren't there. While that will make the tests pass, that'll just mean that developers who haven't run the default model first won't run these tests at all, so they won't know if their code is broken until they open a PR.
I think the right solution is to just put an instruction about this in the documentation for now. We could potentially do something cleverer, like automatically running the default model in conftest.py if the files are missing, but we shouldn't do that on this PR. I'd maybe open an issue called "default model must be run for tests to pass" and we can have a think about it. (We don't want to go back to committing the contents of Results/ to the repo, because we've only just got rid of them!)
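As an illustration only, not something done in this PR: a minimal sketch of what such a conftest.py hook could look like, assuming the default model is invoked through the `muse` CLI via subprocess; the fixture name is made up for this example.

```python
# conftest.py - hypothetical sketch, not the repository's actual code.
import subprocess
from pathlib import Path

import pytest


@pytest.fixture(scope="session", autouse=True)
def default_model_results():
    """Run the default MUSE model once per test session if its results are missing."""
    if not Path("Results/MCACapacity.csv").exists():
        # Assumes the `muse` CLI is on PATH, as used elsewhere in this thread.
        subprocess.run(["muse", "--model", "default"], check=True)
```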
As @dalonsoa has said, you want a PR description so reviewers know what has been changed and why. In this case, it would have been good to have a heads-up that you'd disabled some of the tests! If @tsmbland hadn't clocked it then it could have broken things down the line.
Reverted the changes that excluded the test when the input file is not there, as per the pull request discussion. Updated the manual to instruct users to run the default model in MUSE to complete the installation. This ensures all tests pass (some may be skipped) and avoids including model results files in the codebase.
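For reference, a rough sketch of what the notebook collection helper looks like once the guard is reverted; the glob pattern and docs layout below are assumptions for illustration, not the repository's actual code.

```python
from pathlib import Path


def available_notebooks() -> list[Path]:
    """Locate the available notebooks in the docs (no file-existence guard)."""
    # Hypothetical pattern: collect every notebook under docs/.
    return sorted(Path("docs").rglob("*.ipynb"))
```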
I appreciate the reviewers' comments.
alexdewar
left a comment
I still think the text about having to run `muse --model default` should be in the "Running Tests" section. See comment.
…l before the test scripts. Running a model is now mentioned twice: once in the installation instructions and once before running the tests. This ensures readers who do not follow the guide in sequence will understand the requirement and avoid confusion.
@alexdewar please take a look at this reworked pull request.
I'm still a bit unsure about this, because we specifically need a results folder in the docs folder.
Running a model will create a "Results" folder in the directory where MUSE is installed. The Results folder itself then contains subfolders, one of which contains the file that the notebook tests need access to in order to run successfully. To my knowledge, building the documentation is a separate process, unrelated to the tests themselves.
I get that, but the notebook in question (…)
…file that is required for the notebook tests.
…installation procedure, as the creation of the file required to run the notebook tests is now fixed.
for more information, see https://pre-commit.ci
@dalonsoa and @tsmbland, @alexdewar is out of office until later this week.
tsmbland
left a comment
Looks good! Just a small change needed that I didn't spot before.
Co-authored-by: Tom Bland <t.bland@imperial.ac.uk>
Co-authored-by: Tom Bland <t.bland@imperial.ac.uk>
Co-authored-by: Tom Bland <t.bland@imperial.ac.uk>
dalonsoa
left a comment
The latest modification is very neat! Let's hope it works in all the instances we are interested in.
Description
This pull request refactors the way
muse --model -defaultis executed to ensure that all tests pass.Specifically, the Results/MCACapacity.csv file is now placed within the docs folder, which is where the test expects to find MCACapacity.csv. The necessary file is created during the test run by the notebook itself and subsequently picked up by it. This change resolves the error that causes pytest to fail, and as a result, all tests now pass by default.
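As a hedged illustration of the mechanism described above (not the repository's actual test code): executing a notebook with its own folder as the working directory makes any Results/ output land inside docs/ and thus visible to later cells. The notebook path below is a placeholder.

```python
from pathlib import Path

import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

notebook = Path("docs/some-tutorial.ipynb")  # placeholder path
nb = nbformat.read(str(notebook), as_version=4)
# Setting resources["metadata"]["path"] executes the notebook with docs/ as its
# working directory, so files like Results/MCACapacity.csv are created next to it.
ExecutePreprocessor(timeout=600).preprocess(
    nb, {"metadata": {"path": str(notebook.parent)}}
)
```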
Fixes #261
Type of change
Please add a line in the relevant section of CHANGELOG.md to document the change (include PR #) - note reverse order of PR #s.
Key checklist
$ python -m pytest
$ python -m sphinx -b html docs docs/build
Further checks