
Tests fail when running the full testing suite #365

Merged
HarmonicReflux merged 16 commits into develop from tests_fail_when_running_the_full_testing_suite on Jul 12, 2024

Conversation

@HarmonicReflux (Collaborator) commented Jun 21, 2024

Description

This pull request refactors the way `muse --model default` is executed to ensure that all tests pass.

Specifically, the Results/MCACapacity.csv file is now placed within the docs folder, which is where the test expects to find MCACapacity.csv. The file is created during the test run by the notebook itself and subsequently picked up by the test. This change resolves the error that caused pytest to fail, and as a result, all tests now pass by default.
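
As a rough illustration of that expectation (a sketch only; the helper name and exact paths are assumptions, not code from this PR):

```python
from pathlib import Path

# Sketch of the expectation described above: the notebook test looks for
# MCACapacity.csv under the docs folder, and the file is produced during
# the test run itself. The names here are illustrative only.
DOCS_RESULTS = Path("docs") / "Results" / "MCACapacity.csv"


def capacity_results_present() -> bool:
    """Return True once the notebook run has written the capacity results."""
    return DOCS_RESULTS.exists()
```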

Fixes #261

Type of change

Please add a line in the relevant section of
CHANGELOG.md to
document the change (include PR #) - note reverse order of PR #s.

  • New feature (non-breaking change which adds functionality)
  • Optimization (non-breaking, back-end change that speeds up the code)
  • Bug fix (non-breaking change which fixes an issue)
  • Breaking change (whatever its nature)

Key checklist

  • All tests pass: $ python -m pytest
  • The documentation builds and looks OK: $ python -m sphinx -b html docs docs/build

Further checks

  • Code is commented, particularly in hard-to-understand areas
  • Tests added that prove fix is effective or that feature works

…"easy" fix regarding the test that makes the .ipynb notebook fail, I included `muse --model default` to be run as a necessary step to complete setting MUSE_OS up.

Given that, after patching the setup, 304 tests pass while 10 are skipped, 1 is an expected failure and 4 warnings are emitted, it is probably better to sort these out in separate issues to separate responsibilities and make them easier to solve.
…which is true only upon running `muse --model default`, or any other model that creates MCACapacity.csv, and only then running the test notebooks.

I hard-code this explicit file dependency, which may not be the most general solution; however, since the dependency on MCACapacity.csv in one of the test notebooks is itself hardcoded, I think this is a straightforward approach.
@HarmonicReflux requested a review from dalonsoa on June 21, 2024
codecov bot commented Jun 21, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 71.39%. Comparing base (2e8e1b5) to head (297e8db).

Additional details and impacted files
@@             Coverage Diff             @@
##           develop     #365      +/-   ##
===========================================
+ Coverage    71.36%   71.39%   +0.03%     
===========================================
  Files           44       44              
  Lines         5915     5915              
  Branches      1162     1162              
===========================================
+ Hits          4221     4223       +2     
+ Misses        1371     1370       -1     
+ Partials       323      322       -1     

☔ View full report in Codecov by Sentry.

@dalonsoa (Collaborator) left a comment

I've re-run the failed test and now things work as planned. It is just a flaky test.

The changes look good and the explanations are clearer now, so all good.

For future PRs, remember to 1) provide a description of what the PR is about, 2) indicate which issue it is closing, and 3) make sure the branch is up to date.

# 1- Create a virtual environment
# 2- Activate that virtual environment
# 3- Install MUSE in editable mode with: python -m pip install -e .[dev,doc]
# 4- Invoke `muse --model default`
Collaborator

So this is the key thing to enable all tests to pass when the full suite is run at once, right?

Collaborator

I don't think so, as this will generate a results folder in the current working directory, whereas we specifically need it in the docs folder

Collaborator

In the docs workflow a link is made between both, but it is true I'm not entirely sure that's a solution in this case.

Collaborator Author

Yes. For the tests to pass without the failure, one has to run muse --model default.
Independently, there is still an "expected fail", some skipped tests and four "warnings", though I opened separate issues for those.

Collaborator

> In the docs workflow a link is made between both, but it is true I'm not entirely sure that's a solution in this case.

Yeah, that was a hack on my part to avoid having to generate results twice.

I think it's a good idea to tell users that they need to run this command in order for tests to pass, but I think we should put that in the "Running Tests" section, not here.

def available_notebooks() -> list[Path]:
    """Locate the available notebooks in the docs."""
    if not Path("Results/MCACapacity.csv").exists():
        return []
Collaborator

Is this saying that it will run no notebook tests if that path doesn't exist? I don't think we want that

Collaborator

Sorry, I missed this file. Yes, we do not want this. All tests should run.

@dalonsoa (Collaborator) left a comment

All tests should run; we should not skip them if a file is missing. If it is missing, we need to figure out why it is missing and put it in there.

@HarmonicReflux (Collaborator Author)

Well, for "Results/MCACapacity.csv" to be visible, one has to create it first, and the simplest way of creating it is by running a model, say the default model via muse --model default. Hence, I included the invocation of this command in the documentation. If users then run the test suite, the failure will not occur.

As @dalonsoa pointed out, if my suggestion is not the one to go for, I can think of either:
i) including the file in the codebase (still feels a bit hacky to me);
ii) running the default model from the setup file so that the results are created during python -m pip install -e .[dev,doc];
iii) any other suggestions?

@dalonsoa (Collaborator)

I think the main issue here is:

  1. to figure out which notebook needs the Results folder in the root folder,
  2. then take steps so that the data it looks for is within the docs directory - like any of the other notebooks - and put the data in the right place,
  3. and yes, committing the data as well, as is done with the other notebooks that need data, like those of the tutorials.

What makes no sense is to have a notebook in the docs requiring data in the root directory.

There's a discussion going on about how to avoid committing data to the repository, but that's a longer-term discussion and we do not have an answer for it yet.

@dalonsoa (Collaborator)

And remember to update the branch and, ideally, add a PR description - not essential, but it serves as a reference for the future.

@alexdewar (Collaborator)

Did you want me to review this @HarmonicReflux?

@HarmonicReflux requested a review from alexdewar on June 28, 2024
@HarmonicReflux (Collaborator Author)

> Did you want me to review this @HarmonicReflux?

Yes, please take a look, so we are all on the same page.
Thanks.

@alexdewar (Collaborator) left a comment

As the others have said, we don't want to disable tests when the files aren't there. While that will make the tests pass, that'll just mean that developers who haven't run the default model first won't run these tests at all, so they won't know if their code is broken until they open a PR.

I think the right solution is to just put an instruction about this in the documentation for now. We could potentially do something cleverer, like automatically running the default model in conftest.py if the files are missing, but we shouldn't do that on this PR. I'd maybe open an issue called "default model must be run for tests to pass" and we can have a think about it. (We don't want to go back to committing the contents of Results/ to the repo, because we've only just got rid of them!)
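
A minimal sketch of that conftest.py idea, assuming the muse CLI is available on PATH and that the notebook tests look for Results/MCACapacity.csv relative to the working directory (this is not what the PR implements, just the suggestion above made concrete):

```python
import subprocess
from pathlib import Path

import pytest


@pytest.fixture(scope="session", autouse=True)
def ensure_default_model_results():
    """Run the default model once per test session if its results are missing."""
    if not Path("Results/MCACapacity.csv").exists():
        # Assumes the `muse` entry point is installed; the target directory may
        # need adjusting if the notebooks expect the results under docs/ instead.
        subprocess.run(["muse", "--model", "default"], check=True)
```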

As @dalonsoa has said, you want a PR description so reviewers know what has been changed and why. In this case, it would have been good to have a heads-up that you'd disabled some of the tests! If @tsmbland hadn't clocked it then it could have broken things down the line.

Revoked the changes that excluded the test when the input file is missing, as per the pull request discussion.

Updated the manual to instruct users to run the default MUSE model to complete the installation. This ensures all tests pass (some may be skipped) and avoids including model results files in the codebase.
@HarmonicReflux (Collaborator Author)

I appreciate the reviewers' comments.

@alexdewar (Collaborator) left a comment

I still think the text about having to run muse --model default should be in the "Running Tests" section. See comment

…l before the test scripts.

Running a model is now mentioned twice: once in the installation instructions and once before running the tests.

This ensures readers who do not follow the guide in sequence will understand the requirement and avoid confusion.
@HarmonicReflux (Collaborator Author)

@alexdewar please take a look at this reworked pull request.

@tsmbland (Collaborator) commented Jul 2, 2024

I'm still a bit unsure about this, because we specifically need a results folder in the docs folder, so running muse --model default in the root folder won't be enough. The link that Alex created in the docs workflow will only exist if the user runs the documentation build locally (right?) which we're not asking people to do before running the tests, so I'm not sure how this fixes the problem. Am I missing something?

@HarmonicReflux (Collaborator Author)

Running a model will create a "Results" folder in the directory where MUSE is installed. That Results folder then contains the output, including the file that the notebook test needs access to in order to run successfully. To my knowledge, building the documentation is a separate process unrelated to the tests themselves.

@tsmbland (Collaborator) commented Jul 2, 2024

I get that, but the notebook in question (running-muse-example.ipynb) points to a Results folder contained in the docs folder, not the root folder
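
A rough sketch of the mismatch being described, assuming the repository root is the current working directory (the exact relative path used inside the notebook is an assumption here):

```python
from pathlib import Path

# Where `muse --model default`, run from the repository root, writes its output:
root_results = Path("Results") / "MCACapacity.csv"

# Roughly where running-muse-example.ipynb expects to find the file:
docs_results = Path("docs") / "Results" / "MCACapacity.csv"

# The two can disagree, which is why the notebook test can still fail.
print(root_results.exists(), docs_results.exists())
```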

HarmonicReflux and others added 3 commits July 8, 2024 15:11
@HarmonicReflux self-assigned this on Jul 8, 2024
@HarmonicReflux (Collaborator Author)

@dalonsoa and @tsmbland, please review this pull request.

@alexdewar is out of office until later this week.

@tsmbland (Collaborator) left a comment

Looks good! Just a small change needed, which I didn't spot before.

@HarmonicReflux requested a review from tsmbland on July 8, 2024
HarmonicReflux and others added 3 commits July 8, 2024 16:31
Co-authored-by: Tom Bland <t.bland@imperial.ac.uk>
Co-authored-by: Tom Bland <t.bland@imperial.ac.uk>
Co-authored-by: Tom Bland <t.bland@imperial.ac.uk>
@dalonsoa (Collaborator) left a comment

The latest modification is very neat! Let's hope it works in all the instances we are interested in.

@alexdewar self-requested a review on July 11, 2024
@alexdewar (Collaborator) left a comment

LGTM!

@HarmonicReflux merged commit bca773a into develop on Jul 12, 2024
@HarmonicReflux deleted the tests_fail_when_running_the_full_testing_suite branch on July 12, 2024