
establish testing (at least smoke) for the notebooks on the hub instance #10

@yarikoptic

Description

With #9 as the motivator, I think it is important for us to ensure that the notebooks we provide "work" on the hub.

  • we need a way to test notebooks on an existing instance of the hub, so we can test new/updated/beta versions of the hub
    • testing could start as just a smoke run of each notebook
    • I found this article which introduces nbmake
      • running pytest --nbmake dandi-notebooks/ now; will report the "results" below
    • to complete in a reasonable time we might want to run them in parallel (though perhaps not on a small instance)
  • we should be able to trigger "testing" of all notebooks for a specific type of hub instance (is it "scriptable", @satra, to start a specific instance and interact with that environment?)
  • similarly to https://github.com/dandi/dandi-api-webshots/ and https://github.com/datalad/datalad-extensions/ we could have a repository (e.g. example-notebooks-dashboard) containing a README.md with a table listing which notebooks pass or fail on each type of hub instance
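The dashboard idea above could be sketched roughly as follows. This is a minimal, hypothetical sketch assuming a results mapping of notebook → instance type → pass/fail; the function name, instance types, and example data are illustrative, not from any real run.

```python
# Sketch of generating the README.md status table for a hypothetical
# "example-notebooks-dashboard" repository. The results structure and
# the example notebooks/instance types below are assumptions.

def render_dashboard(results: dict) -> str:
    """Render {notebook: {instance_type: bool}} as a Markdown table."""
    instance_types = sorted({t for r in results.values() for t in r})
    lines = [
        "| Notebook | " + " | ".join(instance_types) + " |",
        "|---" * (len(instance_types) + 1) + "|",
    ]
    for nb in sorted(results):
        # True -> pass, False/missing -> FAIL, one cell per instance type
        cells = ["pass" if results[nb].get(t) else "FAIL" for t in instance_types]
        lines.append(f"| {nb} | " + " | ".join(cells) + " |")
    return "\n".join(lines)

# Illustrative data only, not real test results:
example = {
    "Untitled.ipynb": {"cpu": True, "gpu": True},
    "000004_demo_analysis.ipynb": {"cpu": False, "gpu": False},
}
print(render_dashboard(example))
```

A CI job on the dashboard repository could regenerate this table after each test run and commit the updated README.md, mirroring how dandi-api-webshots maintains its status page.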
Current state of things as of 2021-10-27:
jovyan@jupyter-yarikoptic:~$ pytest  --nbmake dandi-notebooks/
===================================== test session starts =====================================
platform linux -- Python 3.9.6, pytest-6.2.5, py-1.10.0, pluggy-1.0.0
rootdir: /home/jovyan
plugins: nbmake-0.10, anyio-3.3.0
collected 9 items                                                                             

dandi-notebooks/000004_demo_analysis.ipynb F                                            [ 11%]
dandi-notebooks/NWBWidget-demo.ipynb F                                                  [ 22%]
dandi-notebooks/Untitled.ipynb .                                                        [ 33%]
dandi-notebooks/000055/BruntonLab/peterson21/Fig_coarse_labels.ipynb F                  [ 44%]
dandi-notebooks/000055/BruntonLab/peterson21/Fig_pow_spectra.ipynb F                    [ 55%]
dandi-notebooks/000055/BruntonLab/peterson21/Table_coarse_labels.ipynb F                [ 66%]
dandi-notebooks/000055/BruntonLab/peterson21/Table_part_characteristics.ipynb F         [ 77%]
dandi-notebooks/000055/BruntonLab/peterson21/dashboard.ipynb .                          [ 88%]
dandi-notebooks/cosyne_2020_tutorial/NWB_tutorial_2019.ipynb .                          [100%]

Edit 1:

  • as @bendichter pointed out, some notebooks need an environment with NWB extension(s) installed, so we can't just run them all in the same environment. We might need to annotate notebooks, then group them and run each group in its specific environment
  • some notebooks run their own pip install -r requirements.txt, which alters the environment and thus makes the order of execution matter, etc. Not yet sure what to do about that, besides running each test in a fresh "clone" of the test environment
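The annotate-then-group step above could look something like this. It is a sketch only: the metadata key (`metadata["dandi"]["environment"]`) is a hypothetical annotation convention, not anything nbmake or Jupyter defines.

```python
# Sketch: group notebooks by a hypothetical per-notebook environment
# annotation stored in the notebook's JSON metadata, so that each group
# can be smoke-run (e.g. via `pytest --nbmake`) in a matching environment.

import json
from collections import defaultdict
from pathlib import Path


def group_by_environment(root: str, default: str = "base") -> dict:
    """Map environment name -> sorted list of notebook paths under `root`.

    Notebooks without the (assumed) metadata["dandi"]["environment"]
    annotation fall into the `default` group.
    """
    groups = defaultdict(list)
    for path in sorted(Path(root).rglob("*.ipynb")):
        nb = json.loads(path.read_text())
        env = nb.get("metadata", {}).get("dandi", {}).get("environment", default)
        groups[env].append(str(path))
    return dict(groups)
```

Each resulting group could then be passed as a separate `pytest --nbmake` invocation inside a fresh clone of the corresponding environment, which would also sidestep the requirements.txt ordering problem.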
