Improve testing of examples #1146

@lrandersson

Description

Checklist

  • I added a descriptive title
  • I searched open requests and couldn't find a duplicate

What is the idea?

Currently, when we run tests_examples.py, we test each installer type via a for-loop; in most cases it is written as:

def test_something():
    for installer, install_dir in create_installer:
        ...

This means that, on platforms that support multiple installer types, a single test failure can prevent some installers from being tested at all. For example, on macOS we have pkg and sh, and on Windows we may eventually have both exe and msi; if the for-loop fails on the first iteration, the second installer type is never tested. Additionally, if it fails on the second iteration (or later), all of the test output from the previous installers is written to the log, which clutters the console output with unnecessary information. For Windows installers in particular, this makes test debugging extremely difficult because all of the NSIS output is written even when it was the MSI installer that failed.
Currently, on the feature branch where we have both msi and exe, the test job console output has to be opened in a separate tab as raw text because the browser cannot display all of the text.
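
As a minimal, self-contained illustration of this failure mode (not taken from tests_examples.py; build_and_check and the installer names are placeholders), a failure on the first installer type aborts the whole loop:

def build_and_check(installer_type):
    # Placeholder for the real build-and-verify steps.
    assert installer_type != "pkg", f"{installer_type} installer failed"

def test_all_installer_types_in_one_loop():
    # If "pkg" fails, the assertion aborts the whole test: "sh" is never built
    # or verified, and any output produced before the failure is mixed into
    # this single test's log.
    for installer_type in ("pkg", "sh"):
        build_and_check(installer_type)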

Why is this needed?

Improve the testing infrastructure and simplify debugging when a test fails. In detail:

  1. Tests are not blocked by early failures (i.e. if one installer type fails, the remaining types are still tested)
  2. Test output can be read more easily, which improves debugging
  3. Depending on the design choice, it could also make it possible to run tests for specific installer types locally.

What should happen?

Installer types should be tested without a loop: today, each test function loops over every installer type, which leads to the issues described above when a test fails.
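
One possible design is to parametrize over the installer types supported on the current platform, so that pytest collects one test item per installer type. This is only a sketch under that assumption: _supported_installer_types is a hypothetical helper and the per-platform lists are illustrative, not the actual constructor logic.

import sys

import pytest


def _supported_installer_types():
    # Hypothetical helper; the real list would come from the same platform
    # logic that create_installer uses today.
    if sys.platform == "darwin":
        return ["sh", "pkg"]
    if sys.platform == "win32":
        return ["exe", "msi"]
    return ["sh"]


@pytest.mark.parametrize("installer_type", _supported_installer_types())
def test_something(installer_type, tmp_path):
    # Build and verify a single installer type. Each type is collected as its
    # own test item, so a pkg failure no longer prevents the sh test from
    # running, and each type gets its own log.
    ...

With such a layout, pytest would report one item per installer type (e.g. test_something[pkg] and test_something[sh]), a single type could be selected locally with pytest -k pkg, and a failing msi test would no longer carry the NSIS output of the exe test in its log.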

Additional Context

No response

Metadata

Assignees

No one assigned

Labels

type::feature (request for a new feature or capability)
