
Conversation

@Lunderberg
Contributor

Previously, individual unit tests would call `tvm.contrib.nvcc.get_target_compute_version` and return early. This was repeated boilerplate in many tests, and it incorrectly reported a test as `PASSED` when the required infrastructure wasn't present.

This commit introduces `tvm.testing.requires_cuda_compute_version`, a decorator that checks the CUDA compute version and applies `pytest.mark.skipif`. If the required infrastructure isn't present, the test is reported as `SKIPPED`.

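For illustration, here is a minimal sketch of how such a decorator can be assembled from the pieces named above (`get_target_compute_version` plus `pytest.mark.skipif`). The argument convention (major, optional minor), the broad exception handling, and the skip message are assumptions made for the sketch, not a copy of the TVM implementation.

```python
import pytest
import tvm.contrib.nvcc


def requires_cuda_compute_version(major, minor=0):
    """Sketch: skip a test unless the local GPU reports at least
    the requested CUDA compute version."""
    min_version = (major, minor)
    try:
        # Returns a string such as "8.6"; assumed to raise when no
        # CUDA GPU (or toolchain) can be detected.
        version_str = tvm.contrib.nvcc.get_target_compute_version()
        compute_version = tuple(int(part) for part in version_str.split("."))
    except Exception:
        # No usable GPU: fall back to (0, 0) so the skipif condition fires.
        compute_version = (0, 0)

    return pytest.mark.skipif(
        compute_version < min_version,
        reason=f"Requires CUDA compute >= {min_version}, found {compute_version}",
    )
```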
@vinx13 vinx13 merged commit aded9d4 into apache:main Sep 16, 2022
@Lunderberg Lunderberg deleted the cuda_compute_decorator branch September 19, 2022 13:17
xinetzone pushed a commit to daobook/tvm that referenced this pull request Nov 25, 2022
[Testing] Add decorator tvm.testing.requires_cuda_compute_version (apache#12778)

* [Testing] Add decorator tvm.testing.requires_cuda_compute_version

* requires_cuda_compute_version skips test when no GPU is present
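
As a usage sketch, a test opts in with the decorator instead of returning early; the test name, body, and required compute version below are illustrative, and the `(major)`-only call form is an assumption based on the decorator's description.

```python
import tvm.testing


# Illustrative: require compute capability 8.0 or newer; on machines
# without a qualifying GPU the test is reported as SKIPPED, not PASSED.
@tvm.testing.requires_cuda_compute_version(8)
def test_feature_needing_sm80():
    ...  # body exercising features that need compute >= 8.0
```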