
Conversation


wyli commented Apr 11, 2022

Fixes #4105 (drop the support of pytorch 1.6.x).

  • drop the tests for torch 1.6.x
  • remove the `pytorch_after(1, 7)` version checks (an illustrative sketch of such a guard follows this list)
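For illustration, the removed guards follow this pattern (a hypothetical simplified example, not an actual diff hunk from this PR; `torch.maximum` was introduced in torch 1.7):

```python
import torch

from monai.utils import pytorch_after

# before: torch.maximum only exists from torch 1.7, so the call was guarded
def elementwise_max_compat(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    if pytorch_after(1, 7):
        return torch.maximum(a, b)
    return torch.where(a > b, a, b)  # torch 1.6.x fallback

# after: with torch >= 1.7 as the minimum supported version, the guard collapses
def elementwise_max(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    return torch.maximum(a, b)
```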

Status

Ready

Types of changes

  • Non-breaking change (fix or new feature that would not break existing functionality).
  • Breaking change (fix or new feature that would cause existing functionality to change).
  • New tests added to cover the changes.
  • Integration tests passed locally by running ./runtests.sh -f -u --net --coverage.
  • Quick tests passed locally by running ./runtests.sh --quick --unittests --disttests.
  • In-line docstrings updated.
  • Documentation updated, tested make html command in the docs/ folder.

wyli added 6 commits April 11, 2022 17:33
wyli changed the title from "4105 pt16" to "4105 drops pt16 support" Apr 11, 2022
wyli added 2 commits April 11, 2022 18:08

rijobro commented Apr 11, 2022

I think there's also some stuff in utils_pytorch_numpy_unification.py that can be removed if we want.
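As a sketch of that simplification (an assumed shape of the helper, not the exact MONAI code): once torch >= 1.7 is guaranteed, the tensor branch can call `torch.quantile` directly instead of falling back to numpy:

```python
from typing import Union

import numpy as np
import torch

NdarrayOrTensor = Union[np.ndarray, torch.Tensor]

def percentile(x: NdarrayOrTensor, q: float) -> NdarrayOrTensor:
    """Return the q-th percentile (q in [0, 100]) of the flattened input."""
    if isinstance(x, np.ndarray):
        return np.asarray(np.percentile(x, q))
    # torch.quantile exists on every supported torch version (>= 1.7),
    # so the hasattr / pytorch_after fallback can be dropped
    return torch.quantile(x, q / 100.0)
```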


rijobro commented Apr 11, 2022

also two instances of `hasattr(torch, "quantile")` in test_utils_pytorch_numpy_unification.py.
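Those guards are unittest skip conditions of roughly this form (illustrative; the exact test code may differ). After this PR they can simply be deleted, because `torch.quantile` exists on every supported torch version:

```python
import unittest

import torch

class TestQuantile(unittest.TestCase):
    # before: skipped on torch 1.6.x, where torch.quantile does not exist;
    # with torch >= 1.7 required, the decorator below is dead code
    @unittest.skipUnless(hasattr(torch, "quantile"), "requires torch.quantile")
    def test_median(self):
        x = torch.arange(11, dtype=torch.float32)
        self.assertAlmostEqual(torch.quantile(x, 0.5).item(), 5.0)

if __name__ == "__main__":
    unittest.main()
```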


wyli commented Apr 11, 2022

> I think there's also some stuff in utils_pytorch_numpy_unification.py that can be removed if we want.

thanks, the module becomes simpler now.

wyli marked this pull request as ready for review April 12, 2022 08:49

wyli commented Apr 12, 2022

/build

Nic-Ma left a comment


Thanks for the quick update.

wyli merged commit 9c0a538 into dev Apr 12, 2022
wyli deleted the 4105-pt16 branch April 12, 2022 10:08
Can-Zhao added a commit to Can-Zhao/MONAI that referenced this pull request May 10, 2022
Add padding to filter to ensure same size after anti-aliasing

Use replicate padding instead of zero padding to avoid artifacts at non-zero boundaries

Reuse GaussianSmooth

4073 Enhance DynUNet doc-strings (Project-MONAI#4102)

* Fix doc strings error

Signed-off-by: Yiheng Wang <vennw@nvidia.com>

* remove duplicate places

Signed-off-by: Yiheng Wang <vennw@nvidia.com>

4105 drops pt16 support (Project-MONAI#4106)

* update sys req

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

* temp test

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

* update code for torch>=1.7

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

* temp tests

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

* fixes tests

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

* autofix

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

* fixes import

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

* clear cache

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

* update based on comments

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

* remove temp cmd

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

Make `pixelshuffle` scriptable (Project-MONAI#4109)

* Update the existing functionality to comply with `torch.jit.script`.

Signed-off-by: Ramon Emiliani <ramon@afxmedical.com>
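For context, making a function scriptable mostly means annotating argument types and replacing constructs `torch.jit.script` cannot compile (e.g. `zip`/`sum` tricks over index tuples) with plain loops. A minimal N-D pixel-shuffle sketch in that style (not the exact MONAI implementation):

```python
import torch

def pixelshuffle(x: torch.Tensor, spatial_dims: int, scale_factor: int) -> torch.Tensor:
    """Rearrange (B, C*f^d, s1, ..., sd) into (B, C, s1*f, ..., sd*f)."""
    dim, factor = spatial_dims, scale_factor
    input_size = x.shape
    scale_divisor = 1
    for _ in range(dim):
        scale_divisor *= factor
    if input_size[1] % scale_divisor != 0:
        raise ValueError("channels must be divisible by scale_factor ** spatial_dims")
    org_channels = input_size[1] // scale_divisor

    # reshape to (B, C, f, ..., f, s1, ..., sd)
    new_shape = [input_size[0], org_channels]
    for _ in range(dim):
        new_shape.append(factor)
    for i in range(dim):
        new_shape.append(input_size[2 + i])
    x = x.reshape(new_shape)

    # interleave each spatial axis with its factor axis: (B, C, s1, f, s2, f, ...)
    permute = [0, 1]
    for i in range(dim):
        permute.append(2 + dim + i)  # spatial axis
        permute.append(2 + i)        # matching factor axis
    x = x.permute(permute)

    out_shape = [input_size[0], org_channels]
    for i in range(dim):
        out_shape.append(input_size[2 + i] * factor)
    return x.reshape(out_shape)

scripted = torch.jit.script(pixelshuffle)  # compiles: only loops, ints, and int lists
assert scripted(torch.randn(1, 8, 4, 4), 2, 2).shape == (1, 2, 8, 8)
```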

meta tensor (Project-MONAI#4077)

* meta tensor

Signed-off-by: Richard Brown <33289025+rijobro@users.noreply.github.com>

4084 Add kwargs for `Tensor.to()` in engines (Project-MONAI#4112)

* [DLMED] add kwargs for to() API

Signed-off-by: Nic Ma <nma@nvidia.com>

* [MONAI] python code formatting

Signed-off-by: monai-bot <monai.miccai2019@gmail.com>

* [DLMED] fix typo

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] fix flake8

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] update according to comments

Signed-off-by: Nic Ma <nma@nvidia.com>

Co-authored-by: monai-bot <monai.miccai2019@gmail.com>

fixes pytorch version tests (Project-MONAI#4127)

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

update meta tensor api (Project-MONAI#4131)

* update meta tensor api

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

* update based on comments

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

runtests.sh isort (Project-MONAI#4134)

Signed-off-by: Richard Brown <33289025+rijobro@users.noreply.github.com>

update citation (Project-MONAI#4133)

Signed-off-by: Wenqi Li <wenqil@nvidia.com>

`ToMetaTensor` and `FromMetaTensor` transforms (Project-MONAI#4115)

to and from meta

no skip if before pytorch 1.7 (Project-MONAI#4139)

* no skip if before pytorch 1.7

Signed-off-by: Richard Brown <33289025+rijobro@users.noreply.github.com>

* fix

Signed-off-by: Richard Brown <33289025+rijobro@users.noreply.github.com>

* fix

Signed-off-by: Richard Brown <33289025+rijobro@users.noreply.github.com>

[DLMED] fix file name in meta (Project-MONAI#4145)

Signed-off-by: Nic Ma <nma@nvidia.com>

4116 Add support for advanced args of AMP (Project-MONAI#4132)

* [DLMED] fix typo in bundle scripts

Signed-off-by: Nic Ma <nma@nvidia.com>

* [DLMED] add support for AMP args

Signed-off-by: Nic Ma <nma@nvidia.com>

* [MONAI] python code formatting

Signed-off-by: monai-bot <monai.miccai2019@gmail.com>

* [DLMED] fix flake8

Signed-off-by: Nic Ma <nma@nvidia.com>

Co-authored-by: monai-bot <monai.miccai2019@gmail.com>

New wsireader (Project-MONAI#4147)

`MetaTensor`: collate; decollate; dataset; dataloader; out=; indexing and iterating across batches (Project-MONAI#4137)

