35 commits
f6f9e81
Use int16 instead of int8 in `LabelStats` (#7489)
KumoLiu Feb 23, 2024
20512d3
auto updates (#7495)
monai-bot Feb 26, 2024
7cfa2c9
Add sample_std parameter to RandGaussianNoise. (#7492)
bakert1 Feb 26, 2024
9830525
Add __repr__ and __str__ to Metrics baseclass (#7487)
MathijsdeBoer Feb 28, 2024
02c7f53
Bump al-cheb/configure-pagefile-action from 1.3 to 1.4 (#7510)
dependabot[bot] Mar 1, 2024
e9e2738
Add arm support (#7500)
KumoLiu Mar 3, 2024
95f69de
Fix error in "test_bundle_trt_export" (#7524)
KumoLiu Mar 10, 2024
6b7568d
Fix typo in the PerceptualNetworkType Enum (#7548)
SomeUserName1 Mar 15, 2024
ec63e06
Update to use `log_sigmoid` in `FocalLoss` (#7534)
KumoLiu Mar 18, 2024
35c93fd
Update integration_segmentation_3d result for PyTorch2403 (#7551)
KumoLiu Mar 22, 2024
c649934
Add Barlow Twins loss for representation learning (#7530)
Lucas-rbnt Mar 22, 2024
c3a7383
Stein's Unbiased Risk Estimator (SURE) loss and Conjugate Gradient (#…
cxlcl Mar 22, 2024
c86e790
auto updates (#7577)
monai-bot Mar 25, 2024
97678fa
Remove nested error propagation on `ConfigComponent` instantiate (#7569)
Mar 26, 2024
e5bebfc
2872 implementation of mixup, cutmix and cutout (#7198)
juampatronics Mar 26, 2024
2716b6a
Remove device count cache when import monai (#7581)
KumoLiu Mar 27, 2024
c9fed96
Fixing gradient in sincos positional encoding in monai/networks/block…
Lucas-rbnt Mar 27, 2024
ba3c72c
Fix inconsistent alpha parameter/docs for RandGibbsNoise/RandGibbsNoi…
johnzielke Mar 27, 2024
7c0b10e
Fix bundle_root for NNIGen (#7586)
mingxin-zheng Mar 27, 2024
2d463a7
Auto3DSeg algo_template hash update (#7589)
monai-bot Mar 27, 2024
15d2abf
Utilizing subprocess for nnUNet training. (#7576)
KumoLiu Apr 1, 2024
ec4d946
typo fix (#7595)
scalyvladimir Apr 1, 2024
a7c2589
auto updates (#7599)
monai-bot Apr 1, 2024
c885100
7540 change bundle workflow args (#7549)
yiheng-wang-nv Apr 1, 2024
264b9e4
Add "properties_path" in BundleWorkflow (#7542)
KumoLiu Apr 1, 2024
bbaaf4c
Auto3DSeg algo_template hash update (#7603)
monai-bot Apr 1, 2024
5ec7305
ENH: generate_label_classes_crop_centers: warn only if ratio of missi…
lorinczszabolcs Apr 2, 2024
763347d
Update base image to 2403 (#7600)
KumoLiu Apr 3, 2024
195d7dd
simplification of the sincos positional encoding in patchembedding.py…
Lucas-rbnt Apr 4, 2024
625967c
harmonization and clarification of dice losses variants docs and asso…
Lucas-rbnt Apr 5, 2024
c0b9cc0
Implementation of intensity clipping transform: bot hard and soft app…
Lucas-rbnt Apr 5, 2024
87152d1
Fix typo in `SSIMMetric` (#7612)
KumoLiu Apr 8, 2024
e9a5bfe
auto updates (#7614)
monai-bot Apr 10, 2024
54a6991
Fix test error in `test_soft_clipping_one_sided_high` (#7624)
KumoLiu Apr 11, 2024
ac70074
flip refactor for geometry
atbenmurray Apr 11, 2024
2 changes: 1 addition & 1 deletion .github/workflows/conda.yml
@@ -26,7 +26,7 @@ jobs:
steps:
- if: runner.os == 'windows'
name: Config pagefile (Windows only)
-uses: al-cheb/configure-pagefile-action@v1.3
+uses: al-cheb/configure-pagefile-action@v1.4
with:
minimum-size: 8GB
maximum-size: 16GB
16 changes: 8 additions & 8 deletions .github/workflows/cron.yml
@@ -19,18 +19,18 @@ jobs:
- "PTLATEST+CUDA121"
include:
# https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes
- environment: PT191+CUDA113
pytorch: "torch==1.9.1 torchvision==0.10.1 --extra-index-url https://download.pytorch.org/whl/cu113"
base: "nvcr.io/nvidia/pytorch:21.06-py3" # CUDA 11.3
- environment: PT110+CUDA113
pytorch: "torch==1.10.2 torchvision==0.11.3 --extra-index-url https://download.pytorch.org/whl/cu113"
base: "nvcr.io/nvidia/pytorch:21.06-py3" # CUDA 11.3
- environment: PT113+CUDA113
pytorch: "torch==1.13.1 torchvision==0.14.1 --extra-index-url https://download.pytorch.org/whl/cu113"
base: "nvcr.io/nvidia/pytorch:21.06-py3" # CUDA 11.3
- environment: PTLATEST+CUDA121
pytorch: "-U torch torchvision --extra-index-url https://download.pytorch.org/whl/cu118"
- environment: PT113+CUDA122
pytorch: "torch==1.13.1 torchvision==0.14.1 --extra-index-url https://download.pytorch.org/whl/cu121"
base: "nvcr.io/nvidia/pytorch:23.08-py3" # CUDA 12.2
- environment: PTLATEST+CUDA124
pytorch: "-U torch torchvision --extra-index-url https://download.pytorch.org/whl/cu121"
base: "nvcr.io/nvidia/pytorch:24.03-py3" # CUDA 12.4
container:
image: ${{ matrix.base }}
options: "--gpus all"
@@ -76,7 +76,7 @@ jobs:
if: github.repository == 'Project-MONAI/MONAI'
strategy:
matrix:
-container: ["pytorch:22.10", "pytorch:23.08"]
+container: ["pytorch:23.08", "pytorch:24.03"]
container:
image: nvcr.io/nvidia/${{ matrix.container }}-py3 # testing with the latest pytorch base image
options: "--gpus all"
@@ -121,7 +121,7 @@ jobs:
if: github.repository == 'Project-MONAI/MONAI'
strategy:
matrix:
-container: ["pytorch:23.08"]
+container: ["pytorch:24.03"]
container:
image: nvcr.io/nvidia/${{ matrix.container }}-py3 # testing with the latest pytorch base image
options: "--gpus all"
@@ -221,7 +221,7 @@ jobs:
if: github.repository == 'Project-MONAI/MONAI'
needs: cron-gpu # so that monai itself is verified first
container:
-image: nvcr.io/nvidia/pytorch:23.08-py3 # testing with the latest pytorch base image
+image: nvcr.io/nvidia/pytorch:24.03-py3 # testing with the latest pytorch base image
options: "--gpus all --ipc=host"
runs-on: [self-hosted, linux, x64, integration]
steps:
8 changes: 4 additions & 4 deletions .github/workflows/pythonapp-gpu.yml
@@ -29,10 +29,6 @@ jobs:
- "PT210+CUDA121DOCKER"
include:
# https://docs.nvidia.com/deeplearning/frameworks/pytorch-release-notes
-- environment: PT19+CUDA114DOCKER
-# 21.10: 1.10.0a0+0aef44c
-pytorch: "-h" # we explicitly set pytorch to -h to avoid pip install error
-base: "nvcr.io/nvidia/pytorch:21.10-py3"
- environment: PT110+CUDA111
pytorch: "torch==1.10.2 torchvision==0.11.3 --extra-index-url https://download.pytorch.org/whl/cu111"
base: "nvcr.io/nvidia/cuda:11.1.1-devel-ubuntu18.04"
@@ -47,6 +43,10 @@
# 23.08: 2.1.0a0+29c30b1
pytorch: "-h" # we explicitly set pytorch to -h to avoid pip install error
base: "nvcr.io/nvidia/pytorch:23.08-py3"
+- environment: PT210+CUDA121DOCKER
+# 24.03: 2.3.0a0+40ec155e58.nv24.3
+pytorch: "-h" # we explicitly set pytorch to -h to avoid pip install error
+base: "nvcr.io/nvidia/pytorch:24.03-py3"
container:
image: ${{ matrix.base }}
options: --gpus all --env NVIDIA_DISABLE_REQUIRE=true # workaround for unsatisfied condition: cuda>=11.6
2 changes: 1 addition & 1 deletion .github/workflows/pythonapp.yml
@@ -62,7 +62,7 @@ jobs:
steps:
- if: runner.os == 'windows'
name: Config pagefile (Windows only)
-uses: al-cheb/configure-pagefile-action@v1.3
+uses: al-cheb/configure-pagefile-action@v1.4
with:
minimum-size: 8GB
maximum-size: 16GB
6 changes: 5 additions & 1 deletion Dockerfile
@@ -11,11 +11,15 @@

# To build with a different base image
# please run `docker build` using the `--build-arg PYTORCH_IMAGE=...` flag.
-ARG PYTORCH_IMAGE=nvcr.io/nvidia/pytorch:23.08-py3
+ARG PYTORCH_IMAGE=nvcr.io/nvidia/pytorch:24.03-py3
FROM ${PYTORCH_IMAGE}

LABEL maintainer="monai.contact@gmail.com"

+# TODO: remark for issue [revise the dockerfile](https://github.com/zarr-developers/numcodecs/issues/431)
+WORKDIR /opt
+RUN git clone --recursive https://github.com/zarr-developers/numcodecs.git && pip wheel numcodecs

WORKDIR /opt/monai

# install full deps
10 changes: 10 additions & 0 deletions docs/source/losses.rst
@@ -73,6 +73,11 @@ Segmentation Losses
.. autoclass:: ContrastiveLoss
:members:

`BarlowTwinsLoss`
~~~~~~~~~~~~~~~~~
.. autoclass:: BarlowTwinsLoss
:members:

`HausdorffDTLoss`
~~~~~~~~~~~~~~~~~
.. autoclass:: HausdorffDTLoss
@@ -134,6 +139,11 @@ Reconstruction Losses
.. autoclass:: JukeboxLoss
:members:

`SURELoss`
~~~~~~~~~~
.. autoclass:: SURELoss
:members:


Loss Wrappers
-------------
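For orientation on what the newly documented `BarlowTwinsLoss` (#7530) computes: the Barlow Twins objective drives the cross-correlation matrix of two embedding views toward the identity, so matched features correlate while redundant features decorrelate. A minimal NumPy sketch of the math — illustrative only, not MONAI's implementation; the function name and the `lambd` default are assumptions:

```python
import numpy as np

def barlow_twins_loss(z1, z2, lambd=5e-3):
    """Barlow Twins objective: push the cross-correlation matrix of two
    embedding views (n samples x d features) toward the identity."""
    n, d = z1.shape
    # standardize each feature over the batch
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    c = z1.T @ z2 / n  # (d, d) cross-correlation matrix
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()          # invariance term
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # redundancy term
    return on_diag + lambd * off_diag
```

Identical views give a near-zero loss, while anti-correlated views are penalized heavily through the invariance term.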
5 changes: 5 additions & 0 deletions docs/source/networks.rst
@@ -408,6 +408,11 @@ Layers
.. autoclass:: LLTM
:members:

`ConjugateGradient`
~~~~~~~~~~~~~~~~~~~
.. autoclass:: ConjugateGradient
:members:

`Utilities`
~~~~~~~~~~~
.. automodule:: monai.networks.layers.convutils
54 changes: 54 additions & 0 deletions docs/source/transforms.rst
@@ -309,6 +309,12 @@ Intensity
:members:
:special-members: __call__

`ClipIntensityPercentiles`
""""""""""""""""""""""""""
.. autoclass:: ClipIntensityPercentiles
:members:
:special-members: __call__
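The new `ClipIntensityPercentiles` transform clips intensities to a percentile range rather than to fixed values (the PR also adds a soft mode that clips smoothly instead of hard-thresholding). A rough NumPy sketch of the hard-clipping behaviour — function name and defaults here are illustrative, not MONAI's API:

```python
import numpy as np

def clip_intensity_percentiles(img, lower=5, upper=95):
    """Hard-clip image intensities to the [lower, upper] percentile range.
    Pass None for either bound to clip one-sided."""
    lo = np.percentile(img, lower) if lower is not None else None
    hi = np.percentile(img, upper) if upper is not None else None
    return np.clip(img, lo, hi)
```

Percentile-based bounds adapt to each image's histogram, which is why this is a common normalization step for CT/MR intensities with outlier voxels.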

`RandScaleIntensity`
""""""""""""""""""""
.. image:: https://raw.githubusercontent.com/Project-MONAI/DocImages/main/transforms/RandScaleIntensity.png
@@ -661,6 +667,27 @@ Post-processing
:members:
:special-members: __call__

Regularization
^^^^^^^^^^^^^^

`CutMix`
""""""""
.. autoclass:: CutMix
:members:
:special-members: __call__

`CutOut`
""""""""
.. autoclass:: CutOut
:members:
:special-members: __call__

`MixUp`
"""""""
.. autoclass:: MixUp
:members:
:special-members: __call__
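Of the new regularization transforms (#7198), MixUp blends each sample (and its label) with a randomly chosen partner using a Beta-distributed weight; CutMix and CutOut do the analogous thing with rectangular patches. A conceptual NumPy sketch of mixup — not MONAI's implementation; names and shapes here are assumptions:

```python
import numpy as np

def mixup(batch, labels, alpha=1.0, rng=None):
    """Blend each sample with a shuffled partner: x' = w*x + (1-w)*x_perm,
    with w ~ Beta(alpha, alpha), and mix the (one-hot) labels the same way."""
    if rng is None:
        rng = np.random.default_rng()
    n = batch.shape[0]
    perm = rng.permutation(n)
    w = rng.beta(alpha, alpha, size=n)  # one mixing weight per sample
    wx = w.reshape(n, *([1] * (batch.ndim - 1)))  # broadcast over image dims
    mixed = wx * batch + (1 - wx) * batch[perm]
    mixed_labels = w[:, None] * labels + (1 - w)[:, None] * labels[perm]
    return mixed, mixed_labels
```

Because the label mixture uses the same weights as the image mixture, each mixed label row remains a valid probability distribution.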

Signal
^^^^^^^

@@ -1384,6 +1411,12 @@ Intensity (Dict)
:members:
:special-members: __call__

`ClipIntensityPercentilesd`
"""""""""""""""""""""""""""
.. autoclass:: ClipIntensityPercentilesd
:members:
:special-members: __call__

`RandScaleIntensityd`
"""""""""""""""""""""
.. image:: https://raw.githubusercontent.com/Project-MONAI/DocImages/main/transforms/RandScaleIntensityd.png
@@ -1707,6 +1740,27 @@ Post-processing (Dict)
:members:
:special-members: __call__

Regularization (Dict)
^^^^^^^^^^^^^^^^^^^^^

`CutMixd`
"""""""""
.. autoclass:: CutMixd
:members:
:special-members: __call__

`CutOutd`
"""""""""
.. autoclass:: CutOutd
:members:
:special-members: __call__

`MixUpd`
""""""""
.. autoclass:: MixUpd
:members:
:special-members: __call__

Signal (Dict)
^^^^^^^^^^^^^

10 changes: 10 additions & 0 deletions docs/source/transforms_idx.rst
@@ -74,6 +74,16 @@ Post-processing
post.array
post.dictionary

Regularization
^^^^^^^^^^^^^^

.. autosummary::
:toctree: _gen
:nosignatures:

regularization.array
regularization.dictionary

Signal
^^^^^^

5 changes: 5 additions & 0 deletions monai/__init__.py
@@ -83,6 +83,11 @@
from .utils.tf32 import detect_default_tf32

detect_default_tf32()
import torch

# workaround related to https://github.com/Project-MONAI/MONAI/issues/7575
if hasattr(torch.cuda.device_count, "cache_clear"):
torch.cuda.device_count.cache_clear()
except BaseException:
from .utils.misc import MONAIEnvVars

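Context for the `monai/__init__.py` change above (#7581): recent PyTorch versions memoize `torch.cuda.device_count` with `functools.lru_cache`, so a device count probed during import can go stale, and the guarded `cache_clear()` call resets it. A self-contained illustration of the same pattern with a stand-in function:

```python
import functools

calls = []

@functools.lru_cache
def device_count():
    """Stand-in for torch.cuda.device_count, which recent torch
    versions memoize with functools.lru_cache."""
    calls.append(1)
    return 0  # e.g. probed before any GPU was visible

device_count()  # first call populates the cache
# the guarded workaround: clear only if the function is actually cached
if hasattr(device_count, "cache_clear"):
    device_count.cache_clear()
device_count()  # re-probes instead of returning the stale cached value
```

The `hasattr` guard keeps the workaround a no-op on older torch versions where `device_count` is a plain function.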
4 changes: 3 additions & 1 deletion monai/apps/auto3dseg/hpo_gen.py
@@ -193,7 +193,9 @@ def generate(self, output_folder: str = ".") -> None:
self.obj_filename = os.path.join(write_path, "algo_object.pkl")

if isinstance(self.algo, BundleAlgo):
self.algo.export_to_disk(output_folder, task_prefix + task_id, fill_with_datastats=False)
self.algo.export_to_disk(
output_folder, task_prefix + task_id, bundle_root=write_path, fill_with_datastats=False
)
else:
ConfigParser.export_config_file(self.params, write_path)
logger.info(write_path)
94 changes: 43 additions & 51 deletions monai/apps/nnunet/nnunetv2_runner.py
@@ -22,6 +22,7 @@
from monai.apps.nnunet.utils import analyze_data, create_new_data_copy, create_new_dataset_json
from monai.bundle import ConfigParser
from monai.utils import ensure_tuple, optional_import
from monai.utils.misc import run_cmd

load_pickle, _ = optional_import("batchgenerators.utilities.file_and_folder_operations", name="load_pickle")
join, _ = optional_import("batchgenerators.utilities.file_and_folder_operations", name="join")
@@ -495,65 +496,64 @@ def train_single_model(self, config: Any, fold: int, gpu_id: tuple | list | int
fold: fold of the 5-fold cross-validation. Should be an int between 0 and 4.
gpu_id: an integer to select the device to use, or a tuple/list of GPU device indices used for multi-GPU
training (e.g., (0,1)). Default: 0.
from nnunetv2.run.run_training import run_training
kwargs: this optional parameter allows you to specify additional arguments in
``nnunetv2.run.run_training.run_training``. Currently supported args are
- plans_identifier: custom plans identifier. Default: "nnUNetPlans".
- pretrained_weights: path to nnU-Net checkpoint file to be used as pretrained model. Will only be
used when actually training. Beta. Use with caution. Default: False.
- use_compressed_data: True to use compressed data for training. Reading compressed data is much
more CPU and (potentially) RAM intensive and should only be used if you know what you are
doing. Default: False.
- continue_training: continue training from latest checkpoint. Default: False.
- only_run_validation: True to run the validation only. Requires training to have finished.
Default: False.
- disable_checkpointing: True to disable checkpointing. Ideal for testing things out and you
don't want to flood your hard drive with checkpoints. Default: False.
``nnunetv2.run.run_training.run_training_entry``.

Currently supported args are:

- p: custom plans identifier. Default: "nnUNetPlans".
- pretrained_weights: path to nnU-Net checkpoint file to be used as pretrained model. Will only be
used when actually training. Beta. Use with caution. Default: False.
- use_compressed: True to use compressed data for training. Reading compressed data is much
more CPU and (potentially) RAM intensive and should only be used if you know what you are
doing. Default: False.
- c: continue training from latest checkpoint. Default: False.
- val: True to run the validation only. Requires training to have finished.
Default: False.
- disable_checkpointing: True to disable checkpointing. Ideal for testing things out and you
don't want to flood your hard drive with checkpoints. Default: False.
"""
if "num_gpus" in kwargs:
kwargs.pop("num_gpus")
logger.warning("please use gpu_id to set the GPUs to use")

if "trainer_class_name" in kwargs:
kwargs.pop("trainer_class_name")
if "tr" in kwargs:
kwargs.pop("tr")
logger.warning("please specify the `trainer_class_name` in the __init__ of `nnUNetV2Runner`.")

if "export_validation_probabilities" in kwargs:
kwargs.pop("export_validation_probabilities")
if "npz" in kwargs:
kwargs.pop("npz")
logger.warning("please specify the `export_validation_probabilities` in the __init__ of `nnUNetV2Runner`.")

cmd = self.train_single_model_command(config, fold, gpu_id, kwargs)
run_cmd(cmd, shell=True)

def train_single_model_command(self, config, fold, gpu_id, kwargs):
if isinstance(gpu_id, (tuple, list)):
if len(gpu_id) > 1:
gpu_ids_str = ""
for _i in range(len(gpu_id)):
gpu_ids_str += f"{gpu_id[_i]},"
os.environ["CUDA_VISIBLE_DEVICES"] = gpu_ids_str[:-1]
device_setting = f"CUDA_VISIBLE_DEVICES={gpu_ids_str[:-1]}"
else:
os.environ["CUDA_VISIBLE_DEVICES"] = f"{gpu_id[0]}"
else:
os.environ["CUDA_VISIBLE_DEVICES"] = f"{gpu_id}"

from nnunetv2.run.run_training import run_training

if isinstance(gpu_id, int) or len(gpu_id) == 1:
run_training(
dataset_name_or_id=self.dataset_name_or_id,
configuration=config,
fold=fold,
trainer_class_name=self.trainer_class_name,
export_validation_probabilities=self.export_validation_probabilities,
**kwargs,
)
device_setting = f"CUDA_VISIBLE_DEVICES={gpu_id[0]}"
else:
run_training(
dataset_name_or_id=self.dataset_name_or_id,
configuration=config,
fold=fold,
num_gpus=len(gpu_id),
trainer_class_name=self.trainer_class_name,
export_validation_probabilities=self.export_validation_probabilities,
**kwargs,
)
device_setting = f"CUDA_VISIBLE_DEVICES={gpu_id}"
num_gpus = 1 if isinstance(gpu_id, int) or len(gpu_id) == 1 else len(gpu_id)

cmd = (
f"{device_setting} nnUNetv2_train "
+ f"{self.dataset_name_or_id} {config} {fold} "
+ f"-tr {self.trainer_class_name} -num_gpus {num_gpus}"
)
if self.export_validation_probabilities:
cmd += " --npz"
for _key, _value in kwargs.items():
if _key == "p" or _key == "pretrained_weights":
cmd += f" -{_key} {_value}"
else:
cmd += f" --{_key} {_value}"
return cmd

def train(
self,
@@ -637,15 +637,7 @@ def train_parallel_cmd(
if _config in ensure_tuple(configs):
for _i in range(self.num_folds):
the_device = gpu_id_for_all[_index % n_devices] # type: ignore
cmd = (
"python -m monai.apps.nnunet nnUNetV2Runner train_single_model "
+ f"--input_config '{self.input_config_or_dict}' --work_dir '{self.work_dir}' "
+ f"--config '{_config}' --fold {_i} --gpu_id {the_device} "
+ f"--trainer_class_name {self.trainer_class_name} "
+ f"--export_validation_probabilities {self.export_validation_probabilities}"
)
for _key, _value in kwargs.items():
cmd += f" --{_key} {_value}"
cmd = self.train_single_model_command(_config, _i, the_device, kwargs)
all_cmds[-1][the_device].append(cmd)
_index += 1
return all_cmds
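The refactor above (#7576) replaces the in-process `run_training(...)` call with a shell command executed via `run_cmd`, so single-model and parallel training share one command builder. A simplified standalone sketch of the string assembly — the helper below is hypothetical; only the `nnUNetv2_train` flag names (`-tr`, `-num_gpus`, `--npz`, `-p`) follow the diff:

```python
def build_train_cmd(device_setting, dataset_id, config, fold,
                    trainer="nnUNetTrainer", num_gpus=1, npz=False, **kwargs):
    """Simplified version of train_single_model_command's string assembly."""
    cmd = (
        f"{device_setting} nnUNetv2_train "
        f"{dataset_id} {config} {fold} "
        f"-tr {trainer} -num_gpus {num_gpus}"
    )
    if npz:
        cmd += " --npz"  # export validation probabilities
    for key, value in kwargs.items():
        # as in the diff: p/pretrained_weights take a single-dash flag,
        # everything else is passed as a long option
        prefix = "-" if key in ("p", "pretrained_weights") else "--"
        cmd += f" {prefix}{key} {value}"
    return cmd

print(build_train_cmd("CUDA_VISIBLE_DEVICES=0", "009", "3d_fullres", 0))
```

Prefixing `CUDA_VISIBLE_DEVICES=...` into the command string (rather than mutating `os.environ` as the removed code did) keeps each subprocess's GPU assignment isolated, which is what makes the parallel-training path safe.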