diff --git a/2d_registration/registration_mednist.ipynb b/2d_registration/registration_mednist.ipynb
index 3732c8b343..c927e257c1 100644
--- a/2d_registration/registration_mednist.ipynb
+++ b/2d_registration/registration_mednist.ipynb
@@ -34,15 +34,21 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "metadata": {
-    "id": "4OTG9ShCUQtS"
-   },
+   "metadata": {},
    "outputs": [],
    "source": [
-    "!BUILD_MONAI=1 pip install git+https://github.com/Project-MONAI/MONAI#egg=monai[all]\n",
     "%env BUILD_MONAI=1"
    ]
   },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "!python -c \"import monai\" || pip install -q git+https://github.com/Project-MONAI/MONAI#egg=monai[all]"
+   ]
+  },
   {
    "cell_type": "markdown",
    "metadata": {
diff --git a/deployment/bentoml/mednist_classifier_bentoml.ipynb b/deployment/bentoml/mednist_classifier_bentoml.ipynb
index fb7b408371..f143426afd 100644
--- a/deployment/bentoml/mednist_classifier_bentoml.ipynb
+++ b/deployment/bentoml/mednist_classifier_bentoml.ipynb
@@ -22,7 +22,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "!python -c \"import monai\" || pip install -q \"monai[pillow, tqdm]\"\n",
+    "!python -c \"import monai\" || pip install -q \"monai-weekly[pillow, tqdm]\"\n",
     "!python -c \"import bentoml\" || pip install -q bentoml"
    ]
   },
@@ -635,7 +635,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.8.10"
+   "version": "3.7.10"
   }
  },
  "nbformat": 4,
diff --git a/modules/TorchIO_MONAI_PyTorch_Lightning.ipynb b/modules/TorchIO_MONAI_PyTorch_Lightning.ipynb
index 4cc7e8f1df..4d6f97515e 100644
--- a/modules/TorchIO_MONAI_PyTorch_Lightning.ipynb
+++ b/modules/TorchIO_MONAI_PyTorch_Lightning.ipynb
@@ -51,6 +51,15 @@
     "Training curves will be logged and visualized using TensorFlow's [TensorBoard](https://www.tensorflow.org/tensorboard). For quantitative results, we will use [Pandas](https://pandas.pydata.org/) and [Seaborn](https://seaborn.pydata.org/). Finally, we will perform a qualitative evaluation using TorchIO and [Matplotlib](https://matplotlib.org/stable/index.html)."
    ]
   },
+  {
+   "cell_type": "code",
+   "execution_count": null,
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "!python -c \"import monai\" || pip install -q \"monai-weekly\""
+   ]
+  },
   {
    "cell_type": "code",
    "execution_count": null,
@@ -61,7 +70,6 @@
    "source": [
     "%%bash\n",
     "pip install -q torchio==0.18.39\n",
-    "pip install -q monai==0.6.0\n",
     "pip install -q pytorch-lightning==1.2.10\n",
     "pip install -q pandas==1.1.5 seaborn==0.11.1"
    ]
diff --git a/modules/UNet_input_size_constrains.ipynb b/modules/UNet_input_size_constrains.ipynb
index ffc3c24e51..21868bd2b0 100644
--- a/modules/UNet_input_size_constrains.ipynb
+++ b/modules/UNet_input_size_constrains.ipynb
@@ -2,7 +2,7 @@
  "cells": [
   {
    "cell_type": "markdown",
-   "id": "8c3dd079",
+   "id": "0aed74fd",
    "metadata": {},
    "source": [
     "# UNet input size constrains\n",
@@ -16,8 +16,18 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 1,
-   "id": "b5a2f858",
+   "execution_count": 19,
+   "id": "efcd04b9",
+   "metadata": {},
+   "outputs": [],
+   "source": [
+    "!python -c \"import monai\" || pip install -q monai-weekly"
+   ]
+  },
+  {
+   "cell_type": "code",
+   "execution_count": 20,
+   "id": "86ee1f12",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -41,7 +51,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "8d22e42c",
+   "id": "2f64140c",
    "metadata": {},
    "source": [
     "## Check UNet structure"
@@ -49,7 +59,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "b247210e",
+   "id": "30f9f2f7",
    "metadata": {},
    "source": [
     "The following comes from: [Left-Ventricle Quantification Using Residual U-Net](https://link.springer.com/chapter/10.1007/978-3-030-12029-0_40).\n",
@@ -61,8 +71,8 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 2,
-   "id": "30cfabdf",
+   "execution_count": 21,
+   "id": "fd05bcb4",
    "metadata": {},
    "outputs": [
     {
@@ -122,14 +132,14 @@
        ")"
       ]
      },
-     "execution_count": 2,
+     "execution_count": 21,
      "metadata": {},
      "output_type": "execute_result"
     }
    ],
    "source": [
     "network_0 = UNet(\n",
-    "    dimensions=3,\n",
+    "    spatial_dims=3,\n",
     "    in_channels=3,\n",
     "    out_channels=3,\n",
     "    channels=(8, 16, 32),\n",
@@ -145,7 +155,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "5e9d977d",
+   "id": "9437ea49",
    "metadata": {},
    "source": [
     "As we can see from the printed structure, the network is consisted with three parts:\n",
@@ -174,7 +184,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "57dcf10b",
+   "id": "bded0633",
    "metadata": {},
    "source": [
     "## Constrains of convolution layers"
@@ -182,7 +192,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "f66a3782",
+   "id": "c1f19415",
    "metadata": {},
    "source": [
     "### Conv layer"
@@ -190,7 +200,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "2e280ec3",
+   "id": "072ed303",
    "metadata": {},
    "source": [
     "The formula in Pytorch's official docs explains how to calculate the output size for [Conv3d](https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html#torch.nn.Conv3d), and [ConvTranspose3d](https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose3d.html#torch.nn.ConvTranspose3d) (the formulas for `1d` and `2d` are similar)."
@@ -198,7 +208,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "8234ce45",
+   "id": "4cef3f58",
    "metadata": {},
    "source": [
     "As the docs shown, the output size depends on the input size and:\n",
@@ -217,8 +227,8 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 3,
-   "id": "d4c0a713",
+   "execution_count": 22,
+   "id": "37f7e0e6",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -233,7 +243,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "f6813383",
+   "id": "ba1c88b5",
    "metadata": {},
    "source": [
     "Let's check if the function is correct:"
@@ -241,8 +251,8 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 6,
-   "id": "02f18d9c",
+   "execution_count": 23,
+   "id": "0d5b0d70",
    "metadata": {},
    "outputs": [
     {
@@ -261,8 +271,8 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 7,
-   "id": "f76990e2",
+   "execution_count": 24,
+   "id": "3b1b4388",
    "metadata": {},
    "outputs": [
     {
@@ -280,7 +290,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "c19a0097",
+   "id": "a47f741e",
    "metadata": {},
    "source": [
     "### ConvTranspose layer"
@@ -288,7 +298,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "809fb277",
+   "id": "b458f329",
    "metadata": {},
    "source": [
     "Similarly, due to the default settings in [monai.networks.blocks.convolutions.Convolution](https://github.com/Project-MONAI/MONAI/blob/dev/monai/networks/blocks/convolutions.py), `output_padding = stride - 1`. The output size of `ConvTranspose` can be simplified as:\n",
@@ -299,8 +309,8 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 8,
-   "id": "51ab4709",
+   "execution_count": 25,
+   "id": "caece123",
    "metadata": {},
    "outputs": [
     {
@@ -318,8 +328,8 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 9,
-   "id": "c57e0edb",
+   "execution_count": 26,
+   "id": "f67804d2",
    "metadata": {},
    "outputs": [
     {
@@ -344,7 +354,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "1b39fe7a",
+   "id": "391b93e6",
    "metadata": {},
    "source": [
     "## Constrains of normalization layers"
@@ -352,8 +362,8 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 10,
-   "id": "4fd4b113",
+   "execution_count": 27,
+   "id": "dc4be9d5",
    "metadata": {},
    "outputs": [
     {
@@ -362,7 +372,7 @@
        "dict_keys(['INSTANCE', 'BATCH', 'GROUP', 'LAYER', 'LOCALRESPONSE', 'SYNCBATCH'])"
       ]
      },
-     "execution_count": 10,
+     "execution_count": 27,
      "metadata": {},
      "output_type": "execute_result"
     }
@@ -373,7 +383,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "39673302",
+   "id": "9e47a8ef",
    "metadata": {},
    "source": [
     "In MONAI's norm factories, There are six normalization layers can be used. The official docs can be found in [here](https://pytorch.org/docs/stable/nn.html#normalization-layers), and their constrains is shown in [torch.nn.functional](https://pytorch.org/docs/stable/_modules/torch/nn/functional.html).\n",
@@ -388,7 +398,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "c52d258b",
+   "id": "b611a564",
    "metadata": {},
    "source": [
     "### batch normalization\n",
@@ -398,8 +408,8 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 11,
-   "id": "a2d592db",
+   "execution_count": 28,
+   "id": "732f2769",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -413,7 +423,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "950a5dc0",
+   "id": "07347476",
    "metadata": {},
    "source": [
     "In reality, when batch size is 1, it's not practical to use batch normalizaton. Therefore, the constrain can be converted to **the batch size should be larger than 1**."
@@ -421,7 +431,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "bcfddf5c",
+   "id": "73c2b29f",
    "metadata": {},
    "source": [
     "### instance normalization\n",
@@ -431,8 +441,8 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 12,
-   "id": "2493c5eb",
+   "execution_count": 29,
+   "id": "0a33cc15",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -446,7 +456,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "8cdffcc8",
+   "id": "78f05589",
    "metadata": {},
    "source": [
     "### local response normalization\n",
@@ -456,17 +466,17 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 13,
-   "id": "9f75aae9",
+   "execution_count": 30,
+   "id": "23d75904",
    "metadata": {},
    "outputs": [
     {
      "data": {
       "text/plain": [
-       "tensor([[[[[-1.6043]]]]])"
+       "tensor([[[[[-0.7587]]]]])"
       ]
      },
-     "execution_count": 13,
+     "execution_count": 30,
      "metadata": {},
      "output_type": "execute_result"
     }
@@ -477,7 +487,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "32bf77fa",
+   "id": "bd830ec6",
    "metadata": {},
    "source": [
     "## Constrains of SkipConnection"
@@ -485,7 +495,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "d9c5ba20",
+   "id": "37f2aff4",
    "metadata": {},
    "source": [
     "In this section, we will check if the module [SkipConnection](https://github.com/Project-MONAI/MONAI/blob/dev/monai/networks/layers/simplelayers.py) itself has more constrains for the input size.\n",
@@ -503,7 +513,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "dd7169bc",
+   "id": "ba8b9380",
    "metadata": {},
    "source": [
     "### When `len(channels) = 2` "
@@ -511,7 +521,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "00f0e62a",
+   "id": "cdd7033e",
    "metadata": {},
    "source": [
     "If `len(channels) = 2`, there will only have one `SkipConnection` module in the network, and the module is built by a single down layer with `stride = 1`. From the formulas we achieved in the previous section, we know that this layer will not change the size, thus we only need to meet the constrains from the inside normalization layer:\n",
@@ -523,7 +533,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "d612f3e8",
+   "id": "e391f534",
    "metadata": {},
    "source": [
     "### When `len(channels) > 2` "
@@ -531,7 +541,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "6525c523",
+   "id": "2efce3e2",
    "metadata": {},
    "source": [
     "If `len(channels) > 2`, more `SkipConnection` module will be built and each of the module is consisted with one down layer and one up layer. Consequently, **the output of the up layer should has the same spatial sizes as the input before entering into the down layer**. The corresponding stride values for these modules are coming from `strides[1:]`, hence for each stride value `s` from `strides[1:]`, for each spatial size value `v` of the input, the constrain of the corresponding `SkipConnection` module is:\n",
@@ -552,7 +562,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "92f073f3",
+   "id": "ae3eb93c",
    "metadata": {},
    "source": [
     "For the whole `SkipConnection` module, assume `[H, W, D]` is the input spatial size, then for `v in [H, W, D]`:\n",
@@ -568,7 +578,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "76f3ca4b",
+   "id": "8e2d99ef",
    "metadata": {},
    "source": [
     "## Constrains of UNet"
@@ -576,7 +586,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "ce0e5fb9",
+   "id": "554744bc",
    "metadata": {},
    "source": [
     "As the first section discussed, UNet is consisted with 1) a down layer, 2) one or mode skip connection module(s) and 3) an up layer. Based on the analyses for each single layer/module, the constrains of the network can be summarized as follow."
@@ -584,7 +594,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "82db338e",
+   "id": "d7ae8cd7",
    "metadata": {},
    "source": [
     "### When `len(channels) = 2`"
@@ -592,7 +602,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "e89169e8",
+   "id": "8cd1d3b5",
    "metadata": {},
    "source": [
     "If `len(channels) == 2`, `strides` must be a single value, thus assume `s = strides`, and the input size is `[B, C, H, W, D]`. The constrains are:\n",
@@ -606,8 +616,8 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 30,
-   "id": "b21cb92e",
+   "execution_count": 31,
+   "id": "1bdc5c8e",
    "metadata": {},
    "outputs": [
     {
@@ -641,8 +651,8 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 31,
-   "id": "ae44c19a",
+   "execution_count": 32,
+   "id": "7485a83a",
    "metadata": {},
    "outputs": [
     {
@@ -672,8 +682,8 @@
   },
   {
    "cell_type": "code",
-   "execution_count": 32,
-   "id": "296ef4d5",
+   "execution_count": 33,
+   "id": "6a31861f",
    "metadata": {},
    "outputs": [
     {
@@ -707,7 +717,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "a864e162",
+   "id": "ba6057e8",
    "metadata": {},
    "source": [
     "### When `len(channels) > 2`"
@@ -715,7 +725,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "5e50c659",
+   "id": "c804fa49",
    "metadata": {},
    "source": [
     "Assume the input size is `[B, C, H, W, D]`, and `s = strides`. The common constrains are:\n",
@@ -741,7 +751,7 @@
   {
    "cell_type": "code",
    "execution_count": 34,
-   "id": "2a8e3019",
+   "id": "d234f140",
    "metadata": {},
    "outputs": [
     {
@@ -777,7 +787,7 @@
   {
    "cell_type": "code",
    "execution_count": 35,
-   "id": "a5a2060d",
+   "id": "ccf53aa1",
    "metadata": {},
    "outputs": [
     {
@@ -792,7 +802,7 @@
     "# example 2: strides=(3, 2, 4), localresponse.\n",
     "# thus math.floor((v + 2) / 3) should be 8 * k. If k = 1, v should be in [22, 23, 24].\n",
     "network = UNet(\n",
-    "    dimensions=3,\n",
+    "    spatial_dims=3,\n",
     "    in_channels=1,\n",
     "    out_channels=3,\n",
     "    channels=(8, 16, 32, 16),\n",
@@ -813,7 +823,7 @@
   {
    "cell_type": "code",
    "execution_count": 36,
-   "id": "8fa2ed2d",
+   "id": "da6e9277",
    "metadata": {},
    "outputs": [
     {
@@ -829,7 +839,7 @@
     "# thus v should be 12 * k. If k = 1, v should be 12. In addition, the maximum size should be 24 * k.\n",
     "\n",
     "network = UNet(\n",
-    "    dimensions=3,\n",
+    "    spatial_dims=3,\n",
     "    in_channels=1,\n",
     "    out_channels=3,\n",
     "    channels=(8, 16, 32, 32, 16),\n",
@@ -864,7 +874,7 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-   "version": "3.8.10"
+   "version": "3.7.10"
   }
  },
  "nbformat": 4,