Merged
20 changes: 20 additions & 0 deletions .github/workflows/cleanup.yml
@@ -0,0 +1,20 @@
name: cleanup-workflow

on:
workflow_run:
workflows:
- "build"
types: ["requested"]

jobs:
cancel-duplicated-workflow:
name: "Cancel duplicated workflow"
runs-on: ubuntu-latest
steps:
- uses: potiuk/cancel-workflow-runs@953e057dc81d3458935a18d1184c386b0f6b5738 # tested
name: "Cancel duplicate workflows"
with:
cancelMode: allDuplicates
token: ${{ secrets.GITHUB_TOKEN }}
sourceRunId: ${{ github.event.workflow_run.id }}
skipEventTypes: '["schedule"]'
11 changes: 8 additions & 3 deletions .github/workflows/weekly-preview.yml
@@ -6,6 +6,7 @@ on:

jobs:
packaging:
if: github.repository == 'Project-MONAI/MONAI'
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
@@ -24,12 +24,16 @@ jobs:
sed -i 's/name\ =\ monai$/name\ =\ monai-weekly/g' setup.cfg
echo "__commit_id__ = \"$HEAD_COMMIT_ID\"" >> monai/__init__.py
git diff setup.cfg monai/__init__.py
# build tar.gz and wheel
git config user.name "CI Builder"
git config user.email "monai.miccai2019@gmail.com"
git config user.email "monai.contact@gmail.com"
git add setup.cfg monai/__init__.py
git commit -m "Weekly build at $HEAD_COMMIT_ID"
git tag 0.5.dev$(date +'%y%U')
export YEAR_WEEK=$(date +'%y%U')
echo "Year week for tag is ${YEAR_WEEK}"
if ! [[ $YEAR_WEEK =~ ^[0-9]{4}$ ]] ; then echo "Wrong 'year week' format. Should be 4 digits."; exit 1 ; fi
git tag "0.5.dev${YEAR_WEEK}"
git log -1
git tag --list
python setup.py sdist bdist_wheel

- name: Publish to PyPI
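The tagging logic added above derives a four-digit "year week" from `date +'%y%U'` (two-digit year followed by the Sunday-based week number) and fails the build if the format is unexpected. A minimal Python sketch of the same scheme (the function name `weekly_tag` is illustrative, not part of the workflow):

```python
# Sketch of the weekly tag naming used by the workflow above:
# "0.5.dev" + two-digit year + two-digit week number, mirroring
# the shell expression `date +'%y%U'` plus its 4-digit sanity check.
import datetime
import re


def weekly_tag(today: datetime.date) -> str:
    year_week = today.strftime("%y%U")  # e.g. year 2021, week 03 -> "2103"
    if not re.fullmatch(r"[0-9]{4}", year_week):
        raise ValueError("Wrong 'year week' format. Should be 4 digits.")
    return f"0.5.dev{year_week}"


print(weekly_tag(datetime.date(2021, 1, 20)))  # → 0.5.dev2103
```

Because `%U` counts weeks from the first Sunday of the year, days before that Sunday fall into week `00`, which still satisfies the 4-digit check.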
76 changes: 76 additions & 0 deletions CODE_OF_CONDUCT.md
@@ -0,0 +1,76 @@
# Contributor Covenant Code of Conduct

## Our Pledge

In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, sex characteristics, gender identity and expression,
level of experience, education, socio-economic status, nationality, personal
appearance, race, religion, or sexual identity and orientation.

## Our Standards

Examples of behavior that contributes to creating a positive environment
include:

* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members

Examples of unacceptable behavior by participants include:

* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting

## Our Responsibilities

Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.

Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.

## Scope

This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.

## Enforcement

Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at monai.contact@gmail.com. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.

Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.

## Attribution

This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html

[homepage]: https://www.contributor-covenant.org

For answers to common questions about this code of conduct, see
https://www.contributor-covenant.org/faq
2 changes: 1 addition & 1 deletion Dockerfile
@@ -13,7 +13,7 @@ ARG PYTORCH_IMAGE=nvcr.io/nvidia/pytorch:20.10-py3

FROM ${PYTORCH_IMAGE}

LABEL maintainer="monai.miccai2019@gmail.com"
LABEL maintainer="monai.contact@gmail.com"

WORKDIR /opt/monai

16 changes: 3 additions & 13 deletions README.md
@@ -30,23 +30,13 @@ Its ambitions are:

## Installation

### Installing [the current release](https://pypi.org/project/monai/):
```bash
pip install monai
```
To install [the current release](https://pypi.org/project/monai/), you can simply run:

### Installing the master branch from the source code repository:
```bash
pip install git+https://github.com/Project-MONAI/MONAI#egg=MONAI
pip install monai
```

### Using the pre-built Docker image [DockerHub](https://hub.docker.com/r/projectmonai/monai):
```bash
# with docker v19.03+
docker run --gpus all --rm -ti --ipc=host projectmonai/monai:latest
```

For more details, please refer to [the installation guide](https://docs.monai.io/en/latest/installation.html).
For other installation methods (using the master branch, using Docker, etc.), please refer to [the installation guide](https://docs.monai.io/en/latest/installation.html).

## Getting Started

13 changes: 13 additions & 0 deletions docs/source/handlers.rst
@@ -16,12 +16,25 @@ Model checkpoint saver
.. autoclass:: CheckpointSaver
:members:


Metrics saver
-------------
.. autoclass:: MetricsSaver
:members:


CSV saver
---------
.. autoclass:: ClassificationSaver
:members:


Iteration Metric
----------------
.. autoclass:: IterationMetric
:members:


Mean Dice metrics handler
-------------------------
.. autoclass:: MeanDice
21 changes: 14 additions & 7 deletions docs/source/installation.md
@@ -1,17 +1,24 @@
# Installation guide

## Table of Contents
1. [From PyPI](#from-pypi)
   1. [Milestone release](#milestone-release)
   2. [Weekly preview release](#weekly-preview-release)
2. [From GitHub](#from-github)
   1. [System-wide](#system-wide)
   2. [Editable](#editable)
3. [Validating the install](#validating-the-install)
4. [MONAI version string](#monai-version-string)
5. [From DockerHub](#from-dockerhub)
6. [Installing the recommended dependencies](#installing-the-recommended-dependencies)

---

MONAI's core functionality is written in Python 3 (>= 3.6) and only requires [NumPy](https://numpy.org/) and [PyTorch](https://pytorch.org/).

The package is currently distributed via GitHub as the primary source code repository
and the Python Package Index (PyPI). Pre-built Docker images are made available on DockerHub.

This page provides steps to:
- [Install MONAI from PyPI](#from-pypi)
- [Install MONAI from GitHub](#from-github)
- [Validate the install](#validating-the-install)
- [Understand the MONAI version string](#monai-version-string)
- [Run MONAI from DockerHub](#from-dockerhub)

To install optional features such as handling NIfTI files using
[Nibabel](https://nipy.org/nibabel/), or building workflows using [PyTorch
Ignite](https://pytorch.org/ignite/), please follow the instructions:
2 changes: 1 addition & 1 deletion monai/config/__init__.py
@@ -18,4 +18,4 @@
print_gpu_info,
print_system_info,
)
from .type_definitions import IndexSelection, KeysCollection
from .type_definitions import DtypeLike, IndexSelection, KeysCollection, NdarrayTensor
26 changes: 16 additions & 10 deletions monai/config/deviceconfig.py
@@ -162,7 +162,7 @@ def get_system_info() -> OrderedDict:
_dict_append(
output,
"Avg. sensor temp. (Celsius)",
lambda: round(
lambda: np.round(
np.mean([item.current for sublist in psutil.sensors_temperatures().values() for item in sublist]), 1
),
)
@@ -196,28 +196,34 @@ def get_gpu_info() -> OrderedDict:
_dict_append(output, "Num GPUs", lambda: num_gpus)

_dict_append(output, "Has CUDA", lambda: bool(torch.cuda.is_available()))

if output["Has CUDA"]:
_dict_append(output, "CUDA version", lambda: torch.version.cuda)
cudnn_ver = torch.backends.cudnn.version()
_dict_append(output, "cuDNN enabled", lambda: bool(cudnn_ver))

if cudnn_ver:
_dict_append(output, "cuDNN version", lambda: cudnn_ver)

if num_gpus > 0:
_dict_append(output, "Current device", torch.cuda.current_device)
if hasattr(torch.cuda, "get_arch_list"): # get_arch_list is new in torch 1.7.1
_dict_append(output, "Library compiled for CUDA architectures", torch.cuda.get_arch_list)

for gpu in range(num_gpus):
_dict_append(output, "Info for GPU", gpu)
gpu_info = torch.cuda.get_device_properties(gpu)
_dict_append(output, "\tName", lambda: gpu_info.name)
_dict_append(output, "\tIs integrated", lambda: bool(gpu_info.is_integrated))
_dict_append(output, "\tIs multi GPU board", lambda: bool(gpu_info.is_multi_gpu_board))
_dict_append(output, "\tMulti processor count", lambda: gpu_info.multi_processor_count)
_dict_append(output, "\tTotal memory (GB)", lambda: round(gpu_info.total_memory / 1024 ** 3, 1))
_dict_append(output, "\tCached memory (GB)", lambda: round(torch.cuda.memory_reserved(gpu) / 1024 ** 3, 1))
_dict_append(output, "\tAllocated memory (GB)", lambda: round(torch.cuda.memory_allocated(gpu) / 1024 ** 3, 1))
_dict_append(output, "\tCUDA capability (maj.min)", lambda: f"{gpu_info.major}.{gpu_info.minor}")
_dict_append(output, f"GPU {gpu} Name", lambda: gpu_info.name)
_dict_append(output, f"GPU {gpu} Is integrated", lambda: bool(gpu_info.is_integrated))
_dict_append(output, f"GPU {gpu} Is multi GPU board", lambda: bool(gpu_info.is_multi_gpu_board))
_dict_append(output, f"GPU {gpu} Multi processor count", lambda: gpu_info.multi_processor_count)
_dict_append(output, f"GPU {gpu} Total memory (GB)", lambda: round(gpu_info.total_memory / 1024 ** 3, 1))
_dict_append(
output, f"GPU {gpu} Cached memory (GB)", lambda: round(torch.cuda.memory_reserved(gpu) / 1024 ** 3, 1)
)
_dict_append(
output, f"GPU {gpu} Allocated memory (GB)", lambda: round(torch.cuda.memory_allocated(gpu) / 1024 ** 3, 1)
)
_dict_append(output, f"GPU {gpu} CUDA capability (maj.min)", lambda: f"{gpu_info.major}.{gpu_info.minor}")

return output

20 changes: 18 additions & 2 deletions monai/config/type_definitions.py
@@ -9,9 +9,12 @@
# See the License for the specific language governing permissions and
# limitations under the License.

from typing import Collection, Hashable, Iterable, Union
from typing import Collection, Hashable, Iterable, TypeVar, Union

__all__ = ["KeysCollection", "IndexSelection"]
import numpy as np
import torch

__all__ = ["KeysCollection", "IndexSelection", "DtypeLike", "NdarrayTensor"]

"""Commonly used concepts
This module provides naming and type specifications for commonly used concepts
@@ -51,3 +51,16 @@
The indices must be integers, and if a container of indices is specified, the
container must be iterable.
"""

DtypeLike = Union[
np.dtype,
type,
None,
]
"""Type of datatypes
adapted from https://github.com/numpy/numpy/blob/master/numpy/typing/_dtype_like.py
"""

# Generic type which can represent either a numpy.ndarray or a torch.Tensor
# Unlike Union can create a dependence between parameter(s) / return(s)
NdarrayTensor = TypeVar("NdarrayTensor", np.ndarray, torch.Tensor)
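The comment above notes that, unlike a `Union`, the `NdarrayTensor` `TypeVar` creates a dependence between parameters and return values: a function annotated with it returns the same type it was given. A minimal sketch (the function `scale` is illustrative, not part of MONAI):

```python
# Why `NdarrayTensor` is a TypeVar rather than Union[np.ndarray, torch.Tensor]:
# with a TypeVar, passing an ndarray is declared to return an ndarray, and
# passing a Tensor is declared to return a Tensor, so static checkers do not
# force callers to re-narrow the result.
from typing import TypeVar

import numpy as np
import torch

NdarrayTensor = TypeVar("NdarrayTensor", np.ndarray, torch.Tensor)


def scale(data: NdarrayTensor, factor: float = 2.0) -> NdarrayTensor:
    # Works for both backends because both support elementwise `*`.
    return data * factor


arr = scale(np.ones(3))      # inferred as np.ndarray
ten = scale(torch.ones(3))   # inferred as torch.Tensor
print(type(arr).__name__, type(ten).__name__)  # → ndarray Tensor
```

With a plain `Union` annotation, both the parameter and the return type would be the union, losing the input/output correspondence.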
2 changes: 1 addition & 1 deletion monai/data/csv_saver.py
@@ -75,7 +75,7 @@ def save(self, data: Union[torch.Tensor, np.ndarray], meta_data: Optional[Dict]
"""
save_key = meta_data["filename_or_obj"] if meta_data else str(self._data_index)
self._data_index += 1
if torch.is_tensor(data):
if isinstance(data, torch.Tensor):
data = data.detach().cpu().numpy()
if not isinstance(data, np.ndarray):
raise AssertionError
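The `csv_saver.py` change swaps `torch.is_tensor(data)` for `isinstance(data, torch.Tensor)`, which is equivalent at runtime but also lets static type checkers narrow `data` after the branch. A standalone sketch of the normalization pattern (the helper name `to_numpy` is illustrative):

```python
# Normalize a torch.Tensor or numpy.ndarray input to a numpy.ndarray,
# following the pattern in the diff: detach from the graph, move to CPU,
# then convert; anything else is rejected.
import numpy as np
import torch


def to_numpy(data):
    if isinstance(data, torch.Tensor):  # narrows the type for checkers
        data = data.detach().cpu().numpy()
    if not isinstance(data, np.ndarray):
        raise AssertionError("expected a torch.Tensor or numpy.ndarray")
    return data


print(to_numpy(torch.tensor([1.0, 2.0])).shape)  # → (2,)
```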
8 changes: 7 additions & 1 deletion monai/data/dataset.py
@@ -498,7 +498,13 @@ def _fill_cache(self) -> List:
warnings.warn("tqdm is not installed, will not show the caching progress bar.")
with ThreadPool(self.num_workers) as p:
if has_tqdm:
return list(tqdm(p.imap(self._load_cache_item, range(self.cache_num)), total=self.cache_num))
return list(
tqdm(
p.imap(self._load_cache_item, range(self.cache_num)),
total=self.cache_num,
desc="Loading dataset",
)
)
return list(p.imap(self._load_cache_item, range(self.cache_num)))

def _load_cache_item(self, idx: int):
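The `dataset.py` change above only adds a `desc="Loading dataset"` label to the tqdm bar wrapped around the thread-pool `imap`. A runnable sketch of the overall caching pattern, with `load_item` standing in for `_load_cache_item`:

```python
# Fill a cache by mapping a loader over indices with a thread pool,
# optionally wrapping the iterator in tqdm (with a `desc` label) when
# the package is available -- mirroring the guarded import in MONAI.
from multiprocessing.pool import ThreadPool

try:
    from tqdm import tqdm
    has_tqdm = True
except ImportError:
    has_tqdm = False


def load_item(idx: int) -> int:
    return idx * idx  # stand-in for loading one dataset item


def fill_cache(n: int, workers: int = 2):
    with ThreadPool(workers) as p:
        it = p.imap(load_item, range(n))  # imap preserves index order
        if has_tqdm:
            it = tqdm(it, total=n, desc="Loading dataset")
        return list(it)


print(fill_cache(4))  # → [0, 1, 4, 9]
```

`imap` (unlike `imap_unordered`) yields results in submission order, so the cache list stays aligned with the dataset indices.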
3 changes: 2 additions & 1 deletion monai/data/image_dataset.py
@@ -14,6 +14,7 @@
import numpy as np
from torch.utils.data import Dataset

from monai.config import DtypeLike
from monai.data.image_reader import ImageReader
from monai.transforms import LoadImage, Randomizable, apply_transform
from monai.utils import MAX_SEED, get_seed
@@ -36,7 +37,7 @@ def __init__(
transform: Optional[Callable] = None,
seg_transform: Optional[Callable] = None,
image_only: bool = True,
dtype: Optional[np.dtype] = np.float32,
dtype: DtypeLike = np.float32,
reader: Optional[Union[ImageReader, str]] = None,
*args,
**kwargs,
10 changes: 5 additions & 5 deletions monai/data/image_reader.py
@@ -16,7 +16,7 @@
import numpy as np
from torch.utils.data._utils.collate import np_str_obj_array_pattern

from monai.config import KeysCollection
from monai.config import DtypeLike, KeysCollection
from monai.data.utils import correct_nifti_header_if_necessary
from monai.utils import ensure_tuple, optional_import

@@ -244,7 +244,7 @@ def _get_affine(self, img) -> np.ndarray:
affine = np.eye(direction.shape[0] + 1)
affine[(slice(-1), slice(-1))] = direction @ np.diag(spacing)
affine[(slice(-1), -1)] = origin
return affine
return np.asarray(affine)

def _get_spatial_shape(self, img) -> np.ndarray:
"""
@@ -258,7 +258,7 @@ def _get_spatial_shape(self, img) -> np.ndarray:
shape.reverse()
return np.asarray(shape)

def _get_array_data(self, img) -> np.ndarray:
def _get_array_data(self, img):
"""
Get the raw array data of the image, converted to Numpy array.

@@ -295,7 +295,7 @@ class NibabelReader(ImageReader):

"""

def __init__(self, as_closest_canonical: bool = False, dtype: Optional[np.dtype] = np.float32, **kwargs):
def __init__(self, as_closest_canonical: bool = False, dtype: DtypeLike = np.float32, **kwargs):
super().__init__()
self.as_closest_canonical = as_closest_canonical
self.dtype = dtype
@@ -385,7 +385,7 @@ def _get_affine(self, img) -> np.ndarray:
img: a Nibabel image object loaded from a image file.

"""
return img.affine.copy()
return np.array(img.affine, copy=True)

def _get_spatial_shape(self, img) -> np.ndarray:
"""
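The `_get_affine` changes above wrap the result in `np.asarray` / `np.array(..., copy=True)` so the return type is a plain ndarray and the caller cannot mutate the reader's internal matrix. A sketch of how that affine is assembled from ITK-style direction, spacing, and origin (the numeric values below are made up for illustration):

```python
# Build a homogeneous affine: the rotation/scaling block is the direction
# cosine matrix scaled by voxel spacing, and the last column is the origin
# (translation), matching the slicing in `_get_affine` above.
import numpy as np

direction = np.eye(3)                    # identity orientation (illustrative)
spacing = np.array([1.0, 1.5, 2.0])      # voxel sizes along each axis
origin = np.array([10.0, -5.0, 0.0])     # physical position of voxel (0,0,0)

affine = np.eye(direction.shape[0] + 1)
affine[:-1, :-1] = direction @ np.diag(spacing)  # rotation * scaling
affine[:-1, -1] = origin                          # translation column
print(affine)
```

Returning `np.asarray(affine)` (or a copy of `img.affine`) keeps the public contract at `np.ndarray` even if a reader backend hands back a subclass or a shared buffer.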