diff --git a/README.md b/README.md
index e9facef64d..e08b1d07a8 100644
--- a/README.md
+++ b/README.md
@@ -19,7 +19,7 @@ Its ambitions are:
 ## Features
 
 > _The codebase is currently under active development._
-> _Please see [the technical highlights](https://docs.monai.io/en/latest/highlights.html) and [What's New in 0.6](https://docs.monai.io/en/latest/whatsnew_0_6.html) of the current milestone release._
+> _Please see [the technical highlights](https://docs.monai.io/en/latest/highlights.html) and [What's New](https://docs.monai.io/en/latest/whatsnew.html) of the current milestone release._
 
 - flexible pre-processing for multi-dimensional medical imaging data;
 - compositional & portable APIs for ease of integration in existing workflows;
diff --git a/docs/source/whatsnew_0_7.md b/docs/source/whatsnew_0_7.md
index 8d0f3947f7..5a0a82130d 100644
--- a/docs/source/whatsnew_0_7.md
+++ b/docs/source/whatsnew_0_7.md
@@ -21,8 +21,8 @@ a performance enhancement study.
 
 With the performance profiling and enhancements, several typical use cases were studied to
 improve the training efficiency. The following figure shows that fast
-training using MONAI can be 20 times faster than a regular baseline ([learn
-more](https://github.com/Project-MONAI/tutorials/blob/master/acceleration/fast_training_tutorial.ipynb)).
+training using MONAI can be `200` times faster than a regular baseline ([learn
+more](https://github.com/Project-MONAI/tutorials/blob/master/acceleration/fast_training_tutorial.ipynb)), and it's `20` times faster than the MONAI v0.6 fast training solution.
 ![fast_training](../images/fast_training.png)
 
 ## Major usability improvements in `monai.transforms` for NumPy/PyTorch inputs and backends
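
For context on the `monai.transforms` heading touched in the second hunk: as of the 0.6/0.7 releases, many transforms accept both NumPy arrays and PyTorch tensors and return the same type they were given. A minimal sketch of that behavior, not part of this diff; `ScaleIntensity` is just one representative transform and the shapes are arbitrary:

```python
import numpy as np
import torch
from monai.transforms import ScaleIntensity

# ScaleIntensity rescales intensities to [0, 1] by default and supports
# both NumPy and PyTorch backends, preserving the input type.
scaler = ScaleIntensity()

np_img = np.random.rand(1, 64, 64).astype(np.float32)  # channel-first NumPy image
pt_img = torch.rand(1, 64, 64)                          # channel-first PyTorch image

np_out = scaler(np_img)  # returns a NumPy array
pt_out = scaler(pt_img)  # returns a torch.Tensor

print(type(np_out), type(pt_out))
```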