diff --git a/README.rst b/README.rst
index c40e12168a07..20b01c1dcf6a 100644
--- a/README.rst
+++ b/README.rst
@@ -38,7 +38,7 @@ See `this video `_ for a quick walk-through.
 
 **Requirements**
 
 1) Python 3.6 or 3.7
-2) Pytorch 1.2 with GPU support
+2) PyTorch 1.2 with GPU support
 3) NVIDIA APEX. Install from here: https://github.com/NVIDIA/apex
 
@@ -81,7 +81,7 @@ instead of
 
 .. code-block:: bash
 
-    # Install the ASR collection from collections/nemo_asr 
+    # Install the ASR collection from collections/nemo_asr
     apt-get install libsndfile1
     cd collections/nemo_asr
     pip install .
diff --git a/docs/sources/source/index.rst b/docs/sources/source/index.rst
index adbd2de5e996..1f2a70f2df64 100644
--- a/docs/sources/source/index.rst
+++ b/docs/sources/source/index.rst
@@ -22,7 +22,7 @@ A "Neural Module" is a block of code that computes a set of outputs from a set of inputs.
 
 Neural Modules’ inputs and outputs have Neural Type for semantic checking.
 
-An application built with NeMo application is a Directed Acyclic Graph(DAG) of connected modules enabling researchers to define and build new speech and nlp networks easily through API Compatible modules.
+An application built with NeMo is a Directed Acyclic Graph (DAG) of connected modules, enabling researchers to define and build new speech and NLP networks easily through API-compatible modules.
 
 **Introduction**
 
@@ -49,14 +49,10 @@ See this video for a walk-through.
 
 **Requirements**
 
 1) Python 3.6 or 3.7
-2) Pytorch 1.2 with GPU support
+2) PyTorch 1.2 with GPU support
 3) NVIDIA APEX: https://github.com/NVIDIA/apex
 
-**Documentation**
-TBD
-
-
 **Getting started**
 
 If desired, you can start with `NGC PyTorch container `_ which already includes
diff --git a/examples/asr/ASR-Jasper-Tutorial.ipynb b/examples/asr/ASR-Jasper-Tutorial.ipynb
index 0c860fe129ff..8fd58f70aadb 100644
--- a/examples/asr/ASR-Jasper-Tutorial.ipynb
+++ b/examples/asr/ASR-Jasper-Tutorial.ipynb
@@ -354,7 +354,7 @@
    "metadata": {},
    "source": [
     "## Mixed Precision Training\n",
-    "Mixed precision and distributed training in NeMo is based on NVIDIA’s APEX library. This is installed with NVIDIA's NGC Pytorch container with an example of updating in the example Dockerfile.\n",
+    "Mixed precision and distributed training in NeMo are based on NVIDIA’s APEX library. It is preinstalled in NVIDIA's NGC PyTorch container; the example Dockerfile shows how to update it.\n",
     "\n",
     "> **Note** - _Because mixed precision requires Tensor Cores it\n",
     "> only works on NVIDIA Volta and Turing based GPUs._\n",
diff --git a/examples/start_here/README.md b/examples/start_here/README.md
index 0f82fbcf3997..bc218e494179 100644
--- a/examples/start_here/README.md
+++ b/examples/start_here/README.md
@@ -4,7 +4,7 @@ Just learns simple function `y=sin(x)`.
 Simply run from `examples/start_here` folder.
 
 # ChatBot Example
-This is an adaptation of [Pytorch Chatbot tutorial](https://pytorch.org/tutorials/beginner/chatbot_tutorial.html)
+This is an adaptation of the [PyTorch Chatbot tutorial](https://pytorch.org/tutorials/beginner/chatbot_tutorial.html).
 Simply run from `examples/start_here` folder.
 
 During training it will print **SOURCE**, **PREDICTED RESPONSE** and **TARGET**.
@@ -43,4 +43,4 @@ outputs, hidden = decoder(targets=tgt, max_target_len=max_tgt_length)
 ...
 ```
 
-Simply run from `examples/start_here` folder.
\ No newline at end of file
+Simply run from `examples/start_here` folder.
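
The index.rst hunk above describes a NeMo application as a DAG of connected modules whose inputs and outputs carry Neural Types for semantic checking. A minimal toy sketch of that idea follows; this is **not** NeMo's actual API, and every class and method name below (`NeuralType`, `NeuralModule`, `connect`, the example modules) is a hypothetical illustration of the concept:

```python
class NeuralType:
    """A toy semantic type attached to a module's input or output port."""

    def __init__(self, name):
        self.name = name

    def compatible_with(self, other):
        # Hypothetical rule: two Neural Types match when their names match.
        return self.name == other.name


class NeuralModule:
    """A block of code that computes a set of outputs from a set of inputs."""

    input_types = {}   # port name -> NeuralType
    output_types = {}  # port name -> NeuralType

    def connect(self, out_port, other, in_port):
        # Semantic check happens at wiring time, before anything runs.
        src = self.output_types[out_port]
        dst = other.input_types[in_port]
        if not src.compatible_with(dst):
            raise TypeError(f"{src.name} -> {dst.name}: incompatible Neural Types")
        return (self, out_port, other, in_port)  # one edge of the DAG


class AudioToSpectrogram(NeuralModule):
    output_types = {"spec": NeuralType("spectrogram")}


class JasperEncoder(NeuralModule):
    input_types = {"spec": NeuralType("spectrogram")}
    output_types = {"encoded": NeuralType("encoded")}


preprocessor = AudioToSpectrogram()
encoder = JasperEncoder()
edge = preprocessor.connect("spec", encoder, "spec")  # type check passes
```

Wiring a mismatched pair (say, a text input port into `preprocessor`'s spectrogram output) would raise `TypeError` at graph-construction time, which is the "semantic checking" benefit the document attributes to Neural Types.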