From 40d071e0a7af24772c8f5a9b4761021ed0b8f326 Mon Sep 17 00:00:00 2001
From: Rhys Goodall
Date: Wed, 23 Feb 2022 10:54:07 -0800
Subject: [PATCH] typos: fix more small grammar issues on install pages

---
 doc/install/build-conda.md         |  4 ++--
 doc/install/easy-install.md        |  2 +-
 doc/install/install-from-source.md | 16 ++++++++--------
 doc/install/install-gromacs.md     |  6 +++---
 doc/install/install-ipi.md         |  2 +-
 doc/install/install-lammps.md      | 10 +++++-----
 doc/install/install-tf.2.3.md      | 28 ++++++++++++++--------------
 7 files changed, 34 insertions(+), 34 deletions(-)

diff --git a/doc/install/build-conda.md b/doc/install/build-conda.md
index aae9c64a38..e69374d3de 100644
--- a/doc/install/build-conda.md
+++ b/doc/install/build-conda.md
@@ -1,6 +1,6 @@
 # Building conda packages
 
-One may want to keep both convenience and personalization of the DeePMD-kit. To achieve this goal, one can consider builing conda packages. We provide building scripts in [deepmd-kit-recipes organization](https://github.com/deepmd-kit-recipes/). These building tools are driven by [conda-build](https://github.com/conda/conda-build) and [conda-smithy](https://github.com/conda-forge/conda-smithy).
+One may want to keep both the convenience and the personalization of DeePMD-kit. To achieve this goal, one can consider building conda packages. We provide building scripts in the [deepmd-kit-recipes organization](https://github.com/deepmd-kit-recipes/). These building tools are driven by [conda-build](https://github.com/conda/conda-build) and [conda-smithy](https://github.com/conda-forge/conda-smithy).
 
 For example, if one wants to turn on `MPIIO` package in LAMMPS, go to [`lammps-dp-feedstock`](https://github.com/deepmd-kit-recipes/lammps-dp-feedstock/) repository and modify `recipe/build.sh`. `-D PKG_MPIIO=OFF` should be changed to `-D PKG_MPIIO=ON`. 
 Then go to the main directory and executing
@@ -8,7 +8,7 @@ For example, if one wants to turn on `MPIIO` package in LAMMPS, go to [`lammps-d
 ```sh
 ./build-locally.py
 ```
-This requires the Docker has been installed. After the building, the packages will be generated in `build_artifacts/linux-64` and `build_artifacts/noarch`, and then one can install then execuating
+This requires that Docker has been installed. After building, the packages will be generated in `build_artifacts/linux-64` and `build_artifacts/noarch`, and then one can install them by executing
 ```sh
 conda create -n deepmd lammps-dp -c file:///path/to/build_artifacts -c https://conda.deepmodeling.org -c nvidia
 ```
diff --git a/doc/install/easy-install.md b/doc/install/easy-install.md
index 55720b59e4..572bcc27e4 100644
--- a/doc/install/easy-install.md
+++ b/doc/install/easy-install.md
@@ -1,6 +1,6 @@
 # Easy install
 
-There various easy methods to install DeePMD-kit. Choose one that you prefer. If you want to build by yourself, jump to the next two sections.
+There are various easy methods to install DeePMD-kit. Choose one that you prefer. If you want to build by yourself, jump to the next two sections.
 
 After your easy installation, DeePMD-kit (`dp`) and LAMMPS (`lmp`) will be available to execute. You can try `dp -h` and `lmp -h` to see the help. `mpirun` is also available considering you may want to train models or run LAMMPS in parallel.
 
diff --git a/doc/install/install-from-source.md b/doc/install/install-from-source.md
index c9764e548c..f2ef32b539 100644
--- a/doc/install/install-from-source.md
+++ b/doc/install/install-from-source.md
@@ -22,7 +22,7 @@ First, check the python version on your machine
 python --version
 ```
 
-We follow the virtual environment approach to install the tensorflow's Python interface. The full instruction can be found on [the tensorflow's official website](https://www.tensorflow.org/install/pip). 
-Now we assume that the Python interface will be installed to virtual environment directory `$tensorflow_venv`
+We follow the virtual environment approach to install TensorFlow's Python interface. The full instruction can be found on the official [TensorFlow website](https://www.tensorflow.org/install/pip). 
+Now we assume that the Python interface will be installed to the virtual environment directory `$tensorflow_venv`
 ```bash
 virtualenv -p python3 $tensorflow_venv
 source $tensorflow_venv/bin/activate
@@ -41,8 +41,8 @@ If one has multiple python interpreters named like python3.x, it can be specifie
 ```bash
 virtualenv -p python3.7 $tensorflow_venv
 ```
-If one does not need the GPU support of deepmd-kit and is concerned about package size, the CPU-only version of tensorflow should be installed by
-```bash 
+If one does not need the GPU support of deepmd-kit and is concerned about package size, the CPU-only version of TensorFlow should be installed by
+```bash
 pip install --upgrade tensorflow-cpu
 ```
 To verify the installation, run
@@ -96,7 +96,7 @@ Valid subcommands:
 
 [Horovod](https://github.com/horovod/horovod) and [mpi4py](https://github.com/mpi4py/mpi4py) is used for parallel training. For better performance on GPU, please follow tuning steps in [Horovod on GPU](https://github.com/horovod/horovod/blob/master/docs/gpus.rst).
 ```bash
-# With GPU, prefer NCCL as communicator.
+# With GPU, prefer NCCL as a communicator.
 HOROVOD_WITHOUT_GLOO=1 HOROVOD_WITH_TENSORFLOW=1 HOROVOD_GPU_OPERATIONS=NCCL HOROVOD_NCCL_HOME=/path/to/nccl pip install horovod mpi4py
 ```
@@ -132,7 +132,7 @@ Available Tensor Operations:
 
 From version 2.0.1, Horovod and mpi4py with MPICH support is shipped with the installer.
 
-If you don't install horovod, DeePMD-kit will fallback to serial mode.
+If you don't install horovod, DeePMD-kit will fall back to serial mode.
 
 ## Install the C++ interface
@@ -148,11 +148,11 @@ gcc --version
 The C++ interface of DeePMD-kit was tested with compiler gcc >= 4.8. 
 It is noticed that the I-Pi support is only compiled with gcc >= 4.8.
 
-First the C++ interface of Tensorflow should be installed. It is noted that the version of Tensorflow should be in consistent with the python interface. You may follow [the instruction](install-tf.2.3.md) to install the corresponding C++ interface.
+First, the C++ interface of TensorFlow should be installed. Note that the version of TensorFlow should be consistent with the Python interface. You may follow [the instruction](install-tf.2.3.md) to install the corresponding C++ interface.
 
 ### Install the DeePMD-kit's C++ interface
 
-Now goto the source code directory of DeePMD-kit and make a build place.
+Now go to the source code directory of DeePMD-kit and create a build directory.
 ```bash
 cd $deepmd_source_dir/source
 mkdir build
@@ -177,7 +177,7 @@ One may add the following arguments to `cmake`:
 | -DLAMMPS_VERSION_NUMBER=<value> | Number | `20210929` | Only neccessary for LAMMPS built-in mode. The version number of LAMMPS (yyyymmdd). |
 | -DLAMMPS_SOURCE_ROOT=<value> | Path | - | Only neccessary for LAMMPS plugin mode. The path to the LAMMPS source code (later than 8Apr2021). If not assigned, the plugin mode will not be enabled. |
 
-If the cmake has executed successfully, then
+If cmake has executed successfully, run the following make commands to build the package:
 ```bash
 make -j4
 make install
diff --git a/doc/install/install-gromacs.md b/doc/install/install-gromacs.md
index 398ed8ccba..5df6da385c 100644
--- a/doc/install/install-gromacs.md
+++ b/doc/install/install-gromacs.md
@@ -5,14 +5,14 @@ Download source code of a supported gromacs version (2020.2) from https://manual
 export PATH=$PATH:$deepmd_kit_root/bin
 dp_gmx_patch -d $gromacs_root -v $version -p
 ```
 
-where `deepmd_kit_root` is the directory where the latest version of deepmd-kit is installed, and `gromacs_root` refers to source code directory of gromacs. 
-And `version` represents the version of gromacs, **only support 2020.2 now**. You may patch another version of gromacs but still setting `version` to `2020.2`. However, we cannot ensure that it works.
- 
+where `deepmd_kit_root` is the directory where the latest version of deepmd-kit is installed, and `gromacs_root` refers to the source code directory of gromacs. 
+And `version` represents the version of gromacs; **only 2020.2 is supported now**. If attempting to patch another version of gromacs, you will still need to set `version` to `2020.2`, as this is the only supported version; we cannot guarantee that patching other versions of gromacs will work.
+
 
 ## Compile GROMACS with deepmd-kit
 
-The C++ interface of `deepmd-kit 2.x` and `tensorflow 2.x` are required. And be aware that only deepmd-kit with **high precision** is supported now, since we cannot ensure single precision is enough for a GROMACS simulation. Here is a sample compile scipt:
+The C++ interfaces of `deepmd-kit 2.x` and `tensorflow 2.x` are required. Be aware that only deepmd-kit with **high precision** is supported now, since we cannot ensure single precision is enough for a GROMACS simulation. Here is a sample compile script:
 ```bash
 #!/bin/bash
 export CC=/usr/bin/gcc
diff --git a/doc/install/install-ipi.md b/doc/install/install-ipi.md
index 2317d299f4..9e29b3cc30 100644
--- a/doc/install/install-ipi.md
+++ b/doc/install/install-ipi.md
@@ -1,5 +1,5 @@
 # Install i-PI
-The i-PI works in a client-server model. The i-PI provides the server for integrating the replica positions of atoms, while the DeePMD-kit provides a client named `dp_ipi` that computes the interactions (including energy, force and virial). The server and client communicates via the Unix domain socket or the Internet socket. A full instruction of i-PI can be found [here](http://ipi-code.org/). The source code and a complete installation instructions of i-PI can be found [here](https://github.com/i-pi/i-pi). 
+i-PI works in a client-server model. i-PI provides the server for integrating the replica positions of atoms, while DeePMD-kit provides a client named `dp_ipi` that computes the interactions (including energy, force and virial). The server and client communicate via a Unix domain socket or an Internet socket. Full documentation for i-PI can be found [here](http://ipi-code.org/). The source code and a complete installation guide for i-PI can be found [here](https://github.com/i-pi/i-pi).
 To use i-PI with already existing drivers, install and update using Pip:
 ```bash
 pip install -U i-PI
diff --git a/doc/install/install-lammps.md b/doc/install/install-lammps.md
index d7c979ef39..52a9c30724 100644
--- a/doc/install/install-lammps.md
+++ b/doc/install/install-lammps.md
@@ -1,15 +1,15 @@
 # Install LAMMPS
 
-There are two ways to install LAMMPS: the built-in mode and the plugin mode. The built-in mode builds LAMMPS along with the DeePMD-kit and DeePMD-kit will be loaded automatically when running LAMMPS. The plugin mode builds LAMMPS and a plugin separately, so one need to use `plugin load` command to load the DeePMD-kit's LAMMPS plugin library.
+There are two ways to install LAMMPS: the built-in mode and the plugin mode. The built-in mode builds LAMMPS along with DeePMD-kit, and DeePMD-kit will be loaded automatically when running LAMMPS. The plugin mode builds LAMMPS and a plugin separately, so one needs to use the `plugin load` command to load the DeePMD-kit's LAMMPS plugin library.
 
 ## Install LAMMPS's DeePMD-kit module (built-in mode)
 
-DeePMD-kit provide module for running MD simulation with LAMMPS. Now make the DeePMD-kit module for LAMMPS.
+DeePMD-kit provides a module for running MD simulations with LAMMPS. Now make the DeePMD-kit module for LAMMPS.
 ```bash
 cd $deepmd_source_dir/source/build
 make lammps
 ```
-DeePMD-kit will generate a module called `USER-DEEPMD` in the `build` directory. 
+DeePMD-kit will generate a module called `USER-DEEPMD` in the `build` directory. If you need the low precision version, move `env_low.sh` to `env.sh` in the directory. Now download the LAMMPS code (`29Oct2020` or later), and uncompress it:
 ```bash
 cd /some/workspace
 wget https://github.com/lammps/lammps/archive/stable_29Sep2021.tar.gz
@@ -38,7 +38,7 @@ make no-user-deepmd
 ```
 
 ## Install LAMMPS (plugin mode)
-Starting from `8Apr2021`, LAMMPS also provides a plugin mode, allowing one build LAMMPS and a plugin separately.
+Starting from `8Apr2021`, LAMMPS also provides a plugin mode, allowing one to build LAMMPS and a plugin separately.
 
 Now download the LAMMPS code (`8Apr2021` or later), and uncompress it:
 ```bash
@@ -46,7 +46,7 @@ cd /some/workspace
 wget https://github.com/lammps/lammps/archive/stable_29Sep2021.tar.gz
 tar xf stable_29Sep2021.tar.gz
 ```
-The source code of LAMMPS is stored in directory `lammps-stable_29Sep2021`. Now go into the LAMMPS code and create a directory called `build`
+The source code of LAMMPS is stored in the directory `lammps-stable_29Sep2021`. Now go into the LAMMPS code and create a directory called `build`
 ```bash
 mkdir -p lammps-stable_29Sep2021/build/
 cd lammps-stable_29Sep2021/build/
diff --git a/doc/install/install-tf.2.3.md b/doc/install/install-tf.2.3.md
index 6971da151c..b2b7193754 100644
--- a/doc/install/install-tf.2.3.md
+++ b/doc/install/install-tf.2.3.md
@@ -8,14 +8,14 @@ chmod +x bazel-3.1.0-installer-linux-x86_64.sh
 export PATH=/some/workspace/bazel/bin:$PATH
 ```
 
-Firstly get the source code of the tensorflow
+Firstly, get the source code of TensorFlow
 ```bash
 git clone https://github.com/tensorflow/tensorflow tensorflow -b v2.3.0 --depth=1
 cd tensorflow
 ./configure
 ```
-You will answer a list of questions that help configure the building of tensorflow. 
-You may want to answer the question like the following. If you do not want to add CUDA support, please answer no.
+You will answer a list of questions that help configure the building of TensorFlow. 
+You may want to answer the questions like the following. If you do not want to add CUDA support, please answer no.
 
 ```
 Please specify the location of python. [Default is xxx]:
@@ -58,17 +58,17 @@ Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]:
 Not configuring the WORKSPACE for Android builds.
 
 Preconfigured Bazel build configs. You can use any of the below by adding "--config=<>" to your build command. See .bazelrc for more details.
- --config=mkl # Build with MKL support.
- --config=monolithic # Config for mostly static monolithic build.
- --config=ngraph # Build with Intel nGraph support.
- --config=numa # Build with NUMA support.
- --config=dynamic_kernels # (Experimental) Build kernels into separate shared objects.
- --config=v2 # Build TensorFlow 2.x instead of 1.x.
+ --config=mkl             # Build with MKL support.
+ --config=monolithic      # Config for mostly static monolithic build.
+ --config=ngraph          # Build with Intel nGraph support.
+ --config=numa            # Build with NUMA support.
+ --config=dynamic_kernels # (Experimental) Build kernels into separate shared objects.
+ --config=v2              # Build TensorFlow 2.x instead of 1.x.
 Preconfigured Bazel build configs to DISABLE default on features:
- --config=noaws # Disable AWS S3 filesystem support.
- --config=nogcp # Disable GCP support.
- --config=nohdfs # Disable HDFS support.
- --config=nonccl # Disable NVIDIA NCCL support.
+ --config=noaws           # Disable AWS S3 filesystem support.
+ --config=nogcp           # Disable GCP support.
+ --config=nohdfs          # Disable HDFS support.
+ --config=nonccl          # Disable NVIDIA NCCL support.
 
 Configuration finished
 ```
@@ -80,7 +80,7 @@ bazel build -c opt --verbose_failures //tensorflow:libtensorflow_cc.so
 ```
 You may want to add options `--copt=-msse4.2`, `--copt=-mavx`, `--copt=-mavx2` and `--copt=-mfma` to enable SSE4.2, AVX, AVX2 and FMA SIMD accelerations, respectively. It is noted that these options should be chosen according to the CPU architecture. If the RAM becomes an issue of your machine, you may limit the RAM usage by using `--local_resources 2048,.5,1.0`.
 
-Now I assume you want to install tensorflow in directory `$tensorflow_root`. Create the directory if it does not exists
+Now I assume you want to install TensorFlow in the directory `$tensorflow_root`. Create the directory if it does not exist
 ```bash
 mkdir -p $tensorflow_root
 ```
@@ -108,4 +108,4 @@ rsync -avzh --include '*/' --include '*.h' --include '*.inc' --exclude '*' bazel
 ```bash
 git: unknown command -C ...
 ```
-This may be your git version issue, because low version of git does not support this command. Upgrading your git maybe helpful.
+This may be an issue with your git version. Early versions of git do not support this command; in this case, upgrading git to a newer version may resolve the issue.
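
A note on the `git: unknown command -C` troubleshooting item in the last hunk: one can check the installed git locally before deciding to upgrade. The sketch below assumes `git` is on `PATH`; the claim that `-C` arrived in git 1.8.5 is stated to the best of my knowledge and worth verifying against the git release notes.

```shell
# Print the installed git version, e.g. "git version 2.34.1".
git --version

# Extract the bare version number. `git -C <path>` is supported by
# git >= 1.8.5 (assumed cutoff, please verify), so any 2.x release is fine.
version=$(git --version | awk '{print $3}')
echo "detected git ${version}"
```

On a git too old for `-C`, the usual workaround is to change directory in a subshell instead, e.g. `(cd /path/to/repo && git log -1)`, where the repository path is a placeholder.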