diff --git a/docs/python_docs/python/scripts/conf.py b/docs/python_docs/python/scripts/conf.py
index 5096490e7920..85aa885d3e07 100644
--- a/docs/python_docs/python/scripts/conf.py
+++ b/docs/python_docs/python/scripts/conf.py
@@ -69,6 +69,9 @@
 autosummary_generate = True
 numpydoc_show_class_members = False
 
+# Disable SSL verification in link check.
+tls_verify = False
+
 autodoc_member_order = 'alphabetical'
 
 autodoc_default_flags = ['members', 'show-inheritance']
diff --git a/docs/python_docs/python/tutorials/packages/gluon/blocks/custom-layer.md b/docs/python_docs/python/tutorials/packages/gluon/blocks/custom-layer.md
index 54fbd7974a76..d29885aa6208 100644
--- a/docs/python_docs/python/tutorials/packages/gluon/blocks/custom-layer.md
+++ b/docs/python_docs/python/tutorials/packages/gluon/blocks/custom-layer.md
@@ -18,7 +18,7 @@
 
 # Custom Layers
 
-While Gluon API for Apache MxNet comes with [a decent number of pre-defined layers](https://mxnet.apache.org/api/python/gluon/nn.html), at some point one may find that a new layer is needed. Adding a new layer in Gluon API is straightforward, yet there are a few things that one needs to keep in mind.
+While Gluon API for Apache MXNet comes with [a decent number of pre-defined layers](https://mxnet.apache.org/versions/master/api/python/docs/api/gluon/nn/index.html), at some point one may find that a new layer is needed. Adding a new layer in Gluon API is straightforward, yet there are a few things that one needs to keep in mind.
 
 In this article, I will cover how to create a new layer from scratch, how to use it, what are possible pitfalls and how to avoid them.
 
@@ -54,7 +54,7 @@ The rest of methods of the `Block` class are already implemented, and majority o
 
 ## Hybridization and the difference between Block and HybridBlock
 
-Looking into implementation of [existing layers](https://mxnet.apache.org/api/python/gluon/nn.html), one may find that more often a block inherits from a [HybridBlock](https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/gluon/block.py#L428), instead of directly inheriting from `Block`.
+Looking into implementation of [existing layers](https://mxnet.apache.org/versions/master/api/python/docs/api/gluon/nn/index.html), one may find that more often a block inherits from a [HybridBlock](https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/gluon/block.py#L428), instead of directly inheriting from `Block`.
 
 The reason for that is that `HybridBlock` allows to write custom layers in imperative programming style, while computing in a symbolic way. It unifies the flexibility of imperative programming with the performance benefits of symbolic programming. You can learn more about the difference between symbolic and imperative programming from [this article](https://mxnet.apache.org/api/architecture/overview.html).
diff --git a/docs/python_docs/python/tutorials/packages/gluon/training/normalization/index.md b/docs/python_docs/python/tutorials/packages/gluon/training/normalization/index.md
index 4b5fdc4fc4b2..49056b587e7a 100644
--- a/docs/python_docs/python/tutorials/packages/gluon/training/normalization/index.md
+++ b/docs/python_docs/python/tutorials/packages/gluon/training/normalization/index.md
@@ -39,7 +39,7 @@ Tip: A `BatchNorm` layer at the start of your network can have a similar effect
 
 Warning: You should calculate the normalization means and standard deviations using the training dataset only. Any leakage of information from you testing dataset will effect the reliability of your testing metrics.
 
-When using pre-trained models from the [Gluon Model Zoo](https://mxnet.apache.org/api/python/gluon/model_zoo.html) you'll usually see the normalization statistics used for training (i.e. statistics from step 1). You'll want to use these statistics to normalize your own input data for fine-tuning or inference with these models. Using `transforms.Normalize` is one way of applying the normalization, and this should be used in the `Dataset`.
+When using pre-trained models from the [Gluon Model Zoo](https://mxnet.apache.org/versions/master/api/python/docs/api/gluon/model_zoo/index.html) you'll usually see the normalization statistics used for training (i.e. statistics from step 1). You'll want to use these statistics to normalize your own input data for fine-tuning or inference with these models. Using `transforms.Normalize` is one way of applying the normalization, and this should be used in the `Dataset`.
 
 ```{.python .input}
 import mxnet as mx
diff --git a/docs/python_docs/python/tutorials/packages/legacy/ndarray/sparse/row_sparse.md b/docs/python_docs/python/tutorials/packages/legacy/ndarray/sparse/row_sparse.md
index 66fda6b4225f..70a0e8838946 100644
--- a/docs/python_docs/python/tutorials/packages/legacy/ndarray/sparse/row_sparse.md
+++ b/docs/python_docs/python/tutorials/packages/legacy/ndarray/sparse/row_sparse.md
@@ -404,7 +404,7 @@ rsp_retained = mx.nd.sparse.retain(rsp, mx.nd.array([0, 1]))
 
 ## Sparse Operators and Storage Type Inference
 
-Operators that have specialized implementation for sparse arrays can be accessed in ``mx.nd.sparse``. You can read the [mxnet.ndarray.sparse API documentation](http://mxnet.apache.org/api/python/ndarray/sparse.html) to find what sparse operators are available.
+Operators that have a specialized implementation for sparse arrays can be accessed in ``mx.nd.sparse``. You can read the [mxnet.ndarray.sparse API documentation](https://mxnet.apache.org/versions/master/api/python/docs/api/legacy/ndarray/sparse/index.html) to find what sparse operators are available.
 
 ```{.python .input}
diff --git a/docs/python_docs/python/tutorials/packages/onnx/inference_on_onnx_model.md b/docs/python_docs/python/tutorials/packages/onnx/inference_on_onnx_model.md
index c022f2a9a1c7..0f250db55255 100644
--- a/docs/python_docs/python/tutorials/packages/onnx/inference_on_onnx_model.md
+++ b/docs/python_docs/python/tutorials/packages/onnx/inference_on_onnx_model.md
@@ -154,7 +154,7 @@ for param in aux_params:
     net_params[param]._load_init(aux_params[param], ctx=ctx)
 ```
 
-We can now cache the computational graph through [hybridization](https://mxnet.apache.org/tutorials/gluon/hybrid.html) to gain some performance
+We can now cache the computational graph through [hybridization](https://mxnet.apache.org/versions/master/api/python/docs/tutorials/packages/gluon/blocks/hybridize.html) to gain some performance.
 
@@ -248,6 +248,6 @@ Lucky for us, the [Caltech101 dataset](http://www.vision.caltech.edu/Image_Datas
 
 We show that in our next tutorial:
 
-- [Fine-tuning an ONNX Model using the modern imperative MXNet/Gluon](http://mxnet.apache.org/tutorials/onnx/fine_tuning_gluon.html)
+- [Fine-tuning an ONNX Model using the modern imperative MXNet/Gluon](https://mxnet.apache.org/versions/master/api/python/docs/tutorials/packages/onnx/fine_tuning_gluon.html)
 
diff --git a/docs/python_docs/python/tutorials/performance/backend/profiler.md b/docs/python_docs/python/tutorials/performance/backend/profiler.md
index e7891c7677b9..354dc48e2f70 100644
--- a/docs/python_docs/python/tutorials/performance/backend/profiler.md
+++ b/docs/python_docs/python/tutorials/performance/backend/profiler.md
@@ -286,7 +286,7 @@ Here, we have created a custom operator called `MyAddOne`, and within its `forwa
 
 As shown by the screenshot, in the **Custom Operator** domain where all the custom operator-related events fall into, we can easily visualize the execution time of each segment of `MyAddOne`. We can tell that `MyAddOne::pure_python` is executed first. We also know that `CopyCPU2CPU` and `_plus_scalr` are two "sub-operators" of `MyAddOne` and the sequence in which they are executed.
 
-Please note that: to be able to see the previously described information, you need to set `profile_imperative` to `True` even when you are using custom operators in [symbolic mode](https://mxnet.apache.org/versions/master/tutorials/basic/symbol.html) (refer to the code snippet below, which is the symbolic-mode equivelent of the code example above). The reason is that within custom operators, pure python code and sub-operators are still called imperatively.
+Please note that: to be able to see the previously described information, you need to set `profile_imperative` to `True` even when you are using custom operators in [symbolic mode](https://mxnet.apache.org/versions/master/api/python/docs/api/legacy/symbol/index.html) (refer to the code snippet below, which is the symbolic-mode equivalent of the code example above). The reason is that within custom operators, pure python code and sub-operators are still called imperatively.
 
 ```{.python .input}
 # Set profile_all to True
diff --git a/python/mxnet/gluon/block.py b/python/mxnet/gluon/block.py
index 5be7a51c4d96..bc16dc7263de 100644
--- a/python/mxnet/gluon/block.py
+++ b/python/mxnet/gluon/block.py
@@ -1033,8 +1033,8 @@ def forward(self, x):
 
     References
     ----------
-        `Hybrid - Faster training and easy deployment
-        `_
+        `Hybridize - A Hybrid of Imperative and Symbolic Programming
+        `_
     """
     def __init__(self):
         super(HybridBlock, self).__init__()
diff --git a/python/mxnet/gluon/nn/basic_layers.py b/python/mxnet/gluon/nn/basic_layers.py
index c542544cfd29..167fab550a54 100644
--- a/python/mxnet/gluon/nn/basic_layers.py
+++ b/python/mxnet/gluon/nn/basic_layers.py
@@ -550,7 +550,7 @@ class Embedding(HybridBlock):
        AdaGrad and Adam. By default lazy updates is turned on, which may perform
        differently from standard updates. For more details, please check the Optimization API at:
-        https://mxnet.incubator.apache.org/api/python/optimization/optimization.html
+        https://mxnet.apache.org/versions/master/api/python/docs/api/optimizer/index.html
 
     Parameters
     ----------
diff --git a/python/mxnet/ndarray/numpy_extension/_op.py b/python/mxnet/ndarray/numpy_extension/_op.py
index 20b6c91de70d..ae110806c3fd 100644
--- a/python/mxnet/ndarray/numpy_extension/_op.py
+++ b/python/mxnet/ndarray/numpy_extension/_op.py
@@ -1072,7 +1072,7 @@ def embedding(data, weight, input_dim=None, output_dim=None, dtype="float32", sp
        "row_sparse". Only a subset of optimizers support sparse gradients, including SGD,
        AdaGrad and Adam. Note that by default lazy updates is turned on, which may perform
        differently from standard updates. For more details, please check the Optimization API at:
-        https://mxnet.incubator.apache.org/api/python/optimization/optimization.html
+        https://mxnet.apache.org/versions/master/api/python/docs/api/optimizer/index.html
 
     Parameters
     ----------
diff --git a/python/mxnet/numpy_extension/_op.py b/python/mxnet/numpy_extension/_op.py
index 226c2753e4f4..4124988c1536 100644
--- a/python/mxnet/numpy_extension/_op.py
+++ b/python/mxnet/numpy_extension/_op.py
@@ -1001,7 +1001,7 @@ def embedding(data, weight, input_dim=None, output_dim=None, dtype="float32", sp
        "row_sparse". Only a subset of optimizers support sparse gradients, including SGD,
        AdaGrad and Adam. Note that by default lazy updates is turned on, which may perform
        differently from standard updates. For more details, please check the Optimization API at:
-        https://mxnet.incubator.apache.org/api/python/optimization/optimization.html
+        https://mxnet.apache.org/versions/master/api/python/docs/api/optimizer/index.html
 
     Parameters
     ----------
diff --git a/src/common/cuda/rtc.cc b/src/common/cuda/rtc.cc
index dda3b7421bed..2294feaa9e2f 100644
--- a/src/common/cuda/rtc.cc
+++ b/src/common/cuda/rtc.cc
@@ -63,7 +63,7 @@ namespace rtc {
 #if defined(_WIN32) || defined(_WIN64) || defined(__WINDOWS__)
 const char cuda_lib_name[] = "nvcuda.dll";
 #else
- const char cuda_lib_name[] = "libcuda.so";
+ const char cuda_lib_name[] = "libcuda.so.1";
 #endif
 
 std::mutex lock;
diff --git a/src/operator/tensor/dot.cc b/src/operator/tensor/dot.cc
index 45236993f6dd..0ef4cb216d26 100644
--- a/src/operator/tensor/dot.cc
+++ b/src/operator/tensor/dot.cc
@@ -76,7 +76,7 @@ above patterns, ``dot`` will fallback and generate output with default storage.
 "row_sparse". Only a subset of optimizers support sparse gradients, including SGD,
 AdaGrad and Adam. Note that by default lazy updates is turned on, which may perform
 differently from standard updates.
 For more details, please check the Optimization API at:
- https://mxnet.incubator.apache.org/api/python/optimization/optimization.html
+ https://mxnet.apache.org/versions/master/api/python/docs/api/optimizer/index.html
 
 )doc" ADD_FILELINE)
 .set_num_inputs(2)
diff --git a/src/operator/tensor/indexing_op.cc b/src/operator/tensor/indexing_op.cc
index dfb2b88748fb..a3f10ae6b8f6 100644
--- a/src/operator/tensor/indexing_op.cc
+++ b/src/operator/tensor/indexing_op.cc
@@ -597,7 +597,7 @@ The storage type of weight can be either row_sparse or default.
 "row_sparse". Only a subset of optimizers support sparse gradients, including SGD,
 AdaGrad and Adam. Note that by default lazy updates is turned on, which may perform
 differently from standard updates. For more details, please check the Optimization API at:
- https://mxnet.incubator.apache.org/api/python/optimization/optimization.html
+ https://mxnet.apache.org/versions/master/api/python/docs/api/optimizer/index.html
 
 )code" ADD_FILELINE)
 .set_num_inputs(2)
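Note: several of the optimizer links re-pointed above sit in docstrings that describe sparse gradients and lazy updates (the Gluon `Embedding` layer, the `npx` `embedding` operator, and the sparse `dot`/`Embedding` C++ docs). The snippet below is not part of the patch; it is a minimal sketch of the behaviour those docstrings refer to, assuming the MXNet 1.x-style Gluon/NDArray API, and `sparse_grad` and `lazy_update` are existing, documented arguments rather than anything introduced here.

```python
# Minimal sketch (assumes the MXNet 1.x-style Gluon/NDArray API is available).
import mxnet as mx
from mxnet.gluon import nn

# Embedding with sparse gradients: the gradient of `weight` is stored as
# "row_sparse" instead of a dense array.
embedding = nn.Embedding(input_dim=1000, output_dim=16, sparse_grad=True)
embedding.initialize()

# Only a subset of optimizers (e.g. SGD, AdaGrad, Adam) can consume sparse
# gradients; SGD's lazy_update defaults to True, which is the lazy-update
# behaviour the linked Optimization API page documents.
trainer = mx.gluon.Trainer(embedding.collect_params(), 'sgd',
                           {'learning_rate': 0.1, 'lazy_update': True})

tokens = mx.nd.array([[4, 7], [4, 2]])   # integer token indices
with mx.autograd.record():
    loss = embedding(tokens).sum()
loss.backward()

print(embedding.weight.grad().stype)     # expected: 'row_sparse'
trainer.step(batch_size=2)
```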