2 changes: 1 addition & 1 deletion docs/api/python/image/image.md
@@ -56,7 +56,7 @@ Iterators support loading image from binary `Record IO` and raw image files.

We use helper function to initialize augmenters
```eval_rst
- .. currentmodule:: mxnet
+ .. currentmodule:: mxnet
.. autosummary::
:nosignatures:

2 changes: 1 addition & 1 deletion docs/api/python/module/module.md
@@ -176,7 +176,7 @@ additional functionality. We summarize them in this section.
.. autosummary::
:nosignatures:

- BucketModule.switch_bucket
+ BucketingModule.switch_bucket
```
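
For context on the corrected entry, here is a minimal, hypothetical sketch of `BucketingModule.switch_bucket` usage; the symbol generator, bucket keys, and shapes below are invented for illustration and are not part of this change:

```python
import mxnet as mx

# Hypothetical bucket-keyed symbol generator: the bucket key is the sequence
# length; parameter shapes do not depend on it, so all buckets share weights.
def sym_gen(seq_len):
    data = mx.sym.Variable('data')                      # (batch, seq_len, 16)
    label = mx.sym.Variable('softmax_label')
    pooled = mx.sym.mean(data, axis=1)                  # (batch, 16)
    net = mx.sym.FullyConnected(pooled, num_hidden=10)
    net = mx.sym.SoftmaxOutput(net, label=label, name='softmax')
    return net, ('data',), ('softmax_label',)

mod = mx.mod.BucketingModule(sym_gen, default_bucket_key=20, context=mx.cpu())
mod.bind(data_shapes=[('data', (8, 20, 16))],
         label_shapes=[('softmax_label', (8,))])
mod.init_params()

# Switch to the bucket for sequence length 10, reusing the shared parameters
# and previously allocated memory where possible.
mod.switch_bucket(bucket_key=10,
                  data_shapes=[('data', (8, 10, 16))],
                  label_shapes=[('softmax_label', (8,))])
```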

### Class `SequentialModule`
6 changes: 3 additions & 3 deletions docs/api/python/symbol/symbol.md
@@ -297,8 +297,8 @@ Composite multiple symbols into a new one by an operator.
Symbol.take
Symbol.one_hot
Symbol.pick
- Symbol.ravel_multi_index
- Symbol.unravel_index
+ ravel_multi_index
+ unravel_index
```
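
As a side note (not part of the diff): `ravel_multi_index` and `unravel_index` are free operators exposed as `mx.sym.*` / `mx.nd.*` rather than `Symbol` methods, which is why the `Symbol.` prefix is dropped above. A quick NDArray round trip, assuming a small 3x4 shape for illustration:

```python
import mxnet as mx

shape = (3, 4)
# Row 0 holds the first-axis indices, row 1 the second-axis indices.
indices = mx.nd.array([[0, 1, 2],
                       [2, 0, 3]])
flat = mx.nd.ravel_multi_index(indices, shape=shape)
print(flat.asnumpy())                           # [ 2.  4. 11.]
back = mx.nd.unravel_index(flat, shape=shape)
print(back.asnumpy())                           # recovers the 2 x 3 index matrix
```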

### Get internal and output symbol
@@ -577,7 +577,7 @@ Composite multiple symbols into a new one by an operator.
broadcast_logical_and
broadcast_logical_or
broadcast_logical_xor
- broadcast_logical_not
+ logical_not
```
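
For reference only: the `broadcast_logical_*` operators broadcast two inputs against each other, while `logical_not` is a plain elementwise operator with no `broadcast_` variant, which is what the corrected entry reflects. A small illustrative check:

```python
import mxnet as mx

a = mx.nd.array([[1, 0, 3]])    # shape (1, 3)
b = mx.nd.array([[0], [1]])     # shape (2, 1)

print(mx.nd.broadcast_logical_and(a, b).shape)   # (2, 3) after broadcasting
print(mx.nd.logical_not(a).asnumpy())            # [[0. 1. 0.]]
```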

### Random sampling
1 change: 1 addition & 0 deletions python/mxnet/contrib/svrg_optimization/svrg_module.py
@@ -401,6 +401,7 @@ def fit(self, train_data, eval_data=None, eval_metric='acc',
force_rebind=False, force_init=False, begin_epoch=0, num_epoch=None,
validation_metric=None, monitor=None, sparse_row_id_fn=None):
"""Trains the module parameters.
+
Member @anirudhacharya commented on Nov 8, 2018:
the issue also states unexpected indent as a Warning. Has that been fixed?

Contributor (PR author) replied:
both, the error and the warning got fixed with this change
Parameters
----------
train_data : DataIter
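
The one-line addition above inserts a blank line after the docstring summary. A minimal sketch of the resulting layout (illustrative function, not the real SVRGModule signature) that keeps the RST/numpydoc parser from reporting the "Unexpected indentation" error and warning discussed in the review comments:

```python
def fit(train_data, eval_data=None):
    """Trains the module parameters.

    A blank line must separate the summary from the sections below;
    without it the RST parser treats the indented block as a continuation
    of the summary and emits an error/warning.

    Parameters
    ----------
    train_data : DataIter
        Training data iterator.
    """
```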
6 changes: 3 additions & 3 deletions python/mxnet/rnn/rnn.py
@@ -35,7 +35,7 @@ def save_rnn_checkpoint(cells, prefix, epoch, symbol, arg_params, aux_params):

Parameters
----------
- cells : RNNCell or list of RNNCells
+ cells : mxnet.rnn.RNNCell or list of RNNCells
The RNN cells used by this symbol.
prefix : str
Prefix of model name.
@@ -65,7 +65,7 @@ def load_rnn_checkpoint(cells, prefix, epoch):

Parameters
----------
- cells : RNNCell or list of RNNCells
+ cells : mxnet.rnn.RNNCell or list of RNNCells
The RNN cells used by this symbol.
prefix : str
Prefix of model name.
@@ -100,7 +100,7 @@ def do_rnn_checkpoint(cells, prefix, period=1):

Parameters
----------
- cells : RNNCell or list of RNNCells
+ cells : mxnet.rnn.RNNCell or list of RNNCells
The RNN cells used by this symbol.
prefix : str
The file prefix to checkpoint to
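
A hedged sketch of the checkpoint round trip these docstrings describe; the LSTM cell configuration, file prefix, and zero-valued parameters below are made up purely to exercise `save_rnn_checkpoint` / `load_rnn_checkpoint`:

```python
import mxnet as mx

cell = mx.rnn.LSTMCell(num_hidden=8, prefix='lstm_')
outputs, _ = cell.unroll(length=5, inputs=mx.sym.Variable('data'),
                         merge_outputs=True)

# Zero-valued parameters with the right names/shapes, only to exercise the
# checkpoint helpers (input layout NTC: batch=2, length=5, input dim=4).
arg_shapes, _, aux_shapes = outputs.infer_shape(data=(2, 5, 4))
arg_params = {name: mx.nd.zeros(shape)
              for name, shape in zip(outputs.list_arguments(), arg_shapes)
              if name != 'data'}
aux_params = {name: mx.nd.zeros(shape)
              for name, shape in zip(outputs.list_auxiliary_states(), aux_shapes)}

mx.rnn.save_rnn_checkpoint(cell, 'toy-rnn', 0, outputs, arg_params, aux_params)
sym, arg_params, aux_params = mx.rnn.load_rnn_checkpoint(cell, 'toy-rnn', 0)
```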
2 changes: 1 addition & 1 deletion python/mxnet/rnn/rnn_cell.py
@@ -716,7 +716,7 @@ def unfuse(self):

Returns
-------
- cell : SequentialRNNCell
+ cell : mxnet.rnn.SequentialRNNCell
unfused cell that can be used for stepping, and can run on CPU.
"""
stack = SequentialRNNCell()
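
An illustrative use of the documented return value: `FusedRNNCell` normally runs only on GPU, and `unfuse()` returns an `mxnet.rnn.SequentialRNNCell` built from equivalent unfused cells that can also run on CPU (the cell sizes below are arbitrary):

```python
import mxnet as mx

fused = mx.rnn.FusedRNNCell(num_hidden=16, num_layers=2, mode='lstm',
                            prefix='lstm_')
stack = fused.unfuse()          # an mxnet.rnn.SequentialRNNCell
outputs, states = stack.unroll(length=5, inputs=mx.sym.Variable('data'),
                               merge_outputs=True)
```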
2 changes: 1 addition & 1 deletion python/mxnet/symbol/symbol.py
@@ -1347,7 +1347,7 @@ def simple_bind(self, ctx, grad_req='write', type_dict=None, stype_dict=None,
shared_buffer : Dict of string to `NDArray`
The dict mapping argument names to the `NDArray` that can be reused for initializing
the current executor. This buffer will be checked for reuse if one argument name
- of the current executor is not found in `shared_arg_names`. The `NDArray`s are
+ of the current executor is not found in `shared_arg_names`. The `NDArray` s are
expected have default storage type.

kwargs : Dict of str->shape
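
A hedged sketch of the `shared_buffer` argument whose docstring is touched above (the tiny network and shapes are invented): an empty dict is filled with default-storage `NDArray`s on the first `simple_bind`, and a later bind with the same dict can reuse them.

```python
import mxnet as mx

data = mx.sym.Variable('data')
net = mx.sym.FullyConnected(data, num_hidden=4, name='fc')

shared = {}   # argument name -> NDArray (default storage)
exe1 = net.simple_bind(mx.cpu(), data=(2, 8), shared_buffer=shared)
# 'shared' now holds the allocated arrays; a second bind with the same
# buffer can reuse them instead of allocating fresh NDArrays.
exe2 = net.simple_bind(mx.cpu(), data=(2, 8), shared_buffer=shared)
print(sorted(shared.keys()))
```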
13 changes: 0 additions & 13 deletions python/mxnet/symbol_doc.py
@@ -44,9 +44,6 @@
- *Examples*: simple and short code snippet showing how to use this operator.
It should show typical calling examples and behaviors (e.g. maps an input
of what shape to an output of what shape).
- - *Regression Test*: longer test code for the operators. We normally do not
-   expect the users to read those, but they will be executed by `doctest` to
-   ensure the behavior of each operator does not change unintentionally.
"""
from __future__ import absolute_import as _abs
import re as _re
@@ -75,8 +72,6 @@ class ActivationDoc(SymbolDoc):
>>> mlp
<Symbol mlp>

- Regression Test
- ---------------
ReLU activation

>>> test_suites = [
@@ -107,8 +102,6 @@ class DropoutDoc(SymbolDoc):
>>> data = Variable('data')
>>> data_dp = Dropout(data=data, p=0.2)

- Regression Test
- ---------------
>>> shape = (100, 100) # take larger shapes to be more statistical stable
>>> x = np.ones(shape)
>>> op = Dropout(p=0.5, name='dp')
@@ -141,8 +134,6 @@ class EmbeddingDoc(SymbolDoc):
>>> SymbolDoc.get_output_shape(op, letters=(seq_len, batch_size))
{'embed_output': (10L, 64L, 16L)}

- Regression Test
- ---------------
>>> vocab_size, embed_dim = (26, 16)
>>> batch_size = 12
>>> word_vecs = test_utils.random_arrays((vocab_size, embed_dim))
@@ -167,8 +158,6 @@ class FlattenDoc(SymbolDoc):
>>> SymbolDoc.get_output_shape(flatten, data=(2, 3, 4, 5))
{'flat_output': (2L, 60L)}

- Regression Test
- ---------------
>>> test_dims = [(2, 3, 4, 5), (2, 3), (2,)]
>>> op = Flatten(name='flat')
>>> for dims in test_dims:
@@ -208,8 +197,6 @@ class FullyConnectedDoc(SymbolDoc):
>>> net
<Symbol pred>

- Regression Test
- ---------------
>>> dim_in, dim_out = (3, 4)
>>> x, w, b = test_utils.random_arrays((10, dim_in), (dim_out, dim_in), (dim_out,))
>>> op = FullyConnected(num_hidden=dim_out, name='FC')
4 changes: 2 additions & 2 deletions src/operator/contrib/adaptive_avg_pooling.cc
@@ -206,10 +206,10 @@ Applies a 2D adaptive average pooling over a 4D input with the shape of (NCHW).
The pooling kernel and stride sizes are automatically chosen for desired output sizes.

- If a single integer is provided for output_size, the output size is
- (N x C x output_size x output_size) for any input (NCHW).
+ (N x C x output_size x output_size) for any input (NCHW).

- If a tuple of integers (height, width) are provided for output_size, the output size is
- (N x C x height x width) for any input (NCHW).
+ (N x C x height x width) for any input (NCHW).

)code" ADD_FILELINE)
.set_attr_parser(ParamParser<AdaptiveAvgPoolParam>)
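
A quick NDArray-level check of the two `output_size` forms described in the corrected doc text (input shape chosen arbitrarily):

```python
import mxnet as mx

x = mx.nd.random.uniform(shape=(1, 3, 32, 24))   # NCHW input

# Single integer -> square output of size output_size x output_size.
print(mx.nd.contrib.AdaptiveAvgPooling2D(x, output_size=8).shape)        # (1, 3, 8, 8)

# Tuple (height, width) -> rectangular output.
print(mx.nd.contrib.AdaptiveAvgPooling2D(x, output_size=(4, 6)).shape)   # (1, 3, 4, 6)
```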