This repository was archived by the owner on Nov 17, 2023. It is now read-only.
Merged
51 changes: 27 additions & 24 deletions ci/docker/runtime_functions.sh
@@ -706,30 +706,6 @@ unittest_ubuntu_python2_gpu() {
nosetests-2.7 $NOSE_COVERAGE_ARGUMENTS --with-xunit --xunit-file nosetests_gpu.xml --verbose tests/python/gpu
}

tutorialtest_ubuntu_python3_gpu() {
set -ex
cd /work/mxnet/docs
export MXNET_DOCS_BUILD_MXNET=0
make html
export MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0
export PYTHONPATH=/work/mxnet/python/
export MXNET_TUTORIAL_TEST_KERNEL=python3
cd /work/mxnet/tests/tutorials
nosetests-3.4 $NOSE_COVERAGE_ARGUMENTS --with-xunit --xunit-file nosetests_tutorials.xml test_tutorials.py --nologcapture
}

tutorialtest_ubuntu_python2_gpu() {
set -ex
cd /work/mxnet/docs
export MXNET_DOCS_BUILD_MXNET=0
make html
export MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0
export PYTHONPATH=/work/mxnet/python/
export MXNET_TUTORIAL_TEST_KERNEL=python2
cd /work/mxnet/tests/tutorials
nosetests-3.4 $NOSE_COVERAGE_ARGUMENTS --with-xunit --xunit-file nosetests_tutorials.xml test_tutorials.py --nologcapture
}

unittest_ubuntu_python3_gpu() {
set -ex
export PYTHONPATH=./python/
@@ -1124,6 +1100,33 @@ nightly_straight_dope_python3_multi_gpu_tests() {
test_notebooks_multi_gpu.py --nologcapture
}

nightly_tutorial_test_ubuntu_python3_gpu() {
set -ex
cd /work/mxnet/docs
export BUILD_VER=tutorial
export MXNET_DOCS_BUILD_MXNET=0
make html
export MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0
export PYTHONPATH=/work/mxnet/python/
export MXNET_TUTORIAL_TEST_KERNEL=python3
cd /work/mxnet/tests/tutorials
nosetests-3.4 --with-xunit --xunit-file nosetests_tutorials.xml test_tutorials.py --nologcapture
}

nightly_tutorial_test_ubuntu_python2_gpu() {
set -ex
cd /work/mxnet/docs
export BUILD_VER=tutorial
export MXNET_DOCS_BUILD_MXNET=0
make html
export MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0
export PYTHONPATH=/work/mxnet/python/
export MXNET_TUTORIAL_TEST_KERNEL=python2
cd /work/mxnet/tests/tutorials
nosetests-3.4 --with-xunit --xunit-file nosetests_tutorials.xml test_tutorials.py --nologcapture
}


# Deploy

deploy_docs() {
6 changes: 6 additions & 0 deletions docs/settings.ini
@@ -1,6 +1,12 @@
[mxnet]
build_mxnet = 0

[document_sets_tutorial]
clojure_docs = 0
doxygen_docs = 1
r_docs = 0
scala_docs = 0

[document_sets_default]
clojure_docs = 1
doxygen_docs = 1
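The new `nightly_tutorial_test_*` CI functions export `BUILD_VER=tutorial` before running `make html`, and this new `[document_sets_tutorial]` section presumably supplies the document-set flags for that build. A minimal stdlib sketch of how such a section could be selected (the `document_set` helper is a hypothetical illustration, not the actual MXNet docs build code):

```python
import configparser

# Hypothetical sketch: pick a document-set section from settings.ini based
# on BUILD_VER (e.g. BUILD_VER=tutorial, as exported in the nightly
# tutorial-test functions). The real consumption of these flags lives in
# the MXNet docs tooling; this only illustrates the settings.ini layout.
SETTINGS = """
[mxnet]
build_mxnet = 0

[document_sets_tutorial]
clojure_docs = 0
doxygen_docs = 1
r_docs = 0
scala_docs = 0

[document_sets_default]
clojure_docs = 1
doxygen_docs = 1
"""

def document_set(build_ver="default"):
    cfg = configparser.ConfigParser()
    cfg.read_string(SETTINGS)
    section = "document_sets_%s" % build_ver
    return {key: cfg.getboolean(section, key) for key in cfg[section]}

print(document_set("tutorial"))
```

The tutorial set disables the slower Clojure/R/Scala doc builds, which keeps the nightly tutorial-test `make html` step fast.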
148 changes: 90 additions & 58 deletions docs/tutorials/basic/module.md
@@ -39,11 +39,16 @@ training examples each time. A separate iterator is also created for test data.

```python
import logging
import random
logging.getLogger().setLevel(logging.INFO)

import mxnet as mx
import numpy as np

mx.random.seed(1234)
np.random.seed(1234)
random.seed(1234)

fname = mx.test_utils.download('https://s3.us-east-2.amazonaws.com/mxnet-public/letter_recognition/letter-recognition.data')
data = np.genfromtxt(fname, delimiter=',')[:,1:]
label = np.array([ord(l.split(',')[0])-ord('A') for l in open(fname, 'r')])
@@ -64,7 +69,7 @@ net = mx.sym.FullyConnected(net, name='fc1', num_hidden=64)
net = mx.sym.Activation(net, name='relu1', act_type="relu")
net = mx.sym.FullyConnected(net, name='fc2', num_hidden=26)
net = mx.sym.SoftmaxOutput(net, name='softmax')
mx.viz.plot_network(net)
mx.viz.plot_network(net, node_attrs={"shape":"oval","fixedsize":"false"})
```
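The three seed calls added above (for `mx.random`, `numpy.random`, and the stdlib `random`) are what make the tutorial's "Expected output" blocks reproducible. A minimal stdlib-only illustration of the principle:

```python
import random

# Re-seeding an RNG with the same value replays the same sequence.
# Seeding every RNG source a program uses (here mx.random, numpy.random,
# and random) means repeated runs generate identical data and identical
# training curves.
random.seed(1234)
first_run = [random.random() for _ in range(3)]

random.seed(1234)
second_run = [random.random() for _ in range(3)]

assert first_run == second_run
```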


@@ -135,11 +140,17 @@ for epoch in range(5):
print('Epoch %d, Training %s' % (epoch, metric.get()))
```

Epoch 0, Training ('accuracy', 0.4554375)
Epoch 1, Training ('accuracy', 0.6485625)
Epoch 2, Training ('accuracy', 0.7055625)
Epoch 3, Training ('accuracy', 0.7396875)
Epoch 4, Training ('accuracy', 0.764375)

Expected output:


```
Epoch 0, Training ('accuracy', 0.434625)
Epoch 1, Training ('accuracy', 0.6516875)
Epoch 2, Training ('accuracy', 0.6968125)
Epoch 3, Training ('accuracy', 0.7273125)
Epoch 4, Training ('accuracy', 0.7575625)
```


To learn more about these APIs, visit [Module API](http://mxnet.io/api/python/module/module.html).
@@ -172,34 +183,36 @@ mod.fit(train_iter,
optimizer='sgd',
optimizer_params={'learning_rate':0.1},
eval_metric='acc',
num_epoch=8)
num_epoch=7)
```

INFO:root:Epoch[0] Train-accuracy=0.364625
INFO:root:Epoch[0] Time cost=0.388
INFO:root:Epoch[0] Validation-accuracy=0.557250
INFO:root:Epoch[1] Train-accuracy=0.633625
INFO:root:Epoch[1] Time cost=0.470
INFO:root:Epoch[1] Validation-accuracy=0.634750
INFO:root:Epoch[2] Train-accuracy=0.697187
INFO:root:Epoch[2] Time cost=0.402
INFO:root:Epoch[2] Validation-accuracy=0.665500
INFO:root:Epoch[3] Train-accuracy=0.735062
INFO:root:Epoch[3] Time cost=0.402
INFO:root:Epoch[3] Validation-accuracy=0.713000
INFO:root:Epoch[4] Train-accuracy=0.762563
INFO:root:Epoch[4] Time cost=0.408
INFO:root:Epoch[4] Validation-accuracy=0.742000
INFO:root:Epoch[5] Train-accuracy=0.782312
INFO:root:Epoch[5] Time cost=0.400
INFO:root:Epoch[5] Validation-accuracy=0.778500
INFO:root:Epoch[6] Train-accuracy=0.797188
INFO:root:Epoch[6] Time cost=0.392
INFO:root:Epoch[6] Validation-accuracy=0.798250
INFO:root:Epoch[7] Train-accuracy=0.807750
INFO:root:Epoch[7] Time cost=0.401
INFO:root:Epoch[7] Validation-accuracy=0.789250

Expected output:


```
INFO:root:Epoch[0] Train-accuracy=0.325437
INFO:root:Epoch[0] Time cost=0.550
INFO:root:Epoch[0] Validation-accuracy=0.568500
INFO:root:Epoch[1] Train-accuracy=0.622188
INFO:root:Epoch[1] Time cost=0.552
INFO:root:Epoch[1] Validation-accuracy=0.656500
INFO:root:Epoch[2] Train-accuracy=0.694375
INFO:root:Epoch[2] Time cost=0.566
INFO:root:Epoch[2] Validation-accuracy=0.703500
INFO:root:Epoch[3] Train-accuracy=0.732187
INFO:root:Epoch[3] Time cost=0.562
INFO:root:Epoch[3] Validation-accuracy=0.748750
INFO:root:Epoch[4] Train-accuracy=0.755375
INFO:root:Epoch[4] Time cost=0.484
INFO:root:Epoch[4] Validation-accuracy=0.761500
INFO:root:Epoch[5] Train-accuracy=0.773188
INFO:root:Epoch[5] Time cost=0.383
INFO:root:Epoch[5] Validation-accuracy=0.715000
INFO:root:Epoch[6] Train-accuracy=0.794687
INFO:root:Epoch[6] Time cost=0.378
INFO:root:Epoch[6] Validation-accuracy=0.802250
```

By default, the `fit` function sets `eval_metric` to `accuracy`, `optimizer` to `sgd`,
and `optimizer_params` to `(('learning_rate', 0.01),)`.
@@ -225,12 +238,17 @@ It can be used as follows:
```python
score = mod.score(val_iter, ['acc'])
print("Accuracy score is %f" % (score[0][1]))
assert score[0][1] > 0.77, "Achieved accuracy (%f) is less than expected (0.77)" % score[0][1]
assert score[0][1] > 0.76, "Achieved accuracy (%f) is less than expected (0.76)" % score[0][1]
```

Accuracy score is 0.789250

Expected output:


```
Accuracy score is 0.802250
```

Other metrics that can be used include `top_k_acc` (top-k accuracy),
`F1`, `RMSE`, `MSE`, `MAE`, and `ce` (cross-entropy). To learn more about the metrics,
visit [Evaluation metric](http://mxnet.io/api/python/metric/metric.html).
@@ -252,22 +270,27 @@ mod = mx.mod.Module(symbol=net)
mod.fit(train_iter, num_epoch=5, epoch_end_callback=checkpoint)
```

INFO:root:Epoch[0] Train-accuracy=0.101062
INFO:root:Epoch[0] Time cost=0.422
INFO:root:Saved checkpoint to "mx_mlp-0001.params"
INFO:root:Epoch[1] Train-accuracy=0.263313
INFO:root:Epoch[1] Time cost=0.785
INFO:root:Saved checkpoint to "mx_mlp-0002.params"
INFO:root:Epoch[2] Train-accuracy=0.452188
INFO:root:Epoch[2] Time cost=0.624
INFO:root:Saved checkpoint to "mx_mlp-0003.params"
INFO:root:Epoch[3] Train-accuracy=0.544125
INFO:root:Epoch[3] Time cost=0.427
INFO:root:Saved checkpoint to "mx_mlp-0004.params"
INFO:root:Epoch[4] Train-accuracy=0.605250
INFO:root:Epoch[4] Time cost=0.399
INFO:root:Saved checkpoint to "mx_mlp-0005.params"

Expected output:


```
INFO:root:Epoch[0] Train-accuracy=0.098437
INFO:root:Epoch[0] Time cost=0.421
INFO:root:Saved checkpoint to "mx_mlp-0001.params"
INFO:root:Epoch[1] Train-accuracy=0.257437
INFO:root:Epoch[1] Time cost=0.520
INFO:root:Saved checkpoint to "mx_mlp-0002.params"
INFO:root:Epoch[2] Train-accuracy=0.457250
INFO:root:Epoch[2] Time cost=0.562
INFO:root:Saved checkpoint to "mx_mlp-0003.params"
INFO:root:Epoch[3] Train-accuracy=0.558187
INFO:root:Epoch[3] Time cost=0.434
INFO:root:Saved checkpoint to "mx_mlp-0004.params"
INFO:root:Epoch[4] Train-accuracy=0.617750
INFO:root:Epoch[4] Time cost=0.414
INFO:root:Saved checkpoint to "mx_mlp-0005.params"
```
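The checkpoint files in the log above follow a simple naming scheme: the prefix passed to `mx.callback.do_checkpoint` plus a zero-padded four-digit epoch number. A sketch of that pattern (the `checkpoint_name` helper is illustrative, not part of the MXNet API; the real callback also writes a `mx_mlp-symbol.json` holding the network definition):

```python
# Sketch of the checkpoint naming seen in the log above: prefix plus a
# zero-padded 4-digit epoch number.
def checkpoint_name(prefix, epoch):
    return "%s-%04d.params" % (prefix, epoch)

names = [checkpoint_name("mx_mlp", e + 1) for e in range(5)]
print(names[0])   # mx_mlp-0001.params
print(names[-1])  # mx_mlp-0005.params
```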

To load the saved module parameters, call the `load_checkpoint` function. It
loads the Symbol and the associated parameters. We can then set the loaded
@@ -299,16 +322,25 @@ mod.fit(train_iter,
assert score[0][1] > 0.77, "Achieved accuracy (%f) is less than expected (0.77)" % score[0][1]
```

INFO:root:Epoch[3] Train-accuracy=0.544125
INFO:root:Epoch[3] Time cost=0.398
INFO:root:Epoch[4] Train-accuracy=0.605250
INFO:root:Epoch[4] Time cost=0.545
INFO:root:Epoch[5] Train-accuracy=0.644312
INFO:root:Epoch[5] Time cost=0.592
INFO:root:Epoch[6] Train-accuracy=0.675000
INFO:root:Epoch[6] Time cost=0.491
INFO:root:Epoch[7] Train-accuracy=0.695812
INFO:root:Epoch[7] Time cost=0.363

Expected output:


```
INFO:root:Epoch[3] Train-accuracy=0.555438
INFO:root:Epoch[3] Time cost=0.377
INFO:root:Epoch[4] Train-accuracy=0.616625
INFO:root:Epoch[4] Time cost=0.457
INFO:root:Epoch[5] Train-accuracy=0.658438
INFO:root:Epoch[5] Time cost=0.518
...........................................
INFO:root:Epoch[18] Train-accuracy=0.788687
INFO:root:Epoch[18] Time cost=0.532
INFO:root:Epoch[19] Train-accuracy=0.789562
INFO:root:Epoch[19] Time cost=0.531
INFO:root:Epoch[20] Train-accuracy=0.796250
INFO:root:Epoch[20] Time cost=0.531
```



8 changes: 4 additions & 4 deletions docs/tutorials/basic/symbol.md
@@ -89,7 +89,7 @@ f = mx.sym.reshape(d+e, shape=(1,4))
# broadcast
g = mx.sym.broadcast_to(f, shape=(2,4))
# plot
mx.viz.plot_network(symbol=g)
mx.viz.plot_network(symbol=g, node_attrs={"shape":"oval","fixedsize":"false"})
```

The computations declared in the above examples can be bound to the input data
@@ -108,7 +108,7 @@ net = mx.sym.FullyConnected(data=net, name='fc1', num_hidden=128)
net = mx.sym.Activation(data=net, name='relu1', act_type="relu")
net = mx.sym.FullyConnected(data=net, name='fc2', num_hidden=10)
net = mx.sym.SoftmaxOutput(data=net, name='out')
mx.viz.plot_network(net, shape={'data':(100,200)})
mx.viz.plot_network(net, shape={'data':(100,200)}, node_attrs={"shape":"oval","fixedsize":"false"})
```

Each symbol takes a (unique) string name. NDArray and Symbol both represent
@@ -211,7 +211,7 @@ def ConvFactory(data, num_filter, kernel, stride=(1,1), pad=(0, 0),name=None, su
prev = mx.sym.Variable(name="Previous Output")
conv_comp = ConvFactory(data=prev, num_filter=64, kernel=(7,7), stride=(2, 2))
shape = {"Previous Output" : (128, 3, 28, 28)}
mx.viz.plot_network(symbol=conv_comp, shape=shape)
mx.viz.plot_network(symbol=conv_comp, shape=shape, node_attrs={"shape":"oval","fixedsize":"false"})
```

Then we can define a function that constructs an inception module based on
@@ -237,7 +237,7 @@ def InceptionFactoryA(data, num_1x1, num_3x3red, num_3x3, num_d3x3red, num_d3x3,
return concat
prev = mx.sym.Variable(name="Previous Output")
in3a = InceptionFactoryA(prev, 64, 64, 64, 64, 96, "avg", 32, name="in3a")
mx.viz.plot_network(symbol=in3a, shape=shape)
mx.viz.plot_network(symbol=in3a, shape=shape, node_attrs={"shape":"oval","fixedsize":"false"})
```

Finally, we can obtain the whole network by chaining multiple inception
12 changes: 6 additions & 6 deletions docs/tutorials/control_flow/ControlFlowTutorial.md
@@ -15,13 +15,13 @@ from mxnet.gluon import HybridBlock
## foreach
`foreach` is a for loop that iterates over the first dimension of the input data (it can be an array or a list of arrays). It is defined with the following signature:

```python
```
foreach(body, data, init_states, name) => (outputs, states)
```

It runs the Python function defined in `body` for every slice from the input arrays. The signature of the `body` function is defined as follows:

```python
```
body(data, states) => (outputs, states)
```
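As a rough mental model, the `foreach` contract above can be sketched in plain Python. This is an illustration of the semantics only, not the actual `mx.sym.foreach` implementation, which operates on symbols and NDArrays and stacks the outputs into arrays:

```python
# Plain-Python sketch of foreach's contract: apply `body` to each slice
# along the first axis, thread the states through every step, and collect
# the per-step outputs.
def foreach(body, data, init_states):
    states = init_states
    outputs = []
    for data_slice in data:
        out, states = body(data_slice, states)
        outputs.append(out)
    return outputs, states

# Running sum as the loop body: emit each partial sum, carry it as state.
outs, final_states = foreach(
    body=lambda x, states: (x + states[0], [x + states[0]]),
    data=[1, 2, 3],
    init_states=[0],
)
print(outs, final_states)   # [1, 3, 6] [6]
```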

@@ -243,13 +243,13 @@ res, states = lstm(rnn_data, [x for x in init_states], valid_length)
## while_loop
`while_loop` defines a while loop. It has the following signature:

```python
```
while_loop(cond, body, loop_vars, max_iterations, name) => (outputs, states)
```

Instead of running over the first dimension of an array, `while_loop` checks a condition function in every iteration and runs a `body` function for computation. The signature of the `body` function is defined as follows:

```python
```
body(state1, state2, ...) => (outputs, states)
```
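The `while_loop` contract can likewise be sketched in plain Python. Again this only illustrates the semantics (condition checked each iteration, `body` returning per-step outputs plus updated loop variables), not the actual `mx.sym.while_loop` operator:

```python
# Plain-Python sketch of while_loop's contract: run `body` while `cond`
# holds on the loop variables, up to max_iterations, collecting each
# step's output.
def while_loop(cond, body, loop_vars, max_iterations):
    outputs = []
    steps = 0
    while steps < max_iterations and cond(*loop_vars):
        out, loop_vars = body(*loop_vars)
        outputs.append(out)
        steps += 1
    return outputs, loop_vars

# Accumulate integers until the running total reaches 10.
outs, (i, total) = while_loop(
    cond=lambda i, total: total < 10,
    body=lambda i, total: (total, (i + 1, total + i)),
    loop_vars=(1, 0),
    max_iterations=100,
)
print(outs, i, total)   # [0, 1, 3, 6] 5 10
```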

@@ -297,13 +297,13 @@ print(state)
## cond
`cond` defines an if condition. It has the following signature:

```python
```
cond(pred, then_func, else_func, name)
```

`cond` checks `pred`, which is a symbol or an NDArray with one element. If its value is true, it calls `then_func`; otherwise, it calls `else_func`. The signatures of `then_func` and `else_func` are as follows:

```python
```
func() => [outputs]
```
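The `cond` contract reduces to ordinary branching; a plain-Python sketch of the semantics (not the actual `mx.sym.cond` operator, which builds both branches into the graph and selects one at runtime):

```python
# Plain-Python sketch of cond's contract: evaluate a scalar predicate,
# then run exactly one of the two branch functions and return its outputs.
def cond(pred, then_func, else_func):
    return then_func() if pred else else_func()

result = cond(3 > 2, lambda: ["then branch"], lambda: ["else branch"])
print(result)   # ['then branch']
```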

1 change: 1 addition & 0 deletions docs/tutorials/gluon/hybrid.md
@@ -125,6 +125,7 @@ with other language front-ends like C, C++ and Scala. To this end, we simply
use `export` and `SymbolBlock.imports`:

```python
net(x)
net.export('model', epoch=1)
```
