Note: Providing complete information in the most concise form is the best way to get help. This issue template serves as a checklist of the essential information for most technical issues and bug reports. For non-technical issues and feature requests, feel free to present the information in whatever form you believe is best.
For Q & A and discussion, please start a discussion thread at https://discuss.mxnet.io
Description
Training the word-language-model example with SparseEmbedding and a fixed seed doesn't produce a consistent loss across runs. @ZiyueHuang
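For reference, by "fixed seed" I mean that all relevant RNGs are seeded before training starts. A minimal sketch of how that is typically done in MXNet (the exact seeding in train.py may differ; this snippet is an assumption, not a quote from the script):

```python
import random
import numpy as np
import mxnet as mx

seed = 1
random.seed(seed)     # Python's built-in RNG
np.random.seed(seed)  # NumPy RNG (e.g. data shuffling)
mx.random.seed(seed)  # MXNet's RNG (parameter init, dropout masks)
```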
Environment info (Required)
What to do:
1. Download the diagnosis script from https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/diagnose.py
2. Run the script using `python diagnose.py` and paste its output here.
Package used (Python/R/Scala/Julia):
I'm using Python.
For Scala users, please provide:
- Java version: (`java -version`)
- Maven version: (`mvn -version`)
- Scala runtime if applicable: (`scala -version`)
For R users, please provide R `sessionInfo()`:
Build info (Required if built from source)
Compiler (gcc/clang/mingw/visual studio):
MXNet commit hash: 5c3acff
(Paste the output of git rev-parse HEAD here.)
Build config:
(Paste the content of config.mk, or the build command.)
Error Message:
(Paste the complete error message, including stack trace.)
Minimum reproducible example
(If you are using your own code, please provide a short script that reproduces the error. Otherwise, please provide a link to the existing example.)
Steps to reproduce
(Paste the commands you ran that produced the error.)
- replace https://github.com/apache/incubator-mxnet/blob/master/example/rnn/word_lm/model.py#L24-L26 with the following code (see the note on `row_sparse` storage after this list):
```python
# Toggle between the dense Embedding and the contrib SparseEmbedding operator.
dense = True
# SparseEmbedding requires the weight to use row_sparse storage.
stype = 'default' if dense else 'row_sparse'
weight = mx.sym.var("encoder_weight", init=mx.init.Uniform(0.1), stype=stype)
EMB = mx.sym.Embedding if dense else mx.sym.contrib.SparseEmbedding
embed = EMB(data=data, weight=weight, input_dim=vocab_size,
            output_dim=num_embed, name='embed')
```
- replace https://github.com/apache/incubator-mxnet/blob/master/example/rnn/word_lm/train.py#L114 with the line below, which (if I read the example's custom module correctly) disables the global gradient-norm clipping:

```python
module.update(max_norm=None)
```
- run `python train.py --emsize=200` with `dense = True`; the loss at batch 200 is identical on every run:

```
2018-01-04 18:50:50,622 Iter[0] Batch [200] Loss: 690.8467010
```
- run `python train.py --emsize=200` with `dense = False`; the loss at batch 200 varies across runs:

```
Run 1: 2018-01-04 18:49:28,784 Iter[0] Batch [200] Loss: 600.4843252
Run 2: 2018-01-04 18:49:08,760 Iter[0] Batch [200] Loss: 669.0238662
```
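Note on `row_sparse` storage (my understanding of the contrib operator, stated here as an assumption): SparseEmbedding differs from the dense `mx.sym.Embedding` mainly in that the weight and its gradient use `row_sparse` storage, so only the rows for words that appear in a batch are touched during the update. A small self-contained illustration of that storage type:

```python
import mxnet as mx

# A row_sparse NDArray keeps only the populated rows plus their indices;
# this is the storage type selected by stype='row_sparse' above.
rsp = mx.nd.sparse.row_sparse_array(
    ([[1., 2.], [3., 4.]], [0, 3]),  # (row data, row indices)
    shape=(6, 2))
print(rsp.stype)      # 'row_sparse'
print(rsp.asnumpy())  # dense view: rows 0 and 3 populated, all others zero
```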
What have you tried to solve it?
- the loss still varies across runs after setting `dropout=0`
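One hypothesis I have not been able to verify against the SparseEmbedding kernel (so treat it as a guess): floating-point addition is not associative, so if gradient rows are accumulated in a nondeterministic order (e.g. by parallel threads), identical seeds can still yield different losses. A toy NumPy illustration of the underlying effect:

```python
import numpy as np

# Summing the same float32 values in two different orders gives two
# different answers, because floating-point addition is not associative.
vals = np.array([1e8, 1.0, -1e8], dtype=np.float32)
print(np.sum(vals))                 # (1e8 + 1.0) - 1e8  -> 0.0
print(vals[0] + vals[2] + vals[1])  # (1e8 - 1e8) + 1.0  -> 1.0
```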