Note: Providing complete information in the most concise form is the best way to get help. This issue template serves as the checklist for essential information to most of the technical issues and bug reports. For non-technical issues and feature requests, feel free to present the information in what you believe is the best form.
For Q & A and discussion, please start a discussion thread at https://discuss.mxnet.io
Description
Is the gradient factor correct in regression operators (such as `LinearRegressionOutput`)? https://github.com/apache/incubator-mxnet/blob/master/src/operator/regression_output-inl.h#L89-L90
Strictly speaking, dividing by `num_output` makes no sense, since we don't know the layout of the label.
Even with the common layout (batch_size first), the implementation is not consistent with the documentation:
> By default, gradients of this loss function are scaled by factor 1/n, where n is the number of training examples. The parameter `grad_scale` can be used to change this scale to `grad_scale`/n.
Here `num_output` is actually the number of output dimensions, not the number of training examples.
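A minimal NumPy sketch of the discrepancy (not MXNet source; the shapes and `grad_scale` value are arbitrary illustrations): scaling by `1/batch_size` as documented versus scaling by `1/num_output` as the linked lines appear to do gives different gradients whenever the two sizes differ.

```python
import numpy as np

batch_size, num_output = 4, 3
grad_scale = 1.0

pred = np.arange(batch_size * num_output, dtype=np.float64).reshape(batch_size, num_output)
label = np.ones_like(pred)

# Documented behavior: scale by 1/n, where n = number of training examples.
grad_documented = grad_scale / batch_size * (pred - label)

# Behavior suggested by the implementation: scale by 1/num_output.
grad_implemented = grad_scale / num_output * (pred - label)

# The two disagree whenever batch_size != num_output.
print(np.allclose(grad_documented, grad_implemented))
```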
cc @eric-haibin-lin
Environment info (Required)
What to do:
1. Download the diagnosis script from https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/diagnose.py
2. Run the script using `python diagnose.py` and paste its output here.
Package used (Python/R/Scala/Julia):
(I'm using ...)
For Scala users, please provide:
- Java version: (`java -version`)
- Maven version: (`mvn -version`)
- Scala runtime if applicable: (`scala -version`)
For R users, please provide the output of `sessionInfo()`:
Build info (Required if built from source)
Compiler (gcc/clang/mingw/visual studio):
MXNet commit hash:
(Paste the output of `git rev-parse HEAD` here.)
Build config:
(Paste the content of `config.mk`, or the build command.)
Error Message:
(Paste the complete error message, including stack trace.)
Minimum reproducible example
(If you are using your own code, please provide a short script that reproduces the error. Otherwise, please provide a link to the existing example.)
Steps to reproduce
(Paste the commands you ran that produced the error.)
What have you tried to solve it?