[SPARK-11262][ML] Unit test for gradient, loss layers, memory management for multilayer perceptron #9229
Conversation
Test build #44163 has finished for PR 9229 at commit

@mengxr Could you take a look? @feynmanliang Sorry, these changes conflict with some of your changes in #8648, because I simplified the interface of ActivationFunction to
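For reference, a minimal Scala sketch of the shape the simplified interface takes: the two-function trait mirrors the design discussed here, but the SigmoidFunction body is illustrative rather than the exact merged code.

```scala
// Sketch: an activation function reduced to a scalar function and its
// derivative; layers apply both element-wise to their matrices.
trait ActivationFunction extends Serializable {
  def eval: Double => Double
  def derivative: Double => Double
}

// Illustrative implementation: the logistic sigmoid. The derivative is
// written in terms of the activation output z, i.e. sigmoid'(x) = z * (1 - z).
class SigmoidFunction extends ActivationFunction {
  override def eval: Double => Double = x => 1.0 / (1.0 + math.exp(-x))
  override def derivative: Double => Double = z => (1.0 - z) * z
}
```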
Use a 4-space indent.
Made a pass

@feynmanliang Thank you! I've addressed all comments except those related to the
Test build #44188 has finished for PR 9229 at commit

Test build #44190 has finished for PR 9229 at commit

Test build #44193 has finished for PR 9229 at commit

Test build #45311 has finished for PR 9229 at commit

Test build #45410 has finished for PR 9229 at commit
Force-pushed from 129c9aa to 083a9b3
@mengxr Could you suggest why it continues to fail? Thanks.
@feynmanliang Thanks for the tip! My unit tests fail because of a problem with the new RNG in Spark, so I updated the seeds and the asserts. However, the failure seems to come from the old code:

retest this please

Test build #45658 has finished for PR 9229 at commit

retest this please

Test build #45680 has finished for PR 9229 at commit

Test build #45698 has finished for PR 9229 at commit
@mengxr Could you suggest the best practice for handling zeros when computing the logarithm of the output? https://github.com/avulanov/spark/blob/mlp-refactoring/mllib/src/main/scala/org/apache/spark/ml/ann/LossFunction.scala#L122
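One common way to handle this is to clamp the argument of the logarithm with a small epsilon so the loss stays finite when a predicted probability underflows to zero. A sketch under my own assumptions (the `eps` value and object name are illustrative, not necessarily what was merged):

```scala
import breeze.linalg.{DenseMatrix => BDM}

// Sketch: cross-entropy with an epsilon clamp to avoid log(0).
object CrossEntropySketch {
  private val eps = 1e-15

  /** Cross-entropy of predicted probabilities vs. 0/1 targets,
   *  averaged over data points stacked as matrix columns. */
  def loss(output: BDM[Double], target: BDM[Double]): Double = {
    var sum = 0.0
    for (i <- 0 until output.rows; j <- 0 until output.cols) {
      // math.max keeps the argument of log strictly positive
      sum -= target(i, j) * math.log(math.max(output(i, j), eps))
    }
    sum / output.cols
  }
}
```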
Test build #53461 has finished for PR 9229 at commit
Force-pushed from e5bef10 to b21b56a
Test build #54347 has finished for PR 9229 at commit
@avulanov This PR still has conflicts with master.
Force-pushed from b21b56a to 94dcec0
@mengxr That is a
Test build #54646 has finished for PR 9229 at commit

Build exception unrelated to PR

test this please

Test build #54671 has finished for PR 9229 at commit
LGTM. Merged into master. Thanks and sorry for the long delay in code review!

@mengxr @feynmanliang Thank you!
1. Implement a LossFunction trait and implement squared error and cross-entropy loss with it
2. Implement unit tests for gradient and loss (see the gradient-check sketch below)
3. Implement an InPlace trait and in-place layer evaluation
4. Refactor the interface of ActivationFunction
5. Update the Layer and LayerModel interfaces
6. Fix random weights assignment
7. Implement memory allocation by the MLP model instead of by individual layers

These changes decrease memory usage and increase the flexibility of the internal API.
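As a rough illustration of item 2, a gradient unit test typically compares the analytic gradient against a central finite-difference approximation. The sketch below is hypothetical (the loss/grad closures, step size, and tolerance are assumptions), not the test code merged with this PR:

```scala
// Hypothetical gradient check: perturb each weight by +/- h and compare
// the numerical slope with the analytic gradient, within a tolerance.
object GradientCheckSketch {
  def check(
      loss: Array[Double] => Double,        // loss as a function of the weights
      grad: Array[Double] => Array[Double], // analytic gradient of the loss
      weights: Array[Double],
      h: Double = 1e-5,
      tol: Double = 1e-4): Boolean = {
    val analytic = grad(weights)
    weights.indices.forall { i =>
      val w = weights.clone()
      w(i) += h
      val lossPlus = loss(w)
      w(i) -= 2 * h
      val lossMinus = loss(w)
      val numerical = (lossPlus - lossMinus) / (2 * h)
      math.abs(numerical - analytic(i)) <= tol
    }
  }
}
```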