Conversation
also, the current code works on the latest Julia build (1.6.0). It seems Julia 1.0 only supports Tracker-based Flux, while the latest version supports Zygote.jl
Codecov Report
@@           Coverage Diff           @@
##           master      #92   +/-   ##
=======================================
  Coverage   81.20%   81.20%
=======================================
  Files           6        6
  Lines         782      782
=======================================
  Hits          635      635
  Misses        147      147

Continue to review full report at Codecov.
we should make this into a documentation page instead, I think; it will also be built at every PR
odow
left a comment
The warning suggests you are doing some arithmetic outside the JuMP macros. I couldn't see anything obvious here, so it must be somewhere inside DiffOpt.jl.
Co-authored-by: Oscar Dowson <odow@users.noreply.github.com>
examples/custom-relu-mnist.jl
Outdated
@objective(
    model,
    Min,
    x'x - 2x'y[:, i]
)
This is where the warning comes from I believe, this should be fixed by jump-dev/MutableArithmetics.jl#87
solved. It seems there's a set_objective_coefficient method, which also speeds up training a bit
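A sketch of how this could work (variable names `model`, `x`, `y`, and the loop index `i` are assumed from the example file; the quadratic part of the objective never changes between samples, so only the linear coefficients need updating):

```julia
using JuMP

# Set the objective once: min x'x - 2x'y[:, i].
# The quadratic term x'x is fixed across samples.
@objective(model, Min, x' * x)

# For each training sample i, update only the linear coefficients
# -2y[:, i] in place instead of rebuilding the whole expression,
# which avoids the MutableArithmetics warning and is faster:
for j in eachindex(x)
    set_objective_coefficient(model, x[j], -2 * y[j, i])
end
```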
I agree, it would be helpful to run it at every commit with Literate
examples/custom-relu-mnist.jl
Outdated
| """ | ||
| relu method for a Matrix | ||
| """ | ||
| function myRelu(y::AbstractMatrix{T}; model = Model(() -> diff_optimizer(OSQP.Optimizer))) where {T} |
matrix_relu better than myRelu? Also write out the model in the docstring
Co-authored-by: Mathieu Besançon <mathieu.besancon@gmail.com>
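Combining both suggestions, the renamed function with the model written out in its docstring might look like this (a sketch only; the projection-QP formulation of ReLU is assumed from the context of the example, and the function body is elided):

```julia
"""
    matrix_relu(y::AbstractMatrix; model)

Compute a ReLU on each column of `y` through differentiable
optimization, by solving the projection problem

    min  x'x - 2x'y[:, i]
    s.t. x >= 0

whose optimal solution is `x = max.(y[:, i], 0)`.
"""
function matrix_relu(
    y::AbstractMatrix{T};
    model = Model(() -> diff_optimizer(OSQP.Optimizer)),
) where {T}
    # ...
end
```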
status on this one? Should we merge it or make it a doc page first?
@matbesancon should we close this (without merging)?
#95 will be lighter to review if this gets merged first.
otherwise there is a risk of this example breaking at some point
bump @be-apt to add the example to CI
closing this. It duplicates the code in #95
thanks to @matbesancon for #91. Added a very trivial example of implementing ReLU as a layer in a neural network trained on the MNIST dataset using Flux.jl.
There are some pending issues: specifically, training on the complete dataset is very slow, and it gives a JuMP warning: