Conversation
|
@jeffdonahue I wasn't sure if, instead of copying the top_diff to the bottom_diff, that could now be replaced with …
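For context, here is a minimal sketch of the two alternatives under discussion. It uses real Caffe APIs (`caffe_copy`, `Blob::ShareDiff`), but the function name `TestPhaseBackward` is hypothetical and this is an illustration, not the PR's actual diff:

```cpp
#include "caffe/blob.hpp"
#include "caffe/util/math_functions.hpp"

// Illustrative only: what dropout's backward pass could do in TEST phase,
// where no mask is applied and the gradient passes straight through.
template <typename Dtype>
void TestPhaseBackward(caffe::Blob<Dtype>* top, caffe::Blob<Dtype>* bottom) {
  // Option A: copy the top diff straight into the bottom diff.
  caffe::caffe_copy(top->count(), top->cpu_diff(),
                    bottom->mutable_cpu_diff());
  // Option B (the alternative raised above): share the diff memory
  // instead, avoiding the copy entirely:
  //   bottom->ShareDiff(*top);
}
```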
src/caffe/test/test_neuron_layer.cpp
Outdated
Should this case really be in the tests? You can just skip this test with a filter on a machine where you don't want to run it.
I just copied the code from the TRAIN mode, so I don't know who added that in the first place. Maybe we can remove it.
@Yangqing I checked the log and it seems that you added the `major >= 2` check in #502 (diff).
Should we keep it or remove it?
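For reference, the guard in question looks roughly like this in Caffe's GPU tests (a sketch from memory; `CAFFE_TEST_CUDA_PROP` is the device-properties struct the test harness fills in at startup). It skips gradient checks on devices with compute capability below 2.0:

```cpp
// Sketch of the compute-capability guard around a GPU gradient check.
if (CAFFE_TEST_CUDA_PROP.major >= 2) {
  // run the GradientChecker on this layer ...
} else {
  LOG(ERROR) << "Skipping test due to old architecture.";
}
```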
|
@sguada I agree we should be able to compute the gradient in TEST, but I'm less comfortable with the copy and pointer checking. Please rebase for a clean merge too.
|
@shelhamer talking with @jeffdonahue, we thought it was safer to just do the copy, and to add logic that skips the copy when source and destination are the same. I wasn't sure if it was safe to use …
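A minimal sketch of that skip-the-copy guard (illustrative, not necessarily the PR's exact diff): for in-place dropout, top and bottom are the same blob, so the diff pointers coincide and the copy should be skipped.

```cpp
// Copy the top diff into the bottom diff only when they are distinct
// buffers; in-place layers use the same blob for top and bottom.
if (bottom[0]->mutable_cpu_diff() != top[0]->cpu_diff()) {
  caffe_copy(top[0]->count(), top[0]->cpu_diff(),
             bottom[0]->mutable_cpu_diff());
}
```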
|
I think there could be a case where during TRAIN the data is not shared, but then after running in TEST mode it becomes shared, so once it is back in TRAIN it stays shared. Although this case may never appear, since we usually use this layer in-place.
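To spell out that concern, here is a hypothetical sequence assuming the `ShareDiff` alternative rather than the copy:

```cpp
// TRAIN: bottom diff owns its memory; backward writes the masked gradient.
// Switch to TEST:
bottom->ShareDiff(*top);  // bottom diff now aliases top diff
// Switch back to TRAIN:
// sharing is sticky, so writing bottom's masked gradient would now also
// clobber top's diff -- which is why the plain copy was considered safer.
```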
|
Ok, fair enough! Rebase and merge as you like.
|
Conflicts:
	src/caffe/layers/dropout_layer.cpp
	src/caffe/layers/dropout_layer.cu
|
It is rebased and ready to merge, once @Yangqing confirms that the …
|
Fix dropout backward in TEST phase
This PR just adds the option to pass gradients backwards during the TEST phase.