Conversation
Added the option to fill a blob using a different constant value to each channel
Is this better than passing a mean image file with the same value at every pixel?
I think this way is better because one can define it in the prototxt. Sergio
Ok, good motivation. Could you include an example network using this instead of the mean image file?
I have run some tests. If one does not remove the mean from the images at all, performance drops substantially, by 7.66%. If one removes the average pixel value from the image, performance drops 1.13%, while if one removes the average RGB values of the training data from the image, performance drops only 0.23%. So anyone using a wrapper can just remove the mean RGB values and lose very little performance.
Could this be better as a DataProcessing layer (#148)? With a MeanSubtraction layer one could define a mean image, a single value, or a per-channel value as desired and use it for training, testing, and deployment. A workaround is to add an InnerProduct layer immediately after the input with its bias filled with the negative mean, but that doesn't feel clean.
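For anyone taking the wrapper route described above, here is a minimal sketch of per-channel mean subtraction in Python/NumPy. The mean values and BGR channel ordering are illustrative assumptions, not numbers taken from this thread:

```python
import numpy as np

# Illustrative per-channel means in BGR order (Caffe loads images as BGR);
# substitute the means computed from your own training data.
MEAN_BGR = np.array([104.0, 117.0, 123.0], dtype=np.float32)

def subtract_channel_mean(image):
    """Subtract a constant per-channel mean from an H x W x 3 image."""
    return image.astype(np.float32) - MEAN_BGR.reshape(1, 1, 3)

# Usage: preprocess in the wrapper instead of loading a mean image file, e.g.
# net.blobs['data'].data[0] = subtract_channel_mean(img).transpose(2, 0, 1)
```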
Revert caffe-0.13 branch
standardize memory optimization configurations

* yjxiong/fix/mem_config:
  take care of share data with excluded blob
  improvise memory opt configs
  fix cudnn conv legacy bug (BVLC#96)
  add TOC
  Update README.md
  Update README.md (BVLC#95)
  Update README.md
  Improve the python interface (BVLC#80)
  Update README.md
Merge branch 'imagenet_vid_2016' of https://github.com/myfavouritekk/caffe into imagenet_vid_2016

* 'imagenet_vid_2016' of https://github.com/myfavouritekk/caffe:
  take care of share data with excluded blob
  Revert "Fix a bug when setting no_mem_opt: true for layers near in-place layers."
  improvise memory opt configs
  fix cudnn conv legacy bug (BVLC#96)
  add TOC
  Update README.md
  Update README.md (BVLC#95)
  Update README.md
  Improve the python interface (BVLC#80)
  Update README.md
Added the option to fill a blob using a different constant value per channel.
This change allows the mean_image of a data_layer to be filled with the per-channel mean RGB values.
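As a rough illustration of the semantics (a NumPy sketch, not the C++ filler added by this change; the shapes and values are assumptions for the example), filling a blob with one constant per channel is equivalent to a mean image whose pixels within each channel all share that channel's value:

```python
import numpy as np

def constant_per_channel_blob(shape, values):
    """Fill an (N, C, H, W) blob with one constant per channel."""
    n, c, h, w = shape
    assert len(values) == c, "need one constant per channel"
    blob = np.empty(shape, dtype=np.float32)
    for channel, value in enumerate(values):
        blob[:, channel, :, :] = value
    return blob

# A 1 x 3 x 256 x 256 mean blob built from three per-channel constants:
mean_blob = constant_per_channel_blob((1, 3, 256, 256), [104.0, 117.0, 123.0])
```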