Preserve shape of extracted features #1457
Conversation
The extract features tool was contributed by the community, so a few …

This seems like a reasonable change of behavior. I'm traveling, but we'll …
OK, that makes sense. Not being a Python person, I wasn't aware of the net surgery you referenced in the other thread. Maybe I need to take the plunge. Thanks for the reply. We'll be glad to get you all back.
…apes extract_features preserves feature shape
I merged this to master in c942dc1 since other users have encountered this. Although on the whole the … Thanks for the change @jyegerlehner.
@shelhamer I started looking at the rewrite. Datum is still used heavily in the code, and it is hard-wired to have channels/height/width explicitly. So is the way forward on that to have a deprecated V1Datum (like we have with V0LayerParameter and V1LayerParameter), with the new Datum having a repeated dim instead of channels/height/width? If so, the scope of that change goes beyond just the extract features util. I can attempt it, but I am a bit daunted by my lack of understanding of how all these classes fit together.
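For illustration, the restructuring being discussed might look like the following hypothetical `.proto` sketch. This is an assumption about the shape of the change, not the actual `caffe.proto` definitions; field names and numbers for the new message are made up, mirroring how V0LayerParameter/V1LayerParameter were kept around for compatibility:

```protobuf
// Hypothetical sketch only -- not the real caffe.proto.
// The existing Datum, with hard-wired dimensions, would be frozen as V1Datum:
message V1Datum {
  optional int32 channels = 1;
  optional int32 height = 2;
  optional int32 width = 3;
  optional bytes data = 4;
  optional int32 label = 5;
  repeated float float_data = 6;
}

// A new Datum could carry an arbitrary-rank shape instead:
message Datum {
  repeated int64 dim = 1 [packed = true];  // e.g. [channels, height, width]
  optional bytes data = 2;
  optional int32 label = 3;
  repeated float float_data = 4;
}
```

With a `repeated dim`, readers would no longer need to special-case rank-3 blobs, at the cost of migrating every code path that touches `datum.channels()` and friends.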
Is there a reason why extract_features changes the shape of a feature datum from channels x height x width to 1 x channels_height_width x 1? This caused a problem when I extracted features to use as the training data for the next stacked encoder/decoder pair of layers: I was unable to train with them because their shape was lost. The convolutional layer needs to know the correct width and height of its input, and that information was gone. I couldn't find a way to tell the net to reshape the training data back to the correct feature dimensions. The simplest solution appears to me to be to just not mangle the dimensions in the first place.
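The shape mangling described above can be illustrated with a small NumPy sketch (NumPy stands in for the Datum here, and the dimensions are invented for the example):

```python
import numpy as np

# A conv feature map with known spatial structure: channels x height x width.
c, h, w = 256, 13, 13
feature = np.arange(c * h * w, dtype=np.float32).reshape(c, h, w)

# Roughly what the old extract_features behavior did: flatten to
# 1 x (c*h*w) x 1, discarding the channel/height/width structure.
flattened = feature.reshape(1, c * h * w, 1)

# All the values survive, but the spatial layout is gone; a downstream
# conv layer cannot infer h and w from a shape like (1, 43264, 1).
assert flattened.size == feature.size
assert flattened.shape != feature.shape

# Recovery is only possible if c, h, w are known out of band:
recovered = flattened.reshape(c, h, w)
assert np.array_equal(recovered, feature)
```

Preserving the original dimensions when writing the datum avoids the need for any out-of-band bookkeeping on the reading side.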