Changes from all commits (98 commits)
cd953c8
Add and test Net::HasBlob and GetBlob to simplify feature extraction
kloudkl Feb 23, 2014
760d098
Add and test Net::HasLayer and GetLayerByName
kloudkl Feb 23, 2014
e76f7dc
Add image retrieval example
kloudkl Feb 23, 2014
f0336e1
Add feature extraction example
kloudkl Feb 23, 2014
b7b9dd8
Add feature binarization example
kloudkl Feb 23, 2014
fc740a3
Simplify image retrieval example to use binary features directly
kloudkl Feb 23, 2014
4de8280
Add __builtin_popcount* based fast Hamming distance math function
kloudkl Feb 25, 2014
dfe6380
Fix bugs in the feature extraction example
kloudkl Feb 25, 2014
01bb481
Enhance help, log message & format of the feature extraction example
kloudkl Feb 25, 2014
cfb2f91
Fix bugs of the feature binarization example
kloudkl Feb 25, 2014
23eecde
Fix bugs in the image retrieval example
kloudkl Feb 25, 2014
dd13fa0
Fix saving real valued feature bug in the feature extraction example
kloudkl Feb 25, 2014
706a926
Change feature binarization threshold to be the mean of all the values
kloudkl Feb 25, 2014
f97e87b
Save and load data correctly in feat extracion, binarization and IR demo
kloudkl Feb 26, 2014
c60d551
Move extract_features, binarize_features, retrieve_images to tools/
kloudkl Feb 26, 2014
8e7153b
Use lowercase underscore naming convention for Net blob & layer getters
kloudkl Feb 26, 2014
5bcdebd
Fix cpplint errors for Net, its tests and feature related 3 examples
kloudkl Feb 26, 2014
6a60795
Don't create a new batch after all the feature vectors have been saved
kloudkl Mar 17, 2014
25b6bcc
Add a python script to generate a list of all the files in a directory
kloudkl Mar 17, 2014
a2ad3c7
Add documentation for the feature extraction demo
kloudkl Mar 17, 2014
a967cf5
Move binarize_features, retrieve_images to examples/feauture_extraction
kloudkl Mar 18, 2014
44ebe29
Removing feature binarization and image retrieval examples
kloudkl Mar 19, 2014
c7201f7
Change generate file list python script path in feature extraction doc
kloudkl Mar 19, 2014
72c8c9e
Explain how to get the mean image of ILSVRC
kloudkl Mar 19, 2014
748aaff
change specification of forward/backward function and fix layer
jeffdonahue Mar 14, 2014
aee5f54
fix net_speed_benchmark so 'make all' works
jeffdonahue Mar 14, 2014
305e731
make tests compile and pass
jeffdonahue Mar 14, 2014
5e98253
test_gradient_check_util: blobid -> blob_id
jeffdonahue Mar 14, 2014
d54833e
gradient checker optimization with forward pass loss: only need to run
jeffdonahue Mar 14, 2014
74ed9e0
revert unnecessary reordering of lines in softmaxwithlosslayer backward
jeffdonahue Mar 14, 2014
8a3f0c2
remove accidentally added empty line
jeffdonahue Mar 14, 2014
ed23b68
fix softmax loss layer bug; all tests pass
jeffdonahue Mar 14, 2014
44fbb82
loss in forward pass for concat layer (thought i'd rebased to latest dev
jeffdonahue Mar 14, 2014
0551d93
null pointer defaults for forward loss outputs
jeffdonahue Mar 15, 2014
a6ae5be
post rebase fixes: images layer and padding layer compute loss in
jeffdonahue Mar 19, 2014
c10ba54
Merge pull request #161 from kloudkl/simplify_feature_extraction
sergeyk Mar 20, 2014
3b51aab
Fix to #161
sergeyk Mar 20, 2014
5086288
Back-merge documentation and script fixes
shelhamer Mar 20, 2014
e6ef9ca
Merge pull request #209 from jeffdonahue/loss-in-forward-pass
jeffdonahue Mar 21, 2014
a123130
loss in forward pass fix for window data layer
jeffdonahue Mar 21, 2014
510b3c0
Merge pull request #247 from jeffdonahue/loss-in-forward-window-data-…
jeffdonahue Mar 21, 2014
e4e93f4
compile caffe without MKL (dependency replaced by boost::random, Eigen3)
rodrigob Dec 8, 2013
04ca88a
Fixed uniform distribution upper bound to be inclusive
kloudkl Jan 11, 2014
d666bdc
Fixed FlattenLayer Backward_cpu/gpu have no return value
kloudkl Jan 11, 2014
38457e1
Fix test stochastic pooling stepsize/threshold to be same as max pooling
kloudkl Jan 11, 2014
788f070
Fix math funcs, add tests, change Eigen Map to unaligned for lrn_layer
kloudkl Jan 12, 2014
d37a995
relax precision of MultinomialLogisticLossLayer test
shelhamer Jan 9, 2014
2ae2683
nextafter templates off one type
Jan 22, 2014
b925739
mean_bound and sample_mean need referencing with this
Jan 22, 2014
93c9f15
make uniform distribution usage compatible with boost 1.46
jeffdonahue Jan 22, 2014
4b1fba7
use boost variate_generator to pass tests w/ boost 1.46 (Gaussian filler
jeffdonahue Jan 22, 2014
b3e4ac5
change all Rng's to use variate_generator for consistency
jeffdonahue Jan 22, 2014
6cbf9f1
add bernoulli rng test to demonstrate bug (generates all 0s unless p ==
jeffdonahue Jan 29, 2014
4f6b266
fix bernoulli generator bug
jeffdonahue Jan 29, 2014
1cf822e
Replace atlas with multithreaded OpenBLAS to speed-up on multi-core CPU
kloudkl Feb 7, 2014
a8c9b66
major refactoring allow coexistence of MKL and non-MKL cases
Feb 12, 2014
c028d09
rewrite MKL flag note, polish makefile
shelhamer Feb 15, 2014
f6cbe2c
make MKL switch surprise-proof
shelhamer Feb 18, 2014
ff27988
comment out stray mkl includes
shelhamer Feb 27, 2014
40aa12a
Fixed order of cblas and atlas linker flags
jamt9000 Mar 3, 2014
a9e772f
Added extern C wrapper to cblas.h include
jamt9000 Mar 3, 2014
453fcf9
clean up residual mkl comments and code
shelhamer Mar 21, 2014
aaa2646
lint
shelhamer Mar 21, 2014
19bcf2b
Hide boost rng behind facade for osx compatibility
shelhamer Mar 22, 2014
bece205
Set copyright to BVLC and contributors.
shelhamer Mar 22, 2014
699b557
Merge pull request #165 from BVLC/boost-eigen
shelhamer Mar 23, 2014
e2685eb
Implement HDF5 save dataset IO utility function
kloudkl Mar 23, 2014
e2beba9
Implement and test HDF5OutputLayer
kloudkl Mar 23, 2014
dd9e05b
Add HDF5OutputLayer to the layer factory
kloudkl Mar 23, 2014
2b28b20
Rebase and change the HDF5OutputLayer::Forward/Backward signatures
kloudkl Mar 23, 2014
910f312
Add and test sum of absolute values math functions for CPU and GPU
kloudkl Feb 25, 2014
348a338
Add and test element wise sign math funtions for CPU and GPU
kloudkl Feb 25, 2014
f634899
Instantiate caffe_cpu_sign for float and double
kloudkl Feb 25, 2014
ccae3fa
Add and test element wise abs math functions for CPU and GPU
kloudkl Feb 25, 2014
b458b41
Use macro to simplify element wise cpu math functions
kloudkl Feb 25, 2014
b1f6eb0
Add and test non-in-place scale math functions for CPU and GPU
kloudkl Feb 25, 2014
dc552e0
Add signbit math func, simplify GPU defs & instantiations with a macro
kloudkl Feb 26, 2014
a288d95
Rename signbit in macros to sgnbit to avoid conflicts with std::signbit
kloudkl Mar 11, 2014
4d53804
Fixed CPPLint errors related to math funtions
kloudkl Mar 18, 2014
ebf90c3
Separate HDF5OutputLayer::Forward_gpu/Backward_gpu into cu file
kloudkl Mar 24, 2014
d3e4c21
Merge pull request #252 from kloudkl/hdf5_output_layer
sergeyk Mar 24, 2014
91483ae
Merge pull request #201 from kloudkl/more_math_functions
shelhamer Mar 24, 2014
474899e
Add & test regularizer class hierarchy: L1, L2 & skeleton of MaxNorm
kloudkl Feb 15, 2014
8c6ee8c
Add support for multiple regularizers in one layer
kloudkl Feb 20, 2014
493cbc1
Simplify the macros in test_regualarizer_as_loss_layer & add more cases
kloudkl Feb 20, 2014
6b6c60f
Integrate the Regularizer with the Layer
kloudkl Feb 20, 2014
877f8f6
Skip testing failure cases of test_regularizer_as_loss_layer
kloudkl Feb 20, 2014
a9f355f
Add Regularizer::Regularizer return value to the Backward return value
kloudkl Feb 20, 2014
bd071dd
Rename ret to loss to indicate purpose in Layer::Backward
kloudkl Feb 21, 2014
7e5b516
Fix cpp lint errors in the regularizer related filed
kloudkl Mar 25, 2014
e165972
Change the return types of RegularizerAsLossLayer::Forward/Backward
kloudkl Mar 25, 2014
610ac2b
Split regularizer_as_loss_layer.cpp into cpp and cu
kloudkl Mar 25, 2014
1a69f4b
Fix bottom blob vector element access bug
kloudkl Mar 25, 2014
ae9699b
Change RegularizationAsLossTest to accommodate CheckGradientSingle
kloudkl Mar 25, 2014
57441fd
Fix Layer::Forward switch case no break bug introduced during merging
kloudkl Mar 25, 2014
454fc0e
Split regularizer.cu into cpp and cu files
kloudkl Mar 25, 2014
ac68ed4
Change the ScaleSign in regularizer.cu to use CUDA_KERNEL_LOOP
kloudkl Mar 25, 2014
3140448
Change L1Regularizer::Regularize_cpu to use caffe_sign & caffe_cpu_asum
kloudkl Mar 25, 2014
28 changes: 21 additions & 7 deletions Makefile
@@ -86,27 +86,37 @@ CUDA_LIB_DIR := $(CUDA_DIR)/lib64 $(CUDA_DIR)/lib
 MKL_INCLUDE_DIR := $(MKL_DIR)/include
 MKL_LIB_DIR := $(MKL_DIR)/lib $(MKL_DIR)/lib/intel64
 
-INCLUDE_DIRS += ./src ./include $(CUDA_INCLUDE_DIR) $(MKL_INCLUDE_DIR)
-LIBRARY_DIRS += $(CUDA_LIB_DIR) $(MKL_LIB_DIR)
+INCLUDE_DIRS += ./src ./include $(CUDA_INCLUDE_DIR)
+LIBRARY_DIRS += $(CUDA_LIB_DIR)
 LIBRARIES := cudart cublas curand \
-	mkl_rt \
 	pthread \
-	glog protobuf leveldb \
-	snappy \
+	glog protobuf leveldb snappy \
 	boost_system \
 	hdf5_hl hdf5 \
 	opencv_core opencv_highgui opencv_imgproc
 PYTHON_LIBRARIES := boost_python python2.7
 WARNINGS := -Wall
 
-COMMON_FLAGS := -DNDEBUG -O2 $(foreach includedir,$(INCLUDE_DIRS),-I$(includedir))
+COMMON_FLAGS := -DNDEBUG -O2
 
+# MKL switch (default = non-MKL)
+USE_MKL ?= 0
+ifeq ($(USE_MKL), 1)
+	LIBRARIES += mkl_rt
+	COMMON_FLAGS += -DUSE_MKL
+	INCLUDE_DIRS += $(MKL_INCLUDE_DIR)
+	LIBRARY_DIRS += $(MKL_LIB_DIR)
+else
+	LIBRARIES += cblas atlas
+endif
+
+COMMON_FLAGS += $(foreach includedir,$(INCLUDE_DIRS),-I$(includedir))
 CXXFLAGS += -pthread -fPIC $(COMMON_FLAGS)
 NVCCFLAGS := -ccbin=$(CXX) -Xcompiler -fPIC $(COMMON_FLAGS)
 LDFLAGS += $(foreach librarydir,$(LIBRARY_DIRS),-L$(librarydir)) \
 		$(foreach library,$(LIBRARIES),-l$(library))
 PYTHON_LDFLAGS := $(LDFLAGS) $(foreach library,$(PYTHON_LIBRARIES),-l$(library))
 
 
 ##############################
 # Define build targets
 ##############################
@@ -210,6 +220,10 @@ $(BUILD_DIR)/src/gtest/%.o: src/gtest/%.cpp
 	$(CXX) $< $(CXXFLAGS) -c -o $@
 	@echo
 
+$(BUILD_DIR)/src/$(PROJECT)/%.cuo: src/$(PROJECT)/%.cu
+	$(CUDA_DIR)/bin/nvcc $(NVCCFLAGS) $(CUDA_ARCH) -c $< -o $@
+	@echo
+
 $(BUILD_DIR)/src/$(PROJECT)/layers/%.cuo: src/$(PROJECT)/layers/%.cu
 	$(CUDA_DIR)/bin/nvcc $(NVCCFLAGS) $(CUDA_ARCH) -c $< -o $@
 	@echo
2 changes: 2 additions & 0 deletions Makefile.config.example
@@ -10,6 +10,8 @@ CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
 	-gencode arch=compute_30,code=sm_30 \
 	-gencode arch=compute_35,code=sm_35
 
+# MKL switch: set to 1 for MKL
+USE_MKL := 0
 # MKL directory contains include/ and lib/ directories that we need.
 MKL_DIR := /opt/intel/mkl
 
67 changes: 67 additions & 0 deletions docs/feature_extraction.md
@@ -0,0 +1,67 @@
---
layout: default
title: Caffe
---

Extracting Features
===================

In this tutorial, we will extract features using a pre-trained model.
Follow the instructions for [setting up caffe](installation.html) and for [getting](getting_pretrained_models.html) the pre-trained ImageNet model.
If you need detailed information about the tools used below, consult their source code, which usually includes additional documentation.

Select data to run on
---------------------

We'll make a temporary folder to store things into.

    mkdir examples/_temp

Generate a list of the files to process.
We're going to use the images that ship with caffe.

    find `pwd`/examples/images -type f -exec echo {} \; > examples/_temp/file_list.txt
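
If you prefer Python, here is a rough equivalent; this PR also adds a small script for generating file lists, but the `os.walk` sketch below is just one illustrative way to do it, and the `list_files.py` name is hypothetical:

    import os
    import sys

    # Walk the given directory tree and print the absolute path of every
    # file, one per line (like the `find` command above, sorted for
    # reproducibility).
    root = sys.argv[1] if len(sys.argv) > 1 else 'examples/images'
    for dirpath, _, filenames in os.walk(os.path.abspath(root)):
        for name in sorted(filenames):
            print(os.path.join(dirpath, name))

Saved as `list_files.py`, it could be run as `python list_files.py examples/images > examples/_temp/file_list.txt`.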

The `ImagesLayer` we'll use expects a label after each filename, so let's append a 0 to the end of each line.
We edit the file in place, since redirecting `sed` output back into its own input would truncate the file before it is read.

    sed -i "s/$/ 0/" examples/_temp/file_list.txt

Define the Feature Extraction Network Architecture
--------------------------------------------------

In practice, subtracting a dataset's mean image from each input significantly improves classification accuracy.
Download the mean image of the ILSVRC dataset.

    data/ilsvrc12/get_ilsvrc_aux.sh

We will use `data/ilsvrc12/imagenet_mean.binaryproto` in the network definition prototxt.
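
To sanity-check the downloaded mean, here is a minimal Python sketch for reading the `binaryproto` file; it assumes the protobuf bindings generated from `src/caffe/proto/caffe.proto` are importable as `caffe_pb2` (e.g. after `make pycaffe`):

    import numpy as np
    from caffe.proto import caffe_pb2  # assumed: generated protobuf bindings

    blob = caffe_pb2.BlobProto()
    with open('data/ilsvrc12/imagenet_mean.binaryproto', 'rb') as f:
        blob.ParseFromString(f.read())

    # The mean is stored as a 4-D blob: (num, channels, height, width).
    mean = np.array(blob.data, dtype=np.float32).reshape(
        blob.num, blob.channels, blob.height, blob.width)
    print(mean.shape, mean.mean())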

Let's copy and modify the network definition.
We'll be using the `ImagesLayer`, which will load and resize images for us.

    cp examples/feature_extraction/imagenet_val.prototxt examples/_temp

Edit `examples/_temp/imagenet_val.prototxt` to use the correct paths for your setup: replace every occurrence of `$CAFFE_DIR` with the absolute path to your caffe directory.

Extract Features
----------------

Now everything necessary is in place.

    build/tools/extract_features.bin models/caffe_reference_imagenet_model examples/_temp/imagenet_val.prototxt fc7 examples/_temp/features 10

The name of the feature blob that we extract is `fc7`, which represents the highest-level features of the reference model.
We can use any other blob as well, such as `conv5` or `pool5`.

The last parameter above is the number of data mini-batches; with `batchsize: 50` in the prototxt, 10 batches covers the first 500 images in the list.

The features are stored in the LevelDB at `examples/_temp/features`, ready for access by other code.
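
As a rough sketch of what that access could look like in Python, assuming the py-leveldb bindings and the generated `caffe_pb2` module are available: each feature vector is stored as a serialized `Datum` whose `float_data` field holds the values.

    import leveldb  # assumed: py-leveldb bindings installed
    import numpy as np
    from caffe.proto import caffe_pb2  # assumed: generated protobuf bindings

    db = leveldb.LevelDB('examples/_temp/features')
    datum = caffe_pb2.Datum()
    for key, value in db.RangeIter():
        # Each value is a serialized Datum; float_data holds the features.
        datum.ParseFromString(bytes(value))
        feat = np.array(datum.float_data, dtype=np.float32)
        print(key, feat.shape)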

If you'd like to use the Python wrapper for extracting features, check out the [layer visualization notebook](http://nbviewer.ipython.org/github/BVLC/caffe/blob/master/examples/filter_visualization.ipynb).

Clean Up
--------

Let's remove the temporary directory now.

    rm -r examples/_temp
1 change: 1 addition & 0 deletions docs/index.md
@@ -33,6 +33,7 @@ Even in CPU mode, computing predictions on an image takes only 20 ms when images
 * [LeNet / MNIST Demo](/mnist.html): end-to-end training and testing of LeNet on MNIST.
 * [CIFAR-10 Demo](/cifar10.html): training and testing on the CIFAR-10 data.
 * [Training ImageNet](/imagenet_training.html): end-to-end training of an ImageNet classifier.
+* [Feature extraction with C++](/feature_extraction.html): feature extraction using a pre-trained model.
 * [Running Pretrained ImageNet \[notebook\]][pretrained_imagenet]: run classification with the pretrained ImageNet model using the Python interface.
 * [Running Detection \[notebook\]][imagenet_detection]: run a pretrained model as a detector.
 * [Visualizing Features and Filters \[notebook\]][visualizing_filters]: trained filters and an example image, viewed layer-by-layer.
247 changes: 247 additions & 0 deletions examples/feature_extraction/imagenet_val.prototxt
@@ -0,0 +1,247 @@
name: "CaffeNet"
layers {
layer {
name: "data"
type: "images"
source: "$CAFFE_DIR/examples/_temp/file_list.txt"
meanfile: "$CAFFE_DIR/data/ilsvrc12/imagenet_mean.binaryproto"
batchsize: 50
new_height: 256
new_width: 256
mirror: false
cropsize: 227
}
top: "data"
top: "label"
}
layers {
layer {
name: "conv1"
type: "conv"
num_output: 96
kernelsize: 11
stride: 4
}
bottom: "data"
top: "conv1"
}
layers {
layer {
name: "relu1"
type: "relu"
}
bottom: "conv1"
top: "conv1"
}
layers {
layer {
name: "pool1"
type: "pool"
pool: MAX
kernelsize: 3
stride: 2
}
bottom: "conv1"
top: "pool1"
}
layers {
layer {
name: "norm1"
type: "lrn"
local_size: 5
alpha: 0.0001
beta: 0.75
}
bottom: "pool1"
top: "norm1"
}
layers {
layer {
name: "conv2"
type: "conv"
num_output: 256
group: 2
kernelsize: 5
pad: 2
}
bottom: "norm1"
top: "conv2"
}
layers {
layer {
name: "relu2"
type: "relu"
}
bottom: "conv2"
top: "conv2"
}
layers {
layer {
name: "pool2"
type: "pool"
pool: MAX
kernelsize: 3
stride: 2
}
bottom: "conv2"
top: "pool2"
}
layers {
layer {
name: "norm2"
type: "lrn"
local_size: 5
alpha: 0.0001
beta: 0.75
}
bottom: "pool2"
top: "norm2"
}
layers {
layer {
name: "conv3"
type: "conv"
num_output: 384
kernelsize: 3
pad: 1
}
bottom: "norm2"
top: "conv3"
}
layers {
layer {
name: "relu3"
type: "relu"
}
bottom: "conv3"
top: "conv3"
}
layers {
layer {
name: "conv4"
type: "conv"
num_output: 384
group: 2
kernelsize: 3
pad: 1
}
bottom: "conv3"
top: "conv4"
}
layers {
layer {
name: "relu4"
type: "relu"
}
bottom: "conv4"
top: "conv4"
}
layers {
layer {
name: "conv5"
type: "conv"
num_output: 256
group: 2
kernelsize: 3
pad: 1
}
bottom: "conv4"
top: "conv5"
}
layers {
layer {
name: "relu5"
type: "relu"
}
bottom: "conv5"
top: "conv5"
}
layers {
layer {
name: "pool5"
type: "pool"
kernelsize: 3
pool: MAX
stride: 2
}
bottom: "conv5"
top: "pool5"
}
layers {
layer {
name: "fc6"
type: "innerproduct"
num_output: 4096
}
bottom: "pool5"
top: "fc6"
}
layers {
layer {
name: "relu6"
type: "relu"
}
bottom: "fc6"
top: "fc6"
}
layers {
layer {
name: "drop6"
type: "dropout"
dropout_ratio: 0.5
}
bottom: "fc6"
top: "fc6"
}
layers {
layer {
name: "fc7"
type: "innerproduct"
num_output: 4096
}
bottom: "fc6"
top: "fc7"
}
layers {
layer {
name: "relu7"
type: "relu"
}
bottom: "fc7"
top: "fc7"
}
layers {
layer {
name: "drop7"
type: "dropout"
dropout_ratio: 0.5
}
bottom: "fc7"
top: "fc7"
}
layers {
layer {
name: "fc8"
type: "innerproduct"
num_output: 1000
}
bottom: "fc7"
top: "fc8"
}
layers {
layer {
name: "prob"
type: "softmax"
}
bottom: "fc8"
top: "prob"
}
layers {
layer {
name: "accuracy"
type: "accuracy"
}
bottom: "prob"
bottom: "label"
top: "accuracy"
}