abbygray/VideoSearchEngine
Video Search Engine

Authors:

Semantically search through a database of videos using generated summaries.

System Overview

The diagram below shows the overall system architecture.

System Overview

Video Summarization Overview

Below is the initial architecture of the video summarization network used to generate video summaries.

Video Summarization Network

Example output

Given a minute-long video of traffic in Dhaka, Bangladesh, the network generates the following captions:

('a man riding a bike down a street next to a large truck .', 'a man riding a bike down a street next to a traffic light .', 'a green truck with a lot of cars on it', 'a green truck with a lot of cars on the road .', 'a city bus driving down a street next to a traffic light .')
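Several of the captions above are near-duplicates, which is what the summary-merging step has to handle. As a hedged sketch of one possible approach (not the repository's actual method), near-duplicate captions can be collapsed with word-level Jaccard similarity; the function names and threshold are illustrative assumptions:

```python
def jaccard(a, b):
    """Word-level Jaccard similarity between two captions."""
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    return len(words_a & words_b) / len(words_a | words_b)

def dedupe_captions(captions, threshold=0.6):
    """Keep a caption only if it differs enough from every caption kept so far."""
    kept = []
    for caption in captions:
        if all(jaccard(caption, k) < threshold for k in kept):
            kept.append(caption)
    return kept
```

Applied to the five captions above with a threshold of 0.6, this keeps one caption per distinct scene ("man on a bike", "green truck", "city bus") and drops the two near-duplicates.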

Set Up

To set up the Python code, create a Python 3 environment as follows:

# create a virtual environment
$ python3 -m venv env

# activate environment
$ source env/bin/activate

# install all requirements
$ pip install -r requirements.txt

# install data files
$ python dataloader.py

If you add a new package, you will have to update requirements.txt with the following command:

# add new packages
$ pip freeze > requirements.txt

And if you want to deactivate the virtual environment:

# deactivate the virtual env
$ deactivate

Training Captioning Network

Caption Network Setup

python VideoSearchEngine/ImageCaptioningNoYolo/resize.py --image_dir data/coco/train2014/ 
python VideoSearchEngine/ImageCaptioningNoYolo/resize.py --image_dir data/coco/val2014/ --output_dir data/val_resized2014

Plan

Broadly, our project attempts video search through video summarization. To do this, we propose the following objectives and resulting action plan:

  • Break videos down into semantically different groups of frames
  • Recognize objects in an image (i.e. a frame)
  • Convert a frame to text
  • Merge summaries of all frames of a video into one large overall summary
  • Build a search engine to query videos via summary.
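The last step, querying videos by summary, can be sketched as a tiny keyword-overlap search over the generated summaries. This is a minimal illustration, not the project's actual search implementation; all names here are hypothetical:

```python
def tokenize(text):
    """Lowercase and split a summary, stripping trailing punctuation."""
    return [w.strip(".,").lower() for w in text.split() if w.strip(".,")]

def search(summaries, query, top_k=3):
    """Rank video summaries by how many query words they contain.

    summaries: dict mapping video id -> generated summary text.
    Returns up to top_k video ids with at least one matching word.
    """
    query_words = set(tokenize(query))
    scored = []
    for video_id, summary in summaries.items():
        score = len(query_words & set(tokenize(summary)))
        if score > 0:
            scored.append((score, video_id))
    scored.sort(reverse=True)
    return [video_id for _, video_id in scored[:top_k]]
```

A real engine would use TF-IDF or embedding similarity instead of raw word overlap, but the interface (summaries in, ranked video ids out) is the same.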

Goals

For our project, we have come up with a basic goal we plan to reach by the time of the presentation, and a stretch goal we hope to reach if time permits.

Basic Goal: Recognize objects with the YOLO algorithm, convert each frame to text using the algorithm described in this paper, come up with a basic heuristic for skipping frames so that there is not too much overlap in the summary, and surface all of this through a simple UI for searching a video database.
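One way the frame-skipping heuristic mentioned above could look, as a minimal sketch: caption a frame only when it differs enough from the last kept frame. Frames are modeled here as flat lists of pixel intensities, and the threshold and function names are illustrative assumptions, not the project's chosen values:

```python
def frame_difference(frame_a, frame_b):
    """Mean absolute pixel difference between two equally sized frames."""
    return sum(abs(a - b) for a, b in zip(frame_a, frame_b)) / len(frame_a)

def select_keyframes(frames, threshold=10.0):
    """Return indices of frames that differ enough from the last kept frame."""
    if not frames:
        return []
    kept = [0]  # always keep the first frame
    for i in range(1, len(frames)):
        if frame_difference(frames[i], frames[kept[-1]]) > threshold:
            kept.append(i)
    return kept
```

Only the selected keyframes would then be passed to the captioning network, reducing both compute and redundant captions.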

Stretch Goal: Investigate other methods for reducing noise in frames (e.g., Generative Adversarial Networks), and investigate grouping semantically similar frames into one common representation to produce better summaries.

Data Sets to Use

Lots of labeled data for text generation of video summaries.

Paper about how data was collected and performance.

The location of the video dataset: Source

Consists of labeled images for image captioning

Consists of action videos that can be used to test summaries.

The "MED Summaries" is a new dataset for evaluation of dynamic video summaries. It contains annotations of 160 videos: a validation set of 60 videos and a test set of 100 videos. There are 10 event categories in the test set.

Citations

Papers

GitHubs

Blogs and Other Websites
