by Xiaoping Wang.
This implementation builds on my other repository, py-Vital, which is posted on the project homepage by the authors of the VITAL tracker.
If you use this code for your research, please cite:
@inproceedings{nam2016mdnet,
  author    = {Nam, Hyeonseob and Han, Bohyung},
  title     = {Learning Multi-Domain Convolutional Neural Networks for Visual Tracking},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2016}
}
@inproceedings{shi-nips18-DAT,
  author    = {Pu, Shi and Song, Yibing and Ma, Chao and Zhang, Honggang and Yang, Ming-Hsuan},
  title     = {Deep Attentive Tracking via Reciprocative Learning},
  booktitle = {Neural Information Processing Systems},
  year      = {2018}
}
@inproceedings{xiaopingwang-VTAAN,
  author    = {Xiaoping Wang},
  title     = {VTAAN: Visual Tracking with Attentive Adversarial Network},
  booktitle = {VTAAN tracker implemented in PyTorch},
  month     = {August},
  year      = {2019}
}
- Python 3.6+
- OpenCV 3.0+
- PyTorch 1.0+ and its dependencies
python tracking/run_tracker.py -s DragonBaby [-d (display fig)] [-f (save fig)]

You can provide a sequence configuration in two ways (see tracking/gen_config.py):

python tracking/run_tracker.py -s [seq name]
python tracking/run_tracker.py -j [json path]