The source code of the paper "A unified spatial-spectral-temporal network for hyperspectral object tracking".
Use Anaconda to create the environment:
conda create -n csstrack python=3.8
conda activate csstrack
bash install.sh
Run the following command to set the paths for this project:
python tracking/create_default_local_file.py --workspace_dir . --data_dir ./data --save_dir ./output
After running this command, you can also modify the paths by editing these two files:
lib/train/admin/local.py      # paths for training
lib/test/evaluation/local.py  # paths for testing
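As a rough illustration of what you will edit in those two files (a hypothetical sketch only; the real files generated by create_default_local_file.py contain many more entries, and attribute names here are taken from the steps below):

```python
# Hypothetical sketch of the two generated local.py files.

class EnvironmentSettings:
    """Paths used for training (lib/train/admin/local.py)."""
    def __init__(self):
        self.workspace_dir = './output'               # checkpoints and logs
        self.hot2020_dir = '/data/xx/HOT2020/train'   # training split root


class Settings:
    """Paths used for testing (lib/test/evaluation/local.py)."""
    def __init__(self):
        self.save_dir = './output'
        self.hot2020_path = '/data/xx/HOT2020/test'   # testing split root


env = EnvironmentSettings()
print(env.hot2020_dir)
```

Point these attributes at wherever you unpacked the datasets on your machine.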
- The HOT2020 dataset is from "https://www.hsitracking.com/".
- The IMEC25 dataset is from the paper "Histograms of oriented mosaic gradients for snapshot spectral image description".
- The data should look like:
(1). The format of the training dataset:

    rootDir
    |- videoName1
    |  |- HSI
    |  |  |- 0001.png
    |  |  |- 0002.png
    |  |  |- ...
    |  |  |- XXXX.png
    |  |- groundturth_rect.txt
    |- videoName2
    |  |- HSI
    |  |  |- 0001.png
    |  |  |- 0002.png
    |  |  |- ...
    |  |  |- XXXX.png
    |  |- groundturth_rect.txt
    |- ...
    |- videoNameN
    |  |- HSI
    |  |  |- 0001.png
    |  |  |- 0002.png
    |  |  |- ...
    |  |  |- XXXX.png
    |  |- groundturth_rect.txt

(2). The format of the testing dataset:

    rootDir
    |- test_HSI
    |  |- videoName1
    |  |  |- groundturth_rect.txt
    |  |  |- HSI
    |  |  |  |- 0001.png
    |  |  |  |- 0002.png
    |  |  |  |- ...
    |  |  |  |- XXXX.png
    |  |- videoName2
    |  |  |- groundturth_rect.txt
    |  |  |- HSI
    |  |  |  |- 0001.png
    |  |  |  |- 0002.png
    |  |  |  |- ...
    |  |  |  |- XXXX.png
    |  |- ...
    |  |- videoNameM
    |  |  |- groundturth_rect.txt
    |  |  |- HSI
    |  |  |  |- 0001.png
    |  |  |  |- 0002.png
    |  |  |  |- ...
    |  |  |  |- XXXX.png
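A small script like the following (illustrative only, not part of the repository) can sanity-check that a sequence folder matches the layout above:

```python
import os
import tempfile

def check_sequence(seq_dir):
    """Return a list of problems found in one videoName folder laid out as:
    seq_dir/HSI/0001.png ... plus seq_dir/groundturth_rect.txt."""
    problems = []
    hsi = os.path.join(seq_dir, 'HSI')
    if not os.path.isdir(hsi):
        problems.append('missing HSI/ folder')
    else:
        frames = sorted(f for f in os.listdir(hsi) if f.endswith('.png'))
        if not frames:
            problems.append('no .png frames in HSI/')
    if not os.path.isfile(os.path.join(seq_dir, 'groundturth_rect.txt')):
        problems.append('missing groundturth_rect.txt')
    return problems

# Build a tiny mock sequence and check it.
root = tempfile.mkdtemp()
seq = os.path.join(root, 'videoName1')
os.makedirs(os.path.join(seq, 'HSI'))
open(os.path.join(seq, 'HSI', '0001.png'), 'w').close()
open(os.path.join(seq, 'groundturth_rect.txt'), 'w').close()
print(check_sequence(seq))  # → []
```

Run it over each videoName folder before training to catch misplaced files early.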
(a) cd CSSTrack-HOT2020/
(b) Train: Download the pretrained model and put it in the folder "pretrained_models". It is available at
- https://pan.baidu.com/s/1vBFqFkpCHO9vqRR-Q0O1zw
- Access code: vr63
I. Change the training data path in lib/train/admin/local.py (Line 25: self.hot2020_dir = '/data/xx/HOT2020/train')
II. Run: python tracking/train.py --script csstrack --config CSSTrack-ep30-s256 --save_dir ./output --mode single --nproc_per_node 1
(c) Test: Download the testing model of HOT2020 from
- https://pan.baidu.com/s/1a9Byn-R9zL89AVIx1loJiA
- Access code: sumc
I. Change the testing data path in lib/test/evaluation/local.py (Line 20: settings.hot2020_path = '/data/xx/HOT2020/test')
II. Run: python tracking/test_epoch.py --checkpoint_path ../CSSTrack_ep0030_final.pth.tar
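For reference, hyperspectral tracking results are typically scored by the overlap between predicted and ground-truth boxes given as (x, y, w, h). The helper below is an illustrative sketch of that overlap metric, not the repository's evaluation code:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x, y, w, h)."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # intersection height
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

pred = (10, 10, 20, 20)
gt = (10, 10, 20, 20)
print(iou(pred, gt))  # → 1.0
```

Averaging this score over all frames (and thresholding it at varying levels) yields the success curves commonly reported on HOT benchmarks.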
(a) cd CSSTrack-IMEC25/
(b) Train: Download the pretrained model and put it in the folder "pretrained_models". It is available at
- https://pan.baidu.com/s/1vBFqFkpCHO9vqRR-Q0O1zw
- Access code: vr63
I. Change the training data path in lib/train/admin/local.py (Line 25: self.imec25_dir = '/data/xxx/HOT/IMEC25/train')
II. Run: python tracking/train.py --script csstrack --config CSSTrack-ep30-s256 --save_dir ./output --mode single --nproc_per_node 1
(c) Test: Download the testing model of IMEC25 from
- https://pan.baidu.com/s/1Ty_MKeiEwd2977A-6i4nfw
- Access code: adpe
I. Change the testing data path in lib/test/evaluation/local.py (Line 20: settings.imec25_path = '/data/xxx/HOT/IMEC25/test')
II. Run: python tracking/test_epoch.py --checkpoint_path ../CSSTrack_ep0030_final.pth.tar
@article{LI2025111389,
title = {A unified spatial-spectral-temporal network for hyperspectral object tracking},
author = {Zhuanfeng Li and Jing Wang and Jue Zhang and Dong Zhao and Guanyiman Fu and Jiangtao Wang and Jianfeng Lu},
journal = {Pattern Recognition},
volume = {174},
pages = {113005},
year = {2026},
issn = {0031-3203},
doi = {10.1016/j.patcog.2025.113005},
}
If you have any questions, feel free to contact me: lizhuanfeng@hytc.edu.cn.
- Thanks to AQATrack and the PyTracking library, which helped us quickly implement our ideas.