This is the official repository for PURA: Parameter Update-Recovery Test-Time Adaption for RGB-T Tracking (CVPR 2025).
✨ PURA is a test-time adaptation framework for RGB-T tracking, designed to robustly adapt to domain transition scenarios during the testing process.
✨ PURA adapts online through parameter update and recovery mechanisms, avoiding drastic parameter changes and heavy computational burden.
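The precise update and recovery rules are part of the code release; as a rough illustration of the recovery idea only (the interpolation rule and `recover_rate` below are assumptions for this sketch, not the paper's algorithm), recovery can be thought of as pulling the adapted weights part of the way back toward the offline-trained weights, which bounds how far online updates can drift:

```python
def recover(adapted, source, recover_rate=0.1):
    """Pull each adapted parameter part of the way back toward its
    source value, bounding parameter drift during test-time updates."""
    return [a + recover_rate * (s - a) for a, s in zip(adapted, source)]

source_params = [1.0, -2.0, 0.5]   # weights of the offline-trained tracker
adapted_params = [1.4, -2.6, 0.9]  # weights after several online updates

# One recovery step moves each weight 10% of the way back to its source value.
recovered = recover(adapted_params, source_params)
print([round(x, 2) for x in recovered])  # [1.36, -2.54, 0.86]
```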
We have released the results of our method on the RGBT234, RGBT210 and GTOT datasets:
| Method | RGBT234 MPR(%) | RGBT234 MSR(%) | RGBT210 PR(%) | RGBT210 SR(%) | GTOT MPR(%) | GTOT MSR(%) | FPS | Raw Result |
|---|---|---|---|---|---|---|---|---|
| No Adapt. | 90.8 | 67.6 | 88.6 | 65.1 | 95.1 | 78.2 | 59.5 | Google Drive |
| PURA | 93.3 | 70.3 | 90.3 | 66.8 | 95.7 | 78.6 | 42.0 | Google Drive |
- Note: The full code and weights of our method will be released soon.
To apply PURA to your own RGB-T tracker built on pytracking, follow the steps below:
- Copy `pura.py` to the `lib\test\tracker` folder.
- Modify `lib\test\tracker\xxx_track.py` to include the following code:

```python
from lib.test.tracker import pura   # import PURA
from lib.test.tracker import tent   # import Tent
from lib.test.tracker import eata   # import EATA
from lib.test.tracker import adabn  # import AdaBN
...

class XXXTrack(BaseTracker):
    def __init__(self, params, dataset_name):
        super(XXXTrack, self).__init__(params)
        network = build_xxx_track(params.cfg, training=False)
        network.load_state_dict(torch.load(self.params.checkpoint, map_location='cpu')['net'], strict=True)
        self.cfg = params.cfg
        self.network = network.cuda()
        self.network.eval()

        # PURA
        pura.replace_batchnorm(self.network.box_head)      # replace BatchNorm in box_head with PURA layers
        self.network = pura.configure_model(self.network)  # configure the model

        # Tent
        # model = tent.configure_model(self.network)
        # tta_params, tta_param_names = tent.collect_params(model.box_head)
        # optimizer = torch.optim.AdamW(tta_params, lr=1e-3)
        # self.model = tent.Tent(model, optimizer)

        # EATA
        # model = eata.configure_model(self.network)
        # tta_params, tta_param_names = eata.collect_params(model.box_head)
        # optimizer = torch.optim.SGD(tta_params, lr=0.00025, momentum=0.9)
        # self.model = eata.EATA(model, optimizer, e_margin=math.log(1000)*0.40, d_margin=0.05)

        # AdaBN
        # adabn.replace_batchnorm(self.network.box_head)
        # self.network = adabn.configure_model(self.network)

        self.preprocessor = Preprocessor()
        self.state = None
        ...
```

- If `Tent` or `EATA` is enabled, build the input data as a dictionary for the `track` function in `lib\test\tracker\xxx_track.py`:
```python
def track(self, image, info: dict = None):
    H, W, _ = image.shape
    self.frame_id += 1
    ...
    with torch.enable_grad():  # don't forget to enable grad
        model_inputs = {
            "template": cur_template,
            "search": [x_dict.tensors[:, :3, :, :], x_dict.tensors[:, 3:, :, :]],
            "ce_template_mask": self.box_mask_z
        }
        out_dict = self.model(model_inputs)
```

We use the SVD decomposition implementation from the PGrad repo.
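When `Tent` is enabled, the wrapped `self.model` performs an entropy-minimization update inside the forward call itself, which is why `torch.enable_grad()` must be active during tracking. A minimal self-contained sketch of one such update (the model, shapes, and `tent_step` helper here are illustrative, not the actual Tent code, which updates only the collected normalization parameters):

```python
import torch
import torch.nn as nn

def tent_step(model, optimizer, x):
    """One Tent-style update: minimize the entropy of the model's
    softmax predictions on the current test batch, then return them."""
    logits = model(x)
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()
    optimizer.step()
    return logits.detach()

model = nn.Linear(16, 4)  # illustrative stand-in for the tracker head
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
with torch.enable_grad():  # grad must be enabled even at test time
    out = tent_step(model, optimizer, torch.randn(8, 16))
print(out.shape)  # torch.Size([8, 4])
```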
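Both `pura.replace_batchnorm` and `adabn.replace_batchnorm` swap the BatchNorm layers of `box_head` for an adaptive variant. The general module-replacement pattern in PyTorch looks like the sketch below; the `AdaptiveBN` wrapper is a hypothetical stand-in, not PURA's actual layer:

```python
import torch
import torch.nn as nn

class AdaptiveBN(nn.Module):
    """Hypothetical stand-in: wraps a trained BatchNorm2d so a
    test-time-adaptation layer could reuse its statistics and affine
    parameters (a real TTA layer would adapt them here)."""
    def __init__(self, bn: nn.BatchNorm2d):
        super().__init__()
        self.bn = bn

    def forward(self, x):
        return self.bn(x)

def replace_batchnorm(module: nn.Module):
    """Recursively replace every nn.BatchNorm2d child with AdaptiveBN."""
    for name, child in module.named_children():
        if isinstance(child, nn.BatchNorm2d):
            setattr(module, name, AdaptiveBN(child))
        else:
            replace_batchnorm(child)

head = nn.Sequential(nn.Conv2d(3, 8, 3), nn.BatchNorm2d(8), nn.ReLU())
replace_batchnorm(head)
print(type(head[1]).__name__)  # AdaptiveBN
```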
If our work is helpful for your research, please consider citing our paper:
```bibtex
@inproceedings{shao2025pura,
  title={PURA: Parameter Update-Recovery Test-Time Adaption for RGB-T Tracking},
  author={Shao, Zekai and Hu, Yufan and Fan, Bin and Liu, Hongmin},
  booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
  pages={22089--22098},
  year={2025}
}
```