Rethinking Scale-Aware Temporal Encoding for Event-based Object Detection

This is the implementation of Rethinking Scale-Aware Temporal Encoding for Event-based Object Detection (NeurIPS 2025).

Conda Installation

Step 1: Same as RVT

conda create -y -n sate python=3.9 pip
conda activate sate
conda config --set channel_priority flexible

CUDA_VERSION=11.8

conda install -y h5py=3.8.0 blosc-hdf5-plugin=1.0.0 \
hydra-core=1.3.2 einops=0.6.0 torchdata=0.6.0 tqdm numba \
pytorch=2.0.0 torchvision=0.15.0 pytorch-cuda=$CUDA_VERSION \
-c pytorch -c nvidia -c conda-forge

python -m pip install pytorch-lightning==1.8.6 wandb==0.14.0 \
pandas==1.5.3 plotly==5.13.1 opencv-python==4.6.0.66 tabulate==0.9.0 \
pycocotools==2.0.6 bbox-visualizer==0.1.0 StrEnum==0.4.10
python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'

Detectron2 is not strictly required but speeds up the evaluation.
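
As a quick sanity check after installation (a minimal sketch; the versions are those pinned above), you can verify that PyTorch sees the GPU and that the core dependencies import:

conda activate sate
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"   # should print 2.0.0 and True
python -c "import h5py, hydra, pytorch_lightning, pycocotools; print('dependencies OK')"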

Step 2: Build the DCNv2 extension

cd DeformableConvLSTM/models/modules/DCNv2/
./make.sh         # build
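
If the build succeeds, the compiled extension should be importable from inside that directory. A minimal check (the dcn_v2 module name follows the upstream DCNv2 repository and is an assumption here; adjust it to this repo's layout if it differs):

python -c "import dcn_v2; print('DCNv2 extension OK')"   # run from DeformableConvLSTM/models/modules/DCNv2/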

Alternatively, you can find the exact environment configuration we use in environment.yaml and set up the environment from it directly.
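
For example (a minimal sketch, assuming environment.yaml sits in the repository root and defines the environment name):

conda env create -f environment.yaml   # recreate the environment from the provided file
conda activate sate                    # assuming the environment is named sate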

Required Data

To evaluate or train SATE, you will need to download the original data from here and pre-process it with the commands below. ${DATA_DIR} should point to the directory containing the downloaded data, and ${DEST_DIR} should point to the directory to which the pre-processed data will be written.
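
For example (the paths below are placeholders; substitute your own locations):

DATA_DIR=/path/to/raw_dataset        # hypothetical location of the downloaded, unprocessed data
DEST_DIR=/path/to/preprocessed_data  # hypothetical location where the pre-processed data will be written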

For the 1 Mpx dataset:

cd scripts/genx/
NUM_PROCESSES=20 # set to the number of parallel processes to use
python preprocess_dataset.py ${DATA_DIR} ${DEST_DIR} conf_preprocess/representation/event_voxel.yaml \
conf_preprocess/extraction/const_duration.yaml conf_preprocess/filter_gen4.yaml -ds gen4 -np ${NUM_PROCESSES}

For the Gen1 dataset:

cd scripts/genx/
NUM_PROCESSES=20 # set to the number of parallel processes to use
python preprocess_dataset.py ${DATA_DIR} ${DEST_DIR} conf_preprocess/representation/event_voxel.yaml \
conf_preprocess/extraction/const_count.yaml conf_preprocess/filter_gen1.yaml -ds gen1 -np ${NUM_PROCESSES}

For the eTram dataset (you can download the original data from here):

cd scripts/genx/
NUM_PROCESSES=20 # set to the number of parallel processes to use
python preprocess_dataset.py ${DATA_DIR} ${DEST_DIR} conf_preprocess/representation/event_voxel.yaml \
conf_preprocess/extraction/const_duration.yaml conf_preprocess/filter_gen4.yaml -ds gen4 -np ${NUM_PROCESSES}
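
After pre-processing finishes, a quick way to confirm that output was written (a minimal check only; the exact layout under ${DEST_DIR} is determined by preprocess_dataset.py):

ls ${DEST_DIR}       # the pre-processed dataset should appear here
du -sh ${DEST_DIR}   # rough size check of the generated data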

Pre-trained Checkpoints

You can download the pre-trained checkpoints for Gen1, 1 Mpx, and eTram from here. The extraction code is tGKi.

Evaluation

  • Set DATA_DIR to the path of the eTram, 1 Mpx, or Gen1 dataset directory.
  • Set CKPT_PATH to the path of the checkpoint matching the chosen model and dataset.
  • Set
    • USE_TEST=1 to evaluate on the test set, or
    • USE_TEST=0 to evaluate on the validation set.
  • Set GPU_ID to the PCI bus ID of the GPU you want to use, e.g. GPU_ID=0.
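
For example, before running one of the commands below (all values are placeholders; adjust them to your setup):

DATA_DIR=/path/to/preprocessed/gen1   # hypothetical path to the pre-processed dataset
CKPT_PATH=/path/to/sate_gen1.ckpt     # hypothetical checkpoint file downloaded above
USE_TEST=1                            # 1: test set, 0: validation set
GPU_ID=0                              # PCI bus ID of the GPU to use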

Gen1

python validation.py model=SATE dataset=gen1 dataset.path=${DATA_DIR} checkpoint=${CKPT_PATH} \
use_test_set=${USE_TEST} hardware.gpus=${GPU_ID} +experiment/gen1="base.yaml" \
batch_size.eval=8 model.postprocess.confidence_threshold=0.001

1 Mpx

python validation.py model=SATE dataset=gen4 dataset.path=${DATA_DIR} checkpoint=${CKPT_PATH} \
use_test_set=${USE_TEST} hardware.gpus=${GPU_ID} +experiment/gen4="base.yaml" \
batch_size.eval=8 model.postprocess.confidence_threshold=0.001

eTram

python validation.py model=SATE dataset=etram dataset.path=${DATA_DIR} checkpoint=${CKPT_PATH} \
use_test_set=${USE_TEST} hardware.gpus=${GPU_ID} +experiment/etram="base.yaml" \
batch_size.eval=8 model.postprocess.confidence_threshold=0.001

Code Acknowledgments

  • RVT for the RVT architecture implementation in PyTorch.

  • YOLOX and YOLOv6 for the detection PAFPN and head.

  • DCNv2 for the Deformable Convolution.

  • SE for the SE block.
