[Project Page] [Paper] [Contact]
This repository contains the official implementation of ReactDance: Hierarchical Representation for High-Fidelity and Coherent Long-Form Reactive Dance Generation (ICLR 2026).
ReactDance tackles long-form reactive dance generation, where a dancer responds to music and partner motions with coherent, high-quality movements over long horizons.
✨ If you find this repository helpful, please consider giving it a star. ✨
We recommend using Python 3.8 and PyTorch 2.1.0 with CUDA 12.1.
```
conda env create -f environment.yml
```

If the above command gets stuck while downloading or creating the environment, you can try the following manual steps:
```
conda create -n reactdance python=3.8
conda activate reactdance
conda install pytorch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 pytorch-cuda=12.1 -c pytorch -c nvidia
conda env update -f environment.yml
```

Notes:
- matplotlib version: newer versions of matplotlib may not be compatible; `3.1.1` has been tested to work.
- Visualization dependencies: ffmpeg and MoviePy are required for rendering videos.
  - Install ffmpeg:
    ```
    conda install -c conda-forge x264 ffmpeg --override-channels
    ```
  - Install MoviePy:
    ```
    conda install -c conda-forge moviepy==1.0.3 --override-channels
    ```
- Please make sure other dependencies are installed according to your package management setup (e.g., `pip install -r requirements.txt` if provided).
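As a quick sanity check after setup, a small script like the following (not part of the repo; the package list is an assumption based on the notes above) can report whether the expected packages are importable and which versions resolved:

```python
import importlib
import platform


def check_env(packages=("torch", "matplotlib", "moviepy")):
    """Report the Python version and the version of each importable package.

    Packages that cannot be imported map to None; importable packages
    without a __version__ attribute map to "unknown".
    """
    report = {"python": platform.python_version()}
    for name in packages:
        try:
            module = importlib.import_module(name)
            report[name] = getattr(module, "__version__", "unknown")
        except ImportError:
            report[name] = None
    return report


if __name__ == "__main__":
    for name, version in check_env().items():
        print(f"{name}: {version}")
```

If `torch` reports None or a version other than 2.1.0, revisit the conda steps above.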
You can download the dataset from the Google Drive link, or follow Duolando to process the data yourself and rename the directory to `data_lazy`. Put `data_lazy` in the working directory:
```
ReactDance
├── configs
├── data_lazy
├── datasets
├── ...
```
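Before training, you can confirm the data directory is in place with a minimal sketch like the one below (not part of the repo; the `motion` and `music` subdirectory names are taken from the paths quoted elsewhere in this README):

```python
from pathlib import Path


def missing_entries(root="data_lazy", expected=("motion", "music")):
    """Return the expected subdirectories that are absent under root."""
    root_path = Path(root)
    return [name for name in expected if not (root_path / name).is_dir()]


if __name__ == "__main__":
    missing = missing_entries()
    if missing:
        print(f"data_lazy is incomplete, missing: {missing}")
    else:
        print("data_lazy layout looks OK")
```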
- GT Visualization: You can check the ground-truth visualizations in `./data_lazy/motion/GT_vis`, or run the visualization yourself:
  ```
  python vis_gt.py
  ```
ReactDance training consists of two stages: first training the hierarchical feature VQ module (HFSQ), then training the full ReactDance model.
```
python hfsq.py --config ./configs/hfsq128_512_q2.yaml --mode train
```

Once trained, the HFSQ result directory will look like:
```
results/training
└── HFSQ/lightning_logs/hfsq128_512_q2
    ├── checkpoints
    │   ├── epoch_050.ckpt
    │   ├── epoch_100.ckpt
    │   ├── epoch_150.ckpt
    │   └── epoch_200.ckpt
    └── hfsq128_512_q2.yaml
```
```
python reactdance.py --config ./configs/reactdance.yaml --mode train
```

After both HFSQ and ReactDance are trained, the results directories will look like:
```
results/training
├── HFSQ/lightning_logs/hfsq128_512_q2
│   ├── normalizers
│   │   └── epoch_200
│   │       └── normalizer.pt
│   ├── checkpoints
│   │   ├── epoch_050.ckpt
│   │   ├── epoch_100.ckpt
│   │   ├── epoch_150.ckpt
│   │   └── epoch_200.ckpt
│   └── hfsq128_512_q2.yaml
└── ReactDance/lightning_logs/reactdance
    ├── checkpoints
    │   ├── epoch_050.ckpt
    │   ├── epoch_100.ckpt
    │   ├── epoch_150.ckpt
    │   ├── ...
    │   └── epoch_500.ckpt
    ├── hfsq128_512_q2.yaml
    └── reactdance.yaml
```
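When several `epoch_*.ckpt` files accumulate as in the tree above, picking the most recent checkpoint can be scripted. The helper below is a hedged sketch (not part of the repo) that relies only on the `epoch_NNN.ckpt` naming shown above:

```python
import re
from pathlib import Path


def latest_checkpoint(ckpt_dir):
    """Return the Path of the epoch_NNN.ckpt file with the highest epoch, or None."""
    pattern = re.compile(r"epoch_(\d+)\.ckpt$")
    best_path, best_epoch = None, -1
    for path in Path(ckpt_dir).glob("*.ckpt"):
        match = pattern.search(path.name)
        if match and int(match.group(1)) > best_epoch:
            best_path, best_epoch = path, int(match.group(1))
    return best_path
```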
- Put your HFSQ checkpoint path in `data-test-checkpoint` in `results/training/HFSQ/lightning_logs/hfsq128_512_q2/hfsq128_512_q2.yaml`.
- Set that YAML file as `HFSQ_config` in `configs/reactdance.yaml`.
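For illustration, the two edits might look like the fragment below. The exact YAML layout is an assumption; only the key names `data-test-checkpoint` and `HFSQ_config` and the file paths come from this README.

```yaml
# results/training/HFSQ/lightning_logs/hfsq128_512_q2/hfsq128_512_q2.yaml
data-test-checkpoint: results/training/HFSQ/lightning_logs/hfsq128_512_q2/checkpoints/epoch_200.ckpt

# configs/reactdance.yaml
HFSQ_config: results/training/HFSQ/lightning_logs/hfsq128_512_q2/hfsq128_512_q2.yaml
```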
Then run:

```
python hfsq.py --config ./results/training/HFSQ/lightning_logs/hfsq128_512_q2/hfsq128_512_q2.yaml --mode duet_sample
```

The reconstructed motion files and visualizations are then available in:
```
results/generated/HFSQ/hfsq128_512_q2/epoch_200.ckpt/samples/duet/
├── pos3d_npy
└── videos
```
- Put your ReactDance checkpoint path in `data-test-checkpoint` in `results/training/ReactDance/lightning_logs/reactdance/reactdance.yaml`.
Then run:

```
python reactdance.py --config ./results/training/ReactDance/lightning_logs/reactdance/reactdance.yaml --mode sample
```

The sampled motion files and visualizations are then available in:
```
results/generated/ReactDance/reactdance/epoch_500.ckpt/samples/
├── pos3d_npy
└── videos
```
`utils/metrics.py` evaluates solo motion quality on `pos3d_npy` folders (FID_k / FID_g / DIV_k / DIV_g / BA).
- Edit the following variables in `utils/metrics.py` (near the bottom):
  - `gt_root` (default: `data_lazy/motion/pos3d/test`)
  - `music_root` (default: `data_lazy/music/feature/test`)
  - `pred_roots`: a list of `pos3d_npy` folders you want to evaluate. The script already contains example entries:
    - `results/generated/HFSQ/hfsq128_512_q2/epoch_200.ckpt/samples/duet/pos3d_npy` (HFSQ reconstruction)
    - `results/generated/ReactDance/reactdance/epoch_500.ckpt/samples/pos3d_npy` (ReactDance sampling)
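Concretely, the variables near the bottom of `utils/metrics.py` might look like the sketch below (the surrounding code is not shown; only the variable names and paths come from this README):

```python
# Ground-truth motion and music feature folders (defaults quoted above).
gt_root = "data_lazy/motion/pos3d/test"
music_root = "data_lazy/music/feature/test"

# Each entry is a pos3d_npy folder to evaluate.
pred_roots = [
    "results/generated/HFSQ/hfsq128_512_q2/epoch_200.ckpt/samples/duet/pos3d_npy",
    "results/generated/ReactDance/reactdance/epoch_500.ckpt/samples/pos3d_npy",
]
```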
- Run:
  ```
  python utils/metrics.py
  ```

`utils/metrics_duet.py` evaluates duet interaction metrics on `pos3d_npy` folders (FID_cd / DIV_cd / MPJPE / MPJVE / Jitter / BE).
- Edit `gt_root` and `pred_roots` in `utils/metrics_duet.py` (near the bottom). Each `pred_root` should point to a `pos3d_npy` folder that contains paired files:
  - follower: `*_00.npy`
  - leader: `*_01.npy`
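To verify that a `pred_root` actually contains complete follower/leader pairs before evaluating, a small check like the following (not part of the repo; it relies only on the `*_00.npy` / `*_01.npy` naming above) can flag files whose partner is missing:

```python
from pathlib import Path


def unpaired_files(pred_root):
    """Return file-name prefixes that have a follower (*_00.npy) or a
    leader (*_01.npy) file but not both."""
    root = Path(pred_root)
    followers = {p.name[: -len("_00.npy")] for p in root.glob("*_00.npy")}
    leaders = {p.name[: -len("_01.npy")] for p in root.glob("*_01.npy")}
    # Symmetric difference: prefixes present on only one side.
    return sorted(followers ^ leaders)
```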
- Run:
  ```
  python utils/metrics_duet.py
  ```

If you use this repository in your research, please cite:
```bibtex
@inproceedings{lin2026reactdance,
  title={ReactDance: Hierarchical Representation for High-Fidelity and Coherent Long-Form Reactive Dance Generation},
  author={Jingzhong Lin and Xinru Li and Yuanyuan Qi and Bohao Zhang and Wenxiang Liu and Kecheng Tang and Wenxuan Huang and Xiangfeng Xu and Bangyan Li and Changbo Wang and Gaoqi He},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://openreview.net/forum?id=FvMyAMbbX0}
}
```
We would like to acknowledge Xinru Li and Yuanyuan Qi for their help with demo visualization and figure design, as well as Bohao Zhang, Wenxiang Liu, and Kecheng Tang for helpful discussions regarding writing. We also appreciate the input of Wenxuan Huang in research exchanges, and thank Changbo Wang and Gaoqi He for their supervisory advice throughout the project.
This work is partially supported by:
- Natural Science Foundation of China (Grant No. 62472178)
- Open Projects Program of State Key Laboratory of Multimodal Artificial Intelligence Systems (No. MAIS2024111)
- Fundamental Research Funds for the Central Universities
- Science and Technology Commission of Shanghai Municipality (Grant No. 25511107200)
- Technical questions: please open an issue.
- Other inquiries: ripemangobox@gmail.com

