Tengjie Zhu1,2,*,
Guanyu Cai1,*,
Zhaohui Yang1,*,
Guanzhu Ren1,
Haohui Xie1,
Junsong Wu1,
ZiRui Wang2,
Jingbo Wang2,
Xiaokang Yang1,
Yao Mu1,2,†,
Yichao Yan1,†
* Equal Contribution † Corresponding Author
1MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University
2Shanghai AI Laboratory
- [2026-02] We release the code and paper for CLOT.
This is the official implementation of the paper CLOT: Closed-Loop Global Motion Tracking for Whole-Body Humanoid Teleoperation.
Our paper offers a general-purpose action tracking strategy within a global closed-loop framework, together with a large-scale human motion dataset.
This repository includes:
- **Multi-simulator support**: multiple simulators including IsaacGym, IsaacSim, and MjLab (with MjLab as the primary simulator).
- **Efficient RL training**: multi-GPU parallel training for large-scale experiments.
- **AMP rewards**: implementation of AMP discriminator rewards for motion imitation policies.
- **Transformer backbone**: both MLP and Transformer networks are supported as the policy backbone.
Below are the installation and usage instructions for the code in the mjlab environment.
We provide a lightweight environment setup. The code has been tested in the following environment:
- OS: Ubuntu 22.04
- GPU: NVIDIA RTX 4090, Driver Version: 575.64.03
```bash
conda create -n clot python=3.11
pip install warp-lang --extra-index-url https://pypi.nvidia.com/
pip install "mujoco-warp @ git+https://github.com/google-deepmind/mujoco_warp@502556df5e44d79d6bdaa64361669602b5a206cf"
git clone https://github.com/zhutengjie/CLOT.git
cd CLOT
pip install -e .
```

We have currently uploaded about 10 hours of data to Hugging Face, including BVH files (with the coordinate system converted to Z-up and X-forward) as well as motion data retargeted to Adam Pro and G1 (following the ASAP format). We also provide the corresponding checkpoints for Adam Pro and G1 in this repository.
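The retargeted motion files are pickled Python objects. As a minimal sketch for inspecting one after download (the exact key layout of the `.pkl` files is not documented here, so this helper makes no assumptions and only reports the top-level structure it finds):

```python
import pickle


def inspect_motion(path):
    """Print the top-level structure of a pickled motion file.

    The internal key layout of the CLOT .pkl files is not assumed;
    this only reports whatever keys, types, and array shapes exist.
    """
    with open(path, "rb") as f:
        data = pickle.load(f)
    if isinstance(data, dict):
        for key, value in data.items():
            # numpy/torch arrays expose .shape; plain values do not
            shape = getattr(value, "shape", None)
            print(key, type(value).__name__, shape if shape is not None else "")
    else:
        print(type(data).__name__)
    return data


# Example (path taken from the dataset described above):
# inspect_motion("human_motion/adam_data_30fps_cont_mask.pkl")
```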
```bash
git lfs install
git clone https://huggingface.co/datasets/Zhutengjie/human_motion
```

If you want to visualize the data results:
```bash
python visualize.py human_motion/adam_data_30fps_cont_mask.pkl humanoidverse/data/robots/adam_pro/adam_pro.xml
# or
python visualize.py human_motion/g1_data_50fps_cont_mask.pkl humanoidverse/data/robots/g1/g1_23dof_lock_wrist.xml
```

If you want to test the checkpoints in the mjlab environment:
```bash
# for Adam Pro
python humanoidverse/eval_agent.py +checkpoint=human_motion/adam_result/adam.pt
# for G1
python humanoidverse/eval_agent.py +checkpoint=human_motion/G1_result/g1.pt
```

If you want to deploy in the MuJoCo environment:
```bash
# for Adam Pro
python humanoidverse/urci.py +opt=record +simulator=mujoco +checkpoint=human_motion/adam_result/exported/adam.onnx
# for G1
python humanoidverse/urci.py +opt=record +simulator=mujoco +checkpoint=human_motion/G1_result/exported/g1.onnx
```

By default, training is conducted on 8 × 48GB RTX 4090 GPUs.
```bash
# for Adam Pro
sh train_adam_multi.sh
# for G1
sh train_g1_multi.sh
```

If you want to change the number of GPUs used for training, modify `ngpu` in humanoidverse/config/base/fabric.yaml and `nproc_per_node` in the corresponding .sh script.
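Since the two settings live in different files, a small sanity check can catch mismatches before launch. This is a sketch under assumptions: the exact layout of fabric.yaml and the launch scripts is inferred from the description above, so adjust the patterns if the real files differ.

```python
import re
from pathlib import Path


def check_gpu_settings(fabric_yaml: str, launch_script: str) -> int:
    """Verify that `ngpu` in fabric.yaml matches `nproc_per_node` in the
    launch script. The file formats are assumptions based on the README,
    not a documented interface."""
    ngpu = int(re.search(r"ngpu:\s*(\d+)", Path(fabric_yaml).read_text()).group(1))
    # matches both `--nproc_per_node=8` and `--nproc_per_node 8`
    nproc = int(re.search(r"nproc_per_node[= ](\d+)", Path(launch_script).read_text()).group(1))
    if ngpu != nproc:
        raise ValueError(f"ngpu={ngpu} but nproc_per_node={nproc}; keep them equal")
    return ngpu


# Example:
# check_gpu_settings("humanoidverse/config/base/fabric.yaml", "train_adam_multi.sh")
```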
By default, the model uses the Transformer architecture with the AMP reward. If you want to change the architecture or disable the AMP reward, modify `+exp` in the corresponding .sh script.
```bash
# for MLP + AMP
+exp=motion_tracking_amp
# for MLP only
+exp=motion_tracking
# for Transformer only
+exp=motion_tracking_transformer
```

If you find our work helpful, please cite:
```bibtex
@article{zhu2026clot,
  title={CLOT: Closed-Loop Global Motion Tracking for Whole-Body Humanoid Teleoperation},
  author={Zhu, Tengjie and Cai, Guanyu and Yang, Zhaohui and Ren, Guanzhu and Xie, Haohui and Wang, ZiRui and Wu, Junsong and Wang, Jingbo and Yang, Xiaokang and Mu, Yao and others},
  journal={arXiv preprint arXiv:2602.15060},
  year={2026}
}
```

This codebase is under the CC BY-NC 4.0 license. You may not use the material for commercial purposes, e.g., to make demos advertising your commercial products.
Our code builds upon and references the following excellent works. We sincerely thank the authors for their open-source contributions:
We would like to sincerely thank PNDbotics for providing the robotic platform and comprehensive support related to the robot hardware. We also thank Baidu for providing the GPU resources.
Feel free to open an issue or discussion if you encounter any problems or have questions about this project.
For collaborations, feedback, or further inquiries, please reach out to:
- Tengjie Zhu: zhutengjie@sjtu.edu.cn or WeChat: sy_my_treasure
- Guanyu Cai: caig3822@gmail.com or WeChat: wxid_2ak1wex0t2ft22
- Zhaohui Yang: yzh_academic@163.com or WeChat: windyy_wechat

You can also join our WeChat discussion group for timely Q&A.
We welcome contributions and are happy to support the community in building upon this work!
