This repository provides the official PyTorch implementation of COMPASS.
COMPASS is a framework for cross-embodiment mobility that combines:
- Imitation Learning (IL) for strong baseline performance
- Residual Reinforcement Learning (RL) for embodiment-specific adaptation
- Policy distillation to create a unified, generalist policy
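The residual idea can be sketched in a few lines: a frozen IL base policy proposes an action, and a small per-embodiment RL policy adds a correction on top. This is an illustrative sketch only, not the repository's implementation; `compose_action` and the scale value are hypothetical.

```python
import numpy as np

def compose_action(base_action, residual, scale=0.1):
    """Final command = IL base action plus a scaled RL residual correction.

    Keeping the residual small (via `scale`) preserves the IL baseline's
    behavior while letting RL adapt it to a specific embodiment.
    """
    return base_action + scale * residual

# Frozen IL policy proposes a velocity command; the residual policy
# learns an embodiment-specific correction on top of it.
base = np.array([0.5, 0.0])       # e.g. (linear, angular) velocity from IL
residual = np.array([0.2, -0.1])  # correction from the RL residual policy
print(compose_action(base, residual))
```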
Requires Docker with the NVIDIA Container Toolkit, plus an NVIDIA GPU and driver that meet the Isaac Lab system requirements.
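Before starting, it can save time to confirm the host has the tools the quick start assumes. This pre-flight check is not part of the repo; it only looks for standard CLIs on `PATH`.

```python
import shutil

def preflight(tools=("git", "docker", "nvidia-smi")):
    """Map each required host CLI to whether it is found on PATH."""
    return {tool: shutil.which(tool) is not None for tool in tools}

# nvidia-smi present implies an NVIDIA driver; the Container Toolkit itself
# must still be installed separately so Docker can expose the GPU.
for tool, found in preflight().items():
    print(f"{tool}: {'found' if found else 'MISSING'}")
```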
git clone https://github.com/NVlabs/COMPASS.git && cd COMPASS
export HF_TOKEN=hf_xxx # https://huggingface.co/settings/tokens
./docker/run.sh assets # USDs + X-Mobility ckpt → ./assets/ (~5 min)
./docker/run.sh build # build the dev image (~10 min)
source ./docker/activate # venv-like activation (prompt: (compass-rl))
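The `HF_TOKEN` step above is easy to forget, and the asset download fails late without it. A small fail-fast check like the following (a hypothetical helper, not part of the repo) makes the error obvious up front; it only assumes Hugging Face tokens start with `hf_`.

```python
import os

def require_hf_token():
    """Hypothetical helper: raise early if HF_TOKEN is missing or malformed."""
    token = os.environ.get("HF_TOKEN", "")
    if not token.startswith("hf_"):
        raise RuntimeError(
            "Set HF_TOKEN to a Hugging Face access token (hf_...); "
            "create one at https://huggingface.co/settings/tokens"
        )
    return token
```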
python run.py -c configs/train_config.gin -o /tmp/out -b ./assets/x_mobility.ckpt --enable_cameras --num_envs 1 --visualizer kit

After activation, `python` is a shim that runs inside the container. Edit code with your host editor; the bind-mount hot-reloads changes. Run `deactivate` to leave the environment and `./docker/run.sh down` to stop the container.
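A shim like the `python` one described above typically just forwards the host invocation to the running container. The sketch below shows how such a command could be composed; the container name, working directory, and `docker exec` wiring here are assumptions for illustration, not the repo's actual shim.

```python
def build_exec_command(argv, container="compass-rl", workdir="/workspace"):
    """Compose the `docker exec` command a host-side python shim would run.

    argv: the arguments the user passed to `python` on the host.
    """
    return ["docker", "exec", "-it", "-w", workdir, container, "python", *argv]

# The shim would hand this list to subprocess/execvp; because the source tree
# is bind-mounted, the container sees the host's edits immediately.
print(build_exec_command(["run.py", "--num_envs", "1"]))
```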
Everything else (install details, training/distillation/export workflows, ROS2 deployment, OSMO cloud submission, GR00T post-training, agentic skills, auto-OMap generation, and contributing) lives in the handbook:
COMPASS is released under the Apache License 2.0. See LICENSE for additional details.
Wei Liu, Huihua Zhao, Chenran Li, Joydeep Biswas, Soha Pouya, Yan Chang
We would like to acknowledge the following projects, from which parts of the code in this repo are derived:
If you find this work useful in your research, please consider citing:
@article{liu2025compass,
title={COMPASS: Cross-embodiment Mobility Policy via Residual RL and Skill Synthesis},
author={Liu, Wei and Zhao, Huihua and Li, Chenran and Biswas, Joydeep and Pouya, Soha and Chang, Yan},
journal={arXiv preprint arXiv:2502.16372},
year={2025}
}