Open-source code, model, and dataset for our ASSETS 2024 paper "WheelPoser: Sparse-IMU Based Body Pose Estimation for Wheelchair Users" [DOI] [arXiv]
This code was developed with Python 3.7.12. Install the dependencies with `pip install torch chumpy vctoolkit open3d pybullet qpsolvers cvxopt pytorch-lightning opencv-python tqdm`.
*Installing PyTorch with CUDA support is highly recommended.*
If you want to use the physics optimization module, please also compile and install rbdl with the Python bindings and the URDF reader addon enabled.
- Register and download the SMPL model from the official website. Choose "SMPL for Python" and download "Version 1.0.0 for Python 2.7 (10 shape PCs)."
- Update the `smpl_model_path` variable in `config.py` to point to the downloaded model file.
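For reference, the relevant setting in `config.py` might look like the sketch below. The path and file name are placeholders (an assumption, not the repository's actual default); point it at the `.pkl` file you downloaded from the SMPL website.

```python
# config.py (hypothetical excerpt)
# Placeholder path -- replace with the actual location of the
# SMPL .pkl file you downloaded ("Version 1.0.0 for Python 2.7").
smpl_model_path = "models/smpl/smpl_model.pkl"
```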
- Register and download the AMASS dataset from the AMASS website, selecting "SMPL+H G" for each dataset.
- Extract the downloaded datasets and place them in the `src/data/dataset_raw/AMASS` directory.
- To obtain the dataset, please fill out this Google Form.
- Extract the dataset and place it in the `src/data/dataset_raw/WheelPoser` directory.
- Please note: we are working on making the WheelPoser-IMU dataset available for download directly from the AMASS website. The public download link will be updated here once it becomes available.
- Run `1.1 preprocess_all.py` to generate synthetic IMU data for the AMASS dataset and extract ground truth IMU data for the WheelPoser-IMU dataset.
- Run `1.2 combine_for_nn.py` to organize the synthetic IMU data, ground truth IMU data, and pose ground truth data for model training and fine-tuning.
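As a rough illustration of what synthesizing IMU data from motion-capture sequences involves (a generic sketch, not the project's actual implementation): synthetic accelerations are commonly approximated by second-order finite differences of tracked positions, scaled by the capture frame rate.

```python
import numpy as np

def synthesize_acceleration(positions, fps=60.0):
    """Approximate accelerations from a (T, 3) position trajectory
    using central second-order finite differences.

    a[t] ~= (p[t+1] - 2*p[t] + p[t-1]) * fps^2
    Boundary frames are left as zero for simplicity.
    """
    positions = np.asarray(positions, dtype=float)
    acc = np.zeros_like(positions)
    acc[1:-1] = (positions[2:] - 2 * positions[1:-1] + positions[:-2]) * fps ** 2
    return acc

# Constant-velocity motion should yield (near-)zero interior acceleration.
t = np.arange(10)[:, None]
pos = t * np.array([0.1, 0.0, 0.0])
acc = synthesize_acceleration(pos, fps=60.0)
```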
Execute `scripts/run_all_training.py` to train and fine-tune the model.
We provide scripts for leave-one-subject-out evaluation of the trained model, supporting both offline and online (real-time) evaluations. While the paper reports online evaluation results, offline evaluation scripts can produce smoother, more accurate motion predictions for non-real-time applications.
- Use the scripts in `2.2.2 Offline Evaluation` for offline leave-one-subject-out evaluation.
- Use the scripts in `2.2.3 Online Evaluation` for online leave-one-subject-out evaluation, with options to include or exclude the physics optimization module depending on your application needs.
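For readers unfamiliar with the protocol, leave-one-subject-out evaluation simply cycles through the subjects, holding out one at a time for testing while training (or fine-tuning) on the rest. A minimal sketch (subject IDs are illustrative, not the dataset's actual labels):

```python
def leave_one_subject_out(subjects):
    """Yield (held_out_subject, training_subjects) pairs,
    one split per subject in the dataset."""
    for held_out in subjects:
        train = [s for s in subjects if s != held_out]
        yield held_out, train

# Illustrative subject IDs.
subjects = ["S1", "S2", "S3"]
splits = list(leave_one_subject_out(subjects))
```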
- Download the network weights from here.
- Place the downloaded files in the `checkpoints` folder.
We use 4 Movella DOT IMUs for the live demo:
- Install the Movella DOT SDK on your machine by following the official guide.
- Perform a heading reset for all 4 IMU sensors before each use to improve pose estimation accuracy.
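Conceptually, a heading reset removes each sensor's initial yaw offset so that all IMUs report orientation relative to a common forward direction. The simplified 2-D yaw-only sketch below illustrates the idea (it is not the Movella SDK API, which performs the reset on-device):

```python
import math

def apply_heading_reset(yaw_readings, reference_yaw):
    """Re-express yaw readings relative to the yaw captured at reset
    time, wrapped to the range [-pi, pi)."""
    return [(y - reference_yaw + math.pi) % (2 * math.pi) - math.pi
            for y in yaw_readings]

# A sensor whose reading equals the reference yaw maps to zero heading;
# a reading 90 degrees past it maps to +pi/2.
ref = math.pi / 2
corrected = apply_heading_reset([math.pi / 2, math.pi], ref)
```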
We use Unity3D to visualize real-time pose estimation. Download a sample Unity scene file here.
Run `3.1 3_stage_live_demo.py` and open the Unity3D visualizer to view the pose estimation results in real time.
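If you want to adapt the visualizer to your own setup: one common pattern for feeding pose data to Unity (an assumption here, not necessarily the demo script's exact protocol) is to serialize the predicted joint angles into a simple text message and stream it over a local UDP socket, which a Unity C# script can parse with `float.Parse`.

```python
import socket

def encode_pose(joint_angles):
    """Serialize a flat list of joint angles (radians) as a
    comma-separated UTF-8 string -- a trivially parseable wire format."""
    return ",".join(f"{a:.4f}" for a in joint_angles).encode("utf-8")

def send_pose(sock, message, addr=("127.0.0.1", 8888)):
    """Fire-and-forget UDP send to a local Unity listener (port is a
    placeholder; match whatever your Unity scene listens on)."""
    sock.sendto(message, addr)

msg = encode_pose([0.0, 1.5, -0.25])
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send_pose(sock, msg)
```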
We would like to thank the following projects, on which our code is based: TransPose, PIP, IMUPoser, and TIP.
If you find the project helpful, please consider citing us:
@inproceedings{10.1145/3663548.3675638,
author = {Li, Yunzhi and Mollyn, Vimal and Yuan, Kuang and Carrington, Patrick},
title = {WheelPoser: Sparse-IMU Based Body Pose Estimation for Wheelchair Users},
year = {2024},
isbn = {9798400706776},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3663548.3675638},
doi = {10.1145/3663548.3675638},
booktitle = {Proceedings of the 26th International ACM SIGACCESS Conference on Computers and Accessibility},
keywords = {Inertial Measurement Units, Motion Capture, Pose Estimation, Real-time, Wheelchair Users},
series = {ASSETS '24}
}
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

