Official code for OPT-AIL: Provably and Practically Efficient Adversarial Imitation Learning with General Function Approximation.
Requirements:

- A Python version compatible with this project (see `requires-python` in `pyproject.toml`)
- [uv](https://astral.sh/uv) installed:

```sh
curl -LsSf https://astral.sh/uv/install.sh | sh
```

Clone the repository and install the dependencies:

```sh
git clone https://github.com/LAMDA-RL/OPT-AIL.git
cd OPT-AIL
uv sync
```

The expert trajectories used in the experiments can be found here: https://drive.google.com/file/d/1ZBZPjQITGLkiKusdGAwObLdomTTU3bqs/view?usp=drive_link
Set `buffer_folder` and `model_folder` in `conf/config.yaml` to absolute paths.
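As a rough sketch, the two entries might look like the following. The example paths are hypothetical; only the key names `buffer_folder` and `model_folder` come from this README, so check `conf/config.yaml` itself for the surrounding structure:

```yaml
# Hypothetical example values — replace with absolute paths on your machine.
buffer_folder: /home/user/OPT-AIL/buffers   # where the expert trajectories live
model_folder: /home/user/OPT-AIL/models     # where trained models are written
```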
Then run the scripts in the `scripts` directory. For example:
Run model-free OPT-AIL:

```sh
sh scripts/run_mf.sh
```

Run model-based OPT-AIL:

```sh
sh scripts/run_mb.sh
```

If you find this repository useful for your research, please cite:
```bibtex
@inproceedings{xu2024provably,
  title={Provably and Practically Efficient Adversarial Imitation Learning with General Function Approximation},
  author={Tian Xu and Zhilong Zhang and Ruishuo Chen and Yihao Sun and Yang Yu},
  booktitle={The 38th Conference on Neural Information Processing Systems},
  year={2024},
  url={https://openreview.net/forum?id=7YdafFbhxL}
}
```