This repo implements mutualistic reward shaping in multi-agent reinforcement learning (MARL) to enhance robot cooperation. Tested on CartPendulum, ShadowHand, and Mobile Manipulation, it improves stability, convergence, and coordination. Includes code, results, and documentation. Contributions welcome!
This repository contains the code for *Investigating Symbiosis in Robotic Ecosystems: A Case Study for Multi-Robot Reinforcement Learning Reward Shaping*.
Code is available now!
This project has been tested on Ubuntu 22.04 using Isaac Sim 4.5.0 or 4.2.0.
Install Isaac Sim and IsaacLab by following the IsaacLab pip installation guide.
Make sure the following files are placed correctly:
```
IsaacLab/
├── source/
│   ├── isaaclab_assets/
│   │   └── isaaclab_assets/
│   │       └── robots/
│   │           └── mobile_franka.py
│   └── isaaclab_tasks/
│       └── isaaclab_tasks/
│           └── direct/
│               ├── cart_pendulum/
│               ├── shadow_hand/
│               └── mobile_franka/
```
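Each task folder under `direct/` is a Python package that registers its environment with gymnasium, which is what makes the `--task` name in the training command below resolvable. A minimal sketch of such an `__init__.py` is shown here; the module, class, and config names are illustrative assumptions, not necessarily the repo's actual ones:

```python
# direct/mobile_franka/__init__.py -- illustrative sketch; names are assumptions
import gymnasium as gym

gym.register(
    # Task id passed via --task on the command line.
    id="MobileFrankaMARL",
    # Hypothetical environment class implementing the multi-agent direct workflow.
    entry_point=f"{__name__}.mobile_franka_env:MobileFrankaEnv",
    disable_env_checker=True,
    kwargs={
        # Hypothetical entry points that IsaacLab resolves when the task is loaded.
        "env_cfg_entry_point": f"{__name__}.mobile_franka_env_cfg:MobileFrankaEnvCfg",
        "skrl_mappo_cfg_entry_point": f"{__name__}.agents:skrl_mappo_cfg.yaml",
    },
)
```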
To train the MobileFranka multi-agent task using MAPPO:
```bash
./isaaclab.sh -p scripts/reinforcement_learning/skrl/train.py --algorithm MAPPO --task=MobileFrankaMARL  # optionally add --headless
```

Key features:

- Bio-Inspired Reward Shaping: Implements a formal symbiosis model to enhance cooperation in MARL (see the sketch after this list).
- Symbiotic Interaction Taxonomy: Categorizes agent interactions as mutualism, commensalism, and parasitism.
- Improved Learning in Complex Tasks: Improves stability and convergence and reduces variance in high-dimensional environments.
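The shaping itself lives inside each task's reward computation. As a rough illustration of the idea (not the paper's exact formulation), a symbiotic shaping term couples an agent's reward to its partner's, with the sign of the coupling coefficient selecting the interaction type:

```python
import numpy as np

def symbiotic_shaping(r_self: np.ndarray, r_partner: np.ndarray, coupling: float) -> np.ndarray:
    """Shape an agent's reward with a symbiotic coupling to its partner.

    coupling > 0 -> mutualism    (the partner's success also rewards this agent)
    coupling = 0 -> commensalism (this agent is unaffected by the partner)
    coupling < 0 -> parasitism   (the partner's success penalizes this agent)
    """
    return r_self + coupling * r_partner

# Example: two agents in a mutualistic (+/+) configuration.
r_a = np.array([1.0, 0.5])   # per-step rewards of agent A over two environments
r_b = np.array([0.2, 0.8])   # per-step rewards of agent B
shaped_a = symbiotic_shaping(r_a, r_b, coupling=0.5)
shaped_b = symbiotic_shaping(r_b, r_a, coupling=0.5)
print(shaped_a, shaped_b)
```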
If you use this code or find the idea useful, please consider citing our work:
```bibtex
@inproceedings{niu2025symbiosis,
  title     = {Investigating Symbiosis in Robotic Ecosystems: A Case Study for Multi-Robot Reinforcement Learning Reward Shaping},
  author    = {Xuezhi Niu and Didem Gürdür Broo},
  booktitle = {Proceedings of the 2025 9th International Conference on Robotics and Automation Sciences (ICRAS)},
  year      = {2025},
  publisher = {IEEE}
}
```



