Robo-Dopamine: General Process Reward Modeling for High-Precision Robotic Manipulation

Joy is dopamine’s handiwork—whether in humans or in robotics.

arXiv   Project Homepage   Benchmark   Weights

🗞️ News

  • 2025-12-30: ✨ Code, dataset, and weights are coming soon! Stay tuned for updates.
  • 2025-12-30: 🔥 We released the Robo-Dopamine project page.

🎯 TODO

  • Release the 3B Dopamine-Reward (GRM) model and inference code (about 2 weeks).
  • Release the 8B Dopamine-Reward (GRM) model (about 2 weeks).
  • Release the Robo-Dopamine dataset and SFT training code (about 1 month).
  • Release the dataset generation pipeline (about 1 month or more).
  • Release the Dopamine-RL training code for simulator and real-world settings (about 2 months or more).

🤖 Overview

Robo-Dopamine is composed of two core components:

  • (a) Dopamine-Reward Modeling Method. At its heart is the General Reward Model (GRM), a vision-language model that is prompted with a task description and conditioned on multi-view images of the initial, goal, "BEFORE," and "AFTER" states to predict a relative progress (or regress) hop. To ensure a stable and accurate signal, we employ Multi-Perspective Progress Fusion, which combines incremental, forward-anchored, and backward-anchored predictions into a final fused reward (see the fusion sketch below).
  • (b) Dopamine-RL Training Framework. Dopamine-RL first adapts the pre-trained GRM to a novel task from a single demonstration (One-Shot GRM Adaptation). It then uses a theoretically sound Policy-Invariant Reward Shaping method to convert the GRM's dense output into a reward signal that accelerates learning without altering the optimal policy, and it is compatible with a wide range of RL algorithms (see the shaping sketch below).
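The exact fusion rule is described in the paper; the snippet below is only a minimal Python sketch of the idea. The GRM interface (`predict_progress`), the query layout, and the equal-weight averaging are illustrative assumptions, not the released API.

```python
# Hypothetical sketch of Multi-Perspective Progress Fusion.
# The class/method names and the weighted-average rule are assumptions.

from dataclasses import dataclass


@dataclass
class GRMQuery:
    """One prompt to the GRM: task text plus multi-view images of the
    initial, goal, BEFORE, and AFTER states (placeholders here)."""
    task: str
    initial: object
    goal: object
    before: object
    after: object


def query_grm(grm, q: GRMQuery) -> float:
    """Assumed interface: the GRM returns a scalar progress hop."""
    return grm.predict_progress(q)  # hypothetical method name


def fused_reward(grm, task, initial, goal, before, after,
                 weights=(1 / 3, 1 / 3, 1 / 3)) -> float:
    """Combine three progress perspectives into one fused reward.

    - incremental:       progress of the BEFORE -> AFTER transition itself
    - forward-anchored:  progress of AFTER minus BEFORE, each measured
                         from the initial state
    - backward-anchored: remaining gap to the goal at BEFORE minus at AFTER
    """
    incremental = query_grm(grm, GRMQuery(task, initial, goal, before, after))

    fwd_after = query_grm(grm, GRMQuery(task, initial, goal, initial, after))
    fwd_before = query_grm(grm, GRMQuery(task, initial, goal, initial, before))
    forward_anchored = fwd_after - fwd_before

    bwd_after = query_grm(grm, GRMQuery(task, initial, goal, after, goal))
    bwd_before = query_grm(grm, GRMQuery(task, initial, goal, before, goal))
    backward_anchored = bwd_before - bwd_after  # shrinking gap => positive progress

    w_inc, w_fwd, w_bwd = weights
    return w_inc * incremental + w_fwd * forward_anchored + w_bwd * backward_anchored
```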

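Policy-invariant shaping is typically realized with potential-based reward shaping (Ng et al., 1999), which adds γΦ(s') − Φ(s) to the environment reward; using the GRM's fused progress as the potential Φ is our illustrative assumption here, not necessarily the paper's exact construction.

```python
# Minimal sketch of potential-based reward shaping, the standard
# policy-invariant scheme. Treating the GRM's fused progress estimate
# as the potential Phi is an assumption for illustration only.

def shaped_reward(env_reward: float,
                  phi_s: float,       # potential at current state, e.g. GRM progress
                  phi_s_next: float,  # potential at next state
                  gamma: float = 0.99) -> float:
    """r'(s, a, s') = r(s, a, s') + gamma * Phi(s') - Phi(s).

    The shaping term telescopes along any trajectory, so the optimal
    policy of the shaped MDP matches that of the original MDP; the dense
    signal only accelerates learning.
    """
    return env_reward + gamma * phi_s_next - phi_s
```

Because the shaping term is added on top of whatever reward the RL algorithm already consumes, this construction plugs into any standard learner (PPO, SAC, etc.) without modification.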

📑 Citation

If you find our work helpful, feel free to cite it:

@article{tan2025robo,
  title={Robo-Dopamine: General Process Reward Modeling for High-Precision Robotic Manipulation},
  author={Tan, Huajie and Chen, Sixiang and Xu, Yijie and Wang, Zixiao and Ji, Yuheng and Chi, Cheng and Lyu, Yaoxu and Zhao, Zhongxia and Chen, Xiansheng and Co, Peterson and others},
  journal={arXiv preprint arXiv:2512.23703},
  year={2025}
}
