TAMPanda is a MuJoCo-based task and motion planning library developed at the Chair of Machine Learning and Reasoning (i6) at RWTH Aachen University. It combines IK, RRT* motion planning, and PDDL symbolic planning — primarily for the Franka Emika Panda — along with A* navigation for mobile robots, a programmatic scene builder, and remote asset support. In other words, just another MuJoCo wrapper.
- **Simulation & Control** — MuJoCo environments for the Franka Panda and a differential-drive mobile robot; collision detection, gravity compensation, position-based trajectory controller
- **Planning** — IK via MINK; RRT* with path smoothing; geometry-aware grasp candidate ranking; A* navigation with occupancy-grid obstacle inflation
- **Manipulation** — `PickPlaceExecutor` for end-to-end pick-and-place with multi-candidate retry and kinematic object attachment; `PointCloudGraspPlanner` for rudimentary grasp pose computation on unseen objects from segmented point clouds (WIP)
- **Symbolic Planning** — grid-based tabletop and blocks-world PDDL domains; `ActionFeasibilityChecker` validates symbolic actions against the continuous planner; parallel dataset generation with BFS and optional W&B logging
- **Scene & Assets** — `SceneBuilder` assembles scenes from reusable MJCF templates at runtime with hot-reload; `YCBDownloader` / `GSODownloader` fetch ~80 YCB objects and ~1,030 Google Scanned Objects on demand; `MujocoCamera` for RGB, depth, segmentation, and pointcloud rendering
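The geometry-aware grasp ranking idea can be illustrated with a toy, library-independent sketch: candidates for a box-shaped object are scored by how much clearance the gripper has around the grasped dimension, so narrower faces rank first. The `rank_grasp_candidates` helper and the 8 cm maximum opening are illustrative assumptions, not TAMPanda's actual API:

```python
import numpy as np

def rank_grasp_candidates(half_size, max_opening=0.08):
    """Toy geometry-aware grasp ranking for a box.

    half_size: (hx, hy, hz) half-extents of the box.
    Returns (yaw, width) candidates sorted best-first; a candidate is
    feasible only if the gripper can span the grasped dimension.
    """
    hx, hy, hz = half_size
    candidates = []
    # Yaw 0 grasps across the y-dimension, yaw pi/2 across x.
    for yaw, width in [(0.0, 2 * hy), (np.pi / 2, 2 * hx)]:
        if width <= max_opening:
            # Prefer narrower grasps: more clearance inside the gripper.
            clearance = max_opening - width
            candidates.append((clearance, yaw, width))
    candidates.sort(reverse=True)
    return [(yaw, width) for _, yaw, width in candidates]

# A 4 cm x 6 cm x 5 cm block: grasping across the 4 cm side ranks first.
ranked = rank_grasp_candidates((0.02, 0.03, 0.025))
```

TAMPanda's actual ranker additionally validates candidates against IK and collisions; this sketch only captures the geometric scoring step.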
```bash
pip install -e .
```

Dependencies: `mujoco>=3.0.0`, `mink>=0.0.1`, `numpy`, `loop-rate-limiters`, `matplotlib`, `opencv-python`.

macOS: the MuJoCo passive viewer requires `mjpython` instead of `python`. Headless scripts run fine with standard `python`.
```python
from tampanda import ArmSceneBuilder
from tampanda.scenes import TABLE_SYMBOLIC_TEMPLATE, CYLINDER_THIN_TEMPLATE

builder = ArmSceneBuilder()
builder.add_resource("table", TABLE_SYMBOLIC_TEMPLATE)
builder.add_resource("cylinder", CYLINDER_THIN_TEMPLATE)
builder.add_resource("can", {"type": "ycb", "name": "master_chef_can"})
builder.add_object("table", pos=[0.45, 0.00, 0.00])
builder.add_object("cylinder", pos=[0.40, 0.10, 0.36], rgba=[0.8, 0.3, 0.2, 1.0])
builder.add_object("can", pos=[0.50, -0.10, 0.33])

env = builder.build_env(rate=200.0)
with env.launch_viewer() as viewer:
    while viewer.is_running():
        env.step()
```

```python
from tampanda import ArmSceneBuilder, RRTStar, GraspPlanner, PickPlaceExecutor
from tampanda.scenes import TABLE_SYMBOLIC_TEMPLATE, BLOCK_SMALL_TEMPLATE
import numpy as np

builder = ArmSceneBuilder()
builder.add_resource("table", TABLE_SYMBOLIC_TEMPLATE)
builder.add_resource("block", BLOCK_SMALL_TEMPLATE)
builder.add_object("table", pos=[0.75, 0.80, 0.00])
builder.add_object("block", pos=[0.45, 0.40, 0.31], rgba=[0.2, 0.5, 0.9, 1.0], name="block_0")

env = builder.build_env(rate=200.0)
planner = RRTStar(env)
executor = PickPlaceExecutor(env, planner, GraspPlanner(table_z=0.27))

with env.launch_viewer() as viewer:
    env.rest(2.0)
    ok = executor.pick("block_0",
                       env.get_object_position("block_0"),
                       env.get_object_half_size("block_0"),
                       env.get_object_orientation("block_0"))
    if ok:
        executor.place("block_0", np.array([0.50, 0.25, 0.31]))
    while viewer.is_running():
        env.step()
```

For interactive walkthroughs see `notebooks/franka_getting_started.ipynb` and `notebooks/mobile_getting_started.ipynb`.
All examples are in `examples/`. On macOS, use `mjpython` for anything that opens a viewer.
Arm — control and grasping
- `basic_ik.py` — IK to a target pose, held in viewer
- `basic_rrt.py` — RRT* between two joint configurations
- `grasping_ik_planner.py`, `grasping_rrt_planner.py` — geometry-aware grasping with ranked candidates
- `blocks_world_rrt.py` — pick two cubes onto a platform with `PickPlaceExecutor`
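The planner behind `basic_rrt.py` is RRT*; its core tree-growing step can be shown with a plain RRT (no rewiring, so simpler than RRT*) in a 2-D configuration space. This is a simplified sketch under stated assumptions, not TAMPanda's implementation:

```python
import math
import random

def rrt(start, goal, is_free, bounds, step=0.2, goal_tol=0.2,
        max_iters=5000, seed=0):
    """Minimal RRT: sample, find nearest node, steer one step,
    keep the new node if it is collision-free."""
    rng = random.Random(seed)
    nodes = [start]
    parents = {0: None}
    for _ in range(max_iters):
        sample = (rng.uniform(*bounds[0]), rng.uniform(*bounds[1]))
        # Nearest existing tree node to the random sample.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        # Steer one fixed step from the nearest node toward the sample.
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if not is_free(new):
            continue
        nodes.append(new)
        parents[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:
            # Walk parent pointers back to the start.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parents[k]
            return path[::-1]
    return None

# Empty free space: any configuration is valid.
path = rrt((0.0, 0.0), (1.0, 1.0), lambda q: True, [(-2, 2), (-2, 2)])
```

RRT* additionally rewires the tree toward lower-cost parents, which is what gives the library's planner asymptotically optimal paths.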
Arm — symbolic planning (TAMP)
The tabletop domain connects PDDL task planning to the continuous planner: symbolic actions (`pick`, `put`) are validated with IK + RRT* before being committed to the plan. `demo_pick_put.py` runs the full loop end-to-end.
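The validate-before-commit loop can be sketched abstractly. The `execute_plan`, `feasible`, and `execute` names below are illustrative; in TAMPanda the feasibility check is the IK + RRT* query performed by `ActionFeasibilityChecker`:

```python
def execute_plan(plan, feasible, execute):
    """Validate-then-commit loop over a symbolic plan.

    plan:     list of symbolic actions, e.g. ("pick", "block_0").
    feasible: callback that checks an action geometrically
              (in TAMPanda: IK + RRT*; here it is abstract).
    execute:  callback that commits the action to the world.
    Returns the committed prefix; stops at the first action the
    continuous planner cannot realize (where replanning would begin).
    """
    committed = []
    for action in plan:
        if not feasible(action):
            break
        execute(action)
        committed.append(action)
    return committed

# Toy run: the third action is geometrically infeasible.
plan = [("pick", "a"), ("put", "a", "loc1"), ("pick", "b")]
done = execute_plan(plan,
                    feasible=lambda a: a[1] != "b",
                    execute=lambda a: None)
```

Interleaving validation with the symbolic plan this way keeps infeasible actions from ever reaching the robot, at the cost of one continuous-planner query per action.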
- `symbolic.py` — grid-based PDDL planning in viewer
- `tabletop_interactive.py` — real-time state grounding and interactive tabletop
- `demo_pick_put.py` — full TAMP execution pipeline
- `scene_builder.py` — programmatic scene construction with hot-reload
Mobile robot
- `basic_navigation.py` — A* through a slalom, Lidar/IMU readout at goal
- `square_drive.py` — drift measurement over multiple square laps
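The occupancy-grid inflation and A* search used for navigation can be sketched generically with NumPy and `heapq`. This is a self-contained illustration of the technique, not the library's API:

```python
import heapq
import numpy as np

def inflate(grid, radius):
    """Mark every cell within `radius` cells of an obstacle as occupied,
    so planned paths keep clearance for the robot's footprint."""
    occ = grid.copy()
    for y, x in zip(*np.nonzero(grid)):
        y0, y1 = max(0, y - radius), min(grid.shape[0], y + radius + 1)
        x0, x1 = max(0, x - radius), min(grid.shape[1], x + radius + 1)
        occ[y0:y1, x0:x1] = 1
    return occ

def astar(grid, start, goal):
    """4-connected A* with a Manhattan-distance heuristic."""
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    frontier = [(h(start), 0, start, None)]  # (f, g, cell, parent)
    came = {}
    while frontier:
        _, g, cur, parent = heapq.heappop(frontier)
        if cur in came:
            continue
        came[cur] = parent
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        for dy, dx in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            nxt = (cur[0] + dy, cur[1] + dx)
            if (0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1]
                    and not grid[nxt] and nxt not in came):
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None

# A wall across most of the map; inflation forces a wider detour.
grid = np.zeros((7, 7), dtype=int)
grid[3, 0:5] = 1
path = astar(inflate(grid, 1), (0, 0), (6, 0))
```

Inflating by the robot's radius (in cells) lets the planner treat the robot as a point, which is the standard trick behind occupancy-grid navigation.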
Perception and assets
- `camera_headless.py`, `pointcloud_demo.py` — RGB and pointcloud capture
- `object_browser.py` — browse, download, and preview YCB / Google Scanned Objects
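Turning a depth image into a point cloud, as `pointcloud_demo.py` exercises, follows standard pinhole back-projection. A self-contained sketch of that math (the function name and intrinsic values are illustrative, not `MujocoCamera`'s interface):

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into camera-frame 3-D points
    via the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

# A flat surface 2 m from the camera maps every pixel to z = 2.
cloud = depth_to_pointcloud(np.full((4, 4), 2.0), fx=50, fy=50, cx=2, cy=2)
```

The principal point (`cx`, `cy`) lands at x = y = 0, and points spread out proportionally to depth, which is why far surfaces produce sparser clouds.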
Benchmarks (all headless) — `benchmark_grasping.py`, `benchmark_cylinder_grasping.py`, `benchmark_feasibility.py`, `benchmark_feasibility_params.py`, `benchmark_parallel_rrt.py`, `benchmark_ycb_grasp.py`
If you use TAMPanda in your research, please cite:
```bibtex
@software{tampanda,
  title  = {{TAMPanda}: Task and Motion Planning for the Franka Emika Panda},
  author = {Swoboda, Daniel},
  year   = {2025},
  url    = {https://github.com/snoato/TAMPanda},
}
```

- MuJoCo (Google DeepMind) — physics engine
- MINK (Kevin Zakka) — differential IK library
- elpis-lab/ycb_dataset — YCB object assets
- kevinzakka/mujoco_scanned_objects — Google Scanned Objects assets