From 0eebfb9668271305c81e2bec27c14a105f2d079d Mon Sep 17 00:00:00 2001
From: stash
Date: Sun, 18 Jan 2026 19:14:47 -0800
Subject: [PATCH 01/10] Readme changes

---
 README.md | 28 ++++++++++++++++++----------
 1 file changed, 18 insertions(+), 10 deletions(-)

diff --git a/README.md b/README.md
index c2f9992abd..1ae7f13793 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
[banner image: banner_bordered_trimmed]

The Open-Source Framework for Robotic Intelligence

+

The Agentive Operating System for Generalist Robotics


@@ -37,28 +37,36 @@ The python library comes with a rich set of integrations; visualizers, spatial r ### Installation -- Linux is supported, with tests being performed on Ubuntu 22.04 and 24.04 -- MacOS support is in beta, you're welcome to try it *but expect inconsistent/flakey behavior (rather than errors/crashing)* - - instead of the apt-get command below run: `brew install gnu-sed gcc portaudio git-lfs libjpeg-turbo python` +Supported/tested matrix: + +| Platform | Status | Tested | Required System deps | +| --- | --- | --- | --- | +| Linux | supported | Ubuntu 22.04, 24.04 | See below | +| macOS | experimental beta | not CI-tested | `brew install gnu-sed gcc portaudio git-lfs libjpeg-turbo python` | + +Note: macOS is usable but expect inconsistent/flaky behavior (rather than hard errors/crashes). ```sh sudo apt-get update sudo apt-get install -y curl g++ portaudio19-dev git-lfs libturbojpeg python3-dev # install uv for python curl -LsSf https://astral.sh/uv/install.sh | sh && export PATH="$HOME/.local/bin:$PATH" +``` + +Option 1: Install in a virtualenv -# -# NOTE!!! the first time, you're going to have an empty/black rerun window for a while -# -# the command needs to download the replay file (2.4gb), which takes a bit +```sh -# OPTION 1: install dimos in a virtualenv uv venv && . 
.venv/bin/activate uv pip install 'dimos[base,unitree]' # replay recorded data to test that the system is working +# IMPORTANT: First replay run will show a black rerun window while 2.4 GB downloads from LFS dimos --replay run unitree-go2 +``` -# OPTION 2: if you want to test out dimos without installing run: +Option 2: Run without installing + +```sh uvx --from 'dimos[base,unitree]' dimos --replay run unitree-go2 ``` From a0226f47e75c50b4c0b669f8337628a80a96bbb0 Mon Sep 17 00:00:00 2001 From: stash Date: Sun, 18 Jan 2026 22:19:11 -0800 Subject: [PATCH 02/10] readme reorg --- README.md | 17 +++++++++-------- 1 file changed, 9 insertions(+), 8 deletions(-) diff --git a/README.md b/README.md index 1ae7f13793..6eae8df804 100644 --- a/README.md +++ b/README.md @@ -72,11 +72,10 @@ uvx --from 'dimos[base,unitree]' dimos --replay run unitree-go2 -### Usage +### Dimensional Usage #### Control a robot in a simulation (no robot required) -After running the commads below, open http://localhost:7779/command-center to control the robot movement. ```sh export DISPLAY=:1 # Or DISPLAY=:0 if getting GLFW/OpenGL X11 errors @@ -84,16 +83,20 @@ export DISPLAY=:1 # Or DISPLAY=:0 if getting GLFW/OpenGL X11 errors dimos --viewer-backend rerun-web --simulation run unitree-go2 ``` -#### Get it working on a physical robot! +#### Control a real robot (Unitree Go2 over WebRTC) ```sh -export ROBOT_IP=PUT_YOUR_IP_ADDR_HERE +export ROBOT_IP= dimos --viewer-backend rerun-web run unitree-go2 ``` -#### Have it controlled by AI! +After running dimOS open http://localhost:7779 to control robot movement. -WARNING: This is a demo showing the **connection** between AI and robotic control -- not a demo of a super-intelligent AI. Be ready to physically prevent your robot from taking dumb physical actions. 
+#### Dimensional Agents + +> \[!NOTE] +> +> **Experimental Beta: Potential unstoppable robot sentience** ```sh export OPENAI_API_KEY= @@ -105,8 +108,6 @@ After running that, open a new terminal and run the following to start giving in # activate the venv in this new terminal source .venv/bin/activate -# Note: after running the next command, WAIT for the agent to connect -# (this will take a while the first time) # then tell the agent "explore the room" # then tell it to go to something, ex: "go to the door" humancli From c39ec39f237a6b6b0862b2d430a227b831d5765b Mon Sep 17 00:00:00 2001 From: stash Date: Thu, 22 Jan 2026 17:08:18 -0800 Subject: [PATCH 03/10] Final release readme --- README.md | 168 +++++++++++++++++++++++------------------- docs/package_usage.md | 14 ++-- 2 files changed, 99 insertions(+), 83 deletions(-) diff --git a/README.md b/README.md index 6eae8df804..94e0babb45 100644 --- a/README.md +++ b/README.md @@ -1,5 +1,6 @@
[banner image: banner_bordered_trimmed]

Program Atoms

The Agentive Operating System for Generalist Robotics

@@ -27,13 +28,30 @@ > > **Active Beta: Expect Breaking Changes** -# What is Dimensional? +# The Dimensional Framework -DimOS is both a language-agnostic framework and a Python-first library for robot control. It has optional ROS integration and is designed to let AI agents invoke tools (skills), directly access sensor and state data, and generate complex emergent behaviors. +Dimensional is the open-source, universal operating system for generalist robotics. On DimOS, developers +can design, build, and run physical ("dimensional") applications that run on any humanoid, quadruped, +drone, or wheeled embodiment. -The python library comes with a rich set of integrations; visualizers, spatial reasoners, planners, simulators (mujoco, Isaac Sim, etc.), robot state/action primitives, and more. +**Programming physical robots is now as simple as programming digital software**: Composable, Modular, Repeatable. -# How do I get started? +Core Features: +- **Transport/Middleware:** DimOS native Python transport supports LCM, DDS, and SHM, plus ROS 2. +- **Robot integrations:** We integrate with the majority of hardware OEMs and are moving fast to cover them + all. Supported and/or immediate roadmap: Unitree Go2, Unitree G1, Unitree B1, Booster K1, DJI Mavic 2, + AGIBOT X2, ABIBOT A2, AGIBOT D1 Max/Pro, OpenARMs, xARM 6/7, AgileX Piper, HighTorque Pantera, and + many more. +- **Agents (experimental):** DimOS agents understand physical space, subscribe to sensor streams, and call + **physical** tools. Emergence appears when agents have physical agency. +- **MCP (experimental):** Vibecode robots by giving your AI editor (Cursor, Claude Code) MCP access to run + physical commands (move forward 1 meter, jump, etc.). +- **Navigation:** Production navigation stack for any robot with lidar: SLAM, terrain analysis, collision + avoidance, route planning, exploration. 
+- **Dashboard:** The DimOS command center gives developers the tooling to debug, visualize, compose, and test dimensional applications in real-time. Control your robot via waypoint, agent query, keyboard, VR, more. +- **Modules:** Standalone components (equivelent to ROS nodes) that publish & subscribe to typed In/Out streams that communicate over DimOS transports: The building blocks of Dimensional. + +# Getting Started ### Installation @@ -70,6 +88,28 @@ Option 2: Run without installing uvx --from 'dimos[base,unitree]' dimos --replay run unitree-go2 ``` +### Development (clone + setup) + +```sh +GIT_LFS_SKIP_SMUDGE=1 git clone -b dev https://github.com/dimensionalOS/dimos.git +cd dimos +``` + +Then pick one of two development paths: + +Option A: Devcontainer +```sh +./bin/dev +``` + +Option B: Editable install with uv +```sh +uv venv && . .venv/bin/activate +uv pip install -e '.[base,dev]' +``` + +For system deps, Nix setups, and testing, see `/docs/development/README.md`. + ### Dimensional Usage @@ -113,104 +153,86 @@ source .venv/bin/activate humancli ``` -# How do I use it as a library? +# The Dimensional Library + +### Blueprints -### Simple Camera Activation +Blueprints are how robots are constructed on Dimensional; instructions for how to construct and wire modules. You compose them with +`autoconnect(...)`, which connects streams by `(name, type)` and returns a `ModuleBlueprintSet`. -Assuming you have a webcam, save the following as a python file and run it: +Blueprints can be composed, remapped, and have transports overridden if `autoconnect()` fails due to conflicting variable names or `IN[]` and `OUT[]` message types. +A blueprint example that connects the image stream from a robot to an LLM Agent for reasoning and action execution. 
```py from dimos.core.blueprints import autoconnect -from dimos.hardware.sensors.camera.module import CameraModule +from dimos.core.transport import LCMTransport +from dimos.msgs.sensor_msgs import Image +from dimos.robot.unitree.connection.go2 import go2_connection +from dimos.agents.agent import llm_agent -if __name__ == "__main__": - autoconnect( - # technically autoconnect is not needed because we only have 1 module - CameraModule.blueprint() - ).build().loop() +blueprint = autoconnect( + go2_connection(), + llm_agent(), +).transports({("color_image", Image): LCMTransport("/color_image", Image)}) + +# Run the blueprint +blueprint.build().loop() ``` -### Write A Custom Module +### Modules -Lets convert the camera's image to grayscale. +A simple robot connection module that sends streams of continuous `cmd_vel` to the robot and recieves `color_image` to a simple `Listener` module. ```py +import threading, time, numpy as np +from dimos.core import In, Module, Out, rpc from dimos.core.blueprints import autoconnect -from dimos.core import In, Out, Module, rpc -from dimos.hardware.sensors.camera.module import CameraModule +from dimos.msgs.geometry_msgs import Twist from dimos.msgs.sensor_msgs import Image +from dimos.msgs.sensor_msgs.image_impls.AbstractImage import ImageFormat -from reactivex.disposable import Disposable - -class Listener(Module): - # the CameraModule has an Out[Image] named "color_image" - # How do we know this? 
Just print(CameraModule.module_info().outputs) - # the name ("color_image") must match the CameraModule's output - color_image: In[Image] = None - grayscale_image: Out[Image] = None - - def __init__(self, *args, **kwargs) -> None: - super().__init__(*args, **kwargs) - self.count = 0 +class RobotConnection(Module): + cmd_vel: In[Twist] + color_image: Out[Image] @rpc - def start(self) -> None: - super().start() - def callback_func(img: Image) -> None: - self.count += 1 - print(f"got frame {self.count}") - print(f"img.data.shape: {img.data.shape}") - self.grayscale_image.publish(img.to_grayscale()) - - unsubscribe_func = self.color_image.subscribe(callback_func) - # the unsubscribe_func be called when the module is stopped - self._disposables.add(Disposable( - unsubscribe_func - )) + def start(self): + threading.Thread(target=self._image_loop, daemon=True).start() + + def _image_loop(self): + while True: + img = Image.from_numpy( + np.zeros((120, 160, 3), np.uint8), + format=ImageFormat.RGB, + frame_id="camera_optical", + ) + self.color_image.publish(img) + time.sleep(0.2) + +class Listener(Module): + color_image: In[Image] @rpc - def stop(self) -> None: - super().stop() + def start(self): + self.color_image.subscribe(lambda img: print(f"image {img.width}x{img.height}")) if __name__ == "__main__": autoconnect( + RobotConnection.blueprint(), Listener.blueprint(), - CameraModule.blueprint(), ).build().loop() ``` -#### Note: Many More Examples in the [Examples Folder](./examples) - -### How do custom modules work? (Example breakdown) - -- Every module represents one process: modules run in parallel (python multiprocessing). Because of this **modules should only save/modify data on themselves**. Do not mutate or share global vars inside a module. -- At the top of this module definition, the In/Out **streams** are defining a pub-sub system. This module expects *someone somewhere* to give it a color image. 
And, the module is going to publish a grayscale image (that any other module to subscribe to). - - Note: if you are a power user thinking "so streams must be statically declared?" the answer is no, there are ways to perform dynamic connections, but for type-checking and human sanity the creation of dynamic stream connections are under an advanced API and should be used as a last resort. -- The `autoconnect` ties everything together: - - The CameraModule has an output of `color_image` - - The Listener has an input of `color_image` - - Autoconnect puts them together, and checks that their types are compatible (both are of type `Image`) -- How can we see what In/Out streams are provided by a module? - - Open a python repl (e.g. `python`) - - Import the module, ex: `from dimos.hardware.sensors.camera.module import CameraModule` - - Print the module outputs: `print(CameraModule.module_info().outputs)` - - Print the module inputs: `print(CameraModule.module_info().inputs)` - - Print all the information (rpcs, skills, etc): `print(CameraModule.module_info())` -- What about `@rpc`? - - If you want a method to be called by another module (not just an internal method) then add the `@rpc` decorator AND make sure BOTH the arguments and return value of the method are json-serializable. - - Rpc methods get called using threads, meaning two rpc methods can be running at the same time. For this reason, python thread locking is often necessary for data that is being written/read during rpc calls. - - The start/stop methods always need to be an rpc because they are called externally. - ### Monitoring & Debugging -In addition to rerun logging, DimOS comes with a number of monitoring tools: +DimOS comes with a number of monitoring tools: - Run `lcmspy` to see how fast messages are being published on streams. - Run `skillspy` to see how skills are being called, how long they are running, which are active, etc. - Run `agentspy` to see the agent's status over time. 
- If you suspect there is a bug within DimOS itself, you can enable extreme logging by prefixing the dimos command with `DIMOS_LOG_LEVEL=DEBUG RERUN_SAVE=1 `. Ex: `DIMOS_LOG_LEVEL=DEBUG RERUN_SAVE=1 dimos --replay run unitree-go2` -# How does Dimensional work? +# Documentation Concepts: - [Modules](/docs/concepts/modules.md): The building blocks of DimOS, modules run in parallel and are singleton python classes. @@ -220,12 +242,6 @@ Concepts: - [Skills](/dimos/core/README_BLUEPRINTS.md#defining-skills): An RPC function, except it can be called by an AI agent (a tool for an AI). - Agents: AI that has an objective, access to stream data, and is capable of calling skills as tools. -## Contributing / Building From Source - -For development, we optimize for flexibility—whether you love Docker, Nix, or have nothing but **notepad.exe** and a dream, you’re good to go. Open up the [Development Guide](/docs/development/README.md) to see the extra steps for setting up development environments. +## Contributing We welcome contributions! See our [Bounty List](https://docs.google.com/spreadsheets/d/1tzYTPvhO7Lou21cU6avSWTQOhACl5H8trSvhtYtsk8U/edit?usp=sharing) for open requests for contributions. If you would like to suggest a feature or sponsor a bounty, open an issue. - -# License - -DimOS is licensed under the Apache License, Version 2.0. And will always be free and open source. 
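The Concepts list above describes modules as parallel processes connected by typed streams, with blueprints wiring outputs to inputs that share a name and message type. As a rough, framework-free sketch of that `(name, type)` matching idea — the `Stream` and `autoconnect_sketch` names here are hypothetical illustrations, not the actual DimOS API:

```python
from collections import defaultdict

class Stream:
    """Minimal pub/sub channel standing in for a typed In/Out stream."""
    def __init__(self, name, typ):
        self.name, self.typ = name, typ
        self._subs = []

    def subscribe(self, fn):
        self._subs.append(fn)

    def publish(self, msg):
        # Deliver the message to every subscriber callback.
        for fn in self._subs:
            fn(msg)

def autoconnect_sketch(outputs, inputs):
    """Wire every output stream to every input stream sharing its (name, type) key."""
    by_key = defaultdict(list)
    for out in outputs:
        by_key[(out.name, out.typ)].append(out)
    wired = []
    for inp in inputs:
        for out in by_key.get((inp.name, inp.typ), []):
            out.subscribe(inp.publish)  # forward matching messages downstream
            wired.append((out.name, out.typ))
    return wired

# Demo: a camera output and a listener input share the ("color_image", "Image") key,
# so they get wired together; mismatched keys are simply left unconnected.
received = []
camera_out = Stream("color_image", "Image")
listener_in = Stream("color_image", "Image")
listener_in.subscribe(received.append)
autoconnect_sketch([camera_out], [listener_in])
camera_out.publish("frame-1")
print(received)  # -> ['frame-1']
```

This is only meant to show why a type mismatch or a name collision makes automatic wiring fail and forces an explicit remap, as the Blueprints docs linked above discuss.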
diff --git a/docs/package_usage.md b/docs/package_usage.md index 24584a2e79..328708122e 100644 --- a/docs/package_usage.md +++ b/docs/package_usage.md @@ -11,19 +11,19 @@ uv init Install: ```bash -uv add dimos[dev,cpu,sim] +uv add dimos[base,dev,unitree] ``` Test the Unitree Go2 robot in the simulator: ```bash -uv run dimos-robot --simulation run unitree-g1 +uv run dimos --simulation run unitree-go2 ``` Run your actual robot: ```bash -uv run dimos-robot --robot-ip=192.168.X.XXX run unitree-g1 +uv run dimos --robot-ip=192.168.X.XXX run unitree-go2 ``` ### Without installing @@ -31,7 +31,7 @@ uv run dimos-robot --robot-ip=192.168.X.XXX run unitree-g1 With `uv` you can run tools without having to explicitly install: ```bash -uvx --from dimos dimos-robot --robot-ip=192.168.X.XXX run unitree-g1 +uvx --from dimos[base,unitree] dimos --robot-ip=192.168.X.XXX run unitree-go2 ``` ## With `pip` @@ -46,17 +46,17 @@ python -m venv .venv Install: ```bash -pip install dimos[dev,cpu,sim] +pip install dimos[base,dev,unitree] ``` Test the Unitree Go2 robot in the simulator: ```bash -dimos-robot --simulation run unitree-g1 +dimos --simulation run unitree-go2 ``` Run your actual robot: ```bash -dimos-robot --robot-ip=192.168.X.XXX run unitree-g1 +dimos --robot-ip=192.168.X.XXX run unitree-go2 ``` From de08c5d8dfdcb8e63845bd7d7d6350a410be528b Mon Sep 17 00:00:00 2001 From: stash Date: Thu, 22 Jan 2026 17:18:46 -0800 Subject: [PATCH 04/10] Formatting --- README.md | 56 ++++++++++++++++++++++++++++--------------------------- 1 file changed, 29 insertions(+), 27 deletions(-) diff --git a/README.md b/README.md index 94e0babb45..a7b6b4ee31 100644 --- a/README.md +++ b/README.md @@ -17,9 +17,11 @@ [![Docker](https://img.shields.io/badge/Docker-ready-2496ED?style=flat-square&logo=docker&logoColor=white)](https://www.docker.com/)

- Key Features • - How To Use • - ContributingLicense + Key Features • + How To Use • + Installation • + Development • + Contributing

@@ -53,7 +55,7 @@ Core Features: # Getting Started -### Installation +## Installation Supported/tested matrix: @@ -88,31 +90,9 @@ Option 2: Run without installing uvx --from 'dimos[base,unitree]' dimos --replay run unitree-go2 ``` -### Development (clone + setup) - -```sh -GIT_LFS_SKIP_SMUDGE=1 git clone -b dev https://github.com/dimensionalOS/dimos.git -cd dimos -``` - -Then pick one of two development paths: - -Option A: Devcontainer -```sh -./bin/dev -``` - -Option B: Editable install with uv -```sh -uv venv && . .venv/bin/activate -uv pip install -e '.[base,dev]' -``` - -For system deps, Nix setups, and testing, see `/docs/development/README.md`. - -### Dimensional Usage +### Test Installation #### Control a robot in a simulation (no robot required) @@ -223,6 +203,28 @@ if __name__ == "__main__": ).build().loop() ``` +## Development (clone + setup) + +```sh +GIT_LFS_SKIP_SMUDGE=1 git clone -b dev https://github.com/dimensionalOS/dimos.git +cd dimos +``` + +Then pick one of two development paths: + +Option A: Devcontainer +```sh +./bin/dev +``` + +Option B: Editable install with uv +```sh +uv venv && . .venv/bin/activate +uv pip install -e '.[base,dev]' +``` + +For system deps, Nix setups, and testing, see `/docs/development/README.md`. + ### Monitoring & Debugging DimOS comes with a number of monitoring tools: From 247f0cbc881e7d9590ef049f117e867d54718f39 Mon Sep 17 00:00:00 2001 From: stash Date: Thu, 22 Jan 2026 17:30:22 -0800 Subject: [PATCH 05/10] formatting --- README.md | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/README.md b/README.md index a7b6b4ee31..7f51097daa 100644 --- a/README.md +++ b/README.md @@ -17,9 +17,9 @@ [![Docker](https://img.shields.io/badge/Docker-ready-2496ED?style=flat-square&logo=docker&logoColor=white)](https://www.docker.com/)

- Key Features • - How To Use • + FeaturesInstallation • + DocumentationDevelopmentContributing

@@ -28,7 +28,7 @@ > \[!NOTE] > -> **Active Beta: Expect Breaking Changes** +> ⚠️ **Alpha Pre-Release: Expect Breaking Changes** ⚠️ # The Dimensional Framework @@ -52,6 +52,7 @@ Core Features: avoidance, route planning, exploration. - **Dashboard:** The DimOS command center gives developers the tooling to debug, visualize, compose, and test dimensional applications in real-time. Control your robot via waypoint, agent query, keyboard, VR, more. - **Modules:** Standalone components (equivelent to ROS nodes) that publish & subscribe to typed In/Out streams that communicate over DimOS transports: The building blocks of Dimensional. +- **Manipulation (unreleased)** Classical (OMPL, IK, GraspGen), Agentive (TAMP), and VLA-native manipulation stack runs out-of-the-box on any DimOS supported arm embodiment. # Getting Started @@ -203,7 +204,7 @@ if __name__ == "__main__": ).build().loop() ``` -## Development (clone + setup) +## Development ```sh GIT_LFS_SKIP_SMUDGE=1 git clone -b dev https://github.com/dimensionalOS/dimos.git From 2f3b5ba665faf51d6a2af4ba96e650c99f4edcf1 Mon Sep 17 00:00:00 2001 From: stash Date: Thu, 22 Jan 2026 17:36:29 -0800 Subject: [PATCH 06/10] fix --- README.md | 1 - 1 file changed, 1 deletion(-) diff --git a/README.md b/README.md index 7f51097daa..237f2e6fe3 100644 --- a/README.md +++ b/README.md @@ -243,7 +243,6 @@ Concepts: - [Blueprints](/dimos/core/README_BLUEPRINTS.md): a way to group modules together and define their connections to each other. - [RPC](/dimos/core/README_BLUEPRINTS.md#calling-the-methods-of-other-modules): how one module can call a method on another module (arguments get serialized to JSON-like binary data). - [Skills](/dimos/core/README_BLUEPRINTS.md#defining-skills): An RPC function, except it can be called by an AI agent (a tool for an AI). -- Agents: AI that has an objective, access to stream data, and is capable of calling skills as tools. 
## Contributing From 7134725e3c85355abe03d6fe33344b851ba4808d Mon Sep 17 00:00:00 2001 From: stash Date: Thu, 22 Jan 2026 17:48:43 -0800 Subject: [PATCH 07/10] Formatting --- README.md | 78 ++++++++++++++++++++++++++++++------------------------- 1 file changed, 43 insertions(+), 35 deletions(-) diff --git a/README.md b/README.md index 237f2e6fe3..9c89c97ff4 100644 --- a/README.md +++ b/README.md @@ -39,20 +39,28 @@ drone, or wheeled embodiment. **Programming physical robots is now as simple as programming digital software**: Composable, Modular, Repeatable. Core Features: -- **Transport/Middleware:** DimOS native Python transport supports LCM, DDS, and SHM, plus ROS 2. -- **Robot integrations:** We integrate with the majority of hardware OEMs and are moving fast to cover them - all. Supported and/or immediate roadmap: Unitree Go2, Unitree G1, Unitree B1, Booster K1, DJI Mavic 2, - AGIBOT X2, ABIBOT A2, AGIBOT D1 Max/Pro, OpenARMs, xARM 6/7, AgileX Piper, HighTorque Pantera, and - many more. +- **Navigation:** Production navigation stack for any robot with lidar: SLAM, terrain analysis, collision + avoidance, route planning, exploration. +- **Robot integrations:** We integrate with the majority of hardware OEMs and are moving fast to cover + them all. + + | Category | Platforms | + | --- | --- | + | Quadrupeds | Unitree Go2, Unitree B1, AGIBOT D1 Max/Pro, Dobot Rover | + | Drones | DJI Mavic 2, Holybro x500| + | Humanoids | Unitree G1, Booster K1, AGIBOT X2, ABIBOT A2 | + | Arms | OpenARMs, xARM 6/7, AgileX Piper, HighTorque Pantera | +- **Dashboard:** The DimOS command center gives developers the tooling to debug, visualize, compose, and + test dimensional applications in real-time. Control your robot via waypoint, agent query, keyboard, + VR, more. +- **Modules:** Standalone components (equivalent to ROS nodes) that publish and subscribe to typed + In/Out streams that communicate over DimOS transports. The building blocks of Dimensional. 
- **Agents (experimental):** DimOS agents understand physical space, subscribe to sensor streams, and call **physical** tools. Emergence appears when agents have physical agency. - **MCP (experimental):** Vibecode robots by giving your AI editor (Cursor, Claude Code) MCP access to run physical commands (move forward 1 meter, jump, etc.). -- **Navigation:** Production navigation stack for any robot with lidar: SLAM, terrain analysis, collision - avoidance, route planning, exploration. -- **Dashboard:** The DimOS command center gives developers the tooling to debug, visualize, compose, and test dimensional applications in real-time. Control your robot via waypoint, agent query, keyboard, VR, more. -- **Modules:** Standalone components (equivelent to ROS nodes) that publish & subscribe to typed In/Out streams that communicate over DimOS transports: The building blocks of Dimensional. - **Manipulation (unreleased)** Classical (OMPL, IK, GraspGen), Agentive (TAMP), and VLA-native manipulation stack runs out-of-the-box on any DimOS supported arm embodiment. +- **Transport/Middleware:** DimOS native Python transport supports LCM, DDS, and SHM, plus ROS 2. # Getting Started @@ -136,33 +144,9 @@ humancli # The Dimensional Library -### Blueprints - -Blueprints are how robots are constructed on Dimensional; instructions for how to construct and wire modules. You compose them with -`autoconnect(...)`, which connects streams by `(name, type)` and returns a `ModuleBlueprintSet`. - -Blueprints can be composed, remapped, and have transports overridden if `autoconnect()` fails due to conflicting variable names or `IN[]` and `OUT[]` message types. - -A blueprint example that connects the image stream from a robot to an LLM Agent for reasoning and action execution. 
-```py
-from dimos.core.blueprints import autoconnect
-from dimos.core.transport import LCMTransport
-from dimos.msgs.sensor_msgs import Image
-from dimos.robot.unitree.connection.go2 import go2_connection
-from dimos.agents.agent import llm_agent
-
-blueprint = autoconnect(
-    go2_connection(),
-    llm_agent(),
-).transports({("color_image", Image): LCMTransport("/color_image", Image)})
-
-# Run the blueprint
-blueprint.build().loop()
-```
-
 ### Modules
 
-A simple robot connection module that sends streams of continuous `cmd_vel` to the robot and recieves `color_image` to a simple `Listener` module.
+Modules are subsystems on a robot that operate autonomously and communicate with other subsystems using standardized messages. See below a simple robot connection module that sends streams of continuous `cmd_vel` to the robot and receives `color_image` to a simple `Listener` module.
 
 ```py
 import threading, time, numpy as np
 from dimos.core import In, Module, Out, rpc
 from dimos.core.blueprints import autoconnect
 from dimos.msgs.geometry_msgs import Twist
 from dimos.msgs.sensor_msgs import Image
 from dimos.msgs.sensor_msgs.image_impls.AbstractImage import ImageFormat
@@ -204,7 +188,31 @@ if __name__ == "__main__":
     ).build().loop()
 ```
 
+### Blueprints
+
+Blueprints are how robots are constructed on Dimensional; instructions for how to construct and wire modules. You compose them with
+`autoconnect(...)`, which connects streams by `(name, type)` and returns a `ModuleBlueprintSet`.
+
+Blueprints can be composed, remapped, and have transports overridden if `autoconnect()` fails due to conflicting variable names or `IN[]` and `OUT[]` message types.
+
+A blueprint example that connects the image stream from a robot to an LLM Agent for reasoning and action execution.
+```py +from dimos.core.blueprints import autoconnect +from dimos.core.transport import LCMTransport +from dimos.msgs.sensor_msgs import Image +from dimos.robot.unitree.connection.go2 import go2_connection +from dimos.agents.agent import llm_agent + +blueprint = autoconnect( + go2_connection(), + llm_agent(), +).transports({("color_image", Image): LCMTransport("/color_image", Image)}) + +# Run the blueprint +blueprint.build().loop() +``` + +# Development ```sh GIT_LFS_SKIP_SMUDGE=1 git clone -b dev https://github.com/dimensionalOS/dimos.git From 24d6a16f6a9545069e159a8b62f597b9f1a28820 Mon Sep 17 00:00:00 2001 From: stash Date: Thu, 22 Jan 2026 17:52:53 -0800 Subject: [PATCH 08/10] formatting --- README.md | 19 ++++++++++--------- 1 file changed, 10 insertions(+), 9 deletions(-) diff --git a/README.md b/README.md index 9c89c97ff4..b737dd3d35 100644 --- a/README.md +++ b/README.md @@ -41,15 +41,6 @@ drone, or wheeled embodiment. Core Features: - **Navigation:** Production navigation stack for any robot with lidar: SLAM, terrain analysis, collision avoidance, route planning, exploration. -- **Robot integrations:** We integrate with the majority of hardware OEMs and are moving fast to cover - them all. - - | Category | Platforms | - | --- | --- | - | Quadrupeds | Unitree Go2, Unitree B1, AGIBOT D1 Max/Pro, Dobot Rover | - | Drones | DJI Mavic 2, Holybro x500| - | Humanoids | Unitree G1, Booster K1, AGIBOT X2, ABIBOT A2 | - | Arms | OpenARMs, xARM 6/7, AgileX Piper, HighTorque Pantera | - **Dashboard:** The DimOS command center gives developers the tooling to debug, visualize, compose, and test dimensional applications in real-time. Control your robot via waypoint, agent query, keyboard, VR, more. @@ -61,6 +52,16 @@ Core Features: physical commands (move forward 1 meter, jump, etc.). - **Manipulation (unreleased)** Classical (OMPL, IK, GraspGen), Agentive (TAMP), and VLA-native manipulation stack runs out-of-the-box on any DimOS supported arm embodiment. 
- **Transport/Middleware:** DimOS native Python transport supports LCM, DDS, and SHM, plus ROS 2. +- **Robot integrations:** We integrate with the majority of hardware OEMs and are moving fast to cover + them all. + +Supported and/or immediate roadmap: + | Category | Platforms | + | --- | --- | + | Quadrupeds | Unitree Go2, Unitree B1, AGIBOT D1 Max/Pro, Dobot Rover | + | Drones | DJI Mavic 2, Holybro x500| + | Humanoids | Unitree G1, Booster K1, AGIBOT X2, ABIBOT A2 | + | Arms | OpenARMs, xARM 6/7, AgileX Piper, HighTorque Pantera | # Getting Started From 5467a52afd4d71b0e148274c673a4f13c4957322 Mon Sep 17 00:00:00 2001 From: stash Date: Thu, 22 Jan 2026 17:56:12 -0800 Subject: [PATCH 09/10] table formatting --- README.md | 5 ++--- 1 file changed, 2 insertions(+), 3 deletions(-) diff --git a/README.md b/README.md index b737dd3d35..402d681e38 100644 --- a/README.md +++ b/README.md @@ -53,13 +53,12 @@ Core Features: - **Manipulation (unreleased)** Classical (OMPL, IK, GraspGen), Agentive (TAMP), and VLA-native manipulation stack runs out-of-the-box on any DimOS supported arm embodiment. - **Transport/Middleware:** DimOS native Python transport supports LCM, DDS, and SHM, plus ROS 2. - **Robot integrations:** We integrate with the majority of hardware OEMs and are moving fast to cover - them all. + them all. 
Supported and/or immediate roadmap: -Supported and/or immediate roadmap: | Category | Platforms | | --- | --- | | Quadrupeds | Unitree Go2, Unitree B1, AGIBOT D1 Max/Pro, Dobot Rover | - | Drones | DJI Mavic 2, Holybro x500| + | Drones | DJI Mavic 2, Holybro x500 | | Humanoids | Unitree G1, Booster K1, AGIBOT X2, ABIBOT A2 | | Arms | OpenARMs, xARM 6/7, AgileX Piper, HighTorque Pantera | From 1616503edba3c4368b3ac2076751f84c9540bde7 Mon Sep 17 00:00:00 2001 From: stash Date: Thu, 22 Jan 2026 17:58:52 -0800 Subject: [PATCH 10/10] fixes --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 402d681e38..9ccc7de197 100644 --- a/README.md +++ b/README.md @@ -193,7 +193,7 @@ if __name__ == "__main__": Blueprints are how robots are constructed on Dimensional; instructions for how to construct and wire modules. You compose them with `autoconnect(...)`, which connects streams by `(name, type)` and returns a `ModuleBlueprintSet`. -Blueprints can be composed, remapped, and have transports overridden if `autoconnect()` fails due to conflicting variable names or `IN[]` and `OUT[]` message types. +Blueprints can be composed, remapped, and have transports overridden if `autoconnect()` fails due to conflicting variable names or `In[]` and `Out[]` message types. A blueprint example that connects the image stream from a robot to an LLM Agent for reasoning and action execution. ```py