diff --git a/README.md b/README.md index 5c8ab2128a..ab68e254b8 100644 --- a/README.md +++ b/README.md @@ -45,7 +45,7 @@ Core Features: test dimensional applications in real-time. Control your robot via waypoint, agent query, keyboard, VR, more. - **Modules:** Standalone components (equivalent to ROS nodes) that publish and subscribe to typed - In/Out streams that communicate over DimOS transports. The building blocks of Dimensional. + In/Out streams that communicate over DimOS transports. The primary components of Dimensional. - **Agents (experimental):** DimOS agents understand physical space, subscribe to sensor streams, and call **physical** tools. Emergence appears when agents have physical agency. - **MCP (experimental):** Vibecode robots by giving your AI editor (Cursor, Claude Code) MCP access to run diff --git a/README_installation.md b/README_installation.md deleted file mode 100644 index dc9117798f..0000000000 --- a/README_installation.md +++ /dev/null @@ -1,136 +0,0 @@ -# DimOS - -## Installation - -Clone the repo: - -```bash -git clone -b main --single-branch https://github.com/dimensionalOS/dimos.git -cd dimos -``` - -### System dependencies - -Tested on Ubuntu 22.04/24.04. - -```bash -sudo apt update -sudo apt install git-lfs python3-venv python3-pyaudio portaudio19-dev libturbojpeg0-dev -``` - -### Python dependencies - -Install `uv` by [following their instructions](https://docs.astral.sh/uv/getting-started/installation/) or just run: - -```bash -curl -LsSf https://astral.sh/uv/install.sh | sh -``` - -Install Python dependencies: - -```bash -uv sync -``` - -Depending on what you want to test you might want to install more optional dependencies as well (recommended): - -```bash -uv sync --extra dev --extra cpu --extra sim --extra drone -``` - -### Install Foxglove Studio (robot visualization and control) - -> **Note:** This will be obsolete once we finish our migration to open source [Rerun](https://rerun.io/). 
- -Download and install [Foxglove Studio](https://foxglove.dev/download): - -```bash -wget https://get.foxglove.dev/desktop/latest/foxglove-studio-latest-linux-amd64.deb -sudo apt install ./foxglove-studio-*.deb -``` - -[Register an account](https://app.foxglove.dev/signup) to use it. - -Open Foxglove Studio: - -```bash -foxglove-studio -``` - -To connect and load our dashboard: - -1. Click on "Open connection" -2. In the popup window, leave the WebSocket URL as `ws://localhost:8765` and click "Open" -3. In the top right, click on the "Default" dropdown, then "Import from file..." -4. Navigate to the `dimos` repo and select `assets/foxglove_dashboards/unitree.json` - -### Test the install - -Run the Python tests: - -```bash -uv run pytest dimos -``` - -They should all pass in about 3 minutes. - -### Test a robot replay - -Run the system by playing back recorded data from a robot (the replay data is automatically downloaded via Git LFS): - -```bash -uv run dimos --replay run unitree-go2-basic -``` - -You can visualize the robot data in Foxglove Studio. - -### Run a simulation - -```bash -uv run dimos --simulation run unitree-go2-basic -``` - -This will open a MuJoCo simulation window. You can also visualize data in Foxglove. - -If you want to also teleoperate the simulated robot run: - -```bash -uv run dimos --simulation run unitree-go2-basic --extra-module keyboard_teleop -``` - -This will also open a Keyboard Teleop window. Focus on the window and use WASD to control the robot. - -### Command center - -You can also control the robot from the `command-center` extension to Foxglove. - -First, pull the LFS file: - -```bash -git lfs pull --include="assets/dimensional.command-center-extension-0.0.1.foxe" -``` - -To install it, drag that file over the Foxglove Studio window. The extension will be installed automatically. Then, click on the "Add panel" icon on the top right and add "command-center". 
- -You can now click on the map to give it a travel goal, or click on "Start Keyboard Control" to teleoperate it. - -### Using `dimos` in your code - -If you want to use dimos in your own project (not the cloned repo), you can install it as a dependency: - -```bash -uv add dimos -``` - -Note, a few dependencies do not have PyPI packages and need to be installed from their Git repositories. These are only required for specific features: - -- **CLIP** and **detectron2**: Required for the Detic open-vocabulary object detector -- **contact_graspnet_pytorch**: Required for robotic grasp prediction - -You can install them with: - -```bash -uv add git+https://github.com/openai/CLIP.git -uv add git+https://github.com/dimensionalOS/contact_graspnet_pytorch.git -uv add git+https://github.com/facebookresearch/detectron2.git -``` diff --git a/bin/doclinks b/bin/doclinks new file mode 100755 index 0000000000..5dee1c69b0 --- /dev/null +++ b/bin/doclinks @@ -0,0 +1,3 @@ +#!/usr/bin/env bash +REPO_ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." 
&& pwd)" +python "$REPO_ROOT/dimos/utils/docs/doclinks.py" "$@" diff --git a/dimos/hardware/README.md b/dimos/hardware/README.md deleted file mode 100644 index 2587e3595d..0000000000 --- a/dimos/hardware/README.md +++ /dev/null @@ -1,29 +0,0 @@ -# Hardware - -## Remote camera stream with timestamps - -### Required Ubuntu packages: - -```bash -sudo apt install gstreamer1.0-tools gstreamer1.0-plugins-base gstreamer1.0-plugins-good gstreamer1.0-plugins-bad gstreamer1.0-plugins-ugly gstreamer1.0-libav python3-gi python3-gi-cairo gir1.2-gstreamer-1.0 gir1.2-gst-plugins-base-1.0 v4l-utils gstreamer1.0-vaapi -``` - -### Usage - -On sender machine (with the camera): - -```bash -python3 dimos/hardware/gstreamer_sender.py --device /dev/video0 --host 0.0.0.0 --port 5000 -``` - -If it's a stereo camera and you only want to send the left side (the left camera): - -```bash -python3 dimos/hardware/gstreamer_sender.py --device /dev/video0 --host 0.0.0.0 --port 5000 --single-camera -``` - -On receiver machine: - -```bash -python3 dimos/hardware/gstreamer_camera_test_script.py --host 10.0.0.227 --port 5000 -``` diff --git a/dimos/hardware/sensors/camera/gstreamer/readme.md b/dimos/hardware/sensors/camera/gstreamer/readme.md deleted file mode 100644 index 29198aea24..0000000000 --- a/dimos/hardware/sensors/camera/gstreamer/readme.md +++ /dev/null @@ -1 +0,0 @@ -This gstreamer stuff is obsoleted but could be adopted as an alternative hardware for camera module if needed diff --git a/dimos/simulation/README.md b/dimos/simulation/README.md deleted file mode 100644 index 95d8b4cda1..0000000000 --- a/dimos/simulation/README.md +++ /dev/null @@ -1,98 +0,0 @@ -# Dimensional Streaming Setup - -This guide explains how to set up and run the Isaac Sim and Genesis streaming functionality via Docker. The setup is tested on Ubuntu 22.04 (recommended). - -## Prerequisites - -1. 
**NVIDIA Driver** - - NVIDIA Driver 535 must be installed - - Check your driver: `nvidia-smi` - - If not installed: - ```bash - sudo apt-get update - sudo apt install build-essential -y - sudo apt-get install -y nvidia-driver-535 - sudo reboot - ``` - -2. **CUDA Toolkit** - ```bash - sudo apt install -y nvidia-cuda-toolkit - ``` - -3. **Docker** - ```bash - # Install Docker - curl -fsSL https://get.docker.com -o get-docker.sh - sudo sh get-docker.sh - - # Post-install steps - sudo groupadd docker - sudo usermod -aG docker $USER - newgrp docker - ``` - -4. **NVIDIA Container Toolkit** - ```bash - # Add NVIDIA Container Toolkit repository - curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg - curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \ - sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \ - sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list - sudo apt-get update - - # Install the toolkit - sudo apt-get install -y nvidia-container-toolkit - sudo systemctl restart docker - - # Configure runtime - sudo nvidia-ctk runtime configure --runtime=docker - sudo systemctl restart docker - - # Verify installation - sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi - ``` - -5. **Pull Isaac Sim Image** - ```bash - sudo docker pull nvcr.io/nvidia/isaac-sim:4.2.0 - ``` - -6. **TO DO: Add ROS2 websocket server for client-side streaming** - -## Running the Streaming Example - -1. **Navigate to the docker/simulation directory** - ```bash - cd docker/simulation - ``` - -2. 
**Build and run with docker-compose** - For Isaac Sim: - ```bash - docker compose -f isaac/docker-compose.yml build - docker compose -f isaac/docker-compose.yml up - - ``` - - For Genesis: - ```bash - docker compose -f genesis/docker-compose.yml build - docker compose -f genesis/docker-compose.yml up - - ``` - -This will: -- Build the dimos_simulator image with ROS2 and required dependencies -- Start the MediaMTX RTSP server -- Run the test streaming example from either: - - `/tests/isaacsim/stream_camera.py` for Isaac Sim - - `/tests/genesissim/stream_camera.py` for Genesis - -## Viewing the Stream - -The camera stream will be available at: - -- RTSP: `rtsp://localhost:8554/stream` or `rtsp://:8554/stream` - -You can view it using VLC or any RTSP-capable player. diff --git a/dimos/utils/docs/test_doclinks.py b/dimos/utils/docs/test_doclinks.py index 7313ec3676..016cc30113 100644 --- a/dimos/utils/docs/test_doclinks.py +++ b/dimos/utils/docs/test_doclinks.py @@ -277,7 +277,7 @@ def test_indexes_by_stem(self, doc_index): """Should index docs by lowercase stem.""" assert "configuration" in doc_index assert "modules" in doc_index - assert "development" in doc_index + assert "blueprints" in doc_index def test_case_insensitive(self, doc_index): """Should use lowercase keys.""" @@ -349,8 +349,8 @@ def test_doc_link_github_mode(self, file_index, doc_index): def test_doc_link_relative_mode(self, file_index, doc_index): """Should generate relative paths for doc links.""" - content = "See [Development](.md)" - doc_path = REPO_ROOT / "docs/concepts/test.md" + content = "See [Blueprints](.md)" + doc_path = REPO_ROOT / "docs/api/test.md" new_content, _changes, errors = process_markdown( content, diff --git a/dimos/web/README.md b/dimos/web/README.md deleted file mode 100644 index 28f418bb55..0000000000 --- a/dimos/web/README.md +++ /dev/null @@ -1,126 +0,0 @@ -# DimOS Robot Web Interface - -A streamlined interface for controlling and interacting with robots through DimOS. 
- -## Setup - -First, create an `.env` file in the root dimos directory with your configuration: - -```bash -# Example .env file -OPENAI_API_KEY=sk-your-openai-api-key -ROBOT_IP=192.168.x.x -CONN_TYPE=webrtc -WEBRTC_SERVER_HOST=0.0.0.0 -WEBRTC_SERVER_PORT=9991 -DISPLAY=:0 -``` - -## Unitree Go2 Example - -Running a full stack for Unitree Go2 requires three components: - -### 1. Start ROS2 Robot Driver - -```bash -# Source ROS environment -source /opt/ros/humble/setup.bash -source ~/your_ros_workspace/install/setup.bash - -# Launch robot driver -ros2 launch go2_robot_sdk robot.launch.py -``` - -### 2. Start DimOS Backend - -```bash -# In a new terminal, source your Python environment -source venv/bin/activate # Or your environment - -# Install requirements -pip install -r requirements.txt - -# Source ROS workspace (needed for robot communication) -source /opt/ros/humble/setup.bash -source ~/your_ros_workspace/install/setup.bash - -# Run the server with Robot() and Agent() initialization -python tests/test_unitree_agent_queries_fastapi.py -``` - -### 3. Start Frontend - -**Install yarn if not already installed** - -```bash -npm install -g yarn -``` - -**Then install dependencies and start the development server** - -```bash -# In a new terminal -cd dimos/web/dimos-interface - -# Install dependencies (first time only) -yarn install - -# Start development server -yarn dev -``` - -The frontend will be available at http://localhost:3000 - -## Using the Interface - -1. Access the web terminal at http://localhost:3000 -2. 
Type commands to control your robot: - - `unitree command ` - Send a command to the robot - - `unitree status` - Check connection status - - `unitree start_stream` - Start the video stream - - `unitree stop_stream` - Stop the video stream - -## Integrating DimOS with the DimOS-interface - -### Unitree Go2 Example - -```python -from dimos.agents_deprecated.agent import OpenAIAgent -from dimos.robot.unitree.unitree_go2 import UnitreeGo2 -from dimos.robot.unitree.unitree_skills import MyUnitreeSkills -from dimos.web.robot_web_interface import RobotWebInterface - -robot_ip = os.getenv("ROBOT_IP") - -# Initialize robot -logger.info("Initializing Unitree Robot") -robot = UnitreeGo2(ip=robot_ip, - connection_method=connection_method, - output_dir=output_dir) - -# Set up video stream -logger.info("Starting video stream") -video_stream = robot.get_ros_video_stream() - -# Create FastAPI server with video stream -logger.info("Initializing FastAPI server") -streams = {"unitree_video": video_stream} -web_interface = RobotWebInterface(port=5555, **streams) - -# Initialize agent with robot skills -skills_instance = MyUnitreeSkills(robot=robot) - -agent = OpenAIAgent( - dev_name="UnitreeQueryPerceptionAgent", - input_query_stream=web_interface.query_stream, - output_dir=output_dir, - skills=skills_instance, -) - -web_interface.run() -``` - -## Architecture - -- **Backend**: FastAPI server runs on port 5555 -- **Frontend**: Web application runs on port 3000 diff --git a/dimos/web/command-center-extension/README.md b/dimos/web/command-center-extension/README.md deleted file mode 100644 index efee4ec11d..0000000000 --- a/dimos/web/command-center-extension/README.md +++ /dev/null @@ -1,17 +0,0 @@ -# command-center-extension - -This is a Foxglove extension for visualizing robot data and controlling the robot. See `dimos/web/websocket_vis/README.md` for how to use the module in your robot. - -## Build and use - -Install the Foxglove Studio desktop application. 
- -Install the Node dependencies: - - npm install - -Build the package and install it into Foxglove: - - npm run build && npm run local-install - -To add the panel, go to Foxglove Studio, click on the "Add panel" icon on the top right and select "command-center [local]". diff --git a/docs/agents/docs/assets/pikchr_basic.svg b/docs/agents/docs/assets/pikchr_basic.svg deleted file mode 100644 index 5410d35577..0000000000 --- a/docs/agents/docs/assets/pikchr_basic.svg +++ /dev/null @@ -1,12 +0,0 @@ - - -Step 1 - - - -Step 2 - - - -Step 3 - diff --git a/docs/agents/docs/assets/pikchr_sizing.svg b/docs/agents/docs/assets/pikchr_sizing.svg deleted file mode 100644 index 3a0c433cb1..0000000000 --- a/docs/agents/docs/assets/pikchr_sizing.svg +++ /dev/null @@ -1,13 +0,0 @@ - - -short - - - -.subscribe() - - - -two lines -of text - diff --git a/docs/agents/docs/doclinks.md b/docs/agents/docs/doclinks.md deleted file mode 100644 index 07facdcbe4..0000000000 --- a/docs/agents/docs/doclinks.md +++ /dev/null @@ -1,21 +0,0 @@ -When writing or editing markdown documentation, use the `doclinks` tool to resolve file references. - -Full documentation if needed: [`utils/docs/doclinks.md`](/dimos/utils/docs/doclinks.md) - -## Syntax - - -| Pattern | Example | -|-------------|-----------------------------------------------------| -| Code file | `[`service/spec.py`]()` → resolves path | -| With symbol | `Configurable` in `[`spec.py`]()` → adds `#L` | -| Doc link | `[Configuration](.md)` → resolves to doc | - - -## Usage - -```bash -doclinks docs/guide.md # single file -doclinks docs/ # directory -doclinks --dry-run ... # preview only -``` diff --git a/docs/api/README.md b/docs/api/README.md new file mode 100644 index 0000000000..1a14f83640 --- /dev/null +++ b/docs/api/README.md @@ -0,0 +1,11 @@ +# API + +Note: Please see [Concepts](/docs/concepts/README.md) before diving into these API docs. 
These docs are designed to be technical: they cover specific tooling and give examples of using those tools (argument types) rather than explaining when to use them. + + +## Table of Contents + +- [Adding configs to your modules](./configuration.md) +- [Controlling how visualizations are rendered](./visualization.md) +- [Understanding sensor streams](./sensor_streams/README.md) +- [Creating coordinate frames and transforms for your hardware](./transforms.md) diff --git a/docs/api/configuration.md b/docs/api/configuration.md index a9c8de0268..162c343ffa 100644 --- a/docs/api/configuration.md +++ b/docs/api/configuration.md @@ -45,7 +45,7 @@ Error: Config.__init__() got an unexpected keyword argument 'something' # Configurable Modules -[Modules]() inherit from `Configurable`, so all of the above applies. Module configs should inherit from `ModuleConfig` ([`core/module.py`](/dimos/core/module.py#L40)), which includes shared configuration for all modules like transport protocols, frame_ids, etc. +[Modules](/docs/concepts/modules.md) inherit from `Configurable`, so all of the above applies. Module configs should inherit from `ModuleConfig` ([`core/module.py`](/dimos/core/module.py#L40)), which includes shared configuration for all modules like transport protocols, frame IDs, etc. ```python from dataclasses import dataclass diff --git a/docs/api/sensor_streams/index.md b/docs/api/sensor_streams/README.md similarity index 100% rename from docs/api/sensor_streams/index.md rename to docs/api/sensor_streams/README.md diff --git a/docs/api/sensor_streams/quality_filter.md b/docs/api/sensor_streams/quality_filter.md index 26d40733fd..01d3b9367d 100644 --- a/docs/api/sensor_streams/quality_filter.md +++ b/docs/api/sensor_streams/quality_filter.md @@ -50,7 +50,7 @@ Qualities: [0.9] For camera streams, we provide `sharpness_barrier` which uses the image's sharpness score. -Let's use real camera data from the Unitree Go2 robot to demonstrate.
We use the [Sensor Replay](/docs/old/testing_stream_reply.md) toolkit, which provides access to recorded robot data: +Let's use real camera data from the Unitree Go2 robot to demonstrate. We use the [Sensor Storage & Replay](/docs/api/sensor_streams/storage_replay.md) toolkit, which provides access to recorded robot data: ```python session=qb from dimos.utils.testing import TimedSensorReplay diff --git a/docs/api/sensor_streams/storage_replay.md b/docs/api/sensor_streams/storage_replay.md index 66e913b197..1a31591736 100644 --- a/docs/api/sensor_streams/storage_replay.md +++ b/docs/api/sensor_streams/storage_replay.md @@ -202,7 +202,7 @@ Each pickle file contains a tuple `(timestamp, data)`: Files are numbered sequentially: `000.pickle`, `001.pickle`, etc. -Recordings are stored in the `data/` directory. See [Data Loading](/docs/data.md) for how data storage works, including Git LFS handling for large datasets. +Recordings are stored in the `data/` directory. See [Data Loading](/docs/development/large_file_management.md) for how data storage works, including Git LFS handling for large datasets. ## API Reference diff --git a/docs/api/sensor_streams/temporal_alignment.md b/docs/api/sensor_streams/temporal_alignment.md index b552ac54cc..66230c9d54 100644 --- a/docs/api/sensor_streams/temporal_alignment.md +++ b/docs/api/sensor_streams/temporal_alignment.md @@ -34,7 +34,7 @@ Below we set up replay of real camera and lidar data from the Unitree Go2 robot.
Stream Setup -You can read more about [sensor storage here](storage_replay.md) and [LFS data store here](/docs/data.md). +You can read more about [sensor storage here](storage_replay.md) and [LFS data storage here](/docs/development/large_file_management.md). ```python session=align no-result from reactivex import Subject @@ -70,7 +70,7 @@ lidar_stream = lidar_replay.stream(from_timestamp=seek_ts, duration=2.0).pipe(
-Streams would normally come from an actual robot into your module via `IN` inputs. [`detection/module3D.py`](/dimos/perception/detection/module3D.py#L11) is a good example of this. +Streams would normally come from an actual robot into your module via `In` inputs. [`detection/module3D.py`](/dimos/perception/detection/module3D.py#L11) is a good example of this. Assume we have them. Let's align them. diff --git a/docs/VIEWER_BACKENDS.md b/docs/api/visualization.md similarity index 99% rename from docs/VIEWER_BACKENDS.md rename to docs/api/visualization.md index 51fa20d655..08919b7a73 100644 --- a/docs/VIEWER_BACKENDS.md +++ b/docs/api/visualization.md @@ -54,7 +54,7 @@ VIEWER_BACKEND=foxglove dimos run unitree-go2 - Foxglove bridge on ws://localhost:8765 - No Rerun (saves resources) - Better performance with larger maps/higher resolution -- Open layout: `dimos/assets/foxglove_dashboards/go2.json` +- Open layout: `assets/foxglove_dashboards/old/foxglove_unitree_lcm_dashboard.json` --- diff --git a/docs/concepts/README.md b/docs/concepts/README.md new file mode 100644 index 0000000000..0f82783099 --- /dev/null +++ b/docs/concepts/README.md @@ -0,0 +1,12 @@ +# Concepts + +This page explains general concepts. For specific API docs, see [the API reference](/docs/api/README.md). + +## Table of Contents + +- [Modules](/docs/concepts/modules.md): The primary units of deployment in DimOS; modules run in parallel and are Python classes. +- [Streams](/docs/api/sensor_streams/README.md): How modules communicate; a pub/sub system. +- [Blueprints](/docs/concepts/blueprints.md): A way to group modules together and define their connections to each other. +- [RPC](/docs/concepts/blueprints.md#calling-the-methods-of-other-modules): How one module can call a method on another module (arguments get serialized to JSON-like binary data). +- [Skills](/docs/concepts/blueprints.md#defining-skills): An RPC function that can also be called by an AI agent (a tool for an AI).
+- Agents: AI that has an objective, access to stream data, and is capable of calling skills as tools. diff --git a/dimos/core/README_BLUEPRINTS.md b/docs/concepts/blueprints.md similarity index 69% rename from dimos/core/README_BLUEPRINTS.md rename to docs/concepts/blueprints.md index 0a3e2ceaf5..686aff60d5 100644 --- a/dimos/core/README_BLUEPRINTS.md +++ b/docs/concepts/blueprints.md @@ -6,19 +6,26 @@ You don't typically want to run a single module, so multiple blueprints are hand You create a `ModuleBlueprintSet` from a single module (say `ConnectionModule`) with: -```python +```python session=blueprint-ex1 +from dimos.core.blueprints import create_module_blueprint +from dimos.core import Module, rpc + +class ConnectionModule(Module): + def __init__(self, arg1, arg2, kwarg='value') -> None: + super().__init__() + blueprint = create_module_blueprint(ConnectionModule, 'arg1', 'arg2', kwarg='value') ``` -But the same thing can be acomplished more succinctly as: +But the same thing can be accomplished more succinctly as: -```python +```python session=blueprint-ex1 connection = ConnectionModule.blueprint ``` Now you can create the blueprint with: -```python +```python session=blueprint-ex1 blueprint = connection('arg1', 'arg2', kwarg='value') ``` @@ -26,7 +33,23 @@ blueprint = connection('arg1', 'arg2', kwarg='value') You can link multiple blueprints together with `autoconnect`: -```python +```python session=blueprint-ex1 +from dimos.core.blueprints import autoconnect + +class Module1(Module): + def __init__(self, arg1) -> None: + super().__init__() + +class Module2(Module): + ... + +class Module3(Module): + ... + +module1 = Module1.blueprint +module2 = Module2.blueprint +module3 = Module3.blueprint + blueprint = autoconnect( module1(), module2(), @@ -36,7 +59,16 @@ blueprint = autoconnect( `blueprint` itself is a `ModuleBlueprintSet` so you can link it with other modules: -```python +```python session=blueprint-ex1 +class Module4(Module): + ... 
+ +class Module5(Module): + ... + +module4 = Module4.blueprint +module5 = Module5.blueprint + expanded_blueprint = autoconnect( blueprint, module4(), @@ -50,11 +82,11 @@ Blueprints are frozen data classes, and `autoconnect()` always constructs an exp If the same module appears multiple times in `autoconnect`, the **later blueprint wins** and overrides earlier ones: -```python +```python session=blueprint-ex1 blueprint = autoconnect( - module_a(arg1=1), - module_b(), - module_a(arg1=2), # This one is used, the first is discarded + module1(arg1=1), + module2(), + module1(arg1=2), # This one is used, the first is discarded ) ``` @@ -64,14 +96,20 @@ This is so you can "inherit" from one blueprint but override something you need Imagine you have this code: -```python +```python session=blueprint-ex1 +from functools import partial + +from dimos.core.blueprints import create_module_blueprint, autoconnect +from dimos.core import Module, rpc, Out, In +from dimos.msgs.sensor_msgs import Image + class ModuleA(Module): image: Out[Image] - start_explore: Out[Bool] + start_explore: Out[bool] class ModuleB(Module): image: In[Image] - begin_explore: In[Bool] + begin_explore: In[bool] module_a = partial(create_module_blueprint, ModuleA) module_b = partial(create_module_blueprint, ModuleB) @@ -95,24 +133,37 @@ By default `LCMTransport` is used if the object supports `lcm_encode`. If it doe You can override transports with the `transports` method. It returns a new blueprint in which the override is set. -```python -blueprint = autoconnect(...) -expanded_blueprint = autoconnect(blueprint, ...) 
-blueprint = blueprint.transports({ +```python session=blueprint-ex1 +from dimos.core.transport import pSHMTransport, pLCMTransport + +base_blueprint = autoconnect( + module1(arg1=1), + module2(), +) +expanded_blueprint = autoconnect( + base_blueprint, + module4(), + module5(), +) +base_blueprint = base_blueprint.transports({ ("image", Image): pSHMTransport( - "/go2/color_image", default_capacity=DEFAULT_CAPACITY_COLOR_IMAGE + "/go2/color_image", default_capacity=1920 * 1080 * 3, # 1920x1080 frame x 3 (RGB) x uint8 ), - ("start_explore", Bool): pLCMTransport(), + ("start_explore", bool): pLCMTransport("/start_explore"), }) ``` -Note: `expanded_blueprint` does not get the transport overrides because it's created from the initial value of `blueprint`, not the second. +Note: `expanded_blueprint` does not get the transport overrides because it's created from the initial value of `base_blueprint`, not the second. ## Remapping connections Sometimes you need to rename a connection to match what other modules expect. You can use `remappings` to rename module connections: -```python +```python session=blueprint-ex2 +from dimos.core.blueprints import autoconnect +from dimos.core import Module, rpc, Out, In +from dimos.msgs.sensor_msgs import Image + class ConnectionModule(Module): color_image: Out[Image] # Outputs on 'color_image' @@ -139,12 +190,11 @@ After remapping: If you want to override the topic, you still have to do it manually: -```python -blueprint -.remappings([ +```python session=blueprint-ex2 +from dimos.core.transport import LCMTransport +blueprint.remappings([ (ConnectionModule, 'color_image', 'rgb_image'), -]) -.transports({ +]).transports({ ("rgb_image", Image): LCMTransport("/custom/rgb/image", Image), }) ``` @@ -153,7 +203,10 @@ blueprint Each module can optionally take a `global_config` option in `__init__`. 
E.g.: -```python +```python session=blueprint-ex3 +from dimos.core import Module, rpc +from dimos.core.global_config import GlobalConfig + class ModuleA(Module): def __init__(self, global_config: GlobalConfig | None = None): @@ -162,15 +215,17 @@ class ModuleA(Module): The config is normally taken from .env or from environment variables. But you can specifically override the values for a specific blueprint: -```python -blueprint = blueprint.global_config(n_dask_workers=8) +```python session=blueprint-ex3 +blueprint = ModuleA.blueprint().global_config(n_dask_workers=8) ``` ## Calling the methods of other modules Imagine you have this code: -```python +```python session=blueprint-ex3 +from dimos.core import Module, rpc + class ModuleA(Module): @rpc @@ -186,7 +241,9 @@ And you want to call `ModuleA.get_time` in `ModuleB.request_the_time`. To do this, you can request a link to the method you want to call in `rpc_calls`. Calling `get_time_rpc` will call the original `ModuleA.get_time`. -```python +```python session=blueprint-ex3 +from dimos.core import Module, rpc + class ModuleB(Module): rpc_calls: list[str] = [ "ModuleA.get_time", @@ -199,8 +256,10 @@ class ModuleB(Module): You can also request multiple methods at a time: -```python -method1_rpc, method2_rpc = self.get_rpc_calls("ModuleX.m1", "ModuleX.m2") +```python session=blueprint-ex3 +class ModuleB(Module): + def request_the_time(self) -> None: + method1_rpc, method2_rpc = self.get_rpc_calls("ModuleX.m1", "ModuleX.m2") ``` ## Alternative RPC calls There is an alternative way of receiving RPC methods. It is useful when you want You can use it by defining a method like `set_<ModuleName>_<method_name>`: -```python +```python session=blueprint-ex3 +from dimos.core import Module, rpc +from dimos.core.rpc_client import RpcCall + class ModuleB(Module): @rpc # Note that it has to be an rpc method.
def set_ModuleA_get_time(self, rpc_call: RpcCall) -> None: @@ -228,12 +290,16 @@ In the previous examples, you can only call methods in a module called `ModuleA` You can do so by extracting the common interface as an `ABC` (abstract base class) and linking to the `ABC` instead one particular class. -```python +```python session=blueprint-ex3 +from abc import ABC, abstractmethod +from dimos.core.blueprints import autoconnect +from dimos.core import Module, rpc + class TimeInterface(ABC): @abstractmethod def get_time(self): ... -class ProperTime(TimeInterface): +class ProperTime(Module, TimeInterface): def get_time(self): return "13:00" @@ -254,7 +320,7 @@ class ModuleB(Module): The actual method that you get in `get_time_rpc` depends on which module is deployed. If you deploy `ProperTime`, you get `ProperTime.get_time`: -```python +```python session=blueprint-ex3 blueprint = autoconnect( ProperTime.blueprint(), # get_rpc_calls("TimeInterface.get_time") returns ProperTime.get_time @@ -268,7 +334,13 @@ If both are deployed, the blueprint will throw an error because it's ambiguous. Skills have to be registered with `AgentSpec.register_skills(self)`. 
-```python +```python session=blueprint-ex4 +from dimos.core import Module, rpc +from dimos.core.skill_module import SkillModule +from dimos.protocol.skill.skill import skill +from dimos.core.rpc_client import RpcCall +from dimos.core.global_config import GlobalConfig + class SomeSkill(Module): @skill @@ -290,7 +362,10 @@ class SomeSkill(Module): Or, you can avoid all of this by inheriting from `SkillModule` which does the above automatically: -```python +```python session=blueprint-ex4 +from dimos.core.skill_module import SkillModule +from dimos.protocol.skill.skill import skill + class SomeSkill(SkillModule): @skill @@ -302,8 +377,8 @@ class SomeSkill(SkillModule): All you have to do to build a blueprint is call: -```python -module_coordinator = blueprint.build(global_config=config) +```python session=blueprint-ex4 +module_coordinator = SomeSkill.blueprint().build(global_config=GlobalConfig()) ``` This returns a `ModuleCoordinator` instance that manages all deployed modules. @@ -312,7 +387,7 @@ This returns a `ModuleCoordinator` instance that manages all deployed modules. You can block the thread until it exits with: -```python +```python session=blueprint-ex4 module_coordinator.loop() ``` diff --git a/docs/concepts/modules.md b/docs/concepts/modules.md index ee7fbaf2c9..344efa0774 100644 --- a/docs/concepts/modules.md +++ b/docs/concepts/modules.md @@ -1,5 +1,5 @@ -# Dimos Modules +# DimOS Modules Modules are subsystems on a robot that operate autonomously and communicate with other subsystems using standardized messages. 
@@ -47,8 +47,8 @@ print(CameraModule.io()) ├─ color_image: Image ├─ camera_info: CameraInfo │ - ├─ RPC set_transport(stream_name: str, transport: Transport) -> bool ├─ RPC start() + ├─ RPC stop() │ ├─ Skill video_stream (stream=passive, reducer=latest_reducer, output=image) ``` @@ -58,9 +58,9 @@ We can see that the camera module outputs two streams: - `color_image` with [sensor_msgs.Image](https://docs.ros.org/en/melodic/api/sensor_msgs/html/msg/Image.html) type - `camera_info` with [sensor_msgs.CameraInfo](https://docs.ros.org/en/melodic/api/sensor_msgs/html/msg/CameraInfo.html) type -It offers two RPC calls: `start()` and `stop()`. +It offers two RPC calls: `start()` and `stop()` (lifecycle methods). -As well as an agentic [Skill](skills.md) called `video_stream` (more about this later, in [Skills Tutorial](skills.md)). +It also exposes an agentic [skill](/docs/concepts/blueprints.md#defining-skills) called `video_stream` (more on skills in the Blueprints guide). We can start this module and explore the output of its streams in real time (this will use your webcam). @@ -120,7 +120,7 @@ print(Detection2DModule.io()) ├─ RPC stop() -> None ``` -TODO: add easy way to print config + Looks like the detector just needs an image input and outputs some sort of detection and annotation messages. Let's connect it to a camera. @@ -174,3 +174,6 @@ to_svg(agentic, "assets/go2_agentic.svg") ![output](assets/go2_agentic.svg) + + +To see more information on how to use Blueprints, see [Blueprints](/docs/concepts/blueprints.md). 
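To make the stream wiring in the modules doc concrete, here is a toy, plain-Python sketch of the publish/subscribe pattern it describes. The `Out`, `ToyCamera`, and `ToyDetector` names are hypothetical illustrations of the concept, not the real DimOS API:

```python
from typing import Callable, Generic, TypeVar

T = TypeVar("T")

class Out(Generic[T]):
    """Toy output stream: fans messages out to subscribed callbacks."""
    def __init__(self) -> None:
        self._subs: list[Callable[[T], None]] = []

    def subscribe(self, cb: Callable[[T], None]) -> None:
        self._subs.append(cb)

    def publish(self, msg: T) -> None:
        for cb in self._subs:
            cb(msg)

class ToyCamera:
    """Stands in for a module that exposes a `color_image` output stream."""
    def __init__(self) -> None:
        self.color_image: Out[bytes] = Out()

class ToyDetector:
    """Stands in for a module that consumes an image input stream."""
    def __init__(self) -> None:
        self.seen: list[bytes] = []

    def on_image(self, img: bytes) -> None:
        self.seen.append(img)

camera = ToyCamera()
detector = ToyDetector()

# This manual subscribe call is what autoconnect() automates:
# matching a typed Out stream to the In stream with the same name/type.
camera.color_image.subscribe(detector.on_image)
camera.color_image.publish(b"frame-0")
print(detector.seen)  # [b'frame-0']
```

In the real system the transport between the two ends is pluggable (LCM, shared memory, etc.), but the subscribe/publish shape is the same.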
diff --git a/docs/concepts/transports.md b/docs/concepts/transports.md index 62279b6baf..6c9e5fca99 100644 --- a/docs/concepts/transports.md +++ b/docs/concepts/transports.md @@ -18,11 +18,17 @@ print(inspect.getsource(PubSub.publish)) print(inspect.getsource(PubSub.subscribe)) ``` - ``` -Session process exited unexpectedly: -/home/lesh/coding/dimos/.venv/bin/python3: No module named md_babel_py.session_server - +@abstractmethod +def publish(self, topic: TopicT, message: MsgT) -> None: + """Publish a message to a topic.""" + ... +@abstractmethod +def subscribe( + self, topic: TopicT, callback: Callable[[MsgT, TopicT], None] +) -> Callable[[], None]: + """Subscribe to a topic with a callback. returns unsubscribe function""" + ... ``` Key points: diff --git a/docs/development.md b/docs/development.md deleted file mode 100644 index 0109e42768..0000000000 --- a/docs/development.md +++ /dev/null @@ -1,180 +0,0 @@ -# Development Environment Guide - -## Approach - -We optimise for flexibility—if your favourite editor is **notepad.exe**, you’re good to go. Everything below is tooling for convenience. - ---- - -## Dev Containers - -Dev containers give us a reproducible, container-based workspace identical to CI. - -### Why use them? - -* Consistent toolchain across all OSs. -* Unified formatting, linting and type-checking. -* Zero host-level dependencies (apart from Docker). - -### IDE quick start - -Install the *Dev Containers* plug-in for VS Code, Cursor, or your IDE of choice (you’ll likely be prompted automatically when you open our repo). - -### Shell only quick start - -The terminal within your IDE should use devcontainer transparently given you installed the plugin, but in case you want to run our shell without an IDE, you can use `./bin/dev`. -(It depends on npm/node being installed.) - -```sh -./bin/dev -devcontainer CLI (https://github.com/devcontainers/cli) not found. Install into repo root? 
(y/n): y - -added 1 package, and audited 2 packages in 8s -found 0 vulnerabilities - -[1 ms] @devcontainers/cli 0.76.0. Node.js v20.19.0. linux 6.12.27-amd64 x64. -[4838 ms] Start: Run: docker start f0355b6574d9bd277d6eb613e1dc32e3bc18e7493e5b170e335d0e403578bcdb -[5299 ms] f0355b6574d9bd277d6eb613e1dc32e3bc18e7493e5b170e335d0e403578bcdb -{"outcome":"success","containerId":"f0355b6574d9bd277d6eb613e1dc32e3bc18e7493e5b170e335d0e403578bcdb","remoteUser":"root","remoteWorkspaceFolder":"/workspaces/dimos"} - - ██████╗ ██╗███╗ ███╗███████╗███╗ ██╗███████╗██╗ ██████╗ ███╗ ██╗ █████╗ ██╗ - ██╔══██╗██║████╗ ████║██╔════╝████╗ ██║██╔════╝██║██╔═══██╗████╗ ██║██╔══██╗██║ - ██║ ██║██║██╔████╔██║█████╗ ██╔██╗ ██║███████╗██║██║ ██║██╔██╗ ██║███████║██║ - ██║ ██║██║██║╚██╔╝██║██╔══╝ ██║╚██╗██║╚════██║██║██║ ██║██║╚██╗██║██╔══██║██║ - ██████╔╝██║██║ ╚═╝ ██║███████╗██║ ╚████║███████║██║╚██████╔╝██║ ╚████║██║ ██║███████╗ - ╚═════╝ ╚═╝╚═╝ ╚═╝╚══════╝╚═╝ ╚═══╝╚══════╝╚═╝ ╚═════╝ ╚═╝ ╚═══╝╚═╝ ╚═╝╚══════╝ - - v_unknown:unknown | Wed May 28 09:23:33 PM UTC 2025 - -root@dimos:/workspaces/dimos # -``` - -The script will: - -* Offer to npm install `@devcontainers/cli` locally (if not available globally) on first run. -* Pull `ghcr.io/dimensionalos/dev:dev` if not present (external contributors: we plan to mirror to Docker Hub). - -You’ll land in the workspace as **root** with all project tooling available. - -## Pre-Commit Hooks - -We use [pre-commit](https://pre-commit.com) (config in `.pre-commit-config.yaml`) to enforce formatting, licence headers, EOLs, LFS checks, etc. Hooks run in **milliseconds**. -Hooks also run in CI. Any auto-fixes are committed back to your PR, so local installation is optional — but gives faster feedback. 
- -```sh -CRLF end-lines checker...................................................Passed -CRLF end-lines remover...................................................Passed -Insert license in comments...............................................Passed -ruff format..............................................................Passed -check for case conflicts.................................................Passed -check json...............................................................Passed -check toml...............................................................Passed -check yaml...............................................................Passed -format json..............................................................Passed -LFS data.................................................................Passed - -``` -Given your editor uses ruff via devcontainers (which it should), the auto-commit hook won't ever reformat your code. Your IDE will have already done this. - -### Running hooks manually - -Given your editor uses git via devcontainers (which it should), auto-commit hooks will run automatically. This is in case you want to run them manually. - -Inside the dev container (Your IDE will likely run this transparently for each commit if using devcontainer plugin): - -```sh -pre-commit run --all-files -``` - -### Installing pre-commit on your host - -```sh -apt install pre-commit # or brew install pre-commit -pre-commit install # install git hook -pre-commit run --all-files -``` - - ---- - -## Testing - -All tests run with **pytest** inside the dev container, ensuring local results match CI. - -### Basic usage - -```sh -./bin/dev # start container -pytest # run all tests beneath the current directory -``` - -Depending on which dir you are in, only tests from that directory will run, which is convenient when developing. You can frequently validate your feature tree. 
- -Your vibe coding agent will know to use these tests via the devcontainer so it can validate its work. - - -#### Useful options - -| Purpose | Command | -| -------------------------- | ----------------------- | -| Show `print()` output | `pytest -s` | -| Filter by name substring | `pytest -k ""` | -| Run tests with a given tag | `pytest -m ` | - - -We use tags for special tests, like `vis` or `tool` for things that aren't meant to be ran in CI and when casually developing, something that requires hardware or visual inspection (pointcloud merging vis etc). - -You can enable a tag by selecting `-m `. These are configured in `./pyproject.toml`. - -```sh -root@dimos:/workspaces/dimos/dimos # pytest -sm vis -k my_visualization -... -``` - -Classic development run within a subtree: - -```sh -./bin/dev - -... container init ... - -root@dimos:/workspaces/dimos # cd dimos/robot/unitree_webrtc/ -root@dimos:/workspaces/dimos/dimos/robot/unitree_webrtc # pytest -collected 27 items / 22 deselected / 5 selected - -type/test_map.py::test_robot_mapping PASSED -type/test_timeseries.py::test_repr PASSED -type/test_timeseries.py::test_equals PASSED -type/test_timeseries.py::test_range PASSED -type/test_timeseries.py::test_duration PASSED - -``` - -Showing prints: - -```sh -root@dimos:/workspaces/dimos/dimos/robot/unitree_webrtc/type # pytest -s test_odometry.py -test_odometry.py::test_odometry_conversion_and_count Odom ts(2025-05-30 13:52:03) pos(→ Vector Vector([0.432199 0.108042 0.316589])), rot(↑ Vector Vector([ 7.7200000e-04 -9.1280000e-03 3.006 -8621e+00])) yaw(172.3°) -Odom ts(2025-05-30 13:52:03) pos(→ Vector Vector([0.433629 0.105965 0.316143])), rot(↑ Vector Vector([ 0.003814 -0.006436 2.99591235])) yaw(171.7°) -Odom ts(2025-05-30 13:52:04) pos(→ Vector Vector([0.434459 0.104739 0.314794])), rot(↗ Vector Vector([ 0.005558 -0.004183 3.00068456])) yaw(171.9°) -Odom ts(2025-05-30 13:52:04) pos(→ Vector Vector([0.435621 0.101699 0.315852])), rot(↑ Vector Vector([ 0.005391 
-0.006002 3.00246893])) yaw(172.0°) -Odom ts(2025-05-30 13:52:04) pos(→ Vector Vector([0.436457 0.09857 0.315254])), rot(↑ Vector Vector([ 0.003358 -0.006916 3.00347172])) yaw(172.1°) -Odom ts(2025-05-30 13:52:04) pos(→ Vector Vector([0.435535 0.097022 0.314399])), rot(↑ Vector Vector([ 1.88300000e-03 -8.17800000e-03 3.00573432e+00])) yaw(172.2°) -Odom ts(2025-05-30 13:52:04) pos(→ Vector Vector([0.433739 0.097553 0.313479])), rot(↑ Vector Vector([ 8.10000000e-05 -8.71700000e-03 3.00729616e+00])) yaw(172.3°) -Odom ts(2025-05-30 13:52:04) pos(→ Vector Vector([0.430924 0.09859 0.31322 ])), rot(↑ Vector Vector([ 1.84000000e-04 -9.68700000e-03 3.00945623e+00])) yaw(172.4°) -... etc -``` ---- - -## Cheatsheet - -| Action | Command | -| --------------------------- | ---------------------------- | -| Enter dev container | `./bin/dev` | -| Run all pre-commit hooks | `pre-commit run --all-files` | -| Install hooks in local repo | `pre-commit install` | -| Run tests in current path | `pytest` | -| Filter tests by name | `pytest -k ""` | -| Enable stdout in tests | `pytest -s` | -| Run tagged tests | `pytest -m ` | diff --git a/docs/development/README.md b/docs/development/README.md index 9517fda6a1..03444ae732 100644 --- a/docs/development/README.md +++ b/docs/development/README.md @@ -1,6 +1,6 @@ # Development Guide -1. [How to setup your system](#1-setup) (pick one: system install, nix flake + direnv, pure nix flake) +1. [How to set up your system](#1-setup) (pick one: system install, nix flake + direnv, pure nix flake) 2. [How to hack on DimOS](#2-how-to-hack-on-dimos) (which files to edit, debugging help, etc) 3. [How to make a PR](#3-how-to-make-a-pr) (our expectations for a PR) @@ -16,12 +16,12 @@ All the setup options are for your convenience. If you can get DimOS running on ### Why pick this option? 
(pros/cons/when-to-use) -* Downside: mutates your global system, causing (and receiving) side effects causes it to be unreliable +* Downside: mutates your global system, which can create side effects and make it less reliable * Upside: Often good for a quick hack or exploring * Upside: Sometimes easier for CUDA/GPU acceleration * Use when: you understand system package management (arch linux user) or you don't care about making changes to your system -### How to setup DimOS +### How to set up DimOS ```bash # System dependencies @@ -228,15 +228,13 @@ This will save the rerun data to `rerun.json` in the current directory. ## Where is `` located? (Architecture) - -* If you want to add a `dimos run ` command see [dimos_run.md](/dimos/robot/cli/README.md) -* If you want to add a camera driver see [depth_camera_integration.md](/docs/depth_camera_integration.md) - -* For edits to manipulation see [manipulation.md](/dimos/hardware/manipulators/README.md) and [manipulation base](/dimos/hardware/manipulators/base/component_based_architecture.md) +* If you want to add a `dimos run ` command see [dimos_run.md](/docs/development/dimos_run.md) +* If you want to add a camera driver see [depth_camera_integration.md](/docs/development/depth_camera_integration.md) +* For edits to manipulation see [manipulation](/dimos/hardware/manipulators/README.md) and the related modules under `dimos/manipulation/`. * `dimos/core/`: Is where stuff like `Module`, `In`, `Out`, and `RPC` live. * `dimos/robot/`: Robot-specific modules live here. * `dimos/hardware/`: Are for sensors, end-effectors, and related individual hardware pieces. -* `dimos/msgs/`: If you're trying to find a type to send a type over a stream, look here. +* `dimos/msgs/`: If you're trying to find a message type to send over a stream, look here. * `dimos/dashboard/`: Contains code related to visualization. * `dimos/protocol/`: Defines low level stuff for communication between modules. 
* See `dimos/` for the remainder @@ -258,7 +256,7 @@ pytest # run all tests at or below the current directory | Enable stdout in tests | `pytest -s` | | Run tagged tests | `pytest -m ` | -We use tags for special tests, like `vis` or `tool` for things that aren't meant to be ran in CI and when casually developing, something that requires hardware or visual inspection (pointcloud merging vis etc) +We use tags for special tests, like `tool` for things that aren't meant to be run in CI and for cases that require hardware or visual inspection (pointcloud merging visualization, etc). You can enable a tag by selecting -m - these are configured in `./pyproject.toml` @@ -268,12 +266,12 @@ You can enable a tag by selecting -m - these are configured in `./pyp - Open the PR against the `dev` branch (not `main`). - **No matter what, provide a few-lines that, when run, let a reviewer test the feature you added** (assuming you changed functional python code). - Less changed files = better. -- If you're writing documentation, see [writing docs](/docs/agents/docs/index.md) for how to write code blocks. +- If you're writing documentation, see [writing docs](/docs/development/writing_docs/README.md) - If you get mypy errors, please fix them. Don't just add # type: ignore. Please first understand why mypy is complaining and try to fix it. It's only okay to ignore if the issue cannot be fixed. - If you made a change that is likely going to involve a debate, open the github UI and add a graphical comment on that code. Justify your choice and explain downsides of alternatives. - We don't require 100% test coverage, but if you're making a PR of notable python changes you should probably either have unit tests or good reason why not (ex: visualization stuff is hard to test so we don't). - Have the name of your PR start with `WIP:` if its not ready to merge but you want to show someone the changes. 
-- If you have large (>500kb) files, see [large file management](/docs/data.md) for how to store and load them (don't just commit them). +- If you have large (>500kb) files, see [large file management](/docs/development/large_file_management.md) for how to store and load them (don't just commit them). - So long as you don't disable pre-commit hooks the formatting, license headers, EOLs, LFS checks, etc will be handled automatically by [pre-commit](https://pre-commit.com). If something goes wrong with the hooks you can run the step manually with `pre-commit run --all-files`. - If you're a new hire at DimOS: - Did we mention smaller PR's are better? Smaller PR's are better. diff --git a/docs/depth_camera_integration.md b/docs/development/depth_camera_integration.md similarity index 94% rename from docs/depth_camera_integration.md rename to docs/development/depth_camera_integration.md index 4fca10da4e..3ff162d646 100644 --- a/docs/depth_camera_integration.md +++ b/docs/development/depth_camera_integration.md @@ -6,7 +6,7 @@ Use this guide to add a new depth camera, wire TF correctly, and publish the req ## Add a New Depth Camera 1) **Create a new driver module** - - Path: `dimos/dimos/hardware/sensors/camera//camera.py` + - Path: `dimos/hardware/sensors/camera//camera.py` - Export a blueprint in `/__init__.py` (match the `realsense` / `zed` pattern). 2) **Define config** @@ -57,7 +57,7 @@ Use this guide to add a new depth camera, wire TF correctly, and publish the req ## TF: Required Frames and Transforms -Frame names are defined by the abstract depth camera spec (`dimos/dimos/hardware/sensors/camera/spec.py`). +Frame names are defined by the abstract depth camera spec (`dimos/hardware/sensors/camera/spec.py`). 
Use the properties below to ensure consistent naming: - `_camera_link`: base link for the camera module (usually `{camera_name}_link`) @@ -111,8 +111,8 @@ For `ObjectSceneRegistrationModule`, the required inputs are: - Overlay annotations and aggregated pointclouds See: -- `dimos/dimos/perception/object_scene_registration.py` -- `dimos/dimos/perception/demo_object_scene_registration.py` +- `dimos/perception/object_scene_registration.py` +- `dimos/perception/demo_object_scene_registration.py` Quick wiring example: @@ -143,5 +143,5 @@ Install Foxglove from: - **Skills** are callable methods (decorated with `@skill`) exposed by `SkillModule` for agents. Reference: -- Modules overview: `dimos/docs/concepts/modules.md` -- TF fundamentals: `dimos/docs/api/transforms.md` +- Modules overview: `/docs/concepts/modules.md` +- TF fundamentals: `/docs/api/transforms.md` diff --git a/dimos/robot/cli/README.md b/docs/development/dimos_run.md similarity index 83% rename from dimos/robot/cli/README.md rename to docs/development/dimos_run.md index 63087f48b8..9fb0b5845e 100644 --- a/dimos/robot/cli/README.md +++ b/docs/development/dimos_run.md @@ -1,6 +1,20 @@ -# Robot CLI +# DimOS Run -To avoid having so many runfiles, I created a common script to run any blueprint. +#### Warning: If you just want to run a blueprint you don't need to add it to `dimos run`: + +`your_code.py` +```py +from dimos.robot.unitree_webrtc.unitree_go2_blueprints import basic as example_blueprint + +if __name__ == "__main__": + example_blueprint.build().loop() +``` + +```sh +python ./your_code.py +``` + +## Usage For example, to run the standard Unitree Go2 blueprint run: @@ -20,7 +34,7 @@ You can dynamically connect additional modules. For example: dimos run unitree-go2 --extra-module llm_agent --extra-module human_input --extra-module navigation_skill ``` -## Definitions +## Adding your own Blueprints can be defined anywhere, but they're all linked together in `dimos/robot/all_blueprints.py`. 
E.g.: diff --git a/docs/data.md b/docs/development/large_file_management.md similarity index 100% rename from docs/data.md rename to docs/development/large_file_management.md diff --git a/docs/development/writing_docs/README.md b/docs/development/writing_docs/README.md new file mode 100644 index 0000000000..abced8b706 --- /dev/null +++ b/docs/development/writing_docs/README.md @@ -0,0 +1,17 @@ +# Writing Docs + +Note: as of the DimOS beta, not all existing docs conform to this guide, but newly added docs should. + +## Need-to-know Things + +1. Where to put your docs. + - Some docs are under `docs/` (like this one) but others are stored in the actual codebase, like `dimos/robot/drone/README.md`. + - If your docs have code examples and are somewhere under `docs/`, those code examples must be executable. See [codeblocks guide](/docs/development/writing_docs/codeblocks.md) for details and instructions on how to execute your code examples. + - If your docs nicely *introduce* a new API, or they are a tutorial, then put them in `docs/concepts/` (even if they are about a specific API). + - If the docs are highly technical or exhaustive there are three options: - If your docs are about a user-facing API (ex: the reader can follow your instructions without cloning dimos) then put them in `docs/api/`. - Otherwise (if the reader is modifying their own copy of the dimos codebase) then your docs have two options: 1. You can choose to store your docs next to relevant python files (ex: `dimos/robot/drone/README.md`), and we are less strict about the contents (code examples don't need to be executable) **BUT**, you need to edit something in `docs/development/` or `docs/api/` to add a reference/link to those docs (don't create "dangling" documentation). 2. Alternatively, you can put your docs in `docs/development/`. Code examples there should be executable. +2. 
Even if you know how to link to other docs, read our [how we do doc linking guide](/docs/development/writing_docs/doclinks.md). +3. Even if you know how to create diagrams on your own, read our [how we do diagrams guide](/docs/development/writing_docs/diagram_practices.md). diff --git a/docs/agents/docs/assets/codeblocks_example.svg b/docs/development/writing_docs/assets/codeblocks_example.svg similarity index 100% rename from docs/agents/docs/assets/codeblocks_example.svg rename to docs/development/writing_docs/assets/codeblocks_example.svg diff --git a/docs/development/writing_docs/assets/pikchr_basic.svg b/docs/development/writing_docs/assets/pikchr_basic.svg new file mode 100644 index 0000000000..9dee0bfdec --- /dev/null +++ b/docs/development/writing_docs/assets/pikchr_basic.svg @@ -0,0 +1,12 @@ + + +Step 1 + + + +Step 2 + + + +Step 3 + diff --git a/docs/agents/docs/assets/pikchr_branch.svg b/docs/development/writing_docs/assets/pikchr_branch.svg similarity index 100% rename from docs/agents/docs/assets/pikchr_branch.svg rename to docs/development/writing_docs/assets/pikchr_branch.svg diff --git a/docs/agents/docs/assets/pikchr_explicit.svg b/docs/development/writing_docs/assets/pikchr_explicit.svg similarity index 100% rename from docs/agents/docs/assets/pikchr_explicit.svg rename to docs/development/writing_docs/assets/pikchr_explicit.svg diff --git a/docs/agents/docs/assets/pikchr_labels.svg b/docs/development/writing_docs/assets/pikchr_labels.svg similarity index 100% rename from docs/agents/docs/assets/pikchr_labels.svg rename to docs/development/writing_docs/assets/pikchr_labels.svg diff --git a/docs/development/writing_docs/assets/pikchr_sizing.svg b/docs/development/writing_docs/assets/pikchr_sizing.svg new file mode 100644 index 0000000000..9d17f571d1 --- /dev/null +++ b/docs/development/writing_docs/assets/pikchr_sizing.svg @@ -0,0 +1,13 @@ + + +short + + + +.subscribe() + + + +two lines +of text + diff --git a/docs/agents/docs/codeblocks.md 
b/docs/development/writing_docs/codeblocks.md similarity index 60% rename from docs/agents/docs/codeblocks.md rename to docs/development/writing_docs/codeblocks.md index 323f1c0c50..a928f1ceb4 100644 --- a/docs/agents/docs/codeblocks.md +++ b/docs/development/writing_docs/codeblocks.md @@ -1,16 +1,97 @@ -# Executable Code Blocks +# Code Blocks Must Be Executable We use [md-babel-py](https://github.com/leshy/md-babel-py/) to execute code blocks in markdown and insert results. ## Golden Rule -**All code blocks must be executable.** Never write illustrative/pseudo code blocks. If you're showing an API usage pattern, create a minimal working example that actually runs. This ensures documentation stays correct as the codebase evolves. +**Never write illustrative/pseudo code blocks.** If you're showing an API usage pattern, create a minimal working example that actually runs. This ensures documentation stays correct as the codebase evolves. + +## Installation + +
+Click to see full installation instructions + +### Nix (recommended) + +```sh skip +# (assuming you have nix) + +# Run directly from GitHub +nix run github:leshy/md-babel-py -- run README.md --stdout + +# run locally +nix run . -- run README.md --stdout +``` + +### Docker + +```sh skip +# Pull from Docker Hub +docker run -v $(pwd):/work lesh/md-babel-py:main run /work/README.md --stdout + +# Or build locally via Nix +nix build '.#docker' # builds tarball to ./result +docker load < result # loads image from tarball +docker run -v $(pwd):/work md-babel-py:latest run /work/file.md --stdout +``` + +### pipx + +```sh skip +pipx install md-babel-py +# or: uv pip install md-babel-py +md-babel-py run README.md --stdout +``` + +If not using nix or docker, evaluators require system dependencies: + +| Language | System packages | +|-----------|-----------------------------| +| python | python3 | +| node | nodejs | +| dot | graphviz | +| asymptote | asymptote, texlive, dvisvgm | +| pikchr | pikchr | +| openscad | openscad, xvfb, imagemagick | +| diagon | diagon | + +```sh skip +# Arch Linux +sudo pacman -S python nodejs graphviz asymptote texlive-basic openscad xorg-server-xvfb imagemagick + +# Debian/Ubuntu +sudo apt-get install python3 nodejs graphviz asymptote texlive xvfb imagemagick openscad +``` + +Note: pikchr and diagon may need to be built from source. Use Docker or Nix for full evaluator support. + +## Usage + +```sh skip +# Edit file in-place +md-babel-py run document.md + +# Output to separate file +md-babel-py run document.md --output result.md + +# Print to stdout +md-babel-py run document.md --stdout + +# Only run specific languages +md-babel-py run document.md --lang python,sh + +# Dry run - show what would execute +md-babel-py run document.md --dry-run +``` + +
+ ## Running ```sh skip -md-babel-py run document.md # edit in-place -md-babel-py run document.md --stdout # preview to stdout +md-babel-py run document.md # edit in-place +md-babel-py run document.md --stdout # preview to stdout md-babel-py run document.md --dry-run # show what would run ``` @@ -112,203 +193,3 @@ plt.savefig('{output}', transparent=True) ![output](assets/matplotlib-demo.svg) - -### Pikchr - -SQLite's diagram language: - -
-diagram source - -```pikchr fold output=assets/pikchr-demo.svg -color = white -fill = none -linewid = 0.4in - -# Input file -In: file "README.md" fit -arrow - -# Processing -Parse: box "Parse" rad 5px fit -arrow -Exec: box "Execute" rad 5px fit - -# Fan out to languages -arrow from Exec.e right 0.3in then up 0.4in then right 0.3in -Sh: oval "Shell" fit -arrow from Exec.e right 0.3in then right 0.3in -Node: oval "Node" fit -arrow from Exec.e right 0.3in then down 0.4in then right 0.3in -Py: oval "Python" fit - -# Merge back -X: dot at (Py.e.x + 0.3in, Node.e.y) invisible -line from Sh.e right until even with X then down to X -line from Node.e to X -line from Py.e right until even with X then up to X -Out: file "README.md" fit with .w at (X.x + 0.3in, X.y) -arrow from X to Out.w -``` - -
- - -![output](assets/pikchr-demo.svg) - -### Asymptote - -Vector graphics: - -```asymptote output=assets/histogram.svg -import graph; -import stats; - -size(400,200,IgnoreAspect); -defaultpen(white); - -int n=10000; -real[] a=new real[n]; -for(int i=0; i < n; ++i) a[i]=Gaussrand(); - -draw(graph(Gaussian,min(a),max(a)),orange); - -int N=bins(a); - -histogram(a,min(a),max(a),N,normalize=true,low=0,rgb(0.4,0.6,0.8),rgb(0.2,0.4,0.6),bars=true); - -xaxis("$x$",BottomTop,LeftTicks,p=white); -yaxis("$dP/dx$",LeftRight,RightTicks(trailingzero),p=white); -``` - - -![output](assets/histogram.svg) - -### Graphviz - -```dot output=assets/graph.svg -A -> B -> C -A -> C -``` - - -![output](assets/graph.svg) - -### OpenSCAD - -```openscad output=assets/cube-sphere.png -cube([10, 10, 10]); -sphere(r=7); -``` - - -![output](assets/cube-sphere.png) - -### Diagon - -ASCII art diagrams: - -```diagon mode=Math -1 + 1/2 + sum(i,0,10) -``` - - -``` - 10 - ___ - 1 ╲ -1 + ─ + ╱ i - 2 ‾‾‾ - 0 -``` - -```diagon mode=GraphDAG -A -> B -> C -A -> C -``` - - -``` -┌───┐ -│A │ -└┬─┬┘ - │┌▽┐ - ││B│ - │└┬┘ -┌▽─▽┐ -│C │ -└───┘ -``` - -## Install - -### Nix (recommended) - -```sh skip -# Run directly from GitHub -nix run github:leshy/md-babel-py -- run README.md --stdout - -# Or clone and run locally -nix run . 
-- run README.md --stdout -``` - -### Docker - -```sh skip -# Pull from Docker Hub -docker run -v $(pwd):/work lesh/md-babel-py:main run /work/README.md --stdout - -# Or build locally via Nix -nix build .#docker # builds tarball to ./result -docker load < result # loads image from tarball -docker run -v $(pwd):/work md-babel-py:latest run /work/file.md --stdout -``` - -### pipx - -```sh skip -pipx install md-babel-py -# or: uv pip install md-babel-py -md-babel-py run README.md --stdout -``` - -If not using nix or docker, evaluators require system dependencies: - -| Language | System packages | -|-----------|-----------------------------| -| python | python3 | -| node | nodejs | -| dot | graphviz | -| asymptote | asymptote, texlive, dvisvgm | -| pikchr | pikchr | -| openscad | openscad, xvfb, imagemagick | -| diagon | diagon | - -```sh skip -# Arch Linux -sudo pacman -S python nodejs graphviz asymptote texlive-basic openscad xorg-server-xvfb imagemagick - -# Debian/Ubuntu -sudo apt-get install python3 nodejs graphviz asymptote texlive xvfb imagemagick openscad -``` - -Note: pikchr and diagon may need to be built from source. Use Docker or Nix for full evaluator support. - -## Usage - -```sh skip -# Edit file in-place -md-babel-py run document.md - -# Output to separate file -md-babel-py run document.md --output result.md - -# Print to stdout -md-babel-py run document.md --stdout - -# Only run specific languages -md-babel-py run document.md --lang python,sh - -# Dry run - show what would execute -md-babel-py run document.md --dry-run -``` diff --git a/docs/agents/docs/index.md b/docs/development/writing_docs/diagram_practices.md similarity index 69% rename from docs/agents/docs/index.md rename to docs/development/writing_docs/diagram_practices.md index 94dd64b72a..81f3da73a5 100644 --- a/docs/agents/docs/index.md +++ b/docs/development/writing_docs/diagram_practices.md @@ -1,59 +1,9 @@ +We have many diagramming tools. View source code of this page to see examples. 
-# Code Blocks - -**All code blocks must be executable.** -Never write illustrative/pseudocode blocks. -If you're showing an API usage pattern, create a minimal working example that actually runs. This ensures documentation stays correct as the codebase evolves. - -After writing a code block in your markdown file, you can run it by executing: -```bash -md-babel-py run document.md -``` - -More information on this tool is in [codeblocks](/docs/agents/docs/codeblocks.md). - - -# Code or Docs Links - -After adding a link to a doc, run - -```bash -doclinks document.md -``` - -### Code file references -```markdown -See [`service/spec.py`](/dimos/protocol/service/spec.py) for the implementation. -``` - -After running doclinks, it becomes: -```markdown -See [`service/spec.py`](/dimos/protocol/service/spec.py) for the implementation. -``` - -### Symbol auto-linking -Mention a symbol on the same line to auto-link to its line number: -```markdown -The `Configurable` class is defined in [`service/spec.py`](/dimos/protocol/service/spec.py#L22). -``` - -Becomes: -```markdown -The `Configurable` class is defined in [`service/spec.py`](/dimos/protocol/service/spec.py#L22). -``` -### Doc-to-doc references -Use `.md` as the link target: -```markdown -See [Configuration](/docs/api/configuration.md) for more details. -``` - -Becomes: -```markdown -See [Configuration](/docs/concepts/configuration.md) for more details. -``` - -More information on this is in [doclinks](/docs/agents/docs/doclinks.md). +# How to make diagrams +1. First define a diagram using a code block (examples below). See [Pikchr](https://pikchr.org/) for more details on syntax. +2. Then use the CLI tool `md-babel-py` (ex: `md-babel-py run README.md`) to generate the diagram. See [codeblocks.md](/docs/development/writing_docs/codeblocks.md) for how to get the `md-babel-py` CLI tool. 
# Pikchr diff --git a/dimos/utils/docs/doclinks.md b/docs/development/writing_docs/doclinks.md similarity index 75% rename from dimos/utils/docs/doclinks.md rename to docs/development/writing_docs/doclinks.md index dce2e67fec..e9a6ee9d8c 100644 --- a/dimos/utils/docs/doclinks.md +++ b/docs/development/writing_docs/doclinks.md @@ -1,6 +1,27 @@ -# doclinks +# Use Doclinks to Resolve file references -A Markdown link resolver that automatically fills in correct file paths for code references in documentation. +## Syntax + + +| Pattern | Example | +|-------------|-----------------------------------------------------| +| Code file | `[`service/spec.py`]()` → resolves path | +| With symbol | `Configurable` in `[`spec.py`]()` → adds `#L` | +| Doc link | `[Configuration](.md)` → resolves to doc | + + +## Usage + +```bash +bin/doclinks docs/guide.md # single file +bin/doclinks docs/ # directory +bin/doclinks --dry-run ... # preview only +``` + +## Full Documentation + +
+Click to see full documentation ## What it does @@ -39,26 +60,26 @@ See [`service/spec.py`](/dimos/protocol/service/spec.py) for the implementation. ```bash # Process a single file -doclinks docs/guide.md +bin/doclinks docs/guide.md # Process a directory recursively -doclinks docs/ +bin/doclinks docs/ # Relative links (from doc location) -doclinks --link-mode relative docs/ +bin/doclinks --link-mode relative docs/ # GitHub links -doclinks --link-mode github \ +bin/doclinks --link-mode github \ --github-url https://github.com/org/repo docs/ # Dry run (preview changes) -doclinks --dry-run docs/ +bin/doclinks --dry-run docs/ # CI check (exit 1 if changes needed) -doclinks --check docs/ +bin/doclinks --check docs/ # Watch mode (auto-update on changes) -doclinks --watch docs/ +bin/doclinks --watch docs/ ``` ## Options @@ -94,3 +115,5 @@ The tool builds an index of all files in the repo. For `/dimos/protocol/service/ - `dimos/protocol/service/spec.py` Use longer paths when multiple files share the same name. + +
diff --git a/docs/old/ci.md b/docs/old/ci.md deleted file mode 100644 index ac9b11115a..0000000000 --- a/docs/old/ci.md +++ /dev/null @@ -1,146 +0,0 @@ -# Continuous Integration Guide - -> *If you are ******not****** editing CI-related files, you can safely ignore this document.* - -Our GitHub Actions pipeline lives in **`.github/workflows/`** and is split into three top-level workflows: - -| Workflow | File | Purpose | -| ----------- | ------------- | -------------------------------------------------------------------- | -| **cleanup** | `cleanup.yml` | Auto-formats code with *pre-commit* and pushes fixes to your branch. | -| **docker** | `docker.yml` | Builds (and caches) our Docker image hierarchy. | -| **tests** | `tests.yml` | Pulls the *dev* image and runs the test suite. | - ---- - -## `cleanup.yml` - -* Checks out the branch. -* Executes **pre-commit** hooks. -* If hooks modify files, commits and pushes the changes back to the same branch. - -> This guarantees consistent formatting even if the developer has not installed pre-commit locally. - ---- - -## `tests.yml` - -* Pulls the pre-built **dev** container image. -* Executes: - -```bash -pytest -``` - -That’s it—making the job trivial to reproduce locally via: - -```bash -./bin/dev # enter container -pytest # run tests -``` - ---- - -## `docker.yml` - -### Objectives - -1. **Layered images**: each image builds on its parent, enabling parallel builds once dependencies are ready. -2. **Speed**: build children as soon as parents finish; leverage aggressive caching. -3. **Minimal work**: skip images whose context hasn’t changed. 
- -### Current hierarchy - - -``` - ┌──────┐ - │ubuntu│ - └┬────┬┘ - ┌▽──┐┌▽───────┐ - │ros││python │ - └┬──┘└───────┬┘ - ┌▽─────────┐┌▽──┐ - │ros-python││dev│ - └┬─────────┘└───┘ - ┌▽──────┐ - │ros-dev│ - └───────┘ -``` - -* ghcr.io/dimensionalos/ros:dev -* ghcr.io/dimensionalos/python:dev -* ghcr.io/dimensionalos/ros-python:dev -* ghcr.io/dimensionalos/ros-dev:dev -* ghcr.io/dimensionalos/dev:dev - -> **Note**: The diagram shows only currently active images; the system is extensible—new combinations are possible, builds can be run per branch and as parallel as possible - - -``` - ┌──────┐ - │ubuntu│ - └┬────┬┘ - ┌▽──┐┌▽────────────────────────┐ - │ros││python │ - └┬──┘└───────────────────┬────┬┘ - ┌▽─────────────────────┐┌▽──┐┌▽──────┐ - │ros-python ││dev││unitree│ - └┬────────┬───────────┬┘└───┘└───────┘ - ┌▽──────┐┌▽─────────┐┌▽──────────┐ - │ros-dev││ros-jetson││ros-unitree│ - └───────┘└──────────┘└───────────┘ -``` - -### Branch-aware tagging - -When a branch triggers a build: - -* Only images whose context changed are rebuilt. -* New images receive the tag `:`. -* Unchanged parents are pulled from the registry, e.g. - -given we made python requirements.txt changes, but no ros changes, image dep graph would look like this: - -``` -ghcr.io/dimensionalos/ros:dev → ghcr.io/dimensionalos/ros-python:my_branch → ghcr.io/dimensionalos/dev:my_branch -``` - -### Job matrix & the **check-changes** step - -To decide what to build we run a `check-changes` job that compares the diff against path filters: - -```yaml -filters: | - ros: - - .github/workflows/_docker-build-template.yml - - .github/workflows/docker.yml - - docker/base-ros/** - - python: - - docker/base-python/** - - requirements*.txt - - dev: - - docker/dev/** -``` - -This populates a build matrix (ros, python, dev) with `true/false` flags. - -### The dependency execution issue - -Ideally a child job (e.g. 
**ros-python**) should depend on both: - -* **check-changes** (to know if it *should* run) -* Its **parent image job** (to wait for the artifact) - -GitHub Actions can’t express “run only if *both* conditions are true *and* the parent job wasn’t skipped”. - -We use `needs: [check-changes, ros]` to ensure the job runs after the ros build, but if the ros build was skipped we need `if: always()` to make the build run anyway. -Adding `always`, however, breaks the conditional check entirely: OR and AND operators have no effect, and the job simply _always_ runs, which means we build python even when we don't need to. - -This is unfortunate, as the build takes ~30 min the first time (a few minutes afterwards thanks to caching). I've spent a lot of time on this; many seemingly viable options didn't pan out, and probably we need to completely rewrite and own the actions runner rather than depend on GitHub's structure at all: a single job called `CI` or similar, running within our custom docker image. - ---- - -## `run-tests` (job inside `docker.yml`) - -After all requested images are built, this job triggers **tests.yml**, passing the freshly created *dev* image tag so the suite runs against the branch-specific environment. 
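-A sketch of the conditional-dependency shape described above (job and output names are illustrative, not our actual workflow): - -```yaml -build-ros-python: -  # Wait for both the change-detection job and the parent image build. -  needs: [check-changes, ros] -  # Desired: run when the python context changed, even if `ros` was skipped. -  # As reported above, adding always() defeats the conditional and the -  # job runs unconditionally. -  if: always() && needs.check-changes.outputs.python == 'true' -  steps: -    - run: echo "build ghcr.io/dimensionalos/ros-python image" -``` 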
diff --git a/docs/old/jetson.MD b/docs/old/jetson.MD deleted file mode 100644 index a4d06e3255..0000000000 --- a/docs/old/jetson.MD +++ /dev/null @@ -1,72 +0,0 @@ -# DimOS Jetson Setup Instructions -Tested on Jetpack 6.2, CUDA 12.6 - -## Required system dependencies -`sudo apt install portaudio19-dev python3-pyaudio` - -## Installing cuSPARSELt -https://ninjalabo.ai/blogs/jetson_pytorch.html - -```bash -wget https://developer.download.nvidia.com/compute/cusparselt/0.7.0/local_installers/cusparselt-local-tegra-repo-ubuntu2204-0.7.0_1.0-1_arm64.deb -sudo dpkg -i cusparselt-local-tegra-repo-ubuntu2204-0.7.0_1.0-1_arm64.deb -sudo cp /var/cusparselt-local-tegra-repo-ubuntu2204-0.7.0/cusparselt-*-keyring.gpg /usr/share/keyrings/ -sudo apt-get update -sudo apt-get install libcusparselt0 libcusparselt-dev -ldconfig -``` -## Install Torch and Torchvision wheels - -Enter virtualenv -```bash -python3 -m venv venv -source venv/bin/activate -``` - -Wheels for jp6/cu126 -https://pypi.jetson-ai-lab.io/jp6/cu126 - -Check compatibility: -https://docs.nvidia.com/deeplearning/frameworks/install-pytorch-jetson-platform-release-notes/pytorch-jetson-rel.html - -### Working torch wheel tested on Jetpack 6.2, CUDA 12.6 -`pip install --no-cache https://developer.download.nvidia.com/compute/redist/jp/v61/pytorch/torch-2.5.0a0+872d972e41.nv24.08.17622132-cp310-cp310-linux_aarch64.whl` - -### Install torchvision from source: -```bash -# Set version by checking above torchvision<-->torch compatibility - -# We use 0.20.0 -export VERSION=20 - -sudo apt-get install libjpeg-dev zlib1g-dev libpython3-dev libopenblas-dev libavcodec-dev libavformat-dev libswscale-dev -git clone --branch release/0.$VERSION https://github.com/pytorch/vision torchvision -cd torchvision -export BUILD_VERSION=0.$VERSION.0 -python3 setup.py install --user # remove --user if installing in virtualenv -``` - -### Verify success: -```bash -$ python3 -import torch -print(torch.__version__) -print('CUDA available: ' + 
str(torch.cuda.is_available())) # Should be True -print('cuDNN version: ' + str(torch.backends.cudnn.version())) -a = torch.cuda.FloatTensor(2).zero_() -print('Tensor a = ' + str(a)) -b = torch.randn(2).cuda() -print('Tensor b = ' + str(b)) -c = a + b -print('Tensor c = ' + str(c)) - -$ python3 -import torchvision -print(torchvision.__version__) -``` - -## Install Onnxruntime-gpu - -Find pre-built wheels here for your specific JP/CUDA version: https://pypi.jetson-ai-lab.io/jp6 - -`pip install https://pypi.jetson-ai-lab.io/jp6/cu126/+f/4eb/e6a8902dc7708/onnxruntime_gpu-1.23.0-cp310-cp310-linux_aarch64.whl#sha256=4ebe6a8902dc7708434b2e1541b3fe629ebf434e16ab5537d1d6a622b42c622b` diff --git a/docs/old/modules.md b/docs/old/modules.md deleted file mode 100644 index 9cdbf586ac..0000000000 --- a/docs/old/modules.md +++ /dev/null @@ -1,165 +0,0 @@ -# Dimensional Modules - -The DimOS Module system enables distributed, multiprocess robotics applications using Dask for compute distribution and LCM (Lightweight Communications and Marshalling) for high-performance IPC. - -## Core Concepts - -### 1. Module Definition -Modules are Python classes that inherit from `dimos.core.Module` and define inputs, outputs, and RPC methods: - -```python -from dimos.core import Module, In, Out, rpc -from dimos.msgs.geometry_msgs import Vector3 - -class MyModule(Module): -    # Declare inputs/outputs as class attributes initialized to None -    data_in: In[Vector3] = None -    data_out: Out[Vector3] = None - -    def __init__(self): -        # Call parent Module init -        super().__init__() - -    @rpc -    def remote_method(self, param): -        """Methods decorated with @rpc can be called remotely""" -        return param * 2 -``` - -### 2. Module Deployment -Modules are deployed across Dask workers using the `dimos.deploy()` method: - -```python -from dimos import core - -# Start Dask cluster with N workers -dimos = core.start(4) - -# Deploying modules allows for passing initialization parameters. 
-# In this case param1 and param2 are passed into Module init -module = dimos.deploy(Module, param1="value1", param2=123) -``` - -### 3. Stream Connections -Modules communicate via reactive streams using LCM transport: - -```python -# Configure LCM transport for outputs -module1.data_out.transport = core.LCMTransport("/topic_name", MessageType) - -# Connect module inputs to outputs -module2.data_in.connect(module1.data_out) - -# Access the underlying Observable stream -stream = module1.data_out.observable() -stream.subscribe(lambda msg: print(f"Received: {msg}")) -``` - -### 4. Module Lifecycle -```python -# Start modules to begin processing -module.start() # Calls the @rpc start() method if defined - -# Inspect module I/O configuration -print(module.io().result()) # Shows inputs, outputs, and RPC methods - -# Clean shutdown -dimos.shutdown() -``` - -## Real-World Example: Robot Control System - -```python -# Connection module wraps robot hardware/simulation -connection = dimos.deploy(ConnectionModule, ip=robot_ip) -connection.lidar.transport = core.LCMTransport("/lidar", LidarMessage) -connection.video.transport = core.LCMTransport("/video", Image) - -# Perception module processes sensor data -perception = dimos.deploy(PersonTrackingStream, camera_intrinsics=[...]) -perception.video.connect(connection.video) -perception.tracking_data.transport = core.pLCMTransport("/person_tracking") - -# Start processing -connection.start() -perception.start() - -# Enable tracking via RPC -perception.enable_tracking() - -# Get latest tracking data -data = perception.get_tracking_data() -``` - -## LCM Transport Configuration - -```python -# Standard LCM transport for simple types like lidar -connection.lidar.transport = core.LCMTransport("/lidar", LidarMessage) - -# Pickle-based transport for complex Python objects / dictionaries -connection.tracking_data.transport = core.pLCMTransport("/person_tracking") - -# Auto-configure LCM system buffers (required in containers) -from 
dimos.protocol import pubsub -pubsub.lcm.autoconf() -``` - -This architecture enables building complex robotic systems as composable, distributed modules that communicate efficiently via streams and RPC, scaling from single machines to clusters. - -# Dimensional Install -## Python Installation (Ubuntu 22.04) - -```bash -sudo apt install python3-venv - -# Clone the repository (dev branch, no submodules) -git clone -b dev https://github.com/dimensionalOS/dimos.git -cd dimos - -# Create and activate virtual environment -python3 -m venv venv -source venv/bin/activate - -sudo apt install portaudio19-dev python3-pyaudio - -# Install torch and torchvision if not already installed -# Example CUDA 11.8, PyTorch 2.0.1 (replace with your required pytorch version if different) -pip install torch==2.0.1 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118 -``` - -### Install dependencies -```bash -# CPU only (recommended to attempt first) -pip install .[cpu,dev] - -# CUDA install -pip install .[cuda,dev] - -# Copy and configure environment variables -cp default.env .env -``` - -### Test install -```bash -# Run standard tests -pytest -s dimos/ - -# Test modules functionality -pytest -s -m module dimos/ - -# Test LCM communication -pytest -s -m lcm dimos/ -``` - -# Unitree Go2 Quickstart - -To quickly test the modules system, you can run the Unitree Go2 multiprocess example directly: - -```bash -# Make sure you have the required environment variables set -export ROBOT_IP= - -# Run the multiprocess Unitree Go2 example -python dimos/robot/unitree_webrtc/multiprocess/unitree_go2.py -``` diff --git a/docs/old/modules_CN.md b/docs/old/modules_CN.md deleted file mode 100644 index 89e16c7112..0000000000 --- a/docs/old/modules_CN.md +++ /dev/null @@ -1,188 +0,0 @@ -# Dimensional 模块系统 - -DimOS 模块系统使用 Dask 进行计算分布和 LCM(轻量级通信和编组)进行高性能进程间通信,实现分布式、多进程的机器人应用。 - -## 核心概念 - -### 1. 
模块定义 -模块是继承自 `dimos.core.Module` 的 Python 类,定义输入、输出和 RPC 方法: - -```python -from dimos.core import Module, In, Out, rpc -from dimos.msgs.geometry_msgs import Vector3 - -class MyModule(Module): # ROS Node - # 将输入/输出声明为初始化为 None 的类属性 - data_in: In[Vector3] = None # ROS Subscriber - data_out: Out[Vector3] = None # ROS Publisher - - def __init__(): - # 调用父类 Module 初始化 - super().__init__() - - @rpc - def remote_method(self, param): - """使用 @rpc 装饰的方法可以远程调用""" - return param * 2 -``` - -### 2. 模块部署 -使用 `dimos.deploy()` 方法在 Dask 工作进程中部署模块: - -```python -from dimos import core - -# 启动具有 N 个工作进程的 Dask 集群 -dimos = core.start(4) - -# 部署模块时可以传递初始化参数 -# 在这种情况下,param1 和 param2 被传递到模块初始化中 -module = dimos.deploy(Module, param1="value1", param2=123) -``` - -### 3. 流连接 -模块通过使用 LCM 传输的响应式流进行通信: - -```python -# 为输出配置 LCM 传输 -module1.data_out.transport = core.LCMTransport("/topic_name", MessageType) - -# 将模块输入连接到输出 -module2.data_in.connect(module1.data_out) - -# 访问底层的 Observable 流 -stream = module1.data_out.observable() -stream.subscribe(lambda msg: print(f"接收到: {msg}")) -``` - -### 4. 
模块生命周期 -```python -# 启动模块以开始处理 -module.start() # 如果定义了 @rpc start() 方法,则调用它 - -# 检查模块 I/O 配置 -print(module.io().result()) # 显示输入、输出和 RPC 方法 - -# 优雅关闭 -dimos.shutdown() -``` - -## 实际示例:机器人控制系统 - -```python -# 连接模块封装机器人硬件/仿真 -connection = dimos.deploy(ConnectionModule, ip=robot_ip) -connection.lidar.transport = core.LCMTransport("/lidar", LidarMessage) -connection.video.transport = core.LCMTransport("/video", Image) - -# 感知模块处理传感器数据 -perception = dimos.deploy(PersonTrackingStream, camera_intrinsics=[...]) -perception.video.connect(connection.video) -perception.tracking_data.transport = core.pLCMTransport("/person_tracking") - -# 开始处理 -connection.start() -perception.start() - -# 通过 RPC 启用跟踪 -perception.enable_tracking() - -# 获取最新的跟踪数据 -data = perception.get_tracking_data() -``` - -## LCM 传输配置 - -```python -# 用于简单类型(如激光雷达)的标准 LCM 传输 -connection.lidar.transport = core.LCMTransport("/lidar", LidarMessage) - -# 用于复杂 Python 对象/字典的基于 pickle 的传输 -connection.tracking_data.transport = core.pLCMTransport("/person_tracking") - -# 自动配置 LCM 系统缓冲区(在容器中必需) -from dimos.protocol import pubsub -pubsub.lcm.autoconf() -``` - -这种架构使得能够将复杂的机器人系统构建为可组合的分布式模块,这些模块通过流和 RPC 高效通信,从单机扩展到集群。 - -# Dimensional 安装指南 -## Python 安装(Ubuntu 22.04) - -```bash -sudo apt install python3-venv - -# 克隆仓库(dev 分支,无子模块) -git clone -b dev https://github.com/dimensionalOS/dimos.git -cd dimos - -# 创建并激活虚拟环境 -python3 -m venv venv -source venv/bin/activate - -sudo apt install portaudio19-dev python3-pyaudio - -# 如果尚未安装,请安装 torch 和 torchvision -# 示例 CUDA 11.7,Pytorch 2.0.1(如果需要不同的 pytorch 版本,请替换) -pip install torch==2.0.1 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118 -``` - -### 安装依赖 -```bash -# 仅 CPU(建议首先尝试) -pip install .[cpu,dev] - -# CUDA 安装 -pip install .[cuda,dev] - -# 复制并配置环境变量 -cp default.env .env -``` - -### 测试安装 -```bash -# 运行标准测试 -pytest -s dimos/ - -# 测试模块功能 -pytest -s -m module dimos/ - -# 测试 LCM 通信 -pytest -s -m lcm dimos/ -``` - -# Unitree Go2 快速开始 - -要快速测试模块系统,您可以直接运行 
Unitree Go2 多进程示例: - -```bash -# 确保设置了所需的环境变量 -export ROBOT_IP= - -# 运行多进程 Unitree Go2 示例 -python dimos/robot/unitree_webrtc/multiprocess/unitree_go2.py -``` - -## 模块系统的高级特性 - -### 分布式计算 -DimOS 模块系统建立在 Dask 之上,提供了强大的分布式计算能力: - -- **自动负载均衡**:模块自动分布在可用的工作进程中 -- **容错性**:如果工作进程失败,模块可以在其他工作进程上重新启动 -- **可扩展性**:从单机到集群的无缝扩展 - -### 响应式编程模型 -使用 RxPY 实现的响应式流提供了: - -- **异步处理**:非阻塞的数据流处理 -- **背压处理**:自动管理快速生产者和慢速消费者 -- **操作符链**:使用 map、filter、merge 等操作符进行流转换 - -### 性能优化 -LCM 传输针对机器人应用进行了优化: - -- **零拷贝**:大型消息的高效内存使用 -- **低延迟**:微秒级的消息传递 -- **多播支持**:一对多的高效通信 diff --git a/docs/old/ros_navigation.md b/docs/old/ros_navigation.md deleted file mode 100644 index 4a74500b2f..0000000000 --- a/docs/old/ros_navigation.md +++ /dev/null @@ -1,284 +0,0 @@ -# Autonomy Stack API Documentation - -## Prerequisites - -- Ubuntu 24.04 -- [ROS 2 Jazzy Installation](https://docs.ros.org/en/jazzy/Installation.html) - -Add the following line to your `~/.bashrc` to source the ROS 2 Jazzy setup script automatically: - -``` echo "source /opt/ros/jazzy/setup.bash" >> ~/.bashrc``` - -## MID360 Ethernet Configuration (skip for sim) - -### Step 1: Configure Network Interface - -1. Open Network Settings in Ubuntu -2. Find your Ethernet connection to the MID360 -3. Click the gear icon to edit settings -4. Go to IPv4 tab -5. Change Method from "Automatic (DHCP)" to "Manual" -6. Add the following settings: - - **Address**: 192.168.1.5 - - **Netmask**: 255.255.255.0 - - **Gateway**: 192.168.1.1 -7. Click "Apply" - -### Step 2: Configure MID360 IP in JSON - -1. Find your MID360 serial number (on sticker under QR code) -2. Note the last 2 digits (e.g., if serial ends in 89, use 189) -3. Edit the configuration file: - -```bash -cd ~/autonomy_stack_mecanum_wheel_platform -nano src/utilities/livox_ros_driver2/config/MID360_config.json -``` - -4. Update line 28 with your IP (192.168.1.1xx where xx = last 2 digits): - -```json -"ip" : "192.168.1.1xx", -``` - -5. 
Save and exit - -### Step 3: Verify Connection - -```bash -ping 192.168.1.1xx # Replace xx with your last 2 digits -``` - -## Robot Configuration - -### Setting Robot Type - -The system supports different robot configurations. Set the `ROBOT_CONFIG_PATH` environment variable to specify which robot configuration to use: - -```bash -# For Unitree G1 (default if not set) -export ROBOT_CONFIG_PATH="unitree/unitree_g1" - -# Add to ~/.bashrc to make permanent -echo 'export ROBOT_CONFIG_PATH="unitree/unitree_g1"' >> ~/.bashrc -``` - -Available robot configurations: -- `unitree/unitree_g1` - Unitree G1 robot (default) -- Add your custom robot configs in `src/base_autonomy/local_planner/config/` - -## Build the system - -You must do this every time you make a code change; this is not Python: - -```colcon build --symlink-install --cmake-args -DCMAKE_BUILD_TYPE=Release``` - -## System Launch - -### Simulation Mode - -```bash -cd ~/autonomy_stack_mecanum_wheel_platform - -# Base autonomy only -./system_simulation.sh - -# With route planner -./system_simulation_with_route_planner.sh - -# With exploration planner -./system_simulation_with_exploration_planner.sh -``` - -### Real Robot Mode - -```bash -cd ~/autonomy_stack_mecanum_wheel_platform - -# Base autonomy only -./system_real_robot.sh - -# With route planner -./system_real_robot_with_route_planner.sh - -# With exploration planner -./system_real_robot_with_exploration_planner.sh -``` - -## Quick Troubleshooting - -- **Cannot ping MID360**: Check Ethernet cable and network settings -- **SLAM drift**: Press clear-terrain-map button on joystick controller -- **Joystick not recognized**: Unplug and replug USB dongle - - -## ROS Topics - -### Input Topics (Commands) - -| Topic | Type | Description | -|-------|------|-------------| -| `/way_point` | `geometry_msgs/PointStamped` | Send navigation goal (position only) | -| `/goal_pose` | `geometry_msgs/PoseStamped` | Send goal with orientation | -| `/cancel_goal` | `std_msgs/Bool` | 
Cancel current goal (data: true) | -| `/joy` | `sensor_msgs/Joy` | Joystick input | -| `/stop` | `std_msgs/Int8` | Soft Stop (2 = stop all commands, 0 = release) | -| `/navigation_boundary` | `geometry_msgs/PolygonStamped` | Set navigation boundaries | -| `/added_obstacles` | `sensor_msgs/PointCloud2` | Virtual obstacles | - -### Output Topics (Status) - -| Topic | Type | Description | -|-------|------|-------------| -| `/state_estimation` | `nav_msgs/Odometry` | Robot pose from SLAM | -| `/registered_scan` | `sensor_msgs/PointCloud2` | Aligned lidar point cloud | -| `/terrain_map` | `sensor_msgs/PointCloud2` | Local terrain map | -| `/terrain_map_ext` | `sensor_msgs/PointCloud2` | Extended terrain map | -| `/path` | `nav_msgs/Path` | Local path being followed | -| `/cmd_vel` | `geometry_msgs/Twist` | Velocity commands to motors | -| `/goal_reached` | `std_msgs/Bool` | True when goal reached, false when cancelled/new goal | - -### Map Topics - -| Topic | Type | Description | -|-------|------|-------------| -| `/overall_map` | `sensor_msgs/PointCloud2` | Global map (only in sim)| -| `/registered_scan` | `sensor_msgs/PointCloud2` | Current scan in map frame | -| `/terrain_map` | `sensor_msgs/PointCloud2` | Local obstacle map | - -## Usage Examples - -### Send Goal -```bash -ros2 topic pub /way_point geometry_msgs/msg/PointStamped "{ -  header: {frame_id: 'map'}, -  point: {x: 5.0, y: 3.0, z: 0.0} -}" --once -``` - -### Cancel Goal -```bash -ros2 topic pub /cancel_goal std_msgs/msg/Bool "data: true" --once -``` - -### Monitor Robot State -```bash -ros2 topic echo /state_estimation -``` - -## Configuration Parameters - -### Vehicle Parameters (`localPlanner`) - -| Parameter | Default | Description | -|-----------|---------|-------------| -| `vehicleLength` | 0.5 | Robot length (m) | -| `vehicleWidth` | 0.5 | Robot width (m) | -| `maxSpeed` | 0.875 | Maximum speed (m/s) | -| `autonomySpeed` | 0.875 | Autonomous mode speed (m/s) | - -### Goal Tolerance Parameters - -| 
Parameter | Default | Description | -|-----------|---------|-------------| -| `goalReachedThreshold` | 0.3-0.5 | Distance to consider goal reached (m) | -| `goalClearRange` | 0.35-0.6 | Extra clearance around goal (m) | -| `goalBehindRange` | 0.35-0.8 | Stop pursuing if goal behind within this distance (m) | -| `omniDirGoalThre` | 1.0 | Distance for omnidirectional approach (m) | - -### Obstacle Avoidance - -| Parameter | Default | Description | -|-----------|---------|-------------| -| `obstacleHeightThre` | 0.1-0.2 | Height threshold for obstacles (m) | -| `adjacentRange` | 3.5 | Sensor range for planning (m) | -| `minRelZ` | -0.4 | Minimum relative height to consider (m) | -| `maxRelZ` | 0.3 | Maximum relative height to consider (m) | - -### Path Planning - -| Parameter | Default | Description | -|-----------|---------|-------------| -| `pathScale` | 0.875 | Path resolution scale | -| `minPathScale` | 0.675 | Minimum path scale when blocked | -| `minPathRange` | 0.8 | Minimum planning range (m) | -| `dirThre` | 90.0 | Direction threshold (degrees) | - -### Control Parameters (`pathFollower`) - -| Parameter | Default | Description | -|-----------|---------|-------------| -| `lookAheadDis` | 0.5 | Look-ahead distance (m) | -| `maxAccel` | 2.0 | Maximum acceleration (m/s²) | -| `slowDwnDisThre` | 0.875 | Slow down distance threshold (m) | - -### SLAM Blind Zones (`feature_extraction_node`) - -| Parameter | Mecanum | Description | -|-----------|---------|-------------| -| `blindFront` | 0.1 | Front blind zone (m) | -| `blindBack` | -0.2 | Back blind zone (m) | -| `blindLeft` | 0.1 | Left blind zone (m) | -| `blindRight` | -0.1 | Right blind zone (m) | -| `blindDiskRadius` | 0.4 | Cylindrical blind zone radius (m) | - -## Operating Modes - -### Mode Control -- **Joystick L2**: Hold for autonomy mode -- **Joystick R2**: Hold to disable obstacle checking - -### Speed Control -The robot automatically adjusts speed based on: -1. Obstacle proximity -2. Path complexity -3. 
Goal distance - -## Tuning Guide - -### For Tighter Navigation -- Decrease `goalReachedThreshold` (e.g., 0.2) -- Decrease `goalClearRange` (e.g., 0.3) -- Decrease `vehicleLength/Width` slightly - -### For Smoother Navigation -- Increase `goalReachedThreshold` (e.g., 0.5) -- Increase `lookAheadDis` (e.g., 0.7) -- Decrease `maxAccel` (e.g., 1.5) - -### For Aggressive Obstacle Avoidance -- Increase `obstacleHeightThre` (e.g., 0.15) -- Increase `adjacentRange` (e.g., 4.0) -- Increase blind zone parameters - -## Common Issues - -### Robot Oscillates at Goal -- Increase `goalReachedThreshold` -- Increase `goalBehindRange` - -### Robot Stops Too Far from Goal -- Decrease `goalReachedThreshold` -- Decrease `goalClearRange` - -### Robot Hits Low Obstacles -- Decrease `obstacleHeightThre` -- Adjust `minRelZ` to include lower points - -## SLAM Configuration - -### Localization Mode -Set in `livox_mid360.yaml`: -```yaml -local_mode: true -init_x: 0.0 -init_y: 0.0 -init_yaw: 0.0 -``` - -### Mapping Performance -```yaml -mapping_line_resolution: 0.1 # Decrease for higher quality -mapping_plane_resolution: 0.2 # Decrease for higher quality -max_iterations: 5 # Increase for better accuracy -``` diff --git a/docs/old/running_without_devcontainer.md b/docs/old/running_without_devcontainer.md deleted file mode 100644 index d06785e359..0000000000 --- a/docs/old/running_without_devcontainer.md +++ /dev/null @@ -1,21 +0,0 @@ -install nix, - -https://nixos.wiki/wiki/Nix_Installation_Guide -```sh -sudo install -d -m755 -o $(id -u) -g $(id -g) /nix -curl -L https://nixos.org/nix/install | sh -``` - -install direnv -https://direnv.net/ -```sh -apt-get install direnv -echo 'eval "$(direnv hook bash)"' >> ~/.bashrc -``` - -allow direnv in dimos will take a bit to pull the packages, -from that point on your env is standardized -```sh -cd dimos -direnv allow -``` diff --git a/docs/old/testing_stream_reply.md b/docs/old/testing_stream_reply.md deleted file mode 100644 index 
e3189bb5e8..0000000000 --- a/docs/old/testing_stream_reply.md +++ /dev/null @@ -1,174 +0,0 @@ -# Sensor Replay & Storage Toolkit - -A lightweight framework for **recording, storing, and replaying binary data streams for automated tests**. It keeps your repository small (data lives in Git LFS) while giving you Python‑first ergonomics for working with RxPY streams, point‑clouds, videos, command logs—anything you can pickle. - ---- - -## 1 At a Glance - -| Need | One liner | -| ------------------------------ | ------------------------------------------------------------- | -| **Iterate over every message** | `SensorReplay("raw_odometry_rotate_walk").iterate(print)` | -| **RxPY stream for piping** | `SensorReplay("raw_odometry_rotate_walk").stream().pipe(...)` | -| **Throttle replay rate** | `SensorReplay("raw_odometry_rotate_walk").stream(rate_hz=10)` | -| **Raw path to a blob/dir** | `path = testData("raw_odometry_rotate_walk")` | -| **Store a new stream** | see [`SensorStorage`](#5-storing-new-streams) | - -> If the requested blob is missing locally, it is transparently downloaded from Git LFS, extracted to `tests/data//`, and cached for subsequent runs. - ---- - -## 2 Goals - -* **Zero setup for CI & collaborators** – data is fetched on demand. -* **No repo bloat** – binaries live in Git LFS; the working tree stays trim. -* **Symmetric API** – `SensorReplay` ↔︎ `SensorStorage`; same name, different direction. -* **Format agnostic** – replay *anything* you can pickle (protobuf, numpy, JPEG, …). 
-**Data type agnostic** – with `testData("raw_odometry_rotate_walk")` you get a `Path` object back; it can point to a raw video file, a whole codebase, an ML model, etc. - - ---- - -## 3 Replaying Data - -### 3.1 Iterating Messages - -```python -from sensor_tools import SensorReplay - -# Print every stored Odometry message -SensorReplay(name="raw_odometry_rotate_walk").iterate(print) -``` - -### 3.2 RxPY Streaming - -```python -import pytest -from rx import operators as ops -from operator import sub, add -from dimos.utils.testing import SensorReplay, SensorStorage -from dimos.robot.unitree_webrtc.type.odometry import Odometry - -# Compute total yaw rotation (radians) - -total_rad = ( -    SensorReplay("raw_odometry_rotate_walk", autocast=Odometry.from_msg) -    .stream() -    .pipe( -        ops.map(lambda odom: odom.rot.z), -        ops.pairwise(),   # [1,2,3,4] -> [[1,2], [2,3], [3,4]] -        ops.starmap(sub), # [sub(1,2), sub(2,3), sub(3,4)] -        ops.reduce(add), -    ) -    .run() -) - -assert total_rad == pytest.approx(4.05, abs=0.01) -``` - -### 3.3 Lidar Mapping Example (200MB blob) - -```python -from dimos.utils.testing import SensorReplay, SensorStorage -from dimos.robot.unitree_webrtc.type.map import Map - -lidar_stream = SensorReplay("office_lidar", autocast=LidarMessage.from_msg) -map_ = Map(voxel_size=0.5) - -# Blocks until the stream is consumed -map_.consume(lidar_stream.stream()).run() - -assert map_.costmap.grid.shape == (404, 276) -``` - ---- - -## 4 Low Level Access - -If you want complete control, call **`testData(name)`** to get a `Path` to the extracted file or directory — no pickling assumptions: - -```python -absolute_path: Path = testData("some_name") -``` - -Do whatever you like: open a video file, load a model checkpoint, etc. - ---- - -## 5 Storing New Streams - -1. **Write a test marked `@pytest.mark.tool`** so CI skips it by default. -2. Use `SensorStorage` to persist the stream into `tests/data//*.pickle`. 
-```python -@pytest.mark.tool -def test_store_odometry_stream(): -    load_dotenv() - -    robot = UnitreeGo2(ip=os.getenv("ROBOT_IP"), mode="ai") -    robot.standup() - -    storage = SensorStorage("raw_odometry_rotate_walk2") -    storage.save_stream(robot.raw_odom_stream())  # ← records until interrupted - -    try: -        while True: -            time.sleep(0.1) -    except KeyboardInterrupt: -        robot.liedown() -``` - -### 5.1 Behind the Scenes - -* Any new file/dir under `tests/data/` is treated as a **data blob**. -* `./bin/lfs_push` compresses it into `tests/data/.lfs/.tar.gz` *and* uploads it to Git LFS. -* Only the `.lfs/` archive is committed; raw binaries remain `.gitignored`. - ---- - -## 6 Storing Arbitrary Binary Data - -Just copy to `tests/data/whatever` -* `./bin/lfs_push` compresses it into `tests/data/.lfs/.tar.gz` *and* uploads it to Git LFS. - ---- - -## 7 Developer Workflow Checklist - -1. **Drop new data** into `tests/data/`. -2. Run your new tests that use SensorReplay or testData calls, and make sure everything works -3. Run `./bin/lfs_push` (or let the pre-commit hook nag you). -4. Commit the resulting `tests/data/.lfs/.tar.gz`. -5. Optional - you can delete `tests/data/your_new_stuff` and re-run the test to ensure it gets downloaded from LFS correctly -6. Push/PR - -### 7.1 Pre-commit Setup (optional but recommended) - -```sh -sudo apt install pre-commit -pre-commit install # inside repo root -``` - -Now each commit checks formatting, linting, *and* whether you forgot to push new blobs: - -``` -$ echo test > tests/data/foo.txt -$ git add tests/data/foo.txt && git commit -m "demo" -LFS data ......................................................... 
Failed -✗ New test data detected at /tests/data: - foo.txt -Either delete or run ./bin/lfs_push -``` - ---- - -## 8 Future Work - -- A replay rate that mirrors the **original message timestamps** can be implemented downstream (e.g., an RxPY operator) -- Likely this same system should be used for production binary data delivery as well (Models etc) - ---- - -## 9 Existing Examples - -* `dimos/robot/unitree_webrtc/type/test_odometry.py` -* `dimos/robot/unitree_webrtc/type/test_map.py` diff --git a/docs/package_usage.md b/docs/package_usage.md deleted file mode 100644 index 328708122e..0000000000 --- a/docs/package_usage.md +++ /dev/null @@ -1,62 +0,0 @@ -# Package Usage - -## With `uv` - -Init your repo if not already done: - -```bash -uv init -``` - -Install: - -```bash -uv add dimos[base,dev,unitree] -``` - -Test the Unitree Go2 robot in the simulator: - -```bash -uv run dimos --simulation run unitree-go2 -``` - -Run your actual robot: - -```bash -uv run dimos --robot-ip=192.168.X.XXX run unitree-go2 -``` - -### Without installing - -With `uv` you can run tools without having to explicitly install: - -```bash -uvx --from dimos[base,unitree] dimos --robot-ip=192.168.X.XXX run unitree-go2 -``` - -## With `pip` - -Create an environment if not already done: - -```bash -python -m venv .venv -. .venv/bin/activate -``` - -Install: - -```bash -pip install dimos[base,dev,unitree] -``` - -Test the Unitree Go2 robot in the simulator: - -```bash -dimos --simulation run unitree-go2 -``` - -Run your actual robot: - -```bash -dimos --robot-ip=192.168.X.XXX run unitree-go2 -```