diff --git a/dimos/navigation/frontier_exploration/test_wavefront_frontier_goal_selector.py b/dimos/navigation/frontier_exploration/test_wavefront_frontier_goal_selector.py index 7d8c0adf4c..aca154a6dd 100644 --- a/dimos/navigation/frontier_exploration/test_wavefront_frontier_goal_selector.py +++ b/dimos/navigation/frontier_exploration/test_wavefront_frontier_goal_selector.py @@ -450,7 +450,7 @@ def test_performance_timing() -> None: # Check that larger maps take more time (expected behavior) for result in results: - assert result["detect_time"] < 2.0, f"Detection too slow: {result['detect_time']}s" + assert result["detect_time"] < 3.0, f"Detection too slow: {result['detect_time']}s" assert result["goal_time"] < 1.5, f"Goal selection too slow: {result['goal_time']}s" print("\nPerformance test passed - all operations completed within time limits") diff --git a/docs/api/sensor_streams/advanced_streams.md b/docs/api/sensor_streams/advanced_streams.md index 02015f3329..c7db7c98bd 100644 --- a/docs/api/sensor_streams/advanced_streams.md +++ b/docs/api/sensor_streams/advanced_streams.md @@ -8,8 +8,8 @@ In robotics, we deal with hardware that produces data at its own pace - a camera **The problem:** A fast producer can overwhelm a slow consumer, causing memory buildup or dropped frames. We might have multiple subscribers to the same hardware that operate at different speeds. -
-<summary>diagram source</summary>
+
+<details>
+<summary>Pikchr</summary>

```pikchr fold output=assets/backpressure.svg
color = white
@@ -24,10 +24,11 @@ Slow: box "ML Model" "2 fps" rad 5px fit wid 130% ht 130%
text "items pile up!" at (Queue.x, Queue.y - 0.45in)
```
+</details>
+

![output](assets/backpressure.svg)

-</details>
**The solution:** The `backpressure()` wrapper handles this by:

@@ -74,8 +75,8 @@ slow got 7 items (skipped 13)

### How it works

-<details>
-<summary>diagram source</summary>
+
+<details>
+<summary>Pikchr</summary>

```pikchr fold output=assets/backpressure_solution.svg
color = white
@@ -93,11 +94,11 @@ arrow
Slow: box "Slow Sub" rad 5px fit wid 170% ht 170%
```
+</details>
+

![output](assets/backpressure_solution.svg)

-</details>
- The `LATEST` strategy means: when the slow subscriber finishes processing, it gets whatever the most recent value is, skipping any values that arrived while it was busy. ### Usage in modules @@ -123,6 +124,9 @@ class MLModel(Module): + + + ## Getting Values Synchronously Sometimes you don't want a stream - you just want to call a function and get the latest value. We provide two approaches: diff --git a/docs/api/sensor_streams/reactivex.md b/docs/api/sensor_streams/reactivex.md index e8f39afee5..1dcbdfe046 100644 --- a/docs/api/sensor_streams/reactivex.md +++ b/docs/api/sensor_streams/reactivex.md @@ -226,10 +226,11 @@ arrow right 0.3in Handler: box "callback" rad 5px fit wid 170% ht 170% ``` + + ![output](assets/observable_flow.svg) - **Key property: Observables are lazy.** Nothing happens until you call `.subscribe()`. This means you can build up complex pipelines without any work being done, then start the flow when ready. diff --git a/docs/data.md b/docs/data.md index 802e1b4ec4..a30a0e3328 100644 --- a/docs/data.md +++ b/docs/data.md @@ -21,9 +21,6 @@ Exists: True ## How It Works -
-<summary>diagram source</summary>
-
<summary>Pikchr</summary>

```pikchr fold output=assets/get_data_flow.svg
@@ -52,8 +49,6 @@ F: box "Return path" rad 5px fit wid 170% ht 170%
```

![output](assets/get_data_flow.svg)

-</details>
- 1. Checks if `data/{name}` already exists locally 2. If missing, pulls the `.tar.gz` archive from Git LFS 3. Decompresses the archive to `data/` @@ -78,45 +73,88 @@ Image shape: (771, 1024, 3) ### Loading Model Checkpoints -```python skip +```python from dimos.utils.data import get_data -model_dir = get_data("models_mobileclip") -checkpoint = model_dir / "mobileclip2_s0.pt" +model_dir = get_data("models_yolo") +checkpoint = model_dir / "yolo11n.pt" +print(f"Checkpoint: {checkpoint.name} ({checkpoint.stat().st_size // 1024}KB)") +``` + + +``` +Checkpoint: yolo11n.pt (5482KB) ``` ### Loading Recorded Data for Replay -```python skip +```python from dimos.utils.data import get_data -from dimos.utils.testing.replay import Replay +from dimos.utils.testing.replay import TimedSensorReplay data_dir = get_data("unitree_office_walk") -replay = Replay(data_dir) +replay = TimedSensorReplay(data_dir / "lidar") +print(f"Replay {replay} loaded from: {data_dir.name}") +print(replay.find_closest_seek(1)) +``` + + +``` +Replay loaded from: unitree_office_walk +{'type': 'msg', 'topic': 'rt/utlidar/voxel_map_compressed', 'data': {'stamp': 1751591000.0, 'frame_id': 'odom', 'resolution': 0.05, 'src_size': 77824, 'origin': [-3.625, -3.275, -0.575], 'width': [128, 128, 38], 'data': {'points': array([[ 2.725, -1.025, -0.575], + [ 2.525, -0.275, -0.575], + [ 2.575, -0.275, -0.575], + ..., + [ 2.675, -0.525, 0.775], + [ 2.375, 1.175, 0.775], + [ 2.325, 1.225, 0.775]], shape=(22730, 3))}}} ``` ### Loading Point Clouds -```python skip +```python from dimos.utils.data import get_data -from dimos.mapping.pointclouds import read_pointcloud +from dimos.mapping.pointclouds.util import read_pointcloud pointcloud = read_pointcloud(get_data("apartment") / "sum.ply") +print(f"Loaded pointcloud with {len(pointcloud.points)} points") +``` + + +``` +Loaded pointcloud with 63672 points ``` ## Data Directory Structure Data files live in `data/` at the repo root. 
Large files are stored in `data/.lfs/` as `.tar.gz` archives tracked by Git LFS.
+<details>
+<summary>Diagon</summary>
+
+```diagon fold mode=Tree
+data/
+  cafe.jpg
+  apartment/
+    sum.ply
+  .lfs/
+    cafe.jpg.tar.gz
+    apartment.tar.gz
+```
+</details>
+
+ + ``` data/ -├── cafe.jpg # Small files: committed directly -├── apartment/ # Directories: extracted from LFS -│ └── sum.ply -└── .lfs/ - └── apartment.tar.gz # LFS-tracked archive + ├──cafe.jpg + ├──apartment/ + │ └──sum.ply + └──.lfs/ + ├──cafe.jpg.tar.gz + └──apartment.tar.gz ``` + ## Adding New Data ### Small Files (< 1MB)