Merged
@@ -450,7 +450,7 @@ def test_performance_timing() -> None:

# Check that larger maps take more time (expected behavior)
for result in results:
assert result["detect_time"] < 3.0, f"Detection too slow: {result['detect_time']}s"
assert result["goal_time"] < 1.5, f"Goal selection too slow: {result['goal_time']}s"

print("\nPerformance test passed - all operations completed within time limits")
18 changes: 11 additions & 7 deletions docs/api/sensor_streams/advanced_streams.md
@@ -8,8 +8,8 @@ In robotics, we deal with hardware that produces data at its own pace - a camera

**The problem:** A fast producer can overwhelm a slow consumer, causing memory buildup or dropped frames. We might have multiple subscribers to the same hardware that operate at different speeds.


<details><summary>Pikchr</summary>

```pikchr fold output=assets/backpressure.svg
color = white
@@ -24,10 +24,11 @@ Slow: box "ML Model" "2 fps" rad 5px fit wid 130% ht 130%
text "items pile up!" at (Queue.x, Queue.y - 0.45in)
```

<!--Result:-->
![output](assets/backpressure.svg)

</details>

**The solution:** The `backpressure()` wrapper handles this by:

@@ -74,8 +75,8 @@ slow got 7 items (skipped 13)

### How it works


<details><summary>Pikchr</summary>

```pikchr fold output=assets/backpressure_solution.svg
color = white
@@ -93,11 +94,11 @@ arrow
Slow: box "Slow Sub" rad 5px fit wid 170% ht 170%
```

<!--Result:-->
![output](assets/backpressure_solution.svg)

</details>

The `LATEST` strategy means: when the slow subscriber finishes processing, it gets whatever the most recent value is, skipping any values that arrived while it was busy.
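The latest-wins behavior can be sketched with nothing but the standard library. `LatestSlot` below is a made-up name for illustration — it is not part of this codebase — but it captures the core of the `LATEST` strategy: a one-item mailbox where new values overwrite the old instead of queueing.

```python
import threading
import time

class LatestSlot:
    """Illustrative one-item mailbox: publishing overwrites, never queues."""

    def __init__(self):
        self._value = None
        self._fresh = threading.Event()

    def publish(self, value):
        self._value = value          # overwrite whatever is waiting
        self._fresh.set()

    def take(self, timeout=1.0):
        self._fresh.wait(timeout)    # block until a value has arrived
        self._fresh.clear()
        return self._value           # always the most recent value

slot = LatestSlot()

def producer():
    for i in range(20):              # fast producer: 20 values in ~0.2 s
        slot.publish(i)
        time.sleep(0.01)

t = threading.Thread(target=producer)
t.start()
received = []
for _ in range(5):                   # slow consumer: ~0.05 s per item
    received.append(slot.take())
    time.sleep(0.05)
t.join()
print(f"slow consumer took {len(received)} values; latest-wins skipped the rest")
```

The consumer never sees a backlog: each `take()` returns whatever was published most recently, so intermediate values are simply dropped, which is exactly the "items pile up" failure mode the wrapper avoids.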

### Usage in modules
@@ -123,6 +124,9 @@ class MLModel(Module):

## Getting Values Synchronously

Sometimes you don't want a stream - you just want to call a function and get the latest value. We provide two approaches:
3 changes: 2 additions & 1 deletion docs/api/sensor_streams/reactivex.md
@@ -226,10 +226,11 @@ arrow right 0.3in
Handler: box "callback" rad 5px fit wid 170% ht 170%
```

<!--Result:-->
![output](assets/observable_flow.svg)

</details>

**Key property: Observables are lazy.** Nothing happens until you call `.subscribe()`. This means you can build up complex pipelines without any work being done, then start the flow when ready.

74 changes: 56 additions & 18 deletions docs/data.md
@@ -21,9 +21,6 @@ Exists: True

## How It Works


<details><summary>Pikchr</summary>

```pikchr fold output=assets/get_data_flow.svg
@@ -52,8 +49,6 @@ F: box "Return path" rad 5px fit wid 170% ht 170%
<!--Result:-->
![output](assets/get_data_flow.svg)

</details>

1. Checks if `data/{name}` already exists locally
2. If missing, pulls the `.tar.gz` archive from Git LFS
3. Decompresses the archive to `data/`
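The steps above can be sketched as a small helper. This is an illustrative reimplementation of the caching pattern, not the real `dimos.utils.data.get_data`; the exact `git lfs pull --include` invocation is an assumption.

```python
import subprocess
import tarfile
from pathlib import Path

DATA_ROOT = Path("data")

def get_data_sketch(name: str) -> Path:
    """Illustrative sketch of the fetch-and-extract flow (not the real API)."""
    target = DATA_ROOT / name
    if target.exists():                          # 1. already extracted locally
        return target
    archive = DATA_ROOT / ".lfs" / f"{name}.tar.gz"
    if not archive.exists():
        # 2. pull the archive bytes from Git LFS (assumed invocation)
        subprocess.run(
            ["git", "lfs", "pull", "--include", str(archive)], check=True
        )
    with tarfile.open(archive) as tar:           # 3. decompress into data/
        tar.extractall(DATA_ROOT)
    return target
```

The local-existence check is what makes repeated calls cheap: only the first call for a given `name` touches the network.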
@@ -78,45 +73,88 @@ Image shape: (771, 1024, 3)

### Loading Model Checkpoints

```python
from dimos.utils.data import get_data

model_dir = get_data("models_yolo")
checkpoint = model_dir / "yolo11n.pt"
print(f"Checkpoint: {checkpoint.name} ({checkpoint.stat().st_size // 1024}KB)")
```

<!--Result:-->
```
Checkpoint: yolo11n.pt (5482KB)
```

### Loading Recorded Data for Replay

```python
from dimos.utils.data import get_data
from dimos.utils.testing.replay import TimedSensorReplay

data_dir = get_data("unitree_office_walk")
replay = TimedSensorReplay(data_dir / "lidar")
print(f"Replay {replay} loaded from: {data_dir.name}")
print(replay.find_closest_seek(1))
```

<!--Result:-->
```
Replay <dimos.utils.testing.replay.TimedSensorReplay object at 0x7fdc24c708f0> loaded from: unitree_office_walk
{'type': 'msg', 'topic': 'rt/utlidar/voxel_map_compressed', 'data': {'stamp': 1751591000.0, 'frame_id': 'odom', 'resolution': 0.05, 'src_size': 77824, 'origin': [-3.625, -3.275, -0.575], 'width': [128, 128, 38], 'data': {'points': array([[ 2.725, -1.025, -0.575],
[ 2.525, -0.275, -0.575],
[ 2.575, -0.275, -0.575],
...,
[ 2.675, -0.525, 0.775],
[ 2.375, 1.175, 0.775],
[ 2.325, 1.225, 0.775]], shape=(22730, 3))}}}
```

### Loading Point Clouds

```python
from dimos.utils.data import get_data
from dimos.mapping.pointclouds.util import read_pointcloud

pointcloud = read_pointcloud(get_data("apartment") / "sum.ply")
print(f"Loaded pointcloud with {len(pointcloud.points)} points")
```

<!--Result:-->
```
Loaded pointcloud with 63672 points
```

## Data Directory Structure

Data files live in `data/` at the repo root. Large files are stored in `data/.lfs/` as `.tar.gz` archives tracked by Git LFS.

<details><summary>Diagon</summary>

```diagon fold mode=Tree
data/
cafe.jpg
apartment/
sum.ply
.lfs/
cafe.jpg.tar.gz
apartment.tar.gz
```

</details>

<!--Result:-->
```
data/
├──cafe.jpg
├──apartment/
│  └──sum.ply
└──.lfs/
   ├──cafe.jpg.tar.gz
   └──apartment.tar.gz
```


## Adding New Data

### Small Files (< 1MB)