Merged

53 commits
fc447fb
Merge pull request #697 - Release v0.0.5
spomichter Oct 28, 2025
8fcbfa2
provide alternative organization
jeff-hykin Nov 14, 2025
a38b6f6
clean up mistakes
jeff-hykin Nov 14, 2025
bbc5416
get lcm working on macos, needed 25.05 for old MacOS
jeff-hykin Nov 14, 2025
0f99eb7
fix pytest command in flake
jeff-hykin Nov 14, 2025
b9d1155
fix SharedMemory on MacOS
jeff-hykin Nov 14, 2025
69ca925
modify check_multicast, and check_buffers for MacOS (could use improv…
jeff-hykin Nov 14, 2025
4d5b07e
disable most of the multicast and check_buffer tests for macos (could…
jeff-hykin Nov 14, 2025
32749e8
switch back to unstable
jeff-hykin Nov 14, 2025
57bb639
grab changes from MacOS PR
jeff-hykin Nov 14, 2025
e1a74b5
clean up
jeff-hykin Nov 14, 2025
ffe8f1b
revert some
jeff-hykin Nov 17, 2025
a39b93c
Merge branch 'mujoco-process' into jeff_macos
jeff-hykin Nov 17, 2025
58a4200
patch lcm service to get running on macOS
jeff-hykin Nov 17, 2025
0c1af71
flake update
jeff-hykin Nov 17, 2025
7cb5e22
fix lcm macos
jeff-hykin Nov 17, 2025
29e0800
use mjpython so works on macos
jeff-hykin Nov 17, 2025
bbba92f
on mujoco fail, show stderr
jeff-hykin Nov 17, 2025
633c1e5
standardize platform checks
jeff-hykin Nov 17, 2025
ef10293
make ruff happy
jeff-hykin Nov 17, 2025
c4ac2db
revert
jeff-hykin Nov 17, 2025
ce48207
revert more
jeff-hykin Nov 17, 2025
927e7d9
clean up opened file descriptors
jeff-hykin Nov 17, 2025
469b6c4
fix Cuda warning on macos, default to CoreMLExecutionProvider before …
jeff-hykin Nov 17, 2025
fde904e
-
jeff-hykin Nov 17, 2025
766aeb3
have pytorch take advantage of MacOS Metal
jeff-hykin Nov 17, 2025
5e80f7a
automate nix develop init
jeff-hykin Nov 20, 2025
6de81c4
-
jeff-hykin Nov 20, 2025
7fcb09f
-
jeff-hykin Nov 20, 2025
83d0046
fix gpu check mistake
jeff-hykin Nov 20, 2025
cf1fd15
fix tests
jeff-hykin Nov 20, 2025
c027554
add example command
jeff-hykin Nov 20, 2025
5d56bb6
add saftey check
jeff-hykin Nov 20, 2025
cf2c844
-
jeff-hykin Nov 20, 2025
bd86cb0
allow for 224.0.0/4
paul-nechifor Nov 25, 2025
d86feb7
always use shm for large things
paul-nechifor Nov 26, 2025
f497f3e
Merge branch 'dev' of https://github.com/dimensionalOS/dimos
jeff-hykin Nov 28, 2025
f020c05
Merge branch 'dev' into jeff_macos-testing
paul-nechifor Nov 29, 2025
43e37aa
dry
paul-nechifor Nov 29, 2025
993d98a
CI code cleanup
paul-nechifor Nov 29, 2025
3e4d8d0
Update dimos/agents/memory/chroma_impl.py
paul-nechifor Nov 29, 2025
b7fc9a9
types
paul-nechifor Nov 29, 2025
970a31d
fix tests
paul-nechifor Nov 29, 2025
de5f0a6
fix "device not defined"
jeff-hykin Nov 29, 2025
a6edb70
cleanup logic
jeff-hykin Nov 29, 2025
0e6142f
Merge branch 'main' into jeff_macos
jeff-hykin Nov 29, 2025
98ab619
Merge branch 'jeff_macos-testing' of https://github.com/dimensionalOS…
jeff-hykin Nov 29, 2025
db39aaa
Merge branch 'dev' into jeff_macos
paul-nechifor Dec 3, 2025
efb1db9
disable bitsandbytes on macos
paul-nechifor Dec 4, 2025
aadebfa
Merge remote-tracking branch 'upstream/dev' into jeff_macos
jeff-hykin Dec 9, 2025
040a706
fixup readme
jeff-hykin Dec 9, 2025
f1862e8
update xome to support --ignore-environment
jeff-hykin Dec 9, 2025
73b9887
fix macos old version check on linux
jeff-hykin Dec 9, 2025
30 changes: 23 additions & 7 deletions README.md
We are shipping a first look at the DIMOS x Unitree Go2 integration, allowing fo
- **DimOS Interface / Development Tools**
- Local development interface to control your robot, orchestrate agents, visualize camera/lidar streams, and debug your dimensional agentive application.

## MacOS Installation

```sh
# Install Nix
curl --proto '=https' --tlsv1.2 -sSf -L https://install.determinate.systems/nix | sh -s -- install

# clone the repository
git clone --branch dev --single-branch https://github.com/dimensionalOS/dimos.git

# setup the environment (follow the prompts after nix develop)
cd dimos
nix develop

# You should be able to follow the instructions below as well for a more manual installation
```

---
## Python Installation
Tested on Ubuntu 22.04/24.04
@@ -83,10 +99,10 @@ pip install torch==2.0.1 torchvision torchaudio --index-url https://download.pyt
#### Install dependencies
```bash
# CPU only (recommended to attempt first)
pip install -e .[cpu,dev]
pip install -e '.[cpu,dev]'

# CUDA install
pip install -e .[cuda,dev]
pip install -e '.[cuda,dev]'

# Copy and configure environment variables
cp default.env .env
@@ -99,27 +115,27 @@ pytest -s dimos/

#### Test Dimensional with a replay UnitreeGo2 stream (no robot required)
```bash
CONNECTION_TYPE=replay python dimos/robot/unitree_webrtc/unitree_go2.py
dimos --replay run unitree-go2
```

#### Test Dimensional with a simulated UnitreeGo2 in MuJoCo (no robot required)
```bash
pip install -e .[sim]
pip install -e '.[sim]'
export DISPLAY=:1 # Or DISPLAY=:0 if getting GLFW/OpenGL X11 errors
CONNECTION_TYPE=mujoco python dimos/robot/unitree_webrtc/unitree_go2.py
dimos --simulation run unitree-go2
```

#### Test Dimensional with a real UnitreeGo2 over WebRTC
```bash
export ROBOT_IP=192.168.X.XXX # Add the robot IP address
python dimos/robot/unitree_webrtc/unitree_go2.py
dimos run unitree-go2
```

#### Test Dimensional with a real UnitreeGo2 running Agents
*OpenAI / Alibaba keys required*
```bash
export ROBOT_IP=192.168.X.XXX # Add the robot IP address
python dimos/robot/unitree_webrtc/run_agents2.py
dimos run unitree-go2-agentic
```
---

16 changes: 12 additions & 4 deletions dimos/agents/memory/chroma_impl.py
@@ -145,10 +145,18 @@ def __init__(
def create(self) -> None:
"""Create local embedding model and initialize the ChromaDB client."""
# Load the sentence transformer model
# Use CUDA if available, otherwise fall back to CPU
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Using device: {device}")
self.model = SentenceTransformer(self.model_name, device=device) # type: ignore[name-defined]

# Use GPU if available, otherwise fall back to CPU
if torch.cuda.is_available():
self.device = "cuda"
# MacOS Metal performance shaders
elif torch.backends.mps.is_available() and torch.backends.mps.is_built():
self.device = "mps"
else:
self.device = "cpu"

print(f"Using device: {self.device}")
self.model = SentenceTransformer(self.model_name, device=self.device) # type: ignore[name-defined]

# Create a custom embedding class that implements the embed_query method
class SentenceTransformerEmbeddings:
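This cuda, then mps, then cpu selection is repeated near-verbatim in several files touched by this PR (chroma_impl.py, clip.py, mobileclip.py, treid.py, moondream.py, dump_clip_features.py). A minimal sketch of factoring it into one helper; `pick_device` is an illustrative name, not something the PR adds, and the import is guarded so the sketch runs even where torch is not installed:

```python
def pick_device() -> str:
    """Return the best available torch device string: cuda, mps, or cpu."""
    try:
        import torch
    except ImportError:
        return "cpu"  # no torch at all, nothing to accelerate
    if torch.cuda.is_available():
        return "cuda"
    # MacOS Metal performance shaders (torch.backends.mps exists since torch 1.12)
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available() and mps.is_built():
        return "mps"
    return "cpu"

print(pick_device())
```

Each call site would then reduce to `self.device = pick_device()`.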
7 changes: 7 additions & 0 deletions dimos/agents/memory/image_embedding.py
@@ -22,6 +22,7 @@
import base64
import io
import os
import sys

import cv2
import numpy as np
@@ -76,6 +77,12 @@ def _initialize_model(self): # type: ignore[no-untyped-def]
processor_id = "openai/clip-vit-base-patch32"

providers = ["CUDAExecutionProvider", "CPUExecutionProvider"]
if sys.platform == "darwin":
# 2025-11-17 12:36:47.877215 [W:onnxruntime:, helper.cc:82 IsInputSupported] CoreML does not support input dim > 16384. Input:text_model.embeddings.token_embedding.weight, shape: {49408,512}
# 2025-11-17 12:36:47.878496 [W:onnxruntime:, coreml_execution_provider.cc:107 GetCapability] CoreMLExecutionProvider::GetCapability, number of partitions supported by CoreML: 88 number of nodes in the graph: 1504 number of nodes supported by CoreML: 933
providers = ["CoreMLExecutionProvider"] + [
each for each in providers if each != "CUDAExecutionProvider"
]

self.model = ort.InferenceSession(str(model_id), providers=providers)

9 changes: 8 additions & 1 deletion dimos/models/Detic/tools/dump_clip_features.py
@@ -40,7 +40,14 @@
if args.use_underscore:
cat_names = [x.strip().replace("/ ", "/").replace(" ", "_") for x in cat_names]
print("cat_names", cat_names)
device = "cuda" if torch.cuda.is_available() else "cpu"
# Use GPU if available, otherwise fall back to CPU
if torch.cuda.is_available():
device = "cuda"
# MacOS Metal performance shaders
elif torch.backends.mps.is_available() and torch.backends.mps.is_built():
device = "mps"
else:
device = "cpu"

if args.prompt == "a":
sentences = ["a " + x for x in cat_names]
10 changes: 9 additions & 1 deletion dimos/models/embedding/clip.py
@@ -43,7 +43,15 @@ def __init__(
device: Device to run on (cuda/cpu), auto-detects if None
normalize: Whether to L2 normalize embeddings
"""
self.device = device or ("cuda" if torch.cuda.is_available() else "cpu")
# Use GPU if available, otherwise fall back to CPU
if torch.cuda.is_available():
self.device = "cuda"
# MacOS Metal performance shaders
elif torch.backends.mps.is_available() and torch.backends.mps.is_built():
self.device = "mps"
else:
self.device = "cpu"

self.normalize = normalize

# Load model and processor
2 changes: 1 addition & 1 deletion dimos/models/embedding/embedding_models_disabled_tests.py
@@ -298,7 +298,7 @@ def test_gpu_query_performance(embedding_model, test_image) -> None: # type: ig

assert len(results) == 5, "Should return top-5 results"
# All should be high similarity (same image, allow some variation for image preprocessing)
for idx, sim in results:
for _, sim in results:
Review comment (Member Author): (just making ruff happy)

assert sim > 0.90, f"Same images should have high similarity, got {sim}"


10 changes: 9 additions & 1 deletion dimos/models/embedding/mobileclip.py
@@ -51,7 +51,15 @@ def __init__(
"Install it with: pip install open-clip-torch"
)

self.device = device or ("cuda" if torch.cuda.is_available() else "cpu")
# Use GPU if available, otherwise fall back to CPU
if torch.cuda.is_available():
Review comment (Contributor): @mdaiter Apple metal support

self.device = "cuda"
# MacOS Metal performance shaders
elif torch.backends.mps.is_available() and torch.backends.mps.is_built():
self.device = "mps"
else:
self.device = "cpu"

self.normalize = normalize

# Load model
10 changes: 9 additions & 1 deletion dimos/models/embedding/treid.py
@@ -51,7 +51,15 @@ def __init__(
"torchreid is required for TorchReIDModel. Install it with: pip install torchreid"
)

self.device = device or ("cuda" if torch.cuda.is_available() else "cpu")
# Use GPU if available, otherwise fall back to CPU
if torch.cuda.is_available():
self.device = "cuda"
# MacOS Metal performance shaders
elif torch.backends.mps.is_available() and torch.backends.mps.is_built():
self.device = "mps"
else:
self.device = "cpu"

self.normalize = normalize

# Load model using torchreid's FeatureExtractor
10 changes: 9 additions & 1 deletion dimos/models/vl/moondream.py
@@ -23,7 +23,15 @@ def __init__(
dtype: torch.dtype = torch.bfloat16,
) -> None:
self._model_name = model_name
self._device = device or ("cuda" if torch.cuda.is_available() else "cpu")
# Use GPU if available, otherwise fall back to CPU
Review comment (Contributor): @jeff-hykin Has moondream been tested on metal

if torch.cuda.is_available():
self._device = "cuda"
# MacOS Metal performance shaders
elif torch.backends.mps.is_available() and torch.backends.mps.is_built():
self._device = "mps"
else:
self._device = "cpu"

self._dtype = dtype

@cached_property
9 changes: 7 additions & 2 deletions dimos/perception/segmentation/sam_2d_seg.py
@@ -31,7 +31,6 @@
plot_results,
)
from dimos.utils.data import get_data
from dimos.utils.gpu_utils import is_cuda_available
from dimos.utils.logging_config import setup_logger

logger = setup_logger()
@@ -48,14 +47,20 @@ def __init__(
use_rich_labeling: bool = False,
use_filtering: bool = True,
) -> None:
if is_cuda_available(): # type: ignore[no-untyped-call]
# Use GPU if available, otherwise fall back to CPU
if torch.cuda.is_available():
logger.info("Using CUDA for SAM 2d segmenter")
if hasattr(onnxruntime, "preload_dlls"): # Handles CUDA 11 / onnxruntime-gpu<=1.18
onnxruntime.preload_dlls(cuda=True, cudnn=True)
self.device = "cuda"
# MacOS Metal performance shaders
elif torch.backends.mps.is_available() and torch.backends.mps.is_built():
logger.info("Using Metal for SAM 2d segmenter")
self.device = "mps"
else:
logger.info("Using CPU for SAM 2d segmenter")
self.device = "cpu"

# Core components
self.model = FastSAM(get_data(model_path) / model_name)
self.use_tracker = use_tracker
5 changes: 3 additions & 2 deletions dimos/protocol/pubsub/shmpubsub.py
@@ -233,8 +233,9 @@ def _ensure_topic(self, topic: str) -> _TopicState:
cap = int(self.config.default_capacity)

def _names_for_topic(topic: str, capacity: int) -> tuple[str, str]:
# Python’s SharedMemory requires names without a leading '/'
h = hashlib.blake2b(f"{topic}:{capacity}".encode(), digest_size=12).hexdigest()
# Python's SharedMemory requires names without a leading '/'
# Use shorter digest to avoid macOS shared memory name length limits
h = hashlib.blake2b(f"{topic}:{capacity}".encode(), digest_size=8).hexdigest()
return f"psm_{h}_data", f"psm_{h}_ctrl"

data_name, ctrl_name = _names_for_topic(topic, cap)
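The digest change matters because macOS limits POSIX shared-memory names to 31 characters (PSHMNAMLEN): with digest_size=12 the generated names were 33 characters ("psm_" + 24 hex + "_data"), over the limit, while digest_size=8 yields 25. A self-contained sketch of the naming scheme that checks the lengths:

```python
import hashlib

def names_for_topic(topic: str, capacity: int) -> tuple[str, str]:
    """Derive shared-memory segment names; digest_size=8 gives 16 hex chars,
    so "psm_" + 16 + "_data" = 25 characters, under macOS's 31-char limit."""
    h = hashlib.blake2b(f"{topic}:{capacity}".encode(), digest_size=8).hexdigest()
    return f"psm_{h}_data", f"psm_{h}_ctrl"

data_name, ctrl_name = names_for_topic("camera/rgb", 1 << 20)
print(len(data_name), len(ctrl_name))  # 25 25
```

The names are deterministic in (topic, capacity), so publisher and subscriber can derive the same segment names independently; the shorter digest trades a little collision resistance for macOS compatibility.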