fix: macOS local inference DNS + oMLX provider + installer test fix #459

suh4s wants to merge 2 commits into NVIDIA:main
Conversation
On macOS (Docker Desktop), host.openshell.internal does not resolve inside sandbox containers because the network namespace DNS entry is not injected. This uses host.docker.internal on Darwin, which Docker Desktop resolves automatically. Also adds omlx-local as a local inference provider (port 8080) alongside vllm-local and ollama-local, enabling NemoClaw to use oMLX for fast MLX-native inference on Apple Silicon. Changes: - local-inference.js: platform-aware host gateway URL, consolidated provider port table, omlx-local support - onboard.js: omlx-local provider creation in onboard flow - local-inference.test.js: platform-aware test expectations, omlx tests Fixes NVIDIA#260 (partial — local inference DNS on macOS)
The installer tests create fake node/npm stubs and set PATH to use them. But spreading ...process.env leaks NVM_DIR/FNM_DIR, causing ensure_nvm_loaded() to source the real nvm.sh which puts the real node/npm back on PATH — overriding the test stubs. Set NVM_DIR and FNM_DIR to empty strings in all test spawn envs so the installer's version manager detection is neutralized. Root cause: tests only failed on machines with nvm installed.
📝 Walkthrough

The PR adds macOS support for local provider inference by detecting the platform at runtime and conditionally routing to the appropriate host gateway URL. It also introduces the omlx-local provider.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant Platform Detector
    participant Port Mapper
    participant Health Checker
    participant Container
    participant Host
    Client->>Platform Detector: Detect OS (macOS vs Linux)
    Platform Detector-->>Client: Return platform flag (IS_MACOS)
    Client->>Port Mapper: Get port for provider (vllm-local, ollama-local, omlx-local)
    Port Mapper-->>Client: Return provider port (e.g., 8000, 11434, 8080)
    Client->>Health Checker: Validate local provider health
    Health Checker->>Health Checker: Determine endpoint (/v1/models or /api/tags)
    Health Checker->>Host: Probe localhost:${port}/${endpoint}
    Host-->>Health Checker: Return status
    Health Checker-->>Client: Report health status
    alt Container Reachability Check
        Client->>Platform Detector: Check IS_MACOS flag
        alt macOS
            Platform Detector-->>Client: Use host.docker.internal
        else Linux/Other
            Platform Detector-->>Client: Use host.openshell.internal + --add-host flag
        end
    end
    Client->>Container: Execute docker with platform-specific hostname
    Container->>Host: Probe provider endpoint via conditional hostname
    Host-->>Container: Return response
    Container-->>Client: Validation complete
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
bin/lib/onboard.js (1)
947-950: ⚠️ Potential issue | 🟡 Minor — Missing `omlx-local` in the dashboard provider label mapping.

The `printDashboard` function maps provider keys to friendly labels but doesn't include `omlx-local`. If this provider is eventually selectable, the dashboard will display the raw key instead of "Local oMLX".

🔧 Proposed fix

```diff
 let providerLabel = provider;
 if (provider === "nvidia-nim") providerLabel = "NVIDIA Cloud API";
 else if (provider === "vllm-local") providerLabel = "Local vLLM";
 else if (provider === "ollama-local") providerLabel = "Local Ollama";
+else if (provider === "omlx-local") providerLabel = "Local oMLX";
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/lib/onboard.js` around lines 947 - 950, The provider label mapping in the printDashboard area currently maps known provider keys to friendly names but omits "omlx-local", so the raw key will be shown; add an additional branch to map provider === "omlx-local" to providerLabel = "Local oMLX" (adjacent to the existing checks that set providerLabel for "vllm-local" and "ollama-local") so the dashboard displays the friendly name.
🧹 Nitpick comments (1)
bin/lib/local-inference.js (1)
53-55: Consider extracting the duplicated `names` map to a module-level constant.

The `names` object mapping provider keys to display names is defined identically in two places within `validateLocalProvider`. Extracting it to a module-level constant would reduce duplication and ensure consistency.

♻️ Proposed refactor

```diff
+const LOCAL_PROVIDER_NAMES = {
+  "vllm-local": "vLLM",
+  "ollama-local": "Ollama",
+  "omlx-local": "oMLX",
+};
+
 const LOCAL_PROVIDER_PORTS = {
   "vllm-local": 8000,
   "ollama-local": 11434,
   "omlx-local": 8080,
 };
```

Then use `LOCAL_PROVIDER_NAMES[provider] || provider` in both locations.

Also applies to: 72-75
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bin/lib/local-inference.js` around lines 53 - 55, Extract the duplicated names map into a module-level constant (e.g., LOCAL_PROVIDER_NAMES = { "vllm-local": "vLLM", "ollama-local": "Ollama", "omlx-local": "oMLX" }) and replace the local `names` usage in validateLocalProvider with `LOCAL_PROVIDER_NAMES[provider] || provider`; update the other identical occurrence (the block around lines 72-75) to also use `LOCAL_PROVIDER_NAMES` so both `const name = ...` lookups reference the shared constant, removing the duplicated map.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bin/lib/local-inference.js`:
- Around line 13-17: LOCAL_PROVIDER_PORTS assigns "omlx-local" to port 8080
which conflicts with the OpenShell gateway; change the port for "omlx-local" in
LOCAL_PROVIDER_PORTS (in local-inference.js) to an unused port (e.g., 8081),
update any tests or configs that reference the old port, and/or add a clear
comment/doc entry noting the port change or that these services cannot run
concurrently so callers know about the conflict.
In `@bin/lib/onboard.js`:
- Around line 785-803: The onboarding cannot select "omlx-local" because
setupNim does not include it in the provider choices and printDashboard lacks a
friendly label; update setupNim to add "omlx-local" as a selectable provider
option (alongside nim/cloud/ollama/vllm) so provider can be set to "omlx-local"
and ensure any default/experimental flags match existing options, and update
printDashboard to map the "omlx-local" key to a user-facing label such as "Local
oMLX" so the dashboard shows a friendly name instead of the raw string.
---
Outside diff comments:
In `@bin/lib/onboard.js`:
- Around line 947-950: The provider label mapping in the printDashboard area
currently maps known provider keys to friendly names but omits "omlx-local", so
the raw key will be shown; add an additional branch to map provider ===
"omlx-local" to providerLabel = "Local oMLX" (adjacent to the existing checks
that set providerLabel for "vllm-local" and "ollama-local") so the dashboard
displays the friendly name.
---
Nitpick comments:
In `@bin/lib/local-inference.js`:
- Around line 53-55: Extract the duplicated names map into a module-level
constant (e.g., LOCAL_PROVIDER_NAMES = { "vllm-local": "vLLM", "ollama-local":
"Ollama", "omlx-local": "oMLX" }) and replace the local `names` usage in
validateLocalProvider with `LOCAL_PROVIDER_NAMES[provider] || provider`; update
the other identical occurrence (the block around lines 72-75) to also use
`LOCAL_PROVIDER_NAMES` so both `const name = ...` lookups reference the shared
constant, removing the duplicated map.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 83aba87b-5fb2-4e89-a219-d636fb0321ea
📒 Files selected for processing (4)
- bin/lib/local-inference.js
- bin/lib/onboard.js
- test/install-preflight.test.js
- test/local-inference.test.js
```js
const LOCAL_PROVIDER_PORTS = {
  "vllm-local": 8000,
  "ollama-local": 11434,
  "omlx-local": 8080,
};
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Verify port 8080 usage across the codebase
rg -n '8080' --type js -C2
```

Repository: NVIDIA/NemoClaw
Length of output: 2879
Port 8080 conflicts with the OpenShell gateway.
Port 8080 is reserved for the OpenShell gateway (bin/lib/onboard.js:315) but is also assigned to omlx-local in LOCAL_PROVIDER_PORTS. This will cause failures if both services attempt to run concurrently. Either reassign omlx-local to a different port (e.g., 8081) and update corresponding tests, or explicitly document that these services cannot run simultaneously.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bin/lib/local-inference.js` around lines 13 - 17, LOCAL_PROVIDER_PORTS
assigns "omlx-local" to port 8080 which conflicts with the OpenShell gateway;
change the port for "omlx-local" in LOCAL_PROVIDER_PORTS (in local-inference.js)
to an unused port (e.g., 8081), update any tests or configs that reference the
old port, and/or add a clear comment/doc entry noting the port change or that
these services cannot run concurrently so callers know about the conflict.
```js
} else if (provider === "omlx-local") {
  const validation = validateLocalProvider(provider, runCapture);
  if (!validation.ok) {
    console.error(` ${validation.message}`);
    process.exit(1);
  }
  const baseUrl = getLocalProviderBaseUrl(provider);
  run(
    `openshell provider create --name omlx-local --type openai ` +
      `--credential "OPENAI_API_KEY=omlx" ` +
      `--config "OPENAI_BASE_URL=${baseUrl}" 2>&1 || ` +
      `openshell provider update omlx-local --credential "OPENAI_API_KEY=omlx" ` +
      `--config "OPENAI_BASE_URL=${baseUrl}" 2>&1 || true`,
    { ignoreError: true }
  );
  run(
    `openshell inference set --no-verify --provider omlx-local --model ${model} 2>/dev/null || true`,
    { ignoreError: true }
  );
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Check if omlx-local is set anywhere in setupNim or related flow
ast-grep --pattern 'provider = "omlx-local"'
```

Repository: NVIDIA/NemoClaw
Length of output: 41

🏁 Script executed:

```bash
#!/bin/bash
# Read the setupNim function (lines 549-579)
sed -n '549,579p' bin/lib/onboard.js
```

Repository: NVIDIA/NemoClaw
Length of output: 1427

🏁 Script executed:

```bash
#!/bin/bash
# Read the printDashboard function to check provider labels (lines 947-950)
sed -n '940,960p' bin/lib/onboard.js
```

Repository: NVIDIA/NemoClaw
Length of output: 1005

🏁 Script executed:

```bash
#!/bin/bash
# Search for all references to omlx-local in the file
rg "omlx-local" bin/lib/onboard.js -n
```

Repository: NVIDIA/NemoClaw
Length of output: 372
The omlx-local provider cannot be selected during onboarding.
The setupInference function handles omlx-local (lines 785-803), but setupNim doesn't add it to the selectable options list. The available providers are: nim (experimental, GPU-capable), cloud, ollama, and vllm (experimental). There is no code path that sets provider = "omlx-local", making this branch unreachable.
Additionally, printDashboard (line 947-950) lacks a provider label mapping for omlx-local, so it would display the raw string instead of a friendly label like "Local oMLX".
Add omlx-local to the provider options in setupNim and its corresponding label in printDashboard.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@bin/lib/onboard.js` around lines 785 - 803, The onboarding cannot select
"omlx-local" because setupNim does not include it in the provider choices and
printDashboard lacks a friendly label; update setupNim to add "omlx-local" as a
selectable provider option (alongside nim/cloud/ollama/vllm) so provider can be
set to "omlx-local" and ensure any default/experimental flags match existing
options, and update printDashboard to map the "omlx-local" key to a user-facing
label such as "Local oMLX" so the dashboard shows a friendly name instead of the
raw string.
Pull request overview
This PR updates NemoClaw’s local-inference plumbing to work on macOS Docker Desktop by switching the host gateway hostname on Darwin, introduces an oMLX local provider (port 8080) alongside existing local providers, and hardens installer preflight tests against host nvm/fnm environment leakage.
Changes:
- Make local provider base URLs and container reachability checks use `host.docker.internal` on macOS and keep `host.openshell.internal` on Linux.
- Add `omlx-local` provider support (port map + health check + onboarding provider creation).
- Stabilize installer preflight tests by clearing `NVM_DIR`/`FNM_DIR` in spawned environments.
Reviewed changes
Copilot reviewed 2 out of 4 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| bin/lib/local-inference.js | Adds platform-aware host gateway and consolidates local provider port/health-check/reachability logic, including omlx-local. |
| bin/lib/onboard.js | Adds a new omlx-local branch to create/update the OpenShell provider and set inference routing. |
| test/local-inference.test.js | Updates expectations for macOS vs Linux hostnames and adds basic omlx-local URL/health-check tests. |
| test/install-preflight.test.js | Prevents nvm/fnm host env from interfering with runtime preflight tests by overriding env vars in spawns. |
```js
it("returns the expected container reachability command for ollama-local", () => {
  const cmd = getLocalProviderContainerReachabilityCheck("ollama-local");
  assert.match(cmd, new RegExp(`http://${EXPECTED_HOST.replace(/\./g, "\\.")}:11434/api/tags`));
  assert.match(cmd, /docker run --rm/);
```
This test only asserts that the reachability command contains `docker run --rm` and the expected URL, but it no longer verifies the platform-specific behavior that matters here (Linux should include `--add-host host.openshell.internal:host-gateway`; macOS should not). Consider asserting the presence or absence of the `--add-host` flag based on `IS_MACOS` so regressions in host-gateway wiring are caught.
```js
assert.match(cmd, /docker run --rm/);
if (IS_MACOS) {
  assert.doesNotMatch(cmd, /--add-host host\.openshell\.internal:host-gateway/);
} else {
  assert.match(cmd, /--add-host host\.openshell\.internal:host-gateway/);
}
```
Closing in favor of two focused PRs for cleaner review. |
What are tokens and how do I get them? I need help; I don't even know what they are.
Summary
- On macOS (Docker Desktop), `host.openshell.internal` doesn't resolve inside sandbox containers. Uses `host.docker.internal` on Darwin, which Docker Desktop resolves automatically. Linux behavior unchanged.
- Adds `omlx-local` (port 8080) as a local inference provider alongside `vllm-local` and `ollama-local`. oMLX is the fastest MLX inference server for Apple Silicon with persistent KV caching — a natural fit for Mac users.
- Installer test fix: spreading `...process.env` leaked `NVM_DIR` into test spawns, causing `ensure_nvm_loaded()` to source the real `nvm.sh` and override fake node/npm stubs.

Changes
- `bin/lib/local-inference.js`: platform-aware host gateway URL (`host.docker.internal` on macOS, `host.openshell.internal` on Linux), consolidated provider port table, omlx-local support
- `bin/lib/onboard.js`: omlx-local provider creation in the onboard flow (`openshell provider create --name omlx-local`)
- `test/local-inference.test.js`: platform-aware test expectations, omlx tests
- `test/install-preflight.test.js`: set `NVM_DIR=""` and `FNM_DIR=""` in all 6 `spawnSync` envs to prevent real version managers from overriding fake stubs

Test plan
- Test suite passes (including installer runtime preflight)
- Ran `nemoclaw onboard` with `omlx-local` on macOS Docker Desktop
- Sandbox containers reach `host.docker.internal:8080` for oMLX inference
- Linux behavior unchanged (`host.openshell.internal`)

Partially addresses #260
Related