27 commits
902d839
feat: add Jetson Orin Nano support
realkim93 Mar 19, 2026
674493e
fix: address code review — null-safety, vllm-local parity, policy tig…
realkim93 Mar 20, 2026
9601a71
fix: address review feedback — port cleanup timing, provider mapping,…
realkim93 Mar 23, 2026
8e23931
fix: remove port-18789 preflight check to avoid regression on re-run
realkim93 Mar 23, 2026
9d57586
fix: align test assertions with merged implementation
realkim93 Mar 23, 2026
59291f1
fix: align tests with main after rebase
realkim93 Mar 29, 2026
92e51f1
refactor: extract Jetson detection to reduce detectGpu complexity
realkim93 Mar 29, 2026
c4d41dd
chore: apply shfmt formatting to setup-jetson.sh
realkim93 Mar 29, 2026
2fad75c
Merge branch 'main' into feat/jetson-orin-nano-support
cv Mar 30, 2026
79af0ff
fix: restore preflight idempotency and fix local provider sandbox config
realkim93 Mar 30, 2026
df08f88
merge: resolve conflicts with latest main
realkim93 Apr 1, 2026
fc8c790
fix: correct setup-jetson placement and apply Prettier formatting
realkim93 Apr 1, 2026
73ca60d
Merge remote-tracking branch 'origin/main' into pr-405-restore
realkim93 Apr 1, 2026
2115aa4
Merge branch 'main' into feat/jetson-orin-nano-support
realkim93 Apr 1, 2026
7b948ee
fix: address review feedback before re-review request
realkim93 Apr 1, 2026
e9bfa88
test: replace source-text inspection with behavioral tests for patchG…
realkim93 Apr 1, 2026
7b31aa5
test: add preflight gateway-reuse idempotency tests and setup-jetson …
realkim93 Apr 1, 2026
4542d9f
Merge branch 'main' into feat/jetson-orin-nano-support
realkim93 Apr 2, 2026
7a80c8d
docs: add Jetson to quickstart compatibility table, remove fragile test
realkim93 Apr 2, 2026
b7536f1
Merge branch 'feat/jetson-orin-nano-support' of github.com:realkim93/…
realkim93 Apr 2, 2026
17b7e81
merge: add new TS source files and Jetson support from merge with main
realkim93 Apr 2, 2026
fec9cec
merge: resolve conflicts with main's CJS→TS migration
realkim93 Apr 2, 2026
148c01d
merge: resolve conflict with main's stale gateway comment update
realkim93 Apr 3, 2026
08b2481
merge: resolve conflicts with latest main (onboard.js + SKILL.md)
realkim93 Apr 3, 2026
756577d
Merge branch 'main' into feat/jetson-orin-nano-support
realkim93 Apr 3, 2026
8afb5d6
merge: resolve conflicts with main's TS migration
realkim93 Apr 12, 2026
fb866bf
merge: incorporate latest main (incl. Jetson installer PR #1702)
realkim93 Apr 12, 2026
8 changes: 8 additions & 0 deletions ci/platform-matrix.json
@@ -44,6 +44,14 @@
"_prd_note": "PRD tracks x86 and ARM (WOA) separately; treating as one entry until ARM is validated independently.",
"notes": "Requires WSL2 with Docker Desktop backend."
},
{
"name": "Jetson (Orin Nano, Orin NX, AGX Orin, Xavier)",
"runtimes": ["Docker"],
"status": "tested",
"prd_priority": "P1",
"ci_tested": false,
"notes": "Run `sudo nemoclaw setup-jetson` before onboarding. See [commands reference](../reference/commands.md#nemoclaw-setup-jetson)."
},
{
"name": "DGX Station",
"runtimes": ["Docker"],
1 change: 1 addition & 0 deletions docs/get-started/quickstart.md
@@ -70,6 +70,7 @@ Availability is not limited to these entries, but untested configurations may ha
| macOS (Apple Silicon) | Colima, Docker Desktop | Tested with limitations | Install Xcode Command Line Tools (`xcode-select --install`) and start the runtime before running the installer. |
| DGX Spark | Docker | Tested | Use the standard installer and `nemoclaw onboard`. |
| Windows WSL2 | Docker Desktop (WSL backend) | Tested with limitations | Requires WSL2 with Docker Desktop backend. |
| Jetson (Orin Nano, Orin NX, AGX Orin, Xavier) | Docker | Tested | Run `sudo nemoclaw setup-jetson` before onboarding. See [commands reference](../reference/commands.md#nemoclaw-setup-jetson). |
<!-- platform-matrix:end -->

## Install NemoClaw and Onboard OpenClaw Agent
10 changes: 10 additions & 0 deletions docs/reference/commands.md
@@ -275,6 +275,16 @@ This command remains as a compatibility alias to `nemoclaw onboard`.
$ nemoclaw setup-spark
```

### `nemoclaw setup-jetson`

Set up NemoClaw on NVIDIA Jetson devices (Orin Nano, Orin NX, AGX Orin, Xavier).
This command configures the NVIDIA container runtime for Docker and applies iptables-legacy fixes required by Jetson's Tegra kernel.
Run with `sudo` on the Jetson host.

```console
$ sudo nemoclaw setup-jetson
```

### `nemoclaw debug`

Collect diagnostics for bug reports.
282 changes: 217 additions & 65 deletions scripts/setup-jetson.sh
@@ -1,84 +1,236 @@
#!/usr/bin/env bash
# SPDX-FileCopyrightText: Copyright (c) 2026 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
# SPDX-License-Identifier: Apache-2.0
#
# NemoClaw setup for NVIDIA Jetson devices (Orin Nano, Orin NX, AGX Orin, Thor).
#
# Jetson devices use unified memory and a Tegra kernel that lacks nf_tables
# chain modules (nft_chain_filter, nft_chain_nat, etc.). The OpenShell gateway
# runs k3s inside a Docker container, and k3s's network policy controller
# uses iptables in nf_tables mode by default, which panics on Tegra kernels.
#
# This script prepares the Jetson host so that `nemoclaw onboard` succeeds:
# 1. Verifies Jetson platform
# 2. Ensures NVIDIA Container Runtime is configured for Docker
# 3. Loads required kernel modules (br_netfilter, xt_comment)
# 4. Configures Docker daemon with default-runtime=nvidia
#
# The iptables-legacy patch for the gateway container image is handled
# automatically by `nemoclaw onboard` when it detects a Jetson GPU.
#
# Usage:
# sudo nemoclaw setup-jetson
# # or directly:
# sudo bash scripts/setup-jetson.sh

set -euo pipefail

SUDO=()
((EUID != 0)) && SUDO=(sudo)
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
MIN_NODE_VERSION="22.16.0"

info() {
printf "[INFO] %s\n" "$*"
}

error() {
printf "[ERROR] %s\n" "$*" >&2
info() { echo -e "${GREEN}>>>${NC} $1"; }
warn() { echo -e "${YELLOW}>>>${NC} $1"; }
fail() {
echo -e "${RED}>>>${NC} $1"
exit 1
}

get_jetpack_version() {
local release_line release revision l4t_version

release_line="$(head -n1 /etc/nv_tegra_release 2>/dev/null || true)"
[[ -n "$release_line" ]] || return 0

release="$(printf '%s\n' "$release_line" | sed -n 's/^# R\([0-9][0-9]*\) (release).*/\1/p')"
revision="$(printf '%s\n' "$release_line" | sed -n 's/^.*REVISION: \([0-9][0-9]*\)\..*$/\1/p')"
l4t_version="${release}.${revision}"

case "$l4t_version" in
36.*)
printf "%s" "jp6"
;;
38.*)
printf "%s" "jp7"
;;
*)
info "Jetson detected (L4T $l4t_version) but version is not recognized — skipping host setup"
;;
esac
version_gte() {
# Returns 0 (true) if $1 >= $2 — portable, no sort -V (BSD compat)
local IFS=.
local -a a b
read -r -a a <<<"$1"
read -r -a b <<<"$2"
for i in 0 1 2; do
local ai=${a[$i]:-0} bi=${b[$i]:-0}
if ((ai > bi)); then return 0; fi
if ((ai < bi)); then return 1; fi
done
return 0
}

configure_jetson_host() {
local jetpack_version="$1"
# ── Pre-flight checks ─────────────────────────────────────────────

if ((EUID != 0)); then
info "Jetson host configuration requires sudo. You may be prompted for your password."
"${SUDO[@]}" true >/dev/null || error "Sudo is required to apply Jetson host configuration."
fi
if [ "$(uname -s)" != "Linux" ]; then
fail "This script is for NVIDIA Jetson (Linux). Use 'nemoclaw setup' for macOS."
fi

if [ "$(uname -m)" != "aarch64" ]; then
fail "Jetson devices are aarch64. This system is $(uname -m)."
fi

case "$jetpack_version" in
jp6)
"${SUDO[@]}" update-alternatives --set iptables /usr/sbin/iptables-legacy
"${SUDO[@]}" sed -i '/"iptables": false,/d; /"bridge": "none"/d; s/"default-runtime": "nvidia",/"default-runtime": "nvidia"/' /etc/docker/daemon.json
;;
jp7)
# JP7 (Thor) does not need iptables or Docker daemon.json changes.
;;
*)
error "Unsupported Jetson version: $jetpack_version"
;;
esac

"${SUDO[@]}" modprobe br_netfilter
"${SUDO[@]}" sysctl -w net.bridge.bridge-nf-call-iptables=1 >/dev/null

# Persist across reboots
echo "br_netfilter" | "${SUDO[@]}" tee /etc/modules-load.d/nemoclaw.conf >/dev/null
echo "net.bridge.bridge-nf-call-iptables=1" | "${SUDO[@]}" tee /etc/sysctl.d/99-nemoclaw.conf >/dev/null

if [[ "$jetpack_version" == "jp6" ]]; then
"${SUDO[@]}" systemctl restart docker
if [ "$(id -u)" -ne 0 ]; then
fail "Must run as root: sudo nemoclaw setup-jetson"
fi

# Verify Jetson platform
JETSON_MODEL=""
if [ -f /proc/device-tree/model ]; then
JETSON_MODEL=$(tr -d '\0' </proc/device-tree/model)
fi

if ! echo "$JETSON_MODEL" | grep -qi "jetson"; then
# Also check nvidia-smi for Orin GPU name
GPU_NAME=$(nvidia-smi --query-gpu=name --format=csv,noheader,nounits 2>/dev/null || echo "")
if ! echo "$GPU_NAME" | grep -qiE "orin|thor"; then
fail "This does not appear to be a Jetson device. Use 'nemoclaw onboard' directly."
fi
}
# Exclude discrete GPUs that happen to contain matching strings
if echo "$GPU_NAME" | grep -qiE "geforce|rtx|quadro"; then
fail "Discrete GPU detected ('$GPU_NAME'). This script is for Jetson only."
fi
JETSON_MODEL="${GPU_NAME}"
fi

info "Detected Jetson platform: ${JETSON_MODEL}"

# Detect the real user (not root) for docker group add
REAL_USER="${SUDO_USER:-$(logname 2>/dev/null || echo "")}"

command -v docker >/dev/null || fail "Docker not found. Install docker.io: sudo apt-get install -y docker.io"
command -v python3 >/dev/null || fail "python3 not found. Install with: sudo apt-get install -y python3-minimal"
command -v node >/dev/null || fail "Node.js not found. NemoClaw requires Node.js >= ${MIN_NODE_VERSION}. Install Node.js before running 'nemoclaw onboard'."

NODE_VERSION_RAW="$(node --version 2>/dev/null || true)"
NODE_VERSION="${NODE_VERSION_RAW#v}"
if ! echo "$NODE_VERSION" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+$'; then
fail "Could not parse Node.js version from '${NODE_VERSION_RAW}'. NemoClaw requires Node.js >= ${MIN_NODE_VERSION}."
fi
if ! version_gte "$NODE_VERSION" "$MIN_NODE_VERSION"; then
fail "Node.js ${NODE_VERSION_RAW} is too old. NemoClaw requires Node.js >= ${MIN_NODE_VERSION}."
fi
info "Node.js ${NODE_VERSION_RAW} OK"

# ── 1. Docker group ───────────────────────────────────────────────

if [ -n "$REAL_USER" ]; then
if id -nG "$REAL_USER" | grep -qw docker; then
info "User '$REAL_USER' already in docker group"
else
info "Adding '$REAL_USER' to docker group..."
usermod -aG docker "$REAL_USER"
info "Added. Group will take effect on next login (or use 'newgrp docker')."
fi
fi

# ── 2. NVIDIA Container Runtime ──────────────────────────────────
#
# Jetson JetPack pre-installs nvidia-container-runtime but Docker may
# not be configured to use it as the default runtime.

main() {
local jetpack_version
jetpack_version="$(get_jetpack_version)"
[[ -n "$jetpack_version" ]] || exit 0
DAEMON_JSON="/etc/docker/daemon.json"
NEEDS_RESTART=false

configure_nvidia_runtime() {
if ! command -v nvidia-container-runtime >/dev/null 2>&1; then
warn "nvidia-container-runtime not found. GPU passthrough may not work."
warn "Install with: sudo apt-get install -y nvidia-container-toolkit"
return
fi

info "Jetson detected ($jetpack_version) — applying required host configuration"
configure_jetson_host "$jetpack_version"
if [ -f "$DAEMON_JSON" ]; then
# Check if nvidia runtime is already configured
if python3 -c "
import json, sys
try:
d = json.load(open('$DAEMON_JSON'))
runtimes = d.get('runtimes', {}) if isinstance(d, dict) else {}
if 'nvidia' in runtimes and d.get('default-runtime') == 'nvidia':
sys.exit(0)
sys.exit(1)
except (IOError, ValueError, KeyError, AttributeError):
sys.exit(1)
" 2>/dev/null; then
info "NVIDIA runtime already configured in Docker daemon"
else
info "Adding NVIDIA runtime to Docker daemon config..."
python3 -c "
import json
try:
with open('$DAEMON_JSON') as f:
d = json.load(f)
except (IOError, ValueError, KeyError):
d = {}
if not isinstance(d, dict):
d = {}
d.setdefault('runtimes', {})['nvidia'] = {
'path': 'nvidia-container-runtime',
'runtimeArgs': []
}
d['default-runtime'] = 'nvidia'
with open('$DAEMON_JSON', 'w') as f:
json.dump(d, f, indent=2)
"
NEEDS_RESTART=true
fi
else
info "Creating Docker daemon config with NVIDIA runtime..."
mkdir -p "$(dirname "$DAEMON_JSON")"
cat >"$DAEMON_JSON" <<'DAEMONJSON'
{
"runtimes": {
"nvidia": {
"path": "nvidia-container-runtime",
"runtimeArgs": []
}
},
"default-runtime": "nvidia"
}
DAEMONJSON
NEEDS_RESTART=true
fi
}

main "$@"
configure_nvidia_runtime

# ── 3. Kernel modules ────────────────────────────────────────────

info "Loading required kernel modules..."
modprobe br_netfilter 2>/dev/null || warn "Could not load br_netfilter"
modprobe xt_comment 2>/dev/null || warn "Could not load xt_comment"

# Persist across reboots
MODULES_FILE="/etc/modules-load.d/nemoclaw-jetson.conf"
if [ ! -f "$MODULES_FILE" ]; then
info "Persisting kernel modules for boot..."
cat >"$MODULES_FILE" <<'MODULES'
# NemoClaw: required for k3s networking inside Docker
br_netfilter
xt_comment
MODULES
fi

# ── 4. Restart Docker if needed ──────────────────────────────────

if [ "$NEEDS_RESTART" = true ]; then
info "Restarting Docker daemon..."
if command -v systemctl >/dev/null 2>&1; then
systemctl restart docker
else
service docker restart 2>/dev/null || dockerd &
fi
for i in $(seq 1 15); do
if docker info >/dev/null 2>&1; then
break
fi
[ "$i" -eq 15 ] && fail "Docker didn't come back after restart. Check 'systemctl status docker'."
sleep 2
done
info "Docker restarted with NVIDIA runtime"
fi

# ── Done ─────────────────────────────────────────────────────────

echo ""
info "Jetson setup complete."
info ""
info "Device: ${JETSON_MODEL}"
info ""
info "Next step: run 'nemoclaw onboard' to set up your sandbox."
info " nemoclaw onboard"
info ""
info "The onboard wizard will automatically patch the gateway image"
info "for Jetson iptables compatibility."
13 changes: 13 additions & 0 deletions src/lib/local-inference.test.ts
@@ -8,6 +8,7 @@ import {
CONTAINER_REACHABILITY_IMAGE,
DEFAULT_OLLAMA_MODEL,
LARGE_OLLAMA_MIN_MEMORY_MB,
DEFAULT_OLLAMA_MODEL_JETSON,
getDefaultOllamaModel,
getBootstrapOllamaModelOptions,
getLocalProviderBaseUrl,
@@ -26,6 +27,8 @@ import {
validateLocalProvider,
} from "../../dist/lib/local-inference";

const FAKE_JETSON_GPU = { type: "nvidia", totalMemoryMB: 7627, jetson: true, unifiedMemory: true };

describe("local inference helpers", () => {
it("returns the expected base URL for vllm-local", () => {
expect(getLocalProviderBaseUrl("vllm-local")).toBe("http://host.openshell.internal:8000/v1");
@@ -304,4 +307,14 @@
it("treats non-JSON probe output as success once the model responds", () => {
expect(validateOllamaModel("nemotron-3-nano:30b", () => "ok")).toEqual({ ok: true });
});

it("returns jetson 4b model as default on jetson when available", () => {
const list = "nemotron-3-nano:4b abc 2.8 GB now\nqwen3:32b def 20 GB now";
expect(getDefaultOllamaModel(() => list, FAKE_JETSON_GPU)).toBe(DEFAULT_OLLAMA_MODEL_JETSON);
});

it("falls back to jetson 4b model when ollama list is empty on jetson", () => {
expect(getBootstrapOllamaModelOptions(FAKE_JETSON_GPU)).toEqual([DEFAULT_OLLAMA_MODEL_JETSON]);
expect(getDefaultOllamaModel(() => "", FAKE_JETSON_GPU)).toBe(DEFAULT_OLLAMA_MODEL_JETSON);
});
});