diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index b9f3559910..7751326fe5 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -88,6 +88,11 @@ See [docs/CONTRIBUTING.md](docs/CONTRIBUTING.md) for the full style guide and wr
 ## Pull Requests
 
+We welcome contributions. Every PR requires maintainer review. To keep the review queue healthy, keep fewer than 10 PRs open at any time.
+
+> [!WARNING]
+> Accounts that repeatedly exceed this limit or submit automated bulk PRs may have their PRs closed or their access restricted.
+
 Follow these steps to submit a pull request.
 
 1. Create a feature branch from `main`.
diff --git a/docs/deployment/deploy-to-remote-gpu.md b/docs/deployment/deploy-to-remote-gpu.md
index 9c08a6815f..6fcf603dce 100644
--- a/docs/deployment/deploy-to-remote-gpu.md
+++ b/docs/deployment/deploy-to-remote-gpu.md
@@ -27,7 +27,7 @@ The deploy command provisions the VM, installs dependencies, and connects you to
 
 - The [Brev CLI](https://brev.nvidia.com) installed and authenticated.
 - An NVIDIA API key from [build.nvidia.com](https://build.nvidia.com).
-- NemoClaw installed locally. Install with `npm install -g nemoclaw`.
+- NemoClaw installed locally. Follow the [Quickstart](../get-started/quickstart.md) install steps.
 
 ## Deploy the Instance
diff --git a/docs/monitoring/monitor-sandbox-activity.md b/docs/monitoring/monitor-sandbox-activity.md
index 6b358d067e..f7db139b64 100644
--- a/docs/monitoring/monitor-sandbox-activity.md
+++ b/docs/monitoring/monitor-sandbox-activity.md
@@ -47,6 +47,8 @@ Key fields in the output include the following:
 - Blueprint run ID, which is the identifier for the most recent blueprint execution.
 - Inference provider, which shows the active provider, model, and endpoint.
 
+If you run `openclaw nemoclaw status` from inside the sandbox, the command detects the sandbox context and reports it. Host-level sandbox and inference details are not available from within the sandbox. Run `openshell sandbox status` on the host for full host-side details.
+
 ## View Blueprint and Sandbox Logs
 
 Stream the most recent log output from the blueprint runner and sandbox:
@@ -116,6 +118,8 @@ The following table lists common problems and their resolution steps:
 
 | Inference requests time out | Verify the provider endpoint is reachable. Check `openclaw nemoclaw status` for the active endpoint. |
 | Agent cannot reach an external host | Open the TUI with `openshell term` and approve the blocked request, or add the endpoint to the policy. |
 | Blueprint run failed | Run `openclaw nemoclaw logs --run-id <run-id>` to view the error output for the failed run. |
+| cgroup v2 error during onboard | On Ubuntu 24.04, DGX Spark, or WSL2, Docker requires `"default-cgroupns-mode": "host"` in `/etc/docker/daemon.json`. Run `nemoclaw setup-spark` to apply this fix, then retry `nemoclaw onboard`. |
+| Status shows "not running" inside sandbox | This is expected. The status command cannot query host-level state from within the sandbox. Run `openshell sandbox status` on the host instead. |
 
 ## Related Topics
diff --git a/docs/reference/commands.md b/docs/reference/commands.md
index 217d16ec17..6d7ef234f1 100644
--- a/docs/reference/commands.md
+++ b/docs/reference/commands.md
@@ -63,6 +63,8 @@ $ openclaw nemoclaw status [--json]
 `--json`
 : Output as JSON for programmatic consumption.
 
+When you run `status` inside an active OpenShell sandbox, host-level commands like `openshell sandbox status` are unavailable. The status command detects this and reports the sandbox context instead of showing false negatives. Run `openshell sandbox status` on the host for full details.
+
 ### `openclaw nemoclaw logs`
 
 Stream blueprint execution and sandbox logs.
@@ -104,6 +106,14 @@ $ nemoclaw onboard
 ```
 
 The first run prompts for your NVIDIA API key and saves it to `~/.nemoclaw/credentials.json`.
 
+The onboard wizard runs a preflight check before creating the gateway. On systems with cgroup v2, such as Ubuntu 24.04 and DGX Spark, the preflight verifies that Docker is configured with `"default-cgroupns-mode": "host"` in `/etc/docker/daemon.json`. If this setting is missing, `nemoclaw onboard` exits with an error and directs you to run `nemoclaw setup-spark` to apply the fix.
+
+By default, the onboard menu shows NVIDIA cloud inference options only. To enable experimental local inference options (NIM, vLLM, Ollama), set the `NEMOCLAW_EXPERIMENTAL` environment variable before running onboard:
+
+```console
+$ NEMOCLAW_EXPERIMENTAL=1 nemoclaw onboard
+```
+
 ### `nemoclaw list`
 
 List all registered sandboxes with their model, provider, and policy presets.