Description:
Environment:
OS: Windows 11 with WSL2 (Ubuntu 22.04)
Runtime: Docker Desktop with WSL2 integration enabled
NemoClaw Version: 2026.3.11
Sandbox Name: ai-expert-ollama
Inference Provider: ollama-local (Ollama running on Windows host at port 11434)
Issue Summary:
The NemoClaw sandbox is unable to communicate with the local Ollama instance on the Windows host (host.docker.internal:11434). All network requests originating from within the sandbox are being intercepted by an internal proxy at 10.200.0.1:3128, which returns a 403 Forbidden error.
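A minimal reproduction from inside the sandbox (the model name and JSON payload below are placeholders for illustration, not the exact request used):

```shell
# Any request to the host-mapped Ollama port is intercepted by the
# gateway proxy and answered with 403. Payload is illustrative only.
curl -v http://host.docker.internal:11434/api/generate \
  -d '{"model": "llama3", "prompt": "hello", "stream": false}'
```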
Technical Evidence:
Even after manually exporting NO_PROXY="host.docker.internal,127.0.0.1,localhost" inside the sandbox shell, curl continues to route through the proxy.
Output from curl -v inside the sandbox:

* Uses proxy env variable http_proxy == 'http://10.200.0.1:3128'
*   Trying 10.200.0.1:3128...
* Connected to 10.200.0.1 (10.200.0.1) port 3128 (#0)
> POST http://host.docker.internal:11434/api/generate HTTP/1.1
> Host: host.docker.internal:11434
> User-Agent: curl/7.88.1
> Accept: */*
> Proxy-Connection: Keep-Alive
> Content-Length: 70
> Content-Type: application/x-www-form-urlencoded
< HTTP/1.1 403 Forbidden
* no chunk, no close, no size. Assume close to signal end
<
* Closing connection 0
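For completeness, these are the exports tried inside the sandbox shell. Both spellings are set because some tools read only the lowercase form:

```shell
# Export both spellings of the bypass list; curl honors either,
# but other tools inside the sandbox may check only one.
export NO_PROXY="host.docker.internal,127.0.0.1,localhost"
export no_proxy="$NO_PROXY"

# Confirm which proxy-related variables the shell actually exposes.
env | grep -i proxy
```

Even with both variables exported, curl still routes through 10.200.0.1:3128 as shown above.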
Steps Taken to Resolve:
Verified Docker Desktop WSL2 integration is active.
Recreated the sandbox with nemoclaw onboard, explicitly selecting the ollama-local provider.
Attempted to bypass the proxy by setting NO_PROXY environment variables both in the host script and directly inside the sandbox interactive shell.
Confirmed that the proxy at 10.200.0.1:3128 consistently blocks access to the host machine's ports.
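One further isolating test that may help (assuming the sandbox's curl supports the --noproxy flag, present since curl 7.19.4):

```shell
# Force this single request to skip the proxy regardless of env vars.
# If this succeeds while the plain request returns 403, the gateway
# proxy (not Ollama itself) is confirmed as the blocker.
curl --noproxy '*' http://host.docker.internal:11434/api/tags
```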
Requested Assistance:
How can the internal OpenShell/NemoClaw gateway proxy be configured to allow host.docker.internal traffic to bypass it?
Is there a way to permanently inject NO_PROXY settings into the sandbox blueprint or configuration so they are respected by all internal processes?
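Pending an answer, the kind of blueprint-level injection being asked about would look roughly like the following env-file fragment. Note this is purely a sketch: whether NemoClaw reads such a file, and under what name, is an assumption, not documented behavior.

```shell
# Hypothetical env-file fragment -- assumes the blueprint can pass
# extra environment variables into every process in the sandbox.
NO_PROXY=host.docker.internal,127.0.0.1,localhost
no_proxy=host.docker.internal,127.0.0.1,localhost
```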