fix: add local-inference policy preset for Ollama/vLLM host access (Fixes #693)#2000
New `local-inference.yaml` preset allows `host.openshell.internal` on ports 11434 (Ollama) and 8000 (vLLM) with binary restrictions. Onboard auto-suggests this preset when a local provider is selected.

Improvements over #781:

- Shared LOCAL_INFERENCE_PROVIDERS constant (no duplicated arrays)
- Removed `nim-local` (not in the approved providers list)
- Content-level tests for preset hosts, ports, and binary restrictions
- Auto-suggestion tests for local vs cloud providers

Based on work by @cluster2600 in #781.

Co-Authored-By: cluster2600 <cluster2600@users.noreply.github.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
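The suggestion logic described above, driven by a single shared provider list, could look roughly like this. The constant and function names come from the PR itself, but the signatures and the `OnboardState` shape are assumptions for illustration:

```typescript
// Sketch of the shared-constant approach from this PR; the real
// implementation lives in src/lib/onboard.ts and may differ in shape.
const LOCAL_INFERENCE_PROVIDERS: string[] = ["ollama-local", "vllm-local"];

interface OnboardState {
  provider?: string | null; // hypothetical field name
}

function getSuggestedPolicyPresets(state: OnboardState): string[] {
  const presets: string[] = [];
  // Only local providers get the local-inference preset suggested;
  // cloud providers, null, "" and missing providers all fall through.
  if (state.provider && LOCAL_INFERENCE_PROVIDERS.includes(state.provider)) {
    presets.push("local-inference");
  }
  return presets;
}
```

Because both the suggestion path and `setupPoliciesWithSelection()` would read the same array, adding a provider later only touches one place.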
📝 Walkthrough

A new "local-inference" policy preset is introduced that permits controlled network access to local LLM inference services (Ollama on port 11434 and vLLM on port 8000) running on `host.openshell.internal`.

Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
🧹 Nitpick comments (2)
test/onboard.test.ts (1)
157-162: Add an explicit empty-string provider assertion.

Line 161 checks the missing-provider case (`{}`), but not `provider: ""`. Adding it would lock in the null/empty-input behavior explicitly.

♻️ Suggested test hardening
```diff
 it("does not suggest local-inference for cloud providers", () => {
   expect(getSuggestedPolicyPresets({ provider: "nvidia-prod" })).not.toContain("local-inference");
   expect(getSuggestedPolicyPresets({ provider: "openai-api" })).not.toContain("local-inference");
   expect(getSuggestedPolicyPresets({ provider: null })).not.toContain("local-inference");
+  expect(getSuggestedPolicyPresets({ provider: "" })).not.toContain("local-inference");
   expect(getSuggestedPolicyPresets({})).not.toContain("local-inference");
 });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `test/onboard.test.ts` around lines 157-162: The test "does not suggest local-inference for cloud providers" is missing an explicit assertion for an empty-string provider; update the test containing getSuggestedPolicyPresets to also assert that getSuggestedPolicyPresets({ provider: "" }) does not contain "local-inference" so the behavior for empty-string inputs is locked in (add the expect call alongside the existing expectations referencing getSuggestedPolicyPresets).

test/policies.test.ts (1)
99-102: Avoid duplicating the preset count and name source of truth.

The explicit `12` can drift from the expected names array. You can derive the count from `expected.length` in the same assertion block for easier maintenance.

Also applies to: 116-130
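The single-source-of-truth fix this comment suggests can be sketched as follows; the array contents and helper name are hypothetical, since the real test lists the full set of presets:

```typescript
// Hypothetical preset names for illustration; the actual test in
// test/policies.test.ts lists all 12 presets.
const expected = ["local-inference", "default", "strict"];

// Derive the count from expected.length instead of a hardcoded number,
// so adding a preset only requires editing the names array.
function presetListMatches(listed: string[]): boolean {
  return (
    listed.length === expected.length &&
    expected.every((name) => listed.includes(name))
  );
}
```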
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/policies.test.ts` around lines 99 - 102, The test for policies.listPresets() currently asserts a hardcoded count (12); replace the magic number with a derived value from the expected names array (use expected.length) so the length assertion and the name assertions use the same single source of truth (referencing policies.listPresets() and the expected array used later in the test).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@test/onboard.test.ts`:
- Around line 157-162: The test "does not suggest local-inference for cloud
providers" is missing an explicit assertion for an empty-string provider; update
the test containing getSuggestedPolicyPresets to also assert that
getSuggestedPolicyPresets({ provider: "" }) does not contain "local-inference"
so the behavior for empty-string inputs is locked in (add the expect call
alongside the existing expectations referencing getSuggestedPolicyPresets).
In `@test/policies.test.ts`:
- Around line 99-102: The test for policies.listPresets() currently asserts a
hardcoded count (12); replace the magic number with a derived value from the
expected names array (use expected.length) so the length assertion and the name
assertions use the same single source of truth (referencing
policies.listPresets() and the expected array used later in the test).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro Plus
Run ID: 0ed0bf61-6f4a-4f61-842c-8f5a1e63930b
📒 Files selected for processing (4)
- nemoclaw-blueprint/policies/presets/local-inference.yaml
- src/lib/onboard.ts
- test/onboard.test.ts
- test/policies.test.ts
Summary
- New `local-inference.yaml` preset allows `host.openshell.internal` on ports 11434 (Ollama) and 8000 (vLLM) with binary restrictions (`openclaw`, `claude`).
- Onboard auto-suggests this preset when a local provider (`ollama-local`, `vllm-local`) is selected.

Improvements over #781
- Shared constant: `LOCAL_INFERENCE_PROVIDERS` used in both `getSuggestedPolicyPresets()` and `setupPoliciesWithSelection()` — no duplicated arrays that can drift.
- Removed `nim-local`: not in the approved providers list in `inference-config.ts`.

Credit
Based on work by @cluster2600 in #781. The preset YAML and auto-suggest approach are theirs; this PR refactors and adds test coverage.
Test plan
- Unit tests cover `ollama-local` and `vllm-local`
- Onboard with `ollama-local`, verify `local-inference` is pre-selected

🤖 Generated with Claude Code
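The content-level preset checks mentioned above could look like this minimal sketch. The in-memory shape (`hosts`, `ports`, `binaries`) is an assumption for illustration; the actual preset is YAML (`nemoclaw-blueprint/policies/presets/local-inference.yaml`) with its own schema:

```typescript
// Hypothetical in-memory shape of the local-inference preset; the real
// YAML schema may differ.
interface PolicyPreset {
  hosts: string[];
  ports: number[];
  binaries: string[];
}

const localInference: PolicyPreset = {
  hosts: ["host.openshell.internal"],
  ports: [11434, 8000], // Ollama, vLLM
  binaries: ["openclaw", "claude"],
};

// Content-level checks: allowed host, both inference ports, and a
// non-empty binary restriction list.
function validateLocalInference(p: PolicyPreset): boolean {
  return (
    p.hosts.includes("host.openshell.internal") &&
    p.ports.includes(11434) &&
    p.ports.includes(8000) &&
    p.binaries.length > 0
  );
}
```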