Bug description
`thv llm setup` does not support Gemini CLI. When trying to manually configure Gemini CLI to work with the LLM gateway, the natural guess at the settings format is wrong in two ways:
Incorrect manual config (what users try):
```json
{
  "auth": {"tokenCommand": "\"/path/to/thv\" llm token"},
  "baseUrl": "https://gateway:30443"
}
```
Both keys are invalid:
- `auth.tokenCommand` — not a recognised Gemini CLI `settings.json` key (this is Claude Code's `apiKeyHelper` concept; Gemini CLI has no equivalent)
- `baseUrl` — not a valid top-level key; silently ignored by Gemini CLI
Expected behavior
`thv llm setup` detects Gemini CLI and writes the correct `~/.gemini/settings.json` using the proxy path (same as Cursor), since Gemini CLI has no dynamic token command support:
```json
{
  "security": {
    "auth": {
      "selectedType": "gemini-api-key"
    }
  },
  "env": {
    "GEMINI_API_KEY": "sk-toolhive-proxy",
    "GOOGLE_GEMINI_BASE_URL": "http://127.0.0.1:14000",
    "NODE_TLS_REJECT_UNAUTHORIZED": "0"
  }
}
```
Key points:
- `security.auth.selectedType: "gemini-api-key"` — required for `GOOGLE_GEMINI_BASE_URL` to be honoured (the fix in PR #25357, shipped in v0.40.0, only applies to API key auth, not OAuth)
- `GEMINI_API_KEY` — dummy value; the local proxy does not validate it, but Gemini CLI requires a non-empty key
- `GOOGLE_GEMINI_BASE_URL` — points to the local `thv llm proxy` (default `http://127.0.0.1:14000`), which injects the real OIDC token before forwarding to the upstream gateway
Actual behavior
No Gemini CLI entry in `Registry()` in `pkg/llm/setup.go`; users fall back to manual config with incorrect keys.
Additional context
- Gemini CLI is detectable via the `~/.gemini/` directory
- Config is written to `~/.gemini/settings.json`
- Implementation would follow the `ToolKindProxy` pattern already used for Cursor (see `cursorSpec()` in `pkg/llm/setup.go`)
- `NODE_TLS_REJECT_UNAUTHORIZED` should only be written when `TLSSkipVerify` is set, same as the Claude Code spec
- Gateway format note: the Envoy AI Gateway currently only exposes an OpenAI-compatible endpoint (`/v1/chat/completions`); there is no Gemini API format backend configured. Gemini CLI sends requests to `/v1beta/models/{model}:generateContent`, which the gateway rejects. A separate issue should track adding a Google/Gemini backend to the gateway so Gemini CLI requests can actually be served end-to-end.