
Add Gemini CLI support to thv llm setup #5141

@yrobla


Bug description

thv llm setup does not support Gemini CLI. Users who try to configure Gemini CLI manually against the LLM gateway naturally guess a settings format that is wrong in two ways:

Incorrect manual config (what users try):

{
    "auth": {"tokenCommand": "\"/path/to/thv\" llm token"},
    "baseUrl": "https://gateway:30443"
}

Both keys are invalid:

  • auth.tokenCommand — not a recognised Gemini CLI settings.json key (this is Claude Code's apiKeyHelper concept; Gemini CLI has no equivalent)
  • baseUrl — not a valid top-level key; silently ignored by Gemini CLI

Expected behavior

thv llm setup detects Gemini CLI and writes the correct ~/.gemini/settings.json using the proxy path (same as Cursor), since Gemini CLI has no dynamic token command support:

{
    "security": {
        "auth": {
            "selectedType": "gemini-api-key"
        }
    },
    "env": {
        "GEMINI_API_KEY": "sk-toolhive-proxy",
        "GOOGLE_GEMINI_BASE_URL": "http://127.0.0.1:14000",
        "NODE_TLS_REJECT_UNAUTHORIZED": "0"
    }
}

Key points:

  • security.auth.selectedType: "gemini-api-key" — required for GOOGLE_GEMINI_BASE_URL to be honoured (the fix in PR #25357, shipped in v0.40.0, only applies to API key auth, not OAuth)
  • GEMINI_API_KEY — dummy value; the local proxy does not validate it but Gemini CLI requires a non-empty key
  • GOOGLE_GEMINI_BASE_URL — points to the local thv llm proxy (default http://127.0.0.1:14000), which injects the real OIDC token before forwarding to the upstream gateway

Actual behavior

There is no Gemini CLI entry in Registry() in pkg/llm/setup.go, so users fall back to manual configuration with the incorrect keys shown above.

Additional context

  • Gemini CLI detectable via ~/.gemini/ directory
  • Config written to ~/.gemini/settings.json
  • Implementation would follow the ToolKindProxy pattern already used for Cursor (see cursorSpec() in pkg/llm/setup.go)
  • NODE_TLS_REJECT_UNAUTHORIZED should only be written when TLSSkipVerify is set, same as the Claude Code spec
  • Gateway format note: the Envoy AI Gateway currently only exposes an OpenAI-compatible endpoint (/v1/chat/completions); there is no Gemini API format backend configured. Gemini CLI sends requests to /v1beta/models/{model}:generateContent, which the gateway rejects. A separate issue should track adding a Google/Gemini backend to the gateway so Gemini CLI requests can actually be served end-to-end.

Metadata


Assignees

No one assigned

Labels

cli (Changes that impact CLI functionality), enhancement (New feature or request), go (Pull requests that update go code), llm gateway (LLM gateway authentication feature)
