
fix(providers): strip models/ prefix for Gemini OpenAI-compat endpoint (#175) #186

Merged
hqhq1025 merged 2 commits into main from fix/gemini-models-prefix-175
Apr 23, 2026

Conversation

@hqhq1025
Collaborator

Summary

Fixes #175. Google's OpenAI-compatible endpoint at https://generativelanguage.googleapis.com/v1beta/openai/ accepts the same request shape as OpenAI Chat Completions, but it rejects model ids that carry the models/ prefix returned by its own /models listing. When such an id is routed through a custom provider (OpenAI Chat wire) configured with that baseUrl, the result is an opaque 400 status code with no body.

  • New helper packages/providers/src/gemini-compat.ts exposes isGeminiOpenAICompat(baseUrl) and normalizeGeminiModelId(modelId, baseUrl).
  • complete() in packages/providers/src/index.ts normalizes the modelId on the wire only: Settings keeps the prefixed form so provider/model UX stays in sync with /models, while requests drop the prefix before hitting pi-ai.

No changes to retry / errors / Settings UI / agent.ts / core. Skipped the OpenAI-specific param stripping step (presence_penalty / frequency_penalty / response_format): none of those keywords exist in packages/providers, packages/core, or apps/desktop/src/main, so pi-ai is not being handed Chat-specific knobs that Gemini would reject.
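The shape of the two helpers might look like the sketch below. This is illustrative only: the helper names come from the PR description, but the bodies are an assumption (the strict URL check follows the reviewer's later suggestion, and the merged code in packages/providers/src/gemini-compat.ts may differ).

```typescript
// Illustrative sketch, not the merged implementation.
export function isGeminiOpenAICompat(baseUrl: string | undefined): boolean {
  if (!baseUrl) return false;
  try {
    const url = new URL(baseUrl);
    return (
      url.hostname === 'generativelanguage.googleapis.com' &&
      /(^|\/)openai(\/|$)/.test(url.pathname)
    );
  } catch {
    return false;
  }
}

export function normalizeGeminiModelId(
  modelId: string,
  baseUrl: string | undefined,
): string {
  // Strip the models/ prefix only on the wire, and only for the Gemini
  // OpenAI-compat endpoint; every other model id passes through untouched.
  if (!isGeminiOpenAICompat(baseUrl)) return modelId;
  return modelId.startsWith('models/')
    ? modelId.slice('models/'.length)
    : modelId;
}
```

Keeping the prefixed form in Settings and normalizing only at the request boundary means the UI and /models stay byte-identical, while the wire sees the id Gemini actually accepts.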

Test plan

  • Unit tests for isGeminiOpenAICompat and normalizeGeminiModelId (Gemini host, OpenAI host, undefined baseUrl, non-Gemini models/ id preserved).
  • Integration test in index.test.ts verifying complete({ modelId: 'models/gemini-2-pro', baseUrl: '.../generativelanguage.googleapis.com/...' }) sends bare gemini-2-pro to pi-ai.
  • pnpm exec vitest run in packages/providers: 10 files, 141 tests passed.
  • pnpm typecheck and pnpm lint green.

PRINCIPLES §5b

  • Compatibility: green — only affects requests routed to Gemini host; Settings storage and UI unchanged.
  • Upgradeability: green — single-file helper, trivial to remove when Gemini normalizes its API.
  • No bloat: green — 18 lines of runtime code, no new deps.
  • Elegance: green — pure, local normalization at the wire boundary.

fix(providers): strip models/ prefix for Gemini OpenAI-compat endpoint (#175)

Google's OpenAI-compatible endpoint
(https://generativelanguage.googleapis.com/v1beta/openai/) rejects
model ids carrying the `models/` prefix that its own /models listing
returns, yielding an opaque `400 status code (no body)`.

Normalize on the wire only: Settings keeps the prefixed form so the
UI stays consistent with what /models returns, while requests drop
the prefix before hitting pi-ai.

Signed-off-by: hqhq1025 <1506751656@qq.com>
@github-actions Bot added the area:providers (packages/providers: pi-ai adapter, model calls) label on Apr 23, 2026
Comment thread packages/providers/src/gemini-compat.ts Fixed
Contributor

@github-actions Bot left a comment


Findings

  • [Major] Overly broad Gemini endpoint detection can rewrite model IDs for non-Gemini or spoofed URLs. isGeminiOpenAICompat matches any string containing generativelanguage.googleapis.com, so a value like https://generativelanguage.googleapis.com.evil.example/v1beta/openai/ is treated as Gemini and has models/ stripped unexpectedly. This can cause wrong model routing and hard-to-debug provider failures. Evidence: packages/providers/src/gemini-compat.ts:12, used at packages/providers/src/index.ts:228.
    Suggested fix:
    export function isGeminiOpenAICompat(baseUrl: string | undefined): boolean {
      if (!baseUrl) return false;
      try {
        const url = new URL(baseUrl);
        return (
          url.hostname === 'generativelanguage.googleapis.com' &&
          /(^|\/)openai(\/|$)/.test(url.pathname)
        );
      } catch {
        return false;
      }
    }
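The gap between the flagged substring match and the suggested fix can be demonstrated directly. In this standalone demo, naiveCheck and strictCheck are illustrative names rather than code from the PR; naiveCheck mirrors the flagged includes() check and strictCheck mirrors the suggested fix:

```typescript
// Substring match, as flagged in the finding above.
const naiveCheck = (u: string): boolean =>
  u.includes('generativelanguage.googleapis.com');

// Strict URL parsing with exact hostname and OpenAI-compat path check.
const strictCheck = (u: string): boolean => {
  try {
    const url = new URL(u);
    return (
      url.hostname === 'generativelanguage.googleapis.com' &&
      /(^|\/)openai(\/|$)/.test(url.pathname)
    );
  } catch {
    return false;
  }
};

const genuine = 'https://generativelanguage.googleapis.com/v1beta/openai/';
const spoofed =
  'https://generativelanguage.googleapis.com.evil.example/v1beta/openai/';

console.log(naiveCheck(spoofed));  // true: the spoofed host slips through
console.log(strictCheck(spoofed)); // false: exact hostname match rejects it
console.log(strictCheck(genuine)); // true
```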

Summary

  • Review mode: initial
  • 1 issue found (Major): Gemini endpoint detection is too permissive and can incorrectly normalize model IDs.
  • Context docs status: docs/VISION.md and docs/PRINCIPLES.md are not present in this checkout (Not found in repo/docs).

Testing

  • Not run (automation)

open-codesign Bot

Comment thread packages/providers/src/gemini-compat.ts Outdated

export function isGeminiOpenAICompat(baseUrl: string | undefined): boolean {
  if (!baseUrl) return false;
  return baseUrl.includes('generativelanguage.googleapis.com');
Contributor


[Major] includes('generativelanguage.googleapis.com') accepts spoofed/non-canonical URLs and can trigger unintended models/ stripping. Prefer strict URL parsing with exact hostname and OpenAI-compat path checks.

Suggested fix:

export function isGeminiOpenAICompat(baseUrl: string | undefined): boolean {
  if (!baseUrl) return false;
  try {
    const url = new URL(baseUrl);
    return (
      url.hostname === 'generativelanguage.googleapis.com' &&
      /(^|\/)openai(\/|$)/.test(url.pathname)
    );
  } catch {
    return false;
  }
}

…low-up)

Signed-off-by: hqhq1025 <1506751656@qq.com>
Contributor

@github-actions Bot left a comment


Findings

  • [Major] Gemini endpoint detection is still too broad and can normalize model IDs on non-OpenAI-compat paths. isGeminiOpenAICompat now matches by hostname only, so a URL like https://generativelanguage.googleapis.com/v1beta/models still returns true and has models/ stripped, causing incorrect model routing and avoidable PROVIDER_MODEL_UNKNOWN errors. Evidence: packages/providers/src/gemini-compat.ts:14, packages/providers/src/gemini-compat.ts:23; usage at packages/providers/src/index.ts:228.
    Suggested fix:
    export function isGeminiOpenAICompat(baseUrl: string | undefined): boolean {
      if (!baseUrl) return false;
      try {
        const url = new URL(baseUrl);
        return (
          url.hostname === 'generativelanguage.googleapis.com' &&
          /(^|\/)openai(\/|$)/.test(url.pathname)
        );
      } catch {
        return false;
      }
    }

Summary

  • Review mode: follow-up after new commits
  • 1 issue found (Major): host-only Gemini endpoint detection can trigger unintended model-id normalization outside OpenAI-compat paths.
  • Context docs status: docs/VISION.md and docs/PRINCIPLES.md are not present in this checkout (Not found in repo/docs).

Testing

  • Not run (automation)

open-codesign Bot

  if (!baseUrl) return false;
  try {
    const { hostname } = new URL(baseUrl);
    return (
Contributor


[Major] This matcher currently keys off hostname only, so non-OpenAI-compat URLs on the same host (for example /v1beta/models) still trigger models/ stripping. That can misroute model lookup and synthesize the wrong id.

Suggested fix:

export function isGeminiOpenAICompat(baseUrl: string | undefined): boolean {
  if (!baseUrl) return false;
  try {
    const url = new URL(baseUrl);
    return (
      url.hostname === 'generativelanguage.googleapis.com' &&
      /(^|\/)openai(\/|$)/.test(url.pathname)
    );
  } catch {
    return false;
  }
}

@hqhq1025 hqhq1025 merged commit 003d81b into main Apr 23, 2026
7 checks passed
@hqhq1025 hqhq1025 deleted the fix/gemini-models-prefix-175 branch April 23, 2026 03:12

Labels

area:providers packages/providers (pi-ai adapter, model calls)

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[Bug] 400 Bad Request error when using Google Gemini API via Custom Provider (OpenAI Chat protocol)

2 participants