feat(ai): Add OpenAI Responses API support (preserving existing compatibility) #44

Merged
AmintaCCCP merged 1 commit into main from feat/openai-responses-support on Mar 6, 2026

Conversation

@AmintaCCCP (Owner) commented Mar 6, 2026

Change description

Adds an optional OpenAI Responses call path to the AI configuration, without affecting the existing OpenAI Chat Completions / Claude / Gemini integrations.

Specific changes

  1. The frontend AI settings gain an API format option:

    • OpenAI (Chat Completions)
    • OpenAI (Responses)
    • Claude
    • Gemini
  2. AIService adds an openai-responses branch (see the sketch after this list):

    • Request URL: /v1/responses
    • Request body: input + max_output_tokens
    • Response parsing: prefer output_text, fall back to parsing output[].content[].text
  3. The backend proxy server/src/routes/proxy.ts adds matching support:

    • When api_type === openai-responses, requests are forwarded to /v1/responses
  4. Type update:

    • AIConfig.apiType is extended to 'openai' | 'openai-responses' | 'claude' | 'gemini'
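
As a rough sketch of what the openai-responses branch does (the helper name, the fetch-based call, the import path, and the 1024 token limit are illustrative assumptions, not the actual AIService implementation):

  import { AIConfig } from '../types'; // path assumed

  // Illustrative sketch only; request/response shapes follow the description in this PR.
  async function callOpenAIResponses(config: AIConfig, prompt: string): Promise<string> {
    const res = await fetch(`${config.baseUrl}/v1/responses`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${config.apiKey}`,
      },
      body: JSON.stringify({
        model: config.model,
        input: prompt,              // Responses API takes `input` instead of `messages`
        max_output_tokens: 1024,    // and `max_output_tokens` instead of `max_tokens`
      }),
    });
    const data = await res.json();

    // Prefer output_text, fall back to concatenating output[].content[].text
    if (typeof data.output_text === 'string') return data.output_text;
    return (data.output ?? [])
      .flatMap((item: any) => item.content ?? [])
      .map((part: any) => part.text ?? '')
      .join('');
  }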

Compatibility

  • The default remains openai (Chat Completions); existing configurations need no changes.
  • This change only adds a new capability and does not alter existing behavior.
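
For illustration only (values are placeholders and the import path plus field set are assumptions based on the AIConfig excerpt quoted in the review below): an existing config without apiType keeps using Chat Completions, and opting in to the new channel just means setting apiType.

  import { AIConfig } from './types'; // path assumed

  // Existing config: no apiType, still treated as 'openai' (Chat Completions)
  const legacy: AIConfig = {
    id: 'cfg-1',
    name: 'Default OpenAI',
    baseUrl: 'https://api.openai.com',
    apiKey: 'sk-...',
    model: 'gpt-4o-mini',
  };

  // New config opting in to the Responses endpoint
  const responsesConfig: AIConfig = {
    ...legacy,
    id: 'cfg-2',
    name: 'OpenAI Responses',
    apiType: 'openai-responses',
  };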

Verification

  • Frontend build passes: npm run build

Closes #36

Summary by CodeRabbit

Release Notes

  • New Features
    • Added support for OpenAI Responses API endpoint as an alternative configuration option in AI settings, allowing users to select between standard OpenAI and OpenAI Responses endpoints for their integration.

@coderabbitai Bot (Contributor) commented Mar 6, 2026

📝 Walkthrough

This pull request adds support for OpenAI's /v1/responses API endpoint. A new API type 'openai-responses' is introduced through type definitions, UI settings, service logic, and backend routing to enable requests to this endpoint with its distinct request and response payload formats.

Changes

  • Type Definition (src/types/index.ts): Extended the AIConfig.apiType union to include 'openai-responses' as an allowed API type option.
  • UI Configuration (src/components/SettingsPanel.tsx): Added an "OpenAI (Responses)" option to the API format select component; updated placeholder logic to include '/responses' path guidance for the new API type.
  • Service Logic (src/services/aiService.ts): Extended the getApiType() return type and request construction to handle 'openai-responses': uses an input/max_output_tokens payload, routes to the v1/responses endpoint, and parses the response from output_text or concatenated output array elements.
  • Backend Proxy (server/src/routes/proxy.ts): Updated endpoint routing to use v1/responses for the 'openai-responses' apiType and v1/chat/completions for the standard 'openai' type.
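
A minimal sketch of the select described in the UI Configuration item above; the component name, props, and import path are assumptions for illustration, not the actual SettingsPanel code:

  import React from 'react';
  import { AIConfig } from '../types'; // path assumed

  // Hypothetical helper component: renders the four API format options.
  export function ApiTypeSelect(props: {
    value: AIConfig['apiType'];
    onChange: (next: AIConfig['apiType']) => void;
  }) {
    return (
      <select
        value={props.value ?? 'openai'}
        onChange={(e) => props.onChange(e.target.value as AIConfig['apiType'])}
      >
        <option value="openai">OpenAI (Chat Completions)</option>
        <option value="openai-responses">OpenAI (Responses)</option>
        <option value="claude">Claude</option>
        <option value="gemini">Gemini</option>
      </select>
    );
  }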

Sequence Diagram(s)

sequenceDiagram
    participant User
    participant UI as SettingsPanel
    participant Service as aiService
    participant Proxy as proxy.ts
    participant OpenAI as OpenAI API

    User->>UI: Select API type (openai-responses)
    UI->>Service: sendAIRequest(prompt, apiType='openai-responses')
    
    alt openai-responses path
        Service->>Service: Construct request with input, max_output_tokens
        Service->>Proxy: POST /api/ai with apiType='openai-responses'
        Proxy->>Proxy: Route to v1/responses
        Proxy->>OpenAI: POST https://api.openai.com/v1/responses
        OpenAI-->>Proxy: Response {output_text or output[]}
        Proxy-->>Service: Proxied response
        Service->>Service: Parse output_text or concatenate output[]
    else openai path (existing)
        Service->>Service: Construct request with model, messages
        Service->>Proxy: POST /api/ai with apiType='openai'
        Proxy->>Proxy: Route to v1/chat/completions
        Proxy->>OpenAI: POST https://api.openai.com/v1/chat/completions
        OpenAI-->>Proxy: Response {choices[0].message.content}
        Proxy-->>Service: Proxied response
        Service->>Service: Parse choices[0].message.content
    end
    
    Service-->>UI: Return parsed content
    UI-->>User: Display result

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐰 A new path through the garden of API endpoints!
The responses route now hops alongside completions,
Input and output dance in different formations,
While proxy directs each request with care—
OpenAI's new way, now within reach everywhere! 🌟

🚥 Pre-merge checks (5 passed)

  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check: ✅ Passed. The title clearly indicates the main change: adding OpenAI Responses API support while maintaining compatibility, which accurately reflects the core objective.
  • Linked Issues Check: ✅ Passed. The pull request implements support for the OpenAI Responses API (v1/responses endpoint) as requested in issue #36, while preserving the existing Chat Completions, Claude, and Gemini integrations.
  • Out of Scope Changes Check: ✅ Passed. All changes are directly related to adding OpenAI Responses API support; no unrelated or out-of-scope modifications were detected across type definitions, services, components, and proxy routing.
  • Docstring Coverage: ✅ Passed. No functions were found in the changed files, so the docstring coverage check was skipped.


@coderabbitai Bot (Contributor) left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
src/types/index.ts (1)

69-80: Extract AIApiType instead of repeating the union.

This new literal already had to be updated in the shared type, the form state, and AIService. A dedicated alias will keep the next provider addition from drifting across files.

♻️ Proposed refactor
+export type AIApiType = 'openai' | 'openai-responses' | 'claude' | 'gemini';
+
 export interface AIConfig {
   id: string;
   name: string;
-  apiType?: 'openai' | 'openai-responses' | 'claude' | 'gemini'; // API 格式/兼容协议(默认 openai)
+  apiType?: AIApiType; // API 格式/兼容协议(默认 openai)
   baseUrl: string;
   apiKey: string;
   model: string;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/types/index.ts` around lines 69 - 80, Extract the union literal 'openai'
| 'openai-responses' | 'claude' | 'gemini' into a named type alias (e.g.,
AIApiType) and replace the inline union in the AIConfig interface's apiType
property with that alias; then update all other occurrences that currently
repeat the union (e.g., form state types, AIService method signatures and any
variables typed with that union) to reference AIApiType so adding a new provider
only requires a single change to the alias.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: d1f9c091-63d7-4174-8068-c8c07bb810a2

📥 Commits

Reviewing files that changed from the base of the PR and between e7e9a03 and f34ef41.

📒 Files selected for processing (4)
  • server/src/routes/proxy.ts
  • src/components/SettingsPanel.tsx
  • src/services/aiService.ts
  • src/types/index.ts

Comment on lines +103 to +104
if (apiType === 'openai' || apiType === 'openai-responses') {
  targetUrl = buildApiUrl(baseUrl, apiType === 'openai-responses' ? 'v1/responses' : 'v1/chat/completions');

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n server/src/routes/proxy.ts | sed -n '95,115p'

Repository: AmintaCCCP/GithubStarsManager

Length of output: 1136


🏁 Script executed:

cat -n server/src/routes/proxy.ts | grep -A 10 "timeout:" | head -20

Repository: AmintaCCCP/GithubStarsManager

Length of output: 703


🏁 Script executed:

wc -l server/src/routes/proxy.ts

Repository: AmintaCCCP/GithubStarsManager

Length of output: 103


🏁 Script executed:

rg -A 5 "proxyRequest" server/src/routes/proxy.ts

Repository: AmintaCCCP/GithubStarsManager

Length of output: 883


🌐 Web query:

OpenAI Responses API timeout duration reasoning models documentation

💡 Result:

OpenAI’s docs don’t define a single fixed server-side “Responses API timeout duration” for reasoning models; instead they recommend Background mode because reasoning / high-compute requests can take several minutes, which can exceed typical client/proxy timeouts. [1], [2]

What is explicitly documented is the client-side SDK default timeouts (important when calling the Responses API with reasoning models):

  • OpenAI Python SDK (openai-python): “By default requests time out after 10 minutes”, configurable via timeout (float or httpx.Timeout). [3]
  • OpenAI JavaScript/TypeScript SDK (openai-node): “Requests time out after 10 minutes by default”, configurable via timeout. [4]

Docs that tie this directly to reasoning models:

  • Model docs for GPT-5 pro / GPT-5.2 pro note that some requests may take several minutes and explicitly say to use background mode to avoid timeouts. [2]

Sources
[1] Background mode guide (Responses API) — developers.openai.com/docs/guides/background (platform.openai.com)
[2] GPT-5.2 pro / GPT-5 pro model docs — developers.openai.com (developers.openai.com)
[3] openai-python README “Timeouts” section — github.com/openai/openai-python (github.com)
[4] openai-node README “Timeouts” section — github.com/openai/openai-node (github.com)

Citations:


🏁 Script executed:

git log --oneline -n 20 -- server/src/routes/proxy.ts

Repository: AmintaCCCP/GithubStarsManager

Length of output: 132


🏁 Script executed:

rg -B 3 -A 3 "timeout" server/src/routes/proxy.ts

Repository: AmintaCCCP/GithubStarsManager

Length of output: 351


/responses requests will fail with a 60-second timeout.

The new openai-responses route inherits the existing 60,000 ms timeout from the shared proxy path. OpenAI's documentation notes that reasoning models on the Responses API can take several minutes to complete, and the official SDKs default to 10-minute timeouts, so valid long-running requests will be cut off by this limit.

Fix
     const result = await proxyRequest({
       url: targetUrl,
       method: 'POST',
       headers,
       body: requestBody,
-      timeout: 60000,
+      timeout: apiType === 'openai-responses' ? 600000 : 60000,
     });

Consider 600,000 ms (10 minutes) to match OpenAI SDK defaults, or higher for models with longer reasoning durations.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@server/src/routes/proxy.ts` around lines 103 - 104, The proxy currently
applies the shared 60,000 ms timeout to all OpenAI routes, causing long-running
"openai-responses" requests to be cut off; update the proxy logic that sets the
request timeout (where apiType and targetUrl are used, e.g., the branch checking
apiType === 'openai' || apiType === 'openai-responses' and the code that
attaches the timeout value) to detect when apiType === 'openai-responses' (or
when targetUrl contains 'v1/responses') and set a longer timeout (e.g., 600000
ms) for those requests while leaving other routes at the existing 60000 ms.

AmintaCCCP merged commit f07b9ca into main on Mar 6, 2026
5 checks passed
AmintaCCCP deleted the feat/openai-responses-support branch on March 6, 2026 at 16:09

Development

Successfully merging this pull request may close these issues.

Is the new Responses-style API call not yet supported?

1 participant