feat(ai): Add OpenAI Responses API support (preserving existing compatibility) #44
Conversation
📝 Walkthrough

This pull request adds support for OpenAI's Responses API.
Sequence Diagram(s)

```mermaid
sequenceDiagram
participant User
participant UI as SettingsPanel
participant Service as aiService
participant Proxy as proxy.ts
participant OpenAI as OpenAI API
User->>UI: Select API type (openai-responses)
UI->>Service: sendAIRequest(prompt, apiType='openai-responses')
alt openai-responses path
Service->>Service: Construct request with input, max_output_tokens
Service->>Proxy: POST /api/ai with apiType='openai-responses'
Proxy->>Proxy: Route to v1/responses
Proxy->>OpenAI: POST https://api.openai.com/v1/responses
OpenAI-->>Proxy: Response {output_text or output[]}
Proxy-->>Service: Proxied response
Service->>Service: Parse output_text or concatenate output[]
else openai path (existing)
Service->>Service: Construct request with model, messages
Service->>Proxy: POST /api/ai with apiType='openai'
Proxy->>Proxy: Route to v1/chat/completions
Proxy->>OpenAI: POST https://api.openai.com/v1/chat/completions
OpenAI-->>Proxy: Response {choices[0].message.content}
Proxy-->>Service: Proxied response
Service->>Service: Parse choices[0].message.content
end
Service-->>UI: Return parsed content
UI-->>User: Display result
```
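For orientation, here is a minimal TypeScript sketch of the branching the diagram describes. The names `buildRequestBody` and `ChatMessage` are illustrative assumptions, not the PR's actual helpers; the request fields follow the flow above (`input` + `max_output_tokens` for the Responses path, `messages` for Chat Completions).

```typescript
// Illustrative sketch only; not the PR's actual aiService code.
type AIApiType = 'openai' | 'openai-responses' | 'claude' | 'gemini';

interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Builds the provider-specific request body for the two OpenAI paths.
function buildRequestBody(
  apiType: AIApiType,
  model: string,
  messages: ChatMessage[],
  maxTokens: number,
): Record<string, unknown> {
  if (apiType === 'openai-responses') {
    // Responses API path: flatten the conversation into `input` and
    // cap generation with `max_output_tokens`.
    return {
      model,
      input: messages.map((m) => m.content).join('\n'),
      max_output_tokens: maxTokens,
    };
  }
  // Existing Chat Completions path: `messages` plus `max_tokens`.
  return { model, messages, max_tokens: maxTokens };
}
```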
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ Passed checks (5 passed)
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/types/index.ts (1)
69-80: Extract `AIApiType` instead of repeating the union.

This new literal already had to be updated in the shared type, the form state, and `AIService`. A dedicated alias will keep the next provider addition from drifting across files.

♻️ Proposed refactor
```diff
+export type AIApiType = 'openai' | 'openai-responses' | 'claude' | 'gemini';
+
 export interface AIConfig {
   id: string;
   name: string;
-  apiType?: 'openai' | 'openai-responses' | 'claude' | 'gemini'; // API 格式/兼容协议(默认 openai)
+  apiType?: AIApiType; // API 格式/兼容协议(默认 openai)
   baseUrl: string;
   apiKey: string;
   model: string;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/types/index.ts` around lines 69 - 80, Extract the union literal 'openai' | 'openai-responses' | 'claude' | 'gemini' into a named type alias (e.g., AIApiType) and replace the inline union in the AIConfig interface's apiType property with that alias; then update all other occurrences that currently repeat the union (e.g., form state types, AIService method signatures and any variables typed with that union) to reference AIApiType so adding a new provider only requires a single change to the alias.
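If the alias lands, its consumers might look roughly like this; `AIConfigFormState` and `sendAIRequest` are hypothetical stand-ins for the form state and service signature the comment mentions, not the repo's actual declarations.

```typescript
export type AIApiType = 'openai' | 'openai-responses' | 'claude' | 'gemini';

// Hypothetical form-state shape; the real field names may differ.
interface AIConfigFormState {
  name: string;
  apiType: AIApiType; // previously a repeated inline union
}

// Hypothetical service signature reusing the same alias, so adding a
// provider only requires extending AIApiType in one place.
declare function sendAIRequest(prompt: string, apiType: AIApiType): Promise<string>;
```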
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: d1f9c091-63d7-4174-8068-c8c07bb810a2
📒 Files selected for processing (4)
- server/src/routes/proxy.ts
- src/components/SettingsPanel.tsx
- src/services/aiService.ts
- src/types/index.ts
```typescript
if (apiType === 'openai' || apiType === 'openai-responses') {
  targetUrl = buildApiUrl(baseUrl, apiType === 'openai-responses' ? 'v1/responses' : 'v1/chat/completions');
```
🧩 Analysis chain
🏁 Script executed:
```shell
cat -n server/src/routes/proxy.ts | sed -n '95,115p'
```

Repository: AmintaCCCP/GithubStarsManager
Length of output: 1136
🏁 Script executed:
```shell
cat -n server/src/routes/proxy.ts | grep -A 10 "timeout:" | head -20
```

Repository: AmintaCCCP/GithubStarsManager
Length of output: 703
🏁 Script executed:
```shell
wc -l server/src/routes/proxy.ts
```

Repository: AmintaCCCP/GithubStarsManager
Length of output: 103
🏁 Script executed:
```shell
rg -A 5 "proxyRequest" server/src/routes/proxy.ts
```

Repository: AmintaCCCP/GithubStarsManager
Length of output: 883
🌐 Web query:
OpenAI Responses API timeout duration reasoning models documentation
💡 Result:
OpenAI’s docs don’t define a single fixed server-side “Responses API timeout duration” for reasoning models; instead they recommend Background mode because reasoning / high-compute requests can take several minutes, which can exceed typical client/proxy timeouts. [1], [2]
What the docs do specify are the client-side SDK default timeouts (important when calling the Responses API with reasoning models):

- OpenAI Python SDK (openai-python): "By default requests time out after 10 minutes", configurable via `timeout` (float or `httpx.Timeout`). [3]
- OpenAI JavaScript/TypeScript SDK (openai-node): "Requests time out after 10 minutes by default", configurable via `timeout`. [4]
Docs that tie this directly to reasoning models:
- Model docs for GPT-5 pro / GPT-5.2 pro note that some requests may take several minutes and explicitly say to use background mode to avoid timeouts. [2]
Sources
[1] Background mode guide (Responses API) — developers.openai.com/docs/guides/background (platform.openai.com)
[2] GPT-5.2 pro / GPT-5 pro model docs — developers.openai.com (developers.openai.com)
[3] openai-python README “Timeouts” section — github.com/openai/openai-python (github.com)
[4] openai-node README “Timeouts” section — github.com/openai/openai-node (github.com)
Citations:
- 1: https://platform.openai.com/docs/guides/background
- 2: https://developers.openai.com/api/docs/models/gpt-5.2-pro?utm_source=openai
- 3: https://github.com/openai/openai-python
- 4: https://github.com/openai/openai-node
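As a concrete illustration of the default cited in [4], the client-side timeout can be set explicitly in openai-node. This is a sketch assuming a recent SDK version that exposes `client.responses.create`; the model name is a placeholder.

```typescript
import OpenAI from 'openai';

// openai-node defaults to a 10-minute request timeout; setting it
// explicitly documents the expectation for long-running reasoning calls.
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  timeout: 600_000, // milliseconds (10 minutes)
});

async function main() {
  const response = await client.responses.create({
    model: 'gpt-5', // placeholder model name
    input: 'Summarize the repository in one sentence.',
  });
  console.log(response.output_text);
}

main();
```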
🏁 Script executed:
```shell
git log --oneline -n 20 -- server/src/routes/proxy.ts
```

Repository: AmintaCCCP/GithubStarsManager
Length of output: 132
🏁 Script executed:
```shell
rg -B 3 -A 3 "timeout" server/src/routes/proxy.ts
```

Repository: AmintaCCCP/GithubStarsManager
Length of output: 351
Long-running /responses requests will fail with the 60-second timeout.
The new openai-responses route inherits the existing 60,000 ms timeout from the shared proxy path. OpenAI's documentation confirms that reasoning models in the Responses API can take several minutes to complete, and the official SDKs default to 10-minute timeouts. Valid requests will fail due to this constraint.
Fix

```diff
 const result = await proxyRequest({
   url: targetUrl,
   method: 'POST',
   headers,
   body: requestBody,
-  timeout: 60000,
+  timeout: apiType === 'openai-responses' ? 600000 : 60000,
 });
```

Consider 600,000 ms (10 minutes) to match OpenAI SDK defaults, or higher for models with longer reasoning durations.
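One way to keep that branching from spreading is to centralize the choice. A minimal sketch under the same assumptions as the diff above; the helper name is hypothetical:

```typescript
// Hypothetical helper: centralizes the timeout choice suggested above.
function resolveProxyTimeout(apiType: string, targetUrl: string): number {
  // Responses API calls may run for minutes (reasoning models), so give
  // them the 10-minute budget; everything else keeps the existing 60 s.
  if (apiType === 'openai-responses' || targetUrl.includes('v1/responses')) {
    return 600_000;
  }
  return 60_000;
}
```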
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@server/src/routes/proxy.ts` around lines 103 - 104, The proxy currently
applies the shared 60,000 ms timeout to all OpenAI routes, causing long-running
"openai-responses" requests to be cut off; update the proxy logic that sets the
request timeout (where apiType and targetUrl are used, e.g., the branch checking
apiType === 'openai' || apiType === 'openai-responses' and the code that
attaches the timeout value) to detect when apiType === 'openai-responses' (or
when targetUrl contains 'v1/responses') and set a longer timeout (e.g., 600000
ms) for those requests while leaving other routes at the existing 60000 ms.
Change description

Adds an optional OpenAI Responses call channel to the AI configuration, without affecting the existing OpenAI Chat Completions / Claude / Gemini paths.
Specific changes

- Frontend: the AI configuration form gains a new API format option.
- `AIService` gains an `openai-responses` branch: requests go to `/v1/responses` with `input` + `max_output_tokens`; responses are read from `output_text`, falling back to parsing `output[].content[].text` (see the parsing sketch below).
- Backend proxy: `server/src/routes/proxy.ts` adds matching support, forwarding to `/v1/responses` when `api_type === 'openai-responses'`.
- Type update: `AIConfig.apiType` is extended to `'openai' | 'openai-responses' | 'claude' | 'gemini'`.

Compatibility

Defaults to `openai` (chat completions); existing configurations require no changes.
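A minimal sketch of that parsing order, assuming response shapes inferred from the description above (not the actual aiService code):

```typescript
// Assumed shape of a Responses API result, per the description above.
interface ResponsesApiResult {
  output_text?: string;
  output?: Array<{ content?: Array<{ text?: string }> }>;
}

// Prefer the convenience field, then fall back to concatenating the
// text parts of each output item.
function extractResponseText(result: ResponsesApiResult): string {
  if (result.output_text) {
    return result.output_text;
  }
  return (result.output ?? [])
    .flatMap((item) => item.content ?? [])
    .map((part) => part.text ?? '')
    .join('');
}
```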
Verification

npm run build

Closes #36
Summary by CodeRabbit

Release Notes

- New Features: AI configurations can now select OpenAI's Responses API (openai-responses) as an API format, alongside the existing OpenAI Chat Completions, Claude, and Gemini options.