Reorganize gemini model configs; Refine Gemini reasoning content parsing; Markdown rendering of <a> fixed #807
Conversation
Walkthrough

Expanded and restructured Gemini model configurations (new Flash / Flash‑Lite / preview variants), adjusted token/context caps and image-generation types, added thinkingBudget and reasoning flags, updated Gemini provider streaming/response logic and tool-call events, added "tree" as a read keyword, and a CSS inline rule for Markdown-in-link wrapping.
Sequence Diagram(s)

sequenceDiagram
autonumber
actor UI as UI
participant GP as GeminiProvider
participant GA as Gemini API
UI->>GP: request generation (messages, modelId, config)
GP->>GP: lookup modelConfig (type, reasoning, thinkingBudget)
GP->>GP: buildGenerationConfig (include thinkingBudget, modalities)
GP->>GA: stream.generateContent(request)
GA-->>GP: streaming chunks (text, thought, tool_calls, images)
alt tool call detected
GP-->>UI: tool_call_start
GP-->>UI: tool_call_chunk
GP-->>UI: tool_call_end
UI->>GP: tool result (to continue generation)
GP->>GA: continue stream with tool result
end
alt reasoning content present
GP-->>UI: reasoning_content (new format or legacy <think>)
end
GP-->>UI: final completion
sequenceDiagram
autonumber
actor UI as UI
participant GP as GeminiProvider
participant GA as Gemini API
Note over GP,GA: Image generation (multi-modality)
UI->>GP: generateImage(prompt, modelId)
GP->>GP: getGenerationConfig (responseModalities: [TEXT,IMAGE])
GP->>GA: stream.generateContent(image request)
GA-->>GP: image chunks (binary/base64)
GP-->>UI: image chunk(s) and completion
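A rough sketch of what the buildGenerationConfig step in the diagrams above could look like. Field names follow my reading of @google/genai's GenerateContentConfig (thinkingConfig, maxOutputTokens); the ResolvedModelConfig type and the demo wiring are illustrative, not the repo's actual ModelConfig.

```ts
// Sketch only — how a resolved model config could drive the Gemini request.
// Verify field names against the @google/genai version pinned in this repo.
import { GoogleGenAI } from '@google/genai'

interface ResolvedModelConfig {
  maxTokens: number
  reasoning?: boolean
  thinkingBudget?: number // -1 means "dynamic thinking" in the config tables
}

const buildGenerationConfig = (model: ResolvedModelConfig, temperature?: number) => ({
  temperature,
  maxOutputTokens: model.maxTokens,
  // Reasoning models: request thought summaries and pass the configured budget through.
  ...(model.reasoning
    ? { thinkingConfig: { includeThoughts: true, thinkingBudget: model.thinkingBudget ?? -1 } }
    : {})
  // Image-generation models would additionally request TEXT + IMAGE response modalities.
})

async function demo(): Promise<void> {
  const genAI = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY ?? '' })
  const stream = await genAI.models.generateContentStream({
    model: 'models/gemini-2.5-flash',
    contents: [{ role: 'user', parts: [{ text: 'Hello' }] }],
    config: buildGenerationConfig({ maxTokens: 65535, reasoning: true, thinkingBudget: -1 })
  })
  for await (const chunk of stream) {
    // Thought parts arrive alongside text parts when includeThoughts is enabled.
    console.log(chunk.text)
  }
}

void demo()
```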
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Actionable comments posted: 4
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (5)
src/main/presenter/configPresenter/providerModelSettings.ts (1)
185-194: Mark image-preview as ImageGeneration to enable correct pipeline

This model is an image generator; without type: ModelType.ImageGeneration, downstream code may treat it as chat and break rendering/streaming paths.

Apply:

      vision: true,
      functionCall: false,
    - reasoning: false
    + reasoning: false,
    + type: ModelType.ImageGeneration

src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (4)
386-397: Enum-string comparison bug drops/incorrectly applies safety settings

threshold is a HarmBlockThreshold enum, but comparisons use string literals, so filtering never works as intended.

Apply:

    - if (
    -   threshold &&
    -   category &&
    -   threshold !== 'BLOCK_NONE' &&
    -   threshold !== 'HARM_BLOCK_THRESHOLD_UNSPECIFIED'
    - ) {
    + if (
    +   threshold !== undefined &&
    +   category !== undefined &&
    +   threshold !== HarmBlockThreshold.BLOCK_NONE &&
    +   threshold !== HarmBlockThreshold.HARM_BLOCK_THRESHOLD_UNSPECIFIED
    + ) {
        safetySettings.push({ category, threshold })
      }
79-101: Duplicate static model entry: gemini-2.5-flash-lite

GEMINI_MODELS lists models/gemini-2.5-flash-lite twice. This can create duplicate UI rows and inconsistent enablement.

Apply:

    - {
    -   id: 'models/gemini-2.5-flash-lite',
    -   name: 'Gemini 2.5 Flash-Lite',
    -   group: 'default',
    -   providerId: 'gemini',
    -   isCustom: false,
    -   contextLength: 1048576,
    -   maxTokens: 65535,
    -   vision: true,
    -   functionCall: true,
    -   reasoning: true
    - },
893-899: Detect image models by type, not hard-coded ID

Current check misses google/gemini-2.5-flash-image-preview and future variants.

Apply:

    - const isImageGenerationModel = modelId === 'models/gemini-2.0-flash-preview-image-generation'
    + const isImageGenerationModel = modelConfig?.type === ModelType.ImageGeneration
307-312: Hard-coded test model in check()

Use first available model or a configured default to avoid 404s when the hard-coded model is unavailable.

Apply:

    - const result = await this.genAI.models.generateContent({
    -   model: 'models/gemini-1.5-flash-8b',
    + const testModel = this.models?.[0]?.id ?? 'gemini-2.5-pro'
    + const result = await this.genAI.models.generateContent({
    +   model: testModel,
🧹 Nitpick comments (6)
src/renderer/src/components/markdown/MarkdownRenderer.vue (1)
45-45: Replace tab with spaces for consistent indentation

Tabs here likely triggered the Prettier failure; align with surrounding 2-space indentation.

    - content: v.node.code,
    + content: v.node.code,

src/main/presenter/configPresenter/modelDefaultSettings.ts (1)
339-351: Non-English comments violate repo guidelines

Comments should be English per repo rules for src/**/*.ts.
Would you like a follow-up patch converting these comment lines to English?
Also applies to: 353-363, 389-399, 401-410, 412-421, 423-432, 434-444, 446-459
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (4)
971-975: Excessive debug logging of candidates (PII risk + perf)

Dumping full chunks every tick is noisy and may leak sensitive data. Remove or guard behind DEBUG.

Apply:

    - console.log('chunk.candidates', JSON.stringify(chunk.candidates, null, 2))
    + // debug: uncomment if needed
    + // this.logger?.debug?.('gemini chunk', { hasCandidates: !!chunk.candidates })
280-290: Logs and messages should be English and structured

Repo guidelines require English logs with levels/timestamps; current logs are mixed-language and unstructured.

Recommendation:

- Replace console.* with a shared logger (INFO/WARN/ERROR/DEBUG).
- Convert messages to English and include context (providerId, modelId, requestId).

Want a patch adding a minimal logger and updating these call sites?

Also applies to: 313-315, 331-334, 363-366, 715-716, 856-874, 1176-1182
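For reference, a minimal sketch of the kind of shared logger this recommendation points at. The module name, shape, and transport are placeholders, not something this PR adds.

```ts
// logger.ts — minimal structured logger sketch (hypothetical; adapt to the project)
type Level = 'DEBUG' | 'INFO' | 'WARN' | 'ERROR'

interface LogContext {
  providerId?: string
  modelId?: string
  requestId?: string
  [key: string]: unknown
}

const log = (level: Level, message: string, context: LogContext = {}): void => {
  // Structured, timestamped entry; swap console for a file/IPC transport as needed.
  const line = JSON.stringify({ ts: new Date().toISOString(), level, message, ...context })
  if (level === 'ERROR') console.error(line)
  else if (level === 'WARN') console.warn(line)
  else console.log(line)
}

export const logger = {
  debug: (msg: string, ctx?: LogContext) => log('DEBUG', msg, ctx),
  info: (msg: string, ctx?: LogContext) => log('INFO', msg, ctx),
  warn: (msg: string, ctx?: LogContext) => log('WARN', msg, ctx),
  error: (msg: string, ctx?: LogContext) => log('ERROR', msg, ctx)
}

// Example call-site replacement:
// logger.warn('Gemini stream returned no candidates', { providerId: 'gemini', modelId })
```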
656-676: Duplicate think-parsing logic

completions() re-implements logic already in processGeminiResponse(). Use the helper to reduce drift.

Apply:

    - // 处理响应内容,支持新格式的思考内容
    - let textContent = ''
    - let thoughtContent = ''
    - ...
    - resultResp.content = textContent
    - if (thoughtContent) {
    -   resultResp.reasoning_content = thoughtContent
    - }
    -
    - return resultResp
    + return this.processGeminiResponse(result)

Also applies to: 718-757
293-299: Outdated comment contradicts implementation

You do call models.list() above. Update the comment to avoid confusion.

Suggested text: “Override fetchModels() to return the in-memory list populated by fetchProviderModels().”
📒 Files selected for processing (4)

- src/main/presenter/configPresenter/modelDefaultSettings.ts (4 hunks)
- src/main/presenter/configPresenter/providerModelSettings.ts (6 hunks)
- src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (6 hunks)
- src/renderer/src/components/markdown/MarkdownRenderer.vue (4 hunks)
🪛 GitHub Actions: PR Check
src/renderer/src/components/markdown/MarkdownRenderer.vue
[error] 1-1: Step 'pnpm run format:check' failed. Prettier reported formatting issues in 'src/renderer/src/components/markdown/MarkdownRenderer.vue'. Run 'prettier --write' to fix.
🔇 Additional comments (1)
src/main/presenter/configPresenter/providerModelSettings.ts (1)
177-183: Token/context normalization to 65535/1048576 looks consistent

Changes align Gemini entries to binary limits and are consistent across Pro/Flash/Flash-Lite and 2.0/1.5 families. No issues spotted.
Also applies to: 199-205, 210-216, 217-227, 232-238, 243-249, 266-286
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (2)
942-952: Wrap streaming call with try/catch and emit standardized error events.

Prevents unhandled promise rejections and follows the provider error-event contract.

    -   // 发送流式请求
    -   const result = await this.genAI.models.generateContentStream(requestParams)
    +   // 发送流式请求
    +   let result
    +   try {
    +     result = await this.genAI.models.generateContentStream(requestParams)
    +   } catch (err) {
    +     console.error('Gemini stream error:', err)
    +     yield {
    +       type: 'error',
    +       error_message: err instanceof Error ? err.message : String(err)
    +     }
    +     yield { type: 'stop', stop_reason: 'error' }
    +     return
    +   }
        // 状态变量
373-381: Bug: comparing HarmBlockThreshold enum to strings.
threshold is an enum; string comparisons always fail, silently disabling safety settings.

    - if (
    -   threshold &&
    -   category &&
    -   threshold !== 'BLOCK_NONE' &&
    -   threshold !== 'HARM_BLOCK_THRESHOLD_UNSPECIFIED'
    - ) {
    + if (
    +   threshold !== undefined &&
    +   category !== undefined &&
    +   threshold !== HarmBlockThreshold.BLOCK_NONE &&
    +   threshold !== HarmBlockThreshold.HARM_BLOCK_THRESHOLD_UNSPECIFIED
    + ) {
        safetySettings.push({ category, threshold })
      }
🧹 Nitpick comments (4)
src/main/presenter/configPresenter/providerModelSettings.ts (2)
198-206: Name consistency: “Flash-Lite” vs “Flash Lite”.

Use one style across files (providerModelSettings and geminiProvider). Suggest “Flash-Lite”.

    - name: 'Gemini 2.5 Flash Lite',
    + name: 'Gemini 2.5 Flash-Lite',

Also applies to: 218-228
4-20: English-only comments/logs per repo guidelines.

Replace non-English comments in this TS file with English.
Also applies to: 170-176, 290-296
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (2)
61-61: Static model metadata updates for Gemini: LGTM.

Caps/flags/types align with config. Names should match “Flash-Lite” consistently with providerModelSettings.
Also applies to: 73-77, 79-89, 96-101, 108-113, 120-125, 127-137, 139-150
1-20: Use English for logs/comments and adopt structured logging.

Replace Chinese logs/comments; prefer a logger with levels and structured fields per repo guidelines.
Also applies to: 262-279, 314-323, 351-356, 768-775, 804-809, 858-863, 1164-1171
📒 Files selected for processing (2)

- src/main/presenter/configPresenter/providerModelSettings.ts (6 hunks)
- src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (7 hunks)
🔇 Additional comments (4)
src/main/presenter/configPresenter/providerModelSettings.ts (2)
177-177: Token cap adjustment to 65535 is correct (2^16−1).

No issues.
200-201: Context/token updates look good.

Values align with provider changes and geminiProvider’s static list.
Also applies to: 211-211, 233-233, 244-244, 270-270, 281-281
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (2)
881-882: Image-generation detection via modelConfig.type is correct.

This decouples logic from model IDs.
949-951: Resolved: ModelConfig already declares optional thinkingBudget.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (3)
373-385: Bug: Comparing HarmBlockThreshold enum to strings is always true

threshold is an enum; comparing to 'BLOCK_NONE'/'HARM_BLOCK_THRESHOLD_UNSPECIFIED' strings never matches, so filters won’t work.
Apply:
    - if (
    -   threshold &&
    -   category &&
    -   threshold !== 'BLOCK_NONE' &&
    -   threshold !== 'HARM_BLOCK_THRESHOLD_UNSPECIFIED'
    - ) {
    + if (
    +   threshold !== undefined &&
    +   category !== undefined &&
    +   threshold !== HarmBlockThreshold.BLOCK_NONE &&
    +   threshold !== HarmBlockThreshold.HARM_BLOCK_THRESHOLD_UNSPECIFIED
    + ) {
456-471: Harden JSON.parse on tool call args

Untrusted tool args can be non-JSON; parsing without try/catch will throw and break formatting.

Apply:

    - args: JSON.parse(toolCall.function.arguments || '{}')
    + args: (() => {
    +   try { return JSON.parse(toolCall.function.arguments || '{}') } catch { return {} }
    + })()

    - args: part.function_call.arguments ? JSON.parse(part.function_call.arguments) : {}
    + args: (() => {
    +   try { return part.function_call.arguments ? JSON.parse(part.function_call.arguments) : {} }
    +   catch { return {} }
    + })()

Also applies to: 509-523
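Since the same guard is needed at both call sites noted above, a small shared helper would keep them readable; the name and placement are hypothetical, not part of this PR.

```ts
// Hypothetical helper: tolerant JSON parsing for tool-call arguments.
export function safeJsonParse<T extends object = Record<string, unknown>>(
  raw: string | undefined | null,
  fallback: T = {} as T
): T {
  if (!raw) return fallback
  try {
    return JSON.parse(raw) as T
  } catch (error) {
    console.warn('Failed to parse tool call arguments, using fallback:', error)
    return fallback
  }
}

// Usage at the two call sites:
// args: safeJsonParse(toolCall.function.arguments)
// args: safeJsonParse(part.function_call.arguments)
```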
768-781: User-facing error strings should be English

Per coding guidelines, logs/messages under src/**/*.{ts,tsx,vue} must be in English. Update the error strings, for example (apply similarly across the file):

    - return ['发生错误,无法获取建议']
    + return ['An error occurred and suggestions cannot be generated']

    - error_message: error instanceof Error ? error.message : '图像生成失败'
    + error_message: error instanceof Error ? error.message : 'Image generation failed'

Also applies to: 804-809, 860-863, 1164-1170
♻️ Duplicate comments (1)
src/main/presenter/configPresenter/modelDefaultSettings.ts (1)
365-375: Duplicate Flash-Lite entry resolved — LGTM

The prior duplicate is gone; a single canonical definition remains.
🧹 Nitpick comments (6)
src/main/presenter/configPresenter/modelDefaultSettings.ts (3)
353-363: Gemini 2.5 Flash entry looks good; consider adding legacy alias for robustness

Add 'google/gemini-2.5-flash' to match[] to catch older IDs returned by some gateways.

Apply:

    - match: ['models/gemini-2.5-flash', 'gemini-2.5-flash'],
    + match: ['models/gemini-2.5-flash', 'gemini-2.5-flash', 'google/gemini-2.5-flash'],
377-387: Preview variant: OK; add date-less alias first for priority matching

Place the generic alias first so loose matching prefers the stable nickname.

Apply:

    - match: ['models/gemini-2.5-flash-lite-preview-06-17', 'gemini-2.5-flash-lite-preview'],
    + match: ['gemini-2.5-flash-lite-preview', 'models/gemini-2.5-flash-lite-preview-06-17'],
422-432: Image preview model typed correctly; also add models/ alias

Some endpoints expose this as models/gemini-2.5-flash-image-preview.

Apply:

    - match: ['google/gemini-2.5-flash-image-preview', 'gemini-2.5-flash-image-preview'],
    + match: [
    +   'google/gemini-2.5-flash-image-preview',
    +   'models/gemini-2.5-flash-image-preview',
    +   'gemini-2.5-flash-image-preview'
    + ],

src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (3)
262-276: Log message is misleading; use English per repo guidelines

Message says “ignore modelId” but it’s used; also logs should be English.

Apply:

    - console.log('gemini ignore modelId', modelId)
    + console.debug('Gemini summaryTitles selected modelId:', modelId)
879-891: Noise: verbose console.log on hot paths; switch to debug or guard by level

These logs will spam during streaming; use debug and avoid dumping full candidates.

Apply:

    - console.log('modelConfig', modelConfig, modelId)
    + console.debug('Gemini coreStream model:', { id: modelId, reasoning: modelConfig?.reasoning, type: modelConfig?.type })

    - console.log('requestParams', requestParams)
    + console.debug('Gemini requestParams prepared')

    - console.log('chunk.candidates', JSON.stringify(chunk.candidates, null, 2))
    + // debug: uncomment if needed
    + // console.debug('chunk.candidates received')

Also applies to: 913-913, 959-959
950-951: Thought format detection default ties to config — OK; add runtime detection fallback

If upstream toggles reasoning per request, fall back to detecting thought parts even when config.reasoning=false.

Apply:

    - let isNewThoughtFormatDetected = modelConfig.reasoning === true
    + let isNewThoughtFormatDetected = !!modelConfig.reasoning
📒 Files selected for processing (3)

- src/main/presenter/configPresenter/modelDefaultSettings.ts (4 hunks)
- src/main/presenter/configPresenter/providerModelSettings.ts (6 hunks)
- src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (7 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- src/main/presenter/configPresenter/providerModelSettings.ts
🔇 Additional comments (3)
src/main/presenter/configPresenter/modelDefaultSettings.ts (1)
434-447: Image generation preview: functionCall=false is correct — LGTM

Type and flags align with provider logic.
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (2)
61-77: Static Gemini models synced with defaults — LGTM

Token/context caps and reasoning flags match the config table.
Also applies to: 79-101, 103-137, 139-150
881-887: Image generation routing — LGTM

Type check via modelConfig.type aligns with the config table and avoids brittle id checks.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/main/presenter/configPresenter/providerModelSettings.ts (3)
4-20: Use English for comments per repo guidelines.
Replace Chinese inline/JSDoc comments with concise English.

    -// 定义每个provider的模型匹配规则和配置的接口,与modelDefaultSettings保持一致的风格
    +// Interface for per-provider model matching rules and config. Keep style in sync with modelDefaultSettings.
    -  id: string // 模型ID
    +  id: string // Model ID
    -  name: string // 模型名称
    +  name: string // Model name
    -  match: string[] // 用于匹配模型ID的字符串数组
    +  match: string[] // Aliases used to match incoming model IDs
    -  maxTokens: number // 最大生成token数
    +  maxTokens: number // Max completion tokens
    -  contextLength: number // 上下文长度
    +  contextLength: number // Max context length
    -  vision?: boolean // 是否支持视觉
    +  vision?: boolean // Whether the model supports vision inputs
    -  functionCall?: boolean // 是否支持函数调用
    +  functionCall?: boolean // Whether tool/function calling is supported
    -  reasoning?: boolean // 是否支持推理能力
    +  reasoning?: boolean // Whether the model is a reasoning model
    -  type?: ModelType // 模型类型,默认为Chat
    +  type?: ModelType // Model type; defaults to Chat
    -  // GPT-5 系列新参数
    +  // GPT-5 family parameters
2653-2687: Fix matcher: substring collisions cause wrong config (e.g., flash vs flash‑lite). Add exact-first, longest-win scoring; improve defaults; add structured error logging.
Current includes()-based early return is order-dependent and fragile. Below eliminates collisions and aligns with logging/error-handling guidelines.

    export function getProviderSpecificModelConfig(
      providerId: string,
      modelId: string
    ): ModelConfig | undefined {
    -  // 将modelId转为小写以进行不区分大小写的匹配
    -  const lowerModelId = modelId.toLowerCase()
    -
    -  // 检查该提供商是否存在特定配置
    -  const providerSetting = providerModelSettings[providerId]
    -  if (!providerSetting || !providerSetting.models) {
    -    return undefined
    -  }
    -
    -  // 遍历该提供商的模型数组,查找匹配的模型配置
    -  for (const config of providerSetting.models) {
    -    // 检查是否有任何匹配条件符合
    -    if (config.match.some((matchStr) => lowerModelId.includes(matchStr.toLowerCase()))) {
    -      return {
    -        maxTokens: config.maxTokens,
    -        contextLength: config.contextLength,
    -        temperature: config.temperature, // 保持可选,某些模型不支持
    -        vision: config.vision || false,
    -        functionCall: config.functionCall || false,
    -        reasoning: config.reasoning || false,
    -        type: config.type || ModelType.Chat,
    -        reasoningEffort: config.reasoningEffort,
    -        verbosity: config.verbosity,
    -        maxCompletionTokens: config.maxCompletionTokens
    -      }
    -    }
    -  }
    -
    -  // 如果没有找到匹配的配置,返回undefined
    -  return undefined
    +  try {
    +    const providerSetting = providerModelSettings[providerId]
    +    if (!providerSetting?.models?.length) return undefined
    +
    +    const lowerModelId = modelId.toLowerCase()
    +
    +    let best: ProviderModelSetting | undefined
    +    let bestScore = -1
    +
    +    for (const config of providerSetting.models) {
    +      const candidates = [config.id, ...(config.match ?? [])]
    +      for (const raw of candidates) {
    +        const m = raw.toLowerCase()
    +        const isExact = lowerModelId === m
    +        const isContain = !isExact && lowerModelId.includes(m)
    +        if (isExact || isContain) {
    +          // exact match wins; otherwise prefer longer (more specific) alias
    +          const score = (isExact ? 1_000_000 : 0) + m.length
    +          if (score > bestScore) {
    +            bestScore = score
    +            best = config
    +          }
    +        }
    +      }
    +    }
    +
    +    if (!best) return undefined
    +
    +    return {
    +      maxTokens: best.maxTokens,
    +      contextLength: best.contextLength,
    +      temperature: best.temperature,
    +      vision: best.vision ?? false,
    +      functionCall: best.functionCall ?? false,
    +      reasoning: best.reasoning ?? false,
    +      type: best.type ?? ModelType.Chat,
    +      reasoningEffort: best.reasoningEffort,
    +      verbosity: best.verbosity,
    +      maxCompletionTokens: best.maxCompletionTokens
    +    }
    +  } catch (err) {
    +    const e = err as Error
    +    // Structured log; replace with project logger if available
    +    console.error({
    +      ts: new Date().toISOString(),
    +      level: 'ERROR',
    +      code: 'MODEL_CONFIG_LOOKUP_FAILED',
    +      msg: 'Failed to resolve provider model config',
    +      providerId,
    +      modelId,
    +      error: e.message,
    +      stack: e.stack
    +    })
    +    throw err
    +  }
    }
170-288: Add unit tests for ambiguous alias resolution
- Cover “gemini-2.5-flash-lite” vs “gemini-2.5-flash”
- Cover “gemini-2.5-flash-image-preview” vs “gemini-2.5-flash”
- Cover “gemini-2.5-flash-lite-preview” vs “gemini-2.5-flash-lite”
- Cover “gemini-2.0-flash” vs “gemini-2.0-flash-lite”
- Cover “gpt-5-mini” vs “gpt-5”
Ensure alias lookup returns the correct, most specific model ID.
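A sketch of that suite, assuming a Vitest-style runner; the import path is a placeholder and the assertions only use facts stated in the config tables above (the image-preview entry is typed ImageGeneration with functionCall: false).

```ts
import { describe, expect, it } from 'vitest'
// Import path is illustrative; point it at the real providerModelSettings module.
import { getProviderSpecificModelConfig } from '../providerModelSettings'

const resolve = (id: string) => getProviderSpecificModelConfig('gemini', id)

describe('gemini alias resolution', () => {
  it('resolves every flash-lite alias to the same entry', () => {
    const lite = resolve('models/gemini-2.5-flash-lite')
    expect(lite).toBeDefined()
    expect(resolve('gemini-2.5-flash-lite')).toEqual(lite)
  })

  it('does not let the image-preview id fall back to the plain flash entry', () => {
    const image = resolve('google/gemini-2.5-flash-image-preview')
    const flash = resolve('models/gemini-2.5-flash')
    expect(image?.functionCall).toBe(false)
    expect(image?.type).not.toBe(flash?.type)
  })

  it('distinguishes the dated flash-lite preview id', () => {
    expect(resolve('models/gemini-2.5-flash-lite-preview-06-17')).toBeDefined()
  })
})
```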
♻️ Duplicate comments (2)
src/main/presenter/configPresenter/providerModelSettings.ts (2)
185-195: Image-preview model correctly marked as ImageGeneration and functionCall disabled.
Parity achieved with provider routing logic.
2341-2351: OpenRouter Gemini image-preview now typed as ImageGeneration—resolved.
Matches earlier feedback; consistent with gemini provider entry.
🧹 Nitpick comments (2)
src/main/presenter/configPresenter/providerModelSettings.ts (2)
218-228: Optional: reorder more-specific aliases before general ones.
Even with the matcher fix, keeping “flash-lite” before “flash” is safer.

    - {
    -   id: 'models/gemini-2.5-flash',
    -   name: 'Gemini 2.5 Flash',
    -   ...
    - },
    - {
    -   id: 'models/gemini-2.5-flash-lite',
    -   name: 'Gemini 2.5 Flash Lite',
    -   ...
    - },
    + {
    +   id: 'models/gemini-2.5-flash-lite',
    +   name: 'Gemini 2.5 Flash Lite',
    +   ...
    + },
    + {
    +   id: 'models/gemini-2.5-flash',
    +   name: 'Gemini 2.5 Flash',
    +   ...
    + },

Also applies to: 208-217
197-206: Name consistency: “Flash Lite” vs “Flash‑Lite”.
Pick one hyphenation across entries to avoid UX confusion.

Also applies to: 218-228
📒 Files selected for processing (1)

- src/main/presenter/configPresenter/providerModelSettings.ts (7 hunks)
🔇 Additional comments (8)
src/main/presenter/configPresenter/providerModelSettings.ts (8)
174-183: Token cap tweak for Gemini 2.5 Pro is fine.
65535 looks intentional (avoid power-of-two boundary). No further action.
197-206: Flash‑Lite Preview caps updated—OK.
Values align with other 2.5 entries.
230-239: Gemini 2.0 Flash token cap adjusted—OK.
241-250: Gemini 2.0 Flash Lite token cap adjusted—OK.
252-265: Image-generation entry: functionCall disabled and type set—good.
This will route via image-generation path.
267-276: Gemini 1.5 Flash token cap adjusted—OK.
278-287: Gemini 1.5 Pro token cap adjusted—OK.
185-195: Verify provider parity for all Gemini image-preview models.
Gemini and OpenRouter entries now carry type: ImageGeneration and functionCall: false. Confirm any other providers (if added later) follow the same.

Also applies to: 2341-2351
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
src/main/presenter/mcpPresenter/toolManager.ts (2)
223-268: Avoid permission downgrade: make write keywords override read; restrict 'tree' to token matches

Current order classifies names like "delete_tree" as read because "tree" matches first. This weakens gating. Prefer write > read precedence and token-based matching to avoid false positives (e.g., "mtree").

Apply:

    - // 确定权限类型的新方法
    - private determinePermissionType(toolName: string): 'read' | 'write' | 'all' {
    -   const lowerToolName = toolName.toLowerCase()
    -
    -   // Read operations
    -   if (
    -     lowerToolName.includes('read') ||
    -     lowerToolName.includes('list') ||
    -     lowerToolName.includes('get') ||
    -     lowerToolName.includes('show') ||
    -     lowerToolName.includes('view') ||
    -     lowerToolName.includes('fetch') ||
    -     lowerToolName.includes('search') ||
    -     lowerToolName.includes('find') ||
    -     lowerToolName.includes('query') ||
    -     lowerToolName.includes('tree')
    -   ) {
    -     return 'read'
    -   }
    -
    -   // Write operations
    -   if (
    -     lowerToolName.includes('write') ||
    -     lowerToolName.includes('create') ||
    -     lowerToolName.includes('update') ||
    -     lowerToolName.includes('delete') ||
    -     lowerToolName.includes('modify') ||
    -     lowerToolName.includes('edit') ||
    -     lowerToolName.includes('remove') ||
    -     lowerToolName.includes('add') ||
    -     lowerToolName.includes('insert') ||
    -     lowerToolName.includes('save') ||
    -     lowerToolName.includes('execute') ||
    -     lowerToolName.includes('run') ||
    -     lowerToolName.includes('call') ||
    -     lowerToolName.includes('move') ||
    -     lowerToolName.includes('copy') ||
    -     lowerToolName.includes('mkdir') ||
    -     lowerToolName.includes('rmdir')
    -   ) {
    -     return 'write'
    -   }
    -
    -   // Default to write for safety (unknown operations require higher permissions)
    -   return 'write'
    - }
    + // Determine permission type (write > read precedence; token-based matching)
    + private determinePermissionType(toolName: string): 'read' | 'write' | 'all' {
    +   const lower = toolName.toLowerCase()
    +   // Split camelCase and then tokenize on non-alphanumerics
    +   const tokenized = lower.replace(/([a-z])([A-Z])/g, '$1 $2').split(/[^a-z0-9]+/g).filter(Boolean)
    +   const hasWord = (kw: string) =>
    +     tokenized.includes(kw) || new RegExp(`(?:^|[^a-z0-9])${kw}(?:$|[^a-z0-9])`).test(lower)
    +
    +   const READ_KWS = ['read', 'list', 'get', 'show', 'view', 'fetch', 'search', 'find', 'query', 'tree']
    +   const WRITE_KWS = [
    +     'write','create','update','delete','modify','edit','remove','add','insert',
    +     'save','execute','run','call','move','copy','mkdir','rmdir'
    +   ]
    +
    +   const hasWrite = WRITE_KWS.some(hasWord)
    +   if (hasWrite) return 'write'
    +
    +   const hasRead = READ_KWS.some(hasWord)
    +   if (hasRead) return 'read'
    +
    +   // Default to write for safety
    +   return 'write'
    + }
223-268: Add unit tests for ambiguous names (tree + write) to prevent regressions

Cover at least:
- 'tree' → read
- 'view_tree'/'tree-view' → read
- 'delete_tree'/'tree_delete'/'treeUpdate' → write
- 'mtree'/'directorytree' (no separator) → write (no token match)
I can draft a small Jest suite for determinePermissionType if helpful.
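A sketch of that suite, encoding the expected behavior after the write-over-read fix above; it assumes a Vitest/Jest-style runner, and the ToolManager export name and zero-argument construction are assumptions (the private method is reached via a cast).

```ts
import { describe, expect, it } from 'vitest'
// Class name and constructor stubbing are assumptions; adjust to toolManager.ts exports.
import { ToolManager } from '../toolManager'

// determinePermissionType is private, so the sketch reaches it through an `any` cast.
const manager = new (ToolManager as any)()
const perm = (name: string): 'read' | 'write' | 'all' =>
  (manager as any).determinePermissionType(name)

describe('determinePermissionType (expected behavior after the fix)', () => {
  it('classifies pure tree/listing tools as read', () => {
    expect(perm('tree')).toBe('read')
    expect(perm('view_tree')).toBe('read')
    expect(perm('tree-view')).toBe('read')
  })

  it('lets write keywords win over the tree read keyword', () => {
    expect(perm('delete_tree')).toBe('write')
    expect(perm('tree_delete')).toBe('write')
    expect(perm('treeUpdate')).toBe('write')
  })

  it('defaults to write when no keyword token matches', () => {
    expect(perm('mtree')).toBe('write')
    expect(perm('directorytree')).toBe('write')
  })
})
```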
🧹 Nitpick comments (1)
src/main/presenter/mcpPresenter/toolManager.ts (1)
46-47: Use English-only comments/logs and externalize user-facing strings via i18n
Non-ASCII characters detected at lines 46, 124–126, 207, 223, 270–281, 455–472, 482–495, 501–503, 505–510 in src/main/presenter/mcpPresenter/toolManager.ts. Replace Chinese comments and inlined messages with English and move all user-facing text into the internationalization framework.
📒 Files selected for processing (1)

- src/main/presenter/mcpPresenter/toolManager.ts (1 hunks)
Quoted context from src/renderer/src/components/markdown/MarkdownRenderer.vue:

      and force it to display inline, which fixes the line-wrapping bug.
      This preserves all of the link component's original styles (including color).
    */
    a .markdown-renderer {
This fix will probably need to be merged into the upstream component later. @Simon-He95
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (3)
374-381: Bug: comparing enums to strings prevents BLOCK_NONE from being filtered.
HarmBlockThreshold is an enum; current string comparisons always pass and add settings even when disabled.

    - if (
    -   threshold &&
    -   category &&
    -   threshold !== 'BLOCK_NONE' &&
    -   threshold !== 'HARM_BLOCK_THRESHOLD_UNSPECIFIED'
    - ) {
    + if (
    +   threshold !== undefined &&
    +   category !== undefined &&
    +   threshold !== HarmBlockThreshold.BLOCK_NONE &&
    +   threshold !== HarmBlockThreshold.HARM_BLOCK_THRESHOLD_UNSPECIFIED
    + ) {
        safetySettings.push({ category, threshold })
      }
463-469: Harden parsing of tool_call arguments.
Malformed JSON will throw and break formatting.

    - functionCall: {
    -   name: toolCall.function.name,
    -   args: JSON.parse(toolCall.function.arguments || '{}')
    - }
    + functionCall: {
    +   name: toolCall.function.name,
    +   args: (() => {
    +     try {
    +       return toolCall.function.arguments
    +         ? JSON.parse(toolCall.function.arguments)
    +         : {}
    +     } catch (e) {
    +       console.warn('Invalid tool_call arguments JSON:', e)
    +       return {}
    +     }
    +   })()
    + }
294-300: Use a known-available model id for health check.
Hardcoding 1.5‑flash‑8b may 404; prefer the first available model or a 2.5‑pro fallback.

    - const result = await this.genAI.models.generateContent({
    -   model: 'models/gemini-1.5-flash-8b',
    + const fallbackModel = this.models?.[0]?.id || 'gemini-2.5-pro'
    + const result = await this.genAI.models.generateContent({
    +   model: fallbackModel,
        contents: [{ role: 'user', parts: [{ text: 'Hello' }] }]
      })
♻️ Duplicate comments (6)
src/main/presenter/configPresenter/modelDefaultSettings.ts (3)
441-447: Set 1.5 Flash output cap to 8192.
Avoid an off-by-one cap on completions.

```diff
-    maxTokens: 8191,
+    maxTokens: 8192,
```

Confirm official output token limits for:
- models/gemini-1.5-flash
416-425: Set Flash‑Lite output cap to 8192.
8191 is off-by-one vs the documented 8192 for Flash family outputs.

```diff
-    maxTokens: 8191,
+    maxTokens: 8192,
```

Confirm official output token limits for:
- models/gemini-2.0-flash-lite
427-436: Set 2.0 Flash output cap to 8192.
Align with the Gemini docs.

```diff
-    maxTokens: 8191,
+    maxTokens: 8192,
```

Confirm official output token limits for:
- models/gemini-2.0-flash

src/main/presenter/configPresenter/providerModelSettings.ts (3)
270-276: Use 8192 for the 1.5 Pro/Flash family.

```diff
-    maxTokens: 8191,
+    maxTokens: 8192,
```
248-254: Use 8192 for 2.0 Flash Lite.

```diff
-    maxTokens: 8191,
+    maxTokens: 8192,
```
256-265: Use 8192 for 2.0 Flash.

```diff
-    maxTokens: 8191,
+    maxTokens: 8192,
```
🧹 Nitpick comments (6)
src/main/presenter/configPresenter/modelDefaultSettings.ts (1)
350-350: Use English in comments per guidelines.
Change the inline note to English.

```diff
-      thinkingBudget: -1 // 动态思维
+      thinkingBudget: -1 // dynamic thinking budget (auto)
```

src/main/presenter/configPresenter/providerModelSettings.ts (1)
15-20: Add thinkingBudget to provider model settings (optional).
Core stream reads modelConfig.thinkingBudget; exposing it here allows per-provider overrides.

```diff
 export interface ProviderModelSetting {
   id: string // 模型ID
   name: string // 模型名称
   match: string[] // 用于匹配模型ID的字符串数组
   maxTokens: number // 最大生成token数
   contextLength: number // 上下文长度
   temperature?: number // 温度参数
   vision?: boolean // 是否支持视觉
   functionCall?: boolean // 是否支持函数调用
   reasoning?: boolean // 是否支持推理能力
+  thinkingBudget?: number // optional reasoning budget for Gemini 2.5 family
   type?: ModelType // 模型类型,默认为Chat
```

And return it in getProviderSpecificModelConfig:

```diff
       maxCompletionTokens: config.maxCompletionTokens
+      ,thinkingBudget: (config as any).thinkingBudget
```

src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (4)
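On the consumer side, the provider would then only need to map the resolved field into its request. A minimal sketch with a hypothetical buildThinkingConfig helper and illustrative field names; the actual SDK config shape is not asserted here:

```ts
// Hypothetical helper: turn the optional per-model thinkingBudget into a
// request-level thinking config, leaving it unset when no budget is configured.
interface ResolvedModelConfig {
  reasoning?: boolean
  thinkingBudget?: number // -1 = dynamic, 0 = disabled, >0 = explicit token budget
}

function buildThinkingConfig(modelConfig: ResolvedModelConfig) {
  if (!modelConfig.reasoning || modelConfig.thinkingBudget === undefined) {
    return undefined
  }
  return { thinkingBudget: modelConfig.thinkingBudget }
}

// Usage sketch: only attach the block when it is defined.
// const thinkingConfig = buildThinkingConfig(modelConfig)
// const requestConfig = { ...baseConfig, ...(thinkingConfig ? { thinkingConfig } : {}) }
```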
262-263: Remove stray debug log.
Avoid noisy console output in production.

```diff
-    console.log('gemini ignore modelId', modelId)
+    // no-op
```
959-961: Drop per-chunk candidate logging; risks PII and heavy I/O.
Gate it behind a debug flag or remove it.

```diff
-        console.log('chunk.candidates', JSON.stringify(chunk.candidates, null, 2))
+        // debug: inspect chunk candidates if needed
```
1107-1169: Emit usage for image-generation streams too.
Parity with the text path; many UIs rely on usage events.

```diff
   private async *handleImageGenerationStream(
 @@
-    // 处理流式响应
+    // 处理流式响应
+    let usage: GenerateContentResponseUsageMetadata | undefined
     for await (const chunk of result) {
+      if (chunk.usageMetadata) usage = chunk.usageMetadata
       if (chunk.candidates && chunk.candidates[0]?.content?.parts) {
 @@
       }
+    if (usage) {
+      yield {
+        type: 'usage',
+        usage: {
+          prompt_tokens: usage.promptTokenCount || 0,
+          completion_tokens: usage.candidatesTokenCount || 0,
+          total_tokens: usage.totalTokenCount || 0
+        }
+      }
+    }
     // 发送停止事件
     yield { type: 'stop', stop_reason: 'complete' }
```
281-286: Comment inconsistency.
The method comment says Gemini lacks a model-list API, but the file uses models.list(); update the comment.

```diff
-  // 重载fetchModels方法,因为Gemini没有获取模型的API
+  // Override fetchModels; prefer API list(), fall back to cached models
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)
- src/main/presenter/configPresenter/modelDefaultSettings.ts (2 hunks)
- src/main/presenter/configPresenter/providerModelSettings.ts (4 hunks)
- src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (3 hunks)
🧰 Additional context used
📓 Path-based instructions (9)
**/*.{js,jsx,ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)
**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English
Files:
src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/configPresenter/modelDefaultSettings.ts
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
src/{main,renderer}/**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)
src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging
Files:
src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/configPresenter/modelDefaultSettings.ts
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
src/main/**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)
Use Electron's built-in APIs for file system and native dialogs
From main to renderer, broadcast events via EventBus using mainWindow.webContents.send()
Files:
src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/configPresenter/modelDefaultSettings.ts
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/error-logging.mdc)
**/*.{ts,tsx}: Always handle potential errors with try-catch
Provide meaningful error messages
Record detailed error logs
Degrade gracefully
Logs should include a timestamp, log level, error code, error description, stack trace (where applicable), and relevant context
Log levels should include ERROR, WARN, INFO, DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement an error retry mechanism
Avoid logging sensitive information
Use structured logging
Set appropriate log levels
Enable and adhere to strict TypeScript type checking
Files:
src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/configPresenter/modelDefaultSettings.ts
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
src/main/**/*.{ts,js,tsx,jsx}
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
Place main-process code under src/main
Files:
src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/configPresenter/modelDefaultSettings.ts
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
src/**/*.{ts,tsx,vue}
📄 CodeRabbit inference engine (CLAUDE.md)
Use English for all logs and comments
Files:
src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/configPresenter/modelDefaultSettings.ts
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
src/main/presenter/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Maintain one presenter per functional domain in src/main/presenter/
Files:
src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/configPresenter/modelDefaultSettings.ts
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
src/main/presenter/configPresenter/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Centralize configuration logic under configPresenter/
Files:
src/main/presenter/configPresenter/providerModelSettings.ts
src/main/presenter/configPresenter/modelDefaultSettings.ts
src/main/presenter/llmProviderPresenter/providers/*.ts
📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)
src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., prepareFunctionCallPrompt) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (tool_call_start, tool_call_chunk, tool_call_end) in the standardized format.
Provider implementations should yield stop events with an appropriate stop_reason in the standardized format (a sketch of these standardized events follows this list).
Provider implementations should yield error events in the standardized format...
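For readers skimming these rules, a hedged sketch of what such a standardized event union could look like; the event names come from the guidelines above, while the exact field names are assumptions rather than the project's actual type definitions:

```ts
// Illustrative union of standardized provider stream events (field names assumed).
type ProviderStreamEvent =
  | { type: 'text'; content: string }
  | { type: 'reasoning'; reasoning_content: string }
  | { type: 'tool_call_start'; tool_call_id: string; tool_call_name: string }
  | { type: 'tool_call_chunk'; tool_call_id: string; arguments_chunk: string }
  | { type: 'tool_call_end'; tool_call_id: string }
  | { type: 'image_data'; data: string; mimeType: string }
  | {
      type: 'usage'
      usage: { prompt_tokens: number; completion_tokens: number; total_tokens: number }
    }
  | { type: 'error'; error_message: string }
  | { type: 'stop'; stop_reason: string }
```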
Files:
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
🧠 Learnings (1)
📓 Common learnings
Learnt from: neoragex2002
PR: ThinkInAIXYZ/deepchat#550
File: src/main/presenter/mcpPresenter/inMemoryServers/meetingServer.ts:258-324
Timestamp: 2025-06-21T15:53:34.641Z
Learning: User neoragex2002 prefers concise, brief responses rather than chatty or verbose communication. Avoid lengthy explanations, excessive enthusiasm, or ASCII art in responses.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build-check (x64)
🔇 Additional comments (12)
src/main/presenter/configPresenter/modelDefaultSettings.ts (6)
339-339: Docs reference added—good.
Keeps Gemini section discoverable.
353-363: Marking Image Preview as ImageGeneration is correct.
Parity with providers; disables functionCall.
365-375: Lite-Preview entry looks consistent.
Matches array reflects the preview id; thinkingBudget retained.
377-387: Finalized Flash‑Lite entry OK.
Reasoning/thinkingBudget enabled; budgets align with the family.
389-399: 2.5 Flash entry OK.
Unified budgets; reasoning/thinking on.
401-414: 2.0 Flash Preview Image Generation entry is correct.
Explicit ImageGeneration type and no functionCall.

src/main/presenter/configPresenter/providerModelSettings.ts (6)
193-195: ImagePreview typed as ImageGeneration—good.
Ensures correct routing.
208-217: New Flash‑Lite entry is fine.
Ordering before “flash” reduces substring shadowing risk.
219-228: 2.5 Flash budgets unified—OK.
230-243: 2.0 Flash Preview Image Generation typed and non‑FC—good.
2349-2350: OpenRouter ImagePreview typed—thanks for parity.
2653-2687: Apply two-stage matching: exact first, then longest‐substring fallback
Verified no callers depend on the old plain includes() behavior; the refactor is safe.

```diff
-export function getProviderSpecificModelConfig(
+export function getProviderSpecificModelConfig(
   providerId: string,
   modelId: string
 ): ModelConfig | undefined {
   // 将modelId转为小写以进行不区分大小写的匹配
   const lowerModelId = modelId.toLowerCase()

   // 检查该提供商是否存在特定配置
   const providerSetting = providerModelSettings[providerId]
   if (!providerSetting || !providerSetting.models) {
     return undefined
   }

-  // 遍历该提供商的模型数组,查找匹配的模型配置
-  for (const config of providerSetting.models) {
-    // 检查是否有任何匹配条件符合
-    if (config.match.some((matchStr) => lowerModelId.includes(matchStr.toLowerCase()))) {
-      return {
-        maxTokens: config.maxTokens,
-        contextLength: config.contextLength,
-        temperature: config.temperature, // 保持可选,某些模型不支持
-        vision: config.vision || false,
-        functionCall: config.functionCall || false,
-        reasoning: config.reasoning || false,
-        type: config.type || ModelType.Chat,
-        reasoningEffort: config.reasoningEffort,
-        verbosity: config.verbosity,
-        maxCompletionTokens: config.maxCompletionTokens
-      }
-    }
-  }
+  // 两阶段匹配:先精确,再最长包含
+  type Cand = { config: ProviderModelSetting; score: number }
+  let best: Cand | null = null
+  for (const config of providerSetting.models) {
+    for (const raw of config.match) {
+      const m = raw.toLowerCase()
+      if (lowerModelId === m) {
+        best = { config, score: Number.POSITIVE_INFINITY }
+        break
+      }
+      if (lowerModelId.includes(m)) {
+        const score = m.length
+        if (!best || score > best.score) best = { config, score }
+      }
+    }
+    if (best?.score === Number.POSITIVE_INFINITY) break
+  }
+  if (best) {
+    const config = best.config
+    return {
+      maxTokens: config.maxTokens,
+      contextLength: config.contextLength,
+      temperature: config.temperature,
+      vision: config.vision || false,
+      functionCall: config.functionCall || false,
+      reasoning: config.reasoning || false,
+      type: config.type || ModelType.Chat,
+      reasoningEffort: config.reasoningEffort,
+      verbosity: config.verbosity,
+      maxCompletionTokens: config.maxCompletionTokens
+    }
+  }

   // 如果没有找到匹配的配置,返回undefined
   return undefined
 }
```
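To illustrate why the longest-substring fallback matters for the Gemini family (where "gemini-2.0-flash" is a substring of "gemini-2.0-flash-lite"), a self-contained sketch; the entries and ids are examples, not the real configuration:

```ts
// Example only: demonstrates the exact-then-longest-substring selection order.
const exampleEntries = [
  { id: 'flash-lite', match: ['gemini-2.0-flash-lite'] },
  { id: 'flash', match: ['gemini-2.0-flash'] }
]

function pickByLongestMatch(modelId: string): string | undefined {
  const lower = modelId.toLowerCase()
  let best: { id: string; score: number } | null = null
  for (const entry of exampleEntries) {
    for (const raw of entry.match) {
      const m = raw.toLowerCase()
      // Exact match wins outright; otherwise longer substrings beat shorter ones.
      const score = lower === m ? Number.POSITIVE_INFINITY : lower.includes(m) ? m.length : -1
      if (score > 0 && (!best || score > best.score)) best = { id: entry.id, score }
    }
  }
  return best?.id
}

// pickByLongestMatch('models/gemini-2.0-flash-lite') -> 'flash-lite'
// pickByLongestMatch('models/gemini-2.0-flash')      -> 'flash'
```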
Summary by CodeRabbit
New Features
Improvements
Bug Fixes