
Conversation


@yyhhyyyyyy yyhhyyyyyy commented Sep 10, 2025

add live search support for Grok provider

Summary by CodeRabbit

  • New Features

    • Added internet search support for supported Grok models.
    • Introduced a Grok-specific “Enable Internet Search” toggle in Model Settings.
    • Search support applies to both standard and streaming responses; image generation unaffected.
  • Bug Fixes

    • Improved reliability of the thinking budget slider by applying safe defaults when configuration is missing.


coderabbitai bot commented Sep 10, 2025

Walkthrough

Adds explicit enableSearch flags to several Grok model configs, introduces enableSearch handling in Grok provider by conditionally injecting search_parameters into requests, updates effective model config post-injection, and adds a Grok-specific search toggle in the ModelConfigDialog. Also hardens thinkingBudgetRange with a default when missing.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Provider model settings**<br>`src/main/presenter/configPresenter/providerModelSettings.ts` | Adds explicit `enableSearch: false` to multiple Grok models; minor formatting adjustments. No API changes. |
| **Grok provider request modification**<br>`src/main/presenter/llmProviderPresenter/providers/grokProvider.ts` | Adds Grok search support: model detection, `enableSearch` gating, and transient injection of `search_parameters: { mode: 'on' }` and `reasoning_effort` into chat completion and core stream calls; restores original methods; updates `effectiveModelConfig` to avoid reapplication. |
| **Renderer UI: model dialog**<br>`src/renderer/src/components/settings/ModelConfigDialog.vue` | Adds Grok-specific internet search toggle gated by supported models; aligns with existing `enableSearch` labels; provides default `thinkingBudgetRange` fallback when config is missing. |
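
The provider change relies on temporarily wrapping the OpenAI client's `create` method, injecting the extra Grok parameters, and restoring the original afterwards. The sketch below illustrates that transient-injection pattern in isolation; the client shape and `GrokModelConfig` type are simplified stand-ins, not the repo's actual types.

```ts
// Minimal sketch of the transient parameter injection described above.
// The client/type shapes here are simplified stand-ins for the real SDK types.
type CreateFn = (params: Record<string, unknown>) => Promise<unknown>

interface GrokModelConfig {
  enableSearch?: boolean
  reasoningEffort?: 'low' | 'high'
}

async function withGrokParams<T>(
  completions: { create: CreateFn },
  config: GrokModelConfig,
  call: () => Promise<T>
): Promise<T> {
  const originalCreate = completions.create.bind(completions)
  // Wrap create() so the extra Grok fields ride along on this call only
  completions.create = (params) =>
    originalCreate({
      ...params,
      ...(config.enableSearch ? { search_parameters: { mode: 'on' } } : {}),
      ...(config.reasoningEffort ? { reasoning_effort: config.reasoningEffort } : {})
    })
  try {
    return await call()
  } finally {
    // Restore the original method even if the wrapped call throws
    completions.create = originalCreate
  }
}
```

Note that this pattern mutates shared client state, which is the concurrency concern raised in the review below.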

Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  actor U as User
  participant V as ModelConfigDialog (Renderer)
  participant P as Presenter
  participant G as grokProvider
  participant A as Grok API

  U->>V: Toggle "Enable Search" (Grok model)
  V->>P: Save model config (enableSearch flag)

  U->>P: Send message (Grok model)
  P->>G: createCompletion/coreStream(modelId, config)

  rect rgba(200,220,255,0.25)
    note right of G: Determine flags
    G->>G: supportsEnableSearch(modelId)?<br/>supportsReasoningEffort?
    alt needs parameter modification
      note over G,A: Temporarily wrap API call
      G->>A: chat.completions.create(..., search_parameters:on?, reasoning_effort?)
      A-->>G: Stream/response
      G->>G: Restore original method
    else no modification
      G->>A: chat.completions.create(original params)
      A-->>G: Stream/response
    end
    G->>G: Update effectiveModelConfig<br/>(enableSearch:false, reasoningEffort:undefined)
  end

  G-->>P: Result
  P-->>V: Display assistant output
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers

  • zerob13

Pre-merge checks (2 passed, 1 warning)

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Description Check | ⚠️ Warning | The current description is overly brief and does not follow the repository's required template, lacking sections for the problem statement, detailed solution, UI/UX changes, platform compatibility notes, and additional context. | Please expand the description to fill out each section of the template, clearly describing the related problem, the proposed solution, UI/UX modifications, any platform compatibility considerations, and additional context to meet the repository standards. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Title Check | ✅ Passed | The pull request title clearly and concisely summarizes the primary feature being introduced, namely adding live search support for the Grok provider, without including extraneous details or vague terms. |
| Docstring Coverage | ✅ Passed | No functions found in the changes. Docstring coverage check skipped. |

Poem

A rabbit flips a tiny switch—search is now in play,
Grok sniffs the winds of webby lore along the way.
Flags set true, then tucked back neat,
Streams restore, a tidy feat.
With twitching nose and careful hop,
I ship the patch—carrot drop! 🥕


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/main/presenter/configPresenter/providerModelSettings.ts (1)

3134-3145: Add explicit grok-4 entry to providerModelSettings.ts
No id: 'grok-4' mapping exists in src/main/presenter/configPresenter/providerModelSettings.ts, so "grok-4" falls back to generic defaults; define its contextLength, maxTokens, match patterns, and flags to ensure correct limits.
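
A hedged sketch of what such an entry might look like; the field names follow what this comment asks for, but the concrete interface and limits must be verified against the real settings file and xAI's published specs:

```ts
// Hypothetical grok-4 entry; verify the interface shape and limits
// against providerModelSettings.ts and xAI's documentation
const grok4Setting = {
  id: 'grok-4',
  match: ['grok-4'], // assumed match-pattern field
  contextLength: 256_000, // assumed; check xAI's published context window
  maxTokens: 8192, // assumed default
  enableSearch: false
}
```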

🧹 Nitpick comments (3)
src/renderer/src/components/settings/ModelConfigDialog.vue (1)

833-853: Simplify Grok model detection to avoid drift with provider constants.

You already return true when modelId.includes('grok'), making the curated list redundant. Either:

  • keep only the 'grok' substring check, or
  • source a single shared list (e.g., from a shared module) to avoid divergence with GrokProvider.ENABLE_SEARCH_MODELS.

Apply this minimal simplification:

```diff
-const supportedSearchModels = [
-  'grok-4',
-  'grok-3-mini',
-  'grok-3-mini-fast',
-  'grok-3-fast',
-  'grok-3',
-  'grok-2',
-  'grok-2-vision',
-  'grok-2-image'
-]
-const isSupported =
-  supportedSearchModels.some((supportedModel) => modelId.includes(supportedModel)) ||
-  modelId.includes('grok')
-return isGrok && isSupported
+return isGrok && modelId.includes('grok')
```
src/main/presenter/llmProviderPresenter/providers/grokProvider.ts (2)

16-25: ENABLE_SEARCH_MODELS list is redundant given the fallback.

Since supportsEnableSearch() also returns true for any modelId containing 'grok', this list adds maintenance overhead without functional benefit. Consider removing the array and keeping the substring check only.

```diff
-  private static readonly ENABLE_SEARCH_MODELS: string[] = [
-    'grok-4',
-    'grok-3-mini',
-    'grok-3-mini-fast',
-    'grok-2',
-    'grok-2-image'
-  ]
```

49-62: Confirm single source of truth for “which Grok models support search”.

The UI (ModelConfigDialog) and provider both embed their own “supported” logic. To avoid future drift, consider centralizing this in a shared helper.
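
One possible shape for that helper, with a hypothetical module path (the repo's shared-code layout may differ):

```ts
// e.g. src/shared/grokModels.ts — path and name are assumptions
// Single source of truth for which Grok models support live search
export const isGrokSearchModel = (modelId: string): boolean =>
  modelId.toLowerCase().includes('grok')
```

Both GrokProvider.supportsEnableSearch() and the dialog's gating logic could then delegate to this one function.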

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ae8e1d1 and 9a985c4.

📒 Files selected for processing (3)
  • src/main/presenter/configPresenter/providerModelSettings.ts (7 hunks)
  • src/main/presenter/llmProviderPresenter/providers/grokProvider.ts (3 hunks)
  • src/renderer/src/components/settings/ModelConfigDialog.vue (2 hunks)
🧰 Additional context used
📓 Path-based instructions (17)
**/*.{js,jsx,ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English

Files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
  • src/main/presenter/llmProviderPresenter/providers/grokProvider.ts
src/{main,renderer}/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging

Files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
  • src/main/presenter/llmProviderPresenter/providers/grokProvider.ts
src/main/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

Use Electron's built-in APIs for file system and native dialogs

Files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
  • src/main/presenter/llmProviderPresenter/providers/grokProvider.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/error-logging.mdc)

**/*.{ts,tsx}: Always handle potential errors with try-catch
Provide meaningful error messages
Record detailed error logs
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (where applicable), and relevant context
Log levels should include ERROR, WARN, INFO, and DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement error retry mechanisms
Avoid logging sensitive information
Use structured logging
Set appropriate log levels

Files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
  • src/main/presenter/llmProviderPresenter/providers/grokProvider.ts
src/main/**/*.{ts,js,tsx,jsx}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Main process code goes in src/main

Files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
  • src/main/presenter/llmProviderPresenter/providers/grokProvider.ts
**/*.{ts,tsx,js,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Use English for all logs and comments

Files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/main/presenter/llmProviderPresenter/providers/grokProvider.ts
**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Enable and adhere to strict TypeScript typing (avoid implicit any, prefer precise types)

Files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
  • src/renderer/src/components/settings/ModelConfigDialog.vue
  • src/main/presenter/llmProviderPresenter/providers/grokProvider.ts
src/renderer/src/**/*

📄 CodeRabbit inference engine (.cursor/rules/i18n.mdc)

src/renderer/src/**/*: All user-facing strings must use i18n keys (avoid hardcoded user-visible text in code)
Use the 'vue-i18n' framework for all internationalization in the renderer
Ensure all user-visible text in the renderer uses the translation system

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/**/*.{vue,ts,js,tsx,jsx}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Renderer process code goes in src/renderer

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}

📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)

src/renderer/src/**/*.{vue,ts,tsx,js,jsx}: Use the Composition API for better code organization and reusability
Implement proper state management with Pinia
Utilize Vue Router for navigation and route management
Leverage Vue's built-in reactivity system for efficient data handling

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/src/**/*.vue

📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)

Use scoped styles to prevent CSS conflicts between components

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,tsx,vue}: Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
Use TypeScript for all code; prefer types over interfaces.
Avoid enums; use const objects instead.
Use arrow functions for methods and computed properties.
Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements.

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/**/*.{vue,ts}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

Implement lazy loading for routes and components.

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/**/*.{ts,vue}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,vue}: Use useFetch and useAsyncData for data fetching.
Implement SEO best practices using Nuxt's useHead and useSeoMeta.

Use Pinia for frontend state management (do not introduce alternative state libraries)

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/{src,shell,floating}/**/*.vue

📄 CodeRabbit inference engine (CLAUDE.md)

src/renderer/{src,shell,floating}/**/*.vue: Use Vue 3 Composition API for all components
All user-facing strings must use i18n keys via vue-i18n (no hard-coded UI strings)
Use Tailwind CSS utilities and ensure styles are scoped in Vue components

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/renderer/src/components/**/*

📄 CodeRabbit inference engine (CLAUDE.md)

Organize UI components by feature within src/renderer/src/

Files:

  • src/renderer/src/components/settings/ModelConfigDialog.vue
src/main/presenter/llmProviderPresenter/providers/*.ts

📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)

src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a coreStream method that yields standardized stream events to decouple the main loop from provider-specific details.
The coreStream method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., prepareFunctionCallPrompt) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using convertToProviderTools) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (tool_call_start, tool_call_chunk, tool_call_end) in the standardized format.
Provider implementations should yield stop events with appropriate stop_reason in the standardized format.
Provider implementations should yield error events in the standardized format...

Files:

  • src/main/presenter/llmProviderPresenter/providers/grokProvider.ts
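
For readers unfamiliar with this convention, a minimal sketch of the standardized event protocol the rule describes follows; the exact union members and field names in the repo may differ:

```ts
// Hypothetical standardized stream events; the repo's actual union may differ
type StreamEvent =
  | { type: 'text'; content: string }
  | { type: 'reasoning'; content: string }
  | { type: 'tool_call_start'; id: string; name: string }
  | { type: 'tool_call_chunk'; id: string; argsChunk: string }
  | { type: 'tool_call_end'; id: string }
  | { type: 'usage'; promptTokens: number; completionTokens: number }
  | { type: 'image_data'; mimeType: string; data: string }
  | { type: 'error'; message: string }
  | { type: 'stop'; stopReason: 'end' | 'tool_calls' | 'max_tokens' }

// One conversation round: a single streaming request, no tool-call loop
async function* coreStreamSketch(): AsyncGenerator<StreamEvent> {
  yield { type: 'text', content: 'partial output...' }
  yield { type: 'usage', promptTokens: 12, completionTokens: 5 }
  yield { type: 'stop', stopReason: 'end' }
}
```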
🧠 Learnings (1)
📚 Learning: 2025-09-06T03:07:23.794Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-06T03:07:23.794Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add provider configuration entries in src/main/presenter/configPresenter/providers.ts

Applied to files:

  • src/main/presenter/configPresenter/providerModelSettings.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-check (x64)
🔇 Additional comments (5)
src/main/presenter/configPresenter/providerModelSettings.ts (1)

3143-3145: Explicit enableSearch flags for Grok models — good alignment with UI/provider logic.

The explicit enableSearch: false defaults and low reasoningEffort on the Grok 3 mini variants look consistent with the provider handling.

Also applies to: 3156-3158, 3168-3170, 3180-3182, 3192-3194, 3204-3206, 3216-3218

src/renderer/src/components/settings/ModelConfigDialog.vue (1)

373-384: Grok internet search toggle UI — consistent with i18n and config binding.

Binding to config.enableSearch reuses existing translation keys and mirrors DashScope UX. Looks good.

src/main/presenter/llmProviderPresenter/providers/grokProvider.ts (3)

238-242: Resetting flags in effectiveModelConfig — good to prevent double-injection.

Clearing reasoningEffort and enableSearch before delegating avoids repeated mutation downstream. Looks good.


227-233: Confirmed search_parameters: { mode: 'on' } aligns with xAI Grok’s OpenAI-compatible API; no changes needed.
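
For reference, a rough sketch of an enabled request body under this pattern; consult xAI's Live Search documentation for the full `search_parameters` schema (it also supports modes such as 'auto' and 'off', plus source and date filters):

```ts
// Assumed request shape; check xAI's Live Search docs for the full schema
const body = {
  model: 'grok-3',
  messages: [{ role: 'user', content: "Summarize today's AI news." }],
  search_parameters: { mode: 'on' } // force live search on for this request
}
```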


217-236: Runtime monkey-patching of the OpenAI client is not concurrency-safe.

Overriding this.openai.chat.completions.create on a shared provider instance can bleed settings across parallel requests. Confirm whether GrokProvider is instantiated per request or held as a singleton; if it’s shared, refactor to supply per-call parameters (for example via an augmentParams(params) hook on OpenAICompatibleProvider) or create a fresh client instance for each call instead of mutating the global client.
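
A sketch of the per-call alternative, under the assumption that OpenAICompatibleProvider could be refactored to route every request through an overridable hook (`augmentRequestParams` is hypothetical, not an existing method):

```ts
// Hypothetical hook-based design; names are illustrative, not the repo's API
class OpenAICompatibleProviderSketch {
  // Subclasses override this to add provider-specific fields per call
  protected augmentRequestParams(
    params: Record<string, unknown>,
    _modelId: string
  ): Record<string, unknown> {
    return params
  }
}

class GrokProviderSketch extends OpenAICompatibleProviderSketch {
  protected override augmentRequestParams(
    params: Record<string, unknown>,
    modelId: string
  ): Record<string, unknown> {
    if (!modelId.includes('grok')) return params
    // Extra fields flow with each call, so no shared client state is mutated
    return { ...params, search_parameters: { mode: 'on' } }
  }
}
```

Because the parameters travel with each call instead of living on the shared client, concurrent requests can no longer observe each other's settings.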

Comment on lines +212 to +216
```ts
// Handle reasoning models and search functionality
const shouldAddReasoningEffort = this.isReasoningModel(modelId) && modelConfig?.reasoningEffort
const shouldAddEnableSearch = this.supportsEnableSearch(modelId) && modelConfig?.enableSearch
const needsParameterModification = shouldAddReasoningEffort || shouldAddEnableSearch
```


🛠️ Refactor suggestion

Non-stream path may not inject search/reasoning parameters.

coreStream handles injection, but completions() delegates to openAICompletion() without the same injection. If live search must work for non-stream calls too, mirror the injection there.

Apply a scoped patch around openAICompletion:

```diff
   async completions(
     messages: ChatMessage[],
     modelId: string,
     temperature?: number,
     maxTokens?: number
   ): Promise<LLMResponse> {
     // Image generation models require special handling
     if (this.isImageModel(modelId)) {
       return this.handleImageGeneration(messages)
     }
-    return this.openAICompletion(messages, modelId, temperature, maxTokens)
+    // Parity with coreStream: inject reasoning/search when enabled
+    const modelConfig = await this.configPresenter.getModelConfig(modelId, this.provider)
+    const shouldAddReasoningEffort = this.isReasoningModel(modelId) && modelConfig?.reasoningEffort
+    const shouldAddEnableSearch = this.supportsEnableSearch(modelId) && modelConfig?.enableSearch
+    const needsParameterModification = shouldAddReasoningEffort || shouldAddEnableSearch
+
+    if (!needsParameterModification) {
+      return this.openAICompletion(messages, modelId, temperature, maxTokens)
+    }
+
+    const originalCreate = this.openai.chat.completions.create.bind(this.openai.chat.completions)
+    this.openai.chat.completions.create = ((params: any, options?: any) => {
+      const modifiedParams = { ...params }
+      if (shouldAddReasoningEffort && this.supportsReasoningEffort(modelId)) {
+        modifiedParams.reasoning_effort = modelConfig.reasoningEffort
+      }
+      if (shouldAddEnableSearch) {
+        modifiedParams.search_parameters = { mode: 'on' }
+      }
+      return originalCreate(modifiedParams, options)
+    }) as any
+    try {
+      return this.openAICompletion(messages, modelId, temperature, maxTokens)
+    } finally {
+      this.openai.chat.completions.create = originalCreate
+    }
   }
```
🤖 Prompt for AI Agents
In src/main/presenter/llmProviderPresenter/providers/grokProvider.ts around
lines 212 to 216, the non-stream path that calls openAICompletion does not get
the same reasoning/search parameter injection as the coreStream path; mirror the
coreStream injection logic for shouldAddReasoningEffort and
shouldAddEnableSearch so that when needsParameterModification is true you
build/merge the reasoningEffort and enableSearch fields into the completion
request payload and pass that modified payload into openAICompletion; ensure you
reuse the same condition checks and parameter names used for streaming, perform
a shallow merge so existing request fields are preserved, and keep the change
scoped to the openAICompletion invocation in the completions() non-stream
branch.

@zerob13 zerob13 merged commit 70e9447 into dev Sep 10, 2025
2 checks passed
@zerob13 zerob13 deleted the feat/grok-search-support branch January 6, 2026 12:17
