
Conversation

zerob13 (Collaborator) commented Nov 28, 2025

Summary by CodeRabbit

  • New Features

    • Google Vertex AI added as a provider with UI to configure project, location, service account, private key, API version and endpoint mode; verify action and icon support included.
    • NowledgeMem: export, submit, test-connection, and settings UI; thread export/submit integrated into export flows.
    • Danish (da‑DK) locale added; many provider settings localized.
    • Startup deeplink handling and MCP install flow improved; Mermaid content sanitization added.
  • Chores

    • Replaced a PTY dependency with node-pty and bumped @google/genai.


coderabbitai bot (Contributor) commented Nov 28, 2025

Caution

Review failed

The pull request is closed.

Walkthrough

Adds Google Vertex AI provider and UI, integrates a NowledgeMem export/submission flow and presenter, introduces per-file serialization and concurrent task processing for knowledge operations, adds Danish (da‑DK) locales, startup deeplink handling, Mermaid sanitization, replaces PTY dependency, and updates various i18n, types, and export paths.
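As a rough illustration of the Mermaid sanitization mentioned above, a minimal sketch is shown below; the function name and the regex-based approach are assumptions for illustration only (the actual component may well use a proper HTML sanitizer rather than regexes):

```typescript
// Illustrative sketch: strip dangerous HTML tags and inline event-handler
// attributes from Mermaid source before handing it to the renderer.
const DANGEROUS_TAGS = /<\s*\/?\s*(script|iframe|object|embed)[^>]*>/gi
const EVENT_ATTRS = /\s+on\w+\s*=\s*("[^"]*"|'[^']*'|[^\s>]+)/gi

function sanitizeMermaidContent(content: string): string {
  // Remove the whole tag (including attributes) for dangerous elements,
  // then drop on* attributes from whatever tags remain.
  return content.replace(DANGEROUS_TAGS, '').replace(EVENT_ATTRS, '')
}
```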

Changes

Cohort / File(s) Change Summary
Dependencies / Build
package.json
Replaced @homebridge/node-pty-prebuilt-multiarch with node-pty, bumped @google/genai to ^1.30.0, and updated pnpm.ignoredBuiltDependencies (added node-pty, sharp, vue-demi).
Vertex provider core
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts, src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts, src/main/presenter/configPresenter/providers.ts, src/main/presenter/configPresenter/modelCapabilities.ts
Adds a full VertexProvider implementation, registers it in provider maps, adds DEFAULT_PROVIDERS entry for Vertex AI, and adds provider alias vertex -> google-vertex.
Renderer settings & UI
src/renderer/settings/components/ModelProviderSettingsDetail.vue, src/renderer/settings/components/VertexProviderSettingsDetail.vue, src/renderer/settings/components/NowledgeMemSettings.vue, src/renderer/settings/App.vue, src/renderer/shell/App.vue, src/renderer/settings/components/KnowledgeBaseSettings.vue, src/renderer/settings/components/DisplaySettings.vue
New Vertex settings component integrated into provider settings; NowledgeMem settings component added; MCP deeplink / install handling wired into settings and shell App; display language option da-DK added.
Provider store & types
src/renderer/src/stores/providerStore.ts, src/shared/types/presenters/llmprovider.presenter.d.ts, src/shared/types/presenters/legacy.presenters.d.ts
Adds updateVertexProviderConfig, defines exported VERTEX_PROVIDER type (projectId, location, credentials, apiVersion, endpointMode) and NowledgeMem-related presenter API typings.
NowledgeMem integration (backend)
src/main/presenter/nowledgeMemPresenter/index.ts, src/main/presenter/threadPresenter/exporters/nowledgeMemExporter.ts, src/main/presenter/threadPresenter/exporters/conversationExporter.ts, src/main/presenter/threadPresenter/index.ts, src/shared/types/nowledgeMem.ts
New NowledgeMemPresenter, exporter and converter, ThreadPresenter methods to export/submit/test NowledgeMem, and shared NowledgeMem types and export-summary utilities.
Renderer chat & thread flows
src/renderer/src/stores/chat.ts, src/renderer/src/components/ThreadItem.vue, src/renderer/src/components/ChatView.vue, src/renderer/src/components/artifacts/MermaidArtifact.vue
Chat store: stream event wiring and NowledgeMem export/submission APIs added; ThreadItem supports nowledge-mem export and submit; ChatView centralizes stream end/error handlers; MermaidArtifact adds content sanitization.
Knowledge subsystem concurrency & queuing
src/main/presenter/knowledgePresenter/knowledgeStorePresenter.ts, src/main/presenter/knowledgePresenter/knowledgeTaskPresenter.ts
Adds per-file task queueing (enqueueFileTask/fileQueueMap) and moves to concurrent runningTasks model with configurable maxConcurrency for task processing.
Deeplink & startup handling
src/main/index.ts, src/main/presenter/deeplinkPresenter/index.ts, src/renderer/src/lib/storeInitializer.ts, src/renderer/settings/App.vue, src/renderer/shell/App.vue
Startup deeplink detection/handlers added (env/CLI/open-url/second-instance), cold-start deeplink processing, MCP install persistence flow and navigation helper (navigateToMcpSettings).
Config presenter & events
src/main/presenter/configPresenter/index.ts, src/main/events.ts, src/main/presenter/configPresenter/acpInitHelper.ts
Adds get/set NowledgeMem config methods, emits NOWLEDGE_MEM_CONFIG_UPDATED, replaces PTY import to node-pty types in acpInitHelper.
Provider operations / lifecycle
src/shared/provider-operations.ts
Extends REBUILD_REQUIRED_FIELDS to include Vertex-related fields (projectId, location, accountPrivateKey, accountClientEmail, apiVersion, endpointMode).
I18n additions (da‑DK + NowledgeMem keys)
src/renderer/src/i18n/da-DK/*, src/renderer/src/i18n/index.ts, src/renderer/src/i18n/{en-US,fa-IR,fr-FR,ja-JP,ko-KR,pt-BR,ru-RU,zh-CN,zh-HK,zh-TW}/settings.json, many thread/export common files
Adds Danish locale bundle (many files), registers da-DK/da in i18n index, and adds NowledgeMem/export-related i18n strings across locales and minor common keys (testing/saving).
Icons & README
src/renderer/src/components/icons/ModelIcon.vue, README.md, README.jp.md, README.zh.md
Adds vertex icon alias mapping to vertexai icon; updates provider gallery and README entries/assets.
Tests
test/renderer/components/MermaidArtifact.test.ts
Adds tests for Mermaid content sanitization (ensures dangerous tags/attrs are removed).
Misc config
electron-builder.yml, electron-builder-macx64.yml
Adds da-DK and da to electron languages for builds.
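The per-file queueing described under "Knowledge subsystem concurrency & queuing" can be sketched as follows. The names enqueueFileTask, fileQueueMap, and maxConcurrency echo the summary above, but this implementation is illustrative only, not the PR's actual code:

```typescript
// Sketch: tasks for the same file run serially (chained promises keyed by
// file path), while tasks for distinct files run concurrently up to
// maxConcurrency via a simple slot counter and wait list.
type Task = () => Promise<void>

class FileTaskQueue {
  private fileQueueMap = new Map<string, Promise<void>>()
  private running = 0
  private waiting: (() => void)[] = []

  constructor(private maxConcurrency = 3) {}

  enqueueFileTask(filePath: string, task: Task): Promise<void> {
    const prev = this.fileQueueMap.get(filePath) ?? Promise.resolve()
    const next = prev.then(() => this.withSlot(task))
    // Keep the per-file chain alive even if a task fails, so later tasks
    // on the same file still run.
    this.fileQueueMap.set(filePath, next.catch(() => {}))
    return next
  }

  private async withSlot(task: Task): Promise<void> {
    if (this.running >= this.maxConcurrency) {
      // Park until a running task finishes and releases a slot.
      await new Promise<void>((resolve) => this.waiting.push(resolve))
    }
    this.running++
    try {
      await task()
    } finally {
      this.running--
      this.waiting.shift()?.()
    }
  }
}
```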

Sequence Diagram(s)

sequenceDiagram
    participant SettingsUI as Settings UI
    participant ProviderStore as Provider Store
    participant VertexProvider as VertexProvider
    participant GoogleVertex as Google Vertex API
    participant ModelDB as Model DB

    SettingsUI->>ProviderStore: updateVertexProviderConfig(providerId, updates)
    ProviderStore->>VertexProvider: trigger rebuild/init (if required)
    VertexProvider->>GoogleVertex: create client / listModels()
    GoogleVertex-->>VertexProvider: model list (or error)
    alt models fetched
        VertexProvider->>ModelDB: update cached models
        VertexProvider->>ProviderStore: notify model list changed
        ProviderStore->>SettingsUI: confirmed/verified
    else fetch fails
        VertexProvider->>ModelDB: fallback to DB models
        ProviderStore->>SettingsUI: show fallback/models unavailable
    end
sequenceDiagram
    participant ThreadUI as Thread UI
    participant ThreadPresenter as ThreadPresenter
    participant NowledgeMem as NowledgeMemPresenter
    participant NowledgeAPI as NowledgeMem Service

    ThreadUI->>ThreadPresenter: exportToNowledgeMem(conversationId)
    ThreadPresenter->>ThreadPresenter: buildNowledgeMemExportData(...)
    ThreadPresenter->>NowledgeMem: submitThread(threadData)
    NowledgeMem->>NowledgeAPI: POST /threads (with API key)
    NowledgeAPI-->>NowledgeMem: success / error
    NowledgeMem-->>ThreadPresenter: result (threadId / errors)
    ThreadPresenter-->>ThreadUI: return export+submission result
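The submit step in the diagram above could look roughly like this. The /threads path, bearer-token header, and payload shape are assumptions for illustration, not the actual NowledgeMem API:

```typescript
// Sketch: POST thread export data to a NowledgeMem-style service, with
// an API key, and normalize failures into a result object instead of
// letting rejections escape to the caller.
interface ThreadExportData {
  title: string
  messages: { role: string; content: string }[]
}

interface SubmitResult {
  ok: boolean
  threadId?: string
  error?: string
}

async function submitThread(
  baseUrl: string,
  apiKey: string,
  data: ThreadExportData
): Promise<SubmitResult> {
  try {
    const res = await fetch(`${baseUrl}/threads`, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `Bearer ${apiKey}`
      },
      body: JSON.stringify(data)
    })
    if (!res.ok) {
      return { ok: false, error: `HTTP ${res.status}` }
    }
    const body = (await res.json()) as { threadId?: string }
    return { ok: true, threadId: body.threadId }
  } catch (error) {
    // Network/parse errors become a failed result rather than a throw.
    return { ok: false, error: error instanceof Error ? error.message : String(error) }
  }
}
```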

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Areas to focus review on:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts — streaming, tool-call handling, safety mapping, auth and endpoint modes.
  • NowledgeMem path: src/main/presenter/nowledgeMemPresenter/*, exporters/nowledgeMemExporter.ts, threadPresenter integrations and renderer store/chat submission flows.
  • Knowledge subsystem concurrency: knowledgeStorePresenter.ts and knowledgeTaskPresenter.ts for correctness of per-file queuing and concurrency safety.
  • Chat store stream event handling and new NowledgeMem APIs: src/renderer/src/stores/chat.ts.
  • PTY dependency change and acpInitHelper.ts import updates.
  • i18n and electron-builder changes to ensure build/localization packaging includes new locale.

Possibly related PRs

Suggested labels

codex

Suggested reviewers

  • deepinfect

Poem

🐰 Hopped through code with whiskers bright,

Vertex and Mem now join the light.
Queues in order, locales in tow,
Deeplinks, sanitizers — ready to go.
A carrot-coded cheer for this new show!

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
Check name Status Explanation
Description Check ✅ Passed Check skipped - CodeRabbit’s high-level summary is enabled.
Title check ✅ Passed The PR title 'feat: add vertex support' directly and clearly describes the main change: adding Google Vertex AI provider support to the application.
Docstring Coverage ✅ Passed No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between a85d327 and 475c48d.

⛔ Files ignored due to path filters (1)
  • src/renderer/src/assets/images/nowledge-mem.png is excluded by !**/*.png
📒 Files selected for processing (82)
  • README.jp.md (4 hunks)
  • README.md (4 hunks)
  • README.zh.md (4 hunks)
  • electron-builder-macx64.yml (1 hunks)
  • electron-builder.yml (1 hunks)
  • package.json (2 hunks)
  • src/main/events.ts (1 hunks)
  • src/main/index.ts (1 hunks)
  • src/main/presenter/configPresenter/acpInitHelper.ts (1 hunks)
  • src/main/presenter/configPresenter/index.ts (2 hunks)
  • src/main/presenter/deeplinkPresenter/index.ts (7 hunks)
  • src/main/presenter/knowledgePresenter/knowledgeStorePresenter.ts (12 hunks)
  • src/main/presenter/knowledgePresenter/knowledgeTaskPresenter.ts (7 hunks)
  • src/main/presenter/nowledgeMemPresenter/index.ts (1 hunks)
  • src/main/presenter/threadPresenter/exporters/conversationExporter.ts (2 hunks)
  • src/main/presenter/threadPresenter/exporters/nowledgeMemExporter.ts (1 hunks)
  • src/main/presenter/threadPresenter/index.ts (5 hunks)
  • src/renderer/settings/App.vue (5 hunks)
  • src/renderer/settings/components/DisplaySettings.vue (1 hunks)
  • src/renderer/settings/components/KnowledgeBaseSettings.vue (3 hunks)
  • src/renderer/settings/components/NowledgeMemSettings.vue (1 hunks)
  • src/renderer/shell/App.vue (1 hunks)
  • src/renderer/src/components/ChatView.vue (1 hunks)
  • src/renderer/src/components/ThreadItem.vue (2 hunks)
  • src/renderer/src/components/artifacts/MermaidArtifact.vue (2 hunks)
  • src/renderer/src/i18n/da-DK/about.json (1 hunks)
  • src/renderer/src/i18n/da-DK/artifacts.json (1 hunks)
  • src/renderer/src/i18n/da-DK/chat.json (1 hunks)
  • src/renderer/src/i18n/da-DK/common.json (1 hunks)
  • src/renderer/src/i18n/da-DK/components.json (1 hunks)
  • src/renderer/src/i18n/da-DK/contextMenu.json (1 hunks)
  • src/renderer/src/i18n/da-DK/dialog.json (1 hunks)
  • src/renderer/src/i18n/da-DK/index.ts (1 hunks)
  • src/renderer/src/i18n/da-DK/mcp.json (1 hunks)
  • src/renderer/src/i18n/da-DK/model.json (1 hunks)
  • src/renderer/src/i18n/da-DK/newThread.json (1 hunks)
  • src/renderer/src/i18n/da-DK/promptSetting.json (1 hunks)
  • src/renderer/src/i18n/da-DK/routes.json (1 hunks)
  • src/renderer/src/i18n/da-DK/settings.json (1 hunks)
  • src/renderer/src/i18n/da-DK/sync.json (1 hunks)
  • src/renderer/src/i18n/da-DK/thread.json (1 hunks)
  • src/renderer/src/i18n/da-DK/toolCall.json (1 hunks)
  • src/renderer/src/i18n/da-DK/traceDialog.json (1 hunks)
  • src/renderer/src/i18n/da-DK/update.json (1 hunks)
  • src/renderer/src/i18n/da-DK/welcome.json (1 hunks)
  • src/renderer/src/i18n/en-US/common.json (1 hunks)
  • src/renderer/src/i18n/en-US/settings.json (2 hunks)
  • src/renderer/src/i18n/en-US/thread.json (2 hunks)
  • src/renderer/src/i18n/fa-IR/common.json (1 hunks)
  • src/renderer/src/i18n/fa-IR/settings.json (2 hunks)
  • src/renderer/src/i18n/fa-IR/thread.json (2 hunks)
  • src/renderer/src/i18n/fr-FR/common.json (1 hunks)
  • src/renderer/src/i18n/fr-FR/settings.json (2 hunks)
  • src/renderer/src/i18n/fr-FR/thread.json (2 hunks)
  • src/renderer/src/i18n/index.ts (2 hunks)
  • src/renderer/src/i18n/ja-JP/common.json (1 hunks)
  • src/renderer/src/i18n/ja-JP/settings.json (2 hunks)
  • src/renderer/src/i18n/ja-JP/thread.json (2 hunks)
  • src/renderer/src/i18n/ko-KR/common.json (1 hunks)
  • src/renderer/src/i18n/ko-KR/settings.json (2 hunks)
  • src/renderer/src/i18n/ko-KR/thread.json (2 hunks)
  • src/renderer/src/i18n/pt-BR/common.json (1 hunks)
  • src/renderer/src/i18n/pt-BR/settings.json (2 hunks)
  • src/renderer/src/i18n/pt-BR/thread.json (2 hunks)
  • src/renderer/src/i18n/ru-RU/common.json (1 hunks)
  • src/renderer/src/i18n/ru-RU/settings.json (2 hunks)
  • src/renderer/src/i18n/ru-RU/thread.json (2 hunks)
  • src/renderer/src/i18n/zh-CN/common.json (1 hunks)
  • src/renderer/src/i18n/zh-CN/settings.json (2 hunks)
  • src/renderer/src/i18n/zh-CN/thread.json (1 hunks)
  • src/renderer/src/i18n/zh-HK/common.json (1 hunks)
  • src/renderer/src/i18n/zh-HK/settings.json (2 hunks)
  • src/renderer/src/i18n/zh-HK/thread.json (2 hunks)
  • src/renderer/src/i18n/zh-TW/common.json (1 hunks)
  • src/renderer/src/i18n/zh-TW/settings.json (2 hunks)
  • src/renderer/src/i18n/zh-TW/thread.json (2 hunks)
  • src/renderer/src/lib/storeInitializer.ts (2 hunks)
  • src/renderer/src/stores/chat.ts (13 hunks)
  • src/shared/i18n.ts (2 hunks)
  • src/shared/types/nowledgeMem.ts (1 hunks)
  • src/shared/types/presenters/legacy.presenters.d.ts (5 hunks)
  • test/renderer/components/MermaidArtifact.test.ts (1 hunks)


coderabbitai bot (Contributor) left a comment

Actionable comments posted: 4

🧹 Nitpick comments (5)
src/renderer/settings/components/VertexProviderSettingsDetail.vue (1)

175-215: Guard updateConfig against store errors to avoid unhandled rejections

updateConfig directly awaits providerStore.updateVertexProviderConfig without a try/catch, so any failure from the main process/store will surface as an unhandled promise rejection from the component. Wrapping this in error handling and surfacing a user-visible error (e.g. a toast) will make Vertex config failures easier to debug and avoid noisy console errors.

A minimal pattern inside updateConfig:

```typescript
const updateConfig = async (updates: Partial<VERTEX_PROVIDER>) => {
  try {
    await providerStore.updateVertexProviderConfig(props.provider.id, updates)
    emit('config-updated')
  } catch (error) {
    // Optionally replace with app logger / toast
    console.error('Failed to update Vertex provider config', error)
  }
}
```
src/renderer/src/i18n/fa-IR/settings.json (1)

277-288: Vertex keys are wired correctly; consider localizing values later

The added Vertex provider keys match the en-US structure and will work correctly in the UI, but their values are still in English. If full Persian localization is a goal for this locale, you may want to track translation of these labels/placeholders in a follow-up.

src/renderer/src/i18n/fr-FR/settings.json (1)

277-288: Vertex keys are structurally correct; values still in English

The Vertex AI provider keys in fr-FR mirror the en-US structure correctly, so the UI will resolve them. If you aim for full French localization, consider translating these label/placeholder strings in a later pass.

src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts (2)

880-888: Wrap coreStream network/streaming logic in try/catch and emit error events

coreStream performs the Vertex streaming call and iterates the async stream without a surrounding try/catch. Any network/SDK error thrown by generateContentStream or while iterating will bubble up as a rejected async iterator rather than a standardized error + stop('error') event, which breaks the “providers normalize errors into stream events” contract and can make frontend behavior inconsistent.

Consider wrapping the body in a try/catch and translating failures into LLMCoreStreamEvent errors, similar to handleImageGenerationStream:

```typescript
async *coreStream(...) {
  if (!this.isInitialized) throw new Error('Provider not initialized')
  if (!modelId) throw new Error('Model ID is required')

  try {
    // existing setup + generateContentStream + for-await loop
    // ...
    yield createStreamEvent.stop(toolUseDetected ? 'tool_use' : 'complete')
  } catch (error) {
    console.error('Vertex coreStream error:', error)
    yield createStreamEvent.error(
      error instanceof Error ? error.message : 'Vertex streaming failed'
    )
    yield createStreamEvent.stop('error')
  }
}
```

This keeps the main presenter’s agent loop simpler and ensures all providers yield a consistent error/stop pattern on failures.

Also applies to: 961-1088


399-405: Normalize model IDs consistently for thinking budget & prefer English comments in new code

supportsThinkingBudget only strips a leading models/ prefix:

```typescript
const normalized = modelId.replace(/^models\//i, '')
const range = modelCapabilities.getThinkingBudgetRange(this.provider.id, normalized)
```

Elsewhere (fetchProviderModels) you normalize IDs by also removing publishers/google/models/. For Vertex, model IDs often look like full resource paths (projects/.../locations/.../publishers/google/models/...), so this lighter normalization may never match what modelCapabilities expects and will silently disable thinkingBudget even for supported models. Reusing the same normalization helper you use in fetchProviderModels here would make this more robust.
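A shared helper along the lines suggested here might look like the following; the name is hypothetical, and the exact prefixes stripped should match whatever fetchProviderModels actually normalizes:

```typescript
// Sketch: reduce both bare and full-resource Vertex model IDs to a plain
// model name before looking up capabilities, so
// "projects/.../locations/.../publishers/google/models/gemini-x" and
// "models/gemini-x" both resolve to "gemini-x".
function normalizeVertexModelId(modelId: string): string {
  return modelId
    .replace(/^projects\/[^/]+\/locations\/[^/]+\//i, '')
    .replace(/^publishers\/google\/models\//i, '')
    .replace(/^models\//i, '')
}
```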

Separately, there are several new comments in Chinese (e.g. around ensureVertexModelName and getGenerateContentConfig). Per the repo guidelines, new TS code should use English comments; converting these to concise English will make the provider easier to maintain for non‑Chinese‑speaking contributors.

Also applies to: 129-137

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3a89d04 and f08e627.

📒 Files selected for processing (22)
  • package.json (1 hunks)
  • src/main/presenter/configPresenter/modelCapabilities.ts (1 hunks)
  • src/main/presenter/configPresenter/providers.ts (1 hunks)
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts (3 hunks)
  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts (1 hunks)
  • src/renderer/settings/components/ModelProviderSettingsDetail.vue (2 hunks)
  • src/renderer/settings/components/VertexProviderSettingsDetail.vue (1 hunks)
  • src/renderer/src/components/icons/ModelIcon.vue (1 hunks)
  • src/renderer/src/i18n/en-US/settings.json (1 hunks)
  • src/renderer/src/i18n/fa-IR/settings.json (1 hunks)
  • src/renderer/src/i18n/fr-FR/settings.json (1 hunks)
  • src/renderer/src/i18n/ja-JP/settings.json (1 hunks)
  • src/renderer/src/i18n/ko-KR/settings.json (1 hunks)
  • src/renderer/src/i18n/pt-BR/settings.json (1 hunks)
  • src/renderer/src/i18n/ru-RU/settings.json (1 hunks)
  • src/renderer/src/i18n/zh-CN/settings.json (1 hunks)
  • src/renderer/src/i18n/zh-HK/settings.json (1 hunks)
  • src/renderer/src/i18n/zh-TW/settings.json (1 hunks)
  • src/renderer/src/stores/providerStore.ts (3 hunks)
  • src/shared/provider-operations.ts (1 hunks)
  • src/shared/types/presenters/legacy.presenters.d.ts (1 hunks)
  • src/shared/types/presenters/llmprovider.presenter.d.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (37)
src/renderer/src/i18n/**/*.json

📄 CodeRabbit inference engine (.cursor/rules/i18n.mdc)

src/renderer/src/i18n/**/*.json: Translation key naming convention: use dot-separated hierarchical structure with lowercase letters and descriptive names (e.g., 'common.button.submit')
Maintain consistent key-value structure across all language translation files (zh-CN, en-US, ko-KR, ru-RU, zh-HK, fr-FR, fa-IR)

Files:

  • src/renderer/src/i18n/ko-KR/settings.json
  • src/renderer/src/i18n/pt-BR/settings.json
  • src/renderer/src/i18n/ja-JP/settings.json
  • src/renderer/src/i18n/ru-RU/settings.json
  • src/renderer/src/i18n/zh-CN/settings.json
  • src/renderer/src/i18n/zh-HK/settings.json
  • src/renderer/src/i18n/fa-IR/settings.json
  • src/renderer/src/i18n/zh-TW/settings.json
  • src/renderer/src/i18n/fr-FR/settings.json
  • src/renderer/src/i18n/en-US/settings.json
src/**/*

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

New features should be developed in the src directory

Files:

  • src/renderer/src/i18n/ko-KR/settings.json
  • src/renderer/src/i18n/pt-BR/settings.json
  • src/renderer/src/i18n/ja-JP/settings.json
  • src/main/presenter/configPresenter/modelCapabilities.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/shared/types/presenters/llmprovider.presenter.d.ts
  • src/renderer/src/i18n/ru-RU/settings.json
  • src/renderer/src/components/icons/ModelIcon.vue
  • src/renderer/src/i18n/zh-CN/settings.json
  • src/renderer/src/stores/providerStore.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/renderer/src/i18n/zh-HK/settings.json
  • src/renderer/src/i18n/fa-IR/settings.json
  • src/renderer/src/i18n/zh-TW/settings.json
  • src/renderer/src/i18n/fr-FR/settings.json
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/renderer/settings/components/VertexProviderSettingsDetail.vue
  • src/renderer/src/i18n/en-US/settings.json
  • src/shared/provider-operations.ts
  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
  • src/renderer/settings/components/ModelProviderSettingsDetail.vue
src/renderer/**

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

Use lowercase with dashes for directories (e.g., components/auth-wizard)

Files:

  • src/renderer/src/i18n/ko-KR/settings.json
  • src/renderer/src/i18n/pt-BR/settings.json
  • src/renderer/src/i18n/ja-JP/settings.json
  • src/renderer/src/i18n/ru-RU/settings.json
  • src/renderer/src/components/icons/ModelIcon.vue
  • src/renderer/src/i18n/zh-CN/settings.json
  • src/renderer/src/stores/providerStore.ts
  • src/renderer/src/i18n/zh-HK/settings.json
  • src/renderer/src/i18n/fa-IR/settings.json
  • src/renderer/src/i18n/zh-TW/settings.json
  • src/renderer/src/i18n/fr-FR/settings.json
  • src/renderer/settings/components/VertexProviderSettingsDetail.vue
  • src/renderer/src/i18n/en-US/settings.json
  • src/renderer/settings/components/ModelProviderSettingsDetail.vue
**/*.{ts,tsx,js,jsx,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Use English for logs and comments (Chinese text exists in legacy code, but new code should use English)

Files:

  • src/main/presenter/configPresenter/modelCapabilities.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/shared/types/presenters/llmprovider.presenter.d.ts
  • src/renderer/src/components/icons/ModelIcon.vue
  • src/renderer/src/stores/providerStore.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/renderer/settings/components/VertexProviderSettingsDetail.vue
  • src/shared/provider-operations.ts
  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
  • src/renderer/settings/components/ModelProviderSettingsDetail.vue
**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Enable and maintain strict TypeScript type checking for all files

**/*.{ts,tsx}: Always use try-catch to handle possible errors in TypeScript code
Provide meaningful error messages when catching errors
Log detailed error logs including error details, context, and stack traces
Distinguish and handle different error types (UserError, NetworkError, SystemError, BusinessError) with appropriate handlers in TypeScript
Use structured logging with logger.error(), logger.warn(), logger.info(), logger.debug() methods from logging utilities
Do not suppress errors (avoid empty catch blocks or silently ignoring errors)
Provide user-friendly error messages for user-facing errors in TypeScript components
Implement error retry mechanisms for transient failures in TypeScript
Avoid logging sensitive information (passwords, tokens, PII) in logs

Files:

  • src/main/presenter/configPresenter/modelCapabilities.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/shared/types/presenters/llmprovider.presenter.d.ts
  • src/renderer/src/stores/providerStore.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/shared/provider-operations.ts
  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/main/presenter/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Organize core business logic into dedicated Presenter classes, with one presenter per functional domain

Files:

  • src/main/presenter/configPresenter/modelCapabilities.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/main/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Use EventBus from src/main/eventbus.ts for main-to-renderer communication, broadcasting events via mainWindow.webContents.send()

src/main/**/*.ts: Use EventBus pattern for inter-process communication within the main process to decouple modules
Use Electron's built-in APIs for file system and native dialogs instead of Node.js or custom implementations

src/main/**/*.ts: Electron main process code belongs in src/main/ with presenters in presenter/ (Window/Tab/Thread/Mcp/Config/LLMProvider) and eventbus.ts for app events
Use the Presenter pattern in the main process for UI coordination

Files:

  • src/main/presenter/configPresenter/modelCapabilities.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/main/presenter/configPresenter/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Store and retrieve custom prompts via configPresenter.getCustomPrompts() for config-based data source management

Files:

  • src/main/presenter/configPresenter/modelCapabilities.ts
  • src/main/presenter/configPresenter/providers.ts
**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Do not include AI co-authoring information (e.g., 'Co-Authored-By: Claude') in git commits

Files:

  • src/main/presenter/configPresenter/modelCapabilities.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/shared/types/presenters/llmprovider.presenter.d.ts
  • src/renderer/src/stores/providerStore.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/shared/provider-operations.ts
  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
**/*.{js,ts,jsx,tsx,mjs,cjs}

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

Write logs and comments in English

Files:

  • src/main/presenter/configPresenter/modelCapabilities.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/shared/types/presenters/llmprovider.presenter.d.ts
  • src/renderer/src/stores/providerStore.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/shared/provider-operations.ts
  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
{src/main/presenter/**/*.ts,src/renderer/**/*.ts}

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

Implement proper inter-process communication (IPC) patterns using Electron's ipcRenderer and ipcMain APIs

Files:

  • src/main/presenter/configPresenter/modelCapabilities.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/renderer/src/stores/providerStore.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/main/**/*.{js,ts}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Main process code for Electron should be placed in src/main

Files:

  • src/main/presenter/configPresenter/modelCapabilities.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/**/*.{ts,tsx,vue,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

Use Prettier with single quotes, no semicolons, and 100 character width

Files:

  • src/main/presenter/configPresenter/modelCapabilities.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/shared/types/presenters/llmprovider.presenter.d.ts
  • src/renderer/src/components/icons/ModelIcon.vue
  • src/renderer/src/stores/providerStore.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/renderer/settings/components/VertexProviderSettingsDetail.vue
  • src/shared/provider-operations.ts
  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
  • src/renderer/settings/components/ModelProviderSettingsDetail.vue
src/**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

Use OxLint for linting JavaScript and TypeScript files

Files:

  • src/main/presenter/configPresenter/modelCapabilities.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/shared/types/presenters/llmprovider.presenter.d.ts
  • src/renderer/src/stores/providerStore.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/shared/provider-operations.ts
  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

src/**/*.{ts,tsx}: Use camelCase for variable and function names in TypeScript files
Use PascalCase for type and class names in TypeScript
Use SCREAMING_SNAKE_CASE for constant names

Files:

  • src/main/presenter/configPresenter/modelCapabilities.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/shared/types/presenters/llmprovider.presenter.d.ts
  • src/renderer/src/stores/providerStore.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/shared/provider-operations.ts
  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/**/*.ts

📄 CodeRabbit inference engine (AGENTS.md)

Use EventBus for inter-process communication events

Files:

  • src/main/presenter/configPresenter/modelCapabilities.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/shared/types/presenters/llmprovider.presenter.d.ts
  • src/renderer/src/stores/providerStore.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/shared/provider-operations.ts
  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/main/presenter/llmProviderPresenter/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)

Define the standardized LLMCoreStreamEvent interface with fields: type (text | reasoning | tool_call_start | tool_call_chunk | tool_call_end | error | usage | stop | image_data), content (for text), reasoning_content (for reasoning), tool_call_id, tool_call_name, tool_call_arguments_chunk (for streaming), tool_call_arguments_complete (for complete arguments), error_message, usage object with token counts, stop_reason (tool_use | max_tokens | stop_sequence | error | complete), and image_data object with Base64-encoded data and mimeType

Files:

  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
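The event interface described in the rule above can be sketched as a TypeScript discriminated union. This is a sketch derived from the field list in the rule text, not the project's actual source; the real definition in `src/main/presenter/llmProviderPresenter` may differ in detail.

```typescript
// Sketch of the standardized stream event union described above.
// Variant names and fields mirror the rule text; the project's real
// definition may differ.
type LLMCoreStreamEvent =
  | { type: 'text'; content: string }
  | { type: 'reasoning'; reasoning_content: string }
  | { type: 'tool_call_start'; tool_call_id: string; tool_call_name: string }
  | { type: 'tool_call_chunk'; tool_call_id: string; tool_call_arguments_chunk: string }
  | { type: 'tool_call_end'; tool_call_id: string; tool_call_arguments_complete?: string }
  | { type: 'error'; error_message: string }
  | {
      type: 'usage'
      usage: { prompt_tokens: number; completion_tokens: number; total_tokens: number }
    }
  | { type: 'stop'; stop_reason: 'tool_use' | 'max_tokens' | 'stop_sequence' | 'error' | 'complete' }
  | { type: 'image_data'; image_data: { data: string; mimeType: string } }

// The `type` discriminant lets consumers narrow each event safely:
const handle = (e: LLMCoreStreamEvent): string => {
  switch (e.type) {
    case 'text':
      return e.content
    case 'error':
      return e.error_message
    default:
      return e.type
  }
}

console.log(handle({ type: 'text', content: 'hello' })) // → 'hello'
```

Modeling the events as a union rather than one interface with optional fields makes illegal combinations (e.g. a `text` event carrying `error_message`) unrepresentable at compile time.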
src/shared/**/*.d.ts

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

Define type definitions in shared/*.d.ts files for objects exposed by the main process to the renderer process

Files:

  • src/shared/types/presenters/llmprovider.presenter.d.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
src/shared/**/*.{js,ts}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Shared type definitions and utilities between main and renderer processes should be placed in src/shared

Files:

  • src/shared/types/presenters/llmprovider.presenter.d.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/shared/provider-operations.ts
src/shared/**/*.ts

📄 CodeRabbit inference engine (AGENTS.md)

Shared types and utilities should be placed in src/shared/

Files:

  • src/shared/types/presenters/llmprovider.presenter.d.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/shared/provider-operations.ts
**/*.vue

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.vue: Use Vue 3 Composition API for all components instead of Options API
Use Tailwind CSS with scoped styles for component styling

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
  • src/renderer/settings/components/VertexProviderSettingsDetail.vue
  • src/renderer/settings/components/ModelProviderSettingsDetail.vue
src/renderer/**/*.vue

📄 CodeRabbit inference engine (CLAUDE.md)

src/renderer/**/*.vue: All user-facing strings must use i18n keys via vue-i18n for internationalization
Ensure proper error handling and loading states in all UI components
Implement responsive design using Tailwind CSS utilities for all UI components

src/renderer/**/*.vue: Use composition API and declarative programming patterns; avoid options API
Structure files: exported component, composables, helpers, static content, types
Use PascalCase for component names (e.g., AuthWizard.vue)
Use Vue 3 with TypeScript, leveraging defineComponent and PropType
Use template syntax for declarative rendering
Use Shadcn Vue, Radix Vue, and Tailwind for components and styling
Implement responsive design with Tailwind CSS; use a mobile-first approach
Use Suspense for asynchronous components
Use <script setup> syntax for concise component definitions
Prefer 'lucide:' icon family as the primary choice for Iconify icons
Import Icon component from '@iconify/vue' and use with lucide icons following pattern '{collection}:{icon-name}'

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
  • src/renderer/settings/components/VertexProviderSettingsDetail.vue
  • src/renderer/settings/components/ModelProviderSettingsDetail.vue
src/renderer/src/**/*.{vue,ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/i18n.mdc)

src/renderer/src/**/*.{vue,ts,tsx}: All user-facing strings must use i18n keys with vue-i18n framework in the renderer
Import and use useI18n() composable with the t() function to access translations in Vue components and TypeScript files
Use the dynamic locale.value property to switch languages at runtime
Avoid hardcoding user-facing text and ensure all user-visible text uses the i18n translation system

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
  • src/renderer/src/stores/providerStore.ts
src/renderer/**/*.{vue,js,ts}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Renderer process code should be placed in src/renderer (Vue 3 application)

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
  • src/renderer/src/stores/providerStore.ts
  • src/renderer/settings/components/VertexProviderSettingsDetail.vue
  • src/renderer/settings/components/ModelProviderSettingsDetail.vue
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}

📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)

src/renderer/src/**/*.{vue,ts,tsx,js,jsx}: Use the Composition API for better code organization and reusability in Vue.js applications
Implement proper state management with Pinia in Vue.js applications
Utilize Vue Router for navigation and route management in Vue.js applications
Leverage Vue's built-in reactivity system for efficient data handling

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
  • src/renderer/src/stores/providerStore.ts
src/renderer/src/**/*.vue

📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)

Use scoped styles to prevent CSS conflicts between Vue components

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
src/renderer/**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,tsx,vue}: Write concise, technical TypeScript code with accurate examples
Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError)
Avoid enums; use const objects instead
Use arrow functions for methods and computed properties
Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements

Vue 3 app code in src/renderer/src should be organized into components/, stores/, views/, i18n/, lib/ directories with shell UI in src/renderer/shell/

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
  • src/renderer/src/stores/providerStore.ts
  • src/renderer/settings/components/VertexProviderSettingsDetail.vue
  • src/renderer/settings/components/ModelProviderSettingsDetail.vue
src/renderer/**/*.{ts,vue}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

src/renderer/**/*.{ts,vue}: Use useFetch and useAsyncData for data fetching
Leverage ref, reactive, and computed for reactive state management
Use provide/inject for dependency injection when appropriate
Use Iconify/Vue for icon implementation

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
  • src/renderer/src/stores/providerStore.ts
  • src/renderer/settings/components/VertexProviderSettingsDetail.vue
  • src/renderer/settings/components/ModelProviderSettingsDetail.vue
src/renderer/src/**/*.{ts,tsx,vue}

📄 CodeRabbit inference engine (AGENTS.md)

src/renderer/src/**/*.{ts,tsx,vue}: Use TypeScript with Vue 3 Composition API for the renderer application
All user-facing strings must use vue-i18n keys in src/renderer/src/i18n

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
  • src/renderer/src/stores/providerStore.ts
src/renderer/src/components/**/*.vue

📄 CodeRabbit inference engine (AGENTS.md)

src/renderer/src/components/**/*.vue: Use Tailwind for styles in Vue components
Vue component files must use PascalCase naming (e.g., ChatInput.vue)

Files:

  • src/renderer/src/components/icons/ModelIcon.vue
src/renderer/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Use the usePresenter.ts composable for renderer-to-main IPC communication to call presenter methods directly

Files:

  • src/renderer/src/stores/providerStore.ts
src/renderer/src/stores/**/*.{vue,ts,tsx,js,jsx}

📄 CodeRabbit inference engine (.cursor/rules/pinia-best-practices.mdc)

src/renderer/src/stores/**/*.{vue,ts,tsx,js,jsx}: Use modules to organize related state and actions in Pinia stores
Implement proper state persistence for maintaining data across sessions in Pinia stores
Use getters for computed state properties in Pinia stores
Utilize actions for side effects and asynchronous operations in Pinia stores
Keep Pinia stores focused on global state, not component-specific data

Files:

  • src/renderer/src/stores/providerStore.ts
src/renderer/**/*.{ts,tsx}

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

Use TypeScript for all code; prefer types over interfaces

Files:

  • src/renderer/src/stores/providerStore.ts
src/renderer/**/stores/*.ts

📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)

Use Pinia for state management

Files:

  • src/renderer/src/stores/providerStore.ts
src/renderer/src/stores/**/*.ts

📄 CodeRabbit inference engine (AGENTS.md)

Use Pinia for state management

Files:

  • src/renderer/src/stores/providerStore.ts
package.json

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

package.json: Node.js >= 22 required
pnpm >= 9 required

Files:

  • package.json
src/main/presenter/llmProviderPresenter/providers/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

src/main/presenter/llmProviderPresenter/providers/*.ts: Each LLM provider must implement the coreStream method following the standardized event interface for tool calling and response streaming
Convert MCP tools to provider-specific formats and normalize streaming responses to standard events in each provider implementation

src/main/presenter/llmProviderPresenter/providers/*.ts: In Provider implementations (src/main/presenter/llmProviderPresenter/providers/*.ts), the coreStream(messages, modelId, temperature, maxTokens) method should perform a single-pass streaming API request for each conversation round without containing multi-turn tool call loop logic
In Provider implementations, handle native tool support by converting MCP tools to Provider format using convertToProviderTools and including them in the API request; for Providers without native function call support, prepare messages using prepareFunctionCallPrompt before making the API call
In Provider implementations, parse Provider-specific data chunks from the streaming response and yield standardized LLMCoreStreamEvent objects conforming to the standard stream event interface, including text, reasoning, tool calls, usage, errors, stop reasons, and image data
In Provider implementations, include helper methods for Provider-specific operations such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt

Files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
🧠 Learnings (39)
📚 Learning: 2025-11-25T05:26:43.498Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-11-25T05:26:43.498Z
Learning: Applies to src/renderer/src/i18n/**/*.json : Maintain consistent key-value structure across all language translation files (zh-CN, en-US, ko-KR, ru-RU, zh-HK, fr-FR, fa-IR)

Applied to files:

  • src/renderer/src/i18n/ko-KR/settings.json
  • src/renderer/src/i18n/pt-BR/settings.json
  • src/renderer/src/i18n/ja-JP/settings.json
  • src/renderer/src/i18n/ru-RU/settings.json
  • src/renderer/src/i18n/zh-CN/settings.json
  • src/renderer/src/i18n/zh-HK/settings.json
  • src/renderer/src/i18n/fa-IR/settings.json
  • src/renderer/src/i18n/zh-TW/settings.json
  • src/renderer/src/i18n/fr-FR/settings.json
  • src/renderer/src/i18n/en-US/settings.json
📚 Learning: 2025-11-25T05:26:43.498Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-11-25T05:26:43.498Z
Learning: Applies to src/renderer/src/i18n/**/*.json : Translation key naming convention: use dot-separated hierarchical structure with lowercase letters and descriptive names (e.g., 'common.button.submit')

Applied to files:

  • src/renderer/src/i18n/pt-BR/settings.json
  • src/renderer/src/i18n/ru-RU/settings.json
  • src/renderer/src/i18n/zh-TW/settings.json
📚 Learning: 2025-11-25T05:26:43.498Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-11-25T05:26:43.498Z
Learning: Applies to src/renderer/src/**/*.{vue,ts,tsx} : All user-facing strings must use i18n keys with vue-i18n framework in the renderer

Applied to files:

  • src/renderer/src/i18n/pt-BR/settings.json
  • src/renderer/src/i18n/zh-HK/settings.json
  • src/renderer/src/i18n/zh-TW/settings.json
📚 Learning: 2025-11-25T05:28:20.500Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-25T05:28:20.500Z
Learning: Applies to src/renderer/src/**/*.{ts,tsx,vue} : All user-facing strings must use vue-i18n keys in `src/renderer/src/i18n`

Applied to files:

  • src/renderer/src/i18n/pt-BR/settings.json
  • src/renderer/src/i18n/zh-TW/settings.json
📚 Learning: 2025-11-25T05:26:43.498Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-11-25T05:26:43.498Z
Learning: Applies to src/renderer/src/**/*.{vue,ts,tsx} : Avoid hardcoding user-facing text and ensure all user-visible text uses the i18n translation system

Applied to files:

  • src/renderer/src/i18n/pt-BR/settings.json
  • src/renderer/src/i18n/zh-TW/settings.json
📚 Learning: 2025-11-25T05:27:12.201Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.201Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, include helper methods for Provider-specific operations such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt`

Applied to files:

  • src/main/presenter/configPresenter/modelCapabilities.ts
  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/shared/types/presenters/llmprovider.presenter.d.ts
  • src/renderer/src/stores/providerStore.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/shared/types/presenters/legacy.presenters.d.ts
  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
  • src/renderer/settings/components/ModelProviderSettingsDetail.vue
📚 Learning: 2025-11-25T05:26:11.297Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.297Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Convert MCP tools to provider-specific formats and normalize streaming responses to standard events in each provider implementation

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/shared/types/presenters/llmprovider.presenter.d.ts
  • src/renderer/src/stores/providerStore.ts
  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:12.201Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.201Z
Learning: Implement separation of concerns where `src/main/presenter/llmProviderPresenter/index.ts` manages the Agent loop and conversation history, while Provider files handle LLM API interactions, Provider-specific request/response formatting, tool definition conversion, and native vs non-native tool call mechanisms

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/shared/types/presenters/llmprovider.presenter.d.ts
  • src/renderer/src/stores/providerStore.ts
  • src/main/presenter/configPresenter/providers.ts
  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:12.201Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.201Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, handle native tool support by converting MCP tools to Provider format using `convertToProviderTools` and including them in the API request; for Providers without native function call support, prepare messages using `prepareFunctionCallPrompt` before making the API call

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/shared/types/presenters/llmprovider.presenter.d.ts
  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:12.201Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.201Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, parse Provider-specific data chunks from the streaming response and `yield` standardized `LLMCoreStreamEvent` objects conforming to the standard stream event interface, including text, reasoning, tool calls, usage, errors, stop reasons, and image data

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/shared/types/presenters/llmprovider.presenter.d.ts
  • src/renderer/src/stores/providerStore.ts
  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:26:11.297Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.297Z
Learning: Applies to src/main/presenter/mcpPresenter/**/*.ts : Register new MCP tools in `mcpPresenter/index.ts` after implementing them in `inMemoryServers/`

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
📚 Learning: 2025-11-25T05:26:11.297Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.297Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each LLM provider must implement the `coreStream` method following the standardized event interface for tool calling and response streaming

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/shared/types/presenters/llmprovider.presenter.d.ts
  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:39.191Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-11-25T05:27:39.191Z
Learning: Applies to **/*Provider**/index.ts : Do not introduce renderer dependencies inside Provider implementations

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
  • src/renderer/settings/components/ModelProviderSettingsDetail.vue
📚 Learning: 2025-11-25T05:28:04.440Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-11-25T05:28:04.440Z
Learning: Applies to src/renderer/**/*.{ts,vue} : Use provide/inject for dependency injection when appropriate

Applied to files:

  • src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
📚 Learning: 2025-11-25T05:27:12.201Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.201Z
Learning: Applies to src/main/presenter/llmProviderPresenter/**/*.ts : Define the standardized `LLMCoreStreamEvent` interface with fields: `type` (text | reasoning | tool_call_start | tool_call_chunk | tool_call_end | error | usage | stop | image_data), `content` (for text), `reasoning_content` (for reasoning), `tool_call_id`, `tool_call_name`, `tool_call_arguments_chunk` (for streaming), `tool_call_arguments_complete` (for complete arguments), `error_message`, `usage` object with token counts, `stop_reason` (tool_use | max_tokens | stop_sequence | error | complete), and `image_data` object with Base64-encoded data and mimeType

Applied to files:

  • src/shared/types/presenters/llmprovider.presenter.d.ts
  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:39.191Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-11-25T05:27:39.191Z
Learning: Applies to **/*Provider**/index.ts : Output only discriminated union `LLMCoreStreamEvent` in Provider implementations, do not use single interface with optional fields

Applied to files:

  • src/shared/types/presenters/llmprovider.presenter.d.ts
📚 Learning: 2025-11-25T05:28:04.440Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-11-25T05:28:04.440Z
Learning: Applies to src/renderer/**/*.{ts,vue} : Use Iconify/Vue for icon implementation

Applied to files:

  • src/renderer/src/components/icons/ModelIcon.vue
📚 Learning: 2025-11-25T05:28:04.440Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-11-25T05:28:04.440Z
Learning: Applies to src/renderer/**/*.vue : Prefer 'lucide:' icon family as the primary choice for Iconify icons

Applied to files:

  • src/renderer/src/components/icons/ModelIcon.vue
📚 Learning: 2025-11-25T05:28:04.440Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-11-25T05:28:04.440Z
Learning: Applies to src/renderer/**/*.vue : Import Icon component from 'iconify/vue' and use with lucide icons following pattern '{collection}:{icon-name}'

Applied to files:

  • src/renderer/src/components/icons/ModelIcon.vue
📚 Learning: 2025-11-25T05:26:11.297Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.297Z
Learning: Applies to src/renderer/**/*.ts : Use the `usePresenter.ts` composable for renderer-to-main IPC communication to call presenter methods directly

Applied to files:

  • src/renderer/src/stores/providerStore.ts
📚 Learning: 2025-11-25T05:27:12.201Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.201Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, handle `reasoning`, `text`, `image_data`, and `usage` events by processing and forwarding them through `STREAM_EVENTS.RESPONSE` events to the frontend

Applied to files:

  • src/renderer/src/stores/providerStore.ts
  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:26:24.860Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/electron-best-practices.mdc:0-0
Timestamp: 2025-11-25T05:26:24.860Z
Learning: Applies to {src/main/presenter/**/*.ts,src/renderer/**/*.ts} : Implement proper inter-process communication (IPC) patterns using Electron's ipcRenderer and ipcMain APIs

Applied to files:

  • src/renderer/src/stores/providerStore.ts
📚 Learning: 2025-11-25T05:27:39.191Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-11-25T05:27:39.191Z
Learning: Applies to **/*Provider**/index.ts : Every event construction in Provider implementations must use factory functions

Applied to files:

  • src/renderer/src/stores/providerStore.ts
📚 Learning: 2025-11-25T05:27:12.201Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.201Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, listen for standardized events yielded by `coreStream` and handle them accordingly: buffer text content (`currentContent`), handle `tool_call_start/chunk/end` events by collecting tool details and calling `presenter.mcpPresenter.callTool`, send frontend events via `eventBus` with tool call status, format tool results for the next LLM call, and set `needContinueConversation = true`

Applied to files:

  • src/renderer/src/stores/providerStore.ts
  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:39.191Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-11-25T05:27:39.191Z
Learning: Applies to **/*Provider**/index.ts : Use factory methods `createStreamEvent.*` to construct events in Provider implementations, avoid direct field pollution

Applied to files:

  • src/renderer/src/stores/providerStore.ts
📚 Learning: 2025-11-25T05:26:11.297Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.297Z
Learning: Applies to **/*.{ts,tsx,js,jsx,vue} : Use English for logs and comments (Chinese text exists in legacy code, but new code should use English)

Applied to files:

  • src/renderer/src/i18n/zh-TW/settings.json
📚 Learning: 2025-11-25T05:28:04.439Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-11-25T05:28:04.439Z
Learning: Applies to src/renderer/**/*.vue : Use Vue 3 with TypeScript, leveraging defineComponent and PropType

Applied to files:

  • src/renderer/settings/components/VertexProviderSettingsDetail.vue
📚 Learning: 2025-11-25T05:27:45.535Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-best-practices.mdc:0-0
Timestamp: 2025-11-25T05:27:45.535Z
Learning: Applies to src/renderer/src/**/*.{vue,ts,tsx,js,jsx} : Use the Composition API for better code organization and reusability in Vue.js applications

Applied to files:

  • src/renderer/settings/components/VertexProviderSettingsDetail.vue
  • src/renderer/settings/components/ModelProviderSettingsDetail.vue
📚 Learning: 2025-11-25T05:26:15.918Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/development-setup.mdc:0-0
Timestamp: 2025-11-25T05:26:15.918Z
Learning: Applies to package.json : Node.js >= 22 required

Applied to files:

  • package.json
📚 Learning: 2025-11-25T05:28:20.500Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-25T05:28:20.500Z
Learning: Require Node ≥ 20.19 and pnpm ≥ 10.11 (pnpm only, not npm) as the project toolchain

Applied to files:

  • package.json
📚 Learning: 2025-11-25T05:27:12.201Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.201Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations (`src/main/presenter/llmProviderPresenter/providers/*.ts`), the `coreStream(messages, modelId, temperature, maxTokens)` method should perform a *single-pass* streaming API request for each conversation round without containing multi-turn tool call loop logic

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:12.201Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.201Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts` (`startStreamCompletion`), implement the Agent loop that manages the overall conversation flow, including multiple rounds of LLM calls and tool usage, maintaining `conversationMessages` history, calling `provider.coreStream()` on each iteration, and controlling the loop using `needContinueConversation` and `toolCallCount` (compared against `MAX_TOOL_CALLS`)

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:28:04.440Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-11-25T05:28:04.440Z
Learning: Applies to src/renderer/**/*.{ts,vue} : Leverage ref, reactive, and computed for reactive state management

Applied to files:

  • src/renderer/settings/components/ModelProviderSettingsDetail.vue
📚 Learning: 2025-11-25T05:28:04.439Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-11-25T05:28:04.439Z
Learning: Applies to src/renderer/**/composables/*.ts : Use VueUse for common composables and utility functions

Applied to files:

  • src/renderer/settings/components/ModelProviderSettingsDetail.vue
📚 Learning: 2025-11-25T05:27:45.535Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-best-practices.mdc:0-0
Timestamp: 2025-11-25T05:27:45.535Z
Learning: Applies to src/renderer/src/**/*.{vue,ts,tsx,js,jsx} : Leverage Vue's built-in reactivity system for efficient data handling

Applied to files:

  • src/renderer/settings/components/ModelProviderSettingsDetail.vue
📚 Learning: 2025-11-25T05:27:20.058Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/pinia-best-practices.mdc:0-0
Timestamp: 2025-11-25T05:27:20.058Z
Learning: Applies to src/renderer/src/stores/**/*.{vue,ts,tsx,js,jsx} : Use getters for computed state properties in Pinia stores

Applied to files:

  • src/renderer/settings/components/ModelProviderSettingsDetail.vue
📚 Learning: 2025-11-25T05:28:04.439Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-11-25T05:28:04.439Z
Learning: Applies to src/renderer/**/*.vue : Structure files: exported component, composables, helpers, static content, types

Applied to files:

  • src/renderer/settings/components/ModelProviderSettingsDetail.vue
📚 Learning: 2025-11-25T05:28:04.439Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-11-25T05:28:04.439Z
Learning: Applies to src/renderer/**/*.vue : Use composition API and declarative programming patterns; avoid options API

Applied to files:

  • src/renderer/settings/components/ModelProviderSettingsDetail.vue
📚 Learning: 2025-11-25T05:28:20.500Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-25T05:28:20.500Z
Learning: Applies to src/renderer/src/**/*.{ts,tsx,vue} : Use TypeScript with Vue 3 Composition API for the renderer application

Applied to files:

  • src/renderer/settings/components/ModelProviderSettingsDetail.vue
🧬 Code graph analysis (3)
src/shared/types/presenters/llmprovider.presenter.d.ts (2)
src/shared/types/presenters/legacy.presenters.d.ts (2)
  • VERTEX_PROVIDER (761-768)
  • LLM_PROVIDER (641-661)
src/shared/types/presenters/index.d.ts (1)
  • LLM_PROVIDER (9-9)
src/renderer/src/stores/providerStore.ts (2)
src/shared/types/presenters/legacy.presenters.d.ts (1)
  • VERTEX_PROVIDER (761-768)
src/shared/types/presenters/llmprovider.presenter.d.ts (1)
  • VERTEX_PROVIDER (94-101)
src/shared/types/presenters/legacy.presenters.d.ts (2)
src/shared/types/presenters/llmprovider.presenter.d.ts (2)
  • VERTEX_PROVIDER (94-101)
  • LLM_PROVIDER (45-60)
src/shared/types/presenters/index.d.ts (1)
  • LLM_PROVIDER (9-9)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-check (x64)
🔇 Additional comments (16)
package.json (1)

69-69: @google/genai bump looks fine; verify against upstream changelog/tests

Upgrading @google/genai to ^1.30.0 is consistent with adding Vertex support; just make sure your Vertex flows and any existing Gemini usage are regression-tested against this version.

src/renderer/src/i18n/zh-TW/settings.json (1)

277-288: Vertex provider i18n keys and wording look consistent

Key names and Traditional Chinese labels/placeholders align with other locales and existing provider fields, so this block is good to go.

src/main/presenter/configPresenter/modelCapabilities.ts (1)

20-24: Provider ID aliases for gemini/vertex are wired correctly

Lower‑case aliases gemini → google and vertex → google-vertex match the normalization logic and will let model lookups work even when callers use the short ids.
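The alias normalization this comment describes can be sketched as a small lookup table. The map and function names here are assumptions for illustration; the actual `modelCapabilities.ts` implementation may be structured differently.

```typescript
// Sketch of a provider-id alias lookup (names are illustrative, not the
// project's actual identifiers).
const PROVIDER_ALIASES: Record<string, string> = {
  gemini: 'google',
  vertex: 'google-vertex'
}

// Lower-case the incoming id, then resolve aliases; unknown ids pass through.
const normalizeProviderId = (id: string): string => {
  const lower = id.toLowerCase()
  return PROVIDER_ALIASES[lower] ?? lower
}

console.log(normalizeProviderId('Vertex')) // → 'google-vertex'
```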

src/renderer/src/components/icons/ModelIcon.vue (1)

108-110: Vertex icon alias is correctly added

Mapping vertex to the same asset as vertexai fits the substring‑match lookup and will cover provider/model ids that only contain vertex.
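The substring-match lookup mentioned here can be sketched as follows. The map contents and asset filename are placeholders, and the matching direction (id contains key) is an assumption based on the comment's wording.

```typescript
// Sketch of a substring-match icon lookup. 'vertex.png' is a placeholder
// asset name, not the project's actual file.
const ICON_MAP: Record<string, string> = {
  vertexai: 'vertex.png',
  vertex: 'vertex.png'
}

// Return the asset for the first key contained in the given id, if any.
const findIcon = (id: string): string | undefined => {
  const key = Object.keys(ICON_MAP).find((k) => id.toLowerCase().includes(k))
  return key ? ICON_MAP[key] : undefined
}
```

Under this scheme, ids such as `google-vertex` resolve via the shorter `vertex` key even though they never match `vertexai`.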

src/renderer/src/i18n/zh-CN/settings.json (1)

277-288: Vertex provider Chinese translations are consistent and well-formed

Key set and Simplified Chinese text match other locales and existing provider wording, so this section is in good shape.

src/shared/types/presenters/legacy.presenters.d.ts (1)

761-768: VERTEX_PROVIDER type mirrors core definition and looks correct

Extending LLM_PROVIDER with projectId, location, accountPrivateKey, accountClientEmail, apiVersion ('v1' | 'v1beta1'), and endpointMode ('standard' | 'express') matches the main VERTEX_PROVIDER typing and should keep legacy presenter consumers type-safe.

src/renderer/src/i18n/ja-JP/settings.json (1)

277-288: Japanese Vertex provider translations are consistent and clear

The added vertex* keys match other locales structurally, and the Japanese text reads naturally for project/location/service-account/endpoint configuration.

src/main/presenter/configPresenter/providers.ts (1)

235-249: Vertex default provider entry is consistent with existing providers

The new vertex entry matches the existing provider schema (id/apiType, empty baseUrl, and a sensible defaultBaseUrl plus docs/models links). No issues from a config/typing perspective.

src/renderer/src/stores/providerStore.ts (1)

6-6: Vertex-specific config wrapper is correctly wired through the store

The updateVertexProviderConfig helper cleanly narrows the update type to Partial<VERTEX_PROVIDER> while delegating to the generic updateProviderConfig, consistent with the existing AWS Bedrock helper and with VERTEX_PROVIDER extending LLM_PROVIDER. Looks good.

Also applies to: 296-301, 410-410

src/renderer/src/i18n/ko-KR/settings.json (1)

277-288: Korean Vertex provider i18n keys look correct and consistent

The added vertex* keys follow the existing provider namespace and appear to be accurate Korean translations matching the en-US semantics (project ID, region, service account email/private key, API version, endpoint mode).

src/shared/types/presenters/llmprovider.presenter.d.ts (1)

94-101: VERTEX_PROVIDER type extension is well-scoped

Defining VERTEX_PROVIDER = LLM_PROVIDER & { projectId/location/accountPrivateKey/accountClientEmail/apiVersion/endpointMode } matches how Vertex is configured elsewhere (store, UI, provider) and keeps Vertex extras optional, which is appropriate for partial updates and progressive configuration.
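The intersection described above can be sketched as follows. The base `LLM_PROVIDER` shape here is a reduced stand-in for illustration (the real interface has more fields); the Vertex extras match the review.

```typescript
// Reduced stand-in for the real LLM_PROVIDER interface, for illustration only.
interface LLM_PROVIDER {
  id: string
  name: string
  apiType: string
  apiKey: string
  baseUrl: string
  enable: boolean
}

// Vertex-specific extras are all optional, matching progressive configuration.
type VERTEX_PROVIDER = LLM_PROVIDER & {
  projectId?: string
  location?: string
  accountPrivateKey?: string
  accountClientEmail?: string
  apiVersion?: 'v1' | 'v1beta1'
  endpointMode?: 'standard' | 'express'
}

// Because the extras are optional, a partial update object type-checks cleanly:
const update: Partial<VERTEX_PROVIDER> = { projectId: 'my-project', location: 'us-central1' }
```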

src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts (1)

26-26: VertexProvider is correctly registered in provider maps

Adding VertexProvider to both PROVIDER_ID_MAP ('vertex') and PROVIDER_TYPE_MAP ('vertex') aligns with the new default provider entry and guarantees proper resolution by both id and apiType.

Also applies to: 81-81, 111-111

src/shared/provider-operations.ts (1)

49-56: Rebuild trigger fields correctly cover new Vertex configuration

Extending REBUILD_REQUIRED_FIELDS to include azureApiVersion and the Vertex fields (projectId, location, accountPrivateKey, accountClientEmail, apiVersion, endpointMode) is appropriate so provider instances are rebuilt when core connection/auth settings change.
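A rebuild check against such a field list might look like the sketch below. The exact contents of the real `REBUILD_REQUIRED_FIELDS` array and the helper name are assumptions; only the newly added fields are taken from the review.

```typescript
// Illustrative field list: azureApiVersion and the Vertex fields come from the
// review; apiKey/baseUrl are assumed core fields and may differ in the real code.
const REBUILD_REQUIRED_FIELDS = [
  'apiKey',
  'baseUrl',
  'azureApiVersion',
  'projectId',
  'location',
  'accountPrivateKey',
  'accountClientEmail',
  'apiVersion',
  'endpointMode'
] as const

// Returns true when a config update touches any field that forces a provider
// instance rebuild.
function requiresRebuild(update: Record<string, unknown>): boolean {
  return Object.keys(update).some((key) =>
    (REBUILD_REQUIRED_FIELDS as readonly string[]).includes(key)
  )
}

const needsRebuild = requiresRebuild({ location: 'europe-west4' })
const noRebuild = requiresRebuild({ name: 'My Vertex' })
```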

src/renderer/settings/components/ModelProviderSettingsDetail.vue (1)

19-27: Vertex provider settings panel is cleanly integrated

Conditionally rendering VertexProviderSettingsDetail when provider.apiType === 'vertex' and passing provider as VERTEX_PROVIDER gives the child proper typing without affecting other providers, and reusing handleConfigChanged / validateApiKey hooks it into the existing config/validation flow. This wiring looks solid.

Also applies to: 93-93, 98-98

src/renderer/src/i18n/zh-HK/settings.json (1)

277-288: Vertex provider i18n keys look consistent and correct

The new Vertex-related provider keys follow the existing naming pattern and align with the en-US structure; no issues from an i18n or structure perspective.

src/renderer/src/i18n/en-US/settings.json (1)

277-288: Vertex provider English labels/placeholders are clear and aligned with UI

The new Vertex-related keys under provider are well named, descriptive, and match the fields rendered in VertexProviderSettingsDetail.vue. Structure is consistent with other provider keys.

zerob13 and others added 10 commits November 28, 2025 14:58
* feat: import deepchat to nowledge-mem

* Merge commit from fork

* feat: i18n fix and nowledge mem style

* feat: add styles

---------

Co-authored-by: duskzhen <zerob13@gmail.com>
* fix: format

* fix: fix first chat error loading timing

* fix: format

* fix: format
* fix: mcp one click

* fix: deeplint mcp install #1136
* feat: add da-DK

* feat: add da-dk setting

* feat: add da-DK translate

* fix: i18n placeholder
#1111)

* feat: implement paralleled vectorizing and thread-safe processing in KnowledgeStorePresenter

* fix: remove unnecessary blank line in processQueue method

* fix: correct chunk message ID usage in enqueueFileTask method


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (1)
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts (1)

36-54: Safety settings mapping logic now looks correct; confirm cross‑provider config reuse

The refactored safety mapping correctly:

  • Filters out 'BLOCK_NONE' and 'HARM_BLOCK_THRESHOLD_UNSPECIFIED' at the string level.
  • Maps only the remaining values via valueToHarmBlockThresholdMap.
  • Uses explicit undefined/null checks, so numeric enum members (including 0) are handled safely.

This fixes the earlier enum‑vs‑string comparison bug.

One open design question: getFormattedSafetySettings still reads from geminiSafety_${key} even in the Vertex provider. If Vertex is meant to share the exact same safety thresholds as Gemini, this is fine; otherwise you may want Vertex‑specific keys (e.g. vertexSafety_*) or a provider‑agnostic naming scheme so providers don’t unintentionally share or override each other’s safety settings.

Based on learnings, provider‑side safety configuration should be explicit and normalized per provider.

Also applies to: 365-403
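The filter-then-map pattern praised above can be sketched in isolation. The map contents and threshold values here are illustrative stand-ins, not the provider's actual `valueToHarmBlockThresholdMap`.

```typescript
// Illustrative stand-in for the provider's threshold map; real enum values differ.
const valueToHarmBlockThresholdMap: Record<string, number> = {
  BLOCK_LOW_AND_ABOVE: 1,
  BLOCK_MEDIUM_AND_ABOVE: 2,
  BLOCK_ONLY_HIGH: 3
}

// Filter disabled values at the string level first, then map the remainder.
// Explicit undefined/null checks mean a numeric 0 threshold would survive.
function mapThresholds(values: string[]): number[] {
  return values
    .filter((v) => v !== 'BLOCK_NONE' && v !== 'HARM_BLOCK_THRESHOLD_UNSPECIFIED')
    .map((v) => valueToHarmBlockThresholdMap[v])
    .filter((t): t is number => t !== undefined && t !== null)
}

const result = mapThresholds([
  'BLOCK_NONE',
  'BLOCK_ONLY_HIGH',
  'HARM_BLOCK_THRESHOLD_UNSPECIFIED'
])
```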

🧹 Nitpick comments (4)
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts (4)

130-138: New comments are in Chinese; please switch to English per repo conventions

Several new comments (e.g. around ensureVertexModelName, autoEnableModelsIfNeeded, formatVertexMessages, processVertexResponse, completions) are written in Chinese. The repo guidelines state that new TS/JS code should use English for comments and logs (Chinese is only tolerated in legacy code).

It’d be good to rephrase these into concise English so future contributors can follow the intent without language barriers.

As per coding guidelines, new comments/logs in *.ts files should be in English.

Also applies to: 334-363, 512-525, 661-662, 709-711


512-608: Tool message mapping is a bit brittle; clarify or align function response handling

The tool‑related parts of formatVertexMessages are thorough but somewhat fragile:

  • For assistant messages with tool_calls, you correctly emit functionCall and functionResponse parts, which matches how the provider expects function invocations and responses.
  • For messages with role === 'tool' and array content, you:
    • Treat part.type === 'function_call' as a functionCall part (good).
    • Treat part.type === 'function_response' as plain { text: part.function_response || '' }, which loses the structured response (name, JSON payload) and won’t be recognized as a function response by the model.
  • There are several // @ts-ignore directives, which makes the code harder to maintain under strict TS.

If ChatMessage is expected to carry structured function responses for tool messages, consider aligning this branch with the assistant/tool_calls branch, e.g. by emitting a proper functionResponse part instead of collapsing to text. If this tool‑role branch is no longer used by the current agent flow, you might simplify or remove it to avoid confusion.

Defining a small local type for the tool parts would also let you remove the @ts-ignore usage while keeping the code type‑safe.

Based on learnings, provider‑side message formatting should preserve structured tool call/response data for consistent tool behavior.
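One way to preserve the structured response in the tool-role branch is sketched below, with hypothetical part shapes; the real `ChatMessage` part types and the provider's `Part` union differ.

```typescript
// Hypothetical shapes for illustration; the real types live in shared/provider code.
type ToolPart =
  | { type: 'function_call'; name: string; args: Record<string, unknown> }
  | { type: 'function_response'; name: string; response: unknown }

type VertexPart =
  | { functionCall: { name: string; args: Record<string, unknown> } }
  | { functionResponse: { name: string; response: { result: unknown } } }

// Emit a structured functionResponse part instead of collapsing to plain text,
// mirroring what the assistant/tool_calls branch already does.
function toVertexPart(part: ToolPart): VertexPart {
  if (part.type === 'function_call') {
    return { functionCall: { name: part.name, args: part.args } }
  }
  return { functionResponse: { name: part.name, response: { result: part.response } } }
}

const mapped = toVertexPart({ type: 'function_response', name: 'search', response: '3 hits' })
```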


661-707: Reasoning/text split logic is duplicated; reuse processVertexResponse in completions

processVertexResponse already encapsulates the logic for:

  • Preferring candidate parts when present.
  • Separating thought parts (part.thought === true) from normal text.
  • Falling back to <think>...</think> parsing when needed.

completions reimplements essentially the same logic inline (lines 785–823). To reduce duplication and keep behavior consistent, you could:

  1. Do all usage/usageMetadata handling in completions as you already do.
  2. Then call this.processVertexResponse(result) and merge the returned content and reasoning_content into resultResp, instead of manually re‑parsing the response again.

That way any future change in how Vertex encodes reasoning (new fields, different markers) only needs to be updated in one place.

Based on learnings, centralizing provider‑specific response parsing improves consistency across both sync and streaming paths.

Also applies to: 785-823
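The suggested refactor can be sketched as follows. The parser body is a simplified stand-in for the real `processVertexResponse` (which also handles candidate parts and `thought` flags); the point is that `completions` delegates instead of re-parsing.

```typescript
interface ParsedResponse {
  content: string
  reasoning_content?: string
}

// Simplified stand-in: only the <think>...</think> fallback path is shown here.
function processVertexResponse(text: string): ParsedResponse {
  const match = text.match(/<think>([\s\S]*?)<\/think>([\s\S]*)/)
  if (match) {
    return { reasoning_content: match[1].trim(), content: match[2].trim() }
  }
  return { content: text.trim() }
}

function completions(rawText: string): ParsedResponse {
  // Usage/usageMetadata handling would happen here as today, then parsing
  // is delegated so the reasoning/text split lives in one place:
  return processVertexResponse(rawText)
}

const parsed = completions('<think>check units</think>42 km')
```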


246-262: Use shared logger utilities instead of `console.*` and keep logs structured

Throughout this provider (summaryTitles, check/init, token estimation, suggestions parsing, coreStream, image streaming), logging uses console.log / console.warn / console.error. Project guidelines call for structured logging via logger.info / logger.warn / logger.error / logger.debug with clear context and error typing.

To align with that:

  • Inject or import the project’s logger and replace console calls with the appropriate logger methods.
  • Reuse sanitizeForLogging and isDebugLoggingEnabled to ensure debug logs remain PII‑safe and gated by config.
  • Where applicable, distinguish between network/system errors vs. user/config errors to support the existing error type taxonomy.

This will make Vertex logs consistent with other providers and easier to consume in centralized logging.

As per coding guidelines, avoid direct console logging in TypeScript and prefer structured logger usage.

Also applies to: 273-305, 307-331, 782-783, 857-858, 885-886, 923-924, 939-940, 982-986, 1068-1074, 1182-1194, 1278-1280
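The console-to-logger migration might look like the sketch below. The logger API here is a minimal stand-in; the project's real logging utilities likely differ in shape.

```typescript
type LogLevel = 'debug' | 'info' | 'warn' | 'error'

interface LogEntry {
  level: LogLevel
  message: string
  context: Record<string, unknown>
}

// Minimal structured-logger stand-in that records entries for inspection.
const entries: LogEntry[] = []

const logger = {
  log(level: LogLevel, message: string, context: Record<string, unknown> = {}) {
    entries.push({ level, message, context })
  },
  info(message: string, context?: Record<string, unknown>) {
    this.log('info', message, context)
  },
  error(message: string, context?: Record<string, unknown>) {
    this.log('error', message, context)
  }
}

// Instead of console.log('[Vertex] Stream request:', {...}), pass structured
// context and keep secrets (keys, tokens) out of the payload:
logger.info('vertex.stream.request', { requestId: 'vertex-123', modelId: 'gemini-pro' })
logger.error('vertex.stream.failed', { requestId: 'vertex-123', errorType: 'NetworkError' })
```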

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f08e627 and a85d327.

📒 Files selected for processing (3)
  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts (1 hunks)
  • src/renderer/src/i18n/pt-BR/settings.json (1 hunks)
  • src/renderer/src/i18n/ru-RU/settings.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • src/renderer/src/i18n/ru-RU/settings.json
  • src/renderer/src/i18n/pt-BR/settings.json
🧰 Additional context used
📓 Path-based instructions (15)
**/*.{ts,tsx,js,jsx,vue}

📄 CodeRabbit inference engine (CLAUDE.md)

Use English for logs and comments (Chinese text exists in legacy code, but new code should use English)

Files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
**/*.{ts,tsx}

📄 CodeRabbit inference engine (CLAUDE.md)

Enable and maintain strict TypeScript type checking for all files

**/*.{ts,tsx}: Always use try-catch to handle possible errors in TypeScript code
Provide meaningful error messages when catching errors
Log detailed error logs including error details, context, and stack traces
Distinguish and handle different error types (UserError, NetworkError, SystemError, BusinessError) with appropriate handlers in TypeScript
Use structured logging with logger.error(), logger.warn(), logger.info(), logger.debug() methods from logging utilities
Do not suppress errors (avoid empty catch blocks or silently ignoring errors)
Provide user-friendly error messages for user-facing errors in TypeScript components
Implement error retry mechanisms for transient failures in TypeScript
Avoid logging sensitive information (passwords, tokens, PII) in logs

Files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/main/presenter/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Organize core business logic into dedicated Presenter classes, with one presenter per functional domain

Files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/main/presenter/llmProviderPresenter/providers/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

src/main/presenter/llmProviderPresenter/providers/*.ts: Each LLM provider must implement the coreStream method following the standardized event interface for tool calling and response streaming
Convert MCP tools to provider-specific formats and normalize streaming responses to standard events in each provider implementation

src/main/presenter/llmProviderPresenter/providers/*.ts: In Provider implementations (src/main/presenter/llmProviderPresenter/providers/*.ts), the coreStream(messages, modelId, temperature, maxTokens) method should perform a single-pass streaming API request for each conversation round without containing multi-turn tool call loop logic
In Provider implementations, handle native tool support by converting MCP tools to Provider format using convertToProviderTools and including them in the API request; for Providers without native function call support, prepare messages using prepareFunctionCallPrompt before making the API call
In Provider implementations, parse Provider-specific data chunks from the streaming response and yield standardized LLMCoreStreamEvent objects conforming to the standard stream event interface, including text, reasoning, tool calls, usage, errors, stop reasons, and image data
In Provider implementations, include helper methods for Provider-specific operations such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt

Files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/main/**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Use EventBus from src/main/eventbus.ts for main-to-renderer communication, broadcasting events via mainWindow.webContents.send()

src/main/**/*.ts: Use EventBus pattern for inter-process communication within the main process to decouple modules
Use Electron's built-in APIs for file system and native dialogs instead of Node.js or custom implementations

src/main/**/*.ts: Electron main process code belongs in src/main/ with presenters in presenter/ (Window/Tab/Thread/Mcp/Config/LLMProvider) and eventbus.ts for app events
Use the Presenter pattern in the main process for UI coordination

Files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
**/*.ts

📄 CodeRabbit inference engine (CLAUDE.md)

Do not include AI co-authoring information (e.g., 'Co-Authored-By: Claude') in git commits

Files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
**/*.{js,ts,jsx,tsx,mjs,cjs}

📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)

Write logs and comments in English

Files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
{src/main/presenter/**/*.ts,src/renderer/**/*.ts}

📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)

Implement proper inter-process communication (IPC) patterns using Electron's ipcRenderer and ipcMain APIs

Files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/main/presenter/llmProviderPresenter/**/*.ts

📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)

Define the standardized LLMCoreStreamEvent interface with fields: type (text | reasoning | tool_call_start | tool_call_chunk | tool_call_end | error | usage | stop | image_data), content (for text), reasoning_content (for reasoning), tool_call_id, tool_call_name, tool_call_arguments_chunk (for streaming), tool_call_arguments_complete (for complete arguments), error_message, usage object with token counts, stop_reason (tool_use | max_tokens | stop_sequence | error | complete), and image_data object with Base64-encoded data and mimeType

Files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/**/*

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

New features should be developed in the src directory

Files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/main/**/*.{js,ts}

📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)

Main process code for Electron should be placed in src/main

Files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/**/*.{ts,tsx,vue,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

Use Prettier with single quotes, no semicolons, and 100 character width

Files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/**/*.{ts,tsx,js,jsx}

📄 CodeRabbit inference engine (AGENTS.md)

Use OxLint for linting JavaScript and TypeScript files

Files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/**/*.{ts,tsx}

📄 CodeRabbit inference engine (AGENTS.md)

src/**/*.{ts,tsx}: Use camelCase for variable and function names in TypeScript files
Use PascalCase for type and class names in TypeScript
Use SCREAMING_SNAKE_CASE for constant names

Files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/**/*.ts

📄 CodeRabbit inference engine (AGENTS.md)

Use EventBus for inter-process communication events

Files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
🧠 Learnings (11)
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Implement separation of concerns where `src/main/presenter/llmProviderPresenter/index.ts` manages the Agent loop and conversation history, while Provider files handle LLM API interactions, Provider-specific request/response formatting, tool definition conversion, and native vs non-native tool call mechanisms

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, include helper methods for Provider-specific operations such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt`

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, parse Provider-specific data chunks from the streaming response and `yield` standardized `LLMCoreStreamEvent` objects conforming to the standard stream event interface, including text, reasoning, tool calls, usage, errors, stop reasons, and image data

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, handle native tool support by converting MCP tools to Provider format using `convertToProviderTools` and including them in the API request; for Providers without native function call support, prepare messages using `prepareFunctionCallPrompt` before making the API call

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/**/*.ts : Define the standardized `LLMCoreStreamEvent` interface with fields: `type` (text | reasoning | tool_call_start | tool_call_chunk | tool_call_end | error | usage | stop | image_data), `content` (for text), `reasoning_content` (for reasoning), `tool_call_id`, `tool_call_name`, `tool_call_arguments_chunk` (for streaming), `tool_call_arguments_complete` (for complete arguments), `error_message`, `usage` object with token counts, `stop_reason` (tool_use | max_tokens | stop_sequence | error | complete), and `image_data` object with Base64-encoded data and mimeType

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:26:11.312Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.312Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Convert MCP tools to provider-specific formats and normalize streaming responses to standard events in each provider implementation

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:26:11.312Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.312Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each LLM provider must implement the `coreStream` method following the standardized event interface for tool calling and response streaming

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations (`src/main/presenter/llmProviderPresenter/providers/*.ts`), the `coreStream(messages, modelId, temperature, maxTokens)` method should perform a *single-pass* streaming API request for each conversation round without containing multi-turn tool call loop logic

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts` (`startStreamCompletion`), implement the Agent loop that manages the overall conversation flow, including multiple rounds of LLM calls and tool usage, maintaining `conversationMessages` history, calling `provider.coreStream()` on each iteration, and controlling the loop using `needContinueConversation` and `toolCallCount` (compared against `MAX_TOOL_CALLS`)

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, listen for standardized events yielded by `coreStream` and handle them accordingly: buffer text content (`currentContent`), handle `tool_call_start/chunk/end` events by collecting tool details and calling `presenter.mcpPresenter.callTool`, send frontend events via `eventBus` with tool call status, format tool results for the next LLM call, and set `needContinueConversation = true`

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:26:35.317Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/error-logging.mdc:0-0
Timestamp: 2025-11-25T05:26:35.317Z
Learning: Applies to **/*.{ts,tsx} : Avoid logging sensitive information (passwords, tokens, PII) in logs

Applied to files:

  • src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
🧬 Code graph analysis (1)
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts (6)
src/shared/types/presenters/llmprovider.presenter.d.ts (2)
  • LLM_PROVIDER (45-60)
  • VERTEX_PROVIDER (94-101)
src/shared/types/presenters/legacy.presenters.d.ts (4)
  • LLM_PROVIDER (641-661)
  • VERTEX_PROVIDER (761-768)
  • ModelConfig (133-152)
  • Tool (109-120)
src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts (1)
  • process (34-233)
src/main/presenter/configPresenter/modelCapabilities.ts (1)
  • modelCapabilities (178-178)
src/main/presenter/llmProviderPresenter/baseProvider.ts (1)
  • SUMMARY_TITLES_PROMPT (739-741)
src/shared/types/core/llm-events.ts (1)
  • createStreamEvent (100-154)
🔇 Additional comments (1)
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts (1)

1201-1304: Image streaming and embeddings implementations look consistent with the provider contract

  • handleImageGenerationStream:

    • Validates that there is a user message and at least one valid part.
    • Streams both text and image data via createStreamEvent.text and createStreamEvent.imageData.
    • On any failure, logs the error and emits error then stop('error'), which is consistent with the expected streaming interface.
  • getEmbeddings:

    • Uses embedContent with batched contents.
    • Normalizes the returned embeddings to plain number[][] and throws a clear error if none are returned.

Both methods fit well with the standardized LLMCoreStreamEvent/LLMProvider contracts and error‑handling expectations.

Based on learnings, this matches the expected provider responsibilities for image output and embeddings.

Comment on lines +405 to +408
// Check if detailed debug logging is enabled
private isDebugLoggingEnabled(): boolean {
return is.dev || this.configPresenter.getSetting<boolean>('traceDebugEnabled') === true
}

⚠️ Potential issue | 🟡 Minor

isDebugLoggingEnabled may misbehave if getSetting is async

Everywhere else (e.g. in getFormattedSafetySettings) configPresenter.getSetting is awaited, but here it’s used synchronously:

private isDebugLoggingEnabled(): boolean {
  return is.dev || this.configPresenter.getSetting<boolean>('traceDebugEnabled') === true
}

If getSetting returns a Promise, this will always be false in production, and debug logs will never be enabled via config. Consider making this helper async and awaiting it at call sites:

-  private isDebugLoggingEnabled(): boolean {
-    return is.dev || this.configPresenter.getSetting<boolean>('traceDebugEnabled') === true
-  }
+  private async isDebugLoggingEnabled(): Promise<boolean> {
+    if (is.dev) return true
+    try {
+      const value = await this.configPresenter.getSetting<boolean>('traceDebugEnabled')
+      return value === true
+    } catch {
+      return false
+    }
+  }

And then:

-    if (this.isDebugLoggingEnabled()) {
+    if (await this.isDebugLoggingEnabled()) {
       // ...
     }

As per coding guidelines, config‑driven debug logging should behave predictably in both dev and production.

🤖 Prompt for AI Agents
In src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts around
lines 405 to 408, isDebugLoggingEnabled currently calls
this.configPresenter.getSetting synchronously which will misbehave if getSetting
is async; change isDebugLoggingEnabled to be async and return a Promise<boolean>
(awaiting getSetting), then update every call site to await
isDebugLoggingEnabled (or handle the Promise) so dev mode short‑circuit still
returns true immediately while production reads the awaited config value; ensure
the method signature, callers, and any upstream control flow are updated to
handle async/Promise semantics.

Comment on lines +464 to +473
// Check whether the model supports thinkingBudget
private supportsThinkingBudget(modelId: string): boolean {
const normalized = modelId.replace(/^models\//i, '')
const range = modelCapabilities.getThinkingBudgetRange(this.provider.id, normalized)
return (
typeof range.default === 'number' ||
typeof range.min === 'number' ||
typeof range.max === 'number'
)
}
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Normalize modelId for thinkingBudget capability lookup the same way as for other capabilities

supportsThinkingBudget currently normalizes only the models/ prefix:

private supportsThinkingBudget(modelId: string): boolean {
  const normalized = modelId.replace(/^models\//i, '')
  const range = modelCapabilities.getThinkingBudgetRange(this.provider.id, normalized)
  // ...
}

But fetchProviderModels uses a broader normalization for capability checks:

const normalizeModelId = (mid: string): string =>
  String(mid || '')
    .replace(/^models\//i, '')
    .replace(/^publishers\/google\/models\//i, '')

If modelId is a full Vertex path like publishers/google/models/..., supportsThinkingBudget will pass an unnormalized ID to modelCapabilities.getThinkingBudgetRange, likely causing the lookup to miss and thinkingBudget never being applied even for models that support it.

Recommend aligning the normalization:

-  private supportsThinkingBudget(modelId: string): boolean {
-    const normalized = modelId.replace(/^models\//i, '')
+  private supportsThinkingBudget(modelId: string): boolean {
+    const normalized = String(modelId || '')
+      .replace(/^models\//i, '')
+      .replace(/^publishers\/google\/models\//i, '')
     const range = modelCapabilities.getThinkingBudgetRange(this.provider.id, normalized)
     return (
       typeof range.default === 'number' ||
       typeof range.min === 'number' ||
       typeof range.max === 'number'
     )
   }

Or extract a shared normalizeModelId helper and reuse it in both places.

Based on learnings, capability checks should consistently use the same normalized model identifiers.

Also applies to: 170-175

🤖 Prompt for AI Agents
In src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts around
lines 464 to 473 (also apply same change at lines ~170-175),
supportsThinkingBudget currently only strips a leading "models/" prefix before
querying modelCapabilities, which misses full Vertex paths like
"publishers/google/models/..."; normalize the modelId the same way as
fetchProviderModels (strip both /^models\//i and
/^publishers\/google\/models\//i) or extract and reuse a shared normalizeModelId
helper, then pass the normalized id into
modelCapabilities.getThinkingBudgetRange so capability lookups succeed for full
Vertex model paths.

Comment on lines +947 to +1196
async *coreStream(
messages: ChatMessage[],
modelId: string,
modelConfig: ModelConfig,
temperature: number,
maxTokens: number,
mcpTools: MCPToolDefinition[]
): AsyncGenerator<LLMCoreStreamEvent> {
if (!this.isInitialized) throw new Error('Provider not initialized')
if (!modelId) throw new Error('Model ID is required')

const requestStartTime = Date.now()
const requestId = `vertex-${Date.now()}-${Math.random().toString(36).substring(7)}`
// Log only non-sensitive metadata
console.log('[Vertex] Stream request:', {
requestId,
modelId,
hasReasoning: modelConfig.reasoning === true,
hasSearch: modelConfig.enableSearch === true,
hasTools: mcpTools.length > 0,
temperature,
maxTokens
})

// Check whether this is an image generation model
const isImageGenerationModel = modelConfig?.type === ModelType.ImageGeneration

// Use dedicated handling for image generation models
if (isImageGenerationModel) {
yield* this.handleImageGenerationStream(messages, modelId, temperature, maxTokens)
return
}

const safetySettings = await this.getFormattedSafetySettings()
if (this.isDebugLoggingEnabled()) {
console.log('[Vertex] Safety settings:', {
count: safetySettings?.length || 0,
categories: safetySettings?.map((s) => s.category) || []
})
}

// Add Gemini tool calls
let geminiTools: Tool[] = []

// Note: the built-in googleSearch tool is mutually exclusive with external tools
if (modelConfig.enableSearch) {
geminiTools.push({ googleSearch: {} as GoogleSearch })
} else {
if (mcpTools.length > 0)
geminiTools = await presenter.mcpPresenter.mcpToolsToGeminiTools(mcpTools, this.provider.id)
}

// Format messages into the Gemini format
const formattedParts = this.formatVertexMessages(messages)

// 1. Get the base config
const generateContentConfig: GenerateContentConfig = this.getGenerateContentConfig(
temperature,
maxTokens,
modelId,
modelConfig.reasoning,
modelConfig.thinkingBudget
)

// 2. Add the remaining properties on the local variable
if (formattedParts.systemInstruction) {
generateContentConfig.systemInstruction = formattedParts.systemInstruction
}

if (geminiTools.length > 0) {
generateContentConfig.tools = geminiTools
// Only configure functionCallingConfig when functionDeclarations exist
const hasFunctionDeclarations = geminiTools.some((t: any) => {
const fns = t?.functionDeclarations
return Array.isArray(fns) && fns.length > 0
})
if (hasFunctionDeclarations) {
generateContentConfig.toolConfig = {
functionCallingConfig: {
mode: FunctionCallingConfigMode.AUTO // Let the model decide whether to call tools
}
}
}
}

if (safetySettings) {
generateContentConfig.safetySettings = safetySettings
}

// 3. Build the complete requestParams in one pass
const requestParams: GenerateContentParameters = {
model: modelId,
contents: formattedParts.contents,
config: generateContentConfig
}

if (this.isDebugLoggingEnabled()) {
const sanitizedParams = this.sanitizeForLogging(requestParams)
console.log('[Vertex] Request params (sanitized):', sanitizedParams)
}

// Send the streaming request
const result = await this.genAI.models.generateContentStream({
...requestParams,
model: this.ensureVertexModelName(requestParams.model as string)
})

// State variables
let buffer = ''
let isInThinkTag = false
let toolUseDetected = false
let usageMetadata: GenerateContentResponseUsageMetadata | undefined
let isNewThoughtFormatDetected = modelConfig.reasoning === true

// Stream processing loop
for await (const chunk of result) {
// Track usage metadata
if (chunk.usageMetadata) {
usageMetadata = chunk.usageMetadata
}

if (this.isDebugLoggingEnabled()) {
const sanitizedChunk = this.sanitizeForLogging({
candidates: chunk.candidates,
usageMetadata: chunk.usageMetadata
})
console.log('[Vertex] Stream chunk (sanitized):', sanitizedChunk)
}
// Check whether the chunk contains a function call
if (chunk.candidates && chunk.candidates[0]?.content?.parts?.[0]?.functionCall) {
const functionCall = chunk.candidates[0].content.parts[0].functionCall
const functionName = functionCall.name
const functionArgs = functionCall.args || {}
const toolCallId = `gemini-tool-${Date.now()}`

toolUseDetected = true

// Emit the tool call start event
yield createStreamEvent.toolCallStart(toolCallId, functionName || '')

// Emit the tool call arguments
const argsString = JSON.stringify(functionArgs)
yield createStreamEvent.toolCallChunk(toolCallId, argsString)

// Emit the tool call end event
yield createStreamEvent.toolCallEnd(toolCallId, argsString)

// Mark the stop reason as tool use
break
}

// Process the content chunk
let content = ''
let thoughtContent = ''

// Handle text and image content
if (chunk.candidates && chunk.candidates[0]?.content?.parts) {
for (const part of chunk.candidates[0].content.parts) {
// Check for thought content (new format)
if ((part as any).thought === true && part.text) {
isNewThoughtFormatDetected = true
thoughtContent += part.text
} else if (part.text) {
content += part.text
} else if (part.inlineData && part.inlineData.data && part.inlineData.mimeType) {
// Handle image data
yield createStreamEvent.imageData({
data: part.inlineData.data,
mimeType: part.inlineData.mimeType
})
}
}
} else {
// Fallback handling
content = chunk.text || ''
}

// If thought content was detected, emit it directly
if (thoughtContent) {
yield createStreamEvent.reasoning(thoughtContent)
}

if (!content) continue

if (isNewThoughtFormatDetected) {
yield createStreamEvent.text(content)
} else {
buffer += content

if (buffer.includes('<think>') && !isInThinkTag) {
const thinkStart = buffer.indexOf('<think>')
if (thinkStart > 0) {
yield createStreamEvent.text(buffer.substring(0, thinkStart))
}
buffer = buffer.substring(thinkStart + 7)
isInThinkTag = true
}

if (isInThinkTag && buffer.includes('</think>')) {
const thinkEnd = buffer.indexOf('</think>')
const reasoningContent = buffer.substring(0, thinkEnd)
if (reasoningContent) {
yield createStreamEvent.reasoning(reasoningContent)
}
buffer = buffer.substring(thinkEnd + 8)
isInThinkTag = false
}

if (!isInThinkTag && buffer) {
yield createStreamEvent.text(buffer)
buffer = ''
}
}
}

if (usageMetadata) {
yield createStreamEvent.usage({
prompt_tokens: usageMetadata.promptTokenCount || 0,
completion_tokens: usageMetadata.candidatesTokenCount || 0,
total_tokens: usageMetadata.totalTokenCount || 0
})
}

// Flush any remaining buffered content
if (!isNewThoughtFormatDetected && buffer) {
if (isInThinkTag) {
yield createStreamEvent.reasoning(buffer)
} else {
yield createStreamEvent.text(buffer)
}
}

// Emit the stop event
const requestDuration = Date.now() - requestStartTime
const status = toolUseDetected ? 'tool_use' : 'complete'
console.log('[Vertex] Stream request completed:', {
requestId,
modelId,
status,
duration: `${requestDuration}ms`,
tokensUsed: usageMetadata
? {
prompt: usageMetadata.promptTokenCount || 0,
completion: usageMetadata.candidatesTokenCount || 0,
total: usageMetadata.totalTokenCount || 0
}
: undefined
})
yield createStreamEvent.stop(status)
}

⚠️ Potential issue | 🟠 Major

coreStream lacks top‑level error→event translation; consider mirroring image stream error handling

The streaming implementation is otherwise solid and matches the standardized event model (text, reasoning, image_data, usage, stop, and tool_call_* events) in a single‑pass request, which is exactly what the provider is supposed to do.

One gap is error handling:

  • coreStream can throw on generateContentStream (bad credentials, network issues, invalid config) or while iterating the async stream.
  • When that happens, the async generator will reject and the caller has to catch; no error or stop('error') events are yielded from this provider.
  • In contrast, handleImageGenerationStream wraps its logic in a try/catch and always emits error + stop('error') on failure.

To align behavior and make the Agent loop’s life easier, consider wrapping the main body of coreStream in a try/catch that:

  • Logs the error (using the shared logger and possibly sanitized request metadata if debug is enabled).
  • Yields createStreamEvent.error(<user‑friendly message>).
  • Yields createStreamEvent.stop('error') before returning.

You may also want to treat the !isInitialized / missing modelId cases similarly (emit error+stop instead of throwing), so every failure path surfaces as standardized stream events from the provider.

Based on learnings, providers should translate provider‑side failures into error/stop stream events rather than relying solely on thrown exceptions.
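
The suggested boundary could look roughly like this (a minimal sketch; the StreamEvent shape and the createStreamEvent stand-ins here are simplified assumptions, and only the error + stop('error') pattern mirrors what handleImageGenerationStream already does):

```typescript
type StreamEvent =
  | { type: 'text'; content: string }
  | { type: 'error'; message: string }
  | { type: 'stop'; reason: 'complete' | 'error' | 'tool_use' }

// Minimal stand-ins for the provider's createStreamEvent helpers (assumed shape).
const createStreamEvent = {
  text: (content: string): StreamEvent => ({ type: 'text', content }),
  error: (message: string): StreamEvent => ({ type: 'error', message }),
  stop: (reason: 'complete' | 'error' | 'tool_use'): StreamEvent => ({ type: 'stop', reason })
}

// Wrap the main streaming body so every failure (request setup or mid-stream)
// surfaces as standardized error + stop('error') events instead of a rejected
// async generator that the caller has to catch.
async function* withStreamErrorBoundary(
  body: () => AsyncGenerator<StreamEvent>
): AsyncGenerator<StreamEvent> {
  try {
    yield* body()
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err)
    yield createStreamEvent.error(message)
    yield createStreamEvent.stop('error')
  }
}
```

coreStream could delegate its current body through such a wrapper (or inline the same try/catch), so the !isInitialized and missing-modelId paths also emit events rather than throwing.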

@zerob13 zerob13 closed this Dec 1, 2025
@zhangmo8 zhangmo8 deleted the feat/add-Vertex-support branch January 19, 2026 08:03