feat: add vertex support #1135
Conversation
Caution: Review failed. The pull request is closed.

Walkthrough

Adds a Google Vertex AI provider and UI, integrates a NowledgeMem export/submission flow and presenter, introduces per-file serialization and concurrent task processing for knowledge operations, adds Danish (da-DK) locales, startup deeplink handling, and Mermaid sanitization, replaces the PTY dependency, and updates various i18n, types, and export paths.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant SettingsUI as Settings UI
    participant ProviderStore as Provider Store
    participant VertexProvider as VertexProvider
    participant GoogleVertex as Google Vertex API
    participant ModelDB as Model DB
    SettingsUI->>ProviderStore: updateVertexProviderConfig(providerId, updates)
    ProviderStore->>VertexProvider: trigger rebuild/init (if required)
    VertexProvider->>GoogleVertex: create client / listModels()
    GoogleVertex-->>VertexProvider: model list (or error)
    alt models fetched
        VertexProvider->>ModelDB: update cached models
        VertexProvider->>ProviderStore: notify model list changed
        ProviderStore->>SettingsUI: confirmed/verified
    else fetch fails
        VertexProvider->>ModelDB: fallback to DB models
        ProviderStore->>SettingsUI: show fallback/models unavailable
    end
```

```mermaid
sequenceDiagram
    participant ThreadUI as Thread UI
    participant ThreadPresenter as ThreadPresenter
    participant NowledgeMem as NowledgeMemPresenter
    participant NowledgeAPI as NowledgeMem Service
    ThreadUI->>ThreadPresenter: exportToNowledgeMem(conversationId)
    ThreadPresenter->>ThreadPresenter: buildNowledgeMemExportData(...)
    ThreadPresenter->>NowledgeMem: submitThread(threadData)
    NowledgeMem->>NowledgeAPI: POST /threads (with API key)
    NowledgeAPI-->>NowledgeMem: success / error
    NowledgeMem-->>ThreadPresenter: result (threadId / errors)
    ThreadPresenter-->>ThreadUI: return export+submission result
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)

📜 Recent review details: Configuration used: CodeRabbit UI · Review profile: CHILL · Plan: Pro
⛔ Files ignored due to path filters (1)
📒 Files selected for processing (82)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 4
🧹 Nitpick comments (5)
src/renderer/settings/components/VertexProviderSettingsDetail.vue (1)
175-215: Guard `updateConfig` against store errors to avoid unhandled rejections

`updateConfig` directly awaits `providerStore.updateVertexProviderConfig` without a try/catch, so any failure from the main process/store will surface as an unhandled promise rejection from the component. Wrapping this in error handling and surfacing a user-visible error (e.g. a toast) will make Vertex config failures easier to debug and avoid noisy console errors. A minimal pattern inside `updateConfig`:

```ts
const updateConfig = async (updates: Partial<VERTEX_PROVIDER>) => {
  try {
    await providerStore.updateVertexProviderConfig(props.provider.id, updates)
    emit('config-updated')
  } catch (error) {
    // Optionally replace with app logger / toast
    console.error('Failed to update Vertex provider config', error)
  }
}
```

src/renderer/src/i18n/fa-IR/settings.json (1)
277-288: Vertex keys are wired correctly; consider localizing values later

The added Vertex provider keys match the en-US structure and will work correctly in the UI, but their values are still in English. If full Persian localization is a goal for this locale, you may want to track translation of these labels/placeholders in a follow-up.
src/renderer/src/i18n/fr-FR/settings.json (1)
277-288: Vertex keys are structurally correct; values still in English

The Vertex AI provider keys in fr-FR mirror the en-US structure correctly, so the UI will resolve them. If you aim for full French localization, consider translating these label/placeholder strings in a later pass.
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts (2)
880-888: Wrap `coreStream` network/streaming logic in try/catch and emit error events

`coreStream` performs the Vertex streaming call and iterates the async stream without a surrounding try/catch. Any network/SDK error thrown by `generateContentStream` or while iterating will bubble up as a rejected async iterator rather than a standardized `error` + `stop('error')` event, which breaks the "providers normalize errors into stream events" contract and can make frontend behavior inconsistent.

Consider wrapping the body in a try/catch and translating failures into `LLMCoreStreamEvent` errors, similar to `handleImageGenerationStream`:

```ts
async *coreStream(...) {
  if (!this.isInitialized) throw new Error('Provider not initialized')
  if (!modelId) throw new Error('Model ID is required')
  try {
    // existing setup + generateContentStream + for-await loop
    // ...
    yield createStreamEvent.stop(toolUseDetected ? 'tool_use' : 'complete')
  } catch (error) {
    console.error('Vertex coreStream error:', error)
    yield createStreamEvent.error(
      error instanceof Error ? error.message : 'Vertex streaming failed'
    )
    yield createStreamEvent.stop('error')
  }
}
```

This keeps the main presenter's agent loop simpler and ensures all providers yield a consistent error/stop pattern on failures.
Also applies to: 961-1088
399-405: Normalize model IDs consistently for thinking budget & prefer English comments in new code
`supportsThinkingBudget` only strips a leading `models/` prefix:

```ts
const normalized = modelId.replace(/^models\//i, '')
const range = modelCapabilities.getThinkingBudgetRange(this.provider.id, normalized)
```

Elsewhere (`fetchProviderModels`) you normalize IDs by also removing `publishers/google/models/`. For Vertex, model IDs often look like full resource paths (`projects/.../locations/.../publishers/google/models/...`), so this lighter normalization may never match what `modelCapabilities` expects and will silently disable `thinkingBudget` even for supported models. Reusing the same normalization helper you use in `fetchProviderModels` here would make this more robust.

Separately, there are several new comments in Chinese (e.g. around `ensureVertexModelName` and `getGenerateContentConfig`). Per the repo guidelines, new TS code should use English comments; converting these to concise English will make the provider easier to maintain for non-Chinese-speaking contributors.

Also applies to: 129-137
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (22)
package.json (1 hunks), src/main/presenter/configPresenter/modelCapabilities.ts (1 hunks), src/main/presenter/configPresenter/providers.ts (1 hunks), src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts (3 hunks), src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts (1 hunks), src/renderer/settings/components/ModelProviderSettingsDetail.vue (2 hunks), src/renderer/settings/components/VertexProviderSettingsDetail.vue (1 hunks), src/renderer/src/components/icons/ModelIcon.vue (1 hunks), src/renderer/src/i18n/en-US/settings.json (1 hunks), src/renderer/src/i18n/fa-IR/settings.json (1 hunks), src/renderer/src/i18n/fr-FR/settings.json (1 hunks), src/renderer/src/i18n/ja-JP/settings.json (1 hunks), src/renderer/src/i18n/ko-KR/settings.json (1 hunks), src/renderer/src/i18n/pt-BR/settings.json (1 hunks), src/renderer/src/i18n/ru-RU/settings.json (1 hunks), src/renderer/src/i18n/zh-CN/settings.json (1 hunks), src/renderer/src/i18n/zh-HK/settings.json (1 hunks), src/renderer/src/i18n/zh-TW/settings.json (1 hunks), src/renderer/src/stores/providerStore.ts (3 hunks), src/shared/provider-operations.ts (1 hunks), src/shared/types/presenters/legacy.presenters.d.ts (1 hunks), src/shared/types/presenters/llmprovider.presenter.d.ts (1 hunks)
🧰 Additional context used
📓 Path-based instructions (37)
src/renderer/src/i18n/**/*.json
📄 CodeRabbit inference engine (.cursor/rules/i18n.mdc)
src/renderer/src/i18n/**/*.json: Translation key naming convention: use dot-separated hierarchical structure with lowercase letters and descriptive names (e.g., 'common.button.submit')
Maintain consistent key-value structure across all language translation files (zh-CN, en-US, ko-KR, ru-RU, zh-HK, fr-FR, fa-IR)
Files:
src/renderer/src/i18n/ko-KR/settings.json, src/renderer/src/i18n/pt-BR/settings.json, src/renderer/src/i18n/ja-JP/settings.json, src/renderer/src/i18n/ru-RU/settings.json, src/renderer/src/i18n/zh-CN/settings.json, src/renderer/src/i18n/zh-HK/settings.json, src/renderer/src/i18n/fa-IR/settings.json, src/renderer/src/i18n/zh-TW/settings.json, src/renderer/src/i18n/fr-FR/settings.json, src/renderer/src/i18n/en-US/settings.json
src/**/*
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
New features should be developed in the `src` directory

Files:
src/renderer/src/i18n/ko-KR/settings.json, src/renderer/src/i18n/pt-BR/settings.json, src/renderer/src/i18n/ja-JP/settings.json, src/main/presenter/configPresenter/modelCapabilities.ts, src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts, src/shared/types/presenters/llmprovider.presenter.d.ts, src/renderer/src/i18n/ru-RU/settings.json, src/renderer/src/components/icons/ModelIcon.vue, src/renderer/src/i18n/zh-CN/settings.json, src/renderer/src/stores/providerStore.ts, src/main/presenter/configPresenter/providers.ts, src/renderer/src/i18n/zh-HK/settings.json, src/renderer/src/i18n/fa-IR/settings.json, src/renderer/src/i18n/zh-TW/settings.json, src/renderer/src/i18n/fr-FR/settings.json, src/shared/types/presenters/legacy.presenters.d.ts, src/renderer/settings/components/VertexProviderSettingsDetail.vue, src/renderer/src/i18n/en-US/settings.json, src/shared/provider-operations.ts, src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts, src/renderer/settings/components/ModelProviderSettingsDetail.vue
src/renderer/**
📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)
Use lowercase with dashes for directories (e.g., components/auth-wizard)
Files:
src/renderer/src/i18n/ko-KR/settings.json, src/renderer/src/i18n/pt-BR/settings.json, src/renderer/src/i18n/ja-JP/settings.json, src/renderer/src/i18n/ru-RU/settings.json, src/renderer/src/components/icons/ModelIcon.vue, src/renderer/src/i18n/zh-CN/settings.json, src/renderer/src/stores/providerStore.ts, src/renderer/src/i18n/zh-HK/settings.json, src/renderer/src/i18n/fa-IR/settings.json, src/renderer/src/i18n/zh-TW/settings.json, src/renderer/src/i18n/fr-FR/settings.json, src/renderer/settings/components/VertexProviderSettingsDetail.vue, src/renderer/src/i18n/en-US/settings.json, src/renderer/settings/components/ModelProviderSettingsDetail.vue
**/*.{ts,tsx,js,jsx,vue}
📄 CodeRabbit inference engine (CLAUDE.md)
Use English for logs and comments (Chinese text exists in legacy code, but new code should use English)
Files:
src/main/presenter/configPresenter/modelCapabilities.ts, src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts, src/shared/types/presenters/llmprovider.presenter.d.ts, src/renderer/src/components/icons/ModelIcon.vue, src/renderer/src/stores/providerStore.ts, src/main/presenter/configPresenter/providers.ts, src/shared/types/presenters/legacy.presenters.d.ts, src/renderer/settings/components/VertexProviderSettingsDetail.vue, src/shared/provider-operations.ts, src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts, src/renderer/settings/components/ModelProviderSettingsDetail.vue
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Enable and maintain strict TypeScript type checking for all files
**/*.{ts,tsx}: Always use try-catch to handle possible errors in TypeScript code
Provide meaningful error messages when catching errors
Log detailed error logs including error details, context, and stack traces
Distinguish and handle different error types (UserError, NetworkError, SystemError, BusinessError) with appropriate handlers in TypeScript
Use structured logging with logger.error(), logger.warn(), logger.info(), logger.debug() methods from logging utilities
Do not suppress errors (avoid empty catch blocks or silently ignoring errors)
Provide user-friendly error messages for user-facing errors in TypeScript components
Implement error retry mechanisms for transient failures in TypeScript
Avoid logging sensitive information (passwords, tokens, PII) in logs
Files:
src/main/presenter/configPresenter/modelCapabilities.ts, src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts, src/shared/types/presenters/llmprovider.presenter.d.ts, src/renderer/src/stores/providerStore.ts, src/main/presenter/configPresenter/providers.ts, src/shared/types/presenters/legacy.presenters.d.ts, src/shared/provider-operations.ts, src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/main/presenter/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Organize core business logic into dedicated Presenter classes, with one presenter per functional domain
Files:
src/main/presenter/configPresenter/modelCapabilities.ts, src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts, src/main/presenter/configPresenter/providers.ts, src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/main/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Use EventBus from `src/main/eventbus.ts` for main-to-renderer communication, broadcasting events via `mainWindow.webContents.send()`
src/main/**/*.ts: Use EventBus pattern for inter-process communication within the main process to decouple modules
Use Electron's built-in APIs for file system and native dialogs instead of Node.js or custom implementations
`src/main/**/*.ts`: Electron main process code belongs in `src/main/` with presenters in `presenter/` (Window/Tab/Thread/Mcp/Config/LLMProvider) and `eventbus.ts` for app events
Use the Presenter pattern in the main process for UI coordination
Files:
src/main/presenter/configPresenter/modelCapabilities.ts, src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts, src/main/presenter/configPresenter/providers.ts, src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/main/presenter/configPresenter/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Store and retrieve custom prompts via `configPresenter.getCustomPrompts()` for config-based data source management
Files:
src/main/presenter/configPresenter/modelCapabilities.ts, src/main/presenter/configPresenter/providers.ts
**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Do not include AI co-authoring information (e.g., 'Co-Authored-By: Claude') in git commits
Files:
src/main/presenter/configPresenter/modelCapabilities.ts, src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts, src/shared/types/presenters/llmprovider.presenter.d.ts, src/renderer/src/stores/providerStore.ts, src/main/presenter/configPresenter/providers.ts, src/shared/types/presenters/legacy.presenters.d.ts, src/shared/provider-operations.ts, src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
**/*.{js,ts,jsx,tsx,mjs,cjs}
📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)
Write logs and comments in English
Files:
src/main/presenter/configPresenter/modelCapabilities.ts, src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts, src/shared/types/presenters/llmprovider.presenter.d.ts, src/renderer/src/stores/providerStore.ts, src/main/presenter/configPresenter/providers.ts, src/shared/types/presenters/legacy.presenters.d.ts, src/shared/provider-operations.ts, src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
{src/main/presenter/**/*.ts,src/renderer/**/*.ts}
📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)
Implement proper inter-process communication (IPC) patterns using Electron's ipcRenderer and ipcMain APIs
Files:
src/main/presenter/configPresenter/modelCapabilities.ts, src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts, src/renderer/src/stores/providerStore.ts, src/main/presenter/configPresenter/providers.ts, src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/main/**/*.{js,ts}
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
Main process code for Electron should be placed in `src/main`
Files:
src/main/presenter/configPresenter/modelCapabilities.ts, src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts, src/main/presenter/configPresenter/providers.ts, src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/**/*.{ts,tsx,vue,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
Use Prettier with single quotes, no semicolons, and 100 character width
Files:
src/main/presenter/configPresenter/modelCapabilities.ts, src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts, src/shared/types/presenters/llmprovider.presenter.d.ts, src/renderer/src/components/icons/ModelIcon.vue, src/renderer/src/stores/providerStore.ts, src/main/presenter/configPresenter/providers.ts, src/shared/types/presenters/legacy.presenters.d.ts, src/renderer/settings/components/VertexProviderSettingsDetail.vue, src/shared/provider-operations.ts, src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts, src/renderer/settings/components/ModelProviderSettingsDetail.vue
src/**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
Use OxLint for linting JavaScript and TypeScript files
Files:
src/main/presenter/configPresenter/modelCapabilities.ts, src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts, src/shared/types/presenters/llmprovider.presenter.d.ts, src/renderer/src/stores/providerStore.ts, src/main/presenter/configPresenter/providers.ts, src/shared/types/presenters/legacy.presenters.d.ts, src/shared/provider-operations.ts, src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
src/**/*.{ts,tsx}: Use camelCase for variable and function names in TypeScript files
Use PascalCase for type and class names in TypeScript
Use SCREAMING_SNAKE_CASE for constant names
Files:
src/main/presenter/configPresenter/modelCapabilities.ts, src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts, src/shared/types/presenters/llmprovider.presenter.d.ts, src/renderer/src/stores/providerStore.ts, src/main/presenter/configPresenter/providers.ts, src/shared/types/presenters/legacy.presenters.d.ts, src/shared/provider-operations.ts, src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/**/*.ts
📄 CodeRabbit inference engine (AGENTS.md)
Use EventBus for inter-process communication events
Files:
src/main/presenter/configPresenter/modelCapabilities.ts, src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts, src/shared/types/presenters/llmprovider.presenter.d.ts, src/renderer/src/stores/providerStore.ts, src/main/presenter/configPresenter/providers.ts, src/shared/types/presenters/legacy.presenters.d.ts, src/shared/provider-operations.ts, src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/main/presenter/llmProviderPresenter/**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)
Define the standardized `LLMCoreStreamEvent` interface with fields: `type` (text | reasoning | tool_call_start | tool_call_chunk | tool_call_end | error | usage | stop | image_data), `content` (for text), `reasoning_content` (for reasoning), `tool_call_id`, `tool_call_name`, `tool_call_arguments_chunk` (for streaming), `tool_call_arguments_complete` (for complete arguments), `error_message`, `usage` object with token counts, `stop_reason` (tool_use | max_tokens | stop_sequence | error | complete), and `image_data` object with Base64-encoded data and mimeType
Files:
src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts, src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
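As a rough TypeScript rendering (field names are taken from the rule text above; the exact definitions in the repo may differ), the interface reads naturally as a discriminated union:

```ts
// Sketch only: one variant per event type, discriminated on `type`.
type LLMCoreStreamEvent =
  | { type: 'text'; content: string }
  | { type: 'reasoning'; reasoning_content: string }
  | { type: 'tool_call_start'; tool_call_id: string; tool_call_name: string }
  | { type: 'tool_call_chunk'; tool_call_id: string; tool_call_arguments_chunk: string }
  | { type: 'tool_call_end'; tool_call_id: string; tool_call_arguments_complete?: string }
  | { type: 'error'; error_message: string }
  | { type: 'usage'; usage: { promptTokens: number; completionTokens: number; totalTokens: number } } // token field names assumed
  | { type: 'stop'; stop_reason: 'tool_use' | 'max_tokens' | 'stop_sequence' | 'error' | 'complete' }
  | { type: 'image_data'; image_data: { data: string; mimeType: string } } // data is Base64-encoded
```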
src/shared/**/*.d.ts
📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)
Define type definitions in shared/*.d.ts files for objects exposed by the main process to the renderer process
Files:
src/shared/types/presenters/llmprovider.presenter.d.ts, src/shared/types/presenters/legacy.presenters.d.ts
src/shared/**/*.{js,ts}
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
Shared type definitions and utilities between main and renderer processes should be placed in `src/shared`
Files:
src/shared/types/presenters/llmprovider.presenter.d.ts, src/shared/types/presenters/legacy.presenters.d.ts, src/shared/provider-operations.ts
src/shared/**/*.ts
📄 CodeRabbit inference engine (AGENTS.md)
Shared types and utilities should be placed in `src/shared/`
Files:
src/shared/types/presenters/llmprovider.presenter.d.ts, src/shared/types/presenters/legacy.presenters.d.ts, src/shared/provider-operations.ts
**/*.vue
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.vue: Use Vue 3 Composition API for all components instead of Options API
Use Tailwind CSS with scoped styles for component styling
Files:
src/renderer/src/components/icons/ModelIcon.vue, src/renderer/settings/components/VertexProviderSettingsDetail.vue, src/renderer/settings/components/ModelProviderSettingsDetail.vue
src/renderer/**/*.vue
📄 CodeRabbit inference engine (CLAUDE.md)
src/renderer/**/*.vue: All user-facing strings must use i18n keys via vue-i18n for internationalization
Ensure proper error handling and loading states in all UI components
Implement responsive design using Tailwind CSS utilities for all UI components
src/renderer/**/*.vue: Use composition API and declarative programming patterns; avoid options API
Structure files: exported component, composables, helpers, static content, types
Use PascalCase for component names (e.g., AuthWizard.vue)
Use Vue 3 with TypeScript, leveraging defineComponent and PropType
Use template syntax for declarative rendering
Use Shadcn Vue, Radix Vue, and Tailwind for components and styling
Implement responsive design with Tailwind CSS; use a mobile-first approach
Use Suspense for asynchronous components
Use <script setup> syntax for concise component definitions
Prefer 'lucide:' icon family as the primary choice for Iconify icons
Import Icon component from '@iconify/vue' and use with lucide icons following pattern '{collection}:{icon-name}'
Files:
src/renderer/src/components/icons/ModelIcon.vue, src/renderer/settings/components/VertexProviderSettingsDetail.vue, src/renderer/settings/components/ModelProviderSettingsDetail.vue
src/renderer/src/**/*.{vue,ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/i18n.mdc)
src/renderer/src/**/*.{vue,ts,tsx}: All user-facing strings must use i18n keys with vue-i18n framework in the renderer
Import and use useI18n() composable with the t() function to access translations in Vue components and TypeScript files
Use the dynamic locale.value property to switch languages at runtime
Avoid hardcoding user-facing text and ensure all user-visible text uses the i18n translation system
Files:
src/renderer/src/components/icons/ModelIcon.vue, src/renderer/src/stores/providerStore.ts
src/renderer/**/*.{vue,js,ts}
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
Renderer process code should be placed in `src/renderer` (Vue 3 application)
Files:
src/renderer/src/components/icons/ModelIcon.vue, src/renderer/src/stores/providerStore.ts, src/renderer/settings/components/VertexProviderSettingsDetail.vue, src/renderer/settings/components/ModelProviderSettingsDetail.vue
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}
📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)
src/renderer/src/**/*.{vue,ts,tsx,js,jsx}: Use the Composition API for better code organization and reusability in Vue.js applications
Implement proper state management with Pinia in Vue.js applications
Utilize Vue Router for navigation and route management in Vue.js applications
Leverage Vue's built-in reactivity system for efficient data handling
Files:
src/renderer/src/components/icons/ModelIcon.vue, src/renderer/src/stores/providerStore.ts
src/renderer/src/**/*.vue
📄 CodeRabbit inference engine (.cursor/rules/vue-best-practices.mdc)
Use scoped styles to prevent CSS conflicts between Vue components
Files:
src/renderer/src/components/icons/ModelIcon.vue
src/renderer/**/*.{ts,tsx,vue}
📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)
src/renderer/**/*.{ts,tsx,vue}: Write concise, technical TypeScript code with accurate examples
Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError)
Avoid enums; use const objects instead
Use arrow functions for methods and computed properties
Avoid unnecessary curly braces in conditionals; use concise syntax for simple statements
Vue 3 app code in `src/renderer/src` should be organized into `components/`, `stores/`, `views/`, `i18n/`, `lib/` directories with shell UI in `src/renderer/shell/`
Files:
src/renderer/src/components/icons/ModelIcon.vue, src/renderer/src/stores/providerStore.ts, src/renderer/settings/components/VertexProviderSettingsDetail.vue, src/renderer/settings/components/ModelProviderSettingsDetail.vue
src/renderer/**/*.{ts,vue}
📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)
src/renderer/**/*.{ts,vue}: Use useFetch and useAsyncData for data fetching
Leverage ref, reactive, and computed for reactive state management
Use provide/inject for dependency injection when appropriate
Use Iconify/Vue for icon implementation
Files:
src/renderer/src/components/icons/ModelIcon.vue, src/renderer/src/stores/providerStore.ts, src/renderer/settings/components/VertexProviderSettingsDetail.vue, src/renderer/settings/components/ModelProviderSettingsDetail.vue
src/renderer/src/**/*.{ts,tsx,vue}
📄 CodeRabbit inference engine (AGENTS.md)
src/renderer/src/**/*.{ts,tsx,vue}: Use TypeScript with Vue 3 Composition API for the renderer application
All user-facing strings must use vue-i18n keys in `src/renderer/src/i18n`
Files:
src/renderer/src/components/icons/ModelIcon.vue, src/renderer/src/stores/providerStore.ts
src/renderer/src/components/**/*.vue
📄 CodeRabbit inference engine (AGENTS.md)
src/renderer/src/components/**/*.vue: Use Tailwind for styles in Vue components
Vue component files must use PascalCase naming (e.g., `ChatInput.vue`)
Files:
src/renderer/src/components/icons/ModelIcon.vue
src/renderer/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Use the `usePresenter.ts` composable for renderer-to-main IPC communication to call presenter methods directly
Files:
src/renderer/src/stores/providerStore.ts
src/renderer/src/stores/**/*.{vue,ts,tsx,js,jsx}
📄 CodeRabbit inference engine (.cursor/rules/pinia-best-practices.mdc)
src/renderer/src/stores/**/*.{vue,ts,tsx,js,jsx}: Use modules to organize related state and actions in Pinia stores
Implement proper state persistence for maintaining data across sessions in Pinia stores
Use getters for computed state properties in Pinia stores
Utilize actions for side effects and asynchronous operations in Pinia stores
Keep Pinia stores focused on global state, not component-specific data
Files:
src/renderer/src/stores/providerStore.ts
src/renderer/**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)
Use TypeScript for all code; prefer types over interfaces
Files:
src/renderer/src/stores/providerStore.ts
src/renderer/**/stores/*.ts
📄 CodeRabbit inference engine (.cursor/rules/vue-shadcn.mdc)
Use Pinia for state management
Files:
src/renderer/src/stores/providerStore.ts
src/renderer/src/stores/**/*.ts
📄 CodeRabbit inference engine (AGENTS.md)
Use Pinia for state management
Files:
src/renderer/src/stores/providerStore.ts
package.json
📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)
package.json: Node.js >= 22 required
pnpm >= 9 required
Files:
package.json
src/main/presenter/llmProviderPresenter/providers/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
`src/main/presenter/llmProviderPresenter/providers/*.ts`: Each LLM provider must implement the `coreStream` method following the standardized event interface for tool calling and response streaming
Convert MCP tools to provider-specific formats and normalize streaming responses to standard events in each provider implementation
`src/main/presenter/llmProviderPresenter/providers/*.ts`: In Provider implementations (`src/main/presenter/llmProviderPresenter/providers/*.ts`), the `coreStream(messages, modelId, temperature, maxTokens)` method should perform a single-pass streaming API request for each conversation round without containing multi-turn tool call loop logic
In Provider implementations, handle native tool support by converting MCP tools to Provider format using `convertToProviderTools` and including them in the API request; for Providers without native function call support, prepare messages using `prepareFunctionCallPrompt` before making the API call
In Provider implementations, parse Provider-specific data chunks from the streaming response and `yield` standardized `LLMCoreStreamEvent` objects conforming to the standard stream event interface, including text, reasoning, tool calls, usage, errors, stop reasons, and image data
In Provider implementations, include helper methods for Provider-specific operations such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt`
Files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
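To make the division of labor concrete, here is a deliberately simplified, self-contained sketch of the single-pass contract (not the repo's actual code; the real signature takes messages, model id, temperature, and max tokens, and also handles tools):

```ts
// Sketch: one streaming request per call; the presenter's agent loop decides
// whether another round (e.g. after tool execution) is needed.
type StreamEvent =
  | { type: 'text'; content: string }
  | { type: 'error'; error_message: string }
  | { type: 'stop'; stop_reason: 'complete' | 'error' }

async function* coreStreamSketch(
  prompt: string,
  fetchChunks: (prompt: string) => AsyncIterable<string>
): AsyncGenerator<StreamEvent> {
  try {
    for await (const text of fetchChunks(prompt)) {
      yield { type: 'text', content: text } // normalize provider chunks to standard events
    }
    yield { type: 'stop', stop_reason: 'complete' }
  } catch (error) {
    yield { type: 'error', error_message: error instanceof Error ? error.message : String(error) }
    yield { type: 'stop', stop_reason: 'error' }
  }
}
```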
🧠 Learnings (39)
📚 Learning: 2025-11-25T05:26:43.498Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-11-25T05:26:43.498Z
Learning: Applies to src/renderer/src/i18n/**/*.json : Maintain consistent key-value structure across all language translation files (zh-CN, en-US, ko-KR, ru-RU, zh-HK, fr-FR, fa-IR)
Applied to files:
src/renderer/src/i18n/ko-KR/settings.json, src/renderer/src/i18n/pt-BR/settings.json, src/renderer/src/i18n/ja-JP/settings.json, src/renderer/src/i18n/ru-RU/settings.json, src/renderer/src/i18n/zh-CN/settings.json, src/renderer/src/i18n/zh-HK/settings.json, src/renderer/src/i18n/fa-IR/settings.json, src/renderer/src/i18n/zh-TW/settings.json, src/renderer/src/i18n/fr-FR/settings.json, src/renderer/src/i18n/en-US/settings.json
📚 Learning: 2025-11-25T05:26:43.498Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-11-25T05:26:43.498Z
Learning: Applies to src/renderer/src/i18n/**/*.json : Translation key naming convention: use dot-separated hierarchical structure with lowercase letters and descriptive names (e.g., 'common.button.submit')
Applied to files:
src/renderer/src/i18n/pt-BR/settings.json, src/renderer/src/i18n/ru-RU/settings.json, src/renderer/src/i18n/zh-TW/settings.json
📚 Learning: 2025-11-25T05:26:43.498Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-11-25T05:26:43.498Z
Learning: Applies to src/renderer/src/**/*.{vue,ts,tsx} : All user-facing strings must use i18n keys with vue-i18n framework in the renderer
Applied to files:
src/renderer/src/i18n/pt-BR/settings.json, src/renderer/src/i18n/zh-HK/settings.json, src/renderer/src/i18n/zh-TW/settings.json
📚 Learning: 2025-11-25T05:28:20.500Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-25T05:28:20.500Z
Learning: Applies to src/renderer/src/**/*.{ts,tsx,vue} : All user-facing strings must use vue-i18n keys in `src/renderer/src/i18n`
Applied to files:
src/renderer/src/i18n/pt-BR/settings.json, src/renderer/src/i18n/zh-TW/settings.json
📚 Learning: 2025-11-25T05:26:43.498Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/i18n.mdc:0-0
Timestamp: 2025-11-25T05:26:43.498Z
Learning: Applies to src/renderer/src/**/*.{vue,ts,tsx} : Avoid hardcoding user-facing text and ensure all user-visible text uses the i18n translation system
Applied to files:
src/renderer/src/i18n/pt-BR/settings.json, src/renderer/src/i18n/zh-TW/settings.json
📚 Learning: 2025-11-25T05:27:12.201Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.201Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, include helper methods for Provider-specific operations such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt`
Applied to files:
src/main/presenter/configPresenter/modelCapabilities.ts, src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts, src/shared/types/presenters/llmprovider.presenter.d.ts, src/renderer/src/stores/providerStore.ts, src/main/presenter/configPresenter/providers.ts, src/shared/types/presenters/legacy.presenters.d.ts, src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts, src/renderer/settings/components/ModelProviderSettingsDetail.vue
📚 Learning: 2025-11-25T05:26:11.297Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.297Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Convert MCP tools to provider-specific formats and normalize streaming responses to standard events in each provider implementation
Applied to files:
src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts, src/shared/types/presenters/llmprovider.presenter.d.ts, src/renderer/src/stores/providerStore.ts, src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:12.201Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.201Z
Learning: Implement separation of concerns where `src/main/presenter/llmProviderPresenter/index.ts` manages the Agent loop and conversation history, while Provider files handle LLM API interactions, Provider-specific request/response formatting, tool definition conversion, and native vs non-native tool call mechanisms
Applied to files:
src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts, src/shared/types/presenters/llmprovider.presenter.d.ts, src/renderer/src/stores/providerStore.ts, src/main/presenter/configPresenter/providers.ts, src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:12.201Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.201Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, handle native tool support by converting MCP tools to Provider format using `convertToProviderTools` and including them in the API request; for Providers without native function call support, prepare messages using `prepareFunctionCallPrompt` before making the API call
Applied to files:
src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts, src/shared/types/presenters/llmprovider.presenter.d.ts, src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:12.201Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.201Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, parse Provider-specific data chunks from the streaming response and `yield` standardized `LLMCoreStreamEvent` objects conforming to the standard stream event interface, including text, reasoning, tool calls, usage, errors, stop reasons, and image data
Applied to files:
src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts, src/shared/types/presenters/llmprovider.presenter.d.ts, src/renderer/src/stores/providerStore.ts, src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:26:11.297Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.297Z
Learning: Applies to src/main/presenter/mcpPresenter/**/*.ts : Register new MCP tools in `mcpPresenter/index.ts` after implementing them in `inMemoryServers/`
Applied to files:
src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
📚 Learning: 2025-11-25T05:26:11.297Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.297Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each LLM provider must implement the `coreStream` method following the standardized event interface for tool calling and response streaming
Applied to files:
src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts, src/shared/types/presenters/llmprovider.presenter.d.ts, src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:39.191Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-11-25T05:27:39.191Z
Learning: Applies to **/*Provider**/index.ts : Do not introduce renderer dependencies inside Provider implementations
Applied to files:
src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts, src/renderer/settings/components/ModelProviderSettingsDetail.vue
📚 Learning: 2025-11-25T05:28:04.440Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-11-25T05:28:04.440Z
Learning: Applies to src/renderer/**/*.{ts,vue} : Use provide/inject for dependency injection when appropriate
Applied to files:
src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts
📚 Learning: 2025-11-25T05:27:12.201Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.201Z
Learning: Applies to src/main/presenter/llmProviderPresenter/**/*.ts : Define the standardized `LLMCoreStreamEvent` interface with fields: `type` (text | reasoning | tool_call_start | tool_call_chunk | tool_call_end | error | usage | stop | image_data), `content` (for text), `reasoning_content` (for reasoning), `tool_call_id`, `tool_call_name`, `tool_call_arguments_chunk` (for streaming), `tool_call_arguments_complete` (for complete arguments), `error_message`, `usage` object with token counts, `stop_reason` (tool_use | max_tokens | stop_sequence | error | complete), and `image_data` object with Base64-encoded data and mimeType
Applied to files:
src/shared/types/presenters/llmprovider.presenter.d.ts, src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:39.191Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-11-25T05:27:39.191Z
Learning: Applies to **/*Provider**/index.ts : Output only discriminated union `LLMCoreStreamEvent` in Provider implementations, do not use single interface with optional fields
Applied to files:
src/shared/types/presenters/llmprovider.presenter.d.ts
📚 Learning: 2025-11-25T05:28:04.440Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-11-25T05:28:04.440Z
Learning: Applies to src/renderer/**/*.{ts,vue} : Use Iconify/Vue for icon implementation
Applied to files:
src/renderer/src/components/icons/ModelIcon.vue
📚 Learning: 2025-11-25T05:28:04.440Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-11-25T05:28:04.440Z
Learning: Applies to src/renderer/**/*.vue : Prefer 'lucide:' icon family as the primary choice for Iconify icons
Applied to files:
src/renderer/src/components/icons/ModelIcon.vue
📚 Learning: 2025-11-25T05:28:04.440Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-11-25T05:28:04.440Z
Learning: Applies to src/renderer/**/*.vue : Import Icon component from '@iconify/vue' and use with lucide icons following pattern '{collection}:{icon-name}'
Applied to files:
src/renderer/src/components/icons/ModelIcon.vue
📚 Learning: 2025-11-25T05:26:11.297Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.297Z
Learning: Applies to src/renderer/**/*.ts : Use the `usePresenter.ts` composable for renderer-to-main IPC communication to call presenter methods directly
Applied to files:
src/renderer/src/stores/providerStore.ts
📚 Learning: 2025-11-25T05:27:12.201Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.201Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, handle `reasoning`, `text`, `image_data`, and `usage` events by processing and forwarding them through `STREAM_EVENTS.RESPONSE` events to the frontend
Applied to files:
src/renderer/src/stores/providerStore.ts, src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:26:24.860Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/electron-best-practices.mdc:0-0
Timestamp: 2025-11-25T05:26:24.860Z
Learning: Applies to {src/main/presenter/**/*.ts,src/renderer/**/*.ts} : Implement proper inter-process communication (IPC) patterns using Electron's ipcRenderer and ipcMain APIs
Applied to files:
src/renderer/src/stores/providerStore.ts
📚 Learning: 2025-11-25T05:27:39.191Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-11-25T05:27:39.191Z
Learning: Applies to **/*Provider**/index.ts : Every event construction in Provider implementations must use factory functions
Applied to files:
src/renderer/src/stores/providerStore.ts
📚 Learning: 2025-11-25T05:27:12.201Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.201Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, listen for standardized events yielded by `coreStream` and handle them accordingly: buffer text content (`currentContent`), handle `tool_call_start/chunk/end` events by collecting tool details and calling `presenter.mcpPresenter.callTool`, send frontend events via `eventBus` with tool call status, format tool results for the next LLM call, and set `needContinueConversation = true`
Applied to files:
src/renderer/src/stores/providerStore.ts, src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:39.191Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-11-25T05:27:39.191Z
Learning: Applies to **/*Provider**/index.ts : Use factory methods `createStreamEvent.*` to construct events in Provider implementations, avoid direct field pollution
Applied to files:
src/renderer/src/stores/providerStore.ts
📚 Learning: 2025-11-25T05:26:11.297Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.297Z
Learning: Applies to **/*.{ts,tsx,js,jsx,vue} : Use English for logs and comments (Chinese text exists in legacy code, but new code should use English)
Applied to files:
src/renderer/src/i18n/zh-TW/settings.json
📚 Learning: 2025-11-25T05:28:04.439Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-11-25T05:28:04.439Z
Learning: Applies to src/renderer/**/*.vue : Use Vue 3 with TypeScript, leveraging defineComponent and PropType
Applied to files:
src/renderer/settings/components/VertexProviderSettingsDetail.vue
📚 Learning: 2025-11-25T05:27:45.535Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-best-practices.mdc:0-0
Timestamp: 2025-11-25T05:27:45.535Z
Learning: Applies to src/renderer/src/**/*.{vue,ts,tsx,js,jsx} : Use the Composition API for better code organization and reusability in Vue.js applications
Applied to files:
src/renderer/settings/components/VertexProviderSettingsDetail.vue, src/renderer/settings/components/ModelProviderSettingsDetail.vue
📚 Learning: 2025-11-25T05:26:15.918Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/development-setup.mdc:0-0
Timestamp: 2025-11-25T05:26:15.918Z
Learning: Applies to package.json : Node.js >= 22 required
Applied to files:
package.json
📚 Learning: 2025-11-25T05:28:20.500Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-25T05:28:20.500Z
Learning: Require Node ≥ 20.19 and pnpm ≥ 10.11 (pnpm only, not npm) as the project toolchain
Applied to files:
package.json
📚 Learning: 2025-11-25T05:27:12.201Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.201Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations (`src/main/presenter/llmProviderPresenter/providers/*.ts`), the `coreStream(messages, modelId, temperature, maxTokens)` method should perform a *single-pass* streaming API request for each conversation round without containing multi-turn tool call loop logic
Applied to files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:12.201Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.201Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts` (`startStreamCompletion`), implement the Agent loop that manages the overall conversation flow, including multiple rounds of LLM calls and tool usage, maintaining `conversationMessages` history, calling `provider.coreStream()` on each iteration, and controlling the loop using `needContinueConversation` and `toolCallCount` (compared against `MAX_TOOL_CALLS`)
Applied to files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:28:04.440Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-11-25T05:28:04.440Z
Learning: Applies to src/renderer/**/*.{ts,vue} : Leverage ref, reactive, and computed for reactive state management
Applied to files:
src/renderer/settings/components/ModelProviderSettingsDetail.vue
📚 Learning: 2025-11-25T05:28:04.439Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-11-25T05:28:04.439Z
Learning: Applies to src/renderer/**/composables/*.ts : Use VueUse for common composables and utility functions
Applied to files:
src/renderer/settings/components/ModelProviderSettingsDetail.vue
📚 Learning: 2025-11-25T05:27:45.535Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-best-practices.mdc:0-0
Timestamp: 2025-11-25T05:27:45.535Z
Learning: Applies to src/renderer/src/**/*.{vue,ts,tsx,js,jsx} : Leverage Vue's built-in reactivity system for efficient data handling
Applied to files:
src/renderer/settings/components/ModelProviderSettingsDetail.vue
📚 Learning: 2025-11-25T05:27:20.058Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/pinia-best-practices.mdc:0-0
Timestamp: 2025-11-25T05:27:20.058Z
Learning: Applies to src/renderer/src/stores/**/*.{vue,ts,tsx,js,jsx} : Use getters for computed state properties in Pinia stores
Applied to files:
src/renderer/settings/components/ModelProviderSettingsDetail.vue
📚 Learning: 2025-11-25T05:28:04.439Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-11-25T05:28:04.439Z
Learning: Applies to src/renderer/**/*.vue : Structure files: exported component, composables, helpers, static content, types
Applied to files:
src/renderer/settings/components/ModelProviderSettingsDetail.vue
📚 Learning: 2025-11-25T05:28:04.439Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/vue-shadcn.mdc:0-0
Timestamp: 2025-11-25T05:28:04.439Z
Learning: Applies to src/renderer/**/*.vue : Use composition API and declarative programming patterns; avoid options API
Applied to files:
src/renderer/settings/components/ModelProviderSettingsDetail.vue
📚 Learning: 2025-11-25T05:28:20.500Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: AGENTS.md:0-0
Timestamp: 2025-11-25T05:28:20.500Z
Learning: Applies to src/renderer/src/**/*.{ts,tsx,vue} : Use TypeScript with Vue 3 Composition API for the renderer application
Applied to files:
src/renderer/settings/components/ModelProviderSettingsDetail.vue
🧬 Code graph analysis (3)
src/shared/types/presenters/llmprovider.presenter.d.ts (2)
src/shared/types/presenters/legacy.presenters.d.ts (2)
VERTEX_PROVIDER (761-768), LLM_PROVIDER (641-661)
src/shared/types/presenters/index.d.ts (1)
LLM_PROVIDER (9-9)
src/renderer/src/stores/providerStore.ts (2)
src/shared/types/presenters/legacy.presenters.d.ts (1)
VERTEX_PROVIDER (761-768)
src/shared/types/presenters/llmprovider.presenter.d.ts (1)
VERTEX_PROVIDER (94-101)
src/shared/types/presenters/legacy.presenters.d.ts (2)
src/shared/types/presenters/llmprovider.presenter.d.ts (2)
VERTEX_PROVIDER (94-101), LLM_PROVIDER (45-60)
src/shared/types/presenters/index.d.ts (1)
LLM_PROVIDER (9-9)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build-check (x64)
🔇 Additional comments (16)
package.json (1)
69-69: `@google/genai` bump looks fine; verify against upstream changelog/tests

Upgrading `@google/genai` to `^1.30.0` is consistent with adding Vertex support; just make sure your Vertex flows and any existing Gemini usage are regression-tested against this version.

src/renderer/src/i18n/zh-TW/settings.json (1)
277-288: Vertex provider i18n keys and wording look consistent

Key names and Traditional Chinese labels/placeholders align with other locales and existing provider fields, so this block is good to go.
src/main/presenter/configPresenter/modelCapabilities.ts (1)
20-24: Provider ID aliases for gemini/vertex are wired correctly

Lower-case aliases `gemini → google` and `vertex → google-vertex` match the normalization logic and will let model lookups work even when callers use the short ids.

src/renderer/src/components/icons/ModelIcon.vue (1)
108-110: Vertex icon alias is correctly added

Mapping `vertex` to the same asset as `vertexai` fits the substring-match lookup and will cover provider/model ids that only contain `vertex`.

src/renderer/src/i18n/zh-CN/settings.json (1)
277-288: Vertex provider Chinese translations are consistent and well-formed

Key set and Simplified Chinese text match other locales and existing provider wording, so this section is in good shape.
src/shared/types/presenters/legacy.presenters.d.ts (1)
761-768: VERTEX_PROVIDER type mirrors core definition and looks correct

Extending `LLM_PROVIDER` with `projectId`, `location`, `accountPrivateKey`, `accountClientEmail`, `apiVersion` ('v1' | 'v1beta1'), and `endpointMode` ('standard' | 'express') matches the main `VERTEX_PROVIDER` typing and should keep legacy presenter consumers type-safe.

src/renderer/src/i18n/ja-JP/settings.json (1)
277-288: Japanese Vertex provider translations are consistent and clear

The added `vertex*` keys match other locales structurally, and the Japanese text reads naturally for project/location/service-account/endpoint configuration.

src/main/presenter/configPresenter/providers.ts (1)
235-249: Vertex default provider entry is consistent with existing providers

The new `vertex` entry matches the existing provider schema (id/apiType, empty `baseUrl`, and a sensible `defaultBaseUrl` plus docs/models links). No issues from a config/typing perspective.

src/renderer/src/stores/providerStore.ts (1)
6-6: Vertex-specific config wrapper is correctly wired through the store

The `updateVertexProviderConfig` helper cleanly narrows the update type to `Partial<VERTEX_PROVIDER>` while delegating to the generic `updateProviderConfig`, consistent with the existing AWS Bedrock helper and with `VERTEX_PROVIDER` extending `LLM_PROVIDER`. Looks good.

Also applies to: 296-301, 410-410
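A minimal sketch of what such a wrapper looks like (store internals are assumed; the actual action sits alongside the Bedrock equivalent in the store):

```ts
// Sketch: narrow the accepted fields to Vertex config, then delegate to the
// generic update path.
const updateVertexProviderConfig = async (
  providerId: string,
  updates: Partial<VERTEX_PROVIDER>
): Promise<void> => {
  await updateProviderConfig(providerId, updates)
}
```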
src/renderer/src/i18n/ko-KR/settings.json (1)
277-288: Korean Vertex provider i18n keys look correct and consistent

The added `vertex*` keys follow the existing provider namespace and appear to be accurate Korean translations matching the en-US semantics (project ID, region, service account email/private key, API version, endpoint mode).

src/shared/types/presenters/llmprovider.presenter.d.ts (1)
94-101: VERTEX_PROVIDER type extension is well-scoped

Defining `VERTEX_PROVIDER = LLM_PROVIDER & { projectId/location/accountPrivateKey/accountClientEmail/apiVersion/endpointMode }` matches how Vertex is configured elsewhere (store, UI, provider) and keeps Vertex extras optional, which is appropriate for partial updates and progressive configuration.

src/main/presenter/llmProviderPresenter/managers/providerInstanceManager.ts (1)
26-26: VertexProvider is correctly registered in provider maps

Adding `VertexProvider` to both `PROVIDER_ID_MAP` ('vertex') and `PROVIDER_TYPE_MAP` ('vertex') aligns with the new default provider entry and guarantees proper resolution by both id and apiType.

Also applies to: 81-81, 111-111
src/shared/provider-operations.ts (1)
49-56: Rebuild trigger fields correctly cover new Vertex configuration

Extending `REBUILD_REQUIRED_FIELDS` to include `azureApiVersion` and the Vertex fields (`projectId`, `location`, `accountPrivateKey`, `accountClientEmail`, `apiVersion`, `endpointMode`) is appropriate so provider instances are rebuilt when core connection/auth settings change.
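For context, a sketch of how such a field list typically gates rebuilds (only the quoted field names come from the diff; the guard helper is hypothetical):

```ts
const REBUILD_REQUIRED_FIELDS = [
  'azureApiVersion',
  'projectId',
  'location',
  'accountPrivateKey',
  'accountClientEmail',
  'apiVersion',
  'endpointMode'
] as const

// Hypothetical guard: rebuild the provider instance when any core
// connection/auth field is present in the update payload.
const needsRebuild = (updates: Record<string, unknown>): boolean =>
  REBUILD_REQUIRED_FIELDS.some((field) => field in updates)
```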
src/renderer/settings/components/ModelProviderSettingsDetail.vue (1)

19-27: Vertex provider settings panel is cleanly integrated

Conditionally rendering `VertexProviderSettingsDetail` when `provider.apiType === 'vertex'` and passing `provider as VERTEX_PROVIDER` gives the child proper typing without affecting other providers, and reusing `handleConfigChanged`/`validateApiKey` hooks it into the existing config/validation flow. This wiring looks solid.

Also applies to: 93-93, 98-98
src/renderer/src/i18n/zh-HK/settings.json (1)
277-288: Vertex provider i18n keys look consistent and correct

The new Vertex-related provider keys follow the existing naming pattern and align with the en-US structure; no issues from an i18n or structure perspective.
src/renderer/src/i18n/en-US/settings.json (1)
277-288: Vertex provider English labels/placeholders are clear and aligned with the UI

The new Vertex-related keys under `provider` are well named, descriptive, and match the fields rendered in `VertexProviderSettingsDetail.vue`. Structure is consistent with other provider keys.
* feat: import deepchat to nowledge-mem
* Merge commit from fork
* feat: i18n fix and nowledge mem style
* feat: add styles

Co-authored-by: duskzhen <zerob13@gmail.com>

* fix: format
* fix: fix first chat error loading timing
* fix: format
* fix: format

* fix: mcp one click
* fix: deeplink mcp install #1136

* feat: add da-DK
* feat: add da-dk setting
* feat: add da-DK translate
* fix: i18n placeholder

#1111)
* feat: implement paralleled vectorizing and thread-safe processing in KnowledgeStorePresenter
* fix: remove unnecessary blank line in processQueue method
* fix: correct chunk message ID usage in enqueueFileTask method
Actionable comments posted: 3
♻️ Duplicate comments (1)
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts (1)
36-54: Safety settings mapping logic now looks correct; confirm cross-provider config reuse

The refactored safety mapping correctly:

- Filters out `'BLOCK_NONE'` and `'HARM_BLOCK_THRESHOLD_UNSPECIFIED'` at the string level.
- Maps only the remaining values via `valueToHarmBlockThresholdMap`.
- Uses explicit `undefined`/`null` checks, so numeric enum members (including `0`) are handled safely.

This fixes the earlier enum-vs-string comparison bug.

One open design question: `getFormattedSafetySettings` still reads from `geminiSafety_${key}` even in the Vertex provider. If Vertex is meant to share the exact same safety thresholds as Gemini, this is fine; otherwise you may want Vertex-specific keys (e.g. `vertexSafety_*`) or a provider-agnostic naming scheme so providers don't unintentionally share or override each other's safety settings.

Based on learnings, provider-side safety configuration should be explicit and normalized per provider.
Also applies to: 365-403
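A condensed sketch of that filter-then-map shape (the threshold values and setting shapes below are assumed for illustration; the provider's real map differs):

```ts
// Assumed map; the provider's real valueToHarmBlockThresholdMap differs.
const valueToHarmBlockThresholdMap: Record<string, number> = {
  BLOCK_LOW_AND_ABOVE: 0,
  BLOCK_MEDIUM_AND_ABOVE: 1,
  BLOCK_ONLY_HIGH: 2
}

const formatSafetySettings = (raw: Record<string, string>) =>
  Object.entries(raw)
    .filter(([, v]) => v !== 'BLOCK_NONE' && v !== 'HARM_BLOCK_THRESHOLD_UNSPECIFIED')
    .flatMap(([category, v]) => {
      const threshold = valueToHarmBlockThresholdMap[v]
      // Explicit undefined/null check so a numeric 0 threshold still passes.
      return threshold !== undefined && threshold !== null ? [{ category, threshold }] : []
    })
```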
🧹 Nitpick comments (4)
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts (4)
130-138: New comments are in Chinese; please switch to English per repo conventions

Several new comments (e.g. around `ensureVertexModelName`, `autoEnableModelsIfNeeded`, `formatVertexMessages`, `processVertexResponse`, `completions`) are written in Chinese. The repo guidelines state that new TS/JS code should use English for comments and logs (Chinese is only tolerated in legacy code).

It'd be good to rephrase these into concise English so future contributors can follow the intent without language barriers.

As per coding guidelines, new comments/logs in `*.ts` files should be in English.

Also applies to: 334-363, 512-525, 661-662, 709-711
512-608: Tool message mapping is a bit brittle; clarify or align function response handling

The tool-related parts of formatVertexMessages are thorough but somewhat fragile:

- For assistant messages with tool_calls, you correctly emit functionCall and functionResponse parts, which matches how the provider expects function invocations and responses.
- For messages with role === 'tool' and array content, you:
  - Treat part.type === 'function_call' as a functionCall part (good).
  - Treat part.type === 'function_response' as plain { text: part.function_response || '' }, which loses the structured response (name, JSON payload) and won't be recognized as a function response by the model.
- There are several // @ts-ignore directives, which makes the code harder to maintain under strict TS.

If ChatMessage is expected to carry structured function responses for tool messages, consider aligning this branch with the assistant/tool_calls branch, e.g. by emitting a proper functionResponse part instead of collapsing to text (a sketch follows below). If this tool-role branch is no longer used by the current agent flow, you might simplify or remove it to avoid confusion.

Defining a small local type for the tool parts would also let you remove the @ts-ignore usage while keeping the code type-safe.

Based on learnings, provider-side message formatting should preserve structured tool call/response data for consistent tool behavior.
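A sketch of that alignment; the ToolPart shape is a hypothetical stand-in for whatever ChatMessage actually carries, and only the Part type comes from @google/genai:

```ts
import type { Part } from '@google/genai'

// Hypothetical shape of a tool-message part; the real ChatMessage type may differ
interface ToolPart {
  type: 'function_call' | 'function_response'
  function_name?: string
  function_response?: string
}

function toFunctionResponsePart(part: ToolPart): Part {
  // Keep the structured response so the model recognizes it as a tool result,
  // instead of collapsing it into a plain { text } part
  return {
    functionResponse: {
      name: part.function_name ?? 'unknown_tool',
      response: { output: part.function_response ?? '' }
    }
  }
}
```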
661-707: Reasoning/text split logic is duplicated; reuse processVertexResponse in completions

processVertexResponse already encapsulates the logic for:

- Preferring candidate parts when present.
- Separating thought parts (part.thought === true) from normal text.
- Falling back to <think>...</think> parsing when needed.

completions reimplements essentially the same logic inline (lines 785-823). To reduce duplication and keep behavior consistent, you could:

- Do all usage/usageMetadata handling in completions as you already do.
- Then call this.processVertexResponse(result) and merge the returned content and reasoning_content into resultResp, instead of manually re-parsing the response again (see the sketch after this item).

That way any future change in how Vertex encodes reasoning (new fields, different markers) only needs to be updated in one place.

Based on learnings, centralizing provider-specific response parsing improves consistency across both sync and streaming paths.

Also applies to: 785-823
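A standalone sketch of that consolidation; the parser signature and the resultResp shape are assumed from the review text rather than the actual source:

```ts
type VertexParsed = { content: string; reasoning_content?: string }

// One shared parse step feeds the sync response object; parseResponse
// stands in for the provider's processVertexResponse method.
function fillCompletion(
  resultResp: { content: string; reasoning_content?: string },
  parseResponse: (raw: unknown) => VertexParsed,
  raw: unknown
): void {
  const { content, reasoning_content } = parseResponse(raw)
  resultResp.content = content
  if (reasoning_content) {
    resultResp.reasoning_content = reasoning_content
  }
}
```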
246-262: Use shared logger utilities instead of console.* and keep logs structured

Throughout this provider (summaryTitles, check/init, token estimation, suggestions parsing, coreStream, image streaming), logging uses console.log / console.warn / console.error. Project guidelines call for structured logging via logger.info / logger.warn / logger.error / logger.debug with clear context and error typing.

To align with that:

- Inject or import the project's logger and replace console calls with the appropriate logger methods.
- Reuse sanitizeForLogging and isDebugLoggingEnabled to ensure debug logs remain PII-safe and gated by config.
- Where applicable, distinguish between network/system errors vs. user/config errors to support the existing error type taxonomy.

This will make Vertex logs consistent with other providers and easier to consume in centralized logging. A sketch of the pattern follows below.

As per coding guidelines, avoid direct console logging in TypeScript and prefer structured logger usage.

Also applies to: 273-305, 307-331, 782-783, 857-858, 885-886, 923-924, 939-940, 982-986, 1068-1074, 1182-1194, 1278-1280
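A sketch of the direction this suggests; the logger import path is an assumption, since the review does not name the project's logging module:

```ts
import logger from '@shared/logger' // assumed path, not the actual module

function logStreamFailure(requestId: string, modelId: string, error: unknown): void {
  const err = error instanceof Error ? error : new Error(String(error))
  // Structured fields keep logs queryable; no prompts, keys, or PII included
  logger.error('[Vertex] Stream request failed', {
    requestId,
    modelId,
    message: err.message,
    stack: err.stack
  })
}
```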
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts (1 hunks)
src/renderer/src/i18n/pt-BR/settings.json (1 hunks)
src/renderer/src/i18n/ru-RU/settings.json (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- src/renderer/src/i18n/ru-RU/settings.json
- src/renderer/src/i18n/pt-BR/settings.json
🧰 Additional context used
📓 Path-based instructions (15)
**/*.{ts,tsx,js,jsx,vue}
📄 CodeRabbit inference engine (CLAUDE.md)
Use English for logs and comments (Chinese text exists in legacy code, but new code should use English)
Files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (CLAUDE.md)
Enable and maintain strict TypeScript type checking for all files
**/*.{ts,tsx}: Always use try-catch to handle possible errors in TypeScript code
Provide meaningful error messages when catching errors
Log detailed error logs including error details, context, and stack traces
Distinguish and handle different error types (UserError, NetworkError, SystemError, BusinessError) with appropriate handlers in TypeScript
Use structured logging with logger.error(), logger.warn(), logger.info(), logger.debug() methods from logging utilities
Do not suppress errors (avoid empty catch blocks or silently ignoring errors)
Provide user-friendly error messages for user-facing errors in TypeScript components
Implement error retry mechanisms for transient failures in TypeScript
Avoid logging sensitive information (passwords, tokens, PII) in logs
Files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/main/presenter/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Organize core business logic into dedicated Presenter classes, with one presenter per functional domain
Files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/main/presenter/llmProviderPresenter/providers/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
src/main/presenter/llmProviderPresenter/providers/*.ts: Each LLM provider must implement the coreStream method following the standardized event interface for tool calling and response streaming
Convert MCP tools to provider-specific formats and normalize streaming responses to standard events in each provider implementation
src/main/presenter/llmProviderPresenter/providers/*.ts: In Provider implementations (src/main/presenter/llmProviderPresenter/providers/*.ts), the coreStream(messages, modelId, temperature, maxTokens) method should perform a single-pass streaming API request for each conversation round without containing multi-turn tool call loop logic
In Provider implementations, handle native tool support by converting MCP tools to Provider format using convertToProviderTools and including them in the API request; for Providers without native function call support, prepare messages using prepareFunctionCallPrompt before making the API call
In Provider implementations, parse Provider-specific data chunks from the streaming response and yield standardized LLMCoreStreamEvent objects conforming to the standard stream event interface, including text, reasoning, tool calls, usage, errors, stop reasons, and image data
In Provider implementations, include helper methods for Provider-specific operations such as formatMessages, convertToProviderTools, parseFunctionCalls, and prepareFunctionCallPrompt
Files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/main/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Use EventBus from src/main/eventbus.ts for main-to-renderer communication, broadcasting events via mainWindow.webContents.send()
src/main/**/*.ts: Use EventBus pattern for inter-process communication within the main process to decouple modules
Use Electron's built-in APIs for file system and native dialogs instead of Node.js or custom implementations
src/main/**/*.ts: Electron main process code belongs in src/main/ with presenters in presenter/ (Window/Tab/Thread/Mcp/Config/LLMProvider) and eventbus.ts for app events
Use the Presenter pattern in the main process for UI coordination
Files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Do not include AI co-authoring information (e.g., 'Co-Authored-By: Claude') in git commits
Files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
**/*.{js,ts,jsx,tsx,mjs,cjs}
📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)
Write logs and comments in English
Files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
{src/main/presenter/**/*.ts,src/renderer/**/*.ts}
📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)
Implement proper inter-process communication (IPC) patterns using Electron's ipcRenderer and ipcMain APIs
Files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/main/presenter/llmProviderPresenter/**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)
Define the standardized LLMCoreStreamEvent interface with fields: type (text | reasoning | tool_call_start | tool_call_chunk | tool_call_end | error | usage | stop | image_data), content (for text), reasoning_content (for reasoning), tool_call_id, tool_call_name, tool_call_arguments_chunk (for streaming), tool_call_arguments_complete (for complete arguments), error_message, usage object with token counts, stop_reason (tool_use | max_tokens | stop_sequence | error | complete), and image_data object with Base64-encoded data and mimeType
Files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
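Restated as a type for readability, this is a sketch assembled from the rule text above; the actual declaration in src/shared/types/core/llm-events.ts may group fields differently:

```ts
type LLMCoreStreamEvent = {
  type:
    | 'text' | 'reasoning' | 'tool_call_start' | 'tool_call_chunk'
    | 'tool_call_end' | 'error' | 'usage' | 'stop' | 'image_data'
  content?: string
  reasoning_content?: string
  tool_call_id?: string
  tool_call_name?: string
  tool_call_arguments_chunk?: string
  tool_call_arguments_complete?: string
  error_message?: string
  usage?: { prompt_tokens: number; completion_tokens: number; total_tokens: number }
  stop_reason?: 'tool_use' | 'max_tokens' | 'stop_sequence' | 'error' | 'complete'
  image_data?: { data: string; mimeType: string }
}
```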
src/**/*
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
New features should be developed in the src directory
Files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/main/**/*.{js,ts}
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
Main process code for Electron should be placed in src/main
Files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/**/*.{ts,tsx,vue,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
Use Prettier with single quotes, no semicolons, and 100 character width
Files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/**/*.{ts,tsx,js,jsx}
📄 CodeRabbit inference engine (AGENTS.md)
Use OxLint for linting JavaScript and TypeScript files
Files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/**/*.{ts,tsx}
📄 CodeRabbit inference engine (AGENTS.md)
src/**/*.{ts,tsx}: Use camelCase for variable and function names in TypeScript files
Use PascalCase for type and class names in TypeScript
Use SCREAMING_SNAKE_CASE for constant names
Files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
src/**/*.ts
📄 CodeRabbit inference engine (AGENTS.md)
Use EventBus for inter-process communication events
Files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
🧠 Learnings (11)
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Implement separation of concerns where `src/main/presenter/llmProviderPresenter/index.ts` manages the Agent loop and conversation history, while Provider files handle LLM API interactions, Provider-specific request/response formatting, tool definition conversion, and native vs non-native tool call mechanisms
Applied to files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, include helper methods for Provider-specific operations such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt`
Applied to files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, parse Provider-specific data chunks from the streaming response and `yield` standardized `LLMCoreStreamEvent` objects conforming to the standard stream event interface, including text, reasoning, tool calls, usage, errors, stop reasons, and image data
Applied to files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations, handle native tool support by converting MCP tools to Provider format using `convertToProviderTools` and including them in the API request; for Providers without native function call support, prepare messages using `prepareFunctionCallPrompt` before making the API call
Applied to files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/**/*.ts : Define the standardized `LLMCoreStreamEvent` interface with fields: `type` (text | reasoning | tool_call_start | tool_call_chunk | tool_call_end | error | usage | stop | image_data), `content` (for text), `reasoning_content` (for reasoning), `tool_call_id`, `tool_call_name`, `tool_call_arguments_chunk` (for streaming), `tool_call_arguments_complete` (for complete arguments), `error_message`, `usage` object with token counts, `stop_reason` (tool_use | max_tokens | stop_sequence | error | complete), and `image_data` object with Base64-encoded data and mimeType
Applied to files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:26:11.312Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.312Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Convert MCP tools to provider-specific formats and normalize streaming responses to standard events in each provider implementation
Applied to files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:26:11.312Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: CLAUDE.md:0-0
Timestamp: 2025-11-25T05:26:11.312Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each LLM provider must implement the `coreStream` method following the standardized event interface for tool calling and response streaming
Applied to files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : In Provider implementations (`src/main/presenter/llmProviderPresenter/providers/*.ts`), the `coreStream(messages, modelId, temperature, maxTokens)` method should perform a *single-pass* streaming API request for each conversation round without containing multi-turn tool call loop logic
Applied to files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts` (`startStreamCompletion`), implement the Agent loop that manages the overall conversation flow, including multiple rounds of LLM calls and tool usage, maintaining `conversationMessages` history, calling `provider.coreStream()` on each iteration, and controlling the loop using `needContinueConversation` and `toolCallCount` (compared against `MAX_TOOL_CALLS`)
Applied to files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:27:12.209Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-11-25T05:27:12.209Z
Learning: Applies to src/main/presenter/llmProviderPresenter/index.ts : In `src/main/presenter/llmProviderPresenter/index.ts`, listen for standardized events yielded by `coreStream` and handle them accordingly: buffer text content (`currentContent`), handle `tool_call_start/chunk/end` events by collecting tool details and calling `presenter.mcpPresenter.callTool`, send frontend events via `eventBus` with tool call status, format tool results for the next LLM call, and set `needContinueConversation = true`
Applied to files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
📚 Learning: 2025-11-25T05:26:35.317Z
Learnt from: CR
Repo: ThinkInAIXYZ/deepchat PR: 0
File: .cursor/rules/error-logging.mdc:0-0
Timestamp: 2025-11-25T05:26:35.317Z
Learning: Applies to **/*.{ts,tsx} : Avoid logging sensitive information (passwords, tokens, PII) in logs
Applied to files:
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
🧬 Code graph analysis (1)
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts (6)
src/shared/types/presenters/llmprovider.presenter.d.ts (2)
- LLM_PROVIDER (45-60)
- VERTEX_PROVIDER (94-101)
src/shared/types/presenters/legacy.presenters.d.ts (4)
- LLM_PROVIDER (641-661)
- VERTEX_PROVIDER (761-768)
- ModelConfig (133-152)
- Tool (109-120)
src/main/presenter/llmProviderPresenter/managers/toolCallProcessor.ts (1)
- process (34-233)
src/main/presenter/configPresenter/modelCapabilities.ts (1)
- modelCapabilities (178-178)
src/main/presenter/llmProviderPresenter/baseProvider.ts (1)
- SUMMARY_TITLES_PROMPT (739-741)
src/shared/types/core/llm-events.ts (1)
- createStreamEvent (100-154)
🔇 Additional comments (1)
src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts (1)
1201-1304: Image streaming and embeddings implementations look consistent with the provider contract

handleImageGenerationStream:

- Validates that there is a user message and at least one valid part.
- Streams both text and image data via createStreamEvent.text and createStreamEvent.imageData.
- On any failure, logs the error and emits error then stop('error'), which is consistent with the expected streaming interface.

getEmbeddings:

- Uses embedContent with batched contents.
- Normalizes the returned embeddings to plain number[][] and throws a clear error if none are returned (a sketch follows below).

Both methods fit well with the standardized LLMCoreStreamEvent/LLMProvider contracts and error-handling expectations.

Based on learnings, this matches the expected provider responsibilities for image output and embeddings.
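A minimal sketch of that embeddings normalization, with the client surface reduced to just the embedContent call; the parameter and result shapes follow the description above, but treat them as assumptions:

```ts
interface EmbedClient {
  models: {
    embedContent(params: {
      model: string
      contents: string[]
    }): Promise<{ embeddings?: { values?: number[] }[] }>
  }
}

async function getEmbeddingsSketch(
  genAI: EmbedClient,
  model: string,
  texts: string[]
): Promise<number[][]> {
  const res = await genAI.models.embedContent({ model, contents: texts })
  if (!res.embeddings || res.embeddings.length === 0) {
    // Fail loudly instead of returning an empty matrix
    throw new Error('Vertex returned no embeddings for the request')
  }
  // Normalize to plain number[][] as the review describes
  return res.embeddings.map((e) => e.values ?? [])
}
```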
// Check if detailed debug logging is enabled
private isDebugLoggingEnabled(): boolean {
  return is.dev || this.configPresenter.getSetting<boolean>('traceDebugEnabled') === true
}
isDebugLoggingEnabled may misbehave if getSetting is async
Everywhere else (e.g. in getFormattedSafetySettings) configPresenter.getSetting is awaited, but here it’s used synchronously:
private isDebugLoggingEnabled(): boolean {
return is.dev || this.configPresenter.getSetting<boolean>('traceDebugEnabled') === true
}

If getSetting returns a Promise, this will always be false in production, and debug logs will never be enabled via config. Consider making this helper async and awaiting it at call sites:
- private isDebugLoggingEnabled(): boolean {
- return is.dev || this.configPresenter.getSetting<boolean>('traceDebugEnabled') === true
- }
+ private async isDebugLoggingEnabled(): Promise<boolean> {
+ if (is.dev) return true
+ try {
+ const value = await this.configPresenter.getSetting<boolean>('traceDebugEnabled')
+ return value === true
+ } catch {
+ return false
+ }
+ }

And then:
- if (this.isDebugLoggingEnabled()) {
+ if (await this.isDebugLoggingEnabled()) {
// ...
}

As per coding guidelines, config-driven debug logging should behave predictably in both dev and production.
🤖 Prompt for AI Agents
In src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts around
lines 405 to 408, isDebugLoggingEnabled currently calls
this.configPresenter.getSetting synchronously which will misbehave if getSetting
is async; change isDebugLoggingEnabled to be async and return a Promise<boolean>
(awaiting getSetting), then update every call site to await
isDebugLoggingEnabled (or handle the Promise) so dev mode short‑circuit still
returns true immediately while production reads the awaited config value; ensure
the method signature, callers, and any upstream control flow are updated to
handle async/Promise semantics.
// Determine whether the model supports thinkingBudget
private supportsThinkingBudget(modelId: string): boolean {
  const normalized = modelId.replace(/^models\//i, '')
  const range = modelCapabilities.getThinkingBudgetRange(this.provider.id, normalized)
  return (
    typeof range.default === 'number' ||
    typeof range.min === 'number' ||
    typeof range.max === 'number'
  )
}
Normalize modelId for thinkingBudget capability lookup the same way as for other capabilities
supportsThinkingBudget currently normalizes only the models/ prefix:
private supportsThinkingBudget(modelId: string): boolean {
const normalized = modelId.replace(/^models\//i, '')
const range = modelCapabilities.getThinkingBudgetRange(this.provider.id, normalized)
// ...
}

But fetchProviderModels uses a broader normalization for capability checks:
const normalizeModelId = (mid: string): string =>
String(mid || '')
.replace(/^models\//i, '')
  .replace(/^publishers\/google\/models\//i, '')

If modelId is a full Vertex path like publishers/google/models/..., supportsThinkingBudget will pass an unnormalized ID to modelCapabilities.getThinkingBudgetRange, likely causing the lookup to miss and thinkingBudget never being applied even for models that support it.
Recommend aligning the normalization:
- private supportsThinkingBudget(modelId: string): boolean {
- const normalized = modelId.replace(/^models\//i, '')
+ private supportsThinkingBudget(modelId: string): boolean {
+ const normalized = String(modelId || '')
+ .replace(/^models\//i, '')
+ .replace(/^publishers\/google\/models\//i, '')
const range = modelCapabilities.getThinkingBudgetRange(this.provider.id, normalized)
return (
typeof range.default === 'number' ||
typeof range.min === 'number' ||
typeof range.max === 'number'
)
}

Or extract a shared normalizeModelId helper and reuse it in both places; a sketch follows.
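The helper would simply combine the two regexes quoted above from fetchProviderModels:

```ts
// Shared normalization for all capability lookups
function normalizeModelId(mid: string): string {
  return String(mid || '')
    .replace(/^models\//i, '')
    .replace(/^publishers\/google\/models\//i, '')
}
```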
Based on learnings, capability checks should consistently use the same normalized model identifiers.
Also applies to: 170-175
🤖 Prompt for AI Agents
In src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts around
lines 464 to 473 (also apply same change at lines ~170-175),
supportsThinkingBudget currently only strips a leading "models/" prefix before
querying modelCapabilities, which misses full Vertex paths like
"publishers/google/models/..."; normalize the modelId the same way as
fetchProviderModels (strip both /^models\//i and
/^publishers\/google\/models\//i) or extract and reuse a shared normalizeModelId
helper, then pass the normalized id into
modelCapabilities.getThinkingBudgetRange so capability lookups succeed for full
Vertex model paths.
async *coreStream(
  messages: ChatMessage[],
  modelId: string,
  modelConfig: ModelConfig,
  temperature: number,
  maxTokens: number,
  mcpTools: MCPToolDefinition[]
): AsyncGenerator<LLMCoreStreamEvent> {
  if (!this.isInitialized) throw new Error('Provider not initialized')
  if (!modelId) throw new Error('Model ID is required')

  const requestStartTime = Date.now()
  const requestId = `vertex-${Date.now()}-${Math.random().toString(36).substring(7)}`
  // Log only non-sensitive metadata
  console.log('[Vertex] Stream request:', {
    requestId,
    modelId,
    hasReasoning: modelConfig.reasoning === true,
    hasSearch: modelConfig.enableSearch === true,
    hasTools: mcpTools.length > 0,
    temperature,
    maxTokens
  })

  // Check whether this is an image generation model
  const isImageGenerationModel = modelConfig?.type === ModelType.ImageGeneration

  // Image generation models take a dedicated handling path
  if (isImageGenerationModel) {
    yield* this.handleImageGenerationStream(messages, modelId, temperature, maxTokens)
    return
  }

  const safetySettings = await this.getFormattedSafetySettings()
  if (this.isDebugLoggingEnabled()) {
    console.log('[Vertex] Safety settings:', {
      count: safetySettings?.length || 0,
      categories: safetySettings?.map((s) => s.category) || []
    })
  }

  // Add Gemini tool definitions
  let geminiTools: Tool[] = []

  // Note: the built-in googleSearch tool is mutually exclusive with external tools
  if (modelConfig.enableSearch) {
    geminiTools.push({ googleSearch: {} as GoogleSearch })
  } else {
    if (mcpTools.length > 0)
      geminiTools = await presenter.mcpPresenter.mcpToolsToGeminiTools(mcpTools, this.provider.id)
  }

  // Format messages into the Gemini format
  const formattedParts = this.formatVertexMessages(messages)

  // 1. Get the base config
  const generateContentConfig: GenerateContentConfig = this.getGenerateContentConfig(
    temperature,
    maxTokens,
    modelId,
    modelConfig.reasoning,
    modelConfig.thinkingBudget
  )

  // 2. Add the remaining properties on the local variable
  if (formattedParts.systemInstruction) {
    generateContentConfig.systemInstruction = formattedParts.systemInstruction
  }

  if (geminiTools.length > 0) {
    generateContentConfig.tools = geminiTools
    // Only configure functionCallingConfig when functionDeclarations exist
    const hasFunctionDeclarations = geminiTools.some((t: any) => {
      const fns = t?.functionDeclarations
      return Array.isArray(fns) && fns.length > 0
    })
    if (hasFunctionDeclarations) {
      generateContentConfig.toolConfig = {
        functionCallingConfig: {
          mode: FunctionCallingConfigMode.AUTO // Let the model decide whether to call tools
        }
      }
    }
  }

  if (safetySettings) {
    generateContentConfig.safetySettings = safetySettings
  }

  // 3. Create the complete requestParams in one pass
  const requestParams: GenerateContentParameters = {
    model: modelId,
    contents: formattedParts.contents,
    config: generateContentConfig
  }

  if (this.isDebugLoggingEnabled()) {
    const sanitizedParams = this.sanitizeForLogging(requestParams)
    console.log('[Vertex] Request params (sanitized):', sanitizedParams)
  }

  // Send the streaming request
  const result = await this.genAI.models.generateContentStream({
    ...requestParams,
    model: this.ensureVertexModelName(requestParams.model as string)
  })

  // State variables
  let buffer = ''
  let isInThinkTag = false
  let toolUseDetected = false
  let usageMetadata: GenerateContentResponseUsageMetadata | undefined
  let isNewThoughtFormatDetected = modelConfig.reasoning === true

  // Stream processing loop
  for await (const chunk of result) {
    // Collect usage statistics
    if (chunk.usageMetadata) {
      usageMetadata = chunk.usageMetadata
    }

    if (this.isDebugLoggingEnabled()) {
      const sanitizedChunk = this.sanitizeForLogging({
        candidates: chunk.candidates,
        usageMetadata: chunk.usageMetadata
      })
      console.log('[Vertex] Stream chunk (sanitized):', sanitizedChunk)
    }
    // Check whether the chunk contains a function call
    if (chunk.candidates && chunk.candidates[0]?.content?.parts?.[0]?.functionCall) {
      const functionCall = chunk.candidates[0].content.parts[0].functionCall
      const functionName = functionCall.name
      const functionArgs = functionCall.args || {}
      const toolCallId = `gemini-tool-${Date.now()}`

      toolUseDetected = true

      // Emit the tool call start event
      yield createStreamEvent.toolCallStart(toolCallId, functionName || '')

      // Emit the tool call arguments
      const argsString = JSON.stringify(functionArgs)
      yield createStreamEvent.toolCallChunk(toolCallId, argsString)

      // Emit the tool call end event
      yield createStreamEvent.toolCallEnd(toolCallId, argsString)

      // Set the stop reason to tool use
      break
    }

    // Process the content chunk
    let content = ''
    let thoughtContent = ''

    // Handle text and image content
    if (chunk.candidates && chunk.candidates[0]?.content?.parts) {
      for (const part of chunk.candidates[0].content.parts) {
        // Check for thought content (new format)
        if ((part as any).thought === true && part.text) {
          isNewThoughtFormatDetected = true
          thoughtContent += part.text
        } else if (part.text) {
          content += part.text
        } else if (part.inlineData && part.inlineData.data && part.inlineData.mimeType) {
          // Handle image data
          yield createStreamEvent.imageData({
            data: part.inlineData.data,
            mimeType: part.inlineData.mimeType
          })
        }
      }
    } else {
      // Fallback handling
      content = chunk.text || ''
    }

    // If thought content was detected, emit it directly
    if (thoughtContent) {
      yield createStreamEvent.reasoning(thoughtContent)
    }

    if (!content) continue

    if (isNewThoughtFormatDetected) {
      yield createStreamEvent.text(content)
    } else {
      buffer += content

      if (buffer.includes('<think>') && !isInThinkTag) {
        const thinkStart = buffer.indexOf('<think>')
        if (thinkStart > 0) {
          yield createStreamEvent.text(buffer.substring(0, thinkStart))
        }
        buffer = buffer.substring(thinkStart + 7)
        isInThinkTag = true
      }

      if (isInThinkTag && buffer.includes('</think>')) {
        const thinkEnd = buffer.indexOf('</think>')
        const reasoningContent = buffer.substring(0, thinkEnd)
        if (reasoningContent) {
          yield createStreamEvent.reasoning(reasoningContent)
        }
        buffer = buffer.substring(thinkEnd + 8)
        isInThinkTag = false
      }

      if (!isInThinkTag && buffer) {
        yield createStreamEvent.text(buffer)
        buffer = ''
      }
    }
  }

  if (usageMetadata) {
    yield createStreamEvent.usage({
      prompt_tokens: usageMetadata.promptTokenCount || 0,
      completion_tokens: usageMetadata.candidatesTokenCount || 0,
      total_tokens: usageMetadata.totalTokenCount || 0
    })
  }

  // Flush any remaining buffer content
  if (!isNewThoughtFormatDetected && buffer) {
    if (isInThinkTag) {
      yield createStreamEvent.reasoning(buffer)
    } else {
      yield createStreamEvent.text(buffer)
    }
  }

  // Emit the stop event
  const requestDuration = Date.now() - requestStartTime
  const status = toolUseDetected ? 'tool_use' : 'complete'
  console.log('[Vertex] Stream request completed:', {
    requestId,
    modelId,
    status,
    duration: `${requestDuration}ms`,
    tokensUsed: usageMetadata
      ? {
          prompt: usageMetadata.promptTokenCount || 0,
          completion: usageMetadata.candidatesTokenCount || 0,
          total: usageMetadata.totalTokenCount || 0
        }
      : undefined
  })
  yield createStreamEvent.stop(status)
}
coreStream lacks top‑level error→event translation; consider mirroring image stream error handling
The streaming implementation is otherwise solid and matches the standardized event model (text, reasoning, image_data, usage, stop, and tool_call_* events) in a single‑pass request, which is exactly what the provider is supposed to do.
One gap is error handling:

- coreStream can throw on generateContentStream (bad credentials, network issues, invalid config) or while iterating the async stream.
- When that happens, the async generator will reject and the caller has to catch; no error or stop('error') events are yielded from this provider.
- In contrast, handleImageGenerationStream wraps its logic in a try/catch and always emits error + stop('error') on failure.

To align behavior and make the Agent loop's life easier, consider wrapping the main body of coreStream in a try/catch that:

- Logs the error (using the shared logger and possibly sanitized request metadata if debug is enabled).
- Yields createStreamEvent.error(<user-friendly message>).
- Yields createStreamEvent.stop('error') before returning.
You may also want to treat the !isInitialized / missing modelId cases similarly (emit error+stop instead of throwing), so every failure path surfaces as standardized stream events from the provider.
Based on learnings, providers should translate provider‑side failures into error/stop stream events rather than relying solely on thrown exceptions.
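A sketch of that guard as a generic wrapper; createStreamEvent.error and createStreamEvent.stop appear above, but the import path and wrapper shape here are assumptions:

```ts
import { createStreamEvent } from '@shared/types/core/llm-events' // assumed path

// Wraps a provider stream so failures surface as standardized events
// instead of a rejected async generator
async function* withStreamErrorGuard(
  inner: () => AsyncGenerator<unknown>
): AsyncGenerator<unknown> {
  try {
    yield* inner()
  } catch (error) {
    const message = error instanceof Error ? error.message : String(error)
    yield createStreamEvent.error(`Vertex stream failed: ${message}`)
    yield createStreamEvent.stop('error')
  }
}
```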
…tex-support' into feat/add-Vertex-support
Summary by CodeRabbit
New Features
Chores