fix: migration model config data from new default settings #923
Walkthrough

Adds version-aware model configuration management with a metadata layer, updates setModelConfig to accept an optional source and to emit stored configurations, adjusts providers to pass source: 'provider', updates DeepSeek defaults and thread defaults, and expands shared types and tests accordingly.
Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  actor Provider
  participant ConfigPresenter
  participant ModelConfigHelper as ModelConfigHelper (vX.Y.Z)
  participant Store
  participant Renderers
  Provider->>ConfigPresenter: setModelConfig(modelId, providerId, config, {source:'provider'})
  ConfigPresenter->>ModelConfigHelper: setModelConfig(..., {source})
  ModelConfigHelper->>Store: ensureStoreSynced(meta.lastRefreshVersion vs appVersion)
  alt first-run or version changed
    Store-->>ModelConfigHelper: migrate legacy, refresh derived configs
  end
  ModelConfigHelper->>Store: write storedConfig (marks isUserDefined based on source)
  Store-->>ModelConfigHelper: storedConfig
  ModelConfigHelper-->>ConfigPresenter: storedConfig
  ConfigPresenter-->>Renderers: emit MODEL_CONFIG_CHANGED(storedConfig)
```
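The write path in the diagram can be sketched as a minimal, self-contained model. This is a hedged illustration only: `StoredModelConfig`, the map-backed store, and the key format are assumptions, not the actual DeepChat types or `ModelConfigHelper` internals.

```typescript
// Illustrative sketch of source-aware config writes (names are assumptions).
type ModelConfigSource = 'user' | 'provider'

interface StoredModelConfig {
  maxTokens: number
  contextLength: number
  isUserDefined: boolean
  source: ModelConfigSource
}

const store = new Map<string, StoredModelConfig>()

function setModelConfig(
  modelId: string,
  providerId: string,
  config: { maxTokens: number; contextLength: number },
  options?: { source?: ModelConfigSource }
): StoredModelConfig {
  const source: ModelConfigSource = options?.source ?? 'user'
  const stored: StoredModelConfig = {
    ...config,
    // Provider-pushed configs are not user-defined, so a later
    // version-triggered refresh may safely overwrite them.
    isUserDefined: source === 'user',
    source
  }
  store.set(`${providerId}:${modelId}`, stored)
  // Returning the stored shape lets the caller emit it to renderers.
  return stored
}
```

The key point is that the caller only opts into `source: 'provider'` explicitly; omitting the option keeps the default user-driven semantics.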
```mermaid
sequenceDiagram
  autonumber
  participant App as App Startup
  participant ModelConfigHelper as ModelConfigHelper (vX.Y.Z)
  participant Store
  App->>ModelConfigHelper: constructor(appVersion)
  ModelConfigHelper->>Store: read __meta__
  alt no meta or version mismatch
    ModelConfigHelper->>Store: initialize/migrate meta
    ModelConfigHelper->>Store: refresh derived configs
    Store-->>ModelConfigHelper: updated meta with lastRefreshVersion = appVersion
  else up-to-date
    Store-->>ModelConfigHelper: meta unchanged
  end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
src/shared/types/presenters/legacy.presenters.d.ts (1)
1083-1086: Typo breaks typings: use `unknown`, not `unknow`. This will fail typecheck.
Apply:
```diff
- env: Record<string, unknow>
+ env: Record<string, unknown>
```
src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts (2)
93-107: Add timeout and local error handling for external call.
`fetch` lacks a timeout and local try/catch. A stuck TokenFlux `/models` call can hang the main-process path. Use `AbortController` with a sane default and structured logging. Apply:
```diff
 public async getKeyStatus(): Promise<KeyStatus> {
-    if (!this.provider.apiKey) {
-      throw new Error('API key is required')
-    }
-
-    // TokenFlux uses OpenAI-compatible API, so we can use the models endpoint for key validation
-    const response = await fetch(`${this.provider.baseUrl}/models`, {
-      method: 'GET',
-      headers: {
-        Authorization: `Bearer ${this.provider.apiKey}`,
-        'Content-Type': 'application/json'
-      }
-    })
+    if (!this.provider.apiKey) {
+      throw new Error('API key is required')
+    }
+    const controller = new AbortController()
+    const timeoutMs = 10000
+    const timeoutId = setTimeout(() => controller.abort(), timeoutMs)
+    try {
+      const response = await fetch(`${this.provider.baseUrl}/models`, {
+        method: 'GET',
+        headers: {
+          Authorization: `Bearer ${this.provider.apiKey}`,
+          'Content-Type': 'application/json'
+        },
+        signal: controller.signal
+      })
```

And close the timeout:
```diff
-    if (!response.ok) {
+      if (!response.ok) {
        const errorText = await response.text()
        throw new Error(
          `TokenFlux API key check failed: ${response.status} ${response.statusText} - ${errorText}`
        )
      }
-
+    } finally {
+      clearTimeout(timeoutId)
+    }
```
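The timeout pattern recommended here generalizes to any fetch-like call. A minimal standalone sketch, where the helper name, the fetch-like signature, and the 10 s default are assumptions for illustration:

```typescript
// Generic timeout wrapper around a fetch-like function (names are assumptions).
type FetchLike = (url: string, init: { signal: AbortSignal }) => Promise<unknown>

async function fetchWithTimeout(
  doFetch: FetchLike,
  url: string,
  timeoutMs = 10000
): Promise<unknown> {
  const controller = new AbortController()
  const timeoutId = setTimeout(() => controller.abort(), timeoutMs)
  try {
    // The callee observes controller.signal and rejects once aborted.
    return await doFetch(url, { signal: controller.signal })
  } finally {
    // Always clear the timer so a fast response does not leave a pending abort.
    clearTimeout(timeoutId)
  }
}
```

Clearing the timer in `finally` is the detail the review calls out: without it, every successful call leaks a timer that later fires an abort on a settled request.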
167-169: Clamp and integer-normalize token limits. Ensure `maxTokens` doesn't exceed `contextLength` and is an integer, to avoid downstream API rejections.

```diff
- const contextLength = model.context_length || existingConfig.contextLength || 4096
- const maxTokens = existingConfig.maxTokens || Math.min(contextLength / 2, 4096)
+ const contextLength = model.context_length || existingConfig.contextLength || 4096
+ const computedMax =
+   existingConfig.maxTokens ?? Math.min(Math.floor(contextLength / 2), 4096)
+ const maxTokens = Math.min(computedMax, contextLength)
```
src/main/presenter/configPresenter/modelConfig.ts (1)
221-321: Priority chain and defaults look correct; add a safety clamp. After composing `finalConfig`, ensure `maxTokens <= contextLength` to tolerate bad legacy data.

```diff
-  finalConfig.isUserDefined = false
+  // Safety: prevent invalid token budgets
+  if (finalConfig.maxTokens > finalConfig.contextLength) {
+    finalConfig.maxTokens = finalConfig.contextLength
+  }
+  finalConfig.isUserDefined = false
```
🧹 Nitpick comments (4)
src/main/presenter/configPresenter/modelDefaultSettings.ts (2)
552-553: Use numeric separators for large literals for readability. Optional style tweak to align with other entries using underscores.
Apply:
```diff
- maxTokens: 64000,
- contextLength: 128000,
+ maxTokens: 64_000,
+ contextLength: 128_000,
@@
- contextLength: 128000,
+ contextLength: 128_000,
```
Also applies to: 586-586
448-448: Comments should be in English. Guidelines require English logs/comments. Please translate these section headers.
Example:
```diff
- // DeepSeek系列模型配置
+ // DeepSeek models
```
Also applies to: 616-616, 1001-1001, 1102-1102, 1148-1148, 1250-1250, 1372-1372, 1396-1396, 1464-1464, 1488-1488, 1534-1534, 1558-1558, 1680-1680, 1714-1714, 1760-1760, 1806-1806, 1863-1863
src/main/presenter/llmProviderPresenter/providers/openRouterProvider.ts (1)
262-265: Consistent source tagging on config updates. Good. Keep it uniform across providers.
This update pattern is duplicated across providers; consider a small helper in OpenAICompatibleProvider like `updateConfigFromModel(modelId, providerId, partial, source)` to DRY it up.
test/main/presenter/modelConfig.test.ts (1)
349-426: Solid coverage for provider-managed vs user-managed configs; reduce reliance on internals. The behavior checks are correct. Consider removing direct access to private store internals to keep tests resilient to refactors.
```diff
- const helperAny = modelConfigHelper as any
- const providerKey = helperAny.generateCacheKey(providerId, providerManagedModelId)
- const userKey = helperAny.generateCacheKey(providerId, userManagedModelId)
  const refreshedHelper = new ModelConfigHelper('2.0.0')
@@
- const refreshedStore = (refreshedHelper as any).modelConfigStore
- expect(refreshedStore.get(providerKey)).toBeUndefined()
- expect(refreshedStore.get(userKey)).toBeDefined()
+ // Assert via public API only
+ expect(refreshedHelper.getProviderModelConfigs(providerId).some(c => c.modelId === providerManagedModelId)).toBe(false)
+ expect(refreshedHelper.getProviderModelConfigs(providerId).some(c => c.modelId === userManagedModelId)).toBe(true)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (12)
src/main/presenter/configPresenter/index.ts (3 hunks)
src/main/presenter/configPresenter/modelConfig.ts (9 hunks)
src/main/presenter/configPresenter/modelDefaultSettings.ts (2 hunks)
src/main/presenter/configPresenter/providerModelSettings.ts (2 hunks)
src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts (1 hunk)
src/main/presenter/llmProviderPresenter/providers/groqProvider.ts (1 hunk)
src/main/presenter/llmProviderPresenter/providers/openRouterProvider.ts (1 hunk)
src/main/presenter/llmProviderPresenter/providers/ppioProvider.ts (1 hunk)
src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts (1 hunk)
src/main/presenter/threadPresenter/const.ts (3 hunks)
src/shared/types/presenters/legacy.presenters.d.ts (3 hunks)
test/main/presenter/modelConfig.test.ts (3 hunks)
🧰 Additional context used
📓 Path-based instructions (17)
**/*.{js,jsx,ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/development-setup.mdc)
**/*.{js,jsx,ts,tsx}: Use OxLint for code linting
Write logs and comments in English
**/*.{js,jsx,ts,tsx}: Use OxLint for JS/TS code; pre-commit hooks run lint-staged and typecheck
Use camelCase for variables and functions
Use PascalCase for types and classes
Use SCREAMING_SNAKE_CASE for constants
Files:
src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts, src/main/presenter/configPresenter/modelDefaultSettings.ts, src/main/presenter/llmProviderPresenter/providers/groqProvider.ts, src/main/presenter/llmProviderPresenter/providers/ppioProvider.ts, src/shared/types/presenters/legacy.presenters.d.ts, src/main/presenter/configPresenter/index.ts, src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts, src/main/presenter/configPresenter/providerModelSettings.ts, src/main/presenter/threadPresenter/const.ts, test/main/presenter/modelConfig.test.ts, src/main/presenter/configPresenter/modelConfig.ts, src/main/presenter/llmProviderPresenter/providers/openRouterProvider.ts
src/{main,renderer}/**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)
src/{main,renderer}/**/*.ts: Use context isolation for improved security
Implement proper inter-process communication (IPC) patterns
Optimize application startup time with lazy loading
Implement proper error handling and logging for debugging
Files:
src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts, src/main/presenter/configPresenter/modelDefaultSettings.ts, src/main/presenter/llmProviderPresenter/providers/groqProvider.ts, src/main/presenter/llmProviderPresenter/providers/ppioProvider.ts, src/main/presenter/configPresenter/index.ts, src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts, src/main/presenter/configPresenter/providerModelSettings.ts, src/main/presenter/threadPresenter/const.ts, src/main/presenter/configPresenter/modelConfig.ts, src/main/presenter/llmProviderPresenter/providers/openRouterProvider.ts
src/main/**/*.ts
📄 CodeRabbit inference engine (.cursor/rules/electron-best-practices.mdc)
Use Electron's built-in APIs for file system and native dialogs
Files:
src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts, src/main/presenter/configPresenter/modelDefaultSettings.ts, src/main/presenter/llmProviderPresenter/providers/groqProvider.ts, src/main/presenter/llmProviderPresenter/providers/ppioProvider.ts, src/main/presenter/configPresenter/index.ts, src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts, src/main/presenter/configPresenter/providerModelSettings.ts, src/main/presenter/threadPresenter/const.ts, src/main/presenter/configPresenter/modelConfig.ts, src/main/presenter/llmProviderPresenter/providers/openRouterProvider.ts
**/*.{ts,tsx}
📄 CodeRabbit inference engine (.cursor/rules/error-logging.mdc)
**/*.{ts,tsx}: Always use try-catch to handle possible errors
Provide meaningful error messages
Record detailed error logs
Degrade gracefully
Logs should include timestamp, log level, error code, error description, stack trace (if applicable), and relevant context
Log levels should include ERROR, WARN, INFO, DEBUG
Do not swallow errors
Provide user-friendly error messages
Implement error retry mechanisms
Avoid logging sensitive information
Use structured logging
Set appropriate log levels
Files:
src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts, src/main/presenter/configPresenter/modelDefaultSettings.ts, src/main/presenter/llmProviderPresenter/providers/groqProvider.ts, src/main/presenter/llmProviderPresenter/providers/ppioProvider.ts, src/shared/types/presenters/legacy.presenters.d.ts, src/main/presenter/configPresenter/index.ts, src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts, src/main/presenter/configPresenter/providerModelSettings.ts, src/main/presenter/threadPresenter/const.ts, test/main/presenter/modelConfig.test.ts, src/main/presenter/configPresenter/modelConfig.ts, src/main/presenter/llmProviderPresenter/providers/openRouterProvider.ts
src/main/presenter/llmProviderPresenter/providers/*.ts
📄 CodeRabbit inference engine (.cursor/rules/llm-agent-loop.mdc)
src/main/presenter/llmProviderPresenter/providers/*.ts: Each file in src/main/presenter/llmProviderPresenter/providers/*.ts should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Provider implementations must use a `coreStream` method that yields standardized stream events to decouple the main loop from provider-specific details.
The `coreStream` method in each Provider must perform a single streaming API request per conversation round and must not contain multi-round tool call loop logic.
Provider files should implement helper methods such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed for provider-specific logic.
All provider implementations must parse provider-specific data chunks and yield standardized events for text, reasoning, tool calls, usage, errors, stop reasons, and image data.
When a provider does not support native function calling, it must prepare messages using prompt wrapping (e.g., `prepareFunctionCallPrompt`) before making the API call.
When a provider supports native function calling, MCP tools must be converted to the provider's format (e.g., using `convertToProviderTools`) and included in the API request.
Provider implementations should aggregate and yield usage events as part of the standardized stream.
Provider implementations should yield image data events in the standardized format when applicable.
Provider implementations should yield reasoning events in the standardized format when applicable.
Provider implementations should yield tool call events (`tool_call_start`, `tool_call_chunk`, `tool_call_end`) in the standardized format.
Provider implementations should yield stop events with appropriate `stop_reason` in the standardized format.
Provider implementations should yield error events in the standardized format...
Files:
src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts, src/main/presenter/llmProviderPresenter/providers/groqProvider.ts, src/main/presenter/llmProviderPresenter/providers/ppioProvider.ts, src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts, src/main/presenter/llmProviderPresenter/providers/openRouterProvider.ts
src/main/**/*.{ts,js,tsx,jsx}
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
Main-process code goes in src/main
Files:
src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts, src/main/presenter/configPresenter/modelDefaultSettings.ts, src/main/presenter/llmProviderPresenter/providers/groqProvider.ts, src/main/presenter/llmProviderPresenter/providers/ppioProvider.ts, src/main/presenter/configPresenter/index.ts, src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts, src/main/presenter/configPresenter/providerModelSettings.ts, src/main/presenter/threadPresenter/const.ts, src/main/presenter/configPresenter/modelConfig.ts, src/main/presenter/llmProviderPresenter/providers/openRouterProvider.ts
**/*.{ts,tsx,js,vue}
📄 CodeRabbit inference engine (CLAUDE.md)
Use English for all logs and comments
Files:
src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts, src/main/presenter/configPresenter/modelDefaultSettings.ts, src/main/presenter/llmProviderPresenter/providers/groqProvider.ts, src/main/presenter/llmProviderPresenter/providers/ppioProvider.ts, src/shared/types/presenters/legacy.presenters.d.ts, src/main/presenter/configPresenter/index.ts, src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts, src/main/presenter/configPresenter/providerModelSettings.ts, src/main/presenter/threadPresenter/const.ts, test/main/presenter/modelConfig.test.ts, src/main/presenter/configPresenter/modelConfig.ts, src/main/presenter/llmProviderPresenter/providers/openRouterProvider.ts
**/*.{ts,tsx,vue}
📄 CodeRabbit inference engine (CLAUDE.md)
Enable and adhere to strict TypeScript typing (avoid implicit any, prefer precise types)
Files:
src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts, src/main/presenter/configPresenter/modelDefaultSettings.ts, src/main/presenter/llmProviderPresenter/providers/groqProvider.ts, src/main/presenter/llmProviderPresenter/providers/ppioProvider.ts, src/shared/types/presenters/legacy.presenters.d.ts, src/main/presenter/configPresenter/index.ts, src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts, src/main/presenter/configPresenter/providerModelSettings.ts, src/main/presenter/threadPresenter/const.ts, test/main/presenter/modelConfig.test.ts, src/main/presenter/configPresenter/modelConfig.ts, src/main/presenter/llmProviderPresenter/providers/openRouterProvider.ts
src/main/**
📄 CodeRabbit inference engine (AGENTS.md)
Place all Electron main-process code under src/main/
Files:
src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts, src/main/presenter/configPresenter/modelDefaultSettings.ts, src/main/presenter/llmProviderPresenter/providers/groqProvider.ts, src/main/presenter/llmProviderPresenter/providers/ppioProvider.ts, src/main/presenter/configPresenter/index.ts, src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts, src/main/presenter/configPresenter/providerModelSettings.ts, src/main/presenter/threadPresenter/const.ts, src/main/presenter/configPresenter/modelConfig.ts, src/main/presenter/llmProviderPresenter/providers/openRouterProvider.ts
src/main/presenter/**
📄 CodeRabbit inference engine (AGENTS.md)
src/main/presenter/**: Organize main-process presenters under src/main/presenter/ (Window/Tab/Thread/Mcp/Config/LLMProvider)
Follow the Presenter pattern for main-process modules
Files:
src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts, src/main/presenter/configPresenter/modelDefaultSettings.ts, src/main/presenter/llmProviderPresenter/providers/groqProvider.ts, src/main/presenter/llmProviderPresenter/providers/ppioProvider.ts, src/main/presenter/configPresenter/index.ts, src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts, src/main/presenter/configPresenter/providerModelSettings.ts, src/main/presenter/threadPresenter/const.ts, src/main/presenter/configPresenter/modelConfig.ts, src/main/presenter/llmProviderPresenter/providers/openRouterProvider.ts
**/*.{js,jsx,ts,tsx,vue}
📄 CodeRabbit inference engine (AGENTS.md)
Apply Prettier formatting: single quotes, no semicolons, max width 100
Files:
src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts, src/main/presenter/configPresenter/modelDefaultSettings.ts, src/main/presenter/llmProviderPresenter/providers/groqProvider.ts, src/main/presenter/llmProviderPresenter/providers/ppioProvider.ts, src/shared/types/presenters/legacy.presenters.d.ts, src/main/presenter/configPresenter/index.ts, src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts, src/main/presenter/configPresenter/providerModelSettings.ts, src/main/presenter/threadPresenter/const.ts, test/main/presenter/modelConfig.test.ts, src/main/presenter/configPresenter/modelConfig.ts, src/main/presenter/llmProviderPresenter/providers/openRouterProvider.ts
src/shared/**/*.{ts,tsx,d.ts}
📄 CodeRabbit inference engine (.cursor/rules/project-structure.mdc)
Shared type definitions go in the shared directory
Files:
src/shared/types/presenters/legacy.presenters.d.ts
src/shared/**/*.ts
📄 CodeRabbit inference engine (CLAUDE.md)
Place shared types, utilities, constants, and IPC contract definitions under src/shared/
Files:
src/shared/types/presenters/legacy.presenters.d.ts
src/shared/**
📄 CodeRabbit inference engine (AGENTS.md)
Store shared TypeScript types/utilities in src/shared/
Files:
src/shared/types/presenters/legacy.presenters.d.ts
test/**/*
📄 CodeRabbit inference engine (CLAUDE.md)
Place unit and integration tests under the test/ directory mirroring project structure
Files:
test/main/presenter/modelConfig.test.ts
test/{main,renderer}/**
📄 CodeRabbit inference engine (AGENTS.md)
Mirror source structure in tests under test/main/** and test/renderer/** (with setup files)
Files:
test/main/presenter/modelConfig.test.ts
test/{main,renderer}/**/*.{test,spec}.ts
📄 CodeRabbit inference engine (AGENTS.md)
test/{main,renderer}/**/*.{test,spec}.ts: Name test files with *.test.ts or *.spec.ts
Write tests with Vitest (jsdom) and Vue Test Utils
Files:
test/main/presenter/modelConfig.test.ts
🧠 Learnings (6)
📚 Learning: 2025-09-06T03:07:23.817Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-06T03:07:23.817Z
Learning: Applies to src/main/presenter/configPresenter/providers.ts : Add provider configuration entries in src/main/presenter/configPresenter/providers.ts
Applied to files:
src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts, src/main/presenter/llmProviderPresenter/providers/groqProvider.ts, src/main/presenter/llmProviderPresenter/providers/ppioProvider.ts, src/shared/types/presenters/legacy.presenters.d.ts, src/main/presenter/configPresenter/index.ts, src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts, src/main/presenter/configPresenter/providerModelSettings.ts, src/main/presenter/configPresenter/modelConfig.ts, src/main/presenter/llmProviderPresenter/providers/openRouterProvider.ts
📚 Learning: 2025-08-28T05:55:31.482Z
Learnt from: zerob13
PR: ThinkInAIXYZ/deepchat#804
File: src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts:153-156
Timestamp: 2025-08-28T05:55:31.482Z
Learning: TokenFlux models generally support function calling by default, so it's reasonable to assume hasFunctionCalling = true for TokenFlux provider implementations in src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
Applied to files:
src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts
📚 Learning: 2025-09-06T03:07:23.817Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-06T03:07:23.817Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : New LLM providers must be added under src/main/presenter/llmProviderPresenter/providers/ as separate files
Applied to files:
src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts, src/main/presenter/llmProviderPresenter/providers/ppioProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Provider files should implement helper methods such as `formatMessages`, `convertToProviderTools`, `parseFunctionCalls`, and `prepareFunctionCallPrompt` as needed for provider-specific logic.
Applied to files:
src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts, src/main/presenter/llmProviderPresenter/providers/ppioProvider.ts
📚 Learning: 2025-07-21T01:46:52.880Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/llm-agent-loop.mdc:0-0
Timestamp: 2025-07-21T01:46:52.880Z
Learning: Applies to src/main/presenter/llmProviderPresenter/providers/*.ts : Each file in `src/main/presenter/llmProviderPresenter/providers/*.ts` should handle interaction with a specific LLM API, including request/response formatting, tool definition conversion, native/non-native tool call management, and standardizing output streams to a common event format.
Applied to files:
src/main/presenter/llmProviderPresenter/providers/ppioProvider.ts
📚 Learning: 2025-09-04T11:03:30.184Z
Learnt from: CR
PR: ThinkInAIXYZ/deepchat#0
File: .cursor/rules/provider-guidelines.mdc:0-0
Timestamp: 2025-09-04T11:03:30.184Z
Learning: Integrate via the llmProviderPresenter entry point (src/main/presenter/llmProviderPresenter/index.ts) as the related implementation entry
Applied to files:
src/main/presenter/llmProviderPresenter/providers/ppioProvider.ts
🧬 Code graph analysis (3)
src/main/presenter/configPresenter/index.ts (2)
- src/main/presenter/configPresenter/modelConfig.ts (1): ModelConfigHelper (18-528)
- src/shared/types/presenters/legacy.presenters.d.ts (2): ModelConfig (132-150), ModelConfigSource (130-130)

test/main/presenter/modelConfig.test.ts (2)
- src/main/presenter/configPresenter/modelConfig.ts (1): ModelConfigHelper (18-528)
- src/shared/types/presenters/legacy.presenters.d.ts (1): ModelConfig (132-150)

src/main/presenter/configPresenter/modelConfig.ts (3)
- src/shared/types/presenters/legacy.presenters.d.ts (3): IModelConfig (152-157), ModelConfigSource (130-130), ModelConfig (132-150)
- src/main/presenter/configPresenter/providerModelSettings.ts (1): getProviderSpecificModelConfig (3271-3308)
- src/main/presenter/configPresenter/modelDefaultSettings.ts (1): defaultModelsSettings (3-1966)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build-check (x64)
🔇 Additional comments (19)
src/main/presenter/configPresenter/modelDefaultSettings.ts (2)
582-591: DeepSeek chat contextLength 128k — ensure UI/validators accept 128k. Looks right; please double-check that any per-provider/global validation and UI sliders/spinners allow 128k and don't cap at 65,536.

549-558: DeepSeek Reasoner caps bumped to 64k/128k — confirm provider/migration alignment. Change looks good. Verify provider hard limits, validators, and config migrations accept 64000 maxTokens and 128000 contextLength and won't silently clamp or reject requests; automated repo search returned no matches, so manual verification is required.
src/main/presenter/configPresenter/providerModelSettings.ts (2)
440-445: Confirm DeepSeek Chat 128K context length. Please verify 128000 aligns with the provider's latest limits to avoid silent truncation or upstream 400s.

447-456: Validate DeepSeek Reasoner token/context limits. Max tokens 64000 and context 128000 look plausible; confirm against current docs and SDK caps.
src/shared/types/presenters/legacy.presenters.d.ts (2)
130-131: Good addition: explicit ModelConfigSource type. Clear source attribution improves migration logic and audits.

156-157: IModelConfig: source field addition makes sense. Keeps metadata with the stored config without polluting ModelConfig.
src/main/presenter/llmProviderPresenter/providers/_302AIProvider.ts (1)
218-221: Passing `{ source: 'provider' }` is correct. Aligns with the new API and preserves user overrides in migration paths.
Confirm ModelConfigHelper treats `source: 'provider'` as non-user-defined and won't overwrite user-edited fields on merge.
src/main/presenter/llmProviderPresenter/providers/ppioProvider.ts (1)
207-210: Source metadata passed correctly. No issues.
src/main/presenter/llmProviderPresenter/providers/groqProvider.ts (1)
158-161: Good: provider-sourced config changes are tagged. Matches the new interface.
test/main/presenter/modelConfig.test.ts (2)
9-10: Good: explicit test version value. Using a constant version improves readability and intent.

71-71: Good: constructor updated to pass app version. Matches the new `ModelConfigHelper(appVersion)` API.
src/main/presenter/configPresenter/index.ts (2)
164-165: Good: version-aware ModelConfigHelper. Passing `this.currentAppVersion` wires model-config migrations to app upgrades.
1075-1089: Wrap setModelConfig in try/catch, emit storedConfig, and verify renderer consumers. Wrap this.modelConfigHelper.setModelConfig in a try/catch, log and rethrow on failure, and emit the returned storedConfig so renderers can read storedConfig.isUserDefined. A search for MODEL_CONFIG_CHANGED returned only src/main/events.ts (declaration) and src/main/presenter/configPresenter/index.ts (emission); no renderer consumers were found. Verify renderer code reads the emitted storedConfig (especially isUserDefined).
```diff
 setModelConfig(
   modelId: string,
   providerId: string,
   config: ModelConfig,
   options?: { source?: ModelConfigSource }
 ): void {
-  const storedConfig = this.modelConfigHelper.setModelConfig(modelId, providerId, config, options)
+  let storedConfig: ModelConfig
+  try {
+    storedConfig = this.modelConfigHelper.setModelConfig(modelId, providerId, config, options)
+  } catch (e) {
+    console.error('[ConfigPresenter] setModelConfig failed', {
+      level: 'ERROR',
+      ts: Date.now(),
+      providerId,
+      modelId,
+      err: e instanceof Error ? e.message : String(e)
+    })
+    throw e
+  }
```
src/main/presenter/configPresenter/modelConfig.ts (5)
24-31: Constructor/API change is coherent. Version is stored and immediately synchronized; good for migration determinism.

32-43: Migration gate is precise. `ensureStoreSynced` correctly initializes meta and refreshes on version change.

44-77: Legacy migration preserves user entries and drops derived ones. Logic to mark user entries and purge others is sound; memory cache reset prevents stale reads.
If you have any legacy snapshots, run a one-off check to ensure expected keys remain after first boot on a bumped version.

329-354: Source-aware persistence is correct. Returns the stored config with `isUserDefined` normalized; updates the meta keyset accordingly.

446-490: Import behavior handles overwrite/merge and meta updates well. Good normalization of `source` and `isUserDefined`; clearing the cache to force a fresh view is appropriate.
src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts (1)
190-192: Correct: provider-managed configs are tagged with source='provider'. Verified setModelConfig calls in provider files pass { source: 'provider' }: src/main/presenter/llmProviderPresenter/providers/tokenfluxProvider.ts, openRouterProvider.ts, groqProvider.ts, ppioProvider.ts, _302AIProvider.ts. The renderer call at src/renderer/src/stores/settings.ts omits the option intentionally for user-driven changes.
```ts
const SPECIAL_CONCAT_CHAR = '-_-'
```
🧹 Nitpick
Delimiter collision edge case.
SPECIAL_CONCAT_CHAR = '-_-' could appear in sanitized IDs. Consider a safer scheme (e.g., base64url pair or length‑prefixed join) to avoid parse ambiguity.
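One way to realize the "length-prefixed join" suggested here: encode each ID's length before the ID itself, so decoding reads exact character counts and never depends on what characters the IDs contain. These helper names are hypothetical, not part of the codebase.

```typescript
// Hypothetical collision-free key encoding (length-prefixed join).
function encodeKey(providerId: string, modelId: string): string {
  return `${providerId.length}:${providerId}${modelId.length}:${modelId}`
}

function decodeKey(key: string): [string, string] {
  // Read one "<len>:<payload>" segment, return [payload, remainder].
  const read = (s: string): [string, string] => {
    const sep = s.indexOf(':')
    const len = Number(s.slice(0, sep))
    const start = sep + 1
    return [s.slice(start, start + len), s.slice(start + len)]
  }
  const [providerId, rest] = read(key)
  const [modelId] = read(rest)
  return [providerId, modelId]
}
```

Unlike a fixed delimiter, this round-trips correctly even when an ID happens to contain `-_-` (or `:`), because the length prefix tells the decoder exactly how many characters belong to each part.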
- Avoid directly quoting citations in the answer.
- If the meaning is unclear, return the user's original query.
- Every footnote citations used in the answer must correspond to a real search results. Do not invent or hallucinate references.
- Every footnote citations used in the answer must correspond to a real search results. Do not invent or hallucinate references.
🧹 Nitpick
Fix grammar in the new guideline sentence.
Use singular “citation/result” and tighten the phrasing.
Apply this diff:
```diff
- - Every footnote citations used in the answer must correspond to a real search results. Do not invent or hallucinate references.
+ - Every footnote citation used in the answer must correspond to a real search result; do not invent or hallucinate references.
```
Also consider deduplicating this rule across templates to avoid drift.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```suggestion
- Every footnote citation used in the answer must correspond to a real search result; do not invent or hallucinate references.
```
🤖 Prompt for AI Agents
In src/main/presenter/threadPresenter/const.ts around line 27, the new guideline
sentence is plural and wordy; change it to singular and tighten phrasing so it
reads something like "Every footnote citation used in the answer must correspond
to a real search result; do not invent or hallucinate references." Update the
line to use "citation/result" singular form and concise wording, and review
other template files to deduplicate and harmonize this rule so the same sentence
is reused rather than duplicated.
```diff
  - Avoid directly quoting citations in the answer.
  - If the meaning is unclear, return the user's original query.
+ - Every footnote citations used in the answer must correspond to a real search results. Do not invent or hallucinate references.
```
🧹 Nitpick
Mirror the grammar fix here as well.
Apply this diff:

```diff
- - Every footnote citations used in the answer must correspond to a real search results. Do not invent or hallucinate references.
+ - Every footnote citation used in the answer must correspond to a real search result; do not invent or hallucinate references.
```

📝 Committable suggestion

```suggestion
- Every footnote citation used in the answer must correspond to a real search result; do not invent or hallucinate references.
```
🤖 Prompt for AI Agents
In src/main/presenter/threadPresenter/const.ts around line 58, mirror the
grammar fix by updating the sentence to correct singular/plural and article
usage: change "Every footnote citations used in the answer must correspond to a
real search results. Do not invent or hallucinate references." to "Every
footnote citation used in the answer must correspond to a real search result. Do
not invent or hallucinate references." Ensure spacing and punctuation match
surrounding lines.
```ts
contextLength: 12800,
maxTokens: 8192,
```
🧩 Analysis chain
Verify DeepSeek limits and avoid hardcoded numeric drift.
Confirm 12,800 contextLength and 8,192 maxTokens match the selected DeepSeek model’s actual limits and your new provider defaults; otherwise requests may fail at runtime. Prefer sourcing these from the central model metadata used in the migration rather than hardcoding here, or clamp at apply-time.
Run to verify model defaults/usages and spot inconsistencies:
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
echo "== DeepSeek model definitions and limits =="
rg -n -C3 -e 'deepseek' -e 'DeepSeek' --type ts
echo
echo "== Occurrences of contextLength/maxTokens defaults =="
rg -n -C2 -e '\bcontextLength\s*:\s*\d+' -e '\bmaxTokens\s*:\s*\d+' --type ts | sed 's/^/  /'
echo
echo "== setModelConfig and source propagation =="
rg -n -C2 -e '\bsetModelConfig\s*\(' --type ts
echo
echo "== Potential single-source-of-truth for defaults (metadata/helpers) =="
rg -n -C3 -e 'ModelConfig(Helper|Metadata|Source)' -e 'provider.*defaults' --type ts
```

Length of output: 270626
Fix DeepSeek contextLength typo (12800 → 128000) and remove hardcoded limits
- Problem: src/main/presenter/threadPresenter/const.ts (lines 115–116) has contextLength: 12800, maxTokens: 8192; central defaults for deepseek-chat are maxTokens: 8192 and contextLength: 128000 in src/main/presenter/configPresenter/modelDefaultSettings.ts and providerModelSettings.ts.
- Action: replace the hardcoded values with the central model default (use the modelConfig/providerModelSettings helper) or clamp at apply-time to the model's limit.
🤖 Prompt for AI Agents
In src/main/presenter/threadPresenter/const.ts around lines 115–116, the
DeepSeek defaults are incorrect/hardcoded (contextLength set to 12800 and
maxTokens 8192) while central defaults are contextLength 128000 and maxTokens
8192; update this by removing the hardcoded values and instead fetch the central
model defaults via the modelConfig/providerModelSettings helper (or call the
providerModelSettings helper) when constructing thread defaults, or
alternatively enforce a clamp at apply-time so contextLength is set to
Math.min(hardValue, providerModelSettings.contextLength) (and similarly for
maxTokens) to ensure the threadPresenter uses the central model limit of 128000.
```ts
setModelConfig(
  modelId: string,
  providerId: string,
  config: ModelConfig,
  options?: {
    source?: ModelConfigSource
  }
): void
```
🧩 Analysis chain
Signature update: setModelConfig now accepts options.source.
Looks right. Ensure all call sites that auto‑apply defaults pass { source: 'provider' }.
Run to spot missed updates:
🏁 Script executed:

```shell
#!/bin/bash
# Calls without an options arg (likely misses)
rg -nP --type=ts -C2 '\bsetModelConfig\s*\(\s*[^,]+,\s*[^,]+,\s*[^,)]+\)' | sed -E 's/^/:NO_OPTIONS:/'
# Calls with options.source (for reference)
rg -nP --type=ts "\bsetModelConfig\s*\([^)]*source\s*:\s*['\"]provider['\"]"
```

Length of output: 4599
Fix: pass { source: 'provider' } to all setModelConfig call sites that rely on provider defaults.
Multiple call sites still call setModelConfig without the new options arg — update them to pass { source: 'provider' }.
Affected locations (examples):
- test/main/presenter/modelConfig.test.ts — calls at lines 99, 109, 138–139, 215, 238, 289, 323–324, 400.
- src/renderer/src/stores/settings.ts — line 1680.
🤖 Prompt for AI Agents
In src/shared/types/presenters/legacy.presenters.d.ts around lines 458 to 465,
the setModelConfig signature now accepts an options arg and callers that rely on
provider defaults must pass { source: 'provider' }; update every call site that
currently invokes setModelConfig(modelId, providerId, config) (notably
test/main/presenter/modelConfig.test.ts at lines 99, 109, 138–139, 215, 238,
289, 323–324, 400 and src/renderer/src/stores/settings.ts at line 1680) to
include the fourth argument options = { source: 'provider' }, ensuring the
correct type is used and imports/types remain valid.
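A minimal sketch of how the updated signature distinguishes the two kinds of call sites. The shapes below are simplified stand-ins for the real shared types, and the assumption that an omitted `options` defaults to a user-defined write (rather than `'provider'`) is illustrative:

```typescript
// Hypothetical minimal shapes mirroring the updated signature; the real
// types live in the shared presenter type definitions.
type ModelConfigSource = 'user' | 'provider'
interface ModelConfig {
  maxTokens: number
  contextLength: number
}

// Recorded calls, so the sketch can show where the source flag lands.
const calls: Array<{ key: string; source: ModelConfigSource }> = []

function setModelConfig(
  modelId: string,
  providerId: string,
  config: ModelConfig,
  options?: { source?: ModelConfigSource }
): void {
  // Assumption: omitting options marks the write as user-defined, while
  // providers applying defaults tag theirs explicitly.
  calls.push({ key: `${providerId}/${modelId}`, source: options?.source ?? 'user' })
}

// Provider-driven default application tags the write explicitly:
setModelConfig(
  'deepseek-chat',
  'deepseek',
  { maxTokens: 8192, contextLength: 128000 },
  { source: 'provider' }
)
// Renderer/user-driven change omits options:
setModelConfig('deepseek-chat', 'deepseek', { maxTokens: 4096, contextLength: 128000 })
```

Call sites that apply provider defaults but omit the fourth argument would silently be treated as user edits, which is exactly the drift the review script above is hunting for.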
Summary by CodeRabbit
New Features
Chores