fix: allow isVisionModel function to read runtime env var VISION_MODELS #5983
JiangYingjin wants to merge 8 commits into ChatGPTNextWeb:main from JiangYingjin:main
Conversation
@JiangYingjin is attempting to deploy a commit to the NextChat Team on Vercel. A member of the Team first needs to authorize it.
Walkthrough

This pull request introduces enhancements to vision model management across various components of the application.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client as Client Application
    participant AccessStore as Access Store
    participant ConfigService as Config Service
    participant ModelUtils as Model Utilities
    Client->>AccessStore: Retrieve vision models
    AccessStore-->>Client: Return vision models
    Client->>ModelUtils: Check if model is vision model
    ModelUtils->>ModelUtils: Combine default and custom vision models
    ModelUtils-->>Client: Return vision model status
```
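The combine step in the diagram can be sketched as follows. This is a hedged approximation: the function name `isVisionModel` and the comma-separated `VISION_MODELS` format come from the PR, but the default model list here is illustrative, not NextChat's actual table.

```typescript
// Illustrative default list; NextChat's real defaults differ.
const DEFAULT_VISION_MODELS = ["gpt-4-vision", "gpt-4o", "gemini-pro-vision"];

// Combine default and custom vision models, as in the diagram's ModelUtils step.
function isVisionModel(model: string, customVisionModels?: string): boolean {
  // customVisionModels is a comma-separated string, e.g. "llava,qwen-vl".
  const custom = (customVisionModels ?? "")
    .split(",")
    .map((m) => m.trim())
    .filter((m) => m.length > 0);
  const all = [...DEFAULT_VISION_MODELS, ...custom];
  return all.some((name) => model.startsWith(name));
}
```

With `VISION_MODELS=llava,qwen-vl` exposed through the access store, `isVisionModel("llava-1.5", "llava,qwen-vl")` would then return true even though `llava` is not in the default list.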
Actionable comments posted: 1
🧹 Nitpick comments (2)
app/client/platforms/openai.ts (1)
197-198: Minor placement nitpick

Declaring `accessStore` a few lines above its usage might reduce clarity slightly. Consider moving these lines closer to where `isVisionModel` is called for enhanced readability.

app/components/chat.tsx (1)

1461-1462: Avoid duplicate store lookups

You're declaring another `customVisionModels = useAccessStore().visionModels;` here, similar to earlier references. Consider consolidating the logic into a single place or passing the variable as a prop to reduce potential confusion and keep the code cleaner.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- app/api/config/route.ts (1 hunks)
- app/client/platforms/anthropic.ts (1 hunks)
- app/client/platforms/google.ts (1 hunks)
- app/client/platforms/openai.ts (2 hunks)
- app/client/platforms/tencent.ts (1 hunks)
- app/components/chat.tsx (5 hunks)
- app/config/server.ts (3 hunks)
- app/store/access.ts (1 hunks)
- app/utils.ts (2 hunks)
- test/vision-model-checker.test.ts (3 hunks)
🧰 Additional context used
🪛 Biome (1.9.4)
test/vision-model-checker.test.ts
[error] 66-66: Avoid the delete operator which can impact performance.
Unsafe fix: Use an undefined assignment instead.
(lint/performance/noDelete)
🔇 Additional comments (25)
app/api/config/route.ts (1)
16-16: New property added for vision models.
The addition of visionModels: serverConfig.visionModels ensures that the vision model configuration is exposed through the DANGER_CONFIG. This is consistent with the newly introduced environment variable in server.ts. No issues detected.
test/vision-model-checker.test.ts (8)
5-5: Good approach to centralize custom vision models in test.
Defining customVisionModels in one place improves clarity and maintainability. Make sure to keep it in sync with any new additions.
31-31: Approved usage of the additional parameter.
Using customVisionModels as the second argument to isVisionModel properly tests custom models. This test is clear and robust.
36-36: Clear negative test case.
Verifying a non-vision model ensures the function correctly excludes them. This is good coverage.
48-48: Non-vision models tested thoroughly.
Similar to line 36, this test ensures correct exclusion handling for each model in the array.
54-54: Usage of runtime environment for vision models.
It's good that you're relying on process.env.VISION_MODELS dynamically, ensuring coverage for real-world scenarios.
55-59: Comprehensive checks for both environment-defined and custom models.
These lines verify that models from both sources are recognized or excluded correctly. Excellent test coverage.
64-64: Correct handling when VISION_MODELS is empty.
This covers a realistic edge case, ensuring the function safely defaults to no environment-defined vision models or relies on custom ones.
67-68: Verifying behavior after removing VISION_MODELS.
These checks confirm that the function behaves correctly when VISION_MODELS is no longer set. Good addition for realism.
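The runtime-read behavior these tests exercise can be sketched like this. A hedged approximation: the helper name `envVisionModels` is hypothetical, but reading `process.env.VISION_MODELS` at call time rather than at module load is exactly the distinction the tests probe.

```typescript
// Read VISION_MODELS at call time, not at import time, so that values set
// after startup (e.g. by Docker) are still picked up.
function envVisionModels(): string[] {
  return (process.env.VISION_MODELS ?? "")
    .split(",")
    .map((m) => m.trim())
    .filter((m) => m.length > 0);
}

process.env.VISION_MODELS = "llava,qwen-vl";
const withEnv = envVisionModels(); // ["llava", "qwen-vl"]

delete process.env.VISION_MODELS;
const withoutEnv = envVisionModels(); // []
```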
app/store/access.ts (1)
126-126: New visionModels property introduced.
Adding visionModels to the default access state keeps the store consistent with the environment-driven configuration. No issues found.
app/config/server.ts (4)
24-24: VISION_MODELS environment variable declared.
This allows the runtime to define or override vision models dynamically, ensuring better configurability of vision features.
127-127: Safe fallback to empty string.
Using ?? "" as a fallback ensures that visionModels remains a string type. This straightforward approach prevents runtime errors if the env variable is undefined.
134-136: Applying logic within the GPT-4 disable block.
Filtering out 4-series models here is consistent. Ensure any future updates to these model prefixes also account for visionModels if needed.
254-254: Exposing visionModels in the config object.
This ensures that downstream consumers (e.g., DANGER_CONFIG) can access the vision model information. Good integration.
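Taken together, the server-side pieces reviewed above form a short pipeline: env var → server config → `/api/config` payload. A minimal sketch; the function names here are simplified stand-ins, not the exact exports in `app/config/server.ts`.

```typescript
// Server config: read the env var with the ?? "" fallback noted above.
function getServerSideConfig() {
  return {
    visionModels: process.env.VISION_MODELS ?? "",
  };
}

// /api/config response body: expose visionModels to the client, which
// stores it in accessStore alongside customModels.
function buildDangerConfig() {
  const serverConfig = getServerSideConfig();
  return {
    visionModels: serverConfig.visionModels,
  };
}
```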
app/client/platforms/tencent.ts (1)
97-101: Looks good: usage of the new parameter
The newly introduced retrieval of accessStore.visionModels and passing it to isVisionModel correctly ensures runtime environment compatibility for vision models. No issues found with concurrency, error handling, or security.
app/client/platforms/google.ts (1)
86-86: Verify accessStore scope
Ensure that accessStore is declared in scope prior to its usage here. As long as your existing code properly defines it, this logic correctly utilizes isVisionModel with the new runtime vision models.
app/client/platforms/anthropic.ts (1)
89-93: Runtime environment reading is consistent
Accessing accessStore.visionModels to determine visionModel aligns with the new parameter approach. The code readability is good, and there are no evident logical or syntax issues.
app/client/platforms/openai.ts (1)
216-219: Correct integration of vision models
Passing accessStore.visionModels to isVisionModel ensures that externally set runtime environment variables are respected. This is a robust approach for supporting custom or user-provided vision models.
app/components/chat.tsx (5)
493-493: Variable type clarification
The customVisionModels variable references useAccessStore().visionModels. Ensure that this store value is consistently typed (e.g., string[] or string) throughout the codebase to avoid type mismatches when passing to other functions.
533-533: Check usage of isVisionModel arguments
Double-check that customVisionModels is a suitable type (string or string array) that matches the updated signature of isVisionModel. If visionModels is actually an array, consider converting it into a string or updating isVisionModel to accept an array parameter.
1466-1466: Check null/undefined safeguards
Before calling isVisionModel with customVisionModels, confirm that customVisionModels is defined to avoid any unexpected errors when joining strings inside isVisionModel.
1503-1503: Dependency array update
Adding customVisionModels to the dependency array ensures the callback re-runs when visionModels changes, which looks correct.
1551-1551: Minor comment addition
This line merely adds or updates a comment for shortcut keys without functional changes.
app/utils.ts (2)
10-10: Import usage confirmed
Importing getModelProvider here makes sense for checking model types. No issues found.
257-265: Validate string-manipulation logic for isVisionModel
The function signature now accepts customVisionModels as a string, then merges it with clientConfig?.visionModels using join, split, etc. This approach may lead to unexpected behavior if either variable is an array or if clientConfig?.visionModels is undefined. Consider:
- Double-check that both values are indeed strings.
- Implement extra checks to handle edge cases (e.g. empty strings, null, arrays).
- Potentially change the function parameter to `string[] | string` for better clarity.
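One way to implement the hardening suggested above is to normalize the input first. This is a hypothetical sketch, not the PR's code: `normalizeVisionModels` is an invented helper that tolerates `undefined`, empty strings, and arrays.

```typescript
// Accept string, string[], or undefined and always return a clean array.
function normalizeVisionModels(input?: string | string[]): string[] {
  const raw = Array.isArray(input) ? input.join(",") : input ?? "";
  return raw
    .split(",")
    .map((m) => m.trim())
    .filter((m) => m.length > 0);
}
```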
```diff
-expect(isVisionModel("unrelated-model")).toBe(false);
+expect(isVisionModel("unrelated-model", customVisionModels)).toBe(false);
```

```ts
delete process.env.VISION_MODELS;
```
🛠️ Refactor suggestion
Avoid using the delete operator.
Deleting environment variables can have performance overhead and may lead to unpredictable effects. Consider assigning process.env.VISION_MODELS = undefined instead of using delete.
```diff
-delete process.env.VISION_MODELS;
+process.env.VISION_MODELS = undefined;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```suggestion
process.env.VISION_MODELS = undefined;
```
🧰 Tools
🪛 Biome (1.9.4)
[error] 66-66: Avoid the delete operator which can impact performance.
Unsafe fix: Use an undefined assignment instead.
(lint/performance/noDelete)
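A caveat on the suggestion above, worth verifying before applying it: in Node.js, `process.env` coerces assigned values to strings, so assigning `undefined` stores the literal string `"undefined"` instead of unsetting the variable. The `delete` the test uses is the only way to actually remove it:

```typescript
// process.env coerces assignments to strings in Node.js.
process.env.VISION_MODELS = "llava";
process.env.VISION_MODELS = undefined;
const afterAssign = process.env.VISION_MODELS; // the string "undefined"

delete process.env.VISION_MODELS;
const afterDelete = process.env.VISION_MODELS; // actually undefined
```

So the lint's "unsafe fix" would change the test's behavior: the code under test would see a truthy `VISION_MODELS` containing the text "undefined".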
Actionable comments posted: 0
🧹 Nitpick comments (2)
app/store/config.ts (2)
48-49: Consider maintainability implications of hardcoded configuration

The conditional border detection based on app context has been replaced with a hardcoded value. While this ensures consistent behavior, it removes the dynamic adaptation to the runtime environment.

```diff
-// tightBorder: !!config?.isApp,
-tightBorder: true,
+tightBorder: !!config?.isApp || true, // Always true but preserves context
```

52-53: Hardcoded value replaces constant reference

Replacing a constant with a hardcoded value reduces maintainability. If `DEFAULT_SIDEBAR_WIDTH` is used elsewhere, this creates inconsistency across the codebase.

```diff
-// sidebarWidth: DEFAULT_SIDEBAR_WIDTH,
-sidebarWidth: 100,
+// Define constant at the top of the file if it's not imported anymore
+sidebarWidth: 100, // Consider reinstating constant for maintainability
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- app/store/config.ts (2 hunks)
🔇 Additional comments (2)
app/store/config.ts (2)
72-73:
✅ Verification successful
Verify token limit compatibility with vision models
The significant increase in `max_tokens` to 8000 appears to support the PR's goal of enhancing vision model capabilities. Higher token limits are often necessary for vision models to process image data alongside text.
🏁 Script executed:

```shell
#!/bin/bash
# Check if any vision models in the codebase have specific token requirements
rg -A 2 -B 2 "max_tokens.*vision" --type ts
```

Length of output: 353
Token Limit and Vision Model Integration Confirmed
The updated configuration in `app/store/config.ts` setting `max_tokens: 8000` is appropriately compatible with vision models. As verified by the grep output in `app/client/platforms/openai.ts`, the vision model logic computes:

```ts
requestPayload["max_tokens"] = Math.max(modelConfig.max_tokens, 4000);
```

With `modelConfig.max_tokens` at 8000, this correctly provides the enhanced token limit while still ensuring a minimum of 4000 tokens if needed.
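The floor logic quoted from `app/client/platforms/openai.ts` reduces to a one-line check, sketched here with the review's numbers:

```typescript
// Vision requests use the configured max_tokens but never less than 4000.
function effectiveMaxTokens(configuredMaxTokens: number): number {
  return Math.max(configuredMaxTokens, 4000);
}
```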
76-77:
❓ Verification inconclusive
Performance impact of increased history and compression threshold
The significant increases to `historyMessageCount` (from 4 to 16) and `compressMessageLengthThreshold` (from 1000 to 1000000) could impact performance and API costs. These changes appear to support vision model functionality by allowing larger context windows and minimal compression of potentially image-heavy messages.
🌐 Web query: What are the recommended token limits and context sizes for vision models?

💡 Result:
Vision language models like GPT-4 Vision and Gemini Pro Vision have unique token limits and context window considerations compared to standard LLMs. Here's a breakdown of key recommendations and limitations:
Core Context Window Capacities
GPT-4 Vision (gpt-4-vision-preview)
- Input tokens: Up to 128K tokens in Turbo variants [2][8]
- Output tokens: Capped at 4,096 tokens per response [10]
- Practical image limits: ≤10 images recommended due to per-image token compression (70 tokens/image metadata) [1]
- Rate limits: 30,000 tokens/minute for GPT-4 Turbo with Vision [7]
Gemini 1.5 Pro Vision
- Input tokens: Up to 1 million tokens in experimental configurations [9]
- Rate limits: 60 requests/minute (RPM) baseline [4]
Llama 3.2 11B Vision Instruct
- Documented capacity: 128K tokens
- Practical limitations: Users report hard caps at 8,192 tokens due to deployment constraints [6]
Critical Considerations
Image processing costs:
Token counts for images scale with resolution: low-res (85 tokens) → HD (~2,000+ tokens per image). A single 8517-token prompt with 100 images caused model refusal in testing [1].
Output constraints:
Even models with massive input windows (e.g., 128K tokens) restrict outputs to 4K tokens, requiring iterative workflows for long content generation [10].

Architectural tradeoffs:
Larger context windows (e.g., IBM Granite's 128K tokens) use techniques like ring attention and relative positional encoding to manage computational loads [8], but still incur higher API costs [5].

Best Practices
For document analysis:
- Use GPT-4 Turbo with Vision (128K context) for ≤50 pages of text+images
- Implement chunking for larger documents [3]
For real-time applications:
- Stay under 10 images/prompt with GPT-4 Vision
- Monitor token usage: `prompt_tokens + (n_images * avg_tokens_per_image)` [1]

When comparing models:

| Model | Max Input Tokens | Output Limit | Rate Limits |
|---|---|---|---|
| GPT-4 Vision | 128K | 4K | 30K/min [7] |
| Gemini 1.5 Pro | 1M* | N/A | 60 RPM [4] |
| Magic.dev LTM-2 | 100M | N/A | Unspecified [9] |

*Experimental availability only
Developers should validate advertised context windows against API responses, as implementation-specific constraints (e.g., Llama's 8K hard limit despite 128K claims [6]) often reduce practical usability. For vision-heavy workflows, prioritize models with explicit image token documentation like GPT-4 Vision over generic "large context" claims.
Citations:
- 1: https://community.openai.com/t/gpt-4-vision-maximum-amount-of-images/573110
- 2: https://swimm.io/learn/large-language-models/llm-context-windows-basics-examples-and-prompting-best-practices
- 3: https://dataplatform.cloud.ibm.com/docs/content/wsj/analyze-data/fm-context-length.html?context=wx
- 4: https://www.googlecloudcommunity.com/gc/AI-ML/Current-Rate-Limits-on-Gemini-Pro-Vision/m-p/700508
- 5: https://datanorth.ai/blog/context-length
- 6: https://learn.microsoft.com/en-us/answers/questions/2150702/documentation-about-llama-3-2-11b-vision-instruct
- 7: https://learn.microsoft.com/en-us/answers/questions/1515787/difference-between-tokens-per-minute-rate-limit-(t
- 8: https://research.ibm.com/blog/larger-context-window
- 9: https://codingscape.com/blog/llms-with-largest-context-windows
- 10: openai/openai-python#687
Performance Impact of Enhanced Configuration for Vision Models
The updated configuration in `app/store/config.ts` (lines 76-77) increases:

- `historyMessageCount` from 4 to 16, and
- `compressMessageLengthThreshold` from 1,000 to 1,000,000.

These adjustments appear to be aimed at supporting vision model workflows, where larger context windows are often necessary due to the token overhead associated with image data (e.g., GPT-4 Vision can handle up to 128K tokens). However, such increases can also impact performance and API costs. Please ensure that these expanded parameters are validated through performance and cost testing in your specific use case.
Actionable comments posted: 0
🧹 Nitpick comments (1)
app/components/chat.tsx (1)
1414-1439: Added support for 'code' parameter in settings URL command

This unrelated but useful change allows users to specify an access code through URL parameters.

```diff
-if (payload.key || payload.url) {
+if (payload.key || payload.url || payload.code) {
```

While this change works, consider including it in a separate PR since it's not directly related to the vision model fix.
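The suggested condition can be sketched in isolation; the payload shape here is an assumption based on the fields named in the diff:

```typescript
type SettingsPayload = { key?: string; url?: string; code?: string };

// Apply settings when any of the recognized fields is present,
// including the newly supported access code.
function shouldApplySettings(payload: SettingsPayload): boolean {
  return Boolean(payload.key || payload.url || payload.code);
}
```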
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- app/components/chat.tsx (7 hunks)
🔇 Additional comments (6)
app/components/chat.tsx (6)
493-493: Added customVisionModels to enhance vision model detection

This addition correctly fetches vision models from the access store, which will allow the `isVisionModel` function to check models from runtime environment variables.
533-533: Updated isVisionModel call with customVisionModels parameter

This change implements the fix to allow the `isVisionModel` function to check against both predefined and runtime-defined vision models.
1466-1466: Added customVisionModels to _Chat component

This correctly adds the vision models from the access store to make them available within the component scope.

1471-1471: Updated isVisionModel call in handlePaste function

The function now correctly checks if the current model is a vision model by looking at both predefined and runtime-defined vision models.
1508-1508: Updated useCallback dependency array

Good practice to include `customVisionModels` in the dependency array to ensure the callback is refreshed when the vision models change.
1556-1556: Translated comment from Chinese to English

The comment "快捷键" has been translated to "shortcut keys", improving code readability for English speakers.
Actionable comments posted: 0
🔭 Outside diff range comments (1)
app/components/chat.tsx (1)
534-558: 🛠️ Refactor suggestion

Missing dependency in useEffect.

The effect uses `customVisionModels` but doesn't include it in the dependency array. This could lead to stale data if the vision models change but the effect doesn't re-run.

```diff
-}, [chatStore, currentModel, models, session]);
+}, [chatStore, currentModel, customVisionModels, models, session]);
```
🧹 Nitpick comments (2)
app/components/chat.tsx (2)
1473-1473: Avoid redundant declaration of customVisionModels.

This variable is already declared at line 494. Consider refactoring to avoid duplication, such as lifting the declaration to the component level.

```diff
-const customVisionModels = useAccessStore().visionModels;
```

Since this variable is already declared at the component level, you can simply reference it here without redeclaring it.
1401-1446: Enhanced settings command to support WebDAV synchronization.

The command now handles authentication credentials for WebDAV synchronization alongside the access code. This enables proper synchronization of configurations including vision models.

Consider extracting the settings command logic into a separate function to improve readability, especially since it now handles multiple different concerns (access code and WebDAV credentials).
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- app/components/chat.tsx (8 hunks)
- nextchat.json (1 hunks)
✅ Files skipped from review due to trivial changes (1)
- nextchat.json
🔇 Additional comments (5)
app/components/chat.tsx (5)
110-110: New import added for WebDAV sync functionality.

The `useSyncStore` import is appropriately added to support the new WebDAV sync functionality in the updated settings command.
494-494: Correctly retrieves custom vision models from access store.

This addition allows the component to access the vision models stored in the access store, including those defined in runtime environment variables.

1478-1478: Fixed isVisionModel function call in handlePaste.

This correctly updates the image paste handling to check against custom vision models as well.

1515-1515: Correctly updated dependency array.

The dependency array correctly includes the new `customVisionModels` dependency, ensuring the callback is regenerated when vision models change.
534-535:
✅ Verification successful
Fixed isVisionModel function call to check runtime environment models.
The updated call now correctly passes custom vision models to the function, allowing it to check against both built-in and runtime-defined vision models.
🏁 Script executed:

```shell
#!/bin/bash
# Check all usages of isVisionModel function to ensure they handle the new parameter
rg -A 1 "isVisionModel\(" --type ts
```

Length of output: 2518
Update Verified: `isVisionModel` now correctly receives runtime vision models.

The usage in `app/components/chat.tsx` (lines 534-535) properly passes `customVisionModels` to the function, and similar updates across the codebase and tests confirm the correct behavior.

Files Updated:

- `app/components/chat.tsx` – updated call ensuring the second parameter is passed.
- Additional usages in platform modules and tests were verified.
Actionable comments posted: 0
🧹 Nitpick comments (3)
app/store/sync.ts (2)
108-145: Implement action-specific synchronization behavior.

The sync method now handles three distinct synchronization scenarios based on the action:
- SYNC - bidirectional sync (merge local and remote states)
- UPLOAD - push local state to remote
- DOWNLOAD - fetch remote state and update local state
This implementation is crucial for ensuring that vision model configurations can be properly synchronized between runtime and build-time environments.
However, there's an inefficiency in the SYNC action implementation.
The code unnecessarily makes a second API call to get the remote state when parsing. Refactor to reuse the already fetched remote state:

```diff
-const parsedRemoteState = JSON.parse(
-  await client.get(config.username),
-) as AppState;
+const parsedRemoteState = JSON.parse(remoteState) as AppState;
```
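The fix boils down to fetching once and parsing the value you already hold. A simplified synchronous sketch (the real `client.get` is async; `fetchRemote` here is a hypothetical stand-in):

```typescript
// Fetch the remote state once and parse the held string, instead of
// issuing a second GET just to parse.
function syncOnce(
  fetchRemote: (username: string) => string,
  username: string,
): unknown {
  const remoteState = fetchRemote(username); // single call
  const parsedRemoteState = JSON.parse(remoteState); // reuse, no refetch
  return parsedRemoteState;
}
```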
158-166: Implement account validation to prevent sync errors.

This new method checks if the necessary credentials exist before attempting to synchronize, improving error handling and user experience.
There's a commented console.log statement that should be removed.
Remove the commented debug statement on line 162:

```diff
-// console.log("[Sync] hasAccount", !!(provider === ProviderType.WebDAV ? config.username && config.password : config.username && config.apiKey));
```

app/components/chat.tsx (1)
1402-1447: Updated settings command to handle authentication credentials

The settings command has been updated to process user code, username, and password for WebDAV synchronization, which enhances the application's sync capabilities. This approach properly handles application configuration through fast links.
One observation: there's a conditional sync call at the end, but only if both username and password are provided. Consider adding more granular feedback to users if only partial credentials are supplied.

```diff
 if (payload.username && payload.password) {
   syncStore.sync();
+} else if (payload.username || payload.password) {
+  showToast(Locale.Settings.IncompleteCredentials);
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- app/client/platforms/google.ts (1 hunks)
- app/components/chat.tsx (9 hunks)
- app/components/home.tsx (2 hunks)
- app/components/settings.tsx (2 hunks)
- app/store/chat.ts (6 hunks)
- app/store/sync.ts (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- app/client/platforms/google.ts
🔇 Additional comments (26)
app/components/home.tsx (2)
31-31: Integrate sync functionality with vision models.

This import statement is added to support the new synchronization functionality that will help address the vision model environment variable issue.
243-244:
✅ Verification successful
Download runtime configuration to ensure vision models are properly identified.
This newly added code explicitly triggers the download of configuration data from remote storage when the Home component is rendered. This is crucial for ensuring that vision models defined in runtime environment variables (like `VISION_MODELS`) are properly loaded and accessible to the application after initial rendering.

Let's confirm this download functionality works as expected by checking the sync store implementation:
🏁 Script executed:

```shell
#!/bin/bash
# Check how the download function is implemented and what data it retrieves
# Look for the download method implementation
rg -A 5 "async download\(\)" --type ts
```

Length of output: 286
Download functionality verified – the download method correctly triggers configuration synchronization.
The executed script confirmed that the `download()` method (in `app/store/sync.ts`) calls `this.sync(SyncAction.DOWNLOAD)` as expected. This ensures that the runtime configuration (including vision models specified via environment variables) is properly loaded when the Home component renders.

app/components/settings.tsx (2)
531-560: Add UI controls for manual vision model configuration sync.

These new UI components provide explicit upload and download buttons, giving users direct control over synchronizing vision model configurations. This is especially important when using Docker, as mentioned in the PR objectives, where vision models might be defined outside of build-time environment variables.
The implementation includes proper error handling with try/catch blocks and user feedback through toast notifications.
426-429: Ensure configuration is synchronized when navigating away from settings.

This modification ensures that any configuration changes are synchronized when the user navigates back to the home page. This is important for ensuring that vision model configurations are properly propagated throughout the application.
app/store/chat.ts (7)
32-32: Import sync store to enable vision model configuration synchronization.

This import is necessary to support the synchronization of vision model configurations across the application.

231-231: Synchronize state after forking a session.

This ensures that vision model configurations are properly synchronized after creating a new forked chat session, which might use vision capabilities.

240-240: Synchronize state after clearing sessions.

This ensures that vision model configurations remain consistent even after clearing all chat sessions.

273-273: Synchronize state after reordering sessions.

This ensures that vision model configurations are properly synchronized after moving a session in the list, maintaining consistent behavior.

337-337: Synchronize state after deleting a session.

This ensures that vision model configurations are properly maintained even after removing chat sessions.

446-446: Synchronize state after completing a chat interaction.

This is particularly important for vision model interactions as it ensures that any state changes during a chat conversation (especially those involving image processing) are properly synchronized.

741-744: Synchronize state after updating memory prompt.

This ensures that memory prompts, which might contain references to vision-processed content, are properly synchronized. The memory prompt is updated after summarization completes.
app/store/sync.ts (5)
25-29: Add SyncAction enum to support different synchronization modes.

The new SyncAction enum provides clear semantics for different sync operations:
- SYNC: Bidirectional synchronization (merge local and remote)
- UPLOAD: One-way sync from local to remote
- DOWNLOAD: One-way sync from remote to local
This is essential for the proper handling of vision model configurations across different environments.
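The three flows can be sketched with a toy in-memory state. A hedged illustration: the enum mirrors the review's description, but the merge rule (remote wins) is an assumption rather than NextChat's actual conflict policy.

```typescript
enum SyncAction {
  SYNC = "sync",
  UPLOAD = "upload",
  DOWNLOAD = "download",
}

type AppState = Record<string, unknown>;

function applySync(
  action: SyncAction,
  local: AppState,
  remote: AppState,
): { local: AppState; remote: AppState } {
  switch (action) {
    case SyncAction.UPLOAD:
      // One-way: push local state to remote.
      return { local, remote: { ...local } };
    case SyncAction.DOWNLOAD:
      // One-way: replace local state with remote.
      return { local: { ...remote }, remote };
    case SyncAction.SYNC: {
      // Bidirectional merge; remote wins on conflicts (assumption).
      const merged = { ...local, ...remote };
      return { local: merged, remote: merged };
    }
    default:
      throw new Error("unknown sync action");
  }
}
```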
37-37: Set default WebDAV endpoint for easier configuration.

A reasonable default endpoint is now provided, which makes it easier for users to configure WebDAV synchronization for vision model configurations.

97-101: Add account validation check before synchronization attempts.

This enhancement prevents unnecessary synchronization attempts when no account is configured, improving efficiency and reducing potential errors.

150-156: Add convenience methods for upload and download operations.

These new methods provide a simpler interface for performing specific sync operations, which is useful for components that only need to perform one-way synchronization.

142-143: Properly parse remote state before applying.

This code correctly parses the JSON string into an AppState object before applying it to the local state, ensuring type safety and proper data handling.
app/components/chat.tsx (10)
110-110: Added useSyncStore import to support sync operations

This addition is appropriate for the subsequent uses of syncStore in the file.

494-494: Added access to custom vision models from the store

This change properly accesses the vision models from the access store, which is in line with the PR objective of allowing the isVisionModel function to read runtime environment variables.

534-535: Updated isVisionModel function call to include custom vision models

This modification ensures vision models defined outside of build-time environment variables are correctly identified, addressing the core issue mentioned in the PR objectives.
1170-1170: Added synchronization after message deletion

The addition of `syncStore.upload()` ensures data consistency by synchronizing changes after a message is deleted.
1474-1474: Moved customVisionModels to handlePaste scope

This change appropriately makes the customVisionModels available in the handlePaste function context.

1479-1479: Updated isVisionModel check in handlePaste function

This change ensures the paste handler correctly identifies vision models defined in runtime environment variables, in line with the PR's main objective.

1516-1516: Added customVisionModels to dependency array

This is a critical fix to ensure the handlePaste callback function is properly regenerated when customVisionModels changes, preventing stale closures.

1564-1564: Improved comment localization

The comment has been updated to include both Chinese and English for better international developer collaboration.

952-952: Added syncStore reference

This properly initializes the syncStore hook which is used later for synchronization operations.
1516-1516:
❓ Verification inconclusive
Ensure all cases of isVisionModel calls are updated
All instances of isVisionModel have been updated to include the customVisionModels parameter, which is good for consistency.
One concern is that if there are other calls to isVisionModel elsewhere in the codebase, they would also need to be updated to maintain consistent behavior.
🏁 Script executed:

```shell
#!/bin/bash
# Description: Find other locations where isVisionModel is called to make sure all instances are updated

# Search for isVisionModel function calls across the codebase
echo "Searching for all isVisionModel function calls:"
rg "isVisionModel\(" --type ts --type tsx
```

Length of output: 176
Below is an updated shell script that explicitly targets both `.ts` and `.tsx` files. This should help verify whether any other instances of `isVisionModel` remain in the codebase.

Please run this updated script and review its output. If additional instances of `isVisionModel` are found outside the already updated location in `app/components/chat.tsx`, ensure that those calls are updated to include the `customVisionModels` parameter for consistency.
🏁 Script executed:

```shell
#!/bin/bash
# Re-run the search for isVisionModel usage by explicitly including .ts and .tsx files
echo "Searching for all isVisionModel function calls in .ts and .tsx files:"
rg -g "*.ts" -g "*.tsx" "isVisionModel\("
```

Length of output: 2061
Please Verify All Instances of isVisionModel
The updated change in `app/components/chat.tsx` properly passes the `customVisionModels` parameter. However, the search revealed additional occurrences of `isVisionModel` in these files:
- app/client/platforms/anthropic.ts
- app/client/platforms/google.ts
- app/client/platforms/openai.ts
- app/client/platforms/tencent.ts
- test/vision-model-checker.test.ts (and the utility definition in app/utils.ts)
Please confirm that each of these call sites is updated to include the `customVisionModels` parameter. If any instance still omits this parameter, update it to ensure consistent behavior across the codebase.
💻 Change Type
🔀 Description of Change
Problem: Before the fix, the `isVisionModel` function read `VISION_MODELS` from the `getClientConfig` function. However, `getClientConfig` was unable to read `VISION_MODELS` from the program's runtime environment variables. This issue caused vision models outside the predefined and build-time environment variables to be evaluated as `false` by the `isVisionModel` function, resulting in them losing their vision capabilities. Specifically, the `VISION_MODELS` environment variable passed by users in Docker was not taking effect.

Fix Logic: The `VISION_MODELS` variable was added to `ProcessEnv` in `app/config/server.ts`. This allows the client to read it via `/api/config` and store it in `accessStore` along with other variables like `customModels`. Since the `isVisionModel` function in `app/utils.ts` cannot directly access `accessStore`, a new parameter, `customVisionModels`, was added to the `isVisionModel` function. When calling `isVisionModel`, `accessStore.visionModels` is passed to the function via the new `customVisionModels` parameter. This enables the `isVisionModel` function to read the `VISION_MODELS` runtime environment variable.

Fix Result: After the fix, the `isVisionModel` function can read `VISION_MODELS` not only from `getClientConfig` but also from the program's runtime environment variables, resolving the problem described above.

📝 Additional Information

The test unit `test/vision-model-checker.test.ts` was adjusted. Custom vision models were passed in through the new `customVisionModels` parameter, and the tests passed.

Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Tests
Configuration Changes