Conversation

@BowieHe (Contributor) commented Jul 30, 2025

When calling Gemini Flash and Pro, the following exception is occasionally triggered:

c146c8bef8328491174e2f85c64f03f2

According to Google's API documentation, the max output token limit for Flash and Pro is one less than we had configured (65535 rather than 65536).

https://cloud.google.com/vertex-ai/generative-ai/docs/models/gemini/2-5-flash

This PR also fixes the values for gemini-2.5-flash-lite-preview-06-17.
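
For reference, the three affected entries in the defaultModelsSettings array end up roughly as below. This is a sketch: the field shape (maxTokens, contextLength, match) is taken from the diff excerpts quoted later in this thread, and any other per-model properties are omitted.

  // Sketch of the adjusted Gemini entries in modelDefaultSettings.ts
  {
    maxTokens: 65535, // was 65536; Google's docs list 65535 for 2.5 Pro
    contextLength: 1048576,
    match: ['gemini-2.5-pro']
  },
  {
    maxTokens: 65535, // was 65536
    match: ['models/gemini-2.5-flash']
  },
  {
    maxTokens: 65536, // was 64000
    contextLength: 1048576, // was 1000000
    match: ['models/gemini-2.5-flash-lite-preview-06-17']
  },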

Summary by CodeRabbit

  • Chores
    • Updated model settings for Gemini configurations, including adjustments to maximum token limits and context length values. No changes to user interface or functionality.

coderabbitai bot (Contributor) commented Jul 30, 2025

Walkthrough

The update modifies the maxTokens and contextLength values for three Gemini model configurations in the defaultModelsSettings array within a single file. No changes were made to exported entity signatures or code logic; only numeric property values were adjusted.

Changes

Cohort / File(s): Gemini Model Settings (src/main/presenter/configPresenter/modelDefaultSettings.ts)
Change Summary: Adjusted maxTokens for 'gemini-2.5-pro' and 'models/gemini-2.5-flash' (65536 → 65535); increased maxTokens (64000 → 65536) and contextLength (1000000 → 1048576) for 'models/gemini-2.5-flash-lite-preview-06-17'.

Estimated code review effort

🎯 1 (Trivial) | ⏱️ ~2 minutes

Poem

A tweak to the tokens, a hop and a dash,
Gemini models now ready to flash!
Numbers adjusted, precision in sight,
Rabbits rejoice by the soft monitor light.
With settings aligned and context anew,
The code hops forward—review nearly through! 🐇✨

coderabbitai bot left a comment

Actionable comments posted: 1

🧹 Nitpick comments (1)
src/main/presenter/configPresenter/modelDefaultSettings.ts (1)

367-369: Flash-Lite values now diverge from Pro/Flash – intentional?

maxTokens is raised to 65536 while the two siblings were lowered to 65535. If this is intentional (a different model family), please add a short inline comment citing the source; otherwise align the three for clarity.
Also note contextLength is bumped to 1048576; if that figure comes from new docs, consider extracting it into a typed constant (sketched below) so future API updates don't require hunting through this large literal array.
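
A minimal sketch of that extraction (the constant names are illustrative, not taken from the codebase):

  // Hypothetical named constants so a future doc update touches one line
  // instead of several entries scattered through the literal array.
  const GEMINI_25_CONTEXT_LENGTH = 1_048_576 // 2^20, per the updated docs
  const GEMINI_25_PRO_FLASH_MAX_TOKENS = 65_535
  const GEMINI_25_FLASH_LITE_MAX_TOKENS = 65_536

  // ...then inside defaultModelsSettings:
  // { maxTokens: GEMINI_25_PRO_FLASH_MAX_TOKENS, contextLength: GEMINI_25_CONTEXT_LENGTH, match: ['gemini-2.5-pro'] },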

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between f020d7f and edff920.

📒 Files selected for processing (1)
  • src/main/presenter/configPresenter/modelDefaultSettings.ts (3 hunks)
🔇 Additional comments (1)
src/main/presenter/configPresenter/modelDefaultSettings.ts (1)

355-357: Same off-by-one risk for Gemini 2.5 Flash

For consistency with the comment above, verify that 65535 is indeed the hard limit for models/gemini-2.5-flash. If the API still allows 65536, user prompts that expect the full range will be silently clipped.
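
One defensive pattern for that risk (illustrative only, not code from this repository) is to clamp a requested output budget to the configured model limit and log when clipping happens, so an off-by-one surfaces in the logs instead of failing silently:

  // Hypothetical helper: clamp a requested output budget to the model's
  // configured maxTokens and warn when the request had to be reduced.
  function clampMaxTokens(requested: number, modelLimit: number): number {
    if (requested > modelLimit) {
      console.warn(`[gemini] requested maxTokens ${requested} exceeds model limit ${modelLimit}; clamping`)
      return modelLimit
    }
    return requested
  }

  // clampMaxTokens(65536, 65535) logs a warning and returns 65535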

Comment on lines +343 to 345
maxTokens: 65535,
contextLength: 1048576,
match: ['gemini-2.5-pro'],

💡 Verification agent

🧩 Analysis chain

Double-check the new 65535 limit: documentation appears inconsistent

The previous value (65536) was reduced by one. Google's Vertex AI docs for Gemini 2.5 Pro list both 65535 and 65536 in different places, depending on whether they count the EOS token. Please confirm the SDK you call actually enforces 65535; otherwise this change may re-introduce truncation or a 400 error on larger responses.


🏁 Script executed:

#!/bin/bash
# Scan codebase for runtime overrides that still assume 65536
rg -n --pretty $'65536' | head

Length of output: 2888


Inconsistent maxTokens values – confirm and unify

We updated the default in modelDefaultSettings.ts to 65535, but geminiProvider.ts still uses 65536. Please verify which limit the Vertex AI SDK actually enforces (with or without EOS) and align both places to avoid unexpected truncation or 400 errors.

• src/main/presenter/configPresenter/modelDefaultSettings.ts (lines 343–345)

  maxTokens: 65535,
  contextLength: 1048576,
  match: ['gemini-2.5-pro'],

• src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts (lines 61, 73)

  maxTokens: 65536,

Next steps:

  1. Confirm via the Vertex AI SDK or docs whether the true max is 65535 or 65536 (EOS-inclusive vs exclusive).
  2. Update both the default settings and any hardcoded overrides to the correct value; a shared constant, as sketched below, keeps them in sync.
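
A sketch of step 2, assuming the confirmed limit turns out to be 65535 (the file paths are the ones named above; the export/import shape is illustrative):

  // src/main/presenter/configPresenter/modelDefaultSettings.ts
  // Single source of truth for the Gemini 2.5 Pro/Flash output cap.
  export const GEMINI_25_MAX_OUTPUT_TOKENS = 65535

  // src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
  // import { GEMINI_25_MAX_OUTPUT_TOKENS } from '../../configPresenter/modelDefaultSettings'
  // ...
  // maxTokens: GEMINI_25_MAX_OUTPUT_TOKENS, // replaces the hardcoded 65536
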
🤖 Prompt for AI Agents
In src/main/presenter/configPresenter/modelDefaultSettings.ts lines 343-345 and
src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts lines 61 and
73, the maxTokens values are inconsistent (65535 vs 65536). Verify the exact
maxTokens limit enforced by the Vertex AI SDK, considering whether it includes
the EOS token or not. Then update both files to use the confirmed correct
maxTokens value to ensure consistency and prevent truncation or errors.

@zerob13 zerob13 merged commit 0a68444 into ThinkInAIXYZ:dev Jul 30, 2025
2 checks passed