fix(gemma): resolve 404 errors and improve port resolution #25340

Samee24 merged 2 commits into sameez/gemma-auto-setup from
Conversation
- Updated `LocalLiteRtLmClient` to use `apiVersion: 'v1beta'` and `vertexai: false` to ensure compatibility with the local LiteRT-LM server structure.
- Updated gemma CLI commands (`start`, `status`, `stop`) to dynamically resolve the port from settings if not provided via flags.
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request improves the robustness of the LiteRT-LM server integration by fixing API version mismatches that caused 404 errors. Additionally, it streamlines the developer experience by allowing the CLI tools to intelligently detect the server port from existing project configurations, reducing the need for manual flag management.
Hi @Abhijit-2592, thank you so much for your contribution to Gemini CLI! We really appreciate the time and effort you've put into this. We're making some updates to our contribution process to improve how we track and review changes. Please take a moment to review our recent discussion post: Improving Our Contribution Process & Introducing New Guidelines. Key Update: Starting January 26, 2026, the Gemini CLI project will require all pull requests to be associated with an existing issue. Any pull requests not linked to an issue by that date will be automatically closed. Thank you for your understanding and for being a part of our community!
Code Review
This pull request updates the Gemma-related CLI commands (`start`, `status`, `stop`) to resolve the server port from configuration settings if not explicitly provided on the command line. It also updates the `LocalLiteRtLmClient` to use the `v1beta` API version and adds corresponding test assertions. Feedback highlights significant code duplication in the port-resolution logic across multiple files and identifies an issue where the `status` command's default Yargs configuration prevents the new settings-based port resolution from taking effect.
- Extracted port and enabled-status resolution into a shared `resolveGemmaConfig` utility in `platform.ts` to reduce duplication.
- Removed the default value for the `--port` option in the `status` command so that the workspace settings act as the single source of truth when the flag is omitted.
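The shared utility described above could look roughly like the following. This is a hypothetical sketch, not the actual code from `platform.ts`: the `Settings` shape is inferred from the settings key quoted in the PR description, and the fallback port of 8080 is an assumption.

```typescript
// Hypothetical sketch of the shared resolveGemmaConfig utility.
// The Settings shape and the default port are assumptions, not the PR's code.

const DEFAULT_GEMMA_PORT = 8080; // assumed fallback; the real default may differ

interface Settings {
  experimental?: {
    gemmaModelRouter?: {
      enabled?: boolean;
      classifier?: { host?: string };
    };
  };
}

interface GemmaConfig {
  enabled: boolean;
  port: number;
}

function resolveGemmaConfig(settings: Settings, flagPort?: number): GemmaConfig {
  const router = settings.experimental?.gemmaModelRouter;
  const enabled = router?.enabled ?? false;
  // An explicit --port flag always wins over the settings file.
  if (flagPort !== undefined) {
    return { enabled, port: flagPort };
  }
  // Otherwise fall back to the port embedded in the classifier host URL.
  let port = DEFAULT_GEMMA_PORT;
  const host = router?.classifier?.host;
  if (host) {
    try {
      const parsed = new URL(host);
      if (parsed.port) port = Number(parsed.port);
    } catch {
      // Malformed host URL: keep the default port.
    }
  }
  return { enabled, port };
}

// Settings act as the single source of truth when the flag is omitted.
const settings: Settings = {
  experimental: {
    gemmaModelRouter: {
      enabled: true,
      classifier: { host: 'http://localhost:1234' },
    },
  },
};
console.log(resolveGemmaConfig(settings).port); // 1234
console.log(resolveGemmaConfig(settings, 9999).port); // 9999
```

With a helper of this shape, each of the `start`, `status`, and `stop` commands can call it once instead of duplicating URL parsing, and removing the Yargs default for `--port` ensures `flagPort` is genuinely `undefined` when the user omits the flag.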
Summary
This PR resolves 404 errors encountered when using the LiteRT-LM server with the Gemini SDK by ensuring the correct API version (`v1beta`) is used. It also improves the user experience by dynamically resolving the server port from project settings in the `gemma` CLI commands.

Details

- Set `apiVersion: 'v1beta'` and `vertexai: false` in the `GoogleGenAI` constructor. The local LiteRT-LM server follows the `v1beta` endpoint structure, while the SDK defaults to `v1`, causing 404s.
- Updated the `start`, `status`, and `stop` commands to read the port from `experimental.gemmaModelRouter.classifier.host` in the settings file if the `--port` flag is not provided.

Related Issues
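For illustration, the client-side fix boils down to the options object passed when constructing the SDK client. The sketch below builds that object without importing the SDK; the field shape mirrors the `@google/genai` options (where `apiVersion` and `baseUrl` live under `httpOptions`), but the exact placement may vary by SDK version, and the port value is just an example:

```typescript
// Sketch of the client options described above. The httpOptions shape follows
// the @google/genai SDK; treat field placement and the port as illustrative.
interface HttpOptions {
  baseUrl?: string;
  apiVersion?: string;
}

interface ClientOptions {
  vertexai: boolean;
  httpOptions?: HttpOptions;
}

function localLiteRtLmOptions(port: number): ClientOptions {
  return {
    vertexai: false, // target the Gemini API surface, not Vertex AI
    httpOptions: {
      baseUrl: `http://localhost:${port}`,
      // The local LiteRT-LM server exposes v1beta routes; the SDK's default
      // of v1 is what produced the 404s this PR fixes.
      apiVersion: 'v1beta',
    },
  };
}

const opts = localLiteRtLmOptions(1234);
console.log(opts.httpOptions?.apiVersion); // v1beta
console.log(opts.httpOptions?.baseUrl); // http://localhost:1234
```

Keeping this in one place means the resolved port from settings (or the `--port` flag) feeds directly into the base URL, so the client and the CLI commands always agree on where the server lives.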
Related to `sameez/gemma-auto-setup`.

How to Validate
- Set `experimental.gemmaModelRouter.classifier.host` in your settings file (e.g. `"http://localhost:1234"`).
- Run `gemini gemma start` and verify it uses the configured port.
- Verify the `LocalLiteRtLmClient` tests pass: `npm test -w @google/gemini-cli-core -- src/core/localLiteRtLmClient.test.ts`.

Pre-Merge Checklist