
Conversation

@shernshiou

The latest lmcache/lmstack-router Docker image (lmcache/lmstack-router:0.1.9.dev4-ge1447afd3.d20251215) contains several CVEs, including CVEs belonging to vllm v0.11.0 and an older version of starlette. Updating vllm to v0.12.0 reduces the CVEs and also updates starlette to a version without known CVEs.


  • Make sure the code changes pass the pre-commit checks.
  • Sign-off your commit by using -s when doing git commit
  • Try to classify PRs for easy understanding of the type of changes, such as [Bugfix], [Feat], and [CI].
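The sign-off item in the checklist above can be demonstrated in a throwaway repository; the committer name, email, and commit message below are illustrative, not from this PR. `git commit -s` appends a `Signed-off-by:` trailer taken from the committer identity:

```shell
# Demo in a temporary repo so no real repository is touched.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
# -s / --signoff adds the "Signed-off-by:" trailer to the commit message.
git -c user.name="Jane Dev" -c user.email="jane@example.com" \
    commit --allow-empty -q -s -m "[Bugfix] example change"
git log -1 --format=%B
```

The last line prints the commit message with the generated `Signed-off-by: Jane Dev <jane@example.com>` trailer appended.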

@gemini-code-assist
Contributor

Summary of Changes

Hello @shernshiou, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on a critical build update for the router component, primarily by upgrading several key Python dependencies. The main objective is to address and reduce existing Common Vulnerabilities and Exposures (CVEs) found in the current lmcache/lmstack-router docker image, ensuring a more secure and robust application environment.

Highlights

  • Dependency Updates: Updated vllm from 0.11.0 to 0.12.0, fastapi from 0.115.8 to 0.124.4, and lmcache from 0.3.9post2 to 0.3.11 across pyproject.toml and src/vllm_router/requirements.txt.
  • Security Enhancements: These dependency updates are crucial for mitigating known CVEs, particularly those associated with older versions of vllm and starlette (a dependency of fastapi), thereby enhancing the security posture of the lmcache/lmstack-router docker image.
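The dependency bumps listed above would typically appear as pin changes in both files; the excerpt below is a hypothetical sketch, since the actual contents of pyproject.toml and src/vllm_router/requirements.txt are not shown in this thread:

```toml
# pyproject.toml (hypothetical excerpt; exact table layout may differ)
[project]
dependencies = [
    "vllm==0.12.0",      # was 0.11.0
    "fastapi==0.124.4",  # was 0.115.8; pulls in a starlette release without the known CVE
    "lmcache==0.3.11",   # was 0.3.9post2
]
```

The same pins would be mirrored in requirements.txt so that both install paths resolve to the patched versions.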
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request updates several dependencies, including vllm, lmcache, and fastapi, to address security vulnerabilities as described. The changes are correct and consistently applied across the relevant files. I have one suggestion to improve dependency management consistency between pyproject.toml and requirements.txt, which I've detailed in a review comment.

@zerofishnoodles
Collaborator

Hi, can you revert the change to the uv lock? There are too many changed lines in it.

@shernshiou
Author

Hi, can you revert the change to the uv lock? There are too many changed lines in it.

Hi @zerofishnoodles
Yes, I can, but can you help me understand why? Shouldn't uv.lock be consistent with pyproject.toml?

@shernshiou shernshiou force-pushed the feat/update_vllm_v0_12_0 branch 4 times, most recently from 2eef67b to b0ac576 Compare December 18, 2025 20:27
@shernshiou
Author

@zerofishnoodles I have reset the uv.lock

@bcdonadio
Contributor

I believe a production-stack release should follow this update as well, right?

@zerofishnoodles
Collaborator

Hi, can you revert the change to the uv lock? There are too many changed lines in it.

Hi @zerofishnoodles Yes, I can but can you help me understand why? Shouldn't uv.lock be consistent with pyproject.toml?

Yes, it should be. It's just that we don't want too many changes in one PR, and pyproject.toml itself should be enough for the venv.
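Reverting just the lockfile while keeping the pyproject.toml changes is a one-command operation. The snippet below simulates it in a throwaway repository; the branch name, file contents, and committer identity are illustrative, not taken from this PR:

```shell
# Simulate "restore only uv.lock from main" in a temporary repo.
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main .
git -c user.name=a -c user.email=a@example.com commit --allow-empty -qm "init"
echo "lock@main" > uv.lock
git add uv.lock
git -c user.name=a -c user.email=a@example.com commit -qm "add lockfile"
# A feature branch rewrites the lockfile with a large diff.
git switch -qc feature
echo "lock@feature-with-huge-diff" > uv.lock
# Restore only uv.lock from main; other changes on the branch are untouched.
git checkout -q main -- uv.lock
cat uv.lock
```

After the checkout, `uv.lock` matches main again while everything else on the branch (here, nothing) stays as committed, which is what dropping the lockfile churn from a PR amounts to.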

@zerofishnoodles
Collaborator

I believe a production-stack release should follow this update as well, right?

Yes, it should be.

@shernshiou shernshiou force-pushed the feat/update_vllm_v0_12_0 branch from ed93ada to 45c3563 Compare January 2, 2026 15:00
@shernshiou shernshiou changed the title [Build][Router] Update vllm to v0.12.0 [Build][Router] Update vllm to v0.13.0 Jan 2, 2026
Signed-off-by: Shern Shiou Tan <shernshiou@gmail.com>
@shernshiou shernshiou force-pushed the feat/update_vllm_v0_12_0 branch from 45c3563 to 8c02adf Compare January 2, 2026 15:10