
Conversation


@nejch commented Dec 28, 2025

Historically, vllm-router has used three HTTP libraries (requests, httpx, aiohttp). With #569 it seems the future is aiohttp. This attempts to align the health check requests with real workloads: in some scenarios, these also need to provide an API key, or even multiple API keys depending on the backends configured.

In our case, we'd like to route to our co-located vLLM cluster, a second cluster operated by another team (potentially with different auth), and AWS Bedrock for the new Anthropic endpoint (#640). Consolidating on a single client makes it easier to add a consistent auth middleware for all requests; a rough sketch of the idea follows.
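This is not something the PR implements, just an illustration of where it could go: once every request goes through one aiohttp.ClientSession, a per-backend API key becomes a matter of attaching the right header. The /health path, key handling, and helper name below are made up.

from typing import Optional

import aiohttp


async def check_backend(
    session: aiohttp.ClientSession, base_url: str, api_key: Optional[str]
) -> bool:
    # Attach the backend-specific key only when one is configured; /health is a
    # placeholder for whichever endpoint the backend actually exposes.
    headers = {"Authorization": f"Bearer {api_key}"} if api_key else {}
    try:
        async with session.get(
            f"{base_url}/health",
            headers=headers,
            timeout=aiohttp.ClientTimeout(total=10),
        ) as resp:
            return resp.status == 200
    except aiohttp.ClientError:
        return False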

Contributes to #569


  • Make sure the code changes pass the pre-commit checks.
  • Sign off your commit by using -s when running git commit.
  • Try to classify PRs for easy understanding of the type of changes, such as [Bugfix], [Feat], and [CI].
Detailed Checklist

Thank you for your contribution to production-stack! Before submitting the pull request, please ensure the PR meets the following criteria. This helps us maintain the code quality and improve the efficiency of the review process.

PR Title and Classification

Please classify PRs so the type of change is easy to understand. Prefix the PR title appropriately to indicate the type of change, using one of the following:

  • [Bugfix] for bug fixes.
  • [CI/Build] for build or continuous integration improvements.
  • [Doc] for documentation fixes and improvements.
  • [Feat] for new features in the cluster (e.g., autoscaling, disaggregated prefill, etc.).
  • [Router] for changes to the vllm_router (e.g., routing algorithm, router observability, etc.).
  • [Misc] for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.

Code Quality

The PR needs to meet the following code quality standards:

  • Pass all linter checks. Please use pre-commit to format your code. See README.md for installation.
  • The code needs to be well-documented to ensure future contributors can easily understand it.
  • Please include sufficient tests to ensure the change stays correct and robust. This includes both unit tests and integration tests.

DCO and Signed-off-by

When contributing changes to this project, you must agree to the DCO. Commits must include a Signed-off-by: header which certifies agreement with the terms of the DCO.

Using -s with git commit will automatically add this header.

What to Expect for the Reviews

We aim to address all PRs in a timely manner. If no one reviews your PR within 5 days, please @-mention one of YuhanLiu11, Shaoting-Feng, or ApostaC.

Signed-off-by: Nejc Habjan <nejc.habjan@siemens.com>
@nejch force-pushed the refactor/aiohttp-health-checks branch from 1b6dab6 to 1c2bd2d on December 28, 2025 16:19
@gemini-code-assist (Contributor) commented:

Summary of Changes

Hello @nejch, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request streamlines the vLLM router's health check mechanism by transitioning all HTTP requests for health checks to use aiohttp. This change unifies the HTTP client library used within the router, which is crucial for scenarios involving consistent authentication across diverse backend services. The core is_model_healthy function has been refactored to be asynchronous, and the StaticServiceDiscovery component now leverages an aiohttp.ClientSession for its operations. Accompanying these functional changes are updates to the testing suite, including new asynchronous fixtures and modifications to existing tests to ensure robust validation of the new aiohttp-based health check logic.

Highlights

  • Standardized HTTP Client: Migrated health check requests from the requests library to aiohttp to standardize HTTP client usage across the router, aligning with future architectural goals for consistent authentication.
  • Asynchronous Health Checks: The is_model_healthy utility function has been refactored to be asynchronous, leveraging aiohttp.ClientSession for non-blocking I/O operations during health checks.
  • Service Discovery Integration: The StaticServiceDiscovery class now initializes and utilizes an aiohttp.ClientSession for its health check operations, ensuring the service discovery process is fully asynchronous (a lifecycle sketch follows this list).
  • Enhanced Testing Infrastructure: New pytest fixtures (mock_app, make_mock_engine) have been introduced to facilitate asynchronous testing with aiohttp, and existing health check tests have been updated to use these new fixtures and async/await syntax.
  • Dependency Update: The pytest-aiohttp package has been added to pyproject.toml as a test dependency to support the new asynchronous testing framework.
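To make the session-ownership point concrete, here is a minimal lifecycle sketch. The class and method names are illustrative only and do not mirror the actual StaticServiceDiscovery implementation; only the is_model_healthy call reflects this PR, and the model name and "chat" model type are placeholders.

from typing import List, Optional

import aiohttp

from vllm_router.utils import is_model_healthy  # refactored to async in this PR


class DiscoveryWithHealthChecks:
    """Illustrative only: owns one ClientSession and reuses it for all checks."""

    def __init__(self, urls: List[str], model: str):
        self.urls = urls
        self.model = model
        self._session: Optional[aiohttp.ClientSession] = None

    async def start(self) -> None:
        # Create the session inside a running event loop and keep it around,
        # instead of building a new connection pool for every health check.
        self._session = aiohttp.ClientSession()

    async def close(self) -> None:
        if self._session is not None:
            await self._session.close()

    async def healthy_urls(self) -> List[str]:
        assert self._session is not None, "call start() first"
        healthy = []
        for url in self.urls:
            if await is_model_healthy(self._session, url, self.model, "chat"):
                healthy.append(url)
        return healthy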


@gemini-code-assist bot left a comment:


Code Review

This pull request successfully consolidates the HTTP client library for health checks to aiohttp, removing the dependency on requests for this functionality. This is a great step towards standardizing on aiohttp throughout the vllm-router. The changes are well-implemented, and the test suite has been significantly improved by replacing monkeypatching with mock aiohttp servers, leading to more robust and realistic tests. I have one suggestion in src/vllm_router/utils.py to refactor the is_model_healthy function for better readability and to add a missing type hint. Overall, this is a solid contribution that improves code consistency and test quality.

Comment on lines +225 to 265
+async def is_model_healthy(
+    session: aiohttp.ClientSession, url: str, model: str, model_type: str
+):
     model_url = ModelType.get_url(model_type)

     try:
         if model_type == "transcription":
             # for transcription, the backend expects multipart/form-data with a file
             # we will use pre-generated silent wav bytes
-            response = requests.post(
-                f"{url}{model_url}",
-                files=ModelType.get_test_payload(model_type),  # multipart/form-data
-                data={"model": model},
-                timeout=10,
+            test_payload = ModelType.get_test_payload(model_type)
+            form_data = aiohttp.FormData()
+            form_data.add_field(
+                "file",
+                test_payload["file"][1],
+                filename=test_payload["file"][0],
+                content_type=test_payload["file"][2],
             )
+            form_data.add_field("model", model)
+
+            async with session.post(
+                f"{url}{model_url}",
+                data=form_data,
+                timeout=aiohttp.ClientTimeout(total=10),
+            ) as response:
+                response.raise_for_status()
+                return True
         else:
             # for other model types (chat, completion, etc.)
-            response = requests.post(
+            async with session.post(
                 f"{url}{model_url}",
                 headers={"Content-Type": "application/json"},
                 json={"model": model} | ModelType.get_test_payload(model_type),
-                timeout=10,
-            )
-
-        response.raise_for_status()
-
-        if model_type == "transcription":
-            return True
-        else:
-            response.json()  # verify it's valid json for other model types
-            return True  # validation passed
+                timeout=aiohttp.ClientTimeout(total=10),
+            ) as response:
+                response.raise_for_status()
+                await response.json()  # verify it's valid json for other model types
+                return True  # validation passed

-    except requests.exceptions.RequestException as e:
+    except aiohttp.ClientError as e:
         logger.debug(f"{model_type} Model {model} at {url} is not healthy: {e}")
         return False

Severity: medium

This function is missing a return type hint. Additionally, the if/else block for different model types contains duplicated logic for making the HTTP request. This can be refactored to prepare the request arguments first, then make a single session.post call. This will improve readability and maintainability by reducing code duplication.

async def is_model_healthy(
    session: aiohttp.ClientSession, url: str, model: str, model_type: str
) -> bool:
    model_url = ModelType.get_url(model_type)

    try:
        post_kwargs = {
            "timeout": aiohttp.ClientTimeout(total=10),
        }

        if model_type == "transcription":
            # for transcription, the backend expects multipart/form-data with a file
            # we will use pre-generated silent wav bytes
            test_payload = ModelType.get_test_payload(model_type)
            form_data = aiohttp.FormData()
            form_data.add_field(
                "file",
                test_payload["file"][1],
                filename=test_payload["file"][0],
                content_type=test_payload["file"][2],
            )
            form_data.add_field("model", model)
            post_kwargs["data"] = form_data
        else:
            # for other model types (chat, completion, etc.)
            post_kwargs["headers"] = {"Content-Type": "application/json"}
            post_kwargs["json"] = {"model": model} | ModelType.get_test_payload(model_type)

        async with session.post(f"{url}{model_url}", **post_kwargs) as response:
            response.raise_for_status()
            if model_type != "transcription":
                await response.json()  # verify it's valid json for other model types
            return True  # validation passed

    except aiohttp.ClientError as e:
        logger.debug(f"{model_type} Model {model} at {url} is not healthy: {e}")
        return False
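For completeness, a caller-side sketch of the new async signature; the endpoint URL and model name below are placeholders rather than anything from the PR.

import asyncio

import aiohttp

from vllm_router.utils import is_model_healthy


async def main() -> None:
    # One shared session for all health checks; endpoint and model are placeholders.
    async with aiohttp.ClientSession() as session:
        ok = await is_model_healthy(session, "http://localhost:8000", "my-model", "chat")
        print("healthy" if ok else "unhealthy")


asyncio.run(main())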



@pytest.fixture
async def make_mock_engine(aiohttp_client: Any) -> Callable[[dict[str, Callable]], str]:
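The body of this fixture is not shown in the hunk above. A plausible shape for such a factory fixture, using pytest-aiohttp's aiohttp_client, might look like the following; the route wiring and return value are assumptions, not necessarily the PR's actual code.

from typing import Any, Callable

import pytest
from aiohttp import web


@pytest.fixture
async def make_mock_engine(aiohttp_client: Any) -> Callable:
    async def _make(routes: dict[str, Callable]) -> str:
        # Build a tiny in-process aiohttp server from a path -> handler mapping
        # and hand back its base URL for the router under test to call.
        app = web.Application()
        for path, handler in routes.items():
            app.router.add_post(path, handler)
        client = await aiohttp_client(app)
        return str(client.make_url("/")).rstrip("/")

    return _make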

@nejch marked this pull request as ready for review on December 28, 2025 16:24
