[Feat] Production Stack Router: Add OpenTelemetry tracing support with W3C context propagation #772
base: main
Conversation
Summary of Changes

Hello @HanFa, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the observability of the vLLM production stack by integrating OpenTelemetry tracing directly into the router component. This allows for a complete visualization of request flows, from initial client interaction through routing decisions and eventual processing by backend vLLM engines. The implementation ensures seamless trace context propagation, providing operators with critical insights into the performance and behavior of the distributed inference system.
Code Review
This pull request introduces comprehensive OpenTelemetry tracing support to the vLLM router, a valuable addition for enhancing observability. The implementation is well-executed, covering CLI and Helm configurations, the core tracing logic with context propagation, and both unit and integration tests. The code is generally clean and follows good practices, such as gracefully handling cases where tracing is disabled.
I have identified a configuration bug in the new CLI argument for OTLP security, which would prevent users from establishing a secure connection. I've also provided a suggestion to refactor a small portion of the request handling logic to improve clarity and avoid redundant function calls when managing spans.
Overall, this is a strong contribution that significantly improves the production readiness of the vLLM stack. My feedback aims to address the identified issues to ensure the feature is robust and maintainable.
    except Exception as e:
        end_span(span, error=e) if tracing_active else None
        raise
    finally:
        end_span(span) if tracing_active else None
In the case of an exception, end_span is called in both the except block and the finally block. While this is not a functional bug since span.end() is idempotent, it's redundant and can be made cleaner.
A more robust pattern is to handle span ending explicitly on success and error paths, removing the finally block. This ensures the span is ended exactly once and improves readability. You've used this cleaner pattern in the traced_stream async generator in the same file, which is a great example to follow.
Here is an example of a cleaner structure:
    try:
        # ... main logic ...
        if tracing_active:
            end_span(span)  # Call on success
    except Exception as e:
        if tracing_active:
            end_span(span, error=e)  # Call on error
        raise

Applying this pattern would involve moving the success-path `end_span` call to the end of the `try` block and removing the `finally` block.
Force-pushed from c82c842 to 777823a (Compare)
…pagation Signed-off-by: Fang Han <fhan0520@gmail.com>
Functionality & e2e tests seem to have been aborted due to the unavailability of action workers.
This PR adds OpenTelemetry tracing support to the vLLM production stack router, enabling end-to-end distributed tracing across the inference pipeline. When enabled via `--otel-endpoint` (or Helm's `routerSpec.otel.endpoint`), the router extracts incoming W3C Trace Context headers (`traceparent`, `tracestate`), creates spans for routing operations, and propagates trace context to backend vLLM engines. This allows operators to visualize the complete request flow from client through router to engine in observability platforms like Jaeger.
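As a rough illustration of that extraction/propagation flow, here is a minimal sketch using the standard opentelemetry-python propagation API. The function and variable names (`route_with_tracing`, `incoming_headers`, `backend_headers`) are illustrative only and are not the PR's actual `experimental/otel` helpers:

```python
# Minimal sketch of W3C trace-context handling with opentelemetry-python.
# Names below are illustrative; the PR's experimental/otel module may differ.
from opentelemetry import trace
from opentelemetry.propagate import extract, inject

tracer = trace.get_tracer("vllm_router")


def route_with_tracing(incoming_headers: dict, payload: dict) -> dict:
    # Rebuild the caller's context from the W3C headers (traceparent/tracestate).
    parent_ctx = extract(incoming_headers)

    # Start a routing span as a child of the caller's span.
    with tracer.start_as_current_span("router.route_request", context=parent_ctx) as span:
        span.set_attribute("router.requested_model", payload.get("model", "unknown"))

        # Inject the current context into the headers forwarded to the vLLM
        # engine so its spans join the same trace.
        backend_headers: dict = {}
        inject(backend_headers)
        return backend_headers
```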
The implementation includes a new `experimental/otel` module with helper functions for span management, CLI arguments for configuration, Helm chart integration with schema validation, unit tests for both Python and Helm templates, and updated documentation in the distributed tracing tutorial. Tracing auto-enables when an OTLP endpoint is provided, requiring no additional feature flags; launching the vLLM Router with OTel enabled is simply a matter of adding `--otel-endpoint` (pointing at the OTLP collector) to the usual router command.
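For the auto-enable behavior, a minimal sketch with the standard OpenTelemetry SDK could look like the following; `maybe_init_tracing` and its `otel_endpoint` / `otel_insecure` parameters are assumptions for illustration, not the PR's actual CLI wiring:

```python
# Illustrative initialization sketch; the real experimental/otel module and
# CLI flag names in this PR may differ.
from typing import Optional

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor


def maybe_init_tracing(otel_endpoint: Optional[str], otel_insecure: bool = False) -> bool:
    # No endpoint configured: tracing stays disabled, no extra feature flag needed.
    if not otel_endpoint:
        return False

    provider = TracerProvider(resource=Resource.create({"service.name": "vllm-router"}))
    provider.add_span_processor(
        BatchSpanProcessor(OTLPSpanExporter(endpoint=otel_endpoint, insecure=otel_insecure))
    )
    trace.set_tracer_provider(provider)
    return True
```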
FIX #773
BEFORE SUBMITTING, PLEASE READ THE CHECKLIST BELOW AND FILL IN THE DESCRIPTION ABOVE
Sign-off your commit by using `-s` when doing `git commit`. Classify the PR with a prefix such as `[Bugfix]`, `[Feat]`, and `[CI]`.

Testing
I use my local machine for testing with the following setup (Semantic Router + Production Stack integration):
Without the vLLM Router enabling OTel (you need this change on the Semantic Router side: vllm-project/semantic-router#852):

With the vLLM Router enabling OTel:
Note: right now only the following endpoints have this tracing gap filled.

Endpoints without tracing:

- `/v1/audio/transcriptions` (uses `route_transcription_request()`)
- disaggregated prefill requests (use `route_disaggregated_prefill_request()`)
- other endpoints (`/health`, `/v1/models`, etc.)

Detailed Checklist (Click to Expand)
Thank you for your contribution to production-stack! Before submitting the pull request, please ensure the PR meets the following criteria. This helps us maintain the code quality and improve the efficiency of the review process.
PR Title and Classification
Please try to classify PRs for easy understanding of the type of changes. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following:
- `[Bugfix]` for bug fixes.
- `[CI/Build]` for build or continuous integration improvements.
- `[Doc]` for documentation fixes and improvements.
- `[Feat]` for new features in the cluster (e.g., autoscaling, disaggregated prefill, etc.).
- `[Router]` for changes to the `vllm_router` (e.g., routing algorithm, router observability, etc.).
- `[Misc]` for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.
Code Quality
The PR needs to meet the following code quality standards:
- Please use `pre-commit` to format your code. See `README.md` for installation.

DCO and Signed-off-by
When contributing changes to this project, you must agree to the DCO. Commits must include a `Signed-off-by:` header which certifies agreement with the terms of the DCO. Using `-s` with `git commit` will automatically add this header.

What to Expect for the Reviews
We aim to address all PRs in a timely manner. If no one reviews your PR within 5 days, please @-mention one of YuhanLiu11, Shaoting-Feng, or ApostaC.