
Conversation


@HanFa (Contributor) commented on Dec 20, 2025:

This PR adds OpenTelemetry tracing support to the vLLM production stack router, enabling end-to-end distributed tracing across the inference pipeline. When enabled via --otel-endpoint (or Helm's routerSpec.otel.endpoint), the router extracts incoming W3C Trace Context headers (traceparent, tracestate), creates spans for routing operations, and propagates trace context to backend vLLM engines. This allows operators to visualize the complete request flow from client through router to engine in observability platforms like Jaeger.

The implementation includes a new experimental/otel module with helper functions for span management, CLI arguments for configuration, Helm chart integration with schema validation, unit tests for both Python and Helm templates, and updated documentation in the distributed tracing tutorial. Tracing auto-enables when an OTLP endpoint is provided, requiring no additional feature flags.
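For readers unfamiliar with W3C Trace Context propagation in OpenTelemetry, the sketch below illustrates the general extract-span-inject flow the description refers to. It is not the code from this PR; the function name route_with_tracing and the span name router.route_request are made up for illustration, and only standard opentelemetry-api calls (extract, inject, start_as_current_span) are used.

    # Illustrative only (not the PR's actual code): extract the upstream W3C
    # trace context, open a routing span, and re-inject the context into the
    # headers forwarded to the backend vLLM engine.
    from opentelemetry import trace
    from opentelemetry.propagate import extract, inject

    tracer = trace.get_tracer("vllm_router")


    def route_with_tracing(incoming_headers: dict) -> dict:
        # Parse traceparent/tracestate sent by the client (or an upstream router).
        upstream_ctx = extract(incoming_headers)

        # The routing span becomes a child of the upstream trace.
        with tracer.start_as_current_span("router.route_request", context=upstream_ctx):
            outgoing_headers = dict(incoming_headers)
            # Write a fresh traceparent so the backend engine joins the same trace.
            inject(outgoing_headers)
            return outgoing_headers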

An example command for launching the vLLM Router with OTel enabled:

vllm-router --port 8001 --service-discovery static --static-backends "http://localhost:8000" \
--static-models "Qwen/Qwen2.5-7B-Instruct" --routing-logic roundrobin \
--otel-endpoint localhost:4317
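A quick way to exercise the flow manually (assuming the router launched above is listening on localhost:8001 and an OTLP collector such as Jaeger is reachable at localhost:4317) is to send a request that already carries a traceparent header and then look for the router span under that trace ID. The snippet below is only a sketch; the traceparent value is the example ID from the W3C spec.

    # Hypothetical manual check against the router started above.
    import requests

    headers = {
        "Content-Type": "application/json",
        # W3C format: version-trace_id-parent_span_id-flags (example IDs from the spec)
        "traceparent": "00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01",
    }
    payload = {
        "model": "Qwen/Qwen2.5-7B-Instruct",
        "messages": [{"role": "user", "content": "Hello"}],
    }

    resp = requests.post(
        "http://localhost:8001/v1/chat/completions",
        json=payload,
        headers=headers,
        timeout=60,
    )
    print(resp.status_code)
    # The router span (and the engine spans beneath it) should then show up in
    # Jaeger under trace ID 4bf92f3577b34da6a3ce929d0e0e4736.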

FIX #773

BEFORE SUBMITTING, PLEASE READ THE CHECKLIST BELOW AND FILL IN THE DESCRIPTION ABOVE


  • Make sure the code changes pass the pre-commit checks.
  • Sign off your commit by using -s when running git commit.
  • Try to classify PRs for easy understanding of the type of changes, such as [Bugfix], [Feat], and [CI].

Testing

I used my local machine for testing with the following setup (Semantic Router + Production Stack integration):

  Envoy (8080)
      ↓
  Semantic Router (50051 gRPC / 8080 HTTP)
      ↓
  vLLM Production Stack Router (8001)  ← NEW
      ↓
  vLLM Engine (8000)

Without OTel enabled on the vLLM Router (this requires the corresponding Semantic Router-side change, vllm-project/semantic-router#852):

image

With OTel enabled on the vLLM Router:

image

Note: right now, only the following endpoints have this tracing gap filled:

Endpoint               Description
/v1/chat/completions   Chat completions API
/v1/completions        Text completions API
/v1/embeddings         Embeddings API
/tokenize              Tokenization
/detokenize            Detokenization
/v1/rerank             Reranking API (v1)
/rerank                Reranking API
/v1/score              Scoring API (v1)
/score                 Scoring API
/v1/responses          Responses API

Endpoints without tracing:

  • /v1/audio/transcriptions - uses route_transcription_request()
  • Disaggregated prefill requests - uses route_disaggregated_prefill_request()
  • Health/metrics endpoints (/health, /v1/models, etc.)

Detailed Checklist

Thank you for your contribution to production-stack! Before submitting the pull request, please ensure the PR meets the following criteria. This helps us maintain the code quality and improve the efficiency of the review process.

PR Title and Classification

Please classify PRs so the type of change is easy to understand. The PR title should be prefixed appropriately to indicate the type of change, using one of the following:

  • [Bugfix] for bug fixes.
  • [CI/Build] for build or continuous integration improvements.
  • [Doc] for documentation fixes and improvements.
  • [Feat] for new features in the cluster (e.g., autoscaling, disaggregated prefill, etc.).
  • [Router] for changes to the vllm_router (e.g., routing algorithm, router observability, etc.).
  • [Misc] for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.

Code Quality

The PR needs to meet the following code quality standards:

  • Pass all linter checks. Please use pre-commit to format your code. See README.md for installation.
  • The code needs to be well-documented to ensure future contributors can easily understand it.
  • Please include sufficient tests to ensure the change stays correct and robust. This includes both unit tests and integration tests.

DCO and Signed-off-by

When contributing changes to this project, you must agree to the DCO. Commits must include a Signed-off-by: header which certifies agreement with the terms of the DCO.

Using -s with git commit will automatically add this header.

What to Expect for the Reviews

We aim to address all PRs in a timely manner. If no one reviews your PR within 5 days, please @-mention one of YuhanLiu11, Shaoting-Feng, or ApostaC.

@gemini-code-assist (Contributor) commented:

Summary of Changes

Hello @HanFa, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the observability of the vLLM production stack by integrating OpenTelemetry tracing directly into the router component. This allows for a complete visualization of request flows, from initial client interaction through routing decisions and eventual processing by backend vLLM engines. The implementation ensures seamless trace context propagation, providing operators with critical insights into the performance and behavior of the distributed inference system.

Highlights

  • OpenTelemetry Tracing for Router: Added comprehensive OpenTelemetry tracing support to the vLLM production stack router, enabling end-to-end distributed tracing across the inference pipeline.
  • W3C Context Propagation: The router now extracts incoming W3C Trace Context headers (traceparent, tracestate), creates spans for routing operations, and propagates trace context to backend vLLM engines.
  • Configurability: Tracing can be enabled and configured via a new --otel-endpoint CLI argument or through Helm chart values (routerSpec.otel.endpoint), with options for service name and insecure connections.
  • New experimental/otel Module: Introduced a new Python module (vllm_router.experimental.otel) containing helper functions for OpenTelemetry span management, context extraction, and injection.
  • Helm Chart Integration: Integrated OpenTelemetry configuration into the Helm chart, including schema validation, default values, and unit tests to ensure correct argument passing to the router deployment.
  • Updated Documentation: The distributed tracing tutorial has been updated to reflect the new router tracing capabilities and guide users on achieving end-to-end traces from client to engine.
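The contents of the experimental/otel module are not shown in this summary; the sketch below is one plausible shape for such span-management helpers, written to be consistent with the end_span(span) / end_span(span, error=e) calls quoted in the review comment further down. The names init_tracing and start_span are assumptions, and only standard OpenTelemetry SDK APIs are used.

    # Hypothetical sketch of span-management helpers an experimental/otel module
    # might expose; not the module shipped in this PR.
    from typing import Optional

    from opentelemetry import trace
    from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
    from opentelemetry.sdk.resources import Resource
    from opentelemetry.sdk.trace import TracerProvider
    from opentelemetry.sdk.trace.export import BatchSpanProcessor
    from opentelemetry.trace import Span, Status, StatusCode

    _tracer: Optional[trace.Tracer] = None


    def init_tracing(endpoint: str, service_name: str = "vllm-router", insecure: bool = True) -> None:
        """Configure a tracer provider that exports spans to an OTLP/gRPC collector."""
        global _tracer
        provider = TracerProvider(resource=Resource.create({"service.name": service_name}))
        provider.add_span_processor(
            BatchSpanProcessor(OTLPSpanExporter(endpoint=endpoint, insecure=insecure))
        )
        trace.set_tracer_provider(provider)
        _tracer = trace.get_tracer(__name__)


    def start_span(name: str, context=None) -> Optional[Span]:
        """Start a span, optionally as a child of an extracted upstream context."""
        if _tracer is None:  # tracing disabled: no OTLP endpoint was configured
            return None
        return _tracer.start_span(name, context=context)


    def end_span(span: Optional[Span], error: Optional[BaseException] = None) -> None:
        """Record an optional error and end the span; a no-op when tracing is off."""
        if span is None:
            return
        if error is not None:
            span.record_exception(error)
            span.set_status(Status(StatusCode.ERROR, str(error)))
        span.end()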



@gemini-code-assist (bot) left a comment:

Code Review

This pull request introduces comprehensive OpenTelemetry tracing support to the vLLM router, a valuable addition for enhancing observability. The implementation is well-executed, covering CLI and Helm configurations, the core tracing logic with context propagation, and both unit and integration tests. The code is generally clean and follows good practices, such as gracefully handling cases where tracing is disabled.

I have identified a configuration bug in the new CLI argument for OTLP security, which would prevent users from establishing a secure connection. I've also provided a suggestion to refactor a small portion of the request handling logic to improve clarity and avoid redundant function calls when managing spans.

Overall, this is a strong contribution that significantly improves the production readiness of the vLLM stack. My feedback aims to address the identified issues to ensure the feature is robust and maintainable.

Comment on lines +202 to +209
    except Exception as e:
        end_span(span, error=e) if tracing_active else None
        raise
    finally:
        end_span(span) if tracing_active else None

Severity: medium

In the case of an exception, end_span is called in both the except block and the finally block. While this is not a functional bug since span.end() is idempotent, it's redundant and can be made cleaner.

A more robust pattern is to handle span ending explicitly on success and error paths, removing the finally block. This ensures the span is ended exactly once and improves readability. You've used this cleaner pattern in the traced_stream async generator in the same file, which is a great example to follow.

Here is an example of a cleaner structure:

    try:
        # ... main logic ...
        if tracing_active:
            end_span(span) # Call on success
    except Exception as e:
        if tracing_active:
            end_span(span, error=e) # Call on error
        raise

Applying this pattern would involve moving the success-path end_span call to the end of the try block and removing the finally block.

@HanFa force-pushed the main branch 3 times, most recently from c82c842 to 777823a on December 20, 2025 at 23:58
…pagation

Signed-off-by: Fang Han <fhan0520@gmail.com>
@HanFa (Contributor, Author) commented on Dec 22, 2025:

The functionality and e2e tests appear to have been aborted due to the unavailability of Actions workers.

