@tukwila tukwila commented Dec 24, 2025

Summary

Start the mock server via:
guidellm mock-server --host 0.0.0.0 --port 8080

Example curl commands:

curl -X POST http://127.0.0.1:8080/v1/audio/transcriptions \
  -F "file=@/${local_path}/output_audio.mp3" \
  -F "model=whisper-1" \
  -F "language=zh"

curl -X POST http://127.0.0.1:8080/v1/audio/transcriptions \
  -F "file=@/${local_path}/output_audio.mp3" \
  -F "model=whisper-large-v3" \
  -F "language=en" \
  -F "prompt=This is a technical demonstration" \
  -F "temperature=0.3" \
  -F "response_format=verbose_json"

curl -X POST http://127.0.0.1:8080/v1/audio/transcriptions \
  -F "file=@/${local_path}/output_audio.mp3" \
  -F "response_format=text"

curl -X POST http://127.0.0.1:8080/v1/audio/translations \
  -F "file=@/${local_path}/output_audio.mp3"

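For reference, the optional form fields the curl commands above pass with -F can be assembled in Python before sending. This is a minimal sketch, assuming the mock server accepts OpenAI-style multipart fields; `audio_form_fields` is a hypothetical helper, not part of the PR, and actually sending the request requires a running server.

```python
def audio_form_fields(model=None, language=None, prompt=None,
                      temperature=None, response_format=None):
    """Return only the non-empty optional fields, mirroring the curl -F flags."""
    candidates = {
        "model": model,
        "language": language,
        "prompt": prompt,
        "temperature": temperature,
        "response_format": response_format,
    }
    # Drop unset fields and stringify the rest, as multipart form values are text.
    return {k: str(v) for k, v in candidates.items() if v is not None}

# Fields matching the second curl command above.
fields = audio_form_fields(
    model="whisper-large-v3",
    language="en",
    prompt="This is a technical demonstration",
    temperature=0.3,
    response_format="verbose_json",
)

# With the third-party `requests` library installed and the mock server running,
# the request itself would look roughly like:
# requests.post("http://127.0.0.1:8080/v1/audio/transcriptions",
#               files={"file": open(audio_path, "rb")}, data=fields)
```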
Details


Test Plan

Related Issues

  • Resolves #

  • "I certify that all code in this PR is my own, except as noted below."

Use of AI

  • Includes AI-assisted code completion
  • Includes code generated by an AI application
  • Includes AI-generated tests (NOTE: AI written tests should have a docstring that includes ## WRITTEN BY AI ##)

Signed-off-by: guangli.bao <guangli.bao@daocloud.io>
@sjmonson sjmonson merged commit 72354e2 into vllm-project:main Jan 7, 2026
15 checks passed
@tukwila tukwila deleted the audio_impl branch January 8, 2026 02:43
