
[bugfix] remove vllm speech route#1060

Merged
david6666666 merged 1 commit into vllm-project:main from linyueqian:fix/remove-vllm-speech-route
Jan 29, 2026

Conversation

@linyueqian
Collaborator


Purpose

Fix 400 Bad Request when calling /v1/audio/speech with Qwen3-TTS voices (e.g., Vivian, Ryan).

vllm's built-in /v1/audio/speech route validates the voice parameter against OpenAI's voice list (alloy, echo, shimmer, etc.), rejecting any Qwen3-TTS voice names. Because vllm-omni never removed this upstream route, vllm's handler took precedence over omni's, which has proper Qwen3-TTS validation.

The fix adds _remove_route_from_router(router, "/v1/audio/speech", {"POST"}) before registering omni's speech endpoint, consistent with how /v1/chat/completions, /health, and /v1/models are already handled.

Fixes #1041 and addresses this comment.
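For reviewers unfamiliar with the helper, the idea behind `_remove_route_from_router` can be sketched as follows. This is a minimal illustration with stand-in `Route`/`Router` types (the real helper operates on FastAPI's router and its details may differ):

```python
from dataclasses import dataclass, field


@dataclass
class Route:
    path: str
    methods: set[str]


@dataclass
class Router:
    routes: list[Route] = field(default_factory=list)


def _remove_route_from_router(router: Router, path: str, methods: set[str]) -> None:
    # Keep only routes that do NOT match both the path and at least one method,
    # so a later registration for the same path takes effect.
    router.routes = [
        r for r in router.routes
        if not (r.path == path and r.methods & methods)
    ]


# Simulate vllm registering its built-in speech route first.
router = Router([Route("/v1/audio/speech", {"POST"}), Route("/health", {"GET"})])
_remove_route_from_router(router, "/v1/audio/speech", {"POST"})
# Only /health remains; omni's speech handler can now be registered without
# the upstream route shadowing it.
print([r.path for r in router.routes])  # → ['/health']
```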

Test Plan

# Start server
vllm-omni serve /path/to/Qwen3-TTS-12Hz-1.7B-CustomVoice \
    --stage-configs-path vllm_omni/model_executor/stage_configs/qwen3_tts.yaml \
    --host 0.0.0.0 --port 8800 \
    --trust-remote-code --enforce-eager --omni

# Test with a Qwen3-TTS voice (previously returned 400)
curl -X POST http://localhost:8800/v1/audio/speech \
    -H "Content-Type: application/json" \
    -d '{"input": "今天天气真好", "voice": "Ryan", "instructions": "用开心的语气说"}' \
    --output test_ryan.wav \
    -w "\nHTTP Status: %{http_code}\n"

Test Result

test_ryan.wav


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft.


@david6666666
Collaborator

LGTM

david6666666 added the ready label (trigger buildkite CI) Jan 29, 2026
@david6666666 david6666666 merged commit cac6504 into vllm-project:main Jan 29, 2026
6 of 7 checks passed
dongbo910220 pushed a commit to dongbo910220/vllm-omni that referenced this pull request Feb 1, 2026
Signed-off-by: linyueqian <linyueqian@outlook.com>

Labels

ready label to trigger buildkite CI


Development

Successfully merging this pull request may close these issues.

[Bug]: Error retrieving safetensors: Repo id must be in the form 'repo_name' and 400 Bad Request

2 participants