[bugfix] support text + audio mixed output #843

Merged: david6666666 merged 6 commits into vllm-project:main from GG-li:bugfix-540 on Jan 21, 2026

Conversation

@GG-li (Contributor) commented Jan 19, 2026


Purpose

As mentioned in #540, if the output modality is set to audio, the response still contains both text and audio.

Test Plan

Server:

vllm serve Qwen/Qwen3-Omni-30B-A3B-Instruct --omni --port 8888

Curl:

curl http://localhost:8888/v1/chat/completions -H "Content-Type: application/json" -d '{"model": "Qwen/Qwen3-Omni-30B-A3B-Instruct", "messages": [{"role": "user", "content": "Describe vLLM in brief."}], "modalities": ["audio"]}'
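
For reference, the same request can be sent from Python with the OpenAI client. This is a minimal sketch, assuming a recent openai package that accepts the modalities parameter and that the audio part comes back as base64-encoded WAV in message.audio.data, as in the outputs below:

# Minimal sketch using the OpenAI Python client against the vllm serve
# command above. Assumes the server is reachable at localhost:8888.
import base64

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8888/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="Qwen/Qwen3-Omni-30B-A3B-Instruct",
    messages=[{"role": "user", "content": "Describe vLLM in brief."}],
    modalities=["audio"],
)

audio = resp.choices[0].message.audio
if audio is not None:
    # Decode the base64 WAV payload and write it to disk.
    with open("reply.wav", "wb") as f:
        f.write(base64.b64decode(audio.data))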

Test Result

Output before fix:

{"id":"chatcmpl-a6a266195a78440a","object":"chat.completion","created":1767754518,"model":"Qwen/Qwen3-Omni-30B-A3B-Instruct","choices":[{"index":0,"message":{"role":"assistant","content":"vLLM is an open-source library designed for efficient inference and serving of large language models (LLMs). It optimizes memory usage and computational efficiency through techniques like PagedAttention, which reduces memory fragmentation and improves throughput by managing attention keys and values more effectively. vLLM supports popular LLM architectures such as LLaMA, LLaMA2, and others, and enables high-throughput, low-latency model serving with support for continuous batching and dynamic scheduling. Its primary goal is to accelerate LLM inference while minimizing resource consumption, making it ideal for deployment in production environments.","refusal":null,"annotations":null,"audio":null,"function_call":null,"tool_calls":[],"reasoning":null,"reasoning_content":null},"logprobs":null,"finish_reason":"stop","stop_reason":null,"token_ids":null},{"index":0,"message":{"role":"assistant","content":null,"refusal":null,"annotations":null,"audio":{"id":"audio-e1b491c019cb4736","data":"UklGRngxIABX......JAAgABQAIAAYABwA=","expires_at":1767840988,"transcript":""},"function_call":null,"tool_calls":[],"reasoning":null,"reasoning_content":null},"logprobs":null,"finish_reason":"stop","stop_reason":null,"token_ids":null}],"service_tier":null,"system_fingerprint":null,"usage":{"prompt_tokens":15,"total_tokens":134,"completion_tokens":119,"prompt_tokens_details":null},"prompt_logprobs":null,"prompt_token_ids":null,"kv_transfer_params":null}

Output after fix:

{"id":"chatcmpl-9434083195f9255f","object":"chat.completion","created":1767750392,"model":"Qwen/Qwen3-Omni-30B-A3B-Instruct","choices":[{"index":0,"message":{"role":"assistant","content":null,"refusal":null,"annotations":null,"audio":{"id":"audio-79963b2733804118","data":"UklGRnjyIgBXQV....../+//3/+//9//3////+//z//f/+////+//6//3////9//r/+v/8//n/+f/3//r/+v/7//z/+//4//n/+//6//n/9//8//v/+P/5//z//f/4//n/9//0//X/+f/5//f/9v/2//b/9/8=","expires_at":1767836939,"transcript":""},"function_call":null,"tool_calls":[],"reasoning":null,"reasoning_content":null},"logprobs":null,"finish_reason":"stop","stop_reason":null,"token_ids":null}],"service_tier":null,"system_fingerprint":null,"usage":{"prompt_tokens":0,"total_tokens":0,"completion_tokens":0,"prompt_tokens_details":null},"prompt_logprobs":null,"prompt_token_ids":null,"kv_transfer_params":null}



Signed-off-by: GG-li <[email protected]>
@GG-li requested a review from hsliuustc0106 as a code owner on January 19, 2026 08:14
@chatgpt-codex-connector (bot) left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 966ebdf103


Comment on lines +1310 to +1313
# Filter outputs based on requested modalities
if requested_modalities is not None and omni_outputs.final_output_type not in requested_modalities:
    logger.warning(f"final output type: {omni_outputs.final_output_type} is not needed by the request")
    continue


P2: Preserve usage when filtering modalities

For non-streaming requests that set modalities to exclude text (e.g., ["audio"]), this filter skips the text omni_outputs, which is the only path that populates usage/prompt_token_ids/kv_transfer_params. The response then returns zero usage even though tokens were consumed, while streaming still reports prompt tokens, so clients that depend on usage will see a regression. Consider deriving usage from omni_outputs.request_output even when text output is filtered.
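
One way to implement that suggestion, sketched against the snippet above; usage_info and build_usage are hypothetical names for illustration, not code from this PR:

# Hypothetical sketch of the suggested fix: even when the text output is
# filtered out, derive token accounting from it first, since it is the only
# output that populates usage. `usage_info` and `build_usage` are invented
# names; the surrounding variables follow the snippet above.
if requested_modalities is not None and omni_outputs.final_output_type not in requested_modalities:
    if omni_outputs.final_output_type == "text":
        # Capture usage from the underlying request output before skipping.
        usage_info = build_usage(omni_outputs.request_output)
    logger.warning(f"final output type: {omni_outputs.final_output_type} is not needed by the request")
    continue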


@ZJY0516 changed the title from 修复 ("fix") to [bugfix] support text + audio mixed output on Jan 19, 2026
@hsliuustc0106 (Collaborator) commented:

@fake0fan PTAL

@fake0fan (Contributor) commented:

Okay, I understand. However, if I really want to output both audio and text, how should I configure it? Should I set it to text+audio? I haven't found any related attempts in the current code. Could you try it?

@GG-li (Contributor, Author) commented Jan 20, 2026

> Okay, I understand. However, if I really want to output both audio and text, how should I configure it? Should I set it to text+audio? I haven't found any related attempts in the current code. Could you try it?

OK, I understand

@GG-li (Contributor, Author) commented Jan 20, 2026

> Okay, I understand. However, if I really want to output both audio and text, how should I configure it? Should I set it to text+audio? I haven't found any related attempts in the current code. Could you try it?

I find that no modifications are required; simply set "modalities": ["text","audio"], like this:

curl http://localhost:8888/v1/chat/completions -H "Content-Type: application/json" -d '{"model": "Qwen/Qwen3-Omni-30B-A3B-Instruct", "messages": [{"role": "user", "content": "Describe vLLM in brief."}], "modalities": ["text","audio"]}' 
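
For completeness, a short Python sketch for consuming the mixed response (same assumptions as the snippet in the test plan; text and audio may arrive as separate entries in choices, as in the pre-fix output above, so the sketch iterates over all of them):

# Sketch: read both modalities. Assumes text arrives in message.content and
# audio as base64 WAV in message.audio.data, per the JSON outputs above.
import base64

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8888/v1", api_key="EMPTY")
resp = client.chat.completions.create(
    model="Qwen/Qwen3-Omni-30B-A3B-Instruct",
    messages=[{"role": "user", "content": "Describe vLLM in brief."}],
    modalities=["text", "audio"],
)

for choice in resp.choices:
    msg = choice.message
    if msg.content:
        print(msg.content)
    if msg.audio is not None:
        with open("reply.wav", "wb") as f:
            f.write(base64.b64decode(msg.audio.data))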

@hsliuustc0106 added the ready label (which triggers buildkite CI) on Jan 20, 2026
@fake0fan (Contributor) left a review:

LGTM

@david6666666 (Collaborator) commented:

LGTM

@david6666666 merged commit 0aa72b9 into vllm-project:main on Jan 21, 2026
7 checks passed