
[Bug-fix] Fix Bugs in Qwen3/Qwen2.5 Omni Rebased Support#114

Merged
Gaohan123 merged 6 commits into vllm-project:main from tzhouam:feat/Fix-Qwen3/2.5-Omni-Rebase-Bugs
Nov 30, 2025

Conversation

@tzhouam
Collaborator

@tzhouam tzhouam commented Nov 30, 2025

Purpose

This PR fixes two bugs in the Qwen 3 Omni and Qwen 2.5 Omni support:

  1. The pipeline stage got stuck retrieving data from the queue.
  2. A multimodal input list error for Qwen 2.5 Omni.

Test Plan

Test both models on all modalities they support.

For Qwen 2.5:

export PYTHONPATH="<YOUR VLLM OMNI DIR>"
# export HF_HOME="<YOUR HF HOME DIR>"
cd vllm_omni/model_executor/models/qwen2_5_omni

Modify the "--query-type" argument in run_single_prompt.sh; supported values are mixed_modalities, use_audio_in_video, multi_audios, and text.

bash run_single_prompt.sh

For Qwen 3:

export PYTHONPATH="<YOUR VLLM OMNI DIR>"
# export HF_HOME="<YOUR HF HOME DIR>"
cd vllm_omni/model_executor/models/qwen3_omni

Modify the "--query-type" argument in run_single_prompt.sh; supported values are text, use_audio, use_image, and use_video.

bash run_single_prompt.sh
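The manual steps above (set --query-type, run the script, repeat per modality) can be scripted. A minimal sketch, assuming run_single_prompt.sh accepts --query-type as shown in the test plan; the sweep_query_types helper is hypothetical and here only prints the commands it would run:

```shell
# Hypothetical helper: print the run_single_prompt.sh invocation for each
# query type so a full sweep can be scripted or inspected before running.
# Flag name and values come from the test plan above; the loop is a sketch.
sweep_query_types() {
    for qt in "$@"; do
        echo "bash run_single_prompt.sh --query-type $qt"
    done
}

# Qwen 2.5 Omni query types:
sweep_query_types mixed_modalities use_audio_in_video multi_audios text
# Qwen 3 Omni query types:
sweep_query_types text use_audio use_image use_video
```

Replacing the echo with a direct invocation would execute the whole sweep for one model directory.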

Test Result

Qwen 3 Omni passed on all supported modalities.
Qwen 2.5 Omni passed on all supported modalities.

Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft.


…ng in omni_stage.py; add M-RoPE position initialization in gpu_model_runner.py
@tzhouam tzhouam changed the title [Bug-fix] Update model name in end2end.py and optimize output handli… [Bug-fix] Fix bug in Qwen3/Qwen2 Rebase Nov 30, 2025
@tzhouam tzhouam changed the title [Bug-fix] Fix bug in Qwen3/Qwen2 Rebase [Bug-fix] Fix bug in Qwen3/Qwen2.5 Omni Rebased Support Nov 30, 2025
@tzhouam tzhouam changed the title [Bug-fix] Fix bug in Qwen3/Qwen2.5 Omni Rebased Support [Bug-fix] Fix Bugs in Qwen3/Qwen2.5 Omni Rebased Support Nov 30, 2025

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines 638 to 642
# except Exception as e:
#     _logging.getLogger(__name__).exception("[Stage-%s] Failed on batch %s: %s", stage_id, batch_request_ids, e)
#     for rid in batch_request_ids:
#         out_q.put(
#             {


P1 Badge Restore stage-level error propagation

The per-batch try/except that previously caught failures in stage_engine.generate and the result-emission loop is now commented out, so any exception during generation or serialization will bubble out of _stage_worker and tear down the stage process without enqueuing {error: ...} responses. When a model throws (e.g., invalid multimodal input or SHM dump failure), the orchestrator will wait indefinitely for results because no downstream error is produced.
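The pattern the review asks to restore can be sketched as follows. This is an illustrative sketch, not vllm-omni's actual code: the names stage_engine, out_q, batch_request_ids, and stage_id mirror the discussion above, but the run_batch wrapper and the message shapes are assumptions.

```python
import logging

log = logging.getLogger(__name__)

def run_batch(stage_engine, out_q, batch_request_ids, batch_inputs, stage_id):
    """Per-batch error propagation sketch: on failure, enqueue an {error: ...}
    message for every request in the batch instead of letting the exception
    tear down the stage worker process."""
    try:
        # Generate results and emit one message per request in the batch.
        for rid, result in zip(batch_request_ids,
                               stage_engine.generate(batch_inputs)):
            out_q.put({"request_id": rid, "output": result})
    except Exception as e:
        # Propagate the failure downstream so the orchestrator can fail the
        # affected requests rather than waiting forever on the queue.
        log.exception("[Stage-%s] Failed on batch %s: %s",
                      stage_id, batch_request_ids, e)
        for rid in batch_request_ids:
            out_q.put({"request_id": rid, "error": str(e)})
```

With this in place, a model error (e.g. invalid multimodal input) surfaces as an error message per request instead of a silently dead stage.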


@tzhouam tzhouam requested review from Gaohan123 and hsliuustc0106 and removed request for hsliuustc0106 November 30, 2025 09:36
Collaborator

@Gaohan123 Gaohan123 left a comment


lgtm, thanks!

@Gaohan123 Gaohan123 merged commit 228f4e5 into vllm-project:main Nov 30, 2025
3 checks passed
princepride pushed a commit to princepride/vllm-omni that referenced this pull request Jan 10, 2026