
Commit 6b9a997

[MM][Model] Remove Qwen3-VL modeling files (#4577)
### What this PR does / why we need it? Following #4349, remove Qwen3-VL modeling files. ### Does this PR introduce _any_ user-facing change? ### How was this patch tested? - vLLM version: v0.11.2 - vLLM main: https://github.com/vllm-project/vllm/commit/v0.11.2 --------- Signed-off-by: shen-shanshan <[email protected]> Signed-off-by: Shanshan Shen <[email protected]>
1 parent a9c4b86 commit 6b9a997

File tree

5 files changed: +253 −273 lines changed


vllm_ascend/models/__init__.py

Lines changed: 0 additions & 8 deletions
```diff
@@ -2,14 +2,6 @@


 def register_model():
-    ModelRegistry.register_model(
-        "Qwen3VLMoeForConditionalGeneration",
-        "vllm_ascend.models.qwen3_vl:AscendQwen3VLMoeForConditionalGeneration")
-
-    ModelRegistry.register_model(
-        "Qwen3VLForConditionalGeneration",
-        "vllm_ascend.models.qwen3_vl:AscendQwen3VLForConditionalGeneration")
-
     # There is no PanguProMoEForCausalLM in vLLM, so we should register it before vLLM config initialization
     # to make sure the model can be loaded correctly. This register step can be removed once vLLM support PanguProMoEForCausalLM.
     ModelRegistry.register_model(
```
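The registrations removed above map an architecture name string to a `"module.path:ClassName"` target, which lets the registry defer importing heavy modeling files until a model is actually requested. The following is a minimal stand-in sketch of that string-keyed, lazy-import pattern; `ModelRegistrySketch` and its methods are hypothetical names, not vLLM's actual implementation.

```python
# Minimal sketch of a string-keyed model registry in the spirit of vLLM's
# ModelRegistry. The "package.module:ClassName" value is parsed lazily, so
# registering a model does not import its (possibly heavy) modeling module.
import importlib


class ModelRegistrySketch:
    def __init__(self) -> None:
        self._models: dict[str, str] = {}

    def register_model(self, arch: str, qualname: str) -> None:
        # qualname has the form "package.module:ClassName"; store it unresolved.
        self._models[arch] = qualname

    def resolve(self, arch: str):
        # The import happens only when the class is actually needed.
        module_path, _, class_name = self._models[arch].partition(":")
        return getattr(importlib.import_module(module_path), class_name)


registry = ModelRegistrySketch()
# Register a stdlib class under a made-up architecture name to show the mechanics.
registry.register_model("OrderedDictModel", "collections:OrderedDict")
cls = registry.resolve("OrderedDictModel")
print(cls.__name__)  # OrderedDict
```

Because overriding a registration (as `vllm_ascend` did for Qwen3-VL) is just rewriting a dictionary entry, removing the override, as this PR does, falls back to whatever upstream vLLM registered for the same architecture name.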

vllm_ascend/models/qwen3_vl.py

Lines changed: 0 additions & 264 deletions
This file was deleted.

vllm_ascend/patch/worker/__init__.py

Lines changed: 1 addition & 0 deletions
```diff
@@ -29,4 +29,5 @@
 import vllm_ascend.patch.worker.patch_minicpm  # noqa
 import vllm_ascend.patch.worker.patch_qwen2_5_vl  # noqa
 import vllm_ascend.patch.worker.patch_qwen2_5_omni  # noqa
+import vllm_ascend.patch.worker.patch_qwen3_vl  # noqa
 import vllm_ascend.patch.worker.patch_rope  # noqa
```

vllm_ascend/patch/worker/patch_qwen2_5_vl.py

Lines changed: 1 addition & 1 deletion
```diff
@@ -65,7 +65,7 @@ def forward(
         rotary_pos_emb_cos: torch.Tensor,
         rotary_pos_emb_sin: torch.Tensor,
         max_seqlen: torch.Tensor,
-        seqlens: torch.Tensor,
+        seqlens: torch.Tensor = None,
     ) -> torch.Tensor:
         # [s, b, c] --> [s, b, head * 3 * head_dim]
         x, _ = self.qkv(x)
```
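This one-line change gives `seqlens` a default of `None`, so callers that do not precompute per-sequence lengths can omit the argument. A common companion pattern (sketched below in plain Python rather than on `torch.Tensor`s) is to derive the lengths from cumulative sequence boundaries when the caller leaves the parameter unset; `derive_seqlens` and `cu_seqlens` here are hypothetical illustration names, not the patched module's API.

```python
# Sketch of why an optional seqlens parameter is convenient: when only
# cumulative boundaries (cu_seqlens) are available, per-sequence lengths can
# be derived on demand. Plain-Python stand-in for the tensor version.
from typing import Optional


def derive_seqlens(cu_seqlens: list[int],
                   seqlens: Optional[list[int]] = None) -> list[int]:
    if seqlens is not None:
        return seqlens  # caller supplied explicit lengths; use them as-is
    # Differences of consecutive cumulative offsets give each sequence length.
    return [b - a for a, b in zip(cu_seqlens, cu_seqlens[1:])]


print(derive_seqlens([0, 3, 7, 12]))        # [3, 4, 5]
print(derive_seqlens([0, 3], seqlens=[3]))  # [3]
```

Note that `torch.Tensor = None` is a loose annotation; the stricter spelling is `Optional[torch.Tensor] = None`, but the runtime behavior is identical since Python does not enforce annotations.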

0 commit comments
