
Conversation

@jikunshang (Collaborator) commented Nov 17, 2025

Purpose

#27126 added an import of SequenceParallelismPass, which breaks the XPU path. We should follow L16-L20 and import it only on cuda_alike platforms.
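
A minimal sketch of the guarded import described above, assuming the pass lives in `vllm.compilation.sequence_parallelism` and that the guard uses `current_platform.is_cuda_alike()` (the module path and guard placement are illustrative, not copied from the diff):

```python
# Illustrative sketch only; the real file follows the existing L16-L20
# pattern, so the exact location and surrounding imports may differ.
from vllm.platforms import current_platform

if current_platform.is_cuda_alike():
    # SequenceParallelismPass depends on CUDA/ROCm-only pieces, so it is
    # imported only on cuda_alike platforms. XPU and other platforms never
    # evaluate this import and therefore no longer hit the import error.
    from vllm.compilation.sequence_parallelism import SequenceParallelismPass
```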

Test Plan

CI.

Test Result


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.
  • (Optional) Release notes update. If your change is user facing, please update the release notes draft in the Google Doc.

@gemini-code-assist (bot, Contributor) left a comment

Code Review

This pull request addresses an import error for SequenceParallelismPass on non-CUDA platforms by making the import conditional. While this fixes the immediate issue, it introduces a potential runtime NameError if sequence parallelism is enabled on an unsupported platform. I've provided a suggestion to create a dummy pass for unsupported platforms, which makes the code more robust by preventing crashes and gracefully disabling the feature with a warning. This approach is more resilient to configuration errors.
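
The suggested fallback was not part of the merged change, but a rough sketch of what such a dummy pass could look like follows (the class body, logger usage, and callable interface are assumptions for illustration, not taken from the review suggestion):

```python
import logging

logger = logging.getLogger(__name__)

try:
    from vllm.compilation.sequence_parallelism import SequenceParallelismPass
except ImportError:
    class SequenceParallelismPass:  # type: ignore[no-redef]
        """Hypothetical no-op stand-in for platforms without the real pass."""

        def __init__(self, *args, **kwargs):
            logger.warning(
                "SequenceParallelismPass is unavailable on this platform; "
                "sequence parallelism will be disabled.")

        def __call__(self, graph):
            # Leave the graph untouched instead of failing with a NameError
            # when sequence parallelism is enabled on an unsupported platform.
            return graph
```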

@Yikun added the ready label ("ONLY add when PR is ready to merge/full CI is needed") Nov 17, 2025
@Yikun (Member) left a comment

Thanks for fixing; we also validated this with vllm-ascend. This should also be backported to the release/v0.11.1 branch.

@ywang96 (Member) commented Nov 17, 2025

Thanks for fixing; we also validated this with vllm-ascend. This should also be backported to the release/v0.11.1 branch.

Sounds good - will cherry pick there!

@ywang96 enabled auto-merge (squash) November 17, 2025 11:47
@ywang96 merged commit 1b82fb0 into vllm-project:main Nov 17, 2025
49 checks passed
ywang96 pushed a commit that referenced this pull request Nov 17, 2025
bigPYJ1151 pushed a commit that referenced this pull request Nov 25, 2025
bringlein pushed a commit to bringlein/vllm that referenced this pull request Nov 26, 2025
devpatelio pushed a commit to SumanthRH/vllm that referenced this pull request Nov 29, 2025
kitaekatt pushed a commit to kitaekatt/vllm that referenced this pull request Dec 1, 2025