
Conversation

@wuxun-zhang commented Sep 28, 2025

For some backends, such as Gaudi, the input to the first MoE layer can be 3D (bs, seqlen, hdim).
So when SP MoE is enabled (#24982), num_tokens may get the wrong value.
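
As a hedged illustration of the fix (a minimal sketch, not the exact vLLM diff; the function name and surrounding code are assumptions), the point is to flatten `hidden_states` before reading `num_tokens`, so 1D, 2D, and 3D inputs all yield a correct token count:

```python
import torch

def moe_forward(hidden_states: torch.Tensor) -> torch.Tensor:
    # On Gaudi the input may be 3D (bs, seqlen, hdim); shape[0] would then
    # be the batch size, not the token count. Flatten leading dims first.
    hidden_dim = hidden_states.shape[-1]
    hidden_states = hidden_states.view(-1, hidden_dim)
    # Now shape[0] is the true token count for 1D/2D/3D inputs alike.
    num_tokens = hidden_states.shape[0]
    ...
```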

cc @tlrmchlsmth


@github-actions commented

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, which exercises a small and essential subset of tests to quickly catch errors.

You can ask your reviewers to trigger select CI tests on top of fastcheck CI.

Once the PR is approved and ready to go, your PR reviewer(s) can run full CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

If you have any questions, please reach out to us on Slack at https://slack.vllm.ai.

🚀

@gemini-code-assist (bot, Contributor) left a comment

Code Review

This pull request correctly fixes a bug in the GraniteMoeMoE layer where num_tokens was miscalculated when sequence parallelism is enabled with 3D input tensors. The change ensures num_tokens is derived after flattening the hidden_states tensor, which is the correct approach for handling inputs of various dimensions. The fix is sound and resolves the issue described. I have not found any high or critical severity issues in the proposed changes.

@wuxun-zhang (Author) commented

@tlrmchlsmth Hi, could you please take a look at this? I am not able to trigger CI.

@jikunshang added the ready label (ONLY add when PR is ready to merge/full CI is needed) on Sep 29, 2025
@jikunshang (Collaborator) commented

@tlrmchlsmth please take another look, thanks!

@wuxun-zhang (Author) commented

Update: for Gaudi, we are going to follow the same assumption (2D hidden states) as the community. Currently we have a flag for this, so this is not a hard requirement.

But I think this is still a valid fix for 1D hidden states (as mentioned in the code comment), though I'm not sure in what scenario hidden states would be 1D.
@tlrmchlsmth Please help take a look here, thanks.

xuechendi pushed a commit to vllm-project/vllm-gaudi that referenced this pull request Sep 30, 2025
After vllm-project/vllm#24982 merged, sequence
parallel MoE will be turned on when `enable_expert_parallel=True`,
`tp_size > 1`, and `dp_size > 1`. Since for Gaudi there is no choice of
`VLLM_ALL2ALL_BACKEND`, we cannot easily bypass it, so this PR adds
support for the feature.

```python
class ParallelConfig:

    @property
    def use_sequence_parallel_moe(self) -> bool:
        return (envs.VLLM_ALL2ALL_BACKEND
                in ("allgather_reducescatter", "naive",
                    "deepep_high_throughput", "deepep_low_latency")
                and self.enable_expert_parallel
                and self.tensor_parallel_size > 1
                and self.data_parallel_size > 1)
```
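
As a self-contained illustration (a hedged sketch with a stand-in dataclass, not vLLM's actual `ParallelConfig`), the property enables SP MoE only when the backend is in the listed set and all three parallelism conditions hold:

```python
from dataclasses import dataclass

_SP_MOE_BACKENDS = ("allgather_reducescatter", "naive",
                    "deepep_high_throughput", "deepep_low_latency")

@dataclass
class DemoParallelConfig:
    all2all_backend: str          # stand-in for envs.VLLM_ALL2ALL_BACKEND
    enable_expert_parallel: bool
    tensor_parallel_size: int
    data_parallel_size: int

    @property
    def use_sequence_parallel_moe(self) -> bool:
        return (self.all2all_backend in _SP_MOE_BACKENDS
                and self.enable_expert_parallel
                and self.tensor_parallel_size > 1
                and self.data_parallel_size > 1)

# Enabled: supported backend, EP on, tp_size > 1, dp_size > 1.
print(DemoParallelConfig("naive", True, 2, 2).use_sequence_parallel_moe)  # True
# Disabled: dp_size == 1 fails the last condition.
print(DemoParallelConfig("naive", True, 2, 1).use_sequence_parallel_moe)  # False
```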

Update:
No hard requirement on vllm-project/vllm#25828

---------

Signed-off-by: Wuxun Zhang <[email protected]>
iboiko-habana pushed a commit to iboiko-habana/vllm-gaudi that referenced this pull request Oct 2, 2025, with the same commit message as above.

Signed-off-by: Wuxun Zhang <[email protected]>
Signed-off-by: Iryna Boiko <[email protected]>