Conversation

@CRZbulabula
Contributor

This PR works toward completing #9589 and #9632 by adding torch.compile annotations to some MoE models and testing whether they pass compilation.
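
For reference, the annotation is a one-line class decorator on the model's top-level nn.Module. Below is a minimal sketch of the pattern, assuming vLLM is installed and that support_torch_compile is importable from vllm.compilation.decorators; the toy class and its forward signature are illustrative only, and the decorator's exact requirements may differ between vLLM versions.

```python
# Minimal sketch of the annotation pattern (illustrative, not the actual PR diff).
# Assumes support_torch_compile is importable from vllm.compilation.decorators.
import torch
import torch.nn as nn

from vllm.compilation.decorators import support_torch_compile


@support_torch_compile
class ToyMoEModel(nn.Module):
    # The forward signature mirrors the decorated vLLM models
    # (input_ids, positions); the body is elided in this sketch.
    def forward(self, input_ids: torch.Tensor,
                positions: torch.Tensor) -> torch.Tensor:
        ...
```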

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can do one of the following:

  • Add the ready label to the PR
  • Enable auto-merge.

🚀

@CRZbulabula CRZbulabula marked this pull request as draft October 28, 2024 13:07


@support_torch_compile
class PhiMoEModel(nn.Module):
Member

For this model, it seems that running it directly with -tp=2 fails. The error is:

Failed: Cuda error /workspace/csrc/custom_all_reduce.cuh:336 'invalid argument'

Need to investigate this later.
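
For context, -tp=2 corresponds to tensor-parallel size 2. A hedged repro sketch using the offline LLM API is below; the Phi-3.5-MoE checkpoint name and the prompt are assumptions, not taken from this thread.

```python
# Hedged repro sketch for the -tp=2 failure described above.
# The checkpoint name and prompt are assumptions, not from this thread.
from vllm import LLM, SamplingParams

llm = LLM(
    model="microsoft/Phi-3.5-MoE-instruct",  # assumed PhiMoE checkpoint
    tensor_parallel_size=2,                  # the -tp=2 setting from the comment
    trust_remote_code=True,
)
outputs = llm.generate(["Hello"], SamplingParams(max_tokens=8))
print(outputs[0].outputs[0].text)
```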

Member

note: this is unrelated to torch.compile

@youkaichao youkaichao marked this pull request as ready for review October 28, 2024 20:29


@support_torch_compile
class ArcticModel(nn.Module):
Member

To run this model successfully on an H100, I have to change the config:

"hidden_size": 512,
"intermediate_size": 512,
"num_key_value_heads": 8,
"num_attention_heads": 8,
"num_local_experts": 4,

Initially, I wanted to simply change "num_hidden_layers": 35 to "num_hidden_layers": 2, but I hit various random illegal-memory-access errors, possibly caused by the fused MoE kernel with extremely large input sizes.
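
A hedged sketch of producing such a shrunken config with transformers is below; the Arctic checkpoint id is an assumption, and the overridden values simply mirror the list above.

```python
# Hedged sketch: shrink the Arctic config for a single-GPU (H100) smoke test.
# The checkpoint id is an assumption; the values mirror the overrides above.
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained(
    "Snowflake/snowflake-arctic-instruct",  # assumed Arctic checkpoint
    trust_remote_code=True,
)
cfg.hidden_size = 512
cfg.intermediate_size = 512
cfg.num_key_value_heads = 8
cfg.num_attention_heads = 8
cfg.num_local_experts = 4
print(cfg)
```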

@youkaichao youkaichao left a comment (Member)

Thanks for the great effort!

@youkaichao youkaichao merged commit aa0addb into vllm-project:main Oct 28, 2024
@CRZbulabula CRZbulabula deleted the torch-compile-moe branch October 29, 2024 01:06
FerdinandZhong pushed a commit to FerdinandZhong/vllm that referenced this pull request Oct 29, 2024
rasmith pushed a commit to rasmith/vllm that referenced this pull request Oct 30, 2024
JC1DA pushed a commit to JC1DA/vllm that referenced this pull request Nov 11, 2024
sumitd2 pushed a commit to sumitd2/vllm that referenced this pull request Nov 14, 2024
sleepwalker2017 pushed a commit to sleepwalker2017/vllm that referenced this pull request Dec 13, 2024
LeiWang1999 pushed a commit to LeiWang1999/vllm-bitblas that referenced this pull request Mar 26, 2025