
Conversation

@youkaichao (Member):

Our final goal is to remove these custom ops, except for the attention op.

Many custom ops are just manual fusions, and we expect torch.compile to do a better job.

However, torch.compile currently costs more memory.

This PR adds a new flag to test the behavior; a sketch of the idea follows below.
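To make the mechanism concrete, here is a minimal sketch (not vLLM's actual implementation) of how an environment flag can gate which implementation of an op torch.compile sees. The `SiluAndMul` decomposition is used as an illustrative example; the flag-reading and dispatch wiring shown here are assumptions for illustration.

```python
import os
import torch
import torch.nn.functional as F

# Hypothetical reading of the flag this PR introduces; the real wiring
# inside vLLM may differ.
NO_CUSTOM_OPS = os.environ.get("VLLM_TEST_COMPILE_NO_CUSTOM_OPS", "0") == "1"

class SiluAndMul(torch.nn.Module):
    """Gated activation from LLaMA-style MLPs, as an example op."""

    def forward_native(self, x: torch.Tensor) -> torch.Tensor:
        # Plain PyTorch decomposition; torch.compile is free to fuse this.
        d = x.shape[-1] // 2
        return F.silu(x[..., :d]) * x[..., d:]

    def forward_cuda(self, x: torch.Tensor) -> torch.Tensor:
        # Stand-in for the hand-written fused kernel; the real one lives in
        # vLLM's CUDA extension, so we fall back to the native path here.
        return self.forward_native(x)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The flag decides which implementation the compiler traces.
        if NO_CUSTOM_OPS:
            return self.forward_native(x)
        return self.forward_cuda(x)
```

With the flag set, torch.compile traces only plain PyTorch ops and can attempt its own fusion; without it, the opaque custom kernel is kept.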

current

`VLLM_TEST_DYNAMO_GRAPH_CAPTURE=0 pytest -v -s tests/compile/test_full_graph.py` is the current default behavior: torch.compile is not used.

INFO 09-14 10:11:08 gpu_executor.py:122] # GPU blocks: 27911, # CPU blocks: 2048

compile + custom op

`pytest -v -s tests/compile/test_full_graph.py` tests torch.compile with vLLM custom ops:

INFO 09-14 10:03:27 gpu_executor.py:122] # GPU blocks: 27887, # CPU blocks: 2048

compile without custom op

`VLLM_TEST_COMPILE_NO_CUSTOM_OPS=1 pytest -v -s tests/compile/test_full_graph.py` tests torch.compile without vLLM custom ops:

INFO 09-14 10:05:03 gpu_executor.py:122] # GPU blocks: 27601, # CPU blocks: 2048

summary

with compile, we lose 24 blocks (27911 → 27887).

without custom ops, we lose a further 286 blocks (27887 → 27601).

note: 2048 CPU blocks translate into 4 GB of CPU memory, so every block is 2 MB.
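For concreteness, converting the block deltas above into memory, using only the numbers from the logs:

```python
# Sanity check on the reported block counts.
mb_per_block = 4 * 1024 / 2048        # 4 GB over 2048 CPU blocks = 2 MB/block

default_blocks = 27911                # no torch.compile
compile_blocks = 27887                # torch.compile + custom ops
no_custom_blocks = 27601              # torch.compile, no custom ops

print((default_blocks - compile_blocks) * mb_per_block)    # 48.0 MB for compile
print((compile_blocks - no_custom_blocks) * mb_per_block)  # 572.0 MB for dropping custom ops
```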

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small and essential subset of tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR
  • Enable auto-merge.

🚀

@youkaichao youkaichao merged commit 47790f3 into vllm-project:main Sep 14, 2024
@youkaichao youkaichao deleted the custom_op_flag branch September 14, 2024 20:07
Alvant pushed a commit to compressa-ai/vllm that referenced this pull request Oct 26, 2024
garg-amit pushed a commit to garg-amit/vllm that referenced this pull request Oct 28, 2024
LeiWang1999 pushed a commit to LeiWang1999/vllm-bitblas that referenced this pull request Mar 26, 2025
