[CPU] Refactor CPU unquantized linear #24150
Conversation
Signed-off-by: jiang1.li <[email protected]>
Code Review
This pull request refactors the CPU unquantized linear operations to enable weight prepacking using oneDNN, which yields significant performance improvements as shown in the benchmarks. The changes are well-structured and introduce a DNNLScratchPadManager to handle memory for oneDNN primitives.
My review focuses on memory management within the new DNNLScratchPadManager. I've identified critical memory leaks that need to be addressed. Specifically, the manager leaks memory upon reallocation and lacks a destructor to free its buffer upon program exit.
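For context, oneDNN's user-managed scratchpad pattern — which a class like DNNLScratchPadManager exists to support — typically looks like the sketch below: the primitive is created with `scratchpad_mode::user`, the manager guarantees a buffer at least `pd.scratchpad_desc().get_size()` bytes large, and that buffer is bound to `DNNL_ARG_SCRATCHPAD` at execution time. The wrapper function and its signature here are hypothetical and only illustrate the pattern, not the PR's actual code; only the `dnnl::` calls are real API.

```cpp
// Minimal sketch of oneDNN's user-managed scratchpad flow (hypothetical
// wrapper). The primitive descriptor `pd` is assumed to have been created with
// an attr that sets dnnl::scratchpad_mode::user.
#include <dnnl.hpp>
#include <unordered_map>

void run_matmul_with_user_scratchpad(const dnnl::engine& eng,
                                     dnnl::stream& strm,
                                     const dnnl::matmul::primitive_desc& pd,
                                     const dnnl::memory& src,
                                     const dnnl::memory& wei,
                                     const dnnl::memory& dst,
                                     void* scratchpad_buffer) {
  // scratchpad_buffer must be at least pd.scratchpad_desc().get_size() bytes;
  // maintaining that invariant is the job of a scratchpad manager.
  dnnl::memory scratchpad(pd.scratchpad_desc(), eng, scratchpad_buffer);
  dnnl::matmul matmul_prim(pd);
  std::unordered_map<int, dnnl::memory> args = {
      {DNNL_ARG_SRC, src},
      {DNNL_ARG_WEIGHTS, wei},
      {DNNL_ARG_DST, dst},
      {DNNL_ARG_SCRATCHPAD, scratchpad},
  };
  matmul_prim.execute(strm, args);
}
```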
```cpp
void DNNLScratchPadManager::realloc(size_t new_size) {
  new_size = round(new_size);
  if (new_size > size_) {
    ptr_ = std::aligned_alloc(64, new_size);
    size_ = new_size;
  }
}
```
This realloc implementation has a memory leak. When a larger buffer is needed, it allocates a new one but doesn't free the old buffer pointed to by ptr_.
Additionally, std::aligned_alloc can return nullptr on allocation failure, which is not handled and could lead to crashes.
Please ensure the old buffer is freed and that allocation failures are handled. It's safer to allocate the new buffer first before freeing the old one.
Suggested change:
```cpp
void DNNLScratchPadManager::realloc(size_t new_size) {
  new_size = round(new_size);
  if (new_size > size_) {
    void* new_ptr = std::aligned_alloc(64, new_size);
    if (!new_ptr) {
      throw std::bad_alloc();
    }
    if (ptr_) {
      std::free(ptr_);
    }
    ptr_ = new_ptr;
    size_ = new_size;
  }
}
```
```cpp
static DNNLScratchPadManager* get_dnnl_scratchpad_manager();

DNNLScratchPadManager();
```
The DNNLScratchPadManager class allocates memory via std::aligned_alloc and manages it with the raw pointer ptr_, but it's missing a destructor. This will cause a memory leak when the static manager instance in get_dnnl_scratchpad_manager is destroyed at program exit.
To fix this, please add a destructor that frees the allocated memory.
Suggested change:
```cpp
~DNNLScratchPadManager() {
  if (ptr_) {
    std::free(ptr_);
  }
}
```
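As a side note on the design choice (not part of the suggested change): holding the buffer in a `std::unique_ptr` with a `std::free`-based deleter would remove the need for a hand-written destructor and close the realloc leak in one place. A minimal sketch follows, with a hypothetical class name, and assuming sizes are already rounded to a multiple of the 64-byte alignment as the PR's `round()` helper does.

```cpp
#include <cstdlib>
#include <memory>
#include <new>

class ScratchPadBuffer {  // hypothetical name, illustrating the idea only
 public:
  void ensure_capacity(size_t new_size) {  // new_size assumed pre-rounded
    if (new_size <= size_) return;
    void* p = std::aligned_alloc(64, new_size);
    if (!p) throw std::bad_alloc();
    buf_.reset(p);  // frees the previous buffer, if any
    size_ = new_size;
  }
  void* data() const { return buf_.get(); }

 private:
  struct FreeDeleter {
    void operator()(void* p) const { std::free(p); }
  };
  std::unique_ptr<void, FreeDeleter> buf_;
  size_t size_ = 0;
};
```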
Signed-off-by: jiang1.li <[email protected]>
Force-pushed from cac013b to a8790f2
Signed-off-by: jiang1.li <[email protected]>
Signed-off-by: jiang1.li <[email protected]>
Signed-off-by: jiang1.li <[email protected]>
…ACL and weight prepack

vllm-project#24150 introduced weight prepack and a direct oneDNN path for linear ops. This path is not currently active for AArch64, i.e. linears are dispatched to PyTorch, and as a result we have to pack weights each time a torch.linear op is executed.

This PR enables weight prepack and dispatches non-quantized linear ops to oneDNN only if oneDNN was built with Arm Compute Library (ACL) as its backend. If oneDNN was built without ACL, we follow the current path where linears go through PyTorch, as this is still much faster than oneDNN without ACL.

I had to make the following changes to the current oneDNN matmul path to make it compatible with ACL:
- oneDNN/ACL matmul does not support runtime dimensions -> pass a default M=128 and input stride=K when creating the matmul primitive descriptor.
- oneDNN/ACL matmul does not support passing a bias -> c=matmul(a,b)+bias is handled as c=bias; c+=matmul(a,b) by attaching a fused sum post-op to the matmul primitive (see the sketch after this message).
- oneDNN/ACL matmul does not support non-contiguous source tensors -> we make sure that source tensors are contiguous.
- The oneDNN/ACL matmul API allows the weight format to change when the input dimensions change, so we now check at execute time whether we need to pack again. In practice the ACL weight format does not tend to change, so this won't be a performance issue; the check is needed only because the API allows such changes.

This PR also ensures that the current cmake arg for building oneDNN with the ACL backend (VLLM_BUILD_ACL) is not discarded by setup.py.

Test Plan: tests/kernels/test_onednn.py exercises the oneDNN path for linear ops. All tests pass with my changes when oneDNN is built with or without the ACL backend.

Performance: On 16 Neoverse-V2 cores, this PR yields ~78% higher throughput (with oneDNN built with the ACL backend) than the current default path in the throughput benchmark for meta-llama/Llama-3.1-8B-Instruct, executed as follows:
```
LD_PRELOAD="/usr/lib/aarch64-linux-gnu/libtcmalloc_minimal.so.4:/usr/lib/aarch64-linux-gnu/libgomp.so.1" VLLM_TARGET_DEVICE=cpu VLLM_CPU_KVCACHE_SPACE=32 taskset -c 0-15 vllm bench throughput --num-prompts 64 --seed 0 \
  --dataset-name sharegpt --dataset-path ShareGPT_V3_unfiltered_cleaned_split_no_imsorry.json --max-model-len 4096 --model meta-llama/Llama-3.1-8B-Instruct
```

Future PRs will look into building oneDNN with the ACL backend by default where appropriate.

Signed-off-by: Fadi Arafeh <[email protected]>
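To make the bias-via-sum-post-op and fixed-dimension points above concrete, here is a minimal sketch using the oneDNN v3.x C++ API. The function name, data types, and dimensions are illustrative assumptions rather than the PR's actual code; it only shows how a matmul primitive descriptor can be built with concrete (non-runtime) dimensions and a fused sum post-op so that dst = bias; dst += matmul(src, weights).

```cpp
// Hedged sketch: ACL-friendly matmul primitive descriptor (illustrative only).
#include <dnnl.hpp>

dnnl::matmul::primitive_desc make_acl_style_matmul_pd(const dnnl::engine& eng,
                                                      int64_t M, int64_t K,
                                                      int64_t N) {
  using dt = dnnl::memory::data_type;
  using tag = dnnl::memory::format_tag;

  // ACL does not accept DNNL_RUNTIME_DIM_VAL, so M must be a concrete value
  // here (the PR passes a default M=128 when creating the descriptor).
  dnnl::memory::desc src_md({M, K}, dt::f32, tag::ab);
  // format_tag::any lets oneDNN/ACL pick a blocked (prepacked) weight layout.
  dnnl::memory::desc wei_md({K, N}, dt::f32, tag::any);
  dnnl::memory::desc dst_md({M, N}, dt::f32, tag::ab);

  // Fused sum post-op: the destination is pre-filled with the broadcast bias,
  // and the matmul result is accumulated on top of it (c = bias; c += a*b).
  dnnl::post_ops ops;
  ops.append_sum(1.0f);
  dnnl::primitive_attr attr;
  attr.set_post_ops(ops);

  return dnnl::matmul::primitive_desc(eng, src_md, wei_md, dst_md, attr);
}
```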
Purpose
(Benchmark results for main vs. this PR omitted.)
Test Plan
Unit tests
Test Result
Essential Elements of an Effective PR Description Checklist
supported_models.md and examples for a new model.