[ROCm] Apply FP8 weights padding to values not divisible by 512 bytes on ROCm #13231
Changes from all commits:
- bbab81f
- 2205c07
- f3da192
- d3bb507
```diff
@@ -477,7 +477,7 @@ def w8a8_block_fp8_matmul(
     assert triton.cdiv(A.shape[-1], block_k) == As.shape[-1]
     M = A.numel() // A.shape[-1]

-    assert B.ndim == 2 and B.is_contiguous() and Bs.ndim == 2
+    assert B.ndim == 2 and Bs.ndim == 2
```
|
**Collaborator:** Are we sure this is okay?

**Author:** The kernel works just fine with a padded, non-contiguous tensor. In any scenario other than padding, the tensor should already be contiguous, so no existing workflow should break.

**Collaborator:** One other option is just to call `.contiguous()` on the tensor. WDYT?

**Author:** That would remove the padding, reverting the `F.pad` action.

**Collaborator:** Sorry, that was a dumb comment by me.

**Collaborator:** @gshtras I agree.
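Below is a minimal sketch of the pad-then-slice pattern discussed above (not the vLLM code itself; the shapes, the 256-element pad, and the float16 dtype are illustrative stand-ins for the FP8 weights the PR targets). Slicing the padding back off keeps the original logical shape while the padded storage survives, which is why the resulting tensor is non-contiguous:

```python
import torch
import torch.nn.functional as F

# Illustrative weight; the PR pads FP8 weights on ROCm so that each
# row's byte count is divisible by 512.
weight = torch.zeros(4096, 4096, dtype=torch.float16)

# Pad the last dimension, then slice the padding back off. The padded
# storage remains, so the view has shape (4096, 4096) but a row stride
# of 4096 + 256 = 4352, i.e. it is no longer contiguous.
padded = F.pad(weight, (0, 256), mode="constant", value=0)[..., :-256]

assert padded.shape == weight.shape
assert not padded.is_contiguous()
print(padded.stride())  # (4352, 1)
```

Calling `.contiguous()` on `padded` would copy it into compact storage, which is exactly why the suggestion above would have undone the padding.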
```diff
     N, K = B.shape
     assert triton.cdiv(N, block_n) == Bs.shape[0]
     assert triton.cdiv(K, block_k) == Bs.shape[1]
```
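The asserts that remain still pin down the relationship between `B` and its per-block scales `Bs`; only the contiguity requirement was relaxed. A small illustration with made-up sizes:

```python
import math

# Illustrative sizes, not from the PR: B is (N, K) and Bs carries one
# scale per (block_n x block_k) tile, so Bs must have shape
# (ceil(N / block_n), ceil(K / block_k)) -- what triton.cdiv computes.
N, K = 4096, 4096
block_n, block_k = 128, 128

Bs_shape = (math.ceil(N / block_n), math.ceil(K / block_k))
print(Bs_shape)  # (32, 32)
```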
**Collaborator:** Is `empty_cache` really necessary here?

**Author:** Without it, there is a possibility of having double the memory allocated, depending on the allocator behavior.
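As a hypothetical illustration of that point (made-up sizes, GPU-only): `F.pad` materializes a second buffer while the old weight still exists, and once the old tensor is dropped its block is only returned to PyTorch's caching allocator, so reserved device memory stays roughly doubled until the cache is emptied:

```python
import torch
import torch.nn.functional as F

weight = torch.zeros(8192, 8192, dtype=torch.float16, device="cuda")

# F.pad allocates a fresh, larger buffer; rebinding `weight` frees the old
# tensor, but its block stays in the caching allocator.
weight = F.pad(weight, (0, 256), mode="constant", value=0)[..., :-256]

print(torch.cuda.memory_reserved())  # still counts the old weight's cached block
torch.cuda.empty_cache()             # return cached, unused blocks to the device
print(torch.cuda.memory_reserved())  # lower: only the padded weight remains
```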