[hybrid performance] softmax mask fuse op#33841
Merged
ForFishes merged 1 commit into PaddlePaddle:develop, Jul 16, 2021
Conversation
Thanks for your contribution!
Xreki (Contributor) previously approved these changes, Jul 5, 2021:
LGTM for the modification of atol and LGTM for op benchmark CI.
Sorry to inform you that 170cde8's CIs have passed for more than 7 days. To prevent PR conflicts, you need to re-run all CIs manually.
kolinwei approved these changes, Jul 16, 2021
PR types
New features
PR changes
OPs
Describe
Fuse the mask elementwise-add and the softmax into a single op, for use in Transformer models.
General pass: (figure omitted)
Fused pass: (figure omitted)
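The figures above are not reproduced here. Functionally, the general pass issues the elementwise add (scores + mask) and the softmax as two separate kernels, while the fused pass computes both in one. A NumPy sketch of the shared semantics (shapes and names are illustrative, not from the PR):

```python
import numpy as np

def masked_softmax_ref(scores, mask):
    """Reference semantics of the fused op: softmax(scores + mask)
    over the last axis.

    scores: attention logits, e.g. [batch, heads, seq_q, seq_k]
    mask:   additive mask, broadcastable to scores' shape
    """
    s = scores + mask                         # the add the fused op absorbs
    s = s - s.max(axis=-1, keepdims=True)     # subtract max for fp16 stability
    e = np.exp(s)
    return e / e.sum(axis=-1, keepdims=True)

scores = np.random.rand(2, 4, 8, 8).astype(np.float16)
mask = np.full((2, 1, 8, 8), -1.0, dtype=np.float16)
probs = masked_softmax_ref(scores, mask)      # each row sums to ~1
```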
Performance, based on PaddleNLP/GPT under AMP: (figure omitted)
Loss curve for PaddleNLP/GPT under AMP: (figure omitted)
Loss diff between the fused pass and the unfused pass: the average loss diff over 20,000 steps is 0.0077.
Currently, this OP only supports fp16 dtype.
To use this op from the Python side in static mode:
For dynamic mode: