Speedup FP16 Gelu op using fast math and vectorized 8 kernel #38980
Merged
sneaxiy merged 2 commits into PaddlePaddle:develop on Jan 18, 2022
Conversation
Thanks for your contribution!

Contributor
Could you provide more detailed performance test results in the PR description: (1) cover a wider range of 56*seq_len values (for example, seq_len_in_batch values in the thirties and forties also appear in real data); (2) add a comparison against the JIT gelu in NV MLPerf 1.1.
limin2021 approved these changes on Jan 17, 2022
lanxianghit approved these changes on Jan 18, 2022
PR types
Performance optimization
PR changes
OPs
Describe
Speed up the FP16 Gelu op using: (1) a kernel vectorized over 8 elements, since the GPU has a PTX `ld` instruction that loads 4x32-bit data in a single transaction; (2) the PTX fast tanh instruction `tanh.approx.f32` to speed up the `tanhf` function. The fast tanh path is enabled when `FLAGS_use_fast_math=1`.
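The following is a minimal sketch of the two techniques named above, not the PR's actual code: a Gelu kernel that processes 8 FP16 elements per thread so each thread issues one 128-bit load/store, and an inline-PTX fast tanh (`tanh.approx.f32`, available on SM 7.5+) guarded by a fallback to `tanhf`. The `Half8` struct and kernel name are hypothetical helpers for illustration.

```cuda
#include <cuda_fp16.h>

// Hypothetical aligned 8-element FP16 vector; one load/store moves
// 16 bytes (4x32 bits), matching the PTX 128-bit ld/st instructions.
struct alignas(16) Half8 {
  __half data[8];
};

__device__ __forceinline__ float FastTanh(float x) {
#if __CUDA_ARCH__ >= 750
  // PTX fast tanh approximation instruction (SM 7.5 and newer).
  float y;
  asm("tanh.approx.f32 %0, %1;" : "=f"(y) : "f"(x));
  return y;
#else
  return tanhf(x);
#endif
}

__global__ void GeluFp16Vec8(const Half8* x, Half8* y, int n_vec) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i >= n_vec) return;
  Half8 in = x[i];  // single 128-bit global load per thread
  Half8 out;
#pragma unroll
  for (int k = 0; k < 8; ++k) {
    float v = __half2float(in.data[k]);
    // tanh-approximation Gelu:
    // 0.5 * v * (1 + tanh(sqrt(2/pi) * (v + 0.044715 * v^3)))
    float t = FastTanh(0.79788456f * (v + 0.044715f * v * v * v));
    out.data[k] = __float2half(0.5f * v * (1.0f + t));
  }
  y[i] = out;  // single 128-bit global store per thread
}
```

Computing in FP32 and converting at the boundaries keeps the intermediate math accurate while still getting the bandwidth win from the vectorized FP16 loads; the fast-math flag only swaps the tanh implementation, not the data layout.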