[LLM INFER] Optimize fuse some kernels in postprocess#9201

Merged
yuanlehome merged 10 commits into PaddlePaddle:develop from gzy19990617:fuse_kernels_in_preprocess_postprocess
Nov 6, 2024
Conversation

@gzy19990617
Contributor

PR types

Performance optimization

PR changes

Others

Description

1. Fuse the get_padding_offset and remove_padding kernels.
2. Fuse the stop_generation_multi_ends_v2 and update_inputs kernels together with some of the preceding operations.
3. Fuse the set_value_by_flags_and_idx_v2 and set_stop_value_multi_ends_v2 kernels.

Test code has been added for all of the above; numerical accuracy has been verified at the operator level.
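For context, a rough CPU sketch of what the fused get_padding_offset + remove_padding step computes, assuming a zero-padded `[bsz, max_len]` token batch (a numpy illustration of the idea only, not the actual CUDA kernel; the function and variable names here are mine):

```python
import numpy as np

def fused_remove_padding(input_ids, seq_lens):
    """Single pass over the batch: compute per-row padding offsets and
    gather the valid (non-pad) tokens into a flat array.

    input_ids: [bsz, max_len], zero-padded; seq_lens: [bsz] valid lengths.
    Returns (flat tokens, padding_offset) where for the k-th valid token,
    k + padding_offset[k] is its index in the flattened padded layout.
    """
    bsz, max_len = input_ids.shape
    pads = max_len - seq_lens
    # Exclusive prefix sum: padding accumulated before each row.
    cum_offsets = np.cumsum(pads) - pads
    total = int(seq_lens.sum())
    tokens = np.empty(total, dtype=input_ids.dtype)
    padding_offset = np.empty(total, dtype=np.int64)
    out = 0
    for b in range(bsz):
        for i in range(int(seq_lens[b])):
            tokens[out] = input_ids[b, i]
            padding_offset[out] = cum_offsets[b]
            out += 1
    return tokens, padding_offset

ids = np.array([[1, 2, 0, 0], [3, 4, 5, 0]])
lens = np.array([2, 3])
tokens, offs = fused_remove_padding(ids, lens)
print(tokens)  # [1 2 3 4 5]
print(offs)    # [0 0 2 2 2]
```

Doing both steps in one pass avoids a second kernel launch and a second read of the padded batch, which is the usual payoff of this kind of fusion.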

@codecov

codecov Bot commented Sep 26, 2024

Codecov Report

Attention: Patch coverage is 0% with 5 lines in your changes missing coverage. Please review.

Project coverage is 52.90%. Comparing base (81f5ab5) to head (55aacac).
Report is 34 commits behind head on develop.

Current head 55aacac differs from pull request most recent head 3e5afae

Please upload reports for the commit 3e5afae to get more accurate results.

| Files with missing lines | Patch % | Lines |
|---|---|---|
| ...enlp/experimental/transformers/generation_utils.py | 0.00% | 5 Missing ⚠️ |
Additional details and impacted files
```diff
@@             Coverage Diff             @@
##           develop    #9201      +/-   ##
===========================================
- Coverage    52.92%   52.90%   -0.03%
===========================================
  Files          661      661
  Lines       107069   106936     -133
===========================================
- Hits         56670    56571      -99
+ Misses       50399    50365      -34
```

☔ View full report in Codecov by Sentry.
📢 Have feedback on the report? Share it here.

@gzy19990617 gzy19990617 changed the title 【Inference】Optimize fuse some kernels [LLM INFER] Optimize fuse some kernels in postprocess Oct 9, 2024
@yuanlehome yuanlehome self-assigned this Oct 30, 2024
Collaborator

@yuanlehome yuanlehome left a comment


LGTM

@yuanlehome yuanlehome merged commit 0977858 into PaddlePaddle:develop Nov 6, 2024
```cpp
for (int i = tid; i < bad_words_length; i += blockDim.x) {
  const int64_t bad_words_token_id = bad_words_list[i];
  if (bad_words_token_id >= length || bad_words_token_id < 0) continue;
  logits_now[bad_words_token_id] = -1e10;
}
```
Contributor


If -1e10 is hard-coded here, then TypeName should be restricted to Float32 or Bfloat16 and must not accept Float16. However, the operator is registered for all of these types, which carries an overflow risk. Although the model graph currently forces a cast to Float32, this is easy for users to get wrong.

Contributor Author


Could this be changed to set the initial value at a precision matching the input type?

Contributor


I think the most reasonable approach is to handle all input types compatibly; but as a simpler fix, the operator could also be registered only for specific precisions.
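To illustrate the overflow the thread is discussing: float16's most negative finite value is -65504, so a hard-coded -1e10 overflows to -inf when stored in a Float16 tensor. A dtype-dependent mask value sidesteps this; a minimal numpy sketch of the idea (not the PaddleNLP implementation, and `mask_value` is a name of my own):

```python
import numpy as np

# Hard-coding -1e10 works for float32/bfloat16, but float16
# cannot represent it: the cast overflows to -inf.
hard_coded = np.float16(-1e10)
print(hard_coded)  # -inf

def mask_value(dtype):
    """Most negative finite value representable in `dtype`."""
    return np.finfo(dtype).min

print(mask_value(np.float16))  # -65504.0
print(mask_value(np.float32))  # about -3.4e38
```

In the CUDA kernel the equivalent would be a per-type constant (e.g. the numeric lowest value of the template parameter) instead of the literal -1e10.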



3 participants