
[xpu][optimizer]: Add __xpu__quick_gelu op and fuse it into __xpu__multi_encoder_op for ViT model.#9755

Merged
zhupengyang merged 1 commit into PaddlePaddle:develop from stevenshen36:new_quick_gelu
Nov 28, 2022

Conversation

@stevenshen36
Contributor

commit 774928b74a784ad2eb24490628d9e70a742ced58
Author: shenyijun01 <shenyijun01@baidu.com>
Date: Fri Nov 18 13:35:30 2022 +0800

[Optimizer]: add quick gelu fusion pass for ViT model.

@paddle-bot

paddle-bot bot commented Nov 24, 2022

Thanks for your contribution!

@zhupengyang
Collaborator

zhupengyang commented Nov 25, 2022

There's no need to open a new PR each time; just force-push (-f) to the original branch.

Let's treat this PR as the one to use and close the original #9718 (but the review comments in #9718 still need to be addressed in a later PR).

@stevenshen36
Contributor Author

There's no need to open a new PR each time; just force-push (-f) to the original branch.

Let's treat this PR as the one to use and close the original #9718 (but the review comments in #9718 still need to be addressed in a later PR).

OK, understood. I'll go close #9718.

@stevenshen36
Contributor Author

The remaining issues are noted in the comments on #9718 and will be resolved in a new PR later.

@stevenshen36 stevenshen36 changed the title from "Squashed commit of the following:" to "[xpu][optimizer]: Add __xpu__quick_gelu op and fuse it into __xpu__multi_encoder_op for ViT model." Nov 25, 2022
Collaborator

@zhupengyang zhupengyang left a comment


LGTM

@zhupengyang zhupengyang merged commit fb33afd into PaddlePaddle:develop Nov 28, 2022
qfyinbd pushed a commit to qfyinbd/Paddle-Lite that referenced this pull request Nov 29, 2022
