
Conversation

@zhiqiu (Contributor) commented Jul 20, 2021

PR types

Performance optimization

PR changes

Others

Describe

Since `found_inf` needs to be on CPU in the adam op, we copy it to CPU in advance to avoid copying it multiple times.
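The idea can be sketched in plain Python. This is a hypothetical stand-in, not Paddle's actual API: a toy `Tensor` counts device-to-host transfers, and the optimizer step copies the `found_inf` flag to the host once up front instead of once per parameter update.

```python
class Tensor:
    """Minimal stand-in for a device tensor with an explicit host copy."""
    def __init__(self, value, place):
        self.value = value
        self.place = place
        self.copies = 0  # count device->host transfers, for illustration only

    def to_cpu(self):
        self.copies += 1
        return Tensor(self.value, "cpu")


def step_all_params(params, found_inf):
    # Before this PR's idea: each adam update copied found_inf to CPU itself.
    # After: copy once in advance and reuse the host value for every parameter.
    found_inf_cpu = found_inf.to_cpu()      # single device->host copy
    for p in params:
        if not found_inf_cpu.value:         # host-side read, no extra copy
            p["w"] -= 0.1 * p["g"]          # toy update in place of adam


params = [{"w": 1.0, "g": 0.5} for _ in range(8)]
found_inf = Tensor(False, "gpu")
step_all_params(params, found_inf)
print(found_inf.copies)  # one copy regardless of parameter count
```

With eight parameters the flag is still transferred only once; in the per-parameter variant the transfer count would grow with the number of parameters, which is the overhead this PR removes.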

@paddle-bot-old

Thanks for your contribution!
Please wait for the CI result first. See the Paddle CI Manual for details.

@zhiqiu zhiqiu force-pushed the dev/copy_found_inf_to_cpu branch from a7c2068 to dd7bbc3 Compare July 22, 2021 03:19
@pangyoki (Contributor) left a comment

LGTM

@chenwhql (Contributor) left a comment

LGTM for PADDLE_ENFORCE

@zhhsplendid (Member) left a comment

LGTM for ShareDataWith, but as discussed, names like "x_" should be changed because they look like class private data member names. We will change this in a future PR.

@zhiqiu zhiqiu merged commit 781f402 into PaddlePaddle:develop Jul 22, 2021
@zhiqiu (Contributor, Author) commented Jul 22, 2021

LGTM for ShareDataWith, but as discussed, names like "x_" should be changed because they look like class private data member names. We will change this in a future PR.

Refined in #34330, thanks.

zhaoyinglia pushed a commit to zhaoyinglia/Paddle that referenced this pull request Sep 2, 2021
…#34274)

* copy found_inf to cpu in advance to improve performance

* add npu test

* add npu test

* refine code

* refine memcpy op

* fix adam
5 participants