Add digamma_op and unittest #33278
Conversation
Thanks for your contribution!
paddle/fluid/operators/digamma_op.cc (Outdated)

    @@ -0,0 +1,100 @@
    /* Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
2020 -> 2021
Done
paddle/fluid/operators/digamma_op.cu (Outdated)

    @@ -0,0 +1,64 @@
    /* Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
Same as above: 2020 -> 2021.
Done!
    def init_dtype_type(self):
        self.dtype = np.float32

    def test_check_grad_normal(self):
Using the default numeric_grad_delta is OK here.
Done!
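For context on what the op under test computes: digamma is ψ(x) = d/dx ln Γ(x). A minimal pure-Python sketch of one standard evaluation scheme (the shift recurrence plus a short asymptotic series) — illustrative only, not the PR's kernel code:

```python
import math

def digamma(x):
    """Digamma via the recurrence psi(x) = psi(x + 1) - 1/x,
    then an asymptotic (Bernoulli-series) expansion for large x."""
    result = 0.0
    while x < 6.0:            # shift the argument up until the series is accurate
        result -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    result += math.log(x) - 0.5 / x
    result -= inv2 * (1.0 / 12 - inv2 * (1.0 / 120 - inv2 / 252))
    return result

value = digamma(1.0)  # close to -gamma, about -0.5772
```

With this few-term series the shift threshold of 6.0 already gives roughly single-precision-level accuracy, which is why the unit tests below can compare against SciPy at tight tolerances.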
        self.check_output()

    def test_check_grad_normal(self):
        self.check_grad(['X'], 'Out', numeric_grad_delta=1e-7)
Use the default value here as well.
Done!
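As background on the review comment: the numeric gradient check compares the analytic gradient against a central finite difference, where numeric_grad_delta plays the role of the step size. A hedged, Paddle-free illustration (the 0.005 step here is only illustrative, not necessarily the framework default); since digamma is the derivative of lgamma, a central difference of math.lgamma should recover it:

```python
import math

def numeric_grad(f, x, delta=0.005):
    # Central-difference gradient: the same idea a numeric_grad_delta-style
    # check uses (the step value here is illustrative only).
    return (f(x + delta) - f(x - delta)) / (2.0 * delta)

# digamma is the analytic derivative of lgamma; digamma(2) = 1 - Euler gamma
analytic = 1.0 - 0.5772156649015329
approx = numeric_grad(math.lgamma, 2.0)
assert abs(approx - analytic) < 1e-4
```

The central difference has O(delta^2) error, so a moderate default step is already far more accurate than the test tolerance, which is why forcing a tiny delta like 1e-7 is unnecessary.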
    out_value = exe.run(feed=input_dict, fetch_list=[out.name])
    self.assertEqual(
        np.allclose(
            out_value[0], sc_res, rtol=1e-04), True)
Can a smaller rtol be used?
Done, updated the rtol to 1e-05.
    with fluid.dygraph.guard(place):
        input_t = paddle.to_tensor(input)
        res = paddle.digamma(input_t).numpy()
        self.assertEqual(np.allclose(res, sc_res, rtol=1e-04), True)
Same as above: use a smaller rtol.
Done, updated the rtol to 1e-05.
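The tightened tolerance can be sanity-checked outside Paddle as well. A hedged sketch comparing a finite-difference digamma against closed-form reference values at rtol 1e-05, using the standard identities psi(1) = -gamma and psi(1/2) = -gamma - 2 ln 2:

```python
import math

EULER_GAMMA = 0.5772156649015329

def digamma_fd(x, h=1e-6):
    # digamma(x) = d/dx lgamma(x); central difference as an illustrative reference
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2.0 * h)

# closed-form reference values
expected = {
    1.0: -EULER_GAMMA,                      # psi(1) = -gamma
    0.5: -EULER_GAMMA - 2.0 * math.log(2),  # psi(1/2) = -gamma - 2 ln 2
    2.0: 1.0 - EULER_GAMMA,                 # psi(2) = psi(1) + 1
}
for x, ref in expected.items():
    assert abs(digamma_fd(x) - ref) <= 1e-5 * abs(ref)  # rtol = 1e-05
```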
python/paddle/tensor/math.py (Outdated)

    name(str, optional): The default value is None. Normally there is no need for
        user to set this property. For more information, please refer to :ref:`api_guide_Name`
    Returns:
        Tensor, the digamma of the input Tensor computed element-wise.
element-wise?
Done, modified this sentence.
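On "element-wise": the op applies digamma independently to every entry and preserves the input shape. A hedged pure-Python illustration of that semantics (using a finite difference of math.lgamma as a stand-in for the kernel):

```python
import math

def digamma(x, h=1e-6):
    # Stand-in scalar digamma: derivative of lgamma by central difference.
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2.0 * h)

data = [[1.0, 2.0],
        [3.0, 4.5]]
out = [[digamma(v) for v in row] for row in data]  # same 2x2 shape as input
```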
    REGISTER_OP_CUDA_KERNEL(
        digamma_grad,
        ops::DigammaGradKernel<paddle::platform::CUDADeviceContext, float>,
Is a dedicated DigammaGradKernel needed here?
Yes, registering a CUDA kernel for digamma_grad is necessary.
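For reference on what the gradient kernel computes: d/dx digamma(x) = trigamma(x) = polygamma(1, x), so the backward pass scales the upstream gradient element-wise by trigamma. A hedged pure-Python check of the trigamma value via its series, with an integral tail correction for the slow convergence:

```python
import math

def trigamma(x, terms=100000):
    # psi'(x) = sum_{k>=0} 1/(x+k)^2; truncate the sum and approximate the
    # remaining tail by its integral, 1/(x + terms).
    s = sum(1.0 / (x + k) ** 2 for k in range(terms))
    return s + 1.0 / (x + terms)

# trigamma(1) = pi^2 / 6 (the Basel sum)
assert abs(trigamma(1.0) - math.pi ** 2 / 6.0) < 1e-8
```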
paddle/fluid/operators/digamma_op.cu (Outdated)

    };

    template <typename T>
    class DigammaKernel<platform::CUDADeviceContext, T>
Is this version faster? Maybe test it and add some comments.
Done, it wasn't faster in testing, so it has been removed.
    limitations under the License. */

    #include <unsupported/Eigen/SpecialFunctions>
    #include "paddle/fluid/operators/digamma_op.h"
Here we only need the header digamma_op.h; remove the other headers.
Done!
chenwhql left a comment:
LGTM
XiaoguangHu01 left a comment:
LGTM
TCChenlong left a comment:
LGTM
PR types: New features
PR changes: OPs

Describe:
Add digamma_op and unittest
API:
Code position:
Example:
Doc:
Cn Doc PR: PaddlePaddle/docs#3574