add log2 operator #28319
Merged
28 commits:

- f6d9404 add new log2 operation (Joejiong)
- f1b4f73 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (Joejiong)
- cca2f9a fix sample code (Joejiong)
- 3049700 test fp16 (Joejiong)
- 665f827 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (Joejiong)
- eebde3d fix fp16_error_ratio (Joejiong)
- 933dd5d Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (Joejiong)
- 02fcf16 fix latex (Joejiong)
- cd94a3a Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (Joejiong)
- d1838bf fix paddle2.0 api style (Joejiong)
- b72f42c Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (Joejiong)
- e5a5c26 add dygraph example code (Joejiong)
- 8ae9d2c Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (Joejiong)
- bfc4d79 fix doc gen (Joejiong)
- e08c326 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (Joejiong)
- 9eadc71 clean doc fluid (Joejiong)
- faedba2 change directory (Joejiong)
- f0151d0 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (Joejiong)
- bb143ad optimize log2 (Joejiong)
- 6ca56f2 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (Joejiong)
- c8b5fcf clean code (Joejiong)
- 23e94c0 fix float16 (Joejiong)
- 4b35414 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (Joejiong)
- d508c4c remove grad_atol (Joejiong)
- 9a6d1bb fix example code (Joejiong)
- c84a99f Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (Joejiong)
- c7023f9 clean example (Joejiong)
- 816a086 Merge branch 'develop' of https://github.com/PaddlePaddle/Paddle into… (Joejiong)
```diff
@@ -180,10 +180,12 @@ def test_errors(self):
         # The input type must be Variable.
         self.assertRaises(TypeError, F.log_sigmoid, 1)
         # The input dtype must be float16, float32, float64.
-        x_int32 = paddle.fluid.data(name='x_int32', shape=[11, 17], dtype='int32')
+        x_int32 = paddle.fluid.data(
+            name='x_int32', shape=[11, 17], dtype='int32')
         self.assertRaises(TypeError, F.log_sigmoid, x_int32)
         # support the input dtype is float16
-        x_fp16 = paddle.fluid.data(name='x_fp16', shape=[11, 17], dtype='float16')
+        x_fp16 = paddle.fluid.data(
+            name='x_fp16', shape=[11, 17], dtype='float16')
         F.log_sigmoid(x_fp16)
```
```diff
@@ -260,10 +262,12 @@ def test_errors(self):
         # The input type must be Variable.
         self.assertRaises(TypeError, F.tanh, 1)
         # The input dtype must be float16, float32.
-        x_int32 = paddle.fluid.data(name='x_int32', shape=[12, 10], dtype='int32')
+        x_int32 = paddle.fluid.data(
+            name='x_int32', shape=[12, 10], dtype='int32')
         self.assertRaises(TypeError, F.tanh, x_int32)
         # support the input dtype is float16
-        x_fp16 = paddle.fluid.data(name='x_fp16', shape=[12, 10], dtype='float16')
+        x_fp16 = paddle.fluid.data(
+            name='x_fp16', shape=[12, 10], dtype='float16')
         F.tanh(x_fp16)
```
```diff
@@ -519,10 +523,12 @@ def test_errors(self):
         # The input type must be Variable.
         self.assertRaises(TypeError, F.tanhshrink, 1)
         # The input dtype must be float16, float32, float64.
-        x_int32 = paddle.fluid.data(name='x_int32', shape=[12, 10], dtype='int32')
+        x_int32 = paddle.fluid.data(
+            name='x_int32', shape=[12, 10], dtype='int32')
         self.assertRaises(TypeError, F.tanhshrink, x_int32)
         # support the input dtype is float16
-        x_fp16 = paddle.fluid.data(name='x_fp16', shape=[12, 10], dtype='float16')
+        x_fp16 = paddle.fluid.data(
+            name='x_fp16', shape=[12, 10], dtype='float16')
         F.tanhshrink(x_fp16)
```
```diff
@@ -616,10 +622,12 @@ def test_errors(self):
         # The input type must be Variable.
         self.assertRaises(TypeError, F.hardshrink, 1)
         # The input dtype must be float16, float32, float64.
-        x_int32 = paddle.fluid.data(name='x_int32', shape=[12, 10], dtype='int32')
+        x_int32 = paddle.fluid.data(
+            name='x_int32', shape=[12, 10], dtype='int32')
         self.assertRaises(TypeError, F.hardshrink, x_int32)
         # support the input dtype is float16
-        x_fp16 = paddle.fluid.data(name='x_fp16', shape=[12, 10], dtype='float16')
+        x_fp16 = paddle.fluid.data(
+            name='x_fp16', shape=[12, 10], dtype='float16')
         F.hardshrink(x_fp16)
```
```diff
@@ -676,10 +684,12 @@ def test_errors(self):
         # The input type must be Variable.
         self.assertRaises(TypeError, F.hardtanh, 1)
         # The input dtype must be float16, float32, float64.
-        x_int32 = paddle.fluid.data(name='x_int32', shape=[12, 10], dtype='int32')
+        x_int32 = paddle.fluid.data(
+            name='x_int32', shape=[12, 10], dtype='int32')
         self.assertRaises(TypeError, F.hardtanh, x_int32)
         # support the input dtype is float16
-        x_fp16 = paddle.fluid.data(name='x_fp16', shape=[12, 10], dtype='float16')
+        x_fp16 = paddle.fluid.data(
+            name='x_fp16', shape=[12, 10], dtype='float16')
         F.hardtanh(x_fp16)
```
```diff
@@ -759,13 +769,16 @@ def test_errors(self):
         # The input type must be Variable.
         self.assertRaises(TypeError, F.softshrink, 1)
         # The input dtype must be float16, float32, float64.
-        x_int32 = paddle.fluid.data(name='x_int32', shape=[12, 10], dtype='int32')
+        x_int32 = paddle.fluid.data(
+            name='x_int32', shape=[12, 10], dtype='int32')
         self.assertRaises(TypeError, F.softshrink, x_int32)
         # The threshold must be no less than zero
-        x_fp32 = paddle.fluid.data(name='x_fp32', shape=[12, 10], dtype='float32')
+        x_fp32 = paddle.fluid.data(
+            name='x_fp32', shape=[12, 10], dtype='float32')
         self.assertRaises(ValueError, F.softshrink, x_fp32, -1.0)
         # support the input dtype is float16
-        x_fp16 = paddle.fluid.data(name='x_fp16', shape=[12, 10], dtype='float16')
+        x_fp16 = paddle.fluid.data(
+            name='x_fp16', shape=[12, 10], dtype='float16')
         F.softshrink(x_fp16)
```
```diff
@@ -1010,10 +1023,12 @@ def test_errors(self):
         # The input type must be Variable.
         self.assertRaises(TypeError, F.relu, 1)
         # The input dtype must be float16, float32, float64.
-        x_int32 = paddle.fluid.data(name='x_int32', shape=[10, 12], dtype='int32')
+        x_int32 = paddle.fluid.data(
+            name='x_int32', shape=[10, 12], dtype='int32')
         self.assertRaises(TypeError, F.relu, x_int32)
         # support the input dtype is float16
-        x_fp16 = paddle.fluid.data(name='x_fp16', shape=[10, 12], dtype='float16')
+        x_fp16 = paddle.fluid.data(
+            name='x_fp16', shape=[10, 12], dtype='float16')
         F.relu(x_fp16)
```
```diff
@@ -1119,10 +1134,12 @@ def test_errors(self):
         # The input type must be Variable.
         self.assertRaises(TypeError, F.leaky_relu, 1)
         # The input dtype must be float16, float32, float64.
-        x_int32 = paddle.fluid.data(name='x_int32', shape=[12, 10], dtype='int32')
+        x_int32 = paddle.fluid.data(
+            name='x_int32', shape=[12, 10], dtype='int32')
         self.assertRaises(TypeError, F.leaky_relu, x_int32)
         # support the input dtype is float16
-        x_fp16 = paddle.fluid.data(name='x_fp16', shape=[12, 10], dtype='float16')
+        x_fp16 = paddle.fluid.data(
+            name='x_fp16', shape=[12, 10], dtype='float16')
         F.leaky_relu(x_fp16)
```
```diff
@@ -1218,10 +1235,12 @@ def test_errors(self):
         # The input type must be Variable.
         self.assertRaises(TypeError, F.gelu, 1)
         # The input dtype must be float16, float32, float64.
-        x_int32 = paddle.fluid.data(name='x_int32', shape=[11, 17], dtype='int32')
+        x_int32 = paddle.fluid.data(
+            name='x_int32', shape=[11, 17], dtype='int32')
         self.assertRaises(TypeError, F.gelu, x_int32)
         # support the input dtype is float16
-        x_fp16 = paddle.fluid.data(name='x_fp16', shape=[11, 17], dtype='float16')
+        x_fp16 = paddle.fluid.data(
+            name='x_fp16', shape=[11, 17], dtype='float16')
         F.gelu(x_fp16)
```
```diff
@@ -1368,10 +1387,12 @@ def test_errors(self):
         # The input type must be Variable.
         self.assertRaises(TypeError, F.relu6, 1)
         # The input dtype must be float16, float32, float64.
-        x_int32 = paddle.fluid.data(name='x_int32', shape=[12, 10], dtype='int32')
+        x_int32 = paddle.fluid.data(
+            name='x_int32', shape=[12, 10], dtype='int32')
         self.assertRaises(TypeError, F.relu6, x_int32)
         # support the input dtype is float16
-        x_fp16 = paddle.fluid.data(name='x_fp16', shape=[12, 10], dtype='float16')
+        x_fp16 = paddle.fluid.data(
+            name='x_fp16', shape=[12, 10], dtype='float16')
         F.relu6(x_fp16)
```
```diff
@@ -1455,10 +1476,12 @@ def test_errors(self):
         # The input type must be Variable.
         self.assertRaises(TypeError, F.hardswish, 1)
         # The input dtype must be float16, float32, float64.
-        x_int32 = paddle.fluid.data(name='x_int32', shape=[12, 10], dtype='int32')
+        x_int32 = paddle.fluid.data(
+            name='x_int32', shape=[12, 10], dtype='int32')
         self.assertRaises(TypeError, F.hardswish, x_int32)
         # support the input dtype is float16
-        x_fp16 = paddle.fluid.data(name='x_fp16', shape=[12, 10], dtype='float16')
+        x_fp16 = paddle.fluid.data(
+            name='x_fp16', shape=[12, 10], dtype='float16')
         F.hardswish(x_fp16)
```
```diff
@@ -1572,10 +1595,12 @@ def test_errors(self):
         # The input type must be Variable.
         self.assertRaises(TypeError, F.elu, 1)
         # The input dtype must be float16, float32, float64.
-        x_int32 = paddle.fluid.data(name='x_int32', shape=[10, 12], dtype='int32')
+        x_int32 = paddle.fluid.data(
+            name='x_int32', shape=[10, 12], dtype='int32')
         self.assertRaises(TypeError, F.elu, x_int32)
         # support the input dtype is float16
-        x_fp16 = paddle.fluid.data(name='x_fp16', shape=[10, 12], dtype='float16')
+        x_fp16 = paddle.fluid.data(
+            name='x_fp16', shape=[10, 12], dtype='float16')
         F.elu(x_fp16)
```
```diff
@@ -1624,6 +1649,55 @@ def test_error(self):
         self.assertRaises(TypeError, fluid.layers.log, in2)


+class TestLog2(TestActivation):
+    def setUp(self):
+        self.op_type = "log2"
+        self.init_dtype()
+
+        x = np.random.uniform(0.1, 1, [11, 17]).astype(self.dtype)
+        out = np.log2(x)
+
+        self.inputs = {'X': OpTest.np_dtype_to_fluid_dtype(x)}
+        self.outputs = {'Out': out}
+
+    def test_check_grad(self):
+        if self.dtype == np.float16:
+            return
+        self.check_grad(['X'], 'Out')
+
+    def test_error(self):
+        in1 = paddle.static.data(name="in1", shape=[11, 17], dtype="int32")
+        in2 = paddle.static.data(name="in2", shape=[11, 17], dtype="int64")
+
+        self.assertRaises(TypeError, paddle.log2, in1)
+        self.assertRaises(TypeError, paddle.log2, in2)
+
+    def test_api(self):
+        with paddle.static.program_guard(paddle.static.Program(),
+                                         paddle.static.Program()):
+            input_x = np.random.uniform(0.1, 1, [11, 17]).astype("float64")
+            data_x = paddle.static.data(
+                name="data_x", shape=[11, 17], dtype="float64")
+
+            out1 = paddle.log2(data_x)
+            exe = paddle.static.Executor(place=fluid.CPUPlace())
+            exe.run(paddle.static.default_startup_program())
+            res1 = exe.run(paddle.static.default_main_program(),
+                           feed={"data_x": input_x},
+                           fetch_list=[out1])
+            expected_res = np.log2(input_x)
+            self.assertTrue(np.allclose(res1, expected_res))
+
+        # dygraph
+        with fluid.dygraph.guard():
+            np_x = np.random.uniform(0.1, 1, [11, 17]).astype("float64")
+            data_x = paddle.to_tensor(np_x)
+            z = paddle.log2(data_x)
+            np_z = z.numpy()
+            z_expected = np.array(np.log2(np_x))
+            self.assertTrue(np.allclose(np_z, z_expected))
+
+
 class TestLog1p(TestActivation):
     def setUp(self):
         self.op_type = "log1p"
```
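The new `TestLog2` validates `paddle.log2` against NumPy's `np.log2` reference in both static and dygraph modes. A minimal NumPy-only sketch of that reference computation (Paddle itself is not assumed to be installed here; this only mirrors the expected-value side of the test):

```python
import numpy as np

# Same input range the test uses: (0.1, 1), so log2 stays finite
# and well-conditioned (no values at or below zero).
x = np.random.uniform(0.1, 1, [11, 17]).astype("float64")

expected = np.log2(x)

# Sanity check of the reference itself: 2 ** log2(x) recovers x.
recovered = np.power(2.0, expected)
assert recovered.shape == (11, 17)
assert np.allclose(recovered, x)
```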
```diff
@@ -1895,10 +1969,12 @@ def test_errors(self):
         # The input type must be Variable.
         self.assertRaises(TypeError, F.softplus, 1)
         # The input dtype must be float16, float32, float64.
-        x_int32 = paddle.fluid.data(name='x_int32', shape=[12, 10], dtype='int32')
+        x_int32 = paddle.fluid.data(
+            name='x_int32', shape=[12, 10], dtype='int32')
         self.assertRaises(TypeError, F.softplus, x_int32)
         # support the input dtype is float16
-        x_fp16 = paddle.fluid.data(name='x_fp16', shape=[12, 10], dtype='float16')
+        x_fp16 = paddle.fluid.data(
+            name='x_fp16', shape=[12, 10], dtype='float16')
         F.softplus(x_fp16)
```

Review comments on this hunk (translated):

Member: Since 2.0, `paddle.data` is the recommended usage. Could you see whether this part can be changed to `paddle.data`?

Contributor (author): This one is Log1p. I'll open a separate PR later, go through this activation test file to see how much of this shared code needs migrating, and migrate it all at once.
```diff
@@ -1972,10 +2048,12 @@ def test_errors(self):
         # The input type must be Variable.
         self.assertRaises(TypeError, F.softsign, 1)
         # The input dtype must be float16, float32, float64.
-        x_int32 = paddle.fluid.data(name='x_int32', shape=[12, 10], dtype='int32')
+        x_int32 = paddle.fluid.data(
+            name='x_int32', shape=[12, 10], dtype='int32')
         self.assertRaises(TypeError, F.softsign, x_int32)
         # support the input dtype is float16
-        x_fp16 = paddle.fluid.data(name='x_fp16', shape=[12, 10], dtype='float16')
+        x_fp16 = paddle.fluid.data(
+            name='x_fp16', shape=[12, 10], dtype='float16')
         F.softsign(x_fp16)
```
```diff
@@ -2055,10 +2133,12 @@ def test_errors(self):
         # The input type must be Variable.
         self.assertRaises(TypeError, F.thresholded_relu, 1)
         # The input dtype must be float16, float32, float64.
-        x_int32 = paddle.fluid.data(name='x_int32', shape=[12, 10], dtype='int32')
+        x_int32 = paddle.fluid.data(
+            name='x_int32', shape=[12, 10], dtype='int32')
         self.assertRaises(TypeError, F.thresholded_relu, x_int32)
         # support the input dtype is float16
-        x_fp16 = paddle.fluid.data(name='x_fp16', shape=[12, 10], dtype='float16')
+        x_fp16 = paddle.fluid.data(
+            name='x_fp16', shape=[12, 10], dtype='float16')
         F.thresholded_relu(x_fp16)
```
```diff
@@ -2154,10 +2234,12 @@ def test_errors(self):
         # The input type must be Variable.
         self.assertRaises(TypeError, F.hardsigmoid, 1)
         # The input dtype must be float16, float32, float64.
-        x_int32 = paddle.fluid.data(name='x_int32', shape=[12, 10], dtype='int32')
+        x_int32 = paddle.fluid.data(
+            name='x_int32', shape=[12, 10], dtype='int32')
         self.assertRaises(TypeError, F.hardsigmoid, x_int32)
         # support the input dtype is float16
-        x_fp16 = paddle.fluid.data(name='x_fp16', shape=[12, 10], dtype='float16')
+        x_fp16 = paddle.fluid.data(
+            name='x_fp16', shape=[12, 10], dtype='float16')
         F.hardsigmoid(x_fp16)
```
```diff
@@ -2232,10 +2314,12 @@ def test_errors(self):
         # The input type must be Variable.
         self.assertRaises(TypeError, F.swish, 1)
         # The input dtype must be float16, float32, float64.
-        x_int32 = paddle.fluid.data(name='x_int32', shape=[12, 10], dtype='int32')
+        x_int32 = paddle.fluid.data(
+            name='x_int32', shape=[12, 10], dtype='int32')
         self.assertRaises(TypeError, F.swish, x_int32)
         # support the input dtype is float16
-        x_fp16 = paddle.fluid.data(name='x_fp16', shape=[12, 10], dtype='float16')
+        x_fp16 = paddle.fluid.data(
+            name='x_fp16', shape=[12, 10], dtype='float16')
         F.swish(x_fp16)
```
```diff
@@ -2347,6 +2431,7 @@ def test_check_grad(self):
 create_test_act_fp16_class(TestELU)
 create_test_act_fp16_class(TestReciprocal)
 create_test_act_fp16_class(TestLog)
+create_test_act_fp16_class(TestLog2, atol=5e-2)
 create_test_act_fp16_class(TestLog1p, grad_atol=0.9)
 create_test_act_fp16_class(TestSquare)
 create_test_act_fp16_class(TestPow, atol=5e-2)
```
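The float16 variant of the test registers with `atol=5e-2`, a loose absolute tolerance reflecting float16's ~10-bit mantissa. A small NumPy sketch (an illustration, not part of the PR) estimating how large the float16 rounding error of log2 actually is on the test's input range:

```python
import numpy as np

# Inputs in (0.1, 1), matching the test; log2 of these lies in (-3.33, 0).
x = np.random.uniform(0.1, 1, 1000).astype(np.float16)

# Compute log2 in float16, then compare against a float64 reference
# evaluated on the same (already-rounded) inputs.
out_fp16 = np.log2(x).astype(np.float64)
ref = np.log2(x.astype(np.float64))

max_err = np.max(np.abs(out_fp16 - ref))
# The float16 rounding error is on the order of 1e-3 here, comfortably
# inside the 5e-2 absolute tolerance used for the fp16 test class.
assert max_err < 5e-2
```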
Review thread (translated):

Reviewer: Although mathematically equivalent, computing the base-2 logarithm directly should be simpler and faster for the machine than computing log(x)/log(2). If you have time, try writing it yourself and see whether a faster implementation is possible here.

Reviewer: Please investigate and try this when you have time; if not, the current form is just barely acceptable.

Author: Will implement it later, thanks.

Reviewer: Besides performance, this approach also raises a numerical-error concern. Please investigate further and see whether it can be optimized.

Author: Switched to the tensor-native implementation, thanks. Done.
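The reviewers' point is that a native `log2(x)` and the rewrite `log(x)/log(2)` are mathematically equivalent, but the division adds an extra rounding step (and an extra operation per element). A small NumPy sketch illustrating both the equivalence and the tiny rounding gap between the two formulations:

```python
import numpy as np

x = np.random.uniform(0.1, 10.0, 10000)

native = np.log2(x)               # single intrinsic per element
via_ln = np.log(x) / np.log(2.0)  # equivalent, but one extra rounding step

# The two agree to within a few ulps in double precision...
assert np.allclose(native, via_ln)

# ...but the division can introduce additional rounding error,
# which is the accuracy concern raised in the review.
max_gap = np.max(np.abs(native - via_ln))
assert max_gap < 1e-12
```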