update samples of print and clip api, test=develop#27670
zhupengyang merged 8 commits into PaddlePaddle:develop
Conversation
Thanks for your contribution!
58e17af to b63ab5b
phlrain left a comment:
LGTM for core.ops issue
    main_program = paddle.static.default_main_program()
    exe = paddle.static.Executor(place=paddle.CPUPlace())
    res = exe.run(main_program, fetch_list=[out])
    # Variable: fill_constant_1.tmp_0
Confirmed: in static graph mode it is still a Variable, so no change is needed.
    main_program = fluid.default_main_program()
    exe = fluid.Executor(fluid.CPUPlace())
    exe.run(main_program)
    import paddle
In lines 225-257, change Variable -> Tensor.
Confirmed: in static graph mode it is still a Variable, so no change is needed.
python/paddle/fluid/clip.py (outdated)

    Gradient clip will takes effect after being set in ``optimizer`` , see the document ``optimizer``
    (for example: :ref:`api_fluid_optimizer_SGDOptimizer`).
    (for example: :ref:`api_fluid_optimizer_SGD`).
python/paddle/fluid/clip.py (outdated)

    paddle.disable_static()

    in_np = np.random.uniform(-1, 1, [10, 10]).astype("float32")
Please use paddle.uniform here; it is better to avoid numpy.
python/paddle/fluid/clip.py (outdated)

    # clip = paddle.nn.GradientClipByValue(min=-1, max=1, need_clip=fileter_func)

    sdg = paddle.optimizer.SGD(learning_rate=0.1, parameters=linear.parameters(), grad_clip=clip)
    sdg.minimize(loss)
Refer to the SGD documentation and the original sample, and keep minimize:
https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/fluid/optimizer/SGDOptimizer_cn.html
python/paddle/fluid/clip.py (outdated)

    and gradients of all parameters in the network will be clipped.

    Examples:
    Code Example (DyGraph Mode)
python/paddle/fluid/clip.py (outdated)

    Gradient clip will takes effect after being set in ``optimizer`` , see the document ``optimizer``
    (for example: :ref:`api_fluid_optimizer_SGDOptimizer`).
    (for example: :ref:`api_fluid_optimizer_SGD`).
python/paddle/fluid/clip.py (outdated)

    and gradients of all parameters in the network will be clipped.

    Examples:
    Code Example (DyGraph Mode):
python/paddle/fluid/clip.py (outdated)

    x = np.random.uniform(-100, 100, (10, 2)).astype('float32')
    exe.run(startup_prog)
    out = exe.run(main_prog, feed={'x': x}, fetch_list=loss)
    in_np = np.random.uniform(-1, 1, [10, 10]).astype("float32")
python/paddle/fluid/clip.py (outdated)

    loss = fluid.layers.reduce_mean(out)
    loss.backward()
    sdg = paddle.optimizer.SGD(learning_rate=0.1, parameters=linear.parameters(), grad_clip=clip)
    sdg.minimize(loss)
Refer to the SGD documentation and the original sample, and keep minimize:
https://www.paddlepaddle.org.cn/documentation/docs/zh/develop/api/paddle/fluid/optimizer/SGDOptimizer_cn.html
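Conceptually, what the grad_clip argument applies to each gradient before the update can be illustrated with plain numpy (illustrative only; this is not Paddle's internal implementation):

```python
import numpy as np

# Illustrative sketch of clip-by-value semantics: every gradient
# element is bounded to [min_val, max_val] before the optimizer step.
def clip_by_value(grad, min_val=-1.0, max_val=1.0):
    return np.clip(grad, min_val, max_val)

g = np.array([-2.5, -0.3, 0.7, 4.0], dtype="float32")
clipped = clip_by_value(g)
# every element of `clipped` now lies in [-1, 1]
print(clipped)
```

Out-of-range elements are saturated at the bounds while in-range elements pass through unchanged, which is why the docstrings say "gradients of all parameters in the network will be clipped".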
python/paddle/fluid/clip.py (outdated)

    # use for Static mode
    import paddle
    import paddle.fluid as fluid
    import numpy as np
python/paddle/fluid/clip.py (outdated)

    sgd_optimizer = fluid.optimizer.SGD(
        learning_rate=0.1, parameter_list=linear.parameters(), grad_clip=clip)
    sgd_optimizer.minimize(loss)
    paddle.disable_static()
python/paddle/fluid/clip.py (outdated)

    # use for Static mode
    import paddle
    import paddle.fluid as fluid
    import numpy as np
python/paddle/fluid/clip.py (outdated)

    out = exe.run(main_prog, feed={'x': x}, fetch_list=loss)

    paddle.disable_static()
python/paddle/fluid/clip.py (outdated)

    # use for Static mode
    import paddle
    import paddle.fluid as fluid
    import numpy as np
python/paddle/fluid/clip.py (outdated)

    sgd_optimizer = fluid.optimizer.SGD(
        learning_rate=0.1, parameter_list=linear.parameters(), grad_clip=clip)
    sgd_optimizer.minimize(loss)
    paddle.disable_static()
PR types
Others
PR changes
Docs
Describe
Change the code samples of the print and clip APIs.
Fluid Doc PR: PaddlePaddle/docs#2714