93 changes: 26 additions & 67 deletions doc/paddle/api/paddle/fluid/clip/GradientClipByGlobalNorm_cn.rst
@@ -3,7 +3,7 @@
GradientClipByGlobalNorm
-------------------------------

.. py:class:: paddle.fluid.clip.GradientClipByGlobalNorm(clip_norm, group_name='default_group', need_clip=None)
.. py:class:: paddle.nn.GradientClipByGlobalNorm(clip_norm, group_name='default_group', need_clip=None)



@@ -16,7 +16,7 @@ GradientClipByGlobalNorm

The list of input Tensors is not passed to this class directly; by default, all gradients in the ``Program`` are selected. If ``need_clip`` is not None, only a subset of the parameters is clipped.

This class takes effect only after it has been set during ``optimizer`` initialization; see the ``optimizer`` documentation (for example :ref:`cn_api_fluid_optimizer_SGDOptimizer`).
This class takes effect only after it has been set during ``optimizer`` initialization; see the ``optimizer`` documentation (for example :ref:`cn_api_fluid_optimizer_SGD`).

The clipping formula is as follows:

@@ -33,72 +33,31 @@ GradientClipByGlobalNorm
- **clip_norm** (float) - The maximum allowed global norm.
- **group_name** (str, optional) - The group name for clipping.
- **need_clip** (function, optional) - A function used to select the parameters whose gradients are clipped. It receives a ``Parameter`` and returns a ``bool`` (True means the parameter is clipped, False means it is not). Defaults to None, in which case all parameters in the network are clipped.
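The rule behind global-norm clipping can be illustrated with a minimal NumPy sketch (an illustration only, not Paddle's implementation; ``clip_by_global_norm`` is a hypothetical helper): when the joint L2 norm of all gradients exceeds ``clip_norm``, every gradient is scaled by ``clip_norm / global_norm``, otherwise all gradients are left unchanged.

```python
import numpy as np

def clip_by_global_norm(grads, clip_norm):
    # Global L2 norm computed jointly across all gradient tensors.
    global_norm = np.sqrt(sum(float(np.sum(g ** 2)) for g in grads))
    # scale == 1.0 when global_norm <= clip_norm, else clip_norm / global_norm.
    scale = clip_norm / max(global_norm, clip_norm)
    return [g * scale for g in grads]

# A gradient list whose global norm is 5.0 is rescaled to global norm 1.0.
clipped = clip_by_global_norm([np.array([3.0, 4.0])], clip_norm=1.0)
```

Because the norm is computed over all gradients together, the relative magnitudes of the gradients are preserved, unlike per-tensor clipping.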

**Code example 1: static graph**

.. code-block:: python

import paddle
import paddle.fluid as fluid
import numpy as np

main_prog = fluid.Program()
startup_prog = fluid.Program()
with fluid.program_guard(
        main_program=main_prog, startup_program=startup_prog):
    image = fluid.data(
        name='x', shape=[-1, 2], dtype='float32')
    predict = fluid.layers.fc(input=image, size=3, act='relu')  # trainable parameters: fc_0.w.0, fc_0.b.0
    loss = fluid.layers.mean(predict)

    # Clip all parameters in the network:
    clip = fluid.clip.GradientClipByGlobalNorm(clip_norm=1.0)

    # To clip only the parameter fc_0.w_0:
    # pass a function filter_func to need_clip; filter_func receives a Parameter and returns a bool
    # def filter_func(param):
    #     # The parameter name is a convenient check (name can be set in fluid.ParamAttr; defaults are fc_0.w_0 and fc_0.b_0)
    #     return param.name == "fc_0.w_0"
    # clip = fluid.clip.GradientClipByGlobalNorm(clip_norm=1.0, need_clip=filter_func)

    sgd_optimizer = fluid.optimizer.SGDOptimizer(learning_rate=0.1, grad_clip=clip)
    sgd_optimizer.minimize(loss)

place = fluid.CPUPlace()
exe = fluid.Executor(place)
x = np.random.uniform(-100, 100, (10, 2)).astype('float32')
exe.run(startup_prog)
out = exe.run(main_prog, feed={'x': x}, fetch_list=[loss])


**Code example 2: dynamic graph**

**Code example**

.. code-block:: python

import paddle
import paddle.fluid as fluid

with fluid.dygraph.guard():
    linear = fluid.dygraph.Linear(10, 10)  # trainable parameters: linear_0.w.0, linear_0.b.0
    inputs = fluid.layers.uniform_random([32, 10]).astype('float32')
    out = linear(fluid.dygraph.to_variable(inputs))
    loss = fluid.layers.reduce_mean(out)
    loss.backward()

    # Clip all parameters in the network:
    clip = fluid.clip.GradientClipByGlobalNorm(clip_norm=1.0)

    # To clip only the parameter linear_0.w_0:
    # pass a function filter_func to need_clip; filter_func receives a ParamBase and returns a bool
    # def filter_func(param):
    #     # The parameter name can be checked (name can be set in fluid.ParamAttr; defaults are linear_0.w_0 and linear_0.b_0)
    #     return param.name == "linear_0.w_0"
    #     # Note: linear.weight and linear.bias return the weight and bias of the dygraph.Linear layer, so this also works:
    #     # return param.name == linear.weight.name
    # clip = fluid.clip.GradientClipByGlobalNorm(clip_norm=1.0, need_clip=filter_func)

    sgd_optimizer = fluid.optimizer.SGD(
        learning_rate=0.1,
        parameter_list=linear.parameters(),
        grad_clip=clip)
    sgd_optimizer.minimize(loss)

x = paddle.uniform([10, 10], min=-1.0, max=1.0, dtype='float32')
linear = paddle.nn.Linear(10, 10)
out = linear(x)
loss = paddle.mean(out)
loss.backward()

# Clip all parameters in the network:
clip = paddle.nn.GradientClipByGlobalNorm(clip_norm=1.0)

# To clip only the parameter linear_0.w_0:
# pass a function filter_func to need_clip; filter_func receives a ParamBase and returns a bool
# def filter_func(param):
#     # The parameter name can be checked (name can be set in paddle.ParamAttr; defaults are linear_0.w_0 and linear_0.b_0)
#     return param.name == "linear_0.w_0"
#     # Note: linear.weight and linear.bias return the weight and bias of the Linear layer, so this also works:
#     # return param.name == linear.weight.name
# clip = paddle.nn.GradientClipByGlobalNorm(clip_norm=1.0, need_clip=filter_func)

sgd = paddle.optimizer.SGD(learning_rate=0.1, parameters=linear.parameters(), grad_clip=clip)
sgd.step()

89 changes: 25 additions & 64 deletions doc/paddle/api/paddle/fluid/clip/GradientClipByNorm_cn.rst
@@ -3,7 +3,7 @@
GradientClipByNorm
-------------------------------

.. py:class:: paddle.fluid.clip.GradientClipByNorm(clip_norm, need_clip=None)
.. py:class:: paddle.nn.GradientClipByNorm(clip_norm, need_clip=None)



@@ -16,7 +16,7 @@ GradientClipByNorm

The input Tensor is not passed to this class directly; by default, all gradients in the ``Program`` are selected. If ``need_clip`` is not None, only a subset of the parameters is clipped.

This class takes effect only after it has been set during ``optimizer`` initialization; see the ``optimizer`` documentation (for example :ref:`cn_api_fluid_optimizer_SGDOptimizer`).
This class takes effect only after it has been set during ``optimizer`` initialization; see the ``optimizer`` documentation (for example :ref:`cn_api_fluid_optimizer_SGD`).

The clipping formula is as follows:

@@ -40,69 +40,30 @@ GradientClipByNorm
- **clip_norm** (float) - The maximum allowed L2 norm.
- **need_clip** (function, optional) - A function used to select the parameters whose gradients are clipped. It receives a ``Parameter`` and returns a ``bool`` (True means the parameter is clipped, False means it is not). Defaults to None, in which case all parameters in the network are clipped.
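Per-tensor norm clipping can be illustrated with a minimal NumPy sketch (an illustration only, not Paddle's implementation; ``clip_by_norm`` is a hypothetical helper): each gradient is handled independently, and a gradient whose L2 norm exceeds ``clip_norm`` is rescaled to have norm exactly ``clip_norm``.

```python
import numpy as np

def clip_by_norm(grad, clip_norm):
    # L2 norm of this single gradient tensor.
    norm = float(np.sqrt(np.sum(grad ** 2)))
    if norm <= clip_norm:
        return grad  # already within the allowed norm: unchanged
    return grad * (clip_norm / norm)  # rescaled so the norm equals clip_norm
```

Unlike global-norm clipping, each tensor is clipped on its own, so the relative magnitudes between different gradients are not preserved.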

**Code example 1: static graph**
**Code example**

.. code-block:: python

import paddle
import paddle.fluid as fluid
import numpy as np

main_prog = fluid.Program()
startup_prog = fluid.Program()
with fluid.program_guard(
        main_program=main_prog, startup_program=startup_prog):
    image = fluid.data(
        name='x', shape=[-1, 2], dtype='float32')
    predict = fluid.layers.fc(input=image, size=3, act='relu')  # trainable parameters: fc_0.w.0, fc_0.b.0
    loss = fluid.layers.mean(predict)

    # Clip all parameters in the network:
    clip = fluid.clip.GradientClipByNorm(clip_norm=1.0)

    # To clip only the parameter fc_0.w_0:
    # pass a function filter_func to need_clip; filter_func receives a Parameter and returns a bool
    # def filter_func(param):
    #     # The parameter name is a convenient check (name can be set in fluid.ParamAttr; defaults are fc_0.w_0 and fc_0.b_0)
    #     return param.name == "fc_0.w_0"
    # clip = fluid.clip.GradientClipByNorm(clip_norm=1.0, need_clip=filter_func)

    sgd_optimizer = fluid.optimizer.SGDOptimizer(learning_rate=0.1, grad_clip=clip)
    sgd_optimizer.minimize(loss)

place = fluid.CPUPlace()
exe = fluid.Executor(place)
x = np.random.uniform(-100, 100, (10, 2)).astype('float32')
exe.run(startup_prog)
out = exe.run(main_prog, feed={'x': x}, fetch_list=[loss])


**Code example 2: dynamic graph**

.. code-block:: python

import paddle
import paddle.fluid as fluid

with fluid.dygraph.guard():
    linear = fluid.dygraph.Linear(10, 10)  # trainable parameters: linear_0.w.0, linear_0.b.0
    inputs = fluid.layers.uniform_random([32, 10]).astype('float32')
    out = linear(fluid.dygraph.to_variable(inputs))
    loss = fluid.layers.reduce_mean(out)
    loss.backward()

    # Clip all parameters in the network:
    clip = fluid.clip.GradientClipByNorm(clip_norm=1.0)

    # To clip only the parameter linear_0.w_0:
    # pass a function filter_func to need_clip; filter_func receives a ParamBase and returns a bool
    # def filter_func(param):
    #     # The parameter name can be checked (name can be set in fluid.ParamAttr; defaults are linear_0.w_0 and linear_0.b_0)
    #     return param.name == "linear_0.w_0"
    #     # Note: linear.weight and linear.bias return the weight and bias of the dygraph.Linear layer, so this also works:
    #     # return param.name == linear.weight.name
    # clip = fluid.clip.GradientClipByNorm(clip_norm=1.0, need_clip=filter_func)

    sgd_optimizer = fluid.optimizer.SGD(
        learning_rate=0.1, parameter_list=linear.parameters(), grad_clip=clip)
    sgd_optimizer.minimize(loss)

x = paddle.uniform([10, 10], min=-1.0, max=1.0, dtype='float32')
linear = paddle.nn.Linear(10, 10)
out = linear(x)
loss = paddle.mean(out)
loss.backward()

# Clip all parameters in the network:
clip = paddle.nn.GradientClipByNorm(clip_norm=1.0)

# To clip only the parameter linear_0.w_0:
# pass a function filter_func to need_clip; filter_func receives a ParamBase and returns a bool
# def filter_func(param):
#     # The parameter name can be checked (name can be set in paddle.ParamAttr; defaults are linear_0.w_0 and linear_0.b_0)
#     return param.name == "linear_0.w_0"
#     # Note: linear.weight and linear.bias return the weight and bias of the Linear layer, so this also works:
#     # return param.name == linear.weight.name
# clip = paddle.nn.GradientClipByNorm(clip_norm=1.0, need_clip=filter_func)

sgd = paddle.optimizer.SGD(learning_rate=0.1, parameters=linear.parameters(), grad_clip=clip)
sgd.step()

91 changes: 24 additions & 67 deletions doc/paddle/api/paddle/fluid/clip/GradientClipByValue_cn.rst
@@ -3,8 +3,7 @@
GradientClipByValue
-------------------------------

.. py:class:: paddle.fluid.clip.GradientClipByValue(max, min=None, need_clip=None)

.. py:class:: paddle.nn.GradientClipByValue(max, min=None, need_clip=None)



@@ -13,7 +12,7 @@ GradientClipByValue

The input Tensor is not passed to this class directly; by default, all gradients in the ``Program`` are selected. If ``need_clip`` is not None, only a subset of the parameters is clipped.

This class takes effect only after it has been set during ``optimizer`` initialization; see the ``optimizer`` documentation (for example :ref:`cn_api_fluid_optimizer_SGDOptimizer`).
This class takes effect only after it has been set during ``optimizer`` initialization; see the ``optimizer`` documentation (for example :ref:`cn_api_fluid_optimizer_SGD`).

Given a Tensor ``t``, this operation clips its values into the range between ``min`` and ``max``.

@@ -26,72 +25,30 @@ GradientClipByValue
- **min** (float, optional) - The minimum value to clip by. If not set by the user, it is automatically set to ``-max`` (in this case ``max`` must be greater than 0).
- **need_clip** (function, optional) - A function used to select the parameters whose gradients are clipped. It receives a ``Parameter`` and returns a ``bool`` (True means the parameter is clipped, False means it is not). Defaults to None, in which case all parameters in the network are clipped.
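Value clipping can be illustrated with a minimal NumPy sketch (an illustration only, not Paddle's implementation; ``clip_by_value`` is a hypothetical helper): every element of the gradient is clamped into ``[min, max]`` elementwise, with ``min`` defaulting to ``-max`` as described above.

```python
import numpy as np

def clip_by_value(grad, max_value, min_value=None):
    # Default the lower bound to -max_value, mirroring the documented behavior.
    if min_value is None:
        min_value = -max_value
    # Elementwise clamp into [min_value, max_value].
    return np.clip(grad, min_value, max_value)
```

Note that, unlike the norm-based strategies, elementwise clamping changes the direction of the gradient whenever any element is clipped.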

**Code example 1: static graph**
**Code example**

.. code-block:: python

import paddle
import paddle.fluid as fluid
import numpy as np

main_prog = fluid.Program()
startup_prog = fluid.Program()
with fluid.program_guard(
        main_program=main_prog, startup_program=startup_prog):
    image = fluid.data(
        name='x', shape=[-1, 2], dtype='float32')
    predict = fluid.layers.fc(input=image, size=3, act='relu')  # trainable parameters: fc_0.w.0, fc_0.b.0
    loss = fluid.layers.mean(predict)

    # Clip all parameters in the network:
    clip = fluid.clip.GradientClipByValue(min=-1, max=1)

    # To clip only the parameter fc_0.w_0:
    # pass a function filter_func to need_clip; filter_func receives a Parameter and returns a bool
    # def filter_func(param):
    #     # The parameter name is a convenient check (name can be set in fluid.ParamAttr; defaults are fc_0.w_0 and fc_0.b_0)
    #     return param.name == "fc_0.w_0"
    # clip = fluid.clip.GradientClipByValue(min=-1, max=1, need_clip=filter_func)

    sgd_optimizer = fluid.optimizer.SGDOptimizer(learning_rate=0.1, grad_clip=clip)
    sgd_optimizer.minimize(loss)

place = fluid.CPUPlace()
exe = fluid.Executor(place)
x = np.random.uniform(-100, 100, (10, 2)).astype('float32')
exe.run(startup_prog)
out = exe.run(main_prog, feed={'x': x}, fetch_list=[loss])


**Code example 2: dynamic graph**

.. code-block:: python

import paddle
import paddle.fluid as fluid

with fluid.dygraph.guard():
    linear = fluid.dygraph.Linear(10, 10)  # trainable parameters: linear_0.w.0, linear_0.b.0
    inputs = fluid.layers.uniform_random([32, 10]).astype('float32')
    out = linear(fluid.dygraph.to_variable(inputs))
    loss = fluid.layers.reduce_mean(out)
    loss.backward()

    # Clip all parameters in the network:
    clip = fluid.clip.GradientClipByValue(min=-1, max=1)

    # To clip only the parameter linear_0.w_0:
    # pass a function filter_func to need_clip; filter_func receives a ParamBase and returns a bool
    # def filter_func(param):
    #     # The parameter name can be checked (name can be set in fluid.ParamAttr; defaults are linear_0.w_0 and linear_0.b_0)
    #     return param.name == "linear_0.w_0"
    #     # Note: linear.weight and linear.bias return the weight and bias of the dygraph.Linear layer, so this also works:
    #     # return param.name == linear.weight.name
    # clip = fluid.clip.GradientClipByValue(min=-1, max=1, need_clip=filter_func)

    sgd_optimizer = fluid.optimizer.SGD(
        learning_rate=0.1, parameter_list=linear.parameters(), grad_clip=clip)
    sgd_optimizer.minimize(loss)

x = paddle.uniform([10, 10], min=-1.0, max=1.0, dtype='float32')
linear = paddle.nn.Linear(10, 10)
out = linear(x)
loss = paddle.mean(out)
loss.backward()

# Clip all parameters in the network:
clip = paddle.nn.GradientClipByValue(min=-1, max=1)

# To clip only the parameter linear_0.w_0:
# pass a function filter_func to need_clip; filter_func receives a ParamBase and returns a bool
# def filter_func(param):
#     # The parameter name can be checked (name can be set in paddle.ParamAttr; defaults are linear_0.w_0 and linear_0.b_0)
#     return param.name == "linear_0.w_0"
#     # Note: linear.weight and linear.bias return the weight and bias of the Linear layer, so this also works:
#     # return param.name == linear.weight.name
# clip = paddle.nn.GradientClipByValue(min=-1, max=1, need_clip=filter_func)

sgd = paddle.optimizer.SGD(learning_rate=0.1, parameters=linear.parameters(), grad_clip=clip)
sgd.step()
