python/paddle/optimizer/rprop.py (2 additions, 0 deletions)
@@ -58,12 +58,14 @@ class Rprop(Optimizer):
         learning_rate_range (tuple, optional): The range of learning rate.
             Learning rate cannot be smaller than the first element of the tuple;
             learning rate cannot be larger than the second element of the tuple.
+            The default value is (1e-5, 50).
         parameters (list|tuple, optional): List/Tuple of ``Tensor`` to update to minimize ``loss``.
             This parameter is required in dygraph mode.
             The default value is None in static graph mode, at this time all parameters will be updated.
         etas (tuple, optional): Tuple used to update learning rate.
             The first element of the tuple is the multiplicative decrease factor;
             the second element of the tuple is the multiplicative increase factor.
+            The default value is (0.5, 1.2).
         grad_clip (GradientClipBase, optional): Gradient clipping strategy, it's an instance of some derived class of ``GradientClipBase`` .
             There are three clipping strategies ( :ref:`api_paddle_nn_ClipGradByGlobalNorm` , :ref:`api_paddle_nn_ClipGradByNorm` , :ref:`api_paddle_nn_ClipGradByValue` ).
             Default None, meaning there is no gradient clipping.
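For context on what `etas` and `learning_rate_range` control, here is a minimal pure-Python sketch of the classic Rprop step-size rule for a single scalar parameter. This is an illustration of the algorithm the docstring describes, not Paddle's implementation; the function name `rprop_step` is hypothetical, and the defaults mirror the documented ones ((0.5, 1.2) and (1e-5, 50)).

```python
def rprop_step(param, grad, prev_grad, step,
               etas=(0.5, 1.2), lr_range=(1e-5, 50.0)):
    """One Rprop update for a scalar parameter (illustrative sketch).

    etas: (multiplicative decrease factor, multiplicative increase factor)
    lr_range: step size is clamped to [lr_range[0], lr_range[1]]
    """
    eta_minus, eta_plus = etas
    lo, hi = lr_range
    s = grad * prev_grad
    if s > 0:
        # gradient kept its sign: grow the step, capped at the range maximum
        step = min(step * eta_plus, hi)
    elif s < 0:
        # gradient changed sign: shrink the step, floored at the range minimum
        step = max(step * eta_minus, lo)
    # move against the gradient's sign by the adapted step size
    sign = (grad > 0) - (grad < 0)
    param -= sign * step
    return param, step
```

For example, two consecutive positive gradients grow the step by the increase factor (0.1 becomes 0.12), while a sign flip halves it; the range bounds keep the step from vanishing or exploding.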