
Commit 7515047

add default value descriptions for learning_rate_range and etas
1 parent: 169de4d

1 file changed: 2 additions & 0 deletions

python/paddle/optimizer/rprop.py

@@ -58,12 +58,14 @@ class Rprop(Optimizer):
         learning_rate_range (tuple, optional): The range of learning rate.
             Learning rate cannot be smaller than the first element of the tuple;
             learning rate cannot be larger than the second element of the tuple.
+            The default value is (1e-5, 50).
         parameters (list|tuple, optional): List/Tuple of ``Tensor`` to update to minimize ``loss``.
             This parameter is required in dygraph mode.
             The default value is None in static graph mode, at this time all parameters will be updated.
         etas (tuple, optional): Tuple used to update learning rate.
             The first element of the tuple is the multiplicative decrease factor;
             the second element of the tuple is the multiplicative increase factor.
+            The default value is (0.5, 1.2).
         grad_clip (GradientClipBase, optional): Gradient clipping strategy, it's an instance of some derived class of ``GradientClipBase`` .
             There are three clipping strategies ( :ref:`api_paddle_nn_ClipGradByGlobalNorm` , :ref:`api_paddle_nn_ClipGradByNorm` , :ref:`api_paddle_nn_ClipGradByValue` ).
             Default None, meaning there is no gradient clipping.
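
For context, a minimal usage sketch of the Rprop optimizer with the defaults this change documents written out explicitly. The Linear model and the plain learning_rate value are illustrative assumptions; only the four documented parameters appear in this diff.

import paddle

# Construct Rprop with the documented defaults made explicit.
linear = paddle.nn.Linear(10, 1)
opt = paddle.optimizer.Rprop(
    learning_rate=0.001,             # assumed initial per-parameter step size
    learning_rate_range=(1e-5, 50),  # default per this change: clamp bounds for the step size
    parameters=linear.parameters(),  # required in dygraph mode
    etas=(0.5, 1.2),                 # default per this change: (decrease factor, increase factor)
    grad_clip=None,                  # default: no gradient clipping
)

In standard Rprop, each parameter's step size is multiplied by the second eta (1.2) while its gradient keeps the same sign, multiplied by the first eta (0.5) when the sign flips, and kept within learning_rate_range throughout.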
