
Commit e100845 (parent: e6f659e)

update dygraph doc for api, test=develop

1 file changed: 16 additions & 21 deletions

File tree

  • python/paddle/fluid/dygraph

python/paddle/fluid/dygraph/nn.py

@@ -302,9 +302,8 @@ class Conv3D(layers.Layer):
         W_{out}&= \\frac{(W_{in} + 2 * paddings[2] - (dilations[2] * (W_f - 1) + 1))}{strides[2]} + 1

     Args:
-        input (Variable): The input image with [N, C, D, H, W] format.
-        num_filters(int): The number of filter. It is as same as the output
-            image channel.
+        name_scope(str) : The name for this class.
+        num_filters(int): The number of filter. It is as same as the output image channel.
         filter_size (int|tuple|None): The filter size. If filter_size is a tuple,
             it must contain three integers, (filter_size_D, filter_size_H, filter_size_W).
             Otherwise, the filter will be a square.
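The `W_out` formula quoted in this hunk can be checked numerically. Below is a minimal pure-Python sketch of that size rule; the helper name `conv_out_dim` is mine for illustration, not part of the Paddle API.

```python
# Sketch of the conv output-size formula from the Conv3D docstring:
#     W_out = (W_in + 2*padding - (dilation*(W_f - 1) + 1)) // stride + 1
# The helper name is illustrative, not a Paddle function.

def conv_out_dim(in_dim, filter_dim, stride=1, padding=0, dilation=1):
    """Output extent of one spatial dimension of a convolution."""
    effective_filter = dilation * (filter_dim - 1) + 1
    return (in_dim + 2 * padding - effective_filter) // stride + 1

# A 3-wide filter with stride 1 and padding 1 preserves each spatial dim:
print(conv_out_dim(32, 3, stride=1, padding=1))   # 32
print(conv_out_dim(32, 3, stride=2, padding=1))   # 16
```

The same rule applies independently to each of D, H, and W for `Conv3D`.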
@@ -696,12 +695,11 @@ class Pool2D(layers.Layer):
             it must contain two integers, (pool_padding_on_Height, pool_padding_on_Width).
             Otherwise, the pool padding size will be a square of an int.
         global_pooling (bool): (bool, default false) Whether to use the global pooling. If global_pooling = true,
-            kernel size and paddings will be ignored
-        use_cudnn (bool): (bool, default True) Onlyceil_mode (bool) - (bool, default false) Whether to use the ceil
-            function to calculate output height and width. False is the default.
-            If it is set to False, the floor function will be used.
-        exclusive (bool): Whether to exclude padding points in average pooling
-            mode, default is true
+            kernel size and paddings will be ignored.
+        use_cudnn (bool): (bool, default True) Only used in cudnn kernel, need install cudnn.
+        ceil_mode (bool): (bool, default false) Whether to use the ceil function to calculate output height and width.
+            False is the default. If it is set to False, the floor function will be used.
+        exclusive (bool): (bool, default True) Whether to exclude padding points in average pooling mode, default is true

     Returns:
         Variable: The pooling result.
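The `ceil_mode` flag this hunk documents changes only the rounding in the pooling output-size rule. A sketch of that rule follows; the exact formula is an assumption here (it is the standard one, not spelled out in this diff), and the helper name is mine.

```python
import math

# Standard pooling output-size rule (assumed, not quoted in this diff):
#     out = (in + 2*pad - ksize) / stride + 1
# rounded up when ceil_mode=True, down otherwise.

def pool_out_dim(in_dim, ksize, stride, padding=0, ceil_mode=False):
    rounding = math.ceil if ceil_mode else math.floor
    return int(rounding((in_dim + 2 * padding - ksize) / stride)) + 1

# With a 7-wide input, 2-wide window, stride 2: floor drops the trailing
# column, ceil keeps a partial window.
print(pool_out_dim(7, 2, 2, ceil_mode=False))   # 3
print(pool_out_dim(7, 2, 2, ceil_mode=True))    # 4
```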
@@ -1056,11 +1054,13 @@ class BatchNorm(layers.Layer):

     Examples:
         .. code-block:: python
+            import paddle.fluid as fluid

-            fc = fluid.FC('fc', size=200, param_attr='fc1.w')
-            hidden1 = fc(x)
-            batch_norm = fluid.BatchNorm("batch_norm", 10)
-            hidden2 = batch_norm(hidden1)
+            with fluid.dygraph.guard():
+                fc = fluid.FC('fc', size=200, param_attr='fc1.w')
+                hidden1 = fc(x)
+                batch_norm = fluid.BatchNorm("batch_norm", 10)
+                hidden2 = batch_norm(hidden1)
    """

    def __init__(self,
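For context on what the `BatchNorm` layer in the example above computes, here is the standard batch-normalization formula as a pure-Python sketch (background only, not Paddle code; the function name is mine).

```python
import math

# Standard batch-norm transform over one feature across a batch:
#     y = (x - mean) / sqrt(var + eps) * scale + shift
# Illustrative sketch, not Paddle's implementation.

def batch_norm_1d(xs, scale=1.0, shift=0.0, eps=1e-5):
    mean = sum(xs) / len(xs)
    var = sum((x - mean) ** 2 for x in xs) / len(xs)
    return [(x - mean) / math.sqrt(var + eps) * scale + shift for x in xs]

ys = batch_norm_1d([1.0, 2.0, 3.0, 4.0])
# Output has approximately zero mean and unit variance.
print(abs(sum(ys)) < 1e-6)
```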
@@ -1421,7 +1421,7 @@ class GRUUnit(layers.Layer):

     if origin_mode is True, then the equation of a gru step is from paper
     `Learning Phrase Representations using RNN Encoder-Decoder for Statistical
-    Machine Translation <https://arxiv.org/pdf/1406.1078.pdf>`_
+    Machine Translation <https://arxiv.org/pdf/1406.1078.pdf>`

     .. math::
         u_t & = actGate(xu_{t} + W_u h_{t-1} + b_u)
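A scalar sketch of the update-gate equation `u_t = actGate(xu_t + W_u h_{t-1} + b_u)` shown above. Assumptions: `actGate` is taken to be the sigmoid (the usual gate activation), and the weights are toy values for illustration.

```python
import math

# Scalar sketch of the GRU update gate from the docstring:
#     u_t = actGate(xu_t + W_u * h_{t-1} + b_u)
# actGate is assumed to be sigmoid; w_u and b_u are toy values.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def update_gate(xu_t, h_prev, w_u=0.5, b_u=0.1):
    return sigmoid(xu_t + w_u * h_prev + b_u)

u = update_gate(xu_t=0.3, h_prev=0.2)
# A gate value always lies strictly between 0 and 1.
print(0.0 < u < 1.0)
```

The gate's output in (0, 1) is what lets the GRU blend the previous hidden state with the candidate state.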
@@ -1459,9 +1459,7 @@ class GRUUnit(layers.Layer):
     and concatenation of :math:`u_t`, :math:`r_t` and :math:`m_t`.

     Args:
-        input (Variable): The fc transformed input value of current step.
         name_scope (str): See base class.
-        hidden (Variable): The hidden value of gru unit from previous step.
         size (integer): The input dimension value.
         param_attr(ParamAttr|None): The parameter attribute for the learnable
             hidden-hidden weight matrix. Note:
@@ -2064,8 +2062,6 @@ class Conv2DTranspose(layers.Layer):
             library is installed. Default: True.
         act (str): Activation type, if it is set to None, activation is not appended.
             Default: None.
-        name(str|None): A name for this layer(optional). If set None, the layer
-            will be named automatically. Default: True.

     Returns:
         Variable: The tensor variable storing the convolution transpose result.
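For reference, the output extent of a transposed convolution follows the usual convention below. This formula is an assumption (it does not appear in this diff); the helper names are mine. It inverts the forward conv size rule whenever the forward division is exact.

```python
# Assumed size rules (illustrative, not quoted from the Paddle docs here):
#   forward conv:    out = (in + 2*pad - (dilation*(k-1) + 1)) // stride + 1
#   conv transpose:  out = (in - 1)*stride - 2*pad + dilation*(k-1) + 1

def conv_forward_out_dim(in_dim, k, stride=1, pad=0, dilation=1):
    return (in_dim + 2 * pad - (dilation * (k - 1) + 1)) // stride + 1

def conv_transpose_out_dim(in_dim, k, stride=1, pad=0, dilation=1):
    return (in_dim - 1) * stride - 2 * pad + dilation * (k - 1) + 1

# Round trip: conv then conv-transpose recovers the original extent
# when (in + 2*pad - k) is a multiple of stride.
mid = conv_forward_out_dim(33, 3, stride=2, pad=1)   # 17
print(conv_transpose_out_dim(mid, 3, stride=2, pad=1))   # 33
```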
@@ -2213,8 +2209,6 @@ class SequenceConv(layers.Layer):
             is not set, the parameter is initialized with Xavier. Default: None.
         act (str): Activation type, if it is set to None, activation is not appended.
             Default: None.
-        name (str|None): A name for this layer(optional). If set None, the layer
-            will be named automatically. Default: None.

     Returns:
         Variable: output of sequence_conv
@@ -2291,7 +2285,8 @@ class RowConv(layers.Layer):
         act (str): Non-linear activation to be applied to output variable.

     Returns:
-        the output(Out) is a LodTensor, which supports variable time-length input sequences. The underlying tensor in this LodTensor is a matrix with shape T x N, i.e., the same shape as X.
+        the output(Out) is a LodTensor, which supports variable time-length input sequences.
+        The underlying tensor in this LodTensor is a matrix with shape T x N, i.e., the same shape as X.

     Examples:
         .. code-block:: python
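To illustrate why the `RowConv` output keeps the same T x N shape as the input, here is a sketch of a row (lookahead) convolution: each output row mixes the current row with a few future rows, zero-padded past the end. The windowing details are my assumptions based on the row-conv design, not Paddle's implementation.

```python
# Sketch of a row (lookahead) convolution. Output row t mixes row t with
# the next len(weights)-1 future rows; rows past the end count as zero,
# so the output has the same T x N shape as the input. Illustrative only.

def row_conv(x, weights):
    """x: list of T rows, each a list of N floats; weights: length-k filter."""
    T, N = len(x), len(x[0])
    out = []
    for t in range(T):
        row = [0.0] * N
        for i, w in enumerate(weights):
            if t + i < T:                      # rows past the end are zero
                for j in range(N):
                    row[j] += w * x[t + i][j]
        out.append(row)
    return out

x = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]       # T=3, N=2
y = row_conv(x, [0.5, 0.5])
print(len(y), len(y[0]))                        # shape preserved: 3 2
```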
