
Commit 9af17e6 (1 parent: b934d0b)

fix en doc for emb (PaddlePaddle#31980)

* fix en doc for emb, test=document_fix; Change-Id: I4757e67caacd7189f068493ed45a7445f87ffb40

File tree

2 files changed: +7, -11 lines


python/paddle/nn/functional/input.py

Lines changed: 1 addition & 3 deletions
@@ -148,9 +148,7 @@ def embedding(x, weight, padding_idx=None, sparse=False, name=None):
         sparse(bool): The flag indicating whether to use sparse update. This parameter only
             affects the performance of the backwards gradient update. It is recommended to set
             True because sparse update is faster. But some optimizers does not support sparse update,
-            such as :ref:`api_optimizer_AdadeltaOptimizer` , :ref:`api_optimizer_AdamaxOptimizer` ,
-            :ref:`api_optimizer_DecayedAdagradOptimizer` , :ref:`api_optimizer_FtrlOptimizer` ,
-            :ref:`api_optimizer_LambOptimizer` and :ref:`api_optimizer_LarsMomentumOptimizer` .
+            such as :ref:`api_paddle_optimizer_adadelta_Adadelta` , :ref:`api_paddle_optimizer_adamax_Adamax` , :ref:`api_paddle_optimizer_lamb_Lamb`.
         In these cases, sparse must be False. Default: False.
         padding_idx(int|long|None): padding_idx needs to be in the interval [-weight.shape[0], weight.shape[0]).
             If :math:`padding\_idx < 0`, the :math:`padding\_idx` will automatically be converted
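The docstring above describes the lookup semantics of `paddle.nn.functional.embedding`: each id in `x` selects a row of `weight`, appending the embedding size as a new last dimension, and ids equal to `padding_idx` (with negative values converted to `weight.shape[0] + padding_idx`) read back as all zeros. A minimal NumPy sketch of those documented semantics (an illustrative stand-in, not Paddle's implementation; `embedding_lookup` is a hypothetical helper name):

```python
import numpy as np

def embedding_lookup(x, weight, padding_idx=None):
    """NumPy sketch of the documented lookup: out = weight[x], with
    all-zero rows wherever x == padding_idx (hypothetical helper)."""
    if padding_idx is not None and padding_idx < 0:
        # documented conversion: negative padding_idx -> weight.shape[0] + padding_idx
        padding_idx = weight.shape[0] + padding_idx
    out = weight[np.asarray(x)]          # appends emb_size as the last dimension
    if padding_idx is not None:
        out[np.asarray(x) == padding_idx] = 0.0   # padding ids map to zeros
    return out

weight = np.random.rand(128, 16).astype("float32")
x = np.array([[1, 3], [2, 4], [4, 127]])
out = embedding_lookup(x, weight, padding_idx=-1)
print(out.shape)  # (3, 2, 16); id 127 (= 128 - 1) yields an all-zero vector
```

Fancy indexing (`weight[x]`) returns a copy, so zeroing the padding positions in `out` leaves `weight` untouched, matching the doc's "output all-zero padding data" wording.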

python/paddle/nn/layer/common.py

Lines changed: 6 additions & 8 deletions
@@ -1219,7 +1219,7 @@ class Embedding(layers.Layer):
     For specific usage, refer to code examples. It implements the function of the Embedding Layer.
     This layer is used to lookup embeddings vector of ids provided by :attr:`x` .
     It automatically constructs a 2D embedding matrix based on the
-    input :attr:`num_embeddings` and attr:`embedding_dim`.
+    input :attr:`num_embeddings` and :attr:`embedding_dim`.

     The shape of output Tensor is generated by appending an emb_size dimension to the
     last dimension of the input Tensor shape.
@@ -1231,9 +1231,9 @@ class Embedding(layers.Layer):

     Case 1:

-    input is a Tensor. padding_idx = -1
-    input.data = [[1, 3], [2, 4], [4, 127]
-    input.shape = [3, 2]
+    x is a Tensor. padding_idx = -1
+    x.data = [[1, 3], [2, 4], [4, 127]
+    x.shape = [3, 2]
     Given size = [128, 16]
     output is a Tensor:
     out.shape = [3, 2, 16]
@@ -1251,7 +1251,7 @@ class Embedding(layers.Layer):
     Parameters:
         num_embeddings (int): Just one element which indicate the size
             of the dictionary of embeddings.
-        embedding_dim: Just one element which indicate the size of each embedding vector respectively.
+        embedding_dim (int): Just one element which indicate the size of each embedding vector respectively.
         padding_idx(int|long|None): padding_idx needs to be in the interval [-num_embeddings, num_embeddings).
             If :math:`padding\_idx < 0`, the :math:`padding\_idx` will automatically be converted
             to :math:`vocab\_size + padding\_idx` . It will output all-zero padding data whenever lookup
@@ -1260,9 +1260,7 @@ class Embedding(layers.Layer):
         sparse(bool): The flag indicating whether to use sparse update. This parameter only
             affects the performance of the backwards gradient update. It is recommended to set
            True because sparse update is faster. But some optimizer does not support sparse update,
-            such as :ref:`api_optimizer_AdadeltaOptimizer` , :ref:`api_optimizer_AdamaxOptimizer` ,
-            :ref:`api_optimizer_DecayedAdagradOptimizer` , :ref:`api_optimizer_FtrlOptimizer` ,
-            :ref:`api_optimizer_LambOptimizer` and :ref:`api_optimizer_LarsMomentumOptimizer` .
+            such as :ref:`api_paddle_optimizer_adadelta_Adadelta` , :ref:`api_paddle_optimizer_adamax_Adamax` , :ref:`api_paddle_optimizer_lamb_Lamb`.
         In these case, sparse must be False. Default: False.
         weight_attr(ParamAttr): To specify the weight parameter property. Default: None, which means the
             default weight parameter property is used. See usage for details in :ref:`api_ParamAttr` . In addition,
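The `Embedding` docstring's Case 1 (ids of shape [3, 2] looked up in a [128, 16] table, `padding_idx = -1`, output shape [3, 2, 16]) can be reproduced with a toy NumPy stand-in for the layer; this mirrors only the documented shape and padding behavior, not Paddle's actual `paddle.nn.Embedding` (no training, no sparse update):

```python
import numpy as np

class ToyEmbedding:
    """Toy stand-in for the documented layer: builds a
    [num_embeddings, embedding_dim] matrix and looks rows up by id."""
    def __init__(self, num_embeddings, embedding_dim, padding_idx=None):
        if padding_idx is not None and padding_idx < 0:
            # documented conversion: vocab_size + padding_idx
            padding_idx = num_embeddings + padding_idx
        self.weight = np.random.rand(num_embeddings, embedding_dim).astype("float32")
        if padding_idx is not None:
            self.weight[padding_idx] = 0.0  # padding row reads back as all zeros
        self.padding_idx = padding_idx

    def __call__(self, x):
        # appends embedding_dim to the last dimension of x's shape
        return self.weight[np.asarray(x)]

emb = ToyEmbedding(num_embeddings=128, embedding_dim=16, padding_idx=-1)
x = np.array([[1, 3], [2, 4], [4, 127]])  # Case 1 ids from the docstring
out = emb(x)
print(out.shape)  # (3, 2, 16): emb_size appended to the input shape
```

Zeroing the padding row in the table itself is the simplest way to get the "all-zero padding data" behavior in a sketch; the real layer additionally keeps that row out of gradient updates.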

0 commit comments