[Docathon][Add CN Doc No.56-57] #6358
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Merged
Commits (6)
- fb40183 add docs (zade23)
- 31d53df Merge branch 'PaddlePaddle:develop' into en_doc_5657 (zade23)
- 358b214 fix doc issues (zade23)
- 1f88fa8 Update docs/api/paddle/incubate/nn/functional/fused_bias_dropout_resi… (zade23)
- 3f8be7e Update docs/api/paddle/incubate/nn/functional/fused_bias_dropout_resi… (zade23)
- 6c6f0cf rerun pre-commit (zade23)
docs/api/paddle/incubate/nn/FusedBiasDropoutResidualLayerNorm_cn.rst (37 additions, 0 deletions)
.. _cn_api_paddle_incubate_nn_FusedBiasDropoutResidualLayerNorm:

FusedBiasDropoutResidualLayerNorm
---------------------------------

.. py:class:: paddle.incubate.nn.FusedBiasDropoutResidualLayerNorm(embed_dim, dropout_rate=0.5, weight_attr=None, bias_attr=None, epsilon=1e-05, name=None)

Applies the fused_bias_dropout_residual_layer_norm operator, which fuses the bias add, dropout, and residual layer normalization operations.

Parameters
::::::::::::
- **embed_dim** (int) - The expected feature size of the input and output.
- **dropout_rate** (float, optional) - The dropout probability used in the dropout applied after the bias add. 0 means no dropout. Default: 0.5.
- **weight_attr** (ParamAttr, optional) - Specifies the attribute of the layer normalization weight parameter. Default: None, which means the default weight parameter attribute is used. For usage details, see :ref:`cn_api_paddle_ParamAttr` .
- **bias_attr** (ParamAttr|bool, optional) - Specifies the attribute of the bias parameter. Default: None, which means the default bias parameter attribute is used. If set to False, the layer will have no trainable bias parameter. For usage details, see :ref:`cn_api_paddle_ParamAttr` .
- **epsilon** (float, optional) - A small value added to the variance to prevent division by zero. Default: 1e-05.

Code Example
::::::::::::

COPY-FROM: paddle.incubate.nn.FusedBiasDropoutResidualLayerNorm

forward(x, residual)
::::::::::::
Applies the fused_bias_dropout_residual_layer_norm operator, which fuses the bias add, dropout, and residual layer normalization operations.

Parameters
::::::::::::
- **x** (Tensor) - The input tensor, with shape `[batch_size, seq_len, embed_dim]`. The data type should be float32 or float64.
- **residual** (Tensor, optional) - The residual tensor, with shape `[batch_size, value_length, vdim]`. The data type should be float32 or float64.

Returns
::::::::::::
Tensor|tuple: a tensor with the same data type and shape as `x`.

extra_repr()
::::::::::::
Extra representation of the current layer; you can customize it when implementing your own layer.
...api/paddle/incubate/nn/functional/fused_bias_dropout_residual_layer_norm_cn.rst (45 additions, 0 deletions)
.. _cn_api_paddle_incubate_nn_functional_fused_bias_dropout_residual_layer_norm:

fused_bias_dropout_residual_layer_norm
--------------------------------------

.. py:function:: paddle.incubate.nn.functional.fused_bias_dropout_residual_layer_norm(x, residual, bias=None, ln_scale=None, ln_bias=None, dropout_rate=0.5, ln_epsilon=1e-05, training=True, mode='upscale_in_train', name=None)

The fused_bias_dropout_residual_layer_norm operator, which fuses the bias add, dropout, and residual layer normalization operations.

Its pseudocode is as follows:

.. code-block:: text

    >>> y = layer_norm(residual + dropout(bias + x))

Parameters
::::::::::::
- **x** (Tensor) - The input tensor, with shape `[*, embed_dim]`.
- **residual** (Tensor) - The residual tensor, with the same shape as x.
- **bias** (Tensor, optional) - The bias of the linear layer, with shape `[embed_dim]`. Default: None.
- **ln_scale** (Tensor, optional) - The weight tensor of the layer normalization, with shape `[embed_dim]`. Default: None.
- **ln_bias** (Tensor, optional) - The bias tensor of the layer normalization, with shape `[embed_dim]`. Default: None.
- **dropout_rate** (float, optional) - The dropout probability used in the dropout applied after the bias add. 0 means no dropout. Default: 0.5.
- **ln_epsilon** (float, optional) - A small float added to the denominator of the layer normalization to avoid division by zero. Default: 1e-05.
- **training** (bool, optional) - A flag indicating whether it is in the training phase. Default: True.
- **mode** (str, optional) - ['upscale_in_train' (default) | 'downscale_in_infer'], the two modes are:

  1. upscale_in_train (default), upscale the output at training time

     - train: out = input * mask / (1.0 - p)
     - inference: out = input

  2. downscale_in_infer, downscale the output at inference time

     - train: out = input * mask
     - inference: out = input * (1.0 - p)

- **name** (str, optional) - Name of the operation (optional, default is None). For usage details, see :ref:`api_guide_Name`.

Returns
::::::::::::
- Tensor, the output tensor, with the same data type and shape as `x`.

Code Example
::::::::::::

COPY-FROM: paddle.incubate.nn.functional.fused_bias_dropout_residual_layer_norm
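The two dropout scaling modes described above can be sanity-checked with a small NumPy sketch. The mask and probability below are illustrative; this is not the Paddle API, just the scaling rules written out:

```python
import numpy as np

p = 0.5  # dropout probability
x = np.array([1.0, 2.0, 3.0, 4.0])
mask = np.array([1.0, 0.0, 1.0, 1.0])  # 1 keeps an element, 0 drops it

# upscale_in_train: scale kept values up during training, identity at inference
train_up = x * mask / (1.0 - p)
infer_up = x

# downscale_in_infer: plain masking during training, scale down at inference
train_down = x * mask
infer_down = x * (1.0 - p)

print(train_up)    # [2. 0. 6. 8.]
print(infer_down)  # [0.5 1.  1.5 2. ]
```

In both modes the expected value of the output stays consistent between training and inference; they only differ in which phase applies the `1 - p` scaling factor.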