[Accuracy diff No.90] Fix accuracy diff for paddle.incubate.nn.functional.fused_bias_dropout_residual_layer_norm API #74149
PR Category
Operator Mechanism
PR Types
Bug fixes
Description
The existing `fused_bias_dropout_residual_layer_norm` operator produces all-zero gradients in the backward pass under certain parameter configurations. Source-code investigation revealed that the failing cases invoked the fast kernel `ln_bwd_fast_kernel_driver`. Within that fast kernel, the gradient is multiplied by the dropout mask unconditionally: when `dropout_rate == 0`, the `mask_vec` becomes all zeros, causing the computed `dout` to become all zeros.

The fix references the non-fast-kernel implementation, which handles this scenario correctly by applying the mask multiplication conditionally (i.e., it skips multiplying by `mask_vec` when dropout is disabled).

With this change, all 8 test cases pass, and all related failures in `PaddleAPITest` are resolved.

Pcard-67164
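The failure mode described above can be illustrated with a simplified Python sketch (hypothetical function and parameter names, not the actual `ln_bwd_fast_kernel_driver` CUDA code): if the backward pass always multiplies the upstream gradient by the dropout mask, an all-zero `mask_vec` zeroes the entire gradient.

```python
# Simplified sketch of the reported bug (hypothetical names, not Paddle source).
def buggy_backward(d_out, mask_vec, dropout_rate):
    # The fast-kernel path multiplies by the mask unconditionally,
    # even when dropout is disabled. Scale mirrors inverted dropout.
    scale = 1.0 / (1.0 - dropout_rate) if dropout_rate < 1.0 else 0.0
    return [g * m * scale for g, m in zip(d_out, mask_vec)]

d_out = [0.5, -1.25, 2.0]
# With dropout_rate == 0 the kernel reportedly produced an all-zero mask,
# so every gradient element collapses to zero:
print(buggy_backward(d_out, [0.0, 0.0, 0.0], dropout_rate=0.0))
```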
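The conditional-mask fix can be sketched the same way (again a hypothetical Python illustration of the idea, not Paddle's implementation): when dropout is disabled, the mask multiplication is skipped entirely, matching the non-fast-kernel path.

```python
# Simplified sketch of the fix (hypothetical names, not Paddle source).
def fixed_backward(d_out, mask_vec, dropout_rate):
    # Skip the mask multiplication when dropout is disabled, so a
    # degenerate all-zero mask can no longer wipe out the gradient.
    if dropout_rate == 0.0:
        return list(d_out)
    scale = 1.0 / (1.0 - dropout_rate)
    return [g * m * scale for g, m in zip(d_out, mask_vec)]

d_out = [0.5, -1.25, 2.0]
# Even with an all-zero mask, dropout_rate == 0 now passes the
# gradient through unchanged:
print(fixed_backward(d_out, [0.0, 0.0, 0.0], dropout_rate=0.0))
```

With dropout enabled (e.g. `dropout_rate=0.5`), the mask and inverted-dropout scaling still apply exactly as before; only the disabled-dropout branch changes behavior.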