fix the precision problem of test_distribution #27524
Merged
zhiqiu merged 5 commits into PaddlePaddle:develop on Sep 29, 2020
Conversation
Thanks for your contribution!
zhiqiu approved these changes on Sep 28, 2020
kolinwei approved these changes on Sep 28, 2020
PR types
Others
PR changes
APIs
Describe
reason for test_distribution failure
The assign op does not support numpy.ndarray input whose dtype is FP64. When a user passes an FP64 numpy.ndarray as a parameter of the Uniform or Normal class, we need to use the assign op to convert it to an FP32 Tensor, and then use the cast op to convert it to an FP64 Tensor. There is a loss of accuracy in this conversion.
Refer to PR fix dtype not matching bug in log_prob and probs method of Distribution class #26767.
In test_distribution, the output of paddle is compared with the output of numpy to verify correctness.
In Uniform(low, high), the formula to calculate the entropy is entropy(low, high) = log(high - low). If low and high are very close, high - low will be close to 0, and a small precision loss becomes a large error because of the log.
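The amplification described above can be sketched with a minimal numpy example (illustrative values, not the PR's code): round-tripping the endpoints through FP32, as the assign op forces, perturbs high - low by a few ULPs, and the log turns that into a visible absolute error.

```python
import numpy as np

# Illustrative sketch (not the PR's code): mimic the FP64 -> FP32 -> FP64
# round trip forced by the assign op, and watch log(high - low) amplify
# the rounding error when high is very close to low.
low = np.float64(0.4)
high = np.float64(0.4000001)  # high - low is only about 1e-7

# Round-trip both endpoints through FP32, then back to FP64.
low32 = np.float64(np.float32(low))
high32 = np.float64(np.float32(high))

exact = np.log(high - low)      # entropy computed in pure FP64
lossy = np.log(high32 - low32)  # entropy after the FP32 round trip

print("exact entropy:", exact)
print("lossy entropy:", lossy)
print("absolute error:", abs(exact - lossy))
```

The FP32 spacing near 0.4 is about 3e-8, so a difference of 1e-7 can only be represented to within a few ULPs; the relative error in high - low is large, and log carries it into the entropy.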
solution
In the original Uniform unittest, the range of low is [-1, 1) and the range of high is [-5, 5). To avoid low and high being too close, keep low in the range [-1, 1) and set high in the range [5, 15). In addition, add a unittest for the case high < low.
The log_prob unittest of the Normal class also fails, so its tolerance is changed from 1e-6 to 1e-4.
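The adjusted parameter ranges can be sketched as follows (a hedged illustration; the variable names and sampling calls are assumptions, not necessarily the PR's actual test code). With high drawn from [5, 15) and low from [-1, 1), high - low is bounded away from 0, so the entropy never goes near log(0).

```python
import numpy as np

# Hedged sketch of the adjusted test data generation:
# low stays in [-1, 1) while high is moved to [5, 15).
low = (np.random.ranf() - 0.5) * 2.0   # uniform in [-1, 1)
high = np.random.ranf() * 10.0 + 5.0   # uniform in [5, 15)

# high - low > 4, so log(high - low) stays well-conditioned.
entropy = np.log(high - low)
print("low:", low, "high:", high, "entropy:", entropy)
```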