
Fix/adam float64#10407

Merged
dzhwinter merged 3 commits into PaddlePaddle:develop from dzhwinter:fix/adam_float64
May 6, 2018
Conversation

@dzhwinter
Contributor

fix #10405

@abhinavarora
Contributor

@dzhwinter Should this also be done in sgd_op and ftrl_op?

@dzhwinter
Contributor Author

That's true. Done.

Contributor

@sidgoyal78 left a comment

LGTM, thanks for the PR Zhihong.

Contributor

@sidgoyal78 left a comment

@dzhwinter: It seems that after changing the datatype to float64, we get an error:

paddle.fluid.core.EnforceNotMet: Tensor holds the wrong type, it holds f at [/paddle/paddle/fluid/framework/tensor_impl.h:84]

Did I miss something?
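[Editor's note] The `EnforceNotMet` above is the kind of strict dtype check a framework performs when an op's accumulator tensors were created as float32 but its inputs are now float64. A minimal, framework-agnostic sketch of such a check (NumPy; `enforce_same_dtype` is a hypothetical name, not a Paddle API):

```python
import numpy as np

def enforce_same_dtype(tensor, expected):
    """Mimic a framework-style dtype enforce: raise if the tensor
    holds a different element type than the op expects."""
    if tensor.dtype != np.dtype(expected):
        raise TypeError(
            "Tensor holds the wrong type: it holds %s, expected %s"
            % (tensor.dtype, np.dtype(expected)))

# Moment accumulators created as float32 (before the dtype change),
# gradients now float64: the check fires.
moment = np.zeros(4, dtype=np.float32)
grad = np.ones(4, dtype=np.float64)

try:
    enforce_same_dtype(moment, grad.dtype)
except TypeError as e:
    print("caught:", e)
```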

@dzhwinter dzhwinter merged commit a28dffb into PaddlePaddle:develop May 6, 2018
@dzhwinter
Copy link
Contributor Author

@sidgoyal78 you also need to change the optimizer datatype.

@sidgoyal78
Copy link
Contributor

sidgoyal78 commented May 7, 2018

@dzhwinter Do you have an example? I don't quite understand how we could change the optimizer datatype (since the optimizer API doesn't expose the dtype).
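[Editor's note] For context on why the optimizer's datatype matters here: Adam keeps per-parameter moment accumulators that the optimizer itself allocates, so even when the model's parameters are float64, accumulators hard-coded to float32 produce the mismatch reported above. A framework-agnostic NumPy sketch (not the fluid API; `adam_step` is a hypothetical name) where the accumulators simply inherit the parameter's dtype:

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update; all buffers share the parameter's dtype."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad * grad
    m_hat = m / (1 - b1 ** t)          # bias correction
    v_hat = v / (1 - b2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

# The point of the PR: create the moment accumulators with the
# parameter's dtype instead of hard-coding float32.
param = np.ones(3, dtype=np.float64)
m = np.zeros_like(param)   # inherits float64
v = np.zeros_like(param)
param, m, v = adam_step(param, np.full(3, 0.5), m, v, t=1)
assert param.dtype == np.float64 and m.dtype == np.float64
```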



Development

Successfully merging this pull request may close these issues.

Non-deterministic outputs for book chapters (recognize_digits, etc) for a given random seed on GPU

3 participants