fix MNIST byte flipping #7081
Conversation
Thanks Philip.
Before merging, could we add a quick test for the endianness conversion (possibly requiring extracting it into a separate function)?
Something along these lines:
t = torch.tensor([0x00112233, 0xAABBCCDD], dtype=torch.int32)
flipped = torch.tensor([0x33221100, 0xDDCCBBAA], dtype=torch.int32)
assert torch.equal(flip_endianness(t), flipped)
Ideally we'd test the QMnist dataset directly, but this may make the tests just as complicated as the code (which kind of defeats the purpose of the test).
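For reference, a byte-swap helper along the lines the reviewer suggests can be sketched with NumPy. The name `flip_endianness` is the reviewer's hypothetical; this is a minimal illustration, not the merged implementation:

```python
import numpy as np

def flip_endianness(arr: np.ndarray) -> np.ndarray:
    # Reverse the byte order of every element, e.g. for a 4-byte dtype
    # 0x00112233 becomes 0x33221100. byteswap() returns a swapped copy.
    return arr.byteswap()

t = np.array([0x00112233, 0xAABBCCDD], dtype=np.uint32)
flipped = flip_endianness(t)
assert int(flipped[0]) == 0x33221100
assert int(flipped[1]) == 0xDDCCBBAA
```

Using an unsigned dtype here avoids the overflow that values like 0xAABBCCDD would cause in a signed 32-bit tensor.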
NicolasHug
left a comment
Thanks @pmeier , some comments but LGTM
Summary:
* fix MNIST byte flipping
* add test
* move to utils
* remove lazy import

Reviewed By: YosuaMichael
Differential Revision: D42500904
fbshipit-source-id: 067064facc22efc68368d06dccd4fa2e2fa6dfc1
Fixes #7079.