normalise updates #699
Conversation
@fmassa would love to hear from you on this!
fmassa
left a comment
Hi,
Thanks for the PR!
I believe there are some potential simplifications in the code.
Let me know what you think.
torchvision/transforms/functional.py (Outdated)

    else:
        # This is faster than using broadcasting, don't change without benchmarking
        for t, m, s in zip(tensor, mean, std):
I believe this can now be replaced with something simpler and just as efficient as before:

    mean = torch.tensor(mean, dtype=torch.float32)
    std = torch.tensor(std, dtype=torch.float32)
    return (tensor - mean[:, None, None]) / std[:, None, None]

This is out-of-place, but it can be made in-place as well if needed.
Just to double-check, can you run some benchmarks and report back whether it indeed gives the same runtime speed?
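For context, here is a minimal sketch of how the suggested broadcasting version could slot into a normalize function with the inplace flag discussed in this PR. The function name normalize_broadcast is hypothetical, and the input is assumed to be a float (C, H, W) tensor; this is not the merged code.

    import torch

    def normalize_broadcast(tensor, mean, std, inplace=False):
        # Sketch only: broadcasting variant of normalize, following the suggestion above.
        # Assumes `tensor` is a float tensor of shape (C, H, W).
        if not inplace:
            tensor = tensor.clone()  # leave the caller's tensor untouched
        mean = torch.tensor(mean, dtype=torch.float32)
        std = torch.tensor(std, dtype=torch.float32)
        # Reshape the per-channel stats to (C, 1, 1) so they broadcast over H and W
        tensor.sub_(mean[:, None, None]).div_(std[:, None, None])
        return tensor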
@fmassa I ran some tests and found that this approach was somewhat faster.
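For reference, one rough way to compare the two variants is with timeit; the tensor shape, iteration count, and function names below are arbitrary choices for illustration, and results will vary by hardware and PyTorch version.

    import timeit
    import torch

    x = torch.rand(3, 224, 224)
    mean = [0.485, 0.456, 0.406]
    std = [0.229, 0.224, 0.225]

    def loop_version():
        # Per-channel in-place loop, as in the current diff
        t = x.clone()
        for c, m, s in zip(t, mean, std):
            c.sub_(m).div_(s)
        return t

    def broadcast_version():
        # Broadcasting variant suggested in the review
        t = x.clone()
        m = torch.tensor(mean, dtype=torch.float32)[:, None, None]
        s = torch.tensor(std, dtype=torch.float32)[:, None, None]
        return t.sub_(m).div_(s)

    print("loop:     ", timeit.timeit(loop_version, number=1000))
    print("broadcast:", timeit.timeit(broadcast_version, number=1000))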
torchvision/transforms/functional.py (Outdated)

    t.sub_(m).div_(s)
    return tensor
    if not inplace:
        tensor_clone = tensor.clone()
Can't you just do

    if not inplace:
        tensor = tensor.clone()
    # everything as before?
    t.sub_(m).div_(s)
    return tensor
    if not inplace:
        tensor = tensor.clone()
Can we just have a single clone here, to avoid duplicated code?
Like:

    if not inplace:
        tensor = tensor.clone()
    mean = ...
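A minimal sketch of the layout being suggested here (input validation from the existing function omitted, signature assumed from the PR discussion; not necessarily the merged code): a single clone up front, after which the in-place and out-of-place paths share the same per-channel loop.

    import torch

    def normalize(tensor, mean, std, inplace=False):
        # Sketch of the single-clone layout: clone once, then always normalize in place
        if not inplace:
            tensor = tensor.clone()
        # Per-channel in-place normalization, as in the loop kept by this PR
        for t, m, s in zip(tensor, mean, std):
            t.sub_(m).div_(s)
        return tensor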
@fmassa how does it look now?
fmassa
left a comment
Thanks!
Labels: enhancement. Related issue: #680.