GPU jpeg decoder: add batch support and hardware decoding #8496
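In short, this PR lets torchvision.io.encode_jpeg and torchvision.io.decode_jpeg accept lists of tensors and run on the GPU through nvJPEG. A minimal usage sketch, assuming a CUDA build of torchvision with nvJPEG available; the batched list input is the API this PR adds, and the image sizes, count, and quality below are arbitrary illustration values:

import torch
import torchvision

# Uint8 CHW images already resident on the GPU (synthetic here, for illustration).
images = [torch.randint(0, 256, (3, 256, 256), dtype=torch.uint8, device="cuda") for _ in range(8)]

# Batched GPU encoding: a single call over the whole list runs through nvJPEG.
encoded = torchvision.io.encode_jpeg(images, quality=90)

# Each element is a 1-D uint8 tensor of JPEG bytes; move to CPU before writing to disk.
for i, jpeg_bytes in enumerate(encoded):
    torchvision.io.write_file(f"img_{i}.jpg", jpeg_bytes.cpu())

# Batched GPU decoding: decode the whole list of (CPU) byte tensors straight onto the CUDA device.
decoded = torchvision.io.decode_jpeg([e.cpu() for e in encoded], device="cuda")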
Merged
Changes from all commits (41 commits)
133d7c1  Adding GPU acceleration to encode_jpeg (dominikkallusky)
4cc30cb  fix test cases (dominikkallusky)
2db02f0  fix lints (dominikkallusky)
6acef83  fix lints2 (dominikkallusky)
ae0450d  latest round of updates (dominikkallusky)
a799c53  fix lints (dominikkallusky)
c5810ff  Ignore mypy (NicolasHug)
ff40253  Add comment (NicolasHug)
0972863  minor test refactor (NicolasHug)
4ce658d  Merge branch 'main' of github.com:pytorch/vision into add_gpu_encode (NicolasHug)
65372a3  Merge branch 'pytorch:main' into add_gpu_encode (dominikkallusky)
62e072a  Caching nvjpeg vars across calls (dominikkallusky)
b3d06cb  Update if nvjpeg not found (dominikkallusky)
fcf8a78  Adding gpu decode (dominikkallusky)
f190d99  Update if nvjpeg not found (dominikkallusky)
c471db8  merge (dominikkallusky)
b5eaa89  Merge branch 'main' of github.com:pytorch/vision into add_gpu_encode (NicolasHug)
5051050  Revert "Ignore mypy" (NicolasHug)
136f790  Add comment (NicolasHug)
0a88d27  minor changes to address ahmad's comments (dominikkallusky)
df60183  Merge branch 'add_gpu_encode' of https://github.com/deekay42/vision i… (dominikkallusky)
f3c8a72  add dtor log messages (dominikkallusky)
117d1f1  Skip CUDA cleanup altogether (dominikkallusky)
21eca4c  Merge branch 'main' into add_gpu_encode (NicolasHug)
64f2cf9  Merge branch 'add_gpu_encode' into add_gpu_decode (dominikkallusky)
156e250  disable cleanup (dominikkallusky)
3efb658  Merge branch 'add_gpu_decode' (dominikkallusky)
5f77eea  disable cleanup (dominikkallusky)
ac8edd2  merge (dominikkallusky)
cebe75f  Merge branch 'add_gpu_encode' into add_gpu_decode (dominikkallusky)
2e60784  Merge branch 'deekay42-add_gpu_decode' (dominikkallusky)
01a5621  merge (dominikkallusky)
ccdafd4  ahmad's comments (dominikkallusky)
c44599d  Merge branch 'main' of github.com:pytorch/vision into add_gpu_decode (NicolasHug)
25ca905  Fix syntax (NicolasHug)
43b317b  self address a few comments / nits (NicolasHug)
223f8a0  lint (NicolasHug)
863cf76  ahmads comments 2 (dominikkallusky)
fc28c60  lint (NicolasHug)
dcd1c07  lint (NicolasHug)
efa746d  Merge branch 'main' into add_gpu_decode (NicolasHug)
@@ -0,0 +1,99 @@

import os
import platform
import statistics

import torch
import torch.utils.benchmark as benchmark
import torchvision


def print_machine_specs():
    print("Processor:", platform.processor())
    print("Platform:", platform.platform())
    print("Logical CPUs:", os.cpu_count())
    print(f"\nCUDA device: {torch.cuda.get_device_name()}")
    print(f"Total Memory: {torch.cuda.get_device_properties(0).total_memory / 1e9:.2f} GB")


def get_data():
    transform = torchvision.transforms.Compose(
        [
            torchvision.transforms.PILToTensor(),
        ]
    )
    path = os.path.join(os.getcwd(), "data")
    testset = torchvision.datasets.Places365(
        root="./data", download=not os.path.exists(path), transform=transform, split="val"
    )
    testloader = torch.utils.data.DataLoader(
        testset, batch_size=1000, shuffle=False, num_workers=1, collate_fn=lambda batch: [r[0] for r in batch]
    )
    return next(iter(testloader))


def run_encoding_benchmark(decoded_images):
    results = []
    for device in ["cpu", "cuda"]:
        decoded_images_device = [t.to(device=device) for t in decoded_images]
        for size in [1, 100, 1000]:
            for num_threads in [1, 12, 24]:
                for stmt, strat in zip(
                    [
                        "[torchvision.io.encode_jpeg(img) for img in decoded_images_device_trunc]",
                        "torchvision.io.encode_jpeg(decoded_images_device_trunc)",
                    ],
                    ["unfused", "fused"],
                ):
                    decoded_images_device_trunc = decoded_images_device[:size]
                    t = benchmark.Timer(
                        stmt=stmt,
                        setup="import torchvision",
                        globals={"decoded_images_device_trunc": decoded_images_device_trunc},
                        label="Image Encoding",
                        sub_label=f"{device.upper()} ({strat}): {stmt}",
                        description=f"{size} images",
                        num_threads=num_threads,
                    )
                    results.append(t.blocked_autorange())
    compare = benchmark.Compare(results)
    compare.print()


def run_decoding_benchmark(encoded_images):
    results = []
    for device in ["cpu", "cuda"]:
        for size in [1, 100, 1000]:
            for num_threads in [1, 12, 24]:
                for stmt, strat in zip(
                    [
                        f"[torchvision.io.decode_jpeg(img, device='{device}') for img in encoded_images_trunc]",
                        f"torchvision.io.decode_jpeg(encoded_images_trunc, device='{device}')",
                    ],
                    ["unfused", "fused"],
                ):
                    encoded_images_trunc = encoded_images[:size]
                    t = benchmark.Timer(
                        stmt=stmt,
                        setup="import torchvision",
                        globals={"encoded_images_trunc": encoded_images_trunc},
                        label="Image Decoding",
                        sub_label=f"{device.upper()} ({strat}): {stmt}",
                        description=f"{size} images",
                        num_threads=num_threads,
                    )
                    results.append(t.blocked_autorange())
    compare = benchmark.Compare(results)
    compare.print()


if __name__ == "__main__":
    print_machine_specs()
    decoded_images = get_data()
    mean_h, mean_w = statistics.mean(t.shape[-2] for t in decoded_images), statistics.mean(
        t.shape[-1] for t in decoded_images
    )
    print(f"\nMean image size: {int(mean_h)}x{int(mean_w)}")
    run_encoding_benchmark(decoded_images)
    encoded_images_cuda = torchvision.io.encode_jpeg([img.cuda() for img in decoded_images])
    encoded_images_cpu = [img.cpu() for img in encoded_images_cuda]
    run_decoding_benchmark(encoded_images_cpu)
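The script above needs the Places365 validation split on disk before it can run. For a quick local sanity check, the same Timer/Compare pattern can be exercised on synthetic images; the sketch below is a reduced example under that assumption and is not part of the PR (the batched list input to encode_jpeg is the API this PR introduces):

import torch
import torch.utils.benchmark as benchmark
import torchvision

# Synthetic uint8 CHW images as a stand-in for the Places365 validation batch.
images = [torch.randint(0, 256, (3, 256, 256), dtype=torch.uint8, device="cuda") for _ in range(100)]

results = []
for stmt, strat in [
    ("[torchvision.io.encode_jpeg(img) for img in images]", "unfused"),
    ("torchvision.io.encode_jpeg(images)", "fused"),  # batched list input added by this PR
]:
    timer = benchmark.Timer(
        stmt=stmt,
        setup="import torchvision",
        globals={"images": images},
        label="Image Encoding (synthetic)",
        sub_label=f"CUDA ({strat})",
        description="100 images",
    )
    results.append(timer.blocked_autorange())

benchmark.Compare(results).print()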
Review comment: I could be wrong, but "batched" seems like a better term than "fused", since it appears to be batching images, not necessarily fusing kernels.
Reply: If the images are batched, it uses a fused kernel.
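For reference, the two strategies the benchmark labels differ only in call shape. A small sketch, assuming the batched list input added by this PR (image sizes and counts are arbitrary illustration values):

import torch
import torchvision

# A handful of JPEG byte tensors, produced here from synthetic images for illustration.
images = [torch.randint(0, 256, (3, 128, 128), dtype=torch.uint8) for _ in range(4)]
encoded_images = [torchvision.io.encode_jpeg(img) for img in images]

# "unfused": one decode_jpeg call per image; each call decodes a single JPEG on the GPU.
per_image = [torchvision.io.decode_jpeg(e, device="cuda") for e in encoded_images]

# "fused" / "batched": a single call over the whole list; with the batch support added in
# this PR, nvJPEG can process the images together in one pass.
batched = torchvision.io.decode_jpeg(encoded_images, device="cuda")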