Check 'onnxruntime-gpu' if torch.has_cuda #5087
Conversation
@glenn-jocher

@callbarian thanks for the pointer! I believe the current check_requirements() will only uninstall existing packages of the same name, though I'm not sure. I tested this in Colab, which had neither package installed by default. The -gpu install works, but inference still does not seem to use the GPU, as speeds are not improved over
@callbarian

@glenn-jocher I am not sure about the Colab environment. Is it possible that Colab already has onnxruntime installed by default? Usually, if onnxruntime-gpu cannot find CUDA or cuDNN, it complains about the missing .so files; I don't think it automatically falls back to CPU mode. Here's what I did (both CUDA and cuDNN were installed through conda):

python detect.py --weights last.onnx --source images/

The inference time was around 0.04 s to 0.09 s per image. It might be that the Colab GPU is simply not fast enough. One way to check is to compare memory usage with nvidia-smi under onnxruntime and onnxruntime-gpu respectively. Hope it helps!
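Besides watching nvidia-smi, another way to confirm which build is active is to inspect the execution providers ONNX Runtime reports. A minimal sketch, where the provider lists are illustrative examples of what onnxruntime.get_available_providers() returns:

```python
def uses_cuda(providers):
    """Return True if ONNX Runtime's CUDA execution provider is present.

    `providers` is a list of provider names, e.g. the value returned by
    onnxruntime.get_available_providers().
    """
    return 'CUDAExecutionProvider' in providers

# Illustrative provider lists: a GPU build that found CUDA/cuDNN vs. a CPU-only build
print(uses_cuda(['CUDAExecutionProvider', 'CPUExecutionProvider']))  # True
print(uses_cuda(['CPUExecutionProvider']))                           # False
```

If the CPU-only result appears even with onnxruntime-gpu installed, the missing-.so scenario described above is the likely cause.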
@glenn-jocher

@callbarian thanks! Perhaps it's just a driver issue like you mentioned. In any case I'll go ahead and merge this for now until we find a better solution. /rebase
@callbarian PR is merged. Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐ |
* Check `'onnxruntime-gpu' if torch.has_cuda`
* Fix indent
Possible fix for #4808 'ONNX Inference Speed extremely slow compare to .pt Model'
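The change above can be sketched as a conditional package name. This is a hedged sketch, not the merged code: in detect.py the boolean would come from torch (torch.has_cuda), but it is a plain parameter here so the example stays self-contained.

```python
def onnxruntime_package(cuda_available: bool) -> str:
    """Pick the ONNX Runtime pip package to require based on CUDA availability.

    Stand-in for the PR's `'onnxruntime-gpu' if torch.has_cuda else 'onnxruntime'`
    expression; `cuda_available` replaces the torch query for illustration.
    """
    return 'onnxruntime-gpu' if cuda_available else 'onnxruntime'

print(onnxruntime_package(True))   # onnxruntime-gpu
print(onnxruntime_package(False))  # onnxruntime
```

The resulting string would then be passed to a requirements check so the GPU build is only installed when it can actually be used.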
🛠️ PR Summary
Made with ❤️ by Ultralytics Actions
🌟 Summary
Improved ONNX Runtime GPU support in `detect.py`.

📊 Key Changes
* Install `onnxruntime-gpu` or `onnxruntime` based on CUDA availability.

🎯 Purpose & Impact