Closed
Labels: enhancement (New feature or request)
Description
I've successfully built and run OpenFold-3 inference on Blackwell GPUs (compute capability 12.0) using an alternative Dockerfile approach.
My Approach
Instead of building from nvidia/cuda:12.1.1-cudnn8-devel-ubuntu22.04, I used:
FROM nvcr.io/nvidia/pytorch:25.02-py3
This provides native Blackwell support out of the box.
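For reference, a minimal sketch of what such an alternative Dockerfile could look like. Only the `FROM` line is from my actual setup; the install steps below are illustrative assumptions, not OpenFold-3's official build recipe:

```dockerfile
# NGC PyTorch 25.02 ships a PyTorch build with native Blackwell support,
# so no CUDA arch flags need to be overridden for sm_120.
FROM nvcr.io/nvidia/pytorch:25.02-py3

# Illustrative: install the project from a local checkout. The exact
# dependency steps here are an assumption and would need adapting.
WORKDIR /opt/openfold3
COPY . .
RUN pip install --no-cache-dir -e .
```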
Testing
- ✅ Successfully ran inference on Blackwell GPU
- ❌ Have not tested training
- ❓ Uncertain about backward compatibility with older GPU architectures
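On the backward-compatibility point: whether older GPUs still work mostly depends on which compute capabilities the image's kernels were compiled for (e.g. via a `TORCH_CUDA_ARCH_LIST`-style setting). A small self-contained sketch of that check — the helper name is mine, not something from OpenFold-3 or PyTorch:

```python
def arch_list_covers(arch_list, capability):
    """Return True if a semicolon-separated arch-list string
    (e.g. "8.0;9.0;12.0", TORCH_CUDA_ARCH_LIST style) includes
    the given (major, minor) compute capability."""
    target = f"{capability[0]}.{capability[1]}"
    # Strip any "+PTX" suffixes before comparing entries.
    entries = [e.replace("+PTX", "").strip() for e in arch_list.split(";")]
    return target in entries

# An image built only for Ampere/Hopper would miss Blackwell (12.0):
print(arch_list_covers("8.0;9.0", (12, 0)))        # False
print(arch_list_covers("8.0;9.0;12.0", (12, 0)))   # True
```

(In practice PTX forward-compatibility can also let older kernels JIT-compile for newer GPUs, so this string check is only a first approximation.)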
Question for Maintainers
Would you be interested in:
- Option A: An alternative Dockerfile.blackwell for users with Blackwell GPUs?
- Option B: Modifying the main Dockerfile to support newer architectures?
- Option C: Documentation on building for Blackwell GPUs separately?
I'm happy to contribute whichever approach you prefer. I can share my complete Dockerfile if helpful.
Environment
- GPU: NVIDIA RTX PRO 6000 Blackwell (compute capability 12.0)
- Base Image: nvcr.io/nvidia/pytorch:25.02-py3
- Tested: Inference only