Blackwell GPU Support: Alternative Dockerfile using NVIDIA PyTorch Container #22

@eidamo

Description

I've successfully built and run OpenFold-3 inference on Blackwell GPUs (compute capability 12.0) using an alternative Dockerfile approach.

My Approach

Instead of building from nvidia/cuda:12.1.1-cudnn8-devel-ubuntu22.04, I used:

FROM nvcr.io/nvidia/pytorch:25.02-py3

This image provides native Blackwell (compute capability 12.0) support out of the box.
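For reference, a minimal version of my Dockerfile looks roughly like this. The install steps below are placeholders, not the project's actual build instructions — adapt them to OpenFold-3's real setup:

```dockerfile
# NGC PyTorch container; ships a PyTorch build with Blackwell support included
FROM nvcr.io/nvidia/pytorch:25.02-py3

WORKDIR /opt/openfold3

# Placeholder install steps -- substitute the project's real build instructions
COPY . .
RUN pip install --no-cache-dir -e .
```

The key point is that no custom CUDA compilation for compute capability 12.0 is needed, since the base image's PyTorch build already targets it.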

Testing

  • ✅ Successfully ran inference on Blackwell GPU
  • ❌ Have not tested training
  • ❓ Uncertain about backward compatibility with older GPU architectures
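The backward-compatibility question above mostly comes down to which compute capabilities the compiled CUDA kernels target. As a sketch (a hypothetical pure-Python helper, not part of OpenFold-3), one can check whether a GPU's capability is covered by a `TORCH_CUDA_ARCH_LIST`-style string:

```python
def parse_arch_list(arch_list: str):
    """Parse a TORCH_CUDA_ARCH_LIST-style string, e.g. "8.0;9.0;12.0+PTX"."""
    entries = []
    for item in arch_list.replace(" ", ";").split(";"):
        if not item:
            continue
        ptx = item.endswith("+PTX")
        cap = item[:-4] if ptx else item
        major, minor = cap.split(".")
        entries.append(((int(major), int(minor)), ptx))
    return entries

def covers(arch_list: str, capability: tuple) -> bool:
    """True if a GPU with `capability` (e.g. (12, 0)) can run code built
    for `arch_list`: an exact SASS match runs natively, and a +PTX entry
    can be JIT-compiled on any equal-or-newer architecture."""
    for cap, ptx in parse_arch_list(arch_list):
        if capability == cap:
            return True
        if ptx and capability >= cap:
            return True
    return False

print(covers("8.0;9.0", (12, 0)))      # False: Blackwell not covered
print(covers("8.0;9.0+PTX", (12, 0)))  # True: covered via PTX JIT
```

This is why builds for older architectures can keep working alongside Blackwell as long as the arch list (or a `+PTX` entry) includes them.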

Question for Maintainers

Would you be interested in:

  1. Option A: An alternative Dockerfile.blackwell for users with Blackwell GPUs?
  2. Option B: Modifying the main Dockerfile to support newer architectures?
  3. Option C: Documentation on building for Blackwell GPUs separately?
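If Option B is preferred, one low-risk variant (a sketch, using only the image tags already named above) is to parameterize the base image instead of hard-coding it, so a single Dockerfile serves both cases:

```dockerfile
# Default to the current base image; Blackwell users override at build time:
#   docker build --build-arg BASE_IMAGE=nvcr.io/nvidia/pytorch:25.02-py3 .
ARG BASE_IMAGE=nvidia/cuda:12.1.1-cudnn8-devel-ubuntu22.04
FROM ${BASE_IMAGE}
```

This keeps the default build unchanged for existing users while giving Blackwell users a one-flag escape hatch.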

I'm happy to contribute whichever approach you prefer. I can share my complete Dockerfile if helpful.

Environment

  • GPU: NVIDIA RTX PRO 6000 Blackwell (compute capability 12.0)
  • Base Image: nvcr.io/nvidia/pytorch:25.02-py3
  • Tested: Inference only

Metadata

    Labels

    enhancement (New feature or request)
