Allocate tensors directly on target device #654

Open
GMNGeoffrey wants to merge 1 commit into jwohlwend:main from GMNGeoffrey:alloc-on-device

Conversation

@GMNGeoffrey

I was hitting a weird crash when running Boltz1 with DDP on ROCm that I narrowed down to one of these allocate-on-CPU-then-move patterns. Allocating directly on the target device fixes it, is cleaner, and is a minor performance optimization, so I went through and updated several other places as well (I may have missed some).

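For context, a minimal sketch of the pattern this PR changes (illustrative only, not the actual Boltz code; the tensor shape and names are made up):

```python
import torch

# Pick whatever accelerator is available; falls back to CPU so the
# sketch runs anywhere.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Before: the tensor is allocated on the CPU first, then copied to the
# target device with .to(), incurring an extra allocation and a
# host-to-device transfer.
mask_via_copy = torch.zeros(8, 8).to(device)

# After: the tensor is allocated directly on the target device via the
# `device=` keyword, skipping the intermediate CPU tensor entirely.
mask_on_device = torch.zeros(8, 8, device=device)

# Both end up on the same device with the same contents.
assert mask_via_copy.device == mask_on_device.device
assert torch.equal(mask_via_copy, mask_on_device)
```

Factory functions like `torch.zeros`, `torch.ones`, `torch.arange`, and `torch.full` all accept a `device=` argument, so the rewrite is usually a one-line change at each call site.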
volgin added a commit to Novel-Therapeutics/boltz-community that referenced this pull request Mar 8, 2026
Avoids unnecessary CPU allocation + device transfer for tensors that are
immediately moved to GPU/MPS/ROCm. Fixes a crash on ROCm with DDP.
Inspired by upstream PR jwohlwend#654.

Co-Authored-By: Claude Opus 4.6 <[email protected]>
@volgin

volgin commented Mar 8, 2026

We applied the fixes from this PR in boltz-community, a community-maintained fork with 20+ upstream bug fixes, broader hardware compatibility (MPS, ROCm), expanded test coverage, and CI. One of the changes (diffusion.py) was already fixed independently in an earlier release. Thanks for the contribution!

