Conversation

@jischein (Contributor) commented Aug 4, 2024

[Bugfix] Assign device when loading LoRA modules from file

Fixes #3374. Note: an existing PR addressing this issue went stale, so I'm bumping it with some light updates.

This PR addresses a CUDA device mismatch when loading LoRA modules from files. It fixes the `Attempting to deserialize object on CUDA device X but torch.cuda.device_count() is Y` error by explicitly specifying the device during tensor loading (see the sketch after the change list below).

Changes:

  • Add `map_location="device"` when loading LoRA tensors from `.bin` files
  • Add `map_location="device"` when loading new embeddings from `.bin` files

@jischein changed the title from "fix: Specify device when loading LoRA and embedding tensors" to "[Bugfix]: Specify device when loading LoRA and embedding tensors" Aug 4, 2024
github-actions bot commented Aug 4, 2024

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs will not trigger a full CI run by default. Instead, they only run the fastcheck CI, which consists of a small, essential subset of CI tests to quickly catch errors. You can run other CI tests on top of the default ones by unblocking the steps in your fast-check build in the Buildkite UI.

Once the PR is approved and ready to go, please make sure to run full CI, as it is required for merging (or just use auto-merge).

To run full CI, you can do one of these:

  • Comment `/ready` on the PR
  • Add the `ready` label to the PR
  • Enable auto-merge.

🚀

@jischein changed the title from "[Bugfix]: Specify device when loading LoRA and embedding tensors" to "[Bugfix] Specify device when loading LoRA and embedding tensors" Aug 4, 2024
github-actions bot added the `ready` label (ONLY add when PR is ready to merge/full CI is needed) Aug 4, 2024
@jischein (Contributor, Author) commented Aug 5, 2024

Friendly bump! @youkaichao (I believe you reviewed the previous PR to fix this)

f" but received {unexpected_modules}."
f" Please verify that the loaded LoRA module is correct")
tensors = torch.load(lora_bin_file_path)
tensors = torch.load(lora_bin_file_path, map_location="device")
@youkaichao (Member) commented on the diff:
Did you test it? I don't think `"device"` works; this is a string.

@youkaichao removed the `ready` label Aug 5, 2024
@youkaichao (Member) left a comment

thanks for the contribution!

@youkaichao merged commit 89b8db6 into vllm-project:main Aug 5, 2024
Alvant pushed a commit to compressa-ai/vllm that referenced this pull request Oct 26, 2024
LeiWang1999 pushed a commit to LeiWang1999/vllm-bitblas that referenced this pull request Mar 26, 2025
Successfully merging this pull request may close these issues: lora load failed (#3374)