Fix lora tag check and add device maps and precision for diffusion pipelines #1829
Conversation
```diff
 } else if (model.tags.includes("controlnet")) {
   codeSnippets = diffusers_controlnet(model);
-} else if (model.tags.includes("lora")) {
+} else if (model.tags.some(t => t.toLowerCase() === "lora")) {
```
I am unsure if this is the right way to tackle this.
My rationale was to lowercase the tags and check against "lora".
Suggested change:
```diff
-} else if (model.tags.some(t => t.toLowerCase() === "lora")) {
+} else if (model.tags.includes("lora")) {
```
Hey @ariG23498, tags are considered case-sensitive in the HF ecosystem, and we usually prefer not to lowercase them when checking for a value. So the best option here is to keep it as done before; if some repos have a LoRA or LORA tag instead, they should be updated directly (i.e. open a pull request on their model card).
Interesting! @linoytsaban, this would mean we need your automated scripts to send PRs to such repos.
Thanks for this @Wauplin
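
For illustration, a minimal sketch (not from this PR) of how such repos could be located before sending those PRs, assuming `huggingface_hub`'s `HfApi.list_models` and that tag filtering on the Hub is exact-match:

```python
# Hypothetical helper, not part of this PR: list repos tagged with a
# miscased "lora" variant so their model cards can be fixed upstream.
from huggingface_hub import HfApi

api = HfApi()
for variant in ["LoRA", "LORA", "Lora"]:
    # Tag filtering on the Hub is case-sensitive, per the discussion above.
    for m in api.list_models(tags=[variant], limit=10):
        print(m.id, m.tags)
```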
```diff
 from diffusers.utils import load_image, export_to_video
-pipe = DiffusionPipeline.from_pretrained("${model.id}", torch_dtype=torch.float16)
+pipe = DiffusionPipeline.from_pretrained("${model.id}", dtype=torch.float16, device_map="cuda")
```
ah, float16 here. Is this intended?
I wanted to keep it as is, and not play around with the already existing bits.
I see your point @ariG23498. I do, however, think this might be worth changing, as most current popular image-to-video models are usually loaded in bf16 in diffusers (e.g. Wan).
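
For illustration, a minimal sketch of the bf16 image-to-video pattern being suggested. The model id is just an example, the input image path is a placeholder, and it assumes a recent diffusers release that supports the `dtype` spelling and a pipeline-level `device_map`:

```python
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import load_image, export_to_video

# Illustrative checkpoint; recent image-to-video models (e.g. Wan) ship bf16 weights.
pipe = DiffusionPipeline.from_pretrained(
    "Wan-AI/Wan2.1-I2V-14B-480P-Diffusers",
    dtype=torch.bfloat16,
    device_map="cuda",
)

image = load_image("path/to/input.jpg")  # replace with a real input image
frames = pipe(image=image, prompt="the subject waves at the camera").frames[0]
export_to_video(frames, "output.mp4")
```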
```diff
 from diffusers.utils import load_image
-pipe = AutoPipelineForInpainting.from_pretrained("${model.id}", torch_dtype=torch.float16, variant="fp16").to("cuda")
+pipe = AutoPipelineForInpainting.from_pretrained("${model.id}", torch_dtype=torch.float16, variant="fp16", device_map="cuda")
```
Do we only need the variant in this particular case? Also: flagging the use of float16 as well.
Here again I did not want to change things. I am unsure how that might play out, but I'm open to suggestions.
Here I think it's OK to stay with float16, as it's only used for SDXL pipelines.
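
For reference, a sketch of the SDXL inpainting load this refers to. The model id is illustrative and the image paths are placeholders; it assumes the repo publishes fp16 variant weights:

```python
import torch
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image

# SDXL repos commonly publish fp16 variant weights, hence float16 + variant="fp16".
pipe = AutoPipelineForInpainting.from_pretrained(
    "diffusers/stable-diffusion-xl-1.0-inpainting-0.1",  # illustrative model id
    torch_dtype=torch.float16,
    variant="fp16",
    device_map="cuda",
)

image = load_image("path/to/image.png")  # replace with your input image
mask = load_image("path/to/mask.png")    # replace with your mask
result = pipe(prompt="a red sofa", image=image, mask_image=mask).images[0]
```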
Vaibhavs10
left a comment
Let's move to `auto` so that people can use this on `mps` and distributed setups as well.
```diff
 from diffusers.utils import load_image, export_to_video
-pipe = DiffusionPipeline.from_pretrained("${model.id}", torch_dtype=torch.float16)
+pipe = DiffusionPipeline.from_pretrained("${model.id}", dtype=torch.float16, device_map="cuda")
```
Suggested change:
```diff
-pipe = DiffusionPipeline.from_pretrained("${model.id}", dtype=torch.float16, device_map="cuda")
+pipe = DiffusionPipeline.from_pretrained("${model.id}", dtype=torch.bfloat16, device_map="cuda")
```
```diff
 from diffusers import DiffusionPipeline
-pipe = DiffusionPipeline.from_pretrained("${model.id}")
+pipe = DiffusionPipeline.from_pretrained("${model.id}", dtype=torch.bfloat16, device_map="cuda")
```
Suggested change:
```diff
-pipe = DiffusionPipeline.from_pretrained("${model.id}", dtype=torch.bfloat16, device_map="cuda")
+pipe = DiffusionPipeline.from_pretrained("${model.id}", dtype=torch.bfloat16, device_map="auto")
```
```diff
 from diffusers.utils import load_image
-pipe = DiffusionPipeline.from_pretrained("${model.id}")
+pipe = DiffusionPipeline.from_pretrained("${model.id}", dtype=torch.bfloat16, device_map="cuda")
```
Suggested change:
```diff
-pipe = DiffusionPipeline.from_pretrained("${model.id}", dtype=torch.bfloat16, device_map="cuda")
+pipe = DiffusionPipeline.from_pretrained("${model.id}", dtype=torch.bfloat16, device_map="auto")
```
```diff
 from diffusers.utils import load_image, export_to_video
-pipe = DiffusionPipeline.from_pretrained("${model.id}", torch_dtype=torch.float16)
+pipe = DiffusionPipeline.from_pretrained("${model.id}", dtype=torch.float16, device_map="cuda")
```
Suggested change:
```diff
-pipe = DiffusionPipeline.from_pretrained("${model.id}", dtype=torch.float16, device_map="cuda")
+pipe = DiffusionPipeline.from_pretrained("${model.id}", dtype=torch.float16, device_map="auto")
```
```diff
 from diffusers import DiffusionPipeline
-pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}")
+pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}", dtype=torch.bfloat16, device_map="cuda")
```
Suggested change:
```diff
-pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}", dtype=torch.bfloat16, device_map="cuda")
+pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}", dtype=torch.bfloat16, device_map="auto")
```
```diff
 from diffusers.utils import export_to_video
-pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}")
+pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}", dtype=torch.bfloat16, device_map="cuda")
```
Suggested change:
```diff
-pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}", dtype=torch.bfloat16, device_map="cuda")
+pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}", dtype=torch.bfloat16, device_map="auto")
```
```diff
 from diffusers.utils import load_image, export_to_video
-pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}")
+pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}", dtype=torch.bfloat16, device_map="cuda")
```
Suggested change:
```diff
-pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}", dtype=torch.bfloat16, device_map="cuda")
+pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}", dtype=torch.bfloat16, device_map="auto")
```
```diff
 from diffusers import DiffusionPipeline
-pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}")
+pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}", dtype=torch.bfloat16, device_map="cuda")
```
Suggested change:
```diff
-pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}", dtype=torch.bfloat16, device_map="cuda")
+pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}", dtype=torch.bfloat16, device_map="auto")
```
```diff
 mask = load_image("https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/cup_mask.png")
-pipe = FluxFillPipeline.from_pretrained("${model.id}", torch_dtype=torch.bfloat16).to("cuda")
+pipe = FluxFillPipeline.from_pretrained("${model.id}", dtype=torch.bfloat16, device_map="cuda")
```
Suggested change:
```diff
-pipe = FluxFillPipeline.from_pretrained("${model.id}", dtype=torch.bfloat16, device_map="cuda")
+pipe = FluxFillPipeline.from_pretrained("${model.id}", dtype=torch.bfloat16, device_map="auto")
```
```diff
 from diffusers.utils import load_image
-pipe = AutoPipelineForInpainting.from_pretrained("${model.id}", torch_dtype=torch.float16, variant="fp16").to("cuda")
+pipe = AutoPipelineForInpainting.from_pretrained("${model.id}", torch_dtype=torch.float16, variant="fp16", device_map="cuda")
```
Suggested change:
```diff
-pipe = AutoPipelineForInpainting.from_pretrained("${model.id}", torch_dtype=torch.float16, variant="fp16", device_map="cuda")
+pipe = AutoPipelineForInpainting.from_pretrained("${model.id}", torch_dtype=torch.float16, variant="fp16", device_map="auto")
```
Not yet for a `DiffusionPipeline`. For a standalone model, `device_map="auto"` is supported. Distributed setups are also impacted by the same reason (a `DiffusionPipeline` currently only accepts specific placement strategies for `device_map`).
@sayakpaul - thanks for the context, how do you distinguish between …
Either that (i.e., …)
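For context, a rough sketch (not from the PR) of the pipeline-level vs model-level distinction being discussed. Exact `device_map` support varies across diffusers versions, so treat this as an assumption-laden illustration rather than a reference:

```python
import torch
from diffusers import DiffusionPipeline, UNet2DConditionModel

# Pipeline level: a concrete device string places the whole pipeline on one device.
pipe = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",  # illustrative model id
    dtype=torch.float16,
    device_map="cuda",
)

# Model level: "auto" may be used to let accelerate decide placement for a single model.
unet = UNet2DConditionModel.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    subfolder="unet",
    torch_dtype=torch.float16,  # older spelling, still accepted
    device_map="auto",
)
```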
@sayakpaul @linoytsaban can one of you approve the PR? That way we know the code snippets don't have any issues. The only thing left would be to check the failures in the CI 😓.
linoytsaban
left a comment
LGTM! thanks @ariG23498
Making lora tag check case insensitive.
Co-authored-by: vb <[email protected]>
Co-authored-by: Lucain <[email protected]>
force-pushed from 1265baf to 08481b3
Vaibhavs10
left a comment
Good for me to merge - let's put a comment like this for MPS people in the snippets
```diff
-`from diffusers import DiffusionPipeline
+`import torch
+from diffusers import DiffusionPipeline
```
Suggested change:
```diff
+# switch to `mps` for apple devices
```
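
Put together, the generated snippet after this change would look roughly like the following, with a real model id standing in for the template's `${model.id}` placeholder:

```python
import torch
from diffusers import DiffusionPipeline

# switch to `mps` for apple devices
pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",  # stand-in for ${model.id}
    dtype=torch.bfloat16,
    device_map="cuda",
)
image = pipe("an astronaut riding a horse on the moon").images[0]
```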
Also updated `torch_dtype` to `dtype`, and made the lora tag check case insensitive.

CC: @linoytsaban @sayakpaul
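
For reference, a sketch of the `torch_dtype` → `dtype` rename, assuming a diffusers release where the new spelling is available alongside the deprecated one; the model id is a placeholder:

```python
import torch
from diffusers import DiffusionPipeline

model_id = "stabilityai/stable-diffusion-xl-base-1.0"  # placeholder

# Old spelling (deprecated in recent diffusers releases):
pipe = DiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# New spelling used by the updated snippets:
pipe = DiffusionPipeline.from_pretrained(model_id, dtype=torch.bfloat16)
```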