
Conversation

@ariG23498 (Contributor) commented Nov 3, 2025:

Making the lora tag check case-insensitive.

CC: @linoytsaban @sayakpaul

@ariG23498 changed the title from "Fix lora tag check" to "Fix lora tag check and add device maps and precision for diffusion pipelines" on Nov 3, 2025
} else if (model.tags.includes("controlnet")) {
codeSnippets = diffusers_controlnet(model);
} else if (model.tags.includes("lora")) {
} else if (model.tags.some(t => t.toLowerCase() === "lora")) {
@ariG23498 (Contributor, Author):

I am unsure if this is the right way to tackle this.

My rationale was to lowercase the tags and check against "lora".

@Wauplin (Contributor):

Suggested change:
- } else if (model.tags.some(t => t.toLowerCase() === "lora")) {
+ } else if (model.tags.includes("lora")) {

Hey @ariG23498, tags are considered case-sensitive in the HF ecosystem and we usually prefer not to lowercase them when checking for a value. So the best option here would be to keep it as it was before, and if some repos have a LoRA or LORA tag instead, they should be updated directly (i.e. by opening a pull request on their model card).

@ariG23498 (Contributor, Author):

Interesting! @linoytsaban, this would mean we need your automated scripts to open PRs on such repos.

Thanks for this @Wauplin
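
For reference, a rough sketch of what such an automated fix could look like, assuming huggingface_hub's ModelCard and metadata_update APIs (normalize_lora_tag and the choice of repos are hypothetical):

    from huggingface_hub import ModelCard, metadata_update

    def normalize_lora_tag(repo_id: str) -> None:
        # Hypothetical helper: rewrite LoRA/LORA-style tags to the canonical "lora".
        card = ModelCard.load(repo_id)
        tags = card.data.tags or []
        fixed = ["lora" if t.lower() == "lora" else t for t in tags]
        if fixed == tags:
            return  # nothing to normalize
        # overwrite=True replaces the existing tag list; create_pr=True opens a
        # pull request on the repo instead of pushing to its main branch.
        metadata_update(repo_id, {"tags": fixed}, overwrite=True, create_pr=True)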

from diffusers.utils import load_image, export_to_video
- pipe = DiffusionPipeline.from_pretrained("${model.id}", torch_dtype=torch.float16)
+ pipe = DiffusionPipeline.from_pretrained("${model.id}", dtype=torch.float16, device_map="cuda")

Member:

ah, float16 here. Is this intended?

@ariG23498 (Contributor, Author):

I wanted to keep it as is, and not play around with the already existing bits.

@linoytsaban (Contributor) commented Nov 3, 2025:

I see your point @ariG23498. I do, however, think this might be worth changing, as most currently popular image-to-video models are usually loaded in bf16 in diffusers (e.g. Wan).
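
For example, a rough sketch of how a Wan-style checkpoint is typically loaded (the repo id below is illustrative):

    import torch
    from diffusers import DiffusionPipeline

    # Wan-family video checkpoints are usually loaded in bfloat16.
    pipe = DiffusionPipeline.from_pretrained(
        "Wan-AI/Wan2.1-T2V-1.3B-Diffusers",  # illustrative checkpoint
        torch_dtype=torch.bfloat16,
    )
    pipe.to("cuda")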

from diffusers.utils import load_image
- pipe = AutoPipelineForInpainting.from_pretrained("${model.id}", torch_dtype=torch.float16, variant="fp16").to("cuda")
+ pipe = AutoPipelineForInpainting.from_pretrained("${model.id}", torch_dtype=torch.float16, variant="fp16", device_map="cuda")

Member:

Do we only need the variant in this particular case? Also flagging the use of float16 here as well.

@ariG23498 (Contributor, Author):

Here again I did not want to change things; I am unsure how that might play out, but I'm open to suggestions.

Contributor:

Here I think it's OK to stay with float16, as it's only used for SDXL pipelines.

@Vaibhavs10 (Member) left a comment:

Let's move to `auto` so that people can use this on MPS and distributed setups as well.

from diffusers.utils import load_image, export_to_video
- pipe = DiffusionPipeline.from_pretrained("${model.id}", torch_dtype=torch.float16)
+ pipe = DiffusionPipeline.from_pretrained("${model.id}", dtype=torch.float16, device_map="cuda")

Member:

Suggested change:
- pipe = DiffusionPipeline.from_pretrained("${model.id}", dtype=torch.float16, device_map="cuda")
+ pipe = DiffusionPipeline.from_pretrained("${model.id}", dtype=torch.bfloat16, device_map="cuda")

from diffusers import DiffusionPipeline
- pipe = DiffusionPipeline.from_pretrained("${model.id}")
+ pipe = DiffusionPipeline.from_pretrained("${model.id}", dtype=torch.bfloat16, device_map="cuda")

Member:

Suggested change:
- pipe = DiffusionPipeline.from_pretrained("${model.id}", dtype=torch.bfloat16, device_map="cuda")
+ pipe = DiffusionPipeline.from_pretrained("${model.id}", dtype=torch.bfloat16, device_map="auto")

from diffusers.utils import load_image
- pipe = DiffusionPipeline.from_pretrained("${model.id}")
+ pipe = DiffusionPipeline.from_pretrained("${model.id}", dtype=torch.bfloat16, device_map="cuda")

Member:

Suggested change:
- pipe = DiffusionPipeline.from_pretrained("${model.id}", dtype=torch.bfloat16, device_map="cuda")
+ pipe = DiffusionPipeline.from_pretrained("${model.id}", dtype=torch.bfloat16, device_map="auto")

from diffusers.utils import load_image, export_to_video
- pipe = DiffusionPipeline.from_pretrained("${model.id}", torch_dtype=torch.float16)
+ pipe = DiffusionPipeline.from_pretrained("${model.id}", dtype=torch.float16, device_map="cuda")

Member:

Suggested change:
- pipe = DiffusionPipeline.from_pretrained("${model.id}", dtype=torch.float16, device_map="cuda")
+ pipe = DiffusionPipeline.from_pretrained("${model.id}", dtype=torch.float16, device_map="auto")

from diffusers import DiffusionPipeline
- pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}")
+ pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}", dtype=torch.bfloat16, device_map="cuda")

Member:

Suggested change:
- pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}", dtype=torch.bfloat16, device_map="cuda")
+ pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}", dtype=torch.bfloat16, device_map="auto")

from diffusers.utils import export_to_video
- pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}")
+ pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}", dtype=torch.bfloat16, device_map="cuda")

Member:

Suggested change:
- pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}", dtype=torch.bfloat16, device_map="cuda")
+ pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}", dtype=torch.bfloat16, device_map="auto")

from diffusers.utils import load_image, export_to_video
- pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}")
+ pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}", dtype=torch.bfloat16, device_map="cuda")

Member:

Suggested change:
- pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}", dtype=torch.bfloat16, device_map="cuda")
+ pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}", dtype=torch.bfloat16, device_map="auto")

from diffusers import DiffusionPipeline
- pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}")
+ pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}", dtype=torch.bfloat16, device_map="cuda")

Member:

Suggested change:
- pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}", dtype=torch.bfloat16, device_map="cuda")
+ pipe = DiffusionPipeline.from_pretrained("${get_base_diffusers_model(model)}", dtype=torch.bfloat16, device_map="auto")

mask = load_image("https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/cup_mask.png")
- pipe = FluxFillPipeline.from_pretrained("${model.id}", torch_dtype=torch.bfloat16).to("cuda")
+ pipe = FluxFillPipeline.from_pretrained("${model.id}", dtype=torch.bfloat16, device_map="cuda")

Member:

Suggested change:
- pipe = FluxFillPipeline.from_pretrained("${model.id}", dtype=torch.bfloat16, device_map="cuda")
+ pipe = FluxFillPipeline.from_pretrained("${model.id}", dtype=torch.bfloat16, device_map="auto")

from diffusers.utils import load_image
- pipe = AutoPipelineForInpainting.from_pretrained("${model.id}", torch_dtype=torch.float16, variant="fp16").to("cuda")
+ pipe = AutoPipelineForInpainting.from_pretrained("${model.id}", torch_dtype=torch.float16, variant="fp16", device_map="cuda")

Member:

Suggested change:
- pipe = AutoPipelineForInpainting.from_pretrained("${model.id}", torch_dtype=torch.float16, variant="fp16", device_map="cuda")
+ pipe = AutoPipelineForInpainting.from_pretrained("${model.id}", torch_dtype=torch.float16, variant="fp16", device_map="auto")

@sayakpaul (Member):

> let's move to auto so that people can use this on mps and distributed setups as well.

Not yet for a DiffusionPipeline, since it can contain multiple underlying models. So strategies known to work for the model-level device_map implementation won't work right out of the box.

For a DiffusionPipeline, "auto" therefore isn't a valid input for device_map yet.

Distributed setups are impacted for the same reason (a DiffusionPipeline can contain multiple models), and hence it's a bit more involved than just specifying device_map="auto". Docs for that are here.
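
To illustrate the multiple-models point, a quick sketch (the repo id is just an example) that lists what a pipeline actually bundles:

    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",  # example repo
        torch_dtype=torch.float16,
    )
    # A pipeline is a bag of components (unet, vae, text encoders, tokenizers,
    # scheduler, ...), and each model would need its own placement strategy.
    print({name: type(comp).__name__ for name, comp in pipe.components.items()})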

@Vaibhavs10 (Member) commented Nov 3, 2025:

@sayakpaul, thanks for the context. How do you distinguish between cuda and mps then? Do users just have to specify it explicitly?

@sayakpaul (Member):

> how do you distinguish between cuda or mps then? users just have to explicitly specify it?

Either that (i.e., to("cuda"), to("mps"), etc.), or another very common pattern where users just call enable_model_cpu_offload().
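
A minimal sketch of both patterns side by side (the model id is a placeholder):

    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained("some/model-id", torch_dtype=torch.float16)  # placeholder id

    # Pattern 1: explicit device placement.
    pipe.to("cuda")  # or pipe.to("mps") on Apple silicon

    # Pattern 2 (use instead of .to()): keep models on CPU and move each one
    # to the accelerator only while it runs, lowering peak VRAM usage.
    # pipe.enable_model_cpu_offload()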

@ariG23498 (Contributor, Author):

@sayakpaul @linoytsaban, could either of you approve the PR?

That way we know the code snippets don't have any issues. The only thing left would be to check the CI failures 😓.

@linoytsaban (Contributor) left a comment:

LGTM! thanks @ariG23498

@Vaibhavs10 (Member) left a comment:

Good for me to merge. Let's put a comment like this for MPS users in the snippets:

- from diffusers import DiffusionPipeline
+ import torch
+ from diffusers import DiffusionPipeline

Member:

Suggested change:
+ # switch to `mps` for apple devices

Also updated `torch_dtype` to `dtype`
@ariG23498 merged commit ed62b85 into main on Nov 5, 2025 (3 of 5 checks passed).
@ariG23498 deleted the ariG23498-patch-1 branch on November 5, 2025 at 12:08.