ValueError: Could not find the transformer layer class QwenImageTransformerBlock in the model. #3811

@garychan22

Description

Hi, I am trying to fine-tune qwen-image-edit using accelerate in FSDP mode. I want to wrap QwenImageTransformerBlock in the transformer, and Qwen2_5_VLVisionBlock and Qwen2_5_VLDecoderLayer in the text_encoder. I set the environment parameters:

import os

def set_fsdp_env():
    os.environ["ACCELERATE_USE_FSDP"] = "true"
    os.environ["FSDP_AUTO_WRAP_POLICY"] = "TRANSFORMER_BASED_WRAP"
    os.environ["FSDP_BACKWARD_PREFETCH"] = "BACKWARD_PRE"
    os.environ["FSDP_TRANSFORMER_CLS_TO_WRAP"] = "QwenImageTransformerBlock,Qwen2_5_VLVisionBlock,Qwen2_5_VLDecoderLayer"
    os.environ["FSDP_CPU_RAM_EFFICIENT_LOADING"] = "false"

and prepare the two models

transformer = accelerator.prepare(transformer)
text_encoder = accelerator.prepare(text_encoder)

Finally, I encountered the following error, raised from text_encoder = accelerator.prepare(text_encoder):

ValueError: Could not find the transformer layer class QwenImageTransformerBlock in the model.
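For context, a minimal sketch of why this error can occur: Accelerate resolves each class name listed in FSDP_TRANSFORMER_CLS_TO_WRAP against the single model passed to prepare(), so a class name that only exists in the other model cannot be found when the text_encoder is prepared. The classes and helper below are illustrative stand-ins, not Accelerate's exact internals, though the lookup mirrors the recursive search done by accelerate.utils.get_module_class_from_name:

```python
# Stand-in for torch.nn.Module with just the children() API used by the lookup.
class Module:
    def __init__(self, *children):
        self._children = list(children)

    def children(self):
        return iter(self._children)

# Illustrative layer classes; only their names matter for the lookup.
class QwenImageTransformerBlock(Module): pass
class Qwen2_5_VLDecoderLayer(Module): pass

def get_module_class_from_name(module, name):
    """Recursively search a module tree for a submodule class with this name."""
    if module.__class__.__name__ == name:
        return module.__class__
    for child in module.children():
        found = get_module_class_from_name(child, name)
        if found is not None:
            return found
    return None

transformer = Module(QwenImageTransformerBlock())
text_encoder = Module(Qwen2_5_VLDecoderLayer())

# Found when preparing the transformer:
assert get_module_class_from_name(transformer, "QwenImageTransformerBlock") is QwenImageTransformerBlock
# Not found when preparing the text_encoder -> Accelerate raises the ValueError above:
assert get_module_class_from_name(text_encoder, "QwenImageTransformerBlock") is None
```

This suggests the combined class list works only for a model that actually contains every listed class.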

How can I resolve this problem? Thanks!
