
Conversation

@pbielak (Collaborator) commented Nov 4, 2025

What does this PR do?

When using DeepSpeed parallelization, the `attn_output` shape is smaller than expected by one of the reshape operations [1]. This commit fixes the issue by loosening the size assumption on the last tensor dimension: instead of `.reshape(batch_size, seq_length, embed_dim)`, it now uses `.reshape(batch_size, seq_length, -1)` so that the size of the last dimension is inferred automatically.

[1] https://github.com/huggingface/optimum-habana/blob/568f643875ee218d3b901a339813b6472b8c1c36/optimum/habana/transformers/models/clip/modeling_clip.py#L138

`attn_output = attn_output.reshape(batch_size, seq_length, embed_dim).contiguous()`
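For intuition, here is a minimal sketch (hypothetical shapes and world size, not the actual optimum-habana code) of why the fixed last dimension fails under tensor parallelism: each rank holds only `num_heads / world_size` attention heads, so the rank-local hidden size is `embed_dim / world_size`, and only `-1` infers it correctly:

```python
import torch

batch_size, seq_length = 2, 16
embed_dim, num_heads = 1024, 16
world_size = 8  # hypothetical tensor-parallel degree

# Each rank holds only its shard of the attention heads.
heads_per_rank = num_heads // world_size
head_dim = embed_dim // num_heads
attn_output = torch.randn(batch_size, heads_per_rank, seq_length, head_dim)

# Move heads next to head_dim so they can be flattened together.
attn_output = attn_output.transpose(1, 2)

# Fails under parallelism: the rank-local hidden size is embed_dim // world_size.
# attn_output.reshape(batch_size, seq_length, embed_dim)  # RuntimeError

# Works: -1 infers the rank-local hidden size automatically.
attn_output = attn_output.reshape(batch_size, seq_length, -1).contiguous()
print(attn_output.shape)  # torch.Size([2, 16, 128])
```

With `world_size = 1` the two variants are equivalent, which is why the bug only surfaces when running with DeepSpeed parallelization.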

Example command that is currently broken (README example):

PT_HPU_LAZY_MODE=1 QUANT_CONFIG=./quantization_config/maxabs_measure.json python ../gaudi_spawn.py --use_deepspeed --world_size 8 run_pipeline.py \
--model_name_or_path llava-hf/llava-v1.6-mistral-7b-hf \
--image_path "https://llava-vl.github.io/static/images/view.jpg" \
--use_hpu_graphs \
--bf16 \
--use_flash_attention \
--flash_attention_recompute

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@pbielak (Collaborator, Author) commented Nov 4, 2025

This commit essentially reverts this `.reshape(...)` to the way it was before the Transformers upgrade; see [1].

[1] https://github.com/huggingface/optimum-habana/blob/v1.19-release/optimum/habana/transformers/models/clip/modeling_clip.py#L170

@pbielak pbielak marked this pull request as ready for review November 4, 2025 13:49
@pbielak pbielak requested a review from regisss as a code owner November 4, 2025 13:49
@pbielak pbielak self-assigned this Nov 4, 2025
@regisss (Collaborator) left a comment

LGTM!

@regisss regisss merged commit c7c1b3d into huggingface:main Nov 5, 2025
2 of 5 checks passed