
Conversation

@jiqing-feng
Contributor

Hi @amyeroberts @Narsil.

As mentioned in the previous PR #31444, we can enable low-precision pipelines by converting the outputs to float() before they are passed to numpy. I followed the code here. Do you mind taking a review? Thx!
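To make the idea concrete for readers of this thread, here is a minimal sketch of the conversion, assuming a PyTorch model output; the variable names are illustrative and not taken from the actual pipeline code:

import torch

# Stand-in for model logits produced by a bf16 pipeline
outputs = torch.randn(2, 3, dtype=torch.bfloat16)

# numpy has no bfloat16 dtype, so upcast to float32 before converting
if outputs.dtype == torch.bfloat16:
    outputs = outputs.float()
scores = outputs.numpy()
print(scores.dtype)  # float32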

@aliencaocao
Contributor

aliencaocao commented Jun 26, 2024

Related to #31342, but I don't quite get your changes - what exactly do they fix? When I tested all the pipelines in fp16, none of them had issues outputting logits.

@jiqing-feng
Contributor Author

jiqing-feng commented Jul 4, 2024

Related to #31342, but I don't quite get your changes - what exactly do they fix? When I tested all the pipelines in fp16, none of them had issues outputting logits.

Hi @aliencaocao, FP16 works fine, but BF16 is not accepted by numpy, as you can see:
[screenshot of the numpy conversion error for a bf16 tensor]
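For completeness, a small reproduction of the difference described above (a sketch, not the content of the screenshot):

import torch

fp16 = torch.ones(2, dtype=torch.float16)
print(fp16.numpy().dtype)  # works: numpy has a native float16 dtype

bf16 = torch.ones(2, dtype=torch.bfloat16)
try:
    bf16.numpy()  # fails: numpy has no bfloat16 dtype
except TypeError as err:
    print(err)

print(bf16.float().numpy().dtype)  # float32 -- upcasting works around it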

@jiqing-feng jiqing-feng closed this Jul 4, 2024
@jiqing-feng jiqing-feng reopened this Jul 4, 2024
@LysandreJik LysandreJik requested a review from SunMarc July 4, 2024 12:44
Member

@SunMarc SunMarc left a comment

Thanks for adding this @jiqing-feng! Left a few comments! Nice tests =)

@jiqing-feng
Contributor Author

Hi @SunMarc. I have addressed all your comments; please review. BTW, the failing tests are due to a connection error and are not related to my changes.

Member

@SunMarc SunMarc left a comment

Thanks for iterating! Just a few nits!

Member

@SunMarc SunMarc left a comment

Thanks for iterating, @jiqing-feng!

@SunMarc SunMarc requested a review from amyeroberts July 9, 2024 11:30
@jiqing-feng
Contributor Author

Hi @amyeroberts, would you please review this PR? Thx!

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@jiqing-feng
Contributor Author

Hi @amyeroberts. All tests pass now; do you mind taking a review and merging it? Thx!

@jiqing-feng
Contributor Author

Hi @amyeroberts. This PR should be ready to merge; please take a review. Thx!

@amyeroberts
Contributor

Hi @jiqing-feng, thanks for opening this PR! I'll get to reviewing it soon, but it will likely be in a few days.

@jiqing-feng
Contributor Author

Hi @amyeroberts. Do you mind reviewing this PR? Thx.

@jiqing-feng
Contributor Author

Hi @amyeroberts. Do you think it could be merged?

@yao-matrix
Contributor

@amyeroberts, could you help review this PR? Thx.

@yao-matrix
Contributor

@SunMarc, could you suggest anyone else from HF who can help review and merge this PR? It seems @amyeroberts has had no bandwidth for this over the past several months. Thx.

@LysandreJik
Member

Sorry for the delay @yao-matrix!

@Rocketknight1 is managing the pipeline; Matt, would you mind reviewing this PR when you have a second?

Member

@Rocketknight1 Rocketknight1 left a comment

Overall this PR seems good to me! However, I'd prefer if outputs.dtype in (torch.bfloat16, torch.float16) over if outputs.dtype == torch.bfloat16, so we catch float16 as well.

Other than that, I'm happy with it!
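In code, the suggested check would look roughly like the sketch below; the helper name postprocess_outputs is hypothetical and only illustrates the dtype tuple the reviewer asked for:

import numpy as np
import torch

def postprocess_outputs(outputs: torch.Tensor, framework: str = "pt") -> np.ndarray:
    # Catch both half-precision dtypes before handing the tensor to numpy
    if framework == "pt" and outputs.dtype in (torch.bfloat16, torch.float16):
        outputs = outputs.float()
    return outputs.numpy()

print(postprocess_outputs(torch.randn(2, 3, dtype=torch.float16)).dtype)  # float32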

Comment on lines 187 to 189
if self.framework == "pt" and outputs.dtype == torch.bfloat16:
    # To enable using bf16
    outputs = outputs.float().numpy()
Member

This seems like a regression to me! The previous code worked with both torch.bfloat16 and torch.float16.

Contributor Author

@jiqing-feng jiqing-feng Sep 18, 2024

I use torch 2.4 and numpy 2.1.1 with Python 3.10, and it failed:
[screenshot of the failure]

@jiqing-feng
Contributor Author

jiqing-feng commented Sep 18, 2024

Overall this PR seems good to me! However, I'd prefer if outputs.dtype in (torch.bfloat16, torch.float16) over if outputs.dtype == torch.bfloat16, so we catch float16 as well.

Other than that, I'm happy with it!

Hi @Rocketknight1, thanks for your review. I have fixed it per your comments.

@jiqing-feng
Contributor Author

jiqing-feng commented Sep 18, 2024

Hi @Rocketknight1, do you mind helping re-run the tests? Thx.

@Rocketknight1
Member

Hi @jiqing-feng, another PR at #33554 touched the same files. I'm sorry - I didn't realize that it was doing the same thing as this PR! That PR has been merged, so I've merged this PR with the code there to avoid conflicts.

@Rocketknight1
Member

@jiqing-feng tests pass now and this looks good - are you okay for us to merge it?

Also cc @LysandreJik for final review, but no rush!

@jiqing-feng
Contributor Author

@jiqing-feng tests pass now and this looks good - are you okay for us to merge it?

Yes, please.

Member

@LysandreJik LysandreJik left a comment

That works for me! Thank you @jiqing-feng!

@LysandreJik LysandreJik merged commit 49a0bef into huggingface:main Sep 20, 2024
BernardZach pushed a commit to BernardZach/transformers that referenced this pull request Dec 5, 2024
* enable low-precision pipeline

* fix parameter for ASR

* reformat

* fix asr bug

* fix bug for zero-shot

* add dtype check

* rm useless comments

* add np.float16 check

* Update src/transformers/pipelines/image_classification.py

Co-authored-by: Marc Sun <[email protected]>

* Update src/transformers/pipelines/token_classification.py

Co-authored-by: Marc Sun <[email protected]>

* fix comments

* fix asr check

* make fixup

* No more need for is_torch_available()

---------

Co-authored-by: Marc Sun <[email protected]>
Co-authored-by: Matt <[email protected]>
Co-authored-by: Matt <[email protected]>
@jiqing-feng jiqing-feng deleted the pipeline branch December 19, 2024 02:02