Conversation

@RichardWooSJTU RichardWooSJTU commented Feb 4, 2024

PR types

Bug fixes

PR changes

OPs

Description

Background:
The unit test test_llm_int8_linear.py failed.

Reason:
The weight_quantize op was changed to only accept a scale dtype identical to the input data's dtype, so the llm.int8 op is no longer supported.

Solution:
Adapt the weight_quantize op so that it also satisfies llm.int8.

Pcard-71502
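
For context, the failing test exercises the llm.int8 path end to end: the fp16 weight is quantized with weight_quantize using the llm.int8 algorithm, and the resulting int8 weight plus scales are fed to llm_int8_linear. Below is a minimal sketch of that flow, assuming the paddle.nn.quant Python APIs (weight_quantize / llm_int8_linear); exact argument names and defaults may differ between Paddle versions.

```python
# Minimal sketch, not the test itself. Assumes paddle.nn.quant.weight_quantize
# and paddle.nn.quant.llm_int8_linear; both ops require a GPU device.
import paddle
from paddle.nn.quant import weight_quantize, llm_int8_linear

paddle.set_device("gpu")

x = paddle.randn([2, 64], dtype="float16")    # activations
w = paddle.randn([64, 128], dtype="float16")  # fp16 weight to be quantized

# weight_quantize returns the int8-packed weight and per-channel scales.
# This PR adapts the op so the scale dtype it produces is accepted by the
# llm.int8 kernel, not only by the weight_only GEMM.
qw, scale = weight_quantize(w, algo="llm.int8")

out = llm_int8_linear(x, qw, weight_scale=scale, threshold=6.0)
print(out.shape)  # [2, 128]
```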


@wwbitejotunn wwbitejotunn left a comment


LGTM for weight_only gemm scale type

@carryyu carryyu merged commit 194ef8b into PaddlePaddle:develop Feb 26, 2024
@jeng1220 (Collaborator) commented:

@RichardWooSJTU,
Could you please cherry-pick this patch to the release/2.6 branch?

cc @onecatcn for visibility
