[Docathon][Add CN Doc No.47] #6421
.. _cn_api_paddle_quantization_ptq:

PTQ
-------------------------------

.. py:class:: paddle.quantization.PTQ(Quantization)

Applies post-training quantization to a model.
Methods
::::::::::::

quantize(model: Layer, inplace=False)
'''''''''

Creates a model prepared for post-training quantization.

The quantization configuration is propagated through the model. Observers are inserted into the model to collect and compute quantization parameters.
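The observer mechanism described above can be illustrated with a minimal, self-contained sketch. This is not Paddle's actual implementation; the `MinMaxObserver` class, its method names, and the asymmetric int8 mapping are illustrative assumptions only:

```python
# Toy sketch of a post-training-quantization observer (NOT Paddle's code):
# it records the range of values flowing through a layer during calibration,
# then derives int8 quantization parameters from that range.

class MinMaxObserver:
    """Collects min/max statistics and computes an int8 scale/zero-point."""

    def __init__(self):
        self.min_val = float("inf")
        self.max_val = float("-inf")

    def observe(self, values):
        # Called on each calibration forward pass with a layer's outputs.
        self.min_val = min(self.min_val, min(values))
        self.max_val = max(self.max_val, max(values))

    def quant_params(self, qmin=-128, qmax=127):
        # Asymmetric quantization: map [min_val, max_val] onto [qmin, qmax].
        scale = (self.max_val - self.min_val) / (qmax - qmin)
        zero_point = round(qmin - self.min_val / scale)
        return scale, zero_point

obs = MinMaxObserver()
obs.observe([-1.0, 0.5, 2.0])   # one calibration batch
obs.observe([0.0, 3.0])         # another calibration batch
scale, zero_point = obs.quant_params()
```

Running a few representative batches through the prepared model is what lets each inserted observer settle on a stable range before the quantization parameters are read out.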
**Parameters**

- **model** (Layer) - The model to be quantized.
- **inplace** (bool) - Whether to modify the model in place.

**Returns**

The model prepared for post-training quantization.

**Code Example**
Collaborator: The example is incorrect.
COPY-FROM: paddle.quantization.PTQ.quantize
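A full observed-then-quantized value round-trip can also be sketched without Paddle. The helper names below are hypothetical, intended only to show how the scale/zero-point pair produced during calibration is applied:

```python
# Toy sketch (NOT Paddle's API) of applying quantization parameters that
# an observer computed: quantize a float value to int8 and dequantize it
# back, showing the rounding error post-training quantization introduces.

def quantize(x, scale, zero_point, qmin=-128, qmax=127):
    """Map a float to an int8 code using an affine (asymmetric) scheme."""
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))  # clamp to the int8 range

def dequantize(q, scale, zero_point):
    """Recover an approximate float from its int8 code."""
    return (q - zero_point) * scale

scale, zero_point = 0.05, -10      # example parameters from calibration
q = quantize(1.234, scale, zero_point)
x_hat = dequantize(q, scale, zero_point)
```

The gap between `1.234` and `x_hat` is the quantization error; a good calibration range keeps that gap small for the values the layer actually sees.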
convert(model: Layer, inplace=False, remain_weight=False)
'''''''''

Converts the quantized model to ONNX style. The converted model can be saved as an inference model by calling paddle.jit.save.
Collaborator: The documentation is incorrect; it does not convert to ONNX format.
**Parameters**

- **model** (Layer) - The model to be quantized.
- **inplace** (bool, optional) - Whether to modify the model in place. Default: False.
- **remain_weight** (bool, optional) - Whether to keep the weights as floats. Default: False.
Collaborator: There is a typo (宝石 should be 保持, "keep").
**Returns**

The converted model.

**Code Example**
Collaborator: The example is incorrect.
COPY-FROM: paddle.quantization.PTQ.convert
Collaborator: The documentation content is incorrect.
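As a rough illustration of what the `remain_weight` flag controls, here is a hedged, self-contained sketch of symmetric int8 weight quantization. This is not Paddle's convert logic; the function name and details are assumptions made for illustration:

```python
# Toy sketch (NOT Paddle's implementation) of converting a layer's weights:
# symmetric per-tensor int8 quantization, with a remain_weight option that
# keeps the stored weights as floats (dequantized, error baked in).

def quantize_weights(weights, remain_weight=False, qmin=-128, qmax=127):
    """Quantize a list of float weights to int8 codes and a scale."""
    max_abs = max(abs(w) for w in weights)
    scale = max_abs / qmax  # symmetric scheme: zero point is 0
    q = [max(qmin, min(qmax, round(w / scale))) for w in weights]
    if remain_weight:
        # Keep weights as floats: dequantize back so the converted model
        # still holds float values carrying the quantization error.
        return [v * scale for v in q], scale
    return q, scale

q_int8, scale = quantize_weights([0.5, -1.0, 0.25])
fp_weights, _ = quantize_weights([0.5, -1.0, 0.25], remain_weight=True)
```

With `remain_weight=True` the weight tensors stay in floating point, which can be convenient for debugging accuracy loss before committing to an integer inference deployment.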