From d4923e9ec9c9d2447294f98d5f1381499e619d5c Mon Sep 17 00:00:00 2001
From: jiangziyan-693
Date: Wed, 17 Jan 2024 12:43:23 +0000
Subject: [PATCH 1/9] add cn doc of ptq

---
 docs/api/paddle/quantization/ptq_cn.rst | 45 +++++++++++++++++++++++++
 1 file changed, 45 insertions(+)
 create mode 100644 docs/api/paddle/quantization/ptq_cn.rst

diff --git a/docs/api/paddle/quantization/ptq_cn.rst b/docs/api/paddle/quantization/ptq_cn.rst
new file mode 100644
index 00000000000..b6721401253
--- /dev/null
+++ b/docs/api/paddle/quantization/ptq_cn.rst
@@ -0,0 +1,45 @@
+.. _cn_api_paddle_quantization_ptq:
+
+PTQ
+-------------------------------
+.. py:class:: paddle.quantization.PTQ(Quantization)
+将训练后量化应用到模型上。
+
+方法
+::::::::::::
+quantize(model: Layer, inplace=False)
+'''''''''
+
+创建一个用于训练后量化的模型。
+
+量化配置将在模型中传播。它将向模型中插入观察者以收集和计算量化参数。
+
+**参数**
+
+    - **model**(Layer) - 待量化的模型。
+    - **inplace**(bool) - 是否对模型进行原地修改。
+**返回**
+
+为训练后量化准备好的模型。
+
+**代码示例**
+
+COPY-FROM: paddle.quantization.PTQ.quantize
+
+convert(self, model: Layer, inplace=False, remain_weight=False):
+'''''''''
+
+将量化模型转换为ONNX格式。转换后的模型可以通过调用 paddle.jit.save 保存为推理模型。
+
+**参数**
+
+    - **model**(Layer) - 待量化的模型。
+    - **inplace**(bool, optional) - 是否要对模型进行就地修改,默认为false。
+    - **remain_weight**(bool, optional) - 是否保持权重为floats,默认为false。
+**返回**
+
+转换后的模型
+
+**代码示例**
+
+COPY-FROM: paddle.quantization.PTQ.convert
\ No newline at end of file

From f20937d7214bd86a352e7095ae4f9003e429ebbe Mon Sep 17 00:00:00 2001
From: jiangziyan-693
Date: Wed, 17 Jan 2024 13:09:28 +0000
Subject: [PATCH 2/9] [Docathon][Add CN Doc No.48]

---
 docs/api/paddle/quantization/qat_cn.rst | 54 +++++++++++++++++++++++++
 1 file changed, 54 insertions(+)
 create mode 100644 docs/api/paddle/quantization/qat_cn.rst

diff --git a/docs/api/paddle/quantization/qat_cn.rst b/docs/api/paddle/quantization/qat_cn.rst
new file mode 100644
index 00000000000..679ff73d3f4
--- /dev/null
+++ b/docs/api/paddle/quantization/qat_cn.rst
@@ -0,0 +1,54 @@
+.. _cn_api_paddle_quantization_QAT:
+
+QAT
+-------------------------------
+
+.. py:class:: paddle.quantization.QAT(config: paddle.quantization.config.QuantConfig)
+用于为量化感知训练准备模型的工具。
+
+参数
+::::::::::::
+    - **config** (QuantConfig) - 量化配置,通常指的是设置和调整模型量化过程中的参数和选项。
+
+**代码示例**
+
+COPY-FROM: paddle.quantization.QAT.quantize
+
+方法
+::::::::::::
+quantize(model: Layer, inplace=False)
+'''''''''
+创建一个适用于量化感知训练的模型。
+
+量化配置将在模型中传播。并且它将在模型中插入伪量化器以模拟量化过程。
+
+**参数**
+
+    - **model(Layer)** - 待量化的模型。
+    - **inplace(bool)** - 是否对模型进行原地修改。
+
+**返回**
+
+为量化感知训练准备好的模型。
+
+**代码示例**
+
+COPY-FROM: paddle.quantization.QAT.quantize
+
+convert(self, model: Layer, inplace=False, remain_weight=False):
+'''''''''
+
+将量化模型转换为ONNX格式。转换后的模型可以通过调用 paddle.jit.save 保存为推理模型。
+
+**参数**
+
+    - **model**(Layer) - 待量化的模型。
+    - **inplace**(bool, optional) - 是否要对模型进行就地修改,默认为false。
+    - **remain_weight**(bool, optional) - 是否宝石权重为floats,默认为false。
+**返回**
+
+转换后的模型
+
+**代码示例**
+
+COPY-FROM: paddle.quantization.QAT.convert
\ No newline at end of file

From 11c641d13b9b8ff1cdadd380a931d3c1b8fc39d7 Mon Sep 17 00:00:00 2001
From: jiangziyan-693
Date: Thu, 18 Jan 2024 03:44:23 +0000
Subject: [PATCH 3/9] just to complete CI

---
 docs/api/paddle/quantization/ptq_cn.rst | 4 ++--
 docs/api/paddle/quantization/qat_cn.rst | 2 +-
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/docs/api/paddle/quantization/ptq_cn.rst b/docs/api/paddle/quantization/ptq_cn.rst
index b6721401253..5acba53def4 100644
--- a/docs/api/paddle/quantization/ptq_cn.rst
+++ b/docs/api/paddle/quantization/ptq_cn.rst
@@ -1,4 +1,4 @@
-.. _cn_api_paddle_quantization_ptq:
+.. _cn_api_paddle_quantization_PTQ:
 
 PTQ
 -------------------------------
@@ -42,4 +42,4 @@ convert(self, model: Layer, inplace=False, remain_weight=False):
 
 **代码示例**
 
-COPY-FROM: paddle.quantization.PTQ.convert
\ No newline at end of file
+COPY-FROM: paddle.quantization.PTQ.convert
diff --git a/docs/api/paddle/quantization/qat_cn.rst b/docs/api/paddle/quantization/qat_cn.rst
index 679ff73d3f4..c28ad72c8c0 100644
--- a/docs/api/paddle/quantization/qat_cn.rst
+++ b/docs/api/paddle/quantization/qat_cn.rst
@@ -51,4 +51,4 @@ convert(self, model: Layer, inplace=False, remain_weight=False):
 
 **代码示例**
 
-COPY-FROM: paddle.quantization.QAT.convert
\ No newline at end of file
+COPY-FROM: paddle.quantization.QAT.convert

From 4cadcb5ca410065c7a41825d10d4148b005e0c5d Mon Sep 17 00:00:00 2001
From: jiangziyan-693
Date: Thu, 18 Jan 2024 04:17:31 +0000
Subject: [PATCH 4/9] pass the ci

---
 docs/api/paddle/quantization/ptq_cn.rst | 1 +
 docs/api/paddle/quantization/qat_cn.rst | 1 +
 2 files changed, 2 insertions(+)

diff --git a/docs/api/paddle/quantization/ptq_cn.rst b/docs/api/paddle/quantization/ptq_cn.rst
index 5acba53def4..6cdd10d1c7e 100644
--- a/docs/api/paddle/quantization/ptq_cn.rst
+++ b/docs/api/paddle/quantization/ptq_cn.rst
@@ -43,3 +43,4 @@ convert(self, model: Layer, inplace=False, remain_weight=False):
 **代码示例**
 
 COPY-FROM: paddle.quantization.PTQ.convert
+
diff --git a/docs/api/paddle/quantization/qat_cn.rst b/docs/api/paddle/quantization/qat_cn.rst
index c28ad72c8c0..9e4e8f1ce7f 100644
--- a/docs/api/paddle/quantization/qat_cn.rst
+++ b/docs/api/paddle/quantization/qat_cn.rst
@@ -52,3 +52,4 @@ convert(self, model: Layer, inplace=False, remain_weight=False):
 **代码示例**
 
 COPY-FROM: paddle.quantization.QAT.convert
+

From c01847c9996d66c37b6ab9644d1faa20b0f8062a Mon Sep 17 00:00:00 2001
From: jiangziyan-693
Date: Thu, 18 Jan 2024 04:27:47 +0000
Subject: [PATCH 5/9] cici

---
 docs/api/paddle/quantization/ptq_cn.rst | 1 +
 docs/api/paddle/quantization/qat_cn.rst | 1 +
 2 files changed, 2 insertions(+)

diff --git a/docs/api/paddle/quantization/ptq_cn.rst b/docs/api/paddle/quantization/ptq_cn.rst
index 6cdd10d1c7e..f6f35ceed68 100644
--- a/docs/api/paddle/quantization/ptq_cn.rst
+++ b/docs/api/paddle/quantization/ptq_cn.rst
@@ -44,3 +44,4 @@ convert(self, model: Layer, inplace=False, remain_weight=False):
 
 COPY-FROM: paddle.quantization.PTQ.convert
 
+
diff --git a/docs/api/paddle/quantization/qat_cn.rst b/docs/api/paddle/quantization/qat_cn.rst
index 9e4e8f1ce7f..51a176c510e 100644
--- a/docs/api/paddle/quantization/qat_cn.rst
+++ b/docs/api/paddle/quantization/qat_cn.rst
@@ -53,3 +53,4 @@ convert(self, model: Layer, inplace=False, remain_weight=False):
 
 COPY-FROM: paddle.quantization.QAT.convert
 
+

From 2073bac12bebc0f0a5034e54afcaeef459b968e5 Mon Sep 17 00:00:00 2001
From: zachary sun
Date: Thu, 18 Jan 2024 15:34:07 +0800
Subject: [PATCH 6/9] fix style

---
 docs/api/paddle/quantization/ptq_cn.rst | 8 +++-----
 docs/api/paddle/quantization/qat_cn.rst | 8 +++-----
 2 files changed, 6 insertions(+), 10 deletions(-)

diff --git a/docs/api/paddle/quantization/ptq_cn.rst b/docs/api/paddle/quantization/ptq_cn.rst
index f6f35ceed68..b7cb0a3524c 100644
--- a/docs/api/paddle/quantization/ptq_cn.rst
+++ b/docs/api/paddle/quantization/ptq_cn.rst
@@ -29,13 +29,13 @@ COPY-FROM: paddle.quantization.PTQ.quantize
 convert(self, model: Layer, inplace=False, remain_weight=False):
 '''''''''
 
-将量化模型转换为ONNX格式。转换后的模型可以通过调用 paddle.jit.save 保存为推理模型。
+将量化模型转换为 ONNX 格式。转换后的模型可以通过调用 paddle.jit.save 保存为推理模型。
 
 **参数**
 
     - **model**(Layer) - 待量化的模型。
-    - **inplace**(bool, optional) - 是否要对模型进行就地修改,默认为false。
-    - **remain_weight**(bool, optional) - 是否保持权重为floats,默认为false。
+    - **inplace**(bool, optional) - 是否要对模型进行就地修改,默认为 false。
+    - **remain_weight**(bool, optional) - 是否保持权重为 floats,默认为 false。
 **返回**
 
 转换后的模型
@@ -43,5 +43,3 @@ convert(self, model: Layer, inplace=False, remain_weight=False):
 **代码示例**
 
 COPY-FROM: paddle.quantization.PTQ.convert
-
-
diff --git a/docs/api/paddle/quantization/qat_cn.rst b/docs/api/paddle/quantization/qat_cn.rst
index 51a176c510e..5fd62ff5004 100644
--- a/docs/api/paddle/quantization/qat_cn.rst
+++ b/docs/api/paddle/quantization/qat_cn.rst
@@ -38,13 +38,13 @@ COPY-FROM: paddle.quantization.QAT.quantize
 convert(self, model: Layer, inplace=False, remain_weight=False):
 '''''''''
 
-将量化模型转换为ONNX格式。转换后的模型可以通过调用 paddle.jit.save 保存为推理模型。
+将量化模型转换为 ONNX 格式。转换后的模型可以通过调用 paddle.jit.save 保存为推理模型。
 
 **参数**
 
     - **model**(Layer) - 待量化的模型。
-    - **inplace**(bool, optional) - 是否要对模型进行就地修改,默认为false。
-    - **remain_weight**(bool, optional) - 是否宝石权重为floats,默认为false。
+    - **inplace**(bool, optional) - 是否要对模型进行就地修改,默认为 false。
+    - **remain_weight**(bool, optional) - 是否宝石权重为 floats,默认为 false。
 **返回**
 
 转换后的模型
@@ -52,5 +52,3 @@ convert(self, model: Layer, inplace=False, remain_weight=False):
 **代码示例**
 
 COPY-FROM: paddle.quantization.QAT.convert
-
-

From db118ef0715de3401db8078763ea0508e7055189 Mon Sep 17 00:00:00 2001
From: jiangziyan-693 <150317638+jiangziyan-693@users.noreply.github.com>
Date: Fri, 19 Jan 2024 13:16:29 +0800
Subject: [PATCH 7/9] Rename ptq_cn.rst to PTQ_cn.rst

---
 docs/api/paddle/quantization/{ptq_cn.rst => PTQ_cn.rst} | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename docs/api/paddle/quantization/{ptq_cn.rst => PTQ_cn.rst} (100%)

diff --git a/docs/api/paddle/quantization/ptq_cn.rst b/docs/api/paddle/quantization/PTQ_cn.rst
similarity index 100%
rename from docs/api/paddle/quantization/ptq_cn.rst
rename to docs/api/paddle/quantization/PTQ_cn.rst

From 6f82d686503a68c93a4efa8db4ca59ba9752d784 Mon Sep 17 00:00:00 2001
From: jiangziyan-693 <150317638+jiangziyan-693@users.noreply.github.com>
Date: Fri, 19 Jan 2024 13:16:51 +0800
Subject: [PATCH 8/9] Rename qat_cn.rst to QAT_cn.rst

---
 docs/api/paddle/quantization/{qat_cn.rst => QAT_cn.rst} | 0
 1 file changed, 0 insertions(+), 0 deletions(-)
 rename docs/api/paddle/quantization/{qat_cn.rst => QAT_cn.rst} (100%)

diff --git a/docs/api/paddle/quantization/qat_cn.rst b/docs/api/paddle/quantization/QAT_cn.rst
similarity index 100%
rename from docs/api/paddle/quantization/qat_cn.rst
rename to docs/api/paddle/quantization/QAT_cn.rst

From 0ab7b7c7820c7d6c6af9810cf54b2e47dc69eb45 Mon Sep 17 00:00:00 2001
From: jiangziyan-693 <150317638+jiangziyan-693@users.noreply.github.com>
Date: Fri, 19 Jan 2024 14:28:34 +0800
Subject: [PATCH 9/9] Update QAT_cn.rst

---
 docs/api/paddle/quantization/QAT_cn.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/api/paddle/quantization/QAT_cn.rst b/docs/api/paddle/quantization/QAT_cn.rst
index 5fd62ff5004..fb3a19b6b86 100644
--- a/docs/api/paddle/quantization/QAT_cn.rst
+++ b/docs/api/paddle/quantization/QAT_cn.rst
@@ -44,7 +44,7 @@ convert(self, model: Layer, inplace=False, remain_weight=False):
 
     - **model**(Layer) - 待量化的模型。
     - **inplace**(bool, optional) - 是否要对模型进行就地修改,默认为 false。
-    - **remain_weight**(bool, optional) - 是否宝石权重为 floats,默认为 false。
+    - **remain_weight**(bool, optional) - 是否保持权重为 floats,默认为 false。
 **返回**
 
 转换后的模型