From bee47bd8f5ec54f152d5a6f1c1b83e22682077b5 Mon Sep 17 00:00:00 2001
From: jiangziyan-693 <150317638+jiangziyan-693@users.noreply.github.com>
Date: Fri, 22 Dec 2023 17:33:48 +0800
Subject: [PATCH 01/16] [Docathon][Add CN Doc No.47]

---
 docs/quantization/ptq_cn.rst | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)
 create mode 100644 docs/quantization/ptq_cn.rst

diff --git a/docs/quantization/ptq_cn.rst b/docs/quantization/ptq_cn.rst
new file mode 100644
index 00000000000..abb8bc03fbc
--- /dev/null
+++ b/docs/quantization/ptq_cn.rst
@@ -0,0 +1,33 @@
+.. _cn_api_paddle_quantization_ptq:
+
+ptq
+-------------------------------
+.. py:class:: paddle.quantization.PTQ(Quantization)
+将训练后量化应用到模型上。
+
+.. py:function:: quantize(self, model: Layer, inplace=False)
+创建一个用于训练后量化的模型。
+
+量化配置将在模型中传播。它将向模型中插入观察者以收集和计算量化参数。
+
+参数
+:::::::::
+ - **model**(Layer) - 待量化的模型。
+ - **model**(Layer) - 是否对模型进行原地修改
+
+返回
+:::::::::
+为训练后量化准备好的模型。
+
+代码示例
+::::::::::
+
+COPY-FROM: paddle.quantization.ptq
+
+.. py:function:: convert(model: paddle.nn.layer.layers.Layer, inplace=False, remain_weight=False)
+将量化模型转换为ONNX格式。转换后的模型可以通过调用 paddle.jit.save 保存为推理模型。
+参数 model:类型 model:Layer参数 inplace:类型 inplace:bool,可选参数 remain_weight:类型 remain_weight:bool,可选
+
+返回
+::::::::::
+转换后的模型
\ No newline at end of file

From 2436d2644a40c773f21f89cdb6c45f402196693f Mon Sep 17 00:00:00 2001
From: jiangziyan-693 <150317638+jiangziyan-693@users.noreply.github.com>
Date: Fri, 22 Dec 2023 17:36:51 +0800
Subject: [PATCH 02/16] Delete docs/quantization/ptq_cn.rst

---
 docs/quantization/ptq_cn.rst | 33 ---------------------------------
 1 file changed, 33 deletions(-)
 delete mode 100644 docs/quantization/ptq_cn.rst

diff --git a/docs/quantization/ptq_cn.rst b/docs/quantization/ptq_cn.rst
deleted file mode 100644
index abb8bc03fbc..00000000000
--- a/docs/quantization/ptq_cn.rst
+++ /dev/null
@@ -1,33 +0,0 @@
-.. _cn_api_paddle_quantization_ptq:
-
-ptq
--------------------------------
-.. py:class:: paddle.quantization.PTQ(Quantization)
-将训练后量化应用到模型上。
-
-.. py:function:: quantize(self, model: Layer, inplace=False)
-创建一个用于训练后量化的模型。
-
-量化配置将在模型中传播。它将向模型中插入观察者以收集和计算量化参数。
-
-参数
-:::::::::
- - **model**(Layer) - 待量化的模型。
- - **model**(Layer) - 是否对模型进行原地修改
-
-返回
-:::::::::
-为训练后量化准备好的模型。
-
-代码示例
-::::::::::
-
-COPY-FROM: paddle.quantization.ptq
-
-.. py:function:: convert(model: paddle.nn.layer.layers.Layer, inplace=False, remain_weight=False)
-将量化模型转换为ONNX格式。转换后的模型可以通过调用 paddle.jit.save 保存为推理模型。
-参数 model:类型 model:Layer参数 inplace:类型 inplace:bool,可选参数 remain_weight:类型 remain_weight:bool,可选
-
-返回
-::::::::::
-转换后的模型
\ No newline at end of file

From 4d4d1198cf2002ae83b2e070c1299d9994051cf9 Mon Sep 17 00:00:00 2001
From: jiangziyan-693 <150317638+jiangziyan-693@users.noreply.github.com>
Date: Fri, 22 Dec 2023 17:37:51 +0800
Subject: [PATCH 03/16] [Docathon][Add CN Doc No.47]

---
 docs/api/paddle/quantization/ptq_cn.rst | 33 +++++++++++++++++++++++++
 1 file changed, 33 insertions(+)
 create mode 100644 docs/api/paddle/quantization/ptq_cn.rst

diff --git a/docs/api/paddle/quantization/ptq_cn.rst b/docs/api/paddle/quantization/ptq_cn.rst
new file mode 100644
index 00000000000..abb8bc03fbc
--- /dev/null
+++ b/docs/api/paddle/quantization/ptq_cn.rst
@@ -0,0 +1,33 @@
+.. _cn_api_paddle_quantization_ptq:
+
+ptq
+-------------------------------
+.. py:class:: paddle.quantization.PTQ(Quantization)
+将训练后量化应用到模型上。
+
+.. py:function:: quantize(self, model: Layer, inplace=False)
+创建一个用于训练后量化的模型。
+
+量化配置将在模型中传播。它将向模型中插入观察者以收集和计算量化参数。
+
+参数
+:::::::::
+ - **model**(Layer) - 待量化的模型。
+ - **model**(Layer) - 是否对模型进行原地修改
+
+返回
+:::::::::
+为训练后量化准备好的模型。
+
+代码示例
+::::::::::
+
+COPY-FROM: paddle.quantization.ptq
+
+.. py:function:: convert(model: paddle.nn.layer.layers.Layer, inplace=False, remain_weight=False)
+将量化模型转换为ONNX格式。转换后的模型可以通过调用 paddle.jit.save 保存为推理模型。
+参数 model:类型 model:Layer参数 inplace:类型 inplace:bool,可选参数 remain_weight:类型 remain_weight:bool,可选
+
+返回
+::::::::::
+转换后的模型
\ No newline at end of file
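
The 代码示例 section added in PATCH 03 above is a COPY-FROM placeholder that is resolved against the English docstring at build time, so no runnable example is visible in the diff itself. A minimal sketch of the quantize() flow being documented might look like the following; the LeNet model and the AbsmaxObserver-based QuantConfig are assumptions drawn from recent Paddle releases, not content of these patches.

.. code-block:: python

    # Illustrative sketch only; the observer and model choices are assumptions.
    import paddle
    from paddle.quantization import PTQ, QuantConfig
    from paddle.quantization.observers import AbsmaxObserver
    from paddle.vision.models import LeNet

    # Use one observer type for both activations and weights. quantize()
    # propagates this config through the model and inserts the observers so
    # they can collect and compute quantization parameters during calibration.
    observer = AbsmaxObserver()
    q_config = QuantConfig(activation=observer, weight=observer)

    model = LeNet()
    model.eval()

    ptq = PTQ(q_config)
    quant_model = ptq.quantize(model)  # inplace=False returns a prepared copy
    print(quant_model)
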
From 05bc31314852cd49f048a49e97fb2dbcdc3147d1 Mon Sep 17 00:00:00 2001
From: jiangziyan-693 <150317638+jiangziyan-693@users.noreply.github.com>
Date: Wed, 3 Jan 2024 11:18:22 +0800
Subject: [PATCH 04/16] Update docs/api/paddle/quantization/ptq_cn.rst

Co-authored-by: zachary sun <70642955+sunzhongkai588@users.noreply.github.com>
---
 docs/api/paddle/quantization/ptq_cn.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/api/paddle/quantization/ptq_cn.rst b/docs/api/paddle/quantization/ptq_cn.rst
index abb8bc03fbc..b62d9321873 100644
--- a/docs/api/paddle/quantization/ptq_cn.rst
+++ b/docs/api/paddle/quantization/ptq_cn.rst
@@ -1,6 +1,6 @@
 .. _cn_api_paddle_quantization_ptq:
 
-ptq
+PTQ
 -------------------------------
 .. py:class:: paddle.quantization.PTQ(Quantization)
 将训练后量化应用到模型上。

From 0c5cc6a68261a646e6f677f168111d1e83932292 Mon Sep 17 00:00:00 2001
From: jiangziyan-693 <150317638+jiangziyan-693@users.noreply.github.com>
Date: Wed, 3 Jan 2024 11:18:33 +0800
Subject: [PATCH 05/16] Update docs/api/paddle/quantization/ptq_cn.rst

Co-authored-by: zachary sun <70642955+sunzhongkai588@users.noreply.github.com>
---
 docs/api/paddle/quantization/ptq_cn.rst | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/docs/api/paddle/quantization/ptq_cn.rst b/docs/api/paddle/quantization/ptq_cn.rst
index b62d9321873..4372f7d66fb 100644
--- a/docs/api/paddle/quantization/ptq_cn.rst
+++ b/docs/api/paddle/quantization/ptq_cn.rst
@@ -5,7 +5,11 @@ PTQ
 .. py:class:: paddle.quantization.PTQ(Quantization)
 将训练后量化应用到模型上。
 
-.. py:function:: quantize(self, model: Layer, inplace=False)
+方法
+::::::::::::
+quantize(model: Layer, inplace=False)
+'''''''''
+
 创建一个用于训练后量化的模型。
 
 量化配置将在模型中传播。它将向模型中插入观察者以收集和计算量化参数。

From 88b8d7fd11be3d240cd805bee6fa57716662e92a Mon Sep 17 00:00:00 2001
From: jiangziyan-693 <150317638+jiangziyan-693@users.noreply.github.com>
Date: Wed, 3 Jan 2024 11:18:46 +0800
Subject: [PATCH 06/16] Update docs/api/paddle/quantization/ptq_cn.rst

Co-authored-by: zachary sun <70642955+sunzhongkai588@users.noreply.github.com>
---
 docs/api/paddle/quantization/ptq_cn.rst | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/docs/api/paddle/quantization/ptq_cn.rst b/docs/api/paddle/quantization/ptq_cn.rst
index 4372f7d66fb..f4e7f2f5866 100644
--- a/docs/api/paddle/quantization/ptq_cn.rst
+++ b/docs/api/paddle/quantization/ptq_cn.rst
@@ -14,8 +14,7 @@ quantize(model: Layer, inplace=False)
 
 量化配置将在模型中传播。它将向模型中插入观察者以收集和计算量化参数。
 
-参数
-:::::::::
+**参数**
  - **model**(Layer) - 待量化的模型。
  - **model**(Layer) - 是否对模型进行原地修改
 

From b579a4d2a3a04aca35a1933828f6263687a46e8b Mon Sep 17 00:00:00 2001
From: jiangziyan-693 <150317638+jiangziyan-693@users.noreply.github.com>
Date: Wed, 3 Jan 2024 11:18:54 +0800
Subject: [PATCH 07/16] Update docs/api/paddle/quantization/ptq_cn.rst

Co-authored-by: zachary sun <70642955+sunzhongkai588@users.noreply.github.com>
---
 docs/api/paddle/quantization/ptq_cn.rst | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/docs/api/paddle/quantization/ptq_cn.rst b/docs/api/paddle/quantization/ptq_cn.rst
index f4e7f2f5866..f4fcc9ae4b7 100644
--- a/docs/api/paddle/quantization/ptq_cn.rst
+++ b/docs/api/paddle/quantization/ptq_cn.rst
@@ -18,8 +18,7 @@ quantize(model: Layer, inplace=False)
  - **model**(Layer) - 待量化的模型。
  - **model**(Layer) - 是否对模型进行原地修改
 
-返回
-:::::::::
+**返回**
 为训练后量化准备好的模型。
 
 代码示例

From 1fe15b582be18f686a12627977339a9bba41c63d Mon Sep 17 00:00:00 2001
From: jiangziyan-693 <150317638+jiangziyan-693@users.noreply.github.com>
Date: Wed, 3 Jan 2024 11:19:01 +0800
Subject: [PATCH 08/16] Update docs/api/paddle/quantization/ptq_cn.rst

Co-authored-by: zachary sun <70642955+sunzhongkai588@users.noreply.github.com>
---
 docs/api/paddle/quantization/ptq_cn.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/api/paddle/quantization/ptq_cn.rst b/docs/api/paddle/quantization/ptq_cn.rst
index f4fcc9ae4b7..42e25a4917e 100644
--- a/docs/api/paddle/quantization/ptq_cn.rst
+++ b/docs/api/paddle/quantization/ptq_cn.rst
@@ -24,7 +24,7 @@ quantize(model: Layer, inplace=False)
 代码示例
 ::::::::::
 
-COPY-FROM: paddle.quantization.ptq
+COPY-FROM: paddle.quantization.PTQ.quantize
 
 .. py:function:: convert(model: paddle.nn.layer.layers.Layer, inplace=False, remain_weight=False)
 将量化模型转换为ONNX格式。转换后的模型可以通过调用 paddle.jit.save 保存为推理模型。

From 2d38632043d0443042a069b0aeb3ca4e813b8ebf Mon Sep 17 00:00:00 2001
From: jiangziyan-693 <150317638+jiangziyan-693@users.noreply.github.com>
Date: Wed, 3 Jan 2024 11:19:06 +0800
Subject: [PATCH 09/16] Update docs/api/paddle/quantization/ptq_cn.rst

Co-authored-by: zachary sun <70642955+sunzhongkai588@users.noreply.github.com>
---
 docs/api/paddle/quantization/ptq_cn.rst | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/docs/api/paddle/quantization/ptq_cn.rst b/docs/api/paddle/quantization/ptq_cn.rst
index 42e25a4917e..9b59df01302 100644
--- a/docs/api/paddle/quantization/ptq_cn.rst
+++ b/docs/api/paddle/quantization/ptq_cn.rst
@@ -26,7 +26,9 @@ quantize(model: Layer, inplace=False)
 
 COPY-FROM: paddle.quantization.PTQ.quantize
 
-.. py:function:: convert(model: paddle.nn.layer.layers.Layer, inplace=False, remain_weight=False)
+convert(model: paddle.nn.layer.layers.Layer, inplace=False, remain_weight=False)
+'''''''''
+
 将量化模型转换为ONNX格式。转换后的模型可以通过调用 paddle.jit.save 保存为推理模型。
 参数 model:类型 model:Layer参数 inplace:类型 inplace:bool,可选参数 remain_weight:类型 remain_weight:bool,可选
 

From 143797df4b2a05bf031503cb84c49f595be457fd Mon Sep 17 00:00:00 2001
From: jiangziyan-693 <150317638+jiangziyan-693@users.noreply.github.com>
Date: Wed, 3 Jan 2024 11:19:11 +0800
Subject: [PATCH 10/16] Update docs/api/paddle/quantization/ptq_cn.rst

Co-authored-by: zachary sun <70642955+sunzhongkai588@users.noreply.github.com>
---
 docs/api/paddle/quantization/ptq_cn.rst | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/docs/api/paddle/quantization/ptq_cn.rst b/docs/api/paddle/quantization/ptq_cn.rst
index 9b59df01302..52dddf61903 100644
--- a/docs/api/paddle/quantization/ptq_cn.rst
+++ b/docs/api/paddle/quantization/ptq_cn.rst
@@ -21,8 +21,7 @@ quantize(model: Layer, inplace=False)
 **返回**
 为训练后量化准备好的模型。
 
-代码示例
-::::::::::
+**代码示例**
 
 COPY-FROM: paddle.quantization.PTQ.quantize
 

From 4472f65bdba962f5f2abce49429c19d38b5e4ac1 Mon Sep 17 00:00:00 2001
From: jiangziyan-693 <150317638+jiangziyan-693@users.noreply.github.com>
Date: Wed, 3 Jan 2024 11:23:00 +0800
Subject: [PATCH 11/16] Update ptq_cn.rst

---
 docs/api/paddle/quantization/ptq_cn.rst | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/api/paddle/quantization/ptq_cn.rst b/docs/api/paddle/quantization/ptq_cn.rst
index 52dddf61903..0a2c9a71e88 100644
--- a/docs/api/paddle/quantization/ptq_cn.rst
+++ b/docs/api/paddle/quantization/ptq_cn.rst
@@ -16,7 +16,7 @@ quantize(model: Layer, inplace=False)
 
 **参数**
  - **model**(Layer) - 待量化的模型。
- - **model**(Layer) - 是否对模型进行原地修改
+ - **inplace**(bool) - 是否对模型进行原地修改
 
 **返回**
 为训练后量化准备好的模型。
@@ -33,4 +33,4 @@ convert(model: paddle.nn.layer.layers.Layer, inplace=False, remain_weight=False)
 
 返回
 ::::::::::
-转换后的模型
\ No newline at end of file
+转换后的模型

From c72accdc75fe19a042ef23ab540e784d5b7ff504 Mon Sep 17 00:00:00 2001
From: jiangziyan-693 <150317638+jiangziyan-693@users.noreply.github.com>
Date: Mon, 15 Jan 2024 15:26:16 +0800
Subject: [PATCH 12/16] Update ptq_cn.rst

---
 docs/api/paddle/quantization/ptq_cn.rst | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/docs/api/paddle/quantization/ptq_cn.rst b/docs/api/paddle/quantization/ptq_cn.rst
index 0a2c9a71e88..503186e7815 100644
--- a/docs/api/paddle/quantization/ptq_cn.rst
+++ b/docs/api/paddle/quantization/ptq_cn.rst
@@ -25,12 +25,14 @@ quantize(model: Layer, inplace=False)
 
 COPY-FROM: paddle.quantization.PTQ.quantize
 
-convert(model: paddle.nn.layer.layers.Layer, inplace=False, remain_weight=False)
+convert(self, model:layer, inplace=False, remain_weight=False):
 '''''''''
 
 将量化模型转换为ONNX格式。转换后的模型可以通过调用 paddle.jit.save 保存为推理模型。
-参数 model:类型 model:Layer参数 inplace:类型 inplace:bool,可选参数 remain_weight:类型 remain_weight:bool,可选
+**参数**
+ - **model**(Layer) - 待量化的模型。
+ - **inplace**(bool, optional) - 是否要对模型进行就地修改,默认为false。
+ - **remain_weight**(bool, optional) - 是否宝石权重为floats,默认为false。
 
-返回
-::::::::::
+**返回**
 转换后的模型
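
PATCH 12 above fills in the parameters of convert(), whose example is again deferred to a COPY-FROM placeholder. A sketch of the calibrate, convert, and save flow that the text describes might look like this, continuing from the quantize() sketch shown after PATCH 03; the calibration loop, the input shape, and the save path are assumptions, not part of the patch.

.. code-block:: python

    import paddle

    # quant_model comes from the earlier quantize() sketch and still contains
    # observers. Run a few representative batches so the observers can collect
    # the statistics needed to compute quantization parameters.
    for _ in range(8):
        calibration_batch = paddle.rand([1, 1, 28, 28])  # LeNet-style input, illustrative only
        quant_model(calibration_batch)

    # Convert the calibrated model to the ONNX-style quantized form described
    # above. remain_weight=False: do not keep the weights as floats, per the
    # parameter list added in PATCH 12.
    infer_model = ptq.convert(quant_model, inplace=False, remain_weight=False)

    # As the documentation notes, the converted model can be saved as an
    # inference model with paddle.jit.save.
    paddle.jit.save(
        infer_model,
        "./quant_infer_model",
        input_spec=[paddle.static.InputSpec(shape=[None, 1, 28, 28], dtype="float32")],
    )
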
From 283b43832b1f2aa091aa58b0da37c488a61b57bc Mon Sep 17 00:00:00 2001
From: jiangziyan-693 <150317638+jiangziyan-693@users.noreply.github.com>
Date: Mon, 15 Jan 2024 15:26:52 +0800
Subject: [PATCH 13/16] Update ptq_cn.rst

---
 docs/api/paddle/quantization/ptq_cn.rst | 1 +
 1 file changed, 1 insertion(+)

diff --git a/docs/api/paddle/quantization/ptq_cn.rst b/docs/api/paddle/quantization/ptq_cn.rst
index 503186e7815..798b6d67575 100644
--- a/docs/api/paddle/quantization/ptq_cn.rst
+++ b/docs/api/paddle/quantization/ptq_cn.rst
@@ -29,6 +29,7 @@ convert(self, model:layer, inplace=False, remain_weight=False):
 '''''''''
 
 将量化模型转换为ONNX格式。转换后的模型可以通过调用 paddle.jit.save 保存为推理模型。
+
 **参数**
  - **model**(Layer) - 待量化的模型。
  - **inplace**(bool, optional) - 是否要对模型进行就地修改,默认为false。

From c1e2092829bf83e1f2db6e0840b210f41da03fff Mon Sep 17 00:00:00 2001
From: jiangziyan-693 <150317638+jiangziyan-693@users.noreply.github.com>
Date: Mon, 15 Jan 2024 15:47:24 +0800
Subject: [PATCH 14/16] Update ptq_cn.rst

---
 docs/api/paddle/quantization/ptq_cn.rst | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/docs/api/paddle/quantization/ptq_cn.rst b/docs/api/paddle/quantization/ptq_cn.rst
index 798b6d67575..52d703824a4 100644
--- a/docs/api/paddle/quantization/ptq_cn.rst
+++ b/docs/api/paddle/quantization/ptq_cn.rst
@@ -37,3 +37,6 @@ convert(self, model:layer, inplace=False, remain_weight=False):
 
 **返回**
 转换后的模型
+
+**代码示例**
+COPY-FROM: paddle.quantization.PTQ.quantize

From 0765b1b91256c99ed43c62f9713b8a359329fbd2 Mon Sep 17 00:00:00 2001
From: jiangziyan-693 <150317638+jiangziyan-693@users.noreply.github.com>
Date: Wed, 17 Jan 2024 11:49:41 +0800
Subject: [PATCH 15/16] Update ptq_cn.rst

---
 docs/api/paddle/quantization/ptq_cn.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/api/paddle/quantization/ptq_cn.rst b/docs/api/paddle/quantization/ptq_cn.rst
index 52d703824a4..e11d8dbdc6e 100644
--- a/docs/api/paddle/quantization/ptq_cn.rst
+++ b/docs/api/paddle/quantization/ptq_cn.rst
@@ -39,4 +39,4 @@ convert(self, model:layer, inplace=False, remain_weight=False):
 转换后的模型
 
 **代码示例**
-COPY-FROM: paddle.quantization.PTQ.quantize
+COPY-FROM: paddle.quantization.PTQ.convert

From 711a66934e9518e3c856d9c02cc03186c78f56a2 Mon Sep 17 00:00:00 2001
From: jiangziyan-693 <150317638+jiangziyan-693@users.noreply.github.com>
Date: Wed, 17 Jan 2024 15:32:28 +0800
Subject: [PATCH 16/16] Update ptq_cn.rst

---
 docs/api/paddle/quantization/ptq_cn.rst | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/docs/api/paddle/quantization/ptq_cn.rst b/docs/api/paddle/quantization/ptq_cn.rst
index e11d8dbdc6e..5ebf1f836ef 100644
--- a/docs/api/paddle/quantization/ptq_cn.rst
+++ b/docs/api/paddle/quantization/ptq_cn.rst
@@ -15,10 +15,12 @@ quantize(model: Layer, inplace=False)
 量化配置将在模型中传播。它将向模型中插入观察者以收集和计算量化参数。
 
 **参数**
+
  - **model**(Layer) - 待量化的模型。
  - **inplace**(bool) - 是否对模型进行原地修改
 
 **返回**
+
 为训练后量化准备好的模型。
 
 **代码示例**
@@ -31,12 +33,15 @@ convert(self, model:layer, inplace=False, remain_weight=False):
 将量化模型转换为ONNX格式。转换后的模型可以通过调用 paddle.jit.save 保存为推理模型。
 
 **参数**
+
  - **model**(Layer) - 待量化的模型。
  - **inplace**(bool, optional) - 是否要对模型进行就地修改,默认为false。
  - **remain_weight**(bool, optional) - 是否宝石权重为floats,默认为false。
 
 **返回**
+
 转换后的模型
 
 **代码示例**
+
 COPY-FROM: paddle.quantization.PTQ.convert