Merged
2 changes: 1 addition & 1 deletion python/paddle/base/framework.py
@@ -6963,7 +6963,7 @@ def block(self, index):
Get the :code:`index` :ref:`api_guide_Block_en` of this Program

Args:
- index (int) - The index of :ref:`api_guide_Block_en` to get
+ index (int): The index of :ref:`api_guide_Block_en` to get

Returns:
:ref:`api_guide_Block_en`: The :code:`index` block
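The whole PR applies one convention: Google-style docstrings write `Args:` entries as `name (type): description`, with a colon after the parenthesized type rather than a hyphen, so that tools such as Sphinx's napoleon extension can split the name/type from the description. A minimal sketch of the check (the regex is illustrative, not part of Paddle or Sphinx):

```python
import re

# Google-style Args entries use "name (type): description".
# A hyphen after the type, as in "index (int) - ...", breaks parsers
# that split the entry on the first colon after the type.
ARG_LINE = re.compile(
    r"^\s*\w+\s*\((?:[\w.]+(?:\s*\|\s*[\w.]+)*(?:,\s*optional)?)\)\s*:\s+\S"
)

def is_valid_arg_line(line: str) -> bool:
    """Return True if a docstring Args line follows the colon convention."""
    return bool(ARG_LINE.match(line))

print(is_valid_arg_line("    index (int): The index of the block to get"))   # True
print(is_valid_arg_line("    index (int) - The index of the block to get"))  # False
```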
2 changes: 1 addition & 1 deletion python/paddle/nn/functional/vision.py
@@ -36,7 +36,7 @@ def affine_grid(theta, out_shape, align_corners=True, name=None):
output feature map.

Args:
- theta (Tensor) - A tensor with shape [N, 2, 3] or [N, 3, 4]. It contains a batch of affine transform parameters.
+ theta (Tensor): A tensor with shape [N, 2, 3] or [N, 3, 4]. It contains a batch of affine transform parameters.
The data type can be float32 or float64.
out_shape (Tensor | list | tuple): Type can be a 1-D Tensor, list, or tuple. It is used to represent the shape of the output in an affine transformation, in the format ``[N, C, H, W]`` or ``[N, C, D, H, W]``.
When the format is ``[N, C, H, W]``, it represents the batch size, number of channels, height and width. When the format is ``[N, C, D, H, W]``, it represents the batch size, number of channels, depth, height and width.
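As context for the `theta` parameter documented above: `affine_grid` builds a sampling grid by applying each batch element's 2x3 matrix to normalized coordinates in [-1, 1]. A rough numpy sketch of the 4-D case (shapes only; Paddle's kernel also handles `align_corners` variants and the 5-D `[N, 3, 4]` case):

```python
import numpy as np

def affine_grid_2d(theta: np.ndarray, out_shape) -> np.ndarray:
    """Toy 2-D affine grid: theta has shape [N, 2, 3],
    out_shape is (N, C, H, W); returns a grid of shape [N, H, W, 2]."""
    n, _, h, w = out_shape
    # Normalized coordinates in [-1, 1] (align_corners=True style).
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w), indexing="ij")
    base = np.stack([xs, ys, np.ones_like(xs)], axis=-1)  # [H, W, 3] homogeneous coords
    # Apply each batch element's 2x3 transform to every coordinate.
    return np.einsum("nij,hwj->nhwi", theta, base)        # [N, H, W, 2]

# Identity transform for a batch of 2: the grid is just the normalized coords.
identity = np.tile(np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]), (2, 1, 1))
grid = affine_grid_2d(identity, (2, 3, 4, 5))
print(grid.shape)  # (2, 4, 5, 2)
```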
8 changes: 4 additions & 4 deletions python/paddle/nn/quant/stub.py
@@ -26,8 +26,8 @@ class Stub(Layer):
stub will observe or quantize the inputs of the functional API.

Args:
- observer(QuanterFactory) - The configured information of the observer to be inserted.
- It will use a global configuration to create the observers if the 'observer' is none.
+ observer(QuanterFactory): The configured information of the observer to be inserted.
+ It will use a global configuration to create the observers if the 'observer' is none.

Examples:
.. code-block:: python
@@ -81,9 +81,9 @@ class QuanterStub(Layer):
The user should not use this class directly.

Args:
- layer(paddle.nn.Layer) - The stub layer with an observer configure factory. If the observer
+ layer(paddle.nn.Layer): The stub layer with an observer configure factory. If the observer
of the stub layer is none, it will use 'q_config' to create an observer instance.
- q_config(QuantConfig) - The quantization configuration for the current stub layer.
+ q_config(QuantConfig): The quantization configuration for the current stub layer.
"""

def __init__(self, layer: Stub, q_config):
2 changes: 1 addition & 1 deletion python/paddle/quantization/factory.py
@@ -78,7 +78,7 @@ def quanter(class_name):
Annotation to declare a factory class for quanter.

Args:
- class_name (str) - The name of factory class to be declared.
+ class_name (str): The name of factory class to be declared.

Examples:
.. code-block:: python
4 changes: 2 additions & 2 deletions python/paddle/quantization/ptq.py
@@ -47,8 +47,8 @@ def quantize(self, model: Layer, inplace=False):
quantization parameters.

Args:
- model(Layer) - The model to be quantized.
- inplace(bool) - Whether to modify the model in-place.
+ model(Layer): The model to be quantized.
+ inplace(bool): Whether to modify the model in-place.

Return: The prepared model for post-training quantization.

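For context on what PTQ calibration produces: observers typically record activation statistics (for example the absolute maximum) and derive a scale that maps floats to int8. A minimal numpy sketch of symmetric absmax quantization (the standard linear-quantization formula, not Paddle's exact implementation):

```python
import numpy as np

def absmax_quantize(x: np.ndarray, bits: int = 8):
    """Symmetric linear quantization with the scale taken from the observed abs-max."""
    qmax = 2 ** (bits - 1) - 1                           # 127 for int8
    scale = np.abs(x).max() / qmax
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

x = np.array([-1.0, -0.5, 0.0, 0.25, 1.0], dtype=np.float32)
q, scale = absmax_quantize(x)
dequant = q.astype(np.float32) * scale
print(q)        # int8 codes
print(dequant)  # close to x, within half a quantization step
```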
6 changes: 3 additions & 3 deletions python/paddle/quantization/qat.py
@@ -24,7 +24,7 @@ class QAT(Quantization):
r"""
Tools used to prepare model for quantization-aware training.
Args:
- config(QuantConfig) - Quantization configuration
+ config(QuantConfig): Quantization configuration

Examples:
.. code-block:: python
@@ -47,8 +47,8 @@ def quantize(self, model: Layer, inplace=False):
And it will insert fake quanters into the model to simulate the quantization.

Args:
- model(Layer) - The model to be quantized.
- inplace(bool) - Whether to modify the model in-place.
+ model(Layer): The model to be quantized.
+ inplace(bool): Whether to modify the model in-place.

Return: The prepared model for quantization-aware training.

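The "fake quanters" mentioned in the docstring above simulate quantization during training: values are quantized and immediately dequantized, so the forward pass sees int8 rounding error while everything stays in floating point. A numpy sketch of the idea (generic fake quantization, not Paddle's kernel):

```python
import numpy as np

def fake_quantize(x: np.ndarray, scale: float, bits: int = 8) -> np.ndarray:
    """Quantize then dequantize: output stays float but carries int8 rounding error."""
    qmax = 2 ** (bits - 1) - 1
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale  # back to float: this is what the next layer sees during QAT

x = np.array([0.1, 0.333, -0.707], dtype=np.float32)
y = fake_quantize(x, scale=1.0 / 127)
print(np.max(np.abs(y - x)))  # rounding error, bounded by scale / 2
```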
8 changes: 4 additions & 4 deletions python/paddle/quantization/quantize.py
@@ -29,7 +29,7 @@ class Quantization(metaclass=abc.ABCMeta):
r"""
Abstract class used to prepares a copy of the model for quantization calibration or quantization-aware training.
Args:
- config(QuantConfig) - Quantization configuration
+ config(QuantConfig): Quantization configuration
"""

def __init__(self, config: QuantConfig):
@@ -44,9 +44,9 @@ def convert(self, model: Layer, inplace=False, remain_weight=False):
r"""Convert the quantization model to ONNX style. And the converted
model can be saved as inference model by calling paddle.jit.save.
Args:
- model(Layer) - The quantized model to be converted.
- inplace(bool, optional) - Whether to modify the model in-place, default is False.
- remain_weight(bool, optional) - Whether to remain weights in floats, default is False.
+ model(Layer): The quantized model to be converted.
+ inplace(bool, optional): Whether to modify the model in-place, default is False.
+ remain_weight(bool, optional): Whether to remain weights in floats, default is False.

Return: The converted model

6 changes: 3 additions & 3 deletions python/paddle/quantization/wrapper.py
@@ -22,9 +22,9 @@ class ObserveWrapper(Layer):
Put an observer layer and an observed layer into a wrapping layer.
It is used to insert layers into the model for QAT or PTQ.
Args:
- observer(BaseQuanter) - Observer layer
- observed(Layer) - Observed layer
- observe_input(bool) - If it is true the observer layer will be called before observed layer.
+ observer(BaseQuanter): Observer layer
+ observed(Layer): Observed layer
+ observe_input(bool): If it is true the observer layer will be called before observed layer.
If it is false the observed layer will be called before observer layer. Default: True.
"""

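The `observe_input` flag documented above only controls call order. A framework-agnostic sketch of the wrapper's forward logic (plain Python callables standing in for Paddle layers; the names are illustrative):

```python
class ObserveWrapper:
    """Call an observer and an observed callable in a configurable order."""

    def __init__(self, observer, observed, observe_input: bool = True):
        self.observer = observer
        self.observed = observed
        self.observe_input = observe_input

    def __call__(self, x):
        if self.observe_input:
            x = self.observer(x)   # observe/quantize the input first
            return self.observed(x)
        y = self.observed(x)       # run the layer first
        return self.observer(y)    # then observe its output

# Record the call order to show what the flag changes.
calls = []
observer = lambda x: (calls.append("observer"), x)[1]      # identity, logs itself
layer = lambda x: (calls.append("layer"), x * 2)[1]        # toy "observed" layer

ObserveWrapper(observer, layer, observe_input=True)(3)
print(calls)  # ['observer', 'layer']
```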