We have to polish the documents for fluid operators. This issue takes fc as an example to show the documentation specification.
python/paddle/v2/fluid/layers.py#fc
def fc(input,
       size,
       num_flatten_dims=1,
       param_attr=None,
       bias_attr=None,
       act=None,
       name=None,
       main_program=None,
       startup_program=None):
    """Fully Connected Layer. This layer accepts multiple inputs and applies
    a linear transformation to each input. If an activation type is provided,
    the corresponding nonlinear transformation is then applied. For each
    input :math:`X`, the equation is:

    .. math::

        Out = Act(WX + b)

    In the above equation:

    * :math:`X`: Input value, a tensor with rank at least 2.
    * :math:`W`: Weight, a 2-D tensor with shape [M, N].
    * :math:`b`: Bias, a 2-D tensor with shape [M, 1].
    * :math:`Act`: Activation function.
    * :math:`Out`: Output value, with the same shape as :math:`X`.

    All the input variables are passed in as local variables to the
    LayerHelper constructor.

    Args:
        input (Variable|list): The input values, each a tensor with rank at
            least 2.
        size (int): The output size, an integer value.
        num_flatten_dims (int): The number of leading input dimensions that
            are flattened into the first dimension of the matrix multiply.
        param_attr (ParamAttr|list): The parameters/weights of the FC Layer.
        bias_attr (ParamAttr|list): The bias parameter.
        act (str): Activation type.
        name (str): Name/alias of the function.
        main_program (Program): The main program calling this.
        startup_program (Program): The startup program.

    Returns:
        Variable: The tensor variable storing the result of the linear
            transformation and the non-linear activation.

    Raises:
        ValueError: If the rank of the input tensor is less than 2.

    Examples:
        .. code-block:: python

            data = fluid.layers.data(name='data', shape=[32, 32], dtype='float32')
            fc = fluid.layers.fc(input=data, size=1000, act="tanh")
"""And the final html looks like:
How to preview
After refining the documents in layers.py, we need to preview the HTML page. Here are some key tips:
- Go to the build directory.
- Make sure `WITH_DOC=1` and `sphinx==1.5.6`.
- Run `make -j "$(nproc)" && python -m SimpleHTTPServer $PORT_NUM`.
- Assuming PaddlePaddle is compiled on a machine whose IP is `$IP`, visit `$IP:$PORT_NUM/doc/en/html/api/v2/fluid/layers.html` to check the preview.
- Add a link in `doc/api/v2/fluid/layers.rst`.
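The serving step above uses Python 2's `SimpleHTTPServer` module. A small Python 3 equivalent is sketched below; the helper names and the default port are illustrative, not part of PaddlePaddle:

```python
# Hypothetical preview helpers: build the preview URL from the IP and port,
# and serve the current directory over HTTP (Python 3 counterpart of
# `python -m SimpleHTTPServer $PORT_NUM`).
import http.server
import socketserver

def preview_url(ip, port, page="doc/en/html/api/v2/fluid/layers.html"):
    """Return the URL to open once the docs are built and served."""
    return "http://%s:%d/%s" % (ip, port, page)

def serve_build_dir(port=8000):
    """Serve the current directory; run this from the build directory."""
    handler = http.server.SimpleHTTPRequestHandler
    with socketserver.TCPServer(("", port), handler) as httpd:
        print("Preview at", preview_url("<your-ip>", port))
        httpd.serve_forever()
```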
URLs
Docs of Sphinx: http://www.sphinx-doc.org/en/stable/contents.html
How to insert codes: http://www.sphinx-doc.org/en/stable/markup/code.html
How to insert math equations: http://www.sphinx-doc.org/en/stable/ext/math.html
Previous discussion: #6160
Operators that need polishing
Please create an issue first before doing the polishing.
- fc @pkuyym @abhinavarora Polishing the embedding layer and the fc layer documentation #6806
- embedding @qingqing01 @abhinavarora Polishing the embedding layer and the fc layer documentation #6806
- dynamic_lstm @kuke Add python doc for dynamic_lstm #7640
- gru_unit @sidgoyal78
- data @kavyasrinet Polish docs for data layer #6858
- concat @NHZlX @abhinavarora Polish API docs for Fluid Assign and Concat layer #6855
- sums @wanghaoshuang @kavyasrinet Adding documentation for sums layer #6857
- linear_chain_crf @lcy-seso
- assign @abhinavarora Polish API docs for Fluid Assign and Concat layer #6855
- split_lod_tensor @kavyasrinet Addign document for fluid split_lod_tensor and merge_lod_tensor #6859
- merge_lod_tensor @kavyasrinet Addign document for fluid split_lod_tensor and merge_lod_tensor #6859
- cos_sim @lcy-seso
- cross_entropy @kuke Polish the doc of cross_entropy_op #7018
- square_error_cost @sidgoyal78 Add squared error layers doc #6862
- accuracy @wanghaoshuang Fix doc of accuracy function #7091
- sequence_conv
- conv2d @chengduoZH Add conv2d_python doc #6850
- sequence_pool @luotao1 Need add python wrapper for sequence_pool #6777
- pool2d @NHZlX
- batch_norm @sidgoyal78
- beam_search_decode
- lstm
- lod_rank_table @pkuyym Add doc for lod_rank_table #7024
- max_sequence_len @pkuyym Add doc for max_sequence_len #7023
- topk @kavyasrinet Added documentation for topk layer fluid. #6861
- lod_tensor_to_array @kavyasrinet Adding documentation for the operators: lod_tensor_to_array , array_to_lod_tensor, create_array, increment #6807
- array_to_lod_tensor @kavyasrinet Adding documentation for the operators: lod_tensor_to_array , array_to_lod_tensor, create_array, increment #6807
- fill_constant @abhinavarora ebe4425
- fill_constant_batch_size_like @abhinavarora ebe4425
- ones @abhinavarora Adding API docs for ones and zeros methods #7150
- zeros @abhinavarora Adding API docs for ones and zeros methods #7150
- increment @kavyasrinet Adding documentation for the operators: lod_tensor_to_array , array_to_lod_tensor, create_array, increment #6807
- array_write @kavyasrinet Writeup for array write layer #6820
- create_array @kavyasrinet Adding documentation for the operators: lod_tensor_to_array , array_to_lod_tensor, create_array, increment #6807
- less_than @abhinavarora Polishing the documentation of the less than layer #6816
- array_read @kavyasrinet Adding array read layer documentattion #6853
- shrink_memory
- array_length @kavyasrinet Adding documentation for the layer: array_length #6817
- conv2d_transpose @chengduoZH Refine conv2d_transpose layer doc #6920
- seq_expand @pkuyym Need add python wrapper for SeqExpandOp. #6590
- lstm_unit @pkuyym Need add python wrapper for 'lstm_unit' op #6581
- reduce_sum @guoshengCS
- reduce_mean @guoshengCS
- reduce_max @guoshengCS
- reduce_min @guoshengCS