
Commit be2d958

Merge remote-tracking branch 'origin/develop' into adamax

Author: Abhinav Arora (committed)
2 parents: be99868 + 42e7fe0


70 files changed: +3375 −647 lines

doc/api/v1/index_cn.rst

Lines changed: 1 addition & 1 deletion
@@ -21,7 +21,7 @@ Model Config API
    trainer_config_helpers/optimizers.rst
    trainer_config_helpers/data_sources.rst
    trainer_config_helpers/layers.rst
-   trainer_config_helpers/activations.rst
+   trainer_config_helpers/activations.rst
    trainer_config_helpers/poolings.rst
    trainer_config_helpers/networks.rst
    trainer_config_helpers/evaluators.rst

doc/api/v2/config/layer.rst

Lines changed: 5 additions & 0 deletions
@@ -345,6 +345,11 @@ clip
 .. autoclass:: paddle.v2.layer.clip
     :noindex:

+resize
+------
+.. autoclass:: paddle.v2.layer.resize
+    :noindex:
+
 slope_intercept
 ---------------
 .. autoclass:: paddle.v2.layer.slope_intercept

doc/design/python_api.md

Lines changed: 216 additions & 0 deletions
@@ -0,0 +1,216 @@
# Design Doc: Python API

Due to the refactoring of the PaddlePaddle core, we need Python classes to construct the corresponding protobuf messages that describe a DL program.

| Python classes | Protobuf messages |
| --- | --- |
| Program | ProgramDesc |
| Block | BlockDesc |
| Operator | OpDesc |
| Variable | VarDesc |

Please be aware that these Python classes need to maintain some construction-time information, which is not part of the protobuf messages.

## Core Concepts

### Program

A `ProgramDesc` describes a [DL program](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/program.md), which is composed of an array of `BlockDesc`s. A `BlockDesc` refers to its parent block by its index in the array. For example, operators in the step block of an RNN operator need to be able to access variables in its ancestor blocks.

Whenever we create a block, we need to set its parent block to the current block, so the Python class `Program` needs to maintain a data member `current_block_idx` that records the index of the current block.

```python
class Program(object):
    def __init__(self):
        self.proto = core.NewProgram()  # a C++ ProgramDesc pointer.
        self.blocks = [Block(self, -1)]  # create the global block
        self.current_block_idx = 0  # initialized to the global block

    def global_block(self):
        return self.blocks[0]

    def current_block(self):
        return self.blocks[self.current_block_idx]

    def rollback(self):
        self.current_block_idx = self.current_block().parent_idx

    def create_block(self):
        new_block_idx = len(self.blocks)
        self.blocks.append(Block(self, self.current_block_idx))
        self.current_block_idx = new_block_idx
        return self.current_block()
```

`Program` is an accessor to the protobuf message `ProgramDesc`, which is created in C++ space because the `InferShape` function is in C++. `InferShape` manipulates `VarDesc` messages, which are in turn members of `BlockDesc`, which is a member of `ProgramDesc`.

`Program` creates the first block as the global block in its constructor. All parameters and their initializer operators are in the global block.
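
As a minimal usage sketch of these accessors (assuming the `core` bindings above; the `assert`s are illustrative, not part of the design):

```python
program = Program()

parent = program.current_block()   # the global block, index 0
step = program.create_block()      # enter a new block whose parent is the global block
assert step.parent_idx == 0

program.rollback()                 # leave the step block again
assert program.current_block() is parent
```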

### Block

A [Block](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/block.md) includes

1. a map from variable names to an instance of the Python `Variable` class, and
1. a list of `Operator` instances.

```python
class Block(object):
    def __init__(self, program, parent_idx):
        self.proto = core.NewBlock(program.proto)
        self.program = program
        self.vars = {}  # map<string, Variable>
        self.ops = []   # vector<Operator>
        self.parent_idx = parent_idx

    def create_var(self, ...):
        return Variable(self, ...)

    def _create_global_var(self, ...):
        return self.program.global_block().create_var(...)

    def create_parameter(self, name, ...):
        # Parameter is a subclass of Variable. See the Parameter section for details.
        self.vars[name] = Parameter(self._create_global_var(...), ...)
        return self.vars[name]

    def append_operator(self, ...):
        self.ops.append(Operator(self, ...))

    def prepend_operator(self, ...):  # Parameter's ctor prepends initialize operators.
        self.ops.insert(0, Operator(self, ...))
```

`create_parameter` is necessary because parameters are global variables, defined in the global block, yet they can be created from sub-blocks, e.g., by an FC layer in the step block of an RNN operator, as sketched below.

`prepend_operator` is necessary because the constructor of `Parameter` needs to create the initialize (or load) operator of the parameter and would like to put it in the *preamble* of the global block.
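
A hypothetical sketch of that flow, assuming the `Program`/`Block` classes above (`"fc.w"` is an invented name):

```python
step_block = program.create_block()      # e.g., the step block of an RNN
w = step_block.create_parameter("fc.w")  # visible from the step block ...
assert "fc.w" in step_block.vars         # ... but backed by a variable that
                                         # _create_global_var() put in the global block
```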

### Operator

The `Operator` class fills in the `OpDesc` message and calls the C++ function `InferShape` to infer the output shapes from the input shapes.

```python
class Operator(object):
    def __init__(self,
                 block,     # Block
                 type,      # string
                 inputs,    # dict<string, Variable>
                 outputs,   # dict<string, Variable>
                 attrs):    # dict<string, Any>
        self.proto = core.NewOpDesc(block.proto, type, inputs, outputs, attrs)
        core.infer_shape(self.proto, inputs, outputs)

    def type(self):
        return self.proto.type()
```

`Operator` creates the `OpDesc` message in C++ space, so that it can call the `InferShape` function, which is in C++.
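
For example, a hypothetical construction of a `mean` operator (the operator type and the input/output names are assumptions for illustration):

```python
out = block.create_var(...)
op = Operator(block, "mean", inputs={"X": x}, outputs={"Out": out}, attrs={})
# core.infer_shape has now filled in the shape of out's VarDesc.
```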

### Variable

Operators take `Variable`s as their inputs and outputs.

```python
class Variable(object):
    # NOTE: the required parameter `shape` precedes the keyword arguments
    # so that the signature is valid Python.
    def __init__(self,
                 shape,            # tuple
                 block=None,       # Block
                 name=None,        # string
                 dtype="float32",  # string
                 lod_level=None):  # int
        if name is None:
            name = unique_name_generator()
        self.name = name
        self.block = block
        self.proto = core.NewVarDesc(block.proto, name, shape, lod_level)
        self.writer = None
```

Please be aware of `self.writer`, which tracks the operator that creates the variable. It is possible for more than one operator to write to a variable, but in Python space each write to a variable is represented by its own `Variable` instance. This is guaranteed by the fact that **`core.NewVarDesc` must NOT create a new `VarDesc` message if its name already exists in the specified block**.
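
A small sketch of that dedup rule, assuming the `Variable` class above (the name and shape are invented):

```python
x1 = Variable((32, 784), block=block, name="x")
x2 = Variable((32, 784), block=block, name="x")  # no new VarDesc is created
assert x1 is not x2  # two Python instances, one per write ...
# ... but both wrap the same underlying VarDesc message in the block.
```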

### Parameter

A parameter is a global variable with an initializer (or load) operator.

```python
class Parameter(Variable):
    def __init__(self,
                 shape,                     # tuple
                 block=None,                # Block
                 name=None,                 # string
                 dtype="float32",           # string
                 lod_level=None,            # int
                 trainable=True,            # bool
                 initialize_op_attrs=None,
                 optimize_op_attrs=None):
        super(Parameter, self).__init__(shape, block, name, dtype, lod_level)
        self.trainable = trainable
        self.optimize_op_attrs = optimize_op_attrs
        block.prepend_operator(initialize_op_attrs['type'],  # string
                               None,                         # no inputs
                               self,                         # the output is the parameter
                               initialize_op_attrs)
```

When a user creates a parameter, s/he can call

```python
program.create_parameter(
    ...,
    init_attr={
        "type": "uniform_random",
        "min": -1.0,
        "max": 1.0,
    })
```

In the above example, `init_attr.type` names an initialize operator. It can also name the load operator:

```python
init_attr={
    "type": "load",
    "filename": "something.numpy",
}
```

`optimize_op_attrs` is not in the `VarDesc` message, but is kept in the Python instance, as it will be used in Python space when creating the optimize operator's `OpDesc`, and will then be in that `OpDesc` message.

## Layer Functions

A layer is a Python function that creates some operators and variables. Layers simplify the work of application programmers.

### Data Layer

```python
def data_layer(name, type, column_name):
    block = the_current_program.global_block()
    var = block.create_global_var(
        name=name,
        shape=[None] + type.dims(),
        dtype=type.dtype)
    block.prepend_operator(type="Feed",
                           inputs=None,
                           outputs=[var],
                           attrs={"column_name": column_name})
    return var
```

The input to the feed operator is a special variable in the global scope, which is the output of [Python readers](https://github.com/PaddlePaddle/Paddle/blob/develop/doc/design/reader/README.md).

### FC Layer

```python
def fc_layer(input, size, ...):
    block = the_current_program.current_block()
    w = block.create_parameter(...)
    b = block.create_parameter(...)
    out = block.create_var()
    op = block.append_operator("FC", X=input, W=w, b=b, out=out)
    out.writer = op
    return out
```
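
To see how these layer functions compose, here is a hypothetical end-to-end sketch; `dense_vector` and the layer arguments are assumptions for illustration:

```python
img = data_layer(name="pixel", type=dense_vector(784), column_name="pixel")
hidden = fc_layer(img, size=100)
prediction = fc_layer(hidden, size=10)
# Each call appended OpDescs and VarDescs to the current block;
# prediction.writer is the FC operator that produced it.
```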

doc/design/register_grad_op.md

Lines changed: 67 additions & 0 deletions
@@ -0,0 +1,67 @@
# Design Doc: Gradient Operators Registration

## The Problem Posed

In our current operator registration mechanism, for each operator, the programmer should register a *gradient operator creator* function, which takes a C++ operator instance and returns the corresponding gradient operator instance.

However, as we decided to separate the *compilation* and *execution* of DL models, we need to reshape the creator to take a protobuf `OpDesc` message and return a corresponding message.

More than that, the new registration mechanism needs to support the fact that an operator's gradient computation might be a composition of operators.

## Current Implementation

`OpInfo`s are stored in an associative map whose key is the operator type. The `grad_op_type_` field indicates the type of the associated gradient operator. An operator can create its gradient operator by looking up the `OpInfo::creator_` of that gradient type. The pseudo code is

```cpp
struct OpInfo {
  std::function<OperatorBase*(...)> creator_;
  std::string grad_op_type_;
  ...
};

map<string, OpInfo> OpInfoMap;

OperatorBase* CreateGradientOperator(const OperatorBase& op) {
  return OpInfoMap.at(op.Type()).creator_(...);
}
```

## Proposed Solution

The mapping relationship between an operator and its gradient operators is a function. The interface of that function is:

```cpp
// (OpDesc) --> vector<OpDesc>
using GradOpDescMaker = std::function<std::vector<OpDesc>(const OpDesc&)>;
```

The function takes the `OpDesc` of the forward operator and returns one or more gradient operator descriptions.

The `GradOpDescMaker` will be registered in `OpInfo`, replacing the `grad_op_type_` field. The `OpInfo` should be

```cpp
struct OpInfo {
  GradOpDescMaker grad_op_maker_;
  ...
};
```

`grad_op_maker_` is `nullptr` if the operator does not have any associated gradient operators.

We should change the registration macros at the same time. In the current solution, there is no difference between forward operators and backward operators, so `REGISTER_OP` just registers one operator. If `REGISTER_OPERATOR` takes both an `OpProtoAndCheckerMaker` and a `GradOpDescMaker`, we can simply list them in the same macro, which can be implemented with a macro taking `__VA_ARGS__`.

The user interface should be

```cpp
vector<OpDesc> MinusOpGradMaker(OpDesc) {...}
REGISTER_OPERATOR(minus, MinusOp, MinusOpProtoAndCheckerMaker, MinusOpGradMaker);
// Developers can still manually implement gradient operators.
REGISTER_OPERATOR(minus_grad, MinusGradOp);
```

The interface of the current `REGISTER_OP` macro should not be changed. Internally, `REGISTER_OP` will invoke `REGISTER_OPERATOR` twice and generate a `GradOpDescMaker` automatically.

```cpp
REGISTER_OP(minus, MinusOp, MinusOpProtoAndCheckerMaker, minus_grad, MinusGradOp);
```

doc/howto/dev/new_op_cn.md

Lines changed: 1 addition & 1 deletion
@@ -206,7 +206,7 @@ MulOp(const std::string &type, const framework::VariableNameMap &inputs,

 - `REGISTER_OP`: registers the `ops::MulOp` class under the type name `mul`, with `ops::MulOpMaker` as the class's `ProtoMaker`, and also registers `ops::MulOpGrad` under the type name `mul_grad`.
 - `REGISTER_OP_WITHOUT_GRADIENT`: registers an operator that has no backward counterpart.
-- `REGISTER_OP_CPU_KERNEL`: registers the `ops::MulKernel` class, specializing its template parameters to `paddle::platform::CPUPlace` and `float`; likewise, registers the `ops::MulKernel` class.
+- `REGISTER_OP_CPU_KERNEL`: registers the `ops::MulKernel` class, specializing its template parameters to `paddle::platform::CPUPlace` and `float`; likewise, registers the `ops::MulGradKernel` class.

 - Register the GPU Kernel in the `.cu` file.

doc/howto/dev/new_op_en.md

Lines changed: 1 addition & 1 deletion
@@ -205,7 +205,7 @@ The definition of its corresponding backward operator, if applicable, is similar

 - `REGISTER_OP` registers the `ops::MulOp` class, type named `mul`, its type `ProtoMaker` is `ops::MulOpMaker`, registering `ops::MulOpGrad` as `mul_grad`.
 - `REGISTER_OP_WITHOUT_GRADIENT` registers an operator without gradient.
-- `REGISTER_OP_CPU_KERNEL` registers `ops::MulKernel` class and specialized template types `paddle::platform::CPUPlace` and `float`, which also registers `ops::MulKernel`.
+- `REGISTER_OP_CPU_KERNEL` registers `ops::MulKernel` class and specialized template types `paddle::platform::CPUPlace` and `float`, which also registers `ops::MulGradKernel`.

 - Registering GPU Kernel in `.cu` files

paddle/framework/CMakeLists.txt

Lines changed: 5 additions & 2 deletions
@@ -22,14 +22,14 @@ cc_library(attribute SRCS attribute.cc DEPS framework_proto)
 cc_library(proto_desc SRCS var_desc.cc op_desc.cc block_desc.cc program_desc.cc DEPS attribute)
 cc_library(op_proto_maker SRCS op_proto_maker.cc DEPS framework_proto attribute)
 cc_test(op_proto_maker_test SRCS op_proto_maker_test.cc DEPS op_proto_maker)
-cc_library(op_info SRCS op_info.cc DEPS attribute framework_proto)
+cc_library(op_info SRCS op_info.cc DEPS attribute framework_proto proto_desc)
 cc_library(operator SRCS operator.cc DEPS op_info device_context tensor scope)
 cc_test(operator_test SRCS operator_test.cc DEPS operator op_registry)

 cc_library(grad_op_builder SRCS grad_op_builder.cc DEPS operator proto_desc)
 cc_library(op_registry SRCS op_registry.cc DEPS grad_op_builder op_proto_maker op_info)
 cc_test(op_registry_test SRCS op_registry_test.cc DEPS op_registry)
-cc_test(grad_op_builder_test SRCS grad_op_builder_test.cc DEPS grad_op_builder op_registry add_op)
+cc_test(grad_op_builder_test SRCS grad_op_builder_test.cc DEPS grad_op_builder op_registry sum_op)

 py_proto_compile(framework_py_proto SRCS framework.proto)
 # Generate an empty __init__.py to make framework_py_proto as a valid python module.
@@ -43,3 +43,6 @@ add_custom_command(TARGET framework_py_proto POST_BUILD

 cc_library(backward SRCS backward.cc DEPS net_op)
 cc_test(backward_test SRCS backward_test.cc DEPS backward recurrent_op device_context)
+
+cc_library(tensor_array SRCS tensor_array.cc DEPS lod_tensor)
+cc_test(tensor_array_test SRCS tensor_array_test.cc DEPS tensor_array place)

paddle/framework/attribute.h

Lines changed: 1 addition & 9 deletions
@@ -21,20 +21,12 @@ limitations under the License. */
 #include <vector>

 #include "paddle/framework/framework.pb.h"
+#include "paddle/framework/type_defs.h"
 #include "paddle/platform/enforce.h"
-#include "paddle/platform/variant.h"

 namespace paddle {
 namespace framework {

-// The order should be as same as framework.proto
-typedef boost::variant<boost::blank, int, float, std::string, std::vector<int>,
-                       std::vector<float>, std::vector<std::string>, bool,
-                       std::vector<bool>, BlockDesc*>
-    Attribute;
-
-typedef std::unordered_map<std::string, Attribute> AttributeMap;
-
 ProgramDesc& GetProgramDesc();

 template <typename T>

paddle/framework/block_desc.h

Lines changed: 3 additions & 3 deletions
@@ -19,6 +19,7 @@ limitations under the License. */
 #include <vector>
 #include "paddle/framework/op_desc.h"
 #include "paddle/framework/var_desc.h"
+#include "paddle/platform/macros.h"

 namespace paddle {
 namespace framework {
@@ -34,9 +35,6 @@ class BlockDescBind {
   BlockDescBind(ProgramDescBind *prog, BlockDesc *desc)
       : prog_(prog), desc_(desc), need_update_(false) {}

-  BlockDescBind(const BlockDescBind &o) = delete;
-  BlockDescBind &operator=(const BlockDescBind &o) = delete;
-
   int32_t ID() const { return desc_->idx(); }

   int32_t Parent() const { return desc_->parent_idx(); }
@@ -66,6 +64,8 @@ class BlockDescBind {

   std::deque<std::unique_ptr<OpDescBind>> ops_;
   std::unordered_map<std::string, std::unique_ptr<VarDescBind>> vars_;
+
+  DISABLE_COPY_AND_ASSIGN(BlockDescBind);
 };
 }  // namespace framework
 }  // namespace paddle
