68 changes: 41 additions & 27 deletions doc/design/block.md
@@ -57,14 +57,17 @@ The following C++ program shows how blocks are used with the `if-else` structure
```c++
int x = 10;
int y = 20;
int z = 30;
int out, out1;
bool cond = false;
if (cond) {
  int z = x + y;
  out = z;
  out1 = softmax(z);
} else {
  int d = fc(z);
  out = d;
  out1 = d + 1;
}
```

@@ -73,48 +76,59 @@ An equivalent PaddlePaddle program from the design doc of the [IfElseOp operator]
```python
import paddle as pd

x = var([10, 20])
# a scalar
y = var([1])
z = var(10, 20)
cond = var(false)
# output_num must be set so that the outputs of the true_block and the false_block
# can be merged; the number of outputs of each block must equal `output_num`.
ie = pd.ifelse_builder(output_num=2)

with ie.true_block():
    x_ = x.as_ifelse_input()
    z = operator.add(x_, y)
    ie.set_outputs(z, operator.softmax(z))

with ie.false_block():
    z_ = z.as_ifelse_input()
    d = layer.fc(z_)
    ie.set_outputs(d + 1, operator.softmax(d))

out, out1 = ie(cond)
```

> **Collaborator** (on `x = ie.inputs(true, 0)` in the old version): I have a replacement here: #4313

> **Collaborator** (on `x_ = x.as_ifelse_input()`): If the framework knows whether `x` contains instances, the user does not have to mark `x` as an if-else input. The user can use `x` directly in the block, and the framework can do the splitting automatically.

> **Contributor (author):** Some `x` that has instances might not need to be split. For example:
>
> ```python
> # v is some op's output
> v = some_op()                # shape is [20, 20]
> data = var(shape=[20, 20])
>
> with ie.true_block():
>     # split data
>     x = data.as_ifelse_input()  # shape [1, 20]
>     # v should not be split
>     y = pd.matmul(x.T, v)       # shape [1, 20]
>     y_T = y.T                   # shape [20, 1]
>     ie.set_outputs(y_T)
> ```
>
> `v` has the same batch_size, but does not need to be split. If it were written as
>
> ```python
> with ie.true_block():
>     y = pd.matmul(x, v)  # shape [1, 20] x [1, 20] -- wrong
> ```
>
> the shapes would not match.

> **Collaborator:** If a variable has a fixed size, it is not splittable. If a variable's size depends on the batch size, it must be split, because that means it contains data for each instance.

> **Contributor (author):** Oh, I see, I will change this later.

In both examples, the left branch computes `x + y` and `softmax(x + y)`, while the right branch computes `fc(z)` and a second output derived from it.

A difference is that variables in the C++ program contain scalar values, whereas those in the PaddlePaddle programs are mini-batches of instances. Marking a variable with `as_ifelse_input()` splits it by `cond`: the true block sees the instances of `x` that correspond to true values in `cond`, while the false block sees the instances corresponding to false values.
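The following is a minimal sketch, in plain Python with NumPy, of the split-and-merge semantics described above; the helpers `split_by_cond` and `merge_by_cond` are hypothetical names for illustration, not part of the PaddlePaddle API. It also shows why each branch must produce the same number of outputs (`output_num`): the merged result needs one full-batch tensor per output.

```python
import numpy as np

def split_by_cond(batch, cond):
    # Gather the instances whose cond value is true / false.
    return batch[cond], batch[~cond]

def merge_by_cond(true_out, false_out, cond):
    # Scatter the two branches' outputs back into full-batch order.
    out = np.empty((cond.shape[0],) + true_out.shape[1:], dtype=true_out.dtype)
    out[cond] = true_out
    out[~cond] = false_out
    return out

# A mini-batch of 4 instances with 3 features each, and a per-instance condition.
x = np.arange(12, dtype=np.float64).reshape(4, 3)
cond = np.array([True, False, True, False])

x_true, x_false = split_by_cond(x, cond)
true_out = x_true + 1.0       # stand-in for the true block's computation
false_out = x_false * 10.0    # stand-in for the false block's computation

out = merge_by_cond(true_out, false_out, cond)
print(out)
```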


### Blocks with `for` and `RNNOp`

The following RNN model is from the [RNN design doc](./rnn.md):

```python
# x is a LoDTensor and stores sequences
x = var(sequence=([10, 20, 30]))
m = var(0)
W = var()
U = var()

rnn = rnn_builder()
with rnn.stepnet():
    # mark the variables that need to be segmented into time steps.
    x_ = x.as_step_input()
    # mark the variable that is used as the RNN state.
    h_ = h.as_step_memory(init=m)
    fc_out = pd.matmul(W, x_)
    hidden_out = pd.matmul(U, h_.pre(nstep=1))
    sum = pd.add_two(fc_out, hidden_out)
    act = pd.sigmoid(sum)
    h_.update(act)  # update the memory with act
    net.set_outputs(act, hidden_out)  # two outputs

o1, o2 = rnn()
print o1, o2
```

> **Collaborator** (on `h_ = h.as_step_memory(init=m)`): `h` is not defined.
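For intuition, here is a minimal sketch in plain Python/NumPy of the computation the step net above describes for a single sequence; `run_rnn` and its argument layout are illustrative assumptions, not the PaddlePaddle API. `as_step_input` corresponds to iterating over the time steps of `x`, and `as_step_memory` to carrying the state from one step to the next.

```python
import numpy as np

def run_rnn(xs, m, W, U):
    """Unroll the step net over the time steps of one sequence.

    xs: array of shape [T, D], one time step per row (what as_step_input segments).
    m:  initial memory (what as_step_memory(init=m) supplies to h_.pre at t == 0).
    Returns the per-step activations and hidden projections, the two step outputs.
    """
    h = m
    acts, hiddens = [], []
    for x_t in xs:                       # one iteration per time step
        fc_out = W @ x_t                 # pd.matmul(W, x_)
        hidden_out = U @ h               # pd.matmul(U, h_.pre(nstep=1))
        s = fc_out + hidden_out          # pd.add_two(fc_out, hidden_out)
        act = 1.0 / (1.0 + np.exp(-s))   # pd.sigmoid(sum)
        h = act                          # h_.update(act)
        acts.append(act)
        hiddens.append(hidden_out)
    return np.stack(acts), np.stack(hiddens)

# Example: a sequence of 3 steps with 4 features and a 4-d state.
T, D = 3, 4
xs = np.random.randn(T, D)
m = np.zeros(D)
W = np.random.randn(D, D)
U = np.random.randn(D, D)
o1, o2 = run_rnn(xs, m, W, U)
```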

This RNN model has an equivalent C++ program, as follows.