Implement FC layer with helper #4726
Conversation
| return self.program.current_block().append_op(*args, **kwargs)

| @property
| def multiple_input(self):
Suggested change:

| def multiple_input(self, input_name='input'):
|     inputs = self.kwargs.get(input_name, [])
| def bias_attr(self, size):
Not all layers need bias_attr, so it seems unsuitable to put it here.
If a layer does not need a bias, its layer function simply does not invoke bias_attr.
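A rough sketch of that idea (plain Python, no Paddle dependency; `LayerHelper`, `fc_layer`, and `cos_sim_layer` here are illustrative stand-ins, not the PR's actual code): `bias_attr` lives on the helper, but only layers that want a bias ever call it.

```python
class LayerHelper(object):
    """Illustrative stand-in for the PR's helper."""
    def __init__(self, **kwargs):
        self.kwargs = kwargs

    def bias_attr(self, size):
        # Only layers that want a bias ever call this.
        return {'name': None, 'shape': [size]}


def fc_layer(input, size, bias=True):
    helper = LayerHelper(input=input, size=size)
    op = {'type': 'mul', 'input': input, 'size': size}
    if bias:  # skip bias_attr entirely when no bias is wanted
        op['bias'] = helper.bias_attr(size)
    return op


def cos_sim_layer(x, y):
    # cos_sim has no parameters at all, so bias_attr is never invoked.
    return {'type': 'cos_sim', 'x': x, 'y': y}
```

So the helper stays generic while the decision to use a bias remains inside each layer function.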
| return self.program.current_block().append_op(*args, **kwargs)

| @property
| def multiple_input(self):
A layer may have more than one input, so this function should take a str indicating which input we want to get.
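A minimal sketch of that signature (plain-Python stand-in, not the PR's actual code), showing how a name selects among several input groups:

```python
class LayerHelper(object):
    """Illustrative stand-in for the PR's helper."""
    def __init__(self, **kwargs):
        self.kwargs = kwargs

    def multiple_input(self, input_name='input'):
        # Fetch the named group of inputs; coerce a single variable
        # into a one-element list so callers can always iterate.
        inputs = self.kwargs.get(input_name, [])
        if not isinstance(inputs, (list, tuple)):
            inputs = [inputs]
        return list(inputs)


helper = LayerHelper(input=['x1', 'x2'], label='y')
```

With this shape, `helper.multiple_input()` returns the default group and `helper.multiple_input('label')` returns `['y']`.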
| return inputs

| @property
| def input(self):
Same as multiple_input(): input() should also take a name indicating which input to return.
| return inputs[0]

| @property
| def param_attr(self):
Since there can be more than one input, there may also be more than one parameter, and we need some way to distinguish them.
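One way to keep per-input parameters distinguishable (a sketch; `iter_inputs_and_params` and the dict-valued attrs are assumptions for illustration, not the PR's code): pair each input with its own param_attr, broadcasting a single attr to every input.

```python
class LayerHelper(object):
    """Illustrative stand-in for the PR's helper."""
    def __init__(self, **kwargs):
        self.kwargs = kwargs

    def multiple_input(self, input_name='input'):
        inputs = self.kwargs.get(input_name, [])
        return list(inputs) if isinstance(inputs, (list, tuple)) else [inputs]

    def iter_inputs_and_params(self, input_name='input',
                               param_attr_name='param_attr'):
        inputs = self.multiple_input(input_name)
        attrs = self.kwargs.get(param_attr_name)
        if attrs is None:
            attrs = [{}] * len(inputs)      # default attr for every input
        elif isinstance(attrs, dict):
            attrs = [attrs] * len(inputs)   # one attr shared by all inputs
        if len(attrs) != len(inputs):
            raise ValueError('expect %d param_attrs, got %d'
                             % (len(inputs), len(attrs)))
        for ipt, param_attr in zip(inputs, attrs):
            yield ipt, param_attr
```

Each `(input, param_attr)` pair then corresponds to one weight parameter, so the layer code never confuses which attr belongs to which input.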
| yield ipt, param_attr

| @property
| def input_dtype(self):
Same as multiple_input: we may have several inputs, so input_dtype should also take the input name.
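A sketch of input_dtype taking the input name and checking that every input in the named group agrees (stand-in code; inputs are modeled as plain dicts carrying a `'dtype'` key rather than framework Variables):

```python
class LayerHelper(object):
    """Illustrative stand-in for the PR's helper."""
    def __init__(self, **kwargs):
        self.kwargs = kwargs

    def multiple_input(self, input_name='input'):
        inputs = self.kwargs.get(input_name, [])
        return list(inputs) if isinstance(inputs, (list, tuple)) else [inputs]

    def input_dtype(self, input_name='input'):
        # All inputs in one group must share a dtype; return it.
        dtype = None
        for each in self.multiple_input(input_name):
            if dtype is None:
                dtype = each['dtype']
            elif dtype != each['dtype']:
                raise ValueError('all inputs of %s must have the same dtype'
                                 % input_name)
        return dtype
```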
and Rename `Sync` to `Flush`
2c67fea to 03fc36c
Since lots of types can be cast to bool
| input=hidden2, size=10, act='softmax', program=program)
| cost = cross_entropy(input=predict, label=label, program=program)
| avg_cost = mean(x=cost, program=program)
| self.assertIsNotNone(avg_cost)
# backward(avg_cost)
JiayiFeng left a comment:
I think we can merge this PR first, and keep updating the design if any drawback is found during the implementation of layers.
| type='elementwise_sub',
| inputs={'X': [input],
|         'Y': [label]},
| outputs={'Out': [minus_out]})
We need to stop the gradient for label.
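The fix amounts to flagging the label before the op is appended, so the backward pass never produces a gradient for it. A stand-in sketch (the `Variable` class here is illustrative, not the framework's actual class; only the `stop_gradient` flag mirrors the intended mechanism):

```python
class Variable(object):
    """Illustrative stand-in for a framework variable."""
    def __init__(self, name):
        self.name = name
        self.stop_gradient = False


def square_error_cost(input, label):
    label.stop_gradient = True  # gradients must never flow into the label
    minus_out = Variable('minus_out')
    # append_op(type='elementwise_sub',
    #           inputs={'X': [input], 'Y': [label]},
    #           outputs={'Out': [minus_out]}) would follow here
    return minus_out


label = Variable('label')
cost = square_error_cost(Variable('predict'), label)
```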
| 'Label': [label]},
| outputs={'Y': [out]},
| attrs=kwargs)
| return out
Can we automatically generate the Python interface for operators? Unlike fc or square_error_cost, the Python part of cross_entropy does not provide any extra functionality.
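A rough sketch of that idea (`create_op_func` and the dict-shaped proto are assumptions for illustration; a real version would read the protobuf message from OpProtoHolder and call `helper.append_op`): when an op needs no hand-written Python, its wrapper can be produced mechanically from the op's input and output names.

```python
def create_op_func(op_type, proto):
    """Mechanically build a Python layer function from an op's proto."""
    input_names = proto['inputs']
    output_names = proto['outputs']

    def func(**kwargs):
        # Map lowercase keyword arguments onto the op's input slots.
        inputs = {name: kwargs[name.lower()] for name in input_names}
        # Invent output variable names; a real version would create
        # temporary Variables in the current block instead.
        outputs = {name: ['%s.%s.out' % (op_type, name)]
                   for name in output_names}
        return {'type': op_type, 'inputs': inputs, 'outputs': outputs}

    func.__name__ = op_type
    return func


# cross_entropy needs no extra Python logic, so it can be generated:
cross_entropy = create_op_func('cross_entropy',
                               {'inputs': ['X', 'Label'], 'outputs': ['Y']})
```

Ops like fc, which do need extra Python (parameter creation, activation), would keep their hand-written wrappers.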
| __all__ = ['fc_layer', 'data_layer', 'cross_entropy']
| def fc_layer(input,
Should we follow the current naming convention in the v2 API, which is fc() for the fc layer?
| from paddle.v2.framework.framework import OpProtoHolder, Variable
| import re
| __all__ = ['fc_layer', 'data_layer', 'cross_entropy']
I think we should separate different layers into different files.