Local response normalize. #4426
paddle/operators/lrn_op.cc
Outdated
 public:
  LRNOpMaker(framework::OpProto *proto, framework::OpAttrChecker *op_checker)
      : OpProtoAndCheckerMaker(proto, op_checker) {
    AddInput("X", "The first input of lrn op");
Personally, I think "the first/second input of X" alone is not a good comment.
#4314
fc op has a good comment style. https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/fc_op.cc#L123
The comments still do not follow the comment style in our doc: https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/name_convention.md Please take fc_op as an example.
paddle/operators/lrn_op.cc
Outdated
PADDLE_ENFORCE_EQ(x_dim.size(), 4, "Input(X)'rank of LRNOp should be 4.");

ctx.Output<Tensor>("Out")->Resize(x_dim);
ctx.Output<Tensor>("mid_out")->Resize(x_dim);
Need to update to the latest code.
paddle/operators/lrn_op.cc
Outdated
| )DOC"); | ||
|
|
||
| AddOutput("Out", "(Tensor)The output of lrn op"); | ||
| AddOutput("mid_out", R"Doc( |
This does not follow the name convention.
    : OpProtoAndCheckerMaker(proto, op_checker) {
  AddInput("X", R"DOC(
(Tensor)Input of lrn op.It must be a 4 rank tenor with NCHW format.
)DOC");
(Tensor) The input of LRN operator. It must be a 4D tensor with NCHW format.
paddle/operators/lrn_op.cc
Outdated
(Tensor)Input of lrn op.It must be a 4 rank tenor with NCHW format.
)DOC");

AddOutput("Out", "(Tensor)The output of lrn op");
(Tensor) The output of LRN operator, which is also a 4D tensor with NCHW format.
paddle/operators/lrn_op.cc
Outdated
auto x_dims = ctx.Input<Tensor>("X")->dims();
auto *x_g = ctx.Output<framework::Tensor>(framework::GradVarName("X"));
x_g->Resize(x_dims);
Need to update to the latest code.
and also used in backward process.
)Doc");

AddAttr<int>("n", R"DOC(
These are the variables in the formula; wouldn't it be better to name them according to the formula?
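For reference, cross-map LRN is usually written as follows (after Krizhevsky et al.; here N is the number of channels), so the attributes could be named after the symbols n, k, alpha, beta:

```latex
b^{i}_{x,y} = a^{i}_{x,y} \Big/ \Big( k + \alpha \sum_{j=\max(0,\, i-n/2)}^{\min(N-1,\, i+n/2)} \big( a^{j}_{x,y} \big)^{2} \Big)^{\beta}
```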
      }
    }
  }
};
For the GPU implementation, it would be best to reuse the kernel: https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/function/CrossMapNormalOpGpu.cu
Loops like the current ones are inefficient on the GPU.
I have added an issue and recorded it in the layer port list; let's resolve this in a new PR!
#5066
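As a reference for what the (CPU or fused GPU) kernel has to compute, here is a minimal self-contained sketch in plain C++, independent of Paddle; the function name `LrnForward` and its signature are illustrative, not the operator's actual interface:

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <vector>

// Cross-map LRN forward over an NCHW tensor stored contiguously:
// out = in / (k + alpha * sum over a window of n channels of in^2)^beta
std::vector<float> LrnForward(const std::vector<float>& in, int N, int C,
                              int H, int W, int n, float k, float alpha,
                              float beta) {
  std::vector<float> out(in.size());
  for (int b = 0; b < N; ++b) {
    for (int c = 0; c < C; ++c) {
      // Channel window centered at c, clipped to [0, C - 1].
      const int lo = std::max(0, c - n / 2);
      const int hi = std::min(C - 1, c + n / 2);
      for (int h = 0; h < H; ++h) {
        for (int w = 0; w < W; ++w) {
          float sum = 0.f;
          for (int j = lo; j <= hi; ++j) {
            const float v = in[((b * C + j) * H + h) * W + w];
            sum += v * v;
          }
          const int idx = ((b * C + c) * H + h) * W + w;
          out[idx] = in[idx] / std::pow(k + alpha * sum, beta);
        }
      }
    }
  }
  return out;
}
```

The per-element loop over the channel window is exactly what makes a naive GPU port slow; the CrossMapNormalOpGpu.cu kernel referenced above avoids recomputing the window sum for every output element.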
paddle/operators/lrn_op.cc
Outdated
.SetDefault(0.0001)
.GreaterThan(0.0);

AddAttr<float>("beta", R"DOC(
For float-typed attributes such as alpha, beta, and k, it would be best to write them with templates as well; see: https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/dropout_op.cc#L40
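A minimal self-contained sketch of that pattern (illustrative names only, not Paddle's actual API): templating on an AttrType, as dropout_op does, lets the float attributes be cast to the kernel's data type T in one place:

```cpp
#include <cassert>
#include <cmath>

// Illustrative helper: the LRN denominator (k + alpha * sum)^beta with the
// attributes carried as AttrType and promoted to the kernel type T.
template <typename T, typename AttrType>
T LrnDenominator(T sum_of_squares, AttrType k, AttrType alpha, AttrType beta) {
  return std::pow(static_cast<T>(k) + static_cast<T>(alpha) * sum_of_squares,
                  static_cast<T>(beta));
}
```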
    self.check_output()

def test_check_grad_normal(self):
    self.check_grad(['X'], 'Out', max_relative_error=0.12)
If max_relative_error is too large, you can try testing with double precision.
Fix #4425