Conversation
    PADDLE_ENFORCE(x_dims.size() >= 3 && x_dims.size() <= 5,
                   "The Input dim size should be between 3 and 5");
    const int N = x_dims[0];
Do we need to enforce Google coding style?
    saved_mean_e.setZero();
    saved_variance_e.setZero();

    switch (tensor_format) {
A complete implementation, which is very nice. Do we only need to do NCHW or NHWC? We can discuss.
I find our old code supports both formats, and TensorFlow/Caffe2 also support the two data formats, so I think maybe we need them all.
Our old code only supports `NCHW` (for 2D Conv) or `NCDHW` (for 3D Conv), not `NHWC` or `NDHWC`. And all the convolution operators in the new framework only support `NCHW` (for 2D Conv) or `NCDHW` (for 3D Conv) too.
Do we need to support 2D input data? For example, if the previous layer is a fully connected layer.
    enum TensorFormat {
      NHWC = 0,
      NCHW = 1,
    };
There is DataLayout in paddle/platform/cudnn_helper.h.
cudnn_helper cannot be included in a .cc file; I think maybe we need to move it to another place.
paddle/operators/batch_norm_op.cc
Outdated
        ? (tensor_format == TensorFormat::NCHW ? x_dims[4] : x_dims[3])
        : 1;

    const int sample_size = H * W * D;
When the input is 5D, i.e. NCDHW or NDHWC: the complicated extraction logic on lines 136 - 144 can be removed, and line 146 can be changed to:
    const int frame_size = x->numel() / N / C;
paddle/operators/batch_norm_op.cc
Outdated
    switch (tensor_format) {
      case TensorFormat::NCHW: {
        ConstEigenArrayMap<T> X_arr(x->data<T>(), sample_size, N * C);
X_arr -> x_arr : https://google.github.io/styleguide/cppguide.html#Variable_Names
paddle/operators/batch_norm_op.cc
Outdated
    saved_mean_e /= N * sample_size;
    for (int nc = 0; nc < N * C; ++nc) {
      saved_variance_e(nc % C) +=
          (X_arr.col(nc) - saved_variance_e(nc % C))
saved_variance_e -> saved_mean_e?
    saved_variance_e(nc % C) +=
        (X_arr.col(nc) - saved_mean_e(nc % C))
paddle/operators/batch_norm_op.cc
Outdated
    // init output
    auto *dX = ctx.Output<Tensor>(framework::GradVarName("X"));
    auto *dScale = ctx.Output<Tensor>(framework::GradVarName("Scale"));
    auto *dBias = ctx.Output<Tensor>(framework::GradVarName("Bias"));
The names of variables (including function parameters) and data members should be all lowercase:
https://google.github.io/styleguide/cppguide.html#Variable_Names
    @@ -0,0 +1,424 @@
    /* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
All the code is wrong now; I will update it in another PR.
    @@ -0,0 +1,62 @@
    /* Copyright (c) 2016 PaddlePaddle Authors. All Rights Reserve.
Background
project: #4531
fix: #4906
Progress