
Add squared_l2_distance_op #3768

Merged
pkuyym merged 9 commits into PaddlePaddle:develop from pkuyym:fix-3736
Sep 7, 2017

Conversation

@pkuyym (Contributor) commented Aug 30, 2017

resolves #3736

@pkuyym pkuyym self-assigned this Aug 30, 2017
@pkuyym pkuyym changed the title from "Finish framework of squared_l2_distance_op (WIP)" to "Add squared_l2_distance_op (WIP)" Aug 30, 2017
'X': np.random.uniform(0.1, 1., (2, 3)).astype('float32'),
'Y': np.random.uniform(0.1, 1., (2, 3)).astype('float32')
}
self.check_grad(op, inputs, set(["X", "Y"]), "Out")
Contributor

Add compare_grad to check that the GPU and CPU calculations produce consistent results.
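The kind of check being requested can be illustrated with a small numerical gradient test. This is a hedged, plain-Python sketch (not Paddle's actual check_grad/compare_grad API; all helper names here are hypothetical): for Out_i = sum_j (X_ij - Y_ij)^2, the analytic gradient w.r.t. X is 2 * (X - Y), which should agree with a central-difference estimate.

```python
# Hypothetical sketch of a gradient check for the squared L2 distance op;
# not Paddle's actual check_grad/compare_grad implementation.

def squared_l2_distance(x, y):
    # x, y: lists of rows; returns one scalar per row
    return [sum((xi - yi) ** 2 for xi, yi in zip(rx, ry))
            for rx, ry in zip(x, y)]

def analytic_dx(x, y):
    # dOut_i/dX_ij = 2 * (X_ij - Y_ij)
    return [[2.0 * (xi - yi) for xi, yi in zip(rx, ry)]
            for rx, ry in zip(x, y)]

def numeric_dx(x, y, i, j, eps=1e-5):
    # central-difference estimate on element (i, j)
    x_hi = [row[:] for row in x]; x_hi[i][j] += eps
    x_lo = [row[:] for row in x]; x_lo[i][j] -= eps
    return (squared_l2_distance(x_hi, y)[i]
            - squared_l2_distance(x_lo, y)[i]) / (2 * eps)

x = [[0.3, 0.5], [0.7, 0.2]]
y = [[0.1, 0.9], [0.4, 0.6]]
g = analytic_dx(x, y)
for i in range(2):
    for j in range(2):
        assert abs(g[i][j] - numeric_dx(x, y, i, j)) < 1e-4
```

A GPU/CPU consistency check would run the same op on both devices and compare outputs elementwise within a tolerance, in the same spirit as the analytic-vs-numeric comparison above.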

@qingqing01 (Contributor)

I think this op is well written. @pkuyym has enhanced its functionality compared with the original Paddle implementation, and the op also has good error reporting.

auto* x_grad = ctx.Output<Tensor>(framework::GradVarName("X"));
auto* y_grad = ctx.Output<Tensor>(framework::GradVarName("Y"));
if (x_grad != nullptr) x_grad->Resize(x_dims);
if (y_grad != nullptr) y_grad->Resize(y_dims);
Contributor

if (x_grad) ...
if (y_grad) ...


// propagate back to input
auto eigen_place = context.GetEigenDevice<Place>();
if (x_g != nullptr) {
Contributor

if (x_g)

@pkuyym pkuyym changed the title from "Add squared_l2_distance_op (WIP)" to "Add squared_l2_distance_op" Sep 2, 2017
"inputs must be same.");

int rank = framework::arity(x_dims);
PADDLE_ENFORCE(rank >= 2, "Tensor rank should be at least equal to 2.");
Contributor

PADDLE_ENFORCE_GE(rank, 2, "doc")

just like GLOG

Contributor Author

Done.

framework::product(y_dims) / y_dims[0],
"Product of dimensions expcet the first dimension of "
"input and target must be equal.");
PADDLE_ENFORCE(y_dims[0] == 1 || y_dims[0] == x_dims[0],
Contributor

Split this enforce into two PADDLE_ENFORCEs, or print whether y_dims[0] == 1 or y_dims[0] == x_dims[0] made this enforce fail.

Make this condition check more concise.

Contributor Author

y_dims[0] == 1 || y_dims[0] == x_dims[0] is an atomic assertion. I think it's difficult to split it into two assertions.

Contributor

@pkuyym that's right.
There is a typo, expcet, that needs to be fixed; you can use Grammarly to catch such typos before pushing to GitHub.
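The shape constraints discussed in this thread can be sketched as plain-Python assertions (a hypothetical illustration, not Paddle's actual InferShape code): the input must have rank at least 2, the non-batch dimensions of input and target must match in product, and the target's first dimension must be either 1 (one shared target broadcast over the batch) or equal to the batch size.

```python
# Hedged sketch of the InferShape checks under review; helper names
# are hypothetical, and assertions stand in for PADDLE_ENFORCE.

def product(dims):
    r = 1
    for d in dims:
        r *= d
    return r

def infer_shape(x_dims, y_dims):
    assert len(x_dims) >= 2, "Tensor rank should be at least 2."
    assert product(x_dims[1:]) == product(y_dims[1:]), (
        "Product of dimensions except the first dimension of "
        "input and target must be equal.")
    # the atomic assertion discussed above: target batch is either
    # 1 (broadcast over the batch) or equal to the input batch size
    assert y_dims[0] == 1 or y_dims[0] == x_dims[0], (
        "First dimension of target must be 1 or equal to that of input.")
    return (x_dims[0], 1)  # shape of Out: one distance per input row

infer_shape((4, 3), (1, 3))  # broadcast target: OK
infer_shape((4, 3), (4, 3))  # per-row target: OK
```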

auto y_grad =
EigenMatrix<T>::From(*y_g, framework::make_ddim({y_dims[0], cols}));

PADDLE_ENFORCE(sub_result.dimensions()[0] >= y_dims[0],
Contributor

PADDLE_ENFORCE_GE

Contributor Author

Done.

dzhwinter previously approved these changes Sep 4, 2017
@dzhwinter (Contributor) left a comment

LGTM++

void InferShape(const framework::InferShapeContext& ctx) const override {
PADDLE_ENFORCE_NOT_NULL(ctx.InputVar(framework::GradVarName("Out")),
"Gradient of Out should not be null");
// check out grad dimensions
Contributor

maybe this line of comment can be removed?

Contributor Author

Done.

out0->mutable_data<T>(context.GetPlace());
out1->mutable_data<T>(context.GetPlace());
auto sub_result = EigenMatrix<T>::From(*out0);
auto z = EigenMatrix<T>::From(*out1);
Member

Please change z to this; the current form causes a runtime error on mac/clang. Please refer to #3916 and #3919.

auto z = EigenVector<T>::Flatten(*out1);

Contributor Author

Done.

}
auto sub_res_pow2 = sub_result * sub_result;
// z is TensorMap, no need reshape
z.device(place) = sub_res_pow2.sum(Eigen::array<int, 1>({1}));
Member

Please change to the following, and refer to https://stackoverflow.com/questions/31555584/why-is-clang-warning-suggest-braces-around-initialization-of-subobject-wmis

z.device(place) = sub_res_pow2.sum(Eigen::array<int, 1>({{1}}));

Contributor Author

Done.
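For reference, the forward computation being reviewed subtracts the target from the input (broadcasting a single-row target over the batch) and reduces each row of squared differences to one scalar. A minimal plain-Python sketch (hypothetical helper, not the operator's actual Eigen code):

```python
# Sketch of the forward pass: sub_result = X - Y (Y broadcast if its
# first dim is 1), z = row-wise sum of squared differences.

def forward(x, y):
    n = len(x)
    # broadcast a single-row target over the batch (assumed semantics)
    y_rows = y * n if len(y) == 1 else y
    sub_result = [[xi - yi for xi, yi in zip(rx, ry)]
                  for rx, ry in zip(x, y_rows)]
    # mirrors z = sub_res_pow2.sum(...) along axis 1 in the Eigen code
    z = [sum(s * s for s in row) for row in sub_result]
    return sub_result, z

sub, z = forward([[1.0, 2.0], [3.0, 4.0]], [[1.0, 1.0]])
# z[0] = 0^2 + 1^2 = 1.0; z[1] = 2^2 + 3^2 = 13.0
```

Note that the row-wise sum reduces the column dimension, which is why z can be treated as a flat vector of batch-size length.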

int cols = framework::product(x_dims) / x_dims[0];
// calculate gradient
auto grad_mat =
2 * (out_grad.broadcast(Eigen::array<int, 2>({1, cols}))) * sub_result;
Member

{1, cols} -> {{1, cols}}

Contributor Author

Done.
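The gradient expression in the snippet above scales each row of sub_result by twice that row's output gradient (out_grad broadcast across the columns). A hedged plain-Python sketch of the same arithmetic (hypothetical helper, not the operator's actual Eigen code):

```python
# Sketch of grad_mat = 2 * out_grad.broadcast({{1, cols}}) * sub_result:
# for Out_i = sum_j (X_ij - Y_ij)^2, dL/dX_ij = 2 * (X_ij - Y_ij) * dL/dOut_i.

def input_gradient(out_grad, sub_result):
    # out_grad: one scalar per row; sub_result: X - Y per row
    return [[2.0 * g * s for s in row]
            for g, row in zip(out_grad, sub_result)]

gm = input_gradient([1.0, 0.5], [[1.0, -2.0], [3.0, 0.0]])
# gm == [[2.0, -4.0], [3.0, 0.0]]
```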

if (y_g) {
y_g->mutable_data<T>(context.GetPlace());
auto y_grad =
EigenMatrix<T>::From(*y_g, framework::make_ddim({y_dims[0], cols}));
Member

Please think about the rank of y_grad carefully. The result of Eigen::Sum() will reduce the dimension.

Contributor Author

Done.
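The point about y_grad's rank can be made concrete in plain Python (a hypothetical sketch, not the operator's actual code): the target gradient is the negation of the input gradient, and when the target was broadcast (y_dims[0] == 1) the per-row gradients must be summed over the batch, which reduces the first dimension to 1 and so must be reshaped back to rank 2.

```python
# Sketch of the target gradient: dL/dY = -grad_mat, summed over the
# batch dimension when the target was broadcast (y_dims[0] == 1).

def target_gradient(grad_mat, y_batch):
    neg = [[-g for g in row] for row in grad_mat]
    if y_batch == 1:
        # sum over the batch dimension; wrap in a list to keep rank 2,
        # mirroring the Eigen reduction-then-reshape concern above
        cols = len(neg[0])
        return [[sum(row[j] for row in neg) for j in range(cols)]]
    return neg

yg = target_gradient([[2.0, -4.0], [3.0, 0.0]], y_batch=1)
# yg == [[-5.0, 4.0]]
```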

@pkuyym pkuyym merged commit a072ab9 into PaddlePaddle:develop Sep 7, 2017
Successfully merging this pull request may close these issues.

MSE Operator.

5 participants