Conversation
```python
num = 5
# P = {0, 1.0} or {0, 0.5, 1.0}
P = np.random.randint(0, 2, size=(num, num)).astype("float32")
Oi = np.random.random((num, num)).astype("float32")
```
Local variable names should be `lower_with_under`; see https://google.github.io/styleguide/pyguide.html?showone=Naming#Naming
```
A detailed explanation about these notations can be found in

[1]. Chris Burges, Tal Shaked, Erin Renshaw, et al. Learning to
```
Maybe we can add the link here.
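For context, the loss these inputs feed is the pairwise cross entropy from [1] (RankNet). A minimal numpy sketch of the forward computation, assuming the op applies it elementwise (the function name here is illustrative, not the op's API):

```python
import numpy as np


def rank_loss(p, oi, oj):
    """RankNet cross-entropy loss from [1], elementwise.

    p  : target posterior P(item i > item j), in {0, 0.5, 1.0}
    oi : model score for item i  (input "Oi")
    oj : model score for item j  (input "Oj")
    """
    o = oi - oj
    # -p*o + log(1 + exp(o)) is the sigmoid cross entropy
    # with logit o and soft label p.
    return -p * o + np.log1p(np.exp(o))


num = 5
p = np.random.randint(0, 2, size=(num, num)).astype("float32")
oi = np.random.random((num, num)).astype("float32")
oj = np.random.random((num, num)).astype("float32")
loss = rank_loss(p, oi, oj)
```

Since p lies in [0, 1], the value is an ordinary cross entropy and is non-negative everywhere.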
paddle/operators/rank_loss_op.cc
Outdated
```cpp
    : OpProtoAndCheckerMaker(proto, op_checker) {
  AddInput("P", "The desired target values for posteriors.");
  AddInput("Oi", "The model output for item i.");
  AddInput("Oj", "The model output for item j.");
```
Please document the dimensions of the inputs and outputs in their comments.
paddle/operators/rank_loss_op.cc
Outdated
```cpp
auto dims = ctx.Input<framework::Tensor>("P")->dims();
ctx.Output<framework::Tensor>(framework::GradVarName("P"))->Resize(dims);
ctx.Output<framework::Tensor>(framework::GradVarName("Oi"))->Resize(dims);
ctx.Output<framework::Tensor>(framework::GradVarName("Oj"))->Resize(dims);
```
A gradient op's outputs (the gradients of the forward op's inputs) can be nullptr, which means they are not needed for backward. So we should assert that each output is not nullptr before calling Resize.
paddle/operators/rank_loss_op.h
Outdated
```cpp
auto* oi_t = ctx.Input<framework::Tensor>("Oi");
auto* oj_t = ctx.Input<framework::Tensor>("Oj");
// ...
d_oi->mutable_data<T>(ctx.GetPlace());
```
Outputs of a gradient op may be nullptr. If so, they are not needed for backward and we don't need to compute them.
See https://github.com/PaddlePaddle/Paddle/blob/develop/paddle/operators/cos_sim_op.h#L104 for an example.
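The pattern in the linked cos_sim example can be sketched in numpy: the backward pass skips any gradient the caller did not request and returns None for it, mirroring a nullptr output in the C++ kernel. The function name and the `need_*` flags are illustrative only; in C++ the check is simply `if (d_oi != nullptr) { ... }` on each output pointer.

```python
import numpy as np


def rank_loss_grad(p, oi, oj, d_out, need_d_oi=True, need_d_oj=True):
    """Backward pass for the rank loss.

    With o = oi - oj and C = -p*o + log(1 + exp(o)):
        dC/d(oi) = sigmoid(o) - p
        dC/d(oj) = p - sigmoid(o)

    Returns None for a gradient the caller did not ask for,
    mirroring a nullptr output in the C++ gradient kernel.
    """
    o = oi - oj
    sig = 1.0 / (1.0 + np.exp(-o))
    d_oi = d_out * (sig - p) if need_d_oi else None
    d_oj = d_out * (p - sig) if need_d_oj else None
    return d_oi, d_oj


# Example: caller only needs the gradient w.r.t. "Oi".
p = np.array([[1.0, 0.5, 0.0]])
d_oi, d_oj = rank_loss_grad(p, np.ones((1, 3)), np.zeros((1, 3)),
                            np.ones((1, 3)), need_d_oj=False)
# d_oj is None: the "kernel" skipped computing it.
```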
```python
from op_test import OpTest


class TestReshapeOp(OpTest):
```
Why is the class named TestReshapeOp?
```python
def test_check_output(self):
    self.check_output()

def test_check_grad(self):
```
Add some check_grad_ignore_XXX tests if possible.
In a check_grad_ignore_XXX test, the ignored variables' gradients are set to nullptr, and your kernel should not compute them.
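The idea behind these tests can be sketched with a plain finite-difference gradient check that leaves ignored variables out and insists the kernel produced no gradient for them. The helpers `numeric_grad` and `check_grad` below are illustrative stand-ins, not the OpTest API:

```python
import numpy as np


def numeric_grad(f, x, eps=1e-4):
    """Central-difference gradient of the scalar f() w.r.t. x,
    perturbing x in place and restoring it afterwards."""
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=["multi_index"])
    while not it.finished:
        idx = it.multi_index
        orig = x[idx]
        x[idx] = orig + eps
        f_plus = f()
        x[idx] = orig - eps
        f_minus = f()
        x[idx] = orig
        grad[idx] = (f_plus - f_minus) / (2 * eps)
        it.iternext()
    return grad


def check_grad(loss_fn, inputs, analytic_grads, ignore=()):
    """Compare analytic gradients to numeric ones; inputs in `ignore`
    must have no analytic gradient (the kernel skipped them)."""
    for name, x in inputs.items():
        if name in ignore:
            assert analytic_grads.get(name) is None
            continue
        assert np.allclose(numeric_grad(loss_fn, x),
                           analytic_grads[name], atol=1e-2)


# Usage with the rank loss, ignoring the gradient w.r.t. "Oj":
rng = np.random.RandomState(0)
p = rng.randint(0, 2, size=(3,)).astype("float64")
oi = rng.random_sample((3,))
oj = rng.random_sample((3,))
loss_fn = lambda: float((-p * (oi - oj) + np.log1p(np.exp(oi - oj))).sum())
sig = 1.0 / (1.0 + np.exp(-(oi - oj)))
check_grad(loss_fn, {"Oi": oi, "Oj": oj},
           {"Oi": sig - p, "Oj": None}, ignore=("Oj",))
```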
kuke left a comment:
Refined this operator following all the comments above. Please continue the review.
Resolve #4065