Add scaling operator #3942

Closed

kuke wants to merge 1 commit into PaddlePaddle:develop from kuke:scaling_layer_dev

Conversation

@kuke (Contributor) commented Sep 7, 2017

Resolve #3766

@reyoung (Collaborator) left a comment

Maybe it should be named colwise_mul?

}
};

class ScalingGradOp : public framework::OperatorWithKernel {
Collaborator

The gradient operator could be composed of two forward operators. See minus_op.
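To illustrate the suggestion, here is a minimal NumPy sketch (NumPy stands in for Paddle's operators; the shapes and the assumption that "scaling" means per-row scaling, i.e. y[i, j] = w[i] * x[i, j], are illustrative, not taken from this PR's code). Both backward quantities can be expressed with the same forward-style elementwise ops, which is the point of composing the gradient from forward operators:

```python
import numpy as np

def scaling_forward(w, x):
    # Per-row scaling: y[i, j] = w[i] * x[i, j]
    return w[:, None] * x

# Hypothetical example data.
w = np.array([2.0, 3.0])           # per-row scale factors
x = np.array([[1.0, 2.0],
              [3.0, 4.0]])
dy = np.ones_like(x)               # upstream gradient dL/dy

# dL/dx has the same form as the forward op: scale dy by w.
dx = scaling_forward(w, dy)
# dL/dw is an elementwise mul followed by a reduction over the
# scaled axis -- again built only from forward-style ops.
dw = (x * dy).sum(axis=1)
```

Here dx comes out as [[2, 2], [3, 3]] and dw as [3, 7], matching the analytic gradient of the assumed forward rule.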


if __name__ == '__main__':
    unittest.main()
if __name__ == '__main__':
@reyoung (Collaborator) Sep 7, 2017

Maybe __main__ should not be defined twice.

@qingqing01 (Contributor) commented Sep 7, 2017

Maybe it should be named colwise_mul?

@reyoung Discussed with @kuke and @gongweibao , we'd like to merge this op with elementwise mul op.

The elementwise mul op can handle:

  • tensor * row-vector
  • tensor * col-vector
  • tensor * tensor

like caffe2: https://caffe2.ai/docs/operators-catalogue.html#mul
like tensorflow: https://www.tensorflow.org/api_docs/python/tf/multiply
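The three cases above can be sketched with NumPy broadcasting (NumPy is used here only as an illustration of the broadcasting semantics being discussed; the shapes are hypothetical):

```python
import numpy as np

t = np.arange(6.0).reshape(2, 3)    # a 2x3 tensor: [[0,1,2],[3,4,5]]
row = np.array([10.0, 20.0, 30.0])  # row-vector, broadcast down the rows
col = np.array([[1.0], [2.0]])      # col-vector, broadcast across the columns

print(t * row)   # tensor * row-vector: [[0,20,60],[30,80,150]]
print(t * col)   # tensor * col-vector: [[0,1,2],[6,8,10]]
print(t * t)     # tensor * tensor:     [[0,1,4],[9,16,25]]
```

A single elementwise mul with broadcasting covers the scaling use case (tensor * col-vector) as a special case, which is why merging the ops is attractive.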

@kuke (Contributor, Author) commented Sep 7, 2017

@reyoung @qingqing01 thanks for your comments. I will work with @gongweibao to unify these mul ops.

@kuke (Contributor, Author) commented Sep 14, 2017

Closing this PR since #3787 has a more complete implementation.

@kuke kuke closed this Sep 14, 2017
