[PIR] pir onednn support mixed instruction #60754
Changes from all commits: f4640c5, b2587cd, 7e720f7, 67758d1, 74808b3, b7016cf, c039795, d2b8c89, 77f5364, 97a4545, 96de8da, 5dbbe04, dadd309, e19fa09
```diff
@@ -1345,7 +1345,7 @@ void HandleForSpecialOp(
     }
   }

-  if (op_item->isa<::pir::YieldOp>() || op_item->isa<::pir::ShadowOutputOp>()) {
+  if (op_item->isa<::pir::YieldOp>()) {
     if (op_item->num_operands() > 0) {
       for (size_t i = 0; i < op_item->num_operands(); ++i) {
         auto cur_in = op_item->operand_source(i);
```
```diff
@@ -1360,6 +1360,32 @@ void HandleForSpecialOp(
     }
   }

+  if (op_item->isa<::pir::ShadowOutputOp>()) {
+    if (op_item->num_operands() > 0) {
+      for (size_t i = 0; i < op_item->num_operands(); ++i) {
+        auto cur_in = op_item->operand_source(i);
+        if (!cur_in) {
+          vec_inputs.emplace_back();
+          continue;
+        }
+        auto new_in = GetNewInput(
+            cur_in, *map_value_pair, static_cast<int>(i), op_item->name());
+        // layout transfer (only for onednn)
+#ifdef PADDLE_WITH_DNNL
+        auto new_in_type = new_in.type();
+        if (new_in_type.isa<AllocatedDenseTensorType>()) {
+          if (new_in_type.dyn_cast<AllocatedDenseTensorType>().data_layout() ==
+              phi::DataLayout::ONEDNN) {
+            new_in = AddOneDNN2PaddleLayoutTransferOp(
+                new_in, phi::DataLayout::ANY, block);
+          }
+        }
+#endif
+        vec_inputs.push_back(new_in);
+      }
+    }
+  }
+
   if (op_item->isa<::pir::SetParameterOp>()) {
     if (op_item->num_operands() > 0) {
       for (size_t i = 0; i < op_item->num_operands(); ++i) {
```

Comment on lines +1374 to +1383

Contributor: This doesn't look quite right. If people build with WITH_MKLDNN but don't actually run in mkldnn mode, won't this affect the logic of the other execution modes?

Contributor: Misread it — with the check `if (new_in_type.dyn_cast<AllocatedDenseTensorType>().data_layout() == phi::DataLayout::ONEDNN)` it won't.

Contributor (Author): When not running in mkldnn mode, `new_in_type.dyn_cast<AllocatedDenseTensorType>().data_layout() == phi::DataLayout::ONEDNN` evaluates to false, so the other modes are unaffected. Given the current code structure of pd_op_to_kernel_pass.cc, though, this is the only place the check can live.
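The exchange above hinges on the layout guard being a no-op for any value that is not already in oneDNN layout. A minimal standalone sketch of that pattern (the `DataLayout` enum, `Value` struct, and helper functions here are hypothetical stand-ins, not Paddle's real types):

```cpp
#include <cassert>

#define PADDLE_WITH_DNNL  // simulate a build configured with oneDNN support

// Hypothetical stand-ins for phi::DataLayout and a pir-like value;
// these are NOT Paddle's real types, just enough to show the guard.
enum class DataLayout { ANY, NCHW, ONEDNN };

struct Value {
  DataLayout layout;
  bool transferred = false;  // set when a layout-transfer op was inserted
};

// Mimics the role of AddOneDNN2PaddleLayoutTransferOp: rewires the value
// through a transfer op that converts it to the requested layout.
Value AddLayoutTransfer(Value in, DataLayout dst) {
  return Value{dst, true};
}

// The guard pattern from the diff: only a value whose layout is already
// ONEDNN gets a transfer op; any other layout passes through untouched,
// which is why non-mkldnn runs are unaffected even in a DNNL build.
Value MaybeTransfer(Value in) {
#ifdef PADDLE_WITH_DNNL
  if (in.layout == DataLayout::ONEDNN) {
    return AddLayoutTransfer(in, DataLayout::ANY);
  }
#endif
  return in;
}
```

Even with the build flag defined, the runtime layout check keeps the transfer from firing for ordinary tensors, matching the author's reply.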
Reviewer: If there are no extra_args, does this key still need to be specified?

Author: The underlying mechanism does allow omitting it, but when writing this I decided to put an empty one in anyway. Most ops have extra_args, so writing an explicit empty extra_args lets anyone reading the code see at a glance that this op's extra_args is empty.
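To illustrate the convention the author describes, here is a hypothetical op config entry (the op name and surrounding fields are made up for illustration, not taken from this PR):

```yaml
# Hypothetical onednn op registration entry; names are illustrative only.
- op: some_onednn_op
  extra_args: []   # explicitly empty: the reader sees at a glance there are none
```

The alternative of omitting the key entirely would rely on the reader knowing the default, which is the readability cost the author is avoiding.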