Conversation

@phlrain (Collaborator) commented Jan 3, 2024

PR types

Function optimization

PR changes

Others

Description

Pcard-67164

Since broadcast in CINN already carries reshape functionality, consecutive reshape and broadcast ops can be merged into a single broadcast.

@paddle-ci-bot (bot) commented Jan 11, 2024

Sorry to inform you that 05ba879's CIs have passed for more than 7 days. To prevent PR conflicts, you need to re-run all CIs manually.

@phlrain phlrain changed the title add merge reshapw with broadcast pass [PIR+CINN]add merge reshape with broadcast pass Jan 15, 2024
zyfncg previously approved these changes Jan 15, 2024
namespace ir {

bool CanMerge(pir::Operation* op) {
  auto in_dims = op->operand_source(0)
Contributor suggested change:
- auto in_dims = op->operand_source(0)
+ auto& in_dims = op->operand_source(0)

phlrain (author): done

                     .type()
                     .dyn_cast<paddle::dialect::DenseTensorType>()
                     .dims();
  auto out_dims =
Contributor suggested change:
- auto out_dims =
+ auto &out_dims =

phlrain (author): done


std::vector<int64_t> GetBroadcastAxis(pir::Operation* reshape_op,
                                      pir::Operation* broadcast_op) {
  auto in_dims =
Contributor suggested change:
- auto in_dims =
+ auto &in_dims =

phlrain (author): This one goes through a phi::vectorize call, so & cannot be used here.

          .type()
          .dyn_cast<paddle::dialect::DenseTensorType>()
          .dims());
  auto out_dims =
Contributor suggested change:
- auto out_dims =
+ auto& out_dims =

phlrain (author): This one goes through a phi::vectorize call, so & cannot be used here.
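The reason a reference does not work in these two spots is general C++ rather than anything CINN-specific: phi::vectorize returns its std::vector by value, so the call expression is a temporary, and a non-const lvalue reference cannot bind to a temporary. A minimal standalone illustration, where `make_dims` is a hypothetical stand-in for a by-value call like phi::vectorize:

```cpp
#include <cstdint>
#include <vector>

// Stand-in for a call such as phi::vectorize(...): it returns its
// std::vector by value, so the call expression is a temporary (rvalue).
std::vector<int64_t> make_dims() { return {2, 1, 3}; }

// auto& bad = make_dims();        // ill-formed: a non-const lvalue
//                                 // reference cannot bind to a temporary
auto good = make_dims();           // OK: the temporary is moved in
const auto& also_ok = make_dims(); // OK: const ref extends the lifetime
```

This is why the reviewer's `auto&` suggestion applies only to accessors like `op->operand_source(0)...dims()`, which return a reference to existing storage, and not to the phi::vectorize results.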


  std::vector<int64_t> new_broadcast_axes(in_dims.size(), 0);
  // two step alignment
  // 1 step
Contributor: Comments should also follow the conventions; this one is rather terse. Either remove it or expand it into a proper code comment.

phlrain (author): done

        op_broadcast_axes[forward_out_index];
  }

  for (auto dim : new_broadcast_axes) {
Contributor: Please remove the debug code here.

phlrain (author): done

pir::RewritePatternSet MergeReshapeWithBroadcastPass::InitializePatterns(
    pir::IrContext* context) {
  pir::RewritePatternSet ps(context);
  // elementwise ops
Contributor: This comment doesn't seem to match the code?

phlrain (author): done

@phlrain phlrain merged commit 4939ae5 into PaddlePaddle:develop Jan 17, 2024