@zyfncg zyfncg commented Feb 5, 2024

PR types

Others

PR changes

Others

Description

Pcard-73448

Introduce a PrintHook to improve how symbolic dimension information is printed for a Program. The printed output looks as follows:

Before:

(%0) = "pd_op.data" () {dtype:(pd_op.DataType)float32,name:"x",place:(pd_op.Place)Place(undefined:0),shape:(pd_op.IntArray)[-1,128],stop_gradient:[false],sym_shape_str:"shape[S0, 128], data[NULL]",symbolic_shape:(shape_data)[S0,128]_[nullopt]} : () -> pd_op.tensor<-1x128xf32>
(%1) = "pd_op.data" () {dtype:(pd_op.DataType)float32,name:"y",place:(pd_op.Place)Place(undefined:0),shape:(pd_op.IntArray)[-1,128],stop_gradient:[false],sym_shape_str:"shape[S1, 128], data[NULL]",symbolic_shape:(shape_data)[S1,128]_[nullopt]} : () -> pd_op.tensor<-1x128xf32>
(%2) = "pd_op.exp" (%0) {stop_gradient:[false],sym_shape_str:"shape[S0, 128], data[NULL]",symbolic_shape:(shape_data)[S0,128]_[nullopt]} : (pd_op.tensor<-1x128xf32>) -> pd_op.tensor<-1x128xf32>
(%3) = "pd_op.shape" (%2) {stop_gradient:[false],sym_shape_str:"shape[2], data[S0, 128]",symbolic_shape:(shape_data)[2]_[S0,128]} : (pd_op.tensor<-1x128xf32>) -> pd_op.tensor<2xi32>
(%4) = "pd_op.shape" (%1) {stop_gradient:[false],sym_shape_str:"shape[2], data[S1, 128]",symbolic_shape:(shape_data)[2]_[S1,128]} : (pd_op.tensor<-1x128xf32>) -> pd_op.tensor<2xi32>
(%5) = "pd_op.shape_broadcast" (%3, %4) {stop_gradient:[false],sym_shape_str:"shape[2], data[Broadcast(S1, S0), 128]"} : (pd_op.tensor<2xi32>, pd_op.tensor<2xi32>) -> pd_op.tensor<2xi32>
(%6) = "pd_op.expand" (%1, %5) {stop_gradient:[false],sym_shape_str:"shape[Broadcast(S1, S0), 128], data[NULL]"} : (pd_op.tensor<-1x128xf32>, pd_op.tensor<2xi32>) -> pd_op.tensor<-1x-1xf32>
(%7) = "pd_op.shape" (%2) {stop_gradient:[false],sym_shape_str:"shape[2], data[S0, 128]",symbolic_shape:(shape_data)[2]_[S0,128]} : (pd_op.tensor<-1x128xf32>) -> pd_op.tensor<2xi32>
(%8) = "pd_op.shape" (%6) {stop_gradient:[false],sym_shape_str:"shape[2], data[Broadcast(S1, S0), 128]",symbolic_shape:(shape_data)[2]_[Broadcast(S1, S0),128]} : (pd_op.tensor<-1x-1xf32>) -> pd_op.tensor<2xi32>
(%9) = "pd_op.shape_broadcast" (%7, %8) {stop_gradient:[false],sym_shape_str:"shape[2], data[Broadcast(Broadcast(S1, S0), S0), 128]"} : (pd_op.tensor<2xi32>, pd_op.tensor<2xi32>) -> pd_op.tensor<2xi32>
(%10) = "pd_op.subtract" (%2, %6) {stop_gradient:[false],sym_shape_str:"shape[S0, 128], data[NULL]",symbolic_shape:(shape_data)[S0,128]_[nullopt]} : (pd_op.tensor<-1x128xf32>, pd_op.tensor<-1x-1xf32>) -> pd_op.tensor<-1x128xf32>
() = "builtin.shadow_output" (%10) {output_name:"output_0",sym_shape_str:"shape[S0, 128], data[NULL]"} : (pd_op.tensor<-1x128xf32>) -> 

After: (the symbolic_shape information is no longer printed in the op's attribute list; instead, it is placed next to the corresponding output value)

(%0) = "pd_op.data" () {dtype:(pd_op.DataType)float32,name:"x",place:(pd_op.Place)Place(undefined:0),shape:(pd_op.IntArray)[-1,128],stop_gradient:[false],sym_shape_str:"shape[S0, 128], data[NULL]",symbolic_shape:} : () -> pd_op.tensor<-1x128xf32> { (shape[S0, 128], data[NULL]) }
(%1) = "pd_op.data" () {dtype:(pd_op.DataType)float32,name:"y",place:(pd_op.Place)Place(undefined:0),shape:(pd_op.IntArray)[-1,128],stop_gradient:[false],sym_shape_str:"shape[S1, 128], data[NULL]",symbolic_shape:} : () -> pd_op.tensor<-1x128xf32> { (shape[S1, 128], data[NULL]) }
(%2) = "pd_op.exp" (%0) {stop_gradient:[false],sym_shape_str:"shape[S0, 128], data[NULL]",symbolic_shape:} : (pd_op.tensor<-1x128xf32>) -> pd_op.tensor<-1x128xf32> { (shape[S0, 128], data[NULL]) }
(%3) = "pd_op.shape" (%2) {stop_gradient:[false],sym_shape_str:"shape[2], data[S0, 128]",symbolic_shape:} : (pd_op.tensor<-1x128xf32>) -> pd_op.tensor<2xi32> { (shape[2], data[S0, 128]) }
(%4) = "pd_op.shape" (%1) {stop_gradient:[false],sym_shape_str:"shape[2], data[S1, 128]",symbolic_shape:} : (pd_op.tensor<-1x128xf32>) -> pd_op.tensor<2xi32> { (shape[2], data[S1, 128]) }
(%5) = "pd_op.shape_broadcast" (%3, %4) {stop_gradient:[false],sym_shape_str:"shape[2], data[Broadcast(S1, S0), 128]"} : (pd_op.tensor<2xi32>, pd_op.tensor<2xi32>) -> pd_op.tensor<2xi32> { (shape[2], data[Broadcast(S1, S0), 128]) }
(%6) = "pd_op.expand" (%1, %5) {stop_gradient:[false],sym_shape_str:"shape[Broadcast(S1, S0), 128], data[NULL]"} : (pd_op.tensor<-1x128xf32>, pd_op.tensor<2xi32>) -> pd_op.tensor<-1x-1xf32> { (shape[Broadcast(S1, S0), 128], data[NULL]) }
(%7) = "pd_op.shape" (%2) {stop_gradient:[false],sym_shape_str:"shape[2], data[S0, 128]",symbolic_shape:} : (pd_op.tensor<-1x128xf32>) -> pd_op.tensor<2xi32> { (shape[2], data[S0, 128]) }
(%8) = "pd_op.shape" (%6) {stop_gradient:[false],sym_shape_str:"shape[2], data[Broadcast(S1, S0), 128]",symbolic_shape:} : (pd_op.tensor<-1x-1xf32>) -> pd_op.tensor<2xi32> { (shape[2], data[Broadcast(S1, S0), 128]) }
(%9) = "pd_op.shape_broadcast" (%7, %8) {stop_gradient:[false],sym_shape_str:"shape[2], data[Broadcast(Broadcast(S1, S0), S0), 128]"} : (pd_op.tensor<2xi32>, pd_op.tensor<2xi32>) -> pd_op.tensor<2xi32> { (shape[2], data[Broadcast(Broadcast(S1, S0), S0), 128]) }
(%10) = "pd_op.subtract" (%2, %6) {stop_gradient:[false],sym_shape_str:"shape[S0, 128], data[NULL]",symbolic_shape:} : (pd_op.tensor<-1x128xf32>, pd_op.tensor<-1x-1xf32>) -> pd_op.tensor<-1x128xf32> { (shape[S0, 128], data[NULL]) }
() = "builtin.shadow_output" (%10) {output_name:"output_0",sym_shape_str:"shape[S0, 128], data[NULL]"} : (pd_op.tensor<-1x128xf32>) ->  {  }

@paddle-bot commented Feb 5, 2024

Your PR has been submitted. Thanks for your contribution!
Please wait for the result of CI first. See the Paddle CI Manual for details.

tc20042008 previously approved these changes Feb 5, 2024
kangguangli previously approved these changes Feb 5, 2024
@zyfncg zyfncg merged commit 19d13df into PaddlePaddle:develop Feb 6, 2024
@zyfncg zyfncg deleted the refine_symbolic_print branch February 6, 2024 12:17