Delete extra input (Bias, ResidualData) in OpMaker of conv2d #49121
```diff
@@ -210,11 +210,8 @@
 - op : conv2d
   backward : conv2d_grad
   extra :
-    attrs : [bool is_test = false, bool use_cudnn = true, bool fuse_relu_before_depthwise_conv = false, bool use_mkldnn = false,
-             bool use_quantizer = false, str mkldnn_data_type = "float32", bool fuse_relu = false,
-             str fuse_activation = "", float fuse_alpha = 0.0f, float fuse_beta = 0.0f, bool use_addto = false,
-             bool fuse_residual_connection = false, float Scale_in = 1.0f, float Scale_out = 1.0f,
-             float Scale_in_eltwise = 1.0f, 'float[] Scale_weights = {1.0f}', bool force_fp32_output = false,
+    attrs : [bool is_test = false, bool use_cudnn = true, bool use_mkldnn = false, bool use_addto = false,
+             str mkldnn_data_type = "float32", bool force_fp32_output = false,
              int workspace_size_MB = phi::backends::gpu::GetDefaultConvWorkspaceSizeLimitMB(), bool exhaustive_search = false]

 - op : conv2d_fusion

@@ -556,6 +553,11 @@
   extra :
     attrs : [bool use_mkldnn = false]

+- op : fused_conv2d
+  extra :
+    attrs : [bool use_cudnn = false, float fuse_alpha = 0.0f, float fuse_beta = 0.0f, float Scale_in = 1.0f,
+             float Scale_out = 1.0f, float Scale_in_eltwise = 1.0f, 'float[] Scale_weights = {1.0f}']
+
 - op : gather
   backward : gather_grad
   extra :
```

Inline review thread on the new `fused_conv2d` entry:

**Member:** […]

**Contributor (Author):** Yes, it is necessary.
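For context, the inputs and oneDNN fusion attributes removed from `conv2d` here are backend-side details that have never been part of the public conv2d API. A minimal sketch with the public Paddle API (assuming a recent Paddle release; the tensor shapes are made up for illustration) shows that an ordinary conv2d call carries none of them:

```python
import paddle
import paddle.nn.functional as F

# The public conv2d API only takes the tensors a convolution actually
# needs; the deleted Bias/ResidualData extra inputs and the oneDNN
# fusion attrs above were never part of this signature.
x = paddle.randn([1, 3, 32, 32])  # NCHW input
w = paddle.randn([8, 3, 3, 3])    # [out_channels, in_channels, kH, kW]
y = F.conv2d(x, w, padding=1)
print(y.shape)                    # [1, 8, 32, 32]
```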
**Member:** So all int8-oneDNN kernels should be executed as fused kernels by default?

**Contributor (Author):** Yes. We are trying to delete the extra inputs and attributes from the base op, so some extra attributes used by the int8-oneDNN kernel are removed as well. For now we have to put them into the fused kernel to execute, because there is no better choice. A cleaner way to execute the int8-oneDNN kernel would be to create a dedicated int8-oneDNN kernel, but that is difficult to implement at the current stage; perhaps we can come up with a better solution in the future.

**Member:** Ok, thank you for explaining.
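For readers unfamiliar with the quantization attributes being relocated (`Scale_in`, `Scale_out`, `Scale_weights`), the following NumPy sketch illustrates the per-tensor int8 scale arithmetic that such attributes conventionally parameterize. This is a generic illustration with made-up values, not Paddle's actual oneDNN int8 implementation:

```python
import numpy as np

# Hypothetical values standing in for the fused op's quantization attrs.
scale_in = 127.0 / 6.0        # Scale_in: fp32 -> int8 scale for the input
scale_weights = 127.0 / 2.0   # Scale_weights: per-tensor scale for weights
scale_out = 127.0 / 20.0      # Scale_out: re-quantization scale for output

x_fp32 = np.random.uniform(-6, 6, size=(16,)).astype(np.float32)
w_fp32 = np.random.uniform(-2, 2, size=(16,)).astype(np.float32)

# Quantize: int8 value = round(fp32 * scale), clipped to the int8 range.
x_i8 = np.clip(np.round(x_fp32 * scale_in), -128, 127).astype(np.int8)
w_i8 = np.clip(np.round(w_fp32 * scale_weights), -128, 127).astype(np.int8)

# Integer accumulation (a dot product standing in for the convolution),
# then rescale back to fp32 by dividing out both input scales.
acc_i32 = np.dot(x_i8.astype(np.int32), w_i8.astype(np.int32))
y_fp32 = acc_i32 / (scale_in * scale_weights)
print(y_fp32, np.dot(x_fp32, w_fp32))  # close, up to quantization error

# If the next op consumes int8, Scale_out re-quantizes the result.
y_i8 = np.clip(np.round(y_fp32 * scale_out), -128, 127).astype(np.int8)
```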