[QualcommQnn] add ops #9538
Merged
zhupengyang merged 3 commits into PaddlePaddle:develop on Oct 17, 2022
Conversation

Thanks for your contribution!
Force-pushed from 4352069 to a68da13
lite/backends/nnadapter/nnadapter/src/optimizer/convert_datalayout_nchw_to_nhwc.cc (review thread outdated, resolved)
Force-pushed from a68da13 to 9b1e262
csy0225 pushed a commit to csy0225/Paddle-Lite that referenced this pull request on Oct 20, 2022:
support fusion_elementwise_mul_activation, fusion_elementwise_sub_activation, fusion_elementwise_div_activation, fusion_elementwise_min_activation, fusion_elementwise_max_activation, fusion_elementwise_pow_activation, instance_norm, prelu, arg_max, arg_min, flatten, flatten2, norm
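The fused elementwise+activation ops listed above each combine a binary elementwise op with a trailing activation into a single kernel, avoiding a materialized intermediate tensor. A minimal sketch of the computed semantics, assuming simple 1-D inputs — all names here are illustrative, not the Paddle-Lite or QNN implementation:

```python
# Illustrative semantics only: out = act(binary(x, y)), computed in one pass
# instead of materializing the intermediate binary(x, y) tensor.
def fused_elementwise_act(x, y, binary, act):
    return [act(binary(a, b)) for a, b in zip(x, y)]

def relu(v):  # a common trailing activation
    return max(v, 0.0)

# e.g. fusion_elementwise_mul_activation with relu:
out = fused_elementwise_act([1.0, -2.0], [3.0, 4.0], lambda a, b: a * b, relu)
print(out)  # [3.0, 0.0] -> multiply, then clamp the negative product to 0
```

The same shape covers the sub/div/min/max/pow variants by swapping the `binary` callable.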
zhupengyang added a commit to zhupengyang/Paddle-Lite that referenced this pull request on Oct 27, 2022:
support fusion_elementwise_mul_activation, fusion_elementwise_sub_activation, fusion_elementwise_div_activation, fusion_elementwise_min_activation, fusion_elementwise_max_activation, fusion_elementwise_pow_activation, instance_norm, prelu, arg_max, arg_min, flatten, flatten2, norm
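instance_norm, also carried in this commit, normalizes each channel of each sample by that channel's own mean and variance. A hedged pure-Python reference sketch of the math (illustrative only; the actual kernel lives in the NNAdapter backend, and the function name here is hypothetical):

```python
# Per-channel normalization: (v - mean) / sqrt(var + eps), then scale/shift.
def instance_norm(channels, gamma=1.0, beta=0.0, eps=1e-5):
    out = []
    for ch in channels:  # ch holds the flattened H*W values of one channel
        mean = sum(ch) / len(ch)
        var = sum((v - mean) ** 2 for v in ch) / len(ch)
        out.append([gamma * (v - mean) / (var + eps) ** 0.5 + beta for v in ch])
    return out
```

Unlike batch norm, the statistics come from a single sample's channel, so no cross-sample state is needed at inference time.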
zhupengyang added a commit that referenced this pull request on Oct 31, 2022:
* windows ci fix (#9559)
* [NNAdapter] support device data (#9493)
* [QualcommQnn] support exp, log, reduce_mean, reduce_max, reduce_sum, floor (#9505)
* [QualcommQnn] add ops (#9538): support fusion_elementwise_mul_activation, fusion_elementwise_sub_activation, fusion_elementwise_div_activation, fusion_elementwise_min_activation, fusion_elementwise_max_activation, fusion_elementwise_pow_activation, instance_norm, prelu, arg_max, arg_min, flatten, flatten2, norm
* [NNAdapter] support vit model (#9583)
* [NNAdapter] set output lod according to input lod
* [NNAdapter] slice support EndsTensorList
* [NNAdapter] fuse pass (5d->4d)
* fix cmake cxx flags (#9467)
csy0225 added a commit that referenced this pull request on Nov 4, 2022:
* [QualcommQnn] add ops (#9538): support fusion_elementwise_mul_activation, fusion_elementwise_sub_activation, fusion_elementwise_div_activation, fusion_elementwise_min_activation, fusion_elementwise_max_activation, fusion_elementwise_pow_activation, instance_norm, prelu, arg_max, arg_min, flatten, flatten2, norm
* add float64 type to lite
* add float64 kernel for set value
* change the third-party-libs url due to flatbuf update
* fix include files conflict
* fix bug
* Fix heterogeneous execution errors
* fix control_flow_op_control_flow_op_shared_inputs_and_outputs_place_sync_pass bug
* fix comment
Co-authored-by: zhupengyang <[email protected]>
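arg_max and arg_min, among the ops this PR adds, reduce along an axis to the index of the extreme value rather than the value itself. An illustrative 1-D sketch under that reading (hypothetical helper names, not the framework API):

```python
# Index of the maximum / minimum element. On ties, Python's max/min over a
# range return the first (lowest) index, a common arg_max/arg_min convention.
def arg_max(xs):
    return max(range(len(xs)), key=lambda i: xs[i])

def arg_min(xs):
    return min(range(len(xs)), key=lambda i: xs[i])

print(arg_max([1.0, 5.0, 3.0]), arg_min([1.0, 5.0, 3.0]))  # 1 0
```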
lishicheng1996 pushed a commit to lishicheng1996/Paddle-Lite that referenced this pull request on Nov 18, 2022:
…le#9580)
* [QualcommQnn] add ops (PaddlePaddle#9538): support fusion_elementwise_mul_activation, fusion_elementwise_sub_activation, fusion_elementwise_div_activation, fusion_elementwise_min_activation, fusion_elementwise_max_activation, fusion_elementwise_pow_activation, instance_norm, prelu, arg_max, arg_min, flatten, flatten2, norm
* add float64 type to lite
* add float64 kernel for set value
* change the third-party-libs url due to flatbuf update
* fix include files conflict
* fix bug
* Fix heterogeneous execution errors
* fix control_flow_op_control_flow_op_shared_inputs_and_outputs_place_sync_pass bug
* fix comment
Co-authored-by: zhupengyang <[email protected]>
QShiX pushed a commit to QShiX/Paddle-Lite that referenced this pull request on Nov 18, 2022 (same commit message as the lishicheng1996 entry above).
mjp9527 pushed a commit that referenced this pull request on Nov 22, 2022:
* [X86] Add set value op and double data type to framework. (#9580)
  * [QualcommQnn] add ops (#9538): support fusion_elementwise_mul_activation, fusion_elementwise_sub_activation, fusion_elementwise_div_activation, fusion_elementwise_min_activation, fusion_elementwise_max_activation, fusion_elementwise_pow_activation, instance_norm, prelu, arg_max, arg_min, flatten, flatten2, norm
  * add float64 type to lite
  * add float64 kernel for set value
  * change the third-party-libs url due to flatbuf update
  * fix include files conflict
  * fix bug
  * Fix heterogeneous execution errors
  * fix control_flow_op_control_flow_op_shared_inputs_and_outputs_place_sync_pass bug
  * fix comment
  Co-authored-by: zhupengyang <[email protected]>
* [PaddleSpeech] Add OPs and others needed by fastspeech_2 model (#9706)
  * [Host] add 3 OPs: set_value, round, share_data
  * [Host] add expand_v2 OP registration with type kBool
  * [Arm] add reduce_sum OP Int64 registration and neon implement & add reduce_max OP kInt32 registration
  * [X86] fix bug in set_value OP
  * [Extra] move round and share_data to extra
  * [proto] fix a bug
  Co-authored-by: csy0225 <[email protected]>
  Co-authored-by: zhupengyang <[email protected]>