
[X86] Add set value op and double data type to framework. #9580

Merged: csy0225 merged 11 commits into PaddlePaddle:develop from csy0225:add_set_value_op on Nov 4, 2022

Conversation

csy0225 (Collaborator) commented Oct 20, 2022:

No description provided.

zhupengyang and others added 3 commits October 19, 2022 09:48
support fusion_elementwise_mul_activation, fusion_elementwise_sub_activation, fusion_elementwise_div_activation, fusion_elementwise_min_activation, fusion_elementwise_max_activation, fusion_elementwise_pow_activation, instance_norm, prelu, arg_max, arg_min, flatten, flatten2, norm
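The fused elementwise ops listed in the commit above each combine a binary elementwise op with a trailing activation so the intermediate tensor never has to be materialized. As a hedged illustration (the function name and signature below are hypothetical, not the Paddle-Lite kernel API), a mul + relu fusion amounts to:

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Illustrative sketch only: a fused elementwise op applies the binary op and
// the activation in a single pass, avoiding a temporary output tensor.
std::vector<float> fused_elementwise_mul_relu(const std::vector<float>& x,
                                              const std::vector<float>& y) {
  assert(x.size() == y.size());
  std::vector<float> out(x.size());
  for (size_t i = 0; i < x.size(); ++i) {
    out[i] = std::max(x[i] * y[i], 0.0f);  // mul and relu fused in one loop
  }
  return out;
}
```

The other fused variants differ only in the binary op (sub, div, min, max, pow) and the activation applied to the product.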
paddle-bot commented Oct 20, 2022:

Thanks for your contribution!

Shixiaowei02 (Collaborator) previously approved these changes Oct 21, 2022 and left a comment:

Agree with the changes to flatbuffers.

csy0225 force-pushed the add_set_value_op branch 2 times, most recently from f3d371c to 9be6ff1 on October 25, 2022 08:53
zhupengyang (Collaborator) left a comment:

LGTM

csy0225 merged commit eb5f272 into PaddlePaddle:develop on Nov 4, 2022
lishicheng1996 pushed a commit to lishicheng1996/Paddle-Lite that referenced this pull request Nov 18, 2022
[X86] Add set value op and double data type to framework. (PaddlePaddle#9580)

* [QualcommQnn] add ops (PaddlePaddle#9538)

support fusion_elementwise_mul_activation, fusion_elementwise_sub_activation, fusion_elementwise_div_activation, fusion_elementwise_min_activation, fusion_elementwise_max_activation, fusion_elementwise_pow_activation, instance_norm, prelu, arg_max, arg_min, flatten, flatten2, norm

* add float64 type to lite

* add float64 kernel for set value

* change the third-party-libs url due to flatbuf update.

* fix include files conflict

* fix bug

* Fix heterogeneous execution errors

* fix control_flow_op_control_flow_op_shared_inputs_and_outputs_place_sync_pass bug

* fix comment

Co-authored-by: zhupengyang <[email protected]>
QShiX pushed a commit to QShiX/Paddle-Lite that referenced this pull request Nov 18, 2022
[X86] Add set value op and double data type to framework. (PaddlePaddle#9580) (commit message identical to the cherry-pick above)
mjp9527 pushed a commit that referenced this pull request Nov 22, 2022
* [X86] Add set value op and double data type to framework. (#9580)

(squashed commit message identical to the cherry-pick above, including the Co-authored-by line)

* [PaddleSpeech] Add OPs and others needed by fastspeech_2 model (#9706)

* [Host] add 3 OPs: set_value, round, share_data
test=develop

* [Host] add expand_v2 OP registration with type kBool
test=develop

* [Arm] add reduce_sum OP Int64 registration and neon implement & add reduce_max OP kInt32 registration
test=develop

* [X86] fix bug in set_value OP
test=develop

* [Extra] move 2 OPs, round and share_data, to extra
test=develop

* [proto] fix a bug
test=develop

Co-authored-by: csy0225 <[email protected]>
Co-authored-by: zhupengyang <[email protected]>


3 participants