
Replace Relu with bounded Relu in MobileNetV2 quantization mkldnn#18988

Merged
luotao1 merged 1 commit into PaddlePaddle:develop from wozna:int8_mobilenetv2_update_brelu
Aug 12, 2019

Conversation


@wozna wozna commented Aug 2, 2019

This PR removes the workaround introduced in PR #17570.

MKL-DNN 0.18 had no support for the bounded ReLU post-op. Hence, when adding MobileNetV2 support, we decided to use ReLU instead of ReLU6 (bounded ReLU).
Now that MKL-DNN has been upgraded to 0.20, which supports the bounded ReLU post-op, the workaround can be removed and bounded ReLU can be used.

test=develop
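For context, ReLU6 (bounded ReLU) differs from plain ReLU only by an upper clamp at the threshold (6 for MobileNetV2's relu6 layers), which keeps activation ranges tight for INT8 quantization. A minimal NumPy sketch of the two activations (not Paddle code):

```python
import numpy as np

def relu(x):
    # Unbounded ReLU: max(0, x)
    return np.maximum(x, 0.0)

def bounded_relu(x, threshold=6.0):
    # Bounded ReLU (ReLU6 when threshold=6): clamps activations to [0, threshold]
    return np.minimum(np.maximum(x, 0.0), threshold)

x = np.array([-2.0, 3.0, 8.0])
print(relu(x))          # [0. 3. 8.]
print(bounded_relu(x))  # [0. 3. 6.]
```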

@wozna wozna changed the title Replace Relu with bounded Relu in MobileNetV2 quantization Replace Relu with bounded Relu in MobileNetV2 quantization mkldnn Aug 2, 2019

wozna commented Aug 5, 2019

@Sand3r-, could you please review this PR?

@bingyanghuang
Contributor

@lidanqing-intel could you help review this PR?

@lidanqing-vv
Contributor

lidanqing-vv commented Aug 12, 2019

Looks good to me~
Later we may consider calling DequantizeOutput after conv_op->Op()->SetAttr("fuse_brelu_threshold", scale_out * threshold), rather than before, because DequantizeOutput uses conv_op. In any case, since conv_op is passed by pointer, the current order works correctly.
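The scale_out * threshold factor above can be illustrated with a small, hypothetical NumPy sketch (the function name and scale value are illustrative assumptions, not Paddle internals): in the INT8 pass the conv output lives in the quantized domain (quantized = real * scale_out), so the ReLU6 clip threshold must be scaled by scale_out to clamp at the equivalent point.

```python
import numpy as np

def quantized_brelu(q_out, scale_out, threshold=6.0):
    # Hypothetical illustration: clamp in the quantized domain using the
    # scaled threshold, mirroring fuse_brelu_threshold = scale_out * threshold
    return np.minimum(np.maximum(q_out, 0.0), scale_out * threshold)

real_out = np.array([-1.0, 2.5, 7.0])
scale_out = 20.0                      # assumed output quantization scale
q_out = real_out * scale_out          # quantize: [-20., 50., 140.]
clipped = quantized_brelu(q_out, scale_out)
back = clipped / scale_out            # dequantize
# back equals relu6 applied in the real domain: [0., 2.5, 6.]
```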

Contributor

@luotao1 luotao1 left a comment


LGTM

@luotao1 luotao1 merged commit bce72c7 into PaddlePaddle:develop Aug 12, 2019
@wozna wozna deleted the int8_mobilenetv2_update_brelu branch February 24, 2023 16:03

4 participants