[MKL-DNN] Bump up mkl-dnn to 0.20 #18370
Conversation
|
This commit will fail on one of the internal models, which runs successfully on |
|
@luotao1 could you please share some logs/description of failure? |
|
@luotao1 In particular I'm interested in MKLDNN_VERBOSE log |
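A minimal way to capture such a log might look like the sketch below. `MKLDNN_VERBOSE` is the standard tracing variable in mkl-dnn 0.x; the inference binary name and flags are placeholders, not the actual workload from this thread.

```shell
# Enable mkl-dnn primitive-level tracing (mkl-dnn 0.x reads MKLDNN_VERBOSE;
# 1 = trace primitive execution, 2 = also trace primitive creation).
export MKLDNN_VERBOSE=1

# Hypothetical invocation: replace ./inference_binary and its flags with
# the actual internal workload, and capture stdout/stderr to a log file.
# ./inference_binary --model internal_model 2>&1 | tee mkldnn_verbose.log

# Each executed primitive then prints a CSV-style line beginning with
# "mkldnn_verbose,exec,..." that identifies the primitive kind and the
# JIT implementation chosen for the CPU.
echo "MKLDNN_VERBOSE=$MKLDNN_VERBOSE"
```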
|
The model was sent to you Dec 2018. If you don't find the model, I will ask @bingyanghuang to sync with you. |
|
@luotao1 I have updated the changes with the fix needed so that the crash in the internal workload you reported goes away. We are currently testing those changes. |
|
@jczaja Do you mean current PR is OK for review? |
test=develop
|
@luotao1 I'm sorry for not being very clear. We haven't finished testing it yet (it should be done today or tomorrow), and then those changes will be ready for review. In the meantime, I would like you to test your internal workloads on this branch, if possible, and tell me if you see any problems. It would also be good to know if you observe any performance change. |
|
@baojun-nervana Just to let you know that we are bumping up mkl-dnn to 0.20 in this PR. |
|
@luotao1 We finished internal testing. As a highlight, there is a slight performance improvement in BERT inference on AVX512 platforms, e.g. Skylake and Cascade Lake. The PR is now ready for review; if you have any data on how this PR behaves with your internal workloads, please share it. |
Thanks, will update ngraph. |
|
@luotao1 Have you had a chance to test this PR on your internal workloads? If so, and the results are positive, please consider merging. |
|
It works well on our internal workloads, thanks very much! |
These changes update MKL-DNN from 0.19 to 0.20.
Justification: