update mkldnn to 0.16 #12265
Conversation
Hi Alex, thanks for bringing this up. As far as I know, MKL-DNN recently introduced a padded memory format for better performance, which requires more memory for computation than the logical size of the tensor. In the existing integration code, we reuse the memory of input/output NDArrays for computation, and those buffers are allocated according to the logical size during the memory planning phase. So I don't think we can simply update MKL-DNN to 0.16 at this time. I have talked with @zheng-da and @eric-haibin-lin about this, and we are trying to figure out a workaround before we can change the behavior of memory planning for the MKL-DNN backend. @pengzhao-intel
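To make the size mismatch concrete, here is a hedged sketch (not actual MKL-DNN API calls; the function names and the block size of 16 are assumptions for illustration) of how a blocked/padded layout such as `nChw16c` can make the physical buffer larger than the tensor's logical size when the channel count is not a multiple of the block size:

```python
def logical_size(shape, dtype_bytes=4):
    """Bytes needed for the tensor's logical extent (plain NCHW layout)."""
    n, c, h, w = shape
    return n * c * h * w * dtype_bytes

def padded_size(shape, block=16, dtype_bytes=4):
    """Hypothetical bytes for a blocked layout that pads channels up
    to the next multiple of `block` (illustrative only)."""
    n, c, h, w = shape
    c_padded = -(-c // block) * block  # ceil(c / block) * block
    return n * c_padded * h * w * dtype_bytes

# A typical first conv layer: 3 input channels, not a multiple of 16.
shape = (1, 3, 224, 224)
print(logical_size(shape))  # 602112 bytes
print(padded_size(shape))   # 3211264 bytes -- over 5x larger
```

A buffer allocated during memory planning at the logical size (602112 bytes here) would be too small to hold the padded layout, which is why the upgrade cannot simply reuse the existing NDArray memory.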
@TaoLv thanks for the heads-up. It would be great if you could share the design/plan for how you want to change memory planning to achieve this.
Thanks for your contribution @azai91 |
@TaoLv any plans to upgrade to 0.16? A few people within Amazon are inquiring.
Description
Update the mkldnn submodule to 0.16.
Checklist
Essentials
Please feel free to remove inapplicable items for your PR.
Changes
Comments