This repository was archived by the owner on Nov 17, 2023. It is now read-only.
The `batch_dot` operator does not support FP16 well and can make training slower than using FP32. This was tested with the Transformer model in GluonNLP. The feature has already been added in NVIDIA's MXNet fork, so I think it would be good to enable it on master.
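For context, "FP16 support" here usually means accepting FP16 inputs/outputs while accumulating in FP32 inside the batched GEMM, as the cuBLAS mixed-precision kernels do. A minimal NumPy sketch of those semantics (this is an illustration of the numeric behavior, not MXNet's actual kernel; the function name `batch_dot_fp16` is hypothetical):

```python
import numpy as np

def batch_dot_fp16(a, b):
    """Batched matrix product with FP16 I/O and FP32 accumulation.

    Mimics a mixed-precision batched GEMM: upcast inputs, multiply
    and accumulate in FP32, then downcast the result to FP16.
    """
    assert a.dtype == np.float16 and b.dtype == np.float16
    out = np.matmul(a.astype(np.float32), b.astype(np.float32))
    return out.astype(np.float16)

rng = np.random.default_rng(0)
# Shapes follow batch_dot convention: (batch, M, K) x (batch, K, N).
a = rng.standard_normal((4, 8, 16)).astype(np.float16)
b = rng.standard_normal((4, 16, 8)).astype(np.float16)
c = batch_dot_fp16(a, b)
print(c.shape, c.dtype)
```

Accumulating in FP32 avoids the overflow and precision loss of a pure-FP16 reduction while still giving the memory and throughput benefits of FP16 storage.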