It seems that the package does not support mixed precision training: the Embedding layer adds the layer output and the embedding parameters, whose dtypes are 'float32' and 'float16', leading to a dtype error: "TypeError: Input 'y' of 'AddV2' Op has type float16 that does not match type float32 of argument 'x'."
Is there any plan to fix this?
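For reference, a minimal sketch of the kind of workaround we have been using. It is a hypothetical position-embedding layer (the name, shapes, and initializer are illustrative, not the package's actual implementation) that casts its float32 weight to the incoming tensor's dtype before the add, which avoids the AddV2 error under the `mixed_float16` policy:

```python
import tensorflow as tf

# Illustrative layer, not the package's code: under "mixed_float16" the
# incoming activations are float16 while the layer's own weight stays
# float32, so a plain add raises the AddV2 dtype mismatch quoted above.
class PositionEmbedding(tf.keras.layers.Layer):
    def __init__(self, max_len, **kwargs):
        super().__init__(**kwargs)
        self.max_len = max_len

    def build(self, input_shape):
        # Stored as float32 even when the compute dtype is float16.
        self.pos_embedding = self.add_weight(
            name="pos_embedding",
            shape=(self.max_len, input_shape[-1]),
            initializer="zeros",
        )

    def call(self, inputs):
        seq_len = tf.shape(inputs)[1]
        pos = self.pos_embedding[:seq_len]
        # Cast the float32 weight to the input's dtype (float16 under
        # mixed precision) before adding, so AddV2 sees matching dtypes.
        return inputs + tf.cast(pos, inputs.dtype)
```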
It is interesting to note that we were able to use mixed-precision training before with TF 1.15 and TF Estimator. With TF 2.3 and Keras, we are seeing this same error.
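For context, this is roughly how we enable the policy with TF 2.3 and Keras (the `experimental` API of that release; later TF versions expose `tf.keras.mixed_precision.set_global_policy` instead). The model-building code around it is omitted:

```python
import tensorflow as tf

# TF 2.3-era mixed precision API (still under "experimental").
policy = tf.keras.mixed_precision.experimental.Policy("mixed_float16")
tf.keras.mixed_precision.experimental.set_policy(policy)
```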