Many quantized hexagon_nn ops have a _d32 variant. What is the d32 format, and how is it different from the "flat" format?
e.g.
QuantizedAdd_8p8to8 - adds Input A and Input B together element-wise. (flat format)
Inputs:
0: Input A data (quint8 tensor)
1: Input B data (quint8 tensor)
...
Outputs:
0: Output data (quint8 tensor)
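For reference, here is my rough understanding of the "flat" case: a plain contiguous NHWC quint8 buffer addressed with a single linear index. This is only a sketch of my assumption (the helper names are mine), not anything taken from the hexagon_nn headers.

```c
/* Sketch of my assumption: "flat" = plain contiguous NHWC (batch, height,
 * width, depth/channels) quint8 data, addressed with one linear index. */
#include <stddef.h>
#include <stdint.h>

static inline size_t flat_nhwc_index(size_t n, size_t h, size_t w, size_t c,
                                     size_t H, size_t W, size_t C)
{
    return ((n * H + h) * W + w) * C + c;
}

static inline uint8_t flat_nhwc_load(const uint8_t *buf,
                                     size_t n, size_t h, size_t w, size_t c,
                                     size_t H, size_t W, size_t C)
{
    return buf[flat_nhwc_index(n, h, w, c, H, W, C)];
}
```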
QuantizedAdd_8p8to8_d32 - Elementwise Add; inputs and output are in d32 format
Inputs:
0: Input A data (quint8 tensor)
1: Input B data (quint8 tensor)
...
Outputs:
0: Output data (quint8 tensor)
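And here is what I guess "_d32" means, going only by the name: the depth (channel) dimension is chunked into groups of 32, giving a layout of roughly [batch][height][depth/32][width][32], with depth padded up to a multiple of 32. The function name, the dimension order, and the padding handling below are all my assumptions; I'd appreciate confirmation of how the real format (including any width/height padding) is actually laid out.

```c
/* Sketch of my guess at d32: depth split into chunks of 32, laid out as
 * [batch][height][depth_chunk][width][32]. Depth is assumed to be padded up
 * to a multiple of 32; the real format may also pad width/height. */
#include <stddef.h>
#include <stdint.h>

#define D32_CHUNK 32u

static inline size_t d32_index(size_t n, size_t h, size_t w, size_t c,
                               size_t H, size_t W, size_t C_padded)
{
    const size_t chunks = C_padded / D32_CHUNK; /* number of 32-deep groups  */
    const size_t chunk  = c / D32_CHUNK;        /* which group holds channel c */
    const size_t lane   = c % D32_CHUNK;        /* offset inside that group   */
    return (((n * H + h) * chunks + chunk) * W + w) * D32_CHUNK + lane;
}
```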
Sorry for posting my question as an issue.