Following up for those coming across the same thing: it appears that quantizing activation functions is a hard problem that is still being actively researched. This paper discusses it in more detail: https://arxiv.org/pdf/1702.00953.pdf
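For anyone curious what a workaround looks like in practice: one common trick in the quantization literature (not anything uTensor-specific) is to replace a quantized activation like tanh with a precomputed lookup table over all 256 uint8 codes, so the non-linearity never runs in float at inference time. A minimal NumPy sketch, with the helper name and value ranges made up for illustration:

```python
import numpy as np

def build_tanh_lut(in_min, in_max, out_min=-1.0, out_max=1.0):
    """Precompute tanh for every uint8 code of the input range."""
    codes = np.arange(256, dtype=np.float32)
    # Dequantize each input code to its real value.
    real = in_min + codes * (in_max - in_min) / 255.0
    # Apply the float activation once, offline.
    activated = np.tanh(real)
    # Requantize the result into the output range.
    quantized = np.round((activated - out_min) * 255.0 / (out_max - out_min))
    return np.clip(quantized, 0, 255).astype(np.uint8)

lut = build_tanh_lut(in_min=-6.0, in_max=6.0)
x_q = np.array([0, 64, 128, 192, 255], dtype=np.uint8)  # quantized inputs
y_q = lut[x_q]  # quantized tanh is now a single table lookup per element
```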
It might be helpful for the READMEs of both this project and uTensor to list the supported operators, so others can tell whether this solution is right for them.
Sorry for the delay. Indeed, we should list all the supported operators in the README; I'm going to do that in the next incremental release. Basically, uTensor should support all of the quantized operators in TF. Possibly non-quantized ops too, since there is always the option of dequantizing back to float and falling back to the regular float implementations.
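For concreteness, here is a rough sketch of that dequantize → float op → requantize round trip. This is plain NumPy for illustration, not the actual uTensor or TF kernel code; the helper names and value ranges are assumptions:

```python
import numpy as np

def dequantize(x_q, x_min, x_max):
    """Map uint8 codes back to float32 values over [x_min, x_max]."""
    return x_min + x_q.astype(np.float32) * (x_max - x_min) / 255.0

def quantize(x, x_min, x_max):
    """Map float32 values in [x_min, x_max] to uint8 codes."""
    q = np.round((x - x_min) * 255.0 / (x_max - x_min))
    return np.clip(q, 0, 255).astype(np.uint8)

# Fallback for an op with no quantized kernel: dequantize the inputs,
# run the regular float implementation, then requantize the result.
x_q = np.array([0, 100, 200, 255], dtype=np.uint8)
x = dequantize(x_q, -6.0, 6.0)
y = np.tanh(x)                      # ordinary float op
y_q = quantize(y, -1.0, 1.0)
```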
When running the tool I get this error. It appears in https://github.com/uTensor/utensor_cgen/blob/master/utensor_cgen/operators.py#L158-L170 that tanh isn't listed. Am I missing something here, or is this tool still under development? Thank you.
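In case it helps anyone hitting the same error: one way to check up front whether a frozen graph only uses supported ops is to dump its op types and compare them against the list in operators.py. A rough sketch assuming TF 1.x (where `tf.GraphDef` is available; under TF 2 it would be `tf.compat.v1.GraphDef`), with `model.pb` as a placeholder path:

```python
import tensorflow as tf

# Load the frozen GraphDef that would be passed to utensor_cgen.
graph_def = tf.GraphDef()
with open("model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

# Print each distinct op type once, for comparison against the
# operators implemented in utensor_cgen/operators.py.
for op_type in sorted({node.op for node in graph_def.node}):
    print(op_type)
```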