Would it be feasible, in your opinion, to port it to e.g. int8 format? If I read correctly, TensorFlow Lite has built-in functionality for easy quantization of weight matrices, so it would require only a fraction of the RAM, with minimal performance loss.
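For context, the kind of int8 conversion mentioned above boils down to replacing each float32 weight matrix with int8 values plus a scale factor. A minimal NumPy sketch of symmetric per-tensor quantization (this is a simplified illustration, not the TFLite converter itself, which also handles activations and per-channel scales):

```python
import numpy as np

def quantize_int8(w):
    # Symmetric per-tensor quantization: map floats to int8 in [-127, 127]
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximation of the original weights
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(dequantize(q, scale) - w).max()
# int8 storage is 4x smaller than float32; max rounding error is ~scale/2
```

The 4x memory saving is exactly why this looks attractive for RAM-constrained targets, but note it does nothing to reduce the number of operations per frame.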
Well, I need 8 GB of RAM to get that 10 fps at the accuracy I require, while drawing 30 W on the most power-efficient AI board on the market. Even if it were technologically possible, you would end up with very high processing times and very low accuracy.
The processing would likely take 100x longer at 2 W, and if you drop from 8 GB of RAM to, say, 256 MB accessed sequentially, you are looking at another 100x increase in processing time. That's one frame every ~1000 seconds, if it is even technologically possible.
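Spelling out the back-of-envelope math above (both slowdown factors are the rough assumptions stated, not measurements), the two 100x penalties compound on the 0.1 s/frame baseline:

```python
# Back-of-envelope estimate from the numbers above
base_fps = 10          # current throughput at 30 W with 8 GB of RAM
power_slowdown = 100   # assumed penalty for going from 30 W down to 2 W
memory_slowdown = 100  # assumed penalty for 8 GB -> 256 MB sequential access
seconds_per_frame = (1 / base_fps) * power_slowdown * memory_slowdown
# -> 1000.0 seconds per frame, i.e. roughly one frame every 17 minutes
```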
Is it feasible to port this to an ESP32 or a similar microcontroller environment (e.g. MicroPython)?