Comment by Teknomadix
4 months ago
Not necessarily. There are lots of use cases for on-device AI inference. I run YOLO on an Nvidia Jetson-powered Lenovo ThinkEdge, which processes incoming video at full frame rate on four channels with recognition and classification for a bespoke premises security system. No cloud involved, other than the Nix package manager etc. Your argument may carry more weight when you're talking about ultra-low-power devices like an Arduino; running AI inference locally on those seems like more of a stretch.
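For concreteness, here is a minimal sketch of what multi-channel on-device detection can look like. This is not my actual pipeline: the ultralytics package, the yolov8n.pt model file, and the camera sources below are all illustrative assumptions.

```python
# Minimal sketch: four-channel on-device YOLO inference.
# Assumes `pip install ultralytics opencv-python`; the model file and
# camera sources are placeholders, not a real deployment.
import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # small model; Jetson-class hardware runs it comfortably

# Hypothetical sources: local device indices or RTSP URLs.
sources = [0, 1, 2, 3]
captures = [cv2.VideoCapture(s) for s in sources]

while True:
    for channel, cap in enumerate(captures):
        ok, frame = cap.read()
        if not ok:
            continue
        # Detection and classification run entirely on the local device.
        results = model(frame, verbose=False)
        for box in results[0].boxes:
            label = model.names[int(box.cls)]
            confidence = float(box.conf)
            print(f"channel {channel}: {label} ({confidence:.2f})")
```

Round-robin polling like this trades per-channel frame rate for simplicity; a real system would read each stream in its own thread.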
True, true, very true. But I observe you use an Nvidia chip, which is perfectly logical: why would you use something that is worse in every single way, right? Which is exactly what Qualcomm's offerings are...
YOLO has actually been shown to run on 15-year-old Canon cameras. They in no way have the power of my Nvidia Jetson, but they can run the model just the same, no cloud necessary. These small devices may not be able to process four individual video channels at 60 frames per second, but they can still process at lower frame rates and with fewer channels, using much cheaper hardware and a whole lot less energy. So I don't think your argument really holds up.
Is power a single way?