Comment by embedding-shape

9 hours ago

> where latency matters more than raw accuracy – think industrial inspection

Huh? Why would industrial inspection, in particular, benefit from lower latency in exchange for accuracy? Sounds a bit backwards, but maybe I'm missing something obvious.

At a very high level, think fruit sorting[0], where the conveyor belt doesn't stop rolling and you need to respond rapidly, all the way through to monitoring for things like defects in silicon wafers and root-causing them. Some of these issues aren't problematic on their own, but you can aggregate data over time to see whether a particular machine, material or process within a factory is degrading. That degradation might not span the entire factory but be isolated to a particular batch of material or a particular subsection within it. This is not a hypothetical example: this is an active use case.

[0] https://www.youtube.com/watch?v=vxff_CnvPek

  • But that's not something you'd use an LLM for. Computer vision systems have been sorting bad peas for more than a decade[0], so of course there are plenty of use cases for very fast inspection systems. But when would you use an LLM for anything like that?

    [0] https://www.youtube.com/watch?v=eLDxXPziztw

  • But why would I want results that are faster but less reliable, versus slower and more reliable? Feels like the sort of thing where you'd favor accuracy over speed; otherwise you're just degrading the quality control.

    • It's not that you want it to be faster, but that you want the latency to be predictable and reliable, which is much more the case for local inference than for sending requests over a network (and especially to the current set of frontier model providers, who don't exactly have standout reliability numbers).

    • The last few nines of fruit-sorting accuracy are usually not worth running a 400-billion-parameter model to catch the last 3 fruit.
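
The predictable-latency point above can be sketched in code. This is a minimal, hypothetical illustration (the deadline, function names, and simulated timings are all invented for the example, not from any real inspection system): each item on the belt gets a fixed per-item time budget, and a verdict that arrives after the budget is treated as useless, because the item has already moved past the ejector. A fast, predictable local model stays inside the budget; a remote call with an occasional long network tail does not.

```python
import random
import time

# Hypothetical per-item budget: an item passes the camera every 50 ms.
BELT_DEADLINE_S = 0.050

def classify_with_deadline(infer, item, deadline_s=BELT_DEADLINE_S):
    """Run `infer` on `item`. If the answer comes back after the deadline,
    the item has already moved past the ejector, so we can't act on the
    verdict: divert the item for manual inspection instead."""
    start = time.monotonic()
    verdict = infer(item)
    if time.monotonic() - start > deadline_s:
        return "divert"  # late answer: treat as suspect, keep the line moving
    return verdict

# Simulated local model: ~5 ms, predictable.
def local_infer(item):
    time.sleep(0.005)
    return "ok" if item % 10 else "reject"

# Simulated remote API: usually ~20 ms, but with an occasional
# 200 ms network/provider tail that blows the budget.
def remote_infer(item):
    time.sleep(0.200 if random.random() < 0.05 else 0.020)
    return "ok" if item % 10 else "reject"
```

With timings like these, the local model essentially never misses the budget, while the remote call turns a few percent of items into "divert" purely because of tail latency, regardless of model accuracy. That is the sense in which predictability, not raw speed, is the requirement.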