
Comment by embedding-shape

13 hours ago

But why would I want the results faster but less reliable, versus slower and more reliable? This feels like the sort of thing where you'd favor accuracy over speed; otherwise you're just degrading the quality control.

It's not that you want it to be faster, but you want the latency to be predictable and reliable, which is much more the case for local inference than sending it away over a network (and especially to the current set of frontier model providers who don't exactly have standout reliability numbers).

The last few nines of fruit-sorting accuracy are usually not worth running a 400-billion-parameter model to catch the last 3 fruit.

A local, offline system you control is worth a lot. Introducing an external dependency guarantees you will have downtime outside of your control.

  • Right, but that doesn't answer why you'd need a fast 7B LLM rather than a slightly slower 14B LLM.

    • In the hypothetical fruit-sorting example, if you have a hard budget of 10 msec to respond, the 7B takes 8 msec, and the 14B takes 12 msec, there is your imaginary answer. That's regular engineering, where you balance competing constraints instead of just running the biggest model available.

    • ...because sometimes people need a faster answer? There are many possible reasons someone might need speed over accuracy. In the food-sorting example, if lower accuracy means you waste more peanuts, but the speed means you get rid of more bad peanuts overall, then you get fewer complaints about bad peanuts, at the cost of a tiny amount of extra material waste.

    • Hard real-time constraints are a thing in some systems. Also, the current approaches might have 85% accuracy; if the LLM can deliver 90% accuracy while being "less exact", that's still a win!