Comment by littlestymaar
19 hours ago
The latency argument is terrible. Of course frontier LLMs are slow and costly. But you don't need Claude to drive a natural language interface, and an LLM with fewer than 5B parameters (or even <1B) is going to be much faster than this.
And it's highly circumstantial anyway, since LLM efficiency keeps improving as the tech matures.
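To put rough numbers on the speed difference: single-stream decoding is largely memory-bandwidth-bound, so per-token latency scales roughly with model size. The figures below (fp16 weights, 100 GB/s of bandwidth, a dense 70B stand-in for a frontier model) are illustrative assumptions, not measurements:

```python
# Back-of-envelope decode throughput, assuming memory-bandwidth-bound
# generation: each token requires reading all model weights once.
# All numbers here are illustrative assumptions.

def tokens_per_second(params_billions, bytes_per_param=2, bandwidth_gb_s=100):
    """Rough tokens/sec: available bandwidth / bytes read per token."""
    model_bytes = params_billions * 1e9 * bytes_per_param
    return bandwidth_gb_s * 1e9 / model_bytes

small = tokens_per_second(0.5)  # sub-1B model
large = tokens_per_second(70)   # frontier-scale dense model (assumed size)
print(f"0.5B: ~{small:.0f} tok/s vs 70B: ~{large:.1f} tok/s")
```

Under these assumptions the sub-1B model decodes over 100x faster on the same hardware, which is why a small local model can feel instant where a frontier model feels sluggish.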