Comment by NBJack

2 months ago

Oof. This article misses some important details.

Training is not a "one-time cost". Training gets you a model that will likely need to be updated (or at least fine-tuned) on newer data. And before GPT-4, there was a series of prior, less effective models (likely swept under the rug in press releases) made by the same folks; those models helped them step forward but didn't achieve their end goals. And all of this is to say nothing of the arms race among the major players, all scrambling to outdo each other.

The article also needs to compare this to the efficiency modern search engines run at. A single traditional search query is far less expensive than a single LLM query.