Comment by 01100011

6 months ago

I find these LLM doomer takes just as silly as the LLM maximalist takes.

LLMs are literally performing useful functions today and they're not going away. Are they AGI? No, but so what?

There is waaay too much projecting and philosophizing going on in this thread and not enough engineering-minded analysis from objective observers.

Is AI hyped? Sure. Are LLMs overshadowing other approaches? Sure. Are LLMs inefficient? Somewhat. Do they have problems like hallucinations? Yes. Do they produce useful output? Yes.

Which "literally useful" functions, worth the trillions needed for ROI, are you talking about? What are the numbers? How did you measure them? Please share!