Comment by leptons

4 days ago

A lot has been written about LLMs having reached a plateau with regard to improvements. They still all produce garbage far too often. LLMs have fundamental limitations that can't really be fixed. Garbage in / garbage out also applies, and that is only getting worse as LLMs are trained on the ever-growing volume of "AI" slop that is permeating everything lately.