Comment by D-Machine

16 days ago

Sorry, but using "LLM" when you mean "AI" is a basic failure to understand simple definitions. It also ignores the meat of the blog post and much of the discussion here, namely that LLMs are limited precisely because they are only (or mostly) trained on language.

Everything you are saying is either incoherent, because you actually mean "AI" or "transformer", or just plain wrong: not all problems can be solved by single-channel, recursively applied transformers, as I mention elsewhere in this thread: https://news.ycombinator.com/item?id=46948612. The design of LLMs absolutely determines the range of their applicability and the class of problems they are best suited for. This isn't even a controversial take; plenty of influencers, and certainly most serious researchers, recognize the fundamental limitations of the LLM approach to AI.

You literally have no idea what you are talking about. You clearly do not read or understand the actual papers where these models are developed; you are just repeating simplistic metaphors from blog posts and buying into marketing.