Comment by Mordisquitos

1 year ago

I think that the article is correct. There are indeed things that LLMs will never be able to do, at least not consistently, however much the hardware improves or however much more material they are trained on.

How come? Note my emphasis on the second 'L'. I'm not saying that there are things that AI models will never be able to do; I'm saying that there are things that Large Language Models specifically will never be able to do.

Training LLMs is often claimed to be analogous to human learning, most often as a defence against accusations of copyright infringement, on the grounds that human creativity is likewise built on exposure to copyrighted material. However, that is a red herring.

The responses from ever more powerful LLMs are indeed impressive, and beyond what an overwhelming majority of us believed possible just 5 years ago. They are nearing, and sometimes surpassing, the performance of educated humans in certain areas, so how can I argue that they are limited? Consider it from the other side: how can an educated human create something as good as an LLM can, when that human's brain has been "trained" on an infinitesimal fraction of the material used to train even the first release of ChatGPT?

That is because LLMs neither learn nor reason like humans: they do not have opinions, do not have intentions, do not have doubts, do not have curiosity, do not have values, do not have a model of mind — they have tokens and probabilities.
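
To make the "tokens and probabilities" point concrete, here is a toy Python sketch of how next-token generation works in principle: at each step the model's state boils down to a probability distribution over its vocabulary, from which the next token is sampled. The candidate tokens and logit values below are invented for illustration and are not taken from any real model.

```python
# Toy illustration (not any real model): an LLM's "knowledge" at each step
# reduces to a probability distribution over its vocabulary of tokens.
import math
import random

# Hypothetical logits a model might assign to candidate next tokens
# after the prompt "The capital of France is"
logits = {"Paris": 9.1, "Lyon": 4.2, "London": 2.7, "banana": -3.0}

# Softmax turns logits into probabilities
z = max(logits.values())
exps = {tok: math.exp(v - z) for tok, v in logits.items()}
total = sum(exps.values())
probs = {tok: e / total for tok, e in exps.items()}

# Generation is just sampling from that distribution
next_token = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs)        # e.g. {'Paris': 0.99, 'Lyon': 0.007, ...}
print(next_token)   # almost always 'Paris'
```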

For an AI model to be able to do certain things that humans can do, it needs many of those human characteristics that let us pull off impressive mental feats despite having absorbed barely any training material (compared to LLMs) and being virtually unable to remember most of it, let alone verbatim. Such an AI model is surely possible, but it needs a completely different paradigm from straightforward LLMs. That said, a Language Model will almost certainly be a necessary module of such an AI, but it will not be sufficient.

I don't think values, opinions or things like that are needed at all. These are just traits we have in order to function in, and together with, society.

Also, doubt is just uncertainty and can be represented as a probability. In fact, values and everything else can likewise be expressed as numerical probabilities, which is how I personally prefer to treat them as well.
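
If one does treat doubt as a numerical quantity, a common way to score it is the entropy of a probability distribution: the more spread out the distribution, the higher the "doubt". The sketch below only illustrates that idea; both distributions are made up.

```python
# Doubt as uncertainty: Shannon entropy of a probability distribution.
import math

def entropy(probs):
    """Entropy in bits; higher means more uncertainty ("doubt")."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

confident = [0.97, 0.01, 0.01, 0.01]   # mass concentrated on one answer
doubtful  = [0.30, 0.28, 0.22, 0.20]   # mass spread across the options

print(entropy(confident))  # ~0.24 bits -> little "doubt"
print(entropy(doubtful))   # ~1.98 bits -> a lot of "doubt"
```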

  • Values and opinions drive human attention, which, as transformers demonstrate, is relevant to reasoning.
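
For reference, this is roughly the attention operation that point alludes to: the scaled dot-product attention from the transformer architecture, shown as a minimal NumPy sketch. The shapes and inputs are arbitrary toy data, not taken from any particular model.

```python
# Minimal sketch of scaled dot-product attention ("Attention Is All You Need").
import numpy as np

def attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V — each query attends over all keys."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # similarity of queries to keys
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                               # weighted mix of the values

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))   # 3 query positions, dimension 4
K = rng.normal(size=(5, 4))   # 5 key positions
V = rng.normal(size=(5, 4))   # values aligned with the keys
print(attention(Q, K, V).shape)  # (3, 4)
```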