Comment by jw1224
2 years ago
> The human mind is not, like ChatGPT and its ilk, a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question.
If that’s not the case then what, exactly, are we doing when asked to respond to a question?
> Because these programs cannot explain the rules of English syntax, for example, they may well predict, incorrectly, that “John is too stubborn to talk to” means that John is so stubborn that he will not talk to someone or other (rather than that he is too stubborn to be reasoned with). Why would a machine learning program predict something so odd?
They don’t [0].
> True intelligence is also capable of moral thinking. […] But the programmers of ChatGPT and other machine learning marvels have struggled — and will continue to struggle — to achieve this kind of balance.
ChatGPT’s morality filters are outstanding. Yes, “jailbreaks” exist… But any true intelligence would be capable of using language to explore ideas which may be immoral.
[0] https://twitter.com/jayelmnop/status/1633635146263052288
It's not entirely clear what our brains do, but it is clear that it's not the same as something like ChatGPT, even just from a structural point of view. I'm sure there is some sort of statistical pattern matching going on in the brain, but there are plenty of examples of things our brain can do that ChatGPT cannot.
E.g. something as simple as adding numbers. Yes, it can handle many additions, but ask it to add two large numbers and it will fail. In fact, even if you ask it to work through the addition step by step, it will give an illogical and obviously wrong answer.
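One straightforward way to test that claim is to compare a model's answer against exact integer arithmetic, which Python handles natively at any size. The sketch below is a hypothetical harness: `ask_model` is a stand-in for whatever chat interface is being tested, and only the verification logic is concrete.

```python
import random

def ask_model(prompt: str) -> str:
    # Hypothetical stand-in for a chat-model call; wire up your own client here.
    raise NotImplementedError

def check_addition(digits: int = 15) -> bool:
    """Ask the model to add two random large numbers and verify the answer exactly."""
    a = random.randrange(10 ** (digits - 1), 10 ** digits)
    b = random.randrange(10 ** (digits - 1), 10 ** digits)
    reply = ask_model(f"What is {a} + {b}? Answer with only the number.")
    claimed = int("".join(ch for ch in reply if ch.isdigit()))
    # Python ints are arbitrary precision, so a + b is the exact ground truth.
    return claimed == a + b

# Running check_addition() many times and tallying the failures makes the claim measurable.
```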
You're using the fact that the human brain has greater capability than ChatGPT as an argument that it's doing something qualitatively different.
This isn't enough of an argument. ChatGPT has greater capability than the smaller language models that preceded it, and it can do tasks that they couldn't, but it is not qualitatively different; it differs mainly in the amount of information that has been encoded into it.
It is extremely probable that the next generation of large language models will be able to do things that ChatGPT struggles with. Perhaps those new capabilities will overlap much more with the human brain's capabilities than we expect.
I just want to point out that GPT isn't a great model for math, and for at least a year we've had better models:
> Although LLMs can sometimes answer these types of question correctly, they more often get them wrong. In one early test of its reasoning abilities, ChatGPT scored just 26% when faced with a sample of questions from the ‘MATH’ data set of secondary-school-level mathematical problems.
> But back in June 2022, an LLM called Minerva, created by Google, had already defied these expectations — to some extent. Minerva scored 50% on questions in the MATH data set, a result that shocked some researchers in artificial intelligence (AI; see ‘Minerva’s mathematics test’).
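The percentages quoted there are just accuracy over the sampled problems: answers matching the reference solutions, divided by the number of questions asked. A minimal sketch of that kind of scoring, with made-up answers purely for illustration (real MATH grading also normalises the formatting of final answers):

```python
def accuracy(model_answers: list[str], reference_answers: list[str]) -> float:
    """Fraction of problems where the model's final answer matches the reference."""
    correct = sum(m.strip() == r.strip()
                  for m, r in zip(model_answers, reference_answers))
    return correct / len(reference_answers)

# Illustrative values only -- not actual MATH-data-set items or model outputs.
print(accuracy(["42", "7", "x=3", "18"], ["42", "9", "x=3", "17"]))  # 0.5, i.e. 50%
```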
> It's not entirely clear what our brains do, but it is definitely clear it's not the same as something like ChatGPT
As a young child I was bad at math. Over many years I learnt to recognise patterns and understand the steps required to solve more complex formulae.
Today, I can solve 1287 + 9486 in my head. But ask me to divide those two numbers, and I’d use a calculator.
My brain is optimised for linguistic, probabilistic thinking — just like an LLM.
ChatGPT might not replace a deterministic calculator, but nor do we.
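As a footnote to the example above, this is what the deterministic calculation gives for those two numbers (a couple of lines of Python standing in for the calculator):

```python
print(1287 + 9486)  # 10773 -- the sum done in one's head above
print(1287 / 9486)  # ~0.1357 -- the division handed off to a calculator
```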