Comment by danenania
1 year ago
> the "reasoning" they do is really just parroting a weighted average (with randomness injected) of the matching training data
Perhaps our brains are doing exactly the same, just with more sophistication?
1 year ago
> the "reasoning" they do is really just parroting a weighted average (with randomness injected) of the matching training data
> Perhaps our brains are doing exactly the same, just with more sophistication?
No.
We know how current deep learning neural networks are trained.
We know definitively that this is not how brains learn.
Understanding requires learning. Dynamic learning. In order to experience something, an entity needs to be able to form new memories dynamically.
This does not happen anywhere in current tech. It's faked in some cases, but no, it doesn't really happen.
> We know definitively that this is not how brains learn.
Ok then, I guess the case is closed.
> an entity needs to be able to form new memories dynamically.
LLMs can form new memories dynamically. Just pop some new data into the context.
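Concretely, that "memory" is just text prepended to the prompt on every call; a minimal sketch in plain Python (no particular vendor's API, the helper names are only for illustration):

    # "Memory via context": new facts are prepended to the prompt text on
    # every call; the model's weights are never touched.
    facts = []

    def remember(fact):
        facts.append(fact)

    def build_prompt(question):
        memory = "\n".join("- " + f for f in facts)
        return "Known facts:\n" + memory + "\n\nQuestion: " + question + "\n"

    remember("The user's cat is named Ada.")
    print(build_prompt("What is my cat's name?"))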
> LLMs can form new memories dynamically. Just pop some new data into the context.
No, that's an illusion.
The LLM itself is static. The context acts as a sort of temporary memory, but it doesn't change the learned weights of the network at all.
I don't get why people who don't understand what's happening keep arguing that these systems are the sci-fi interpretation of AI. They're not. At least not yet.
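To make "static" concrete, a minimal sketch (assuming PyTorch, with nn.Linear standing in for the whole network): the forward pass that "reads" the context leaves every weight exactly as it was.

    import torch
    import torch.nn as nn

    model = nn.Linear(8, 8)                   # stand-in for a trained network
    before = model.weight.clone()

    with torch.no_grad():                     # standard inference: no gradients, no updates
        _ = model(torch.randn(1, 8))          # "reading the context" is just a forward pass

    assert torch.equal(before, model.weight)  # nothing was learned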
1 reply →
> We know definitively that this is not how brains learn.
So you have a mechanistic, formal model of how the brain functions? That's news to me.
Your brain was first trained by reading all of the Internet?
Anyway, the question of whether computers can think is as interesting as the question of whether submarines can swim.
5 replies →
There's no way brains have the "right answers" fed into them as required by backpropagation.
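A minimal sketch of that requirement (PyTorch again, a toy model): no gradient exists until an explicit target, the "right answer", is supplied from outside.

    import torch
    import torch.nn as nn

    model = nn.Linear(4, 1)
    x = torch.randn(1, 4)
    target = torch.tensor([[1.0]])   # the "right answer", provided externally

    loss = nn.functional.mse_loss(model(x), target)
    loss.backward()                  # gradients flow only because a target was given
    print(model.weight.grad)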
1 reply →
Every single discussion of ‘AGI’ has endless comments exactly like this. Whatever criticism is made of an attempt to produce a reasoning machine, there’s always inevitably someone who says ‘but that’s just what our brains do, duhhh… stop trying to feel special’.
It’s boring, and it’s also completely content-free. This particular instance doesn’t even make sense: how can it be exactly the same, yet more sophisticated?
Sorry.
The problem is that we currently lack good definitions for crucial words such as "understanding", and we don't know how brains work, so nobody can objectively tell whether a spreadsheet "understands" anything better than our brains do. That makes these kinds of discussions quite unproductive.
I can’t define ‘understanding’ but I can certainly identify a lack of it when I see it. And LLM chatbots absolutely do not show signs of understanding. They do fine at reproducing and remixing things they’ve ‘seen’ millions of times before, but try asking them technical questions that involve logical deduction or an actual ability to do on-the-spot ‘thinking’ about new ideas. They fail miserably. ChatGPT is a smooth-talking swindler.
I suspect those who can’t see this are either
(a) software engineers amazed that a chatbot can write code, despite it having been trained on an unimaginably massive (morally ambiguously procured) dataset that probably already contains something close to the boilerplate they want anyway, or
(b) people who don’t have a sufficient level of technical knowledge to ask probing enough questions to expose the weaknesses. That is, anything you might ask is either so open-ended that almost anything coherent will look like a valid answer (this is most questions you could ask, outside of seriously technical fields) or has already been asked countless times before and is explicitly part of the training data.
4 replies →
As the comment I replied to very correctly said, we don’t know how the brain produces cognition. So you certainly cannot discard the hypothesis that it works through “parroting” a weighted average of training data just as LLMs are alleged to do.
Considering that LLMs with a much smaller number of neurons than the brain are in many cases producing human-level output, there is some evidence, if circumstantial, that our brains may be doing something similar.
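For rough scale (commonly cited order-of-magnitude figures, not exact numbers): a large LLM has on the order of 10^11 weights, while the brain has roughly 10^11 neurons and 10^14 synapses, the closer analogue to weights.

    llm_weights    = 1.8e11   # a large 2023-era model, rough figure
    brain_neurons  = 8.6e10   # ~86 billion neurons (common estimate)
    brain_synapses = 1.0e14   # ~10^14 synapses, the closer analogue to weights

    print("synapses per LLM weight: %.0f" % (brain_synapses / llm_weights))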
LLMs don't have neurons. That's just marketing lol.
"A neuron in a neural network typically evaluates a sequence of tokens in one go, considering them as a whole input." -- ChatGPT
You could consider an RTX 4090 to be one neuron too.
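For reference, a "neuron" in this context just means a weighted sum pushed through a nonlinearity; a minimal sketch:

    import math

    def neuron(inputs, weights, bias):
        # Weighted sum of inputs plus a bias, squashed by a sigmoid activation.
        z = sum(i * w for i, w in zip(inputs, weights)) + bias
        return 1.0 / (1.0 + math.exp(-z))

    print(neuron([0.5, -1.0], [0.8, 0.3], 0.1))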
2 replies →
> in many cases producing human-level output
They’re not, unless you blindly believe OpenAI press releases and crypto scammer AI hype bros on Twitter.