Comment by elternal_love
3 days ago
Here we go towards really smart robots. It's interesting what kinds of different model chips they can produce.
There is nothing smart about current LLMs. They just regurgitate text compressed in their memory based on probability. None of the LLMs currently have actual understanding of what you ask them to do and what they respond with.
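The "based on probability" part of the claim above can be made concrete. A minimal sketch of next-token sampling, with an invented three-word vocabulary and made-up probabilities (a real LLM's distribution comes from a neural network over ~100k tokens):

```python
import random

def sample_next_token(probs, rng):
    """Pick one token from a {token: probability} dict, weighted by probability."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
# Toy distribution for illustration only.
distribution = {"cat": 0.6, "dog": 0.3, "fish": 0.1}
token = sample_next_token(distribution, rng)
print(token)  # one of "cat", "dog", "fish"
```

Whether repeatedly sampling from such distributions counts as "understanding" is exactly what the rest of this thread argues about.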
If LLMs just regurgitate compressed text, they'd fail on any novel problem not in their training data. Yet, they routinely solve them, which means whatever's happening between input and output is more than retrieval, and calling it "not understanding" requires you to define understanding in a way that conveniently excludes everything except biological brains.
I somewhat agree with you, but I also realise that there are very few "novel" problems in the world. I think it's really just more complex problem spaces.
Same relative logic, just more of it: more steps or trials.
Yes, there are some fascinating emergent properties at play, but when they fail it's blatantly obvious that there's no actual intelligence or understanding. They are very cool and very useful tools. I use them on a daily basis now, and the way I can just paste a vague screenshot with some vague text and they get it and give a useful response blows my mind every time. But it's very clear that it's all just smoke and mirrors: they're not intelligent, and you can't trust them with anything.
4 replies →
> they'd fail on any novel problem not in their training data
Yes, and that's exactly what they do.
No, none of the problems you gave to the LLM while toying around with them are in any way novel.
2 replies →
They don't solve novel problems. But if you have such strong belief, please give us examples.
1 reply →
We know that, but that does not make them useless. The opposite, in fact: they are extremely useful in the hands of non-idiots. We just happen to have an oversupply of idiots at the moment, which AI is here to eradicate. /Sort of satire.
So you are saying they are like copy, that LLMs will just copy some training data back to you? Why do we spend so much money training and running them if they "just regurgitate text compressed in their memory based on probability"? Billions of dollars to build a lossy grep.
I think you are confused about LLMs: they take in context, and that context makes them generate new things. For existing things we have cp. By your logic, pianos can't be creative instruments because they just produce the same 88 notes.
I have a gut feeling that a huge portion of the deficiencies we note with AI is just a reflection of the training data. For instance, the wiki/reddit/etc internet is just a soup of human descriptions of the world model, not the actual world model itself. There are gaps and holes in the knowledge, because a codified summary of the world captures what is remarkable to us humans, not a 100% faithful, comprehensive description of the world. What is obvious to humans with lived real-world experience often does not make it into the training data. A simple, demonstrable example is whether one should walk or drive to the car wash.
That's not how they work. Pro tip: maybe don't comment until you have a good understanding?
Would you mind rectifying the wrong parts then?
2 replies →
Huh? Their words are an accurate, if simplified, description of how they work.
1 reply →
Just HI slop. Ask any decent model; it can explain what's wrong with this description.