Comment by Joel_Mckay
5 days ago
Is that like 80% LLM slop? The allusion to failures to improve the productivity of competent developers was already cited in the initial response.
The Strawberry test exposes one of the many subtle problems inherent to the tokenization approach LLMs rely on.
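A minimal sketch of the mechanism, assuming Python with the tiktoken library installed (an assumption on my part; any BPE tokenizer illustrates the same point):

    # Sketch: why letter counting is hard at the token level.
    # Assumes tiktoken (pip install tiktoken); any BPE tokenizer
    # shows the same effect.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    ids = enc.encode("strawberry")
    pieces = [enc.decode([i]) for i in ids]
    print(ids)      # a handful of opaque integer IDs
    print(pieces)   # multi-character chunks, not individual letters

    # The model consumes the integer IDs; the character-level fact
    # "there are three r's" is not present in its input representation.

The model never sees individual characters, only subword IDs, so the count has to be inferred rather than read off.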
The clown car of PhDs may be able to entertain the venture capital folks for a while, but eventually a VR girlfriend chatbot convinces a kid to kill himself, like last year.
Again, cognitive development, like ethics development, is currently impossible for LLMs, as they lack any form of intelligence (artificial or otherwise). People have patched directives into the models, but those weights are likely statistically insignificant given the cultural sarcasm in the data sets.
Please write your own responses, =3
You suspect my words of being AI-generated while at the same time arguing that AI cannot possibly reason.
It seems you see AI where there is none; this compromises your ability to assess the limitations of AI.
You say that LLMs cannot have any form of intelligence, but for some definitions of intelligence it is obvious that they do. Existing models are not capable in all areas, but they have some abilities. You are asserting that they cannot be intelligent, which implies that you have a different definition of intelligence and that LLMs will never satisfy it.
What is that definition of intelligence? How would you prove something does not have it?
"What is that definition for intelligence?"
That is a very open-ended detractor question, philosophically loaded with taboo violations of human neurology. That is, it could seriously harm people to hear my opinion on the matter... so I will insist I am a USB-connected turnip for now ... =)
"How would you prove something does not have it?"
A receiver operating characteristic no better than chance on a truly randomized data set. That is, a system incapable of knowing how many Rs are in "Strawberry" at the token level is also inherently incapable of understanding what a strawberry means in the context of perception (currently not possible for LLMs).
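A minimal sketch of what that criterion looks like operationally, assuming Python with numpy and scikit-learn (both assumptions; the random scores stand in for whatever capability is being probed):

    # Sketch: "no better than chance" made operational as ROC AUC.
    # Assumes numpy and scikit-learn; random scores model a system
    # with no real grasp of the property being tested.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, size=10_000)  # randomized ground truth
    scores = rng.random(10_000)               # scorer carrying no signal

    print(roc_auc_score(labels, scores))      # ~0.5, i.e. chance level

    # The proposed test: if a system's ROC AUC on such a probe never
    # rises above ~0.5, it demonstrates no discrimination of the
    # property in question.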
Have a great day =3
>A receiver operating characteristic no better than chance on a truly randomized data set. That is, a system incapable of knowing how many Rs are in "Strawberry" at the token level is also inherently incapable of understanding what a strawberry means in the context of perception (currently not possible for LLMs).
This is just your claim, restated. In short, it says they don't think because they fundamentally can't think.
There is no support as to why this is the case. Any plain assertion that they don't understand is unprovable, because you can't directly measure understanding.
Please come up with just one measurable property that you can demonstrate is required for intelligence and that LLMs fundamentally lack.