Comment by CamperBob2
5 days ago
Wait and see. You're not paying attention now, but it's not too late to start.
Go to your favorite programming puzzle site and see how you do against the latest models. If you can beat o1-pro regularly, then you're right: you have nothing to worry about and can safely disregard it. Same proposition that was offered to John Henry.
Please reformulate your argument, and I will check back tomorrow:
https://www.youtube.com/watch?v=aNSHZG9blQQ
LLMs are rules-based search engines, with high-dimensional vector spaces encoding related topics. There is nothing intelligent about these algorithms, except the trick one plays on oneself by interpreting well-structured nonsense.
It is stunting kids' development, as students often lack the ability to intuitively recognize when they are being misled. "How many R's are in 'strawberry'?" is a classic example exposing the underlying pattern-recognition failures. =3
I have never understood why the failure to answer the strawberry question is seen as a compelling argument about the limits of AI. The models that suffer from this problem have difficulty counting; that has never been denied. Those models also do not see the letters of the words they are processing, so it is unsurprising that they fail at counting the letters in a word. I would say it is more surprising that they can perform spelling tasks at all. More importantly, the models where such weaknesses became apparent are all from the same timeframe: they had advanced so much that these weaknesses became visible only after so many greater weaknesses had been overcome.
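The point about models not seeing letters can be sketched in a few lines. The token split below is a hypothetical BPE-style segmentation and the IDs are made up for illustration; real tokenizers differ per model.

```python
# Character-level counting: trivial when you can actually see the letters.
word = "strawberry"
char_count = word.count("r")
print(char_count)  # 3

# A hypothetical BPE-style tokenization (illustrative only).
# The model consumes opaque integer IDs, not the letters inside each token.
tokens = ["str", "aw", "berry"]
token_ids = [1042, 675, 8930]  # made-up IDs standing in for the model's view

# From the IDs alone, counting "r"s requires having memorized the
# spelling of each token -- the letters are not directly observable.
print(len(token_ids))  # 3 tokens, none of which expose individual letters
```

The mismatch between what we ask ("count the letters") and what the model receives (a short sequence of IDs) is why letter-counting is a poor proxy for reasoning ability.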
People didn't think that planes flying so high that pilots couldn't breathe exposed a fundamental limitation of flight, just that their success had revealed the next hurdle.
The assertion that an LLM is X and therefore not intelligent is not a useful claim to make without both proof that it is X and proof that X is insufficient for intelligence. You could equally say that brains are interconnected cells that send pulses at intervals dictated by a combination of the pulses they sense, and that there is nothing intelligent about that. The premises must be true, and you have to demonstrate that the conclusion follows from them. For the record, I think your premises are false and your conclusion doesn't follow.
Without a proof, you could hypothesise reasons why such a system might not be intelligent and propose a task that no system satisfying the premises could accomplish. While that task remains unsolved, the hypothesis remains unrefuted. What would you suggest as a test that poses a problem such a machine could not solve? It must be solvable by at least one intelligent entity, to show that it is solvable by intelligence, and it must be undeniable when the problem is solved.
Nope, it's not a counting problem. It's a reasoning problem. Thing is, no matter how much hype they get, these AIs have no reasoning capabilities at all, and they can fail in the silliest ways. Same as with Larry Ellison: don't fall into the trap of anthropomorphizing the AI.
Is that like 80% LLM slop? The allusion to failures to improve productivity in competent developers was cited in the initial response.
The strawberry test exposes one of the many subtle problems inherent in LLMs' tokenization approach.
The clown car of PhDs may be able to entertain the venture capital folks for a while, but eventually a VR girlfriend chatbot convinces a kid to kill themselves, as happened last year.
Again, cognitive development, like ethics development, is currently impossible for LLMs, as they lack any form of intelligence (artificial or otherwise). People have patched directives into the models, but those weights are likely statistically insignificant given the cultural sarcasm in the data sets.
Please write your own responses. =3
(Shrug) If you're retired or independently wealthy, you can afford that attitude. Hopefully one of those describes you.
Otherwise, you're going to spend the rest of your career saying things like, "Well, OK, so the last model couldn't count the number of Rs in 'Strawberry' and the new one can, but..."
Personally, I dislike being wrong. So I don't base arguments on points that have a built-in expiration date, or that are based on a fundamental misunderstanding of whatever I'm talking about.
Every model is deprecated in time if evidence-based science is done well, and hopefully replaced by something more accurate. There is no absolute right/correctness, unless you are a naive child under 25 cheating on structured homework.
The point was that there is nothing intelligent (or "AI") about LLMs, except the person fooling themselves.
In general, most template libraries already implement the best-known algorithms from the 1960s, tuned with architecture-specific optimizations. Knowing when each of the finite options is appropriate takes a bit of understanding/study, but it gives results far quicker than fitting a statistically salient nonsense answer. Several studies of senior developers are already available, and they show LLMs provide zero benefit to people who know what they are doing.
Note, I am driven by having fun rather than some bizarre, irrational competitiveness. Prove your position, or I will assume you are just a silly person or a chatbot. =3