Comment by bigmadshoe
10 days ago
We don't have a complete enough theory of neuroscience to conclude that much of human "reasoning" is not "algorithmic pattern matching mixed with statistical likelihoods of success".
Regardless of how it models intelligence, why is it not AI? Do you mean it is not AGI? A system that can take a piece of text as input and output a reasonable response is obviously exhibiting some form of intelligence, regardless of the internal workings.
I always wonder where people get their confidence from. We know so little about our own cognition: what makes us tick, how consciousness emerges, how our thought processes fundamentally work. We don't even know why we dream. Yet people proclaim loudly that X clearly isn't intelligent. Ok, but based on what?
A more reasonable application of Occam's razor is that humans also don't meet the definition of "intelligence". Reasoning and perception are separate faculties and need not align. Just because we feel like we're making decisions doesn't mean we are.
It’s easy to attribute intelligence to these systems. They have a flexibility and unpredictability that hasn't typically been associated with computers, but it all rests on (relatively) simple mathematics. We know this is true. We also know that means it has limitations and can't actually reason about information. The corpus of work is huge, and that allows the results to be pretty striking, but once you do hit a corner with any of this tech, it can't simply reason about the unknown. If it's not in the training data, or the training data is outdated, it will not be able to course correct at all. Thus, it lacks reasoning capability, which is a fundamental attribute of any form of intelligence.
> it all rests on (relatively) simple mathematics. We know this is true. We also know that means it has limitations and can't actually reason about information.
What do you imagine is happening inside biological minds that enables reasoning, if it is something other than a lot of "simple mathematics"?
You state that because it is built up of simple mathematics it cannot be reasoning, but this does not follow at all, unless you can posit some other mechanism that gives rise to intelligence and reasoning and that cannot be modelled mathematically.
Because what's inside our minds is more than mathematics, or we would be able to explain human behavior with the purity of mathematics, and so far, we can't.
We can prove the behavior of LLMs with mathematics, because their foundations are constructed. That also means they have the same limits as anything else we use applied mathematics for. Is the broad market-analysis software that HFT firms use to make automated trades also intelligent?