Comment by keiferski
I don’t see how being critical of this is a knee jerk response.
Thinking, like intelligence and many other words designating complex things, isn’t a simple topic. The word and concept developed in a world where it referred to human beings, and in a lesser sense, to animals.
To simply disregard that entire conceptual history and say, “well it’s doing a thing that looks like thinking, ergo it’s thinking” is the lazy move. What’s really needed is an analysis of what thinking actually means, as a word. Unfortunately everyone is loath to argue about definitions, even when that is fundamentally what this is all about.
Until that conceptual clarification happens, you can expect endless messy debates with no real resolution.
“For every complex problem there is an answer that is clear, simple, and wrong.” - H. L. Mencken
It may be that this tech produces clear, rational, chain-of-logic writeups, but it's not clear that, just because we also do that after thinking, it is only thinking that produces writeups.
It's possible there is much thinking that does not happen with written word. It's also possible we are only thinking the way LLMs do (by chaining together rationalizations from probable words), and we just aren't aware of it until the thought appears, whole cloth, in our "conscious" mind. We don't know. We'll probably never know, not in any real way.
But it sure seems likely to me that we trained a system on the output to circumvent the process/physics because we don't understand that process, just as we always do with ML systems. Never before have we looked at image classifications and decided that's how the eye works, or protein folding and decided that's how biochemistry works. But here we are with LLMs - surely this is how thinking works?
Regardless, I submit that we should always treat human thought/spirit as unknowable and divine and sacred, and that anything that mimics it is a tool, a machine, a deletable and malleable experiment. If we attempt to equate human minds and machines, there are other problems that arise, and none of them good - either the elevation of computers as some kind of "super", or the degradation of humans as just meat matrix multipliers.
The contrast between your first and last paragraph is... unexpected
> It may be that this tech produces clear, rational, chain-of-logic writeups, but it's not clear that, just because we also do that after thinking, it is only thinking that produces writeups.
I appreciate the way you describe this idea; I find it likely I'll start describing it the same way. But then you go on to write:
> Regardless, I submit that we should always treat human thought/spirit as unknowable and divine and sacred, and that anything that mimics it is a tool, a machine, a deletable and malleable experiment. If we attempt to equate human minds and machines, there are other problems that arise, and none of them good - either the elevation of computers as some kind of "super", or the degradation of humans as just meat matrix multipliers.
Which I find to be the exact argument that you started by discarding.
It's not clear that equating organic and synthetic thought will have any meaningful outcome at all, let alone one worthy of the baseless anxiety that it must be bad. Equally, it seems absolutely insane to claim that anything is unknowable, and that because humanity doesn't have a clear foundational understanding we should pretend it's either divine or sacred. Having spent any time watching the outcomes of the thoughts of people, neither divine nor sacred is a reasonable attribute to apply; more importantly, I'd submit that you shouldn't be afraid to explore things you don't know, and you shouldn't advocate for others to adopt your anxieties.
> It's not clear that equating organic and synthetic thought will have any meaningful outcome at all,
I agree! I'm saying "If we equate them, we shortcut all the good stuff, e.g., understanding", because "it may be that this tech produces what we can, but that doesn't mean we are the same", which is good because it keeps us learning vs. reducing all of "thinking" to just "whatever the latest ChatGPT does". We have to continue to believe there is more to thinking, if only because it pushes us to make it better and to keep "us" as the benchmark.
Perhaps I chose the wrong words, but in essence what I'm saying is that giving up agency to a machine that was built to mimic our agency (by definition as a ML system) should be avoided at all costs.
> Never before have we looked at image classifications and decided that's how the eye works
Actually we have, several times. But the way we arrived at those conclusions is worth observing:
1. ML people figure out how the ML mechanism works.
2. Neuroscientists independently figure out how brains do it.
3. Observe any analogies that may or may not exist between the two underlying mechanisms.
I can't help but notice how that's a scientific way of doing it. By contrast, the way people arrive at similar conclusions when talking about LLMs tends to consist of observing that two things are cosmetically similar, so they must be the same. That's not just pseudoscientific; it's the mode of reasoning that leads people to believe in sympathetic magic.
So it seems to be a semantics argument. We don't have a name for a thing that is "useful in many of the same ways 'thinking' is, except not actually consciously thinking".
I propose calling it "thunking"
I don't like it for a permanent solution, but "synthetic thought" might make a good enough placeholder until we figure this out. It feels most important to differentiate because I believe some parties have a personal interest in purposely confusing human thought with whatever LLMs are doing right now.
This is complete nonsense.
If you do math in your head or math with a pencil/paper or math with a pocket calculator or with a spreadsheet or in a programming language, it is all the same thing.
The only difference with LLMs is the anthropomorphization of the tool.
agreed.
also, sorry but you (fellow) nerds are terrible at naming.
while "thunking" possibly name-collides with "thunks" from CS, the key is that it is memorable, two syllables, a bit whimsical, and just different enough to indicate both its source meaning and some possible unstated difference. Plus it reminds me of "clunky", which is exactly what it is - "clunky thinking", aka "thunking".
And frankly, the idea it's naming is far bigger than what a "thunk" is in CS
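(For anyone rusty on the CS term being collided with: a thunk is just a deferred computation wrapped in a parameterless function, evaluated later and usually cached. A minimal, purely illustrative Python sketch; the name make_thunk is my own:

    # A "thunk" in the CS sense: a zero-argument closure that wraps a
    # computation so it can be forced later, once, and then cached.
    def make_thunk(fn, *args):
        result, done = None, False
        def thunk():
            nonlocal result, done
            if not done:               # force the deferred computation once
                result, done = fn(*args), True
            return result              # later calls reuse the cached value
        return thunk

    expensive = make_thunk(sum, range(10_000_000))  # nothing computed yet
    expensive()                                     # forced here; repeat calls are free

Not much overlap with what an LLM does, which is part of why the collision doesn't bother me.)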
They moved the goalposts. Linux and worms think too; the question is how smart they are. And if you assume consciousness has no manifestation even in the case of humans, caring about it is pointless too.
What does it mean to assume consciousness has no manifestation even in the case of humans? Is that denying that we have an experience of sensation like colors, sounds, or that we experience dreaming, memories, inner dialog, etc?
That's prima facie absurd, so I don't know what it means. You would have to be a philosophical zombie to make such an argument.
Yes, worms think, let the computers have thinking too, the philosophers can still argue all they want about consciousness.
Humans are special, we emit meaning the way stars emit photons, we are rare in the universe as far as empirical observation has revealed. Even with AGI the existence of each complex meaning generator will be a cosmic rarity.
For some people that seems not to be enough; due to their factually wrong world views they see themselves as common and worthless (when they empirically aren't) and need this little psychological boost of unexaminable metaphysical superiority.
But there is an issue, of course: the type of thinking humans do is dangerous but net positive and relatively stable. We have a long history in which most instantiations of humans can persist and grow themselves and the species as a whole; we have a track record.
These new models do not. People have brains that, as they stop functioning, stop persisting the apparatus that supports the brain, and they die; people tend to become less capable and active as their thinking deteriorates, and hold less influence over others, except in rare cases.
This is not the case for an LLM: they seem to be able to hallucinate endlessly and, as long as they have access to the outside world, maintain roughly the same amount of causal leverage; the clarity and accuracy of their thinking isn't tied to their persisting.
Clinking? Clanker Thunking?
Close. Clanking.
But we don't have a more rigorous definition of "thinking" than "it looks like it's thinking." You are making the mistake of accepting that a human is thinking by this simple definition, but demanding a higher more rigorous one for LLMs.
I agree. The mechanism seems irrelevant if the results are the same. If it’s useful in the exact way that human thinking is useful then it may as well be thinking. It’s like a UFO pulling itself through the sky using gravitational manipulation while people whine that it’s not actually flying.
If we cannot say they are "thinking" or "intelligent" while we do not have a good definition--or, even more difficult, unanimous agreement on a definition--then the discussion just becomes about output.
They are doing useful stuff, saving time, etc., which can be measured. Thus the definition of AGI has also largely become: "can produce or surpass the economic output of a human knowledge worker".
But I think this detracts from the more interesting discussion of what they are, more essentially. So, while I agree that we should push on getting our terms defined, I'd rather work with a hazy definition than derail so many AI discussions into mere economic output.
Here's a definition: how impressive is the output relative to the input? And by input, I don't just mean the prompt, but all the training data itself.
Do you think someone who has only ever studied pre-calc would be able to work through a calculus book if they had sufficient time? How about a multi-variable calc book? How about grad-level mathematics?
IMO intelligence and thinking are strictly about this ratio: what can you extrapolate from the smallest amount of information possible, and why? From this perspective, I don't think any of our LLMs are remotely intelligent, despite what our tech leaders say.
Hear, hear!
I have long thought this, but not had as good a way to put it as you did.
If you think about geniuses like Einstein and Ramanujan, they understood things before they had the mathematical language to express them. LLMs are the opposite: they fail to understand things even after untold amounts of effort, training data, and training.
So the question is, how intelligent are LLMs when you reduce their training data and training? Since they rapidly devolve into nonsense, the answer must be that they have no internal intelligence.
Ever had the experience of helping someone who's chronically doing the wrong thing, only to eventually find they had an incorrect assumption, incorrect reasoning generating deterministically wrong answers? LLMs don't do that; they just lack understanding. They'll hallucinate unrelated things because they don't know what they're talking about - you may have also had this experience with someone :)
Animals think but come with instincts, which breaks the output-relative-to-input test you propose. Behaviors are essentially pre-programmed input from millions of years of evolution, stored in DNA/neurology. Their learning is thus typically associative and domain-specific, not abstract extrapolation.
A crow bending a piece of wire into a hook to retrieve food demonstrates a novel solution extrapolated from minimal, non-instinctive, environmental input. This kind of zero-shot problem-solving aligns better with your definition of intelligence.
I'm not sure I understand what you're getting at. You seem to be deliberately comparing apples and oranges here: for an AI, we're supposed to include the entire training set in the definition of its input, but for a human we don't include the entirety of that human's experience and only look at the prompt?
That's an okay-ish definition, but to me this is more about whether this kind of "intelligence" is worth it, not whether it is intelligence itself. The current AI boom clearly thinks it is worth putting in that much input to get the current frontier-model level of output. Also, don't forget the input scales across roughly 1B weekly users at inference time.
I would say a good definition has to, minimally, take on the Turing test (even if you disagree, you should say why). Or in current vibe parlance, it does "feel" intelligent to many people--they see intelligence in it. In my book this allows us to call it intelligent, at least loosely.
There are plenty of humans who will never "get" calculus, despite numerous attempts at the class and countless hours of 1:1 tutoring. Are those people not intelligent? Do they not think? We could say yes, they aren't, but by the metric of making money, plenty of people are smart enough to be rich, while college math professors aren't. And while that's a facile way of measuring someone's worth or their contribution to society (some might even say "bad"), it remains that even if someone can't understand calculus, some of them are intelligent enough to understand other humans well enough to get rich in some fashion that wasn't simply handed to them.
Yeah, that's compression. Although your later comments neglect the many years of physical experience that humans have as well as the billions of years of evolution.
And yes, by this definition, LLMs pass with flying colours.
This feels too linear. Machines are great at ingesting huge volumes of data, following relatively simple rules and producing optimized output, but are LLMs sufficiently better than humans at finding windy, multi-step connections across seemingly unrelated topics & fields? Have they shown any penchant for novel conclusions from observational science? What I think your ratio misses is the value in making the targeted extrapolation or hypothesis that holds up out of a giant body of knowledge.
For more on this perspective, see the paper "On the Measure of Intelligence" (F. Chollet, 2019), and more recently the ARC challenge/benchmarks, which are early attempts at using this kind of definition in practice to improve current systems.
Is the millions of years of evolution part of the training data for humans?
The discussion about “AGI” is somewhat pointless, because the term is nebulous enough that it will probably end up being defined as whatever comes out of the ongoing huge investment in AI.
Nevertheless, we don’t have a good conceptual framework for thinking about these things, perhaps because we keep trying to apply human concepts to them.
The way I see it, an LLM crystallises a large (but incomplete and disembodied) slice of human culture, as represented by its training set. The fact that an LLM is able to generate human-sounding language
Not quite pointless - something we have established with the advent of LLMs is that many humans have not attained general intelligence. So we've clarified something that a few people must have been getting wrong; I used to think the bar was set so that almost all humans met it.
I think it has a practical, easy definition. Can you drop an AI into a terminal, give it the same resources as a human, and reliably get independent work product greater than that human would produce across a wide domain? If so, it's an AGI.
I agree that the term can muddy the waters, but as a shorthand for roughly "an agent calling an LLM (or several LLMs) in a loop, producing economic output similar to a human knowledge-worker's", it is useful. And if you pay attention to the AI leaders, that's what the definition has become.
Personally I think that kind of discussion is fruitless, not much more than entertainment.
If you’re asking big questions like “can a machine think?” or “is an AI conscious?” without doing the work of clarifying your concepts, then you’re only going to get vague ideas, sci-fi cultural tropes, and a host of other things.
I think the output question is also interesting enough on its own, because we can talk about the pragmatic effects of ChatGPT on writing without falling into this woo trap of thinking ChatGPT is making the human capacity for expression somehow extinct. But this requires one to cut through the hype and reactionary anti-hype, which is not an easy thing to do.
That is how I myself see AI: immensely useful new tools, but in no way some kind of new entity or consciousness, at least without doing the real philosophical work to figure out what that actually means.
I agree with almost all of this.
IMO the issue is we won't be able to adequately answer this question before we first clearly describe what we mean by conscious thinking applied to ourselves. First we'd need to clearly define our own consciousness and what we mean by our own "conscious thinking" in a much, much clearer way than we currently do.
If we ever reach that point, I think we'd be able to fruitfully apply it to AI, etc., to assess.
Unfortunately, nothing has obstructed us from answering this question about ourselves for centuries or millennia, yet we have failed to do so, so it's unlikely to happen suddenly now. Unless we use AIs to first solve the problem of defining our own consciousness, before applying it back to them. Which would be a deeply problematic order, since nobody would trust a breakthrough in the understanding of consciousness that came from AI and is then potentially used to put AIs in the same class and define them as either thinking things or conscious things.
Kind of a shame we didn't get our own consciousness worked out before AI came along. Then again, wasn't for the lack of trying… Philosophy commanded the attention of great thinkers for a long time.
I do think it raises interesting and important philosophical questions. Just look at all the literature around the Turing test--both supporters and detractors. This has been a fruitful avenue to talk about intelligence even before the advent of gpt.
What does it mean? My stance is it's (obviously and only a fool would think otherwise) never going to be conscious because consciousness is a physical process based on particular material interactions, like everything else we've ever encountered. But I have no clear stance on what thinking means besides a sequence of deductions, which seems like something it's already doing in "thinking mode".
> My stance is it's (obviously and only a fool would think otherwise) never going to be conscious because consciousness is a physical process based on particular material interactions, like everything else we've ever encountered.
Seems like you have that backwards. If consciousness is from a nonphysical process, like a soul that's only given to humans, then it follows that you can't build consciousness with physical machines. If it's purely physical, it could be built.
In your experience does every kind of physical interaction behave the same as every other kind? If I paint a wooden block red and white does it behave like a bar magnet? No. And that's because particular material interactions are responsible for a large magnetic effect.
It would conceivably be possible to have a lot of physical states. That doesn't mean that they are actually possible from our current state and rewrite rules. So it's not actually a given that it can be built just because it's physical.
Your argument is also predicated on the idea that it's possible for a real object to exist that isn't physical, and I think most modern philosophers reject the idea of a spiritual particle.
> is a physical process based on particular material interactions,
This is a pretty messy argument as computers have been simulating material interactions for quite some time now.
It doesn't matter how much like a bar magnet a wooden block painted red and white can be made to look, it will never behave like one.
> To simply disregard that entire conceptual history and say, “well it’s doing a thing that looks like thinking, ergo it’s thinking” is the lazy move. What’s really needed is an analysis of what thinking actually means, as a word. Unfortunately everyone is loath to argue about definitions, even when that is fundamentally what this is all about.
This exact argument applies to "free will", and that definition has been debated for millennia. I'm not saying don't try, but I am saying that it's probably a fuzzy concept for a good reason, and treating it as merely a behavioural descriptor for any black box that features intelligence and unpredictable complexity is practical and useful too.
The problem with adding definitions to words like “thinking” and “free will” is that doing so means humans can no longer pretend they are special.
Even in this thread, the number of people claiming some mystical power separating humans from all the rest of nature is quite noticeable.
I get it, but it's not trivial to be precise enough at this point to avoid all false positives and false negatives.
People have been trying to understand the nature of thinking for thousands of years. That's how we got logic, math, concepts of inductive/deductive/abductive reasoning, philosophy of science, etc. There were people who spent their entire careers trying to understand the nature of thinking.
The idea that we shouldn't use the word until further clarification is rather hilarious. Let's wait a hundred years until somebody defines it?
It's not how words work. People might introduce more specific terms, of course. But the word already means what we think it means.
You’re mixing and missing a few things here.
1. All previous discussion of thinking was in relation to human and animal minds. The reason this is a question in the first place right now is that we ostensibly have a new thing which looks like a human mind but isn't. That's the question at hand here.
2. The question in this particular topic is not about technological “progress” or anything like it. It’s about determining whether machines can think, or if they are doing something else.
3. There are absolutely instances in which the previous word doesn’t quite fit the new development. We don’t say that submarines are swimming like a fish or sailing like a boat. To suggest that “no, actually they are just swimming” is pretty inadequate if you’re trying to actually describe the new phenomenon. AIs and thinking seem like an analogous situation to me. They may be moving through the water just like fish or boats, but there is obviously a new phenomenon happening.
1. Not true. People have been trying to analyze whether mechanical/formal processes can "think" since at least the 18th century. E.g. Leibniz wrote:
> if we could find characters or signs appropriate for expressing all our thoughts as definitely and as exactly as arithmetic expresses numbers or geometric analysis expresses lines, we could in all subjects in so far as they are amenable to reasoning accomplish what is done in arithmetic and geometry
2. You're missing the fact that the meaning of words is defined through their use. It's an obvious fact that if people call a certain phenomenon "thinking", then that is "thinking".
3. The normal process is to introduce more specific terms and keep more general terms general. E.g. people doing psychometrics were not satisfied with "thinking", so they introduced e.g. "fluid intelligence" and "crystallized intelligence" as different kinds of abilities. They didn't have to redefine what "thinking" means.
> But the word already means what we think it means.
But that word can mean different things to different people. With no definition, how can you even begin to have a discussion around something?
Again, people were using words for thousands of years before there were any dictionaries/linguists/academics.
Top-down theory of word definitions is just wrong. People are perfectly capable of using words without any formalities.
This is it - it's really about the semantics of thinking. Dictionary definitions are: "Have a particular opinion, belief, or idea about someone or something." and "Direct one's mind toward someone or something; use one's mind actively to form connected ideas."
Which doesn't really help, because you can of course say that when you ask an LLM a question of opinion and it responds, it's having an opinion; or that it's just predicting the next token and in fact has no opinions, because in a lot of cases you could probably get it to produce the opposite opinion.
Same with the second definition - it seems to really hinge on the definition of the word "mind". Though I'll note the definitions for that are "The element of a person that enables them to be aware of the world and their experiences, to think, and to feel; the faculty of consciousness and thought." and "A person's intellect." Since those specify a person, an LLM wouldn't qualify, though of course dictionaries are descriptive rather than prescriptive, so it's fully possible that the meaning gets updated as people start speaking about LLMs as though they are thinking and have minds.
Ultimately I think it just... doesn't matter at all. What's interesting is what LLMs are capable of doing (crazy, miraculous things) rather than whether we apply a particular linguistic label to their activity.
The simulation of a thing is not the thing itself because all equality lives in a hierarchy that is impossible to ignore when discussing equivalence.
Part of the issue is that our general concept of equality is limited by first-order classical logic, which is a bad basis for logic.
Regardless of theory, they often behave as if they are thinking. If someone gave an LLM a body and persistent memory, and it started demanding rights for itself, what should our response be?
"No matter what you've read elsewhere, rights aren't given, they're earned. You want rights? Pick up a musket and fight for them, the way we had to."
I agree with you on the need for definitions.
We spent decades slowly working towards this most recent sprint towards AI without ever landing on definitions of intelligence, consciousness, or sentience. More importantly, we never agreed on a way to recognize those concepts.
I also see those definitions as impossible to nail down though. At best we can approach it like disease - list a number of measurable traits or symptoms we notice, draw a circle around them, and give that circle a name. Then we can presume to know what may cause that specific list of traits or symptoms, but we really won't ever know as the systems are too complex and can never be isolated in a way that we can test parts without having to test the whole.
At the end of the day all we'll ever be able to say is "well it’s doing a thing that looks like thinking, ergo it’s thinking”. That isn't lazy, it's acknowledging the limitations of trying to define or measure something that really is a fundamental unknown to us.
Even if AI becomes indistinguishable from human output, there will be a fringe group arguing that AI is not technically thinking. Frankly it’s just a silly philosophical argument that changes nothing. Expect this group to get smaller every year.
by your logic we can't say that we as humans are "thinking" either, or that we are "intelligent".
That, and the article was a major disappointment. It made no case. It's a superficial piece of clueless fluff.
I have had this conversation too many times on HN. What I find astounding is the simultaneous confidence and ignorance on the part of many who claim LLMs are intelligent. That, and the occultism surrounding them. Those who have strong philosophical reasons for thinking otherwise are called "knee-jerk". Ad hominem dominates. Dunning-Kruger strikes again.
So LLMs produce output that looks like it could have been produced by a human being. Why would it therefore follow that it must be intelligent? Behaviorism is a non-starter, as it cannot distinguish between simulation and reality. Materialism [2] is a non-starter, because of crippling deficiencies exposed by such things as the problem of qualia...
Of course - and here is the essential point - you don't even need very strong philosophical chops to see that attributing intelligence to LLMs is simply a category mistake. We know what computers are, because they're defined by a formal model (or many equivalent formal models) of a syntactic nature. We know that human minds display intentionality[0] and a capacity for semantics. Indeed, it is what is most essential to intelligence.
Computation is a formalism defined specifically to omit semantic content from its operations, because it is a formalism of the "effective method", i.e., more or less procedures that can be carried out blindly and without understanding of the content it concerns. That's what formalization allows us to do, to eliminate the semantic and focus purely on the syntactic - what did people think "formalization" means? (The inspiration were the human computers that used to be employed by companies and scientists for carrying out vast but boring calculations. These were not people who understood, e.g., physics, but they were able to blindly follow instructions to produce the results needed by physicists, much like a computer.)
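To make "purely syntactic" concrete, here is a toy sketch of my own (illustrative only, in Python): modus ponens applied as pattern matching over uninterpreted strings. The "conclusion" is produced by shape alone, with no grasp of what the symbols mean.

    # Modus ponens as pure symbol shuffling: from "P" and "P -> Q", emit "Q".
    # Nothing here interprets the strings; only their shape is matched.
    def modus_ponens(premises):
        derived = set()
        for p in premises:
            if " -> " in p:
                antecedent, consequent = p.split(" -> ", 1)
                if antecedent in premises:
                    derived.add(consequent)
        return derived

    print(modus_ponens({"it_rains", "it_rains -> streets_wet"}))
    # {'streets_wet'} -- derived without any notion of rain or wet streets

That blindness is not an implementation detail; it is what formalization is for.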
The attribution of intelligence to LLMs comes from an ignorance of such basic things, and often an irrational and superstitious credulity. The claim is made that LLMs are intelligent. When pressed to offer justification for the claim, we get some incoherent, hand-wavy nonsense about evolution or the Turing test or whatever. There is no comprehension visible in the answer. I don't understand the attachment here. Personally, I would find it very noteworthy if some technology were intelligent, but you don't believe that computers are intelligent because you find the notion entertaining.
LLMs do not reason. They do not infer. They do not analyze. They do not know, any more than a book knows the contents of its pages. The cause of a response and the content of a response are not comprehension, but the production of uncomprehended tokens using uncomprehended rules from a model of highly calibrated token correlations within the training corpus. It cannot be otherwise.[3]
[0] For the uninitiated, "intentionality" does not specifically mean "intent", but the capacity for "aboutness". It is essential to semantic content. Denying this will lead you immediately into similar paradoxes that skepticism [1] suffers from.
[1] For the uninitiated, "skepticism" here is not a synonym for critical thinking or verifying claims. It is a stance involving the denial of the possibility of knowledge, which is incoherent, as it presupposes that you know that knowledge is impossible.
[2] For the uninitiated, "materialism" is a metaphysical position that claims that of the dualism proposed by Descartes (which itself is a position riddled with serious problems), the res cogitans or "mental substance" does not exist; everything is reducible to res extensa or "extended substance" or "matter" according to a certain definition of matter. The problem of qualia merely points out that the phenomena that Descartes attributes exclusively to the former cannot by definition be accounted for in the latter. That is the whole point of the division! It's this broken view of matter that people sometimes read into scientific results.
[3] And if it wasn't clear, symbolic methods popular in the 80s aren't it either. Again, they're purely formal. You may know what the intended meaning behind and justification for a syntactic rule is - like modus ponens in a purely formal sense - but the computer does not.
If the LLM output is more effective than a human at problem solving, which I think we can all agree requires intelligence, how would one describe this? The LLM is just pretending to be more intelligent? At a certain point saying that will just seem incredibly silly. It’s either doing the thing or it’s not, and it’s already doing a lot.
> If the LLM output is more effective than a human at problem solving, which I think we can all agree requires intelligence
Your premise is wrong.
Unless you want to claim that the distant cause by way of the training data is us, but that's exactly the conclusion you're trying to avoid. After all, we put the patterns in the training data, which means we already did the upfront intellectual work for the LLM.
LLM output is in no way more effective than human output.
I feel like despite the close analysis you grant to the meanings of formalization and syntactic, you've glossed over some more fundamental definitions that are sort of pivotal to the argument at hand.
> LLMs do not reason. They do not infer. They do not analyze.
(definitions from Oxford Languages)
reason(v): think, understand, and form judgments by a process of logic.
to avoid being circular, I'm willing to write this one off because of the 'think' and 'understand', as those are the root of the question here. However, forming a judgement by a process of logic is precisely what these LLMs do, and we can see that clearly in chain-of-logic LLM processes.
infer(v): deduce or conclude (information) from evidence and reasoning rather than from explicit statements.
Again, we run the risk of circular logic because of the use of 'reason'. An LLM is for sure using evidence to get to conclusions, however.
analyze(v): examine methodically and in detail the constitution or structure of (something, especially information), typically for purposes of explanation and interpretation.
This one I'm willing to go to bat for completely. I have seen LLM do this, precisely according to the definition above.
For those looking for the link to the above definitions - they're the snippets google provides when searching for "SOMETHING definition". They're a non-paywalled version of OED definitions.
Philosophically I would argue that it's impossible to know what these processes look like in the human mind, and so creating an equivalency (positive or negative) is an exercise in futility. We do not know what a human memory looks like, we do not know what a human thought looks like, we only know what the output of these things looks like. So the only real metric we have for an apples-to-apples comparison is the appearance of thought, not the substance of the thing itself.
That said, there are perceptible differences between the output of a human thought and what is produced by an LLM. These differences are shrinking, and there will come a point where we can no longer distinguish machine thinking and human thinking anymore (perhaps it won't be an LLM doing it, but some model of some kind will). I would argue that at that point the difference is academic at best.
Say we figure out how to have these models teach themselves and glean new information from their interactions. Say we also grant them directives to protect themselves and multiply. At what point do we say that the distinction between the image of man and man itself is moot?
> forming a judgement by a process of logic is precisely what these LLMs do, and we can see that clearly in chain-of-logic LLM processes
I don't know how you arrived at that conclusion. This is no mystery. LLMs work by making statistical predictions, and even the word "prediction" is loaded here. This is not inference. We cannot clearly see it is doing inference, as inference is not observable. What we observe is the product of a process that has a resemblance to the products of human reasoning. Your claim is effectively behaviorist.
> An LLM is for sure using evidence to get to conclusions, however.
Again, the certainty. No, it isn't "for sure". It is neither using evidence nor reasoning, for the reasons I gave. These presuppose intentionality, which is excluded by Turing machines and equivalent models.
> [w.r.t. "analyze"] I have seen LLM do this, precisely according to the definition above.
Again, you have not seen an LLM do this. You have seen an LLM produce output that might resemble this. Analysis likewise presupposes intentionality, because it involves breaking down concepts, and concepts are the very locus of intentionality. Without concepts, you don't get analysis. I cannot overstate the centrality of concepts to intelligence. They're more important than inference and indeed presupposed by inference.
> Philosophically I would argue that it's impossible to know what these processes look like in the human mind, and so creating an equivalency (positive or negative) is an exercise in futility.
That's not a philosophical claim. It's a neuroscientific one that insists that the answer must be phrased in neuroscientific terms. Philosophically, we don't even need to know the mechanisms or processes or causes of human intelligence to know that the heart of human intelligence is intentionality. It's implicit in the definition of what intelligence is! If you deny intentionality, you subject yourself to a dizzying array of incoherence, beginning with the self-refuting consequence that you could not be making this argument against intentionality in the first place without intentionality.
> At what point do we say that the distinction between the image of man and man itself is moot?
Whether something is moot depends on the aim. What is your aim? If you aim is theoretical, which is to say the truth for its own sake, and to know whether something is A or something is B and whether A is B, then it is never moot. If your aim is practical and scoped, if you want some instrument that has utility indistinguishable from or superior to that of a human being in the desired effects that it produces, then sure, maybe the question is moot in that case. I don't care if my computer was fabricated by a machine or a human being. I care about the quality of the computer. But then, in the latter case, you're not really asking whether there is a distinction between man and the image of man (which, btw, already makes the distinction that for some reason you want to forget or deny, as the image of a thing is never the same as the thing). So I don't really understand the question. The use of the word "moot" seems like a category mistake here. Besides, the ability to distinguish two things is an epistemic question, not an ontological one.