Comment by ceejayoz
14 days ago
> None. But they all claim they don’t understand it.
Do any claim it is likely that LLMs are conscious? Or do they agree with me?
> Look at yourself. When did I claim the LLM is conscious? Never did. I said it’s a possibility but mostly we don’t know wth is going on.
Look at yourself. When did I claim it’s impossible? Never did. I said it’s unlikely.
> Let’s say the horse was in actuality doing quantum mechanics.
But we aren’t at that point with LLMs. Hence, I say it’s unlikely. Not impossible.
You’re so wound up you’re projecting and doing exactly what you falsely accuse others of doing.
> Do any claim it is likely that LLMs are conscious? Or do they agree with me?
Overall, no claim was made by me or anyone that they are likely conscious. No claim was made that they are unconscious either. That is in line with my claim and in total agreement with the position that we don't know.
Your claim is that LLMs are extremely likely to be unconscious, and the answer to that claim is NO. The general sentiment is not in agreement with you on that. There is no firm sentiment that we know for sure either way.
>Look at yourself. When did I claim it’s impossible? Never did. I said it’s unlikely.
Did I say you said it’s impossible? I didn’t. More hallucinations.
> But we aren’t at that point with LLMs. Hence, I say it’s unlikely. Not impossible.
We are. The LLMs are displaying output and behavior that is consistent with people who are conscious, and we have zero insight as to why. That is the point we are at. There is zero evidence that can lend credence to saying there is a low probability that an LLM is conscious, or a high probability that an LLM is conscious. But the LLM is outputting text that is indistinguishable from text produced by beings who ARE conscious.
> You’re so wound up you’re projecting and doing exactly what you falsely accuse others of doing.
No I didn’t. You’re hallucinating this. I am 100 percent referring to your statement that there is a low probability chance an LLM is conscious. My claim is that you have zero evidence to support that claim. There is no information and knowledge available for you to logically come to that conclusion.
> That is inline with my claim and in total agreement with that we don’t know.
For certain? No. But that's a https://news.ycombinator.com/item?id=44652248
> The LLMs are displaying output and behavior that is consistent with people who are conscious.
Your failing the Turing test doesn't mean we all do.
> And we have zero insight as to why.
Sure we do. It's explicitly built to do that. It's supposed to be confusingly like a human, because it's a probability generator based on oodles of real human input.
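(To illustrate what I mean by "probability generator", here's a toy sketch in Python. The token probabilities are made up, and a real model scores tens of thousands of candidate tokens with a trained neural net, but the core move is the same weighted pick:)

    import random

    # Hypothetical next-token distribution for the prefix "I".
    # A real LLM computes these weights with a trained network;
    # the numbers here are invented purely for illustration.
    next_token_probs = {"am": 0.55, "was": 0.25, "think": 0.15, "banana": 0.05}

    def sample_next(probs):
        # Weighted random choice: no understanding required, just
        # statistics distilled from oodles of real human text.
        tokens, weights = zip(*probs.items())
        return random.choices(tokens, weights=weights, k=1)[0]

    print("I " + sample_next(next_token_probs))  # e.g. "I am"

Run that enough times and you get plausible-looking continuations, because the weights were fit to real human writing. Plausible output is exactly what the machinery is for.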
> There is no information and knowledge available for you to logically come to that conclusion.
Sure there is. I've talked with LLMs. It's very apparent they aren't conscious. As with cooking, I don't have to be a Michelin chef to know a plate of poop tastes bad. I'd love to be wrong about them, just like I'd be happy to find poop surprisingly tasty. But I'm very, very comfortable with my position here until provided with very, very solid evidence to the contrary.
(To be clear: "very very solid evidence" is not a rando on the Internet pulling a widely-flagged HN Don Quixote.)
>You've presented none for yours;
Why the fuck do I have to present evidence for my claim: "We don't fucking know"? I mean, that's the one claim you don't need to present any fucking evidence for. But I did anyway, by citing Geoffrey Hinton, WHO literally said we don't fucking know what is going on.
>Nah. Go on, find me an expert in the field that disagrees with this. Which one is saying it's plausible, let alone likely, that these things are conscious?
No one says it's plausible, no one says it's unlikely. THEY ARE ALL saying they don't know. And THAT is my claim. PLENTY of evidence for this.
>Your failing the Turing test doesn't mean we all do.
Bro, you have failed it multiple times already. Half the shit on HN and everything you've read could be generated.
>Sure we do. It's explicitly built to do that. It's supposed to be confusingly like a human, because it's a probability generator based on oodles of real human input.
It was NOT built to do that. The fact that it can communicate with us was an emergent side effect. NO ONE expected it. Not even the inventors of transformers. It was built to do ONE thing and it turned out to be extremely good at another thing. Everyone knows this. Everyone in the game, which you obviously aren't.
>Sure there is. I've talked with LLMs. It's very apparent they aren't conscious.
I've talked with people with Down syndrome or schizophrenia who are stupider, less consistent, and more delusional than LLMs. They are what we consider conscious, because if we didn't consider them conscious, we could save society a lot of resource drain by just putting a bullet in their heads and being done with it.
The fact of the matter is, there is very little observable difference between a human with brain damage and ChatGPT, other than the fact that ChatGPT can be a genius as well as stupid.
>But I'm very, very comfortable with my position here until provided with very, very solid evidence to the contrary.
Good to know. Your comfort is an indicator of how irrational you are.
>(To be clear: "very very solid evidence" is not a rando on the Internet pulling a widely-flagged HN Don Quixote.)
You referring to me? No need to be insulting here.
My point is simply this: we don't know. And your counter-evidence is simply "I can tell because I talked to it."
So you're going to stand by a position you arrived at on zero evidence, while I say "we don't know"?
I think it's clear you're not a very logical person.
That being said, I don't agree with Geoffrey Hinton on this matter, BUT here is the most eminent godfather of modern AI saying the exact fucking opposite of what you're saying:
https://youtu.be/vxkBE23zDmQ?t=362
We have someone smarter than you, and basically an expert in the field, saying they are conscious (mind you, I think he's just saying it's very likely to be conscious). And do you notice his reasoning? It's not proof, but he uses logical induction: he builds off what happens with a single neuron and takes the logic all the way to the end. This is very different from your "I talked to ChatGPT so I can tell it's not alive" bullshit reasoning.
The thing that Geoffrey misses IS: we don't have a technical definition of consciousness, AND we don't even know how to characterize it for people, let alone for an LLM. So we actually don't know whether the feedforward network in a transformer can fit the definition of consciousness.
We can guess that it's conscious from the output, but the actual ground truth reality is: "we don't know."