
Comment by yunwal

2 years ago

> What would a skeptic achieve by asking its reasoning when ChatGPT cannot provide you with its reasoning?

"If you let go of an apple from your hand on Earth, it will fall downwards due to the force of gravity. This is because the Earth exerts a gravitational force on all objects near its surface, and this force causes objects to accelerate downwards at a rate of approximately 9.8 meters per second squared.

In outer space, however, the behavior of the apple would be different. In the absence of gravity, the apple would not experience any force pulling it towards the Earth, and would therefore continue to move in a straight line at a constant speed, as per Newton's first law of motion.

However, it is worth noting that in reality, there is no such thing as "zero gravity" in outer space. While the force of gravity decreases with distance from the Earth, it never truly reaches zero. Additionally, other forces such as the gravitational pull of other celestial bodies, as well as the effects of acceleration and rotation, can influence the motion of objects in space. Therefore, the behavior of the apple in outer space would depend on the specific conditions of its surroundings."

Looks like reasoning to me. In seriousness, the reason it's able to generate this output is that it does look for explanations. Those explanations are encoded in weights and biases rather than organic neurons, and the inputs are words instead of visuals, but the function is the same, and neither is a perfect representation of our world. Recognizing patterns is the same thing as forming an explanation.

>Looks like reasoning to me. In seriousness, the reason it's able to generate this output is because it does look for explanations.

Yeah, it looks like reasoning, but it isn't, because it's not the reasoning that ChatGPT used - it's just, once again, predicting whatever would be the most likely next word for the situation. It's not using logic or reasoning to do that; it's using statistics.
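For concreteness, here's a minimal sketch of that "most likely next word" loop. It assumes the Hugging Face `transformers` library and the public GPT-2 checkpoint purely as a stand-in (ChatGPT's own weights aren't available), and the prompt and step count are arbitrary:

```python
# Minimal sketch of "pick the most likely next word", using GPT-2 as a
# stand-in for ChatGPT. Assumes torch and transformers are installed;
# the prompt and the number of steps are arbitrary.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "If you let go of an apple from your hand on Earth, it will"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    for _ in range(12):
        logits = model(input_ids).logits             # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()             # greedily take the single most likely token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

In deployment the model typically samples from that distribution rather than always taking the argmax, but either way each step is just a score over candidate next tokens.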

It's as if you flat out do not understand how ChatGPT works. ChatGPT cannot provide you with reasoning because it does not reason. So asking it to provide reasoning just indicates that you do not understand how ChatGPT works and that you also misunderstood the Op-Ed.

>In outer space, however, the behavior of the apple would be different. In the absence of gravity, the apple would not experience any force pulling it towards the Earth, and would therefore continue to move in a straight line at a constant speed, as per Newton's first law of motion.

>However, it is worth noting that in reality, there is no such thing as "zero gravity" in outer space. While the force of gravity decreases with distance from the Earth, it never truly reaches zero. Additionally, other forces such as the gravitational pull of other celestial bodies, as well as the effects of acceleration and rotation, can influence the motion of objects in space. Therefore, the behavior of the apple in outer space would depend on the specific conditions of its surroundings."

Who fucking cares? The point isn't about zero gravity in space - the point is w/r/t what is happening inside of ChatGPT...

  • It’s as if you don’t know how ChatGPT or the human brain works. The correlations are built into a prediction model. Sometimes those predictions can be near certain, which is indistinguishable from human understanding.

    You can see this quite clearly when the same neuron lights up for any prompt related to a certain topic. It's because there's actual abstraction being done (a rough sketch of how one might probe for this is appended after the thread).

    • >The correlations are built into a prediction model. Sometimes those predictions can be near certain, which is indistinguishable from human understanding.

      This is quite literally not what the word understanding means, and trying to use my words against me in this way just makes you seem smarmy and butthurt. And if you are going to converse with me like that, I'm not going to engage when your material is a) pointed and aggressive, and b) completely non-responsive to what I wrote.

      >You can see this quite clearly when the same neuron lights up for any prompt related to a certain topic. It’s because there’s actual abstraction being done.

      Um, what?


  • I agree with you, but I'm not sure if it matters, and we could say the same thing about a person. We cannot prove that a human reasons, only that they output text that looks like it could be reasoning.

    • No, you can't say the same thing about a person, because a person can express reasoning. ChatGPT can't, because it can't reason. You may ask yourself, "wow, is there perhaps a magical algorithm in humans capable of reasoning that is the ultimate source of what emanated from this other person, given that I'm not actually sure everyone else is real?" - that's totally different from what's going on with ChatGPT when it just puts out more dreck but arbitrarily states "this is reasoning". Like, try and read the article - he deals with this point EXACTLY.
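Aside on the "same neuron lights up" claim earlier in the thread: here's a rough, hypothetical sketch of what probing for that might look like, again using the public GPT-2 checkpoint via `transformers` as a stand-in for ChatGPT. The layer and unit indices are arbitrary illustrations, not known topic-selective neurons; the point is only the mechanics of reading one activation across prompts.

```python
# Hypothetical probe: read a single hidden unit's activation across prompts
# that do or don't share a topic. GPT-2 is a stand-in; layer/unit are arbitrary.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

prompts = [
    "The apple fell from the tree because of gravity.",
    "Objects near Earth accelerate downward at 9.8 m/s^2.",
    "My favourite colour is blue.",
]
layer, unit = 6, 123   # arbitrary choices, purely for illustration (GPT-2 hidden size is 768)

for p in prompts:
    ids = tokenizer(p, return_tensors="pt").input_ids
    with torch.no_grad():
        hidden = model(ids).hidden_states[layer]    # shape (1, seq_len, 768)
    print(f"{hidden[0, -1, unit].item():+.3f}  {p}")  # activation at the last token position
```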