Comment by freejazz

2 years ago

What difference does the embedding make? It has no meaning to ChatGPT - it only has meaning to humans, who must discern it from the embedding. This is Chomsky's exact point.

>"What happens to X when you let it go from your hand on earth/in outerspace? please explain your reasoning." And fill in X with any object or concept that you can think of.

What would a skeptic achieve by asking for its reasoning when ChatGPT cannot provide you with its reasoning? Seems like maybe you are the confused one here, not Chomsky (let alone everyone else in the thread).

> What would a skeptic achieve by asking for its reasoning when ChatGPT cannot provide you with its reasoning?

"If you let go of an apple from your hand on Earth, it will fall downwards due to the force of gravity. This is because the Earth exerts a gravitational force on all objects near its surface, and this force causes objects to accelerate downwards at a rate of approximately 9.8 meters per second squared.

In outer space, however, the behavior of the apple would be different. In the absence of gravity, the apple would not experience any force pulling it towards the Earth, and would therefore continue to move in a straight line at a constant speed, as per Newton's first law of motion.

However, it is worth noting that in reality, there is no such thing as "zero gravity" in outer space. While the force of gravity decreases with distance from the Earth, it never truly reaches zero. Additionally, other forces such as the gravitational pull of other celestial bodies, as well as the effects of acceleration and rotation, can influence the motion of objects in space. Therefore, the behavior of the apple in outer space would depend on the specific conditions of its surroundings."
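For what it's worth, the physics in that answer is numerically sound. Here's a quick back-of-the-envelope check with Newton's law of gravitation (standard constants; the altitudes are just illustrative):

```python
# Gravitational acceleration never reaches zero; it falls off with the
# inverse square of distance from Earth's center.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24  # mass of Earth, kg
R_EARTH = 6.371e6   # mean radius of Earth, m

def g_at_altitude(h_m):
    """Acceleration due to Earth's gravity at altitude h_m above the surface."""
    r = R_EARTH + h_m
    return G * M_EARTH / r**2

# Surface, ISS orbit (~400 km up), and lunar distance (~384,400 km)
for label, h in [("surface", 0.0), ("ISS", 400e3), ("Moon", 384_400e3)]:
    print(f"{label:>8}: g = {g_at_altitude(h):.4f} m/s^2")
# surface: ~9.82, ISS: ~8.69, Moon: ~0.0026 -- tiny, but never zero
```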

Looks like reasoning to me. In all seriousness, the reason it's able to generate this output is that it does look for explanations. Those explanations are encoded in weights and biases rather than organic neurons, and the inputs are words instead of visuals, but the function is the same, and neither is a perfect representation of our world. Recognizing patterns is the same thing as forming an explanation.
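To illustrate what "explanations encoded in weights and biases" looks like mechanically, here's a toy single-layer next-word predictor. All the numbers are made up; in a real model the billions of weights are learned from data, and that's where the regularities live:

```python
import numpy as np

# Toy "weights and biases" view: a single linear layer turning a context
# vector into a probability distribution over next words.
vocab = ["falls", "floats", "explodes"]
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))  # weights: context features -> vocab scores
b = np.zeros(3)              # biases

def next_word_probs(context_features):
    """Softmax over scores computed purely from weights and biases."""
    logits = context_features @ W + b
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

context = np.array([1.0, 0.0, 0.5, 0.0])  # hypothetical encoded context
print(dict(zip(vocab, next_word_probs(context).round(3))))
```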

  • >Looks like reasoning to me. In all seriousness, the reason it's able to generate this output is that it does look for explanations.

    Yeah, it looks like reasoning, but it isn't, because it's not the reasoning that ChatGPT used - it's just, once again, emitting whatever would be the most likely next word in that situation. It's not using logic or reasoning to do that; it's using statistics.

    It's as if you flat out do not understand how ChatGPT works. ChatGPT cannot provide you with reasoning because it does not reason. Asking it to provide reasoning just indicates that you don't understand how it works and that you also misunderstood the op-ed.
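    To make "it's using statistics" concrete, here's a toy bigram model. A real transformer is vastly more sophisticated, but the training objective is the same: pick a likely next token based on patterns in the data. The corpus here is made up:

    ```python
    from collections import Counter, defaultdict

    # Toy next-word prediction from raw co-occurrence statistics.
    # No logic, no physics: just "which word most often followed this one?"
    corpus = (
        "the apple falls due to gravity . "
        "the apple falls towards the earth . "
        "the ball falls due to gravity ."
    ).split()

    bigram_counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        bigram_counts[prev][nxt] += 1

    def most_likely_next(word):
        """Return the statistically most likely next word."""
        return bigram_counts[word].most_common(1)[0][0]

    print(most_likely_next("falls"))  # -> 'due' (2 of its 3 continuations)
    print(most_likely_next("apple"))  # -> 'falls'
    ```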

    >In outer space, however, the behavior of the apple would be different. In the absence of gravity, the apple would not experience any force pulling it towards the Earth, and would therefore continue to move in a straight line at a constant speed, as per Newton's first law of motion.

    >However, it is worth noting that in reality, there is no such thing as "zero gravity" in outer space. While the force of gravity decreases with distance from the Earth, it never truly reaches zero. Additionally, other forces such as the gravitational pull of other celestial bodies, as well as the effects of acceleration and rotation, can influence the motion of objects in space. Therefore, the behavior of the apple in outer space would depend on the specific conditions of its surroundings."

    Who fucking cares? The point isn't about zero gravity in space - the point is about what is happening inside of ChatGPT...

    • It’s as if you don’t know how ChatGPT or the human brain works. The correlations are built into a prediction model. Sometimes those predictions can be near certain, and at that point they are indistinguishable from human understanding.

      You can see this quite clearly when the same neuron lights up for any prompt related to a certain topic. That's because actual abstraction is being done.
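      A minimal sketch of how you'd check that, assuming the HuggingFace transformers library and GPT-2 small (the layer and neuron indices below are arbitrary placeholders, not a known "topic neuron"):

      ```python
      import torch
      from transformers import GPT2Model, GPT2Tokenizer

      tok = GPT2Tokenizer.from_pretrained("gpt2")
      model = GPT2Model.from_pretrained("gpt2", output_hidden_states=True)
      model.eval()

      LAYER, NEURON = 6, 300  # placeholders: real probing picks these empirically

      def activation(prompt):
          """Activation of one hidden unit at the last token of the prompt."""
          ids = tok(prompt, return_tensors="pt")
          with torch.no_grad():
              out = model(**ids)
          # out.hidden_states[LAYER] has shape (batch, seq_len, hidden_size)
          return out.hidden_states[LAYER][0, -1, NEURON].item()

      for p in ["the apple falls to the ground",
                "gravity pulls objects downward",
                "my favorite color is blue"]:
          print(f"{activation(p):+.3f}  {p}")
      ```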


    • I agree with you, but I'm not sure it matters, and we could say the same thing about a person. We cannot prove that a human reasons, only that they output text that looks like it could be reasoning.
