
Comment by IAmGraydon

1 day ago

It can't do that unless the answer to "who did it" is already in the training data. I think the reason people keep falling for this illusion is that they can't really imagine how vast the training dataset is. Whenever it appears to answer a question like the one you posed, it's regurgitating an answer from its training data in a way that creates the illusion of using logic.

> It can't do that unless the answer to "who did it" is already in the training data.

Try it. Write a simple original mystery story, and then ask a good model to solve it.
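
One way to run that experiment yourself, as a minimal Python sketch: it assumes the OpenAI Python SDK, an API key in OPENAI_API_KEY, and "gpt-4o" as a placeholder model name (substitute whatever "good model" you have access to). The story below was written fresh for this comment, so its solution can't be sitting in any training set.

    import os
    from openai import OpenAI

    # A short mystery invented for this test, so the answer
    # cannot have appeared in the model's training data.
    STORY = """
    Three clerks share an office safe: Mara, Odell, and Pryce.
    On Friday the petty cash vanished. Mara was at the dentist all
    afternoon, and the receptionist confirms it. Odell's safe key
    snapped in the lock on Monday and was never replaced. Pryce
    stayed late Friday "to finish filings," but the filing log
    shows no entries that evening. Who took the cash, and how do
    you know?
    """

    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model will do
        messages=[
            {"role": "system", "content": "Solve the mystery. Explain your reasoning step by step."},
            {"role": "user", "content": STORY},
        ],
    )

    print(response.choices[0].message.content)

If the model names the culprit and cites the relevant clues (the confirmed alibi, the broken key, the falsified filing log), regurgitation can't be the explanation, because the story didn't exist until you wrote it.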

This isn't your father's Chinese Room; if it were, it couldn't solve original brainteasers and puzzles.