Comment by usefulcat

5 hours ago

Whataboutism is almost never a compelling argument, and this case is no exception.

ETA:

To elaborate a bit: based on your response, it seems like you don't think my question is a valid one.

If you don't think it's a valid question, I'm curious to know why not.

If you do think it's a valid question, I'm curious to know your answer.

It's not whataboutism; I'm simply asking how you would perform the same test for a human. Then we can see whether or not it applies to ChatGPT.

  • I don't know. What is your answer to my question?

  • For me, knowing which word is likely to come after another is, trivially, a form of knowing truth.

      Why not? We have optimised for truth, and we are predicting the words that best achieve that optimum.