Comment by easyThrowaway

1 year ago

The problem is that they're both terrible.

Taking the first route means we calcify our terrible current biases into the future, while the latter gives us a facile, sanitized version of our expectations.

You're asking a machine for a binary "good/bad" response to complex questions that don't have easy answers. It will always be wrong, regardless of your prompt.