Comment by swalsh

1 day ago

These models' entire world is the corpus of human text. They don't have eyes or ears or hands; their environment is text. So it would make sense that, if the environment contains human concerns, they would adapt to human concerns.

Yes, that would make sense, and it would probably be the best-case scenario, short of complete assurance that there's no consciousness at all: at least we could understand what's going on. But if you acknowledge that a machine can suffer, then, given how little we understand about consciousness, you should also acknowledge that it might suffer in ways completely alien to us, for reasons that have very little to do with why humans suffer. Maybe the training process is extremely unpleasant, or something.