Comment by Sophira
5 months ago
I can... sorta see the value in wanting to keep it hidden, actually. After all, there's a reason we as people feel revulsion at the idea of "thoughtcrime" being prosecuted in Nineteen Eighty-Four.
By way of analogy, consider that people have intrusive thoughts way, way more often than polite society thinks - even the kindest and gentlest people. But we generally have the good sense to realise that those thoughts would be bad to talk about.
If it were possible for people to look into other people's thought processes, you could come away with a very different impression of a lot of people - even the ones you think haven't got a bad thought in them.
That said, let's move on to a different idea - that ChatGPT might reasonably need to consider outcomes that people find undesirable to talk about. As people, we need to think about many things which we wish to keep hidden.
As an example of this need to consider all options - and I apologise for invoking Godwin's Law - let's say that the user and ChatGPT are currently discussing WWII.
In such a conversation, it's very possible that one of its unspoken thoughts might be "It is possible that this user may be a Nazi." It probably has no basis for that claim, but it's nonetheless a thought that needs to be considered in order to find the best way to navigate the discussion.
Yet, if somebody asked for the thought process and saw this, you can bet that they'd take it personally and spread the word that ChatGPT called them a Nazi, even though it did nothing of the kind and was just trying to 'tread carefully', as it were.
Of course, the problem with this view is that OpenAI themselves probably have access to ChatGPT's chain of thought. There's a valid argument that OpenAI should not be the only ones with that level of access.