Comment by cagatayk
1 year ago
> Neither truth nor intention plays a role in the operation of a perfect language model. The machine merely follows the narrative demands of the evolving story. As the dialogue between the human and the machine progresses, these demands are coloured by the convictions and the aspirations of the human, the only visible dialog participant who possesses agency.
This is a really good way of thinking about these models. It reminds me of the recent-ish story where a reporter got really creeped out by Bing's OpenAI-powered chatbot (https://www.nytimes.com/2023/02/16/technology/bing-chatbot-m...). Reading that, I had thought the bot was relatively easily led into a narrative the reporter had been setting up. In a conversation between actual people who have their own will and agency, you don't get to see one leading the other around by the nose so completely.
Reframing the problem as one of picking through the many threads of potential fictions to evolve a story makes it easier to explain what happened in that particular case.