Comment by ginayuksel
7 months ago
I once tried prompting an LLM to summarize a blog post I had written myself. Not only did it fail to recognize the main argument, it confidently hallucinated a completely unrelated conclusion. It was disturbing not because it was wrong, but because it sounded so right.
That moment made me question how easily AI can shape narratives when the user isn’t aware of the original content.