Comment by root_axis
10 hours ago
I've actually seen this lead to serious issues when a Zoom LLM summary attributed statements to someone who didn't say them.
Someone who couldn't attend the meeting later read that summary, and it sparked a major argument: the topic was a sore subject for him because of an ongoing debate at the company. Everyone who attended confirmed the attribution was an error, but the coincidental timing made that hard for him to accept, because the LLM's summary framed things in a way that validated concerns of his that some people in that meeting had previously minimized.
The drama got heated enough that management issued a policy against trusting generative output without independent verification. At least it seems a lesson was learned.