Comment by namaria
1 day ago
Accurate notes are valuable for several reasons.
Show me an LLM that can reliably produce 100% accurate notes. Alternatively, accept working in a company where some nonsense becomes future reference and subpoenable documentation.
Counterpoint: show me a human who can reliably produce 100% accurate notes.
Seriously, I wish to hire this person.
Seriously, do people around you not normally double check, proofread, review what they turn in as done work?
Maybe I am just very fortunate, but people who are not capable of producing documents that are factually correct do not get to keep producing documents in the organizations I have worked with.
I am not talking about typos, misspelled words, or bad formatting. I am talking about factual content. LLMs can produce grammatically flawless text, but they routinely mangle factual content in a way I have never had the misfortune of finding in the work of my colleagues and the teams around us.
A friend of mine asked an AI for a summary of a pending Supreme Court case. It came back with the decision, majority arguments, dissent, the whole deal. Only problem was that the case hadn't happened yet. It had made up the whole thing, and admitted that when called on it.
A human law clerk could make a mistake, like "Oh, I thought you said 'US v. Wilson,' not 'US v. Watson.'" But a human wouldn't just make up a case out of whole cloth, complete with pages of details.
So it seems to me that AI mistakes will be unlike the human mistakes that we're accustomed to and good at spotting from eons of practice. That may make them harder to catch.
4 replies →
What are the odds that the comment you're responding to was AI-generated?
1 reply →
You are mixing up notes and a full-blown transcript of the meeting. The latter is impossible for untrained humans to produce. The former is relatively easy for a person paying attention, because it is usually 5 to 10 short lines per hour-long meeting, with action items or links. Also, in a typical work meeting, the person taking notes can simply say "wait a minute, I will write this down," and this does happen in practice. Short notes made this way are usually accurate in meaning, with maybe a few minor typos that don't affect accuracy.
If it is just for the people in the meeting, we don't need 100% accuracy, just close enough that we remember what was discussed.
I really don't see the value of records that may be inaccurate as long as I can rely on my memory. Human memory is quite unreliable, the point of the record is the accuracy.
Written records are only accurate if they are carefully reviewed. Humans make mistakes all the time too. We're just better at correcting them, and if we review the record soon after the meeting, there is a chance we remember well enough to make a correction.
There is a reason meeting procedures (e.g., Robert's Rules of Order) have the minutes from the previous meeting read aloud and then voted on for acceptance: changes are often made before they are accepted.
1 reply →
Meh, show me a human who can reliably produce 100% accurate notes. It seems the baseline for AI should be human performance rather than perfection. There are very few perfect systems in existence, and humans definitely aren't one of them.
You show me human meeting minutes written by a PM that accurately reflect the engineer discussions first.
Has that been your experience? That's unacceptable to me, whether it comes from people or from language models.