Comment by menaerus
3 days ago
> LLMs tend to include subtle mistakes or even completely incorrect information (and reasoning!) which disrupts this process.
You see, so many humans do that as well, and yet we act as if LLMs are somehow special here. Yes, they make mistakes; we make mistakes; I make mistakes; your colleague makes mistakes. But that's not the point of this discussion at all, and fixating on it is a form of confirmation bias.
Why this reflexive reaction happens, I think, is because people are biased toward catching somebody making a mistake so that they can feel superior and irreplaceable. We do that to keep our position strong (in society, at work), and this is completely natural - it's called the survival instinct, and it is present in our species regardless of LLMs. LLMs are just one of the things that obviously trigger it.
So, your response is no more special than that of other people pushing back against AI, but take into account that "The problems I work on are not well-represented in training data", or "correctness matters far more than speed", or "it's important to form strong and correct mental models of complex systems so I can reason about them well" rests on a great many strong assumptions. Almost any domain could literally copy-paste this into its defense, and this is also something interesting I have observed over the years - in every domain I worked in, and there were plenty, people thought it was their domain that is the "hardest". Vanity.
High-quality writing (e.g. MDN Web Docs, Go Documentation, internal docs) tends not to include mistakes or incorrect reasoning, because it comes from a place of clear thinking and goes through a continuous process of peer review and improvement.
LLM outputs are, at best, first drafts that have not been reviewed or revised, and they certainly do not have analytical reasoning behind their content.
I am not claiming that my domain is uniquely challenging, but the statements I made are factual for my work (and likely many adjacent fields, too).
Would you trust your life to a pacemaker or aircraft control system designed and manufactured quickly by people without a correct understanding of what they are developing, working purely off of information from the Internet and other semi-public sources? I wouldn't. But maybe you are braver than me!
Nobody here is talking about pacemakers, nor am I suggesting that AI will replace 100% of the workforce.
While we are on the subject of writing documentation, did you hear about the latest layoffs at MySQL, which, among other teams, hit the documentation team hard, cutting it from 8 to 3 people?
I mean, the software I develop is the system of record for the engineering and development of medical devices (and aerospace, automotive, industrial, etc. systems). So pacemakers and aircraft are relevant examples.
I hadn't heard about the MySQL layoffs. I hope the affected people find good new opportunities.