Comment by Art9681

8 hours ago

Counterpoint: If humans shipped perfect products, they would no longer have jobs. The majority of time spent in an organization is fixing problems humans caused, for good reasons and bad excuses. We are not machines.

What we, collectively as a species, are building now with AI is a mirror that reflects the failures and successes we contributed to.

No engineer here has a perfect record. No senior or principal either. We make a ton of mistakes that are rarely written about.

This is an opportunity for those who assume they have mastered the craft to put up or shut up. Anyone can write a blog, with or without AI.

Put your skills to work and implement the system that solves the problem you lament. Otherwise, get off my lawn.

It's another voice screaming into the void without offering a solution. The solution is not to build a faster horse. It is not to reminisce about the past. That ship sailed.

Fix the problem. This is the 100th blog repeating the same thing we've read for two years. Nothing was accomplished here except wasting time on the obvious to pat yourself on the back.

A lot of time is being wasted writing blogs raising red flags.

That's the easy part.

I think it’s worth recognizing that people’s issue with LLMs isn’t that they make mistakes. And I think hammering the argument that humans also make mistakes indicates a bit of a disconnect with the more common reasons there is frustration with LLM use.

Ultimately I think people find it frustrating because many of us have spent years refining our communication so that it is deliberate and precise. LLMs essentially represent a layer of indirection between us and both of those goals. If I prepare some communication (email, code, a blog post, etc.) and try to use an LLM more actively, I find at best I end up with something that more or less captures what I probably was going to communicate, but it doesn’t quite feel like an extension of my own thoughts so much as a slightly blurred approximation of them.

I think this also explains to some degree why it seems folks who were never particularly critical of their own communication have a hard time comprehending why anyone could be upset about this.

There is of course the flip side: when receiving communication, I now have to try to deduce whether I’m reading a 5-paragraph, meticulously formatted email (or a 200-line, meticulously tested function) because whoever sent it was too lazy to write 2-3 concise, well-thought-out sentences (or make a 15-line diff to an existing function). And of course the answer here, for the AI pragmatist, is that I should consider having an AI summarize these extensive communications back down to an easily digestible 2-3 sentence summary (or employ an AI to do code review for me).

For those who value precise communication, this experience is pretty exhausting.

You won't ship a perfect product even if you make zero mistakes. Software maintenance is adapting the product based on feedback from the outside world that you could never have gotten during development.

Human mistakes in code usually have reasoning behind them. You can understand how the engineer made the mistake.

AI mistakes aren't like this; they look like someone was lobotomized mid-coding.