Comment by appplication

8 hours ago

I think it’s worth recognizing that people’s issue with LLMs isn’t that they make mistakes. Hammering on the argument that humans also make mistakes suggests a disconnect from the more common reasons people are frustrated with LLM use.

Ultimately I think people find it frustrating because many of us have spent years refining our communication so that it is deliberate and precise. LLMs essentially insert a layer of indirection between those goals and the final product. If I prepare some communication (an email, code, a blog post, etc.) and lean on an LLM, at best I end up with something that more or less captures what I was probably going to say, but it doesn’t feel like an extension of my own thoughts so much as a slightly blurred approximation of them.

I think this also explains, to some degree, why folks who were never particularly critical of their own communication have a hard time comprehending why anyone could be upset about this.

There is of course the flip side: when receiving communication, I now have to deduce whether I’m reading a 5-paragraph, meticulously formatted email (or a 200-line, meticulously tested function) because whoever sent it was too lazy to write 2-3 well-thought-out sentences (or make a 15-line diff to an existing function). And of course the answer for the AI pragmatist is that I should have an AI summarize these extensive communications back down to an easily digestible 2-3 sentences (or employ an AI to do the code review for me).

For those who value precise communication, this experience is pretty exhausting.