Comment by aidos

8 months ago

I really don’t think they’re the same thing. Email or letter, the words are yours while an LLM output isn’t.

Email initially had the same effect on people, until they got used to it. In the near future, whether the text is yours or not won't matter. What will matter is the message or idea you're communicating. Just like today, it doesn't matter if the code is yours, only the product you're shipping and the problem it's solving.

  • Code is either fit for a given purpose or not. Communicating through an LLM instead of directly with the desired recipient may be considered fit for purpose by the receiving party, but it's not for the LLM user to say what the goals of the writer are, nor is it for the LLM user to say what the goals of the writer ought to be. LLMs for communication are inherently unfit for purpose for anything beyond basic yes/no answers and basic autocomplete. Otherwise I'm not even interacting with a human in the loop, except in the moment before they hit send, which doesn't inspire confidence.

  • I think just looking at information transfer misses the picture. What's going to happen is that my Siri is going to talk to your Cortana, and that our digital secretaries will either think we're fit to meet or we're not. Like real secretaries do.

    You largely won't know such conversations are happening.

  • Similar-looking effects are not the "same" effect.

    "Change always triggers backlash" does not imply "all backlash is unwarranted."

    > What will matter is the message or idea you're communicating. Just like today, it doesn't matter if the code is yours, only the product you're shipping and problem it's solving.

    But like the article explains about why it's rude: the less thought you put into it, the less chance the message is well communicated. The less thought you put into the code you ship, the less chance it will solve the problem reliably and consistently.

    You aren't replying to "don't use LLM tools"; you're replying to "don't just trust and forward their slop blindly."

That is indeed the crux of it. If you write me an inane email, it’s still you, and it tells me something about you. If you send me the output of some AI, have I learned anything? Has anything been communicated? I simply can’t know. It reminds me a bit of the classic philosophical thought experiment "If a tree falls in a forest and no one is around to hear it, does it make a sound?" Hence the waste of time the author alludes to. The only comparison to email that makes any sense in this case are the senseless chain mails people used to forward endlessly. They have that same quality.

Some do put their own words into the LLM and clean up the output.

And it stays much closer to how they write.

The prompt is theirs.

  • Then just send me the prompt.

    • as long as there are people and HR out there screeching "unprofessional", why take the risk?

      writing mails/messages used to take me a long time. now i have a "make it professional" llm window, let it do its magic and edit out the most egregious stuff. it does 80-90% of the job.

      that said, sometimes it fails spectacularly, so i just write by hand.

      so.. many... hours... saved.

Which words, exactly, are "yours"? Working with an LLM is like having a copywriter on call 24/7, one who will steer you toward whatever voice and style you want. Candidly, I'm getting the sense the issue here is some junior-varsity-level LLM skill.