
Comment by phito

8 months ago

I really wish some of my coworkers would stop using LLMs to write me emails or even Teams messages. It does feel extremely rude, to the point I don't even want to read them anymore.

"Hey, I can't help but notice that some of the messages you're sending me are partially LLM-generated. I appreciate that you want to communicate in a stylistically and grammatically correct way, but I personally prefer the occasional typo or inelegant expression over the chance of distorted meanings or lost/hallucinated context.

Going forward, could you please communicate with me directly? I really don't mind a lack of capitalization or colloquial expressions in internal communications."

  • I see two things people are not happy about when it comes to LLMs:

    1. The message you sent doesn't feel personal. It reads like something written by a machine, and I struggle to connect with someone who sends me messages like that.

    2. People who don't speak English very well are now sending me perfectly written messages with solid arguments. And honestly, my ego doesn't like it, because I used to think I was more intelligent than them. Turns out I wasn't. It was just my perception, based on the fact that I speak the language natively.

    Both of these things won't matter anymore in the next two or three years. LLMs will keep getting smarter, while our egos will keep getting smaller.

    People still don't fully grasp just how much LLMs will reshape the way we communicate and work, for better or worse.

    • The word for this, we learned recently, is "LLM inevitabilism". It's often argued for far more convincingly than your attempt here, too.

      The future is here, and even if you don't like it, and even if it's worse, you'll take it anyway. Because it's the future. Because... some megalomaniacal dweeb somewhere said so?

      When does this hype train get to the next station, so everyone can take a breath? All this "future" has us hyperventilating.


Even worse when they accidentally leave in the dialog with the AI. Dead giveaway. I got an email from a colleague the other day, and at the bottom was this line:

> Would you like me to format this for Outlook or help you post it to a specific channel or distribution list?

Didn't our parents go through the same thing when email came out?

My dad used to say: "Stop sending me emails. It's not the same." I'd tell him, "It's better." He'd reply, "No, it's not. People used to sit down and take the time to write a letter, in their own handwriting. Every letter had its own personality, even its own smell. And you had to walk to the post office to send it. Now sending a letter means nothing."

Change is inevitable. Most people just won't like it.

A lot of people don't realise that Transformers were originally designed to translate text between languages. Which, in a way, is just another way of improving how we communicate ideas. Right now, I see two things people are not happy about when it comes to LLMs:

1. The message you sent doesn't feel personal. It reads like something written by a machine, and I struggle to connect with someone who sends me messages like that.

2. People who don't speak English very well are now sending me perfectly written messages with solid arguments. And honestly, my ego doesn't like it, because I used to think I was more intelligent than them. Turns out I wasn't. It was just my perception, based on the fact that I speak the language natively.

Both of these things won't matter anymore in the next two or three years.

  • I really don’t think they’re the same thing. Email or letter, the words are yours while an LLM output isn’t.

    • Initially, it had the same effect on people, until they got used to it. In the near future, whether the text is yours or not won't matter. What will matter is the message or idea you're communicating. Just like today, it doesn't matter if the code is yours, only the product you're shipping and the problem it's solving.


    • That is indeed the crux of it. If you write me an inane email, it’s still you, and it tells me something about you. If you send me the output of some AI, have I learned anything? Has anything been communicated? I simply can’t know. It reminds me a bit of the classic philosophical thought experiment "If a tree falls in a forest and no one is around to hear it, does it make a sound?" Hence the waste of time the author alludes to. The only comparison to email that makes any sense in this case are the senseless chain mails people used to forward endlessly. They have that same quality.

    • Which words, exactly, are "yours"? Working with an LLM is like having a copywriter 24/7, who will steer you toward whatever voice and style you want. Candidly, I'm getting the sense the issue here is some junior varsity level LLM skill.


  • I can see the similarity, yes! Although I do feel the distance between a handwritten letter and an email is shorter than the distance between an email and an LLM-generated email. There's some line it crossed. Maybe it's that email provided some benefit to the reader too. Yes, there's less character, but you receive it faster, and you can easily save it, copy it, or attach a link or a picture. You may even get lucky and receive an .exe file as a bonus! An LLM-generated email provides no benefit for the reader, though; it just wastes their resources on yapping that no human cared to write.

  • Just be a robot. Sell your voice to the AI overlords. Sell your ears and eyes. Reality was the scam; choose the Matrix. I choose the Matrix!

  • Same thing with photography and painting. These opinionated pieces display a false dichotomy which propagates into argument, when we have a tunable dial rather than a switch, appropriately increasing or decreasing our consideration, time, and focus along a spectrum rather than treating it as an on and off switch.

    I value letters far more than emails, pouring out my heart and complex thought to justify the post office trip and even postage stamp. Heck, why do we write birthday cards instead of emails? I hold a similar attitude towards LLM output and writing; perhaps more analogous is a comparison between painting and photography. I’ll take a glance at LLM output, but reading intentional thought (especially if it’s a letter) is when I infer about the sender as a person through their content. So if you want to send me a snapshot or fact, I’m fine with LLM output, but if you’re painting me a message, your actionable brushstrokes are more telling than the photo itself.

  • Letters had a time and potential money cost to send. And most letters don't need to be personalized to the point where we need handwriting to justify them.

    >Change is inevitable. Most people just won't like it.

    People love saying this without ever taking the time to consider whether the change is good or bad. Change for change's sake is called chaos. I don't think chaos is inevitable.

    >And honestly, my ego doesn't like it because I used to think I was more intelligent than them. Turns out I wasn't. It was just my perception, based on the fact that I speak the language natively.

    I don't think I'd ever heard that argument until now. And to be frank, that argument says more about the arguer than about the subject or LLMs.

    Have you considered 3) LLMs don't have context and can output wrong information? If you're spending more time correcting the machine than communicating, we're just adding more bureaucracy to the mix.

  • One thing: it's less about change and more about quality vs. quantity, and both have their place.

  • I mean that's fine, but the right response isn't all this moral negotiation, but rather just to point out that it's not hard to have Siri respond to things.

    So have your Siri talk to my Cortana and we'll work things out.

    Is this a colder world or old people just not understanding the future?

    • It's demonstration by absurdity that that is not the future. You're describing the collapse of all value.

I know people with disabilities that struggle with writing. They feel that AI enables them to express themselves better than they could without the help. I know that’s not necessarily what you’re dealing with but it’s worth considering.

  • If they're copy pasting whole paragraphs, then they're not expressing themselves at all. They're getting some program to express for them.

LinkedIn is probably the worst culprit. It has always been a wasteland of “corporate/professional slop”, except now the interface deliberately suggests AI-generated responses to posts. I genuinely cannot think of a worse “social network” than that hell hole.

“Very insightful! Truly a masterclass in turning everyday professional rituals into transformative personal branding opportunities. Your ability to synergize authenticity, thought leadership, and self-congratulation is unparalleled.”

  • AI content that doesn't read as AI-generated today will have to be the kind that still doesn't read as AI-generated in one or two years.

    Folks who are new to AI are just posting away in their December 2022 style because it's new to them.

    It is best to personally understand your own style(s) of communication.

  • > now the interface deliberately suggests AI-generated responses to posts

    This feature absolutely defies belief. If I ran a social network (thank god I don't), one of my main worries would be a flood of AI slop driving away all the human users. And LinkedIn is encouraging it. How does that happen? My best guess is that it drives up engagement numbers, letting some disinterested middle managers hit their internal targets.

    • This feature predates LLMs though, right? Funnily enough, I actually find it hilarious! In my mind, once they introduced it, it immediately became "a list of things NOT to reply if you want to be polite", and I used it like that. With one exception: if I came across an update from someone who's a really good friend, I would unleash the full power of AI comments on them! We had amazing AI-generated comment threads with friends that looked goofy as hell.


have you tried sharing that feedback with them?

one of my reports started responding to questions with AI slop. I asked if he was actually writing those sentences (he wasn't), so I gave him that exact feedback: it felt like he wasn't even listening when he just copy-pasted obviously AI-generated responses. Thankfully, he stopped doing it.

Of course as models get better at writing, it'll be harder and harder to tell. IMO the people who stand to lose the most are the AI sloppers, in that case - like in the South Park episode, as they'll get lost in commitments and agreements they didn't even know they made.

I love it because it allows me to filter out people not worth my time and attention beyond minimal politeness and professionalism.

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

  • Wow. What a good giveaway.

    I wonder what others there are.

    I occasionally use bullet points, em-dashes (Unicode, single, and double hyphens), and words like "delve". I hate to think these are the new heuristics.

    I think AI is a useful tool (especially image and video models), but I've already had folks (on HN [1]!) call out my fully artisanal comments as LLM-generated. It's almost as annoying as getting low-effort LLM splurge from others.

    Edit: As it turns out, cow-orkers isn't actually an LLMism. It's both a joke and a dictation software mistake. Oops.

    [1] most recently https://news.ycombinator.com/item?id=44482876

    • I like to use em-dashes as well (option-shift-hyphen on my macbook). I've seen people try to prompt LLMs to not have em-dashes, I've been in forums where as soon as you type in an em-dash it will block the submit button and tell you not to use AI.

      Here's my take: these forums will drive good writers away or at least discourage them, leaving discourses the worse for it. What they really end up saying — "we don't care whether you use an LLM, just remove the damn em-dash" — indicates it's not a forum hosting riveting discussions in the first place.

    • How is that a "giveaway"? The search turns up results from 7 years ago, before LLMs were a thing. More than likely it's autocorrect going astray; I can't imagine an LLM making that mistake.

Why? AI is a tool. Are their messages incorrect or something? If not, who cares? They're being efficient and thus more productive.

Please be honest. If it’s slop or they have incorrect information in the message, then my bad, stop reading here. Otherwise…

I really hope people like this with holier than thou attitude get filtered out. Fast.

People who don’t adapt to use new tools are some of the worst people to work around.

  • They are being efficient with their own time, yes, but it's at the expense of mine. I get less signal. We used to bemoan how hard it was to effectively communicate via text only instead of in person. Now, rather than fixing that gap, we've moved on to removing even more of the signal. We have to infer the intentions of the sender by guessing what they fed into the LLM to avoid getting tricked by what the LLM incorrectly added or accentuated.

    The overall impact on the system makes it much less efficient, despite all those "saving [their] time" by abusing LLMs.

  • If it took you no time to write it, I'll spend no time reading it.

    The holier than thou people are the ones who are telling us genAI is inevitable, it's here to stay, we should use it as a matter of rote, we'll be left out if we don't, it's going to change everything, blah blah blah. These are articles of faith, and I'm sorry but I'm not a believer in the religion of AI.

      • How do you know the effort that went into the message? Somebody with writing challenges may have written the whole thing up and used AI assistance to help get a better outcome. They may have proof-read and revised the generated message. You sound very judgmental.


    • Except you will spend your time reading it, because that's what is required to figure out that it's written with an LLM. The first few times, at least...

  • >Are their messages incorrect or something?

    Consider three scenarios:

    1. Misinformation. This is the one you mention, so I don't need to elaborate.

    2. Lack of understanding. The message may be about something they do not fully understand. If they cannot understand their own communication, then it's no longer a two-way street. This is why AI-generated code in reviews is so infuriating.

    3. Effort. Some people may use it to enhance their communication, but others use it as a shortcut. You shouldn't take a shortcut around actions like communicating with your colleagues. As a rising sentiment goes: "If it's not worth writing (yourself), it's not worth reading."

    For your tool metaphor, it's like discovering superglue, then using it to stick everything together. Sometimes you see a nail and glue it to the wall instead of hammering it in. Tools can be, have been, and will be misused. I think it's best to try to correct that early on, before we end up with a lot of sticky nails.

  • > If it’s slop or they have incorrect information in the message, then my bad, stop reading here.

    "my bad" and what next? The reader just wasted time and focus on reading, it doesn't sound like a fair exchange.

    • That’s on them, I said what I wanted to.

      Most of the time people just like getting triggered that someone sent them a —— in their message and blame AI instead of adopting it into their workflows and moving faster.
