Comment by benterris

1 day ago

I really don't get why people would want AI to write their messages for them. If I can write a concise prompt with all the required information, why not save everyone time and just send that instead? And especially for messages to my close ones, I feel like the actual words I choose are meaningful and the process of writing them is an expression of our living interaction, and I certainly would not like to know the messages from my wife were written by an AI. On the other end of the spectrum, of course sometimes I need to be more formal, but these are usually cases where the precise wording matters, and typing the message is not the time-consuming part.

> If I can write a concise prompt with all the required information, why not save everyone time and just send that instead?

This point is made multiple times in the article (which is very good; I recommend reading it!):

> The email I'd have written is actually shorter than the original prompt, which means I spent more time asking Gemini for help than I would have if I'd just written the draft myself. Remarkably, the Gmail team has shipped a product that perfectly captures the experience of managing an underperforming employee.

> As I mentioned above, however, a better System Prompt still won't save me much time on writing emails from scratch. The reason, of course, is that I prefer my emails to be as short as possible, which means any email written in my voice will be roughly the same length as the User Prompt that describes it. I've had a similar experience every time I've tried to use an LLM to write something. Surprisingly, generative AI models are not actually that useful for generating text.

People like my dad, who can't read, write, or spell to save his life, but was a very, very successful CPA, would love to use this. It would have replaced at least one of his office staff I bet. Too bad he's getting up there in age, and this newfangled stuff is difficult for him to grok. But good thing he's retired now and will probably never need it.

  • What a missed opportunity to fire that extra person. Maybe the AI could also figure out how to do taxes and then everyone in the office could be out of a job.

    • Well, you know this employment crisis all started when the wheel was invented and put all the porters out of work. Then tech came for lamplighters, ice cutters, knocker-uppers, switchboard operators, telegraph operators, human computers, video store clerks, bowling alley pinsetters, elevator operators, film developers, coopers, wheelwrights, candle makers, weavers, plowmen, farriers, street sweepers. It's a wonder anyone still has a job, really.

    • Let's just put an AI in charge of the IRS and have it send us an actual bill, which is apparently something that is just too complicated for the current and past IRS to do. /s

      Edit: added /s because it wasn't apparent this was sarcastic

Shorter emails are better 99% of the time. No one's going to read a long email, so you should keep your email to just the most important points. Expanding out these points to a longer email is just a waste of time for everyone involved.

My email inbox is already filled with a bunch of automated emails that provide me no info and waste my time. The last thing I want is an AI tool that makes it easier to generate even more crap.

  • Definitely. Also, another thing that wastes time is when requests don't provide the necessary context for people to understand what's being asked for and why, causing them to spend hours on the wrong thing. Or when the nuance is left out of a nuanced good idea, causing it to get misinterpreted and pattern-matched to a similar-sounding-but-different bad idea, which leads to endless back-and-forth misunderstandings and escalation.

    Emails sent company-wide need to be especially short, because so many person-hours are spent reading them. Also, they need to provide the most background context to be understood, because most of those readers won't already share the common ground to understand a compressed message, increasing the risk of miscommunication.

    This is why messages need to be extremely brief, but also not.

There was an HN topic a month or so ago where somebody wrote a blog post speculating that you end up with some people using AI to expand short prompts into lengthy emails in perfect polite form, while other people use AI to summarize those blown-up emails back into the essence of the message. Side effect: since the two transformations are imperfect, meaning will be lost or altered.

  • This is a plot point in a sci-fi story I'd read recently, though I cannot place what it was. Possibly in Cloud Atlas, or something by Liu Cixin.

    In other contexts, someone I knew had written a system to generate automated emails in response to various online events. They later ran into someone who'd written automated processing systems to act on those emails. This made the original automater quite happy.

    (Context crossed organisational / institutional boundaries, there was no explicit coordination between the two.)

  • Can anybody find the thread? That sounds worth linking to!

    • It was more than a month ago, but perhaps this one:

      https://news.ycombinator.com/item?id=42712143

      How is AI in email a good thing?!

      There's a cartoon going around where in the first frame, one character points to their screen and says to another: "AI turns this single bullet point list into a long email I can pretend I wrote".

      And in the other frame, there are two different characters, one of them presumably the receiver of the email sent in the first frame, who says to their colleague: "AI makes a single bullet point out of this long email I can pretend I read".

      The cartoon itself is the one posted above by PyWoody.

If that's the case, you can easily just write the messages to your wife yourself.

But for the 99 other messages, especially things that mundanely convey information like "My daughter has the flu and I won't be in today", "Yes 2pm at Shake Shack sounds good", it will be much faster to read over drafts that are correct and then click send.

The only reason this wouldn't be faster is if the drafts are bad. And that is the point of the article: the models are good enough now that AI drafts don't need to be bad. We are just used to AI drafts being bad due to poor design.

  • I don't understand. Why do you need an AI for messages like "My daughter has the flu and I won't be in today" or "Yes 2pm at Shake Shack sounds good"? You just literally send that.

    Do you really run these things through an AI to burden your reader with pointless additional text?

    • 100% agree. Email like you’re a CEO. Saves your time, saves other people’s time and signals high social status. What’s not to like?

    • They are automatically drafted when the email comes in, and you can accept or modify them.

      It’s like you’re asking why you would want a password manager when you can just type the characters yourself. It saves time if done correctly.

  • > But for the 99 other messages, especially things that mundanely convey information like "My daughter has the flu and I won't be in today", "Yes 2pm at Shake Shack sounds good", it will be much faster to read over drafts that are correct and then click send.

    It takes me all of 5 seconds to type messages like that (I timed myself typing it). Where exactly is the savings from AI? I don't care, at all, if a 5s process can be turned into a 2s process (which I doubt it even can).

  • How would an AI know if "2pm at Shake Shack" works for me? I still need to read the original email and make a decision. Actually writing out the response takes me basically no time whatsoever.

    • An AI could read the email and check my calendar and then propose 2pm. Bonus if the AI works with his AI to figure out that 2pm works for both of us. A lot of time is wasted with people going back and forth trying to figure out when they can meet. That is also a hard problem even before you note the privacy concerns.
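
      For illustration only - not any real assistant's API - here is a minimal sketch of that last step, finding a slot that works for both calendars, assuming each side shares nothing but its busy intervals (the function name and the data are hypothetical):

        from datetime import datetime, timedelta

        def first_mutual_slot(busy_a, busy_b, day_start, day_end, length):
            # busy_a / busy_b: lists of (start, end) datetime tuples, one per calendar.
            # Returns the first window of `length` inside [day_start, day_end]
            # that overlaps neither calendar, or None if no such window exists.
            cursor = day_start
            for start, end in sorted(busy_a + busy_b):  # merge both calendars
                if start - cursor >= length:            # free gap before this block fits
                    return cursor, cursor + length
                cursor = max(cursor, end)               # skip past the busy block
            if day_end - cursor >= length:
                return cursor, cursor + length
            return None

        # Example: I'm busy 9-12, they're busy 12-2, so the first mutual slot is 2pm-3pm.
        day = datetime(2025, 1, 15)
        mine = [(day.replace(hour=9), day.replace(hour=12))]
        theirs = [(day.replace(hour=12), day.replace(hour=14))]
        print(first_mutual_slot(mine, theirs,
                                day.replace(hour=9), day.replace(hour=17),
                                timedelta(hours=1)))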

I sometimes use AI to write messages to colleagues. For example, I had a colleague who was confused about something in Zendesk. When they described the issue I knew it was because they (reasonably) didn't understand that 'views' aren't the same as 'folders'.

I could have written them a message saying "Zendesk has views, not folders [and figure out what I mean by that]", but instead I asked AI something like:

  My colleague is confused about why assigning a ticket in Zendesk adds it to a view but doesn't remove it from a different view. I think they think the views are folders. Please write an email explaining this.

The clear, detailed explanation I got was useful for my colleague, and required little effort from me (after the initial diagnosis).

Totally agree, for myself.

However, I do know people who are not native speakers, or who didn't do an advanced degree that required a lot of writing, and they report loving the ability to have it clean up their writing in professional settings.

This is fairly niche, and already had products targeting it, but it is at least one useful thing.

  • Cleaning up writing is very different from writing it. Lawyers don't take themselves as clients. I can write a novel or I can edit someone else's novel - but I am not nearly as good at editing my own novels as I would be at editing someone else's. (I don't write novels, but I could. As for editing - you should get a better editor than me, but I'd still be better at editing your writing than you would be yourself.)

When it's a simple data transfer, like "2 pm at shake shack sounds good", it's less useful. It shines when we're doing messy human shit with deep feelings and strong emotions - when you get to the point where you're trading shitty emails with someone that you, at one point, loved, and you're just getting all up in there and writing some horrible shit. Writing that horrible shit helps you feel better, and you really want to send it, and even though you know it's not gonna be good, you send it anyway. OR - you tell ChatGPT the situation, have it edit that email before you send it and take out the shittiness, and you can have a productive, useful conversation instead.

The important point of communicating is to get the other person to understand you. If my own words fall flat for whatever reason, if there are better words to use, I'd prefer to use those instead.

"fuck you, pay me" isn't professional communication with a client. a differently worded message might be more effective (or not). spending an hour agonizing over what to say is easier spent when you have someone help you write it

There are people who do this but on forums; they rely on AI to write their replies.

And I have to wonder, why? What's the point?