
Comment by kristjank

21 hours ago

I tread carefully with anyone who by default augments their (however utilitarian or conventionally bland) messages with language models and passes the output off as their own. Prompting the agent to be as concise as you are, or as extensive, takes just as much time in the former case and lacks the underlying specificity of your experience/knowledge in the latter.

If these were some magically private models that had insight into my past technical explanations or the specifics of my work, this would be a much easier bargain to accept, but usually, nothing that has been written in an email by Gemini could not have been conceived of by a secretary in the 1970s. It takes away control over the expression of your thoughts. It's impersonal: it separates you from expressing your thoughts clearly, and it separates your recipient from any chance of understanding you, the person thinking, instead of you, the construct that generated a response from your past data and a short prompt. And also, I don't trust some misandric f*ck not to sell my data before piping it into my dataset.

I guess what I'm trying to say is: when messaging personally, summarizing short messages is unnecessary, expanding on short messages generates little more than semantic noise, and everything in between those use cases is a spectrum muddied by the lack of specificity that agents usually present. Changing the underlying vague notions of context is not only a strangely contortionist way of making a square peg fit an umbrella-shaped hole; it pushes around the boundaries of information transfer in a way that is vaguely stylistic but devoid of any meaning, removed fluff, or added value.

Agreed! As I mentioned in the piece, I don't think LLMs are very useful for original writing, because instructing an agent to write anything from scratch inevitably takes more time than writing it yourself.

Most of the time I spend managing my inbox is not spent on original writing, however. It's spent on mundane tasks like filtering, prioritizing, scheduling back-and-forths, introductions, etc. I think an agent could help me with a lot of that, and I dream of a world in which I can spend less time on email and finally be one of those "inbox zero" people.

  • The counterargument is that some people are terrible at writing. Millions of people sit at the bottom of any given bell curve.

    I’d never trust a summary from a current-generation LLM for something as critical as my inbox. Some hypothetical, drastically improved future AI, sure.

    • Smarter models aren't going to somehow magically understand what is important to you. If you took a random smart person you'd never met and asked them to summarize your inbox without any further instructions, they would do a terrible job too.

      You'd be surprised at how effective current-gen LLMs are at summarizing text when you explain how to do it in a thoughtful system prompt.
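
      For instance, a summarization system prompt along these lines (the rules here are purely illustrative, not taken from the OP) is what "explaining how to do it" looks like in practice:

        # Illustrative sketch: the point is that the prompt encodes *your*
        # priorities instead of leaving "important" undefined.
        INBOX_SUMMARY_PROMPT = """\
        You summarize my inbox. Rules:
        - Lead with anything that needs a reply or a decision from me today.
        - Give one line per thread: sender, the ask, and any deadline.
        - Drop newsletters, receipts, and automated notifications entirely.
        - Never invent details; if a thread is ambiguous, say so.
        """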

  • For the case of writing emails, I tend to agree, though I think creative writing is an exception. Pairing with an LLM really helps overcome the blank-page / writer's-block problem, because it's often easier to identify what you don't want and then revise all the flaws you see.

  • > instructing an agent to write anything from scratch inevitably takes more time than writing it yourself

    But you can reuse your instructions with zero additional effort. I have some instructions that I wrote for a 'Project' in Claude (and now a 'Gem' in Gemini). The instructions give writing guidelines for a children's article about a topic. So I just write 'write an article about cross-pollination' and a minute later I have an article I can hand to my son.

    Even if I had the subject matter knowledge, it would take me much longer to write an article with the type of style and examples that I want.

    (Because you said 'from scratch', I deliberately didn't choose an example that used web search or tools.)

Why can’t the LLM just learn your writing style from your previous emails to that person?

Or your more general style for new people.

It seems like Google at least should have a TONNE of context to use for this.

Like in his example emails about being asked to meet: it should be checking the calendar for you and filling in whether you can or can't make it, or suggesting an alternative time when you're free (see the sketch at the end of this comment).

If it can't actually send emails without permission, there's less harm in giving an LLM more info to work with, and it doesn't need to get it perfect. You can always edit.

If it deals with the 80% of replies that don't matter much, then you have 5X more time to spend on the 20% that do matter.
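
One hedged sketch of that flow: expose calendar availability as a tool the model can call before drafting, in the JSON-schema style that current function-calling APIs accept (the check_availability name and its fields here are hypothetical):

    # Hypothetical tool definition; the agent calls it before drafting, so
    # a reply only offers meeting times that are actually free.
    check_availability = {
        "name": "check_availability",
        "description": "Return the user's free/busy status for a time range.",
        "parameters": {
            "type": "object",
            "properties": {
                "start": {"type": "string", "description": "ISO 8601 start time"},
                "end": {"type": "string", "description": "ISO 8601 end time"},
            },
            "required": ["start", "end"],
        },
    }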

  • > Why can’t the LLM just learn your writing style from your previous emails to that person?

    It totally could. For one thing you could fine-tune the model, but I don't think I'd recommend that. For this specific use case, imagine an addition to the prompt that says """To help you with additional context and writing style, here are snippets of recent emails Pete wrote to {recipient}: --- {recent_email_snippets} """
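
    A minimal sketch of that idea in Python (the canned snippets are placeholders; a real version would pull the most recent messages to the recipient from the mail store):

        def build_style_context(recipient, snippets):
            # Join recent messages with separators, as the prompt
            # addition above describes.
            joined = "\n---\n".join(snippets)
            return (
                "To help you with additional context and writing style, "
                f"here are snippets of recent emails Pete wrote to {recipient}:\n"
                f"---\n{joined}\n---"
            )

        # Placeholder data; swap in real recent emails to this recipient.
        snippets = [
            "Hey Alice -- Tuesday works. Send the doc over when it's ready.",
            "Alice, looping in Sam. Short version: yes, let's ship it.",
        ]
        print(build_style_context("Alice", snippets))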

  • They are saving this for some future release, I would guess: a "personalization"-focused update wave / marketing blitz / privacy Overton window shift.

> As I mentioned above, however, a better System Prompt still won't save me much time on writing emails from scratch.

> The thing that LLMs are great at is reading text and transforming it, and that's what I'd like to use an agent for.

Interestingly, the OP agrees with you here and notes in the post that LLMs are better at transforming data than creating it.

  • I reread those paragraphs. I find the transformative effect of the email missing from the whole discussion. The end result of the inbox examples is to change some internal information in the mind of the recipient. An agent working within the context of the email has very little to contribute, because it does not know the OP's schedule, dinner plans, whether he has time for the walk-and-talk, or whether he broke his ankle last week.

    I'd personally be afraid to have something rummaging in my social interface that can send invites, timetables, love messages, etc. in my name (and let's be honest, idiots will Ctrl+A autoreply their whole inboxes). It has too many lemmas that need to be fulfilled before it can be assumed competent, and none of those are well demonstrated. It's cold fusion technology: feasible, nice if it worked, but a real disappointment if someone were to use it in its current state.

A lot of people would love to have a 1970s secretary capable of responding to many mundane requests without any guidance.

  • I have a large part of that already, though. The computer (Outlook today) just schedules meeting rooms for me, ensuring there aren't multiple meetings in the same room at the same time. I can schedule my own flights.

    When I first started working, the company rolled out the first version of meeting scheduling (it wasn't Outlook), and all the other engineers loved it: finally we could schedule our own meetings instead of having the secretary do it. Apparently the old system was some mainframe-based thing other programmers couldn't figure out (I never worked with it, so I can't comment on how it was). Likewise, booking a plane ticket involved calling travel agents and spending a lot of time on hold.

    If you are a senior executive you still have a secretary. By the 1970s, however, the secretary for most of us would have been a department secretary who handled 20-40 people, not just our needs, and thus wasn't in tune with all those details. In any case, most of us today don't have any needs that are not better handled by a computer.

  • I would too, but I would have to trust AI at least as much as a 1970s secretary not to mess up basic facts about myself or needlessly embellish/summarize my conversations with known correspondents. Comparing agents to past office clichés was not meant to imply that agents do it and it's stupid; I'm implying that agents claim to do it, but don't.

Aside from saving time, I'm bad at writing. Especially emails. I often open ChatGPT, paste in the whole email chain, write out bullet points of what I want to say, and ask it to draft a response that frames them well.

  • I'd prefer to get the bullet points. There's no need to waste time reading autogenerated filler.

  • > write out the bullets of the points I want to make

    Just send those bullet points. Everyone will thank you

  • My boss does that, I am sure.

    One of their dreadful behaviors, among many.

    My advice is to stop doing this, for the sake of your colleagues.

  • Why not just send the bullet points? Kinder to your audience than sending them AI slop.

  • Hopefully you're specifying that your email is written with ChatGPT so other parties can paste it back into ChatGPT and get bullet points back instead of wasting their time reading the slop.

There's a whole lot of people who struggle to write professionally, or to write at all when there's any sort of conflict (even telling your boss you won't come to work). It can be crippling trying to find the right wording, and it can certainly take far longer than writing a prompt. AI is incredible for these people. They were never going to express their true feelings anyway; they were just struggling to write "properly", in a way that doesn't lead to misunderstandings. If you can just smash out good emails without a second thought, you wouldn't need it.

AI for writing or research is useful like a dice roll. Terence Tao famously showed how talking to an LLM gave him an idea/approach to a proof that he hadn't immediately thought of (though he probably would have considered it eventually). The other day I wrote an unusual, four-word neologism that I'm pretty sure no one has ever seen, and the AI immediately drew the correct connection to more standard terminology and arguments, so I did not even have to expand on or explain it myself.

I don't know, but I am considering the possibility that even for everyday tasks, this kind of exploratory shortcut can be a simple convenience. Furthermore, it is precisely the lack of context that enables LLMs to make these non-human, non-specific connective leaps; their weakness is also their strength. In this sense, they hint at a new kind of discursive common ground: if human conversants are saying things that an LLM can easily catch, then LLMs could even serve as the lowest common denominator for laying out arguments, disagreements, talking past each other, etc. But that's in principle; in practice it is too idealistic, as long as these are built and owned as capitalist IP.

> And also, I don't trust some misandric f*ck not to sell my data before piping it into my dataset.

..."misandric"? Is this some kind of red pill MRA persecution complex shibboleth you picked up from Andrew Tate's youtube channel and like to interject into all your emails and postings?

Do you really think some "f*ck" out there is hell bent on persecuting you just because you're male, and not just because they're a "f*ck" who is out to sell everyone's data no matter what their gender? Do you really believe they're not out to sell women's data too?

So in the spirit of this discussion, just how would you write a prompt explaining to an LLM when you like to interject words like "misandric" into your emails and postings, so it really sounds like you wrote it? Why did you choose that particular term, and how would you explain in an LLM system prompt when and why you use it? Do you believe you get reverse-discriminated against, persecuted, and picked on just because you're male, and choose to use that term because you want to signal your male persecution complex to everyone?