Comment by pmarreck

1 day ago

Loved the fact that the interactive demos were live.

You could even skip the custom system prompt entirely and just have it analyze a randomized but statistically-significant portion of the corpus of your outgoing emails and their style, and have it replicate that in drafts.

You wouldn't even need a UI for this! You could sell a service that simply authenticates to your inbox and does all of this from the backend.

It would likely end up being close enough to the mark that the uncanny valley might get skipped and you would mostly just be approving emails after reviewing them.

Similar to reviewing AI-generated code.

The question is, is this what we want? I've already caught myself asking ChatGPT to counterargue as me (but with less inflammatory wording) and it's done an excellent job which I've then (more or less) copy-pasted into social-media responses. That's just one step away from having them automatically appear, just waiting for my approval to post.

Is AI just turning everyone into a "work reviewer" instead of a "work doer"?

Honestly, you could try this yourself today. Grab a few emails, paste them into ChatGPT, and ask it to write a system prompt that will write emails that mimic your style. Might be fun to see how it describes your style.
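If you want to script the experiment instead of pasting by hand, the assembly step is trivial. This is a minimal sketch; the function name and the prompt wording are my own invention, not anything ChatGPT requires:

```python
def build_style_prompt(sample_emails):
    """Assemble a system prompt asking a model to mimic the style of
    the given sample emails. Illustrative only: the exact wording of
    the instructions is an assumption, not a tested recipe."""
    samples = "\n---\n".join(sample_emails)
    return (
        "You draft emails in the style of the author of the samples below. "
        "Match their tone, sentence length, and sign-off.\n\n"
        f"Sample emails:\n{samples}"
    )

prompt = build_style_prompt([
    "Hey Sam, quick one: can we push the call to 3pm? Thanks, Pete",
    "Hi all, attached is the Q3 summary. Shout if anything looks off. Pete",
])
```

You'd then send `prompt` as the system message to whatever model you like; the interesting part of the experiment is what the model says about your style, not the plumbing.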

To address your larger point, I think AI-generated drafts written in my voice will be helpful for mundane, transactional emails, but not for important messages. Even simple questions like "what do you feel like doing for dinner tonight" could only be answered by me, and that's fine. If an AI can manage my inbox while I focus on the handful of messages that really need my time and attention, that would be a huge win in my book.

It all depends on how you use it, doesn't it?

A lot of work is inherently repetitive, or involves critical but burdensome details. I'm not going to manually write dozens of lines of code when I can do `bin/rails generate scaffold User name:string`, or manually convert decimal to binary when I can access a calculator within half a second. All the important labor is in writing the prompt, reviewing the output, and altering it as desired. The act of generating the boilerplate itself is busywork. Using an LLM instead of a fixed-functionality wizard doesn't change this.
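The decimal-to-binary case illustrates the point: the conversion is a solved, mechanical step that languages expose as a one-liner, so doing it by hand is pure busywork. In Python, for instance:

```python
# Built-in base conversion: no manual repeated division needed.
binary = format(13, "b")   # decimal -> binary string
back = int("1101", 2)      # binary string -> decimal
print(binary, back)
```

The judgment calls (which number, why, and whether the result is sane) stay with you; only the mechanical transformation is delegated, which is exactly the division of labor described above.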

The new thing is that the generator is essentially unbounded and silently degrades when you go beyond its limits. If you want to learn how to use AI, you have to learn when not to use it.

Using AI for social media is distinct from this. Arguing with random people on the internet has never been a good idea and has always been a massive waste of time. Automating it with AI just makes this more obvious. The only way to have a proper discussion is going to be face-to-face, I'm afraid.

About writing a counterargument for social media: I kinda get it, but what's the end game of this? People reading generated responses others (may have) approved? Do we want that? I think I don't.

The live demos were neat! I was playing around with "The Pete System Prompt", and one of the times, it signed the email literally "Thanks, [Your Name]" (even though Pete was still right there in the prompt).

Just a reminder that these things still need significant oversight or very targeted applications, I suppose.

  • The live demos are using a very cheap and not very smart model. Don't update your opinion on AI capabilities based on the poor performance of gpt-4o-mini.

It's what we want, though, isn't it? AI should make our lives easier, and it's much easier (and more productive) to review work already done than to do it yourself. Now, whether that is a good development morally/spiritually for the future of mankind is another question... Some would argue industrialization was bad in that respect, and I'm not even sure I fully disagree.

  • > and it's much easier (and more productive) to review work already done than to do it yourself

    This isn't the tautology you imagine it to be.

    Consider the example given here of having AI write one-line draft responses to emails. To validate such a response, you have to: (1) read the original email, (2) understand it, (3) decide what you want to communicate in your reply, then (4) validate that the suggested draft communicates the same.

    If the AI gave a correct answer, you saved yourself from typing one sentence, which you probably already formulated in your head in step (3). A minor help, at best.

    But if the AI was wrong, you now have to write that reply yourself.

    To get positive expected utility from the above scenario, you'd need the probability of the AI being correct to be extremely high, and even then the savings would be small.

    A task that requires more effort to turn ideas into deliverables would have a better expectation, but complex tasks often have results that are neither simple nor easy to check, so the savings may not be as meaningful as one might naively assume.

  • No? Not everyone's dream is being a manager. I like writing code, it's fun! Telling someone else to go write code for me so that I can read it later? Not fun, avoid it if possible (sometimes it's unavoidable, we don't have unlimited time).

    • People still play chess, even though now AI is far superior to any human. In the future you will still be able to hand-write code for fun, but you might not be able to earn a living by doing it.

    • I meant what we want from an economic perspective, scalability-wise. I agree writing code is fun, and I even disabled AI autocomplete because of it... But I fear it may end up like baking our own bread.
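The review-vs-write argument a few comments up can be made concrete with a back-of-the-envelope model. Assume reviewing a draft takes `review_s` seconds, writing the reply yourself takes `write_s` seconds, and the draft is acceptable with probability `p` (all three numbers are made-up assumptions for illustration):

```python
def expected_seconds_with_ai(p, review_s, write_s):
    """Expected effort when you always review the AI draft and fall
    back to writing the reply yourself whenever the draft is wrong."""
    return review_s + (1 - p) * write_s

# Assumed numbers: 20s to review a one-line draft, 30s to type it yourself.
p, review_s, write_s = 0.9, 20, 30
with_ai = expected_seconds_with_ai(p, review_s, write_s)
without_ai = write_s
```

Under these assumptions the draft only pays off when `p * write_s > review_s`, i.e. here when the AI is right more than two-thirds of the time, which matches the commenter's point: the probability of correctness has to be high, and even then the saving per email is modest.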

What is the point? The effort to write the email yourself is equal to the effort of asking the AI to write it for you. Only when the AI turns your unprofessional style into something professional is any effort saved, but that "professional"-sounding style is wrong most of the time and should get dumped into junk.

  • Yeah, I'm with you on this one. Surely in most instances it is easier to just bash out the email, plus you get the added bonus of exercising your own mind: vocabulary, typing skills, articulating concepts, defining appropriate etiquette. As the years roll by I'm aiming to be more conscious and diligent with my own writing and communication, not less. If one extrapolates on the use of AI for such basic communication, is there a risk some of us lose our ability to meaningfully think for ourselves? The information space of the present day already feels like it is devolving: shorter and shorter content, lack of nuance, reductive messaging. Sling AI in as a mediator for one-to-one communication too and it feels perilous for social cohesion.