
Comment by Retric

17 hours ago

The counterargument is that some people are terrible at writing. Millions of people sit at the bottom of any given bell curve.

I’d never trust a summary from a current generation LLM for something as critical as my inbox. Some hypothetical drastically improved future AI, sure.

Smarter models aren't going to magically understand what is important to you. If you took a random smart person you'd never met and asked them to summarize your inbox without any further instructions, they would do a terrible job too.

You'd be surprised at how effective current-gen LLMs are at summarizing text when you explain how to do it in a thoughtful system prompt.
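For instance, here's a minimal sketch of that approach using the OpenAI Python SDK. The model name, prompt wording, and sample email are illustrative assumptions, not a recommendation:

    # Minimal sketch: summarizing an email with an explicit system prompt.
    # Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
    # the model name and prompt text are illustrative, not prescriptive.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPT = (
        "You summarize emails for a busy reader. For each email, report: "
        "(1) sender and topic, (2) any explicit requests or deadlines, "
        "(3) whether a reply is needed. Quote deadlines verbatim; if "
        "something is ambiguous, say so rather than guessing."
    )

    def summarize_email(body: str) -> str:
        # One chat completion per email; the system prompt carries the
        # instructions so the user message can be just the raw email text.
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: any current chat model
            messages=[
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": body},
            ],
        )
        return response.choices[0].message.content

    print(summarize_email("Hi, can you send the Q3 numbers by Friday? -- Dana"))

The point is that the instructions (what to extract, how to handle ambiguity) live in the system prompt, rather than hoping the model infers what matters to you.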

  • I’m less concerned with whether they understand what’s important to me than with the number of errors they make. Better prompts don’t fix that underlying issue.

    • Indeed.

      With humans, every so often I find myself in a conversation where the other party has a wildly incorrect understanding of what I've said, and it can be impossible to get them out of that zone. Rare, but it happens. With LLMs, much as I like them for breadth of knowledge, it happens most days.

      That said, with LLMs I can reset the conversation at any point, backtracking to before the misunderstanding began. But even that trick doesn't always work, so the net result is that the LLM is still worse at understanding me than real humans are.