Comment by Retric
21 hours ago
I’m less concerned with them understanding what’s important to me than I am with the number of errors they make. Better prompts don’t fix the underlying issue here.
Indeed.
With humans, every so often I find myself in a conversation where the other party has a wildly incorrect understanding of what I've said, and it can be impossible to get them out of that zone. Rare, but it happens. With LLMs, much as I like them for their breadth of knowledge, it happens most days.
That said, with LLMs I can reset the conversation at any point, backtracking to before the misunderstanding began. Even that trick doesn't always work, though, so the net result is that the LLM is still worse at understanding me than real humans are.