Comment by alganet
2 days ago
Humans also often get lost in multi-turn conversation.
I have experienced that in person many, many times. Jumps in context that seem easy for one person to follow, but very hard for others.
So, assuming the paper is legit (arxiv, you never know...), it's more like something that could be improved than a fundamental difference from human beings.
Subjectively, the "getting lost" feels totally different from human conversation. Once there is something bad in the context, it seems almost impossible to get back on track: all subsequent responses get a lot worse and the model starts contradicting itself. It's possible that more training can improve this, but what's interesting to me isn't that it's worse than humans in this way, but that this sort of difficulty scales differently than it does in humans. I would love to see more objective descriptions of these subjective notions.
Contradictions are normal. Humans make them all the time. They're even easy to induce, due to the imprecise nature of our communication (lots of ambiguities, semantic disputes, etc.).
I don't see how that's a problem.
Subjectivity is part of human communication.
Algorithmic convergence and caching :: Consensus in conversational human communication
What you're talking about has nothing to do with the paper. It's not about jumps in context. It's about LLMs being biased towards producing a complete answer on the first try, even when there isn't enough information yet. When you then provide additional information, they tend to stick with their originally wrong answer. This means you need to frontload all the information in the first prompt, and if the LLM messes up, you have to start over from scratch. You can't do that with a human at all: there is no such thing as a "single-turn conversation" with humans, and you can't reset a human to a past state.
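To make the asymmetry concrete, here's a minimal sketch in Python. Nothing in it comes from the paper; `call_llm` is a hypothetical stand-in for any chat-completion-style API that takes a list of role/content messages. The point is that the conversation is just a list the caller controls, so you can throw away derailed turns and restart with everything frontloaded, which has no human equivalent.

```python
def call_llm(messages):
    # Hypothetical stand-in for a real chat-completion API that
    # takes a list of {"role": ..., "content": ...} dicts.
    return f"(model reply based on {len(messages)} messages)"

# Turn 1: underspecified request -- the model commits to an answer anyway.
messages = [{"role": "user", "content": "Help me write a parser."}]
first_answer = call_llm(messages)
messages.append({"role": "assistant", "content": first_answer})

# Turn 2: the clarification arrives, but per the paper's observation the
# model often stays anchored to first_answer instead of revising.
messages.append({"role": "user", "content": "Actually it's for CSV, in Rust."})
second_answer = call_llm(messages)
messages.append({"role": "assistant", "content": second_answer})

# The "reset" a human conversation doesn't allow: discard the derailed
# turns entirely and restart with everything frontloaded in one prompt.
messages = [{"role": "user",
             "content": "Write a CSV parser in Rust. <all requirements here>"}]
fresh_answer = call_llm(messages)
print(fresh_answer)
```

With a human you can only keep talking forward from wherever the conversation has drifted; with an LLM, editing or truncating the message list is the cheap and standard workaround.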
I see, thanks for the correction.