Comment by herval
21 days ago
I'm not sure that's the case, and it's quite easy to demonstrate - if you ask an LLM any question, then doubt its response, it will change its mind and offer a different interpretation. That's an indication it holds multiple interpretations depending on how you ask; otherwise it would dig in.
You can also see decision paralysis in action if you implement chain-of-thought (CoT) prompting - it's common to see the model "pondering" a bunch of possible options before picking one.
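For example, a minimal sketch of eliciting that "pondering" with a CoT-style prompt - `ask_llm` here is a hypothetical placeholder for whatever chat-completion call your stack provides, not a real API:

```python
# Sketch only: `ask_llm` is a stand-in for your actual model call.
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("wire this up to your model of choice")

question = "Is a hot dog a sandwich?"

# Ask the model to surface its deliberation before committing to an answer.
cot_prompt = (
    f"{question}\n"
    "Think step by step: list the plausible interpretations first, "
    "weigh them against each other, then commit to a single answer."
)

print(ask_llm(cot_prompt))
# The intermediate reasoning typically enumerates several competing
# interpretations before one is chosen - the "pondering" described above.
```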
That's an interesting framing, but I'd still contend that an LLM doesn't seem to hold both ideas 'at the same time', because it will answer confidently either way. Which way it goes depends on the input; it doesn't seem to consider and weigh up all of its knowledge when answering.