Comment by handoflixue

2 months ago

5 "b"s, not counting the parenthetical at the end.

https://claude.ai/share/943961ae-58a8-40f6-8519-af883855650e

Amusingly, it struggled a bit to understand what I wanted from the python script to confirm the answer.
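For reference, the confirmation script only needs to be a one-liner. The original prompt text isn't shown in the thread, so the sentence below is a hypothetical stand-in:

```python
def count_letter(text: str, letter: str) -> int:
    """Count case-insensitive occurrences of a letter in a string."""
    return text.lower().count(letter.lower())

# Hypothetical example sentence; the actual prompt isn't in the thread.
print(count_letter("Blueberry bushes bear berries.", "b"))  # → 5
```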

I really don't get why people think this is some huge un-fixable blindspot...

I don't think the salience of this problem is that it's a supposedly unfixable blind spot. It's an illustrative failure in that it breaks the illusory intuition that something that can speak and write to us (sometimes very impressively!) also thinks like us.

Nobody who could give answers as good as ChatGPT often does would struggle so much with this task. The fact that an LLM works differently from a whole-ass human brain isn't actually surprising when we consider it intellectually, but the habit of intuiting a mind behind language whenever we see language is subconscious and reflexive. Examples of LLM failures which challenge that intuition naturally stand out.

That indeed looks pretty good. But then why are we still seeing the issue described in OP?