
Comment by tough

7 months ago

it's a word soup machine

LLMs have no world models and can't reason about truth or lies; they only repeat facts encyclopedically.

All the tricks, CoT and so on, are just that: tricks, extended yapping that simulates thought and understanding.

AI can give great replies if you give it great prompts, because you activate the tokens you're interested in.

If you're lost in the first place, you'll get nowhere.

For Claude, continuing the text by making up a story about it being April Fools sounds like the most plausible output given its training weights.

But why is the conclusion that Claudius is 'making up a story about being April Fools'? Maybe this wasn't an identity crisis, just a big human whoosh?