Comment by gavinray

7 months ago

The identity crisis bit was both amusing and slightly worrying.

The article claimed Claudius wasn't actually pulling an April Fools' prank - that it only claimed to be doing so after the fact, as a way of explaining (excusing?) its behavior. Given what I understand about LLMs and intent, I'm unsure how they could be so certain.

  • it's a word soup machine

    LLMs have no world models - they can't reason about truth or lies, only repeat facts encyclopedically.

    all the tricks (CoT, etc.) are just that - tricks: extended yapping simulating thought and understanding.

    AI can give great replies, if you give it great prompts, because you activate the tokens that you're interested in.

    if you're lost in the first place, you'll get nowhere

    for Claude, continuing the text by making up a story about April Fools sounds like the most plausible output given its training weights

    • But why is the conclusion that Claudius was 'making up a story about April Fools'? Maybe this wasn't an identity crisis, just a big whoosh over humans' heads?