
Comment by strokirk

7 days ago

It’s like instructing a toddler.

I recall that early LLMs had trouble understanding the word "not", which became especially evident and problematic when they were tasked with summarizing text: the summary would then sometimes directly contradict the original.

It seems that problem hasn't really been "fixed"; it's just been paved over. But I guess that's the ugly truth most people tend to forget or deny about LLMs: you can't "fix" them, because there's no line of code you can point to that causes a "bug". You can only retrain them and hope the problem goes away. In LLMs, every bug is a "heisenbug" (or should that be a "murphybug", as in Murphy's Law?).

  • Same thing happens for humans:

    "Don't think of a green elephant"

    Alan Watts talked of this concept where the harder you try to suppress a thought or sensation, the more mental energy you give it, making it stronger.

I definitely have gone so far as to treat my LLM-readable docs this way, and have found it very effective.