
Comment by akersten

3 days ago

Bag of words is actually the perfect metaphor. The data structure is a bag. The output is a word. The selection strategy is opaquely undefined.

> Gen AI tricks laypeople into treating its token inferences as "thinking" because it is trained to replicate the semiotic appearance of doing so. A "bag of words" doesn't sufficiently explain this behavior.

Something about there being significant overlap between the smartest bears and the dumbest humans. Sorry you[0] were fooled by the magic bag.

[0] in the "not you, the layperson in question" sense

I think it's still a bit of a tortured metaphor. LLMs operate on tokens, not words. And describing their behavior as pulling the right word out of a bag is so vague that it applies every bit as much to a Naive Bayes model written in Python in 10 minutes as it does to the greatest state-of-the-art LLM.
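
For comparison, this is roughly what that ten-minute "bag of words" baseline looks like; a minimal sketch assuming scikit-learn is available, with made-up placeholder texts and labels rather than any real corpus:

```python
# Minimal "bag of words" classifier: Naive Bayes over unordered word counts.
# Placeholder data only; the point is how little machinery the metaphor covers.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "the model predicts the next word",
    "the cat sat on the mat",
    "stock prices fell sharply today",
    "markets rallied after the report",
]
labels = ["tech", "tech", "finance", "finance"]

# CountVectorizer literally builds the bag: word counts with order discarded.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(texts, labels)

print(model.predict(["the report on the model"]))
```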

Yeah. I have a half-cynical/half-serious pet theory that a decent fraction of humanity has a broken theory of mind and thinks everyone has the same thought patterns they do. If it talks like me, it thinks like me.

Whenever the comment section takes a long hit and goes "but what is thinking, really?", I get slightly more cynical about it lol

  • Why not?

    By now, it's pretty clear that LLMs implement abstract thinking - as do humans.

    They don't think exactly like humans do - but they sure copy a lot of human thinking, and end up closer to it than just about anything that's not a human.

    • It isn't clear because they do none of that lol. They don't think.

      It can kinda sorta look like thinking if you don't have a critical eye, but it really doesn't take much to break the illusion.

      I really don't get this obsessive need to pretend your tools are alive. Y'all know when you watch YouTube that it's a trick and the tiny people on your screen don't live in your computer, right?
