Comment by dTal

10 days ago

Your ultra-reductionism does not constitute understanding. "Math happens and that somehow leads to a conversational AI" is true, but it is not useful. You cannot use it to answer questions like "how should I prompt the model to achieve <x>?" There are many layers of abstraction within the network - important, predictive abstractions - of which you have no concept. It is about as useful as asking a particle physicist why your girlfriend left you, on the grounds that she is made of atoms.

Incidentally, your description of LLMs also describes all software, ever. It's just math, man! That doesn't make you an expert kernel hacker.

It sounds like you're looking for the field of psychology. And as in psychology, any predictive abstraction around systems this complicated will be tenuous, statistical, and full of bad science.

You may never get a scientific answer to "how should I prompt the model to achieve <x>", just like you may never get a capital-S Scientific answer to "how should I convince people to do <x>". What would it even mean to "understand people" at that level?

You demand too much.