Comment by otabdeveloper4

4 days ago

LLMs can't think. They are not rational actors, they can only generate plausible-looking texts.

Maybe so, but they boost my coding productivity, so why not?

(Not the mentioned LLMs here though.)

I do the rational acting, and it does the rest.

You're being reductive. A system should be evaluated on its measurable properties more than anything else.

  • Being "reductive" is how we got where we are today. We form hypotheses about things so that we can reduce them to their simplest model. That understanding then leads to massive gains. We've been doing this ever since we first observed things like the behavior of animals so that we could hunt them more easily.

    In the same way, it helps a lot to understand what the correct model of an AI is so that we can use it more productively. Certainly, based on its 'measurable properties', it does not behave like a reasonable human being. Some of the time it does; some of the time it goes completely off the rails. So there must be some other model that is more useful. "They are not rational actors, they can only generate plausible-looking texts" seems to be the more useful one to me. "They are rational actors" would be more like magical thinking, which is not what got us to where we are today.