Comment by tliltocatl
6 months ago
Anthropomorphising implicitly assumes motivation, goals and values. That's the core of anthropomorphism: attempting to explain the behavior of a complex system in teleological terms. And prompt escapes make it clear LLMs don't have any teleological agency yet. Whatever their course of action is, it is too easy to steer them off it. Try to do that with a sufficiently motivated human.
> Try to do it with a sufficiently motivated human.
That's what they call marketing, propaganda, brainwashing, acculturation, or education, depending on who you ask and at which scale you operate, apparently.
> sufficiently motivated
None of these target the sufficiently motivated, but rather those who are either ambivalent or as yet unexposed.
How will you know when an AI has teleological agency?
Prompt escapes will be much harder, and some of them will end in an equivalent of "sure, here is… no, wait… you know what, I'm not doing that", i.e. slipping and then getting back on track.