But we also have a stake in our society, in the form of reputation and accountability, that greatly influences our behaviour. So comparing us to an LLM has always been meaningless anyway.
Hm, great lumps of money also detach a person from reputation or accountability.
Does it? I think it detaches them from _some_ of the consequences of devaluing their reputation or accountability, which is not quite the same thing.
Money, or any single metric, no matter how high, is not enough to bend someone's actions into territory they would otherwise judge unacceptable.
How much money would it take to bribe someone into taking part in a genocide? The thing is, some people would not see any amount as convincing, while others would do it proactively for no money at all.
to be fair, the people most antisocially obsessed with dogshit AI software are completely divorced from the social fabric and are not burdened by these sorts of juvenile social ties
Which is why every tool that is better than humans at a certain task is deterministic.
Yeah, but not when they are expected to perform in a job role. Too much nondeterminism in that case leads to firing and replacing the human with a more deterministic one.
>but not when they are expected to perform in a job role
I mean, this is why critical systems involving humans have hard-coded checklists and do not depend on people 'just winging it'. We really suck at determinism.
I feel like we are talking about different levels of nondeterminism here. The kind of LLM nondeterminism that's problematic has to do with the interplay between its training and its context window.
Take the idea of the checklist. If you give it to a person and tell them to follow it, and it's their job, they will do so. But with LLM agents, you can give them the checklist, and maybe they apply it at first, but eventually they completely forget it exists. The longer the conversation goes on without reminding them of the checklist, the more likely they are to act like the checklist never existed at all. And you can't know when that will happen, so the best solution we have now is to constantly remind them of the existence of the checklist.
This is the kind of nondeterminism that makes LLMs particularly problematic as tools and a very different proposition from a human, because it's less like working with an expert and more like working with a dementia patient.
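To make that workaround concrete, here is a minimal Python sketch of the "constantly remind them" approach. Everything in it is hypothetical: call_llm is a stand-in for whatever chat-completion API you actually use, and REMIND_EVERY is a made-up cadence. The only real point is re-injecting the checklist so it stays near the end of the context instead of scrolling out of effective attention.

    CHECKLIST = (
        "Checklist:\n"
        "1. Run the tests before claiming a fix works.\n"
        "2. Never edit files outside the project directory.\n"
        "3. Ask before making destructive changes."
    )

    REMIND_EVERY = 3  # hypothetical cadence; tune per model and context size

    def call_llm(messages: list[dict]) -> str:
        """Placeholder for a real chat-completion call; not a real API."""
        raise NotImplementedError

    def run_agent(user_turns: list[str]) -> list[str]:
        # Start with the checklist up front, as you would for a person.
        messages = [{"role": "system", "content": CHECKLIST}]
        replies = []
        for i, turn in enumerate(user_turns):
            if i > 0 and i % REMIND_EVERY == 0:
                # Re-inject the checklist so it sits near the end of the
                # context, instead of trusting the model to keep honoring
                # the opening message as the conversation grows.
                messages.append({"role": "user", "content": CHECKLIST})
            messages.append({"role": "user", "content": turn})
            reply = call_llm(messages)
            messages.append({"role": "assistant", "content": reply})
            replies.append(reply)
        return replies

The periodic re-injection is the whole trick: nothing guarantees compliance, it just keeps the checklist from drifting out of attention, which is exactly the difference from handing a checklist to a human once.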
Human minds are more complicated than a language model that behaves like a stochastic echo.
Birds are more complicated than jet engines, but jet engines travel a lot faster.
Jet engines don't go anywhere without a large industry continuously taking care of all the complexity that even the simplest jet travel implies.
They also kill a lot more people when they fail.
Birds don't need airports, don't need expensive maintenance every N hours of flight, they run on seeds and bugs found wherever they happen to be, instead of expensive poisonous fuel that must be fed to planes by mechanics, they self-replicate for cheap, and the noises they produce are pleasant rather than deafening.