Comment by AdieuToLogic
5 days ago
> If this was done well in a way that was productive for corporate work, I suspect the AI would engage in Machiavellian maneuvering and deception that would make typical sociopathic CEOs look like Mister Rogers in comparison.
Algorithms possess neither ethics nor morality[0] and therefore cannot engage in Machiavellianism[1]. At best, algorithms can simulate them, as pioneered by ELIZA[2]; the resulting ELIZA effect[3] is arguably one of the best-known forms of anthropomorphism.
0 - https://www.psychologytoday.com/us/basics/ethics-and-moralit...
1 - https://en.wikipedia.org/wiki/Machiavellianism_(psychology)
2 - https://en.wikipedia.org/wiki/ELIZA
3 - https://en.wikipedia.org/wiki/ELIZA_effect
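The simulation point is easy to make concrete. As an illustrative sketch (these rules are my own invention, not Weizenbaum's actual DOCTOR script), a few lines of pattern substitution produce ELIZA-style responses with no understanding, ethics, or intent behind them:

```python
import re

# Hypothetical ELIZA-style rules: surface pattern -> reply template.
# The program has no model of meaning; it only rewrites strings.
RULES = [
    (re.compile(r"\bI need (.+)", re.I), r"Why do you need \1?"),
    (re.compile(r"\bI am (.+)", re.I), r"How long have you been \1?"),
]

def respond(utterance):
    """Return the first matching rule's rewrite, else a stock prompt."""
    for pattern, template in RULES:
        if pattern.search(utterance):
            return pattern.sub(template, utterance)
    return "Please tell me more."
```

Exchanges like `respond("I need a vacation")` yielding "Why do you need a vacation?" feel conversational precisely because the reader supplies the understanding — which is the ELIZA effect in miniature.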
>As Weizenbaum later wrote, "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."...
That pretty much explains the AI hysteria that we observe today.
https://en.wikipedia.org/wiki/AI_effect
>It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'.
That pretty much explains the "it's not real AI" hysteria that we observe today.
And what is the "AI effect", really? It's a coping mechanism: a way for silly humans to keep pretending that they are unique and special, the only things in the whole world that can be truly intelligent, while rejecting an ever-growing pile of evidence pointing otherwise.
>there was a chorus of critics to say, 'that's not thinking'.
And they were always right... and the other guys always wrong.
See, the question is not whether something is "real AI". The question is: what can this thing realistically achieve?
The "AI is here" crowd is always wrong because they give a much too optimistic, or should I say "delusionally" optimistic, answer to that question. I think this happens because they don't care to understand how it works and just go by its behavior (which is often cherry-picked, optimized, and hyped to the limit to rake in maximum investment).
ELIZA couldn't write working code from an English-language prompt though.
I think the "AI Hysteria" comes more from current LLMs being actually good at replacing a lot of activity that coders are used to doing regularly. I wonder what Weizenbaum would think of Claude or ChatGPT.
> ELIZA couldn't write working code from an English-language prompt though.
Neither can commercial LLM-based offerings.
>ELIZA couldn't write working code from an English-language prompt though.
Yeah, that is kind of the point: even such a simple system could trick people into delusional thinking.
> actually good at replacing a lot of activity that coders are used to...
I think even that is unrealistic. But that is not what I was thinking of. I was thinking of when people say that current LLMs will keep improving and reach some kind of real, human-like intelligence. The ELIZA effect provides a perfect explanation for this.
It is very curious that this effect is the perfect thing for scamming investors: people who are already bought into such claims will, under the ELIZA effect, pour in 10x or 100x the investment...
> Algorithms do not possess ethics nor morality[0] and therefore cannot engage in Machiavellianism[1].
Conjecture. There are plenty of ethical frameworks grounded in pure logic (Kant), or game theory (morality as evolved co-operation). These are both amenable to algorithmic implementations.
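That game-theoretic framing is straightforward to implement. A minimal sketch of "morality as evolved co-operation" in the style of Axelrod's tournaments (standard prisoner's-dilemma payoffs; the strategy and function names are just illustrative):

```python
# Iterated prisoner's dilemma with the standard payoff values
# (T=5, R=3, P=1, S=0). 'C' = cooperate, 'D' = defect.
PAYOFF = {
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Run the iterated game; each strategy sees only the other's past moves."""
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b
```

Reciprocity ("co-operate, retaliate, forgive") falls out of a few lines of code here, which is the point: the framework runs as an algorithm whether or not one grants that the algorithm "has" morals.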
> There are plenty of ethical frameworks grounded in pure logic (Kant), or game theory (morality as evolved co-operation). These are both amenable to algorithmic implementations.
Algorithmic implementations are programmatic manifestations of mathematical models and, as such, are by definition not what they model.
To wit, NOAA hurricane models[0] are obviously not the hurricanes which they model.
0 - https://www.aoml.noaa.gov/hurricane-modeling-prediction/
> Algorithmic implementations are programmatic manifestations of mathematical models and, as such, are by definition not what they model.
This is false for constructs of information, i.e., a "manifested model" of a sorted list is a sorted list, and a "manifested model" of a sorting algorithm is a sorting algorithm.
To wit, an accurate algorithmic model of moral reasoning is moral reasoning, since moral reasoning, being a decision procedure, is an information process.
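A concrete illustration of that claim (my own toy example, not from the thread): a program that satisfies the specification of sorting is not a stand-in for sorting; running it *is* sorting.

```python
# An implementation of the abstract procedure "sorting" does not merely
# model sorting -- executing it performs sorting.

def insertion_sort(xs):
    """Return a sorted copy of xs: a concrete instance of the
    abstract decision procedure, not a simulation of one."""
    out = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

def is_sorted(xs):
    """Check the specification the implementation is claimed to satisfy."""
    return all(a <= b for a, b in zip(xs, xs[1:]))
```

The hurricane analogy breaks here because a hurricane is a physical object, while a sorted list is purely informational: the "map" and the "territory" are made of the same stuff.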