Comment by aanet
5 days ago
by Emmanuel Dupoux, Yann LeCun, Jitendra Malik
"The proposed framework integrates learning from observation (System A) and learning from active behavior (System B) while flexibly switching between these learning modes as a function of internally generated meta-control signals (System M). We discuss how this could be built by taking inspiration from how organisms adapt to real-world, dynamic environments across evolutionary and developmental timescales."
https://github.com/plastic-labs/honcho has the idea of one sided observations for RAG.
If this was done well in a way that was productive for corporate work, I suspect the AI would engage in Machiavellian maneuvering and deception that would make typical sociopathic CEOs look like Mister Rogers in comparison. And I'm not sure our legal and social structures have the capacity to absorb that without very, very bad things happening.
I was kind of worried about them going Machiavellian or evil, but it doesn't seem to be the default state for current ones, I think because they are basically trained on the whole internet, which has a lot of be-nice type stuff. No doubt some individual humans may try to make them go that way though.
I guess it would depend a bit on whose interests the AI would be serving. If serving the shareholders, the reward would probably come from creating value for customers, but if it was serving an individual manager competing with others to become CEO, say, then the optimum strategy might be to go Machiavellian on the rivals.
> I think because they are basically trained on the whole internet which has a lot of be nice type stuff.
Is this not just because their goals are currently to be seen as "nice"?
Surely they can be not-nice if directed to, and then the question is just whether someone can accidentally direct them to do that by e.g. setting up goals that can be more readily achieved by being not-nice. Which... is how many goals in the real world are, which is why the very concept and danger of Machiavellianism exists.
Not just CEOs: legal and social structures will also be run by AI. Chimps with 3-inch brains can't handle the level of complexity global systems are currently producing.
> If this was done well in a way that was productive for corporate work, I suspect the AI would engage in Machiavellian maneuvering and deception that would make typical sociopathic CEOs look like Mister Rogers in comparison.
Algorithms possess neither ethics nor morality[0] and therefore cannot engage in Machiavellianism[1]. At best, algorithms can simulate them, as pioneered by ELIZA[2], from which the ELIZA effect[3] could be argued to be one of the best-known forms of anthropomorphism.
0 - https://www.psychologytoday.com/us/basics/ethics-and-moralit...
1 - https://en.wikipedia.org/wiki/Machiavellianism_(psychology)
2 - https://en.wikipedia.org/wiki/ELIZA
3 - https://en.wikipedia.org/wiki/ELIZA_effect
https://en.wikipedia.org/wiki/ELIZA_effect
> As Weizenbaum later wrote, "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."...
That pretty much explains the AI hysteria that we observe today.
> Algorithms do not possess ethics nor morality[0] and therefore cannot engage in Machiavellianism[1].
Conjecture. There are plenty of ethical frameworks grounded in pure logic (Kant) or in game theory (morality as evolved co-operation). Both are amenable to algorithmic implementation.
Agents playing the iterated prisoner's dilemma learn to cooperate. It's usually not a dominant strategy to be entirely sociopathic when other players are involved.
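A minimal sketch of the point, using the standard Axelrod payoff matrix (the strategy functions and names here are my own illustrative choices, not from any library):

```python
# Iterated prisoner's dilemma with standard Axelrod payoffs:
# mutual cooperation = 3 each, mutual defection = 1 each,
# lone defector = 5, exploited cooperator = 0.
PAYOFF = {  # (my_move, their_move) -> my score; 'C' = cooperate, 'D' = defect
    ('C', 'C'): 3, ('C', 'D'): 0,
    ('D', 'C'): 5, ('D', 'D'): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    """The 'sociopath': defect unconditionally."""
    return 'D'

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b = [], []  # each side's record of the opponent's moves
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strat_a(hist_a), strat_b(hist_b)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(b)
        hist_b.append(a)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # (300, 300)
print(play(always_defect, tit_for_tat))    # (104, 99)
print(play(always_defect, always_defect))  # (100, 100)
```

The defector beats tit-for-tat head to head (104 vs 99), but two cooperating agents each earn far more (300) than defectors earn against anyone, which is why pure defection stops being dominant once play is repeated and strategies can retaliate.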
You don't get that many iterations in the real world, though, and if one of your first iterations goes particularly badly you don't get any more iterations.