Comment by bob1029
7 hours ago
I think a basic overview of game theory should also discuss Pareto optimality to some extent. You can have 100% of participants operating in a locally-ideal way while still creating problems in aggregate.
Tragedy of the commons / bounded rationality for example.
https://en.wikipedia.org/wiki/Tragedy_of_the_commons
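A toy Python sketch of that aggregate dynamic (every number here is invented purely for illustration):

    # Hypothetical sketch: each herder grazes a locally sensible amount; the commons still degrades.
    def simulate(herders=10, per_herder=2, capacity=100.0, regrowth=1.05, rounds=20):
        pasture = capacity
        for _ in range(rounds):
            # each herder takes the individually best amount, ignoring the aggregate effect
            grazed = sum(min(per_herder, pasture / herders) for _ in range(herders))
            pasture = min(capacity, max(0.0, pasture - grazed) * regrowth)
        return pasture

    print(round(simulate(), 1))  # collapses toward 0.0: no one "misbehaved", yet the commons is gone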
The Tragedy of the Commons has long been discredited by the Nobel Prize-winning game theorist Elinor Ostrom and her research on numerous case studies of commons, which show how people coordinate for collective resilience and prosperity (through rules, and by auditing and retaliating against abusers and selfish exploiters).
The infamous "tragedy of the commons" rational resource optimization game is often cited as justification for machiavellian exploitation, but humans being social creatures are subject to reputations, and have sophisticated methods of communication, cooperation, reputation, trust, accountability, auditing, and retaliation capabilities. [1] [2]
Elinor Ostrom's "Rules, games, and common-pool resources" and Robert Axelrod's work "The Evolution of Cooperation" both explain game theory in the context of human scale realities. Of particular interest to the hacker community would be Ostrom's Common Pool Resource principles, which are totally applicable to the way we form communities anywhere. Decenteralized or in any form.
At the core of game theory and human civilization is communication and trust. Those who abuse mass media to manipulate populations know the power of communication and cultural narratives, and we're in a new enclosure [3] [4] of the commons, as media communication networks are used for exploitation through "hypernormalization" and "accelerationism". [5][6][7][8]
For a more applicable human-scale game theory primer, check out "Liars and Outliers" by Bruce Schneier (yes, the same legendary cryptographer Bruce).
[1] https://ncase.me/trust/
[2] https://en.wikipedia.org/wiki/Elinor_Ostrom#Design_principle...
[3] https://en.wikipedia.org/wiki/Enclosure
[4] https://en.wikipedia.org/wiki/The_Dawn_of_Everything
[5] "All Watched Over by Machines of Loving Grace" by Adam Curtis, on cybernetics and the 20th century: https://thoughtmaybe.com/all-watched-over-by-machines-of-lov...
[6] "The Century of the Self" by Adam Curtis, on propaganda and 20th century culture: https://thoughtmaybe.com/the-century-of-the-self/
[7] "HyperNormalisation" by Adam Curtis, on hyperreal news and the use of crisis to manipulate populations and normalize a polycrisis: https://thoughtmaybe.com/hypernormalisation/
[8] https://en.wikipedia.org/wiki/Accelerationism
> At the core of game theory and human civilization is communication and trust.
No and no. Game theory is game theory. When Nash describes the optimal move for non-cooperative participants, there is no communication and no trust, and it is still game theory. The Wikipedia page on the Nash equilibrium mentions game theory 42 times.
I'm not saying what you're mentioning ain't also game theory.
But you're attaching an ideological/political motive to freaking maths and then reframing what "game theory" means to fit your own view of the world.
As a side note, I'll point out that humans do play games, from toddlers to grown-up adults. Game theory also applies to things actually called "games": be it poker or chess or Go or whatever.
Not everything has to be seen through the lens of exploitation / evil capitalism / gentle communism (collective resilience) / etc.
Pareto efficiency is a welfare economics concept. In game theory, the closest you can get to that is a Nash equilibrium.
Pareto optimality is definitely a core concept in game theory. A vector is Pareto optimal when no other vector dominates it, i.e., when no alternative is at least as good in every dimension and strictly better in at least one.
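A quick sketch of that dominance check, assuming each point is a tuple of higher-is-better metrics (the numbers are made up):

    # Hypothetical sketch: Pareto dominance over metric tuples (higher is better in every dimension).
    def dominates(a, b):
        # a dominates b if a is at least as good in every dimension and strictly better in one
        return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

    def pareto_frontier(points):
        # keep only the points that no other point dominates
        return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

    # e.g. (speed, quality): (3, 1), (1, 3) and (2, 2) are all on the frontier; (1, 1) is not
    print(pareto_frontier([(3, 1), (1, 3), (1, 1), (2, 2)]))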
I wish more business / product people understood this concept. When a product has been refined enough to approach Pareto optimality (at least along the dimensions on which the product is easily measured), it's all too common for people to chase improvements to one metric at a time and, when that runs out, switch to another metric. This results in going in circles (make metric A go up-up-up, forcing metric B down-down-down, then make B go up-up-up while forcing A down-down-down; in practice it's worse, because multiple dimensions move up and down together, making the cycle harder to spot). Sometimes these cycles span quarters or years, making them even harder to spot because they're slower than employee attrition.
This is not independent of Goodhart's Law[1]. I've seen entire product orgs, on a very mature product (i.e., nearing the Pareto frontier for the metrics that are tracked), assign one metric per PM and tie PM comp to their individual metric improving. Then PMs wheel and deal away good features because "don't ship your thing that hurts my metric and I won't ship my thing that hurts yours" - and that's completely rational given the incentives. Of course the best wheelers-and-dealers get the money/promotions. So the games escalate ("you didn't deal last time, so it's going to cost you more this time"). Eventually negative politics explode and it's all just a reality TV show. Meanwhile engineers who don't have an inside view of what's going on are left wondering why PMs appear to be acting insane with ship/no-ship decisions.
If more people understood Pareto optimality and Goodhart's Law, even at a surface level, I think being "data driven" would be a much better thing.
[1] Goodhart's Law: when a measure becomes a target, it ceases to be a good measure