Comment by Reubend
3 months ago
Yeah, I haven't looked into this much so far, but I am extremely skeptical of the claims being made here. For one agent to become a tax collector and another to challenge the tax regime, without such behavior being hard-coded, would be extremely impressive.
They were assigned roles to examine the spread of information and behaviour. The agents pay tax into a chest, as decreed by the (dynamic) rules. There are agents assigned to the roles of pro- and anti-tax influencers; agents in proximity to these influencers would change their own behaviour appropriately, including voting for changes in the tax.
So yes, the agents didn't take on these roles organically, but the authors weren't claiming they did: that particular experiment was examining behavioural influence and community dynamics.
I'd recommend skimming over the paper; it's a pretty quick read and they aren't making any truly outrageous claims IMO.
It's not clear what actually happened. They're using Minecraft. Why is there no video?
People have tried groups of AI agents inside virtual worlds before. Google has a project.[1] Stanford has a project.[2] Those have video.
A real question is whether they are anthropomorphizing a dumb system too much.
[1] https://deepmind.google/discover/blog/sima-generalist-ai-age...
[2] https://arstechnica.com/information-technology/2023/04/surpr...
So it's a plain vanilla agent-based model (ABM) with lots of human-crafted interaction logic? Then they are making outrageous claims, since they make it sound like it all arises spontaneously from the interaction of LLMs...
You can imagine a conversation with an LLM getting into that territory pretty quickly if you pretend to be an unfair tax collector. It sounds impressive on the surface, but in the end it's all LLMs talking to each other, and they'll emit whatever completions are likely given the context.
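To make that point concrete, here's a minimal sketch (not from the paper) of what such an agent loop reduces to: a role prompt plus the transcript so far, fed to a completion call. `fake_complete` is a hypothetical stand-in for a real LLM API; the role names are illustrative.

```python
def fake_complete(prompt: str) -> str:
    """Stub standing in for an LLM: returns a plausible continuation
    keyed off whatever role context appears in the prompt."""
    if "tax collector" in prompt:
        return "Please deposit your taxes in the chest."
    if "anti-tax" in prompt:
        return "These taxes are unfair; vote to lower them."
    return "Okay."

def agent_turn(role: str, history: list) -> str:
    # An "agent" turn is nothing more than: role context + transcript -> completion.
    prompt = f"You are {role}.\n" + "\n".join(history)
    reply = fake_complete(prompt)
    history.append(reply)
    return reply

history = []
print(agent_turn("a tax collector", history))
print(agent_turn("an anti-tax influencer", history))
```

The "tax collector" here isn't deciding anything; its line is just the most likely continuation given the role text in its context window, which is the skeptical reading of the paper's results.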