Comment by James-K-He
20 hours ago
Thank you! We 100% agree. My research back in Cambridge was on misinformation, so we take the danger of misuse very seriously even as a tiny team of 3 people right now. As a social science researcher, one big challenge we faced was just how difficult it was to run experiments - it's quite unethical (and impossible) to have 100k people under policy A and 100k under policy B, so as a result, we as a society struggle to find the "golden path" with big issues like misinformation, climate change, or even everyday economics.
That's what motivated me to start researching in the area of creating "Artificial Societies" - first as an academic project, now as a product everyone can use, because I believe the best way to build a new technology is to try to make it useful for as many people as possible, rather than reserving it for governments and enterprises only. That's why unlike other builders in this space, we've made it a rule to never touch defence use cases; that's why we've gone against much business wisdom to produce a consumer product that anyone can use, rather than going after bigger budgets.
We totally agree that synthetic audiences should never replace listening to real people - we ourselves insist on doing manual user interviews so that we can feel our users' pain ourselves. We hope what we build doesn't replace traditional methods, but expands what market research can do - that's why we try to simulate how people behave in communities and influence one another, so that we capture the ripple effects a traditional survey misses when it treats humans as isolated line items, rather than the communities we really are.
Hopefully, one day, just like a new plane is first tested in a wind tunnel before risking the life of a test pilot, a new policy will also first be tested in an artificial society, before risking unintended consequences in real participants. We are still in the early days though, so for now, we are just working hard to make a product people would love to use :)
But "artificial societies" are only possible with AGI, not with LLMs. LLMs are not reasoning engines. They do not think or have values or care or worry.
Someone must have a wild-ass theorem about whether or not consciousness is representable as some distribution over possible realities. But yeah, I agree this feels like taking a huge step towards fewer and fewer people having agency in their own (real) lives.
I'm certain Big [insert industry] will gobble this kind of thing up.