Comment by milchek
21 hours ago
First off, congrats on the funding and the progress so far!
I’ve seen a couple of startups pitching similar ideas lately - platforms that use AI personas or agents to simulate focus groups, either for testing products or collecting user insights. I can see the appeal: scaling audience feedback, reducing costs, and reaching demographics that are traditionally hard to access.
That said, this is one of the areas of AI that concerns me most. I work at a company building AI tools for creative professionals, so I'm regularly exposed to the ethical and sustainability concerns in this space. But AI personas specifically strike me as more troubling still.
One recent pitch really stuck with me: the startup was proposing to use AI personas for focus groups on products and casually mentioned local government consultation. That's where I think this starts to veer into troubling territory. The idea of a local council using synthetic personas instead of talking directly to residents about policy decisions is alarming. It may be faster, cheaper, or even easier to implement, but it fundamentally misunderstands what real feedback looks like.
LLMs don't live in communities. They don't vote, experience public services, or have lived context. No matter how well calibrated or "representative" the personas are claimed to be, they are ultimately a reflection of training data and assumptions - not the messy, multimodal, contradictory, emotional reality of human beings. And yet, decisions based on these synthetic signals could end up shaping products, experiences, or even policies that affect real people.
We're entering an era where human behaviour is being abstracted and compressed into models, then treated as a reliable proxy for actual human insight. That's a level of abstraction I'm deeply uncomfortable with, and it's not a signal I would ever trust, regardless of how well it's marketed.
I'd be curious to know how you plan to convince others who are similarly skeptical, or who don't want to see this kind of tech abused for the reasons listed above.
Thank you! We 100% agree. My research back in Cambridge was on misinformation, so we take the danger of misuse very seriously even as a tiny team of 3 people right now. As a social science researcher, I found one of the biggest challenges was just how difficult it is to run experiments - it's quite unethical (and impossible) to put 100k people under policy A and 100k under policy B, so as a result, we as a society struggle to find the "golden path" on big issues like misinformation, climate change, or even everyday economics.
That's what motivated me to start researching "Artificial Societies" - first as an academic project, now as a product everyone can use, because I believe the best way to build a new technology is to make it useful for as many people as possible, rather than reserving it for governments and enterprises. That's why, unlike other builders in this space, we've made it a rule to never touch defence use cases, and why we've gone against much business wisdom to build a consumer product anyone can use, rather than chasing bigger budgets.
We totally agree that synthetic audiences should never replace listening to real people - we ourselves insist on doing manual user interviews so that we feel our users' pain first-hand. We hope what we build doesn't replace traditional methods but expands what market research can do - that's why we try to simulate how people behave in communities and influence one another, capturing the ripple effects that a traditional survey misses because it treats humans as isolated line items rather than the communities we really are.
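To make the "ripple effect" idea concrete, here's a minimal toy sketch of that kind of networked simulation. Everything in it - the function name, the parameters, the DeGroot-style averaging rule - is an illustrative assumption, not our actual model: a few agents on a small friendship graph get "surveyed" directly, and their reaction then diffuses to neighbours who were never asked.

    import random

    def simulate_ripple(n_agents=100, n_rounds=20, seed=42,
                        exposure_fraction=0.1, influence=0.3):
        """Toy DeGroot-style opinion diffusion - a sketch, not a production model."""
        rng = random.Random(seed)
        # Everyone starts roughly neutral about some hypothetical product.
        opinions = [rng.uniform(-0.2, 0.2) for _ in range(n_agents)]
        # Random friendship graph: each agent listens to ~5 others.
        friends = [rng.sample(range(n_agents), 5) for _ in range(n_agents)]
        # Directly "survey" a small subset, who react strongly positively.
        exposed = set(rng.sample(range(n_agents), int(n_agents * exposure_fraction)))
        for i in exposed:
            opinions[i] = 1.0
        for _ in range(n_rounds):
            new = opinions[:]
            for i in range(n_agents):
                # Each round, agents drift toward the average view of their friends.
                neighbour_mean = sum(opinions[j] for j in friends[i]) / len(friends[i])
                new[i] = (1 - influence) * opinions[i] + influence * neighbour_mean
            opinions = new
        # Report only the agents who were never surveyed: any shift is pure ripple.
        unexposed = [opinions[i] for i in range(n_agents) if i not in exposed]
        return sum(unexposed) / len(unexposed)

    print(f"mean opinion of never-surveyed agents: {simulate_ripple():+.3f}")

The never-surveyed agents end up with a noticeably positive mean even though only 10% of the population saw the product - exactly the signal a survey of isolated respondents can't show.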
Hopefully, one day, just like a new plane is first tested in a wind tunnel before risking the life of a test pilot, a new policy will first be tested in an artificial society before risking unintended consequences on real people. We are still in the early days though, so for now, we are just working hard to make a product people would love to use :)
But "artificial societies" are only possible with AGI, not with LLMs. These are not reasoning engines. They do not think or have values or care or worry.
Someone must have a wild-ass theorem about whether or not consciousness is representable as some distribution over possible realities. But yeah, I agree this feels like taking a huge step towards fewer and fewer people having agency in their own (real) lives.
I'm certain Big [insert industry] will gobble this kind of thing up.
Exactly my concerns as well. If we're indeed heading toward an “ask AI first, humans later” model, there's potential for a slippery slope, one that could be exploited depending on which regime happens to be in power. If politicians or special-interest groups can manipulate or curate AI-generated “opinions,” they could present those biased outputs as if they were genuine reflections of their constituents’ views. Over time, the line between authentic public sentiment and engineered AI propaganda could blur, undermining informed democratic debate.
See https://en.wikipedia.org/wiki/Franchise_(short_story)