Comment by norir

5 days ago

This has become an impoverished conversation. I have seen this pattern before: llm capabilities improve, and people who had previously dismissed the technology based on its present capabilities realize their pessimistic assessment of its potential was wrong, switch over to the other side, and project their own previous bias onto those who continue to object.

The basic structure is this: six months ago, I tried llms and they were trash, but holy cow they have improved so much, now I can use them to avoid tedious tasks that I don't like! Don't be an idiot like my skeptical past self.

Then they accuse everyone who now disagrees with their take on the tech of being an incurious luddite blinded by anti-progress bias.

Personally, as a non-user but close observer of the tech, I never doubted that it would improve. But there are obvious problems with the tech beyond hallucinations, problems that have to do with human meaning, understanding, and power relations, and they cannot be solved by making the tech better.

My challenge to all of the boosters is this: try to articulate your own personal vision of both ai utopia and ai dystopia. I personally find it borderline impossible to even imagine a utopia emerging from genai, but it is extremely easy for me to imagine dystopia, especially given the entities that are controlling the tech and competing to "win" the ai arms race.

For me, the representation of the Chinese state as filtered through western media is already a dystopia. Of course, having not been to China myself and being unable to speak any of their languages, I cannot personally verify the representation. But by competing with the Chinese on ai (and I mean Chinese in the way we define them in the west, which I recognize may be very different from both the actual lived experience in China and their self-conception), we become more like our own negative stereotypes of them. It is essentially a race to disempower ordinary people, remove agency from their lives, hand it over to agents who most certainly do not have the best interest of living humans in mind and call this victory. To "win" the ai war as presently defined would likely be disaster for us all.

There are these ridiculous handwaving things about solving climate change or even human mortality with this tech even though there is no evidence whatsoever that it will do this. Just because the people building it say it will do these things doesn't mean we should trust them.

Imagine if a primatologist tried to tell us that because they have trained chimpanzees to recognize colors and some words better than a three year old, we should now stop investing in education and direct all of our resources into training chimps to do all of our repetitive tasks for us, to liberate us from the drudgery of doing anything for ourselves. With enough resources, you would see an explosion in chimp capabilities, and this would come directly at the expense of humans, who now have no work to do and just sit on top of a pyramid built by chimp labor. Not only would the things we made be worse than what we could have made if we focused on developing our own human capacities instead of chimp capacities, but we would live in fear that the chimps (who are also much stronger than us) will wake up to their plight and rise up against their rulers. Humans would also rapidly lose our own capabilities and become much more like chimps than the humans of today. Sound familiar?

I tend to believe that as it is currently being developed, this tech is far more likely to lead us in a direction like the chimp apocalypse than some post labor utopia.

That doesn't mean that the tech isn't getting better or can't do impressive things. I can hold both things in my head at once. But I am much more concerned with human flourishing and well being than with some bored engineers who don't actually like programming (or at least don't like it under the current industrial conditions) feeling like they are being liberated from the tedium of their work. And instead of solving the real underlying problems that make the work so tedious, we compound the problem by having ai generate even more of the exact kind of code that caused the problem in the first place.