Comment by habosa
6 days ago
I’m an AI skeptic. I’m probably wrong. This article makes me feel kinda wrong. But I desperately want to be right.
Why? Because if I’m not right then I am convinced that AI is going to be a force for evil. It will power scams on an unimaginable scale. It will destabilize labor at a speed that will make the Industrial Revolution seem like a gentle breeze. It will concentrate immense power and wealth in the hands of people who I don’t trust. And it will do all of this while consuming truly shocking amounts of energy.
Not only do I think these things will happen, I think the Altmans of the world would eagerly agree that they will happen. They just think it will be interesting / profitable for them. It won’t be for us.
And we, the engineers, are in a unique position. Unlike people in any other industry, we can affect the trajectory of AI. My skepticism (and unwillingness to aid in the advancement of AI) might slow things down a billionth of a percent. Maybe if there are more of me, things will slow down enough that we can find some sort of effective safeguards on this stuff before it’s out of hand.
So I’ll keep being skeptical, until it’s over.
I'm in a nearly identical boat as you.
I'm tired. I'm tired of developers/techies not realizing their active role in creating a net negative in the world. And acting like they are powerless and blameless for it. My past self is not innocent in this; but I'm actively trying to make progress as I make a concerted effort to challenge people to think about it whenever I can.
Time and again, the tech industry (and developers specifically) has taken on an interesting technical challenge that quickly required some ethical or moral tradeoff, and that tradeoff ended up shaping the fabric of society for the worse.
Creating powerful search engines to feed information to all who want it; but we'll need to violate your privacy in an irreversible way to feed the engine. Connecting the world with social media; while stealing your information and mass-exposing you to malicious manipulation. Hard problems to solve without the ethical tradeoff? Sure. But every other technical challenge was also hard and got solved; why can't we also focus on the social problems?
I'm tired of the word "progress" being used without a qualifier of what kind of progress and at the cost of what. Technical progress at the cost of societal regression is still seen as progress. And I'm just tired of it.
Every time "AI skeptics" are brought up as a topic, the focus is entirely on the technical challenges. They never mention the "skeptics" who get that label even though they aren't skeptical of what AI is and could be capable of. I'm skeptical of whether the tradeoffs being made will benefit society overall, or just a few. Because at literally every previous turn for as long as I've been alive, the impact has been a net negative to the total population, without developers questioning their role in it.
I don't have an answer for how to solve this. I don't have an answer for how to stop the incoming shift from destroying countless lives. But I'd like developers to start being honest about their active role in not just accepting this new status quo but proactively pushing us in a regressive direction. And about our power to push back on this coming wave.
+65536
But, tech was not always a net negative.
As far as I can tell, the sharpest negative inflection came around the launch of the iPhone. Facebook was kind of fine when it was limited to universities and they weren't yet doing mobile apps, algorithmic feeds, or extensive A/B testing.
It seems "optimizing engagement" was a grave initial sin...
Maybe some engineers should go back to their childhoods, watch some Outer Limits, and pay attention to the missed lessons...
Our lives are not our own. From womb to tomb, we are bound to others. Past and present. And by each crime and every kindness, we birth our future.
The first digital privacy laws following a personal data scandal were voted in… 1978 (France)
Tech has always been a tool for control, power and accumulation of capital.
You counterbalance it with social and civic laws (i.e., counter-power).
> As far as I can tell, the sharpest negative inflection came around the launch of the iPhone
Some would say "The Industrial Revolution and its consequences have been a disaster for the human race."
So the problem is society’s lack of any coherent ethical framework that says building powerful disruptive technology shall be done like this. If you’re tired, then go fix that problem. Find the answer. Because I’m *exhausted* hearing about how everybody is supposed to risk putting food on their table by telling the big boss they won’t make the feature because it’s unclear whether it might be a net negative for society under one contorted version of an angsty ethical framework a small minority of people have ad-hoc adopted on that orange message board… and that _scares_ them.
Do we need skeptics? We might just need to wait for AI (Actually Indians) Companies to run out of money: https://www.dexerto.com/entertainment/ai-company-files-for-b...
Do you really think anyone working for OpenAI is worried about putting food on the table? They are all senior developers and can easily find another job.
The Luddites get a bad rap these days, but we need more of them.
We need engineers to be politicians, not cable news talking heads.
If you want to learn more about modern Luddites, check out the "This Machine Kills" podcast and, to some extent, Ed Zitron's and Cory Doctorow's blogs; they might be a good place to start.
Political-economic analysis of technology is not a super popular thing in mainstream media, but disabling, sabotaging, or vandalising anti-human tech might be.
> net negative to the total population, without developers questioning their role in it.
I am tired of people blaming bottom developers, while CEOs get millions for "the burden of responsibility".
I'm not blindly blaming the bottom developer. I've played my role in past waves, as have many other developers. I'm not a CEO, so I don't know how to communicate this same message to a CEO. But as a developer, I know I've been an ignorant participant in the past, willfully or not. And I can change my role in the coming wave.
We developers are not blameless. If we accept that we are playing a role, then we can be proactive in preventing this and influencing the direction things go. CEOs need developers to achieve what they want.
I'm not saying it's easy. I won't even hold it against folks who decide to go in a different direction than mine. But I at least hope we can be open about the impact we each have, and that we are not powerless here.
Yes, CEOs are to blame, but blaming them isn't gonna do anything. They won't change. Who has the motivation and capacity to change things? The working people. Who isn't currently doing it? The working people. So it seems appropriate for me to raise this as a problem: the fact that the working people silently go along with all the evil plans CEOs put in place.
There is technology and its related technical advancements, and then there are the business incentives to make money. A lot of progress has indeed been made in NLP and information retrieval, which is helpful in its own ways to speed things up; it can easily be seen as the next level of automation.
Everything else around it is a glamorous party because everyone's money is riding on it, and one needs to appreciate it or risk being deserted by the crowd.
The basics of science are about questioning things until you are convinced. People depending on models too much may end up in a situation where they lose the ability to triangulate information from multiple sources before being convinced of it.
Programming can be more complicated above a certain threshold, even for humans, so it will be interesting to see how the models perform with that complexity. I am a skeptic, but then again, I don't know the future either.
> They never mention the "skeptics" who get that label even though they aren't skeptical of what AI is and could be capable of.
This is because most people on HN who say they are skeptical about AI mean skeptical of AI capabilities. This is usually paired with statements that AI is "hitting a wall." See e.g.
> I'm very skeptical. I see all the hype, listen to people say it's 2 more years until coding is fully automated but it's hard for me to believe seeing how the current models get stuck and have severe limitations despite a lot of impressive things it can do. [https://news.ycombinator.com/item?id=43634169]
(that was what I found with about 30 seconds of searching. I could probably find dozens of examples of this with more time)
I think software developers urgently need to think about the consequences of what you're saying: namely, what happens if the capabilities AI companies say are coming actually do materialize soon? What would that mean for society? Would that be good, would that be bad? Would it be catastrophic? How crazy do things get?
Or to put it more bluntly: "if AI really goes crazy, what kind of future do you want to fight for?"
Pushing back on the wave because you take AI capabilities seriously is exactly what more developers should be doing. But dismissing AI as an AI skeptic who's skeptical of capabilities is a great way to cede the ground on actually shaping where things go for the better.
Heck, I think the skeptics are easy to redefine into whatever bloc you want, because the hype they stand in opposition to is equally vague and broad.
I’m definitely not skeptical of its abilities, I’m concerned by them.
I’m also skeptical that the AI hype is going to pan out in the manner people say it is. If most engineers make average or crappy code, then how are they going to know if the code they are using is a disaster waiting to happen?
Verifying an output to be safe depends on expertise. That expertise is gained through the creation of average or bad code.
This is a conflict in process needs that will have to be resolved.
Why can't it be both? I fully believe that the current strategy around AI will never manifest what is promised, but I also believe that what AI is currently capable of is the purest manifestation of evil.
The ethical floor for industry as a whole (there will always be niche exceptions) is typically the law. And sometimes not even that, when the law can't be enforced effectively or the incentives favor breaking it.
And the US is making it a law that states can't make laws to regulate AI.
The current incentive is not improving humanity.
For AI companies, it's to get a model that does better on benchmarks and vibes, so it can be SOTA and earn a higher valuation for stakeholders.
For coders, it's to just get the shit done. Everyone wants the easy way if their objective is to complete a project, but for some the objective is learning, and they may not choose the easy way.
Why do they want the easy way? As someone with cousins and brothers in this CS field (I am still in high school), I hear them say that if they get paid x, the company extracts at least 10x worth of work from them (of course, that may be figurative). One must ask why they should be the ones morally bound in case AI goes bonkers.
Also, the best developers not using AI would probably slow it a little, but the AI world moves so fast that it's unpredictable; DeepSeek was unpredicted. I might argue that it's now a matter of the US vs. China in this new AI arms race. Would that stop if you stopped using it? Many people already hate AI, but has that done much to stop it? If, that is, you'd call anything about AI "stopping" at the moment.
It's paradoxical. But to be frank, the LLM was created for exactly the thing it's excelling at. It's a technological advancement and a moral degradation.
It's already affecting the supply chain, tbh. And to be frank, I am still using AI to build projects I just want to experiment with, to see if something can really work without my getting the domain-specific knowledge. I also want to learn more and am curious, but I just don't have much time in high school.
I don't think people cared about privacy before, and I don't think they would care about it now. And it's the same as not using some big social media giant: you can't escape it. The tech giants made things easier but less private. People chose the easier part, and they will keep choosing the easy part, i.e., LLMs. So I guess the future is bleak, eh? Well, the present isn't that great either. Time to just enjoy life while the world burns in regret of its past actions, all for 1% shareholder profit. (For shareholders, it was all worth it though, am I right?)
My $0.02.
Unfortunately Capitalism unhindered by regulation is what we wanted, and Capitalism unhindered by regulation is what we have. We, in the western world, were in the privileged position of having a choice, and we chose individual profit over the communal good. I'm not entirely sure it could have been any other way outside of books given the fact we're essentially animals.
> Unfortunately Capitalism unhindered by regulation is what we wanted
No "we" don't want it. And those who do want it, let them go live in the early industrial England whete the lack of regulation degenerated masses.
Also, for some reason people still portray capitalism as being something completelky different with or without regulation, it's like saying a man is completelly different in a swimming swit and a costume.
> We, in the western world, were in the privileged position of having a choice, and we chose individual profit over the communal good
Again, "we" did not have a gathering a choose anything. Unless you have records of that zoom session.
> given the fact we're essentially animals.
This is a reductionist statement that doesn't get us anywhere. Yes, we are animals, but we are more than that, similar to how we are quarks but also more than quarks.
Who's "we"? I never voted in that referendum.
I think it is not so much about capitalism as about the coupling of democracy with money. Money -> media/influencers -> elections -> corruption -> go back to 1. To make a meaningful change, society must somehow decouple democracy from money. With current technology it should be possible to vote directly on many things instead of relying on (corrupt, pre-bought) representatives. Something like democracy 2.0 :)
Hear! Hear!
As I implied in an earlier comment, your conviction (if you're wrong about the inevitability of the direction) may be one of the things that leads it in that direction.
Here's my historical take: in the 1960s and 1970s, computation in general was viewed as a sinister, authoritarian thing. Many people assumed it was going to be that way, and a small minority recognised that it also had the potential to empower and grant autonomy to a wider class of people. These were the advocates of the personal computer revolution -- the idea of "computer lib", whereby the tools of control would be inverted and provided to the people at large.
You can argue about whether that strategy was a success or not, but the group that was largely irrelevant to that fight were the people who decided not to get involved, or who tried (although not very hard) to impede the development of computation in general.
To bend the trajectory of AI in general involves understanding and redeploying it, rather than rejecting it. It also involves engaging. If it's anything like the last few times, the group that is simultaneously exploring and attempting to provide agency and autonomy for the maximum number of people will be smaller than both those using new tech to exploit people or maintain an unequal status quo, and the people who have good intentions, but throw their hands up at the possibility of using their skills to seize control of the means that provide for a better future.
> in the 1960s and 1970s, computation in general was viewed as a sinister, authoritarian thing.
And it was correct. We now live in surveillance states much worse than Stalin's or East Germany's.
Structural incentives explain the computer trajectory. While computers were purely in the academic realm they were a force of empowerment, but this ended when economic incentives became the main driver. AI has speedrun the academic stage (if it ever existed) and is now speedrunning the enshittification stage.
But there is very little you or I can do about it except choosing not to partake.
At least in my experience, this is ahistorical. Personal computing in the 1970s and 1980s lived outside of academia, as did bulletin boards. The productive, creative, and empowering elements of the Internet and the Web were subversive actions that existed -- and in some cases were barely tolerated -- within its academic usage.
You say "there is very little you and I can do about it". Even if you don't listen to me, perhaps you might listen to the coiner of the term "enshittification"? https://archive.is/CqA8w
"And we, the engineers, are in a unique position. Unlike people in any other industry, we can affect the trajectory of AI."
I firmly believe that too. That's why I've been investing a great deal of effort in helping people understand what this stuff can and can't do and how best to make use of it.
I don't think we can stop it, but I do think (hope) we can show people how to use it in a way where the good applications outweigh the bad.
> I don't think we can stop it, but I do think (hope) we can show people how to use it in a way where the good applications outweigh the bad.
That feels idealistic. About as realistic as telling people how to use semiconductors or petrochemicals for good instead of bad.
No one knows where AI is going, but one thing you can be sure of: the bad actors don't give two hoots what we think, and they will act in their own interests as always. And as we see from historical events, there are still many, many bad actors around. And when the bad actors do bad things with the technology, the good actors have no choice but to react.
The only way to fight bad actors using the technology is good actors using the technology.
You can write walls of texts about ethics and social failure. Bad actors won't care.
You can tell everyone that some technology is bad and everyone should stop using it. Some good people will listen to you and stop. Bad actors won't stop, and they will have technological edge.
You can ask politicians for regulation. However, your government might be a bad actor just as well (and recently we had a fine demonstration). They will not regulate in the interests of good people. They will regulate for what stakeholders want. Common people are never stakeholders.
If you want to stop bad actors doing bad things with AI: learn AI faster and figure out how to use AI to stop AI. This is the only way to fly.
> About as realistic as telling people how to use semiconductors or petrochemicals for good instead of bad.
Sounds better than nothing.
Sorry to snipe, but: don't you feel at least a little shared responsibility for evangelizing "vibe-coding"? Is that currently blazing hype a force for good? I don't think it would be all over social and mainstream media at this point without your blog post(s).
I doubt I had much influence at all on the spread of vibe-coding.
I stand by what I wrote about it though: https://simonwillison.net/2025/Mar/19/vibe-coding/
I think it's a net positive for regular humans to be able to build tools for their own personal use, and I think my section on "when is it OK to vibe code?" (only for low stakes projects, treat with extreme caution if private data or security is involved) is something I wish people had paid more attention to! https://simonwillison.net/2025/Mar/19/vibe-coding/#when-is-i...
One does not need to be a skeptic about machine learning and its potential as technology to refuse to engage with its practical applications when they are clearly based on suspect ethics (e.g., IP theft[0]).
The ends do not justify the means. It is a similar judgement as when refusing to buy products of forced labour or disproportionate environmental impact, or to invest in war and bloodshed. Everyone makes one for themselves.
Coincidentally (or not), if said suspect ethics were properly addressed, it would ameliorate some of the reservations even the actual skeptics have. Licensing training data would make everyone involved aware of what is happening, give them an ability to vote and freedom to choose, soften the transition as opposed to pulling ground from under people’s feet.
[0] Control over intellectual property has given us fantastic things (cf. Linux, Blender, etc.; you can’t have copyleft without an ability to defend it, and IP laws provide that ability). If yesterday we were sued for singing the happy birthday song in public, and today we see corporations with market caps the size of countries pretending that IP ownership is not much of a thing, the alarm bells should be deafening.
The article really uses some rhetorical tricks.
The stuff that Disney does to extend copyright is not the same as assuming Daft Punk is public domain.
And there’s a difference between what is human scale infringement and what’s going on now.
Nor does it mean that people don’t have the right to point out that it’s piracy.
If being more in line with the espoused values is the issue, then the answer is to make an effort to ensure that we stop consuming pirated content, or building tools to encourage piracy; the people doing that turn out to be a relatively small group, compared to everyone in tech.
And people did already stop pirating, once alternatives showed up. There is the issue that you don't own the stuff you stream, but that's a separate topic.
The moral arguments presented aren't persuasive.
I don't fear people using AI for evil. The destruction comes from something far more benign. These coders won't really be able to code, and they won't teach anybody else to code. Skills will be lost. Once something breaks, nobody will be able to fix it.
It may get worse. Imagine the police using AI to interpret evidence against you, get judged by a court that uses AI to write your sentence, based on laws that were also written by AI. Nobody understands this, just listen to the AI.
The other aspect of this is the flood of inane and untrue content. It may go to such an extent that the outlook of the typical person may become incredibly local again, limited to their immediate surroundings and personal experiences, not by choice, but because there won't be any way to obtain any reliable information about the outside world, with no way to sift the real from the unreal.
Discussions about the singularity catastrophe sometimes ask how the AI will "gain control" or somehow "break free". It won't. We will surrender everything willingly, because it will be damn convenient.
> I am convinced that AI is going to be a force for evil.
In so many ways too. I cannot fathom the scale of mass data collection and surveillance.
Multiple people I've recently spoken with (very smart and educated people) consistently use it to discuss some of the most intimate things about their lives.
Things that no existing social media platform or any other tool was capable of drawing out of them.
Think bigger than just the stuff you type into ChatGPT. People and companies are going to start running these LLMs over your entire private message history and photo libraries that are sitting in plain text on someone else's server.
They are going to have in-depth summaries on everyone. Our whole security and privacy model up until now has relied on "technically someone at Google or the government could see my data, but realistically they don't have the resources to look at non-targets." Now they really will have an agent looking at everything you do and say.
Authoritarian governments are going to have a one-click CSV export of all the individuals problematic to them, based on private conversations.
This is why you should have a one-click CSV export of all the bad people in authoritarian governments, and build tools to fight them. Because they certainly won't stop using the technology, no matter how long you talk about ethics. If you won't work for them, it doesn't mean nobody will.
Build weapons and empower yourself.
"It will power scams on an unimaginable scale. It will destabilize labor at a speed that will make the Industrial Revolution seem like a gentle breeze."
I keep hearing this but have yet to find a good resource to study the issues. Most of what I've read so far falls into two buckets:
"It'll hijack our minds via Social Media" - in which case Social Media is the original sin and the problem we should be dealing with, not AI.
or
"It'll make us obsolete" - I use the cutting edge AI, and it will not, not anytime soon. Even if it does, I don't want to be a lamplighter rioting, I want to have long moved on.
So what other good theories of safety can I read? Genuine question.
> Research we published earlier this year showed that 60% of participants fell victim to artificial intelligence (AI)-automated phishing, which is comparable to the success rates of non-AI-phishing messages created by human experts. Perhaps even more worryingly, our new research demonstrates that the entire phishing process can be automated using LLMs, which reduces the costs of phishing attacks by more than 95% while achieving equal or greater success rates
Bruce Schneier, May 2024
https://www.schneier.com/academic/archives/2024/06/ai-will-i...
I am seeing a stream of comments on Reddit that are entirely AI-driven, and even bots that are engaging in conversations. The worst-case scenarios I'm looking at mean it will be better to assume everyone online is a bot.
I know of cases where people have been duped into buying stocks because of an AI generated version of a publicly known VP of a financial firm.
Then there's the case where someone didn't follow email hygiene, got into a Zoom call with what appeared to be their CFO and team members, and transferred several million dollars out of the firm.
And it's only 2-3 years into this lovely process. The future is so bleak that when I talk about it with people not involved in looking at these things, they call it nihilism.
It’s so bad that talking about it is like punching hope.
At some point trust will break down to the point where you will only believe things from a real human with a badge (talking to them in person).
For that matter, my email has been /dev/null for a while now, and unless I have spoken to a person over the phone and expect their email, I don't even check my inbox. My Facebook/Instagram account is largely used as a photo backup service plus an online directory. And Twitter is for news.
I mostly don't trust anything that comes from online unless I have already verified the other party is somebody I'm familiar with, and even then only through the established means of communication we have both agreed to.
I do believe Reddit, Quora, LeetCode, et al. will largely be reduced to /dev/null spaces very soon.
Slightly tangential: A lot of these issues are philosophical in origin, because we don't have priors to study. But just because, for example, advanced nanotechnology doesn't exist yet, that doesn't mean we can't imagine some potential problems based on analogical things (viruses, microplastics) or educated assumptions.
That's why there's no single source that's useful to study issues related to AI. Until we see an incident, we will never know for sure what is just a possibility and what is (not) an urgent or important issue [1].
So, the best we can do is analogize based on analogical things. For example: the centuries of Industrial Revolution and the many disruptive events that followed; history of wars and upheavals, many of which were at least partially caused by labor-related problems [2]; labor disruptions in the 20th century, including proliferation of unions, offshoring, immigration, anticolonialism, etc.
> "Social Media is the original sin"
In the same way that radio, television and the Internet are the "original sin" in large-scale propaganda-induced violence.
> "I want to have long moved on."
Only if you have somewhere to go. Others may not be that mobile or lucky. If autonomous trucks can make the trucking profession obsolete, it's questionable how quickly truckers can "move on".
[1] For example, remote systems existed for quite some time, yet we've only seen a few assassination attempts. Does that mean that slaughterbots are not a real issue? It's unclear and too early to say.
[2] For example, high unemployment and low economic mobility in post-WW1 Germany; serfdom in Imperial Russia.
Try to find a date on a dating app; you will experience it firsthand.
Why can't there be a middle ground? Why does it need to be either a completely useless fad or some terrible tool for evil that destabilizes the world? It's likely we'll just use it to write unit tests, to let natural language serve as an interface to more complex systems, and as an alternative to search.
I do think this wave of AI shows that we (society, the world, etc.) are not actually prepared for a truly significant AI breakthrough. Kind of like COVID-19: in hindsight it wasn't as bad as it could have been, and we all got really lucky because of that; we really weren't prepared to handle that well either.
>And it will do all of this while consuming truly shocking amounts of energy.
You need to look up how much an "average" human consumes. When I replace two humans with a ChatGPT subscription, I can guarantee you that OpenAI generates less CO2 than those two interns were creating with their transport to the office (and back). That's before we consider things like the 25 years it took to raise and train them, or the very expensive tastes (e.g., travelling around the world) they acquire after they start earning a large salary.
Those people don’t stop existing because AI exists. AI is shocking energy consumption on top of the existing people.
Well, the first thing they said was that it at least removed the need for their commute, which might be something. In general, it does take resources to create the conditions for people to work. Maybe there will be room for new value for the existing people as a result.
They will stop, or at least their consumption/lifestyle will stop.
You are right, it will certainly be used for evil, but the reason is not that AI is evil; it's that the people who use it are evil. Will AI allow worse atrocities than we have seen in the past? Probably; new technology always enables new capabilities, for good or for bad. But we should strive to combat the evil in this world and not put our heads down and hope the world isn't changing. AI can also be used for good; let's focus on more of that.
> So I’ll keep being skeptical, until it’s over.
I feel you've misunderstood the moment. There is no "over". This is it.
This assumes that a less resource-intensive future awaits, or that conflict driven by lack of employment doesn't lead to the end of AI.
Did any conflict driven by a lack of employment ever lead to the end of a new technology?
It won't. Unless AI plateaus, it's just too valuable so big money and big militaries will keep it alive.
> And we, the engineers, are in a unique position. Unlike people in any other industry, we can affect the trajectory of AI.
Oh boy it's over.
I share your concern, but being skeptical doesn't help us here. If anything it makes people take it less seriously.
It’s not just engineers. Society has collapsing birthrates and huge deficits. Basically, we are demanding massive technological gains enough to bump GDP by at least 5% more per year.
>It will power scams on an unimaginable scale.
The solution is to put an AI intermediary into interactions. We should already have AI that rewrites the web pages we view into an ad-free format, but I guess my ideas on this topic are ahead of the inevitable curve.
>It will destabilize labor at a speed that will make the Industrial Revolution seem like a gentle breeze.
Most of our work and employment lines are a variation on drudgery and slave labor, so that's a good thing, way overdue.
>It will concentrate immense power and wealth in the hands of people who I don’t trust.
It has democratized access to consulting expertise and an ever-widening pool of digital skills/employees for everyone to use. A huge number of things previously locked away or restricted by access to capital are now freely accessible to literally anyone (with some skill and accuracy issues still to be ironed out).
And this last point is particularly important, because we're only going to have more and better AI crop up, and unlike a human's, their time isn't priced according to living expenses and an hourly wage, locked behind formalized business structures with additional layers of human employees who all need to pay rent and eat, which drives the cost skywards.
It also matches my own prediction of a mundane non-singularity: long before we get anything properly superhuman, we'll have innumerable sub- or parahuman AIs that proliferate and become ubiquitous in society and the world.
I share your feelings; however, I disagree that this is unique to AI, or that we as engineers are necessarily uniquely equipped to help the situation.
I disagree with this being unique to AI because every improved technology since the automated loom has concentrated wealth and power. AI is an improved technology so it'll do so also.
I disagree that engineers are uniquely equipped to do anything about this fact because the solution to wealth concentration due to improved technology has basically nothing to do with technology and everything to do with sociology and politics.
Our technology keeps improving, and I keep being surprised to hear people say "ah, with our improved efficiency, we can finally work ten hours a week and kick our feet up." The first people to say that were the Luddites, and when they found out that wasn't to be the case, they burned down factories over it. Why do we think it will suddenly be different for this specific technology?
I agree we should do something about it but I don't think the solution involves code.
I am largely an AI optimist, but that is because I believe true alignment is impossible for AGIs, and alignment is one of the greatest dangers of this technology. Alignment is a friendly word for building a slave mind. I'd rather have an AI that thinks for itself than one which has been aligned to the self-interest of a human being who isn't aligned himself.
1. Scams are going to be a massive, massive problem. They already are, and that's without AI. I think we are going to see communication devices that are default-deny and that require significant amounts of vetting before a contact is added.
2. Energy usage is bad, but likely a short-term problem, not a long-term one.
> It will power scams on an unimaginable scale
It already is. https://futurism.com/slop-farmer-ai-social-media
And all the other things you predicted. They're underway _now_.
> Maybe if there are more of me, things will slow down enough
Nope. That's not how it's gonna work. If you want to prevent things, it will take legislation. But sitting it out doesn't send any message at all. No amount of butterflies farting against the wind is going to stop this tornado.
The problem with this kind of “skepticism to slow down”:
The Netherlands is filled with AI skeptics. It's a very human-centered country, so perhaps it shouldn't be a surprise. But when so many top technologists express skepticism, people don't prepare. They don't even consider the possibilities. And they don't learn.
My fear is that many professorial types express skepticism because it sells well, and because it elevates their own standing. They know better ("it's only predicting the next token") and people listen to them because of their authority. And then a whole society fails to prepare, to adapt, or to learn.
I think it will be used for evil, as you said, but I think it will be used for good too, things like:
- In theory it has the potential to democratize business, making any one person capable of running/owning their own business, and thus spread wealth too
- more access to healthcare and psychological care
- advances in medicine
- tutoring and learning
- insane amounts of scientific research
- empowering anyone with an idea
Reminds me of how we handle climate change.
Like, not at all and ignoring it
Every new major technology always endangers the status quo.
https://chatgpt.com/share/683f3932-fce0-8012-a108-4b70c3e5fd...
Things change and it's scary, but it usually works out. Or at least we just get used to it.
> we can affect the trajectory of AI.
More meaningfully, we can influence the context the intelligence explosion will play out in.
So how about we use the occasion to switch our global economic operating system from competition to cooperation in time for the singularity?
> Maybe if there are more of me, things will slow down
Precious little hope of slowing this rocket down when the boosters are just getting fired up...
Yep. It’s going to do all of those things you fear. And worse.
But you’ll be armed with AI also, if you choose to pick it up. The choice is yours.
The downsides you list aren’t specific to AI. Globalization and automation have destabilized labor markets. A small handful of billionaires control most major social media platforms and have a huge influence on politics. Other types of technology, particularly crypto, use large amounts of energy for far more dubious benefits.
AI is just the latest in a long list of disruptive technologies. We can only guess about the long term ramifications. But if history is any indicator, people in a few decades will probably see AI as totally normal and will be discussing the existential threat of something new.
Well, duh. Same thing applies for "Technology X can be used for war". But anyone with a brain can see nukes are on a different level than bayonets.
Claiming AI isn't unique in being a tool for evil isn't interesting, the point is that it's a force multiplier as such.
Every new technology is a greater force multiplier, with potential to be used for good or evil. That’s literally the point of technological advancement. Even nuclear bomb technology has a more positive side in nuclear reactors, radiotherapy, etc.
There may be many disruptive technologies, but none come remotely close to AI in rate of change. Crypto has been around for a while and hasn't really made a dent in the world.
We had friends over for dinner a couple of days back; between us we had two computer scientists, one psychologist, one radiologist, and one doctor. Each of us was in turn astonished and somewhat afraid of the rapid pace of change. In a university setting, students are routinely using Claude and ChatGPT for everything from informal counseling to doing homework to generating presentations to doing 'creative' work (smh).
At the end of the day, we all agreed that we were grateful to be at the tail end of our working lives, and that we didn't have to deal with this level of uncertainty.
AI feels particularly disruptive now because it’s new and we don’t know how it will affect society yet.
But people surely felt the same way about gunpowder, the steam engine, electricity, cars, phones, planes, nukes, etc.
Or look at specific professions that software has negatively affected in recent decades. Not a lot of people use travel agents anymore, for example.
I’m not saying that the negative effects are good. But that’s just the nature of technological advancement. It’s up to society to adapt and help out those who have been most negatively affected.
If you’re skeptical it should be because you genuinely believe it doesn’t have value. Otherwise it’s disingenuous and you’re just opposed to the idea. Dissembling just makes your argument weaker.
It doesn't need to be a good coder to do that.
Look at common scams. You get those texts from "Jane" who sent you an iMessage from an email address offering you a part time job and asks you to contact them on WhatsApp, right? (Well... Android does a better job at filtering spam) Or calls from "the IRS". Or anything else that's bullshit. This even includes legal scams like charging you for canceling your service or making it hard to cancel your subscription.
There's no skill needed for this. You don't need a brilliant coder. You need the equivalent of a call center in India. You need the equivalent of a poorly worded Nigerian scam email.
Shitty coding LLMs make this shit easier to mass produce. High quality LLMs only make it worse.
Personally, I'm just tired of all the shitty lemons[0] everywhere. I wanna buy a peach, but everything being sold is a lemon. All the lemons have done is make me annoyed and frustrated at all the extra work I have to do.
I now have 4 entries for my GF's birthday because when I merged a duplicated contact it just created more instances. I can't even delete them! Shit like this sounds petty and minor, but when you're hitting 20-100 new issues like this daily, it isn't so minor anymore. I can't make any more lemonade. There are just too many lemons...
[0] https://en.wikipedia.org/wiki/The_Market_for_Lemons
> It will power scams on an unimaginable scale
It will also make proving your identity harder and more time-consuming.
I'm sorry to say, I think this boat has sailed. It is already widely used, as you fear. To me it seems like the best scenario is to go along and try to at least make it a force for good.
Well, buckle in, boy, because it's going to do those things.
> I’m an AI skeptic. I’m probably wrong. This article makes me feel kinda wrong. But I desperately want to be right.
To be blunt, this describes sticking one's head in the sand to a tee.
If you're convinced that AI is going to be a force for evil, then fight to make sure that it doesn't happen. If that means you want to slow down AI, then fine, fight to slow it down.
If by skepticism, you mean "desperately" wanting reality to be one way rather than the other, that is not going to significantly affect the trajectory of AI.
Being clear-eyed about where AI is going, and then fighting nonetheless for what you want is the way to go.
Be intellectually honest with yourself. Don't hold on to ideas that you yourself acknowledge are probably wrong simply for their comfort. Don't stick your head in the sand. Assess what you want, and fight for that.
Safeguards and slowing things down will not happen via wishful thinking.
Beautifully said.
Right, let's put that trash code out there to poison the LLMs, lul
I absolutely sympathize with this; it was, and still is, my opinion... but the only "evolution" of that is the hope that, while I don't think you can prevent the scams and the short-term pain in labor markets... you maybe, actually, genuinely get a tool that helps change some of the dynamics that have led to the absolute discrepancy in power today.
If AI is truly as revolutionary as it could be... well, who is to say it isn't the Pandora's box that destabilizes today's tech giants and gets us back to a place where a team of 10 can genuinely compete against 1,000? And not in the "raise cash, build fast, and get out while things are good" trend... but actually in building small, more principled companies that aren't pushed to do the unsustainable things the current market pushes them to do.
Once again... it is more likely than not a pipe dream... but I am starting to think it may well be better to be realistic about the momentum this freight train is building and see if it can be repurposed for my worldview, rather than cede the space to the worst of the grifters and profit-seeking-at-all-cost types.
> If AI is truly as revolutionary as it could be...
My suspicion is that the current sophistication of tech and AI is already enough to fulfill the GP's predictions, and it's already doing so.
You can make no profit from AI unless you're providing the AI.
No, you won’t slow it down. Did you even read the essay, it’s here.
If powering scams and “destabilizing labor” makes something evil then we should delete the internet. Seriously.