
Comment by honestthrow1

5 days ago

I'm in a nearly identical boat to yours.

I'm tired. I'm tired of developers/techies not realizing their active role in creating a net negative in the world, and acting like they are powerless and blameless for it. My past self is not innocent in this, but I'm actively trying to make progress by making a concerted effort to challenge people to think about it whenever I can.

Countless times now, the tech industry (and developers specifically) has taken on an interesting technical challenge that quickly requires some sort of ethical or moral tradeoff, and that tradeoff ends up absolutely shaping the fabric of society for the worse.

Creating powerful search engines to feed information to all who want it; but we'll need to violate your privacy in an irreversible way to feed the engine. Connecting the world with social media, while stealing your information and mass-exposing you to malicious manipulation. Hard problems to solve without the ethical tradeoff? Sure. But every other technical challenge was also hard and got solved; why can't we also focus on the social problems?

I'm tired of the word "progress" being used without a qualifier of what kind of progress and at the cost of what. Technical progress at the cost of societal regression is still seen as progress. And I'm just tired of it.

Every time "AI skeptics" are brought up as a topic, the focus is entirely on the technical challenges. They never mention the "skeptics" who are considered that because they aren't skeptical of what AI is and could be capable of. I'm skeptical of whether the tradeoffs being made will benefit society overall, or just a few. Because at literally every previous turn for as long as I've been alive, the impact has been a net negative for the total population, without developers questioning their role in it.

I don't have an answer for how to solve this. I don't have an answer for how to stop the incoming shift from destroying countless lives. But I'd like developers to start being honest about their active role in not just accepting this new status quo, but proactively pushing us in a regressive direction; and about our power to push back on this coming wave.

+65536

But, tech was not always a net negative.

As far as I can tell, the sharpest negative inflection came around the launch of the iPhone. Facebook was kind of fine when it was limited to universities and they weren't yet doing mobile apps, algorithmic feeds, or extensive A/B testing.

It seems "optimizing engagement" was the grave original sin...

Maybe some engineers should go back to their childhoods, watch some Outer Limits, and pay attention to the missed lessons...

Our lives are not our own. From womb to tomb, we are bound to others. Past and present. And by each crime and every kindness, we birth our future.

  • The first digital privacy laws following a personal data scandal were voted in… 1978 (France)

    Tech has always been a tool for control, power and accumulation of capital.

    You counterbalance it with social and civic laws (i.e. counter-power).

  • > As far as I can tell, the sharpest negative inflection came around the launch of the iPhone

    Some would say "The Industrial Revolution and its consequences have been a disaster for the human race."

So the problem is society’s lack of any coherent ethical framework that says building powerful disruptive technology shall be done like this. If you’re tired, then go fix that problem. Find the answer. Because I’m *exhausted* hearing about how everybody is supposed to risk putting food on their table by telling the big boss they won’t build the feature, because it’s unclear whether it might be a net negative for society under one contorted version of an angsty ethical framework a small minority of people have adopted ad hoc on that orange message board… and that _scares_ them.

The luddites get a bad rap these days, but we need more of them.

  • We need engineers to be politicians, not cable news talking heads

    • And what makes you believe that engineers have more morals than cable news folks have? Does not match my experience.

      Most of them got into tech because it's fun and because it pays royally. Morals have little to do with it for lots of folks.


  • If you want to learn more about modern luddites, check out the "This Machine Kills" podcast and, to some extent, Ed Zitron's and Cory Doctorow's blogs; they might be a good place to start.

    Political-economic analysis of technology is not a super popular thing in mainstream media, but disabling, sabotaging, or vandalizing anti-human tech might be.

> net negative to the total population, without developer questioning their role in it.

I am tired of people blaming bottom developers, while CEOs get millions for "the burden of responsibility".

  • I'm not blindly blaming the bottom developer. I've played my role in past waves as well as many other developers. I'm not a CEO, so I don't know how to communicate this same message to a CEO. But as a developer, I know I've been an ignorant participant in the past. Willfully or not. And I can change my role in the next coming wave.

    We developers are not blameless. If we accept that we are playing a role, then we can be proactive in preventing this and in influencing the direction things go. CEOs need developers to achieve what they want.

    I'm not saying it's easy. I won't even hold it against folks who decide to go in a different direction than mine. But I at least hope we can be open about the impact we each have, and about the fact that we are not powerless here.

  • Yes, CEOs are to blame, but blaming them isn't gonna do anything. They won't change. Who has the motivation and capacity to change things? The working people. Who isn't currently doing it? The working people. So it seems appropriate for me to raise this as a problem: the fact that the working people silently go along with all the evil plans CEOs put in place.

There is technology and its related technical advancements, and then there are the business incentives to make money. A lot of progress has indeed been made in NLP and information retrieval, which is helpful in its own ways to speed things up; it can easily be seen as the next level of automation.

Everything else around it is a glamorous party, because everyone's money is riding on it and one needs to appreciate it or risk being deserted by the crowd.

The basis of science is questioning things until you are convinced. People who depend on models too much may end up losing the ability to triangulate information from multiple sources before being convinced of it.

Programming can be complicated above a certain threshold even for humans, so it will be interesting to see how the models perform with that complexity. I am a skeptic, but then again I don't know the future either.

> They never mention the "skeptics" who are considered that because they aren't skeptical of what AI is and could be capable of.

This is because most people on HN who say they are skeptical about AI mean skeptical of AI capabilities. This is usually paired with statements that AI is "hitting a wall." See e.g.

> I'm very skeptical. I see all the hype, listen to people say it's 2 more years until coding is fully automated but it's hard for me to believe seeing how the current models get stuck and have severe limitations despite a lot of impressive things it can do. [https://news.ycombinator.com/item?id=43634169]

(That was what I found with about 30 seconds of searching. I could probably find dozens of examples with more time.)

I think software developers urgently need to think about the consequences of what you're saying, namely: what happens if the capabilities that AI companies say are coming actually do materialize soon? What would that mean for society? Would that be good, would that be bad? Would that be catastrophic? How crazy do things get?

Or to put it more bluntly: "if AI really goes crazy, what kind of future do you want to fight for?"

Pushing back on the wave because you take AI capabilities seriously is exactly what more developers should be doing. But dismissing AI as an AI skeptic who's skeptical of capabilities is a great way to cede the ground on actually shaping where things go for the better.

  • Heck, I think the skeptics are easy to redefine into whatever bloc you want, because the hype they are in opposition to is equally vague and broad.

    I’m definitely not skeptical of its abilities, I’m concerned by them.

    I’m also skeptical that the AI hype is going to pan out in the manner people say it is. If most engineers make average or crappy code, then how are they going to know if the code they are using is a disaster waiting to happen?

    Verifying an output to be safe depends on expertise. That expertise is gained through the creation of average or bad code.

    This is a conflict in process needs that will have to be resolved.

  • Why can't it be both? I fully believe that the current strategy around AI will never manifest what is promised, but I also believe that what AI is currently capable of is the purest manifestation of evil.

    • Am I a psychopath? What is evil about the current iteration of language models? It seems like some people take this as axiomatic lately. I’m truly trying to understand.


The ethical floor for the industry as a whole (there will always be niche exceptions) is typically the law. And sometimes not even that, when the law can't be enforced effectively or the incentives favor breaking it.

The current incentive is not improving humanity.

For AI companies, it's to get a model that does better on benchmarks and vibes, so that it can be SOTA and command a higher valuation for stakeholders.

For coders, they just want to get the shit done. Everyone wants the easy way if their objective is to complete a project; but for some the objective is learning, and they may not choose the easy way.

Why do they want to do it the easy way? Speaking as someone whose cousins and brothers are in this CS field (I am still in high school), they say that if they get x money, the company takes at least 10x in value of work from them (figuratively, of course). One must ask why they should be the ones morally bound in case AI goes bonkers.

Also, the best developers not using AI would probably slow it a little, but the AI world moves fast and unpredictably; DeepSeek was unpredicted. I might argue it's now a matter of the US vs. China in this new AI arms race. Would that stop if you stopped using it? Many people already hate AI, but has that done much to stop it? If you can even call what AI is doing at the moment "stopping."

It's paradoxical. But to be frank, LLMs were created for exactly what they're excelling at. It's a technological advancement and a moral degradation.

It's already affecting the supply chain, tbh. And to be frank, I am still using AI to build projects I just want to experiment with, to see if they can really work without my having the domain-specific knowledge. I also want to learn more and am curious, but I just don't have much time in high school.

I don't think people cared about privacy, and I don't think people would care about it now. And it's the same as not using some big social media giant: you can't escape it. The tech giants made things easier but less private. People chose the easy part, and they would still choose the easy part, i.e. LLMs. So I guess the future is bleak, eh? Well, the present isn't that great either. Time to just enjoy life while the world burns with regret over its past actions for 1% shareholder profit. (For shareholders, it was all worth it though, am I right?)

My $0.02

Unfortunately, capitalism unhindered by regulation is what we wanted, and capitalism unhindered by regulation is what we have. We, in the western world, were in the privileged position of having a choice, and we chose individual profit over the communal good. I'm not entirely sure it could have been any other way outside of books, given the fact that we're essentially animals.

  • > Unfortunately, capitalism unhindered by regulation is what we wanted

    No "we" don't want it. And those who do want it, let them go live in the early industrial England whete the lack of regulation degenerated masses.

    Also, for some reason people still portray capitalism as being something completely different with or without regulation; it's like saying a man is completely different in a swimsuit and in a suit.

    > We, in the western world, were in the privileged position of having a choice, and we chose individual profit over the communal good

    Again, "we" did not have a gathering a choose anything. Unless you have records of that zoom session.

    > given the fact we're essentially animals.

    This is a reductionist statement that doesn't get us anywhere. Yes, we are animals, but we are more than that; similar to how we are quarks but also more than quarks.

    • Yes, "we" had the choice. Now nobody can afford homes and would ruin their life if they actually went ahead to demonstrate against the system for an extended period of time. We still have the choice, but nobody is willing to sacrifice their own wellbeing because its easier to live with the minimum, while we shoot billionaires to space.


  • I think it is not so much about capitalism as about the coupling of democracy with money: money -> media/influencers -> elections -> corruption -> back to step 1. To make a meaningful change, society must somehow decouple democracy from money. With current technology it should be possible to vote directly on many things instead of relying on (corrupt, pre-bought) representatives. Something like democracy 2.0 :)