Comment by dwohnitmok

6 days ago

> They never mention the "skeptics" that are considered that because they aren't skeptical of what AI is and could be capable.

This is because most people on HN who say they are skeptical about AI mean skeptical of AI capabilities. This is usually paired with statements that AI is "hitting a wall." See e.g.

> I'm very skeptical. I see all the hype, listen to people say it's 2 more years until coding is fully automated but it's hard for me to believe seeing how the current models get stuck and have severe limitations despite a lot of impressive things it can do. [https://news.ycombinator.com/item?id=43634169]

(that was what I found with about 30 seconds of searching. I could probably find dozens of examples of this with more time)

I think software developers urgently need to think about the consequences of what you're saying, namely: what happens if the capabilities that AI companies say are coming actually do materialize soon? What would that mean for society? Would it be good, or bad? Would it be catastrophic? How crazy do things get?

Or, to put it more bluntly: "if AI really goes crazy, what kind of future do you want to fight for?"

Pushing back on the wave because you take AI capabilities seriously is exactly what more developers should be doing. But dismissing AI as a skeptic who doubts its capabilities is a great way to cede the ground on actually shaping where things go for the better.

Heck, I think the skeptics are easy to redefine into whatever bloc you want, because the hype they're opposing is equally vague and broad.

I’m definitely not skeptical of its abilities; I’m concerned by them.

I’m also skeptical that the AI hype is going to pan out in the manner people say it will. If most engineers write average or crappy code, then how are they going to know whether the code they're using is a disaster waiting to happen?

Verifying that an output is safe depends on expertise, and that expertise is gained by writing plenty of average or bad code yourself.

That conflict between how expertise is built and how code is now produced will have to be resolved.

Why can't it be both? I fully believe the current strategy around AI will never deliver what is promised, but I also believe that what AI is currently capable of is the purest manifestation of evil.

  • Am I a psychopath? What is evil about the current iteration of language models? It seems like some people take this as axiomatic lately. I’m truly trying to understand.

    • Even if current models never reach AGI-level capabilities, they are already advanced enough to replace many jobs. They may not be able to replace senior developers, but they can take over the roles of junior developers or interns. They might not replace surgeons, but they can handle basic diagnostic tasks, and soon possibly even the work of GPs. Paralegals, technical writers, and many similar roles are at risk.

      These LLMs may not be inherently evil, but their impact on society could be destabilising.
    • The diffusion-based art generators seem pretty evil. They're trained (without permission) on artists' works, they devalue those works (letting prompt jockeys LARP as artists), and they can then be deployed to compete directly with those artists and threaten their livelihoods.

      These systems (LLMs, diffusion models) yield imitative results just powerful enough to eventually threaten the jobs of most non-manual laborers, while not being powerful enough (in terms of the capability to reason, predict, and simulate) to solve the hard problems AI was promised to solve, like accelerating cancer research.

      To put it another way: in their present form, even with significant improvement, how many years of life expectancy can we expect these systems to add? My guess is zero. But I can already see a huge chunk of graphic designers, artists, actors, programmers, and other office workers being made redundant.