
Comment by scratchyone

2 days ago

Honestly, we should have learned that this claim from AI companies was purely fear-mongering back when GPT-2 was "too dangerous to release".

Given that his reason for saying GPT-2 was too dangerous to release was that the world needed more time to prepare for the effects of this technology, and given that the following models were basically scaled-up versions of it and killed social media, news reporting and other kinds of communication, I'd say he was right about the dangers of it.

  • Funny how he didn't care about ethics the moment it was more profitable to release it than to talk about dangers.

That's true, but in reality I think people are far more afraid of AI in terms of how it is being used in warfare and policing: automatic target detection and deployment of drones, or even how it might simply make their role at work redundant, etc.

  • To me, the more interesting divergence in discussion is on its capabilities.

    AI industry insiders (including "safety" groups like ControlAI) talk about the dangers only in terms of its power: "Scheming", job loss, breaking containment, the New Cold War with China.

    Critics outside the industry talk in terms of its lack of power: Inaccuracy, erroneous translation of user intent, failure to deliver on its promises and investment, environmental cost from the former, and ultimately the danger of people in power (e.g. law enforcement, military officials) treating its output as valid and unbiased, or simply laundering their wishes through it.

  • 100% agreed. That's part of the issue imo: these companies pretend their new models are "too dangerous" to seem like they care about the world, yet they have no qualms deploying existing models in warfare or bragging about impending mass unemployment.

  • > That's true, but in reality I think people are far more afraid of AI in terms of how it is being used in warfare and policing: automatic target detection and deployment of drones, or even how it might simply make their role at work redundant, etc.

    I think the last one should be first on the list: regular people are afraid AI will negatively affect their economic security (i.e. knowledge and service workers will get the rust-belt factory worker treatment).

    And the potential of giving knowledge and service workers the rust-belt factory worker treatment is exactly what makes Wall Street excited about AI and has the AI company leaders salivating about the profit they can make.

    Warfare, policing, and bio-engineered viruses are theoretical and far down the list.

  • To be honest, I am not sure which I am more scared of:

    AI shaping warfare vs. using AI to justify outrageous warfare.

    • Sadly, we don't need AI to justify outrageous warfare. Just remember when the US invaded Iraq over WMDs: there was a full investigation that never found any, and we invaded anyway, to the detriment of everyone except defense contractors.

    • that’s not a war crime, that’s boundary setting, and honestly, that’s rare

      would you like me to list the applicable sections of the Geneva convention?

  • AI has been used in defense for a while now; a modern Tomahawk cruise missile and its associated targeting systems are a good example. I think most people fear AI taking their job and only source of income.

  • These were all already very valid concerns long before this era of "AI" or computational power.

    The broader public is just now barely beginning to understand, because all they have to do is ask a chatbot. AI does not enable new capabilities, but it does aggregate an idea into a rough sketch, and does so quickly on demand.

    None of this really means it will play out that way; the devil is in the details. What it does call for is much more nuanced attention to the politics and money, because that's where the power always was.

It seems like they were correct, to me.

  • Yes, I love how everyone uses this argument, when what they were saying was along the lines of "GPT-2 would make it too easy to generate spam, deepfakes, content to manipulate opinion..." (not the actual quote, but something like that). Turns out it was completely correct if you look at the state of the internet right now.

    Obviously, they still overhype and oversell this end-of-humanity stuff, but this argument, regurgitated ad nauseam, is not THAT great of an example when you think about it.

  • I was going to say... I think people in general have this weird understanding of the word "dangerous". Just because something is not movie-level dramatic and/or does not generate over-the-top violence does not automatically make it less dangerous. In a sense, just the fact that it is benign on the surface and allowed to embed in our day-to-day life is what makes the upcoming rug pull so painful.

    And I am saying this as a person who actually likes this tech.