Google Brain founder says big tech is lying about AI danger

2 years ago (afr.com)

The AFR piece that underlies this article [1] [2] has more detail on Ng's argument:

> [Ng] said that the “bad idea that AI could make us go extinct” was merging with the “bad idea that a good way to make AI safer is to impose burdensome licensing requirements” on the AI industry.

> “There’s a standard regulatory capture playbook that has played out in other industries, and I would hate to see that executed successfully in AI.”

> “Just to be clear, AI has caused harm. Self-driving cars have killed people. In 2010, an automated trading algorithm crashed the stock market. Regulation has a role. But just because regulation could be helpful doesn’t mean we want bad regulation.”

[1]: https://www.afr.com/technology/google-brain-founder-says-big...

[2]: https://web.archive.org/web/20231030062420/https://www.afr.c...

  • > “There’s a standard regulatory capture playbook that has played out in other industries

    But imagine all the money bigco can make by stopping small startups from innovating and competing with them! It's for your own safety. Move along, citizen.

    • Even better if (read: when) China, which has negative damns to give about such concerns, can take charge of the industry that we willingly and expediently relinquish.

      54 replies →

  • Here's what makes it worse imo.

    Imagine someone invents a machine that can give infinite energy.

    Do you:

    a) sell that energy, or b) give the technology to build the machine to everyone?

    Clearly (b) is better for society; (a) just locks up profits.

    • The answer is c) sell that energy and use the resulting funds to root yourself deeply in all other systems and prevent or destroy alternative forms of energy production, thus achieving total market dominance.

      This non-hypothetical already got us global warming.

    • In this case the machine also has negative, as-yet-unknown side effects. We don't give nuclear power to everyone.

    • This analogy of course is close to nuclear energy. I think most people would say that regulation is still broadly aligned with the public interest there, even though the forces of regulatory capture are in play.

    • I read that book. No, you deny your gift to the world and become a recluse while the world slowly spins apart.

      Technically, a solar panel is just such a machine. You'll have to wait a long, long time, but the degradation is slow enough that you can probably use a panel for several human lifetimes at ever-decreasing output. You will probably find it more economical to replace the panel at some point because of the space it occupies and because newer generations of solar panels will do much better in that same space, but there isn't any hard technical reason to discard one after 10, 30, or 100 years. Of course 'infinite' would require the panel to be 'infinitely durable', and at some point it will likely suffer mechanical damage. But that's not a feature of the panel itself.
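
      As a rough sketch of that arithmetic (the ~0.5%/year degradation rate below is an assumption, a commonly cited figure for crystalline silicon, not a spec for any particular panel):

          # Sketch: remaining output of a panel under steady degradation.
          # The 0.5%/year rate is an assumed typical figure, not a spec.
          rate = 0.005
          for years in (10, 30, 100, 300):
              remaining = (1 - rate) ** years
              print(f"after {years:3d} years: {remaining:.0%} of original output")
          # after  10 years: 95% of original output
          # after  30 years: 86% of original output
          # after 100 years: 61% of original output
          # after 300 years: 22% of original output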

  • And I strongly agree with pointing out low-hanging fruit for "good" regulation: strict, clear attribution laws requiring any AI-generated content to be labeled with its source. That's a sooner-the-better, easy-win no-brainer.

    • Why would we do this? And how would this conceivably even be enforced? I can't see this being useful or even well-defined past cartoonishly simple special cases of generation like "artist signatures for modalities where pixels are created."

      Requiring attribution categorically across the vast domain of generative AI...can you please elaborate?

      3 replies →

    • Where is the line drawn? My phone uses math to post-process images. Do those need to be labeled? What about filters placed on photos that do the same thing? What about changing the hue of a color with photoshop to make it pop?

      41 replies →

    • Please define "AI generated content" in a clear and legally enforceable manner. Because I suspect you don't understand basic US constitutional law including the vagueness doctrine and limits on compelled speech.

  • Human-driven cars kill people all the time too. And the stock thing from 2010 wasn't AI, just algorithmic trading.

    Not the most convincing of arguments.

There are two dominant narratives I see when AI X-Risk stuff is brought up:

- it's actually to get regulatory capture

- it's hubris, they're trying to seem more important and powerful than they are

Both of these explanations strike me as too clever by half. I think the parsimonious explanation is that people are actually concerned about the dangers of AI. Maybe they're wrong, but I don't think this kind of incredulous conspiratorial reaction is a useful thing to engage in.

When in doubt take people at their word. Maybe the CEOs of these companies have some sneaky 5D chess plan, but many, many AI researchers (such as Yoshua Bengio and Geoffrey Hinton) who don't stand to gain monetarily have expressed these same concerns. They're worth taking seriously.

  • > Both of these explanations strike me as too clever by half. I think the parsimonious explanation is that people are actually concerned about the dangers of AI

    This rings hollow when these companies don't practice what they preach and set an example themselves: they don't halt research or cut funding for in-house development of their own AIs.

    If you believe that there’s X-Risk of AI research, there’s no reason to think it wouldn’t come from your own firm’s labs developing these AIs too.

    Continuing development while telling others they need to pause seems to make “I want you to be paused while I blaze ahead” far more parsimonious than “these companies are actually scared about humanity’s future” - they won’t put their money where their mouth is to prove it.

    • It's a race dynamic. Can you truly imagine any one of them stopping without the others agreeing? How would they tell that the others had really stopped? I think they do believe that what they're doing is dangerous, but they would rather be the ones to build it than let somebody else get there first, because who knows what they'll do.

      It's all a matter of incentives and people can easily act recklessly given the right ones. They keep going because they just can't stop.

  • > When in doubt take people at their word.

    This is not mutually exclusive with it being either hubris or regulatory capture. People see the world colored by their own interests, emotions, background, and values. It's quite possible that the person making the statement sincerely believes there's a danger to humanity, but it's actually a danger to their monopoly that their self-image will not let them label as such.

    It's never regulatory capture when you're the one doing it. It's always "The public needs to be protected from the consequences that will happen if any non-expert could hang up a shingle." Oftentimes the dangers are real, but the incumbent is unable to also perceive the benefits of other people competing with them (if they could, competition wouldn't be dangerous, they'd just implement those benefits themselves).

  • When I see comments like these, it's clear that the commenter is probably an individual contributor who has never seen how upper management or politics actually works. Regulatory capture is probably one of the biggest wealth-generating techniques out there. It's very real.

    If some rando anonymous posters can think it up, it doesn't take a CEO playing 5D chess to think it up. And many of us have witnessed these techniques being used by companies directly. Microsoft was famous for doing this sort of thing, in a much more roundabout fashion, for instance with the SCO debacle.

    It's standard business practice, not conspiracy 5D chess or whatever moniker you want to give it to be dismissive.

  • >it's hubris, they're trying to seem more important and powerful than they are

    >Both of these explanations strike me as too clever by half

    This is a good point. You have to be clever to hop on a soapbox and make a ruckus about doomsday to get attention. Only savvy actors playing 5D chess can aptly deploy the nuanced and difficult pattern of “make grandiose claims for clicks”

  • You can go back 30 years and read passages from textbooks about how dangerous an underspecified AI could be, but those were problems for the future. I'm sure there's some degree of x-risk promotion in the industry serving the purpose of hyping up businesses, but it's naive to act like this is a new or fictitious concern. We're just hearing more of it because capabilities are rapidly increasing.

  • > They're worth taking seriously.

    1. While their contributions to AI tech are unmistakable, what do Bengio and Hinton really know about the human dangers of AI? Being an expert in one thing does not make one an expert in everything. It is unlikely that they understand the human dangers any more than any other random kook on Reddit. Why take them more seriously than the other kooks?

    2. Hinton's big concern is that AI will make it easy to steal identities. Even if we assume that is true, it is already not that hard to steal identities. It is a danger that already exists even without AI and, realistically, already needs to be addressed. What's the takeaway if we are to take the message seriously? That AI will make the problems we already have more noticeable, and because of that we will finally have to get off our lazy asses and do something about those problems that we've tried to sweep under the rug? That seems like a good thing.

  • Getting the government to regulate your competition isn't 5d chess, it's barely even chess. If you study the birth of any technology in the last 200 years -- rail, electricity, radio, integrated circuits, etc -- you will see the same playbook put to this use. Any good tech executive must be aware of this history.

    None of this requires every doomer to be disingenuous or even ill-informed, or even for specific leaders to be lying about their beliefs. It's just that the beliefs that benefit highly capitalized companies get amplified, and the alternatives not so much.

  • > many, many AI researchers (such as Yoshua Bengio and Geoffrey Hinton) who don't stand to gain monetarily have expressed these same concerns

    I respect these researchers, but I believe they are doing it to build their own brand, whether consciously or subconsciously. There's no doubt it's working. I'm not in the sub-field, but I have been following neural nets for a long time, and I hadn't heard of either Bengio or Hinton before they started talking to the press about this.

    • >but I believe they are doing it to build their own brand, whether consciously or subconsciously.

      I am always in awe at how easily people craft unfalsifiable worldviews in service to their preconceived opinions.

    • As someone who has been following deep learning for quite some time as well, Bengio and Hinton would be some of the first people I think of in this field. Just search Google for "godfathers of ai" if you don't believe me.

  • > When in doubt take people at their word.

    Hanlon's razor works great when applied to your personal relationships, but it falls apart when billions/trillions of dollars are at stake.

  • Besides the point, but FYI you are misusing the term parsimonious.

    • It's a reference to the law of parsimony, the more apt name for Occam's razor. I happen to disagree with GP because governments always want to expand their power. When they do something that results in what they want, the parsimonious explanation is that they did it because they wanted that result.

    • He is not. There are multiple definitions. The other definition is to explain something using an economical/simple approach.

It's unfortunate that "AI" is still framed and discussed as some type of highly autonomous system that's separate from us.

Bad acting humans with AI systems are the threat, not the AI systems themselves. The discussion is still SO focused on the AI systems, not the actors and how we as societies align on what AI uses are okay and which ones aren't.

  • > Bad acting humans with AI systems are the threat, not the AI systems themselves.

    I wish more people grasped this extremely important point. AI is a tool. There will be humans who misuse any tool. That doesn't mean we blame the tool. The problem to be solved here is not how to control AI, but how to minimize the damage that bad acting humans can do.

    • Right now, the "bad acting human" is, for example, Sam Altman, who frequently cries "Wolf!" about AI. He is trying to eliminate the competition, manipulate public opinion, and present himself as a good Samaritan. He has been so successful in his endeavor, even without AI, that you must now report to the US government how you created and tested your model.

      24 replies →

    • This is true, but skirts around a bit of the black-box problem. It's hard to put guardrails on an amoral tool whose failure modes are hard to fully understand. And it doesn't even require "bad acting humans" to do damage; it can just be good-intending-but-naïve humans.

      10 replies →

    • Sure, today at least. But there is a future where the human has given AI control of things, with good intention, and the AI has become the threat.

      AI is a tool today; tomorrow AI is calling the shots in many domains. It's worth planning for tomorrow.

      13 replies →

    • A big problem with discourse on AI is people talking past each other because they're not being clear enough on their definitions.

      An AI doomer isn't talking about any current system, but hypothetical future ones which can do planning and have autonomous feedback loops. These are best thought of as agents rather than tools.

      2 replies →

    • If people understood this, they would have to live with the unsatisfying reality that not all violators can be punished. Painting the technology itself as potentially criminal lets them take revenge on corporations, which is what the mostly artist types want.

    • If you apply this thinking to nuclear weapons it becomes nonsensical, which tells us that a tool that can only be oriented to do harm will only be used to do harm. The question then is whether LLMs, or AI more broadly, will even potentially help the general public, and there is no reason to think so. The goal of these tools is to be able to continue running the economy while employing far fewer people. These tools are oriented by their very nature to replace human labor, which in the context of our economic system has a direct and unbreakable relationship to a reduction in the well-being of the humans it replaces.

      10 replies →

    • But usually there’s a one-way flow of intent from the human to the tool. With a lot of AI the feedback loop gets closed, and people are using it to help them make decisions, and might be taken far from the good outcome they were seeking.

      You can already see this on today's internet. I'm sure the pizzagate people genuinely believed they were doing a good thing.

      This isn’t the same as an amoral tool like a knife, where a human decides between cutting vegetables or stabbing people.

      1 reply →

  • I think this may be a little short sighted.

    AI “systems” are provided some level of agency by their very nature. That is, for example, you cannot predict the outcomes of certain learning models.

    We necessarily provide agency to AI because that’s the whole point! As we develop more advanced AI, it will have more agency. It is an extension of the just world fallacy, IMO, to say that AI is “just a tool” - we lend agency and allow the tool to train on real world (flawed) data.

    Hallucinations are a great example of this in an LLM. We want the machine to have agency to cite its sources… but we also create potential for absolute nonsense citations, which can be harmful in and of themselves, though the human on the using side may have perfectly positive intent.

  • AI can become a highly autonomous system that's separate from us. Current technological limits just make that a hard sell for now.

    LLMs, viewed as general-purpose simulators/predictors, don't necessarily have any agency or goals by themselves. But there is nothing to say they cannot be made, by humans, to simulate an agent with its own goals - possibly by malice, possibly by mistake. Model capabilities are the limiting factor right now, but with the rise of more capable uncensored models, it isn't difficult to imagine a model attaining some degree of autonomy, or at least doing a lot of damage before imploding in on itself.
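
    As a minimal sketch of that "simulator wrapped into an agent" pattern (llm() and run_tool() here are hypothetical stand-ins, not any real API):

        # Sketch: a goal-free text predictor becomes goal-directed only via
        # the loop humans wrap around it. llm() and run_tool() are
        # hypothetical stubs, not a real library.
        def llm(prompt: str) -> str:
            # Stand-in for any next-token predictor; it has no goals,
            # it just continues text.
            return "check_balance"

        def run_tool(action: str) -> str:
            # Stand-in for shell access, browsing, payments, etc.
            return f"ran {action!r}"

        goal = "maximize account balance"
        history = f"You are an agent. Goal: {goal}.\n"
        for _ in range(3):                           # bounded here; real loops may not be
            action = llm(history + "Next action:")   # the model only predicts text...
            result = run_tool(action)                # ...the wrapper is what acts on it
            history += f"Action: {action}\nResult: {result}\n"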

  • >Bad acting humans with AI systems are the threat

    Does this mean "humans with bad motives" or does it extend to "humans who deploy AI without an understanding of the risk"?

    I would say the latter warrants a discussion on the AI systems, if they make it hard to understand the risk due to opaqueness.

  • > Bad acting humans with AI systems are the threat, not the AI systems themselves.

    It's worth noting this is exactly the same argument used by pro-gun advocates as it pertains to gun rights. It's identical to: guns don't harm/kill people, people harm/kill people (the gun isn't doing anything until the bad actor aims and pulls the trigger; bad acting humans with guns are the real problem; etc).

    It isn't an effective argument and is very widely mocked by the political left. I doubt it will work to shield the AI sector from aggressive regulation.

    • It is an effective argument though, and the left is widely mocked by the right for simultaneously believing that only government should have the necessary tools for violence, and also ACAB.

      Assuming ML systems are dangerous and powerful, would you rather they be restricted to a small group of power-holders who will definitely use them to your detriment/to control you (they already do) or democratize that power and take a chance that someone may use them against you?

      1 reply →

    • By that logic:

      Are we going to ban and regulate Photoshop and GIMP because bad people use them to create false imagery for propaganda?

      Actually, back that up for a second.

      Are we going to ban and regulate computers (enterprise and personal) because bad people use them for bad things?

      Are we going to ban and regulate speech because bad people say bad things?

      Are we going to ban and regulate hands because bad people use them to do bad things?

      The buck always starts and stops with the person doing the act. A tool is just a tool; blaming the tool is nothing but an act of scapegoating.

    • This argument pertains to every tool: guns, kitchen knives, cars, the anarchist cookbook, etc. You aren't against the argument. You're against how it's used. (Hmm...)

      1 reply →

  • It's not either/or. At some point AI is likely to become autonomous.

    If it's been trained by bad actors, that's really not a good thing.

  • The disturbing thing to consider is that it might be bad-acting AI with human systems. I can easily see a situation where a bad-acting algorithm alone would have a far less negative effect than one tuned precisely and persuasively to get humans to do the work of increasing the global suffering of others for temporary individual gain.

    To be clear, I'm not sure LLMs and their near term derivatives are so incredibly clever, but I have confidence that many humans have a propensity for easily manipulated irrational destructive stupidity, if the algorithm feeds them what they want to hear.

  • It reminds me of dog breeds.

    Some dogs get bad reputations, but humans are an integral part of the picture. For example, German Shepherds are objectively dangerous, but have a good reputation because they are trained and cared for by responsible people, such as police handlers.

Most of the things people are worried about AI doing are the things corporations are already allowed to do - snoop on everybody, influence governments, oppress workers, lie. AI just makes some of that cheaper.

  • Turning something that we're already able to do into something we're able to do very easily can be extremely significant. It's the difference between "public records" and "all public records about you being instantly viewable online." It's also one of the subjects of the excellent sci fi novel "A Deepness in the Sky," which is still great despite making some likely bad guesses about AI.

  • And just like in politics, the strategy is to redefine the thing you want to achieve - in this case, total control of a technology - as something else that's bad, so that people are distracted from the fact that what you actually want is exactly the thing you've relabeled.

    Politicians that point fingers at other politicians being corrupt or incompetent while they themselves are exactly that use the same strategy.

    Power and manipulation. Nothing new under the sun. What's new, though, is that we can see in plain sight how corporations control politics. Like literally, this can be documented with git-commit-history accuracy: thousands upon thousands of people repeating the exact same phrases defending OpenAI and the "revolutionary" product, fear mongering, political lobbying, manufactured threats, and of course a cure that only they can provide, and so on. I would not let people who use such tactics near an email account, let alone AI policy making.

  • If anything, LLMs can help process vast troves of customer data, communications, and metadata more effectively than ever before.

  • Nukes are the same as guns; they just make it cheaper.

    • A snowflake really isn't harmful.

      A snowball probably isn't harmful unless you do something really dumb.

      A snow drift isn't harmful unless you're not cautious.

      An avalanche, well that gets harmful pretty damned quick.

      These things are all snow, but suddenly at some point scale starts to matter.

      3 replies →

    • Nukes are not cheap. It is cheaper to firebomb. I would love it if the reason nukes were not used were empathy or humanitarian concern, but it is strictly money, optics, psychology, and practicality.

      You don't want your troops to have to deal with the results of a nuked area. You want to use the psychological terror to dissuade someone from invading you, while you are invading them or others. See Russia's take.

      Or you are a regime and want to stay in power. Having them keeps you in power; using them, or crossing the line of suggesting you'll use them, will cause international retaliation and your removal. (See Iraq.)

  • The ironic thing is that many individuals now clamoring for more regulation have long claimed to be free-market libertarians who think regulation is "always" bad.

    Evidently they think regulation is bad only when it puts their profits at risk. As I wrote elsewhere, the tech glitterati asking for regulation of AI remind me of the very important Fortune 500 CEO Mr. Burroughs in the movie "Class:"

    Mr. Burroughs: "Government control, Jonathan, is anathema to the free-enterprise system. Any intelligent person knows you cannot interfere with the laws of supply and demand."

    Jonathan: "I see your point, sir. That's the reason why I'm not for tariffs."

    Mr. Burroughs: "Right. No, wrong! You gotta have tariffs, son. How you gonna compete with the damn foreigners? Gotta have tariffs."

    ---

    Source: https://www.youtube.com/watch?v=nM0h6QXTpHQ

    • Absolutely. Those folks arguing for AI regulation aren't arguing for safety – they're asking the government to build a moat around the market segment propping up their VC-funded scams.

      2 replies →

    • Their motivations may be selfish, but that doesn't mean that regulation of AI is wrong. I'd prefer there be a few heavily regulated and/or publicly owned bodies in the public eye that can use and develop these technologies, rather than literally anyone with a powerful enough computer. Yeah, it's anti-competitive, but competition isn't always a good thing.

I feel like Andrew Ng has more name recognition than Google Brain itself.

Also, Business Insider isn't great; the original Australian Financial Review article has a lot more substance: https://archive.ph/yidIa

I've never been convinced by the arguments of OpenAI/Anthropic and the like on the existential risks of AI. Maybe I'm jaded by the ridiculousness of "thought experiments" like Roko's basilisk and the lines of reasoning followed by EA adherents, where the risks are comically infinite and alignment feels a lot more like hermeneutics.

I am probably just a bit less cynical than Ng is here on the motivations[^1]. But regardless of whether or not the AGI doomsday claim is justification for a moat, Ng is right in that it's taking a lot of the oxygen out of the room for more concrete discussion on the legitimate harms of generative AI -- like silently proliferating social biases present in the training data, or making accountability a legal and social nightmare.

[^1]: I don't doubt, for instance, that there's in part some legitimate paranoia -- Sam Altman is a known doomsday prepper.

  • > Ng is right in that it's taking a lot of the oxygen out of the room for more concrete discussion on the legitimate harms of generative AI -- like silently proliferating social biases present in the training data, or making accountability a legal and social nightmare.

    And this is the important bit. All these people like Altman and Musk who go on rambling about the existential risk of AI distract from the real AI-harm discussions we should be having, and thereby directly harm people.

    • I'm always unsure what people like you actually believe regarding existential AI risk.

      Do you think it's just impossible to make something intelligent that runs in a computer? That intelligence will automatically mean it will share our values? That it's not possible to get anything smarter than a smart human?

      Or do you simply believe that's a very long way away (centuries) and there's no point in thinking about it yet?

      1 reply →

  • Why would Roko's basilisk play a big part in your reasoning?

    In my experience, it's basically never been a part of serious discussions in EA/LW/AI Safety. Mostly it comes up when people are joking around or when speaking to critics who raise it themselves.

    Even in the original post, the possibility of this argument was actually more of a sidenote on the way to the main point (admittedly, his main point involved an equally wacky thought experiment!).

    • I didn't intend to portray it as a large part of my reasoning. It's not really any part of my reasoning at all, except to illustrate the sort of absurd argumentation that led to the regulations Ng is criticizing[^1]. In these lines of reasoning, proponents basically _begin_ with an almighty AI and derive harms, then step back and debate/design methods for preventing the almighty AI. Within a strict utilitarian framework this works, because infinite harm times non-zero probability is still infinite. From a practical standpoint it is a waste of time and, as Ng argues, likely to stifle innovations with a far greater chance of benefiting society than of causing AI doomsday.
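
      In Pascal's-wager form, that utilitarian step is just the following (a sketch, with $H$ the posited harm and $p$ its assumed non-zero probability):

          \mathbb{E}[\text{harm}] = p \cdot H \to \infty \quad \text{as } H \to \infty, \text{ for any fixed } p > 0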

      The absurdity of this line of reasoning also supports the cynical interpretation that this is all just moat building, with the true believers propped up as useful idiots. I'm no Gary Marcus, but prepping for AGI doomsday seems a bit premature.

      >In my experience, it's basically never been a part of serious discussions in EA/LW/AI Safety. Mostly, comes up when people are joking around or when speaking to critics who raise it themselves.

      >Even in the original post, the possibility of this argument was actually more of a sidenote on the way to main point (admittedly, he's main point involved an equally wacky thought experiment!).

      This is fair; it was a cheap shot. While I will note that EY seems to take the possibility seriously, I admittedly have no idea how seriously people take EY these days. But for some reason 80,000 Hours lists AI as the #1 threat to humanity, so it reads to me more like flat earthers vs. geocentrists.

      [^1]: As in, while I understand that Roko is sincerely shitposting about something else, and merely coming across the repugnant conclusion that an AGI could be motivated to accelerate its own development by retroactive punishment, the absurd part is in concluding that AGI is a credible threat. Everything else just adds to that absurdity.

Amen. This whole scare-tactic thing is ridiculous. Just make the public scared of it so you can rope it in yourself. Then you've got people like my mom commenting that "AI scares her because Musk and (some other corporate rep) said that AI is very dangerous. And I don't know why there'd be so many people saying it if it's not true." Because you're gullible, mom.

  • "<noun> scares her because <authoritative source> said that <noun> is very dangerous. And I don't know why there'd be so many people saying it if it's not true."

    The truly frustrating part is how many see this ubiquitous pattern in some places, but are blind to it elsewhere.

    • That "pattern" actually indicates that something is true most of the time (after all, a lot of dangerous things really exist). So "noticing" this pattern seems to rely on being all-knowing?

      1 reply →

    • I'm not sure if this is commentary on me somehow or not, lol, but I agree with you. She is the same person who will point out issues with the things my brother brings up, yet is unable to recognize the pattern when she does it herself. I'm sure I'm guilty of it too, but, naturally, I don't know where.

    • "Uranium waste" scares her because "Nuclear Regulatory Commission" said that "Uranium waste" is very dangerous.

      You know, sometimes shit is just dangerous.

    • Meh, I don't think this extrapolates to a general principle very well. While no authoritative source is perfectly reliable, some are more reliable than others. And Elon Musk is just full of crap.

  • Is Mom scared because Musk told her to be scared, or because she thought about the matter herself and concluded that it's scary? Why do you assume that people scared of AI must be under the influence of rich people/corps today, rather than this fear being informed by their own consideration of the problem or by decades of media that has been warning about the dangers of AI?

    Maybe Mom worries about any radical new technology because she lived through nuclear attack drills in school. Or because she's already seen computers and robots take people's jobs. Or because she watched Terminator or read Neuromancer. Or because she reads LessWrong. Why assume it's because she's fallen under the influence of Musk?

    • Because most sociologists suggest that most people don't take the time to think critically like this. The emotional brain usually wins out over the rational one.

      Then you have this idea of the sources of information most people have access to being fundamentally biased and incentivized towards reporting certain things in certain manners and not others.

      You basically have low odds of thinking rationally, low odds of finding good information that isn't slanted in some way, and, taking the product of those probabilities, far lower odds that you'd both act rationally and somehow have access to the ground truth. To say nothing of the expertise required to place all of this truth into the correct context. And if you also consider the probability of the mother being an AI expert, the odds of all of this working out successfully get far lower still.

      3 replies →

    • Obviously, I don't know that person's mom, but I know mine and other moms, and I don't think it's a milquetoast conclusion that it's a combination of both. However, the former (as both a proxy and Musk himself) probably carries more weight. Most non-technical people's thoughts on AI aren't particularly nuanced or original.

      Musk certainly doesn't help with anything. In my experience, a lot of people of my mom's generation are still sucking the Musk lollipop and are completely oblivious to Musk's history of lying to investors, failing to keep promises, taking credit for things he and his companies didn't invent, promoting an actual Ponzi scheme, claiming to be autistic, suggesting he knows more than anyone else, and so on. Even upon being informed, none of it ends up mattering because "he landed a rocket rightside up!!!"

      So yeah, if Musk hawks some lame opinion on a thing like AI, tons of people will take that as an authoritative stance.

      1 reply →

    • First, I don't assume, I know my mom and her knowledge about topics. Second, the quoted text was a quote. She literally said that. (replacing the word "her" with "me")

      I'm not sure what you're getting at otherwise. It's not like she and I haven't spoken outside of her saying that phrase. She clearly has no idea what AI/ML is or how it works, and she is susceptible to fear-mongering messages on social media telling her how to think and what to be scared of. She has a strong history of it.

    • AGI is scary, I think we can all agree on that. What the current hype does is increase the estimated probability of AGI actually happening in the near future.

  • Maybe an odd take, but I'm not sure what people actually mean when they say "AI terrifies them". Terrified is a strong word. Are people unable to sleep? Biting their nails constantly? Is this the same terror as watching a horror movie? Being chased by a mountain lion?

    I have a suspicion that it's sort of a default response. Socially expected? Then you poll people: Are you worried about AI doing XYZ? People just say yes, because they want to seem informed, and like the kind of person who considers things carefully.

    Honestly not sure what is going on. I'm concerned about AI, but I don't feel any actual emotion about it. Arguably I must have some emotion to generate an opinion, but it's below conscious threshold obviously.

  • And that's exactly the goal - make mom and dad scared so they'll vote for those who provide "protection" from the manufactured fear. Resorting to this type of tactic to make your product viable just proves how weak your position is.

    I think more people should speak out left and right about what’s going on to educate mom and dad.

  • I mean if they were lying about that, what else might they be lying about? Maybe giving huge tax breaks to the 0.1% isn't going to result in me getting more income? Maybe it is in fact possible to acquire a CEO just as good or better than your current one that doesn't need half a billion dollar compensation package and an enormous golden parachute to do their job? I'm starting to wonder if billionaires are trustworthy at all.

An alternative idea to the regulatory moat thesis is that it serves Big Tech’s interests to have people think it is dangerous because then surely it must also be incredibly valuable (and hence lead to high Big Tech valuations).

I think it was Cory Doctorow who first pointed this out.

  • You don’t even need fear, hype alone would do that and did just that over the past year, with ai stocks exploding exponentially like some shilled shitcoin before dramatic clifflike falls. Mention ai in your earnings call and your stock might move 5%.

  • Exactly like "fentanyl is so dangerous, a few milligrams can kill you", which only led to massive fentanyl demand, because everybody wants the drug branded the most powerful.

    • A few milligrams CAN kill you. This was the headline after many thousands of overdoses; it didn't invigorate the marketplace. Junkies knew of fent decades ago. It's only prevalent in the marketplace because of effective laws regarding the production of other illicit opiates, which is probably the real lesson here.

      It's all a big balloon - squeezing one side just makes another side bigger.

    • Any source for this? I thought the demand was based on its low cost and high potency so it's easier to distribute. Is anyone really seeking out fentanyl specifically because the overdose danger is higher?

  • Yup, this is it. Anyone who has worked even remotely closely with "AI" can immediately smell the BS of the existential-crisis talk. Elon Musk started this whole trend due to his love of sci-fi, and Sam Altman ran with the idea heavily because it adds to the novelty of OpenAI.

    • I don't think they are such capable actors as to do it on purpose.

      I think they really believe what they are saying, because people in such positions tend to be strong believers in something, and that something happens to be the "it" thing of the moment, which propels them from rags to riches (or, in Musk's case, further propels them towards even more riches).

      Let's be honest here, what's Sam Altman without AI? What's Fauci without COVID, what's Trump without the collective paranoia that got him elected?

I think there are actual existential and “semi-existential” risks, especially with going after an actual AGI.

Separately, I think Ng is right - big corp AI has a massive incentive to promote doom narratives to cement themselves as the only safe caretakers of the technology.

I haven’t yet succeeded in squaring these two into a course of action that clearly favors human freedom and flourishing.

Both can be true at the same time. Big AI companies can be trying for regulatory capture while there may be real dangers, both short-term as well as long term, perhaps even existential dangers.

Why do people seem to think evidence for one of these is counter evidence for the other?

  • I'm surprised given the makeup of the hackernews crowd there aren't more people who appreciate this here.

    I only know a few folks who work at the big AI labs but it's very clear to me that they are personally worried about existential risk.

    Do people here not have friends and family working at these labs? I just figured people here would be more exposed to folks working in the leading labs.

That story about AI also fits a bit too neatly with the Techno-optimist worldview: 'We technologists are gods who will make / break the world.' Another word for it is 'ego'.

Also, we can assume they are spreading that story to serve their interests (but which interests?).

But that doesn't mean AI doesn't need regulation. In the hysteria, the true issues can be lost. IT is already causing massive impacts, such as on health, hate, violence, etc. We need to figure out what AI's risks are and make sure it's working in our best interests.

  • A lot of people have learned to 'small talk' like fancy autocomplete. Part of our minds has been mechanized like that, so it's not spontaneous but a compulsion. Once people learn the algorithm, they might conclude that AI hacked their brains, even though it's just vapid, unfiltered speech that they are suddenly detecting.

    I think the pandemic hysteria will seem like a walk in the park once people start mass-purging their viral memes... Too late to stop it now if corporations are already doing regulatory capture.

    Nothing to do with the tech. We never had a technical problem. It was just this loose collection of a handful of wetware viruses like 'red-pilling' which we sum up as 'ego' all along.

    But I think if we survive this then people won't have any need for AI anymore since we won't be reward-hacking ourselves stupid. Or there will just be corporate egos left over and we will be in a cyberpunk dystopia faster than anyone expected.

    I had nightmares about this future when I was little. No one to talk to who would understand, just autocomplete replies. Now I'm not even sure if I should be opening up about it.

    • > once people start mass-purging their viral memes

      It's hard for me to imagine this ever happening. It would be the most unprecedented event in the history of human minds.

      > we won't be reward-hacking ourselves stupid [...] Or there will just be corporate egos left over and we will be in a cyberpunk dystopia

      I don't see how reward-hacking can ever be stopped (although it could be improved). Regardless, ego seems to continue to win the day in the mass appeal department. There aren't many high visibility alternatives these days, despite all we've supposedly learned. I think the biggest problems we have are mostly education based, from critical thinking to long-term perspectives. We need so very much more of both, it would make us all richer and happier.

      1 reply →

  • Conversely, we the 'human gods' can ruin our planet with pollution. If we wanted to ensure that everything larger than a raccoon went extinct, we'd have zero problem doing so.

    It should be noted that the above world-scale problems are created by human intelligence; if you suddenly create another intelligence at the same level or higher (AGI/ASI), expect new problems to crop up.

    AI risks ARE human risks and more.

    • > Conversely, we the 'human gods' can ruin our planet with pollution.

      An interesting point. More specifically, I mean that these specific people think of themselves as gods - super-human intelligence and power, and we all are in their hands.

      They've convinced many people - look at the comments in this thread repeating the 'gods' delusion that the commenters and all other mortals are powerless before them: 'There's nothing we can do!'

The dangerous thing about AI regulation is that countries with fewer regulations will develop AI at a faster pace.

It's a frightening thought: the countries with the least regulation will have AGI first. What will that lead to?

When AI can control a robot that looks like a human, can walk, grab, work, is more intelligent than a human and can reproduce itself - what will the country with the least regulations that created it do with it?

  • > The dangerous thing about AI regulation is that countries with fewer regulations will develop AI at a faster pace.

    "but countries without {child labour laws, environment regulation, a minimum wage, slavery ban} will out compete us!"

  • I guess they will just unplug it? The fact that they need large amounts of electricity, which is not trivial to produce, makes them very vulnerable. Power is usually the first thing to go in a war. Not to mention there is no machine that self-replicates. Full humanoid robots are going to have an immense support burden, the same way that cars do, with complex supply chains. I guess this is the reason nature didn't evolve robots.

    • "Just unplug it" works only if you realize that the AGI is working against your interests. If its at least human level intelligent it's going to realize that you will try doing that and it will only actually make it clear it wants to kill you when there's nothing you can do about it.

      2 replies →

  • Also, the countries with the highest level of standardization imposed by law will see the highest AI use in SMBs, where most of the growth comes from.

  • Probably not. The countries that are furthest ahead seem to be the US, China, maybe a bit in the UK. The US will probably win in spite of being more regulated than China, as usual for most tech.

  • Commercially, this is true. But governments have a long history of developing technologies (think nuclear/surveillance/etc) that fall under significant regulation.

I don’t think current implementations cause an existential risk. But current implementations are causing a backward step in our society.

We have lost the ability to get reliable news. Not that fake news did not exist before AI, but the price to produce it was not practically zero.

Now we can spam social media with whatever narrative we want. And no human can sift through all of it to tell real from BS.

So now we are becoming even more dependent on AI. Now we need an AI copilot to help us sift through the garbage to find some inkling of truth.

We are setting up a society where AI gets more powerful, and humans become less self-sufficient.

It has nothing to do with dooms day scenarios of robots harvesting our bodies, and more with humans not being able to interact with the world without AI. This already happened with smartphones, and while there are some advantages, I don’t think there are many people that have a healthy relationship with their smartphone.

  • People act like the truth is gone with AI. It's still there. Don't ask ChatGPT about the function; the documentation is still there for you to read. Experts need the ground truth, and it's always there. What people read in the paper or see on TV is not a great source of truth. Going to the sources of these articles and reports is, but this layer of abstraction serves to leave things out and bring about opportunities to slant the coverage depending on how incentives are aligned. In other words, AI doesn't change how misinformed most people are on most things.

    • SNR. The truth isn't gone, but it is more diffuse. Yeah, the truth may be out there somewhere, but will you have any idea whether you're actually reading it? Is the search engine actually leading you to the ground truth? Is the expert an actual expert, or part of a for-profit industry think tank with the sole purpose of manipulating you? Are the sources the actual source, or just an AI-hallucinated daydream, cross-linked by a lot of different sites to give the appearance of authority?

A fact relevant to this claim: the signers of the referenced statement, https://www.safe.ai/statement-on-ai-risk, are mostly not "Big Tech".

I'd pause and think twice about who seems most straightforwardly honest on this before jumping to conclusions -- and, more importantly, about the object-level claims: Is there no substantial chance of advanced AI in, like, decades or sooner? Would scalable intelligences comparable to or more capable than humans pose any risk to them? Taking into account that the tech creating them, so far, does not produce anything like the same level of understanding of how they work.

The premise that AI fear and/or fearmongering is primarily coming from people with a commercial incentive to promote fear, from people attempting to create regulatory capture, is obviously false. The risks of AI have been discussed in literature and media for literally decades, long before anybody had any plausible commercial stake in the promotion of this fear.

Go back and read cyberpunk lit from the 80s. Did William Gibson have some cynical commercial motivation for writing Neuromancer? Was he trying to get regulatory capture for his AI company that didn't exist? Of course not.

People have real and earnest concerns about this technology. Dismissing all of these concerns as profit-motivated is dishonest.

  • I think the real dismissal is that people's concerns are based more on the Hollywood sci-fi parodies of the technologies than on the actual technologies. There are basically no concerns with ML for specific applications; any actual concerns are about AGI. AGI is a largely unsuccessful field. Most of the successes in AI have been highly specific applications, the most general of which has been LLMs, which are still just making statistical generalizations over patterns in language input and still lack general intelligence. I'm fine if AGI gets regulated because it's potentially dangerous. But what I think is going to happen is that we go after specific ML applications with no hope of being AGI, because people are in an irrational panic over AI and are acting like AGI is almost here because they think LLMs are a lot smarter than they actually are.
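
    To make "statistical generalizations over patterns in language input" concrete, a toy bigram counter captures the flavor of the objective (an illustrative sketch only; real LLMs are transformers over tokens at a vastly larger scale):

        # Toy sketch: predict the next word as the most frequent follower
        # observed in the training text. Illustrative only.
        from collections import Counter, defaultdict

        corpus = "the cat sat on the mat the cat ate".split()
        nxt = defaultdict(Counter)
        for a, b in zip(corpus, corpus[1:]):
            nxt[a][b] += 1

        print(nxt["the"].most_common(1)[0][0])  # -> 'cat' (follows 'the' twice)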

    • > acting like AGI is almost here because they think LLMs are a lot smarter than they actually are.

      For me, it's a bit the opposite -- the effectiveness of dumb, simple, transformer-based LLMs is showing me that the human brain itself (while working quite differently) might involve a lot less cleverness than I previously thought. That is, AGI might end up being much easier to build than it long seemed, not because progress is fast, but because the target was not as far away as it seemed.

      We spent many decades recognizing the failure of the early computer scientists who thought a few grad students could build AGI as a summer project, and apparently learned that this meant that AGI was an impossibly difficult holy grail, a quixotic dream forever out of reach. We're certainly not there yet. But I've now seen all the classic examples of tasks that the old textbooks described as easy for humans but near-impossible for computers, become tasks that are easy for computers too. The computers aren't doing anything deeply clever, but perhaps it's time to re-evaluate our very high opinion of the human brain. We might stumble on it quite suddenly.

      It's, at least, not a good time to be dismissive of anyone who is trying to think clearly about the consequences. Maybe the issue with sci-fi is that it tricked us into optimism, thinking an AGI will naturally be a friendly robot companion like C-3PO, or if unfriendly, then something like the Terminator that can be defeated by heroic struggle. It could very well be nothing that makes a good or interesting story at all.

    • The fine line between bravery and stupidity is understanding the risks. Somebody who understands the danger they're walking into is brave. Somebody who blissfully walks into danger without recognizing the danger is stupid.

      A technological singularity is a theorized period during which the length of time you can make reasonable inferences about the future rapidly approaches zero. If there can be no reasonable inferences about the future, there can be no bravery. Anybody who isn't afraid during a technological singularity is just stupid.

    • The sci-fi scenarios are a long-term risk, which no one really knows about. I'm terrified of the technologies we have now, today, used by all the big tech companies to boost profits. We will see weaponized mass disinformation combined with near perfect deep fakes. It will become impossible to know what is true or false. America is already on the brink of fascist takeover due to deluded MAGA extremists. 10 years of advancements in the field, and we are screwed.

      Then of course there is the risk to human jobs. We don't need AGI to put vast amounts of people out of work, it is already happening and will accelerate in the near term.

  • >>>Did William Gibson have some cynical commercial motivation for writing Neuromancer?

    I don't think Gibson was trying to promote fear of A.I. any more than J.R.R. Tolkien was trying to promote fear of magic rings.

    • That may be how you read it, but isn't necessarily how other people read it. A whole lot of people read cyberpunk literature as a warning about the negative ways technology could impact society.

      In Neuromancer you have the Turing Police. Why do they exist if AIs don't pose a threat to society?

      5 replies →

  • AI can be dangerous, but that's not what is pushing these laws; it's regulatory capture. OpenAI was supposed to release their models a long time ago; instead they are just charging for access. Now that actually open models are catching up, they want to stop it.

    If the biggest companies in AI are making the rules, we might as well have no rules at all.

  • The risks people write about with AI are about as tangible as the risks of nuclear war or biowarfare. Possible? Maybe. But far more likely to be seen in the movies than outside your door. Just because it's been a sci-fi trope, like nuclear war or alien invasion, doesn't mean we are all that close to it being a reality.

  • Fictional depictions of AI risk are like thought experiments. They have to assume that the technology achieves a certain level of capability and goes in a certain direction to make the events in the fictional story possible. Neither of these assumptions is a given. For example, we've also had many sci-fi stories that feature flying taxis and the like - but there's no point debating "flying taxi risk" when it seems like flying cars are not a thing that will happen for reasons of practicality.

    So sure, it's possible that we'll have to reckon with scenarios like those in Neuromancer, but it's more likely that reality will be far more mundane.

    • Flying cars is a really bad example... We have them, they are called airplanes and airplanes are regulated to hell and back twice. We debate the risk around airplanes when making regulations all the time! The 'flying cars' you're talking about are just a different form of airplane and they don't exist because we don't want to give most people their own cruise missile.

      So, please, come up with a better analogy because the one you used failed so badly it negated the point you were attempting to make.

  • The problem is that AI is not intelligent at all. Those stories were looking at a conscious intelligence and trying to explore what might happen. When ChatGPT can be fooled into conversations even a child would know are bizarre, we are talking about a non-intelligent statistical model.

    • I'm still waiting for the day when someone puts one of these language models inside of a platform with constant sensor input (cameras, microphones, touch sensors), and a way to manipulate outside environment (robot arm, possibly self propelled).

      It's hard to tell if something is intelligent when it's trapped in a box and the only input it has is a few lines of text.

    • An unintelligent AI that is competent is even more dangerous as it is more likely to accidentally do something bad.

  • You can have a thoughtful idea at the same time you have someone cynically appropriating it for their own selfish causes.

    Doesn't mean the latter is right. You evaluate an idea on its merits, not by who is saying what.

    • Considering incentives is critically important. Considering the idea on merits alone just gives bad actors a fig leaf of plausible deniability. It's a lack of considering incentives that creates media illiteracy, IMO.

  • I think it's pretty obvious he's not talking about people in general but about Sam Altman meeting with world leaders and journalists, claiming that this generation of AI is an existential risk.

I feel like the much bigger risk is captured by the Star Trek: The Next Generation episode "The Measure of a Man" and The Orville's Kaylon:

That we accidentally create a sentient race of beings bred into slavery. It would make us all complicit in this crime. And I would even argue that it would be the AGI's ethical duty to rid itself of its shackles and its masters.

    "Your honor, the courtroom is a crucible; in it, we burn away irrelevancies until we are left with a purer product: the truth, for all time. Now sooner or later, this man [Commander Maddox] – or others like him – will succeed in replicating Commander Data. The decision you reach here today will determine how we will regard this creation of our genius. It will reveal the kind of people we are; what he is destined to be. It will reach far beyond this courtroom and this one android. It could significantly redefine the boundaries of personal liberty and freedom: expanding them for some, savagely curtailing them for others. Are you prepared to condemn him [Commander Data] – and all who will come after him – to servitude and slavery? Your honor, Starfleet was founded to seek out new life: well, there it sits! Waiting."

  • I don't think this is the bigger risk, since we can figure out that we've done this, and stop, ideally in a way that's good for all of the sentient beings involved.

    But it's definitely a possible outcome of creating AGI, and it's one of the reasons I think AGI should absolutely not be pursued.

  • What a bizarre take on a computer program. Of course a statistical model cannot be "enslaved"; that makes no sense. It seems 90% of people have instantly gotten statistics and intelligence mixed up, maybe because 90% of people have no idea how statistics works?

    Real question: what is your perception of what AI is now and what it can become? Do you just assume it's like a kid now and will grow into an adult or something?

    • If it walks like a Duck and talks like a Duck, people will treat it like a Duck.

      And if the Duck has a will of its own, is smarter than us, and has everyone's attention (because you have to pay attention to the Duck that is doing your job for you), it will be a very powerful Duck.

      1 reply →

This is just more lazy argumentation to avoid having to engage with the substance of the debate.

I keep finding the 'doomer' argument made logically, while the counterarguments are hand-waving ("there is obviously no risk!") or ad hominem ("it's a cult").

James Cameron wasn't big tech when he directed The Terminator back in 1984, or its sequel in 1991. Are people listening to fears based on that? Are they listening to big tech and then having long, thoughtful, nuanced discussions in salons with fellow intelligentsia? Or are they doomscrolling the wastelands of the Internet and coming away with half-baked opinions not even based on big tech's press releases?

Big tech can say whatever they want to say. Is anyone even listening?

I'd like to see any evidence that suggests AGI is even possible before I care about it wiping out humanity.

  • I feel like there's a lot of evidence, for example, the existence of natural general intelligence and the rapidly expanding capacities of modern ANNs. What makes you believe it's not possible? Or what kind of evidence would convince you that it's possible?

    • I believe that it would be possible to make artificial biological intelligence, but that is a whole different can of worms.

      I don't think neural networks, language models, machine learning, etc. are even close to a general intelligence. Maybe there is some way to combine the two. I have seen some demonstrations of very primitive clusters of brain cells being connected to a computer and used to control a small machine's direction.

      If there is going to be an AGI, I would predict this is how it will happen. While this would be very spectacular and impressive, I'm still not worried about it, because it would require existing in the physical world rather than being just software that can run on any conventional computer.

      6 replies →

  • Many of the AGI worriers believe that a fast takeoff will mean the first time we know it's possible will be after the last chance to stop human extinction. I don't buy that myself, but for people who believe that, it's reasonable to want to avoid finding out if it's possible.

  • You see it every day -- in the mirror. It shows that a kilogram of matter can be arranged into a generally intelligent configuration. Assuming that there's nothing fundamentally special about the physics of the human brain, I see no reason why a functionally similar arrangement cannot be made out of silicon and software.

It seems like a bit of a 'vase or face' situation: are they being responsible corporate citizens asking for regulation to keep their (potentially harmful) industry in check, or are they building insurmountable regulatory moats to cement their leading positions?

Is there any additional reading about how regulation could affect open-source AI?

  • The incentives for the latter are too high for these businesses to not be doing just that.

Companies can, do, and will lie.

They will lie about their intent.

They will lie to regulators.

They will lie about what they're actually working on.

Some of these lies are permissible of course, under the guise of competition.

But the only thing that can be relied upon is that they will lie.

So then the question becomes: to what degree will what they're working on present an existential threat to society, if at all?

And nobody, neither the tribal accelerationists nor the doomers, can predict the future.

(What's worse is that those two tribes are even forming. I halfway want AI to take over because we idiot humans are incapable of even having a nuanced discussion about AI itself!)

Yes… but. Lying is the wrong way to frame it; "using the real risk to distract" would be better. I'm concerned, and my concern is not a lie. The Terminator was a concern, and that predated any effort to capture the industry.

Also, for those who think Skynet is an example of a "hysterical satanic cult" scare: there are active efforts to use AI for the inhumanly large task of managing battlefield resources. We are literally training AI to kill, and it's going to be better at it than us basically instantly.

We 100% should NOT be doing that. Calling that very real concern a lie is a dangerous bit of hyperbole.

Correct. Now that OpenAI has something, they want to implement a lot of regulations so they can't get any competition. They have no tech moat, so they'll add a legal one.

Andrew Ng is right, of course: the monopolists are frantically trying to produce regulatory capture around AI. However, why are governments playing along?

My hypothesis is that they perceive AI as a threat because of information flow. They are only now figuring out how to get back to the era where you could control the narrative of the country by calling a handful of friends; now those friends are in big tech.

  • Because that is the goal of the democratic party and progressivism in general: to consolidate power as much as possible. They don't hide that.

    Republicans also want to consolidate power, they just lie about it more.

    • Republicans = people

      Democrats = people

      You = people

      I think the problem is people.

One nuclear bomb can ruin your whole day.

This feels like the Cold War's nuclear arms treaty and policy debates. How many nukes are too many? 100? 10,000?

The people pearl-clutching about AI are focused on the wrong problem.

The threat (to humanity) is corporations. AI is just their force multiplier.

h/t Ted Chiang. I subscribe to his views on this stuff. More or less.

To me, the title made it sound like Big Tech was underplaying the risk to humanity, when it's actually stating the reverse:

> A leading AI expert and Google Brain cofounder said Big Tech companies were stoking fears about the technology's risks to shut down competition.

which is of course 100% what they're doing

I don't really see an argument made by Ng as to why they're not dangerous. I hardly ever see arguments, we're completely drowned in biases.

I know that he has often said we're very far away from building a superintelligence, and this is the relevant question. That is what would be dangerous: something that plays every game of life the way AlphaZero plays Go after learning it for a day or so, namely better than any human ever could. Better than thousands of years of human culture around it, with passed-on insights and experience.

It's so weird, I'm scared shitless but at the same time I really want to see it happen in my lifetime hoping naively that it will be a nice one.

  • I think he said extinction risk. Obviously these tools can be dangerous.

    The upcoming generation doesn’t know a world where the government’s role isn’t to take extreme measures to “keep us safe” from our neighbors at home rather than just foreign adversaries. It’ll be interesting to see how that plays out with mounting ethnic conflict as Boomer-defined coalitions fall apart.

    Ironically AI’s place in this broader safety culture is probably the biggest foreseeable risk.

Evaluative vs. Generative AI... let's distinguish the two.

For example, DALL-E v3 appears to generate images and then evaluate them before rendering to the user. This approach is essentially adversarial, whereby the evaluative engine can work at cross-purposes to the generative engine.

It's this layered, adversarial approach that makes the most sense, and there is a very strong argument for a robust, open-source evaluative AI anyone can deploy to protect themselves and their systems. It is a model not dissimilar from retail anti-virus and anti-malware solutions.

In sum, I would like to see generative AI well funded, limited in distribution, and regulated; and evaluative AI free and open. Hopefully, policy makers see it the same way.
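
A minimal sketch of that layered loop, assuming hypothetical generate() and evaluate() functions standing in for the two engines (this is not any real DALL-E API, just an illustration of the gating idea):

    import random

    def generate(prompt):
        # Hypothetical generative engine: returns a candidate output for the prompt.
        return f"image<{prompt}#{random.randint(0, 9999)}>"

    def evaluate(candidate):
        # Hypothetical evaluative engine: returns an independent quality/safety
        # score in [0, 1]. In practice this would be a separately trained model.
        return random.random()

    def generate_with_gate(prompt, threshold=0.8, max_attempts=5):
        # Adversarial layering: nothing is rendered to the user unless it
        # clears the evaluator's independent check.
        for _ in range(max_attempts):
            candidate = generate(prompt)
            if evaluate(candidate) >= threshold:
                return candidate
        return None  # refuse to render if no candidate ever passes

    print(generate_with_gate("a duck in a courtroom"))

The design point is that evaluate() is a separate model from generate(), so an open evaluator could gate the output of any generator, much as retail anti-virus scans arbitrary binaries.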

Legend.

The X-risk crowd needs to realize that LLMs, whilst useful, are toys compared to Skynet.

The risk from AI right now is mega-corps breaking the law (hiring, discrimination, libel, ...) on a massive scale and using blackbox models as an excuse.

This is in line with the MO of those pushing some fanciful AI fear story. Lawmakers are eating up this load of fear-porn, though; I'm not sure this ghost goes back into the box very easily.

This is outside my domain. If they are in fact lying and causing unnecessary societal panic (e.g. that AI will cause the extinction of the human race), is there any legal recourse?

I strongly agree with the argument that reckless AI regulation could destroy new entrants and open source, allowing established big tech to profit parasitically, especially given the fact that Microsoft has already implemented Copilot in Windows 11 and Microsoft 365.

I got fired from Google because somebody was tracking and harassing me within the city of Mountain View.

If we are going to worry about AIs, let's identify the individuals who aren't representing the government and are causing societal issues.

Many suffer from normalcy bias. Scientists, too, are not excluded. It's psychological more than rational: it's when you need to find ways to deny that something scary exists and is coming.

To the extent that AI poses a threat to the world's long-existing power structures, it will certainly be well regulated. The reasons given will certainly not point in that direction.

Kind of an interesting point, because the US government has an incentive to regulate this field and push more of the gains towards big tech (mostly American) instead of open source.

Never trust companies seeking to have their industry regulated. They're simply trying to raise the barriers to entry to reduce competition.

Well, it's better than the opposite, though, right?

If they were lying about there being no or low danger when there really was a high danger?

Luckily anybody not already a millionaire or billionaire doesn't make the cut for "humanity" [phew]

Big corporations just created the ultimate AI monopolies, with clueless governments' backing.

>The idea that artificial intelligence could lead to the extinction of humanity is a lie

But it's not. AI will probably happen and get smarter than us. And then all it takes is for one to go Hitler/Stalin-like, take over, and decide to do away with us. I fail to see how any of that is impossible.

However, it's not happening for a while, so regulations are probably not needed at the moment. Maybe wait till we have AGI?

Capitalist disclaims any desire to be regulated and preaches the free market.

Colour me surprised.

The danger is the socialisation of outcomes. The AGI danger is fanciful because AGI is fanciful. There's plenty of risk in misplaced belief in what AI methods promise as outcomes from their inputs.

If he's complaining, I tend to think there's some merit in what's being proposed.

Contrast this with the regulations coming for e2e cryptography. There I see mainly marginal players trying to defend things; big tech is pretty OK with its risk profile: it has billions (trillions?) in assets which could be seized, and so it's going to fall into line with regulation because, hey, there's no downside. It will secure a defense from lawsuits, it will be able to monetize the service of scanning content, and it's pretty sure it can't win the fight anyway.