Comment by Imnimo

2 days ago

My read is not so much "if we say this is dangerously powerful, it will make people want to buy our product", but rather that there is a significant segment of AI researchers for whom x-risk, AI alignment, etc. is a deal-breaker issue. And so the Sam Altmans of the world have to treat these concerns as serious to attract and retain talent. See for example OpenAI's pledge to dedicate 20% of their compute to safety research. I don't get the sense that Sam ever intended to follow through on that, but it was very important to a segment of his employees. And it seems like trying to play both sides of this at least contributed to Ilya's departure.

On the other hand, it seems like Dario is himself a bit more of a true believer.

A number of people seem to have left over that bait and switch. The promised 20% reportedly turned out to be something closer to 2%, or even 1%.

Yeah I just don't buy that it would somehow help AI companies for everyone to be existentially afraid of their technology. It seems much more reasonable to think that they really believe the things they're saying, than that it's some kind of 4d chess.

Additionally, Dario has been really accurate with his predictions so far. For instance, in early 2025 he predicted that nearly 100% of code would be written with AI in 2026.

  • I think if you just look at what people like Sam Altman are actually doing, it's clear that they don't believe everything they're saying regarding AI safety.

    > nearly 100% of code would be written with AI in 2026

    I feel like this is kind of a meaningless metric. Or at least, it's very difficult to measure. There's a spectrum of "let AI write the code" from "don't ever even look at the code produced" to "carefully review all the output and have AI iterate on it".

    Also, it seems possible as time goes on people will _stop_ using AI to write code as much, or at least shift more to the right side of that spectrum, as we start to discover all kinds of problems caused by AI-authored code with little to no human oversight.

  • It helps with sales because they position it as “we can give you the power to end the world.” There’s plenty of people who want to wield that sort of power. It doesn’t have to be 4D chess. Maybe they are being genuine. But it is helping sales.

    • Isn't it more: "We can give you the power to eliminate the people in your organization you don't like," which expands into basically dismantling all government and business for the benefit of the guy with the largest wallet?

      It's hard to see it as anything but a button anyone with enough money can press to suddenly replace the people who annoy them (first digitally, then, likely, in the flesh).

    • They're not saying today's AI has that kind of power, and they're not saying future superintelligent AI will give you that power. They're saying it will take all power from you, and possibly end you.

      If this is some kind of twisted marketing, it's unprecedented in history. Oil companies don't brag about climate change. Tobacco companies don't talk about giving people cancer. If AI companies wanted to talk about how powerful their AI will be, they could easily brag about ending cancer, curing aging, or solving climate change. They're doing a bit of that, but also warning it might get out of control and kill us all. They're getting legislators riled up about things like limiting data centers.

      People saying this aren't just company CEOs. It's researchers who've been studying AI alignment for decades, writing peer reviewed papers and doing experiments. It's people like Geoffrey Hinton, who basically invented deep learning and quit his high-paying job at Google so he could talk freely about how dangerous this is.

      This idea that it's a marketing stunt is a giant pile of cope, because people don't want to believe that humanity could possibly be this stupid.

  • Does anyone have good estimates of what percent of real production code is currently being written by LLMs? (& presumably this is rather different for your typical SaaS backend vs. frontend vs. device drivers vs. kernel schedulers...)

    • Depends on your reference class. There's a lot of companies and teams where it's literally 100%, and I would be surprised if there were any top company where it's below 75%. I wouldn't be terribly surprised if the industry-wide percentage were a lot lower, although I also have no idea how you'd measure that.
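
One crude way to approximate the commit-level share, for what it's worth: scan a repo's history for the "Co-authored-by" trailers that some AI coding tools add to commits. A minimal sketch, with the caveat that the tool names below are assumptions, and that commits made by pasting AI output carry no trailer at all, so this is a lower bound at best:

```python
# Rough heuristic (the trailer names are assumptions, not a standard): count
# commits whose messages carry a "Co-authored-by" trailer naming a known AI
# coding tool. Commits made by pasting AI output leave no trailer, so this
# can only ever be a lower bound.
import re
import subprocess

AI_TRAILER = re.compile(
    r"^co-authored-by:.*\b(claude|copilot|chatgpt|cursor|aider)\b",
    re.IGNORECASE | re.MULTILINE,
)

def is_ai_assisted(commit_message: str) -> bool:
    """True if the message carries a recognizable AI co-author trailer."""
    return bool(AI_TRAILER.search(commit_message))

def ai_commit_share(repo_path: str = ".") -> float:
    """Fraction of commits in repo_path flagged by is_ai_assisted()."""
    # %B is the raw commit body; %x00 emits a NUL byte as a record separator.
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--format=%B%x00"],
        capture_output=True, text=True, check=True,
    ).stdout
    messages = [m for m in log.split("\x00") if m.strip()]
    return sum(map(is_ai_assisted, messages)) / len(messages) if messages else 0.0
```

Even this only measures commits, not lines, and says nothing about how heavily each AI-assisted commit was reviewed, which is the spectrum problem raised upthread.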

  • It pushes the idea that these programs are super amazing and powerful to people who are non-technical. It also lets them control the narrative of exactly how AI is dangerous to society. Rather than worrying about the energy consumption of all these new datacenters, people can be redirected toward some far-off concern about SHODAN taking over Citadel Station and turning the inhabitants into cyber-mutants or whatever.

  • > nearly 100% of code would be written with AI in 2026

    HN is the only place I've heard it seriously suggested that anything like this is happening or is likely to happen. We certainly get a lot of cheerleading here; my guess is that in the trenches the fraction is way lower.

  • > Yeah I just don't buy that it would somehow help AI companies for everyone to be existentially afraid of their technology.

    It makes more sense if one breaks that "everyone" into subgroups. A good first-pass split would be "investors" versus "everyone else."

    From their perspective: Rich Investor Alice rushing over with bags of money because of FOMO >>> Random Person Bob suffers anxiety reading the news.

    One can refine it a bit more by thinking about how it helps them gain access to politicians and to media that's always willing to spread their quotes, and even just gets CEO Carol's name out there.

  • I'd argue if they really believed AI was an existential threat, they would shut down research and encourage everyone else to halt R&D. But then again, the Cold War happened, even over the objections of physicists like Einstein & Oppenheimer.

  • When your statements directly influence millions of dollars in revenue, it's always 4D chess. I'd be very shocked if Sam Altman believes half the stuff he's peddling.

  • > It seems much more reasonable to think that they really believe the things they're saying

    It seems more reasonable to me to think that they know it's bullshit and it's just marketing. Not necessarily marketing to end users as much as investors. It's very hard to take "AGI in 3 years" seriously.

    • AGI in 3 years is simply not possible as things stand. Our current idea of "AI" as an LLM will fundamentally never be able to reach that goal without some absolutely massive changes.

To my mind, "if we don't say this is dangerously powerful, we will not be able to hire the talent we need to build this product" is the supply-side version of "if we do say this is dangerously powerful, it will make people want to buy our product".

Maybe Altman specifically is only paying lip service to this stuff, but when a company like Anthropic is like "BRO MYTHOS IS TOO DANGEROUS BRO WE CANT EVEN RELEASE IT BRO JUST TRUST US BRO", my bullshit detector is beeping too loudly to ignore. It's very obviously a publicity stunt: if it were actually that dangerous, you wouldn't be putting out a press release; you'd be keeping your mouth shut and working to make it safe.

  • They explained in detail why they felt they had to talk about it. They think there's no safe deployment strategy other than fixing all the vulnerabilities it's likely to find, and there are too many such vulnerabilities for them to fix without getting help from a substantial number of trusted partners.

  • I'm fairly certain it's both. They aren't going to make much money until they release it, so they might as well get something (marketing) out of it, and spread more awareness so that those paying attention can start preparing for what's to come. We'll see how effective it is with all their hashed patches or whatever.