Comment by ACCount37

1 day ago

I could claim "nuclear weapons are possible" in year 1940 without having a concrete plan on how to get there. Just "we'd need a lot of U235 and we need to set it off", with no roadmap: no "how much uranium to get", "how to actually get it", or "how to get the reaction going". Based entirely on what advanced physics knowledge I could have had back then, without having future knowledge or access to cutting edge classified research.

Would not having a complete, foolproof, step-by-step plan for obtaining a nuclear bomb somehow make me wrong then?

The so-called "plan" is simply "fund the R&D, and one of the R&D teams will eventually figure it out, and if not, then, at least some of the resources we poured into it would be reusable elsewhere". Because LLMs are already quite useful - and there's no pathway to getting or utilizing AGI that doesn't involve a lot of compute to throw at the problem.

I think you're falling victim to survivorship bias there, or something like it.

In 1940 I might have said "fusion power is possible" based entirely on what advanced physics knowledge I had. And I would have been correct, according to the laws of physics it is possible. We still don't have it though. When watching Neil Armstrong walk on the moon I might have said "moon colonies are possible", and I'd have been right there too. And yet...

  • Those two things are prevented by economics more than physics.

    For AI in particular, the economics currently favor ongoing capability R&D - and even if they didn't favor AI R&D directly (i.e. if ChatGPT and Stable Diffusion never happened), they would still favor making the computational inputs of AI R&D cheaper over time.

    Building advanced AIs is becoming easier and cheaper. It's just that the bar of "good enough" has gone off to space, and a "good enough" from 2020 is, nowadays, profoundly unimpressive.

    I'm not sure how much it takes to reach AGI. No one is. But the path there is clearly getting shorter over time. And LLMs existing, improving and doing what they do makes me assume shorter AGI timelines, and call for a vote of no confidence in human exceptionalism.

    • > But the path there is getting shorter over time, clearly.

      Why do you assume there is no hard limit we’ll hit with the current tech that prevents us from reaching AGI?

In the case of nuclear weapons, we had a theory that said they were possible. We don't have a theory that says AGI or ASI is possible. It's a big difference.