Comment by atleastoptimal

20 hours ago

Whoever gets AGI first owns the future, though; any GDP put into manufacturing not essential to that goal is a geopolitical opportunity cost

It really shows how desperate some people are, sacrificing everything, present and future, in the quest for a digital god that might not even exist.

Why is that the case?

If a company gets to AGI a month later, why does that matter so much?

We’re not talking superintelligence here, just human-level intelligence.

OpenAI was first with ChatGPT, yet other companies are still in the game.

  • My argument is based on

    1. The first company to get AGI will likely have a multitude of high-leverage problems it would immediately put AGI to work on

    2. One of those problems is simply improving itself. Another is securing that company's lead over its competitors (by basically helping every employee at that company do better at their job)

    3. The company that reaches AGI with a language-style model will likely do so through a mix of architectural tricks that can be applied to general-purpose models in other domains: chip design, tactical intelligence, persuasion, beating the stock market, and so on

    • The AGI argument assumes there is a 0 -> 1 moment where the model suddenly becomes "AGI" and starts performing miraculous tasks, accompanied by recursive self-improvement. So far, our experience shows that we are getting incremental improvements over time from different companies.

      These things are being commoditized, and we are still at the start of the curve when it comes to hardware, data centers, etc.

      Given this premise, arguing for an all-in civilization- or country-level bet on AGI is either foolish or a sign that you are selling "AGI".

      1 reply →

    • All of that stuff takes time and resources. Self-improvement may not be easy, e.g. if they end up in a local maximum that doesn't extend further, and it probably won't be cheap or fast: if it's anything like frontier LLMs, it could take months of computation on enormous numbers of cutting-edge devices, costing hundreds of millions or billions of dollars, and it may not even be possible without inventing and mass-manufacturing better hardware (a rough sketch follows below). Another company achieving a slightly different form of AGI within a few years will probably be at least competitive, and if they have more resources or a better design they could overtake.

      1 reply →
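
      To make "not cheap or fast" concrete, here's a rough back-of-envelope sketch in Python. Every number in it is an assumption chosen for illustration, not a figure from any lab:

        # Back-of-envelope cost of one frontier-scale training run.
        # All inputs below are assumptions, not published figures.
        flops_needed = 1e26          # assumed total training FLOPs for one run
        gpu_flops = 1e15             # assumed peak FLOP/s per accelerator (~1 PFLOP/s)
        utilization = 0.4            # assumed fraction of peak actually sustained
        n_gpus = 100_000             # assumed cluster size
        dollars_per_gpu_hour = 2.50  # assumed all-in $/GPU-hour

        seconds = flops_needed / (gpu_flops * utilization * n_gpus)
        cost = n_gpus * (seconds / 3600) * dollars_per_gpu_hour
        print(f"~{seconds / 86_400:.0f} days on {n_gpus:,} accelerators, ~${cost / 1e6:.0f}M")
        # -> ~29 days and ~$174M under these assumptions; each extra order of
        #    magnitude of compute multiplies both time and cost tenfold.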

    • Unless AGI includes a speed requirement, AGI is not sufficient to win the market. Take any genius in human history: the impact they had was hugely limited by their lifespan, they didn't solve every problem, and each discovery took them decades. The first AGIs will be the same, hyper-slow for a while, giving competitors a chance to copy and stay in the race (a toy calculation below makes this concrete).

      1 reply →
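
      As a toy calculation of the speed point (both numbers are hypothetical assumptions, not measurements):

        # Toy throughput estimate; both inputs are assumptions for illustration.
        tokens_per_sec = 50       # assumed serial generation speed of an early AGI
        tokens_per_problem = 5e8  # assumed "thinking" tokens per research-grade problem

        hours = tokens_per_problem / tokens_per_sec / 3600
        print(f"~{hours:,.0f} hours (~{hours / 24:,.0f} days) per problem, per instance")
        # -> ~2,778 hours (~116 days) per problem: slow enough, per instance,
        #    for rivals to catch up unless many instances run in parallel.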

    • These companies already have access to the best meat-brains in the world, and what tasks do those brains work on? Mostly advertising?

      1 reply →

  • The argument is something like: an AGI, or its owner, wouldn't want other AGIs to exist, so it would destroy the capabilities of rival AGIs before they could evolve (by things like hacking, manipulation, etc.).

    • Oh, so the goal is to create an insane and predatory piece of software that is out of the control of its creators? Sounds wonderful.

Even assuming that's the case, everyone's acting like throwing more GPUs at the problem is somehow gonna get them to AGI

  • Far more is being done than simply throwing more GPUs at the problem.

    GPT-5 required less compute to train than GPT-4.5. Data, RL, architectural improvements, etc. all contribute to the rate of improvement we're seeing now.

I have seen no credible explanation of how current or proposed technology could possibly achieve AGI.

If you want to hand-wave that away by stating that any company with technology capable of achieving AGI would guard it as the most valuable trade secret in history, then fine. Even if we assume that AGI-capable technology exists in secret somewhere, I've seen no credible explanation from any organization on how they plan to control an AGI and reliably convince it to produce useful work (rather than the AGI just turning into a real-life SHODAN). An uncontrollable AGI would be, at best, functionally useless.

AGI is, and for the foreseeable future will continue to be, science fiction.

  • You seem to be making two separate claims: first, that it would be difficult to achieve AGI with current or proposed technology; second, that it would be difficult to control an AGI, making it too risky to use or deploy.

    The second is a significant open problem (the alignment problem), and I'd wager it is a very real risk which companies need to take more seriously. However, whether it would be feasible to control or direct an AGI towards reliably safe, useful outputs has no bearing on whether reaching AGI is possible via current methods. Current scaling gains and the rate of improvement (see METR's measurements of the time horizon of tasks an AI model can complete reliably on its own) make it fairly plausible, at least more plausible than the flat denial that AGI is possible, which I see around here backed by very little evidence.
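
    As a minimal sketch of why that trend reads as plausible evidence to some, here's a pure-exponential extrapolation in Python. The starting horizon and doubling time are assumptions in the spirit of METR's published trend, not their exact figures:

      # Extrapolate a METR-style "task horizon" under pure exponential growth.
      # Both inputs are assumptions, not METR's exact published numbers.
      horizon_hours = 2.0    # assumed current 50%-success horizon (hours of human work)
      doubling_months = 7.0  # assumed doubling time

      def horizon_after(months: float) -> float:
          """Horizon in hours after `months` of continued doubling."""
          return horizon_hours * 2 ** (months / doubling_months)

      for years in (1, 3, 5, 10):
          h = horizon_after(12 * years)
          print(f"{years:>2} yr: ~{h:,.0f} h (~{h / 40:,.1f} 40-hour work-weeks)")
      # Whether the doubling actually continues that long is, of course, the whole debate.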

We have an AI promoter here. AGI isn't the future of anything right now. It could be, but so could a lot of other things, like vaccine research (we're seeing promising developments on HIV and cancer). Try telling someone in the 1980s-1990s that those people would own the future. In hindsight it might look like an obvious outcome, but it wasn't on the horizon for the people in the field at the time (unless your family owned the company).

  • Even if you could cure cancer or HIV with a vaccine, it would have a relatively negligible impact compared to AGI.

    There are far more signals that AGI is going to be achieved by OpenAI, Anthropic, DeepMind or X.ai within the next 5-10 years than there were for any other hyped breakthrough of the past 100 years that ultimately never came to fruition. That doesn't mean it's guaranteed to happen, but given the multitude of trends that show no signs of stopping, it seems naive in Anno Domini 2025 to discount it as a likely possibility.

    • > There are far more signals that AGI is going to be achieved by OpenAI, Anthropic, DeepMind or X.ai within the next 5-10 years

      So AGI before autonomous Teslas? “Just two more years guys, I promise.” How do people keep falling for this, lol

      1 reply →

    • It's just as possible that they'll need to invest more and more for negligible improvements in model performance. These companies are burning through money at an astonishing rate.

      And as the internet deteriorates due to AI slop, finding good training material will become increasingly difficult. It's already happening that incorrect AI-generated information is being cited as a source for new AI answers.

      1 reply →

That's a nice assertion, but do you have any facts?

  • It is a reasonable, common-sense claim that if a company possesses a model that can perform all cognitive tasks as well as a human or better, that model would be more powerful than any other available technology, barring significant limitations on its operation or deployment.

There is no proof that "AGI" is a real thing. Any GDP put towards that goal is a huge gamble, and the US is all-in, with potentially ruinous results.