
Comment by IshKebab

9 days ago

This is hilariously over-optimistic on the timescales. Like on this timeline we'll have a Mars colony in 10 years, immortality drugs in 15 and Half Life 3 in 20.

These timelines always assume that things progress as quickly as they can be conceived of, likely because these timelines come from "Ideas Guys" whose involvement typically ends at that point.

Orbital mechanics begs to differ about a Mars colony in 10 years. Drug discovery has many steps that take time; even just the trials will take 5 years, let alone actually finding the drugs.

  • It reminds me of this rather classic post: http://johnsalvatier.org/blog/2017/reality-has-a-surprising-...

    Science is not ideas: new conceptual schemes must be invented, confounding variables must be controlled, dead-ends explored. This process takes years.

    Engineering is not science: kinks must be worked out, confounding variables incorporated. This process also takes years.

    Technology is not engineering: the purely technical implementation must spread widely, beat social inertia and its competition, and establish network effects. Investors and consumers must be convinced for the long term. It must survive social and political repercussions. This process takes yet more years.

  • Didn't covid significantly reduce trial times? I thought that was such a success that they continued on the same footing.

    • The other reply has better info on covid specifically, but also consider that this refers to "immortality drugs". How long do we have to test those to conclude that they do in fact provide "immortality"?

      Now sure, they don't actually mean immortality, and we don't need to test forever to conclude they extend life, but we probably do have to test for years to get good data on whether a generic life extension drug is effective, because you're testing against illness, old age, etc, things that take literally decades to kill.

      That's not to mention that any drug like that will be met with intense skepticism and likely need to overcome far more scrutiny than normal (rather than the potentially less scrutiny that covid drugs might have managed).


    • Trial times were very brief for covid vaccines because 1) there was no shortage of volunteers, capital, and political alignment at every level 2) the virus was everywhere and so it was really, really easy to verify if it was working. Compare this with a vaccination for a very rare but deadly disease: it's really hard to know if it's working because you can't just expose your test subjects to the deadly disease!

    • No it didn’t. At least not for new small molecule drugs. It did reduce times a bit for the first vaccines because there were many volunteers available, and it did allow some antibody drug candidates to be used before full testing was complete. The only approved small molecule drug for covid is paxlovid, with both components of its formulation tested on humans for the first time many years before covid. All the rest of the small molecule drugs are still in early parts of the pipeline or have been abandoned.

I like that in the "slowdown" scenario, by 2030 we have a robot economy, a cure for aging, brain uploading, and are working on a Dyson sphere.

  • The story is very clearly modeled to follow the exponential curve they show.

    Like they drew the curve out into the shape they wanted, put some milestones on it, and then went to work imagining what would happen if it continued, with a heavy dose of X-risk doomerism to keep it spicy.

    It conveniently ignores all of the physical constraints around things like manufacturing GPUs and scaling training networks.

    • https://ai-2027.com/research/compute-forecast

      In section 4 they discuss their projections specifically for model size, the state of inference chips in 2027, etc. It's largely in line with expectations in terms of capacity, and they only project using 10k of their latest-gen wafer-scale inference chips by late 2027, roughly 1M H100 equivalents. That doesn't seem at all impossible. They also discuss earlier their expectations for growth in chip efficiency and in spending, which is only ~10x over the next 2.5 years, not unreasonable in absolute terms at all given the many tens of billions of dollars flooding in.

      So on the "can we train the AI" front, they mostly are just projecting 2.5 years of the growth in scale we've been seeing.

      The reason they predict a fairly hard takeoff is that they expect distillation, some algorithmic improvements, and iterated rounds of creating synthetic data, training, and making more synthetic data to enable significant improvements in the efficiency of the underlying models (still largely in line with developments over the last 2 years). In particular, they expect a 10T parameter model in early 2027 to be basically human-equivalent, and they expect it to "think" at about the rate humans do, 10 words/second. That would require ~300 teraflops of compute to think at that rate, or ~0.1 H100e. That means one of their inference chips could potentially run ~1000 copies (or fewer copies faster, etc.) and thus they have the capacity for millions of human-equivalent researchers (or 100k 40x-speed researchers) in early 2027.

      They further expect distillation of such models (and more expensive models overseeing much smaller but still good ones) to squeeze the necessary size, and thus the effective compute required, down to just 2T parameters and ~60 teraflops each, or 5000 human-equivalents per inference chip, making for up to 50M human-equivalents by late 2027.

      This is probably the biggest open question and the place where the most criticism seems to me to be warranted. Their hardware timelines are pretty reasonable, but one could easily expect needing 10-100x more compute or even perhaps 1000x than they describe to achieve Nobel-winner AGI or superintelligence.
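      The arithmetic above is easy to sanity-check. A minimal sketch: the 10T/2T parameter counts, 10 words/second, 10k chips, and 100 H100e per chip come from the comment, but the conversion constants (~2 FLOPs per parameter per generated token, ~1.5 tokens per word, an "H100 equivalent" of roughly 3e15 FLOP/s, about FP8 peak with no utilization discount) are my assumptions chosen to illustrate how the figures hang together, not numbers from the forecast itself:

      ```python
      # Back-of-envelope check of the inference arithmetic in the ai-2027 forecast.
      # Conversion constants below are illustrative assumptions, not sourced figures.

      PARAMS_EARLY = 10e12       # 10T-parameter model, early 2027
      PARAMS_LATE = 2e12         # 2T-parameter distilled model, late 2027
      WORDS_PER_SEC = 10         # assumed human-like "thinking" speed
      TOKENS_PER_WORD = 1.5      # assumption: rough words-to-tokens ratio
      FLOPS_PER_PARAM_TOKEN = 2  # standard transformer inference estimate
      H100_FLOPS = 3e15          # assumption: ~FP8 peak for one "H100 equivalent"
      CHIP_H100E = 100           # 10k chips ~= 1M H100e -> 100 H100e per chip
      NUM_CHIPS = 10_000

      def flops_per_copy(params):
          """Sustained FLOP/s for one model copy generating at human speed."""
          return params * FLOPS_PER_PARAM_TOKEN * WORDS_PER_SEC * TOKENS_PER_WORD

      early = flops_per_copy(PARAMS_EARLY)  # 3e14 FLOP/s = 300 teraflops
      late = flops_per_copy(PARAMS_LATE)    # 6e13 FLOP/s = 60 teraflops

      copies_per_chip_early = CHIP_H100E * H100_FLOPS / early  # 1000 copies/chip
      copies_per_chip_late = CHIP_H100E * H100_FLOPS / late    # 5000 copies/chip

      print(f"early 2027: {early/1e12:.0f} TFLOP/s per copy, "
            f"{copies_per_chip_early * NUM_CHIPS / 1e6:.0f}M human-equivalents")
      print(f"late 2027:  {late/1e12:.0f} TFLOP/s per copy, "
            f"{copies_per_chip_late * NUM_CHIPS / 1e6:.0f}M human-equivalents")
      ```

      With those assumptions the figures reproduce the comment's numbers (300 and 60 teraflops per copy, ~1000 and ~5000 copies per chip, tens of millions of human-equivalents fleet-wide), which shows how sensitive the conclusion is to the assumed chip throughput and FLOPs-per-token constants.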


  • The true absurdity of this timeline is that everyone behaves like a perfectly rational player, which is absurd when you look at the current presidential administration, particularly in light of recent events.

IMO they haven't even predicted mid-2025.

  > Coding AIs increasingly look like autonomous agents rather than mere assistants: taking instructions via Slack or Teams and making substantial code changes on their own, sometimes saving hours or even days.

Yeah, we are so not there yet.

  • That is literally the pitch line for Devin. I recently spoke to the CTO of a small healthtech startup and he was very pro-Devin for small fixes and PRs, and thought he was getting his money's worth. Claude Code is a little clunkier but gives better results, and it wouldn't take much effort to hook it up to a Slack interface.

    • Yeah, I get that there are startups trying to do it. But I work with Cursor quite a bit… there is no way I would trust an LLM code agent to take high-level direction and issue a PR on anything but the most trivial bug fix.


Can you share your detailed projection of what you expect the future to look like so I can compare?

  • Sure

    5 years: AI coding assistants are a lot better than they are now, but still can't actually replace junior engineers (at least ones that aren't shit). AI fraud is rampant, with faked audio commonplace. Some companies try replacing call centres with AI, but it doesn't really work and everyone hates it.

    Tesla's robotaxi won't be available, but Waymo will be in most major US cities.

    10 years: AI assistants are now useful enough that you can use them in the ways that Apple and Google really wanted you to use Siri/Google Assistant 5 years ago. "What have I got scheduled for today?" will give useful results, and you'll be able to have a natural conversation and take actions that you trust ("cancel my 10am meeting; tell them I'm sick").

    AI coding assistants are now very good and everyone will use them. Junior devs will still exist. Vibe coding will actually work.

    Most AI Startups will have gone bust, leaving only a few players.

    Art-based AI will be very popular and artists will use it all the time. It will be part of their normal workflow.

    Waymo will become available in Europe.

    Some receptionists and PAs have been replaced by AI.

    15 years: AI researchers finally discover how to do on-line learning.

    Humanoid robots are robust and smart enough to survive in the real world and start to be deployed in controlled environments (e.g. factories) doing simple tasks.

    Driverless cars are "normal" but not owned by individuals and driverful cars are still way more common.

    Small, light computers become fast enough that autonomous slaughterbots become reality (i.e. drones that can do their own navigation, face recognition, etc.)

    20 years: Valve confirms no Half Life 3.

    • you should add a bit where AI is pushed really hard in places where the subjects have low political power, like management of entry level workers, care homes or education and super bad stuff happens.

      Also we need a big legal event to happen where (for example) autonomous driving is part of a really big accident where lots of people die or someone brings a successful court case that an AI mortgage underwriter is discriminating based on race or caste. It won't matter if AI is actually genuinely responsible for this or not, what will matter is the push-back and the news cycle.

      Maybe more events where people start successfully gaming deployed AI at scale in order to get mortgages they shouldn't or get A-grades when they shouldn't.

    • > Small light computers become fast enough that autonomous slaughterbots become reality

      This is the real scary bit. I'm not convinced that AI will ever be good enough to think independently and create novel things without some serious human supervision, but none of that matters when applied to machines that are destructive by design and already have expectations of collateral damage. Slaughterbots are going to be the new WMDs — and corporations are salivating at the prospect of being first movers. https://www.youtube.com/watch?v=UiiqiaUBAL8


    • > Some companies try replacing call centres with AI, but it doesn't really work and everyone hates it.

      I think this is much closer than you think, because there's a good percentage of call centers that are basically just humans with no power cosplaying as people who can help.

      My fiber connection went to shit recently. I messaged the company, and got a human who told me they were going to reset the connection from their side, if I rebooted my router. 30m later with no progress, I got a human who told me that they'd reset my ports, which I was skeptical about, but put down to a language issue, and again reset my router. 30m later, the human gave me an even more outlandish technical explanation of what they'd do, at which point I stumbled across the magical term "complaint" ... an engineer phoned me 15m later, said there was something genuinely wrong with the physical connection, and they had a human show up a few hours later and fix it.

      No part of the first-layer support experience there would have been degraded if replaced by AI, but the company would have saved some cash.

    • It’s soothing to read a realistic scenario amongst all of the ludicrous hype on here.

    • So in the past 5 years we went from not having ChatGPT at all (it was released in 2022, with non-"chat" models before that) to where we are now, but in the next 5, now that the entire tech world is consumed with making better AI models, we'll just get slightly better AI coding assistants?

      Reminds me of that comment about the first iPod being lame and having less space than a Nomad. One of the worst takes I've seen on here recently.

  • Slightly slower web frameworks by 2026. By 2030, a lot slower.

  • With each passing year, AI doom grifters will learn more and more web design gimmicks.

We currently don't see any ceiling; if this continues at this speed, we will have cheaper, faster, and better models every quarter.

There was never anything that progressed so fast.

It would be very ignorant not to keep a very close eye on it.

There is still a chance that it will happen a lot slower, and that the progression will be slow enough that we adjust in time.

But besides AI, we are also now getting robots. The impact for a lot of people will be very real.

No, sooner lol. We'll have aging cures and brain uploading by late 2028. Dyson Swarms will be "emerging tech".