Comment by digitcatphd

7 months ago

As of now yes. But we are still in day 0.1 of GenAI. Do you think this will be the case when o3 models are 10x better and 100x cheaper? There will be a turning point but it’s not happened yet.

Yet we're what? 5 years into "AI will replace programmers in 6 months"?

10 years into "we'll have self driving cars next year"

We're 10 years into "it's just completely obvious that within 5 years deep learning is going to replace radiologists"

Moravec's paradox strikes again and again. But this time it's different and it's completely obvious now, right?

  • I basically agree with you, and I think the thing missing from a bunch of the disagreeing responses is that it seems fairly apparent now that AI has largely hit a brick wall in terms of the benefits of scaling. That is, most folks were pretty astounded by the gains you could get from just stuffing more training data into these models, but like someone who argues a 15-year-old will be 50 feet tall based on the last 5 years' growth rate, people who are still arguing that past growth rates will continue apace don't seem honest (or aware) to me.

    I'm not at all saying that it's impossible some improvement will be discovered in the future that allows AI progress to continue at a breakneck speed, but I am saying that the "progress will only accelerate" conclusion, based primarily on the progress since 2017 or so, is faulty reasoning.

    • > it seems fairly apparent now that AI has largely hit a brick wall in terms of the benefits of scaling
      
      What's annoying is plenty of us (researchers) predicted this and got laughed at. Now that it's happening, it's just quiet.

      I don't know about the rest, but I spoke up because I didn't want to hit a brick wall; I wanted to keep going! I still want to keep going! But if accurate predictions (with good explanations) aren't a reason to shift resource allocation, then we just keep making the same mistake over and over. We let the con men come in, along with people who get so excited by success that they become blind to the pitfalls.

      And hey, I'm not saying give me money. This account is (mostly) anonymous. There are plenty of people who made accurate predictions and tried working in other directions but never got funding to test how their methods scale up. We say there are no alternatives, but nothing else has been given a tenth of the effort. Apples and oranges...

    • I don't see any wall. Gemini 2.5 and o3/o4 are incredible improvements. GenAI is miles ahead of where it was a year ago, which was miles ahead of where it was two years ago.

    • I basically agree with you also, but I have a somewhat contrarian view of the scaling -> brick wall story. I feel like the application of powerful local models is stagnating, perhaps because Apple has not done a good job so far with Apple Intelligence.

      A year ago I expected a golden age of local-model intelligence integrated into most software tools, with more powerful commercial tools like Google Jules used perhaps two or three times a week for specific difficult tasks.

      That said, my view of the future was probably wrong; I am just saying what I expected.

  • > Yet we're what? 5 years into "AI will replace programmers in 6 months"?

    Realistically, we're 2.5 years into it at most.

    • No, the hype cycle started around 2019, slowly at first. The technology this is built on is more like 20 years old, so no, it is not 2.5 years at most.

  • Four years into people mocking "we'll have self driving cars next year" while the cars drive around the streets of SF daily.

    • They are self-driving the same way a tram or subway can be self-driving: they travel a tightly bounded, designated area. They're not competing with human drivers. Still a marvel of engineering, just quite expensive compared with other forms of public transport. It doesn't compete in the same space and likely never will.

    • They're driving, but not well, in my (limited) interactions with them. I had a Waymo run me completely out of my lane a couple of months ago when it interpreted two left-turn lanes as one extra-wide lane (or, worse, changed lanes during the turn without a blinker or checking its sensors, though that seems unlikely).

    • Yes, but ...

      The argument that self-driving cars should be allowed on public roads as long as they are statistically as safe as human drivers (on average) seems valid, but of course none of these cars have AGI... they perform well in the anticipated conditions they were trained on in simulation (as long as they have the necessary sensors, e.g. Waymo's lidar, to read the environment reliably), but will not perform well in emergency or unanticipated conditions they were not trained on. Even outside of emergencies, Waymos still sometimes need to "phone home" for remote assistance in knowing what to do.

      So, yes, they are out there, perhaps as safe on average as a human (I'd be interested to see a breakdown of the stats), but I'd not personally be comfortable riding in one since I'm not senile, drunk, a teenager, a hothead, or distracted (using a phone while driving) - not part of the class that drags the human safety stats down. I'd also not trust a Tesla, where penny-pinching, or just arrogant stupidity, has resulted in a sensor-poor design prone to failure modes like running into parked trucks.

    • I'm quoting Elon.

      I don't care about SF. I care about what I can buy as a typical American, not as an enthusiast in one of the most technologically advanced cities on the planet.

  • As far as I've seen, we already appear to have self-driving vehicles; the main barriers are legal and regulatory concerns rather than the tech. If a company wanted to put a car on the road that beetles around by itself, there aren't any crazy technical challenges to doing that. The issue is that even if it were safer than a human driver, the company would have a lot of liability problems.

    • This is just not true. Waymo, Mobileye, Tesla, and the Chinese companies are not bottlenecked by regulations but by high failure rates and/or economics.

    • They are only self-driving in the very controlled environments of a few very well-mapped cities with good roads, in good weather.

      And it took, what, two decades to get there? So no, we don't have self-driving, not even close. Those examples look more like hard-coded solutions for custom test cases.

    • What? If that stuff works, no liability will ever come due. How can you state that it works and claim liability problems at the same time?

    • > the main barriers are legal and regulatory concerns rather than the tech

      They have failed in SF, Phoenix, and other cities that rolled out the red carpet for them.

  • 100% this. I always argue that groundbreaking technologies are clearly groundbreaking from the start. It's a bit like a film: if you have to struggle to get into it in the first few minutes, you may as well spare yourself the rest.

We’re already heading toward the sigmoid plateau. The GPT-3 to GPT-4 shift was massive. Nothing since has touched that. I could easily go back to the models I was using 1-2 years ago with little impact on my work.

I don’t use RAG, and have no doubt the infrastructure for integrating AI into a large codebase has improved. But the base model powering the whole operation seems stuck.

  • > I don’t use RAG, and have no doubt the infrastructure for integrating AI into a large codebase has improved

    It really hasn't.

    The problem is that a GenAI system needs to understand not only the large codebase but also the latest stable version of every transitive dependency it pulls in, which is typically on the order of hundreds or thousands (the sketch below shows how quickly the graph fans out).

    Having it build a component with 10-year-old, deprecated, CVE-riddled libraries is of limited use, especially since libraries tend to be upgraded in interconnected waves, so that component will likely not even work anyway.

    I was assured that MCP was going to solve all of this but nope.
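
    To make the scale concrete, here is a minimal sketch, assuming a Python environment: it walks the requirements graph of one installed package using only the standard library. The package name at the end is purely illustrative.

    ```python
    # Rough illustration: count the transitive dependencies of an installed
    # Python package by recursively walking its requirement strings.
    from importlib.metadata import requires, PackageNotFoundError
    import re

    def transitive_deps(package, seen=None):
        """Collect the (lowercased) names of a package's transitive dependencies."""
        seen = set() if seen is None else seen
        try:
            reqs = requires(package) or []  # e.g. ["idna<4,>=2.5", "pysocks ; extra == 'socks'"]
        except PackageNotFoundError:
            return seen                     # optional dependency not installed; stop here
        for req in reqs:
            # Strip version specifiers, extras, and environment markers.
            name = re.split(r"[ ;\[<>=!~(]", req, maxsplit=1)[0].lower()
            if name and name not in seen:
                seen.add(name)
                transitive_deps(name, seen)
        return seen

    # Even a deliberately small library pulls in a handful of packages; a typical
    # web application reaches hundreds, each with its own release cadence and CVEs.
    print(len(transitive_deps("requests")))
    ```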

  • > I could easily go back to the models I was using 1-2 years ago with little impact on my work.

    I can't. GPT-4 was useless for me for software development. Claude 4 is not.

I use LLMs daily and love them, but at the current rate of progress it’s just not really something worth worrying about. Those who are hysterical about AI seem to think LLMs are getting exponentially better, when in fact diminishing returns are hitting hard. Could some new innovation change that? It’s possible, but it’s not inevitable, or at least not necessarily imminent.

  • I agree that the core models are only going to see slow progression from here on out, until something revolutionary happens... which might be a year from now, or maybe twenty years. Who knows.

    But we are going to see a huge explosion in how those models are integrated into the rest of the tech ecosystem. Things that a current model could do right now, if only your car/watch/videogame/heart monitor/stuffed animal had a good working interface into an AI.

    Not necessarily looking forward to that, but that's where the growth will come.

How are we in day 0.1 of GenAI? It's been in development for nearly a decade now.

And each successive model release has done nothing to fundamentally change the use cases the technology can be applied to, i.e. those that are tolerant of a large percentage of incoherent mistakes. Which isn't all that many.

So you can keep your 10x-better and 100x-cheaper models, because they are of limited usefulness, let alone a turning point for anything.

  • A decade?

    The explosion of funding, awareness, etc. only happened after the GPT-3 launch.

    • Funding is behind the curve. Social networks existed in 2003, and Facebook became a billion-dollar company a decade later. AI horror fantasies from the '90s still haven't come true. There is no god, there is no Skynet.

    • AlphaGo beating the top human player was in 2016. To my memory, that was one of the first public breakthroughs of the new era of machine learning.

      Around 2010, when I was at university, a friend did their undergraduate thesis on neural networks. Among our cohort it was seen as a weird choice and a bit of a dead end from the last AI winter.

How does that work out if they only get 10x better over 10 years? Everything else will have already moved on, and the actual technology shift will come from elsewhere.

Basically, what if GenAI is the Minitel and what we want is the internet?

10x better by what metric? Progress on LLMs has been amazing but already appears to be slowing down.

  • All these folks are once again seeing the first 1/4 of a sigmoid curve and extrapolating to infinity (the sketch below makes this concrete).

    • No doubt from me that it’s a sigmoid, but how high is the plateau? That’s also hard to know from early in the process, but it would be surprising if there’s not a fair bit of progress left to go.

      Human brains seem like an existence proof of what’s possible, but it would be surprising if humans also represented the outer physical limit of what’s technologically possible without the constraints of biology (hip size, energy budget, etc.).
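
      For intuition on the sigmoid point above, here is a minimal Python sketch with made-up curve parameters: it fits a pure exponential to only the early stretch of a logistic curve, matches it almost perfectly, and then wildly over-predicts the later values.

      ```python
      import numpy as np

      # A logistic (sigmoid) curve with an arbitrary capacity, rate, and midpoint.
      L_cap, k, t0 = 100.0, 1.0, 10.0
      t = np.linspace(0, 5, 50)                # observe only the early stretch
      y = L_cap / (1 + np.exp(-k * (t - t0)))

      # Fit a pure exponential a*exp(b*t) to that early data (log-linear fit).
      b, log_a = np.polyfit(t, np.log(y), 1)
      y_exp = np.exp(log_a + b * t)

      # The exponential matches the observed stretch to well under 1% error...
      print(f"max relative fit error: {np.max(np.abs(y_exp - y) / y):.4f}")

      # ...yet extrapolating it predicts unbounded growth, while the sigmoid
      # quietly saturates at its capacity of 100.
      print(f"exponential forecast at t=20: {np.exp(log_a + b * 20):,.0f}")
      print(f"actual sigmoid value at t=20: {L_cap / (1 + np.exp(-k * (20 - t0))):.1f}")
      ```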

  • With autonomous vehicles, the narrative of imperceptibly slow incremental change, of chasing 9's, is still the zeitgeist, despite an actual 10x improvement in homicidality compared to humans already existing.

    There is a lag in how humans are reacting to AI, which is probably a reflexive aspect of human nature. So many strategies are being employed to minimize progress in a technology which 3 years ago did not exist and now represents a frontier of countless individual disciplines.

    • This is my favorite thing to point out from the day we started talking about autonomous vehicles on tech sites.

      If you took a Tesla or a Waymo and dropped it into a tier-2 city in India, it would stop moving.

      Driving data is cultural data, not data about pure physics.

      You will never get to full self-driving, even with more processing power, because the underlying assumptions are incorrect. Doing more of the same thing will not achieve the stated goal of full self-driving.

      You would need something like networked driving, or government-supported networks of driving information, to deal with the cultural factor.

      Same with GenAI - the tooling factor will not magically solve the people, process, power and economic factors.

    • > a technology which 3 years ago did not exist

      Decades of machine learning research would like to have a word.

Frankly, we don't know. The "turning point" that seemed so close for many technologies never came for some of them. Think of 3D printing, which was supposed to take over manufacturing. Or self-driving, which has been "just around the corner" for a decade now, and is still probably a decade away. Only time will tell whether GenAI/LLMs are color TV or 3D TV.

  • > Think of 3D printing, which was supposed to take over manufacturing.

    3D printing is making huge progress in heavy industries. It’s not sexy and does not make headlines but it absolutely is happening. It won’t replace traditional manufacturing at huge scales (either large pieces or very high throughput). But it’s bringing costs way down for fiddly parts or replacements. It is also affecting designs, which can be made simpler by using complex pieces that cannot be produced otherwise. It is not taking over, because it is not a silver bullet, but it is now indispensable in several industries.

    • You're misunderstanding the parent's complaint, and frankly the complaints about AI. Certainly 3D printing is powerful and has changed things. But you forgot that 30 years ago people were saying there would be one in every house, because a printer can print a printer, and that this would revolutionize everything because you could just print anything at home.

      The same thing is happening with AI. You'd be blind or lying if you said it hasn't advanced a lot. People aren't denying that. But people are fed up with constantly being promised the moon and getting a cheap plastic replica instead.

      The tech is rapidly advancing and doing good. It just can't keep up with the bubble of hype. That's the problem: the hype, not the tech.

      Frankly, the hype harms the tech too. We can't solve problems with the tech if we're just throwing most of our money at vaporware. I'm upset with the hype BECAUSE I like the tech.

      So don't confuse the two; make sure you understand what you're arguing against. It sounds like we should be on the same team, not arguing against one another. Fighting each other just helps the people selling vaporware.

  • > Think of 3D printing, which was supposed to take over manufacturing.

    This was never the case, and that is obvious to anyone who has ever been to a factory doing mass-produced plastics.

    > Or self-driving, which has been "just around the corner" for a decade now.

    But it really is around the corner; all that remains is to accept it. That is, to start building and modifying road infrastructure and changing traffic rules to enable the effective integration of self-driving cars into road traffic.

> 5 years into "AI will replace programmers in 6 months"?

Programmers who don't use AI will get replaced by those who do (not just by mandate, but by performance).

> 10 years into "we'll have self driving cars next year"

They're here now. Waymo does 250K paid rides/week.

There's a lot of "when" people are betting on, and not a lot of action to back it. If "when" is 20 years, then I've still got plenty of career ahead of me before I need to worry about it.

> Do you think this will be the case when o3 models are 10x better and 100x cheaper?

Why don't you bring it up then?

> There will be a turning point but it’s not happened yet.

Do you know something the rest of us don't?