Comment by godelski
7 months ago
Yet we're what? 5 years into "AI will replace programmers in 6 months"?
10 years into "we'll have self driving cars next year"
We're 10 years into "it's just completely obvious that within 5 years deep learning is going to replace radiologists"
Moravec's paradox strikes again and again. But this time it's different and it's completely obvious now, right?
I basically agree with you, and I think the thing that is missing from a bunch of responses that disagree is that it seems fairly apparent now that AI has largely hit a brick wall in terms of the benefits of scaling. That is, most folks were pretty astounded by the gains you could get from just stuffing more training data into these models, but like someone who argues a 15 year old will be 50 feet tall based on the last 5 years' growth rate, people who are still arguing that past growth rates will continue apace don't seem to be honest (or aware) to me.
I'm not at all saying that it's impossible some improvement will be discovered in the future that allows AI progress to continue at a breakneck speed, but I am saying that the "progress will only accelerate" conclusion, based primarily on the progress since 2017 or so, is faulty reasoning.
What's annoying is plenty of us (researchers) predicted this and got laughed at. Now that it's happening, it's just quiet.
I don't know about the rest, but I spoke up because I didn't want to hit a brick wall, I want to keep going! I still want to keep going! But if accurate predictions (with good explanations) aren't a reason to shift resource allocation then we just keep making the same mistake over and over. We let in the conmen, and the people who get so excited by success that they become blind to the pitfalls.
And hey, I'm not saying give me money. This account is (mostly) anonymous. There's plenty of people that made accurate predictions and tried working in other directions but never got funding to test how methods scale up. We say there's no alternatives but there's been nothing else that's been given a tenth of the effort. Apples and oranges...
> What's annoying is plenty of us (researchers) predicted this and got laughed at. Now that it's happening, it's just quiet.
You need to model the business world and management more like a flock of sheep being herded by forces that mostly don't have to do with what actually is going to happen in future. It makes a lot more sense.
> What's annoying is plenty of us (researchers) predicted this and got laughed at. Now that it's happening, it's just quiet.
Those people always do that. Shouting about cryptocurrencies and NFTs from the rooftops 3-4 years ago, now completely gone.
I suspect they're the same people, basically get rich quick schemers.
Sure, you were right.
But if you had been wrong and we would now have had superintelligence, the upside for its owners would presumably be great.
... Or at least that's the hypothesis. As a matter of fact intelligence is only somewhat useful in the real world :-)
I don't see any wall. Gemini 2.5 and o3/o4 are incredible improvements. Gen AI is miles ahead of where it was a year ago, which was miles ahead of where it was two years ago.
The actual LLM part isn't much better than a year ago. What's better is that they've added additional logic and made it possible to combine traditional, expert-system-style AI with the power of the internet to augment LLMs so that they're actually useful.
This is an improvement for sure, but LLMs themselves are definitely hitting a wall. It was predicted that scaling alone would allow them to reach AGI level.
The improvements have less to do with scaling than adding new techniques like better fine tuning and reinforcement learning. The infinite scaling we were promised, that only required more content and more compute to reach god tier has indeed hit a wall.
I basically agree with you also, but I have a somewhat contrarian view of scaling -> brick wall. I feel like applications of powerful local models is stagnating, perhaps because Apple has not done a good job so far with Apple Intelligence.
A year ago I expected a golden age of local model intelligence integrated into most software tools, and more powerful commercial tools like Google Jules to be something used perhaps 2 or 3 times a week for specific difficult tasks.
That said, my view of the future is probably now wrong, I am just saying what I expected.
> Yet we're what? 5 years into "AI will replace programmers in 6 months"?
Realistically, we're 2.5 years into it at most.
No, the hype cycle started around 2019, slowly at first. The technology this is built with is more like 20 years old, so no, it's not really 2.5 years at most.
If you can quote anyone well-known saying we'd be replacing programmers in 6 months back in 2019, I'd be interested to read it.
Neural networks go back a lot further than 20 years ago. It was considered a research dead end for a long time though.
we're 2.5 years into the current hype trend, no way was this mainstream until at least 2022
Four years into people mocking "we'll have self driving cars next year" while they are on the street daily driving around SF.
They are self driving the same way a tram or subway can be self driving. They operate within a tightly bounded, designated area. They're not competing with human drivers. Still a marvel of human engineering, just quite expensive compared with other forms of public transport. It just doesn't compete in the same space and likely never will.
They are literally competing with human uber drivers in the area they operate and also having a much lower crash and injury rate.
I admit they don't operate everywhere - only certain routes. Still they are undoubtedly cars that drive themselves.
I imagine it'll be the same with AGI. We'll have robots / AIs that are much smarter than the average human and people will be saying they don't count because humans win X Factor or something.
They're driving, but not well in my (limited) interactions with them. I had a waymo run me completely out of my lane a couple months ago as it interpreted 2 lanes of left turn as an extra wide lane instead (or, worse, changed lanes during the turn without a blinker or checking its sensors, though that seems unlikely).
Yes, but ...
The argument that self-driving cars should be allowed on public roads as long as they are statistically as safe as human drivers (on average) seems valid, but of course none of these cars have AGI... they perform well in the anticipated simulator conditions in which they were trained (as long as they have the necessary sensors, e.g. Waymo's lidar, to read the environment in reliable fashion), but will not perform well in emergency/unanticipated conditions they were not trained on. Even outside of emergencies, Waymos still sometimes need to "phone home" for remote assistance in knowing what to do.
So, yes, they are out there, perhaps as safe on average as a human (I'd be interested to see a breakdown of the stats), but I'd not personally be comfortable riding in one since I'm not senile, drunk, a teenager, a hothead, or distracted (using a phone while driving), etc - not part of the class that drags the human safety stats down. I'd also not trust a Tesla, where penny pinching, or just arrogant stupidity, has resulted in a sensor-poor design liable to failure modes like running into parked trucks.
The challenge is that most people think they’re better than average drivers.
Through my lens, as long as companies don't want to be held liable for an accident, they shouldn't be on roads. They need to be extremely confident, to the point of putting their money where their mouths are. That's true "safety".
That's the main difference with a human driver. If I take an Uber and we crash, that driver is liable. Waymo would fight tooth and nail to blame anything else.
Well, it depends on the details. I'd trust a Waymo as much as an Uber but I'm pretty skeptical of the Tesla stuff they are launching in Austin.
I'm quoting Elon.
I don't care about SF. I care about what I can buy as a typical American, not as an enthusiast in one of the most technologically advanced cities on the planet.
They’re in other cities too…
As far as I've seen we appear to already have self driving vehicles; the main barriers are legal and regulatory concerns rather than the tech. If a company wanted to put a car on the road that beetles around by itself, there aren't any crazy technical challenges to doing that - the issue is that even if it were safer than a human driver, the company would have a lot of liability problems.
This is just not true, Waymo, MobilEye, Tesla and Chinese companies are not bottlenecked by regulations but by high failure rate and / or economics.
They are only self-driving in the very controlled environments of a few very well mapped-out cities, with good roads, in good weather.
And it took what, like two decades to get there? So no, we don't have self-driving, not even close. Those examples look more like hard-coded solutions for custom test cases.
What? If that stuff works, no liability will ever come into play. How can you state that it works and claim liability problems at the same time?
> the main barriers are legal and regulatory concerns rather than the tech
they have failed in SF, Phoenix and other cities that rolled out the red carpet for them
Pretty solid evidence that self driving cars already exist though.
100% this. I always argue that groundbreaking technologies are clearly groundbreaking from the start. It is almost a bit like a film, if you have to struggle to get into it in the first few minutes, you may as well spare yourself watching the rest.
I consulted a radiologist more than 5 years after Hinton said that it was completely obvious that radiologists would be replaced by AI in 5 years. I strongly suspect they were not an AI.
Why do I think this?
1) They smelled slightly funny. 2) They got the diagnosis wrong.
OK maybe #2 is a red herring. But I stand by the other reason.
I know a radiologist and we talk a decent bit about AI usage in the field. Every radiologist today is making heavy use of AI. They pre-screen everything, and from what I understand it has led to massive productivity gains. It hasn't led to job losses yet, but there's so much money on the line that it really feels to me like we're just waiting for the straw that breaks the camel's back. No one wants to be the first to fully get rid of radiologists, but once one hospital does, the rest will quickly follow suit.
The quote appears to be “We should stop training radiologists now, it’s just completely obvious within five years deep learning is going to do better than radiologists.”
So there's some room for interpretation, the weaker interpretation is less radical (that AI could beat humans in radiology tasks in 5 years).
I named 3 things...
You're going to have to specify which 2 you think happened
I have a fusion reactor to sell to you.
Some people are ahead of you by 3.5 years [0]:
> Helion has a clear path to net electricity by 2024, and has a long-term goal of delivering electricity for 1 cent per kilowatt-hour. (!)
[0] https://blog.samaltman.com/helion
Where did it happen?
They try it, but it’s not reliable
Did you by any chance send money to a Nigerian prince?
Over ten years for the "we'll have self driving cars next year" spiel.