Comment by ilaksh
1 month ago
You think it will be 25 years before we have a drop-in replacement for most office jobs?
I think it will be less than 5 years.
You seem to be assuming that the rapid progress in AI will suddenly stop.
I think if you look at the history of compute, that is ridiculous. Making the models bigger, or making them work harder, is making them smarter.
Even if there is no progress in scaling memristors or any other exotic new paradigm, high-speed memory organized to keep data local to frequently used neural circuits, plus photonic interconnects, surely offers multiple orders of magnitude of scaling gains over the next several years.
> You seem to be assuming that the rapid progress in AI will suddenly stop.
And you seem to assume that it will just continue for 5 years. We've already seen the plateau start. OpenAI has tacitly acknowledged that they don't know how to make a next-generation model, and has been working on stepwise iteration for almost 2 years now.
Why should we project the rapid growth of 2021–2023 five years into the future? It seems far more reasonable to project the growth of 2023–2025, which has been fast but not earth-shattering, then factor in the second derivative we've seen over that time and assume progress will actually continue to slow from here.
At this point, the lack of progress since April 2023 is really what is shocking.
I just looked at the Midjourney subreddit to make sure I wasn't missing some great new model.
Instead, what I noticed were small variations on the themes I had already seen a thousand times a year ago. Midjourney is so limited in what it can actually produce.
I am really worried that all of this is much closer to a parlor trick ("a simple trick or demonstration that is used especially to entertain or amuse guests") than to AGI.
It all feels more and more like that to me than any kind of progress towards general intelligence.
> OpenAI has tacitly acknowledged that they don't know how to make a next generation model
Can you provide a source for this? I'm not super plugged into the space.
There's this [0]. But o1/o3 is also that acknowledgment. They're hitting the limits of scaling up models, so they've started scaling inference-time compute instead [1]. That is showing some promise, but it's nowhere near the rate of growth they were hitting while next-gen models were still buildable.
[0] https://www.wsj.com/tech/ai/openai-gpt5-orion-delays-639e769...
[1] https://techcrunch.com/2024/11/20/ai-scaling-laws-are-showin...
I think you're suffering from some survivorship bias here. There are a lot of technologies that don't work out.
Computation isn't one of them so far. Do you believe this is the end of computing efficiency improvements?
No, but there's really very little reason to think that that makes the ol' magic robots less shit in any sort of well-defined way. Like, it certainly _looks_ like they've plateaued.
I often suspect that the tech industry's perception of reality is skewed by Moore's Law. Moore's Law is, quibbles aside, basically real, and has of course had a dramatic impact on the tech industry. But there is a tendency to assume that that sort of scaling is _natural_, and the norm, and should just be expected in _everything_. And, er, that is not the case. Moore's Law is _weird_.
> You seem to be assuming that the rapid progress in AI will suddenly stop.
> I think if you look at the history of compute, that is ridiculous. Making the models bigger or work more is making them smarter.
It's better to talk about actual numbers to characterise progress and measure scaling:
" By scaling I usually mean the specific empirical curve from the 2020 OAI paper. To stay on this curve requires large increases in training data of equivalent quality to what was used to derive the scaling relationships. "[^2]
"I predicted last summer: 70% chance we fall off the LLM scaling curve because of data limits, in the next step beyond GPT4.
[…]
I would say the most plausible reason is because in order to get, say, another 10x in training data, people have started to resort either to synthetic data, so training data that's actually made up by models, or to lower quality data."[^0]
“There were extraordinary returns over the last three or four years as the Scaling Laws were getting going,” Dr. Hassabis said. “But we are no longer getting the same progress.”[^1]
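For concreteness, here is a minimal sketch of the curve those quotes refer to, using the approximate constants from the Kaplan et al. 2020 fit (the exponents and critical scales below are my recollection of the published values, not numbers from the quoted tweets). It only evaluates that published power-law fit; the takeaway is that once the data term dominates, making N bigger barely moves the loss, which is the data-limits point being made above.

```python
# Minimal sketch of the Kaplan et al. (2020) scaling-law fit ("the specific
# empirical curve from the 2020 OAI paper"). Constants are the approximate
# fitted values reported there; this only evaluates that published fit.

ALPHA_N, N_C = 0.076, 8.8e13   # exponent / critical scale for parameter count
ALPHA_D, D_C = 0.095, 5.4e13   # exponent / critical scale for dataset tokens

def loss(n_params: float, n_tokens: float) -> float:
    """Approximate test loss L(N, D) from the combined parameter/data fit."""
    return ((N_C / n_params) ** (ALPHA_N / ALPHA_D) + D_C / n_tokens) ** ALPHA_D

# Holding data fixed while scaling parameters 100x barely moves the loss,
# which is the "data limits" point the quoted tweets are making.
for n in (1e11, 1e12, 1e13):
    print(f"N = {n:.0e} params, D = 3e11 tokens -> L ~= {loss(n, 3e11):.2f}")
```

The exact constants matter less than the shape: to keep descending the curve, N and D have to grow together, which is exactly the data constraint being described.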
---
[^0]: https://x.com/hsu_steve/status/1868027803868045529
[^1]: https://x.com/hsu_steve/status/1869922066788692328
[^2]: https://x.com/hsu_steve/status/1869031399010832688
o1 proved that synthetic data and inference-time compute are a new ramp. There will be more challenges and more innovations. There is still a lot of room left in hardware, software, model training, and model architecture.
> There is a lot of room in hardware, software, model training and model architecture left.
Quantify this please? And make a firm prediction with approximate numbers/costs attached?
Also, office jobs will be adapted to better fit what AI can do, just as manufacturing jobs were adapted so that at least some tasks could be completed by robots.
Not my downvote, just the opposite, but I think you can do a lot in an office already if you start early enough...
At one time I would have said you should be able to have an efficient office operation using regular typewriters, copiers, filing cabinets, fax machines, etc.
And then you get Office 97, zip through everything and never worry about office work again.
I was pretty extreme, having a paperless office when my only product is paperwork, but I got there. And I started my office with typewriters, nice ones too.
Before long, Google gets going. Wow. A no-ads information superhighway; if this holds, it can only get better. And that's without broadband.
But that's beside the point.
Now it might make sense for you to at least be able to run an efficient office on the equivalent of Office 97 to begin with. Then throw in the AI, or let it take over, and see what you get in terms of output, and in comparison. Microsoft is probably already doing this in an advanced way. I think a factor that can vary over orders of magnitude is how well the machine leverages the abilities and/or tasks of the nominal human "attendant".
One type of situation would be where a less-capable AI could augment a defined worker more effectively than even a fully automated alternative using a 10x more capable AI. There's always some attendant somewhere, so you never get a zero in this equation, no matter how close you come.
It could be financial effectiveness or something else; the dividing line could be a moving target for a while. A toy version of that comparison is sketched below.
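Here is a toy back-of-the-envelope version of that comparison. Every number in it is invented purely for illustration (the wages, compute costs, and leverage factor are hypothetical, not measurements); the only point is that the attendant-leverage factor can decide the outcome even against a nominally 10x more capable system.

```python
# Toy comparison of "attendant + weaker AI" vs. "fully automated, 10x more
# capable AI". Every number here is hypothetical and only for illustration.

def units_per_dollar(units_per_hour: float, cost_per_hour: float) -> float:
    """Output per dollar of total hourly cost."""
    return units_per_hour / cost_per_hour

BASE_OUTPUT = 10.0   # attendant's unaided units of office work per hour (made up)

# Scenario A: weaker AI that multiplies the attendant's output 20x,
# for the cost of a wage plus cheap inference.
augmented = units_per_dollar(BASE_OUTPUT * 20, cost_per_hour=60 + 2)

# Scenario B: "10x more capable" AI running the job itself, at higher compute
# cost, with a sliver of attendant time (the factor that never goes to zero).
automated = units_per_dollar(BASE_OUTPUT * 10, cost_per_hour=40 + 6)

print(f"augmented attendant: {augmented:.2f} units per dollar")
print(f"fully automated:     {automated:.2f} units per dollar")
```

With these made-up inputs the augmented attendant comes out ahead; shift the leverage factor or the compute cost and it flips, which is why the dividing line could be a moving target.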
You could even go full paleo and train the AI on the typewriters and stuff just to see what happens ;)
But would you really be able to get the most out of it without the momentum of many decades of continuous improvement before capturing it at the peak of its abilities?