Comment by onraglanroad
2 months ago
I'm giving them higher marks than the people who say it won't.
LLMs have seen huge improvements over the last 3 years. Are you going to make the bet that they will continue to make similarly huge improvements, taking them well past human ability, or do you think they'll plateau?
The former is the boring, linear prediction.
>The former is the boring, linear prediction.
right, because if there is one thing that history shows us again and again, it's that things that go through a period of huge improvement never plateau but instead continue improving to infinity.
Improvement to infinity, that is the sober and wise bet!
The prediction that a new technology that is being heavily researched plateaus after just 5 years of development is certainly a daring one. I can’t think of an example from history where that happened.
Neural network research and development has existed since at least the 1980s, so 40+ years. One of the bottlenecks before was insufficient compute.
Perhaps the fact that you think this field is only 5 years old means you're not enough of an authority to comment confidently on it?
Tiger: humans will never beat tigers because tigers are purpose-built killing machines and humans are just generalists. --40,000 BC
You don't think humans hunted tigers in 40,000BC?
LaunchHN: Announcing Twoday, our new YC-backed startup coming out of stealth mode.
We’re launching a breakthrough platform that leverages frontier-scale artificial intelligence to model, predict, and dynamically orchestrate solar luminance cycles, unlocking the world’s first synthetic second sunrise by Q2 2026. By combining physics-informed multimodal models with real-time atmospheric optimisation, we’re redefining what’s possible in climate-scale AI and opening a new era of programmable daylight.
You joke, but, alas, there is a _real_ company kinda trying to do this. Reflect Orbital[1] wants to set up space mirrors, so you can have daytime at night for your solar panels! (Various issues, like around light pollution and the fact that looking up at the proposed satellites with binoculars could cause eye damage... don't seem to be on their roadmap.) This is one idea that's going to age badly whether or not they actually launch anything, I suspect.
Battery tech is too boring by comparison, but seems more likely to be effective long-term.
[1] https://www.reflectorbital.com
Reflecting sunlight from orbit is an idea that had been talked about for a couple of decades even before Znamya-2[1] launched in 1992. The materials science needed to unfurl large surfaces in space seems to be very difficult, whether mirrors or sails.
[1] https://en.wikipedia.org/wiki/Znamya_(satellite)
> Are you going to make the bet that they will continue to make similarly huge improvements
Sure yeah why not
> taking them well past human ability,
At what? They're already better than me at reciting historical facts. You'd need some actual prediction here for me to give you "prescience".
“At what?” is really the key question here.
A lot of the press likes to paint “AI” as a uniform field that continues to improve together. But really it’s a bunch of related subfields. Once in a blue moon a technique from one subfield crosses over into another.
“AI” can play chess at superhuman skill. “AI” can also drive a car. That doesn’t mean Waymo gets safer when we increase Stockfish’s elo by 10 points.
I imagine "better" in this case depends on how one scores "I don't know" or confident-sounding falsehoods.
Failures aren't just a ratio, they're a multi-dimensional shape.
At every intellectual task.
They're already better than you at reciting historical facts. I'd guess they're probably better at composing poems (they're not great but far better than the average person).
Or do you agree with me? I'm not looking for prescience marks; I'm just less convinced that people really do make the more boring and obvious predictions.
What is an intellectual task? Once again, there's tons of stuff LLMs won't be trained on in the next 3 years. So it would be trivial to just find one of those things and say voila! LLMs aren't better than me at that.
I'll make one prediction that I think will hold up. No LLM-based system will be able to take a generic ask like "hack the nytimes website and retrieve emails and password hashes of all user accounts" and do better than the best hackers and penetration testers in the world, despite having plenty of training data to go off of. It requires out-of-band thinking that they just don't possess.
> They're already better than you at reciting historical facts.
so is a textbook, but no one argues that's intelligent
To be clear, you are suggesting “huge improvements” in “every intellectual task”?
This is unlikely for the trivial reason that some tasks are roughly saturated. Modest improvements in chess playing ability are likely. Huge improvements probably not. Even more so for arithmetic. We pretty much have that handled.
But the more substantive issue is that intellectual tasks are not all interconnected. Getting significantly better at drawing hands doesn’t usually translate to executive planning or information retrieval.
> They're already better than you at reciting historical facts.
They're better at regurgitating historical facts than me because they were trained on historical facts written by many humans other than me who knew a lot more historical facts. None of those facts came from an LLM. Every historical fact that isn't entirely LLM generated nonsense came from a human. It's the humans that were intelligent, not the fancy autocomplete.
Now that LLMs have consumed the bulk of humanity's written knowledge on history, what's left for them to suck up will be mainly their own slop. Precisely because LLMs are not even a little bit intelligent, they will regurgitate that slop with exactly as much ignorance of what any of it means as when it was human-generated facts, and they'll still spew it back out with all the confidence they've been programmed to emulate. I predict that the resulting output will increasingly shatter the illusion of intelligence you've so thoroughly fallen for.
> At what? They're already better than me at reciting historical facts.
I wonder what happens if you ask deepseek about Tiananmen Square…
Edit: my “subtle” point was, we already know LLMs censor history. Trusting them to honestly recite historical facts is how history dies. “The victor writes history” has never been more true. Terrifying.
> Edit: my “subtle” point was, we already know LLMs censor history. Trusting them to honestly recite historical facts is how history dies.
I mean, that's true but not very relevant. You can't trust a human to honestly recite historical facts either. Or a book.
> “The victor writes history” has never been more true.
I don't see how.
LLMs aren't getting better that fast. I think a linear prediction says they'd need quite a while to get "well past human ability", and if you factor in the increasing difficulty of training, the timescale stretches even further.
> The former is the boring, linear prediction.
Surely you meant the latter? The boring option follows previous experience. No technology has ever failed to reach a plateau, except for evolution itself I suppose, until we nuke the planet.