Being accountable for telling the truth
accountability sinks are all you need
It won't be able to write a compelling novel, build a software system solving a real-world problem, operate heavy machinery, create a sprite sheet or 3D models, design a building, or teach.
Long-term planning and execution, and operating in the physical world, are not within reach. Slight variations of known problems should be possible (as long as the size of the solution is small enough).
I'm pretty sure you're wrong for at least 2 of those:
For 3D models, check out blender-mcp:
https://old.reddit.com/r/singularity/comments/1joaowb/claude...
https://old.reddit.com/r/aiwars/comments/1jbsn86/claude_crea...
Also this:
https://old.reddit.com/r/StableDiffusion/comments/1hejglg/tr...
For teaching, I'm using it every day to learn about tech I'm unfamiliar with; it's one of the things it's most amazing at.
For the things where the tolerance for mistakes is extremely low and human oversight is extremely important, you might be right. It won't have to be perfect (just better than an average human) for that to happen, but I'm not sure it will.
Just think about the delta between what the LLM does and what a human does, or about why the LLM can't replace the human, e.g. in a game studio.
If it can replace a teacher or an artist in 2027, you’re right and I’m wrong.
> or operate heavy machinery
What exactly do you mean by this one?
In large mining operations we already have human-assisted, teleoperated AI equipment. I was watching one recently where the human got 5 or so push dozers lined up on the (admittedly simple) task of cutting down a hill, and then just got them back in line whenever they ran into anything outside of their training. The push and backup operations, along with blade control, were done by the AI/dozer itself.
Now, this isn't long-term planning, but it is operating in the real world.
Operating an excavator when building a stretch of road. Won’t happen by 2027.
Does a fighter jet count as "heavy machinery"?
https://apnews.com/article/artificial-intelligence-fighter-j...
Yes, when they send unmanned jets to combat.
programming
Why would it get 60-80% as good as human programmers (which is what the current state of things feels like to me, as a programmer, using these tools for hours every day), but stop there?
So I think there's an assumption you've made here, that the models are currently "60-80% as good as human programmers".
If you look at code being generated by non-programmers (where you would expect to see these results!), you don't see output that is 60-80% as good as what domain experts (programmers) get when steering the models.
I think we're extremely imprecise when we communicate in natural language, and this is part of the discrepancy between belief systems.
Will an LLM read a person's mind about what they want to build better than they can communicate it?
That's already what recommender systems (like the TikTok algorithm) do.
But will LLMs be able to orchestrate and fill in the blanks of imprecision in our requests on their own, or will they need human steering?
I think that's where there's a gap in (basically) belief systems about the future.
If we truly get post human-level intelligence everywhere, there is no amount of "preparing" or "working with" the LLMs ahead of time that will save you from being rendered economically useless.
This is mostly a question about how long the moat of human judgement lasts. I think there's an opportunity to work together to make things better than before, using these LLMs as tools that work _with_ us.
It's 60-80% as good as Stack Overflow copy-pasting programmers, sure, but those programmers were already providing questionable value.
It's nowhere near as good as someone actually building and maintaining systems. It's barely able to vomit out an MVP and it's almost never capable of making a meaningful change to that MVP.
If your experiences have been different, that's fine, but in my day job I am spending more and more time just fixing crappy LLM code produced and merged by STAFF engineers. I really don't see that changing any time soon.
Try this: launch Cursor.
Type: print all prime numbers which are divisible by 3 up to 1M
The result is that it will write a sieve. There's no need for that: the only prime divisible by 3 is 3 itself, so the answer is just 3.
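To spell out the shortcut the prompt is testing for, here's a minimal sketch (in Python, since the comment doesn't name a language; the function name is just for illustration):

    # Any multiple of 3 larger than 3 has 3 as a proper divisor, so it can't
    # be prime. The only prime divisible by 3 is 3 itself -- no sieve needed.
    def primes_divisible_by_3(limit: int) -> list[int]:
        return [3] if limit >= 3 else []

    print(primes_divisible_by_3(1_000_000))  # -> [3]
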
Because we still haven't figured out fusion, but it's been promised for decades. Why would everything that's been promised by people with highly vested interests pan out any differently?
One is inherently a more challenging physics problem.
Can you phrase this in a concrete way, so that in 2027 we can all agree whether it's true or false, rather than circling a "no true scotsman" argument?
Good question. I tried to phrase a concrete-enough prediction 3.5 years ago, for 5 years out at the time: https://news.ycombinator.com/item?id=29020401
It was surpassed around the beginning of this year, so you'll need to come up with a new one for 2027. Note that the other opinions in that older HN thread almost all expected less.