Comment by GeoAtreides
1 day ago
If I ignore the AGI parts, there's only:
>Everything is awful for almost everyone. I expect even the ultra wealthy will find their lives significantly less pleasant than they were before.
>We're three years into the ChatGPT revolution now and so far the main observable impact on the craft that I care about is that I can build more ambitious things.
I think you refuse to extrapolate the obvious consequences and have forgotten (if you ever knew) what it's like to be in the trenches. You put on the horse blinders of 'easy to build' on the left and 'so much fun' on the right and happily trot on, while the wolves of white collar job automation are closing in for the middle class. You believe that we'll all become cyborg centaurs, while the managers believe we'll all become redundant. You think people will care about the sideslop everyone will build, not seeing that 'everyone will build' means 'no one will care'. Worse, it means no one will buy (knowledge|skill|creation).
Indeed we have not tipped over into the abyss, but we're teetering and the wind is picking up. It's not the end times, it's not AGI, and it doesn't have to be AGI to wreak great damage on the economy, our craft and, ultimately, our way of life and our minds.
And the wind is picking up, faster and faster.
> You believe that we'll all become cyborg centaurs, while the managers believe we'll all become redundant
I hope that we'll all become cyborg centaurs, and that people who think software engineers will all become redundant will be proved very wrong.
I'm trying to use what little influence I have to push things in that direction by ensuring software engineers have the knowledge and tools they need to become cyborg centaurs.
There is a very real chance that you're right, and that the way LLMs are going will massively disrupt the lives of software engineers in a very bad way.
I don't think that's a foregone conclusion yet, and I'm continuing to hope (and in my own tiny way push) for a better path.
> ...the wolves of white collar job automation are closing in for the middle class. You believe that we'll all become cyborg centaurs, while the managers believe we'll all become redundant.
I think his position is that it isn't actually possible to make white collar workers redundant without AGI, since AGI is precisely defined as that continually deferred (and likely impossible for current technology) end goal that we know we haven't achieved yet.
And FWIW, I think he's right. Because LLMs are inherently stochastic, cannot reason or plan sufficiently by themselves, and do not have a world model, you will always need humans in the loop. Not just to oversee, verify, and act as an accountability sink (which by itself could be pretty bad), but also to break problems down, plan, and architect for them, and, when possible, to design automated verification systems so that the LLM can act as the core of a cybernetic feedback loop, like a sort of linear genetic programming algorithm (which is what a Ralph loop does, incidentally). That last part especially, the act of figuring out how to specify the desired behaviors in a machine-verifiable way, either by hand or in a very tight supervised loop with an LLM that relies heavily on human expertise and judgement, looks a lot like just a higher level of programming to me. It's just red-green BDD.
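To make that feedback-loop shape concrete, here is a toy sketch of it in Python. Everything here is hypothetical: `ask_llm` is a stub standing in for a real model call (wired so the sketch runs offline), and `verify` plays the role of the human-written, machine-verifiable checks. The point is only the control flow: generate, run the red-green tests, feed failures back, repeat.

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical LLM call; stubbed so the sketch is runnable offline."""
    # A real implementation would call a model API. This stub returns a
    # wrong candidate first, then "learns" once the prompt carries feedback.
    if "previous attempt failed" in prompt:
        return "def add(a, b):\n    return a + b"
    return "def add(a, b):\n    return a - b"  # deliberately wrong first try

def verify(code: str) -> list:
    """Human-specified, machine-verifiable checks (the 'red' tests)."""
    namespace = {}
    exec(code, namespace)
    failures = []
    if namespace["add"](2, 3) != 5:
        failures.append("add(2, 3) should be 5")
    return failures

def ralph_loop(spec: str, max_attempts: int = 5) -> str:
    """Generate, verify, and feed failures back until the tests go green."""
    prompt = spec
    for _ in range(max_attempts):
        code = ask_llm(prompt)
        failures = verify(code)
        if not failures:
            return code  # all checks green: accept this candidate
        prompt = f"{spec}\nprevious attempt failed: {failures}"
    raise RuntimeError("no passing candidate found")

print(ralph_loop("write add(a, b)"))
```

The human work is all in `verify`: deciding what "correct" means and encoding it as checks a machine can run, which is exactly the higher-level-of-programming point above.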
> You think people will care about the sideslop everyone will build, not seeing that 'everyone will build' means 'no one will care'. Worse, means no one will buy (knowledge|skill|creation).
I think this is again assuming AGI, where AI slop eventually catches up with and becomes indistinguishable from the designs of people with good taste, architectural knowledge, experience, and care for the craft of actually making good, reliable things. But we're not there yet, and as I said above, I don't know that we'll ever get there. So yes, everyone will be able to make things, but not all of it will be of the same quality even if they're all using AI to do it! See, for example, the kind of thing you get if you put an agent in a Ralph loop to make a terminal emulator, versus what Mitchell Hashimoto is able to do using AI on Ghostty.
You are missing my point.
It's not about what you believe, and it's not a question of AGI; it's about what the managers and investors think will happen. It's about headcount and it's about the average dev. Sure, now you have one smart guy in the loop and 99 guys who can't afford their rent anymore.
You ignore the impact "anyone can do it" has in the mind of a manager/CEO. It demotes the specialist dev to generic labourer; it devalues their worth.
Tech people need to develop a theory of mind and understand that other people have very different views of reality and so make very different plans for the future. It doesn't matter that you and Simon think AGI is not happening; it doesn't matter that you both think there must always be a meatbag in the loop. What matters is what the managerial class and the guys with capital _think_ is happening.
I know bosses can believe that AGI is happening and that they can get away with firing workers, but we've already seen a bunch of high profile cases of companies very rapidly learning their lesson and re-hiring people after firing them because of AI. The turnaround time is like, what, a few months? That's why I'm not worried: they learn their lesson quickly because they get slapped by reality. If they try firing people and throwing a bunch of generic workers into the loop, or having no humans in the loop at all, they run into problems extremely quickly. This isn't like infrastructure, where underinvestment takes a while for the cracks to show; the hallucinations and nonsense show up immediately.
I also think that as soon as this AI bubble collapses (because these companies don't see the insane AGI returns they bet on to justify all the money they've borrowed and the VC money they've burned), all illusions that AGI will happen will go up in smoke, even among the managerial class, and in fact investing in AI might become pretty toxic for a while. We've seen it with other bubbles. It's all animal spirits. Right now they're really enthusiastic, but that will go away and actually reverse, irrespective of the relative quality of the technology.