Comment by logicprog

16 hours ago

> ...the wolves of white collar job automation are closing in for the middle class. You believe that we'll all become cyborg centaurs, while the managers believe we'll all become redundant.

I think he doesn't believe it's possible to actually make white collar workers redundant, because we don't have AGI, since AGI is precisely defined as that continually deferred end goal, likely impossible for current technology, that we know we haven't achieved yet.

And FWIW, I think he's right. Because LLMs are inherently stochastic, cannot reason or plan sufficiently on their own, and do not have a world model, you will always need humans in the loop. Not just to oversee, verify, and act as an accountability sink (which by itself could be pretty bad), but also to break problems down, plan, and architect for them, and, when possible, to design automated verification systems so that the LLM can act as the core of a cybernetic feedback loop, like a sort of linear genetic programming algorithm (which is what a Ralph loop does, incidentally). That last part especially, figuring out how to specify the desired behaviors in a machine-verifiable way (either by hand, or in a very tight supervised loop with an LLM that relies heavily on human expertise and judgement), looks a lot like just a higher level of programming to me. It's just red-green BDD.
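The feedback-loop idea above can be sketched in a few lines. This is a minimal illustration, not any particular tool's implementation: `propose` and `verify` are hypothetical stand-ins for an LLM call and a human-authored, machine-checkable spec (e.g. a test suite).

```python
# Sketch of the "cybernetic feedback loop": a human writes a machine-checkable
# spec (verify), and the LLM is just the proposal step in a generate-and-test
# loop. Nothing here assumes a particular LLM API.
from typing import Callable, Optional, Tuple

def feedback_loop(
    propose: Callable[[str], str],              # LLM: feedback -> candidate
    verify: Callable[[str], Tuple[bool, str]],  # spec: candidate -> (ok, report)
    max_iters: int = 10,
) -> Optional[str]:
    feedback = "initial task description"
    for _ in range(max_iters):
        candidate = propose(feedback)   # "red": generate an attempt
        ok, report = verify(candidate)  # "green": run the machine-checkable spec
        if ok:
            return candidate            # spec satisfied; loop terminates
        feedback = report               # failure report becomes the next prompt
    return None                         # budget exhausted; back to the human
```

With a fake `propose` that succeeds on the third try, the loop returns the passing candidate; the same skeleton applies when `verify` shells out to a real test suite, which is where the human expertise and judgement actually live.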

> ...You think people will care about the sideslop everyone will build, not seeing that 'everyone will build' means 'no one will care'. Worse, means no one will buy (knowledge|skill|creation).

I think this is again assuming AGI, where AI slop reaches parity with, and becomes indistinguishable from, the designs of people with good taste, architectural knowledge, experience, and care for the craft of actually making good, reliable things. But we're not there yet, and as I said above, I don't know that we'll ever get there. So yes, everyone will be able to make things, but not all of it will be of the same quality, even if they're all using AI to do it! See, for example, the kind of thing you get if you put an agent in a Ralph loop to make a terminal emulator, versus what Mitchell Hashimoto is able to do using AI on Ghostty.

You are missing my point.

It's not about what you believe, and it's not a question of AGI; it's about what the managers and investors think will happen. It's about headcount, and it's about the average dev. Sure, now you have one smart guy in the loop and 99 guys who can no longer afford their rent.

You ignore the impact "anyone can do it" has on the mind of a manager/CEO. It demotes the specialist dev to a generic labourer; it devalues their worth.

Tech people need to develop a theory of mind and understand that other people have very different views of reality, and so make very different plans for the future. It doesn't matter that you and Simon think AGI is not happening; it doesn't matter that you both think there must always be a meatbag in the loop. What matters is what the managerial class and the guys with capital _think_ is happening.

  • I know bosses can believe AGI is happening and that they can get away with firing workers, but we've already seen a bunch of high-profile cases of companies very rapidly learning their lesson and re-hiring people after firing them because of AI. The turnaround time is, what, a few months? I'm not worried about it for that reason: they learn their lesson quickly because they get slapped by reality. If they try firing people and throwing a bunch of generic workers into the loop, or having no humans in the loop at all, they run into problems extremely fast. This isn't like infrastructure, where underinvestment takes a while before the cracks show; the hallucinations and nonsense show up immediately.

    I also think that as soon as this AI bubble collapses, because these companies don't see the insane returns they bet on from AGI to justify all the money they've borrowed and the VC money they've burned, all illusions that AGI will happen, even among the managerial class, will go up in smoke, and investing in AI might in fact become pretty toxic for a while. We've seen it with other bubbles. It's all animal spirits. Right now they're really enthusiastic, but that will go away, and actually reverse, irrespective of the relative quality of the technology.

    • Fair enough: if the LLMs critically underperform, that would slow down their adoption. Maybe. Not immediately; they (the managerial class) will try everything before they abandon LLMs.

      I agree, the bubble collapsing would be the best scenario. That being said, the economic woes following the collapse will make things real bad for a time.

      But what if the AI actually gets better? What if there's no bubble collapse and no abandonment of AI? ...