Comment by Toutouxc
6 days ago
I am constantly seeing this thing do most of my work (which is good, actually; I don't enjoy typing code), but requiring my constant supervision and frequent intervention and always trying to sneak in subtle bugs or weird architectural decisions that, I feel with every bone in my body, would bite me in the ass later. I see JS developers with little experience and zero CS or SWE education rave about how LLMs are so much better than us in every way, when the hardest thing they've ever written was bubble sort. I'm not even freaking out about my career, I'm freaking out about how much today's "almost good" LLMs can empower incompetence and how much damage that could cause to systems that I either use or work on.
I agree with you on all of it.
But _what if_ they work out all of that in the next 2 years and it stops needing constant supervision and intervention? Then what?
It’s literally not possible. It has nothing to do with intelligence. A perfectly intelligent AI still can’t read minds. 1000 people give the same prompt and want 1000 different things. Of course it will need supervision and intervention.
We can synthesize answers to questions more easily, yes. We can make better use of extensive test suites, yes. We cannot give 1000 different correct answers to the same prompt. We cannot read minds.
Can you? Read minds, I mean.
If the answer is "yes"? Then, yeah, AI is not coming for you. We can make LLMs multimodal, teach them to listen to audio or view images, but we have no idea how to give them ESP modalities like mind reading.
If the answer is "no"? Then what makes you think that your inability to read minds beats that of an LLM?
If you have an AI that's the equivalent of a senior software developer, you essentially have AGI. In that case the entire world will fundamentally change. I don't understand why people keep bringing up software development specifically as something that will be automated, ignoring the implications for all white collar work (and the world in general).
Then who else would still be holding a job if a tool like that were available? People doing manual work, for the few months or years before robotics development, fueled by cheap human-level LLMs, catches up?
If We Build It We Will All Die
Yes, and look how far we've come in 4 years. If programming has another 4 years, that's all it has.
I'm just not sure who will end up employed. The near-term state is obviously Jira-driven development, where agents just pick up tasks from Jira, etc. But will that mean the PMs go and we get a technical PM, or will we be the ones binned? For most SMEs it'll probably just be 1 PM and 2 or so technical PMs churning out tickets.
But whatever. It's the trajectory you should be looking at.
Have you ever thought about the fact that 2 years ago AI wasn't even good enough to write code? Now it is.
Right now you state the current problem is: "requiring my constant supervision and frequent intervention and always trying to sneak in subtle bugs or weird architectural decisions"
But in 2 years that could be gone too, given the objective trendline. So I don't see how you can hold this opinion: "I'm not even freaking out about my career, I'm freaking out about how much today's 'almost good' LLMs can empower incompetence and how much damage that could cause to systems that I either use or work on." when all logic points away from it.
We need to be worried, LLMs are only getting better.
That's easy. When LLMs are good enough to fully replace me and my role in society (a kind-of above-average smart, well-read guy with a university education and solid knowledge of many topics, basically like most people here) without any downsides, and without any escape route for me, we'll probably already be at the brink of societal collapse, and that's something I can't really prepare for or even change.
All evidence points to the world changing. You're not worrying because worrying doesn't solve anything. Valid.
More people need to be upfront about this reasoning, instead of building irrational scaffolds claiming AI is not a threat. AI is a threat; THAT is the only rational conclusion. Give the real reason why you're not worried.