Comment by threethirtytwo

4 days ago

Have you ever thought about the fact that 2 years ago AI wasn't even good enough to write code? Now it is.

Right now you state the current problem is: "requiring my constant supervision and frequent intervention and always trying to sneak in subtle bugs or weird architectural decisions"

But in 2 years that could be gone too, given the trendline. So I actually don't see how you can hold this opinion: "I'm not even freaking about my career, I'm freaking about how much today's "almost good" LLMs can empower incompetence and how much damage that could cause to systems that I either use or work on." when all the logic points away from it.

We need to be worried: LLMs are only getting better.

That's easy. When LLMs are good enough to fully replace me and my role in society (a kind of above-average smart, well-read guy with a university education and solid knowledge of many topics, basically like most people here) without any downsides, and without any escape route for me, we'll probably already be on the brink of societal collapse, and that's something I can't really prepare for or even change.

  • All evidence points to the world changing. You're not worrying because worrying doesn't solve anything. Valid.

More people need to be upfront about this reasoning, instead of building irrational scaffolds claiming AI is not a threat. AI is a threat; THAT is the only rational conclusion. Give the real reason why you're not worried.