Comment by bitwize

20 hours ago

No. AI is a must for software development. It's non-negotiable. The productivity gains are too great. The era of 100% human-written code is over. People will still do it as an idle curiosity, for personal projects only they intend to use. But even those open source projects with significant user bases that forbid the use of AI (like, afaik, NetBSD) will be eclipsed by those that support it in terms of features, capability, and security. And the commercial world? Forget it. You cannot keep pace with your employer's expectations unless you learn to use these tools well. This is not up for debate. It's reality.

Plenty of accomplished devs are getting good results and accomplishing tasks with unheard-of speed using AI, so if you're still not, that's a PEBKAC. You are not using the tools correctly. Figure it out before you complain.

> "No. AI is a must for software development. It's non-negotiable."

Absolutist rubbish.

> "But even those open source projects with significant user bases that forbid the use of AI [...] will be eclipsed by those that support it in terms of features, capability, and security."

As is this. Whether a language model is relevant to a project, open source or otherwise, of course depends heavily on its nature (ethics, use case, deployment, working environment/culture, et cetera).

> You cannot keep pace with your employer's expectations unless you learn to use these tools well. This is not up for debate. It's reality.

So the issue isn’t LLM productivity but the unrealistic expectations that have skyrocketed in recent years? Makes sense.

> Plenty of accomplished devs are getting good results and accomplishing tasks with unheard-of speed using AI

I don’t see any major business impact.

LLMs may be a must for programming, but not for engineering. Writing code is the easy part once you figure out what actually needs to be built in the first place.

  • Indeed. But figuring out what actually needs to be built is the systems analyst's job, not the programmer's. It takes people skills and holistic thought, something programmers are generally poor at (and AI certainly is no good at, at least not today).

https://smsk.dev/2026/04/26/ai-cannot-self-improve-and-math-...

> You are not using the tools correctly.

Stop being deluded, man.

When this crap collapses in on itself, you'll be in tears, begging for the knowledge you failed to build without the fancy Clippys.

Now, shut down that fancy Megahal chatbot and learn to do things by hand.

  • I know how to do things by hand, man. But the writing is on the wall: that skill is going the way of writing programs on punchcards. And there's little we can do about it because the economics in favor of LLMs are like laws of physics.

    Yes, model collapse is gonna suck. But LLMs are not just left to self-train, they are guided by human researchers who are going to find ways to groom and direct the models to avoid collapse. They can make billions by shipping better models, so why wouldn't they invest a lot of effort in that?

    • > But the writing is on the wall: that skill is going the way of writing programs on punchcards.

Strange, I don’t see any punchcards inside my computers, but for some reason I still see code behind everything an LLM does.

• This is not terminals vs punchcards. This is like Windows ME over Windows 98. Or, maybe, the 286 over an 8086 when the 386 is the proper path.