Comment by bruce511

4 hours ago

I think existing skilled programmers are leveraging AI to increase productivity.

I think there are some people with limited, or no, programming experience who are vibe coding small apps from nothing. But I think this is a tiny fraction of people. As much as the AI might write code, the tools used to do that, and to compile, distribute, etc., are still very developer-focused.

Sure, one day my pastor might be able to download and install some complete environment which allows him to create something.

Maybe it'll design the database for him, plus install and maintain the local database server for him (or integrate with a cloud service).

Maybe it'll get all the necessary database and program security right.

Maybe it'll integrate well with other systems, from email to text import and export. Maybe all that will stay maintainable as external services change.

Maybe it'll be able to provide support when printing stops working, or when it all needs to be moved to a new machine.

Maybe this environment will be stable enough for the years and decades that the program will be used for. Maybe updating or adding to the program along the way won't break existing things.

Maybe it'll work so well it can be distributed to others.

All this without my pastor even needing to understand what a "variable" is.

That day may come. But however well it may or may not write code today, we're a long, long way from that future. Mass-producing software is a lot more than writing code.

We could have LLMs capable of doing all that for your pastor right now, and it would still take time before these systems can effectively reason through troubleshooting this bespoke software. Right now the effectiveness of LLM-powered troubleshooting of software platforms relies on the gravity induced by millions of programmers sharing experiences on more or less the same platforms: gigabytes to terabytes of text training data on all sorts of things that go bonkers on each platform.

We are now undergoing a Cambrian explosion of bespoke software vibe-coded by a non-technical audience, and each piece of it brings new sets of failure modes that only surface in its operational phase. And compared to the current state, there is effectively zero training data to guide troubleshooting it.

Non-linearly increasing the surface area of software to debug, while the training data that applies to that debugging shrinks, will hopefully apply creative pressure on AI research to come up with more powerful ways to debug all this code. As it stands now, I sure hope someone deep into AI research and praxis sees this and follows up with a comment here prescribing the AI-assisted troubleshooting approach I'm missing, one that goes beyond "a more efficient Google and StackOverflow search".

Also, the current approach is awesome for getting me up to speed on new applications of coding and on platforms I'm not familiar with. But in the areas where I'm already fluent, the very areas where my stakeholders most want to see LLM-based amplification, either I'm doing something wrong or we're just not yet good at troubleshooting legacy code with these tools. There is some uncanny valley of reasoning I'm so far unable to bridge with the stuff I already know well.