Comment by yourapostasy
2 hours ago
We could have LLMs capable of doing all that for your pastor right now, and it would still take time before these systems can effectively reason through troubleshooting this bespoke software. Right now the effectiveness of LLM-powered troubleshooting platforms relies upon the gravity induced by millions of programmers sharing experiences on more or less the same platforms: gigabytes to terabytes of text training data on all sorts of things that go bonkers on each platform.
We are now undergoing a Cambrian explosion of bespoke software vibe-coded by a non-technical audience, and each program brings with it new failure modes that only surface in its operational phase. And compared to the current state, there is effectively zero training data to guide troubleshooting them.
Non-linearly increasing the surface area of software to debug while simultaneously shrinking the training data available for that debugging will hopefully apply creative pressure on AI research to come up with more powerful ways to debug all this code. As it stands now, I sure hope someone deep into AI research and praxis sees this and follows up with a comment here prescribing the AI-assisted troubleshooting approach I’m missing, one that goes beyond “a more efficient Google and StackOverflow search”.
Also, the current approach is awesome for coming up to speed on new applications of coding and new platforms I’m not familiar with. But in the areas where I’m already fluent, the areas where my stakeholders especially want to see LLM-based amplification, either I’m doing something wrong or we’re just not yet good at troubleshooting legacy code with these tools. There is some uncanny valley of reasoning I’m unable to bridge so far with the stuff I already know well.