Comment by mjr00
12 days ago
I'm not saying that AI can't figure out how to handle bugs (it absolutely can; in fact even a decade ago at AWS there was primitive "AI" that essentially mapped failure codes to a known issues list, and it would not take much to allow an agent to perform some automation). I'm saying there will be situations the AI can't handle, and it's really absurd that you think a product owner will be able to solve deeply technical issues.
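For a rough sense of what that kind of primitive "failure code to known issues" automation looks like, here's a minimal sketch. The codes, messages, and function names are hypothetical illustrations, not AWS's actual tooling:

```python
# Hypothetical "failure code -> known issue" triage: a static lookup table
# plus a fallback that escalates anything unrecognized to a human.
KNOWN_ISSUES = {
    "ERR_THROTTLED": "Upstream service throttling; retry with exponential backoff.",
    "ERR_CERT_EXPIRED": "TLS certificate rotation missed; see the renewal runbook.",
}

def triage(failure_code: str) -> str:
    """Return the known-issue note for a code, or flag it for investigation."""
    note = KNOWN_ISSUES.get(failure_code)
    if note is not None:
        return f"Known issue: {note}"
    return "No match; escalate to an engineer for investigation."

if __name__ == "__main__":
    print(triage("ERR_THROTTLED"))
    print(triage("ERR_UNKNOWN_1234"))
```

The point being that the lookup handles the mapped cases fine; everything else falls through to a person.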
You can't product manage away something like "there's an undocumented bug in MariaDB that causes database corruption with spatial indexes" or "there's a regression in jemalloc that causes Tomcat to leak memory when we upgrade to Java 8". Both are real issues I had to dive deep to discover in my career.
There are definitely issues in human software engineering which reach some combination of the following end states:
1. The team is unable to figure it out
2. The team is able to figure it out, but the responsible third-party dependency is unable to fix it
3. The team throws in the towel and works around the issue
At the end of the day it always comes down to money: how much more money do we throw at trying to diagnose or fix this versus working around or living with it? And is that determination not exactly the role of a product manager?
I don't see why this would ipso facto be different with AI
For clarity, I come at this with a superposition of skepticism about AI's ultimate capabilities and recognition of the sometimes frightening depth of those capabilities and the speed with which they are advancing.
I suppose the net result is skepticism toward any confident prediction of where this all ends up.
> I don't see why this would ipso facto be different with AI
Because humans can learn information they currently do not have, and AI cannot?
They can, by putting what they just "learned" into the context window. Claude Code does this without (my) prompting from time to time, adding to its CLAUDE.md things that it has learned about the project or my preferences. Currently this is limited to literally writing it down, but as context windows grow and models continue training on their own usage, it's not clear to me how that will significantly differ from an ability to "learn information they currently do not have".
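A minimal sketch of that "write it down, read it back next time" loop, with a hypothetical file name and stand-in functions; this isn't Claude Code's actual implementation, just the idea of persisting notes and feeding them back into the next prompt:

```python
from pathlib import Path

# Hypothetical persistent-notes loop: anything the agent "learns" is appended
# to a memory file, and the file is prepended to the next session's prompt.
MEMORY_FILE = Path("CLAUDE.md")  # illustrative stand-in

def remember(note: str) -> None:
    """Append a learned fact or preference so future sessions see it."""
    with MEMORY_FILE.open("a", encoding="utf-8") as f:
        f.write(f"- {note}\n")

def build_prompt(task: str) -> str:
    """Prepend accumulated notes to the task, simulating 'learning' via context."""
    notes = MEMORY_FILE.read_text(encoding="utf-8") if MEMORY_FILE.exists() else ""
    return f"Project notes:\n{notes}\nTask:\n{task}"

# A later session automatically benefits from an earlier observation.
remember("Tests are run with `pytest -q`, not `make test`.")
print(build_prompt("Fix the failing integration test."))
```

Crude, but functionally it's memory that outlives a single context window.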
But does that change the end result? Finding a compiler or SQL bug doesn't mean you yourself can learn enough to fix it. I don't see any reason why AI would be inherently incapable of also concluding that there's a bug in an underlying layer beyond its ability to fix.
That doesn't mean AI can do or replace everything, but what fraction of software engineering work requires that final frontier?