Comment by conductr
1 day ago
You’ve switched contexts completely with your strawman. Meaning, you’ve pivoted from Brenda in finance to some technical/software engineering task. You’ve pointed the conversation specifically at a use case AI is good at: writing code and solving those problems. The world at large is much more complex than helping you be a 10x engineer. To live up to the hype, it has to perform reliably in every vertical, in a majority of situations. It’s not even close to being there.
Also, context-equivalent counterexamples abound. Just read HN or any tech forum and it takes no time to find people talking about the hallucinations and garbage that AI sometimes generates. The whole vibe coding trend is built on “make this app” followed by hundreds of “fix this” and “fix that” prompts, because it doesn’t get much right on the first attempt.
You're moving the goalposts. You claimed "AI" cannot verify results, and that's trivially false. Claude Code verifies results on a regular basis. You don't have a clue what you're talking about and are just pushing ignorant FUD.
It can't do so reliably, is what I'm saying. I'm not doubting that you built one singular use case where it has. When I feed Copilot a PDF contract and ask what the minimum monthly amount I can charge this client is, it tells me $1000. I ask it a dozen other questions and it changes its response, but never to the correct value. Then, when I ask it to cite where it finds that information, it points me to a paragraph that clearly says $1500 - spelled out clear as day, not tangled in a bunch of legalese or anything else. How is that reliable for a Brenda in finance? (This is a real case I tried out.)