Comment by bigstrat2003
7 months ago
This is an insightful post, and I think maybe highlights the gap between AI proponents and me (very skeptical about AI claims). I don't have any applications where I'm willing to accept 90% as good enough. I want my tools to work 100% of the time or damn close to it, and even 90% simply is not acceptable in my book. It seems like maybe the people who are optimistic about AI simply are willing to accept a higher rate of imperfections than I am.
It's very scenario dependent. I wish my dishwasher got all the dishes perfectly clean every time, and I wish I could simply put everything in there without having to consider that the wooden stuff will get damaged or the really fragile stuff will get broken. But in spite of those imperfections I still use it every day, because I come out way ahead, even in the cases where I have to get the dishes to 100% clean myself with some extra scrubbing.
Another good example might be a paint roller - absolutely useless at the edges and corners, but there are other tools for those, and boy does it make quick work of the flat walls.
If you think of and try to use AI as a tool in the same way as, say, a compiler or a drill, then yes, the imperfections render it useless. But it's going to be an amazing dishwasher or paint roller for a whole bunch of scenarios we are just now starting to consider.
It’s not hard to find applications where 90% success or even 50% success rate is incredibly useful. For example, hooking up ChatGPT Codex to your repo and asking it to find and fix a bug. If it succeeds in 50% of the attempts, you would hit that button over and over until its success rate drops to a much lower figure. Especially as costs trend towards zero.
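The retry math here is simple to make concrete: if each attempt succeeds independently with probability p, the number of attempts until the first success follows a geometric distribution with mean 1/p. A minimal sketch (the function name is illustrative, not any real Codex API):

```python
# Back-of-envelope: expected button presses until the AI fixes the bug,
# assuming each attempt succeeds independently with probability p.
def expected_attempts(p: float) -> float:
    """Mean of a geometric distribution: 1/p attempts until first success."""
    if not 0 < p <= 1:
        raise ValueError("p must be in (0, 1]")
    return 1 / p

# At a 50% success rate you expect two attempts per fixed bug;
# even at 10%, ten attempts on average is cheap if each one costs cents.
print(expected_attempts(0.5))  # 2.0
print(expected_attempts(0.1))  # ~10.0
```

Of course this assumes failed attempts are free to detect and discard, which is the crux of the disagreement below.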
I agree there are good examples of 90% being good enough, but what you proposed doesn't sound like one.
This assumes that the AI can't also introduce new bugs into the code, making the net effect negative.
A case of 90% being good enough sounds more like storyboarding or summarizing notes.
If you have surgery, you already accept a less-than-perfect success rate. In fact, you have no way to know how badly it can go. The surgeon or their assistants may have a bad day.
Typically you accept the risk of surgical complications because the alternative is much worse. Given a scenario where you have an aggressive tumor that will most likely kill you in six months, and surgery presents a 90% chance of gaining you at least a few more years of life, but a 10% chance of serious complications or death, most people would take the surgery. But if it's a purely elective procedure, very few people would take that chance.
If your business has an opportunity to save millions in labor costs by replacing humans with AI, but there's a 10% chance that the AI will screw up and destroy the business, will business owners accept that risk? It will be interesting to find out.