Comment by kstrauser
1 day ago
> I was a self confessed skeptic.
I think that's the key. Healthy skepticism is always appropriate. It's the outright cynicism that gets me. "AI will never be able to [...]", when I've been sitting here at work doing 2/3rds of those supposedly impossible things. Flawlessly? No, of course not! But I don't do those things flawlessly on the first pass, either.
Skepticism is good. I have no time or patience for cynics who dismiss the whole technology as impossible.
I think the concern expressed as "impossible" is whether it can ever do those things "flawlessly", because that's what we actually need from its output. Otherwise a more experienced human is forced to do double work: figuring out where it's wrong and then fixing it.
This is not a lofty goal. It's what we always expect from a competent human, regardless of the number of passes it takes them. It is not what we get from LLMs in the same amount of time it takes a human to do the work unassisted. If it's impossible, then there is no amount of time that would ever get this result from this type of AI. That matters because it means the human is forced to stay in the loop, saving no time and working harder than if they hadn't used it at all.
I don't mean "flawless" in the sense that there can be no improvements. I mean that the result should match what was expected for all possible inputs, and that when it's inspected for bugs, the root causes should be reasonable, subtle technical misunderstandings (true bugs, possibly stemming from undocumented or undefined behavior), not a mess of additional linguistic errors or outright misuse. This is the stronger definition of what people mean by "hallucination", and it is absolutely not fixed; no progress has been made on it either. No amount of prompting or prayer can work around it.
This game of AI whack-a-mole really is a waste of time in so many cases. I would not bet on statistical models being anything more than what they are.