
Comment by Timwi

2 years ago

Humans introduce bugs too. ChatGPT is still new, so it probably makes more mistakes than a human at the moment, but it's only a matter of time until someone creates the first language model that will measurably outperform humans in this regard (and several other important regards).

> it's only a matter of time until someone creates the first language model that will measurably outperform humans in this regard

This seems to have been the rallying cry of AI-ish stuff for the past 30 years, tho. At a certain point you have to ask "but how much time"? Like, a lot of people were confidently predicting speech recognition as good as a human's from the 90s on, for instance. It's 2023, and the state of the art in speech recognition is a fair bit better than Dragon Dictate in the 90s, but you still wouldn't trust it for anything important.

That's not to say AI is useless, but historically there's been a strong tendency to say, of AI-ish things, "it's 95% of the way there, how hard could the last 5% be?" The answer appears to be "quite hard, actually", based on the last few decades.

As this AI hype cycle ramps up, we're actually simultaneously in the down ramp of _another_ AI hype cycle; the 5% for self-driving cars is going _very slowly indeed_, and people seem to have largely accepted that, while still predicting that the 5% for generative language models will be easy. It's odd.

(Though, also, I'm not convinced that it _is_ just a case of making a better ChatGPT; you could argue that if you want correct results, a generative language model just isn't the way to go at all, and that the future of these things mostly lies in being more convincingly wrong...)

>> it's only a matter of time

That reminds me of how, in my youth, many people were planning on vacations at Mars resorts and unlimited fusion energy :) The stars looked so close, only a matter of time!