Comment by kriops

2 years ago

If you work at a computer, it will increase your productivity. Revolutionary is not the word I'd use, but finding use cases isn't hard.

I can buy that it's a better/worse search engine (better in that it's easier to formulate a query and you get the response right there without having to parse the results; worse in that there's a decent chance the response is nonsense, and it's very confident even when it's wrong).

I can't really imagine asking it a question about anything I care about and not verifying via a second source, though, given its accuracy issues. This makes it feel a lot less useful.

How will it do that?

One of the major problems of modern computer-based work is that there are too many people already in those roles, doing work that isn't needed. Case in point: the culling of tens of thousands of software engineers, many of whom would themselves describe their roles as 'bullshit jobs'.

But will it? After accounting for the time needed to fix all the bugs it introduces?

  • Humans introduce bugs too. ChatGPT is still new, so it probably makes more mistakes than a human at the moment, but it's only a matter of time until someone creates the first language model that will measurably outperform humans in this regard (and several other important regards).

    • > it's only a matter of time until someone creates the first language model that will measurably outperform humans in this regard

      This seems to have been the rallying cry of AI-ish stuff for the past 30 years, though. At a certain point you have to ask, "but how much time?" Like, a lot of people were confidently predicting speech recognition as good as a human's from the 90s on, for instance. It's 2023, and the state of the art in speech recognition is a fair bit better than Dragon Dictate in the 90s, but you still wouldn't trust it for anything important.

      That's not to say AI is useless, but historically there's been a strong tendency to say, of AI-ish things, "it's 95% of the way there, how hard could the last 5% be?" The answer appears to be "quite hard, actually", based on the last few decades.

      As this AI hype cycle ramps up, we're actually simultaneously in the down ramp of _another_ AI hype cycle; the 5% for self-driving cars is going _very slowly indeed_, and people seem to have largely accepted that, while still predicting that the 5% for generative language models will be easy. It's odd.

      (Though, also, I'm not convinced that it _is_ just a case of making a better ChatGPT; you could argue that if you want correct results, a generative language model just isn't the way to go at all, and that the future of these things mostly lies in being more convincingly wrong...)


    • >> it's only a matter of time

      That reminds me of how, in my youth, many were planning on vacations at Mars resorts and unlimited fusion energy. The stars looked so close, only a matter of time!