Comment by Jyaif

2 months ago

> in this case justifiably so

Oh please. What LLMs are doing now was complete and utter science fiction just 10 years ago (2015).

In what way do you consider that to be the case? IBM's Watson defeated actual human champions at Jeopardy in 2011. Both Walmart and McDonald's notably made large investments shortly afterward in custom AI built on Watson for business modeling, and lots of other major corporations did similar things. Yes, subsidizing it for the masses is nice, but given how impressive Watson's technology was 15 years ago, I have a hard time seeing how today's generative AI qualifies as science fiction. I'm not even sure the SOTA models could win Jeopardy today. Watson only hallucinated facts on one answer.

  • When Watson did that, everyone was initially very impressed, but later it felt more like just a slightly better search engine.

    LLMs screw up a lot, sure, but Watson couldn't do code reviews, or help me learn a foreign language by critiquing my use of articles, declension, and idiom, nor could it create an SVG of a pelican riding a bicycle, nor help millions of bored kids cheat on their homework by writing entire essays for them.

This.

I’m under the impression that people who are still saying LLMs are unimpressive might just not be using them correctly or effectively.

Or, as the Primeagen says: “skill issue”

Why would the public care what was possible in 2015? They see the results from 2023-2025 and aren't impressed, just like Sutskever.

What exactly are they doing? I've seen a lot of hype but not much real change. It's like a different way to google for answers, with some code generation tossed in, but it's not like LLMs are folding my laundry or mowing my lawn. They seem to be good at putting graphic artists out of work, mainly because the public abides the miserable slop that gets produced.

Not really.

Any fool could have anticipated the eventual result of the transformer architecture if it was pursued to its maximum viable form.

What is impressive is the massive scale of data collection and compute resources rolled out, and the amount of money pouring into all this.

But 10 years ago, spammers were building simple little bots with Markov chains to evade filters because their outputs sounded plausibly human enough. It's not hard to see how a more advanced version of that could produce more useful outputs.
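For anyone who never saw those spam bots up close, here is a minimal, purely illustrative sketch of a bigram Markov-chain text generator, roughly the technique they used. The corpus, function names, and order-1 chain are assumptions made up for the example, not anything from this thread.

```python
# Illustrative bigram Markov-chain text generator: learn word-to-word
# transitions from a corpus, then emit a random walk through them.
import random
from collections import defaultdict

def build_bigrams(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    table = defaultdict(list)
    for current, following in zip(words, words[1:]):
        table[current].append(following)
    return table

def generate(table, start, length=20):
    """Walk the transition table, picking a random successor at each step."""
    word, output = start, [start]
    for _ in range(length - 1):
        successors = table.get(word)
        if not successors:
            break
        word = random.choice(successors)
        output.append(word)
    return " ".join(output)

corpus = "the quick brown fox jumps over the lazy dog and the quick dog sleeps"
print(generate(build_bigrams(corpus), "the"))
```

The output reads as locally plausible but globally meaningless text, which was already enough to slip past naive spam filters; the claim above is that scaling that idea way up is what made today's results foreseeable.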

  • Any fool could have seen self-driving cars coming in 2022. But that didn't happen. And it still hasn't happened. But if it had happened, it would be easy to say:

    "Any fool could have seen this coming in 2012 if they were paying attention to vision model improvements"

    Hindsight is 20/20.

    • Everyone who lives in the snow belt understands that unless a self-driving car can navigate icy, snow-covered roads better than humans can, it's a non-starter. And the car can't just "pull over because it's too dangerous"; that doesn't work at all.


  • I guess I'm worse than a fool then, because I thought it was totally impossible 10 years ago.