
Comment by slashdev

2 months ago

While I have written elsewhere[1] that I think AI is causing a bubble right now, AI is also the biggest technological change to the world since the Internet.

I'm a software engineer, and I don't write code anymore. I'm still coming to terms with that, grieving the loss of my old career and getting used to the new career which is more like a technical lead and product person than a computer programmer.

[1] https://news.ycombinator.com/item?id=47037421

Stop calling LLMs AI. We have LLMs as a product now, but not AI. AI does exist as a research field, and so do "flying cars" and "nuclear fusion" (and arguably those two are much closer to reality than AI is).

And no, that doesn't make me some kind of "AI hater", or someone unable to see value in LLMs.

  • It’s funny to see the treadmill on the term “AI” moved again.

    There’s a reason the term AGI is used a lot now. What are LLMs if not intelligence that is artificial? Just today I used one to debug code, write a shader (which, to be fair, it’s only slightly better at than I am), and tell my daughter and me what hedgehogs and foxes eat. Seems pretty intelligent to me.

    Of course, not long ago we were using the term largely for things that were basically big chains of conditional statements.

    • "the treadmill on the term “AI”" hasn't moved much or at all, and that's essentially my point. Only 3-5 tech giants want us to think it has.

      > Seems pretty intelligent to me.

      Convenient? Yes. Intelligent? You can redefine AI to be whatever you want by lowering the bar as far as you like. LLMs gave us slightly more convincing chatbots than the previous generation, and nobody back then would have called those intelligent. The only reason we do now is marketing. It's laughable to me that we still apply the word to something so obviously unable to push back on trivially impossible requests.

      > Of course, not long ago we were using the term largely for things that were basically big chains of conditional statements.

      No, we weren't. What are you even talking about? We had already built large artificial neural networks whose outcomes couldn't be trivially explained back in the '70s. Nobody of sane mind who understood what they were would have called them oracles or intelligent devices. That's where the true genius of Altman lies.

  • Why? Why should we suddenly subscribe to your brand new definition of "AI"?

    • The only brand new definition of AI is the one that arrived at full marketing speed shortly after ChatGPT, to have us believe that "AI has been solved and is a commodity now," while all we got were more chatbots.

      In the academic fields where this taxonomy matters, nothing much has changed with LLMs, or at least no more than with DNNs, SVMs, etc. Nobody who's been involved in ML research for more than five years seriously thinks "job's done, pals, we can pack it up; after 70 or so years of effort, we've finally figured it out, and that AI we were looking for, we got it."


  • Respectfully: deal with it.

    That ship has sailed, you're the one who has to adapt to the usage of the word in the world, not the world to you.