Comment by snovymgodym, 21 days ago:

First of all, "AI" is and always has been a vague term with a shifting definition. "AI" used to mean state-space search programs or rule-based reasoning systems written in LISP. When deep learning hit, lots of people stopped considering symbolic (i.e., non-neural-net) AI to be AI. Now LLMs threaten to do the same to older neural-net methods. A pedantic conversation about what is and isn't true AI is not productive.

Second of all, LLMs have extremely impressive general-purpose uses considering that their training consists of nothing more than consuming large amounts of unsorted text. Any counterargument that "it's not real intelligence" or "it's just a next-token predictor" ignores the fact that LLMs have enabled us to do things with machines that would have seemed impossible just a few years ago. No, they are not perfect, and yes, there are lots of rough edges, but the fact that simply "solving text" has gotten us this far is huge and echoes some aspects of the Unix philosophy:

"Write programs to handle text streams, because that is a universal interface."

> A pedantic conversation about what is and isn't true AI is not productive.

It's not at all 'pedantic', and while it's not productive to have to keep railing against this stupid term, that is not the fault of the people pushing back against it. It's the fault of the hype merchants who have promoted it.

A key part of thinking independently is continually questioning the use of language.

> Any counterargument that "it's not real intelligence" or "it's just a next-token predictor" ignores the fact that LLMs have enabled us to do things with machines that would have seemed impossible just a few years ago.

No, it's entirely possible to appreciate that LLMs are a very powerful and useful technology while also pointing out that they are not 'intelligence' in any meaningful sense of the word and that labeling them 'artificial intelligence' is unhelpful to users and, ultimately, to the industry.
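
For what it's worth, "next-token predictor" is a literal description of the interface. Here is a toy sketch in Python (my own illustration, not anything from this thread): a bigram model that, given the previous word, emits whichever word most often followed it in its training text. A real LLM conditions on a long context with a neural network rather than a frequency table, but the contract is the same: context in, most likely next token out.

```python
# Toy "next-token predictor": a bigram frequency model.
# This is a deliberately trivial stand-in for illustration only;
# it shares nothing with an LLM internally except the interface.
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, the words that follow it in the training text."""
    words = text.split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def generate(model, start, n=5):
    """Greedily append the most frequent next word, one token at a time."""
    out = [start]
    for _ in range(n):
        options = model.get(out[-1])
        if not options:
            break
        out.append(options.most_common(1)[0][0])
    return out

model = train_bigram("the cat sat on the mat and the cat slept")
print(generate(model, "the"))  # ['the', 'cat', 'sat', 'on', 'the', 'cat']
```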

> "AI" used to mean state search programs or rule-based reasoning systems written in LISP. When deep learning hit, lots of people stopped considering symbolic (i.e., non neural-net) AI to be AI. Now LLMs threaten to do the same to older neural-net methods. A pedantic conversation about what is and isn't true AI is not productive.

I think you are misstating the problem here.

All of the things you name are still AI.

None of the things you name are, or have ever been, AI.

The problem is that there is AI, the computer science subfield of artificial intelligence, which includes things like expert systems, NPCs in games, and LLMs, and then there is AI, the "true" artificial intelligence, brought to us exclusively by science fiction, which includes things (or people!) like Commander Data, Skynet, Durandal, and HAL 9000.

The general public doesn't understand this distinction in a deep way: even those who recognize that things like Skynet are fiction get confused when they see an LLM apparently able to carry on a coherent conversation with a human. And too many of us, who came into this with a basic understanding of the distinction and who should know better, have bought the hype (and in some cases outright lies) of companies like OpenAI wholesale.

These facts (among others) have combined to allow the various AI grifters to continue operating without being called out on their bullshit.