Comment by andy99
5 hours ago
> I think if you took an LLM of today and showed it to someone 20 years ago, most people would probably say AGI has been achieved.
I’ve got to disagree with this. All past pop-culture AI was sentient and self-motivated; it was human-like in that it had its own goals and autonomy.
Current AI is a transcript generator. It can do smart stuff but it has no goals, it just responds with text when you prompt it. It feels like magic, even compared to 4-5 years ago, but it doesn’t feel like what was classically understood as AI, certainly by the public.
Somewhere along the way, marketers changed AGI to mean “does predefined tasks with human-level accuracy” or the like. That is more like the definition of a good function approximator (how appropriate) than what people think (or thought) about when considering intelligence.
The thing that blows my mind about language models isn't that they do what they do, it's that it's indistinguishable from what we do. We are a black box; nobody knows how we do what we do, or if we even do what we do because of a decision we made. But the funny thing is: if I can perfectly replicate a black box then you cannot say that what I'm doing isn't exactly what the black box is doing as well.
We can't measure goals, autonomy, or consciousness. We don't even have an objective measure of intelligence. Instead, since you probably look like me I think it's polite to assume you're conscious…that's about it. There’s literally no other measure. I mean, if I wanted to be a jerk, I could ask if you're conscious, but whether you say yes or no is proof enough that you are. If I'm curious about intelligence I can come up with a few dozen questions, out of a possible infinite number, and if you get those right I'll call you intelligent too. But if you get them wrong… well, I'll just give you a different set of questions; maybe accounting is more your thing than physics.
So, do you just respond with text when you’re prompted with input from your eyes or ears? You’ll instinctively say “No, I’m conscious and make my own decisions”, but that’s just a sequence of tokens with a high probability in response to that question.
Do you actually have goals, or did the system prompt of life tell you that in your culture, at this point in time, you should strive to achieve goals because that’s what gets positive feedback?
> Current AI is a transcript generator. It can do smart stuff but it has no goals
That's probably not because of an inherent lack of capability, but because the companies that run AI products don't want to run autonomous intelligent systems like that.
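To make that concrete: the "autonomy" being discussed is largely a wrapper around the model, not something the model itself lacks. A rough sketch, assuming only a generic llm callable (prompt string in, completion string out; the function name and loop structure here are hypothetical, not any vendor's API):

    def run_autonomous_loop(llm, goal, max_steps=10):
        # llm: any callable taking a prompt string and returning a completion string.
        # Instead of waiting for a user, feed the model's own output back to it
        # so it proposes and pursues its next action under a standing goal.
        history = [f"Your standing goal: {goal}"]
        for _ in range(max_steps):
            action = llm("\n".join(history) + "\nWhat do you do next?")
            history.append(action)
            if "DONE" in action:
                break
        return history

Whether anyone should deploy such a loop unattended is the real question; building it is the easy part.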