Comment by getnormality
9 months ago
We already control many things that vastly exceed our unaided physical and mental capabilities, including things that are "smarter" than us in the sense that they crack problems we could never crack on our own.
People have a long history of predicting doomsday from technological change. "This time is different" is said every time, and every time is different. If we gave in to fear, we would never progress, and we would just be sitting ducks, waiting to be wiped out by something other than technological change.
LLMs are very far behind human intelligence, and even non-human animal intelligence, in ways that fundamentally limit their power. They can't see the world in any way except the way humans have chopped it up and spoon-fed it to them: they famously can't count the number of r's in "strawberry", because they see subword tokens rather than letters. Their capacity to notice and correct their own errors is very limited. And they have no capacity to accumulate knowledge through self-initiated interaction with the world; no credible proposal yet exists to endow them with that capability at anything approaching human, or even animal, ability levels.
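To make the tokenization point concrete, here is a minimal sketch using OpenAI's tiktoken library (the library and the cl100k_base encoding are illustrative choices on my part, not anything the argument depends on):

    import tiktoken

    # The model receives subword token IDs, not characters, so a
    # letter-counting question is posed about a representation the
    # model never directly sees.
    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("strawberry")
    print([enc.decode([t]) for t in tokens])  # subword chunks, not letters

The exact split depends on the encoding (it may even come out as a single chunk), but either way the model operates on those chunks, which is exactly why character-level tasks are awkward for it.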
Without these basic abilities, LLMs can only be considered intelligent in the sense shared by other normal technologies, like autocomplete and optimal planning algorithms. Intelligence in a truly human sense is not even on the horizon yet, let alone superintelligence.