Comment by mitthrowaway2
2 years ago
> acting like AGI is almost here because they think LLMs are a lot smarter than they actually are.
For me, it's a bit the opposite -- the effectiveness of dumb, simple, transformer-based LLMs is showing me that the human brain itself (while working quite differently) might involve a lot less cleverness than I previously thought. That is, AGI might end up being much easier to build than it long seemed, not because progress is fast, but because the target was never as far away as it appeared.
We spent many decades recognizing the failure of the early computer scientists who thought a few grad students could build AGI as a summer project, and apparently concluded that AGI was an impossibly difficult holy grail, a quixotic dream forever out of reach. We're certainly not there yet. But I've now watched all the classic examples of tasks that the old textbooks described as easy for humans but near-impossible for computers become easy for computers too. The computers aren't doing anything deeply clever, but perhaps it's time to re-evaluate our very high opinion of the human brain. We might stumble on AGI quite suddenly.
It's, at least, not a good time to be dismissive of anyone who is trying to think clearly about the consequences. Maybe the issue with sci-fi is that it tricked us into optimism, thinking an AGI will naturally be a friendly robot companion like C-3PO, or if unfriendly, then something like the Terminator that can be defeated by heroic struggle. It could very well be nothing that makes a good or interesting story at all.