Comment by TimPC

2 years ago

I think the real dismissal is that people's concerns are based more on Hollywood sci-fi parodies of the technologies than on the actual technologies. There are basically no concerns with ML for specific applications; any actual concerns are about AGI. AGI is a largely unsuccessful field. Most of the successes in AI have been highly specific applications, the most general of which has been LLMs, which still just make statistical generalizations over patterns in language input and still lack general intelligence. I'm fine with AGI being regulated because it's potentially dangerous. But what I think is going to happen is that we go after specific ML applications with no hope of being AGI, because people are in an irrational panic over AI and are acting like AGI is almost here because they think LLMs are a lot smarter than they actually are.

> acting like AGI is almost here because they think LLMs are a lot smarter than they actually are.

For me, it's a bit the opposite -- the effectiveness of dumb, simple, transformer-based LLMs is showing me that the human brain itself (while working quite differently) might involve a lot less cleverness than I previously thought. That is, AGI might end up being much easier to build than it long seemed, not because progress is fast, but because the target was never as far away as it appeared.

We spent many decades recognizing the failure of the early computer scientists who thought a few grad students could build AGI as a summer project, and apparently concluded from this that AGI was an impossibly difficult holy grail, a quixotic dream forever out of reach. We're certainly not there yet. But I've now seen all the classic examples of tasks that the old textbooks described as easy for humans but near-impossible for computers become easy for computers too. The computers aren't doing anything deeply clever, but perhaps it's time to re-evaluate our very high opinion of the human brain. We might stumble onto it quite suddenly.

It's, at least, not a good time to be dismissive of anyone who is trying to think clearly about the consequences. Maybe the issue with sci-fi is that it tricked us into optimism: thinking an AGI will naturally be a friendly robot companion like C-3PO, or, if unfriendly, then something like the Terminator that can be defeated by heroic struggle. It could very well be nothing that makes a good or interesting story at all.

The fine line between bravery and stupidity is understanding the risks. Somebody who understands the danger they're walking into is brave. Somebody who blissfully walks into danger without recognizing the danger is stupid.

A technological singularity is a theorized period during which the length of time over which you can make reasonable inferences about the future rapidly approaches zero. If there can be no reasonable inferences about the future, there can be no bravery. Anybody who isn't afraid during a technological singularity is just stupid.

The sci-fi scenarios are a long-term risk, which no one really knows much about. I'm terrified of the technologies we have now, today, used by all the big tech companies to boost profits. We will see weaponized mass disinformation combined with near-perfect deepfakes. It will become impossible to know what is true or false. America is already on the brink of fascist takeover due to deluded MAGA extremists. Give the field 10 years of advancements, and we are screwed.

Then of course there is the risk to human jobs. We don't need AGI to put vast numbers of people out of work; it is already happening and will accelerate in the near term.