Comment by mvdtnz

2 years ago

It will never stop being funny to me that people are straight-facedly drawing a straight line between shitty text completion computer programs and nuclear weapon level existential risk.

>shitty text completion computer programs

There's a certain kind of psyche that finds it utterly impossible to extrapolate trends into the future. It renders them completely incapable of anticipating significant changes regardless of how clear the trends are.

No, no one is afraid of LLMs as they currently exist. The fear is about what comes next.

  • > There's a certain kind of psyche that finds it utterly impossible to extrapolate trends into the future.

    It is refreshing to see somebody explicitly call out people that disagree with me about AI as having fundamentally inferior psyches. Their inability to picture the same exact future that terrifies me is indicative of a structural flaw.

    One day society will suffer at the hands of people that have the hubris to consider reality as observed as a thing separate from what I see in my dreams and thought experiments. I know this is true because I’ve taken great pains to meticulously pre-imagine it happening ahead of time — something that lesser psyches simply cannot do.

"Looks at all the other species 'intelligent' humans have extincted" --ha ha ha ha

Why the shit would we not draw a straight line?

If we fail to create digital intelligence then yeah, we can hem and haw in conversations like this online forever, but you're neglecting that if we succeed then shit gets real, quick. Closing your eyes and ears and saying "This can't actually happen" sounds like a pretty damned dumb take on future risk assessment of technology, when most serious takes on AI say "well, yeah, this is something that could potentially happen".

  • Literally the thing people are calling "AI" is a program that, given some words, predicts the next word. I refuse to entertain the absolutely absurd idea that we're approaching a general intelligence. It's ludicrous beyond belief.

    • Then this is your failure, not mine, and not a failure of current technology.

I can, right now, upload an image to an AI, ask "Hey, what do you think the emotional state of the person in this image is?", and get a pretty damned accurate answer. Given other images, I can have the AI describe the scene and make pretty damned accurate assessments of how the image could have come about.

      If this is not general intelligence I simply have no guess as to what will be enough in your case.

    • Modern generative AI functionality is hardly limited to predicting words. Have you not heard of e.g. Midjourney?