Comment by bell-cot
5 days ago
My interpretation: When they say "will lead to human extinction", they are trying to vocalize their existential terror that an AGI would render them and their fellow rationalist cultists permanently irrelevant - by being obviously superior to them, by the only metric that really matters to them.
You sound like you wouldn't feel existential terror if, after typing "My interpretation: " into the text field, you saw the rest of your message suggested by Copilot exactly as you went on to write it, letter for letter. And the same in every other conversation. How about people interrupting you in "real" life because an AI predicted your whole tirade for them, and they read it faster than you could say it, along with an analysis of it?
Dystopian sci-fi, for sure, but many people who dismiss LLMs as not being AGI do so precisely because LLMs are "just token predictors".
Scroll up and read the comment by JKCalhoun for the context of my prior comment.
Or: I'm decades too old to have grown up in the modern "failure to get attention online = death" social media dystopia. A dozen lines of shell script could pretty well predict the meals I eat in a day, or how many times I get up to pee in the night. Neither of those facts bothers me.
And if I want some "the world is better for my being here" feedback, there are a dozen or more local non-profits, churches, and frail old friends/neighbors who would happily welcome my assistance or a visit on any given day.