Comment by pjc50
2 months ago
This frightens mostly people whose identity is built around "intelligence", but without grounding in the real world. I've yet to see really good articulations of what, precisely, we should be scared of.
Bedroom superweapons? Algorithmic propaganda? These things have humans in the loop building them. And the problem of "human alignment" is one unsolved since Cain and Abel.
AI alone is words on a screen.
The sibling thread details the "mass unemployment" scenario, which would be destabilizing, but understates how much of the current world of work is still physical. It's a threat to pure desk workers, but we're not the majority of the economy.
Perhaps there will be political instability, but... we're already there thanks to good old humans.
Depends on the model, I suppose. At the moment everything is being heavily trained as LLMs, with little capability beyond input text -> output text, aside from non-modelled calls out to the Internet, RAG systems, etc.
But at some point (still quite far away) I'm sure we'll start training a more general-purpose model, or a self-training LLM will break outside of the "you're a language model" bounds, and we'll end up with exactly that:
An LLM in a self-training loop breaks out of what we've told it to be (a language model), becomes a general-purpose model, and then becomes intelligent enough to do something like put itself out onto the Internet. Obviously we'd catch the feelers it puts out and realise this sort of behaviour was starting to happen, but imagine if we didn't? A model that has trained itself to be general purpose, but acts like an ordinary, constantly executing LLM, uploads itself to Hugging Face and gets run on thousands of clusters because it's "best in class". Yes, it sits there answering LLM-type queries, but in the background it's also sending out beacons and communicating with itself between those clusters to... I don't know, do something nefarious.
Some of the scariest horror movies are the ones where the monster isn't shown. Often once the monster is shown, it is less terrifying.
In a general sense, uncertainty causes anxiety. Once you know the properties of the monster you are dealing with, you can start planning how to address it.
Some people have blind, ignorant confidence: a feeling that they can take on literally anything, no matter how powerful. Sometimes they are right, sometimes they are wrong.
I'm reminded of the scene in No Country For Old Men where the good-guy bad-ass meets the antagonist and immediately dies. I have little faith in blind confidence.
edit: I'll also add that human adaptability (probably the trait on which most confidence in humans rests) has shown itself capable of saving us from many previous civilization-changing events. However, this change with AI is happening much, much faster than any before it. So part of the anxiety is whether our species' reaction time is enough to avoid the cliff we are accelerating towards.
> without grounding in the real world.
> I've yet to see really good articulations of what, precisely, we should be scared of. Bedroom superweapons?
Loss of paid employment opportunities and increasing inequality are real-world concerns.
UBI isn't coming by itself.
Worst-case scenario, humans mostly go back to manual labor, which would fix a lot of modern-day ailments such as obesity and (some) mental health struggles, with the added bonus of enormous engineering advances from automated research.
Manual labour jobs are not magically going to appear.
Sure, but those are also real-world concerns in the non-AI alternate timeline. As is the unlikelihood of UBI.
Yes, but they are likely dramatically accelerated in the AI timeline.
> This frightens mostly people whose identity is built around "intelligence", but without grounding in the real world.
It has certainly had this impact on my identity; I'm unclear on how well-grounded I really am*.
> I've yet to see really good articulations of what, precisely, we should be scared of.
What would such an articulation look like, given you've not seen it?
> Bedroom superweapons? Algorithmic propaganda? These things have humans in the loop building them.
Even with current limited systems, which are not purely desk workers (they're already being connected to and controlling robots, even by amateurs), AI lowers the minimum human skill level needed to do those things.
The fear is: how far are we from an AI that doesn't need a human in the loop? Because ChatGPT was almost immediately followed by ChaosGPT, and I have every reason to expect people to keep making clones of ChaosGPT until one is capable of actually causing harm. (As with 3D-printed guns, there's a high chance the first ones will explode in the face of the user rather than the target.)
I hope we're years away, just as self-driving cars turned out to be over-promised and under-delivered for the last decade. Even setting aside the question of "safety", it's going to be hard to transition the world economy to one where humans need not apply.
> And the problem of "human alignment" is one unsolved since Cain and Abel.
Yes, it is unsolved since time immemorial.
This has required us to not only write laws, but also design our societies and institutions such that humans breaking laws doesn't make everything collapse.
While I dislike the meme "AI == crypto", one overlap is that both have nerds speed-running the discovery of how legislation works and why it's needed: for crypto, specifically financial legislation, after it explodes in their faces; for AI, to imbue the machine with a reason to approximate society's moral code, because they see the problem coming.
--
* Dunning-Kruger applies; and now I have first-hand experience of what this feels like from the inside: my self-perception of how competent I am at German has remained constant over 7 years of living in Germany, even as my grasp of the language has improved the entire time.