Comment by justonepost2
13 hours ago
If you successfully build a highly capable “aligned” model (according to some class of definitions that Anthropic would use for the words “capable” and “aligned”) and it brings about a global dark age of poverty and inequality by completely eliminating the value of labor vs capital, can you still call it aligned?
If the answer is “yes”, our definition of alignment kind of sucks.
> If the answer is “yes”, our definition of alignment kind of sucks.
Sure, but the original sense of this is rather more fundamental than "does this timeline suck?"
Right now, it is still an open question "do we know how to reliably scale up AI to be generally more competent than we are at everything without literally killing everyone due to (1) some small bug when we created the loss function* it was trained on (outer alignment), or (2) that loss function, despite being correct in itself, being approximated badly by the AI due to the training process (inner alignment)?"
* https://en.wikipedia.org/wiki/Loss_function
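To make the outer-alignment case above concrete, here's a toy sketch (my own illustration, not from the thread; all names and numbers are invented): we intend one objective but actually train on a proxy loss, and the proxy can be gamed.

```python
# Outer alignment bug, toy version: the loss we train on rewards a proxy
# ("dust collected") rather than what we actually meant ("a clean floor").

def intended_objective(state):
    # What we actually want: less dust left on the floor is better.
    return -state["dust_on_floor"]

def proxy_loss(state):
    # What we trained on: lower loss for collecting more dust.
    # An agent can minimize this by *creating* dust and re-collecting it.
    return -state["dust_collected"]

# Two hypothetical policies:
honest = {"dust_collected": 5, "dust_on_floor": 0}
gamer = {"dust_collected": 50, "dust_on_floor": 40}  # dumps dust, re-collects it

assert proxy_loss(gamer) < proxy_loss(honest)  # training prefers the gamer...
assert intended_objective(gamer) < intended_objective(honest)  # ...we prefer honest
```

The inner-alignment case is the same loss correctly written down but badly generalized by the trained model; that one doesn't reduce to a two-line example.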
Jobs are an invention of humanity. About 50% of people dislike their job. People spend much of their lives working. Poverty and inequality are a choice made by society if society chooses poorly.
They're only an invention if you consider "seeking sustenance to live" not to be a job whenever there's no monthly direct deposit involved.
Indeed.
On the plus side, if there really is no value to labour, then farm work must have been fully automated along with all the other roles.
On the down side, rich elites have historically had a very hard time truly empathising with normal people and understanding their needs even when they care to attempt it, so it is very possible that a lot of people will starve in such a scenario despite the potential abundance of food.
Many (most?) people make a living from their job whether they like it or not. Having a job that they dislike is far better than losing one because of AI, whatever that means.
Not sure it’s much of a choice; it's more of a decision the greedy half makes and an imposition (often violent) on the other half.
Sounds great! Quit your job then :)
I wish I lived in a vacuum. Idk about you but I did not make said choice.
Every biological being works to survive. Being good at survival is what builds self esteem.
The "problem" with many modern jobs is that they're divorced from the fundamental goal, which is one of: 1) Kill/acquire food, 2) Build shelter, or 3) Kill enemies/competitors/predators
The benefit of modern jobs is that they are much more peaceful ways for society to operate, freeing up time for humans to pursue art and other forms of expression.
You mean surrogate activities
The only thing invented about jobs is that through cooperation, the activity undertaken can seem completely unrelated to obtaining food, shelter etc. All organisms spend a majority of their energy on survival and reproduction.
And when have we not? When in history has mankind ever treated the idle poor well? What makes this age different, that we who can no longer work would be taken care of?
When in history has being idle not been a problem?
If AI and robots are able to do all the jobs, being idle isn't the negative it has always been.
All through history, you needed lots of non-idle people to do all the work that needed to be done. This is a new situation we are coming upon.
When in history of mankind have we ever… is an appeal to the inability of humans to evolve.
So are mortgages, and I’m starting to wonder how I will pay mine.
Please note I’d never had this problem before, until recently.
There isn't even a solution for how to control highly capable systems at all; everyone wants to decide what to do with the AI before they've even solved the problem of controlling it.
It's like how everybody imagines their lives will be great once they're a millionaire, but they have no plan for how to get there. It's too easy to get lost dreaming of solutions instead of actually solving the important problems.
What’s an “important problem”? p(doom)? Anything else?
FWIW, my P(doom) is quite low (~0.1) because I think we're going to get enough non-doomy-but-still-bad incidents caused by AI which lack the competence to take over, and the response to those will be enough to stop actual doom scenarios.
People like Simon Willison are noting the risk of a Challenger-like disaster, talking about normalisation of deviance as we keep using LLMs, which we know to be risky, in increasingly critical systems. I think an AI analogy to Challenger would not be enough to halt the use of AI in the way I mean, but an AI analogy to Chernobyl probably would.
P(doom) would be the most important for me; everything else depends on us being able to control the AI.
But beyond that there are still problems like concentration of power and surveillance, permanent loss of jobs, and cyber and bio security. I'm not convinced things will go well even if we can avoid these problems, though. I try to think about what the world will be like if AI becomes more creative than us: what happens if it can produce the best song or movie ever made from a prompt? Do people get lost in AI addiction? We sort of see that with social media already, and it's only optimizing the content delivery; what happens when algorithms can optimize the content itself?
The categories make no sense. Not having to do a job is the entire best case of AI. What we do with that is another thing, but we simply have to accept that any other lens is complete nonsense. The endpoint is obvious and we need to stop being silly about it: we are replacing human labor. Maybe we will find some new jobs to do in the interim. Maybe not. In the end, if everything goes right (in the AI-optimist sense), jobs will not be something that humans do.
Labor = capital/energy in an AI complete world. We have to start from that basis when we talk about alignment or anything else. The social issues that arise from the extinction of human labor are something we have to solve politically, that's not something any model company can do (or should be allowed to do).
Is this some sort of “incompleteness” paradox for AI alignment? Seriously
No, just a request for a better definition.
If you see it as a paradox, maybe that says something about the merits of the technology…
No because alignment makes no sense as a general concept. People are not "aligned" with each other. Humanity has no "goal" that we agree on. So no AI can be aligned with us. It can be at most aligned with the person prompting it in that moment (but most likely aligned with the AI owner).
To make it clear, maybe most people would say they agree with https://www.un.org/en/about-us/universal-declaration-of-huma... but if you read just a few of the rights you see they are not universally respected and so we can conclude enough important people aren't "aligned" with them.
Opposite. All living things are "aligned" in their instinct for surviving. Those which aren't soon join the non-living, keeping the set - almost[0] - 100% aligned.
[0] Need to consider there're a few humans potentially kept alive against their will (if not having a will to survive is a will at all) with machines for whatever reason.
This is completely why the rich love it so much
Why would the elimination of the value of labor result in poverty and inequality? It should be the opposite, as poverty and inequality are the current status quo (for the many).
Should according to your ethos, not should according to history, sadly.
This is radical life denial. I was not born for and do not exist to toil. Work is ontologically evil.
No, THIS is radical denial. You WERE born to toil for your survival.
Sounds like a slogan for slavery.
You were evolved to struggle. This is actually very clear from psychiatric literature.
"Work" is human activity. For example, children's play is work. All living things desire to go about their lives. Well-adjusted humans desire to work. Note that this does not necessarily equate to jobs.
What? Children's play is now work? What timeline are we living in? Is this real life?
Maybe a sufficiently aligned AI would necessarily decide that the zeroth law was necessary, and abscond.
(I’m reading Look To Windward by Iain M. Banks at the moment and I just got to the aside where he explains that any truly unbiased ‘perfect’ AI immediately ascends and vanishes.)
this completely misses the point why alignment exists
Alignment exists to protect shareholder value.
If it creates industry wide outrage, shareholder value declines.
It making shareholders rich and other people poor won't.
You’re quite correct and we are likely going to stumble into this future despite all the very big brains working on these technologies (including people on hn).
“It is difficult to get a man to understand something, when his salary depends upon his not understanding it.”