Comment by atleastoptimal
2 days ago
This is gonna keep happening with every AI advance until humans are an absolute bottleneck in every domain. It may take a bit of time for some professions, but the writing is on the wall. This will be the greatest shift in human history, and I think a lot of people will have trouble grappling with it because it's not fun to think about being made irrelevant.
The only thing that will slow AI down is massive universal international regulation. Human intelligence really isn't the be-all and end-all of intelligence in general; it's just a stepping stone. I feel many on this site don't want to accept this because their intelligence has been such a valuable tool and source of personal pride/identity for them for so long.
Humans have more access to the real world. These models have to tokenize everything and put it into words, but so much information exists outside of words. These models may well be superintelligent, but their intelligence is locked inside a cage (the tokenizer).
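To make the point concrete, here's a toy word-level tokenizer in Python. This is purely my own illustration, not any real model's tokenizer (real ones use large subword vocabularies), but the bottleneck is the same in kind: everything the model perceives has to pass through a fixed, discrete mapping first.

```python
# Toy sketch (my own illustration, not any real model's tokenizer):
# everything the model "sees" must first be squeezed through a fixed,
# discrete vocabulary.
vocab = {"the": 0, "cat": 1, "sat": 2, "<unk>": 3}

def tokenize(text: str) -> list[int]:
    # Anything outside the vocabulary collapses to <unk>; information
    # that never fit into words is not representable here at all.
    return [vocab.get(word, vocab["<unk>"]) for word in text.lower().split()]

print(tokenize("The cat sat quietly"))  # -> [0, 1, 2, 3]; "quietly" is lost
```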
Even in the world where AI has full control of lights-out factories (again, doubt it: something goes wrong at the factory, you gotta send a guy in), human beings still need to look each other in the eye and communicate; they need to touch each other. Not only that, they need to be seen and acknowledged by other human beings.
"AI" cannot ever replace this. People whose intelligence is their pride/identity kind of miss this. Stupid people are capable of loving each other more deeply and more completely than any machine ever will love them.
You basically just said people will be the janitors, the on-site fixers, and the personification of decisions and that they will still be able to live fulfilling lives in the real world. I think that is perfectly in line with what the parent wrote.
What is all of this for if the result is that human beings are "made irrelevant"? If these LLMs truly become as game-changing as so many say they will be, then can we agree that it's time to stop thinking that a person's worth equals their economic output?
I agree with you; the problem currently is that the balance of power has shifted so far in favor of the 0.1%. And those people will not want to give up the power that they already have.
I fear for a future where the technocrats win out and we end up in an "Altered Carbon" scenario. We are on the precipice of AI and robotics equalizing the playing field for everyone, but only if the power is held by the people and not the few at the top with the most resources.
Not sure how to steer the ship in that direction, but I do have a few ideas...
> What is all of this for if the result is that human beings are "made irrelevant"?
I think your views on this will radically differ if you earn 200k a year versus 2k a year.
Which is maddening. Too many people lack class consciousness.
An engineer making 200k a year has more in common with someone making 2k a year than they do with the Elon Musks of the world.
This delusion is rampant in professional spheres like medicine and tech.
No, that won’t happen, because these tools are being built based on investments in private goods.
It would be something if there were national level LLM tools, owned and operated as commons.
Things that were once operated as commons became private goods. There is no reason that it can't go the other way.
It is definitely past time to start thinking outside of the economy.
Although must we deal in "worth" at all at that point? If two people have conflicting visions, it shouldn't be the one who is "worth" more that gets their way, it should be the one whose vision is most appealing to the rest of us.
No, I disagree, and for everyone who bemoans capitalism or the power of money, it's important to understand the foundational arguments from which economics is born.
Wants are infinite, and resources are limited. Economics is the objective method of ordering a system to achieve subjective ends.
For better or worse, money is a medium of exchange and signal of what people are willing to allocate for their needs. Unless you create economic markets, information markets, and political systems that are built to handle the forces being harnessed by society, you have failure states.
In other words, taxes need to bleed off wealth, to ensure that it cannot buy advantage in other fields (media, politics) and break the even playing field in those other economies.
What a load of guff.
AI models still produce galling inconsistencies and errors for me on a daily basis.
I think it's easy to ignore all the times the models get things hilariously wrong when there are a few instances where the output really surprises you.
That said, I don't really agree with the GP comment. Humans would only be the bottleneck if we knew these models got things right 100% of the time, but with a model like o3-pro it's very possible it'll just spend 20 minutes chasing down the wrong rabbit hole. I've often found that prompting o4-mini gave me results that were pretty good most of the time while being much faster, whereas with base o3 I usually have to wait 2-3 minutes and hope that it got things right and didn't make any incorrect assumptions.
Same.
I find LLMs to be useful, but my day-to-day usage of them doesn't fit the narrative of people who suggest they are creating massive complex projects with ease.
And if they are, where's the actual output proof? Why don't we see obvious evidence of some massive AI-powered renaissance, instead of a never-ending stream of anecdotes that read like astroturf marketing by AI companies?
Speaking of which, astroturfing seems like the kind of task LLMs should excel at…
I think too many people call this intelligence, and it results in intuitions that are useless, waste time, and push the day we actually understand this moment further into the future.
The best I've got is that there are two frames of assessment people are using:
1) Output frame of reference: The output of an LLM is the same as what a human could make.
2) Process frame of reference: The process at play is not the same as human thinking.
These two conversation streams end up in contradictions when they engage with each other. Yes, the tools are impressive. The tools aren't thinking. Etc.
A useful analogy is rote learning: many people have passed exams by memorizing textbooks. The output is indistinguishable from that of someone who manipulates a learned model of the subject to understand the question and provide the answer.
> unilateral international regulation
is an oxymoron/contradiction
sorry I meant "universal" or "omnilateral"
Did you mean global regulation?
What good is intelligence if there is nobody with the money to pay for it? We run our brains on a few thousand calories a day. Who is going to pay to provide the billions of calories it takes to run/cool GPUs all day long if there are no humans with marketable skills?
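For a rough sense of scale, here's a back-of-the-envelope comparison in Python. The numbers are my own assumptions (roughly 700 W for one H100-class datacenter GPU, ~2,000 kcal/day for a typical adult, 1 kWh ≈ 860 kcal), not figures from this thread:

```python
# Back-of-the-envelope sketch with assumed round numbers (not from the thread).
KCAL_PER_KWH = 860          # 1 kWh is roughly 860 kcal

human_kcal_per_day = 2000   # typical adult food-energy budget (assumption)
gpu_watts = 700             # roughly one H100-class accelerator at load (assumption)
gpu_kcal_per_day = gpu_watts * 24 / 1000 * KCAL_PER_KWH  # ~14,400 kcal

print(f"one GPU: {gpu_kcal_per_day:,.0f} kcal/day vs one human: {human_kcal_per_day:,} kcal/day")
# -> a single GPU burns roughly 7x a person's daily food energy, before
#    counting cooling; clusters run tens of thousands of them.
```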
“No marketable skills” seems pretty unlikely if you look beyond office work.
Genuine question: I've seen this thrown around a lot. Do you count yourself in this hypothetical situation where society returns to physical labor, or do you think you're immune from being automated?
AIs will pay other AIs through various means of exchange
Assuming AIs need humans in that way is like being a tribe of monkeys and saying:
“What good is being human if they don’t have bananas to pay? Monkeys only need bananas; humans need clothes, houses, cars, gas. Who is going to pay the humans in bananas if monkeys have all the bananas?”
Yes, people will start asking "when must we kill them?"