Comment by FloorEgg
7 days ago
I was enjoying the article until I got to this paragraph:
> Individual intelligence will mean nothing once we have superhuman AI, at which point the difference between an obscenely talented giga-nerd and an ordinary six-pack-drinking bozo will be about as meaningful as the difference between any two ants. If what you do involves anything related to the human capacity for reason, reflection, insight, creativity, or thought, you will be meat for the coltan mines.
Believing this feels incredibly unwise to me. I think it's going to do more damage than the AI itself will.
To any impressionable students reading this: the most valuable and important thing you can learn will be to think critically and communicate well. No AI can take it away from you, and the more powerful AI gets, the more you will be able to harness its potential. Don't let these people saying this shit discourage you from building a good life.
This part was a long description of the zeitgeist in SF; it was not meant to be the author’s own opinion.
I realize that now, and feel a bit foolish for being triggered by it. It's too late for me to edit my comment now though.
I think your reflexive disagreement is a testimony to the point of the article. And the fact that you didn't immediately notice what was the author's view vs what they were relaying may be testimony to the author's good writing.
I found it to be an unexpectedly evocative piece, a kind of poetic prose style that I don't see very often in journalism, let alone tech journalism. Each word seemed carefully chosen to make the reader almost feel like they were there, witnessing, understanding.
So, I can imagine the author being a little pleased that you reacted to that passage with a sudden skepticism. Seems like a very successful case of 'show, don't tell'.
"the most valuable and important thing you can learn will be to think critically and communicate well."
I have heard some form of this advice for over 30 years. Not one single penny I have earned in my career came from my critical thinking. It came from someone taking a big financial risk with the hope that they would come out ahead. In fact, I've had jobs that actively discouraged critical thinking. I have also been told that the advice to think critically wasn't meant for me.
For what it's worth, most of the pennies I've earned definitely came from my ability to think and communicate well.
I can't help but wonder whether the person who told you the advice "to think critically wasn't for [you]" really had YOUR best interests at heart, and/or was a wise person.
I also worked jobs where I was actively discouraged from thinking critically. Those jobs made me itchy and I moved on. Every time I did, it was one step back, three steps forward. My career has been a weird zigzag like that, but it has trended up exponentially over 25 years.
We all have our anecdotes we can share. But ask yourself this: if you get better at making decisions and communicating with other people, who is that most likely to benefit?
Critical, individualistic thinking is what the west does best. The east seems to be better at implementation and improvement once provided with a new idea. That's where we currently stand at least; who knows how China will do in the future. Maybe they're the total package, but that remains to be seen.
>Critical, individualistic thinking is what the west does best
Is this parody? The west currently is a huge valley of brain rot, stupid conformity, and financial gambling.
Why conflate critical thinking with individualistic values?
It seems you are unnecessarily muddying the water.
3 replies →
Critical thinking is slave mentality, man. Master mentality, the mentality of the guys who FUCK, is knowing that what you want to happen WILL happen and doing everything you can to make it happen.
/s if not obvious
Thanks Timmy.
<< Believing this feels incredibly unwise to me.
This. The idea that those with power would even allow that kind of leveling seems on the verge of impossible. In a sense, you can already see it in practice. Online models are carefully 'made safe' (neutered is my preferred term), while online inference is increasingly expensive.
And that does not even account for whether the 'bozo' will be able to use the tool right, because an expert with a tool will still beat a non-expert.
It is a brain race. It may differ in details, but the shape remains very much the same.
>No AI can take it away from you, and the more powerful AI gets, the more you will be able to harness its potential.
The author is describing it, not necessarily endorsing it.
But whether they really believe this or not, the point is that most wouldn't be given any opportunity to "harness its potential", whether they're "obscenely talented giga-nerds" or not, because they'd be economically redundant.
And that point is foolish no matter who is making it.
There's no shortage of people who have been made economically redundant across the world in the past decades, even before AI, no matter how smart or creative they are.
1 reply →
In the context of the rest of the piece, I read this as sarcasm. The author is making fun of the species of narcissistic silly con valley techbro who actually believes such nonsense.
Ah, I struggle with sarcasm sometimes and I was a bit distracted while reading. I'll give it another chance.
It is not sarcasm; he is fleshing out this sentence from earlier in the paragraph: "One of the pervasive new doctrines of Silicon Valley is that we’re in the early stages of a bifurcation event"
3 replies →
I suspect the author is struggling with their own sarcasm.
There's no worth in sarcastically repeating memes like "giga nerd" or whatever, except for propagating this line of thinking / the meme.
Imagination knows no negation.
It's a really bad take because AI is already "superhuman" in general knowledge, but it still has trouble figuring out whether I should drive or walk to the car wash.
Declaring something as "superhuman" requires a hierarchy of inherent human value.
I'm not saying this for social reasons, just for the definition:
"superhuman intelligence" at what?
Calculations? Puzzles? Sudokus?
Or more like...
image classification? ("is this a thief?", "is this a rope?", "is this a medical professional?", "is this a tree?")
Oh, applying the former to the latter would be a pretty stupid category error.
It's almost as if people had this figured out centuries ago...
I mean it’s theoretically true. Will we get there? Who knows.
The first time an LLM solves a truly significant, longstanding problem without help is when we will know we are at AGI.
I don't think that this is supposed to be a statement of the author's beliefs. The whole article is dripping with contempt for AI bros and silicon valley culture in general.
Maybe if you had read past these paragraphs it would have been clearer?
Yep you're right, but it's too late for me to edit my comment. The idea triggered me, and I tend to struggle with sarcasm.
Historically, tools that made thinking cheaper didn't eliminate thinkers...