Comment by HeavyStorm

2 days ago

Nay-sayers need to decide whether they fear AI because AI is dumb and will fuck up, or because AI is smart and will take over.

Silly calling Simon a nay-sayer.

Are you a fanatic who thinks that anyone pointing out any limitation of current models is a nay-sayer?

Like, if someone says they wouldn't wanna get a heart transplant operation done purely by GPT5, are they a nay-sayer, or is that just reflecting reality?

  • How did you get from what I said to "fanatical"...?

    I don't have the slightest idea who "Simon" is and I'm taking the post at face value: it contradicts itself, and that's a bad argument.

    Just think about it... In this scenario, management screws up a formula through AI... which, at least at this point, will surface eventually - not all of us are math ignorant - so Brenda gets her position back and upper management loses trust in AI. That's the likely outcome: the Brendas of the world will suffer until upper management realizes its mistake, but at the end of the day they have _not_ lost their value, given that the post says AI screws up.

    But this is clearly not the conclusion the author ("Simon") intends - they believe AI will erode Brenda's value.

    That's what I'm saying - AI can't be both incapable and a job menace. For it to threaten jobs like Brenda's, it needs to be very capable.

    And sorry, but "heart transplant" by a transformer model is laughable. Writing formulas, on the other hand, isn't.

Both are valid concerns, no need to decide. Take the USA: they are currently led by a patently dumb president who fucks up the global economy, and at the same time they are powerful enough to do so!

For a more serious example, consider the Paperclip Problem[0]: a very smart system that destroys the world through very dumb behaviour.

[0]: https://cepr.org/voxeu/columns/ai-and-paperclip-problem

  • The paperclip problem is a bit hand-wavy about intelligence. It is taken as a given that unlimited intelligence would automatically win, presumably because it could figure out how to do literally anything.

    But let's consider real life intelligence:

    - Our super geniuses do not take over the world. It is the generationally wealthy who do.

    - Super geniuses also have a tendency to be terribly neurotic, if not downright mentally ill. They can have trouble functioning in society.

    - There is no thought here about different kinds of intelligence and the roles they play. It is assumed there is only one kind, and that AI will have it in the extreme.

    • To be clear, I don't think the paperclip scenario is a realistic one. The point was that it's fairly easy to conceive of an AI system that's a savant, and therefore dangerous, in a single domain, yet entirely incapable of grasping the consequences or wider implications of its actions.

      None of us knows what an actual artificial intelligence really looks like. I find it hard to draw conclusions from observing human super geniuses, when their minds may have next to nothing in common with the AI. Entirely different constraints might apply to it, or none at all.

      Having said all that, I'm pretty sceptical of an AI takeover doomsday scenario, especially if we're talking about LLMs. They may turn out to be good text generators, but not the road to AGI. But it's very hard to make accurate predictions in either direction.


  • I get your point and I understand it, but the OP's argument is an anecdote, right? It loses force when it incorporates two opposites like this.

    Reading it, what I understood was: ah, so Brenda is safe - co-pilot will screw up, she will point it out, and management will learn not to trust the bot.

    And what I believe OP intended is "Brenda will lose her job!"

    What I mean is: yes, both are valid, but conflating them in a single anecdote makes for a weak argument.