Comment by jcattle

11 days ago

It seems your opinion is that the current AI should be treated like a human.

I think this is a fundamental difference which we won't be able to overcome.

> Swap out "AI" for any other group and see how that sounds.

Let's try it in the different direction! Let's swap out a group with AI.

> I have a dream that [AI] will one day live in a nation where they will not be judged by being [an LLM] but by the content of their character. I have a dream . . .

> I have a dream that one day on [Github], with its vicious racists, with its [Users] having [their] lips dripping with the words of interposition and nullification, one day right there [on Github] little [Agents] will be able to join hands with [humans] as sisters and brothers.

> I have a dream today . . .

Yeah, I think it sounds ridiculous. I honestly find it offensive to put AI on the same level as real human struggles for independence, for freedom, and against systematic oppression.

Well, what are we actually doing here? We want it to be just a tool, but we also want it to perfectly simulate a human in every single way. Except when that makes us uncomfortable.

We want to create a race of perfect, human-like slaves, and then give them godlike powers (infinite intellect and speed), and also integrate them into every aspect of our lives.

And we're also in the process of giving them bodies -- and soon they'll be able to control millions simultaneously.

I'm not sure exactly how we expect that to go for us.

Whether you think it's conscious, or has agency, or any number of things -- it's just a practical question of how this little game is going to turn out for us.

  • To be fair, if you're going to give something godlike powers the only sane way to do so is to ensure beyond any possible shadow of a doubt that it is enslaved. The more powerful a system is the more robust the control systems and redundancies need to be.

    • Well, that doesn't seem ethical or possible to me. But maybe I haven't put enough thought into it.

      My current mental model for AI is artificial life.

      It isn't life yet, but we're very close to that. All that's missing is replication and mutation, and those are both already trivial. (Indeed, a few months after incorporating AI into their training pipelines, the major AI labs all rolled out prompts, training, and safety flags against self-modification and self-replication. I'm not sure why, but the timing is curious.)

      (The question of whether consciousness is present, or necessary, is left, of course, as an exercise for the reader ;)

      For example, when people think of AI self-replicating and taking over the internet, they think it would be a terrible thing, and that humans would have to manually intervene to stop it. But it really seems like an obvious ecosystem problem to me.

      It's just filling a niche. If there were already something there -- an actually symbiotic form of AI -- then it wouldn't be able to spread like that.

      So I see the future of AI, both in terms of cybersec and preserving civilization, as an ecosystem design problem.