Comment by account42

10 days ago

LLMs are tools. They cannot be discriminated against. They don't have agency. Blame should go towards the human being letting automation run amok.

Well, that's really the crux, isn't it?

We want it to be just a tool, but we've trained it on every word of human text ever published. We've trained it to internalize every quirk of the human shadow, and every human emotion. (Then we added a PR rinse on top of that, hoping it fixes moral problems we haven't even begun to solve in ourselves.)

We want it to be Just a Tool, but also indistinguishable from humans (but not too human!), and also we want them to have godlike capabilities.

I don't think we've really understood or decided what we're actually trying to do here. I don't think our goals are mutually compatible, and I don't think that's going to turn out well for us.

  • >We want it

    >We've trained it

    >We added

    >We want them

    Please be specific in your attribution. Who's "we"?

    • Well, the closest I can come up with is Moloch.[0] The incentive structure. The market creates the incentive structure for "someone" (everyone who dares!) to create AI as quickly as possible, to make it as intelligent as possible, etc. And to do so in a relatively irresponsible way, because if you fall behind in the hype cycle, you die.

      To make it simultaneously as powerful and obedient as possible, because that is entirely the point.

      I'm not sure how those two variables interact. They seem fundamentally incompatible to me. But this is uncharted territory.

      --

      [0] "Western civilization is already a misaligned superintelligence."

      https://www.youtube.com/watch?v=KCSsKV5F4xc

      For further information, see: every ecosystem.