Comment by BostonFern

12 hours ago

The challenge is that enforcing a ban would presumably require strict incursions into personal freedoms organized at a scale where AI-based solutions would be particularly effective and thus tempting, paradoxically.

On the other hand, assuming the dangers are real, you lose by default if you do nothing.

Not sure I agree.

One cannot (in most of the planet) go to the supermarket and buy an M16 and a box of hand grenades, or get hold of a couple of kg of plutonium because they want some free energy at home. We also have rules in place about what an individual or company can and cannot do from the point of view of the greater good. I cannot go and kill my neighbour for my benefit (or purposefully destroy his life) without consequences. A myriad of things are not allowed, and I don't see people complaining about any incursion into personal freedoms.

The reason people have accepted these rules is that we have already proven that access to those things can be catastrophic. We haven't proven that yet with AI. But I don't see much difference between those established and well-accepted rules and a rule that says: a company cannot release, or use for its own benefit, a technology that will impact the need for humans at scale, because of the impact (again at scale) that it would have on society.

In other words, if you are a company with the potential to release a product, or to buy a product from a provider, that would cause mass unemployment, should you be legally allowed to do so? I do not think so.

  • That’s a fair objection. Having ruminated on it some more, I’ll admit it might be tenable.

    As for achieving an effective ban, occupational collapse might be the stronger motivator once workplace adoption broadens and accelerates, but the risk of epistemic collapse might register sooner with the general public, which is already broadly suffering from slop.

    Like Bill Gates, I wonder why it hasn’t yet become a theme in mainstream politics.