Comment by johnpdoe1234

11 hours ago

Gunpowder (weapons) and atomic tech (energy, materials, weapons) are heavily regulated across most of the planet, because the risks of giving every company or person free access to them for their own purposes, without strong guardrails, clearly outweigh the benefits.

The fact that something exists doesn't mean that making it readily available is the only option, particularly when it has potentially disastrous consequences at scale. We are choosing to make it available to everyone, fully unregulated, and that choice will prove either beneficial or detrimental to society at some point.

I don't think it is inevitable; I think it is a conscious choice made by a few who have their own, and only their own, interests in mind.

As a technologist, I am amazed at this tech and see some personal benefits. As a human, I am terrified of the potential net negative effects, and I am having trouble reconciling those two feelings.

The challenge is that enforcing a ban would presumably require strict incursions into personal freedoms, organized at a scale where AI-based solutions would be particularly effective and thus tempting. Paradoxical, I know.

On the other hand, assuming the dangers are real, you lose by default if you do nothing.

  • Not sure I agree.

    One cannot (in most of the planet) go to the supermarket and buy an M16 and a box of hand grenades, or get hold of a couple of kg of plutonium because they want some free energy at home. We also have rules in place governing what an individual or company can and cannot do, from the point of view of the greater good. I cannot kill my neighbour for my own benefit (or purposefully destroy his life) without consequences. A myriad of things are not allowed, and I don't see people complaining that any of this is an incursion into personal freedoms.

    The reason people have accepted these rules is that we have already proven that free access to those things can be catastrophic. We haven't proven that yet with AI. But I don't see much difference between those established and well-accepted rules and a rule that says: a company cannot release, or use for its own benefit, a technology that will impact the need for humans at scale, because of the impact (again, at scale) that it would have on society.

    In other words, if you are a company and have the potential to release a product, or buy a product from a provider that would cause mass unemployment, should you be legally allowed to do so? I do not think so.

    • That’s a fair objection. Having ruminated on it some more, I’ll admit it might be tenable.

      As for achieving an effective ban, occupational collapse might be the stronger motivator once workplace adoption broadens and accelerates, but the risk of epistemic collapse might register sooner with the general public, which is already broadly suffering from slop.

      Like Bill Gates, I wonder why it’s not yet become a theme in mainstream politics.