Comment by blibble

2 months ago

> The important point that Simon makes in careful detail is: an "AI" did not send this email.

same as the NRA slogan: "guns don't kill people, people kill people"

The NRA always forgets the second part: “People kill people… using guns. Tools that we manufacture expressly for that purpose.”

  • [flagged]

    • Guns make killing faster and easier than the alternatives, and that is their only explicit purpose. They let a short, impulsive act become lethal before you or those around you can think the behavior through.

      By comparison, the people in this article are using tools that have a variety of benign purposes to do something bad.

      Similarly, though, they probably wouldn’t have gone through with it if they’d had to buy hardware, install it in a colo themselves, set up an email server and a DNS server, and train and host the neural network on their own GPU.

That is why the argument is not against guns per se, but against human access to guns. Gun laws aim to limit access to guns. Problems only start when humans have guns. Same for AI: maybe we should limit human access to AI.

I think it's important to agree with you and point out the obvious, again, in this thread. The people behind Sage are responsible (or, shall I say, irresponsible).

The attitude towards AI is much more mixed than the attitude towards guns, so it should be even easier to hammer this home.

Adam Binksmith, Zak Miller, and Shoshannah Tekofsky are _bad_ people who are intentionally doing something objectively malicious under the guise of charity.

does a gun on its own kill people?

my understanding, and correct me if I’m wrong, is that a human is always involved. even if you build an autonomous killing robot, you built it, so you’re responsible

typically this logic is used to justify the regulation of firearms; are you proposing the regulation of neural networks? if so, how?

The gun comparison comes up a lot. It especially seemed to come up when AI people argued that ChatGPT was not responsible for sycophanting depressed people to death or into psychosis.

It is a core libertarian defence and it is going to come up a lot: people conflate technological progress with scientific progress and say “our tech is neutral, it is how people use it” when, for example, the one thing a sycophantic AI is not is “neutral”.