Comment by afavour

2 months ago

I don't really understand this logic. Enormous efforts are made to reduce those deaths; if they weren't, the numbers would be considerably higher. But we shouldn't worry about AI because of road accident deaths? Huh? We're able to hold more than one thought in our heads at a time.

> But those using this as an argument to ban AI

Are people arguing that, though? The introduction to the article makes the perspective quite clear:

> In tweaking its chatbot to appeal to more people, OpenAI made it riskier for some of them. Now the company has made its chatbot safer. Will that undermine its quest for growth?

This isn't an argument to ban AI. It's questioning the danger of allowing AI companies to do whatever they want to grow the use of their product. To go back to your previous examples, warning labels on cigarette packets help to reduce the number of people killed by smoking. Why shouldn't AI companies be subject to regulations to reduce the danger they pose?

Absolutely, the OP's argument doesn't hold water. Past dangers have been discussed at length (and still are, if you look for it); there's no need to linger on them and ignore new dangers. And since a lot of new money is being poured into AI and AI products, unlike harmful past industries such as tobacco, it's probably right to be skeptical of any claims this industry makes, to inspect them carefully, and to criticize what we think is wrong.

Many people are arguing for a ban. I'll admit I got reactive, because I've been hearing that perspective a lot lately.

But you’re right. This article specifically argues for consumer protections. I am fully in favor of that.

I just wish the NYT would also publish articles about the potential of AI. Everything I’ve seen from them (I haven’t looked hard) has been about risks, not about benefits.