Comment by malfist
3 months ago
Oxycontin certainly worked, and the markets demanded more and more of it. Who are we to take a moral stand and limit everyone's access to opiates? We should just focus on making a profit since we're filling a "need"
Using LLMs doesn't kill people. I'm sure there are some exceptions, like the ChatGPT-linked suicide that was in the news, but not to the degree of oxycontin.
>Using LLMs doesn't kill people
Guess you missed the post where lawyers were submitting legal documents generated by LLMs. Or people taking medical advice and ending up with bromide poisoning. Or the lawsuits around LLMs softly encouraging suicide. Or the general AI psychosis being studied.
It's way past "some exceptions" at this point.
Besides the suicide case, I don't know of any examples where that has actually killed someone. Someone could search on Google just the same and ignore their symptoms.
LLMs generate text. It is people who decide what to do with it.
Removing all personal responsibility from this equation isn't going to solve anything.
Not yet, maybe. Once we factor in the environmental damage that generative AI, and all the data centers being built to power it, will inevitably cause, I think it will become increasingly difficult to make the assertion you just did.
You're using data centers to read and post comments here.
Your comment is valid as a criticism of an "unfettered free market", but it further proves my point that things that work will win.