Comment by DonaldPShimoda · 9 hours ago

> At the very least, I think there is a need for oversight of how companies building LLMs market and train their models. It's not enough to cross our fingers and hope that the "safeguards" they add to detect certain phrases or topics will be enough to prevent misuse or danger. They don't have sufficient financial incentive to do that of their own accord beyond the absolute bare minimum needed to give the appearance of caring, and that simply isn't good enough.

I work on one of these products. An incredible amount of money and energy goes into safety. Just a staggering amount. Turns out it’s really hard.