Comment by com2kid

14 hours ago

They are trained on public information from the Internet! Nothing they know is dangerous!

It is all public info. Freely auditing an intro chemistry course at any university will teach far more "dangerous" knowledge than anything an LLM refuses to say.

There is a case against automating attacks with LLMs, but that ship has already sailed, as those protections are apparently trivial to work around.

There is a case to be made that the sheer convenience of it all could enable someone in crisis. Some of these prompts are arguably good to keep blocked.

Who is responsible for the real-world harms?