Comment by int_19h

1 day ago

> You have a customer facing LLM that has access to sensitive information.

Why? You should never deploy an LLM with more access to information than the user who provides its inputs.
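In practice that principle just means the retrieval or tool layer authorizes every lookup as the end user, not as some privileged service account the model holds. A minimal Python sketch of the idea (all names here are hypothetical; a real system would check against an actual ACL or identity provider):

```python
from dataclasses import dataclass, field

@dataclass
class User:
    user_id: str
    # Documents this user may read; stands in for a real ACL check.
    readable_docs: set[str] = field(default_factory=set)

# Hypothetical document store keyed by doc id.
DOCS = {
    "public-faq": "Return windows are 30 days.",
    "hr-salaries": "Confidential compensation data.",
}

def retrieve_for_llm(user: User, doc_id: str) -> str:
    """Fetch context for the model *as the requesting user*.

    The LLM never gets its own privileged credential: every lookup is
    authorized against the permissions of whoever typed the prompt, so
    the model cannot surface anything that user couldn't read directly.
    """
    if doc_id not in user.readable_docs:
        raise PermissionError(f"{user.user_id} may not read {doc_id}")
    return DOCS[doc_id]

if __name__ == "__main__":
    customer = User("customer-42", readable_docs={"public-faq"})
    print(retrieve_for_llm(customer, "public-faq"))  # allowed
    try:
        retrieve_for_llm(customer, "hr-salaries")    # denied: not in the user's ACL
    except PermissionError as e:
        print("blocked:", e)
```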

Having sensitive information is kind of inherent to the way training slurps up all the data these companies can find. The people who run ChatGPT don't want it to dox people, but they also don't want to filter its training data. They don't want it to tell you how to kill yourself painlessly, but they do want it to know the symptoms of various overdoses.