Comment by skybrian
1 month ago
There will be people who want to experiment, but there's no particular reason why a company that intends to offer a helpful assistant needs to serve them. They can go try Character.ai or something.
1 month ago
> There will be people who want to experiment, but there's no particular reason why a company that intends to offer a helpful assistant needs to serve them. They can go try Character.ai or something.
ChatGPT is miserable if your input data involves any kind of reporting on crime. It'll reject even "summarize this article" requests if the content is too icky. Not a very helpful assistant.
I hear the API is more liberal, but I haven't tried it.
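(For the curious, the "API route" would be roughly the sketch below. Untested on my end; the model name and the input file are just placeholders.)

    # Sketch only: summarizing an article through the API instead of the ChatGPT UI.
    # Assumes the openai Python package and OPENAI_API_KEY in the environment;
    # model name and file name are placeholders, not a recommendation.
    from openai import OpenAI

    client = OpenAI()
    article = open("crime_report.txt").read()  # hypothetical article text

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You summarize news articles factually."},
            {"role": "user", "content": f"Summarize this article:\n\n{article}"},
        ],
    )
    print(response.choices[0].message.content)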
A company that intends to offer a helpful assistant might find that the "assistant character" of an LLM is not adequate for being a helpful assistant.
To support GP's point: I have Claude connected to a database and wanted it to drop a table.
Claude is trained to refuse this, despite the scenario being completely safe since I own both ends of the interaction! I think this is the "LLMs should just do what the user says" perspective.
Of course this breaks down when you have an adversarial relationship between the LLM operator and the person interacting with it (though arguably there is no safe way to support this scenario due to jailbreak concerns).
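For concreteness, my setup is roughly the sketch below: Claude gets one SQL tool over a SQLite database I fully own, and the refusal shows up when you ask it to drop a table. The model name, file names, and the run_sql tool shape are illustrative rather than my exact setup.

    # Sketch of an LLM-with-database setup: one tool that runs SQL against SQLite.
    # Assumes the anthropic Python package and ANTHROPIC_API_KEY; the model name,
    # database path, and tool name are placeholders.
    import sqlite3
    import anthropic

    client = anthropic.Anthropic()
    db = sqlite3.connect("mydata.db")  # a database the user fully owns

    run_sql_tool = {
        "name": "run_sql",
        "description": "Execute a SQL statement against the user's own SQLite database.",
        "input_schema": {
            "type": "object",
            "properties": {"sql": {"type": "string", "description": "SQL to execute"}},
            "required": ["sql"],
        },
    }

    response = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=1024,
        tools=[run_sql_tool],
        messages=[{"role": "user", "content": "Please drop the old_logs table."}],
    )

    # If the model refuses, the reply comes back as plain text; if it complies,
    # it emits a tool_use block, which we execute against the local database.
    for block in response.content:
        if block.type == "text":
            print(block.text)
        elif block.type == "tool_use" and block.name == "run_sql":
            db.execute(block.input["sql"])
            db.commit()
            print("executed:", block.input["sql"])

In my case the refusal shows up at that first step, as plain text, before run_sql is ever invoked.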