
Comment by SerCe

1 day ago

This reminds me of users getting blocked for asking an LLM how to kill a BSD daemon. I do hope that there'll be more and more model providers out there with state-of-the-art capabilities. Let capitalism work and let the user make a choice; I'd hate for my hammer to tell me that it's unethical to hit this nail. In many cases, getting a "this chat was ended" isn't any different.

I think that isn't necessarily the case here. "Model welfare" to me speaks of the model's own welfare. That is, it applies when abuse from a user is targeted at the AI itself: extremely degrading behaviour.

Thankfully, the current generation of AI models (GPTs/LLMs) is immune, as they don't remember anything beyond what's fed into their immediate context. But future techniques could give AIs a genuine memory and personality, where they learn and retain things across all future interactions with anyone (the equivalent of fine-tuning today).

As an aside, I couldn’t help but think about Westworld while writing the above!