Comment by zestyping
1 month ago
When you generate real-time video of realistic-looking talking characters, the definition of success is fooling people into believing they are talking to a real person when they aren't.
If you pursue this, your explicit goal is deception, and it's a massively harmful kind of deception. I don't see how you can claim to be operating ethically here if that's your goal.
Do you think the same about text that is indistinguishable from human-written text (LLM chatbots)? Or voice that is indistinguishable from a human talking?
Fraud and impersonation are already illegal. There's a difference between a tool and the actions people take with it.
There are tons of useful applications of interactive avatars - from corporate training to kids' education to language learning and more. Plus, why would you want to stop this little guy from existing in the world? :) https://lemonslice.com/try/alien
I don't think the same of them, because they are not the same thing. Can you not see that the potential for harm is far greater here? You can't simply ignore the potential uses of the technology you create. You have the choice to design your technology so that it retains its usefulness while limiting the harm; have you given any thought to how you could do that?
The alien is a diversion from the concern; I'm talking about realistic human avatars. Let's stay focused on that.
Let me suggest a worthwhile exercise; it takes just ten minutes. In what ways would realistic human avatars make deception more effective, or more scalable, than was previously possible?
Come up with three scenarios, and let's talk about them, honestly and thoughtfully.