Comment by throwaway115
2 years ago
This whole thing is starting to feel like another Sam Altman spotlight production. There's enough evidence to show no wrongdoing, but it was handled in a way that made people think there was a scandal. Maximum spotlight for relatively low risk. I wonder if people will get tired of being jerked around like this.
I'm genuinely not sure what you're trying to say here. Are you claiming that this was somehow engineered by Altman on purpose to draw attention, because all publicity is good publicity? Or engineered by his enemies to throw mud at Altman, because if you throw enough some of it will stick?
Occam's Razor argues that Sam simply wanted ScarJo's voice, but couldn't get it, so they came up with a clone that is probably legal on a technicality but ethically murky.
> they came up with a clone that is probably legal on a technicality but ethically murky.
Isn't that what OpenAI does all the time? Do ethically murky things, and when people react, move the goalposts by saying "Well, it's not illegal now, is it?"
I would like to think that a normal person, having not been able to hire voice work from a specific well-known actor, and wanting to avoid any appearance of impropriety, would use a completely different voice instead. Sam isn't dumb; he knew the optics of this choice, but he chose it anyway, and here we all are, talking about OpenAI again.
"Talking about OpenAI again" yes. But also reinforcing the "too questionable for me to ever willingly use" image I have of Sam, OpenAI, and their projects.
Maybe that's just me, and it is a win for them on the whole. Hopefully not.
IDK, we were talking about OpenAI regardless. I think this will be a hit on him as a leader. How big? I don't know. It seems to me this is not a brilliant move.
It's not a clone. What is ethically murky about it?
You want Brad Pitt for your movie. He says no. You hire Benicio Del Toro because of the physical resemblance. Big deal.
Having seen "Her" and many other Scarlet Johansson movies, I didn't think for a second that GPT-4o sounded like her. On the contrary, I wondered why they had chosen the voice of a middle aged woman, and whether that was about being woke. It wasn't until social media went hysterical, I realized that the voices were sort of similar.
If it's a sequel and Brad Pitt was in the first movie and you use trickery to make people think he's in the second movie, there's a case. See Crispin Glover, the dad from Back to the Future, who was NOT the upside-down dad in BTTF2. They settled for 760k USD.
> I wondered why they had chosen the voice of a middle aged woman
AIs and automated systems, real and fictional, have traditionally been voiced by women more than men. Apparently there was some research finding that a woman's voice "stood out" to all-male bomber crews; the B-58 was issued with recordings of Joan Elms (https://archive.org/details/b58alertaudio), and this was widely copied.
(obvious media exception: HAL)
You then tweet out, "look it's Brad Pitt in my movie".
> I wondered why they had chosen the voice of a middle aged woman, and whether that was about being woke
Really weird line of reasoning. Siri, Alexa, Google Home, etc. all use women's voices.
> for relatively low risk
This was rocket fuel for activists trying to get a nationwide personality rights law on the books. That would almost certainly increase costs for OpenAI.
> That would almost certainly increase costs for OpenAI.
And every one of its competitors. I think regulatory capture would be just as much, if not more, of a victory for OpenAI.
The evidence shows wrongdoing with ass-covering.
I think people will get really sick of all the drama once the paperclips start chiming.
Clippy was ahead of his/her/its time.
xer
I don't think you understand. It's extremely well established in law that you can't approach someone to voice an advert for you, get told no, and then hire an impersonator to do it. Take all the AI hype bullshit and the cult-of-personality bullshit out of it. What Altman did is very standard and very clearly not allowed. He will end up paying for this in monetary terms, and paying further for it in reputation damage, in that no one can trust OpenAI to conduct business in good faith.
> It's extremely well established in law that you can't approach someone to voice an advert for you, get told no, and then hire an impersonator to do it.
Can you explain and/or cite the legal basis here? What cases? What law?
It's termed personality rights[1], and this would be appropriation of her likeness. There's a good reason famous actors actually get commercial work and we don't just hire soundalikes all the time.
[1]:https://en.wikipedia.org/wiki/Personality_rights#United_Stat...
This assumes that the voice actress was an impersonator. By her own account, no one who knows her has ever said her voice sounds like Scarlett Johansson (personally, I agree). And she was auditioned and hired before SJ was even approached. I don't think this falls under the "very standard" scenario you reference.
Grab 'em by the nothing burger.
You don't get a reputation as the psychopath among Silicon Valley CEOs for nothing.
I honestly don't understand how delusional you have to be to think OpenAI wanted this to happen.
It's a very cheap way to get people to realize GPT-4o is something new.
So they planned to remove ChatGPT's most popular voice, causing anger among many of their customers?
If I didn't much care for my critics, then letting them invent a story I can easily rebut is worth waiting a few days, knowing full well I can publish my rebuttal widely whenever I want.
An ordinary person worries about ever having to deal with the legal system. A big company deals with it all the time.
I mean clearly having Scarlett Johansson on board was plan A.
Taking the voice offline and then revealing it was a recording of someone else who coincidentally sounded exactly the same is definitely plan B or C, though.
I don't understand how you can trust OpenAI so much to think it was all an accident.
Read what I said again
> I honestly don't understand how delusional you have to be to think OpenAI wanted this to happen.
(1) I've become tired of the "I honestly don't understand" prefix. Is the person saying it genuinely hoping to be shown better ways of understanding? Maybe, maybe not, but I'll err on the side of charity.
(2) So, if the commenter above is reading this: please try to take all of this constructively. There are often opportunities to recalibrate one's thinking and/or write more precisely. This is not a veiled insult; I'm quite sincere. I'm also hoping the human ego won't be in the way, which is a risky gamble.
(3) Why is the commenter so sure the other person is delusional? Whatever one thinks about the underlying claim, one would be wise to admit one's own fallibility and thus uncertainty.
(4) If the commenter were genuinely curious why someone else thought something, it would be better not to presuppose they are "delusional". Doing that makes it very hard to be curious and impairs a sincere effort to understand (rather than dismiss).
(5) It is muddled thinking to lump the intentions of all of "OpenAI" into one claimed agent with clear intentions. This just isn't how organizations work.
(6) (continuing from (5)...) this isn't even how individuals work. Virtually all people harbor an inconsistent mess of intentions that vary over time. You might think this is hair-splitting, but if you want to _predict_ why people do specific irrational things, you'll find this level of detail is required. Assuming a perfect utility function run by a perfect optimizer is wishful thinking and doesn't match the experimental evidence.
I honestly don't understand why people care about this story at all.
Goes to character
I honestly don't understand how delusional you have to be to not think OpenAI wanted this to happen.