Comment by windexh8er

2 years ago

The thing about the situation is that Altman is willing to lie and steal a celebrity's voice for use in ChatGPT. What he did, the timeline, everything - is sleazy if, in fact, that's the story.

The really concerning part here is that Altman is, and wants to be, a large part of AI regulation [0]. Quite the public contradiction.

[0] https://www.businessinsider.com/sam-altman-openai-artificial...

Altman wants to be a part of AI regulation in the same way Bankman Fried wanted to be a part of cryptocurrency regulation.

  • What's really interesting about our timeline is that when you look at the history of market capture in Big Oil, Telco, Pharma, Real Estate, Banks, Tobacco etc., all the lobbying, bribing, and competition-killing used to be done behind the scenes within elite circles.

    The public hardly heard from or saw the management of these firms in the media until shit hit the fan.

    Today it feels like management is in the media every 3 hours trying to capture the attention of prospective customers, investors, employees etc., or they lose out to whoever is out there capturing more attention.

    So false and contradictory signalling is easy to see. Hopefully out of all this chaos we get a better class of leaders, not a better class of panderers.

  • (AI)tman tries to be, (Bank)man fried to be. Who is letting Kojima name all these villains?

    • You had me thinking we were in some type of simulation for a second.

      Bernie Madoff is another funny name we should throw in there.

  • I always had trouble telling apart those two Sams. Turns out they're the same person.

  • I'm glad I'm not the only one drawing SBF personality comparisons here. I'd throw Martin Shkreli into the mix too for good measure. Awful.

Altman has proven time and again that he is little more than a huckster with respect to technology, and in business he is a stone-cold shark.

A conman, plain and simple.

  • You'd think that Worldcoin would be enough proof of what he is but I guess people missed that memo.

    • I think people tend to assume our own values and experiences have some degree of being universal.

      So scammers see other scammers, and they just think there's nothing wrong with it.

      While normal people who act in good faith see scammers, and instinctively think that there must be a good reason for it, even (or especially!) if it looks sketchy.

      I think this happens a lot. Not just with Altman, though that is a prominent currently ongoing example.

      Protecting yourself from dark triad type personalities means you need to be able to understand a worldview and system of values and axioms that is completely different from yours, which is… difficult. …There's always that impulse to assume good faith and rationalize the behavior based on your own values.

    • Much as I dislike crypto, that's more of "having no sense of other people's privacy" (and hubris) than general scamminess.

      It's a Musk-error not an SBF-error. (Of course, I do realise many will say all three are the same, but I think it's worth separating the types of mistakes everyone makes, because everyone makes mistakes, and only two of these three also did useful things).


  • Not going to lie, he had me. He appeared very genuine and fair in almost all the media he appeared in, like podcasts, but many of his actions are just so hard to justify.

    • He has a certain charm and seeming sincerity when he talks. But the more I see of him, the more disturbing I find him -- he combines the Mark Zuckerberg stare with the Elizabeth Holmes vocal fry.


    • I have exactly the same feeling as I think you do. When you reach the levels of success he has, there will always be people screaming that you are incompetent, evil, and every other negative adjective under the sun. But he genuinely seemed to care about doing the right thing. This, though, is so lacking in basic morals that I have to conclude I was wrong, at least to an extent.


    • This is why it's a mistake to go by "vibes" of a person when they're speaking to an audience. Pay attention to what they do, not what they say.

  • I'm glad more people are thinking this. It's amazing that he got his way back into OpenAI somehow. I said he shouldn't go back to OpenAI and got downvoted universally, both here and on Reddit.

    • Altman's biggest accomplishment is being out of the way. Great work is done despite management, not because of it. It's the ability to hire the right people and get out of their hair. Altman himself has no talents, he is not technical. He is just well-connected in the Valley. But, at least Altman is not the wrecking ball like Elon Musk is, and that's really his only job - to not micromanage.

If this account is true, Sam Altman is a deeply unethical human being. Given that he doesn't bring any technical know-how to the building of AGI, I just don't see the reason to have such a person in charge here. The new board should act.

  • I thought we had already established this when the previous board tried to oust him for failing to stick to OpenAI’s charter. This is just further confirmation.

    > The new board should act

    You mean like the last board tried? Besides the board was picked to be on Altman’s side. The independent members were forced out.

    • And almost every thread on HN had its top-voted comments defending and praising Altman, while shrugging off Ilya et al. It was bizarre and disheartening to see that from this community, of all places.

  • He rubs elbows with very powerful people including CEOs, heads of state and sheiks. They probably want 'one of them' in charge of the company that has the best chances of getting close to AGI. So it's not his technical chops and not even 'vision' in the Jobs sense that keeps him there.

    • Are they really the ones with the best chance now though?

      They're basically owned by Microsoft, they're bleeding technical talent, ethical talent, and credibility, and most importantly Microsoft Research itself is no slouch (especially post-DeepMind poaching) - things like Phi are breaking ground on planets that OpenAI hasn't even touched.

      At this point I'm thinking they're destined to become nothing but a premium marketing brand for Microsoft's technology.

  • He has “The Vision”… It’s the modern entrepreneurship trope that lowly engineers won’t achieve anything if they weren’t rallied by a demi-god who has “The Vision” and makes it all happen.

    • I roll my eyes when somebody says that they’re “the idea person” or that they have “the vision”.

      I’d wager that most senior+ engineers or product people also have equally compelling “the vision”s.

      The difference is that they need to do actual work all day so they don’t get to sit around pontificating.


  • I mean, there have already been some yellow flags with Altman. He founded Worldcoin, whose plan is to airdrop free money in exchange for retinal scans. And the board of OpenAI fired him for (if I've got this right) lying to the board about conversations he'd had with individual board members.

    • WorldCoin is how I first heard of him, and it's what made me think he was a bad actor. I think of it as a red flag, not yellow.

  • > if this account is true, Sam Altman is a deeply unethical human being

    I thought this when he didn't launch Worldcoin in the US but Africa, and consistently upped the ante to the point where he was offering people in the poorer parts of the continent amounts that equalled two months wages or more to scan their retinas.

    Why was that necessary? It wasn't to share the VC windfall.

  • He must be bringing something to the table, as they tried to get rid of him and failed spectacularly. Business is not only about technical know-how.

  • Because so many people ran cover for him, from Paul Graham to the who's-who of Silicon Valley.

> The thing about the situation is that Altman is willing to lie and steal a celebrity's voice for use in ChatGPT. What he did, the timeline, everything - is sleazy if, in fact, that's the story.

Correction: the thing about this whole situation with OpenAI is that they are willing to steal everything for use in ChatGPT. They trained their model on copyrighted data, and for some reason they won't delete the millions of protected works they used to train the model.

  • Using other people's data for training without their permission is the "original sin" of LLMs[1]. That will, at best, be a shadow over the entire field for an extremely long time.

    [1] Just to head off people saying that such a use is not a copyright violation -- I'm not saying it is. I'm just saying that it's extremely sketchy and, in my view, ethically unsupportable.

What is so special about her voice? They could’ve found a college student with a sweet voice and offered to pay her tuition in exchange for using her voice, no? Or a voice actor?

Why be cartoonishly stupid, and a cartoonish arsehole, and steal a celebrity's voice? Did he think Scarlett wouldn't find out? Or object?

I don’t understand these rich people. Is it their hobby to be a dick to as many people as they can, for no reason other than their amusement? Just plain weirdos

  • Scarlett voiced Samantha, an AI in the movie "Her"

    Considering the movie's 11 years old, it's surprisingly on-point with depictions of AI/human interactions, relations, and societal acceptance. It does get a bit speculative and imaginative at the end though...

    But I imagine that movie did/does spark the imagination of many people, and I guess Sam just couldn't let it go.

    • It's not just that. Originally the AI voice in Her was played by someone else, but Spike Jonze felt strongly that the movie wasn't working and recast the part to Johansson. The movie immediately worked much better and became a sleeper hit. Johansson just has a much better fitting voice and higher skill in voice acting for this kind of role, to the extent that it maybe was a make/break choice for the movie. It isn't a surprise that after having created the exact tech from the movie, OpenAI wanted it to have the same success that Jonze had with his character.

      It's funny that just seven days ago I was speculating that they deliberately picked someone whose voice is very close to Scarlett's and was told right here on HN, by someone who works in AI, that the Sky voice doesn't sound anything like Scarlett and it is just a generic female voice:

      https://news.ycombinator.com/item?id=40343950#40345807

      Apparently .... not.

    • Also, I understand that sama considers "Her" his favorite movie. Perhaps, for him, it just had to be ScarJo's voice.

  • > Is it their hobby to be a dick to as many people as they can, for no reason other than their amusement? Just plain weirdos

    They seem to love "testing" how much they can bully someone.

    I remember a few experiences where someone responded by being an even bigger dick, and they disappeared fast.

Some people might see a parallel with SBF here, and how Altman would try to regulate the competition without impeding OpenAI's progress.

> The thing about the situation is that Altman is willing to lie and steal a celebrity's voice for use in ChatGPT.

He lies and steals much more than that. He’s the scammer behind Worldcoin.

https://www.technologyreview.com/2022/04/06/1048981/worldcoi...

https://www.buzzfeednews.com/article/richardnieva/worldcoin-...

> Altman is, and wants to be, a large part of AI regulation. Quite the public contradiction.

That’s as much of a contradiction as a thief wanting to be a large part of lock regulation. What better way to ensure your sleazy plans benefit you, and preferably only you but not the competition, than being an active participant in the inevitable regulation while it’s being written?

  • > That’s as much of a contradiction as a thief wanting to be a large part of lock regulation.

    Based on what I see in the videos from the Lockpicking Lawyer, that would be a massive improvement.

    Now, the NSA and crypto standards: that would have worked as a metaphor for your point.

    (I don't think it's correct, but that's an independent claim, and I am not only willing to discover that I'm wrong about their sincerity, I think everyone writing that legislation should actively assume the worst while they do so).

    • > > That’s as much of a contradiction as a thief wanting to be a large part of lock regulation.

      > Based on what I see in the videos from the Lockpicking Lawyer, that would be a massive improvement.

      A thief is not a lock picker and they don't have the same incentive. A thief in a position to dictate lock regulation would try to have a legal backdoor on every lock in the world. One that only he has the master key for. Something something NSA & cryptography :)


    • > Based on what I see in the videos from the Lockpicking Lawyer, that would be a massive improvement.

      If you've watched his videos then surely you should know that lockpicking isn't even on the radar for thieves as there are much easier and faster methods such as breaking the door or breaking a window.


    • > Based on what I see in the videos from the Lockpicking Lawyer

      The Lockpicking Lawyer is not a thief, so I don’t get your desire to incorrectly nitpick. Especially when you clearly understood the point.


The whole technology is based on fucking over artists; who didn't expect this exact thing?

  • It's not just the artists; anything you do in the digital realm, and anything that can be digitised, is fair game. In the UK, NHS GP practices refuse to register you to see a doctor even when it's urgent, and tell you to use a third-party app to book an appointment. You have to use your phone to take photos of the affected area and provide personal info. I fully expect that data to be fed into some AI and sold without my knowing, with no process for removal of the data should the company go bust. It is preying on the vulnerable when they need help.

    • Important to note that "the NHS" is not a single entity, and the GP practice is likely a private entity owned in partnership by the doctors. There are a number of reasons why individual practices can refuse to register you.

      Take your point about LLMs though.


    • App? What's an app?

      It's a thing you put on your phone

      I don't have a phone

      Well, we can't register you

      You don't accept people who don't have phones? Could I have that in writing please, ..., oh, your signature on that please ...

"Not consistently candid", the last board said.

Like many people who try to oppose psychopaths though, they don't seem to be around much anymore.

Most likely it was an unforced error; there's been a lot of chaos with cofounders and the board revolt, so it's easy to lose track of something really minor.

Like some intern’s idea to train the voice on their favorite movie.

And then they’ve decided that this is acceptable risk/reward and not a big liability, so worth it.

This could be a well-planned opening move of a regulation gambit. But unlikely.

  • This is an unforced error, but it isn’t minor. It’s quite large and public.

    The general public doesn’t understand the details and nuances of training an LLM, the various data sources required, and how to get them.

    But the public does understand stealing someone’s voice. If you want to keep the public on your side, it’s best to not train a voice with a celebrity who hasn’t agreed to it.

    • I had a conversation with someone responsible for introducing LLMs into a process that involves personal information. That person rejected my concern over one person's data appearing in the report on another person. He told me that it will be possible to train the AI to avoid that. The rest of the conversation convinced me that AI is seen as magic that can do anything. It seems to me that we are seeing a split between those who don't understand it and fear it, and those who don't understand it but want to align themselves with it. The latter are the ones I fear the most.


  • I don't think this makes any sense, at all, quite honestly. Why would an "intern" be training one of ChatGPT's voices for a major release?

    If in fact, that was the case, then OpenAI is not aligned with the statement they just put out about having utmost focus on rigor and careful considerations, in particular this line: "We know we can't imagine every possible future scenario. So we need to have a very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities." [0]

    [0] https://x.com/gdb/status/1791869138132218351

  • > easy to lose track of something really minor. Like some intern's idea

    Yes, because we all know the high profile launch for a major new product is entirely run by the interns. Stop being an apologist.

  • It makes a lot more sense that he was caught red-handed, likely hiring a similar voice actress and not realizing how strong identity protections are for celebs.

  • > Like some intern’s idea to train the voice on their favorite movie.

    Ah, the famous rogue engineer.

    The thing is, even if it were the case, this intern would have been supervised by someone, who themselves would have been managed by someone, all the way to the top. The moment Altman makes a demo using it, he owns the problem. Such a public fuckup is embarrassing.

    > And then they’ve decided that this is acceptable risk/reward and not a big liability, so worth it.

    You mean, they were reckless and tried to wing it? Yes, that’s exactly what’s wrong with them.

    > This could be a well-planned opening move of a regulation gambit. But unlikely.

    LOL. ROFL, even. This was a gambit all right. They just expected her to cave and not ask questions. Altman has a common thing with Musk: he does not play 3D chess.