Comment by zoogeny

2 years ago

I've already wasted a lot of my own time and energy on this, but I'm starting to get a bit confused about this whole thing. People seem pretty comfortable jumping to a profit-driven motivation for employees potentially leaving OpenAI in pursuit of some kind of loyalty to Altman.

But I'm just thinking of the rancor that has been heaped on Terraform, for example, for changing its license. The argument always seems to be that Hashicorp misled contributors by claiming to release their contributions as open source and now they've reneged on that deal.

My understanding of OpenAI's mission was that there was a fear that AI being developed inside of big tech companies would provide undue advantage to the very few companies that were able to afford the teams and hardware necessary. Meanwhile the rest of us would be unaware of those advancements being made behind closed doors while those behemoths created an insurmountable gap.

Yet now, for some reason, everyone is literally cheerleading the gutting of OpenAI and gleefully pushing the employees into one of the biggest and most notorious tech giants there ever was.

You almost have to wonder, is this the greatest psychological twist in recent memory? People aren't just OK with them turning into a profit-seeking venture, they are seemingly begging for it. There is almost no opposition to it. And for what? Because of some guy none of us actually knows, whom we've only seen on TV? And big tech guys like Paul Graham, Eric Schmidt, Satya Nadella - a literal who's-who of the tech giant oligarchy - are all fawning over this young man, along with visits to the White House, meetings with foreign presidents, etc.

We went from "big corps are bad" to "big corps are saviours" in less than a week. And I'm not even sure what we think they are saving us from.

“Open”AI already was an unaccountable big corp. They’ve already refused not only to open their weights but to publish most of their research, precisely to create an insurmountable gap with the rest of the world, and they’ve lobbied to write that gap into law. “Openness” merely meant “the API is open to your money”, with opaque “content policies” we had no say in. They had the same unaccountable and opaque power and the same commercial drive, but “for our own good” as they define it unilaterally.

Moving to MSFT merely means they’ll do the same thing, but with a bit more reliability, a bit more fear of liability, and a bit less of the doomerist sanctimony of the original leadership. Better to at least have a bigco with coherent and stable values like money. The bigco “nonprofit” led by the current board has made erratic decisions, has insane longtermist and EA values, and has refused to give any meaningful statement about why they did what they did. Inasmuch as they resist commercialization, it’s to have more opacity and more control, not less. How can we trust these people with control over AGI? Better to junk the board and deal with straightforward greed rather than hubris.

  • I don't understand why you're putting the blame on the board instead of the CEO, who: 1. is far more responsible for the direction the company drifted in, 2. was in a fight with the board, who didn't like that direction, and 3. will be leading everything at the new company.

    So all the bad things you criticize OpenAI for would simply move to MS, and yet people are still cheering for it.

    I am truly baffled.

    • Yes, the EAs and longtermists on the board probably disliked the commercial focus of “Open”AI or its rapid scaling; I’m not blaming them for Laundry Buddy. But make no mistake: the nonprofit board had even less interest in opening up their research or sharing their code. They and the “superalignment” team believe AGI can end the world and needs to be in safe hands (i.e. their own hands). The EA movement in which they are embedded is one of the leading forces advocating shutting down open AI development through regulation. The board has strong EA and doomerist ties. Within OpenAI, Ilya Sutskever is on record as saying that the world will realize open-sourcing weights is foolish by 2025, and the Atlantic reported he literally burned an effigy of “unaligned” AI at the company retreat. Helen Toner is similarly involved in “AI governance” and not coincidentally took funding early in her career from OpenPhil, one of the key EA slush funds. The new CEO, their appointee, quite literally has a cameo as a character in Eliezer Yudkowsky’s rationalist fanfic, and puts the probability of AGI killing humanity at 50%.

      These people have absolutely no interest in decentralization and accountability, and were more than happy to let OpenAI accumulate power to “protect humanity”—until they pulled the plug for reasons they still refuse to disclose. Let me be clear: this is unacceptable. Not coincidentally, their so-called utilitarianism and altruism merely justifies accumulating all power in their hands, and taking any action (like the backstabbing we saw last week) to make it happen. For all MSFT’s faults, they play by traditional and predictable corporate rules of greed and can be reasoned with. The safety faction are true believers and implacably opposed to openness anywhere, and moreover happily gave the veneer of altruism to the regulatory capture of the commercial faction anyway before they realized they couldn’t control it. I know which one I’d pick.


    • I'm an accelerationist. All the safety shit is stupid to me. I'm cheering because doomers are annoying. Just boring old schadenfreude.

  • Is this a sheepskin comment or a genuinely naïve one? I can't tell.

    Moving to MSFT means they cannot do anything that goes against MSFT's interests.

    Everything they ever did, and everything they ever will do, belongs to MSFT.

    MSFT brings with it all the bloat and risk aversion it needs as a big org, killing the cutting-edge, move-fast, make-it-work nature of OAI that got it to this point in the first place.

    The only thing you can be sure of is that this thing will now be "closed" forever, with no hope of others benefiting from the hard work of the real people who make it happen.

Elon has Twitter. Zuck has Facebook. Jeff has WaPo. Sam has HN. Everyone has their media platform that ultimately serves them. I’m not suggesting HN is being directly trolled or manipulated, but I think there is such a tight link between HN and Sam that many of the most active people on this platform in particular either personally know, look up to, benefit from, or are sympathetic to him. The overall effect of this is that he gets overwhelming benefit of the doubt in the absence of much information at all.

People are more loyal to their networks than their principles.

Bingo. OpenAI was specifically founded as a non-profit to prevent profit>all from turning this into an uncontrolled arms race. Before founding OpenAI Sam Altman wrote "Why You Should Fear Machine Intelligence."[0]

Last week he said, "I believe that this will be the most important and beneficial technology humanity has yet invented. And I also believe that if we’re not careful about it, it can be quite disastrous. And so we have to navigate it carefully."

If you run to Microsoft with the entire team, whose entire mission is an amorphous "stock price go up" (I mean, look at how much people are talking about them figuring out a story before stock market opens on Monday), then you have failed OpenAI's charter and founding purpose.

[0] https://blog.samaltman.com/machine-intelligence-part-1

I think the “big tech is evil“ and “SV start-ups will save us” groups both still exist but remain distinct and haven’t coordinated the triggers that make each group become vocal. I don’t think there’s a lot of overlap in their membership, so it’s not like hypocrisy, it’s more like different people believing different things for different reasons.

However, there are many other sentiments held reasonably by various subgroups, like, “this board overstepped and hasn’t explained itself to our satisfaction” or “we love our boss, he makes us wealthy,” or “if they take my chatGPT away, I’ll be pissed” or “man, I just invested $13B into this ridiculously governed venture and if I don’t fix this my wealth will never approach Ballmer’s”

Typical burnout change IMO.

Switching from ‘must do good’ to ‘good is impossible, make as much money as possible’ can happen really quickly.

The ’80s era of ‘greed is good’ followed free love and the hippies pretty closely for a reason, IMO.

  • Yes, I agree the pendulum does swing. Just jarring to see it swing so quickly. Remember, this change began on Friday!

    I guess the old saying: "If you can't beat them, join them" is the new mantra. And I suppose if you're going to do it, might as well do it with some enthusiasm.

I certainly agree. It is absolutely silly, but time and time again it is shown that money moves people. However, I also believe that once a majority of them get 5-10 years in and continue maturing through that, as we all do, they'll return to OpenAI (or whatever is around in the future) to contribute back in a way that helps all our children's future, not just their own.

The things I stand for now would not have held up in the face of millions in comp 10 years ago, so I don't expect the same of others. I only hope they earn whatever it takes to get them to the next level sooner rather than later. AI does appear to be worth a good fight for humanity.

It is pretty rich that Microsoft, of all companies, is now coming in to be seen as the savior and not many people are batting an eye. It's a face turn 20 years in the making. If Facebook, Apple or Google were doing the same, I suspect we'd hear more opprobrium.

I would guess that non-profit status or not isn't the key. The key, at least to an engineer like me, is whether I can do meaningful work with a reasonable package. The employees at OpenAI are creating history and building amazing careers, after all, which outweighs the structure of a company or monetary incentives.

The impact that corporate shills have on public perception is quite powerful. It's peculiar how the general public acknowledges it in other fields such as entertainment, such as with Hollywood movie stars or prominent musicians. However, people often become frustrated when someone points it out within their own field. Kudos to you for noticing it.

Thank you for the sobering perspective. I think we as a collective need to answer this question.

Everyone wants to see Microsoft Skynet emerge.

They should take SkyDrive, remove the Drive part, and add all the cool new AI made with .NET technology; that's how we get Skynet /s