
Comment by drzaiusx11

21 hours ago

Public benefit corporations in the AI space have become a farce at this point. They're just regular corporations wearing a different hat, driven by the same money dynamics as any other corp. They have no ability to balance their stated "mission" with their drive for profit. When being "evil" is profitable and not-evil is not, guess which road they'll take...

In general, public benefit corporations and non-profits should have a very modest salary cap for everybody involved, plus specific, legally binding public-benefit mission statements.

Anybody involved should also be prohibited from starting a private company using their IP and catering to the same domain for 5-10 years after they leave.

Non-profits where the CEO makes millions or billions are a joke.

And if e.g. your mission is to build an open browser, being paid by a for-profit to change its behavior (e.g. make theirs the default search engine) should be prohibited too.

  • "A very modest salary cap" works if your mission is planting trees. Not so much if what you're building is frontier AI systems.

    • I think that's the point though. The AI companies can't compete without hiring very talented employees and raising lots of money from investors. Neither the employees nor the investors would participate if there weren't the potential for making mountains of money. So these AI companies fundamentally can't be non-profits or true B-corps (I realize that's a vague term, but it certainly means not doing whatever it takes to make as much money as possible), and they shouldn't pretend they are.

      6 replies →

    • If a non-profit can't attract anyone who isn't motivated by profit, perhaps it shouldn't exist.

    • While I agree, if you need high profits to survive, you're not off to a great start as a nonprofit.

  • If we're speaking in generalities of corporations in this space, it's all a joke now, at least from my vantage point. I just don't find it very funny.

  • You're overthinking this. Just give the beneficiaries of the corporation (which in the context of a "public" benefit corporation is the public) the grounds to sue if the company reneges on their mission, the same way shareholders can sue if a company fails to act in their interest.

  • What's the salary cap for hiring a team to build a frontier model? These kinds of rules will make PBCs weaker, not stronger.

    • >for hiring a team to build a frontier model? These kind of rules will make PBCs weaker not stronger

      Weaker is fine if those working there are actually there for the mission, not for the profit.

      Same with FOSS really: e.g. I'd rather have a weaker Linux that's an actual community project run by volunteers than a stronger Linux that's just corporate agendas and corporate hires with an open license on top.

PBCs are peak End of History liberal philanthropy that speaks to the kind of person whose solution to any problem is "throw a startup at it".

  • Fukuyama wasn't wrong, he was just early

    • As in a true believer in our present-day dystopia? I think chances are we'd evolve a few more neo variants of fascism, alternating with some neo variants of liberal history-ending ones (I think "abundance" is next?), before the bombs drop and give us the rest.

> Public benefit corporations in the AI space have become a farce at this point.

“At this point”? It was always the case; it's just harder to hide the more time passes. Anyone can claim anything they want about themselves. It's only after you've had a chance to see them in the situations that test their words that you can confirm whether they are what they said.

>Public benefit corporations in the AI space have become a farce at this point. They're just regular corporations wearing a different hat, driven by the same money dynamics as any other corp.

Could you describe the model that you think might work well?

  • It sounds like OP thinks AI companies should just stop pretending that they care about the public benefit and be corporations from the start. Skip the hand-wringing and the will-they/won't-they-betray-their-ethics phases entirely, since everyone knows they're going to choose profit over public benefit every time.

    That model already exists and has worked well for decades. It's called being a regular ass corporation.

Pete Hegseth also threatened to take, by diktat, everything Anthropic has. He can do that with the Defense Production Act or whatever it's called if he designates them as critical to national defense.

  • It would've been better PR for Anthropic to let Hegseth do that instead of folding at the slightest hint of pressure and lost contract money. I've canceled my Claude subscription over this (and made sure to let them know in the feedback).

  • He seems to be the driving force behind all this. Mediocrities are attracted to AI like moths.

    The press always says "the Pentagon negotiates". Does any publication have any evidence that it is "the Pentagon" and not Hegseth? In general, I see a lot of common sense from the real Pentagon as opposed to the Secretary of War.

    I hope West Point will check for AI psychosis in its entrance interviews and completely forbid AI usage. These people need to be grounded.

  • Hmm, that could be the best "IPO" they'll ever get. Better check whether Trump Jr.'s 1789 Capital has shares, like they did in Groq (note the "q").

I feel like we went through this exact situation with social media companies in the 2010s. I don't get why people defend these companies or ever believe they have any sense of altruism.

  • Also, it seems to be the era where the government takes backdoor access to these services and their data, as it did with social media.

Well, now I'm wondering: if the company was chartered with the public benefit in mind, could you not sue if they don't follow through on working in the public interest?

If regular corporations are sued for not acting in the interests of shareholders, that would suggest that one could file a suit for this sort of corporate behavior.

I'm not even a lawyer (I don't even play one on TV), and public benefit corporations seem to be fairly new, so maybe this doesn't have any precedent in case law. But if you couldn't sue them for that sort of thing, then there's effectively no difference between public benefit corporations and regular corporations.

  • I really don’t see it. PBCs are dual-purpose entities: under charter, they have a dual purpose of making a profit while adding some benefit to society. Profit is easy to define; benefit to society is a lot harder to define. That difficulty is reflected at the penalty stage, where few jurisdictions have any sort of examination of PBC status.

    This is what we were all going on about 15 years ago when Maryland was the first state to make PBCs legal. We got called negative at the time.

  • I think public benefit corporations (like Anthropic) are quite poorly defined, so I'm not sure how successful a lawsuit would be.

I was a Pro subscriber until last week. When I was chatting with Claude, it kept asking a lot of personal questions that seemed only very, very vaguely relevant to the topic. And then it struck me: all these AI companies are doing is building detailed user models, either for targeted advertising or to be sold off to the highest bidder. It hasn't happened yet with Anthropic, but when the bubble money runs out there won't be a lot of options, and all we'll see is a blog post: "oops! sorry, we did what we promised you we wouldn't". Oldest trick in the tech playbook.

  • A less cynical explanation: it's heavily trained to ask follow-up questions at the end of a response, to drive more conversation and more engagement. That's useful both for making sure you want to renew your subscription and probably also for generating more training data for future models. That's sufficient explanation for the behavior we're seeing.

    • I could be wrong, but I remember that Claude models didn't really ask follow-up questions. But since GPT models do that, and somehow people like it (why?), Anthropic started doing it as well.