
Comment by tedsanders

19 hours ago

I'm an OpenAI employee and I'll go out on a limb with a public comment. I agree AI shouldn't be used for mass surveillance or autonomous weapons. I also think Anthropic has been treated terribly and has acted admirably. My understanding is that the OpenAI deal disallows domestic mass surveillance and autonomous weapons, and that OpenAI is asking for the same terms for other AI companies (so that we can continue competing on the basis of differing services and not differing scruples). Given this understanding, I don't see why I should quit. If it turns out that the deal is being misdescribed or that it won't be enforced, I can see why I should quit, but so far I haven't seen any evidence that's the case.

Respectfully, it's very hard to see how anyone could look at what just happened and conclude that one company ends up classed a "supply chain risk" while another agrees to the same terms that led to that. Either the terms are looser, they're not going to be enforced, or there's another reason for the loud attempt to blacklist Anthropic. It's very difficult to take this at face value in any case. If it is loose terms, or a wink agreement not to check in on enforcement, you're never going to be told that. We can imagine other scenarios where the stated terms were not the real reason for the blacklisting, but it's a real struggle (at least for me) to find an explanation for this deal that doesn't paint OpenAI in a very ethically questionable light.

  • > it's very hard to see how anyone could look at what just happened

    I think what you are missing is their annual comp with two commas in it.

    • Let's be real, one comma is enough for most Americans to flee their own humanity.

    • Hey, with expected stock payout - tres commas!

      Shit, I wonder if I still have any of those ‘tres commas club’ t-shirts lying around?

  • One explanation is that this is effectively a quid pro quo, given Brockman’s enormous financial support of the current president.

    • Yep, theoretically it could just be oligarchic corruption and not institutional insanity at the highest levels of the government. What a reassuring relief it would be to believe that.

  • I agree it makes little sense, and I think if all players were rational it never would have played out this way. My understanding is that there are other reasons (i.e., beyond differing red lines) that made the OpenAI deal more palatable, but unfortunately the information shared with me has not been made public so I won't comment on specifics. I know that's unsatisfying, but I hope it serves as some very mild evidence that it's not all a big fat lie.

    • Your ballooned unvested equity package is preventing you from seeing the difference between “our offering/deal is better” and “designated supply chain risk, plus threatening all companies who do business with the government to stop using Anthropic or be similarly dropped” (which is well past what the designation limits). It’s easier being honest.


    • Friend, this reads like that situation where your paycheck prevents you from seeing clearly - I forget the exact quote. Sam doesn't play a straight game and neither does the administration - there are more than a few examples.


    • OpenAI should not be agreeing to any contract with DOD under these circumstances of Anthropic being falsely labeled a supply chain risk.

    • The problem is, the vague safeguards are not worth anything.

      "we will comply with US law" The problem is, the US government does not actually comply with US law.

    • That’s not evidence. You’re effectively saying “trust me bro” without a shred of proof to backup your claims.

  • As an OpenAI employee, quitting wouldn't be a problem, as you have a much higher chance of being successful after quitting than anyone else. You could go to any VC and they would fund you.

    • This isn't even close to true. VCs aren't silly, and it's not the 2010-2015 days of free money any more. Having a big company on your resume is not enough to land your seed round. You need a product, traction, and real money revenue in most cases.


  • I agree with what you're saying, but given the egos involved in the current admin there's a practical interpretation:

    1. Department of War broadly uses Anthropic for general purposes

    2. Minority interests in the Department of War would like to apply it to mass surveillance and/or autonomous weapons

    3. Anthropic disagrees and it escalates

    4. Anthropic goes public criticizing the whole Department of War

    5. Trump sees a political reason to make an example of Anthropic and bans them

    6. The entirety of the Department of War now has no AI for anything

    7. Department of War makes agreement with another organization

    If there was only a minority interest at the Department of War in developing mass surveillance / autonomous weapons, or if it was seen as an unproven use case of unknown value compared to the more proven value from the rest of their organizational use, it would make sense that they'd be 1) in practice willing to compromise on this, and 2) now unable to do so with Anthropic specifically because of the political kerfuffle.

    I imagine they'd rather not compromise, but if none of the AI companies are going to offer them it then there's only so much you can do as a short term strategy.

    • Well at least we know now that the department of war is less capable than before. All because the big man shit his pants while Anthropic was in view.

    • That is pretty optimistic. I hope it is true, and just a misunderstanding.

      But man, this blew up pretty fast for a misunderstanding in some negotiation. Something must have been said in those meetings to make Anthropic go public.


  • And unless GP has a security clearance, they can't know for sure what OpenAI is allowing on classified networks.

  • Yeah, agreed. I probably wasn't going to delete my OpenAI account (à la the link that is also being upvoted on HN); it just seemed like a hassle versus simply ceasing to use OpenAI. But when the staff at OpenAI employ mental gymnastics, selective hearing, willful ignorance, or plain ignorance to justify compliance with manmade horrors, I think it's probably important to vote with our feet.

  • > while another agrees to the same terms that led to that

    One of them needs to be investigated for corruption in the next few years. I’d have to assume anyone senior at OpenAI is negotiating indemnities for this.

  • Are you saying that everything so far in this administration has been 100% rational?

  • > one company ends up classed a "supply chain risk" while another agrees to the same terms that led to that

    Never discount the possibility of Hegseth being petty and doing the OpenAI deal with the same terms to imply to the world that Anthropic is being unreasonable because another company signed a deal with him.

    • Or corruption, in which Trump/Hegseth are getting a kickback from OpenAI, but giving the money to Anthropic would be "worthless" to them.

  • >or there's another reason for the loud attempt to blacklist Anthropic

    This one is very easy. Trump has a well established pattern of making a loud statement to make it appear he didn't lose, even when he did.

  • anthropic has nothing but a contract to enforce what is appropriate usage of their models. there are no safety rails, they disabled their standard safety systems

    openai can deploy safety systems of their own making

    from the military perspective this is preferable because they just use the tool -- if it works, it works, and if it doesn't, they'll use another one. with the anthropic model the military needs a legal opinion before they can use the tool, or they might misuse it by accident

    this is also preferable if you think the government is untrustworthy. an untrustworthy government may not obey the contract, but they will have a hard time subverting safety systems that openai builds or trains into the model

    • Huh, that's an interesting and new perspective. I'd love to know what you mean by safety systems, and what OpenAI can do that Anthropic can't.

    • This is entirely nonsense.

      - When has any AI company shipped "safeguards" that aren't trivially bypassed by mid bloggers? Just one example would be fine.

      - The conventional wisdom is that OAI's R&D (including safety) is significantly behind Anthropic's.

      - OpenAI is constantly starved for funding. They don't make money. They have every incentive to say yes to a deal that entrenches them into govt systems, regardless of the externalities

  • There's a critical mass of Trump Derangement Syndrome in SV, as this site exemplifies almost daily. The amount of vitriol and hatred spewed here is not healthy, nor are those who spew it. It kills rational debate, nuance and leads to foolish choices like someone cutting off their nose to spite their face as the old saying goes.

    • The president of the United States sets the tone: hatred without reason or explanation is the way the system works now. Belligerence and power are the currency.

      Speaking to people's better angels as if it has a chance of influencing Trump's behaviour is a fool's errand. It's not derangement. His word is worthless.

  • They aren’t the same terms. You are clearly an enemy bot or an uneducated fool. OpenAI has agreed to mass surveillance of those who are not Americans; Anthropic refused. OpenAI’s term only restricts surveillance from being conducted on Americans.

(Disclosure, I'm a former OpenAI employee and current shareholder.)

I have two qualms with this deal.

First, Sam's tweet [0] reads as if this deal does not disallow autonomous weapons, but rather requires "human responsibility" for them. I don't think this is much of an assurance at all - obviously at some level a human must be responsible, but this is vague enough that I worry the responsible human could be very far out of the loop.

Second, Jeremy Lewin's tweet [1] indicates that the definitions of these guardrails are now maintained by DoW, not OpenAI. I'm currently unclear on those definitions and the process for changing them. But I worry that e.g. "mass surveillance" may be defined too narrowly for that limitation to be compatible with democratic values, or that DoW could unilaterally make it that narrow in the future. Evidently Anthropic insisted on defining these limits itself, and that was a sticking point.

Of course, it's possible that OpenAI leadership thoughtfully considered both of these points and that there are reasonable explanations for each of them. That's not clear from anything I've seen so far, but things are moving quickly so that may change in the coming days.

[0] https://x.com/sama/status/2027578652477821175

[1] https://x.com/UnderSecretaryF/status/2027594072811098230

  • I don't understand how any sort of deal is defensible in the circumstances.

    Government: "Anthropic, let us do whatever we want"

    Anthropic: "We have some minimal conditions."

    Government: "OpenAI, if we blast Anthropic into the sun, what sort of deal can we get?"

    OpenAI: "Uh well I guess I should ask for those conditions"

    Government: blasts Anthropic into the sun "Sure whatever, those conditions are okay...for now."

    By taking the deal with the DoW, OpenAI accepts that they can be treated the same way the government just treated Anthropic. Does it really matter what they've agreed?

    • From a level headed outside perspective

      It looks like Anthropic likely wanted to be able to verify the terms of their own volition, whereas OpenAI was fine with letting the government police itself.

      From the DoD perspective, they don't want a situation where, say, a target is being tracked and then the screen goes black because an Anthropic committee decided this is out of bounds.


    • This is wise analysis. To summarize: appeasement of the Trump administration is a losing strategy. You won’t get what you want and you’ll get dragged down in the process.


  • Jeremy Lewin's tweet referenced that "all lawful use" is the particular term that seems to be a particular sticking point.

    While I don't live in the US, I could imagine the US government arguing that third party doctrine[0] means that aggregation and bulk-analysis of say; phone record metadata is "lawful use" in that it isn't /technically/ unlawful, although it would be unethical.

    Another avenue might also be purchasing data from ad brokers for mass-analysis with LLMs which was written about in Byron Tau's Means of Control[1]

    [0] https://en.wikipedia.org/wiki/Third-party_doctrine

    [1] https://www.penguinrandomhouse.com/books/706321/means-of-con...

    • The term "lawful use" is a joke to the current administration, which goes after senators for sedition for reminding government employees not to carry out unlawful orders. It’s all so twisted.

    • To be clear, the sticking point is actually that the DoD signed a deal with Anthropic a few months ago that had an Acceptable Use Policy which, like all policies, is narrower than the absolute outer bounds of statutory limitations.

      DoD is now trying to strongarm Anthropic into changing the deal that they already signed!

  • I’d like to see smart anonymous ways for people to cryptographically prove their claims. Who wants to help find or build such an attestation system?

    I’m not accusing the above commenter of deception; I’m merely saying reasonable people are skeptical. There are classic game theory approaches to address cooperation failure modes. We have to use them. Apologies if this seems cryptic; I’m trying to be brief. If it doesn’t make sense, just ask.

Did Sam Altman say that he wouldn't allow ChatGPT to be used for fully autonomous weapons? (Not quite the same as "human responsibility for use of force".)

I don't want to overanalyze things but I also noticed his statement didn't say "our agreement specifically says chatgpt will never be used for fully autonomous weapons or domestic mass surveillance." It said something that kind of gestured towards that, but it didn't quite come out and say it. It says "The DoW agrees with these principles, and we put them in our agreement." Could the principles have been outlined in a nonbinding preamble, or been a statement of the DoW's current intentions rather than binding their future behavior? You should be very suspicious when a corporate person says something vague that somewhat implies what you want to hear - if they could have told you explicitly what you wanted to hear, they would have.

But anyway, it doesn't matter. You said you don't think it should be used for autonomous weapons. I'd be willing to bet you 10:1 that you'll never find Altman saying anything like "our agreement specifically says chatgpt will never be used for fully autonomous weapons", now or at any point in the future.

  • > you'll never find altman saying anything like "our agreement specifically says chatgpt will never be used for fully autonomous weapons"

    To be fair, Anthropic didn't say that either. Merely that autonomous weapons without a HITL aren't currently within Claude's capabilities; it isn't a moral stance so much as a pragmatic one. (The domestic surveillance point, on the other hand, is an ethical stance.)

    • They specifically said they never agreed to let the DoD use anthropic for fully autonomous weapons. They said "Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now: Mass domestic surveillance [...] Fully autonomous weapons"

      Their rationale was pragmatic. But they specifically said that they didn't agree to let the DoD create fully autonomous weapons using their technology. I'll bet 10:1 you won't ever hear Sam Altman say that. He doesn't even imply it today.


    • > it isn't a moral stance so much as a pragmatic one

      Agreed, the moral stance is saying no to the DoD and the US government

  • You're not overanalyzing anything, you're using critical thinking to dissect company communications. Kudos

> My understanding is that the OpenAI deal disallows domestic mass surveillance and autonomous weapons,

In that case, what on earth just happened?

The government was so intent on amending the Anthropic deal to allow 'all lawful use', at the government's sole discretion, that it is now pretty much trying to destroy Anthropic in retaliation for refusing this. Now, almost immediately, the government has entered into a deal with OpenAI that apparently disallows the two use cases that were the main sticking points for Anthropic.

Do you not see something very, very wrong with this picture?

At the very least, OpenAI is clearly signaling to the government that it can steamroll OpenAI on these issues whenever it wants to. Or do you believe OpenAI will stand firm, even having seen what happened to Anthropic (and immediately moved in to profit from it)?

> and that OpenAI is asking for the same terms for other AI companies (so that we can continue competing on the basis of differing services and not differing scruples)

If OpenAI leadership sincerely wanted this, they just squandered the best chance they could ever have had to make it happen! Actual solidarity with Anthropic could have had a huge impact.

  • It looks most likely like Anthropic wanted the ability to audit model usage, whereas OpenAI was fine with just an agreement.

    Hegseth's tweet strongly alluded to this, and the general terms of the agreement are not public, just the hot-button ones.

    • Am I wrong to think that such an agreement is basically meaningless? OpenAI gets to say there are limits, the government gets to do whatever it wants, and OpenAI will be very happy not to know about it.

    • Bingo. You don’t have to read much into this if you remember how the DoD uses the word trust. In their world, a "trusted" system is one that has the power to break your security if it goes wrong. So when they say "unrestricted use," the likely meaning isn’t just fewer guardrails; it’s that the vendor doesn’t get to monitor or audit how the system is being used. In other words, the government isn’t handing a private company visibility into sensitive operations.

"AI shouldn't be used for mass surveillance or autonomous weapons". The statement from OpenAI virtually guarantees that the intention is to use it for mass surveillance and autonomous weapons. If this weren't the intention, the qualifier "domestic" wouldn't be used, and they would be talking about "human in the loop" control of autonomous weapons, not "human responsibility," which just means there's someone willing to stand up and say "yep, I take responsibility for the autonomous weapon system's actions." Let's be honest: that is the thinnest of thin safety guarantees.

Assuming this is real: Why do you think anthropic was put on what is essentially an "enemy of the state" list and openai didn't?

The two things anthropic refused to do are mass surveillance and autonomous weapons, so why do _you_ think openai also refused and still did not get placed on the exact same list?

It's fine to say "I'm not going to resign. I didn't even sign that letter", but thinking that openai can get away with not developing autonomous weapons or mass surveillance is naive at the very best.

My understanding is that OpenAI's deal, and the deal others are signing, implicitly prevents the use of LLMs for mass domestic surveillance and fully autonomous weapons, because today one can argue those aren't legal and the deal is a blanket allowing all lawful use.

Today it can't be used for mass surveillance, but the executive branch has all the authority it needs to later deem that lawful if it wishes to, the Patriot Act and others see to that.

Anthropic was making the limits contractually explicit, meaning the executive branch could change the line of lawfulness and still couldn't use Anthropic models for mass surveillance. That is where they got into a fight and that is where OpenAI and others can claim today that they still got the same agreement Anthropic wanted.

Why would you believe that? If that were the case what was the issue with Anthropic even about?

You, and your colleagues, should resign.

  • > You, and your colleagues, should resign.

    It would be better if everyone stopped doing business with OpenAI so these employees lose their stock value.

    But of course neither of these things will happen.

    • Who still does business with OpenAI, and why? They are usually fifth or sixth in the benchmarks, bracketed below and above by models that cost less, and this has been the case for quite some time. GLM is out for US government purposes, I'd imagine, but if Google agrees to the same terms I don't see why the US government would use OpenAI anyway. If Google disagrees it would be rather confusing given the other invasions of privacy they have facilitated, but if they do then using OpenAI would make sense, as all that would be left is Grok...

  • You tell me why an employee would believe something that's convenient for continuing to receive their paycheck

    • Life is more than a paycheck. We should raise the bar a little IMO. Turning down money for good reasons is not something extreme we should only expect from saints.

  • Imo the more ethical thing is obstructionism. Twitter's takeover showed it's pretty easy to find True Believer sycophants to hire. Better to play the part while secretly finding ways to sabotage.

  • That quote comes to mind...It is difficult to get a man to understand something when his salary depends upon his not understanding it.

    Obviously nothing is going to make Teddy quit his cushy OpenAI job.

> I don't see why I should quit.

So, can you please draw the line when you will quit?

- If the OpenAI deal allows domestic mass surveillance

- If OpenAI allows the development of autonomous weapons

- If OpenAI no longer asks for the same terms for other AI companies

Correct?

If so, then if I take your words at face value:

- By your reading non-domestic mass surveillance is fine

- The development of AI based weapons is fine as long as there is one human element in there, even if it could be disabled and then the weapon would work without humans involved

- The day that OpenAI asks for the same terms for other AI companies and if those terms are not granted then that's also fine, because after all, they did ask.

I have become extremely skeptical when seeing people whose livelihood depends on a particular legal entity come out with precise wording around what does and does not constitute their red line, but I find it fascinating nonetheless, so if you could humor me and clarify, I'd be most obliged.

Why do you suppose OpenAI's deal led to a contract, while Anthropic's deal (ostensibly containing identical terms) gets it not only booted but declared a supply chain risk?

The founders are all on a first-name basis. I’m surprised no one has noted that Anthropic and OpenAI win together by giving the world two different choices, just like the US does in its political landscape. In this arrangement, OpenAI wins the local market of its government and aligned entities (while holding the free consumer tier, by way of cost dynamics, for that ideal customer profile, which is very broad and similar to Google’s search audience, on which most of their revenue still depends), while Anthropic gets the global market and the prosumer market, where people can afford choice by paying for it.

#1 weekend HN is not a sane place. #2 emotions are high. #3 for what it’s worth @tedsanders I understand where you’re coming from and I believe you’re making the right choice by staying or at least waiting to make a decision. Don’t let #1 and #2 hurt you emotionally or force you to make a rash decision you later regret.

Edit: I don’t work at OpenAI or in any AI business and my neck is on the chopping block if AI succeeds… like a lot of us. Don’t vilify this guy trying to do what’s right for him given the information he has.

Thank you for responding. Everyone wants to think they will “do the right thing” when their own personal Rubicon is challenged. In practice, so many factors are at play, not least of which are the other people you may be responsible for. The calculus of balancing those differing imperatives is only straightforward for those that have never faced this squarely. I’ve been marched out of jobs twice for standing up for what I believed to be right at the time. Am still literally blacklisted (much to the surprise of various recruiters) at a major bank here 8 years after the fact. I can’t imagine that the threat of being blacklisted from a whole raft of companies contracting with a known vindictive regime would make the decision easier.

Ted, what do you think of your CEO’s statement: “the DoW displayed a deep respect for safety and a desire to partner to achieve the best possible outcome.”

The evidence seems to overwhelmingly point in the opposite direction.

You should quit because the only reasonable thing for your leadership to have done is to refuse to sign any agreement with DoW whatsoever while it's attempting to strongarm Anthropic in this fashion.

It doesn't even matter if OpenAI is offered the same terms that Anthropic refused. It's absurd to accept them and do business with the Pentagon in that situation.

If you take the government at its word, it's killing Anthropic because Anthropic wanted to assert the ability to draw _some_ sort of redline. If OpenAI's position is "well sucks to be them", there's nothing stopping Hegseth from doing the same to OpenAI.

It doesn't matter at all if OpenAI gets the deal at the same redline Anthropic was trying to assert. If at the end of this the government has succeeded in cutting Anthropic off from the economy, what's next for OpenAI? What happens next time when OpenAI tries to assert some sort of redline?

What's the point of any talk of "AI Safety" if you sign on to a regime where Hegseth (of all people) can just demand the keys and you hand them right over?

> My understanding is that the OpenAI deal disallows domestic mass surveillance and autonomous weapons

And you believe the US government, let alone the current one will respect that? Why? Is it naïveté or do you support the current regime?

> If it turns out that the deal is being misdescribed or that it won't be enforced, I can see why I should quit.

So your logic is your company is selling harmful technology to a bunch of known liars who are threatening to invade democratic countries, but because they haven’t lied yet in this case (for lack of opportunity), you’ll wait until the harm is done and then maybe quit?

I’ll go out on a limb and say you won’t. You seem to be trying really hard to justify to yourself what’s happening so you can sleep at night.

Know that when things go wrong (not if, when), the blood will be on your hands too.

You can't be this naive?

  • His point reeks of cope. But making a large amount of money would make anyone dumb, deaf, and blind. Also, I give a little leeway to people who are employees without executive decision-making power, as they do stand to have a lot to lose in situations like this.

    • It's probably how they are coping with the cognitive dissonance. I certainly feel for them, I don't know that I could easily walk away from a big pay package either without backup options when I have family to support and I'm not near retirement.

I can totally see why you should quit, but we see different things apparently.

What people don't understand is that domestic surveillance by the government doesn't happen directly and isn't needed. They know it's illegal and unpopular, and for over two decades they have had a loophole. Since the Bush administration it's been arranged for private contractors to do the domestic surveillance on the government's behalf. Entire industries have been built around creating "business records" for no other purpose than to sell them to the government in support of domestic surveillance. This is entirely legal, and it's why the DoW has been able to get away with saying things like "domestic surveillance is illegal, we don't do that" for over two decades while simultaneously throwing a shit fit about needing "all legal uses" when its access to domestic surveillance is threatened.

There's a big difference between "the government won't use our tools for domestic surveillance" (DoW/DoD/OpenAI/etc) and "we won't allow anyone to use our tools to support domestic surveillance by the government" (Anthropic)

Hegseth and the current Trump admin are completely incompetent in execution of just about everything but competent administrations (of both parties) have been playing this game for a long time and it's already a lost cause.

Aside from that unlikely read, this deal was still used as a pressure point on Anthropic, there's absolutely no way OpenAI was not used as a stick to hit with during negotiations.

What is your red line?

To me it looks weird that a replacement won't accept Dept of War terms. This was the source of the dispute so...

I do not know, but I would not be very optimistic about those new terms.

Anthropic is deemed a betrayer and a supply chain risk for actually enforcing their principles.

OpenAI agrees to be put in the same position as Anthropic.

It seems like you must actually somehow believe that history will repeat itself, Hegseth will deem OpenAI a supply chain risk too, then move to Grok or something?

There's surely no way that's actually what you believe...

These sorts of agreements are easily bypassed, especially with tools like these.

Someone might just create a spawn of OpenAI under a different tag and do all that stuff there...

There is not much of a guarantee, I think.

You may have missed that no single word said or written by any of the current US government’s members can be believed.

I don't know you, so maybe you're actually for real and speaking on good faith here but honestly this and your other responses in this thread read exactly like "...salary depends on not understanding"

For the record I don’t care if you quit or not. Cash rules after all… However, you are incredibly naive if you think the current admin will follow through on those terms.

Assuming this isn't a troll and you really think this, you should at least have the cojones to admit you're taking the blood money instead of trying to pretzel the truth so hard that you just look like a moron instead.

Looks to me like you have decided that you are being paid to shut up and take the word of the most thoroughly dishonest and corrupt US government we've yet seen. Why on God's slowly-browning green earth do you trust that Altman got the deal Anthropic was trying for?

lol, naive as hell. Why would your company's agreement be the same as the one another company just refused? The _same_ agreement? My question doesn't even make sense; this is a contradiction, therefore your statement must be false. There, it's proven.

"domestic" "mass" surveillance, two words that can be stretched so thin they basically invalidate the whole term. Mass surveillance on other countries? Guess that's fine. Surveillance on just a couple of cities that happen to be resisting the regime? Well, it's not _mass_ surveillance, just a couple of cities!

"It is difficult to get a man to understand something, when his salary depends on his not understanding it."

I have a bridge to Brooklyn to sell you if you believe this.

Standing up for what's right is often not easy and involves hard choices and consequences. Your leader has shown you and the world that he is not to be trusted.

I can't tell you what to do, but I hope you make the right decision.

>OpenAI deal disallows domestic mass surveillance

And the US military is broadly forbidden from domestic law enforcement, but that didn't stop this administration from deploying US Marines to California recently.

You're fooling yourself if you think this administration is following any kind of rule.

You can make blood money, but you have to be aware it's blood money. Don't delude yourself into thinking you work for an ethical or moral company.

“It is difficult to get a man to understand something, when his salary depends on his not understanding it.”

You’re being purposely naive if you trust any government, and especially this government, to behave legally or ethically.

You work for a company that’s part of the Trump, Ellison, Kushner orbit of corruption.

Y’all are developing amazing technology. But accept reality and drop whatever sense of moral righteousness you’re carrying here. Not because some asshole on the internet says so, but for your own mental health.

Your response is a perfect encapsulation of "It is difficult to get a man to understand something when his salary depends upon his not understanding it."

I think it's wrong to ask someone to resign, but acting as if there is no issue here is debating in bad faith.

"It is difficult to get a man to understand something, when his salary depends upon his not understanding it"

Listen, if the Government using it for legit and safe use cases wasn’t an issue, then they wouldn’t have complained about Anthropic’s language. Sam is just looking the other way and pretending for you employees.

Or Sam bribed the government to do this, which is also entirely possible.

This seems like the kind of foolishness it takes a lot of money to believe. Anthropic blew up their contract with the Pentagon over concerns on lethal autonomous weapons and mass domestic surveillance. OpenAI rushes in to do what Anthropic wouldn't.

If you think that means your company isn't going to be involved in lethal autonomous weapons and mass domestic surveillance... I don't really know what to tell you. I doubt you really believe that. Obviously you will be involved in that and you are effectively working on those projects now.

Bad timing to be defending OpenAI's collaboration with the military as it launches an illegal bombing campaign.

Right, beautifying lies are always going to head in the direction of self-interest.

Can you at least stop lying to yourself? Given what they did with Anthropic for not supporting domestic mass surveillance and autonomous weapons...

> My understanding is that the OpenAI deal disallows domestic mass surveillance and autonomous weapons

Your understanding is entirely wrong. At least stop lying to yourself and admit that you are entirely fine with working on evil things if you are paid enough.

I know the money is good, but if I were you (or any OpenAI employee), I'd move over to Google or Anthropic posthaste.

Is it really worth the long-term risk being associated with Sam Altman when the other firms would willingly take you and probably give you a pay bump to boot?

It doesn't make sense to me why anyone would want to associate themselves with Altman. He is universally distrusted. No one believes anything he says. It's insane to work with a person whom PG, Ilya, Murati, and Musk have all called a liar and a general creep.

Defending him or the firm's actions instantly makes you look terrible, like you'll gladly accept the "elites vs. UBI recipients" world his vision propagates.

Shame on you people. What a disgusting vision.

There is a recent post about how one of OpenAI's top execs gave $25 million to a Trump PAC before publicly supporting Anthropic/signing this deal.

One got characterized as a supply chain risk; so much for OpenAI getting the same terms.

And even so, I could be wrong, but if I remember correctly, OpenAI and every other company basically accepted all uses, and it was only Anthropic that said no to these two demands.

And I think this whole scenario only became public because Anthropic refused; the deal could have been done quietly if Anthropic had wanted.

So OpenAI taking the deal doesn't change the fact that, to me, it looks like they can always walk it back. The optics are horrendous for OpenAI, so I'm curious what you think.

On the other hand, why would OpenAI come out and say, "hey guys, yeah, we're going to feed autonomous killing machines"? Of course they're going to try to keep it a secret right before their IPO. You mention walking out of OpenAI, but with the current optics it seems that you and other OpenAI employees are willing to keep working because the evidence isn't out yet. To me, as others have pointed out, it looks like slowly boiling the frog.

OpenAI gets to have its cake and eat it too, but I don't think there's a free lunch. I simply don't understand why the DOD would make such a fuss about Anthropic's terms being outrageous and then sign the same deal, with the same terms, with OpenAI, unless there's a catch. Only time will tell how wrong or right I am.

If I may ask, how transparent is OpenAI from an employee's perspective? Out of curiosity: would you, as an employee, be informed if OpenAI's top leadership (Sam?) decided to change the deal and give the DOD autonomous killing machines? Would you as an employee, or we as the general public, learn about it if the deal were done through secret back channels? Snowden showed that many secret court deals were kept from the public until he blew the whistle, but not everything gets whistleblown, so I'm genuinely curious to hear your thoughts.

Why would you trust anything out of Sam's mouth? He's a sociopath. Is that lost on you?

  • The comment perfectly exemplifies the kind of person that would work at OpenAI. Government AI drones could be executing citizens in the streets and they’d still find some sort of cope for why it’s not a problem. They’ll keep moving the goalposts as long as the money keeps coming.