Comment by tfehring

16 hours ago

> For intelligence activities, any handling of private information will comply with the Fourth Amendment, the National Security Act of 1947 and the Foreign Intelligence and Surveillance Act of 1978, Executive Order 12333, and applicable DoD directives requiring a defined foreign intelligence purpose. The AI System shall not be used for unconstrained monitoring of U.S. persons’ private information as consistent with these authorities. The system shall also not be used for domestic law-enforcement activities except as permitted by the Posse Comitatus Act and other applicable law.

My reading of this is that OpenAI's contract with the Pentagon only prohibits mass surveillance of US citizens to the extent that that surveillance is already prohibited by law. For example, I believe this implies that the DoW can procure data on US citizens en masse from private companies - including, e.g., granular location and financial transaction data - and apply OpenAI's tools to that data to surveil and otherwise target US citizens at scale. As I understand it, this was not the case with Anthropic's contract.

If I'm right, this is abhorrent. However, I've already jumped to a lot of incorrect conclusions in the last few days, so I'm doing my best to withhold judgment for now, and holding out hope for a plausible competing explanation.

(Disclosure, I'm a former OpenAI employee and current shareholder.)

OpenAI, the former non-profit whose board tried to fire the CEO for being deceptive, and which is no longer open at all, isn't exactly about ethics these days.

Even on a personal level: OpenAI has changed its privacy policy twice to let them gather data on me they weren't gathering before. A lot of steps to disable it each time, and tons of dark patterns. And the data export just bugs out too; it's a fake feature to hide how much they're using everything you type to them.

  • So why would we want them setting policy for the DoD? Laws are enacted through a fundamentally democratic process defined over hundreds of years. Why wouldn’t that be the way to govern use of tools?

    Why would we want to trade our constitution for, effectively, “rules Sam Altman came up with”?

  • Part of the problem is that, due to a combination of the electoral college, gerrymandering, voter suppression, propaganda, and Citizens United, America's government is not meaningfully democratic.

      Even setting that aside, I don't think that people are saying that they want corporations to make the rules. Rather, what I think they are saying is that they don't want AI to be used for mass surveillance or autonomous weapons, and cutting the DoD off at the corporate level is one way to accomplish that.

    • Use its real name, the one orange shitler renamed it to: the department of war.

      Why the fuck does the department of war get to dictate anything to a private organization?

      Why does the constitution say that you have to let the government murder schoolgirls with your tools?


    • A corporation, according to US law, is considered a "person" and afforded many of the same rights as an individual citizen (https://www.fincen.gov/who-united-states-person).

      Even outside of the US, a corporation is widely considered to be a company of people with their own agency and rights.

      A person or group of people should be able to set their own boundaries without being subjected to immoral and unjust retaliation, i.e. corporate murder (https://x.com/i/status/2027515599358730315).

      Also, ask any frontier model what Pete Hegseth thinks about democracy.

This is exactly what it says: the only restrictions are the restrictions that are already in law. This seems like the weasel language Dario was talking about.

  • Laws that can be changed on a whim by "executive orders", or laws that apparently can be ignored completely, like international law.

    • Like by an administration who is constantly ignoring and violating both domestic and international law?

      Like by an administration that likes to act extrajudicially and ignore habeas corpus?

      I wonder where we'd find such a government. Probably shouldn't give them the power to do anything legal or "consistent with operational requirements". That's the power to do anything they want.

    • They do note that their contract language specifically references the laws as they exist today.

      Presumably if the laws become less restrictive, that does not impact OpenAI's contract with them (nothing would change), but if the laws become more restrictive (e.g. certain loopholes in processing Americans' data get closed), then OpenAI and the DoD should presumably^ not break the new laws.

      ^ we all get to decide how much work this presumably is doing


    • No, executive orders can't change law, and international law, unless ratified by Congress, is neither democratically legitimized nor applicable law in the US to begin with.


  • Not that this means the big AI corps should relax their values (it truly doesn't), but I would be extremely surprised if the DoD/DoW doesn't have anyone capable of fine tuning an open weights model for this purpose.

    And, I mean, if they don't, GPT-5.3 is going to be pretty good help.

    Given the volume, fine-tuning a small model is probably the only cost-effective way to do it anyway.

DoD* - the Department of Defense was named through statute, and only the Congress has the power to change it.

> For example, I believe this implies that the DoW can procure data on US citizens en masse from private companies - including, e.g., granular location and financial transaction data - and apply OpenAI's tools to that data to surveil and otherwise target US citizens at scale.

Third Party Doctrine makes trouble for us once again.

Eliminate that and MANY nightmare scenarios disappear or become exceptionally more complicated.

Surely this is the main issue: DOGE and others have assembled massive databases of information about all Americans from across the government, and now they want to use AI to start making lists.

People often overlook how all the NSA-related activities and government overreach come with a nice memo from officials stating how "lawful" the questionable actions they're taking are.

This is hilarious. I see their lawyers got together to find the most confusing way they could word it to throw people off and let everybody claim it says whatever's best for their own PR.

"Shall not be used as consistent with these authorities"?

So they shall only be used inconsistently with these authorities? That's the literal reading if you assume there's no typo.

Or did they forget a crucial comma that would imply they shall not use it, to the extent this provision is consistent with their authorities?

Or did they forget the comma, but it was supposed to mean that they shall not use it, to the extent that not doing so would be consistent with their authorities?

You gotta hand it to the lawyers; I'm not sure I could've worded this as deliberately confusingly if they'd given me a million dollars.

Even worse is the kill-bot policy, the eventual-human-in-the-loop clause, a.k.a. yolo mode or --dangerously-skip-permissions.

Imagine arming chatgpt and letting it pick targets and launch missiles from clawdbot.

Thanks for speaking out, and yes, that was my interpretation as well, which I outlined below. This is nothing more than some sugarcoating on "lawful use", despite what OpenAI says and the contractual "safeguards" they tout, like the FDEs.

I.e., combing through public forums on the internet looking for evidence of thoughtcrime, however, is fair game. The Trump admin will undoubtedly use tools like this to compile a list of political enemies or undesirables, which they will then use to harass people or selectively restrict individual rights. They're already doing this, and this is just going to make it easier for them.

  • Yes. And I'm sure the next administration will as well. These things only ratchet in one direction.

  > to the extent that that surveillance is already prohibited by law.

The problem with government contracts where you say "can't do anything illegal" is that THEY DECIDE WHAT IS LEGAL. We're lucky we live in a system where you can challenge the government, but whichever side of the aisle you're on, I think you believe people are trying to dismantle that feature (we just disagree about who is doing it, right?).

<edit>

THAT'S EXACTLY WHAT DARIO WAS ARGUING, and it is exactly what the DoD wanted to get around. They wanted to use Claude for all legal purposes, and Anthropic said no for moral reasons.

Also notice the subtle language in OpenAI's red lines. "No use of OpenAI technology for mass *domestic* surveillance." We've seen how this was abused by the NSA already since normal communication in the Internet often crosses international lines. And what they couldn't get done that way they got around through allies who can spy on American citizens.

</edit>

I think we need to remember that legality != morality. Law is our attempt to formalize morality, but I think everyone sees how easy it is to skirt.[0]

  > I believe this implies that the DoW can procure data on US citizens en masse from private companies - including

Call your senators. There's a bill in the Senate explicitly about this; here's the EFF's take.[1] IMO it's far from perfect, but it's an important step, and I think we should talk about this more. I have problems with it too, but hey, is anything in it preventing things from continuing to get better? It's too easy to critique and then do nothing. We've been arguing for over a decade; I'd rather take a small step than a step back.

  > If I'm right, this is abhorrent.

Let's also not forget WorldCoin[2]. World (blockchain)? World Network?

I have no trust for Altman. His solution to distinguishing humans from bots is mass biometric surveillance. This seems as disconnected as the CEO of Flock or that Ring commercial.

Not to mention all the safety failures. Sora was released allowing real people to be generated? Great marketing. Glad they "fixed it" so quickly...

There's a lot happening now and it's happening fast. I think we need to be careful. We've developed systems to distribute power but it naturally wants to accumulate. Be it government power or email providers. The greater the power, the greater the responsibility. But isn't that why we created distributed power systems in the first place?

Personally I don't want autonomous, unquestioning killbots under the control of one person or a small number of people. Even if you believe the one in control now is not a psychopath (-_-), you can still agree that it's possible for that type of person to get control. Power corrupts. Things like killing another person should be hard, emotionally. That's a feature, not a flaw. Soldiers questioning orders is a feature, not a flaw. By concentrating power, you risk handing that power to those who do not feel. We're making Turnkey Tyranny more dangerous.

[0] and law is probably our best attempt to make a formal system out of a natural language but I digress

[1] https://www.eff.org/deeplinks/2024/04/fourth-amendment-not-s...

[2] https://en.wikipedia.org/wiki/World_(blockchain)

As a non-US person I take absolutely no solace in sama's statement (even if I believed a single word that snake has ever uttered, which I do not).