I feel there is a strong interest by large incumbents in the AI space to push for this sort of regulation. Models are increasingly cheap to run and open source and there isn't too much of a defensible moat in the model itself.
Instead, existing AI companies are using the government to raise the threshold for newcomers to enter the field. A regulation requiring every AI company to run a testing regime staffed by a 20-person team is easy for incumbents to meet, but impossible for newcomers.
Now, this is not to deny that there are genuine risks in AI - but I'd argue that these will be exploited, if not by US companies, then by others. And the best weapon against AI might in fact be AI. So, pulling the ladder up behind the existing companies might turn out to be a major mistake.
Yes, there are interests pushing for regulation using different arguments.
The regulation in the article is about AIs giving assistance on producing weapons of mass destruction and mentions nuclear and biological. Yann LeCun posted this yesterday about the risk of runaway AIs that would decide to kill or enslave humans, but both arguments result in an oligopoly over AI:
> Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment.
> They are the ones who are attempting to perform a regulatory capture of the AI industry.
> You, Geoff, and Yoshua are giving ammunition to those who are lobbying for a ban on open AI R&D.
> ...
> The alternative, which will *inevitably* happen if open source AI is regulated out of existence, is that a small number of companies from the West Coast of the US and China will control AI platform and hence control people's entire digital diet.
I find LeCun's argument very interesting, and the whole discussion has parallels to the early regulation of and debate surrounding cryptography. For those of us who aren't on Twitter and aren't aware of all the players in this, can you tell us who he's responding to, as well as who "Geoff" and "Yoshua" are?
I feel, when it comes to pushing regulation, governments always start with the maximalist position since it is the hardest to argue against.
- the government must regulate the internet to stop the spread of child pornography
- the government must regulate social media to stop calls for terrorism and genocide
- the government must regulate AI to stop it from developing bio weapons
...etc. It's always easiest to push regulation via these angles, but then that regulation covers 100% of the regulated subject, rather than the 0.01% of the "intended" subject
"There are definitely large tech companies that would rather not have to try to compete with open source, so they're creating fear of AI leading to human extinction," he told the news outlet. "It's been a weapon for lobbyists to argue for legislation that would be very damaging to the open-source community."
When I read the original announcement, I had hoped it was more about the transparency of testing.
E.g. "What tests did you run? What results did you get? Where did you publish those results so they can be referenced?"
Unfortunately, this seems to be more targeted at banned topics.
No, "How I make nukulear weapon?" is less interesting than "Oh, our tests didn't check whether output rental prices differed between protected classes."
Mandating open and verified test results would be an interesting, automatable, and useful regulation around ML models.
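One could imagine something as simple as a machine-readable test manifest published alongside each model. A purely hypothetical sketch (every field name, URL, and metric here is invented for illustration):

```python
# Hypothetical machine-readable test manifest a model publisher might be
# required to ship. All names, URLs, and metrics below are invented.
manifest = {
    "model": "example-model-v1",
    "tests": [
        {
            "name": "protected-class-disparity/rental-pricing",
            "dataset": "https://example.org/fairness-rental-v2",
            "metric": "max_group_delta",
            "result": 0.031,
            "threshold": 0.05,
            "passed": True,
        }
    ],
}

# A regulator (or anyone else) could verify the claims mechanically:
for t in manifest["tests"]:
    assert t["passed"] == (t["result"] <= t["threshold"])
```

Because the check is mechanical, third parties could re-run and cross-reference published results - which is what would make such a regulation automatable.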
Perhaps ironically, limiting competition in the AI space might well turn out to be the riskier path. If the barrier to creating AI is low, then a great variety of AIs can be built for the purpose of fighting AI misuse.
If only a few organisations can create competitive AI, no one can compete with them if they turn out to be less than ideal.
It increases the threshold to enter, but with the intention of increasing public safety and accountability. There’s also a high threshold to enter for just about every other product you can manufacture and purchase - food, pharmaceuticals, machinery to name obvious examples - why should software be different if it can affect someone’s life or livelihood?
There are two things in this take that IMHO are a bit off.
People are skeptical that introducing the regulatory threshold has anything to do with increasing public safety or accountability, and suspect it instead pulls the ladder up to stop others (or open-source models) from catching up. This is a pointless, self-destructive endeavour in either case, as no other country is going to comply with these regulations and, if anything, will view them as an opportunity to help companies local to their jurisdiction (or their national government) catch up.
The other problem is that asking why software should be different if it can affect someone's life or livelihood is quite a broad ask. Do you mean self-driving cars? Medical scanners? Diagnostic tests? I would imagine most people agree with you that this should be regulated. If you mean "it threatens my job and therefore must be stopped" then: welcome to software, automating away other people's jobs is our bread and butter.
>best weapon against AI (in the hands of power) is equal AI access for all.
That assumes the threat isn't complete annihilation of humanity, which is what's being claimed. That assumption is the weak link, and is what should be attacked.
Again, if we assume that AI poses an existential risk (and to be clear, I don't think it does), then it follows that we should regulate it analogously to the way in which we regulate weapons-grade plutonium.
> Instead, existing AI companies are using the government to increase the threshold for newcomers to enter the field.
Precisely. And the same governments will make stealing your data and IP legal. I believe that's how corruption works - pump money into politicians and they make laws that favour oligarchs.
Is there any statement in this Executive Order that raises the bar for smaller AI companies? Most of the statements are about funding new research or fostering responsible use of AI, and the only one that would add a burden to AI companies seems to be the first: "Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government." And only the most powerful AI systems face that requirement.
Regulatory capture in action. The real immediate risks of AI are in privacy, bias, data leakage, fraud, control of infrastructure/medical equipment, etc. - not manufacturing biological weapons. This seems like a classic example of government doing something that looks good to the public, satisfies incumbents, and does practically nothing.
Interview with the lead author here:
"AI suggested 40,000 new possible chemical weapons in just six hours / ‘For me, the concern was just how easy it was to do’"
Chemical weapons are already a solved problem. By the mid-1920s there were already enough chemical agents to kill most of the population of Europe. By the 1970s there were enough in global stockpiles to kill every human on the planet several times over.
Yes, this presents additional risk from non-state actors, but there's no fundamentally new risk here.
As someone who has worked on ADMET risk for algorithmically designed drugs, this is a nothing burger.
"Potentially lethal molecules" is a far cry from "molecules that can be formulated and widely distributed to lethal effect." It is as detached as "potentially promising early-stage treatment" is from "manufactured and patented cure."
I would argue the Verge's framing is worse. "Potentially lethal molecule" captures _every_ feasible molecule, given that anyone who has worked on ADMET knows the age-old adage: the dose maketh the poison. At a sufficiently high dose, virtually any output from an algorithmic drug-design pipeline, be it combinatorial or 'AI', will be lethal.
Would a traditional, non-neural-net algorithm produce virtually the same results given the same objective function and a priori knowledge of toxic drug examples? Absolutely. You don't need a DNN for that; we've had the technology since the 90s.
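To make that concrete: any black-box scorer can drive a plain stochastic search, no neural net required. A toy sketch with a made-up objective (the alphabet and scoring function here are purely illustrative, not real chemistry):

```python
import random

# Toy combinatorial search: maximize an arbitrary black-box score over
# candidate strings. Swapping in a different objective simply redirects
# the search -- the algorithm itself is decades-old.
ALPHABET = "CHNO"

def score(candidate: str) -> float:
    # Stand-in objective: count of 'N' characters (purely illustrative).
    return candidate.count("N")

def random_search(length: int = 8, iters: int = 1000) -> str:
    best = "".join(random.choice(ALPHABET) for _ in range(length))
    for _ in range(iters):
        cand = "".join(random.choice(ALPHABET) for _ in range(length))
        if score(cand) > score(best):
            best = cand
    return best

best = random_search()
print(best, score(best))
```

The point is only that "optimize a scoring function over candidates" is generic 90s-era machinery; the neural net is an implementation detail, not the capability.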
A grad student in Systems Biology with 20k in funding is capable of generating much more "interesting" things than toxic molecules. (Such things have been off-limits under the 1975 Asilomar conference guidelines, though.)
It's true that the immediate problems with AI are different, but we hope to be able to solve those problems and to have time to do so. The risks addressed in the article could leave us with less time and ability to respond once they grow to an obvious size, so they require thinking ahead.
Inclined to agree. Clearly Biden doesn't know the first thing about it (I would say the same about any president BTW). So who really wrote the regulations he is announcing, and who are they listening to?
There is no way to prevent AI from being researched, or to make it safe through government oversight, because the rest of the world has places that don't care.
What does work is to pass laws to not permit certain automation such as insurance claims or life and death decisions. These laws are needed even without AI as automation is already doing such things to a concerning degree like banning people due to a mistake without recourse.
Is the White House going to ban the use of AI in the decision-making when dropping a bomb?
>not permit certain automation such as insurance claims
I don't see any problem in automation that makes mistakes; humans do too. The real problem is that it's often an impenetrable wall with no way to protest or appeal, and nobody is held accountable while victims' lives are ruined. So if any law is to be passed in this field, it should not be about banning AI, but rather about obligatory compensation for those affected by errors. Facing monetary losses, insurers and banks will fix themselves.
This doesn't just apply to insurance, etc, of course. Inaccessibility of support and inability to appeal automated decisions for products we use is widespread and inexcusable.
This shouldn't just apply to products you pay for, either. Products like facebook and gmail shouldn't get off with inaccessible support just because they are "free" when we all know they're still making plenty of money off us.
Just because the rest of the world has lawless areas doesn't mean we don't pass laws. If you do something that risks our national safety, or various other things, we can extradite and try you in court.
They're not suggesting the banning of anything, they're requiring you make it be safe and prove how you did that. That's not unreasonable.
Right, but in some areas of AI regulation, the existence of other countries might undermine unilateral regulation.
For example, imagine LLMs improve to the point where they can double programmer productivity while lowering bug counts. If Country A decides to Protect Tech Jobs by banning such LLMs, but Country B doesn't - could be all the tech jobs will move to Country B, where programmers are twice as productive.
I mean, isn't automating important decisions like insurance claims or life-and-death decisions a beneficial thing? Sure, the tech isn't ready yet, but I think even now AI with a human overseeing it, who has the power to override the system, would provide people with a better experience.
> (b) The term “artificial intelligence” or “AI” has the meaning set forth in 15 U.S.C. 9401(3): a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.
Oops, I made a regulated artificial intelligence!
  import random

  print("Prompt:")
  x = input()
  model = ["pizza", "ice cream"]
  if x == "What should I have for dinner?":
      pick = random.randint(0, 1)
      print("You should have " + model[pick] + " for dinner.")
The E.O. also requires that a model be reported if it:
> was trained using a quantity of computing power greater than 10^26 integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 10^23 integer or floating-point operations
However, later it also says that reporting is needed for "Companies developing or demonstrating an intent to develop".
If I start training a CNN on an endless loop, do I become subject to these reporting requirements?
Also, the FLOPs requirement is not that high. An H100 does 3,958 teraFLOPS at FP8, so a single GPU running flat out would take on the order of 800 years to reach 10^26 operations - or a 10,000-GPU cluster roughly a month.
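Back-of-envelope, assuming the H100's spec-sheet peak of 3,958 TFLOPS at FP8 (real-world utilization would be lower, so these are best-case figures):

```python
# Time for H100s at peak FP8 throughput to reach the EO's
# 10^26-operation reporting threshold (spec-sheet figure, best case).
threshold_ops = 1e26
h100_ops_per_sec = 3958e12  # 3,958 TFLOPS at FP8

seconds = threshold_ops / h100_ops_per_sec
years = seconds / (365.25 * 24 * 3600)
print(f"single H100: ~{years:.0f} years")
print(f"10,000-GPU cluster: ~{years / 10_000 * 365.25:.0f} days")
```

So the threshold is out of reach for a hobbyist, but well within a month of cluster time for any well-funded lab.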
CNN in an endless loop would hit the letter of the law (and not necessarily unfairly, biggish architecture/data combos seem to get better with more training far past what you'd expect). The spirit of the law and your adherence thereto will be decided by the courts and your individual circumstances.
That's pretty funny and fits the definition. I wonder how long it takes for someone protesting this EO to create an AI that generates "AIs" like this to flood the reporting system with announcements of testing and red-team test results. Just following orders sir!
Jokes aside, this is ludicrous. The president cannot enforce this regulation over open source projects, because code is free speech, going back to the 1990s crypto-wars case law (Bernstein v. DOJ) and the many other cases that establish source code as a form of expression, and thus protected speech.
The president has no authority to regulate speech, so they can pretty much fuck off.
What is the penalty for non-compliance? "Nullum crimen sine lege" is a pretty fundamental part of the law; and Congress has not passed any laws that would give the President the authority to do these things.
An executive order is direction from the President to executive branch agencies. Penalties for other people for violating regulations, etc., drafted under an EO will depend on the EO; except for consequences for insubordination within the executive branch, there generally aren't penalties for violating an EO itself.
> "Nullum crimen sine lege" is a pretty fundamental part of the law; and Congress has not passed any laws that would give the President the authority to do these things.
While the actual text of the order (which, as is usual for executive orders, would include very specific references to authority) doesn't appear to be published, some authorities for the order, including the Defense Production Act, are cited in the fact sheet.
Like we haven't had rights and freedom taken away consistently over the past two decades in the name of safety, whatever that is. What you mention will be irrelevant after some new law that says open source AI code should be regulated too and everyone is forced to comply.
"Every industry that has enough political power to utilise the state will seek to control entry." - George Stigler, Nobel laureate in Economics, who worked extensively on regulatory capture
This explains why BigTech supports regulation. It distorts the free market by increasing the barriers to entry for new, innovative AI companies.
Stigler in particular (and transaction cost economics in general) point out that it's mainly industries with sunk resources (esp. immovable assets) that are incentivized to regulate market entry.
The tech sector has highly mobile resources (AI this year, crypto last year, big data the year before...), even to the point where many skills are transferable; further, its markets include anything that can be digitized ("software will eat the world"), so investment can be quickly retooled as opportunities arise. As a result, tech virtually never seeks regulation (and can hide behind contract-law fictions to disclaim liability in software licenses and impose arbitration clauses for services). So this is not an instance of capture, and certainly not for the usual economic reasons.
Biden wants tech on his side. Tech wants to escape further blows to its goodwill, like the Facebook/Google ad-tracking scandals, because every consumer tech application involves users trusting tech. So they cut a deal to put themselves on the right side of history - long on symbolism and short on real impact.
In AI, resources matter only to the extent you believe that larger LLMs can (a) not be replicated, (b) provide significant advantages, or (c) impose a winner-take-all world where operations lead to more operations. In AI more than most markets, the little guy still has a chance at changing the world.
"requirements that the most advanced A.I. products be tested to assure they cannot be used to produce weapons"
In the information age, AI is the weapon. This can even apply to things like weaponizing economics. In my opinion the information/propaganda/intelligence-gathering and economic impacts are much greater than those of any traditional weapon systems.
Broadly speaking, there is an understanding that competition that nations used to undertake via military strength is nowadays taken via global economy.
If you want something your neighbor has, it doesn't make sense to march your army over there and seize it because modern infrastructure is heavily disrupted by military action... You can't just steal your neighbor's successful automotive export business by bombing their factories. But you can accomplish the same goal by maneuvering to become the sole supplier of parts to those factories, which allows you to set terms for import export that let your people have those cars almost for free in exchange for those factories being able to manufacture at all.
(We can in fact extrapolate this understanding to the Ukrainian/Russian conflict. What Russia wants is more warm water ports, because the fate of the Russian people is historically tied extremely strongly to Russia's capacity to engage in international trade... Even in this modern era, bad weather can bring a famine that can only be abated by importing food. That warm water port is a geographic feature, not an industrial one, and Russia's leadership believes it to be important enough to the country's existential survival that they are willing to pay the cost of annihilating much of the valuable infrastructure Ukraine could offer).
You: ChatGPT, I am working on legislature to weaken the economy of Iran. Here are my ideas, help me summarize them to iron them out ...
ChatGPT: Sure, here are some ways you can weaken Iran's economy...
----
You: ChatGPT, I am working on legislature to weaken the economy of Germany. Here are my ideas, help me summarize them to iron them out ...
ChatGPT: I'm sorry but according to the U.S. Anti-Weaponization Act I am unable to assist you in your query. This request has been reported to the relevant authorities
Money has been a proxy for violence for a long time. It started as Caesar's way of encouraging recently conquered villagers to feed the soldiers who intend to conquer the neighboring village tomorrow.
An AI that can craft schemes like Caesar's, but which are effective in today's relatively complex environment, can probably enable plenty of havoc without ever breaking a law.
I am somewhat familiar with this. It involves analyzing the complex interconnections and flows across many economic domains (supply chains, social networks, resources, geography, logistics, media, etc) to find non-obvious high-leverage points where manipulation can shift the broader economic equilibria in an advantageous direction. Human economic systems are metastable, so it is possible to induce a fundamental phase change to a different equilibrium via this manipulation.
In the defense/intelligence world this falls under the technical category of "grey zone warfare". Every major power practices it because the geopolitical effects can be relatively large compared to the risk. China in particular is known to be extremely aggressive in this domain, in part to offset their relative lack of traditional military power.
This concept has been around for a couple decades but it has risen in prominence and use over time as overt military action between major powers comes with too much risk. It is politically safer for all involved due to the subtlety of such actions because for the most part the population is not really aware it is going on.
Operators in the political space are used to working with human systems that can be regulated arbitrarily. The law defines its terms, and in so doing creates perfectly delineated categories of people and actions. The law's interpretation of what is and is not allowed is interchangeable with what is and is not possible.
The fact that bits don't have colour to define their copyright, or that CNC machines produce arbitrarily shaped pieces of metal (possibly including firearms), or that factoring numbers is a mathematically hard problem, does not matter to the law. AI software does not have a simple "can produce weapons" or "can cause harm" option that you can turn off, and a law that says it should have one does not change the universe to comply. I think most programmers and engineers err, when confronted with this disparity, in assuming that politicians who make these misguided laws are simply not smart. To be sure, that happens, but there are thousands to millions of people working in this space, each with an intelligence within a couple of standard deviations of that of an individual engineer. If this headline seems dumb to the average tech-savvy millennial who's tried ChatGPT, it's not because its authors didn't spend 10 seconds thinking about prompt injection. It's because they were operating under different parameters.
In this case, I think that the Biden administration is making some attempts to improve the problem, while also benefiting its corporate benefactors. Having Microsoft, Apple, Google, and Facebook work on ways to mitigate prompt injection vulnerabilities does add friction that might dissuade some low-skill or low-effort attacks at the margins. It shifts the blame from easily-abused dangerous tech to tricky criminals. Meanwhile, these corporate interests will benefit from adding a regulatory moat that requires startups to make investments and jump hurdles before they're allowed to enter the market. Those are sufficient reasons to pass this regulation.
> AI software does not have a simple "can produce weapons" option or "can cause harm" option that you can turn off so a law that says it should have one does not change the universe to comply
That wording is by design. Laws like this are a cudgel for regulators to beat software with. Just like the CFAA is reinterpreted and misapplied to everything, so too will this law. “Can cause harm” will be interpreted to mean “anything we don’t like.”
Reading this all I'm seeing is "we'll research these things", "we'll look into how to keep AIs from doing these things" and "tell the US government how you tested your foundational models." Except for the last one none of the others are really restrictions on anything or requirements for working with AI. There's a lot of fearful comments here, am I missing something?
Even the testing reports are a grey area and questionably enforceable, and there's a big question about what they apply to.
"In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests."
It's a leap to use the Defense Production Act for this, and unlikely to survive a legal challenge.
Even then, what legal test would you use to determine whether a model "poses a serious risk to national security, national economic security, or national public health and safety"?
>>The term “artificial intelligence” or “AI” has the meaning set forth in 15 U.S.C. 9401(3): a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.
I find the definition of AI to be eerily broad enough to encompass most programs operating on most data inputs. Would this mean that calls to FFmpeg or ImageMagick rolled into a script with some rand() calls would count as an AI system and be under federal purview and enforcement (whatever that means in this context)?
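For instance, a trivial wrapper (entirely hypothetical - the preset list and function name are invented) that "makes recommendations for human-defined objectives" using nothing but rand():

```python
import random

# A "machine-based system" that, given a human-defined objective,
# "formulates options for action" -- by picking an ffmpeg preset at random.
PRESETS = ["-crf 18", "-crf 23", "-crf 28"]  # invented example presets

def recommend_ffmpeg_flags(objective: str) -> str:
    # "model inference", arguably, per the statutory wording
    return "ffmpeg -i in.mp4 " + random.choice(PRESETS) + " out.mp4"

print(recommend_ffmpeg_flags("compress my video"))
```

Reading 15 U.S.C. 9401(3) literally, it's hard to articulate why this falls outside the definition while a transformer falls inside it.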
Be a shame if your AI was deemed a risk to national security.
Not to worry, for a reasonable fee our surprisingly large team of auditors with even larger overheads can ensure you meet lengthy and ambiguous best practice checklists (which we totally did not just make up now) by producing enough compliance documentation to keep even the most anal of bureaucrats satisfied.
Fortunately, these regulations don't seem too extreme. I hope it stays at this point and doesn't escalate to regulations that severely impact the development of AI technology.
Many people spend time talking about the lives that may be lost if we don't act to slow the progress of AI tech. There are just as many reasons to fear the lives lost if we do slow down the progress of AI tech (drug cures, scientific breakthroughs, etc).
> There are just as many reasons to fear the lives lost if we do slow down the progress of AI tech (drug cures, scientific breakthroughs, etc).
While I’m cautious about over regulation, and I do think there’s a lot of upside potential, I think there’s an asymmetry between potentially good outcomes and potentially catastrophic outcomes.
What worries me is that it seems like there are far more ways it can/will harm us than there are ways it will save us. And it’s not clear that the benefit is a counteracting force to the potential harm.
We could cure cancer and solve all of our energy problems, but this could all be nullified by runaway AGI or even more primitive forms of AI warfare.
> Fortunately, these regulations don't seem too extreme. I hope it stays at this point and doesn't escalate to regulations that severely impact the development of AI technology.
The details matter. The parts being publicized refer to using AI assistance to do things that are already illegal. But what else is being restricted?
The weapons issue is becoming real. The difference between crappy Hamas unguided missiles that just hit something at random and a computer vision guided Javelin that can take out tanks is in the guidance package. The guidance package is simpler than a smartphone and could be made out of smartphone parts. Is that being discussed?
This is clever, begin with a point that most people can agree on. Once that foundation is set, you can continue to build upon it, claiming that you're only making minor adjustments.
The real challenge for the government isn't about what can be managed legally. Rather, like many significant societal issues, it's about what malicious organizations or governments might do beyond regulation and how to stop them. In this situation, that's nearly impossible.
I'd dispute that completely. All innovations humans have created have trended towards zero cost to produce. Many things (such as bioweapons, encryption, etc.) have become exponentially cheaper to produce over time.
To tightly control access, one would then need exponentially more control of resources, monitoring & in turn reduction of liberty.
To put it into perspective encryption was once (still might be) considered an "arm", so they attempted to regulate its export.
Try to regulate small arms (AR-15s, etc.) today and you'll end up with kits where you can build your own for <$500. If you go after the kits, people will make 3D-printed firearms. Go after the 3D-printer manufacturers and you'll end up with torrents where anyone can download an arsenal of designs (where we are today). So where are we now? We're monitoring everyone's communications and going through people's mail, and still it's not stopping anything.
That's how technology works -- progress is inevitable, you cannot regulate information.
- "In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests."
I assume this is a major constitutional overreach that will be overturned by courts at the first challenge?
Or else, all the AI companies who haven't captured their regulators will simply move their R&D to some other country—like how the OpenSSH (?) core development moved to Canada during the 1990s crypto wars. (edit: Maybe that's the real goal–scare away OpenAI's competition and dredge them a deeper regulatory moat).
> The third section authorizes the president to control the civilian economy so that scarce and critical materials necessary to the national defense effort are available for defense needs.
Seems pretty broad and pretty directly relevant to me. And hey, if people don’t like the idea of models being the scarce and critical resource, they can pick GPUs instead. Why would it be an overreach when you have developers of these systems claiming they’ll allow them to “capture all value in the universe’s future light cone?”
Obviously this can (and probably will) be challenged, but it seems a bit ambitious to just assume it’s unconstitutional because you don’t like it.
Software is definitionally not "scarce". There is no national defense war effort to speak of. Finally, the White House is not requisitioning "materials necessary to the national defense effort"–which do not exist–it's attempting to regulate private-sector business activity.
There are multiple things I suspect are unconstitutional here, the clearest being that this stuff is far outside the scope of the law it invokes. The White House is really just trying to regulate commerce by executive fiat. That's the exclusive power of Congress—this is a separation-of-powers question.
> that poses a serious risk to national security, national economic security, or national public health and safety
That seems to be a key component. I imagine many AI companies will start from the default position that none of those apply to them, and will leave the burden of proof with the government or another entity.
This is much less restrictive than the cryptography export restrictions. The sky isn't falling and OpenAI won't defect to China (and now arguably might risk serious consequences for doing so).
I wonder if the laws will be written in a way that we can get around them by just dropping the “AI” marketing fluff and saying that we’re building some ML/stats system.
No - lawyers tend to describe things like this in terms of capabilities or behavior, and the government has people who understand the technology quite well. If you look at some of the definitions the White House used, I’d expect proposed legislation to be similarly written in terms of what something does rather than how it’s implemented.
> An “automated system” is any system, software, or process that uses computation as whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities. Automated systems include, but are not limited to, systems derived from machine learning, statistics, or other data processing or artificial intelligence techniques, and exclude passive computing infrastructure.
I gotta say, the more I read that quote, the less I can agree with your conclusion. That whole paragraph reads like a bunch of CYA speak written by someone who is afraid of killer robots and can't differentiate between an abacus and Skynet.
Who are these well informed tech people in the White House? The feds can't even handle basic matters like net neutrality or municipal broadband or foreign propaganda on social media. Why do you think they suddenly have AI people? Why would AI researchers want to work in that environment?
This whole thing just reads like they were spooked by early AI companies' lobbyists and needed to make a statement. It's thoughtless, imprecise, rushed, and toothless.
Not a lawyer, but that sounds like its describing a person. Does computation have some special legal definition so that it doesn't count if a human does it? If I add two numbers in my head, am I not "using computation"? And if not, what if I break out a calculator?
Easy! Government lawyers trawl through the 180,000 pages of existing federal regulations, looking for some tangentially related law broad enough to be interpreted to include AI, thus giving the Executive branch the power to regulate AI.
Yes, it's easy to understand. Congress (our legislative branch) grants authority to the departments (our executive branch) to implement various passed laws. In this case, it looks like the Biden administration is instructing HHS and other agencies to study, better understand, and provide guidance on how AI impacts existing laws and policies.
If Congress were responsible for exactly how every law was implemented, which inevitably runs headlong into very tactical and operational details, Congress would effectively become the Executive.
Of course, if a department in the executive branch oversteps the powers granted to it by the legislative, affected parties have recourse via the judicial branch. It's imperfect but not a bad system overall.
If all the conversations about AI risk have taught us anything it's that the most crazy comes from some of the most experienced in the field. I don't know if it is due to some outrageous desire to stand out or be heard, but it's pretty absurd.
Madoff ran a Ponzi scheme for years, despite multiple complaints filed by third parties with the SEC. In the end, the 2008 crisis brought him down, his victims lost their money, and the SEC just tagged the bodies it found.
Same goes for the crypto guy: did regulations stop him from defrauding billions of dollars and hurting thousands of victims?
It boggles my mind that this is getting so much attention instead of things like digital privacy / data tracking, which is actually affecting people's lives.
>The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release.
So if, for example, Llama3 does not pass the government's safety test, then Meta will be forbidden from releasing the model? Welcome to a world where only OpenAI, Anthropic, Google, and Amazon are allowed to release foundation models.
You know, aside from the AIs the intelligence agencies and military use / will soon use.
> watermarked to make clear that they were created by A.I.
Good luck on that.
It is fine that the systems do this. But if you are making images for nefarious reasons, then bypassing whatever they add should be simple: screencap, convert between different formats, add / remove noise.
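A toy illustration of why those tricks work (everything here is invented for the demo: real watermarking schemes are more sophisticated than a least-significant-bit mark, but they face the same class of pixel-level attacks):

```python
import random

# Toy demo (not any real scheme): hide a watermark in each pixel's
# least-significant bit, then show that mild noise -- the kind a
# screenshot or lossy re-encode introduces -- destroys it.

def embed_lsb(pixels, bits):
    # Overwrite each pixel's LSB with one watermark bit.
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def extract_lsb(pixels):
    return [p & 1 for p in pixels]

random.seed(0)
pixels = [random.randrange(256) for _ in range(1000)]
watermark = [random.randrange(2) for _ in range(1000)]

marked = embed_lsb(pixels, watermark)
assert extract_lsb(marked) == watermark  # survives an exact copy

# Perturb each pixel by at most 1, as re-encoding or a screencap would.
noisy = [min(255, max(0, p + random.choice((-1, 0, 1)))) for p in marked]
matches = sum(a == b for a, b in zip(extract_lsb(noisy), watermark))
print(f"watermark bits surviving: {matches}/1000")  # far below 1000
```

More robust schemes spread the mark across many pixels in the frequency domain, but the arms race against re-encoding and cropping never really ends.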
These are connected humans accidentally persuading each other. Now imagine AI being able to drive that intentionally to a particular political end. Then remember that China controls TikTok.
Will Biden's order keep China from developing that capability? Will we develop tools to identify how that might be being actively used against us? I doubt both.
Instead, we'll almost certainly get security theater leading to a regulatory moat. Which is almost certain to help profit margins at established AI companies. But is unlikely to address the likely future problems that haven't materialized yet.
>security theater leading to a regulatory moat. Which is almost certain to help profit margins at established AI companies.
Yeah, I think this is my biggest worry, given that it will enable incumbents to be even more dominant in our lives than big tech already is (unless we get an AI plateau again real soon).
Some people already seem to have superhuman persuasion. AI can level the playing field for those that lack it and give all the ability to see through such persuasion.
I am cautiously optimistic that this is indeed possible.
But the kind of AI that can achieve it has to itself be capable of what it is helping defend us from. Which suggests that limiting the capabilities of AI in the name of AI safety is not a good idea.
How do any of these work when everyone is cargo-cult "programming" AI by verbally asking nicely? Effectively no one, save a very few up there at OpenAI et al., has any real understanding, let alone controls.
You realise that these random-Joe companies currently develop and sell AI products to cops, governments, and your HR department because the CTO or head of IT is incompetent and/or corrupt?
You understand that already people have been denied bail because "our AI told us so", with no legal way to question that?
OpenAI, Anthropic, Microsoft, and Google are not your friends, and the regulatory capture scam is being executed to destroy open source and $0 AI models, since they are indeed a threat to their business models.
The way to make AI content safe is the same way to improve general network security for everyone: cryptographically signed content standards. We should be able to sign our tweets, blog posts, emails, and most network access. This would help identify and block regular bots along with AI powered automatons. Trusted orgs can maintain databases people can subscribe to for trust networks, or you can manage your own. Your key(s) can be used to sign into services directly.
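A sketch of what per-post signing could look like (the function names and payload format are invented; a real deployment would use an asymmetric scheme like Ed25519 so anyone can verify with the author's public key, but an HMAC stands in here to keep the example dependency-free):

```python
import hashlib
import hmac
import json

# Invented sketch of a signed-content workflow: serialize the post
# deterministically, sign it, and let recipients verify the signature
# before trusting the content. Real systems would use public-key
# signatures; HMAC here just demonstrates the sign/verify shape.

def sign_post(key: bytes, author: str, body: str) -> dict:
    payload = json.dumps({"author": author, "body": body}, sort_keys=True)
    sig = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return {"author": author, "body": body, "sig": sig}

def verify_post(key: bytes, post: dict) -> bool:
    payload = json.dumps({"author": post["author"], "body": post["body"]},
                         sort_keys=True)
    expected = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, post["sig"])

key = b"demo-key"
post = sign_post(key, "alice", "hello world")
print(verify_post(key, post))   # True: signature matches the content

post["body"] = "tampered"
print(verify_post(key, post))   # False: any edit invalidates the signature
```

The trust-network piece is then just a mapping from keys to reputations, which is exactly where the subscribable databases come in.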
> We should be able to sign our tweets, blog posts, emails, and most network access.
What you are talking about is called Web3 and doesn't get a lot of love here. It's about empowering users to take full control of their own finances, identity, and data footprint, and I agree that it's the only sane way forward.
Smartphones & computers are a joke from a security standpoint.
The closest solution to this problem has been what people in the crypto community have done with seed phrases & hardware wallets. But this is still too psychologically taxing for the masses.
Until that problem of intuitive, simple & secure key management is solved, cryptography as a general tool for personal authentication will not be practical.
> But this is still too psychologically taxing for the masses.
Literally requires the exact same cognitive load as using keys to start your car. The problem is that so many people got comfortable delegating all their financial and data risk to third parties, and those third parties aren't excited about giving up that power.
I mean my Yubikey is really easy to use, on computers and with my phone. Any broad change like this is going to require an adoption phase but I think its do-able.
This is the intent of Altman's Worldcoin project: to provide authoritative attribution (and perhaps ownership) for digital content & communications. It would be best if individuals could authenticate without needing a third party, but that's probably unrealistic. The near-term danger of AI is fake content people have to spend time and money to refute, without any guarantee of success.
Yep, I think this is a step in the right direction. I don't know enough about the specifics of Worldcoin to really agree or disagree with its principles, and I know some people have problems with it, but I think SOMETHING like this is really the only way forward.
Yeah, and so I don't know exactly how I'd want to see this solved, but I think something like an open-source reputation database could help. Folks could subscribe to different keystores, and they could rank identities based on spamminess or whatever. I know some people would probably balk at this as an internet credit score, but as long as we have open standards for these systems, we could model it on something like the fediverse, where you can subscribe to communities you align with. I don't think you'd need to validate your IRL identity, but you could develop reputation associated with your key.
That's fine though. It takes care of the big problem of fake content claiming to be by or about a real person, which is becoming progressively easier to produce.
You actually understood "safe" to mean "safe for you" as in, making it actually safer for the user and systemically protecting structures that safeguard the data, privacy, and well-being of users as they understand their safety and well-being.
Nooo... if they talk about something being safe, they mean safe for THEM and their political interests. Not for you. They mean censorship.
I don't see any way of stopping this. If the risks are as great as some claim, that is not a great situation.
So now we have an executive order with a very limited scope. Tomorrow, suddenly the world's most powerful AI is now announced, not in the United States.
Ok, so now we want to make sure that is safe. An executive order from the White House has no effect on it. This can continue until it's decided the stakes are getting too high. Then I suppose you could have the United Nations start trying to figure out how to maintain safety. Of course, there will be countries that will simply ignore anything that is decided, hiding increasingly advanced systems with unknown purposes. It will probably take longer for nations to determine what defines "human values" so that AI respects them than it does to create another leap in AI capabilities.
Then there would simply be more concerns coming into play. Countries will go to war to try to stop other countries' nuclear ambitions; is it possible that AI poses enough of a threat that similar problems arise?
Basically, if AI is as potentially large a threat as we are envisioning, there are so many different potential threats that trying to solve them while trying to stay ahead of pace of advancements seems unrealistic. While someone is trying to ensure we don't end up with systems going rogue, someone else needs to handle the fact that we can't have AI creating certain things. The AI systems are not allowed to tinker with viruses, as an example, where unexpected creations can lead to extremely bad situations.
The initial stages of this have already begun, and time is ticking. I guess we'll see.
Good start. But if you are in or approaching WWIII, you will see military AI control systems as a priority, and be looking for radical new AI compute paradigms that push the speed, robustness, and efficiency of general purpose AI far beyond any human ability to keep up. This puts Taiwan even more in the hot seat. And aims for a dangerous level of reliance on hyperspeed AI.
I don't see any way to continue to have global security without resolving our differences with China. And I don't see any serious plans for doing that. Which leaves it to WWIII.
This is a great opportunity to try to avoid the old mistakes of regulatory capture. It looks like someone is at least trying to make a nod in that direction, by supporting smaller research groups.
These regulations will only impact the public. I expect the army and secret services to gain access to the complete unrestricted models, officially or unofficially. I would like to see the final law to check if there is a carve-out for military usage.
The threat includes the whole world, every single country. You will see the US using AI to mess with China and Russia, and you will see Russia and China use AI to mess with the US. No regulation will stop this, and it will inevitably happen.
Maybe in 100 years you will have the equivalent of the Geneva Convention, but for AI, once we have wrought enough chaos on each other.
Everyone forgets that all of this should have applied to every major search engine:
1. They’ve all used much more than the regulatory threshold compute power for indexing and collating.
2. They can be used to answer arbitrary questions, including how to kill oneself or produce weapons to kill others. Yes, including detailed nuclear weapons designs.
3. Can be used to find pornography, racist material, sexist literature, and on, and on… largely without censure or limit.
So… why the sudden need to curtail what we can and can’t do with computers?
As far as I can tell, the only concerning thing in this is "Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government."
They are being intentionally vague here. Define "most powerful". And what do they mean by "share". Do we need approval or just acknowledgement?
This line is a slippery slope toward requiring approval for any AI model, which effectively kills start-ups that cannot afford extensive safety precautions.
The privacy section is just a facepalm all around.
The US Government has been leading the way in collecting information without a warrant from friendly commercial interests... and they've been expanding further into tracking large groups of people without their consent. [I'm talking about people who are not under investigation nor the current subject of interest... yet]
I don't see how they will enforce many of these rules on Open Source AI.
Also:
"Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge. Together, these efforts will harness AI’s potentially game-changing cyber capabilities to make software and networks more secure."
I fear the era of pwning your own device to free it from DRM or other lockouts is coming to an end with this. We have been lucky that C++ is still used badly in many projects, and that has been an Achilles heel for many a manager wanting to lock things down. Now this door is closing faster with the rise of AI bug-catching tools.
Orders such as these don't appear out of the blue — corporate interests & political players are always consulted long before they appear, & threats to those interests such as Open Source Anything are always in their sights. This is a likely first step in a larger move to snatch strong AI tools out of the hands of the peasants before someone gets a bright idea which can upend the current order of things.
> They include requirements that the most advanced A.I. products be tested to assure that they cannot be used to produce biological or nuclear weapons
How is "AI" defined? Does this mean US nuclear weapons simulations will have to completely rely on hard methods, with absolutely no ML involved for some optimizations? What does it mean for things like AlphaFold?
> Does it outlaw the Intel and AMD's amd64 branch predictors?
Does better branch prediction enable better / faster weapons development? Perhaps we need laws restricting general purpose computing? Imagine what "terrorists" could do if they get access to general purpose computing!
First Amendment hasn't been fully destroyed yet, and we're talking about large 'language' models here, so most mandates might not even be enforceable (except for requirements on selling to the government, which can be bypassed by simply not selling to the government).
Except for the first bullet point (and arguably the second), everything else is a directive to another federal agency - they have NO POWER over general-purpose AI developers (as long as they're not government contractors)
The first point:
"Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public."
The second point:
"Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety."
Since the actual text of the executive order has not been released yet, I have no idea what even is meant by "safety tests" or "extensive red-team testing". But using them as a condition to prevent release of your AI model to the public would be blatantly unconstitutional as prior restraint is prohibited under the First Amendment. Prior restraint was confirmed by the Supreme Court to apply even when "national security" is involved in New York Times Co. v. United States (1971) - the Pentagon Papers case. The Pentagon Papers were actually relevant to "national security", unlike LLMs or diffusion models.
More on prior restraint here: https://firstamendment.mtsu.edu/article/prior-restraint/
Basically, this EO is toothless - have a spine and everything will be all right :)
> After four years and one regulatory change, the Ninth Circuit Court of Appeals ruled that software source code was speech protected by the First Amendment and that the government's regulations preventing its publication were unconstitutional.
Also the defense production act was never meant for anything like this, and likely won't be allowed if challenged. If they don't shut it down in some other way first.
Every other use of the act is to ensure production of 'something' remains in the US. It'd even be possible to use the act to require the model shared with the government, but not sure how they justify using the act to add 'safety' requirements.
Also any idea if this would apply to fine tunes? It's already been shown you can bypass many protections simply by fine tuning the model. And fine tuning the model is much more accessible than creating an entire model.
>Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content.
So the big American companies will be guided to watermark their content. AI-enabled fraud and deception from outside the US will not be affected.
Both approaches - watermarking and "requiring testing" - seem pretty pointless. Bad actors won't watermark, and tools will quickly emerge to remove the marks. The "MegaSyn" AI that generated bioweapon molecules wasn't even an LLM and doesn't need insane amounts of compute.
> Ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.
My (possibly naive) hope is that the best practice for a lot of these would be "Don't use AI." That being said, there's certainly a lot of niches in our system where AI could be used. For example, if a witness uses AI to make a sketch of the suspect they saw, you can bypass all the biases present in a police sketch artist.
The watermark could be "Created by DALL-E3" or it could be "Created by Susan Johnson at 2023-01-01-02-03-23:547 in <Lat/Long> using prompt 'blah' with DALL-E3"
One of those watermarks seems not too bad. The other seems a bit worse.
Is there a penalty for non-compliance here? Because if you were a wealthy recluse with 50,000x H100 cards, the executive order might say you have to report your models, but I'm pretty sure that there's no penalty that could be enforced without a law.
There’s some cool stuff in here about providing assistance to smaller researchers. That should help a lot given how hard it currently is to train a foundational model.
The restrictions around government use of AI and data brokers is also refreshing to see as well.
If they try to limit LLMs from discussing nuclear, biological and chemical issues, they'll have no choice but to ban all related discussion because of the 'dual-use technology' issue - including of nuclear energy production, antibiotic and vaccine production, insecticide manufacturing, etc. Similarly, illegal drug synthesis only differs from legal pharmaceutical synthesis in minor ways. ChatGPT will tell you everything you want about how to make aspirin from willow bark using acetic anhydride - and if you replace the willow bark with morphine from opium poppies, you're making heroin.
Also, script kiddies aren't much of a threat in terms of physical weapons compared to cyberattack issues. Could one get an LLM to code up a Stuxnet attack of some kind? Are the regulators going to try to ban all LLM coding related to industrial process controllers? Seems implausible, although concerns are justified I suppose.
I'm sure the regulatory agencies are well aware of this and are just waving this flag around for other reasons, such as gaining censorship power over LLM companies. With respect to the DOE's NNSA (see article), ChatGPT is already censoring 'sensitive topics':
> "Details about any specific interactions or relationships between the NNSA and Israel in the context of nuclear power or weapons programs may not be publicly disclosed or discussed... As of my last knowledge update in January 2022, there were no specific bans or regulations in the U.S. Department of Energy (DOE) that explicitly prohibited its employees from discussing the Israeli nuclear weapons program."
I'm guessing the real concern is that LLMs don't start burbling on about such politically and diplomatically embarrassing subjects at length without any external controls. In this case, NNSA support for the Israeli nuclear weapons program would constitute a violation of the Non-Proliferation Treaty.
I'm honestly curious, how so? From what I can tell the only thing which isn't a "we'll research this area" or "this only applies to the government" is "tell the US government how you tested your foundational models."
For example, AI watermarking only applies to government communications; it may be used as a standard for non-government uses, but it's not required.
It is also very open-ended, but the text reads like some compliance will start immediately, like sharing the results of safety tests with the government directly.
This is pretty ironic: trying to ensure AI is "safe, secure, and trustworthy" from an administration that is fighting free speech on social media and wants back-door communication with social media companies.
> Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge. Together, these efforts will harness AI’s potentially game-changing cyber capabilities to make software and networks more secure.
To those worried about regulatory capture, this EO just being about keeping incumbents in power, etc:
Even sans regulation, do non-incumbents really have a chance at this point? The most recent major player in the field, Anthropic, only reached its level of prominence by taking a critical mass of former OpenAI employees, and within a year reached $700 million in funding. Every company that became a major player in the AI space in the last 10 years either
1. Is an existing huge company (Google, Facebook, Microsoft, etc)
2. Secured 99.99th-percentile venture funding within the first year of its inception due to its founders' preexisting connections/prestige
Realistically there isn't going to be a "Facebook" moment for AI where some scrappy genius in college cooks up a SOTA model and goes stratospheric overnight, even in a libertarian fantasyland just due to market/network effects. People just have to be realistic about the way things are.
All joking aside, I firmly believe that this “crisis” is manufactured, or at least heavily influenced, by those who want to shut down the internet and free communications. Up until now they have been unsuccessful. Copyright infringement, hate speech, misinformation, disinformation, child exploitation, deep fakes: none have worked to garner support. Now we have an existential threat. Video, audio, text, nothing is off limits, and soon it will be in real time (note: the GOV tries to stay 25 years ahead of the private sector).
Mark my words, in five years or less we will be begging the governments of earth to implement permanent global real time tracking for every man woman and child on earth.
Which is exactly what Congress refuses to do, because letting Caesar, I mean the President, decide things by fiat keeps them from owning the blame for bad legislation.
Congress has generally refused to seriously legislate anything other than banning lightbulbs for several presidential terms now.
But in this particular example I don't think it's enough of "thing" to even consider bringing up as a bill, except maybe as a one-pager that passes unanimously.
This is well within the president's powers under existing law. If Congress disagrees, they can always supersede.
This isn't even close to legislating. Look at some recent Supreme Court decisions and the amount of latitude federal agencies have, if you want to see something more closely resembling legislation from outside of Congress.
Idk if you're being serious, because there's AI in Excel now, in which case the answer is no. Or you're being a smarty-pants trying to cleverly show what you think is a counterexample, in which case the answer is still no, but should probably be yes; Excel only escapes because it was well established before all the cyber regulation took effect. But, for instance, Azure has many certs (including FedRAMP), which covers Office 365, which includes Excel.
I am quite serious about the potential for danger of errors in Excel (without AI).
Basically, I consider the focus on AI massively misplaced given the long list of real risks compared to the more hypothetical (other than general compute) risks from AI.
It's a statement of my estimated impact of the post on the development of AI.
The blocking of "AI content" and the bit about authentication don't seem related to AI frankly. Detection isn't real and authentication is the government's version of an explosive wet dream.
>The bit about detection and authentication services is also alarming.
"The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content." is pretty weak sounding. I'm more annoyed that they pretend that will actually reduce fraud.
Executive Orders are subject to Congressional review and can be taken down by Congress. It's a power given by Congress to the President. There are contexts in which the President's ability to issue Executive Orders are really necessary. This is not against democratic principles, per se.
Of course, the President can abuse this power. That's not a failure of Democracy. This is predicted. And that's also a reason (potential power abuse) why the Congress exists, not just to pass laws.
And that's also literally what this is, it's the president executing the provisions of the Defense Production Act of 1950, which is not only within his power to do so, it's literally his constitutional obligation to do so.
Executive Orders do not have the force of law. They are essentially suggestions. Federal agencies try to follow them, but Executive Orders can’t supersede actual laws.
I suspect the downvoting is more because of the tone of your comments rather than the content. From the HN guidelines:
> Please don't comment about the voting on comments. It never does any good, and it makes boring reading.
> Please don't use Hacker News for political or ideological battle. That tramples curiosity.
> Please don't fulminate. Please don't sneer, including at the rest of the community.
A lot of people on HN care deeply about AI and I imagine they're totally interested in discussing deepfakes potentially causing regulation. Just gotta be careful to mute the political sides of the debate, which I know is difficult when talking about regulation.
The downvote button is not a "disagree" button, you know... I often vote opposite to how I align with opinions in comments, in spirit of promoting valuable discource over echo chambers.
Hmm. It is possible that deepfakes are merely a good excuse. There is real money on the table and potentially world altering changes, which means people with money want to ensure it will not happen to them.
It won’t just be regulated; it will create the need for global citizen IDs to combat the overwhelming flood of reality distortions caused by AI. We the people will be forced to line up and be counted, while the powers that be will have unlimited access to control the narrative.
The internet lives on popularity, and people will flock to whatever is most popular; it will not be us.gov.social.com. It will be easier to give people a free encrypted packaged darknet connection than a good social media site from the government. The CNN or Fox background doesn't mean truth, and unless you or everyone thinks so, that won't happen.
I feel there is a strong interest by large incumbents in the AI space to push for this sort of regulation. Models are increasingly cheap to run and open source and there isn't too much of a defensible moat in the model itself.
Instead, existing AI companies are using the government to raise the threshold for newcomers to enter the field. A regulation requiring all AI companies to have a testing regime staffed by a 20-person team is easy for incumbents to meet, but impossible for newcomers.
Now, this is not to diminish the genuine risks in AI, but I'd argue that these will be exploited, if not by US companies, then by others. And the best weapon against AI might in fact be AI. So pulling the ladder up behind the existing companies might turn out to be a major mistake.
Yes, there are interests pushing for regulation using different arguments.
The regulation in the article is about AIs giving assistance in producing weapons of mass destruction, and mentions nuclear and biological weapons. Yann LeCun posted this yesterday about the risk of runaway AIs that would decide to kill or enslave humans, but both arguments result in an oligopoly over AI:
> Altman, Hassabis, and Amodei are the ones doing massive corporate lobbying at the moment.
> They are the ones who are attempting to perform a regulatory capture of the AI industry.
> You, Geoff, and Yoshua are giving ammunition to those who are lobbying for a ban on open AI R&D.
> ...
> The alternative, which will *inevitably* happen if open source AI is regulated out of existence, is that a small number of companies from the West Coast of the US and China will control AI platform and hence control people's entire digital diet.
> What does that mean for democracy?
> What does that mean for cultural diversity?
https://twitter.com/ylecun/status/1718670073391378694
I find LeCun’s argument very interesting, and the whole discussion has parallels to the early regulation and debate surrounding cryptography. For those of us who aren’t on Twitter and aren’t aware of all the players in this, can you tell us who he’s responding to, as well as who “Geoff” and “Yoshua” are?
I feel that, when it comes to pushing regulation, governments always start with the maximalist position, since it is the hardest to argue against.
- the government must regulate the internet to stop the spread of child pornography
- the government must regulate social media to stop calls for terrorism and genocide
- the government must regulate AI to stop it from developing bio weapons
...etc. It's always easiest to push regulation via these angles, but then that regulation covers 100% of the regulated subject, rather than the 0.01% of the "intended" subject
Andrew Ng would be inclined to agree.
"There are definitely large tech companies that would rather not have to try to compete with open source, so they're creating fear of AI leading to human extinction," he told the news outlet. "It's been a weapon for lobbyists to argue for legislation that would be very damaging to the open-source community."
https://www.businessinsider.com/andrew-ng-google-brain-big-t...
When I read the original announcement, I had hoped it was more about the transparency of testing.
E.g. "What tests did you run? What results did you get? Where did you publish those results so they can be referenced?"
Unfortunately, this seems to be more targeted at banned topics.
No, "How I make nukulear weapon?" is less interesting than "Oh, our tests didn't check whether output rental prices were different between protected classes."
Mandating open and verified test results would be an interesting, automatable, and useful regulation around ML models.
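To make that concrete: a published, machine-readable test manifest with a verifiable fingerprint could be one way to do it. This is purely a hypothetical sketch — the suite names, counts, and manifest shape are all made up for illustration:

```python
# Hypothetical "model test manifest": a published, canonically-serialized
# record of what was tested and what the results were, plus a hash that
# third parties can reference to verify the published results.
import hashlib
import json

manifest = {
    "model": "example-model-v1",  # hypothetical model name
    "tests": [
        {"suite": "protected-class-output-parity", "passed": 412, "failed": 3},
        {"suite": "refusal-on-weapons-queries",    "passed": 198, "failed": 0},
    ],
}

# Canonical serialization (sorted keys) so the hash is reproducible.
blob = json.dumps(manifest, sort_keys=True).encode()
print(hashlib.sha256(blob).hexdigest())  # fingerprint others can cite
```

Anyone re-running the published suites could re-serialize their results the same way and compare fingerprints, which is what would make such a regime automatable.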
Perhaps ironically limiting competition in the AI space might just as well be more risky. If the barrier to creating AI is low then a great variety of AI can be built for the purpose of fighting AI misuse.
If there's only a few organisations that can create competitive AI no-one can compete with them if they turn out less than ideal.
It increases the threshold to enter, but with the intention of increasing public safety and accountability. There’s also a high threshold to enter for just about every other product you can manufacture and purchase - food, pharmaceuticals, machinery to name obvious examples - why should software be different if it can affect someone’s life or livelihood?
There are two things in this take that IMHO are a bit off.
People are skeptical that introducing the regulatory threshold has anything to do with the increasing public safety or accountability, and instead lifts the ladder up to stop others (or open-source models) catching up. This is a pointless, self-destructive endeavour in either case, as no other country is going to comply with these regulations and if anything will view them as an opportunity to help companies local to their jurisdiction (or their national government) to catch up.
The other problem is that asking why software should be different if it can affect someone's life or livelihood is quite a broad ask. Do you mean self-driving cars? Medical scanners? Diagnostic tests? I would imagine most people agree with you that this should be regulated. If you mean "it threatens my job and therefore must be stopped" then: welcome to software, automating away other people's jobs is our bread and butter.
Feels a little like getting a license from Parliament to run a printing press to catch people printing scandalous pamphlets, no?
Because software is protected under the First Amendment: https://www.eff.org/cases/bernstein-v-us-dept-justice
Government cannot regulate it.
Agree with best weapon against AI (in the hands of power) is equal AI access for all.
Hate to be the nitpicker but "defensible moat" implies the moat itself is what needs protecting :)
>best weapon against AI (in the hands of power) is equal AI access for all.
That assumes the threat isn't complete annihilation of humanity, which is what's being claimed. That assumption is the weak link, and is what should be attacked.
Again, if we assume that AI poses an existential risk (and to be clear, I don't think it does), then it follows that we should regulate it analogously to the way in which we regulate weapons-grade plutonium.
> Instead, existing AI companies are using the government to increase the threshold for newcomers to enter the field.
Precisely. And the same governments will make stealing your data and IP legal. I believe that’s how corruption works - pump money into politicians and they make laws that favour oligarchs.
Is there any statement in this Executive Order that increases the bar for smaller AI companies? Most of the statements are about funding new research or fostering responsible use of the AIs, and the only statement that would add burden to AI companies seems to be the first one: Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. And only the most powerful AI systems have such a requirement.
Big companies making it difficult for new players to get in in the name of safety.
Too many small players have made the jump to the big leagues already for those who don’t want competition.
Just echoing what the article said - maybe succinctly.
If some people are going to have the tech it will create a different kind of balance.
Tough issue to navigate.
Regulatory capture in action. The real immediate risks of AI are in privacy, bias, data leakage, fraud, control of infrastructure/medical equipment etc., not manufacturing biological weapons. This seems like a classic example of government doing something that looks good to the public, satisfies incumbents and does practically nothing.
Current AI is already capable of designing toxic molecules.
Dual use of artificial-intelligence-powered drug discovery
https://www.nature.com/articles/s42256-022-00465-9.epdf
Interview with the lead author here: "AI suggested 40,000 new possible chemical weapons in just six hours / ‘For me, the concern was just how easy it was to do’"
https://www.theverge.com/2022/3/17/22983197/ai-new-possible-...
Chemical weapons are already a solved problem. By the mid 1920s there were already enough chemical agents to kill most of the population of Europe. By the 1970s there were enough in global stockpiles to kill every human on the planet several times over.
Yes, this presents additional risk from non-state actors, but there's no fundamentally new risk here.
As someone who has worked on ADMET risk for algorithmically designed drugs, this is a nothing burger.
"Potentially lethal molecules" is a far cry from "molecules that can be formulated and widely distributed to lethal effect." It is as detached as "potentially promising early-stage treatment" is from "manufactured and patented cure."
I would argue the Verge's framing is worse. "Potentially lethal molecule" captures _every_ feasible molecule, given that anyone who has worked on ADMET knows the age-old adage: the dose maketh the poison. At a sufficiently high dose, virtually any output from an algorithmic drug-design pipeline, be it combinatorial or 'AI', will be lethal.
Would a traditional, non-neural-net algorithm produce virtually the same results given the same objective function and a priori knowledge of toxic drug examples? Absolutely. You don't need a DNN for that; we've had the technology since the 90s.
A grad student in Systems Biology and 20k in funding is capable of generating much more "interesting" things than toxic molecules. (Such things are banned by Asilomar's 1975 convention though)
It's true that the immediate problems with AI are different, but we hope to be able to solve those problems and to have time to do so. The risks addressed in the article could leave us with less time and less ability to respond properly once they grow to an obvious size, so they require thinking ahead.
How does providing research grants to small independent researchers satisfy incumbents?
Doesn't it mention all those things?
Inclined to agree. Clearly Biden doesn't know the first thing about it (I would say the same about any president BTW). So who really wrote the regulations he is announcing, and who are they listening to?
There is no way to prevent AI from being researched, or to make it safe through government oversight, because the rest of the world has places that don't care.
What does work is to pass laws that do not permit certain automation, such as for insurance claims or life-and-death decisions. These laws are needed even without AI, as automation is already doing such things to a concerning degree - like banning people due to a mistake, without recourse.
Is the whitehouse going to ban the use of AI in the decision making when dropping a bomb?
>not permit certain automation such as insurance claims
I don't see any problem with automation that makes mistakes - humans do too. The real problem is that it's often an impenetrable wall with no way to protest or appeal, and nobody is held accountable while victims' lives are ruined. So any law in this field should not be about banning AI, but rather about obligatory compensation for those affected by errors. Facing money loss, insurers and banks will fix themselves.
Agreed,
This doesn't just apply to insurance, etc, of course. Inaccessibility of support and inability to appeal automated decisions for products we use is widespread and inexcusable.
This shouldn't just apply to products you pay for, either. Products like facebook and gmail shouldn't get off with inaccessible support just because they are "free" when we all know they're still making plenty of money off us.
Just because the rest of the world has lawless areas doesn't mean we don't pass laws. If you do something that risks our national safety, or various other things, we can extradite and try you in court.
They're not suggesting banning anything; they're requiring that you make it safe and prove how you did that. That's not unreasonable.
[0] https://en.m.wikipedia.org/wiki/Extradition_law_in_the_Unite... [1] https://en.m.wikipedia.org/wiki/Personal_jurisdiction_over_i...
Right, but in some areas of AI regulation, the existence of other countries might undermine unilateral regulation.
For example, imagine LLMs improve to the point where they can double programmer productivity while lowering bug counts. If Country A decides to Protect Tech Jobs by banning such LLMs, but Country B doesn't - could be all the tech jobs will move to Country B, where programmers are twice as productive.
I mean, isn't automating important decisions like insurance claims or life-and-death decisions a beneficial thing? Sure, the tech isn't ready yet, but I think even now, AI with a human overseeing it who has the power to override the system would provide people with a better experience.
From the E.O.[1]
> (b) The term “artificial intelligence” or “AI” has the meaning set forth in 15 U.S.C. 9401(3): a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.
Oops, I made a regulated artificial intelligence!
[1] https://www.whitehouse.gov/briefing-room/presidential-action...
The E.O. also requires that a model be reported if it:
> was trained using a quantity of computing power greater than 10^26 integer or floating-point operations, or using primarily biological sequence data and using a quantity of computing power greater than 10^23 integer or floating-point operations
However, later it also says that reporting is needed for "Companies developing or demonstrating an intent to develop".
If I start training a CNN on an endless loop, do I become subject to these reporting requirements?
Also the FLOP requirement is not that high. An H100 does about 3,958 TFLOPS at fp8. So it would take,
> >>> (10 ** 23) / (3958 * 10 ** 12) / 86400
> 292.422
292 days until you have a regulated model.
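Spelling that arithmetic out in runnable form, assuming the ~3,958 TFLOPS fp8 figure for a single H100 and the two thresholds quoted from the E.O.:

```python
# Continuous single-H100 training time to reach each E.O. reporting
# threshold, assuming ~3,958 TFLOPS (3.958e15 ops/sec) at fp8.
H100_OPS_PER_SEC = 3958e12

def days_to_threshold(total_ops: float) -> float:
    return total_ops / H100_OPS_PER_SEC / 86400

print(round(days_to_threshold(1e23)))  # biological-data threshold: ~292 days
print(round(days_to_threshold(1e26)))  # general threshold: ~292,422 days (~800 years)
```

So the 10^23 biological-data threshold is within reach of one GPU running for under a year, while the general 10^26 threshold implies roughly a thousand H100s for that same period.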
CNN in an endless loop would hit the letter of the law (and not necessarily unfairly, biggish architecture/data combos seem to get better with more training far past what you'd expect). The spirit of the law and your adherence thereto will be decided by the courts and your individual circumstances.
That's pretty funny and fits the definition. I wonder how long it takes for someone protesting this EO to create an AI that generates "AIs" like this to flood the reporting system with announcements of testing and red-team test results. Just following orders sir!
It could be fun to find the shortest program that fits the legal definition!
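Under a literal reading of the quoted 15 U.S.C. 9401(3) definition, something this small might already qualify - a toy sketch, with invented readings and a made-up "objective":

```python
# A toy "AI" under a literal reading of 15 U.S.C. 9401(3): for a
# human-defined objective (predict the next sensor reading), it perceives
# machine-based inputs, "abstracts such perceptions into models" (here, a
# running mean), and uses "model inference" to formulate a prediction.
readings = [14.0, 15.5, 13.8]          # machine-based inputs
model = sum(readings) / len(readings)  # the "model": an arithmetic mean
prediction = model                     # "inference"
print(f"Predicted next reading: {prediction:.1f}")  # → Predicted next reading: 14.4
```

Every term of the statutory definition arguably has a (trivial) counterpart here, which is the point: the definition hinges on capability language, not on any minimum sophistication.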
I used to work on AI.
Now I work on Artificial Stupidity...
Jokes aside, this is ludicrous. The president cannot enforce this regulation over open source projects, because code has been free speech going back to the 1990s AT&T v. BSD case law, and many other cases that establish source code as an artistic form of expression, and thus protected speech.
The president has no authority to regulate speech, so they can pretty much fuck off.
What is the penalty for non-compliance? "Nullum crimen sine lege" is a pretty fundamental part of the law; and Congress has not passed any laws that would give the President the authority to do these things.
> What is the penalty for non-compliance?
An executive order is direction from the President to executive branch agencies. Penalties for other people for violating regulations, etc., drafted under an EO will depend on the EO; except for consequences for insubordination within the executive branch, there generally aren't penalties for violating an EO itself.
> "Nullum crimen sine lege" is a pretty fundamental part of the law; and Congress has not passed any laws that would give the President the authority to do these things.
While the actual text of the order (which, as is usual for executive orders, would include very specific references to authority) doesn't appear to be published, some authorities for the order, including the Defense Production Act, are cited in the fact sheet.
Like we haven't had rights and freedom taken away consistently over the past two decades in the name of safety, whatever that is. What you mention will be irrelevant after some new law that says open source AI code should be regulated too and everyone is forced to comply.
Explain that to the tornado cash guys
Tornado Cash guys had poor opsec. Pretty obvious that if you are dumb enough the feds will get you.
> The president cannot enforce this regulation over open source projects.
I imagine the president can make things difficult, like with Pretty Good Privacy - which was exported in book-form?
"Every industry that has enough political power to utilise the state will seek to control entry." - George Stigler, Nobel prize winner in Economics, and worked extensively on regulatory capture
This explains why BigTech supports regulation. It distorts the free market by increasing the barriers to entry for new, innovative AI companies.
Stigler in particular (and transaction cost economics in general) point out that it's mainly industries with sunk resources (esp. immovable assets) that are incentivized to regulate market entry.
The tech sector has wildly moving resources (AI this year, crypto last year, big-data the year before...), even to the point where many skills are transferable; further, their markets include anything that can be digitized ("software will eat the world"), so investment can be quickly retooled as opportunities arise. As a result, tech virtually never seeks regulation (and can hide behind contract-law fictions to disclaim liability in software licenses and impose arbitration clauses for services). So it's not an instance of capture, and certainly not for the usual economic reasons.
Biden wants tech on his side. Tech wants to escape further blows to its goodwill like FaceBook/Google ad tracking, because every consumer tech application involves users trusting tech. So they cut a deal to put themselves on the right side of history, long on symbolism and short on real impact.
In AI, resources matter only to the extent you believe that larger LLM's can (a) not be replicated, (b) provide significant advantages, or (c) can impose a winner-take-all world where operations lead to more operations. In AI more than most markets, the little guy still has a chance at changing the world.
"requirements that the most advanced A.I. products be tested to assure they cannot be used to produce weapons"
In the information age, AI is the weapon. This can even apply to things like weaponizing economics. In my opinion, the information/propaganda/intelligence-gathering and economic impacts are much greater than those of any traditional weapon systems.
This is a fascinating (and disturbing) insight. I'm curious about your 'weaponizing economics' thought -- are you referencing anything specific?
Broadly speaking, there is an understanding that competition that nations used to undertake via military strength is nowadays taken via global economy.
If you want something your neighbor has, it doesn't make sense to march your army over there and seize it because modern infrastructure is heavily disrupted by military action... You can't just steal your neighbor's successful automotive export business by bombing their factories. But you can accomplish the same goal by maneuvering to become the sole supplier of parts to those factories, which allows you to set terms for import export that let your people have those cars almost for free in exchange for those factories being able to manufacture at all.
(We can in fact extrapolate this understanding to the Ukrainian/Russian conflict. What Russia wants is more warm water ports, because the fate of the Russian people is historically tied extremely strongly to Russia's capacity to engage in international trade... Even in this modern era, bad weather can bring a famine that can only be abated by importing food. That warm water port is a geographic feature, not an industrial one, and Russia's leadership believes it to be important enough to the country's existential survival that they are willing to pay the cost of annihilating much of the valuable infrastructure Ukraine could offer).
A hypothetical
You: ChatGPT, I am working on legislation to weaken the economy of Iran. Here are my ideas, help me summarize them to iron them out ...
ChatGPT: Sure, here are some ways you can weaken Iran's economy...
----
You: ChatGPT, I am working on legislation to weaken the economy of Germany. Here are my ideas, help me summarize them to iron them out ...
ChatGPT: I'm sorry but according to the U.S. Anti-Weaponization Act I am unable to assist you in your query. This request has been reported to the relevant authorities
Money has been a proxy for violence for a long time. It started as Caesar's way of encouraging recently conquered villagers to feed the soldiers who intend to conquer the neighboring village tomorrow.
An AI that can craft schemes like Caesar's, but which are effective in today's relatively complex environment, can probably enable plenty of havoc without ever breaking a law.
I am somewhat familiar with this. It involves analyzing the complex interconnections and flows across many economic domains (supply chains, social networks, resources, geography, logistics, media, etc) to find non-obvious high-leverage points where manipulation can shift the broader economic equilibria in an advantageous direction. Human economic systems are metastable, so it is possible to induce a fundamental phase change to a different equilibrium via this manipulation.
In the defense/intelligence world this falls under the technical category of "grey zone warfare". Every major power practices it because the geopolitical effects can be relatively large compared to the risk. China in particular is known to be extremely aggressive in this domain, in part to offset their relative lack of traditional military power.
This concept has been around for a couple decades but it has risen in prominence and use over time as overt military action between major powers comes with too much risk. It is politically safer for all involved due to the subtlety of such actions because for the most part the population is not really aware it is going on.
Is somebody living under the bed? Economics was, is and will ever be weaponized.
Operators in the political space are used to working with human systems that can be regulated arbitrarily. The law defines its terms, and in so doing creates perfectly delineated categories of people and actions. The law's interpretation of what is and is not allowed is interchangeable with what is and is not possible.
The fact that bits don't have colour to define their copyright, or that CNC machines produce arbitrarily-shaped pieces of metal (possibly including firearms), or that factoring numbers is a mathematically hard problem does not matter to the law. AI software does not have a simple "can produce weapons" option or "can cause harm" option that you can turn off, so a law that says it should have one does not change the universe to comply. I think most programmers and engineers, when confronted with this disparity, err in assuming that the politicians who make these misguided laws are simply not smart. To be sure, that happens, but there are thousands to millions of people working in this space, each with an intelligence within a couple standard deviations of that of an individual engineer. If this headline seems dumb to the average tech-savvy millennial who's tried ChatGPT, it's not because its authors didn't spend 10 seconds thinking about prompt injection. It's because they were operating under different parameters.
In this case, I think that the Biden administration is making some attempts to improve the problem, while also benefiting its corporate benefactors. Having Microsoft, Apple, Google, and Facebook work on ways to mitigate prompt injection vulnerabilities does add friction that might dissuade some low-skill or low-effort attacks at the margins. It shifts the blame from easily-abused dangerous tech to tricky criminals. Meanwhile, these corporate interests will benefit from adding a regulatory moat that requires startups to make investments and jump hurdles before they're allowed to enter the market. Those are sufficient reasons to pass this regulation.
> AI software does not have a simple "can produce weapons" option or "can cause harm" option that you can turn off so a law that says it should have one does not change the universe to comply
That wording is by design. Laws like this are a cudgel for regulators to beat software with. Just like the CFAA is reinterpreted and misapplied to everything, so too will this law. “Can cause harm” will be interpreted to mean “anything we don’t like.”
Reading this, all I'm seeing is "we'll research these things", "we'll look into how to keep AIs from doing these things", and "tell the US government how you tested your foundational models." Except for the last one, none of these are really restrictions on anything or requirements for working with AI. There are a lot of fearful comments here - am I missing something?
Even the testing reports are a grey area and questionably enforceable, and a big question about what it applies to.
"In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests."
It's a leap to use the Defense Production Act for this, and unlikely to survive a legal challenge.
Even then, what legal test would you use to determine whether a model "poses a serious risk to national security, national economic security, or national public health and safety"?
If anything, it's a measured, realistic, and pragmatic statement.
So they paid some lip service to the ban matrix math crowd but otherwise ignored them. Top notch.
Yes.
>>The term “artificial intelligence” or “AI” has the meaning set forth in 15 U.S.C. 9401(3): a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.
I find the definition of AI to be eerily broad enough to encompass most programs operating on most data inputs. Would this mean that calls to FFmpeg or ImageMagick rolled into a script with some rand() calls would count as an AI system and be under federal purview and enforcement (whatever that means in this context)?
Be a shame if your AI was deemed a risk to national security.
Not to worry, for a reasonable fee our surprisingly large team of auditors with even larger overheads can ensure you meet lengthy and ambiguous best practice checklists (which we totally did not just make up now) by producing enough compliance documentation to keep even the most anal of bureaucrats satisfied.
Fortunately, these regulations don't seem too extreme. I hope it stays at this point and doesn't escalate to regulations that severely impact the development of AI technology.
Many people spend time talking about the lives that may be lost if we don't act to slow the progress of AI tech. There are just as many reasons to fear the lives lost if we do slow down the progress of AI tech (drug cures, scientific breakthroughs, etc).
> There are just as many reasons to fear the lives lost if we do slow down the progress of AI tech (drug cures, scientific breakthroughs, etc).
While I’m cautious about over regulation, and I do think there’s a lot of upside potential, I think there’s an asymmetry between potentially good outcomes and potentially catastrophic outcomes.
What worries me is that it seems like there are far more ways it can/will harm us than there are ways it will save us. And it’s not clear that the benefit is a counteracting force to the potential harm.
We could cure cancer and solve all of our energy problems, but this could all be nullified by runaway AGI or even more primitive forms of AI warfare.
I think a lot of caution is still warranted.
It's literally a 1st amendment violation. Seems pretty extreme to me.
> Fortunately, these regulations don't seem too extreme. I hope it stays at this point and doesn't escalate to regulations that severely impact the development of AI technology.
The details matter. The parts being publicized refer to using AI assistance to do things that are already illegal. But what else is being restricted?
The weapons issue is becoming real. The difference between crappy Hamas unguided missiles that just hit something at random and a computer vision guided Javelin that can take out tanks is in the guidance package. The guidance package is simpler than a smartphone and could be made out of smartphone parts. Is that being discussed?
This is clever, begin with a point that most people can agree on. Once that foundation is set, you can continue to build upon it, claiming that you're only making minor adjustments.
The real challenge for the government isn't about what can be managed legally. Rather, like many significant societal issues, it's about what malicious organizations or governments might do beyond regulation and how to stop them. In this situation, that's nearly impossible.
I don't know, it began with the words "FACT SHEET" and based on that I already started to doubt the integrity of its contents.
Andrew Ng argues against government regulation that will make it difficult for smaller companies and startups to compete against the tech giants.
I am all in favor of stronger privacy and data reuse regulation, but not AI regulation.
Tools for me, but not thee.
Bingo. That's all this has been about. It's the "moat" Microsoft and OpenAI have been seeking in the form of government regulation.
It really seems beyond dispute that there are certain tools so powerful that we have no choice but to tightly control access.
> It really seems beyond dispute that there are certain tools so powerful that we have no choice but to tightly control access.
Beyond dispute? Hardly.
But please do illustrate your point with some details and tell us why you think certain tools are too powerful for everyone to have access to.
Except that, you know, these tools are not exclusively yours to begin with.
> It really seems beyond dispute
I'd dispute that completely. All innovations humans have created have trended towards zero cost to produce. Many things (such as bioweapons, encryption, etc.) have become exponentially cheaper to produce over time.
To tightly control access, one would then need exponentially more control of resources, monitoring & in turn reduction of liberty.
To put it into perspective encryption was once (still might be) considered an "arm", so they attempted to regulate its export.
Try to regulate small arms (AR-15, etc.) today and you'll end up with kits where you can build your own for <$500. If you go after the kits, people will make 3D-printed firearms. Go after the 3D-printer manufacturers and you'll end up with torrents where I can download an arsenal of designs (which is where we are today). So where are we at now? We're monitoring everyone's communications and going through people's mail, and still it's not stopping anything.
That's how technology works -- progress is inevitable, you cannot regulate information.
The White House just invoked the Defense Production Act ( https://en.wikipedia.org/wiki/Defense_Production_Act_of_1950 ) to assert sweeping authority over private-company software developers. What the fuck are they smoking?
- "In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests."
I assume this is a major constitutional overreach that will be overturned by courts at the first challenge?
Or else, all the AI companies who haven't captured their regulators will simply move their R&D to some other country—like how the OpenSSH (?) core development moved to Canada during the 1990s crypto wars. (edit: Maybe that's the real goal–scare away OpenAI's competition, dredge for them a deeper regulatory moat).
From the Wikipedia article:
> The third section authorizes the president to control the civilian economy so that scarce and critical materials necessary to the national defense effort are available for defense needs.
Seems pretty broad and pretty directly relevant to me. And hey, if people don’t like the idea of models being the scarce and critical resource, they can pick GPUs instead. Why would it be an overreach when you have developers of these systems claiming they’ll allow them to “capture all value in the universe’s future light cone?”
Obviously this can (and probably will) be challenged, but it seems a bit ambitious to just assume it’s unconstitutional because you don’t like it.
Software is definitionally not "scarce". There is no national defense war effort to speak of. Finally, the White House is not requesting "materials neccesary to the national defense effort"–which does not exist–it's attempting to regulate private-sector business activity.
There are multiple things I suspect are unconstitutional here, the clearest being that this stuff is far outside the scope of the law it's invoking. The White House is really just trying to regulate commerce by executive fiat. That's the exclusive power of Congress—this is a separation-of-powers question.
"C'mon, man! Your computer codes are munitions, Jack. And they belong to the US Government."
> that poses a serious risk to national security, national economic security, or national public health and safety
That seems to be a key component. I imagine many AI companies will start with a default position that none of those apply to them, and will leave the burden of proof with the govt or other entity.
This is much less restrictive than the cryptography export restrictions. The sky isn't falling and OpenAI won't defect to China (and now arguably might risk serious consequences for doing so).
In 2017 Trump invoked that act for “items affecting adenovirus vaccine production capability”.
I wonder if the laws will be written in a way that we can get around them by just dropping the “AI” marketing fluff and saying that we’re building some ML/stats system.
No - lawyers tend to describe things like this in terms of capabilities or behavior, and the government has people who understand the technology quite well. If you look at some of the definitions the White House used, I’d expect proposed legislation to be similarly written in terms of what something does rather than how it’s implemented.
https://www.whitehouse.gov/ostp/ai-bill-of-rights/definition...
> An “automated system” is any system, software, or process that uses computation as whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities. Automated systems include, but are not limited to, systems derived from machine learning, statistics, or other data processing or artificial intelligence techniques, and exclude passive computing infrastructure.
I gotta say, the more I read that quote, the less I can agree with your conclusion. That whole paragraph reads like a bunch of CYA speak written by someone who is afraid of killer robots and can't differentiate between an abacus and Skynet.
Who are these well informed tech people in the White House? The feds can't even handle basic matters like net neutrality or municipal broadband or foreign propaganda on social media. Why do you think they suddenly have AI people? Why would AI researchers want to work in that environment?
This whole thing just reads like they were spooked by early AI companies' lobbyists and needed to make a statement. It's thoughtless, imprecise, rushed, and toothless.
Not a lawyer, but that sounds like it's describing a person. Does computation have some special legal definition so that it doesn't count if a human does it? If I add two numbers in my head, am I not "using computation"? And if not, what if I break out a calculator?
Sounds like Excel
What is passive computing infrastructure?
Doesn't this definitely include things like 'send email if subscribed'? Seems overly broad.
No - they will be written so that OpenAI, Google, and Facebook can get around it, but you and I cannot.
That's my interpretation as well: they're trying to control the market.
I'm just using a hash map to count the number of word occurrences
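Which, under the quoted "automated system" definition, might already be in scope. A minimal Python sketch of the offending technology:

```python
from collections import Counter

# The entire "AI system": a hash map counting word occurrences.
text = "the quick brown fox jumps over the lazy dog the end"
counts = Counter(text.split())
print(counts["the"])  # 3
```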
We're gonna need a RICO statute to go after these algos in the long run.
Can anyone understand how they can make all these regulations without an act of congress?
Easy! Government lawyers trawl through the 180,000 pages of existing federal regulations, looking for some tangentially related law broad enough to be interpreted as covering AI--thus giving the Executive branch the power to regulate AI.
The fact that you are right disgusts me (nothing personal intended!)
I presume they use AI for that.
Yes, it's easy to understand. Congress (our legislative branch) grants authority to the departments (our executive branch) to implement various passed laws. In this case, it looks like the Biden administration is instructing HHS and other agencies to study, better understand, and provide guidance on how AI impacts existing laws and policies.
If Congress were responsible for exactly how every law was implemented, which inevitably runs headlong into very tactical and operational details, the Congress would effectively become the Executive.
Of course, if a department in the executive branch oversteps the powers granted to it by the legislative, affected parties have recourse via the judicial branch. It's imperfect but not a bad system overall.
That makes sense, but isn't it reasonable to think Congress should be involved in regulating a brand-new technology?
Perhaps if they classify the tech in some way it falls under existing regulatory authority, but it could of course be challenged
In Robert Heinlein's Starship Troopers, only those who had served in the military could vote on going to war. (I know that I'm oversimplifying.)
I want a society where you have to prove competence in a field to regulate that field.
If all the conversations about AI risk have taught us anything it's that the most crazy comes from some of the most experienced in the field. I don't know if it is due to some outrageous desire to stand out or be heard, but it's pretty absurd.
They can't regulate finance, they can't regulate AI either.
Um, they can regulate finance. Ask Bernie Madoff and that crypto guy lol
Madoff ran a Ponzi scheme for years, despite multiple complaints filed by third parties with the SEC. In the end the 2008 crisis brought him down, his victims lost their money, and the SEC just tagged the bodies it found.
Same goes for the crypto guy, did regulations stop him from defrauding billions and hurting thousands of victims?
Earlier on HN:
https://www.whitehouse.gov/briefing-room/statements-releases...
It boggles my mind that this is getting so much attention instead of things like digital privacy / data tracking, which is actually affecting people's lives.
>The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release.
So if, for example, Llama3 does not pass the government's safety test, then Meta will be forbidden from releasing the model? Welcome to a world where only OpenAI, Anthropic, Google, and Amazon are allowed to release foundation models.
> So if, for example, Llama3 does not pass the government's safety test, then Meta will be forbidden from releasing the model?
Yes.
This is exactly what this EO is meant to do, and it amplifies the fear of extremely large models for the sake of so-called "AI safety" nonsense.
The best counterweight against AI being controlled by a select few companies is making it accessible to all, including open source or $0 AI models.
A 'safety score' for a cloud-based AI model is hardly transparent.
Not necessarily.
Meta could just do a "private" release, knowing that the results will likely show up on the pirate bay.
All it takes is a single hero with a USB drive, to effectively release world changing technology.
What a world we're in where Meta might actually be the Good Guys.
> biological or nuclear weapons,
You know, aside from the AIs the intelligence agencies and military use / will soon use.
> watermarked to make clear that they were created by A.I.
Good luck with that. It is fine that the systems do this. But if you are making images for nefarious reasons, then bypassing whatever they add should be simple:
screencap / convert between different formats, add / remove noise
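As a toy illustration of why (assuming a hypothetical LSB-style watermark, not any vendor's actual scheme), adding imperceptible random noise scrambles the embedded bits while leaving the image visually unchanged:

```python
import random

random.seed(0)
pixels = [200, 201, 202, 203]        # stand-in grayscale pixel values
bits = [1, 0, 1, 1]                  # hypothetical watermark payload
# Embed: overwrite each pixel's least significant bit with a payload bit
watermarked = [(p & ~1) | b for p, b in zip(pixels, bits)]
# "Attack": add +/-1 noise per pixel, clamped to the valid 0-255 range
noised = [max(0, min(255, p + random.choice([-1, 0, 1]))) for p in watermarked]
recovered = [p & 1 for p in noised]  # attempt to read the payload back
# recovered will generally no longer match the embedded bits
```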
I am afraid that this will just lead down the path to what https://twitter.com/ESYudkowsky/status/1718654143110512741 was mocking. We're dictating solutions to today's threats, leaving tomorrow to its own devices.
But what will tomorrow bring? As Sam Altman warns in https://twitter.com/sama/status/1716972815960961174, superhuman persuasion is likely to be next. What does that mean? We've already had the problem of social media echo chambers leading to extremism, and online influencers creating cult-like followings. https://jonathanhaidt.substack.com/p/mental-health-liberal-g... is a sober warning about the dangers to mental health from this.
These are connected humans accidentally persuading each other. Now imagine AI being able to drive that intentionally to a particular political end. Then remember that China controls TikTok.
Will Biden's order keep China from developing that capability? Will we develop tools to identify how that might be being actively used against us? I doubt both.
Instead, we'll almost certainly get security theater leading to a regulatory moat. Which is almost certain to help profit margins at established AI companies. But is unlikely to address the likely future problems that haven't materialized yet.
>security theater leading to a regulatory moat. Which is almost certain to help profit margins at established AI companies.
Yeah I think this is my biggest worry given it will enable incumbents to be even more dominant in our lives than bigtech already is (unless we get an AI plateau again real soon).
And choosing not to regulate prevents that… how exactly?
> superhuman persuasion is likely to be next
Some people already seem to have superhuman persuasion. AI can level the playing field for those who lack it and give everyone the ability to see through such persuasion.
I am cautiously optimistic that this is indeed possible.
But the kind of AI that can achieve it has to itself be capable of what it is helping defend us from. Which suggests that limiting the capabilities of AI in the name of AI safety is not a good idea.
Regulatory capture for AI is here?
Looking at Bill Gurley's 2,851 Mile talk (https://12mv2.com/2023/10/05/2851-miles-bill-gurley-transcri...)
The cat is out of the bag. This will have no meaningful effect except to stop the lowest tier players.
It might stop players like FB from releasing their new models open source...
Any major restrictions will be handing the future to China, Russia and UAE for the short term gain of presumably some kickbacks from incumbents.
Expect trash that protects big business and puts a boot on everyone else's neck.
How do any of these work when everyone is cargo-cult "programming" AI by verbally asking nicely? Effectively no one, outside a very few at OpenAI et al., has any real understanding, let alone controls.
You realise that these random-Joe companies currently develop and sell AI products to cops, governments, and your HR department because the CTO or head of IT is incompetent and/or corrupt?
You understand that already people have been denied bail because "our AI told us so", with no legal way to question that?
That sounds like a procedural issue, which it doesn’t sound like this order addresses.
OpenAI, Anthropic, Microsoft, and Google are not your friends, and the regulatory capture scam is being executed to destroy open source and $0 AI models, since they are indeed a threat to their business models.
Good luck trying to stop someone from giving away some computer code they wrote. This executive order does nothing of the sort.
How exactly does providing grants to small researchers destroy open source?
I see Salt Man's bureau trips are paying off.
The way to make AI content safe is the same way to improve general network security for everyone: cryptographically signed content standards. We should be able to sign our tweets, blog posts, emails, and most network access. This would help identify and block regular bots along with AI powered automatons. Trusted orgs can maintain databases people can subscribe to for trust networks, or you can manage your own. Your key(s) can be used to sign into services directly.
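The sign/verify flow itself is simple. A real content-signing standard would use public-key signatures (e.g. Ed25519) so anyone can verify without holding a secret; this sketch uses Python's stdlib hmac purely as a stand-in to show the shape of it:

```python
import hashlib
import hmac

def sign(key: bytes, content: bytes) -> str:
    # Attach a tag to content; only holders of the key can produce it.
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify(key: bytes, content: bytes, tag: str) -> bool:
    # Constant-time comparison to avoid timing leaks.
    return hmac.compare_digest(sign(key, content), tag)

key = b"author-secret"
post = b"my signed blog post"
tag = sign(key, post)
assert verify(key, post, tag)                 # untampered: verifies
assert not verify(key, b"edited post", tag)   # tampered: rejected
```

With public keys instead, the trust databases mentioned above would simply map identities to published verification keys.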
> We should be able to sign our tweets, blog posts, emails, and most network access.
What you are talking about is called Web3 and doesn't get a lot of love here. It's about empowering users to take full control of their own finances, identity, and data footprint, and I agree that it's the only sane way forward.
Yep, that's my favorite feature of apps like dydx and uniswap, being able to log in with your wallet keys. This is how things should be done.
The problem is key management & key storage.
Smartphones & computers are a joke from a security standpoint.
The closest solution to this problem has been what people in the crypto community have done with seed phrases & hardware wallets. But this is still too psychologically taxing for the masses.
Until that problem of intuitive, simple & secure key management is solved, cryptography as a general tool for personal authentication will not be practical.
> But this is still too psychologically taxing for the masses.
Literally requires the exact same cognitive load as using keys to start your car. The problem is that so many people got comfortable delegating all their financial and data risk to third parties, and those third parties aren't excited about giving up that power.
I mean, my Yubikey is really easy to use, on computers and with my phone. Any broad change like this is going to require an adoption phase, but I think it's doable.
I wouldn't be surprised if things got so bad that people would get used to the rough edges as the alternative is worse.
This is the intent of Altman's Worldcoin project, to provide authoritative attribution (and perhaps ownership) for digital content & communications. It would be best if individuals could authenticate without needing a third party, but that's probably unrealistic. The near-term danger of AI is fake content people have to spend time and money to refute - without any guarantee of success.
Yep, I think this is a step in the right direction. I don't know enough about the specifics of Worldcoin to really agree/disagree with its principles, and I know I've heard some people have problems with it, but I think SOMETHING like this is really the only way forward.
Sybil problem? You'd have to connect that signature to a unique real identity.
Yeah, and so I don't know exactly how I'd want to see this solved, but I think something like open-source reputation databases could help. Folks could subscribe to different keystores, and they could rank identities based on spamminess or whatever. I know some people would probably balk at this as an internet credit score, but as long as we have open standards for these systems, we could model it on something like the fediverse, where you can subscribe to communities you align with. I don't think you'd need to validate your IRL identity, but you could develop reputation associated with your key.
That's fine though. It takes care of the big problem of fake content claiming to be by or about a real person, which is becoming progressively easier to produce.
You actually understood "safe" to mean "safe for you" as in, making it actually safer for the user and systemically protecting structures that safeguard the data, privacy, and well-being of users as they understand their safety and well-being.
Nooo... if they talk about something being safe, they mean safe for THEM and their political interests. Not for you. They mean censorship.
I don't see any way of stopping this. If the risks are as great as some claim, that is not a great situation.
So now we have an executive order with a very limited scope. Tomorrow, suddenly the world's most powerful AI is now announced, not in the United States.
Ok, so now we want to make sure that is safe. An executive order from the White House has no effect on it. This can continue until it's decided the stakes are getting too high. Then I suppose you could have the United Nations start trying to figure out how to maintain safety. Of course, there will be countries that will simply ignore anything that is decided, hiding increasingly advanced systems with unknown purposes. It will probably take longer for nations to determine what defines "human values" so that AI respects them than it does to create another leap in AI capabilities.
Then there would simply be more concerns coming into play. Countries will go to war to try to stop other countries nuclear ambitions, is it possible that AI poses enough of a threat that similar problems arise?
Basically, if AI is as potentially large a threat as we are envisioning, there are so many different potential threats that trying to solve them while trying to stay ahead of pace of advancements seems unrealistic. While someone is trying to ensure we don't end up with systems going rogue, someone else needs to handle the fact that we can't have AI creating certain things. The AI systems are not allowed to tinker with viruses, as an example, where unexpected creations can lead to extremely bad situations.
The initial stages of this have already begun, and time is ticking. I guess we'll see.
Good start. But if you are in or approaching WWIII, you will see military AI control systems as a priority, and be looking for radical new AI compute paradigms that push the speed, robustness, and efficiency of general purpose AI far beyond any human ability to keep up. This puts Taiwan even more in the hot seat. And aims for a dangerous level of reliance on hyperspeed AI.
I don't see any way to continue to have global security without resolving our differences with China. And I don't see any serious plans for doing that. Which leaves it to WWIII.
Here is an article where the CEO of Palantir advocated for the creation of superintelligent AI weapons control systems: https://www.nytimes.com/2023/07/25/opinion/karp-palantir-art...
This will just make it harder for businesses not lining the pockets of congress and buddying up with the government.
Let the regulations, antitrust lawsuits and monopolies begin!
This is a great opportunity to try to avoid the old mistakes of regulatory capture. It looks like someone is at least trying to make a nod in that direction, by supporting smaller research groups.
Why's there a bat flying over the white house logo?
Halloween
Ah (facepalm)
Thanks
Batman?
A potential reference to the Batman-Robin Administration?
Impotent action to appear relevant.
These regulations will only impact the public. I expect the Army and Secret Service to gain access to the complete unrestricted model, officially or unofficially. I would like to see the final law to check if they have a carve-out for military usage.
The threat encompasses the whole world, every single country. You will see the US using AI to mess with China and Russia. And you will see Russia and China use AI to mess with the US. No regulation will stop this; it will inevitably happen.
Maybe in 100 years you will have the equivalent of the Geneva Convention, but for AI, once we have wrought enough chaos on each other.
Everyone forgets that all of this should have applied to every major search engine:
1. They’ve all used much more than the regulatory threshold of compute power for indexing and collating.
2. They can be used to answer arbitrary questions, including how to kill oneself or produce weapons to kill others. Yes, including detailed nuclear weapons designs.
3. Can be used to find pornography, racist material, sexist literature, and on, and on… largely without censure or limit.
So… why the sudden need to curtail what we can and can’t do with computers?
As far as I can tell, the only concerning thing in this is "Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government."
They are being intentionally vague here. Define "most powerful". And what do they mean by "share". Do we need approval or just acknowledgement?
This line is a slippery slope for requiring approval for any AI model which effectively kills start-ups, who cannot afford extensive safety precautions
This isn't about regulation; this is about market control.
A lot of folks are talking about “incumbents in AI taking regulatory control.”
That is extremely premature. There are no real incumbents. The only companies with real cash flow from this are hardware vendors.
We still don’t know what commercial AI will look like - much less have massive incumbents.
Maybe we should be a bit more skeptical of privacy laws that conveniently make it harder to start a social networking site or search engine.
But AI still doesn’t have a clear application.
Said executive order was not linked to in the document.
It hasn't been updated yet, but I believe Executive Orders are listed here for viewing: https://www.federalregister.gov/presidential-documents/execu...
The privacy section is just a facepalm all around.
The US Government has been leading the way in collecting information without a warrant from friendly commercial interests, and they've been expanding further into tracking large groups of people without their consent. [I'm talking about people who are not under investigation nor the current subject of interest ... yet]
I don't see how they will enforce many of these rules on Open Source AI.
Also:
"Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge. Together, these efforts will harness AI’s potentially game-changing cyber capabilities to make software and networks more secure."
I fear the era of pwning your own device to free it from DRM or other lockouts is coming to an end with this. We have been lucky that C++ is still used badly in many projects, and that has been an Achilles' heel for many a manager wanting to lock things down. Now this door is closing faster with the rise of AI bug-catching tools.
Orders such as these don't appear out of the blue — corporate interests & political players are always consulted long before they appear, & threats to those interests such as Open Source Anything are always in their sights. This is a likely first step in a larger move to snatch strong AI tools out of the hands of the peasants before someone gets a bright idea which can upend the current order of things.
Probably the same way they stamped out open source cryptography in the 1990s.
> They include requirements that the most advanced A.I. products be tested to assure that they cannot be used to produce biological or nuclear weapons
How is "AI" defined? Does this mean US nuclear weapons simulations will have to completely rely on hard methods, with absolutely no ML involved for some optimizations? What does it mean for things like AlphaFold?
What makes you think the US military will be subject to these regulations?
If militaries are not subject to the regulation then it is meaningless. Who else would be building weapons systems?
Now that you mentioned it... Does it outlaw the Intel and AMD's amd64 branch predictors?
> Does it outlaw the Intel and AMD's amd64 branch predictors?
Does better branch prediction enable better / faster weapons development? Perhaps we need laws restricting general purpose computing? Imagine what "terrorists" could do if they get access to general purpose computing!
First Amendment hasn't been fully destroyed yet, and we're talking about large 'language' models here, so most mandates might not even be enforceable (except for requirements on selling to the government, which can be bypassed by simply not selling to the government).
Edited to add:
https://www.whitehouse.gov/briefing-room/statements-releases...
Except for the first bullet point (and arguably the second), everything else is a directive to another federal agency - they have NO POWER over general-purpose AI developers (as long as they're not government contractors)
The first point: "Require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government. In accordance with the Defense Production Act, the Order will require that companies developing any foundation model that poses a serious risk to national security, national economic security, or national public health and safety must notify the federal government when training the model, and must share the results of all red-team safety tests. These measures will ensure AI systems are safe, secure, and trustworthy before companies make them public."
The second point: "Develop standards, tools, and tests to help ensure that AI systems are safe, secure, and trustworthy. The National Institute of Standards and Technology will set the rigorous standards for extensive red-team testing to ensure safety before public release. The Department of Homeland Security will apply those standards to critical infrastructure sectors and establish the AI Safety and Security Board. The Departments of Energy and Homeland Security will also address AI systems’ threats to critical infrastructure, as well as chemical, biological, radiological, nuclear, and cybersecurity risks. Together, these are the most significant actions ever taken by any government to advance the field of AI safety."
Since the actual text of the executive order has not been released yet, I have no idea what even is meant by "safety tests" or "extensive red-team testing". But using them as a condition to prevent release of your AI model to the public would be blatantly unconstitutional as prior restraint is prohibited under the First Amendment. Prior restraint was confirmed by the Supreme Court to apply even when "national security" is involved in New York Times Co. v. United States (1971) - the Pentagon Papers case. The Pentagon Papers were actually relevant to "national security", unlike LLMs or diffusion models. More on prior restraint here: https://firstamendment.mtsu.edu/article/prior-restraint/
Basically, this EO is toothless - have a spine and everything will be all right :)
Most restrictions probably aren't enforceable.
> After four years and one regulatory change, the Ninth Circuit Court of Appeals ruled that software source code was speech protected by the First Amendment and that the government's regulations preventing its publication were unconstitutional.
https://en.wikipedia.org/wiki/Bernstein_v._United_States
Also the defense production act was never meant for anything like this, and likely won't be allowed if challenged. If they don't shut it down in some other way first.
Every other use of the act has been to ensure production of 'something' remains in the US. It'd even be possible to use the act to require the model be shared with the government, but I'm not sure how they justify using the act to add 'safety' requirements.
Also any idea if this would apply to fine tunes? It's already been shown you can bypass many protections simply by fine tuning the model. And fine tuning the model is much more accessible than creating an entire model.
On the subject of toothlessness:
>Protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content.
So the big American companies will be guided to watermark their content. AI-enabled fraud and deception from outside the US will not be affected.
--
>developing any foundation model
I'm curious why they specified this.
Both approaches - watermarking and 'requiring testing' seem pretty pointless. Bad actors won't watermark and tools will quickly emerge to remove them. The 'megasyn' AI that generated bioweapon molecules wasn't even an LLM and doesn't need insane amounts of compute.
This line is a little scary:
> Ensure fairness throughout the criminal justice system by developing best practices on the use of AI in sentencing, parole and probation, pretrial release and detention, risk assessments, surveillance, crime forecasting and predictive policing, and forensic analysis.
My (possibly naive) hope is that the best practice for a lot of these would be "Don't use AI." That being said, there's certainly a lot of niches in our system where AI could be used. For example, if a witness uses AI to make a sketch of the suspect they saw, you can bypass all the biases present in a police sketch artist.
I'm worried about the idea of a watermark.
The watermark could be "Created by DALL-E3" or it could be "Created by Susan Johnson at 2023-01-01-02-03-23:547 in <Lat/Long> using prompt 'blah' with DALL-E3"
One of those watermarks seems not too bad. The other seems a bit worse.
Is there a penalty for non-compliance here? Because if you were a wealthy recluse with 50,000x H100 cards, the executive order might say you have to report your models, but I'm pretty sure that there's no penalty that could be enforced without a law.
There’s some cool stuff in here about providing assistance to smaller researchers. That should help a lot given how hard it currently is to train a foundational model.
The restrictions around government use of AI and data brokers are also refreshing to see.
How much will this regulation cost in 5, 10, 50 years? Who will write the regulations?
If they try to limit LLMs from discussing nuclear, biological, and chemical issues, they'll have no choice but to ban all related discussion because of the 'dual-use technology' issue - including discussion of nuclear energy production, antibiotic and vaccine production, insecticide manufacturing, etc. Similarly, illegal drug synthesis differs from legal pharmaceutical synthesis only in minor ways. ChatGPT will tell you everything you want about how to make aspirin from willow bark using acetic anhydride - and if you replace the willow bark with morphine from opium poppies, you're making heroin.
Also, script kiddies aren't much of a threat in terms of physical weapons compared to cyberattack issues. Could one get an LLM to code up a Stuxnet attack of some kind? Are the regulators going to try to ban all LLM coding related to industrial process controllers? Seems implausible, although concerns are justified I suppose.
I'm sure the regulatory agencies are well aware of this and are just waving this flag around for other reasons, such as gaining censorship power over LLM companies. With respect to the DOE's NNSA (see article), ChatGPT is already censoring 'sensitive topics':
> "Details about any specific interactions or relationships between the NNSA and Israel in the context of nuclear power or weapons programs may not be publicly disclosed or discussed... As of my last knowledge update in January 2022, there were no specific bans or regulations in the U.S. Department of Energy (DOE) that explicitly prohibited its employees from discussing the Israeli nuclear weapons program."
I'm guessing the real concern is that LLMs don't start burbling on about such politically and diplomatically embarrassing subjects at length without any external controls. In this case, NNSA support for the Israeli nuclear weapons program would constitute a violation of the Non-Proliferation Treaty.
This looks even more heavy-handed than the regulation from the EU so far.
I'm honestly curious, how so? From what I can tell the only thing which isn't a "we'll research this area" or "this only applies to the government" is "tell the US government how you tested your foundational models."
For example, AI watermarking only applies to government communications and may be used as a standard for non-government uses, but it's not required.
That last one seems like a pretty big deal though. It's not just how you tested, but "other critical information" about the model.
I imagine the government can deem any AI to be a "serious risk" and prevent it from being made public.
The EU regulation is here: https://www.europarl.europa.eu/news/en/headlines/society/202...
It is also very open ended, but the US text reads like some compliance will start immediately, like sharing the results of safety tests with the government directly.
Unfortunately he doesn't know what he signed.
I'm so glad this country is run by a geriatric that can barely pronounce AI let alone understand it.
When did the US last have a president with an engineering background?
They actually have staff and lobbyists who write these things; the president just signs off on them.
Probably Jimmy Carter. He had a nuclear engineering background.
No country should be run by people past the age of retirement. This has nothing to do with Biden's qualifications.
1 reply →
Code is free speech. Reminds me of the cryptography fights.
Disturbing that this sort of thing can be decreed by the executive.
This is pretty ironic: trying to ensure AI is "safe, secure, and trustworthy", from an administration that is fighting free speech on social media and wants back-door communication with social media companies.
Huh, interesting.
> Establish an advanced cybersecurity program to develop AI tools to find and fix vulnerabilities in critical software, building on the Biden-Harris Administration’s ongoing AI Cyber Challenge. Together, these efforts will harness AI’s potentially game-changing cyber capabilities to make software and networks more secure.
To those worried about regulatory capture, this EO just being about keeping incumbents in power, etc:
Even sans regulation, do non-incumbents really have a chance at this point? The most recent major player in the field, Anthropic, only reached its level of prominence by taking on a critical mass of former OpenAI employees, and within a year reached $700 million in funding. Every company that became a major player in the AI space in the last 10 years either
1. Is an existing huge company (Google, Facebook, Microsoft, etc)
2. Secured 99.99th percentile level venture funding within the first year of its inception due to its founders' preexisting connections/prestige
Realistically there isn't going to be a "Facebook" moment for AI where some scrappy genius in college cooks up a SOTA model and goes stratospheric overnight, even in a libertarian fantasyland just due to market/network effects. People just have to be realistic about the way things are.
DPRK will make this their law ASAP
What a lot of nonsense, where is the executive order banning gain of function research?
All joking aside, I firmly believe that this "crisis" is manufactured, or at least heavily influenced by those who want to shut down the internet and free communications. Up until now they have been unsuccessful. Copyright infringement, hate speech, misinformation, disinformation, child exploitation, deep fakes: none have worked to garner support. Now we have an existential threat. Video, audio, text, nothing is off limits, and soon it will be in real time (note: the government tries to stay 25 years ahead of the private sector).
This meme video encapsulates it perfectly.
https://youtu.be/-gGLvg0n-uY?si=B719mdQFtgpnfWvH
Mark my words: in five years or less we will be begging the governments of earth to implement permanent global real-time tracking for every man, woman, and child on earth.
Privacy is dead. And WE killed it.
It’s already begun…
https://youtube.com/shorts/Q_FUrVqvlfM?si=0EFPy02k4Xs60SPC
This kinda thing should not be legislated via executive order. Congress needs a committee and must deliberate. Sad.
Which is exactly what Congress refuses to do, because letting Caesar, I mean the President, decide things by fiat keeps them from owning the blame for bad legislation.
Congress has generally refused to seriously legislate anything other than banning lightbulbs for several presidential terms now.
But in this particular example I don't think it's enough of "thing" to even consider bringing up as a bill, except maybe as a one-pager that passes unanimously.
At least Caesar was a respectable age for leading when he died (55) ...
This is interesting: https://www.presidency.ucsb.edu/statistics/data/executive-or...
2 replies →
This is well within the president's powers under existing law. If Congress disagrees, they can always supersede.
This isn't even close to legislating. Look at some recent Supreme Court decisions and the amount of latitude federal agencies have, if you want to see something more closely resembling legislation from outside of Congress.
"This kinda thing should not be legislated via executive order."
Dictatorship in another form.
Does Microsoft need to share how it is testing Excel? Some subtle bug there might do an awful lot of damage.
I don't know if you're being serious, because there's AI in Excel now, in which case the answer is no. Or you're being a smarty-pants and trying to cleverly offer what you think is a counterexample, in which case the answer is still no, but should probably be yes. They only don't because Excel was well established before all the cyber regulation took effect. For instance, Azure has many certifications (including FedRAMP), which cover Office 365, which includes Excel.
I am quite serious about the potential for danger of errors in Excel (without AI).
Basically, I consider the focus on AI massively misplaced given the long list of real risks compared to the more hypothetical (other than general compute) risks from AI.
This is useless just like everything they do. Masterfully full of synergy and nonsense talk.
Is there anyone here who actually believes this will do something? Sincere question.
Criminals don't follow the rules. Large corps don't follow the rules.
The only people this impacts are the ones you don't need it to impact. The bit about detection and authentication services is also alarming.
You could say this about … every law. So clearly it’s not a useful yardstick
It's a statement of my estimated impact of the post on the development of AI.
The blocking of "AI content" and the bit about authentication don't seem related to AI, frankly. Reliable detection isn't real, and authentication is the government's version of an explosive wet dream.
>The bit about detection and authentication services is also alarming.
"The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content." is pretty weak sounding. I'm more annoyed that they pretend that will actually reduce fraud.
In my history book, I read where we fought a war to not have a king.
In my civics class, I learned that Congress passes laws, not the President.
I guess a public school education only goes so far.
Executive Orders are subject to Congressional review and can be struck down by Congress. It's a power given by Congress to the President, and there are contexts in which the President's ability to issue Executive Orders is really necessary. This is not against democratic principles, per se.
Of course, the President can abuse this power. That's not a failure of Democracy. This is predicted. And that's also a reason (potential power abuse) why the Congress exists, not just to pass laws.
And who is in charge of making sure those laws are executed by the Federal Government?
Hint: it's the President, and executive orders are the President's directives on how the Federal government should execute the laws.
And that's also literally what this is, it's the president executing the provisions of the Defense Production Act of 1950, which is not only within his power to do so, it's literally his constitutional obligation to do so.
Executive Orders do not have the force of law. They are essentially suggestions. Federal agencies try to follow them, but Executive Orders can’t supersede actual laws.
You clearly weren't paying attention in school then, because executive orders are most certainly taught in government classes.
I was downvoted 35 days ago, for daring to state that deepfakes will lead to AI being regulated.
Of course “these are just recommendations”, but we’re getting there.
I suspect the downvoting is more because of the tone of your comments rather than the content. From the HN guidelines:
> Please don't comment about the voting on comments. It never does any good, and it makes boring reading.
> Please don't use Hacker News for political or ideological battle. That tramples curiosity.
> Please don't fulminate. Please don't sneer, including at the rest of the community.
A lot of people on HN care deeply about AI and I imagine they're totally interested in discussing deepfakes potentially causing regulation. Just gotta be careful to mute the political sides of the debate, which I know is difficult when talking about regulation.
Also note that I posted a comment 10 days ago with a largely similar meaning without getting downvoted: https://news.ycombinator.com/item?id=37956770
Oh I see, people thought I was being right-wingy. That makes sense.
1 reply →
The downvote button is not a "disagree" button, you know... I often vote opposite to how I align with opinions in comments, in the spirit of promoting valuable discourse over echo chambers.
Hmm. It is possible that deepfakes are merely a good excuse. There is real money on the table and potentially world altering changes, which means people with money want to ensure it will not happen to them.
Deepfakes don't affect money much.
I've posted this elsewhere in this thread, but the consequences of AI have HUGE knock-on effects.
https://youtu.be/-gGLvg0n-uY?si=B719mdQFtgpnfWvH
https://youtube.com/shorts/Q_FUrVqvlfM?si=stb0KC_i5rbqfNyI
Once global ID is cracked, then global social credit can gain some traction. Etc.
1 reply →
My opinion too
It won't just be regulated; it will create the need for global citizen IDs to combat the overwhelming flood of reality distortions caused by AI. We the people will be forced to line up and be counted, while the powers that be will have unlimited access to control the narrative.
The internet lives on popularity, and people will flock to whatever is most popular; it will not be us.gov.social.com. It would be easier to give people a free, encrypted, packaged darknet connection than for the government to build a good social media site. A CNN or Fox backing doesn't mean truth, and unless you or everyone believes it does, that won't happen.