Comment by TheAceOfHearts

7 hours ago

Archive: https://archive.is/j1XTl

I cannot help but feel that discussing this topic under the blanket term "AI Regulation" is a bit deceptive. I've noticed that whenever this topic comes up, almost every major figure remains rather vague on the details. Who are some influential figures actually advancing clearly defined regulations or key ideas for approaching how we should think about AI regulation?

What we should be doing is surfacing well defined points regarding AI regulation and discussing them, instead of fighting proxy wars for opaque groups with infinite money. It feels like we're at the point where nobody is even pretending like people's opinions on this topic are relevant, it's just a matter of pumping enough money and flooding the zone.

Personally, I still remain very uncertain about the topic; I don't have well-defined or clearly actionable ideas. But I'd love to hear what regulations or mental models other HN readers are using to navigate and think about this topic. Sam Altman and Elon Musk have both mentioned vague ideas of how AI is somehow going to magically result in UBI and a communist utopia, but nobody has ever pressed them for details. If they really believe this, then they could make some more significant legally binding commitments, right? Notice how nobody ever asks: who is going to own the models, robots, and data centers in this UBI paradise? It feels a lot like the Underpants Gnomes: (1) Build AGI, (2) ???, (3) Communist Utopia and UBI.

> I cannot help but feel that discussing this topic under the blanket term "AI Regulation" is a bit deceptive. I've noticed that whenever this topic comes up, almost every major figure remains rather vague on the details. Who are some influential figures actually advancing clearly defined regulations or key ideas for approaching how we should think about AI regulation?

There's a vocal minority calling for AI regulation, but what they actually want often strikes me as misguided:

"Stop AI from taking our jobs" - This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.

"Stop the IP theft" - This feels like a cause pushed primarily by the 1%. Let's be realistic: 99% of people don't own patents and have little stake in strengthening IP protections.

  • > "Stop the IP theft" - This feels like a cause pushed primarily by the 1%. Let's be realistic: 99% of people don't own patents and have little stake in strengthening IP protections.

    This is being screamed from the rooftops by nearly the entire creative community of artists, photographers, writers, and other people who do creative work as a job, or even for fun.

    The difference between the 99% of individual creatives and the 1% is that the 1% has entire portfolios of IP - IP that they might not have even created themselves - as well as an army of lawyers to protect that IP.

  • > This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.

    They already do this[1]. Why should there be an exception carved out for jobs affected by AI?

    ------------------------------

    [1] What do you think tariffs are? Show me a country without tariffs and I'll show you a broken economy with widespread starvation and misery.

    • > Show me a country without tariffs and I'll show you a broken economy with widespread starvation and misery.

      I think that would be Singapore, as far as import tariffs go? Not much starvation there!

      Do you mean taxes? Or excise duties or...?

  • > "Stop AI from taking our jobs" - This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.

    This is a really good point. If a country tries to "protect" jobs by blocking AI, it only puts itself at a disadvantage. Other countries that don't pass those restrictions will produce goods and services more efficiently and at lower cost, and they'll outcompete you anyway. So even with regulations the jobs aren't actually saved.

    The real solution is for people to upskill and learn new abilities so they can thrive in the new economic reality. But it's hard to convince people that they need to change instead of expecting the world around them to stay the same.

    • This presupposes the existence of said jobs, which is a whopper of an assumption that conveniently shifts blame onto the most vulnerable. Of course, that's probably the point.

      This will work even worse than "if everyone goes to college, good jobs will appear for everyone."


    • > The real solution is for people to upskill and learn new abilities

      AI is being touted as extremely intelligent and, thus, capable of taking over almost any white collar job. What would I upskill to?


    • > The real solution is for people to upskill and learn new abilities so they can thrive in the new economic reality. But it's hard to convince people that they need to change instead of expecting the world around them to stay the same.

      But why do I have to? Why should your life be dictated by the market and corporations that are pushing these changes? Why do I have to be afraid that my livelihood is at risk because I don't want to adapt to the ever faster changing market? The goal of automation and AI should be to reduce or even eliminate the need for us to work, and not the further reduction of people to their economic value.


    • > If a country tries to "protect" jobs by blocking AI, it only puts itself at a disadvantage

      Regulating AI doesn't mean blocking it. The EU AI Act regulates AI without blocking it, just imposing restrictions on data usage and decision-making: if a system is making life-or-death decisions, you have to be able to reliably explain how and why it makes those decisions, and it needs to be deterministic. No UnitedHealthcare-style bullshit of hiding behind an "algorithm" that refuses healthcare.

  • "Stop the laundering of responsibility/liability" - the risk that you can run someone over with a software controlled car and it's not a crime "because AI" whereas a human doing the same thing would be in jail. Image detection leading to false arrests, etc. It's harder to sue because the immediate party can say "it wasn't us, we bought this software product and it did the bad thing!"

    I strongly feel that regulation needs to curb this, even if it leads to product managers going to jail for what their black box did.

  • > "Stop the IP theft" - This feels like a cause pushed primarily by the 1%. Let's be realistic: 99% of people don't own patents and have little stake in strengthening IP protections.

    Artists are not primarily in the 1%, though, and patents are not the only IP being taken.

    • Do the artists who are not in the 1% actually benefit from IP, or does it hinder them from building new art based on other art? It seems to me that IP only benefits the top players.


  • It's less about who is right and more about economic interests and lobbying power. There's a vocal minority that is just dead set against AI, using all sorts of arguments related to religion, morality, fears about mass unemployment, doom scenarios, etc. However, this is ultimately a minority without much lobbying power. And the louder they are, and the less of this stuff actually materializes, the easier it becomes to dismiss a lot of the arguments. Despite the loudness of the debate, the consensus is nowhere near as broad as it may seem to some.

    And the quality of the debate remains very low as well. Most people barely understand the issues, and that includes many journalists who are still mostly hung up on the whole "hallucinations can be funny" angle. There are a lot of confused people spouting nonsense on this topic.

    There are special interest groups with lobbying power: media companies with intellectual property, actors worried about being impersonated, etc. Those have some ability to lobby for changes. And then you have the wider public, which isn't that well informed and has sort of caught on to the notion that ChatGPT is now definitely a thing that is sometimes mildly useful.

    And there are the AI companies, which are very well funded and have an enormous amount of lobbying power. They can move whole economies with their spending, so they get relatively little pushback from politicians. Washington and California politics run on obscene amounts of lobbying money, and the AI companies can provide a lot of it.

    • A vocal minority led to the French Revolution, the Bolshevik Revolution, the Nazi party and the modern climate change movement. Vocal minorities can be powerful.

  • >There's a vocal minority calling for AI regulation, but what they actually want often strikes me as misguided:

    There are a ton of other points intersecting with regulation, either directly related to AI or made significantly more relevant by it.

    Just off the top of my head:

    - information processing: Is there private data AI should never be able to learn from? We restrict collection but it might be unclear whether model training counts as storage.

    - related to the former: what kind of dystopian practices should we ban? AI can probably infer information to build much deeper profiles of users than our already worrying tech can, even without storing sensitive data. If it can use conversations to deduce that I'm at risk of a shorter lifespan, can the owners communicate that data to insurance companies?

    - healthcare/social damage: what are the long-term effects of people having an always-available yes-man, a substitute for social interaction, a cheating tool, etc.? Should some people be kept from access (minors, the mentally ill, whatever)? Should access, on the other hand, become a basic right if lacking it realistically leaves a left-behind person unable to compete with others who have it?

    - National security: Is it a risk for a country's economy to be reliant on a service offered somewhere else? Worse even, is this dependence draining skills from the population that might not be easily recovered when needed?

    - energy/resources impact: Are we ready for an enormous increase in the usage of energy and/or certain goods? Should we limit usage until we can meet the demand without struggle?

    - consumer protections: Many companies just offer 'flat' usage, freely swapping the model behind the scenes for a worse one when needed, or even adjusting user limits based on their server load. Which of these are fair business practices?

    - economy risks: What is the maximum risk we can accept of the economy becoming dependent on services that aren't yet profitable? Are there steps that need to be taken to protect us from the potential bust if the costs can't be sustained?

    - monopoly risks: we could end up with a single company being able to offer literally any intellectual work as a service. Whoever gets this tech might become the most powerful entity in the world. Should we address this impact through regulation before such an entity rises and becomes impossible to tame?

    - enabling crime: can an army of AI hackers disrupt entire countries? how is this handled?

    - impact on job creation: If AIs can practically DDoS job application forms, how do we keep access fair? The same goes for a million other places subjected to AI spam.

    Your point that "it's on politicians to help people adapt to a new economic reality" raises a few more:

    - Should we tax AI-using companies? If they produce the same output while employing fewer people, tax revenue suffers and the untaxed money does not make it back to the people. How do we compensate?

    - How should we handle entire professions being put to pasture at once? Lost employment is a general problem when it affects a large enough number of people.

    - How should intellectual work be rethought if it becomes extremely cheap relative to manual work? Is the way we train our population in need of change?

    You might have strong opinions on most of these issues, but there are clearly A LOT of important debates that aren't being addressed.

    • Your list of evidence-free vibe complaints perfectly exemplifies the reasons why regulations should be approached carefully with the advice of experts, or not at all.


  • "Stop AI from taking our jobs" - This shouldn't be solved through regulation. It's on politicians to help people adapt to a new economic reality, not to artificially preserve bullshit jobs.

    So politicians are supposed to create "non bullshit" jobs out of thin air?

    The job you've done for decades is suddenly bullshit because some shit LLM is hallucinating nice sounding words?

    • At this point if an LLM can do your job, it was already bullshit. But in the future when they can do non bullshit jobs, then you can go get another one just like every other person out of the billions who has had their job made obsolete by technology. It's not that hard.


    • They do create bullshit jobs in finance by propping up the system when it's about to collapse from the consequences of their own actions though.

      Not that I believe they should allow the financial system to collapse without intervention, but the interventions during recent crises were made to save corporations that should have been allowed to fail, rather than to help the common people affected by the consequences.

      Which I believe is what's lacking in the whole discussion: politicians shouldn't be trying to maintain the labour status quo if/when AI changes the landscape, because that would be a distortion of reality, but there needs to be some off-ramp and direct help for the people who will suffer from the change, without going through the bullshit of helping companies in the hope that they eventually help people. As many on HN say, companies are not charities; if they can make an extra buck by fucking someone over, they will do it. The government is supposed to help people as a collective.

Algorithmic Accountability. Not just for AI, but also for social media, advertising, voting systems, etc. Algorithmic Impact Assessments need to become mandatory.

You should ignore literally everything Musk says. He is incredibly unintelligent relative to his status.

Musk wants extreme law and order and will beat down any protests. His X account is full of posts that want to fill up prisons. This is the highlight so far:

https://xcancel.com/elonmusk/status/1992599328897294496#m

Notice that the retweeted Will Tanner post also denigrates EBT. Musk does not give a damn about UBI. The unemployed will do slave labor, go to prison, or, if they revolt, they will be hanged. It is literally all out there by now.

Elon Musk explicitly said in his latest Joe Rogan appearance that he advocates for the smallest government possible: just the army, the police, and the legal system. He did NOT mention social care or health care.

Doesn't quite align with UBI, unless he envisions the AI companies giving UBI directly to people (and when has that ever happened?).

  • It's possible that the interests of the richest man in the world don't align with the interests of the majority, or society as a whole.

  • I'm sure that "smallest government possible" involves cancelling all subsidies to EV car companies and tax credits to EV customers. What a wanker.

  • Like every other self-serving rich “Libertarian,” they want a small government when it stands to get in their way, and a large one when they want their lifestyle subsidized by government contracts.

    • "subsidized by government contracts"

      Subsidized implies they are getting free money for doing nothing, but it's a business transaction. I wouldn't say a federal worker is subsidized by the government either.

  • > Elon Musk explicitly said in his latest Joe Rogan appearance that he advocates for the smallest government possible - just army, police, legal. He did NOT mention social care, health care.

    This would be a 19th century government, just the "regalian" functions. It's not really plausible in a world where most of the population who benefit from the health/social care/education functions can vote.