
Comment by kace91

6 hours ago

>There's a vocal minority calling for AI regulation, but what they actually want often strikes me as misguided:

There are a ton of other points intersecting with regulation, either directly related to AI or made significantly more relevant by it.

Just off the top of my head:

- information processing: Is there private data AI should never be able to learn from? We restrict collection, but it might be unclear whether model training counts as storage.

- related to the former, what kinds of dystopian practices should we ban? AI can probably create much deeper profiles of users, inferring more information than our already worrying tech does, even without storing sensitive data. If it can use conversations to deduce I'm at risk of a shorter lifespan, can the owners communicate that data to insurance companies?

- healthcare/social damage: What are the long-term effects of people having an always-available yes-man, a substitute for social interaction, a cheating tool, etc.? Should some people be kept from access (minors, the mentally ill, whatever)? Should access, on the other hand, become a basic right if lacking it realistically leaves a left-behind person unable to compete with others who have it?

- national security: Is it a risk for a country's economy to be reliant on a service offered somewhere else? Even worse, is this reliance draining skills from the population that might not be easily recovered when needed?

- energy/resources impact: Are we ready for an enormous increase in the usage of energy and/or certain goods? Should we limit usage until we can meet the demand without struggle?

- consumer protections: Many companies just offer 'flat' usage while remaining free to swap the model behind the scenes for a worse one when needed, or even to adjust user limits based on their server load. Which of these are fair business practices?

- economy risks: How much risk can we take of the economy becoming dependent on services that aren't yet profitable? Are there steps that need to be taken to protect us from the potential bust if these services can't cover their costs?

- monopoly risks: we could end up with a single company being able to offer literally any intellectual work as a service. Whoever gets this tech might become the most powerful entity in the world. Should we address this impact through regulation before such an entity rises and becomes impossible to tame?

- enabling crime: Can an army of AI hackers disrupt entire countries? How would this be handled?

- impact on job creation: If AIs can practically DDoS job application forms, how do we keep access fair? The same goes for a million other places subjected to AI spam.

Your point "It's on politicians to help people adapt to a new economic reality" brings up a few more:

- Should we tax companies that use AI? If they produce the same output while employing fewer people, tax revenue suffers and the untaxed money does not make it back to the people. How do we compensate?

- How should we handle entire professions being put out to pasture at once? Lost employment becomes a society-wide problem once it affects a large enough number of people.

- How should the pursuit of intellectual work be rethought if it becomes extremely cheap relative to manual work? Is the way we train our population in need of change?

You might have strong opinions on most of these issues, but there are clearly A LOT of important debates that aren't being addressed.

Your list of evidence-free vibe complaints perfectly exemplifies the reasons why regulations should be approached carefully with the advice of experts, or not at all.

  • I'm not sure what you mean by evidence-free here.

    Debates about public regulation shouldn't be started from evidence-backed conclusions; the debates themselves are what push research and discussion in the first place.

    Perhaps the conclusion on AI's impact on mental health will be "hey, multiple high-quality studies show that the impact is actually positive; let's allow it and in fact consider it as a potential treatment path". That's perfectly fine.

    What is not fine is not considering the topic at all until it's too late for preventive action. We don't need to wait for a building to burn down before considering whether it needs fire extinguishers.

    My list isn't made of complaints at all; it's just a few of the ways in which we suspect AI can be disruptive, and which are therefore probably worth examining.

  • Evidence-free? Did you even skim OP's list?

    Healthcare/social damage: we already have peer-reviewed studies on the potentially negative impacts of LLMs on mental health: https://pmc.ncbi.nlm.nih.gov/articles/PMC10867692/ . We also have numerous stories of people committing suicide after "falling in love" with an LLM or being nudged toward it by one.

    Energy/Resources: do I even have to provide evidence that LLMs waste enormous amounts of electricity, even leading to scarcity in some local markets and to coal power plants being turned back on?

    Those are just the ironclad ones; you can make very good data-privacy and national-security arguments quite easily as well.

    • > Energy/Resources: do I even have to provide evidence that LLMs waste enormous amounts of electricity, even leading to scarcity in some local markets and to coal power plants being turned back on?

      Yes, if you want to be taken seriously, your claims about this should be grounded in evidence and contextualized within the overall energy market.
