
Comment by jmward01

3 months ago

I believe that the right regulation makes a difference, but I honestly don't know what that looks like for AI. LLMs are so easy to build and use, and that trend is accelerating. The idea of regulating AI is quickly becoming like the idea of regulating hammers: they are ubiquitous general-purpose tools, and legislation specifically about hammers would be deeply problematic for, hopefully, obvious reasons. Honest question: what is practical AND effective here? Specifically, what problems can clearly be solved, and by what kinds of regulations?

The most sane version of regulation, IMO, is the (already passed) EU AI Act. It's less about controlling AI itself and more about controlling inputs/outputs: tell users when they're interacting with an AI, mark AI-generated content with a disclaimer, don't use AI in high-risk scenarios, etc. Along the lines of "we don't regulate hammers, but we regulate you hitting people with a hammer".

https://artificialintelligenceact.eu/

  • I haven't read that regulation, but the way you describe it makes me immediately think of cookie banners. Everything is pretty quickly getting 'AI' in it. Even if the definition narrows down to LLMs, there are big questions. Does speech recognition count? Whisper uses cross-attention and transformer blocks to generate text; you could easily call it an LLM, but I doubt anyone would use it that way. What about services that use LLMs in their back-end to monitor logs for problems? Does that count? Again, I am actually for regulations, but I just don't know where to start. My best, very early and likely deeply flawed, thought is that we create enhanced punishments for crimes when an LLM is used. So a company that illegally harvests your data and processes it with an LLM would get bigger fines and penalties because LLMs were involved. That kind of thing. The idea here is that bigger tools get bigger punishments. Again, not well thought out, but there may be something here.

  • Why should a user care whether the entity they're interacting with meets some arbitrary political definition of "AI"? Does it matter whether an article that I'm reading was written by AI or by a monkey randomly banging on a keyboard? Regulations seem totally pointless, just another excuse to shovel taxpayer money to a bunch of bureaucrats with fancy degrees who are incapable of finding real jobs.

    • This is silly.

      Regulation exists to help balance out the power disparity between consumers and the corporations that sell AI. The EU wrote a good AI law; it's mainly focused on access to goods and services and the impact that AI can have in those domains. Among other things, it almost entirely bans surveillance pricing, makes companies liable if an AI discriminates on their behalf, and restricts the use of facial recognition.

      This is its role: to equalize the power disparity by prohibiting companies that do business within the EU from engaging in these predatory practices.