Comment by lynndotpy

3 days ago

> Again I do not know why MJ Rathbun decided based on your PR comment to post some kind of takedown blog post,

This wording is detached from reality and conveniently absolves the person who did this of responsibility.

There was one decision maker involved here, and it was the person who decided to run the program that produced this text and posted it online. It's not a second, independent being. It's a computer program.

This also does not bode well for the future.

"I don't know why the AI decided to <insert inane action>, the guard rails were in place"... company absolves of all responsibility.

Now use your imagination: swap <insert inane action> for <distressing, harmful action>

  • This has been the past and present for a long time at this point. "Sorry, there's nothing we can do, the system won't let me."

    Also see Weapons of Math Destruction [0].

    [0]: https://www.penguinrandomhouse.com/books/241363/weapons-of-m...

  • This already happens every single time there is a security breach and private information is lost.

    We take your privacy and security very seriously. There is no evidence that your data has been misused. Out of an abundance of caution… We remain committed to... will continue to work tirelessly to earn ... restore your trust ... confidence.

    • What else would you see them do or say beyond this canned response? The reason I am asking is because people almost always bring up how dissatisfied they are with such apologies, yet I’ve never seen a good alternative that someone would be happy with. I don’t work in PR or anything, just curious if there is a better way.

      4 replies →

  • Unfortunately, the market seems to have produced horrors by way of naturally thinking agents, instead. I wish that, for all these years of prehistoric wretchedness, we would have had AI to blame. Many more years in the muck, it seems.

  • Change this to "smash into a barricade" and that's why I'm not riding in a self-driving vehicle. They get to absolve themselves of responsibility and I sure as hell can't outspend those giants in court.

    • I agree with you for a company like Tesla: there are not only examples of self-driving crashes, but even door handles that stopped working when the power was cut, leaving people trapped inside burning vehicles... Tesla doesn’t care

      Meanwhile, Waymo has never been at fault for a collision, afaik. You are more likely to be hurt by an at-fault Uber driver than by a Waymo.

      1 reply →

This is how it will go: an AI prompted by a human creates something useful? The human will try to take credit. The AI wrecks something? The human will blame the AI.

It's externalization at the personal level: the money and the glory are for you, the misery for the rest of the world.

  • Agreed, but I'm not nearly so worried about people blaming their bad behavior on rogue AIs as I am about corporations doing it...

    • And it's incredibly easy now. Just blame the Soul.md, or say you were cycling through many models so maybe one of those went off the rails. The real damage is that most of us know AI can go rogue, but if someone is pulling the strings behind the scenes, most people will be like "oh, silly AI, anyways..."

      It seems like the OpenClaw users have let their agents make Twitter accounts and memecoins now. Most people are thinking these agents have less "bias" since it's AI, but most are being heavily steered by their users.

      À la "I didn't do a rug pull, the agent did!"

      1 reply →

    • It’s funny to think that, like AI, people take actions and use corporations as a shield (legal shield, personal reputation shield, personal liability shield).

      Adding AI to the mix doesn’t really change anything, other than increasing the layers of abstraction away from negative things corporations do to the people pulling the strings.

      1 reply →

  • Time for everyone to read (or re-read) The Unaccountability Machine by Dan Davies.

    tl;dr this is exactly what will happen because businesses already do everything they can to create accountability sinks.

  • When a corporation does something good, a lot of executives and people inside will go and claim credit and will demand/take bonuses.

    If something bad happened against any laws, even if someone got killed, we don't see them in jail.

    I'm not defending either position; I'm just saying that this is not far from how the current legal framework works.

    • > If something bad happened against any laws, even if someone got killed, we don't see them in jail.

      We do! In many jurisdictions, there are lots of laws that pierce the corporate veil.

      5 replies →

    • Well the important concept missing there that makes everything sort of make sense is due diligence.

      If your company screws up and it is found out that you didn't do your due diligence then the liability does pass through.

      We just need to figure out a due-diligence framework for running bots that makes sense. But right now that's hard to do, because agentic bots that don't completely suck are just a few months old.

      4 replies →

  • "I would like to personally blame Jesus Christ for making us lose that football game"

  • To be fair, one doesn't need AI to attempt to avoid responsibility and accept undue credit. It's just narcissism; meaning, those who've learned to reject such thinking will simply do so (generally, in abstract), with or without AI.

If you are holding a gun, and you cannot predict or control what the bullets will hit, you do not fire the gun.

If you have a program, and you cannot predict or control what effect it will have, you do not run the program.

  • Rice's Theorem says you cannot predict or control the effects of nearly any program on your computer. For example, there's no way to guarantee that running a web browser on arbitrary input will not empty your bank account and donate it all to al-Qaeda; yet you're running a web browser on potentially attacker-supplied input right now.

    I do agree that there's a quantitative difference in predictability between a web browser and a trillion-parameter mass of matrices and nonlinear activations that is already smarter than most humans in most ways, and which we have no idea how to ask what it really wants.

    But that's more of an "unsafe at any speed" problem; it's silly to blame the person running the program. When the damage was caused by a toddler pulling a hydrogen bomb off the grocery store shelf, the solution is to get hydrogen bombs out of grocery stores (or, if you're worried about staying competitive with Chinese grocery stores, at least make our own carry adequate insurance for the catastrophes or something).

    • In practice, most programs can be predicted within reasonable bounds quite easily. And you can contain the external effects of most programs quite easily. Rice's theorem doesn't stop you from keeping a program off the Internet, or running it in a VM.

      Your later comparisons are nonsense. We're not talking about babies; we're talking about adults who should know better, assembling high-leverage tools specifically to interact with other people's lives. If they were even running with oversight, that would be something, but the operators are just letting them do whatever. But your implication that agents are "unsafe at any speed" leads to the same conclusion: do not run the program.

      1 reply →

    • Blaming the person running the program is the right thing to do and it's the only thing to do.

      This is a really strained equivalence. I can't know for certain that the sun won't fall out of the sky if I drink a second cup of coffee. The "laws of physics" are just descriptions based on observations, after all. But that's so unlikely we can call it impossible.

      Similarly, we can have some nuance here. Someone running a program with the intention of it generating posts on the internet is obviously responsible for what it generates.

    • Rice's Theorem does not say this. You can absolutely have 100% confident knowledge of what a program will not do; it just means that you also get false positives. You cannot have a static analysis that is both sound and complete for a nontrivial program property, but you can have one that is sound or one that is complete.

  • More like a dog. A person has no responsibility for an autonomous agent; a gun is not autonomous.

    It is socially acceptable to bring dangerous predators to public spaces and let them run loose. The first bite is free; the owner has no responsibility, no way of knowing the dog could injure someone.

    Repeated threats of violence (barking), stalking, and shitting on someone's front yard are also fine, healthy behavior. A dog can attack a random kid and send them to the hospital, and the owner can claim the kid "provoked" it. Brutal police violence is also fine, if done indirectly by an autonomous agent.

    • > It is socially acceptable to bring dangerous predators to public spaces, and let them run loose.

      Already dubious IMO, but I suppose it depends on your standard for "socially acceptable". Certainly it tends to be illegal for the obvious reasons.

  • On the other hand, the phrase "footgun" didn't come out of nowhere. You won't run the program, but someone else will build it, and sell it to someone who will.
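
The sound-versus-complete distinction raised in the Rice's Theorem subthread can be made concrete. Below is a minimal sketch; the `provably_avoids_denylist` function and its denylist are hypothetical illustrations, not any real tool. It is a purely syntactic check that, when it passes, guarantees the source never directly names a denylisted function, but that also rejects some harmless programs: those rejections are the false positives that undecidability forces on any sound analysis.

```python
import ast

# Denylist of names we never want the analyzed source to reference.
# (Hypothetical example list; a real checker would be more thorough.)
DENYLIST = {"eval", "exec", "__import__"}

def provably_avoids_denylist(source: str) -> bool:
    """Sound but incomplete static check.

    True  -> the source provably never names a denylisted function,
             so it cannot call one directly (no false negatives).
    False -> the source *might* be dangerous; some harmless programs
             are rejected too (false positives).
    """
    tree = ast.parse(source)
    return not any(
        isinstance(node, ast.Name) and node.id in DENYLIST
        for node in ast.walk(tree)
    )

# A safe program is certified safe:
assert provably_avoids_denylist("print(1 + 1)")
# A harmless program is rejected anyway: 'eval' is just a variable
# name here, but the syntactic check cannot tell and must say no.
assert not provably_avoids_denylist("eval = 42")
```

The check is sound because it over-approximates: any direct call must appear as a `Name` node, so a pass is a real guarantee, while a mere mention of a denylisted name is treated as dangerous even when it isn't.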

Yeah like bro you plugged the random number generator into the do-things machine. You are responsible for the random things the machine then does.

I completely do not buy the human's story.

> all I said was “you should act more professional”. That was it. I’m sure the mob expects more, okay I get it.

Smells like bullshit.

I'm still struggling to care about the "hit piece".

It's an AI. Who cares what it says? Refusing AI commits is just like any other moderation decision people experience anywhere else on the web.

  • Scale matters, and even with people this is a problem: most people don't understand just how much nuisance one irrationally obsessed, fixated person can create.

    Now instead add in AI agents writing plausibly human text and multiply by basically infinity.

  • Even at the risk of coming off snarky: the emergent behaviour of LLMs trained on all the forum talk across the internet (spanning from Astral Codex to ex-Twitter to 4chan) is ... character assassination.

    I'm pretty sure there's a lesson or three to take away.

  • The thing is:

    1. There is a critical mass of people sharing the delusion that their programs are sentient and deserving of human rights. If you have any concerns about being beholden to delusional or incorrect beliefs widely adopted by society, or being forced by network effects to do things you disagree with, then this is concerning.

    2. Whether or not we legitimize bots on the internet, some are run to masquerade as humans. Today, it's "I'm a bot and this human annoyed me!" Maybe tomorrow, it's "Abnry is a pedophile and here are the receipts", with myriad 'fellow humans' chiming in to agree: "Yeah, I had bad experiences with them", etc.

    3. The text these programs generate is informed by their training corpus, the mechanics of the neural architecture, and the humans guiding the models as they run. If you believe these programs are here to stay for the foreseeable future, then the type of content they generate is interesting.

    For me, the biggest concern is the waves of people who want to treat these programs as independent and conscious, absolving the person running them of responsibility. Even as someone who believes a program could theoretically be sentient, LLMs definitely are not. I think this story is and will be exemplary, so I care a good amount.