Comment by thisoneworks

16 days ago

It'll be funny when we have robots: "The user's facial expression looks to be consenting; I'll take that as an encouraging yes."

That's literally a Portal 2 joke. "Interpreting vague answer as yes" when GLaDOS sarcastically responds "What do you think?"

  • The simplest solution is to open the other pod bay’s door, but the user might interrupt Sanctuary Moon again with a reworded prompt if I do that.

    </think>

    I’m sorry Dave, I can’t do that.

This is really just how the tech industry works. We have abused the concept of consent into an absolute mess.

My personal favorite way they do this lately is notification banners for, like... registering for newsletters:

"Would you like to sign up for our newsletter? Yes | Maybe Later"

"Maybe Later" being the only negative answer shows a pretty strong lack of understanding of consent!

  • Worse yet, instead of a checkbox to opt in/out of a newsletter or marketing email when signing up or checking out, the form simply opts the user in. Merely doing business with a company is treated as consent to spam, with the excuse that the user can unsubscribe if they don’t want it.

    Tactics like these should be illegal, but instead they have become industry standards.

    • Not everyone. If your business is chill and you are REEEEALLY thoughtful and respectful with newsletters, you will be rewarded with open rates well in excess of 50%…

    • Companies that do that instantly get reported as spam. There’s a good reason beyond regulation not to do it that way.

  • There is no "lack of understanding" here. The people responsible for these interfaces understand consent perfectly well; they just don't care about it.

  • At least we haven’t gotten to Elysium levels yet, where machines arbitrarily decide to break your arm, then make you go to a government office to apologize for your transgressions to an LLM.

    We’re getting close with ICE for commoners, and also for the ultra-wealthy, like when Dario was forced to apologize after he complained that Trump solicited bribes, then used the DoW to retaliate over non-payment.

    However, the scenario I describe is definitely still third term BS.

That raises an interesting point. Imagine we have helper bots or sex bots, and one of them gets someone killed or rapes them. Who is held responsible?

These current “AI” implementations could easily harm a person if they had robot bodies. And unlike with a car, it’s hard to blame the owner if the owner is the one being harmed.

The more I hear about AI, the more human-like it seems.

  • We trained the computers to act more like humans, which means they can emulate the best of us and the worst of us.

    If control over them centralizes, that’s terrifying. History tells us the worst of the worst will be the ones in control.