Comment by SCdF

15 days ago

I disagree, inasmuch as I have noticed this *far* more with AI than with any other advancement / fad (depending on your opinion) that came before.

This also tracks with every app and website injecting AI into every one of your interactions, with no way to disable it.

I think the article's point about non-consent is a very apt one, and expresses why I dislike this trend so much. I left Google Workspace, as a paying customer for years, because they injected gemini into gmail etc and I couldn't turn it off (only those on the most expensive enterprise plans could at the time I left).

To be clear I am someone that uses AI basically every day, but the non-consent is still frustrating and dehumanising. Users–even paying users–are "considered" in design these days as much as a cow is "considered" in the design of a dairy farm.

I am moving all of the software that I pay for to competitors who either do not integrate AI, or allow me to disable it if I wish.

To add to this, it's the same attitude that they used to create the AI in the first place by using content which they don't own, without permission. Regardless of how useful it may be, the companies creating it and including it have demonstrated time and again that they do not care about consent.

  • > the same attitude that they used to create the AI in the first place by using content which they don't own, without permission

    This was a massive "white pill" for me. When the needs of emerging technology ran head first into the old established norms of ""intellectual property"" it blew straight through like a battle tank, technology didn't even bother to slow down and try to negotiate. This has alleviated much of my concern with IP laws stifling progress; when push comes to shove, progress wins easily.

  • How can you get a machine to have values? Humans have values because of social dynamics and education (or lack of exposure to other types of education). Computers do not have social dynamics, and it is much harder to control what they are being educated on if the answer is "everything".

    • It wouldn't be hard if the people in charge had any scruples at all. These machines never could have done anything if some human being, somewhere in the chain, hadn't decided "yeah, I think we will do {nefarious_thing} with our new technology". Or should we start throwing up our hands when someone gets stabbed to death, like "well, I guess knives don't have human values"?

      Human beings are doing this.

    • > How can you get a machine to have values?

      The short answer is a reward function. The long answer is the alignment problem.

      Of course, everything in the middle is what matters. Explicitly defined reward functions are complete but not consistent. Data-defined rewards are potentially consistent but incomplete. It's not a solvable problem for machines, but it isn't for humans either. Still, we practice, improve, and muddle through despite this, hopefully approximating improvement over long enough timescales.
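
      To make the complete-vs-consistent distinction concrete, here is a toy sketch in Python. It's purely illustrative: every rule, label, and number below is invented, and no real reward model is a two-example lookup like this.

        # Toy illustration only: two ways of giving a machine "values".

        # 1. Explicitly defined reward: complete (it scores every input)
        #    but not consistent (hand-written rules contradict our actual
        #    intent in edge cases, e.g. a long answer that is genuinely helpful).
        def explicit_reward(response: str) -> float:
            score = 0.0
            if "please" in response:
                score += 1.0               # rule: reward politeness
            score -= 0.01 * len(response)  # rule: penalise verbosity
            return score                   # always defined, right or wrong

        # 2. Data-defined reward: fitted to human preference labels, so it is
        #    potentially consistent with what people actually want, but
        #    incomplete: unreliable outside the labelled data.
        preferences = [("thanks, here you go", 1.0), ("go away", 0.0)]

        def learned_reward(response: str) -> float:
            # stand-in for a trained reward model: nearest labelled example
            best = max(preferences, key=lambda p: len(set(p[0]) & set(response)))
            return best[1]

        print(explicit_reward("please help"), learned_reward("thanks"))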


    • That sounds like the valued-at-billions-and-drowning-in-funding company’s problem. The issue is they just go “there are no consequences for not solving this, so we simply won’t.”

    • Maybe if we can't build a machine that isn't a sociopath, the answer should be "don't build the machine" rather than "oh well, go ahead and build the sociopaths".


  • I’d argue that a lot of the scrape-and-train is just the newest and most blatant exploitation of the relationship that always existed, not a renegotiation of it. Stack Overflow monetized millions of hours of people’s work. Same thing with Reddit and Twitter and plenty of other websites.

    Legally it is different with books (as Anthropic found out) but I would argue morally it is more similar: forum users and most authors write not for money, but because they enjoy it.

    • I don't know, it feels odd to declare people wrote "because they enjoy it" and then get irritated when someone finds a way to monetize it retrospectively.

      Like you're either doing this for the money or you're not, and it's okay to re-evaluate that decision... but at the same time there's a whole lot of "actually I was low-key trying to build a career" type energy to a lot of the complaining.

      Like I switched off from Facebook some years after discovering it, when it increasingly became "look at my new business venture... friends". LinkedIn is at least just upfront about it and I can ignore the feed entirely (I use it for job listings only).

The shift from "you just don't understand" to damage control would be funny if it wasn't so transparent.

> We have identified a bug in our system... we take communication consent very seriously

> There was a bug, and we fucked up... we take comms consent seriously

These two actors were clearly coached into the same narrative. I also absolutely don't believe them at all: some PM made the conscious decision to bypass user preferences to increase some KPI that pleases some AI-invested stakeholder.

> only those on the most expensive enterprise plans could at the time I left.

lol. so the premium feature is the ability to turn off the AI? That's one way to monetise AI I suppose.

  • Hahaha. It's like a protection racket for the new age.

    "Nice user experience you got there. Would be a real shame if AI got added to it."

> I left Google Workspace, as a paying customer for years, because they injected gemini into gmail

I wonder if this varies by territory. In the UK, none of the Gmail accounts I use has received this pollution.

> I am moving all of the software that I pay for to competitors who either do not integrate AI, or allow me to disable it if I wish.

The latter sounds safer. The former may add "AI" tomorrow.

Yeah this is not a new thing with AI, you can unsubscribe all you want, they are still gonna email you about "seminars" and other bullshit. AWS has so many of those and your email is permanently in their database, even if you delete your account. I also still get Oracle Cloud emails even though I told them to delete my account as well, so I can't even log in anymore to update preferences!

  • Fun fact: requiring login to unsubscribe is illegal per the CAN-SPAM Act. The most you can do is make the user verify their email address.

> I disagree, inasmuch as I have noticed this far more with AI than with any other advancement / fad (depending on your opinion) that came before

Isn't that because most of the other advancements/fads were not as widely applicable?

With earlier things there were usually only particular kinds of sites or products where they would be useful. You'd still get some people trying to put them in places they made no sense, but most of the places they made no sense stayed untouched.

With AI, if well done, it would be useful nearly everywhere. It might not be well done enough yet for some of the places people are putting it, so it ends up being annoying, but that's a problem of them being premature, not a problem of them wanting to put AI somewhere it makes no sense.

There have been previous advancements that were useful nearly everywhere, such as the internet or the microcomputer, but they started out with limited availability and took many years to become widely available so they were more like several smaller advancements/fads in series rather than one big one like AI.

  • This is a very strange argument. If AI were so bloody revolutionary, then you wouldn't have to sneak it into your products without consent.

    Very often AI seems to be a solution looking for a problem.

  • > With AI, if well done, it would be useful nearly everywhere.

    I fundamentally disagree with this.

    I never, now or in the future, want to use AI to generate or alter communication or expression primarily between me and other humans.

    I do not want emails or articles summarised, I do not want emails or documents written for me, I do not want my photos altered or yassified. Not now, not ever.

    • Keep in mind I said "if well done". That was not meant to imply that I think the current AI offerings are well done. I'd take "well done" to mean that it performs the tasks it is meant for as well as human assistants perform those tasks.

      > I never, now or in the future, want to use AI to generate or alter communication or expression primarily between me and other humans. [...] I do not want emails or articles summarised, I do not want emails or documents written for me, I do not want my photos altered or yassified.

      That's fine, but generally the tools involved in doing those things are designed to be general purpose.

      A word processor isn't just going to be used by people writing personal things for example. It will also be used by people writing documentation and reports for work. Without AI it is common for those people to ask subordinates, if they are high enough in their organization to have them, to write sections of the report or to read source material and summarize it for them.

      An AI tool, if good enough to do those tasks, would be useful to those users, and so it makes sense for such tools to be added by the word processor developer.

      Again, I'm not saying that the AI tools currently being added to basically everything are good enough.

      The point is that

      (1) a large variety of tools and products have enough users that would find built-in AI useful (even if some users won't) that it makes a lot of sense for them to include those tools (when they become good enough), and

      (2) AI may be unique compared to prior advances/fads in how wide a range of things this applies to and the speed it has reached a point that companies think it has become good enough (again, not saying they have made the right judgement about whether it is good enough).

    • How about machine translation and fixing grammar in languages you're not very familiar with? That's the only use of "AI" I've found so far. I'd rather read (and write) broken English in informal contexts like this forum, but there are enough more formal situations.


Even WhatsApp has it in the search bar

  • For me it’s just a multi-coloured ring like a gamer’s mood light, but it’s literally just slapped in the corner of the UI the same way a shitty Intercom widget would be.

    Totally a thing a growth hacking team would do, injecting an interface on top of a design.

>I disagree, inasmuch as I have noticed this far more with AI than with any other advancement / fad

I agree with gp that new spam emails that override customers' email marketing preferences are not an "AI" issue.

The problem is that once companies have your email address, their irresistible compulsion to spam you is so great that they will deliberately not honor their own "Communication Preferences" that supposedly lets customers opt out of all marketing emails.

Even companies that are mostly good citizens about obeying customers' email marketing preferences still end up making exceptions. Examples:

Amazon has a profile page to opt out of all email marketing and it works... except ... it doesn't work to stop the new Amazon Pharmacy and Amazon Health marketing emails. Those emails do not have an "Unsubscribe" link and there is no extra setting in the customer profile to prevent them.

Apple doesn't send out marketing messages and obeys their customers' marketing email preferences ... except ... when you buy a new iPhone and then they send emails about "Your new iPhone lets you try Apple TV for 3 months free!" and then more emails about "You have Apple Music for 3 months free!"

Neither of those aggressive emails has anything to do with AI. Companies just like to make exceptions to their rules to spam you. The customer's email inbox is just too valuable a target for companies to ignore.

That said, I have 3 gmail.com addresses and none of them have marketing spam emails from Google about Gemini AI showing up in the Primary inbox. Maybe it's commendable that Google is showing incredible restraint so far. (Or promoting Gemini in Chrome and web apps is enough exposure for them.)

  • > That said, I have 3 gmail.com addresses and none of them have marketing spam emails from Google about Gemini AI showing up in the Primary inbox.

    That's because they put their alerts in the gmail web interface :-/

    "Try $FOO for business" "Use drive ... blah blah blah"

    All of these can be dismissed, but new ones show up regularly.

    • >That's because they put their alerts in the gmail web interface :-/

      I agree and that's what I meant by Google's "web apps" having promos about Gemini.

      But in terms of accessing Gmail accounts via the IMAP protocol in Mozilla Thunderbird, the Apple Mail client, etc, there are no spam emails about Gemini AI. Google could easily pollute everybody's Gmail inboxes with endless spam about Gemini such that all email clients with IMAP access would also see them, but that doesn't seem to happen (yet). I do see 1 promo email about YouTube Premium over the last 5 years. But zero emails about Google's AI.

  • > Apple doesn't send out marketing messages and obeys their customers' marketing email preferences ... except ... when you buy a new iPhone and then they send emails about "Your new iPhone lets you try Apple TV for 3 months free!" and then more emails about "You have Apple Music for 3 months free!"

    That's "transactional" I'm sure. It makes sense that a company is legally allowed to send transactional emails, but they all abuse it to send marketing bullshit wherever they can blur the line.

  • > Maybe it's commendable that Google is showing incredible restraint so far.

    Or the Gmail spam filter is working.

  • This is not an issue in Europe, due to effective regulation.

    • >This is not an issue in Europe, due to effective regulation.

      This article's author complaining about Proton overriding his email preferences is from the UK. Also in this thread, more commenters from UK and Germany say companies routinely ignore the law and send unwanted spam. Companies will justify it as "oops it was a mistake", or "it's a different category and not marketing", etc.

Imagine making this argument for other technologies. There is no opt-out button for machine learning, choosing the power source for their datacenters, the coding language in their software, etc. Conceptually there is a difference between opting out of an interaction with another party vs opting out of a specific part of their technology stack.

  • The three examples you listed are implementation details, so it's not clear if this is a serious post. Which datacenter they deploy code in is an implementation detail (other than jurisdiction for legal purposes, which is something you may wish to know about and choose).

    A better example would be: imagine every single operating system and app you use adds spellcheck. They only let you spell check in American[1]. You will get spellcheck prompts from your operating system, your browser, and the webapp you're in, and you can't turn any of them off.

    [1] in this example, you speak the Queen's English, so you spell color as colour, etc.

    • Unrelated, but it's interesting to think about terms like "Queen's English" now that the Queen is gone. Will we be back to "King's English" some day? I suppose the monarchy might stay too irrelevant to bother changing phrases.
