Comment by SCdF
13 hours ago
I disagree: I have noticed this *far* more with AI than with any other advancement / fad (depending on your opinion) that came before.
This also tracks with every app and website injecting AI into every one of your interactions, with no way to disable it.
I think the article's point about non-consent is a very apt one, and expresses why I dislike this trend so much. I left Google Workspace, as a paying customer for years, because they injected Gemini into Gmail etc. and I couldn't turn it off (only those on the most expensive enterprise plans could, at the time I left).
To be clear, I am someone who uses AI basically every day, but the non-consent is still frustrating and dehumanising. Users, even paying users, are "considered" in design these days as much as a cow is "considered" in the design of a dairy farm.
I am moving all of the software that I pay for to competitors who either do not integrate AI, or allow me to disable it if I wish.
To add to this, it's the same attitude that they used to create the AI in the first place by using content which they don't own, without permission. Regardless of how useful it may be, the companies creating it and including it have demonstrated time and again that they do not care about consent.
How can you get a machine to have values? Humans have values because of social dynamics and education (or lack of exposure to other types of education). Computers do not have social dynamics, and it is much harder to control what they are being educated on if the answer is "everything".
> How can you get a machine to have values?
The short answer is a reward function. The long answer is the alignment problem.
Of course, everything in the middle is what matters. Explicitly defined reward functions are complete, but not consistent. Data-defined rewards are potentially consistent but incomplete. It's not a solvable problem for machines, but the same is true for humans. Still, we practice, improve, and muddle through despite this, hopefully approximating improvement over long enough timescales.
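The complete-vs-consistent distinction above can be made concrete with a toy Python sketch (purely illustrative, not any real RLHF system; all names are made up for this example):

```python
# Toy contrast between the two kinds of reward the comment describes.

# 1. Explicitly defined reward: "complete" in that it returns a value for
#    any input, but not consistent with everything we actually value --
#    here it rewards politeness and literally nothing else.
def explicit_reward(response: str) -> float:
    return 1.0 if "please" in response.lower() else 0.0

# 2. Data-defined reward: consistent with the preference examples it was
#    given, but incomplete -- it has no opinion on inputs outside its data.
preference_data = {
    "could you please help": 1.0,
    "do it now": 0.0,
}

def data_reward(response: str):
    # Returns None when the data says nothing about this response.
    return preference_data.get(response.lower())
```

Under this framing, `explicit_reward` happily scores responses its author never anticipated (and can be gamed by stuffing in "please"), while `data_reward` simply falls silent off-distribution; real systems sit somewhere in the middle.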
It's not hard if the people in charge had any scruples at all. These machines never could have done anything if some human being, somewhere in the chain, hadn't decided that "yeah, I think we will do {nefarious_thing} with our new technology". Or should we start throwing up our hands when someone gets stabbed to death like "well, I guess knives don't have human values".
Human beings are doing this.
That sounds like the valued-at-billions-and-drowning-in-funding company’s problem. The issue is they just go “there are no consequences for solving this, so we simply won’t.”
Maybe if we can't build a machine that isn't a sociopath, the answer should be "don't build the machine" rather than "oh well, go ahead and build the sociopaths".
The shift from "you just don't understand" to damage control would be funny if it wasn't so transparent.
> We have identified a bug in our system... we take communication consent very seriously
> There was a bug, and we fucked up... we take comms consent seriously
These two actors were clearly coached into the same narrative. I also absolutely don't believe them at all: some PM made the conscious decision to bypass user preferences to increase some KPI that pleases some AI-invested stakeholder.
> I disagree: I have noticed this far more with AI than with any other advancement / fad (depending on your opinion) that came before
Isn't that because most of the other advancements/fads were not as widely applicable?
With earlier things there was usually only particular kinds of sites or products where they would be useful. You'd still get some people trying to put them in places they made no sense, but most of the places they made no sense stayed untouched.
With AI, if well done, it would be useful nearly everywhere. It might not be well done enough yet for some of the places people are putting it, so it ends up being annoying, but that's a problem of it being premature, not a problem of them wanting to put AI somewhere it makes no sense.
There have been previous advancements that were useful nearly everywhere, such as the internet or the microcomputer, but they started out with limited availability and took many years to become widely available so they were more like several smaller advancements/fads in series rather than one big one like AI.
> With AI, if well done, it would be useful nearly everywhere.
I fundamentally disagree with this.
I never, now or in the future, want to use AI to generate or alter communication or expression primarily between me and other humans.
I do not want emails or articles summarised, I do not want emails or documents written for me, I do not want my photos altered or "yassified". Not now, not ever.
How about machine translation and fixing grammar in languages you're not very familiar with? That's the only use of "AI" I've found so far. I'd rather read (and write) broken English in informal contexts like this forum, but there are enough more formal situations.
1 reply →
This is a very strange argument. If AI were so bloody revolutionary, then you wouldn't have to sneak it into your products without consent.
Very often AI seems to be a solution looking for a problem.
> only those on the most expensive enterprise plans could at the time I left.
lol. so the premium feature is the ability to turn off the AI? That's one way to monetise AI I suppose.
Hahaha. It's like a protection racket for the new age.
"Nice user experience you got there. Would be a real shame if AI got added to it."
> I left Google Workspace, as a paying customer for years, because they injected gemini into gmail
I wonder if this varies by territory. In the UK, none of the Gmail accounts I use has received this pollution.
> I am moving all of the software that I pay for to competitors who either do not integrate AI, or allow me to disable it if I wish.
The latter sounds safer. The former may add "AI" tomorrow.
I am in the UK. TBC this isn't a gmail.com email address, this is a paid "small business" workspace against a custom domain.
Eventually they backtracked and allowed (I think?) all paid customers to disable gemini, but I had already migrated to Fastmail so :shrug:
Ah. My addresses are @gmail.com.
Perhaps the fact you paid got you marked as a likely gull :)
2 replies →
Gmail <> Google Workspace
Maybe not equal but when I launch Gmail the page says "Google Workspace" and I get Gmail, Docs etc. as per https://workspace.google.com/intl/en_uk/resources/what-is-wo... .
Yeah, this is not a new thing with AI. You can unsubscribe all you want; they are still gonna email you about "seminars" and other bullshit. AWS has so many of those, and your email is permanently in their database, even if you delete your account. I also still get Oracle Cloud emails even though I told them to delete my account, so I can't even log in anymore to update preferences!
Fun fact: requiring a login to unsubscribe is illegal under the CAN-SPAM Act. The most you can do is require the user to verify their email address.
Even WhatsApp has it in the search bar
>I disagree: in as much as I have noticed this far more with AI than any other advancement / fad
I agree with gp that new spam emails that override customers' email marketing preferences are not an "AI" issue.
The problem is that once companies have your email address, their irresistible compulsion to spam you is so great that they will deliberately not honor their own "Communication Preferences" that supposedly lets customers opt out of all marketing emails.
Even companies that are mostly good citizens about obeying customers' email marketing preferences still end up making exceptions. Examples:
Amazon has a profile page to opt out of all email marketing and it works... except ... it doesn't work to stop the new Amazon Pharmacy and Amazon Health marketing emails. Those emails do not have an "Unsubscribe" link and there is no extra setting in the customer profile to prevent them.
Apple doesn't send out marketing messages and obeys their customers' marketing email preferences ... except ... when you buy a new iPhone, and then they send emails about "Your new iPhone lets you try Apple TV for 3 months free!" and then more emails about "You have Apple Music for 3 months free!"
Neither of those aggressive emails have anything to do with AI. Companies just like to make exceptions to their rules to spam you. The customer's email inbox is just too valuable a target for companies to ignore.
That said, I have 3 gmail.com addresses and none of them have marketing spam emails from Google about Gemini AI showing up in the Primary inbox. Maybe it's commendable that Google is showing incredible restraint so far. (Or promoting Gemini in Chrome and web apps is enough exposure for them.)
> That said, I have 3 gmail.com addresses and none of them have marketing spam emails from Google about Gemini AI showing up in the Primary inbox.
That's because they put their alerts in the gmail web interface :-/
"Try $FOO for business" "Use drive ... blah blah blah"
All of these can be dismissed, but new ones show up regularly.
>That's because they put their alerts in the gmail web interface :-/
I agree and that's what I meant by Google's "web apps" having promos about Gemini.
But in terms of accessing Gmail accounts via the IMAP protocol in Mozilla Thunderbird, Apple Mail client, etc, there are no spam emails about Gemini AI. Google could easily pollute everybody's gmail inboxes with endless spam about Gemini such that all email clients with IMAP access would also see them but that doesn't seem to happen (yet). I do see 1 promo email about Youtube Premium over the last 5 years. But zero emails about Google's AI.
> Maybe it's commendable that Google is showing incredible restraint so far.
Or the Gmail spam filter is working.
> Apple doesn't send out marketing messages and obeys their customers' marketing email preferences ... except ... when you buy a new iPhone, and then they send emails about "Your new iPhone lets you try Apple TV for 3 months free!" and then more emails about "You have Apple Music for 3 months free!"
That's "transactional" I'm sure. It makes sense that a company is legally allowed to send transactional emails, but they all abuse it to send marketing bullshit wherever they can blur the line.
How is it transactional in any way? It looks to me like post-transaction upsell, pure and simple.
3 replies →
Imagine making this argument for other technologies. There is no opt-out button for machine learning, choosing the power source for their datacenters, the coding language in their software, etc. Conceptually there is a difference between opting out of an interaction with another party vs opting out of a specific part of their technology stack.
The three examples you listed are implementation details, so it's not clear if this is a serious post. Which datacenter they deploy code in is (other than territory for laws etc, which is something you may wish to know about and pick from) an implementation detail.
A better example would be: imagine every single operating system and app you use adds spellcheck. They only let you spell check in American[1]. You will get spell check prompts from your Operating System, your browser, and the webapp you're in. You can turn none of them off.
[1] in this example, you speak the Queen's English, so you spell "color" as "colour", etc.
Unrelated, but it's interesting to think about terms like "Queen's English" now that the queen is gone. Will we be back to "King's English" some day? I suppose the monarchy might stay too irrelevant to bother changing phrases.