Comment by nippoo
6 hours ago
"I went through about a dozen AI tools I've personally authorized in the last year after reading this. Nine of them have Google Workspace OAuth permissions that include reading all emails and accessing all Drive files. Nine. I authorized every one of them without reading the permissions because the onboarding flow asked and I was in a hurry."
Do other (tech-literate) people do this?! Giving anything access to my emails and Google Drive would keep me up at night; I try to be very granular with permissions and revoke them when I stop using an app. I would assume that anything confidential/NDA in my emails had been compromised and leaked well before this point!
At my job I was asked to help integrate our Google Workspace account with an AI notetaking tool another team purchased. The vendor instructed us to set up Domain-wide Delegation for reading/writing emails and Google Drive files. Essentially this would automatically opt in every user in my organization and there would be no way to opt out.
I had to contact the vendor to set up a "less recommended" way of requiring users to actually log into the tool and accept the OAuth permissions prompt. The entire time, everybody (the vendor and my organization) acted like it was a waste of my time.
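For context on why Domain-wide Delegation needs no per-user consent: once an admin grants it, the vendor's service account can mint a signed JWT naming any user in the domain as the subject and exchange it for an access token. A rough sketch of the claims involved (the claim names and token endpoint are from Google's documented service-account flow; the helper function and emails are hypothetical):

```python
import time

def dwd_jwt_claims(sa_email: str, impersonated_user: str, scopes: list) -> dict:
    """Claims a domain-wide-delegated service account puts in its signed JWT
    (Google's OAuth 2.0 service-account flow). `sub` names the user being
    impersonated -- no consent prompt is ever shown to that user."""
    now = int(time.time())
    return {
        "iss": sa_email,           # the service account itself
        "sub": impersonated_user,  # ANY user in the Workspace domain
        "scope": " ".join(scopes),
        "aud": "https://oauth2.googleapis.com/token",
        "iat": now,
        "exp": now + 3600,
    }
```

The per-user OAuth prompt the parent had to fight for is the flow where that `sub` swap isn't possible and each user sees exactly what's being requested.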
I can't control what everyone else does, if they want to grant some tool these broad permissions, feel free. But I find it unethical to just enable it for all users with no ability to opt out if this isn't actually a critical tool. Not to mention the security concerns with this.
What is most concerning to me is how people are turning their brains off for anything tangentially related to AI. The people making this request to me are smart people who 5 years ago would have never asked to do this. Now suddenly they don't care - everyone else is doing it, why not?
>What is most concerning to me is how people are turning their brains off for anything tangentially related to AI.
Everyone is betting the farm on that .01% chance that they become wild trillionaires. We're going to burn down the whole planet and use all of the resources so a few people can have a minuscule chance at being obscenely rich.
Personally, no. This comment from the other day has been stuck in my head: "Anyone trying to stay safe will be on the gradient to a Stallmanesque monastic computing existence."[0]
It's both hilarious and true. As much as I want to reap the gains of having an openclaw agent going ham on my personal data, I abstain. I shed a tear for all the cool stuff I'm missing out on, but permissions are never about now. Once they have it, they'll always have it.
0: https://news.ycombinator.com/item?id=47796469#47797330
Boss: "Just slap something together for the meeting with the Big Cheese this afternoon."
(Engineer internal monologue) "OK I'll just agree to everything during setup, I can just tear it all down later."
Six months later the slapped together demo is the production release.
As the engineering saying goes: there's nothing more permanent than a temporary solution.
> Do other (tech-literate) people do this?!
I'm sure it's very common, yes. Permissions & popup fatigue is very real. Today, every application and website throws 6 dozen popups at you that you have to get through to get to the stuff you came there for. Most of it is marketing; some of it is from braindead lawyers; some of it is important; none of it gets read by users. At some point you give up and just click "yes, goddamnit, I have work to do" and all the security stuff is out the window.
Always remember: there is no such thing as computer security. If your data is on a networked computer, consider it semi-public. The first and only rule of computer security is: don't store or do anything on a networked computer that would devastate you if it were leaked or compromised.
And, make sure not to think about how much of our modern infrastructure is built on top of computers connected to the Internet.
I usually pay pretty close attention if something wants more than my email address, name and profile image, etc... I've used a couple things that request drive access, only because they actually deal with documents. I'm not sure that I've given any AI agents particularly open access... though if Claude Code wanted to, it could probably pwn me... I've been considering shifting to a VM for that.
> "Nine of them have Google Workspace OAuth permissions that include reading all emails and accessing all Drive files. Nine. I authorized every one of them without reading the permissions because the onboarding flow asked and I was in a hurry."
No, you didn't authorize every one of them without reading the permissions because the onboarding flow asked and you were in a hurry.
You authorized it because the onboarding flow asked, and you weren't given an opportunity to say no. What are you to do: say no, and then not use the app?
This whole concept is just wrong. Instead of saying "no" and the app seeing that you didn't grant permission: you should be able to say "no", and the app shouldn't see any denial at all. It should just see empty data when requesting it. Problem fucking solved. You get to use whatever apps you want, apps get to ask for whatever permissions they want, and you get to deny that permission without the app fucking you over.
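The idea in plain code (hypothetical names; this assumes the platform, not the app, mediates all data access):

```python
def gated_fetch(granted: set, scope: str, fetch):
    """Return real data only for granted scopes; otherwise an empty result.
    The app sees the same response shape either way, so a denial is
    indistinguishable from a user who simply has no data."""
    return fetch() if scope in granted else []

# An app asking for contacts it was never granted gets an empty list,
# not a permission error:
gated_fetch({"email"}, "contacts", lambda: ["alice", "bob"])  # -> []
```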
I think it's a bit easier to add a "Some" option, so that the app is unaware of the effective "No" answer.
But a lot of the permissions are also just bad. I think it's reasonable for somebody to make a web app that uses my Google Drive as a backend for storing data. I don't think it's reasonable that it should be able to open files it didn't create, though.
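For what it's worth, Drive's scope model does distinguish exactly that: the `drive.file` scope grants access only to files the app created or the user explicitly opened with it, while the broad `drive` scope covers everything. The scope URLs below are Google's real ones; the picker function is just illustrative:

```python
# Real Google Drive OAuth scope URLs; the helper itself is hypothetical.
DRIVE_FULL = "https://www.googleapis.com/auth/drive"           # every file in Drive
DRIVE_PER_FILE = "https://www.googleapis.com/auth/drive.file"  # only app-created/opened files

def drive_scope(needs_all_files: bool) -> str:
    """Pick the narrowest Drive scope that covers what the app actually does."""
    return DRIVE_FULL if needs_all_files else DRIVE_PER_FILE
```

A Drive-as-backend app like the one described above only ever touches its own files, so it could request `drive.file`; many vendors ask for the full scope anyway.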
This just moves the problem to support. The app doesn't work for users, they don't remember clicking no, and then some CSR has to hand-hold them through clicking "yes".
> This just moves the problem to support.
Boo-hoo. Support should exist. Support should be trained. Support should help educate the customer. If your business isn't doing that then your business is trashy anyway.
Many companies don't have support. That's a major problem. We have a lot of trashy businesses.
The app shouldn't see empty data, it should see statistically likely fake data.
While you're right, I'll be happy with just empty data for now. Generating statistically-likely false data is only recently available generally and turns out to be rather expensive.
What? This makes no sense to me. What's the threat model where you'd rather the OAuth flow result in the client app getting fake data?
If you reject the permissions, the client doesn't get an authorization code: per the spec the redirect comes back with an `error` parameter instead, and it's up to your callback whether to do anything with that.
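Concretely, in the standard OAuth 2.0 authorization-code flow (RFC 6749 section 4.1.2), a grant redirects back with `?code=...` and a refusal with `?error=access_denied`. A minimal sketch of a callback telling them apart (stdlib only; function name and URLs are hypothetical):

```python
from urllib.parse import urlparse, parse_qs

def classify_callback(redirect_url: str) -> str:
    """Classify an OAuth 2.0 authorization-code callback redirect:
    a grant carries ?code=..., a refusal carries ?error=access_denied."""
    params = parse_qs(urlparse(redirect_url).query)
    if "error" in params:
        return "denied"
    if "code" in params:
        return "granted"
    return "unknown"

classify_callback("https://app.example/cb?error=access_denied")  # -> "denied"
classify_callback("https://app.example/cb?code=abc123")          # -> "granted"
```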
> What are you to do: say no, and then not use the app?
Um, yes? That's literally the point of what's happening. The app is asking for permissions because it needs it to do whatever it's doing. If you don't want to give it access to the data then there's no reason to use the app.