I saw this coming from miles away. Computers are better at solving CAPTCHAs than people are, and people can be bribed or convinced to join botnets, so IP whitelisting doesn't work either. Now we have tons of fingerprinting and behaviour analysis, but governments are cracking down on that. Plus, YouTube had a massive ad fraud problem with ads being played back in the background in embedded videos, so their detection clearly wasn't good enough.
There aren't many good ways to prove you're not a bot and there are even fewer that don't involve things like ID verification.
Their opt-in approach helps shift the blame to individual web stores for a while, so who knows if this will take off. But either way, in the long term, the open, human internet is either going away or getting locked behind proofs of attestation like this.
Apple built remote attestation into Safari years ago together with Cloudflare and Google is now going one step further, as Apple's approach doesn't work well against bots that can drive browsers rather than scripted automation tools.
Luckily, their current approach can be worked around because it's only targeting things like stores now and you can buy things from other stores. Once stores find out that click farms have hundreds of phones just tapping at remotely served content, uptake will probably be limited.
It'll be a few years before this is everywhere, but unless AI suddenly isn't widely available anymore, it's going to be inevitable.
> saw this coming from miles away. Computers are better at solving CAPTCHAs than people are
good point... it's interesting how Captcha was initially popularized as a reverse Turing test, but it's just variants of Proof of Work today.
And it seemed clever at the time for Google to leverage this for improvement of their OCR models (it was!), and makes you wonder what utility is derived from the proven "work" today.
CAPTCHAs were designed as a type of Turing Test, not a reverse Turing Test. It’s not surprising that the effectiveness of these weaker variants has collapsed, given that AI can now pass the real Turing Test.
> people can be bribed or convinced to join botnets so IP whitelisting doesn't work either
what does that bribe look like? as in, how much can one get? what all does that entail? is that a little box i connect to my network and forget about? does that mean i can unplug it until another payment is received, and that will work out? i'm asking for a friend who's looking to avoid selling plasma to make ends meet.
> The following methods can be used to acquire residential IP addresses for a residential proxy network:
> Software development kit (SDK) partnerships: Proxy services convince mobile application developers to include their SDK in applications in exchange for payment for each person who downloads the application. Individuals download the application and accept the terms and conditions, allowing the SDKs to run in the background and route proxy traffic through users' devices.
> Virtual private network (VPNs) with hidden terms of service: Free VPN services may enroll users' devices in a residential proxy network, without obtaining their consent. The details are often hidden in the terms of service, which most users do not read prior to download, or the language is difficult for the user to understand.
> [malware and compromised IoT devices]
> Passive income schemes: Proxy services convince people to download applications on their device that promise to pay them for their internet bandwidth. People often do not realize that criminals use their internet connection to commit cyber attacks.
One reddit post says bandwidth sharing passive income schemes paid them $1 to $9 per month.
I used to know some Americans who were on the poorer end of the spectrum, and apps that paid you for performing fitness activity and such weren't uncommon in that demographic. Not as much of a thing in Europe for some reason.
I believe the cheap Chinese pirate TV boxes that are somewhat popular in the US these days are also in botnets, which is likely how the vendors make them so cheap.
Oh it's better than that now, if you can afford the up-front costs. You can set up a phone farm with cheap Google-certified devices, and the control software manages the Google accounts and botnet connection (through multiple residential proxies, of course). All of these attestation games are DOA.
I'm afraid it's far less enticing. The usual offer is "To continue playing, pay $0.99 or hit AGREE to share your internet connection with Legit Services Inc."
And that's assuming they're nice enough to ask at all.
I personally think it's easier to detect LLM-controlled browser sessions; the people deploying them are far more naive and inexperienced than traditional scrapers/crawlers.
insert You wouldn't bring a 40 Petabyte Zip Bomb to School, would you? meme
Part of the problem is also that Google wants to permit crawlers to do some things but not others.
Their announcement is full of buzzwords about "agentic" things. Detecting LLMs is one thing, but imagine the power of being able to pick which LLM browsers are permitted and which aren't!
I think Google is too early to the party with this. Cloudflare still has CAPTCHAs to throw at the wall. There are ways other than attestation to verify that someone is a real human, but they're getting more and more annoying to real users and harder and harder to implement on a small website.
Despite the massive implications, this is a simple system that just works for the 99% of people who use Chrome or Safari or at least have access to an Android phone or iPhone somewhere. It's quick, doesn't require installing apps or creating accounts, and it just works from both the website perspective and the user perspective.
Of course when you start thinking about people with disabilities things become problematic, but when have tech companies ever really cared about that sort of thing? Inclusiveness was fun and all for a while, but the clowns the American people elected banned that sort of thing for any company considering government contracts, and big tech licked that boot like it was made of honey.
The world becomes a lot easier if you just decide to ignore all edge cases and assume customers who disagree with you didn't matter anyway. And infuriating as it may be, for companies like Google, that business model works.
I mean, depending on the cost, Google is guaranteed to lose this battle, just like gaming anticheat: there are tools that parse the image on screen and send input as a USB device, so there is absolutely nothing to detect.
Doing that for a webpage seems way easier than for a videogame.
From "Don't be evil" to building the largest, most invasive, surveillance operation the world has ever seen.
That was true before this, but this indicates nothing will ever be enough. Google will always want to track more of everyone's activity online, and will use every tool at their disposal to do it.
It's not Google, it's someone. A person came up with this idea and is pushing it through. We should stop treating corporations as some abstract entity instead of a group of sick people making these kinds of decisions.
I think this is the third HN link I've clicked on in a row that leads to an LLM-generated article. I'm not opposed to AI, but I'm tired of seeing it quietly substituted for human thought and expression.
1. The high number of (em) dashes is suspect, though it's unclear whether they manually added the em dashes or the text is actually human-written.
2. "One additional failure worth noting: one incident response professional in the HN thread, raised a concern that operates independently of the bot problem" feels out of place for a content marketing piece. HN isn't popular enough to be invoked as a source, and referencing it as "the HN thread" seems even weirder, as if the author prompted "write a piece about how google cloud defense sucks, here are some sources: ..."
3. This passage is also suspect because it follows the chained-negation pattern, though it's only n=1:
>No hardware identifier is transmitted. No attestation is required. No certification layer determines who may participate.
edit:
I also noticed there are 2 other comments, now flagged/dead, expressing the same suspicion.
Look at the number of : per paragraph. What human puts two : in a single sentence?
"One additional failure worth noting: one incident response professional in the HN thread, raised a concern that operates independently of the bot problem: …"
The ersatz Ted Talk meets LinkedInfluencer rhythm of sentences, the throat clearing fillers as connective tissue…
The entire article is just one long stream of short, punchy, declarative sentences. The latest Claude models are notorious for writing like this.
There's also a few cookie-cutter patterns that should immediately jump out at you if you're at all familiar with AI writing, such as:
> No hardware identifier is transmitted. No attestation is required. No certification layer determines who may participate. User privacy is structurally preserved, not promised.
> Google Cloud Fraud Defense is not a reCAPTCHA update. The QR code is the visible mechanism, but device attestation is the real product.
It's really obvious. The repeated information. The very. short. sentences. The incessant detail. The tangents that go nowhere. And LLMs always try to structure the entire essay into topical sub-sections.
They can't tell. It has become a statistical thing: there will always be some percentage of readers who assume an item is AI-generated. With enough people seeing something, you'll see the accusation.
Whether it's AMP or Manifest V3 or Android source shenanigans or attempts to replace cookies with their FLoC nonsense or this... Google is rapidly turning into a malicious force when it comes to the open internet.
If RMS said not to trust Google's self-proclaimed altruism and relationship with open source, yeah. I always assumed that was a backstab waiting to happen. But that only meant I used an iPhone and didn't care that it was more closed than Android, not that I got an Arch Linux phone or something. (And a Mac more importantly, but there's not really a Google counterpart to that.)
My god AMP was such an annoying thing ~4-5 years ago when I was working in a marketing-forward web dev shop.
"Google really likes when you pipe your words into their shitty UI because it saves some time for the user"
We were all like, cool so on one hand we're being given complex designs for sites to differentiate them, and on the other hand we're bowing to a megacorp who actually wants to skip the whole web design part entirely and pipe our content through their pre-defined UI.
So glad it died. Should have known it would die in a matter of a couple of years with that being the track record for Google in general.
> skip the whole web design part entirely and pipe our content through their pre-defined UI
It's a shame this part didn't stick. I use reading mode every chance I get because the more design a page has, the worse it is. For some reason orgs agreed that it's ok to let Medium or Substack own their content, but hated Google's high-speed CDN.
Last time this happened we got a bunch of Google employees downplaying the impact of WEI and calling it a nothingburger, that people were being hysterical. I just checked, and everyone I saw defending it has since left the company. I'm sure another wave of Google managers, keen to appeal to the higher-ups, will be here to defend this new initiative any minute now.
It's not just Google. It's governments, corporations, all around the world, simultaneously. The noose is being tightened gradually, then all at once. And it's coming for all of us:
The threats above interlock by design or convergence:
Identity layer (1-5) creates the prerequisite for the others. Once identity is established at SIM/account/device level, the carve-outs that make surveillance politically viable become possible (powerful users get exemptions; ordinary users get watched).
Device layer (10-12, 16-19) creates the surveillance endpoint. Once content is scanned on the device before encryption, the cryptographic protections at the communications layer become irrelevant.
Communications layer (6-9) is the most-defended. Mass scanning has been defeated repeatedly. This is the layer where the resistance has the best track record.
Reporting layer (13-15) is nascent. Direct OS-to-government reporting hooks haven't been built yet at scale. The UK's December 2025 proposal is the leading edge.
Platform control (20-24) determines whether alternatives can exist. Browser diversity, app distribution diversity, and engine diversity are the structural protections. All three are narrowing.
A society with all five layers complete has the technical infrastructure for total surveillance with elite carve-outs. We are roughly 40% of the way there. Whether that infrastructure becomes a dystopia depends on political choices, not technical ones.
HN as a whole is surprisingly oblivious to the noose tightening, because many here are super against decentralized distributed things, if they involve any sort of token. You can complain all you want, but downvoting and burying the decentralized alternatives just for groupthink makes you somewhat complicit in the erosion of our privacy and liberties. Even if you might disagree with a project, all the work that goes into it might be a good reason to upvote it instead, considering that without this work, we're basically doomed.
Hell, even using cash feels like a minor form of dissent. And of course even if you leave your phone at home, your car will be scanned with ANPR wherever it goes. And if that fails, there's still your face to be tracked.
I said 16 years ago, when IPv6 was coming into use, that the only reason for a 128-bit address space was so they could tie every packet on the internet back to you as a person. https://news.ycombinator.com/item?id=1464940
It doesn't help that your first sentence makes you sound like a conspiracy theorist riding his hobby horse. I read on despite that, but others may not.
Google was creating cartels like the "Open Handset Alliance" literally decades ago.
Via their control of Chrome and Search which are both monopolies, Google holds absolute authority on how websites are rendered and if websites can be found.
It cracks me up when people say Chrome is a monopoly, because a massive number of computing devices do not even ship with Chrome. Windows computers, MacBooks, and iPhones require users to go search out and install Chrome of their own volition, shipping with entirely functional and decent browsers out of the box that the vendors have lots of patterns to push. Even many Android phones still ship with browsers other than Chrome as the default, from what I understand.
How is Chrome, of all things, a monopoly? Have words just entirely lost all meaning and now monopoly just means "things which are popular that I dislike"?
I'm amused at how thoroughly Google adopted Microsoft's playbook. Chrome supplanted Internet Explorer by embracing the open web. But then Google immediately moved on to extending it, and now they're trying to extinguish the open web with nonsense like Cloud Fraud Defense. All very smoothly done. I mean, people are actually _asking_ for this junk. I'm impressed.
No they didn't. Firefox unseated Internet Explorer. Chrome then got big by putting its installer right on the Google homepage and harassing users to install it. And they had it bundled with other software, and would install as a user so that locked down computers could still run it. They absolutely did not win by embracing open standards.
If I may tie this into other things going on, The California wealth tax as written would force Larry and Sergei, if they didn't move out of California, to basically sell almost their entire stake in Google, and it would probably wind up owned by State Street and Vanguard who outsource their proxy votes to ESG consultants, who will probably vote for more surveillance.
what alternative to WEI do you propose? it solves a bajillion internet-existential problems. it is definitely a crisis. the bot problem is at least as serious as facebook or gmail serving without https was.
the fact that this kind of comment gets downvoted proves my point. so what if you personally don't like WEI? it doesn't mean the problems aren't real...
that aside, i don't know how people say stuff like "malicious force" and then go and use a bajillion Google-authored, completely free-as-in-beer and often free-as-in-freedom technologies that nobody obligates you to use at all. It's not like Apple, where their software is so shitty (Messages, Apple Photos, etc.) that the only reason people use it is that it's locked down and forced upon you. it's interesting to me that @dang worries about the tenor of conversation changing - he longs for that 2009 world of university-level math people hanging out and writing comments about LISP or whatever - when the real deficit is not intelligence about math but, at the very least, the ability to see that things are nuanced, to see more sides to a problem besides the most emotionally powerful and the most mathematically neutral ones.
Bombing every AI data center on Earth would also solve the Internet-existential problems we're facing. But that solution is beyond the pale of course, instead it's incumbent on me to prove to you that panopticon surveillance of every living human being from now until the Sun consumes us is not a reasonable solution to "bots use the Internet".
People use iMessage because it has worked for a long time, during which all the leading alternatives were terrible. Maybe they still are, because I'm still not convinced that RCS even works reliably, seeing how Android users go on WhatsApp instead.
As much as I hate whatever google's doing, this article has some issues:
>For operations that need Play Integrity attestation specifically, a compliant Android device costs approximately $30 at current market prices
This assumes the logic on google's side is something like `if(attestationResult == "success") allow()`, but it's not hard to imagine the device type being factored into some sort of fraud score. For instance, expensive devices might have a lower fraud score than cheaper devices, to deter buying a bunch of cheap devices. They might also analyze the device mix for a given site, so if thousands of Chinese phones suddenly start signing up for Anne's Muffin Shop, those will get a higher fraud score.
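To make that concrete, here's a purely hypothetical scoring sketch. None of these signals, weights, or thresholds are known to be what Google actually uses; they just illustrate why attestation success alone needn't mean "allow":

```python
# Hypothetical fraud-score sketch: attestation passing is necessary but not
# sufficient; device tier and per-site device-mix anomalies also feed a score.
# All signals and thresholds here are illustrative, not Google's actual logic.
from dataclasses import dataclass

@dataclass
class Attempt:
    attestation_ok: bool
    device_price_usd: float    # cheap burner phones assumed to score worse
    device_model_share: float  # fraction of this site's recent traffic on this model

def fraud_score(a: Attempt) -> float:
    """Return a risk score in [0, 1]; higher means more likely fraud."""
    if not a.attestation_ok:
        return 1.0
    score = 0.0
    if a.device_price_usd < 50:       # assumed "burner" price threshold
        score += 0.4
    if a.device_model_share > 0.3:    # one model suddenly dominating a site
        score += 0.5
    return min(score, 1.0)

# A $30 phone model making up half of Anne's Muffin Shop's signups looks bad:
print(fraud_score(Attempt(True, 30, 0.5)))   # 0.9
```

The point of the sketch is that `if(attestationResult == "success") allow()` is the weakest possible policy; any real system can layer arbitrary signals on top of the attestation verdict.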
>Firefox for Android does not appear in Google’s stated browser support list for Fraud Defense.
The browser only needs to show a QR code, so if you're on firefox mobile they'll either open a deeplink to google play services on the phone itself, or show a qr code.
>One human solving a single challenge pays a negligible cost. A bot farm running concurrent sessions faces exponential compute costs with each additional attempt - and AI agents, which consume GPU cycles to operate, face identical penalties regardless of how sophisticated their reasoning is.
PoW for bot protection basically never caught on because JavaScript performance is poor and human time is worth more than a computer's time. An attacker doesn't care if some server has to wait 10s to solve a PoW challenge, but a human would. An 8-core server costs 10 cents per hour on Hetzner. Even if you assume everyone has an 8-core desktop-class CPU at their disposal (i.e. no mobile devices), a 6-minute challenge would cost an attacker a penny. On the other hand, how much do you think the average person values 6 minutes of their time?
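Writing that arithmetic out (all figures are the assumptions from this comment, not measurements):

```python
# Rough PoW-CAPTCHA economics. Assumed figures from the comment above:
# an 8-core server at ~$0.10/hour vs. a human's time valued at ~$10/hour.
SERVER_COST_PER_HOUR = 0.10   # USD, assumed cloud price
HUMAN_VALUE_PER_HOUR = 10.0   # USD, conservative value of a person's time
CHALLENGE_MINUTES = 6         # assumed PoW difficulty

attacker_cost = SERVER_COST_PER_HOUR * CHALLENGE_MINUTES / 60
human_cost = HUMAN_VALUE_PER_HOUR * CHALLENGE_MINUTES / 60

print(f"attacker: ~${attacker_cost:.2f} per solve")   # ~$0.01
print(f"human:    ~${human_cost:.2f} of time lost")   # ~$1.00
```

The asymmetry is the whole problem: the same difficulty setting costs the attacker a penny of compute and the legitimate user a hundred times that in wasted time.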
I really tried to switch off Chrome when they broke ad blockers, I gave it a good few months trying out alternatives but I really don't like any of the other browsers. I do primarily use Safari on my Mac, but on Windows where I don't have that option, I don't like any of the big players, and I don't really trust the smaller players. Even the "big" smaller players are not that trustworthy when it comes to security, like Arc browser's "Boosts" feature that enabled remote code execution.
This is truly disturbing, and trying to sneak it in like this without public discussion is disingenuous. Hopefully it will be shot down like last time - at the very least, there are surely antitrust issues here.
Last time they tried this, they laundered it through an employee's personal GitHub to distance it from Google itself, then framed the proposal in the most disingenuous manner possible, as if it was something that users wanted rather than another mechanism for Google to exercise control.
Maybe a dumb question, but how is this supposed to work for iPhone users? They won't have Google Play, and it seems like Android/Google Play is required here? There is no way they would cut out such a huge chunk of the market.
hacker news when discovering that apple deployed WEI, for ages, with beloved IT company Cloudflare, affecting hundreds of millions of users: "aww, you're sweet"
hacker news when reading that google is doing the same thing for the rest of the userbase: "hello, human resources?"
The claim is that an iPad/iPhone will also work. Not that that makes it acceptable; if anything, it's worse, because if it were Google Play only it'd be more obvious how unacceptable it is, whereas catering to the duopoly makes it less obvious how much it excludes people and builds a reliance on proprietary systems.
One company can soon dictate who can enter the websites.
And only two commercial operating systems are viable in the world after this change.
Not nice.
I believe the latest versions of iOS just work from the browser, you only need to install the app for older versions of the OS.
I don't know what technology they're using, but when I scanned the QR code it launched (downloaded?) an iOS app of sorts with one tap, similar to the way Google tried Instant Apps a few years back. Didn't even need to double tap the power button like usual.
For example:
> Bot operators point a camera at a screen, a trivial automation with off-the-shelf hardware. For operations that need Play Integrity attestation specifically, a compliant Android device costs approximately $30 at current market prices
A bot farm cannot bypass this for long with a $30 phone. Do you seriously think that if Google sees the same hardware identifier 1000s of times a day they are not going to consider that usage to be fraud?
I appreciate that Google's made a real proposal to avoid the web becoming bottomless AI slop. This article hasn't come with a better alternative - I'd love to see one!
> Do you seriously think that if Google sees the same hardware identifier 1000s of times a day they are not going to consider that usage to be fraud?
Phones are very cheap, especially refurbished phones. Just have the phones mimic real life sleep/wake cycles and take occasional breaks. Use 25% more devices to account for the loss in uptime.
Besides, some people (often unemployed or disabled, and possibly with sleep disorders or mania) actually don’t do anything other than scroll on their phone all day and night. So you can’t rely on this as a good signal without creating even more blowback. And you really don’t want too much blowback from troubled people who have infinite free time.
This still doesn't seem very economical for the bot farm. For a device to look legit it has to only use its hardware identifier about as often as a real human would. This massively changes the economics. If you have 1 bot farm customer that wants 20,000 solves in a day, the bot farm would need something like 20000/200=100 phones to provide this. (assuming a real user can do about 200 solves before being flagged).
And the cost for the bot farm being detected is very high because if a phone's root key loses trust it destroys the value of the ~$30 phone they purchased. And of course, I'm sure Google can use the phone's value as another signal for trustworthiness, treating cheaper phones many generations behind as less trusted.
I don't think bot farms will go away completely, but the price will spike massively, which is all you need to discourage many types of abuse. Some Googling show that reCAPTCHA solves are about $0.003 each right now, so quite cheap. With this new reCAPTCHA, I suspect the price will jump massively.
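Sketching that back-of-envelope math (every number here is an assumption pulled from this thread, not data):

```python
# Phone-farm capex vs. classic CAPTCHA-solver pricing.
# Every figure is an assumption lifted from the thread, not measured data.
SOLVES_NEEDED_PER_DAY = 20_000
SOLVES_PER_PHONE_PER_DAY = 200   # assumed ceiling before a device looks suspicious
IDLE_OVERHEAD = 1.25             # +25% devices to mimic human sleep/wake cycles
PHONE_COST = 30.0                # USD, the "$30 compliant Android device" figure
RECAPTCHA_SOLVE_PRICE = 0.003    # USD per solve, going rate quoted above

phones = SOLVES_NEEDED_PER_DAY / SOLVES_PER_PHONE_PER_DAY * IDLE_OVERHEAD
capex = phones * PHONE_COST
classic_revenue_per_day = SOLVES_NEEDED_PER_DAY * RECAPTCHA_SOLVE_PRICE

print(f"{phones:.0f} phones, ~${capex:,.0f} up front")                     # 125 phones, ~$3,750
print(f"same volume at classic solver rates: ${classic_revenue_per_day:.0f}/day")  # $60/day
```

Under these assumptions the farm needs weeks just to recoup hardware at today's solve prices, before counting proxies, SIMs, and replacement of burned devices, which is why the per-solve price would have to jump.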
It is particularly funny because this is content marketing for a computational proof-of-work "captcha". Those are pure snake oil, with economics that are probably at least four orders of magnitude more favorable to the abusers than this attestation would be.
I'm pretty sure that the AI copied the $30 number from my Hacker News comments. However, in the USA it is true. https://www.walmart.com/ip/Straight-Talk-Motorola-Moto-g-202... (carrier locks don't matter for this use case.)
I am not sure that that storing unique device identifiers is legal in the EU.
I remembered $30 from some comment I read, but didn't look for it later. If it was yours, thank you! (And definitely thank you for the Walmart link! Would you like a credit in the blog post, like a quote?)
inb4 someone productionizes this (the dependency of cloud phones exists & captcha solvers proved demand) && makes it a cloud service && we are back to square one.
> A bot farm cannot bypass for long with a $30 phone.
That's exactly what they are doing already, and it's not $30/device but something like <$5/device. Remember they can buy the worst of the worst of the used market.
Betting on device attestation is really betting that smartphones will become less ubiquitous and more expensive to own. Sounds like it's not going to happen to me.
I think I understand why Google wants to do this, and I think I understand why people are opposed to this particular solution.
It’s also worth noting that the author of this article is selling a proof of work solution to the problem.
I am fairly skeptical that proof of work is the right way to go here. A lot of users of the web are using older hardware. Adding a computational toll booth doesn't solve the problem in a world where people have differing amounts of compute to spend.
On the other hand, a botnet might have access to thousands of computers and may not actually care about waiting an extra 10 seconds. Or worse, they will come up with a custom solution on an ASIC that solves your proof-of-work puzzle thousands of times faster than grandma's laptop.
Given all the negative comments here - what is anyone's alternate solution for AI-driven fraudulent activity?
CAPTCHAs are increasingly ineffective. Services are either going to go offline or implement some kind of system like this. PII like credit cards or SSNs aren't enough because those are regularly stolen.
So where do things go? Fewer services and infinite fraud?
It will be fewer accessible services for everyone who refuses to use this, that's for sure. In general though, service providers are not going to accept "fewer services and infinite fraud" and thus they will look into implementing this.
This doesn’t even solve the problem thanks to device farms. There’s not really a solution for this short of aiming a camera at someone’s retina 24/7 plus a fully locked down hardware path. And even that would surely be compromised given enough incentives.
People are just going to have to find a new way to monetize. Maybe more things will become paywalled, or sponsored long-term like old TV shows. Again, there’s no good way to solve this, and the “solutions” on offer just contribute to the surveillance state without solving the problem.
I don't know which activity you're referring to, but why are you trying to discriminate between humans and bots? Because bots don't pay? So demand payment, e.g. per account creation, then set appropriate rate limits per account.
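A bare-bones sketch of the "paid account plus rate limit" idea, as a per-account token bucket (all parameters are purely illustrative):

```python
# Minimal per-account token bucket: once an account costs money, throttling
# each account to a roughly human pace makes bot scaling expensive.
# The rate and burst values are arbitrary illustrations, not recommendations.
import time

class TokenBucket:
    def __init__(self, rate_per_sec: float, burst: int):
        self.rate = rate_per_sec        # steady-state allowed requests/sec
        self.capacity = burst           # short bursts a real human might produce
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; tokens refill continuously."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=1.0, burst=5)   # ~human browsing pace
results = [bucket.allow() for _ in range(10)]     # 10 back-to-back requests
print(results.count(True))                        # the 5-request burst passes, the rest are throttled
```

A paid account behind a limiter like this can still be abused, but only at the same rate a paying human could, which changes the economics of account farming.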
CAPTCHA is sort of a flawed concept in the first place: a machine testing whether another agent is a machine. But I figure the future of this is to give the test, discard the answer, and find the truth in how it was answered - behavioral analysis, seeing whether the access patterns are human- or machine-like. A simple version of this is how fast they type, or the speed at which items are clicked. A surveillance process that really creeps me out. I am undecided whether it creeps me out more or less than fully automated agents spewing shit over the open web.
As a footnote, I found Google's reCAPTCHA bitterly ironic. It was painted in bright colors - "this data assists in book scanning" or "this helps our self-driving cars recognize stop signs" - but it was really designed to train models to do exactly what it's trying to prevent them from doing, making life hell for the humans along the way. The modern single-click version is doing behavioral analysis.
What Google has done is incredibly clunky and only serves its own interests.
We already have methods to prove that we're human.
1. lots of laptops have fingerprint readers & TPM2 built in
2. lots of folks own Yubikeys or FIDO2 keys - if these became the norm then the price would come down significantly.
Both of these methods only require a tap to authenticate to a website.
Both provide public-key authentication, and both provide some level of proof of work / require human interaction, without revealing the identity of the end-user.
Why not use or standardise these? Because there's no benefit to Google, of course.
Those don't prove that a human is present. A FIDO2 key can be automated by electronic relay. The only way to do this involves device attestation - locking devices down and utilizing hardcoded TPM/Secure Enclave esque chips. The best we can hope for would be an open standard for those chips so that people can use them with their own X.509 certificates that lets them choose their own CA.
Do we know if this is immediately going to slot in wherever reCAPTCHA is currently used / is there a rollout plan? Or will site operators manually opt into the new system? Is there even a way to opt out?
I can think of many sites where, for users that trigger captchas often, introducing a multi-device workflow is even worse for those users than clicking traffic light images. An automatic rollout would be hostile to those operators!
This seems to be an advertisement for Private Captcha. I don't know a lot about the service, but it seems inherently ableist. Does proof of work support blind users? Does it support users with special needs or cognitive impairments? The QR code and photo flow support a wide variety of users. Why not support a variety of methods? Why does it need to be one or the other?
Yeah, same. It is hard; we start to need a collective boycott.
We can all do our part by using their products as little as possible, contributing to open alternatives (OpenStreetMap, Fediverse, Linux, Nextcloud...), and by encouraging our (non-techie!) friends and family to do the same.
IMO the biggest issue is that some non-tech people will occasionally be straight-up hostile and will whine about not having "features", but then again it only takes a small number of people taking action to effect real change. Also, medium-term, we need to start making phones (smart OR dumb) that are as FOSS as possible.
> Linux
Open/FreeBSD too, we need to have more redundancy.
But remember: once again, don't simply get angry at Google the institution. Get angry at Page and Brin personally. They have the power to prevent this, a power they were careful to preserve when they gave Google its IPO. They are fully responsible for Google's choices here. But, partly because they aren't constantly jumping up and down drawing attention to themselves on social media, they've tended to escape the same personal scrutiny given to eg. Elon Musk. That needs to end.
The problem is this type of controlling move, that will be used to benefit their company, is one among many things a company like Google can do that is unethical. They won’t stop. They are too powerful and can get away with it repeatedly. Even if this one thing is stopped, there will always be another dark pattern or another privacy violation or another anti-competitive thing.
We really need brand new legislation that makes it much easier to break up companies that are too big, and also to tax mega corporations at a much higher rate than all other companies. Then we can have fair competition and the power of choice. But the existing laws end up with no real consequence for these companies, and even if there’s some slap on the wrist, it takes years in court. New laws must make it very fast and low cost for society to take action.
Are you genuinely asking? To pay your taxes, order items online, access your bank account, log into your favorite AI service, there are very often CAPTCHAs involved. Try going a month with CAPTCHAs blocked in uBlock Origin, and you will find yourself unable to do many basic things.
Not saying this is any better, but the IRS partnered with id.me to enforce ID + face recognition before you can log in to view your records. We are truly doomed.
Even besides services you might need to access, as pointed out in another response (e.g., banks, shops), how are you going to check the veracity and understand the context of the information you seek without going to the (possibly hallucinated!) sources? But I guess a lot of people who are into using AI like that just don't care.
We see the fundamental forces of capitalism at work: To justify valuation, Google needs to grow. When they feel a ceiling, they broaden their search to anything legal that makes customers pay - even if it contradicts their long-term interests. This has created countless attack angles for startups.
The good news: we already have a solution! Monopoly laws. In case of the internet, no company should be able to have this much power.
The bad news: the US decided to weaponize big tech's leverage over the world and no longer enforces the laws that fix vanilla capitalism.
>We see the fundamental forces of capitalism at work: To justify valuation, Google needs to grow.
You’re confusing markets with capitalism.
Market Socialism (the only reasonable kind) would have these same issues. If Google was owned by the workers instead of capitalists, it would still have incentive to grow. The worker owners would have the exact same incentives as current owners. The only difference would be who the owners are.
Capitalism is not actually “the final boss” that internet leftists make it out to be. Socialism is not the panacea that leftists make it out to be. Surveillance is not a “capitalist only” thing.
I agree, thanks for the clarification. I did not want to argue in favor of Socialism - my criticism here is that "free market correction instruments" like antitrust, monopoly law, etc. are absent.
This API also works on the desktop. In fact, you can't use this system without a phone if your browser isn't Google enough.
We are going to see sooooo many scams out there. No wonder Google is locking down third party Android apps outside of their control, getting a user to install "device verification.apk" will become super trivial after people have clicked through these popups a couple times.
It is, just like a calculator is a small computer. It's not a personal computing device though, in the sense that the user can't develop and deploy their own software/tools on it.
No one should ever browse the web from an ESP32 either. Like seriously the dark patterns are bad enough from a desktop where you've actually got the screen real estate to see the whole page, have other sites open for comparison, have a keyboard to type your own notes, etc. Most browsing can simply wait, especially the adversarial-commercial type we're talking about here.
A device I have no choice in owning because modern employers assume you have something to install an authenticator app on. That's what it is for me. Also, sadly, it's an anchor for Signal. Otherwise I don't use the stupid thing.
and it seems Google wants to support people like you!
That entire QR barcode thing is so that you can browse the web on your laptop/desktop, and _still_ rely on smart phone's attestation, no mobile browser needed.
We do need to abandon the reality where we use the same few companies on a daily basis and get back to what's now hidden under the surface: forums, blogs, personal websites. We need to re-discover the "free" internet we used to have before Facebook and smartphone dystopia happened.
I posted a comment on the announcement when it was posted here:
>As someone who is working in incident response and malware analysis I have to say that is one of the worst ideas I have ever seen.
>A lot of companies have issues with ClickFix [1] and other social engineering campaigns and now Google wants to teach users that they should scan QR codes to proceed on a website.
>How should we realistically teach Susan from HR the difference between a real Google Captcha QR code and a malicious phishing QR code - you (realistically) can't. I wish we could - but those people don't work in tech, they will never know and I can't really blame them because at the end of the day they are just happy that they don't have to deal with tech after work.
>We have spent years of behavioural conditioning to prevent QR-code based phishing attacks (some people call it Quishing but I hate that term) and since the QR code is being scanned from a mobile device (99.99% of the time the private device), we have no EDR visibility on those devices and can't track what's happening if people scan it.
>This is more of an invitation for threat actors than it is something that holds them back.
I think the idea is good if it could actually curb bot traffic that currently plagues the Internet.
However, a lot of recent bot traffic comes from sophisticated scrapers called "LLMs." You can tell Claude to "research X from www.example.com" and it will automatically scrape and summarize it, something an LLM is perfect for. Gemini tends to share links instead, presumably because most of Google's revenue comes from ads served on those websites, so completely killing their traffic would just make Google less money. Incidentally, I wonder whether Claude/Gemini use a search-engine-like "index" of all websites or refuse to cache anything so they always fetch "fresh" data.
If this is employed, I don't think the web is only going to be gatekept to Google devices. I think it will also be gatekept to Google's AIs.
Google would be able to display a captcha that no LLM could defeat, and then just let its own LLM pass through.
The same could be said about its other bots, such as the web crawler. Google's bot could crawl webpages that no other crawler would ever be able to, simply because it has a free pass through captcha-gated GETs. Although the same could be true already today.
Their product page is full of info about how this works with "agentic" cruft. They're still permitting your regular old scrapers and bots for as long as they like you. Hope you're not thinking of running an independent system instead of a large cloud platform!
> The defeat is mechanical. Bot operators point a camera at a screen, a trivial automation with off-the-shelf hardware. For operations that need Play Integrity attestation specifically, a compliant Android device costs approximately $30 ($29.88 at Walmart to be precise) - for a professional bot farm, which purchases devices in bulk, this is a fixed cost without material disruption to operations.
That's $30 per account, not one time. Because of the following:
> Device attestation does not just gate access - it produces attribution. A device with a stable hardware identity creates a persistent identifier that crosses sessions, browsers, and private browsing modes.
If you put all your bot accounts on one device, they all get banned at once. So fraudsters have to spread their accounts across multiple devices and replace them when they inevitably get banned. That's the reason for all the spying, attestation, and lockdown bullshit behind Google Cloud Fraud Defense. It is far easier to ban fraudsters if you just let the Maoists run the Risk Department.
The author proposes an alternative solution: proof-of-work. And, yes, there are use cases for that, such as Anubis. Google might even want to consider a proof-of-work option in certain scenarios. But there is no scenario in which someone's phone deliberately burns $30 worth of compute - perhaps a quarter of the user's battery - and the user still has a good onboarding experience. Most of your actual users are not going to be able to burn compute as efficiently as fraudsters, either - so maybe you have to burn the whole battery on a phone to cost a fraudster $30. Proof-of-work is, strictly speaking, anti-egalitarian and anti-democratic. "One CPU, One Vote" is less useful than you think when you realize fraudsters have the money to just buy lots of CPUs to always win[0].
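For concreteness, the Anubis-style proof-of-work the author mentions is essentially hashcash: the server hands the client a challenge, the client burns CPU searching for a nonce, and the server verifies with a single hash. A minimal sketch (function names and the challenge format here are illustrative, not any real API):

```python
import hashlib
import itertools

def solve_pow(challenge: str, difficulty_bits: int) -> int:
    """Client side: search for a nonce whose SHA-256 digest, combined
    with the challenge, falls below the target (hashcash-style).
    Expected cost: ~2**difficulty_bits hash evaluations."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_pow(challenge: str, nonce: int, difficulty_bits: int) -> bool:
    """Server side: verification costs one hash, regardless of difficulty."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

nonce = solve_pow("session-abc123", 16)  # ~65k hashes on average
assert verify_pow("session-abc123", nonce, 16)
```

The asymmetry the comment describes is visible in the structure: solving scales exponentially with difficulty while verifying stays constant, and a fraudster with racks of efficient hardware pays far less per solution than a user's phone battery does.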
Every Risk Department eventually reinvents arbitrary and capricious punishment. When you have no legal authority to prosecute crime, you rely entirely upon your freedom of association and ban people with a hair trigger. It's the only thing that works. Personally, I'd rather live in the world where governments actually took fraud seriously and corporations didn't have to do this, but for right now, GCFD is at least less onerous than WEI in the sense that WEI was going to lock down all browsers. GCFD just means I have to keep a Google-approved phone around to scan a QR code every once in a while.
[0] I'm not mentioning the massive waste problem proof-of-work creates, because obviously attestation will also produce waste. Actually, if anything, the fraudsters will probably wind up dumping all their banned devices on the used market and ruin it.
The military industrial complex created the internet, and has funded many of the big players in Silicon Valley. Their goal was never an open and free internet.
You don't think that some people simply disagree with the idea that this is bad? Or like maybe the CAPTCHA company who put out the post has an agenda here? So you want to go after engineers personally?
I wonder what you've done that might warrant harassment?
Look at how complicated CAPTCHAs are getting to try to be unsolvable with AI - it's a losing game. This and the WEI proposal are trying to solve a very, very real problem. If you continue to deny the problem, or every proposal solution without working towards an acceptable one, people will route around the blockage.
The crux of the problem is that their solution involves making themselves the gatekeepers of who is and isn't allowed. And that's a power that no one unaccountable organization should wield.
Given how important internet is to modern society, letting any one entity decide who should and should not have access is nearing a human rights issue.
> You don't think that some people simply disagree with the idea that this is bad?
Where are they? Where? Can you point me to one person in this thread who "disagrees with the idea that this is bad"? Apparently even you don't go that far.
But it's so easily beatable! This might be the result of good intentions (being incredibly generous), but as the article states, any bot can afford a $30 phone and the concomitant hardware as the cost of doing business and bypass this.
Also as the article states (referencing an HN comment):
> How should we realistically teach Susan from HR the difference between a real Google Captcha QR code and a malicious phishing QR code - you (realistically) can’t.
Susan from HR is the least of it. This is a huge vector to increase fraud, not decrease it.
How would an ethical, competent engineer argue against this?
The CAPTCHA company who put this out might have an agenda, but also since they're in the industry they might also have knowledge to impart.
We're reaching an inflection point with the oligarchies where the old ideas of "writing a blistering editorial" or "calling your congress-critter" need to be seriously questioned as useful and other non-violent methods of recapturing digital freedom need to be entertained.
I see this comment was flagged, I have vouched for it.
It's making a valid point.
I wondered why people are reading "I wonder what you've done that might warrant harassment?" as some kind of personal threat or incitement to harassment, but I read it as precisely the opposite.
It's an entirely valid point that many of us have worked at jobs on products that did something that somebody disagreed with, and we shouldn't be asking anybody to harass us personally for it, because that is wrong.
GP is asking to "aggressively name and shame" engineers. It's entirely valid to say that you wouldn't much like that if it happened to you.
This case is trivially circumvented with device farms, much like described in the post.
What real problem are they trying to solve? AI bots reading content? That's not something Google wants to prevent; it's part of their business model. This would allow them to easily circumvent it for themselves, though.
The usual argumentation is "I need to make a living" and "if I didn't build it someone else would have done an even worse job, like this at least I could be an activist on the inside and guide the efforts to make it better".
Another method is to stall and sabotage the development via endless bike shedding, language changes, rewrites, refactors. All normal things in every project. Drag those feet.
These are private actors. It's not acceptable to harass people for building things that are lawful but that you don't like.
If you don't like this functionality, participate in democracy and work with your representatives to make it unlawful. But be prepared to humbly lose if the majority disagrees with you.
You're not, however, entitled to a "heckler's veto."
Nobody is asking for harassment. Social ostracism is usually enough: nobody wanting to date you, be your friend, invite you to parties, etc. That is the normal treatment for people who behave badly.
I think the better alternative to making engineers "feel uncomfortable opening their door, walking down the street" is for us to collectively ask if the solution isn't to touch more grass and rely less on the technology we've all come to blindly accept as required.
I mean, I hate this QR code shit as much as anyone, but c'mon, we can and should be better - both in how we treat others, and how much we rely on this shit.
Water use and mass displacement of labor get all the attention but there are so many other more subtle reasons like this that AI is going to be bad for society.
I disagree that this kind of scheme is inevitable. We can "evit" it through thoughtful discussion, foresight, alternative mitigations, and even regulation. Certainly, Google can choose to avoid it. On the other hand, the AI bubble will inevitably burst, since compute is not free. I look forward to post-bubble AI.
> We can "evit" it through thoughtful discussion, foresight, alternative mitigations, and even regulation
Such as? I don't see how regulation would apply here without concrete technical solutions that enforce it. So what alternative mitigations do you have in mind?
I saw this coming from miles away. Computers are better at solving CAPTCHAs than people are and people can be bribed or convinced to join botnets so IP whitelisting doesn't work either. Now we have tons of fingerprinting and behaviour analysis but governments are cracking down on that. Plus, YouTube had a massive ad fraud problem with ads being played back in the background in embedded videos, so their detection clearly wasn't good enough.
There aren't many good ways to prove you're not a bot and there are even fewer that don't involve things like ID verification.
Their opt-in approach helps shift the blame to individual web stores for a while, so who knows if this will take off. But either way, in the long term, the open, human internet is either going away or getting locked behind proofs of attestation like this.
Apple built remote attestation into Safari years ago together with Cloudflare and Google is now going one step further, as Apple's approach doesn't work well against bots that can drive browsers rather than scripted automation tools.
Luckily, their current approach can be worked around because it's only targeting things like stores now and you can buy things from other stores. Once stores find out that click farms have hundreds of phones just tapping at remotely served content, uptake will probably be limited.
It'll be a few years before this is everywhere, but unless AI suddenly isn't widely available anymore, it's going to be inevitable.
> saw this coming from miles away. Computers are better at solving CAPTCHAs than people are
good point... it's interesting how Captcha was initially popularized as a reverse Turing test, but it's just variants of Proof of Work today.
And it seemed clever at the time for Google to leverage this for improvement of their OCR models (it was!), and makes you wonder what utility is derived from the proven "work" today.
CAPTCHAs were designed as a type of Turing Test, not a reverse Turing Test. It’s not surprising that the effectiveness of these weaker variants has collapsed, given that AI can now pass the real Turing Test.
With the crosswalk, bike, motorcycle, stairs type of things, wasn't that just improving their training data?
> people can be bribed or convinced to join botnets so IP whitelisting doesn't work either
Do you think this won’t also be bypassed, by bribing people to scan QR codes and spoofing location etc.?
The person who scanned the QR code is knowable. They have their IMEI encoded in the response.
> people can be bribed or convinced to join botnets so IP whitelisting doesn't work either
what does that bribe look like, as in, how much can one get? what all does that entail? is that a little box i connect to my network and forget about? does that mean if i unplug it unless another payment is received that will work out? i'm asking for a friend that's looking to avoid selling plasma to make ends meet.
https://www.fbi.gov/investigate/cyber/alerts/2026/evading-re...
> The following methods can be used to acquire residential IP addresses for a residential proxy network:
> Software development kit (SDK) partnerships: Proxy services convince mobile application developers to include their SDK in applications in exchange for payment for each person who downloads the application. Individuals download the application and accept the terms and conditions, allowing the SDKs to run in the background and route proxy traffic through users' devices.
> Virtual private network (VPNs) with hidden terms of service: Free VPN services may enroll users' devices in a residential proxy network, without obtaining their consent. The details are often hidden in the terms of service, which most users do not read prior to download, or the language is difficult for the user to understand.
> [malware and compromised IoT devices]
> Passive income schemes: Proxy services convince people to download applications on their device that promise to pay them for their internet bandwidth. People often do not realize that criminals use their internet connection to commit cyber attacks
One reddit post says bandwidth sharing passive income schemes paid them $1 to $9 per month.
I used to know some Americans who were on the poorer end of the spectrum, and apps that paid you for performing fitness activity and such weren't uncommon in that demographic. Not as much of a thing in Europe for some reason.
I believe the cheap Chinese pirate TV boxes that are somewhat popular in the US these days are also in botnets, which is likely how the vendors make them so cheap.
Oh it's better than that now, if you can afford the up-front costs. You can set up a phone farm with cheap Google-certified devices, and the control software manages the Google accounts and botnet connection (through multiple residential proxies, of course). All of these attestation games are DOA.
I'm afraid it's far less enticing. The usual offer is "To continue playing, pay $0.99 or hit AGREE to share your internet connection with Legit Services Inc."
And that's assuming they're nice enough to ask at all.
I'm pretty sure it's one of the revenue models for those free tv/movie boxes. You can even see them at best buy. Absurd.
I personally think it's easier to detect LLM-controlled browser sessions; the people deploying them are far more naive and inexperienced than traditional scraper/crawler operators.
insert You wouldn't bring a 40 Petabyte Zip Bomb to School, would you? meme
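The "zip bomb" joke above points at a real defense: a server can store a few kilobytes that a naive crawler decompresses into megabytes or gigabytes. A toy illustration of the asymmetry (the 10 MB figure is just an example):

```python
import gzip

# Ten megabytes of zeros compresses to roughly ten kilobytes,
# since DEFLATE tops out near a 1030:1 ratio on constant input.
payload = bytes(10 * 1024 * 1024)
compressed = gzip.compress(payload, compresslevel=9)
ratio = len(payload) / len(compressed)
print(f"{len(compressed)} bytes on the wire, ~{ratio:.0f}x expansion")
```

Served with `Content-Encoding: gzip`, the cost lands entirely on the client that insists on inflating the response; a well-behaved browser never fetches the trap URL, since it's kept out of any real navigation.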
Part of the problem is also that Google wants to permit crawlers to do some things but not others.
Their announcement is full of buzzwords about "agentic" things. Detecting LLMs is one thing, but imagine the power of being able to pick which LLM browsers are permitted and which aren't!
I think Google is too early to the party with this. Cloudflare still has CAPTCHAs to throw at the wall. There are ways other than attestation to verify that someone is a real human, but they're getting more and more annoying to real users and harder and harder to implement on a small website.
Despite the massive implications, this is a simple system that just works for the 99% of people who use Chrome or Safari or at least have access to an Android phone or iPhone somewhere. It's quick, doesn't require installing apps or creating accounts, and it just works from both the website perspective and the user perspective.
Of course when you start thinking about people with disabilities things become problematic, but when have tech companies ever really cared about that sort of thing? Inclusiveness was fun and all for a while, but the clowns the American people elected banned that sort of thing for any company considering government contracts, and big tech licked that boot like it was made of honey.
The world becomes a lot easier if you just decide to ignore all edge cases and assume customers who disagree with you didn't matter anyway. And infuriating as it may be, for companies like Google, that business model works.
I mean depending on the cost, Google is guaranteed to lose the battle, like gaming anticheat: there are tools that do parsing of the image on screen and send input as a usb device, there is absolutely nothing to detect.
Doing that for a webpage seems way easier than for a videogame.
From "Don't be evil" to building the largest, most invasive, surveillance operation the world has ever seen.
That was true before this, but this indicates nothing will ever be enough. Google will always want to track more of everyone's activity online, and will use every tool at their disposal to do it.
> Google
It's not Google, it's someone. A person came up with this idea and is pushing it through. We should stop treating corporations as some abstract entity instead of a group of sick people making these kinds of decisions.
I think this is the third HN link I've clicked on in a row that leads to an LLM-generated article. I'm not opposed to AI, but I'm tired of seeing it quietly substituted for human thought and expression.
I'm seeing this stance a lot "this is obviously AI generated"
Why? What's LLM generated? How can you tell?
To me what's obvious is that our trust system is already breaking down. Commenters accusing each other of being AIs is also another example of this.
>Why? What's LLM generated? How can you tell?
Not the guy you're responding to, but:
1. The high number of em dashes is suspect, though it's unclear whether they were inserted manually or the text is actually human-written.
2. "One additional failure worth noting: one incident response professional in the HN thread, raised a concern that operates independently of the bot problem" feels out of place for a content marketing piece. HN isn't popular enough to be invoked as a source, and referencing it as "the HN thread" seems even weirder, as if the author prompted "write a piece about how google cloud defense sucks, here are some sources: ..."
3. This passage is also suspect because it follows the chained negation pattern, though it's n=1
>No hardware identifier is transmitted. No attestation is required. No certification layer determines who may participate.
edit:
I also noticed there are 2 other comments that are flagged/dead expressing their reasons.
The choppy language is the biggest trigger for me. Examples:
* "With Fraud Defense, there was no process to respond to. The product launched. The requirements page went live."
* "That is not a technical limitation waiting to be engineered around. It is the mechanism."
* "The defeat is mechanical. Bot operators point a camera at a screen, a trivial automation with off-the-shelf hardware."
I could be wrong, of course. Maybe humans are starting to write like LLMs, or maybe it's just confirmation bias on my part.
Look at the number of : per paragraph. What human puts two : in a single sentence?
"One additional failure worth noting: one incident response professional in the HN thread, raised a concern that operates independently of the bot problem: …"
The ersatz Ted Talk meets LinkedInfluencer rhythm of sentences, the throat clearing fillers as connective tissue…
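The cues these commenters lean on (very short declarative sentences, high colon density) can at least be counted mechanically, even if none of them is conclusive. A toy sketch of that counting, not a real detector:

```python
import re

def style_stats(text: str) -> dict:
    """Crude stylometric signals mentioned in the thread:
    average sentence length in words, and colons per sentence."""
    sentences = [s for s in re.split(r"[.!?]+\s+", text) if s.strip()]
    n = max(len(sentences), 1)
    total_words = sum(len(s.split()) for s in sentences)
    return {
        "avg_sentence_words": total_words / n,
        "colons_per_sentence": text.count(":") / n,
    }

choppy = "The defeat is mechanical. The product launched. The page went live."
print(style_stats(choppy))
```

Short-sentence prose like the quoted examples scores a conspicuously low average; whether that proves machine authorship is exactly the argument this subthread is having.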
Or Wikipedia: https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing
The entire article is just one long stream of short, punchy, declarative sentences. The latest Claude models are notorious for writing like this.
There's also a few cookie-cutter patterns that should immediately jump out at you if you're at all familiar with AI writing, such as:
> No hardware identifier is transmitted. No attestation is required. No certification layer determines who may participate. User privacy is structurally preserved, not promised.
> Google Cloud Fraud Defense is not a reCAPTCHA update. The QR code is the visible mechanism, but device attestation is the real product.
It's really obvious. The repeated information. The very. short. sentences. The incessant detail. The tangents that go nowhere. And LLMs always try to structure the entire essay into topical sub-sections.
They can't tell. It has become a statistical thing. There will exist some percentage of them that assumes an item is AI generated. With enough people seeing something, you'll see the accusation.
"this is AI" is the new "This is shopped", but without the "I can tell by the pixels" rejoinder.
I mean sometimes they're right, but honestly in this day and age does that even matter?
Whether it's AMP or Manifest V3 or Android source shenanigans or attempts to replace cookies with their FLoC nonsense or this... Google is rapidly turning into a malicious force when it comes to the open internet.
Turns out RMS has always been right. How surprising.
Turns out that identifying a problem doesn't help without a workable solution/alternative.
What is RMS’ solution to this problem?
Root mean square?
Indeed, occasionally hammers do find nails to hit.
If RMS said not to trust Google's self-proclaimed altruism and relationship with open source, yeah. I always assumed that was a backstab waiting to happen. But that only meant I used an iPhone and didn't care that it was more closed than Android, not that I got an Arch Linux phone or something. (And a Mac more importantly, but there's not really a Google counterpart to that.)
> AMP
My god AMP was such an annoying thing ~4-5 years ago when I was working in a marketing-forward web dev shop.
"Google really likes when you pipe your words into their shitty UI because it saves some time for the user"
We were all like, cool so on one hand we're being given complex designs for sites to differentiate them, and on the other hand we're bowing to a megacorp who actually wants to skip the whole web design part entirely and pipe our content through their pre-defined UI.
So glad it died. Should have known it would die in a matter of a couple of years with that being the track record for Google in general.
> skip the whole web design part entirely and pipe our content through their pre-defined UI
It's a shame this part didn't stick. I use reading mode every chance I get because the more design a page has, the worse it is. For some reason orgs agreed that it is ok to let Medium or Substack own their content, but hated Google's high speed CDN.
Last time this happened we got a bunch of Google employees downplaying the impact of WEI and calling it a nothingburger, that people were being hysterical. I just checked, and everyone I saw defending it has since left the company. I'm sure another wave of Google managers, keen to appeal to the higher-ups, will be here to defend this new initiative any minute now.
This makes me curious, where did they go?
Don't you see it closing all around you?
It's not just Google. It's governments, corporations, all around the world, simultaneously. The noose is being tightened gradually, then all at once. And it's coming for all of us:
https://community.qbix.com/t/increasing-state-of-surveillanc...
The threats above interlock by design or convergence: Identity layer (1-5) creates the prerequisite for the others. Once identity is established at SIM/account/device level, the carve-outs that make surveillance politically viable become possible (powerful users get exemptions; ordinary users get watched).
Device layer (10-12, 16-19) creates the surveillance endpoint. Once content is scanned on the device before encryption, the cryptographic protections at the communications layer become irrelevant.
Communications layer (6-9) is the most-defended. Mass scanning has been defeated repeatedly. This is the layer where the resistance has the best track record.
Reporting layer (13-15) is nascent. Direct OS-to-government reporting hooks haven't been built yet at scale. The UK's December 2025 proposal is the leading edge.
Platform control (20-24) determines whether alternatives can exist. Browser diversity, app distribution diversity, and engine diversity are the structural protections. All three are narrowing.
A society with all five layers complete has the technical infrastructure for total surveillance with elite carve-outs. We are roughly 40% of the way there. Whether that infrastructure becomes a dystopia depends on political choices, not technical ones.
HN as a whole is surprisingly oblivious to the noose tightening, because many here are super against decentralized distributed things, if they involve any sort of token. You can complain all you want, but downvoting and burying the decentralized alternatives just for groupthink makes you somewhat complicit in the erosion of our privacy and liberties. Even if you might disagree with a project, all the work that goes into it might be a good reason to upvote it instead, considering that without this work, we're basically doomed.
Hell, even using cash feels like a minor form of dissent. And of course even if you leave your phone at home, your car will be scanned with ANPR wherever it goes. And if that fails, there's still your face to be tracked.
I said 16 years ago, when IPv6 was coming into use, that the only reason for a 128-bit address space was so they could tie every packet on the internet back to you as a person. https://news.ycombinator.com/item?id=1464940
It doesn't help that your first sentence makes you sound like a conspiracy theorist riding his hobby horse. I read on despite that, but others may not.
> rapidly becoming
Always has been.
Google was creating cartels like the "Open Handset Alliance" literally decades ago.
Via their control of Chrome and Search which are both monopolies, Google holds absolute authority on how websites are rendered and if websites can be found.
Huge fan of Kagi so far - especially SmallWeb if you do want to find websites that probably would not hit the top of Google search results
> Chrome and Search which are both monopolies
I'm on Firefox and use DuckDuckGo.
It cracks me up when people say Chrome is a monopoly, because a massive number of computing devices do not even ship with Chrome. Windows computers, Macbooks, and iPhones require users to go search out and install Chrome of their own volition, shipping out of the box with entirely functional and decent browsers that the vendors push hard. Even many Android phones still ship with browsers other than Chrome as the default, from what I understand.
How is Chrome, of all things, a monopoly? Have words just entirely lost all meaning and now monopoly just means "things which are popular that I dislike"?
They lost their search monopoly when LLMs came.
I'm amused at how thoroughly Google adopted Microsoft's playbook. Chrome supplanted Internet Explorer by embracing the open web. But then Google immediately started on extensions, and now they're trying to extinguish the open web with nonsense like Cloud Fraud Defense. All very smoothly done. I mean, people are actually _asking_ for this junk. I'm impressed.
No they didn't. Firefox unseated Internet Explorer. Chrome then got big by putting its installer right on the Google homepage and harassing users to install it. And they had it bundled with other software, and it would install per-user so that locked-down computers could still run it. They absolutely did not win by embracing open standards.
If I may tie this into other things going on: the California wealth tax as written would force Larry and Sergey, if they didn't move out of California, to sell basically their entire stake in Google, and it would probably wind up owned by State Street and Vanguard, who outsource their proxy votes to ESG consultants, who will probably vote for more surveillance.
what alternative to WEI do you propose? it solves a bajillion Internet-existential problems. it is definitely a crisis. the bot problem is at least as serious as facebook, gmail serving without https.
the fact that this kind of comment gets downvoted proves my point. so what if you personally don't like WEI? it doesn't mean the problems aren't real...
that aside, i don't know how people say stuff like "malicious force" and then go and use a bajillion Google-authored, completely free-as-in-beer and often free-as-in-freedom technologies that nobody obligates them to use at all. It's not like Apple, where their software is so shitty (Messages, Apple Photos, etc.) that the only reason people use it is because it is locked down and forced upon you. it's interesting to me that @dang worries about the tenor of conversation changing - he longs for that 2009 world of university-level math people hanging out and writing comments about LISP or whatever - when the real deficit is not mathematical intelligence but the willingness to see that things are nuanced, to see more sides to a problem than the most emotionally powerful and the most mathematically neutral ones.
Bombing every AI data center on Earth would also solve the Internet-existential problems we're facing. But that solution is beyond the pale of course, instead it's incumbent on me to prove to you that panopticon surveillance of every living human being from now until the Sun consumes us is not a reasonable solution to "bots use the Internet".
People use iMessage because it has worked for a long time, during which all the leading alternatives were terrible. Maybe they still are cause I'm still not convinced that RCS even works reliably, seeing how Android users go on WhatsApp instead.
As much as I hate whatever google's doing, this article has some issues:
>For operations that need Play Integrity attestation specifically, a compliant Android device costs approximately $30 at current market prices
This assumes the logic on google's side is something like `if(attestationResult == "success") allow()`, but it's not hard to imagine the device type being factored into some sort of fraud score. For instance, expensive devices might have a lower fraud score than cheaper devices, to deter buying a bunch of cheap devices. They might also analyze the device mix for a given site, so if thousands of Chinese phones suddenly start signing up for Anne's Muffin Shop, those will get a higher fraud score.
>Firefox for Android does not appear in Google’s stated browser support list for Fraud Defense.
The browser only needs to show a QR code, so if you're on firefox mobile they'll either open a deeplink to google play services on the phone itself, or show a qr code.
>One human solving a single challenge pays a negligible cost. A bot farm running concurrent sessions faces exponential compute costs with each additional attempt - and AI agents, which consume GPU cycles to operate, face identical penalties regardless of how sophisticated their reasoning is.
PoW for bot protection basically never caught on because javascript performance is poor, and human time is worth more than a computer's time. An attacker doesn't care if some server has to wait 10s to solve a PoW challenge, but a human would. An 8-core server costs 10 cents per hour on hetzner. Even if you assume everyone has a 8-core desktop-class CPU at their disposal (ie. no mobile devices), a 6 minute challenge would cost an attacker a penny. On the other hand how much do you think the average person values 6 minutes of their time?
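To make the asymmetry concrete, here is a back-of-the-envelope sketch using the figures from the comment above (the $0.10/hour Hetzner rate, the 6-minute challenge, and the $20/hour valuation of a person's time are all assumptions, not measurements):

```python
# PoW economics sketch: attacker compute cost vs. human time cost.
SERVER_COST_PER_HOUR = 0.10   # assumed 8-core Hetzner server, USD/hour
CHALLENGE_MINUTES = 6         # assumed time to solve one PoW challenge

# The attacker only pays for machine time.
attacker_cost_per_solve = SERVER_COST_PER_HOUR * (CHALLENGE_MINUTES / 60)
print(f"Attacker pays ${attacker_cost_per_solve:.3f} per solve")   # $0.010

# A human "pays" in waiting time; $20/hour is a made-up valuation.
HUMAN_HOURLY_VALUE = 20.0
human_cost_per_solve = HUMAN_HOURLY_VALUE * (CHALLENGE_MINUTES / 60)
print(f"Human 'pays' ${human_cost_per_solve:.2f} in waiting time")  # $2.00
```

Under these assumptions the human pays roughly 200x more than the attacker per challenge, which is the comment's point in miniature.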
I strongly suggest people move away from chrome. They lost all sense of respect.
I know it is a small move, but as it happened when chrome started, this opens opportunities for other players
I really tried to switch off Chrome when they broke ad blockers, I gave it a good few months trying out alternatives but I really don't like any of the other browsers. I do primarily use Safari on my Mac, but on Windows where I don't have that option, I don't like any of the big players, and I don't really trust the smaller players. Even the "big" smaller players are not that trustworthy when it comes to security, like Arc browser's "Boosts" feature that enabled remote code execution.
So now I'm back on Chrome.
It is understandable but you can also use a simpler browser for common things and use chrome for banking or things like that
Qutebrowser is my favorite daily driver, save for a few sites I can't boycott and which need Firefox or something.
This is truly disturbing, and trying to sneak it in like this without public discussion is disingenuous. Hopefully it will be shot down like last time - at the very least, there are surely antitrust issues here.
Last time they tried this, they laundered it through an employee's personal GitHub to distance it from Google itself, then framed the proposal in the most disingenuous manner possible, as if it were something that users wanted rather than another mechanism for Google to exercise control.
I agree on the antitrust issues, but I’m not convinced that’s seen as a serious barrier these days.
Maybe a dumb question, but how is this supposed to work for iPhone users? They won't have Google Play, and it seems like Android/Google Play is required here? There is no way they would cut out such a huge chunk of the market.
iPhone users will have to install the "reCAPTCHA" app. https://apps.apple.com/us/app/recaptcha/id6746882749
This is detailed at https://support.google.com/recaptcha/answer/16609652
What's up with the reviews? It's pure spam and the 1-star review is completely hidden.
Apple had device attestation deployed about a year before Google even proposed it: https://httptoolkit.com/blog/apple-private-access-tokens-att...
hacker news when discovering that apple deployed WEI, for ages, with beloved IT company Cloudflare, affecting hundreds of millions of users: "aww, you're sweet"
hacker news when reading that google is doing the same thing for the rest of the userbase: "hello, human resources?"
The claim is that an iPad/iPhone will also work. Not that that makes it acceptable; if anything, it's worse, because if it were Google Play only it'd be more obvious how unacceptable it is, whereas catering to the duopoly makes it less obvious how much it excludes people and builds a reliance on proprietary systems.
One company can soon dictate who can enter the websites. And only two commercial operating systems are viable in the world after this change. Not nice.
iPhones have attestation too: https://developer.apple.com/documentation/devicecheck/establ...
It'll just be more clunky because you have to install their app.
I believe the latest versions of iOS just work from the browser, you only need to install the app for older versions of the OS.
I don't know what technology they're using, but when I scanned the QR code it launched (downloaded?) an iOS app of sorts with one tap, similar to the way Google tried Instant Apps a few years back. Didn't even need to double tap the power button like usual.
They also have Private Access Tokens: https://developer.apple.com/news/?id=huqjyh7k
This article is full of false assumptions.
For example: > Bot operators point a camera at a screen, a trivial automation with off-the-shelf hardware. For operations that need Play Integrity attestation specifically, a compliant Android device costs approximately $30 at current market prices
A bot farm cannot bypass for long with a $30 phone. Do you seriously think that if Google sees the same hardware identifier 1000s of times a day they are not going to consider that usage to be fraud?
I appreciate that Google's made a real proposal to avoid the web becoming bottomless AI slop. This article hasn't come with a better alternative - I'd love to see one!
> Do you seriously think that if Google sees the same hardware identifier 1000s of times a day they are not going to consider that usage to be fraud?
Phones are very cheap, especially refurbished phones. Just have the phones mimic real life sleep/wake cycles and take occasional breaks. Use 25% more devices to account for the loss in uptime.
Besides, some people (often unemployed or disabled, and possibly with sleep disorders or mania) actually don’t do anything other than scroll on their phone all day and night. So you can’t rely on this as a good signal without creating even more blowback. And you really don’t want too much blowback from troubled people who have infinite free time.
This still doesn't seem very economical for the bot farm. For a device to look legit it has to only use its hardware identifier about as often as a real human would. This massively changes the economics. If you have 1 bot farm customer that wants 20,000 solves in a day, the bot farm would need something like 20000/200=100 phones to provide this. (assuming a real user can do about 200 solves before being flagged).
And the cost for the bot farm being detected is very high because if a phone's root key loses trust it destroys the value of the ~$30 phone they purchased. And of course, I'm sure Google can use the phone's value as another signal for trustworthiness, treating cheaper phones many generations behind as less trusted.
I don't think bot farms will go away completely, but the price will spike massively, which is all you need to discourage many types of abuse. Some Googling show that reCAPTCHA solves are about $0.003 each right now, so quite cheap. With this new reCAPTCHA, I suspect the price will jump massively.
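The device-farm arithmetic in the parent comment can be sketched directly (the 20,000 solves/day demand, 200 solves/device/day ceiling, and $30 phone price are the comment's assumed figures, not measured data):

```python
import math

# Figures from the comment above - all assumptions, not measured data.
SOLVES_PER_DAY = 20_000     # one customer's daily demand
SOLVES_PER_DEVICE = 200     # daily solves before a device's usage looks inhuman
PHONE_PRICE = 30.0          # cheap attested Android phone, USD

phones_needed = math.ceil(SOLVES_PER_DAY / SOLVES_PER_DEVICE)
capex = phones_needed * PHONE_PRICE

print(phones_needed)              # 100
print(f"${capex:,.0f} upfront")   # $3,000
# Every ban destroys a $30 device's trusted root key, so detection risk
# adds a recurring replacement cost on top of this capital outlay.
```

The interesting variable is the device lifetime before a ban: the shorter Google can make it, the higher the effective per-solve price climbs above today's ~$0.003 solver-market rate.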
It is particularly funny because this is content marketing for a computational proof of work "captcha". Those are pure snakeoil, with economics that are probably at least four orders of magnitude more favorable to the abusers than this attestation would be.
I'm pretty sure that the AI copied the $30 number from my Hacker News comments. However, in the USA it is true. https://www.walmart.com/ip/Straight-Talk-Motorola-Moto-g-202... (carrier locks don't matter for this use case.) I am not sure that storing unique device identifiers is legal in the EU.
I remembered $30 from some comment I read, but didn't look for it later. If it was yours, thank you! (And definitely thank you for the Walmart link! Would you like a credit in the blog post, like a quote?)
inb4 someone productionizes this (the dependency of cloud phones exists & captcha solvers proved demand) && makes it a cloud service && we are back to square one.
> A bot farm cannot bypass for long with a $30 phone.
That's exactly what they are doing already, and it's not $30/device but something like under $5/device. Remember they can buy the worst of the worst of the used market.
Betting on device attestation is really betting that smartphones will become less ubiquitous and more expensive to own. Sounds like it's not going to happen to me.
I think I understand why Google wants to do this, and I think I understand why people are opposed to this particular solution.
It’s also worth noting that the author of this article is selling a proof of work solution to the problem.
I am fairly skeptical that proof of work is the right way to go here. A lot of users of the web are using older hardware. Adding a computational toll booth doesn't solve the problem in a world where people have differing amounts of compute to spend.
On the other hand, a botnet might have access to thousands of computers and may not actually care about waiting an extra 10 seconds. Or worse, they will come up with a custom solution on an ASIC that solves your proof of work puzzle thousands of times faster than grandma's laptop.
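For readers unfamiliar with how these PoW captchas work under the hood, here is a minimal hashcash-style sketch (a generic illustration, not Anubis's or Private Captcha's actual scheme; the challenge string and difficulty are made up). The ASIC concern is visible right in the code: the "work" is a raw SHA-256 loop, exactly the operation dedicated hardware computes orders of magnitude faster than an old laptop.

```python
import hashlib
import itertools

def solve(challenge: bytes, difficulty_bits: int) -> int:
    """Find a nonce such that sha256(challenge || nonce) has
    `difficulty_bits` leading zero bits. Expected work: ~2**difficulty_bits
    hashes, while verification costs a single hash."""
    target = 1 << (256 - difficulty_bits)
    for nonce in itertools.count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

# Client solves (slow); server verifies with one hash (fast).
nonce = solve(b"example-challenge", 16)
digest = hashlib.sha256(b"example-challenge" + nonce.to_bytes(8, "big")).digest()
assert int.from_bytes(digest, "big") < 1 << (256 - 16)
```

Difficulty only tunes *average* cost per solve; it cannot distinguish who is paying that cost, which is why the hardware gap between grandma and an attacker matters so much.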
Given all the negative comments here - what is anyone's alternate solution for AI-driven fraudulent activity?
CAPTCHAs are increasingly ineffective. Services are either going to go offline or implement some kind of system like this. PII like credit cards or SSNs aren't enough because those are regularly stolen.
So where do things go? Fewer services and infinite fraud?
> Given all the negative comments here - what is anyone's alternate solution for AI-driven fraudulent activity?
A combination of "regulate AI" and "The optimal amount of fraud is not zero". https://www.bitsaboutmoney.com/archive/optimal-amount-of-fra...
Yes, fewer services and infinite fraud is substantially better to me than the web being controlled by Google even more than it already is.
It will be fewer accessible services for everyone who refuses to use this, that's for sure. In general though, service providers are not going to accept "fewer services and infinite fraud" and thus they will look into implementing this.
This doesn’t even solve the problem thanks to device farms. There’s not really a solution for this short of aiming a camera at someone’s retina 24/7 plus a fully locked down hardware path. And even that would surely be compromised given enough incentives.
People are just going to have to find a new way to monetize. Maybe more things will become paywalled, or sponsored long-term like old TV shows. Again, there’s no good way to solve this, and the “solutions” on offer just contribute to the surveillance state without solving the problem.
Why do you continue to extend the benefit of the doubt to your former employer when they have shown themselves to be untrustworthy again and again?
For one, I got to see how utterly insane and off-base many of the conspiracy theories around Chrome were compared to reality.
Make people pay money instead of watching ads.
I don't know which activity you're referring to, but why are you trying to discriminate between humans and bots? Because bots don't pay? So demand payment: payment per account creation, say, then set appropriate rate limits per account.
CAPTCHA is sort of a flawed concept in the first place: a machine testing whether another agent is a machine. But I figure the future of this is to give the test but discard the answer, because the truth is in how it is answered: behavioral analysis, checking whether access patterns are human-like or machine-like. A simple version of this is how fast they type, or the speed at which items are clicked. A surveillance process that really creeps me out. I am undecided whether it creeps me out more or less than fully automated agents spewing shit over the open web.
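A toy illustration of the kind of timing analysis described above (this is not any vendor's actual model; the threshold and click timestamps are made up, and real systems combine far richer signals):

```python
import statistics

def looks_scripted(event_times_ms: list[float], cv_threshold: float = 0.15) -> bool:
    """Flag suspiciously regular input timing. Humans produce noisy gaps
    between events; naive scripts fire at near-constant intervals. The
    coefficient of variation (stdev/mean) of the gaps is a crude
    discriminator - trivially evaded by adding jitter, hence "crude"."""
    gaps = [b - a for a, b in zip(event_times_ms, event_times_ms[1:])]
    return statistics.stdev(gaps) / statistics.mean(gaps) < cv_threshold

bot_clicks = [0, 100, 200, 300, 400, 500]     # metronomic: every 100 ms
human_clicks = [0, 130, 210, 390, 440, 620]   # jittery, irregular gaps
print(looks_scripted(bot_clicks))    # True
print(looks_scripted(human_clicks))  # False
```

The easy evasion (randomized delays) is exactly why these systems escalate into ever-deeper surveillance: each cheap signal gets spoofed, so the next one has to be more invasive.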
As a footnote, I found Google's reCAPTCHA bitterly ironic: it was painted in bright colors ("this data assists in book scanning", "this helps our self-driving cars recognize stop signs") but really designed to train models to do exactly what it's trying to prevent them from doing, making life hell for the humans along the way. The modern single-click version is doing behavioral analysis.
Captchas were never effective. It’s an arms race to the bottom.
What Google has done is incredibly clunky and only serves its own interests. We already have methods to prove that we're human.
1. lots of laptops have fingerprint readers & TPM2 built-in
2. lots of folks own Yubikeys or FIDO2 keys - if these became the norm then the price would come down significantly.
Both of these methods only require a tap to authenticate to a website. Both provide public-key authentication, and both provide some level of proof of work / require human interaction, without revealing the identity of the end-user.
Why not use or standardise these? because there's no benefit to Google of course.
Those don't prove that a human is present. A FIDO2 key can be automated by electronic relay. The only way to do this involves device attestation - locking devices down and utilizing hardcoded TPM/Secure Enclave esque chips. The best we can hope for would be an open standard for those chips so that people can use them with their own X.509 certificates that lets them choose their own CA.
Real hardware doesn't mean a human is present either, unfortunately. It just means that you have to spend on real devices to bypass these defences.
neither 1 nor 2 can prove you're a human. sorry
neither can Google Cloud Fraud Defence, and yet we're here
Do we know if this is immediately going to slot in wherever reCAPTCHA is currently used / is there a rollout plan? Or will site operators manually opt into the new system? Is there even a way to opt out?
I can think of many sites where, for users that trigger captchas often, introducing a multi-device workflow is even worse for those users than clicking traffic light images. An automatic rollout would be hostile to those operators!
This seems to be an advertisement for Private Captcha. I don't know a lot about the service, but it seems inherently ableist. Does proof of work support blind users? Does it support users with cognitive impairments? The QR code and photo flow supports a wide variety of users. Why not support a variety of methods? Why does it need to be one or the other?
Exactly my thoughts. I am unfathomably angry and I want to contribute to any effort to dismantle Google as a company.
Yeah, same. It is hard; we start to need a collective boycott.
We can all do our part, by using their products as little as possible, contribute to open alternatives (OpenStreetMap, Fediverse, Linux, Nextcloud...) and by stimulating our (non-techie!) friends and family.
But it is a lot of work :(
It should not be a "vote with your wallet" situation. It should be governments shattering that organization into appropriately sized companies.
It's less work than 10 years ago. So many much more mature alternatives.
They're trying to block your ability to boycott. https://en.wikipedia.org/wiki/Anti-BDS_laws
> Yeah, same. It is hard; we start to need a collective boycott.
Feelgood slactivism. They don't care about your boycott. They finance their own alternatives because they know what makes you shut up.
IMO the biggest issue is that some non-tech people will occasionally be straight up hostile and will whine about not having "features", but then again it only takes a small number of people taking action to effect real change. Also, medium term, we need to start making phones (smart OR dumb) that are as FOSS as possible.

> Linux

OpenBSD/FreeBSD too, we need to have more redundancy.
But remember: once again, don't simply get angry at Google the institution. Get angry at Page and Brin personally. They have the power to prevent this, a power they were careful to preserve when they gave Google its IPO. They are fully responsible for Google's choices here. But, partly because they aren't constantly jumping up and down drawing attention to themselves on social media, they've tended to escape the same personal scrutiny given to eg. Elon Musk. That needs to end.
On that topic, I would highly recommend you to switch to Kagi!
Search is still their workhorse for ad revenue. Less search, less users, in addition to users now just asking chatgpt and co, will hurt them well
Wouldn’t installing an ad blocker basically hurt them as much or more, since I'd still cost them compute but they wouldn't get that sweet ad money?
The problem is this type of controlling move, that will be used to benefit their company, is one among many things a company like Google can do that is unethical. They won’t stop. They are too powerful and can get away with it repeatedly. Even if this one thing is stopped, there will always be another dark pattern or another privacy violation or another anti-competitive thing.
We really need brand new legislation that makes it much easier to break up companies that are too big, and also to tax mega corporations at a much higher rate than all other companies. Then we can have fair competition and the power of choice. But the existing laws end up with no real consequence for these companies, and even if there’s some slap on the wrist, it takes years in court. New laws must make it very fast and low cost for society to take action.
For merchants who don't want geeks as customers, cool
As a web-wide captcha replacement, not cool
Also, Google sometimes blocks the audio captchas (messing up blind people) and they are nearly impossible right now.
Why should I even care anymore? I no longer need to access random websites to find information since I can just ask the AIs.
Are you genuinely asking? To pay your taxes, order items online, access your bank account, log into your favorite AI service, there are very often CAPTCHAs involved. Try going a month with CAPTCHAs blocked in uBlock Origin, and you will find yourself unable to do many basic things.
Not saying this is any better, but IRS partnered with id.me to enforce ID + face recognition before you can log in to view your records. We are truly doomed.
Even besides services you might need to access, as pointed out in another response (e.g., banks, shops), how are you going to check the veracity and understand the context of the information you seek without going to the (possibly hallucinated!) sources? But I guess a lot of people who are into using AI like that just don't care.
Where do you think the AI gets this information?
They also need to browse the web, and are more likely to be blocked by these measures than humans
> are more likely to be blocked by these measures than humans
In other words these measures work as intended...?
Very funny that if you want to start a bot farm you also go and buy a bunch of random android devices.
We see the fundamental forces of capitalism at work: to justify its valuation, Google needs to grow. When they feel a ceiling, they broaden their search to anything legal that makes customers pay, even if it contradicts their long-term interests. This has created countless attack angles for startups. The good news: we already have a solution! Monopoly laws. In the case of the internet, no company should be able to have this much power.
The bad news: the US decided to weaponize big tech's leverage over the world and no longer enforces the laws that fix vanilla capitalism.
>We see the fundamental forces of capitalism at work: To justify valuation, Google needs to grow.
You’re confusing markets with capitalism.
Market Socialism (the only reasonable kind) would have these same issues. If Google was owned by the workers instead of capitalists, it would still have incentive to grow. The worker owners would have the exact same incentives as current owners. The only difference would be who the owners are.
Capitalism is not actually “the final boss” that internet leftists make it out to be. Socialism is not the panacea that leftists make it out to be. Surveillance is not a “capitalist only” thing.
I agree, thanks for clarification. I did not want to argue in favor of Socialism - my criticism here is that „free market correction instruments“ like antitrust, monopoly etc are absent.
No one should ever browse the web on a smart phone. Not joking.
This API also works on the desktop. In fact, you can't use this system without a phone if your browser isn't Google enough.
We are going to see sooooo many scams out there. No wonder Google is locking down third party Android apps outside of their control, getting a user to install "device verification.apk" will become super trivial after people have clicked through these popups a couple times.
That war was lost in the 2010s, around the same time as the vertical video war.
Phone is small computer
Sure, and the north korean Linux distro also runs on a computer. I still wouldn't touch it.
It is, just like a calculator is a small computer. It's not a personal computing device though, in the sense that the user can't develop and deploy their own software/tools on it.
No one should ever browse the web from an ESP32 either. Like seriously the dark patterns are bad enough from a desktop where you've actually got the screen real estate to see the whole page, have other sites open for comparison, have a keyboard to type your own notes, etc. Most browsing can simply wait, especially the adversarial-commercial type we're talking about here.
And also don't install apps? What's left then?
A device I have no choice in owning because modern employers assume you have something to install an authenticator app on. That's what it is for me. Also, sadly, it's an anchor for Signal. Otherwise I don't use the stupid thing.
well that just seems counterproductive and unreasonable but it's Friday so what do I care
-- sent from Chrome on Android
and it seems Google wants to support people like you!
That entire QR barcode thing is so that you can browse the web on your laptop/desktop, and _still_ rely on smart phone's attestation, no mobile browser needed.
Thankfully I haven't met reCAPTCHA that often nowadays, thanks to other providers being more competent.
(And no, not you Microslop!)
In a world where everything is shit, could I at least take away some solace in this helping to reduce Cloudflare's hegemony?
No. They have more in common. I would assume this is an internal joint project.
We do need to abandon the reality where we use the same few companies on a daily basis and get back to what's now hidden under the surface: forums, blogs, personal websites. We need to re-discover the "free" internet we used to have before Facebook and the smartphone dystopia happened.
I posted a comment on the announcement when it was posted here:
>As someone who is working in incident response and malware analysis I have to say that is one of the worst ideas I have ever seen. A lot of companies have issues with ClickFix [1] and other social engineering campaigns and now Google wants to teach users that they should scan QR codes to proceed on a website.
>How should we realistically teach Susan from HR the difference between a real Google Captcha QR code and a malicious phishing QR code - you (realistically) can't. I wish we could - but those people don't work in tech, they will never know and I can't really blame them because at the end of the day they are just happy that they don't have to deal with tech after work.
>We have spent years of behavioural conditioning to prevent QR-code based phishing attacks (some people call it Quishing but I hate that term) and since the QR code is being scanned from a mobile device (99.99% of the time the private device), we have no EDR visibility on those devices and can't track what's happening if people scan it.
>This is more of an invitation for threat actors than it is something that holds them back.
[1] https://www.kaspersky.com/blog/what-is-clickfix/53348/
I think the idea is good if it could actually curb bot traffic that currently plagues the Internet.
However, a lot of recent bot traffic comes from sophisticated scrapers called "LLMs." You can tell Claude to "research X from www.example.com" and it will automatically scrape it and summarize it, something an LLM is perfect for. Gemini tends to share links instead, presumably because most of Google's revenue comes from ads served on those websites, so completely killing traffic to them would just make it less money. Incidentally, I wonder if Claude/Gemini use a search-engine-like "index" of all websites or refuse to cache anything in order to always fetch "fresh" data.
If this is employed, I don't think the web is only going to be gatekept to Google devices. I think it will also be gatekept to Google's AI's.
Google would be able to display a captcha that no LLM could defeat, and then just let its own LLM pass through.
The same could be said about its other bots, such as the web crawler. Google's bot could crawl webpages that no other crawler would ever be able to simply because it has free pass to captcha-gated GETs. Although the same could be true already today.
Their product page is full of info about how this works with "agentic" cruft. They're still permitting your regular old scrapers and bots for as long as they like you. Hope you're not thinking of running an independent system instead of a large cloud platform!
This is security theatre. This isn't going to help against bots in any way.
I keep banning gogol IPv4 ranges because of scanners, script kiddies (and maybe worse). Yes, I am self-hosted, and without paying the DNS mob.
Related:
Google Cloud fraud defense, the next evolution of reCAPTCHA
https://news.ycombinator.com/item?id=48039362
Wrong link. https://news.ycombinator.com/item?id=48039362
Thanks! I've s/48061938/48039362/'d the GP.
> The defeat is mechanical. Bot operators point a camera at a screen, a trivial automation with off-the-shelf hardware. For operations that need Play Integrity attestation specifically, a compliant Android device costs approximately $30 ($29.88 at Walmart, to be precise) - for a professional bot farm, which purchases devices in bulk, this is a fixed cost without material disruption to operations.
That's $30 per account, not one time. Because of the following:
> Device attestation does not just gate access - it produces attribution. A device with a stable hardware identity creates a persistent identifier that crosses sessions, browsers, and private browsing modes.
If you put all your bot accounts on one device, they all get banned at once. So fraudsters have to spread their accounts across multiple devices and replace them when they inevitably get banned. That's the reason for all the spying, attestation, and lockdown bullshit behind Google Cloud Fraud Defense. It is far easier to ban fraudsters if you just let the Maoists run the Risk Department.
The author proposes an alternative solution: proof-of-work. And, yes, there are use cases for that, such as Anubis. Google might even want to consider a proof-of-work option in certain scenarios. But there is no scenario in which someone's phone deliberately burns $30 worth of compute - perhaps a quarter of the user's battery - and the user still has a good onboarding experience. Most of your actual users are not going to be able to burn compute as efficiently as fraudsters, either - so maybe you have to burn the whole battery on a phone to cost a fraudster $30. Proof-of-work is, strictly speaking, anti-egalitarian and anti-democratic. "One CPU, One Vote" is less useful than you think when you realize fraudsters have the money to just buy lots of CPUs to always win[0].
Every Risk Department eventually reinvents arbitrary and capricious punishment. When you have no legal authority to prosecute crime, you rely entirely upon your freedom of association and ban people with a hair trigger. It's the only thing that works. Personally, I'd rather live in the world where governments actually took fraud seriously and corporations didn't have to do this, but for right now, GCFD is at least less onerous than WEI in the sense that WEI was going to lock down all browsers. GCFD just means I have to keep a Google-approved phone around to scan a QR code every once in a while.
[0] I'm not mentioning the massive waste problem proof-of-work creates, because obviously attestation will also produce waste. Actually, if anything, the fraudsters will probably wind up dumping all their banned devices on the used market and ruin it.
Considering Google's origins and early backers, this shouldn't come as much of a shock to anyone:
https://qz.com/1145669/googles-true-origin-partly-lies-in-ci...
The military industrial complex created the internet, and has funded many of the big players in Silicon Valley. Their goal was never an open and free internet.
[flagged]
[flagged]
[dead]
[dead]
I do wonder how people who work on this don't see themselves as the bad guy.
$$$
They have blinders on made out of money.
Special snowflakes kind of people, it takes one to know one.
[flagged]
[flagged]
You don't think that some people simply disagree with the idea that this is bad? Or like maybe the CAPTCHA company who put out the post has an agenda here? So you want to go after engineers personally?
I wonder what you've done that might warrant harassment?
Look at how complicated CAPTCHAs are getting in the attempt to stay unsolvable by AI - it's a losing game. This and the WEI proposal are trying to solve a very, very real problem. If you continue to deny the problem, or reject every proposed solution without working towards an acceptable one, people will route around the blockage.
The crux of the problem is that their solution involves making themselves the gatekeepers of who is and isn't allowed. And that's a power that no one unaccountable organization should wield.
Given how important internet is to modern society, letting any one entity decide who should and should not have access is nearing a human rights issue.
> You don't think that some people simply disagree with the idea that this is bad?
Where are they? Where? Can you point me to one person in this thread who "disagrees with the idea that this is bad"? Apparently even you don't go that far.
But it's so easily beatable! This might be the result of good intentions (being incredibly generous), but as the article states, any bot can afford a $30 phone and the concomitant hardware as the cost of doing business and bypass this.
Also as the article states (referencing an HN comment):
> How should we realistically teach Susan from HR the difference between a real Google Captcha QR code and a malicious phishing QR code - you (realistically) can’t.
Susan from HR is the least of it. This is a huge vector to increase fraud, not decrease it.
How would an ethical, competent engineer argue against this?
The CAPTCHA company who put this out might have an agenda, but also since they're in the industry they might also have knowledge to impart.
We're reaching an inflection point with the oligarchies where the old ideas of "writing a blistering editorial" or "calling your congress-critter" need to be seriously questioned as useful and other non-violent methods of recapturing digital freedom need to be entertained.
I see this comment was flagged, I have vouched for it.
It's making a valid point.
I wondered why people are reading "I wonder what you've done that might warrant harassment?" as some kind of personal threat or incitement to harassment, but I read it as precisely the opposite.
It's an entirely valid point that many of us have worked at jobs on products that did something that somebody disagreed with, and we shouldn't be asking anybody to harass us personally for it, because that is wrong.
GP is asking to "aggressively name and shame" engineers. It's entirely valid to say that you wouldn't much like that if it happened to you.
> Or like maybe the CAPTCHA company who put out the post has an agenda here?
That captcha company is not trying to push spyware onto my device and punish me for daring to remove it. Google is.
> Look at how complicated CAPTCHAs are getting to try to be unsolvable with AI - it's a losing game.
So don't play. Even cloudflare had a better idea - don't block, just demand payment.
This case is trivially circumvented with device farms, much like the post describes. What real problem are they trying to solve? AI bots reading content? That's not something Google wants to prevent - it's part of their business model. It would, however, allow them to easily circumvent the check for themselves.
> You don't think that some people simply disagree with the idea that this is bad?
Some people think women shouldn’t be allowed to vote, not all opinions are created equal.
The usual argumentation is "I need to make a living" and "if I didn't build it someone else would have done an even worse job, like this at least I could be an activist on the inside and guide the efforts to make it better".
Another method is to stall and sabotage the development via endless bike shedding, language changes, rewrites, refactors. All normal things in every project. Drag those feet.
[flagged]
These are private actors. It's not acceptable to harass people for building things that are lawful but that you don't like.
If you don't like this functionality, participate in democracy and work with your representatives to make it unlawful. But be prepared to humbly lose if the majority disagrees with you.
You're not, however, entitled to a "heckler's veto."
Nobody is asking for harassment. Social ostracism is usually enough: nobody wanting to date you, befriend you, invite you to parties, etc. That's perfectly normal treatment for people who behave badly.
I think the better alternative to making engineers "feel uncomfortable opening their door, walking down the street" is for us to collectively ask if the solution isn't to touch more grass and rely less on the technology we've all come to blindly accept as required.
I mean, I hate this QR code shit as much as anyone, but c'mon, we can and should be better - both in how we treat others, and how much we rely on this shit.
That doesn't solve a problem, that ignores a problem.
one person's villain is another person's hero.
I imagine if they were named and shamed, they would get huge contracts at companies like Oracle.
Good luck getting a huge contract with Oracle. Facebook.. yes.
[flagged]
They shouldn't just be ashamed. They should be shunned at the very least.
There's a good chance they're on HN FWIW. If you are and you're reading this: Fuck you. Reconsider which side you want to be on!
So many on HN have already downvoted you. That says something about the nature of SV and the prevailing opinions in the tech sector.
"ChatGPT, generate a blog post that packages an ad for my service that competes with Google by harvesting HN's latent anti-Google rage."
AI use is far more prevalent now than then sadly. This kind of scheme is inevitable since compute is not free.
Water use and mass displacement of labor get all the attention but there are so many other more subtle reasons like this that AI is going to be bad for society.
I disagree that this kind of scheme is inevitable. We can "evit" it through thoughtful discussion, foresight, alternative mitigations, and even regulation. Certainly, Google can choose to avoid it. On the other hand, the AI bubble will inevitably burst, since compute is not free. I look forward to post-bubble AI.
“Evit” is “avoid” in English, they have the same root.
> We can "evit" it through thoughtful discussion, foresight, alternative mitigations, and even regulation
Such as? I don't see how regulation would apply here without concrete technical solutions that enforce it. So what alternative mitigations do you have in mind?
For those who don't know: WEI is a boy band known for singles such as "Twilight"[0].
[0]: https://youtu.be/4BYkuPUQoWE