Google begins requiring JavaScript for Google Search

17 hours ago (techcrunch.com)

To be fair, if my search engine is anything to go on, about 0.5-1% of the requests I get are from human sources. The rest are from bots, and not people who haven't discovered I have an API, but bots that are attempting to poison Google or Bing's query suggestions (even though I'm not backed by either). From what I've heard from other people running search engines, it looks the same everywhere.

I don't know what Google's ratio of human to botspam is, but given how much of a payday it would be if anyone were to succeed, I can imagine they're serving their fair share of automated requests.

Requiring a headless browser to automate the traffic makes the abuse significantly more expensive.

  • If it's such a common issue, I would've thought Google already ignored searches from clients that do not enable JavaScript when computing results?

    Besides, you already got auto-blocked when using it in a slightly unusual way. Google hasn't worked on Tor since forever, and recently I also got blocked a few times just for using it through my text browser that uses libcurl for its network stack. So I imagine a botnet using curl wouldn't last very long either.

    My guess is it had more to do with squeezing out more profit from that supposed 0.1% of users.

    • I have been web searching using Google from the command line, with no Javascript, for decades. Until last week I never sent a User Agent HTTP header either. After this change I'm still searching from the command line, no Javascript. Thus "requiring Javascript" is not a correct phrase to describe this change. The requirement is a User Agent HTTP header with an approved value. The only difference in searching for me as of the last few days is that I now send a User Agent HTTP header, with an "approved" string.
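
      As a sketch, this is roughly what such a search boils down to: a plain HTTP GET with a browser-like User Agent header and no Javascript execution. The UA string and query below are placeholders, not my exact setup, and whether Google keeps accepting requests like this is just my observation.

      ```typescript
      // Plain HTTP search request with an "approved" User-Agent and no Javascript.
      // Assumes Node 18+ (global fetch); the UA string and query are placeholders.
      const query = encodeURIComponent("declarative shadow dom");
      const response = await fetch(`https://www.google.com/search?q=${query}`, {
        headers: {
          // Any mainstream browser UA string; the point is that one is sent at all.
          "User-Agent": "Mozilla/5.0 (X11; Linux x86_64; rv:128.0) Gecko/20100101 Firefox/128.0",
        },
      });
      console.log(response.status);                 // 200 if the request was accepted
      console.log((await response.text()).length);  // raw HTML; no Javascript executed
      ```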

      Javascript does not stop bots. At least it does not stop Googlebot.

      IMO, the change is to funnel more people (cf. bots) into seeing AI-generated search results. The "AI" garbage requires Javascript. That is why the spokesperson suggests "degraded" search results for people who are not using Javascript. For me, the results are improved by avoiding the AI garbage.

      Why bother trying to target the "less than 0.1%" of searches that do not use Javascript? Perhaps because, once you count the bots, the share of searches made without Javascript is a lot closer to 99.9% than to 0.1%. Shake the cushions.

    • "Why didn't they do it earlier?" is a fallacious argument.

      If we accepted it, there would basically only be a single point in time where a change like this could be legitimately made. If the change is made before there is a large enough problem, you'll argue the change was unnecessary. If it's made after, you'll argue the change should have been made sooner.

      "They've already done something else" isn't quite as logically fallacious, but shows that you don't experience dealing with adversarial application domains.

      Adversarial problems, which scraping is, are dynamic and iterative games. The attacker and defender are stuck in an endless loop of move and counter-move, unless one side gives up. There's no point in defending against attacks that aren't happening -- it's not just useless, but probably harmful, because every defense has some cost in friction to legitimate users.

      > My guess is it had more to do with squeezing out more profit from that supposed 0.1% of users.

      Yes, that kind of thing is very easy to just assert. But just think about it for like two seconds. How much more revenue are you going to make per user? None. Users without JS are still shown ads. JS is not necessary for ad targeting either.

      It seems just as plausible that this is losing them some revenue, because some proportion of the people using the site without JS will stop using it rather than enable JS.

      4 replies →

  • I run a semi-popular website hosting user-generated content, although it's not a search engine; the attacks on it have surprised me, and I've eventually had to put the same kinds of restrictions in place.

    I was initially very hesitant to restrict any kind of traffic, relying on rate-limiting IPs on critical endpoints that needed low friction, and captchas on higher-friction, higher-intent flows such as the signup and password reset pages.
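
    As a rough sketch, the rate limiting was conceptually along these lines (a fixed per-IP window kept in memory; the endpoint name and thresholds here are made up, not my production values, and a real deployment would more likely lean on Redis or the CDN):

    ```typescript
    // Fixed-window per-IP rate limiter, illustrative only.
    const WINDOW_MS = 60_000;   // 1-minute window
    const MAX_REQUESTS = 30;    // per IP, per endpoint, per window

    const counters = new Map<string, { count: number; windowStart: number }>();

    function allowRequest(ip: string, endpoint: string): boolean {
      const key = `${ip}:${endpoint}`;
      const now = Date.now();
      const entry = counters.get(key);

      if (!entry || now - entry.windowStart >= WINDOW_MS) {
        counters.set(key, { count: 1, windowStart: now }); // start a fresh window
        return true;
      }
      entry.count += 1;
      return entry.count <= MAX_REQUESTS; // false => respond with HTTP 429
    }

    // Hypothetical usage inside a request handler:
    console.log(allowRequest("203.0.113.7", "/password-reset")); // true on first call
    ```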

    Other than that, I was very liberal with most traffic: I made sure Tor was unblocked, and even ended up migrating off Cloudflare's free tier to a paid CDN because of inexplicable errors users were hitting over Tor, which ultimately came down to Cloudflare answering some specific requests over Tor with 403s, even though the MVPs on their community forums would never acknowledge such a thing.

    Unfortunately, given that Tor is effectively a free rotating proxy, my website got attacked on one of these critical, compute-heavy endpoints through multiple exit nodes, totaling ~20,000 RPS. I reluctantly had to block Tor, and since then a few other paid proxy services discovered through my own research.

    Another time, a set of human spammers distributed all over the world started sending a large volume of spam towards my website, something like 1,000,000 spam messages every day (I still feel this was an attack coordinated by a "competitor" of some sort, especially given that a small percentage of messages were titled "I want to get paid for posting" or something along those lines).

    There was no meaningful differentiator between the spammers and legitimate users: they were using real Gmail accounts to sign up, analysis of their behaviours showed they were real users rather than simple or even browser-based automation, and the spammers were operating from the same residential IPs as legitimate users.

    Again, I reluctantly had to introduce a spam filter on some common keywords, and although some legitimate users do get caught from time to time, this was the only way I could get a handle on that problem.

    I'm appalled by some of the discussions here. Was I "enshittifying" my website out of unbridled "greed"? I don't think so. But every time I come here, I find these accusations, which makes me think that as a website with technical users, we can definitely do better.

    • The problem is accountability. Imagine starting a trade show business in the physical world as an example.

      One day you start getting a bunch of people come in to mess with the place. You can identify them and their organization, then promptly remove them. If they continue, there are legal ramifications.

      On the web, these people can be robots that look just like real people until you spend a while studying their behavior. Worse if they’re real people being paid for sabotage.

      In the real world, you arrest them and find the source. Online they can remain anonymous and protected. What recourse do we have beyond splitting the web into a “verified ID” web, and a pseudonymous analog? We can’t keep treating potential computer engagement the same as human forever. As AI agents inevitably get cheaper and harder to detect, what choice will we have?

      1 reply →

    • > I'm appalled by some of the discussions here. Was I "enshittifying" my website out of unbridled "greed"? I don't think so. But every time I come here, I find these accusations, which makes me think that as a website with technical users, we can definitely do better.

      If nothing else, it's very evident that most people fundamentally don't understand what an adversarial shit show running a public web service is.

    • There's a certain relatively tiny audience that has congregated on HN for whom hating ads is a kind of religion and google is the great satan.

      Threads like this are where they come to affirm their beliefs with fellow adherents.

      Comments like yours, those that imply there might be some valid reason for a move like this (even with degrees of separation), are simply heretical. I think these people cling to an internet circa 2002, and believe the solution to all problems with the modern internet is to make it go back to 2002.

      7 replies →

    • 20,000 RPS is very little: a web app and database running on an ordinary desktop computer can process up to 10,000 RPS on a bare-metal configuration after some basic optimization. If that is half of your total average load, a single co-located server should be enough to absorb the entire "attack" without flinching. If you have "competitors", and I assume this is some kind of commercial product (including running a profitable advertising-based business), you should probably have multiple geographically distributed servers and some kind of BGP-based DDoS protection.

      Regarding Tor nodes — there is nothing wrong with locking them out, especially if your website isn't geo-blocked by any governments and there are no privacy concerns related to accessing it.

      If, like Google, you lock out EVERYONE, even your logged in users, whose identities and payment details you have already confirmed, then... yes you are "enshittifying" or have ulterior motives.

      > they were using real Gmail accounts to sign up

      Using Gmail should be a red flag on its own. Google accounts can be purchased by the millions, and immediately get resold after being blocked by the target website. Same for phone numbers. Only your own accounts, captchas, and site reputation can be treated as a basis of trust. A confirmation e-mail is a mere formality to have some way of contacting your human users. By the time Reddit was created it was already useless as a security measure.

      4 replies →

  • My impression is that it's less effort for them to go straight to headless browsers. There are several footguns in using a raw HTML parsing library and dispatching HTTP requests yourself. People don't care about resource usage, spammers even less, and many of them lack the skills.

    • Most black hat spammers use botnets, especially against bigger targets which have enough traffic to build statistics to fingerprint clients and map out bad ASNs and so on, and most botnets are low powered. You're not running chrome on a smart fridge or an enterprise router.

      3 replies →

    • A major player in this space is apparently looking for people experienced in scraping without using browser automation. My guess is that not running a browser results in using far fewer resources, thus reducing their costs heavily.

      Running a headless browser also means that any differences between the headless environment and a "headed" one can be detected, and that the target's Javascript gets to execute within the page, which makes it significantly more difficult to scale your operation.
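
      For illustration, this is the kind of check a target's Javascript can run inside the page. These are just the classic, well-known signals, not what any particular site actually uses, and modern stealth tooling hides most of them:

      ```typescript
      // A few classic (and easily spoofed) signals page JavaScript can inspect
      // to guess whether it is running under browser automation. Sketch only.
      function looksAutomated(): boolean {
        const signals = [
          navigator.webdriver === true,               // standard flag for WebDriver-driven browsers
          /HeadlessChrome/.test(navigator.userAgent), // old headless Chrome advertised itself in the UA
          navigator.plugins.length === 0,             // historically empty in headless environments
        ];
        return signals.some(Boolean);
      }

      console.log(looksAutomated() ? "possibly automated" : "no obvious automation signals");
      ```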

      6 replies →

    • So much more expensive and slow versus just scraping the HTML. It is not hard to scrape raw HTML if the target is well defined (like Google).

  • I run a not-very-popular site -- at least 50% of the traffic is bots. I can only imagine how bad it would be if the site was a forum or search engine.

  • Maybe you could require hashcash, so that people who wanted to do automated searches could do it at an expense comparable to the expense of a human doing a search manually. Or a cryptocurrency micropayment, though tooling around that is currently poor.
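
    A minimal sketch of what that could look like, assuming SHA-256 and a difficulty counted in leading zero bits; the challenge format and numbers are only illustrative:

    ```typescript
    // Hashcash-style proof of work: the server hands out `challenge`, the
    // client burns CPU in solve(), the server checks cheaply in verify().
    import { createHash } from "node:crypto";

    function leadingZeroBits(digest: Uint8Array): number {
      let bits = 0;
      for (const byte of digest) {
        if (byte === 0) { bits += 8; continue; }
        bits += Math.clz32(byte) - 24; // zero bits at the top of this byte
        break;
      }
      return bits;
    }

    function solve(challenge: string, difficulty: number): number {
      for (let nonce = 0; ; nonce++) {
        const digest = createHash("sha256").update(`${challenge}:${nonce}`).digest();
        if (leadingZeroBits(digest) >= difficulty) return nonce;
      }
    }

    function verify(challenge: string, nonce: number, difficulty: number): boolean {
      const digest = createHash("sha256").update(`${challenge}:${nonce}`).digest();
      return leadingZeroBits(digest) >= difficulty;
    }

    const nonce = solve("search:q=foo:2025-01-18", 20); // ~1M hashes on average
    console.log(verify("search:q=foo:2025-01-18", nonce, 20)); // true
    ```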

    • The only issue with hashcash is that there's no way to know whether the user's browser is the one that computed the proof of work, or whether it delegated the work to a different system and is simply relaying the result. At scale, you'd end up with a large botnet that receives proof-of-work tokens to solve for the scraping network to use.

      1 reply →

  • > bots that are attempting to poison Google or Bing's query suggestions

    This seems like yet another example of Google and friends inviting the problem they're objecting to.

Just tested (ignoring AI search engines, non-English, and non-free):

Search engines which require JavaScript:

Google, Bing, Ecosia, Yandex, Qwant, Gibiru, Presearch, Seekr, Swisscows, Yep, Openverse, Dogpile, Waldo

Search engines which do not require JavaScript:

DuckDuckGo, Yahoo Search, Brave Search, Startpage, AOL Search, giveWater, Mojeek

I recently discovered how great the ChatGPT web search feature is. Returns live (!) results from the web and usually finds things that Google doesn't - mostly niche searches in natural language that G simply doesn't get.

Of course, it uses JavaScript, which doesn't help with the problem discussed here.

But I do think that Google is internally seeing a huge drop in usage, which is why they're currently scrambling for the money. We're going to see this all across their products soon enough (I'm thinking Gmail).

  • I've been experimenting with creating single-site browsers[1] for all websites I routinely visit, effectively removing navigational queries from search engines; between that and Claude being able to answer technical questions, it's remarkable how rarely I even use browsers for day-to-day tasks anymore (as in web views with tabs and url bars).

    We've been using the web (as in documents interconnected with links between servers) for a great number of tasks it was never quite designed to solve, and the result has always been awkward. It's been very refreshing to move away from the web browser-search engine duo for these things.

    For one, and it took me a while to notice what was off: there are like no ads anymore, anywhere. Not because I use adblockers, but because I simply don't end up directed to places where there are ads. And let me tell you, if you've been away from that stuff for a while, and then come back, holy crap what a dumpster fire.

    The web browser has been center stage for a long while, coasting on momentum and old habits, but it turns out it doesn't need to be, and if you work to get rid of it, you get a better and more enjoyable computing experience. Given how much better this feels, I can't help but feel we're in for a big shift in how computers are used.

    [1] You can just launch 'chrome --app=url' to make one. Or use Electron if you want to customize the UI yourself.
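
    For the Electron route, a minimal main process looks something like this; the window options and target URL are just an example:

    ```typescript
    // Minimal Electron main process for a single-site "browser": one window,
    // no tabs or URL bar. The loaded URL here is only an example.
    import { app, BrowserWindow } from "electron";

    app.whenReady().then(() => {
      const win = new BrowserWindow({
        width: 1200,
        height: 800,
        autoHideMenuBar: true, // hide the menu so only the site's own UI is visible
      });
      win.loadURL("https://news.ycombinator.com/");
    });

    // Quit when the window is closed instead of lingering in the background.
    app.on("window-all-closed", () => app.quit());
    ```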

    • While I am glad that you seem to have found a new workflow that you like, your description strikes me as a personal experience.

      I am aware that a lot of people use searches as a form of navigation, but it's also very common for people to use bookmarks, speed dial, history, pinned tabs, and other browser features instead of searching. My Firefox is configured not to do online searches when I type into the address bar; instead I get only history suggestions. This setup allows for quick navigation, and does not require any steps to set up new pages that I need to visit.

      What I want to say is that while you seem to imply you have found a different pattern of use that many people will soon migrate to, I think these patterns have always been popular. People discover and make use of them as needed.

      It’s also strange that you put such a negative sentiment on interconnected documents. Do you not realize how important these connections were for you to be able to reach the point you are at now? How else would you have found the things that are useful to you? By watching ads?

      Search engines are also really not a good example of the strengths of the interconnected web, as they are mostly a one-way thing. Consider instead a Hacker News discussion about a blog, and some other blog linking to that discussion, creating these interconnected but still separate communities and documents.

      1 reply →

    • > I've been experimenting with creating single-site browsers[1] for all websites I routinely visit, effectively removing navigational queries from search engines

      Surely it would make more sense to use bookmarks?

      3 replies →

      As a serious computer user, with getting on for 25 years of using text-based search tools, I've long made various "single-site" tools. A big inspiration way back was Surfraw [1], originally created by Julian Assange. The reality is, most of us use a small number of websites regularly. Nearly all the info I want to touch is three keystrokes away on the command line or from within emacs.

      When search died, practically a few years ago now, I was still teaching a level-7 Research Methods course. The universities literally did not notice that all of the advice we gave students was totally obsolete and that it was no longer really possible to conduct academic research that way.

      Research today is very much more like it was in the pre-internet era. You need to curate and keep in mind a set of reliable sources and personal, private collections.

      Had the misfortune of needing to spend a week using a standard browser and sites like Google. It was beyond shocking. What I found I can only describe as a wastescape, a war zone, a bombed-out favela with burned out cars, overflowing sewers, piles of rubble and dead dogs lying in gutters.

      My first thought was kinda, "Oh sweet Jesus Christ, what happened to my Internet?", and the very next one was "How does anyone get anything done now? How does the economy still function?" And of course the answers are "They don't" and "It doesn't".

      I think this is a really serious situation. There's simply no way that as "knowledge workers", scientists, or whatever people call us now, we can be as competitive as we were 10 or 20 years ago given the colossal degradation of our tools. We have to stop this foolish self-deception that things are "getting better". Google were a company that created free search. Well done. But that was then. We remain stuck in this strange mythology that advertising companies like Google and other enshittified BigTech are a net asset to the economy. Surely they're a vast parasitical drain and need digging into the ground so the rest of us can get on with something resembling progress?

      [1] http://surfraw.org/

  • Can it find OLD articles? I generally don't like the idea of a search engine which requires me to be logged in to track my search history (and I do mostly use Google in incognito/private browser windows), but I might ignore that if it allows me to do the one thing that Google refuses to do on phones anymore (which might be a sign that they're gonna phase that out from desktop interfaces soon)...

What I find amusing is that this is Google. It's their bots, and now LLMs as well, that have hammered people's websites for years.

  • Have they hammered people's websites? I find that the Google bot makes as few requests as it can, and it respects robots.txt.

I believe the main intent is to block SERP analysers, which track result positions by keyword. Not that it would help a lot with bot abuse, but it will make regular SEO agencies' lives harder and more expensive.

Last month Google also tightened YouTube policies, which IMHO is a sign that they are not reaching specific milestones, and that would definitely be reflected in Alphabet's stock.

They are going to make Google search even more broken than it is already? Be my guest! Since they are an ads business, I guess they don't really care about their search any longer, or they have sniffed some potential to gather even more information on users using Google, if they require running JS for it to work. Who knows. But anyone valuing their privacy has long left anyway.

Almost everyone I know has moved a lot of their searching onto ChatGPT or WhatsApp AI querying.

Everyone I know under 25 has stopped using Google search altogether.

I think the only people disabling JavaScript must be GenX graybeards such as myself or security experts.

Don't be evil.

How else are you going to load a hideously incorrect AI summary block without your initial page latency being through the roof?

  • You could probably get it working with declarative shadow DOM, streaming in the AI-generated content at the end of the HTML document and slotting it into place. There are no doubt a lot of gotchas, but at first glance it seems feasible. Here's a demo I found of something like that: https://github.com/dgp1130/out-of-order-streaming

    • The example repo is a little confusing to me, since it seems to use client-side JS to demonstrate that it doesn't need client-side JS: "It bootstraps a service worker and [...] No client-side JavaScript!"

      But I guess the point is that the code in the service worker could have been on the server instead?

      The trick seems to be using a template element with a slot and then slotting in the streamed content at the end. But you could probably also do it using just CSS to reposition the content from the bottom to the top, similarly to how many websites handle navigation menus, assuming that the client supports CSS.
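
      A sketch of that slot trick, assuming a Node server and made-up element names: the shell with a named slot (and fallback content) is flushed immediately, the slow AI block is streamed at the very end of the host's light DOM, and it still renders in the slot's position without any client-side JS.

      ```typescript
      // Out-of-order streaming via declarative shadow DOM (illustrative only).
      import { createServer } from "node:http";

      createServer(async (_req, res) => {
        res.writeHead(200, { "Content-Type": "text/html" });

        // Flush the shell immediately; the named <slot> marks where the AI block goes.
        res.write(`<!doctype html>
      <search-results>
        <template shadowrootmode="open">
          <slot name="ai-summary"><p>Summary loading…</p></slot>
          <slot></slot>
        </template>
        <ul><li>Organic result 1</li><li>Organic result 2</li></ul>`);

        // Simulate the slow AI call, then stream the slotted element last.
        await new Promise((resolve) => setTimeout(resolve, 2000));
        res.write(`
        <div slot="ai-summary"><p>AI summary, streamed late but rendered up top.</p></div>
      </search-results>`);
        res.end();
      }).listen(8080);
      ```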

  • Iframes lazy

    Object content as lazy

    Embed lazy

    Image lazy

    Link rel=import (not supported that widely, though)

    Heck, if you wanted to get REALLY cute you could use multipart/x-mixed-replace headers.

    Or SSE

Kagi it is, then

Well, I read the HN headline and said to myself, I bet this requirement is pitched as "...to enhance the user experience...", and, yep, it's there.

That's akin to companies responding to some incident with "We take [user security etc.] seriously", when the immediate thought is: yeah, but if you did, that [thing] probably wouldn't have happened.

Dunno why I wrote all that - I don't use Google search, because I wanted to enhance (aka unenshitten) my search experience.

Honestly, I wouldn't be surprised if Google required some proof-of-work on the browser host's CPU/GPU to validate search requests and thereby make them infeasible for bots.

  • That brings up an interesting conundrum. If PoW were implemented, could known-valid accounts (i.e. in good standing for over a decade) be switched over to PoS instead? Or paying accounts?

    PoW could be built into infrequently visited pages such as the registration page and the password reset page. It could run while the user fills in the form. I might implement this on some sites that get attacked.

    • This gives me an idea: thanks to anti-spam mechanisms, residential proxies + headless browsers now provide a better experience than regular browsing on real devices.

      Instead of PoW, maybe just make the clients prove they are capable of proxying browser sessions?

      1 reply →