Tell HN: Cloudflare is blocking Pale Moon and other non-mainstream browsers
1 year ago
Hello.
Cloudflare's Browser Integrity Check/Verification/Challenge feature, used by many websites, is denying access to users of non-mainstream browsers like Pale Moon.
User reports began on January 31:
https://news.ycombinator.com/item?id=31317886
A Cloudflare product manager declared back then: "...we do not want to be in the business of saying one browser is more legitimate than another."
As of now, there is no official response from Cloudflare. Internet access is still denied by their tool.
Yesterday I was attempting to buy a product on a small retailer's website—as soon as I hit the "add to cart" button I got a message from Cloudflare: "Sorry, you have been blocked". My only recourse was to message the owner of the domain asking them to unblock me. Of course, I didn't, and decided to buy the product elsewhere. I wasn't doing anything suspicious... using Arc on an M1 MBP; normal browsing habits.
Not sure if this problem is common, but I would be pretty upset if I implemented Cloudflare and it started to inadvertently hurt my sales figures. I would hope the cost to retailers is trivial in this case; I guess the upside of blocking automated traffic can be quite large.
Just checked again and I'm still blocked on the website. Hopefully this kind of thing gets sorted out.
> I would be pretty upset if I implemented Cloudflare and it started to inadvertently hurt my sales figures.
The problem is that all these Cloudflare forensics-based throttling and blocking efforts don't hurt sales figures.
The number of legitimate users running Arc is a rounding error. Arc browser users often come to Cloudflare without third-party tracking and without cookies, which is weird and therefore suspicious - you look an awful lot like a freshly instantiated headless browser, in contrast to the vast majority of legitimate users who are carrying around a ton of tracking data. And by blocking cookies and ads, you wouldn't even be attributable in most of the stats if they did let you in.
It would be like kicking anyone wearing dark sunglasses out of a physical store: sure, burglars are likely to want to hide their eyes. Retail shrink is something like 1.5% of inventory, while blind users are <0.5% of the population. It would violate the ADA (and basic ethics) to bar all blind shoppers, so in the real world we've decided that it's not legal to discriminate on this basis even if it would be a net positive for your financials.
The web is a nearly unregulated open ocean: Cloudflare can effectively block anyone for any reason, and they don't have much incentive to show compassion to legitimate users that end up as bycatch in their trawl nets.
Something tells me that if you asked the store owner that the poster tried to give money to, they'd be furious at cloudflare for stopping the transaction.
What about all false positives in aggregate?
The problem is site owners do not know - it just adds to the number of blocked threats in cloudflare's reassuring emails.
The number of legitimate users on "not chrome, edge, safari, or firefox" is about 10% of the browser market. I don't know about you, but if I'm running a shop, and the whole point of my website is to make sales, but my front door is preventing 10% of those sales? That door is getting replaced.
I wonder if cloudflare blocks like these affect screen reader users, in which case they may violate the ADA.
Vendors who block iCloud Relay are the worst. I'm sure they don't even know they're doing it. But some significant percentage of Apple users -- and you'd have to think it's only gonna grow -- comes from those IP address ranges.
Bad business, guys. You gotta find another way. Blocking IP addresses is o-ver.
> Bad business, guys. You gotta find another way. Blocking IP addresses is o-ver.
no, it's still the front line. And likely always will be. It's the only client identifier bots can't lie about. (or nearly the only)
At $OLDJOB, ASN reputation was the single best predictor of traffic hostility. We were usually smart enough to know which ASNs we could or couldn't block outright. But it's an insane take to say network-based blocking is over... especially on a thread about some vendor blocking benign users because of the user-agent.
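A minimal sketch of what that kind of reputation scoring can look like, in Python. The ASN table, prefix map, and threshold here are hypothetical stand-ins, not $OLDJOB's actual rules; real systems resolve IP-to-ASN from BGP data or a feed (e.g. Team Cymru or MaxMind) rather than a hardcoded dict:

    import ipaddress

    # Hypothetical reputation table: ASN -> fraction of past traffic that was hostile.
    ASN_REPUTATION = {
        64496: 0.02,  # example: a well-behaved ISP
        64512: 0.91,  # example: a bulletproof-hosting network
    }

    # Hypothetical prefix-to-ASN map (RFC 5737 documentation range).
    PREFIX_TO_ASN = {
        ipaddress.ip_network("198.51.100.0/24"): 64512,
    }

    def asn_for(ip: str) -> int | None:
        addr = ipaddress.ip_address(ip)
        for prefix, asn in PREFIX_TO_ASN.items():
            if addr in prefix:
                return asn
        return None

    def should_challenge(ip: str, threshold: float = 0.5) -> bool:
        """Challenge (rather than hard-block) traffic from ASNs with a bad history."""
        asn = asn_for(ip)
        return ASN_REPUTATION.get(asn, 0.0) >= threshold

The point is that the decision keys off the network's history, not the client's user-agent string.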
This would be weird, esp. given that Cloudflare is one of the vendors who act as exit nodes for iCloud Relay.
I’ve noticed wifi at coffee shops, etc have started blocking it too.
I need to disable it for one of my internal networks (because I have DNS overrides that go to 192.168.0.x), or I'd wish they'd just make it mandatory for iPhones and put an end to such shenanigans.
Apple could make it a bit more configurable for power users, and then flip the “always on” nuclear option switch.
Either that, or they could add a “workaround oppressive regimes” toggle that’d probably be disabled in China, but hey, I’m in the US, so whatever.
Edit: I also agree that blocking / geolocating IP addresses is a big anti-pattern these days. Many ISPs use CGNAT. For instance, all Starlink traffic from the southern half of the west coast appears to come from LA.
As a result, some apps have started hell-banning my phone every time I drive to work because they see me teleport hundreds of miles in 10 minutes every morning. (And both of my two IPs probably have 100’s of concurrent users at any given time. I’m sure some of them are doing something naughty).
Wait, this comment made me aware of the existence of iCloud Relay. Apple built their own Tor only for Apple users? Why would they do that? Why not use Tor???
If you use a weird proxy you're gonna get blocked. Facts of life.
Well, it's primarily because the security vendors for, say, WAFs and other tools list these IPs in the "Anonymizers" or "VPN" category, and typically these are blocked, since you seldom see legitimate traffic to your storefront or account pages originating from them. Another vendor we use lists these under "hacking tools". So your option as a security professional is to tell your risk management team that we either allow "hacking tools" or lose iCloud Relay customers. Which way do you think they steer? In other cases a site may use a vendor for their cart/checkout page and not even have control over these blocks, as that vendor is also blocking "hacking tools" or "anonymizers" from hitting their checkout pages.
To access any site protected by Cloudflare captcha I have to change browsers from Firefox to Chrome. And I have a basically default suite of addons (uBlock is the only one affecting the pages themselves).
VPN doesn't matter; I probably share an IP with someone "flagged" via the ISP.
Every site, that is, except their Cloudflare dashboard.
I have come across several websites on which Cloudflare blocks my devices, whatever I use. No Captcha, just blocked. I tried a stock iPhone (Safari, no blockers, no VPN, no iCloud relay, both on wifi or 4G), and a Windows PC with Firefox, Chrome, or Edge, no luck. That includes a website of a local business so that can't be the country either.
I have no idea why.
Maybe you have anti-fingerprinting protection on? I've heard it can cause issues.
> Of course, I didn't, and decided to buy the product elsewhere
Consider messaging the owner to tell them you were trying to buy a product on their site and the site wouldn't let you. There's a chance that they'll care and be able to do something about it. But no chance if they don't know about the problem!
I think this is on Cloudflare. Perhaps there is demand for such a service, but implementing it like this is another matter. And this is very bad for a free, and therefore safe, net.
I don't even know which attack vectors an integrity check for a browser could help against. Against infected clients? In any case, it is evidently not effective.
There is some political-philosophical irony that the Chinese prefer their government to do the blocking and take away their freedom, while the US prefers their monopolistic capitalistic corporate world to do it. A rose by any other name. Choose your friends carefully.
> using Arc on a M1 MBP; normal browsing habits.
Well, I've certainly never heard of this browser before and it still seems pretty young. I'd guess it's the same issue.
Arc is almost 3 (4?) years old and was the darling child of dev influencers for the better part of 2 years. It's not a niche browser, especially amongst devs that are likely to work at Cloudflare.
I'm still not sure how some random browser should result in a block by the provider. I don't think there's any security risk for the provider of the site by using an outdated browser. Blocking malicious IPs yes/maybe, blocking suspicious activity maybe. But blocking because you have browser X? Please, no.
This is going to lead to a two-class internet where new technologies will not emerge and big players will win, because the barrier to entry is so absurdly high and random that people stop inventing.
It's a chromium derivative.
I think it's also EOL/not getting updates now?
I mean, I never used it; their only selling point seems to have been hype.
Cloudflare doesn't report this to the site admins so they're just sitting there losing sales and thinking Cloudflare is doing a good job.
Same thing with Captchas. If I'm placing a food order or something and I'm presented with a Captcha 9 times out of 10 I just say "screw it."
Try clearing your cookies and disabling all extensions; if that still results in a block, you can try a mobile hotspot. You're either failing some server-side check (IP, TCP fingerprint, JA3, etc.) or a client-side check of your browser integrity (generally this is tampered with by privacy-focused extensions, anti-fingerprint settings, etc.). It's not a "fix" but can at least give you an indication of why it is happening.
That's quite a lot to ask. Not OP, but I'm not doing all that just because someone else misconfigured their anti-DDoS, unless I really need to.
I believe their point was that they have no desire to fix the issue if they can just look elsewhere, making it detrimental to the vendor more so than the end-user.
I think it's unfair this comment has been flagged or downvoted or whatever. It's pragmatic information!
The mobile hotspot thing... I have to do that to do anything involving Okta.
For some frustrating reason my IPv4 address, which I pay extra to my ISP to have, has been blocklisted by Okta. My best guess is that a login flow failure in one of the apps work uses got my address banned indefinitely. My work's Okta admins don't really understand how to unblock me on their Okta tenancy, and Okta support just directs me back to my local admins (even though it's any Okta-using org I'm banned from logging into).
I get that misuse/abuse detection has to do its thing, but it's so frustrating when there's basically zero way for a legitimate user on an IP to undo a ban. My only recourse is to do all my Okta use from another IP... If I were a legit spammer I wouldn't think twice about switching to another IP from my big pool, probably.
You should really take the few minutes to email them and let them know that's happening. It's not their fault Cloudflare is awful.
if the purpose of cloudflare is to block bots and allow humans in, then they fail miserably at their job. what they're doing instead can be summarized in one word: DISCRIMINATION. welcome to the age of internet apartheid.
They are so successful in blocking noob scrapers that an entire industry is blooming around professional web scraping services.
Were you on a VPN?
Some vendors are just weird... I'm always getting blocked by Etsy with Firefox after the first navigation on their site. It shows me a puzzle to solve and then, after solving the puzzle correctly (read "Success"), redirects me to "You have been blocked". It works with Chrome-based browsers though, but that doesn't make me want to use the website at all.
No VPN, just good privacy settings in my case.
Nope, no VPN, making it all the stranger.
This echoes the user agent checking that was prevalent in past times. Websites would limit features and sometimes refuse to render for the "wrong" browser, even if that browser had the ability to display the website just fine. So browsers started pretending to be other browsers in their user agents. Case in point - my Chrome browser, running on an M3 mac, has the following user agent:
'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36'
That means my browser is pretending to be Firefox AND Safari on an Intel chip.
I don't know what features Cloudflare uses to determine what browser you're on, or if perhaps it's sophisticated enough to get past the user agent spoofing, but it's all rather funny and reminiscent just the same.
I'm still using Mozilla/5.0 (iPhone; CPU iPhone OS 6_0 like Mac OS X) AppleWebKit/536.26 (KHTML, like Gecko) Version/6.0 Mobile/10A5376e Safari/8536.25 on my desktop.
The internet is so much better like this! There is a 2010 lightweight mobile version of Google, and m.youtube with an obviously cleaner and better UI and not a single ad (apparently it's not worth showing you ads if you still appear to be using iphone 6)
> (apparently it's not worth showing you ads if you still appear to be using iphone 6)
Why not adwall the user instead, showing only ads until they upgrade the device or buy premium?
This is iOS 6 and not iPhone 6, btw.
I tried this just for fun and youtube said to update my browser :(
As a counterpoint, I asked Claude to write a script to fetch Claude usage and expose it as a Prometheus metric. As no public API exists, Claude suggested I grab the request from the Network tab. I copied it as cURL, and attempted to run it, and was denied with a 403 from CF.
I forgot I'd left the script open, polling for about 20 minutes, and suddenly it started working.
So even sending all the same headers as Firefox, but with cURL, CF seemed to detect automated access, and then eventually allowed it through anyway after it saw I was only polling once a minute. I found this rather impressive. Are they using subtle timings? Does cURL have an easy-to-spot fingerprint outside of its headers?
Reminded me of this attack, where they can detect when a script is running under "curl | sh" and serve alternate code versus when it is read in the browser: https://news.ycombinator.com/item?id=17636032
> Does cURL have an easy-to-spot fingerprint outside of its headers?
If it's a https URL: Yes, the TLS handshake. There are curl builds[1] which try (and succeed) to imitate the TLS handshake (and settings for HTTP/2) of a normal browser, though.
[1] https://github.com/lwthiker/curl-impersonate
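To make the TLS-handshake point concrete: the classic JA3 fingerprint is nothing more than an MD5 over a handful of ClientHello fields, so any client whose TLS library offers a different cipher/extension ordering than a real browser stands out no matter what its HTTP headers say. A toy Python sketch (the field values below are illustrative; real implementations parse them out of the raw handshake):

    import hashlib

    def ja3(version: int, ciphers: list[int], extensions: list[int],
            curves: list[int], point_formats: list[int]) -> str:
        # JA3 string layout: SSLVersion,Ciphers,Extensions,EllipticCurves,PointFormats
        fields = [
            str(version),
            "-".join(map(str, ciphers)),
            "-".join(map(str, extensions)),
            "-".join(map(str, curves)),
            "-".join(map(str, point_formats)),
        ]
        return hashlib.md5(",".join(fields).encode()).hexdigest()

    # Two clients sending byte-identical HTTP headers still hash differently
    # here if their TLS stacks negotiate differently.
    print(ja3(771, [4865, 4866, 4867], [0, 23, 65281], [29, 23, 24], [0]))

curl-impersonate works by building curl against a browser's TLS library and settings, so this hash comes out matching the browser it mimics.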
It's possible there was an attack that stopped, which led to a more lenient anti-bot posture.
> if perhaps it's sophisticated enough to get past the user agent spoofing
As part of some browser fingerprinting I have access to at work, I can say there are both commercial and free solutions to determine the actual browser being used.
It's quite easy even if you're just going off of the browser-exposed properties. You just check the values against a prepopulated table. You can see some such values here: https://amiunique.org/fingerprint
Edit: To follow up, one of the leading fingerprinting libraries just ignores useragent and uses functionality testing as well: https://github.com/fingerprintjs/fingerprintjs/blob/master/s...
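The gist of that table check, as a toy Python sketch. The two property values shown are real examples (WebKit-derived browsers report navigator.productSub as "20030107", Firefox as "20100101"; window.chrome exists only in Chrome), but actual services track hundreds of such properties per browser release:

    # Hypothetical per-browser expectation table.
    EXPECTED = {
        "chrome":  {"productSub": "20030107", "has_window_chrome": True},
        "firefox": {"productSub": "20100101", "has_window_chrome": False},
    }

    def ua_is_consistent(claimed_family: str, observed: dict) -> bool:
        """Does the browser claimed in the UA actually behave like that browser?"""
        expected = EXPECTED.get(claimed_family)
        if expected is None:
            return False  # unknown family: fall through to other checks
        return all(observed.get(k) == v for k, v in expected.items())

    # A client spoofing Chrome's UA on a non-Chrome engine fails the check:
    print(ua_is_consistent("chrome",
                           {"productSub": "20100101", "has_window_chrome": False}))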
They are pretending to be an ancient Mozilla version from the time after Netscape but before Firefox, KHTML (which was forked to webkit), Firefox (Gecko engine), Chrome and Safari. The only piece of browser history it's missing is somehow pretending to be IE.
> The only piece of browser history it's missing is somehow pretending to be IE.
They're kinda covered because IE also sent Mozilla/5.0 (or 4.0, 2.0, [..]).
Amusingly, I also just realized that even the operating system is spoofed here! I'm on macOS 14, yet the user agent claims "Mac OS X" 10.15. It's a pretty funny situation, and clearly for the sole benefit of very old websites and libraries performing dubious checks.
> I don't know what features Cloudflare uses to determine what browser you're on, or if perhaps it's sophisticated enough to get past the user agent spoofing, but it's all rather funny and reminiscent just the same.
Yes, it is. Both your TLS and TCP stacks are unique enough that such spoofing can be detected. But there are a lot of other things that can be fingerprinted as well.
> That means my browser is pretending to be Firefox AND Safari on an Intel chip.
That's not the case; that UA is Chrome on macOS. The rest is backward-compatibility garbage.
This is the user agent on Chrome, but the reason for all the references to other browsers (and an old OS and architecture), the backward compatibility garbage, is to pretend to be those browsers for the sake of old websites that are doing string matching on the user agents.
Slack was doing this with their huddle feature for the longest time (still were last I checked). Drives me crazy.
Doesn't drive me crazy - gives me a "Get Out of Huddles Free" card.
How many of you all are running bare metal hooked right up to the internet? Is DDoS or any of that actually a super common problem?
I know it happens, but also I've run plenty of servers hooked directly to the internet (with standard *nix security precautions and hosting provider DDoS protection) and haven't had it actually be an issue.
So why run absolutely everything through Cloudflare?
Yes, [D]DoS is a problem. It's not uncommon for a single person with residential fiber to have more bandwidth than your small site hosted on a 1U box or VPS. Either your bandwidth is rate limited and they can deny service to your site, or your bandwidth is greater but they can still push you over your allocation and cause massive charges.
In the past you could ban IPs but that's not very useful anymore.
The distributed attacks tend to be AI companies that assume every site has infinite bandwidth and their crawlers tend to run out of different regions.
Even if you aren't dealing with attacks or outages, Cloudflare's caching features can save you a ton of money.
If you haven't used Cloudflare, most sites only need their free tier offering.
It's hard to say no to a free service that provides a feature you need.
Source: I went over a decade hosting a site without a CDN before it became too difficult to deal with. Basically I spent 3 days straight banning IPs at the hosting-company level, tuning various rate-limiting web server modules, and even scaling the hardware to double the capacity. None of it could keep the site online 100% of the time. Within 30 mins of trying Cloudflare it was working perfectly.
> It's hard to say no to a free service that provides a feature you need.
Very true! Though you still see people who are surprised to learn that CF DDoS protection acts as a MITM proxy and can read your traffic in plaintext. This is of course by design, to inspect the traffic. But admittedly, CF is not very clear about this in the Admin Panel or docs.
Places one might expect to learn this, but won't:
- https://developers.cloudflare.com/dns/manage-dns-records/ref...
- https://developers.cloudflare.com/fundamentals/concepts/how-...
- https://imgur.com/a/zGegZ00
> not uncommon for a single person with residential fiber to have more bandwidth than your small site hosted on a 1u box or VPS.
Then self host from your connection at home, don't pay for the VPS :). That's what I've been doing for over a decade now and still never saw a (D)DoS attack
50 mbps has been enough to host various websites, including one site that allows several gigabytes of file upload unauthenticated for most of the time that I self host. Must say that 100 mbps is nicer though, even if not strictly necessary. Well, more is always nicer but returns really diminish after 100 (in 2025, for my use case). Probably it's different if you host videos, a Tor relay, etc. I'm just talking normal websites
I've been hosting web sites on my own bare metal in colo for more than 25 years. In all that time I've dealt with one DDoS that was big enough to bring everything down, and that was because of a specific person being pissed at another specific person. The attacker did jail time for DDoS activities.
Every other attempt at DDoS has been ineffective, has been form abuse and credential stuffing, has been generally amateurish enough to not take anything down.
I host (web, email, shells) lots of people including kids (young adults) who're learning about the Internet, about security, et cetera, who do dumb things like talk shit on irc. You'd think I'd've had more DDoS attacks than that rather famous one.
So when people assert with confidence that the Internet would fall over if companies like Cloudflare weren't there to "protect" them, I have to wonder how Cloudflare marketed so well that these people believe this BS with no experience. Sure, it could be something else, like someone running Wordpress with a default admin URL left open who makes a huge deal about how they're getting "hacked", but that wouldn't explain all the Cloudflare apologists.
Cloudflare wants to be a monopoly. They've shown they have no care in the world for marginalized people, whether they're people who don't live in a western country or people who simply prefer to not run mainstream OSes and browsers. They protect scammers because they make money from scammers. So why would people want to use them? That's a very good question.
Same basic experience. The colo ISP soaks up most actual DDoS. We had a couple mid-sized ones when we were hosting irc.binrev.net from salty b& users. No real effect other than the colo did let us know it was happening and that it was "not a significant amount of DDoS by our standards."
I'm sorry but lumping in people who prefer to use a weird browser with "marginalised people" does not help your credibility.
> How many of you all are running bare metal hooked right up to the internet?
I do. Many people I know do. In my risk model, DDoS is something purely theoretical. Yes it can happen, but you have to seriously upset someone for it to maybe happen.
From my experience, if you tick off the wrong person, the threshold for them starting a DDoS is surprisingly low.
A while ago, my company was hiring and conducting interviews, and after one candidate was rejected, one of our sites got hit by a DDoS. I wasn't in the room when people were dealing with it, but in the post-incident review, they said "we're 99% sure we know exactly who this came from".
I run a Mediawiki instance for an online community on a fairly cheap box (not a ton of traffic) but had a few instances of AI bots like Amazon's crawling a lot of expensive API pages thousands of times an hour (despite robots.txt preventing those). Turned on Cloudflare's bot blocking and 50% of total traffic instantly went away. Even now, blocked bot requests make up 25% of total requests to the site. Without blocking I would have needed to upgrade quite a bit or play a tiring game of whack a mole blocking any new IP ranges for the dozens of bots.
AI bots are a huge issue for a lot of sites. Intentional DDoS attacks aside, AI scrapers can frequently tip over a site because many of them don't know how to back off. Google is an exception really; their experience creating GoogleBot has ensured that they are never a problem.
Many of the AI scrapers don't identify themselves. They live on AWS, Azure, Alibaba Cloud, and Tencent Cloud, so you can't really block them, and rate limiting also has limited effect as they just jump to new IPs. As a site owner, you can't really contact AWS and ask them to terminate their customer's service in order for you to recover.
How do you feel, knowing that some portion of the 25% “detected bot traffic” are actually people in this comment thread?
You don't need buttflare's mystery juice to rate-limit or block bad users.
I've been running jakstys.lt (and subdomains like git.jakstys.lt) from my closet, a simple residential connection with a small monthly price for a static IP.
The only time I had a problem was when gitea started caching git bundles of my Linux kernel mirror, which bots kept downloading (things like a full tar.gz of every commit since 2005). The server promptly ran out of disk space. I fixed the gitea settings to not cache those. That was it.
Never a DDoS. Or maybe I (and uptimerobot) just did not notice it. :)
Small/medium SaaS. Had ~8 hours of 100k reqs/sec last year when we usually see 100-150 reqs/sec. Moved everything behind a Cloudflare Enterprise setup and ditched AWS Client Access VPN (OpenVPN) for Cloudflare WARP
I've only been here 1.5 years, but it sounds like we usually see 1 decent-sized DDoS a year, plus a handful of other "DoS" events, usually AI crawler extensions or 3rd parties calling too aggressively
There are some extensions/products that create a "personal AI knowledge base", and they'll use the customer's login credentials and scrape every link once an hour. Some links are really, really resource-intensive data or report requests that are very rare in real usage
Did you put rate limiting rules on your webserver?
Why was that not enough to mitigate the DDoS?
It’s free unless you’re rolling in traffic, it’s extremely easy to setup, and CF can handle pretty much all of your infra with tools way better than AWS.
Also you can buy a cheaper IPv6-only VPS and run it through the free CF proxy to allow IPv4 traffic to your site
Easy to set up, easy to screw up user experience. Easy-peasy.
Most exploits target the software, not the hardware. CF is a good reverse proxy.
I also rely on hosting provider DDoS protection and don't use very intrusive protection like Cloudflare.
Only issues I had to deal with are when someone finds some slow endpoint, and manages to overload the server with it, and my go to approach is to optimize it to max <10-20ms response time, while blocking the source of traffic if it keeps being too annoying after optimization.
And this happened like 2-3 times over 20 years of hosting the eshop.
Much better than exposing users to CF or likes of it.
Most (D)DoS attacks are just UDP floods or SYN floods that iptables will handle without any problem. Sometimes what people think is a DDoS is just their application DDoSing itself because it's doing recursive calls to some back-end micro-service.
If it was actually a traffic-based DDoS, someone still needs to pay for that bandwidth, which would be too expensive for most companies anyway - even if it kept your site running.
But you can sell a lot of services to incompetent people.
What's the iptables invocation that will let my 10Gbps connection drop a 100Gbps SYN flood while also serving good traffic?
You need an answer to someone buying $10 of booter time and sending a volumetric attack your way. If any of the traffic is even reaching your server, you've already lost, so iptables isn't going to help you because your link is saturated.
Cloudflare offers protection for free.
I run my "server" [1] straight to my home internet, and maybe I should count my blessings but I haven't had any issues with DDoS in the years I've done this.
I have relatively fast internet, so maybe it's fast enough to absorb a lot of the problems, but I've had good enough luck with some basic Nginx settings and fail2ban.
[1] a small little mini gaming PC running NixOS.
I would feel pretty safe running my own hand-written services against the raw Internet, but if I was to host Wordpress or other large/complicated/legacy codebases I'd start to get worried. Also the CDN aspect is useful - having lived in Australia you like connections that don't have to traverse continents for every request.
It is common once your website hits a certain threshold in popularity.
If you are just a small startup or a blog, you'll probably never see an attack.
Even if you don't host anything offensive you can be targeted by competitors, blackmailed for money, or just randomly selected by a hacker to test the power of their botnet.
Other comments say that DDoS attacks are common; that's not my experience though. I run a couple of API/SaaS sites and DDoSes are rare. Sites are in Canada and Brazil, if that matters, although I won't disclose which data centers. The strangest thing is that no one ever demanded any ransom during those DDoS attacks. Just some flooding for 1-2 days. Most of the time I didn't even care - servers are on 10G ports and I pay 95th percentile for the traffic with a cap on the final bill. Sites are geo-fenced by nftables rules; only countries of interest are allowed.
They make it easy to delegate a DNS zone to them and use their API to create records (eg: install external-dns on Kubernetes and let it create records automatically for ingresses)
The biggest problems I see with DDoS are metered traffic and availability. The largest cloud providers all meter their traffic.
The availability part on the other hand is maybe something that's not so business critical for many but for targeted long-term attacks it probably is.
So I think for some websites, especially smaller ones it's totally feasible to not use Cloudflare but involves planning the hosting really carefully.
DDoS is a problem, but for most ordinary problems it's not as bad as people make it out to be. Even something very simple like fail2ban will go a long way.
Web scraping without any kind of sleeping in between requests (usually firing many threads at once), as well as heavy exploit scanning is a near constant for most websites. With AI technology, it's only getting worse, as vendors attempt to bring in content from all over the web without regard for resource usage. Depending on the industry, DDoS can be very common from competitors that aren't afraid to rent out botnets to boost their business and tear down those they compete against.
Check your logs, you might be surprised.
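For anyone who does want to look, a few lines of Python over a common-format access log show the fail2ban-style picture (the log path and threshold are illustrative):

    import re
    from collections import Counter

    # Matches the common/combined log format used by nginx and Apache.
    LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "(\S+) (\S+)[^"]*" (\d{3})')

    def suspicious_ips(path: str, min_errors: int = 50):
        """Count 4xx responses per client IP; heavy 404/403 churn usually means a scanner."""
        errors = Counter()
        with open(path) as f:
            for line in f:
                m = LOG_RE.match(line)
                if m and m.group(4).startswith("4"):
                    errors[m.group(1)] += 1
        return [(ip, n) for ip, n in errors.most_common() if n >= min_errors]

    for ip, n in suspicious_ips("/var/log/nginx/access.log"):
        print(f"{ip}\t{n} client errors")

Then decide deliberately whether any of it is worth acting on, rather than reacting to every line.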
As a website owner and VPN user I see both sides of this.
On one hand, I get the annoying "Verify" box every time I use ChatGPT (and now due to its popularity, DeepSeek as well).
On the other hand, without Cloudflare I'd be seeing thousands of junk requests and hacking attempts every day, people attempting credit card fraud, etc.
I honestly don't know what the solution is.
What is a "junk" request? Is it hammering an expensive endpoint 5000 times per second, or just somebody using your website in a way you don't like? I've also been on both sides of it (on-call at 3am getting dos'd is no fun), but I think the danger here is that we've gotten to a point where a new google can't realistically be created.
The thing is that these tools are generally used to further entrench power that monopolies, duopolies, and cartels already have. Example: I've built an app that compares grocery prices as you make a shopping list, and you would not believe the extent that grocers go to to make price comparison difficult. This thing doesn't make thousands or even hundreds of requests - maybe a few dozen over the course of a day. What I thought would be a quick little project has turned out to be wildly adversarial. But now spite driven development is a factor so I will press on.
It will always be a cat and mouse game, but we're at a point where the cat has a 46 billion dollar market cap and handles a huge portion of traffic on the internet.
I've had such bots on my server: some Chinese Huawei bot as well as an American one.
They ignored robots.txt (claimed not to, but I blacklisted them there and they didn't stop) and started randomly generating image paths. At some point /img/123.png became /img/123.png?a=123 or whatever, and they just kept adding parameters and subpaths for no good reason. Nginx dutifully ignored the extra parameters and kept sending the same image files over and over again, wasting everyone's time and bandwidth.
I was able to block these bots by just blocking the entire IP range at the firewall level (for Huawei I had to block all of China Telecom and later a huge range owned by Tencent for similar reasons).
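The firewall approach described above is simple to reproduce at the application layer too; Python's ipaddress module does the CIDR membership test (the ranges below are RFC 5737 documentation prefixes, not the actual Huawei/Tencent allocations):

    import ipaddress

    # Placeholder CIDRs; in practice you'd paste the provider's real
    # allocations from WHOIS/BGP data.
    BLOCKED = [ipaddress.ip_network(c) for c in ("192.0.2.0/24", "203.0.113.0/24")]

    def is_blocked(ip: str) -> bool:
        addr = ipaddress.ip_address(ip)
        return any(addr in net for net in BLOCKED)

    print(is_blocked("203.0.113.57"))  # True
    print(is_blocked("198.51.100.7"))  # False

Doing it at the firewall is still cheaper, since the packets never reach your web server.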
I have lost all faith in scrapers. I've written my own scrapers too, but almost all of the scrapers I've come across are nefarious. Some scour the internet searching for personal data to sell, some look for websites to send hack attempts at to brute force bug bounty programs, others are just scraping for more AI content. Until the scraping industry starts behaving, I can't feel bad for people blocking these things even if they hurt small search engines.
> somebody using your website in a way you don't like?
This usually includes people making a near-realtime updated perfect copy of your site and serving that copy for either scam or middle-manning transactions or straight fraud.
Having a clear category of "good bots" from verified or accepted companies would help for these cases. Cloudflare has such a system I think, but then a new search engine would have to go to each and every platform provider to make deals, and that also sounds impossible.
> and you would not believe the extent that grocers go to to make price comparison difficult. This thing doesn't make thousands or even hundreds of requests - maybe a few dozen over the course of a day.
It's gonna get even worse. Walmart & Kroger are implementing digital price tags, so whatever you see on the website will probably (purposefully?) be out of date by the time you get to the store.
Stores don't want you to compare.
Actually, I think creating a Google alternative has never been as doable as it is today.
I'll give a fun example from the past.
I used to work at a company that did auto inspections (e.g. if you turned a lease in, did a trade-in on a used car, private party, etc.).
Because of that, we had a server that contained 'condition reports', as well as the images that went through those condition reports.
Mind you, sometimes condition reports had to be revised. Maybe a photo was bad, maybe the photos were in the wrong order, etc.
It was a perfect storm:
- The Image caching was all inmem
- If an image didn't exist, the server would error with a 500
- IIS was set up such that too many errors caused a recycle
- Some scraper was working off a dataset (that ironically was 'corrected' in an hour or so) but contained an image that did not exist.
- The scraper, instead of eventually 'moving on' would keep retrying the URL.
It was the only time that org had an 'anyone who thinks they can help solve please attend' meeting at the IT level.
> and you would not believe the extent that grocers go to to make price comparison difficult. This thing doesn't make thousands or even hundreds of requests - maybe a few dozen over the course of a day.
Very true. I'm reminded of Oren Eini's tale of building an app to compare grocery prices in Israel, where the government apparently mandated supermarket chains to publish prices [0]. On top of even the government-mandated data sharing appearing to hit the wrong over/under for formatting, there's the constant issue of 'incomparabilities'.
And it's weird, because it immediately triggered memories of how 20-ish years ago, one of the most accessible Best Buys was across the street from a Circuit City, but good luck price matching, because the stores all happened to sell barely different laptops/desktops (e.g. up the storage but use a lower-grade CPU) so that nobody really had to price match.
[0] - https://ayende.com/blog/170978/the-business-process-of-compa...
+1 for spite-driven development.
Simple: We need to acknowledge that the vision of a decentralized internet as it was implemented was a complete failure, is dying, and will probably never return.
Robots went out of control, whether malicious or the AI scrapers or the Clearview surveillance kind; users learned to not trust random websites; SEO spam ruined search, the only thing that made a decentralized internet navigable; nation state attacks became a common occurrence; people prefer a few websites that do everything (Facebook becoming an eBay competitor). Even if it were possible to set rules banning Clearview or AI training, no nation outside of your own will follow them; an issue which even becomes a national security problem (are you sure, Taiwan, that China hasn't profiled everyone on your social media platforms by now?)
There is no solution. The dream itself was not sustainable. The only solution is either a global moratorium of understanding which everyone respectfully follows (wishful thinking, never happening); or splinternetting into national internets with different rules and strong firewalls (which is a deal with the devil, and still admitting the vision failed).
I hate that you're right.
To make matters worse, I suspect that not even a splinternet can save it. It needs a new foundation, preferably one that wasn't largely designed before security was a thing.
Federation is probably a good start, but it should be federated well below the application layer.
A walled garden where a real, vetted human being is responsible for each network device. It wouldn't scale but it could work locally.
Luckily the decentralization community has always been decentralized. There are plenty of decentralized networks to support.
The great firewall, but in reverse.
> On the other hand, without Cloudflare I'd be seeing thousands of junk requests and hacking attempts every day, people attempting credit card fraud, etc.
Yup!
> I honestly don't know what the solution is.
Force law enforcement to enforce the laws.
Or else, block the countries that don't combat fraud. That means... China? Hey isn't there a "trade war" being "started"? It sure would be fortunate if China (and certain other fraud-friendly countries around Asia/Pacific) were blocked from the rest of the Internet until/unless they provide enforcement and/or compensation their fraudulent use of technology.
A lot of this traffic is bouncing all over the world before it reaches your server. Almost always via at least one botnet. Finding the source of the traffic is pretty hopeless.
A lot of the fake browser traffic I'm seeing is coming from American data centres. China plays a major part, but if we're going by bot traffic, America will end up on the ban list pretty quickly.
Slightly more complicated, because a ton of the abuse comes from IPs located in western countries, explicitly to evade fraud and abuse detection. Now you can go after the western owners of those systems (and all the big ones do have large abuse teams to handle reports), but enforcement has a much higher latency. To be effective you would need a much more aggressive system. Stronger KYC. Changes in laws to allow for less due process and more "guilty by default" type systems that you then need to prove innocence to rebut.
A wild take only possible if you don't understand how the Internet works.
Credit card fraud exists because credit card companies can't (or won't) implement elementary security measures. There should be a requirement to confirm every online payment, but many sites today require just a cc number+date+code+zip, with no additional confirmation. I can't call it anything other than complicity in the crime.
Lost sales due to 2fa are greater than losses due to refunds
Something like iDeal, which is a payment processing system in the Netherlands.
It works so well and is very secure. You get to the checkout page on a website, click a link. If you’re on your phone, it hotlinks to open your banking app. If you’re on desktop, it shows a QR code which does the same.
When your bank app opens, it says “would you like to make this €28 payment to Business X?” And you click either yes or no on the app. You never even need to enter a card in the website!
You can also send money to other people instantly the same way, so it’s perfect for something like buying a used item from someone else.
Plus the whole IBAN system which makes it all possible!
What kind of fraud protection does iDeal have for customers?
> people attempting credit card fraud
this is wrong.
if someone can use your site they can use stolen cards, and bots doing this will not be stopped by them.
cloudflare only raises the cost of doing it; it may make scraping a million product pages unprofitable, but that doesn't apply to cc fraud yet.
They might be talking about people who are trying to automate the testing of hundreds of stolen credit cards with small purchases to see if they are still working. This is basically why we ended up using Cloudflare at work.
>that doesn't apply to cc fraud yet
It stops "card testing", where someone has bought or stolen a large number of cards and needs to verify which are still good. The usual technique is to cycle through all the cards on a smaller site selling something cheap (a $3 ebook, for example). The problem is that the high volume of fraud in a short time span will often get the merchant account or payment gateway account shut down, cutting off legitimate sales.
As a consumer, you should also be suspicious of a mysterious low value charge on your card because it could be the prelude to much larger charges.
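For merchants, the usual mitigation short of a full bot-management product is a velocity check on authorization attempts. A minimal sliding-window sketch in Python, with invented thresholds:

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 600
    MAX_ATTEMPTS = 5  # authorization attempts allowed per key per window

    attempts: dict[str, deque] = defaultdict(deque)

    def allow_charge(key: str, now: float | None = None) -> bool:
        """key might be an IP, a device fingerprint, or an account id."""
        if now is None:
            now = time.time()
        q = attempts[key]
        while q and now - q[0] > WINDOW_SECONDS:
            q.popleft()
        if len(q) >= MAX_ATTEMPTS:
            return False  # looks like card testing: many cards, small amounts, fast
        q.append(now)
        return True

It won't stop a distributed tester rotating IPs, which is exactly the gap products like Cloudflare's are sold to fill.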
If I were hosting a web page, I would want it to be able to reach as many people as possible. So in choosing between CDNs, I would choose the one that provides greater browser compatibility, all other things equal. So in principle, the incentives are there for Cloudflare to fix the issue. But the size of the incentive may be the problem. Not too many customers are complaining about these non-mainstream browsers.
In that case you can turn off / not turn on the WAF feature(s) of Cloudflare - it's optional and configured by the webmaster.
> If I were hosting a web page, I would want it to be able to reach as many people as possible. So in choosing between CDNs
I host many webpages and this is exactly it. Anyone is welcome to use the websites I host. There is no CDN, your TLS session terminates at the endpoint (end to end encryption). May be a bit slower for the pages having static assets if you're coming from outside of Europe, but the pages are light anyway (no 2 MB JavaScript blobs)
> On the other hand, without Cloudflare I'd be seeing thousands of junk requests and hacking attempts every day, people attempting credit card fraud, etc.
>
> I honestly don't know what the solution is.
The solution is good security-- Cloudflare only cuts down on the noise. I'm watching junk requests and hacking attempts flow through to my sites as we speak.
Whoops-- this was a draft I didn't intend to post in this state. I must have fatfingered the "reply" button somehow. Alas, too late to edit or delete now.
Cloudflare cuts down on the noise, but it also does the work of preventing scrapers and people who re-sell your site wholesale, and cutting down on the noise also means cutting down on the cost of network requests.
It also can help where security is lax. You should have measures against credential stuffing, but if you don't, Cloudflare might prevent (some) of your users from being hacked. Which isn't good enough, but is better than no mitigation at all.
I don't use Cloudflare personally, but I won't dismiss it wholesale. I understand why people use it.
>Cloudflare only cuts down on the noise.
That sounds like the solution, that sounds like good security.
>On one hand, I get the annoying "Verify" box every time I use ChatGPT (and now due to its popularity, DeepSeek as well).
Though annoying, it's tolerable. It seemed like a fair solution. Blocking doesn't.
Simple: Don't look at the logs.
Bots are a fact of life. Secure your site properly, follow good practices, set up notifications for important things, log stuff, but don't look at the logs unless you have a reason to look at the logs.
Having run web servers forever, this is simply normal. What's not normal is blindly trusting a megacorporation to make my logs quiet. What're they doing? Who are they blocking? What guidelines do they use? Nobody, except them, knows.
It's why I self-host email. Sure, you might feel safe because most people use Gmail or Outlook, and therefore if there are problems, you can point the finger at them, but what if you want to discuss spam? Or have technical discussions about Trojans and viruses? Or you need to be 100% absolutely certain that email related to specific events is delivered, with no exceptions? You can't do that with Gmail / Outlook, because they have filters that you can't see and you can't control.
My VPN/Fileserver VPS is not behind Cloudflare, and I haven't had any trouble for years. Only the SSH port is accessible from outside (which is probably not even necessary), with password login disabled. I use fail2ban and a few other extra layers of security.
Credit cards are an ancient insecure technology that needs to go away. There are systems in Europe like iDEAL that are much more 21st century appropriate.
> I honestly don't know what the solution is.
well, for starters, if you're using cloudflare to block otherwise benign traffic, just because you're worried about some made... up....
> On the other hand, without Cloudflare I'd be seeing thousands of junk requests and hacking attempts every day, people attempting credit card fraud, etc.
well damn, if you're using it because otherwise you'd be exposing your users to active credit card fraud... I guess the original suggestion to only ban traffic once you find it to be abusive, and then only by subnet, doesn't really apply for you.
I wanna suggest using this as an excuse to learn how not to be a twat (the direction CF is moving towards more and more), where for most sites 20% of the work will get you 80% of the results... but dealing with cc fraud, your adversaries are already on the more advanced side, and that becomes a lot harder to prevent... rather than catch and stop after the fact.
Balancing the pervasive fear mongering with sensible rules is hard. Not because it's actually hard, but because that's the point of the FUD. To create the perception of a problem where there isn't one. With a few exceptions, a WAF doesn't provide meaningful benefits. It only serves to lower the number of log entries, it rarely ever reduces the actual risk.
I used to work at one of the top 1000 most-visited websites; we had massive bot issues where 60% of our traffic was bots, and we had to implement solutions similar to Cloudflare to reduce them. Also, with the rise of AI, it's become even more important, since a lot of AI data-scraping companies do not respect robots.txt.
accept reality and design your api so it's not a problem
I'm using Chrome on Linux and noticed that this year Cloudflare is very aggressive in showing the "Verify you are a human" box. Now a lot of sites that use Cloudflare show it, and once you solve the challenge it shows it again after 30 minutes!
What are you protecting, Cloudflare?
Also they show those captchas when going to robots.txt... unbelievable.
Cloudflare has been even worse for me on Linux + Firefox. On a number of sites I get the "Verify" challenge and after solving it immediately get a message saying "You have been blocked" every time. Clearing cookies, disabling UBO, and other changes make no difference. Reporting the issue to them does nothing.
This hostility to normal browsing behavior makes me extremely reluctant to ever use Cloudflare on any projects.
I'm a Cloudflare customer, even their own dashboard does not work with linux+slightly older firefox. I mean one click and it is ooops, please report the error to dev null
At least you can get past the challenge. For me, every-single-time it is an endless loop of "select all bikes/cars/trains". I've given up even trying to solve the challenge anymore and just close the page when it shows up.
I run a few Linux desktop VMs and Cloudflare's Turnstile verification (their auto/non-input based verification) fails for the couple sites I've tried that use it for logins, on latest Chromium and Firefox browsers. Doesn't matter that I'm even connecting from the same IP.
I'd presumed it was just the VM they're heuristically detecting but sounds like some are experiencing issues on Linux in general.
Check that you are allowing webworker scripts; that did the trick for me. I still have issues on slower computers (Raspberry Pis and the like), however, as they seem to be too slow to do whatever Cloudflare wants as verification in the allotted time.
Sounds like my experience browsing internet while connected to the VPN provided by my employer: tons of captcha and everything is defaulted to German (IP is from Frankfurt).
The problem is that you are not performing "normal browsing behavior". The vast majority of the population (at least ~70% don't use ad-blockers) have no extensions and change no settings, so they are 100% fingerprintable every time, which lets them through immediately.
linux + firefox. not sure what happened to me yesterday but the challenge/response thing was borked and when i finally got through it all, it said i was a robot anyway. this was while trying to sign up for a skype acct, could have been a ms issue though and not necessarily cloudflare. i think the solution is to just not use obstructive software. thanks to this issue i discovered jitsi and that seems more than enough for my purposes.
Yeah, Lego and Etsy are two sites I can now only visit with safari. It sucks. Firefox on the same machine it claims I'm a bot or a crawler. (not even on linux, on a mac)
Does it still apply if you change the UA to something more common (Chrome on Windows or something)?
Yeah, same here. I've avoided it for most of my customers for that very reason already
I have Firefox and Brave set to always clear cookies and everything when I close the browser... it is a nightmare when I come back, the amount of captchas everywhere....
It is either that or keep sending data back to the Meta and Co. overlords despite me not being a Facebook, Instagram, Whatsapp user...
You don't need to clear cookies to avoid sending that data back. Just use a browser that properly isolates third party/Facebook cookies.
I wonder if browsers have a future.
I don't bother with sites that have Cloudflare Turnstile. Web developers supposedly know the importance of page load time, but even worse than a slow-loading page is waiting for Cloudflare's gatekeeper before I can even see the page.
That's not turnstile, that's a Managed Challenge.
Turnstile is the in-page captcha option, which you're right, does affect page load. But they force a defer on the loading of that JS as best they can.
Also, turnstile is a Proof of Work check, and is meant to slow down & verify would-be attack vectors. Turnstile should only be used on things like Login, email change, "place order", etc.
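Whatever Turnstile actually runs under the hood, the proof-of-work idea the parent describes is easy to illustrate: the client burns CPU finding a nonce that the server can verify with a single hash. A toy Python sketch (not Cloudflare's actual scheme):

    import hashlib
    from itertools import count

    def solve(challenge: str, difficulty: int = 4) -> int:
        """Find a nonce whose hash starts with `difficulty` zero hex digits."""
        target = "0" * difficulty
        for nonce in count():
            digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce

    nonce = solve("example-challenge-token")
    # The server re-computes one SHA-256 to verify; the client did ~65k on average.
    print(nonce)

Negligible for one login, expensive for someone hammering a checkout endpoint thousands of times a second.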
The captcha on robots.txt is a misconfiguration in the website. CF has lots of issues, but this one is on their customer. Also, they detect Google and other bots, so those may be going through anyway.
Sure; but sensible defaults ought to be in place. There are certain "well-known" URLs that are intended for machine consumption. CF should permit (and perhaps rate limit?) those by default, unless the user overrides them.
using palemoon, i don't even get a captcha that i could solve. just a spinning wheel, and the site reloads over and over. this makes it impossible to use e.g. anything hosted on sourceforge.net, as they're behind the clownflare "Great Firewall of the West" too.
See if changing user agent to Chrome/Firefox helps
Whoever configures the Cloudflare rules should be turning off the firewall for things like robots.txt and sitemap.xml. You can still use caching for those resources to prevent them becoming a front door to DDoS.
It seems like common cases like this should be handled correctly by default. These are cachable requests intended for robots. Sure, it would be nice if webmasters configure it but I suspect a tiny minority does.
For example, even Cloudflare hasn't configured their official blog's RSS feed properly. My feed reader (running in a DigitalOcean datacenter) hasn't been able to access it since 2021 (403 every time, even though it backed off to checking weekly). This is a cachable endpoint with public data intended for robots. If they can't configure their own product correctly for their official blog, how can they expect other sites to?
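The default being asked for amounts to a small allowlist evaluated before any challenge logic. A hedged sketch of what that could look like at a proxy, in Python; the path list and scoring threshold are illustrative, not Cloudflare's actual behavior:

    # Machine-facing paths that should bypass interactive challenges and be
    # served from cache (with rate limiting as the abuse backstop).
    WELL_KNOWN = ("/robots.txt", "/sitemap.xml", "/favicon.ico", "/.well-known/")

    def needs_challenge(path: str, bot_score: float) -> bool:
        if path.startswith(WELL_KNOWN):
            return False  # robots, sitemaps, etc.: cache and rate-limit instead
        return bot_score < 0.5  # hypothetical trust threshold

    print(needs_challenge("/robots.txt", 0.0))  # False: machine paths always pass
    print(needs_challenge("/checkout", 0.2))    # True: low-trust client challenged

RSS feeds would arguably belong on the same list, which is exactly the blog-feed case above.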
The best part is when you get the "box" on a XHR request. Of course no site handles that properly, and just breaks. Happens regularly on ChatGPT.
Cloudflare is security theatre.
I scrape hundreds of Cloudflare-protected sites every 15 minutes without ever having any issues, using a simple headless browser and a mobile connection, while real users get interstitial pages.
It's almost like Cloudflare is deliberately showing the challenge to real users just to show that they exist and are doing "something".
Just wanted to mention that the time between challenges is set by the site, not CF. Perhaps if you mention it, the site(s) will update the setting?
Same. I'm consistently getting a captcha and some nonsense about a Ray ID multiple times a day.
It's not just Linux, I'm using Chrome on my macOS Catalina MBP and I can't even get past the "Verify you are a human" box. It just shows another captcha, and another, and yet another... No amount of clearing cookies/disabling adblockers/connecting from a different WiFi does it. And that's on most random sites (like ones from HN links), I also don't recall ever doing anything "suspicious" (web scraping etc.) on that device/IP.
Somehow, Safari passes it the first time. WTF?
> What are you protecting, Cloudflare?
A cheeky response is "their profit margins", but I don't think that quite right considering that their earnings per share is $-0.28.
I've not looked into Cloudflare much, I've never needed their services, so I'm not totally sure on what all their revenue streams are. I have heard that small websites are not paying much if anything at all [1]. With that preface out of the way–I think that we see challenges on sites that perhaps don't need them as a form of advertising, to ensure that their name is ever-present. Maybe they don't need this form of advertising, or maybe they do.
[1] https://www.cloudflare.com/en-gb/plans/
If you log in to the CF dashboard every 3 months or so, you will see pretty clearly they are slowly trying to be a cloud provider like Azure or AWS. Every time I log in there is a whole new slew of services that have equivalents on the other cloud providers. They are using the CDN portion of the business as a loss leader.
They usually protect the whole DNS record so it makes sense it would cover robots.txt as well, even if it's a bit silly.
They run their own DNS infra so that when you set the SOA for your zone to their servers they can decide what to resolve to. If you have protection set on a specific record then it resolves to a fleet of nginx servers with a bunch of special sauce that does the reverse proxying that allows for WAF, caching, anti-DDoS, etc. It's entirely feasible for them to exempt specific requests like this one since they aren't "protect[ing] the whole DNS" so much as using it to facilitate control of the entire HTTP request/response.
I run a honeypot and I can say with reasonable confidence many (most?) bots and scrapers use a Chrome on Linux user-agent. It's a fairly good indication of malicious traffic. In fact I would say it probably outweighs legitimate traffic with that user agent.
It's also a pretty safe assumption that Cloudflare is not run by morons, and they have access to more data than we do, by virtue of being the strip club bouncer for half the Internet.
User-agent might be a useful signal but treating it as an absolute flag is sloppy. For one thing it's trivial for malicious actors to change their user-agent. Cloudflare could use many other signals to drastically cut down on false positives that block normal users, but it seems like they don't care enough to be bothered. If they cared more about technical and privacy-conscious users they would do better.
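Concretely, the difference between a signal and an absolute flag is whether the UA check decides on its own or merely contributes to a score. A toy Python sketch with invented weights:

    # Hypothetical weights: each signal nudges a risk score; none decides alone.
    SIGNALS = {
        "ua_rare_on_this_site":  0.3,
        "no_cookies_or_history": 0.2,
        "datacenter_asn":        0.4,
        "failed_js_challenge":   0.6,
        "human_input_entropy":  -0.5,  # evidence of a real user lowers risk
    }

    def risk(observed: set[str]) -> float:
        return sum(SIGNALS[s] for s in observed if s in SIGNALS)

    BLOCK_AT = 0.7

    # A Linux user with a rare UA but normal behaviour stays under the line (~ -0.2);
    # the same UA plus a datacenter IP and a failed challenge does not (~ 1.3).
    print(risk({"ua_rare_on_this_site", "human_input_entropy"}) >= BLOCK_AT)   # False
    print(risk({"ua_rare_on_this_site", "datacenter_asn",
                "failed_js_challenge"}) >= BLOCK_AT)                           # True

Honeypot data like the parent's can inform one weight; it just shouldn't be the whole decision.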
Sure, but does that mean that we, Linux users, can't go on the web anymore? It's way easier for spammers and bots to move to another user agent/system than for legitimate users. So whatever causes this is not a great solution to the problem. You can do better, CF
Many / most bots use Chrome on Linux user agent, so you think it's OK to block Chrome on Linux user agents. That's very broken thinking.
So it's OK for them to do shitty things without explaining themselves because they "have access to more data than we do"? Big companies can be mysterious and non-transparent because they're big?
What a take!
2 replies →
I usually notice an increase in those when connecting to sites over VPN and especially Tor. Could that be it?
We're on Chrome on Linux, mostly we don't see those.
Excuse my ignorance, but what exactly are these stupid checkboxes supposed to accomplish? Surely they do not represent a serious obstacle.
I just downloaded Palemoon to check and it seems the CAPTCHA straight up crashes. Once it crashes, reloading the page no longer shows the CAPTCHA so it did pass something at least. I tried another Cloudflare turnstile but the entire browser crashed on a segfault, and ever since the CAPTCHAs don't seem to come up again.
ChatGPT.com is normally quite useful for generating Cloudflare prompts, but that page doesn't seem to work in Palemoon regardless of prompts. What version of the browser engine does it use these days? Is it still based on Firefox?
For reference I grabbed the latest main branch of Ladybird and ran that, but Cloudflare isn't showing me any prompts for that either.
This crash is an even newer Cloudflare issue (as of yesterday, I believe). It is not related to the one discussed here, and will be solved in the next browser update:
https://forum.palemoon.org/viewtopic.php?f=3&t=32064
The kinda funny and ironic thing is their forum just doesn't allow me to see the contents of their website from my Hetzner box that I use as an exit node. More ironically, if this site were using Cloudflare I could at least solve a challenge and browse the forum, instead of getting hit with a giant 403.
I believe the problem in Ladybird's case is missing JS APIs https://github.com/LadybirdBrowser/ladybird/issues/226
It uses a hard fork of Firefox's Gecko engine called Goanna, and is independently developed other than a few security patches from upstream. It has considerably diverged from contemporary Firefox so is not comparable.
Seems seriously risky to be running a browser without access to mainstream security patches.
Perhaps it’s secure enough for now due to its obscurity.
1 reply →
Companies like Google and Cloudflare make great tools. They give them away for free. They have different reasons for this, but these tools provide a lot of value to a lot of people. I’m sure that in the abstract their devs mean well and take pride in making the internet more robust, as they should.
Is it worth giving the internet to them? Is something so fundamentally wrong with the architecture of the internet that we need megacorps to patch the holes?
Whether something is "wrong" is often more a matter of opinion than a matter of fact for something as large and complex as the internet. The root of problems like this on the internet is that connections don't have an innate user identity associated at the lower layers. By the time you get to an identity for a user session, you've already driven past many attack points. There isn't really a "happy" way to remove that from the equation, at least for most people.
Forgot to clarify: this is not about an increased amount of captchas, or an annoyance issue.
The Cloudflare tool does not complete its verifications, resulting in an endless "Verifying..." loop and thus none of the websites in question can be accessed. All you get to see is Cloudflare.
Is this the behaviour you're observing? (my recording of HIBP) https://imgur.com/a/cloudflare-makes-have-i-been-pwned-unusa...
I ran into exactly this the other day trying to browse a website from a browser app on an android-powered TV. Just couldn't get to the website.
I was on Brave on iOS. I had to turn off Brave Shields.
The worst is Cloudflare challenges on RSS feeds. I just have to unsubscribe from those feeds, because there's nothing I can do.
That's misconfiguration on the web developers side.
Yes, developers such as those that run Cloudflare's own official blog.
Maybe there should be some better defaults if they can't even use their own product correctly.
BTW a workaround for this is to proxy the feed via https://feedburner.google.com/ which seems to be whitelisted by Cloudflare.
A lot of people fail to grasp the danger posed to the open web by the fact that so much traffic runs through, or to, a small bunch of providers (namely CloudFlare, AWS, Azure, Google Cloud, and "smaller" ones like Fastly or Akamai) who can take these kinds of measures without (many) website owners knowing or giving a crap.
Google itself tried to push crap like Web Environment Integrity (WEI) so websites could verify "authentic" browsers. We got them to stop it (for now), but there was already code in the Chromium sources. How is CloudFlare MITMing traffic and blocking/punishing genuine users who visit websites any different?
Why are we trusting CloudFlare to be a "good citizen" and not block or annoy certain people unfairly for whatever reason? Or even worse, serve modified content instead of what the actual origin is serving? I mean in the cases where CloudFlare re-encrypts the data, instead of only being a DNS provider. How can we trust that no third party has infiltrated their systems and compromised them? Except "just trust me bro", of course.
> Or even worse, serve modified content instead of what the actual origin is serving?
I witnessed this! Last time I checked, in the default config, the connection between cloudflare and the origin server does not do strict TLS cert validation. Which for an active-MITM attacker is as good as no TLS cert validation at all.
A few years ago an Indian ISP decided that https://overthewire.org should be banned for hosting "hacking" content (iirc). For many Indian users, the page showed a "content blocked" page. But the error page had a padlock icon in the URL bar and a valid TLS cert - said ISP was injecting it between Cloudflare and the origin server using a self-signed cert, and Cloudflare was re-encrypting it with a legit cert. In this case it was very conspicuous, but if the tampering was less obvious there'd be no way for an end-user to detect the MITM.
I don't have any evidence on-hand, but iirc there were people reporting this issue on Twitter - somewhere between 2019 and 2021, maybe.
Cloudflare recently started detecting whether strict TLS cert validation works with the origin server, and if it does, it enables strict validation automatically.
I can easily conceive the danger. But I can directly observe the danger that's causing traffic to be so centralized - if you don't have one of those providers on your side, any adversary with a couple hundred dollars to burn can take down your website on demand. That seems like a bigger practical problem for the open web, and I don't know what the alternative solution would be. How can I know, without incurring any nontrivial computation cost, that a weird-looking request coming from a weird browser I don't recognize is not a botnet trying to DDOS me?
Exactly. If you're going to bemoan centralization, which is fine, you also need to address the reason why we're going in that direction. And that's probably going to involve rethinking the naive foundational aspects of the internet.
how do you know a normal-looking request coming from google chrome is not a botnet trying to ddos you?
1 reply →
I don't think it's that people aren't aware it's bad. They just don't care enough. And they think "I could keep all this money safely in my mattress or I could put it into one of those three big banks!"... Or something like that.
Maybe it's the customers I deal with, or my own ignorance, but what alternatives are there to a service like Cloudflare? It is very easy to setup, and my clients don't want to pay a lot of money for hosting. With Cloudflare, I can turn on DDoS and bot protection to prevent heavy resource usage, as well as turn on caching to keep resource usage down. I built a plugin for the CMS I use (Umbraco - runs on .NET) to clear the cache for specific pages, or all pages (such as when a change is made to a global element like the header). I am able to run a website on Azure with less than the minimum recommended memory and CPU for Umbraco, due to lots of performance analyzing and enhancements over the years, but also because I have Cloudflare in front of the website.
If there were an alternative that would provide the same benefits at roughly the same cost, I would definitely be willing to take a look, even if it meant I needed to spend some time learning a different way to configure the service from the way I configure Cloudflare.
What's the cost of annoying people trying to browse to your sites, some to the point where they'll just not bother?
4 replies →
Of course we're trusting CloudFlare to be a good citizen. If they were not, they would be banned - unless they sold their business to a sovereign wealth fund.
I can't tell if this is sarcasm (perhaps a reference to TikTok?), but in my case (European) it's a foreign third party.
Cloudflare has essentially broken the internet. Blocking or restricting access of even residential IPs running in a real, common browser is evil. And just like that, we handed over the internet to a handful of companies, like it was never ours to begin with.
Cloudflare has been blocking "mainstream" browsers too, if you are generous enough to consider Firefox "mainstream." The "verify you are a human" sequence gets stuck in a never-ending loop where clicking the checkbox only refreshes the page and presents the same challenge. Certain websites (most notably archive.is) have been completely inaccessible for me for years for this reason.
Do you have something that blocks some amount of scripts? I need to allow third party scripts from either Google or Cloudflare to get a lot of the web to function.
I think the archive.is/Cloudflare issue is a known problem separate from the rest.
archive.is does not use cloudflare for bot protection.
I just went to a site that I think uses cloudflare via seamonkey. I was able to get to the site. This is on OpenBSD.
But if someone has a site that is failing, feel free to post it and I will give it a try.
I tested palemoon on Win with one of my Cloudflare sites and didn't see any problem either.
It's probably dependent on the security settings the site owner has chosen. I'm guessing bot fight mode might cause the issue.
CloudFlare sometimes attempts to verify that I'm a human when requesting a JSON resource [1] on Australia Post's web site, which breaks the parcel tracking feature without any visible captcha. The problem can only be diagnosed by using the browser's inspector tool.
Even worse, I get the blanket "You have been blocked" message when I try to manually open the URL and solve the captcha.
[1] https://digitalapi.auspost.com.au/shipments-gateway/v1/watch...
I got HTML with an error message first, but then I tried adding ".json" to the end of the URL and got JSON with an error message (that the URL is wrong), and then I removed ".json" from the end of the URL and got what seems to be the proper JSON response.
Chromium on linux is also frequently blocked by cloudflare. I can't use tools such as HIBP.
Same here. I just gave up on most of these websites. When I absolutely need to use a website such as for flights, I have a clean chrome browser I spin up.
Yeah and Firefox on Linux too. I do have the user agent set to one from Edge because otherwise Microsoft blocks many features in Office 365. Once it thinks it's Edge it suddenly does work just fine. But it doesn't completely fix all the cloudflare blocks and captchas.
+1 for Firefox on Linux. Several other services (like Instagram) now also accuse me of being a bot every time I legitimately log in with Firefox on Linux.
1 reply →
Same issue. I haven't been able to visit any websites powered by Cloudflare on my SeaMonkey browser recently.
Yep, this bug blocked me from being able to respond to a few job postings on Indeed.com today.
Needless to say I want to throttle every CF employee for screwing with my efforts to further enrich my life through legal means.
This situation has been repeatedly happening multiple times a year. It's an ongoing battle where Cloudflare exploits their monopoly on the web to break the web for any unfavored browser.
The fact that it has regressed and repeated so many times now clearly indicates a trend and pattern of abuse with malicious intent. Change management isn't hard, unit tests are not hard; consistently breaking only certain browsers seems targeted.
Notably there are mainstream browsers that have this problem as well. Mozilla Firefox for example. Their Challenge has broken large swathes of the web many times to the point where companies hosting apps and websites have simply said they will not support any browser other than Google Chrome/Edge.
Anytime the market gets sieved and pushed toward one single solution, it's because someone is doing it for their own benefit, to everyone else's loss.
Cloudflare should be broken up in antitrust as a monopoly, as should Google.
To add another link, I think this is the same issue: https://gitlab.com/gitlab-org/gitlab/-/issues/421396
I use Librewolf and Zen Browser
If I am met with the dreaded cloudflare "Verify you are a human" box, which is very rare for me, I dont bother and just close the tab.
To have cloudflare work on Librewolf I had to enable web workers.
Why does it need web workers, when it worked fine without them on Waterfox Classic, a Firefox 56 fork that hasn't been updated in ages?
I use w3m which makes me about as popular as a fart in a spacesuit. No Cloudflare things for me.
In the past a Cloudflare representative has typically appeared in these threads; if that's happened this time, I missed it. Not to mention the MVP's comment in the locked Cloudflare thread that
"You should use an up to date major browser. Old Firefox forks are not supported and expected to have problems."
It's all incredibly telling that they've given up trying to be impartial. When "they" start picking browser winners and losers, are OSes next?
In a way Cloudflare missed an opportunity, because a try()/catch() around the bit of failing JavaScript would have been perfect fingerprinting. Having said that, I don't expect it will take the Pale Moon team very long to patch the problem.
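To make that concrete, here is a rough sketch of the kind of try/catch probing meant above. The APIs probed are arbitrary examples for illustration, not what Cloudflare actually checks:

    // Probe a few modern APIs; which ones are missing (or throw) narrows down
    // the engine and its version surprisingly well.
    function probeFeatures(): Record<string, boolean> {
      const results: Record<string, boolean> = {};
      const probes: Record<string, () => unknown> = {
        serviceWorker: () => navigator.serviceWorker,
        weakRef: () => new WeakRef({}),
        replaceAll: () => "a".replaceAll("a", "b"),
        logicalAssign: () => new Function("let x; x ??= 1; return x;")(),
      };
      for (const [name, probe] of Object.entries(probes)) {
        try {
          results[name] = probe() !== undefined;
        } catch {
          results[name] = false; // a throwing probe is itself a fingerprint bit
        }
      }
      return results;
    }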
But where to go from here? Is there anybody besides the ACLU and EFF with enough resources to mount a "public nuisance" lawsuit? And what would constitute winning? A court-appointed overseer to make sure Cloudflare is regularly educating its staff on the variety of browsers in use today, and providing near 24-hour turnaround times when issues like this occur? It would be a start.
Personally I wonder if this whole style of security is a fool's errand and any blocking should be server-based and look at behavior, not at arbitrary support of this or that feature. I think it would also be helpful if anybody who finds themselves blocked would be given at least a sliver of why they were blocked, so they could try rectifying the problem with their ISP (bad IP), some blocklist, etc.
Most of the sites mentioned in the forum work for me with PaleMoon.
I do get a "your browser is unsupported" message from the forums.
The most ironic thing is that they can’t protect against bots; I even wrote some that bypassed their protection.
The latest deployment seems to use the Service Worker API, which causes breakage on "old" browsers because the API is not supported there.
Some people, like me, who block the Service Worker API all the time are also affected, e.g. via extensions like https://chromewebstore.google.com/detail/no-service-worker/m...
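For what it's worth, a challenge script could degrade gracefully instead of breaking. A minimal sketch of that kind of feature test, where the worker path and fallback behavior are invented for illustration:

    // Feature-test instead of assuming: a blocked or unsupported Service Worker
    // API then falls back instead of breaking the page outright.
    if ("serviceWorker" in navigator) {
      navigator.serviceWorker
        .register("/challenge-worker.js") // hypothetical worker script
        .catch((err) => console.warn("Service worker unavailable:", err));
    } else {
      // e.g. run the same verification inline, or show a plain captcha instead
      console.warn("Service Worker API missing or blocked; using fallback path");
    }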
A very random note.
I tried to check the forum post and found out that I was blocked by https://forum.palemoon.org , e.g., https://offline.palemoon.org/blocked/index.html . I don't know this site and haven't visited it before.
https://www.palemoon.org works though.
I also get a 403 on the same page, since they apparently block the entire Hetzner range (I've had this IP for like 3 years and it has never been used for abuse; I sometimes use my machine as a shitty VPN since the Turkish government blocks sites with DPI). If they were using Cloudflare I could at least solve a captcha and see the website.
Cloudflare is slowly but surely turning the web into a walled garden.
Pretty soon the internet will just be a vestigial thing that people use to connect to Cloudflare.
Slowly? Have you not watched the pot boil around you for the past decade? There are zero good search engines. Everything returns propaganda.
This is all as it was intended.
> There are zero good search engines. Everything returns propaganda.
Kagi exists and has been production quality and better than Google for over two years already. (At this point I think it is even better than old google.)
Since the invention of LLMs it's not that big of a deal that search engines are useless.
The worst part of this is Firefox struggling. There's a real risk of a monoculture that is functionally Google-controlled. (Yes, yes, Chromium, but we saw with Manifest V3 who sets the direction.)
After almost ten days of deafening silence and broken Internet access, I guess we have to paraphrase Adam Martinetti, the Cloudflare Product Manager from 2022 and conclude that in 2025:
Cloudflare DOES want to be in the business of saying one browser is more legitimate than another.
Oh, and a few weeks ago, Google Search started to block all new noscript/basic (x)html browsers...
Blocking Falkon is especially egregious if they're not also blocking Gnome Web. Those are the default browsers for Plasma and Gnome respectively, and some of the few browsers left that are "just browsers", with no phoning home or any kind of cloud integration.
Things like "using Linux" or "having an adblocker at all" get you sent to captcha hell. Anything where you're in the minority of traffic. It's not going to change; why would it?
Things are going to change. Unfortunately, things are only getting worse.
CAPTCHAs are barely sufficient against bots these days. I expect the first sites to start implementing Apple/Cloudflare's remote attestation as a CAPTCHA replacement any day now, and after that it's going to get harder and harder to use the web without Official(tm) Software(tm).
Using Linux isn't what's getting you blocked. I use Linux, and I'm not getting blocked. These blocks are the results of a whole range of data points, including things like IP addresses.
I have multiple blockers (Ublock Origin, Privacy Badger, Facebook Container) in Firefox and have not experienced this issue.
For what it's worth, this has been my experience as well. I've seen maybe a handful of full-page Cloudflare walls over the past year, and none have gotten me stuck in any kind of loop
I have been using Fedora + Firefox for years. I sometimes get a captcha from Cloudflare, but not frequently. Works just fine.
I have not tried less mainstream browsers, just FF and Chrome.
For me, captcha hell is very random, and when it happens, it's things like "pick all squares with stairs" where I have to decide if that little corner of a stairway counts (and it never seems to) or "pick all squares with motorcycles" where the camera seemed to have a vision problem.
What usually works for me is to close the browser, reload, and try again.
Adjacent topic: does anybody (US) use Chase Bank from Linux? It won't let me use Chromium* or Brave (it used to). It doesn't tell me I can't; it just won't send TFA confirmation codes to my phone or "recognize" me when I log into the phone app. The phone app works, but doesn't have all the features of the web portal. I can use it from a MacBook, but I prefer Linux.
*I have not tried downloading Google Chrome, or IE or Edge if that still exists for Linux.
CloudFlare has become the problem. It's high time users banded together in their own best interests: no ads, no tracking, no blocking, no geo restrictions, etc.
Maybe the case is that filtering users by browser UA is no longer a feasible solution (new browsers and the like keep appearing), and neither is requiring JavaScript (headless Chrome is everywhere).
For a local physical store, geolocation is a natural filter for customers, at least as long as beaming a person from a spaceship to Earth is not invented. For the web, an equally effective solution is very hard to find.
When one of my Node.js-based sites experienced a DoS, I installed and configured "express-slow-down" as middleware, and it resolved the issue (a sketch of that kind of setup is below).
What does Cloudflare do that can't possibly be implemented locally by a site owner?
handle terabytes per second of bot traffic
2 replies →
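For reference, a minimal sketch of the kind of express-slow-down setup described a few comments up. The thresholds are arbitrary, and this assumes the library's v2 API where delayMs is a function:

    import express from "express";
    import { slowDown } from "express-slow-down";

    const app = express();

    // Throttle instead of block: after 100 requests from one IP in a
    // 15-minute window, each further request is delayed a bit longer.
    app.use(
      slowDown({
        windowMs: 15 * 60 * 1000,
        delayAfter: 100,
        delayMs: (hits) => (hits - 100) * 250, // 250 ms extra per excess request
      })
    );

    app.get("/", (_req, res) => {
      res.send("hello");
    });

    app.listen(3000);

The appeal of this approach is that a real user who trips the limit just gets a slower page, not a wall.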
Cloudflare is discriminatory. They, and their fanbois, will likely claim that they can't publicly discuss their criteria for who they block, so some mysterious magic is going on in the background, and we're supposed to just trust them because they're big.
With that in mind, I'd love even the most fawning of the fanbois to come up with a rationalization for why, for a very common browser (Safari on modern macOS), most links through Cloudflare work, but trying to get past the are-you-human checkbox on Cloudflare's abuse reporting page doesn't work half the time.
Obviously that shouldn't be on an abuse reporting page at all, but Cloudflare has been making abuse reporting extremely difficult for years. Adding rate limiting (a human can easily hit it) and prove-you're-human verification on their abuse page just unambiguously proves this.
This happens to me with Firefox as I run it on OpenBSD and enable Strict Privacy and the "resist fingerprinting" feature -- or at least in that config I've had inexplicable 403 Forbidden errors from CloudFlare and fired up Chromium or whatever and could load the page just fine (or Firefox on another computer).
Oligopolies are nasty. In the absence of regulation, don't take your business to actors like Cloudflare, and talk to your local politicians.
There's also another reason: Cloudflare is subject to the CLOUD Act, so it can't be trusted to touch the PII of EU citizens for legal reasons, or of anyone for moral reasons.
From 2024 until now I've had to constantly verify that I'm human just to visit certain sites due to Cloudflare. Now it's even worse, since (sometimes) cdnjs.cloudflare.com loads infinitely unless I turn on my VPN. Infuriating that I have to use a service known as a potential spam source to get another service that blocks spam to bloody work.
I use minbrowser.org. Some sites disallow it; Min suggests changing the user-agent setting to something like: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/121.0
I see an opportunity for scraper-publishers - who use legitimate access corridors to obtain desired content and publish it without human gatewalls.
I'm sure if this becomes more of an issue the market will provide for that.
At this point, I'm honestly surprised that all non-mainstream browsers don't emulate the user-agent and TLS fingerprint (cipher ordering) of a mainstream browser, or add a flag to change the behavior per "tab" (or, for a CLI, per call or other scope), coupled with a JavaScript environment that also aligns with those.
In concept that's a good idea, but the fingerprinting potential is VAST: user-agent, TLS, JavaScript quirks, CSS, Canvas, proprietary features like Chrome's Topics, maybe WebGL, WebUSB, etc. In practice it's very hard to do.
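As a hedged illustration of how wide that surface is, here are just a few of the signals a page can read and hash; real fingerprinting libraries collect far more than this:

    // A tiny subset of the fingerprinting surface mentioned above. Each value
    // alone carries little entropy; combined, they identify a browser build
    // (and often a user) quite precisely.
    async function tinyFingerprint(): Promise<string> {
      const signals = [
        navigator.userAgent,
        navigator.language,
        navigator.hardwareConcurrency,
        screen.width + "x" + screen.height,
        new Date().getTimezoneOffset(),
        // Canvas: rendering differences across GPUs/drivers/fonts leak identity.
        (() => {
          const c = document.createElement("canvas");
          c.getContext("2d")?.fillText("fingerprint", 2, 2);
          return c.toDataURL().length; // even the length varies per stack
        })(),
      ];
      const bytes = new TextEncoder().encode(signals.join("|"));
      const digest = await crypto.subtle.digest("SHA-256", bytes);
      return Array.from(new Uint8Array(digest), (b) =>
        b.toString(16).padStart(2, "0")
      ).join("");
    }

A spoofing browser would have to align every one of these (and many more) with the browser it impersonates, which is why UA spoofing alone rarely works.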
Do these browsers employ any additional tracking protections? "Browser integrity checks" are browser-specific and they might rely on the "entropy" those tracking vectors provide.
So this would only be a "bad" move by Cloudflare if you could get around it by recompiling the browser with spoofed UA/strings. Otherwise they'd have to support every possible engine, which is infeasible. That said, the "open web" is indeed dead.
> Otherwise they'd have to support every possible engine which is infeasible.
If I understand correctly, this is why I've said on previous Cloudflare threads that they've managed to design a game they can never win. They project a certain omniscience, but then all this sh*t happens. We need to persuade them to stop playing.
Badly configured bot protection. It'll look at user agent headers and try to fingerprint the browser (some form of fpjs2 or similar), and then decide. Very error-prone.
The silence from CloudFlare staff is rather deafening, especially as they commented on the (newer) outage post.
Doesn't work for me at all on Firefox. Disabled all the privacy preserving extensions and still nada. Fuck off Cloudflare.
This turned out to be related to an extension I was using; please disregard.
It's blocking Qutebrowser also.
Can't you set your user agent to something else? Like Firefox or Chrome.
They flat out refuse to show what the origin server sent unless you run some JavaScript, which is sufficient for them to no longer care what the browser states in the request headers.
I don't have any issues so far under Librewolf, Waterfox and Ungoogled Chromium.
I have the problem with LibreWolf on Linux, and have to fall back to Ungoogled Chromium.
Edited to add: without adblock
No problem under Linux whatsoever with tracking and ad blocking extensions in each browser
I had the same problem using Min browser; changing the UA fixes the problem.
I avoid and refuse to use Cloudflare for these sorts of reasons. Join me.
I'm seeing this all the time recently using standard Safari on macOS.
Welcome to the modern world. Any deviation from the average will get you flagged as a suspicious deviant. It's not just browsers. It's everything.
Yea, sucks. Cloudflare is also blocking my web scrapers ;-)
But not mine..
Are there any Cloudflare alternatives?
Is spoofing not a simple solution to this?
Unfortunately not. Cloudflare verification goes deeper into browser 'mechanics' than that. Not to mention it could flag you as malicious if you dare attempt bypassing it.
The whole "browser integrity check" thing is bullshit.
It's disgusting how we've accepted one company MITMing the whole internet. Cloudflare hosts DDoS providers while selling DDoS mitigation. It's a mafia racket.
Yeah, this is ridiculous. Even if their heuristics fail badly for Pale Moon, they could easily fall back to proof-of-work (PoW).
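A proof-of-work fallback really is simple; a toy sketch of the idea in Node/TypeScript, with an arbitrary difficulty and a hypothetical challenge string (not anything Cloudflare actually ships):

    import { createHash } from "node:crypto";

    // Find a nonce such that SHA-256(challenge + nonce) starts with `difficulty`
    // zero hex digits. The client pays in CPU; the server verifies in one hash.
    function solve(challenge: string, difficulty = 4): number {
      const target = "0".repeat(difficulty);
      for (let nonce = 0; ; nonce++) {
        const hash = createHash("sha256").update(challenge + nonce).digest("hex");
        if (hash.startsWith(target)) return nonce;
      }
    }

    const nonce = solve("cf-challenge-123"); // hypothetical challenge string
    console.log("solved with nonce", nonce);

The nice property is that it needs no heuristics at all: any client that spends the CPU gets in, so it can't misclassify an obscure browser.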
This is totally fucked if true
All by design. The idea is to keep older devices, ones with perhaps no government backdoors, and unauthorized software, off the Internet completely. Same reason there's a big push to kill X11 - it runs great on computers from before hardware backdoors were common. With the Trumpenreich looming, these devices will become very useful. IF they are allowed on the Internet.
Can be explained with fewer assumptions.
>With the Trumpenreich looming
Weren't they useful the last time around, when 'literally Hitler' totally murdered freedom of speech until Biden the hero restored it?
On one hand, this is a scummy move from CloudFlare. All this has ever done is make browsers spoof their UAs. Mozilla/4.0 anyone?
On the other, Pale Moon is an ancient (pre-quantum) volunteer-supported fork of Firefox, with boatloads of known and unfixed security bugs - some fixes might be getting merged from upstream, but for real, the codebases diverged almost a decade ago. You might as well be using IE 11.
I'm not sure why people are mad at Cloudflare. They are not obligated to support browsers outside the general marketshare, nor should you expect them to.
Cloudflare not supporting Pale Moon has no impact on the rest of us. As a matter of fact, today is the first time I'm hearing of this browser, which I will never end up using.
This is not about supporting or not supporting; this is outright blocking/gatekeeping. They could just let these browsers pass, but they chose to block. It's completely different from, e.g., no longer releasing builds for PPC Macs.
> known and unfixed security bugs
Which are..?
> So sick of Cloudflare
A sentiment I cannot agree with more.
[flagged]
Not helpful to an otherwise worthwhile discussion.
The rest of this comment section is the same sentiment mixed in with trying to make excuses for Cloudflare. So... it is helpful. Stop allowing a private company to control and MITM the entire internet.
Should cloudflare ever be the target of an attempted takeover by Musk and co (like Twitter or the ongoing NIH/USAID saga) you can be sure you won't be able to access any 'inclusive' websites anymore...
Ok kids.. This is Tobin.. but without the Paradigm.
First off.. Gee, I wish we had all come together about a decade ago or so and found solutions for what was plainly coming and spelled out by myself and others.
Second, before it happens.. Pale Moon is not "old and insecure". It is being mismanaged and has no vision or prospects for future expansion.. It is just whatever XUL they can keep working while chugging away at the modern web features..
Pale Moon is often TOO security-patched, btw; patches have been regularly disclosed and specially noted in the release notes since I convinced Moonchild he should do that, precisely to counter this kind of "old and insecure" falsehood.
Moonchild's issue as a developer is he will always choose the seemingly simplest path of least resistance and will blindly merge patches without actually testing them. Many security patches are only security patches and not just.. patches because Mozilla redefined the level of security they want their codebase to provide.. But all known Mozilla vulnerabilities, and many that would only become vulnerable if surrounding code is changed, are patched.. Pale Moon and UXP have become more secure over time, and that is an objective fact when you consider the nature of privileged access within a XUL platform, which has its own safeguards as well that persist into Firefox today, though less encountered.
Now no one hates that furry bastard more than me (and I challenge you to try) but I will never call out good work as anything other than good work. Besides, there are a MILLION other plainly visible faults with the Pale Moon project and its personnel and my past behavior without having to make stuff up or perpetuate a false mantra like "old and insecure".
Finally, isn't Cloudflare being very unfair to every project save the modern firefox rebuilds listed on thereisonlyxul.org? Like SeaMonkey? Why does seamonkey deserve any hate from anyone.. or systematic discrimination.. What have they ever done but try and have an internet application suite.. Why are they old and insecure despite being patched and progressing a patch queue for Mozilla patches just landed selectively to preserve the bulk of XUL functionality its users adore?
In conclusion, what will be the final cost, and how many will burn for trying to go against it.. I know my fate for trying.. how many will join me knowing that?
For context, this is presumably the Tobin who caused significant tangible damage to the Pale Moon project on his way out.
https://forum.palemoon.org/viewtopic.php?t=28265
For additional context it might be wise to include links that can't be edited by the author or webmaster.. While I didn't include links, looking BinOC up on the IA isn't difficult to do, and I am sure if you want to read the posts of events AS they happened you can look up the other thread.
This cited version is the revised version. Moonchild has revised his version of events multiple times in the nine months after. Pfft, that isn't even the latest version, lol. There are many now-hidden threads on the Pale Moon forum that also showed events as they happened, or as told when they happened.. All gone now.. Some of them contradict the later retellings.. I simply refer to events as they happened at the time and the Interlink release notes summary thereof.
Can't wait to see if it changes anything...
3 replies →
Yeah except that's not the way it happened. My crimes are taking my ball and going home after being all but forced out for the second time just weeks after my father passed away from cancer and 2 months after I moved across the country.
It isn't like half the Pale Moon userbase ever wanted me there to begin with despite giving them not just an Add-ons Site and a developer wiki/doc site, the Pale Moon for Linux website, but a fully functional XUL platform that survives my involvement and a Pale Moon that is STILL Pale Moon when Moonchild as early as Pale Moon 27 was going to go the cyberfox route of Australis with CTR. So context of a decade of selfless unpaid work of 10-16 hour days every day, forum drama, bad decisions and behavior on my part in response to the response of my selfless work, and relentless attacks such as these no matter if I pop my head out or not?
If you want the full story of the end look it up on Kiwifarms (This was all before them being removed from clearnet so before the stuff you are thinking of) where I was maneuvered towards by 4chan anon people because that was the ONLY venue I had afterwards. For some reason they engaged then moved on.. Left me intact. I don't know why. But it is all there.. A cleaner version is codified in Interlink release notes on the internet archive. I encourage you to learn what actually happened and when and then make your judgement.. If you do that I will accept it even if I disagree with it because I disagree with a lot about my self these days.
Doesn't matter anyway. There are much larger issues now in the world than years-old drama that still, in the end.. created the Unified XUL Platform (Take 2, the one that worked) and helps give hope to those otherwise subsumed by the monoculture. Not that Pale Moon culture is much better, but the fact it persists means more than one thing can. I can do better.. and so can we all.. Let's do that while we still can.
-nsITobin
Cloudflare's proxy model solved immediate security and reliability problems but created a lasting tension between service stability and user choice. Like old telecom networks that restricted equipment, Cloudflare's approach favors their paying customers' needs over end-user freedom, particularly in browser choice. While this ensures predictable revenue and service quality, it echoes historical patterns where infrastructure standardization both enables and constrains.
I'm still in the habit of granting Cloudflare a presumption of good faith. Developers frequently make assumptions about things like browsers that can cause problems like this. Something somewhere gets over-optimized, or someone somewhere does some 80/20 calculation, or something gets copy-pasted or (these days) produced by an LLM. There are plenty of reasons why this might be entirely unintentional, or that the severity of the impacts of a change were underestimated.
I agree that this exposes the risk of relying overmuch on a handful of large, opaque, unaccountable companies. And as long as Cloudflare's customers are web operators (rather than users), there isn't a lot of incentive for them to be concerned about the user if their customers aren't.
One idea might be to approach web site operators who use Cloudflare and whose sites trigger these captchas more than you'd like. Explain the situation to the web site operator. If the web site operator cares enough about you, they might complain to Cloudflare. And if not, well, you have your answer.
How many times do they have to do the same thing before we modify our presumption?
Cloudflare is actually pretty upfront about which browsers they support. You can find the whole list right in their developer docs. This isn't some secret they're trying to hide from website owners or users - it's right here https://developers.cloudflare.com/waf/reference/cloudflare-c... - My guess is that there is no response because not one of the browsers you listed is supported.
Think about it this way: when a framework (many modern websites) or CAPTCHA/challenge doesn't support an older or less common browser, it's not because someone's sitting there trying to keep people out. It's more likely they are trying to balance the maintenance costs and the hassle involved in supporting however many other platforms there are (browsers, in this case). At what point is a browser relevant? 1 user? 2 users? 100? Can you blame a company that accommodates probably >99% of the traffic they usually see? I don't think so, but that's just me.
In the end, site owners can always look at their specific situation and decide how they want to handle it: stick with the default security settings or open things up through firewall rules. It's really up to them to figure out what works best for their users.
They do not support major browsers. They support "major browsers in default configuration without any extensions" (which is of course a ridiculous proposition), forcing people to either abandon any privacy/security-preserving measures they use, or to abandon the websites covered by CF.
I use up-to-date Firefox, and was blocked from using the company GitLab for months on end simply because I disabled some useless new web API in about:config way before CF started silently requiring it, without any feature testing or meaningful error message for the user. Just a redirect loop. The GitLab support forum was completely useless for this, just blaming the user.
So we dropped GitLab at the company and went with basic git-over-HTTPS hosting + cgit, rather than pay some company that will happily block us via some user-hostile intermediary without any resolution. I figured out what was "wrong" (lack of feature testing for the web API features CF uses, and lack of meaningful error message feedback to the user) after the move.
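The fix being asked for here is cheap. A hedged sketch of up-front feature testing with a meaningful message instead of a silent redirect loop; the list of required APIs is illustrative, not what CF actually depends on:

    // Test every API the challenge depends on up front, and tell the user
    // exactly what is missing instead of silently looping.
    const required: Record<string, boolean> = {
      cookies: navigator.cookieEnabled,
      "Web Workers": typeof Worker !== "undefined",
      "Service Workers": "serviceWorker" in navigator,
      WebCrypto: typeof crypto !== "undefined" && !!crypto.subtle,
    };

    const missing = Object.entries(required)
      .filter(([, ok]) => !ok)
      .map(([name]) => name);

    if (missing.length > 0) {
      document.body.textContent =
        "This check needs: " + missing.join(", ") +
        ". These appear to be disabled or unsupported in your browser.";
    } else {
      // ...proceed with the actual challenge...
    }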
Although I sometimes have problems with Cloudflare, it does not seem to affect GitHub nor Gitlab for me, although they have other problems, which I have been able to work around.
Some things that I have found helpful when working with GitLab are to add ".patch" to the end of commit URLs, and to change "blob" to "raw" in file URLs (this works on GitHub as well; a small sketch of these rewrites follows below). It is also possible to use the API, and sometimes the data can be found within the HTML the server sends to you without needing any additional requests (this seems to work more reliably on GitHub than on GitLab, though).
You could also clone the repository into your own computer in order to see the files (and then use the git command line to send any changes you make to the server), but that does not include issue tracker etc, and you might not want all of the files anyways, if the repository has a lot of files.
3 replies →
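A tiny sketch of the URL rewrites described above (these are the public GitHub/GitLab URL conventions, applied mechanically; the example URLs are hypothetical):

    // Commit page -> plain-text patch; file viewer page -> raw file contents.
    function commitToPatch(url: string): string {
      return url + ".patch"; // e.g. .../commit/<sha> -> .../commit/<sha>.patch
    }

    function blobToRaw(url: string): string {
      return url.replace("/blob/", "/raw/");
    }

    console.log(commitToPatch("https://github.com/user/repo/commit/abc123"));
    console.log(blobToRaw("https://gitlab.com/user/repo/-/blob/main/README.md"));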
Not exactly. They say:
"Challenges are not supported by Microsoft Internet Explorer."
Nowhere is it mentioned that internet access will be denied to visitors not using "major" browsers, as defined by Cloudflare presumably. That wouldn't sound too legal, honestly.
Below that: "Visitors must enable JavaScript and cookies on their browser to be able to pass any type of challenge."
These conditions are met.
> If your visitors are using an up-to-date version of a major browser, they will receive the challenge correctly.
I'm unsure what part of this isn't clear: major browsers, as long as they are up to date, are supported and should always pass challenges. Pale Moon isn't a major browser, and neither are the other browsers mentioned in the thread.
> Nowhere is it mentioned that internet access will be denied to visitors not using "major" browsers
Challenge pages are what your browser is struggling to pass; you aren't seeing a block page or a straight-up denial of the connection. Instead, the challenge isn't passing because whatever update CF has made has clearly broken compatibility with Pale Moon; I seriously doubt this was on purpose. Regarding those annoying challenge pages: they aren't meant to be used 24/7, as they are genuinely annoying. If you are seeing challenge pages more often than you do on Chrome, it's likely that the site owner is actively flagging your session to be challenged; they can undo this by adjusting their firewall rules.
If a site owner decides to enable challenge pages for every visitor, you should shift the blame to the site owner's lack of interest in properly tuning their firewall.
6 replies →
So you're saying that which browsers are supported on the Internet should be determined by a single, for-profit company? That's a very interesting and shortsighted take.
I love how so many of these apologists talk about stuff like "maintenance costs", as though it's impossible to write code that's clean and works consistently across platforms / browsers. "Oh, no! Who'll think of the profits?!?"
If you had any technical knowledge, you'd know that "maintenance costs" are only a thing when you code shittily or intentionally target specific cases. A well written, cross-browser, cross-platform CAPTCHA shouldn't have so many browser specific edge cases that it needs constant "maintenance".
In other words, imagine you're arguing that a web page with a picture doesn't load on a browser because nobody bothered to test with that browser. Now imagine you're making the case for that browser being so obscure that nobody would expend the time and money. Instead, why aren't you pondering why any web site with a picture wouldn't be general enough to just work? What does that say about your agenda, and about the fact that you want to make excuses for this huge, striving-to-be-a-monopoly, for-profit company?
I think it's pretty clear you have never worked on fraud protection or bot detection, otherwise you'd understand the struggles of supporting many environments with a single solution. You already have an opinion on this, and from the way your messages are typed, it doesn't seem like any rational argument will change your mind.
This is the internet and everybody is a field expert the moment they want to win an argument, best of luck with that.
Indeed. Software can be written like math. 1 + 1 = 2, holds true for now and for all time, past and present.