
Comment by Aachen

2 days ago

Huh? What false positives does Anubis produce?

The article doesn't say, and meanwhile I constantly get the hardest Google captchas, Cloudflare block pages saying "having trouble?" (a link to submit a ticket that seems to land in /dev/null), IP blocks because of user agent spoofing, and "unsupported browser" errors when I don't spoof the user agent. The only anti-bot thing that reliably works on all my clients is Anubis.

I'm really wondering what kinds of false positives you think Anubis has, since (as far as I can tell) it's a completely open and deterministic algorithm that simply lets you in if you solve the challenge, and, as the author of the article demonstrated with some C code (if you don't want to run the included JavaScript that does it for you), that works even if you are a bot. And afaik that's the point: no heuristics and no false positives, just a straight game of costs, making bad scraping behavior cost more than implementing caching correctly or using commoncrawl.
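To make concrete what "solve the challenge" means, here's a rough sketch in C (the article's language of choice) of the kind of proof-of-work loop such a challenge boils down to: find a nonce so that SHA-256(challenge + nonce) starts with enough zero hex digits. The challenge value, difficulty, and concatenation format below are placeholders for illustration, not Anubis's actual wire protocol.

    /* Sketch of an Anubis-style proof-of-work loop: find a nonce so that
     * SHA-256(challenge || nonce) has `difficulty` leading zero hex digits.
     * Challenge value, difficulty, and format are placeholders, not the
     * real wire protocol. Build with: cc pow.c -lcrypto
     */
    #include <stdio.h>
    #include <string.h>
    #include <openssl/sha.h>

    /* Count leading zero hex digits (nibbles) of the digest. */
    static int leading_zero_nibbles(const unsigned char *h) {
        int n = 0;
        for (int i = 0; i < SHA256_DIGEST_LENGTH; i++) {
            if ((h[i] >> 4) == 0) n++; else break;
            if ((h[i] & 0x0f) == 0) n++; else break;
        }
        return n;
    }

    int main(void) {
        const char *challenge = "example-challenge";  /* placeholder */
        int difficulty = 4;                           /* zero nibbles required */
        unsigned char digest[SHA256_DIGEST_LENGTH];
        char buf[256];

        for (unsigned long nonce = 0;; nonce++) {
            /* Hash the challenge with the candidate nonce appended. */
            int len = snprintf(buf, sizeof buf, "%s%lu", challenge, nonce);
            SHA256((const unsigned char *)buf, (size_t)len, digest);
            if (leading_zero_nibbles(digest) >= difficulty) {
                printf("nonce=%lu hash=", nonce);
                for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
                    printf("%02x", digest[i]);
                printf("\n");
                return 0;
            }
        }
    }

For a single visitor a loop like this finishes almost instantly at typical difficulties; the cost only becomes meaningful when you're issuing millions of requests, which is the "game of costs" point above.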

I've had Anubis repeatedly fail to authorize me to access numerous open source projects, including the mesa3d gitlab, with a message along the lines of "you failed".

As a legitimate open source developer and contributor to buildroot, I've had no recourse besides trying other browsers, networks, and machines, and the failure has triggered on several combinations.

  • Interesting, I didn't even know it had such a failure mode. Thanks for the reply; I'll sadly have to update my opinion of this project, since it's apparently not the pure "everyone is equal if they can Prove the Work" system I thought it was :(

    I'm curious how, though, since the submitted article doesn't mention that and demonstrates curl working (which is about as low as you can go on the browser emulation front), but I have no time to look into it atm. Maybe it's because of an option or module that the author didn't have enabled.