Comment by JohnMakin
8 days ago
I've been flagged as a bot on pretty much every major platform. Most ridiculous lately was LinkedIn: I have to prove my identity using two different forms of ID, which they still won't accept, OR find a notary and somehow prove I own the account I no longer have access to. Maybe refine this tech a little before you start blasting legitimate users with it. I'm extremely skeptical of the catch rate given what I see anecdotally online, and my own experience of getting flagged for things as benign as being a quick typist.
This is often due to network setup. If you're behind NAT, where many users share a single IP address, you'll get hit.
E.g., many cell phone providers put 100% of their IPv4 traffic behind NAT. Corporate networks are almost certain to hit this too. VPNs are almost always flagged for further authentication.
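To illustrate why shared egress IPs get caught, here's a minimal Python sketch of the kind of naive per-IP velocity check many platforms lean on; the window and threshold are invented for illustration, not taken from any real product:

```python
from collections import defaultdict, deque
import time

# Hypothetical thresholds, chosen only to illustrate the mechanism.
WINDOW_SECONDS = 60
MAX_REQUESTS = 100

_requests_by_ip = defaultdict(deque)

def looks_like_a_bot(ip, now=None):
    """Naive sliding-window velocity check keyed on source IP.

    Hundreds of legitimate users sharing one CGNAT egress IP will blow
    past the threshold together, which is exactly how they all end up
    flagged as a single "bot".
    """
    now = time.time() if now is None else now
    window = _requests_by_ip[ip]
    window.append(now)
    # Drop timestamps that have fallen out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_REQUESTS
```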
A 'fun' thing that often happens to me: I buy something online with a credit card at work, then try to use the same card in a store later that day, only to be declined as likely fraud because, according to IP geolocation, I was in a completely different location a few hours earlier (work routes everything through a datacenter on the other coast).
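That card-decline scenario is the classic "impossible travel" heuristic. A minimal sketch of the idea (the speed threshold and field names are assumptions, not any issuer's real logic):

```python
import math

MAX_PLAUSIBLE_KMH = 900  # roughly airliner speed; illustrative only

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    r = 6371.0
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(prev_txn, curr_txn):
    """Each txn is a dict with 'lat'/'lon' (from IP geolocation) and 'ts' (epoch seconds).

    If the speed implied by the two locations exceeds anything a traveller
    could plausibly manage, flag it. A workplace that egresses traffic
    through a datacenter on the other coast trips this instantly.
    """
    hours = (curr_txn["ts"] - prev_txn["ts"]) / 3600.0
    if hours <= 0:
        return True
    km = haversine_km(prev_txn["lat"], prev_txn["lon"], curr_txn["lat"], curr_txn["lon"])
    return km / hours > MAX_PLAUSIBLE_KMH
```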
For me specifically, I do believe this is a major part of it. However, if my options are to use a VPN or the service, but not both, I'm more inclined to pick the VPN and say screw the service; I'll just opt out of using it. There's no real reason a sufficiently sophisticated network/security team at a large company can't differentiate between commercial VPN users and "bot" traffic. It's just laziness/incompetence. Sufficiently advanced bots use residential proxies anyway, and it really isn't difficult to go down that road.
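On the "it's possible to tell VPNs apart from bots" point: commercial VPN exits are published or readily identifiable ranges, so they can be routed to a softer challenge rather than lumped in with bot traffic. A rough sketch using placeholder ranges (the documentation netblocks below stand in for a real provider's exit pool):

```python
import ipaddress

# Placeholder ranges only; a real list would come from VPN providers'
# published exit ranges or an IP-intelligence feed.
KNOWN_VPN_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def classify_ip(ip_str):
    """Rough triage: 'commercial_vpn' vs 'unknown'.

    The point is only that a known VPN exit is a distinct, identifiable
    signal and doesn't have to be scored the same as suspected bot traffic.
    """
    ip = ipaddress.ip_address(ip_str)
    if any(ip in net for net in KNOWN_VPN_NETWORKS):
        return "commercial_vpn"
    return "unknown"
```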
Cell tower/provider is a big part, I think, from my own experience; I'd get constant captchas and rejections only when near one specific tower at work, which happened to be right over a FedEx Ground building. Take that how you will...
> If you're behind NAT where there's many users behind a single IP address you'll be hit.
Doesn't this describe the vast majority of networks in the world?
They likely mean CGNAT (carrier-grade NAT) specifically.
I asked a manager about this; the policy is that we don't need to differentiate between bots and people who sound like bots: both are considered low-quality content/engagement. Delete them.
Seems like wherever they delete bots, they will, in the end, delete human beings.
That's what happens when a business is built on getting a tiny amount of value per user from a vast number of users. There's essentially no incentive to treat any individual user well, and no resources to make it happen even if they wanted to. This becomes more and more problematic as our lives revolve more and more around such businesses.
Silly commenters, mass audiences are for influencers, but go ahead and write your little bandwagoned take so you can feel heard.
I never really thought about this perspective but in some ways it makes sense. I think the ironic part is that LinkedIn now provides built-in AI tools that make you sound more like a bot.
Maybe they could fingerprint slop generated with their tools and allow it through, to incentivize upgrading.
But "our" bots are always the good ones. Why does this sound like literature...
My problem with this approach is: what metrics are you using to determine that whatever I'm doing is "low quality"? On LinkedIn specifically, I barely ever post "content" publicly; I use it to network with recruiters and read technical articles, mostly. It's completely opaque and will catch users doing absolutely nothing wrong or "low content"; maybe they're on the spectrum or disabled in a way that makes their clicks look weird. No managers ever consider these things, it's always "oh well, fuck em."
Actually, they will only delete humans, because the bots can already far outpace low quality content posted by humans.
LinkedIn always hits me with those frustrating custom CAPTCHAs where you have to rotate the shape 65 degrees -- they've taken a pretty blunt, high-friction approach to bot detection.
I think most apps should start by just monitoring for agentic traffic so they can better understand the emergent behaviors those agents exhibit (it might tell folks where they actually need real APIs, for example), and then go from there.
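As a sketch of what "just monitoring" could look like, here's a log-only tagger; the user-agent markers are assumptions, and plenty of agents deliberately mimic browsers, so this only catches the honest ones:

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-traffic")

# Illustrative substrings only; real agent frameworks vary widely.
LIKELY_AGENT_MARKERS = ("headless", "python-requests", "curl", "bot", "agent")

def tag_request(headers, method, path):
    """Classify and log a request as likely agentic, without blocking it.

    The goal is observability: which endpoints do agents actually hit,
    and what are they trying to do there? That data shows where a real
    API would be more useful than scraping-resistant HTML.
    """
    ua = headers.get("User-Agent", "").lower()
    likely_agent = any(marker in ua for marker in LIKELY_AGENT_MARKERS)
    log.info(json.dumps({
        "likely_agent": likely_agent,
        "method": method,
        "path": path,
        "user_agent": ua,
    }))
    return likely_agent
```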
Ironic that the orgs using everyone's content (fairly or not) and stuffing AI down our throats are the ones aggressively against their users using AI on their services.
Maybe if Sales Navigator were better, there wouldn't be so many third-party automation platforms. Or maybe if LinkedIn figured out how to make money with an ecosystem rather than a monopoly, they wouldn't need to be so aggressive.
I think companies that are hostile to AI Agents are going to shrink. AI Agents are a new class of user, the platforms that welcome them will grow and thrive, those that are hostile will suffer.
What you're describing is the end of the internet for some people. Good bots will evade everything (or at least keep trying until they do), while some people like you (and me; this shit always happens to me) just stare at the screen, wondering what Kafka would say about this.