Comment by BLKNSLVR

7 hours ago

I have my own system of IP reputation whereby if an IP address hits one of my systems with some probe or scan that I didn't ask for, then it's blocked for 12 months.

https://github.com/UninvitedActivity/UninvitedActivity

P.S. Just to add a note here: I have occasionally been locked out of my own systems from mobile / remote IPs due to my paranoia-level setup. I treat that as learning / refinement, but also can accept that as the cost of security sometimes.
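The linked repo is a set of block lists; the banning policy itself could be modeled along these lines (a hypothetical sketch, not the author's actual implementation — the class and names here are invented for illustration):

```python
import time
import ipaddress

# Sketch of the policy described above: an IP that sends an uninvited
# probe is banned for ~12 months, and bans lapse automatically.
BAN_SECONDS = 365 * 24 * 3600  # roughly 12 months

class BanList:
    def __init__(self, ban_seconds=BAN_SECONDS, clock=time.time):
        self.ban_seconds = ban_seconds
        self.clock = clock      # injectable clock, handy for testing
        self._bans = {}         # ip -> expiry timestamp

    def ban(self, ip):
        ipaddress.ip_address(ip)  # validate; raises on garbage input
        self._bans[ip] = self.clock() + self.ban_seconds

    def is_banned(self, ip):
        expiry = self._bans.get(ip)
        if expiry is None:
            return False
        if self.clock() >= expiry:  # ban has lapsed; forget it
            del self._bans[ip]
            return False
        return True
```

In practice this logic would live in the firewall (e.g. an ipset or nftables set with a timeout) rather than in application code; the sketch only captures the expiry semantics.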

My first thought is that with CGNAT ever more present, this kind of approach seems like it'll have a lot of collateral damage.

  • Yeah, my setup is purely for my own security reasons and interests, so there's very little downside to my scorched earth approach.

    I do, however, think that if there was a more widespread scorched earth approach then the issues like those mentioned in the article would be much less common.

    • In such a world you can say goodbye to any kind of free Wi-Fi, anonymous proxy etc., since all it would take to burn an IP for a year is to run a port scan from it, so nobody would risk letting you use theirs.

      Fortunately, real network admins are smarter than that.


    • If you actually wanted your site or service to be accessible, you'd run into issues immediately, since one IP would have cycled between hundreds of homes in a year.

      IP-based bans have long been obsolete.


  • For the people who implement it, there are fewer than three people using it, or agencies supporting it

    • CGNAT? That's definitely not true. There are whole towns that have to share one IP address. They're mostly in the third world.

> can accept that as the cost of security sometimes

And corporate IT wonders why employees are always circumventing "security policies"...

  • Additional explanation: this is primarily a personal setup.

    There would be a lot of refinement and contingencies to implement something like this for corporate / business.

    Having said that, I still sit on the ruthless side of the blocking equation. I'd generally prefer some kind of small allow list rather than a gigantic block list, but this is how it's (d)evolved.

    • How is this better than blocking after a certain number of hits within a window of time instead?

      Single queries should never be harmful to something openly accessible. DoS is the only real risk, and blocking after a certain level of traffic solves that problem much better, with less chance of a false positive and no risk to your own access either.
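The threshold-in-a-window alternative suggested here could be sketched roughly like this (hypothetical illustration; the class name and parameters are invented, and real deployments would use fail2ban or firewall rate limits instead):

```python
from collections import defaultdict, deque

# Block an IP only after it exceeds max_hits within window_seconds,
# so a single stray query never triggers a ban.
class RateBlocker:
    def __init__(self, max_hits=100, window_seconds=60.0):
        self.max_hits = max_hits
        self.window = window_seconds
        self.hits = defaultdict(deque)  # ip -> timestamps of recent hits

    def allow(self, ip, now):
        q = self.hits[ip]
        # discard hits that have aged out of the sliding window
        while q and now - q[0] > self.window:
            q.popleft()
        q.append(now)
        return len(q) <= self.max_hits
```

Because old hits age out, an IP that was briefly noisy regains access once it quiets down — the key difference from a 12-month scorched-earth ban.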

I perma-ban any /16 that hits fail2ban 100+ times. That cuts down dramatically on the attacks from the usual suspects.
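Aggregating offenders up to a /16 before banning could look something like this (a hedged sketch using Python's `ipaddress` module; the function name and threshold default are assumptions based on the comment, not anyone's actual tooling):

```python
import ipaddress
from collections import Counter

# Count offending IPs per /16 and return the networks whose hit
# count crosses the threshold (100 in the comment above).
def ranges_to_ban(offender_ips, threshold=100, prefix=16):
    counts = Counter(
        ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
        for ip in offender_ips
    )
    return [net for net, n in counts.items() if n >= threshold]
```

`strict=False` lets `ip_network` zero out the host bits, so each IP maps cleanly onto its enclosing /16.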

  • I haven't manually reviewed my lists for a while, but I did similar checks for X IP addresses detected from within a /24 block to determine whether I should just block the whole /24.

    Manual reviewing like this also helped me find a bunch of organisations that just probe the entire IPv4 range on a regular basis, trying to map it for 'security' purposes. Fuck them, blocked!

    P.S. I wholeheartedly support your choice of blocking for your reasons.

  • Sounds like a great idea until you ever try to connect to your own servers from a network with spammy neighbors.

    • Back in the day - port knocking was a perfect fit for this eventuality.

      Nowadays, wireguard would probably be a better choice.

      (both of the above, of course, assume one does the sensible thing and adds the "perma-ban" rules a bit lower in the firewall chain, below "established" and "port-knock")