Comment by Goronmon

4 hours ago

Is there a feasible alternative to "kernel anti-cheat" available on Linux?

There isn't.

When it comes to anti-cheat on Linux, it's basically an elephant in the room that nobody wants to address.

Anti-cheat on Linux would need root access to have any effectiveness. Alternatively, you'd need to be running a custom kernel with anti-cheat built into it.

This is the part of the conversation where someone says anti-cheat needs to be server-side, but that's an incredibly naive and poorly thought out idea. You can't prevent aim-bots server-side. You can't even detect aim-bots server-side. At best, you could come up with heuristics to determine if someone's possibly cheating, but you'd probably have a very hard time distinguishing between a cheater and a highly skilled player.
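
To make that concrete, here's roughly the kind of heuristic I mean, as a toy sketch (the threshold and the event shape are invented; a real system would combine many more signals and still hit the same false-positive problem):

    import math

    SNAP_DEGREES_PER_MS = 1.0   # invented threshold: faster than this right before a kill looks suspicious

    def angle_between(a, b):
        """Angle in degrees between two 2D view-direction vectors."""
        dot = a[0] * b[0] + a[1] * b[1]
        norm = math.hypot(a[0], a[1]) * math.hypot(b[0], b[1])
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

    def looks_like_aim_snap(view_before, view_at_kill, dt_ms):
        """Flag kills preceded by an implausibly fast view rotation.

        A skilled player's flick can trip this too, which is exactly the problem:
        the server only sees angles and timestamps, not who (or what) moved the mouse.
        """
        if dt_ms <= 0:
            return False
        return angle_between(view_before, view_at_kill) / dt_ms > SNAP_DEGREES_PER_MS

    # A 90-degree flick in 40 ms gets flagged -- but a good player might do that too.
    print(looks_like_aim_snap((1.0, 0.0), (0.0, 1.0), 40))   # True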

Something I think the anti-anti-cheat people fail to recognize is that cheaters don't care about their cheats requiring root/admin, which makes it trivial to evade anti-cheat that only runs with user-level permissions.

When it comes to cheating in games, there are two options:

1. Anti-cheat runs as admin/root/rootkit/SYSTEM/etc.

2. The games you play have tons of cheaters.

You can't have it both ways: no cheaters and anti-cheat that runs with user-level permissions.

  • Rootkit anti-cheats can still often be bypassed using DMA and external hardware cheats, which are becoming much cheaper and increasingly common. There are still cheaters in Valorant and in CS2 on FACEIT, both of which have extremely intrusive ACs that only run on Windows.

    At the level of privilege you're granting just to play a video game, you'd need a dedicated gaming PC that is isolated from the rest of your home network, lest another CrowdStrike-level incident happen because of a bad update to the ring 0 code these systems are running.

  • Even kernel anti-cheat can be defeated; it's a fight similar to the one captchas face.

    I can just have my screen recorded and feed in a fake input signal as my mouse/keyboard... or simply hire a pro player to play in my name, and it's absolutely impossible to detect any of these.

    The point is to just make it more expensive to cheat, culling out the majority of people who would do so.

  • I don't fully agree with the 1 and 2 dichotomy. For example, before matchmaking-based games became so popular a lot of our competitive games were on dedicated servers.

    On dedicated servers we had a self-policing community with a smaller pool of more regular players, and cheaters were less of an issue. Sure, some innocents got banned and less blatant cheaters slipped through, but the main issue with cheaters is when they destroy the fun for everyone else.

    So, for example, with the modern matchmaking systems they could do person verification instead of machine verification. Such as how some South Korean games require a resident registration number to play.

    Then when people get banned (or, probably better, shadowbanned/low-priority queued) by player reports or weaker anti-cheat, they can't easily evade the ban. But of course then there is the issue of incentivizing identity theft.

    And I don't think giving a gaming company my PII is any better than giving them root on my machine. But that seems more like an implementation issue.

    • > For example, before matchmaking-based games became so popular a lot of our competitive games were on dedicated servers.

      I still had a lot of problems with cheaters during this time. And when the admins aren't on, you're still at the whims of cheaters until you go find some other playground to play in.

      And then on top of that you have the challenge of actually finding good servers where you can join a game with similarly skilled players, especially when trying to play with a group of friends. Trying to get all your friends onto the same team, just for the server to auto-balance you again because it has no concept of parties, sucked. Finding a good server with the right mods or maps you're looking for, trying to join right when a round started, etc. was always quite a mess.

      Matchmaking services have a lot of extremely desirable features for a lot of gamers.

    • Except most anti-cheats started on dedicated servers because it turns out most people are not interested in policing other players.

      PunkBuster was developed for Team Fortress Classic, even getting officially added to Quake 3 Arena. BattlEye for Battlefield games. EasyAntiCheat for Counter-Strike. I even remember StarCraft 1 iCCup third-party servers having an anti-cheat they called 'anti-hack'.

      You can still see this today with modern dedicated servers in CS2: FACEIT and ESEA have additional anti-cheat, not less. Even FiveM, the modded third-party server platform for GTA V, has its own anti-cheat called adhesive.

    • > So, for example, with the modern matchmaking systems they could do person verification instead of machine verification. Such as how some South Korean games require a resident registration number to play.

      If you think the hate for anti-cheat is bad, just wait until you see the hate for identity verification.

      I'm actually rather blown away that you would even suggest it.

  • There's a third path:

    3. No humans in your multiplayer

    As someone who grew up amazed at the Reaper bot for Quake, I'm surprised we don't see a renaissance of making 'multiplayer' fun with more expressive, fallible, unpredictable bots. We're in an AI bubble and I don't hear of anyone chasing the holy grail of believable 'AI' opponents.

    This also has the secondary benefit of keeping your multiplayer game enjoyable even when people's short attention spans move on to the next hot live service. Heck, this could kill live service games.

    Then again, what people get out of multiplayer is, on some unspoken and sad level, making some other person hurt.

  • 3. Write your codebase in a way that is suspicious of client data and gives the server much more control (easier said than done, however).
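
    For instance, a toy sketch of what that means for movement (the numbers and packet shape are invented): instead of accepting the position a client reports, the server clamps it against what its own movement rules allow.

        MAX_SPEED = 7.5   # invented units/second; whatever the game's movement rules allow

        def apply_move(server_pos, client_pos, dt):
            """Server-authoritative movement: never trust the client's claimed position.

            If the reported move exceeds what is physically possible in dt seconds,
            clamp it back onto the allowed radius instead of accepting it.
            """
            dx = client_pos[0] - server_pos[0]
            dy = client_pos[1] - server_pos[1]
            dist = (dx * dx + dy * dy) ** 0.5
            allowed = MAX_SPEED * dt
            if dist <= allowed:
                return client_pos
            scale = allowed / dist
            return (server_pos[0] + dx * scale, server_pos[1] + dy * scale)

        # A client claiming to cross 100 units in one 50 ms tick gets pulled back.
        print(apply_move((0.0, 0.0), (100.0, 0.0), 0.05))   # (0.375, 0.0)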

    • That's just server-side anti-cheat, which I've already addressed.

      Cheating isn't always about manipulating game state, especially in FPSes. There, it's more about manipulating input, i.e., auto-aim cheats.

  • But isn't all client-side anti-cheat bypassable by doing image recognition on the rendered image? (either remote desktop or a hardware-based display cable proxy)

    • Yes. Using another machine, record the screen & programmatically move mouse.

      At that point you have to look at heuristics (assuming the input device is not trivially detectable vs a legit one).

      However, that can obviously only be used against certain types of cheating (e.g. aimbots, or trigger bots that shoot when the crosshair is on a person).
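
      As a sketch of what those heuristics can look like (the threshold is invented, and a careful bot defeats it by adding noise): human mouse traces are jittery, while naive synthesized ones are often too uniform.

          from statistics import pstdev

          JITTER_FLOOR = 0.5   # invented threshold: hand tremor keeps per-sample deltas noisy

          def too_smooth(mouse_deltas):
              """Flag a mouse trace whose per-sample step size is implausibly uniform.

              Naive synthesized input moves in perfectly even steps; humans don't.
              A careful bot just adds noise, which is why this is an arms race, not a fix.
              """
              if len(mouse_deltas) < 10:
                  return False
              magnitudes = [(dx * dx + dy * dy) ** 0.5 for dx, dy in mouse_deltas]
              return pstdev(magnitudes) < JITTER_FLOOR

          print(too_smooth([(3, 0)] * 50))                                   # True: robotic
          print(too_smooth([(3, 1), (2, 0), (4, 1), (3, 2), (2, 1)] * 10))   # False: noisy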

  • I'm not letting a game company have root on my PC. How does that kind of exposure for something as frivolous as gaming even make sense?

    • Something that is "frivolous" to you is a passion or even a profession for others. Competitive gaming is a massive market worldwide, and it wouldn't exist without the ability to enforce a level playing field. Not everything has to be a holy FOSS war.

Today, no. Very simplified, but the broad goal of those tools is to prevent manipulation and monitoring of the in-process state of the game. Consoles and PCs achieve this to varying degrees, starting with requiring a signed boot chain. Consoles require a fully signed chain for every program, so you can't deploy a hacking tool in the first place; no anti-cheat is needed.

PCs can run unsigned and signed programs -- so instead they require the kernel, at minimum, to be signed and trusted, and then you put the anti-cheat system inside it so it cannot be interfered with. If you do not do this, then there is basically no way to actually trust any claim the computer makes about its state.

For PCs, the problem is you have to trust that the anti-cheat isn't a piece of shit, and thus have to trust both Microsoft and random corporations. Also, PCs are generally insecure at the hardware level anyway due to a number of factors, so it only does so much.

You could make a Linux distro with a signed boot chain and a kernel anti-cheat; then you'd mostly need to get developers on board with trusting that solution. Nobody is doing that today, not even Valve.

Funnily enough, macOS of all things is maybe the "best" theoretical platform for all this, because it does not require you to trust anyone beyond Apple. All major macOS programs are signed by their developers, so macOS as an OS knows exactly where each program came from. macOS can also attest that it is running in secure mode, and it can run a process at user-mode level such that it can't be interfered with by another process.

So you could enforce a policy like this: if Battlefield6.app is launched, it cannot be examined by any other process, but likewise it may run in a full sandbox. Next, Battlefield6.app needs to log in online, so it can ask macOS to provide an attestation saying it is running on genuine Apple hardware in secure mode, and then it can submit that attestation to EA, which can validate it as genuine. Then the program launch is trusted. This setup only requires you to trust Apple's security and that macOS is functioning correctly, not EA or whatever, nor does it require actual anti-cheat mechanisms.
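
As a rough sketch of what the EA side of that check could look like (the attestation fields and the signature check are invented placeholders, not a real Apple API; the real thing would sit on top of Apple's attestation formats and root certificates):

    import secrets
    import time

    # Field names below are invented for illustration; a real flow would parse
    # Apple's attestation format and verify its certificate chain against
    # Apple's published roots.

    PENDING_NONCES = {}        # nonce -> time it was handed out at login start
    NONCE_TTL_SECONDS = 60

    def issue_login_nonce():
        """Server hands the client a one-time nonce to bind into the attestation."""
        nonce = secrets.token_hex(16)
        PENDING_NONCES[nonce] = time.time()
        return nonce

    def verify_apple_signature(attestation):
        """Placeholder: validate the attestation's cert chain up to Apple's root CA."""
        raise NotImplementedError("needs real certificate parsing against Apple's roots")

    def accept_launch(attestation):
        """Trust the launch only if the attestation is fresh, bound to our nonce,
        claims secure mode on genuine hardware, and is actually signed by Apple."""
        issued = PENDING_NONCES.pop(attestation.get("nonce"), None)
        if issued is None or time.time() - issued > NONCE_TTL_SECONDS:
            return False   # unknown, replayed, or stale attestation
        if not attestation.get("secure_mode") or not attestation.get("genuine_hardware"):
            return False
        return verify_apple_signature(attestation)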