The issue of anti-cheat on Linux (2024)

1 day ago (tulach.cc)

This article gave me more appreciation for the stance of the Linux community.

So, to sum up: Valorant's anti-cheat, which the author sees as something like an ideal solution:

- starts up and loads its kernel driver on boot.

- generates a persistent unique ID based on hardware serial numbers and associates this with my game account.

- stays active the entire time the system is up, whether I play the game or not. But don't worry, it only does some unspecified logging.

- is somehow not spyware or a data protection risk at all...

  • I also always hear a lot of people complain about cheaters in Valorant, so all of that compromised personal security doesn't actually stop cheaters.

    Honestly I feel like you should only use kernel anticheat on a dedicated machine that's kept 100% separate from any of your personal data. That's a lot to ask of people, but you really shouldn't have anything you don't consider public data on the same hardware.

    • > you should only use kernel anticheat on a dedicated machine that's kept 100% separate from any of your personal data.

      Correct. Unfortunately, what you've just described is a gaming console rather than a PC. This problem fundamentally undermines the appeal of PC gaming in a significant way, imo.

    • > doesn't actually stop cheaters.

      doesn't actually stop all cheaters.

      We could have a better discussion around this if we recognized that a measure doesn't have to stop 100% of something before we can rigorously evaluate its tradeoffs.

    • About halfway through the article, there's a brief nod to CS:GO. It uses a tick system, and the server controls what is possible, such as physics or awarding kills. Fighting-genre games use the same server-based game logic.

      Cheating is a big draw to Windows for semi-pro gamers and mid streamers. What else is there to do except grind? Windows gives the illusion of "kernel level anti-cheat," which filters out the simplest ones, and fools most people some of the time.

    • > Honestly I feel like you should only use kernel anticheat on a dedicated machine that's kept 100% separate from any of your personal data. That's a lot to ask of people, but you really shouldn't have anything you don't consider public data on the same hardware.

      Wouldn't it be sufficient to simply have a minimal system installed on a separate partition or on a separate drive (internal or external)? Boot that for gaming, and never give it the password for the encryption of your non-gaming volumes.

  • Strongly agreed. Some people want kernel-level anticheat for Linux. I think that's a huge mistake. Ideally, kernel-level anticheat would be done away with altogether. More realistically, I'm just going to avoid any games which use kernel-level anticheat, even if it means missing out.

    • I got roasted on Linux subreddits for saying as much. We should not be encouraging this crap to come to Linux; it needs to go away for good.

      IIRC, even Microsoft was getting fed up with hands in the kernel after CrowdStrike, so we may see it disappear eventually if Microsoft starts cracking down.

  • Honest question: do you segment your activities on your computer across different users?

    No? In which case, what practical spyware risk does a kernel-level driver add that user-mode software doesn't already pose?

    User mode software can spy on your clipboard, surreptitiously take screenshots, and take data out of your system. That spooks me enough that, if I don’t trust a software manufacturer, I don’t install it. Kernel mode makes no practical difference in my security posture.

    • For starters:

      - Creating a unique ID that is directly bound to hardware.

      - Accessing the memory of any process, including browsers or messengers.

      - Installing persistent background processes that are hidden from the rest of the system.

      But I think that's the wrong question. Talking about the kernel driver is a distraction.

      The abuse scenario that I think is most likely would be that the game and/or anticheat vendor uses the hardware ID for user profiling instead of just ban enforcement, and that the "logging" functionality is co-opted to detect software or activities that aren't related to cheats at all, but are simply competitors of the vendor, or can once again be used for profiling, etc.

      None of that strictly requires a kernel driver. Most of that stuff could easily be done with a usermode daemon. But under normal circumstances, there is no way I'd install such a program. Only in the name of cheat prevention does it suddenly become permissible to make users install that stuff when all they want to do is play a game.

    • This is adjacent to how Linux users claim their default system is inherently more malware-resistant than Windows, when either way you're trusting anything you run in user space with almost everything important.

    • > User mode software can spy on your clipboard, surreptitiously take screenshots, and take data out of your system

      Not on any properly secured Linux machine. But yes, it's generally a bad idea to install software you don't trust, a category that anticheats slot nicely into, given their resistance to auditing and analysis.

  • In Valorant's defence:

    1) There is a $100k bug bounty on the anti-cheat: https://hackerone.com/riot?type=team

    2) The anti-cheat is the game's entire reason for being. It is the main focus of the development and marketing. People buy Valorant for the anti-cheat; they are willing to accept a kernel driver as a trade off for fairer competition.

    • Based on the install base and the level of access it could theoretically provide, I think a 0-day has a good shot at being worth more than $100k. Definitely worth more than that if you happen to know your high-value target plays League.

      Fair competition is all well and good, but there are other ways to do it, and I can already tell you that the war on kernel-level anti-cheat is well under way. There are already people cheating in Valorant, and that will not slow down. If anything, it's going to get more common, because cheaters and cheat creators are some of the most diligent people out there.

  • The way I described it to a friend was to use this analogy: Imagine you have someone over for game night, and before you play they say "Oh, by the way, I need the keys to the filing cabinet where you keep all your tax returns and whatnot." To which you might respond, "Wait, you need to read my tax returns before we can play this game?" And they say, "Oh, I'm not going to read them, I just need to hold the key while we play."

    And you would rightly tell them to piss off and get out of your house, because that makes no sense. If you really wanted to torture the metaphor, you could I guess argue that they need full access to your house just in case you decide to pull some loaded dice out of the filing cabinet or something, but that's not really the important thing to me. The important thing is that, regardless of whether or not I trust the developer of the anti-cheat, the game just isn't that important.

  • - … but, more or less successfully, prevents most cheating attempts, which would otherwise make the game unplayable regardless.

    For anyone saying “just do server side,” no, it’s physically impossible to stop all cheating that way until we have internet faster than human perception.

    • I actually think this is one area where AI and statistics applied to player behavior are actually the right answer, similar to how they catch chess cheaters.

      I've seen videos where cheats are particularly easy to detect if you are also cheating. I.e., when you have all the information, you can start to see players reacting to other players before they should be able to detect them. So it should be possible to build a repertoire of cheating examples and clean examples using high-level players to catch a fair amount of cheating behavior. And while I understand that there are ways to mitigate this and it's an arms race, the less obvious the cheats are, the less effective they are, almost by definition.

      If someone is consistently reacting outside the range of normal human reaction times, they're cheating. If they randomize it enough to be within human range, well, mission accomplished, kind of.

      If they're reacting to other players in impossible ways by avoiding them or aiming toward them before they can be seen with unusual precision or frequency, they're cheating.

      A lot of complex game dynamics can be simplified to 2D vectors and it shouldn't be that computationally intensive to process.
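
      The screening rule above can be sketched in a few lines. This is purely illustrative: the 150 ms floor and the 20% tolerance are invented thresholds for the sketch, not calibrated values.

```python
# Flag players whose measured reaction times are too often faster than
# a plausible human floor. Thresholds here are made up for the sketch.
def looks_superhuman(reaction_times_ms, floor_ms=150, tolerance=0.2):
    if not reaction_times_ms:
        return False
    too_fast = sum(1 for t in reaction_times_ms if t < floor_ms)
    return too_fast / len(reaction_times_ms) > tolerance

# A clean player gets the occasional lucky pre-fire; a cheater sits
# under the floor consistently.
clean_player = [240, 310, 190, 260, 145, 280]   # ms per engagement
aimbotter = [90, 110, 85, 200, 95, 105]
```

      As noted above, a cheat that randomizes into the human range slips past this check, but only by giving up most of its advantage.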

    • Sure, but you could at least stop the most blatant wallhacks; most times I see a video of a cheater, it's something stupid like that. It can't be that hard to do occlusion calculations server-side, right?

      Don't let perfect be the enemy of good.

Was going to post this on a now-deleted comment about anticheat being a hard problem, so popping it here because it might be relevant:

Anticheat is only hard because people are looking for a technical solution to a social problem. The actual way to get a good game in most things is to only play with people you trust and, if you think someone is cheating, stop trusting them and stop playing with them.

This doesn't scale to massive matchmaking scenarios of course - and so many modern games don't even offer it as an option - so companies would have to give up the automatic ranking of all players and the promise of dopamine that can be weaponised against them, but it works for sports in the real world and it worked for the likes of Quake, UT, etc., so I don't think it's necessarily a bad idea. Social ostracism is an incredibly powerful force.

However, it does mean that the big publishers wouldn't have control over everything a player does. Getting them to agree to that is probably the real hard problem.

  • My naive take is that technical solutions are possible, but critically they can’t be fully automated. The most effective anti-cheat solution possible probably looks something like a full-time in-house team comprised of seasoned ITSEC, data nerds, a couple of ML people, and a few devs. A team like that could probably pick out and boot cheaters with a very low rate of false positives given adequate data to crunch, and they’d only get better over time as they build a roster of patterns and behaviors to match against.

    The problem is that this costs more than game companies are willing to spend, even when they’re raking in cash hand over fist. As long as the problem isn’t so bad that it’s making players quit, it’s cheaper to employ more automated, less effective strategies. The end goal isn’t player happiness, it’s higher profit margins.

    • I work on one of the games mentioned in this article and you're underestimating cheaters and cheat developers. We're doing this already and we're one of the smaller studios, so the larger studios are for sure doing it on a larger scale. Cheaters are still managing.

    • Always wondered if someone distributes fake cheats that snitch, or worse. That'd put the cheaters on defense instead of just offense. Yeah, people can make their own, but most don't.

    • I think this is the most reasonable take I've seen here. As my sibling comment mentions, people are already doing this. I think that - if anything - my point is that this is being done, but separately to the social element. You could get a hundred PhDs to look at the data and identify a cheater, but what you really want to avoid is someone that 9/10 people don't want to play with... and only the players can really tell you who that is. Data from the PhDs would help, though!

      I've not really thought about it so deeply until right exactly now (thanks, all!), but I think doing so might have led me to a very unpopular opinion - I might be prepared to say that this problem can't be solved in an anonymous environment. Unless you have a reputation to ruin (or, say, an xbox account to lose), then being outed as a cheater costs you nothing. Again, this is incompatible with a lot of current multiplayer modes - and most of what I love about PC gaming - but, ultimately, I'd rather be judged by my peers than a rootkit.

  • > Anticheat is only hard because people are looking for a technical solution to a social problem. The actual way to get a good game in most things is to only play with people you trust and, if you think someone is cheating, stop trusting them and stop playing with them.

    As much as I reminisce about the days of private servers for Quake/2/3, UT99, CS1.6, etc., saying this is really ignorant of how modern gaming and matchmaking work. Some games would simply not be possible without public matchmaking; I don't care how much of a social butterfly you are, you are not going to get 99 friends together for a PUBG match. Even getting 11 other people to run a game of Overwatch or CS would be a pain. Other games need public matchmaking to have a fair ranking system. You go on to say ranking is "weaponised", but ranking is a feature, and a lot of people like that feature.

    > However, it does mean that the big publishers wouldn't have control over everything a player does. Getting them to agree to that is probably the real hard problem.

    The demand for anticheat and matchmaking/ranking systems is entirely player-driven, not publisher-driven. If developers and publishers could get away with only implementing player-managed servers and letting players deal with cheaters, they would! It's a lot less work for them.

    As a sibling comment mentioned, even in the days of private servers you ended up with community-developed tools like Punkbuster. I remember needing to install some anti-cheat crap when I signed up for Brood War's private ICCUP ladder.

    • Community-server-driven games with large player counts actually have a pretty big advantage over smaller ones: it's easier to have somebody with permission to ban cheaters online at approximately all times.

      Squad has 100 player games, and despite its anticheat having well-known bypasses, I don't see a lot of hacked client cheating. Why? Because I play on servers that consistently have a couple people online during the hours I play that ban anybody who cheats.

      Community servers have a lot more moderators than the game devs could possibly afford, because they can build trust with volunteers.

    • > this is really ignorant of how modern gaming and matchmaking works.

      If you listen to the people complaining about cheating... it doesn't.

      > I don't care how much of a social butterfly you are, you are not going to get 99 friends to get a PUBG match going.

      True, but my county is able to get more than that number of people into a cricket league. You don't need to personally know everyone, just be confident that there is a system of trust in place that would weed out any rotters. Is such a system going to be perfect? No, but neither are any of the top-down approaches attempted in videogames. At least this one doesn't require me to install an umpire in my home at all times.

      > As a sibling comment mentioned, even in the days of private servers you ended up with community-developed tools like Punkbuster.

      The difference is that you could have played the game without doing that. If you didn't trust the people on that server, how likely would you be to install those tools?

    • This. Back in the day, when you played an FPS on a private server, you'd also be able to observe other players when you died, so cheating was discovered pretty quickly. When we had ranked clan matches, there'd also be third-party observers, both for fun (ranked matches were a big event) and to look for signs of cheating.

  • I agree with you that the issue is scale, but the scale at which it worked existed when gaming was niche. You can't put that back in the bottle.

    The history of plenty of anticheats starts with community servers, not matchmaking. Even Team Fortress Classic had enough of a cheating issue that community members developed Punkbuster, which went on to be integrated into Quake 3 Arena. A lot of third-party anticheats were developed in that era for community servers: BattlEye for Battlefield games, EasyAntiCheat for Counter-Strike. I even remember Starcraft: Brood War's third-party ICCUP server with 'antihack'.

    You still see this today with additional anticheats on community server solutions. GTA V's modded FiveM servers had anticheats before one was added to the official game. CS2 FACEIT and ESEA servers have additional anticheats, as people do not think VAC is effective enough.

  • There are quite a few games that are fun because they throw dozens of players into the same event. I don't have over 100 friends to play with, let alone over 100 friends I trust not to cheat.

    For some games the small-group approach works, but even a game as simple as Counter-Strike needs at least a dozen players to get the most out of it.

    That said, there are perverse incentives in many of the games hit worst by cheaters. Games that invent more and more prestigious rewards and titles for accounts that do well, in hopes of them spending more money on microtransactions, or the microtransaction hell-holes like GTA Online that exist as a vessel to take your money more than to be any fun. Putting upgrades and other desired items behind a gambling mechanic makes the whole ordeal extra shitty, preying on the psychological weaknesses of unfortunate souls so they develop a digital gambling addiction and can be sucked dry by billion-dollar companies.

    I've personally never run into anticheat issues because I find most of the games that require anticheat for online play just aren't worth the time and effort to play online in.

    But still, the old SW Battlefront II wouldn't be fun without the massive online matches, and those require some form of anticheat to stay fun.

  • Making sure you can get enough people together for a game is one thing; making sure you can get enough people together for a game that you know aren't cheating is even harder. Most "friends" these days are online-only acquaintances that you simply can't know well enough to know if they're cheating or not. In the heat of the moment while playing a game it's tough to tell if someone's cheating or just good at the game. The toxicity of people being accused of cheating and defending themselves will quickly split apart any acquaintance group.

  • I think there's immense value in being able to just press a button and jump into a game, without having to actually know people and build up a community.

    However, I wonder if you could have that while still removing features that make cheating seem appealing. For example, as you said, you can have games with randoms without an automatic ranking of all players. (Or maybe you rank players so you can match people of similar skill levels, but you don't tell anyone what their rank is.)

    • > For example, as you said, you can have games with randoms without an automatic ranking of all players

      Good skill matching is one of the most important advancements in gaming over the last few decades. Being able to consistently play against people who are fair competition for you makes games so much more fun, especially if you are much better or much worse than the average player. In the old days, you could alternate between opponents who were no challenge at all and opponents you had no chance against; both types of games get old really fast.

      In some ways, good skill matching can alleviate the harm cheaters do; if the cheating makes them way better than everyone else, then good matchmaking should start to match them up only against other cheaters. In many ways, this is the ideal scenario - cheaters play against each other, and everyone else plays against people who are close in skill level.
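
      The simplest version of the rating mechanics behind this is a plain Elo update. The K-factor and starting ratings below are conventional but arbitrary choices, not anything a specific game uses.

```python
# Standard Elo rating update. score_a is 1.0 for a win by player A,
# 0.5 for a draw, 0.0 for a loss. k controls how fast ratings move.
def elo_update(r_a, r_b, score_a, k=32):
    expected_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    delta = k * (score_a - expected_a)
    return r_a + delta, r_b - delta

# Two evenly rated players; the winner gains exactly what the loser
# drops. A cheater who keeps winning climbs until the only opponents
# left at their rating are other cheaters.
new_a, new_b = elo_update(1500, 1500, 1.0)   # -> (1516.0, 1484.0)
```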

    • That still exists in many games with server browsers. The game just goes through the server list to find a populated one with low latency and “official” settings (i.e. not knife-only or modded).

      It works basically the same as matchmaking does now, albeit only matching on server quality and not player skill.

    • > However, I wonder if you could have that while still removing features that make cheating seem appealing. For example, as you said, you can have games with randoms without an automatic ranking of all players.

      This does not stop cheaters whatsoever. Anyone who played during the private server era of FPS in the late 90s/early 00s knows this; wallhacking, modified character models with big pointy spikes indicating player locations, aimbots, etc. ran rampant, even when nothing was on the line.

  • Yeah, or don't play video games that people treat as jobs, because that's where cheaters go. CS:GO was one. Better yet, there are other hobbies.

  • > The actual way to get a good game in most things is to only play with people you trust and, if you think someone is cheating, stop trusting them and stop playing with them.

    One of the games mentioned in this article is Rust. Playing with only people you trust defeats the point because it's a game full of betrayal. At best you'll be able to get a group together once and then destroy your relationships more than Monopoly would.

  • I cannot agree. Getting a Quake game up in the early 2000s could take hours worth of sitting in IRC pickup channels, if it happened at all. I don't feel publishers are at fault here. I figure the vast majority of players would pick an instant game with potential cheaters over an hour wait for a 50% chance at a game.

    • That's because few people played Quake; it got elitist really fast. I had the same issue with it. I had zero issues with CS, though; finding a match was pretty easy. PUGs aren't a thing of the past; PUBG players used to do them, for example.

  • So how am I supposed to play a game of PUBG if I don't have 99 friends who I trust not to cheat who also play it? How is any community going to establish and continuously monitor that their members don't cheat, while also allowing new members to join over time? I don't have a big group of friends who also like playing the same games I play at the same times I want to play, sounds like a total non-starter to me.

Targeting perfect fairness in a multiplayer video game with arbitrary latency between participants is a waste of energy. A much better target is to make it feel like no one is cheating. I don't really care too much if someone is actually better or worse than me at counterstrike. What I mostly care about is wildly implausible gameplay. No one is going to stop the guy who is getting a 5% gain on his ELO by using a 2nd computer, machine vision and a robot to move his mouse ever so slightly faster than he typically can.

However, there are ways to detect when someone is being an absolute madman with the hacks. We're talking head snapping through walls with 100% accuracy and instantaneous displacement across an entire 30 minute match. These people can simply be banned immediately by hardware/steam ID. We can write basic rules to detect stuff like this. There's no "confidence interval" for speed hacking through a map and awping the entire CT team in 3 seconds. You certainly don't need kernel drivers.
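
A rule like that can literally be a few lines of server code. A sketch, with every constant invented for illustration rather than taken from any real game:

```python
# Server-side movement sanity check: reject per-tick displacement
# beyond what the game's movement speed allows.
MAX_SPEED = 320.0      # game units per second (illustrative)
TICK = 1.0 / 64        # 64-tick server
SLACK = 1.5            # headroom for lag compensation and knockback

def is_speed_hack(prev_pos, new_pos):
    dx = new_pos[0] - prev_pos[0]
    dy = new_pos[1] - prev_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    return dist > MAX_SPEED * TICK * SLACK
```

A real implementation would have to account for teleporters, respawns, and knockback, but nothing about it needs a kernel driver.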

  • Or entire lobbies filled with bots with the same name that stand around doing nothing while one of them goes full spinbot, and auto kicks anyone who happens to join their lobby. Those bots I see week after week with the same accounts and no bans in sight.

  • This isn't exactly wrong but you're not looking at it from a modern perspective.

    If you can cheat and get away with it, then you'll see streamers do it. That will tank confidence in your game.

    It doesn't matter if cheating doesn't make you top the leaderboard. If you have global leaderboards, they will be dominated by cheaters.

    I don't think rootkits are excusable, but if the solution were simple, they would do that.

  • > These people can simply be banned immediately by hardware/steam ID

    And how do you actually ensure a good hardware ID that can't be trivially modified?

I personally don't see it as an issue that my computer can't run the literal rootkits being shipped with these games. But I concede that not everyone shares my preferences, and if you wish to run this malware you should be able to do so.

  • The bigger showstopper is probably that video game devs won't put energy into Linux support, unless we're talking about Android. Wine isn't going to translate the anticheat.

For me, it all boils down to independence and freedom. Many games were/are run by communities, but in the last 20 years game companies have moved control of the gaming experience to centralized services run by those companies. This really falls into the same category that StopKillingGames wants to address. Games should be run/controlled by their communities, not centralized corporations. I'd rather trust the community to handle cheating for a game than be convinced that a centralized company needs root-level access to my system.

I stopped playing any game that doesn't give me this control when I switched to Linux[0].

If the price of preventing cheating is losing control over my system, it's not worth it. There are plenty of games out there that respect their players. No need to support ones that want to be a gatekeeper between your gaming experience and your computer.

[0]: https://www.scottrlarson.com/publications/publication-transi...

One way to do anti-cheat on Linux without compromising the sanctity of your host kernel would be to run the game inside a hardware-protected VM.

Anti-cheat does not ordinarily like to run inside a VM, because then the hypervisor can do the cheating, invisibly to the kernel. However, technologies like AMD SEV can (in theory) protect the guest from the host, using memory encryption. (And potentially also protect from DMA-based cheats, too)

What you'd need is some way for the hardware to attest to the guest "yes, you really are running inside SEV".
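
For reference, launching an SEV guest under QEMU/KVM looks roughly like the following. This is a sketch: exact options depend on the QEMU version, firmware, and CPU generation (cbitpos is 51 on EPYC Milan, for example), and attestation setup is a separate step.

```shell
# Illustrative SEV guest launch; option values vary by platform.
qemu-system-x86_64 \
  -enable-kvm -cpu EPYC \
  -machine q35,confidential-guest-support=sev0 \
  -object sev-guest,id=sev0,cbitpos=51,reduced-phys-bits=1 \
  -drive file=guest.img,format=raw
```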

  • Even with SEV, you need hardware passed through to the VM. That means either running two GPUs or hot-swapping the machine your GPU is connected to and hoping neither driver crashes and burns (which is what you can expect from any consumer GPU driver that tries to hotplug). The software will also break the moment someone finds yet another side channel attack to break memory encryption. Intel's attempts at secure hardware hypervisors failed so bad they took the hardware out of consumer chips.

    In theory you could probably get it to work on some hardware given some boot configurations with some games, but what game developer is going to develop a bespoke Linux VM? And if not the game developer, what Linux developer is going to spend time developing a platform that caters to the wishes of closed-source, rootkit-driven anticheat developers?

    • The guest VM doesn't actually have to be Linux, but I don't see why it couldn't be any old distro.

    • > Intel's attempts at secure hardware hypervisors failed so bad they took the hardware out of consumer chips.

      That doesn't seem right. Hypervising is not a feature many consumers use, so why would they spend the money to include it in consumer chips?

I miss PUBG, but the fundamental purpose of anti-cheat software is to circumvent and curtail user freedom. I don't really want affordances for that in my OS.

Anti-cheat behaves exactly like malware. It inserts itself into your system in a privileged state to monitor your activity. Its only job is to spy on your behavior.

If you want to run it, I don't see a problem. Use a dedicated machine. Let's call it a console. Use it exclusively to play online PvP. Don't use it for anything else.

Privacy and security conscious people who use Linux desktops as general purpose computing devices generally don't want anti-cheat systems on their computers. I have no problem with the technology existing for other people. Don't try and force me to use it or I won't support your games/service.

I think a lot of the posturing from game publishers about anti-cheat on linux is really about dissatisfaction with Valve's control of the platform and revenue cut. Competitors aren't prepared to invest in development to build a strong platform like Valve but they are jealous of Valve's income. Nerfing their product on Linux is likely a way of pushing people to other platforms. I don't know what they are smoking because Sony, Apple, Nintendo and Microsoft aren't going to be any better for them.

I found this part notable:

---

Let me ask you a question. How many vulnerable drivers (yes, those that can be abused by bad actors to gain kernel access) do you think the average gamer has on their Windows install? I’ll start with my own system. This is what I can immediately think of:

MSI Afterburner - RTCore64.sys driver (yes, even in the latest version) has a vulnerability that allows any usermode process to read and write any kernel memory it wishes

CPU-Z - cpuz142_x64.sys driver has (again) kernel memory read/write vulnerability and MSR register read/write

If I looked hard enough, I would most likely find more.

  • I didn't really get the point being made there. Yes, Windows' kernel security posture is Swiss cheese, but that's not an argument for poking more holes.

    • Well, if nothing else, it makes me think that if you are doing truly security-sensitive work, you almost certainly need to get a separate computer for that. Whether or not you play any games with kernel-level anti-cheat, you probably have cpu-z installed.

      And if you're not doing something particularly sensitive, then security on consumer PCs must matter a lot less than some people think.

> What is a videogame cheat? [...] an external program that somehow manipulates the game or reads information from the game to provide you with an advantage over others.

This has always interested me when it comes to the need for anti-cheat to exist... For instance with wallhacking, the way most FPS style game engines have always been written means the server sends all player location information to all clients, so it's all there in memory if you can get to it.

But what if your server engine instead only sent relevant player location data to each client? It would be more work; the server would have to do occlusion tests for each player pair, but a bounding box and some reasonable spatial partitioning should make that reasonably efficient. To prevent occlusion lag, e.g. players not smoothly appearing around corners, the server can make reasonable predictions with some error margins, based on a combination of current player velocities and latencies.

I know this is just one part of cheating, but it seems like fighting all the other kinds (manipulating input) is a losing battle anyway. I mean, ultimately you can't stop people from hooking up the input and display to a completely independent device with computer vision.
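
The occlusion test described above can start very cheap. A toy 2D version, treating walls as line segments (all of it illustrative; a real engine would use its own spatial partitioning and 3D geometry):

```python
# Send a player's position to a client only if no wall segment blocks
# the straight line between them: a 2D toy version of server-side
# visibility filtering.
def _orient(o, a, b):
    """Cross-product sign: which side of line o->a point b lies on."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p1, p2, p3, p4):
    d1, d2 = _orient(p3, p4, p1), _orient(p3, p4, p2)
    d3, d4 = _orient(p1, p2, p3), _orient(p1, p2, p4)
    # Strict test: touching endpoints don't count, fine for a sketch.
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def visible_players(viewer, others, walls):
    return [o for o in others
            if not any(segments_intersect(viewer, o, a, b) for a, b in walls)]
```

With a single wall from (5, -1) to (5, 1), a player at (10, 0) is hidden from a viewer at the origin, while one at (0, 10) is not.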

  • >the server would have to do occlusion tests for each player pair, but a bounding box and some reasonable spatial partitioning should make that reasonably efficient.

    Shadows and reflections are the hard part. Especially when light can be cast by your own/other players' weapon. It gets even more complicated with ray tracing becoming common.

    • Good point, there's also sound. I suppose ultimately only server side rendering could solve that... although that is a thing now.

The article citing Valorant as doing anti-cheat the best way is really baffling. Their anti-cheat practices are so invasive they might as well require you to play on a PC they own completely. They simply won't let you play if you have software or drivers installed that they don't trust. One step further is to use TPM and Secure Boot to completely lock your PC to a trusted vendor installation, i.e. an iOS/Android-style walled garden for PCs.

But if "serious gamers" really want to go this far to prevent cheating (which will happen anyway, as it's a social problem, not a technical one) then go ahead I guess.

I thought DMA cheats rendered all of these anticheat efforts useless? It feels like the future of anticheat should probably be focused on how to efficiently send player data to clients only when they would be able to interact with them anyway. Or replay moderation?

  • Not entirely. Valorant's anti-cheat tries hard to detect DMA cards, which eventually led to one of their largest banwaves. See:

    https://playvalorant.com/en-gb/news/dev/vanguard-hits-new-ba...

    Of course the cheat developers don't sit idle, so this is far from over.

    • I read this article; unless I missed it, Brazilian pixel bots comprised the bulk of the ban wave, with DMA cheaters getting a mention but in unspecified quantities, and they could have been swept up in the manual and rage-hacking bans?

So what are the implications of having my KeePassXC database open while playing a game that utilizes one of these invasive anticheats? Every time I do it I feel uneasy, but nothing bad has happened yet.

This is one use case where I think the idea of cloud gaming (e.g. google stadia) could make some sense. Having this as an alternative for linux users would be nice.

It's much harder to cheat if the game isn't running on your computer.

  • That's a good idea, sadly I think gamers would reject it due to extra latency.

    The ultimate "anti-cheat" is playing on some trusted party's computer. That can be a cloud machine, but I think today a game console would work just as well, turn that closed nature into an actual user-facing benefit. Console manufacturers seem focused on their traditional niche of controller couch gaming and not on appealing to high-FPS keyboard-and-mouse gamers, though.

    • Consoles are also vulnerable via peripherals. There are controllers that will run recoil countering scripts and things like that.

      XIM fakes being a controller but is KBM. I sort of wonder whether it’s possible to use a camera to get a stream of the game and make an aimbot either by making a fake controller or a robot that manipulates a real controller.

    • Yeah I don't think this would work for hardcore competitive gamers, but it would be nice to have as an option for those who are more casual. Definitely better than not being able to play at all.

      It doesn't even seem very hard to implement, steam already has the ability to stream games, they could add this pretty easily as an option for any game (although there is the concern of the extra cost of running the servers).

    • >That's a good idea, sadly I think gamers would reject it due to extra latency.

      That shouldn't be a problem if all players, regardless of the OS, are required to use the same cloud service with similar latency.

  • Cloud gaming is flatly non-workable for any kind of game where latency matters. This also covers most of the market for games where anti-cheats matter a lot.

    • > Cloud gaming is flatly non-workable for any kind of game where latency matters.

      Not if only the rendering is done on the client. Look at rocket league.

      Edit: of course, it is still possible to cheat in rocket league, but because all physics state is server authoritative at best a perfectly coded cheat could play like a perfect human, not supernatural.

      1 reply →

  • Generally yes, although some cheats like aim assistance would work fine on online streamed games, since they can scan your screen and adjust your mouse input to aim.

    To be fair kernel anticheat can't block this completely either, it can be run on external hardware that uses a capture card to analyze your video feed and alter your mouse inputs to the computer. Generally undetectable unless the game is able to identify unnatural mouse movements.

    • >it can be run on external hardware that uses a capture card to analyze your video feed and alter your mouse inputs to the computer.

      I think at some point defeating this becomes impossible. This sort of cheating isn't much different conceptually from just having someone who's really good at the game play for you.

      1 reply →

  • Lag is the biggest issue... even a local wifi connection vs wired can make a massive difference in terms of what's acceptable lag.

    Of course, to TFA's point on network code... a lot of the issues in question could come down to checking for movements that exceed human... moving faster than the speed in game, or even twitch aiming movements faster than a mouse, or a consistent level of X accuracy in shooting over time. On the last part, I'm not sure if there might be some way to mask a user's hit zone, rendering and such so that an aim-bot thinks the foot is center-mass, etc. Or if it could be randomly shifted in a test scenario.

If anyone finds it useful, these can be added in a startup script, but don't put them in sysctl.conf or sysctl.d/ as they may eventually break OS updates. Someone will say these have never broken their OS update, but what they don't realize is that they have jinxed themselves and Murphy's law is now active. These options may prevent some rootkits, malicious or otherwise. Research these options and test them before running with scissors.

    kernel.modules_disabled = 1
    kernel.kexec_load_disabled = 1

The options can be loaded last, after the OS is entirely up and running, using sysctl. The script that loads these options would have to be disabled and the OS rebooted prior to doing OS updates. Once these options are enabled they cannot be disabled without a reboot.
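As a hedged sketch of what that late-boot script would do (writing to /proc/sys directly, which is the standard mapping sysctl(8) itself uses; must run as root, and as noted, only a reboot undoes it):

```python
# Apply the two hardening sysctls late in boot, without persisting them
# in sysctl.conf / sysctl.d, by writing their /proc/sys files directly.

def sysctl_path(key):
    """Map a dotted sysctl key to its /proc/sys file."""
    return "/proc/sys/" + key.replace(".", "/")

def apply_hardening():
    # One-way switches: the kernel refuses to clear these until reboot.
    for key in ("kernel.modules_disabled", "kernel.kexec_load_disabled"):
        with open(sysctl_path(key), "w") as f:
            f.write("1\n")
```

Disable whatever unit or rc script calls `apply_hardening()` and reboot before running OS updates, per the advice above.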

If giving a video game sudo or doas or root access, research the game, its developers and publisher exhaustively, and ask a magic 8-ball at least 3 times whether the game developers can be trusted. Are they within your country's jurisdiction? As others alluded to, consider having a dedicated bare-metal system for the games that are suspect. Keep a thumb drive around with the OS image, maybe even a few OS snapshots, just in case the game performs dark magic on your system. Consider enabling auditd with custom rules to watch for writes within /boot, /etc, /lib and /usr at the very least. Auditd has a built-in module that can be enabled to send auditd messages to a remote syslog server. If a game is doing something sneaky or shady, name and shame them.

The author cites fear mongering over kernel anticheat, but I don't think anyone reasonable should be ok with their personal computer having kernel anticheat installed.

Genshin's anticheat was used to install ransomware, ESEA's anticheat was used to install bitcoin miners on users' machines, EA's anticheat was used to hack clients' computers during a tournament, etc.

When not explicitly malicious, anticheat software is at best spyware that's spying on your computer use to identify cheating. People complain a ton about Microsoft recall storing screenshots of your computer locally being a security risk, and yet they're fine with a Chinese owned anticheat program taking screenshots of your computer and uploading them online. And even if the company isn't trying to use that info to spy on you, my understanding is that when you're a Chinese company, you have to give full access of that data to the government.

With the ongoing/rising tensions between the US and China, I actually think there's a significant chance that we may see all Chinese owned anticheat programs banned in the US, which would be pretty significant since they own or partially own the majority (as far as I know).

  • > I don't think anyone reasonable should be ok with

    Well, I don't think anyone reasonable should be telling others what they "should" be ok with, myself included (I made an exception this one time).

    > Genshin's anticheat was used to install ransomware

    You should tell the full story: Ransomware installed Genshin's anticheat because it was whitelisted by antivirus providers, it then used the anti-cheat to load itself deeper into the system. So not really a problem with Genshin's anticheat (indeed, users who had never played the game or even heard about it would be affected), but a problem with how antivirus providers dealt with it.

    > ESEA's anticheat was used to install bitcoin miners

    You should tell the full story: Someone compromised the supply-chain and snuck a miner into the anticheat binary. It was discovered immediately, and the fact that the miner was in the anticheat and not, say, a game loader, did nothing to hide it.

    > People complain a ton about Microsoft recall storing screenshots of your computer locally being a security risk, and yet they're fine with a Chinese owned anticheat program taking screenshots of your computer and uploading them online

    This is just a fallacy. Like saying "people voted for candidate A, but then they voted for candidate B!" Obviously, there can be multiple groups of people, and saying that "people" vaguely support X but not Y is usually a misunderstanding of the groupings involved.

    The obvious explanation for this "apparent" contradiction you point out is: Windows Recall is likely to be an on-by-default feature, and people don't really trust Microsoft not to "accidentally" enable it after an update. Also, Recall would likely be installed on all computers, not just gaming PCs. That's a big deal. A lot of people have multiple PCs, because they're cheap and ubiquitous these days. Maybe they're okay with Recall and/or anticheat taking snapshots of their gaming PCs, but not the laptop they use to do their taxes, etc. The source of your confusion is likely the misunderstanding that most people, unlike the HN crowd, are practical, not ideological. They don't oppose anticheat on some abstract level, they care about the practical reality it brings to their life.

    Another element is that most people, at least in the US, have "spy fatigue". They figure, hey, the US government spies on me, the five eyes spies on me, Russia and China spy on me, what does it matter?

    • > So not really a problem with Genshin's anticheat (indeed, users who had never played the game or even heard about it would be affected), but a problem with how antivirus providers dealt with it.

      The distinction doesn't really matter. The claim wasn't that the ransomware authors exploited deficiencies in the anticheat design, just that the anticheat was used to install the ransomware, which it was.

    • > You should tell the full story: Someone compromised the supply-chain and snuck a miner into the anticheat binary. It was discovered immediately, and the fact that the miner was in the anticheat and not, say, a game loader, did nothing to hide it.

      Software with that level of access having a supply chain compromise is not an argument in its defense.

      2 replies →

Kernel-level anti-cheat is a short-term curse with long-term damages. For those wondering about the short term, here's a cheat that will never be handled by rootkit anti-cheat: https://youtu.be/9alJwQG-Wbk (per the video description, an aimbot that triggers your human muscles to aim faster than any unaugmented human). That solution was effectively made from a box of scraps. Now imagine in a year when some go-getters package and sell it to the mass market.

The long-term damages are self-explanatory: it's called a rootkit.

Can't help but consider how, perhaps, this could be a teaching moment for other folks. I know "convenience reigns supreme" but getting perhaps less-tech savvy gamers knowledgeable about what is being given up when you use anti-cheat.

Alas, I'd like to believe we could be in an era of "hey, not a problem, just have a dedicated gaming machine," but that too is difficult.

"You shouldn't have to install a root kit to play a game".

I can't agree more with the video linked in this article, the one the author claims is FUD and misinformation. The author is just flat out wrong to discount the threat.

Any hacker would want a rootkit, and any nation-state would also want this. Tencent has a convenient nation-state behind it, one without a credible history on human rights.

Importantly, you don't need to control and lock down the edge to have an effective anti-cheat. You can do server side checking that is just as effective.

> Just recompile the kernel and change the functions it uses to hide the possible cheat and bypass all checks.

You can do this on macOS too, by the way. XNU is open-source.

Is cheating possible because games are written in low level languages which have to have precise tracked positions of elements in memory?

If your garbage collector is grabbing an entire arena of memory and moving it constantly, doesn't that limit a cheat to asking an API to retrieve an object because only the managed memory knows where objects reside at any given moment?

  • No. When you write code in a high-level language, your data is still in-memory offset at some 'precise tracked position', even if you are not being explicit/conscious about that layout. Games that use high-level languages are often easier to hack. e.g. Escape from Tarkov is one of the most hacked games because players can hook directly into its C# script VM, writing code as easily as if they had the original source.
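    A CPython-specific illustration of this point (the "player_health" payload and cheat framing are mine, not from any real game): id() exposes an object's address, and ctypes can read the raw bytes sitting there, just as an external cheat would via /proc/&lt;pid&gt;/mem or ReadProcessMemory, garbage collector or not:

```python
import ctypes
import sys

# In CPython, id() is the object's memory address, and a bytes object
# stores its payload inline after the object header.
secret = b"player_health=100"
raw = ctypes.string_at(id(secret), sys.getsizeof(secret))

# The payload appears verbatim inside the object's memory.
assert b"player_health=100" in raw
```

A managed runtime may move or compact objects, but at every instant the data still lives at a concrete address that an external reader can find by scanning.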

I feel like the only other solution to kernel-level anticheat is some kind of measured and verified system image. The whole chain has to be signed and trusted from the TPM through the kernel to userspace. This way if anyone tampers with the system the game will refuse to launch. I think something like this is already possible with systemd or is at least the long term goal IIRC from Lennart's blog.

  • IME these systems can be quite fragile in practice. All it takes is one pre-signature exploit (like U-boot parsing ext4 and devicetree before verifying signature) and your whole chain becomes useless.

    And while the kernel is quite secure against hacks from userspace, the hardware interfaces are generally more trusted. This is not a problem on smartphones or embedded devices where you can obfuscate everything on a small SoC but the whole PC/x86_64 platform is much more flexible and open. I doubt there is a way to get reliable attestation on current desktop systems (many of which are assembled from independent parts) unless you get complete buy-in from all the manufacturers.

    Finally, with AI systems recently increasing in power, perhaps soon the nuclear option of camera + CV + keyboard/mouse will become practical.

  • I don't know much about TPM APIs, but I think (barring some hardware attestation scheme) a malicious kernel could intercept any game-TPM communication.

    • The verified bootloader would register the signature of the kernel into the TPM, so a malicious kernel would be noticeable. You could still exploit the kernel, of course.

      Even a hacked kernel won't have access to the key material stored inside of the TPM, though, so it wouldn't be able to fake the remote attestation key material used to sign any challenges.

      Using TPMs this way requires secure boot which only permits non-exploited, signed kernels to load signed operating system images and signed drivers. Revocation of exploitable software and hardware must be harsh and immediate. That means most dTPMs (which have been proven vulnerable to numerous side-channel attacks) are unusable, as well as some fTPMs from CPUs running old microcode. Several graphics cards cannot be used anymore because their drivers contain unpatched vulnerabilities. Running tools with known-exploitable drivers, such as CPU-Z and some motherboard vendor software, would imply a permanent ban.

      This approach can work well for remotely validating the state of devices in a highly secure government programme with strict asset management. For gaming, many hardware and software configurations wouldn't be validatable and you'd lose too much money. Unfortunately, unlike on consoles, hardware and software vendors just don't give a shit about security when there's a risk of mild user inconvenience, so their security features cannot be relied upon.

      You can do what some games do and use TPMs as your system's hardware identifier, requiring cheaters to buy whole new CPUs/motherboards every time an account is banned. You can also take into account systems like these but don't rely on them entirely, combining them with kernel-level anticheat like BF6 does (which requires secure boot to be enabled and VBS to be available to launch, though there are already cheaters in that game).
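      A toy model of the measurement chain being discussed (SHA-256 standing in for the TPM's PCR-extend operation; the component names are invented): each boot stage hashes the next into a running PCR value, so swapping any component, say a patched kernel, changes the final digest the TPM would later sign in an attestation quote:

```python
import hashlib

def extend(pcr, blob):
    # TPM PCR extend: new_pcr = H(old_pcr || H(measured_blob))
    return hashlib.sha256(pcr + hashlib.sha256(blob).digest()).digest()

def measure_boot(components):
    pcr = b"\x00" * 32  # PCRs start zeroed at reset
    for blob in components:
        pcr = extend(pcr, blob)
    return pcr

good = measure_boot([b"firmware", b"bootloader", b"kernel"])
evil = measure_boot([b"firmware", b"bootloader", b"patched kernel"])
assert good != evil
```

      Because extend is one-way and order-sensitive, a hacked component can't be measured in and then "un-measured" to restore the known-good value.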

It's an unpopular opinion, but for better or worse, this is why I think it still makes sense to have a dedicated games machine separate from the main computer.

I'm largely a console gamer, so I don't have to worry about EA's latest malware opening my computer up to the world. I'm also a filthy casual though.

Cheats are why I stopped playing FPS's and only occasionally play Rocket League. I can't tell if I'm bad at the game or if everyone else is cheating. Half of the games on this list are FPS's.

I think the more important question isn't how you implement an anti-cheat, it's why some types of games attract cheaters.

When victory in a game isn't about strategy but just about how quickly you can click o character's head, and just by doing it once you win the game, that makes the whole game a clear target for cheating. Everyone cheats as the sniper, nobody cheats as the medic.

I think you could make an FPS that cheaters hate by designing it so that it requires at least 2 players to defeat a player on the opposite team, e.g. by giving everyone weapons of different type and needing two types to defeat an enemy.

I wonder if anti-cheating game design is a thing?

  • Cheating and worrying about cheating in these matchmaking FPS games is a ridiculous thing to do. If you get matched with cheaters, and the ranking system actually works, they are cheaters whose cheat-augmented skill is equal to yours.

    Game designers could have just worked on their ranking systems, and let the cheaters rocket off into their own domain of impossibly-high-elo games. Let there be a cheaters' league. It could be fascinating: what does fully-cheated gameplay look like? Just ban disruptive behavior like DDoSing other players.

    OTOH, artificially lowering your rank to stomp low-level players is a problem. But cheaters, as well as just legitimately really good players, can do this; the place to solve this is the ranking system.

    • I feel like it's more about trust. Once you stop trusting that you are NOT playing against cheaters, every match feels like you are just a walking target for someone else's entertainment.

      To put it in another way: either I'm bad at a competitive game, or I'm playing against cheaters. Once you start feeling like that, neither scenario seems like an enjoyable time, so why play at all?

      I feel like the biggest problem to me is that these types of games are INSANELY popular, but personally I'd rather play something less skill-based and more fun-based. These competitive games just keep appearing in front of me all the time despite that fact I don't enjoy them.

      1 reply →

  • I think that Team Fortress is pretty good in this regard... at least for some CTF maps and configurations... (I'm mostly recalling the original quake mod)... there were some maps that you had to have a scout/spy to be able to get past a strategically positioned automatic gun, and even then an HW guy by the flag was a pretty good secondary that was hard to get through.

    Of course, I still remember seeing cheaters back then, in that game... usually quickly kicked off the server you were playing on.

tinfoil hat time: three letters use anticheat rootkits to pivot into systems and are sock puppeting anti-anti-cheat.

The cat and mouse game between cheat devs and anti-cheat devs is quite interesting. I saw a nice video [1] a year ago about the state of the art in cheat development, which at that point was having a PCIe device that can issue DMA requests to read the RAM at any time and stream the data to a second PC to analyse. Vanguard did end up banning those people eventually, since it can see what devices you have plugged in. I can't help but wonder if the next level would be some kind of shim on the physical RAM sticks; or maybe custom UEFI firmware.

Ultimately the OS should be providing a service that can verify a program is running in a secure environment and hasn't been tampered with. That's something that's useful for things far beyond games. I kind of hope the cheaters win this war for now, to create the incentive for building a better, proper, standardized, cross-platform solution.

[1] https://www.youtube.com/watch?v=kzVYgg9nQis

  • > Vanguard did end up banning those people eventually, since it can see what devices you have plugged in.

    Only because the makers of those DMA cards do a bad job hiding themselves. They either use vague, recognisable names, or don't act like the devices they're spoofing.

    The moment a cheat developer manages to reprogram an actual SSD (especially a common model), hardware detection like that becomes near impossible.

    • Riot just shipped a new kind of DMA protection, using IOMMU, and they tout that that cheating method is now 6 feet deep.

  • I would think the Linux kernel could offer a "don't let anything read/write to the process I'm about to open" with a launcher then have that process also create a random/temp executable to test that the configuration is working...

    Having the kernel itself actually deny any access... The game devs run a build without debug symbols (not that debugging could work with it on), and run with that... Also, this should severely limit what that process can do in terms of communication outside itself. And maybe a launch warning from the OS: "You are about to launch a sealed application that cannot be observed, do you want to continue? Y/N"
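    For what it's worth, Linux already exposes a weaker userspace version of this: prctl(PR_SET_DUMPABLE, 0) blocks ptrace attach and /proc/&lt;pid&gt;/mem reads by other unprivileged processes, though, as the replies note, it does nothing against root or a hostile kernel. A minimal ctypes sketch (constants from &lt;sys/prctl.h&gt;):

```python
import ctypes

libc = ctypes.CDLL(None, use_errno=True)  # Linux-only
PR_GET_DUMPABLE = 3
PR_SET_DUMPABLE = 4

# Mark this process non-dumpable: unprivileged peers can no longer
# ptrace it or read its /proc/<pid>/mem.
assert libc.prctl(PR_SET_DUMPABLE, 0, 0, 0, 0) == 0
assert libc.prctl(PR_GET_DUMPABLE, 0, 0, 0, 0) == 0
```

    This is exactly the gap the replies point at: the protection is only as trustworthy as the kernel enforcing it.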

    • > I would think the Linux kernel could offer a "don't let anything read/write to the process I'm about to open" with a launcher then have that process also create a random/temp executable to test that the configuration is working...

      Then all a cheater has to do is run a custom kernel that has an API that responds to that request but then lets another process read/write the memory anyways.

      You have to keep in mind something. The cheaters don't give a shit about what they have to do to let a cheat work. It's only the legit players that are like "I don't want anti-cheat to have kernel access". Cheaters will flash a custom BIOS to their motherboard if they have to without a second thought, while legitimate players would be absolutely horrified of the idea of needing a custom BIOS for anti-cheat, and very rightfully so.

    • That would only protect against userland cheats. A cheat developer would just write a kernel module to read the memory so it wouldn't be another process attempting to read it, but the kernel itself.

> The issue of anti-cheat on Linux

Is the memory of this kernel module protected from access by another kernel module?

Everyone is thinking about this problem the wrong way. Just use remote attestation.

Who needs opaque binary blob kernel modules or whatever for anti-cheat when you can bootstrap a secure boot and remote attestation setup? It's possible for a game server to verify cryptographically that someone is running stock firmware, stock bootloader, stock TCB userspace, a stock game executable, and that no debugger is attached. You don't need cat and mouse BS with executable obfuscation. You don't need inscrutable spyware. You don't need to prohibit VMs. All you need to do is configure your program not to be debuggable, prohibit network MITM (e.g. with certificate pinning), and then use remote attestation to make sure nobody has tampered with the system to make it ignore your anti debugging configuration.

All of the components involved in this trust chain can be open source. There's no spyware involved. No rootkit. No obfuscation. Everything is transparent and above board.

The only downside (besides implementation complexity) is that the remote attestation scheme is incompatible with running custom builds of the components remotely attested. But so what? Doing so isn't a requirement of open source. You can still run custom builds too -- just not at the same time you play your game.

Seems like a fair compromise to me
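A toy sketch of that challenge-response flow (an HMAC key stands in for the TPM-resident attestation key, and the "golden" PCR value is invented; a real deployment would verify an actual TPM quote signature): the server sends a fresh nonce, the client returns a signature over nonce plus PCR, and the server checks both the signature and that the PCR matches a known-good build:

```python
import hashlib
import hmac
import os

# Stand-in for the key sealed inside the client's TPM.
TPM_KEY = b"stand-in for the TPM-resident attestation key"
# Hypothetical PCR digest of the stock firmware/bootloader/kernel/game stack.
GOLDEN_PCR = hashlib.sha256(b"stock firmware+bootloader+kernel+game").digest()

def client_quote(nonce, pcr):
    # The TPM signs (nonce || PCR); the nonce prevents replaying old quotes.
    return hmac.new(TPM_KEY, nonce + pcr, hashlib.sha256).digest()

def server_verify(nonce, pcr, quote):
    expected = hmac.new(TPM_KEY, nonce + pcr, hashlib.sha256).digest()
    return hmac.compare_digest(expected, quote) and pcr == GOLDEN_PCR
```

A client running a patched kernel reports a different PCR, and a client lying about its PCR can't produce a valid signature, so either way the game server refuses the session without ever inspecting the machine itself.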

TL;DR: the issue of anti-cheat on Linux is that Linux actually gives the user full control of their OS, which precludes all even remotely effective anti-cheat mechanisms by design.

TL;DR: Malware-level, kernel-invasive anti-cheat that relies on opaque anti-user blobs is conceptually incompatible with Linux and open source in general.

Proponents of such junk can get lost with their fake justifications for why kernel-level anti-cheat malware should be acceptable. They should instead work on server-side anti-cheat.