How kernel anti-cheats work

19 hours ago (s4dbrd.github.io)

I'll simplify for everyone: They don't. Although I do appreciate the author delving into this beyond a surface-level analysis.

Modern cheats use hypervisors, or just compromise Hyper-V; because Hyper-V protects itself, it automatically protects your cheat.

Another option that is becoming super popular is BIOS patching. Most motherboards will never support Boot Guard, and direct BIOS flashing will always be an option, since the chipset fuse only protects against flashing through the chipset.

DMA is probably the most popular by far with users. However, the cost of good setups has been increasing due to Vanguard fighting the common methods, which is bleeding into other anticheats (some EAC versions and Ricochet).

These are not assumptions; every time anticheats go up a level, so do the cheats. In the end the weakest link will be exploited, and it doesn't matter how sophisticated your anticheat is.

What does make cheat developers afraid is AI, primarily in Overwatch. It's quite literally impossible to cheat anymore (in a way that disturbs normal players for more than a few games), and they only have a usermode anticheat! They heavily rely on spoofing detection and gameplay analysis, including community reports. Instead of detecting cheats, they detect cheaters themselves and then clamp down on them by capturing as much information about their system as possible (all from usermode!!!).

Of course you could argue that you could just take advantage of the fact that they have to go through usermode to capture all this information and just sit in the kernel, but hardware attestation is making this increasingly more difficult.

The future is usermode anticheats and gameplay analysis; drop kernel-mode anticheats.

No, Secure Boot doesn't work if you patch SMM in the BIOS; you run before TPM attestation happens.

  • > Another option that is becoming super popular is BIOS patching

    I wouldn't call BIOS patching "super popular". That sounds like an admission that anti-cheat is working, because running cheats now requires a lot of effort. Now that cheats are more involved to run, it's becoming less common to cheat.

    When cheats were as simple as downloading a program and you were off to cheating, the barrier to entry was a lot lower. It didn’t require reboots or jumping through hoops. Anyone could do it and didn’t even have to invest much time into it.

    Now that cheats are no longer an easy thing to do, a lot of would-be cheaters are getting turned off of the idea before they get far enough to cheat in a real game.

    > Of course you could argue that you could just take advantage of the fact that they have to go through usermode to capture all this information and just sit in the kernel, but hardware attestation is making this increasingly more difficult.

    Didn’t the first half of your post just argue that these measures can be defeated and therefore you can’t rely on them?

    • Cheating is so addictive that it doesn't matter if it's more difficult to cheat. I have personally interacted with people who just want to spin-bot.

      Anticheats, especially kernel-mode ones, do not make the problem smaller. All they do is make it more rewarding for capable people.

  • I'm playing WoW and I've heard lots of complaints about Blizzard banning innocent players. Just recently there was a wave of complaints that they banned players who spent a lot of time farming one dungeon (like 10+ hours per day).

    I myself got two accounts banned despite being innocent. I managed to make it through support and got them unbanned, but I'm fairly certain that many players didn't, because they seem to employ AI in their support.

    So I'm a bit skeptical about that kind of behavioural ban. You risk banning a lot of dedicated players who happen to play differently from the majority, and that tends to bring a bad reputation. For example, I no longer purchase a yearly subscription, because I'm afraid of a sudden ban and losing lots of unspent subscription time.

    • I think you are right on every point, but I think it's worth noting that WoW is kind of a different beast.

      You don't play a "match", and you don't play "against" other players most of the time. In this context "botting" and "cheating" overlap, because having your character do stuff 24/7 unattended is an evident advantage over the rest of the population, but you aren't directly hindering anyone's progress the vast majority of the time.

      How often does actual cheating happen in WoW, anywhere it matters? M+? Raiding? PvP?

    • I agree that it's a problem; having a strong support system for remediating false bans is very important.

  • Everything you described increases the cost of attack (creating a cheat), and as a result, not everyone can afford it, which means anti-cheats work. They don't have to be a panacea. Gameplay analysis will only help against blatant cheaters, but will miss players with simple ESP.

    It's almost the same as saying "you don't need a password on your phone" or something like that.

    • > but will miss players with simple ESP.

      False; people who have information they shouldn't have will act in detectable ways, even if they try their hardest not to.

    • The economics work out: harder to make means it's more profitable to do so. The DMA crackdown has actually led to innovation, which has driven prices down for "normal" DMA hardware; what used to be thousands is now $120. Excessive spoofing detection has driven down the cost of BIOS-level spoofing and, as a result, led to the creation of BIOS-level DMA backdoors - no additional hardware required.

      ESP is a lot more obvious to a machine than one might think, the subtle behavior differences are obvious to a human and even more so for a model. Of course none of that can be proven, but it can increase the scrutiny of such players from player reports.

  • >It's quite literally impossible to cheat anymore (in a way that disturbs normal players for more than a few games)

    AKA the way that is easiest to detect, and the easiest way to claim that the game doesn't have cheaters. Behavioral analysis doesn't work with closet cheaters, and they corrupt the community and damage the game in much subtler ways. There's nothing worse than to know that the player you've competed with all this time had a slight advantage from the start.

    • In CS2, the game renders your enemies even though you can't see them (within some close range). The draw calls are theoretically interceptable (either on the software/firmware or other hardware level). Detecting this is essentially impossible because the game trusts that the GPU will render correctly.

    • Overwatch has made the decision that closet cheaters are not a problem, and they have actually protected a cheater in Contenders, although they were forced to leave the competitive scene. None of it ever became public.

  • Don't forget that ActiBlizz are also pretty much the only ones regularly taking legal action against pay2cheat developers, see Bossland/EngineOwning.

    • I saw the EngineOwning lawsuit verdict as the biggest loss for the companies. They proved that you can continue running a cheat-provider service out in the open.

      They won way more than they lost; the people who left were given a free pass for ratting out the remaining people.

  • Taking a probabilistic approach to ban people… so if enough people start cheating it's fine?

  • Kernel AC is currently the best way to protect against cheats by far, the game with the strongest protection is Valorant and it works very well. OW2 is lightyears behind Valorant.

    Not sure what your point is. Most of your post is inaccurate, DMA cheats represent the minority of cheats because they're very expensive and you need a second computer.

    • elitepvpers - it's public. DMA cheats have grown and are the primary way people cheat in games these days; it makes around $5m/month [retail] just from one of the providers I know in the scene. This includes selling the hardware, the bypass, and the cheats (not under the same umbrella, for obvious reasons).

      The scene has shifted immensely in the last few years; everyone and their grandmother has DMA now - I mean, you can buy these off Amazon now. Koreans are a bit stuck since most of them use gaming cafes, so they've been slow adopters, but cafes have the benefit of running an old version of Hyper-V, which allows you to just use the method described above. Hyper-V cheats are the most popular for Valorant.

      I would argue that Valorant and Overwatch are pretty much on the same level based on what it feels like to play. I've seen just as many visible cheaters in Valorant as in Overwatch, although I will admit that I've been pretty outdated myself since around mid-2025. Valorant allows you to ** around, so that might be related; Overwatch bans rage hackers way faster than Valorant does as well.

      So no, my post is pretty accurate.

All of this is beyond horrific.

Mucking about in the kernel basically bypasses the entire security and stability model of the OS. And this is not theoretical: people have been rooted through buggy anti-cheat software, where a game process sent malicious calls to the kernel driver and hijacked the anti-cheat to gain root access.

Even in the more benign case, people often get 'gremlins': weird failures and BSODs due to some kernel APIs being intercepted and overridden incorrectly.

The solution here is to establish root of trust from boot, and use the OSes sandboxing features (like Job Objects on NT and other stuff). Providing a secure execution environment is the OS developers' job.

Every sane approach to security relies on keeping the bad guys out, not mitigating the damage they can do once they're in.

  • Unfortunately (or fortunately, depending on which side of the fence you live on), boot-chain security is not taken as seriously in the PC ecosystem as it is on phones. As a result, even if you rely on OS features, you cannot trust them. This is doubly the case in situations where the user owns the kernel (e.g. Linux) or the hypervisor. Attestation would work, but the number of users you could successfully attest as being on a trustworthy setup is fairly small, so it's not really a realistic option. And that is why they must reach for other options. Keep in mind that even if it's not foolproof, if it reduces the number of cheaters by a statistically significant amount, it's worthwhile.

    I really thought this might change over time, given the strong desire for useful attestation by major actors like banks and media companies, but apparently they cannot exert the same level of influence on the PC industry as they have on the mobile one.

  • Are you saying that the solution here is to sell computers so locked down that no user can install anything other than verified software?

    • I'm still not seeing how that would solve it. These are all multiplayer games. You could intercept the network traffic before it reaches the machine and then use a separate device to give you audio or visual cues. In StarCraft, reading the network traffic with a Pi and hearing "spawning 5 mutalisks" is going to completely change the game.

    • That’s what I want as a gamer. I want a PC that works as a console. Whether I want that for other use cases or this machine doesn’t matter. I’m happy to sandbox _everything else_, boot into a specific OS to game etc.

      The thing about gaming is that it’s not acceptable to leave 5% performance on the table whereas for other uses it usually is.

    • That's not really incompatible with this? That's just how secure boot works. You can enroll keys for a different root of trust, or disable it and accept the trade-off there.

    • The idea is that it would require a verified hypervisor and a verified operating system for the game, but you could still, at the same time, be running an unverified operating system with unverified software. The trusted and untrusted software have to be properly sandboxed from one another. The computer does not need to be locked down so that you can't run other hypervisors; it just requires that a client can't prove to the anticheat that it's running on a trusted one when it isn't.

      The security of PCs is still poor. Even if you had every available security feature right now, it's not enough for the game to be safe. We still need to wait for PCs to catch up with the state of the art, and then wait 5+ years for devices to make it into the wild with a big enough market share that targeting them is commercially viable.

  • You want to eliminate the freedom of running the software you desire for everyone to hopefully mitigate cheating?

  • > Every sane approach to security relies on keeping the bad guys out, not mitigating the damage they can do once they're in.

    That’s not true at all in the field of cybersecurity in general, and I have doubts that it’s true in the subset of the field that has to do with anticheat.

  • >Mucking about in the kernel basically bypasses the entire security and stability model of the OS. And this is not theoretical: people have been rooted through buggy anti-cheat software, where a game process sent malicious calls to the kernel driver and hijacked the anti-cheat to gain root access.

    If you got RCE in the game itself, it's effectively game over for any data you have on the computer.

    https://xkcd.com/1200/

  • >All of this is beyond horrific.

    Hot take: It's also totally unnecessary. The entire arms race is stupid.

    Proper anti-cheat needs to be 0% invasive to be effective; server-side analysis plus client-side with no special privileges.

    The problem is laziness, lack of creativity and greed. Most publishers want to push games out the door as fast as possible, so they treat anti-cheat as a low-budget afterthought. That usually means reaching for generic solutions that are relatively easy to implement because they try to be as turn-key as possible.

    This reductionist "Oh no! We have to lock down their access to video output and raw input! Therefore, no VMs or Linux for anyone!" is idiotic. Especially when it flies in the face of Valve's prevailing trend towards Linux as a proper gaming platform.

    There are so many local-only, privacy-preserving anti-cheat approaches that can be done with both software and dirt-cheap hardware peripherals. Of course, if anyone ever figures that out, publishers will probably twist it towards invasive harvesting of data.

    I'd love to be playing Marathon right now, but Bungie just wholesale doesn't support Linux nor VMs. Cool. That's $40 they won't get from me, multiply by about 5-10x for my friends. Add in the negative reviews that are preventing the game's Steam rating from reaching Overwhelmingly Positive and the damage to sales is significant.

    • I don't understand why you think that having the option of secure boot and a good, trustworthy sandbox for processes implies you can't run Linux in a VM, or Linux beside Windows, etc.

      People always freak out when I mention secure boot, and the funniest responses are usually the ones who threaten to abandon Windows for macOS (which has had secure boot by default for more than a decade).

      I'm not super technically knowledgeable about secure boot, but as far as I understand, you need a kernel signed by a trusted CA, which sucks if you want to compile your own, but it's a hurdle generally managed by your distro, if you're willing to use their kernel.

      But if all else fails you can always disable secure boot.

I would love to see a modern competitive game with optional anticheat that, when enabled, allows you to queue for a separate matchmaking pool that is exclusive to other anticheat users. For players in the no-anticheat pool, there could be "community moderation" that anti-anticheat players advocate for.

It'd be really interesting to see what would happen - for instance, what fraction of players would pick each pool during the first few weeks after launch, and then how many of them would switch after? What about players who joined a few months or a year after launch?

Unfortunately, pretty much the only company that could make this work is Valve, because they're the only one that actually cares about players and is big enough to gather meaningful data. And I don't think even Valve will see enough value in this to dedicate the substantial resources it'd take to try to implement it.

  • > I would love to see a modern competitive game with optional anticheat that, when enabled, allows you to queue for a separate matchmaking pool that is exclusive to other anticheat users. For players in the no-anticheat pool, there could be "community moderation" that anti-anticheat players advocate for.

    This is roughly what Valve does for CS2. But, as far as I understand, it's not very effective and unfortunately still results in higher cheating rates than e.g. Valorant.

    • Huh. When you say that "it's not very effective" do you mean the segmentation between the pools, or the actual anticheat isn't very good? (I'm assuming the latter - I've heard that VAC is pretty bad as far as anticheat goes)

  • It exists, it's called FACEIT (for CS, specifically). Anyone who seriously cares about the game at a high level is pretty much exclusively playing there.

    Community moderation simply doesn't work at scale for anticheat - in level of effort required, root cause detection, and accuracy/reliability.

  • I support this idea. Personally, I do not really care about cheating in video games. If someone is cheating in a video game, I can just turn it off, go outside, take a deep breath of fresh air, and touch some grass.

    I'd rather play with cheaters here and there than install some kernel-level malware on my machine just so EA, Activision, et al. can keep raking in money hand over fist.

    Or better yet, I can just play on console where there is no cheating that I have ever seen.

It's a whole lot of effort to go through just so corporations can get gamers playing with strangers instead of friends, while taking the whole thing way too seriously. You need anticheat when you want competitive rankings and esports leagues, but is any of that actually any better than just playing casual games with people you know and trust to play fair?

  • Yes it can be? This is a very strange statement to me. Many genuinely like testing themselves against other people, improving over time, and seeing how they stack up. Competition is a pretty basic human thing, e.g. sports, chess, card games, and therefore video games. And competing with the world is a far grander challenge than those you explicitly know.

    Not everyone enjoys that, and that’s fine, but acting like it’s somehow unnatural or pointless feels way off.

    • I know gamers are drawn to it; that's why the game corps like it so much. But is this actually good? So very often with these hyper-competitive games played between strangers competing for global ranking, the whole thing turns toxic, with gamers often seeming not to even enjoy the moment-to-moment process, raging at their incompetent teammates or at their opponents for supposedly cheating, or what have you. All the while, they're not developing relationships as they could be if they were playing something with friends. Elevated cortisol levels, when they could be chilling out. Obviously it's profitable, but is it good?

There is a solution to cheating, but it's not clear how hard it would be to implement.

Cheaters are by definition anomalies: they operate with information regular players do not have, and when they use aimbots they have skills other players don't have.

If you log every single action a player takes server-side and apply machine learning methods, it should be possible to identify these anomalies. Anomaly detection is a subfield of machine learning.

It will ultimately prove to be the solution, because only the most clever of cheaters will be able to blend in while still looking like great players. And only the most competently made aimbots will be able to appear like great player skills. In either of those cases the cheating isn't a problem because the victims themselves will never be sure.
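
That log-and-classify idea can be sketched without any ML library at all: even a robust z-score (median plus MAD) over a single per-player metric separates extreme outliers. Everything below is hypothetical and for illustration only - the `reaction_ms` field, the 3.5 threshold, and using reaction time as the lone feature; a real system would combine many features and a trained classifier.

```python
import statistics

def flag_anomalies(players, threshold=3.5):
    """Flag players whose reaction time is an extreme outlier.

    Uses the modified z-score (median + MAD), which stays robust in the
    presence of the very outliers it is hunting. `players` is a list of
    dicts with hypothetical "name" and "reaction_ms" keys.
    """
    times = [p["reaction_ms"] for p in players]
    med = statistics.median(times)
    # Median absolute deviation; fall back to 1.0 if every value is identical.
    mad = statistics.median(abs(t - med) for t in times) or 1.0
    flagged = []
    for p in players:
        z = 0.6745 * (p["reaction_ms"] - med) / mad
        if z < -threshold:  # only suspiciously *fast* reactions are interesting
            flagged.append(p["name"])
    return flagged
```

On a population reacting around 250 ms with one player at 80 ms, only the fast outlier is flagged; in practice you would queue such outliers for review and classification rather than auto-ban them.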

There is also another method the server can employ: players can be actively probed with game-world entities designed for them to react to only if they have cheats. Every such event would add probability weight onto the cheaters. Ultimately, the game world isn't delivered to the client in full, so if done well the cheats will not be able to filter the probes out. For example: as a potential cheater enters entity-broadcast range of a fake entity camping in an invisible corner that only appears to them, their reaction to it is evaluated (mouse movements, strategy shift, etc.). Then, when it disappears, another evaluation can take place (cheats would likely offer mitigations for this part). Over time, cheaters will stand out from the noise; most will likely out themselves very quickly.
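
A minimal sketch of that probing loop, with every name and threshold invented for illustration: the server streams a fake entity to one suspect only, then adds probability weight whenever their aim converges on something no legitimate client was ever sent.

```python
import math

def aim_angle_to(target, player_pos, aim_dir):
    """Angle (radians) between the player's aim direction and the honeypot entity."""
    tx, ty = target[0] - player_pos[0], target[1] - player_pos[1]
    dot = tx * aim_dir[0] + ty * aim_dir[1]
    norm = math.hypot(tx, ty) * math.hypot(*aim_dir)
    # Clamp for floating-point safety before acos.
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def update_suspicion(scores, player_id, angle_before, angle_after, weight=1.0):
    """Accumulate probability weight when aim swings sharply toward the honeypot.

    The 0.5 convergence factor is arbitrary; a real system would integrate
    many such events before a suspect stands out from the noise.
    """
    if angle_after < angle_before * 0.5:
        scores[player_id] = scores.get(player_id, 0.0) + weight
    return scores
```

A player whose crosshair snaps toward an invisible corner entity accumulates weight; regular players, who were never sent the entity, contribute nothing.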

  • > Cheaters are by definition anomalies

    So are very good players, very bad players, players with weird hardware issues, players who just got one in a million lucky…

    When you have enough randomly distributed variables, by the law of large numbers some of them will be anomalous by pure chance. You can't just look at any statistical anomaly and declare it must mean something without investigating further.

    In science, looking at a huge number of variables and trying to find one or two statistically significant ones so you can publish a paper is called p-hacking. This is why there are so many dubious and often even contradictory "health condition linked to X" articles.

    • > So are very good players, very bad players, players with weird hardware issues, players who just got one in a million lucky…

      They will all cluster in very different latent spaces.

      You don't automatically ban anomalies, you classify them. Once you have the data and a set of known cheaters you ask the model who else looks like the known cheaters.

      Online games are in a position to collect a lot of data and to also actively probe players for more specific data such as their reactions to stimuli only cheaters should see.

    • For competitive gaming this becomes a problem.

      But a good way of solving this in community managed multiplayer games is this: if a player is extremely good to the point where it’s destroying the fun of every other player: just kick them out.

      Unfair if they weren’t cheating? Sure. But they can go play against better players elsewhere. Dominating 63 other players and ruining their day isn’t a right. You don’t need to prove beyond reasonable doubt they’re cheating if you treat this as community moderation.

  • I've been advocating for a statistical honeypot model for a while now. This is a much more robust anti-cheat measure than even streaming/LAN gaming provides. If someone figures out a way to obtain access to information they shouldn't have on a regular basis, they will eventually be found with these techniques. It doesn't matter what the exact mechanism of cheating is. This even catches the "undetectable" screen-scraping mouse-robot AI wizard stuff. Any amount of signal integrated over enough time can provide damning evidence.

    > With that goal in mind, we released a patch as soon as we understood the method these cheats were using. This patch created a honeypot: a section of data inside the game client that would never be read during normal gameplay, but that could be read by these exploits. Each of the accounts banned today read from this "secret" area in the client, giving us extremely high confidence that every ban was well-deserved.

    https://www.dota2.com/newsentry/3677788723152833273

  • This is said very often, but doesn't seem to be working out in practice.

    Valve has spent a lot of time and money on machine learning models which analyze demo files (all inputs). Yet Counter-Strike is still infested with cheaters. I guess we can speculate that it's just a faulty implementation, but clearly the problem isn't just "throw a ML model at the problem".

  • Honeypots are used pretty often, sure. They're not enough, though useful.

    Behavioral analysis is way harder in practice than it sounds, because most closet cheaters do not give enough signal to stand out, and the clusters move fast: the way people play the game always changes. It's not a problem of metric selection, as it might appear to an engineer; you need to watch the community dynamics. Currently only humans are able to do that.

    • If you play with friends and your cheats cooperate, I don't think honeypots would be fool-proof any longer. Unless you all get the same fake data.

  • In CS2, a huge portion of cheaters can be identified just by the single stat 'time-to-damage'. Cheaters will often be 100ms faster to react than even the fastest pros. Not all cheaters use their advantage this way; some simply always make perfect choices because they have more information than their opponents.

  • I disagree with the premise that it doesn't matter as long as users can't tell. Say you're running a Counter-Strike tournament with a 10k purse... Integrity matters there. And a smart cheater is running 'stealth' in that situation. Think a basic radar or a verrrrrry light aimbot, etc.

    The problem is that traditional cheats (aimbot, wallhack, etc.) give users such a huge edge that they are multiple standard deviations from the norm on key metrics. I agree with you on that and there are anticheats that look for that exact thing.

    I've also seen anticheats where flagged users have a session reviewed. EG you review a session with "cheats enabled" and try to determine whether you think the user is cheating. This works decently well in a game like CS where you can be reasonably confident over a larger sample size whether a user is playing corners correctly, etc.

    The issue with probing for game world entities is that at some point, you have to resolve it in the client. EG "this is a fake player, store it in memory next to the other player entities but don't render this one on screen." This exact thing has happened in multiple games, and has worked as a temporary solution. End of the day, it ends up being a cat and mouse game. Cheat developers detect this and use the same resolution logic as the game client does. Memory addresses change, etc. and the users are blocked from using it for a few hours or a few days, but the developer patches and boom, off to the races.

    These days game hacks are a huge business. Cheats are often offered as a subscription and can range anywhere from $10 to hundreds of dollars a month. It's big money, and some of the larger hack manufacturers are full-blown companies with tens of thousands of customers.

    I think you're realistically left with two options. Require in-person LAN matches with hardware provided by the tournament which is tamper-resistant. Or run on a system so locked down that cheats don't exist.

    Both have their own problems... In-person eliminates most of that risk but it's always possible to exploit. Running on a system which is super locked down (say, the most recent playstation) probably works, until someone has a 0day tucked away that they hoard specifically for their advantage. An unlikely scenario but with the money involved in some esports... Anything is possible.

    https://www.documentcloud.org/documents/24698335-la22cv00051...

    • > End of the day, it ends up being a cat and mouse game. Cheat developers detect this and use the same resolution logic as the game client does.

      This is just not done well. Only the server should be able to tell what the honeypot is. The point is to spawn an entity for one or more clients which is 100% real for them but doesn't matter, because without cheats it has no impact on them whatsoever. When the world evolves such that an impact becomes more likely, you de-spawn it.

      This will only be possible if the server makes an effort to send incomplete entity information (I believe this is common), this way the cheats cannot filter out the honeypots. The cheats will need to become very sophisticated to try and anticipate the logic the server may use in its honeypots, but the honeypot method is able to theoretically approach parity with real behavior while the cheat mitigations cannot do that with their discrimination methods (false positives will degrade cheater performance and may even leak signal as well).

      For example you can use a player entity that the client hasn't seen yet (or one that exited entity broadcast/logic range for some time) as a fake player that's camping an invisible corner, then as the player approaches it you de-spawn it. A regular player will never even know it was there.

      Another vector to push is netcode optimizations for anti-cheating measures. To send as little information as possible to the client, decouple the audio system from the entity information; this will allow the honeypot methods to provide alternative interpretations for the audio, such as firefights between ghosts that only cheaters will react to. This will of course be very complex to implement.

      The greatest complexity in the honeypot methods will no doubt be how to ensure no impact on regular players.

Kernel-level anti-cheat is really the maximum effort at locking down a client from doing something suspicious. But today we still see cheaters in the games running these systems, which proves that a game server just cannot trust a random client out there. I know it's about costs - what to compute on the client and what to compute server-side. But as long as a game trusts computation and 'inputs' from clients, we will see these cheating issues.
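
A tiny example of moving that trust server-side, with invented numbers: instead of accepting a client-reported position, the server checks it against the fastest movement a legitimate player could make, so a speedhack or teleport fails validation no matter what the client computes.

```python
def validate_move(prev_pos, new_pos, dt, max_speed=7.0):
    """Server-side sanity check on a client-reported position update.

    `max_speed` (units/second) and the 10% jitter slack are made-up values;
    a real server would derive them from the game's movement rules.
    Returns False for moves no legitimate client could make.
    """
    dx = new_pos[0] - prev_pos[0]
    dy = new_pos[1] - prev_pos[1]
    dist = (dx * dx + dy * dy) ** 0.5
    return dist <= max_speed * dt * 1.1  # allow 10% slack for network jitter
```

This only covers inputs the server can re-derive; information leaks (wallhacks, ESP) still need the analysis and honeypot approaches discussed above.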

  • It’s not about costs, it’s about tradeoffs. In an online shooter game (for example) there is latency, and both clients are going to have slightly different viewpoints of the world when they take an action.

    No amount of netcode can solve the fact that if I see you on my screen and you didn’t see me, it’s going to feel unfair.

  • Plus, if I was a motivated cheater, I'd just use a camera, a separate computer, and automate the input devices.

While I’m not really a gamer, I do think the conundrum of online games cheating is an interesting technical problem because I honestly can’t think of a “good” solution. The general simplistic answer from those who never had to design such a game or a system of “do everything on the server” is laughably bad.

  • Preventing cheating is hopeless.

    Anyway, this isn’t the Olympics, a professional sport, or Chess. It’s more like pickup league. Preserving competitive purity should be a non-goal. Rather, aim for fun matches. Matchmaking usually tries to find similar skill level opponents anyway, so let cheaters cheat their way out of the wider population and they’ll stop being a problem.

    Or, let players watch their killcams and tag their deaths. Camper, aimbot, etc etc. Then (for players that have a good sample size of matches) cluster players to use the same tactics together.

    Treating games like serious business has sucked all the fun out of it.

    • Unfortunately that has been proven to not work.

      Matching based on skill works only as long as you have an abundance of players to match from. When you have to account for geography, time of day, momentary availability, and skill level, you realize that you have fractured certain players far too much, and it's not fun for them anymore. Keep in mind that "cheaters" are also looking for matches that would maximize their cheats. Maybe it's 8 PM Pacific Time with tons of players there, but it's 3 AM somewhere else with a much more limited number of players. Spoof your ping and location to be there and have fun sniping every player on the map. Sign up for a new account every play, who cares. Your fun as a cheater is to watch others lose their shit. You're not building a character with history and reputation; you're sniping others while they don't realize it. It may sound limited in scope and not worth the effort for you, but there are millions of people out there ruining the game for everyone.

      Almost every game I know of lets players “watch their kill cam”, and cheaters have adapted. The sniped people have a bias to vote that the sniper was cheating, and the snipers have a bias to vote otherwise. Lean one way or the other, and it’s another post on /r/gaming about how your game sucks.

    • Well it is a professional sport -- there's tournaments worth tens of millions of dollars. But honestly it is probably easier to catch cheaters in that environment. The real issue is that cheaters suck the fun out of the game, and matchmaking doesn't fix this because cheaters just cheat the matchmaking (smurf accounts, etc) until they're stomping regular players again. I don't think throwing our hands up and letting the cheaters go on is a real solution.

      9 replies →

    • > let cheaters cheat their way out of the wider population

      In a 5v5 shooter this ruins 9 people’s game along the way, times however many games this takes. Enough people do this and the game is ruined

      > or let players watch their killcams and tag their deaths

      Players are notoriously bad at this stuff. Valve tried it with “overwatch” and it didn’t work at all.

      Forgetting about anti-cheat for a minute though, matchmaking for different behaviours is a super interesting topic in itself. It’s very topical right now [0] and a fairly divisive topic. Most games with a ranked mode already do this - there’s a hidden MMR for unranked modes that is matched on, and players self-select into “serious” or “non-serious” queues. It works remarkably well - if you ever read people saying that Quick Play is unplayable, it proves that the separate queues are doing a good job of keeping the two groups separate!

      [0] https://www.pcgamer.com/games/third-person-shooter/arc-raide...

      2 replies →

  • I think from a purely technical viewpoint, cheaters will always have the advantage since they control the machine the game and anti-cheat is running on. Anti-cheat just has to keep the barrier high enough so regular players don't think the game is infested with cheaters.

    • I agree, but that’s precisely the interesting ‘technical’ problem. Bitcoin’s “proof of work” in 2011 (it took me a few years to comprehend) was an eye-opening moment for me. While I do believe that it firmly failed to achieve its lofty goals, the idea of “proof of work” was a really captivating and interesting technical idea. Can a video game client have a similar zero-trust proof of its authenticity? I personally can’t think of one. I can’t think of a way for remote random agents (authenticated or not) to prove they are not cheating in a “game”, and like you, I suspect it’s not really possible. But what does that mean?

      I grew up with Star Trek and Star Wars wondering what “I’ll transfer 20 units to you” meant. Bitcoin was an eye-opener in the sense of “maybe this is possible”. But it soon became clear to me that it’s not the case. There is still no way for random agents to prove they are not malicious; it’s easier within the confines of the Bitcoin network. But maybe I’m not smart enough to come up with a more generalized concept. After all, I was one of the people who read the initial Bitcoin white paper on HN, didn’t understand it back then, and dismissed it.

      1 reply →

    • I have never worked on AAA games, but I have developed software for 35 years and play many competitive online games regularly.

      I have always wondered why more companies don't do trust-based anti-cheat management. Many cheats are obvious to anyone in the game: you see people jumping around like crazy, or a character will be able to shoot through walls, or something else that's impossible for a non-cheater to do.

      Each opponent in the game is getting the information from the cheating player's game that has it doing something impossible. I know it isn't as simple as having the game report another player automatically, because cheaters could report legitimate players... but what if each game reported cheaters, and then you waited for a pattern? If the same player is reported in every game, including against brand-new players, then we would know they were a cheater.

      Unless cheaters got to be a large percentage of the player population, they shouldn't be able to rig it.
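      A minimal sketch of that report-the-pattern idea, in Python. Everything here (names, thresholds) is my own illustration, not any shipped anti-cheat's logic; the key property is that a single malicious report does nothing, and only being reported across almost all of a large sample of matches trips the flag.

```python
from collections import Counter

def flag_suspects(matches, min_matches=10, threshold=0.8):
    """matches: iterable of (players, reported) pairs, one per match,
    where `players` is everyone in the match and `reported` is the
    subset of players that opponents reported as cheating."""
    played = Counter()       # player -> matches played
    reported_in = Counter()  # player -> matches in which they were reported
    for players, reported in matches:
        for p in players:
            played[p] += 1
        for p in reported:
            reported_in[p] += 1
    # Flag only players with a large sample AND a near-unanimous pattern.
    return {p for p in played
            if played[p] >= min_matches
            and reported_in[p] / played[p] >= threshold}
```

      Cheaters reporting a legitimate player in one lobby can't push that player over the threshold unless they follow them through most of their matches.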

      6 replies →

    • > Anti-cheat just has to keep the barrier high enough so regular players don't think the game is infested with cheaters.

      And even that's the (relatively) straightforward part. The hard part is doing this without injuring the kernel enough that the only sensible solution for the security conscious is a separate PC for gaming.

      1 reply →

  • The only solution that seems to work well that I've seen is having very active and good server admins who watch the gameplay and permaban cheaters. Requires a lot of man hours and good UI and info for them to look at, as well as (ideally) the ability to see replays.

    That solution only works on servers hosted by players - I've never seen huge game companies that run their own servers (like GTA) have dedicated server admins. I guess they think they can just code cheaters out of their games, but they never can.

    • It's interesting how often accuracy problems fall back to requiring humans in the loop, and in the case of big consumer systems that means employing people in low-wage parts of the world. For a match of a video game, I don't think there's enough money involved, balanced against the amount of playtime, to pay for enough monitoring or to ensure a timely response to reports. Gamers always wheel out community-run servers and admins because it pushes the cost onto someone else (I don't think I've ever seen someone volunteer themselves for it), and they'd mostly refuse pay-to-play if that meant employing a staff that scaled with their online game's popularity.

  • The solution is purely cultural. We should collectively think people who cheat online are losers.

    (Not being sarcastic.)

  • Most people ignore that "do everything on the server" kills any game that needs fast interactions or decent local prediction, latency goes through the roof and you might as well play chess by email. There isn't a clean answer.

    Kernel anti-cheat isn't an elegant solution either. It's another minefield: security holes, false positives, broken dev tools, and custody battles with Windows updates. Pushing more logic server-side still means weeks of netcode tuning and a cascade of race conditions every time player ping spikes, so the idea that this folds to "better code discipline" is fantasy.

  • Do what Netflix did and run servers at ISPs (or at their providers or Cloudflare points).

    It's kind of weird that we still don't have distributed computing infrastructure. Maybe that will be another thing where agents can run near the data they're crunching on generic compute nodes.

    • If me and my roommate are both playing against each other on a server less than 10ms away, in the normal scenario at 60fps there is still ~60ms between me clicking and it appearing on your screen - and another 60ms before I get confirmation. Now add real world conditions like “user is running YouTube in the background” or “wife opens instagram” and that latency becomes unpredictable. You still are left with the same problems. Now multiply it by 10 people who are not the same distance from the ISP and the problems multiply.
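      A back-of-the-envelope version of that latency budget, with my own illustrative numbers (one frame of delay at each hop, 10 ms one way to a nearby server):

```python
FRAME_MS = 1000 / 60   # ~16.7 ms per frame at 60 fps
ONE_WAY_MS = 10        # one-way network time to a nearby server

def click_to_opponent_screen_ms():
    """Rough pipeline: my client samples the click on its next frame,
    the server applies it on its next tick, and the opponent's client
    shows it on their next frame, with a network hop on either side."""
    frames_of_delay = 3  # local frame + server tick + remote frame
    return frames_of_delay * FRAME_MS + 2 * ONE_WAY_MS
```

      Even with both players 10 ms from the server, the pipeline alone lands on the order of 60-70 ms before any jitter from "wife opens Instagram".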

    • To quote the parent comment:

      > The general simplistic answer from those who never had to design such a game or a system of “do everything on the server” is laughably bad.

    • Sorry to say this, but I don’t think you understand how any of this works. Whenever someone’s proposed “edge computing” as a way to solve trust problems, I know they are just stringing together fancy-sounding words they don’t understand.

      What “Netflix did” was having dead-simple static file serving appliance for ISPs to host with their Netflix auth on top. In their early days, Netflix had one of the simplest “auth” stories because they didn’t care.

      5 replies →

  • I think it's somewhere between halting and turing - given infinite resources it's likely solvable, but lacking that it's just narrowing bounds

  • The only good long term solution is ML on replays + moderately up to date client side (non kernel) AC (just good enough to deter cheaters).

Remote attestation has proven strong enough for anti-cheat on macOS without needing kernel anti-cheat.

> Modern kernel anti-cheat systems are, without exaggeration, among the most sophisticated pieces of software running on consumer Windows machines. They operate at the highest privilege level available to software, they intercept kernel callbacks that were designed for legitimate security products, they scan memory structures that most programmers never touch in their entire careers, and they do all of this transparently while a game is running.

Okay, chill. I'm willing to believe that anti-cheat software is "sophisticated", but intercepting system calls doesn't make it so. There is plenty of software that operates at elevated privilege and runs transparently while other software is running, while intentionally being unsophisticated. It's called a kernel subsystem.

I think I'll just stick to simple games on iOS/iPadOS or just use my Nintendo Switch. These anti-cheat systems are far too invasive for my liking. I also worry about those things being hacked! The last time I built a gaming PC was 20 years ago, when I was playing Doom, FEAR, and Half-Life 2. Then I did some simple gaming on macOS.

The amount of people in this thread who very clearly don't play competitive video games, let alone at a remotely high level, is astounding. The comment "it's your god given right to cheat in multiplayer games" might legitimately be one of the most insane takes I've ever read.

Kernel anticheat does work. It takes 5 seconds to look at Valve's record of both VAC (client based, signature analysis) and VACNet (machine learning) to know the cheating problem with those technologies is far more prevalent than platforms that use kernel level anticheat (e.g. FACEIT, vanguard). Of course, KLAC is not infallible - this is known. Yes, cheats do (and will continue to) exist. However, it greatly raises the bar to entry. Kernel cheats that are undetected by FACEIT or vanguard are expensive, and often recurring subscriptions (some even going down to intervals as low as per day or week). Cheat developers will 99% of the time not release these publicly because it would be picked up and detected instantly where they could be making serious money selling privately. As mentioned in the article, with DMA devices you're looking at a minimum of a couple hundred dollars just for hardware, not including the cheat itself.

These are video games. No one is forcing you to play them. If you are morally opposed to KLAC, simply don't play the game. If you don't want KLAC, prepare to have your experience consistently and repeatedly ruined.

It’s crazy to me how hard people work to effectively ruin a game for themselves… Imagine putting in this much effort to play Minecraft survival but on creative mode. It just doesn’t sound fun

  • They're getting some actual reward from having a big win/loss ratio. I don't know if that's monetary or just the feeling of being the best but I'd expect the latter group to realise this is all nonsense before spending money on hardware.

Kernel anti-cheats are a fascinating example of security trade-offs.

They solve a real problem (cheats running at higher privilege levels), but at the same time they introduce a massive trusted component into the OS. You're basically asking users to install something that behaves very much like a rootkit, just with a defensive purpose.

>TPM-based measured boot, combined with UEFI Secure Boot, can generate a cryptographically signed attestation ... This is not a complete solution (a sufficiently sophisticated attacker can potentially manipulate attestation)

I was not aware that attackers could potentially manipulate attestation! How could that be done? That would seemingly defeat the point of remote attestation.

  • See this for example:

    https://tee.fail/

    Defeating remote attestation will be a key capability in the future. We should be able to fully own our computers without others being able to discriminate against us for it.

    • Sure, but the exploit presented doesn't really look practical for the everyman. And I'm not sure if it can be patched in HW/SW, and in any case this is just the first step to a fully fake secure boot.

    • Thank you for that link, that's super interesting! It looks like it's actually an architectural vulnerability in modern fTPMs, and considered out of scope by both Intel and AMD. So that's a reliable way to break attestation on even the most modern systems!

    • The comms between the motherboard and the TPM chip aren't secured, so an attacker can just do a MITM attack and substitute in the correct values.

    • That doesn't sound accurate. The T in TPM stands for trust; the whole standard is about verifying and establishing trust between entities. The standard is designed with the assumption that anyone can bring in their scope and probe the ports. This is one of several reasons why the standard defines endorsement keys (EK).

      2 replies →

It is, of course, only a matter of time - just like kernel-level copy protection and Sony's XCP - before something like Vanguard in particular is exploited and abused by malware.

Himata is correct, too. After DMA-based stuff, it'll be CPU debugging mode exploits like DCI-OOB, some of which can be made detectable in kernel mode; or, stealthier hypervisors.

A lot of the techniques that both sides use would be much harder on macOS. Of course, Hackintoshes have always existed and where there’s a will, there’s a way. But it makes me wonder how this would evolve if Apple eventually gets its act together and makes a real push into gaming.

Remember when sony got a huge pushback for putting rootkits on CDs?

Now industry propaganda has gamers installing them voluntarily.

Never forget the risks of trusting game companies with this sort of access to your machine.

https://www.vice.com/en/article/fs-labs-flight-simulator-pas...

Company decides to "catch pirates" as though it was police. Ships a browser stealer to consumers and exfiltrates data via unencrypted channels.

https://old.reddit.com/r/Asmongold/comments/1cibw9r/valorant...

https://www.unknowncheats.me/forum/anti-cheat-bypass/634974-...

Covertly screenshots your screen and sends the image to their servers.

https://www.theregister.com/2016/09/23/capcom_street_fighter...

https://twitter.com/TheWack0lian/status/779397840762245124

https://fuzzysecurity.com/tutorials/28.html

https://github.com/FuzzySecurity/Capcom-Rootkit

Yes, a literal privilege escalation as a service "anticheat" driver.

Trusting these companies is insane.

Every video game you install is untrusted proprietary software that assumes you are a potential cheater and criminal. They are pretty much guaranteed to act adversarially to you. Video games should be sandboxed and virtualized to the fullest possible extent so that they can access nothing on the real system and ideally not even be able to touch each other. We really don't need kernel level anticheat complaining about virtualization.

  • The privacy points in general are valid, but what irritates me is using this rationale against kernel mode anti cheats specifically.

    You do not need kernel access to make spyware that takes screenshots. You do not need a privileged service to read the user’s browser history.

    You can do all of this, completely unprivileged on Windows. People always seem to conflate kernel access with privacy which is completely false. It would in fact be much harder to do any of these things from kernel mode.

    • Kernel access is related to privacy though, and it's the most well-documented abuse of such things. Kernel-level access can help obfuscate the fact that it's happening, and it is also useful for significantly worse things, which given the track record must be assumed to be happening. The problem is that kernel-level AC hasn't even solved the problem, so the entire thing is risky, unnecessary, and unfit for purpose, an entirely unnecessary risk to force onto unsuspecting users. The average user does not understand the risks and is not made aware of them either.

      There are far better ways to detect cheating, such as calculating statistics on performance and behaviour and simply binning players with those of similar competency. That way, if cheating gives god-like performance, you play with other god-like folks. No banning required. Detecting the thing cheating allows is much easier than detecting the ways people gain that thing: it creates a single point of detection that is hard to avoid and can be done entirely server-side, with tiers of how much server-side calculation a given player consumes. Milling around in bronze levels? Why check? If you aren't performing so well that you can leave low ranks, perhaps you need cheats as a handicap; whereas if you are consistently performing well out of distribution, you get flagged, at which point you catch smurfing as well.

      The point is: focusing on detecting the thing people care about, rather than one of the myriad ways people may gain that unfair edge, is going to be easier and more robust while asking less egregious things of users.
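      A toy sketch of that server-side "detect the edge, not the cheat" idea. The scoring and thresholds are invented for illustration; a real system would use far richer features than a single per-game score:

```python
from statistics import mean, stdev

def performs_out_of_distribution(bracket_scores, player_score,
                                 z_threshold=3.0, min_samples=20):
    """bracket_scores: per-game scores of peers in the player's current
    skill bracket. Returns True when a score is implausibly far above
    the bracket's distribution; repeated True results would trigger
    promotion to a higher bracket (or a closer look), never an instant ban."""
    if len(bracket_scores) < min_samples:
        return False  # not enough data to judge anyone
    mu = mean(bracket_scores)
    sigma = stdev(bracket_scores)
    if sigma == 0:
        return player_score > mu
    return (player_score - mu) / sigma > z_threshold
```

      Note the asymmetry this buys: the server only spends effort on players who keep winning out of distribution, which also catches smurfs for free.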

      3 replies →

    • There is no need for irritation. I condemn all sorts of anticheating software. As far as I'm concerned, if the player wants to cheat he's just exercising his god given rights as the owner of the machine. The computer is ours, we can damn well edit any of its memory if we really want to. Attempts to stop it from happening are unacceptable affronts to our freedom as users.

      Simply put, the game companies want to own our machines and tell us what we can or can't do. That's offensive. The machine is ours and we make the rules.

      I single out kernel level anticheats because they are trying to defeat the very mitigations we're putting in place to deal with the exact problems you mentioned. Can't isolate games inside a fancy VFIO setup if you have kernel anticheat taking issue with your hypervisor.

      18 replies →

  • Game companies have to ship those kernel anti-cheats because MS never implemented proper isolation in the first place; if Windows were secured like an Apple phone or a console, there wouldn't be a need for it.

    Anti-cheat doesn't run on modern consoles; game devs know that the latest firmware on a console is secure enough that it can't be tampered with.

    • Consoles and phones are "secure" because you don't own them. They aren't yours. They belong to the corporations. They're just generously allowing you to use the devices. And only in the ways they prescribe.

      This is the exact sort of nonsense situation I want to prevent. We should own the computers, and the corporations should be forced to simply suck it up and deal with it. Cheating? It doesn't matter. Literal non-issue compared to the loss of our power and freedom.

      It's just sad watching people sacrifice it all for video games. We were the owners of the machine but we gave it all up to play games. This is just hilarious, in a sad way.

      2 replies →

  • And if we embraced instead of feared remote attestation and secure enclaves, the days of game companies having this level of access would come to an end.

    • That's arguably even worse. Remote attestation means you get banned from everything if you "tamper" with "your" computer.

      Remote attestation is the ultimate surrender. It's not really your machine anymore. You don't have the keys to the machine. Even if you did, nobody would trust attestations made by those keys anyway. They would only trust Google's keys, Apple's keys. You? You need not apply.

I feel like this whole problem is just made up. Back in the day, when I played lots of Counter Strike, we had community servers. If a cheater joined, some admin was already online and kicked them right away. I'm sure we hit some people that were not actually cheaters, but they would just go to another server. And since there was no rank, no league, no rewards (like skins, drops, etc.), there was no external reward for cheating. It annoys me that cheating in competitive video games seems like a bigger problem than it has been in the past for no good reason.

  • Manually managing one cheater in a 20 person server is obviously very different than managing games between multiple millions of concurrent players

This got me wondering how easy it'd be to automate discovery of BYOVD vulns with LLMs (both offensively and defensively)

  • Probably not too hard with the LLM side itself assuming latest models and good tooling.

    The harder thing probably is getting a dataset for “all x64/ARM64 Windows drivers that aren’t already considered vulnerable”.

    Also it depends what’s considered a vulnerability here.

Uh, isn’t the IDT one of these things that PatchGuard explicitly checks? Mind you, anticheats keep PatchGuard corralled these days because they want their own KiPageFault hooks assuming HVCI is not in place.

The article doesn’t go too in depth on the actually interesting things modern anticheats do.

In addition:

- you can’t really expect the .text section of the game (or of any module except maybe your own) to be a 100% match for the copy on disk, because overlays will hook stuff like render crap (fun fact for you: Steam will also aggressively hook various WinAPI stuff, presumably for VAC, at least on CS2)

I still don't understand why people don't cheat in FPSes by looking at the video stream and having a USB mouse that emits the right mouse movements. (The simplest thing is to just click when someone's head is under your crosshair, in games with hitscan weapons.)

  • The problem with these bots is that they are indiscriminate which makes them vulnerable to active detection methods. They can also introduce an amount of latency that begins to defeat the purpose for sufficiently skilled players. 100ms is an eternity when you are playing with shotguns in close quarters.

I've said it before, but are anti-cheat mechanisms needed on consoles? If not (presumably due to their locked-down nature), what's the problem with having a locked-down mode (a trusted secure boot path that doesn't allow other programs to run, like the "Xbox mode" that Microsoft has started to implement) that is similar to a console?

This seems much more doable today than in the past as machines boot in moments. Switching from secure "xbox mode" to free form PC mode, would be barely a bump.

Now, I see one major difference: heterogeneous vs homogeneous hardware (and the associated drivers that come with that). In the Xbox world, one is dealing with a very specific hardware platform and a single set of drivers. In the PC world (even with a trusted secure boot path), one is dealing with lots of different hardware and drivers that can all have their own exploits. If users can easily modify their PCs and set of drivers, I'd imagine serious cheaters would gravitate to combinations they know they can exploit to break the secure/trusted boot boundary.

I wonder if there are other problems.

  • Not sure if they are considered anti-cheats, but there are some measures to detect usage of input devices like XIM that allow keyboard and mouse inputs which allow for superior aim over controllers.

    Well, it's definitely not game-developer-written kernel anti-cheat on consoles.

Kernel anti-cheats are weaponized by hackers. It is all over HN.

Play games which are beyond that: Dota 2 or CS2, for instance.

On Linux, there is a new syscall which allows a process to mmap into itself the pages of another process (I guess with the ~same effective UID and GID). That is more than enough to give hell to cheats...

But any of that can work only with a permanent and hard working "security" team. If some game devs do not want to do that, they should keep their game offline.
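The capability that comment alludes to can be illustrated with the older process_vm_readv(2) syscall (Linux 3.2+), which already lets one process read another's memory given a ptrace-style permission check (same effective UID, or CAP_SYS_PTRACE); the newer mmap-style interface is the same idea with shared mappings instead of copies. A minimal ctypes sketch, assuming glibc on Linux:

```python
import ctypes
import os

libc = ctypes.CDLL("libc.so.6", use_errno=True)
libc.process_vm_readv.restype = ctypes.c_ssize_t

class IOVec(ctypes.Structure):
    # Mirrors struct iovec from <sys/uio.h>
    _fields_ = [("iov_base", ctypes.c_void_p),
                ("iov_len", ctypes.c_size_t)]

def read_remote(pid, addr, size):
    """Copy `size` bytes from address `addr` in process `pid`,
    without injecting anything into the target."""
    buf = ctypes.create_string_buffer(size)
    local = IOVec(ctypes.cast(buf, ctypes.c_void_p), size)
    remote = IOVec(addr, size)
    n = libc.process_vm_readv(pid, ctypes.byref(local), 1,
                              ctypes.byref(remote), 1, 0)
    if n < 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
    return buf.raw[:n]
```

This is exactly the double-edged sword: the same primitive serves a user-mode anti-cheat scanner and a user-mode cheat, which is why the permission model around it matters more than the syscall itself.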

Hear me out:

How about this: Instead of third-party companies installing their custom code to fuck with my operating system,

How about just having the OS offer an API that a game can request to reboot the OS into "console mode": A single-user, single-application mode that just runs that game only.

Similar to how consoles work.

That mode could be reserved for competitive ranked multiplayer only.

I could have sworn online gambling people fixed this years ago with just wifi. I thought I remembered reading a comment on here about the online-gambling-for-kids no-cheating people not talking to the online-gambling-for-adults no-cheating people.

  • The "just wifi" is about getting your true geolocation so regulated gaming platforms can operate legally. Ironically, I bet whatever API they use can be intercepted by a kernel level process.

    They also have VM checks. I "accidentally" logged into MGM from a virtual machine. They put my account on hold and requested I write a "liability statement" stating I would delete all "location altering software" and not use it again. (Really!)

  • That would be interesting if they did.

    Looking at cards is a way easier problem than rendering a 3D world with other players bouncing around. I imagine you could just send the card player basically a screenshot of what you want them to see and give them no other data to work with, and that would mostly solve cheating.

    But gambling can be way more complicated than just looking at cards so maybe there's a lot more to it.