Comment by xg15
1 day ago
This article gave me more appreciation for the stance of the Linux community.
So, to sum up: Valorant's anti-cheat, which the author sees as something like an ideal solution:
- starts up and loads its kernel driver on boot.
- generates a persistent unique ID based on hardware serial numbers and associates this with my game account.
- stays active the entire time the system is up, whether I play the game or not. But don't worry, it only does some unspecified logging.
- is somehow not a spyware or data protection risk at all...
I also always hear a lot of people complain about cheaters in Valorant, so all of that compromised personal security doesn't actually stop cheaters.
Honestly I feel like you should only use kernel anticheat on a dedicated machine that's kept 100% separate from any of your personal data. That's a lot to ask of people, but you really shouldn't have anything you don't consider public data on the same hardware.
> you should only use kernel anticheat on a dedicated machine that's kept 100% separate from any of your personal data.
Correct. Unfortunately, what you've just described is a gaming console rather than a PC. This problem fundamentally undermines the appeal of PC gaming in a significant way, imo.
> This problem fundamentally undermines the appeal of PC gaming in a significant way, imo.
Yes, game publishers are trying to turn PCs into gaming consoles, which IMO will always be a futile effort, and is quite frankly annoying. I don't game on PC to have a locked-down, console-like experience.
Just embrace the PC for what it is and stop trying to turn it into a trusted execution platform with spyware and rootkits.
Look at BF6 - for all the Secure Boot and TPM-requiring anti-cheat they stuffed it with, there were cheaters on day 1. So why abuse your users when it's clearly ineffective anyway?
10 replies →
Honestly, if consoles were willing to accept KB+M (and gyro aiming, for that matter), I’d be all for competitive live-service titles mostly abandoning PC, except for a small “probably infested with cheaters” base.
4 replies →
Somehow Xonotic manages to be both completely free/open software and not have cheating problems like this. It's never been clear to me how they've done that, although client-side stuff like these kernel anti-cheat schemes was obviously never going to work.
1 reply →
This is why (even though everybody hates me for saying this) the only way to do security is by enforcing a root of trust - which is why Windows 11 forcing Secure Boot and TPM is a necessary change.
The idea that we should allow arbitrary code execution at some point and then claw back security by running mass surveillance on your PC is clearly insane.
The only way to go forward is what BF6 has done - ensure the PC is in a pristine state, and nothing bad was loaded in the kernel - which is ironically why their anticheats conflicted - they don't allow loading random crap in the kernel.
Not to mention, people who develop these invasive security modules don't have the expertise, resources or testing culture to muck about in the kernel to the degree they do.
Just how dangerous this actually is was showcased by CrowdStrike last year.
Sounds great! Guess who I trust? Me. The root of trust should be a key I generate. I do not trust this to any government, any private company or really any 3rd party, except perhaps a member of my family or my lawyer. It can just be me and maybe someone I grant a digital equivalent of power of attorney to. For a company like Microsoft to try and get involved is in my view a form of aggression.
> doesn't actually stop cheaters.
doesn't actually stop all cheaters.
We could have a better discussion around this if we recognize that failing to stop 100% of something isn't a prerequisite to rigorously evaluating the tradeoffs.
Doesn't actually stop all cheat developers. If even one person develops and sells a cheat that the kernel-level anticheat doesn't catch, then it stops 0% of cheaters from buying and using the cheat.
3 replies →
I think the problem with this line of reasoning is that it's one-sided. Essentially you are saying "Just trust me bro" on behalf of a self-evaluating company.
I'd argue the potential for abuse is a perfectly reasonable discussion to have, and doesn't have much bearing on the effectiveness of anticheat, but I understand that's not the point you are trying to make.
2 replies →
I fundamentally agree with you.
But anti-cheat hasn't been about blocking every possible way of cheating for some time now. It's been about making cheating as inconvenient as possible, thus reducing the number of cheaters.
Is the current fad of using kernel level anti-cheats what we want? hell nah.
The responsibility of keeping a multi-player session clean of cheaters was previously shared between developers and server owners, while today it has fallen mostly on developers (or rather game studios), since they want to own the whole experience.
A dedicated machine with no other general purpose apps that has minimal private data on it sounds like a gaming console.
Or a virtual machine...
14 replies →
About halfway in the article, there's a brief nod to CS:GO. It uses a tick system and the server controls what is possible, such as physics or awarding kills. Fighting genre games use the same server-based game logic.
Cheating is a big draw to Windows for semi-pro gamers and mid streamers. What else is there to do except grind? Windows gives the illusion of "kernel level anti-cheat," which filters out the simplest cheats and fools most people some of the time.
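To make the "server controls what is possible" point concrete, here's a minimal, purely illustrative sketch of server-authoritative hit validation. None of this is CS:GO's actual code; the hitbox radius, the 2D geometry, and the function names are invented for illustration.

```python
# Toy sketch of server-authoritative kill awarding: the client reports only an
# aim direction; the server checks the shot against its own copy of the world
# instead of trusting the client's claim that it hit. All numbers are invented.
import math

HIT_RADIUS = 0.5  # assumed target hitbox radius


def server_awards_hit(shooter, aim_dir, target) -> bool:
    """Does a ray from the shooter along aim_dir pass within HIT_RADIUS of the target?"""
    dx, dy = target[0] - shooter[0], target[1] - shooter[1]
    length = math.hypot(*aim_dir) or 1e-9
    ux, uy = aim_dir[0] / length, aim_dir[1] / length
    along = dx * ux + dy * uy          # progress along the aim ray
    if along < 0:
        return False                   # target is behind the shooter
    perp = abs(dx * uy - dy * ux)      # perpendicular distance from ray to target
    return perp <= HIT_RADIUS


print(server_awards_hit((0, 0), (1, 0), (10, 0.2)))  # True: on target
print(server_awards_hit((0, 0), (1, 0), (10, 3.0)))  # False: a forged "I hit him" is rejected
```

This stops clients from inventing kills outright, but, as the replies below point out, it does nothing against wallhacks or aimbots, which operate entirely within the rules the server enforces.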
Fighting games do not use server-mediated simulation, in general. Cheating is actually a huge problem in popular games. And in fact, even running a server-mediated simulation wouldn't help with any of the common cheating in fighting games.
For instance, a common cheat in Street Fighter 6 is to trigger a drive impact in response to the startup of a move that is unsafe to a drive impact. That is recognizing the opponent's animation and triggering an input. There's no part of that which cares where the game simulation is being done. In fact, this kind of cheating can only be detected statistically. And the cheats have tools to combat that by adding random triggering chances and delays. It's pretty easy to tune a cheat to be approximately as effective as a high-level player.
Kernel-level anticheat isn't a perfect solution, but there are people asking for it. It would make cheating a lot harder, at least.
> About halfway in the article, there's a brief nod to CS:GO. It uses a tick system and the server controls what is possible,
As does Valorant and virtually every other first-person shooter. The cheats aren't people flying around or noclipping; it's wallhacks and aim assists/bots.
3 replies →
> Honestly I feel like you should only use kernel anticheat on a dedicated machine that's kept 100% separate from any of your personal data. That's a lot to ask of people, but you really shouldn't have anything you don't consider public data on the same hardware.
Wouldn't it be sufficient to simply have a minimal system installed on a separate partition or on a separate drive (internal or external)? Boot that for gaming, and never give it the password for the encryption of your non-gaming volumes.
Why not dual boot, and keep your files on an encrypted partition?
> Honestly I feel like you should only use kernel anticheat on a dedicated machine that's kept 100% separate from any of your personal data. That's a lot to ask of people, but you really shouldn't have anything you don't consider public data on the same hardware.
Yes, and at that point, you may as well use Windows for that machine.
Strongly agreed. Some people want kernel-level anticheat for Linux. I think that's a huge mistake. Ideally, kernel-level anticheat would be done away with altogether. More realistically, I'm just going to avoid any games which use kernel-level anticheat, even if it means missing out.
I got roasted on linux subreddits for saying as much. We should not be encouraging this crap to come to Linux, it needs to go away for good.
IIRC, even Microsoft was getting fed up with hands in the kernel after CrowdStrike, so we may see it disappear eventually if Microsoft starts cracking down.
Wait, people on linux subreddits support kernel-level anti-cheat?
1 reply →
So you're just okay with people cheating then?
That's a rude strawman of the point I was making. Kernel-level anticheat is just too great of a cost. Your entire system is compromised so that you can play some (usually lousy) AAA games.
I oppose kernel-level anticheat because once it's in place, it will proliferate, even to single player games, just as it has in Windows.
In other words, once it's broadly supported, the number of games available to me (assuming I want to avoid kernel-level anticheat) will actually _shrink_.
> - is somehow not a spyware or data protection risk at all...
Don't worry, it's owned by Tencent.
The author made the most ridiculous arguments; I had to stop reading after that point.
Phew!
In Valorant's defence:
1) There is a 100k bug-bounty on the anti-cheat: https://hackerone.com/riot?type=team
2) The anti-cheat is the game's entire reason for being. It is the main focus of the development and marketing. People buy Valorant for the anti-cheat; they are willing to accept a kernel driver as a trade-off for fairer competition.
Based on the install base and the level of access it could theoretically provide, I think a 0-day has a good shot at being worth more than $100k. Definitely worth more than that if you happen to know your high-value target plays League.
Fair competition is all well and good, but there are other ways to do it and I can already tell you that the war on kernel-level anti cheat is well under way. There are already people cheating in Valorant, and that will not slow down. If anything, it's going to get more common because cheaters and cheat creators are some of the most diligent people out there.
'Buy valorant'?
- and, by design, is resistant to auditing, analysis, or user-modification
If you trust Microsoft with your OS, I suppose you should trust Microsoft when they sign kernel modules, right? ;)
It's a good thing that Microsoft has never signed an anticheat kernel module that turned out to be so vulnerable that some malware installed it on purpose to gain more system access.
2 replies →
It is the same stance as calling games "Linux games" when they were developed for Windows, using DirectX, with no intention from the studios of ever targeting GNU/Linux, even though they might actually target Android/Linux with other titles.
Because somehow Proton is better than standing for actual GNU/Linux games.
So, like IBM with OS/2 and Windows, studios keep ignoring Linux and let Valve do whatever is needed; it is Valve's problem to sort out.
Honest question: do you segment your activities on your computer on different users?
No? In which case, what practical spyware risk does a kernel-level driver add beyond what user-mode software can already do?
User mode software can spy on your clipboard, surreptitiously take screenshots, and take data out of your system. That spooks me enough that, if I don’t trust a software manufacturer, I don’t install it. Kernel mode makes no practical difference in my security posture.
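As a trivial illustration of that point, here's a sketch of how much an ordinary, unprivileged user-mode process can already reach. It only counts readable files under the home directory, but the reach is the point; no kernel driver is involved.

```python
# Minimal sketch: an ordinary user-mode process can walk and read everything the
# logged-in user can. It counts readable files rather than reading them, but a
# less polite program could just as easily copy them out.
import os
from pathlib import Path

readable = 0
for root, _dirs, files in os.walk(Path.home()):
    for name in files:
        if os.access(os.path.join(root, name), os.R_OK):
            readable += 1

print(f"files readable by this unprivileged process: {readable}")
```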
For starters:
- Creating a unique ID that is directly bound to hardware.
- Accessing the memory of any process, including browsers or messengers.
- Installing persistent background processes that are hidden from the rest of the system.
But I think that's the wrong question. Talking about the kernel driver is a distraction.
The abuse scenario that I think is most likely would be that the game and/or anticheat vendor uses the hardware ID for user profiling instead of just ban enforcement, and that the "logging" functionality is co-opted to detect software or activities that aren't related to cheats at all, but are just competitors of the vendor or can once again be used for profiling, etc.
None of that strictly requires a kernel driver. Most of that stuff could easily be done with a usermode daemon. But under normal circumstances, there is no way I'd install such a program. Only in the name of cheat prevention does it suddenly become permissible to make users install that stuff when all they want to do is play some game.
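For the hardware-ID point specifically, here's a rough sketch of what "a unique ID directly bound to hardware" can look like in practice. This is not Vanguard's actual method; the sysfs paths, hash choice, and fallback behaviour are assumptions, and some of the sources need elevated privileges to read.

```python
# Illustrative only: derive a persistent fingerprint by hashing identifiers that
# survive an OS reinstall. Real anticheats read serials via driver calls instead.
import hashlib
from pathlib import Path

SOURCES = [
    "/sys/class/dmi/id/product_uuid",   # motherboard UUID (often root-only)
    "/sys/class/dmi/id/board_serial",   # mainboard serial number
    "/sys/class/net/eth0/address",      # NIC MAC address (interface name varies)
]


def hardware_fingerprint() -> str:
    parts = []
    for path in SOURCES:
        try:
            parts.append(Path(path).read_text().strip())
        except OSError:
            parts.append("")            # missing or unreadable source: degrade gracefully
    return hashlib.sha256("|".join(parts).encode()).hexdigest()


print(hardware_fingerprint())
```

Once such an ID is tied to an account, it persists across reinstalls and account resets, which is exactly what makes it useful for ban enforcement and equally useful for profiling.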
The point is, you don't need a kernel driver to access most of your data. A plain user-space process can read all your files and the memory of other processes owned by the same user.
1 reply →
> User mode software can spy on your clipboard, surreptitiously take screenshots, and take data out of your system
Not on any properly secured Linux machine. But yes, it's generally a bad idea to install software you don't trust, a category that anticheats slot nicely into, given their resistance to auditing and analysis.
A properly secured Linux machine is a unicorn. The Linux desktop ecosystem is struggling a lot with putting software in namespaces. People still install software with their package managers outside Flatpak, there is no isolation of data, not to mention that many workflows depend on the whole user directory being accessible.
This is adjacent to how Linux users claim their default system is inherently more malware-resistant than Windows, when either way you're trusting anything you run in user space with almost everything important.
Some* Linux users
> Honest question: do you segment your activities on your computer on different users?
Yes.
Except that this kernel driver is audited and signed by Microsoft, whom you also trust with the rest of your kernel if you use Windows at all.
I don't think Microsoft audits the code it signs. Wasn't CrowdStrike signed by Microsoft?
It was. All Windows kernel drivers are.
Microsoft doesn't do any auditing besides "is this the most obvious malware?"
They don't audit them. Private cheat sellers use signed drivers because they have a small set of customers, so they're unlikely to be reported or detected.
And since the game has access to the anticheat running in the kernel, every Valorant bug is a potential root level kernel exploit.
The way I described it to a friend was to use this analogy: Imagine you have someone over for game night, and before you play they say "Oh, by the way, I need the keys to the filing cabinet where you keep all your tax returns and whatnot." To which you might respond, "Wait, you need to read my tax returns before we can play this game?" And they say, "Oh, I'm not going to read them, I just need to hold the key while we play."
And you would rightly tell them to piss off and get out of your house, because that makes no sense. If you really wanted to torture the metaphor, you could I guess argue that they need full access to your house just in case you decide to pull some loaded dice out of the filing cabinet or something, but that's not really the important thing to me. The important thing is that, regardless of whether or not I trust the developer of the anti-cheat, the game just isn't that important.
- … but, more or less successfully, prevents most of the cheating attempts that would otherwise make the game unplayable.
For anyone saying “just do server side,” no, it’s physically impossible to stop all cheating that way until we have internet faster than human perception.
I actually think this is one area where AI and statistics applied to player behavior are actually the right answer, similar to how they catch chess cheaters.
I've seen videos where cheats are particularly easy to detect if you are also cheating. I.e. when you have all the information, you can start to see players reacting to other players before they should be able to detect them. So it should be possible to build a repertoire of cheating examples and clean examples using high-level players, to catch a fair amount of cheating behavior. And while I understand that there are ways to mitigate this and it's an arms race, the less obvious the cheats are, the less effective they are, almost by definition.
If someone is consistently reacting outside the range of normal human reaction times, they're cheating. If they randomize it enough to be within human range, well, mission accomplished, kind of.
If they're reacting to other players in impossible ways by avoiding them or aiming toward them before they can be seen with unusual precision or frequency, they're cheating.
A lot of complex game dynamics can be simplified to 2D vectors and it shouldn't be that computationally intensive to process.
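As a sketch of what that kind of statistical screening could look like (the threshold, rate, and data shape are invented, not any vendor's real pipeline):

```python
# Toy reaction-time screening: flag players who react to newly-visible enemies
# faster than humans plausibly can, too often to be luck. Numbers are assumptions.
HUMAN_FLOOR_MS = 120      # below this, a visual reaction is implausible
SUSPICIOUS_RATE = 0.3     # fraction of sub-floor reactions that triggers review


def flag_player(reaction_times_ms: list[float]) -> bool:
    """Return True if too many reactions fall below the plausible human floor."""
    if not reaction_times_ms:
        return False
    too_fast = sum(1 for t in reaction_times_ms if t < HUMAN_FLOOR_MS)
    return too_fast / len(reaction_times_ms) > SUSPICIOUS_RATE


print(flag_player([250, 95, 80, 310, 70, 60]))  # True: most reactions are sub-human
print(flag_player([260, 240, 310, 280]))        # False: ordinary human timings
```

As noted above, cheats respond by randomizing delays into the human range, which caps how effective they can be while staying undetected.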
Fully agreeing with this. I think there are two different approaches when people think of "server side":
The first is "never trust the client", i.e. realtime validation and having the server be the sole authority on the current game state. This is the straightforward solution to think of for programmers, but it's also practically infeasible due to latency, etc.
But what the server could do is a "trust but verify" approach: accept data from the clients when they submit it, but have some background processes that can analyze the data for anomalies and, if too much of it was detected, trigger a ban.
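A minimal sketch of that "trust but verify" idea, assuming the server simply logs the position updates it accepted and a background job re-checks them later (the data shape and speed limit are made up):

```python
# Background verification pass: accept client updates in real time, then count
# updates that imply faster-than-possible movement. Constants are assumptions.
from dataclasses import dataclass

MAX_SPEED = 7.0   # assumed maximum legitimate speed, units per second


@dataclass
class Sample:
    t: float      # server-receive timestamp, seconds
    x: float
    y: float


def violations(samples: list[Sample]) -> int:
    """Count accepted position updates that imply impossible movement speed."""
    count = 0
    for prev, cur in zip(samples, samples[1:]):
        dt = max(cur.t - prev.t, 1e-3)
        speed = ((cur.x - prev.x) ** 2 + (cur.y - prev.y) ** 2) ** 0.5 / dt
        if speed > MAX_SPEED * 1.1:   # small tolerance for network jitter
            count += 1
    return count


trace = [Sample(0.0, 0.0, 0.0), Sample(0.1, 0.5, 0.0), Sample(0.2, 5.0, 0.0)]
print(violations(trace))  # 1: the second step implies ~45 units/s
```

Enough violations over a session would then feed into the delayed ban you describe.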
The only problem I see with this approach is that cheaters might react by repeatedly making new accounts and playing as them until the verification process has caught up and bans the account.
Cheating would be more obvious - as cheaters would have to start over with a beginner character every time - but it could still be annoying.
So the problem of ban evasion would become even more important. And I don't really see how a purely server-side solution could work there.
Sure, but you could at least stop the most blatant wallhacks; most times I see a video of a cheater, it's something stupid like that. It can't be that hard to do occlusion calculations server-side, right?
Don't let perfect be the enemy of good.
>It can't be that hard to do occlusion calculations server-side, right?
I think you already know the answer. Yes, it's bottlenecked by latency and jitter (of the laggiest player, no less), and in addition to that the maximum possible movement velocity makes it much much worse in fast paced games. It's been attempted a few times since at least late 90's, with predictable results.
In other words, complete server-side calculations are a fantasy. Besides, they won't even remotely make cheating impossible or even harder! Even complete hardware lockdown won't.
It can be rather difficult, mostly due to the occlusion calculations having to be conservative (must count visible things as visible, allowed to count invisible as visible, or things pop) and latency (must account for every possible position within max move speed * max latency, or things pop)
The naive raycast from player camera to other player would be fine for perf but may count partially visible as invisible, so it's unacceptable. You'd have to raycast every pixel of the potentially visible player model to stay conservative. With movement + latency this expands to every pixel the player model could potentially occupy during your max latency period, and you need to consider the viewer moving too!
In practice this expands to a visibility test between two spheres with radius max_latency*max_movespeed + player_model_radius. Now, you could theoretically do a bunch of random raycasts between the spheres and get an answer that is right some of the time, but it would be a serious violation of our conservativeness criteria and the performance would get worse with more rays/better results. Also keep in mind that we need to do this for every single player/player pair a few dozen times per second, so it needs to be fast!
To do this, you need a dedicated data structure that maps volumes to other volumes visible from said volume. There are a few, and they are all non-trivial and/or slow to build well (google for e.g. potentially visible set, cell-portal graph + occlusion). You also trade performance for precision, and in practice your walls might become 'transparent' a bit too early. With all this being done, we can actually "do occlusion calculations server-side".
There's just one problem with this that I still don't know a solution for, namely precision. With fast players and imprecise conservative visibility, things you care about are going to count as visible pretty often, including stuff like enemies peeking from behind a corner (because they could have moved at full sprint for 100ms and the end of the wall is rounded away in your acceleration structure anyway) so all this complexity might not get you that much, particularly if your game is fast paced. You'd prevent some wallhacks but not the ones that really matter.
TLDR yes, it's actually hard and might not be good enough anyway
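To make the conservativeness requirement concrete, here's a toy 2D sketch: each player's reported position is inflated by the worst-case movement during one latency window, and an enemy is hidden only when even the inflated sightlines are all blocked. The single circular pillar, the constants, and the 2D simplification are all invented for illustration; real engines use the precomputed PVS / cell-portal structures described above.

```python
# Toy conservative visibility test in 2D. "Conservative" here means: if there is
# any chance the players could see each other within the uncertainty window, we
# must report them as visible. All constants are assumptions.
import math

MAX_SPEED = 7.0        # assumed units per second
MAX_LATENCY = 0.15     # assumed worst-case latency window, seconds
PLAYER_RADIUS = 0.5    # assumed player model radius


def seg_point_dist(ax, ay, bx, by, px, py):
    """Distance from point P to segment AB."""
    abx, aby = bx - ax, by - ay
    denom = abx * abx + aby * aby or 1e-9
    t = max(0.0, min(1.0, ((px - ax) * abx + (py - ay) * aby) / denom))
    return math.hypot(px - (ax + t * abx), py - (ay + t * aby))


def could_be_visible(a, b, pillar_center, pillar_radius):
    """Hide B from A only if the pillar blocks every sightline the uncertainty allows."""
    # Any real sightline deviates from the reported center line by at most `slack`.
    slack = PLAYER_RADIUS + MAX_SPEED * MAX_LATENCY
    d = seg_point_dist(*a, *b, *pillar_center)
    return d > pillar_radius - slack


a, b = (0.0, 0.0), (10.0, 0.0)
print(could_be_visible(a, b, (5.0, 0.4), 3.0))  # False: safely occluded, don't send position
print(could_be_visible(a, b, (5.0, 2.2), 3.0))  # True: an edge peek is possible, must send
```

Even in this toy version you can see the precision problem from the comment above: the slack term eats 1.55 units off every occluder, so thin cover effectively stops hiding anyone.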
In a 2D game? Sure, no problem; all Dota types have them. In games like CS and Valorant? Yes, they already do that; they have maps with simple geometry, so it's possible. Games with open-world geometry, buildings with windows, etc.? It will be almost useless to implement anyway. You need to avoid pop-in effects, so positions need to be sent 1-2m before they are visible; that's what they do in CS/Valorant, but it doesn't really work with complex geometry.
If the server sends your client "you hear footsteps from this location" then you know where they are.
When it comes to cheating, this is one of those rare cases where "don't let perfect be the enemy of good" doesn't hold.
The problem is that server-side occlusion is only a small piece of the puzzle. A naïve implementation means hundreds of thousands of raycasts per second, which doesn’t scale. Real engines rely on precomputed visibility sets, spatial partitioning, and still have to leak some data client-side for responsiveness.
Basically, the kernel-level check is not laziness; it's a response to problems that can't be solved server-side without huge compute costs or added latency.
5 replies →
And it gets circumvented anyway.
https://www.youtube.com/watch?v=RwzIq04vd0M
It seems to me that kernel-level anti-cheat is little more than a speed bump for determined cheaters.
Having one determined cheater is worth it if it means not having 1,000 cheaters who fear getting banned.