Microsoft itself recommends developers use a "Dev Drive" where Defender is partially disabled because of how bad it is.
Dev Drive isn't because Defender is so bad but because dev behavior can look like malicious behavior: creating a bunch of random executables, attaching to running processes, decompiling files. Stuff that would be malicious behavior from a normal user but is normal for a dev.
I could be wrong, but I don't believe that even these days antivirus software looks at behavioral patterns to identify viruses. It looks for signatures of running executables to match against malicious patterns in a database. Instead, the Dev Drive recommendation is about performance. There's substantial overhead: dev workloads, particularly for native code like C/C++/Rust, create a lot of intermediate files as part of the build, and scanning each one causes a slowdown. Traditionally the advice for Windows devs was to turn off Defender or exclude your project folders, but maybe there's a reason Dev Drives are still beneficial (maybe working at the drive level avoids even more work).
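To make the overhead concrete, here's a toy version of the scan loop, a minimal sketch in which whole-file hashing stands in for real signature matching (the hash "database" and the build directory path are made up for illustration):

    import hashlib
    import time
    from pathlib import Path

    # Toy "signature database". Real engines match byte patterns and
    # heuristics against millions of entries, not whole-file hashes.
    KNOWN_BAD_SHA256 = {
        "0" * 64,  # placeholder entry
    }

    def scan_file(path: Path) -> bool:
        """Return True if the file's hash matches a known-bad signature."""
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        return digest in KNOWN_BAD_SHA256

    def scan_tree(root: Path) -> None:
        """Scan every file under root and report the total time spent."""
        start = time.perf_counter()
        files = [p for p in root.rglob("*") if p.is_file()]
        flagged = [p for p in files if scan_file(p)]
        elapsed = time.perf_counter() - start
        print(f"scanned {len(files)} files in {elapsed:.2f}s, flagged {len(flagged)}")

    if __name__ == "__main__":
        # e.g. point at a C++ build/ or Rust target/ directory full of
        # object files; the cost scales with the number of intermediates.
        scan_tree(Path("build"))

Even this trivial loop has to read every byte of every file, and an on-access scanner pays a version of that cost on each write during a build that emits thousands of object files.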
Ok, and where exactly will malware place its artifacts when it comes to infect your company's developers?
AV Comparatives does performance-impact testing of various AV software every few months, and Defender has never scored well there. Third-party AV options have consistently done better while getting the same or better scores in protection tests.
I'm not familiar with AV Comparatives. Do they have any incentives that might influence this result? Offhand, it seems like if Windows Defender is actually the right choice for basically everyone, they wouldn't have any reason to exist, so I can't help but wonder if that would affect their reporting.
They claim to be independent and I've never gotten the sense that any specific AV product is favored in their testing. I realize they could be biased or taking money while pretending they're not, but that's a tough thing to prove one way or the other.
I can say for sure though that Defender at times has a noticeable performance impact. That's why years ago I went looking for performance comparisons in the first place.
I've seen Defender be the cause of all of those things that the grandparent listed.
Even Defender is dumb. When you control the OS, which (in the default setup) has exclusive control of all disk reads and writes, you can be sure that if you wrote a virus-free file to disk, it will still be virus-free when you go to read it again.
So why are we doing scan-on-read (with substantial performance overhead) when we could instead do scan-on-write (where scanning can, in most cases, be done in idle CPU cycles)?
1) The virus database gets updated; what was written virus-free under the previous database may not be virus-free under the current one.
2) Removable storage devices.
3) The system drive is not under the OS's control across reboots (another OS or offline access could modify it).
You could imagine building a system that tracks which files were written and under which virus-database version, resets the scanned state across reboots and database updates, carves out exceptions for removable devices, and so on — something like the sketch below — but it screams "attack surface"...
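A minimal sketch of that bookkeeping (hypothetical names throughout; a real implementation would live in a filesystem filter driver, not userspace):

    import hashlib
    from pathlib import Path

    class ScanCache:
        """Track which file contents were verified clean, and under which
        signature-database version. Illustrative only."""

        def __init__(self, db_version: int) -> None:
            self.db_version = db_version
            self.clean: set[str] = set()  # content hashes known clean

        def on_db_update(self, new_version: int) -> None:
            # A new database invalidates every cached verdict (reason 1).
            if new_version != self.db_version:
                self.db_version = new_version
                self.clean.clear()

        def needs_scan(self, path: Path) -> bool:
            return hashlib.sha256(path.read_bytes()).hexdigest() not in self.clean

        def mark_clean(self, path: Path) -> None:
            self.clean.add(hashlib.sha256(path.read_bytes()).hexdigest())

Keying on content hashes rather than paths sidesteps removable media and offline edits (reasons 2 and 3), but hashing a file on every read is already most of the I/O a scan would do, and every line of this bookkeeping is more code that has to be exactly right.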
Network shares, the possibility that a client wrote files while the AV software was disabled, etc.
I always felt the same way about daily/weekly scans. How would anything get there if your client, server, etc. all have AV? And at that point it probably wouldn't be caught anyway.