Comment by Nextgrid
7 hours ago
> plus it's open-source
Open-source only matters if you have the time/skill/willingness to download said source (and that of any dependencies) and compile it yourself.
Otherwise you're still running a random binary and there's no telling whether the source is malicious or whether the binary was even built with the published source.
It's no guarantee, but it's a positive indicator of trustworthiness if a codebase is open source.
I don't have hard numbers on this, but in my experience it's pretty rare for an open-source codebase to contain malware. Few malicious actors are bold enough to publish the source of their malware. The exception that springs to mind is source-based supply chain attacks, such as publishing malicious Python code to Python's pip package manager.
You have a valid point that a binary might not correspond to the supposed source code, but I think this is quite uncommon.
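One partial mitigation (assuming the project publishes checksums alongside its releases, as many do) is to verify the downloaded binary's hash before running it. This doesn't prove the binary matches the published source — only reproducible builds get you that — but it does confirm you received the exact artifact the maintainers published. A minimal Python sketch; the filename and file contents here are stand-ins so the example is self-contained:

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in "binary" so the sketch runs anywhere; in practice you'd
# hash the release file you actually downloaded.
with open("downloaded.bin", "wb") as f:
    f.write(b"pretend this is a release binary")

# In practice this digest comes from the project's release page,
# fetched over a separate trusted channel.
published = hashlib.sha256(b"pretend this is a release binary").hexdigest()

if sha256_of("downloaded.bin") != published:
    raise SystemExit("checksum mismatch: do not run this binary")
```

Of course, if the checksum is hosted on the same server as the binary, an attacker who can swap one can swap both, which is why projects often also sign releases with a key published elsewhere.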
Of course this is true. But you can keep going down the rabbit hole. How do you know there isn't a backdoor hidden in the source code? How do you know there isn't a compromised dependency, maybe intentionally?
Ultimately there needs to be trust at some point, because nobody is realistically going to do a detailed security analysis of the source code of everything they install. We do this all the time as software developers; why do I trust that `pip install SQLAlchemy==2.0.45` isn't going to install a cryptominer on my system? It's certainly not because I've inspected the source code; it's because there's a web of trust in the ecosystem (well-known package, lots of downloads, if there were malware someone would likely have noticed before me).
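For what it's worth, if you want something stronger than that web of trust, pip does support hash-pinning: you record the expected digest of each package in your requirements file, and `pip install --require-hashes` refuses to install anything whose downloaded archive doesn't match. A sketch (the digest below is a placeholder, not SQLAlchemy's real hash):

```text
# requirements.txt — the hash value is an illustrative placeholder
SQLAlchemy==2.0.45 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000

# install with hash checking enforced:
#   pip install --require-hashes -r requirements.txt
```

This still doesn't tell you the pinned code is benign; it only guarantees you get the same bytes you (or someone you trust) vetted once.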
> still running a random binary
Again "random" here is untrue, there's nothing random about it. You're running a binary which is published by the maintainers of some software. You're deciding how much you trust those maintainers (and their binary publishing processes, and whoever is hosting their binary).
The problem is that on Windows or your typical Linux distro "how much you trust" needs to be "with full access to all of the information on my computer, including any online accounts I access through that computer". This is very much unlike Android, for example, where all apps are sandboxed by default.
That's a pretty high bar, I don't blame your friend at all for being skeptical.
Right, which goes back to the main point: "total control of your computing environment" fundamentally means that you are responsible for figuring out which applications to trust, based on your own choice of heuristics (FOSS? # of downloads/GitHub stars? Project age? Reputation of maintainers and file host? etc...). Many, maybe most, people don't actually want to do this, and would much rather outsource that determination of trust to Microsoft/Google/Apple.
> Open-source only matters if you have the time/skill/willingness to download said source (and any dependencies') and compile it.
Not really. The fact that an application is open-source means its originator can't rug-pull its users at some random future date (as so often happens with closed-source programs). End users don't need to compile the source for that to be true.
> Otherwise you're still running a random binary and there's no telling whether the source is malicious or whether the binary was even built with the published source.
This is also not true in general. Most open-source programs are available from an established URL, for example a GitHub release archive with an appropriate track record. And the risks of downloading and running a closed-source app are much the same.