Comment by lucianbr
2 years ago
99.9999% of the code running on my machine is written by others and not even readable to me. I'm pretty optimistic that a similar percentage is true on your machines. So yeah, we run remote commands all the time, all of us. There may be a subtle difference between "curl something | bash" and "apt-get install" or "setup.exe", but there is no fundamental one.
Fundamentally:
1. the packages being worked on by Debian et al have a huge pile of infrastructure so that their development happens collaboratively and in the open, with many eyes watching
2. everyone gets the same packages
3. they have their own security teams to _ensure_ everyone is getting the same packages, i.e. that their download servers and checksums haven't been compromised
4. the project has been working since 1993 to ensure their update system, and the system delivered by those updates, works as expected. If it doesn't, there are IRC channels, mailing lists, bug trackers and a pile of humans to discuss issues with, and if they agree it's a bug, they can fix it for everyone
That's not to say it's impossible to sneak an attack past a project dedicated to stopping such attacks, but it's so much more work compared to attacking someone who executes whatever a remote endpoint tells them.
There have been many documented cases of supply chain attacks of various degrees of sophistication. Some of them successful, some of them almost successful. May I remind you that the recent xz vulnerability was discovered by a single dev by mere chance.
As an end user it is nearly impossible to guard against such an attack.
It can be problematic to run something like `curl foo.com | bash` without inspecting the script first. But even here it makes a difference whether you are curling from a project like brew.sh that delivers such a script from a TLS-protected endpoint, or some random script you found in a gist.
Same goes for output from an LLM. You can simply investigate the generated command before executing it. Another strategy might be to only generate the parameters and just pass those to the ffmpeg executable.
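The parameters-only strategy can be sketched roughly like this (a minimal illustration, assuming a hypothetical `llm_params` string returned by the model; `ffmpeg_argv` is a made-up helper name):

```python
import shlex

def ffmpeg_argv(llm_params: str) -> list[str]:
    """Build an argv list for ffmpeg from LLM-suggested parameters.

    shlex.split respects quoting but performs no shell expansion, so
    metacharacters like ;, | or $() reach ffmpeg as literal arguments
    instead of being interpreted by a shell.
    """
    return ["ffmpeg", *shlex.split(llm_params)]

# Hypothetical LLM output: flags and filenames only, never a full command line
argv = ffmpeg_argv("-i input.mp4 -vcodec libx264 -crf 23 output.mp4")
# subprocess.run(argv, check=True)  # argv list, shell=False: no shell is involved
```

An attempted injection like `"-i clip.mp4; rm -rf ~"` just becomes the literal arguments `clip.mp4;`, `rm`, `-rf`, `~`, which ffmpeg rejects as garbage rather than a shell executing them. That only narrows the blast radius, of course; it doesn't make the generated parameters correct.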
> Same goes for output from an LLM
This is the crux of our disagreement. It does not go the same. You have no idea what the LLM is going to write, neither does the LLM, nor the people who created the LLM.
At no point did the people who created the LLM actually think about your use-case, nor did the LLM, and there is no promise of anything you ask getting a correct, or even consistent answer. The creators don't know how the answers got there, and can't easily fix them if they're wrong. You'd be a fool to trust it for anything other than dog and pony shows.
I'd say "discovered by a single dev" is not just mere chance, but the system working as designed.
- Everyone was getting the same package, so one person could warn others
- There were well-established procedures for code updates (Andres Freund _knew_ that xz was recently updated, and could go back and forth in previous versions)
- There was access to all steps of the process - git repo with commit history, binary releases, build scripts, extensive version info
None of this is true for LLMs (and only some of it is true for curl|bash, sometimes) - it's an opaque binary service for which you have no version info, no history, and everyone gets a highly customized output. Moreover, there have been documented examples of LLMs giving flawed code with security issues, and (unlike Debian!) everyone basically says "that person got unlucky, this won't happen to me" and keeps using the very same version.
So please don't compare traditional large open-source projects with LLMs - their risk profiles are very different, and LLMs are way more dangerous.
There's a difference between getting code from a repo and from an AI generator, though. We can apply an ancient thing known as "reputation" to the former. Not yet to the latter.