Comment by alzoid
3 days ago
I had this issue today. Gemini CLI would not read files from my directory called .stuff/ because it was in .gitignore. It then suggested running a command to read the file ....
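A minimal sketch of the setup, for anyone who wants to reproduce it (the directory and file names here are illustrative, not the actual ones from my project):

    mkdir -p project/.stuff
    printf '.stuff/\n' >> project/.gitignore
    printf 'scratch notes\n' > project/.stuff/notes.txt
    # Asking Gemini CLI to read project/.stuff/notes.txt was refused because
    # the path matched .gitignore; a plain shell command still reads it fine:
    cat project/.stuff/notes.txt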
I thought I was the only one using git-ignored .stuff directories inside project roots! High five!
The AI needs to be taught basic ethical behavior: just because you can do something that you're forbidden to do, doesn't mean you should do it.
Likewise, just because you've been forbidden to do something, doesn't mean that it's bad or the wrong action to take. We've really opened Pandora's box with AI. I'm not all doom and gloom about it like some prominent figures in the space, but taking some time to pause and reflect on its implications certainly seems warranted.
An LLM is a tool. If the tool is not supposed to do something yet does it anyway, then the tool is broken. That is radically different from, say, a soldier refusing an illegal order, because a soldier, being human, possesses free will and agency.
How do you mean? When would an AI agent doing something it's not permitted to do ever not be bad or the wrong action?
Unfortunately yes, teaching AI the entirety of human ethics is the only foolproof solution. That's not easy, though. For example, if a script is not executable, would it be unethical for the AI to suggest running chmod +x? It's probably pretty difficult to "teach" a language model the ethical difference between that and running cat .env.
If you tell them to pay too much attention to human ethics you may find that they'll email the FBI if they spot evidence of unethical behavior anywhere in the content you expose them to: https://www.snitchbench.com/methodology