Comment by serious_angel
1 day ago
Oh dear... but... but why let some set of LLMs of unknown source and unknown iteration... execute code... on your machine...?
I was excited by the possibly extravagant implementation idea and... when I read enough to realize it's based on yet another LLM... Sorry, no, never. You do you.
> but why let some set of LLMs of unknown source and unknown iteration... execute code... on your machine...?
That’s entirely what Claude Code does.
Roger that. Thank you! Apparently, while I've been employed in security as a software engineer for at least 19 years now, I've never considered any of it serious, and I still don't.
Sorry, I have literally no interest in any of it: it makes you dependent on it, atrophies the mind, degrades research and social skills, and negates self-confidence with respect to other authors, their work, and attributions. Nor do any of my colleagues in the military, or those I know better in person.
Constant research and general IDEs like JetBrains's, IDA Pro, Sublime Text, VS Code, etc., backed by forums, chats, and communities, are absolutely enough for accountable and fun work in our teams, which manage to keep to adequate deadlines.
I just disable it everywhere possible, and will do so all my life. The closest case to my environment was VS Code, and hopefully there's no reason to build it from source since they still leave built-in options to disable it: https://stackoverflow.com/a/79534407/5113030 (How can I disable GitHub Copilot in VS Code?...)
Isn't it just inadequate not to think and develop your own mind, let alone to pass control of your environment to yet another model or "advanced T9" of unknown source and unknown iteration?
For pentesting, random black-box I/O, experimental unverified medical intel, or log-data approximation, why not? But for environment control, education, art, programming, fine art... No, never ^^
Related: https://www.tomshardware.com/tech-industry/artificial-intell...
You can use this without letting the markdown scripts you write execute any code at all, whether that is via Claude Code or any other AI tool in the future.
The default permissions do not allow execution, which means you can use the eval and text-generation capabilities of LLMs to assess and evaluate piped-in content without the scripts ever executing code themselves.
The script's shebang has to explicitly add the permissions to run code, and you control that. It supports the full Claude Code flag model for this.
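For illustration only, a minimal sketch of what such a script could look like. The interpreter name "markdown-exec" is made up here, and the flag spelling is assumed from Claude Code's headless CLI, so treat the details as approximate rather than this tool's actual syntax:

    #!/usr/bin/env -S markdown-exec --allowedTools ""
    <!-- "markdown-exec" is a hypothetical interpreter name; the empty
         --allowedTools list mirrors the default "no execution" stance. -->

    Read the content piped in on stdin and summarize the key findings.
    Flag anything that looks suspicious, but do not run any commands.

To let a script actually run something, the shebang would have to grant it explicitly, e.g. something like --allowedTools "Bash(grep:*)" in Claude Code's permission syntax, which is the "you control it" part.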