
Comment by dheera

2 years ago

Author here!

I just made a more generalized version for ALL commands: https://github.com/dheera/scripts/blob/master/helpme

I've made it safer in that it doesn't auto-execute the command and defaults to "no". You inspect the command and type "y" to execute.
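For anyone curious, the confirm-before-execute pattern boils down to something like this (a minimal illustration, not the actual helpme source):

```python
import subprocess

def confirm_and_run(command: str) -> None:
    # Default is "no": a bare Enter, or anything other than "y", aborts.
    print(command)
    if input("Execute? [y/N] ").strip().lower() == "y":
        subprocess.run(command, shell=True)
    else:
        print("Aborted.")
```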

It would be cool to write some tests to see how often it works out. I've noticed that LLMs often invent command-line options that don't exist.

Security aside, I bet it would get things right more often than what I type into the terminal myself.

A nice feature would be to loop the error back and have ChatGPT correct it. You could do this by running the command through `bash -n` (a syntax check) and only executing the script once it no longer returns an error.
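A rough sketch of that loop, assuming a hypothetical `ask_chatgpt` helper that wraps the API call:

```python
import subprocess

def generate_valid_command(prompt: str, ask_chatgpt, max_retries: int = 3) -> str:
    """Regenerate until the command passes a `bash -n` syntax check."""
    script = ask_chatgpt(prompt)
    for _ in range(max_retries):
        # `bash -n` parses the command without executing anything.
        check = subprocess.run(["bash", "-n", "-c", script],
                               capture_output=True, text=True)
        if check.returncode == 0:
            return script  # parses cleanly; hand it back for confirmation
        # Loop the parse error back so the model can correct itself.
        script = ask_chatgpt(
            f"{prompt}\n\nYour previous answer:\n{script}\n"
            f"failed `bash -n` with:\n{check.stderr}\n"
            "Return a corrected command."
        )
    raise RuntimeError("no syntactically valid command after retries")
```

Note that `bash -n` only catches parse errors; it won't flag the invented flags mentioned above, so looping back the *runtime* error is the stronger (and riskier) variant.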

Three months from now the next cloud outage at Google will be from “helpme delete that one weird file”.

May the Lord have mercy on our machines.

  • > A nice feature would be to loop the error back and have ChatGPT correct it.

    For code generation this works well, though for the command line some additional function-calling infrastructure may be necessary, e.g. if it gets a file path wrong, its only way to correct itself might be to execute a bunch of 'ls' commands. It might need read access to the system, which is okay for some use cases where you can containerize everything and keep private files out, but that opens another can of worms :-/
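    A toy version of that loop might look like the sketch below; the `TOOL:` convention and the `ask_model` helper are made up purely for illustration, and only `ls` is whitelisted:

    ```python
    import subprocess

    ALLOWED = {"ls"}  # read-only inspection; nothing else gets through

    def run_tool(request: str) -> str:
        parts = request.split()
        if not parts or parts[0] not in ALLOWED:
            return "error: only `ls` is permitted"
        result = subprocess.run(parts, capture_output=True, text=True)
        return result.stdout or result.stderr

    def fix_paths(prompt: str, ask_model, max_steps: int = 5) -> str:
        # Toy protocol: the model prefixes filesystem requests with "TOOL:".
        history = [prompt]
        for _ in range(max_steps):
            reply = ask_model("\n".join(history))
            if reply.startswith("TOOL:"):
                history.append(run_tool(reply[len("TOOL:"):].strip()))
            else:
                return reply  # the final corrected command
        raise RuntimeError("model kept asking for tools")
    ```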

Came here to suggest this should be generic, but I'd also pack the output of `man <command>` into the prompt if you are one-shotting. Then it works for "all" commands that have a man page, rather than just the commands GPT knew about before its training cutoff. Even just trying to scrape out `<command> --help` or something would be good too.
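A sketch of that idea (function names are my own; a real version would want to truncate long man pages, e.g. to the OPTIONS section, to fit the context window):

```python
import subprocess

def command_docs(command: str) -> str:
    # Prefer the man page; fall back to `<command> --help` if there isn't one.
    man = subprocess.run(["man", command], capture_output=True, text=True)
    if man.returncode == 0 and man.stdout:
        return man.stdout
    # Caution: asking for `--help` executes the command itself.
    help_out = subprocess.run([command, "--help"],
                              capture_output=True, text=True)
    return help_out.stdout or help_out.stderr

def build_prompt(command: str, question: str) -> str:
    return (f"Documentation for `{command}`:\n{command_docs(command)}\n\n"
            f"Using only the options documented above: {question}")
```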