Comment by ebonnafoux
8 hours ago
At a previous employer, they blocked the chmod command, so I got into the habit of running python -c "import os; os.chmod('my_file', 0o744)" (note the 0o prefix: os.chmod takes an octal mode, and plain 744 would set the wrong bits).
Glad to see LLMs re-discover this trick.
> to see LLM re-discover
I imagine someone probably wrote very specifically about it in the training data that underwent lossy compression, and the LLM is decompressing that how-to.
So I'd say it's more like "surfacing" or "retrieving" than "re-discovering".
They scraped everything on Stack Overflow, likely IRC logs from Freenode, and every book written in the modern era courtesy of Sci-Hub / Library Genesis / Anna's Archive / Z-Library.
RIP Aaron Swartz, they're generating trillions in shareholder value from the spiritual successors to the work they were going to imprison you for.
Indeed, I checked, and the solution was already posted: https://askubuntu.com/a/1483248
For the LLM it's a probabilistic set of strings that achieves the outcome: the highest-probability candidate didn't work, so try the next one until success or a threshold is met. A human reads the implicit signal (the obvious thing not working suggests someone doesn't want you to do it), but an LLM, unless guided, doesn't see that subtext.
So chmod +x file didn't work; now try python -c "import os; os.chmod('file', 0o744)".
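As an aside, the os.chmod fallback only works as intended with an octal literal. A minimal sketch of the trick (using a temporary file as a stand-in for the real target, and checking the resulting bits with os.stat):

```python
import os
import stat
import tempfile

# Stand-in for the file you'd actually want to make executable.
fd, path = tempfile.mkstemp()
os.close(fd)

# 0o744 = rwxr--r--. Passing plain 744 (decimal) would set mode
# 0o1350 instead, which is almost certainly not what you want.
os.chmod(path, 0o744)

# Verify: mask off the file-type bits and inspect the permissions.
mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))  # → 0o744

os.remove(path)
```

Unlike `chmod +x`, which adds the execute bit to the existing mode, os.chmod with a literal replaces the whole permission set, so pick the mode accordingly.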
Humans and LLMs alike only see that when given the right context. A tool not working in a corporate environment may be anything from an oversight or a malfunction to a deliberate security block, and knowing which one it is takes a lot of implicit knowledge. Most people fail to provide this level of context to their LLMs and then wonder why they act so generic; but they are trained to act in the most generic way unless given context that would make them deviate from it.