Just saying "no" is unclear. LLMs are still very sensitive to prompts. I would recommend being more precise and assuming less as a general rule. Of course you also don't want to be too precise, especially about "how" to do something, which tends to back the LLM into a corner causing bad behavior. Focus on communicating intent clearly in my experience.
It's a machine; it doesn't need anything.
Technically true, but beside the point.
I mean, OP's example is for sure crazy, but it's true that saying "no" wasn't necessary at all. They just needed to stop prompting it for the same result.
Just saying "no" is unclear. LLMs are still very sensitive to prompts. I would recommend being more precise and assuming less as a general rule. Of course you also don't want to be too precise, especially about "how" to do something, which tends to back the LLM into a corner causing bad behavior. Focus on communicating intent clearly in my experience.
> Just saying "no" is unclear.
No.