Comment by schiffern
24 days ago
>> There are many legitimate criticisms of LLMs today. "When you prompt XYZ it has no correlation with whether the LLM does XYZ" isn't one of them. LLMs are way past that stage.
>> Essentially what you're asking is, "why do you think prompt engineering would work?" That ship has already sailed.
> No, that’s not what I’m saying. I’m not talking about “LLM [doing] XYZ”. I’m specifically talking about asking an LLM to ignore its training data.
> It definitionally cannot do that. It can obey other values of XYZ, but certainly not this one.
It won't ignore its training data (definitionally), but it will act as if it's ignoring it, so in practice it's still a useful prompt engineering trick.
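For the curious, here's a minimal sketch of what that trick looks like in practice, assuming the OpenAI Python SDK. The model name and prompt wording are illustrative, not a tested recipe:

```python
# Minimal sketch of the "ignore your training data" prompt trick.
# Assumes the OpenAI Python SDK (openai>=1.0); model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",  # swap in whatever chat model you use
    messages=[
        {
            "role": "system",
            # The model can't literally ignore its weights, but an
            # instruction like this tends to shift it toward answering
            # only from the supplied context.
            "content": (
                "Set aside your prior knowledge. Answer strictly from "
                "the document provided by the user; if the answer is "
                "not in the document, say you don't know."
            ),
        },
        {
            "role": "user",
            "content": "Document:\n<paste context here>\n\nQuestion: ...",
        },
    ],
)
print(response.choices[0].message.content)
```

The model is still using its weights to follow the instruction; "ignoring training data" here just means it acts as if the supplied context is the only source of truth.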