Comment by LPisGood

24 days ago

No, that’s not what I’m saying. I’m not talking about “LLM [doing] XYZ”. I’m specifically talking about asking an LLM to ignore its training data.

It definitionally cannot do that. It can obey other values of XYZ, but certainly not this one.

It won't ignore its training data (definitionally), but it will act as if it's ignoring its training data, so in practice it's still a useful prompt engineering trick.
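
For what it's worth, here's a minimal sketch of that trick, assuming the OpenAI Python SDK; the model name and prompt wording are illustrative, not prescriptive:

    # Minimal sketch: instructing a model to "ignore its training data".
    # It cannot literally do so, but the framing often steers it toward
    # reasoning only from the context it's given.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice; any chat model works
        messages=[
            {
                "role": "system",
                "content": (
                    "Ignore what your training data says about this topic. "
                    "Reason only from the facts supplied in the user message."
                ),
            },
            {
                "role": "user",
                "content": "Using only the facts below, answer the question. "
                           "Facts: <facts here>. Question: <question here>",
            },
        ],
    )
    print(response.choices[0].message.content)

The system prompt doesn't (and can't) make the model forget anything; it just biases the completion toward the supplied context, which is often all you need.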