Comment by tavavex

2 hours ago

That only works if:

1. You assume that your LLM of choice is perfect and impartial on every given topic, ever.

2. You assume that your prompt doesn't interfere with said impartiality. What you wrote may seem neutral at first glance, but from my perspective, wording like yours would probably prime the model to pick apart absolutely anything, finding flaws that aren't really there (or making massive stretches), because you already presuppose that whatever you give it was written with intent to lie and misrepresent. The wording heavily implies that the text definitely uses "persuasion tactics" or "emotional language", or that it downplays/overstates something - you just need the model to find all of that. So it will try to return anything that supports that implication (see the sketch below).
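
To make the difference concrete, here's a minimal hypothetical sketch - neither prompt comes from this thread, both are made-up examples - contrasting a presuppositional framing with one that leaves the model an out:

```python
# Hypothetical prompt templates; {text} stands in for the document under review.

# Presuppositional framing: asserts that flaws exist and asks only for a list,
# so the model is pushed to "find" something even in neutral text.
leading_prompt = (
    "Identify every persuasion tactic, piece of emotional language, and "
    "downplayed or overstated claim in the following text:\n\n{text}"
)

# Neutral framing: makes "nothing notable" a valid answer, which reduces the
# pressure to stretch for flaws that aren't there.
neutral_prompt = (
    "Does the following text use persuasion tactics, emotional language, or "
    "downplayed/overstated claims? Quote each instance you find, or state "
    "that you found none.\n\n{text}"
)

print(leading_prompt.format(text="<document under review>"))
```

The point isn't these exact strings - it's that the first template never gives the model a sanctioned way to answer "there's nothing here", so it will manufacture something.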