Comment by post-it
9 hours ago
> Acting like I hold the opposite position as I truly hold can help sometimes as well.
I find this helps a lot. So does taking a step back from my actual question. Like if there's a mysterious sound coming from my car and I think it might be the coolant pump, I just describe the sound, I don't mention the pump. If the AI then independently mentions the pump, there's a good chance I'm on the right track.
Being familiar with the scientific method, and techniques for blinding studies, helps a lot, because this is a lot like trying to not influence study participants.
A lot of getting good mileage out of LLMs is prompting them to behave like they are blind and can only base their outputs on what is in front of them. Maintain an emic stance.
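To make the idea concrete, here is a minimal sketch of the blinding technique described above. The prompt strings, the symptom description, and the "coolant pump" hypothesis are all illustrative inventions, not any particular API or real diagnosis:

```python
# Hypothetical sketch of "blinded" prompting: describe only the
# observable symptom and never name your suspected cause, so the
# model can't simply agree with you.
hypothesis = "coolant pump"  # what we privately suspect (never sent)

# Leading prompt: names the hypothesis, inviting agreement.
leading_prompt = (
    "My car makes a grinding whine near the front. "
    f"Could it be the {hypothesis}?"
)

# Blinded prompt: symptoms only, open-ended question.
blinded_prompt = (
    "My car makes a grinding whine near the front of the engine bay, "
    "loudest on cold starts and fading as it warms up. "
    "What are the most likely causes?"
)

# Sanity check: the hypothesis leaks into the leading prompt
# but stays out of the blinded one.
assert hypothesis in leading_prompt
assert hypothesis not in blinded_prompt
```

If the model's answer to the blinded prompt independently brings up the pump, that is weak confirmation; an answer to the leading prompt proves little, since the model was handed the conclusion.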