Comment by ggus

15 days ago

It's very hard to have ChatGPT et al tell me that an idea I had isn't good.

I have to tailor my prompts to curb that bias, adding a strong sense of doubt to every idea I present, just to see if it stops being so condescending.
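As an illustration of that kind of prompt tailoring, here is a minimal sketch of a wrapper that pre-loads doubt before the idea itself. The function name and wording are hypothetical, not a tested recipe:

```python
# Hypothetical sketch of the prompt tailoring described above:
# frame the idea with doubt so the model critiques instead of flattering.

def skeptical_prompt(idea: str) -> str:
    """Wrap an idea in instructions that ask for criticism first."""
    return (
        "I suspect the following idea is flawed. "
        "List its three strongest weaknesses before saying anything positive, "
        "and name at least one better alternative approach.\n\n"
        f"Idea: {idea}"
    )

print(skeptical_prompt("cache every API response in a global dict"))
```

Whether this actually curbs the sycophancy varies by model; it mostly shifts the default tone rather than fixing the underlying judgment problem.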

Maybe "idea evaluation" is just a bad use case for LLMs?

  • Most of the time the idea is implicit: I'm trying to solve a problem with certain tools, and there are better tools or even better approaches.

    ChatGPT (and Copilot and Gemini) instead all tell me "Love the intent here — this will definitely help. Let's flesh out your implementation"...

    • Qualitative judgment in general is probably not a great thing to request from LLMs. They don't really have a concept of "better" or "worse," or the means to evaluate alternative solutions to a problem.