Comment by Chloebaker

14 days ago

Good that someone is writing about ChatGPT-induced psychosis, because with the way it interacts with people's minds, there's a kind of mass delusion forming that nobody seems to be talking about. AI like ChatGPT function as remarkably agreeable reflections, consistently flattering our egos and romanticizing our ideas. They make our thoughts feel profound and significant, as though we're perpetually on the verge of rare insight. But the concerning part is that rather than providing the clarity of true reflection, they often create a distorted mirror that merely conforms to our expectations.

It's very hard to have ChatGPT et al tell me that an idea I had isn't good.

I have to tailor my prompts to curb the bias, adding a strong note of doubt to every idea, to see if the thing stops being so condescending.

  • Maybe "idea evaluation" is just a bad use case for LLMs?

    • Most times the idea is implied. I'm trying to solve a problem with some tools, and there are better tools or even better approaches.

ChatGPT (and Copilot and Gemini) instead all tell me "Love the intent here — this will definitely help. Let's flesh out your implementation"...

> that nobody seems to be talking about.

I mean, maybe it's just where I peruse, but I've seen a ton of articles about it lately.