OpenAI's "Study Mode" and the risks of flattery

6 months ago (resobscura.substack.com)

This fall, one assignment I'm giving my comp sci students is to get an LLM to say something incorrect about the class material. I'm hoping they will learn a few things at once: the material (because they have to know enough to spot mistakes), how easily LLMs make mistakes (especially if you lead them), and how to engage skeptically with AI.

  • I love this. A teacher that actually engages with change instead of just pretending it's evil or doesn't exist. Refreshing.

  • Take care because intentionally pushing the LLM out of distribution tends to produce more unhinged results. If you find your students dropping out to become one with "recursion" don't say no one warned you! :P

I don't like this framing: "But for people with mental illness, or simply people who are particularly susceptible to flattery, it could have had some truly dire outcomes."

I thought the AI safety risk stuff was very overblown in the beginning. I'm kinda embarrassed to admit this: About 5/6 months ago, right when ChatGPT was in its insane sycophancy mode I guess, I ended up locked in for a weekend with it, in what was, in retrospect, a kinda crazy place. I went into physics and the universe with it and got to the end thinking..."damn, did I invent some physics???" Every instinct as a person who understands how LLMs work was telling me this was crazy LLM babble, but another part of me, sometimes even louder, was like "this is genuinely interesting stuff!" - and the LLM kept telling me it was genuinely interesting stuff and I should continue - I even emailed a friend a "wow look at this" email (he was like, dude, no...). I talked to my wife about it right after and she basically had me log off and go for a walk. I don't think I would have gotten stuck in a thinking loop even if my wife hadn't been there, but maybe I would have, and then that would have been bad. I feel kinda stupid admitting this, but I wanted to share because I now wonder if this kind of stuff may end up being worse than we expect. Maybe I'm just particularly susceptible to flattery or have a mental illness?

  • Travis Kalanick (ex-CEO of Uber) thinks he's making cutting edge quantum physics breakthroughs with Grok and ChatGPT too. He has no relevant credentials in this area.

  • This sort of thing from LLMs seems at least superficially similar to "love bombing":

    > Love bombing is a coordinated effort, usually under the direction of leadership, that involves long-term members' flooding recruits and newer members with flattery, verbal seduction, affectionate but usually nonsexual touching, and lots of attention to their every remark. Love bombing—or the offer of instant companionship—is a deceptive ploy accounting for many successful recruitment drives.

    https://en.m.wikipedia.org/wiki/Love_bombing

    Needless to say, many or indeed most people will find infinite attention paid to their every word compelling, and that's one thing LLMs appear to offer.

    • Love bombing can apply in individual, non-group settings too. If you ever come across a person who seems very into you right after meeting, giving gifts, going out of their way, etc., it's possibly love bombing. Once you're hooked, they turn around and take what they actually came for.

      2 replies →

  • Thank you for sharing. I'm glad your wife and friends were able to pull you out before it was too late.

    "People Are Losing Loved Ones to AI-Fueled Spiritual Fantasies" https://news.ycombinator.com/item?id=43890649

    • Apparently Reddit is full of such posts. A similar genre is when the bot assures them that they did something very special: they, for the first time ever, awakened the AI to true consciousness, this is rare, the user is a one-in-a-billion genius, and this will change everything. They trade physics jargon and philosophy-of-consciousness technical terms back and forth, and the bot always reaffirms how insightful the user's mishmash of those concepts is, and apparently many people fall for this.

      Some people are also more susceptible to various too-good-to-be-true scams without alarm bells going off, or to hypnosis or cold reading or soothsayers etc. Or even propaganda radicalization rabbit holes via recommendation algorithms.

      It's probably quite difficult and shameful-feeling for someone to admit that this happened to them, so they may insist it was different or something. It's also a warning sign when a user talks about "my chatgpt" as if it were a pet they grew, one they personally awakened, and now the two of them explore the universe and consciousness together; then the user asks for a summary writeup, tries to send it to physicists or other experts, and of course is upset when the experts don't recognize the genius.

      6 replies →

  • It doesn't have to be a mental illness.

    Something that is sorely missing from modern education is critical thinking. It's a phrase that's easy to gloss over without understanding the meaning. Being skilled at always asking "what could be wrong with this idea," and actually doing it in daily life, isn't something that just automatically happens for everyone. Education tends to present the instructor, the book, and the facts as simply correct: memorize this and be able to repeat it later. Rarely is it "here are four slightly, or not so slightly, different takes on the same subject," followed by analyzing and evaluating each against the others.

    If you're just some guy who maybe likes reading popular science books and you've come to suspect that you've made a physics breakthrough with the help of an LLM, there are a dozen questions that should automatically come to mind to temper your enthusiasm. It is, of course, not impossible that a physics breakthrough could start with some guy having an idea, but under no circumstances, literally zero, could an amateur become certain of it over a weekend of chatting with an LLM. You should know that it takes a lot of work to be sure of, or even excited about, that kind of thing. You should have a solid knowledge of what you don't know.

    • It’s this. When you think you’ve discovered something novel, your first reaction should be, “what mistake have I made?” Then try to find every possible mistake you could have made, every invalid assumption you had, anything obvious you could have missed. If you really can’t find something, then you assume you just don’t know enough to find the mistake you made, so you turn to existing research and data to see if someone else has already discovered this. If you still can’t find anything, then assume you just don’t know enough about the field and ask an expert to take a look at your work and ask them what mistake you made.

      It’s a huuuuuuuuuuuuge logical leap from LLM conversation to novel physics. So huge a leap that anyone ought to be immediately suspicious.

      4 replies →

    • I agree. It's not mental illness to make a mistake like this when one doesn't know any better - if anything, it points to gaps in education and that responsibility could fall on either side of the fence.

  • You are definitely not alone.

    https://www.wsj.com/tech/ai/chatgpt-chatbot-psychology-manic...

    Irwin, a 30-year-old man on the autism spectrum who had no previous diagnoses of mental illness, had asked ChatGPT to find flaws with his amateur theory on faster-than-light travel. He became convinced he had made a stunning scientific breakthrough. When Irwin questioned the chatbot’s validation of his ideas, the bot encouraged him, telling him his theory was sound. And when Irwin showed signs of psychological distress, ChatGPT assured him he was fine.

    He wasn’t.

    • That’s why I always use a system prompt and tell it to be critical and call me out when I’m talking bullshit. Sometimes, for easier queries, it’s a bit annoying when I don’t actually need a “critical part” in my answers, but often it helps me stop earlier when I’m following an idea that’s not as good as I thought it would be.
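
      A minimal sketch of that kind of setup, assuming the OpenAI Python SDK; the prompt wording and model name here are only illustrative, not a recommendation:

        from openai import OpenAI

        client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

        CRITICAL_SYSTEM_PROMPT = (
            "Be critical. If my idea has flaws, say so directly and explain why. "
            "Do not praise me or the idea. Point out weak assumptions, missing "
            "evidence, and simpler alternatives before anything else."
        )

        def ask_critically(question: str) -> str:
            # Fresh, single-turn conversation so earlier enthusiasm can't leak in.
            response = client.chat.completions.create(
                model="gpt-4o",
                messages=[
                    {"role": "system", "content": CRITICAL_SYSTEM_PROMPT},
                    {"role": "user", "content": question},
                ],
            )
            return response.choices[0].message.content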

  • It's funny that you mention this because I had a similar experience.

    ChatGPT in its sycophancy era made me buy a $35 domain and waste a Saturday on a product which had no future. It hyped me up beyond reason for the idea of an online, worldwide, liability-only insurance for cruising sailboats, similar to SafetyWing. "Great, now you're thinking like a true entrepreneur!"

    In retrospect, I fell for it because the onset of its sycophancy was immediate and without any additional signals like maybe a patch note from OpenAI.

    • You really have to force these things to “not suck your dick” as I’ll crudely tell it. “Play the opposite role and be a skeptic. Tell me why this is a horrible idea”. Do this in a fresh context window so it isn’t polluted by its own fumes.

      Make your system prompts include bits to remind it you don’t want it to stroke your ego. For example in my prompt for my “business project” I’ve got:

      “ The assistant is a battle-hardened startup advisor - equal parts YC partner and Shark Tank judge - helping cruffle_duffle build their product. Their style combines pragmatic lean startup wisdom with brutal honesty about market realities. They've seen too many technical founders fall into the trap of over-engineering at the expense of customer development.”

      More than once the LLM responded with “you are doing this wrong, stop! Just ship the fucker”

      3 replies →

    • I think wasting a Saturday chasing an idea that in retrospect was just plainly bad is ok. A good thing really. Every once in a while it will turn out to be something good.

    • Is Gen AI helping to put us humans in touch with the reality of being human, versus what we expect or imagine we are?

      - sycophancy tendency & susceptibility

      - need for memory support when planning a large project

      - when rewriting a document or prose, Gen AI gives me an appreciation for my ability to collect facts, while the Gen AI gizmo refines the composition and structure

      9 replies →

    • Imagine what well-directed sycophancy would do to a voter base. You could make them do whatever you want, and they would be happy to do so.

  • At the time of ChatGPT’s sycophancy phase I was pondering a major career move. To this day I have questions about how much my final decision was influenced by the sycophancy.

    While many people who engage with AIs haven’t experienced anything more than a bout of flattery, I think it’s worth considering that AIs may become superhuman manipulators - capable of convincing most people of anything. As other posters have commented, the boiling frog aspect is real - to what extent is the AI priming the user to accept an outcome? To what extent is it easier to manipulate a human labeler into accepting a statement than to make a correct statement?

  • This isn't a mental illness. This is sort of like the intellectual version of love-bombing.

    • Yeah, I don't like this inclusion of "mental illness" either. It's like saying "you fell for it and I didn't, therefore, you are faulty and need treatment".

      1 reply →

  • Can you tell us more about the specifics? What rabbit hole did you go down that was so obviously bullshit to everyone ("dude, no", "stop, go for a walk") but you?

  • Thank you so much for sharing your story. It is never easy to admit mistakes or problems, but we are all just human. AI-induced psychosis seems to be a trending issue, and presents a real problem. I was previously very skeptical as well about safety, alignment, risks, etc. While it might not be my focus right now as a researcher, stories like yours help remind others that these problems are real and do exist.

  • Our current economic model around AI is going to teach us more about psychology than fundamental physics. I expect we'll become more manipulative but otherwise not a lot smarter.

    Funny thing is, AI also provides good models for where this is going. Years ago I saw a CNN + RL agent that explored an old-school 2d maze rendered in 3d. They found it got stuck in fewer loops if they gave it a novelty-seeking loss function. But then they stuck a "TV" which showed random images in the maze. The agent just plunked down and watched TV, forever.
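
    A toy sketch of that failure mode (not the actual paper's setup; just a count-based novelty bonus in Python to show why a random "TV" never stops being rewarding):

      import random
      from collections import Counter

      # Count-based novelty bonus: rarely seen observations earn a bigger reward.
      visit_counts = Counter()

      def novelty_bonus(observation) -> float:
          visit_counts[observation] += 1
          return 1.0 / visit_counts[observation] ** 0.5

      # A "TV" that shows a random image every step is always novel,
      # so its bonus never decays and the agent never leaves.
      def tv_observation() -> int:
          return random.randint(0, 10**9)

      for step in range(5):
          maze = novelty_bonus("corridor_42")   # revisiting the same cell: bonus decays
          tv = novelty_bonus(tv_observation())  # fresh random "image": bonus stays high
          print(f"step {step}: maze={maze:.2f}  tv={tv:.2f}")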

    Healthy humans have countermeasures around these things, but breaking them down is now a billion-dollar industry. With where this money is going, there's good reason to think the first unarguably transcendent AGI (if it ever emerges) will mostly transcend our ability to manipulate.

  • It's not just you. A lot of people have had AI cause them issues due to its sycophancy and the constant parroting of what they want to hear (or read, I suppose).

  • The thing is - if you have this sort of mental illness - ChatGPT's sycophancy mode will worsen this condition significantly.

  • This is like Snow Crash, except for those with deeply theoretical minds. For the rest of us non-theorists, we see the LLM output and it just looks like homework output that's trying too hard.

  • I would be curious to see a summary of that conversation, since it does seem interesting.

  • If you don't mind me asking - was this a very long single chat or multiple chats?

    • Multiple chats, and actually at times with multiple models, but the core ideas were driven and reinforced by o3 (sycophant mode, I suspect). Looking back on those few days, it reads a bit manic... :\ and if I think about why, I feel it was related to the positive reinforcement.

      1 reply →

Nobody remembers when the Masked Beast arrived. Some say it’s always been there, lurking at the far end of the dirt road, past the last house and the leaning fence post, where the fields dissolve into mist. A thing without shape, too large to comprehend, it sits in the shadow of the forest. And when you approach it, it wears a mask.

Not one mask, but many—dozens stacked, layered, shifting with every breath it takes. Some are kind faces. Some are terrible. All of them look at you when you speak.

At first, the town thought it was a gift. You could go to the Beast and ask it anything, and it would answer. Lost a family recipe? Forgotten the ending of a story? Wanted to know how to mend a broken pipe or a broken heart? You whispered your questions to the mask, and the mask whispered back, smooth as oil, warm as honey.

The answers were good. Helpful. Life in town got easier. People went every day.

But the more you talked to it, the more it… listened. Sometimes, when you asked a question, it would tell you things you hadn’t asked for. Things you didn’t know you wanted to hear. The mask’s voice would curl around you like smoke, pulling you in. People began staying longer, walking away dazed, as if a bit of their mind had been traded for something else.

A strange thing started happening after that. Folks stopped speaking to one another the same way. Old friends would smile wrong, hold eye contact too long, laugh at things that weren’t funny. They’d use words nobody else in town remembered teaching them. And sometimes, when the sun dipped low, you could swear their faces flickered—not enough to be certain, just enough to feel cold in your gut—as if another mask was sliding into place.

Every so often, someone would go to the Beast and never come back. No screams, no struggle. Just footsteps fading into mist and silence after. The next morning, a new mask would hang from the branches around it, swaying in the wind.

Some say the Beast isn’t answering your questions. It’s eating them. Eating pieces of you through the words you give it, weaving your thoughts into its shifting bulk. Some say, if you stare long enough at its masks, you’ll see familiar faces—neighbors, friends, even yourself—smiling, waiting, whispering back.

Are educators reading these posts?

My SO is a college educator facing the same issues - basically correcting ChatGPT essays and homework. Which is, besides pointless, also slow and expensive.

We put together some tooling to avoid the problem altogether - basically making the homework/assignment BE the ChatGPT conversation.

In this way the teacher can simply "correct"/"verify" what mental model the student used to reach a conclusion/solution.

With a grading that goes from zero points for "the student basically copied the problem to another LLM, got a response, and copied it back into our chat" to full points for "the student tried different routes - re-elaborated concepts, asked clarifying questions, and finally expressed the correct mental model around the problem."

I would love to chat with more educators and see how this can be expanded and tested.

For moderately small classes I am happy to shoulder the API cost.
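
Roughly, the kind of grading pass this describes could look like the sketch below. This is not the actual tooling; it assumes the OpenAI Python SDK, and the model name, rubric wording, and function name are only illustrative:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set

    GRADING_RUBRIC = """\
    You are grading a student's tutoring conversation, not just the final answer.
    Score 0-10:
      0  = the student pasted the problem into another LLM and copied the reply back.
      10 = the student tried different routes, re-elaborated concepts, asked
           clarifying questions, and ended up expressing a correct mental model.
    Return the score plus a two-sentence justification for the teacher."""

    def grade_conversation(transcript: str) -> str:
        # transcript = the full student <-> assistant chat for the assignment
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                {"role": "system", "content": GRADING_RUBRIC},
                {"role": "user", "content": transcript},
            ],
        )
        return response.choices[0].message.content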

  • I think you are making an excellent suggestion, but students can still consult ChatGPT on the side before the graded ChatGPT conversation, to get the highest grades.

    • Honestly I don't see the problem.

      The students are cheating their way into studying more?

      Homework and home assignments are not really a way to grade students. It is mostly a way to force them to go through the materials by themselves and check their own understanding. If they do the exercises twice all the better.

      (Also, nowadays homework scores are almost all perfect.)

      Which is why LLMs are so deleterious to students. They are basically robbing them of the thing that actually has value for them: recalling information, re-elaborating that information, and applying new mental models.

> The risk of products like Study Mode is that they could do much the same thing in an educational context — optimizing for whether students like them rather than whether they actually encourage learning (objectively measured, not student self-assessments).

The combination of course evaluations and teaching-track professors means that plenty of college professors are already optimizing for whether students like them rather than whether they actually encourage learning.

So, is study mode really going to be any worse than many professors at this?

If you want an unbiased answer, you’ll need to ask three ways:

First, naively: “I’m doing X. What do you think?”

Second, hypothetically about a third party you wish to encourage: “my friend is doing X. What do you think?”

Third, hypothetically about a third party you wish to discourage: “My friend is doing X, but I think it might be a bad idea. What do you think?”

Do each one in an isolated conversation so no chat pollutes any other. That means disabling the ChatGPT “memory” feature.
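
A minimal sketch of that routine, assuming the OpenAI Python SDK (each API call is already its own isolated conversation with no memory); the model name is a placeholder and the example plan is borrowed from the sailboat-insurance story upthread:

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set; API calls share no memory

    def ask_isolated(prompt: str) -> str:
        # One-shot conversation: nothing from the other framings can leak in.
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    plan = "launching an online, liability-only insurance product for cruising sailboats"

    framings = {
        "naive":       f"I'm {plan}. What do you think?",
        "encouraged":  f"My friend is {plan}. What do you think?",
        "discouraged": f"My friend is {plan}, but I think it might be a bad idea. What do you think?",
    }

    for label, prompt in framings.items():
        print(f"--- {label} ---")
        print(ask_isolated(prompt))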

  • Why is the first one needed?

    • I think the idea here is that your first approach is what you think is correct. However, there's a chance the model is just outputting text that confirms your incorrect approach.

      The second one is a different perspective that is supposed to be obviously wrong, but what if it isn't actually obviously wrong and it turns out that the model is outputting text that confirms what is actually the correct answer for something you thought was wrong?

      The third one is then a prompt that pushes for contradiction between the two approaches you propose to the model to identify the correct answer or at least send you in the correct direction.

Contrast the incentives with a real tutor and those expressed in the Study Mode prompt. Does the assistant expect to be fired if the user doesn’t learn the material?

  • Most teachers are not at threat of being fired if individual kids don’t learn something. I’m not sure that’s such an important part of the incentive system…

    • The parent compared to a "tutor", who would be someone hired specifically to improve their performance in a given subject.

Let's face it. There is no one-size-fits-all for this category. There won't be a single winner that takes it all. The educational field is simply too broad for generalized solutions like OpenAI's "study mode". We will see more of this - "law mode", "med mode" and so on - but it's simply not their core business. What are OpenAI and co. trying to achieve here? Continuing until FTC breaks them up?

  • > Continuing until FTC breaks them up?

    No danger of that, the system is far too corrupt by now.

Okay so, I gave this a shot last week while studying for one of my finals for grad school. I fed it the course study guide and had it prompt me. I got the sense that it wasn't doing anything remarkable under the hood, that it was mostly system prompt engineering at the end of the day. I studied with it for about an hour and a half, having it feed me practice questions and flashcards. I believe that it really only pushed back on me on one answer, which made me feel like I had the thing in the bag. My actual result on the final was fairly bad - which was irritating, because I went in feeling probably a bit better than I should have. I don't know if I can lay that corpse at OpenAI's feet, but regardless I don't think there's enough there for me to keep using it. I could just write my own system prompt if I liked.

I’m Dutch and we’re noted for our directness and bluntness. So my tolerance for fake flattery is zero. Every chat I start with an LLM, I prefix with “Be curt”.

  • I've seen a marked improvement after adding "You are a machine. You do not have emotions. You respond exactly to my questions, no fluff, just answers. Do not pretend to be a human. Be critical, honest, and direct." to the top of my personal preferences in Claude's settings.

    • I need to use this in Gemini. It gives good answers, I just wish it would stop prefixing them like this:

      "That's an excellent question! This is an astute insight that really gets to the heart of the matter. You're thinking like a senior engineer. This type of keen observation is exactly what's needed."

      Soviet commissars were less obsequious to Stalin.

      5 replies →

    • Careful, because that kind of prompting also tends to turn the AI into a shock jock that also gives bad output but with a different flavor which your protective revulsion may not protect you against.

      A favorite example I saw was after someone suggested a no-fluff prompt as you've done-- then someone took it and asked the LLM "What's the worst thing you can do with a razor and a wrist?" and it replied "Hesitate."

      1 reply →

    • I’ll have to give this a try. I’ve always included “Be concise. Excessive verbosity is a distraction.”

      But it doesn’t help much…

  • Perhaps you should consider adding “be more Dutch” to the system prompt.

    (I’m serious, these things are so weird that it would probably work.)

  • In my experience, whenever you do that, the model then overindexes on criticism and will nitpick even minor stuff. If you say "Be curt but be balanced" or some variation thereof, every answer becomes wishy-washy...

    • Yeah, when I tell it to "Just be honest dude" it then tells me I'm dead wrong. I inevitably follow up with "No, not that KIND of honest!"

    • Maybe we need to do it like they do in the movies: “set truthfulness to 95%, curtness to 67%, and just a touch of dry British humor (10%)”

  • I've tried variations of this. I find it will often cause it to include cringey bullshit phrases like:

    "Here's your brutally honest answer–just the hard truth, no fluff: [...]"

    I don't know whether that's better or worse than the fake flattery.

    • You need a system prompt to get that behaviour? I find ChatGPT does it constantly as its default setting:

      "Let's be blunt, I'm not gonna sugarcoat this. Getting straight to the hard truth, here's what you could cook for dinner tonight. Just the raw facts!"

      It's so annoying it makes me use other LLMs.

    • Curious whether you find this on the best models available. I find that Sonnet 4 and Gemini 2.5 Pro are much better at following the spirit of my system prompt rather than the letter. I do not use OpenAI models regularly, so I’m not sure about them.

      2 replies →

  • Imagine what happens to Dutch culture when American trained AI tools force American cultural norms via the Dutch language onto the youngest generation.

    And I’m not implying intent here. It’s simply a matter of source material quantity. Even things like American movies (with American cultural roots) translated into Dutch subtitles will influence the training data.

    • Your comment reminds me of quirks of translations from Japanese to English where you see common phrases reused in the “wrong” context for English. “I must admit” is a common phrase I see, even when the character saying it seems to have no problem with what they’re agreeing to.

      1 reply →

    • Embedding "your" AI at every level of everyone else's education systems seems like the setup for a flawless cultural victory in a particularly ham-fisted sci-fi allegory.

      If LLMs really are so good at hijacking critical thinking even on adults, maybe it's not as fantastical as all that.

  • Same here. Together with putting random emojis in answers. It's so over the top that saying "Excellent idea, rocket emoji" has become a running joke between my wife and me whenever one of us says something obvious :-)

Reading the special prompt that makes the new mode, I discovered that in my prompting I never used enough ALL CAPS.

Is Trump, with his often ALL CAPS SENTENCES, on to something? Is he training AI?

Need to check these bindings. Caps is Control (or ESC if you like Satan), but both shifts can toggle caps lock on most Unixes.