Comment by bradly

6 months ago

The specific model I was using was o4-mini-high, which the drop-down model selector describes as "Great at coding and visual reasoning".

I'm curious how you ended up in such a conversation in the first place. Hallucinations are one thing, but I can't remember the last time a model claimed it had actually run something somewhere that wasn't a tool-use call, or that it owns a laptop, or the like - except when role-playing.

I wonder if the advice on prompting models to role-play isn't backfiring now, especially in conversational settings. There might even be a difference between "you are an AI assistant that's an expert programmer" vs. "you are an expert programmer" in the prompt, the latter pushing it towards the "role-playing a human" region of the latent space.

(But also, yeah, o3. Search access is the key to cutting down on the amount of answer-guessing, and o3 uses it judiciously. It's the only model I use for "chat" when the topic requires any knowledge that's niche or current, because it's the only model I've seen reliably figure out when and what to search for, and do it iteratively.)

  • I've seen that specific kind of role-playing glitch here and there with the o[X] models from OpenAI. The models do kinda seem to think of themselves as developers with their own machines. It usually just doesn't come up, but they can easily be tilted into it.

  • What is really interesting is that in the "thinking" section it said "I need to reassure the user...", so my intuition is that it thought it was right, but didn't think I would believe it was right - and that if it just gave me the confidence, I would try the code and unblock myself. Maybe it thought this gave the best % chance that I would listen to it, and so it was the correct response?

    • Maybe? Depends on what followed that thought process.

      I've noticed this a couple of times with o3, too - early on, I'd catch a glimpse of something like "The user is asking X... I should reassure them that Y is correct" or some such, which raised an eyebrow because I already knew Y was bullshit, and WTF is with the whole reassuring business... but then the model would continue actually exploring the question, and the final answer showed no trace of Y, or any kind of measurement. I really wish OpenAI gave us the whole thought process verbatim, as I'm kind of curious where those "thoughts" come from and what happens to them.


  • A friend recently had a similar interaction where ChatGPT told them that it had just sent them an email or a WeTransfer with the requested file.

Gotcha. Yeah, give o3 a try. If you don't want to get a sub, you can use it over the API for pennies. They do have you do this biometric registration thing that's kind of annoying if you want to use it over the API, though.
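
For reference, a minimal sketch of what "use it over the API" can look like. This assumes the official openai Python package (v1+), an OPENAI_API_KEY set in your environment, and "o3" as the model ID; the prompt is just a made-up example, not something from this thread:

```python
# Minimal sketch: calling o3 via the API instead of a ChatGPT subscription.
# Assumes the official `openai` Python package and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="o3",  # reasoning model discussed in this thread
    messages=[
        {"role": "user", "content": "Explain the difference between a mutex and a semaphore."},
    ],
)
print(response.choices[0].message.content)
```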

You can get the Google pro subscription (I forget what they call it), ordinarily $20/mo, for free right now (1 month free; cancel whenever), which gives unlimited Gemini 2.5 Pro access.

  • > Gotcha. Yeah, give o3 a try. If you don't want to get a sub, you can use it over the API for pennies. They do have you do this biometric registration thing that's kind of annoying if you want to use it over the API, though.

    I hope you appreciate just how crazy this sentence sounds, even in an age when this is normalised.

  • Yeah, this model didn't work, it seems.

    You're holding it wrong. You need to utter the right series of incantations to get some semblance of truth.

    What, you used the model that was SOTA one week ago? Big mistake, that explains why.

    You need to use this SOTA model that came out one day ago instead. That model definitely wasn't trained to overfit the week-old benchmarks and dismiss the naysayers. Look, a pelican!

    What? You haven't verified your phone number and completed a video facial scan and passed a background check? You're NGMI.

  • Thank you for the tip on o3. I will switch to that and see how it goes. I do have a paid sub for ChatGPT, but from the dropdown model descriptions, "Great at coding" sounded better than "Advanced reasoning". And 4 is like almost twice as much as 3.

    • In my current experience:

      - o3 is the bestest and my go-to, but its strength comes from combining reasoning with search - it's the one model you can count on to find things out for you instead of going off vibes and training data;

      - GPT 4.5 feels the smartest, but also has tight usage limits and doesn't do search like o3 does; I use it when I need something creative done, or switch to it mid-conversation to have it reason off an already primed context;

      - o4-mini / o4-mini-high - data transformation, and coding stuff that doesn't require looking things up - especially when o3 has already looked things up, and now I just need ChatGPT to apply it to code/diagrams;

      - gpt-4o - only for image generation, and begrudgingly when I run out of quota on GPT 4.5

      o3 has been my default starting model for months now; most of my queries benefit from having a model that does autonomous reasoning+search. Agentic coding stuff I now push to Claude Code.


    • I’d also recommend basically always having search enabled. That’s eliminated major hallucinations for me.

    • lol yep, fully get that. And I mean, I'm sure o4 will be great, but the '-mini' variant is weaker. Some of it will come down to taste and what kind of thing you're working on, too, but personal preferences aside, from the heavy LLM users I talk to, o3 and Gemini 2.5 Pro currently seem to be at the top if you're dialoguing with them directly (vs. using them through an agent system).

All LLMs can fail this way.

It's kind of weird to see people running into this kind of issue with modern large models, with all the RL they've gone through, and getting confused by it. No one starting today seems to have good intuition for them. One person I knew insisted for months that LLMs could do structural analysis, until he saw some completely absurd output from one. This failure mode used to be super common with the small GPTs from around 2022, so everyone just intuitively knew to watch out for it.