Comment by corysama

7 months ago

The vibe I'm getting from the Reddit community is that 5 is much less "Let's have a nice conversation for hours and hours" and much more "Let's get you a curt, targeted answer quickly."

So, good for professionals who want to spend lots of money on AI to be more efficient at their jobs. And, bad for casuals who want to spend as little money as possible to use lots of datacenter time as their artificial buddy/therapist.

I'm appalled by how dismissive and heartless many HN users seem toward non-professional users of ChatGPT.

I use the GPT models (along with Claude and Gemini) a ton for my work. And from this perspective, I appreciate GPT-5. It does a good job.

But I also used GPT-4o extensively for first-person non-fiction/adventure creation. Over time, 4o had become quite good at this. The forced upgrade to GPT-5 has, so far, been a massive reduction in quality for this use case.

GPT-5 forgets, misunderstands, or mixes up details about characters that were provided only a couple of messages prior, while 4o got these details right even when they hadn't been mentioned in dozens of messages.

I'm using it for fun, yes, but not as a buddy or therapist. Just as entertainment. I'm fine with paying more for this use if I need to. And I do - right now, I'm using `chatgpt-4o-latest` via LibreChat but it's a somewhat inferior experience to the ChatGPT web UI that has access to memory and previous chats.

Not the end of the world - but a little advance notice would have been nice so I'd have had some time to prepare and test alternatives.
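For anyone else making the same move, here's roughly what pinning the old model over the API looks like. Only the model id `chatgpt-4o-latest` is real here; the helper function and its field layout are just a sketch. Since the API has no equivalent of ChatGPT's memory, persistent story details have to be resent with every request:

```python
# Sketch of pinning the legacy model over the API. Only the model id
# `chatgpt-4o-latest` comes from OpenAI; the helper and its fields are
# illustrative. The API has no ChatGPT-style memory, so persistent story
# details must be resent with every request.
def build_chat_request(story_notes: str, history: list, user_turn: str) -> dict:
    return {
        "model": "chatgpt-4o-latest",
        "messages": [
            # A system prompt stands in for ChatGPT's "memory"
            {"role": "system", "content": "Story notes to keep consistent:\n" + story_notes},
            *history,  # prior turns, oldest first
            {"role": "user", "content": user_turn},
        ],
    }

req = build_chat_request("Kara: red-haired pilot on a Mars colony", [], "Continue the scene.")
```

The returned dict would then be sent to the chat completions endpoint; the point is that the "memory" lives entirely in what you choose to resend each turn.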

  • A lot of people use LLMs for fiction & role playing. Do you know of a place where some of these interactions are shared? The only ones I've found so far are, well, over-the-top sexual in nature.

    And I'm just kind of interested _how_ other people are doing all of this interactive fiction stuff.

    • Sure. Here is the fanfiction book I've been using LLMs to help me write. It helps a lot with improving prose and identifying plot holes. It's much better than a rubber duck for talking out how to improve a chapter and for writing plausible story arcs. It's not great at wordsmithing, but I find it errs on the side of too many similes and metaphors, so I just delete some of them as I copy the suggestions over into my draft.

      https://github.com/frypatch/The-Price-of-Remembering

    • I have some science-fiction story ideas I'd love to flesh out. However, it turns out that I'm a terrible writer, despite some practice at it. Also, I can never be surprised by my own writing, or entertained by it in the same way that someone else's writing can.

      I've tried taking my vague story ideas, throwing them at an AI, and getting half a chapter out to see how it tracks.

      Unfortunately, few if any models can write prose as well as a skilled human author, so I'm still waiting to see if a future model can output customised stories on demand that I'd actually enjoy.

  • I'm not sure which heartless comments you're referring to, but what I do see is genuine concern for the mental health of individuals who seem overly attached, on a deep emotional level, to an LLM. That does not look good at all.

    Just a few days ago, another person on that subreddit was explaining how they used ChatGPT to talk to a simulated version of their dad, who had recently passed away. At the same time, there are reports that may indicate LLMs triggering actual psychosis in some users (https://kclpure.kcl.ac.uk/portal/en/publications/delusions-b...).

    Given the loneliness epidemic there are obvious commercial reasons to make LLMs feel like your best pal, which may result in these vulnerable individuals getting more isolated and very addicted to a tech product.

    • > what I do see is genuine concern for the mental health of individuals

      I think that is going to be an issue regardless of the model. It will just take time for that person to reset to the new model.

      For me, the whole thing feels like culture shock. It was a rapid change in tone that came off as rude.

      But if you had had that type of conversation from the start, it would have been a non-issue.

    • The place we still call America for illogical reasons is a broken society seemingly in the final stages of its existence. Of course broken people will glom onto yet another digital form of a drug that gives an impression of at least suppressing the pain they feel for reasons they do not understand.

      It is little more than the Rat Park Experiment, only in this American version, the researchers think giving more efficient and various ways of delivering morphine water is how you make a rat park.

  • Personally, I prefer GPT-5 to 4o. It does a good job. But like many others, I don't like the sudden removal, because it also removed o3, which I sometimes use for research tasks. GPT-5's thinking mode is okay, but I feel o3 is still better.

  • Then you learned a valuable lesson about relying on hidden features of a tech product to support a niche use case.

    Carry it forward into your next experience with OpenAI.

Well, good, because these things make bad friends and worse therapists.

  • The number of comments in the thread talking about 4o as if it were their best friend they shared all their secrets with is concerning. Lotta lonely folks out there.

  • Which is a bit frightening, because a lot of the r/ChatGPT comments strike me as unhinged - it's as if OpenAI had murdered their puppy or something.

    • This is only going to get worse.

      Anyone who remembers the reaction when Sydney from Microsoft, or more recently Maya from Sesame, lost their respective 'personalities' can easily see how product managers are going to have to start paying attention to the emotional impact of changing or shutting down models.

    • Yeah it’s really bad over there. Like when a website changes its UI and people prefer the older look… except they’re acting like the old look was a personal friend who died.

      I think LLMs are amazing technology but we’re in for really weird times as people become attached to these things.

    • Considering how much d-listers can lose their shit over a puppet, I’m not surprised by anything.

  • I kind of agree with you as I wouldn't use LLMs for that.

    But also, one cannot speak for everybody; if it's useful for someone in that context, why is that an issue?

    • Because more than any other phenomenon, LLMs are capable of bypassing natural human trust barriers. We ought to treat their output with significant detachment and objectivity, especially when they give personal advice or offer support. But especially for non-technical users, LLMs leap over the uncanny valley and create conversational attachment with their users.

      The conversational capabilities of these models directly engage people's relational wiring and easily fool many people into believing:

      (a) the thing on the other end of the chat is thinking/reasoning and is personally invested in the process (not merely autoregressive stochastic content generation / vector path following)

      (b) its opinions, thoughts, recommendations, and relational signals are the result of that reasoning, some level of personal investment, and a resulting mental state it has with regard to me, and thus

      (c) what it says is personally meaningful on a far higher level than the output of other types of compute (search engines, constraint solving, etc.)

      I'm sure any of us can mentally enumerate a lot of the resulting negative effects. Like social media, there's a temptation to replace important relational parts of life with engaging an LLM, as it always responds immediately with something that feels at least somewhat meaningful.

      But in my opinion the worst effect is that there's a temptation to turn to LLMs first when life trouble comes, instead of to family/friends/God/etc. I don't mean for help understanding a cancer diagnosis (no problem with that), but for support, understanding, reassurance, personal advice, and hope. In the very worst cases, people have been treating an LLM as a spiritual entity -- not unlike the ancient Oracle of Delphi -- and getting sucked deeply into some kind of spiritual engagement with it, and causing destruction to their real relationships as a result.

      A parallel problem is that just like people who know they're taking a placebo pill, even people who are aware of the completely impersonal underpinnings of LLMs can adopt a functional belief in some of the above (a)-(c), even if they really know better. That's the power of verbal conversation, and in my opinion, LLM vendors ought to respect that power far more than they have.

    • Speaking for myself: the human mind does not seek truth or goodness; it primarily seeks satisfaction. That satisfaction happens in a context, and every context is at least a little bit different.

      The scary part: It is very easy for LLMs to pick up someone's satisfaction context and feed it back to them. That can distort the original satisfaction context, and it may provide improper satisfaction (if a human did this, it might be called "joining a cult" or "emotional abuse" or "co-dependence").

      You may also hear this expressed as "wire-heading"

    • The issue is that people in general are very easy to fool into believing something harmful is helping them. If it were actually useful, it wouldn't be an issue. But just because someone believes it's useful doesn't mean it actually is.

    • Well, because in a worst-case scenario, if the pilot of a big airliner decides to do ChatGPT therapy instead of real therapy and then commits suicide while flying, other people feel the consequences too.

    • Because it's probably not great for one's mental health to pretend a statistical model is one's friend?

    • Whether it's the Hippocratic oath, the rules of the APA, or those of any other organization, almost all share "do no harm" as a core tenet.

      LLMs cannot conform to that rule because they cannot distinguish between good advice and enabling bad behavior.

  • Well, like, that's just your opinion, man.

    And probably close to wrong if we are looking at the sheer scale of use.

    There is a bit of reality denial among anti-AI people. I thought about why people don't adjust to this new reality. I know one of my friends was anti-AI and seems to continue to be because his reputation is a bit based on proving he is smart. Another because their job is at risk.

> "Let's get you a curt, targeted answer quickly."

This is probably why I'm absolutely digging GPT-5 right now. It's a chatbot, not a therapist, a friend, or a lover.

  • Me too! Finally, these LLMs are showing some appreciation for blunt and concise answers.

    • Something that used to annoy me about all previous models is that if I asked for a fix to something in a code file (e.g., fix this method in this class), they would invariably return the entire thing with a bunch of small edits.

      GPT 5 is the first model I've used that has consistently done as it is told and returned only the changes.

I've seen quite a bit of this too. The other thing I'm seeing on Reddit is that a lot of people really liked 4.5 for worldbuilding and other creative tasks, so a lot of them are upset as well.

  • There is certainly a market/hobby opportunity for "discount AI" for no-revenue creative tasks. A lot of r/LocalLLaMA/ is focused on that area and on squeezing the best results out of limited hardware. Local is great if you already have a 24 GB gaming GPU. But maybe there's an opportunity for renting out low-power GPUs for casual creative work. Or an opportunity for a RenderToken-like community of GPU sharing.

    • The great thing about many (not all) "worldbuilding or other creative tasks" is that you could get quite far already using some dice and random tables (or digital equivalents). Even very small local models you can run on a CPU can improve the process enough to be worthwhile and since it is local you know it will remain stable and predictable from day to day.

  • I mean - I'm quite sure it's going to be available via API, and you can still do your worldbuilding if you're willing to go to places like OpenRouter.

I don't see how people using these as a therapist really has any measurable impact compared to using them as agents. I'll spend a day coding with an LLM and between tool calls, passing context to the model, and iteration I'll blow through millions of tokens. I don't even think a normal person is capable of reading that much.

Why shouldn't "casuals" (and/or "professionals" for that matter) be allowed to use AI for some reasoning or whatever?

One of Claude's "categories" is literally "Life Advice."

I'm often using Copilot or Claude to help me flesh out content, emails, strategy papers, etc. All of which takes many prompts of back-and-forth to get to a place where I'm satisfied with the result.

I also use it to develop software, where I'm more appreciative of being as near to "pure completions mode" as I can be most of the time.

The GPT-5 API has a new parameter for verbosity of output. My guess is the default value of this parameter used in ChatGPT corresponds to a lower verbosity than previous models.
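For the curious, a request using it might be shaped like this. The `verbosity` field ("low"/"medium"/"high") is the documented new parameter; the helper function around it is just an illustrative sketch, so check the current API reference before relying on it:

```python
# Illustrative sketch of a GPT-5 request body. The "verbosity" setting
# ("low" | "medium" | "high") is the new parameter; the helper itself is
# hypothetical — verify field names against OpenAI's API reference.
def build_request(prompt: str, verbosity: str = "low") -> dict:
    if verbosity not in ("low", "medium", "high"):
        raise ValueError("verbosity must be low, medium, or high")
    return {
        "model": "gpt-5",
        "input": prompt,
        "text": {"verbosity": verbosity},  # lower = terser answers
    }

payload = build_request("Explain TCP vs UDP.")
```

If ChatGPT's default for this knob is lower than what 4o effectively produced, that alone would explain the curter tone people are noticing.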

I had this feeling too.

I needed some help today and its messages were shorter but still detailed, without all the spare text that I usually don't even read.

That's probably very healthy as well. We may have become desensitized to sitting in a room with a computer for 5 hours, but that's not healthy, especially when we are using our human language interface and diluting it with LLMs.

It's a good reminder that OpenAI isn't incentivized to have users spend a lot of time on their platform. Yes, they want people to stay engaged and keep their subscriptions, but it's better for them if they can answer a question in a few turns rather than many. This dynamic would change immediately if OpenAI introduced ads or some other way to monetize each minute spent on the platform.

  • the classic 3rd space problem that Starbucks tackled; they initially wanted people to hang out and do work there, but grew to hate it so they started adding lots of little things to dissuade people from spending too much time there

    • > the classic 3rd space problem that Starbucks tackled

      “Tackled” is misleading. “Leveraged to grow a customer base and then exacerbated to more efficiently monetize the same customer base” would be more accurate.

Great for the environment as well as for the company's financial future. I can't see how this is a bad thing; some people really were just suffering from Proompt Disorder.

When using it to write code, what I'm seeing so far is that it's spending less effort trying to reason about how to solve problems from first principles, and more effort just blatantly stealing everything it can from open source projects.