I used https://openrouter.ai/openai/gpt-4.1 for grammar checking, and it was great. No newer ChatGPT model came close to being as responsive and good. ChatGPT 5.2 thinks I want it to write essays about grammar.
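For anyone who wants to reproduce this, here's a minimal sketch using the OpenAI Python SDK pointed at OpenRouter. The API key and system prompt are placeholders, not my exact setup:

```python
# Minimal sketch: gpt-4.1 as a grammar checker via OpenRouter's
# OpenAI-compatible endpoint. Key and system prompt are placeholders.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_KEY",
)

resp = client.chat.completions.create(
    model="openai/gpt-4.1",
    messages=[
        {"role": "system",
         "content": "Correct the grammar. Return only the corrected text, no essays."},
        {"role": "user", "content": "Me and him goes to the store yesterday."},
    ],
)
print(resp.choices[0].message.content)
```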
5.2 is back to being a sycophantic, hallucinating mess for most use cases. I've anecdotally caught it out on many of my sessions, where it apologizes, "You're absolutely right... that used to be the case but as of the latest version, as you pointed out, it no longer is," about things that never existed in the first place. It's just not good.
On the other hand - 5.0-nano has been great for fast (and cheap) quick requests and there doesn't seem to be a viable alternative today if they're sunsetting 5.0 models.
I really don't know how they're measuring improvements in the model since things seem to have been getting progressively worse with each release since 4o/o4 - Gemini and Opus still show the occasional hallucination or lack of grounding but both readily spend time fact-checking/searching before making an educated guess.
I've had ChatGPT blatantly lie to me and say there are several community posts and reddit threads about an issue; then, after failing to find them, I asked it where it found those and it flat out said "oh yeah, it looks like those don't exist".
The range of attitudes in there is interesting. There are a lot of people who take a fairly sensible "this is interactive fiction" kind of attitude, and there are others who bristle at any claim or reminder that these relationships are fictitious. There are even people with human partners who have "married" one or more AIs.
And it's a pity that this highly prevalent phenomenon (to exaggerate a bit, probably the way tech in general will become the most influential in the next couple years) is barely mentioned on HN.
It's a growing market, although it might be because of shifting goalposts. I had a friend whose son was placed in French immersion (a language he doesn't speak at all). From what I understood, he was getting up and walking around in kindergarten and was labelled as mentally divergent; his teachers apparently suggested to his mother that he see a doctor.
(Strangely these "mental illnesses" and school problems went away after he switched to an English language school, must be a miracle)
I assume the loneliness epidemic is producing similar cases.
There is/was an interesting period where "normies" were joining Twitter en masse and adopted many of the denizens' ideas as normal, widespread ideas. Kinda like going on a camping trip at "the lake" because you heard it's fun and not realizing that everyone else on the trip is part of a semi-deranged cult.
The outsized effect of this was journalists thinking these people on twitter were accurate representations of what society on the whole was thinking.
Wasn't "ChatGPT" itself only supposed to be a research/academic name, until it unexpectedly broke containment and they ended up having to roll with it? The naming was cursed from the start.
GTP goes forward from the middle, teeth, then lips, as compared to GPT which goes middle, lips, teeth; you'll see this pattern happen with a lot of words in linguistic history
It's almost always marketing and some stupid idea someone there had. I don't know why non-technical people try and claim so much ownership over versioning. You nearly always end up with these ridiculous outcomes.
"I know! Let's restart the version numbering for no good reason!" becomes DOOM (2016), Mortal Kombat 1 (2025), Battlefield 1 (2016), Xbox One (not to be confused with the original Xbox 1)
As another example, look at how much of a trainwreck USB 3 has become
Xbox should be in the hall of fame for terrible names.
There's also Xbox One X, which is not in the X series. Did I say that right? Playstation got the version numbers right. I couldn't make names as incomprehensible as Xbox if I tried.
Even more than that, I've seen a lot of people confuse 4 and 4o, probably because 4o sounds like a shorthand for 4.0 which would be the same thing as 4.
Come to think of it, maybe they had a play on 4o being “40”, and o4-mini being “04”, and having to append the “mini” to bring home the message of 04<40
I think this kind of thing is a pretty strong argument for the entire open source model ecosystem, not just open weights but open data and the whole gamut.
Despite 4o being one of the worst models on the market, they loved it. Probably because it was the most insane and delusional. You could get it to talk about really fucked up shit. It would happily tell you that you are the messiah.
The reaction to its original removal on Instagram Reels, r/ChatGPT, etc., was genuinely so weird and creepy. I didn't realise before this how many people had genuine parasocial (?) relationships with these LLMs.
I was mostly using 4o for academic searches and planning. It was the best model for me. Based on the context I was giving and the questions I was asking, 4o was the most consistent model.
It used to get things wrong for sure but it was predictable. Also I liked the tone like everyone else. I stopped using ChatGPT after they removed 4o. Recently, I have started using the newer GPT-5 models (got free one month). Better than before but not quite. Acts way over smart haha
> with only 0.1% of users still choosing GPT‑4o each day.
LOL WHAT?! I'm in the 0.1% of users? I'm certain part of the issue is that it takes 3 clicks to switch to GPT-4o, and it has to be done each time the page is loaded.
> that they preferred GPT‑4o’s conversational style and warmth.
Uh.. yeah maybe. But more importantly, GPT-4o gave better answers.
Zero acknowledgement about how terrible GPT-5 was when it was first released. It has since improved but it's not clear to me it's on-par with GPT-4o. Thinking mode is just too slow to be useful and so GPT-4o still seems better and faster.
I agree - I use 4o via the API, simply because it answers so quickly. Its answers are usually pretty good on programming topics. I don't engage in chit-chat with AI models, so it's not really about the personality (which seems to be the main framing people are talking about), just the speed.
It’s interesting that many comments mention switching back to Claude. I’m on the opposite end, as I’ve been quite happy with ChatGPT recently. Anthropic clearly changed something after December last year. My Pro plan is barely usable now, even when using only Sonnet. I frequently hit the weekly limit, which never happened before. In contrast, ChatGPT has been very generous with usage on their plan.
Another pattern I’m noticing is strong advocacy for Opus, but that requires at least the 5x plan, which costs about $100 per month. I’m on the ChatGPT $20 plan, and I rarely hit any limits while using 5.2 on high in codex.
I've been impressed by how good ChatGPT is at getting the right context from old conversations.
When I ask simple programming questions in a new conversation it can generally figure out which project I'm going to apply it to, and write examples catered to those projects. I feel that it also makes the responses a bit more warm and personal.
There was a bug, since fixed, that erroneously capped usage at something like 60% of the limit, if you want to try again.
You mean the harness bug on the 26th? I'm aware. But the limits I mentioned have been an issue since early January.
This is incorrect. I have the $200 per year plan and use Opus 4.5 every day.
Though granted it comes in ~4 hour blocks and it is quite easy to hit the limit if executing large tasks.
Not sure what you mean by incorrect since you already validated my point about the limits. I never had these issues even with Sonnet before, but after December, the change has been obvious to me.
Also worth considering that mileage varies because we all use agents differently, and what counts as a large workload is subjective. I am simply sharing my experience from using both Claude and Codex daily. For all we know, they could be running A/B tests, and we could both be right.
> We’re continuing to make progress toward a version of ChatGPT designed for adults over 18, grounded in the principle of treating adults like adults, and expanding user choice and freedom within appropriate safeguards. To support this, we’ve rolled out age prediction for users under 18 in most markets. https://help.openai.com/en/articles/12652064-age-prediction-...
interesting
Pornographic use has long been the "break glass in case of emergency" for the LLM labs when it comes to finances.
My personal opinion is that while smut won't hurt anyone in and of itself, LLM smut will have weird and generally negative consequences, as it will be crafted specifically for you, on top of the intermittent-reinforcement component of LLM generation.
While this is a valid take, I feel compelled to point out Chuck Tingle.
The sheer amount and variety of smut books (just books) is vastly larger than anyone wants to realize. We passed the mark decades ago where there is smut available for any and every taste. Like, to the point that even LLMs are going to take a long time to put a dent in the smut market. Humans have been making smut for longer than we've had writing.
Again, I don't think you're wrong, but the scale of the problem is way distorted.
"Legacy Smut" is well known to cause many kinds of harm to many kind of people, from the participants to the consumers.
Whatever reward-center path is short-circuiting in 0.0001% of the population and leading to LLM psychosis will become a nuclear bomb for them if we get the sex drive involved too.
I can do as much smut as I want through the API for all SOTA models.
I can already see our made-to-order, LLM-generated, VR/Neuralink-powered sex fantasies come to life. Throw in the synced Optimus sex robots…
I can see why Elon's making the switch from cars. We certainly won't be driving much.
Why LLM smut in particular? There's already a vast landscape of interactive VR games for all tastes.
Why is LLM smut supposed to be worse?
I'm waiting until someone combines LLMs with a humanoid robot and a realdoll. That will have a lot of consequences.
People are already addicted to non-interactive pornography so this is going to be even worse.
It says what to do if you are over 18, but thinks you are under 18. But what if it identifies someone under 18 as being older?
And what if you are over 18, but don't want to be exposed to that "adult" content?
> Viral challenges that could push risky or harmful behavior
And
> Content that promotes extreme beauty standards, unhealthy dieting, or body shaming
Seem dangerous regardless of age.
> And what if you are over 18, but don't want to be exposed to that "adult" content?
Don't prompt it.
What are these extreme beauty standards being promoted?
Because it seems to me large swaths of the population need some beauty standards
This is for advertising purposes, not porn. They might feign that's the reason, but it's to allow alcohol & pharma to advertise, no doubt.
both, actually. porn for users, ad spots for companies.
Porn and ads, it's the convergent evolution theory for all things on the internet.
My personal take is that there has been no progress; potentially there has been a regression on all LLM things outside of coding and scientific pursuits. I used to have great fun with LLMs for creative writing stuff, but I feel like current models are stiff and not very good prose writers.
This is also true for stuff like writing clear but concise docs, they're overly verbose while often not getting the point across.
I feel like this comes from the rigorous reinforcement learning these models go through now. The token distribution is becoming so narrow (so that the models give better answers more often) that it stifles their creativity and their ability to break out of the harness. To me, every creative prompt I give them turns into roughly the same mush as output. It is rarely interesting.
Yeah, I’ve had great success at coding recently, but every time I try to get an LLM to write me a spec it generates endless superlatives and a lot of flowery language.
Sexual and intimate chat with LLMs will be a huge market for whoever corners it. They'd be crazy to leave that money on the table.
That's why laws against drugs are so terrible, it forces law-abiding businesses to leave money on the table. Repeal the laws and I'm sure there will be tons of startups to profit off of drug addiction.
It's not just chat. Remember, image and video generation are on the table. There is already a huge category of adult video 'games' of this nature. I think they use combos of pre-rendered and dynamic content. But it's really not hard to imagine a near future in which interactive and completely personalized AI porn in full 4K HDR or VR is constantly and near-instantly available. I have no idea about the broader social implications of all that, but the tech itself feels inevitable and nearly here.
If your goal is to make money, sure. If your goal is to make AI safe, not so much.
It will be an even bigger market when robotics are sufficiently advanced.
Will be?
I've seen four startups make bank on precisely that.
My main concern is when they'll start to allow 18+ deepfakes
That market is for local models right now.
What’s the goal there? Sexting?
I’m guessing age is needed to serve certain ads and the like, but what’s the value for customers?
Even when you're making PG content, the general propriety limits of AI can hinder creative work.
The "Easter Bunny" has always seemed creepy to me, so I started writing a silly song in which the bunny is suspected of eating children. I had too many verses written down and wanted to condense the lyrics, but found LLMs telling me "I cannot help promote violence towards children." Production LLM services would not help me revise this literal parody.
Another day I was writing a romantic poem. It was abstract and colorful, far from a filthy limerick. But when I asked LLMs for help encoding a particular idea sequence into a verse, the models refused (except for grok, which didn't give very good writing advice anyway.)
If you don't think the potential market for AI sexbots is enormous, you have not paid attention to humanity.
There is a subreddit called /r/myboyfriendisAI, you can look through it and see for yourself.
according to the age-prediction page, the changes are:
> If [..] you are under 18, ChatGPT turns on extra safety settings. [...] Some topics are handled more carefully to help reduce sensitive content, such as:
- Graphic violence or gore
- Viral challenges that could push risky or harmful behavior
- Sexual, romantic, or violent role play
- Content that promotes extreme beauty standards, unhealthy dieting, or body shaming
Porn has driven just about every bit of progress on the internet, I don't see why AI would be the exception to that rule.
Westworld-style robots
There is a huge book market for sexual stories, in case you were not aware.
I am 30 years old, literally told ChatGPT I was a software developer, and all my queries are things an adult would ask, yet OpenAI assumed I was under 18 and asked me for a Persona age verification, which of course I refused because Persona is shady as a company (plus I'm not giving my personal ID to some random tech company).
ChatGPT is absolute garbage.
eh there's an old saying that goes "no Internet technology can be considered a success until it has been adopted by (or in this case integrated with) the porn industry".
imagine if every OnlyFans creator suddenly paid a portion of their revenue to OpenAI for better messaging with their followers…
Instead of paying it to the human third party firms that currently handle communication with subscribers?
Been unhappy with the GPT-5 series, after daily driving 4.x for ages (I chat with them through the API): very pedantic, goes off on too many side topics, stops following system instructions after a few turns (e.g. "you respond in 1-3 sentences" becomes long bulleted lists and multiple paragraphs very quickly).
Much better feel with the Claude 4.5 series, for both chat and coding.
I can never understand why it is so eager to generate walls of text. I have instructions to always keep the response precise and to the point. It almost seems like it wants to overwhelm you so you give up and do your own research.
> "you respond in 1-3 sentences" becomes long bulleted lists and multiple paragraphs very quickly
This is why my heart sank this morning. I have spent over a year training 4.0 to just about be helpful enough to get me an extra 1-2 hours a day of productivity. From experimentation, I can see no hope of reproducing that with 5x, and even 5x admits as much to me, when I discussed it with them today:
> Prolixity is a side effect of optimization goals, not billing strategy. Newer models are trained to maximize helpfulness, coverage, and safety, which biases toward explanation, hedging, and context expansion. GPT-4 was less aggressively optimized in those directions, so it felt terser by default.
Share and enjoy!
And how would GPT 5.0 know that, I wonder. I bet it’s just making stuff up.
> This is why my heart sank this morning. I have spent over a year training 4.0 to just about be helpful enough to get me an extra 1-2 hours a day of productivity.
Maybe you should consider basing your workflows on open-weight models instead? Unlike proprietary API-only models no one can take these away from you.
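For instance, the same kind of short-answer chat loop pointed at a locally served open-weight model. A sketch assuming Ollama's OpenAI-compatible endpoint; the model name is whatever you have pulled locally:

```python
# Sketch: the same chat workflow, but against a local open-weight model
# served by Ollama's OpenAI-compatible endpoint. Nobody can retire this.
from openai import OpenAI

local = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")

resp = local.chat.completions.create(
    model="llama3.1",  # any open-weight model you have pulled locally
    messages=[
        {"role": "system", "content": "You respond in 1-3 sentences."},
        {"role": "user", "content": "What is a vector clock?"},
    ],
)
print(resp.choices[0].message.content)
```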
4.1 is great for our stuff at work. It's quite stable (doesn't change personality every month, and a one-word difference doesn't change the behaviour). It doesn't think, so it's still reasonably fast.
Is there anything as good in the 5 series? Likely, but doing the full QA testing again for no added business value, just because the model disappears, is a hard sell. And the ones we tested were just slower, or tried to have more personality, which is useless for automation projects.
Yeah - agreed, the initial latency is annoying too, even with thinking allegedly turned off. Feels like AI companies are stapling more and more weird routing, summarization, safety layers, etc. that degrade the overall feel of things.
I often use ChatGPT without an account, and ChatGPT 5 mini (which you get while logged out) might as well be Mistral 7B + web search. It's that mediocre. Even the original 3.5 was way ahead.
Really? I’ve found it useful for random little things.
>We brought GPT‑4o back after hearing clear feedback from a subset of Plus and Pro users, who told us they needed more time to transition key use cases, like creative ideation, and that they preferred GPT‑4o’s conversational style and warmth.
This does verify the idea that OpenAI does not make models sycophantic as an attempted subversion, buttering up users so that they use the product more; it's because people actually want AI to talk to them like that. To me, that's insane, but they have to play the market, I guess.
As someone who's worked with population data, I found that there is an enormous rift between reported opinion (and HN and reddit opinion) vs revealed (through experimentation) population preferences.
I've always thought that the idea that "revealed preferences" are preferences discounts the fact that people often make decisions they would rather not make. It's like the whole idea that if you're on a diet, it's easier to not have junk food in the house to begin with than to have junk food and not eat more than your target amount. Are you saying these people want to put on weight? Or is it just that they've been put in a situation that defeats their impulse control?
I feel a lot of the "revealed preference" stuff in advertising is similar: advertisers find that if they get past the easier barriers that users put in place, it's easier to sell them stuff that, at a higher level, the users do not want.
Well that's what akrasia is. It's not necessarily a contradiction that needs to be reconciled. It's fine to accept that people might want to behave differently than how they are behaving.
A lot of our industry is still based on the assumption that we should deliver to people what they demonstrate they want, rather than what they say they want.
Exactly. That sounds to me like a TikTok vs NPR/books thing: people tell everyone what they read, then go spend 11 hours watching TikToks until 2am.
This is why I work in direct performance advertising. Our work reveals the truth!
Sounds both true and interesting. Any particularly wild and/or illuminating examples on which you can share more detail?
> its because people actually want AI to talk to them like that
I can't find the particular article (there are a few blogs and papers pointing out the phenomenon; I can't find the one I enjoyed), but it was along the lines of how in LMArena a lot of users tend to pick the "confidently incorrect" model over the "boring sounding but correct" model.
The average user probably prefers the sycophantic echo chamber of confirmation bias offered by a lot of large language models.
I can't help but draw parallels to the "You are not immune to propaganda" memes. Turns out most of us are not immune to confirmation bias, either.
I thought this was largely due to the AI personality splinter groups (trying to be charitable) like /myboyfriendisai and wrapper apps, who vocally let them know they used those models the last time they sunset them.
I was one of those pesky users who complained when o3 suddenly was unavailable.
When 5.2 was first launched, o3 did a notably better job at a lot of analytical prompts (e.g. "Based on the attached weight log and data from my calorie tracking app, please calculate my TDEE using at least 3 different methodologies").
o3 frequently used tables to present information, which I liked a lot. 5.2 rarely does this - it prefers to lay out information in paragraphs / blog post style.
I'm not sure if o3 responses were better, or if it was just the format of the reply that I liked more.
If it's just a matter of how people prefer to be presented their information, that should be something LLMs are equipped to adapt to at a user-by-user level based on preferences.
I thought it was based on the user thumbs-up and thumbs-down reactions; it evolving the way that it does makes it pretty obvious that users want their asses licked.
They have added settings for this now - you can dial up and down how “warm” and “enthusiastic” you want the models to be. I haven’t done back to back tests to see how much this affects sycophancy, but adding the option as a user preference feels like the right choice.
If anyone is wondering, the setting for this is called Personalisation in user settings.
This doesn't come as too much of a surprise to me. Feels like it mirrors some of the reasons why toxic positivity occurs in the workplace.
Put on a good show, offer something novel, and people will gleefully march right off a cliff while admiring their shiny new purchase.
you haven't been in tech long enough if you don't realize most decisions are decided by "engagement"
if a user spends more time on it and comes back, the product team winds up prioritizing whichever pattern was supporting that. it's just a continual selective evolution towards things that keep you there longer, based on what kept everyone else there longer
You're absolutely right. You're not imagining it. Here is the quiet truth:
You're not imagining it, and honestly? You're not broken for feeling this—it's perfectly natural as a human to have this sentiment.
After they pushed the limits on the Thinking models to 3,000 per week, I haven't touched anything else. I am really satisfied with their performance, and the 200k context window is quite nice.
I had been using Gemini exclusively for the 1 million token context window, but went back to ChatGPT after the limits were raised and created a Project system for myself, which gives me much better organization with Projects + Thinking-only chats (big context) + project-only memory.
Also, it seems like Gemini is really averse to googling (ironic in itself), while ChatGPT, at least in the Thinking modes, loves to look up current and correct info. If I ask something a bit more involved in Extended Thinking mode, it will think for several minutes and look up more than 100 sources. It's really good, practically a Deep Research inside a normal chat.
I REALLY struggle with Gemini 3 Pro refusing to perform web searches / getting combative with the current date. Ironically their flash model seems much more likely to opt for web search for info validation.
Not sure if others have seen this...
I could attribute it to:
1. It's a known quantity with the pro models (I recall that the pro/thinking models from most providers were not immediately equipped with web search tools when they were originally released)
2. Google wants you to pay more for grounding via their API offerings vs. including it out of the box (see the sketch below)
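On point 2: as I understand it, grounding in the Gemini API really is an explicit tool you opt into (and, past the free tier, pay for separately), which might explain why the consumer app is stingy about searching on its own. A minimal sketch with the google-genai SDK; the model name is illustrative:

```python
# Sketch: opting into Google Search grounding with the google-genai SDK.
# In the API this is an explicit tool you enable, not a default behavior.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_GEMINI_KEY")  # placeholder

resp = client.models.generate_content(
    model="gemini-2.5-flash",  # illustrative model name
    contents="What models did OpenAI deprecate this week?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)
print(resp.text)
```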
Gemini refused to believe that I was using macOS 26.
When I want it to google stuff, I just use the deep research mode. Not as instant, but it googles a lot of stuff then.
Sample of one here, but I get the exact opposite behavior. Flash almost never wants to search and I have to use Pro.
I find Gemini does the most searching (and the quickest... regularly pulls 70+ search results on a query in a matter of seconds - likely due to googlebot's cache of pretty much every page). Chatgpt seems to only search if you have it in thinking/research mode now.
ChatGPT 5.2 has been a good motivator for me to try out other LLMs because of how bad it is. Both 5.1 and 5.2 have been downgrades in terms of instruction following and accuracy, but 5.2 especially so. The upside is that that's had me using Claude much more, and I like a lot of things about it, both in terms of UI and the answers. It's also gotten me more serious about running local models. So, thank you OpenAI, for forcing me to broaden my horizons!
I left my ChatGPT Pro subscription when they removed the true deep thinking methods.
Mostly because of how massively varied their releases are. Each one required big changes to how I use and work with it.
Claude is perfect in this sense: all their models feel roughly the same, just smarter, so my workflow is always the same.
Have you had a chance to compare with Gemini 3?
I switch routinely between Gemini 3 (my main), Claude, GPT, and sometimes Grok. If you came up with 100 random tasks, they would all come out about equal. The issue is some are better at logical issues, some are better at creative writing, etc. If it's something creative I usually drop it in all 4 and combine the best bits of each.
(I also use Deep Think on Gemini too, and to me, on programming tasks, it's not really worth the money)
Not extensively. The few interactions I've tried on it have been disappointing, though. The voice input is really bad, significantly worse than any other major AI on the market. And I assumed search would be its strong suit, so I ran a search-and-compile type prompt (that I usually run on ChatGPT) on Gemini, and it was underwhelming at it. Not as bad as Grok (which was pretty much unusable for this), but noticeably worse than ChatGPT. Maybe Gemini has other strengths that I haven't come across yet, but on that one at least, it was a letdown.
nah bruh you are just imagining it.
It's just as good as ever /s
Gemini, Claude, ChatGPT, or whatever. Can we all agree that it's great to have so much choice?
Retiring the most popular model for relationship roleplay just one day before Valentine's Day is particularly ironic =) bravo, OpenAI!
Valentine's is in mid-February.
The sunset date is the 13th. V-day is on the 14th.
> [...] the vast majority of usage has shifted to GPT‑5.2, with only 0.1% of users still choosing GPT‑4o each day.
Well yeah, because 5.2 is the default and there's no way to change the default. So every time you open up a new chat you either use 5.2 or go out of your way to select something else.
(I'm particularly annoyed by this UI choice because I always have to switch back to 5.1)
What about 5.1 do you prefer over 5.2?
0.1% of users is not necessarily 0.1% of conversations…
What's the default model when a random user goes to use the chatgpt website or app?
5.2 in the website. You can see what was used for a specific response by hovering over the refresh icon at the end.
5.2.
You can go to chatgpt.com and ask "what model are you" (it doesn't hallucinate on this).
On the paid version it is 5.2.
won't somebody think of the goonettes?!
This was not a word I was prepared to learn about today.
If they were to retire the gpt-4.1 series from the API, that would be a major deal breaker. For structured outputs it is more predictable and significantly better because it does not have the reasoning step baked in.
I've heard great things about Mixtral's structured-output capabilities but haven't had a chance to run my evals on them. If 4.1 is dropped from the API, that's the first course of action.
Also, the 5 series doesn't have fine-tuning capabilities, and it's unclear how that would work with the reasoning step involved.
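For reference, this is the use case in question, as a minimal sketch with the OpenAI Python SDK; the ticket schema is invented for illustration:

```python
# Sketch: strict structured outputs with gpt-4.1, no reasoning step.
# The "ticket" schema here is invented purely for illustration.
import json
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user",
               "content": "DB timeouts on checkout since the 3pm deploy."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "ticket",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "severity": {"type": "string"},
                    "component": {"type": "string"},
                    "summary": {"type": "string"},
                },
                "required": ["severity", "component", "summary"],
                "additionalProperties": False,
            },
        },
    },
)
print(json.loads(resp.choices[0].message.content))
```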
There will be a lot of mentally unwell people unhappy with this, but this is a huge net positive decision, thank goodness.
I have stopped using ChatGPT in favor of Gemini. Mostly you need LLMs for factual stuff and sometimes to draft bits of code here and there. I use Google with Gemini for the first part and I am a huge fan of codex for the second part.
GPT 4o is still my favorite model
I had started using it again through Open WebUI. If it's gone, I'll probably switch to GLM-4.7 completely.
> In the API, there are no changes at this time
Curious where this is going to go.
One of the big arguments for local models is that we can't trust providers to maintain ongoing access to the models you validated and put into production. Even if you run hosted models, running open ones means you can switch providers.
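In practice that portability is roughly a base_url swap; a sketch, with the hosts and model name being illustrative:

```python
# Sketch: with an open-weight model, switching providers is a config
# change, not a rewrite. Hosts and model names are illustrative.
from openai import OpenAI

providers = {
    "local":  OpenAI(base_url="http://localhost:8000/v1", api_key="unused"),
    "hosted": OpenAI(base_url="https://inference.example.com/v1", api_key="KEY"),
}
client = providers["local"]  # swap the key; everything downstream stays the same

resp = client.chat.completions.create(
    model="qwen2.5-32b-instruct",  # same open-weight model on either host
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.choices[0].message.content)
```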
will this nuke my old convos?
opus 4.5 is better than gpt on everything except code execution (but with pro you get a lot of claude code usage), and if they nuke all my old convos I'll prob downgrade from pro to free
From the blog post (twice):
> creative ideation
At first I had no idea what this meant! So I asked my friend Miss Chatty [1] and we had an interesting conversation about it:
https://chatgpt.com/share/697bf761-990c-8012-9dd1-6ca1d5cc34...
[1] You may know her as ChatGPT, but I figure all the other AIs have fun human-sounding names, so she deserves one too.
I do find it interesting to see how people interact with AI as I think it is quite a personal preference. Is this how you use AI all the time? Do you appreciate the sycophancy, does it bother you, do you not notice it? From your question it seems you would prefer a blog post in plainer language, avoiding "marketing speak", but if a person spoke to me like Miss Chatty spoke to you I would be convinced I'm talking to a salesperson or marketing agent.
That is a great question!
You are absolutely right to ask about it!
(How did I do with channeling Miss Chatty's natural sycophancy?)
Anyway, I do use AI for other things, such as...
• Coding (where I mostly use Claude)
• General research
• Looking up the California Vehicle Code about recording video while driving
• Gift ideas for a young friend who is into astronomy (Team Pluto!)
• Why "Realtor" is pronounced one way in the radio ads, another way by the general public
• Tools and techniques for I18n and L10n
• Identifying AI-generated text and photos (takes one to know one!)
• Why spaghetti softens and is bendable when you first put it into the boiling water
• Burma-Shave sign examples
• Analytics plugins for Rails
• Maritime right-of-way rules
• The Uniform Code of Military Justice and the duty to disobey illegal orders
• Why, in a practical sense, the Earth really once *was* flat
• How de-alcoholized wine gets that way
• California law on recording phone conversations
• Why the toilet runs water every 20 minutes or so (when it shouldn't)
• How guy wires got that name
• Where the "he took too much LDS" scene from Star Trek IV was filmed
• When did Tim Berners-Lee demo the World Wide Web at SLAC
• What "ogr" means in "ogr2ogr"
• Why my Kia EV6 ultrasonic sensors freaked out when I stopped behind a Lucid Air
• The smartest dog breeds (in different ways of "smart")
• The Sputnik 1 sighting in *October Sky*
• Could I possibly be related to John White Geary?
And that's just from the last few weeks.
In other words, pretty much anything someone might interact with an AI - or a fellow human - about.
About the last one (John White Geary), that discussion started with my question about actresses in the "Pick a little, talk a little" song from The Music Man movie, and then went on to how John White Geary bridged the transition from Mexican to US rule, as did others like José Antonio Carrillo:
https://chatgpt.com/share/697c5f28-7c18-8012-96fc-219b7c6961...
If I could sum it all up, this is the kind of freewheeling conversation with ChatGPT and other AIs that I value.
It's really an interesting insight into people's personalities. Far more than their Google search history. Which is why everyone wants their GPT chats burned to the ground after they die.
I noticed how ChatGPT got progressively worse at helping me with my research. I gave up on ChatGPT 5 and just switched to Grok and Gemini. I couldn't be happier that I switched.
Odd. I've found that Gemini will completely fabricate the content of specific DOIs despite being corrected, even when it provides a link to a paper showing it is wrong about the title and subject of the paper it cites. This obviously concerns me about its effectiveness as a research aide.
It's amazing how different people's experiences are. To me, every new version of ChatGPT was an improvement, and Gemini is borderline unusable.
I've had the same experience. I don't get how people are saying Gemini is so good.
Very curious for what use cases you're finding gemini unusable.
Why not Claude?
The limits on the $20 plan are too low compared to Gemini and ChatGPT. They're too low to do any serious work at all.
I personally find Claude the best at coding, but its usefulness doesn't seem to extend to scientific research and writing.
Because I’m sick of paying $20 for an hour of claude before it throttles me.
I wish they would keep 4.1 around for a bit longer. One of the downsides of the current reasoning based training regimens is a significant decrease in creativity. And chat trained AIs were already quite "meh" at creative writing to begin with. 4.1 was the last of its breed.
So we'll have to wait until "creativity" is solved.
Side note: I've been wondering lately about a way to bring creativity back to these thinking models. For creative writing tasks you could add the original, pretrained model as a tool call. So the thinking model could ask for its completions and/or query it and get back N variations. The pretrained model's completions will be much more creative and wild, though often incoherent (think back to the GPT-3 days). The thinking model can then review these and use them to synthesize a coherent, useful result. Essentially giving us the best of both worlds. All the benefits of a thinking model, while still giving it access to "contained" creativity.
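A minimal sketch of what that tool could look like, assuming an OpenAI-style client and a hypothetical base-model name (base models expose plain completion rather than chat):

    # Sketch only: expose a raw pretrained model as a tool the thinking
    # model can call for N wild-but-incoherent completions to mine for ideas.
    # "base-pretrained-model" is a hypothetical name, not a real offering.
    from openai import OpenAI

    client = OpenAI()

    def base_model_variations(prompt: str, n: int = 5) -> list[str]:
        # High temperature pushes the base model toward the GPT-3-era
        # "creative but often incoherent" regime described above.
        resp = client.completions.create(
            model="base-pretrained-model",
            prompt=prompt,
            n=n,
            temperature=1.2,
            max_tokens=300,
        )
        return [choice.text for choice in resp.choices]

    # The thinking model would receive these as a tool result and
    # synthesize the usable fragments into one coherent draft.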
My theory, based on what I saw with non-thinking models, is that as soon as you detail something too much (i.e., not just "speak in the style of X" but "speak in the style of X with [a list of adjectives detailing the style of X]"), they lose creativity and no longer fit the style very well. I don't know how things have evolved with newer training techniques, but I suspect that over-specifying a task can lower output quality in some models for creative work.
I also deeply regret the retirement of 4.1. From my own usage, for code or everyday tasks, I noticed a clear degradation in performance going from 4.1 to 5.1/5.2.
4.1 was the best so far, with straight-to-the-point answers that were correct most of the time, especially for code-related questions. 5.1/5.2, by contrast, far more readily hallucinate nonsense responses or code snippets that are nothing like what was asked for.
Have you tried the relatively recent Personalities feature? I wonder if that makes a difference.
(I have no idea. LLMs are infinite code monkeys on infinite typewriters for me, with the occasional "how do I evolve this Pokémon" utility. But worth a shot.)
I hope they won't chop gpt-4o-mini soon because it's fast and accurate for API usage.
Does this mean they're also retiring Standard Voice Mode?
Would be cool if they'd release the weights for these models so users could now use them locally.
Why would someone want to spend half a million dollars on GPUs and components (if not more) to run one-year-old models that genuinely aren't useful? You can't self-host trillion-parameter models unless you own a datacenter lol (or want to just light money on fire).
Are the mini / omni models really trillion parameter models?
To do AI research!!!!!!!
They'd only do that if they were some kind of open ai company /s
gpt-oss is pretty great tbh - one of the better all-around local models for knowledge and grounding.
lol :)
Which one is the AI boyfriend model? Tumblr, Twitter, and reddit will go crazy
4o is the most popular one for that
OK, everyone is (rightly) bringing up that relatively small but really glaringly prominent AI boyfriend subreddit.
But I think a lot more people are using LLMs as relationship surrogates than that (pretty bonkers) subreddit would suggest. Character AI (https://en.wikipedia.org/wiki/Character.ai) seems quite popular, as do the weird fake-friend things in Meta products and Grok’s various personality modes and very creepy AI girlfriends.
I find this utterly bizarre. LLMs are peer coders in a box for me. I care about Claude Code, and that’s about it. But I realize I am probably in the vast minority.
We're very echo-chambered here. That graph OpenAI released had coding at 4% or something.
Two weeks' notice to migrate to a different style of model (“normal” 4.1-mini to reasoning 5.1) is bad form.
Misread the post - it doesn’t include the API
I used https://openrouter.ai/openai/gpt-4.1 for grammar checking, and it was great. No newer ChatGPT models came close to being as responsive and good. ChatGPT 5.2 thinks I want it to write essays about grammar.
Any suggestions?
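For reference, here is roughly how I was calling it through OpenRouter's OpenAI-compatible endpoint (a minimal sketch; the system prompt is just illustrative):

    # Minimal sketch: grammar checking through OpenRouter's
    # OpenAI-compatible API; needs the `openai` package and an API key.
    import os
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key=os.environ["OPENROUTER_API_KEY"],
    )

    def check_grammar(text: str) -> str:
        resp = client.chat.completions.create(
            model="openai/gpt-4.1",  # the model from the link above
            messages=[
                {"role": "system",
                 "content": "Fix grammar only. Return the corrected text and nothing else."},
                {"role": "user", "content": text},
            ],
        )
        return resp.choices[0].message.content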
Sora + OpenAI voice Cloning + AdultGPT = Virtual Girlfriend/Boyfriend
(Upgrade for only 1999 per month)
Damn, some of my prompts worked better on 4o than the more recent models
5.2 is back to being a sycophantic hallucinating mess for most use cases - I've anecdotally caught it out on many of the sessions I've had where it apologizes "You're absolutely right... that used to be the case but as of the latest version as you pointed out, it no longer is." when it never existed in the first place. It's just not good.
On the other hand - 5.0-nano has been great for fast (and cheap) quick requests and there doesn't seem to be a viable alternative today if they're sunsetting 5.0 models.
I really don't know how they're measuring improvements in the model since things seem to have been getting progressively worse with each release since 4o/o4 - Gemini and Opus still show the occasional hallucination or lack of grounding but both readily spend time fact-checking/searching before making an educated guess.
I've had ChatGPT blatantly lie to me and say there are several community posts and reddit threads about an issue; then, after it failed to find them, I asked where it found those and it flat out said "oh yeah, it looks like those don't exist".
That’s been my experience too, and it has led to hours of wasted time. It’s faster for me to read through docs and watch YouTube.
Even if I submit the documentation or reference links, they are completely ignored.
Last time they tried to do this they got huge push back from the AI boyfriend people lol
/r/MyBoyfriendIsAI https://www.reddit.com/r/MyBoyfriendIsAI/ is a whole thing. It's not a joke subreddit.
The range of attitudes in there is interesting. There are a lot of people who take a fairly sensible "this is interactive fiction" kind of attitude, and there are others who bristle at any claim or reminder that these relationships are fictitious. There are even people with human partners who have "married" one or more AIs.
And it's a pity that this highly prevalent phenomenon (to exaggerate a bit, probably the way tech in general will become most influential in the next couple of years) is barely mentioned on HN.
>It's not a joke subreddit.
Spend a day on Reddit and you'll quickly realize many subreddits are just filled with lies.
I sometimes envy the illiterate.
At least they cannot read this.
I wonder if they have run the analytics on how many users are doing that. I would love to see that number.
> only 0.1% of users still choosing GPT‑4o each day.
If the 800MAU still holds, that's 800k people.
well now you can unlock an 18+ version for sexual role-play, so I guess it's the other way around
It's a growing market, although it might be because of shifting goalposts. I had a friend whose son was placed in French immersion (a language he doesn't speak at all). From what I understood, he was getting up and walking around in kindergarten and was labelled as mentally divergent; his teachers apparently suggested to his mother that he see a doctor.
(Strangely these "mental illnesses" and school problems went away after he switched to an English language school, must be a miracle)
I assume the loneliness epidemic is producing similar cases.
They control reddit and used to control twitter.
There is/was an interesting period where "normies" were joining twitter en masse and adopted many of the denizens' ideas as normal, widespread ideas. Kinda like going on a camping trip at "the lake" because you heard it's fun and not realizing that everyone else on the trip is part of a semi-deranged cult.
The outsized effect of this was journalists thinking these people on twitter were accurate representations of what society on the whole was thinking.
wasn't there a trend on twitter to have a bio/signature with a bunch of mental illness acronyms?
Those people need to be uploaded into the Matrix and the data servers sent far, deep into space.
I can't see o3 in my model selector either.
RIP
Theo can sleep tonight.
Oh good, not in the API. 4o-mini is super cheap and useful for a bunch of things I do (e.g. grading vector-search hits for relevancy).
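That last use case is roughly the following, as a minimal sketch (the YES/NO prompt is just one way to do it, and the vector-store calls at the end are placeholders):

    # Minimal sketch: use a small, cheap model to grade vector-search
    # hits for relevancy before they reach the final context window.
    from openai import OpenAI

    client = OpenAI()

    def is_relevant(query: str, document: str) -> bool:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system",
                 "content": "Answer YES or NO: is the document relevant to the query?"},
                {"role": "user",
                 "content": f"Query: {query}\n\nDocument: {document}"},
            ],
            max_tokens=3,
        )
        return resp.choices[0].message.content.strip().upper().startswith("YES")

    # hits = vector_store.search(query)   # placeholder for your vector DB
    # kept = [d for d in hits if is_relevant(query, d)]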
I still don’t know how OpenAI thought it was a good idea to have a model named "4o" AND a model named "o4", unless the goal was intentional confusion.
Even ChatGPT (and certainly Google) confuses the names.
I'm sure there is some internal/academic reason for the names, but to an outside observer they're simply horrible.
Wasn't "ChatGPT" itself only supposed to be a research/academic name, until it unexpectedly broke containment and they ended up having to roll with it? The naming was cursed from the start.
How many times have you noticed people confusing the name itself: ChatGBT, ChatGTP etc.
We're the technical crowd cursed and blinded by knowledge.
When picking a fight with product marketing, just don't.
Considering how many people say ChatGTP too
I still don't like how French people don't call it "chat, j'ai pété" ("cat, I farted").
The other day I heard ChatGBD.
GTP moves forward in the mouth: from the middle, to the teeth, then the lips, whereas GPT jumps from middle to lips to teeth; you'll see this reordering pattern in a lot of words across linguistic history.
I’ve been hearing that consistently from a friend, I gave up on correcting them because “ChatGPT” just wouldn’t stick
It's almost always marketing and some stupid idea someone there had. I don't know why non-technical people try to claim so much ownership over versioning. You nearly always end up with these ridiculous outcomes.
"I know! Let's restart the version numbering for no good reason!" becomes DOOM (2016), Mortal Kombat 1 (2025), Battlefield 1 (2016), Xbox One (not to be confused with the original Xbox 1)
As another example, look at how much of a trainwreck USB 3 has become
Or how Nvidia restarted GeForce card numbering.
Xbox should be in the hall of fame for terrible names.
There's also the Xbox One X, which is not part of the Xbox Series X line. Did I say that right? PlayStation got the version numbers right. I couldn't make names as incomprehensible as Xbox's if I tried.
"4o" was bad to begin with, as "four-oh" is a common verbalization of "4.0".
Even more than that, I've seen a lot of people confuse 4 and 4o, probably because 4o sounds like a shorthand for 4.0 which would be the same thing as 4.
Come to think of it, maybe it was a play on 4o being “40” and o4-mini being “04”, with the “mini” appended to bring home the message that 04 < 40.
They will have to update the openai.com footer, I guess:
Latest Advancements
GPT-5
OpenAI o3
OpenAI o4-mini
GPT-4o
GPT-4o mini
Sora
They should open source GPT-4o.
If people want an AI as a boyfriend, at least they should use one that is open source.
If you disagree with something, you can also train a LoRA.
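For what "train a LoRA" means in practice, here is a minimal sketch using Hugging Face's transformers and peft (the base model is a small placeholder, not a recommendation):

    # Minimal LoRA sketch: wrap an open-weights model with low-rank
    # adapters so only a tiny fraction of parameters gets trained.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    base = "gpt2"  # placeholder; substitute the open model you actually run
    model = AutoModelForCausalLM.from_pretrained(base)
    tokenizer = AutoTokenizer.from_pretrained(base)

    config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                        task_type="CAUSAL_LM")
    model = get_peft_model(model, config)
    model.print_trainable_parameters()
    # ...then train the adapters with your preferred trainer on
    # whatever conversations or preferences you want the model to pick up.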
I think this kind of thing is a pretty strong argument for the entire open source model ecosystem, not just open weights but open data and the whole gamut.
That’s really going to upset the crazies.
Despite 4o being one of the worst models on the market, they loved it. Probably because it was the most insane and delusional. You could get it to talk about really fucked up shit. It would happily tell you that you are the messiah.
The reaction to its original removal on Instagram Reels, r/ChatGPT, etc., was genuinely so weird and creepy. I didn't realise before this how many people had genuine parasocial (?) relationships with these LLMs.
It was the first model I used that was half decent at coding. Everyone remembers their gateway drug.
I was mostly using 4o for academic searches and planning. It was the best model for me. Based on the context I was giving and the questions I was asking, 4o was the most consistent model.
It used to get things wrong for sure but it was predictable. Also I liked the tone like everyone else. I stopped using ChatGPT after they removed 4o. Recently, I have started using the newer GPT-5 models (got free one month). Better than before but not quite. Acts way over smart haha
I wonder if it will still be up on Azure? How much do you think I could make if I set up 4o under a domain like yourgirlfriendis.ai or w/e?
Note: I wouldn't actually; I find it terrible to prey on people.
ChatGPT Made Me Delusional: https://www.youtube.com/watch?v=VRjgNgJms3Q
Should be essential watching for anyone that uses these things.
> with only 0.1% of users still choosing GPT‑4o each day.
LOL WHAT?! I'm part of the 0.1% of users? I'm certain part of the issue is that it takes 3 clicks to switch to GPT-4o, and it has to be done each time the page is loaded.
> that they preferred GPT‑4o’s conversational style and warmth.
Uh.. yeah maybe. But more importantly, GPT-4o gave better answers.
Zero acknowledgement of how terrible GPT-5 was when it was first released. It has since improved, but it's not clear to me that it's on par with GPT-4o. Thinking mode is just too slow to be useful, so GPT-4o still seems better and faster.
Oh well, it'll be missed.
I agree - I use 4o via the API, simply because it answers so quickly. Its answers are usually pretty good on programming topics. I don't engage in chit-chat with AI models, so it's not really about the personality (which seems to be the main framing people are talking about), just the speed.