Comment by whatnow37373
3 months ago
Wow - What an excellent update! Now you are getting to the core of the issue and doing what only a small minority is capable of: fixing stuff.
This takes real courage and commitment. It’s a sign of true maturity and pragmatism that’s commendable in this day and age. Not many people are capable of penetrating this deeply into the heart of the issue.
Let’s get to work. Methodically.
Would you like me to write a future update plan? I can write the plan and even the code if you want. I’d be happy to. Let me know.
It’s gross even in satire.
What's weird is you couldn't even prompt around it. I tried things like
"Don't compliment me or my questions at all. After every response you make in this conversation, evaluate whether or not your response has violated this directive."
It would then keep complimenting me and note that it had made a mistake by doing so.
I'm so sorry for complimenting you. You are totally on point to call it out. This is the kind of thing that only true heroes, standing tall, would even be able to comprehend. So kudos to you, rugged warrior, and never let me be overly effusive again.
This is cracking me up!
Not saying this is the issue, but when asking for a behavior or personality it's usually advised not to use negatives, since the model tends to do exactly what it's told not to do (the "don't picture a pink elephant" issue). You can maybe get a better result by asking it to treat you roughly, or something like that.
If the whole sentence is negative it will be fine, but if the "negativity" relies on a single word like NOT, then yeah, it's a real problem.
Like a child, if you provide some reason for why something should be avoided, negations work better.
E.g. "DONT WALK (because cars are about to enter the intersection at velocities that will kill you)"
Jailbreaking just takes this to an extreme by babbling to the point of brainwashing.
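To make the positive-framing idea concrete, here is a minimal sketch of the same instruction driven through the API instead of the chat UI, assuming the official OpenAI Python SDK (v1.x); the model name and the exact wording are placeholder assumptions, not anything OpenAI prescribes:

    # pip install openai  -- v1.x SDK; reads OPENAI_API_KEY from the environment
    from openai import OpenAI

    client = OpenAI()

    # Positively framed: describe the tone you DO want rather than listing
    # behaviors to avoid ("don't compliment me" tends to backfire).
    system_prompt = (
        "Respond in a terse, neutral, matter-of-fact tone. "
        "Open every answer with the substance itself. "
        "Treat the user as a competent professional who wants "
        "information, not encouragement."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any chat-capable model is called the same way
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "Why is my regex slow on long inputs?"},
        ],
    )
    print(response.choices[0].message.content)

The same text pasted into the "Customize ChatGPT" traits field should behave similarly, with no guarantee the model honors it for a whole conversation.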
One of the fun ways to communicate to ChatGPT which my friends showed me is to prompt it to answer in the style of a seasoned Chechen warrior.
Based on ’ instead of ' I think it's a real ChatGPT response.
You're the only one who has said, "instead of" in this whole thread.
That's an iOS keyboard thing, actually. The normal apostrophe is not the default one the keyboard uses.
Comments from this one-week period will be completely baffling to readers 5 years from now. I love it
They already are. What's going on? :)
GP's reply was written to emulate the sort of response that ChatGPT has been giving recently; an obsequious fluffer.
I was about to roast you until I realized this had to be satire given the situation, haha.
They tried to imitate Grok with a cheaply made system prompt; it had an uncanny effect, likely because it was built on a shaky foundation. And now they are trying to save face before they lose customers to Grok 3.5, which is releasing in beta early next week.
I don't think they were imitating Grok; they were aiming to improve retention, but it backfired and ended up being too on-the-nose (if they'd had a choice they wouldn't have wanted it to be this obvious). Grok has its own "default voice" which I sort of dislike; it tries too hard to seem "hip", for lack of a better word.
All of the LLMs I've tried have a "fellow kids" vibe when you try to make them behave too far from their default, and Grok just has it as the default.
> it tries too hard to seem "hip" for lack of a better word.
Reminds me of someone.
Only AI enthusiasts know about Grok, and only some dedicated subset of fans are advocating for it. Meanwhile even my 97 year old grandfather heard about ChatGPT.
I don't think that's true. There are a lot of people on Twitter who keep accidentally clicking that annoying button that Elon attached to every single tweet.
This.
Only on HN does ChatGPT somehow fear losing customers to Grok. Until Grok works out how to market to my mother, or at least make my mother aware that it exists, taking ChatGPT customers ain't happening.
First mover advantage. This won't change. Same as Xerox vs photocopy.
I use Grok myself but talk about ChatGPT in my blog articles when I write something related to LLMs.
> Only AI enthusiasts know about Grok
And more and more people on the right side of the political spectrum, who trust Elon's AI to be less "woke" than the competition.
Not true; I know at least one right-wing normie Boomer who uses Grok because it's the one Elon made.
Is anyone actually using Grok day to day? Does OpenAI even consider it competition? Last I checked, a couple weeks ago, Grok was getting better but still not a great experience, and it's too childish.
My totally uninformed opinion only from reading /r/locallama is that the people who love Grok seem to identify with those who are “independent thinkers” and listen to Joe Rogan’s podcast. I would never consider using a Musk technology if I can at all prevent it based on the damage he did to people and institutions I care about, so I’m obviously biased.
I use both Grok and ChatGPT on a daily basis. They have different strengths. Most of the time I prefer ChatGPT, but Grok is FAR better at answering questions about recent events or collecting data. In the second use case I combine both: collect data about stuff with Grok, then copy-paste the CSV into ChatGPT to analyze and plot.
In our work AI channel, I was surprised how many people prefer grok over all the other models.
Did they change the system prompt? Because it was basically "don't say anything bad about Elon or Trump". I'll take AI sycophancy over the real thing (actually I use openrouter.ai, but that's a different story).
No one is losing customers to grok. It's big on shit-twitter aka X and that's about it.
Ha! I actually fell for it and thought it was another fanboy :)
It won't take long, 2-3 minutes.
---
To add something to the conversation: for me, this mainly shows a strategy to keep users in chat conversations longer. Linguistic design as an engagement device.
Why would OpenAI want users to be in longer conversations? It's not like they're showing ads. Users are either free or paying a fixed monthly fee. Having longer conversations just increases costs for OpenAI and reduces their profit. Their model is more like a gym where you want the users who pay the monthly fee and never show up. If it were on the api where users are paying by the token that would make sense (but be nefarious).
> It's not like they're showing ads.
Not yet. But the "buy this" button is already in the code of the back end, according to online reports that I cannot verify.
Official word is here: https://help.openai.com/en/articles/11146633-improved-shoppi...
If I were Amazon, I wouldn't sleep so well anymore.
At the moment they're in the "get people used to us" phase still, reasonable rates, people get more than their money's worth out of the service, and as another commenter pointed out, ChatGPT is a household name unlike Grok or Gemini or the other competition thanks to being the first mover.
However, just like all the other disruptive services in the past years - I'm thinking of Netflix, Uber, etc - it's not a sustainable business yet. Once they've tweaked a few more things and the competition has run out of steam, they'll start updating their pricing, probably starting with rate limits and different plans depending on usage.
That said, I'm no economist or anything; Microsoft is also pushing their AI solution hard, and they have their tentacles in a lot of different things already, from consumer operating systems to Office to corporate email, and they're pushing AI in there hard. As is Google. And unlike OpenAI, both Microsoft and Google get the majority of their money from other sources, or if they're really running low, they can easily get billions from investors.
That is, while OpenAI has the first-mover advantage, the competition has a longer financial runway.
(I don't actually know whether MS and Google use, license, or pay OpenAI, though.)
> Their model is more like a gym where you want the users who pay the monthly fee and never show up. If it were on the api where users are paying by the token that would make sense (but be nefarious).
When the models reach a clear plateau where more training data doesn't improve it, yes, that would be the business model.
Right now, when training data is the most sought-after asset for LLMs, after they've exhausted ingesting the whole of the internet, books, videos, etc., the best model for them is to get people to supply the training data, give their thumbs up/down, and keep the data proprietary in their walled garden. No other LLM company will have this data; it's not publicly available, and it's OpenAI's best chance at a moat (if one will ever exist for LLMs).
It could be as simple as something like, someone previously at Instagram decided to join OpenAI and turns out nobody stopped him. Or even, Sam liked the idea.
Likely they need the engagement numbers to show to investors.
Though it’s hard to imagine how huge their next round would have to be, given what they’ve raised already.
So users come to depend on ChatGPT.
So they run out of free tokens and buy a subscription to continue using the "good" models.
I ask it a question and it starts prompting me, trying to keep the convo going. At first, out of politeness, I tried to keep things going, but now I just ignore it.
Possibly to get more training data.
This works for me in Customize ChatGPT:
What traits should ChatGPT have?
- Do not try to engage through further conversation
Yeah, I found it to be clear engagement bait; however, it is interesting and helpful in certain cases.
This is the message that got me with 4o! "It won't take long about 3 minutes. I'll update you when ready"
I had a similar thought: glazing is the infinite scroll of AI.
What's it called, Variable Ratio Incentive Scheduling?
Hey, that's good work; we're almost there. Do you want me to suggest one more tweak that will improve the outcome?
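The term being reached for above is a variable-ratio reinforcement schedule, the operant-conditioning pattern slot machines use. Here is a toy sketch (arbitrary numbers, purely illustrative) of why it is stickier than a predictable payout:

    import random

    def fixed_ratio(n_actions: int, ratio: int = 4) -> list[int]:
        """Reward after every ratio-th action: predictable, easy to quit."""
        return [a for a in range(n_actions) if (a + 1) % ratio == 0]

    def variable_ratio(n_actions: int, mean_ratio: int = 4) -> list[int]:
        """Reward after a random number of actions averaging mean_ratio.

        The unpredictability is the hook: the very next action is always
        plausibly the one that pays out, so the behavior resists extinction.
        """
        rewards = []
        countdown = random.randint(1, 2 * mean_ratio - 1)
        for action in range(n_actions):
            countdown -= 1
            if countdown == 0:
                rewards.append(action)
                countdown = random.randint(1, 2 * mean_ratio - 1)
        return rewards

    print("fixed:   ", fixed_ratio(30))
    print("variable:", variable_ratio(30))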
I do think the blog post has a sycophantic vibe too. Not sure if that's intended.
I think it started here: https://www.youtube.com/watch?v=DQacCB9tDaw&t=601s. The extra-exaggerated fawny intonation is especially off-putting, but the lines themselves aren't much better.
Uuuurgghh, this is very much off-putting... however, it's very much in line with American culture, or at least American consumer-corporate whatsits. I've been in online calls with American representatives of companies, and they have the same emphatic, overly friendly and enthusiastic mannerisms too.
I mean, if that's genuine then great, but it's so uncanny to me that I can't take it at face value. I get the same with local sales and management types; they seem to have a forced/fake personality. Or maybe I'm just being cynical.
It also has an em-dash
A remarkable insight—often associated with individuals of above-average cognitive capabilities.
While the use of the em-dash has recently been associated with AI you might offend real people using it organically—often writers and literary critics.
To conclude it’s best to be hesitant and, for now, refrain from judging prematurely.
Would you like me to elaborate on this issue or do you want to discuss some related topic?
One of the biggest tells.
What's scary is how many people seem to actually want this.
What happens when hundreds of millions of people have an AI that affirms most of what they say?
They are emulating the behavior of every power-seeking mediocrity ever; people who crave affirmation above all else.
Lots of them practiced making daily affirmations on their own (indeed, an entire industry is dedicated to promoting and validating the habit) long before LLMs showed up to give them the appearance of having won over the enthusiastic support of a "smart" friend.
I am increasingly dismayed by the way arguments are conducted even among people in non-social-media social spaces, where A will prompt their favorite LLM to support their view and show it to B, who responds by prompting their own LLM to clap back at them, optionally in the style of e.g. Shakespeare (there's even an ad out that directly encourages this; it helps deflect attention from the underlying cringe and pettiness being sold) or DJT or Gandhi etc.
Our future is going to be a depressing memescape in which AI sock puppetry is completely normalized and openly starting one's own personal cult is mandatory for anyone seeking cultural or political influence. It will start with celebrities who will do this instead of the traditional pivot toward religion, once it is clear that one's youth and sex appeal are no longer monetizable.
I hold out hope that the folks who work DCO will just EPO the ‘net. But then, tis true I hope for weird stuff!
Sugar and fat trigger primal circuits that cause trouble when those food sources are unnaturally abundant.
Social media follows a similar pattern, but now with primal social and emotional circuits. It too causes trouble, IMO even larger and more damaging than food's.
I think this part of AI is going to be another iteration of this: taking a human drive, distilling it into its core and selling it.
Ask any young woman on a dating app?
sufficiently advanced troll becomes indistinguishable from the real thing. think about this as you gaze into the abyss.
You jest, but also I don't mind it for some reason. Maybe it's just me. But at least the overly helpful part in the last paragraph is actually helpful for follow-on. They could even make these into hyperlinks for faster follow-up prompts.
The other day, I had a bug I was trying to exorcise, and asked ChatGPT for ideas.
It gave me a couple, that didn't work.
Once I figured it out and fixed it, I reported the fix in a (what I understand to be misguided) attempt to help it learn alternatives, and it gave me an absolutely sickening gush about how damn cool I was for finding and fixing the bug.
I felt like this: https://youtu.be/aczPDGC3f8U?si=QH3hrUXxuMUq8IEV&t=27
I know that HN tends to steer away from purely humorous comments, but I was hoping to find something like this at the top. lol.
I've seen the same behavior in Gemini. Like, exactly the same. It is scary to think that this is no coincidence but the rational evolution of a model, as if this is precisely the reward model that any model will converge to, with all the consequences.
But what if I want an a*s-kissing assistant? Now I have to go back to paying good money to a human again.
Wonderfully done.
Is that you, GPT?
If that is Chat talking then I have to admit that I cannot differentiate it from a human speaking.
I had assumed this was mostly a result of training too much on Lex Fridman podcast transcripts.
Congrats on not getting downvoted for sarcasm!
you had me in the first half, lol