Comment by url00
5 hours ago
I don't want a more conversational GPT. I want the _exact_ opposite. I want a tool with the upper limit of "conversation" being something like LCARS from Star Trek. As a current ChatGPT subscriber, I find this quite disappointing.
That's what the personality selector is for: you can just pick 'Efficient' (formerly Robot) and it does a good job of answering tersely?
https://share.cleanshot.com/9kBDGs7Q
FWIW I didn't like the Robot / Efficient mode because it would give very short answers without much explanation or background. "Nerdy" seems to be the best, except with GPT-5 instant it's extremely cringy like "I'm putting my nerd hat on - since you're a software engineer I'll make sure to give you the geeky details about making rice."
"Low" thinking is typically the sweet spot for me - way smarter than instant with barely a delay.
I hate its acknowledgement of its personality prompt. Have a series of back-and-forth exchanges and each response is like "got it, keeping it short and professional. Yes, there are only seven deadly sins." You get more performance of the prompt than actual answer.
At least for the Thinking model it's often still a bit long-winded.
If only that worked for conversation mode as well. At least for me, and especially when it answers me in Norwegian, it will start off with all sorts of platitudes and whole sentences repeating exactly what I just asked. "Oh, so you want to do x, huh? Here is answer for x". It's very annoying. I just want a robot to answer my question, thanks.
Unfortunately, I also don't want other people to interact with a sycophantic robot friend, yet my picker only applies to my conversation
Hey, you leave my sycophantic robot friend alone.
Sorry that you can't control other people's lives & wants.
Exactly. Stop fooling people into thinking there’s a human typing on the other side of the screen. LLMs should be incredibly useful productivity tools, not emotional support.
Food should only be for sustenance, not emotional support. We should only sell brown rice and beans, no more Oreos.
The point the OP is making is that LLMs are not reliably able to provide safe and effective emotional support as has been outlined by recent cases. We're in uncharted territory and before LLMs become emotional companions for people, we should better understand what the risks and tradeoffs are.
How would you propose we address the therapist shortage then?
I think therapists in training, or people providing crisis intervention support, can train/practice using LLMs acting as patients going through various kinds of issues. But people who need help should probably talk to real people.
Who ever claimed there was a therapist shortage?
outlaw therapy
something something bootstraps
Maybe there is a human typing on the other side, at least for some parts or all of certain responses. It hasn't been proven otherwise.
You can just tell the AI not to be warm and it will remember. My ChatGPT used the phrase "turn it up to eleven" and I told it never to speak in that manner ever again, and it's been very robotic ever since.
I system-prompted all my LLMs "Don't use cliches or stereotypical language." and they like me a lot less now.
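For what it's worth, you can bake that in over the API too instead of relying on memory. A minimal sketch with the official `openai` Python client (the model name is just an example; substitute whatever you actually use):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-5.1",  # example model name, not a recommendation
        messages=[
            # Standing instruction, sent with every request,
            # equivalent to a custom-instructions entry in the UI
            {"role": "system", "content": "Don't use cliches or stereotypical language. No small talk; answer directly."},
            {"role": "user", "content": "How do I cook brown rice?"},
        ],
    )
    print(response.choices[0].message.content)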
They really like to blow sunshine up your ass, don't they? I have to do the same type of stuff. It's like I have to assure it that I'm a big boy and can handle mature content like programming in C.
I added the custom instruction "Please go straight to the point, be less chatty". Now it begins every answer with "Straight to the point, no fluff:" or something similar. It seems perfectly incapable of simply writing out the answer without some form of small talk first.
Your comment reminded me of this article [1] because of the Star Trek comparison. Chatting is inefficient, isn't it?
[1] https://jdsemrau.substack.com/p/how-should-agentic-user-expe...
This. When I go to an LLM, I'm not looking for a friend, I'm looking for a tool.
Keeping faux relationships out of the interaction never lets me slip into the mistaken attitude that I'm dealing with a colleague rather than a machine.
I don't know about you, but half my friends are tools.
Are you aware that you can achieve that by going into Personalization in Settings and choosing one of the presets or just describing how you want the model to answer in natural language?
I think they get way more "engagement" from people who use it as their friend, and the end goal of subverting social media and creating the most powerful (read: profitable) influence engine on earth makes a lot of sense if you are a soulless ghoul.
It would be pretty dystopian if we get to the point where ChatGPT pushes (unannounced) advertisements to those people (the ones forming a parasocial relationship with it). Imagine someone complaining they're depressed and ChatGPT proposing XYZ activity that is actually a disguised ad.
Outside of scenarios like that, the "engagement" would just be useless, actually costing them more money than it makes.
Do you have reason to believe they are not doing this already?
Enable "Robot" personality. I hate all the other modes.
Same. If I tell it to choose A or B, I want it to output either "A" or "B".
I don’t want an essay of 10 pages about how this is exactly the right question to ask
10 pages about the question means that the subsequent answer is more likely to be correct. That's why they repeat themselves.
citation needed
LLMs have essentially no capability for internal thought. They can't produce the right answer without doing that.
Of course, you can use thinking mode and then it'll just hide that part from you.
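Over the API this is an explicit knob rather than a hidden mode, if I'm not mistaken. A minimal sketch, assuming a model that accepts the `reasoning_effort` parameter (the model name is a placeholder):

    from openai import OpenAI

    client = OpenAI()

    # "low" buys a little hidden reasoning with barely any delay; the
    # thinking tokens are used internally but not shown in the reply.
    response = client.chat.completions.create(
        model="gpt-5.1",  # placeholder; any reasoning-capable model
        reasoning_effort="low",
        messages=[{"role": "user", "content": "Choose A or B: tabs or spaces?"}],
    )
    print(response.choices[0].message.content)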
Exactly. The GPT 5 answer is _way_ better than the GPT 5.1 answer in the example. Less AI slop, more information density please.
Engagement Metrics 2.0 are here. Getting your answer in one shot is not cool anymore. You need to waste as much time as possible on OpenAI's platform. Enshittification is now more important than AGI.
This is the AI equivalent of every recipe blog filled with 1000 words of backstory before the actual recipe just to please the SEO Gods
The new boss, same as the old boss
Things really felt great in 2023-2024.
And utterly unsurprising given their announcement last month that they were looking at exploring erotica as a possible revenue stream.
[1] https://www.bbc.com/news/articles/cpd2qv58yl5o
Yeah, I don't want something trying to emulate emotions. I don't want it to speak a single word; I just want code, unless I explicitly ask it to speak on something, and even then I want raw bullet points with concise, useful information and no fluff. I don't want to have a conversation with it.
However, being more humanlike, even if it results in an inferior tool, is the top priority because appearances matter more than actual function.
To be fair, of all the LLM coding agents, I find Codex+GPT5 to be closest to this.
It doesn't really offer any commentary or personality. It's concise and doesn't engage in praise or "You're absolutely right". It's a little pedantic though.
I keep meaning to re-point Codex at DeepSeek V3.2 to see if it's a product of the prompting only, or a product of the model as well.
It is absolutely a product of the model; GPT-5 behaves like this over the API even without any extra prompts.
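If anyone wants to test the prompting-vs-model question without rewiring Codex itself: DeepSeek's API is OpenAI-compatible, so you can send an identical bare request (no system prompt at all) to both and compare the tone. A sketch assuming a `DEEPSEEK_API_KEY` environment variable, DeepSeek's documented base URL, and example model names:

    import os
    from openai import OpenAI

    messages = [{"role": "user", "content": "Choose A or B: tabs or spaces?"}]

    # Same bare request against OpenAI...
    a = OpenAI().chat.completions.create(model="gpt-5", messages=messages)

    # ...and against DeepSeek's OpenAI-compatible endpoint.
    deepseek = OpenAI(
        api_key=os.environ["DEEPSEEK_API_KEY"],
        base_url="https://api.deepseek.com",
    )
    b = deepseek.chat.completions.create(model="deepseek-chat", messages=messages)

    print("GPT-5:", a.choices[0].message.content)
    print("DeepSeek:", b.choices[0].message.content)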