Comment by url00
3 months ago
I don't want a more conversational GPT. I want the _exact_ opposite. I want a tool with the upper limit of "conversation" being something like LCARS from Star Trek. This is quite disappointing as a current ChatGPT subscriber.
That's what the personality selector is for: you can just pick 'Efficient' (formerly Robot) and it does a good job of answering tersely?
https://share.cleanshot.com/9kBDGs7Q
FWIW I didn't like the Robot / Efficient mode because it would give very short answers without much explanation or background. "Nerdy" seems to be the best, except with GPT-5 instant it's extremely cringy like "I'm putting my nerd hat on - since you're a software engineer I'll make sure to give you the geeky details about making rice."
"Low" thinking is typically the sweet spot for me - way smarter than instant with barely a delay.
I hate its acknowledgement of its personality prompt. Try having a series of back-and-forth exchanges and each response is like “got it, keeping it short and professional. Yes, there are only seven deadly sins.” You get more prompt performance than actual answer.
9 replies →
I use Efficient or robot or whatever. It gives me a bit of sass from time to time when I subconsciously nudge it into taking a “stand” on something, but otherwise it’s very usable compared to the obsequious base behavior.
If only that worked for conversation mode as well. At least for me, and especially when it answers me in Norwegian, it will start off with all sorts of platitudes and whole sentences repeating exactly what I just asked. "Oh, so you want to do x, huh? Here is answer for x". It's very annoying. I just want a robot to answer my question, thanks.
At least it gives you an answer. It usually just restates the problem for me and then ends with “so let’s work through it together!” Like, wtf.
Repeating what is being asked is fine, I think; sometimes it thinks you want something different from what you actually want. What is annoying is the "that's an incredibly insightful question that delves into a fundamental..." type responses at the start.
At least for the Thinking model it's often still a bit long-winded.
Unfortunately, I also don't want other people to interact with a sycophantic robot friend, yet my picker only applies to my conversation
Hey, you leave my sycophantic robot friend alone.
Sorry that you can't control other people's lives & wants
36 replies →
You’re getting downvoted but I agree with the sentiment. The fact that people want a conversational robot friend is, I think, extremely harmful and scary for humanity.
Giving people what makes them feel good in the short term is not actually necessarily a good thing. See also: cigarettes, alcohol, gambling, etc.
Exactly. Stop fooling people into thinking there’s a human typing on the other side of the screen. LLMs should be incredibly useful productivity tools, not emotional support.
How would you propose we address the therapist shortage then?
Who ever claimed there was a therapist shortage?
5 replies →
It's a demand side problem. Improve society so that people feel less of a need for therapists.
2 replies →
I think therapists in training, or people providing crisis intervention support, can train/practice using LLMs acting as patients going through various kinds of issues. But people who need help should probably talk to real people.
Remember that a therapist is really a friend you are paying for.
Then make more friends.
1 reply →
outlaw therapy
I don't know why you're being downvoted. Denmark's health system is pretty good except for adult mental health. SOTA LLMs are definitely approaching a stage where they could help.
something something bootstraps
Food should only be for sustenance, not emotional support. We should only sell brown rice and beans, no more Oreos.
Oreos won't affirm your belief that suicide is the correct answer to your life problems, though.
1 reply →
The point the OP is making is that LLMs are not reliably able to provide safe and effective emotional support as has been outlined by recent cases. We're in uncharted territory and before LLMs become emotional companions for people, we should better understand what the risks and tradeoffs are.
11 replies →
Maybe there is a human typing on the other side, at least for some parts or all of certain responses. It's not been proven otherwise.
I think they get way more "engagement" from people who use it as their friend, and the end goal of subverting social media and creating the most powerful (read: profitable) influence engine on earth makes a lot of sense if you are a soulless ghoul.
It would be pretty dystopian if we got to the point where ChatGPT pushed (unannounced) advertisements to those people (the ones forming a parasocial relationship with it). Imagine someone complaining they're depressed and ChatGPT proposing doing XYZ activity, which is actually a disguised ad.
Outside such scenarios, that "engagement" would just be useless and would actually cost them more money than it makes.
Do you have reason to believe they are not doing this already?
4 replies →
I use the "Nerdy" tone along with the Custom Instructions below to good effect:
"Please do not try to be personal, cute, kitschy, or flattering. Don't use catchphrases. Stick to facts, logic, reasoning. Don't assume understanding of shorthand or acronyms. Assume I am an expert in topics unless I state otherwise."
This. When I go to an LLM, I'm not looking for a friend, I'm looking for a tool.
Keeping faux relationships out of the interaction never lets me slip into the mistaken attitude that I'm dealing with a colleague rather than a machine.
I don't know about you, but half my friends are tools.
You can just tell the AI to not be warm and it will remember. My ChatGPT used the phrase "turn it up to eleven" and I told it never to speak in that manner ever again and its been very robotic ever since.
I added the custom instruction "Please go straight to the point, be less chatty". Now it begins every answer with: "Straight to the point, no fluff:" or something similar. It seems to be perfectly unable to simply write out the answer without some form of small talk first.
Aren't these still essentially completion models under the hood?
If so, my understanding for these preambles is that they need a seed to complete their answer.
2 replies →
I had a similar instruction and in voice mode I had it trying to make a story for a game that my daughter and I were playing where it would occasionally say “3,2,1 go!” or perhaps throw us off and say “3,2,1, snow!” or other rhymes.
Long story short it took me a while to figure out why I had to keep telling it to keep going and the story was so straightforward.
This is very funny.
Since switching to robot mode I haven’t seen it say “no fluff”. Good god I hate it when it says no fluff.
I system-prompted all my LLMs "Don't use cliches or stereotypical language." and they like me a lot less now.
They really like to blow sunshine up your ass, don't they? I have to do the same type of stuff. It's like I have to assure it that I'm a big boy and can handle mature content, like programming in C.
Same. If I tell it to choose A or B, I want it to output either “A” or “B”.
I don’t want an essay of 10 pages about how this is exactly the right question to ask
10 pages about the question means that the subsequent answer is more likely to be correct. That's why they repeat themselves.
But that goes in the chain of thought, not the response
citation needed
8 replies →
LLMs have essentially no capability for internal thought. They can't produce the right answer without doing that.
Of course, you can use thinking mode and then it'll just hide that part from you.
No, even in thinking mode it will still be sycophantic and write huge essays as output.
It can work without, I just have to prompt it five times increasingly aggressively and it’ll output the correct answer without the fluff just fine.
They already do hide a lot from you when thinking; this person wants them to hide more instead of doing their 'thinking' 'out loud' in the response.
Zachary Stein makes the case that conferring social statuses on Artificial Intelligences is an existential risk. https://cic.uts.edu.au/events/collective-intelligence-edu-20...
Your comment reminded me of this article because of the Star Trek comparison. Chatting is inefficient, isn't it?
[1] https://jdsemrau.substack.com/p/how-should-agentic-user-expe...
Exactly, and it doesn't help with agentic use cases that tend to solve problems in one shot. For example, there is zero requirement for a model to be conversational when it is trying to triage a support question into preset categories.
Are you aware that you can achieve that by going into Personalization in Settings and choosing one of the presets or just describing how you want the model to answer in natural language?
Yea, I don't want something trying to emulate emotions. I don't want it to even speak a single word, I just want code, unless I explicitly ask it to speak on something, and even in that scenario I want raw bullet points, with concise useful information and no fluff. I don't want to have a conversation with it.
However, being more humanlike, even if it results in an inferior tool, is the top priority because appearances matter more than actual function.
To be fair, of all the LLM coding agents, I find Codex+GPT5 to be closest to this.
It doesn't really offer any commentary or personality. It's concise and doesn't engage in praise or "You're absolutely right". It's a little pedantic though.
I keep meaning to re-point Codex at DeepSeek V3.2 to see if it's a product of the prompting only, or a product of the model as well.
It is absolutely a product of the model, GPT-5 behaves like this over API even without any extra prompts.
1 reply →
Just put it in your system prompt?
Enable "Robot" personality. I hate all the other modes.
Gemini is very direct.
Engagement Metrics 2.0 are here. Getting your answer in one shot is not cool anymore. You need to waste as much time as possible on OpenAI's platform. Enshittification is now more important than AGI.
This is the AI equivalent of every recipe blog filled with 1000 words of backstory before the actual recipe just to please the SEO Gods
The new boss, same as the old boss
Things really felt great 2023-2024
Exactly. The GPT 5 answer is _way_ better than the GPT 5.1 answer in the example. Less AI slop, more information density please.
And utterly unsurprising given their announcement last month that they were looking at exploring erotica as a possible revenue stream.
[1] https://www.bbc.com/news/articles/cpd2qv58yl5o
Everyone else provides these services anyway, and many places offer using ChatGPT or Claude models despite the current limits (because they work with "jailbreaking" prompts), so they likely decided to stop pretending and just let that stuff in.
What's the problem, tbh.