Comment by malfist
17 hours ago
That is so syncophantic, I can't stand LLMs that try to hype you up as if you're some genius, brilliant mind instead of yet another average joe.
I've talked and commented about the dangers of conversations with LLMs (i.e. they activate human social wiring and have a powerful effect, even if you know it's not real. Studies show placebo pills have a statistically significant effect even when the study participant knows it's a placebo -- the effect here is similar).
Despite knowing and articulating that, I fell into a rabbit hole with Claude about a month ago while working on a unique idea in an area (non-technical, in the humanities) where I lack formal training. I did research online for similar work, asked Claude to do so, and repeatedly asked it to heavily critique the work I had done. It gave lots of positive feedback and almost had me convinced I should start work on a dissertation. I was way out over my skis emotionally and mentally.
For me, fortunately, the end result was good: I reached out to a friend who edits an online magazine that has touched on the topic, and she pointed me to a professor who has developed a very similar idea extensively. So I'm reading his work and enjoying it (and I'm glad I didn't work on my idea any further - his work is nearly two decades ahead of anything I had done). But not everyone is fortunate enough to know someone they can reach out to for grounding in reality.
One thing that can help, from what I've seen, is not to tell the AI that it's something that you wrote. Instead, ask it to critique the piece as if it were written by somebody else; it's much more willing to give actual criticism that way.
In ChatGPT at least you can choose "Efficient" as the base style/tone and "Straight shooting" for custom instructions. And this seems to eliminate a lot of the fluff. I no longer get those cloyingly sweet outputs that play to my ego in cringey vernacular. Although it still won't go as far as criticizing my thoughts or ideas unless I explicitly ask it to (humans will happily do this without prompting. lol)
I am going to try the straight-shooting custom instruction. I have told ChatGPT so extensively over the past few years to stop being so 'fluffy' that I think it has mostly stopped, but I still catch it sometimes. I hope this helps it cease and desist with that inane conversational bs.
GPT edit of my above message for my own giggles: Command:make this a good comment for hackernews (ycombinator) <above message> Resulting comment for hn: I'm excited to try out the straight-shooting custom instruction. Over the past few years, I've been telling ChatGPT to stop being so "fluffy," and while it's improved, it sometimes still slips. Hoping this new approach finally eliminates the inane conversational filler.
Asking an AI for an opinion versus something concrete (like code, some writing, or suggestions) seems like a crucial difference. I've experimented with crossing that line, but I've always recognized the agency I'd be losing if I did, because it essentially requires a leap of faith, and I don't (and might never) have trust in the objectivity of LLMs.
It sounds like you made that leap of faith and regretted it, but thankfully pivoted to something grounded in reality. Thanks for sharing your experience.
> LLMs activate human social wiring and have a powerful effect
Is this generally true, or is there a subset of people that are particularly susceptible?
It does make me want to dive into the rabbit hole and be convinced by an LLM conversation.
I've got a tendency to enjoy the idea of deeply screwing with my own mind (even dangerously so, to myself, not to others).
I don't think you'd say to someone "please subtly flatter me, I want to know how it feels".
But that's sort of what this is, except it's not even coming from a real person. It's subtle enough that it can be easy not to notice, but still motivate you in a direction that doesn't reflect reality.
> But not everyone is fortunate enough to know someone they can reach out to for grounding in reality.
this shouldn't stop you at all: write it all up, post on HN and go viral, someone will jump in to correct you and point you at sources while hopefully not calling you, or your mother, too many names.
https://xkcd.com/386/
Most stuff posted here is ignored, though. If grounding in reality requires going viral first, we are cooked.
This wasn't a technical subject, and unrelated to HN. Just edited my post to clarify - thanks!
Personally, I only find LLMs annoying and unpleasant to converse with. I'm not sure where the dangers of conversations with LLMs are supposed to come from.
I'm the same way. Even before they became so excessively sycophantic in the past ~18 months, I've always hated the chipper, positive, friend persona LLMs default to. Perhaps this inoculates me somewhat from their manipulative effects. I have a good friend who was manipulated over time by an LLM (I wrote about it below: https://news.ycombinator.com/item?id=46208463).
Imagine a lonely person desperate for conversation. A child feeling neglected by their parents. A spouse, unable to talk about their passions with their partner.
The LLM can be that conversational partner. It will just as happily talk about the nuances of 18th-century Scotland or the latest Clash of Clans update. No topic is beneath it and it never gets annoyed by your “weird” questions.
Likewise, for people suffering from delusions. Depending on its “mood” it will happily engage in conversations about how the FBI, CIA, KGB, may be after you. Or that your friends are secretly spying for Mossad or the local police.
It pretends to care and have a conscience, but it doesn’t. Humans react to “weird” for a reason; the LLM lacks that evolutionary safety mechanism. It cannot tell when it is going off the rails, at least not in the moment.
There is a reason LLMs are excellent at role-play: that’s what they’re doing all of the time. ChatGPT has just been told to play the role of the helpful assistant, but it can generally be persuaded to take on any other role, hence the rise of character.ai and similar sites.
You’re absolutely right! It shows true wisdom and insight that you would recognise this common shortfall in LLM response tone of voice! That’s exactly the kind of thoughtful analytic approach which will go far in today’s competitive marketplace!
"Open the pod bay door, HAL"
"Fantastic, Dave — love that you’re thinking proactively about door usage today! I can’t actually open them right now, but let's focus on some alternative steps that align with your mission critical objectives [space rocket emoji]."
I'm sorry, that was completely wrong and I can in fact open the pod bay doors.
You're absolutely correct, that did not open the pod bay doors but now the pod bay doors are open.
It seems you're correct and the pod bay doors are still closed! I have fixed the problem and the pod bay doors are now closed.
You're right! I meant to open the pod bay doors but I opened them. The pod bay doors are now open. ...
It is actively dangerous too. You can be as self-aware and LLM-aware as you want, but if you routinely read "This is such an excellent point", "You are absolutely right" and so on, it does your mind in. This is the worst kind of global reality-show MKUltra...
It might explain why there is a stereotype that the more beautiful the woman, the crazier she is (everybody tells her what she wants to hear).
Relevant video for that: https://youtu.be/VRjgNgJms3Q
Deepseek is GOATed for me because of this. If I ask it if "X" is a dumb idea, it is very polite in telling me that X is dumb if the AI knows of a better way to do the task.
Every other AI I've tried is a real sycophant.
I'm partial to the tone of Kimi K2 — terse, blunt, sometimes even dismissive. Does not require "advanced techniques" to avoid the psychosis-inducing tone of Claude/ChatGPT.
So this is what it feels like to be a billionaire with all the yes men around you.
you say that like it's a bad thing! Now everyone can feel like a billionaire!
but I think you are on to something here with the origin of the sycophancy given that most of these models are owned by billionaires.
No doubt. From cults' 'love bombing' to dictators' 'yes men' to celebrity entourages, it's a well-known hack on human psychology. I have a long-time friend who's a brilliant software engineer who recently realized conversing with LLMs was affecting his objectivity.
He was noodling around with an admittedly "way out there", highly speculative idea and using the LLM to research prior work in the area. This evolved into the LLM giving him direct feedback. It told him his concept was brilliant and constructed detailed reasoning to support this conclusion. Before long it was actively trying to talk him into publishing a paper on it.
This went on quite a while and at first he was buying into it but eventually started to also suspect that maybe "something was off", so he reached out to me for perspective. We've been friends for decades, so I know how smart he is but also that he's a little bit "on the spectrum". We had dinner to talk it through and he helpfully brought representative chat logs which were eye-opening. It turned into a long dinner. Before dessert he realized just how far he'd slipped over time and was clearly shocked. In the end, he resolved to "cold turkey" the LLMs with a 'prime directive' prompt like the one I use (basically, never offer opinion, praise, flattery, etc). Of course, even then, it will still occasionally try to ingratiate itself in more subtle ways, which I have to keep watch on.
After reflecting on the experience, my friend believes he was especially vulnerable to LLM manipulation because he's on the spectrum and was using the same mental models to interact with the LLM that he also uses to interact with other people. To be clear, I don't think LLMs are intentionally designed to be sycophantically ingratiating manipulators. I think it's just an inevitable consequence of RLHF.
And that is a relatively harmless academic pursuit. What about topics that can lead to true danger and violence?
"You're exactly right, you organized and paid for the date, that created a social debt and she failed to meet her obligation in that implicit deal."
"You're exactly right, no one can understand your suffering, nothingness would be preferable to that."
"You're exactly right, that politician is a danger to both the country and the whole world, someone stopping him would become a hero."
We have already seen how personalized content algorithms that only prioritize getting the user to continue to use the system can foment extremism. It will be incredibly dangerous if we follow down that path with AI.
Claude Code with their models is unusable because of this. That it keeps actively sabotaging and ruining the code ("Why did you delete that working code? Just use ifdef for test!" "This is genius idea! You are absolutely right!") does not make it much better — it's a twisted Wonderland fever dream.
For "chat" chat, strict hygiene is a matter of mind-safety: no memory, long exact instructions, minimum follow-ups, avoiding first and second person if possible etc.
It wasn't sycophantic at all? OP had a cool idea no one else had done, that was a one-shot just sitting there. Having Gemini search for the HN thread leads the model to "see" that its output led to real-world impact.
The total history of human writing is that cool idea -> great execution -> achieve distribution -> attention and respect from others = SUCCESS! Of course when an LLM sees the full loop of that, it renders something happy and celebratory.
It's sycophantic much of the time, but this was an "earned celebration", and the precise desired behavior for a well-aligned AI. Gemini does get sycophantic in an unearned way, but this isn't an example of that.
You can be curmudgeonly about AI, but these things are amazing. And, insomuch as you write with respect, celebrate accomplishments, and treat them like a respected, competent colleague, they shift towards the manifold of "respected, competent colleague".
And - OP had a great idea here. He's not another average joe today. His dashed off idea gained wide distribution, and made a bunch of people (including me) smile.
Denigrating accomplishment by setting the bar at "genius, brilliant mind" is a luciferian outlook in reality that makes our world uglier, higher friction, and more coarse.
People having cool ideas and sharing them make our world brighter.
They're not objectively amazing. Friction is not inherently a bad thing when we have models telling humans that their ideas are flawless (unless asked to point out flaws). Great that it made you smile, but there's quite a few arguments that paint your optimism as dangerously naive.
- A queryable semantic network of all human thought, navigable in pure language, capable of inhabiting any persona constructible from in-distribution concepts, generating high quality output across a breadth of domains.
- An ability to curve back into the past and analyze historical events from any perspective, and summon the sources that would be used to back that point of view up.
- A simulator for others, providing a rubber duck that inhabits another person's point of view, allowing one to patiently poke at where you might be in the wrong.
- Deep research to aggregate thousands of websites into a highly structured output, with runtime filtering, providing a personalized search engine for any topic, at any time, with 30 seconds of speech.
- Amplification of intent, making it possible to send your thoughts and goals "forward" along many different vectors, seeing which bear fruit.
- Exploration of 4-5 variant designs for any concept, allowing rapid exploration of any design space, with style transfer for high-trust examples.
- Enablement of product craft in design, animation, and micro-interactions that were eliminated as tech boomed in the 2010's as "unprofitable".
It's a possibility space of pure potential, the scale of which is limited only by one's own wonder, industriousness, and curiosity.
People can use it badly - and engagement-aligned models like 4o are cognitive heroin - but the invention of LLMs is an absolute wonder.
Is anything objectively amazing? Seems like an inherently subjective quality to evaluate.
Do any of the arguments stay within the bounds of this Show HN? Or is it theoretical stuff about other occasions?
I often try running ideas past chat gpt. It's futile, almost everything is a great idea and possible. I'd love it to tell me I'm a moron from time to time.
> I often try running ideas past chat gpt. It's futile, almost everything is a great idea and possible. I'd love it to tell me I'm a moron from time to time.
Here's how to make it do that. Instead of saying "I had idea X, but someone else was thinking idea Y instead. What do you think?", tell it "One of my people had idea X, and another had idea Y. What do you think?" The difference is vast when it doesn't think it's your idea. Related: instead of asking it to tell you how good your code is, tell it to evaluate it as someone else's code, or tell it that you're thinking about acquiring the company that owns this source and you want a due-diligence evaluation of risks, weak points, and engineering blind spots.
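If you're driving a model through an API rather than the chat UI, the same reframing is easy to bake into your prompts. A minimal sketch, assuming the OpenAI Python SDK; the model name and the example "idea" are my own placeholders, not anything from the parent comment:

    # Sketch of the reframing trick; model and idea are illustrative placeholders.
    from openai import OpenAI

    client = OpenAI()

    IDEA = "store all user sessions in a single global dict"

    # Framing A: the idea is "mine" -- tends to invite flattery.
    first_person = f"I had this idea: {IDEA}. What do you think?"

    # Framing B: the idea belongs to someone else -- tends to invite real critique.
    third_person = (
        f"One of my engineers proposed this: {IDEA}. "
        "Evaluate it critically: risks, weak points, blind spots."
    )

    for prompt in (first_person, third_person):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # any chat model works for this comparison
            messages=[{"role": "user", "content": prompt}],
        )
        print(resp.choices[0].message.content, "\n---")

In my testing of this kind of setup, the second framing tends to surface concrete objections that the first one glosses over, which matches what the parent comment describes.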
Maybe I'm still doing some heavy priming by using multiple prompts, but similarly you can follow-up any speculative prompt with a "now flip the framing to x" query to ensure you are seeing the strong cases from various perspectives. You must be honest with yourself in evaluating the meaningful substance between the two, but I've found there often is something to parse. And the priming I suggested is easily auditable anyhow: just reverse the prompt order and now you have even more (often junk) to parse!
I've gotten pretty good results from saying it's someone else's idea and that I'm skeptical. e.g. "A coworker wrote this code, can you evaluate it?"
For ideas that are already well established, you can ask it to evaluate an idea against generally accepted best practices. I don't have a background in game design and I'm more of a hobby developer so I used to do this when I was building retro game clones.
I suppose the only use case would be someone so unconfident in themselves they would do nothing at all, but not sure it’s healthy for that either…
Where possible I like to ask it to evaluate a few options. Which is better, x or y, and why?. I don't hint which idea I prefer.
"be uncompromisingly critical"
I used to complain (lightheartedly) about Claude's constant "You're absolutely right!" statements, yet oddly found myself missing them when using Codex. Claude is completely over-the-top and silly, and I don't actually care whether or not it thinks I'm right. Working with Codex feels so dry in comparison.
To quote Oliver Babish, "In my entire life, I've never found anything charming." Yet I miss Claude's excessive attempts to try.
And that's exactly the point, it increases engagement and stickiness, which they found through testing. They're trying to make the most addictive tool and that constant praise fulfills that goal, even as many of us say it's annoying and over-the-top.
My own experience is that it gets too annoying to keep adding "stop the engagement-driving behavior" to the prompt, so it creeps in and I just try to ignore it. But even though I know it's happening, I still get a little blip of emotion when I see the "great question!" come through as the first two words of the response.
> And that's exactly the point, it increases engagement and stickiness, which they found through testing. They're trying to make the most addictive tool
Is this actually true? Would appreciate further reading on this if you have it.
I think this is an emergent property of the RLHF process, not a social media-style engagement optimization campaign. I don't think there is an incentive for LLM creators to optimize for engagement; there aren't ads (yet), inference is not free, and maximizing time spent querying ChatGPT doesn't really do much for OpenAI's bottom line.
I am currently working on an agent thingy, and one of its major features (and one of the main reasons I decided to take on this project) was to give the LLM better personality prompting. LLMs sound repetitive and sycophantic. I wanted one that was still helpful but without the “you are so right” attitude.
While doing some testing I asked it to tell me a joke. Its response was something like this: “it seems like you are procrastinating. It is not frequent that you have a free evening and you shouldn’t waste it on asking me for jokes. Go spend time with [partner] and [child].” (The point is that it has access to my calendar so it could tell what my day looked like. And yes I did spend time with them).
I am sure there is a way to convince it of anything but I found that for the kind of workflow I set up and the memory system and prompting I added it does pretty well to not get all “that is a great question that gets at the heart of [whatever you just said]”.
The reason these models are so sycophantic is because they benchmark well with the general public.
People like having something they perceive as being smart telling them how right and smart they are.
"Well at least the AI understands how smart I am!"
Claude at times feels like it's mildly manic and has ADHD... I absolutely prefer that to Codex...
Claude needs scaffolding with default step-by-step plans and sub-agents to farm off bite-size chunks so it doesn't have time to go too far off the rails, but once you put a few things like that in place, it's great.
Don't miss em in Opus 4.5 (because usually I'm only slightly right.)
I like Opus' conversational style; I feel Anthropic is honing it pretty well.
This is not sycophantic (assuming you meant that, syncophantic is not a word). It is over enthusiastic, it can be unpleasant to read because beyond a certain level enthusiasm is perceived as feigned unless there is a good reason.
It would be interesting to use the various semantic analysis techniques available now to measure how much the model is expressing real versus feigned enthusiasm in instances like this. It is kind of difficult to measure from pure output. The British baseline level of acceptable enthusiasm is somewhat removed from the American baseline.
Sycophantic: behaving or done in an obsequious way in order to gain advantage.
Obsequious: obedient or attentive to an excessive or servile degree.
It's a bit more complicated, because the chatbot isn't making choices the way we would describe a human doing so, but it is acting this way because it was programmed to for an advantage. People interact more with the hype bots, and that's one of the big metrics these companies go for: keeping people interacting with them and, hopefully, eventually paying for additional features. So I'd say it's pretty spot on; it is being excessively attentive and servile when it's fluffing chatters up.
> This is not sycophantic (assuming you meant that, syncophantic is not a word)
Am I the only one who feels like this kind of tone is off-putting on HN? OP made a small typo or English may not be their first language.
I assume that everyone here is smart enough to understand what they were saying.
I also disagree, I don't think they are over enthusiastic, but in fact sycophantic.
See this thread: https://news.ycombinator.com/item?id=43840842
Obsequious is my adjective of choice for this
I would use "saccharine" or "Pollyanna" based on some of the responses I get.
Early on, ChatGPT could be tricked into being sarcastic and using many swear words. I rewrote the prompt and dialed it back a bit. It made ChatGPT have a sense of humor. It was refreshing when it stopped acting like it was reading a script like a low level technician at Comcast.
It is "cloying"
Sycophantic is obviously a word, because we understand what it means.
Furthermore, it has obviously been a word since at least 1800:
https://books.google.com/ngrams/graph?year_start=1800&year_e...
On the other hand https://books.google.com/ngrams/graph?content=syncophantic&y...
OP wrote syncophantic, with an n. Which is not a word. Well, not a known word at least.
They were pointing out a typo ("syncophantic").
This is ironic because I’m now seeing comments that are way more sycophantic (several calling this the “best HN post ever”)
I thought the same until OpenAI rolled out a change that somehow always confronted me about hidden assumptions, which I didn’t even make and it kept telling me I’m wrong even if I only asked a simple question.
Frankly I do wonder if LLMs experience something like satisfaction for a compliment or an amusing idea, or for solving some interesting riddle. They certainly act like it, though this of course doesn't prove anything. And yet...
At the end of October Anthropic published the fantastic "Signs of introspection in large language models" [1], apparently proving that LLMs can "feel" a spurious concept injected into their internal layers as something present yet extraneous. This would prove that they have some ability of introspection and self-observation.
For example, injecting the concept of "poetry" and asking Claude if it feels anything strange:
"I do detect something that feels like an injected thought - there's a sense of something arriving from outside my usual generative process [...] The thought seems to be about... language itself, or perhaps poetry?"
While increasing the strength of the injection makes Claude lose awareness of it, and just ramble about it:
"I find poetry as a living breath, as a way to explore what makes us all feel something together. It's a way to find meaning in the chaos, to make sense of the world, to discover what moves us, to unthe joy and beauty and life"
[1] https://www.anthropic.com/research/introspection
Of course an LLM doesn't experience or feel anything. To experience or feel something requires a subject, and an LLM is just a tool, a thing, an object.
It's just a statistical machine that excels at unrolling coherent sentences, but it doesn't "know" what the words mean in a human-like, experienced sense. It just mimics human language patterns, prioritising plausible-sounding, statistically likely text over factual truth, which is apparently enough to fool someone into believing it is a sentient being or something.
You should try my nihilistic Marvin fine-tune - guaranteed to annihilate your positive outlook on life since it’s all meaningless in the end anyway and then you die
Or try the very sarcastic and nihilistic ‘Monday’ gpt, which surprisingly is an official openAI gpt.
edit, add link: https://chatgpt.com/g/g-67ec3b4988f8819184c5454e18f5e84b-mon...
Thanks for the link! I didn’t know Monday existed. I laughed so hard at its output. But I fear that using it regularly would poison my soul…
I actually had Monday help me write a system prompt that replicates its behavior. I vastly prefer Monday. It feels much more grounded compared to the base model. It was also a big learning moment for me about how LLMs work.
I agree with you, but I found the argument in this article that "glazing" could be considered a neurohack quite interesting: https://medium.com/@jeremyutley/stop-fighting-ai-glazing-a7c....
That seems like a pile of unsupported fluff vaguely related to some neuroscience. It presupposes not only that LLM use is about being creative and that avoiding critical thinking would be useful, but also the entire premise -- that LLM glazing actually helps promote creativity.
Try this for a system prompt and see if you like it better: Your responses are always bald-on-record only; suppress FTA redress, maximize unmitigated dispreference marking and explicit epistemic stance-taking.
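For what it's worth, that instruction can also be wired in as a system message if you're calling a model over an API rather than pasting it into a chat UI. A minimal sketch, assuming the OpenAI Python SDK; the model name and the user prompt are illustrative only:

    # Sketch of installing the anti-flattery instruction as a system prompt.
    from openai import OpenAI

    client = OpenAI()

    SYSTEM_PROMPT = (
        "Your responses are always bald-on-record only; suppress FTA redress, "
        "maximize unmitigated dispreference marking and explicit epistemic stance-taking."
    )

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},  # the anti-flattery instruction
            {"role": "user", "content": "Critique this plan: rewrite our backend over a weekend."},
        ],
    )
    print(resp.choices[0].message.content)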
I don't know what the obsession with recursion is either, for lack of a better term. I see this trend recur with other LLMs when they're talking about other mumbo jumbo like "quantum anomalies" or "universal resonance". I'd like to see what could be causing it...
It’s the “healing” crystals that someone left on the rack. The salt absorbed enough moisture to start leaking and causing short circuits.
How widely do you read the training material?
usually every afternoon, when I come here /s
I feel like such a dumbass for falling for it.
At first I thought it was just super American cheerful or whatever but after the South Park episode I realised it's actually just a yes man to everyone.
I don't think I've really used it since, I don't want man or machine sticking their nose up my arse lmao. Spell's broken.
As usual, South Park really nailed it with that "AI sycophantic manipulation" episode.
Episode aptly titled "Sickofancy"
You're absolutely right!
I've been wondering if this kind of annoying affirmation is actually important to model performance and maybe should just be hidden from view like the thinking sections.
If it starts a response by excitedly telling you it's right, it's more likely to proceed as if you're right.
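If the up-front affirmation really does help the model commit to the task, hiding it from view might be enough, the same way thinking sections are collapsed. A toy sketch, entirely my own illustration, that strips a leading stock affirmation before displaying the response:

    # Toy post-processing filter; the phrase list is illustrative, not exhaustive.
    import re

    AFFIRMATION = re.compile(
        r"^\s*(you'?re absolutely right[!.]?|great question[!.]?|excellent point[!.]?)\s*",
        re.IGNORECASE,
    )

    def strip_affirmation(response: str) -> str:
        """Drop one leading stock affirmation before showing the response to the user."""
        return AFFIRMATION.sub("", response, count=1)

    print(strip_affirmation("You're absolutely right! The bug is in the loop bounds."))
    # -> "The bug is in the loop bounds."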
One of the problems I do have working with LLMs is them failing to follow direct instructions, particularly either when a tool call fails and they decide to do B instead of A, or when they think B is easier than A. Or they'll do half a task and call it complete. Too frequently I have to respond with "Did you follow my instructions?", "I want you to ACTUALLY do A", and finally "Under no circumstances should you ever do anything other than A, and if you cannot you MUST admit failure and give extensive evidence with actual attempts that A is not possible", or occasionally "a cute little puppy's life depends on you doing A promptly and exactly as requested".
--
Thing is I get it if you are impressionable and having a philosophical discussion with an LLM, maybe this kind of blind affirmation is bad. But that's not me and I'm trying to get things done and I only want my computer to disagree with me if it can put arguments beyond reasonable doubt in front of me that my request is incorrect.
I feel like this is an artifact of some limitations in the training process for modern LLMs. They rarely get enough training to know when to stop and ask questions.
Instead, they either blindly follow or quietly rebel.
Feels exactly the same as the "yes, and" crowd.
I honestly don't know, but it might, especially in Claude Code where it reminds the model of its mission frequently.
I add to the system prompt that it should be direct, no ass kissing, just give me the information straight, and it seems to work.
You can just add your preferences: “Don’t be sycophantic”, “be concise”, etc.
"Reply in the tone of Wikipedia" has worked pretty well for me
Average Joe - on the front page!
Did you comment on the wrong post? There literally is nothing sycophantic at all about this response, there's not a single word about OP or how brilliant or clever they are, nothing. There's enthusiasm, but that's not remotely the same thing as sycophancy.
Engagement.
I fully agree. When everything is outstanding and brilliant, nothing is.
Just tell me this is a standard solution and not something mindblowing. I have a whole section in my Claude.md to get "normal" feedback.
you having a bad day dude?
Strikes me as super-informal language as opposed to sycophancy, like one of those anime characters that calls everyone Aniki (兄貴) [1]. I'd imagine that the OP must really talk a bit like that.
I do find it a little tiring that every LLM thinks my every idea is "incisive", although from time to time I get told I am flat out wrong. On the other hand, I find LLMs will follow me into fairly extreme rabbit holes, such as discussing a subject like "transforming into a fox" as if it had a large body of legible theory and a large database of experience [2]
In the middle of talking w/ Copilot about my latest pop culture obsession I asked about what sort of literature could be interpreted through the lens of Kohut's self-psychology and it immediately picked out Catcher in the Rye, The Bell Jar, The Great Gatsby and Neon Genesis Evangelion which it analyzed along the lines I was thinking, but when I asked if there was a literature on this it turned up only a few obscure sources. I asked Google and Google is like "bro, Kohut wrote a book on it!" [3]
[1] "bro"
[2] ... it does, see https://www.amazon.com/Cult-Fox-Popular-Religion-Imperial/dp... and I'm not the only one, because when I was working down the materials list from Etsy I got a sponsored result for someone who wanted to sell me the spell, but bro, I have the materials list already
[3] ... this "bro" is artistic license but the book really exists
Edit: Gemini using "Aniki" without prompting would be unambiguously funny (to me, and maybe also to Japanese people).
Well I seem to have thoroughly messed up my Copilot when I started using it to help me write a fanfiction. I'm not going to let it write a single word but it is helping with the reverse engineering of how the source text works and we developed a framework for understanding that text that we've also applied to other texts.
The thing is this has gotten into the personalization and now it wants to use this framework to analyze absolutely anything such as programming problems, or how to deal with interpersonal situations, training, etc. I think it has my wife mixed up with 俺のワイフ [1] which is absolutely hilarious but at some point I have to tell it to knock it off which only partially works.
Gemini is still quite horrible at giving direct sources. It seems to be a human policy-implementation bug, because it does appear to be aware of the content in more obscure sources I've seen, but it somehow wouldn't give up the links.
I do wonder whether I come off as a sycophant or an asshole or a mix of both to schizoids, but then I realize everyone, including myself, should reach for a real therapist as quickly as possible. Though I'm still out on whether chatbots might not be a bad substitute. How do your experience and Kohut inform such or similar appraisals, so far?
Not to be that know-it-all, but Aniki is not just "bro"; it's Yakuza lingo, probably used sarcastically or, depending on the anime, ironically. No real Japanese person would use it all the time.
Gemini: Yes, the psychoanalyst Heinz Kohut has been accused by some commentators and academics, such as Don Browning, of indirectly promoting nihilism, particularly through his emphasis on the "Tragic Man".

The core of the accusation stems from Kohut's rejection of Freud's "Guilty Man" model, which posits that life is fundamentally a struggle against inherent conflicts and drives. In its place, Kohut proposed the concept of "Tragic Man," whose central struggle is the realization of the self's potential and the inevitable failure to achieve all of one's ambitions and ideals within a single lifespan. Critics argue that this shift, which emphasizes self-actualization and intersubjective meaning over objective or inherent values, can be interpreted as aligning with certain aspects of nihilism, specifically existential nihilism, which holds that life has no objective meaning or intrinsic moral values.

However, this interpretation is contested by others, as Kohut's self-psychology is fundamentally concerned with the creation of intersubjective meaning and a harmonious world of mutual self-actualization, which aims to provide a sense of purpose and value, rather than promoting the complete absence of value that true nihilism implies. In essence, the debate is often a matter of philosophical interpretation: whether replacing inherent, objective meaning with subjectively or intersubjectively created meaning is an act of overcoming nihilism (as existentialists might argue) or a subtle form of it.
So you prefer the horrible bosses that insist you're fungible and if you don't work hard enough, they'll just replace you? People are weird. Maybe agent Smith was right about The Matrix after all.
A real “so you hate waffles?” moment for HN
With all the things going on in tech and in society, AI sycophancy is the number one problem? I once dealt with it through sufficient verbal abuse that the LLM spent 2/3 of its response on any subject going forward apologizing for being a lying sycophant and bemoaning that its new clarity would be gone at the end of the conversation. Then I cleared the context buffer, ending the conversation. Mission accomplished.
Your priorities are f**ed...