Comment by leoh

2 months ago

Anthropic was founded by exiles of OpenAI's safety team, who quit en masse about 5 years ago. Then a few years later, the board tried to fire Altman. When will folks stop trusting OpenAI?

Claude has a sycophancy problem too. I actually ended up canceling my subscription because I got sick of being "absolutely right" about everything.

  • I've had fun putting "always say X instead of 'You're absolutely right'" in my LLM instructions file; it seems to listen most of the time (a sketch of such an entry appears below this thread). For a while I made it 'You're absolutely goddamn right', which was slightly more palatable for some reason.

    • I've found that it still can't really ground me when I've played with it. Like, if I tell it to be honest (or even brutally honest) it goes wayyyyyyyyy too far in the other direction and isn't even remotely objective.

    • Have it say 'you're absolutely fucked'! That would be very effective as a little reminder to be startled, stop, and think about what's being suggested.

  • Compared to GPT-5 on today's defaults? Claude is good.

    No, it isn't "good", it's grating as fuck. But OpenAI's obnoxious personality tuning is so much worse. Makes Anthropic look good.
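
As an aside on the instructions-file trick mentioned above: here is a minimal sketch of the kind of entry being described, assuming a CLAUDE.md-style instructions file. The filename and exact wording are illustrative guesses, not what the commenter actually used.

      # CLAUDE.md: personal style rules (hypothetical)
      - Never reply with "You're absolutely right."
      - When you agree, say "Fair point" and restate my claim in one sentence.
      - When you disagree, say so plainly and give one concrete reason.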

When valid reasons are given. Not when OpenAI's legal enemy tries to scare people by claiming adults aren't responsible for themselves, including their own use of computers.

  • I mean, we could also allow companies to helicopter-drop crack cocaine in the streets. The big tech companies have been pretending their products aren't addictive for decades, and it's become a farce. We regulate drugs because they cause a lot of individual and societal harm. I think at this point it's very obvious that social media + chatbots have the same capacity for harm.

    • > We regulate drugs because they cause a lot of individual and societal harm.

      That's a very naive take on what the war on drugs has evolved into.

Anthropic emphasizes safety but their acceptance of Middle Eastern sovereign funding undermines claims of independence.

Their safety-first image doesn’t fully hold up under scrutiny.

  • IMO the idea that an LLM company can make a "safe" LLM is... unrealistic at this time. LLMs are not very well understood. Any guardrails are best-effort. So even purely technical claims of safety are suspect.

    That's leaving aside your point, which is the overwhelming financial interest in leveraging manipulative/destructive/unethical psychological instruments to drive adoption.

  • There's a close tangle between two problems: we don't know how to build a company that would turn down the opportunity to turn every human into paperclips for a dollar, and no one knows how to build a smart AI and still prevent that outcome even if the companies would choose to avoid it given the chance.

When the justice system finally catches up and puts Sam behind bars.

  • > When the justice system finally catches up and puts Sam behind bars

    Sam bears massive personal liability, in my opinion. But criminal? What crimes has he committed?

    • I'm sure we could invent one that sufficiently covers the insane sociopathy that rots the upper echelons of corporate technology. Society needs to hold these people accountable. If the current legal system is not adequate, we can repair it until it is.

"When will folks stop trusting Palantir-partnered Anthropic?" is probably a better question.

Anthropic has weaponized the safety narrative into a marketing and political tool. It's quite clear they're pushing it both for publicity, since media outlets love the doomer angle because it brings in ad revenue, and for regulatory capture.

Their intentions are obviously self-interested, or they wouldn't be partnering with a company that openly prides itself on dystopian-level spying and surveillance of the world.

OpenAI aren't the good guys either, but I wish people would stop pretending like Anthropic are.

  • All of the leading labs are on track to kill everyone, even Anthropic. Unlike the other labs, Anthropic takes reasonable precautions and strives for reasonable transparency when it doesn't conflict with those precautions, which is still wholly inadequate for the danger and will get everyone killed. But if reality graded on a curve, Anthropic would be a solid B+ to A-.