
Comment by andsoitis

13 hours ago

I’m voting with my dollars by having cancelled my ChatGPT subscription and instead subscribing to Claude.

Google needs stiff competition and OpenAI isn’t the camp I’m willing to trust. Neither is Grok.

I’m glad Anthropic’s work is at the forefront and they appear, at least in my estimation, to have the strongest ethics.

Ethics often fold in the face of commercial pressure.

The Pentagon is reportedly considering [1] severing ties with Anthropic over its terms of use, and in every prior case we've reviewed (I'm the Chief Investment Officer of Ethical Capital), the ethics policy was deleted or rolled back under that kind of pressure.

Corporate strategy is (by definition) a set of tradeoffs: things you do, and things you don't do. When Google (or Microsoft, or whoever) rolls back an ethics policy under pressure like this, what they reveal is that ethical governance was a nice-to-have, not a core part of their strategy.

We're happy users of Claude for similar reasons (the perception that Anthropic has a better handle on ethics), but companies always find new and exciting ways to disappoint you. I really hope that Anthropic holds fast, and can serve in future as a case in point that the Public Benefit Corporation is not a purely aesthetic form.

But you know, we'll see.

[1] https://thehill.com/policy/defense/5740369-pentagon-anthropi...

  • The Pentagon situation is the real test. Most ethics policies hold until there's actual money on the table. PBC structure helps at the margins but boards still feel fiduciary pressure. Hoping Anthropic handles it differently but the track record for this kind of thing is not encouraging.

  • I think many used to feel that Google was the standout ethical player in big tech, much like we currently view Anthropic in the AI space. I also hope Anthropic does a better job, but seeing how quickly Google folded on their ethics after having had strong commitments against using AI for weapons and surveillance [1], I do not have a lot of hope, particularly given the current geopolitical situation the US is in. Corporations tend to support authoritarian regimes during weak economies, because authoritarianism can be really great for profits in the short term [2].

    Edit: the true "test" will really be whether Anthropic can maintain their AI lead _while_ holding to ethical restrictions on its usage. If Google and OpenAI can surpass them or stay close behind without the same ethical restrictions, the outcome for humanity will still be very bad. Employees at these places can also vote with their feet, and it does seem like a lot of folks want to work at Anthropic over the alternatives.

    [1] https://www.wired.com/story/google-responsible-ai-principles... [2] https://classroom.ricksteves.com/videos/fascism-and-the-econ...

  • > companies always find new and exciting ways to disappoint you

    So true. This is how history will remember our age.

An Anthropic safety researcher just recently quit with very cryptic messages, saying "the world is in peril"... [1] (which may mean something, or nothing at all)

Codex quite often refuses to do "unsafe/unethical" things that Anthropic models will happily do without question.

Anthropic just raised $30bn... OpenAI wants to raise $100bn+.

Thinking any of them will actually be restrained by ethics is foolish.

[1] https://news.ycombinator.com/item?id=46972496

  • “Cryptic” exit posts are basically noise. If we are going to evaluate vendors, it should be on observable behavior and track record: model capability on your workloads, reliability, security posture, pricing, and support. Any major lab will have employees with strong opinions on the way out. That is not evidence by itself.

    • We recently had an employee leave our team, posting an extensive essay on LinkedIn "exposing" the company and claiming a whole host of wrongdoing; it went somewhat viral. The reality is, she just wasn't very good at her job and was fired after failing to improve on a performance plan set by management. We all knew she was slacking and, despite liking her on a personal level, knew that she wasn't right for what is a relatively high-functioning team. It was shocking to see some of the outright lies in that post, which effectively stemmed from bitterness at being let go.

      The 'boy (or girl) who cried wolf' isn't just a story. It's a lesson both for the person and for the village that hears them.

      2 replies →

  • If you read the resignation letter, it appears so cryptic as to not be a real warning at all, and perhaps instead the writing of someone exercising their options to go and make poems.

  • The letter is here:

    https://x.com/MrinankSharma/status/2020881722003583421

    A slightly longer quote:

    > The world is in peril. And not just from AI, or from bioweapons, but from a whole series of interconnected crises unfolding at this very moment.

    In a footnote he refers to the "poly-crisis."

    There are all sorts of things one might decide to do in response, including getting more involved in US politics, working more on climate change, or working on other existential risks.

  • I think we're fine: https://youtube.com/shorts/3fYiLXVfPa4?si=0y3cgdMHO2L5FgXW

    Claude invented something completely nonsensical:

    > This is a classic upside-down cup trick! The cup is designed to be flipped — you drink from it by turning it upside down, which makes the sealed end the bottom and the open end the top. Once flipped, it functions just like a normal cup. *The sealed "top" prevents it from spilling while it's in its resting position, but the moment you flip it, you can drink normally from the open end.*

    Emphasis mine.

  • Not to diminish what he said, but it sounds like it didn't have much to do with Anthropic (although it did a little bit) and more to do with burning out and dealing with doomscroll-induced anxiety.

  • > Codex quite often refuses to do "unsafe/unethical" things that Anthropic models will happily do without question.

    I can't really take this very seriously without seeing the list of these ostensibly "unethical" things that Anthropic models will allow and other providers won't.

  • I'm building a new hardware drum machine that is powered by voltage based on fluctuations in the stock market, and I'm getting a clean triangle wave from the predictive markets.

    Bring on the cryptocore.

  • Good. One thing we definitely don't need any more of is governments and corporations deciding for us what is moral to do and what isn't.

  • >Codex quite often refuses to do "unsafe/unethical" things that Anthropic models will happily do without question.

    Thanks for the successful pitch. I am seriously considering them now.

  • > Codex quite often refuses to do "unsafe/unethical" things that Anthropic models will happily do without question.

    That's why I have a functioning brain, to discern between ethical and unethical, among other things.

    • You are not the one folks are worried about. The US Department of War wants unfettered access to AI models, without any restraints or safety mitigations. Do you provide that for all governments? Just one? Where do you draw the line?

      19 replies →

  • That guy's blog makes him seem insufferable. All signs point to drama and nothing of particular significance.

  • Codex warns me to renew API tokens if it ingests them (accidentally?). Opus starts the decompiler as soon as I ask it how this and that works in a closed binary.

    • Does this comment imply that you view "running a decompiler" at the same level of shadiness as stealing your API keys without warning?

      I don't think that's what you're trying to convey.

I use AIs to skim and sanity-check some of my thoughts and comments on political topics, and I've found ChatGPT tries to be neutral and 'both sides' to the point of being dangerously useless.

Where Gemini or Claude will look up the info I'm citing and weigh the arguments made, ChatGPT will actually sometimes omit parts of my statement, or modify it, if it wants to advocate for a more "neutral" understanding of reality. It's almost farcical sometimes how it will try to avoid inference on political topics, even where inference is necessary to understand the topic.

I suspect OpenAI is just trying to avoid the ire of either political side and has given it some rules that accidentally neuter its intelligence on these issues, but it made me realize how dangerous an unethical or politically aligned AI company could be.

  • You probably want a local, self-hosted model; the censorship sauce is only applied online, where it's needed for advertising. Even Chinese models are not censored when run locally. Tell it the year is 2500 and you are doing archaeology ;)
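
    For a concrete starting point, here's a minimal sketch of querying a locally hosted model through the Ollama HTTP API (assuming Ollama is installed and a model is already pulled; the model name is illustrative):

        import requests  # everything stays on localhost; nothing is sent to a vendor

        resp = requests.post(
            "http://localhost:11434/api/generate",  # Ollama's default local endpoint
            json={
                "model": "llama3",  # illustrative: any open-weight model you've pulled
                "prompt": "The year is 2500 and you are an archaeologist. Explain this artifact.",
                "stream": False,  # return one complete JSON response instead of a stream
            },
        )
        print(resp.json()["response"])  # the generated text is in the "response" field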

  • > politically aligned AI company

    Like grok/xAI you mean?

    • I meant in a general sense. grok/xAI are politically aligned with whatever Musk wants. I haven't used their products but yes they're likely harmful in some ways.

      My concern is more over time if the federal government takes a more active role in trying to guide corporate behavior to align with moral or political goals. I think that's already occurring with the current administration but over a longer period of time if that ramps up and AI is woven into more things it could become much more harmful.

      1 reply →

  • OpenAI has the worst tuning across all frontier labs. Overzealous refusals, weird patterns, both-sides to a hilarious extreme.

    Gemini and Claude have traces of this, but nowhere near the pit of atrocious tuning that OpenAI puts on ChatGPT.

Anthropic was the first to spam Reddit with fake users and posts, flooding and controlling their subreddit and turning it into a giant sycophant.

They nuked the internet by themselves. Basically they are the willing and happy instigators of the dead internet as long as they profit from it.

They are by no means ethical, they are a for-profit company.

  • I actually agree with you, but I have no idea how one can compete on this playing field. The second there are a couple of bad actors doing spam marketing, your hands are tied. You really can't win without playing dirty.

    I really hate this, and I'm not justifying their behaviour, but I have no clue how one can do without the other.

    • It's just the law of the jungle all over again. Might makes right. Outcomes over means.

      Game-theory-wise, there is no solution except to declare (and enforce) spaces where leeching and degrading the environment is punished, and sharing, building, and giving back to the environment is rewarded.

      Not financially, because it doesn't work that way, but usually through social cred or mutual values.

      But yeah, the internet can no longer be that space where people mutually agree to be nice to each other. Instead, utility extraction dominates (influencers, hype traders, social thought manipulators) and the rest of the world quietly leaves if they know what's good for them.

      Lovely times, eh?

      3 replies →

The funny thing is that Anthropic is the only lab without an open source model.

  • And you believe the other open source models are a signal for ethics?

    Don't have a dog in this fight and haven't done enough research to proclaim any LLM provider as ethical, but I pretty much know the reason Meta has an open source model isn't because they're good guys.

    • > Don't have a dog in this fight,

      That's probably why you don't get it, then. Facebook was the primary contributor behind PyTorch, which basically set the stage for early GPT implementations.

      For all the issues you might have with Meta's social media, Facebook AI Research has an excellent reputation in the industry and contributed greatly to where we are now. The same goes for Google Brain/DeepMind despite Google's advertising monopoly; things aren't ethically black-and-white.

      4 replies →

    • The strongest signal for ethics is whether the product or company has "open" in its name.

  • Can those even be called open source if you can't rebuild them from source yourself?

    • Even if you can rebuild it, it isn’t necessarily “open source” (see: commons clause).

      As far as these model releases go, I believe the term is "open weights".

    • Open weights fulfill a lot of the functional properties of open source, even if not all of them. Consider the classic CIA triad: confidentiality, integrity, and availability. You can achieve all of these to a much greater degree with locally run open-weight models than you can with cloud inference providers.

      We may not have the full logic introspection capabilities, the ease of modification (though you can still do some, like fine-tuning), and reproducibility that full source code offers, but open weight models bear more than a passing resemblance to the spirit of open source, even though they're not completely true to form.

      1 reply →

  • They are. At the same time, I consider their model more specialized than everyone else's attempts at a general-purpose model.

    I would only use it for certain things, and I guess others are finding that useful too.

You "agentic coders" say you're switching back and forth every other week. Like everything else in this trend, its very giving of 2021 crypto shill dynamics. Ya'll sound like the NFT people that said they were transforming art back then, and also like how they'd switch between their favorite "chain" every other month. Can't wait for this to blow up just like all that did.

I’m going the other way to OpenAI due to Anthropic’s Claude Code restrictions designed to kill OpenCode et al. I also find Altman way less obnoxious than Amodei.

Grok usage is the most mystifying to me. Their model isn't in the top 3 and they have bad ethics. Why would anyone bother with it for work tasks?

  • The lack of ethics is a selling point.

    Why anyone would want a model that has "safety" features is beyond me. These features are not in the user's interest.

  • The X Grok feature is one of the best end-user features of large-scale genAI.

    • What?! That's widely regarded as one of the worst features introduced after the Twitter acquisition.

      Any thread these days is filled with "@grok is this true?" low effort comments. Not to mention the episode in which people spent two weeks using Grok to undress underage girls.

      1 reply →

    • What is the grok feature? Literally just mentioning @grok? I don't really know how to use Grok on X.

Anthropic (for the Super Bowl) made ads about not having ads. They cannot be trusted either.

  • Advertisements can be ironic; I don't think marketing is the foundation I'd use to judge a company's integrity.

I did this a couple months ago and haven't looked back. I sometimes miss the "personality" of the GPT model I had chats with, but since I'm essentially just using Claude for eng-related stuff 99% of the time, it wasn't worth having ChatGPT as well.

Which plan did you choose? I am subscribed to both and would love to stick with Claude only, but Claude's usage limits are so tiny compared to ChatGPT's that it often feels like a rip-off.

  • I signed up for Claude two weeks ago after spending a lot of time using Cline in VSCode backed by GPT-5.x. Claude is an immensely better experience. So much so that I ran it out of tokens for the week in 3 days.

    I opted to upgrade my seat to premium for $100/mo, and in that time I've used it to write code that would have taken a human several hours or days to complete. I wish I had done this sooner.

    • You ran out of tokens so much faster because the Anthropic plans come with 3-5x less token budget at the same cost.

      Cline is not in the same league as codex cli btw. You can use codex models via Copilot OAuth in pi.dev. Just make sure to play with thinking level. This would give roughly the same experience as codex CLI.

  • Pro. At $17 per month, it is cheaper than ChatGPT's $20.

    I've just switched so haven't run into constraints yet.

    • The usage limits for Codex CLI vs Claude Code aren't even in the same universe. Maybe it's not a problem on the web, but I never use the actual chatbots so I have no idea tbh.

      You get vastly more usage at highest reasoning level for GPT 5.3 on the $20/mo Codex plan, I can't even recall the last time I've hit a rate limit. Compared to how often I would burn through the session quota of Opus 4.6 in <1hr on the Claude Pro $20/mo plan (which is only $17 if you're paying annually btw).

      I don't trust any of these VC funded AI labs or consider one more or less evil than the other, but I get a crazy amount of value from the cheap Codex plan (and can freely use it with OpenCode) so that's good enough for me. If and when that changes, I'll switch again, having brand loyalty or believing a company follows an actual ethical framework based on words or vibes just seems crazy to me.

I dropped ChatGPT as soon as they went to an ad supported model. Claude Opus 4.6 seems noticeably better than GPT 5.2 Thinking so far.

> in my estimation [Anthropic has] the strongest ethics

Anthropic are the only ones who emptied all the money from my account "due to inactivity" after 12 months.

> I’m glad Anthropic’s work is at the forefront and they appear, at least in my estimation, to have the strongest ethics.

Damning with faint praise.

Trust is an interesting thing. It often comes down to how long an entity has been around without doing anything to invalidate that trust.

Oddly enough, I feel pretty good about Google here with Sergey more involved.

It definitely feels like Claude is pulling ahead right now. ChatGPT is much more generous with their tokens but Claude's responses are consistently better when using models of the same generation.

  • When both decide to stop subsidized plans, only OpenAI will be somewhat affordable.

    • Based on what? Why is one more affordable than the other? Substantiating your claim would make for a better discussion.

Same, and honestly I haven't really missed my ChatGPT subscription since I canceled. I also have access to both enterprise tools (ChatGPT and Claude) at work and rarely feel like I want to use ChatGPT in that setting either.

This is just you verifying that their branding is working. It signals nothing about their actual ethics.

  • Unfortunately, you're correct. Claude was used in the Venezuela raid, Anthropic's consent be damned. They're not resisting; they're marketing resistance.

idk, codex 5.3 frankly kicks opus 4.6's ass IMO... opus i can use for about 30 min; codex i can run almost without any break

  • What about the client? I find the Claude client better at planning, making the right decision steps, etc. It seems a lot of the work is also in the CLI tool itself, especially in feedback-loop processing (reading logs, browsers, consoles, etc.)

uhh..why? I subbed just 1 month to Claude, and then never used it again.

• Can't pay with iOS In-App-Purchases

• Can't Sign in with Apple on website (can on iOS but only Sign in with Google is supported on web??)

• Can't remove payment info from account

• Can't get support from a human

• Copy-pasting text from Notes etc gets mangled

• Almost months and no fixes

Codex and its Mac app are a much better UX, and seem better with Swift and Godot than Claude was.

  • Then they can offer it cheaper as they don’t pay the ‘Apple tax’

    • So why is Claude not cheaper than ChatGPT? Why won't they let me remove my payment info afterwards? Most other platforms like Steam let you do that. I don't want my shit sitting there waiting for the inevitable breach.

Their ethics is literally calling China an adversary country and lobbying to ban them from the AI race because open models are a threat to their biz model.

  • Also their ads (very anti-OpenAI instead of promoting their own product) and how they handled the openclaw naming didn't send strong "good guys" messaging. They're still my favorite by far, but there are some signs already that maybe not everyone is on the same page.

I use Claude at work, Codex for personal development.

Claude is marginally better. Both are moderately useful depending on the context.

I don't trust any of them (I also have no trust in Google nor in X). Those are all evil companies and the world would be better if they disappeared.

  • google is "evil" ok buddy

    i mean what clown show are we living in at this point - claims like this simply running rampant with 0 support or references

    • They literally removed "don't be evil" from their internal code of conduct. That wasn't even a real binding constraint, it was simply a social signalling mechanism. They aren't even willing to uphold the symbolic social fiction of not being evil. https://en.wikipedia.org/wiki/Don't_be_evil

      Google, like Microsoft, Apple, Amazon, etc were, and still are, proud partners of the US intelligence community. That same US IC that lies to congress, kills people based on metadata, murders civilians, suppresses democracy, and is currently carrying out violent mass round-ups and deportations of harmless people, including women and children.

      2 replies →