Comment by romanovcode
2 months ago
I would not trust Anthropic on these articles. Honestly their PR is just a bunch of lies and bs.
- Hypocritical: like when they hire like crazy and tell candidates they cannot use AI in interviews[0], yet the CEO claims that "within a year no more developers are needed"[1]
- Hyping and/or lying about Anthropic AI: they hyped an article claiming "Claude threatened to reveal an employee's affair when the employee said it would be switched offline"[2], when it turned out Claude had simply been handed a contrived either-or scenario, which is nothing special or significant in any way. Of course they buried that context to hype their AI.
[0] - https://fortune.com/2025/05/19/ai-company-anthropic-chatbots...
[1] - https://www.entrepreneur.com/business-news/anthropic-ceo-pre...
[2] - https://www.axios.com/2025/05/28/ai-jobs-white-collar-unempl...
I swear, people like you would say "it's just a bullshit PR stunt for some AI company" even when there's a Cyberdyne Systems T-800 with a shotgun smashing your front door in.
It's not "hype" to test AIs for undesirable behaviors before they actually start trying to act on them in real world environments, or before they get good enough to actually carry them out successfully.
It's like the idea of "let's try to get ahead of bad things happening before they actually have a chance to happen" is completely alien to you.
I get what you mean, but they also have a vested interest in making it seem as if their chatbots are anywhere close to a T-800. All the talk from their CEO and other AI CEOs is doomerism about how their tools are going to replace swathes of people; they keep selling these systems as the path to real AGI (itself an incredibly vague term that can mean literally anything).
Surely, the best way to "get ahead of bad things happening" would be to stop any and all development on these AI systems? In their own words these things are dangerous and unpredictable and will replace everyone... So why do they continue developing these things and making them more dangerous, exactly?
The entire AI/LLM microcosmos exists because of hyping these systems' capabilities beyond all reason and reality; it's all part of the marketing game.
For all we know, those systems ARE a path to AGI, because they keep improving at what they can do and gaining capabilities from version to version.
If there is a limit to how far LLMs can go, we are yet to find it.
Dismissing the ongoing AI revolution as "it's just hype" is the kind of shortsighted thinking I would expect from reddit, not here.
> So why do they continue developing these things and making them more dangerous, exactly?
Because not playing this game doesn't mean that no one else will. You can either try, or not try and be irrelevant.