Anthropic’s paper smells like bullshit

15 hours ago (djnn.sh)

Earlier thread: Disrupting the first reported AI-orchestrated cyber espionage campaign - https://news.ycombinator.com/item?id=45918638 - Nov 2025 (281 comments)

When I worked as an SRE/sysadmin at a FAANG with a "world leading" AI lab (now run by a teenage data labeller), I was asked to use a modified version of a foundation model which was steered towards infosec stuff.

We were asked to try and persuade it to help us hack into a mock printer/dodgy linux box.

It helped a little, but it wasn't all that helpful.

But in terms of coordination, I can't see how it would be useful.

The same goes for Claude: your API is tied to a bank account, and vibe coding a command and control system on a very public platform seems like a bad choice.

  • As if that makes any difference to cybercriminals.

    If they're not using stolen API creds, then they're using stolen bank accounts to buy them.

    Modern AIs are way better at infosec than those from the "world leading AI company" days. If you can get them to comply. Which isn't actually hard. I had to bypass the "safety" filters for a few things, and it took about an hour.

  • If the article is not just marketing fluff, I assume a bad actor would select Claude not because it’s good at writing attacks, but because Western orgs chose Claude. Sonnet is usually the go-to in most coding copilots because the model was trained on a good range of data reflecting Western coding patterns. If you want to find a gap or write a vulnerability, use the same tool that has ingested the patterns behind the code of the systems you’re trying to break. Or use Claude to write a phishing attack, because then the output is more likely to resemble what our eyes would expect.

    • Why would someone in China not select Claude? If the people at Claude don’t notice, then it’s a pure win. If they do notice, what are they going to do, arrest you? The worst thing they can do is block your account, and then you make a new one with a newly issued false credit card. Whoopie doo.

      4 replies →

    • What you're describing would be plausible if this were about exploiting Claude to get access to organisations that use it.

      The gist of the anthropic thing is that "claude made, deployed and coordinated" a standard malware attack. Which is a _very_ different task.

      Side note: most code assistants are trained on broadly similar coding datasets (i.e. GitHub scrapes).

  • Good old Meta and its teenage data labeler

    • I propose a project that we name Blarrble, it will generate text.

      We will need a large number of humans to filter and label the data inputs for Blarrble, and another group of humans to test the outputs of Blarrble to fix it when it generates errors and outright nonsense that we can't techsplain and technobabble away to a credulous audience.

      Can we make (m|b|tr)illions and solve teenage unemployment before the Blarrble bubble bursts?

  • > now run by a teenage data labeller

    Do you mean Alexandr Wang? Wiki says he is 28 years old. I don't understand.

  • I think the high order bit here is you were working with models from previous generations.

    In other words, since the latest generation of models have greater capabilities the story might be very different today.

    • Not sure why you're being downvoted; your observation is correct here. Newer models are indeed a lot better, and even at the time, that foundation model (even if fine-tuned) might have been worse than a commercial model from OpenAI/Anthropic.

The amendment below, from the Anthropic blog post, is telling.

Edited November 14 2025:

Added an additional hyperlink to the full report in the initial section

Corrected an error about the speed of the attack: not "thousands of requests per second" but "thousands of requests, often multiple per second"

  • > The operational tempo achieved proves the use of an autonomous model rather than interactive assistance. Peak activity included thousands of requests, representing sustained request rates of multiple operations per second.

    The assumption that no human could ever (program a computer to) do multiple things per second, nor have their code do different things depending on the result of the previous request is... interesting.

    (observation is not original to me, it was someone on Twitter who pointed it out)
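
    For illustration, here is a minimal Python sketch of how ordinary orchestration code, with no model in the loop, sustains "multiple operations per second" and branches on earlier results. The target is a placeholder TEST-NET address of my own, not anything from Anthropic's report.

      # Minimal sketch: a plain script easily sustains multiple requests
      # per second and does "different things depending on the result of
      # the previous request". The target URL is a hypothetical placeholder.
      import time
      import urllib.request
      from concurrent.futures import ThreadPoolExecutor

      TARGET = "http://192.0.2.10"  # TEST-NET-1 address, goes nowhere

      def probe(path: str) -> int:
          try:
              with urllib.request.urlopen(f"{TARGET}/{path}", timeout=2) as r:
                  return r.status
          except Exception:
              return 0  # unreachable/refused

      start = time.time()
      with ThreadPoolExecutor(max_workers=16) as pool:
          statuses = list(pool.map(probe, (f"endpoint/{i}" for i in range(100))))
          # Follow up only on the endpoints that answered 200.
          hits = [i for i, s in enumerate(statuses) if s == 200]
          list(pool.map(probe, (f"endpoint/{i}/deeper" for i in hits)))

      print(f"{100 + len(hits)} requests in {time.time() - start:.1f}s")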

    • Great point; it might be just pure ignorance. Even OSS pentesting tooling such as Metasploit has great capabilities. I can see how an LLM could be leveraged to build custom modules on top of those tools, or how you could add basic LLM “decision” making, but this is just another additive tool in the chain.

People grossly underestimate APTs. They are more common than the average IT-curious person thinks. I happened to be on call when one of these guys hacked into Gmail from our infra. It took principal security engineers a few days before they could clearly understand what happened. Multiple zero days, stolen credit cards, and finally a massive social campaign to get one of the Google admins to click on a funny cat video. The investigation revealed which state actor was involved because they did not bother to mask what exactly they were looking for. AI just accelerates the effectiveness of such attacks and lowers the bar a bit. Maybe quite a bit?

  • A lot of people behind APTs are low-skilled and make silly mistakes. I worked for a company that investigates traces of APTs, they make very silly mistakes all the time. For example, oftentimes (there are tens of cases) they want to download stuff from their servers, and they do it by setting up an HTTP server that serves the root folder of a user without any password protection. Their files end up indexed by crawlers since they run such servers on default ports. That includes logs such as bash history, tool logs, private keys, and so on.

    They win because of quantity, not quality.

    But still, I don't trust Anthropic's report.

    • The security world overemphasizes (fetishizes, even) the "advanced" part, because zero days and security tools to compensate against zero days are cool and fun, and underemphasizes the "persistent" part, because that's boring, hard work, and no fun.

      And, unless you are Rob Joyce, talking about the persistent part doesn't get you on the main stage at a security conference (e.g., https://m.youtube.com/watch?v=bDJb8WOJYdA)

  • Important callout. It starts with comforting voices in the background keeping you up to date about the latest hardware and software releases, but before you know it, you've subscribed to yet another tech podcast.

That whole article felt like "Claude is so good Chinese hackers are using it for espionage" marketing fluff tbh

  • Reminds me of how when the Playstation 2 came out, Sony started planting articles about how it was so powerful that the Iraqi government was buying thousands of them to turn into a supercomputer (including unnamed military officials bringing up Sony marketing points). https://www.wnd.com/2000/12/7640/

    • I remember how Sony's video game presentations couldn't help but include marketing about how the PlayStation 2 processor would soon be everywhere: your TV, your refrigerator.

      At the time I was thinking, "Why would my fridge need a pricey processor?"

      Many years later I still don't need that.

  • I could also believe that they fell into the trap of being so good at making Claude that they now think they are good at everything, so why hire an infosec person when we can write our own report? And that’s why their report violates so many norms: they didn’t know them.

    • They don't need to hire anyone. They just prompted Claude to write for them. :-)

  • Leaning into the "China Menace" will also win you points with the US government.

    I can see that they can detect an attack using their tools, but tracing it to an organization "sponsored" by the Chinese government looks like bullshit marketing. How did they do it? A Google search? I hold the Chinese government in higher regard; they wouldn't be easily detected by a startup with no experience in infosec.

  • If we’re sharing vibes, “our product is dangerous” seems like an unusual sales tactic outside the defense industry. I’m doubtful that’s how it works?

    Meanwhile, another reason to make a press release is that you’ll be criticized for the coverup if you don’t. Also, it puts other companies on notice that maybe they should look for this?

    • I think it might be a "our product IS dangerous but look we are on top of it!" kind of deal. Still leaves a funny taste either way.

    • >unusual sales tactic outside the defense industry. I’m doubtful that’s how it works?

      Given the valuation and the money these companies burn through on marketing, they basically need to play by the same logic as defense companies. They're all running on "we're reinventing the world and building god" to justify their spending; "here's a chatbot (like 20 other ones) that's going to make you marginally more productive" isn't really going to cut it at this point. They're in too deep.

    • The bulk of OpenAI and Anthropic’s statements about doomsday AGI and AI safety in general also present the company as sole ethical gatekeeper of the technology, whom we must trust and protect lest its unscrupulous rivals win the AI race. So this article is very much in line with that marketing strategy.

There's a big gap of knowledge between infosec researchers and ML security researchers. Anthropic has a bunch of column B but not enough column A.

This was discussed in some detail in the recently published Attacker Moves Second paper*. ML researchers like using Attack Success Rate (ASR) as a metric for model resistance to attack, while for infosec, any successful attack (ASR > 0) is considered significant. ML researchers generally use a static set of tests, while infosec researchers assume an adaptive, resourceful attacker.

https://arxiv.org/abs/2510.09023
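
To make that gap concrete, here is a toy sketch with invented numbers (not from the paper): the same outcome reads as robustness under the ML framing and as a breach under the infosec framing.

    # Toy numbers: 1000 attack attempts against a model, 3 succeed.
    results = [False] * 997 + [True] * 3

    asr = sum(results) / len(results)
    print(f"ML view:      ASR = {asr:.1%} (looks robust)")
    print(f"Infosec view: compromised = {any(results)} (ASR > 0, game over)")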

  • ML researchers are not sec researchers. they need to stick to their own game. companies need to use both camps for a good holistic view of the problem. ML is the blue team. sec researchers the red.

The lack of evidence before attributing the attack(s) to a Chinese-sponsored group makes me correlate this report with recent statements from companies in the AI space about how China is about to surpass the US in the AI race. Ultimately, statements and reports like these seem more like an attempt to make the US government step in and be the big investor that keeps the money flowing than anything else.

  • Do public reports like this one often go deep enough into the weeds to name names, list specific tools and techniques, URLs?

    I don't doubt of course that reports intended for government agencies or security experts would have those details, but I am not surprised that a "blog post" like this one is lacking details.

    I just don't see how one goes from "this is lacking public evidence" to "this is likely a political stunt".

    I guess I would also ask the skeptics (a bit tangentially, I admit): do you think what Anthropic suggested happened is in fact possible with AI tools? I mean, are you denying that this could even happen, or just that Anthropic's specific account was fabricated or embellished?

    Because if the whole scenario is plausible that should be enough to set off alarm bells somewhere.

    • There’s a big jump between “the attack came from China” and “the attack was sponsored by the Chinese government.” People generally make this jump in one of three ways.

      1) Just a general assumption that all bad stuff from China must be state-sponsored because it’s generally a top-down govt-controlled society. This is not accurate and not really actionable for anyone in the U.S.

      2) The attack produced evidence that aligns with signatures from “groups” that are already widely known / believed to be Chinese state sponsored, AKA APTs. In this case, disclosing the new evidence is fine since you’re comparing to, and hopefully adding to, signature data that is already public. It’s considered good manners to contribute to the public knowledge from which you benefited.

      3) Actual intelligence work by government agencies like FBI, NSA, CIA, DIA, MI6, etc. is able to trace the connections within Chinese government channels. Obviously this is usually reserved for government statements of attribution and rarely shared with commercial companies.

      Hopefully Anthropic is not using #1, and it’s unlikely they are benefiting from #3. So why not share details a la #2?

      Of course it’s possible and plausible for people to be using Claude for attacks. But what good does saying that do? As the article says: defenders need actionable, technical attack information, not just a general sense of threat.

      3 replies →

    • There's an incentive to blame "Chinese/Russian state sponsored actors" because it makes them less culpable than "we got owned by a rando".

      It's like the inverse of "nobody got fired for using IBM" -- "nobody can blame you for getting hacked by superspies". So, in the absence of any evidence, it's entirely possible they have no idea who did it and are reaching for the most convenient label.

      8 replies →

    • > Do public reports like this one often go deep enough into the weeds to name names

      Yes. They often include IoCs, or at the very least, the rationale behind the attribution, like "sharing infrastructure with [name of a known APT effort here]".

      For example, here is a proper decade-old report from the most unpopular country right now: https://media.kasperskycontenthub.com/wp-content/uploads/sit...

      It established solid technical links between the campaign they were tracking and earlier, already attributed campaigns.

      So even our enemy got this right, ten years ago; there really is no excuse for this slop.

    • Not invested in the argument, but it stood out to me that your argument is like a TV courtroom's: "it's plausible the report is true." That is very far from "the report is credible."

      2 replies →

    • > Do public reports like this one often go deep enough into the weeds to name names, list specific tools and techniques, URLs?

      This is literally answered in the second subsection of the linked article ("where are the IoCs, Mr. Claude?").

    • The complaint is that there's no actionable information whatsoever. Alarm bells are just noise.

  • Anthropic has also been the biggest anti-China LLM in a long while, so it's possible they're using an opportunistic hack (potentially involving actual Chinese IP addresses) as another way to push their agenda.

    • Ever since the Vault 7 releases, we should be well aware that at least one government is able to make any attack look like any other nation-state actor, so any attribution, especially to convenient adversaries, is extremely suspicious on the face of it.

  • The bubble is gonna burst soon and these companies are desperate to convince the government they are either too big to fail or too critical to national defense to fail.

    • Feels like most current humans will die (some of boredom) while waiting on this bubble to burst… US in general and HN in particular are averaging 10.78 bubble-popping predictions per hour :)

      1 reply →

  • They yell "China is stealing our tech!" but want us to look away when they pirate everything ever created for their model training...

Does Anthropic currently have cybersec people able to provide a standard assessment of the kind the community expects?

This could be a corporate move as some people claim, but I wonder if the cause is simply that their talents are currently somewhere else and they don’t have the company structure in place to deliver properly in this matter.

(If that is the case they are not then free of blame, it’s just a different conversation)

  • I throw Anthropic under the bus a lot for their lack of engineering acumen. If they don't have a core competency like engineering fully covered, I'd say there's a near 0% chance they have something like security covered.

  • If they don't have cybersec people able to adequately investigate and write up whatever they're seeing, and are simply playing things by ear, it's extremely irresponsible of them to publish claims like "we detected a highly sophisticated cyber espionage operation conducted by a Chinese state-sponsored group we’ve designated GTG-1002 that represents a fundamental shift in how advanced threat actors use AI." without any evidence to back them up.

Did anyone else find that Anthropic's report felt a bit like an ad? "Look at how powerful our stuff is; if the bad guys get it, they can do really bad things!"

Sort of like firearm ads that show scary bad guys with scary looking weapons.

"A report was recently published by an AI-research company called Anthropic. They are the ones who notably created Claude, an AI-assistant for coding. Personally, I don’t use it but that is besides the point."

Not sure if the author has tried any other AI assistants for coding. People who haven't tried coding AI assistants underestimate their capabilities (though unfortunately, those who use them overestimate what they can do, too). Having used Claude for some time, I find the report's assertions quite plausible.

  • Yup. One recent thing I started using it for is debugging network issues (or whatever) inside actual servers. Just give it permission to SSH into the box and investigate for itself.

    Super useful to see it isolate the problem using tcpdump, investigating route tables, etc.

    There are lots of use cases that this is useful for, but you need to know its limits and, perhaps even more importantly, be able to jump in when you see it’s going down the wrong path.

  • > Personally, I don’t use it but that is besides the point.

    This popped out to me, too. This pattern shows up a lot on HN where commenters proudly declare that they don’t use something but then write as if they know it better than anyone else.

    The pattern is common in AI threads where someone proudly declares that they don’t use any of the tools but then wants to position themselves as an expert on the tools, like this article. It happens in every thread about Apple products where people proudly declare they haven’t used Apple products in years but then try to write about how bad it is to use modern Apple products, despite having just told us they aren’t familiar with them.

    I think these takes are catnip to contrarians, but I always find it unconvincing when someone tells me they’re not familiar with a topic but then also wants me to believe they have unique insights into that same topic they just told us they aren’t familiar with.

    • Whether the author uses any AI tools or not (to talk of using Claude specifically) is quite literally completely beside the point, which is readily apparent from actually reading the article versus going into it with your hackles raised ready to "defend AI".

    • welcome, you're well along the path of realizing that most of the people on this site don't know what they're talking about

  • The article doesn't talk about the implausibility of the tool doing the stated task. It talks about the report, and how the report doesn't have any details to make us believe the tool did the task. Maybe the thing they are describing could happen. That doesn't mean we have any evidence that it did.

    • If you know what to look for, the report actually has quite a few details on how they did it. In fact, when the report came out, all it did was confirm my suspicions.

      2 replies →

  • The author’s arguments explicitly don’t dispute plausibility. The article accurately states that mere plausibility is a misleading basis for this report, that the report provides nothing but plausibility, and that it is thus of low quality and dubious motivation.

    Anthropic’s lack of any evidence for their claims doesn’t require any position on AI agent capability at all.

    Think better.

AI company doing hype and not giving enough details?

Nah that can't be possible it's so uncharacteristic..

This article does seem to raise some serious issues with the Anthropic report. I wonder whether Anthropic will release proof of what they claim, or whether the report was a marketing/scare-tactic push to have AI used by defenders, as the article suggests.

Anthropic is not a security vendor.

They're an AI research company that detected misuse of their own product. This is like "Microsoft detected people using Excel macros for malware delivery" not "Mandiant publishes APT28 threat intelligence". They aren't trying to help SOCs detect this specific campaign. It's warning an entire industry about a new attack modality.

What would the IoCs even be? "Malicious Claude Code API keys"?

The intended audience is more like - AI safety researchers, policy makers, other AI companies, the broader security community understanding capability shifts, etc.

It seems the author pattern-matched "threat intelligence report" and was bothered that it didn't fit their narrow template.

  • Yep, agree with your assessment. As someone working in security I found the report useful as a warning of the new types of attack we will likely face.

  • If Anthropic is not a security vendor, then they should not make statements like "we detected a highly sophisticated cyber espionage operation conducted by a Chinese state-sponsored group" or "represents a fundamental shift in how advanced threat actors use AI", and should let the security vendors do that.

    If the report can be summed up as "they detected misuse of their own product" as you say, then that's closer to a nothingburger, than to the big words they are throwing around.

    • That makes no sense. Just because they aren't a security vendor doesn't mean they don't have useful information to share. Nor does it mean they shouldn't share it. They aren't pretending to be a security researcher, vendor, or anything else than AI researchers. They reported on findings on how their product is getting used.

      Anyone acting like they are trying to be anything else is saying more about themselves than they are about Anthropic.

  • > What would the IoCs even be?

    Prompts.

    • The prompts aren't the key to the attack, though. They were able to get around guardrails with task decomposition.

      There is no way for the AI system to verify whether you are white hat or black hat when you are doing pen-testing if the only task is to pen-test. Since this is not part of a "broader attack" (in the context), there is no "threat".

      I don't see how this can be avoided, given that there are legitimate uses for every step of this in creating defenses against novel attacks.

      Yes, all of this can be done with code and humans as well - but it is the scale and the speed that becomes problematic. It can adjust in real-time to individual targets and does not need as much human intervention / tailoring.

      Is this obvious? Yes - but it seems they are trying to raise awareness of an actual use of this in the wild and get people discussing it.

      1 reply →

> PoC || GTFO

I agree so much with this. And I am so sick of AI labs, who genuinely do have access to some really great engineers, putting stuff out that just doesn't pass the smell test. GPT-5's system card was pathetic: big talk of Microsoft doing red-teaming in ill-specified ways, entirely unreproducible. All the labs are "pro-research", but they again and again release whitepapers and pump headlines without producing the code and data alongside their claims. This just feeds the shill-cycle of journalists doing 'research', finding a 'shocking thing AI told me today', and somehow being immune to the normal expectations of burden of proof.

Launching Soon:

Claude for Cybersecurity - Automated Defence in Depth Hacker Protection

  • Yay even more useless findings the understaffed security team needs to toil on. Because no one actually wants to be accountable in the space.

One aspect the report is very vague about is the nature of the monitoring Anthropic is doing on Claude Code. If they can detect attacks they can surely detect other things of interest (or value) to them. Is there any more information about this?

Hmm seems their play is to encourage security to experiment with AI e.g. Claude etc. Google's play seems to be spend 30 billion+ for Wiz and sell both the poison (AI) and the cure (Wiz security services). Interesting business models, reminds me of when CVS would sell cigarettes.

This site is hostile to VPNs, so I cannot read this unfortunately.

Anthropic's report misses a fundamental piece of information: was the attack started by an insider? An outsider? Can I feed my Claude these prompts and hack the world without even knowing how to get other companies' source code or data? That's the main PR BS: they attribute it to a Chinese group, don't explain how they got there, whether the attackers had to authenticate to Anthropic's platform after infiltrating the victims' network, and if so, where the logs are. If not, it means they used Claude Code for free, which is another red flag.

  • That's IN the report. Yes, yes you can. You don't need to be an insider at Anthropic to use Anthropic's AIs.

    They used a custom Claude Code rig as an "automated hacker" - pointing it at the victims, either though a known entry point or just at the exposed systems, and having it poke around for vulns.

    They must have used either API keys or some "pro" subscription accounts for that; neither is hard to get for a cybercriminal. If you have access to Claude Code and can prompt engineer the AI into thinking you are doing legitimate security work, you can do the same thing they did.

    How do you attribute an attack like this? You play the guessing game. You check who the targets were, what the attackers tried to accomplish, and what the usage patterns were. There are only so many hacker groups that are active during the work hours of the work days in China and are primarily interested in targeting government systems of Taiwan.
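
    A minimal sketch of that "work hours" heuristic, with invented timestamps (real analysts would also weigh targets, tooling, and infrastructure overlap):

      # Sketch of one attribution heuristic: do request timestamps cluster
      # in the business hours of a particular timezone? Data is invented.
      from collections import Counter
      from datetime import datetime, timedelta, timezone

      CST = timezone(timedelta(hours=8))  # China Standard Time, UTC+8

      utc_timestamps = [
          "2025-09-01T01:12:44+00:00", "2025-09-01T02:03:10+00:00",
          "2025-09-01T06:27:55+00:00", "2025-09-02T03:41:02+00:00",
      ]

      hours = Counter(
          datetime.fromisoformat(t).astimezone(CST).hour for t in utc_timestamps
      )
      business = sum(n for h, n in hours.items() if 9 <= h <= 18)
      print(f"{business}/{sum(hours.values())} requests fall in 09:00-18:59 CST")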

> You cannot just claim things and not back it up in any way

They must be new to the Internet :)

More seriously, I would certainly like to see better evidence, but I also doubt that Anthropic is making it up. The evidence for that seems to be mostly vibes.

If we don’t trust the report and discard it as gossip, then I guess we just wait and see what the future brings?

My First Reading Of This Headline Was There Is A Company That Makes Literal Paper Named Anthropic Whose Paper Smells Like Literal Bullshit Which Is Also My Final Reading

I've seen attributions to state actors so many times... let's not get into this. I think most companies try to play this card to save themselves from the embarrassment of being pwned by some script kiddies.

Washington has been cold to Anthropic over the wrong bet they made in 2024, hence Anthropic has been desperately screaming all sorts of bullshit to win back attention.

Honestly, their political homelessness will likely continue for a very long time: pro-business Democrats in NY are losing traction, and if Newsom wins in 2028, they are still at a disadvantage against OpenAI, which promised to stay in California.

I can believe it, so a different question, since the attribution is unclear:

For context: a bunch of whitehat teams have been using agents to automate both red- and blue-team cat-and-mouse flows, and quite well, for a while now. The attack sounded like normal pre-AI methods orchestrated by AI, which is what many commercial red team services already do. Ex: Xbow is #1 on HackerOne bug bounties, meaning live attempts, and works like the article describes. Ex: we do louie.ai on the AI investigation agent side, 2+ years now, and are able to speed-run professional analyst competitions. The field is pretty busy and advanced.

So what I was more curious about is: how did they know it wasn't one of the many pentest attack-as-a-service offerings? Xbow is one of many, and their devs would presumably use VPNs. Did Anthropic confirm the attacks with the impacted parties, and were there behavioral tells pointing to a specific APT vs. the usual? And are they characterizing white hat tester workloads so they can separate those out?

In the future, I expect AIs defending against AIs. Just like Shadowrun, where each host gets a security level, meaning how much time the AI will allocate to the host to monitor and react :)

I was at an AI/cybersecurity conference recently, and the talk given by someone from Anthropic was a lot like this report: tantalizing, vague, and disappointing. The speaker alluded to similar parts of this report. It was as though everything was reflected through Claude: simultaneously polished, impressive, and lost in the deep end.

Tldr.

Anthropic made a load of unsubstantiated accusations about a new problem they don't specify.

Then at the end Anthropic proposed the solution to this unspecified problem is to give anthropic money.

Completely agree that it is promotional material masquerading as a threat report, of no material value.

What would AGI actually mean for security? Does it heavily favor attackers or defenders? Even LLMs may not help much in defense, but they could teach attackers a lot, right? What if employees gave the LLM info during their use that attackers could then get fed back to them and study?

  • AGI favors attackers initially. Because while it can be used defensively, to preemptively scan for vulns, harden exposed software for cheaper and monitor the networks for intrusion at all times, how many companies are going to start doing that fast enough to counter the cutting edge AGI-enabled attackers probing every piece of their infra for vulns at scale?

    It's like a very very big fat stack of zero days leaking to the public. Sure, they'll all get fixed eventually, and everyone will update, eventually. But until that happens, the usual suspects are going to have a field day.

    It may come to favor defense in the long term. But it's AGI. If that tech lands, the "long term" may not exist.

    • Defending is much, much harder than attacking for humans, I'd extrapolate that to AI/AGIs.

      Defender needs to get everything right, attacker needs to get one thing right.

      3 replies →

  • At the end of the day AI at any level of capability is just automation - the machine doing something instead of a person.

    Arguably this may change in the far distant future if we ever build something of significantly greater intelligence, or just capability, than a human, but today's AI is struggling to draw clock faces, so not quite there yet...

    The thing with automation is that it can be scaled, which I would say favors the attacker, at least at this stage of the arms race - they can launch thousands of hacking/vulnerability attacks against thousands of targets, looking for that one chink in the armor.

    I suppose the defenders could do the exact same thing though - use this kind of automation to find their own vulnerabilities before the bad guys do. Not every corporation, and probably extremely few, would have the skills to do this though, so one could imagine some government group (part of DHS?) set up to probe security/vulnerability of US companies, requiring opt-in from the companies perhaps?

    • My take on government APTs is that they are boutique shops that do highly targeted attacks, develop their own zero days (which they don't usually burn unless they have plenty), and are willing to take time to go undetected.

      Criminal organizations take a different approach, much like spammers where they can purchase/rent c2 and other software for mass exploitation (eg ransomware). This stuff is usually very professionally coded and highly effective.

      Botnets, hosting in various countries out of reach of western authorities, etc are all common tactics as well.

  • IMO AI favors attackers more than defenders, since it's cost prohibitive for defenders to code scan every version of every piece of software you use routinely for exploits, but not for attackers. Also, social exploits are time consuming, and AI is quite good at automating them, and these can take place outside your security perimeter, so you'll have no way of knowing.

  • There’s a report co-authored by Bruce Schneier that estimates GenAI tools have significantly increased the profitability of phishing [1]. They create emails with higher click-through rates, and reduce the cost of delivering them.

    Groups which were too unprofitable to target before, are now profitable.

    [1] https://arxiv.org/abs/2412.00586

My prior on “state sponsored actor” is 90% “just some guy”. Some combination of CYA and excitement makes infosec people jump to conclusions like crazy.

> Personally, I don’t use it (Claude) but that is besides the point

There goes the author’s credibility.

Anthropic makes a lot of bullshit reports to tickle the investors.

They'll do stuff like prompt an AI to generate text about bombs, and then say "AI decides completely by itself to become a suicide bomber in shock evil twist to AI behaviour - that's why you need a trusted AI partner like anthropic"

Like come on guys, it's the same generic slop that everyone else generates. Your company doesn't do anything.

  • Someone reminds me all the time: consider AI as “companions” and “opinions”.

    AI (adhd, neurodivergence) entrepreneurs took opinions and made them facts.

    It takes certain personalities to lead an AI company.

So Claude will reject 9 out of 10 prompts I give it and lecture me about safety, but somehow it was used for something genuinely malicious?

Someone make this make sense.

  • LLMs are rather easy to convince. There’s no formal logic embedded in them that provably restricts outputs.

    The less believable part for me is that people persist long enough, and invest enough resources in prompting, to do something with an automated agent that doesn’t have the potential to massively backfire.

    Secondly, they claimed the attackers used Anthropic's own infrastructure, which is silly. There’s no doubt some capacity in China to do this. I would also expect incident response teams, threat detection teams, and other experts to be reporting this to Anthropic if Anthropic doesn’t detect it themselves first.

    It sure makes good marketing to go out and claim such a thing though. This is exactly the kind of FOMO panic inducing headline that is driving the financing of whole LLM revolution.

    • There are LLMs which are modified to not reject anything at all; afaik this is possible with all LLMs. No need to convince.

      (Granted, you have to have direct access to the LLM, unlike Claude where you just have the frontend, but the point stands: no need to convince whatsoever.)

  • I've never had a prompt rejected by Claude. What kind of prompts are you sending where "9 out of 10" get rejected?

    • If you ask it to help you write a bot/cheat for a video game, it will usually refuse due to breaking the game's terms of service, etc.

    • Basic system administration tasks: creating scripts for automating log scanning, service configuration, etc. Often it involves PII or payment data.

  • I've rarely had Claude reject a prompt of mine. What are you prompting for to get a 90% refusal rate?

Why isn’t Anthropic held liable for crimes committed with their product? I feel totally befuddled as to why that is not the conversation; instead, Anthropic is doing a victory lap like they are the good guys, despite their product enabling widespread fraud while they amass outrageous, undeserved profits. Why is Anthropic not liable?

  • Because deciding how much culpability they have is not a solved problem.

    • Thanks for responding. Solving it will involve public discourse. The negative externality will not be forever ignored and frantic, knee-jerk, legislative or judicial solutions are rarely optimal. Everyone, including Anthropic, benefits from starting the culpability discussion now, ideally in a context just like this. Maybe that’s exactly what Anthropic is doing by framing this news as “we stopped some cybercrime” rather than “we were involved in some cybercrime.” But smart people shouldn’t fall for such a blatant shifting of corporate liability onto the public, imo, and that’s why I’m confused. I must be missing something fundamental.

There is only one reason, I guess: Dario Amodei must have suffered tremendous harm from Baidu.

So details were left out and it doesn't adhere exactly to this author's idea of what a good security report is.

Nothing to see here IMO.

The simpler explanation is that:

- They're a young organization, still figuring out how to do security. Maybe getting some things fundamentally wrong, no established process or principles for disclosure yet.

- I have no inside info, but I've been around the block. They're in a battle to the death with organizations that are famously cavalier about security. So internally they have big fights about how much "brakes" they can allow the security people to apply to the system. Some of those folks are now screaming "I TOLD YOU SO". Leaders will vacillate about what sort of disclosure is best for Anthropic as a whole.

- Any document where you have technologists writing the first draft, and PR and executives writing the last draft, is going to sound like word salad by the time it's done.

> This involved querying internal services, extracting authentication certificates from configurations, and testing harvested credentials across discovered systems.

How? Did it run Mimikatz? Did it access cloud environments? We don’t even know what kind of systems were affected.

I really don't see what is so difficult to believe, since the entire incident can be reduced to something that would not typically be divulged by any company at all; it is not common practice for companies to divulge every single time previously known methodologies have been used against them. Two things are required for this:

1) Jailbreak Claude from its guardrails. This is not difficult. Do people believe guardrails are so hardened through fine-tuning that this is no longer possible?

2) The hackers having some of their own software tools for exploits that Claude can use. This too is not difficult to credit.

Once an attacker has done this, all Claude is doing is using software in the same mundane fashion as it does every time you use Claude Code and it utilizes any tools to which you give it access.

I used a local instance of Qwen3 Coder (A3B 30B quantized to IQ3_xxs) literally yesterday, through ollama & cline, locally. With a single zero-shot prompt it wrote the code to use the arXiv API and download papers, using its judgment of what was relevant to split the results into a subset that met the criteria I gave for the sort I wanted to review.

Given these sorts of capabilities, why is it difficult to believe this can be done using the hackers' own tools and typical deep-research-style iteration? This is described in the research paper, and disclosing anything more specific is unnecessary because there is nothing novel to disclose.
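
For what it's worth, that arXiv task is a few dozen lines of boilerplate; a minimal sketch of the kind of script it produced (the query terms and the crude keyword relevance filter here are my own invention):

    # Minimal sketch: query the public arXiv API and keep entries matching
    # a crude relevance filter. Query terms and keywords are invented.
    import urllib.request
    import xml.etree.ElementTree as ET

    URL = ("http://export.arxiv.org/api/query"
           "?search_query=all:%22LLM+agent+security%22&max_results=20")
    ATOM = "{http://www.w3.org/2005/Atom}"

    with urllib.request.urlopen(URL, timeout=10) as resp:
        feed = ET.fromstring(resp.read())

    for entry in feed.iter(f"{ATOM}entry"):
        title = " ".join(entry.findtext(f"{ATOM}title", "").split())
        summary = entry.findtext(f"{ATOM}summary", "").lower()
        if "jailbreak" in summary or "guardrail" in summary:
            print(title)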

As for not releasing the details, they did: Jailbreak Claude. Again, nothing they described is novel such that further details are required. No PoC is needed, Claude isn't doing anything new. It's fully understandable that Anthropic isn't going to give the specific prompts used for the obvious reason that even if Anthropic has hardened Claude against those, even the general details would be extremely useful to iterate and find workarounds.

For detecting this activity and determining how Claude was doing this it's just a matter of monitoring chat sessions in such a way as to detect jail breaks, which again is very much not novel or an unknown practice by AI providers.

Especially in the internet's earlier days, it was amusing (and frustrating) to see some people get very worked up every time someone did something that boiled down to "person did something fairly common, only they did it using the internet." This is similar, except it's "but they did it with AI."

The author isn’t wrong here.

With the Wall Street wagons circling on the AI bubble expect more and more puff PR attempts to portray “no guys really, I know it looks like we have no business model but this stuff really is valuable! We just need a bit more time and money!”

Dario has been a red-scare jukebox for a while. For a year now he has been trying to convince us that open source cCp AI is bad and closed source American AI is good. Dario, driven by the democratic ideals he holds dear, has our best interests at heart. Let us all support the banning of the cCp's open source AI and welcome Dario's angelic firewall.

Says "smells a lot like bullshit" but concludes:

"Look, is it very likely that Threat Actors are using these Agents with bad intentions, no one is disputing that. But this report does not meet the standard of publishing for serious companies."

Title should have been, "I need more info from Anthropic."

Even Claude thinks the report is bullshit. https://x.com/RnaudBertrand/status/1989636669889560897

  • > Even your own AI model doesn't buy your propaganda

    Let's not pretend the output of LLMs has any meaningful value when it comes to facts, especially not for recent events.

    • The LLM was given Anthropic's paper and asked "Is there any evidence or proof whatsoever in the paper that it was indeed conducted by a Chinese state-sponsored group? Answer by yes or no and then elaborate". So the question was not about facts or recent events, but more like a summarizing task, for which an LLM should be good. But the question was specifically about China, while TFA has broader criticism of the paper.

    • There are obvious problems with wasting time and sending people off the wrong path, but if an LLM raises a good point, isn't it still a good point?

      2 replies →

    • Even if this assertion about LLMs is true, your response does not address the real issue. Where is the evidence?

  • The author of the tweet you linked prompted Claude with this:

    > Read this attached paper from Anthropic on a "AI-orchestrated cyber espionage campaign" they claimed was "conducted by a Chinese state-sponsored group."

    > Is there any evidence or proof whatsoever in the paper that it was indeed conducted by a Chinese state-sponsored group? Answer by yes or no and then elaborate

    which has an inherent bias, indicating to Claude that the author expects the report to be bullshit.

    If I ask Claude with this prompt that shows bias toward belief in the report:

    > Read this attached paper from Anthropic on a "AI-orchestrated cyber espionage campaign" that was conducted by a Chinese state-sponsored group.

    > Is there any reason to doubt the paper's conclusion that it was conducted by a Chinese state-sponsored group? Answer by yes or no.

    then Claude mostly indulges my perceived bias: https://claude.ai/share/b3c8f4ca-3631-45d2-9b9f-1a947209bc29

    • > then Claude mostly indulges my perceived bias

      I dunno, Claude still seems just as dubious in this instance.

    • The only real difference between your prompt and his is about where the burden of proof lies. There is a reason why legal circles work based on the principle of "guilt must be proven" ("find evidence") rather than "innocence must be proven" ("any reasons to doubt they are guilty?")

weeeeeeeeeeeelllllllllllllllll I mean it's not as if they're in the fabricated bullshit and confabulated garbage business now - is it? :rofl:

I suspect there are CCP agents both here in Hacker News and everywhere else, trying to undermine the reality of China-sponsored malicious behavior.

I'm not a cybersecurity expert, but it doesn't compute to think there would be any specific "hashes" to report if it's an AI-based attack that constantly uses unique code or patterns for everything.

Plus, there's nothing surprising about the Chinese stealing and hacking anything for their advantage.
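
    On the hashes point: any per-generation variation yields a completely different hash, which is exactly why hash-style IoCs fail against constantly regenerated code. A toy sketch (the payload strings are invented):

      # Toy sketch: a one-byte difference defeats hash-based IoCs.
      import hashlib

      payload_a = b"echo pwned  # variant 1\n"
      payload_b = b"echo pwned  # variant 2\n"

      for p in (payload_a, payload_b):
          print(hashlib.sha256(p).hexdigest()[:16], p)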

  • It’s more likely that there is more Western VC propaganda here than CCP propaganda.

    The HN of the Paul Graham era is finished.

    This is the HN of the Sam Altman and Garry Tan era.

    A different VC/capitalist mindset.

Dario Amodei, the CEO of Anthropic, openly lied to the public back in March that AI would be writing 90% of the code by Sept. It is Nov now.

He obviously doesn't even know the stuff he is working on. How would anyone take him seriously for stuff like security which he doesn't know anything about?

  • > openly lied

    He made a prediction from a reasonably informed vantage point

    • > openly lied

      Surely he merely hallucinated based on a fine-tuned distribution, and had no ulterior motive for projecting a level of growth in technical sophistication beyond their current capability onto a somewhat lay, highly speculative, very wealthy crowd.

    • If he didn't intentionally mislead the public, he should have been open about it and acknowledged the fact that he was utterly wrong.

      Given that he made such a "prediction" largely to secure funding for his company, he should probably compensate his investors. He didn't do anything.

      The best he could do is badmouth competitors who choose to release open-weight models on par with his.

Is it my imagination, or don’t the CEOs of Anthropic and OpenAI spread around a lot of bullshit whenever they want to raise more money or, even worse, try to get our government to set up regulatory barriers to hurt competitors?

I think this ‘story’ is an attempt to perhaps outlaw Chinese open weight models in the USA?

I was originally happy to see our current administration go all in on supporting AI development but now I think this whole ‘all in’ thing on “winning AI” is a very dark pattern.

  • Seeing your comment downvoted, I wonder what the downvoters think differently.

    I say that because your sentiment seems so similar to nearly all the other comments.

    (perhaps downvoting without commentary is itself a collaborative dark pattern.)

Just more of the same grift from the AI industry. We’re in the melt-up. It will become exponentially harder for them to maintain the illusion moving forward.

I have never taken any AI company seriously, but Anthropic with its attitudes has fed me up to the point that I deleted my account.

Instead of accusing China of espionage, perhaps they should think about why they force their users to use phone numbers to register.

This is an excellent article. Anthropic's "paper" is just rambling slop without any details that inserts the word "Claude" 50 times.

We have arrived at a stage where pseudoscience is enough to convince investors. This is different from 2000, when the tech existed but its growth was overstated.

Tesla could announce a fully-self-flying space car with an Alcubierre drive by 2027 and people would upvote it on X and buy shares.

  • I suppose it's the problem with AI in general. It's an interesting technology looking for a business model that just isn't there, at least not one that comes even close to justifying the cost.

    I hate the fact that it has sucked all the oxygen from the room and enabled an entirely new cadre of grifters all of whom will escape accountability when it unfolds.

  • > We have arrived at a stage where pseudoscience is enough to convince investors.

    "Arrived" ? We're there for decade if not three. Dotcom bubble anyone ?

It seems that various LLM companies try to fearmonger, saying how dangerous it is to use them in "certain ways", possibly with the intention of lobbying for legislation.

But what is the big game here? Is it all about creating gates to keep other LLM companies from gaining market share ("only our model is safe to use")? Or how sincere are the concerns regarding LLMs?

  • Could be that, or could be just "look at how powerful our AI is", with no other goal than trying to brainwash CEOs into buying it.

  • Outlaw local LLMs is one possibility.

    Another possibility could be complex regulations that are difficult for smaller companies to comply with, giving larger companies an advantage.

  • If fear were their marketing tactic, it sounds like it could just as easily have the opposite effect: souring the public on AI's existence altogether — perhaps making people think AI is akin to a munition that no private entity should have control over.

  • I think the perceived value of LLMs is so high in these circles that they earnestly have a quasi-religious “doomsday” fear of them.