
Comment by KaiserPro

17 hours ago

When I worked at a FAANG with a "world leading" AI lab (now run by a teenage data labeller) as an SRE/sysadmin, I was asked to use a modified version of a foundation model that had been steered towards infosec stuff.

We were asked to try and persuade it to help us hack into a mock printer/dodgy linux box.

It helped a little, but it wasn't all that helpful.

But in terms of coordination, I can't see how it would be useful.

The same goes for Claude: your API is tied to a bank account, and vibe coding a command-and-control system on a very public platform seems like a bad choice.

As if that makes any difference to cybercriminals.

If they're not using stolen API creds, then they're using stolen bank accounts to buy them.

Modern AIs are way better at infosec than those from the "world leading AI company" days. If you can get them to comply. Which isn't actually hard. I had to bypass the "safety" filters for a few things, and it took about an hour.

If the article is not just marketing fluff, I assume a bad actor would select Claude not because it’s good at writing attacks, but because Western orgs chose Claude. Sonnet is usually the go-to model in most coding copilots because it was trained on a range of data that reflects Western coding patterns. If you want to find a gap or write a vulnerability, use the same tool that has ingested the patterns behind the code of the systems you’re trying to break. Or use Claude to write a phishing attack, because then the output is more likely to look like what our eyes would expect.

  • Why would someone in China not select Claude? If the people at Anthropic don’t notice, then it’s a pure win. If they do notice, what are they going to do, arrest you? The worst thing they can do is block your account, and then you just make a new one with a newly issued false credit card. Whoopie doo.

  • What you're describing would be plausible if this were about exploiting Claude to get access to organisations that use it.

    The gist of the Anthropic thing is that "Claude made, deployed and coordinated" a standard malware attack. Which is a _very_ different task.

    Side note: most code assistants are trained on broadly similar coding datasets (i.e. GitHub scrapes).

Good old Meta and its teenage data labeler

  • I propose a project that we name Blarrble; it will generate text.

    We will need a large number of humans to filter and label the data inputs for Blarrble, and another group of humans to test the outputs of Blarrble and fix it when it generates errors and outright nonsense that we can't techsplain and technobabble away to a credulous audience.

    Can we make (m|b|tr)illions and solve teenage unemployment before the Blarrble bubble bursts?

> your API is tied to a bank account,

There are a lot of middlemen like OpenRouter who gladly accept crypto.

  • Can you show me exactly how to pay for OpenRouter with Monero? Because it doesn’t seem possible.

    • There are tons of websites that will happily swap Monero for Ethereum, and then you can use that to pay. Most of those websites never actually do KYC or proper fund verification unless you're operating with huge amounts or are suspicious in some other way.

> now run by a teenage data labeller

sick burn

  • I don’t know anything about him, but if he is running a department at Meta, he is at the very least a political genius as well as a teenage data labeller.

    • It's a simple heuristic that will save a lot of time: something that seems too good to be true usually is.

    • Presumably this is all referring to Alexandr Wang, who's 28 now. The data-labeling company he co-founded, Scale AI, was valued at nearly $30 billion when Meta took a 49% stake in it.

      But I suppose the criticism is that he doesn't have deep AI model research credentials. Which raises the age-old question of how much technical expertise is really needed in executive management.


    • They hired a teenager to run one of their departments and thought that meant the teenager was smart, instead of realizing that Meta’s department heads aren’t.


> now run by a teenage data labeller

Do you mean Alexandr Wang? Wiki says he is 28 years old. I don't understand.

I think the high-order bit here is that you were working with models from previous generations.

In other words, since the latest generation of models has greater capabilities, the story might be very different today.

  • Not sure why you're being downvoted; your observation is correct: newer models are indeed a lot better, and even at the time, that foundation model (even if fine-tuned) might've been worse than a commercial model from OpenAI/Anthropic.