Comment by KaiserPro
3 months ago
When I worked at a FAANG with a "world leading" AI lab (now run by a teenage data labeller) as an SRE/sysadmin, I was asked to use a modified version of a foundation model that was steered towards infosec stuff.
We were asked to try to persuade it to help us hack into a mock printer/dodgy Linux box.
It helped a little, but it wasn't all that helpful.
But in terms of coordination, I can't see how it would be useful.
The same goes for Claude: your API is tied to a bank account, and vibe coding a command and control system on a very public service seems like a bad choice.
As if that makes any difference to cybercriminals.
If they're not using stolen API creds, then they're using stolen bank accounts to buy them.
Modern AIs are way better at infosec than those from the "world leading AI company" days, if you can get them to comply. Which isn't actually hard: I had to bypass the "safety" filters for a few things, and it took about an hour.
If the article is not just marketing fluff, I assume a bad actor would select Claude not because it's good at writing attacks, but because Western orgs chose Claude. Sonnet is usually the go-to in most coding copilots because the model was trained on a range of data reflecting Western coding patterns. If you want to find a gap or write a vulnerability, use the same tool that has ingested the patterns behind the code of the systems you're trying to break. Or use Claude to write a phishing attack, because then the output is more likely to resemble what our eyes expect.
Why would someone in China not select Claude? If the people at Anthropic don't notice, then it's a pure win. If they do notice, what are they going to do, arrest you? The worst thing they can do is block your account, and then you just make a new one with a newly issued false credit card. Whoopie doo.
> Why would someone in China not select Claude?
Because Anthropic doesn't provide services in China? See https://www.anthropic.com/supported-countries
8 replies →
What you're describing would be plausible if this were about exploiting Claude to get access to organisations that use it.
The gist of the anthropic thing is that "claude made, deployed and coordinated" a standard malware attack. Which is a _very_ different task.
Side note: most code assistants are trained on broadly similar coding datasets (i.e. GitHub scrapes).
> your API is tied to a bank account,
There are a lot of middlemen like open router who gladly accept crypto.
Can you show me exactly how to pay OpenRouter with Monero? Because it doesn't seem possible.
There are tons of websites that will happily swap Monero for Ethereum, which you can then use to pay. Most of those websites never actually do KYC or proper fund verification unless you're operating in huge amounts or are suspicious in some other way.
[flagged]
I used to work at RL so I instantly knew what he was referring to.
I propose a project that we name Blarrble, it will generate text.
We will need a large number of humans to filter and label the data inputs for Blarrble, and another group of humans to test the outputs of Blarrble and fix it when it generates errors and outright nonsense that we can't techsplain and technobabble away to a credulous audience.
Can we make (m|b|tr)illions and solve teenage unemployment before the Blarrble bubble bursts?
Where do I write the check?
> now run by a teenage data labeller
sick burn
I don't know anything about him, but if he is running a department at Meta, he is at the very least a political genius as well as a teenage data labeller.
It's a simple heuristic that will save a lot of time: something that seems too good to be true usually is.
Presumably this is all referring to Alexander Wang, who's 28 now. The data-labeling company he co-founded, Scale AI, was acquired by Meta at a valuation of nearly $30 billion.
But I suppose the criticism is that he doesn't have deep AI model research credentials. Which raises the age-old question of how much technical expertise is really needed in executive management.
7 replies →
I was just watching the Y Combinator interview with Alexandr Wang, who I guess is the one being referred to: https://youtu.be/5noIKN8t69U
The teenage data labeller thing was a bit of an exaggeration. He did found Scale AI at nineteen, which does data labelling amongst other things.
12 replies →
[flagged]
10 replies →
Alexandr Wang is 28 years old, the same age Mark Zuckerberg was when Facebook IPO'ed.
A business where the distinguishing factor was exclusivity, not technical excellence, so it tracks.
> the same goes for Claude: your API is tied to a bank account, and vibe coding a command and control system on a very public service seems like a bad choice.
Aside from middlemen, as others have suggested, you can also just procure hundreds of hacked accounts for any major service through spyware data dump marketplaces. Some percentage of them will have payment already set up. Steal their browser cookies, use an account until the owner notices and cancels / changes their password, then move on to the next stolen account. Happens all the time these days.
Yeah, I gave my AWS root API key to Cursor in agent mode. I learned that AWS charges ridiculous amounts for transferring and storing data.
Wouldn't it be relatively cheap to use Claude as a self-organizing control backplane for invoking the MCP tools that would actually do the work?
> "world leading" AI lab (now run by a teenage data labeller)
Aarush Sah?
meta was never "world leading"
pytorch
jax
I think the high order bit here is you were working with models from previous generations.
In other words, since the latest generation of models has greater capabilities, the story might be very different today.
Not sure why you're being downvoted; your observation is correct here. Newer models are indeed a lot better, and even at the time, that foundation model (even fine-tuned) might've been worse than a commercial model from OpenAI/Anthropic.
Do you mean Alexandr Wang? Wiki says he is 28 years old. I don't understand.