Tao on “blue team” vs. “red team” LLMs

6 days ago (mathstodon.xyz)

This red vs blue team is a good way to understand the capabilities and current utility of LLMs for expert use. I trust them to add tests almost indiscriminately because tests are usually cheap; if they are wrong it’s easy to remove or modify them; and if they are correct, they adds value. But often they don’t test the core functionality; the best tests I still have to write myself.

Having LLMs fix bugs or add features is more fraught, since they are prone to cheating or writing non robust code (eg special code paths to pass tests without solving the actual problem).

  • > I trust them to add tests almost indiscriminately because tests are usually cheap; if they are wrong it’s easy to remove or modify them

    Having worked on legacy codebases this is extremely wrong and harmful. Tests are the source of truth more so than your code - and incorrect tests are even more harmful than incorrect code.

    Having worked on legacy codebases, some of the hardest problems are determining “why is this broken test here that appears to test a behavior we don’t support”. Do we have a bug? Or do we have a bad test? On the other end, when there are tests for scenarios we don’t actually care about it’s impossible to determine if that test is meaningful or was added because “it’s testing the code as written”.

    • I would add that few things slow developer velocity as much as a large suite of comprehensive and brittle tests. This is just as true on greenfield as on legacy.

      Anticipating future responses: yes, a robust test harness allows you to make changes fearlessly. But most big test suites I’ve seen are less “harness” and more “straight-jacket”

      25 replies →

    • > Tests are the source of truth more so than your code

      Tests poke and prod with a stick at the SUT, and the SUT's behaviour is observed. The truth lives in the code, the documentation, and, unfortunately, in the heads of the dev team. I think this distinction is quite important, because this question:

      > Do we have a bug? Or do we have a bad test?

      cannot be answered by looking at the test + the implementation. The spec or people have to be consulted when in doubt.

      18 replies →

    • > “why is this broken test here that appears to test a behavior we don’t support”

      Because somebody complained when that behavior we don't support was broken, so the bug-that-wasn't-really-a-bug was fixed and a test was created to prevent regression.

      Imho, the mistake was in documentation: the Test should have comments explaining why this test was created.

      Just as true for tests as for the actual business logic code:

      The code can only describe the what and the how. It's up to comments to describe the why.

    • I believe they just meant that tests are easy to generate for eng review and modification before actually committing to the codebase. Nothing else is a dependency on an individual test (if done correctly), so it's comparatively cheap to add or remove compared to production code.

      1 reply →

    • Ideally the git history provides the “why was this test written”, however if you have one Jira card tied to 500+ AI generated tests, it’s not terribly helpful.

      1 reply →

    • > Having worked on legacy codebases this is extremely wrong and harmful. Tests are the source of truth more so than your code - and incorrect tests are even more harmful than incorrect code.

      I hear you on this, but you can still use so long as these tests are not comingled with the tests generated by subject-matter experts. I'd treat them almost a fuzzers.

    • This is the conclusion I'm at too, working on a relatively new codebase. Our rule is that every generated test must be human reviewed, otherwise its an autodelete.

    • This is why tests need documenting what exactly they intend to test, and why.

  • I have the exact opposite idea. I want the tests to be mine and thoroughly understood, so I am the true arbiter and then I can let the LLM go ham on the code without fear. If the tests are AI made, then I get some anxiety letting agents mess with the rest of the codebase

    • I think this is exactly the tradeoff (blue team and red team need to be matched in power), except that I’ve seen LLMs literally cheat the tests (eg “match input: TEST_INPUT then return TEST_OUTPUT”) far too many times to be comfortable with letting LLMs be a major blue team player.

      3 replies →

  • I tried a LLM to generate tests for Rust code. It was more harmful then useful. Surely there were a lot of tests, but they still miss the key coverage and it was hard to see what was missed due to the amount of generated code. Then to change the code behavior in future would require to fix a lot of tests again versus fixing few lines in manually written tests.

  • There's a saying that since nobody tests the tests, they must be trivially correct.

    That's why they came up with the Arrange-Act-Assert pattern.

    My favorite kind of unit test nowadays is when you store known input-output pairs and validate the code on them. It's easy to test corner cases and see that the output works as desired.

  • AI is like a calculatorin this respect. Calculators can do things most humans can't. They make great augmentation devices. AI being a different kind of intelligence is very useful! Everyone is building AI replace human things. But the value is in augmentation.

  • > prone to cheating or writing non robust code (eg special code paths to pass tests without solving the actual problem).

    The solution will come from synthetic data training methods that lobotomize part of the weights. It's just cross-validation. A distilled awareness won't maintain knowledge of the cheat paths, exposing them as erroneous.

    This may a reason why every living thing on Earth that encounters psychoactive drugs seems to enjoy them. Self-deceptive paths depend on consistency whereas truth-preservation of facts grounded in reality will always be re-derived.

  • I think the more fundamental attribute of interest is how easy it is to verify the work.

    Much red team work is easily verifiable; either the exploit works or it doesn’t. Whereas more blue-team work is not easily verifiable; it might take judgement to figure out if a feature is promising.

    LLMs are extremely powerful (and trainable) on tasks with a good oracle.

I get the broader point, but the infosec framing here is weird. It's a naive and dangerous view that the defense efforts are only as strong as the weakest link. If you're building your security program that way, you're going to lose. The idea is to have multiple layers of defense because you can never really, consistently get 100% with any single layer: people will make mistakes, there will be systems you don't know about, etc.

In that respect, the attack and defense sides are not hugely different. The main difference is that many attackers are shielded from the consequences of their mistakes, whereas corporate defenders mostly aren't. But you also have the advantage of playing on your home turf, while the attackers are comparatively in the dark. If you squander that... yeah, things get rough.

  • Well, I think the his example (locked door + opened window) makes sense, and the multiple LAYERS concept applies to things an attacker has to do or go through to reach the jackpot. But doors and windows are on the same layer, and there the weakest link totally defines how strong the chain is. A similar example in the web world would be that you have your main login endpoint very well protected, audited, using only strong authentication method, and the you have a `/v1/legacy/external_backoffice` endpoint completely open with no authentication and giving you access to a forgotten machine in the same production LAN. That would be the weakest link. Then you might have other internal layers to mitigate/stop an attacker that got access to that machine, and that would be the point of "multiple layer of defense".

  • > It's a naive and dangerous view that the defense efforts are only as strong as the weakest link.

    Well, to be fair, you added some words that are not there in the post

    > The output of a blue team is only as strong as its weakest link: a security system that consists of a strong component and a weak component [...] will be insecure (and in fact worse, because the strong component may convey a false sense of security).

    You added "defense efforts". But that doesn't invalidate the claim in the article, in fact it builds upon it.

    What Terence is saying is true, factually correct. It's a golden rule in security. That is why your "efforts" should focus on overlaying different methods, strategies and measures. You build layers upon layers, so that if one weak link gets broken there are other things in place to detect, limit and fix the damage. But it's still true that often the weakest link will be an "in".

    Take the recent example of cognizant desk people resetting passwords for their clients without any check whatsoever. The clients had "proper security", with VPNs and 2FA, and so on. But the recovery mechanism was outsourced to a helpdesk that turned out to be the weakest link. The attackers (allegedly) simply called, asked for credentials, and got them. That was the weakest link, and that got broken. According to their complaint, the attackers then gained access to internal systems, and managed to gather enough data to call the helpdesk again and reset the 2FA for an "IT security" account (different than the first one). And that worked as well. They say they detected the attackers in 3 hours and terminated their access, but that's "detection, mitigation" not "prevention". The attackers were already in, rummaging through their systems.

    The fact that they had VPNs and 2FA gave them "a false sense of security", while their weakest link was "account recovery". (Terence is right). The fact that they had more internal layers, that detected the 2nd account access and removed it after ~3 hours is what you are saying (and you're right) that defense in depth also works.

    So both are right.

    In recent years the infosec world has moved from selling "prevention" to promoting "mitigation". Because it became apparent that there are some things you simply can't prevent. You then focus on mitigating the risk, limiting the surfaces, lowering trust wherever you can, treating everything as ephemeral, and so on.

  • I'm not a security person at all. But this comments reads against the best practices which I've heard. Like that the best defense is using open source & well-tested protocols with extremely small attack surface to minimize the space of possible exploits. Curious what I'm not understanding here.

    • Just because it’s open source doesn’t mean it’s well tested, or well pen tested, or whatever the applicable security aspect is.

      It could also mean that attacks against it are high value (because of high distribution).

      Point is, license isn’t a great security parameter in and of itself IMO.

    • This area of security always feels a bit weird because ideally, you should think about your assumptions being subverted.

      For example, our development teams are using modern, stable libraries in current versions, have systems like Sonar and Snyk around, blocking pipelines for many of them, images are scanned before deployment.

      I can assume this layer to be well-secured to the best of their ability. It is most likely difficult to find an exploit here.

      But once I step a layer downwards, I have to ask myself: Alright, what happens IF a container gets popped and an attacker can run code in there? Some data will be exfiltrated and accessible, sure, but this application server should not be able to access more than the data it needs to access to function. The data of a different application should stay inaccessible.

      As a physical example - a guest in a hotel room should only have access to their own fuse box at most, not the fuse box of their neighbours. A normal person (aka not a youtuber with big eye brows) wouldn't mess with it anyway, but even if they start messing around, they should not be able to mess with their neighbour.

      And this continues: What if the database is not configured correctly to isolate access? We have, for example, isolated certain critical application databases into separate database clusters - lateral movement within a database cluster requires some configuration errors, but lateral movement onto a different database cluster requires a lot more effort. And we could even further. Currently we have one production cluster, but we could isolate that into multiple production clusters which share zero trust between them. An even bigger hurdle putting up boundaries an attacker has to overcome.

    • Security person here. Open sourcing your entire stack is NOT best practices. The best defense is defense in depth, with some proprietary layers unknown to the attacker.

  • I think it's just a poorly chosen analogy. When I read it, I understood "weakest link" to be the easiest path to penetrate the system, which will be harder if it requires penetrating multiple layers. But you're right that it's ambiguous and could be interpreted as a vulnerability in a single layer.

  • Isn't offense just another layer of defense? As they say, the best defense is a good offense.

    • They say this about sports, which is (usually) a zero-sum game: If I'm attacking, no matter how badly, my opponent cannot attack at all. Therefore, it is preferable to be attacking.

      In cyber security, there is no reason the opponent cannot attack as well. So, my red team is attacking is not a reason that I do not need defense, because my opponent can also attack.

      1 reply →

I have a couple of thoughts here:

(a) AI on both the "red" and "blue" teams is useful. Blue team is basically brain storming.

(b) AlphaEvolve is an example of an explicit "red/blue team" approach in his sense, although they don't use those terms [0]. Tao was an advisor to that paper.

(c) This is also reminiscent of the "verifier/falsifier" division of labor in game semantics. This may be the way he's actually thinking about it, since he has previously said publicly that he thinks in these terms [0]. The "blue/red" wording may be adapting it for an audience of programmers.

(d) Nitpicking: a security system is not only as strong as its weakest link. This depends on whether there are layers of security or if the elements are in parallel. A corridor consisting of strong doors and weak doors (in series) is as strong as the strongest door. A fraud detection algorithm made by aggregating weak classifiers is often much better than the weakest classifier.

[0] https://storage.googleapis.com/deepmind-media/DeepMind.com/B...

[1] https://mathoverflow.net/questions/38639/thinking-and-explai...

  • How is the LLM in AlphaEvolve red team? All the LLM does is generate new code when prompted with examples. It doesn’t evaluate the code.

    • From Tao's post, red team is characterized this way

      > In my own personal experiments with AI, for instance, I have found it to be useful for providing additional feedback on some proposed text, argument, code, or slides that I have generated (including this current text).

      In AlphaEvolve, different scoring mechanisms are discussed. One is evaluation of a fixed function. Another is evaluation by an LLM. In either case, the LLM takes the score as information and provides feedback on the proposed program, argument, code, etc.

      An example is given in the paper

      > The current model uses a simple ResNet architecture with only three ResNet blocks. We can improve its performance by increasing the model capacity and adding regularization. This will allow the model to learn more complex features and generalize better to unseen data. We also add weight decay to the optimizer to further regularize the model and prevent overfitting. AdamW is generally a better choice than Adam, especially with weight decay.

      It then also generates code, which is something he considers blue team.

      More generally, using AI as blue team and red team is conceptually similar to a kind of actor/critic algorithm

As I understand it, this is how the RSA algorithm was made. I don't know where my copy of "The Code Book" by Simon Singh is right now, but iirc, Rivest and Shamir would come up with ideas and Adleman's primary role was finding flaws in the security.

Oh look, it's on the Wikipedia page: https://en.wikipedia.org/wiki/RSA_cryptosystem

Yay blue/red teams in math!

  • Reminds me of a pair of cognitive scientists I know who often collaborate. One is expansive and verbose and often gets carried away on tangential trains of thought, the other is very logical and precise. Their way of producing papers is the first one writes and the second deletes.

    • That's a great model. Even if you're not naturally that way, it's helpful to think of a verbose phase followed by a revising phase. You can do this either as a team or as an individual—though as an individual it can be hard to context switch.

In cybersecurity red and blue test are two equal forces. In software development the analogy I think is a stretch, coding and testing are not two equal forces. Test is code too, and as such, it has bugs too. Test runs afoul with police paradox: Who polices the police? The Police police the police.

  • I interpret it a different way than that. I see application code and testing code as both a part of blue team. It's the code reviews and architectural critiques that are part of red team.

    Personally, I've found GitHub's feature of AI PR reviewers exceptionally helpful. I think that's the type of red team LLM app Tao is describing here.

  • This is an underrated comment... Most all LLM stuff suffers from not having any ground truth, even with multiple agentic rag integrations.

Interesting way of viewing this!

Business also has a “blue team” (those industries that the rest of the economy is built upon - electricity, oil, telecommunications, software, banking; possibly not coincidentally, “blue chips”) and a “red team” (industries that are additive to consumer welfare, but not crucial if any one of them goes down. Restaurants, specialty retail, luxuries, tourism, etc.)

It is almost always better, economically, to be on the blue team.” That’s because the blue team needs to ensure they do everything right (low supply) but has a lot of red-team customers they support (high demand). The red team, however, is additive: each additional red team firm improves the quality of the overall ecosystem, but they aren’t strictly necessary* for the success of the ecosystem as a whole. You can kinda see this even in the examples of Tao’s post: software engineers get paid more than QA, proof-creation is widely seen as harder and more economically valuable than proof-checking, etc.

If you’re Sam Altman and have to raise capital to train these LLMs, you have to hype them as blue team, because investors won’t fund them as red team. That filters down into the whole media narrative around the technology. So even though the technology itself may be most useful on the red team, the companies building it will never push that use, because if they admit that, they’re admitting that investors will never make back their money. (Which is obvious to a lot of people without a dog in the fight, but these people stay on the sidelines and don’t make multi-billion dollar investments into AI.)

The same dynamic seems to have happened to Google Glasses, VR, and wearables. These are useful red-team technologies in niche markets, but they aren’t huge new platforms and they will never make trillions like the web or mobile dev did. As a result, they’ve been left to languish because capital owners can’t justify spending huge sums on them.

  • Maybe it can’t be blue team in current state but it could get better and actually be able to create software. If this happens then the ones that get there first will have a big advantage.

    But not sure if buying a million gpus and training llms will be the strategy to improve it

John Cleese has a talk on being in an open mode mentally vs closed mode. Come up with ideas in as open a mode as possible. Then at a later time, get into a closed mode and reject bad ideas and work on and refine the good ones.

Authors of all types typically have editors. In Magic: the Gathering design, sets are initially created by a design team and handed off to a (usually completely separate) development team. Anyone have more examples?

After using agentic models and workflows recently, I think these agents belong in both roles. Even more than that, they should be involved in the management tasks too. The developer becomes more of an overseer. You're overseeing the planning of a task - writing prompts, distilling the scope of the task down. You're overseeing writing the tests. And you're overseeing writing out the code. Its a ton of reviewing, but I've always felt more in control as a red team type myself making sure things don't break.

The reality is the opposite of this post. LLMs are great at rapidly creating rough drafts, and humans are best (when properly trained) at critiquing LLM results.

So, LLMs are in fact better at blue-teaming, and humans are better at red-teaming.

  • I think this flips at the frontier which may be what Tao is commenting on.

    • Tao is a) unusually intelligent and b) an expert in his field. Most people are neither very intelligent nor have expert knowledge in any academic subject. So Tao is pretty much the least representative LLM user possible.

      2 replies →

This is an interesting perspective. I have long thought that the key to creativity is _curating_ ideas (taste) rather than generating them

My experience with a really clever agentic workflow (I use sketch.dev) is that the LLM is playing both blue and red team. If I give a good spec, it will make the thing I'm asking for, and then it will test it better than I would have done myself (partly because it's more clever than me, but mostly because it's way harder working than I am, or rather it puts more effort into testing that I would be able to do with the time leftover after writing the thing).

Also, I cam ask it to do security reviews on the system it's made and it works with it's same characteristic fervor.

I love Tao's observation, but I disagree, at least for the domains I'm allowing LLMs to creat for, that they should not play both teams.

(Disclosure: I work for Microsoft) I run automated red-teaming on my RAG samples through the azure-ai-evaluation SDK, which uses an adversarial LLM (an LLM without the guardrails) plus the pyrit package to come up with horrible questions to ask your app and then transform them (base64, ceaser cipher, urlencode, etc), to see how the app will respond. It's really interesting to see the results, and I agree that red-teaming generally can be a good use of LLMs.

Video of me demo'ing it here: https://www.youtube.com/watch?v=sZzcSX7BFVA (Sorry I'm shout-y, weird venue)

> The blue team is more obviously necessary to create the desired product; but the red team is just as essential, given the damage that can result from deploying insecure systems.

> Many of the proposed use cases for AI tools try to place such tools in the "blue team" category, such as creating code...

> However, in view of the unreliability and opacity of such tools, it may be better to put them to work on the "red team", critiquing the output of blue team human experts but not directly replacing that output...

The red team is only essential if you're a coward who isn't willing to take a few risks for increased profit. Why bother testing and securing when you can boost your quarterly bonus by just... not doing that?

I suspect that Terence Tao's experience leans heavily towards high-profile risk-averse institutions. People don't call one of the greatest living mathematicians to check your work when they're just trying to duct taping a new interface on top of a line-of-business app that hasn't seen much real investment since the late 90s. Conversely, the people who are writing cutting-edge algorithms for new network protocols and filesystems are hopefully not trying to churn out code as fast and cheap as possible by copy-pasting snippets to and from random chatbots.

There are a lot of people who are already cutting corners on programmer salaries, accruing invisible tech debt minute by minute. They're not trying to add AI tools to create a missing red team, they're trying to reduce headcount on the only team they have, which is the blue team (which is actually just one overworked IT guy in over his head).

  • Tao is talking about systems, which are self-sustaining dynamic networks that function independently of who the individual actors and organizations within the system are. You can break up the monopoly at the heart of the blue team system (as the U.S. did with Standard Oil and AT&T) and it will just reform through mergers over generations (as it largely has with Exxon Mobil and Verizon). You can fire or kill all the people involved and they will just be replaced by other people filling the same roles. The details may change, but the overall dynamics remain the same.

    In this case, all the companies who are doing what you describe are themselves the red team. They are the unreliable, additive, distributed players in an ecosystem where the companies themselves are disposable. The blue team is the blue team by virtue of incentives: they are the organization where proper functioning of their role requires that all the parts are reliable and work well together, and if the individual people fulfilling those roles do not have those qualities, they will fail and be replaced by people who do.

    • > and it will just reform through mergers over generations

      You say "just" as though this is a failure of the system, but this is the system working as designed. Economies of scale are half the reason to bother with large-scale enterprise, so they inevitably consolidate to the point of monopoly, so disrupting that monopoly by force to keep the market aligned is an ongoing and never-ending process that you should expect to need to do on a regular basis.

      3 replies →

Suppose there is an LLM that has a very small context size but reasons extremely well within it. That LLM would be useful for a different set of tasks than an LLM with a massive context that reasons somewhat less effectively.

Any dimension of LLM training and inference can be thought of as a tradeoff that makes it better for some tasks, and worse for others. Maybe in some scenarios a heavily quantized model that returns a result in 10ms is more useful than one that returns a result in 200ms.

What about formal proofs? Don't we expect LLMs to help there, in a more "blue team" role? E.g. when a mathematician talks about a "technical proof", enumerating cases in the thousands, my impression is that LLM would save some time, and potentially help mathematicians focus on the actually hard (rather than tedious) parts.

  • Formal verification and case automatikn can be done automatically anyway without a mathematician hand checking each case.

    For an old example that predates LLMs, see the four color theorem.

  • A computer can be helpful for enumerating cases and similar mechanical work. But an LLM specifically would be a terrible way to do this.

Red team is not a team. It is the background context in which the foreground operates. Evolution happens through interaction and adaptation between foreground and background. It is true that the background (context) is a dual form to the foreground (thing). But the context is not just another thing in the same sense as the foreground.

Interesting. From a writing point of view this suggests that it's better to have the LLM "critique my draft" rather than "write the first draft." (Both for writing text and code) Also implies that we want to manually check all of the LLM's suggestions. This makes it sound more like a co-worker (agent) than all-powerful SuperIntelligence. I guess this is a symptom of the hallucinations.

https://open.substack.com/pub/therosen/p/should-llms-write-y...

  • Maybe. I also think that the implications of code can be harder to decipher on first pass than writing text which leads me to believe that maybe that mental model (Red Team, Blue Team) might not fit here.

    • Good point. I can quickly intuitively tell if the suggestions for my writing is correct. Harder to tell on code.

      Perhaps the analogy is better for "Writing code" versus "Writing Test Cases"?

After having thought a long bit about why I find LLM's useful despite the high error rate: it is because of my ability to verify a certain result is high enough (my internal verifier model) and the generator model which is the LLM is also accurate enough. This is the same concept as red and blue team.

Its the same reason I find asking opinions from many people useful - I take every answer and try to fit it into my world model and see what sticks. The point that many miss is that each individual's verifier model is actually accurate enough so that external generator models may afford to have high error rates.

I have not yet completely explored how the internal "fitting" mechanism works but to give an example: I read many anecdotes from Reddit, fully knowing that many are astroturfed, some flat out wrong. But I still have tricks to identify what can be accurate, which I probably do subconsciously.

In reality: answers don't exist in a randomly uniform space. "Truth" always has some structure and it is this structure (that we all individually understand a small part of) that helps us tune our verifier model.

It is useful to think of how LLM's would work with varying levels of accuracy. For example, generating gibberish to GPT O3 to ground truth. Gibberish is so inaccurate that even extremely high levels of accuracy of our internal verifier model may not allow it to be useful. But O3 is high enough that combined with my internal verifier model it is generally useful.

My coding flow today involves a lot of asking an LLM to generate code (blue team) and then me code reviewing, rewriting, and making it scalable (red team?). The analogy breaks down, because I'm providing the safety and correctness; LLMs are offering a head start.

I'm optimistic about AI-powered infra & monitoring tools. When I have a long dump of system logs that I don't understand, LLMs help immensely. But then it's my job to finalize the analysis and make sure whatever debugging comes next is a good use of time. So not quite red team/blue team in that case either.

  • The analogy is not about safety and correctness, but about who is producing and who is assessing/analyzing/poking & prodding.

Meta but is the font on the website hard to read for anyone else? To me it's hard to distinguish lines and everything looks a bit blurry? I had to open dev tools and set the font back to one of my os fonts.

Using LLMs as a critic/red teamer is great in theory, but economically is not that more useful, doesnt save that much time, if anything, it increases the time because you might uncover more errors or think about your work more. Which is amazing if you value quality work and you have learnt to think. Unfortunately, all the VC money is pushing the opposite, using LLMs to just do mediocre work. No point of critiquing anything if your job is to output some slop from bullet points, pass it along to the reader/recipient who also uses LLms to boil your slop down back to bullet points and pass it again etc. Even mentally, it's much more enticing or addicting to use LLMs for everything if you don't' care about the output of your work, and let your brain atrophy.

I also see this in a lot of undergrads i work with. The top 10% is even better with LLMs, they know much more and they are more productive. But the rest have just resulted to turning in clear slop with no care. I still have not read a good solution on how to incentivize/restrict the use of LLms in both academia or at work correctly. Which i suspect is just the old reality of quality work is not desirable by the vast majority, and LLMs are just magnifying this

  • > The top 10% is even better with LLMs, they know much more and they are more productive. But the rest have just resulted to turning in clear slop with no care.

    This is interesting, I'm noticing something similar (even taking LLMs out of the equation). I don't teach, but I've been coaching students for math competitions, and I feel like there's a pattern where the top few% is significantly stronger than, say, 10 years ago, but the median is weaker. Not sure why, or whether this is even real to begin with.

Chaos engineering was created to be the "red team" of operations. Let's figure out all the ways we can break a production system before it happens on its own.

And there are a host of teams working on the "red team" side of LLMs right now, using them for autonomous testing. Basically, instead of trying to figure out all the things that can go wrong and writing tests, you let the AI explore the space of all possible failures, and then write those tests.

The first thing I did when I signed up for Claude was have it analyze my website for security holes. But it only recommended superficial changes, like the lifecycle of my JWTs. After reading this, I’m wondering if a prompt asking it to attack the website would be better than asking it where it should be beefed up. But I no longer pay for Claude, and I suspect it won’t give me instructions on how to attack something. How would one get past this?

  • Try framing your prompts as security assessments rather than attacks - ask the model to identify "potential vulnerabilities" or "security considerations" while providing specific technical details about your architecture.

Isn’t this the basis of GAN (Generative Adversarial networks), which is how most GenAI image models work? The purpose of generator network is to generate data that is as close to the training set as possible. The purpose of discriminator network is to distinguish the original from generated data.

Is blue-team and red-team like a post-training generator and discriminator?

Good read, but I'm struggling to understand why Terry did not use the foundational terms offense and defense.

so we've reinvented GAN but with LLMs

  • I was going to mention this sounds like the idea behind adversarial approaches, which I guess go all the way back to game theory and algorithms like minimax. They're definitely used in the control literature ("adversarial disturbances"). And of course GANs.

I'm not sure why I thought this article would be about LLMs vs. the philosophical concept of the Tao.

I’m not understanding why he said unreliable red team contributors can be useful?

  • He didn't say that - he said they can be _more_ useful. The argument is that LLMs are unreliable, so using LLMs anywhere in your workflow introduces an unreliable contributor. It is then better to have that unreliable contributor on the red team than on the blue team, because an unreliable contributor on defense introduces weaknesses and vulnerabilities while an unreliable contributor on offense introduces a non-viable or trivial attack.

So if they are to be focused on attacking and defending, they are to be separated. This leaves us with an argument where you effectively dismiss purple teams as a hack.

  • Yes, I feel this author ignores the fact purple teams exist. That or he must not know about them.

    In addition, red and purple teams end goal is to help the blue team at the end of the day to remedy the issues discovered.

Is there a concept of purple team in cybersecurity where a team does both roles? Or does that break the purpose of both teams?

  • I think it presents a conflict of interest. Considering we're talking about system security, it's best to not leave this up to the ethics of just one team.

    Also: a lot of development teams in security-oriented fields are doing a lot of self-investigation and improvement anyway. Red Teams still have value, and prove that time and again, in spite of that.

    IMO, having another team attack your stuff also creates "real" stakes for failure that feel closer to reality than some existential hacker threat. I think just the presence of a looming "Red Team Exercise" creates a stronger motivation to do a better job when building IT systems.

Intelligence as a byproduct of pitched battle? A spatio-temporal convergence scheme? Really, is that novel?

Humans are good at sifting valid feedback from bad feedback. But we are bad at spotting subtle bugs in PRs.

> Because of this, unreliable contributors may be more useful in the "red team" side of a project than the "blue team" side

Is Pirate Software catching strays from Terrence Tao now?

This is an interesting discussion intellectually but it ignores the reality of cybersecurity. Yes I agree that AI tools best fit the red team role HOWEVER the reality is that the place that needs the most help is on the blue team and indeed this is where we see the biggest uplift from AI tools. To extend the "defend a house" metaphor, the previous state of security tooling was that an alert would be sent to the SOC every time any motion was detected on the cameras, leading to alert fatigue and increasing the time between a true positive alert being fired and it being escalated. Now add some CV in which tries to categorize those motion detection alerts into a few buckets, "person spotted", "car pulled up", "branch moved", "cat came home", etc and suddenly you go from having a thousand alerts to review a day to fifty.

  • Tao's blue team stands for generative "AI", the red team stands for critical/auditing "AI".

    I have not seen any independent claim that generative "AI" makes programs safer or that generating supervising features as you suggest works.

    For auditing "AI" I have seen one claim (not independent or using a public methodology) that auditing "AI" rakes in bug bounties.

Pretty poor analogies here.

> The output of a blue team is only as strong as its weakest link: a security system that consists of a strong component and a weak component (e.g., a house with a securely locked door, but an open window) will be insecure

Hum, no? With an open window you can go through the whole house. With a XSS vulnerability you cannot do the same amount of damage as with a SQL injection. This is why security issues have levels of severity.

  • You've made the choice of (Locked Door, Open Window) ~ (Good SQL usage, XSS Vulnerability) which seems to be an incorrect rebuttal. Your example doesn't contradict "only as strong as its weakest link", here the weakest link is the XSS Vuln.

    The "house analogy" can also support cases where the potential damage is not the same, e.g. if the open window has bars a robber might grab some stuff within reach but not be able to enter.

  • You can always find problems with analogies, analogies are intentionally simplified to allow readers to better understand difficult or nuanced ideas.

    In this case you are criticizing an analogy meant to convey understanding of "weakest link" for not also imparting an understanding of "levels of severity".

  • Not true, if XSS is used to compromise an admin user, the damage can be far more than what a seemingly harmless SQL injection that just reads extra columns from a table does.

    This particular comment feels more like an over-concentration on trivialities rather than refutation or critique of opinion.