Comment by pixel_popping

8 hours ago

So tomorrow, if a model genuinely finds a bunch of real vulnerabilities, would you just ignore them? That makes no sense.

An LLM finding problems in code is not at all the same as someone using it to contribute code to a project that they couldn't write or haven't written themselves. A report stating "There is a bug/security issue here" is not itself something I have to maintain; it's something I can react to by writing a fix, and the code I then maintain is code I wrote myself.