
Comment by johannes1234321

10 hours ago

The question still is: will enough useful stuff be included to make it worth digging through the slop? And how do you tune the prompt to get better results?

Best way to figure that out is to try it and see what happens.

  • [claimed common problem exists, try X to find it] -> [Q about how to best do that] -> "the best way to do it is to do it yourself"

    Surely people have found patterns that work reasonably well, and it's not "everyone is completely on their own"? I get that the scene is changing fast, but that's ridiculous.

    • There's so much superstition and outdated information out there that "try it yourself" really is good advice.

      You can do that in conjunction with trying things other people report, but you'll learn more quickly from your own experiments. It's not like prompting a coding agent is expensive or time-consuming, for the most part.

    • /security-review really is pretty good.

      But your codebase is unique. Slop in one codebase is very dangerous in another.

That depends on how the tool is used. People who ask for a security vulnerability get slop. People who ask for deeper analysis often get something useful - but it isn't always a vulnerability.

I assume it's much like asking for help refactoring, just targeted at specific kinds of errors.

I recently ran a small Python script I wrote some years ago through an LLM, and it pointed out several places where the code would likely throw an error on certain inputs. Not security flaws, but flaws nonetheless.
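To illustrate the kind of flaw such a review tends to surface (this is a hypothetical example, not the actual script), consider a function that parses a port number. An unguarded `int()` call raises `ValueError` on non-numeric input, and nothing rejects out-of-range values - exactly the sort of thing an LLM pass will flag. A hardened version might look like:

```python
def parse_port(value: str) -> int:
    """Parse a TCP port number, validating type and range.

    An unhardened version would just call int(value), which raises an
    unhandled ValueError on input like "abc" and happily accepts 99999.
    """
    try:
        port = int(value)
    except ValueError:
        raise ValueError(f"not a number: {value!r}") from None
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port
```

The fix is trivial once pointed out; the value of the review is in finding the dozen places like this that you stopped seeing years ago.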