Comment by dudeinhawaii
6 days ago
Everyone is slamming you but the reality is that you could use AI models + a competent developer or security engineer to _very_ quickly shore up the entire codebase and fix every single hole -- getting it to a place where it's comparable with everything else out there. It's really not that hard (and there is already a bit of research around the defensive coding capabilities of tools like Codex and Claude Code)[1].
I have personally taken this approach with web dev, though granted, I'm a very senior developer. First, develop features, then ask a larger/smarter model (o3, o3-pro, gemini-2.5 pro) to analyze the entire codebase (in sections if needed) and surface every security issue, vulnerability, attack vector, etc. I then pass that back to agents to execute refactors that clean up the code. Repeat until all your keys are stored properly, all your calls are secured, all your endpoints are locked down, all your db calls are sanitized, and so on.
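For concreteness, here's a rough sketch of the review pass only, assuming the OpenAI Python SDK and an API key in the environment; the model name, chunk size, prompt wording, and file glob are all illustrative, not a fixed recipe. The findings would then be handed to a coding agent for the refactor step.

```python
# Rough sketch of the "ask a bigger model to audit the codebase" pass.
# Assumes the OpenAI Python SDK and OPENAI_API_KEY set in the environment;
# model name, chunking, and prompt wording are illustrative only.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

AUDIT_PROMPT = (
    "You are reviewing part of a web application for security problems. "
    "List every vulnerability, attack vector, exposed secret, unsanitized "
    "DB call, or unauthenticated endpoint you can find, with file references "
    "and a suggested fix for each."
)

def iter_source_chunks(root: str, max_chars: int = 60_000):
    """Concatenate source files into chunks small enough for one request."""
    buf, size = [], 0
    for path in sorted(Path(root).rglob("*.py")):  # widen the glob for your stack
        text = f"\n# ==== {path} ====\n" + path.read_text(errors="ignore")
        if size + len(text) > max_chars and buf:
            yield "".join(buf)
            buf, size = [], 0
        buf.append(text)
        size += len(text)
    if buf:
        yield "".join(buf)

for i, chunk in enumerate(iter_source_chunks("./src")):
    resp = client.chat.completions.create(
        model="o3",  # or whichever large model you trust for review
        messages=[
            {"role": "system", "content": AUDIT_PROMPT},
            {"role": "user", "content": chunk},
        ],
    )
    print(f"--- findings for chunk {i} ---")
    print(resp.choices[0].message.content)
```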
Now, this should have been done PRIOR to release and would have only taken a few more days (depending on app complexity and the skill of the developer).
[1]: https://arxiv.org/html/2505.15216 - "OpenAI Codex CLI: o3-high, OpenAI Codex CLI: o4-mini, and Claude Code are more capable at defense, achieving higher Patch scores of 90%, 90%, and 87.5%"
This approach to security is backwards. It's far harder to find security issues after the fact than to never introduce them in the first place. This approach might work for yet another web app, but I highly doubt a retroactive security analysis is practical for a more involved system.
Yeah. A lot of security issues are design issues, not "I reused a buffer for something else" issues.
Fixing design and/or architecture at a high level usually requires a significant rewrite, sometimes even a switch in technology stacks.
You don't know what you don't know. How was a non-technical glorified PM supposed to know to ask for these things in the first place? Such technical practices developed over time in the history of software engineering, as problems arose.
This is the main problem with AI and vibe coding right now: it does what you ask (and sometimes does related things along the lines of that ask).
It doesn't look at the big picture of the multiple entry points into the software. For example, he had one vulnerability that required a hop through email, which would create an entry in a table that ended up temporarily elevating permissions.
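I don't know the exact shape of that bug, but as a hypothetical sketch of the pattern (Flask handler, SQLite, and table names all invented for illustration): each piece looks fine on its own, and the escalation only shows up when you follow the whole path from the email link.

```python
# Hypothetical illustration of the kind of cross-channel flaw described above:
# an email-link handler that writes a "temporary" privilege grant.
# The route, schema, and table names are invented for the example.
import sqlite3
from flask import Flask, request, abort

app = Flask(__name__)
db = sqlite3.connect("app.db", check_same_thread=False)

@app.route("/email/confirm-upgrade")
def confirm_upgrade():
    token = request.args.get("token", "")
    row = db.execute(
        "SELECT user_id FROM email_tokens WHERE token = ?", (token,)
    ).fetchone()
    if row is None:
        abort(404)

    # Seen end to end, this is the hole: the lookup accepts ANY row from
    # email_tokens (signup, password reset, whatever), not just upgrade
    # tokens, and the grant below has no expiry or revocation, so a hop
    # through an unrelated email flow ends with a lasting admin row.
    db.execute(
        "INSERT INTO role_grants (user_id, role) VALUES (?, 'admin')",
        (row[0],),
    )
    db.commit()
    return "Upgrade confirmed"
```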
Hopefully platforms like Replit, Firebase Studio, et al. will one day just include a security audit agent.
Everyone knows that hackers exist and exploit security lapses. Everyone. You might not know the details, but you should be responsible enough to at least ask if you are taking people's money. I just don't think the ignorance card is plausible here.
The only mistake the original developer made is that they forgot to write “you are an expert in the field, you make no mistakes and you make your website secure and free of vulnerabilities” at the end of their prompt.