Comment by biohazard2

9 hours ago

The developer just "cleaned up the code comments", i.e. they removed all TODOs from the code: https://github.com/nkuntz1934/matrix-workers/commit/2d3969dd...

Professionalism at its finest!

LLMs made them twice as efficient: with just one release, they're burning tokens and their reputation.

It's kinda mindblowing. What even is the purpose of this? It's not like this is some post on the vibecoding subreddit, this is fricken Cloudflare. Like... What the hell is going on in there?

I also use this as a simple heuristic:

https://github.com/nkuntz1934/matrix-workers/commits/main/

There are only two commits. I've never seen a "real" project that looks like this.

  • To be honest, sometimes on my hobby projects I don't commit anything in the beginning (I know, not a great strategy) and then just dump everything in one large commit.

    • I’ve also been guilty of plugging away at something and squashing it all before publishing for the first time, because I look at the log and go “no way I can release this, or untangle it into any sort of usefulness”.

  • I think that's a reasonable heuristic, but I have projects where I primarily commit to an internal Gitea instance, and then sometimes commit to a public GitHub repo. I don't want people to see me stumbling around in my own code until I think it's somewhat clean.

    • I have a similar process. Internal repo where work gets done. External repo that only gets each release.

  • The repository is less than one week old though; having only the initial commit wouldn't shock me right away.

    • That is totally fine... as long as you don't call it 'production grade'. I wouldn't call anything production grade that hasn't actually spent time (more than a week!) in actual production.


    • But if the initial commit contains the finished project then that suggests that either it was developed without version control, or that the history has deliberately been hidden.


  • I might just make dummy commits ("asdadasdassadas") in the prototyping phase and then just squash everything to an "Initial commit" afterwards.
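The squash-everything-into-one-"Initial commit" workflow these comments describe can be done with git's orphan branches. A minimal sketch (the repo, file names, and branch names here are made up for illustration):

```shell
set -e

# Stand-in for a hobby repo with a messy throwaway history.
git init -q demo && cd demo
git config user.email you@example.com
git config user.name "You"
echo "wip" > main.py    && git add -A && git commit -qm "asdadasdassadas"
echo "more" >> main.py  && git add -A && git commit -qm "fix maybe"

# An orphan branch starts with no parent history; the working tree
# and index carry over, so one commit captures the finished state.
git checkout -q --orphan clean-main
git add -A
git commit -qm "Initial commit"

git rev-list --count HEAD   # prints 1: only the squashed commit remains
```

From here you would push `clean-main` to the public remote, keeping the messy history on the private side. `git rebase -i --root` achieves the same result if you want to keep some of the intermediate commits.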

Here's the post on LinkedIn:

https://www.linkedin.com/posts/nick-kuntz-61551869_building-...

  • https://www.linkedin.com/in/nick-kuntz-61551869/

    DevSecOps Engineer, United States Army Special Operations Command · Full-time

    Jun 2022 - Jul 2025 · 3 yrs 2 mos

    Honestly, it is a little scary to see someone with a serious DevSecOps background ship an AI project that looks this sloppy and unreviewed. It makes you question how much rigor and code quality made it into their earlier "mission critical" engineering work.

    • Tbf, there is no one with a ‘serious DevSecOps background’. It’s an incredibly strong hint that the person is largely a goof.


  • I don't know what's more embarrassing: the deed itself, not recognizing the bullshit produced, or the hasty attempt at a cover-up. Not a good look for Cloudflare. Does nobody read the content they put out? You can just pretend to have done something and they will release it on their blog, yikes.

Wow this is definitely not a software engineer. Hmm I wonder if Git stores history...

Reminds me of Cloudflare's OAuth library for Workers.

>Claude's output was thoroughly reviewed by Cloudflare engineers with careful attention paid to security

>To emphasize, this is not "vibe coded".

>Every line was thoroughly reviewed and cross-referenced with relevant RFCs, by security experts with previous experience with those RFCs.

...Some time later...

https://github.com/advisories/GHSA-4pc9-x2fx-p7vj

  • What is the learning here? There were humans involved in every step.

    Things built with security in mind are not invulnerable, human written or otherwise.

    • Taking a best-faith approach here, I think it's indicative of a broader issue, which is that code reviewers can easily get "tunnel vision" where the focus shifts to reviewing each line of code, rather than necessarily cross-referencing against both small details and highly-salient "gotchas" of the specification/story/RFC, and ensuring that those details are not missing from the code.

      This applies whether the code is written by a human or an AI, and also whether the code is reviewed by a human or an AI.

      Is a GitHub Copilot auto-reviewer going to click two levels deep into the Slack links that are provided as a motivating reference in the user story that led to the PR being reviewed? Or read the relevant RFCs? (And does it even have permission to do all this?)

      And would you even do this, as the code reviewer? Or will you just make sure the code makes sense, is maintainable, and doesn't break the architecture?

      This all leads to a conclusion that software engineering isn't getting replaced by AI any time soon. Someone needs to be there to figure out what context is relevant when things go wrong, because they inevitably will.

    • This is especially true if the marketing team claims that humans were validating every step, but the actual humans did not exist or did no such thing.

      If a marketer claims something, it is safe to assume the claim is at best 'technically true'. Only if an actual engineer backs the claim can it start to mean something.

    • The problem with "AI" is that, by the very way it was trained, it produces plausible-looking code.

      So the "reviewing" process becomes looking for needles in a haystack,

      while you have no understanding or mental model of how the code works, because there isn't one.

      It's a recipe for disaster for anything other than trivial projects.