Comment by michaelcampbell
8 hours ago
Indeed, and I don't think there's any reliable signal other than the author saying so that something is "vibe coded" vs. "I used an LLM for some aspect of it."
I recently ran an experiment where I tried to use _quantitative signals_ (and not _qualitative_ ones) to tell whether something is vibe-coded or not.
My idea was that, if I see that your project is growing 10k LOC per week and you're the only developer working on it, it's most likely vibe-coded.
I analyzed some open-source projects, but unfortunately it turns out not to be so clear cut. It's relatively easy to estimate the growth rate of a project, but figuring out how much time developers worked on it is very error prone, which results in both false positives and false negatives.
I wrote a post about it (https://pscanf.com/s/352/) if you're interested in the details.
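The growth-rate half of that heuristic is straightforward to sketch. Below is a minimal, hedged example of bucketing per-commit line counts into ISO weeks and flagging weeks of unusually fast growth; in a real analysis the `(timestamp, added, removed)` tuples would come from parsing `git log --numstat`, but here they are hard-coded sample data, and the 10k threshold is just the figure from the comment above, not a validated cutoff.

```python
# Sketch: estimate a repo's net weekly LOC growth from per-commit line
# counts. The commit tuples are hypothetical sample data; in practice
# you would parse them out of `git log --numstat`.
from datetime import datetime
from collections import defaultdict

def weekly_loc_growth(commits):
    """commits: iterable of (datetime, lines_added, lines_removed).
    Returns {(iso_year, iso_week): net_loc_added}."""
    weeks = defaultdict(int)
    for ts, added, removed in commits:
        # Bucket by ISO year/week so growth is comparable across weeks.
        year, week, _ = ts.isocalendar()
        weeks[(year, week)] += added - removed
    return dict(weeks)

sample = [
    (datetime(2025, 1, 6), 4000, 200),   # hypothetical solo-dev burst
    (datetime(2025, 1, 8), 7000, 500),
    (datetime(2025, 1, 14), 300, 100),
]
growth = weekly_loc_growth(sample)

# Weeks exceeding ~10k net LOC would be flagged under the heuristic
# described above (a rule of thumb, not a calibrated threshold).
flagged = [wk for wk, loc in growth.items() if loc > 10_000]
```

As the comment notes, this only captures the "growing fast" side; the hard, error-prone part is estimating how much developer time actually went in.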
Ask an LLM for a code review along code duplication, encapsulation, and sequential coupling as quality axes, and the difference should show up readily.
The biggest signal is not the code itself but whether the thing is actively and continually developed for more than a few weeks.
And then look through the commits -- were they only adding new features, or did the author(s) put effort into engineering fundamentals (benchmarking, testing, documentation, etc.)?
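That commit-history check can also be roughly automated. The sketch below classifies commit subject lines by keyword into "feature work" vs. "engineering fundamentals"; the keyword list and the sample subjects are assumptions for illustration, and real subjects would come from something like `git log --format=%s`.

```python
# Sketch: estimate what share of commits touch engineering fundamentals
# (tests, docs, benchmarks, refactoring, CI) rather than pure feature
# work. Keyword matching on subject lines is crude but cheap; the
# keyword list below is an assumption, not an established taxonomy.
FUNDAMENTALS = ("test", "bench", "doc", "refactor", "ci")

def fundamentals_ratio(subjects):
    """subjects: list of commit subject lines. Returns fraction in [0, 1]."""
    if not subjects:
        return 0.0
    hits = sum(any(k in s.lower() for k in FUNDAMENTALS) for s in subjects)
    return hits / len(subjects)

# Hypothetical commit log for a small project.
sample = [
    "Add dashboard page",
    "Add tests for parser",
    "Update docs for CLI flags",
    "Add CSV export feature",
]
ratio = fundamentals_ratio(sample)
```

A ratio near zero over months of history would support the "only adding new features" reading; a mixed ratio suggests sustained engineering effort.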