Comment by dudeinhawaii
2 hours ago
On the one hand, I vibe coded a large-ish (100k LOC) C#, Python, PowerShell project over the holidays. The whole thing was more than I could ever have completed on my own in the 5 days it took to vibe code using three agents. I wrote countless markdown 'spec' files, etc.
The result stunned everyone I work with. I would never in a million years put this code on GitHub for others. It's terrible code for a myriad of reasons.
My lived experience was... the task got accomplished, but not in a sustainable way, over the course of perhaps 80 individual sessions, the longest being multiple solid 45-minute refactors (codex-max).
About those. One of the things I spotted fairly quickly was the tendency of models to duplicate effort or take convoluted approaches to patch in behaviors. To get around this, I would every so often take the entire codebase, send it to Gemini-3-Pro, and ask it for improvements. Comically, every time, Gemini-3-Pro responded with "well, this code is hot garbage, you need to refactor these 20 things". Meanwhile, I'm side-eyeing it like... dude, you wrote this. Never fails to amuse me.
So, in the end, the project was delivered, was pretty cool, and had 5x more features than I would have implemented myself, and once I got into a groove I was able to reduce the garbage through constant refactors driven by large code reviews. Net positive experience on a project that had zero commercial value and zero risk to customers.
But on the other hand...
I spent a week troubleshooting a subtle resource leak (C#) on a commercial project. It was introduced during a vibe-coding session where a new animation system was added, and it caused a hard crash on re-entering a planet scene.
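To give a flavor of the kind of bug this was, here's a minimal sketch of the general pattern, not the actual code: a per-scene object subscribes to a long-lived static event and never unsubscribes, so every scene re-entry strands another instance in memory. (The class and event names below are made up for illustration.)

```csharp
// Hypothetical illustration of a classic C# leak: a per-scene system subscribes
// to a process-lifetime static event but never unsubscribes, so each scene
// reload pins another dead instance (and its buffers) in memory.
using System;

static class GlobalTicker
{
    // Lives for the whole process and holds strong references to every subscriber.
    public static event Action<float> OnTick;
    public static void Tick(float dt) => OnTick?.Invoke(dt);
}

class PlanetAnimationSystem
{
    private readonly float[] _frameBuffer = new float[1_000_000]; // per-instance state

    public PlanetAnimationSystem()
    {
        GlobalTicker.OnTick += Animate;   // subscribed here...
    }

    private void Animate(float dt) { /* drive animations */ }

    // ...but never unsubscribed, e.g. no teardown calling:
    // GlobalTicker.OnTick -= Animate;
}

class Program
{
    static void Main()
    {
        // Simulate re-entering the planet scene: each "load" creates a new system,
        // and the old ones can never be collected because the static event holds them.
        for (int i = 0; i < 1000; i++)
        {
            var system = new PlanetAnimationSystem();
            GlobalTicker.Tick(0.016f);
        }
        Console.WriteLine($"Still resident: ~{GC.GetTotalMemory(false) / (1024 * 1024)} MB");
    }
}
```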
The bug caused an all-stop and a week of lost effort. Countless AI agent sessions going in circles trying to review and resolve it. Countless human hours of testing and banging heads against monitors.
In the end, on maybe the 10th pass with Gemini-3-Pro, it provided a hint that was enough to find the issue.
This was a monumental fail, and if game studios are using LLMs, good god, buggy-mess releases are only going to get worse.
I would summarize this second experience as lots of amazement and new-feature velocity, but a little too loose with commits (too much entanglement to easily unwind later) and ultimately a negative experience.
A classic Agentic AI experience. 50% Amazing, 50% WTF.