Comment by raddan
2 days ago
I tried vibe coding a BMP decoder not too long ago with the rationale being “what’s simpler than BMP?”
What I got was an absolute mess that did not work at all. Perhaps this was because, in retrospect, BMP is not actually all that simple, a fact I discovered when I did write a BMP decoder by hand. But I spent roughly equal time on the vibe coding and the real coding. At the end of the real coding session, I understood BMP, which I see as a benefit unto itself. This is perhaps a bit cynical, but my hot take on vibe coders is that they place little value on understanding things.
Mind explaining the process you tried? As someone who's generally not had any issue getting LLMs to sort out my side projects (with my active involvement as well, of course), I really wonder what people who report these results are trying. Did you just open a chat with Claude Code and try to get a single context window to one-shot it?
Just out of curiosity (as someone fairly familiar with the BMP spec, and also PNG, incidentally): what did you find to be the trickiest/most complex aspects?
None of this is fresh in my mind, so my recollections might be a little hazy. I think the only issue I personally had when writing a decoder was keeping the alignment of the various fields right. I wrote the decoder in C#, and if I remember correctly I tried to get fancy with some modern-ish deserialization code. I think I eventually resorted to writing a rather ugly but simple low-level byte reader. Nevertheless, I found it to be a relatively straightforward program to write, and I got most of what I wanted done in under a day.
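For readers unfamiliar with the layout, the kind of low-level byte reader described above might look roughly like this in C# (an illustrative sketch assuming the 14-byte BITMAPFILEHEADER followed by the 40-byte Windows 3.x BITMAPINFOHEADER; not the original code from the comment):

    // Illustrative sketch, not the code described above: reading the BMP headers
    // field by field with BinaryReader, so struct padding and alignment never
    // come into play. BMP is little-endian, which matches BinaryReader's behavior.
    using System;
    using System.IO;

    static class BmpHeaderReader
    {
        public static void ReadHeader(string path)
        {
            using var reader = new BinaryReader(File.OpenRead(path));

            // BITMAPFILEHEADER: 14 bytes, tightly packed
            ushort magic      = reader.ReadUInt16();  // must be 0x4D42 ("BM")
            uint   fileSize   = reader.ReadUInt32();
            reader.ReadUInt32();                      // two reserved 16-bit fields
            uint   dataOffset = reader.ReadUInt32();  // where the pixel data starts

            // BITMAPINFOHEADER: 40 bytes in the Windows 3.x version
            uint   headerSize  = reader.ReadUInt32();
            int    width       = reader.ReadInt32();
            int    height      = reader.ReadInt32();
            reader.ReadUInt16();                      // biPlanes, always 1
            ushort bitCount    = reader.ReadUInt16();
            uint   compression = reader.ReadUInt32(); // 0 = BI_RGB (uncompressed)

            if (magic != 0x4D42)
                throw new InvalidDataException("Not a BMP file");

            Console.WriteLine(
                $"{width}x{height}, {bitCount} bpp, compression={compression}, " +
                $"file size {fileSize}, pixel data at offset {dataOffset}");
        }
    }

Calling BmpHeaderReader.ReadHeader(path) on a typical 24-bit file would print the basic geometry; reading each field explicitly with BinaryReader sidesteps the alignment issues entirely.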
The vibe-coded version was a different story. For simplicity, I wanted to stick to an early version of BMP; I don't remember which one off the top of my head. This was a simplified implementation for students to use and modify in a class setting. Sticking to early-version BMPs also made it harder for students to go off-piste, since random BMPs found on the internet probably would not work.
The main problem was that the LLM struggled to stick to a specific version of BMP. Some of the newer features (compression, color table, etc., if I recall correctly) have to be used in a coordinated way. The LLM made a real mess here, mixing and matching newer features with older ones. But I did not understand that this was the problem until I gave up and started writing things myself.
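Pinning a decoder to the simple case is largely a matter of rejecting anything that uses the later features up front. A guard along these lines (an illustrative sketch only, with field names matching the header fields read in the earlier snippet) is usually enough:

    // Illustrative sketch, not from the thread: reject anything beyond a plain,
    // uncompressed 24-bpp BMP with the 40-byte info header, so the newer
    // features cannot be mixed in.
    static void RequireSimpleBmp(uint headerSize, ushort bitCount, uint compression, uint colorsUsed)
    {
        if (headerSize != 40)   // only the Windows 3.x BITMAPINFOHEADER
            throw new NotSupportedException($"Unsupported header size: {headerSize}");
        if (compression != 0)   // BI_RGB only; no RLE or bit-field encodings
            throw new NotSupportedException("Compressed BMPs are not supported");
        if (bitCount != 24)     // 24 bpp means there is no color table to interpret
            throw new NotSupportedException($"Only 24 bpp is supported, got {bitCount}");
        if (colorsUsed != 0)    // biClrUsed should be zero for 24 bpp images
            throw new NotSupportedException("Unexpected color table");
    }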
You should try a similar task with a recent model (Opus 4.5, GPT 5.2, etc.) and see if there are any improvements since your last attempt. I also encourage using a coding agent: it can use test files that you provide and run compile-test loops to work out any mistakes.
It sounds like you used an older model, and perhaps copy-pasted code from a chat session. (Just guessing, based on what you described.)