Comment by AstroBen
7 hours ago
I don't know how other people work, but writing the code for me has been essential in even understanding the problem space. The architecture and design work in a lot of cases is harder without going through that process.
100%! Lots of issues are only discovered once enough code has been written. More than that, other issues only surface when the project is actually deployed as an MVP.
I recently had to build a widget that lets the user pick from a list of canned reports and then preview them in an overlay before sending to the printer (or save to PDF). All I knew was that I wanted each individual report's logic and display to be in its own file, so if the system needed to grow to 100 reports, it wouldn't get any more complicated than with 6 reports.
The final solution ended up being something like:
1. Page includes the new React report widget.
2. Widget imports a generic overlay component and all canned reports, and lets the user pick a report.
3. User picks a report; the widget sets that specific report component as a child of the overlay component and launches the overlay.
4. The report component queries the database with its filters and business logic, then passes a generic set of inputs (report title, other specifics, report data) to a shared report display template.
My original plan was for the report display template to also be unique to each report file. But when the dust settled, they were so similar that it made sense to use a shared component. If a future report diverges significantly, we can just skip the shared component and create a one-off in the file.
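For what it's worth, the structure described above can be sketched in TypeScript. This is just an illustration of the one-file-per-report idea with a shared display template, not the actual code; all the names here (`ReportDef`, `ReportData`, `renderReport`, the sample report) are made up for the example.

```typescript
// Generic shape every report passes to the shared display template.
interface ReportData {
  title: string;
  columns: string[];
  rows: Array<Record<string, string | number>>;
}

// Each canned report lives in its own file and exports one of these.
interface ReportDef {
  id: string;
  label: string;
  // Runs this report's own query and business logic.
  fetch: (filters: Record<string, unknown>) => Promise<ReportData>;
}

// e.g. reports/overdueInvoices.ts (hypothetical example report)
const overdueInvoices: ReportDef = {
  id: "overdue-invoices",
  label: "Overdue Invoices",
  fetch: async (_filters) => ({
    title: "Overdue Invoices",
    columns: ["Customer", "Amount"],
    rows: [{ Customer: "Acme", Amount: 1200 }], // stand-in for a DB call
  }),
};

// The widget keeps a registry: report #100 is one new file plus one entry
// here, no more complicated than report #6.
const reports: ReportDef[] = [overdueInvoices];

// Shared display template: every report funnels through the same renderer
// unless it diverges enough to warrant a one-off in its own file.
function renderReport(data: ReportData): string {
  const header = data.columns.join(" | ");
  const body = data.rows
    .map((row) => data.columns.map((c) => String(row[c])).join(" | "))
    .join("\n");
  return `${data.title}\n${header}\n${body}`;
}
```

The key design property is that the widget and overlay never know anything report-specific: they only see the `ReportDef` registry and the generic `ReportData` shape, so growth is linear in files, not in coupling.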
I could have designed all this ahead of time, as I would need to do with an LLM. But it was 10x easier to just start coding it while keeping my ultimate scalability goals in mind.
See "Programming as Theory Building": https://pages.cs.wisc.edu/~remzi/Naur.pdf
That's a good point and honestly I occasionally do the same thing. Sometimes you have to build something wrong to understand what right looks like. I think the distinction is between exploratory prototyping (building to learn/think) and expecting the prototype to BE the product. The first is thinking, the second is where the 100-hour gap bites you in the ass.
This. It’s also much easier to tell someone what you don’t like if what you don’t like is right in front of you than to tell them what you want without a point of reference.
- version 1 -- we build what we think is needed
- version 2 -- we realise we're solving a completely different problem to what is needed
- version 3 -- we build what is actually needed
This is basically the entire argument for why the 100-hour gap exists and why that's fine. The gap isn't waste; it's the cost of understanding the problem space. AI can compress that cycle, but it can't skip it. And you don't WANT to skip it, because it's the gate between quality and slop.