Comment by quangtrn

5 days ago

The structured spec approach has worked well for me — but only when the spec itself is visual, not more text. I've been designing app navigation flows as screen images with hotspot connections, then exporting that as structured markdown. The AI gets screen-by-screen context instead of one massive prompt. The difference vs writing the spec by hand is that the visual layout catches gaps (orphan screens, missing error states) before you hand anything to the LLM.

How does that work? Mind sharing your workflow?

  • It's a tool called Drawd (drawd.app). You upload screen mockups onto an infinite canvas, draw rectangles over tap areas to define hotspots, then connect screens with arrows to map the navigation flow. Each hotspot carries an action type — navigate, API call, modal, conditional branch. When you're done, it exports structured markdown files (screen inventory, navigation map, build guide) that you feed to the LLM as context. The visual step is what catches gaps before you burn tokens on a broken spec.