Comment by jillesvangurp

3 days ago

I've been using ChatGPT for quite a bit of UI work lately. Mostly React and Tailwind, but also other things (Python, Kotlin/JS, etc.). It's pretty decent at it and it saves me a lot of time. More importantly, it saves me from having to herd a bunch of frontend developers into doing what needs doing, and frees me up to do more interesting things.

What makes this work well is that Tailwind is declarative and relatively simple for an LLM to figure out. There's a lot of it, but most of it is pretty straightforward. LLMs struggle more with genuinely hard things like algorithms; UIs are easy.

I just tell it to build me a login screen or whatever, give it some basic instructions (use Tailwind, DaisyUI, React, and TypeScript), and then hand it an OpenAPI spec in JSON format and tell it what to do. It does the whole thing, complete with working handlers.

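To give a sense of the output, here's a minimal sketch of the kind of component it produces. Everything here is illustrative: the /api/auth/login path and the field names are made up and would normally come from whatever OpenAPI spec you paste in.

```tsx
import { useState, FormEvent } from "react";

// Request/response shapes as they might appear in the OpenAPI spec (assumed here).
interface LoginRequest {
  email: string;
  password: string;
}

interface LoginResponse {
  token: string;
}

export function LoginScreen() {
  const [email, setEmail] = useState("");
  const [password, setPassword] = useState("");
  const [error, setError] = useState<string | null>(null);

  async function handleSubmit(e: FormEvent) {
    e.preventDefault();
    setError(null);
    const body: LoginRequest = { email, password };
    // Hypothetical endpoint; in practice this comes from the spec you pasted in.
    const res = await fetch("/api/auth/login", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(body),
    });
    if (!res.ok) {
      setError("Login failed");
      return;
    }
    const data: LoginResponse = await res.json();
    localStorage.setItem("token", data.token);
  }

  return (
    <form
      onSubmit={handleSubmit}
      className="card bg-base-100 shadow-xl p-6 max-w-sm mx-auto flex flex-col"
    >
      <h1 className="text-xl font-bold mb-4">Log in</h1>
      <input
        type="email"
        className="input input-bordered mb-2"
        placeholder="Email"
        value={email}
        onChange={(e) => setEmail(e.target.value)}
      />
      <input
        type="password"
        className="input input-bordered mb-4"
        placeholder="Password"
        value={password}
        onChange={(e) => setPassword(e.target.value)}
      />
      {error && <div className="alert alert-error mb-2">{error}</div>}
      <button type="submit" className="btn btn-primary">
        Log in
      </button>
    </form>
  );
}
```
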
The main trick is scoping prompts well and following up when it only implements half of what you asked for or otherwise goes a bit off script. Often just nudging it with "do the whole thing, damnit" seems to fix that, which I find hilarious. The most tedious part is waiting for it to generate loads of code and then iterating on it. It's faster than doing it manually, but it's a bit like watching paint dry.

Currently the process is time consuming mostly because chat is a bit limited as a UX for this: you end up pasting whole files and then re-generating parts of them. I think there are going to be a lot of improvements on that front that will save a lot of time. Mostly that won't even come from better models but simply from better integrated tooling. There's no good technical reason why an LLM can't plug directly into the refactoring or editing APIs in IDEs. You could probably even generate a lot of the code that would accomplish that.
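
To make that last point concrete, here's a toy sketch of what "plugging in" could look like: the model emits structured edit commands instead of whole files, and a thin plugin applies them through the IDE. This is purely hypothetical; none of the names correspond to a real IDE API.

```ts
// Hypothetical edit commands a model could emit instead of regenerating whole files.
type EditCommand =
  | { kind: "replaceRange"; file: string; startLine: number; endLine: number; newText: string }
  | { kind: "createFile"; file: string; contents: string };

// Toy "workspace": file path -> contents. A real plugin would call the IDE's
// editing/refactoring APIs instead of mutating strings in a Map.
const workspace = new Map<string, string>();

function applyEdit(cmd: EditCommand) {
  if (cmd.kind === "createFile") {
    workspace.set(cmd.file, cmd.contents);
    return;
  }
  const lines = (workspace.get(cmd.file) ?? "").split("\n");
  // Replace the 1-based inclusive line range with the new text.
  lines.splice(cmd.startLine - 1, cmd.endLine - cmd.startLine + 1, cmd.newText);
  workspace.set(cmd.file, lines.join("\n"));
}

// Example: the model asks to swap a single line in a component.
applyEdit({ kind: "createFile", file: "Login.tsx", contents: "a\nb\nc" });
applyEdit({ kind: "replaceRange", file: "Login.tsx", startLine: 2, endLine: 2, newText: "B" });
console.log(workspace.get("Login.tsx")); // "a\nB\nc"
```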