Comment by leecommamichael
20 days ago
> Now is the time to mourn the passing of our craft.
Your craft is not my craft.
It's entirely possible that, as of now, writing JavaScript and Java frontends (what the author does) can largely be automated with LLMs. I don't know who the author is writing to, but I do not mistake the audience to be "programmers" in general...
If you are making something that exists, or something that is very similar to something that exists, odds are that an LLM can be made to generate code which approximates that thing. The LLM encoding is lossy. How will you adjust the output to recover the loss? What process will you go through mentally to bridge the gap? When does the gap appear? How do you recognize it? In the absolute best case you are given a highly visible error. Perhaps you've even shipped it, and need to provide context about the platform and circumstances to further elucidate. Better hope that platform and those circumstances are old hat.
Funnily enough, it's quite the opposite: front ends that have a focus on UX are pretty well protected from generative AI.
I haven't heard this perspective. I'm kind of surprised the LLMs can't generate coherent frontend framework-ized code, if that's the implication.
Both of you are right. They can generate the code quite well, but well-considered UX is another thing entirely.
Strangely, yeah. LLMs are absolute trash at generating good UX and UI.
Agreed. That’s the one area where I think my experience will still have value (for a while anyway): translating customer requests into workable UI/UX, before handing off to the LLM.