Comment by bootsmann
7 months ago
> Some things are fine to let rot. Nobody should spend too much time learning Vue, React or Laravel, or even nhibernate, entity framework or structuremap, for example.
The way you phrase this makes it sound like an LLM can already solve every possible task you would ever get in Vue, React or Laravel, but my entire point is that this is simply not true. As a consequence, whenever the LLM fails at a task (which gets more likely the more complex the task is), you will still need to know how Vue, React or Laravel work to actually finish it, and that is exactly the knowledge you lose if you spend 80% of your day prompting instead of writing code. The more you rely on the LLM to write your code, the more the code you are able to produce converges with what the LLM can put out.
Poor phrasing perhaps; my point is that you, the human, should spend your time and effort on the challenging and interesting things, and on learning things that have lasting value, not on the boring, tedious and quickly irrelevant ones.
Remember knockout.js? script.aculo.us? I do, but I wish I didn't, so those brain cells could hold more SQL and vanilla JavaScript instead. :)
I also think LLMs are way more useful than you give them credit for. I already save hours per week, and I'm just getting started on learning how to get the most value from them. It's clear to me that my ability to phrase questions and include context matters more than which model I use.
To be clear, I'm not talking about the Kool-Aid promises of one-shotting complex apps here. I mean questions like:
"Give me a log4j config that splits log files per error level and per day"
"Look at #locationclass, #customerclass, #shop.css and #customerview and make a CRUD view for customers similar to locations"
"We are converting a vue app to react. Look at #oldvue1 and #newreact1 and make a react version of #oldvue2 following the same patterns"
"What could cause the form from #somewebview to pass a null whateverId to #somerepository?"
Questions like that are solved by LLMs close enough to 100% of the time that it feels like asking a human to do it.
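For instance, the first prompt above typically yields something along these lines: a minimal log4j2 sketch with one rolling appender per level and daily rollover (file names, paths and layout pattern are my assumptions, not anything from a real answer):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical sketch: one RollingFile appender per error level,
     rolled over daily via the date pattern in filePattern. -->
<Configuration status="warn">
  <Appenders>
    <RollingFile name="ErrorLog" fileName="logs/error.log"
                 filePattern="logs/error-%d{yyyy-MM-dd}.log">
      <!-- Accept ERROR only; everything else is denied for this appender -->
      <LevelRangeFilter minLevel="ERROR" maxLevel="ERROR"
                        onMatch="ACCEPT" onMismatch="DENY"/>
      <PatternLayout pattern="%d{ISO8601} [%t] %-5level %logger{36} - %msg%n"/>
      <Policies>
        <!-- Rolls whenever the date in filePattern changes, i.e. once per day -->
        <TimeBasedTriggeringPolicy/>
      </Policies>
    </RollingFile>
    <RollingFile name="WarnLog" fileName="logs/warn.log"
                 filePattern="logs/warn-%d{yyyy-MM-dd}.log">
      <LevelRangeFilter minLevel="WARN" maxLevel="WARN"
                        onMatch="ACCEPT" onMismatch="DENY"/>
      <PatternLayout pattern="%d{ISO8601} [%t] %-5level %logger{36} - %msg%n"/>
      <Policies>
        <TimeBasedTriggeringPolicy/>
      </Policies>
    </RollingFile>
    <!-- ...repeat for INFO, DEBUG etc. as needed -->
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="ErrorLog"/>
      <AppenderRef ref="WarnLog"/>
    </Root>
  </Loggers>
</Configuration>
```

The point isn't that this exact config is what comes back, it's that the answer is close enough that you only tweak paths and levels instead of digging through the log4j2 docs from scratch.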