Comment by FridgeSeal

6 days ago

The counter-argument as I see it is that going from “not using LLM tooling” to “just as competent with LLM tooling” is…maybe a day? And lessening as the tools evolve.

It’s not like “becoming skilled and knowledgeable in a language”, which takes time. Even if you’re theoretically being left behind, you can be back at the front of the pack again in a day or so. So why bother investing more than a little bit every few months?

> The counter-argument as I see it is that going from “not using LLM tooling” to “just as competent with LLM tooling” is…maybe a day? And lessening as the tools evolve.

Very much disagree with that. Getting productive and competent with LLM tooling takes months. I've been deeply invested in this world for a couple of years now, and I still feel like I'm only scratching the surface of what's possible with these tools.

  • Does it take months _now_, or did it take months back then, months to a year ago?

    I’m still not entirely sure why it’s supposed to take months. I retry every few weeks, whenever a new model comes out; they get marginally better at something, but using them isn’t a massive shift? Maybe I’m missing something? I have some code, I pop open the pane, ask it, accept/reject the code, and go on. What else is everyone even doing?

    Edit: I’ve seen the prompt configs people at work have been throwing around, and I’m pretty glad I don’t bother with Cursor and friends when I see that. Some people get LLMs to write git commits? Lazygit has made most of my git workflow friction disappear, and the one minute it takes me to write commits and PRs is less effort than having to police a model writing incorrect ones.

I think the more "general" (and competent) AI gets, the less being an early adopter _should_ matter. In fact, early adopters would in theory have to suffer through more hallucinations and poor output than late adopters.

Here, the early bird gets the worm with 9-fingered hands; the late bird just gets the worm.

It takes deliberate practice to learn how to work with a new tool.

I believe that AI+Coding is no different from this perspective. It usually takes senior engineers a few weeks just to start building an intuition of what is possible and what should be avoided. A few weeks more to adjust the mindset and properly integrate suitable tools into the workflow.

  • In theory, but how long is that intuition going to remain valid as new models arrive? What if you develop a solid workflow to work around some limitations you've identified, only to realize months later that those limitations don't exist anymore and your workflow is suboptimal? AI is a new tool, but it's a very unstable one at the moment.

    • I'd say that the core principles have stayed the same for more than a year now.

      What is changing is that the constraints are relaxing, making things easier than they were before. E.g. where you previously needed a complex RAG pipeline to accomplish some task, Gemini Pro 2.5 can now just swallow 200k-500k cacheable tokens in the prompt and get the job done with similar or better accuracy.
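
      To make the long-context point concrete, here is a minimal sketch of explicit context caching with the google-genai Python SDK. The model id, the `docs_dump.txt` corpus, and the example question are my own assumptions for illustration, not something from the comment above: the idea is that you pay to cache the large prompt once, then ask cheap follow-up questions against it instead of building a retrieval pipeline.

      ```python
      # Sketch only: explicit context caching with the google-genai SDK.
      # The corpus file and question are hypothetical stand-ins.
      from google import genai
      from google.genai import types

      client = genai.Client()  # reads the API key from the environment

      corpus_text = open("docs_dump.txt").read()  # assumed large corpus (hundreds of k tokens)

      # Cache the large context once so repeated prompts reuse it at lower cost.
      cache = client.caches.create(
          model="gemini-2.5-pro",
          config=types.CreateCachedContentConfig(
              contents=[corpus_text],
              system_instruction="Answer using only the provided documents.",
              ttl="3600s",  # keep the cache alive for an hour
          ),
      )

      # Each question now rides on the cached context instead of a RAG pipeline.
      response = client.models.generate_content(
          model="gemini-2.5-pro",
          contents="Where is the retry logic for failed uploads implemented?",
          config=types.GenerateContentConfig(cached_content=cache.name),
      )
      print(response.text)
      ```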
