Comment by abdullin

6 days ago

It takes deliberate practice to learn how to work with a new tool.

I believe that AI+Coding is no different from this perspective. It usually takes senior engineers a few weeks just to start building an intuition of what is possible and what should be avoided. A few weeks more to adjust the mindset and properly integrate suitable tools into the workflow.

In theory, yes, but how long will that intuition remain valid as new models arrive? What if you develop a solid workflow to work around some limitations you've identified, only to realize months later that those limitations no longer exist and your workflow is suboptimal? AI is a new tool, but for now it's a very unstable one.

  • I'd say the core principles have stayed the same for more than a year now.

    What is changing is that the constraints are relaxing, making things easier than they were before. E.g. where you previously needed a complex RAG pipeline to accomplish some task, Gemini 2.5 Pro can now just swallow 200k-500k cacheable tokens in the prompt and get the job done with similar or better accuracy (a rough sketch of that swap follows below the thread).

    • Sure, the core principles are mostly the same, but the point is that it is getting easier and easier to extract value from these models, which means the learning curve is getting flatter. The main difficulty, now and for the foreseeable future, is getting the models to do what we mean, but DWIM is the trend line; it's the objective everyone is working toward. Even if we never quite reach it, we'll get closer. And an AI that does what you mean is the ultimate tool: it doesn't require any expertise at all. There is no first-mover advantage (save for a hypothetical singularity, perhaps).
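
A minimal sketch of the RAG-to-long-context swap mentioned above, assuming the google-genai Python SDK: instead of chunking, embedding, and retrieving, the entire corpus is uploaded once as cached context and each question reuses that cache. The model id, config field names, file path, and cache TTL are illustrative assumptions; check them against the current SDK documentation.

```python
# Sketch: replace a retrieval pipeline with one big cached prompt.
# Assumes the google-genai Python SDK (pip install google-genai); exact
# model ids and config fields may differ, verify against current docs.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

# The reference material a RAG pipeline would normally chunk, embed, and
# retrieve from; here it is assumed to fit in the model's context window.
with open("project_docs.md", encoding="utf-8") as f:  # hypothetical corpus file
    corpus = f.read()

# Cache the large, static part of the prompt so repeated questions do not
# re-upload (or fully re-pay for) the corpus on every call.
cache = client.caches.create(
    model="gemini-2.5-pro",  # assumed model id
    config=types.CreateCachedContentConfig(
        system_instruction="Answer strictly from the provided documents.",
        contents=[corpus],
        ttl="3600s",  # keep the cache alive for an hour
    ),
)

# Each question now rides on top of the cached context instead of a
# retrieved subset of chunks.
response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents="Where is the retry policy configured, and what are its defaults?",
    config=types.GenerateContentConfig(cached_content=cache.name),
)
print(response.text)
```

The trade-off is cost and latency per call rather than pipeline complexity: the cache amortizes the token bill across questions, but whether that beats a tuned retrieval setup depends on corpus size and query volume.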