Comment by abdullin
6 days ago
I'd say that the core principles have stayed the same for more than a year now.
What is changing is that the constraints are relaxing, making things easier than they were before. E.g., where you once needed a complex RAG pipeline to accomplish some task, Gemini 2.5 Pro can now just swallow 200k-500k cacheable tokens in the prompt and get the job done with similar or better accuracy.
Sure, the core principles are mostly the same, but the point is that it keeps getting easier to extract value from these models, which means the learning curve is flattening. The main difficulty, now and for the foreseeable future, is getting the models to do what we mean; but DWIM (do what I mean) is the trend line, the objective everyone is aiming for. Even if we never quite reach it, we'll keep getting closer. And AI that does what you mean is the ultimate tool: it requires no expertise at all. There is no first-mover advantage (save, perhaps, for a hypothetical singularity).