Comment by meowface

2 days ago

I feel no strong need to convince others. I've been seeing major productivity boosts for myself and others since Sonnet 3.5. Maybe for certain kinds of projects and use cases it's less good, maybe they're not using it well; I dunno. I do think a lot of these people probably will be left behind if they don't adopt it within the next 3 years, but that's not really my problem.

What's there to be left behind on? That's like arguing people who stick to driving cars with manual transmissions are going to get left behind when buses "inevitably get good."

The whole point of the AI coding thing is that it lets inexperienced people create software. What skill moat are you building that a skilled software developer won't be able to pick up in 20 minutes?

  • (Don't take this as an attack on you personally, just on the sentiment you're expressing in your comment)

    The attitude you present here has become my litmus test for who has actually given the agents a thorough shake rather than just a cursory glance. These agents are tools, not magic (even though they can appear to be magic when they are really humming). They require operator skill to corral them. They need tweaking and iteration, often from people already skilled in the kinds of software they are trying to write. It's only then that you get their true potential, and only then that you realize just how much more productive you can be _because of them_, not _in spite of them_. The tools are imperfect, and there are a lot of rough edges; a skilled operator can sand those down into huge boons for themselves rather than getting cut and concluding "the tools suck".

    It's very much like Google. Anyone can google anything. But at a certain point, you need to understand _how to google_ to get good results rather than just typing in any random phrase and hoping the top 3 results will magically answer you. And sometimes you need to combine information from multiple results to get what you are going for.

    • > It's very much like Google. Anyone can google anything. But at a certain point, you need to understand _how to google_ to get good results rather than just typing in any random phrase and hoping the top 3 results will magically answer you. And sometimes you need to combine information from multiple results to get what you are going for.

      Lmao, and just as with Google, they’ll eventually optimize it for ad delivery and it will become shit.

  • Your analogy is the wrong way around :)

    Everyone now is driving automatic, LLMs are the manual transmission in a classic car with "character".

    Yes, anyone can step into one, start it and maybe get there, but the transmission and engine will make strange noises all the way and most people just stick to the first gear because the second gear needs this weird wiggle and a trick with the gas pedal to engage properly.

    Using (agentic) LLMs as coding assistants is a skill that (at the moment) can't really be taught, as it's not deterministic and relies a lot on feel and on getting the hang of different base models (Gemini, Sonnet/Opus, GPT, GLM, etc.). The only way to really learn is by doing.

    Yes, anyone can start up Google Antigravity or whatever and say "build me a todo app" and you'll get one. That's just the first gear.

> "...these people probably will be left behind if they don't adopt it..."

And there it is, the insecure evangelism.

  • I'm not sure you understand what insecure means. Do you think it means people aren't able to have an opinion about other people's skills, values, attitudes, and behaviors, and what those might ultimately result in?

    • The linked blog post explains what the post's author means by "insecure evangelism" and the parent poster followed the pattern described. Perhaps you should direct your comment to the author.