Comment by dude250711

4 days ago

Man, this is giving me cognitive dissonance compared to my own experience.

Actually, even the post itself reads like cognitive dissonance, with a dash of the usual "if it's not working for you then you are using it wrong" defence.

I feel exactly like Karpathy here. I have some work to do, I know exactly what I need to do, I'm able to explain it to AI, and the AI seems to understand me (I'm lately using Opus 4.5). I wrote down a roadmap; it should take me a few weeks of coding. It feels like with a proper workflow with AI agents, this work should be doable in one or two days. Yet I know by now that it's not going to be nearly that fast. I'll be lucky if I finish 30% faster than if I just code the entire damn thing myself.

The thing is, I am a huge AI optimist; I'm not one of the AI skeptics, not even close. Karpathy is not an AI skeptic. We both just feel this sense of possibility, and the fact that we can't make AI help us more is frustrating. That's all. Neither of us is telling anyone else "it's on you if you can't make it work for you".

I think Karpathy has figured out by now, and I certainly have, that AI skeptics now far outnumber AI optimists, and it has become something akin to a political conviction. It's quite futile to try to change someone's mind about whether AI is good, bad, overhyped, underused, etc. People have picked their side and that's that.

  • I think you articulated perfectly why it's a bubble and why execs are so eager to push it everywhere. It's so alluring that it constantly feels like we're on the verge of something great. No wonder so many people have their brains fried by it.

    • We're 10 months into agentic coding; Claude Code came out in March. I don't understand how you can be so unimaginative about what this might look like in five years, even with slow progress.


  • If it's any reassurance: if your project is complex enough and involves heavy data manipulation, a 30% improvement using Opus/Gemini 3/Codex 5.2 seems like a good result. I think on complex tasks, Opus 4.5 improves my output by around 20-25%.

    And since it's way, way less wrong than Sonnet 4, it might also improve my whole team's velocity.

    I won't lie, AI coding has been a net negative for the 'lazy devs' on my team who don't delve into their own generated code (by 'lazy devs' here I mean the subset of devs who do the work but often don't bother to truly understand the logic behind what they used/did; they are very good coworkers, add value and are not really lazy, but I can't think of a better term for them).

I think of it this way: if you dropped Einstein with a time machine two thousand years ago, people would think he was some crazy guy doing scribbles in the sand. No one would ever know how smart he was. The same goes for people and advanced AGI like Gemini 3 Pro or ChatGPT 5.2 Pro. We are just dumber than them.

  • Why do you think the models are AGI?

    I also like to think that Einstein would be smart enough to explain things from a common point of understanding if you did drop him 2000 years into the past (assuming he also possesses the scientific knowledge humanity accrued in that 2000-year gap). So your analogy doesn't really make a lot of sense here. I also doubt he'd be able to prove his theories with the technology of the past, but that's a different matter.

    If we did have AGI models, they would be able to solve our hardest problems (assuming a generous definition of AGI) even if we didn't immediately understand exactly how they got there. We already have a lot of complex systems that most people don't fully understand but can certainly verify the quality of. The whole "too smart for people to understand that they're too smart" argument is just a tired trope.

  • You are certainly dumber than them if you think they are AGI. These models are smart and getting smarter, but they are not AGI.

  • You think they have “advanced AGI” and are worried about keeping up with the software industry? There would be nothing to keep up with at that point.

    To use an analogy, it would be like spending all your time before a battle making sure your knife is sharp when your opponent has a tank.