
Comment by mikert89

21 hours ago

I have 15 years of software engineering experience across some top companies. I truly believe that AI will far surpass human beings at coding, and more broadly at logic work. We are very close.

HN will be the last place to admit it; people here seem to be holding out with the vague "I tried it and it came up with crap," while many of us are shipping software without touching (much) code anymore. I have written code for over 40 years, and this is nothing like no-code or any previous "replacing programmers" wave. Judging from the people who could not code with a gun to their heads but are still shipping apps, this is clearly different. It does not really matter whether anyone believes me or not: I am making more money than ever, with fewer people than ever, delivering more than ever.

We are very close.

(by the way; I like writing code and I still do for fun)

  • Both can be correct: you might be making a lot of money using the latest tools, while others who work on very different problems have tried the same tools and found them just not good enough.

    The ability to make money proves you found a good market; it doesn't prove that the new tools are useful to others.

    • No, the comment is about "will", not "is". Of course there's no definitive proof of what will happen. But the writing is on the wall, and the letters are so large now that denying that AI will take over coding, if not all intellectual endeavors, resembles the movie "Don't Look Up".

  • > holding out with the vague 'I tried it and it came up with crap'

    Isn't that a perfectly reasonable metric? The topic has been dominated by hype for at least the past 5, if not 10, years. So when you encounter the latest in a long line of "the future is here, the sky is falling" claims, where every past claim to date has been wrong, it's natural to try for yourself, observe a poor result, and report back: "nope, just more BS as usual".

    If the hyped future does ever arrive then anyone trying for themselves will get a workable result. It will be trivially easy to demonstrate that naysayers are full of shit. That does not currently appear to be the case.

> I have 15 years of software engineering experience across some top companies. I truly believe that AI will far surpass human beings at coding, and more broadly at logic work. We are very close.

Coding was never the hard part of software development.

  • Getting the architecture mostly right, so it's easy to maintain and modify in the future, is IMO the hard part, but I find that this is where AI shines. I have 20 years of SWE experience (professional, plus 10 as a hobbyist), and most of my AI use is for architecture and scaffolding first, code second.

They can only code to specification, which is where even teams of humans get lost. Without a much smarter architecture for AI (LLMs as they stand are a joke), that needle isn't going to move.

  • Real HN comment right here. "LLMs are a joke": maybe don't drink the anti-hype Kool-Aid, or you'll blind yourself to the capability space that's out there, even if it's not AGI or whatever.

    • I'll look past the disrespectful, flippant insult in the hope that there's a brain there too.

      They're a probabilistic phonograph. They can sharpen the funnel for input, but they can't provide judgment on input or resolve ambiguities in your specifications. Teams of human requirements engineers cannot do it either; LLMs are not magic. You're essentially asking one: from my wardrobe, pick an outfit for me, and make sure it's the one I would have picked.

      If you're dazzled into thinking LLMs can solve this, you understand neither transformer architecture nor requirements engineering.

      You'll know a proper AI engine when you see it, and it won't look like an LLM.