Comment by paradite

1 year ago

Funny enough, I now write better code than I used to, thanks to AI, for two reasons:

- AI naturally writes code that is more organized and clean (proper abstractions, no messy code)

- I've recognized that, for AI to write code on an existing codebase, the code has to be clean, organized, and sensible, so I tend to do more refactoring to make sure the AI can take the code over and update it when needed.

>Funny enough, I now write better code than I used to, thanks to AI, for two reasons:

I assume you also believe you'll be one of the developers AI doesn't replace.

  • I'm actively transitioning out of a "software engineer" role to stay open-minded about how to coexist with AI while still contributing value.

    Prompt engineering, organizing code so AI agents can work with it more effectively, guiding non-technical people on how to leverage AI, etc. I'm also building and selling products myself.

Today an AI told me that a non-behavioral change in my codebase was going to give us a 10x improvement on our benchmarks.

Frankly, if you were writing code structured worse than what GPT or whatever generates today, then you are just a mediocre developer.

See, the thing is, to determine which abstractions are "right and proper" you _already need a software engineer_ who knows those kinds of things. Moreover, that engineer needs the ability to read the code, understand it, and plan its evolution over time. He/she also needs to be able to fix the bugs, because there will be bugs.

  • I think your main thesis is that "AI of today and foreseeable future is not going to solve them end to end."

    My belief is that we can't solve them today (I agree with you), but we can solve them in the foreseeable future (within 3 years).

    So it is really a matter of differing beliefs. And I don't think we will be able to convince each other to switch beliefs.

    Let's just watch what happens?

    • I'm with you 100% of the way on this one. I'm coding with Claude 3.5 right now using Aider. The future is clear at this point. It won't get worse, and there's still so much low-hanging fruit. Expertise is still useful to guide it, but we're all product managers now.

      3 replies →

    • Well, I know what will happen within about a one-year time horizon. At least as far as developer-assistance models are concerned, the difference at the end of 2025 is not going to be dramatic, just as it was not dramatic between the end of '23 and this year, and for the same reasons: you do need symbolic reasoning to generate large chunks of coherent, correct code. I also don't see where "dramatic" would come from after that, unless we get some new physics that lets us run models 10-20x the size in real time, economically. But even then, until we get true AGI, which we won't get in my remaining lifetime, those systems will be best used as a "bicycle for the mind" rather than "the mind", and in the vast majority of cases they will not be able to replace humans entirely, at least not in software engineering.

> 'proper abstraction'

I assume you're not talking about GPT-4o, because in my experience it's absolutely dogshit at abstracting code meaningfully. Good at finding design patterns, sure, but if your AI doesn't understand how to state machine, I'm not sure how I'm supposed to use it.

It's great at writing tests and documentation though.

  • GPT-4o is at least an order of magnitude behind Claude 3.5 Sonnet in coding. I use the latter.

    • Claude is better most of the time on the simpler stuff, but o1 is better on some of the more difficult problems that Claude craps out on. Really, $40/mo is not too much to pay for both.

      1 reply →