
Comment by spoiler

13 hours ago

Yes, and to add, in case it's not obvious: in my experience, the maintenance and mental (and emotional, call me sensitive) cost of bad code compounds exponentially the more hacks you throw at it.

Sure, for humans. Not sure they'll be the primary readers of code going forward

  • I'm pretty sure that will be true with AI as well.

    No accounting for taste, but part of what makes code hard for me to reason about is combinatorial complexity, where the number of states the program can be in makes it difficult to know all the possible good and bad states. Combinatorial complexity is something that is objectively expensive for any form of computer, be it a human brain or silicon. If the code is written in such a way that the set of correct and incorrect states is impossible to know, then the problem becomes undecidable.

    I do think there is code that is "objectively" difficult to work with.
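    As a rough sketch of the state-counting point (the order-status types here are hypothetical, just for illustration): encoding state as independent boolean flags gives you 2^n representable combinations, most of them meaningless, while an enum admits only the valid ones.

    ```rust
    // Hypothetical example: three independent flags allow 2^3 = 8
    // representable combinations, but only a few are meaningful
    // (e.g. "shipped && cancelled" is probably nonsense).
    #[allow(dead_code)]
    struct OrderFlags {
        shipped: bool,
        cancelled: bool,
        refunded: bool,
    }

    // Collapsing the state space: this enum admits exactly the valid
    // states, so a reader (human or AI) never has to reason about the
    // impossible combinations.
    #[allow(dead_code)]
    enum OrderState {
        Open,
        Shipped,
        Cancelled { refunded: bool },
    }

    fn flag_state_count(n_flags: u32) -> u32 {
        // each independent bool doubles the state space
        2u32.pow(n_flags)
    }

    fn main() {
        println!("flag encoding: {} states", flag_state_count(3)); // prints 8
    }
    ```
    
    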

    • All the good practices about strong typing, typically in Scala or Rust, also work great for AI.

      If you make sure the compiler catches most issues, AI will run the build, see that it fails, and fix what needs to be fixed.

      So I agree that a lot of the things that make code good, including comments and documentation, are also beneficial for AI.
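      A minimal sketch of that feedback loop, using a hypothetical enum (not from the thread): an exhaustive `match` with no catch-all means adding a new variant breaks the build until every consumer handles it, which is exactly the compile-error signal an AI agent can iterate against.

      ```rust
      // Hypothetical payment example. Adding a `Crypto` variant here makes
      // `fee_percent` a compile error until the match below covers it.
      enum PaymentMethod {
          Card,
          BankTransfer,
      }

      fn fee_percent(m: &PaymentMethod) -> f64 {
          // No `_` catch-all: the compiler enforces exhaustiveness,
          // so an unhandled variant fails the build instead of
          // silently misbehaving at runtime.
          match m {
              PaymentMethod::Card => 2.9,
              PaymentMethod::BankTransfer => 0.8,
          }
      }

      fn main() {
          println!("{}", fee_percent(&PaymentMethod::Card)); // prints 2.9
      }
      ```
      
      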

    • There are a number of things that make code hard to reason about for humans, and combinatorial complexity is just one of them. Another one is, say, size of working memory, or having to navigate across a large number of files to understand a piece of logic. These two examples are not necessarily expensive for computers.

      I don't entirely disagree that there is code that's objectively difficult to work with, but I suspect that the Venn diagram of "code that's hard for humans" and "code that's hard for computers" has much less overlap than you're suggesting.

      2 replies →

    • What do you think about the argument that we are entering a world where code is so cheap to write, you can throw the old one away and build a new one after you've validated the business model, found a niche, whatever?

      I mean, it seems like that has always been true to an extent, but now it may be even more true? Once you know you're sitting on a lode of gold, it's a lot easier to know how much to invest in the mine.

      16 replies →

  • AIs struggle with tech debt as much as, if not more than, humans.

    I've noticed that they're often quite bad at refactoring, too.

  • I think someday it will be completely unreadable for humans. AI will have its own optimized form.

  • Because LLMs are designed as emulators of actual human reasoning, it wouldn't surprise me if we discover that the things that make software easy for humans to reason about also make it easier for LLMs to reason about.

Now with AI, you're not only dealing with maintenance and mental overhead, but also the overhead of the Anthropic subscription (or whatever AI company) needed to deal with this spaghetti. Some may decide that's an okay tradeoff, but personally it seems insane to delegate the majority of development work to a black-box, cloud-hosted LLM that can be pulled out from under you at any moment (and that you're unable to hold accountable if it screws up).

  • Call me naive, but I don't believe that I'm going to wake up tomorrow and ChatGPT.com and Claude.ai are going to be hard down and never come back. Same as Gmail, which is an entirely different corporation. I mean, they could, but it doesn't seem insane to use Gmail for my email, and that's way more important to my life functioning than this new AI thing.