
Comment by stillpointlab

5 days ago

This mirrors insights from Andrew Ng's recent AI startup talk [1].

I recall he mentions in this video that the new advice they are giving to founders is to throw away prototypes when they pivot instead of building onto a core foundation. This is because of the effects described in the article.

He also gives some provisional numbers (see the "Rapid Prototyping and Engineering" section of the slides, around the 10:30 mark) where he suggests prototype development sees a 10x boost compared to a 30-50% improvement for existing production codebases.

This feels vaguely analogous to the switch from "pets" to "livestock" when the industry switched from VMs to containers. Except, the new view is that your codebase is more like livestock and less like a pet. If true (and no doubt this will be a contentious topic to programmers who are excellent "pet" owners) then there may be some advantage in this new coding agent world to getting in on the ground floor and adopting practices that make LLMs productive.

1. https://www.youtube.com/watch?v=RNJCfif1dPY

Great point, but just a nitpick: I've never heard machines/containers referred to as "livestock"; in my milieu it's always "pets" vs. "cattle". I now wonder if it's a geographical thing.

Thanks for pointing this out. I think this is an insightful analogy. We will likely manage generated code in the same way we manage large cloud computing complexes.

This probably does not apply to legacy code that has been in use for several years where the production deployment gives you a higher level of confidence (and a higher risk of regression errors with changes).

Have you blogged about your insights? The https://stillpointlab.com site is very sparse, as is @stillpointlab.

  • I'm currently in build mode. In some sense, my project is the most overcomplicated blog engine in the history of personal blog engines. I'm literally working on integrating a markdown editor into the project.

    Once I have the MVP working, I will be working on publishing as a means to dogfood the tool. So, check back soon!

IMO the problem with this pets vs. livestock analogy is that it focuses on the code when the value is really in the writer's head. Their understanding and mental model of the code is what matters. AI tools can help with managing the code, helping the writer build their models and express their thoughts, but they have zero impact on where the true value is located.

Oo, the "pets vs. livestock" analogy really works better than the "craftsmen vs. slop-slinger" arguments.

Because using an LLM doesn't mean you devalue well-crafted or understandable results. But it does indicate a significant shift in how you view the code itself. It is more about the emotional attachment to code vs. code as a means to an end.

  • I don't think it's exactly emotional attachment. It's the likelihood that I'm going to get an escalated support ticket caused by this particular piece of slop/artisanally-crafted functionality.

    • Not to slip too far into analogy, but that argument feels a bit like a horse-drawn carriage operator saying he can't wait to pick up all of the stranded car operators when their mechanical contraptions break down on the side of the road. But what happened instead was the creation of a brand new job: the mechanic.

      I don't have a crystal ball and I can't predict the actual future. But I can see the list of potential futures and I can assign likelihoods to them. And among the potential futures is one where the need for humans to fix the problems created by poor AI coding agents dwindles as the industry completely reshapes itself.


    • In my world that isn't inherently a bad thing. Granted, I belong to the YAGNI crowd of software engineers who put business before tech architecture. I should probably mention that I don't think this means you should skimp on safety and quality where necessary, but I do preach that the point of software is to serve the business as fast as possible. I take this to the extent that I actually think our BI people, who are most certainly not capable programmers, are good at building programs. They mostly need oversight on external dependencies, but it's actually amazing what they can produce in a very short amount of time.

      Obviously their software sucks, and eventually parts of it escalate into a support ticket that reaches my colleagues and me. It's almost always some form of performance issue, in part because we hold monthly sessions where they can bring us issues they simply can't get to work. Anyway, I see that as a good thing: it means their software is serving the business, and now we need to deal with the issues to make it work even better. Sometimes that's because their code is shit; most times it's because they've hit an actual bottleneck and we need to replace part of their Python with a C/Zig library.

      The important part is that many of these bottlenecks appear in areas that many software engineering teams I have known wouldn't necessarily have predicted. Meanwhile, a lot of the areas where traditional "best practices" call for better software architecture work fine for entire software lifecycles while being absolutely horrible AI slop.

      I think that is where the emotional attachment is meant to fit in: being fine with all the slop that never actually matters during a piece of software's lifecycle.
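
      To make the Python-to-native handoff concrete: a minimal sketch using `ctypes` from the standard library. This is an illustration, not the actual setup described above: it wraps the system C math library's `sqrt` as a stand-in for an in-house C/Zig routine, and the function names are made up for the example.

      ```python
      import ctypes
      import ctypes.util

      # Load the system C math library as a stand-in for a
      # hypothetical in-house C/Zig library.
      libm = ctypes.CDLL(ctypes.util.find_library("m"))
      libm.sqrt.restype = ctypes.c_double
      libm.sqrt.argtypes = [ctypes.c_double]

      def py_hotspot(xs):
          # The original pure-Python implementation of the bottleneck.
          return [x ** 0.5 for x in xs]

      def native_hotspot(xs):
          # The same computation routed through the native library.
          return [libm.sqrt(x) for x in xs]

      # Both paths agree, so the swap is behavior-preserving.
      assert py_hotspot([4.0, 9.0]) == native_hotspot([4.0, 9.0])
      ```

      The point is that only the profiled hotspot moves to native code; the BI team's surrounding Python stays untouched.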
