Comment by libraryofbabel

1 month ago

This rings true and reminds me of the classic blog post “Reality Has A Surprising Amount Of Detail”[0] that occasionally gets reposted here.

Going back and forth on the detail in requirements and mapping it to the details of technical implementation (and then dealing with the endless emergent details of actually running the thing in production on real hardware on the real internet with real messy users actually using it) is 90% of what’s hard about professional software engineering.

It’s also what separates professional engineering from things like the toy leetcode problems on a whiteboard that many of us love to hate. Those are hard in a different way, but LLMs can do them on their own better than humans now. Not so for the other stuff.

[0] http://johnsalvatier.org/blog/2017/reality-has-a-surprising-...

  > Reality Has A Surprising Amount Of Detail

Every time we make progress, complexity increases and it becomes more difficult to make further progress. I'm not sure why this is surprising to many. We always do things to "good enough", not to perfection. Not that perfection even exists... "Good enough" means we triaged and tabled some things, addressing the most important ones. But now, to improve, those little tabled things need to be addressed.

This repeats over and over. There are no big problems; there are only a bunch of little problems that accumulate. As engineers, scientists, researchers, etc., our literal job is to break down problems into many smaller problems and then solve them one at a time. And again, we only solve them to the good-enough level, as perfection doesn't exist. The problems we solve were never a single problem, but many smaller ones.

I think the problem is we want to avoid depth. It's difficult! It's frustrating. It would be great if depth were never needed. But everything is simple until you actually have to deal with it.

  • > As engineers, scientists, researchers, etc., our literal job is to break down problems into many smaller problems and then solve them one at a time.

    Our literal job is also to look for patterns across these problems, so that, where possible, we can solve them as one more general problem instead of solving them one at a time, every time.

    • Very true. But I didn't want to get into elegance and abstraction, as people seem to misunderstand abstraction in programming. I mean, all programming is abstraction... abstraction isn't to be avoided, but things can become too abstract.

  • I think we're all coping a bit here. This time, it really is different.

    The fact is, one developer with Claude Code can now do the work of at least two developers. If that developer doesn't have ADHD, maybe that number is even higher.

    I don't think the amount of work to do increases. I think the number of developers or the salary of developers decreases.

    In any case, we'll see this in salaries over the next year or two.

    The very best move here might be to start working for yourself and delete the dependency on your employer. These models might enable more startups.

    • Alternate take: what agents can spit out becomes table stakes for all software. Making it cohesive, focused on business needs, and stemming complexity are now requirements for all devs.

      By the same token (couldn’t resist), I also would argue that we should be seeing the quality of average software products notch up by now, given how long LLMs have been available. I’m not seeing it. I’m not sure it’s a function of model quality, either. I suspect devs who didn’t care much about quality haven’t really changed their tune.

      2 replies →

    • You sound bored. If we tripled head count overnight, we'd only slow the growth of our backlog, and only temporarily. Every problem we solve just opens up a larger set of harder problems to solve.

    • Why wouldn't we find new things to do with all that new productivity?

      Anecdotally, this is what I see happening in the small in my own work - we say yes to more ideas, more projects, because we know we can unblock things more quickly now - and I don't see why that wouldn't extend.

      I do expect to see smaller teams - maybe a lot more one-person "teams" - and perhaps smaller companies. But I expect to see more work being done, not less, or the same.

      6 replies →

    • If LLMs were good at writing software, there would be lots of good software around written by LLMs. Where is that software? I don't see it. Logical conclusion: LLMs aren't good at writing software.

      3 replies →

"(...) maybe growing vegetables or using a Haskell package for the first time, and being frustrated by how many annoying snags there were." Haha this is funny. Interesting reading.

While this is absolutely true, and I've read it before, I don't think you can make it an open-and-shut case. Here's my perspective as an old guy.

The first thing that comes to mind when I see this as a counterargument is that I've quite successfully built enormous amounts of completely functional digital products without ever mastering any of the details that I figured I would have to master when I started creating my first programs in the late 80s or early 90s.

When I first started, it was a lot about procedural thinking, like BASIC's GOTO X, looping, if-then statements, and that kind of thing. That seemed like an abstraction compared to plain assembly code, which, if you were into video games, was what the real video game people were using. At the time, we weren't that many layers away from the ones and zeros.

It's been a long march since then. What I do now is still sort of shockingly "easy" to me sometimes when I think about that context. I remember being in a band and spending a few weeks trying to build a website that sold CDs via credit card, and trying to unravel how cgi-bin worked using a 300 page book I had bought and all that. Today a problem like that is so trivial as to be a joke.

Reality hasn't gotten any less detailed. I just don't have to deal with it any more.

Of course, the standards have gone up. And that's likely what's gonna happen here. The standards are going to go way up. You used to be able to make a living just launching a website to sell something on the internet that people weren't selling on the internet yet. Around 1999 or so I remember a friend of mine who built a website to sell stereo stuff. He would just go down to the store in New York, buy it, and mail it to whoever bought it. He made a killing for a while. It was ridiculously easy if you knew how to do it. But most people didn't know how to do it.

Now you can make a living pretty "easily" selling a SaaS service that connects one business process to another, or integrates some workflow. What's going to happen to those companies now is left as an exercise for the reader.

I don't think there's any question that there will still be people building software, making judgment calls, and grappling with all the complexity and detail. But the standards are going to be unrecognizable.

Is the surprising amount of detail an indicator that we do not live in a simulation? Or is it instead that we must be living inside a simulation, because Reality shouldn't need all this detail, indicating an algorithmic function run amok?

Reality is infinitely analog, and therefore digital will only ever be an approximation.

Can you give an example of the "other stuff"?

  • I once wrote software that had to manage the traffic coming into a major shipping terminal: OCR, gate arms, signage, cameras for inspecting chassis and containers, SIP audio comms, RFID readers, all of which needed to be reasoned about in a state machine, none of which were reliable. It required a lot of on-the-ground testing, observation, and tweaking, along with human interventions when things went wrong. I’d guess LLMs would have been good at subsets of that project, but the entire thing would still require a team of humans to build again today.
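
    Roughly, it had the shape below. This is a minimal sketch, not the real system; the states, device names, and retry timings are all hypothetical stand-ins:

      import time
      from enum import Enum, auto

      class GateState(Enum):
          WAITING = auto()       # lane idle
          READING = auto()       # polling OCR / RFID
          VERIFYING = auto()     # checking reads against manifests
          OPEN = auto()          # gate arm raised
          NEEDS_HUMAN = auto()   # escalate to an operator

      def read_with_retries(read_fn, attempts=3, delay_s=1.0):
          # Poll an unreliable device; give up with None if it never answers.
          for _ in range(attempts):
              value = read_fn()
              if value is not None:
                  return value
              time.sleep(delay_s)
          return None

      def step(state, ocr, rfid, open_gate, page_operator):
          # One transition; every read can fail, so every path needs a fallback.
          if state is GateState.WAITING:
              return GateState.READING
          if state is GateState.READING:
              plate = read_with_retries(ocr)
              tag = read_with_retries(rfid)
              if plate is None and tag is None:
                  page_operator("no reads at lane")  # hardware flaked: human takes over
                  return GateState.NEEDS_HUMAN
              return GateState.VERIFYING
          if state is GateState.VERIFYING:
              open_gate()
              return GateState.OPEN
          return state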

    • Sir, your experience is unique, and thanks for answering this.

      That being said, someone took your point that LLMs might be good at subsets of projects to suggest that we should use LLMs for those subsets as well.

      But I digress (I provided more in-depth reasoning in another comment as well): suppose even a minute bug slips past the LLM and code review in one of those subsets, and, with millions of vehicles travelling through those gates, that single bug increases the fatality rate by one person per year. First, it shouldn't be used because of the inherent value of human life itself, but it doesn't hold up in a monetary sense either, so there's really not much reason I can see for using it.

      That alone, over a span of 10 years, would cost $75 million to $130 million (the value of a statistical life in the US for a normal person ranges from $7.5 million to $13 million); see the back-of-the-envelope sketch at the end of this comment.

      Sir, I just feel that if the point of LLMs is to have fewer humans, or to pay them less, this is so short-sighted, because if I were the state (and I think everyone will agree after the cost analysis), I would much rather pay a few hundred thousand dollars, or even a few million, right now to avoid losing $75-130 million (on the smallest scale, mind you; it can get far more expensive).

      I am not exactly sure how we could measure the rate of deaths due to LLM use itself (the one-per-year figure), but I took the most conservative number.

      There is also the fact that we won't know whether LLMs might save a life, but I am 99.9% sure that won't be the case, and once again it wouldn't be verifiable either, so we are shooting in the dark.

      And a human can do a much more careful job on sensitive work, with better context (you know what you are working on and how valuable it is, that it can save lives and everything), whereas no amount of words can convey that danger to LLMs.

      To put it simply, the LLM at times might not know the difference between the code of this life-or-death machine and a sloppy website it created.

      I just don't think it's worth it, especially in this context; even a single percent of LLM code might not be worth it here.
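
      As a back-of-the-envelope version of that arithmetic (the one-extra-death-per-year figure is my assumption for illustration; the VSL range is the commonly cited US one):

        # One assumed extra death per year, over 10 years, priced at the
        # US value of a statistical life (VSL). The death rate here is an
        # illustrative assumption, not a measured number.
        deaths_per_year = 1
        years = 10
        vsl_low, vsl_high = 7.5e6, 13.0e6          # USD per life
        print(deaths_per_year * years * vsl_low)   # 75000000.0  -> $75M
        print(deaths_per_year * years * vsl_high)  # 130000000.0 -> $130M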

      3 replies →

    • I've had good luck when giving the AI its own feedback loop. On software projects, that means letting the AI take screenshots and read log files, so it can iterate on errors without human input. On hardware projects, it's a combination of solenoids, relays, a Raspberry Pi and a Pi Zero W, and a webcam. I'm not claiming that an AI could do the above-mentioned project, just that (some) hardware projects can also get humans out of the loop.
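
      The loop itself is simple. Here's a minimal sketch, where ask_model is a placeholder for whatever agent/LLM API you wire in, not a real library call:

        import subprocess

        def iterate(cmd, ask_model, max_rounds=5):
            # Run the command, hand any failure output back to the model
            # (which is expected to apply a fix), and repeat. ask_model is
            # a stand-in for your agent API, not a real library function.
            for _ in range(max_rounds):
                result = subprocess.run(cmd, capture_output=True, text=True)
                if result.returncode == 0:
                    return "passing"
                # The stderr/log text itself is the feedback; no human input.
                ask_model(f"This run failed; fix the code:\n{result.stderr}")
            return "needs a human"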

    • Don’t you understand? That’s why all these AI companies are praying for humanoid robots to /just work/ - so we can replace humans mentally and physically ASAP!

      5 replies →

    • But you admit that fewer humans would be needed, as “LLMs would have been good at subsets of that project”, so there's some impact already, and these AI tools only get better.

      5 replies →