
Comment by nosianu

20 days ago

Just a funny, or depressing, aside - and then a point about LLMs.

Real coding can, unfortunately, be as bad as that or worse. Here is one very famous HN comment from 2018, and I know what he is talking about: participating in that madness was my first job after university, and it dispelled a lot of my illusions:

https://news.ycombinator.com/item?id=18442941

I went into that job (porting Oracle to another Unix platform for an Oracle platform partner) full of enthusiasm, and after the first few weeks I gave up on finding any meaning or enjoyment in it, or on trying to understand or improve anything. If AI could do at least some of that job, it would actually be a big plus.

(it's the working-on-Oracle-code comment if you didn't already guess it)

I think there's a good chance code becomes more like biology. You can understand the details, but there are so many of them, and far too many connections, direct and indirect, across layers. You have to find higher-level methods, because the whole is too much for direct comprehension.

At a startup I worked at, I saw a main code contributor work kind of like that. Not entirely his fault: he was forced to move too quickly, and the requirements were ill defined; not even the big boss knew what they wanted, talking only in meta terms and constantly coming up with new, sometimes contradictory ideas. The code was very hard to comprehend and debug, especially since much of it was distributed algorithms. So his approach was to run it with demo data, observe the higher-level outcomes, and tweak this or that component until it kind of worked. It never worked reliably; it was demo-quality software at best. But he did manage to implement all of management's new ideas.
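That observe-and-tweak loop can be sketched roughly like this (a minimal toy of my own, not his actual code; `run_system` stands in for the opaque component whose internals you can't reason about, and the single `threshold` knob stands in for whatever component he was tweaking):

```python
import random

random.seed(42)  # fixed seed so the toy run is reproducible

def run_system(threshold, demo_data):
    # Stand-in for the opaque system: we only observe its
    # high-level outcome (fraction of inputs it "accepts"),
    # never its internals.
    return sum(1 for x in demo_data if x > threshold) / len(demo_data)

def tune(demo_data, target=0.5, tolerance=0.05):
    # Tweak one knob until the observed outcome looks right,
    # without ever understanding why it behaves as it does.
    threshold = 0.0
    for _ in range(100):
        outcome = run_system(threshold, demo_data)
        if abs(outcome - target) <= tolerance:
            return threshold, outcome
        # Nudge the knob in whichever direction moved us closer last time.
        threshold += 0.01 if outcome > target else -0.01
    return threshold, outcome  # give up: "demo quality at best"

demo_data = [random.random() for _ in range(1000)]
threshold, outcome = tune(demo_data)
```

The point of the sketch is the shape of the loop, not the arithmetic: the feedback signal is a high-level observable, and "debugging" degenerates into hill-climbing on it.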

I found that style interesting and could not dismiss it outright, even though I really, really did not want to be the one debugging that thing in production. I was seeing something different from what I was used to: a focus on a higher level, a way of working when you just can't have the depth of understanding of what you are doing that one would traditionally like. Given my Oracle experience, I saw how this could be a useful style in real life for many big, long-running projects, like that Oracle code, which you had no chance of comprehending or improving short of an "rm -rf" and a restart, which you could not do.

I think education also needs to cover this "biology-level" complexity and these more statistical, higher-level approaches. Much of our software is becoming too complex for the traditional low-level methods.

I see LLMs as just part of such a toolkit for the future. On the one hand, they can supply code for "traditional" smaller projects, where you still have hope of being in control, with at least the seniors fully understanding the system. On the other hand, LLMs could help with too-complex systems: not by making them understandable, which is impossible for those messy systems, but by letting us still work with them productively, adding new features and debugging issues. Code such as in the Oracle case. A new tool for even higher levels of messiness and complexity in our systems, which we won't be able to engineer away due to real-life constraints.