Comment by coldtea

1 month ago

>My "actual job" isn't to write code, but to solve problems

Solve enough problems relying on AI writing the code as a black box, and over time your grasp of coding will worsen: you won't understand what the AI should be doing or what it is doing wrong - not even at the architectural level, except in broad strokes.

One ends up like the clueless manager type who hasn't touched a computer in 30 years. At which point there will be little reason for employers to retain their services.

Computer programming as a whole comes to rely on the canned experience of the AI training set, AI churn makes up a growing ratio of the available training code over time, and both the field and the AI plateau - with the dubious prospect of reaching the Singularity as the only hope out of this.

Yet most organizations in existence pay the people “who hasn’t touched a computer in 30 years” quite a large amount of money to continue to solve problems, for some inscrutable reason… =)

To be fair, if you manage to stay employed for 30 years before people determine your skills are sub-standard, that's a lot better than most.

But I generally agree with your point.

> Solve enough problems relying on AI writing the code as a black box, and over time your grasp of coding will worsen: you won't understand what the AI should be doing or what it is doing wrong - not even at the architectural level, except in broad strokes.

Using AI myself _and_ managing teams almost exclusively using AI has made this point clear: you shouldn't rely on it as a black box. You can rely on it to write the code, but (for now at least) you should still be deeply involved in the "problem solving" (that is, deciding _how_ to fix the problem).

A good rule of thumb that has worked well for me is to spend at least 20 minutes refining agent plans for every ~5 minutes of actual agent dev time. YMMV based on plan scope (obviously this doesn't apply to small fixes, and applies even more so to larger scopes).

  • What I find most "enlightening" - and also frightening - is that people I've worked with for quite some time, and whom I respected for their knowledge and abilities, have started spewing AI nonsense and switching off their brains.

    It's one thing to use AI like you might use a junior dev that does your bidding or rubber duck. It's a whole other ballgame, if you just copy and paste whatever it says as truth.

    And regarding "it obviously doesn't apply to small fixes": oh yes it does! The AI has tried to "cheat" its way out of a situation so many times it's not even funny any longer (compare yesterday's post about Anthropic's original take-home test, in which they themselves warn you not to just use AI to solve it, as it likes to try and cheat - like just enabling more than one core). It's done this enough times that sometimes, when I don't yet understand an answer well enough myself, I don't trust Claude and dismiss a correct assessment it made as "yet another piece of AI BS".

    • > if you just copy and paste whatever it says as truth.

      Verifying anything is more difficult than ever, because Google is basically broken and knowledge is shared much less these days - just look at Stack Overflow.

It is not that clear to me.

Let us use an analogy. Many (most?) people can tell a well-written book or story from a mediocre or terrible one, even though the vast majority of readers have never written one in their lives.

To distinguish good from bad doesn't necessarily require the ability to create.

  • This analogy serves my argument: in it, just as "most people" are mere readers (not only are they not writers, they're nowhere near the level of a competent book editor or critic either), the programmer becomes a mere user of the end program.

    Not only would this be a bad way of running a publishing business (working at the level of understanding of "most people"), but even in the best case, where it is workable, the publisher (or software company) can just fire the specialist and get some average readers/users to give a thumbs up or down to whatever it churns out.

  • I'm not actually sure that's true. There's plenty of controversy now over whether books that are popular and beloved are actually well written. I mean, I've been hearing this complaint since Twilight was popular.

    • Sure, but a professional developer is (or should be) equivalent of a more demanding reader and critic, not just any customer in a bookshop.

      1 reply →

    • I haven't read Twilight, but I've read a few beloved and popular books that are atrociously written from a literary standpoint. That does not mean they are not popular for a reason.

      One I did read, out of morbid curiosity, is 50 Shades. It's utter dreck in terms of writing quality: it's trite, full of clichés, and formulaic in the extreme (and incidentally a repurposed Twilight fanfic; if you wonder about the weird references to hunger, there's the reason). But if you look at why it became popular, you might notice that it is extremely well crafted for its niche.

      If you don't want a "billionaire romance" (yes, this is a well defined niche; there's a reason Grey is described as one) melded with the "danger" of vampire-transformed-into-traumatised-man-with-a-dark-side, it's easy to tear it apart (I couldn't get all the way through it - it was awful along the axes I care about), but as a study in flawlessly merging two niches popular with one of the biggest book-buying demographics that have extremely predictable and rigid expectations, it's really well executed.

      I'd struggle to accept it as art, but as a particular kind of craft, it is a masterpiece even if I dislike the craft in question.

      You will undoubtedly find poorly executed dreck that is popular just because it happened to strike a chord out of sheer luck, but a lot of the time, when I look at something I dislike and ask what made it resonate with its audience, it turns out it was crafted to hit all the notes that specific audience likes.

      At the same time, it has never been the case that great pieces of literature were assured of doing well on release. Moby Dick, for example, sold only 3,000 copies during Melville's lifetime (which makes me feel a lot better about the sales of my own novels, though I don't hold out any hope of posthumous popularity) and was one of his least successful novels when first published. A lot of the most popular media of the time is long since forgotten, for good reason. And so we end up with a survivorship bias towards the past, where we see centuries of great classics that have stood the test of time and none of the dreck, and measure both the dreck and the art of contemporary media against them.

I have very little knowledge of how transistors shuffle ones and zeros out of registers. That doesn't prevent me from using them to solve a problem.

Computing is always abstractions. We moved from plugboards to assembly, then to C, then to languages that manage memory for you - how on earth can you understand what the compiler should be doing, or what it is doing, if you don't deal with explicit pointers on a day-to-day basis?

We bring in libraries when we need code. We don't write our own database; we just do "apt-get install mysql", but then we moved on to "docker run", or perhaps we invoke it via the AWS CLI. Who knows what Terraform actually does when we declare we want a resource.
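To make the point concrete, here is the same "I want a MySQL database" at three abstraction levels - a sketch, where the image tag, instance class, and credentials are illustrative placeholders, not a recommended setup:

```shell
# 1. Package manager: install and run it on a box you administer yourself.
sudo apt-get install mysql-server

# 2. Container: run a prebuilt image; the install is someone else's problem.
docker run -d -e MYSQL_ROOT_PASSWORD=secret mysql:8

# 3. Managed service: declare what you want; the provider runs it.
aws rds create-db-instance \
    --db-instance-identifier mydb \
    --engine mysql \
    --db-instance-class db.t3.micro \
    --allocated-storage 20 \
    --master-username admin \
    --master-user-password secret
```

Each step down the list hides more of the machinery: by the third command you no longer know (or need to know) which host the database even runs on.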

I was thinking the other day how abstractions like AWS or Docker are similar to LLMs. With AWS you just click a couple of buttons and you have a data store; you don't know how to build a database from scratch, and you don't need to. Of course, "to build a database from scratch you must first create the universe".

Some people still hand-craft assembly code to great benefit, but the vast majority don't need to do that to solve problems - and they couldn't.

This musing was in the context of what we do if/when AWS data centres are not available. Our staff are generally incapable of working in a non-AWS environment - something we have deliberately cultivated for years. Accepting AWS outages is one option; perhaps instead we should run a non-AWS stack that we fully own and control.

Is relying on LLMs fundamentally any different from relying on AWS, or apt, or Java? Is it different from outsourcing? You concentrate on your core competency - understanding the problem and delivering a solution - not managing memory or running databases. This comes with risk; all outsourcing does. And if outsourcing to a single supplier you don't and can't understand is an acceptable risk, then is relying on LLMs not?

  • There has never been a case in my long programming career so far where knowing the low-level details has not benefited me. The level of value varies, but it is always positive.

    When you use LLMs to write all your code you will lose (or never learn) the details. Your decision making will not be as good.

    • Or you already know all of the details, and you don’t want typing to be the bottleneck to getting things done.

    • This is true.

      However, your ability to write specs and validate requirements before starting to build will increase.

      It’s just trading deep hands-on expertise for deep product/spec expertise.

      No different than how riding the bus all the time instead of driving results in different skill development (assuming productive time on the bus).

      2 replies →

    • I've seen cases in my career where people knowing the low level things is actually a hindrance.

      They start to fight the system, trying to optimise things by hand for an extra 2% of performance while adding 100% of extra maintenance cost because nobody understands their hand-crafted assembler or C code.

      There will always be a place for people who do that, but in the modern world in most cases it's cheaper to just throw more money at hardware instead of spending time optimising - if you control the hardware.

      If things run on customer's devices, then you need the low level gurus again.

    • So when you use a compiler to create your assembly, you will lose (or never learn) the details.

  • It is fundamentally different, because with an API the other person understands it, but with an LLM neither you nor the LLM understands it.

  • > Is relying on LLMs fundamentally any different than relying on AWS, or apt, or java

    yes, because when I call "javac" it won't decide randomly to delete my home directory

    > Is is different from outsourcing?

    no, not really, and has about the same level of quality

    • I think it's a lot like outsourcing. And expected quality aside, the more important point is that I don't see outsourcing as the next step up the ladder of programming abstraction - it's having someone else do the programming for you, at the same abstraction level.