
Comment by djaro

10 hours ago

The problem is that LLMs stop working past a certain point of complexity or specificity, which becomes very obvious once you try to use one in a field you have deep understanding of. At that point, your own skills should be able to carry you forward, but if you've been using an LLM to do things for you since the start, you won't have the necessary skills.

Once they have to solve a novel problem that was not already solved, for all intents and purposes, Alice will be able to apply her skillset to it, whereas Bob will just run into a wall when the LLM starts producing garbage.

It seems to me that "high-skill human" > "LLM" > "low-skill human". The trap is that people with low skill levels will see a fast improvement in their output, at the hidden cost of the slow build-up of skills that has a much higher ceiling.

Then test Bob on what you actually want him to produce, i.e., novel problems, instead of trivial things that won't tell you how good he is.

Why is it the LLM's problem if your test is unrelated to the performance you want?

  • How can Bob produce novel things when he lacks the skills to do even trivial things?

    I didn't get to be a senior engineer by immediately being able to solve novel problems. I can now solve novel problems because I spent untold hours solving trivial ones.

  • What people forget about programming is that it is a notation for formal logic, one that can be executed by a machine. That formal logic is for solving a problem in the real world.

    While we have a lot of abstractions that solve some subproblems, we still need to connect those solutions to solve the main problem. And there's a point where this combination becomes its own technical challenge. And the skill it needs is the same one used for solving simpler problems with common algorithms.

This whole argument can be made for why every programmer needs to deeply understand assembly language and computer hardware.

At a certain point, higher-level languages stop working: performance, low-level control of clocks and interrupts, etc.

I’m old enough that dropping into assembly to be clever with the 8259 interrupt controller really was required. Programmers today? The vast majority don’t really understand how any of that works.

And honestly I still believe that hardware-up understanding is valuable. But is it necessary? Is it the most important thing for most programmers today?

When I step back this just reads like the same old “kids these days have it so easy, I had to walk to school uphill through the snow” thing.

  • Teaching how computer hardware works is pretty smart. There is no need to do it in depth though.

    Writing assembly is probably completely irrelevant. You should still know how programming language concepts map to basic operations, though. Simple things like struct field offsets, calling conventions, function calls, dynamic linking, etc.

    • > Writing assembly is probably completely irrelevant.

      ffmpeg disagrees.

      More broadly, though, it’s a logical step if you want to go from “here’s how PN junctions work” to “let’s run code on a microprocessor.” There was a game posted here yesterday about building a GPU, in the same vein as nand2tetris, Turing Complete, etc. I find those quite fun, and if you wanted to do something like Ben Eater’s 8-bit computer, it would probably make sense to continue with assembly before going into C, and then a higher-level language.