Comment by lkbm

8 hours ago

Yes, just also be sure to spend some time writing "by hand".

I agree with this, but I’m also curious: what would have to change before that advice is as sound as “write a little bit of assembly by hand” or the even more ridiculous “just write the raw bytes for the program in a hex editor?”

  • Well, I'm currently applying for jobs, so being able to write code without AI is actually important.

    More generally, I want analytic reasoning and problem-solving skills. Assembly, C, and Python all still have me writing and understanding algorithms, while prompting (mostly) does not. (Actually, more so with C and Python than assembly, because they abstract away an appropriate amount of stuff, much the same as how I can do better math with pen and paper than in my head.)

    It's possible that at some point it will make sense to switch to a different analytic reasoning practice regime, but for now, programming is a really relevant one for me, and one I enjoy.

  • A compiler is a reliable layer of abstraction built on documented, structured languages. For me, it would need to become that.

  • Even with LLMs, we need a way to translate between the imprecise plain-English description of a program and the completely unambiguous level of code. You need the ability to see when the LLM has resolved ambiguities in the wrong direction and steer it back. If you can't speak code, that's going to be a very error-prone process.

  • When I can look at a prompt and predict what the code it outputs will look like to some high degree of accuracy.

    I mostly don’t think that is possible, though, because there’s too much ambiguity in natural language. So the answer is probably when AI is close enough to AGI that I can treat it like an actual trusted senior engineer that I’m delegating to.

    • Can you look at code today and predict what assembly a compiler will output to some high degree of accuracy? Do you avoid certain classes of compiler optimization so you can more accurately predict compiler output? I recall a time when many compilers would remove a bzero() in situations where you were trying to zero out a buffer that held sensitive data; it’s why we have APIs like https://github.com/MicrosoftDocs/win32/blob/docs/desktop-src.... I also ran into a huge performance regression because I didn’t have all the edge cases of named return value optimization in mind when I refactored some code.

      There’s ambiguity in the x86 specification itself, such that you can execute a single instruction and get different results on Intel vs. AMD. See the rcpss instruction, for example.

      I get that LLMs are categorically different, and they’re absolutely not as reliable as compilers are, but compilers are also not as reliable as they seem. And even less predictable, IMO.
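      A minimal C sketch of the zeroing problem mentioned above. A memset of a buffer that is never read again is a "dead store," so the optimizer may delete it, leaving the secret in memory; calling memset through a volatile function pointer is one common workaround (the names `memset_v` and `secure_zero` here are illustrative, not any particular API — real code would use something like SecureZeroMemory or explicit_bzero):

      ```c
      #include <stdio.h>
      #include <string.h>

      /* Volatile function pointer: the compiler cannot prove the call
         has no side effects, so it cannot elide the store. */
      static void *(*const volatile memset_v)(void *, int, size_t) = memset;

      static void secure_zero(void *buf, size_t len) {
          memset_v(buf, 0, len);
      }

      int main(void) {
          char password[16] = "hunter2";
          /* ... use the secret, then wipe it before the buffer dies ... */
          secure_zero(password, sizeof password);
          for (size_t i = 0; i < sizeof password; i++) {
              if (password[i] != 0) {
                  puts("leak");
                  return 1;
              }
          }
          puts("zeroed");
          return 0;
      }
      ```

      A plain `memset(password, 0, sizeof password)` in the same spot compiles away under optimization precisely because nothing observable depends on it.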

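      And a small illustration of the rcpss point (x86-only sketch using SSE intrinsics): the instruction is only specified to a relative error bound of 1.5 * 2^-12, so the exact bits returned can legitimately differ between Intel and AMD parts.

      ```c
      #include <xmmintrin.h>  /* SSE intrinsics: _mm_rcp_ss */
      #include <math.h>
      #include <stdio.h>

      int main(void) {
          float x = 3.0f;
          /* rcpss: hardware approximate reciprocal of the low float */
          float approx = _mm_cvtss_f32(_mm_rcp_ss(_mm_set_ss(x)));
          float exact  = 1.0f / x;
          float relerr = fabsf(approx - exact) / exact;
          printf("approx=%.8f exact=%.8f relerr=%g\n", approx, exact, relerr);
          /* The x86 spec only promises relerr <= 1.5 * 2^-12;
             within that bound, each vendor's answer is "correct". */
          if (relerr <= 1.5f / 4096.0f)
              puts("within spec");
          return 0;
      }
      ```

      Run the same binary on an Intel and an AMD machine and `approx` can print different values, both inside the allowed error band.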