Comment by xnx

18 hours ago

This concept won't reach that point because when you chisel too hard it crumbles. There are countless lower-level tasks that typical programmers no longer learn how to do. Our capacity for knowledge is not unlimited, so we offload everything we can to move to the next level of abstraction.

AI coding isn’t an abstraction, though. You can’t treat a prompt like source code because it will give you a different output every time you use it. An abstraction lets you offload cognitive capacity while retaining knowledge of “what you are doing”. With AI coding either you need to carefully review outputs and you aren’t saving any cognitive capacity, or you aren’t looking at the outputs and don’t know what you’re doing, in a very literal sense.

  • Non-determinism is not as much of a problem as the lack of a spec. C++ has its standard, Python has its manual. One can consult them to reliably predict how a program will behave without thinking about the generated assembly. LLMs have no spec.

    • The two go hand in hand.

      Non-determinism is what conveniently fills the gap left by having no spec.

      In fact, turn the temperature to 0 and the output becomes virtually deterministic. That only exacerbates the problem you rightly point out: LLMs have no spec.
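
      A minimal sketch of that point (assumes the `openai` Python package and an API key in the environment; the model name is illustrative, not prescribed): with greedy decoding the two runs should come back nearly identical, but nothing tells you in advance what that one answer will be.

        from openai import OpenAI

        client = OpenAI()
        for _ in range(2):
            resp = client.chat.completions.create(
                model="gpt-4o-mini",  # illustrative model name
                messages=[{"role": "user", "content": "Write a Python is_prime function."}],
                temperature=0,  # greedy decoding: near-identical outputs across runs
            )
            print(resp.choices[0].message.content)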

  • "You can’t treat a prompt like source code because it will give you a different output every time you use it"

    But it seems we are heading there. For simple stuff, if I write a very clear spec, I can be almost sure that every time I give that prompt to an AI, it will work without error, using the same algorithms. So the quality of the prompt is more valuable than the generated code.

    So either way, this is what I focus my thinking on right now, something that was always important and with AI even more so: crystal-clear language describing what the program should do and how.

    That requires real thinking effort.

    • > if I made a very clear spec - I can be almost sure

      That "almost" is doing a lot of heavy lifting here. This is just "make no mistakes", "you're holding it wrong" magical thinking.

      In every project, there is always a gap between what you think you want and what you actually need. Part of the build process is working that out. You can't write better specs to solve this, because you don't know what it is yet.

      On top of that, you introduce a _second_ gap of pulling a lever and seeing if you get a sip of juice or an electric shock lol. You can't really spec your way out of that one, either, because you're using a non-deterministic process.

  • It's staggering to me how many times I've heard this argument that LLMs are just the next level of abstraction. Some people are even comparing them to compilers.

    • > Some people are even comparing them to compilers.

      A lot of people are using them as such, too: consider the number of people talking about "my fleets of agents working on 4 different projects". They aren't reviewing that output. They say they are, but they aren't, any more than I review the LLVM IR. It makes me feel like I'm in some fantasy land: I watch Opus 4.7 get things consistently backwards at the margins, mess up, make bugs. We wouldn't accept a compiler that did any of this at this scale or level lol

  • > AI coding isn’t an abstraction

    Isn't it an abstraction similar to how an engineering or product manager is? Tell the coder (human or AI) what you want, and the coder writes code to fulfill your request. If it's not what you want, have them modify what they've made or start over with a new approach.

    • No, because software engineering is more than <insert coin, receive code>. I've never had a full spec dropped on my desk lol. There's no abstraction.

      Software engineering is a lot more social and communication-heavy than people think. Part of my job is to _not_ take specs at face value. You learn real quick that what people say they need and what they actually need are often miles apart. That's not arrogance, that's just how humans work.

      A good product manager understands the biz needs and the consumer market, and I know how to build stuff and what's worked in the past. We figure out what to build together. AIs don't think and can't do this in any effective way.

      Also, if you fuck up badly enough that you make your engineers throw out code, you're gonna get fired lol

    • With an abstraction, you literally move your thinking up a level. You move a floor up the tower and no longer have to think about what's happening below. The moment something leaves your floor, its course is set. If a result comes back, it's something familiar, not something from the lower floor.

      A human coder can be seen as an abstraction layer because they talk to the PM in product terms, not in code, and the PM reviews the product. What makes this work is the underlying contract: only a small number of iterations should be needed before the product is done, and each later one should take less of the PM's time.

      We've already established that using an LLM tool that way does not work. You can spend a whole month going back and forth, never looking at code, and still not have something that can be made to work. And as soon as you look at the code, you've breached the abstraction layer yourself.

That's true, but I think it's beside the point. The flip side of that argument, which is equally true, goes something like, "not doing cognitive push-ups leads to cognitive atrophy."

There are skills we're losing that are probably ok to lose (e.g. spatial memory & reasoning vs GPS, mental arithmetic vs calculators), primarily because those are well-bounded domains, so we understand the nature of the codependency we're signing up for. AI is an amorphous and still-growing domain. It is not a specific rung in the abstraction hierarchy; it is every rung simultaneously, but at different fidelity levels.

  • > There are skills we're losing that are probably ok to lose (e.g. spatial memory & reasoning vs GPS, mental arithmetic vs calculators)

    I'd argue these are not at all OK to lose. You live in an earthquake zone? You sure better know which way is north and where you have to walk to get back home when all the lines are down after a big one. You need to do a quick mental check if a number is roughly where it should be? You should be able to do that in your head.

    There might be better examples that support your point more effectively, e.g. cursive writing.

    • Yep, there are tons. Growing food, building shelter, etc. But, for pretty much all of the skills we've allowed to atrophy in response to the advances of capitalism, technological & scientific progress, and societal changes, one COULD make the same basic argument, which is that losing that skill is detrimental to the individual, and yet here we are, not growing our own food, not building our own shelter, etc.

      The arguments you make ≤ the values you actually hold ≤ the actions you take in support of those values.

      I'm only interested in any such argument to the extent to which you've personally put it into practice. Otherwise, you're living proof of the argument's weakness. (To be fair, it's extremely hard to be internally consistent on this stuff! We all want better for ourselves than we have time and energy for. But that's my point: your fully subconscious emotional calculus will often undercut at least some of your loftier aspirations. Skills that don't matter anymore invariably atrophy due to the opportunity cost of keeping them honed.)

  • > "not doing cognitive push-ups leads to cognitive atrophy"

    This is one of the points being made in the post, at least in reference to people who already have some mastery of their craft. If they outsource their thinking without elevating it, they aren't exercising that metaphoric muscle between their ears.

I get your point, I just wonder how accurate it is. We basically never look at the output of the compiler, so I agree that tool allows one to operate at a higher level than assembly. But I always have to wade through the output from AI, so I'm not sure I get to move to the next level of abstraction. But maybe that's just me.

  • Are compilers deterministic?

    • I'm sure someone, somewhere, once wrote one that wasn't, but in general, yes, they are.

      The ones I use certainly are. And with a bit of training you can reason about and predict how they will respond to a given input with a large degree of accuracy, without being familiar with how the particular compiler in question was implemented.

      Not so with the AI tools. At least with the ones I use anyway.

    • Given the same compiler and the same inputs, I believe the outputs would be the same between runs. I suppose that might not be true at the margins, but I would expect correctness out of whatever path it chose.

    • For all intents and purposes, yeah. It's really about the variance in actual outcomes vs. the expected ones. The variance is not much, is it? With LLMs that absolutely isn't the case.
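
      A minimal sketch of that claim (assumes gcc is installed and a hello.c is on disk; note that embedded timestamps or paths can still break byte-for-byte reproducibility, which is what "reproducible builds" efforts address):

        import hashlib
        import subprocess

        def build(out: str) -> str:
            # Compile the same source to `out` and return the binary's SHA-256.
            subprocess.run(["gcc", "hello.c", "-o", out], check=True)
            with open(out, "rb") as f:
                return hashlib.sha256(f.read()).hexdigest()

        # Same input, same compiler: on typical setups the two hashes match.
        print(build("a.out") == build("b.out"))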

The idea that a tool intended to replace all human cognitive work is the next level of abstraction is so fundamentally flawed that I'm not sure it's made in good faith anymore. The most charitable interpretation I can think of is that it's a coping mechanism for being made redundant.

Never mind the fact that these tools are nowhere near as capable as their marketing suggests. Once companies and society start hitting the brick wall of inevitable consequences of the current hype cycle, there will be a great crash, followed by an industry correction. Only then will actually useful applications of this technology surface, of which there are plenty. We've seen how this plays out a few times before already.