Comment by Terr_

2 days ago

> And then, inevitably, comes the character evaluation, which goes something like this:

I saw a version of this yesterday where a commenter framed LLM skepticism as a disappointing lack of the "hacker" drive and ethos that ought to be applied to making "AI" toolchains work.

As you might guess, I disagreed: The "hacker" is not driven just by novelty in the problems to solve, but by wanting to understand them on more than a surface level. Messing with kludgy things until they somehow work is always a part of software engineering... but the motive and payoff come from knowing how things work, and perceiving how they could work better.

What I "fear" from LLMs-in-coding is that they will provide an unlimited flow of "mess around until it works" drudgery tasks with none of the upside. The human role will be hammering at problems which don't really have a "root cause" (except in a stochastic sense) and for which there is never any permanent or clever fix.

Would we say someone is "not really an artist" just because they don't want to spend their days reviewing generated photos for extra fingers, circling them, and hitting the "redo" button?

I share your fear.

We have a hard enough time finding juniors (hell, non-juniors) that know how to program and design effectively.

The industry jerking itself off over Leetcode practice already stunted the growth of many by having them focus on rote memorization and gaming interviews.

With ubiquitous AI and all of these “very smart people” pushing LLMs as an alternative to coding, I fear we’re heading into an era where people don’t understand how anything works and have never been pushed to find out.

Then again, the ability of LLMs to write boilerplate may be the reset we need to cut out all of the people that never really had an interest in CS but flocked to the industry over the last decade or so looking for an easy big paycheck.

  • > to cut out all of the people that never really had an interest in CS

    I had assumed most of them had either filtered out at some stage (an early one being college intro CS classes), ended up employed somewhere that didn't seem to mind their output, or were perpetually circling LinkedIn as "Lemons" hunting their next prey/employer.

    My gut feeling is that messy code-gen will increase their numbers rather than decrease them. LLMs make it easier to generate an illusion of constant progress, and the humans can attribute the good parts of the output to themselves while blaming the bad parts on the LLM.

    • > filtered out at some stage (an early one being college intro CS classes)

      Most schools' CS departments have shifted away from letting introductory CS courses perform this function; they go out of their way to court students who are unmotivated or uninterested in computer science fundamentals. Hiring rates for computer science majors are good, so anything that boosts enrollment numbers makes the school look better on average.

      That's why intro courses (which were often already paced painfully slowly for anyone with talent or interest, even without any prior experience) are being split into more gradual sequences, Python has gradually replaced Scheme virtually everywhere in schools (access to libs subordinating fundamental understanding even in academia), the major's math requirements have been relaxed, etc.

      Undergraduate computer science classrooms are increasingly full of mercenaries who not only don't give a shit about computer science, but lack basic curiosity about computation.


> What I "fear" from LLMs-in-coding is that they will provide an unlimited flow of "mess around until it works" drudgery tasks with none of the upside.

I feel like it's very true to the hacker spirit to spend more time customizing your text editor than actually programming, so I guess this is just the natural extension.

  • Even when 100% issue-oriented (that is, spending no time on editor customizations or developing other skills and toolkits), consider the difference between:

    1. This thing at work broke. Understand why it broke, and fix it in a way that sticks and has preventative power. In the rare case where the cause is extremely shallow, like a typo, at least the fix is still reliable.

    2. This thing at work broke. The LLM zigged when it should have zagged for no obvious reason. There is plausible-looking code that is wrong in a way that doesn't map to any human (mis-)understanding. Tweak it and hope for the best.

There’s plenty we still need to understand before we can steer the agents precisely, rather than, as you put it, mess around until it works. Some people are actively working on that understanding, while others make a point of looking the other way.