Comment by neilv

6 days ago

That's levels of abstraction, but still thinking.

Just last night, while looking for clear technical information about MCP integration options for Gemini, I found this Google-written article[1], which -- with a positive, hype-compliant spin -- opens with:

> Have you ever had something on the tip of your tongue, but you weren’t exactly sure how to describe what’s in your mind?

> For developers, this is where "vibe coding" comes in. Vibe coding helps developers achieve their vision with models like Gemini 2.5 Pro to generate code from natural language prompts. Instead of writing every line of code, developers can now describe the desired functionality in plain language. AI translates these "vibes" into your vision.
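For concreteness, the workflow being sold there boils down to something like this -- a minimal sketch using the google-genai Python SDK, with a prompt and model name of my own choosing rather than anything from the article:

    # "Vibe coding" as the article describes it: natural-language prompt in,
    # generated code out. Assumes GEMINI_API_KEY is set in the environment.
    from google import genai

    client = genai.Client()

    response = client.models.generate_content(
        model="gemini-2.5-pro",
        contents="Write a Python function that returns only the files in a "
                 "directory that were modified in the last 24 hours.",
    )

    # The reply is text containing generated code; a human still has to
    # read it, test it, and understand it before it ships.
    print(response.text)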

That's not thinking.

We've even appropriated "vibe" terminology, which means something like an emotional gut feel, arrived at without having to think about it. (Mostly associated with wake-and-bake stoners, who've self-imposed two-digit IQs and munchies, and who will sometimes speak in terms of "vibes", for lack of further analytic capacity.)

Recognizing that the top killer app for "AI" right now is cheating on homework, the collaborate-with-AI 'skill' is like the well-known collaborate-with-lab-partner arrangement: the lab partner does all the work, the slacking student learns nothing, and therefore the slacker fails the exam. (But, near-term, the slacker might scrape by with a C- for the class, due to copying the lab portion, and due to an instructor who now just wants to be rid of the hopeless student.)

[1] https://cloud.google.com/blog/products/ai-machine-learning/b...

You call compilers "levels of abstraction but still thinking"; I call LLMs yet another level of abstraction.

This isn't really a new concept; the only new thing is that it's being applied to areas that historically haven't had much automation.

Hand-wringing about LLMs and "not thinking" is the same hand-wringing we heard about students using calculators and not knowing how to do long division. Or using a computer lookup and not knowing the Dewey Decimal System. Heck, or using an automobile or bicycle and not knowing how to shoe a horse.

People over the last decade have demonstrated they are perfectly capable of generating large quantities of crappy, not-thought-out code all on their own. Just look around you. LLMs democratize the lowest common denominator, and those doing things difficult, nuanced, and unique enough that they actually need to know what they're doing will continue to do so.

I don't think LLMs will reduce the abilities of the 10% best software engineers, and I don't think the quality of output of the rest will meaningfully change.

  • In this thread, I'm responding to the question of whether you teach a child the fundamentals, vs. OP's "Knowing fundamentals is always useful, but learning to collaborate with an AI is probably the more important long-term skill."

    I agree that our field is already full of poo. But, at least with one child, we have a chance to nurture them to be much better than that.

    I'll make that argument with enthusiasm and determination.

    • I completely disagree.

      We're trying to teach a child. That requires things like maintaining interest, and results beat out rigor and fundamentals every time. Teaching primitives is how they lose interest. Showing them "this is how you make a game with an LLM, here's the game!" and then, if they're interested, showing them how to change certain things in the code is how you make them want to learn more (a sketch of what such a first, tweakable program could look like follows at the end of this comment).

      In a similar vein, MythBusters got more kids into science than any scientific paper ever did, rigor be damned. When you teach a child, you want to emphasize "you, too, can do this!" not "a monad is a monoid in the category of endofunctors".

      Let the child's interest guide them and you, not your interest.
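
      To make that concrete, a first program of that kind can be as small as a guess-the-number game, with one obvious knob to turn before ever explaining what a loop is (an illustrative sketch, not anything an LLM actually produced in this thread):

          # A tiny guess-the-number game, the kind of thing an LLM will
          # happily generate from "make me a guessing game in Python".
          import random

          MAX_NUMBER = 20  # the one-line change to show the child first

          secret = random.randint(1, MAX_NUMBER)
          print(f"I'm thinking of a number between 1 and {MAX_NUMBER}.")

          while True:
              guess = int(input("Your guess: "))
              if guess < secret:
                  print("Higher!")
              elif guess > secret:
                  print("Lower!")
              else:
                  print("You got it!")
                  break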
