
Comment by mbirth

4 days ago

The difference is that after you’ve googled it for ½ hour, you’ve learned something. If you ask an LLM to do it for you, you’re none the wiser.

Wrong. I will spend 30 minutes having the LLM explain every line of code and why it's important, with context-specific follow-up questions. An LLM is one of the best ways to learn ...

  • So far, each and every time I used an LLM to help me with something, it hallucinated non-existent functions or was incorrect in an important but non-obvious way.

    Though, I guess I do treat LLMs as a last-resort long shot for when other documentation is failing me.

    • Which LLMs have you tried? Claude Code seems to be decent at not hallucinating, Gemini CLI is more eager.

      I don't think current LLMs take you all the way, but a powerful code generator is a useful thing; just assemble guardrails and keep an eye on it.

      5 replies →

  • As long as what it says is reliable and not made up.

    • That's true for internet searching. How many times have you gone to SO, seen a confident answer, tried it, and it failed to do what you needed?

      1 reply →

    • I feel like we are just covering whataboutism tropes now.

      You can absolutely learn from an LLM. Sometimes documentation sucks, and the LLM has learned how to put stuff together from examples found in unusual places, and it works, and shows what the documentation failed to demonstrate.

      And with the people above, I agree - sometimes the fun is in the end process, and sometimes it is just filling in the complexity we do not have time or capacity to grab. I for one just cannot keep up with front-end development. It's an insurmountable nightmare of epic proportions. I'm pretty skilled at my back-end deep-dive data and connecting APIs, however. So - AI to help put together a coherent interface over my connectors, and off we go for my side project. It doesn't need to be SOC 2 compliant and OWASP-proof, nor does it need ISO 27001 compliance testing, because after all this is just for fun, for me.

You can study the LLM output. In the “before times” I’d just clone a random git repo, use a template, or copy and paste stuff together to get the initial version working.

  • Studying gibberish doesn't teach you anything. If you were cargo culting shit before AI you weren't learning anything then either.

    • By definition, LLM output that works isn't gibberish.

      The code that LLMs output has been good enough to learn from since the initial launch of ChatGPT, even though back then you might have had to repeatedly say "continue" because the model would stop in the middle of writing a function.

      2 replies →

    • It's not gibberish. More than that, LLMs frequently write comments (some are fluff, but some explain the reasoning quite well), variables are frequently named better than cdx, hgv, ti, and the like, and watching the reasoning as it happens provides more clues.

      Also, it's actually fun watching LLMs debug. They're reasonably similar to devs while investigating, but they have a data bank the size of the internet, so they can pull hints that sometimes surprise even experienced devs.

      I think hard-earned knowledge from actual coding is still useful to stay sharp, but it might turn out the balance is something like 25% handmade, 75% LLM-made.

      8 replies →

This is just not true. I have wasted many hours looking for answers to hard-to-phrase questions and learned very little from the process. If an LLM can get me the same result in 30 seconds, it's very hard for me to see that as a bad thing. It just means I can spend more time thinking about the thing I want to be thinking about. I think to some extent people are valorizing suffering itself.

  • Learning means friction; it's not going to happen any other way.

    • Some of it is friction, some of it is play. With AI you can get faster to the play part, where you do learn a fair bit. But in a sense I agree that less is retained. I think that is not because of a lack of friction but because of the fast pace of getting what you want now. You no longer need to make a conscious effort to remember any of it, because it's effortless to get it again with AI if you ever need it. If that's what you mean by friction, then I agree.

    • I agree! I just don’t think the friction has to come from tediously trawling through a bunch of web pages that don’t contain the answer to your question.

I don't want to waste time learning how to install and configure ephemeral tools that will be obsolete before I ever need to use them again.

  • Exactly, the whole point is it wouldn’t take 30 minutes (more like 3 hours) if the tooling didn’t change all the fucking time. And if the ecosystem wasn’t a house of cards 8 layers of json configuration tall.

    Instead you’d learn it, remember it, and it would be useful next time. But it’s not.

  • And I don't want to use tools I don't understand, at least to some degree. I always get nervous when I do something but don't know why I'm doing it.

Not necessarily. The end result of googling a problem might be copying a working piece of code off of stack exchange etc. without putting any work into understanding it.

Some people will try to vibe out everything with LLMs, but other people will use them to help engage with their coding more directly and better understand what's happening, not do worse.

>> The difference is that after you’ve googled it for ½ hour, you’ve learned something.

I've been programming for 15+ years, and I think I've forgotten the overwhelming majority of the things I've googled. Hell, I can barely remember the things I've googled yesterday.

  • Additionally, in the good/bad old days of using StackOverflow, maybe 10% of the answers actually explained how that thing you wanted to do actually worked, the rest just dumped some code on you and left you to figure it out by yourself, or more likely just copy & paste it and be happy when it worked (if you were lucky)...

Usually the thing you've learned after googling for half an hour is that Google isn't very useful for search anymore.

I don't think I'll learn anything by yet again implementing authentication, password reset, forgotten password, etc.

Why train to pedal fast when we already have motorcycles? You are preparing for yesterday's needs. There will never be a time when we need to solve this manually like it's 2019. Even in 2019 we would probably have used Google; solving was already based on extensive web resources. In 1995, by contrast, you really would have needed to do it manually.

Instead of training at manual coding, your time is better invested in learning to channel coding agents, to test code to our satisfaction, and to know whether what the AI did was any good. That is what we need to train to do. Testing without manual review, because manual review is just vibes, while tests are hard. If we treat AI-generated code like human code that requires a line-by-line peer review, we are just walking the motorcycle.

How do we automate our human in the loop vibe reactions?

  • > Why train to pedal fast when we already got motorcycles? You are preparing for yesterday's needs.

    This is funny in the sense that, as we're discovering, in a properly built urban environment bicycles are one of the best ways to add some physical activity to a time-constrained schedule.

  • > Instead of manual coding training your time is better invested in learning to channel coding agents

    All channelling is broken when the model is updated. Being knowledgeable about the foibles of a particular model release is a waste of time.

    > how to test code to our satisfaction

    Sure testing has value.

    > how to know if what AI did was any good

    This is what code review is for.

    > Testing without manual review, because manual review is just vibes

    Calling manual review vibes is utterly ridiculous. It's not vibes to point out an O(n!) structure. It's not vibes to point out missing cases.

    If your code reviews are 'vibes', you're bad at code review.

    > If we treat AI-generated code like human code that requires a line-by-line peer review, we are just walking the motorcycle.

    To fix the analogy you're not reviewing the motorcycle, you're reviewing the motorcycle's behaviour during the lap.

    • > This is what code review is for.

      My point is that visual inspection of code is just "vibe testing", and you can't reproduce it. Even you yourself, six months later, can't fully repeat the "LGTM" vibe-check signal. That is why the proper form is a code test.
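The O(n!) remark above is a good example of why review feedback can be more than vibes: a complexity claim is checkable. Here's a minimal sketch in Python (the scheduling problem and both function names are invented for illustration) of a factorial-time construct a reviewer would flag, next to the standard O(n log n) fix:

```python
# Illustrative only: a brute-force permutation scan a reviewer would
# flag as O(n!), and the equivalent earliest-deadline-first check.
from itertools import permutations

def feasible_bruteforce(durations, deadlines):
    """Can the tasks be ordered so each finishes by its deadline?
    Tries every ordering -- O(n!), hopeless beyond ~10 tasks."""
    for order in permutations(range(len(durations))):
        elapsed = 0
        ok = True
        for i in order:
            elapsed += durations[i]
            if elapsed > deadlines[i]:
                ok = False
                break
        if ok:
            return True
    return False

def feasible_edf(durations, deadlines):
    """Same question via earliest-deadline-first: sort by deadline,
    O(n log n), and known to be equivalent for this problem."""
    elapsed = 0
    for dur, dl in sorted(zip(durations, deadlines), key=lambda p: p[1]):
        elapsed += dur
        if elapsed > dl:
            return False
    return True
```

Pointing at the permutations loop and naming its complexity is a falsifiable statement about the code, not a feeling about it.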

  • Yes and no.

    Yes, I reckon coding is dead.

    No, that doesn't mean there's nothing to learn.

    People like to make comparisons to calculators rendering mental arithmetic obsolete, so here's an anecdote: First year of university, I went to a local store and picked up three items each costing less than £1, the cashier rang up a total of more than £3 (I'd calculated the exact total and pre-prepared the change before reaching the head of the queue, but the exact price of 3 items isn't important enough to remember 20+ years later). The till itself was undoubtedly perfectly executing whatever maths it had been given, I assume the cashier mistyped or double-scanned. As I said, I had the exact total, the fact that I had to explain "three items costing less than £1 each cannot add up to more than £3" to the cashier shows that even this trivial level of mental arithmetic is not universal.

    I now code with LLMs. They are so much faster than doing it by hand. But if I didn't already have experience of code review, I'd be limited to vibe-coding (by the original definition, not even checking). I've experimented with that to see what the result is, and the result is technical debt building up. I know what to do about that because of my experience with it in the past, and I can guide the LLM through that process, but if I didn't have that experience, the LLM would pile up more and more technical debt and grind the metaphorical motorbike's metaphorical wheels into the metaphorical mud.

    • > But if I didn't already have experience of code review, I'd be limited to vibe-coding (by the original definition, not even checking).

      Code review done visually is "just vibe testing" in my book. It is not something you can reproduce; it depends on the context in your head at this moment. So we need actual code tests. Relying on "Looks Good To Me" is hand-waving, code-smell-level testing.

      We are discussing vibe coding, but the problem is actually vibe testing. You don't even need to be in the AI age to vibe test; it's how we always did it when manually reviewing code. And in this age it means "walking your motorcycle" speed; we need to automate this with more extensive code tests.

      2 replies →
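To make the "proper form is a code test" argument concrete, here's a minimal sketch (slugify is a hypothetical stand-in for whatever code was under review): each assertion freezes a judgment that a visual reviewer would otherwise have to re-make from memory on every future change.

```python
# Hypothetical example: turning one-off "LGTM" observations into a
# reproducible check that survives the reviewer's context evaporating.
import re
import unicodedata

def slugify(title: str) -> str:
    """Lowercase, strip accents, collapse non-alphanumerics to single dashes."""
    text = unicodedata.normalize("NFKD", title).encode("ascii", "ignore").decode()
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

def test_slugify():
    # Each line encodes a review judgment, repeatable six months later.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Café  au  lait ") == "cafe-au-lait"
    assert slugify("---") == ""
```

Unlike a mental walkthrough, the test reruns identically on every change, whether the change was made by a human or an agent.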