Comment by chung8123

6 days ago

For me it's all the build stuff and scaffolding I have to get in place before I can even start tinkering on a project. I never formally learned all the systems and tools, and AI makes all of that 10x easier. When I hit something I cannot figure out, instead of googling for half an hour it's 10 minutes with AI.

The difference is that after you’ve googled it for ½ hour, you’ve learned something. If you ask an LLM to do it for you, you’re none the wiser.

  • Wrong. I will spend 30 minutes having the LLM explain every line of code and why it's important, with context-specific follow-up questions. An LLM is one of the best ways to learn ...

  • So far, each and every time I've used an LLM to help me with something, it hallucinated non-existent functions or was incorrect in an important but non-obvious way.

      Though, I guess I do treat LLMs as a last-resort long shot for when other documentation is failing me.

  • You can study the LLM output. In the “before times” I’d just clone a random git repo, use a template, or copy and paste stuff together to get the initial version working.

  • This is just not true. I have wasted many hours looking for answers to hard-to-phrase questions and learned very little from the process. If an LLM can get me the same result in 30 seconds, it's very hard for me to see that as a bad thing. It just means I can spend more time thinking about the thing I want to be thinking about. I think to some extent people are valorizing suffering itself.

  • I don't want to waste time learning how to install and configure ephemeral tools that will be obsolete before I ever need to use them again.

    • Exactly, the whole point is it wouldn't take 30 minutes (more like 3 hours) if the tooling didn't change all the fucking time. And if the ecosystem weren't a house of cards 8 layers of JSON configuration tall.

      Instead you’d learn it, remember it, and it would be useful next time. But it’s not.

    • And I don't want to use tools I don't understand, at least to some degree. I always get nervous when I do something but don't know why I'm doing it.

  • Not necessarily. The end result of googling a problem might be copying a working piece of code off of stack exchange etc. without putting any work into understanding it.

    Some people will try to vibe out everything with LLMs, but other people will use them to engage with their coding more directly and understand what's happening better, not worse.

  • >> The difference is that after you’ve googled it for ½ hour, you’ve learned something.

    I've been programming for 15+ years, and I think I've forgotten the overwhelming majority of the things I've googled. Hell, I can barely remember the things I've googled yesterday.

    • Additionally, in the good/bad old days of using StackOverflow, maybe 10% of the answers actually explained how the thing you wanted to do actually worked; the rest just dumped some code on you and left you to figure it out by yourself, or more likely just copy & paste it and be happy when it worked (if you were lucky)...

  • Usually the thing you've learned after googling for half an hour is mostly that Google isn't very useful for search anymore.

  • I don't think I'll learn anything by yet again implementing authentication, password reset, forgotten password, etc.

  • Why train to pedal fast when we've already got motorcycles? You are preparing for yesterday's needs. There will never be a time when we need to solve this manually like it's 2019. Even in 2019 we would probably have used Google; solving was already based on extensive web resources. Whereas in 1995 you really would have needed to do it manually.

    Instead of manual coding training, your time is better invested in learning to channel coding agents, how to test code to our satisfaction, and how to know if what the AI did was any good. That is what we need to train to do. Testing without manual review, because manual review is just vibes, while tests are hard. If we treat AI-generated code like human code that requires a line-by-line peer review, we are just walking the motorcycle.

    How do we automate our human in the loop vibe reactions?
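
    A minimal sketch of what automating that could look like (in Python, with `slugify` standing in for a hypothetical AI-generated function): the test encodes the contract we actually care about and never reads the implementation.

      # Spec-level testing of AI output, treating the implementation as a
      # black box. `slugify` is a hypothetical AI-written function; the
      # asserts encode its contract instead of a line-by-line review.
      # Usage: check_slugify(ai_written_slugify)
      import random
      import re
      import string

      def check_slugify(slugify, trials=1000):
          for _ in range(trials):
              raw = "".join(random.choices(string.printable,
                                           k=random.randint(0, 40)))
              slug = slugify(raw)
              # Contract: lowercase words joined by single hyphens, or empty.
              assert slug == "" or re.fullmatch(r"[a-z0-9]+(?:-[a-z0-9]+)*", slug)
              # Contract: a valid slug is a fixed point; running it twice
              # changes nothing.
              assert slugify(slug) == slug

    If the function survives a few thousand random inputs and the fixed-point check, that is a far stronger signal than a gut-feel read-through.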

    • > Why train to pedal fast when we already got motorcycles? You are preparing for yesterday's needs.

      This is funny in the sense that, in a properly built urban environment, bicycles are one of the best ways to add some physical activity to a time-constrained schedule, as we're discovering.

    • > Instead of manual coding training your time is better invested in learning to channel coding agents

      All channelling is broken when the model is updated. Being knowledgeable about the foibles of a particular model release is a waste of time.

      > how to test code to our satisfaction

      Sure, testing has value.

      > how to know if what AI did was any good

      This is what code review is for.

      > Testing without manual review, because manual review is just vibes

      Calling manual review vibes is utterly ridiculous. It's not vibes to point out an O(n!) structure. It's not vibes to point out missing cases.

      If your code reviews are 'vibes', you're bad at code review.
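
      For instance, a hypothetical Python illustration of a concrete, non-vibes review finding: spotting that a brute-force permutation search does the work of a simple sort.

        # O(n!) vs O(n log n): both compute the minimum total waiting time
        # for a set of jobs, but the first enumerates every ordering while
        # shortest-job-first is provably optimal. A reviewer catches this;
        # "vibes" don't.
        from itertools import permutations

        def total_wait_brute_force(durations):
            best = float("inf")  # O(n!): tries every ordering
            for order in permutations(durations):
                elapsed = wait = 0
                for d in order:
                    elapsed += d
                    wait += elapsed
                best = min(best, wait)
            return best

        def total_wait(durations):
            elapsed = wait = 0  # O(n log n): shortest job first
            for d in sorted(durations):
                elapsed += d
                wait += elapsed
            return wait

        assert total_wait_brute_force([3, 1, 2]) == total_wait([3, 1, 2]) == 10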

      > If we treat AI-generated code like human code that requires a line-by-line peer review, we are just walking the motorcycle.

      To fix the analogy you're not reviewing the motorcycle, you're reviewing the motorcycle's behaviour during the lap.

    • Yes and no.

      Yes, I reckon coding is dead.

      No, that doesn't mean there's nothing to learn.

      People like to make comparisons to calculators rendering mental arithmetic obsolete, so here's an anecdote: in my first year of university, I went to a local store and picked up three items each costing less than £1, and the cashier rang up a total of more than £3 (I'd calculated the exact total and pre-prepared the change before reaching the head of the queue, but the exact price of 3 items isn't important enough to remember 20+ years later). The till itself was undoubtedly perfectly executing whatever maths it had been given; I assume the cashier mistyped or double-scanned. As I said, I had the exact total, and the fact that I had to explain "three items costing less than £1 each cannot add up to more than £3" to the cashier shows that even this trivial level of mental arithmetic is not universal.

      I now code with LLMs. They are so much faster than doing it by hand. But if I didn't already have experience of code review, I'd be limited to vibe-coding (by the original definition, not even checking). I've experimented with that to see what the result is, and the result is technical debt building up. I know what to do about that because of my experience with it in the past, and I can guide the LLM through that process, but if I didn't have that experience, the LLM would pile up more and more technical debt and grind the metaphorical motorbike's metaphorical wheels into the metaphorical mud.
