Comment by gjadi

9 hours ago

Interesting argument.

But isn't it the correction of those errors that is valuable to society and gets us a job?

People can say they've found a bug or describe what they want from a piece of software, yet it takes skill to actually fix the bugs and build the software. LLMs can speed up the process, but expert human judgment is still required.

I think there are different levels at which to look at this.

If you know that you need O(n) "contains" checks and O(1) retrieval for items, for a given order of magnitude, it feels like you have all the pieces of the puzzle needed to keep the LLM on the straight and narrow, even if you didn't know off the top of your head that you should choose ArrayList.
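
For illustration, a minimal Java sketch of that kind of guardrail (the class name, sizes, and step are all made up): time indexed retrieval at your expected order of magnitude, and a LinkedList sneaking in where O(1) retrieval was required will show up immediately.

    import java.util.ArrayList;
    import java.util.LinkedList;
    import java.util.List;

    public class RetrievalCheck {
        // Time a pass of indexed gets: O(1) per get on an ArrayList,
        // O(n) per get on a LinkedList, so the gap is hard to miss.
        static long timeGets(List<Integer> list) {
            long start = System.nanoTime();
            long sum = 0;
            for (int i = 0; i < list.size(); i += 1000) {
                sum += list.get(i);
            }
            return System.nanoTime() - start;
        }

        public static void main(String[] args) {
            int n = 1_000_000; // your order of magnitude
            List<Integer> array = new ArrayList<>();
            List<Integer> linked = new LinkedList<>();
            for (int i = 0; i < n; i++) {
                array.add(i);
                linked.add(i);
            }
            System.out.println("ArrayList:  " + timeGets(array) + " ns");
            System.out.println("LinkedList: " + timeGets(linked) + " ns");
        }
    }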

Or if you know that string manipulation might be memory-intensive, so you write automated tests around it at your order of magnitude, it probably doesn't really matter if you didn't know to choose StringBuilder.
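
A sketch of what such a test could look like (buildReport() and the 500 ms budget are hypothetical stand-ins): run the string-heavy routine at the target size and fail if it blows a generous budget. A naive += concatenation loop would trip it; a StringBuilder version would not.

    import java.util.List;
    import java.util.stream.Collectors;
    import java.util.stream.IntStream;

    public class StringPerfGuard {
        // Stand-in for the generated code under test; imagine the LLM
        // wrote this, possibly with naive += concatenation instead.
        static String buildReport(List<String> lines) {
            StringBuilder sb = new StringBuilder();
            for (String line : lines) {
                sb.append(line).append('\n');
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            List<String> lines = IntStream.range(0, 100_000) // your order of magnitude
                    .mapToObj(i -> "row " + i)
                    .collect(Collectors.toList());
            long start = System.nanoTime();
            buildReport(lines);
            long millis = (System.nanoTime() - start) / 1_000_000;
            if (millis > 500) { // deliberately generous budget
                throw new AssertionError("string handling too slow: " + millis + " ms");
            }
            System.out.println("ok: " + millis + " ms");
        }
    }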

That feels different to, e.g., not knowing the difference between an array list and a linked list (or the concept of time/space complexity) in the first place.

  • My gut feeling is that, without wrestling with data structures at least once (e.g. during a course), that knowledge about complexity will remain cargo cult.

    When it comes to fundamentals, I think it's still worth the investment.

    To paraphrase, "months of prompting can save weeks of learning".

I think the kind of judgement required here is designing ways to test the code without inspecting it manually line by line. Inspecting it line by line would be walking the motorcycle; you would only be vibe-testing. That is why we have seen the FastRender browser and the JustHTML parser: the testing part was solved upfront, so the AI could go nuts implementing.
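
In that spirit, "testing solved upfront" can be as simple as encoding the spec as input/expected pairs before any implementation exists. Here parse() and the cases are placeholders, not anything from FastRender or JustHTML:

    import java.util.Map;

    public class ConformanceHarness {
        // Placeholder for the LLM-written implementation under test.
        static String parse(String input) {
            return input.trim().toLowerCase();
        }

        public static void main(String[] args) {
            // The spec, written before the implementation: input -> expected.
            Map<String, String> cases = Map.of(
                    "  Hello ", "hello",
                    "WORLD", "world",
                    "mixedCase", "mixedcase");
            cases.forEach((input, expected) -> {
                String actual = parse(input);
                if (!actual.equals(expected)) {
                    throw new AssertionError(
                            "'" + input + "' -> '" + actual + "', expected '" + expected + "'");
                }
            });
            System.out.println("all " + cases.size() + " spec cases pass");
        }
    }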

  • I partially agree, but I don’t think “design ways to test the code without inspecting it manually line by line” is a good strategy.

    Tests only cover cases you already know to look for. In my experience, many important edge cases are discovered by reading the implementation and noticing hidden assumptions or unintended interactions.

    When something goes wrong, understanding why almost always requires looking at the code, and that understanding is what informs better tests.

    • Another possibility is to implement the same spec twice and do differential testing; that way you can catch diverging assumptions and clarify them. A rough sketch of the idea is below.

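      A minimal sketch of that differential setup (the spec and both implementations are hypothetical): feed random inputs to two independently written versions and fail loudly on any disagreement.

          import java.util.Arrays;
          import java.util.Random;

          public class DifferentialTest {
              // Spec: sum of the positive values in the array.
              // implA and implB are written independently of each other.
              static int implA(int[] xs) {
                  int sum = 0;
                  for (int x : xs) {
                      if (x > 0) sum += x;
                  }
                  return sum;
              }

              static int implB(int[] xs) {
                  return Arrays.stream(xs).filter(x -> x > 0).sum();
              }

              public static void main(String[] args) {
                  Random rng = new Random(42); // fixed seed for reproducibility
                  for (int trial = 0; trial < 10_000; trial++) {
                      int[] xs = rng.ints(rng.nextInt(20), -100, 100).toArray();
                      int a = implA(xs);
                      int b = implB(xs);
                      if (a != b) {
                          // Any divergence pinpoints an assumption to clarify.
                          throw new AssertionError(
                                  Arrays.toString(xs) + ": " + a + " vs " + b);
                      }
                  }
                  System.out.println("implementations agree on 10,000 random inputs");
              }
          }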