Comment by mcv

5 hours ago

This seems to confirm my feeling when using AI too much. It's easy to get started, but I can feel my brain engaging less with the problem than I'm used to. It can form a barrier to real understanding, and keeps me out of my flow.

I recently worked on something very complex that I don't think I would have been able to tackle as quickly without AI: a hierarchical graph layout algorithm based on the Sugiyama framework, using Brandes-Köpf for node positioning. I had no prior experience with it (and I went in clearly underestimating how complex it was), and AI was a tremendous help in getting a basic understanding of the algorithm, its many steps and sub-algorithms, and the subtle interactions and unspoken assumptions in it. But letting it write the actual code was a mistake. That's what kept me from understanding the intricacies, from truly engaging with the problem, which led me to keep relying on the AI to fix issues. By that point the AI clearly had no real idea what it was doing either, and just made things worse.
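For readers who, like the commenter, are new to this family of algorithms: Sugiyama-style layout proceeds in phases (cycle removal, layer assignment, crossing minimization, then coordinate assignment, where Brandes-Köpf is a standard choice for the last step). Here is a rough sketch of the two middle phases; the function names are invented for illustration, and real implementations are considerably more involved:

```python
from collections import defaultdict

def longest_path_layering(nodes, edges):
    """Assign each node the length of the longest path reaching it.
    Assumes the graph is already acyclic (Sugiyama's first phase,
    cycle removal, is skipped here)."""
    preds = defaultdict(list)
    for u, v in edges:
        preds[v].append(u)
    layer = {}
    def depth(n):
        if n not in layer:
            layer[n] = 1 + max((depth(p) for p in preds[n]), default=-1)
        return layer[n]
    for n in nodes:
        depth(n)
    return layer

def barycenter_order(nodes, edges, layer):
    """One downward sweep of barycenter crossing minimization:
    order each layer by the mean position of each node's predecessors."""
    layers = defaultdict(list)
    for n in nodes:
        layers[layer[n]].append(n)
    preds = defaultdict(list)
    for u, v in edges:
        preds[v].append(u)
    order = {}
    for d in sorted(layers):
        if d == 0:
            row = sorted(layers[d])  # no predecessors; any fixed order works
        else:
            def bary(n):
                ps = [order[p] for p in preds[n]]
                return sum(ps) / len(ps) if ps else 0.0
            row = sorted(layers[d], key=bary)
        for i, n in enumerate(row):
            order[n] = i
    return order
```

Brandes-Köpf would then replace a naive coordinate step with alignment and block-balancing passes; this sketch stops short of that, and production-quality layouts iterate the crossing-minimization sweep both downward and upward.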

So instead of letting the AI see the real code, I switched from the Copilot IDE plugin to the standalone Copilot 365 app, where it could explain the principles behind every step, and I would debug and fix the code and develop actual understanding of what was going on. And I finally got back into that coding flow again.

So don't let the AI take over your actual job, but use it as an interactive encyclopedia. That works much better for this kind of complex problem.

My "actual job" isn't to write code, but to solve problems.

Writing code has just typically been how I've needed to solve those problems.

That has increasingly shifted to "just" reviewing code and focusing on the architecture and domain models.

I get to spend more time on my actual job.

  • > My "actual job" isn't to write code, but to solve problems.

    Yes, and there's often a benefit to a human having an understanding of the concrete details of the system when you're trying to solve problems.

    > That has increasingly shifted to "just" reviewing code

    It takes longer to read code than to write code if you're trying to get the same level of understanding. You're gaining time by building up an understanding deficit. That works for a while, but at some point you have to go burn the time to understand it.

    • It's like any other muscle: if you don't exercise it, you will lose it.

      It's important that when you solve problems by writing code, you go through all the use cases of your solution. In my experience, just reading the code given by someone else (either a human or a machine) is not enough, and you end up evaluating perhaps the main use cases and the style. Most of the time you will find gaps while writing the code yourself.

  • This feels like it conflates problem solving with the production of artifacts. It seems highly possible to me that the explosion of AI-generated code is ultimately creating more problems than it is solving, and that the friction of manual coding may ultimately prove to be a great virtue.

    • This statement feels like a farmer making a case for using their hands to tend the land instead of a tractor because it produces too many crops. Modern farming requires you to have an ecosystem of supporting tools to handle the scale and you need to learn new skills like being a diesel mechanic.

      How we work changes and the extra complexity buys us productivity. The vast majority of software will be AI generated, tools will exist to continuously test/refine it, and hand written code will be for artists, hobbyists, and an ever shrinking set of hard problems where a human still wins.


That's a nice anecdote, and I agree with the sentiment: skill development comes from practice. It's tempting to see using AI as a free lunch, but it comes with a cost in the form of skill atrophy. I reckon this is even the case when using it as an interactive encyclopedia, where you may lose some skill in searching and aggregating information; but for many people the overall trade-off in time and energy savings is worth it, giving them room to do more or other things.

  • If the computer was the bicycle for the mind, then perhaps AI is the electric scooter for the mind? Gets you there, but doesn't necessarily help build the best healthy habits.

    Trade-offs around "room to do more or other things" are an interesting and recurring theme in these conversations. They're like two ends of a spectrum: on one end the ideal process-oriented artisan taking the long way to mastery, on the other the trailblazer moving fast and discovering entirely new things.

    Comparing to the encyclopedia example: I'm already seeing my own skill set for researching online atrophy and become less relevant, both because searching isn't as helpful as it used to be and because my muscle memory is shifting toward reaching for the chat window.

    • Maybe it was always about where you are going and how fast you can get there? And AI might be a few mph faster than a bicycle, and still accelerating.

  • "I reckon this is even the case when using it as an interactive encyclopedia".

    Yes, that is my experience. I have done some C# projects recently, in a language I was not familiar with. I used the interactive encyclopedia method and "wrote" a decent amount of code myself, but several thousand lines of production code later, I don't think I know C# any better than when I started.

    OTOH, it seems that LLMs are very good at compiling pseudocode into C#. And I have always been good at reading code, even in unfamiliar languages, so it all works pretty well.

    I think I have always worked in pseudocode inside my head. So with LLMs, I don't need to know any programming languages!

> But letting it write the actual code was a mistake

I think you not asking questions about the code is the problem (insofar as it still is a problem). But it certainly has gotten easy not to.

This mirrors my experience exactly. We have to learn how to tame the beast.

  • I think we all just need to avoid the trap of using AI to circumvent understanding. I think that’s where most problems with AI lie.

    If I understand a problem and AI is just helping me write or refactor code, that’s all good. If I don’t understand a problem and I’m using AI to help me investigate the codebase or help me debug, that’s okay too. But if I ever just let the AI do its thing without understanding what it’s doing and then I just accept the results, that’s where things go wrong.

    But if we’re serious about avoiding the trap of AI letting us write working code we don’t understand, then AI can be very useful. Unfortunately the trap is very alluring.

    A lot of vibe coding falls into the trap. You can get away with it for small stuff, but not for serious work.

    • I'd say the new problem is knowing when understanding is important and where it's okay to delegate.

      It's similar to other abstractions in this way, but on a larger scale due to LLM having so many potential applications. And of course, due to the non-determinism.

Similarly, I leave Cursor's AI in "ask" mode. It puts code there, leaving me to grab what I need and integrate it myself. This forces me to look closely at the code and prevents that "runaway" feeling where the AI does too much and you're left behind in your own damn project. It's not AI chat causing cognitive debt, it's agents!

> a hierarchical graph layout algorithm based on the Sugiyama framework, using Brandes-Köpf for node positioning.

I am sorry for being direct, but you could have just kept it to the first part of that sentence. Everything after that just sounds like pretentious name dropping and adds nothing to your point.

But I fully agree, for complex problems that require insight, LLMs can waste your time with their sycophancy.

  • This is a technical forum, isn't pretentious name dropping kind of what we do?

    Seriously though, I appreciated it, because my curiosity got the better of me and I went down a quick rabbit hole into Sugiyama, comparative graph algorithms, and node positioning as a particular dimension of graph theory. Sure, nothing groundbreaking, but it added a shallow amount to my broad knowledge base of theory that continues to prove useful in our business (often knowing what you don't know is the best impetus for learning). So yeah man, let's keep name dropping pretentious technical details, because that's half the reason I surf this site.

    And yes, I did use ChatGPT to familiarize myself with these concepts briefly.

    • I think many people are not doing anything like this, so to someone who is not interested in learning anything, technical details like this sound like pretentious name dropping, because that is how they relate to the world.

      Everything to them is a social media post for likes.

      I have explored all kinds of graph layouts in various network science contexts via LLMs, and guess what? I don't know much about graph theory beyond G = (V, E). I am not really interested either. I am interested in what I can do with and learn from G. Everything to the right of the equals sign is already beyond my ability. I am just not that smart.

      The standard narrative on this board seems to be something akin to having to master all volumes of Knuth before you can even think of writing a React CRUD app. Ironic, since I imagine so many learned programming by just programming.

      I know I don't think as hard when using an LLM. Maybe that is a problem for people with 25 more IQ points than me. If I had 25 more IQ points maybe I could figure out stuff without the LLM. That was not the hand I was dealt though.

      I get the feeling there is immense intellectual hubris on this forum, and when something like this comes up, it is a dog whistle for these delusional Erdős-in-their-own-minds people to come out of the woodwork to tell you how LLMs can't help you with graph theory.

      If that weren't the case, there would be vastly more interesting discussion on this forum instead of ad nauseam discussion of how bad LLMs are.

      I learn new things everyday from Gemini and basically nothing reading this forum.

  • I’ve been forced down that path and based on that experience it added a whole lot. Maybe you just don’t understand the problem?