Comment by liendolucas

3 days ago

I still code daily without any AI assistance, mostly because I believe this is the way to not forget how things are done, even trivial things.

My main point against using AI is that I do not want to depend on basically anything when I'm in front of the screen (obviously not counting documentation, books, SO, and the like).

I see, up close, people who are 100% dependent on AI for literally everything, even the most trivial daily tasks, and I find that truly scary, because it means that brain effort drops dramatically to a minimum level. Having your mental effort stolen is not a minor thing.

Giving that away, at least for me, means becoming a dependent zombie. Knowledge comes basically from manual trial/error almost daily.

If technology has shown us anything, it's that we can be pushed and manipulated in every single conceivable way. And in my opinion, depending on AI is the ultimate way for companies to penetrate and manipulate a very delicate human ability: to think and wonder about things.

Recently, after a month of heavily AI assisted programming, I spent a few days programming the good old fashioned way.

I spent most of my 7-hour session confused and frustrated, straining painfully against the problem, but the task was successfully completed.

But I was startled by the difficulty. I began to worry that I had given myself some kind of brainrot from disuse. Then I remembered, my goodness, it always felt that way, if I was ever doing something new. That's just what it feels like, grappling with a problem you haven't seen before.

It was always as hard as that, I was just no longer used to the feeling. You get used to the difficulty, and then it feels normal.

Or indeed: you get used to its absence, and then it suddenly feels overwhelming and "wrong"!

I think maintaining the capacity to tolerate difficulty and discomfort is a "muscle" well worth preserving.

  • I'm biased, but I think you're on to something. People are writing about this under the broad framing of "cognitive surrender".

I've had the "problem" of forgetting syntax before any AI, with IDE autocomplete. It was only ever a problem when switching jobs and being expected to write syntactically correct code on platforms without syntax checks or autocomplete. So I did some exercises on such platforms in preparation for interviews.

In the real world, reliance on syntax autocomplete and checks was never an issue. The important thing has always been understanding the core concepts of the language and the runtime, e.g. how the event loop works with Node.js and how to write asynchronous and event driven programs.
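As a minimal sketch of the kind of event-loop understanding I mean (this example is mine, not anything autocomplete gives you): the ordering of synchronous code, microtasks (promises), and macrotasks (timers) in Node.js is a core concept, not syntax.

```javascript
// Node.js event-loop ordering: synchronous code runs to completion first,
// then the microtask queue (promise callbacks) drains, then timer
// callbacks fire — regardless of the order the lines appear in.
const order = [];

setTimeout(() => order.push("timer"), 0);            // macrotask (timer queue)
Promise.resolve().then(() => order.push("promise")); // microtask queue
order.push("sync");                                  // runs immediately

setTimeout(() => {
  console.log(order.join(",")); // prints "sync,promise,timer"
}, 10);
```

No amount of autocomplete helps you predict that output; it comes from knowing how the runtime schedules work.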

I'm the opposite: I don't think I've read a single line of code I've shipped in over 6 months.

I'd say it's far more tiring working that way, though: you're breaking the satisfaction loop, so you never really get the dopamine you used to get coding by hand. When you had a problem, figuring it out was like solving a puzzle, and you felt satisfaction at the end of it. With AI, it feels like most of my day is spent being a QA rather than a puzzle solver, and it's exhausting. Even when it solves difficult problems for me, the LLM slot machine is far less satisfying than if I'd figured it out myself.

  • Agree with you for my day job (which is coding corporate web apps), for sure. I'm still letting A.I. drive more nowadays, but it does feel less fulfilling than it used to.

    But for my personal projects, I work on games, and by offloading a lot of the coding work to A.I., my puzzle solving is no longer 'how to fix this stupid library spitting stupid errors at me' or 'how to get this shader working' or 'why is this upgrade breaking all the things' and more 'what does this game need in order to be fun and good?', which I find a lot more fulfilling.

    It's also why I switched my focus to board game design for the longest time. I didn't have to fight my tools or learn some new api or library frequently. And if I wanted to try a new mechanic, I didn't need to spend 20 minutes or 2 hours or 2 days implementing it, I could write something on an index card in five seconds and shift mid-game most of the time.

    A.I. just brought video games closer to that experience, which has actually made them more fun to work on again, because board games have the immense challenge (financial/logistical if self-publishing, or social/networking if attempting to get published through a publisher) of getting physical games published to worry about.

    • The puzzle thought was mostly me trying to figure out why AI coding was more emotionally tiring when I'm literally doing less and creating more, maybe it's something else.

  • As someone who does primarily DevOps, I find this interesting: my satisfaction has increased with AI. For me, the code isn't the puzzle but an annoying inconvenience in the way of completing the entire system. For me, QA is a big part of solving the puzzle.

    • DevOps is a huge part of my job as a systems engineer and I too have found increased satisfaction with AI.

      I think the reason (for me, at least) is that my markers of success were always perched precariously atop a mountain of systems that I had varying levels of understanding of anyway. Seeing a pipeline "doing the thing" is satisfying regardless of how I sorted it out.

    > I'm the opposite: I don't think I've read a single line of code I've shipped in over 6 months.

    This feels unfair to the people dealing with your (LLM’s) code. You don’t vet it at all? Or am I reading this wrong?

    • What does "fair" have to do with anything? This is exactly the issue the author is writing about. Take the easy way, reap the profits, then someone suffer the obviously predictable consequences at some point in the unforeseeable future... likely not you! "Fair" is not relevant.

      The original author points to the consolidation of military suppliers as a major issue, but the truth is that the economies of the western world have been massively dependent on this sort of consolidation and outsourcing for a large portion of the "growth" that they have achieved for a generation.

      It would be convenient to think that the real question is "how do we climb back out of this hole?" but I feel the more pressing question is actually, "when and why will we start trying?"

      The profit motive simply does not drive society in this direction.

      The crises are catastrophic and perhaps even existential, but they are not profitable. You have to be a really lucky market timer to bet on crisis and win.

      Avoiding crisis over the longer term is simply not investable.

      "Fair" is not a relevant or useful conception in this context.

      11 replies →

    • My boss gets annoyed if I try to do things without AI, so eventually I caved, but I don't see the point in reading it if that's the culture being pushed at the company.

      Also, anyone else dealing with it is just gonna be dealing with it via AI, so it doesn't really matter.

      If I worked somewhere where the CEO cared about hand written code I would be writing it and reading it but I don't.

      1 reply →

    • What makes you think the people dealing with the LLMs' code won't also be using LLMs to "deal with it"?

      We're all now basically junior coders who have no idea what is in the codebase. Without LLMs, we won't be able to "deal" with any of it.

      And I don't like it one bit.

      3 replies →

I generally don't have as much time (or patience / fucks) in my day anymore. So, I use AI 3 days a week. On the other two days, I don't use assistants to code; I just ask them to review my work after it's done.

Helps me keep sane tbh. And keeps the edge sharp.

  • At work we are literally forced to use AI, and it's part of our performance review. Even though I really like coding by hand, I now have to use AI to keep my job. I will try this out, though: 2 days per week using AI and the rest handcoding, enough to stave off the inevitable layoff, perhaps.

    • Surely it can't be hard to token-max at work the same fucking way people have gamed Jira metrics for years and years.

      If I'm ever in that position (everything I work on is air-gapped, so it'll never happen), I would make it a priority to figure out how to game that bullshit metric so I could get on with solving actual problems.

      I imagine a lot of people do this. Metric becomes a target, etc.

      1 reply →

I have always had a problem, worse than most I think, where if I’m away from a language for a bit I lose my ability to write it quickly and competently, real quick.

It doesn’t matter if I was quite competent in it… the mechanical bits fade fast.

Doing llm assisted work is going to be like pouring bleach on my brain. I can feel it. The more I use it the worse it will be for me.

I can still formulate what I need, and problem solve just fine, but all the nuts and bolts evaporate.

  • There is a difference between understanding the shape of a solution and keeping the "finger memory" of a language alive.

    • For sure. I just… I feel really really dumb every time I have to knock the rust off to use a language again.

> Knowledge comes basically from manual trial/error almost daily.

This is the important statement, although I'd swap the word "knowledge" for "experience" here. You can gain "knowledge" from books, but only trial & error will give you experience to know "which" knowledge to use in which situations.

And what's important about this in the context of working with AI is the "error" part.

You have to experience errors to become truly experienced. And part of the experience is to recognize when you're about to make an error - to avoid it.

AI-driven processes mess up our natural trial & error learning curve in multiple ways:

- the AI push forces us to ship features faster (cause if we don't, our competitors will), reviews are sloppier, we discover errors later on, the feedback loop gets longer...

- using AI to debug and fix errors means we spend less time understanding what the error was about, which means we learn less about how to avoid the error in the first place...

- AI itself sounds overly confident, so reading its outputs without previous experience you may be less likely to recognize when it's making an error, which makes it harder for you to recognize when you're making an error trusting it...

On the other hand, this last point I tried to make is also why I don't think avoiding AI completely is a good strategy. Whether we like it or not, AI is becoming a part of developer's workflow. And as such, we also need to learn the trial & error process of using AI - what makes AI make errors and how to prompt it to avoid that.

To add to this issue, a lot of people then offload their mental load and work to the people downstream of their LLM results.

Someone on HN put it well the other day: everyone wants to deliver AI results, no one wants to receive them.

I really don't understand the people who use it for everything.

It's become my first stop for search because it works in bulk: it reads 50 results and leads me to something useful.

But I just got Claude MCP connected to my personal email/calendar/etc and I can't figure out what to do with it. It wrote a summary of my inbox that took as long to read as flipping through my inbox. And since it makes no sense to delegate decision making, I'm not sure what the actual work I'm supposed to give it would be.

> Giving that away, at least for me, means becoming a dependent zombie.

I suppose people felt the same way in the agrarian revolution and later again with inventions like the plough. Suddenly a lot of people offloaded their food independence onto the work of a few.

What might it open up in our lives to be free of knowledge?

That said, these machines don't run themselves, if we disengage our minds we might get stuck in a dead end with them.

Another perspective: AI reduces brain effort in some domains which actually frees up brain juice that can be applied elsewhere.

  • For me AI mostly reduces time effort. AI types code faster than I do, looks up stuff on the internet faster than I do, debugs faster than I do, but doing those never required much "brain effort" from me.

    What does require "brain effort" from me is making educated decisions. Mostly during planning to figure out which pros/cons of each possible approach are actually relevant for our situation - AI does this poorly, makes lots of wrong assumptions if you don't steer it correctly, and noticing these + correcting AI on them requires "brain effort" too. Then the part of code review where you think about what can go wrong. AI still sucks at figuring out edge cases. It doesn't "know" the entire codebase like I do, its context only has "the parts of codebase deemed relevant".

    Before AI I could jump from 30 minutes of hard thinking into an hour of coding during which my brain essentially rested, before returning to hard thinking again. Nowadays those hour-long coding sessions turn into 5-10 minutes of watching AI do something.

    So for me using AI doesn't "free up brain juice", it instead makes me use my "brain effort" more, and in a workplace environment gives me less time to rest and makes me more tired, cause nowadays bosses expect us to work faster + colleagues working faster means more review requests.

  • Show us effects.

    What amazing breakthroughs were achieved thanks to brain juice freed by AI usage? What great works of art were created?

    • I've got one. I'm working on a cryptographic identity system in rust. One of the stricter iterations of it demanded creating a public version and private version of each type. The best way to accomplish this is a procedural macro. I don't know if you've written proc macros by hand in rust. I have, years ago, and it was somewhat torturous. I didn't want to relearn to do it all over again and spend what would have taken weeks (this is a side project) to gain a skill I will easily forget in a month or so. So I had an LLM code it for me. This is a really great use for it: it's not building any strong logic or doing any IO, it's simply writing code that generates other code, and is entirely verifiable and testable. It built it for me so I could spend those weeks working on higher level logic and p2p syncing protocol stuff that actually matters for the project.

      I want to make it clear that I'm an LLM luddite. I mostly find the things distasteful and obnoxious. But there are definitely use-cases where they can do what's essentially bitch work and save a lot of time that would otherwise be a waste. It's a tool that can be used for specific things. I don't use them for everything.

      4 replies →

    • I'll bite. I've been writing music for decades, but I can't sing. With AI I can write lyrics and generate AI vocals, then separate the stems and extract the vocals, throwing away the rest. I add the vocals to my DAW and create the rest the way I want. Saying it's a great work of art is subjective, but now I can make music I couldn't before.

      2 replies →

    • So this is the classic tension between the "coding for the love of code" vs the "coding to solve problems" mindset. This cultural concept has been around since before AI was on the scene, heck well before software existed (craftsman vs builder).

      1 reply →

  • Ya, this has been my sentiment. If I need to one-off a quick script that does some processing on data, it's nice to offload that so I can focus on the pieces of my code that are more important and interesting to me. The context switching cost is still there tho…

I think you're forgetting what programming actually is. It's a way to convert instructions into 1s and 0s. The programming language you're using is irrelevant. If AI can do the same thing with natural language, it's just a matter of time before we drop the "programming language" layer and AI just creates programs using straight-up machine code.

The danger is not using a tool. We all use tools... The danger is skipping the part where your own brain builds a model of the problem.

I hear this a lot, but also I'm curious. How can you really forget coding?

It doesn't seem to me a thing that I could suddenly forget?

Without AI I will feel frustrated that I'm now much slower, but ultimately it's just describing logic. So I'm a bit skeptical of the claim.

My brain effort now goes into other things, such as how to orchestrate guardrails, how to build pipelines that enable multiple agents to work on the same thing at the same time, how to understand their weaknesses and strengths, and how to automate all of that. So there's definitely a lot of mental effort going into those things.

  • If you are not practicing an activity consistently, you'll forget some of the finer-grained aspects. When I'm coding, I subconsciously create a continuous logic map. Having someone or something else generate the code (and generate it so quickly) destroys that and makes it easier for bugs to slip through.

    • I mean, if AI stopped existing all of a sudden, that doesn't mean you would have forgotten how to code and suddenly couldn't anymore, right?

      You could maybe forget how a certain lib or framework worked, or you might not be up to date with all the new ones, but ultimately code can be represented as just functions with input and output, and that's all there is to it.

      As in how could I possibly forget what loops, conditionals or functions are?

      I haven't written code myself for 1+ years (because AI does it), but I feel like I have forgotten absolutely nothing. In fact, I feel like I have learned more about coding, because I see what patterns AI uses vs. what I or other people did, and I am able to watch different patterns either work out or not work out much faster in front of my eyes.

      14 replies →

  • If your internet died, you would likely be worse at programming than you were in 2020. That, I think, is what people are getting at.

    • I always compare AI programming to Google. If that's the case, then without internet, without Google, without Stack Overflow, my abilities would be worse than they were in 2000.

    • If my internet had died in 2020, I would also have been useless, because I probably couldn't have installed/downloaded all the libs/frameworks, etc.

      But if I didn't need those things, and there was a simple pseudolang syntax which acted exactly the same in all versions, didn't have any breaking changes, I would argue I'd be much better at it now.

      Internet, search, etc. are needed to understand how to set up libs/frameworks/APIs, but logic itself isn't something I could possibly forget. AI helps me get those setups done quicker without having to search, but arguably it's all useless information that will get out of date and that I don't really need to know anyway. I don't need to know off the top of my head what the perfect modern tsconfig setup should look like, or what the best monorepo framework is and how to set it up so it scalably supports all the different coding languages for different purposes.

  • I used to be an expert at PHP, but I haven't written any in over a decade. I can still read it, but it would take me a little while to get back to where I was (hopefully I'll never need to). The same thing could easily happen due to AI.

> I see, up close, people who are 100% dependent on AI for literally everything, even the most trivial daily tasks, and I find that truly scary, because it means that brain effort drops dramatically to a minimum level. Having your mental effort stolen is not a minor thing.

I find myself thinking more, and my thinking is of higher quality. Now, I have 30 years of fucked-up-projects experience, so I know all the rakes I could step into.

  • I think it's been well-documented that people feel more productive with GenAI, even when actual productivity declines.

  • You probably overestimate yourself.

    • I relate to the idea of having a different level of thinking now with AI. How would you evaluate that someone is overestimating themselves?

      As in, every little thing that used to be too much effort before, I can now easily get the info and the data with a prompt. Data analysis that might otherwise have taken hours to figure out: I can just have AI write scripts for everything, which lets me see data about everything that was previously out of reach. Now you will probably ask, of course, "how do I know the data is accurate?" I can still cross-reference things, and it is still far faster, because even if I had spent hours before trying to access that data, there wouldn't have been similar guarantees that it was accurate.

      I am thinking so much more about the things that I couldn't possibly have had time to think about before, because they were so far out of reach, or even unimaginable to do in my lifetime. Now I'm thinking about automating everything, having perfect visualizations and data about everything, being able to study/learn everything quickly, etc.

      3 replies →