
Comment by jijijijij

20 hours ago

For the life of me, I don't get the productivity argument. At least from a worker perspective.

I mean, it's at best a very momentary thing. Expectations will adapt and the time gained will soon be filled with more work. The net gain in free time will ultimately be zero, and that's the optimistic case; I strongly suspect general life satisfaction will be much lower, since you inherently lose confidence in creation and agency, and the experience of self-efficacy is therefore lessened, too. Even if external pressure isn't increased, the brain will adapt to a new normal for what counts as lazy. Everybody hates unloading the dishwasher; the aversion threshold ends up the same as washing dishes by hand.

And yeah, in the process you atrophy your problem-solving skills and your tolerance for frustration. I think we will collectively learn how important some of these "inefficiencies" are for gaining knowledge and wisdom. It's reminiscent of Goodhart's Law, again and again. "Output" is an insufficient metric for measuring performance and value creation.

The costs of using AI services don't at all reflect the actual costs of running them sustainably. So these questionable "productivity gains" should be weighed against actual costs, in any case. Compare AI to (cheap, plastic) 3D printing, which is genuinely transformative, revolutionary tech in almost every (real) industry: I don't see how the trillions in investment and the absurd waste of energy and resources could ever be justified by what AI offers, or even by what's imaginable for it (given its inherent limitations).

For me it boils down to this: I'm much less tied to tech stacks I've previously worked on and can pick up unfamiliar ones quicker.

Democratization, they call it.

  • > and can pick up unfamiliar ones quicker

    Do you tho? Does "picking up" a skill mean the same thing it used to? Do you fact-check all the stuff AI tells you? How certain are you that you're learning correct information? Struggling through unfamiliar topics, making mistakes, and figuring out solutions by testing internal hypotheses is a big part of how deep, explanatory knowledge is acquired by human brains. Or maybe it's always been 10,000 kilowatt-hours, after all.

    Even if you did actually learn different tech stacks faster with AI telling you what to do, it's still a momentary thing, since these systems are fundamentally poisoned by their own talk. So shit's basically frozen in time, still limited to pre-AI-slop information, or it requires insane amounts of manual sanitization. And who's gonna write the content for clean new training data anyway?

    Mind you, I am talking about the prospects of this technology and a cost-value evaluation. Maybe I am grossly ignorant/uninformed, but to me it all just doesn't add up if you project the inherent limitations onto wider adoption and draw the obvious logical conclusions. That is, assuming humanity isn't stagnating and new knowledge is still being created.

    • > Do you tho?

      A recent success I've been happy with has been moving my laptop config to the Nix package manager.

      A common complaint people have is Nix the language. It's a bit awkward, "JSON-like". I probably would not have had the patience to engage with it given the little time I have available. But AI mostly gets the syntax right, which lets me engage with it, and I think I have a decent grasp of the ecosystem, and even the syntax, by this point. It's been roughly a year, I think.

      Like, I don't know all the constructs available in the language, but I can still reason as a layperson that I probably don't want to define my username multiple times in my config, especially when trying to keep the setup reproducible across an arbitrary set of personal laptops. So for a new laptop I just define one new array item as the source of truth and everything downstream just works.
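
      Roughly, the idea looks something like this. It's a schematic sketch with made-up names, not my actual config; in a real setup the derived values would feed into whatever modules you use (e.g. home-manager):

          # Single source of truth: the username is defined exactly once,
          # and there is one array item per machine.
          let
            username = "me";
            laptops = [
              { hostname = "mbp-personal"; system = "aarch64-darwin"; }
              { hostname = "mbp-work";     system = "aarch64-darwin"; }
            ];
            # Everything downstream is derived from the entries above.
            mkHost = laptop: {
              name = laptop.hostname;
              value = {
                inherit (laptop) system;
                homeDirectory = "/Users/${username}";
              };
            };
          in
            builtins.listToAttrs (map mkHost laptops)

      Adding a new laptop is then just adding one entry to that list.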

      I feel like with AI the architectural properties matter more than the low-level details. Nix has the nice property of reproducibility/declarativeness. You could for sure put even more effort into alternative solutions, but if they lack reproducibility I think you're going to keep suffering, no matter how much AI you have available.

      I am certain my config has some silliness in it that someone more experienced would pick out, but ultimately I'm not sure how much that matters. My config is still reproducible enough that I have my very custom env up and running after a few commands on an arbitrary MacBook.

      > Does "picking up" a skill mean the same thing it used to?

      I personally feel confident in helping people move their config to Nix, so I would say yes. But it's a big question.

      > Do you fact-check all the stuff AI tells you? How certain are you that you're learning correct information?

      Well, usually I have a more or less testable setup, so I can verify whether the desired effect was achieved. When things don't work, that's when I start reaching for the docs or the source code of, for example, the library I'm trying to use.

      > Struggling through unfamiliar topics, making mistakes, and figuring out solutions by testing internal hypotheses is a big part of how deep, explanatory knowledge is acquired by human brains.

      I don't think this is lost. I iterate a lot. I think the Claude Code author does too; didn't they have something like +40k/-38k lines of changes over the past year or so? I still use GitHub issues to track what I want to get done when a solution is difficult to reach, and I comment progress on them. Recently I did that with my struggles cross-compiling Rust from Linux to macOS. It's just easier to iterate, and I don't need to sleep on it overnight to get unstuck.

      > since these systems are fundamentally poisoned by their own talk,

      _I_ feel like this goes into overthinking territory. I think software and systems will still live or die on their merits, and the same applies to training data. If bugs regularly make it to end users and a competing solution has fewer defects, I don't think the buggy solution will stay afloat any longer thanks to AI. So, I'd argue, the training data will be OK. Paradigms can still exist, like the Theory of Modern Go discouraging globals and init functions. And I think this was something Tesla also had to deal with before modern LLMs? As in, not all drivers drove well enough that Tesla wanted to use their data for training Autopilot.

      I really enjoyed your reply, thank you.
