Comment by Akranazon
21 days ago
Everything you have said here is completely true, except for "not in that group": the cost-benefit analysis clearly favors letting these tools rip, even despite the drawbacks.
Maybe.
But it's also likely that these tools will produce mountains of unmaintainable code and people will get buried by the technical debt. It kind of strikes me as similar to the hubris of calling the Titanic "unsinkable." It's an untested claim with potentially disastrous consequences.
> But it's also likely that these tools will produce mountains of unmaintainable code and people will get buried by the technical debt.
It's not just likely, it's guaranteed to happen if you're not keeping an eye on it. So much so that it's really reinforced my existing preference for typed and compiled languages, which reduce some of the checking you need to do.
Using an agent with a dynamic language feels very YOLO to me. I guess you can somewhat compensate with reams of tests, though. (Which raises the question: is the dynamic language still saving you time?)
Companies aren't evaluating on "keeping an eye on technical debt", but they ARE directly evaluating on whether you use AI tools.
Meanwhile they are hollowing out work forces based on those metrics.
If we make doing the right thing career limiting this all gets rather messy rather quickly.
Tests make me faster. Dynamic or not feels irrelevant when I consider how much slower I’d be without the fast feedback loop of tests.
I can provide evidence for your claim. The technical debt can easily snowball if the review process is not stringent enough to keep out unnecessary functions.
Oh, I'm well aware of this. I've admitted defeat, in a way: I can't compete. I'm just at a loss, and unless LLMs stall and break for some reason (AI bubble, enshittification...), I don't see a future for myself in "software" in a few years.
Somehow I appreciate this type of attitude more than the one which reflects total denial of the current trajectory. Fervent denial and AI trash-talking being maybe the single most dominant sentiment on HN over the last year, by all means interspersed with a fair amount of amazement at our new toys.
But it is sad if good programmers lose sight of the opportunities the future will bring (future as in the next few decades). If anything, software expertise is likely to be one of the most sought-after skills; it's just a slightly different kind of skill than churning out LOCs on a keyboard faster than the next person. People who can harness the LLMs, design prompts at the right abstraction level, verify the code produced, understand when someone has injected malware, etc.: these skills will be extremely valuable in the short to medium term, AFAICS.
But ultimately we will obviously become obsolete if nothing (really) catastrophic happens, but when that happens then likely all human labor will be obsolete too, and society will need to be organized differently than exchanging labor for money for means of sustenance.
If the world comes to that it will be absolutely catastrophic, and it’s a failure of grappling with the implications that many of the executives of AI companies think you can paper over the social upheaval with some UBI. There will be no controlling what happens, and you don’t even need to believe in some malicious autonomous AI to see that.
I get crazy over the "engineers are not paid to write LOC" line; nobody is sad that they don't have to type anymore. My two issues are: first, it levels the delivery game, so for the average web app anybody can now output something acceptable; and second, it doesn't help me conceptualize solutions better, so I revert to letting it produce stuff that isn't malleable enough.
The future is either a language model trained on AI code bloat and the ways to optimize the bloat away,
OR,
something like Mercor, currently being paid really well by Meta, OpenAI, Anthropic and Gemini to pay very smart humans really well to proofread language model outputs.
Yep, it's a rather depressing realization, isn't it? Oh well, life moves on, I suppose.
I think we realistically have a few years of runway left though. Adoption is always slow outside of the far right of the bell curve.
I'm sorry if I pulled everybody down, but it's been many months since Gemini and Claude became solid tools, and I regularly have this strong gut feeling. I've tried reevaluating my perception of my work, goals, and value, but I keep going back to "nope".
I feel the same. And I expect even a lot of the early adopters and AI enthusiasts are going to find themselves on the short end of the stick sooner rather than later.
"Oops, I automated myself out of a job."
I've already seen this play out. The lazy ones on our floor were all crazy about AI because they could finally work less and still finish their tasks. Until they realized that they were visibly replaceable now. The motto in team chats is now "we'll lie about the productivity gains to management, just say 10% but with lots of caretaking".
Yup. The majority of this website is going to find out they were grossly overpaid for a long time.
Imagine everyone who is in less technical or skilled domains.
I can't help but resist this line of thinking as a result. If the end is nigh for us, it's nigh for everyone else too. Imagine the droves of less technical workers in the workforce who will be unseated before software engineers. I don't think it is tenable for every worker in the first world to become replaced by a computer. If an attempt at this were to occur, those smart unemployed people would be a real pain in the ass for the oligarchs.
I feel the same.
Frankly, I am not sure there is a place in the world at all for me in ten years.
I think the future might just be a big enough garden to keep me fed while I wait for lack of healthcare access to put me out of my misery.
I am glad I am not younger.
So why haven't you been fired already?
.......
Gemini has only been deployed in the corp this year, but the expectations are now higher (doubled). I'll report by the end of the year.
> the cost-benefit analysis clearly favors letting these tools rip
Does it? I have yet to see any evidence that they are a net win in terms of productivity. It seems to just be a feeling that it's more efficient.