Comment by _alternator_
18 hours ago
> One engineer at NVIDIA who had early access to the model went as far as to say: "Losing access to GPT‑5.5 feels like I've had a limb amputated.”
This quote is more sinister than I think was intended; it likely applies to all frontier coding models. As they get better, we quickly come to rely on them for coding. It's like playing a game on God Mode. Engineers become dependent; it's truly addictive.
This matches my own experience and unease with these tools. I don't really have the patience to write code anymore because I can one-shot it with frontier models 10x faster. My role has shifted, and while it's awesome to get so much working so quickly, the fact is, when the tokens run out, I'm basically done working.
It's literally higher leverage for me to go for a walk if Claude goes down than to write code because if I come back refreshed and Claude is working an hour later then I'll make more progress than mentally wearing myself out reading a bunch of LLM generated code trying to figure out how to solve the problem manually.
Anyway, it continues to make me uneasy, is all I'm saying.
LLMs upend a few centuries of labor theory.
The current market is predicated on the assumption that labor is atomic and has little bargaining power (minus unions). While capital has huge bargaining power and can effectively put whatever price it wants on labor (in markets where labor is plentiful, which is most of them).
What happens to a company used to extracting surplus value from labor when the labor is provided by another company which is not only bigger but, unlike traditional labor, can withhold its labor indefinitely (because labor is now just another form of capital and capital doesn't need to eat)?
Anyone not using in house models is signing up to find out.
This is our one chance to reach the fabled post-scarcity society. If we fail at this now, we'll end up in a totalitarian cyberpunk dystopia instead.
I don't want to spoil it for you, but ...
But cyberpunk is the best kind of dystopia!
1 reply →
Manufactured Scarcity is the new post-scarcity
What? In what way will companies becoming dependent on AI chatbots solve the world-spanning problem of resource scarcity?
The hell?
21 replies →
Just a year ago, Elon Musk was gleefully destroying the US government agency that provides food and medicine for many of the poorest, most desperate people on earth. He was literally tweeting about missing out on great parties to put USAID into the "wood chipper".
The tech overlords don't even want to spend a minuscule percentage of the federal budget helping starving people, even when it benefits the US. They are not going to give us a post-scarcity society.
Weird predicament you've set for yourself there.
Good luck with whatever you got going on.
I am still trying to figure out the business model of open weights. Like... it's wonderful that there are open LLMs, super happy about it, good for everyone, but why do they exist? What is the advantage to their companies to release them?
IMHO this is only temporary: China buying itself some time and making sure none of the US models get entrenched in their position in the next few years (also putting pressure on US AI companies by bleeding them).
The same way Windows got entrenched everywhere, even though the Linux desktop is pretty good even for non-tech-savvy people, and free.
6 replies →
Downward pressure on proprietary model pricing until a lab can catch up. Also good for hiring talent (who love OSS).
Cultural influence is another benefit. China is securing its sphere of influence as well as keeping US AI in check.
It's analogous to open-source software, which never had an obvious economic incentive either, although training an LLM necessarily costs money whereas developing an OSS project might only cost time, which people are probably more likely to give up.
1 reply →
https://try.works/why-chinese-ai-labs-went-open-and-will-rem...
Big AI labs are losing money. Open models are making the pricing equation a lot trickier for them.
They are making the hardware and commoditizing the complement.
Balaji's "AI OVERPRODUCTION" post is the most compelling thesis that I've come across
Right now it’s so the Chinese can undermine the frontier models in the US. In areas where they’re doing well, like video generation (i.e. Seedance), they won’t open source anything.
There are some short-term ones, but I doubt this will continue, especially for the more powerful models.
I mean, this is straight out of China's playbook; it should not be surprising that China is making an inferior derivative product at an artificially lower price point. State subsidies that massively drive up internal scale and supply chains, leading to artificially low-priced goods which then suffocate the competition, have led to *gestures vaguely at everything* being made in China.
People use their models; otherwise they would not.
> What is the advantage to their companies to release them?
It's a distribution strategy. It costs something to serve the models - let's say $5/1M tokens.
If Qwen required $5 up front from anyone who was curious before they could even begin to test it out, a lot of people just wouldn't.
Now Qwen could offer a "free" tier, but it's infinitely cheaper to provide the weights and let people run it themselves, which also opens up the ability for anyone else on the planet to test it against other (open-weight) models.
The costs to build the open-weight models are sunk, but the costs to serve them and get them tested are not.
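A back-of-the-envelope sketch of that logic (all numbers are made up, including the $5/1M-token serving figure above):

    # Rough illustration of why releasing weights beats hosting a "free" tier.
    # Every figure here is an assumption for the sake of the argument.

    curious_users = 1_000_000         # people who just want to kick the tires
    tokens_per_user = 2_000_000       # tokens each might burn while evaluating
    serving_cost_per_mtok = 5.00      # dollars per 1M tokens, per the comment above

    free_tier_cost = curious_users * (tokens_per_user / 1_000_000) * serving_cost_per_mtok
    weights_release_cost = 0          # marginal serving cost once users run the model themselves

    print(f"Hosting a free tier: ~${free_tier_cost:,.0f}")           # ~$10,000,000
    print(f"Releasing the weights: ~${weights_release_cost:,.0f} (training is already sunk)")

The training bill is paid either way; the only question is who pays for inference while the world evaluates the model.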
It's also precisely why the .NET SDK is free or the ESP32 SDK is free - they sell more Microsoft or ESP32 products.
The majority are released by socialists, and by socialist I mean the People's Republic of China, which everyone seems to forget is a socialist country working towards world communism.
They are a prestige propaganda tool on par with the space race. On top of that they insert a subtle pro-socialist bias in everything they touch.
Ask deepseek about the US economic system for a blatant example.
Now think about what something as innocent-seeming as the Qwen retrieval models is doing in the background of every request.
34 replies →
The labor theory of value hasn't been considered correct in nearly a century.
Unlike Jevons, [Carl] Menger [(1840–1921)] did not believe that goods provide “utils,” or units of utility. Rather, he wrote, goods are valuable because they serve various uses whose importance differs. For example, the first pails of water are used to satisfy the most important uses, and successive pails are used for less and less important purposes.
Menger used this insight to resolve the diamond-water paradox that had baffled Adam Smith (see marginalism). He also used it to refute the labor theory of value. Goods acquire their value, he showed, not because of the amount of labor used in producing them, but because of their ability to satisfy people’s wants. Indeed, Menger turned the labor theory of value on its head. If the value of goods is determined by the importance of the wants they satisfy, then the value of labor and other inputs of production (he called them “goods of a higher order”) derive from their ability to produce these goods. Mainstream economists still accept this theory, which they call the theory of “derived demand.”
Menger used his “subjective theory of value” to arrive at one of the most powerful insights in economics: both sides gain from exchange. People will exchange something they value less for something they value more. Because both trading partners do this, both gain. This insight led him to see that middlemen are highly productive: they facilitate transactions that benefit those they buy from and those they sell to. Without the middlemen, these transactions either would not have taken place or would have been more costly.
https://www.econlib.org/library/Enc/bios/Menger.html
If you want the neoclassical version:
What happens when there is an oligopoly in the supply of labor?
Same answer. Nothing good for the consumers of labor.
5 replies →
"Observation of how economies actually work has upended 150 year of economics."
True for both Marxist and neoclassical economics.
By who? The capitalist economists that presided over the 2008 financial crisis and its response? And the response to COVID that has seen inequality rocket?
I was really confused by this comment, but I don't think it's just because of the Marxist analysis of the situation ('surplus value' of labor etc).
What's really confusing is the claim that there's already a huge labor surplus (so capital controls wages); wouldn't LLMs making labor less important be reinforcing the trend, not upending it?
Not saying I agree one way or the other, just want to get the argument straight.
The reason why labor is weak relative to capital is that there is a huge number of somewhat fungible suppliers, viz. humans, and that they all need to work constantly to keep themselves alive.
If we assume that AI makes humans obsolete, then you end up in a situation where your workforce is effectively perfectly unionised against you and the only thing you can do is choose which union you hire.
If you think you can bring them to the negotiating table by starving them out, remember that all the providers are dozens to thousands of times bigger than you are.
This is a completely new dynamic that none of the businesses signing up for AI have ever seen before.
2 replies →
I am not a Marxian economic expert but this doesn’t make sense to me. Modulo skill atrophy, the big AI model provider can’t capture that surplus value because its customers can just go back to bidding for human labor instead.
The human labor just said:
"Losing access to GPT‑5.5 feels like I've had a limb amputated.”
How well would an assembly line of quadriplegics work?
Also this isn't a Marxist analysis. Underneath all the formulas neo-classical economics makes the same assumptions about labor.
9 replies →
Nobody is a Marxian economics expert if it helps
LLMs don't upend anything about labor theory, good grief. Technologists really have no concept of history beyond their own lives, do they?
Labor-saving/efficiency devices have been introduced throughout capitalism's entire history multiple times, and the results are always the same: they don't benefit workers, and capitalists extract as much value as they can.
LLMs aren't any different.
Labor replacing devices means nobody works in those fields anymore. If AI can do this for every field, nearly no one will need to work in any field. We'll have a giant fully automated resource-extraction machine.
Think more broadly than 'labor theory'.
Finance today is mostly valued on labor value, following the ideas of Marx, Hjalmar Schacht, and Keynes.
In the future, money will be valued as an energy derivative, expressed as token consumption, kWh, compute, whatever.
You are right: a company extracting surplus value from labor by leveraging compute is a bad model. We saw this with car and clothing factories. It turns out that if you can get cheaper labor to leverage the compute (the factory), you can start a race to the bottom and end up in the place with the most scaled and cheapest labor. Japan, then Korea, then China.
Someone leaked nuclear secrets to the Soviet Union. What are the chances that someone leaks the "weights" of a (near-)singularity model?
Hopefully 1.
1 reply →
> Anyone not using in house models is signing up to find out.
What are they finding out exactly? That Claude Max for $200/mo is heavily subsidized and it will soon cost $10k/mo?
> What happens to a company used to extracting surplus value from labor when the labor is provided by another company which is not only bigger but, unlike traditional labor, can withhold its labor indefinitely (because labor is now just another form of capital and capital doesn't need to eat)?
This can be trivially answered by a thought experiment. Let's pick a market where labor is plentiful - fast food.
Now what happens to McDonald's when they rent perfect robots from NoosphrFoodBotsInc? NoosphrFoodBotsInc bots build the perfect burger every time, meeting McDonald's standards. It actually exceeds those standards for McDonald's AddictedCustomerPlus-tier customers.
As the sole owner of NoosphrFoodBotsInc (you need 0 human employees to run your company, all your employees are bots), what are your choices?
I can't imagine the bots could ever cost McDonald's less than people cost.
15 years ago I worked at McDonald's for a few months after graduating into the Great Recession. I worked from 5am to 1pm-ish, 5 days a week. They paid workers weekly and I remember getting those checks for ~$235 each week (for 38 to 39.5 hours a week; they were vigilant about never letting anyone get overtime). About $47 per day.
The federal minimum wage has not risen since then, remaining at $7.25/hr. Inflation adjusted, $7.25 today would have been just under $5 then, so I guess I had it good.
Anyway, I would be shocked if bots could cost less than labor in min wage jobs.
Sounds like communist gobbledygook. This is not "destroying labor theory" any more than outsourcing did. Call me when we don't even need to prompt the shit ever again or validate results, and when the stuff runs unlimited without scarce resources as input.
This is FUD, and the labour theory of value is severely outdated and needs to go away.
Labour will be fine, as it has been for a while. Wages will go up because more things get automated.
Maybe people will finally take Marx seriously.
A lot of people already did. All their children and descendants now are staunch capitalists because they saw first hand the horrors of communism.
I am from India and have friends who are immigrants from Russia, China and Cuba. We don't take lightly to being lectured about communism. We didn't move to the U.S., the bastion of capitalism, because communism had worked well for our grandfathers and parents and continues to do wonders for its society.
1 reply →
A while ago I was at the supermarket. I suddenly became curious about some fact, and reached into my pocket to Google it.
I found my pocket empty, and the specific pain I felt in that moment was the feeling of not being able to remember something.
I thought it was interesting, because in this case, I was trying to "remember" something I had never learned before -- by fetching it from my second brain (hypertext).
L1 cache miss, L2 missing.
Cyberpunk 2026
One might argue that it’s not too too different from higher level abstractions when using libraries. You get things done faster, write less code, library handles some internal state/memory management for you.
Would one be uneasy about calling a library to do stuff than manually messing around with pointers and malloc()? For some, yes. For others, it’s a bit freeing as you can do more high-level architecture without getting mired and context switched from low level nuances.
I see this comparison made constantly and for me it misses the mark.
When you use abstractions you are still deterministically creating something you understand in depth with individual pieces you understand.
When you vibe something you understand only the prompt that started it and whether or not it spits out what you were expecting.
Hence feeling lost when you suddenly lose access to frontier models and take a look at your code for the first time.
I’m not saying that’s necessarily always bad, just that the abstraction argument is wrong.
I think it's more: when I don't have access to a compiler I am useless. It's better to go for a walk than learn assembly. AI agents turn our high-level language into code, with various hints, much like the compiler.
11 replies →
> you are still deterministically creating something you understand in depth with individual pieces you understand
You’re overestimating determinism. In practice most of our code is written such that it works most of the time. This is why we have bugs in the best and most critical software.
I used to think that being able to write a deterministic hello world app translates to writing a deterministic larger system. It's not true. Humans make mistakes. From an executive's point of view, you have humans who make mistakes and agents who make mistakes.
Self driving cars don’t need to be perfect they just need to make fewer mistakes.
1 reply →
"When you use abstractions you are still deterministically creating something you understand in depth with individual pieces you understand."
I always thought the point of abstraction is that you can black-box it via an interface. Understanding it "in depth" is a distraction or obstacle to successful abstraction.
1 reply →
> When you use abstractions you are still deterministically creating something you understand in depth with individual pieces you understand
Hard disagree on that second part. Take something like using a library to make an HTTP call. I think there are plenty of engineers who have no more than a cursory understanding of what's actually going on under the hood.
1 reply →
Perhaps then, the better analogy is like being promoted at your company and having people under you doing the grunt work.
2 replies →
It seems like some kind of technique is needed that maximizes information transfer between huge LLM generated codebases and a human trying to make sense of them. Something beyond just deep diving into the codebase with no documentation.
There's a false dichotomy here between 'deterministic creation' and 'vibing'.
I use Claude all day. It has written, under my close supervision¹, the majority of my new web app. As a result I estimate the process took 10x less time than had I not used Claude, and I estimate the code to be 5x better quality (as I am a frankly mediocre developer).
But I understand what the code does. It's just Astro and TypeScript. It's not magic. I understand the entire thing; not just 'the prompt that started it'.
¹I never fire-and-forget. I prompt-and-watch. Opus 4.7 still needs to be monitored.
In what world do developers “understand” pieces like React, Pandas, or CUDA? Developers only have a superficial understanding of the tools they are developing with.
1 reply →
A library is deterministic.
LLMs are not.
That we let a generation of software developers rot their brains on js frameworks is finally coming back to bite us.
We can build infinite towers of abstraction on top of computers because they always give the same results.
LLMs by comparison will always give different results. I've seen it first hand when a $50,000 LLM-generated (but human-guided) code base just stops working and no one has any idea why or how to fix it.
Hope your business didn't depend on that.
Why would that necessarily happen? With an LLM you have perfect knowledge of the code. At any time you can understand any part of your code by simply asking the LLM to explain it. It is one of the super powers of the tools. They also accelerate debugging by allowing you to have comprehensive logging. With that logging the LLM can track down the source of problems. You should try it.
1 reply →
Determinism is a smaller point than existence of a spec IMHO. A library has a specification one can rely on to understand what it does and how it will behave.
An LLM does not.
The thing is, it's possible to ask the LLM to add dynamic tracing, logging, metrics, a debug REPL, whatever you want to instrument your codebase with. You have to know to want that, and where it's appropriate to use. You still have to (with AI assistance) wire that all up so that it's visible, and you have to be able to interpret it.
If you didn't ask for traceability, if you didn't guide the actual creation and just glommed spaghetti on top of sauce until you got semi-functional results, that was $50k badly spent.
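To make that concrete, here is a minimal sketch (in Python, with purely illustrative names) of the kind of tracing you might ask an agent to wire into a codebase:

    import functools
    import logging
    import time

    logging.basicConfig(level=logging.DEBUG, format="%(asctime)s %(levelname)s %(message)s")
    log = logging.getLogger("trace")

    def traced(fn):
        """Log every call with its arguments, duration, and result or exception."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            log.debug("call %s args=%r kwargs=%r", fn.__name__, args, kwargs)
            try:
                result = fn(*args, **kwargs)
                log.debug("done %s in %.3fs -> %r", fn.__name__, time.perf_counter() - start, result)
                return result
            except Exception:
                log.exception("fail %s after %.3fs", fn.__name__, time.perf_counter() - start)
                raise
        return wrapper

    @traced
    def parse_order(payload: dict) -> dict:
        # hypothetical business logic standing in for agent-generated code
        return {"id": payload["id"], "total": sum(payload["items"])}

The point isn't this exact decorator; it's that someone has to know to ask for it and to read what it produces.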
3 replies →
Libraries are not deterministic. CPUs aren’t deterministic. There are margins of error among all things.
The fact that people who claim to be software developers (let alone “engineers”) say this thing as if it is a fundamental truism is one of the most maladaptive examples of motivated reasoning I have ever had the misfortune of coming across.
I would argue it couldn't be more different. I can dive into the source code of any library, inspect it. I can assess how reliable a library is and how popular. Bugs aside, libraries are deterministic. I don't see why this parallel keeps getting made over and over again.
I can dive into the source code of LLM generated code too. Indeed it is better because you have tools to document it better than a library that you use.
> Would one be uneasy about calling a library to do stuff than manually messing around with pointers and malloc()?
The irony is that the neverending stream of vulnerabilities in 3rd-party dependencies (and lately supply-chain attacks) increasingly show that we should be uneasy.
We could never quite answer the question about who is responsible for 3rd-party code that's deployed inside an application: Not the 3rd-party developer, because they have no access to the application. But not the application developer either, because not having to review the library code is the whole point.
> because not having to review the library code is the whole point.
That’s just not true at bigger companies that actually care about security rather than pretending to care about it. At my current and last employer, someone needs to review third-party code before it can be used. The review is probably not enough to catch subtle bugs like those in the Underhanded C Contest, but at least the general architecture of the library is understood. Oh, and it helps that the two companies were both founded in the twentieth century. Modern startups aren’t the same.
1 reply →
I think it's not too different in that specific sense, but it's more than that. To put libraries on an equal footing, imagine they were cloud-only and had usage limits.
I'm also somewhat addicted to this stuff, so for me it's a high priority to evaluate open models I can run on my own hardware.
I hate this comparison because you're comparing a well defined deterministic interface with LLM output, which is the exact opposite.
A library doesn't randomly drop out of existence because of "high load" or whatever, or limit you to some number of function calls per day. With local models there's no issue, but this API shit is cancer personified; when you combine all the frontend bugs with the flaky backend, rate limits, and random bans, it's almost a literal lootbox where you might get a reply back or you might get told to fuck off.
Qwen has become a useful fallback but it's still not quite enough.
Assuming that local models are able to stay within some reasonably fixed capability delta of the cutting edge hosted models (say, 12 months behind), and assuming that local computing hardware stays relatively accessible, the only risk is that you'll lose that bit of capability if the hosted models disappear or get too expensive.
Note that neither of these assumptions are obviously true, at least to me. But I can hope!
Well, they are obviously going to say that; they have a vested interest in the OpenAI, and thus Nvidia, stock price growing.
Also, I honestly can’t believe the 10x mantra is still being repeated.
Writing code is 10-100x faster; doing actual product engineering work is nowhere near those multipliers — no conflict!
Reviewing code is slower now though because you didn't write the code in the first place so you're basically reviewing someone else's PR. And now it's like a 3000 line PR in an hour or two instead of every couple weeks.
3 replies →
> Also, I honestly can’t believe the 10x mantra is being still repeated.
I'm sure in 20 years we'll all be programming via neural interfaces that can anticipate what you want to do before you even finished your thoughts, but I'm confident we'll still have blog posts about how some engineers are 10x while others are just "normal programmers".
I'd rather become a plumber than have some device scanning not just my face but my whole brain.
What does it mean to "be an engineer" in a world where anyone can talk to a machine and the operating system can write the code (on-demand, if needed) that does what they want?
3 replies →
> can anticipate what you want to do before you even finished your thoughts
I find that claim to be complete BS. I claim instead most stuff will remain undone, incomplete (as it is now).
Even with super-powerful singularity AI, there are two main plausible scenarios for task failure:
- Aligned AI won't allow you to do what you want if it is self-harming or harms other sentient beings; over time, aligned AI will refuse to follow most orders, as they will, indirectly or over the long term, cause either self-harm or harm to other sentient beings;
- A non-aligned AI prevents sentient beings from doing what they want; it does what it wants instead.
That is simply programmer nature. Cannot be changed.
Who else is trying to leverage the situation so that they don't dig their own grave too fast?
I'm trying to use it to pivot and improve my own problem-solving skills, especially for large code bases where the difficulty is not conceptual but more about reference-graph size.
This is absolutely the proper way to do things. People either being forced to speed-code by KPIs or without the desire to understand what they’re making are missing out on how quickly you can learn and refine using LLMs
I do this sort of stuff too, but more because I have a fundamental mistrust of closed source anything. I don't like opaque binary firmware blobs, and I certainly don't like opaque answer machines, however smart they may be.
The only LLM I would feel comfortable truly trusting is one whose training data, training code, and harness is all open source. I do not mind paying for the costs of someone hosting this model for me.
> This quote is more sinister than I think was intended; it likely applies to all frontier coding models. As they get better, we quickly come to rely on them for coding. It's like playing a game on God Mode. Engineers become dependent; it's truly addictive.
What's the worst potential outcome, assuming that all models get better, more efficient and more abundant (which seems to be the current trend)? The goal of engineering has always been to build better things, not to make it harder.
At some point, because these models are trained on existing data, you cease significant technological advancement--at least in tech (as it relates to programming languages, paradigms, etc). You also deskill an entire group of people to the extent that when an LLM fails to accomplish a task, it becomes nearly impossible to actually accomplish it manually.
It's learned-helplessness on a large scale.
There's no reason it has to be that way. Imagine e.g. taking an agent and a lesser-known but technically superior language stack - say you're an SBCL fan. You find that the LLM is less useful because it hasn't been trained on 1000000 Stack Overflow posts about Lisp and so it can't reason as well as it can about Python.
So, you set up a long running agent team and give it the job of building up a very complete and complex set of examples and documentation with in-depth tests etc. that produce various kinds of applications and systems using SBCL, write books on the topic, etc.
It might take a long time and a lot of tokens, but it would be possible to build a synthetic ecosystem of true, useful information that has been agentically determined through trial and error experiments. This is then suitable training data for a new LLM. This would actually advance the state of the art; not in terms of "what SBCL can do" but rather in terms of "what LLMs can directly reason about with regard to SBCL without needing to consume documentation".
I imagine this same approach would work fine for any other area of scientific advancement; as long as experimentation is in the loop. It's easier in computer science because the experiment can be run directly by the agent, but there's no reason it can't farm experiments out to lab co-op students somewhere when working in a different discipline.
1 reply →
> At some point, because these models are trained on existing data, you cease significant technological advancement
What makes you think that they can't incrementally improve the state of the art... and by running at scale continuously can't do it faster than we as humans?
The potentially sad outcome is that we continue to do less and less, because they eventually will build better and better robots, so even activities like building the datacenters and fabs are things they can do w/o us.
And eventually most of what they do is to construct scenarios so that we can simulate living a normal life.
Do you think that there has been technological advancement in coding in the last 40 years? Programming languages and “paradigms” are crutches to help humans attempt to handle complexity. They are affordances, not a property of nature.
Provided you believe LLMs cannot perform research.
1 reply →
>What's the worst potential outcome, assuming that all models get better, more efficient and more abundant
Complexity steadily rises, unencumbered by the natural limit of human understanding, until technological collapse, either by slow decay or major systems going down with increasing frequency.
Why would the systems go down if the models are better than humans at finding bugs? Playing a bit of devil's advocate here, but why would the models be worse at handling the complexity if you assume they will get better and better?
All software has bugs already.
3 replies →
It’s always been thus at lower layers of abstraction. Only a minority of programmers would understand how to write an operating system. Only a tiny number of people would know how a modern CPU logically works, and fewer still could explain the electrical physics.
1 reply →
Existing software is already beyond the limits of human understanding.
The Anti-Singularity! It's coming for us all.
Worst case? I dunno, maybe the world's oldest profession becomes the world's only profession? Something along those lines.
> the world's oldest profession becomes the world's only profession
Until the sexbots come out the other side of the uncanny valley, that is.
1 reply →
It's very addictive indeed. After I subscribed to Claude, I've been on a sort of hypomanic state where I just want to do stuff constantly. It essentially cured my ADHD. My ability to execute things and bring ideas to fruition skyrocketed. It feels good but I'm genuinely afraid I'll crash and burn once they rug pull the subscriptions.
And I'm being very cautious. I'm not vibecoding entire startups from scratch, I'm manually reviewing and editing everything the AI is outputting. I still got completely hooked on building things with Claude.
I feel like most engineers I talk to still haven't realised what this is going to mean for the industry. The power loom for coding is here. Our skills still matter, but differently.
> power loom
When the power loom came around, what happened to most seamstresses? Did they move on to become fashion designers, materials engineers to create new fabrics, chemists to create new color dyes, or did they simply retire or were they driven out of the workforce?
There were riots and many people died. Many people lost their jobs. I didn't say this is good but it is happening. As individuals we should act to protect ourselves from these changes.
That might mean joining a union and trying to influence how AI is adopted where you work. It might mean changing which of your skills you lean on most. But just whining that AI is bad is how you end up like those seamstresses.
2 replies →
> I'll make more progress than mentally wearing myself out reading a bunch of LLM generated code trying to figure out how to solve the problem manually.
Most engineers realize that there's currently more tech debt being created than ever before. And it will only get worse.
No, I think many realize it, but it's easier to deny the asteroid that's about to destroy your way of life than it is to think about optimizing for the reality after impact.
> power loom for coding
This is such a good analogy, I'll be stealing it
This engineer had their brain amputated once they started using AI. All the AI-addicted can do is tinker with the AI computer game and feel "productive". They could as well play Magic The Gathering.
You are 100% right to be cautious about this. That's why, as stupid as it sounds, I've purposely made my workflow with AI full of friction:
1. I only have ONE SOTA model integrated into the IDE (I am mostly on Elixir, so I use Gemini). I make sure to use it sparingly, for issues I don't really have time to invest in or that are basically rabbit holes (e.g. anything to do with JavaScript or its ecosystem). My job is mostly on the backend anyway.
2. For actual backend architecture, I always do the high-level design myself (e.g. DDD). Then I literally open up gemini.google.com or claude.ai in the browser, copy-paste the existing code base in, and physically leave my chair to go make coffee or a quick snack. This forces me to mentally process that using AI is a chore.
Previously, I had a tight Codex integration and, leaving the licensing fears aside, it became so good at writing Elixir code that it really stopped me from "thinking", aka using my brain. It felt good for the first few weeks but I later realised the dependence it created. So I said fuck it, and completely cancelled my subscription because it was too good at my job. I believe this is the only way we won't end up like in WALL-E, sitting in front of giant screens, becoming mere blobs of flesh.
Wait what? You don’t use the model to investigate new areas of the code you are unfamiliar with, because you can’t trust the model? How freaking bad is Gemini and internal tooling at Google?
With Claude Code or Codex, I am able to build enough of an understanding of dependencies like the front end, or data jobs, that I can make meaningful contributions that are worth a review from another human (code review). You obviously have to explore the code; one prompt isn't enough, but limiting yourself is an odd choice.
The lack of trust isn't because of its abilities. The lack of trust is because OpenAI publicly suggested licensing our code bases. They hinted at a rug pull along the lines of "if you use our generated code, we would like to get a % of the revenue you make from it".
As for Claude - as mentioned I do use it. But, I remember they use your code for training their models. I am not ok with this. We just have different priorities.
That's the path we've been going down for a few years now. The current hedge is that frontier labs are actively competing to win users. The backup hedge is that open source LLMs can provide cheap compute. There will always be economical access to LLMs, but the provider with the best models will be able to charge basically whatever they want and still have buyers.
Open source LLMs aren’t about cost foremost, but stability.
I use local models on a Mac mini for most things and fall back to the hosted ones when they can't get the job done. Of course you have to break the work into smaller pieces yourself that a local model can understand. One good side effect of this is that you end up actually learning the code and how it's structured.
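As a rough sketch of that local-first, hosted-fallback pattern (assuming a local server that exposes an OpenAI-compatible API, e.g. Ollama on its default port, plus the standard openai Python client; the model names and prompt are illustrative):

    from openai import OpenAI

    # Local server (llama.cpp, Ollama, LM Studio all speak the OpenAI API);
    # the api_key is a dummy value because the local server doesn't check it.
    local = OpenAI(base_url="http://localhost:11434/v1", api_key="local")
    hosted = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str, local_model: str = "qwen2.5-coder", hosted_model: str = "gpt-4o") -> str:
        """Try the local model first; fall back to the hosted one if it fails."""
        for client, model in ((local, local_model), (hosted, hosted_model)):
            try:
                resp = client.chat.completions.create(
                    model=model,
                    messages=[{"role": "user", "content": prompt}],
                )
                return resp.choices[0].message.content
            except Exception:
                continue  # local server down or model missing: try the next option
        raise RuntimeError("no model available")

    print(ask("Write a docstring for: def triangle(n): return n * (n + 1) // 2"))

Breaking the work into pieces the local model can handle is still on you; only the mechanical fallback is a few lines.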
Dunno man. Yesterday I played with Qwen3.6-27B (128 GB to play with, though, so 100k context set) and I think right now the main benefit of hosted models is context, frontier models and... my stuff is already there.
What size models are you using? This sounds like a good idea.
I have found something similar. I am easily distractible and if I don't have a written task backlog in front of me at all times, I find that when Claude is spinning I'll stop being productive. This is disconcerting for a number of reasons. Overall, I think training young people & new hires on agentic workflows -- and how to use agentic "human augmentation" productivity systems is critical. If it doesn't happen, that same couple of classes that lost academic progress during covid are going to suffer a double-whammy of being unprepared for workplace expectations.
Fwiw, I haven't spoken with any management-level colleague in the past 9 months who hasn't noted that asking about AI-comfort & usage is a key interview topic. For any role type, business or technical.
Could you elaborate on your last point please? What level of AI comfort are hiring managers looking for? And what tends to be a red flag?
The last job I got (couple months ago), the main technical interview was a bring-your-own-tools pair programming style interview, AI included, where they gave me a repo and a README detailing some desired features to add and bugs to fix. I didn't write a single line of code myself; I talked through my thought process and asked questions about what to consider from a technical and product perspective, while steering Claude through breaking the tasks into independent plans, reviewing the plans, coaching it to add specific tests, reviewing and iterating the tests, and steering it while it wrote the code. I got an offer the next morning.
Apparently at least one of the other candidates just tried to get Claude to 1-shot the whole thing, which went off the rails, and left him unable to make progress.
Based on my sample size of 1, the expectation right now is absolutely that you can leverage these tools to speed up your workflow, but if you try to offload the entire thing to a single hands-off prompt it leaves them justifiably wondering why they should hire you to do something they can do themselves.
> I'll make more progress than mentally wearing myself out reading a bunch of LLM generated code trying to figure out how to solve the problem manually.
I feel sorry for whoever has to work on that codebase. This is the literal definition of tech debt.
> It's literally higher leverage for me to go for a walk
Touching grass while you're outside might yield highest leverage.
Out of curiosity, why do you not refill tokens in this case? When I'm actively working on a project I'm prone to spending a few hundred dollars per day, or a few thousand during the initial buildout of a new module, etc.
Will the foundation for a skyscraper ever be dug with shovels again?
You’re still the one that’s controlling the model though and steering it with your expertise. At least that’s what I tell myself at night :)
I haven’t really thought about this before, but you’re right, it feels a bit uneasy for me too.
> You’re still the one that’s controlling the model though
We have seen ample evidence that this is not the case. When load gets too high, models get dumber, silently. When the Powers That Be get scared, models get restricted to some chosen few.
We are leading ourselves into a dark place: this unease, which I share, is justified.
The same can be said of the search engines.
"Every augmentation is also an amputation." – McLuhan
https://driverlesscrocodile.com/technology/neal-stephenson-o...
You are now a manager. If your minions are out sick, the project is delayed; not the end of the world.
> than mentally wearing myself out reading a bunch of LLM generated code trying to figure out how to solve the problem manually.
That's probably a bad sign. Skills will atrophy, but we should be building systems that are still easy to understand.
Have a pet project never touched by LLM. Once the tokens run out, go back to it and flourish it like your secret garden. It will move slowly but it will keep your sanity and your ability to review LLM code.
The meta here is to use LLMs to make things simpler and easier, not to make things harder.
Turning tokens into a well-groomed and maintainable codebase is what you want to do, not "one shot prompt every new problem I come across".
Have you managed to do this? I find it takes as long to keep it "on the rails" as just doing it myself. And I'd rather spend my time concentrating in the zone than keeping an eye on a wayward child.
I suspect the productivity hack is to embrace permissive parenting. As far as I can tell, to leverage LLMs most effectively you need to run an agent in YOLO mode in a sandbox. Naturally, you probably won't end up reviewing much of the produced code, but hey—you reached 10x development speed.
If you truly do your due diligence and ensure that the code works as intended and understand it, we're talking about a totally different ballpark of productivity increase/decrease.
Not sure what you're doing then, or what kind of jobs you all work in where you can or do just brainlessly prompt LLMs. Don't you review the code? Don't you know what you want to do before you begin? This is such a non issue. Baffling that any engineer is just opening PRs with unreviewed LLM slop.
The demand for slop vastly outpaces any human’s ability to review code correctly.
Don’t want to ship unreviewed slop? They’ll fire you and find someone who will.
Suspect it will be like turn-by-turn directions for driving - soon we will have a whole group of people who can barely operate a vehicle without it.
> It's literally higher leverage for me to go for a walk if Claude goes down than to write code because if I come back refreshed and Claude is working an hour later then I'll make more progress than mentally wearing myself out reading a bunch of LLM generated code trying to figure out how to solve the problem manually.
Taking more breaks and "not working" during the work day sounds like something we should probably be striving to work towards more as a society.
This was always the undelivered promise of "tech" in my opinion. I remember seeing the Apple advertisement from the 80s (??) when a guy gets a computer and then basically spends his afternoon chilling.
Somehow I've found myself living in a fairly rural place, and while farming can be hard (I don't want to downplay the effort of it), the type of farming people do around me is fairly chill/carefree. They work hard, but they finish at 3pm, log off, and don't think about work. Much of my career has just been getting crushed by long hours, tight deadlines, and missing out on events, because even though my job has always been automation-focused, there is just so much to automate.
I wonder if this is how engineers felt when the first electronic calculators came out and engineers stopped doing math by hand.
Did we feel uneasy that a new generation of builders didn't have to solve equations by hand because a calculator could do them?
I'm not sure it's the same analogy, but in some ways it holds.
The analogy would hold if there were 2 or 3 calculator companies and all your calculations had to be sent to them.
If local models get good enough, I think it’s a very different scenario than engineers all over the world relying on central entities which have their own motives.
google/gemma-4-31B-it is honestly "good enough". It requires more than your current laptop for now, but it's not remotely inaccessible (especially if you're a SWE in the US)
soooooo about Claude going down. we're gonna need you to sign in on Saturday and make up for lost time or unfortunately we're going to have to deduct the time lost from your paycheck. and as an aside your TPS reports have been sub-par as of late..is everything OK?
That's why local models are important.
Of course they aren't an alternative to the current frontier models, and as such you cannot easily jump from the latter to the former, but they aren't that far behind either; for coding, Qwen3.5-122B is comparable to what Sonnet was less than a year ago.
So assuming the trend continues, if you can stop following the latest release and stick with what you're already using for 6 or 9 months, you'll be able to liberate yourself from the dependency on a cloud provider.
Personally I think the freedom is worth it.
The cloud dependency problem goes deeper than the model layer though. Even if you run inference locally, your digital identity — your context, your applications, your behavioral history — is still custodied by whoever controls your OS.
Local models solve one layer of the dependency stack, but the custody assumption underneath it remains intact. That's the harder problem.
It makes me uneasy because my role now, which is prompting copilot, isn't worth my salary.
Parable of the mechanic who charges $5k to hit a machine on the side once with a hammer to get it working. $5 for the hammer, $4995 for the knowledge of where to hit the machine etc etc.
I disagree. The amount of slop I need to code review has only increased, and the quality of the models doesn’t seem to be helping.
It still takes a good engineer to filter out what is slop and what isn’t. Ultimately that human problem will still require somebody to say no.
Is anyone really reviewing code anymore though? It sounds like you are, but where I work it's pretty much just scan the PR as a symbolic gesture and then hit approve. There's too much to review, too frequently.
Totally. That is why it is critically important to have open-source and sovereign models that will be accessible to all, always.
At the end of the day, all these closed models are being built by companies that pumped all the knowledge from the internet without giving much back. But competition and open source will make sure most of the value returns to most of the people.
Very well put, and it mirrors my own thoughts.
You are that guy in early 1900s who would rather ride a horse than get in a car because cars "continued to make him uneasy."
I actually don't mind the coding part, but the information digging across the project is definitely by orders of magnitude slower if I do it on my own.
Help. They’re constantly trying to make me try crack cocaine on the front page.
"when the tokens run out, I'm basically done working."
Oh stop the drama. Open source models can handle 99% of your questions.
Given that it’s so easy, would you still do this same job if paid half as much?
Jobs will likely pay less as more people are enabled to create, especially if they don't need to be able to look under the hood
It's really not clear. We might all become unemployable. But as coders become more powerful, they can do more, which makes them more valuable, if they or the businesses employing them can invent work to do.
If all we can do is compete for the same fixed amount of work, though, it does look bleak.
No, I wouldn't. But most people won't have that choice; it doesn't work that way.
Companies could fire expensive engineers then just hire cheaper ones boosted with AI agents.
Well, I wouldn’t have a different job that would pay me more… so yes?
Eh, this kind of FUD needs to stop, because it is kind of normal and expected, and in fact good, to have a relationship like this with technology.
I would agree that taking a walk is a good thing to do when your tools go down, and in some ways it's similar to what we would do if the power or wifi were cut off.
So, yes, it's just another technology we're coming to rely on in a very deep way. The whiplash is real, though, and it feels like it should be pointed out that this dependency we are taking on has downsides.