Comment by Alex_L_Wood

17 days ago

>If you haven't spent at least $1,000 on tokens today per human engineer, your software factory has room for improvement

…What am I even reading? Am I crazy for thinking this is a crazy thing to say, or is it actually crazy?

I'm one of the StrongDM trio behind this tenet. The core claim is simple: it's easy to spend $1k/day on tokens, but hard (even with three people) to do it in a way that stays reliably productive.

$1k per day, 50 work weeks, 5 days a week → $250k a year. That is, to be worth it, the AI should work as well as an engineer who costs the company $250k. Between taxes, social security, and the cost of office space, that engineer would be paid, say, $170-180k a year, like an average-level senior software engineer in the US.
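For what it's worth, the napkin math checks out (a trivial sketch; the 5-day/50-week figures are the comment's own simplification):

```python
# Napkin math from the comment above (figures are the comment's own simplification).
daily_token_spend = 1_000   # dollars per engineer per day
days_per_week = 5
weeks_per_year = 50         # rounding that ignores holidays and vacation

annual_spend = daily_token_spend * days_per_week * weeks_per_year
print(f"${annual_spend:,}/year")  # $250,000/year
```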

This is not an outrageous amount of money, if the productivity is there. More likely the AI would work like two $90k junior engineers, but without a need to pay for a vacation, office space, social security, etc. If the productivity ends up higher than this, it's pure profit; I suppose this is their bet.

The human engineer would be like a tech lead guiding a team of juniors, only designing plans and checking results above the level of the code proper, except in exceptional cases, like when a human engineer would look at the assembly code a compiler has produced.

This does sound exaggeratedly optimistic now, but does not sound crazy.

  • It’s a $90k engineer that sometimes acts like a vandal, who never has thoughts like “this seems to be a bad way to go. Let me ask the boss” or “you know, I was thinking. Shouldn’t we try to extract this code into a reusable component?” The worst developers I’ve worked with have better instincts for what’s valuable. I wish it would stop with “the simplest way to resolve this is X little shortcut” -> boom.

    It basically stumbles around generating tokens within the bounds (usually) of your prompt, and rarely stops to think. Goal is token generation, baby. Not careful evaluation. I have to keep forcing it to stop creating magic inline strings and rather use constants or config, even though those instructions are all over my Claude.md and I’m using the top model. It loves to take shortcuts that save GPU but cost me time and money to wrestle back to rational. “These issues weren’t created by me in this chat right now so I’ll ignore them and ship it.” No, fix all the bugs. That’s the job.

    Still, I love it. I can hand code the bits I want to, let it fly with the bits I don’t. I can try something new in a separate CLI tab while others are spinning. Cost to experiment drops massively.

    • Claude Code has those "thoughts" you say it never has. In plan mode, it isn't uncommon for it to ask you: do you want to do this the quick and simple way, or would you prefer to "extract this code into a reusable component"? It also will back out and say "Actually, this is getting messy, 'boss', what do you think?"

      I could just be lucky that I work in a field with a thorough specification and numerous reference implementations.

      4 replies →

    • > sometimes acts like a vandal

      I see you don't have experience working with a large number of real life humans.

  • $250k a year, for now. What's to stop Anthropic from doubling the price if your entire business depends on it? What are you gonna do, close shop?

    • Yeah this is just trading largely known & controllable labour management risks for some fun new unknown software ones.

      You can negotiate with your human engineers on comp; you may not be able to negotiate with as much power against Anthropic et al. (or stop them if they start to change their services for the worse).

    • If this is successful, a supply shock will kick in (because of energy/GPU constraints) and we could easily see a 2-4x price increase, maybe more if the market will accept it. That's before taking into account current VC subsidies.

    • I mean… What does your shop even do? Write software? Why? The whole premise is that it’s now easily cloned.

  • >> $170-180k a year, like an average-level senior software engineer in the US.

    I hear things like this all the time, but outside of a few major centers it's just not the norm. And no companies are spending anything like $1k / month on remote work environments.

  • I think that is easy to understand for a lot of people but I will spell it out.

    This looks like AI-company marketing, something along the lines of "1+1" or "buy 3 for 2" deals.

    Money you don't spend on tokens is the only money saved, period.

    With employees you have to pay them anyway; you can't just say "these requirements make no sense, park it for two days until I get them right".

    You would have to be damn sure that you are doing the right thing to burn $1k a day on tokens.

    With humans I can see many reasons why you would pay anyway, and it is on you to provide sensible requirements to be built and to make use of employees' time.

    • OK, but who is saying that to the llm? Another llm?

      We got feedback in this thread from someone who supposedly knows Rust about common anti-patterns, and someone from the company came back with "yeah, that's a problem, we'll have agents fix it" [0].

      Agents are obviously still too stupid to have the metacognition needed to decide when to refactor, even at $1,000 per day per person. So we still need the butts in seats. So we're back at the idea of centaurs. Then you have to make the case that paying an AI more than a programmer is worth it. [1]

      [0] which has been my exact experience with multi-agent code bases I've burned money on.

      [1] which in my experience isn't when you know how to edit text and send API requests from your text editor.

  • That nobody wants to actually do it is already a problem, but the basically true thing is that somebody has to pay those $90k junior engineers for a couple of years to turn them into senior engineers.

    There seem to be plenty of people willing to pay the AI to do that junior-engineer work, so wouldn't it make sense to defect and just wait until it has gained enough experience to do the senior engineer work?

  • Assume current prices are heavily subsidised (VC money) and there is a supply shock coming (because we don't have enough GPUs/energy). If that leads to double the price, that means $500k/year, and if we see a 4x price increase, that's $1M/year.

    Suddenly, it starts to look precarious. That would be my concern anyway.

  • > 50 work weeks

    What dystopia is this?

    • I took it as napkin rounding of 365/7, because that's the floor you pay an employee regardless of vacation time (in places like my country you'd add an extra month plus a prorated amount based on how many vacation days the employee has). So it's not that people work 50 weeks per year; it's just a reasonable approximation of the cost to the hiring company.

    • This is a simplification to make the calculation more straightforward. But a typical US workplace honors about 11 to 13 federal holidays. I assume that an AI does not need a vacation, but can't work 2 days straight autonomously when its human handlers are enjoying a weekend.

      2 replies →

  • It doesn't say 1k per day. Not saying I agree with the statement per se, but it's a much weaker statement than that.

    • "If you haven't spent at least $1,000 on tokens today per human engineer, your software factory has room for improvement" - how exactly is that a weaker statement?

      4 replies →
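The price-doubling worries in the replies above are just arithmetic on the thread's $250k/year baseline; a minimal sketch (the 2x/4x multipliers are the commenters' hypotheticals, not observed prices):

```python
# Hypothetical price-increase scenarios from the thread, applied to the baseline.
base_annual = 250_000  # $1k/day x 5 days x 50 weeks, from the parent comment
for multiplier in (2, 4):
    print(f"{multiplier}x -> ${base_annual * multiplier:,}/year")
# 2x -> $500,000/year
# 4x -> $1,000,000/year
```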

Meanwhile, me

> $20/month Claude sub

> $20/month OpenAI sub

> When Claude Code runs out, switch to Codex

> When Codex runs out, go for a walk with the dogs or read a book

I'm not an accelerationist singularity neohuman. Oh well, I still get plenty done

  • My gemini subscription is all I need. It's like an interactive stack overflow that doesn't yell at you and answers your questions.

    I was working on a problem and having trouble understanding an old node splitting paper, and Gemini pointed me to a better paper with a more efficient algorithm, then explained how it worked, then generated test code. It's fantastic. I'm not saying it's better than the other LLMs, but having a little oracle available online is a great boost to learning and debugging.

  • The openrouter/free endpoint may make your dog unfit. You're welcome. Sorry doggo.

  • They're different beasts on the API; the extra context makes a huge difference. Unless there's something else out there I've missed, which at the speed things move these days is always a possibility.

It's crazy if you're an engineer. It's pretty common for middle managers to quantify "progress" in terms of "spend".

My boss's boss's boss likes to claim that we're successfully moving to the cloud because the cost is increasing year over year.

  • Growth will be proportional to spend. You can cut waste later and celebrate efficiency. So when growing there isn't much incentive to do it efficiently. You are just robbing yourself of a potential future victory. Also it's legitimately difficult to maximize growth while prioritizing efficiency. It's like how a body builder cycles between bulking and cutting. For mid to long term outlooks it's probably the best strategy.

    • Is this satire? Throwing money into a bottomless pit is the opposite of success. Growth is proportional to spend if and only if spend is proportional to growth. You can't just assume it's the case.

The margins on software are incredibly high and perhaps this is just the cost of having maintainable output.

Also I think you have to consider development time.

If someone creates a SaaS product then it can be trivially cloned in a small timeframe. So the moat that normally exists becomes nonexistent. Therefore, to stay ahead or to catch up, it's going to cost money.

In a way it’s similar to the way FAANG was buying up all the good engineers. It starves potential, lower-capitalised but more nimble competitors of the resources they need to compete.

I am not sure why people are getting hung up on the price, i.e. this: "They have the gall to pitch/attention-seek a 1$/day with possibly little/no product". The price can drop, TBH, even though there is some correlation between spend and per-capita output.

The more nuanced "outrage" here is how taking humans out of the agent loop is, as I have commented elsewhere, quite flawed TBH and very bold to say the least. And while every VC is salivating, more attention should instead be given to asking all the AI Agent PMs, Tech Leads of AI, or whatever that title is, some of the following:

- What _workflow_ are you building?

- What is your success with your team/new hires in having them use this?

- What's your RoC for investment in the workflow?

- How varied is this workflow? Is every company just building their own workflows, or are there patterns emerging in agent orchestration that are useful?

I do think it's a crazy thing to say, but not because of the amount. I mean, if putting in more money produces more value, why not $10,000 a day or a million? Does adding more tokens after $1000 stop working for some reason?

Forget about agents or AI: the amount of money that it makes sense to spend on software engineering for a particular company is highly dependent on the specifics of that company.

Perhaps for them this number makes sense, but it's kind of crazy to extrapolate that to everyone as some kind of benchmark. It would be far more interesting to hear how they place a value on the code produced.

I have a harsher take down-thread, but the simulation testing (what they call DTU) is actually interesting and a useful insight into grounding agent behavior.

My favorite conspiracy theory is that these projects/blog posts are secretly backed by big-AI tech companies, to offset their staggering losses by convincing executives to shovel pools of money into AI tools.

  • They have to be. And the others writing this stuff likely do not deal with real systems with thousands of customers, a team who needs to get paid, and a reputation to uphold. Fatal errors that cause permanent damage to a business are unacceptable.

    Designing reliable, stable, and correct systems is already a high level task. When you actually need to write the code for it, it's not a lot and you should write it with precision. When creating novel or differently complex systems, you should (or need to) be doing it yourself anyway.

    • I think there's a fundamental misunderstanding where executives mistake software engineering for "code monkey with a fancy inflated title"

      And coding agents are making that disconnect painfully obvious

  • Is it really a secret, when Anthropic posted a project of building a C compiler totally from scratch for a $20k-equivalent token spend, as an official article on their own blog? $20k is quite insane for such a self-contained project; if that's genuinely the amount these tools require, that's literally the best possible argument for running something open and leveraging competitive 3rd-party inference.

  • Like this?

    https://www.cnbc.com/2026/02/06/google-microsoft-pay-creator...

    • There are about a hundred new posts on Reddit every day that I'm sure are also paid for from this same pile of cash.

      It feels like it really started in earnest around October.

      1 reply →

    • Provided the sponsored content is labelled "sponsored content" this is above board.

      If it's not labelled it's in violation of FTC regulations, for both the companies and the individuals.

      [ That said... I'm surprised at this example on LinkedIn that was linked to by the Washington Post - https://www.linkedin.com/posts/meganlieu_claudepartner-activ... - the only hint it's sponsored content is the #ClaudePartner hashtag at the end, is that enough? Oh wait! There's text under the profile that says "Brand partnership" which I missed, I guess that's the LinkedIn standard for this? Feels a bit weak to me! https://www.linkedin.com/help/linkedin/answer/a1627083 ]

  • I'm also convinced that any post in an AI thread that ends with "What a time to be alive!" is a bot. Seriously, look in any thread and you'll see it.

  • The implication of "you have to have spent $1,000 in tokens per engineer, or you have failed" is that you must fire any engineer who works fine by themselves or with other people and who doesn't need the LLM crutch (at least if you don't want to have "failed" according to some random guy's opinion).

    Getting rid of such naysayers is important for the industry.

  • Slop influencers like Peter Steinberger get paid to promote AI vibe coding startups and the agentic token burning hype. Ironically they're so deep into the impulsivity of it all that they can't even hide it. The latest frontier models all continue to suffer from hallucinations and slop at scale.

      - Factory, unconvinced. Their marketing videos are just too cringe, and any company that tries to get my attentions with free tokens in my DMs reduce my respect for them. If you're that good, you don't need to convince me by giving me free stuff. Additionally, some posts on Twitter about it have this paid influencer smell. If you use claude code tho, you'll feel right at home with the [signature flicker](https://x.com/badlogicgames/status/1977103325192667323).
    
    
      + Factory, unconvinced. Their videos are a bit cringe, I do hear good things in my timeline about it tho, even if images aren't supported (yet) and they have the [signature flicker](https://x.com/badlogicgames/status/1977103325192667323).
    

    https://github.com/steipete/steipete.me/commit/725a3cb372bc2...

  • Secretly? Most blog posts praising coding agents put something like 'I use $200 Claude subscription' in bold in 2nd-3rd paragraph.

  • I don't think that's really a conspiracy theory lol. As long as you're playing Money Chicken, why not toss some at some influencers to keep driving up the FOMO?

It is not crazy.

Each engineer is very valuable. LLM tokens are cheap. You scale up inference compute, and your engineers can focus on higher-order stuff, not reviewing incorrect responses, validating bugs, and whatnot.

It’s shocking to me that there isn’t a $2,000 / $20,000 per month subscription tier for coding assistants. I’ve always in my mind called this ExecGPT since around 2021, but the notion was that executives have teams that support them to be high functioning and high leverage, responsible for quality of thinking and decision making, not quantity of work output.

And the value prop existed and continues to exist even as the models get smarter, even with Opus 4.6.

  • > It’s shocking to me that there isn’t a $2,000 / $20,000 per month subscription tier for coding assistants.

    What would be the benefit for the providers in offering this over just having those people use the API? I don't think it makes any sense for them.

    • Most people don’t know how to build an effective harness that is well optimized and battle tested to work for a wide range of scenarios. But they can happily and easily use a finished service if they have the money for it.

      1 reply →

Yeah, it's hard to read the article without getting a cringy feeling of second hand embarrassment. The setup is weird too, in that it seems to imply that the little snippets of "wisdom" should be used as prompts to an LLM to come to their same conclusions, when of course this style of prompt will reliably produce congratulatory dreck.

Setting aside the absurdity of using dollars-per-day spent on tokens as the new lines of code per day, have they not heard of mocks or simulation testing? These are long-proven techniques, but they appear bent on taking credit for some kind of revolutionary discovery by recasting these standard techniques as a Digital Twin Universe.

One positive(?) thing I'll say is that this fits well with my experience of people who like to talk about software factories (or digital factories), but at least they're up front about the massive cost of this type of approach - whereas "digital factories" are typically cast as a miracle cure that will reduce costs dramatically somehow (once it's eventually done correctly, of course).

Hard pass.

  • Yeah, getting strong Devin vibes here. In some ways they were ahead of their time; in other ways, agents have become commoditized and their platform is arguably obsolete. I have a strong feeling the same will happen with "software factories".

This is some dumb boast/signaling that they're more AI-advanced than you are.

The desperation to be an AI thought leader is reaching Instagram influencer levels of deranged attention seeking.

It's not so much crazy as very lame and stupid and dumb. The moment has allowed people doing dumb things to somehow grab the attention of many in the industry for a few moments. There's nothing "there".

  • When I see comments like this I wonder if the commenter has used LLMs for software development recently. (Genuine question).

    • What's that got to do with burning thousands of dollars producing nothing of value, and a lot of useless code created by agents you turn loose while advocating not reviewing it? "Used LLMs" is a non-sequitur here. What did your agent build with you spending thousands of dollars and not reviewing the code? If you have something high quality to show that followed this process link it here, else what are you even talking about?