
Comment by ryandrake

19 hours ago

Not just Amazon, either. It feels like all of big tech (and some smaller firms) have simultaneously gone insane. Imagine if your CEO woke up one day and told the company: "We need to encourage travel spending. Please book as many business trips as you can, and spend as much money as possible. Fly first class to our satellite offices! Take limos instead of Ubers! Eat at fine restaurants! Make sure you are constantly traveling. In fact, we are going to make Travel Spending part of your annual performance review: If you don't spend enough on business travel, you'll get a low rating!"

We are living in a totally bonkers time.

This is what inspired me to build my new CLI tool, Burn, Baby, Burn (https://news.ycombinator.com/item?id=48151287).

  • Just sent it to some developers who could really benefit from this! Please let us know when you have Codex and Gemini versions ready to rumble.

    • Sorry, it will be a while. We're currently building out enterprise features like SSO/SAML support, role-based burn access, and a carbon offset marketplace. As you can imagine, we're burning a lot of tokens to get these out, but actual productivity isn't up as much as you'd think.

      1 reply →

    • I want an in-browser Gemini version. For some reason my company doesn't count Gemini CLI use. I guess I'm supposed to copy code between my browser and my editor.

  • Any plans for a distributed deployment via Cloudflare Workers? I'm not sure this thing is powerful enough for my use case.

    • Yeah, lots of enterprise features in the works, but first I need to raise money at a $1B+ valuation (this might seem high for a project that started 4 hours ago, but it's actually very low for the project that will soon be the #1 consumer of tokens on the planet).

      2 replies →

  • Won't the company audit the requests to AI and see you're sending a bunch of BS?

    • > Won't the company audit the requests to AI and see you're sending a bunch of BS?

      Shouldn't be too hard to game. Version 2 uses the M365 MCP server to load up your email and iterate over all the messages, summarizing them over and over.
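That "version 2" loop is simple enough to sketch. This is purely hypothetical: the M365/MCP mail fetch and the LLM call are stubbed out, since the only point is the shape of the burn loop.

```python
# Hypothetical sketch of the "version 2" burn loop. fetch_messages() and
# summarize() are stand-ins for the M365 MCP mail fetch and an LLM call;
# here they are stubs so the loop itself is runnable.

def fetch_messages():
    # Stub: a real version would page through the inbox via the MCP server.
    return ["status update", "meeting notes", "quarterly report"]

def summarize(text):
    # Stub: a real version would spend (many) tokens on an LLM call here.
    return f"summary of: {text}"

def burn(passes=3):
    """Summarize every message `passes` times, feeding each summary back in."""
    tokens_burned = 0
    for msg in fetch_messages():
        current = msg
        for _ in range(passes):
            current = summarize(current)           # summary of a summary of...
            tokens_burned += len(current.split())  # crude token estimate
    return tokens_burned
```

Each pass feeds the previous summary back in, so usage grows with every iteration — which is exactly the point if tokens consumed is the metric.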

I know someone who was told to try to use AI more on the job, so they created an agent to just burn tokens and ended up using about 10x what the next-highest employee used. Buddy expected to catch shit but instead got an accolade and was asked to give a short talk to the other employees about how they could match his success.

  • In my first job ever, I used to get my work done on time and leave. There were a few people who’d stay in the office until late and show up on weekends. Same output, but they got the promotions and my bonus got prorated.

    This is the same thing.

    • At least this one doesn't require spending the manhours moving dung from pocket to pocket, now we finally get credit for automating it!

    • While output may have been part of it, it's possible that by staying later (and working longer), they had better relationships with upper management.

      "I used to get my work done on time and leave"

      This sounds like you just wanted to get your work done and not foster any work relationships. This is fine, but you will not get promoted this way (as you've seen).

      Moving up in a company is 30% work and 70% networking/being likeable/noticed.

      I stopped that nonsense years ago. I work for myself now as a consultant. If I work more, I get paid more.

      9 replies →

  • That’s the part I don’t get: Engineers are smart enough to ask an LLM to ask other LLMs to ask other LLMs to load the policy manual then count the R’s in “LLM fork bomb”.

    Additional story points completed per week, versus token-dollar spent, or some such combo would seem more sane.

    But maybe they aren’t really tracking productivity, so tracking tokens is all they have? … I dunno which part of that is dumber.

    • We never figured out how to track productivity anyway. Only macro-level success in achieving measurable goals. Any AI metric besides "are similar goals being met more quickly" is people encouraging specific behaviors decided a priori.

  • I call BS on this story.

At my company we were told AI spend was part of perf review and that the "singularity" had happened. Now 20% of our infrastructure spend is tokens. The average number of pull requests per dev per week increased with all this spend. From 4.2 to 5.1. And that includes a huge chunk of PRs that are just agents changing a line or two in a config. It's all magical thinking.

  • Since you're an idiot or fired if you point this out, just collect the money, man.

    It's their money. They want to do stupid things? So be it.

  • > The average number of pull requests per dev per week increased with all this spend. From 4.2 to 5.1.

    That's it? I've seen people that are consistently putting out four PRs per day. I don't/can't even code review them. So much of what we do is now just rubber-stamping PRs. We were even told that we shouldn't be writing code by hand anymore.

    • My main problem putting out that many PRs per day is getting them approved and merged back into main so I can start the next one.

      I guess “stacked” PRs are a thing now? I haven’t figured out the process that avoids making the merges for stacked PRs a complete mess, though.

  • Wow, the Singularity happened and nobody bothered to tell me about it?! Vernor Vinge and I.J. Good must be rolling in their graves fast enough to rip a hole in spacetime. Allow me to coin a term for this: Singflation.

  • It's definitely not. It's a fundamental shift in how we interact with computers.

    It's a tractors-on-farms kind of moment.

    • Agreed, people confuse the (totally expected) bumps and bruises of early adoption with "this technology is useless."

      The Wright Brothers couldn't cross the Atlantic in their first flier and plenty of subsequent designs crashed and burned (literally). But now air travel is commonplace. Same will happen with AI, we just have to get past these early pains.

My dad worked at a company that had their own travel agency (early 90s when you needed a travel agent for reasons that no longer apply), and he was often booked on the more expensive flight because the travel agency made more money. More than once he could have got first class for less on a different flight but company policy didn't allow him to fly first class.

We have always been living in bonkers times.

  • Most big companies still have travel agencies/companies manage their corporate travel. I can’t remember who we used when I was at Amazon, but I made a similar complaint to my manager once given I could fly cheaper in a higher class on a different airline (also one I had heaps of points with so I would have preferred it because I’d be able to upgrade further and/or use the lounge).

    Turns out the price I saw in the booking portal isn’t actually what Amazon paid. It’s kinda more like a rack rate listing. But then there’s all kinds of discounting/cash back that happens on the backend based on the amount of travel booked each month.

  • I used to know someone whose parent worked at travel agency (also 90s) and their whole immediate family could book trips wherever, but only economy class.

> It feels like all of big tech (and some smaller firms) have simultaneously gone insane.

Some companies might just have been scammed by the marketing that told them AI would make all their employees 10,000x more productive and save them billions; when that didn't happen, the assumption was that it was because employees weren't using the magical AI as often as they should be.

Other companies, especially those working on their own AI products, might want employees to use AI as much as possible because they hope it will provide them with the training data they'll need to eventually replace most or all of those employees with the AI. Punishing workers who refuse to train their AI replacement might make sense to them because even though it's costly right now they expect the savings down the road to be much much greater.

Exactly this.

And the fact that it is an industry-wide meme at this point makes bright red flashing lights and klaxons go off in my mind that a catastrophic reckoning can't be too far off. There's not enough money in the world to keep this up for too long.

Bragging about token usage is like bragging about LoC written.

  • When I was at Amazon last year, the bragging (from the AI poo-bah in my section of Amazon, note) about AI included "look at the total line count of commits from the heaviest AI users!"

    So if AI screws something up and re-writes it and then screws it up again, needing another re-write, that counted as more positive than if it was done correctly, and simply, the first time.

    • This is like when the Pointy Haired Boss offers a bounty for fixing bugs and Wally pumps his fist and says “I’m gonna go code myself a Porsche!”

      1 reply →

  • It’s honestly 10x worse than LOC. At least in the human era LOC had correlation to shipping features.

    It’s more like bragging about compiler cycles spent.

    • I don't know where you're working but LLM enhanced development has skyrocketed our rate of feature development. As an example, a project roadmapped to take 7 months was delivered in only 4.5 because of CC/Codex.

      I'm confused how anyone could believe it isn't an enhancer, unless they have refused to use any of the technologies.

      3 replies →

Even as a very happy NVDA shareholder I agree with you. It's comical that managers are being so naïve as to think that you can crap out a dashboard of "tokens consumed per week" and get any useful signal at all from it, beyond learning who's not using AI.

Incompetent use of a coding agent, or just general shenanigans, can burn tokens all day but it's not going to get tickets done.

Just looking at the work output - how many story points, tickets, how many new bugs are opened, etc. has not become any less relevant a metric for productivity with AI. If you're a skilled and proper user of AI those numbers would be changing in the right direction, compared to before you had it.

  • > It's comical that managers are being so naïve as to think that you can crap out a dashboard of "tokens consumed per week" and get any useful signal at all from it, beyond learning who's not using AI.

    If some guy decides to spend a bunch of money bringing AI tools into the company things might get very uncomfortable for him if they're seeing zero return on that investment. He's sure not going to get recognition and a massive bonus for it. If on the other hand, he can put some numbers in a spreadsheet or powerpoint showing that employees are using AI all the time and profits are up again this quarter, maybe he can take some credit for that or at least keep his boss or the company's shareholders from questioning the wisdom of dumping so much cash into those AI products.

    • > things might get very uncomfortable for him if they're seeing zero return on that investment.

      > If on the other hand, he can put some numbers in a spreadsheet or powerpoint showing that employees are using AI all the time and profits are up again this quarter
      That's exactly what I see first-hand: no actual measure of dollars in vs. dollars out, just x employees generating y PRs with z% AI code + this quarter we made a profit = AI productivity boost...

      total brainrot

      2 replies →

  • All those numbers are equally gameable and terrible metrics for productivity. With any of those, as with AI spending, you've got to look at actual results qualitatively. There's no shortcut.

    • The eternal evergreen lesson of managing software developers, and knowledge workers more generally.

I think a lot of these execs have equity in Anthropic... and the dumb ones that don't are just "keeping up with the Joneses" so to speak.

It's more like "We really value face-to-face interaction, so we're going to track that with your total travel spend. We don't want to get in the way, so there's no budget."

This would be hilarious if a bunch of companies did not already do exactly this with exec travel. And academics do this all the time when travel has to be funded from grants.

One reason it works out like that for travel funding is that it’s often the ‘use it or lose it’ kind of funding. If you do not use all of the funds allotted, you can’t ask for more and could realistically get less.

What if instead the manager was saying: “hey team I need you to all buy as many lotto tickets as possible!”

I feel like that’s a better analogy. Some charlatans are buying fake tickets, but as a manager who wants to win big, I’m ok with some chicanery so long as the average person is trying to honestly meet my directive.

It seems like a natural result. People have been trying to use dashboards / metrics to roll up / indicate how well teams and individuals have been doing for a long time. Therefore, "part 1" was already in place. Now, something even easier to track is available (token usage). So, just throw token usage on the dashboard and tell people that higher is better - what other outcome would you possibly expect?

  • > dashboards / metrics to roll up / indicate how well teams and individuals have been doing for a long time

    I'm actually a little curious about how long it has been. Bad managers have always prioritized irrelevant metrics, of course, but I have a feeling (backed by no data, just vibes) that management in general crossed a point of no return as soon as "data-driven" became a cross-industry buzzword.

    Like, I vaguely remember a time when consumer interactions didn't always come with a request to fill out a survey (with the results getting turned into a number and fed into a dashboard somewhere). And then that changed, and now everything must be turned into a number and that number must go up.

    • "Data driven" essentially means "scalar driven". There is nothing wrong with it if your chosen scalar is a proxy for anything that matters. Of course, usually no one can explain this mapping.

It might be an ROI calculation, e.g. some people will waste tokens, but if it means someone else feels empowered to make something awesome or impactful, it will have been worth it.

I kind of get what they're thinking in trying to make sure all engineers use AI. For myself, and for the engineers working with me, I saw everyone go through an initial aversion and resistance to AI, and then an instant productivity boost when we started using them. So there's definitely a good reason to get everybody to start using AI. You don't want a good engineer resisting AI indefinitely if you know it will make them more productive.

Incentivizing people who are already using AI to use as many tokens as possible does seem a little crazy, though.

  • It's worth reflecting on why it's so hard to convince holdouts to discover how AI might help them. The fundamental issue is that there really aren't many convincing demonstrations that holdouts can relate to, and there remains basically no evidence of real value gained.

    Users attest to higher productivity and point to material but intermediate factors like token use, generated lines of code, pr counts, etc, but there doesn't seem to be a convincing revolution in the quantity or quality of mature software being delivered.

    Combine those puzzling impressions of outcomes with a sense, for many, that they don't have a personal problem that warrants a new tool, and you end up with a pretty earnest and defensible indifference.

    To get holdout engineers using AI, the industry needs to focus on demonstrating relatable workflow improvements and practical improvements to finished work product. Instead, policies like token-use incentives just rely on luring them into pulling the slot machine handle with the expectation that once they do, they'll join the cadre of other converts who justify their transition with subjective improvements and intermediate metrics.

    • Software engineering organizations have agreed for decades that a meaningful measure of developer productivity is a literal impossibility.

      So now we introduce AI and tell every developer that they need to be 20% more effective. 20% of what?

    • Unfortunately, a convincing demonstration to convince a skeptical colleague would require measuring developer productivity.

      Among skeptics, I've only seen people won over by using it themselves, because when they use AI for their own work, they invest the time to review the code, understand it, and assess its quality by their own standards. That's how people learn to trust AI coding assistance.

      2 replies →

    • Here's one selling factor from what I'm experiencing right now:

      Others will use AI, and it will make your life miserable. You need to know enough about AI to be able to fight back.

      The experience: one employee, self-selected, assigned themselves to the task of configuring integration with a MySQL HA deployment. They produced a mountain of code in a short month (we are talking close to a hundred thousand lines of Python). And they decided to go with Oracle's tools, instead of Galera...

      Everything this employee produces is, quite obviously, AI-generated. Also, in the initial stages, they worked on their project completely alone: no reviews. To give some sense of the size of this insanity: one of the configuration scripts I'm working with now is 9K+ lines of Python that's supposed to run from `mysqlsh`. About half of it is module-level variables.

      It will take many months to restructure this "prototype" by hand. It's a pain to read and to navigate. GitLab's UI has a perceivable lag just trying to display the script, forget about diffs. I will absolutely need AI to try to make sense of it (I'm not allowed to fix it). And if it ever comes to fixing, I can't imagine that being done without automation of some sort.

      Unfortunately, AI generates problems that, sometimes, only AI can fix. :(

    • > It's worth reflecting on why it's so hard to convince hold outs to discover how AI might help them

      I have. My conclusion is... humans are deeply irrational when it comes to rapid change.

      Egg or olive oil prices spike, humans oust an entire government.

      The rate of immigration spikes, humans throw them into camps and break useful treaties.

      Most of the resistance I've observed amongst engineers is resistance to change generally.

      And then digging in when challenged.

      6 replies →

  • There is a limit somewhere, but I keep finding more and more ways to use AI.

    Not just coding, but things like "here is my team's mandate; go through all my company's Slack channels, Linear tasks, Notion pages, and recent merges in git, and summarize any work other teams are doing that intersects with my team's work."

    That'll burn a lot of tokens.

    Set that up to run once or twice a week and give a report.

    • Sure, finding ways to burn tokens is not hard. Even finding ways to burn tokens on things (like your example) which are actually useful is not hard. But what is the ROI on that from the company's perspective? I mean, you could have also hired an intern to collate this report every week. But if you went to your boss and asked to hire someone to do something, they would, reasonably, ask what the value of that thing is and whether it justifies more headcount. Yet we're in this bizarro world where the bosses are basically saying "go hire more people, even if you don't have specific high-value things for them to do. Just create make-work jobs for them!" It's wild.

  • I've been using it for many months. I still haven't gotten any kind of boost. If I'm going to get ranked on token use though, best believe I'll be using the optimal quantity of tokens.

  • Yeah management should make clear they just don’t want to see AI use of zero in a given week. Not “more tokens consumed the better“ on performance reviews.

  • People are not actually more productive with LLMs, they are less productive. The data has shown this. So there's no reason to push people into using them - it's all just hype and magical thinking.

Look it might seem silly, but the point is to get all our employees to be travel-pilled. They just don't know how great travel is yet.

Management has confided in me that token usage is a secret performance metric. At the same time I'm getting emails from infrastructure people about prompting techniques to get LLMs to speak more concisely to save the company money lmao. I'd prefer a video essay mode that bulks everything up.

Two years ago everyone would have told you that 'impact' was the way to measure people, and been aghast at tracking inputs like hours. Say what you will, but at least showing up at 8 didn't cost the company money. Today I see people spending time and money vibe coding tools in search of a problem, just to spend tokens and demonstrate that they're on board with the singularity.

Spending is just a proxy for AI use here. This is nothing new. I remember past CEOs saying “Ajax! Ajax! Ajax!”, “Big data! Big data! Why aren’t we using Big data!”

AI is just the next tool to over spend on in poor ways, realise it’s shit and spend a ton more money trying to roll it back.

The situations where it shines will continue to use it when the hype dies down.

> Imagine if your CEO woke up one day and told the company: "We need to encourage travel spending. Please book as many business trips as you can, and spend as much money as possible.

I had a manager like this once. He didn't last very long, but it was without a doubt the most fun six months of my career.

It’s preposterous: companies are blindly funding slop, and the product is fool’s gold.

  • It's the state of modern capitalism. Money must flow from one entity to another even if nothing of tangible value is produced. The flows of money prove the growth of both businesses.

    • If I spend money on tokens but my revenue doesn't increase, nor do I get any operating efficiency gains - where is my growth then buddy boyo?

      The growth in revenue (since their earnings are negative) only shows up for the model producer.

You mean like using lines of code as a metric to rank engineers [1]?

Managers love metrics. Bad managers particularly love metrics. Tokens used was almost the obvious bad metric that was going to be used.

I would argue that tokens used has actually exposed a useful metric: any manager who focused on this, demanded this or ranked based on this should be fired, for being a bad manager.

[1]: https://evan-soohoo.medium.com/did-elon-musk-really-fire-peo...

  • In many, many, many cases it's not the manager choosing to do that. It's our brilliant job-creator class demanding that he does.

    • Bad manager: "I have to give you a bad rating because of the company-wide LoC metric."

      Good manager (to good engineer): "can you please churn some code to update your LoC metric so I don't have to give you a worse rating?"

      I'm sorry but any manager who just claims they're a passive victim of company-wide mandates is a lazy and bad manager.

      1 reply →

  • LoC can occasionally give you signal. For instance, imagine you are joining a new team or company so you don't know how much oversight your predecessor did. If you ask an engineer how they spend most of their time and they say "Mostly just writing code" and you look at GitHub and it says they've made 3 minor commits in the past quarter, that person is lying and your predecessor was incompetent (quite possibly both of them have been MIA from their responsibilities for months).

    No, I'm not talking about the engineer who can point to significant contributions outside of code: writing technical specs, leading architecture discussions, etc. I'm talking about the ones who just say they're just coding, but are actually not working at all.

    TL;DR: LoC, commit count, etc. can be used only to flag likely cases of quiet quitting for review.

You'd be surprised...

I worked for an international (mothership in the UK, later acquired by the US) company, which had... sort of a similar policy.

So, the (mothership) company acquired a lot of satellite companies, all in the banking business, all over the world. Then they figured out their CEO was corrupt; he got in trouble with the law and got kicked out. While they were waiting for the new "real" CEO to step in, they let an "interim" CEO take his place.

The new (interim) CEO didn't seem to have a clue about the business she was supposed to run, nor did she care. She knew her time was running out, and she figured she'd spend it traveling the world and partaking in fine dining in every corner of the world the company's tentacles could reach. But, to make it seem more plausible, she created a sort of "experience exchange" policy, which sent random troupes of select individuals from different branches of the company to "exchange experience" with another similarly randomly assembled troupe. Of course, the company picked up the bill for lodging and dining.

Our inconsequential branch in Israel saw a pilgrimage of high-ranking banking managers from all over the world, but mostly from the wealthier parts of it. Some didn't even bother to show up at the office, and proceeded straight to the banquet hall of the most expensive hotel on the Tel Aviv beach.

To be fair though, the interim CEO got the boot even before her time was supposed to end, but it was serendipitously close to the acquisition by the US company, and so she was let go as part of a "restructuring" and "optimization"... but it was a crazy year!

Because it's come to CFOs as "free debt", aka fiat printing. They need to spend this free fiat to keep the bubble going. I'm sure some investment banking team internally assured them of this, too. Trillion-dollar institutions have access to the free printer now; you and I don't. This has been a different world since the unlimited printer started in 2020. All debt math is fake now because they can create fiat money out of nothing, literally.

I wonder where in business school they teach you to "measure inputs and try to maximize them", because that's basically what's happening.

The most important part being:

"Because we FEEL this will make you more productive and we will make more money!"

No evidence but more Lines of Code...

IMO, the investors behind AI play the Uber game: they subsidise the AI costs and inject it into all facets of society they can get their hands on. They can tell the execs to increase AI usage at any cost. Their bet is that we'll become AI addicts with atrophied brains before they run out of money.

Also, don't forget that their datacenters will burn our electricity and boil our rivers at rates much cheaper than what we are billed in our homes. So while you're happy generating mountains of AI slop, somewhere there is a datacenter boiling a river.

I'd compare this to a new patented formula of water that nobody asked for, where the patent owners are trying to replace the entire water supply with their crap before we wake up.

  • No need to invoke a hypothetical water example, just look to how Nestlé pushed baby formula in developing countries¹:

    >For example, IBFAN claims that Nestlé distributes free formula samples to hospitals and maternity wards; after leaving the hospital, the formula is no longer free, but because the supplementation has interfered with lactation, the family must continue to buy the formula.

    1: https://en.wikipedia.org/wiki/1977_Nestl%C3%A9_boycott

I've definitely been in situations where managers tell me to "spend X amount before the end of the year." They don't want higher ups to think they can cut our budget.

It's as if a class-based society materialized within IT. And the manager class collectively pushes the narrative of AI replacing ICs.

Note that it has beaten capitalism: making rational choices to increase earnings has lost to this AI dream.

  • Note that propagandizing managers was a rational choice by AI companies to increase the revenue of AI companies.

I'm making sure to use the most expensive model possible for the stupidest shit constantly. They asked for it!

Nonsense. It’s a little bit of a loss leader so devs are hooked on it and it’s considered incredibly unproductive to work without one. Then they will just have 10 people’s jobs replaced with one guy.

If we suddenly went from rail travel to jets that's exactly what would happen. We'd go from 0 to all the business flights that happen today. Everyone would be under enormous pressure to not be a laggard.

  • I watched a former intelligence-agency person get interviewed on a YouTube talk show, and (tangential to the policy subject being discussed) they said that's basically how it was after 9/11: we couldn't onboard people fast enough to figure out how to spend the money, so while we were doing that we flew first class halfway around the world to waterboard people with bottled water. The people authorizing it didn't care. They were spending X to fight terrorism. The public was never gonna see the nitty-gritty breakdown.

    That's basically how it seems to be with AI. Just replace "spent X fighting terrorism" with "spent X implementing AI workflows" or "invested X in AI" or whatever. Nobody actually knows or cares just how far the dollars are going.

    • I think this version is getting very close to The Emperor's New Clothes Subscription in terms of how transparently the leadership are displaying their delusions.