Comment by i7l

3 days ago

The fact that management signed off on measuring AI use through token usage shows how incompetent management really is, including in allegedly technical companies like Amazon. Tokenmaxxing was an entirely expected and rational response. IOW: you measure employees in stupid ways, you're going to get stupid behaviour as a consequence.

One argument I have heard in favour of this is that management knew this would be a side effect, but that it's more important to have people engage with AI as much as possible simply to explore what is actually possible. You are effectively knowingly wasting money in the expectation that you might learn something useful that will be more valuable in the long run.

  • If companies are suddenly willing to spend money on letting their staff experiment, why not let them experiment with what they want to? They probably know more about technology than you do, otherwise you wouldn't need them.

  • > engage with AI as much as possible simply to explore what is actually possible

    "Research" isn't part of my job title. If you don't know what's possible then why are you deploying it? You should be telling _me_ what's possible. I mean, you _paid_ for it, how can you possibly not know what you were getting?

    > in the expectation that you might learn something useful that will be more valuable in the long run.

    "I'll take 'what even are profits?' for $200, Alex."

    • Hear hear.

      An overly generous steelman, in my opinion. Have 10% of your employees focus on finding ways to properly leverage the new technology - don't pressure 100% of your employees with bullshit metrics.

    • >"Research" isn't part of my job title. If you don't know what's possible then why are you deploying it? You should be telling _me_ what's possible. I mean, you _paid_ for it, how can you possibly not know what you were getting?

      Extremely weird take for a knowledge worker.

  • No, it's literally because some dumb manager read a blog where an influencer said that you ain't a real AI native and ain't worth shit unless your developers are spending $XXXX on tokens each day.

    It's that simple.

    (Never mind that these bloggers are just writing ad copy for cloud providers.)

  • In this instance, it seems like Amazon employees are wasting money exploring ways to waste money.

  • My questions for that approach are: Why treat AI as a special technology that needs enterprise-scale exploration to come up with a useful application? And why not take the alternative approach of identifying the subset of people who have indeed found solid uses and spread their best practices around?

    The top-down approach to encouraging (mandating?) AI usage strikes me as infantilizing to the workers, who are perfectly capable of choosing which tools they use and when.

    • Human nature?

      In the early nineties, it was common for experienced electrical engineers to keep on using schematic-entry digital design and look down on RTL and synthesis tools, despite the fact that the latter were already far more productive. At some point, management had to put their foot down and force everyone to switch to synthesis.

      It's not unreasonable to assume that many people are set in their ways and unwilling to change their behavior without a bit of a push.

    • Google's 20% project time was a good thing; sadly they don't even seem to do it anymore. For the bulk of corporate workers, this brief period where they get to play an AI token game is the only break from generating TPS reports all day long.

  • That still sounds like a dumb strategy. Or, more likely, post hoc rationalization.

    If you reward me for wasting tokens and punish me for not wasting them, I will maximally waste them and won't "explore how to make them useful". The latter wastes fewer tokens, and that is punished.

  • Exactly. That's the problem ICs don't want to admit.

    Managing a lot of people at scale is messy and you have to use crude solutions. It's impossible to know everything that's going on.

    If you were a manager you wouldn't do any better. Out of the crooked timber of humanity, no straight thing was ever made.

    • I think that's a convenient excuse for managers at the top to not have to deal with their own sub par middle and lower managers...

  • Are the people engaging though, or are they telling the AI "go do some busywork" and then minimizing that window and getting on with their job?

  • This is induced demand for AI to justify building more datacenters, which will bring AI costs down, and the idea is that will eventually bring demand up organically.

  • in short: dogfood the tool extra hard, and figure out what it can do.

    this means wasting a lot of metaphorical dog food, but now everyone will be 100% sure how it tastes.

    this then allows you to shift client expectations and alter offerings.

    ...

    or it's just dumb mgmt. but let's be charitable.

So my assessment of the current mania is that it’s basically a management variant of Pascal’s wager.

If you as a “leader” refuse to go along with the crowd and you’re right, then after the dust settles you look like someone who guessed right. Oh and now we’re in a recession so you are probably having a bad time regardless. You maybe get one promotion, congratulations.

If you refuse to go along with the crowd and you're wrong, you look like a Luddite, you probably got fired at some point along the way, and your reputation for judgement is hurt.

If you do go along with the crowd and the crowd is wrong, you are just in the same boat as everyone else. You are probably about as well off as if you had gone against the crowd and been right - possibly better, because it can take a while to be proven right, and you could be hurt in the meantime.

So, I think, once something like this picks up enough steam, it’s just logical on a per individual basis for everyone to go along with it, regardless of how they feel about it internally.

  • Yes, leaders can & should be expected to devise experiments to determine what processes might possibly be optimized though AI-assistance.

    But doing so properly requires expending a serious amount of cognitive effort & agile methodology, which is the exact opposite of what Amazon's management has demonstrated here.

    • Well then the solution is to hire more managers, or pay them more competitive salaries to get top talent.

  • Or, you know, you could argue that employee productivity should be measured in an evidence-based way.

Depends on what they're trying to incentivise.

It's quite possible they aren't trying to measure performance but are literally just trying to increase token consumption to feed the bubble and hype.

Plus, pressured employees may find new, unique use cases for AI.

It's like if your goal is inflation: you give out tons of money, and as long as it's spent, you achieve your goal.

Management loves numbers because they’re the only things you can objectively compare as X > Y.

It makes for pretty charts, extrapolations, and projections.

It doesn’t matter if the numbers are not particularly correct. As long as the data-gathering step can be justified, it’ll do. Bonus points if making the number bigger is a good thing (vs. tracking something like the number of sev 1 issues).

  • Sounds a bit like a McNamara Fallacy [0] of over-prioritizing numeric measures, which--when taken "too literally"--becomes:

    > The first step is to measure whatever can be easily measured. This is okay as far as it goes.

    > The second step is to disregard that which can't be easily measured or give it an arbitrary quantitative value. This is artificial and misleading.

    > The third step is to presume that what can't be measured easily really isn't very important. This is blindness.

    > The fourth step is to say that what can't be easily measured really doesn't exist. This is suicide.

    — Daniel Yankelovich, "The New Odds"

    [0] https://en.wikipedia.org/wiki/McNamara_fallacy

  • Yes, but also because management is largely unqualified to be managing the stuff they are hired for. So they regress to numbers because they otherwise cannot participate in anything technical.

This is Matt Garman, the ultimate MBA. His bonus is for sure tied to tokens-per-quarter, which is the 2026 equivalent of measuring engineers by lines of code...

This is why AWS has been bleeding good engineers for years. What's left is starting to look like Boeing post-McDonnell merger...

They took out a quarter of their documentation pages' limited real estate with AI doc shorts nobody asked for, nobody needs, and can't disable.

If this were the end of the story, that would be a correct interpretation of the situation.

At Amazon, something like this is likely a closely watched experiment. They knew it would incentivise waste. But they don't know what the other effects will end up being. Nobody knows -- this thread is full of loose speculation. So Amazon runs the experiment and collects the data.

----

The annoying thing about goals and incentives is that they can either be phrased in terms of input metrics (behaviours within our control) or output metrics (the outcomes we want). Input metrics are bad because they lead to skewed incentives and gaming of the metrics. Output metrics are bad because they're largely affected by chance and external circumstances. (This indeed means a goal cannot be SMART on its own, because A and R are typically in tension.)

Amazon knows this. Their WBR structure is essentially about trying to set goals and targets for input metrics, and then carefully observing how input metrics correlate with output metrics. They're using a semi-scientific process to tease out the causal structure of their business. I would assume this token target is followed very closely to learn exactly what its effects are on output metrics that drive revenue and cost.

For more on this, I think the best public writing is Carr's Working Backwards, and Chin has written about it on Commoncog too.

  • I don't think this strategy is a viable experiment. Far too many uncontrolled variables, and the "input metrics", as you call them, are very shallow.

    A simpler explanation: management has no ideas and no goals, and this is a replacement strategy. They too are affected by "experimental metrics" to a degree, but that doesn't excuse this trite "science".

    Any "answer" this would provide wouldn't be of higher quality than this speculation.

I have recently played around with lots of data from measurements, and you can totally dump everything into context and let Claude try to analyze the data that way. It burns through a lot of tokens. It is smarter to save the data to disk and let Claude write scripts that handle/analyze it. It’s much faster, the results are much better, and you save a lot of tokens. But I guess Amazon prefers the first approach.
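The "write a script instead" approach can be sketched roughly like this (the CSV layout, column name, and choice of summary stats are made up for illustration): reduce the raw measurements to a handful of summary numbers with code, so the model only ever sees the summary, not thousands of token-hungry raw rows.

```python
import csv
import os
import statistics
import tempfile

def summarize_measurements(path):
    """Reduce a CSV of (timestamp, value) rows to a few summary stats.

    Only this small dict would go into the LLM's context, instead of
    the entire raw file.
    """
    with open(path, newline="") as f:
        values = [float(row["value"]) for row in csv.DictReader(f)]
    return {
        "n": len(values),
        "mean": statistics.fmean(values),
        "stdev": statistics.stdev(values) if len(values) > 1 else 0.0,
        "min": min(values),
        "max": max(values),
    }

# Demo with a small synthetic measurement file.
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False) as f:
    f.write("timestamp,value\n")
    for i, v in enumerate([1.0, 2.0, 3.0, 4.0]):
        f.write(f"{i},{v}\n")
    path = f.name

summary = summarize_measurements(path)
os.unlink(path)
print(summary)  # a handful of numbers, not thousands of raw rows
```

Same answer either way, but the script version costs a fixed, tiny number of tokens no matter how large the dataset grows.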

  • I don’t have any specific inside knowledge about Amazon, but I would hazard a guess that the first approach also provides better training material for the LLM.

> IOW You measure employees in stupid ways, you're going to get stupid behaviour as a consequence.

My favorite hilarious metric is measuring the amount of work done by counting lines of code written per day

  • > My favorite hilarious metric is measuring the amount of work done by counting lines of code written per day

    Or by hours spent in the office

My current job is doing the exact same thing. My manager even showed me a tool with graphs showing token use and related metrics.

If it's stupid and it works then it's not stupid. Sometimes executives have to use blunt instruments to turn around the culture of a hidebound large organization. When Jeff Bezos sent his 2002 API mandate it might have seemed stupid at the time and yet it worked.

https://nordicapis.com/the-bezos-api-mandate-amazons-manifes...

  • Stupid things that work are still stupid. There's a reason we have the expression "a broken clock is right twice a day". Moreover, evidence so far seems to suggest that this AI push is not working for Amazon.

Thing is, managers are employees of the corporation too; they're themselves measured, and they need proof of work to get paid, just like campus janitors.

If a manager, or the workforce under them, just sat around and ignored AI because it's stupid and irrelevant and useless, they'd lose one tool for justifying their existence among peers who don't express such views. If they did their jobs as before WHILE "investing" in tokenmaxxing, they'd gain a double-dippable vanity metric: "we spent 12.34 quadrillion tokens last quarter" plus "our new method helped us reduce token count by 10^24 this quarter".

You may call it fraudulent behaviour from a hypothetical shareholder's perspective in this hypothetical scenario, which it is, and call it a Goodhart's law scenario too, which it also is, but in relative terms it's completely normalized behaviour. Project Hail Mary is a lighthearted work of fiction.

> You measure employees in stupid ways, you're going to get stupid behaviour as a consequence.

I worked for a healthcare tech startup that made everyone wear fitbits and you got cheaper health insurance premiums if you averaged a higher # of steps every day. People were putting their fitbits on drillbits and whirring them around to log like 20,000 steps a day.

Most productivity metrics are stupid, vain attempts at avoiding doing real management work. If you are actually interfacing with your subordinates regularly, as managers should, it will be obvious who is pulling their weight and who isn't, no need for arbitrary statistics that are easily gamed.

Or maybe they plan to review how effective high usage engineers have been next cycle and the tokenmaxxers will get bit in the ass when they have little to show for all their wasted tokens? Performance metrics can, and do, change on a dime and tokenmaxxing seems short sighted when management can look at old logs.

I've worked for Amazon before as a warehouse worker, I can attest they were one of the stupidest companies I've ever worked for. Stats were blindly followed to the point of absolute stupidity, performative work was enforced for the cameras, communication between staff and management and even between managers was non-existent. I once spent nearly 3 months unable to do a portion of my job because nobody knew how to buy more cardboard boxes. Not that they couldn't, but that nobody with any responsibility over the problem was able to contact anyone capable of buying them.

Goodhart's law in action.

The moment they made it a metric they failed to do anything useful.

Agreed. You really should replace the manager that made that policy as soon as possible. This is a textbook example of the corporate rot that leads to decline in a once-innovative space.