Comment by this_user

3 days ago

One argument I have heard in favour of this is that management knew this would be a side effect, but that it's more important to have people engage with AI as much as possible simply to explore what is actually possible. You are effectively knowingly wasting money in the expectation that you might learn something useful that will be more valuable in the long run.

If companies are suddenly willing to spend money on letting their staff experiment, why not let them experiment with what they want to? They probably know more about technology than you do, otherwise you wouldn't need them.

> engage with AI as much as possible simply to explore what is actually possible

"Research" isn't part of my job title. If you don't know what's possible then why are you deploying it? You should be telling _me_ what's possible. I mean, you _paid_ for it, how can you possibly not know what you were getting?

> in the expectation that you might learn something useful that will be more valuable in the long run.

"I'll take 'What even are profits?' for $200, Alex."

  • Hear hear.

    An overly generous steelman, in my opinion, as well. Have 10% of your employees focus on finding ways to properly leverage the new technology; don't pressure 100% of your employees with bullshit metrics.

  • >"Research" isn't part of my job title. If you don't know what's possible then why are you deploying it? You should be telling _me_ what's possible. I mean, you _paid_ for it, how can you possibly not know what you were getting?

    Extremely weird take for a knowledge worker.

No, it's literally because some dumb manager read a blog where an influencer said that you ain't a real AI native and ain't worth shit unless your developers are spending $XXXX on tokens each day.

It's that simple.

(Never mind that these bloggers are just writing ad copy for cloud providers.)

In this instance, it seems like Amazon employees are wasting money exploring ways to waste money.

My questions for that approach are: Why treat AI as a special technology that needs enterprise-scale exploration to come up with a useful application? And why not take the alternative approach of identifying the subset of people who have indeed found solid uses and spread their best practices around?

The top-down approach to encouraging (mandating?) AI usage strikes me as infantilizing to the workers, who are perfectly capable of choosing which tools they use and when.

  • Human nature?

    In the early nineties, it was common for experienced electrical engineers to keep using schematic-entry digital design and look down on RTL and synthesis tools, despite the fact that the latter were already far more productive. At some point, management had to put its foot down and force everyone to switch to synthesis.

    It's not unreasonable to assume that many people are set in their ways and unwilling to change their behavior without a bit of a push.

    • It is completely unreasonable to assume that. Tech people are so hungry for productivity gains that they regularly will defy management forbidding them from using a tool, because the tool is so good they feel they have to have it.

      If LLMs truly are as good as their proponents say, engineers will use them even if management outright forbade it. The fact that people aren't using them, and have to be forced, is extremely strong evidence that they are not in fact that useful.

      4 replies →

    • The Alpha 21064 (1992) used domino logic [1].

      [1] https://en.wikipedia.org/wiki/Domino_logic

      There were no synthesis algorithms that would map VHDL or Verilog designs into domino logic elements at the time. I believe most of the work in the synthesis-to-domino-logic area was done at the beginning of the current century.

      So DEC's engineers, and I think Intel's engineers, were doing work using schematics well into the 21st century.

      1 reply →

    • I guess the only difference between this and your example is the concrete efficiency gain from RTL and synthesis tools versus dubious applications of AI. I do agree with the second point about pushing people to explore new ways of doing things though.

      14 replies →

    • > It's not unreasonable to assume that many people are set in their ways and unwilling to change their behavior without a bit of a push.

      You include those only in a second round, along with guidelines and recommendations on how to use the tool effectively.

      2 replies →

  • A tool so good, the workers need to be forced to use it.

    • Workers are good at their job using tools they know because they had years/decades to hone that craft.

      The new tool might make a lot of that experience obsolete. Also, some people that were good with old tools might be great with the new tool. Some may not.

      Overall, I don't think it's a bad idea to burn some tokens (and money) to let people experiment.

  • Google's 20% project time was a good thing; sadly, they don't even seem to do it anymore. For the bulk of corporate workers, this brief period where they get to play an AI token game is the only break from generating TPS reports all day long.

That still sounds like a dumb strategy. Or, more likely, post hoc rationalization.

If you reward me for wasting tokens and punish me for not wasting them, I will maximally waste them and won't "explore how to make them useful". The latter wastes fewer tokens, and that is punished.

Exactly. That's the problem ICs don't want to admit.

Managing a lot of people at scale is messy and you have to use crude solutions. It's impossible to know everything that's going on.

If you were a manager you wouldn't do any better. Out of the crooked timber of humanity, no straight thing was ever made.

  • Your argument that bad processes can't possibly be improved is contradicted by all of recorded history.

      That's because you've misconstrued my argument. My argument is that everything is a tradeoff, and that while management can be MORE conscientious, there's a certain level of bullshit that's inevitable.

      But more specifically, ICs tend to want to say, "if you just let me do what I know is right, it would be fine." That's a tradeoff, too, though: that solution means a lot of people will be messing around with no accountability.

      1 reply →

  • I think that's a convenient excuse for managers at the top to not have to deal with their own subpar middle and lower managers...

Are the people engaging though, or are they telling the AI "go do some busywork" and then minimizing that window and getting on with their job?

This is induced demand for AI to justify building more datacenters, which will bring AI costs down, and the idea is that will eventually bring demand up organically.

in short: dogfood the tool extra hard, and figure out what it can do.

this means wasting a lot of metaphorical dog food, but now everyone will know 100% how it tastes.

this then allows you to shift client expectations and alter offerings.

...

or it's just dumb mgmt. but let's be charitable.