Comment by the_snooze
3 days ago
My questions for that approach are: Why treat AI as a special technology that needs enterprise-scale exploration to come up with a useful application? And why not take the alternative approach of identifying the subset of people who have indeed found solid uses and spread their best practices around?
The top-down approach to encouraging (mandating?) AI usage strikes me as infantilizing to the workers, who are perfectly capable of choosing which tools they use and when.
Human nature?
In the early nineties, it was common for experienced electrical engineers to keep using schematic-entry digital design and look down on RTL and synthesis tools, despite the fact that the latter were already way more productive. At some point, management had to put their foot down and force everyone to switch to synthesis.
It's not unreasonable to assume that many people are set in their ways and unwilling to change their behavior without a bit of a push.
It is completely unreasonable to assume that. Tech people are so hungry for productivity gains that they will regularly defy management forbidding them from using a tool, because the tool is so good they feel they have to have it.
If LLMs truly are as good as their proponents say, engineers will use them even if management outright forbade it. The fact that people aren't using them, and have to be forced, is extremely strong evidence that they are not in fact that useful.
> extremely strong evidence that they are not in fact that useful.
Surely you can't argue in good faith today that LLMs aren't useful? It was a valid argument a year ago, but the latest models are absolutely useful at solving whole classes of problems.
They're not perfect, need to be carefully monitored, can cause weird gambling-like dopamine rushes, and can let lazy development habits creep in. But none of those things negate the fact that, in many situations, they are useful.
> extremely strong evidence that they are not in fact that useful
See my other reply in this subthread. For my line of work, they are in fact ridiculously useful.
The Alpha 21064 (1992) used domino logic [1].
[1] https://en.wikipedia.org/wiki/Domino_logic
There were no synthesis algorithms that could map VHDL or Verilog designs onto domino logic elements at the time. I believe that most of the work in the synthesis-to-domino-logic area was done at the beginning of the current century.
So DEC's engineers, and I think Intel's engineers, were doing work using schematics well into the 21st century.
In the early nineties, standard cell design was already used by everyone except those who needed clock speed at all cost.
I guess the only difference between this and your example is the concrete efficiency gain from RTL and synthesis tools versus dubious applications of AI. I do agree with the second point about pushing people to explore new ways of doing things though.
> dubious applications of AI
Leaving aside the ethical aspects of using AI (not because they're not valid, because they're off topic for this discussion), in my line of work, the capabilities and productivity improvement of AI are staggering. Most of it is not writing the new code, which is but a small part of chip design, but everything else.
I can't give a concrete work example, but here is an experiment that I ran a month ago: https://tomverbeure.github.io/2026/04/12/AMIQ-License-Key-Ge.... If it can do that, it's not hard to imagine similar use cases related to root-causing complex simulation failures. It is frighteningly good at that.
> It's not unreasonable to assume that many people are set in their ways and unwilling to change their behavior without a bit of a push.
You include those people only in the second round, along with guidelines and recommendations on how to use it effectively.
What if those people are some of the most experienced ones, who can see use cases, and flaws, that more junior people won't?
A tool so good, the workers need to be forced to use it.
Workers are good at their job using tools they know because they had years/decades to hone that craft.
The new tool might make a lot of that experience obsolete. Also, some people who were good with the old tools might be great with the new tool. Some may not.
Overall, I don't think it's a bad idea to burn some tokens (and money) to let people experiment.
Google's 20% project time was a good thing; sadly, they don't even seem to do it anymore. For the bulk of corporate workers, this brief period where they get to play an AI token game is the only break from generating TPS reports all day long.