Comment by hinkley
6 hours ago
Intel had already built this into the Pentium line at this point. Not as iterable as pure software, but decoding x86 instructions into whatever micro-operations they wanted internally sped up a lot of things on its own.
Perhaps they would have been better off making the decode logic programmable, effectively building a multicore machine where the translation code ran on its own processor with its own cache, instead of a pure JIT.
When you are operating at that level, there is a lot of similarity between software compiled to machine code and software compiled to a chip design. The differences are that machine code carries some extra runtime overhead, while changing a chip design takes more work.
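To make the parallel concrete, here is a toy sketch (all names and the instruction format are hypothetical, not any real ISA) of the kind of translation both approaches perform: a compact guest instruction is expanded into simpler internal micro-ops, whether the expansion happens in hardware decode logic or in a Transmeta-style software layer.

```python
# Toy decode stage: translate a guest "ADD mem, reg" instruction,
# which a load/store internal core cannot execute directly,
# into a sequence of simple micro-ops. Hypothetical instruction
# encoding: (opcode, destination, source) tuples.
def decode(instr):
    op, dst, src = instr
    if op == "ADD_MEM":  # guest semantics: mem[dst] += src_reg
        return [
            ("LOAD",  "tmp", dst),    # micro-op: tmp = mem[dst]
            ("ADD",   "tmp", src),    # micro-op: tmp += src
            ("STORE", dst,   "tmp"),  # micro-op: mem[dst] = tmp
        ]
    return [instr]  # simple ops pass through unchanged

print(decode(("ADD_MEM", 0x100, "r1")))
# one guest instruction becomes three internal micro-ops
```

In hardware this table lookup is frozen into silicon and fast; in software it is slow per instruction but can be patched, cached, and iterated on, which is exactly the trade-off described above.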
Fundamentally, Transmeta chose to make design iterations quicker, at the cost of some performance overhead. Intel chose that extra low-level performance, at the cost of slower iterations. And then Intel made up for the extra cost of iterating designs by having more resources to throw at the problem.
If Transmeta had had equivalent resources to throw at their approach, they would likely have won. But they didn't. And I think they made the right choices for the situation they were in.
Incidentally, the idea of a programmable translation layer on top of the hardware was not original to Transmeta. It had been used in all sorts of software systems before them, such as Java's bytecode virtual machine. The first big use that I'm aware of was the microcoded IBM System/360, back in the 1960s. There are still programs running on mainframes today that fundamentally think they are running on that virtual machine from the 1960s!