Comment by panick21_
1 year ago
> This didn't work out
... except it did.
You had literal students design chips that outperformed industry cores that took huge teams and huge investment.
Acorn had a team of just a few people build a core that outperformed an i486 with likely 1/100th the investment. Not to mention the even more expensive VAX chips.
Can you imagine how fucking baffled the DEC engineers at the time were when their absurdly complex and absurdly expensive VAX chips were smoked by a bunch of first-time chip designers?
> as Pentium II demonstrated
That chip came out in 1997. The original RISC research happened in the early 80s or even earlier. It did work; it's just that x86 was bound to the PC market, and Intel had the finances to let huge teams hammer away at the problem. x86 was able to overtake Alpha because DEC was not doing well and couldn't invest the required amount.
> no reason to expose it to the user in the ISA
Except that hiding the implementation is costly.
If you give two equal teams the same amount of money, which results in a faster chip: a team that implements a simple RISC instruction set, or a team that implements a complex CISC instruction set and transforms it into a simpler underlying one?
Now of course Intel had backward compatibility to preserve, so they had to do what they had to do. They were just lucky they were able to invest so much more than all the other competitors.
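To make the cost of that hidden translation concrete, here's a toy sketch (invented mnemonics and helper names, not any real ISA or decoder) of what a CISC front end has to do before execution even starts: crack a memory-operand instruction into the load/op/store micro-ops a RISC would have encoded directly.

    # Toy sketch (not any real ISA): how a CISC-style memory-operand
    # instruction might be "cracked" into RISC-like micro-ops by the
    # front end -- the hidden translation layer that costs real design
    # effort and transistors.

    from dataclasses import dataclass

    @dataclass
    class MicroOp:
        op: str
        dst: str
        srcs: tuple

    def crack(insn: str) -> list[MicroOp]:
        """Crack e.g. 'ADD [BX+16], AX' into load / add / store micro-ops."""
        mnemonic, operands = insn.split(maxsplit=1)
        dst, src = [o.strip() for o in operands.split(",")]
        if dst.startswith("["):          # memory destination: needs 3 micro-ops
            addr = dst[1:-1]             # e.g. "BX+16"
            return [
                MicroOp("load", "tmp0", (addr,)),                   # tmp0 <- mem[addr]
                MicroOp(mnemonic.lower(), "tmp1", ("tmp0", src)),   # tmp1 <- tmp0 OP src
                MicroOp("store", addr, ("tmp1",)),                  # mem[addr] <- tmp1
            ]
        return [MicroOp(mnemonic.lower(), dst, (dst, src))]         # register form: 1 micro-op

    for uop in crack("ADD [BX+16], AX"):
        print(uop)

A RISC front end skips this step entirely: the programmer-visible instructions already are the micro-ops, which is exactly the engineering-budget argument above.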
> You had literal students design chips that outperformed industry cores that took huge teams and huge investment
Everyone remember to thank our trans heroine Sophie Wilson (CBE).
> If you give 2 equal teams the same amount of money, what results in a faster chip.
Depends on the amount of money. If it's less than a certain amount, the RISC design will be faster. If it's above, both designs will perform about the same.
I mean, look at ARM: they too decode their instructions into micro-ops and cache those in their high-performance models. What RISC buys you is the ability to be competitive at the low end of the market, with simplistic implementations.

That's why we won't ever see e.g. a stack-like machine: no exposed general-purpose registers, but flexible addressing modes for the stack, even something like [SP+[SP+12]], with the stack mirrored onto a hidden register file used as an "L0" cache, which neatly solves the problem that register windows were supposed to solve. Such a design can be made as fast as server-grade x86 or ARM, but only by throwing billions of dollars and several man-millennia at it; if you try to do it cheaper and quicker, its performance will absolutely suck.

That's why e.g. System/360 didn't make that design choice, although IBM seriously considered it for half a year: they found that the low-end machines would be unacceptably slow, so they went with the "registers with base-plus-offset addressed memory" design.
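As a toy illustration of that hypothetical stack machine (all names and rules invented here, not any real ISA), here's roughly what resolving an operand like [SP+[SP+12]] involves, with the words near SP mirrored in a hidden "L0" register file:

    # Toy model of the hypothetical stack machine described above:
    # no architectural registers, operands addressed relative to SP with
    # one level of indirection allowed, and the top N stack words
    # mirrored in a hidden register file ("L0").

    L0_WORDS = 16                      # how many words near SP the hidden file mirrors

    class StackMachine:
        def __init__(self):
            self.mem = {}              # word-addressed backing memory
            self.sp = 1000
            self.l0 = {}               # hidden register file: addr -> value

        def read(self, addr):
            # Hits in the hidden file are register-speed; misses go to memory.
            if addr in self.l0:
                return self.l0[addr]
            return self.mem.get(addr, 0)

        def write(self, addr, val):
            self.mem[addr] = val
            if 0 <= addr - self.sp < L0_WORDS:   # mirror words near SP
                self.l0[addr] = val

        def resolve(self, operand):
            """Resolve '[SP+4]' or the doubly-indirect '[SP+[SP+12]]' to an address."""
            assert operand.startswith("[SP+") and operand.endswith("]")
            inner = operand[4:-1]
            if inner.startswith("["):            # [SP+[SP+12]]: inner load first
                offset = self.read(self.resolve(inner))
            else:
                offset = int(inner)
            return self.sp + offset

    m = StackMachine()
    m.write(m.sp + 12, 8)              # slot at SP+12 holds the offset 8
    m.write(m.sp + 8, 42)
    print(m.read(m.resolve("[SP+[SP+12]]")))   # -> 42

Every operand fetch can involve an extra dependent memory access, which is precisely why a cheap, simple implementation of such a design would be slow.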
All fine, except Itanium happened, and it goes against everything you've listed...?
Itanium was not in any sensible way RISC; it was "VLIW". That pushed a lot of needless complexity into compilers and didn't deliver the savings.
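For a feel of where that complexity goes, here's a toy sketch (a greedy in-order packer; nothing like real IA-64 bundle and template rules) of the static scheduling a VLIW compiler must do, padding with NOPs whenever a dependence leaves an issue slot empty:

    # Rough sketch of why VLIW pushes work into the compiler: the compiler,
    # not the hardware, must pack independent ops into fixed-width bundles,
    # and pads with NOPs when dependences leave slots empty. Toy example only.

    BUNDLE_WIDTH = 3

    def schedule(ops):
        """ops: list of (name, dst, srcs). Greedy in-order bundle packing."""
        bundles, current, written = [], [], set()
        for name, dst, srcs in ops:
            # An op can't share a bundle with a producer of one of its sources.
            if len(current) == BUNDLE_WIDTH or any(s in written for s in srcs):
                current += ["nop"] * (BUNDLE_WIDTH - len(current))  # pad with NOPs
                bundles.append(current)
                current, written = [], set()
            current.append(name)
            written.add(dst)
        if current:
            current += ["nop"] * (BUNDLE_WIDTH - len(current))
            bundles.append(current)
        return bundles

    ops = [("load r1", "r1", []), ("add r2", "r2", ["r1"]),   # r2 depends on r1
           ("mul r3", "r3", []), ("sub r4", "r4", ["r2"])]
    for b in schedule(ops):
        print(b)   # 4 of 9 slots end up as NOPs

On real code, with cache misses and branches the compiler can't predict, slots go unfilled and you pay for issue width you don't use; hardware-scheduled out-of-order designs won on exactly that.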