Comment by cpgxiii

7 hours ago

> However, if you add it onto a better CPU it’s a fine technique to bet on - case in point Apple’s move away from Intel onto homegrown CPUs.

I don't think Apple is a good example here. Arm was extremely well-established when Apple began its own phone/tablet CPU designs. By the time Macs began to transition, much of their developer ecosystem was already familiar with Arm.

Apple's CPUs are actually notably conservative when compared to the truly wild variety of Arm implementations: no special vector instructions (e.g. SVE), no online translation (e.g. Nvidia Denver), no crazy little/big/bigger core complexes.

I think you’re focusing on the details and missing my broader point - the JIT translation technique only works to break out of instruction-set lock-in. It does not improve performance, so betting on it instead of superscalar hardware designs is not wise.

Transmeta’s CPU was not performance competitive and thus had no path to success.
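For anyone wondering what "the JIT technique" actually means here, below is a minimal sketch of dynamic binary translation: take a guest instruction, emit equivalent host machine code at runtime, and jump to it. Everything in it is invented for illustration - the one-instruction "guest" ISA and the guest_insn/translate names are mine, and it assumes an x86-64 host whose OS allows read/write/execute anonymous mappings. Real translators (Transmeta's CMS, Apple's Rosetta) translate whole basic blocks into a cache and do far more.

    /* Toy dynamic binary translation sketch - illustration only. */
    #define _DEFAULT_SOURCE
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    typedef struct { uint32_t imm; } guest_insn;   /* made-up guest instruction: "load immediate" */
    typedef uint32_t (*host_fn)(void);             /* entry point of the translated code */

    /* Translate one guest instruction into host (x86-64) machine code:
     *   B8 <imm32>  = mov eax, imm32
     *   C3          = ret
     * A real translator would handle whole basic blocks and keep the
     * results in a translation cache keyed by guest PC.
     */
    static host_fn translate(const guest_insn *g) {
        uint8_t *code = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (code == MAP_FAILED)
            return NULL;
        code[0] = 0xB8;                 /* mov eax, imm32 */
        memcpy(code + 1, &g->imm, 4);   /* immediate, little-endian */
        code[5] = 0xC3;                 /* ret */
        return (host_fn)code;
    }

    int main(void) {
        guest_insn g = { 42 };
        host_fn fn = translate(&g);     /* a real DBT would cache and reuse this */
        if (fn)
            printf("guest accumulator = %u\n", fn());
        return 0;
    }

The relevant point: translation like this buys compatibility with a foreign instruction set, but how fast the translated code runs is entirely down to the host core underneath it.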

And as for Apple itself, they had built the first iPhone on top of ARM to begin with (partially because Intel didn’t see a market), so they were already familiar with ARM before they ever started building their own ARM CPUs. The developer ecosystem familiarity is also only partially relevant - even in compat mode the M1 ran faster than equivalent contemporary Intel chips. Familiarity was only needed to unlock the full potential (most of which came from Apple porting its 1p software). Even if they had never switched on native ARM support in the M1, the JIT technique (combined with a better CPU and a better unified memory architecture) would still have been fast enough to slightly outcompete Intel chips on performance and battery life - native software just made it no contest.

  • > partially because Intel didn’t see a market

    I saw some articles saying that Intel saw the market very well; they just could not deliver, and rather than admit that, they claimed the CEO made the wrong call.

    • Both were probably true to some extent, but given the huge opportunity I doubt they wouldn’t have figured out a way to execute.

      The mobile CPU market is worth a meaningful chunk of Intel’s overall current market cap, and they’re not participating in it.

> no special vector instructions (e.g. SVE)

Wut - SVE and SME are literally Apple designs (AMX) which have been "back ported".

  • > Wut - SVE and SME are literally Apple designs (AMX) which have been "back ported".

    Literally no Apple CPUs meaningfully support SVE or SVE2. Apple added what I would call relatively "conventional" matrix instructions (AMX) of their own, and now implements SME and SME2, but those are not equivalent to SVE. (I call AMX "conventional" in the sense that a fixed-size grid of matrix compute elements is not a particularly new idea, versus variable-sized SIMD, which is still quite rare.) Really, the only arm64 design with "full fat" SVE support is Fujitsu's A64FX (512-bit vector size); everything else on the very short list of hardware supporting SVE is still stuck with 128-bit vectors. A sketch of what vector-length-agnostic code looks like follows below.
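
    To make the fixed-width vs. variable-sized distinction concrete, here is a rough sketch of the canonical vector-length-agnostic loop using the Arm SVE ACLE intrinsics (the add_arrays function and its arguments are my own hypothetical example, not code from any chip discussed here). The same source runs unchanged on a 128-bit SVE core and on A64FX's 512-bit vectors, because the loop asks the hardware how wide its vectors are:

        /* Vector-length-agnostic loop using the Arm SVE ACLE intrinsics.
         * Build with something like: gcc -O2 -march=armv8-a+sve
         */
        #include <arm_sve.h>
        #include <stddef.h>

        /* c[i] = a[i] + b[i] for i in [0, n) */
        void add_arrays(const float *a, const float *b, float *c, size_t n) {
            for (size_t i = 0; i < n; i += svcntw()) {   /* svcntw(): 32-bit lanes per vector */
                svbool_t pg = svwhilelt_b32(i, n);       /* predicate masks off the tail */
                svfloat32_t va = svld1_f32(pg, a + i);   /* predicated loads */
                svfloat32_t vb = svld1_f32(pg, b + i);
                svst1_f32(pg, c + i, svadd_f32_m(pg, va, vb));
            }
        }

    With NEON, by contrast, the 128-bit width is baked into the vector types - that is the fixed-width model the parent comment is contrasting SVE against.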