
Comment by n8cpdx

1 day ago

Delusional take. Rosetta is for maintaining compatibility during the transition. Efficiency is fine with Rosetta. But it doesn’t matter because the ARM transition is essentially already done. Not true, unfortunately, for Windows.

Aside from superior performance and battery life (even compared to ARM Windows offerings), the M series devices are generally reliable, unlike Windows laptops running Intel and (less so) AMD.

Pile onto that the fact that a lot of us are in the cloud, and the cloud has ARM processors, and they're generally competitively priced, especially next to m7i and m7a. So it's not the worst thing in the world to be using the arm64 architecture on your dev machine.

  • In my experience it matters very little whether the cloud is ARM or not. I still need to build my code in a Docker container with Amazon Linux even on my ARM-based Mac when targeting an ARM-based AWS runtime environment.
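    A minimal sketch of that workflow, assuming Docker with buildx is available; the image name `myapp` and the Amazon Linux base are placeholders:

    ```shell
    # Build a linux/arm64 image on an Apple Silicon Mac for an ARM
    # (e.g. Graviton) AWS target. The Dockerfile might start with
    # "FROM amazonlinux:2023" per the comment above.
    docker buildx build --platform linux/arm64 -t myapp:arm64 --load .

    # The same command on an x86 machine falls back to QEMU emulation,
    # which is slower but produces an identical arm64 image.
    ```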

What is the efficiency loss specifically? Do you even know, or are you just asserting it?

>it doesn't matter because the ARM transition is essentially already done

'Essentially' is doing a lot of heavy lifting here, but, putting that aside: A. you're wrong, I've recently run into Rosetta throttling, and B. it's not a good reason to begin the project at all, it's only a good reason once the project is already done. You're essentially conceding "Yes, I've been wrong and this has been a fool's errand for the past x years until right this moment, now that the project is done." It's not done, and it's a weak argument.

>Aside from superior performance and battery life (even compared to ARM windows offerings), the M series devices are generally reliable, unlike windows laptops running Intel and (less so) AMD.

Specifically what are the numbers? Because I have performance/TDP numbers, and the M series performs well, but it isn't a categorical difference. In fact, there's no categorical difference at all: it performs okay, but AMD is currently at the top of the heap. Sad.

  • When the M1 transition started, Intel and AMD devices simply were not competitive, even after factoring in Rosetta losses (https://www.macrumors.com/2020/11/15/m1-chip-emulating-x86-b...). That was the relevant comparison to Rosetta; it has been 5 years since the transition started, and nowadays, as others have stated, it is common not to have Rosetta installed at all. macOS is dropping support for it soon.

    The real difference maker is efficiency. MacBook owners simply do not need to worry about whether they are plugged in or not; the performance does not change and the battery lasts many hours, even on demanding tasks. Occasionally you can cherry-pick a benchmark where AMD appears to be competitive, but always at much higher power draw.

    AMD and Intel users don’t really appreciate how much of a qualitative difference that is. Being even close in performance, while offering far superior reliability and battery life, puts apple silicon in a league of its own.

    Share your numbers please. I’m having trouble finding reliable sources that aren’t YouTube videos or forum posts, but nothing I’ve been able to find contradicts my claims.

  • I switched from a 2019 MBP to a new M4 Pro a few weeks ago, and I didn’t even know Rosetta wasn’t installed (I assumed it was on and installed by default) until I had to run a Go binary that hadn’t been updated since 2020.

    I use a lot of nonstandard software (not just a browser), not a single piece needed Rosetta.

    I agree recent AMD chips are power-efficient like the M series (though I don’t have one to compare with), but I thought everyone agreed the comparable chips in 2020 weren’t?
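    For what it's worth, Go only gained native darwin/arm64 support in Go 1.16 (February 2021), so any Go binary built before that has to run under Rosetta on Apple Silicon. A rough sketch of checking and rebuilding; `mytool` is a placeholder:

    ```shell
    # Inspect the architecture the binary was compiled for; an old
    # build will report x86_64 and therefore run translated.
    file ./mytool

    # Rebuilding with a Go 1.16+ toolchain yields a native binary
    # that needs no translation at all:
    GOOS=darwin GOARCH=arm64 go build -o mytool .
    file ./mytool   # should now report a Mach-O arm64 executable
    ```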

    • Apple's marketing was a very impressive effort on this front, evidenced by:

      >...I thought everyone agreed the comparable chips in 2020 weren’t?

      Possibly, but the gap was likely far, far smaller (see, e.g., the AMD Ryzen 7 4800U) than the project's defenders suggest.

      Anyways, with the addition of the Rosetta translation layer there's no way the Apple M1 was as efficient as the Ryzen.

  • > A. you're wrong, I've recently ran into Rosetta throttling […]

    Can you please define and explain the meaning of «Rosetta throttling»? Rosetta 2 is static binary translation plus JIT optimisations at runtime. Is Rosetta injecting delay slots or delay loops into the translated code? Or is it injecting branch instructions that consistently fail the branch predictor? Something else? Since you seem to have analysed specific code paths, the esteemed congregation on here is eager to pick the disassembled code apart.

    Without the direct evidence, such claims are as credible as that of a vegetable vendor at the local farmer market claiming that spinach they sell cures cancer.
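    Incidentally, macOS does expose whether the calling process is translated, via the `sysctl.proc_translated` sysctl (1 = translated, 0 = native, per Apple's Rosetta documentation). A quick check, Apple Silicon only:

    ```shell
    # Run from a native shell: should print 0.
    sysctl -n sysctl.proc_translated

    # Launch an x86_64 shell under Rosetta and ask again:
    # should print 1.
    arch -x86_64 /bin/zsh -c 'sysctl -n sysctl.proc_translated'
    ```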

  • Then post the numbers? You're just here doing the same thing, asserting that the efficiency is bad, only using more words.

    Performance and efficiency have been great for me. I've never run into Rosetta throttling. I've got the numbers - trust me bro.

    • The null hypothesis is that Apple chips aren't better. You simply assumed that they were. It's up to you to provide figures showing that they are.

      Of course, they really aren't, which is pretty obvious. It doesn't make sense that Apple would randomly invent some categorically new CPU technology when they don't even own an instruction set or a foundry; it's far more plausible that they're simply concocting a vendor lock-in supply-chain scheme.
