
Comment by inkyoto

1 year ago

> […] and odds are […]

When it comes to the adoption of a new ISA, there are no odds even when the sources exist; what decides the outcome is whether the scale and the QA are there or not.

The arrival of the first wave of Apple Silicon in 2020 led to a very hectic 2021 and beyond, with people rushing in to fix numerous issues, mostly (but not only) in Linux for aarch64, ranging from bugs to unoptimised code. OpenJDK, which had existed for aarch64 for some time, was so unstable that it could not seriously be used natively on aarch64, and it took nearly a year to stabilise. Hand-optimising OpenSSL, Docker, Linux/aarch64 and many, many other packages also took time.

It only became possible because of the mass availability of the hardware (performant consumer-level arm64 CPUs), which has led to mass adoption of the architecture at the software level. aarch64 has now become a first-class citizen, and Linux as well as other big players (e.g. cloud providers) have vastly benefited from it as a whole. It is far from certain that, without the Apple Silicon catalyst, we would have seen Graviton 4 in 2024 (the 4th generation in just 5 years), large multi-core Ampere CPUs in 2023/24, or even a performant Qualcomm laptop CPU this year.

Mass hardware availability to lay people, leading to mass adoption by lay people, is critical to the success of a new hardware platform: all of a sudden a very large pool of free QA becomes available, which spurs further interest in improving the software support. Compare it, for instance, with the POWER platform, which is open and whose hardware has been available for quite a while; however, there has been no scale. The end result is that the JIT still yields poor performance in Firefox/ppc64. Embedded people and hardware enthusiasts are not the critical mass required to trigger the chain reaction that leads to platform success; it is the lay people incessantly whining about something not working and reporting bugs.

Then there is also a reason why OpenBSD still holds on to a zoo of ancient, no longer available platforms (including the Motorola 88k) – they routinely compile the newly written code – however many moons it takes them – and run it on the exotic hardware today, with the single narrow purpose of trapping bugs, subtle and less subtle, caused by architectural differences across platforms. Such an approach stands in stark contrast to the mass-availability one; it does not scale as well, but it is a viable approach, too. And this is why the OpenBSD source code has a much better chance of running flawlessly on a new ISA.

Hence, hardware platform adoption is not a simple affair as some enthusiastically try to portray it to be.

Embedded has been doing just that for their platforms for ages. They don't care about most of the things you list, though.

Not that your point is wrong, but for most uses it doesn't matter. It would be better if they had it, but they don't need it.

  • > they don't care […]

    Precisely. Embedded cares only about one thing: «get the product off the ground and ship it fast, bugs included». And since the software in embedded is not user-facing, they can get away with «power cycle the device if it stops responding» recommendations in the user guide.

    Embedded also sees the CPU as a disposable commodity rather than a long-term asset, and it is a well-entrenched habit to throw the entire code base away when an alternative CPU/ISA (cheaper, more power-efficient, etc. – you name it) comes along. Where is all the code once written for the 68HC11, PIC, AVR etc.? Nowhere. It has all but been thrown away for varying reasons (architecture switches, architecture obsolescence and such). The same has not happened for Intel, where the code is still around and running.

    For more substantial embedded development, the responsibility for adopting a new ISA falls on the vendor of the embedded OS/runtime (e.g. VxWorks) or the embedded CPU vendor, who makes reasonable efforts to support the hardware features important to customers but does not carry out extensive testing of all features. Again, the focus is on allowing the vendor's customers to ship the product fast. The quality of embedded development toolchains is also not infrequently questionable, and complaints about poor support of the underlying hardware are common. They are typically ignored.

    > but for most uses it doesn't matter […]

    Which is why embedded is not a useful success metric when it comes to predicting the success of a CPU architecture in user-facing scenarios (namely, personal and server computing).