Comment by hashtag-til

1 year ago

But then there is the software ecosystem issue.

Having a competitive CPU is 1% of the job. Then you need to have a competitive SoC (oh, and not infringe IP), so that you can build the software ecosystem, which is the hard bit.

> But then there is the software ecosystem issue.

We still have problems with software not being optimised for Arm these days, which is just astounding given its prevalence on mobile devices, let alone the market share represented by Apple. Even Golang is still lacking a whole bunch of optimisations that are present on x86, and Google has its own Arm-based chips.

Compilers pull off miracles, but a lot of optimisations are going to take direct experience and dedicated work.

  • Considering how often ARM processors are used to run an application on top of a framework over an interpreted language inside a VM, all to display what amounts to kilobytes of text and megabytes of images, using hundreds of megabytes of RAM and billions of operations per second, I'm surprised anyone even bothers optimizing anything anymore.

  • For all its success, it's still kind of a niche language (and even with the number of Google compiler developers, they're spread thin between V8, Go, Dart, etc.).

    I think the keys to RISC-V in terms of software will be:

    LLVM (gives us C, C++, Rust, Zig, etc.); this is probably already happening?

    JavaScript (V8 support for Android should be the biggest driver, also enabling Node, Deno, etc., but its speed will depend on Google's interest).

    JVM (does Oracle have any interest at all? Could be a major roadblock unless Google funds it; again, this depends on Android interest).

    So Android on RISC-V could really be a game-changer, but Google just backed down a bit recently.

    Dotnet (games) and Ruby (and even Python?) would probably be like Go, with custom runtimes/JITs needing custom work but no obvious market share/funding.

    It'll remain a niche, but I really do think Android devices (or something else equally popular, a Chinese home PC?) would be the game-changer to push demand over the top.

  • > Even Golang

    Golang's compiler is weak compared to the competition. It's probably not a good demonstration of most ISAs really.

Not an issue, because except for a few Windows or Apple machines, everything ARM is compiled, and odds are they have the source. Give our EEs a good RISC-V and a couple of years later we will have our stuff rebuilt for that CPU.

  • The whole reason the ARM transition worked is that you had millions of developers with MacBooks who, because of Rosetta, were able to seamlessly run both x86 and ARM code at the same time.

    This meant that you had (a) strong demand for ARM apps/libraries, (b) a large pool of testers, (c) developers able to port their code without needing additional hardware, and (d) developers able to seamlessly test their x86/ARM code side by side.

    RISC-V will have none of this.

    • Apple is the only company that has managed a single CPU transition successfully. That they actually did it three times is incredible.

      I think people are blind to the amount of pre-emptive work a transition like that requires. Sure, Linux and FreeBSD support a bunch of architectures, but are they really all free of architecture-specific bugs? You can't convince me that choosing an esoteric, lightly used arch like big-endian PowerPC won't come with bugs you'll have to deal with. And then you need to figure out who's responsible for the code, and whether or not they have the hardware to test it on.

      It happened to me: a small project I put on my ARM-based AWS server was not working even though it was compiled for the architecture.
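
      The kind of bug being described here can be very mundane. A minimal C sketch (function names are mine, purely illustrative) of the classic byte-order mistake: reading a big-endian wire format by copying it straight into a host integer.

      ```c
      #include <stdint.h>
      #include <stdio.h>
      #include <string.h>

      /* Reads a 32-bit big-endian length by type-punning through host
       * memory. This happens to work on big-endian machines but silently
       * returns byte-swapped values on little-endian x86/ARM/RISC-V. */
      uint32_t read_len_buggy(const unsigned char *buf) {
          uint32_t len;
          memcpy(&len, buf, sizeof len); /* host byte order: not portable */
          return len;
      }

      /* Portable version: assemble the value byte by byte. */
      uint32_t read_len_portable(const unsigned char *buf) {
          return ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16) |
                 ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];
      }

      int main(void) {
          /* 256 encoded big-endian, as a network protocol would send it */
          const unsigned char wire[4] = {0x00, 0x00, 0x01, 0x00};
          printf("buggy:    %u\n", read_len_buggy(wire));    /* 65536 on little-endian hosts */
          printf("portable: %u\n", read_len_portable(wire)); /* 256 on every host */
          return 0;
      }
      ```

      Both compile and "work" on the developer's machine; only one of them survives a change of architecture.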

      1 reply →

    • Embedded is far larger than PCs and doesn't need that; phones are larger too, and there you already recompile as needed.

  • > […] and odds are […]

    When it comes to the adoption of a new ISA, there are no odds even if the sources exist; it is the scale and the QA that either are or are not there.

    The arrival of the first wave of Apple Silicon in 2020 led to a very hectic 2021 and beyond, with people rushing in to fix numerous issues, mostly (but not only) in Linux for aarch64, ranging from bugs to unoptimised code. OpenJDK, which had existed for aarch64 for some time, was so unstable that it could not seriously be used natively on aarch64, and it took nearly a year to stabilise it. Hand-optimising OpenSSL, Docker, Linux/aarch64 and many, many other packages also took time.

    It only became possible because of the mass availability of the hardware (performant consumer-level arm64 CPUs), which led to mass adoption of the architecture at the software level. aarch64 has now become a first-class citizen, and Linux as well as other big players (e.g. cloud providers) have vastly benefited from it as a whole. It is far from certain that, without the Apple Silicon catalyst, we would have seen Graviton 4 in 2024 (the 4th gen in just 5 years), large multi-core Ampere CPUs in 2023/24, or even a performant Qualcomm laptop CPU this year.

    Mass hardware availability to lay people, leading to mass adoption by lay people, is critical to the success of a new hardware platform: all of a sudden a very large pool of free QA becomes available, which spurs further interest in improving the software support. Compare it, for instance, with the POWER platform, which is open and whose hardware has been available for quite a while; however, there has been no scale. The end result is that the JIT still yields poor performance in Firefox/ppc64. Embedded people and hardware enthusiasts are not the critical mass that is required to trigger a chain reaction that leads to platform success; it is the lay people incessantly whining about something not working and reporting bugs.

    Then there is also a reason why OpenBSD still holds on to a zoo of ancient, no longer available platforms (including a Motorola 88k) – they routinely compile the newly written code – however many moons it takes them to do it – and run it on the exotic hardware today with the single narrow purpose of trapping bugs, subtle and less subtle ones, caused by architectural differences across the platforms. Such an approach stands in stark contrast to the mass availability one; it does not scale as much, but it is a viable approach, too. And this is why the OpenBSD source code has a much better chance of running flawlessly on a new ISA.
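
    The subtle bugs those exotic platforms trap can hide in a single declaration. One well-known example (a sketch, not taken from OpenBSD itself): whether plain char is signed is implementation-defined in C, typically signed on x86 but unsigned on ARM and PowerPC, so storing getc()'s result in a char breaks differently per ABI.

    ```c
    #include <limits.h>
    #include <stdio.h>

    /* getc() returns an int so that EOF (-1) stays distinguishable from
     * the byte 0xFF. Storing the result in a plain char fails in two
     * different ways: with signed char (x86) a 0xFF byte compares equal
     * to EOF and the loop stops early; with unsigned char (ARM, PowerPC)
     * EOF is truncated to 255, never compares equal, and the loop never
     * terminates. */
    int count_bytes_portable(FILE *fp) {
        int n = 0;
        int c; /* int, not char: this is the whole fix */
        while ((c = getc(fp)) != EOF)
            n++;
        return n;
    }

    int main(void) {
        printf("plain char is %s here\n", CHAR_MIN < 0 ? "signed" : "unsigned");

        FILE *fp = tmpfile();
        if (fp == NULL)
            return 1;
        fputs("ab", fp);
        fputc(0xFF, fp); /* the byte a signed char confuses with EOF */
        rewind(fp);
        printf("portable count: %d\n", count_bytes_portable(fp)); /* 3 on any host */
        fclose(fp);
        return 0;
    }
    ```

    Code with the char-typed loop can pass every test on x86 for years and only misbehave once someone actually runs it on the other ABI, which is exactly what the routine rebuilds are for.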

    Hence, hardware platform adoption is not a simple affair as some enthusiastically try to portray it to be.

    • Embedded has been doing just that for their platforms for ages. They don't care about most of the things you list, though.

      Not that your point is wrong, but for most uses it doesn't matter. It would be better if they had it, but they don't need it.

      1 reply →