Comment by Lerc
7 hours ago
I'm interested in the idea of a clean slate hardware/software system. I think being constrained to support existing hardware or software reduces opportunities for innovation on the other.
I don't see that in this project. It isn't defined by a clean slate; it is defined by properties it does not want to have.
Off the top of my head I can think of a bunch of hardware architectures that would require all-new software. There would be amazing opportunities for discovery in writing software for these things. The core principles of the software for such a machine could be based upon a solid philosophical consideration of what a computer should be: not just "one that doesn't have social media", but what the user truly needs. This is not a simple problem. If it should facilitate but also protect, when should it say no?
If software can run other software, should there be an independent notion of how that software should be facilitated?
What should happen when the user directs two pieces of software to perform contradictory things? What gets facilitated, and what gets disallowed?
I'd love to see some truly radical designs. Perhaps a model where processing and memory are one: a very simple core per 1k of SRAM per 64k of DRAM per megabyte of flash; machines with 2^n cores where each core has a direct data channel to every core whose n-bit core ID is one bit different (plus one for all bits different).
An n=32 system would have four billion cores, 4 terabytes of RAM, and nearly enough persistent storage, but it would take talking through up to 15 intermediaries to communicate between any two arbitrary cores.
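A rough sketch of that wiring rule and the hop count it implies (Python, purely illustrative, not from any real design; the IDs, link rule, and n=32 are just the numbers from above):

    # Sketch of the proposed topology: n-bit core IDs, one link per
    # single-bit difference, plus a link to the all-bits-different core.

    def neighbors(core_id: int, n: int) -> list[int]:
        links = [core_id ^ (1 << bit) for bit in range(n)]  # flip one bit
        links.append(core_id ^ ((1 << n) - 1))              # flip every bit
        return links

    def hops(src: int, dst: int, n: int) -> int:
        # Route one differing bit at a time, or jump across on the complement link first.
        d = bin(src ^ dst).count("1")
        return min(d, 1 + (n - d))

    n = 32
    print(len(neighbors(0, n)))  # 33 direct channels per core
    print(max(hops(0, (1 << d) - 1, n) for d in range(n + 1)))  # 16 hops worst case, i.e. 15 intermediaries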
You could probably start with a much lower n, then consider how to write software for it that meets the principles and criteria of how it should behave.
Different, clean slate, not easy.
Clean slate designs with arbitrarily radical architectures are easy when you don’t have to actually build them.
There are reasons that current architectures are mostly similar to each other, having evolved over decades of learning and research.
> Perhaps a model where processing and memory are one: a very simple core per 1k of SRAM per 64k of DRAM per megabyte of flash;
To serve what goal? Such a design certainly wouldn’t be useful for general purpose computing and it wouldn’t even serve current GPU workloads well.
Any architecture that requires extreme overhauls of how software is designed and can only benefit unique workloads is destined to fail. See Itanium for a much milder example that still couldn’t work.
> machines with 2^n cores where each core has a direct data channel to every core whose n-bit core ID is one bit different (plus one for all bits different).
Software isn’t the only place where big-O scaling is relevant.
Fully connected graph topologies are great on paper, but the number of connections scales quadratically. For a 64-core fully connected CPU topology you would need 2,016 separate data buses.
Those data buses take up valuable space. Worse, the majority of them are going to be idle most of the time. It’s extremely wasteful. The die area would be better used for anything else.
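For reference, that 2,016 figure is just the pairwise count, assuming one dedicated bus per unordered pair of cores:

    # Fully connected: one dedicated bus per unordered pair of cores.
    cores = 64
    print(cores * (cores - 1) // 2)  # 2016 buses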
> A n=32 system would have four billion cores
A four billion core system would be the poster child for Amdahl’s law and a great example of how not to scale compute.
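To put a number on that, here is the standard Amdahl's law calculation; the 99.9% parallel fraction is an arbitrary (and generous) assumption for illustration:

    # Amdahl's law: speedup = 1 / ((1 - p) + p / N) for parallel fraction p on N cores.
    p, cores = 0.999, 2**32
    print(round(1 / ((1 - p) + p / cores)))  # ~1000x, no matter how many billions of cores you add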
Let’s not be so critical of companies trying to make practical designs.
> Software isn’t the only place where big-O scaling is relevant.
> Fully connected graph topologies are great on paper, but the number of connections scales quadratically. For a 64-core fully connected CPU topology you would need 2,016 separate data buses.
Nitpick: I don't think the comment you're replying to is proposing a fully-connected graph. It's proposing a hypercube topology, in which the number of connections per CPU scales logarithmically. (And with each node also connected to its diagonal opposite, but that doesn't significantly change the scaling.)
If my math is right, a 64-core system with this topology would have only 224 connections.
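Working it out: each of the 64 cores has log2(64) = 6 hypercube links plus one diagonal link, and each link is shared by two cores.

    # Hypercube plus diagonal-opposite links, each counted once per pair of cores.
    cores, dims = 64, 6            # 2**6 = 64
    hypercube = cores * dims // 2  # 192 links
    diagonal = cores // 2          # 32 links
    print(hypercube + diagonal)    # 224 connections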
This is what I meant. I also like the idea of optical, line-of-sight connections. If you do the hypercube topology, everything a node connects to has a different parity, so you can lay them out on two panels facing each other.
Perhaps not a true counterpoint, but there are systems like the GA144, an array of 144 Forth processors.
I think you're missing the point, and I don't think OP is "being critical of companies making practical designs."
Also, I think OP was imagining some kind of tree-based topology, not a fully connected graph, since he said:
> ...but it would take talking through up to 15 intermediaries to communicate between any two arbitrary cores.
Are you aware of anyone who has used that system outside of a hobbyist buying the dev board? I looked into it and the ideas were cool, but I had no clue how to actually do anything with it.
Thanks for these thoughts -- I agree in principle, but we have to juggle a couple of things here: while Radiant is in some ways an experiment, it isn't a research project. There are enough "obvious" things we can do better this time around, given everything we've learned as an industry, that I wouldn't want to leapfrog over this next milestone in personal computer evolution and end up building something a little too unfamiliar to be useful.
In that case I think the best advice I can give is to focus less on features you dislike in other things and consider the problems caused by those things. Without being encumbered by legacy requirements you are free to make any changes you want, but each part you change adds workload. Start at the top of each symptomatic feature and work your way down until you can change the part that causes the symptoms. Some things might require going down to the core. Some could be fixed with top-level changes. Focus on finding what makes things bad (and why) instead of identifying bad things.
That's a nice approach, thanks for the advice.
BeOS was one such clean-slate system.
When Apple was looking for its "next generation" OS, everyone assumed that Gassée and BeOS were going to be it, but they chose Jobs and the legacy, BSD-derived NeXTSTEP.
I know that "old is bad," in today's tech world, but, speaking only for myself, I'm glad they made the decision they did. BeOS was really cool, but it was too new.