Comment by deviantbit
1 year ago
"I believe there are two main things holding it back."
He really science’d the heck out of that one. I’m getting tired of seeing opinions dressed up as insight—especially when they’re this detached from how real systems actually work.
I worked on the Cell processor and I can tell you it was a nightmare. It demanded an unrealistic amount of micromanagement and gave developers rope to hang themselves with. There’s a reason it didn’t survive.
What amazes me more is the comment section—full of people waxing nostalgic for architectures they clearly never had to ship stable software on. They forget why we moved on. Modern systems are built with constraints like memory protection, isolation, and stability in mind. You can’t just “flatten address spaces” and ignore the consequences. That’s how you end up with security holes, random crashes, and broken multi-tasking. There's a whole generation of engineers that don't seem to realize why we architected things this way in the first place.
I will take how things are today over how things used to be in a heartbeat. I really believe I need to spend two weeks requiring students to write code on an Amiga, with all their programs running at the same time. If any one of them crashes, they all fail my course. A newfound appreciation may flourish.
One of the most important steps of my career was being forced to write code for an 8051 microcontroller. Then writing firmware for an ARM microcontroller to make it pretend it was that same 8051 microcontroller.
I was made to witness the horrors of archaic computer architecture in such depth that I could reproduce them on totally unrelated hardware.
I tell students today that the best way to learn is by studying the mistakes others have already made. Dismissing the solutions they found isn’t being independent or smart; it’s arrogance that sets you up to repeat the same failures.
Sounds like you had a good mentor. Buy them lunch one day.
I had a similar experience. Our professor in high school would have us program a Z80 system entirely by hand: flow chart, assembly code, computing jump offsets by hand, writing the hex code by hand (looking up opcodes from the Z80 data sheet), and then loading the opcodes one byte at a time on a hex keypad.
It took three hours and four of us to code an integer division start to finish (we were like 17, though).
The amount of understanding it gave has been unrivalled so far.
> I worked on the Cell processor and I can tell you it was a nightmare. It demanded an unrealistic amount of micromanagement and gave developers rope to hang themselves with.
So the designers of the Cell processor made some mistakes and therefore the entire concept is bunk? Because you've seen a concept done badly, you can't imagine it done well?
To be clear, I'm not criticising those designers, they probably did a great job with what they had, but technology has moved on a long way from then... The theoretical foundations for memory models, etc. are much more advanced. We've figured out how to design languages to be memory safe without significantly compromising on performance or usability. We have decades of tooling for running and debugging programs on GPUs and we've figured out how to securely isolate "users" of the same GPU from each other. Programmers are as abstracted from the hardware as they've ever been with emulation of different architectures so fast that it's practical on most consumer hardware.
None of the things you mentioned are inherently at odds with more parallel computation. Whether something is a good idea can change. At one point in time electric cars were a bad idea. Decades of incremental improvements to battery and motor technology means they're now pretty practical. At one point landing and reusing a rocket was a bad idea. Then we had improvements to materials science, control systems, etc. that collectively changed the equation. You can't just apply the same old equation and come to the same conclusion.
> and we've figured out how to securely isolate "users" of the same GPU from each other
That's the problem, isn't it.
I don't want my programs to act independently; they need to exchange data with each other (copy-paste, drag and drop). Also, I cannot do many things in parallel. Some things must be done sequentially.
[flagged]
> There's a whole generation of engineers that don't seem to realize why we architected things this way in the first place.
Nobody teaches it, and nobody writes books about it (not that anyone reads anymore)
So, there are books out there. I use Computer Architecture: A Quantitative Approach by Hennessy and Patterson. Recent revisions have removed the historical material, and I understand why. I wanted to use Stallings' book, but the department had already made arrangements with the publisher.
The biggest reason we don't write books is that people don't buy them. They take the PDF and stick it on GitHub. Publishers don't respond to authors' takedown requests, and GitHub doesn't care about authors, so why spend the time publishing a book? We can chase grant money instead. I'm fortunate enough to not have to chase grant money.
While financial incentives are important to some, a lot of people write books to share their knowledge and give the book away for free. I think more people are doing this now, and there are also open collaborative textbook projects.
And I personally think it is weird to write books during your working hours and also get money from selling that book.
> What amazes me more is the comment section—full of people waxing nostalgic for architectures they clearly never had to ship stable software on.
Isn't it much more plausible that the people who love to play with exotic (or retro), complicated architectures (which in this case offer high-performance opportunities) are different people from those who love to "set up or work in an assembly line for shipping stable software"?
> I really believe I need to spend 2-weeks requiring students write code on an Amiga, and the programs have to run at the same time. If anyone of them crashes, they all will fail my course. A new found appreciation may flourish.
I rather believe that among those who love this kind of programming, a hatred of incompetent fellow students would develop (including wishes that they be weeded out by brutal exams).
The problem is that the exotic complexity enthusiasts cluster in places like HN and sometimes they overwhelm the voices of reason.
Those students would all drop out and start meditating. That would be a fun course. Speed run developing for all the prickly architectures of the 80s and 90s.
I see what you did there.
Guru meditation, for the uninitiated.
> They forget why we moved on. Modern systems are built with constraints like memory protection, isolation, and stability in mind. You can’t just “flatten address spaces” and ignore the consequences.
Is there any reason why GPU-style parallelism couldn't have memory protection?
It does. GPUs have full MMUs.
They do? Then how do I do the forbidden stuff by accessing neighboring pixel data?
I loved and really miss the Cell. It did take quite a bit of work to shuffle things in and out of the SPUs correctly (so yes, writing code took much longer and required greater care), but it really churned through data.
We had a generic job mechanism with the same restrictions on all platforms. This usually meant if it ran at all on Cell it would run great on PC because the data would generally be cache friendly. But it was tough getting the PowerPC to perform.
I understand why the PS4 was basically a PC after that: it's easier. But I wish there were still SPUs off to the side to take advantage of. I'd be happy to have them off-die, the way GPUs are.
On flattening address spaces: the road not taken here is to run everything in something akin to the JVM, CLR, or WASM. Do that stuff in software, not hardware.
You could also do things like having the JIT optimize the entire running system dynamically like one program, eliminating syscall and context switch overhead not to mention most MMU overhead.
Would it be faster? Maybe. The JIT would have to generate its own safety and bounds checking stuff. I’m sure some work loads would benefit a lot and others not so much.
What it would do is allow CPUs to be simpler, potentially resulting in cheaper lower power chips or more cores on a die with the same transistor budget. It would also make portability trivial. Port the core kernel and JIT and software doesn’t care.
> On flattening address spaces: the road not taken here is to run everything in something akin to the JVM, CLR, or WASM.
GPU drivers take SPIR-V code (either "kernels" for OpenCL/SYCL drivers, or "shaders" for Vulkan compute), which is not that different, at least in principle. There is also an LLVM-based software implementation that will just compile your SPIR-V code to run directly on the CPU.
We end up relying on software for this so much anyway. Your examples plus the use of containers and the like at OS level.
"The birth and death of JavaScript"
[flagged]
What the ever-loving hell, it was a perfectly reasonable idea in response to another idea.
They weren't saying it should be done, and went out of the way to make it explicit that they are not claiming it would be better.
It was a thought exploration, and a valid one, even if it would not pan out if carried all the way to execution at scale. Yes it was handwaving. So what? All ideas start as mere thoughts, and it is useful, productive, and interesting to trade them back and forth in these things called conversations. Even "fantasy" and "handwavy" ones. Hell especially those. It's an early stage in the pollination and generation of new ideas that later become real engineering. Or not, either way the conversation and thought was entertaining. It's a thing humans do, in case you never met any or aren't one yourself.
The brainstorming was a hell of a lot more valid, interesting, and valuable than this shit. "Just go away" indeed.
I'm going to call this out. The entire post obviously has bucketloads of aggression, which could be taken as just a communication style, but the last line was uncalled for.
I have seen you make high quality responses to crazy posts.
Do better.
Don't worry, with LLMs, we're moving away from anything that remotely looks like "stable software" :)
Also, yeah, I recall the dreaded days of cooperative multitasking between apps. Moving from Windows 3.x to Linux was a revelation.
With LLMs it is just more visible. When the age of "updates" began, the age of stable software died.
True. The quality of code yielded by LLMs would have been deemed entirely unacceptable 30 years ago.
> I really believe I need to spend 2-weeks requiring students write code on an Amiga, and the programs have to run at the same time. If anyone of them crashes, they all will fail my course.
Fortran is memory-safe, right? ;-)
[dead]