Comment by Diggsey

1 year ago

> I worked on the Cell processor and I can tell you it was a nightmare. It demanded an unrealistic amount of micromanagement and gave developers rope to hang themselves with.

So the designers of the Cell processor made some mistakes and therefore the entire concept is bunk? Because you've seen a concept done badly, you can't imagine it done well?

To be clear, I'm not criticising those designers — they probably did a great job with what they had — but technology has moved on a long way since then. The theoretical foundations for memory models, etc. are much more advanced. We've figured out how to design languages to be memory safe without significantly compromising on performance or usability. We have decades of tooling for running and debugging programs on GPUs, and we've figured out how to securely isolate "users" of the same GPU from each other. Programmers are more abstracted from the hardware than ever: emulation of different architectures is now fast enough to be practical on most consumer hardware.

None of the things you mentioned are inherently at odds with more parallel computation. Whether something is a good idea can change. At one point in time electric cars were a bad idea. Decades of incremental improvements to battery and motor technology means they're now pretty practical. At one point landing and reusing a rocket was a bad idea. Then we had improvements to materials science, control systems, etc. that collectively changed the equation. You can't just apply the same old equation and come to the same conclusion.

> and we've figured out how to securely isolate "users" of the same GPU from each other

That's the problem, isn't it?

I don't want my programs to act independently; they need to exchange data with each other (copy-paste, drag and drop). Also, I cannot do many things in parallel. Some things must be done sequentially.
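The intuition that sequential steps cap the benefit of parallelism is usually formalized as Amdahl's law. A minimal sketch (not from the original comments, just an illustration of the point):

```python
# Amdahl's law: if a fraction p of a task can be parallelized across
# n workers, the remaining (1 - p) stays sequential and bounds the speedup.
def amdahl_speedup(p: float, n: int) -> float:
    return 1.0 / ((1.0 - p) + p / n)

# A task that is 90% parallelizable gets roughly 3.08x on 4 workers...
print(amdahl_speedup(0.9, 4))
# ...and even with a million workers it can never exceed ~10x,
# because the 10% sequential part dominates.
print(amdahl_speedup(0.9, 1_000_000))
```

So both commenters have a point: more parallel hardware helps only the parallelizable fraction, which is why the sequential parts of a workflow still matter.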