
Comment by taeric

4 days ago

Right, but that was kind of my question? What is better about not having a lot of these things?

That is, phrasing it as a dream makes it sound like you imagine it would be better somehow. What would be better?

Think about using a modern x86-64 cpu core to run one process with no operating system. Know exactly what is in the cache. Know exactly what deadlines you can meet, and guarantee them.

It's quite a different thing from running a general-purpose OS that multiplexes each core among multiple processes, with a hardware-walked page table, TLB, etc.

Obviously you know what you prefer for your laptop.

As we get more and more cores, perhaps the system designs that have evolved will head back toward that simplicity somewhat? Anything above x% cpu usage gets its own isolated, uninterrupted core(s)? Uses low-cost IPC? Hard to speculate with any real confidence.
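
Something like this, perhaps. A minimal sketch of the pinning half, using Linux's existing sched_setaffinity; the core number and the isolcpus= reservation are just illustrative:

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        /* Illustrative: imagine core 3 was reserved at boot via isolcpus=3 */
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(3, &set);

        /* pid 0 means "the calling process" */
        if (sched_setaffinity(0, sizeof(set), &set) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("pinned to core 3; the scheduler will not migrate us\n");
        /* ... hot loop with far more predictable cache behavior ... */
        return 0;
    }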

  • I just don't know that I see it running any better for the vast majority of processes I could imagine running on it. I was literally just transcoding some video, playing a podcast, and browsing the web. Would this be any better?

    I think that is largely my qualm with the dream. The only way this really works is if we had never gone with preemptive multitasking, it seems? And that just doesn't seem like a win.

    You do have me curious whether things really do automatically get pinned to a cpu when usage is above a threshold. I know that was talked about some; did we actually start doing that?

    • > I was literally just transcoding some video, playing a podcast, and browsing the web.

      Yeah, that's the perfect use case for the current system design. Nobody sane wants to turn that case into an embedded system running a single process with hard deadline guarantees. Your laptop may not be ideal for controlling a couple of tonnes of steel at high speed, for example. Start thinking about how you would design for that and you'll see the point (whether you want to agree or not).
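
      For a flavor of what that design looks like, here is the classic bare-metal shape: one fixed-period control loop and nothing else. read_sensors and actuate are made-up names, and clock_nanosleep stands in for what would really be a hardware timer tick:

          #include <time.h>

          /* Assumed application hooks; in a real controller these talk
           * to hardware directly and have measured worst-case runtimes. */
          static void read_sensors(void) { /* read inputs */ }
          static void actuate(void)      { /* drive outputs */ }

          int main(void) {
              struct timespec next;
              clock_gettime(CLOCK_MONOTONIC, &next);
              for (;;) {
                  read_sensors();  /* both must finish within the period */
                  actuate();
                  /* advance the deadline by exactly 1 ms */
                  next.tv_nsec += 1000000;
                  if (next.tv_nsec >= 1000000000L) {
                      next.tv_nsec -= 1000000000L;
                      next.tv_sec += 1;
                  }
                  clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
              }
          }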


Things would be simpler, more predictable, and more tractable.

For example, real-time guarantees (hard time constraints on how long a particular type of event will take to process) would be easier to provide.
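
Concretely, the closest a stock Linux box gets today is something like SCHED_FIFO, which bounds interference from ordinary tasks but still isn't a hard guarantee. A minimal sketch (the priority value is arbitrary; this needs root or CAP_SYS_NICE):

    #include <sched.h>
    #include <stdio.h>

    int main(void) {
        /* SCHED_FIFO priorities range 1..99 on Linux; 50 is arbitrary */
        struct sched_param sp = { .sched_priority = 50 };
        if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0) {
            perror("sched_setscheduler");
            return 1;
        }
        /* From here we run until we block or a higher-priority FIFO task
         * preempts us: closer to, but still not, a hard guarantee. */
        return 0;
    }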

  • But why do we think that? The complexity would almost certainly still exist; it would just move up a layer, with no guarantee that you could hit the same performance characteristics we hit today.

    Put another way, if that would truly be a better place, what is stopping people from building it today?

    • Performance wouldn't be the same, and that's why nobody is manufacturing it. The industry prefers living with higher complexity when it yields better performance. That doesn't mean some people, like those in this thread, wouldn't prefer things to be simpler, even at the price of significantly lower performance.

      > The complexity would almost certainly still exist.

      That doesn't follow. A lot of the complexity exists purely to achieve the performance we have; give up that performance target and much of the complexity can go with it.
