Comment by mpweiher
1 month ago
Yes. The point is that the hardware designers were wrong in thinking that the segmented model was the right one.
The hardware designers kept enabling complex segmented models using elaborate segment machinery. Operating system designers pinned the segments to a flat layout as soon as the hardware made that possible, in order to get a flat (paged) memory model, and never looked back.
But were the software people actually right, or did they just follow the well-trodden path of VMS / UNIX, instead of making full use of the x86 hardware?
Having separate segments for every object is problematic because of pointer size and limited number of selectors, but even 3 segments for code/data/stack would have eliminated many security bugs, especially at the time when there was no page-level NX bit. For single-threaded programs, the data and stack segment could have shared the same address space but with a different limit (and the "expand-down" bit set), so that 32-bit pointers could reach both using DS, while preventing [SS:EBP+x] from accessing anything outside the stack.
Inasmuch as hardware exists to run software, software is the customer, and the hardware people were wrong by definition: they created a product that their customers weren't asking for, didn't want, and had no use for.
Might segmentation have been better if the software had wanted it? Well, it's a counterfactual, so in some sense we can't know. We can argue about why we believe one model or the other is better, but the evidence seems pretty overwhelming: it's not that there weren't (and aren't) operating systems that use segmentation, but somehow their "better" memory model never took the world by storm.