Comment by rjsw
I think the difference between the Apple and Xerox approaches may be more complicated than the people at PARC not knowing how to do this. The Alto doesn't have a framebuffer; each window has its own buffer, and the microcode walks the windows to work out what to put on each scanline.
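For intuition, here's a minimal software sketch of that per-scanline walk. The real Alto did this in microcode; the struct layout, the byte-per-pixel buffers, and all the names here are illustrative assumptions, not the actual Alto design:

    #include <stdint.h>
    #include <string.h>

    #define SCREEN_W 606  /* the Alto display was 606 pixels wide */

    /* Illustrative per-window state: each window owns its own buffer. */
    typedef struct Window {
        int x, y, w, h;          /* placement on screen               */
        const uint8_t *pixels;   /* the window's private w*h buffer   */
        struct Window *next;     /* display list, back to front       */
    } Window;

    /* Compose a single scanline on the fly: there is no full-screen
     * framebuffer, only one line's worth of output, filled by walking
     * every window that intersects scanline y. Frontmost windows come
     * last in the list, so they overwrite what's beneath them. */
    void compose_scanline(uint8_t line[SCREEN_W], const Window *windows, int y)
    {
        memset(line, 0, SCREEN_W);                 /* background */
        for (const Window *w = windows; w; w = w->next) {
            if (y < w->y || y >= w->y + w->h)
                continue;                          /* misses this line */
            const uint8_t *src = w->pixels + (size_t)(y - w->y) * w->w;
            for (int i = 0; i < w->w; i++) {
                int sx = w->x + i;
                if (sx >= 0 && sx < SCREEN_W)
                    line[sx] = src[i];
            }
        }
    }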
Not doubting that, but what is the substantive difference here? Does the fact that there is a screen buffer on the Mac facilitate clipping that is otherwise not possible on the Alto?
More details here: https://www.folklore.org/I_Still_Remember_Regions.html
It allows the Mac to use far less RAM to display overlapping windows, and doesn't require any extra hardware. Individual regions are refreshed independently of the rest of the screen, with occlusion, updates, and clipping managed automatically.
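The folklore piece is about QuickDraw's region structure. As a loose approximation (not the real QuickDraw encoding, which is a much more compact scanline/run representation), you can picture a window's visible region as a few disjoint rectangles and every draw call getting intersected against it:

    #include <stdbool.h>

    typedef struct { int left, top, right, bottom; } Rect;

    /* Crude stand-in for a region: a few disjoint rectangles. */
    typedef struct { int count; Rect rects[32]; } Region;

    static bool intersect(Rect a, Rect b, Rect *out)
    {
        out->left   = a.left   > b.left   ? a.left   : b.left;
        out->top    = a.top    > b.top    ? a.top    : b.top;
        out->right  = a.right  < b.right  ? a.right  : b.right;
        out->bottom = a.bottom < b.bottom ? a.bottom : b.bottom;
        return out->left < out->right && out->top < out->bottom;
    }

    static void blit(Rect r)
    {
        (void)r;  /* stand-in: write pixels into the shared framebuffer */
    }

    /* Clip a fill to the window's visible region: a partly covered
     * window only ever touches the framebuffer pixels it actually owns,
     * so every window can share one framebuffer instead of each keeping
     * its own buffer as on the Alto. */
    void fill_rect_clipped(const Region *visible, Rect r)
    {
        for (int i = 0; i < visible->count; i++) {
            Rect clipped;
            if (intersect(r, visible->rects[i], &clipped))
                blit(clipped);
        }
    }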
Yeah, it seems like the hard part of this problem isn't merely coming up with a solution that is technically correct, but one that is also efficient enough to actually be useful. Throwing specialized or more expensive hardware at a problem like this is a valid approach, but all else being equal, having a lower hardware requirement is better.
So when the OS needs to refresh a portion of the screen (e.g. everything behind a top window that was closed), what happens?
My guess is it asks each application that overlapped those areas to redraw only those areas (in case the app can be smart about redrawing incrementally), and also clips the following redraw so that any draw operations issued by the app can be culled. If an app isn't smart and just redraws everything, the clipping can still eliminate a lot of the draw calls.
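A sketch of that guess, with invented names (this isn't the actual Mac toolbox API, just the dirty-area-plus-clip flow described above):

    #include <stdbool.h>

    typedef struct { int left, top, right, bottom; } Rect;

    typedef struct Win {
        Rect bounds;
        Rect dirty;       /* pending update area */
        bool has_dirty;
        void (*redraw)(struct Win *, Rect clip);  /* app-supplied callback */
        struct Win *below;
    } Win;

    static bool intersect(Rect a, Rect b, Rect *out)
    {
        out->left   = a.left   > b.left   ? a.left   : b.left;
        out->top    = a.top    > b.top    ? a.top    : b.top;
        out->right  = a.right  < b.right  ? a.right  : b.right;
        out->bottom = a.bottom < b.bottom ? a.bottom : b.bottom;
        return out->left < out->right && out->top < out->bottom;
    }

    static Rect union_rect(Rect a, Rect b)
    {
        Rect u = { a.left < b.left ? a.left : b.left,
                   a.top  < b.top  ? a.top  : b.top,
                   a.right  > b.right  ? a.right  : b.right,
                   a.bottom > b.bottom ? a.bottom : b.bottom };
        return u;
    }

    /* A window covering `closed` went away: invalidate that area in
     * every window beneath it, then ask each affected app to redraw
     * only its dirty area. The system clips to `dirty`, so even an app
     * that naively redraws everything has its out-of-range draws
     * culled. */
    void repaint_after_close(Rect closed, Win *stack)
    {
        for (Win *w = stack; w; w = w->below) {
            Rect d;
            if (!intersect(closed, w->bounds, &d))
                continue;
            w->dirty = w->has_dirty ? union_rect(w->dirty, d) : d;
            w->has_dirty = true;
        }
        for (Win *w = stack; w; w = w->below) {
            if (w->has_dirty) {
                w->redraw(w, w->dirty);  /* clip set to the dirty rect */
                w->has_dirty = false;
            }
        }
    }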
It definitely makes it simpler. You can do a per-screen window sort, rather than per-pixel :).
Per-pixel sorting while racing the beam is tricky; game consoles usually did it by limiting the number of objects (sprites) per line, and fetching+caching them before the line is reached.
I remember coding games for the C64 with its 8-sprite limit, and having to swap sprites in and out between the top and bottom halves of the screen to get more than 8 (roughly the per-line selection sketched below).
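Roughly what that per-line limit looks like, as a sketch (names invented; real hardware latches the sprites for a line during horizontal blank):

    #include <stddef.h>

    #define MAX_PER_LINE 8  /* e.g. the C64's hardware sprite limit */

    typedef struct { int x, y, h; } Sprite;

    /* Before each scanline is painted, latch at most MAX_PER_LINE
     * sprites that intersect it; any further sprites on that line
     * simply don't appear, which is why multiplexers rewrite sprite Y
     * coordinates mid-frame to reuse the 8 slots lower down the screen. */
    size_t select_sprites_for_line(const Sprite *all, size_t n, int line,
                                   const Sprite *active[MAX_PER_LINE])
    {
        size_t count = 0;
        for (size_t i = 0; i < n && count < MAX_PER_LINE; i++)
            if (line >= all[i].y && line < all[i].y + all[i].h)
                active[count++] = &all[i];
        return count;
    }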
Displaying graphics (of any kind) without a framebuffer is called "racing the beam" and is technically quite difficult: you have to manage the real-world speed of the electron beam against the CPU clock ... as in, if you tax the CPU too much, the beam goes by and you've missed it ...
The very characteristic horizontally stretched graphics of the Atari 2600 are due to this: the CPU was, in a sense, too slow for the electron beam, which meant your horizontal graphic elements had a fairly large minimum width - you couldn't change the output fast enough.
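To put rough numbers on it (give or take exact cycle counts): the 2600's TIA paints pixels at the ~3.58 MHz color clock while the 6502-family CPU runs at a third of that, ~1.19 MHz, so the beam covers 3 pixels per CPU cycle. A scanline gives you only 76 CPU cycles for 160 visible pixels, and with the fastest store taking a few cycles, the narrowest mid-line change you can make is on the order of ten pixels wide.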
I strongly recommend:
https://en.wikipedia.org/wiki/Racing_the_Beam
... which goes into great detail on this topic and is one of my favorite books.
Frame buffer memory was still incredibly expensive in 1980. Our lab's 512 x 512 x 8-bit table-lookup color buffer cost $30,000 in 1980. The Mac's 512 x 384 x 8-bit buffer in 1984 had to fit the Mac's $2500 price. The Xerox Alto was earlier than these two devices and would have cost even more if it had a full frame buffer.
Wasn’t the original Mac at 512 x 342 x 1bit?
Yes: https://news.ycombinator.com/item?id=44110219
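For scale: 512 x 342 x 1 bit is 175,104 bits, about 21 KB, roughly a sixth of the original 128K Mac's RAM; an 8-bit buffer at the lab resolution above (512 x 512) comes to 256 KB, twice that machine's entire memory.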
Reminds me of a GPU's general workflow. (Like the sibling comment asks: isn't this the obvious way it's done? Different drawing areas being handled by 'firmware'/'software' renderers.)