Comment by rjsw
6 months ago
I think the difference between the Apple and Xerox approaches may be more complicated than the people at PARC simply not knowing how to do this. The Alto doesn't have a framebuffer; each window has its own buffer, and the microcode walks the windows to work out what to put on each scanline.
Not doubting that, but what is the substantive difference here? Does the fact that there is a screen buffer on the Mac facilitate clipping that is otherwise not possible on the Alto?
More details here: https://www.folklore.org/I_Still_Remember_Regions.html
It allows the Mac to use far less RAM to display overlapping windows, and doesn't require any extra hardware. Individual regions are refreshed independently of the rest of the screen, with occlusion, updates, and clipping managed automatically.
Yeah, it seems like the hard part of this problem isn't merely coming up with a solution that is technically correct, but one that is also efficient enough to be actually useful. Throwing specialized or more expensive hardware at a problem like this is a valid approach, but all else being equal, a lower hardware requirement is better.
So when the OS needs to refresh a portion of the screen (e.g. everything behind a top window that was closed), what happens?
My guess is that it asks each application that overlapped those areas to redraw only those areas (in case the app is able to be smart about redrawing incrementally), and also clips the subsequent redraw so that any draw operations the app issues outside the damaged area can be culled. If an app isn't smart and just redraws everything, the clipping can still eliminate a lot of the draw calls.
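That guess can be sketched in a few lines. This is a hypothetical illustration with invented names, using plain rectangles where QuickDraw's regions can be arbitrary shapes:

```python
# Hypothetical sketch of damage-driven redraw with clipping. All names are
# invented; rectangles stand in for QuickDraw's arbitrary-shape regions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    left: int
    top: int
    right: int
    bottom: int

    def intersect(self, other: "Rect"):
        l, t = max(self.left, other.left), max(self.top, other.top)
        r, b = min(self.right, other.right), min(self.bottom, other.bottom)
        return Rect(l, t, r, b) if l < r and t < b else None

class Window:
    def __init__(self, frame, draw_calls):
        self.frame = frame            # window's position on screen
        self.draw_calls = draw_calls  # rects the app would draw (screen coords)

    def redraw(self, damage):
        # The OS hands the app only the damaged area; every draw call is
        # clipped against it, so calls entirely outside are culled.
        clip = self.frame.intersect(damage)
        if clip is None:
            return []
        visible = []
        for call in self.draw_calls:
            clipped = call.intersect(clip)
            if clipped is not None:   # culled when there is no overlap
                visible.append(clipped)
        return visible

# A top window was closed, exposing this area of the screen:
damage = Rect(50, 50, 150, 150)
win = Window(Rect(0, 0, 200, 200),
             [Rect(10, 10, 40, 40),       # entirely outside damage: culled
              Rect(100, 100, 180, 180)])  # partially inside: clipped
print(win.redraw(damage))
```

Even a "dumb" app that replays all of its draw calls only pays for the ones that survive the clip.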
Displaying graphics (of any kind) without a framebuffer is called "racing the beam" and is technically quite difficult: it involves synchronizing the real-world speed of the electron beam with the CPU clock speed ... as in, if you tax the CPU too much, the beam goes by and you've missed it ...
The very characteristic horizontally stretched graphics of the Atari 2600 are due to this: the CPU was, in a sense, too slow for the electron beam, which meant your horizontal graphic elements had a fairly large minimum width. You couldn't change the output fast enough.
I strongly recommend:
https://en.wikipedia.org/wiki/Racing_the_Beam
... which goes into great detail on this topic and is one of my favorite books.
It definitely makes it simpler. You can do a per-screen window sort, rather than per-pixel :).
Per-pixel sorting while racing the beam is tricky; game consoles usually did it by limiting the number of objects (sprites) per line, and fetching and caching them before the line is reached.
I remember coding games for the C64 with an 8 sprite limit, and having to swap sprites in and out for the top and bottom half of the screen to get more than 8.
Frame buffer memory was still incredibly expensive in 1980. Our lab's 512 x 512 x 8-bit table-lookup color buffer cost $30,000 in 1980. The Mac's 512 x 384 x 8-bit buffer in 1984 had to fit the Mac's $2,500 price. The Xerox Alto was earlier than both of these devices and would have cost even more if it had a full frame buffer.
Wasn’t the original Mac at 512 x 342 x 1bit?
Yes: https://news.ycombinator.com/item?id=44110219
The Alto created the image from a display list, like the Atari 800 or the Amiga. So you could have a wider rectangle on most of the screen for pictures and a narrower rectangle at the bottom for displaying status. It was not up to showing overlapping windows. Nearly all applications just set things to one rectangle, which gave them a frame buffer in practice. This was the case for Smalltalk, which is where Bill saw the overlapping windows. One problem was that filling up the whole screen (606x808) used up half of the memory and slowed down user code, so Smalltalk-72 reduced this to 512x684 to get back some memory and performance.
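A minimal sketch of that display-list scanning, with invented names and band sizes (only the 606x808 screen size comes from the comment above):

```python
# Hypothetical sketch of display-list scanning: instead of one big framebuffer,
# each horizontal band of the screen points at its own buffer, and the display
# microcode walks the list to decide what feeds each scanline. The band
# boundaries, widths, and buffer names here are invented for illustration.

# Each entry: (first_line, last_line, width_in_pixels, backing_buffer_name)
display_list = [
    (0,   767, 606, "document_bitmap"),  # wide band for the page image
    (768, 807, 404, "status_bitmap"),    # narrower band for status text
]

def source_for_scanline(line):
    """Walk the display list and report which buffer feeds this scanline."""
    for first, last, width, buffer in display_list:
        if first <= line <= last:
            return buffer, width
    return None, 0  # beam is outside every band: emit background

print(source_for_scanline(100))  # -> ('document_bitmap', 606)
print(source_for_scanline(800))  # -> ('status_bitmap', 404)
```

Stacked non-overlapping bands like these are easy to walk per scanline; arbitrarily overlapping windows would force a much harder per-pixel decision, which is why most applications just used one full-screen rectangle.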
The Smalltalk-76 MVC user interface that the Apple people saw only ever updated the topmost window which, by definition, was not clipped by any other window. If you brought some other window to the front it would only then be updated. But since nothing ran in the background it was easy to get the wrong impression that the partially visible windows were being handled.
Bill's solution had two parts. One was regions, as several other people have explained: they allowed drawing to a background window while clipping to any overlapping windows in front of it. The second was PICTs, where applications did not directly draw to their windows but instead created a structure (which could be a file) with a list of drawing commands, which was then passed to the operating system for the actual drawing. You could do something like "open PICT, fill background with grey pattern, draw white oval, draw black rectangle, close PICT". Now if the window was moved, the OS could recalculate all the regions of the new configuration and re-execute all the PICTs to update any newly exposed areas. If the application chose to instead draw its own pixels (a game, for example) then the OS would insert a warning into the app's event queue that it should fix its window contents.
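The record-and-replay idea behind PICTs can be sketched like this. The class, command names, and logging are all invented for illustration; real PICTs are a binary format of QuickDraw opcodes:

```python
# Hypothetical sketch of the PICT idea: the app records drawing commands
# instead of touching pixels, so the OS can replay the same list whenever part
# of the window is newly exposed. All names here are invented.

executed_log = []

def execute(cmd, *args):
    # Stand-in for the OS rasterizing a command, clipped to the visible region.
    executed_log.append((cmd, args))

class Pict:
    """A recorded list of drawing commands, replayable at any time."""
    def __init__(self):
        self.commands = []

    def record(self, cmd, *args):
        self.commands.append((cmd, args))

    def replay(self):
        for cmd, args in self.commands:
            execute(cmd, *args)

# "open PICT, fill background with grey, draw white oval, draw black rectangle"
pict = Pict()
pict.record("fill_background", "grey")
pict.record("draw_oval", "white")
pict.record("draw_rect", "black")

pict.replay()             # initial draw
executed_log.clear()
pict.replay()             # window moved: OS re-executes the same PICT
print(len(executed_log))  # -> 3
```

The key property is that the OS, not the app, decides when and where the commands run, so newly exposed areas can be repainted without the app's involvement.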
In parallel with Bill's work (perhaps a little before it) we had Rob Pike's Blit terminal (commercially released in 1982), which added windows to Unix machines. It had the equivalent of regions (less compact, however) but used a per-window buffer, so the terminal would have somewhere to copy newly exposed pixels from.
Reminds me of a GPU's general workflow (like the sibling comment asking "isn't that the obvious way this is done?"): different drawing areas being hit by "firmware"/"software" renderers?