
Comment by pavon

12 days ago

The article didn't nail down an exact reason. Here is my guess. The quote from Andy Hertzfeld suggests the limiting factor was the memory bandwidth, not the memory volume:

> The most important decision was admitting that the software would never fit into 64K of memory and going with a full 16-bit memory bus, requiring 16 RAM chips instead of 8. The extra memory bandwidth allowed him to double the display resolution, going to dimensions of 512 by 342 instead of 384 by 256

If you look at the specs for the machine, you see that during an active scan line, the video is using exactly half of the available memory bandwidth, with the CPU able to use the other half (during horizontal and vertical blanking periods the CPU can use the entire memory bandwidth)[1]. That dictated the scanline duration.

If the computer had any more scan lines, something would have had to give, as every nanosecond was already accounted for[2]. The refresh rate would have had to be lower, the blanking periods shorter, the memory bandwidth higher, or the memory bandwidth divided unevenly between the CPU and video, which was probably harder to implement. I don't know which of those they would have been able to adjust and which were hard requirements of the hardware they could find, but I'm guessing they couldn't do 384 scan lines given the memory bandwidth of the RAM chips and the blanking times of the CRT they selected, if they wanted to hit 60 Hz.
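
To put rough numbers on that, here is a minimal sketch in Python. The timing constants are my assumptions, taken from the commonly cited Mac 128K figures in references like [1] (15.6672 MHz pixel clock, i.e. twice the 7.8336 MHz CPU clock; 704 pixel times per line with 512 visible; 370 lines per frame with 342 visible), not from this comment:

    # Sanity-check the "video takes half the bus during active scan" claim.
    # All timing constants below are assumed Mac 128K figures.
    PIXEL_CLOCK = 15_667_200                        # Hz; 2x the 7.8336 MHz CPU clock
    PIXELS_PER_LINE, VISIBLE_PIXELS = 704, 512
    LINES_PER_FRAME, VISIBLE_LINES = 370, 342

    line_rate = PIXEL_CLOCK / PIXELS_PER_LINE       # ~22.25 kHz
    frame_rate = line_rate / LINES_PER_FRAME        # ~60.15 Hz

    # Each 16-pixel word time is 8 CPU clocks, i.e. two 4-clock bus slots.
    # During the visible pixels the video fetch takes one of the two slots,
    # so the CPU gets exactly half; during blanking it gets both.
    slots_per_line = 2 * PIXELS_PER_LINE // 16      # 88 bus slots per line
    video_slots_per_line = VISIBLE_PIXELS // 16     # 32 video fetches per line

    video_share_active = video_slots_per_line / (2 * (VISIBLE_PIXELS // 16))
    video_share_frame = (video_slots_per_line * VISIBLE_LINES) / (slots_per_line * LINES_PER_FRAME)

    print(f"{line_rate / 1e3:.2f} kHz lines, {frame_rate:.2f} Hz frames")
    print(f"video: {video_share_active:.0%} of active scan, {video_share_frame:.0%} of the whole frame")

Under those assumptions the video fetches eat half the bus while the beam is drawing, but only about a third of the frame's total bus slots, which is why the blanking periods matter so much to the budget.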

[1]https://archive.org/details/Guide_to_the_Macintosh_Family_Ha...

[2]https://archive.org/details/Guide_to_the_Macintosh_Family_Ha...

A lot of those old machines had clock speeds and video pixel rates that meshed together. On some color machines the system clock was an integer multiple of the standard colorburst frequency.

The Timex Sinclair did all of its computation during the blanking interval which is why it was so dog slow.

  • The Commodore Amigas had their 68k clock speed differ by region because of the color carrier frequency difference (specifically, 2x the carrier frequency for NTSC and 1.6x for PAL, which gives almost, but not quite, the same clock speed); the sketch just below this sub-thread runs the numbers.

    It's interesting how the differing vertical resolutions between the two (200p/400i vs 256p/512i) also had secondary effects on software design: it was always easy to tell whether a game was made in an NTSC region or with global releases in mind, because in PAL the bottom 20% of the screen was black.

  • To save the curious a search: the Timex Sinclair is the American variant of the ZX Spectrum.

    • The ZX Spectrum did have (primitive) video hardware. The GP commenter means the ZX80 and ZX81, which used the Z80 CPU itself to generate the display and so really were unable to both "think" and draw the screen at the same time. On the ZX81 there were two modes, SLOW and FAST. In FAST mode the Z80 prioritized computation over generating the display, so the display would go fuzzy grey while a program was running, then reappear when the program ended or was waiting for keyboard input.

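Running the numbers on the Amiga clocks mentioned above (a small sketch; the carrier frequencies are the standard NTSC colorburst and PAL color subcarrier values, assumed here rather than taken from the comment):

    # Amiga 68k clocks derived from the color carrier, per the comment above.
    NTSC_COLORBURST = 3_579_545              # Hz (315/88 MHz)
    PAL_SUBCARRIER = 4_433_618.75            # Hz

    ntsc_cpu = 2.0 * NTSC_COLORBURST         # ~7.159 MHz
    pal_cpu = 1.6 * PAL_SUBCARRIER           # ~7.094 MHz
    print(ntsc_cpu / 1e6, pal_cpu / 1e6)
    print(f"difference: {ntsc_cpu / pal_cpu - 1:.1%}")   # ~0.9%, close but not the same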

Displays are still bandwidth killers today; we kept scaling them up with everything else. Today you might have a 4k, 30 bpp, 144 Hz display, and just keeping that fed takes about 33 Gbit/s purely for scanout, before you even composite anything into it.
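
For concreteness, a quick back-of-the-envelope check (a sketch assuming "4k" means 3840x2160; the 30 bpp and 144 Hz figures are from the comment above):

    # Raw scanout bandwidth for the display described above.
    width, height, bpp, refresh_hz = 3840, 2160, 30, 144

    bits_per_second = width * height * bpp * refresh_hz
    print(bits_per_second / 1e9)      # ~35.8 decimal Gbit/s
    print(bits_per_second / 2**30)    # ~33.4 binary Gbit/s, the figure quoted above
    # This counts only visible pixels; blanking intervals and link overhead
    # push the actual cable bandwidth higher still.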

  • I have a 4k 60Hz monitor connected to my laptop over one USB-C cable for data and power, but because of bandwidth limitations my options are 4k30 with USB 3.x support, or 4k60 with USB 2.0.

    I love the monitor; it's sharp and clear and almost kind of HDR a lot of the time, but the fact that it has a bunch of USB 3.0 ports that only get USB 2.0 speeds because I don't want choppy 30Hz gaming is just... weird.

  • 4k jumped the gun. It's just too many pixels and too many cycles. And unfortunately it was introduced just as pixel shaders were starting to do more work.

    Consequently almost nothing actually renders at 4k. It’s all upscaling - or even worse your display is wired to double up on inputs.

    Once we can comfortably get 60 FPS, 1080p, 4x msaa, no upscaling, then let’s revisit this 4k idea.

    • It didn't have to be '4K'; it just had to be sharp. Apple was right in 2012 with the 'retina' concept. TV manufacturers doubled the resolution and those panels trickled down to PCs. Games and movies don't need that resolution, but it helps text readability tremendously. And as a bonus you can skip all that 'cleartype' nonsense and other readability hacks that often don't work right.


    • WTF are you talking about, 60 FPS for 4K isn't even that challenging for reasonably optimized applications. Just requires something better than a bargain bin GPU. And 120+ FPS is already the new standard for displays.


  • We see this in embedded systems all the time too.

    It doesn't help if your crossbar memory interconnect only has static priorities.

  • And marketing said, when LCDs were pushing CRTs out of the market, that you don't need to send the whole image to change a pixel on an LCD; you can change only that pixel.

    • Except that DVI is essentially VGA without the digital-to-analog part, and original HDMI is DVI with encryption, some predefined "must have" timings, and extra data stuffed into the blanking gaps of a signal that is still laid out as if it were driving a CRT.

      I think partial refresh capability only came with some optional extensions to DisplayPort.


It's also interesting to look at other architectures of the same era to get an idea of how fiendish a problem this was. At the time, Commodore, Nintendo, and some others had dedicated silicon for video rendering. That frees the CPU from having to generate the video signal directly; it spends a fraction of those cycles talking to the video subsystem instead. The major drawback of a video chip of some kind is of course cost (custom fabrication, part count), which the Macintosh team was clearly trying to keep as low as possible.

  • On both of the key 8-bit contenders of yore, the Atari 8-bit series and the Commodore 64, the custom graphics chips (ANTIC and VIC-II) "stole" cycles from the 6502 (or the 6510, in the case of the C64) when they needed to access memory.

    I remember writing CPU-intensive code on the Atari and using video blanking to speed it up.

  • Plus those weren’t raw bitmaps but tile based to help keep memory and bandwidth costs down.

    • Not sure we're thinking the same way, but the C64 and Atari had bitmap modes, not just tile or character modes.

  • And yet despite the lower parts count the Macintosh was more expensive than competing products from Commodore and Atari that had dedicated silicon for video rendering. I guess Apple must have had huge gross margins on hardware sales given how little was in the box.

Exactly. Like the Apple ]['s, the original Mac's framebuffer was set up with alternating accesses, relying on the framebuffer reads to manage DRAM refresh.

It looks like the DRAM was set up on a 6-CPU-cycle period, as 512 bits (32 16-bit bus accesses) x 342 lines x 60 Hz x 6 cycles x 2 gives 7.87968 MHz, which is just slightly faster than the nominal 7.83 MHz, the remaining 0.6% presumably being spent during vblank.
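
Reproducing that arithmetic (a quick sketch; the 7.8336 MHz comparison value is the commonly quoted precise figure for the Mac's 68000 clock, not something stated in this thread):

    # 32 16-bit fetches per line x 342 lines x 60 Hz x 6 CPU cycles per access,
    # doubled because the CPU gets the interleaved half of the accesses.
    accesses_per_line = 512 // 16          # 32 word fetches for one scan line
    implied_clock = accesses_per_line * 342 * 60 * 6 * 2
    print(implied_clock / 1e6)             # 7.87968 (MHz)
    print(implied_clock / 7_833_600 - 1)   # ~0.0059, the ~0.6% gap mentioned above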

  • But why 342 and tune the clock speed down instead of keeping the clock speed at 8MHz and having floor(8e6/2/6/60/32) = 347 lines?

    I suspect kmill is right: https://news.ycombinator.com/item?id=44110611 -- 512x342 is very close to a 3:2 aspect ratio, whereas 347 lines would give you an awkward 1.476:1 (quick check in the sketch below these replies).

    • That doesn't sound right. The tube the Mac was displaying on was much closer to a TV-style 4:3 ratio anyway; there were significant blank spaces at the top and bottom.

      If I were placing bets, it was another hardware limitation. Maybe 342 put them right at some particular DRAM timing limit for the chips they were signing contracts for. Or, maybe more likely, the ~21.5 kHz scan rate was a hard limit from the tube supplier (that was already much faster than TVs could do) and they had a firm 60 Hz requirement from Jobs or whoever.


    • You could reduce the gain on the horizontal deflection drive coil by 2% to get back to 3:2. In fact, I doubt that it was precise to within 2%.
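
For reference, the arithmetic behind the 347-line alternative and the two aspect ratios, using the same 6-cycles-per-access model as the parent comment (a sketch, nothing here beyond those quoted numbers):

    import math

    # Lines that fit at a flat 8 MHz with the parent's model: half the bus for
    # video, 6 CPU cycles per access, 60 Hz, 32 fetches per visible line.
    lines_at_8mhz = math.floor(8e6 / 2 / 6 / 60 / 32)
    print(lines_at_8mhz)                   # 347
    print(512 / lines_at_8mhz)             # ~1.476:1
    print(512 / 342)                       # ~1.497:1, essentially 3:2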

Why did they need 60 Hz? Why not 50 Hz, like Europe? Is there some massive advantage to syncing with the AC frequency of the local power grid?

  • If you're used to seeing 60 Hz everywhere, like Americans are, 50 Hz stands out like a sore thumb.

    But mostly I suspect it’s just far easier.

  • Conventional wisdom a few years after the Macintosh was that 50Hz was annoyingly flickery. Obviously this depends on your phosphors. Maybe it was already conventional wisdom at the time?

    I feel like the extra 16% of screen real estate would have been worth it.