Comment by jrabone
13 years ago
Whether to provide BCD optimisation always seemed to be a tricky engineering decision; virtually nobody used the 6502 BCD instructions in the amateur home microcomputer environment I was familiar with in the 80s, but it was clearly considered to be important to the CPU manufacturers. Were there BCD benchmarks back then? Was it considered a killer feature to make financial software easier to write? Did Rockwell ever capitalise on that patent?
The Atari's ROMs contained a full (well, for the time) floating-point library that used BCD floating-point values.
The result was that the Ataris, without even trying, had more accurate decimal math than other contemporary computers. Something to do on the demo machines of the day in stores was to run this loop:
On an Atari this would accurately count down from 100 to zero with no round-off errors. The exact same loop on an IBM PC, after about 5 steps, started printing things like 99.94999999998 instead of 99.95.
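The loop itself isn't reproduced above, but the effect is easy to recreate; a sketch in Python (standing in for the original BASIC), where the `decimal` module plays the role the Atari's BCD floats did:

```python
from decimal import Decimal

# Count down from 100 in steps of 0.01, binary floats vs. decimal arithmetic.
# The step 0.01 has no exact binary representation, so a binary float
# can drift away from the exact decimal value as the loop runs.
x = 100.0
d = Decimal("100.00")
for _ in range(5):
    x -= 0.01
    d -= Decimal("0.01")

print(repr(x))  # binary float: close to, but not necessarily exactly, 99.95
print(d)        # decimal arithmetic: exactly 99.95, as on the Atari
```

The same inexactness shows up in the textbook case: `0.1 + 0.2 != 0.3` in binary floating point, while `Decimal("0.1") + Decimal("0.2") == Decimal("0.3")` exactly.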
Edit: formatting
I got some interesting results. MSX and Atari computed the results correctly. On the TRS-80 Model I, wrong results started on the 12th iteration; the Apple IIe (AppleSoft), VIC-20 and PET started giving wrong results around the 8th. This comes down to the internal representation of floating-point numbers, of course - the Apple II uses, IIRC, 5 bytes to represent a float while MSX uses, again IIRC (it's been a long time), 8.
I have no idea what people used BCD for either. I vaguely recall reading that the C64's interrupt routine didn't even bother to clear the D flag, so you had to disable interrupts while using decimal mode! - so obviously most people just weren't expected to be using it.
I only ever saw it used for game scores... and the following, which prints a byte as hex, and is a neat example of cute 6502 code. Saves a few bytes over having a table of hex digits, and you don't need to save X or Y.
(PUTCH takes an ASCII character in A.)
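The routine itself isn't reproduced above, but the trick being described is almost certainly the classic decimal-mode idiom: CMP #$0A leaves the carry set for nibbles 10-15, and a following ADC #$30 executed with the D flag set lands directly on ASCII '0'-'9' or 'A'-'F'. A Python model of that behaviour (the `bcd_adc` helper is my own simulation of 6502 decimal-mode ADC, not code from the thread):

```python
def bcd_adc(a, operand, carry):
    """Simulate 6502 ADC with the D (decimal) flag set: each nibble
    is treated as a decimal digit and adjusted by 6 when it passes 9."""
    lo = (a & 0x0F) + (operand & 0x0F) + carry
    if lo > 9:
        lo += 6                              # low-nibble decimal adjust
    hi = (a >> 4) + (operand >> 4) + (1 if lo > 0x0F else 0)
    if hi > 9:
        hi += 6                              # high-nibble decimal adjust
    return ((hi & 0x0F) << 4) | (lo & 0x0F)

def nibble_to_ascii_hex(n):
    """SED : CMP #$0A : ADC #$30 - nibble 0..15 to ASCII hex,
    no lookup table, only the accumulator touched."""
    carry = 1 if n >= 0x0A else 0            # CMP #$0A sets carry iff n >= 10
    return bcd_adc(n, 0x30, carry)           # ADC #$30 in decimal mode

print("".join(chr(nibble_to_ascii_hex(n)) for n in range(16)))
# prints 0123456789ABCDEF
```

For 0-9 the add simply produces $30-$39; for 10-15 the extra carry pushes the low nibble past 9, and the decimal adjust of +6 skips straight over the ASCII punctuation between '9' and 'A'.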
The 68000 had BCD as well. Never used it and don't recall ever seeing it used. I think they only included it so they could have an instruction called ABCD.
I would imagine BCD was useful as a bootstrap for a poor ASM programmer's bignum library (especially when 'bignum' was >16 bits).
Also would be useful for 7-segment LED displays.
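A packed-BCD byte drives a two-digit 7-segment display with nothing more than a table lookup per nibble, since each nibble already is a decimal digit. A sketch (the segment bit patterns below are the common gfedcba encoding, an assumption on my part, not from the thread):

```python
# Common-cathode 7-segment patterns for digits 0-9, bit 0 = segment a.
SEGMENTS = [0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07, 0x7F, 0x6F]

def bcd_byte_to_segments(b):
    """Split a packed-BCD byte into its two digits and look up each
    segment pattern. No division by ten: the nibbles ARE the digits."""
    return SEGMENTS[b >> 4], SEGMENTS[b & 0x0F]

tens, ones = bcd_byte_to_segments(0x42)  # packed BCD for decimal 42
```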
SNES games used it a lot for storage of things that need to be displayed on screen, such as score and lives and whatnot. If the counter is checked relatively infrequently, the reduced integer range and hassle of switching to and from BCD mode are a lot better than having to divide by ten repeatedly each frame, which is relatively slow.
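The trade-off described above is easy to sketch: a binary counter costs a divide-by-ten loop every time the score is drawn, while a packed-BCD counter's digits fall straight out of the nibbles. A rough Python illustration (not actual SNES code):

```python
def digits_from_binary(score):
    """Binary counter -> on-screen digits: a repeated divide-by-ten,
    paid at every redraw (and division is slow on 8/16-bit CPUs)."""
    digits = []
    while True:
        score, d = divmod(score, 10)
        digits.append(d)
        if score == 0:
            break
    return digits[::-1]

def digits_from_bcd(bcd_bytes):
    """Packed-BCD counter -> on-screen digits: just split the nibbles,
    no division at all."""
    digits = []
    for b in bcd_bytes:
        digits += [b >> 4, b & 0x0F]
    return digits

assert digits_from_binary(123456) == [1, 2, 3, 4, 5, 6]
assert digits_from_bcd([0x12, 0x34, 0x56]) == [1, 2, 3, 4, 5, 6]
```

The cost moves to the increment side instead: BCD addition needs the decimal-adjust step, which is why it only pays off when the counter is displayed more often than it is updated in complicated ways.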
It's interesting that the parent comment came up in the context of the chip used in TI calculators. I know the TI-83 series floating point format is BCD [1], but I'm not sure off the top of my head whether the built-in floating-point library actually uses these CPU instructions.
[1] (PDF link) http://education.ti.com/guidebooks/sdk/83p/sdk83pguide.pdf see pages 22-23
In x86-world, floating point hardware was an add-on chip before the 486DX was introduced in 1989 [1] [2].
I think the BCD instructions were never intended to be used outside of software arithmetic libraries, but they provide speedups for crucial operations in such libraries. Sort of like Intel's recently introduced AES instructions, which will probably only be used in encryption libraries.
Of course, it turns out that BCD-based arithmetic isn't much used, because IEEE-style floating point has a fundamental advantage (you can store more precision in a given amount of space) and is also compatible with hardware FPUs.
[1] http://en.wikipedia.org/wiki/Floating-point_unit#Add-on_FPUs
[2] http://en.wikipedia.org/wiki/I486
I'd guess this goes back to the 4004, which was designed for a desktop calculator. Easy BCD really helps those applications, so they must have had that in mind as a target market. There's not much point in using BCD once reasonable amounts of RAM and ROM are available.
Except the Z80 / 80xx don't descend from the 4004, they descend from the Datapoint 2200. The 8008 didn't have BCD instructions or a half-carry flag, but it had a parity flag.
Not architecturally, but Federico Faggin and Masatoshi Shima were the key people on the 4004 and 8080 before leaving to form Zilog and build the Z-80. The Z-80 had to have DAA (decimal adjust) to be compatible with the 8080. Possibly the 8080 had DAA to compare well against the 6800. If that's the case, then we must ask where the 6800 got the idea. Could be from minicomputers or even mainframes, but from what I've read the early microcomputer designers had no pretense of making processors to compete anywhere near the high end. Instead their sights were set more along the line of embedded systems. Desktop calculators fit into that and Shima himself designed desktop calculators and helped specify the 4004 before he came to Intel. Thus my speculation that the impetus could have come from that direction.