
Comment by pwg

13 years ago

The Atari's ROMs contained a full (well, full for the time) floating-point library that used BCD floating-point values.

The result was that the Ataris, without even trying, had more accurate decimal math than other contemporary computers. A fun thing to do on in-store demo machines of the day was to run this loop:

   10 let x = 100
   20 print x
   30 let x = x - 0.01
   40 goto 20

On an Atari this would count down from 100 toward zero with no round-off errors. The exact same loop on an IBM PC started printing things like 99.94999999998 instead of 99.95 after about 5 steps.
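You can reproduce the same effect today: binary floating point has no exact representation of 0.01, while a decimal representation does. A minimal sketch in Python, using the standard `decimal` module as a stand-in for the Atari's BCD arithmetic (not the actual 6502 routines, obviously):

```python
from decimal import Decimal

# Binary floating point: 0.01 has no finite base-2 representation,
# so each subtraction introduces a tiny rounding error that accumulates.
x = 100.0
for _ in range(10):
    x -= 0.01
print(x)  # accumulated round-off may show up as trailing junk digits

# Decimal arithmetic: 0.01 is represented exactly, so the countdown stays clean.
d = Decimal("100")
for _ in range(10):
    d -= Decimal("0.01")
print(d)  # prints 99.90
```

The binary result is close to 99.9 but generally not exactly it; the decimal result is exact at every step, which is precisely what the in-store demo was showing off.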

Edit: formatting

I got some interesting results. MSX and Atari computed the results correctly. On the TRS-80 Model I, wrong results started on the 12th iteration; the Apple IIe (AppleSoft), VIC-20 and PET started producing wrong results around the 8th. This has to do with the internal representation of floating-point numbers, of course - the Apple II uses, IIRC, 5 bytes to represent a float, while MSX uses, again IIRC (it's been a long time), 8.
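The pattern that a wider float format survives more iterations before the printed value goes wrong can be sketched in Python. Here 32-bit rounding is just a stand-in for a narrower mantissa - it is not the actual 5-byte Apple or 8-byte MSX layout:

```python
import struct
from decimal import Decimal

def f32(v):
    # Round-trip a Python float (64-bit) through a 32-bit float
    # to simulate arithmetic with a narrower mantissa.
    return struct.unpack("f", struct.pack("f", v))[0]

def first_visible_error(narrow):
    """First iteration of the x = x - 0.01 countdown whose printed
    value no longer matches the mathematically exact result."""
    x, exact = 100.0, Decimal("100")
    for i in range(1, 10001):
        x = x - 0.01
        if narrow:
            x = f32(x)  # re-round to the narrower format after each step
        exact -= Decimal("0.01")
        if repr(x) != str(exact):
            return i
    return None

# Fewer mantissa bits -> the error becomes visible after fewer iterations.
print(first_visible_error(True), first_visible_error(False))
```

The narrow format fails within the first couple of steps, while the 64-bit version holds out for a while longer - the same ordering the old 5-byte vs. 8-byte machines showed.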